Android Malware Characterization using Metadata and Machine Learning Techniques

Ignacio Martín¹, José Alberto Hernández¹, Alfonso Muñoz², Antonio Guzmán²
¹Universidad Carlos III de Madrid, Spain. Email: {ignmarti, jahgutie}
²Telefónica Digital Identity & Privacy, Spain. Email: {Alfonso.Munoz, Antonio.Guzman}

Android Malware has emerged as a consequence of the increasing popularity of smartphones and tablets. While most previous work focuses on inherent characteristics of Android apps to detect malware, this study analyses indirect features and meta-data to identify patterns in malware applications. Our experiments show that: (1) the permissions used by an application offer only moderate performance results; (2) other features publicly available at Android Markets are more relevant in detecting malware, such as the application developer and certificate issuer, and (3) compact and efficient classifiers can be constructed for the early detection of malware applications prior to code inspection or sandboxing.


Keywords: Google Play meta-data; Android Malware; malware detection; Feature Hashing; Machine Learning; Data Analytics.

I Introduction and Motivation

The mobile market industry has grown explosively in the last decade. According to the latest estimates, the number of smartphone users reached 2 billion at the beginning of 2014 and is expected to exceed 2.5 billion by 2018 (last access Nov 2016).

Android has positioned itself as the leading operating system in the smartphone industry, accounting for more than 86.8% of devices by the end of 2016 (last access Mar 2017). Indeed, one key to its success is that the Android platform is open to any developer, individual or enterprise, who can easily design new applications and services and upload them to any of the available Android markets, namely Google Play Store, Amazon Appstore, Samsung Galaxy Apps, etc. At the time of writing, it is estimated that nearly 2.7M applications are available at Google Play, while new applications are uploaded at a pace of more than 60k per month (last access Mar 2017).

Unfortunately, the popularity of Android and the ease of developing and uploading apps have side effects: Android has become one of the most valuable targets for malware developers. An extensive taxonomy of Android malware, identifying up to 49 malware families, can be found in [1].

The ability to detect malicious Android applications early is vital for user security, since flagged apps can be tagged, reported and removed from the market, and their signatures blacklisted. This is a classification problem, and many authors have therefore applied machine learning to different feature sets.

Consequently, machine learning has been studied in depth, and a survey of techniques may be found in [2]. For instance, the authors in [3] gather features from application code and manifest (permissions, API calls, etc.) and use Support Vector Machines (SVMs) to identify different types of malware families. The authors in [4] analyse Bayesian-based machine learning techniques for Android malware detection. In [5], the authors use permissions and control flow graphs along with SVMs to differentiate malware from good applications ("goodware" in what follows). The authors in [6] use API calls and permissions as features to train SVMs and Decision Trees. Androdialysis [7] explores the intents of each application as features for the classification task. Yerima et al. [8] try different algorithms over API calls and command sets and show promising results for ensemble methods, such as Random Forest.

In general, Android permissions have been extensively studied under the assumption that they are critical in identifying most malware, see [9, 10, 11, 12]. Actually, in [9] the authors show that malware uses fewer permissions than goodware.

The authors in [13] attempt to detect malware by inspecting run-time parameters of applications, such as CPU usage, network transmission, and process and memory information. Mas'ud et al. [14] also include Android system calls in their analysis. Furthermore, Elish et al. [15] propose a single-feature classification system based on user behaviour profiling. The authors of Droidchain [16] propose a novel model which analyses static and dynamic features of applications under different malware models.

In a different approach, the authors of [17] design a differential-intersection analysis technique to identify repackaged versions of popular applications, which is a common way to disguise malicious applications, showing good performance.

Concerning malware detection systems, there exist two main trends: (1) online services, which aim to provide efficient and lightweight detection on the mobile device itself, and (2) offline services, which analyse enormous amounts of applications quickly in order to mark potentially harmful code for removal or extended inspection. Several authors have explored both trends: the systems in [3, 18, 19] provide online solutions that inform or warn the user on the device, while more general, hardware-dependent systems such as [20, 21] are scalable platforms capable of processing huge numbers of applications at once, enabling fast and cheap detection mechanisms for entities like application markets to improve the quality of their apps. [22] extensively surveys work on malware detection systems.

In addition, collecting as much information as possible on threats and other undesired applications is essential, and various authors propose methodologies and systems to gather large and diverse datasets. For example, Burguera et al. [23] propose a framework for collecting application traces and identifying uncommon behaviours of common applications. Moreover, the authors of [24, 25] propose systems to gather signatures and malware information automatically.

In fact, a good deal of information is already available at Google Play and can be used to identify patterns not yet pointed out in previous work. Elements like the developer name, categories or votes have not, to the best of our knowledge, been used in malware detection yet. Such meta-data provides a good starting point for a lightweight malware detector that does not require behaviour analysis and gives a fast first-stage indication of whether an application "behaves suspiciously" (shows malware patterns) or not. However, very few studies have analysed any subset of this information: only the authors of [26] performed sentiment analysis on users' comments about Android applications.

To this end, this work focuses on the analysis of such indirect features and their ability to unveil malware. We analyse meta-data to find a subset of features with proven predictive power and use them to develop and test different machine learning models.

The remainder of this work is organised as follows: Section II describes the dataset under study, including the number of applications and the types of features analysed. Section III explains the methodology, whereas Section IV reports the experiments and results obtained. Finally, Section V concludes this work with a summary of the findings.

II Dataset description and pre-processing

The dataset used in this study comprises around 140K Android applications collected from Google Play Store during 2015. This dataset has been obtained using the Tacyt cyber-intelligence tool developed internally at Eleven Paths (Telefónica Group, see Acknowledgements for further details). For each application, we have extracted not only intrinsic features of the Application PacKage File (apk), e.g. size in bytes or the list of permissions used, but also other meta-data available at Google Play, including data related to the application developer, the number of votes and the average star rating. Some of these features are numeric (e.g. application size, average rating), while others are categorical (e.g. whether an application belongs to a certain category or not). The next section overviews the features derived from such data, some of which will prove extremely powerful in identifying potential malware.

II-A Intrinsic application features

These relate to concise application information, including its size (bytes), application category, number of images and files used by the application, etc. This group comprises 15 features.

Other intrinsic features considered in the analysis include the permissions used by each apk. There are over 29K different permissions used by the applications in our dataset; most popular ones are:

  • android.permission.internet (found in 96.07% of apps)

  • android.permission.access_network_state (91.15%)

  • android.permission.read_external_storage (54.5%)

  • android.permission.write_external_storage (54.12%)

  • android.permission.read_phone_state (39.81%)

Many permissions appear only once in the dataset, as they are often self-defined. Thus, the binarised permission features form a very sparse, high-dimensional matrix. In such cases, feature hashing [27] is an effective strategy for dimensionality reduction: it maps features into a fixed number of buckets using hash functions. We leverage this hashing trick to encode the permissions into a compact set of additional intrinsic features, rather than adding each raw permission as its own feature.
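As an illustration of the hashing trick, the following minimal Python sketch (our experiments used R; the function name and bucket count are illustrative) maps a variable-length permission list into a fixed-size binary vector:

```python
import hashlib

def hash_permissions(permissions, n_buckets=512):
    """Map a variable-length permission list onto a fixed-size
    binary feature vector via the hashing trick (illustrative)."""
    vec = [0] * n_buckets
    for perm in permissions:
        # Stable hash of the permission string -> bucket index
        h = int(hashlib.md5(perm.encode()).hexdigest(), 16)
        vec[h % n_buckets] = 1  # binary presence indicator
    return vec

features = hash_permissions(
    ["android.permission.internet",
     "android.permission.read_phone_state"],
    n_buckets=32)
```

Collisions are the price of the reduced dimensionality: two rare permissions may share a bucket, which is an acceptable trade-off for very sparse data.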

II-B Social-related features

These are 7 features involving feedback collected from users in the market. As Google Play is strongly connected with the social network Google+, features like the total number of votes or the average rating are provided. For each possible rating (1, 2, 3, 4 and 5 stars) we obtain the number of votes given, from which the average rating and the total number of votes of any application in the market can easily be computed.

II-C Entity-related features: Developers and Certificate Issuers

Android markets often provide information about the application developers (name, email address, website, etc.) and about the certificate used to sign the application (issue or expiration dates, issuer or subject names, etc.).

Within the data, there are around 45K different developer names and 40K certificate issuer names. Following [28], we have created two new features, developerRep and issuerRep, which account for the percentage of applications tagged as malware among those published by each developer and signed by each certificate issuer, respectively. The reader must note that Google Play allows self-signed applications, i.e. applications where the issuer is the same as the developer.

As a result, in many cases the certificate issuer and the developer are the same entity. However, their reputations may differ, since some issuers sign applications other than their own and not all developers self-sign their applications (and even when they do, they may use different accounts).
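These reputation features amount to a malware ratio per entity, which the following Python sketch makes concrete (illustrative; record layout and entity names are hypothetical, and in practice the ratio should be computed on historical data only):

```python
from collections import defaultdict

def reputation(records):
    """Per-entity malware ratio: the fraction of an entity's
    published apps that are tagged as malware (illustrative)."""
    total, bad = defaultdict(int), defaultdict(int)
    for entity, is_malware in records:
        total[entity] += 1
        bad[entity] += int(is_malware)
    return {e: bad[e] / total[e] for e in total}

# Hypothetical (developer, isMalware) records
apps = [("devA", True), ("devA", False), ("devB", False)]
developer_rep = reputation(apps)
```

The same function applied to (issuer, isMalware) records would yield issuerRep.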

II-D Malware detection attributes

Once downloaded, all applications have been inspected for malware using the VirusTotal web service (a free online virus, malware and URL scanner; last access Feb 2017). VirusTotal checks each application against a large number of malware engines, producing a binary result (malware/goodware) per engine (McAfee, AVG, VIPRE, TrendMicro, etc.). In our dataset, around 50% of the applications have been declared malware by at least one of these engines.

Concerning the number of detectors per malware application, a Zipf-like behaviour is observed: most malware applications are detected by a single antivirus (AV) engine, while only a few are detected by many AV engines. In particular, 25% of the malware applications are detected by 1 AV engine (1st quartile), 50% are detected by 2 AV engines or fewer (median) and 75% by 4 AV engines or fewer (3rd quartile). We shall use the label "isMalware" (TRUE/FALSE) to denote whether an application is tagged as malware or not.

Fig. 1: Histogram of AV detectors per malware application.

Fig. 1 shows a histogram of the number of AV detections per application. The Zipf-like behaviour is clear in the figure: most applications are detected by a single engine (34,025 applications), while the average detection count is 3. Furthermore, one application is detected as malware by 53 AV engines.

Due to this disparity and disagreement among AVs, we consider the aforementioned quartiles (1, 2 and 4 detections) as different thresholds to establish the ground-truth labelling.
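A minimal sketch of this threshold-based ground-truth rule (illustrative Python; the actual pipeline used R):

```python
def is_malware(num_detectors, threshold):
    """Ground-truth rule: an app counts as malware iff at least
    `threshold` AV engines flagged it (thresholds 1, 2 or 4)."""
    return num_detectors >= threshold

counts = [0, 1, 2, 5]  # AV detections per app (toy values)
labels_1av = [is_malware(c, 1) for c in counts]
labels_4av = [is_malware(c, 4) for c in counts]
```

Raising the threshold trades dataset size for label confidence: fewer apps qualify as malware, but more AV engines agree on each of them.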

III Methodology and Data Analysis

III-A Initial approach

(a) Number of Downloads
(b) Number of Days in Google Play
(c) Associated Developer Reputation
Fig. 2: Goodware/Malware boxplot comparison for three features: number of downloads, number of days since the application was uploaded, and developer reputation

Feature selection is key to reduce complexity and improve performance. We expect some features to have more predictive power than others, as noted in Fig. 2. In this figure, three boxplots for malware/goodware classes are shown for three sample features: the number of times the application has been downloaded from the market (left), the time the application has been in Google Play (centre) and the developer reputation (right).

As observed, the number of downloads is not a very useful feature, since goodware and malware show similar 25th-percentile (around 10) and 75th-percentile (around 48) values. Concerning the number of days in Google Play (centre), the 25th, 50th and 75th percentiles of malware differ from those of goodware, showing some predictive power. Finally, developer reputation (right) clearly reveals that malware developers tend to produce more malware, while goodware developers create almost none.

III-B Classification models and performance evaluation

In a binary classification problem, we are given a training set of labelled data {(x_i, y_i)}, i = 1, ..., N, where y_i ∈ {0, 1} and x_i is a vector containing the values of the predictors or features. In our case, the labels refer to the categorical variable "isMalware", whereas the predictors comprise 512 feature hashes of permissions, 15 intrinsic features, 7 social-related features and the two reputation features.

Machine-learning algorithms are in charge of constructing a function from the training set that separates the two classes with minimum error. In our experiments, we have used Logistic Regression (LR), Support Vector Machines (SVMs) and Random Forests (RF), three well-known supervised learning algorithms.

Once a model is obtained, the next stage consists of testing its ability to predict unobserved data samples, i.e. evaluating the model's generalisation capabilities. Ten-fold cross-validation has been used to tune the resulting models and evaluate the test error with well-known metrics: Receiver Operating Characteristic (ROC) curves and the Area Under the ROC Curve (AUC-ROC), precision, recall and F1-score.

It is worth recalling that the ROC curve plots the True Positive Rate (TPR) against the False Positive Rate (FPR), and the AUC measures the integral of the ROC curve, unity being the highest possible value. In addition, precision measures how many of the applications tagged as malware are indeed malware, while recall measures how many true malware applications the model detects out of the total. In other words:

Precision = TP / (TP + FP),    Recall = TP / (TP + FN),

where (TP, FP, TN, FN) refer to True/False Positives/Negatives, respectively. Finally, the F1-score trades off precision and recall by computing their harmonic mean:

F1 = 2 · Precision · Recall / (Precision + Recall).
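These definitions translate directly into code; the following Python sketch (illustrative, not the code used in our experiments) computes the three metrics from confusion-matrix counts, with F1 as the harmonic mean of precision and recall:

```python
def classification_metrics(tp, fp, tn, fn):
    """Precision, recall and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Toy confusion matrix: 80 true positives, 20 false positives,
# 90 true negatives, 10 false negatives
p, r, f1 = classification_metrics(tp=80, fp=20, tn=90, fn=10)
```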

III-B1 Validation and Significance

Ten-fold cross-validation consists of splitting the entire dataset into 10 chunks of equal size and performing 10 iterations over them, selecting at each turn a different chunk as the test set and the remaining ones as the training set. Using this method, one can perform hyper-parameter tuning but also obtain statistically robust results that do not depend on a particular choice of training/test instances.
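A minimal sketch of the ten-fold splitting procedure (illustrative Python; in practice stratified splits and library routines would typically be used):

```python
def k_fold_indices(n, k=10):
    """Yield (train, test) index lists: each of k round-robin folds
    serves once as the test set, the rest as the training set."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_indices(20, k=10))  # toy dataset of 20 samples
```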

III-C Feature selection

Some features are critical for discriminating between goodware and malware while others are not, either due to correlation or to low predictive power. To select among them, we have used the following methods:

III-C1 Pearson's Chi-squared test

A statistical test used to determine whether an observed association between variables occurs by chance or reflects a genuine statistical relation.

III-C2 Entropy-based methods

In information theory, entropy measures the uncertainty of, or expected information provided by, a source. The following measurements are considered:

  • Information Gain (IG), the mutual information between a feature and the outcome variable.

  • Gain Ratio, the information gain divided by the intrinsic information (entropy) of the feature, which reduces the bias towards features that have high entropy of their own rather than a strong relationship with the output variable.
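For concreteness, the two entropy-based measures can be sketched as follows (illustrative Python; our experiments relied on R packages):

```python
from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy of a discrete sequence, in bits."""
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())

def information_gain(feature, labels):
    """IG = H(labels) - sum_v p(feature=v) * H(labels | feature=v)."""
    ig = entropy(labels)
    for v in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == v]
        ig -= len(subset) / len(labels) * entropy(subset)
    return ig

def gain_ratio(feature, labels):
    """IG normalised by the feature's own (intrinsic) entropy."""
    return information_gain(feature, labels) / entropy(feature)

# A feature that perfectly predicts the label has IG = H(labels) = 1 bit
ig = information_gain([0, 0, 1, 1], [0, 0, 1, 1])
```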

III-C3 Random Forest importance

The contribution of a feature across the forest's nodes, in particular the Mean Decrease in Node Impurity (MDNI), which measures how much splitting on a feature reduces node impurity within a Random Forest.

For further reference of machine learning and statistical methods for data analysis, please refer to [29].

IV Experiments and results

In the experiments, we have used the well-known R open-source statistical software, along with a number of libraries for machine learning and feature selection (MASS, randomForest, kernlab and glmnet). From the original dataset, we have built nine different subsets with different compositions. Concisely, for each subset we vary both the proportion of malware it contains (2%, 25% or 50% of the total) and the threshold used to consider an application malware (1, 2 or 4 AV detectors). As an example, we shall refer to the (1-AV, 25%) dataset as one that contains 25% malware and 75% goodware applications, where the malware is randomly selected among all applications for which at least 1 AV detector fired.

Each of these subsets contains 50K applications, except the (4-AV, 50%) dataset, which only contains 36K samples due to the lack of malware applications meeting that threshold.
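The subset composition can be sketched as follows (illustrative Python; here "goodware" is assumed to mean zero AV detections, and the app representation is hypothetical):

```python
import random

def compose_subset(apps, malware_frac, threshold, size, seed=0):
    """Sample a subset with a fixed malware proportion, where
    'malware' means >= threshold AV detections and 'goodware'
    means zero detections (assumption for this sketch).
    Each app is a (detections, payload) pair."""
    rng = random.Random(seed)
    mal = [a for a in apps if a[0] >= threshold]
    good = [a for a in apps if a[0] == 0]
    n_mal = int(size * malware_frac)
    return rng.sample(mal, n_mal) + rng.sample(good, size - n_mal)

# Toy population: 100 clean apps and 50 flagged by 5 engines
apps = [(0, None)] * 100 + [(5, None)] * 50
subset = compose_subset(apps, malware_frac=0.25, threshold=4, size=40)
```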

IV-A Predictive power of permissions

As noted in the introduction, several researchers have studied the permissions used by an application and their ability to detect malware. For instance, the authors in [30] achieve F1-score values in the range of 0.6 to 0.8.

In order to evaluate the effect that feature hashing has on permissions, we try different hashing space sizes (32, 64, 128, 256, 512, 1024 and 2048 hashes) to evaluate the trade-off between the number of features and performance. To measure performance, we run 10-fold cross-validation for threshold tuning in a logistic regression model and compute different AUC (Area Under the Curve) measurements for each hashing space.

Fig. 3: ROC curve for malware detection using feature hashing on permissions only.

In our case, Fig. 3 shows the ROC curves and AUC-ROC values using logistic regression with different numbers of hashes for the (4-AV, 50%) dataset. As observed, the more hash functions used, the higher the AUC, reaching around 70% for 256 hashes and above, in line with [30]. In conclusion, the permission set alone offers only moderate power to detect Android malware.
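The AUC reported above can be computed without explicitly tracing the ROC curve, using its rank interpretation (illustrative Python sketch; not the evaluation code used in our experiments):

```python
def auc_roc(scores, labels):
    """AUC-ROC as the probability that a randomly chosen positive
    scores above a randomly chosen negative (ties count 0.5) --
    equivalent to the area under the ROC curve."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A classifier that ranks every positive above every negative
auc = auc_roc([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0])
```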

In the next sections we study the remaining 26 features (i.e. intrinsic, social and entity-related) along with 512 feature hashes and apply feature selection techniques to identify the most relevant ones.

IV-B Feature selection

Starting from the 538 features in the dataset, variable selection is performed to reduce model complexity. In general, larger predictor collections do not necessarily imply better performance, only larger complexity. In fact, the more predictors considered, the easier it is to run into the well-known "curse of dimensionality", which occurs when the number of predictors is large with respect to the amount of data, penalising global performance.

(a) Features sorted by importance
(b) Performance of classifiers with different number of features
Fig. 4: Experiment results for feature selection

In the first experiment, Fig. 4 (top), we have used the four feature selection indices described in Section III to evaluate the importance of each feature in the dataset. The results show the features sorted by each selection index and normalised with respect to the largest (feature names are self-explanatory). The dataset under study in this experiment was (4-AV, 50%).

As shown in Fig. 4 (top), the top-7 most relevant features in the dataset are, in order of importance: developerRep, issuerRep, ageInMarket (number of days in market), lastSignatureUpdate, timeForCreation, lastUpdate and certVal. In contrast, the feature hashes on the permissions are not relevant when compared with the others.

In order to establish the number of features to retain for modelling, Fig. 4 (bottom) shows the ten-fold cross-validated F1-score versus the number of predictors involved for each algorithm (RF, LR and SVM), where new predictors are added at each iteration in decreasing order of relevance. Random Forest provides the highest F1-score (around 0.89), while LR and SVM reach around 0.86 and 0.87, respectively. Moreover, the figure shows that the highest performance of any algorithm can be achieved with only the top-15 features, which we set as the predictor threshold.

In addition, it is worth remarking that developerRep alone achieves an F1-score above 0.8, showing that this single feature is more powerful than any other feature set, such as the permissions.

IV-C Malware detection model

We perform a full benchmark test on the 9 composed datasets using only their top-15 features, namely: developerRep, issuerRep, ageInMarket (time in market), lastSignatureUpdate, timeForCreation, lastUpdate, certVal, numPerm, numFiles, numDownloads, versionCode, oneStarRatingCont, f216, size and meanStar. In this light, Table I shows the training/test values of F1-score, precision and recall metrics for each dataset and the three models under study (LR, SVM, RF).

Malware NumDetectors F1-score Precision Recall
Logistic Regression (train/test)
2% 1 0.82/0.11 0.80/0.07 0.85/0.22
25% 1 0.89/0.62 0.93/0.61 0.85/0.63
50% 1 0.93/0.75 0.97/0.91 0.89/0.64
2% 2 0.65/0.23 0.95/0.27 0.5/0.19
25% 2 0.89/0.70 0.94/0.67 0.85/0.74
50% 2 0.94/0.83 0.98/0.9 0.90/0.76
2% 4 0.81/0.29 0.81/0.22 0.81/0.42
25% 4 0.91/0.76 0.95/0.72 0.87/0.79
50% 4 0.95/0.86 0.99/0.86 0.92/0.86
Support Vector Machine (train/test)
2% 1 0.86/0.08 0.78/0.05 0.96/0.23
25% 1 0.92/0.67 0.91/0.62 0.93/0.71
50% 1 0.95/0.81 0.96/0.88 0.94/0.76
2% 2 0.83/0.18 0.75/0.11 0.93/0.38
25% 2 0.92/0.70 0.92/0.62 0.91/0.80
50% 2 0.95/0.85 0.97/0.89 0.93/0.81
2% 4 0.85/0.27 0.76/0.18 0.96/0.53
25% 4 0.93/0.76 0.93/0.69 0.92/0.84
50% 4 0.96/0.87 0.98/0.87 0.94/0.88
Random Forest (train/test)
2% 1 0.99/0.12 0.99/0.08 0.99/0.32
25% 1 0.99/0.73 0.99/0.70 0.99/0.76
50% 1 0.99/0.83 0.99/0.87 0.99/0.8
2% 2 0.99/0.22 0.99/0.15 0.99/0.45
25% 2 0.99/0.78 0.99/0.74 0.99/0.82
50% 2 0.99/0.87 0.99/0.89 0.99/0.85
2% 4 0.99/0.32 0.99/0.22 0.99/0.58
25% 4 0.99/0.82 0.99/0.77 0.99/0.86
50% 4 0.99/0.89 0.99/0.88 0.99/0.90
TABLE I: Full benchmark test with top-15 predictors.
F1-score Random Forest (train/test)
NDet 1-15 feats. 3-17 feats. 5-19 feats 7-21 feats 9-23 feats. 11-25 feats 13-27 feats
1 AV 0.99/0.83 0.99/0.86 0.99/0.84 0.99/0.74 0.96/0.72 0.88/0.67 0.80/0.64
2 AV 0.99/0.87 0.99/0.87 0.99/0.86 0.99/0.79 0.96/0.75 0.88/0.71 0.75/0.66
4 AV 0.99/0.89 0.99/0.88 0.99/0.87 0.99/0.80 0.96/0.77 0.89/0.73 0.77/0.69
TABLE II: F1-score value of Random Forests with different feature sets.

The results show that the three algorithms achieve similar performance, slightly better in the case of RF. Second, general performance improves as the percentage of malware samples increases, with the best results when malware accounts for 50% of the applications. Actually, in the 2%-malware case, the gap between training and test error suggests that the algorithms are overfitting the data. Finally, the algorithms perform best at identifying those malware applications tagged by several AV engines: when trained with malware applications tagged by two engines or more, they reach up to 0.87 F1-score on the test set (bottom rows of the table), thus providing high prediction confidence.

IV-D Robustness of the model

The reader must note that malware developers, after reading this article, may decide to use different email accounts and certificates to evade this detection mechanism. However, the "malwarish" behaviour of applications is fingerprinted redundantly across several features. On the one hand, such redundancy implies that after 13-15 features no extra predictive power is gained by adding new features (as shown in Fig. 4); on the other hand, it also provides robustness to the model: if some features are excluded (like developerRep and issuerRep), the remaining ones are still able to reach good performance.

To show this, Table II reports the F1-score results of re-running the RF algorithm on different sets of features. Essentially, the first column shows the same train/test F1-score values as Table I, since both use the same top-15 features. The second column shows the F1-values when training and testing with features 3 to 17 of Fig. 4 (i.e. the top-15 without developerRep and issuerRep). As shown, the F1-score is only slightly worse than before. Similarly, when using features 5-19, a further small decrease is observed, but good performance is still achieved. The F1-score drops quickly when the feature set starts from position 7 in the ranking onwards.

V Summary and Discussion

In summary, this work has shown that Google Play meta-data provides valuable information to detect Android malware applications, reaching F1-score values near 0.9, for example when feeding meta-data to a Random Forest. In particular, it has been shown that using no more than 15 features, malware applications can be accurately identified.

Furthermore, this work has also shown that inherent features, in particular application permissions, offer moderate prediction power (AUC-ROC about 0.7) compared to other meta-data, such as the developer's reputation (percentage of malware applications uploaded by the same developer in the past) or the certificate issuer's reputation. This allows constructing efficient classification models for the early detection of malware applications uploaded to an Android market, as a step prior to more sophisticated techniques such as code inspection or sandboxing.

The results of this work enable simple static analysis of large amounts of Android applications at once. For each app uploaded to an application market, it can be determined whether it needs further inspection or is suitable for direct publication. In addition, it is also possible to develop an in-device system which informs users beforehand about the risk of installing each application on the device.

In a nutshell, the contributions of this work are the following:

  • We evaluated the capabilities and limitations of permission-based detection approaches, using the hashing trick as a feature-reduction technique.

  • We showed that meta-data features, such as the developer's reputation (percentage of malware applications uploaded by the same developer in the past) or the certificate issuer's reputation, offer very good performance for detecting Android malware.

  • We proposed a model for Android Malware detection based on meta-data and machine learning techniques capable of detecting most Android threats, which can be leveraged both at market level and in-device application analysis.

  • We evaluated our proposed model over different benchmark tests for performance and robustness of the algorithm.


Acknowledgements

The authors would like to acknowledge the support of the Spanish project TEXEO (grant no. TEC2016-80339-R) and the EU-funded H2020 TYPES project (grant no. H2020-653449).

Additionally, Ignacio Martín would like to acknowledge the support of the Spanish Education Ministry for his FPU grant (grant no. FPU15/03518) which supports his position at UC3M.


References

  • [1] Y. Zhou and X. Jiang, “Dissecting android malware: Characterization and evolution,” in Proc. of Symp. Security and Privacy, May 2012, pp. 95–109.
  • [2] B. Baskaran and A. Ralescu, “A study of android malware detection techniques and machine learning,” 2016.
  • [3] D. Arp, M. Spreitzenbarth, M. Hübner, H. Gascon, K. Rieck, and C. Siemens, “Drebin: Effective and explainable detection of android malware in your pocket,” Proc. of Symp. Network and Distributed System Security, 2014.
  • [4] S. Y. Yerima, S. Sezer, and G. McWilliams, “Analysis of bayesian classification-based approaches for android malware detection,” IET Information Security, vol. 8, no. 1, pp. 25–36, 2014.
  • [5] J. Sahs and L. Khan, “A machine learning approach to android malware detection,” in Intelligence and Security Informatics Conference (EISIC), 2012 European, Aug 2012, pp. 141–147.
  • [6] N. Peiravian and X. Zhu, “Machine learning for android malware detection using permission and api calls,” in Tools with Artificial Intelligence (ICTAI), 2013 IEEE 25th International Conference on, Nov 2013, pp. 300–305.
  • [7] A. Feizollah, N. B. Anuar, R. Salleh, G. Suarez-Tangil, and S. Furnell, “Androdialysis: Analysis of android intent effectiveness in malware detection,” Computers & Security, vol. 65, pp. 121 – 134, 2017. [Online]. Available:
  • [8] S. Y. Yerima, S. Sezer, and I. Muttik, “High accuracy android malware detection using ensemble learning,” IET Information Security, vol. 9, pp. 313–320(7), November 2015. [Online]. Available:
  • [9] B. Sanz, I. Santos, C. Laorden, X. Ugarte-Pedrero, P. Bringas, and G. Álvarez, “PUMA: Permission usage to detect malware in Android,” in Proc. Int. Conference CISIS’12-ICEUTE’12-SOCO’12, ser. Advances in Intelligent Systems and Computing, 2013, vol. 189, pp. 289–298.
  • [10] Z. Aung and W. Zaw, “Permission-based Android malware detection,” Int. J. Scientific and Technology Research, vol. 2, no. 3, pp. 228–234, 2013.
  • [11] D. Barrera, H. G. Kayacik, P. C. van Oorschot, and A. Somayaji, “A methodology for empirical analysis of permission-based security models and its application to android,” in Proceedings of the 17th ACM conference on Computer and communications security.   ACM, 2010, pp. 73–84.
  • [12] R. Johnson, Z. Wang, C. Gagnon, and A. Stavrou, “Analysis of android applications’ permissions,” in Software Security and Reliability Companion (SERE-C), 2012 IEEE Sixth International Conference on.   IEEE, 2012, pp. 45–46.
  • [13] H.-S. Ham and M.-J. Choi, “Analysis of android malware detection performance using machine learning classifiers,” in ICT Convergence (ICTC), 2013 International Conference on, Oct 2013, pp. 490–495.
  • [14] M. Mas’ud, S. Sahib, M. Abdollah, S. Selamat, and R. Yusof, “Analysis of features selection and machine learning classifier in android malware detection,” in Information Science and Applications (ICISA), 2014 International Conference on, May 2014, pp. 1–5.
  • [15] K. O. Elish, X. Shu, D. D. Yao, B. G. Ryder, and X. Jiang, “Profiling user-trigger dependence for android malware detection,” Computers & Security, vol. 49, pp. 255 – 273, 2015. [Online]. Available:
  • [16] Z. Wang, C. Li, Z. Yuan, Y. Guan, and Y. Xue, “Droidchain: A novel android malware detection method based on behavior chains,” Pervasive and Mobile Computing, vol. 32, pp. 3 – 14, 2016, mobile Security, Privacy and Forensics. [Online]. Available:
  • [17] K. Chen, P. Wang, Y. Lee, X. Wang, N. Zhang, H. Huang, W. Zou, and P. Liu, “Finding unknown malice in 10 seconds: Mass vetting for new threats at the google-play scale,” in 24th USENIX Security Symposium (USENIX Security 15), 2015, pp. 659–674.
  • [18] A. Shabtai, U. Kanonov, Y. Elovici, C. Glezer, and Y. Weiss, “‘Andromaly’: a behavioral malware detection framework for android devices,” Journal of Intelligent Information Systems, vol. 38, no. 1, pp. 161–190, 2012.
  • [19] S. Zonouz, A. Houmansadr, R. Berthier, N. Borisov, and W. Sanders, “Secloud: A cloud-based comprehensive and lightweight security solution for smartphones,” Computers & Security, vol. 37, pp. 215–227, 2013.
  • [20] M. Grace, Y. Zhou, Q. Zhang, S. Zou, and X. Jiang, “Riskranker: Scalable and accurate zero-day android malware detection,” in Proc. of the 10th Int. Conf. on Mobile Systems, Applications, and Services, ser. MobiSys ’12, 2012.
  • [21] Y. Aafer, W. Du, and H. Yin, “Droidapiminer: Mining api-level features for robust malware detection in android,” in Security and Privacy in Communication Networks.   Springer, 2013, pp. 86–103.
  • [22] K. Shaerpour, A. Dehghantanha, and R. Mahmod, “Trends in android malware detection,” Journal of Digital Forensics, Security and Law, vol. 8, no. 3, pp. 21–40, 2013.
  • [23] I. Burguera, U. Zurutuza, and S. Nadjm-Tehrani, “Crowdroid: Behavior-based malware detection system for android,” in Proceedings of the 1st ACM Workshop on Security and Privacy in Smartphones and Mobile Devices, ser. SPSM ’11.   New York, NY, USA: ACM, 2011, pp. 15–26.
  • [24] D. Venugopal and G. Hu, “Efficient signature based malware detection on mobile devices,” Mobile Information Systems, vol. 4, no. 1, pp. 33–49, 2008.
  • [25] M. Zheng, M. Sun, and J. C. S. Lui, “Droid analytics: A signature based analytic system to collect, extract, analyze and associate android malware,” in Trust, Security and Privacy in Computing and Communications (TrustCom), 2013 12th IEEE International Conference on, July 2013, pp. 163–171.
  • [26] S. N. Hanumanthegowda, “Automated machine learning-based detection of malicious Android applications using Google Play Metadata,” Master’s thesis, Northeastern University, Illinois, USA, 2013.
  • [27] K. Weinberger, A. Dasgupta, J. Langford, A. Smola, and J. Attenberg, “Feature hashing for large scale multitask learning,” in Proceedings of the 26th Annual International Conference on Machine Learning, ser. ICML ’09.   New York, NY, USA: ACM, 2009, pp. 1113–1120.
  • [28] W. Tesfay, T. Booth, and K. Andersson, “Reputation based security model for android applications,” in Trust, Security and Privacy in Computing and Communications (TrustCom), 2012 IEEE 11th International Conference on, June 2012, pp. 896–901.
  • [29] G. James, D. Witten, T. Hastie, and R. Tibshirani, An Introduction to Statistical Learning with Applications in R.   Springer Texts in Statistics, 2015.
  • [30] A. Aswini and P. Vinod, “Droid permission miner: Mining prominent permissions for android malware analysis,” in Applications of Digital Information and Web Technologies (ICADIWT), 2014 Fifth International Conference on the, Feb 2014, pp. 81–86.