Model Selection for Anomaly Detection

E. Burnaev, P. Erofeev, D. Smolyakov
Institute for Information Transmission Problems (Kharkevich Institute) RAS
Abstract

Anomaly detection based on one-class classification algorithms is broadly used in many applied domains, such as image processing (e.g. detecting whether a patient is “cancerous” or “healthy” from a mammography image), network intrusion detection, etc. Performance of an anomaly detection algorithm crucially depends on the kernel used to measure similarity in the feature space. The standard approaches to kernel selection used in two-class classification problems (e.g. cross-validation) cannot be applied directly due to the specific nature of the data (absence of data from the second, abnormal, class). In this paper we generalize several kernel selection methods from the binary classification case to the case of one-class classification and perform an extensive comparison of these approaches using both synthetic and real-world data.
Keywords: anomaly detection, model selection, one-class classification, SVDD, kernel width, empirical risk

1 Introduction

In a one-class classification problem statement, it is assumed that we mainly use data of one, normal, class to build a decision function that describes characteristics of the data. Data of the other, abnormal, class are either not used at all or used only to refine the obtained data description. One-class classification is broadly applied in many real application domains, namely image processing, network intrusion detection, user verification in computer systems, machine fault detection, etc. In most real-world applications of one-class classification, the number of normal data samples is much larger than that of abnormal samples, or abnormal samples are not available at all [1]. The reason is that collecting normal data is inexpensive and easy in comparison with collecting abnormal data. For example, in machine fault detection applications normal data can be collected directly under the normal operating condition of a machine, while collecting abnormal data requires observing machines until they break. Due to the prevalence of one class, in one-class classification the decision boundary is built primarily from the dominant class, in contrast to binary classification, where data of both classes are used to construct the decision boundary.

One of the most popular approaches to one-class classification is based on support vector models. These models have a nice inherent property: thanks to the kernel trick, they build simple linear decision boundaries in highly non-linear, infinite-dimensional feature spaces. The most widely used kernels are Gaussian kernels. Although these techniques have become extremely popular in recent years, kernel bandwidth selection remains an open question and, as in binary classification, is crucial for model performance. Despite the similarity to binary classification problems, the standard approaches to bandwidth selection, like cross-validation, cannot be used for one-class classification models due to the specific nature of the data (absence of data from the second, abnormal, class).

In this paper we concentrate on Support Vector Data Description (SVDD) [2], a one-class classification algorithm from the family of support vector models. The main difference from the standard one-class SVM is that in the linear case SVDD builds a spherical boundary around the normal data instead of a hyperplane separating the normal data from the origin (in the feature space).

Several approaches to kernel selection have been proposed in the literature. Some of them are intended for binary classification but can be generalized to the one-class problem; others are specially designed for one-class classification. For example, in [3] the authors proposed an empirical risk for one-class SVM and proved that under certain conditions it converges to the real risk. Another approach [4] is based on the idea that the kernel width should be selected in such a way that the fractions of outliers and of support vectors in the final model are close to their theoretical estimates, which depend on the parameters of the SVDD algorithm. In [5] the authors optimize the decision region directly so that it is neither too sparse nor too dense. All the mentioned approaches imply building and comparing several models w.r.t. some criterion, while in [6] a direct optimization method for kernel parameters is proposed that does not require explicit model building.

In this paper we propose an original method for SVDD model selection based on empirical risk approximation using oversampling with SMOTE [7] and develop a methodology for SVDD model selection. Finally, we generalize several kernel selection methods from the binary classification case to the case of one-class classification and perform an extensive comparison of these approaches using both synthetic and real-world data.

2 Anomaly Detection using SVDD

We consider Support Vector Data Description (SVDD), proposed in [2] for the one-class classification problem. Let us have a dataset $X = \{x_1, \ldots, x_n\} \subset \mathbb{R}^d$. Let $\phi_\gamma$ be some mapping to a high-dimensional feature space, depending on a scalar parameter $\gamma$. In this space we solve the following optimization problem:

$$\min_{R,\,a,\,\xi}\ R^2 + \frac{1}{\nu n}\sum_{i=1}^{n}\xi_i \quad \text{s.t.}\quad \|\phi_\gamma(x_i) - a\|^2 \le R^2 + \xi_i,\ \ \xi_i \ge 0,\ \ i = 1, \ldots, n. \qquad (1)$$

The physical meaning of this optimization problem is that we are looking for a minimum-volume ball with center $a$ and radius $R$ in the high-dimensional space defined by $\phi_\gamma$, containing the dataset. Herewith we allow some points to lie outside the boundary of the ball and control their number by the parameter $\nu \in (0, 1]$. The penalty in (1) is equivalent to an $\ell_1$ penalty that zeros out some of the parameters; in our case it means that several points will lie exactly on the surface of the ball [2]. We can rewrite (1) in dual form and find that the decision function, predicting whether a point $x$ is an anomaly or not, is defined through the kernel $K_\gamma(x, x') = \langle \phi_\gamma(x), \phi_\gamma(x') \rangle$ and has the form $f_\gamma(x) = \operatorname{sign}\bigl(\sum_i \alpha_i K_\gamma(x_i, x) - \rho\bigr)$ for some $\alpha_i \ge 0$ and threshold $\rho$. A typical example of the kernel we are using is the Gaussian kernel $K_\gamma(x, x') = \exp\bigl(-\|x - x'\|^2 / (2\gamma^2)\bigr)$, $\gamma > 0$.
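As an illustration, the Gaussian kernel and the resulting decision rule can be sketched in plain Python. This is a minimal sketch: the function names are ours, and the coefficients and threshold are assumed to come from a dual solver, which is not shown.

```python
import math

def gaussian_kernel(x, y, gamma):
    """K(x, x') = exp(-||x - x'||^2 / (2 * gamma^2))."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2.0 * gamma ** 2))

def svdd_decision(x, support_vectors, alphas, rho, gamma):
    """Evaluate sign(sum_i alpha_i * K(x_i, x) - rho):
    returns +1 for a normal point and -1 for an anomaly."""
    score = sum(a * gaussian_kernel(sv, x, gamma)
                for sv, a in zip(support_vectors, alphas))
    return 1 if score >= rho else -1
```

A point close to the support vectors gets a high kernel score and is classified as normal; a distant point gets a score near zero and is flagged as an anomaly.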

3 Model Selection Techniques

In this section we give brief descriptions of several model selection approaches for SVDD, which are based on optimization of some risk functions.

3.1 Support Vectors and Anomalies Fraction Optimization

Complexity of the model is described by the number of support vectors, i.e. the points corresponding to nonzero dual coefficients. In the original paper [2] the authors proved that the parameter $\nu$ in (1) is an upper bound for the fraction of elements in the training data marked as anomalies (outliers) and a lower bound for the fraction of elements used as support vectors. Let us define the risk function as follows:

$$R_{\mathrm{SV}}(\gamma) = \frac{n_{\mathrm{SV}}(\gamma)}{n} + \frac{n_{\mathrm{out}}(\gamma)}{n}, \qquad (2)$$

where $n_{\mathrm{SV}}(\gamma)/n$ is the fraction of support vectors in the dataset, defining the current decision function, and $n_{\mathrm{out}}(\gamma)/n$ is the fraction of training points marked as anomalies. Minimizing $R_{\mathrm{SV}}(\gamma)$ we can balance complexity and classification accuracy on the training set.
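Under this reading of the criterion (a sketch; the function name and argument layout are ours), the risk is simply the sum of the observed fraction of support vectors and the observed fraction of training outliers, which by [2] bracket $\nu$ from above and below:

```python
def sv_fraction_risk(n_sv, n_out, n_train):
    """Fraction of support vectors (model complexity) plus fraction of
    training points flagged as anomalies (training error).
    By Tax & Duin, n_out / n_train <= nu <= n_sv / n_train."""
    return n_sv / n_train + n_out / n_train
```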

3.2 Empirical Risk

Another approach would be to reduce the one-class classification to binary classification [3]. It can be proved that the following empirical risk converges to the real risk of this binary classification problem:

$$R_{\mathrm{emp}}(\gamma) = \frac{1}{n}\sum_{i=1}^{n}\mathbb{I}\left[f_\gamma(x_i) = -1\right] + E_\mu(f_\gamma), \qquad (3)$$

where $E_\mu(f_\gamma)$ denotes the average error of classification on anomalous elements assuming that they have distribution $\mu$. This value is estimated using Monte Carlo simulation. Generally, for modeling $\mu$ it is reasonable to use the least favorable, uniform, distribution.

3.3 Risk based on Oversampling

In the empirical risk (3) the same dataset is used both for model construction and for risk estimation, leading to biased estimates. That is why for the risk assessment we propose to use sampling based on SMOTE [7], an oversampling technique designed for imbalanced binary classification. We define the risk as

$$R_{\mathrm{SMOTE}}(\gamma) = \frac{1}{m}\sum_{j=1}^{m}\mathbb{I}\left[f_\gamma(\tilde{x}_j) = -1\right] + E_\mu(f_\gamma), \qquad (4)$$

where $\tilde{x}_1, \ldots, \tilde{x}_m$ are new examples of normal data generated synthetically from the training set by the SMOTE algorithm. The first summand in (4) is proportional to the number of synthetic normal elements marked by the classifier as anomalies. The second summand is the same as in (3).
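A minimal sketch of SMOTE-style generation of synthetic normal points, following the interpolation idea of [7]: pick a point, pick one of its $k$ nearest neighbours, and sample a point on the segment between them. The function name and signature are ours, not the interface of any SMOTE library.

```python
import random

def smote_oversample(data, n_new, k=3, seed=0):
    """Generate n_new synthetic points by interpolating between a randomly
    chosen point and one of its k nearest neighbours (Euclidean distance)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(data)
        neighbours = sorted(
            (p for p in data if p != x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))[:k]
        nb = rng.choice(neighbours)
        t = rng.random()  # random position on the segment [x, nb]
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(x, nb)))
    return synthetic
```

Since synthetic points lie on segments between training points, they stay inside the convex hull of the normal data, which is what makes them usable as held-out "normal" examples for the first term of (4).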

3.4 Kernel Matrix Optimization

Another popular idea is to optimize a kernel matrix statistic directly, without building an anomaly detection model. For example, in [6] a simple statistic was proposed:

$$R_{\mathrm{kern}}(\gamma) = -\frac{s^2(\gamma)}{\bar{k}(\gamma)}, \qquad (5)$$

where $\bar{k}(\gamma)$ and $s^2(\gamma)$ are a normalized sum of the kernel matrix elements and a normalized variance of the kernel matrix elements, correspondingly.
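The two quantities are cheap to compute from the Gaussian kernel matrix alone; no model fitting is involved. A plain-Python sketch (the function name is ours):

```python
import math

def kernel_matrix_stats(data, gamma):
    """Return the normalized sum (mean) and the normalized variance of the
    entries of the Gaussian kernel matrix
    K_ij = exp(-||x_i - x_j||^2 / (2 * gamma^2))."""
    n = len(data)
    entries = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, y))
                        / (2.0 * gamma ** 2))
               for x in data for y in data]
    mean = sum(entries) / (n * n)
    var = sum((e - mean) ** 2 for e in entries) / (n * n)
    return mean, var
```

As $\gamma \to 0$ or $\gamma \to \infty$ the off-diagonal entries all collapse to 0 or 1 and the variance vanishes, which is why a variance-based statistic singles out an intermediate bandwidth.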

3.5 Kernel Polarization

Another approach to tuning the kernel matrix is the so-called polarization optimization. It was originally proposed for binary classification problems but can be naturally generalized to one-class classification problems [3]. In order to calculate the polarization we generate $m$ artificial anomaly points from a uniform distribution. All elements of the initial data set are marked as normal, so that finally we have a set of elements $(x_i, y_i)$, $i = 1, \ldots, n + m$, where $y_i = 1$ for $i \le n$ and $y_i = -1$ for $i > n$. The polarization risk is calculated as [11]

$$R_{\mathrm{pol}}(\gamma) = -\sum_{i=1}^{n+m}\sum_{j=1}^{n+m} y_i y_j K_\gamma(x_i, x_j). \qquad (6)$$
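The polarization itself is a direct double sum over the labelled set; a larger value means the kernel aligns better with the labels. A plain-Python sketch for the Gaussian kernel (the function name is ours):

```python
import math

def kernel_polarization(data, labels, gamma):
    """Compute sum_{i,j} y_i y_j K(x_i, x_j) for the Gaussian kernel,
    where labels are +1 (normal) or -1 (artificial anomaly)."""
    def k(x, y):
        return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y))
                        / (2.0 * gamma ** 2))
    return sum(yi * yj * k(xi, xj)
               for xi, yi in zip(data, labels)
               for xj, yj in zip(data, labels))
```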

4 Numerical Experiments

In this section we describe our settings for the numerical experiments and provide results of kernel selection techniques comparison.

4.1 Validation Error

The real risk of anomaly detection consists of two parts: misclassification of points from the normal distribution as anomalies and misclassification of abnormal data as normal.

For measuring the quality of a decision function we use the sum of these errors on an independent validation data set, each normalized by the number of elements. Let $X_{\mathrm{norm}}$ be the set of normal elements in the validation set and $X_{\mathrm{anom}}$ be the set of anomalous elements in the validation set. Then the real risk estimate is calculated as follows:

$$R_{\mathrm{val}}(\gamma) = \frac{1}{|X_{\mathrm{norm}}|}\sum_{x \in X_{\mathrm{norm}}}\mathbb{I}\left[f_\gamma(x) = -1\right] + \frac{1}{|X_{\mathrm{anom}}|}\sum_{x \in X_{\mathrm{anom}}}\mathbb{I}\left[f_\gamma(x) = 1\right]. \qquad (7)$$

The validation risk behavior depends on the dimensionality of the data. A typical behavior is presented in figure 1(a). Generally there is a plateau of almost the same validation error values for some range of $\gamma$.
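The estimate (7) is straightforward to compute given a decision function and a labelled validation set. A sketch in plain Python; `decide` stands for the fitted decision function returning +1 for normal and -1 for anomaly:

```python
def validation_risk(decide, normal_val, anomaly_val):
    """Error on normal validation points plus error on anomalous
    validation points, each normalized by the size of its set."""
    err_normal = sum(decide(x) == -1 for x in normal_val) / len(normal_val)
    err_anomaly = sum(decide(x) == 1 for x in anomaly_val) / len(anomaly_val)
    return err_normal + err_anomaly
```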

4.2 Behavior of the Risk Functions

In order to understand the typical behavior of the introduced risk functions we estimate the validation error of the constructed model for different values of $\gamma$ and compare it to the estimates of the risks. In these experiments we fix the value of $\nu$, and hence the fraction of anomalies. We generate normal data from a mixture of two normal distributions with distinct means and identity covariance matrices, and generate anomalies from a uniform distribution on a box covering the normal data. For the validation a large dataset from the same distributions is drawn.

The empirical risk (see figure 1(b)) looks very similar to the validation error; the ranges of the plateaus mostly coincide. The risk based on SMOTE oversampling (see figure 1(c)) looks even more similar to the real validation error. In either case, choosing the minimizer of the risk as the optimal value of $\gamma$ yields similar results. In the case of the support vectors based risk (see figure 1(d)) the plateau of the risk is much wider than that of the validation error. The kernel risk (see figure 1(e)) behaves very smoothly with a single minimum, as shown in [6], but this minimum is biased with respect to the optimal value of $\gamma$. Finally, the polarization risk (see figure 1(f)) also gives a single minimum, and the optimum value belongs to the plateau of the validation error.

(a) Validation error behavior
(b) Empirical risk
(c) SMOTE risk
(d) SV risk
(e) Kernel risk
(f) Polarization risk
Figure 1: Dependency of a risk function and a validation error on $\gamma$

4.3 Distribution of Anomalies

We use a uniform distribution of anomalies when calculating the proposed risk functions. It is natural to test how these risk functions depend on the true distribution of anomalies. We use a normal distribution, with variance comparable to the range of the normal data, to generate anomalies in the learning data sample, while when estimating the risks we still use uniformly distributed synthetic anomalies. From figure 2 we see that it is still possible to use uniformly generated synthetic anomalies even if true anomalies come from another distribution, and we can select a reasonable $\gamma$ as the largest value from the plateau of the corresponding risk function.

(a) Empirical risk
(b) SMOTE risk
(c) Polarization risk
(d) Empirical risk
(e) SMOTE risk
(f) Polarization risk
Figure 2: Dependency of a risk function and a validation error on $\gamma$ for a non-uniform anomaly distribution

4.4 Real World Data

Real-world data were taken from http://homepage.tudelft.nl/n9d04/occ/index.html. These data sets are generated from multi-class classification problems like “Iris Dataset”, “Sonar”, etc. For each problem we considered each class as normal and added anomalies from a uniform distribution to get a one-class classification problem. Borders of the uniform distribution were taken as doubled borders of the normal class. The number of anomalies was set to several fixed fractions of the normal class size. The resulting collection of datasets varies widely in sample size and dimensionality: for some of the datasets the dimensionality is much larger than the sample size (datasets obtained from the “Leukemia” and “Colon” classification problems); the majority of datasets are of moderate size, but there are also several big datasets, like those obtained from the “Spam Base” and “Concordia” classification problems, as well as data sets with a very small sample size, like that obtained from the “Leukemia” classification problem.

To compare model selection methods on real data we use Dolan-More curves [12], which are built in the following way. Let $M$ be the set of considered model selection methods, $T$ be the set of tasks (datasets), and $q_{m,t}$ be the quality (validation error) of the method $m \in M$ on the dataset $t \in T$. For each method $m$ we introduce $\rho_m(\tau)$, the fraction of datasets on which the method is worse than the best one by not more than $\tau$ times:

$$\rho_m(\tau) = \frac{1}{|T|}\left|\left\{t \in T : q_{m,t} \le \tau \min_{m' \in M} q_{m',t}\right\}\right|.$$

For example, $\rho_m(1)$ is the fraction of datasets on which the method $m$ is the best. The graph of $\rho_m(\tau)$ is called the Dolan-More curve of the method $m$. This definition implies that the higher the curve, the better the method. Note that the Dolan-More curve for a particular method depends on the other methods considered in the comparison.
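The construction above can be sketched directly (plain Python; `quality[m]` is the list of per-dataset errors of method `m`, lower is better, all assumed positive; the function name is ours):

```python
def dolan_more_curve(quality, taus):
    """quality: dict method -> list of errors, one per dataset.
    Returns dict method -> list of rho_m(tau) values: the fraction of
    datasets on which the method is within a factor tau of the best."""
    methods = list(quality)
    n_tasks = len(next(iter(quality.values())))
    best = [min(quality[m][t] for m in methods) for t in range(n_tasks)]
    return {m: [sum(quality[m][t] <= tau * best[t]
                    for t in range(n_tasks)) / n_tasks
                for tau in taus]
            for m in methods}
```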

Figure 3: Dolan-More curves for comparison of model selection methods on real data

From figure 3 we can see that the performances of the SMOTE risk and of the Empirical risk are quite similar, but the curve for the SMOTE risk is slightly higher than that for the Empirical risk. The kernel risk and the Support Vectors risk provide worse performance.

5 Conclusions

We considered Support Vector Data Description as a one-class classification algorithm, generalized several model selection methods from binary classification to SVDD model selection, and performed an extensive comparison of these approaches using synthetically generated data and real-world data sets.

Acknowledgement: The research was conducted in IITP RAS and supported solely by the Russian Science Foundation grant (project 14-50-00150).

References

  • [1] E. Burnaev, P. Erofeev, A. Papanov. “Influence of Resampling on Accuracy of Imbalanced Classification”, Proceedings of the ICMV-2015 conference (2015)
  • [2] D. Tax, R. Duin. “Support vector data description”, Machine learning, 54(1), p. 45–66 (2004)
  • [3] I. Steinwart, D. Hush, C. Scovel. “A classification framework for anomaly detection”, Journal of Machine Learning Research, p. 211–232 (2005)
  • [4] H. Lukashevich, S. Nowak, P. Dunker. “Using one-class svm outliers detection for verification of collaboratively tagged image training sets”, IEEE International Conference on Multimedia and Expo ICME-2009, p. 682–685, IEEE (2009)
  • [5] Y. Xiao, H. Wang, L. Zhang, W. Xu. “Two methods of selecting gaussian kernel parameters for one-class svm and their application to fault detection”, Knowledge-Based Systems, vol. 59, p. 75–84 (2014)
  • [6] P. Evangelista, M. Embrechts, B. Szymanski. “Some properties of the gaussian kernel for one class learning”, Artificial Neural Networks–ICANN 2007, p. 269–278, Springer (2007)
  • [7] N. Chawla, K. Bowyer, L. Hall, W. Kegelmeyer. “Smote: synthetic minority over-sampling technique”, Journal of artificial intelligence research, 16(1), p. 321–357 (2002)
  • [8] V. Hodge, J. Austin. “A survey of outlier detection methodologies”, Artificial Intelligence Review, 22(2), p.85–126 (2004)
  • [9] Y. Wang, J. Wong, A. Miner. “Anomaly intrusion detection using one class svm”, Proceedings from the Fifth Annual IEEE SMC Information Assurance Workshop 2004, p. 358–364, IEEE (2004)
  • [10] A. Gardner, A. Krieger, G. Vachtsevanos, B. Litt. “One-class novelty detection for seizure analysis from intracranial EEG”, The Journal of Machine Learning Research, vol. 7, p. 1025–1044 (2006)
  • [11] Y. Baram. “Learning by Kernel Polarization”, Neural Computation, 17(6), p. 1264–1275 (2005)
  • [12] E. Dolan, J. Moré. “Benchmarking Optimization Software With Performance Profiles”, Mathematical Programming, 91(2), p. 201–213 (2002)