Scalable and Interpretable One-class SVMs with Deep Learning and Random Fourier Features
Abstract
One-class Support Vector Machine (OCSVM) has long been one of the most effective anomaly detection methods and is widely adopted in both research and industrial applications. The biggest issue for OCSVM is, however, the capability to operate with large and high-dimensional datasets due to inefficient features and optimization complexity. Those problems might be mitigated via dimensionality reduction techniques such as manifold learning or autoencoders. However, previous work often treats representation learning and anomaly prediction separately. In this paper, we propose the autoencoder-based one-class SVM (AE1SVM), which brings OCSVM, with the aid of random Fourier features to approximate the radial basis kernel, into the deep learning context by combining it with a representation learning architecture, and jointly exploits stochastic gradient descent to obtain end-to-end training. Interestingly, this also opens up the possible use of gradient-based attribution methods to explain the decision making for anomaly detection, which has hitherto been challenging as a result of the implicit mappings between the input space and the kernel space. To the best of our knowledge, this is the first work to study the interpretability of deep learning in anomaly detection. We evaluate our method on a wide range of unsupervised anomaly detection tasks, in which our end-to-end training architecture achieves a performance significantly better than the previous work using separate training.
1 Introduction
Anomaly detection (AD), also known as outlier detection, is a unique class of machine learning problems with a wide range of important applications, including intrusion detection in networks and control systems, fault detection in industrial manufacturing, diagnosis of certain diseases in medical areas by identifying outlying patterns in medical images or other health records, cybersecurity, etc. AD algorithms are identification processes that are able to single out items or events that are different from an expected pattern, or that have significantly lower frequencies than others in a dataset [15, 8].
In the past, there has been substantial effort in using traditional machine learning techniques for both supervised and unsupervised anomaly detection, such as Principal Component Analysis (PCA) [16, 6, 7], one-class support vector machine (OCSVM) [25, 32, 12], Isolation Forest [21], clustering-based methods such as k-means and the Gaussian mixture model (GMM) [4, 39, 18, 35], etc. However, they often become inefficient when used in high-dimensional problems. There has recently been growing interest in using deep learning techniques to tackle this issue. Nonetheless, much work still relies on two-staged or separate training, in which a low-dimensional space is first learnt via an autoencoder. For example, the work in [13] simply uses a hybrid architecture with a deep belief network to reduce the dimensionality of the input space and separately applies the learned feature space to a conventional OCSVM. Robust Deep Autoencoder (RDA) [38] combines robust PCA and a deep autoencoder. Deep Embedded Clustering (DEC) [34] is a state-of-the-art algorithm that combines an unsupervised autoencoding network with clustering. However, all these two-stage training methods cannot learn efficient features for anomaly detection tasks, especially as the dimensionality grows higher.
End-to-end training of dimensionality reduction and anomaly detection has recently received much interest, such as the frameworks using a deep energy-based model [37], an autoencoder combined with a Gaussian mixture model [40], and generative adversarial networks (GANs) [24, 36]. However, these methods rely on density estimation techniques to detect anomalies as a by-product of unsupervised learning, and therefore might not be efficient for anomaly detection. They might assign high density if there are many proximate anomalies (a new cluster or mixture component might be established for them), hence resulting in false negatives.
One-class support vector machine is one of the most popular techniques for unsupervised anomaly detection. OCSVM is known to be insensitive to noise and outliers in the training data. Still, the performance of OCSVM is in general susceptible to the dimensionality and complexity of the data [5], while its training speed is also heavily affected by the size of the dataset. As a result, conventional OCSVM may not be desirable in big-data and high-dimensional AD applications. To tackle these issues, previous work has only performed deep-learning-based dimensionality reduction and OCSVM-based anomaly detection separately. Nevertheless, separate dimensionality reduction might have a negative effect on the performance of the subsequent AD, since information important for identifying outliers can be interpreted differently in the latent space. On the other hand, to the best of our knowledge, studies on the application of kernel approximation and stochastic gradient descent (SGD) to OCSVM have been lacking: most existing work only applies random Fourier features (RFF) [23] to the input space and treats the problem as a linear SVM, while [26, 5] have shown the prospect of using SGD to optimize SVMs, but without the application of kernel approximation.
Another major issue in joint training of dimensionality reduction and AD is the interpretability of the trained models, that is, the capability to explain the reasoning for why they detect samples as outliers in terms of the input features. Very recently, explanation for black-box deep learning models has been brought forward and has attracted a respectable amount of attention from the machine learning research community. In particular, gradient-based explanation (attribution) methods [3, 29, 2] are widely studied as protocols to address this challenge. The aim of the approach is to analyse the contribution of each neuron in the input space of a neural network to the neurons in its latent space by calculating the corresponding gradients. As we will demonstrate, the same concept can be applied to kernel-approximated support vector machines to score the importance of each input feature to the margin that separates the decision hyperplane.
Driven by this reasoning, in this paper we propose AE1SVM, an end-to-end autoencoder-based OCSVM model combining dimensionality reduction and OCSVM for large-scale anomaly detection. RFFs are applied to approximate the RBF kernel, while the input of the OCSVM is fed directly from a deep autoencoder that shares its objective function with the OCSVM, such that dimensionality reduction is forced to learn essential patterns assisting the anomaly detection task. On top of that, we also extend gradient-based attribution methods to the proposed kernel-approximated OCSVM, as well as to the whole end-to-end architecture, to analyse the contribution of the input features to the decision making of the OCSVM.
The remainder of the paper is organised as follows. Section 2 reviews the background on OCSVM, kernel approximation, and gradient-based attribution methods. Section 3 introduces the combined architecture that we have mentioned. In Section 4, we derive expressions and methods to obtain the end-to-end gradient of the OCSVM's decision function w.r.t. the input features of the deep learning model. Experimental setups, results, and analyses are presented in Section 5. Finally, Section 6 draws the conclusions for the paper.
2 Background
In this section, we briefly describe the preliminary background used in the rest of the paper.
2.1 One-class support vector machine
OCSVM [25] for unsupervised anomaly detection extends the idea of the support vector method that is regularly used for classification. While a classic SVM tries to find the hyperplane maximizing the margin separating the data points, in OCSVM the hyperplane is learned to best separate the data points from the origin. SVMs in general have the ability to capture nonlinearity thanks to the use of kernels. The kernel method maps the data points from the input feature space $\mathbb{R}^d$ into a higher-dimensional space $\mathbb{R}^D$ (where $D$ is potentially infinite), in which the data is linearly separable, by a transformation $\phi$. The most commonly used kernel is the radial basis function (RBF) kernel, defined by a similarity mapping between any two points $x$ and $x'$ in the input feature space, formulated as $K(x, x') = \exp(-\|x - x'\|^2 / (2\sigma^2))$, where $\sigma$ is the kernel bandwidth.
Let $w$ and $\rho$ denote the vector determining the weight of each dimension in the kernel space and the offset parameter determining the distance from the origin to the hyperplane, respectively. The objective of OCSVM is to separate all data points from the origin by a maximum margin w.r.t. some constraint relaxation, and is written as a quadratic program as follows
(1)  $\min_{w,\,\xi,\,\rho}\ \frac{1}{2}\|w\|^2 + \frac{1}{\nu n}\sum_{i=1}^{n}\xi_i - \rho$, subject to $w^\top \phi(x_i) \ge \rho - \xi_i,\ \xi_i \ge 0,\ i = 1, \dots, n$,
where $\xi_i$ is a slack variable and $\nu \in (0, 1]$ is the regularisation parameter. Theoretically, $\nu$ is the upper bound of the fraction of anomalies in the data, and is also the main tuning parameter for OCSVM. Additionally, by replacing the slack variables $\xi_i$ with the hinge loss, we have the unconstrained objective function as
(2)  $\min_{w,\,\rho}\ \frac{1}{2}\|w\|^2 - \rho + \frac{1}{\nu n}\sum_{i=1}^{n}\max\bigl(0,\ \rho - w^\top \phi(x_i)\bigr)$
Let $f(x) = w^\top \phi(x) - \rho$; the decision function of OCSVM is
(3)  $g(x) = \operatorname{sign}(f(x)) = \operatorname{sign}\bigl(w^\top \phi(x) - \rho\bigr)$,

where a negative value flags an anomalous sample.
The optimization problem of OCSVM in (1) is usually solved as a convex optimization problem in the dual space with the use of Lagrangian multipliers to reduce complexity while increasing solving feasibility. LIBSVM [9] is the most popular library that provides efficient optimization algorithms to train SVMs, and it has been widely used in the research community. Nevertheless, solving SVMs in the dual space can be susceptible to the data size, since the kernel function between each pair of points in the dataset has to be calculated and stored in a matrix, resulting in $O(n^2)$ complexity, where $n$ is the size of the dataset.
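As an illustrative sketch (our own code and naming, not the paper's implementation), the primal objective (2) and decision function (3) can be evaluated directly for points already mapped into a kernel feature space:

```python
import numpy as np

def ocsvm_objective(w, rho, phi_x, nu):
    """Unconstrained OCSVM objective (2) with hinge loss.

    phi_x: (n, D) array of points already mapped into the kernel space.
    nu:    upper bound on the fraction of anomalies (main tuning parameter).
    """
    n = phi_x.shape[0]
    margins = phi_x @ w                      # w . phi(x_i) for each point
    hinge = np.maximum(0.0, rho - margins)   # slack of points inside the margin
    return 0.5 * w @ w - rho + hinge.sum() / (nu * n)

def decision_function(w, rho, phi_x):
    """Decision function (3): negative values flag anomalies."""
    return np.sign(phi_x @ w - rho)
```

Because the objective is a differentiable (sub-gradient) scalar in $w$ and $\rho$, it can be minimized by SGD instead of dual quadratic programming, which is the property the proposed model exploits.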
2.2 Kernel approximation with random Fourier features
To address the scalability problem of kernel machines, approximation algorithms have been introduced and widely applied, the two most dominant being the Nyström method [33] and Random Fourier Features (RFF) [23]. In this paper, we focus on RFF since it has lower complexity and does not require pre-training. The method is based on the Fourier transform of the kernel function, which for the RBF kernel is a Gaussian distribution:
(4)  $p(\omega) = \mathcal{N}(\mathbf{0},\ \sigma^2 I)$
where $I$ is the identity matrix and $\sigma$ is an adjustable parameter representing the standard deviation of the Gaussian distribution.
From the distribution $p(\omega)$, $D$ independent and identically distributed (IID) weights $\omega_1, \dots, \omega_D$ are drawn. In the original work [23], two mappings are introduced, which are:

The combined $\cos$ and $\sin$ mapping, $z_{\omega}(x) = [\cos(\omega^\top x),\ \sin(\omega^\top x)]^\top$, which leads to the complete mapping being defined as follows

(5)  $z(x) = \sqrt{\tfrac{1}{D}}\,\bigl[\cos(\omega_1^\top x),\ \sin(\omega_1^\top x),\ \dots,\ \cos(\omega_D^\top x),\ \sin(\omega_D^\top x)\bigr]^\top$
The offset mapping, $z_{\omega}(x) = \sqrt{2}\cos(\omega^\top x + b)$, where the offset parameter $b \sim \mathrm{Uniform}(0, 2\pi)$. Consequently, the complete mapping in this case is

(6)  $z(x) = \sqrt{\tfrac{2}{D}}\,\bigl[\cos(\omega_1^\top x + b_1),\ \dots,\ \cos(\omega_D^\top x + b_D)\bigr]^\top$
It has been proven in [31] that the former mapping outperforms the latter in approximating RBF kernels, due to the fact that no phase shift is introduced by an offset variable. Therefore, in this paper, we only consider the combined $\cos$ and $\sin$ mapping.
Applying the kernel approximation mapping to (2), the unconstrained OCSVM objective function with hinge loss becomes
(7)  $\min_{w,\,\rho}\ \frac{1}{2}\|w\|^2 - \rho + \frac{1}{\nu n}\sum_{i=1}^{n}\max\bigl(0,\ \rho - w^\top z(x_i)\bigr)$
which is equivalent to a linear OCSVM in the approximated kernel space $\mathbb{R}^{2D}$, and thus the optimization problem is much more tractable, despite the dimensionality of $\mathbb{R}^{2D}$ being higher than that of the input space.
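As a sanity-check sketch (our own code; `rff_map` and the variable names are assumptions), the inner product of the mapping in (5) concentrates around the exact RBF kernel value as $D$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_map(X, omega):
    """Cos/sin random Fourier mapping of eq. (5); X is (n, d), omega is (D, d).

    Returns an (n, 2D) feature matrix whose inner products approximate
    the RBF kernel.
    """
    proj = X @ omega.T                                   # (n, D) projections
    return np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(omega.shape[0])

# Draw frequencies for an RBF kernel with bandwidth sigma.
d, D, sigma = 3, 5000, 1.0
omega = rng.normal(scale=1.0 / sigma, size=(D, d))
x, y = rng.normal(size=(1, d)), rng.normal(size=(1, d))

approx = float(rff_map(x, omega) @ rff_map(y, omega).T)  # z(x) . z(y)
exact = float(np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2)))
```

Stacking all cosines before all sines is equivalent, up to a permutation of coordinates, to the interleaved layout in (5); note also that a frequency scale of $1/\sigma$ corresponds to a kernel of bandwidth $\sigma$.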
2.3 Gradient-based explanation methods
Gradient-based methods exploit the gradient of the latent nodes in a neural network with respect to the input features to rate the attribution of each input to the output of the network. In recent years, many research studies [29, 30, 22, 2] have applied this approach to explain the classification decisions and input-feature sensitivity of deep neural networks, especially convolutional neural networks. Intuitively, an input dimension $x_i$ has a larger contribution to a latent node $h$ if the gradient of $h$ with respect to $x_i$ is larger, and vice versa. Instead of using the raw gradient as a quantitative factor, various extensions of the method have been developed, including Gradient*Input [28], Integrated Gradients [30], and DeepLIFT [27]. The most recent work [2] showed that these methods are strongly related and proved conditions of equivalence or approximation between them. In addition, some non-gradient-based methods can be reformulated to be implemented as easily as gradient-based ones.
3 Deep autoencoding one-class SVM
In this section, we present our combined model, Deep Autoencoding One-class SVM (AE1SVM), based on OCSVM, for anomaly detection tasks in high-dimensional and big datasets. The model consists of two main components, as illustrated in Figure 1 (Left):

A deep autoencoder network for dimensionality reduction and feature representation of the input space.

An OCSVM for anomaly prediction based on support vectors and margin. The RBF kernel is approximated using random Fourier features.
The bottleneck layer of the deep autoencoder network is forwarded directly into the random Fourier feature mapper as the input of the OCSVM. By doing this, the autoencoder network is pushed to optimize its variables to represent the input features in the direction that supports the OCSVM in separating the anomalies from the normal class.
Let us denote by $x$ the input of the deep autoencoder and by $\hat{x}$ the reconstruction of $x$. In addition, let $\theta$ be the set of parameters of the autoencoder. The joint objective function of the model, regarding the autoencoder parameters, the OCSVM's weights $w$, and its offset $\rho$, is as follows
(8)  $Q(\theta, w, \rho) = \alpha\, L(x, \hat{x}) + \frac{1}{2}\|w\|^2 - \rho + \frac{1}{\nu n}\sum_{i=1}^{n}\max\bigl(0,\ \rho - w^\top z(\tilde{x}_i)\bigr)$, where $\tilde{x}_i$ is the bottleneck representation of $x_i$.
The components and parameters in (8) are described below

$L(x, \hat{x})$ is the reconstruction loss of the autoencoder, which is normally chosen to be the squared L2-norm loss $L(x, \hat{x}) = \|x - \hat{x}\|_2^2$.

Since SGD is utilized, the variable $n$, which was formerly the number of training samples, becomes the batch size, since the hinge loss is calculated using only the data points in the batch.

$z(\cdot)$ is the random Fourier mapping as defined in (5). Since the random features are data-independent, the standard deviation $\sigma$ of the Gaussian distribution has to be fine-tuned in correlation with the parameter $\nu$.

$\alpha$ is a hyperparameter controlling the trade-off between feature compression and SVM margin optimization.
Overall, the objective function is optimized jointly using SGD with backpropagation. Furthermore, the autoencoder network can also be extended to a convolutional autoencoder, which is showcased in the experiment section.
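A minimal sketch of how the combined loss (8) can be evaluated on a mini-batch follows, assuming a single-hidden-layer sigmoid autoencoder and our own variable names (the paper's implementation uses TensorFlow and deeper networks):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def joint_loss(X, params, omega, nu=0.4, alpha=1000.0):
    """One evaluation of the joint objective (8) on a mini-batch X of shape (n, d).

    params holds a single-hidden-layer autoencoder (We, be, Wd, bd) plus the
    OCSVM variables (w, rho); in practice all of them are updated together by
    SGD on this scalar.
    """
    We, be, Wd, bd, w, rho = params
    n = X.shape[0]
    h = sigmoid(X @ We + be)                 # bottleneck representation
    X_rec = sigmoid(h @ Wd + bd)             # reconstruction
    recon = np.sum((X - X_rec) ** 2) / n     # autoencoder L2 loss
    proj = h @ omega.T                       # random Fourier projections
    z = np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(omega.shape[0])
    hinge = np.maximum(0.0, rho - z @ w).sum() / (nu * n)
    return alpha * recon + 0.5 * w @ w - rho + hinge
```

Because both components share this single scalar, gradients from the hinge term flow back through the RFF mapping into the encoder weights, which is what forces the learned representation to assist the anomaly detection task.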
4 Interpretable autoencoding one-class SVM
In this section, we outline the method for interpreting the results obtained from AE1SVM using gradients, and present an illustrative example to verify its validity.
4.1 Derivations of end-to-end gradients
Consider an input $\tilde{x}$ of an RFF kernel-approximated OCSVM, with dimension $\tilde{d}$. In our model, $\tilde{x}$ is the bottleneck representation in the latent space of the deep autoencoder. The expression of the margin $f(\tilde{x})$ with respect to the input $\tilde{x}$ is as follows
(9)  $f(\tilde{x}) = w^\top z(\tilde{x}) - \rho = \sqrt{\tfrac{1}{D}} \sum_{i=1}^{D} \bigl[w_{2i-1}\cos(\omega_i^\top \tilde{x}) + w_{2i}\sin(\omega_i^\top \tilde{x})\bigr] - \rho$
As a result, the gradient of the margin function with respect to each input dimension $\tilde{x}_k$ can be calculated as
(10)  $\dfrac{\partial f}{\partial \tilde{x}_k} = \sqrt{\tfrac{1}{D}} \sum_{i=1}^{D} \omega_{ik} \bigl[-w_{2i-1}\sin(\omega_i^\top \tilde{x}) + w_{2i}\cos(\omega_i^\top \tilde{x})\bigr]$
Next, we can derive the gradient of the latent-space nodes with respect to the deep autoencoder's input layer (the extension to a convolutional autoencoder is straightforward). In general, consider a neural network with input neurons $x_i$, $i = 1, \dots, d$, and a first hidden layer having neurons $u_j$, $j = 1, \dots, p$, as depicted in Figure 1 (Right). The gradient of $u_j$ with respect to $x_i$ can be derived as
(11)  $\dfrac{\partial u_j}{\partial x_i} = w_{ij}\, f'(z_j)$, with $z_j = \sum_{i=1}^{d} w_{ij} x_i + b_j$,
where $f$ is the activation function, and $w_{ij}$ and $b_j$ are the weight and bias connecting $x_i$ and $u_j$. The derivative of $f$ is different for each activation function. For instance, with a sigmoid activation $f(z) = 1/(1 + e^{-z})$, the gradient is computed as $f'(z) = f(z)(1 - f(z))$, while it is $1 - \tanh^2(z)$ for the $\tanh$ activation function.
To calculate the gradient of a neuron $v_k$ in the second hidden layer with respect to $x_i$, we simply apply the chain rule and sum rule as follows
(12)  $\dfrac{\partial v_k}{\partial x_i} = \sum_{j=1}^{p} \dfrac{\partial v_k}{\partial u_j} \dfrac{\partial u_j}{\partial x_i}$
The gradient $\partial v_k / \partial u_j$ can be obtained in a similar manner to (11). By maintaining the gradient values at each hidden layer, the gradient of any hidden or output layer with respect to the input layer can be calculated. Finally, combining this with (10), we can get the end-to-end gradient of the OCSVM margin with respect to all input features. Besides, state-of-the-art machine learning frameworks like TensorFlow also implement methods to obtain the gradient between any two variables without having to define any extra operations.
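The chain of equations (10)–(12) can be verified numerically. The sketch below is our own code and naming, with a single sigmoid encoder layer standing in for the full autoencoder and the cosine block stacked before the sine block (a permutation of the layout in (5)):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def margin(x, W1, b1, omega, w, rho):
    """f(x): sigmoid encoder layer -> RFF mapping -> OCSVM margin (eq. 9)."""
    u = sigmoid(W1 @ x + b1)
    proj = omega @ u
    z = np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(omega.shape[0])
    return z @ w - rho

def margin_grad(x, W1, b1, omega, w, rho):
    """End-to-end gradient of f w.r.t. the input x via eqs. (10)-(12)."""
    D = omega.shape[0]
    u = sigmoid(W1 @ x + b1)
    proj = omega @ u
    w_cos, w_sin = w[:D], w[D:]
    # eq. (10): gradient of the margin w.r.t. the bottleneck u
    df_du = omega.T @ ((-w_cos * np.sin(proj) + w_sin * np.cos(proj)) / np.sqrt(D))
    # eq. (11): gradient of the sigmoid layer, f'(z) = f(z)(1 - f(z))
    du_dx = (u * (1 - u))[:, None] * W1
    # eq. (12): chain rule back to the input
    return du_dx.T @ df_du
```

In a deeper network, the `du_dx` factor is simply repeated once per hidden layer, which is what automatic differentiation performs internally.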
Using the gradient, the decision making of the AD model can be interpreted as follows

For an outlying sample, the dimension with a higher absolute gradient has a higher contribution to the decision making of the ML model. In other words, the sample lies further from the boundary in that particular dimension.

For each such dimension, if the gradient is positive, the value of the feature in that dimension is below the lower limit of the boundary. In contrast, if the gradient holds a negative value, the feature exceeds the level of the normal class.
4.2 Illustrative example
Figure 2 presents an illustrative example of interpreting anomaly detection results using gradients. We generate 1950 four-dimensional samples as normal instances, where the first two features are uniformly generated such that they lie inside a circle. The third and fourth dimensions are drawn uniformly from a much narrower range, so that their contribution is significantly less than that of the other two dimensions. In contrast, 50 anomalies are created whose first two dimensions lie far from the mentioned circle, while the last two dimensions are drawn from a higher range. The whole dataset, including both the normal and anomalous classes, is trained with the proposed AE1SVM model with a bottleneck layer of size 2 and sigmoid activation.
The figure on the left shows the representation of the 4D dataset in a 2-dimensional space. Expectedly, it captures most of the variability from the first two dimensions. Furthermore, we plot the gradients of 9 different anomalous samples, with the two latter dimensions randomized, and overall the results confirm the aforementioned interpreting rules. It can easily be observed that the contribution of the third and fourth dimensions to the decision making of the model is always negligible. Among the first two dimensions, the ones having a value of 0.1 or 0.9 have gradients perceptibly higher in magnitude than those at 0.5, as they are further from the boundary and the sample can be considered "more anomalous" in that dimension. Besides, the gradient of an input at 0.1 is always positive, due to the fact that it is lower than the normal level. In contrast, the gradient of an input at 0.9 is consistently negative.
5 Experimental results
We present a qualitative empirical analysis justifying the effectiveness of the AE1SVM model in terms of accuracy and improved training/testing time. The objective is to compare the proposed model with conventional and state-of-the-art AD methods over synthetic and well-known real-world data. All code for reproducibility is available at https://github.com/minhnghia/AE1SVM.
5.1 Datasets
We conduct experiments on one generated dataset and five real-world datasets (we assume all tasks are unsupervised anomaly detection), as listed in Table 1. The description of each individual dataset is as follows:

Gaussian: This dataset is used to showcase the performance of the methods on high-dimensional and large data. The normal samples are drawn from a normal distribution with zero mean and a fixed standard deviation, while the anomalous instances are drawn with a different standard deviation. Theoretically, since the two groups have different distributional attributes, the AD model should be able to separate them.

ForestCover: From the ForestCover/Covertype dataset [11], class 2 is extracted as the normal class, and class 4 is chosen as the anomaly class.

Shuttle: From the Shuttle dataset [11], we select the normal samples from classes 2, 3, 5, 6, 7, while the outlier group is made of class 1.

KDDCup99: The popular KDDCup99 dataset [11] contains approximately 80% anomalous samples. Therefore, from the 10-percent subset, we randomly select 5120 samples from the outlier classes to form the anomaly set, such that the contamination ratio is 5%. The categorical features are extracted using one-hot encoding, resulting in 118 features in the raw input space.

USPS: From the U.S. Postal Service handwritten digits dataset [17], we select 950 samples from digit '1' as normal data and 50 samples from digit '7' as anomalous data, as the appearance of the two digits is similar. The size of each image is 16 × 16, resulting in each sample being a flattened vector of 256 features.

MNIST: From the MNIST dataset [20], 5842 samples of digit '4' are chosen as the normal class. On the other hand, the set of outliers contains 100 digits from classes '0', '7', and '9'. This task is challenging due to the fact that many digits '9' are remarkably similar to digit '4'. Each input sample is a flattened vector with 784 dimensions.
Table 1: Summary of the datasets.

| Dataset     | Dimensions | Normal instances | Anomaly rate (%) |
|-------------|------------|------------------|------------------|
| Gaussian    | 512        | 950              | 5.0              |
| ForestCover | 54         | 581012           | 0.9              |
| Shuttle     | 9          | 49097            | 7.2              |
| KDDCup99    | 118        | 97278            | 5.0              |
| USPS        | 256        | 950              | 5.0              |
| MNIST       | 784        | 5842             | 1.7              |
5.2 Baseline methods
Variants of OCSVM and several state-of-the-art methods are selected as baselines against which to compare the performance of the AE1SVM model. Different modifications of the conventional OCSVM are considered. First, we take into account the version where OCSVM is trained directly on the raw input. Additionally, to give a more impartial comparison, we consider a version where an autoencoding network exactly identical to that of the AE1SVM model first encodes the data, trained for the same number of epochs as AE1SVM, to investigate the ability of AE1SVM to force the dimensionality reduction network to learn a better representation of the data. The OCSVM is then trained on the encoded feature space; this variant is also similar to the approach given in [13]. We also attempt to use random Fourier features on the encoded feature space and apply a linear kernel to the approximated feature space, as another modification.
Besides, the following methods are also considered as baselines to examine the anomaly detection performance of the proposed model:

Isolation Forest [21]: This ensemble method is based on the idea that the anomalies in the data have lower frequencies and are different from the normal points.

Robust Deep Autoencoder (RDA) [38]: In this algorithm, a deep autoencoder is constructed and trained such that it can decompose the data into two components. The first component contains the latent space representation of the input, while the second comprises the noise and outliers that are difficult to reconstruct.

Deep Embedded Clustering (DEC) [34]: This algorithm combines an unsupervised autoencoding network with clustering. As outliers lie in sparser clusters or far from their centroids, we apply this method to anomaly detection and calculate the anomaly score of each sample as a product of its distance to the centroid and the density of the cluster it belongs to.
5.3 Evaluation metrics
In all experiments, the area under the receiver operating characteristic curve (AUROC) and the area under the Precision-Recall curve (AUPRC) are applied as metrics to evaluate and compare the performance of the anomaly detection methods. A high AUROC is necessary for a competent model, whereas AUPRC often highlights the differences between methods on imbalanced datasets [10]. The testing procedure follows the unsupervised setup, where each dataset is split with a 1:1 ratio, and the entire training set, including the anomalies, is used for training the model. The output of the models on the test set is measured against the ground truth using the mentioned scoring metrics, with the average scores and approximate training and testing times of each algorithm after 20 runs being reported.
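For reference, AUROC has a simple rank interpretation that can be computed directly; in practice a library routine such as scikit-learn's `roc_auc_score` would be used, and the function below is only our own illustrative sketch:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC of anomaly scores: the probability that a randomly chosen
    anomaly (label 1) receives a higher score than a randomly chosen
    normal sample (label 0), counting ties as 1/2."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]               # anomaly scores
    neg = scores[labels == 0]               # normal scores
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

A perfect detector yields 1.0 and a random one about 0.5; this threshold-free property is what makes AUROC suitable for comparing methods whose scores live on different scales.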
5.4 Model configurations
In all experiments, we employ the sigmoid activation function and implement the architecture using TensorFlow [1]. The initial weights of the autoencoding networks are generated according to Xavier's method [14]. The optimizing algorithm of choice is Adam [19]. We also find that, for the random Fourier features, a common standard deviation produces satisfactory results for all datasets. For the other parameters, the network configurations of AE1SVM for each individual dataset are given in Table 2 below.
Table 2: Network configurations of AE1SVM.

| Dataset     | Encoding layers | ν    | RFF dimension | Iterations | Batch size | Learning rate |
|-------------|-----------------|------|---------------|------------|------------|---------------|
| Gaussian    | {128, 32}       | 0.40 | 1000          | 500        | 32         | 0.01          |
| ForestCover | {32, 16}        | 0.30 | 1000          | 200        | 1024       | 0.01          |
| Shuttle     | {6, 2}          | 0.40 | 1000          | 50         | 16         | 0.001         |
| KDDCup99    | {80, 40, 20}    | 0.30 | 10000         | 400        | 128        | 0.001         |
| USPS        | {128, 64, 32}   | 0.28 | 1000          | 500        | 16         | 0.005         |
| MNIST       | {256, 128}      | 0.40 | 1000          | 1000       | 32         | 0.001         |
For the MNIST dataset, we additionally implement a convolutional autoencoder with pooling and unpooling layers: two convolution and pooling stages in the encoder (conv1, pool1, conv2, pool2), followed by a feed-forward layer compressing into 49 dimensions; the decoder mirrors this structure with a feed-forward layer, two deconvolution and unpooling stages (deconv1, unpool1, deconv2, unpool2), and a final feed-forward layer of 784 dimensions. The dropout rate is set to 0.5 in this convolutional autoencoder network.
For each baseline method, the best set of parameters is selected. In particular, for the different variants of OCSVM, the optimal value of the parameter $\nu$ is exhaustively searched. Likewise, for Isolation Forest, the contamination fraction is tuned around the anomaly rate of each dataset. For RDA and DEC, as well as the OCSVM variants that involve an autoencoding network for dimensionality reduction, autoencoder structures exactly identical to that of AE1SVM are used, while the regularisation hyperparameter in RDA is also adjusted, as it is the most important factor of the algorithm.
5.5 Results
Firstly, for the Gaussian dataset, the histograms of the decision scores obtained by different methods are presented in Figure 3. It can clearly be seen that AE1SVM is able to single out all anomalous samples, while giving the best separation between the two classes.
Table 3: Average AUROC and AUPRC of all methods on the real-world datasets.

| Dataset     | Method                  | AUROC  | AUPRC  | Train | Test |
|-------------|-------------------------|--------|--------|-------|------|
| ForestCover | OCSVM raw input         | 0.9295 | 0.0553 |       |      |
|             | OCSVM encoded           | 0.7895 | 0.0689 |       |      |
|             | OCSVM encoded + RFF     | 0.8034 | 0.0501 |       |      |
|             | Isolation Forest        | 0.9396 | 0.0705 |       |      |
|             | RDA                     | 0.8683 | 0.0353 |       |      |
|             | DEC                     | 0.9181 | 0.0421 |       |      |
|             | AE1SVM                  | 0.9485 | 0.1976 |       |      |
| Shuttle     | OCSVM raw input         | 0.9338 | 0.4383 |       |      |
|             | OCSVM encoded           | 0.8501 | 0.4151 |       |      |
|             | OCSVM encoded + RFF     | 0.8472 | 0.5301 |       |      |
|             | Isolation Forest        | 0.9816 | 0.7694 |       |      |
|             | RDA                     | 0.8306 | 0.1872 |       |      |
|             | DEC                     | 0.9010 | 0.3184 |       |      |
|             | AE1SVM                  | 0.9747 | 0.9483 |       |      |
| KDDCup      | OCSVM raw input         | 0.8881 | 0.3400 |       |      |
|             | OCSVM encoded           | 0.9518 | 0.3876 |       |      |
|             | OCSVM encoded + RFF     | 0.9121 | 0.3560 |       |      |
|             | Isolation Forest        | 0.9572 | 0.4148 |       |      |
|             | RDA                     | 0.6320 | 0.4347 |       |      |
|             | DEC                     | 0.9496 | 0.3688 |       |      |
|             | AE1SVM                  | 0.9663 | 0.5115 |       |      |
|             | AE1SVM (Full dataset)   | 0.9701 | 0.4793 |       |      |
| USPS        | OCSVM raw input         | 0.9747 | 0.5102 |       |      |
|             | OCSVM encoded           | 0.9536 | 0.4722 |       |      |
|             | OCSVM encoded + RFF     | 0.9578 | 0.5140 |       |      |
|             | Isolation Forest        | 0.9863 | 0.6250 |       |      |
|             | RDA                     | 0.9799 | 0.5681 |       |      |
|             | DEC                     | 0.9263 | 0.7506 |       |      |
|             | AE1SVM                  | 0.9926 | 0.8024 |       |      |
| MNIST       | OCSVM raw input         | 0.8302 | 0.0819 |       |      |
|             | OCSVM encoded           | 0.7956 | 0.0584 |       |      |
|             | OCSVM encoded + RFF     | 0.7941 | 0.0819 |       |      |
|             | Isolation Forest        | 0.7574 | 0.0533 |       |      |
|             | RDA                     | 0.8464 | 0.0855 |       |      |
|             | DEC                     | 0.5522 | 0.0289 |       |      |
|             | AE1SVM                  | 0.8119 | 0.0864 |       |      |
|             | CAE1SVM                 | 0.8564 | 0.0885 |       |      |
For the other datasets, the comprehensive results are given in Table 3. It is obvious that AE1SVM outperforms the conventional OCSVM in terms of accuracy in all scenarios and is always among the top performers. For ForestCover, only the AUROC score of Isolation Forest is close, but its AUPRC is significantly lower, three times less than that of AE1SVM, suggesting that it has to compensate with a higher false alarm rate to identify anomalies correctly. Similarly, Isolation Forest slightly surpasses AE1SVM in AUROC on the Shuttle dataset, but is subpar in terms of AUPRC, and thus can be considered the less optimal choice. Analogous patterns can also be noticed for the other datasets. Notably, for MNIST, it is shown that the proposed method AE1SVM can also operate with a convolutional autoencoder network in image processing contexts.
Regarding training time, AE1SVM outperforms other methods on ForestCover, which is the largest dataset. For the KDDCup99 and Shuttle datasets, it is still one of the fastest candidates. Furthermore, we also extend the KDDCup99 experiment and train the AE1SVM model on the full dataset, acquiring promising results in only 200 seconds. This verifies the effectiveness and potential application of the model in big-data circumstances. On top of that, the testing time of AE1SVM is a notable improvement over other methods, especially Isolation Forest and conventional OCSVM, suggesting its feasibility in real-time environments.
5.6 Gradient-based explanation in image datasets
We also investigate the use of gradient-based explanation methods on the image datasets. Figure 4 and Figure 5 illustrate the unsigned gradient maps of several anomalous digits in the USPS and MNIST datasets, respectively. The MNIST results are given by the version with the convolutional autoencoder.
Interesting patterns supporting the correctness of the gradient-based explanation approach can be observed in Figure 4. The positive gradient maps revolve around the middle part of the images, where the pixels in the normal class of digit '1' are normally bright (higher values), indicating that the absence of those pixels contributes significantly to the reasoning that the '7' samples are detected as outliers. Likewise, the negative gradient maps are more intense on the pixels matching the bright pixels outside the center area of the corresponding image, meaning that the values of those pixels in the original image exceed the range of the normal class, which is around the zero (black) level. A similar perception can be acquired from Figure 5, as it shows the difference between each sample of digits '0', '7', and '9' and digit '4'.
6 Conclusion
In this paper, we propose the end-to-end autoencoding One-class Support Vector Machine (AE1SVM) model, comprising a deep autoencoder for dimensionality reduction and a variant structure of OCSVM using random Fourier features for anomaly detection. The model is jointly trained using SGD with a combined loss function, both to lessen the complexity of solving support vector problems and to force dimensionality reduction to learn a better representation that is beneficial for the anomaly detection task. We also investigate the application of gradient-based explanation methods to interpret the decision making of the proposed model, which is not feasible for most other anomaly detection algorithms. Extensive experiments have been conducted to verify the strengths of our approach. The results have demonstrated that AE1SVM can be effective in detecting anomalies while significantly enhancing both training and response time for high-dimensional and large-scale data. Empirical evidence of interpreting the predictions of AE1SVM using gradient-based methods has also been presented using illustrative examples and handwritten image datasets.
References
 [1] Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., Zheng, X.: TensorFlow: Largescale machine learning on heterogeneous systems (2015). URL https://www.tensorflow.org/. Software available from tensorflow.org
 [2] Ancona, M., Ceolini, E., Öztireli, C., Gross, M.: Towards better understanding of gradient-based attribution methods for deep neural networks. In: International Conference on Learning Representations (2018). URL https://openreview.net/forum?id=Sy21R9JAW
 [3] Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K., Müller, K.R.: How to explain individual classification decisions. J. Mach. Learn. Res. 11, 1803–1831 (2010)
 [4] Barnett, V., Lewis, T.: Outliers in statistical data. Wiley (1974)
 [5] Bengio, Y., Lecun, Y.: Scaling learning algorithms towards AI. MIT Press (2007)
 [6] Candès, E.J., Li, X., Ma, Y., Wright, J.: Robust principal component analysis? Journal of the ACM (JACM) 58(3), 11 (2011)
 [7] Chalapathy, R., Menon, A.K., Chawla, S.: Robust, deep and inductive anomaly detection. In: Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2017, Skopje, Macedonia, September 18-22, 2017, Proceedings, Part I, pp. 36–51 (2017)
 [8] Chandola, V., Banerjee, A., Kumar, V.: Anomaly detection: A survey. ACM computing surveys (CSUR) 41(3), 15 (2009)
 [9] Chang, C.C., Lin, C.J.: LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2(3), 27:1–27:27 (2011)
 [10] Davis, J., Goadrich, M.: The Relationship Between Precision-Recall and ROC Curves. In: Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pp. 233–240. ACM, New York, NY, USA (2006)
 [11] Dheeru, D., Karra Taniskidou, E.: UCI machine learning repository (2017). URL http://archive.ics.uci.edu/ml
 [12] Erfani, S.M., Baktashmotlagh, M., Rajasegarar, S., Karunasekera, S., Leckie, C.: R1SVM: A randomised nonlinear approach to large-scale anomaly detection. In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI'15, pp. 432–438. AAAI Press (2015)
 [13] Erfani, S.M., Rajasegarar, S., Karunasekera, S., Leckie, C.: High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning. Pattern Recogn. 58(C), 121–134 (2016)
 [14] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, vol. 9, pp. 249–256. PMLR, Chia Laguna Resort, Sardinia, Italy (2010)
 [15] Grubbs, F.E.: Procedures for detecting outlying observations in samples. Technometrics 11(1), 1–21 (1969)
 [16] Hotelling, H.: Analysis of a complex of statistical variables into principal components. Journal of educational psychology 24(6), 417 (1933)
 [17] Hull, J.J.: A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence 16(5), 550–554 (1994)
 [18] Kim, J., Scott, C.D.: Robust kernel density estimation. Journal of Machine Learning Research 13(Sep), 2529–2565 (2012)
 [19] Kingma, D.P., Ba, J.: Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980 (2014)
 [20] LeCun, Y., Cortes, C.: MNIST handwritten digit database (2010). URL http://yann.lecun.com/exdb/mnist/
 [21] Liu, F.T., Ting, K.M., Zhou, Z.H.: Isolation forest. In: 2008 Eighth IEEE International Conference on Data Mining, pp. 413–422 (2008)
 [22] Montavon, G., Bach, S., Binder, A., Samek, W., Müller, K.: Explaining nonlinear classification decisions with deep taylor decomposition. CoRR abs/1512.02479 (2015)
 [23] Rahimi, A., Recht, B.: Random features for largescale kernel machines. In: Advances in Neural Information Processing Systems 20, pp. 1177–1184. Curran Associates, Inc. (2008)
 [24] Schlegl, T., Seeböck, P., Waldstein, S.M., SchmidtErfurth, U., Langs, G.: Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In: International Conference on Information Processing in Medical Imaging, pp. 146–157. Springer (2017)
 [25] Schölkopf, B., Williamson, R., Smola, A., Shawe-Taylor, J., Platt, J.: Support vector method for novelty detection. In: Proceedings of the 12th International Conference on Neural Information Processing Systems, NIPS'99, pp. 582–588. MIT Press, Cambridge, MA, USA (1999)
 [26] Shalev-Shwartz, S., Singer, Y., Srebro, N., Cotter, A.: Pegasos: Primal estimated sub-gradient solver for SVM. Math. Program. 127(1), 3–30 (2011)
 [27] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. CoRR abs/1704.02685 (2017)
 [28] Shrikumar, A., Greenside, P., Shcherbina, A., Kundaje, A.: Not just a black box: Learning important features through propagating activation differences. CoRR abs/1605.01713 (2016)
 [29] Simonyan, K., Vedaldi, A., Zisserman, A.: Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. ArXiv eprints abs/1506.02785 (2013)
 [30] Sundararajan, M., Taly, A., Yan, Q.: Axiomatic attribution for deep networks. CoRR abs/1703.01365 (2017)
 [31] Sutherland, D.J., Schneider, J.G.: On the error of random Fourier features. CoRR abs/1506.02785 (2015)
 [32] Tax, D.M., Duin, R.P.: Support vector data description. Machine learning 54(1), 45–66 (2004)
 [33] Williams, C.K.I., Seeger, M.: Using the Nyström method to speed up kernel machines. In: Proceedings of the 13th International Conference on Neural Information Processing Systems, NIPS’00, pp. 661–667. MIT Press, Cambridge, MA, USA (2000)
 [34] Xie, J., Girshick, R., Farhadi, A.: Unsupervised deep embedding for clustering analysis. In: Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML'16, pp. 478–487. JMLR.org (2016). URL http://dl.acm.org/citation.cfm?id=3045390.3045442
 [35] Xiong, L., Póczos, B., Schneider, J.G.: Group anomaly detection using flexible genre models. In: Advances in neural information processing systems, pp. 1071–1079 (2011)
 [36] Zenati, H., Foo, C.S., Lecouat, B., Manek, G., Chandrasekhar, V.R.: Efficient ganbased anomaly detection. arXiv preprint arXiv:1802.06222 (2018). ICLR 2018 workshop
 [37] Zhai, S., Cheng, Y., Lu, W., Zhang, Z.: Deep structured energy based models for anomaly detection. In: Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 1924, 2016, pp. 1100–1109 (2016)
 [38] Zhou, C., Paffenroth, R.C.: Anomaly detection with robust deep autoencoders. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17, pp. 665–674. ACM, New York, NY, USA (2017)
 [39] Zimek, A., Schubert, E., Kriegel, H.P.: A survey on unsupervised outlier detection in high-dimensional numerical data. Statistical Analysis and Data Mining: The ASA Data Science Journal 5(5), 363–387 (2012)
 [40] Zong, B., Song, Q., Min, M.R., Cheng, W., Lumezanu, C., Cho, D., Chen, H.: Deep autoencoding gaussian mixture model for unsupervised anomaly detection. In: International Conference on Learning Representations (2018). URL https://openreview.net/forum?id=BJJLHbb0