Unsupervised and Semi-supervised Anomaly Detection with LSTM Neural Networks


Abstract

We investigate anomaly detection in an unsupervised framework and introduce Long Short Term Memory (LSTM) neural network based algorithms. In particular, given variable length data sequences, we first pass these sequences through our LSTM based structure and obtain fixed length sequences. We then find a decision function for our anomaly detectors based on the One Class Support Vector Machines (OC-SVM) and Support Vector Data Description (SVDD) algorithms. For the first time in the literature, we jointly train and optimize the parameters of the LSTM architecture and the OC-SVM (or SVDD) algorithm using highly effective gradient and quadratic programming based training methods. To apply the gradient based training method, we modify the original objective criteria of the OC-SVM and SVDD algorithms, where we prove the convergence of the modified objective criteria to the original criteria. We also provide extensions of our unsupervised formulation to the semi-supervised and fully supervised frameworks. Thus, we obtain anomaly detection algorithms that can process variable length data sequences while providing high performance, especially for time series data. Our approach is generic, so we also apply it to the Gated Recurrent Unit (GRU) architecture by directly replacing our LSTM based structure with the GRU based structure. In our experiments, we illustrate significant performance gains achieved by our algorithms with respect to the conventional methods.

1 Introduction

1.1 Preliminaries

Anomaly detection [?] has attracted significant interest in the contemporary learning literature due to its applications in a wide range of engineering problems, e.g., sensor failure [?], network monitoring [?], cybersecurity [?] and surveillance [?]. In this paper, we study the variable length anomaly detection problem in an unsupervised framework, where we seek to find a function to decide whether each unlabeled variable length sequence in a given dataset is anomalous or not. Note that although this problem is extensively studied in the literature and there exist different methods, e.g., supervised (or semi-supervised) methods, that require the knowledge of data labels, we employ an unsupervised method due to the high cost of obtaining accurate labels in most real life applications [?] such as in cybersecurity [?] and surveillance [?]. However, we also extend our derivations to the semi-supervised and fully supervised frameworks for completeness.

In the current literature, a common and widely used approach for anomaly detection is to find a decision function that defines the model of normality [?]. In this approach, one first defines a certain decision function and then optimizes the parameters of this function with respect to a predefined objective criterion, e.g., the One Class Support Vector Machines (OC-SVM) and Support Vector Data Description (SVDD) algorithms [?]. However, algorithms based on this approach must examine time series data over a sufficiently long time window to achieve an acceptable performance [?]. Thus, their performance significantly depends on the length of this time window, so this approach requires careful selection of the window length to provide satisfactory performance [?]. To enhance performance for time series data, approaches based on neural networks, especially Recurrent Neural Networks (RNNs), have been introduced thanks to their inherent memory structure that can store “time” or “state” information [?]. However, since the basic RNN architecture does not have control structures (gates) to regulate the amount of information to be stored [?], a more advanced RNN architecture with several control structures, i.e., the Long Short Term Memory (LSTM) network, was introduced [?]. Nevertheless, neural network based approaches do not directly optimize an objective criterion for anomaly detection [?]. Instead, they first predict a sequence from its past samples and then determine whether the sequence is an anomaly or not based on the prediction error, i.e., an anomaly is an event that cannot be predicted from the past nominal data [?]. Thus, they require a probabilistic model for the prediction error and a threshold on the probabilistic model to detect anomalies, which results in challenging optimization problems and restricts their performance accordingly [?]. Furthermore, both the common and the neural network based approaches can process only fixed length vector sequences, which significantly limits their usage in real life applications [?].

In order to circumvent these issues, we introduce novel LSTM based anomaly detection algorithms for variable length data sequences. In particular, we first pass variable length data sequences through an LSTM based structure to obtain fixed length representations. We then apply our OC-SVM [?] and SVDD [?] based algorithms for detecting anomalies in the extracted fixed length vectors, as illustrated in Figure 1. Unlike the previous approaches in the literature [?], we jointly train the parameters of the LSTM architecture and the OC-SVM (or SVDD) formulation to maximize the detection performance. For this joint optimization, we propose two different training methods, i.e., a quadratic programming based algorithm and a gradient based algorithm, where the merits of each approach are detailed in the paper. For our gradient based training method, we modify the original OC-SVM and SVDD formulations and then provide the convergence results of the modified formulations to the original ones. Thus, instead of following the prediction based approaches [?] in the current literature, we define proper objective functions for anomaly detection using the LSTM architecture and optimize the parameters of the LSTM architecture via these well defined objective functions. Hence, our anomaly detection algorithms are able to process variable length sequences and provide high performance for time series data. Furthermore, since we introduce a generic approach that can be applied to any RNN architecture, we also apply it to the Gated Recurrent Unit (GRU) architecture [?], i.e., an advanced RNN architecture like the LSTM architecture, in our simulations. Through an extensive set of experiments, we demonstrate significant performance gains with respect to the conventional methods [?].

Figure 1: Overall structure of our anomaly detection approach.

1.2 Prior Art and Comparisons

Several different methods have been introduced for the anomaly detection problem [?]. Among these methods, the OC-SVM [?] and SVDD [?] algorithms are generally employed due to their high performance in real life applications [?]. However, these algorithms provide inadequate performance for time series data due to their inability to capture time dependencies [?]. In order to improve the performance of these algorithms for time series data, in [?], the authors convert time series data into a set of vectors by replicating each sample so that they obtain two dimensional vector sequences. However, even though they obtain two dimensional vector sequences, the second dimension does not provide additional information, so this approach still gives inadequate performance for time series data [?]. As another approach, the OC-SVM based method in [?] acquires a set of vectors from time series data by unfolding the data into a phase space using a time delay embedding process [?]. More specifically, for a certain sample, they create a fixed dimensional vector by using the previous samples along with the sample itself [?]. However, in order to obtain a satisfactory performance from this approach, the embedding dimensionality should be carefully tuned, which restricts its usage in real life applications [?]. On the other hand, even though LSTM based algorithms provide high performance for time series data, one has to solve highly complex optimization problems to get an adequate performance [?]. As an example, the LSTM based anomaly detection algorithms in [?] first predict time series data and then fit a multivariate Gaussian distribution to the error, where they also select a threshold for this distribution. Here, they allocate a different set of sequences to learn the parameters of the distribution and the threshold via the maximum likelihood estimation technique [?]. Thus, the conventional LSTM based approaches require careful selection of several additional parameters, which significantly degrades their performance in real life [?]. Furthermore, both the OC-SVM (or SVDD) and LSTM based methods are able to process only fixed length sequences [?]. To circumvent these issues, we introduce generic LSTM based anomaly detectors for variable length data sequences, where we jointly train the parameters of the LSTM architecture and the OC-SVM (or SVDD) formulation via a predefined objective function. Therefore, we not only obtain high performance for time series data but also enjoy joint and effective optimization of the parameters with respect to a well defined objective function.

1.3 Contributions

Our main contributions are as follows:

  • We introduce LSTM based anomaly detection algorithms in an unsupervised framework, where we also extend our derivations to the semi-supervised and fully supervised frameworks.

  • For the first time in the literature, we jointly train the parameters of the LSTM architecture and the OC-SVM (or SVDD) formulation via a well defined objective function, where we introduce two different joint optimization methods. For our gradient based joint optimization method, we modify the OC-SVM and SVDD formulations and then prove the convergence of the modified formulations to the original ones.

  • Thanks to our LSTM based structure, the introduced methods are able to process variable length data sequences. Additionally, unlike the conventional methods [?], our methods effectively detect anomalies in time series data without requiring any preprocessing.

  • Through an extensive set of experiments involving real and simulated data, we illustrate significant performance improvements achieved by our algorithms with respect to the conventional methods [?]. Moreover, since our approach is generic, we also apply it to the recently proposed GRU architecture [?] in our experiments.

1.4 Organization of the Paper

The organization of this paper is as follows. In Section 2, we first describe the variable length anomaly detection problem and then introduce our LSTM based structure. In Section 3.1, we introduce anomaly detection algorithms based on the OC-SVM formulation, where we also propose two different joint training methods in order to learn the LSTM and SVM parameters. The merits of each different approach are also detailed in the same section. In a similar manner, we introduce anomaly detection algorithms based on the SVDD formulation and provide two different joint training methods to learn the parameters in Section 3.2. In Section 4, we demonstrate performance improvements over several real life datasets. In the same section, thanks to our generic approach, we also introduce GRU based anomaly detection algorithms. Finally, we provide concluding remarks in Section 5.

2 Model and Problem Description

In this paper, all vectors are column vectors and denoted by boldface lower case letters. Matrices are represented by boldface uppercase letters. For a vector $\mathbf{a}$, $\mathbf{a}^T$ is its ordinary transpose and $\|\mathbf{a}\| = \sqrt{\mathbf{a}^T\mathbf{a}}$ is the $\ell_2$-norm. The time index is given as a subscript, e.g., $\mathbf{a}_t$ is the $t$th vector. Here, $\mathbf{1}$ (and $\mathbf{0}$) is a vector of all ones (and zeros) and $\mathbf{I}$ represents the identity matrix, where the sizes are understood from the context.

We observe data sequences $\mathbf{X}_i$, defined as

where each column of $\mathbf{X}_i$ is a vector of fixed dimension and the number of columns can vary with respect to $i$. Here, we assume that the bulk of the observed sequences are normal and the remaining sequences are anomalous. Our aim is to find a scoring (or decision) function to determine whether $\mathbf{X}_i$ is anomalous or not based on the observed data, where $+1$ and $-1$ represent the outputs of the desired scoring function for nominal and anomalous data, respectively. As an example application for this framework, in host based intrusion detection [?], the system handles operating system call traces, where the data consists of system calls that are generated by users or programs. All traces contain system calls that belong to the same alphabet; however, the co-occurrence of the system calls is the key issue in detecting anomalies [?]. For different programs, these system calls are executed in different sequences, where the length of the sequence may vary for each program. A sample set of call sequences can then be binary encoded into matrices $\mathbf{X}_1$, $\mathbf{X}_2$ and $\mathbf{X}_3$ for this case [?]. After observing such a set of call sequences, our aim is to find a scoring function that successfully distinguishes the anomalous call sequences from the normal sequences.
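To make the data format concrete, the following minimal sketch builds such binary (one-hot) encodings for a few hypothetical call traces of different lengths; the alphabet and the traces are illustrative placeholders, not taken from an actual intrusion detection dataset.

```python
import numpy as np

# Hypothetical system-call alphabet (for illustration only).
alphabet = ["open", "read", "write", "close"]
call_to_index = {call: k for k, call in enumerate(alphabet)}

def one_hot_encode(trace):
    """Binary encoding: each column is the one-hot vector of one system call,
    so a trace of length d_i becomes a (len(alphabet) x d_i) matrix X_i."""
    X = np.zeros((len(alphabet), len(trace)))
    for j, call in enumerate(trace):
        X[call_to_index[call], j] = 1.0
    return X

# Three traces of different lengths -> matrices with different column counts.
traces = [["open", "read", "close"],
          ["open", "read", "read", "write", "close"],
          ["open", "close"]]
dataset = [one_hot_encode(t) for t in traces]
print([X.shape for X in dataset])  # [(4, 3), (4, 5), (4, 2)]
```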

In order to find such a scoring function, one can use the OC-SVM algorithm [?] to find a hyperplane that separates the anomalies from the normal data or the SVDD algorithm [?] to find a hypersphere enclosing the normal data while leaving the anomalies outside the hypersphere. However, these algorithms can only process fixed length sequences. Hence, we use the LSTM architecture [?] to obtain a fixed length vector representation for each $\mathbf{X}_i$. Although there exist several different versions of the LSTM architecture, we use the most widely employed one, i.e., the LSTM architecture without peephole connections [?]. We first feed $\mathbf{X}_i$ to the LSTM architecture as demonstrated in Figure 2, where the internal LSTM equations are as follows [?]:

where $\mathbf{c}_j$ is the state vector, $\mathbf{x}_j$ is the input vector and $\mathbf{h}_j$ is the output vector for the $j$th LSTM unit in Figure 2. Additionally, $\mathbf{s}_j$, $\mathbf{f}_j$ and $\mathbf{o}_j$ are the input, forget and output gates, respectively. Here, the state and output activation is set to the hyperbolic tangent function and applies to input vectors pointwise; similarly, the gate activation is set to the sigmoid function. $\odot$ denotes the elementwise multiplication of two same sized vectors. Furthermore, $\mathbf{W}$, $\mathbf{R}$ and $\mathbf{b}$ are the parameters of the LSTM architecture, where the size of each is selected according to the dimensionality of the input and output vectors. After applying the LSTM architecture to each column of our data sequences as illustrated in Figure 2, we take the average of the LSTM outputs for each data sequence, i.e., the mean pooling method. By this, we obtain a new set of fixed length representations, denoted as $\bar{\mathbf{h}}_i$ for the sequence $\mathbf{X}_i$. Note that we also use the same procedure to obtain the averaged state information $\bar{\mathbf{c}}_i$ for each $\mathbf{X}_i$ as demonstrated in Figure 2.
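As a concrete illustration of this step, the sketch below uses a stock PyTorch LSTM rather than the exact gated equations above and omits the parameter constraints discussed later; it maps each variable length sequence to a fixed length vector by mean pooling the LSTM outputs over time.

```python
import torch
import torch.nn as nn

class MeanPooledLSTM(nn.Module):
    """Maps a variable length sequence X_i to a fixed length representation
    by averaging the LSTM outputs over time (mean pooling)."""
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)

    def forward(self, x):
        # x: (1, d_i, input_dim); d_i may differ from sequence to sequence
        outputs, (h_last, c_last) = self.lstm(x)   # outputs: (1, d_i, hidden_dim)
        return outputs.mean(dim=1)                 # mean pooling -> (1, hidden_dim)

encoder = MeanPooledLSTM(input_dim=4, hidden_dim=8)
for d_i in (3, 5, 2):                              # variable sequence lengths
    x = torch.randn(1, d_i, 4)
    h_bar = encoder(x)
    print(h_bar.shape)                             # torch.Size([1, 8]) in all cases
```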

Figure 2: Our LSTM based structure for obtaining fixed length sequences.

3 Novel Anomaly Detection Algorithms

In this section, we first formulate the anomaly detection approaches based on the OC-SVM and SVDD algorithms. We then provide joint optimization updates to train the parameters of the overall structure.

3.1 Anomaly Detection with the OC-SVM Algorithm

In this subsection, we provide an anomaly detection algorithm based on the OC-SVM formulation and derive the joint updates for both the LSTM and SVM parameters. For the training, we first provide a quadratic programming based algorithm and then introduce a gradient based training algorithm. To apply the gradient based training method, we smoothly approximate the original OC-SVM formulation and then prove the convergence of the approximated formulation to the actual one in the following subsections.

In the OC-SVM algorithm, our aim is to find a hyperplane that separates the anomalies from the normal data [?]. We formulate the OC-SVM optimization problem for the sequences $\bar{\mathbf{h}}_i$ as follows [?]

where $\rho$ and $\mathbf{w}$ are the parameters of the separating hyperplane, $\lambda$ is a regularization parameter, $\xi_i$ is a slack variable to penalize misclassified instances and we group the LSTM parameters, i.e., the weight matrices and bias vectors $\mathbf{W}$, $\mathbf{R}$ and $\mathbf{b}$ of all gates, into a single parameter vector $\boldsymbol{\theta}$. Since the LSTM parameters are unknown and $\bar{\mathbf{h}}_i$ is a function of these parameters, we also minimize the cost function with respect to $\boldsymbol{\theta}$.
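For concreteness, a representative form of this joint problem, written with the pooled LSTM outputs $\bar{\mathbf{h}}_i$ as features, follows the standard soft-margin OC-SVM objective; the exact weighting of the slack terms and the precise form of the constraint on $\boldsymbol{\theta}$ are assumptions here and may differ in detail:

\[
\min_{\mathbf{w},\,\rho,\,\boldsymbol{\theta}} \;\; \frac{1}{2}\|\mathbf{w}\|^2 + \frac{1}{n\lambda}\sum_{i=1}^{n}\xi_i - \rho
\quad \text{s.t.} \quad \mathbf{w}^T\bar{\mathbf{h}}_i \ge \rho - \xi_i,\;\; \xi_i \ge 0,\;\; i = 1,\dots,n,
\]

together with the norm (orthogonality) constraint on the LSTM parameter vector $\boldsymbol{\theta}$ discussed below.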

After solving this optimization problem, we use the scoring function $\ell(\mathbf{X}_i) = \operatorname{sgn}(\mathbf{w}^T\bar{\mathbf{h}}_i - \rho)$ to detect the anomalous data, where $\operatorname{sgn}(\cdot)$ returns the sign of its input.

We emphasize that while minimizing the objective with respect to $\boldsymbol{\theta}$, we might suffer from overfitting and impotent learning of the time dependencies in the data [?], i.e., the parameters may be forced to null values, e.g., $\boldsymbol{\theta} = \mathbf{0}$. To circumvent these issues, we introduce a constraint on the norm of $\boldsymbol{\theta}$, which avoids overfitting and trivial solutions, e.g., $\boldsymbol{\theta} = \mathbf{0}$, while boosting the ability of the LSTM architecture to capture time dependencies [?].

Quadratic Programming Based Training Algorithm

Here, we introduce a training approach based on quadratic programming for this constrained optimization problem, where we perform consecutive updates of the LSTM and SVM parameters. For this purpose, we first convert the optimization problem to a dual form in the following. We then provide the consecutive updates for each parameter.

We have the following Lagrangian for the SVM parameters

where $\alpha_i \ge 0$ and $\eta_i \ge 0$ are the Lagrange multipliers for the two sets of constraints. Taking the derivative of the Lagrangian with respect to $\mathbf{w}$, $\xi_i$ and $\rho$ and then setting the derivatives to zero gives

Note that at the optimum, the inequality constraints become equalities if the corresponding Lagrange multipliers are nonzero [?]. With this relation, we compute $\rho$ as

By substituting these relations into the Lagrangian, we obtain the following dual problem for the constrained minimization

where $\boldsymbol{\alpha}$ is the vector of the multipliers $\alpha_i$. Since the LSTM parameters are unknown, we also include the minimization over $\boldsymbol{\theta}$ in the dual problem. By substituting the expression for $\mathbf{w}$, we have the following scoring function for the dual problem

where we calculate $\rho$ as described above.

In order to find the optimal $\boldsymbol{\alpha}$ and $\boldsymbol{\theta}$ for this optimization problem, we employ the following procedure. We first select a certain set of the LSTM parameters $\boldsymbol{\theta}$. Based on these parameters, we find the minimizing multipliers $\boldsymbol{\alpha}$ using the Sequential Minimal Optimization (SMO) algorithm [?]. We then fix $\boldsymbol{\alpha}$ and update $\boldsymbol{\theta}$ using the algorithm for optimization with orthogonality constraints in [?]. We repeat these consecutive update procedures until $\boldsymbol{\alpha}$ and $\boldsymbol{\theta}$ converge [?]. Then, we use the converged values in order to evaluate the scoring function. In the following, we explain the update procedures for $\boldsymbol{\alpha}$ and $\boldsymbol{\theta}$ in detail, after a brief high level sketch.
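Before detailing the updates, the following sketch illustrates the alternating scheme at a high level. It is a simplified stand-in rather than the algorithm itself: the inner quadratic program is delegated to scikit-learn's OneClassSVM with a linear kernel instead of our own SMO solver, and the LSTM parameters are updated by a plain SGD step on the resulting primal slack loss rather than by the orthogonality constrained update.

```python
import torch
from sklearn.svm import OneClassSVM

def train_alternating(encoder, sequences, nu=0.1, outer_iters=20, lr=1e-2):
    """sequences: list of (1, d_i, input_dim) tensors; encoder: any module
    mapping such a tensor to a (1, hidden_dim) pooled representation
    (e.g., the MeanPooledLSTM sketch given earlier)."""
    opt = torch.optim.SGD(encoder.parameters(), lr=lr)
    for _ in range(outer_iters):
        # Step 1: fix the LSTM parameters, solve the one-class SVM on pooled features.
        with torch.no_grad():
            feats = torch.cat([encoder(x) for x in sequences]).numpy()
        svm = OneClassSVM(kernel="linear", nu=nu).fit(feats)
        w = torch.tensor(svm.coef_[0], dtype=torch.float32)
        rho = torch.tensor(-svm.intercept_[0], dtype=torch.float32)

        # Step 2: fix (w, rho), take one gradient step on the LSTM parameters
        # to reduce the total slack sum_i max(0, rho - w^T h_bar_i).
        opt.zero_grad()
        h_bar = torch.cat([encoder(x) for x in sequences])      # (n, hidden_dim)
        slack = torch.clamp(rho - h_bar @ w, min=0.0).mean()
        slack.backward()
        opt.step()
    return svm, encoder
```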

Based on the LSTM parameter vector at the current iteration, we update $\boldsymbol{\alpha}$ using the SMO algorithm due to its efficiency in solving constrained quadratic optimization problems [?]. In the SMO algorithm, we choose a subset of the parameters to minimize over and fix the rest of the parameters. In the extreme case, we would choose only one parameter to minimize; however, due to the equality constraint on $\boldsymbol{\alpha}$, we must choose at least two parameters. To illustrate how the SMO algorithm works in our case, we choose two multipliers, $\alpha_k$ and $\alpha_l$, to update and fix the rest of the parameters. We then have

We first use the equality constraint to express $\alpha_k$ in terms of $\alpha_l$. We then take the derivative of the dual objective with respect to $\alpha_l$ and equate the derivative to zero. Thus, we obtain the following closed-form update for $\alpha_l$

where the update involves the inner products $\bar{\mathbf{h}}_i^T\bar{\mathbf{h}}_j$ of the pooled LSTM outputs. If the updated value of $\alpha_l$ falls outside the feasible region defined by the box constraint, we project it back onto this region. Once $\alpha_l$ is updated, we obtain $\alpha_k$ from the equality constraint. For the rest of the parameters, we repeat the same procedure, which eventually converges to a certain set of parameters [?]. In this way, we obtain the minimizing $\boldsymbol{\alpha}$ for the current LSTM parameters.
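As a reference point, consider the standard OC-SVM dual $\min_{\boldsymbol{\alpha}} \frac{1}{2}\boldsymbol{\alpha}^T \mathbf{K}\boldsymbol{\alpha}$ with kernel entries $K_{ij} = \bar{\mathbf{h}}_i^T\bar{\mathbf{h}}_j$, box constraints on each $\alpha_i$ and the equality constraint $\sum_i \alpha_i = 1$; our exact dual may differ by constants, but the two-variable step has the same structure. Writing $\alpha_k = \Delta - \alpha_l$ with $\Delta$ the fixed sum of the two chosen multipliers and setting the derivative to zero gives

\[
\alpha_l \leftarrow \frac{\Delta\,(K_{kk} - K_{kl}) + \sum_{j \neq k,l}\,(K_{kj} - K_{lj})\,\alpha_j}{K_{kk} - 2K_{kl} + K_{ll}},
\]

followed by clipping to the box constraints and setting $\alpha_k = \Delta - \alpha_l$.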

Following the update of $\boldsymbol{\alpha}$, we update $\boldsymbol{\theta}$ based on the updated vector. For this purpose, we employ the optimization method in [?]. Since the updated $\boldsymbol{\alpha}$ satisfies the dual constraints, we reduce the dual problem to

Based on this reduced problem and the constraint on $\boldsymbol{\theta}$, we update $\mathbf{W}$ as follows

where the subscripts represent the current iteration index, $\mu$ is the learning rate, $\mathbf{A} \triangleq \mathbf{G}\mathbf{W}^T - \mathbf{W}\mathbf{G}^T$ and the element at the $i$th row and the $j$th column of $\mathbf{G}$, i.e., $G_{ij}$, is defined as

With these updates, we obtain a quadratic programming based training algorithm (see Algorithm ? for the pseudocode) for our LSTM based anomaly detector.
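For reference, orthogonality preserving methods of this type typically rely on a Cayley transform; assuming the update rule of the method cited above is used, a single step takes the form

\[
\mathbf{W}_{k+1} = \Big(\mathbf{I} + \frac{\mu}{2}\mathbf{A}\Big)^{-1}\Big(\mathbf{I} - \frac{\mu}{2}\mathbf{A}\Big)\mathbf{W}_{k},
\qquad \mathbf{A} \triangleq \mathbf{G}\mathbf{W}_{k}^T - \mathbf{W}_{k}\mathbf{G}^T,
\]

where $\mu$ is the learning rate and $\mathbf{G}$ collects the gradients with respect to the entries of $\mathbf{W}$; since $\mathbf{A}$ is skew-symmetric, the update preserves $\mathbf{W}_{k+1}^T\mathbf{W}_{k+1} = \mathbf{W}_{k}^T\mathbf{W}_{k}$.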

Gradient Based Training Algorithm

Although the quadratic programming based training algorithm directly optimizes the original OC-SVM formulation without requiring any approximation, it depends on separate consecutive updates of the LSTM and OC-SVM parameters and hence might not converge even to a local minimum [?]. In order to resolve this issue, in this subsection, we introduce a training method based only on first order gradients, which updates all parameters at the same time. However, since we require an approximation to the original OC-SVM formulation to apply this method, we also prove the convergence of the approximated formulation to the original OC-SVM formulation in this subsection.

Considering the constraints, we write the slack variable in a different form as follows

where

By substituting this expression, we remove the constraint and obtain the following optimization problem

Since the maximum function is not differentiable, we are unable to solve this optimization problem using gradient based optimization algorithms. Hence, we employ a differentiable function to smoothly approximate it, where $\beta$ is the smoothing parameter and $\log(\cdot)$ represents the natural logarithm. As $\beta$ increases, the approximation converges to the maximum function (see Proposition ? at the end of this section); hence, we choose a large value for $\beta$. With this approximation, we modify our optimization problem as follows
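A standard choice consistent with this description, and the one assumed in the illustrative sketches in this section, is the scaled softplus function together with the resulting smoothed objective

\[
S_\beta(x) \triangleq \frac{1}{\beta}\log\big(1 + e^{\beta x}\big) \approx \max\{0, x\},
\qquad
F_\beta(\mathbf{w}, \rho, \boldsymbol{\theta}) \triangleq \frac{1}{2}\|\mathbf{w}\|^2 + \frac{1}{n\lambda}\sum_{i=1}^{n} S_\beta\big(\rho - \mathbf{w}^T\bar{\mathbf{h}}_i\big) - \rho,
\]

where the weighting of the slack terms follows the representative formulation sketched earlier.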

where $F_\beta$ denotes the objective function of this smoothed optimization problem.

To obtain the optimal parameters, we update $\mathbf{w}$, $\rho$ and $\boldsymbol{\theta}$ until they converge to a local or global optimum [?]. For the updates of $\mathbf{w}$ and $\rho$, we use the SGD algorithm [?], where we compute the first order gradient of the objective function with respect to each parameter. We first compute the gradient with respect to $\mathbf{w}$ as follows

Using this gradient, we update $\mathbf{w}$ as

where the subscript indicates the value of the parameter at the corresponding iteration. Similarly, we calculate the derivative of the objective function with respect to $\rho$ as follows

Using this gradient, we update $\rho$ as

For the LSTM parameters, we use the method for optimization with orthogonality constraints in [?] due to the norm constraint on $\boldsymbol{\theta}$. To update each element of $\mathbf{W}$, we calculate the gradient of the objective function as

We then update $\mathbf{W}$ using this gradient as

where $\mathbf{B} \triangleq \mathbf{M}\mathbf{W}^T - \mathbf{W}\mathbf{M}^T$ and

Hence, we complete the required updates for each parameter. The complete algorithm is also provided in Algorithm ? as pseudocode. Moreover, we illustrate the convergence of our smooth approximation to the maximum function in Proposition ?. Using Proposition ?, we then demonstrate the convergence of the optimal values of our objective function to the optimal values of the actual SVM objective function in Theorem ?.
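A compact end-to-end sketch of this gradient based scheme is given below. It uses the smoothed objective $F_\beta$ assumed above and plain SGD for all parameters; in particular, the LSTM weights are updated by an unconstrained gradient step rather than by the orthogonality preserving update, so it is a simplified illustration rather than the exact algorithm.

```python
import torch
import torch.nn.functional as F

def train_joint_sgd(encoder, sequences, lam=0.5, beta=10.0, epochs=100, lr=1e-2):
    """Jointly trains the LSTM encoder and the (w, rho) hyperplane by SGD on the
    smoothed one-class objective. encoder maps a (1, d_i, p) tensor to a
    (1, hidden_dim) pooled representation (e.g., the earlier sketch)."""
    hidden_dim = encoder(sequences[0]).shape[1]
    w = torch.zeros(hidden_dim, requires_grad=True)
    rho = torch.zeros((), requires_grad=True)
    opt = torch.optim.SGD([w, rho] + list(encoder.parameters()), lr=lr)
    n = len(sequences)

    for _ in range(epochs):
        opt.zero_grad()
        h_bar = torch.cat([encoder(x) for x in sequences])        # (n, hidden_dim)
        # Smoothed slack S_beta(rho - w^T h_bar_i) via the scaled softplus.
        slack = F.softplus(rho - h_bar @ w, beta=beta)
        loss = 0.5 * (w @ w) + slack.sum() / (n * lam) - rho
        loss.backward()
        opt.step()

    def score(x):
        # +1 for nominal, -1 for anomalous: sign of w^T h_bar - rho
        with torch.no_grad():
            return torch.sign(encoder(x) @ w - rho)
    return score
```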

In order to simplify our notation, for any given $\mathbf{w}$, $\rho$, $\mathbf{X}_i$ and $\boldsymbol{\theta}$, we denote $\rho - \mathbf{w}^T\bar{\mathbf{h}}_i$ as $x$. We first show that the smooth approximation upper bounds $\max\{0, x\}$ for all $x$. Since

and , we have . Then, for any , we have

and for any , we have

thus, we conclude that the approximation is a monotonically decreasing function of $\beta$. As the last step, we derive an upper bound for the difference between the approximation and the maximum function. For positive arguments, the derivative of the difference is as follows

hence, the difference is a decreasing function of the argument on this range, so the maximum value of the difference occurs at the origin. Similarly, for negative arguments, the derivative of the difference is positive, which again shows that the maximum of the difference occurs at the origin. With this result, we obtain the following bound

Using this bound, for any $\epsilon > 0$, we can choose $\beta$ sufficiently large so that the difference is smaller than $\epsilon$. Hence, as $\beta$ increases, the approximation uniformly converges to the maximum function. By averaging over all the data points and multiplying by the regularization constant, we obtain

which proves the uniform convergence of the approximated objective to the original one.
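Under the scaled softplus form assumed earlier, these bounds can be written explicitly as

\[
0 \;\le\; S_\beta(x) - \max\{0, x\} \;\le\; \frac{\log 2}{\beta} \quad \forall x,
\qquad
0 \;\le\; F_\beta(\mathbf{w}, \rho, \boldsymbol{\theta}) - F(\mathbf{w}, \rho, \boldsymbol{\theta}) \;\le\; \frac{\log 2}{\lambda\beta},
\]

where $F$ denotes the original (unsmoothed) objective, so both gaps vanish uniformly in $(\mathbf{w}, \rho, \boldsymbol{\theta})$ as $\beta \to \infty$.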

We have the following Hessian matrix of the objective with respect to $\mathbf{w}$

which satisfies $\mathbf{v}^T \mathbf{H} \mathbf{v} > 0$ for any nonzero column vector $\mathbf{v}$, where $\mathbf{H}$ denotes this Hessian. Hence, the Hessian matrix is positive definite, which shows that the objective is a strictly convex function of $\mathbf{w}$. Consequently, the minimizing $\mathbf{w}$ is both global and unique given any $\rho$ and $\boldsymbol{\theta}$. Additionally, we have the following second order derivative with respect to $\rho$

which implies that the objective is also a strictly convex function of $\rho$. As a result, the minimizing $\rho$ is both global and unique for any given $\mathbf{w}$ and $\boldsymbol{\theta}$.

Let $\mathbf{w}$ and $\rho$ be the solutions of the smoothed problem for any fixed $\boldsymbol{\theta}$. From the proof of Proposition ?, we have

Using the convergence result in Proposition ? and , we have

which proves the following equality

3.2 Anomaly Detection with the SVDD Algorithm

In this subsection, we introduce an anomaly detection algorithm based on the SVDD formulation and provide the joint updates in order to learn both the LSTM and SVDD parameters. However, since the generic formulation is the same as in the OC-SVM case, we only provide the required and distinct updates for the parameters and the proof of the convergence of the approximated SVDD formulation to the actual one.

In the SVDD algorithm, we aim to find a hypersphere that encloses the normal data while leaving the anomalous data outside the hypersphere [?]. For the sequences $\bar{\mathbf{h}}_i$, we have the following SVDD optimization problem [?]

where the trade-off parameter balances the size of the hypersphere against the total misclassification error, $R$ is the radius of the hypersphere and the remaining optimization variable is the center of the hypersphere. Additionally, $\boldsymbol{\theta}$ and $\xi_i$ represent the LSTM parameters and the slack variable, respectively, as in the OC-SVM case. After solving this constrained optimization problem, we detect anomalies using the following scoring function
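A representative form of this problem and of the resulting scoring function, written with hypothetical symbols ($R$ for the radius, $\mathbf{a}$ for the center, $\lambda$ for the trade-off parameter; the exact notation and constants may differ), is

\[
\min_{R,\,\mathbf{a},\,\boldsymbol{\theta}} \;\; R^2 + \frac{1}{n\lambda}\sum_{i=1}^{n}\xi_i
\quad \text{s.t.} \quad \|\bar{\mathbf{h}}_i - \mathbf{a}\|^2 \le R^2 + \xi_i,\;\; \xi_i \ge 0,
\qquad
\ell(\mathbf{X}_i) = \operatorname{sgn}\big(R^2 - \|\bar{\mathbf{h}}_i - \mathbf{a}\|^2\big),
\]

together with the same constraint on the LSTM parameters $\boldsymbol{\theta}$ as in the OC-SVM case, so that $\ell(\mathbf{X}_i) = +1$ for sequences falling inside the hypersphere.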

Quadratic Programming Based Training Algorithm

In this subsection, we introduce a quadratic programming based training algorithm for the SVDD formulation. As in the OC-SVM case, we first assume that the LSTM parameters are fixed and then perform optimization over the SVDD parameters based on the fixed LSTM parameters. For this problem, we have the following Lagrangian

where the nonnegative Lagrange multipliers correspond to the two sets of constraints. Taking the derivative of the Lagrangian with respect to the radius, the center and the slack variables and then setting the derivatives to zero yields

Putting these relations into the Lagrangian, we obtain the following dual form

Using this result, we modify the scoring function as

In order to solve this constrained optimization problem, we employ the same approach as in the OC-SVM case. We first fix a certain set of the LSTM parameters. Based on these parameters, we find the optimal Lagrange multipliers using the SMO algorithm. After that, we fix the multipliers to update the LSTM parameters using the algorithm for optimization with orthogonality constraints. We repeat these procedures until we reach convergence. Finally, we evaluate the scoring function based on the converged parameters.

Hence, we obtain a quadratic programming based training algorithm for our LSTM based anomaly detector, which is also described in Algorithm ? as a pseudocode.

Gradient Based Training Algorithm

In this subsection, we introduce a training algorithm for the SVDD formulation based only on first order gradients. We again write the slack variable in terms of the maximum function in order to eliminate the constraint as follows

where

Since gradient based methods cannot handle the nondifferentiable maximum function, we employ its smooth approximation instead and modify the problem as

where the resulting expression is the objective function of the smoothed SVDD problem. To obtain the optimal values, we update the radius, the center and the LSTM parameters until we reach either a local or a global optimum. For the updates of the radius and the center, we employ the SGD algorithm, where we use the following gradient calculations. We first compute the gradient as