Anomaly Detection and Localisation
using Mixed Graphical Models
We propose a method that performs anomaly detection and localisation within heterogeneous data using a pairwise undirected mixed graphical model. The data are a mixture of categorical and quantitative variables, and the model is learned over a dataset assumed to be free of anomalies. The model is then applied to temporal data, potentially a data stream, through a version of the two-sided CUSUM algorithm. The proposed decision statistic is based on a conditional likelihood ratio computed for each variable given the others. Our results show that this statistic detects anomalies variable by variable, and thus localises the variables involved in an anomaly more precisely than univariate methods based on simple marginals.
CNRS LTCI, Télécom ParisTech, Université Paris-Saclay,
46 Rue Barrault, 75013 Paris
Thales Airborne Systems, 2 Avenue Gay Lussac, 78990 Élancourt
1 Introduction
Anomaly detection refers to the task of detecting anomalous samples within a dataset described by variables, also called features. Localisation is the task of identifying the subset of variables that are at the origin of the detected anomalies. While the problem of detection has been extensively studied in the machine learning literature (see (Hodge & Austin, 2004)), the problem of localisation in the presence of dependent variables remains a challenge.
In this paper, we propose to address this question using undirected probabilistic graphical models. Such models are particularly useful to represent the joint distribution over a set of random variables in an efficient and compact way. Undirected graphical models are commonly tied to Gaussian random variables, yet recent works have studied the possibility of building models over heterogeneous variables: (Yang et al., 2014) proposes a general class of graphical models where each node-conditional distribution is a member of a univariate exponential family, and (Lee & Hastie, 2015; Laby et al., 2015) investigate the problem of learning the structure of a pairwise graphical model over both discrete and continuous variables. This is done by optimising the likelihood or the pseudo-likelihood, penalised with a Lasso or group-Lasso regularisation.
A standard approach to perform online anomaly detection on temporal data such as signals is to use the CUSUM algorithm (see (Page, 1954) and (Basseville et al., 1993)). In this work we propose a two-sided test with an adapted CUSUM algorithm to detect anomalies that occur in the conditional distributions rather than in the marginals. The resulting algorithm performs change-point detection and detects, variable by variable, continuous or categorical, the time when the distribution of the data departs from the “normal” distribution.
2 Mixed Model Presentation
The definition of graphical models relies on the factorisation of the joint distribution. Pairwise models form a particular class of models where the features are grouped in sets of one or two variables. Such models have been widely studied and have a number of practical advantages (Schmidt, 2010). In this paper, we focus on mixed models mixing binary variables (called categorical thereafter) and continuous variables (called quantitative thereafter). We have z = (x, y), with x = (x_1, …, x_p) taking values in {0, 1}^p and y = (y_1, …, y_q) taking values in R^q. We use the pairwise mixed model
\[ p(x, y; \Theta) \propto \exp\Big( x^\top B x - \tfrac{1}{2}\, y^\top \Lambda y + x^\top P y \Big), \tag{1} \]
where Θ = (B, Λ, P) contains all the parameters of the model. Here, B is a symmetric p × p matrix, Λ is a positive definite symmetric q × q matrix and P is a general p × q matrix.
The model (1) is a mixture between the classic Ising Graphical Model (IGM) and the Gaussian Graphical Model (GGM). In the Gaussian model, that is, when (1) reduces to a Gaussian density, the partition function is easy to calculate and only requires the computation of the determinant of a matrix. The Ising model is one of the earliest studied undirected models, introduced to model the energy of a physical system involving interactions between atoms (see (Ising, 1925)). The Ising model has binary variables, i.e. each x_k takes values in {0, 1} or {−1, 1}, depending on the authors. Here we use the state space {0, 1}. The Ising model can be generalised to discrete variables with more than two states, for example with the Potts model (Potts, 1953), but the latter can be reparametrised as an IGM using 1-of-K encoding, as explained in (Bishop, 2006), §4.3.4. In the following, we will therefore only consider binary categorical variables.
To illustrate the model (1), we show some simulations made with 2 quantitative and 3 categorical variables. Figure 1 shows simulations of y when P = 0 (the quantitative variables are independent of the categorical variables and thus have a Gaussian distribution) and when P ≠ 0 (y is not independent of x and its distribution is a mixture of Gaussian distributions). Given x, the conditional distribution of y is always Gaussian, namely
\[ y \mid x \sim \mathcal{N}\big( \Lambda^{-1} P^\top x, \; \Lambda^{-1} \big). \tag{2} \]
While, except when P = 0, the unconditional law of y is not Gaussian but a mixture of Gaussian distributions, the unconditional law of x is again an Ising model, obtained by integrating y out of (1), with density
\[ \tilde{p}(x) \propto \exp\Big( x^\top \big( B + \tfrac{1}{2}\, P \Lambda^{-1} P^\top \big)\, x \Big). \tag{3} \]
With these two properties, we can design an algorithm to efficiently sample from the distribution (1). Since p(x, y) = p(x) p(y | x), one just needs to first sample x from (3), using for instance Wolff’s algorithm (Wolff, 1989), and then to sample y from the conditional Gaussian density (2). This procedure will be used in our numerical experiments below.
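As an illustration, this two-stage sampler can be sketched as follows. The function name and its arguments are ours, and for brevity x is drawn from its Ising marginal (3) by plain Gibbs sweeps rather than by Wolff's cluster algorithm; the conditional Gaussian step (2) is unchanged.

```python
import numpy as np

def sample_mixed_model(B, Lam, P, n_samples, n_sweeps=200, seed=None):
    """Draw samples (x, y) from the pairwise mixed model (1) (sketch).

    Assumes p(x, y) is proportional to exp(x'Bx - y'(Lam)y/2 + x'Py),
    with x in {0,1}^p and y in R^q.
    """
    rng = np.random.default_rng(seed)
    p, q = P.shape
    Lam_inv = np.linalg.inv(Lam)
    chol = np.linalg.cholesky(Lam_inv)      # chol @ chol.T = Lam^{-1}
    # Integrating y out of (1) leaves an Ising model on x with the extra
    # quadratic coupling P Lam^{-1} P' / 2, as in (3).
    B_marg = B + 0.5 * P @ Lam_inv @ P.T
    X = np.empty((n_samples, p))
    Y = np.empty((n_samples, q))
    x = rng.integers(0, 2, size=p).astype(float)
    for i in range(n_samples):
        for _ in range(n_sweeps):
            for k in range(p):
                # Log-odds of x_k = 1 given the other coordinates under B_marg.
                cross = B_marg[k] @ x - B_marg[k, k] * x[k]
                logit = B_marg[k, k] + 2.0 * cross
                x[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-logit)))
        # Conditional Gaussian step (2): y | x ~ N(Lam^{-1} P' x, Lam^{-1}).
        mean = Lam_inv @ (P.T @ x)
        Y[i] = mean + chol @ rng.standard_normal(q)
        X[i] = x
    return X, Y
```

Gibbs sweeps mix more slowly than Wolff's algorithm near criticality, but for the small graphs considered here they are a simple drop-in replacement.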
In this paper, we do not aim at learning a graphical model but rather at exploiting one for anomaly detection and localisation. See (Yang et al., 2014), (Lee & Hastie, 2015) and (Laby et al., 2015) for recent works that investigate the task of learning the parameters of a mixed undirected graphical model.
3 Anomaly detection and localisation
In this section, we present a method to detect and localise anomalies from a sequence of new data z(t) = (x(t), y(t)), t = 1, 2, …, assuming that a reference model f of the form (1) has already been learned using normal data.
The idea to localise anomalies is to monitor each term of the log-pseudo-likelihood (Besag, 1975) as a function of time. The CUSUM algorithm (Page, 1954) was introduced to sequentially detect a change in the mean of a random variable. Since we want to detect an increase or a decrease, we use the two-sided CUSUM algorithm, as proposed in (Basseville et al., 1993). For each t and each variable k, we define the instantaneous conditional log-likelihood ratio
\[ s_k(t) = \log \frac{ g_k\big( z_k(t) \mid z_{-k}(t) \big) }{ f_k\big( z_k(t) \mid z_{-k}(t) \big) }, \tag{4} \]
where z_{-k}(t) denotes all the coordinates of z(t) but the k-th, and a decision statistic defined recursively by S_k(0) = 0 and
\[ S_k(t) = \big( S_k(t-1) + s_k(t) \big)^{+}, \tag{5} \]
where u⁺ = max(u, 0). Here g_k denotes the density of the alternative hypothesis, that is, the conditional density of the targeted anomalous behaviour.
We focus first on the quantitative variables. By (2), the conditional distribution of y given x is the multivariate Gaussian N(μ(x), Λ^{-1}), with μ(x) = Λ^{-1} P^⊤ x. It follows that, for all k, the conditional distribution of y_k given (x, y_{-k}) is univariate Gaussian with mean
\[ m_k = \frac{ [P^\top x]_k - \sum_{j \neq k} \Lambda_{kj}\, y_j }{ \Lambda_{kk} } \quad\text{and variance}\quad \sigma_k^2 = \frac{1}{\Lambda_{kk}}. \]
We actually see from (2) that m_k depends on x and y_{-k}, and a fortiori on t, whereas it is not the case for the conditional variance σ_k².
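The conditional mean and standard deviation above can be read off the terms in y_k of the exponent of (1). A minimal helper (our own illustrative name and signature) is:

```python
import numpy as np

def conditional_gaussian_params(Lam, P, x, y, k):
    """Mean m_k and std sigma_k of y_k given (x, y_{-k}) under model (1) (sketch)."""
    h = P.T @ x                                   # linear term of y given x
    # Collect the coefficients of y_k in the exponent of (1):
    # -Lam_kk y_k^2 / 2 + y_k (h_k - sum_{j != k} Lam_kj y_j).
    m_k = (h[k] - (Lam[k] @ y - Lam[k, k] * y[k])) / Lam[k, k]
    sigma_k = 1.0 / np.sqrt(Lam[k, k])
    return m_k, sigma_k
```

The result agrees with the standard formula for conditioning the multivariate Gaussian N(μ(x), Λ^{-1}) on the remaining coordinates.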
For each quantitative variable y_k, we want to detect a change in the conditional mean m_k. We define the conditional density g_k of the alternative hypothesis as a Gaussian density with the same variance σ_k² and a modified mean ν_k = m_k ± δσ_k. The ratio (4) then becomes, for these two choices of ν_k,
\[ s_k^{\pm}(t) = \pm \frac{\delta}{\sigma_k} \big( y_k(t) - m_k(t) \big) - \frac{\delta^2}{2}. \tag{6} \]
Choosing the + or − sign defines two statistics S_k⁺ and S_k⁻ in (5), detecting respectively an increase and a decrease of the conditional mean m_k. In our experiments in Section 4, we will consider the sum S_k = S_k⁺ + S_k⁻ in order to detect a change in both possible directions.
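The two-sided recursion for one quantitative variable can be sketched as follows (the function name is ours; it takes the observed sequence and the sequence of conditional means as inputs):

```python
import numpy as np

def quantitative_cusum(y_obs, m, sigma, delta):
    """Two-sided CUSUM S_k = S_k^+ + S_k^- for one quantitative variable (sketch).

    y_obs, m: observations y_k(t) and conditional means m_k(t) over time;
    sigma: conditional std sigma_k (time-invariant); delta: shift in std units.
    """
    Sp = Sm = 0.0
    S = np.empty(len(y_obs))
    for t in range(len(y_obs)):
        z = (y_obs[t] - m[t]) / sigma
        # Instantaneous ratios s_k^+/- from the Gaussian alternatives
        # nu_k = m_k +/- delta * sigma_k, each with drift -delta^2/2 under H0.
        Sp = max(0.0, Sp + delta * z - 0.5 * delta ** 2)   # increase of m_k
        Sm = max(0.0, Sm - delta * z - 0.5 * delta ** 2)   # decrease of m_k
        S[t] = Sp + Sm
    return S
```

Under the null, both increments have mean −δ²/2, so the reflected sums hover near zero; after a mean shift of at least δσ_k, one of the two sums grows linearly.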
We focus now on the categorical variables. Each variable x_k has a conditional Bernoulli distribution with mean
\[ \pi_k(t) = \Big( 1 + \exp\big( - B_{kk} - 2 \textstyle\sum_{j \neq k} B_{kj}\, x_j(t) - [P y(t)]_k \big) \Big)^{-1}. \]
In the case of categorical variables, we define the conditional distribution g_k of the alternative hypothesis as a Bernoulli distribution with mean ν_k. The instantaneous log-likelihood ratio (4) is then given by
\[ s_k(t) = x_k(t) \log \frac{\nu_k}{\pi_k(t)} + \big( 1 - x_k(t) \big) \log \frac{1 - \nu_k}{1 - \pi_k(t)}. \]
We choose ν_k such that the drift of the decision function (5) under the null hypothesis is set to the same value −δ²/2 as for the quantitative variables. This drift is given by computing the expectation of the above ratio with x_k(t) ∼ Bernoulli(π_k(t)), yielding the equation
\[ \pi_k \log \frac{\nu_k}{\pi_k} + (1 - \pi_k) \log \frac{1 - \nu_k}{1 - \pi_k} = -\frac{\delta^2}{2}. \]
It is easy to show that this equation in ν_k (with π_k and δ fixed) has two distinct solutions ν_k⁺ (associated to the statistic S_k⁺) and ν_k⁻ (associated to S_k⁻), detecting respectively an increase and a decrease of the conditional mean π_k, each with a negative conditional drift under the null hypothesis. For the same reasons as with the quantitative variables, we will consider the sum S_k = S_k⁺ + S_k⁻ in the experiments.
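The drift equation states that the Kullback–Leibler divergence from Bernoulli(π_k) to Bernoulli(ν_k) equals δ²/2, which has exactly one root on each side of π_k and is easily solved by bisection. A sketch (function names are ours), together with the resulting two-sided statistic for a categorical variable:

```python
import math

def bernoulli_alternatives(pi_k, delta, iters=200, eps=1e-12):
    """Solve pi log(nu/pi) + (1-pi) log((1-nu)/(1-pi)) = -delta^2/2 (sketch).

    Returns (nu_minus, nu_plus), the roots below and above pi_k.
    """
    target = 0.5 * delta ** 2
    def kl(nu):
        return (pi_k * math.log(pi_k / nu)
                + (1.0 - pi_k) * math.log((1.0 - pi_k) / (1.0 - nu)))
    # nu_plus in (pi_k, 1): kl increases from 0 to +inf on this side.
    lo, hi = pi_k, 1.0 - eps
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if kl(mid) < target else (lo, mid)
    nu_plus = 0.5 * (lo + hi)
    # nu_minus in (0, pi_k): kl decreases from +inf to 0 on this side.
    lo, hi = eps, pi_k
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if kl(mid) < target else (mid, hi)
    nu_minus = 0.5 * (lo + hi)
    return nu_minus, nu_plus

def categorical_cusum(x_obs, pi, delta):
    """Two-sided CUSUM for one categorical variable (sketch).

    The alternatives nu^+/- are recomputed at each step from the current
    conditional mean pi(t), which depends on the other variables.
    """
    Sp = Sm = 0.0
    out = []
    for x, p in zip(x_obs, pi):
        nu_m, nu_p = bernoulli_alternatives(p, delta)
        def llr(nu):
            return x * math.log(nu / p) + (1 - x) * math.log((1 - nu) / (1 - p))
        Sp = max(0.0, Sp + llr(nu_p))   # detects increase of pi_k
        Sm = max(0.0, Sm + llr(nu_m))   # detects decrease of pi_k
        out.append(Sp + Sm)
    return out
```

Since π_k(t) varies with the conditioning variables, ν_k⁺ and ν_k⁻ must be recomputed at every time step, unlike the quantitative case where only the mean shifts.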
Under the null hypothesis, each decision statistic S_k⁺ or S_k⁻ evolves with a negative drift −δ²/2. Hence, because of the positive part in (5), it remains close to zero with high probability. In contrast, under the alternative, the conditional drift becomes positive and the decision statistic eventually increases above any arbitrarily high threshold h. We thus label as a change time the first time t at which S_k(t) > h. The choice of δ sets how sensitive the test is to a close alternative, while the choice of h is a compromise between the false-alarm probability over a given horizon and the delay needed to raise an alarm after a change of distribution. Finally and most interestingly, the set of indices k for which the alarm is raised provides a way to identify the variables for which not only the marginal distribution has changed but also the conditional one, given all other available variables.
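The thresholding and localisation step then reduces to scanning each per-variable statistic for its first crossing of h; a small illustrative helper (our own naming):

```python
def flagged_variables(stats, h):
    """Variables whose decision statistic crosses threshold h (sketch).

    stats: dict mapping variable index k to its sequence (S_k(t))_t;
    returns {k: first t with S_k(t) > h} for the variables raising an alarm.
    """
    alarms = {}
    for k, S in stats.items():
        for t, s in enumerate(S):
            if s > h:
                alarms[k] = t
                break
    return alarms
```

The keys of the returned dictionary localise the anomaly; the values give the (delayed) detection times.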
4 Applications on synthetic data
In this section, we present results of anomaly detection and localisation on synthetic data. We suppose here that the model parameters Θ have already been learned from normal data. The data are composed of 50 normal observations sampled from the model using the algorithm described at the end of Section 2, and 50 anomalous observations sampled from an altered model where one parameter value in Θ has been modified.
We use the same model structure as in (Lee & Hastie, 2015), with 4 categorical and 4 quantitative variables. The model is represented in Fig. 2, with a colormap that will be kept for all experiments. The parameters have been chosen as follows: the upper and lower diagonals of B are filled with .5, and 0 elsewhere; the upper and lower diagonals of Λ are filled with .25, and 0 elsewhere; P is nonzero on the edges shown in Fig. 2, and 0 elsewhere.
We have tested three different modifications of the parameters Θ: 1) the conditional distribution of the second (green) quantitative variable is changed by moving one of its coupling parameters away from its nominal value, 2) the conditional distribution of the first (red) categorical variable is changed in the same way, and 3) the conditional distributions of the first (red) categorical and third (blue) quantitative variables are changed by moving the parameter coupling them. Figure 3 shows the temporal evolution of the statistic S_k computed for every variable and for the three kinds of anomalies. As expected, the plots on the top row show that under the first modification, only the statistic of the green quantitative variable increases, indicating that the green variable alone carries the change of conditional distribution. The same conclusion holds for the two other modifications. These results show that our method correctly detects and localises the changes in the conditional distributions.
We compare our method to the Wilcoxon test presented in (Lung-Yut-Fong et al., 2011), which is designed to detect changes in the distribution of a set of quantitative variables from batch data. In the following, we thus apply this approach to detect a change of distribution for each quantitative variable. Figure 4 displays the statistic of this test as a function of the possible change times. When only one change occurs in the data, this statistic is expected to have an approximately triangular shape, with a maximum or a minimum around the true change time. We use the same dataset as for the experiment with the anomaly localised on the second (green) quantitative variable, where the modified parameter changes from 0 to a nonzero value at time t = 51. Figure 4 should thus be compared with the top row of Figure 3. In contrast to online methods such as the one we propose, this Wilcoxon statistic cannot be computed recursively, as it requires the whole set of data to be computed. Moreover, it is not suited to localise the anomaly since the parameter change, although it only modifies the conditional distribution of one variable given the others, yields a change of all the marginal distributions. This is why, in Figure 4, the Wilcoxon statistics display triangular shapes for all the quantitative variables, with a more obvious change for the variables directly connected to the modified one.
5 Conclusion
In this paper, we proposed an online method that detects anomalies in a data stream and, more importantly, localises the variables that are at the origin of the problem. By using a mixed undirected graphical model learned over a set of normal data, we manage to track changes occurring in the conditional distributions, which yields more specific detections than studying only marginal distributions. This method is based on a two-sided CUSUM algorithm, where decision statistics are computed for every variable and involve the calculation of conditional likelihoods.
References
- Basseville et al. (1993) Basseville, Michèle, Nikiforov, Igor V, et al. Detection of abrupt changes: theory and application, volume 104. Prentice Hall Englewood Cliffs, 1993.
- Besag (1975) Besag, J. Statistical analysis of non-lattice data. The Statistician, 24, 1975.
- Bishop (2006) Bishop, C. Pattern Recognition and Machine Learning. 2006.
- Hodge & Austin (2004) Hodge, Victoria J and Austin, Jim. A survey of outlier detection methodologies. Artificial Intelligence Review, 22(2):85–126, 2004.
- Ising (1925) Ising, E. Beitrag zur Theorie des Ferromagnetismus. Zeitschrift für Physik, 31:253–258, 1925.
- Laby et al. (2015) Laby, Romain, Gramfort, Alexandre, Roueff, François, Enderli, Cyrille, and Larroque, Alain. Sparse pairwise Markov model learning for anomaly detection in heterogeneous data. June 2015. URL https://hal-institut-mines-telecom.archives-ouvertes.fr/hal-01167391.
- Lee & Hastie (2015) Lee, Jason D and Hastie, Trevor J. Learning the structure of mixed graphical models. Journal of Computational and Graphical Statistics, 24(1):230–253, 2015.
- Lung-Yut-Fong et al. (2011) Lung-Yut-Fong, Alexandre, Lévy-Leduc, Céline, and Cappé, Olivier. Homogeneity and change-point detection tests for multivariate data using rank statistics. arXiv preprint arXiv:1107.1971, 2011.
- Page (1954) Page, ES. Continuous inspection schemes. Biometrika, 41(1/2):100–115, 1954.
- Potts (1953) Potts, R.B. Some generalized order-disorder transformations. Proceedings of the Cambridge Philosophical Society, 1953.
- Schmidt (2010) Schmidt, Mark. Graphical model structure learning with l1-regularization. PhD thesis, University of British Columbia (Vancouver), 2010.
- Wolff (1989) Wolff, Ulli. Collective Monte Carlo updating for spin systems. Physical Review Letters, 62(4):361, 1989.
- Yang et al. (2014) Yang, Eunho, Baker, Yulia, Ravikumar, Pradeep D, Allen, Genevera I, and Liu, Zhandong. Mixed graphical models via exponential families. In AISTATS, pp. 1042–1050, 2014.