Universal Outlying Sequence Detection for Continuous Observations

Yuheng Bu   Shaofeng Zou   Yingbin Liang   Venugopal V. Veeravalli

University of Illinois at Urbana-Champaign
Syracuse University

Email: bu3@illinois.edu, szou02@syr.edu, yliang06@syr.edu, vvv@illinois.edu

Abstract

The following detection problem is studied, in which there are $M$ sequences of samples out of which one outlier sequence needs to be detected. Each typical sequence contains $n$ independent and identically distributed (i.i.d.) continuous observations from a known distribution $\pi$, and the outlier sequence contains $n$ i.i.d. observations from an outlier distribution $\mu$, which is distinct from $\pi$, but otherwise unknown. A universal test based on KL divergence is built to approximate the maximum likelihood test, with known $\pi$ and unknown $\mu$. A KL divergence estimator based on data-dependent partitions is employed. This estimator is further shown to converge to its true value exponentially fast when the density ratio satisfies $0 < K_1 \le \frac{d\mu}{d\pi} \le K_2$, where $K_1$ and $K_2$ are positive constants, which in turn implies that the test is exponentially consistent. The performance of the test is compared with that of a recently introduced test for this problem based on the machine learning approach of maximum mean discrepancy (MMD). We identify regimes in which the KL divergence based test outperforms the MMD based test.

1 Introduction

In this paper, we study the outlying sequence detection problem, in which there are $M$ sequences of samples out of which one outlier sequence needs to be detected. Each typical sequence consists of $n$ independent and identically distributed (i.i.d.) continuous observations drawn from a known distribution $\pi$, whereas the outlier sequence consists of $n$ i.i.d. samples drawn from a distribution $\mu$, which is distinct from $\pi$, but otherwise unknown. The goal is to design a test to detect the outlier sequence.

The study of such a model is very useful in many applications [1]. For example, in cognitive wireless networks, signals follow different distributions depending on whether the channel is busy or vacant. The goal in such a network is to identify the vacant channels out of the busy channels based on their corresponding signals, in order to utilize the vacant channels for improving spectral efficiency. Such a problem was studied in [2] and [3] under the assumption that both $\pi$ and $\mu$ are known. Other applications include anomaly detection in large data sets [4, 5], event detection and environment monitoring in sensor networks [6], understanding of visual search in humans and animals [7], and optimal search and target tracking [8].

The outlying sequence detection problem with discrete $\pi$ and $\mu$ was studied in [9]. A universal test based on the generalized likelihood ratio was proposed, and was shown to be exponentially consistent. The error exponent was further shown to be optimal as the number of sequences goes to infinity. The test utilizes empirical distributions to estimate $\pi$ and $\mu$, and is therefore applicable only to the case where $\pi$ and $\mu$ are discrete.

In this paper, we study the case where the distributions $\pi$ and $\mu$ are continuous and $\mu$ is unknown. We construct a Kullback-Leibler (KL) divergence based test, and further show that this test is exponentially consistent.

Our exploration of the problem starts with the case in which both $\pi$ and $\mu$ are known, for which the maximum likelihood test is optimal. An interesting observation is that the test statistic of the optimal test converges to the KL divergence $D(\mu\,\|\,\pi)$ as the sample size goes to infinity if the sequence is the outlier. This motivates the use of a KL divergence estimator to approximate the test statistic for the case when $\mu$ is unknown. We apply a divergence estimator based on the idea of data-dependent partitions [10], which was shown to be consistent. Our first contribution here is to show that such an estimator converges exponentially fast to its true value when the density ratio satisfies the boundedness condition $0 < K_1 \le \frac{d\mu}{d\pi} \le K_2$, where $K_1$ and $K_2$ are positive constants. We further design a KL divergence based test using such an estimator and show that the test is exponentially consistent.

The rest of the paper is organized as follows. In Section 2, we describe the problem formulation. In Section 3, we present the KL divergence based test and establish its exponential consistency. In Section 4, we review the maximum mean discrepancy (MMD) based test. In Section 5, we provide a numerical comparison of the KL divergence based test and the MMD based test. All detailed proofs are provided in the appendix.

2 Problem Model

Throughout the paper, random variables are denoted by capital letters, and their realizations are denoted by the corresponding lower-case letters. All logarithms are with respect to the natural base.

We study an outlier detection problem, in which there are in total $M$ data sequences denoted by $y^{(i)}$, for $1 \le i \le M$. Each data sequence consists of $n$ i.i.d. samples drawn from either a typical distribution $\pi$ or an outlier distribution $\mu$, where $\pi$ and $\mu$ are continuous, i.e., defined on $\mathbb{R}$, and $\mu \neq \pi$. We use the notation $y^{(i)} = (y^{(i)}_1,\dots,y^{(i)}_n)$, where $y^{(i)}_k$ denotes the $k$-th observation of the $i$-th sequence. We assume that there is exactly one outlier among the $M$ sequences. If the $i$-th sequence is the outlier, the joint distribution of all the observations is given by
\[
p_i\big(y^{(1)},\dots,y^{(M)}\big) = \prod_{k=1}^{n} \mu\big(y^{(i)}_k\big) \prod_{j \neq i} \prod_{k=1}^{n} \pi\big(y^{(j)}_k\big).
\]

We are interested in the scenario in which the outlier distribution $\mu$ is unknown a priori, but the typical distribution $\pi$ is known exactly. This is reasonable because in practical scenarios, systems typically start without outliers, and it is not difficult to collect sufficient information about $\pi$.
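
To make the model concrete, the following Python sketch generates one realization of the observations. The Gaussian choices for $\pi$ and $\mu$, the value $M = 5$, and the sample size are purely illustrative assumptions and are not taken from the paper.

import numpy as np

def generate_sequences(M, n, outlier_index, rng):
    # Typical sequences are i.i.d. N(0, 1); the outlier sequence is i.i.d.
    # N(0, 1.5^2). Both choices are illustrative, not the distributions used
    # in the paper's experiments.
    sequences = rng.normal(loc=0.0, scale=1.0, size=(M, n))
    sequences[outlier_index] = rng.normal(loc=0.0, scale=1.5, size=n)
    return sequences

rng = np.random.default_rng(0)
y = generate_sequences(M=5, n=1000, outlier_index=2, rng=rng)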

Our goal is to build a distribution-free test to detect the outlier sequence generated by $\mu$. The test can be captured by a universal rule $\delta: \mathbb{R}^{Mn} \to \{1,\dots,M\}$, which must not depend on $\mu$.

The maximum error probability, which is a function of the detector $\delta$ and $\mu$, is defined as
\[
e(\delta,\mu) \triangleq \max_{1 \le i \le M} P_i\big\{\delta\big(y^{(1)},\dots,y^{(M)}\big) \neq i\big\},
\]
where $P_i$ denotes the probability measure when the $i$-th sequence is the outlier, i.e., when the observations are distributed according to $p_i$, and the corresponding error exponent is defined as
\[
\alpha(\delta,\mu) \triangleq \liminf_{n\to\infty} -\frac{1}{n}\log e(\delta,\mu).
\]

A test $\delta$ is said to be universally consistent if
\[
\lim_{n\to\infty} e(\delta,\mu) = 0,
\]
for any $\mu \neq \pi$. It is said to be universally exponentially consistent if
\[
\alpha(\delta,\mu) > 0,
\]
for any $\mu \neq \pi$.

3 KL Divergence Based Test

We first introduce the optimal test when both $\pi$ and $\mu$ are known, which is the maximum likelihood test. We then construct a KL divergence estimator and prove its exponential consistency. Next, we employ the KL divergence estimator to approximate the test statistic of the optimal test for the outlying sequence detection problem, and construct the KL divergence based test.

3.1 Optimal test with $\pi$ and $\mu$ known

If both $\pi$ and $\mu$ are known, the optimal test for the outlying sequence detection problem is the maximum likelihood test:
\[
\delta_{ML}\big(y^{(1)},\dots,y^{(M)}\big) = \arg\max_{1 \le i \le M} p_i\big(y^{(1)},\dots,y^{(M)}\big). \tag{1}
\]

By normalizing with $\prod_{j=1}^{M}\prod_{k=1}^{n}\pi\big(y^{(j)}_k\big)$, taking the logarithm, and scaling by $1/n$, (1) is equivalent to:
\[
\delta_{ML}\big(y^{(1)},\dots,y^{(M)}\big) = \arg\max_{1 \le i \le M} L_i,
\]

where
\[
L_i \triangleq \frac{1}{n}\sum_{k=1}^{n} \log\frac{\mu\big(y^{(i)}_k\big)}{\pi\big(y^{(i)}_k\big)}. \tag{2}
\]
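
When both $\pi$ and $\mu$ are known, the statistic $L_i$ in (2) and the test (1) can be computed directly, as in the following sketch. The Gaussian densities in the example are an illustrative assumption, and the function names are ours.

import numpy as np
from scipy.stats import norm

def ml_test(sequences, log_pi_pdf, log_mu_pdf):
    # L_i in (2): the per-sample average log-likelihood ratio of mu versus pi
    # for sequence i; the maximum likelihood test (1) declares the arg max to
    # be the outlier.
    L = np.mean(log_mu_pdf(sequences) - log_pi_pdf(sequences), axis=1)
    return int(np.argmax(L))

log_pi = lambda x: norm.logpdf(x, loc=0.0, scale=1.0)   # illustrative pi = N(0, 1)
log_mu = lambda x: norm.logpdf(x, loc=0.0, scale=1.5)   # illustrative mu = N(0, 1.5^2)
# i_hat = ml_test(y, log_pi, log_mu)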

The following theorem characterizes the error exponent of test (1).

Theorem 1.

[9, Theorem 1] Consider the outlying sequence detection problem with both $\pi$ and $\mu$ known. The error exponent of the maximum likelihood test (1) is given by
\[
\alpha(\delta_{ML},\mu) = 2B(\pi,\mu),
\]

where $B(\pi,\mu)$ is the Bhattacharyya distance between $\pi$ and $\mu$, which is defined as
\[
B(\pi,\mu) \triangleq -\log \int \sqrt{\pi(y)\,\mu(y)}\, dy.
\]

Proof.

See Appendix A. ∎

Consider $L_i$ defined in (2). If $y^{(i)}$ is generated from $\pi$, then $L_i \to -D(\pi\,\|\,\mu)$ almost surely as $n \to \infty$, by the Law of Large Numbers. Here,
\[
D(\pi\,\|\,\mu) \triangleq \int \pi(y)\log\frac{\pi(y)}{\mu(y)}\,dy
\]

is the KL divergence between $\pi$ and $\mu$. Similarly, if $y^{(i)}$ is generated from $\mu$, then $L_i \to D(\mu\,\|\,\pi)$ almost surely as $n \to \infty$. Hence, if $y^{(i)}$ is generated from $\mu$, $L_i$ is an empirical estimate of the KL divergence between $\mu$ and $\pi$. This motivates us to construct a test based on an estimator of the KL divergence between $\mu$ and $\pi$ when $\mu$ is unknown.

3.2 KL divergence estimator

We introduce a KL divergence estimator of continuous distributions based on data-dependent partitions [10].

Assume that the distribution $\mu$ is unknown and the distribution $\pi$ is known, and both $\mu$ and $\pi$ are continuous. A sequence $y = (y_1,\dots,y_n)$ of i.i.d. samples is generated from $\mu$. We wish to estimate the KL divergence $D(\mu\,\|\,\pi)$ between $\mu$ and $\pi$. We denote the order statistics of $y$ by $(y_{(1)},\dots,y_{(n)})$, where $y_{(1)} \le y_{(2)} \le \cdots \le y_{(n)}$. We further partition the real line into $T_n$ empirically equiprobable segments as follows:
\[
I_t = \begin{cases} \big(-\infty,\ y_{(\ell_n)}\big], & t = 1,\\ \big(y_{((t-1)\ell_n)},\ y_{(t\ell_n)}\big], & t = 2,\dots,T_n - 1,\\ \big(y_{((T_n-1)\ell_n)},\ +\infty\big), & t = T_n, \end{cases}
\]

where $\ell_n = \lfloor n/T_n \rfloor$ is the number of points in each interval except possibly the last one, and $T_n$ is the number of intervals. A divergence estimator between the sequence $y$ and the distribution $\pi$ was proposed in [10], which is given by

\[
\hat{D}_n(y\,\|\,\pi) = \sum_{t=1}^{T_n - 1} \frac{\ell_n}{n} \log \frac{\ell_n/n}{\pi(I_t)} + \frac{\tilde{\ell}_n}{n} \log \frac{\tilde{\ell}_n/n}{\pi(I_{T_n})}, \tag{3}
\]

where $\tilde{\ell}_n = n - (T_n - 1)\ell_n$ is the number of points in the last segment, and $\pi(I_t)$ denotes the probability of the segment $I_t$ under $\pi$.
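
A direct implementation of estimator (3) is sketched below; $\pi$ enters only through its CDF, which is used to compute $\pi(I_t)$. The choice $T_n \approx \sqrt{n}$ and the function names are our own illustrative assumptions.

import numpy as np

def kl_estimate(y, pi_cdf, T=None):
    # Estimator (3): partition the real line into T empirically equiprobable
    # segments using the order statistics of y, then compare the empirical
    # mass of each segment with its probability under pi.
    n = len(y)
    if T is None:
        T = int(np.floor(np.sqrt(n)))       # T_n ~ sqrt(n), an illustrative choice
    ell = n // T                             # points per segment (except the last)
    y_sorted = np.sort(y)
    # Right endpoints of segments 1..T-1; the last segment extends to +infinity.
    edges = y_sorted[np.arange(1, T) * ell - 1]
    cdf_vals = np.concatenate(([0.0], pi_cdf(edges), [1.0]))
    pi_mass = np.diff(cdf_vals)              # pi(I_t) for t = 1..T
    counts = np.full(T, ell, dtype=float)
    counts[-1] = n - (T - 1) * ell           # points in the last segment
    p_hat = counts / n
    return float(np.sum(p_hat * np.log(p_hat / pi_mass)))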

The consistency of such an estimator was shown in [10]. Here, we characterize the convergence rate by introducing the following boundedness condition on the density ratio between $\mu$ and $\pi$:
\[
0 < K_1 \le \frac{d\mu}{d\pi}(y) \le K_2, \qquad \forall y \in \mathbb{R}, \tag{4}
\]

where $K_1$ and $K_2$ are positive constants. In practice, such a boundedness condition is often satisfied, for example, by truncated Gaussian distributions.

The following theorem characterizes a lower bound on the convergence rate of estimator (3).

Theorem 2.

If the density ratio between $\mu$ and $\pi$ satisfies (4), and estimator (3) is applied with $\ell_n \to \infty$ and $T_n \to \infty$ as $n \to \infty$, then for any $\epsilon > 0$, the probability $P\big\{\big|\hat{D}_n(y\,\|\,\pi) - D(\mu\,\|\,\pi)\big| > \epsilon\big\}$ decays exponentially fast in $n$.

Proof.

See Appendix B. ∎

Remark 1.

The convergence result for estimator (3) in Theorem 2 can be equivalently stated in terms of the $O_P(\cdot)$ notation, where $O_P$ denotes "bounded in probability" [11].

3.3 Test and performance

In this subsection, we utilize the estimator based on data-dependent partitions to construct our test.

It is clear that if $y^{(i)}$ is the outlier, then $\hat{D}_n\big(y^{(i)}\,\|\,\pi\big)$ is a good estimator of $D(\mu\,\|\,\pi)$, which is a positive constant. On the other hand, if $y^{(i)}$ is a typical sequence, $\hat{D}_n\big(y^{(i)}\,\|\,\pi\big)$ should be close to $D(\pi\,\|\,\pi) = 0$. Based on this understanding and the convergence guarantee in Theorem 2, we use $\hat{D}_n\big(y^{(i)}\,\|\,\pi\big)$ in place of $L_i$ in (2), and construct the following test for the outlying sequence detection problem:
\[
\delta_{KL}\big(y^{(1)},\dots,y^{(M)}\big) = \arg\max_{1 \le i \le M} \hat{D}_n\big(y^{(i)}\,\|\,\pi\big). \tag{5}
\]
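
Test (5) then amounts to computing the divergence estimate for every sequence and declaring the arg max to be the outlier. A minimal sketch, assuming an estimator such as kl_estimate from the sketch in Section 3.2 is supplied as the argument divergence_estimate:

import numpy as np

def kl_test(sequences, pi_cdf, divergence_estimate):
    # delta_KL in (5): compute the divergence estimate for each of the M
    # sequences and declare the arg max to be the outlier. Any estimator of
    # D(. || pi), e.g. kl_estimate from the sketch in Section 3.2, can be
    # plugged in through divergence_estimate.
    scores = [divergence_estimate(seq, pi_cdf) for seq in sequences]
    return int(np.argmax(scores))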

The following theorem provides a lower bound on the error exponent of $\delta_{KL}$, which further implies that $\delta_{KL}$ is universally exponentially consistent.

Theorem 3.

If the density ratio between $\mu$ and $\pi$ satisfies (4), then $\delta_{KL}$ defined in (5) is exponentially consistent, and its error exponent is lower bounded as follows:

(6)
Proof.

See Appendix C. ∎

4 MMD-Based Test

In this section, we introduce the MMD based test, which we previously studied in [12]. We will compare the KL divergence based test $\delta_{KL}$ with the MMD based test $\delta_{MMD}$.

4.1 Introduction to MMD

In this subsection, we briefly introduce the idea of mean embedding of distributions into a reproducing kernel Hilbert space (RKHS) [13] and the metric of MMD. Suppose $\mathcal{P}$ is a set of probability distributions, and suppose $\mathcal{H}$ is the RKHS with an associated kernel $k(\cdot,\cdot)$. We define a mapping from $\mathcal{P}$ to $\mathcal{H}$ such that each distribution $p \in \mathcal{P}$ is mapped to an element in $\mathcal{H}$ as follows:
\[
\mu_p(\cdot) \triangleq \mathbb{E}_p[k(\cdot,x)] = \int k(\cdot,x)\,dp(x).
\]

Here, $\mu_p$ is referred to as the mean embedding of the distribution $p$ into the Hilbert space $\mathcal{H}$. Due to the reproducing property of $\mathcal{H}$, it is clear that $\mathbb{E}_p[f] = \langle \mu_p, f\rangle_{\mathcal{H}}$ for all $f \in \mathcal{H}$.

In order to distinguish between two distributions $p$ and $q$, Gretton et al. [14] introduced the quantity of maximum mean discrepancy (MMD) based on the mean embeddings $\mu_p$ and $\mu_q$ of $p$ and $q$ in the RKHS:
\[
\mathrm{MMD}(p,q) \triangleq \|\mu_p - \mu_q\|_{\mathcal{H}}.
\]

It can be shown that
\[
\mathrm{MMD}(p,q) = \sup_{\|f\|_{\mathcal{H}} \le 1} \big(\mathbb{E}_p[f(x)] - \mathbb{E}_q[f(y)]\big).
\]

Due to the reproducing property of the kernel, the following is true:
\[
\mathrm{MMD}^2(p,q) = \mathbb{E}[k(x,x')] - 2\,\mathbb{E}[k(x,y)] + \mathbb{E}[k(y,y')],
\]

where $x$ and $x'$ are independent but have the same distribution $p$, and $y$ and $y'$ are independent but have the same distribution $q$. An unbiased estimator of $\mathrm{MMD}^2(p,q)$ based on $n$ samples $(x_1,\dots,x_n)$ generated from $p$ and $n$ samples $(y_1,\dots,y_n)$ generated from $q$ is given as follows:
\[
\mathrm{MMD}_u^2\big[(x_1,\dots,x_n),(y_1,\dots,y_n)\big] = \frac{1}{n(n-1)}\sum_{i=1}^{n}\sum_{j \neq i} k(x_i,x_j) + \frac{1}{n(n-1)}\sum_{i=1}^{n}\sum_{j \neq i} k(y_i,y_j) - \frac{2}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} k(x_i,y_j),
\]

where the samples $x_i$ and $y_j$ are mutually independent.
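
For concreteness, the unbiased estimator above can be implemented as follows. The Gaussian kernel and its bandwidth are illustrative assumptions, and the function names are ours.

import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # k(a, b) = exp(-(a - b)^2 / (2 sigma^2)), computed for all pairs.
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    # Unbiased estimate of MMD^2(p, q) from samples x ~ p and y ~ q: the
    # diagonal terms k(x_i, x_i) and k(y_i, y_i) are excluded.
    n, m = len(x), len(y)
    kxx = gaussian_kernel(x, x, sigma)
    kyy = gaussian_kernel(y, y, sigma)
    kxy = gaussian_kernel(x, y, sigma)
    term_xx = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_yy = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    term_xy = 2.0 * kxy.mean()
    return float(term_xx + term_yy - term_xy)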

4.2 Test and performance

For each sequence $y^{(i)}$, we compute an unbiased estimate $\mathrm{MMD}_u^2\big[y^{(i)},\pi\big]$ of the squared MMD between the distribution generating $y^{(i)}$ and the typical distribution $\pi$, for $1 \le i \le M$. It is clear that if $y^{(i)}$ is the outlier, $\mathrm{MMD}_u^2\big[y^{(i)},\pi\big]$ is a good estimator of $\mathrm{MMD}^2(\mu,\pi)$, which is a positive constant. On the other hand, if $y^{(i)}$ is a typical sequence, $\mathrm{MMD}_u^2\big[y^{(i)},\pi\big]$ should be a good estimator of $\mathrm{MMD}^2(\pi,\pi)$, which is zero. Based on the above understanding, we construct the following test:
\[
\delta_{MMD}\big(y^{(1)},\dots,y^{(M)}\big) = \arg\max_{1 \le i \le M} \mathrm{MMD}_u^2\big[y^{(i)},\pi\big]. \tag{7}
\]
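
Since $\pi$ is known, one way to evaluate the statistic in (7) is to draw a reference sample from $\pi$ and apply the unbiased estimator above to each sequence paired with that reference sample. The sketch below follows this reference-sample construction, which is our own illustration and may differ in detail from the statistic analyzed in [12].

import numpy as np

def mmd_test(sequences, pi_sampler, mmd2_estimate, rng, n_ref=None):
    # delta_MMD in (7): estimate MMD^2 between each sequence and the known
    # typical distribution pi (represented here by a reference sample drawn
    # from pi), and declare the arg max to be the outlier.
    n = sequences.shape[1]
    ref = pi_sampler(rng, n_ref or n)
    scores = [mmd2_estimate(seq, ref) for seq in sequences]
    return int(np.argmax(scores))

# Example with an illustrative standard Gaussian pi:
# rng = np.random.default_rng(0)
# i_hat = mmd_test(y, lambda r, m: r.normal(size=m), mmd2_unbiased, rng)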

The following theorem provides a lower bound on the error exponent of $\delta_{MMD}$, and further demonstrates that the test is universally exponentially consistent.

Theorem 4.

Consider the universal outlying sequence detection problem. Suppose that $\delta_{MMD}$ defined in (7) applies a bounded kernel with $0 \le k(x,y) \le K$ for any $(x,y)$. Then, the error exponent is lower bounded as follows:

(8)
Proof.

See Appendix D. ∎

5 Numerical Results and Discussion

In this section, we compare the performance of $\delta_{KL}$ and $\delta_{MMD}$.

We fix the number of sequences $M$ and the typical distribution $\pi$, and choose four outlier distributions $\mu$ whose variances differ from the variance of $\pi$ by different amounts, one for each of Figs. 1-4. In Figs. 1-4, we plot the logarithm of the probability of error, $\log e(\delta,\mu)$, as a function of the sample size $n$.

It can be seen that for both tests, the probability of error converges to zero as the sample size $n$ increases. Furthermore, $\log e(\delta,\mu)$ decreases linearly with $n$, which demonstrates the exponential consistency of both $\delta_{KL}$ and $\delta_{MMD}$.

By comparing the four figures, it can be seen that as the variance of $\mu$ deviates further from the variance of $\pi$, $\delta_{KL}$ outperforms $\delta_{MMD}$. The numerical results and the theoretical lower bounds on the error exponents provide intuition for identifying regimes in which one test outperforms the other. As shown above, when the distributions $\pi$ and $\mu$ become more different from each other, $\delta_{KL}$ outperforms $\delta_{MMD}$. The reason is that for any pair of distributions, the MMD with a bounded kernel is bounded, while the KL divergence is not. As the distributions become more different from each other, the KL divergence increases without bound, and the KL divergence based test has a larger error exponent than the MMD based test.
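
The contrast between the bounded MMD and the unbounded KL divergence can be checked numerically. The snippet below compares the closed-form KL divergence between two zero-mean Gaussians with a simple Monte Carlo estimate of their squared MMD under a Gaussian kernel; the parameter values are illustrative assumptions.

import numpy as np

def kl_gauss(sigma):
    # Closed-form D(mu || pi) for mu = N(0, sigma^2) and pi = N(0, 1).
    return 0.5 * (sigma ** 2 - 1.0) - np.log(sigma)

def mmd2_gauss_mc(sigma, n=2000, bw=1.0, seed=0):
    # Simple (biased) Monte Carlo estimate of MMD^2(mu, pi) with a Gaussian
    # kernel of bandwidth bw; this quantity is bounded by 2 for this kernel.
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=sigma, size=n)   # samples from mu
    y = rng.normal(scale=1.0, size=n)     # samples from pi
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bw ** 2))
    return float(k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean())

# KL grows roughly like sigma^2 / 2, while the MMD^2 estimate saturates.
for sigma in [1.5, 2.0, 4.0, 8.0]:
    print(sigma, kl_gauss(sigma), mmd2_gauss_mc(sigma))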

Figure 1: Comparison of the performance between the KL divergence based test and the MMD based test for the first choice of the outlier distribution $\mu$
Figure 2: Comparison of the performance between the KL divergence based test and the MMD based test for the second choice of the outlier distribution $\mu$
Figure 3: Comparison of the performance between the KL divergence based test and the MMD based test for the third choice of the outlier distribution $\mu$
Figure 4: Comparison of the performance between the KL divergence based test and the MMD based test for the fourth choice of the outlier distribution $\mu$

Appendix

Appendix A Proof of Theorem 1

Recall that the maximum likelihood test is defined as
\[
\delta_{ML}\big(y^{(1)},\dots,y^{(M)}\big) = \arg\max_{1 \le i \le M} p_i\big(y^{(1)},\dots,y^{(M)}\big).
\]

Now we will characterize the error exponent of the maximum likelihood test. By the symmetry of the problem, it is clear that $P_i\{\delta_{ML} \neq i\}$ is the same for every $i$, hence
\[
e(\delta_{ML},\mu) = P_1\big\{\delta_{ML}\big(y^{(1)},\dots,y^{(M)}\big) \neq 1\big\}.
\]

It now follows from the union bound that
\[
P_1\{L_2 \ge L_1\} \le P_1\big\{\delta_{ML} \neq 1\big\} \le (M-1)\,P_1\{L_2 \ge L_1\}.
\]

Since $M$ is finite, the left-hand side and the right-hand side share the same error probability exponent, so we just need to compute the exponent of $P_1\{L_2 \ge L_1\}$.

Let us use the notation
\[
Z_k \triangleq \log\frac{\mu\big(y^{(2)}_k\big)}{\pi\big(y^{(2)}_k\big)} - \log\frac{\mu\big(y^{(1)}_k\big)}{\pi\big(y^{(1)}_k\big)}, \qquad k = 1,\dots,n.
\]

Then, we can rewrite the probability as
\[
P_1\{L_2 \ge L_1\} = P\Big\{\frac{1}{n}\sum_{k=1}^{n} Z_k \ge 0\Big\}.
\]

Thus, we can apply Cramér's theorem directly:
\[
\lim_{n\to\infty} -\frac{1}{n}\log P\Big\{\frac{1}{n}\sum_{k=1}^{n} Z_k \ge z\Big\} = I(z),
\]
for $z > \mathbb{E}[Z_1]$, where $I(z) = \sup_{\lambda}\big(\lambda z - \log\mathbb{E}\big[e^{\lambda Z_1}\big]\big)$ is the large-deviation rate function.

In our case, $z = 0$ and $\mathbb{E}[Z_1] = -D(\pi\,\|\,\mu) - D(\mu\,\|\,\pi) < 0$. So
\[
\lim_{n\to\infty} -\frac{1}{n}\log P_1\{L_2 \ge L_1\} = I(0) = -\inf_{\lambda}\,\log\mathbb{E}\big[e^{\lambda Z_1}\big].
\]

We just need to compute the log-MGF of the random variable $Z_1$,
\[
\Lambda(\lambda) \triangleq \log\mathbb{E}\big[e^{\lambda Z_1}\big].
\]

Given that $y^{(1)}_1$ is generated from $\mu$ and $y^{(2)}_1$ is generated from $\pi$, we have
\[
\mathbb{E}\big[e^{\lambda Z_1}\big] = \mathbb{E}_{\pi}\Big[\Big(\frac{\mu(y)}{\pi(y)}\Big)^{\lambda}\Big]\,\mathbb{E}_{\mu}\Big[\Big(\frac{\pi(y)}{\mu(y)}\Big)^{\lambda}\Big] = g(\lambda)\,g(1-\lambda),
\]

where
\[
g(\lambda) \triangleq \int \mu^{\lambda}(y)\,\pi^{1-\lambda}(y)\,dy.
\]

In this case, it is easy to show that the error exponent is
\[
\alpha(\delta_{ML},\mu) = \max_{0 \le \lambda \le 1}\; -\log\big(g(\lambda)\,g(1-\lambda)\big). \tag{9}
\]

Since $-\log\big(g(\lambda)\,g(1-\lambda)\big)$ is concave in $\lambda$, equals $0$ at $\lambda = 0$ and $\lambda = 1$, and is symmetric about $\lambda = 1/2$, (9) is maximized when $\lambda = 1/2$, so
\[
\alpha(\delta_{ML},\mu) = -2\log\int\sqrt{\pi(y)\,\mu(y)}\,dy = 2B(\pi,\mu),
\]

where $B(\pi,\mu)$ is the Bhattacharyya distance between $\pi$ and $\mu$, which is defined as
\[
B(\pi,\mu) \triangleq -\log\int\sqrt{\pi(y)\,\mu(y)}\,dy.
\]
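
As a numerical sanity check on Theorem 1, the Bhattacharyya distance, and hence the exponent $2B(\pi,\mu)$, can be evaluated by quadrature; the Gaussian choices for $\pi$ and $\mu$ below are illustrative assumptions.

import numpy as np
from scipy import integrate
from scipy.stats import norm

def bhattacharyya(pi_pdf, mu_pdf, lo=-50.0, hi=50.0):
    # B(pi, mu) = -log int sqrt(pi(y) mu(y)) dy, computed by quadrature over
    # a finite interval that carries essentially all of the mass.
    val, _ = integrate.quad(lambda y: np.sqrt(pi_pdf(y) * mu_pdf(y)), lo, hi)
    return -np.log(val)

pi_pdf = lambda y: norm.pdf(y, loc=0.0, scale=1.0)   # illustrative pi
mu_pdf = lambda y: norm.pdf(y, loc=0.0, scale=1.5)   # illustrative mu
print("error exponent of the ML test:", 2.0 * bhattacharyya(pi_pdf, mu_pdf))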

Appendix B Proof of Theorem 2

To show the exponential consistency of our estimator, we invoke a result by Lugosi and Nobel [15], which specifies sufficient conditions on the partition of the space under which the empirical measure converges to the true measure.

Let $\mathcal{A}$ be a family of partitions of $\mathbb{R}$. The maximal cell count of $\mathcal{A}$ is given by
\[
m(\mathcal{A}) \triangleq \sup_{\mathcal{E} \in \mathcal{A}} |\mathcal{E}|,
\]

where $|\mathcal{E}|$ denotes the number of cells in the partition $\mathcal{E}$.

The complexity of $\mathcal{A}$ is measured by the growth function as described below. Fix $m$ points $z_1,\dots,z_m$ in $\mathbb{R}$.

Let $\Delta(\mathcal{A}; z_1,\dots,z_m)$ be the number of distinct partitions
\[
\big\{A \cap \{z_1,\dots,z_m\} : A \in \mathcal{E}\big\}, \qquad \mathcal{E} \in \mathcal{A},
\]
of the finite set $\{z_1,\dots,z_m\}$ that can be induced by partitions $\mathcal{E} \in \mathcal{A}$. Define the growth function of $\mathcal{A}$ as
\[
\Delta_m^*(\mathcal{A}) \triangleq \max_{z_1,\dots,z_m \in \mathbb{R}} \Delta(\mathcal{A}; z_1,\dots,z_m),
\]

which is the largest number of distinct partitions of any $m$-point subset of $\mathbb{R}$ that can be induced by the partitions in $\mathcal{A}$.

Lemma 1.

(Lugosi and Nobel [15]) Let $y_1,\dots,y_n$ be i.i.d. random variables in $\mathbb{R}$ with common distribution $\mu$, and let $\mu_n$ denote the empirical probability measure based on these $n$ samples. Let $\mathcal{A}$ be any collection of partitions of $\mathbb{R}$. Then, for each $n$ and every $\epsilon > 0$,
\[
P\Big\{\sup_{\mathcal{E} \in \mathcal{A}} \sum_{A \in \mathcal{E}} |\mu_n(A) - \mu(A)| > \epsilon\Big\} \le 4\,\Delta_{2n}^*(\mathcal{A})\,2^{m(\mathcal{A})}\,e^{-n\epsilon^2/32}. \tag{10}
\]

To prove Theorem 2, we consider the case where the typical distribution $\pi$ is known, and a given sequence $y = (y_1,\dots,y_n)$ is independently generated from an unknown distribution $\mu$. We further assume that $\mu$ and $\pi$ are both absolutely continuous probability measures defined on $\mathbb{R}$, and satisfy
\[
0 < K_1 \le \frac{d\mu}{d\pi}(y) \le K_2, \qquad \forall y \in \mathbb{R}.
\]

Denote the empirical probability measure based on the sequence $y$ by $\mu_n$ (since $y$ is generated from $\mu$), and define the empirical equiprobable partition as follows. The order statistics of $y$ are denoted by $(y_{(1)},\dots,y_{(n)})$, where $y_{(1)} \le y_{(2)} \le \cdots \le y_{(n)}$. The real line is partitioned into $T_n$ empirically equivalent segments $\{I_t\}_{t=1}^{T_n}$ exactly as in Section 3.2, where $\ell_n$ is the number of points in each interval except possibly the last one, and $T_n$ is the number of intervals. Assume that as $n \to \infty$, both $\ell_n \to \infty$ and $T_n \to \infty$. So our estimator can be written as
\[
\hat{D}_n(y\,\|\,\pi) = \sum_{t=1}^{T_n} \mu_n(I_t)\log\frac{\mu_n(I_t)}{\pi(I_t)}.
\]

If we denote the true equiprobable partition based on the true distribution $\mu$ by $\{I_t^*\}_{t=1}^{T_n}$, i.e., $\mu(I_t^*) = 1/T_n$ for each $t$, then define
\[
D_{T_n}(\mu\,\|\,\pi) \triangleq \sum_{t=1}^{T_n} \mu(I_t^*)\log\frac{\mu(I_t^*)}{\pi(I_t^*)}.
\]

The estimation error can be decomposed as
\[
\hat{D}_n(y\,\|\,\pi) - D(\mu\,\|\,\pi) = \underbrace{\big(D_{T_n}(\mu\,\|\,\pi) - D(\mu\,\|\,\pi)\big)}_{e_1} + \underbrace{\big(\hat{D}_n(y\,\|\,\pi) - D_{T_n}(\mu\,\|\,\pi)\big)}_{e_2}.
\]

Intuitively, $e_1$ is the approximation error caused by numerical integration, which diminishes as $T_n$ increases; $e_2$ is the estimation error caused by the difference of the empirical equivalent partition from the true equiprobable partition and the difference of the empirical probability measure of an interval from its true probability measure.

In addition, $e_1$ depends only on $T_n$ and the distributions $\mu$ and $\pi$, namely, $e_1$ is a deterministic term, while $e_2$ also depends on the data $y$, which is random. Next, we first bound the approximation error $e_1$.

Since $\mu(I_t^*) = 1/T_n$ for each $t$, the approximation error $e_1$ can be written as a sum over the cells $I_t^*$, and each term can be controlled by the mean value theorem together with the density ratio bounds in (4). Since $0 < K_1 \le \frac{d\mu}{d\pi} \le K_2$, we get
\[
|e_1| \le \frac{c_1}{T_n}, \tag{11}
\]

where $c_1$ is a constant that depends only on $K_1$ and $K_2$.

To get an exponential bound for $e_2$, we apply Lemma 1 to our problem. In our case, the cells are the empirically equivalent segments based on the empirical measure $\mu_n$. Suppose $\mathcal{A}$ is the collection of all partitions of $\mathbb{R}$ into $T_n$ empirically equiprobable intervals based on $n$ sample points. Then, from (10),
\[
P\Big\{\sup_{\mathcal{E} \in \mathcal{A}} \sum_{A \in \mathcal{E}} |\mu_n(A) - \mu(A)| > \epsilon\Big\} \le 4\,\Delta_{2n}^*(\mathcal{A})\,2^{T_n}\,e^{-n\epsilon^2/32}. \tag{12}
\]

If we want to obtain a meaningful exponential bound, we still need to verify two conditions in our case: as $n \to \infty$,
\[
\frac{1}{n}\,m(\mathcal{A}) \to 0 \qquad \text{and} \qquad \frac{1}{n}\log\Delta_{2n}^*(\mathcal{A}) \to 0.
\]

Here,
\[
m(\mathcal{A}) = T_n.
\]

Since $\ell_n \to \infty$ as $n \to \infty$, we have that
\[
\frac{1}{n}\,m(\mathcal{A}) = \frac{T_n}{n} \le \frac{1}{\ell_n} \to 0.
\]

Next, consider the growth function $\Delta_{2n}^*(\mathcal{A})$, which is defined as the largest number of distinct partitions of any $2n$-point subset of $\mathbb{R}$ that can be induced by the partitions in $\mathcal{A}$. Namely,
\[
\Delta_{2n}^*(\mathcal{A}) = \max_{z_1,\dots,z_{2n} \in \mathbb{R}} \Delta(\mathcal{A}; z_1,\dots,z_{2n}).
\]

In our algorithm, the partitioning number is at most the number of ways that $2n$ fixed points can be partitioned by $T_n$ intervals. Then
\[
\Delta_{2n}^*(\mathcal{A}) \le \binom{2n + T_n}{T_n}.
\]

Let $h(\cdot)$ be the binary entropy function, defined as
\[
h(p) \triangleq -p\log_2 p - (1-p)\log_2(1-p), \qquad p \in (0,1).
\]

By the inequality $\binom{m}{k} \le 2^{m\,h(k/m)}$, we obtain
\[
\Delta_{2n}^*(\mathcal{A}) \le \binom{2n+T_n}{T_n} \le 2^{(2n+T_n)\,h\left(\frac{T_n}{2n+T_n}\right)}.
\]

As $T_n/n \to 0$, the last inequality implies that
\[
\frac{1}{n}\log\Delta_{2n}^*(\mathcal{A}) \le \frac{(2n+T_n)\,h\big(\frac{T_n}{2n+T_n}\big)\log 2}{n} \to 0.
\]

Now, we can conclude that the inequality (12) is indeed an exponential bound: the coefficients $4\,\Delta_{2n}^*(\mathcal{A})$ and $2^{T_n}$ do not influence the exponent.
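
The claim that the growth-function factor does not affect the exponent can also be checked numerically: with $T_n$ on the order of $\sqrt{n}$, the quantity $\frac{1}{n}\log\binom{2n+T_n}{T_n}$ tends to zero. A small check of our own:

import math

def log_binom(a, b):
    # Natural log of the binomial coefficient C(a, b) via log-gamma.
    return math.lgamma(a + 1) - math.lgamma(b + 1) - math.lgamma(a - b + 1)

for n in [10**3, 10**4, 10**5, 10**6]:
    T = int(math.isqrt(n))                   # T_n ~ sqrt(n)
    print(n, log_binom(2 * n + T, T) / n)    # (1/n) log C(2n + T_n, T_n) -> 0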

Since the density ratio satisfies (4) and the partition is empirically equiprobable, the estimation error $e_2$ can be bounded in terms of the total variation of $\mu_n$ from $\mu$ over the partition, and the following holds:
\[
|e_2| \le c_2 \sup_{\mathcal{E} \in \mathcal{A}} \sum_{A \in \mathcal{E}} |\mu_n(A) - \mu(A)|, \tag{13}
\]
where $c_2$ is a constant that depends only on $K_1$ and $K_2$.

Combining (13) with (12), we can control the estimation error with the following bound:
\[
P\{|e_2| > \epsilon\} \le 4\,\Delta_{2n}^*(\mathcal{A})\,2^{T_n}\,e^{-n\epsilon^2/(32 c_2^2)}.
\]

Recall that
\[
\hat{D}_n(y\,\|\,\pi) - D(\mu\,\|\,\pi) = e_1 + e_2.
\]

Since we have shown in (12) and (13) that $e_2$ converges to $0$ exponentially fast, we have, for any $\epsilon > |e_1|$,
\[
P\big\{\big|\hat{D}_n(y\,\|\,\pi) - D(\mu\,\|\,\pi)\big| > \epsilon\big\} \le P\big\{|e_2| > \epsilon - |e_1|\big\}.
\]

Finally, we can compute the error exponent:
\[
\liminf_{n\to\infty} -\frac{1}{n}\log P\big\{\big|\hat{D}_n(y\,\|\,\pi) - D(\mu\,\|\,\pi)\big| > \epsilon\big\} \ge \liminf_{n\to\infty} \frac{(\epsilon - |e_1|)^2}{32\,c_2^2}.
\]

Since $e_1$ is the approximation error caused by numerical integration, $|e_1| \to 0$ as $T_n \to \infty$. We prove that
\[
\liminf_{n\to\infty} -\frac{1}{n}\log P\big\{\big|\hat{D}_n(y\,\|\,\pi) - D(\mu\,\|\,\pi)\big| > \epsilon\big\} \ge \frac{\epsilon^2}{32\,c_2^2} > 0,
\]
which establishes the exponential consistency of estimator (3).

Appendix C Proof of Theorem 3

Recall that our test is defined as
\[
\delta_{KL}\big(y^{(1)},\dots,y^{(M)}\big) = \arg\max_{1 \le i \le M} \hat{D}_n\big(y^{(i)}\,\|\,\pi\big).
\]

Now we will show that the proposed test is exponentially consistent. By the symmetry of the problem, it is clear that $P_i\{\delta_{KL} \neq i\}$ is the same for every $i$, hence
\[
e(\delta_{KL},\mu) = P_1\big\{\delta_{KL}\big(y^{(1)},\dots,y^{(M)}\big) \neq 1\big\}.
\]

It now follows from the union bound that
\[
P_1\big\{\delta_{KL} \neq 1\big\} \le (M-1)\,P_1\big\{\hat{D}_n\big(y^{(2)}\,\|\,\pi\big) \ge \hat{D}_n\big(y^{(1)}\,\|\,\pi\big)\big\}.
\]