Distributed Detection Fusion via Monte Carlo Importance Sampling


Hang Rao, Xiaojing Shen, Yunmin Zhu and Jianxin Pan. This work was supported in part by the open research funds of BACC-STAFDL of China under Grant No. 2015afdl010, the special funds of NEDD of China under Grant No. 201314, the NSF No. 61273074, and the PCSIRT 1273. Hang Rao, Xiaojing Shen (corresponding author), Yunmin Zhu and Jianxin Pan are with the Department of Mathematics, Sichuan University, Chengdu, Sichuan 610064, China. E-mail: shenxj@scu.edu.cn, ymzhu@scu.edu.cn, jianxin.pan@manchester.ac.uk.
Abstract

Distributed detection fusion with high-dimensional, conditionally dependent observations is known to be a challenging problem. When the fusion rule is fixed, this paper attempts to make progress on this problem for large sensor networks by proposing a new Monte Carlo framework. Through Monte Carlo importance sampling, we derive a necessary condition for optimal sensor decision rules in the sense of minimizing the approximated Bayesian cost function. A Gauss-Seidel (person-by-person) optimization algorithm can then be obtained to search for the optimal sensor decision rules. It is proved that the discretized algorithm converges in finitely many steps. The complexity of the new algorithm is $O(LN)$, compared with $O(LN^L)$ for the previous algorithms, where $L$ is the number of sensors and $N$ is a constant. Thus, the proposed method allows us to design large sensor networks with general high-dimensional dependent observations. Furthermore, an interesting result is that, for the fixed AND or OR fusion rule, we can analytically derive the optimal solution in the sense of minimizing the approximated Bayesian cost function. In general, the solution of the Gauss-Seidel algorithm is only locally optimal. However, in the new framework, we can prove that the solution of the Gauss-Seidel algorithm coincides with the analytically optimal solution in the case of the AND or OR fusion rule. Typical examples with dependent observations and a large number of sensors are examined under this new framework. The numerical results demonstrate the effectiveness of the new algorithm.

Keywords: Distributed detection, Monte Carlo importance sampling, dependent observations, sensor decision rule, fusion rule

1 Introduction

Distributed signal detection has received significant attention in surveillance applications over the past thirty years [1, 2, 3, 4, 5, 6, 7, 8]. Tenney and Sandell [1] first considered the Bayesian formulation of distributed detection for parallel sensor network structures and proved that the optimal decision rules at the sensors are likelihood ratio (LR) tests for conditionally independent sensor observations. However, the optimal LR thresholds at the individual sensors can only be obtained by solving a set of coupled nonlinear equations. When the sensor decision rules are fixed, Chair and Varshney [3] derived the optimal fusion rule based on the LR test. For conditionally independent sensor observations, many excellent results on distributed detection have been derived; they are summarized in [4] and the references therein. The emergence of wireless sensor networks [7] motivated the extension of the optimality of LR thresholds to non-ideal detection systems in which sensor outputs are communicated through noisy, possibly coupled channels to the fusion center [6, 9, 10].

Much less attention has been paid to sensor decision rules for generally dependent observations, a problem long considered difficult (see, e.g., [1, 2, 11]). Tsitsiklis and Athans [2] provided a rigorous mathematical analysis demonstrating the computational difficulty of obtaining the optimal sensor decision rules for dependent sensor observations. However, some progress has been made for special cases of dependent observations (see, e.g., [12, 13, 14, 15, 16, 17, 18]). Willett et al. [18] discussed the difficulties of dealing with dependent observations. Zhu et al. [19] proposed a computationally efficient iterative algorithm that computes a discrete approximation of the optimal sensor decision rules for general dependent observations and a fixed fusion rule; this algorithm converges in finitely many steps. In [20], the authors developed an efficient algorithm to simultaneously search for the optimal fusion rule and the optimal sensor rules by combining the methods of Chair and Varshney [3] and Zhu et al. [19]. Recently, a new framework for distributed detection with conditionally dependent observations was introduced in [21], which identifies several classes of problems with dependent observations whose optimal sensor decision rules resemble those for the independent case.

Although large sensor networks have attracted much attention in both theory and applications [22, 23, 24], little progress has been made on sensor decision rules for large sensor networks with generally dependent observations. The fundamental reason is that the computational complexity of the previous algorithms is $O(LN^L)$, where $L$ is the number of sensors and $N$ is a given constant. In this paper, we propose a new Monte Carlo framework to overcome this limitation of the discretized algorithms in [19, 20] for large sensor networks. Through Monte Carlo importance sampling [25, 26], the Bayesian cost function is approximated by a sample average via the strong law of large numbers. We then derive a necessary condition for optimal sensor decision rules, from which a Gauss-Seidel optimization algorithm can be obtained to search for the optimal sensor decision rules. It is proved that the new discretized algorithm converges in finitely many steps. The complexity of the new algorithm is $O(LN)$, compared with $O(LN^L)$ for the algorithms in [19, 20]. Thus, the proposed method allows us to design large sensor networks with generally dependent observations. Furthermore, an interesting result is that, for the fixed AND or OR fusion rule, we can analytically derive the optimal solution in the sense of minimizing the approximated Bayesian cost function. In general, the solution of the Gauss-Seidel algorithm is only locally optimal. However, in the new framework, we can prove that the solution of the Gauss-Seidel algorithm coincides with the analytically optimal solution when the fusion rule is AND or OR. Typical examples with dependent observations and a large number of sensors are examined under this new framework. The numerical results demonstrate the effectiveness of the new algorithm; in particular, the performance of the algorithm based on a Gaussian-mixture trial distribution is better than that based on a Gaussian trial distribution.

The rest of the paper is organized as follows. Preliminaries are given in Section 2, including the problem formulation and the Monte Carlo approximation of the cost function. In Section 3, necessary conditions for optimal sensor decision rules are given. In Section 4, a Gauss-Seidel iterative algorithm is presented based on the necessary conditions, and its convergence is proved. For the fixed AND or OR fusion rule, the optimal solution in the sense of minimizing the approximated Bayesian cost function is analytically derived, and we prove that the solution of the Gauss-Seidel algorithm coincides with this analytically optimal solution. In Section 5, numerical examples are given that exhibit the effectiveness of the new algorithms. In Section 6, we draw conclusions.

2 Preliminaries

2.1 Problem formulation

The $L$-sensor Bayesian detection model with two hypotheses $H_0$ and $H_1$ is considered as follows. A parallel architecture is assumed. The $i$-th sensor compresses its $n_i$-dimensional vector observation $y_i \in \mathbb{R}^{n_i}$ to one bit: $u_i = d_i(y_i) \in \{0,1\}$, $i = 1, \ldots, L$. In this paper, we consider deterministic (non-randomized) decision rules. When the fusion rule $F$ is fixed, the distributed multisensor Bayesian decision problem is to minimize the following Bayesian cost function by optimizing the sensor decision rules $(d_1, \ldots, d_L)$,

$$C(d_1,\ldots,d_L;F)=\sum_{i=0}^{1}\sum_{j=0}^{1}c_{ij}\,P_j\,P(F=i\mid H_j),\qquad(1)$$

where $c_{ij}$ are the known cost coefficients, $P_0$ and $P_1$ are the prior probabilities of the hypotheses $H_0$ and $H_1$, and $P(F=i\mid H_j)$ is the probability that the fusion center decides hypothesis $H_i$ given that hypothesis $H_j$ is true. The general form of the binary fusion rule is denoted by an indicator function on a set $\mathcal{U}_1\subseteq\{0,1\}^L$:

$$F(u_1,\ldots,u_L)=\begin{cases}1, & (u_1,\ldots,u_L)\in\mathcal{U}_1,\\[2pt] 0, & \text{otherwise}.\end{cases}\qquad(2)$$

Note that a fusion rule is a binary division of the set $\mathcal{U}=\{0,1\}^L$; since the number of elements of $\mathcal{U}$ is $2^L$, there exist $2^{2^L}$ fusion rules. Let $u^k$ be the $k$-th element of $\mathcal{U}$, $k=1,\ldots,2^L$. Every $u^k$ is an $L$-dimensional vector with entries $u_i^k=0$ or $1$. For convenience, we denote by $\mathcal{U}_1$ and $\mathcal{U}_0$ the sets of elements of $\mathcal{U}$ for which the fusion rule takes the decisions $F=1$ and $F=0$, respectively, i.e.

$$\mathcal{U}_1=\{u\in\mathcal{U}: F(u)=1\},\qquad(3)$$
$$\mathcal{U}_0=\{u\in\mathcal{U}: F(u)=0\}.\qquad(4)$$

Moreover, we let $a$ and $b$ denote

$$a=P_0\,(c_{10}-c_{00}),\qquad(5)$$
$$b=P_1\,(c_{01}-c_{11}).\qquad(6)$$

Obviously, $a>0$ and $b>0$ under the usual assumption that $c_{10}>c_{00}$ and $c_{01}>c_{11}$, i.e., a wrong decision costs more than a correct one. Suppose that $p(y_1,\ldots,y_L\mid H_0)$ and $p(y_1,\ldots,y_L\mid H_1)$ are the known conditional joint probability density functions of the observations under the two hypotheses.
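For example, for $L=2$ sensors, the AND rule corresponds to $\mathcal{U}_1=\{(1,1)\}$ and $\mathcal{U}_0=\{(0,0),(0,1),(1,0)\}$, while the OR rule corresponds to $\mathcal{U}_1=\{(0,1),(1,0),(1,1)\}$ and $\mathcal{U}_0=\{(0,0)\}$.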

Substituting the definitions of the fusion rule (2) and the sensor decision rules into (1) and simplifying, we have

$$C(d_1,\ldots,d_L;F)=c+\int I_{\mathcal{U}_1}\big(d_1(y_1),\ldots,d_L(y_L)\big)\,\big[a\,p(y\mid H_0)-b\,p(y\mid H_1)\big]\,dy,\qquad(7)$$

where $y=(y_1,\ldots,y_L)$ and $I_{\mathcal{U}_1}$ is the indicator function of the set $\mathcal{U}_1$,

$$I_{\mathcal{U}_1}(u)=\begin{cases}1, & u\in\mathcal{U}_1,\\[2pt] 0, & \text{otherwise},\end{cases}\qquad(8)$$
$$c=c_{00}P_0+c_{01}P_1,\qquad(9)$$

and $a$, $b$, $c$ are fixed constants.

The indicator function $I_{\mathcal{U}_1}$ can be written as an equivalent polynomial in the sensor decision rules and the fusion rule as follows (see [20]):

$$I_{\mathcal{U}_1}\big(d_1(y_1),\ldots,d_L(y_L)\big)=d_i(y_i)\,P_{i1}+P_{i2},\qquad i=1,\ldots,L,\qquad(11)$$

where, for $i=1,\ldots,L$,

$$P_{i1}=\sum_{u\in\mathcal{U}_1,\,u_i=1}\;\prod_{j\neq i}d_j(y_j)^{u_j}\big(1-d_j(y_j)\big)^{1-u_j}-\sum_{u\in\mathcal{U}_1,\,u_i=0}\;\prod_{j\neq i}d_j(y_j)^{u_j}\big(1-d_j(y_j)\big)^{1-u_j},\qquad(12)$$
$$P_{i2}=\sum_{u\in\mathcal{U}_1,\,u_i=0}\;\prod_{j\neq i}d_j(y_j)^{u_j}\big(1-d_j(y_j)\big)^{1-u_j}.\qquad(13)$$

Note that both $P_{i1}$ and $P_{i2}$ are independent of $d_i(y_i)$ for $i=1,\ldots,L$. For convenience, we also denote them by $P_{i1}(y^{\setminus i})$ and $P_{i2}(y^{\setminus i})$, respectively, where $y^{\setminus i}$ collects all observations except $y_i$. Moreover, (11) is a key equation in the results that follow.
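To make the decomposition concrete, the following minimal Python sketch evaluates $P_{i1}$ and $P_{i2}$ for binary sensor decisions, based on the decomposition reconstructed in (11)–(13); the function name and the representation of $\mathcal{U}_1$ as a set of tuples are our own illustrative choices.

```python
def P_coeffs(U1, d, i):
    """Coefficients of the decomposition I_{U1} = d_i * P_i1 + P_i2, cf. (11).

    U1 -- set of L-tuples of 0/1 on which the fusion rule decides H1
    d  -- tuple of current binary decisions (d_1(y_1), ..., d_L(y_L)); d[i] is ignored
    i  -- index of the sensor singled out in the decomposition
    """
    def match(v):
        # product over j != i of d_j^{v_j} (1 - d_j)^{1 - v_j}; for binary d_j
        # this equals 1 iff v agrees with d on every coordinate j != i
        return all(v[j] == d[j] for j in range(len(d)) if j != i)

    P_i1 = sum(match(v) for v in U1 if v[i] == 1) \
         - sum(match(v) for v in U1 if v[i] == 0)
    P_i2 = sum(match(v) for v in U1 if v[i] == 0)
    return P_i1, P_i2
```

For instance, for the AND rule with $L=2$, `P_coeffs({(1, 1)}, (1, 1), 0)` returns `(1, 0)`, so $I_{\mathcal{U}_1}=d_1(y_1)\cdot 1+0$, as expected when the other sensor decides $1$.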

2.2 Monte Carlo importance sampling

In this section, we present an approximation of the cost function (7) by Monte Carlo importance sampling (see, e.g., [25, 26]). More specifically, assume that the samples $x^{(1)},\ldots,x^{(N)}$ are drawn from a population with a given trial density $g(x)$, where $x=(x_1,\ldots,x_L)$. From (7),

$$C=c+\int I_{\mathcal{U}_1}\big(d_1(y_1),\ldots,d_L(y_L)\big)\,\frac{a\,p(y\mid H_0)-b\,p(y\mid H_1)}{g(y)}\,g(y)\,dy\qquad(14)$$
$$=c+E_g\!\left[I_{\mathcal{U}_1}\big(d_1(x_1),\ldots,d_L(x_L)\big)\,\frac{a\,p(x\mid H_0)-b\,p(x\mid H_1)}{g(x)}\right]\qquad(15)$$
$$\approx c+\frac{1}{N}\sum_{j=1}^{N}I_{\mathcal{U}_1}\big(d_1(x_1^{(j)}),\ldots,d_L(x_L^{(j)})\big)\,\frac{a\,p(x^{(j)}\mid H_0)-b\,p(x^{(j)}\mid H_1)}{g(x^{(j)})}\qquad(16)$$
$$=:\bar{C}_N(d_1,\ldots,d_L;F),\qquad(17)$$

where $g$ is the trial density, chosen so that the integrand in (14) is well-defined; (15) follows from rewriting (14) as an expectation $E_g[\cdot]$ with respect to $g$; and the sample average in (16) is denoted $\bar{C}_N$ in (17). Based on the strong law of large numbers, (15) can be approximated by (16), i.e., $\bar{C}_N\to C$ a.s. as $N\to\infty$. The optimal (minimum-variance) trial density is proportional to the absolute value of the integrand of (7) (see, e.g., [25, 26]). By (11), (13) and (16), we have

$$\bar{C}_N=c+\frac{1}{N}\sum_{j=1}^{N}\Big[d_i(x_i^{(j)})\,P_{i1}^{(j)}+P_{i2}^{(j)}\Big]\,w^{(j)},\qquad i=1,\ldots,L,\qquad(19)$$

where $P_{i1}^{(j)}$ and $P_{i2}^{(j)}$ denote $P_{i1}$ and $P_{i2}$ evaluated at the sample $x^{(j)}$, and $w^{(j)}=\big[a\,p(x^{(j)}\mid H_0)-b\,p(x^{(j)}\mid H_1)\big]/g(x^{(j)})$.
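As a concrete illustration of (16), the following Python sketch estimates $\bar{C}_N$ for a two-sensor system with correlated Gaussian observations, an AND fusion rule, and a Gaussian trial density. All concrete choices here (densities, thresholds, the function name `approx_cost`) are our own illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from scipy.stats import multivariate_normal

def approx_cost(d, U1, a, b, c, p0, p1, g, X):
    """Monte Carlo importance-sampling estimate C_bar_N of (16)."""
    weights = (a * p0(X) - b * p1(X)) / g(X)                 # w^(j) in (19)
    u = np.column_stack([d_i(X[:, i]) for i, d_i in enumerate(d)])
    in_U1 = np.array([tuple(row) in U1 for row in u])        # indicator I_{U_1}
    return c + np.mean(in_U1 * weights)

# Illustrative two-sensor setup (our assumption, not the paper's example).
rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])                   # dependent observations
h0 = multivariate_normal(mean=[0.0, 0.0], cov=Sigma)         # p(y | H0)
h1 = multivariate_normal(mean=[1.0, 1.0], cov=Sigma)         # p(y | H1)
g = multivariate_normal(mean=[0.5, 0.5], cov=2 * Sigma)      # trial density
X = g.rvs(size=5000, random_state=rng)                       # N samples from g

P0 = P1 = 0.5                                # equal priors
# Minimum-error-probability costs c00 = c11 = 0, c01 = c10 = 1, so that the
# constants reconstructed in (5), (6) and (9) become a = b = c = 0.5.
a, b, c = P0 * 1.0, P1 * 1.0, P1 * 1.0
U1 = {(1, 1)}                                # AND fusion rule
d = [lambda x: (x > 0.5).astype(int)] * 2    # threshold sensor rules

print(approx_cost(d, U1, a, b, c, h0.pdf, h1.pdf, g.pdf, X))
```

With these minimum-error-probability costs, the returned value estimates the error probability of the fused decision.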

3 Necessary Conditions For Optimum Sensor Decision Rules

The distributed detection fusion problem is to minimize the Bayesian cost function (7). Based on the Monte Carlo approximation (17), we concentrate on selecting a set of optimal sensor decision rules such that the approximated cost function $\bar{C}_N$ is minimized.

Firstly, we prove that the minimum of the approximated cost function converges to the infimum of the cost function as the sample size tends to infinity, under some mild assumptions. Since deterministic (non-randomized) decision rules are considered in this paper, in the following sections we assume that the samples drawn from the trial distribution have been fixed, so that $\bar{C}_N$ has no randomness.

Theorem 3.1.

Let $C^*$ be the infimum of $C(d_1,\ldots,d_L;F)$ and $\bar{C}_N^*$ be the minimum of the Monte Carlo approximation (17), where $(d_1,\ldots,d_L)$ are the decision variables. If $\bar{C}_N$ satisfies

$$\big|\bar{C}_N(d_1,\ldots,d_L;F)-C(d_1,\ldots,d_L;F)\big|\le \frac{M}{\sqrt{N}},\qquad(20)$$

where the constant $M$ does not depend on $N$ and $(d_1,\ldots,d_L)$, then we have

$$\lim_{N\to\infty}\bar{C}_N^*=C^*.\qquad(21)$$
Proof.

By the definition of $C^*$, for arbitrary $\varepsilon>0$, there exists a set of sensor rules $(d_1^\varepsilon,\ldots,d_L^\varepsilon)$ such that

$$C(d_1^\varepsilon,\ldots,d_L^\varepsilon;F)\le C^*+\varepsilon.$$

By (20), there exists $N_0$ such that for any $N\ge N_0$,

$$\bar{C}_N(d_1^\varepsilon,\ldots,d_L^\varepsilon;F)\le C(d_1^\varepsilon,\ldots,d_L^\varepsilon;F)+\varepsilon.$$

Thus, $\bar{C}_N(d_1^\varepsilon,\ldots,d_L^\varepsilon;F)\le C^*+2\varepsilon$. By the definition of $\bar{C}_N^*$, we have

$$\bar{C}_N^*\le\bar{C}_N(d_1^\varepsilon,\ldots,d_L^\varepsilon;F)\le C^*+2\varepsilon,$$

which implies that

$$\limsup_{N\to\infty}\bar{C}_N^*\le C^*+2\varepsilon.$$

Since $\varepsilon$ is arbitrary, we have

$$\limsup_{N\to\infty}\bar{C}_N^*\le C^*.\qquad(22)$$

On the other hand, suppose that

$$\liminf_{N\to\infty}\bar{C}_N^*<C^*.$$

Then there would be a positive constant $\delta$ and a sequence $\{N_k\}$ such that $N_k\to\infty$ and

$$\bar{C}_{N_k}^*\le C^*-\delta.\qquad(23)$$

For every such $N_k$, there must be a set of sensor rules $(d_1^{N_k},\ldots,d_L^{N_k})$ such that

$$\bar{C}_{N_k}(d_1^{N_k},\ldots,d_L^{N_k};F)=\bar{C}_{N_k}^*.$$

Using the inequality (20) and (23), for large enough $N_k$, we have

$$C(d_1^{N_k},\ldots,d_L^{N_k};F)\le\bar{C}_{N_k}^*+\frac{M}{\sqrt{N_k}}\le C^*-\frac{\delta}{2}<C^*,$$

which contradicts the definition of $C^*$ as the infimum of $C$. Therefore,

$$\liminf_{N\to\infty}\bar{C}_N^*\ge C^*.\qquad(24)$$

By the inequalities (22) and (24), the limit exists and equals $C^*$,

which implies (21).  q.e.d.

Remark 3.2.

The assumption (20) is not restrictive since, by the central limit theorem, the error of this Monte Carlo approximation is $O(N^{-1/2})$ regardless of the dimensionality of $x$ (see [25]).

Secondly, we derive the necessary conditions for optimal sensor decision rules in the sense of minimizing $\bar{C}_N$ for a parallel distributed detection system.

Theorem 3.3.

If $(d_1,\ldots,d_L)$ is a set of optimal sensor decision rules which minimizes $\bar{C}_N$ in (16) in a parallel distributed Bayesian detection fusion system, then $(d_1,\ldots,d_L)$ must satisfy the following equations:

$$d_1(x_1^{(j)})=I\big[P_{11}^{(j)}\,w^{(j)}\big],\qquad j=1,\ldots,N,\qquad(25)$$
$$\vdots$$
$$d_L(x_L^{(j)})=I\big[P_{L1}^{(j)}\,w^{(j)}\big],\qquad j=1,\ldots,N,\qquad(27)$$

where $P_{i1}^{(j)}$ and $w^{(j)}$ are defined as in (19), and $I[\cdot]$ is the indicator function

$$I[z]=\begin{cases}1, & z<0,\\[2pt] 0, & z\ge 0.\end{cases}\qquad(28)$$
Proof.

Since both $P_{i1}^{(j)}$ and $P_{i2}^{(j)}$ are independent of $d_i$ for $i=1,\ldots,L$, the coefficient multiplying $d_i(x_i^{(j)})$ in the Monte Carlo approximation (16) is $P_{i1}^{(j)}w^{(j)}/N$. Hence, if $(d_1,\ldots,d_L)$ minimizes (16), then $d_i(x_i^{(j)})$ should be equal to $1$ when this coefficient is negative, and otherwise it should be equal to $0$. Thus, by (19) and the definition of $I[\cdot]$ in (28), we have (25)–(27).  q.e.d.
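As an illustration under the decomposition reconstructed in (12)–(13), consider the OR rule, for which $\mathcal{U}_1$ contains every $u\neq(0,\ldots,0)$. Setting $u_i=1$ always lands in $\mathcal{U}_1$, while setting $u_i=0$ lands in $\mathcal{U}_1$ exactly when some other sensor decides $1$; hence $P_{i1}^{(j)}=1$ if all other sensors decide $0$ at sample $j$, and $P_{i1}^{(j)}=0$ otherwise. The condition (25)–(27) then reduces to $d_i(x_i^{(j)})=I\big[w^{(j)}\big]$ whenever the other sensors all decide $0$, which suggests why the AND and OR rules admit the analytical solution discussed in Section 4.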

4 Monte Carlo Gauss-Seidel Iterative Algorithm and Its Convergence

4.1 Monte Carlo Gauss-Seidel Iterative Algorithm

Let the sensor decision rules at the $k$-th stage of the iteration be denoted by $(d_1^{(k)},\ldots,d_L^{(k)})$, with the initial set $(d_1^{(0)},\ldots,d_L^{(0)})$. Suppose the fusion rule $F$ is fixed. Based on Theorem 3.3, we can derive a Gauss-Seidel iterative algorithm for minimizing $\bar{C}_N$ in (17) as follows.

Algorithm 4.1 (Monte Carlo Gauss-Seidel iterative algorithm).

  • Step 1: Draw $N$ samples $x^{(1)},\ldots,x^{(N)}$ from an importance density $g(x)$.

  • Step 2: Given a fusion rule $F$ and initialized sensor decision rules $(d_1^{(0)},\ldots,d_L^{(0)})$, compute the initial decisions

    $$d_i^{(0)}(x_i^{(j)}),\qquad i=1,\ldots,L,\ j=1,\ldots,N.\qquad(29)$$
  • Step 3: Iteratively update the sensor decision rules for better system performance until the termination criterion of Step 4 is satisfied; a sketch of one sweep is given after the algorithm. The $(k+1)$-th stage of the iteration is as follows:

    $$d_1^{(k+1)}(x_1^{(j)})=I\big[P_{11}^{(j)}\big(d_2^{(k)},\ldots,d_L^{(k)}\big)\,w^{(j)}\big],\qquad j=1,\ldots,N,\qquad(30)$$
    $$\vdots$$
    $$d_L^{(k+1)}(x_L^{(j)})=I\big[P_{L1}^{(j)}\big(d_1^{(k+1)},\ldots,d_{L-1}^{(k+1)}\big)\,w^{(j)}\big],\qquad j=1,\ldots,N.\qquad(32)$$
  • Step 4: A termination criterion for the iteration process is, for $i=1,\ldots,L$ and $j=1,\ldots,N$,

    $$d_i^{(k+1)}(x_i^{(j)})=d_i^{(k)}(x_i^{(j)}).\qquad(33)$$
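The following minimal Python sketch implements one version of this Gauss-Seidel sweep on the sample points, under the polynomial decomposition reconstructed in (11)–(13); the data layout (a 0/1 matrix `U` of current decisions) and all names are our own illustrative choices.

```python
import numpy as np

def gauss_seidel_sweeps(U1, w, U, max_iter=100):
    """Gauss-Seidel iteration of Algorithm 4.1 on the sample points.

    U1 -- set of L-tuples of 0/1 on which the fusion rule decides H1
    w  -- (N,) importance weights w^(j) = [a p(x|H0) - b p(x|H1)] / g(x)
    U  -- (N, L) 0/1 array; U[j, i] holds the current decision d_i(x_i^(j))
    """
    N, L = U.shape
    for _ in range(max_iter):
        changed = False
        for i in range(L):                 # sweep the sensors in turn
            for j in range(N):             # update d_i at every sample point
                u = U[j].copy()
                u[i] = 1
                in1 = tuple(u) in U1       # fusion decision if d_i were 1
                u[i] = 0
                in0 = tuple(u) in U1       # fusion decision if d_i were 0
                P_i1 = int(in1) - int(in0)        # coefficient from (12)
                new = int(P_i1 * w[j] < 0)        # condition (25)-(27)
                if new != U[j, i]:
                    U[j, i] = new
                    changed = True
        if not changed:                    # termination criterion (33)
            break
    return U
```

Each update amounts to a constant number of set-membership tests, so one full sweep performs $O(LN)$ such tests, matching the per-iteration operation count discussed in Remark 4.3.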
Remark 4.2.

Once we obtain $d_i(x_i^{(j)})$ for $j=1,\ldots,N$, the rule $d_i(y_i)$ can be extended to an arbitrary observation $y_i$ by a nearest-neighbor assignment: define $d_i(y_i)=d_i(x_i^{(j)})$ when the distance $\|y_i-x_i^{(j)}\|$ is less than $\|y_i-x_i^{(m)}\|$ for all $m\neq j$. In this way, we obtain $d_i(y_i)$ for every $i=1,\ldots,L$.
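A one-line realization of this nearest-neighbor extension, with our own function name and the Euclidean norm as the assumed distance:

```python
import numpy as np

def extend_rule(xi_samples, di_values, y_i):
    """Extend d_i from the sample points to an arbitrary observation y_i (Remark 4.2).

    xi_samples -- (N, n_i) array of the i-th coordinates of the samples
    di_values  -- (N,) 0/1 array of decisions d_i(x_i^(j)) from Algorithm 4.1
    """
    j = np.argmin(np.linalg.norm(xi_samples - y_i, axis=1))  # nearest sample point
    return di_values[j]
```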

Remark 4.3.

The main computational burden of Algorithm 4.1 lies in (30)–(32). If $N$ equals the number of discretized points per sensor in (10) of [19], then the terms $P_{i1}^{(j)}$ are computed $LN$ times per iteration in Algorithm 4.1, whereas in [19] they are computed $LN^L$ times. In the next subsection, we prove that Algorithm 4.1 terminates in finitely many steps. Thus, the computational complexity of Algorithm 4.1 is $O(LN)$, compared with $O(LN^L)$ for the algorithm in [19].
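To see the scale of this gap: with $L=10$ sensors and $N=100$ points per sensor, $LN=10^3$ evaluations per iteration, whereas $LN^L=10\cdot 100^{10}=10^{21}$, which is far beyond feasible computation.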

4.2 Convergence of Monte Carlo Gauss-Seidel Iterative Algorithm

Now we prove that Algorithm 4.1 must converge to a locally optimal value and cannot oscillate infinitely often, i.e., it terminates after a finite number of iterations.

For convenience, we denote the cost (19) evaluated at the $k$-th iteration by

$$\bar{C}_N^{(k)}=\bar{C}_N\big(d_1^{(k)},\ldots,d_L^{(k)};F\big).\qquad(34)$$

Similarly, for $i=1,\ldots,L$, we denote the iterative terms in (30)–(32) at the $k$-th iteration by

$$P_{i1}^{(j,k)}=P_{i1}^{(j)}\big(d_1^{(k+1)},\ldots,d_{i-1}^{(k+1)},d_{i+1}^{(k)},\ldots,d_L^{(k)}\big).\qquad(35)$$
Lemma 4.4.

$\bar{C}_N^{(k)}$ is non-increasing as $k$ is increased and