Distributed Detection Fusion via Monte Carlo Importance Sampling
Abstract
Distributed detection fusion with high-dimensional, conditionally dependent observations is known to be a challenging problem. When a fusion rule is fixed, this paper attempts to make progress on this problem for large sensor networks by proposing a new Monte Carlo framework. Through Monte Carlo importance sampling, we derive a necessary condition for optimal sensor decision rules in the sense of minimizing the approximated Bayesian cost function. A Gauss-Seidel/person-by-person optimization algorithm can then be obtained to search for the optimal sensor decision rules. It is proved that the discretized algorithm is finitely convergent. The complexity of the new algorithm is $O(LN)$, compared with $O(LN^L)$ for the previous algorithm, where $L$ is the number of sensors and $N$ is a constant. Thus, the proposed method allows us to design large sensor networks with general high-dimensional dependent observations. Furthermore, an interesting result is that, for the fixed AND or OR fusion rules, we can analytically derive the optimal solution in the sense of minimizing the approximated Bayesian cost function. In general, the solution of the Gauss-Seidel algorithm is only locally optimal. However, in the new framework, we can prove that the solution of the Gauss-Seidel algorithm is the same as the analytically optimal solution in the case of the AND or OR fusion rule. Typical examples with dependent observations and a large number of sensors are examined under this new framework. The results of the numerical examples demonstrate the effectiveness of the new algorithm.
keywords: Distributed detection, Monte Carlo importance sampling, dependent observations, sensor decision rule, fusion rule
1 Introduction
Distributed signal detection has received significant attention in surveillance applications over the past thirty years [1, 2, 3, 4, 5, 6, 7, 8]. Tenney and Sandell [1] first considered the Bayesian formulation of distributed detection for parallel sensor network structures and proved that the optimal decision rules at the sensors are likelihood ratio (LR) tests for conditionally independent sensor observations. However, the optimal LR thresholds at the individual sensors can only be obtained by solving a set of coupled nonlinear equations. When the sensor decision rules are fixed, Chair and Varshney [3] derived an optimal fusion rule based on the LR test. For conditionally independent sensor observations, many excellent results on distributed detection have been derived; they are summarized in [4] and the references therein. Emerging wireless sensor networks [7] motivated extending the optimality of LR thresholds to nonideal detection systems in which sensor outputs are communicated through noisy, possibly coupled channels to the fusion center [6, 9, 10].
Much less attention has been paid to sensor decision rules for generally dependent observations, which were considered to be difficult (see, e.g., [1, 2, 11]). Tsitsiklis and Athans [2] provided a rigorous mathematical analysis demonstrating the computational difficulty of obtaining the optimal sensor decision rules for dependent sensor observations. However, some progress has been made for special cases of dependent observations (see, e.g., [12, 13, 14, 15, 16, 17, 18]). Willett et al. [18] discussed the difficulties of dealing with dependent observations. Zhu et al. [19] proposed a computationally efficient iterative algorithm that computes a discrete approximation of the optimal sensor decision rules for general dependent observations and a fixed fusion rule. This algorithm converges in finitely many steps. In [20], the authors developed an efficient algorithm to simultaneously search for the optimal fusion rule and the optimal sensor rules by combining the methods of Chair and Varshney [3] and Zhu et al. [19]. Recently, a new framework for distributed detection with conditionally dependent observations was introduced in [21], which can identify several classes of problems with dependent observations whose optimal sensor decision rules resemble those for the independent case.
Although large sensor networks have attracted much attention in both theory and application [22, 23, 24], the study of sensor decision rules for large sensor networks with general dependent observations has made little progress. The fundamental reason is that the computational complexity of the previous algorithms is $O(LN^L)$, where $L$ is the number of sensors and $N$ is a given constant. In this paper, we propose a new Monte Carlo framework to overcome the limitation of the discretized algorithms in [19, 20] for large sensor networks. Through Monte Carlo importance sampling [25, 26], the Bayesian cost function is approximated by a sample average via the strong law of large numbers. Then, we derive a necessary condition for optimal sensor decision rules so that a Gauss-Seidel optimization algorithm can be obtained to search for the optimal sensor decision rules. It is proved that the new discretized algorithm is finitely convergent. The complexity of the new algorithm is of order $O(LN)$, compared with $O(LN^L)$ for the algorithms in [19, 20]. Thus, the proposed method allows us to design large sensor networks with general dependent observations. Furthermore, an interesting result is that, for the fixed AND or OR fusion rules, we can analytically derive the optimal solution in the sense of minimizing the approximated Bayesian cost function. In general, the solution of the Gauss-Seidel algorithm is only locally optimal. However, in the new framework, we can prove that the solution of the Gauss-Seidel algorithm is the same as the analytically optimal solution when the fusion rule is AND or OR. Typical examples with dependent observations and a large number of sensors are examined under this new framework. The results of the numerical examples demonstrate the effectiveness of the new algorithm. The performance of the new algorithm based on a mixture-Gaussian trial distribution is better than that based on a Gaussian trial distribution.
The rest of the paper is organized as follows. Preliminaries are given in Section 2, including the problem formulation and the Monte Carlo approximation of the cost function. In Section 3, necessary conditions for optimal sensor decision rules are given. In Section 4, a Gauss-Seidel iterative algorithm is presented based on the necessary conditions, and its convergence is proved. For the fixed AND or OR fusion rules, the optimal solution in the sense of minimizing the approximated Bayesian cost function can be analytically derived; moreover, we prove that the solution of the Gauss-Seidel algorithm is the same as the analytically optimal solution in the case of the AND or OR fusion rule. In Section 5, numerical examples are given that exhibit the effectiveness of the new algorithm. In Section 6, we draw conclusions.
2 Preliminaries
2.1 Problem formulation
A Bayesian detection model with two hypotheses $H_0$ and $H_1$ is considered as follows. A parallel architecture is assumed. The $i$th sensor compresses its $n_i$-dimensional vector observation $y_i$ to one bit: $u_i=I_i(y_i)\in\{0,1\}$. In this paper, we consider deterministic (nonrandomized) decision rules. When the fusion rule $F$ is fixed, the distributed multisensor Bayesian decision problem is to minimize the following Bayesian cost function by optimizing the sensor decision rules $(I_1,\dots,I_L)$:
(1) $C(I_1,\dots,I_L;F)=\sum_{i=0}^{1}\sum_{j=0}^{1}c_{ij}\,P_j\,P(F=i\,|\,H_j),$
where $c_{ij}$ are the known cost coefficients, $P_0$ and $P_1$ are the prior probabilities of the hypotheses $H_0$ and $H_1$, and $P(F=i\,|\,H_j)$ is the probability that the fusion center decides for hypothesis $H_i$ given that hypothesis $H_j$ is true. The general form of the binary fusion rule is denoted by an indicator function on a set $S\subseteq\{0,1\}^L$:
(2) $F(u_1,\dots,u_L)=I_S(u_1,\dots,u_L)=\begin{cases}1, & (u_1,\dots,u_L)\in S,\\ 0, & \text{otherwise}.\end{cases}$
Note that a fusion rule is a binary division of the set $\{0,1\}^L$, which has $2^L$ elements; thus there exist $2^{2^L}$ fusion rules. Let $u^i$ be the $i$th element of $\{0,1\}^L$, $i=1,\dots,2^L$. Every $u^i$ is an $L$-dimensional vector with entries $0$ or $1$. For convenience, we denote by $S_1$ and $S_0$ the sets of elements of $\{0,1\}^L$ for which the fusion rule takes decision $1$ and $0$, respectively, i.e.,
(3) $S_1=\{u^i\in\{0,1\}^L:\ F(u^i)=1\},$
(4) $S_0=\{u^i\in\{0,1\}^L:\ F(u^i)=0\}.$
Moreover, we let $\mathcal{U}_1$ and $\mathcal{U}_0$ denote the corresponding index sets
(5) $\mathcal{U}_1=\{i:\ u^i\in S_1\},$
(6) $\mathcal{U}_0=\{i:\ u^i\in S_0\}.$
Obviously, $S_1\cup S_0=\{0,1\}^L$ and $S_1\cap S_0=\emptyset$. Suppose that $p(y\,|\,H_0)$ and $p(y\,|\,H_1)$ are the known conditional joint probability density functions of the stacked observation $y=(y_1,\dots,y_L)$ under each hypothesis.
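To make the partition and the counting concrete, here is a small sketch (Python; the helper and rule names are ours, not the paper's) that enumerates the sets $S_1$ and $S_0$ induced by the AND and OR fusion rules used later in the paper:

```python
from itertools import product

def and_rule(u):
    """AND fusion rule: decide H1 only when every sensor reports 1."""
    return 1 if all(b == 1 for b in u) else 0

def or_rule(u):
    """OR fusion rule: decide H1 when at least one sensor reports 1."""
    return 1 if any(b == 1 for b in u) else 0

def decision_regions(fusion, L):
    """Split {0,1}^L into S1 (fused decision 1) and S0 (fused decision 0)."""
    points = list(product((0, 1), repeat=L))
    S1 = [u for u in points if fusion(u) == 1]
    S0 = [u for u in points if fusion(u) == 0]
    return S1, S0

L = 3
S1, S0 = decision_regions(and_rule, L)
assert len(S1) + len(S0) == 2 ** L      # S1 and S0 partition {0,1}^L
assert len(S1) == 1                     # AND accepts only (1,1,1)
num_rules = 2 ** (2 ** L)               # one fusion rule per choice of S1
```

Since a fusion rule is fully specified by the subset $S_1$, the count `num_rules` is exactly the $2^{2^L}$ noted above.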
Substituting the definitions of the fusion rule and the sensor decision rules into (1) and simplifying, we have
(7) $C(I_1,\dots,I_L;F)=c+\int I_S\big(u_1(y_1),\dots,u_L(y_L)\big)\,\big[a\,p(y\,|\,H_0)-b\,p(y\,|\,H_1)\big]\,dy,$
where $I_S$ is an indicator function on $S$,
(8) $a=P_0\,(c_{10}-c_{00}),$
(9) $b=P_1\,(c_{01}-c_{11}),$
and $a$, $b$, $c=c_{00}P_0+c_{01}P_1$ are fixed constants.
The indicator function $I_S$ can be written as equivalent polynomials of the sensor decision rules and the fusion rule as follows (see [20]):
(11) $I_S(u_1,\dots,u_L)=I_i(y_i)\,P_1^{i}+\big(1-I_i(y_i)\big)\,P_2^{i},$
where, for $i=1,\dots,L$,
(13) $P_1^{i}=I_S(u_1,\dots,u_{i-1},1,u_{i+1},\dots,u_L),\qquad P_2^{i}=I_S(u_1,\dots,u_{i-1},0,u_{i+1},\dots,u_L).$
Note that both $P_1^{i}$ and $P_2^{i}$ are independent of $I_i$ for $i=1,\dots,L$. For convenience, we also denote them by $P_1^{i}(y)$, $P_2^{i}(y)$, respectively. Moreover, (11) is also a key equation in the following results.
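The decomposition above can be checked numerically. The following sketch (Python; helper names are ours) verifies it for a randomly generated fusion rule on $\{0,1\}^4$, every point $u$, and every sensor index $i$:

```python
import random
from itertools import product

def expand_indicator(I_S, u, i):
    """Evaluate I_i * P1^i + (1 - I_i) * P2^i, the decomposition of I_S(u)."""
    u1 = u[:i] + (1,) + u[i + 1:]   # P1^i: i-th sensor decision forced to 1
    u0 = u[:i] + (0,) + u[i + 1:]   # P2^i: i-th sensor decision forced to 0
    return u[i] * I_S(u1) + (1 - u[i]) * I_S(u0)

L = 4
random.seed(0)
table = {u: random.randint(0, 1) for u in product((0, 1), repeat=L)}
I_S = lambda u: table[u]            # an arbitrary fusion rule on {0,1}^L

for u in product((0, 1), repeat=L):
    for i in range(L):
        assert expand_indicator(I_S, u, i) == I_S(u)
```

The check passes for any fusion rule because forcing $u_i$ to $1$ or $0$ simply selects the branch matching the actual value of $u_i$.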
2.2 Monte Carlo importance sampling
In this section, we approximate the cost function (7) by Monte Carlo importance sampling (see, e.g., [25, 26]). More specifically, assume that the samples $\{y^{(j)}\}_{j=1}^{m}$ are drawn from a population with a given trial distribution $g(y)$, where $y=(y_1,\dots,y_L)$. From (7),

(14) $C(I_1,\dots,I_L;F)=c+\int I_S(u_1,\dots,u_L)\,\dfrac{a\,p(y|H_0)-b\,p(y|H_1)}{g(y)}\,g(y)\,dy$

(15) $=c+E_g\!\left[I_S(u_1,\dots,u_L)\,\dfrac{a\,p(y|H_0)-b\,p(y|H_1)}{g(y)}\right]$

(16) $\approx c+\dfrac{1}{m}\sum_{j=1}^{m} I_S\big(u_1(y_1^{(j)}),\dots,u_L(y_L^{(j)})\big)\,\dfrac{a\,p(y^{(j)}|H_0)-b\,p(y^{(j)}|H_1)}{g(y^{(j)})}$

(17) $=:\bar{C}_m(I_1,\dots,I_L;F),$

where $g(y)$ is the trial density, chosen such that the integrand in (14) is well defined; (15) is the expectation of the integrand with respect to $g$; and the sample average in (16) is denoted by $\bar{C}_m$ in (17). By the strong law of large numbers, (15) can be approximated by (16), i.e., $\bar{C}_m \to C$ a.s. as $m\to\infty$. The optimal trial distribution is $g^*(y)\propto \big|a\,p(y|H_0)-b\,p(y|H_1)\big|$ (see, e.g., [25, 26]). By (11) and (16), for each $i\in\{1,\dots,L\}$ we have

(19) $\bar{C}_m=c+\dfrac{1}{m}\sum_{j=1}^{m}\Big[I_i(y_i^{(j)})\,P_1^{i}(y^{(j)})+\big(1-I_i(y_i^{(j)})\big)\,P_2^{i}(y^{(j)})\Big]\,\dfrac{a\,p(y^{(j)}|H_0)-b\,p(y^{(j)}|H_1)}{g(y^{(j)})}.$
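As a concrete illustration of the sample-average approximation (16), the following sketch estimates the cost for fixed threshold sensor rules and an AND fusion rule. The two-sensor Gaussian model, the trial density, and all names here are our own toy assumptions, not the paper's setting:

```python
import math
import random

# Toy model: L = 2 sensors with scalar observations that are independent
# N(0,1) under H0 and N(1,1) under H1 (independence is only for simplicity).
def normpdf(x, mu, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def joint(y, mu):
    """Joint density of the two observations under one hypothesis."""
    return normpdf(y[0], mu) * normpdf(y[1], mu)

# Constants a, b, c of (7)-(9) for P0 = P1 = 1/2 and 0-1 costs, so the cost C
# is exactly the fused probability of error.
a, b, c = 0.5, 0.5, 0.5
sensor_rule = lambda yi: 1 if yi > 0.5 else 0   # fixed threshold rule I_i
fusion = lambda u: 1 if all(u) else 0           # fixed AND fusion rule

def approx_cost(m, seed=1, g_mu=0.5, g_sigma=2.0):
    """Importance-sampling estimate of the Bayesian cost, as in (16)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(m):
        y = [rng.gauss(g_mu, g_sigma) for _ in range(2)]   # sample from g
        g = normpdf(y[0], g_mu, g_sigma) * normpdf(y[1], g_mu, g_sigma)
        u = [sensor_rule(yi) for yi in y]
        total += fusion(u) * (a * joint(y, 0.0) - b * joint(y, 1.0)) / g
    return c + total / m

cost = approx_cost(20000)
assert 0.15 < cost < 0.45   # the estimate concentrates near the true error rate
```

The wide trial density keeps the importance weights bounded, which is the same role the condition on $g$ plays in Theorem 3.1 below.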
3 Necessary Conditions for Optimal Sensor Decision Rules
The distributed detection fusion problem is to minimize the Bayesian cost function (7). Based on the Monte Carlo approximation (17), we concentrate on selecting a set of optimal sensor decision rules $(I_1,\dots,I_L)$ such that the approximated cost function $\bar{C}_m$ is minimized.
First, we show that, under some mild assumptions, the minimum of the approximated cost function is asymptotically bounded above by the infimum of the cost function as the sample size tends to infinity. Since deterministic (nonrandomized) decision rules are considered in this paper, in the following sections we assume that the samples drawn from the trial distribution have been fixed, so that $\bar{C}_m$ has no randomness.
Theorem 3.1.
Let $C^*$ be the infimum of the cost function $C(I_1,\dots,I_L;F)$ and let $\bar{C}_m^*$ be the minimum of the Monte Carlo approximation (17), where $(I_1,\dots,I_L)$ are the decision variables. If the trial density $g$ satisfies
(20) $\sup_{y}\ \dfrac{\big|a\,p(y|H_0)-b\,p(y|H_1)\big|}{g(y)} \le M < \infty,$
where the constant $M$ does not depend on $y$ and $m$, then we have
(21) $\limsup_{m\to\infty}\ \bar{C}_m^* \le C^* \quad \text{a.s.}$
Proof.
By the definition of $C^*$, for arbitrary $\epsilon>0$ there exists a set of sensor rules $(I_1^{\epsilon},\dots,I_L^{\epsilon})$ such that
$C(I_1^{\epsilon},\dots,I_L^{\epsilon};F) \le C^* + \epsilon.$
By the definition of $\bar{C}_m$, the strong law of large numbers, and (20), there exists $m_0$ such that for any $m \ge m_0$,
$\big|\bar{C}_m(I_1^{\epsilon},\dots,I_L^{\epsilon};F) - C(I_1^{\epsilon},\dots,I_L^{\epsilon};F)\big| \le \epsilon.$
Thus, $\bar{C}_m(I_1^{\epsilon},\dots,I_L^{\epsilon};F) \le C^* + 2\epsilon$ for all $m \ge m_0$. By the definition of $\bar{C}_m^*$, we have
$\bar{C}_m^* \le \bar{C}_m(I_1^{\epsilon},\dots,I_L^{\epsilon};F) \le C^* + 2\epsilon,$
which implies that
$\limsup_{m\to\infty}\ \bar{C}_m^* \le C^* + 2\epsilon \quad \text{a.s.}$
Since $\epsilon$ is arbitrary, we have
(22) $\limsup_{m\to\infty}\ \bar{C}_m^* \le C^* \quad \text{a.s.}$
Remark 3.2.
Second, we derive the necessary conditions for optimal sensor decision rules, in the sense of minimizing $\bar{C}_m$, for a parallel distributed detection system.
Theorem 3.3.
4 Monte Carlo Gauss-Seidel Iterative Algorithm and Its Convergence
4.1 Monte Carlo Gauss-Seidel Iterative Algorithm
Let the sensor decision rules at the $k$th stage of the iteration be denoted by $(I_1^{(k)},\dots,I_L^{(k)})$, with the initial set $(I_1^{(0)},\dots,I_L^{(0)})$. Suppose the fusion rule $F$ is fixed. Based on Theorem 3.3, we can derive a Gauss-Seidel iterative algorithm for minimizing $\bar{C}_m$ in (17) as follows.
Algorithm 4.1 (Monte Carlo Gauss-Seidel iterative algorithm).

Step 1: Draw $m$ samples $\{y^{(j)}\}_{j=1}^{m}$ from the importance density $g(y)$.

Step 2: Given the fusion rule $F$, initialize the sensor decision rules at the sample points:
(29) $I_i^{(0)}(y_i^{(j)}) \in \{0,1\}, \quad i=1,\dots,L,\ \ j=1,\dots,m.$
Step 3: Iteratively search the sensor decision rules for better system performance until the termination criterion in Step 4 is satisfied. The $(k+1)$th stage of the iteration is as follows: for $j=1,\dots,m$,
(30)–(32) $I_i^{(k+1)}(y_i^{(j)}) = \mathcal{I}\!\left[\big(P_1^{i}(y^{(j)}) - P_2^{i}(y^{(j)})\big)\,\dfrac{a\,p(y^{(j)}|H_0) - b\,p(y^{(j)}|H_1)}{g(y^{(j)})} < 0\right], \quad i=1,\dots,L,$
where $\mathcal{I}[\cdot]$ is the indicator of the bracketed event, and $P_1^{i}$, $P_2^{i}$ are evaluated with the already-updated rules $I_1^{(k+1)},\dots,I_{i-1}^{(k+1)}$ and the previous rules $I_{i+1}^{(k)},\dots,I_L^{(k)}$ (the Gauss-Seidel sweep).
Step 4: A termination criterion of the iteration process is, for $i=1,\dots,L$,
(33) $\sum_{j=1}^{m}\big|I_i^{(k+1)}(y_i^{(j)}) - I_i^{(k)}(y_i^{(j)})\big| = 0.$
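The four steps above can be sketched compactly. The toy instance below is our own (L = 2 sensors, observations N(0,1) under $H_0$ and N(1,1) under $H_1$, AND fusion rule, equal priors with 0-1 costs so $a=b=c=1/2$); it is not the paper's experimental setup:

```python
import math
import random

def normpdf(x, mu, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

L = 2
a, b, c = 0.5, 0.5, 0.5
fusion = lambda u: 1 if all(u) else 0            # fixed AND fusion rule

def gauss_seidel(m=2000, seed=3, g_mu=0.5, g_sigma=2.0, max_sweeps=50):
    rng = random.Random(seed)
    ys = [[rng.gauss(g_mu, g_sigma) for _ in range(L)] for _ in range(m)]  # Step 1

    def weight(y):
        """(a p(y|H0) - b p(y|H1)) / g(y): the importance weight of a sample."""
        p0 = p1 = g = 1.0
        for yi in y:
            p0 *= normpdf(yi, 0.0)
            p1 *= normpdf(yi, 1.0)
            g *= normpdf(yi, g_mu, g_sigma)
        return (a * p0 - b * p1) / g

    w = [weight(y) for y in ys]
    I = [[1] * m for _ in range(L)]              # Step 2: I[i][j] = I_i(y_i^(j))
    for _ in range(max_sweeps):                  # Step 3: Gauss-Seidel sweeps
        changed = 0
        for i in range(L):
            for j in range(m):
                u = [I[k][j] for k in range(L)]
                u[i] = 1
                P1 = fusion(u)                   # i-th decision forced to 1
                u[i] = 0
                P2 = fusion(u)                   # i-th decision forced to 0
                new = 1 if (P1 - P2) * w[j] < 0 else 0
                changed += int(new != I[i][j])
                I[i][j] = new
        if changed == 0:                         # Step 4: criterion met
            break
    cost = c + sum(fusion([I[k][j] for k in range(L)]) * w[j]
                   for j in range(m)) / m
    return I, cost

I, cost = gauss_seidel()
assert 0.1 < cost < 0.45   # near the sampled optimum for this toy model
```

In this AND-rule instance the sweep settles after essentially one pass, which is consistent with the analytical-optimality discussion for the AND and OR fusion rules.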
Remark 4.2.
Once we obtain $I_i(y_i^{(j)})$ for $j=1,\dots,m$, the rule $I_i(y_i)$ can be obtained for an arbitrary observation $y_i$ by defining $I_i(y_i)=I_i(y_i^{(j)})$ whenever the distance $\|y_i-y_i^{(j)}\|$ is less than $\|y_i-y_i^{(l)}\|$ for all $l\neq j$, i.e., by a nearest-neighbor extension. Similarly, we can obtain the rules $I_i$ for all $i=1,\dots,L$.
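This nearest-neighbor extension is a one-liner in code; the sketch below (function names are ours) copies the decision of the closest sample point to any new scalar observation:

```python
# Extend per-sample decisions to the whole observation space by copying the
# decision of the nearest sample, as described in the remark above.
def extend_rule(samples_i, decisions_i):
    """Return a sensor rule I_i defined everywhere from per-sample decisions."""
    def I_i(yi):
        j = min(range(len(samples_i)), key=lambda k: abs(yi - samples_i[k]))
        return decisions_i[j]
    return I_i

I1 = extend_rule([-1.0, 0.0, 2.0], [0, 0, 1])
assert I1(1.2) == 1   # nearest sample is 2.0, whose decision is 1
assert I1(0.4) == 0   # nearest sample is 0.0, whose decision is 0
```

For vector observations the absolute difference would be replaced by a norm, matching the distance used in the remark.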
Remark 4.3.
The main computational burden of Algorithm 4.1 lies in (30)–(32). If the number of samples $m$ equals the number $N$ of discretized points in (10) of [19], then $P_1^{i}$, $P_2^{i}$ are computed $LN$ times per iteration in Algorithm 4.1. However, in [19], they are computed $LN^{L}$ times. In the next section, we prove that Algorithm 4.1 terminates in finitely many steps. Thus, the computational complexity of Algorithm 4.1 is $O(LN)$, compared with $O(LN^{L})$ for the algorithm in [19].
4.2 Convergence of the Monte Carlo Gauss-Seidel Iterative Algorithm
Now we prove that Algorithm 4.1 must converge to a locally optimal value and cannot oscillate infinitely often, i.e., it terminates after a finite number of iterations.
Lemma 4.4.
$\bar{C}_m(I_1^{(k)},\dots,I_L^{(k)};F)$ is nonincreasing as $k$ is increased and