Robust Distributed Maximum Likelihood Estimation with Dependent Quantized Data
Abstract
In this paper, we consider distributed maximum likelihood estimation (MLE) with dependent quantized data under the assumption that the structure of the joint probability density function (pdf) is known, but it contains unknown deterministic parameters. The parameters may include different vector parameters corresponding to the marginal pdfs and parameters that describe the dependence of observations across sensors. Since MLE with a single quantizer is sensitive to the choice of thresholds due to the uncertainty of the pdf, we concentrate on MLE with multiple groups of quantizers (which can be determined by the use of prior information or some heuristic approach) to guard against the risk of a poor/outlier quantizer. The asymptotic efficiency of the MLE scheme with multiple quantizers is proved under some regularity conditions, and the asymptotic variance is derived to be the inverse of a weighted linear combination of Fisher information matrices based on the multiple different quantizers, which can be used to show the robustness of our approach. As an illustrative example, we consider an estimation problem with a bivariate non-Gaussian pdf that has applications in distributed constant false alarm rate (CFAR) detection systems. Simulations show the robustness of the proposed MLE scheme, especially when the number of quantized measurements is small.
Xiaojing Shen, Pramod K. Varshney, Yunmin Zhu
Department of Mathematics, Sichuan University, Chengdu, Sichuan 610064, China.
Department of Electrical Engineering and Computer Science, Syracuse University, NY, 13244, USA.
Key words: Maximum likelihood estimation; distributed estimation; Fisher information matrix; wireless sensor networks
1 Introduction
Wireless sensor networks have attracted much attention, with a great deal of research taking place over the past several years. Many advances have been made in distributed detection, estimation, tracking and control (see, e.g., [16] and references therein). Distributed estimation and quantization problems have been considered in a number of previous studies. The parameters to be estimated are modeled as random or deterministic depending on the situation. For random parameters, there exist various prior studies under the assumption of a known joint pdf of the parameters and sensor measurements (see, e.g., [9]). We concentrate on deterministic parameters in this paper. For deterministic parameters, several universal distributed estimation schemes have been proposed [19] in the presence of unknown, additive sensor noises that are bounded and identically distributed. The work in [11] addressed design and implementation issues under the assumption that a scalar parameter is to be estimated using scalar quantizers. The work in [4] proposed a vector quantization design for distributed estimation under an additive observation noise model.
System identification based on quantized measurements is a challenging problem even for very simple models and has been studied for a wide range of applications (see, e.g., [17]). A method for recursive identification of the nonlinear Wiener model was developed in [18] and the corresponding convergence properties were analyzed. In [5], Godoy et al. developed an MLE approach and used a scenario-based form of the expectation-maximization algorithm for parameter estimation in general MIMO FIR linear systems with quantized outputs. The problem of set-membership system identification with quantized measurements was considered in [2]. In [6], results from statistical quantization theory were surveyed and applied to both moment calculations and the likelihood function of the measured signal. Identification of ARMA models using intermittent and quantized output observations was proposed in [8]. Formal conditions for the asymptotic normality of the MLE of the reliability of a complex system, based on a combination of full-system and subsystem tests, were given in [12].
In previous works, MLE with quantized data has been used extensively to estimate deterministic parameters. In this paper, robust distributed MLE with dependent quantized data is considered. Our work differs from previous studies in several respects. Previous results concentrate on the problem of how to design quantization schemes for estimating a deterministic parameter where each sensor makes one noisy observation; the observations are usually assumed independent across sensors, and the relationship between MLE performance and the number of sensors is discussed. Here, we focus on the problem of how to design estimation schemes for the unknown parameter vector associated with the joint pdf of the observations where the number of sensors is fixed. The emphasis here is on system robustness. These observations may be dependent across sensors. The unknown parameters may include different vector parameters corresponding to the marginal pdfs and parameters that describe the dependence of observations across sensors. Indeed, the dependence between sensors is very important in multisensor fusion systems; see, for example, the recent work on distributed location estimation with dependent sensor observations [13].
In this paper, we investigate the performance of MLE with multiple quantizers, since MLE with a single quantizer is sensitive to the choice of thresholds due to the uncertainty of the pdf (see, e.g., [3]). Our main contribution is that we analytically derive the asymptotic efficiency and robustness of a practical MLE with multiple quantizers in the context of dependent quantized measurements at the sensors, an unknown parameter vector, and no knowledge of the measurement models. The difficulties include the fact that, due to dependence between measurements across sensors, the unknown high-dimensional vector parameter estimation problem cannot be decoupled into scalar parameter estimation problems, and that the quantized samples are not identically distributed due to the use of multiple different quantizers. Therefore, we have to deal with an unknown vector parameter and non-identically distributed samples simultaneously. The asymptotic variance is derived to be the inverse of a weighted linear combination of Fisher information matrices based on the different quantizers, which can be used to verify the robustness of our approach. A typical estimation problem with a bivariate non-Gaussian pdf with application to distributed CFAR detection systems is considered. Simulations show that the new MLE scheme is robust and much better than that based on the worst quantization scheme from among the groups of quantizers. Moreover, when the number of quantized measurements is small, a surprising result is that the robust MLE has a significant advantage over the MLE with a single quantizer. It is also shown that the performance of the robust MLE is not simply the average performance of the multiple quantizers.

The rest of the paper is organized as follows. Problem formulation is given in Section 2. In Section 3, the robust MLE scheme is proposed and the asymptotic results are derived. In Section 4, numerical examples are given and discussed. In Section 5, conclusions are drawn.
2 Problem formulation
The basic sensor distributed estimation system is considered (see Figure 1). Each sensor has dimensional observation population , . Suppose that the joint observation population has a given family of joint pdf:
(1) 
where denotes the transpose and is the unknown dimensional deterministic parameter vector which may include marginal parameters and dependence parameters. Here, we do not assume independence across sensors, knowledge of measurement models and Gaussianity of the joint pdf. Let independently and identically distributed (i.i.d.) sensor observation samples and joint observation samples be
(2)  
(3) 
Suppose the sensors and the fusion center wish to jointly estimate the unknown parameter vector based on the spatially distributed observations. If there is sufficient communication bandwidth and power, the fusion center can obtain asymptotically efficient estimates from the complete observation samples via the MLE procedure under some regularity conditions on the joint pdf.
In many practical situations, however, to reduce the communication requirement from sensors to the fusion center due to limited communication bandwidth and power, the th sensor quantizes the observation vector to 1 bit (it is straightforward to extend to multiple bits) by a measurable indicator quantization function:
(4) 
for . Here, the quantization region of each quantizer may be continuous or union of discontinuous regions. Moreover, we denote by
(5) 
Once the binary quantized samples are generated at sensor , , they are transmitted to the fusion center, for . The fusion center is then required to estimate the true parameter vector based on the received quantized data. By the definition of observation samples and quantizers, we define
(6)  
(7)  
(8) 
If we take as the joint quantized observation sample and denote the quantized observation population by , , we know that has a discrete/categorical distribution. Based on the pdf of and quantizers , the probability mass function (pmf) of the quantized observation population is
(9)  
where
(10)  
(11)  
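To make the pmf construction in (9)–(11) concrete, the following sketch computes the cell probabilities of the joint 1-bit quantized observation for a two-sensor system. The bivariate Gaussian population, correlation, and thresholds are illustrative assumptions only (the paper's own example uses a non-Gaussian pdf):

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

# Assumed population: zero-mean bivariate Gaussian with correlation rho
rho = 0.5
cov = [[1.0, rho], [rho, 1.0]]
tau = np.array([0.3, -0.2])   # assumed per-sensor quantizer thresholds

def cell_prob(i, j):
    # P(b1 = i, b2 = j) for b_k = 1{x_k > tau_k}, computed by
    # inclusion-exclusion on the joint CDF over the quantization cell
    F = lambda a, b: mvn.cdf([a, b], mean=[0.0, 0.0], cov=cov)
    big = 8.0  # effectively +infinity for a unit-variance Gaussian
    lo = [tau[0] if i else -big, tau[1] if j else -big]
    hi = [big if i else tau[0], big if j else tau[1]]
    return F(hi[0], hi[1]) - F(lo[0], hi[1]) - F(hi[0], lo[1]) + F(lo[0], lo[1])

# pmf of the joint quantized observation, as in (9)-(11)
pmf = {(i, j): cell_prob(i, j) for i in (0, 1) for j in (0, 1)}
```

The four probabilities form the categorical distribution of the quantized pair; dependence across sensors enters through the off-diagonal covariance term.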
Thus, the quantized observation population has a family of joint pmf which yields the following log likelihood function of the samples by (2)–(11):
(12)  
(13)  
(14) 
where , ; is the cardinality of the set. The parameter vector is estimated by maximizing the log likelihood function (14). Let denote the MLE of .
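As a minimal sketch of maximizing the log likelihood (14), consider a single sensor whose scalar Gaussian observation (unknown mean, unit variance) is quantized to 1 bit; the threshold, sample size, and use of scipy's bounded scalar optimizer are all assumptions for illustration, not the paper's setting:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Assumed toy model: x ~ N(theta, 1), quantized as b = 1{x > tau}
rng = np.random.default_rng(0)
theta_true, tau, n = 1.0, 0.5, 5000
b = (rng.normal(theta_true, 1.0, n) > tau).astype(int)   # quantized samples

def neg_loglik(theta):
    # pmf of one bit: P(b = 1) = P(x > tau) = Q(tau - theta)
    p = np.clip(norm.sf(tau - theta), 1e-12, 1.0 - 1e-12)
    k = b.sum()
    return -(k * np.log(p) + (n - k) * np.log(1.0 - p))

res = minimize_scalar(neg_loglik, bounds=(-5.0, 5.0), method="bounded")
theta_hat = res.x   # MLE from the quantized data
```

Only the bit counts enter the likelihood, so the fusion center needs nothing beyond the received binary data.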
Based on the classical asymptotic properties of MLE (see, e.g., textbooks [1, 14]), we have the following lemma.
Lemma 1
Assume that and sensor quantizers , , generate the quantized samples and satisfies the regularity conditions (A1)–(A6) given on page 516 of [1] with respect to the vector parameter ; the Fisher information matrix is nonsingular. Then,
(15) 
where is the Cramér-Rao lower bound for one quantized sample, which depends on the quantizer . That is, is a consistent and asymptotically efficient estimator of .
From Lemma 1, a natural problem that arises is how the quantizers should be designed such that the asymptotic variance of the MLE with quantized data is as small as possible. The true parameter , however, is not known, i.e., the pdf is not known. Most of the existing work on the design of optimal quantizers depends on the availability of the pdf or signal models. When both are unknown, an optimal quantizer cannot be derived, or the optimal quantizer depends on unknown parameters and cannot be implemented (see, e.g., [3]). Since MLE with a single quantizer is sensitive to the choice of thresholds due to the uncertainty of the pdf, we employ multiple groups of quantizers (which can be determined by the use of prior information or some heuristic approach) at each sensor to guard against the risk of a single poor/outlier quantizer. To the best of our knowledge, the asymptotic efficiency and robustness of the MLE scheme with multiple quantizers have not previously been derived analytically in the context of dependent quantized measurements at the sensors, an unknown parameter vector, and no knowledge of the measurement models.
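The threshold sensitivity discussed above can be quantified in the simplest scalar Gaussian case (an assumed toy model, not the paper's setting): the Fisher information contributed by one 1-bit sample collapses as the threshold moves away from the true parameter.

```python
import numpy as np
from scipy.stats import norm

def fisher_1bit(tau, theta=0.0):
    # Fisher information of one bit b = 1{x > tau}, x ~ N(theta, 1):
    # I(theta) = (dp/dtheta)^2 / (p (1 - p)), with p = Q(tau - theta)
    p = norm.sf(tau - theta)
    return norm.pdf(tau - theta) ** 2 / (p * (1.0 - p))

# Information as the threshold drifts away from the true parameter
info_at = {tau: fisher_1bit(tau) for tau in (0.0, 1.0, 2.0, 3.0)}
```

For this model the information peaks at tau equal to the true mean (where it equals 2/pi) and decays rapidly with the offset, which is why a single badly placed quantizer can dominate the estimation error.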
3 Robust maximum likelihood estimation with quantized data
The word “robust” has many and sometimes inconsistent connotations. In the theory of robust estimation, robustness generally means the ability to resist outliers or departures from an uncertain model with nominal values and bounds of uncertainty. In this paper, for our purposes, it means the ability to resist outliers. In this section, we employ multiple groups of quantizers, which can be determined by the use of prior information or some heuristic approach, at each sensor to guard against the risk of a single poor/outlier quantizer. The asymptotic efficiency of the MLE scheme with multiple quantizers is derived analytically. This enables us to verify and discuss the robustness of our approach.
The MLE scheme with multiple groups of quantizers is given as follows.

Choose groups of different quantizers , where

Observe joint observation samples , which are quantized by the th group of quantizers for . We denote by . The quantized observation samples are denoted by . Moreover, we denote by . The population of the quantized sample is denoted by whose pmf is
(16) which can be similarly obtained by (9) and is determined by and .

Estimate the parameter with the quantized samples which are generated by groups of quantizers by maximizing the log likelihood function:
(17) (18) where is the log likelihood function of the th group of quantized data . Let denote the solution of MLE with quantizers.
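A minimal sketch of the combined log likelihood in (17)-(18), again for an assumed scalar Gaussian toy model: each of the G groups applies its own threshold quantizer to its share of the samples, and the group log likelihoods are summed before maximization.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Assumed setup: G = 4 groups, each a 1-bit threshold quantizer applied
# to its own share of i.i.d. N(theta, 1) samples
rng = np.random.default_rng(1)
theta_true, n_per = 1.0, 1000
taus = [-1.0, 0.0, 1.0, 2.0]   # assumed per-group thresholds
groups = [(t, (rng.normal(theta_true, 1.0, n_per) > t).astype(int))
          for t in taus]

def neg_loglik(theta):
    # Sum of the G group log likelihoods, as in (17)-(18)
    total = 0.0
    for tau, b in groups:
        p = np.clip(norm.sf(tau - theta), 1e-12, 1.0 - 1e-12)
        k = b.sum()
        total -= k * np.log(p) + (n_per - k) * np.log(1.0 - p)
    return total

res = minimize_scalar(neg_loglik, bounds=(-5.0, 5.0), method="bounded")
theta_hat = res.x   # robust MLE combining all quantizer groups
```

Even if one threshold is poorly placed, the other groups keep the combined likelihood informative, which is the mechanism Theorem 1 formalizes.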
Obviously, the quantized samples are not identically distributed due to the use of different quantizers. One may ask: is the new estimator based on the different quantizers still asymptotically efficient? What is the asymptotic variance of the new estimator? Why is it robust compared to using one group of quantizers? These questions are answered analytically by the following theorem.
Theorem 1
There are groups of different sensor quantizers , . Assume that and the quantizers generate the quantized samples and the quantized pmf , defined by (16), satisfies the regularity conditions (A1)–(A6) given on page 516 of [1] with respect to the vector parameter (since the regularity conditions are fairly standard and space is limited, we do not repeat them here; more discussion on when the regularity conditions are reasonable can be found in Section 10.6.2 of [1]); the Fisher information matrix is nonsingular. Then,
(19) 
where , ,
(20)  
is the Cramér-Rao lower bound, where is the Fisher information matrix for one quantized sample of . That is, is a consistent and asymptotically efficient estimator of .
Proof: The regularity of and the quantizers ensures that the quantized samples and the corresponding pmf defined by (16) satisfy the regularity conditions (A1)–(A4) (from [1], page 516), and it is easy to prove that is a consistent estimator of , i.e., in probability. The proof is similar to that of Theorem 10.1.6 in [1]. However, the quantized samples are independent but not identically distributed due to the use of different quantizers. Thus, to prove asymptotic normality, we use the Lyapunov central limit theorem by checking the Lyapunov condition (see, e.g., [14]). At the same time, the Cramér-Wold device (see, e.g., [14]) is used to deal with the high-dimensional parameter vector.
First, we expand the first derivative of the log likelihood function (17) around the true value ,
(21)  
where
(22)  
is between and . Substituting and noting that the left-hand side of (21) is 0, we obtain
0  (23)  
Thus,
(24)  
Then, we check the Lyapunov condition. Denote by , for an arbitrary ( is a trivial case), and
(25) 
which exists, since condition (A3) is satisfied and is a categorical distribution. Moreover, by condition (A5) and (25),
That is, the Lyapunov condition is satisfied. Thus, by the Lyapunov central limit theorem (see, e.g., [14]), for all ,
where . Moreover, by the Cramér-Wold device (see, e.g., [14]), we have
(26)  
where . By application of the weak law of large numbers, we have
(27)  
where is defined in (18). By Slutsky’s Theorem and Equation (27), we have
(28)  
Since condition (A6) given on page 516 of [1] guarantees that the third derivative of the log likelihood function can be bounded by an integrable function for all in a small neighborhood of , and noting that is between and (in probability), we have
(29) 
Moreover, based on Equations (24), (26), (28), (29) and Slutsky’s Theorem, we have
(30)  
where is defined by (20) and is the Cramér-Rao lower bound. Therefore, is a consistent and asymptotically efficient estimator of .
Remark 1
As we have shown, the asymptotic variance with multiple quantizers is the inverse of a weighted mean of the Fisher information matrices based on the different quantizers. Without loss of generality, assume that the weights are equal and the first quantizer is an outlier; that is, the asymptotic variance with multiple quantizers is the inverse of the mean of the Fisher information matrices based on the different quantizers, and the asymptotic variance is much larger than the other asymptotic variances . Since are positive definite matrices and implies for positive definite matrices, the Fisher information is much smaller than the other Fisher information matrices , respectively. Thus, the mean of the Fisher information matrices with the outlier and that without the outlier are both much larger than , and they are very close to each other, of the same order of magnitude. Moreover, by the continuity of the matrix inverse, the corresponding asymptotic variances are much smaller than and are very close to each other, of the same order of magnitude. Therefore, the MLE scheme with multiple quantizers is a robust scheme.
As a simple numerical example, suppose there are 3 quantizers with asymptotic variances , respectively. Obviously, the first quantizer is an outlier. It can be calculated that the asymptotic variance of the robust MLE, when the 3 different quantizers are used equally, is , which is much smaller than that of the outlier and of the same order of magnitude as and .
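The arithmetic behind this kind of example can be reproduced with hypothetical variance values (the specific numbers below are assumptions, since the paper's values are not shown here):

```python
import numpy as np

# Hypothetical scalar asymptotic variances for 3 quantizers;
# the first quantizer is the outlier
v = np.array([100.0, 1.0, 1.0])
fisher = 1.0 / v                  # scalar Fisher informations I_g = 1/v_g
v_robust = 1.0 / fisher.mean()    # inverse of the equally weighted mean
```

The combined variance (about 1.49 here) stays on the order of magnitude of the good quantizers rather than the outlier's, which is the robustness mechanism of Remark 1.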
4 Numerical Examples
In distributed detection systems, the detection performance relies heavily on knowledge of the joint pdf under hypotheses and . Here, we consider the problem of estimating the joint pdf under for distributed CFAR detection systems [15], which has great practical relevance. In these systems, the marginal distribution of the measurements is usually assumed to be exponential or Gamma. Noting that the exponential pdf is a special case of the Gamma pdf, we consider the marginals of a two-sensor system to follow a Gamma distribution as follows:
where and are the parameters to be estimated. It has been shown recently that the dependence between sensors is very important to distributed detection performance (see, e.g., [7]). To estimate the dependence between sensors, copula theory can be used to construct the structure of dependence. By Sklar’s Theorem in copula theory (see, e.g., [10]), the joint pdf can be written as follows:
where and are marginal pdf and cumulative distribution function respectively; is the copula density. For a specific numerical example, we consider the joint Clayton copula density as follows:
which is a frequently used copula model to describe dependence (see [10]). The parameter vector to be estimated corresponds to the copula density and the two marginals. We compare the robust MLE with the MLE based on a single quantizer. We assume that the prior information is that the thresholds lie in . Based on this information, we uniformly choose the following four groups of different quantizers:
where if ; otherwise . For the robust MLE, we let , where is the number of samples for the MLE with each fixed quantizer, respectively.
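A sketch of evaluating the joint pdf of this example via Sklar's theorem, combining Gamma marginals with the Clayton copula density; the shape/scale parameterization and the argument names are assumptions of the sketch:

```python
import numpy as np
from scipy.stats import gamma

def clayton_density(u, v, theta):
    # Clayton copula density c(u, v; theta) for theta > 0
    return ((1.0 + theta) * (u * v) ** (-theta - 1.0)
            * (u ** -theta + v ** -theta - 1.0) ** (-(2.0 * theta + 1.0) / theta))

def joint_pdf(x, y, a1, b1, a2, b2, theta):
    # Sklar's theorem: f(x, y) = c(F1(x), F2(y)) * f1(x) * f2(y),
    # with Gamma(a_k, scale=b_k) marginals (assumed parameterization)
    u = gamma.cdf(x, a1, scale=b1)
    v = gamma.cdf(y, a2, scale=b2)
    return (clayton_density(u, v, theta)
            * gamma.pdf(x, a1, scale=b1) * gamma.pdf(y, a2, scale=b2))
```

The copula parameter theta controls the dependence between the two sensors, while the Gamma parameters govern the marginals, matching the parameter vector estimated in this section.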
The robustness of the MLE with multiple quantizers is illustrated in Figs. 2–5, where MSEs based on 1000 Monte Carlo (M.C.) runs are plotted as a function of the number of measurements for different estimation methods (MLE with a single quantizer, robust MLE and MLE with raw measurements) for the parameters , and respectively, where corresponds to the dependence measure, namely Spearman’s . Figs. 2–3 present the MSEs of in linear and logarithmic scales, respectively. Figs. 4–5 present the MSEs of and in linear scale, respectively. In our work thus far, we have assumed that 1-bit quantized data is transmitted to the fusion center. We also consider another system in which finely quantized data (5-bit data corresponding to a subset of the samples instead of 1-bit data corresponding to all the samples) is transmitted while maintaining the total number of bits equal to . While evaluating the performance of the system with 5-bit data, we employ the results corresponding to raw data which, in fact, give more optimistic results. The MSEs based on 1000 M.C. runs for transmitting finely quantized 5-bit measurements are given in Figs. 2–5 for , and respectively.
From Figs. 2–5, we make the following observations. (1) The MSEs based on 1000 M.C. runs for the robust MLE are much smaller than those of the MLE based on the single quantizer that is the worst (outlier) in the group. This phenomenon is consistent with the results in Theorem 1 and Remark 1. The robust MLE is a conservative estimator, but it avoids large errors in the worst case. The advantage of robustness (MSE of the worst MLE minus MSE of the robust MLE) is much larger than the loss due to conservative estimation (MSE of the robust MLE minus MSE of the best one), especially in Figs. 2, 3 and 4. (2) From Figs. 2–3, a surprising result is that the robust MLE based on 1000 M.C. runs has a significant advantage over the MLE with a single quantizer when the number of quantized measurements is small (N=40). The reason is that, for a small number of samples, the MLE with a single quantizer is sensitive to the random samples, so it may behave as an outlier in individual M.C. runs, resulting in poor performance. (3) Comparing our robust MLE with 1-bit quantized data against the MLE that transmits a subset of finely quantized data in Figs. 2–3, we observe that their performance in estimating is very close. However, for the estimation of and , the robust MLE is much better than the latter, as shown in Figs. 4–5. Thus, the robust MLE is a better estimation method in distributed systems with limited bandwidth.
5 Conclusion
In this paper, we have proposed an approach for robust distributed MLE with dependent quantized data under the assumption that the structure of the joint pdf is known, but it contains unknown deterministic parameters. We considered a practical estimation problem with a bivariate non-Gaussian pdf arising in distributed constant false alarm rate (CFAR) detection systems. Simulation results show that the new MLE scheme is robust and much better than that based on the worst (outlier) quantization scheme from among the groups of quantizers. An important observation is that the robust MLE has a significant advantage over the MLE with a single quantizer when the number of quantized measurements is small.
Acknowledgment
We would like to thank the anonymous reviewers, the associate editor and Dr. Kush Varshney of IBM for their helpful suggestions that greatly improved the quality of this paper.
References
 [1] George Casella and Roger L. Berger. Statistical Inference. Duxbury, New York, second edition, 2001.
 [2] Marco Casini, Andrea Garulli, and Antonio Vicino. Input design in worstcase system identification with quantized measurements. Automatica, 48:2997–3007, 2012.
 [3] Jun Fang and Hongbin Li. Distributed adaptive quantization for wireless sensor networks: From delta modulation to maximum likelihood. IEEE Transactions on Signal Processing, 56(10):5246–5257, 2008.
 [4] Jun Fang and Hongbin Li. Hyperplanebased vector quantization for distributed estimation in wireless sensor networks. IEEE Transactions on Information Theory, 55:5682–5699, 2009.
 [5] Boris I. Godoy, Graham C. Goodwin, Juan C. Aguero, Damian Marelli, and Torbjorn Wigren. On identification of FIR systems having quantized output data. Automatica, 47:1905–1915, 2011.
 [6] Fredrik Gustafsson and Rickard Karlsson. Statistical results for system identification based on quantized observations. Automatica, 45:2794–2801, 2009.
 [7] Satish G. Iyengar, Pramod K. Varshney, and Thyagaraju Damarla. A parametric copulabased framework for hypothesis testing using heterogeneous data. IEEE Transactions on Signal Processing, 59(5):2308–2319, May 2011.
 [8] Damián Marelli, Keyou You, and Minyue Fu. Identification of ARMA models using intermittent and quantized output observations. Automatica, 49:360–369, 2013.
 [9] V. Megalooikonomou and Y. Yesha. Quantizer design for distributed estimation with communication constraints and unknown observation statistics. IEEE Transactions on Communications, 48(2):181–184, February 2000.
 [10] R. B. Nelsen. An Introduction to Copulas. SpringerVerlag, New York, 1999.
 [11] A. Ribeiro and G. B. Giannakis. Bandwidth-constrained distributed estimation for wireless sensor networks–Part I: Gaussian case. IEEE Transactions on Signal Processing, 54(3):1131–1143, March 2006.
 [12] James C. Spall. Asymptotic normality and uncertainty bounds for reliability estimates from subsystem and full system tests. In American Control Conference, pages 56–61, Fairmont Queen Elizabeth, Montreal, Canada, June 2012.
 [13] Ashok Sundaresan and Pramod K. Varshney. Location estimation of a random signal source based on correlated sensor observations. IEEE Transactions on Signal Processing, 59(2):787–799, February 2011.
 [14] A. W. Van der Vaart. Asymptotic statistics. Cambridge University Press, New York, 2000.
 [15] Pramod K. Varshney. Distributed Detection and Data Fusion. New York: SpringerVerlag, 1997.
 [16] Venugopal V. Veeravalli and Pramod K. Varshney. Distributed inference in wireless sensor networks. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 370(1958):100–117, January 2012.
 [17] Le Yi Wang, G. George Yin, JiFeng Zhang, and Yanlong Zhao. System Identification with Quantized Observations. Birkhauser, 2010.
 [18] Torbjorn Wigren. Approximate gradients, convergence and positive realness in recursive identification of a class of nonlinear systems. International Journal of Adaptive Control and Signal Processing, 9:325–354, 1995.
 [19] JinJun Xiao, Alejandro Ribeiro, ZhiQuan Luo, and Georgios B. Giannakis. Distributed compressionestimation using wireless sensor networks. IEEE Signal Processing Magazine, 23(4):27–41, July 2006.