Maximum Likelihood Estimation from Sign Measurements with Sensing Matrix Perturbation


Jiang Zhu, Xiaohan Wang, and Yuantao Gu The authors are with the Department of Electronic Engineering, Tsinghua University, Beijing 100084, CHINA. The corresponding author of this work is Yuantao Gu (e-mail: gyt@tsinghua.edu.cn).
December 6, 2013
Abstract

The problem of estimating an unknown deterministic parameter vector from sign measurements with a perturbed sensing matrix is studied in this paper. We analyze the best achievable mean square error (MSE) performance by exploring the corresponding Cramér-Rao Lower Bound (CRLB). To estimate the parameter, the maximum likelihood (ML) estimator is utilized and its consistency is proved. We show that the perturbation on the sensing matrix degrades the performance of the ML estimator in most cases. However, suitable perturbation may improve the performance in some special cases. We then reformulate the original ML estimation problem as a convex optimization problem, which can be solved efficiently. Furthermore, theoretical analysis implies that the perturbation-ignored estimate is a scaled version of the ML estimate with the same direction. Finally, numerical simulations are performed to validate our theoretical analysis.

Keywords: Maximum likelihood estimation, sign measurements, Gaussian perturbation, CRLB.

1 Introduction

The linear regression problem with perturbed sensing matrix has been extensively studied in recent years [1, 2, 3]. Mathematically, the vector {\mathbf{y}}\in{\mathbb{R}}^{N} is observed via a corrupted sensing matrix as

\displaystyle{\mathbf{y}}=({\mathbf{H}}+{\mathbf{E}})^{\rm T}{\mathbf{w}}+{\mathbf{n}}, (1)

where {\mathbf{H}}\in{\mathbb{R}}^{p\times N} is a known deterministic sensing matrix, and \mathbf{E} is a random matrix whose elements are i.i.d., e_{ij}\sim\mathcal{N}(0,\sigma_{e}^{2}), i=1,\cdots,p, j=1,\cdots,N. The additive noise vector \mathbf{n} is independent of \mathbf{E} and satisfies {\mathbf{n}}\sim{\mathcal{N}}({\mathbf{0}},\sigma_{n}^{2}{\mathbf{I}}), where \sigma_{e}^{2} is viewed as the strength of the perturbation. To estimate the unknown parameter vector {\mathbf{w}}\in{\mathbb{R}}^{p}, the perturbation \mathbf{E} is treated as a nuisance parameter and the maximum likelihood (ML) method is used. Several numerical methods have been proposed, including minimax search, maximin search, and the classical expectation-maximization (EM) algorithm [3].

It is natural to further study the parameter estimation problem with perturbed sensing matrix by the sign measurements

\displaystyle{\mathbf{y}}={\rm sign}\left(\left({\mathbf{H}}+{\mathbf{E}}\right)^{\rm T}\mathbf{w}+\mathbf{n}\right), (2)

where \mathbf{y} denotes a binary measurement vector and {\rm sign}(\cdot) operates elementwise, returning the sign of each entry (we take the sign of a real number to be 1 when the number is positive and -1 when it is nonpositive).

1.1 Problem Background

Most available works focus on the simplified case of (2) in which the perturbation does not exist, i.e., {\mathbf{E}}={\mathbf{0}} [4, 5]. In this setting, the model is reduced to

\displaystyle{\mathbf{y}}={\rm sign}\left({{\mathbf{H}}^{\rm T}}{\mathbf{w}}+{\mathbf{n}}\right). (3)

Model (3) is closely related to the binary regression model in statistics, where only binary outcomes are obtained to estimate the factors that affect the results. When the noise is Gaussian distributed, the binary regression model is also called a probit model [6], which can be described as

\displaystyle{\mathbf{y}}={\rm sign}\left({{\mathbf{H}}^{\rm T}}\tilde{\mathbf{w}}+\tilde{\mathbf{n}}\right), (4)

where \tilde{\mathbf{n}} is a normalized Gaussian vector satisfying \tilde{\mathbf{n}}\sim\mathcal{N}({\mathbf{0}},{\mathbf{I}}), and \tilde{\mathbf{w}}={\mathbf{w}}/\sigma_{n} is what we wish to estimate. Once an estimate of \tilde{\mathbf{w}} is acquired, the distribution of the sign measurement {\rm sign}({\mathbf{h}}^{\rm T}\tilde{\mathbf{w}}+\tilde{\mathbf{n}}) can be predicted for a new {\mathbf{h}}\in{\mathbb{R}}^{p}.

Another application related to model (3) is the estimation of physical quantities (pressure, temperature, mean location, etc.) from binary quantized measurements in wireless sensor networks. In most related works, the mathematical model in this scenario is a special case of (3) in which the parameter to be estimated is a scalar. In this application, there are a large number of spatially distributed nodes. Each node has access to a subset of the observations and has to transmit its information to the fusion center. Due to the limited bandwidth, a node may quantize its measurements coarsely. It is known that the minimum variance of the estimator based on binary measurements is only \pi/2 times that of the clairvoyant estimator [7, 8], which motivates researchers to approach this performance with carefully designed strategies. In [9, 10], distributed estimation algorithms are proposed to reduce the transmission requirements by exploiting spatial correlation. Furthermore, a universal decentralized estimation scheme is proposed to cope with the case of unknown noise distribution [11, 12, 13]. While all of the above works focus on the scalar case, [14] analyzes the performance of the ML estimator for multivariate parameters with dithered quantization.

1.2 Main Contribution

This paper focuses on the ML estimation of a vector parameter from sign measurements with sensing matrix perturbation. The main contribution of this work is two-fold. On the one hand, the Cramér-Rao Lower Bound (CRLB) on the mean square error (MSE) is derived to analyze the performance of unbiased estimators. The ML estimator is proved to be consistent, and its performance is then studied using the CRLB. It is shown that the perturbation on the sensing matrix worsens the performance in most cases. However, suitable perturbation may improve the estimation accuracy in some special cases. On the other hand, the ML estimation problem is reformulated as a convex optimization problem, implying that if the global optimal point exists, there are numerical algorithms guaranteed to converge to it. We analyze the probability that the optimal point of the ML estimator exists. It is shown that moderate perturbation may be beneficial by providing randomness for the measurements. Moreover, the mismodeling effect is studied in the case that the perturbation is ignored. We show that the estimator ignoring the perturbation provides a scaled estimate with the same direction as that of the ML estimator. This implies that the direction can still be estimated correctly when the perturbation information is unknown. Finally, we compare the MSE performance of the ML estimator against the CRLB by simulation.

1.3 Related Work

For model (3), there have been many works focusing on the estimation of a scalar parameter [8, 10, 14]. In [8], the case in which the sensing matrix is {\mathbf{H}}=[1,\cdots,1] is studied. The parameter w is assumed to lie in the range (-\Delta,\Delta), and the worst-case CRLB is optimized with respect to the variance of the additive noise. It is also shown that the performance of the estimation can be improved by adding a periodic waveform or a feedback signal prior to quantization. Recently, an additive outlier \mathbf{o} has been introduced in (3) to model the errors [16] by

\displaystyle{\mathbf{y}}={\rm{sign}}\left({{\mathbf{H}}^{\rm T}}\mathbf{w}+\mathbf{n}+\mathbf{o}\right).

The sparsity of the outliers is controlled, and a desirable tradeoff between model fit and complexity is attained by a new classification-based approach. In [17, 18], both the outliers and the unknown parameters are sparse, and the ML method for the probit model is used to estimate the model parameters. Suppose that the numbers of nonzero entries of \mathbf{o} and \mathbf{w} are less than or equal to k_{o} and k_{w}, respectively. By defining the concatenated matrix {\mathbf{Q}}\triangleq[{\mathbf{H}}^{\rm T},{\mathbf{I}}_{N\times N}], a sufficient condition for the identifiability of \mathbf{w} and \mathbf{o} is

\displaystyle{\rm{Spark}}({\mathbf{Q}})>2(k_{o}+k_{w}),

where {\rm{Spark}}({\mathbf{Q}}) denotes the smallest number of linearly dependent columns of \mathbf{Q}. The ML estimation of the vector [{\mathbf{w}}^{\rm T},{\mathbf{o}}^{\rm T}]^{\rm T} is equivalent to the following optimization problem

\displaystyle\underset{{\mathbf{w}},{\mathbf{o}}}{\operatorname{minimize}}~-\left(\sum\limits_{i=1}^{N}{\log}\Phi\left(y_{i}\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}+o_{i}}{\sigma_{n}}\right)\right)
\displaystyle{\operatorname{subject~to}}~\|{\mathbf{w}}\|_{0}\leq k_{w},\ \|{\mathbf{o}}\|_{0}\leq k_{o},

where \Phi(u)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{u}{\rm e}^{-\frac{x^{2}}{2}}\,{\rm d}x is the cumulative distribution function of the standard Gaussian distribution. In [18], it is shown that the outliers and the unknown parameters can be jointly estimated by replacing the cardinality constraints with the convex l_{1}-norm. Though some methodologies utilized in the above papers are adopted in this work, the model they study is different from (2).

For the probit model (4), the standard ML procedure is often used to estimate the unknown parameter vector. The ML estimation is equivalent to the following optimization problem

\displaystyle\underset{\tilde{\mathbf{w}}}{\operatorname{minimize}}~-\sum\limits_{i=1}^{N}{\log}\Phi\left(y_{i}{\mathbf{h}}_{i}^{\rm T}\tilde{\mathbf{w}}\right).

This problem is convex and was first solved in [15]. Model (4) with uncertainty in the sensing matrix has been studied in a number of works. There are two approaches to describing the uncertainty of the sensing matrix [19]. The first approach is the standard errors-in-variables (EIV) model, where \mathbf{H} is modeled as a deterministic unknown sensing matrix, and \mathbf{G} is a noisy observation of \mathbf{H} described by {\mathbf{G}}={\mathbf{H}}+{\mathbf{E}}. Given the observations \mathbf{G} and \mathbf{y}, both \tilde{\mathbf{w}} and \mathbf{H} are estimated by solving

\displaystyle\underset{\tilde{\mathbf{w}},{\mathbf{H}}}{\operatorname{minimize}}~-l({\mathbf{y}},{\mathbf{G}};\tilde{\mathbf{w}},{\mathbf{H}}), (5)

where l({{\mathbf{y}},{\mathbf{G}}};{\tilde{\mathbf{w}},{\mathbf{H}}}) is the log-likelihood function of \mathbf{y} and \mathbf{G} parameterized by \tilde{\mathbf{w}} and \mathbf{H}. Equation (5) is equivalent to

\displaystyle\underset{{\mathbf{H}},\tilde{\mathbf{w}}}{\operatorname{minimize}}\left(-\sum\limits_{i=1}^{N}{\log}\Phi\left(y_{i}{\mathbf{h}}_{i}^{\rm T}\tilde{\mathbf{w}}\right)+\frac{\|{\mathbf{H}}-{\mathbf{G}}\|_{\rm F}^{2}}{2\sigma_{e}^{2}}\right). (6)

The number of variables grows as p(1+1/N) times the number of measurements N. In [19], it is shown that the ML estimator is in general not consistent, meaning that it will not converge to the true parameter in probability as the number of measurements tends to infinity. The second approach to describing the uncertainty is to model the sensing matrix as a random matrix. The statistical characterization of the sensing matrix is known, so the nuisance parameter can be eliminated and the estimation of \tilde{\mathbf{w}} becomes available. The above works focus on regression analysis, and some of their basic assumptions differ from those of this work.

Notation

For any scalar x\in\mathbb{R}, \lfloor x\rfloor (\lceil x\rceil) denotes the largest integer less than or equal to (smallest integer greater than or equal to) x. For an unknown parameter vector \mathbf{w} (scalar parameter w), {\mathbf{w}}_{0} (w_{0}) denotes its true value. For a random vector \mathbf{y}, {\rm E}_{\mathbf{y}}[\cdot] denotes the expectation taken with respect to \mathbf{y}. For {\mathbf{w}}=[w_{1},\cdots,w_{n}]^{\rm T} and a continuously differentiable function f:\mathbb{R}^{n}\rightarrow\mathbb{R}, \nabla_{\mathbf{w}}f and \nabla_{\mathbf{w}}^{2}f denote its gradient and Hessian. For a vector function {\mathbf{g}}:S\rightarrow\mathbb{R}^{r} defined on a set S in \mathbb{R}^{s}, {\partial{\mathbf{g}}({\boldsymbol{\theta}})}/{\partial{\boldsymbol{\theta}}} denotes its Jacobian matrix [{\partial{g}_{i}({\boldsymbol{\theta}})}/{\partial{\theta}_{j}}]_{r\times s}. For any appropriate matrix \mathbf{A}, a_{ij} denotes its (i,j)th element, {\mathbf{a}}_{i} denotes its ith column, \|\mathbf{A}\|_{\rm F} denotes its Frobenius norm, {\rm{tr}}(\mathbf{A}) denotes its trace, {\mathbf{A}}\succeq{\mathbf{0}} ({\mathbf{A}}\succ{\mathbf{0}}) means that \mathbf{A} is positive semidefinite (positive definite), and {\mathbf{A}}\succeq{\mathbf{B}} means that {\mathbf{A}}-{\mathbf{B}}\succeq{\mathbf{0}}. {\rm diag}(\lambda_{1},\cdots,\lambda_{p}) is a p\times p diagonal matrix with ith diagonal element \lambda_{i}. Other notations will be introduced when needed.

The rest of this paper is organized as follows. In Section II, the ML estimator is formulated and its consistency is proved. In Section III, the theoretical CRLB is derived and the theoretical performance limits are analyzed. In Section IV, we reformulate the original ML estimation problem as a convex optimization problem. In Section V, we discuss the probability that the likelihood function is unimodal, and provide some insights on the similarities and differences between the ML estimator and the perturbation-ignored estimator. In Section VI, the numerical results are presented. Finally, we conclude the paper in Section VII.

2 Maximum Likelihood Estimator

The model (2) can be written in a more canonical form

\displaystyle{\mathbf{y}}={\rm{sign}}\left({\mathbf{H}}^{\rm T}{\mathbf{w}}+{\mathbf{z}}\right), (7)

where {\mathbf{z}}={{\mathbf{E}}^{\rm T}}{\mathbf{w}}+{\mathbf{n}} is regarded as the sum of a multiplicative noise and an additive noise [20]. The variance of the equivalent noise \mathbf{z} depends on the parameter vector \mathbf{w}, which makes the problem more complex than the perturbation-free setting. Because the e_{ij} are i.i.d. Gaussian random variables, {{\mathbf{E}}^{\rm T}}{\mathbf{w}} is an N-dimensional Gaussian random vector. It follows by straightforward calculation that {\rm E}[{{\mathbf{E}}^{\rm T}}{\mathbf{w}}]={\mathbf{0}} and {\rm{Cov}}[{{\mathbf{E}}^{\rm T}}{\mathbf{w}}]=\sigma_{e}^{2}\|{\mathbf{w}}\|_{2}^{2}{\mathbf{I}}. Thus the variance of the multiplicative noise is \sigma_{e}^{2}\|{\mathbf{w}}\|_{2}^{2}. Then, from the mutual independence of {{\mathbf{E}}^{\rm T}}{\mathbf{w}} and {\mathbf{n}}, one has {\mathbf{z}}\sim{\mathcal{N}}({\mathbf{0}},\sigma_{z}^{2}{\mathbf{I}}), where

\displaystyle\sigma_{z}^{2}=\|\mathbf{w}\|_{2}^{2}\sigma_{e}^{2}+\sigma_{n}^{2}. (8)
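The equivalence between (2) and (7) can also be checked numerically. Below is a minimal Python sketch (ours, not from the paper; all variable names and parameter values are assumptions for illustration) verifying that {\mathbf{E}}^{\rm T}{\mathbf{w}} has covariance \sigma_{e}^{2}\|{\mathbf{w}}\|_{2}^{2}{\mathbf{I}}, so that \mathbf{z} has the variance given in (8).

import numpy as np

rng = np.random.default_rng(0)
p, N, trials = 4, 3, 20000
sigma_e, sigma_n = 0.5, 1.0
w = rng.standard_normal(p)

# Draw many realizations of E^T w (entries of E are i.i.d. N(0, sigma_e^2)).
samples = (sigma_e * rng.standard_normal((trials, N, p))) @ w   # shape (trials, N)

emp_cov = np.cov(samples, rowvar=False)                 # empirical covariance
theory = sigma_e**2 * np.dot(w, w) * np.eye(N)          # sigma_e^2 ||w||^2 I
print(np.round(emp_cov, 2))
print(np.round(theory, 2))
print("sigma_z^2 =", sigma_e**2 * np.dot(w, w) + sigma_n**2)    # equation (8)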

Now we calculate the likelihood function {\rm Pr}({\mathbf{y}};{\mathbf{w}}). Let {\bf H}=[{\bf h}_{1},{\bf h}_{2},\cdots,{\bf h}_{N}], {\mathcal{I}}_{+} and {\mathcal{I}}_{-} denote the set of indices \{i|y_{i}=1\} and \{i|y_{i}=-1\}, respectively. By partitioning the observations into {\mathcal{I}}_{+} and {\mathcal{I}}_{-}, the likelihood function {\rm Pr}(\mathbf{y};\mathbf{w}) is calculated to be

\displaystyle{\rm{Pr}}(\mathbf{y};\mathbf{w})=\prod\limits_{i\in{\mathcal{I}}_{+}}{\rm{Pr}}({\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}+z_{i}>0)\prod\limits_{i\in{\mathcal{I}}_{-}}{\rm{Pr}}({\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}+z_{i}\leq 0)
\displaystyle=\prod\limits_{i\in{\mathcal{I}}_{+}}\Phi\left(\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}}{\sigma_{z}}\right)\prod\limits_{i\in{\mathcal{I}}_{-}}\Phi\left(-\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}}{\sigma_{z}}\right)
\displaystyle=\prod\limits_{i=1}^{N}\Phi\left(y_{i}\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}}{\sigma_{z}}\right).

The corresponding log-likelihood function l({\mathbf{y}};{\mathbf{w}}) is given by

\displaystyle l({\mathbf{y}};{\mathbf{w}})=\sum\limits_{i=1}^{N}{\log}\Phi\left(y_{i}\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}}{\sigma_{z}}\right). (9)

Therefore, the ML estimation of the vector \mathbf{w} is equivalent to minimizing the negative of the log-likelihood function (9). Substituting (8) into (9), one has

\displaystyle\underset{{\mathbf{w}}\in{\mathbb{R}}^{p}}{\operatorname{minimize}}~-\sum_{i=1}^{N}{\log}\Phi\left(y_{i}\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}}{\sqrt{\|\mathbf{w}\|_{2}^{2}\sigma_{e}^{2}+\sigma_{n}^{2}}}\right). (10)
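The following Python sketch (our illustration; the function names are hypothetical) generates sign measurements from model (2) and evaluates the objective of (10).

import numpy as np
from scipy.stats import norm

def generate_sign_measurements(H, w0, sigma_e, sigma_n, rng):
    """Draw y = sign((H + E)^T w0 + n) as in model (2); H has shape (p, N)."""
    p, N = H.shape
    E = sigma_e * rng.standard_normal((p, N))
    n = sigma_n * rng.standard_normal(N)
    s = (H + E).T @ w0 + n
    return np.where(s > 0, 1, -1)          # sign convention: nonpositive -> -1

def neg_log_likelihood(w, y, H, sigma_e, sigma_n):
    """Objective of (10) for y in {-1, +1}^N."""
    sigma_z = np.sqrt(sigma_e**2 * np.dot(w, w) + sigma_n**2)   # equation (8)
    # norm.logcdf is numerically safer than log(norm.cdf(.)) for large negative arguments
    return -np.sum(norm.logcdf(y * (H.T @ w) / sigma_z))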

Now we briefly discuss the statistical identifiability of the model (2). The model is statistically identifiable if the underlying parameter can be estimated accurately from an infinite number of measurements. Mathematically, this means that if {\mathbf{w}}_{1} is not equal to {\mathbf{w}}_{2}, the corresponding measurements {\mathbf{y}}_{1} and {\mathbf{y}}_{2} must follow different probability distributions. A necessary and sufficient condition to guarantee the identifiability of model (2) is that {\mathbf{H}} has full row rank, the same condition as for the linear regression model (1).

We will close this section by studying the consistency of the ML estimator on

y_{i}={\rm{sign}}\left({\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}+{z}_{i}\right),\quad i=1,2,\cdots,N, (11)

where the {\bf h}_{i} are generated from an underlying continuous distribution and the {z}_{i}\sim{\mathcal{N}}(0,\sigma_{z}^{2}) form an i.i.d. sequence. Consistency means that as the number of measurements N tends to infinity, the estimator converges to the true parameter value {\mathbf{w}}_{0} in probability. Although it has been demonstrated that the ML estimator (6) in the EIV model is not consistent in general [19], we prove that the ML estimator is consistent for the model (11).

Theorem

Assume that \mathbf{w} lies in the parameter space {\mathcal{W}}=\{{\mathbf{w}}|\|\mathbf{w}\|_{2}\leq R_{w}\}, where R_{w} is a positive constant. \{{\mathbf{h}}_{i}\}_{i=1}^{N} are generated from an underlying continuous distribution. The ML estimator (10) is consistent. {}_{\blacksquare}

Proof

The proof is postponed to Appendix A.  

One may notice that the unknown parameter is assumed to be bounded, which is a technical condition needed for many theoretical analyses [3]. In practice, we can choose R_{w} sufficiently large, so that the estimator effectively needs no knowledge of this constraint.

3 Cramér-Rao Lower Bound

We now provide a lower bound on the variance of any unbiased estimator of the model (2). It is well known that the MSE of the ML estimator asymptotically achieves the CRLB under certain regularity conditions. Therefore, the CRLB provides a reasonable benchmark shedding light on the performance of the ML estimator.

The Fisher information matrix (FIM) is used to find the bounds for unbiased estimators. We can calculate the FIM as the negative expectation of the Hessian of the log-likelihood function with respect to \mathbf{y},

\displaystyle{\mathbf{J}}({\mathbf{w}})=-{\rm E}_{\mathbf{y}}[\nabla_{\mathbf{w}}^{2}l({\mathbf{y}};{\mathbf{w}})].

The CRLB matrix is equal to the inverse of the FIM by

\displaystyle{\rm CRLB}({\mathbf{w}})=\left({\mathbf{J}}({\mathbf{w}})\right)^{-1},

and the CRLB on the MSE is the trace of the CRLB matrix.

Now a closed-form expression of the CRLB on the MSE for the model (2) is provided in the following theorem.

Theorem

Consider the estimation of \mathbf{w} in the model (2) with both \sigma_{n}^{2} and \sigma_{e}^{2} known. The FIM is {\mathbf{J}}(\mathbf{w})={\mathbf{M}}{\mathbf{\Lambda}}{\mathbf{M}}^{\rm T} and the MSE {\rm mse}({\hat{\mathbf{w}}})={\rm E}[\|{\hat{\mathbf{w}}}-{\mathbf{w}}\|_{2}^{2}] of any unbiased estimator {\hat{\mathbf{w}}} satisfies

\displaystyle{\rm{mse}}({\hat{\mathbf{w}}})\geq{\rm{tr}}\left(\left({\mathbf{M}}{\mathbf{\Lambda}}{\mathbf{M}}^{\rm T}\right)^{-1}\right), (12)

where {\mathbf{\Lambda}} is a positive diagonal matrix with elements

\displaystyle{\lambda}_{ii}=\frac{1}{2\pi\sigma_{z}^{2}}\left(\frac{1}{\Phi\left(\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}}{\sigma_{z}}\right)}+\frac{1}{\Phi\left(-\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}}{\sigma_{z}}\right)}\right){\rm e}^{-\frac{\left({\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}\right)^{2}}{\sigma_{z}^{2}}}, (13)

and

\displaystyle{\mathbf{M}}=\left({\mathbf{I}}-\frac{\sigma_{e}^{2}}{\sigma_{z}^{2}}{\mathbf{w}}{\mathbf{w}}^{\rm T}\right){\mathbf{H}}. (14)

{}_{\blacksquare}

Proof

The proof is postponed to Appendix B.  
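As an illustration, the bound in (12)-(14) can be evaluated numerically; the sketch below (ours, with assumed variable names) builds the FIM {\mathbf{J}}={\mathbf{M}}{\mathbf{\Lambda}}{\mathbf{M}}^{\rm T} and returns the CRLB on the MSE.

import numpy as np
from scipy.stats import norm

def crlb_mse(H, w, sigma_e, sigma_n):
    """Trace of the inverse FIM, i.e., the right-hand side of (12)."""
    p, N = H.shape
    sigma_z2 = sigma_e**2 * np.dot(w, w) + sigma_n**2            # equation (8)
    u = (H.T @ w) / np.sqrt(sigma_z2)
    lam = (1.0 / (2 * np.pi * sigma_z2)) \
          * (1.0 / norm.cdf(u) + 1.0 / norm.cdf(-u)) \
          * np.exp(-(H.T @ w)**2 / sigma_z2)                     # equation (13)
    M = (np.eye(p) - (sigma_e**2 / sigma_z2) * np.outer(w, w)) @ H   # equation (14)
    J = M @ np.diag(lam) @ M.T                                   # Fisher information matrix
    return np.trace(np.linalg.inv(J))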

For simplicity, let \mathbf{J} denote the FIM instead of {\mathbf{J}}(\mathbf{w}) in the following text. Two extreme cases will be discussed. The first case is the perturbation-free setting. Then {\mathbf{M}}={\mathbf{H}} and \sigma_{z}^{2}=\sigma_{n}^{2}, and the FIM degenerates to {\mathbf{J}}={\mathbf{H}}{\mathbf{\Lambda}}{\mathbf{H}}^{\rm T}, which is consistent with [18]. The second case is the additive-noise-free setting. In this case \mathbf{M} is rank deficient and {\mathbf{J}} is singular, implying that there exists no finite-variance unbiased estimator [22]. We can also see this from the reduced model

\displaystyle{\mathbf{y}}={\rm{sign}}\left({({\mathbf{H}}+{\mathbf{E}})}^{\rm T}\mathbf{w}\right). (15)

For any estimator \hat{\mathbf{w}}, its scaled version k\hat{\mathbf{w}} explains the measurements (15) equally well for all k>0. This demonstrates that the magnitude information of \mathbf{w} is lost in the sign measurements. Therefore, the additive noise \mathbf{n} is necessary for the estimation in that it provides a dynamic bias for the sign function [21]. We always assume that \sigma_{n}^{2} is nonzero.

We will then discuss how the multiplicative noise and the additive noise affect the CRLB on the MSE. The sign measurement can be viewed as a nonlinear system, thus its performance can be enhanced by the presence of optimized random noise [25, 26, 27]. In the model (2), there may exist optimal variances of multiplicative noise and additive noise that minimize the CRLB on the MSE. Viewing \sigma_{n}^{2} and \sigma_{e}^{2} as variables, we will discuss three cases in the following subsections, corresponding to the situations in which the variance of the equivalent noise \sigma_{z}^{2}, of the multiplicative noise \sigma_{e}^{2}\|{\mathbf{w}}\|_{2}^{2}, or of the additive noise \sigma_{n}^{2} is fixed, respectively.

3.1 The Case of Equivalent Noise Fixed

Suppose that we have two models. One is model (2); the corresponding estimator and FIM are denoted by \hat{\mathbf{w}}({\mathbf{y}},{\mathbf{H}},\sigma_{e}^{2},\sigma_{n}^{2}) and \mathbf{J}, respectively. The other is model (3), for which we use \hat{\mathbf{w}}({\mathbf{y}},{\mathbf{H}},0,\sigma_{z}^{2}) to denote the estimator and \tilde{\mathbf{J}} to denote the FIM. Define \gamma={\sigma_{e}^{2}\|{\mathbf{w}}\|_{2}^{2}}/{\sigma_{n}^{2}} as the ratio of the variance of the multiplicative noise to that of the additive noise. Let \tilde{\lambda}_{i} denote the ith largest eigenvalue of the FIM \tilde{\mathbf{J}}; then the following result is obtained.

Proposition

The multiplicative noise degrades the estimation performance when the variance of the equivalent noise is fixed. In the MSE sense, we have the following inequality

\displaystyle\frac{\gamma^{2}+2\gamma}{\tilde{\lambda}_{1}}\leq{\rm{tr}}({\mathbf{J}}^{-1})-{\rm{tr}}(\tilde{\mathbf{J}}^{-1})\leq\frac{\gamma^{2}+2\gamma}{\tilde{\lambda}_{p}}. (16)

{}_{\blacksquare}

Proof

The proof is postponed to Appendix C.  

Proposition 3.1 demonstrates that the minimum MSE is achieved at \sigma_{e}^{2}=0 when \sigma_{z}^{2} is fixed. It also shows that when the variance of the multiplicative noise is much smaller than that of the additive noise, the lower and upper bounds on {\rm{tr}}({\mathbf{J}}^{-1})-{\rm{tr}}(\tilde{\mathbf{J}}^{-1}) are proportional to \gamma, whereas when the variance of the multiplicative noise is larger than that of the additive noise, the two bounds are proportional to \gamma^{2}. Therefore, the performance of the estimator deteriorates dramatically as the multiplicative noise increases when the variance of the equivalent noise is fixed.
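The inequality (16) can be checked numerically with the FIM construction of (12)-(14); the following sketch (ours, with assumed dimensions and noise levels) compares the two models at the same equivalent noise variance.

import numpy as np
from scipy.stats import norm

def fim(H, w, sigma_e, sigma_n):
    """FIM J = M Lambda M^T from (13)-(14)."""
    p, N = H.shape
    sigma_z2 = sigma_e**2 * np.dot(w, w) + sigma_n**2
    u = (H.T @ w) / np.sqrt(sigma_z2)
    lam = (1 / (2 * np.pi * sigma_z2)) * (1 / norm.cdf(u) + 1 / norm.cdf(-u)) \
          * np.exp(-(H.T @ w)**2 / sigma_z2)
    M = (np.eye(p) - (sigma_e**2 / sigma_z2) * np.outer(w, w)) @ H
    return M @ np.diag(lam) @ M.T

rng = np.random.default_rng(1)
p, N = 4, 300
H = rng.standard_normal((p, N))
w = rng.standard_normal(p)
sigma_e, sigma_n = 0.2, 1.0
sigma_z2 = sigma_e**2 * np.dot(w, w) + sigma_n**2

J = fim(H, w, sigma_e, sigma_n)                 # perturbed model (2)
J_tilde = fim(H, w, 0.0, np.sqrt(sigma_z2))     # perturbation-free model (3), same sigma_z^2
gamma = sigma_e**2 * np.dot(w, w) / sigma_n**2
diff = np.trace(np.linalg.inv(J)) - np.trace(np.linalg.inv(J_tilde))
eigs = np.linalg.eigvalsh(J_tilde)              # ascending eigenvalues of J_tilde
print((gamma**2 + 2 * gamma) / eigs[-1], "<=", diff, "<=", (gamma**2 + 2 * gamma) / eigs[0])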

Going back to the unquantized problem \mathbf{y}={{({\mathbf{H}}+{\mathbf{E}})}^{\rm T}}\mathbf{w}+\mathbf{n}, one draws the opposite conclusion. We define two unquantized problems analogous to the two models above. Using the same notation and assuming the variances of the equivalent noise are equal, the result is opposite to (16). According to (12) of [28], one has

\displaystyle{\rm{tr}}(\tilde{\mathbf{J}}^{-1})\geq{\rm{tr}}({\mathbf{J}}^{-1}).

This result demonstrates that when the measurement is unquantized and the variance of the equivalent noise is fixed, noise coupled with the parameter information can help estimate the parameter.

3.2 The Case of Multiplicative Noise Fixed

Now we discuss the case in which the variance of the multiplicative noise is fixed. Because \mathbf{w} is deterministic, \sigma_{e}^{2} can be viewed as the variable instead of \|{\mathbf{w}}\|_{2}^{2}\sigma_{e}^{2}. We consider two extreme cases. The first is that \sigma_{n}^{2} is zero. In this case, we have seen that the corresponding FIM is singular, and thus there exists no finite-variance unbiased estimator of \mathbf{w}. In the other case, when \sigma_{n}^{2} tends to infinity, according to (13), one has

\displaystyle\lim_{\sigma_{n}^{2}\rightarrow\infty}\lambda_{ii}=\lim_{\sigma_{n}^{2}\rightarrow\infty}\frac{2}{\pi\sigma_{z}^{2}}{\rm e}^{-\frac{\left({\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}\right)^{2}}{\sigma_{z}^{2}}}=0,

which implies that the CRLB on the MSE tends to infinity as \sigma_{n}^{2} increases without bound. Except for these two cases, the CRLB on the MSE is finite. These results show that the additive noise has two opposing effects. On the one hand, it provides varying thresholds for the sign measurement, which is beneficial to the estimation. On the other hand, the additive noise increases the variance of the estimate [30]. Therefore, there may exist an optimal variance of the additive noise which balances these opposing effects and minimizes the CRLB. The above analysis will be substantiated by an example later.

3.3 The Case of Additive Noise Fixed

When \sigma_{e}^{2} is zero, the CRLB on the MSE is finite. Whereas when \sigma_{e}^{2} tends to infinity, according to (13), one has

\displaystyle\lim_{\sigma_{e}^{2}\rightarrow\infty}\lambda_{ii}=\lim_{\sigma_{e}^{2}\rightarrow\infty}\frac{2}{\pi\sigma_{z}^{2}}{\rm e}^{-\frac{\left({\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}\right)^{2}}{\sigma_{z}^{2}}}=0.

Thus the FIM becomes singular and the CRLB on the MSE tends to infinity. Intuitively, one may expect that the optimal variance of the multiplicative noise is zero. However, this is not always true: there may exist an optimal nonzero variance of the multiplicative noise, as we show in the following example.

3.4 An Example

An example is now presented to verify our analysis in all three cases. Consider a scalar parameter estimation problem where the mean of the sensing matrix is {\mathbf{H}}=[1,1,\cdots,1]. According to (12), the CRLB is

\displaystyle{\rm CRLB}(w)=\frac{2\pi\sigma_{z}^{2}}{N}\left(1+\frac{\sigma_{e}^{2}w^{2}}{\sigma_{n}^{2}}\right)^{2}\Phi\left(-\frac{w}{\sigma_{z}}\right)\Phi\left(\frac{w}{\sigma_{z}}\right){\rm e}^{\frac{w^{2}}{\sigma_{z}^{2}}}. (17)

We wish to minimize the CRLB (17) in three cases, respectively. It is obvious that the minimum CRLB is attained at \sigma_{e}^{2}=0 when \sigma_{z}^{2} is fixed. When either \sigma_{n}^{2} or \sigma_{e}^{2} is fixed, it is difficult to exactly analyze (17). Fortunately, by using the Chernoff bound for the CDF [29]

\displaystyle\Phi\left(-\frac{w}{\sigma_{z}}\right)\Phi\left(\frac{w}{\sigma_{z}}\right)\leq\frac{1}{4}{\rm e}^{-\frac{w^{2}}{2\sigma_{z}^{2}}}, (18)

one can find an upper bound for {\rm CRLB}(w) by substituting (18) in (17)

\displaystyle{\rm CRLB}(w)\leq\frac{\pi\sigma_{z}^{2}}{2N}\left(1+\frac{\sigma_{e}^{2}w^{2}}{\sigma_{n}^{2}}\right)^{2}{\rm e}^{\frac{w^{2}}{2\sigma_{z}^{2}}}. (19)

In fact, the Chernoff bound is a very tight approximation for finding the optimal value of \sigma_{n}^{2} or \sigma_{e}^{2}, as will be shown later. Substituting (8) into (19) and dropping the constant terms, we define the natural logarithm of the right-hand side of (19) as

\displaystyle f(\sigma_{n}^{2},\sigma_{e}^{2})\triangleq 3\log(\sigma_{n}^{2}+\sigma_{e}^{2}w^{2})+\frac{w^{2}}{2(\sigma_{n}^{2}+\sigma_{e}^{2}w^{2})}-2\log\sigma_{n}^{2}. (20)

When \sigma_{e}^{2} is fixed, we minimize (20) with respect to \sigma_{n}^{2}. The optimal variance of the additive noise is approximated by

\displaystyle{}_{\rm opt}{\sigma}_{n}^{2}\approx{}_{\rm app}{\sigma}_{n}^{2}=\frac{w^{2}}{2}\left(\sqrt{9\sigma_{e}^{4}+\sigma_{e}^{2}+\frac{1}{4}}+\frac{1}{2}+\sigma_{e}^{2}\right). (21)

This means that there exists an optimal additive noise that matches the multiplicative noise and the unknown parameter. Whereas when the variance of the additive noise \sigma_{n}^{2} is fixed, the optimal {}_{\rm opt}{\sigma}_{e}^{2} is

\displaystyle{}_{\rm opt}{\sigma}_{e}^{2}\approx{}_{\rm app}{\sigma}_{e}^{2}=\begin{cases}\frac{1}{6}-\frac{\sigma_{n}^{2}}{w^{2}},&{\rm if}\;\frac{\sigma_{n}^{2}}{w^{2}}\leq\frac{1}{6};\\ 0,&{\rm otherwise}.\end{cases} (22)

It may seem unreasonable that the multiplicative noise can improve the performance of the estimation. By carefully studying the condition in (22), one finds that the variance of the additive noise \sigma_{n}^{2} must be very small compared to w^{2}. In this setting, the randomness introduced by the additive noise is so weak that suitable perturbation may improve the MSE performance. However, to estimate the parameter accurately, a very large number of measurements is needed to ensure enough fluctuation in the measurements. Thus the ML estimator achieves the CRLB only when the number of measurements is very large. When the variance of the additive noise \sigma_{n}^{2} is comparable with the energy of the parameter w, the randomness introduced by the additive noise suffices and {}_{\rm opt}{\sigma}_{e}^{2} is zero.

Notice that the above analysis is established for a given w. Although the parameter w is unknown in practice, the theoretical analysis is still useful in three respects. First, it gives insight into the relationship between the additive noise and the perturbation. Second, the theoretical MSE performance limits for unbiased estimators are obtained by choosing the optimal {}_{\rm opt}{\sigma}_{n}^{2} or {}_{\rm opt}{\sigma}_{e}^{2}. Third, one may extend the above ideas to the case of an unknown parameter w, optimizing the Bayesian CRLB [31] or the worst-case CRLB [8] instead if some prior information is known.
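The scalar example can also be checked numerically. The sketch below (ours; the parameter values mirror those used later in the simulation section) minimizes the exact CRLB (17) over \sigma_{n}^{2} and compares the minimizer with the approximation (21).

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

w0, sigma_e2, N = 1.0, 0.3, 100   # N only scales (17) and does not change the minimizer

def crlb_scalar(sigma_n2):
    """Exact CRLB (17) for the scalar case with H = [1, ..., 1]."""
    sigma_z2 = sigma_e2 * w0**2 + sigma_n2
    sz = np.sqrt(sigma_z2)
    return (2 * np.pi * sigma_z2 / N) * (1 + sigma_e2 * w0**2 / sigma_n2)**2 \
           * norm.cdf(-w0 / sz) * norm.cdf(w0 / sz) * np.exp(w0**2 / sigma_z2)

opt = minimize_scalar(crlb_scalar, bounds=(1e-3, 10.0), method='bounded').x
app = 0.5 * w0**2 * (np.sqrt(9 * sigma_e2**2 + sigma_e2 + 0.25) + 0.5 + sigma_e2)  # (21)
print("numerical minimizer of (17):", round(opt, 2))   # about 0.88 (cf. Fig. 2)
print("approximation (21):        ", round(app, 2))    # about 0.98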

4 ML Estimation via Convex Optimization

At first sight, one might wish to solve the ML estimation problem (10) by steepest descent or Newton's method. However, problem (10) is non-convex, so gradient- and Hessian-based numerical algorithms are not guaranteed to converge to the optimal point. Moreover, a direct solution cannot provide more insight into the problem itself. Fortunately, (10) can be reformulated as a convex optimization problem. By introducing a new variable

\displaystyle{\mathbf{v}}=\frac{\mathbf{w}}{\sqrt{\|\mathbf{w}\|_{2}^{2}\sigma_{e}^{2}+\sigma_{n}^{2}}}, (23)

we transform the original optimization problem (10) to another one with respect to \mathbf{v}. According to (23), one has

\displaystyle\|{\mathbf{v}}\|_{2}<\frac{1}{\sigma_{e}}. (24)

As long as {\mathbf{v}} satisfies the inequality (24), the relationship between \mathbf{w} and {\mathbf{v}} is a one-to-one mapping. Consequently, the original problem (10) can be solved by first solving an equivalent convex optimization problem,

\displaystyle\underset{\mathbf{v}}{\operatorname{minimize}}~-\sum_{i=1}^{N}{\log}\Phi\left(y_{i}{\mathbf{h}}_{i}^{\rm T}{\mathbf{v}}\right) (25a)
\displaystyle{\operatorname{subject~to}}~\|{\mathbf{v}}\|_{2}^{2}<\frac{1}{\sigma_{e}^{2}}, (25b)

and then finding the optimal point by

\displaystyle{\mathbf{w}}=\frac{\sigma_{n}}{\sqrt{1-\sigma_{e}^{2}\|{\mathbf{v}}\|_{2}^{2}}}{\mathbf{v}}. (26)
Proposition

The optimization problem (10) is equivalent to problem (25). {}_{\blacksquare}

Proof

The proof is direct and is omitted.

The constraint set (25b) is an open ball. Thus if the optimal point of the problem (25) exists, it must be an interior point of the constraint set. For the uniqueness of the optimal point of the objective function (25a), we have the following result.

Proposition

The objective function of problem (25) is strictly convex, thus the optimal point of problem (25) is unique if it exists. {}_{\blacksquare}

Proof

The proof is postponed to Appendix D.  

Based on the two preceding propositions, the following proposition provides a necessary and sufficient condition for the existence of the optimal point of the original ML estimation problem (10).

Proposition

The optimal point of problem (10) exists if and only if the optimal point {\mathbf{v}}_{u}^{*} of the unconstrained convex optimization problem

\displaystyle\underset{{\mathbf{v}}\in\mathbb{R}^{p}}{\operatorname{minimize}}~-\sum_{i=1}^{N}{\log}\Phi\left(y_{i}{\mathbf{h}}_{i}^{\rm T}{\mathbf{v}}\right) (27)

satisfies the constraint (25b). {}_{\blacksquare}

Proof

According to the equivalence established above, the optimal point of the original ML estimation problem (10) exists if and only if problem (25) has an optimal point. Considering that the objective function of (25) is strictly convex and the constraint set (25b) is an open ball, the existence of an optimal point of (25) is equivalent to {\mathbf{v}}_{u}^{*} satisfying the constraint (25b). The result is then established.  

Therefore, we can solve at first the unconstrained optimization problem (27). Then we check whether {\mathbf{v}}_{u}^{*} satisfies the constraint (25b) to determine whether the original ML estimation problem (10) has an optimal point.

It has been shown that (10) does not have an optimal point in some cases, in which we say that there “exists” an optimal point with infinite norm. In order to provide a finite estimate, we adopt a norm-limiting operation. We project the optimal point (the infinite case included) onto the set {\mathcal{W}}=\{{\mathbf{w}}|\|{\mathbf{w}}\|_{2}\leq R_{w}\} if its norm is larger than a threshold R_{w}, where R_{w} is much larger than the norm of the true parameter {\bf w}_{0}. Because {\mathcal{W}} is a closed ball, it can be proved that the projection of {\bf w} onto {\mathcal{W}} is equivalent to the projection of {\bf v} onto {\mathcal{V}}=\{{\mathbf{v}}|\|\mathbf{v}\|_{2}\leq{R_{w}}/{\sqrt{R_{w}^{2}\sigma_{e}^{2}+\sigma_{n}^{2}}}\}.

In practice, we first get {\mathbf{v}}_{p}^{*}=\Pi_{\mathcal{V}}({\mathbf{v}}_{u}^{*}), where \Pi_{\mathcal{V}} represents the projection onto the parameter set {\mathcal{V}}. According to (26), we then obtain the corresponding {\mathbf{w}}_{p}^{*} which satisfies {\mathbf{w}}_{p}^{*}\in{\mathcal{W}} and is regarded as the ML estimator

\displaystyle{\hat{\mathbf{w}}}_{\rm ML}=\frac{\sigma_{n}}{\sqrt{1-\sigma_{e}^{2}\|{\mathbf{v}}_{p}^{*}\|_{2}^{2}}}{\mathbf{v}}_{p}^{*}. (28)

We will show that the norm-limiting operation is almost unnecessary when the number of measurements N is large enough. As shown in Theorem 2, the ML estimator is consistent. This means that the optimal point of (10) converges to {\mathbf{w}}_{0} in probability. Thus the optimal point of (25) also converges to {\mathbf{v}}_{0} in probability, where {\mathbf{v}}_{0} is determined by (23) using {\mathbf{w}}_{0}. Because {\mathbf{w}}_{0}\in{\mathcal{W}}, we have {\mathbf{v}}_{0}\in{\mathcal{V}}. As a consequence, when the measurement set is large, the optimal point {\mathbf{v}}_{u}^{*} of (27) satisfies {\mathbf{v}}_{u}^{*}\in{\mathcal{V}} with high probability.
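A minimal sketch (ours, with hypothetical function names; not the authors' implementation) of the overall procedure: solve the unconstrained convex problem (27), project the solution onto {\mathcal{V}} if necessary, and map back to {\mathbf{w}} through (28).

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def ml_estimate(y, H, sigma_e, sigma_n, R_w):
    p, N = H.shape

    def objective(v):                               # objective of (27)
        return -np.sum(norm.logcdf(y * (H.T @ v)))

    def gradient(v):                                # gradient of the objective
        u = y * (H.T @ v)
        ratio = np.exp(norm.logpdf(u) - norm.logcdf(u))   # phi(u)/Phi(u), computed stably
        return -H @ (y * ratio)

    v_u = minimize(objective, np.zeros(p), jac=gradient, method='BFGS').x

    # Project onto V = {v : ||v||_2 <= R_w / sqrt(R_w^2 sigma_e^2 + sigma_n^2)}.
    radius = R_w / np.sqrt(R_w**2 * sigma_e**2 + sigma_n**2)
    v_norm = np.linalg.norm(v_u)
    v_p = v_u if v_norm <= radius else v_u * (radius / v_norm)

    # Map back to w as in (28); the projection guarantees sigma_e^2 ||v_p||^2 < 1.
    return sigma_n * v_p / np.sqrt(1 - sigma_e**2 * np.dot(v_p, v_p))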

5 Further Discussion

5.1 Probability Analysis

We will analyze the probability that the optimal point of the ML estimation problem exists, in which situation the likelihood function is unimodal.

The previous section has shown that the optimal point {\mathbf{v}}_{u}^{*} of (27) may violate the constraint (25b). Since {\mathbf{v}}_{u}^{*} is a random vector, we may define the probability

\displaystyle{\rm{P}_{\mathcal{V}}}={\rm{Pr}}\left[\|{\mathbf{v}}_{u}^{*}\|_{2}<\frac{1}{\sigma_{e}}\right], (29)

which is the probability that the original log-likelihood function (9) is unimodal. This probability is meaningful in that it sheds light on the perturbation from a different perspective. On the one hand, the perturbation provides randomness for the sign function, and this randomness may make the likelihood function unimodal. On the other hand, by (29), {\rm P}_{\mathcal{V}} may decrease as the strength of the perturbation becomes larger. Notice that this probability can be written explicitly as {\rm P}_{\mathcal{V}}({\mathbf{H}},{\mathbf{w}}_{0},\sigma_{e}^{2},\sigma_{n}^{2},N), where {\mathbf{w}}_{0} denotes the true value of \mathbf{w}. Computing this probability in general appears to be hard. However, an analytic solution exists in a special case.

Proposition

Suppose that the true parameter w_{0} is a scalar, and the mean of the sensing matrix is {\mathbf{H}}=[1,1,\cdots,1]. By defining k^{-}=\left\lfloor N\Phi\left(-\frac{1}{\sigma_{e}}\right)\right\rfloor+1 and k^{+}=\left\lceil N\Phi\left(\frac{1}{\sigma_{e}}\right)\right\rceil-1, {\rm P}_{\mathcal{V}} is obtained by

\displaystyle{\rm P}_{\mathcal{V}}=\sum_{k={k^{-}}}^{k^{+}}{N\choose k}\left[\Phi\left(\frac{w_{0}}{\sigma_{z}}\right)\right]^{k}\left[1-\Phi\left(\frac{w_{0}}{\sigma_{z}}\right)\right]^{N-k}. (30)

{}_{\blacksquare}

Proof

The proof is postponed to Appendix E.  

It is difficult to analyze {\rm{P}_{\mathcal{V}}} by (30). When the number of measurements N is large, however, we may compute {\rm{P}_{\mathcal{V}}} with the normal approximation [34]

\displaystyle{\rm{P}_{\mathcal{V}}}\approx\Phi(\eta^{+})-\Phi(\eta^{-}),

where

\displaystyle\eta^{\pm}=\frac{k^{\pm}-N\Phi\left(\frac{w_{0}}{\sigma_{z}}\right)}{\sqrt{N\Phi\left(\frac{w_{0}}{\sigma_{z}}\right)\Phi\left(-\frac{w_{0}}{\sigma_{z}}\right)}}\approx\frac{\Phi\left(\pm\frac{1}{\sigma_{e}}\right)-\Phi\left(\frac{w_{0}}{\sigma_{z}}\right)}{\sqrt{\Phi\left(\frac{w_{0}}{\sigma_{z}}\right)\Phi\left(-\frac{w_{0}}{\sigma_{z}}\right)}}\sqrt{N}.

Now we give some results using the normal approximation; note that these conclusions only apply when the number of measurements is large enough. The normal approximation indicates that {\rm P}_{\mathcal{V}} is an increasing function of N and \underset{N\rightarrow\infty}{\lim}{{\rm P}_{\mathcal{V}}}=1. In fact, {\rm P}_{\mathcal{V}} is not a monotonically increasing function of N, but its overall trend is increasing in N. When \sigma_{z} and N are fixed, the higher the value of \sigma_{e}, the smaller the value of {{\rm P}_{\mathcal{V}}}. This means that for a larger \sigma_{e}, more observations are needed to ensure that {\rm P}_{\mathcal{V}} reaches a given level.
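A short sketch (ours, with assumed function names) that evaluates {\rm P}_{\mathcal{V}} exactly via (30) and via the normal approximation above, for the scalar case with {\mathbf{H}}=[1,\cdots,1].

import numpy as np
from math import floor, ceil
from scipy.stats import norm, binom

def p_unimodal(N, w0, sigma_e, sigma_n):
    sigma_z = np.sqrt(sigma_e**2 * w0**2 + sigma_n**2)
    q = norm.cdf(w0 / sigma_z)
    k_minus = floor(N * norm.cdf(-1.0 / sigma_e)) + 1
    k_plus = ceil(N * norm.cdf(1.0 / sigma_e)) - 1
    # Exact expression (30): binomial probabilities summed over k_minus..k_plus.
    exact = sum(binom.pmf(k, N, q) for k in range(k_minus, k_plus + 1))
    # Normal approximation Phi(eta+) - Phi(eta-), valid for large N.
    eta = lambda k: (k - N * q) / np.sqrt(N * q * (1 - q))
    approx = norm.cdf(eta(k_plus)) - norm.cdf(eta(k_minus))
    return exact, approx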

In view of the limited number of observations, {\rm P}_{\mathcal{V}} is always less than 1. Therefore, for any fixed N, the probability that the estimated parameter lies on the boundary of the parameter set {\mathcal{W}}=\{{\mathbf{w}}|\|{\mathbf{w}}\|_{2}\leq R_{w}\} is nonzero. Since R_{w} is larger than the norm of {\mathbf{w}}_{0}, the estimator has significant bias in the case that \|\hat{\mathbf{w}}_{\rm ML}\|_{2}=R_{w}.

5.2 Effects of Mismodeling on the ML Estimation

In this subsection, we study the effects on the estimation of ignoring the perturbation, and then analyze the performance of both estimators.

Assume that the true data-generating model is (2), and let the corresponding ML estimate be {\mathbf{w}}_{t}^{*}. If the perturbation is ignored, we obtain the corresponding ML estimate {\mathbf{w}}_{w}^{*} using model (3); we refer to {\mathbf{w}}_{w}^{*} as the perturbation-ignored estimator. Assume that {\mathbf{w}}_{t}^{*}\in{\mathcal{W}}. The relationship between {\mathbf{w}}_{t}^{*} and {\mathbf{w}}_{w}^{*} is given in the following proposition.

Proposition

The direction of {\mathbf{w}}_{w}^{*} is the same as that of {\mathbf{w}}_{t}^{*}, with the magnitude scaled. {\mathbf{w}}_{w}^{*} and {\mathbf{w}}_{t}^{*} satisfy the following relationship,

\displaystyle{\mathbf{w}}_{w}^{*}=\frac{{\mathbf{w}}_{t}^{*}}{\sqrt{1+\frac{\sigma_{e}^{2}}{\sigma_{n}^{2}}\|{\mathbf{w}}_{t}^{*}\|_{2}^{2}}}. (31)

{}_{\blacksquare}

Proof

Substituting \sigma_{e}^{2}=0 in (26), one has

\displaystyle{\mathbf{w}}_{w}^{*} \displaystyle=\sigma_{n}{\mathbf{v}}_{u}^{*}. (32)

Considering the existence of the perturbation, one has

\displaystyle{\mathbf{v}}_{u}^{*}=\frac{{\mathbf{w}}_{t}^{*}}{\sqrt{\|{\mathbf{w}}_{t}^{*}\|_{2}^{2}\sigma_{e}^{2}+\sigma_{n}^{2}}}. (33)

Using (32) and (33) to eliminate {\mathbf{v}}_{u}^{*}, the equation (31) is established.  

In some applications such as binary regression problems, the direction of the parameter \mathbf{w} is much more important than its magnitude. In this situation, even though we do not know the strength of the perturbation, we can still estimate the direction of \mathbf{w} by the perturbation-ignored estimator using model (3). A similar result for the binary regression problem is obtained in [23], which focuses on model (4) with sensing matrix perturbation and is consistent with Proposition 5.

According to the above analysis, if the statistics of both the perturbation {\bf E} and the additive noise {\bf n} are unknown, and the measurements are generated by model (2), we can still use model (4) to estimate the direction of the parameter {\bf w}.
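The scaling relation (31) and its inverse are straightforward to apply; the sketch below (ours, with hypothetical function names) maps between the two estimates when \sigma_{e}^{2} and \sigma_{n}^{2} are known.

import numpy as np

def shrink(w_t, sigma_e2, sigma_n2):
    """Perturbation-ignored estimate from the ML estimate, equation (31)."""
    return w_t / np.sqrt(1.0 + (sigma_e2 / sigma_n2) * np.dot(w_t, w_t))

def unshrink(w_w, sigma_e2, sigma_n2):
    """Invert (31): recover w_t from w_w (valid when sigma_e2*||w_w||^2 < sigma_n2)."""
    return w_w / np.sqrt(1.0 - (sigma_e2 / sigma_n2) * np.dot(w_w, w_w))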

Now we discuss the performance of both estimators. Both the parameters {\mathbf{w}}_{0} and \sigma_{n}^{2} are supposed to be fixed. We show that each estimator has its own advantages and disadvantages, and the performance of the estimator depends on the strength of perturbation \sigma_{e}^{2} and the number of measurements N.

When the number of measurements N tends to infinity, {\mathbf{w}}_{t}^{*} converges to {\mathbf{w}}_{0} in probability. According to (31), the perturbation-ignored estimator is inconsistent. For the vector parameter estimation problem, the squared error between {\mathbf{w}}_{w}^{*} and {\mathbf{w}}_{t}^{*} is

\displaystyle\|{\mathbf{w}}_{w}^{*}-{\mathbf{w}}_{t}^{*}\|_{2}^{2}=\|{\mathbf{w}}_{t}^{*}\|_{2}^{2}\left(1-\left(\frac{\sigma_{e}^{2}}{\sigma_{n}^{2}}\|{\mathbf{w}}_{t}^{*}\|_{2}^{2}+1\right)^{-\frac{1}{2}}\right)^{2}.

According to the continuous mapping theorem [24], the squared error converges to

\displaystyle\|{\mathbf{w}}_{w}^{*}-{\mathbf{w}}_{t}^{*}\|_{2}^{2}\stackrel{\rm p}{\rightarrow}\|{\mathbf{w}}_{0}\|_{2}^{2}\left(1-\left(\frac{\sigma_{e}^{2}}{\sigma_{n}^{2}}\|{\mathbf{w}}_{0}\|_{2}^{2}+1\right)^{-\frac{1}{2}}\right)^{2}. (34)

If {\sigma_{e}^{2}}\ll{\sigma_{n}^{2}}/\|{\mathbf{w}}_{0}\|_{2}^{2}, then using (1+x)^{-\frac{1}{2}}\approx 1-\frac{x}{2} with x=\frac{\sigma_{e}^{2}}{\sigma_{n}^{2}}\|{\mathbf{w}}_{0}\|_{2}^{2}, the squared error is approximately \frac{\sigma_{e}^{4}}{4\sigma_{n}^{4}}\|{\mathbf{w}}_{0}\|_{2}^{6}, and the relative error is \frac{\sigma_{e}^{2}}{2\sigma_{n}^{2}}\|{\mathbf{w}}_{0}\|_{2}^{2}.

We have discussed the probability that the ML estimator {\mathbf{w}}_{t}^{*} is finite. When the number of measurements is not large enough, the ML estimator {\mathbf{w}}_{t}^{*} has a much larger probability of being infinite than {\mathbf{w}}_{w}^{*}. Meanwhile, even if the ML estimator {\mathbf{w}}_{t}^{*} is finite, the norm of {\mathbf{w}}_{t}^{*} may be much larger than that of {\mathbf{w}}_{0}. In this situation, {\mathbf{w}}_{t}^{*} may have a larger MSE than {\mathbf{w}}_{w}^{*} despite the bias of {\mathbf{w}}_{w}^{*}.

To sum up, the direction of the perturbation-ignored estimator is the same as that of the ML estimator. If the perturbation and the additive noise are both unknown, we can still estimate the direction of the unknown parameter vector. Although the perturbation-ignored estimator is inconsistent and biased, it works better in the MSE sense when the number of measurements is not large.

6 Numerical Simulations

In this section, several numerical simulations are performed to verify the theoretical results presented in the previous sections. In these simulations, when the unknown parameter w is a scalar, the mean sensing matrix is chosen as {\mathbf{H}}=[1,1,\cdots,1], whereas when the unknown parameter \mathbf{w} is a vector, the mean sensing matrix \mathbf{H} is drawn with each entry h_{ij}\sim{\mathcal{N}}(0,1). All the MSEs of the ML estimator are averaged over 2000 Monte Carlo (MC) trials unless stated otherwise. We set R_{w}=4\|{{\mathbf{w}}_{0}}\|_{2}. The binary measurements are generated by model (2).

6.1 Validation of the Performance Limits

The first simulation is to validate the correctness of Proposition 3.1. The data is generated as follows. We set p=4, N=300, \sigma_{z}^{2}=4\|{\mathbf{w}}_{0}\|_{2}^{2}, and generate the true parameter {\mathbf{w}}_{0}\in{\mathbb{R}}^{p} from {\mathcal{N}}({\mathbf{0}},{\mathbf{I}}). The results are plotted in Fig. 1 with \gamma varying from 10^{-2} to 10^{2}. It can be seen that the bounds are proportional to \gamma when \gamma is small, while they are proportional to \gamma^{2} when \gamma is large.

Figure 1: Validation of Proposition 3.1 with a log-log plot. All three curves are approximately piecewise linear on the log-log scale.

The second simulation validates the existence of an optimal variance of the additive noise when \sigma_{e}^{2} is fixed. The parameters are selected as \sigma_{e}^{2}=0.3 and w_{0}=1, and the results are plotted in Fig. 2. It can be seen that the Chernoff bound is a very accurate approximation of the CRLB. Meanwhile, it is also demonstrated that (21) is a good approximation of the optimal variance of the additive noise.

Figure 2: The optimal variance of the additive noise when \sigma_{e}^{2} is fixed as 0.3. Note that the red dashed line indicates the CRLB, while the blue solid line indicates the Chernoff bound. The red point denotes the optimal value {}_{\rm opt}\sigma_{n}^{2}=0.88, and the blue point denotes the approximated value {}_{\rm app}\sigma_{n}^{2}=0.98.

In the third simulation, the existence of an optimal strength of perturbation is validated when \sigma_{n}^{2} is fixed. We set \sigma_{n}^{2}=0.1 and w_{0}=1. In this case, \sigma_{n}^{2}/w_{0}^{2}\leq 1/6 and the optimal strength of perturbation is nonzero. The results are plotted in Fig. 3. Although the Chernoff bound is not very tight when \sigma_{e}^{2} is near 0, it still reflects the trend of the CRLB well.

Figure 3: The optimal strength of the perturbation when \sigma_{n}^{2} is fixed as 0.1. The red dashed line indicates the CRLB, while the blue solid line indicates the Chernoff bound. The red point denotes the optimal value {}_{\rm opt}\sigma_{e}^{2}=0.0475, and the blue point denotes the approximated value {}_{\rm app}\sigma_{e}^{2}=0.0667.

6.2 Simulations of Probability Results

The first simulation substantiates the propositions in Section 4. The parameters are selected as \sigma_{n}^{2}=1, \sigma_{e}^{2}=0.5, w_{0}=1, and N=40. Two typical realizations of the negative log-likelihood function in (10) versus w are plotted in Fig. 4. It can be seen that the negative log-likelihood function (10) is nonconvex. It is also shown that the original problem (10) has an optimal point if v_{u}^{*} satisfies the constraint (25b), whereas when v_{u}^{*} violates the constraint (25b), the optimal point of problem (10) does not exist.

Figure 4: The negative log-likelihood (10) versus the parameter w. The blue dashed line indicates that v_{u}^{*} does not violate the constraint (25b), while the red solid line indicates that v_{u}^{*} violates the constraint (25b). The black vertical solid line denotes the true value w_{0}=1, and the blue point denotes the estimated value \hat{w}_{\rm ML}=0.89, corresponding to the optimal point of the blue dashed line.

The next two simulations focus on the probability that the log-likelihood function is unimodal. The probability {\rm P}_{\mathcal{V}} is computed by (30) and two cases are considered. For the first case, we assume that \sigma_{n}^{2}=0.3 and w_{0}=1. Note that the variance of the additive noise is small compared with w_{0}. From Fig. 5, we see that {\rm P}_{\mathcal{V}} is not monotonically increasing with N. This is mainly because of the floor and ceil operations. Nevertheless, the overall trend is increasing with N. Meanwhile, for a fixed N, it is shown that {\rm P}_{\mathcal{V}} increases with \sigma_{e}^{2} when \sigma_{e}^{2} is small. When the strength of perturbation is large, {\rm P}_{\mathcal{V}} decreases with it. Therefore, suitable perturbation can improve the probability {\rm P}_{\mathcal{V}} when N is not too large. For the second case, we set \sigma_{z}^{2}=2 and w_{0}=1. In Fig. 6, it can be seen that the larger the strength of perturbation, the smaller the probability {\rm P}_{\mathcal{V}} is. This result is consistent with our analysis by the normal approximation.

Figure 5: The probability {\textrm{P}_{\mathcal{V}}} versus the number of observations N when \sigma_{n}^{2} is fixed as 0.3.
Figure 6: The probability {\textrm{P}_{\mathcal{V}}} versus the number of observations N when \sigma_{z}^{2} is fixed as 2.

6.3 Performance of ML Estimator

In this subsection, several MC simulations are performed to evaluate the MSE performance of the ML estimator against the CRLB. The MATLAB fminunc function is used to solve the problem [33]. We assume that {\mathbf{w}}_{0}=[0.7,0.5,-0.6]^{\rm T} in the first and the third simulations.

In the first simulation, we compare the MSE performance of the ML estimator with the CRLB with fixed \sigma_{z}^{2} or \sigma_{n}^{2}. The results are plotted in Fig. 7. It is shown that the performance worsens with the increase of the proportion of the multiplicative noise when \sigma_{z}^{2} is fixed. When \sigma_{n}^{2} is fixed, it is demonstrated that the perturbation exacerbates the performance of estimation. The reason is that the norm of {\mathbf{w}}_{0} is comparable with \sigma_{n} in this case, and the optimal {}_{\rm opt}{\sigma_{e}^{2}} is zero. Note that the MSE of the ML estimator is much larger than the corresponding CRLB when N is small. This is mainly because the norm of the ML estimator has a substantial probability of being R_{w}.

Figure 7: The MSE of the ML estimator and the CRLB for different numbers of observations. Note that the blue and the red lines correspond to the case in which the variance of the equivalent noise \sigma_{z}^{2} is fixed, while the blue and the black lines correspond to the case in which the variance of the additive noise \sigma_{n}^{2} is fixed.

The second simulation assumes that \sigma_{e}^{2}=0.3 and w_{0}=1, and the MSEs are averaged over 5000 MC trials. The results are plotted in Fig. 8. The optimal variance of the additive noise is 0.88, thus its CRLB is smaller than in the \sigma_{n}^{2}=2 case. Meanwhile, the ML estimator approaches the CRLB faster in the \sigma_{n}^{2}=2 case. As the number of measurements increases, the ML estimator also attains the CRLB in the \sigma_{n}^{2}=0.88 case.

Figure 8: The MSE of the ML estimator and the CRLB for different numbers of observations when \sigma_{e}^{2} is fixed as 0.3.

Finally, the ML estimator is compared with other estimation methods. The parameters are set as follows: \sigma_{n}^{2}=1, \sigma_{e}^{2}=0.4. Three estimators are considered: the ML estimator (28), the perturbation-ignored estimator (32), and a perturbation-known estimator corresponding to a completely known sensing matrix. All the MSEs are then compared with the CRLB (12). The results are plotted in Fig. 9. As expected, the perturbation-known case performs better than the CRLB, in which the perturbation is assumed unknown. When the number of measurements N is smaller than 300, the perturbation-ignored estimator works better than the ML estimator. The reason is that the norm of the ML estimator has a much larger probability of hitting the norm threshold R_{w} than that of the perturbation-ignored estimator when the number of measurements is not large enough. The effect of the bias of the perturbation-ignored estimator becomes apparent as the number of measurements increases, so the MSE of the perturbation-ignored estimator decreases slowly as N grows; in fact, it converges to 0.031 according to (34). The MSE of the ML estimator decreases as expected and asymptotically achieves the CRLB.

Figure 9: MSE comparison in estimating the vector parameter by three estimators.

7 Conclusion

In this paper, we have studied the problem of estimating a deterministic parameter vector from sign measurements with a perturbed sensing matrix. First, the ML estimator was utilized to estimate the unknown parameter and was proved to be consistent. The CRLB was derived to analyze the performance of the estimator. It was demonstrated that with the variance of the equivalent noise fixed, the perturbation degrades the performance of the estimation. Meanwhile, under a certain relationship between the variance of the additive noise and the strength of the perturbation, the CRLB on the MSE achieves its minimum. This result demonstrates that in the MSE sense, suitable perturbation may be beneficial in some special cases. Second, it was shown that the original ML estimation problem can be transformed into a convex optimization problem, which can be solved efficiently. Theoretical analysis implied that suitable perturbation may improve the probability that the optimal point of the ML estimator exists. Furthermore, under a perturbed sensing matrix, the perturbation-ignored estimator is a scaled version of the ML estimator with the same direction. It was also shown that the perturbation-ignored estimator works well when the number of measurements is not large and the perturbation is small. However, the perturbation-ignored estimator is biased and its MSE converges to a constant as the number of measurements increases. In contrast, the ML estimator is asymptotically unbiased and achieves the CRLB in the asymptotic sense.

Appendix A Proof of Theorem 2

Proof

We first define the normalized log-likelihood function as

\displaystyle l_{N}({\mathbf{w}})\triangleq\frac{1}{N}l({\mathbf{y}};{\mathbf{w}})=\frac{1}{N}\sum\limits_{i=1}^{N}{\log}\Phi\left(y_{i}\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}}{\sigma_{z}}\right). (35)

As N tends to infinity, the weak law of large numbers implies that

\displaystyle\lim_{N\rightarrow\infty}l_{N}({\mathbf{w}})\stackrel{\rm p}{\rightarrow}l_{0}({\mathbf{w}})\triangleq{\rm E}_{y,{\mathbf{h}}}\left[\log\Phi\left(y\frac{{\mathbf{h}}^{\rm T}\mathbf{w}}{\sigma_{z}}\right)\right], (36)

where the expectation is taken with respect to y and \mathbf{h}, and the notation \stackrel{\rm p}{\rightarrow} denotes convergence in probability. Since model (2) is identifiable, l_{0}({\mathbf{w}}) has a unique maximum attained at {\mathbf{w}}_{0} by the information inequality [35]. In order to claim that \hat{\mathbf{w}}_{\rm ML} converges to {\mathbf{w}}_{0} in probability as N\rightarrow\infty, one needs to ensure that the limiting and maximization operations in (10) and (36) can be interchanged. Sufficient conditions for the maximum of the limit to be the limit of the maximum are that the parameter space is compact and the normalized log-likelihood function l_{N}({\mathbf{w}}) converges uniformly to l_{0}({\mathbf{w}}) in probability [6]. It is obvious that the parameter space \mathcal{W} is bounded and closed. To prove the uniform convergence in probability, note that l_{N}({\mathbf{w}}) is continuous, thus it suffices to show that there exists a function U({\mathbf{y}},{\mathbf{H}}) such that

\displaystyle|l_{N}({\mathbf{w}})|\leq U({\mathbf{y}},{\mathbf{H}}),\quad\forall~{\mathbf{w}}\in{\mathcal{W}}. (37)

To find such a function U({\mathbf{y}},{\mathbf{H}}), we may use the mean value expansion of q({\mathbf{w}})=\log\Phi\left(y_{i}\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}}{\sigma_{z}}\right) around the origin {\mathbf{w}}={\mathbf{0}}. Notice that the derivative of \log\Phi(x) is

\displaystyle k(x)\triangleq\frac{\partial\log\Phi(x)}{\partial x}=\frac{1}{\Phi(x)}\frac{\partial\Phi(x)}{\partial x},

which is convex and positive. When x\rightarrow\infty, k(x) tends to zero; as x\rightarrow-\infty, k(x) tends to -x. As a consequence, there exists a suitable constant C>0 such that

\displaystyle k(x)\leq C(1+|x|).
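This bound can be spot-checked numerically. In the sketch below, the choice C = 1 is an assumption that holds on the tested grid (any sufficiently large C suffices for the argument), and k(x) is evaluated in log-space for numerical stability.

import numpy as np
from scipy.stats import norm

def k(x):
    # k(x) = d log Phi(x) / dx, computed as exp(log phi(x) - log Phi(x)) for stability
    return np.exp(norm.logpdf(x) - norm.logcdf(x))

x = np.linspace(-30.0, 30.0, 200001)
C = 1.0                                         # assumed constant
print(np.max(k(x) / (C * (1.0 + np.abs(x)))))   # stays below 1 on this grid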

By the mean value theorem, the following result is obtained,

\displaystyle\left|\log\Phi\left(y_{i}\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}}{\sigma_{z}}\right)\right|=\left|\log\Phi(0)+\nabla q({\mathbf{w}}^{\prime})^{\rm T}{\mathbf{w}}\right|\leq\left|\log\Phi(0)\right|+\left\|\nabla q({\mathbf{w}}^{\prime})\right\|_{2}\left\|{\mathbf{w}}\right\|_{2}, (38)

where {\mathbf{w}}^{\prime} is some point lying in the parameter space \mathcal{W}. The norm of the gradient \nabla q({\mathbf{w}}^{\prime}) can be upper bounded by

\displaystyle\left\|\nabla q({\mathbf{w}}^{\prime})\right\|_{2}=\left\|\frac{y_{i}}{\tilde{\sigma}_{z}}k\left(y_{i}\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}^{\prime}}{\tilde{\sigma}_{z}}\right)\left({\mathbf{h}}_{i}-\frac{\sigma_{e}^{2}}{\tilde{\sigma}_{z}^{2}}\left({\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}^{\prime}\right){\mathbf{w}}^{\prime}\right)\right\|_{2}
\displaystyle\leq\frac{1}{\sigma_{n}}k\left(y_{i}\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}^{\prime}}{\tilde{\sigma}_{z}}\right)\left\|{\mathbf{h}}_{i}-\frac{\sigma_{e}^{2}}{\tilde{\sigma}_{z}^{2}}\left({\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}^{\prime}\right){\mathbf{w}}^{\prime}\right\|_{2}
\displaystyle\leq\frac{C}{\sigma_{n}}\left(1+\left|\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}^{\prime}}{\tilde{\sigma}_{z}}\right|\right)\left\|{\mathbf{h}}_{i}-\frac{\sigma_{e}^{2}}{\tilde{\sigma}_{z}^{2}}\left({\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}^{\prime}\right){\mathbf{w}}^{\prime}\right\|_{2}
\displaystyle\leq\frac{C}{\sigma_{n}}\left(1+\frac{R_{w}}{\sigma_{n}}\left\|{\mathbf{h}}_{i}\right\|_{2}\right)\left\|{\mathbf{h}}_{i}-\frac{\sigma_{e}^{2}}{\tilde{\sigma}_{z}^{2}}\left({\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}^{\prime}\right){\mathbf{w}}^{\prime}\right\|_{2}, (39)

where

\displaystyle\left\|{\mathbf{h}}_{i}-\frac{\sigma_{e}^{2}}{\tilde{\sigma}_{z}^{2}}\left({\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}^{\prime}\right){\mathbf{w}}^{\prime}\right\|_{2}\leq\left\|{\mathbf{h}}_{i}\right\|_{2}+\frac{\sigma_{e}^{2}}{\sigma_{n}^{2}}R_{w}\left\|{\mathbf{h}}_{i}\right\|_{2}\left\|{\mathbf{w}}^{\prime}\right\|_{2}\leq\left(1+\frac{\sigma_{e}^{2}}{\sigma_{n}^{2}}R_{w}^{2}\right)\left\|{\mathbf{h}}_{i}\right\|_{2}. (40)

The above inequalities follow from \|{\mathbf{w}}\|_{2}\leq R_{w}, \|{\mathbf{w}}^{\prime}\|_{2}\leq R_{w}, and \tilde{\sigma}_{z}^{2}=\|{\mathbf{w}}^{\prime}\|_{2}^{2}\sigma_{e}^{2}+\sigma_{n}^{2}\geq\sigma_{n}^{2}. Plugging (38), (39) and (40) into (35), it follows that

\displaystyle|l_{N}({\mathbf{w}})|\leq|\log\Phi(0)|+\frac{C_{1}}{N}\sum_{i=1}^{N}\left(1+\frac{R_{w}}{\sigma_{n}}\|{\mathbf{h}}_{i}\|_{2}\right)\|{\mathbf{h}}_{i}\|_{2}, (41)

where C_{1} is a constant. Using U({\mathbf{y}},{\mathbf{H}}) to denote the right-hand side of (41), condition (37) is satisfied. Therefore, the consistency of the ML estimator is proved.

Appendix B Proof of Theorem 3

Proof

We first show that the regularity condition holds for the likelihood function {\rm{Pr}}(\mathbf{y};\mathbf{w}). The gradient of the log-likelihood function l(\mathbf{y};\mathbf{w}) with respect to \mathbf{w} is

\displaystyle\nabla_{\mathbf{w}}l({\mathbf{y}};{\mathbf{w}})=\frac{1}{\sqrt{2\pi}\sigma_{z}}\sum_{i=1}^{N}\frac{y_{i}}{\Phi\left(y_{i}\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}}{\sigma_{z}}\right)}{\rm e}^{-\frac{\left({\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}\right)^{2}}{2\sigma_{z}^{2}}}\left({\mathbf{h}}_{i}-\frac{\sigma_{e}^{2}}{\sigma_{z}^{2}}({\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}){\mathbf{w}}\right).

The probability mass function of y_{i} is

\displaystyle y_{i}=\begin{cases}-1,&{\rm with~probability}\quad\Phi\left(-\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}}{\sigma_{z}}\right);\\ 1,&{\rm with~probability}\quad\Phi\left(\frac{{\mathbf{h}}_{i}^{\rm T}{\mathbf{w}}}{\sigma_{z}}\right).\end{cases}

It follows that for all \mathbf{w}, the regularity condition {\rm E}_{\mathbf{y}}\left[\nabla_{\mathbf{w}}l({\mathbf{y}};{\mathbf{w}})\right]={\mathbf{0}} holds.
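The regularity condition can also be verified by a small Monte Carlo experiment; the sketch below (with hypothetical values of H, w, sigma_e and sigma_n) repeatedly draws y from the distribution above and averages the gradient expression.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
p, N, trials = 3, 20, 50000
sigma_e, sigma_n = 0.3, 0.5
H = rng.standard_normal((p, N))
w = np.array([0.8, -0.4, 0.2])
sigma_z = np.sqrt(np.dot(w, w) * sigma_e**2 + sigma_n**2)

a = H.T @ w / sigma_z                                       # a_i = h_i^T w / sigma_z
D = H - (sigma_e**2 / sigma_z**2) * np.outer(w, H.T @ w)    # columns h_i - (sigma_e^2/sigma_z^2)(h_i^T w) w
grad_sum = np.zeros(p)
for _ in range(trials):
    y = np.where(rng.random(N) < norm.cdf(a), 1.0, -1.0)    # Pr[y_i = 1] = Phi(a_i)
    coef = y * np.exp(-0.5 * a**2) / (norm.cdf(y * a) * np.sqrt(2.0 * np.pi) * sigma_z)
    grad_sum += D @ coef
print(grad_sum / trials)   # close to the zero vector, up to Monte Carlo error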

Fortunately, a closed-form expression for the CRLB can be obtained by using the CRLB for a transformation of a vector parameter.

Suppose that we wish to estimate {\boldsymbol{\alpha}}={\mathbf{g}}({\boldsymbol{\theta}}), where \mathbf{g} is an r-dimensional function and \boldsymbol{\theta} is an s-dimensional parameter vector. Then the CRLB for estimating \boldsymbol{\alpha} is given by [36]

\displaystyle{\rm Cov}(\hat{\boldsymbol{\alpha}})\succeq\frac{\partial{\mathbf{g}}({\boldsymbol{\theta}})}{\partial{\boldsymbol{\theta}}}\left({\mathbf{J}}({\boldsymbol{\theta}})\right)^{-1}\frac{\partial{\mathbf{g}}({\boldsymbol{\theta}})^{\rm T}}{\partial{\boldsymbol{\theta}}}, (42)

where {\mathbf{J}}(\boldsymbol{\theta}) is the FIM of \boldsymbol{\theta}.

We may define

\displaystyle{\mathbf{v}}=\frac{\mathbf{w}}{\sqrt{\|{\mathbf{w}}\|_{2}^{2}\sigma_{e}^{2}+\sigma_{n}^{2}}}.

Then \mathbf{w} can be uniquely determined from {\mathbf{v}} by

\displaystyle{\mathbf{w}}=\frac{\sigma_{n}}{\sqrt{1-\sigma_{e}^{2}\|{\mathbf{v}}\|_{2}^{2}}}{\mathbf{v}}.

In our setting, {\boldsymbol{\alpha}}={\mathbf{w}}, {\boldsymbol{\theta}}={{\mathbf{v}}} and {\mathbf{w}}={\mathbf{g}}({{\mathbf{v}}}). The log-likelihood function l({\mathbf{y}};{\mathbf{v}}) for {\mathbf{v}} is

\displaystyle l({\mathbf{y}};{\mathbf{v}})=\sum_{i=1}^{N}\log\Phi\left(y_{i}{\mathbf{h}}_{i}^{\rm T}{\mathbf{v}}\right).

Its gradient and Hessian are

\displaystyle\nabla_{{\mathbf{v}}}l({\mathbf{y}};{\mathbf{v}})=\frac{1}{\sqrt{2\pi}}\sum_{i=1}^{N}\frac{y_{i}}{\Phi\left(y_{i}{\mathbf{h}}_{i}^{\rm T}{\mathbf{v}}\right)}{\rm e}^{-\frac{\left({\mathbf{h}}_{i}^{\rm T}{\mathbf{v}}\right)^{2}}{2}}{\mathbf{h}}_{i},

and

\displaystyle\nabla_{{\mathbf{v}}}^{2}l({\mathbf{y}};{\mathbf{v}})=-\frac{1}{\sqrt{2\pi}}\sum_{i=1}^{N}\frac{y_{i}}{\Phi\left(y_{i}{\mathbf{h}}_{i}^{\rm T}{\mathbf{v}}\right)}{\rm e}^{-\frac{\left({\mathbf{h}}_{i}^{\rm T}{\mathbf{v}}\right)^{2}}{2}}({\mathbf{h}}_{i}^{\rm T}{\mathbf{v}}){\mathbf{h}}_{i}{\mathbf{h}}_{i}^{\rm T}-\frac{1}{2\pi}\sum_{i=1}^{N}\frac{1}{\Phi^{2}\left(y_{i}{\mathbf{h}}_{i}^{\rm T}{\mathbf{v}}\right)}{\rm e}^{-\left({\mathbf{h}}_{i}^{\rm T}{\mathbf{v}}\right)^{2}}{\mathbf{h}}_{i}{\mathbf{h}}_{i}^{\rm T}, (43)

respectively. The FIM can be computed as

\displaystyle{\mathbf{J}}({\mathbf{v}})=\sigma_{z}^{2}{\mathbf{H}}{\mathbf{\Lambda}}{\mathbf{H}}^{\rm T}, (44)

where \mathbf{\Lambda} is defined in (13). The corresponding Jacobian matrix is

\displaystyle\frac{\partial{\mathbf{g}}({\mathbf{v}})}{\partial{\mathbf{v}}}=\sigma_{z}\left({\mathbf{I}}+\frac{\sigma_{e}^{2}}{\sigma_{n}^{2}}{\mathbf{w}}{\mathbf{w}}^{\rm T}\right). (45)
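The Jacobian (45) can be cross-checked against a finite-difference approximation of the map g defined above; the sketch below uses hypothetical values of w, sigma_e and sigma_n.

import numpy as np

sigma_e, sigma_n = 0.3, 0.5
w = np.array([0.8, -0.4, 0.2])
sigma_z = np.sqrt(np.dot(w, w) * sigma_e**2 + sigma_n**2)

def g(v):
    # w = g(v) = sigma_n * v / sqrt(1 - sigma_e^2 ||v||^2)
    return sigma_n * v / np.sqrt(1.0 - sigma_e**2 * np.dot(v, v))

v = w / sigma_z
eps = 1e-6
J_num = np.column_stack([(g(v + eps * e) - g(v - eps * e)) / (2.0 * eps) for e in np.eye(3)])
J_cf = sigma_z * (np.eye(3) + (sigma_e**2 / sigma_n**2) * np.outer(w, w))   # right-hand side of (45)
print(np.max(np.abs(J_num - J_cf)))   # close to zero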

By employing the Sherman-Morrison formula [37] and (8), one has

\displaystyle\left({\mathbf{I}}-\frac{\sigma_{e}^{2}}{\sigma_{z}^{2}}{\mathbf{w}}{\mathbf{w}}^{\rm T}\right)^{-1}={\mathbf{I}}+\frac{\sigma_{e}^{2}}{\sigma_{n}^{2}}{\mathbf{w}}{\mathbf{w}}^{\rm T}. (46)
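Identity (46) can also be verified numerically; in the sketch below the values are hypothetical, and sigma_z^2 is computed as ||w||^2 sigma_e^2 + sigma_n^2 in accordance with the reparameterization above.

import numpy as np

sigma_e, sigma_n = 0.3, 0.5
w = np.array([0.8, -0.4, 0.2])
sigma_z2 = np.dot(w, w) * sigma_e**2 + sigma_n**2

lhs = np.linalg.inv(np.eye(3) - (sigma_e**2 / sigma_z2) * np.outer(w, w))
rhs = np.eye(3) + (sigma_e**2 / sigma_n**2) * np.outer(w, w)
print(np.max(np.abs(lhs - rhs)))   # close to zero, confirming (46)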

Substituting (44), (45) and (46) in (42), the CRLB is

\displaystyle{\rm Cov}(\hat{\mathbf{w}})\succeq\left({\mathbf{M}}{\mathbf{\Lambda}}{\mathbf{M}}^{\rm T}\right)^{-1},

where \mathbf{M} is defined in (14). Since the MSE is equal to the trace of the covariance matrix, the result (12) is established.

Appendix C Proof of Proposition 3.1

Proof

We first define

\displaystyle{\mathbf{N}}={\mathbf{I}}-\frac{\sigma_{e}^{2}}{\sigma_{z}^{2}}{\mathbf{w}}{\mathbf{w}}^{\rm T}.

Since \tilde{\mathbf{J}} and \mathbf{N} are both positive definite matrices, they can be factored as \tilde{\mathbf{J}}={\mathbf{U}}\tilde{\mathbf{\Delta}}{\mathbf{U}}^{\rm T} and {\mathbf{N}}={\mathbf{V}}{\mathbf{\Delta}}_{\mathbf{N}}{\mathbf{V}}^{\rm T}, where {\mathbf{U}},{\mathbf{V}}\in\mathbb{R}^{p\times p} are orthogonal matrices, \tilde{\mathbf{\Delta}}={\rm diag}(\tilde{\lambda}_{1},\cdots,\tilde{\lambda}_{p}), and {\mathbf{\Delta}}_{\mathbf{N}}={\rm diag}\left(1,\cdots,1,\frac{1}{1+\gamma}\right). Note that all eigenvalues of \mathbf{N} except the last one are equal to 1. Using the equality {\rm tr}({\mathbf{A}}{\mathbf{B}})={\rm tr}({\mathbf{B}}{\mathbf{A}}), we obtain

\displaystyle{\rm tr}(\tilde{\mathbf{J}}^{-1})={\rm tr}(\tilde{\mathbf{\Delta}}^{-1})=\sum_{i=1}^{p}\frac{1}{\tilde{\lambda}_{i}}.
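The eigenvalue structure of \mathbf{N} claimed above can be checked numerically. In the sketch below the values of w, sigma_e and sigma_n are hypothetical, and gamma is taken to be sigma_e^2 ||w||_2^2 / sigma_n^2, an assumed definition that is consistent with the last diagonal entry of Delta_N.

import numpy as np

sigma_e, sigma_n = 0.3, 0.5
w = np.array([0.8, -0.4, 0.2])
sigma_z2 = np.dot(w, w) * sigma_e**2 + sigma_n**2
gamma = sigma_e**2 * np.dot(w, w) / sigma_n**2      # assumed definition of gamma

Nmat = np.eye(3) - (sigma_e**2 / sigma_z2) * np.outer(w, w)
print(np.sort(np.linalg.eigvalsh(Nmat)))            # [1/(1+gamma), 1, 1]
print(1.0 / (1.0 + gamma))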

When the variance of the equivalent noise \sigma_{z}^{2} is fixed, one has

\displaystyle{\mathbf{J}}={\mathbf{N}}\tilde{\mathbf{J}}{\mathbf{N}}^{\rm T}={\mathbf{V}}{\mathbf{\Delta}}_{\mathbf{N}}{\mathbf{V}}^{\rm T}{\mathbf{U}}\tilde{\mathbf{\Delta}}{\mathbf{U}}^{\rm T}{\mathbf{V}}{\mathbf{\Delta}}_{\mathbf{N}}{\mathbf{V}}^{\rm T}.

It follows that

\displaystyle{\rm tr}({\mathbf{J}}^{-1})={\rm tr}({\mathbf{\Delta}}_{\mathbf{N}}^{-1}{\mathbf{V}}^{\rm T}{\mathbf{U}}\tilde{\mathbf{\Delta}}^{-1}{\mathbf{U}}^{\rm T}{\mathbf{V}}{\mathbf{\Delta}}_{\mathbf{N}}^{-1}).

Defining {\mathbf{Q}}={\mathbf{V}}^{\rm T}{\mathbf{U}}\tilde{\mathbf{\Delta}}^{-1}{\mathbf{U}}^{\rm T}{\mathbf{V}} and {\mathbf{T}}={\mathbf{\Delta}}_{\mathbf{N}}^{-1}{\mathbf{\Delta}}_{\mathbf{N}}^{-1}={\mathbf{\Delta}}_{\mathbf{N}}^{-2}, it is clear that \mathbf{Q} has the same eigenvalues as \tilde{\mathbf{\Delta}}^{-1}. Since both \mathbf{Q} and {\mathbf{T}} are positive definite, by using the trace inequality [38], we have

\displaystyle\sum_{i=1}^{n}\lambda_{{\mathbf{Q}},i}\lambda_{{\mathbf{T}},n+1-i}\leq{\rm tr}({\mathbf{Q}}{\mathbf{T}})\leq\sum_{i=1}^{n}\lambda_{{\mathbf{Q}},i}\lambda_{{\mathbf{T}},i}.

Therefore, the desired result (16) is obtained.  
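The trace inequality of [38] used in the last step can be spot-checked on random positive definite matrices; the matrices in the sketch below are arbitrary and are not the \mathbf{Q} and \mathbf{T} of the proof.

import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)); Q = A @ A.T + n * np.eye(n)   # random positive definite matrices
B = rng.standard_normal((n, n)); T = B @ B.T + n * np.eye(n)

lam_Q = np.sort(np.linalg.eigvalsh(Q))[::-1]     # eigenvalues in decreasing order
lam_T = np.sort(np.linalg.eigvalsh(T))[::-1]
lower = np.sum(lam_Q * lam_T[::-1])              # sum_i lambda_{Q,i} * lambda_{T,n+1-i}
upper = np.sum(lam_Q * lam_T)                    # sum_i lambda_{Q,i} * lambda_{T,i}
print(lower <= np.trace(Q @ T) <= upper)         # True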

Appendix D Proof of Proposition 4

Proof

We use f({\mathbf{v}}) to denote the objective function of (25). According to (43), the Hessian of f({\mathbf{v}}) is

\nabla_{\mathbf{v}}^{2}f({\mathbf{v}})=\sum_{i=1}^{N}\beta_{i}{\mathbf{h}}_{i}{\mathbf{h}}_{i}^{\rm T},

where

\displaystyle\beta_{i}=\frac{1}{\sqrt{2\pi}\Phi^{2}\left(y_{i}x_{i}\right)}{\rm e}^{-\frac{1}{2}x_{i}^{2}}\left(\frac{1}{\sqrt{2\pi}}{\rm e}^{-\frac{1}{2}x_{i}^{2}}+y_{i}x_{i}\Phi(y_{i}x_{i})\right),

and x_{i}={\mathbf{h}}_{i}^{\rm T}{\mathbf{v}}. By the inequality x\Phi(-x)<\frac{1}{\sqrt{2\pi}}{\rm e}^{-\frac{1}{2}x^{2}},~\forall x\in\mathbb{R}, one can show that \beta_{i}>0. Thus \nabla_{\mathbf{v}}^{2}f({\mathbf{v}})\succ 0. The result is established.
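Positivity of beta_i can be confirmed numerically on a grid of values of x_i; the sketch below only checks the scalar coefficient, for both y_i = 1 and y_i = -1.

import numpy as np
from scipy.stats import norm

def beta(x, y):
    # beta_i with x_i = x and y_i = y, written with phi(x) = exp(-x^2/2)/sqrt(2*pi)
    return (norm.pdf(x) / norm.cdf(y * x)**2) * (norm.pdf(x) + y * x * norm.cdf(y * x))

x = np.linspace(-8.0, 8.0, 4001)
print(min(beta(x, 1.0).min(), beta(x, -1.0).min()) > 0.0)   # True: every beta_i is positive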

Appendix E Proof of Proposition 5.1

Proof

We first solve the unconstrained optimization problem

\displaystyle v_{u}^{*}=\underset{v\in\mathbb{R}}{\operatorname{argmin}}~-\sum_{i=1}^{N}\log\Phi\left(y_{i}v\right).

Assume that the observations \{y_{i}\}_{i=1}^{N} contain k ones. Setting the derivative of the objective function to zero and using the equality \Phi(v_{u}^{*})+\Phi(-v_{u}^{*})=1, one has

\displaystyle v_{u}^{*}=\Phi^{-1}\left(\frac{k}{N}\right),

where \Phi^{-1} denotes the inverse function of \Phi. Now we calculate the probability {\rm P}_{\mathcal{V}} as

\displaystyle{\rm P}_{\mathcal{V}}={\rm Pr}\left[\left|v_{u}^{*}\right|<\frac{1}{\sigma_{e}}\right]={\rm Pr}\left[-\frac{1}{\sigma_{e}}<\Phi^{-1}\left(\frac{k}{N}\right)<\frac{1}{\sigma_{e}}\right]={\rm Pr}\left[N\Phi\left(-\frac{1}{\sigma_{e}}\right)<k<N\Phi\left(\frac{1}{\sigma_{e}}\right)\right],

where the last step follows from the monotone increasing property of \Phi^{-1}. Since

\displaystyle y_{i}=\begin{cases}-1,&{\rm with~probability}\quad\Phi\left(-\frac{w_{0}}{\sigma_{z}}\right);\\ 1,&{\rm with~probability}\quad\Phi\left(\frac{w_{0}}{\sigma_{z}}\right),\end{cases}

the result (30) is established.  
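Both the closed-form minimizer and the probability P_V admit a direct numerical check. In the sketch below w_0, sigma_e, sigma_n and N are hypothetical, and the number k of +1 observations is treated as binomial with success probability Phi(w_0/sigma_z), as implied by the distribution of y_i above.

import numpy as np
from scipy.stats import norm, binom
from scipy.optimize import minimize_scalar

w0, sigma_e, sigma_n, N = 1.0, 0.4, 0.5, 100
sigma_z = np.sqrt(w0**2 * sigma_e**2 + sigma_n**2)

# Check v_u^* = Phi^{-1}(k/N) against direct minimization for one random draw
rng = np.random.default_rng(3)
y = np.where(rng.random(N) < norm.cdf(w0 / sigma_z), 1.0, -1.0)
k_obs = np.sum(y == 1.0)
v_numeric = minimize_scalar(lambda v: -np.sum(norm.logcdf(y * v)), bounds=(-5, 5), method='bounded').x
print(v_numeric, norm.ppf(k_obs / N))                 # the two values agree

# P_V = Pr[N Phi(-1/sigma_e) < k < N Phi(1/sigma_e)] with k ~ Binomial(N, Phi(w_0/sigma_z))
k = np.arange(N + 1)
mask = (k > N * norm.cdf(-1.0 / sigma_e)) & (k < N * norm.cdf(1.0 / sigma_e))
print(binom.pmf(k, N, norm.cdf(w0 / sigma_z))[mask].sum())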

References

  • [1] A. Wiesel, Y. C. Eldar and A. Beck, “Maximum likelihood estimation in linear models with a Gaussian model matrix,” IEEE Signal Processing Letters, vol. 13, no. 5, pp. 292-295, May 2006.
  • [2] Y. C. Eldar, “Minimax estimation of deterministic parameters in linear models with a random model matrix,” IEEE Transactions on Signal Processing, vol. 45, no. 2, pp. 601-612, Feb. 2006.
  • [3] A. Wiesel, Y. C. Eldar and A. Yeredor, “Linear regression with Gaussian model uncertainty: Algorithms and bounds,” IEEE Transactions on Signal Processing, vol. 56, no. 6, pp. 2194-2205, Jun. 2008.
  • [4] A. DeMaris, Regression with social data: Modeling continuous and limited response variables, John Wiley & Sons, New Jersey, 2004.
  • [5] A. Gustafsson, A. Herrmann and F. Huber, Conjoint Measurement: Methods and Applications, Springer-Verlag, Berlin, 2007.
  • [6] W. Newey and D. McFadden, “Chapter 35: Large sample estimation and hypothesis testing,” in Handbook of Econometrics, vol. 4, pp. 2111-2245, Elsevier Science, North Holland: Amsterdam, 1994.
  • [7] M. Abdallah and H. Papadopoulos, “Sequential signal encoding and estimation for distributed sensor networks,” in Proceedings of International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2001), vol. 4, Salt Lake City, UT, pp. 2577-2580, May 2001.
  • [8] H. C. Papadopoulos, G. W. Wornell and A. V. Oppenheim, “Sequential signal encoding from noisy measurements using quantizers with dynamic bias control,” IEEE Transactions on Information Theory, vol. 47, no. 3, pp. 978-1002, Mar. 2001.
  • [9] A. Ribeiro and G. B. Giannakis, “Distributed estimation in Gaussian noise for bandwidth-constrained wireless sensor networks,” in Proceedings of 38th Asilomar Conference on Signals, Systems, and Computers, vol. 2, pp. 1407-1411, Nov. 2004.
  • [10] A. Ribeiro and G. B. Giannakis, “Bandwidth-constrained distributed estimation for wireless sensor Networks-part I: Gaussian case,” IEEE Transactions on Signal Processing, vol. 54, no. 3, pp. 1131-1143, Mar. 2006.
  • [11] Z. Luo, “Universal decentralized estimation in a bandwidth constrained sensor network,” IEEE Transactions on Information Theory, vol. 51, no. 6, pp. 2210-2219, June 2005.
  • [12] Z. Luo, “An isotropic universal decentralized estimation scheme for a bandwidth constrained ad hoc sensor network,” IEEE Journal on Selected Areas in Communications, vol. 23, no. 4, pp. 735-744, Apr. 2005.
  • [13] Z. Luo and J. Xiao, “Decentralized estimation in an inhomogeneous sensing environment,” IEEE Transactions on Information Theory, vol. 51, no. 10, pp. 3564-3575, Oct. 2005.
  • [14] A. Ribeiro and G. B. Giannakis, “Bandwidth-constrained distributed estimation for wireless sensor networks-part II: Unknown probability density function,” IEEE Transactions on Signal Processing, vol. 54, no. 7, pp. 2784-2796, Jul. 2006.
  • [15] C. I. Bliss, “The calculation of the dosage-mortality curve,” Annals of Applied Biology, vol. 22, pp. 134-167, 1935.
  • [16] G. Mateos and G. B. Giannakis, “Robust conjoint analysis by controlling outlier sparsity,” in Proceedings of European Signal Processing Conference, Aug. 2011.
  • [17] E. Tsakonas, J. Jaldén, N. Sidiropoulos and B. Ottersten, “Connections between sparse estimation and robust statistical learning,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2013), Vancouver, Canada, May 2013.
  • [18] E. Tsakonas, J. Jaldén, N. Sidiropoulos and B. Ottersten, “Sparse conjoint analysis through maximum likelihood estimation,” submitted to IEEE Transactions on Signal Processing, Oct. 2012.
  • [19] R. J. Carroll, C. H. Spiegelman, K. K. G. Lan, K. T. Bailey and R. D. Abbott, “On errors-in-variables for binary regression models,” Biometrika, vol. 71, pp. 19-25, 1984.
  • [20] R. J. Carroll, D. Ruppert, L. A. Stefanski and C. M. Crainiceanu, Measurement error in nonlinear models: A modern perspective, CRC Press, 2010.
  • [21] M. A. Davenport, Y. Plan, E. Berg and M. Wootters, “1-bit matrix completion,” arXiv:1209.3672, 2012.
  • [22] P. Stoica and T. L. Marzetta, “Parameter estimation problems with singular information matrices,” IEEE Transactions on Signal Processing, vol. 49, no. 1, pp. 87-90, Jan. 2001.
  • [23] D. Burr, “On Errors-in-Variables in Binary Regression-Berkson Case,” Journal of the American Statistical Association, vol. 83, no. 403, pp. 739-743, Sep. 1988.
  • [24] L. Wasserman, All of Nonparametric Statistics, Springer, New York, p. 4, 2006.
  • [25] M. DeWeese and W. Bialek, “Information flow in sensory neurons,” Nuovo Cimento Soc. Ital. Fys., vol. 17D, no. 7-8, pp. 733-741, July-Aug. 1995.
  • [26] J. K. Douglass, L. Wilkens, E. Pantazelou and F. Moss, “Noise enhancement of information transfer in crayfish mechanoreceptors by stochastic resonance,” Nature, vol. 365, pp. 337-340, Sep. 1993.
  • [27] J. Levin and J. Miller, “Broadband neural encoding in the cricket sensory system enhanced by stochastic resonance,” Nature, vol. 380, no. 6570, pp. 165-168, Mar. 1996.
  • [28] Y. Tang, L. Chen and Y. Gu, “On the performance bound of sparse estimation with sensing matrix perturbation,” IEEE Transactions on Signal Processing, vol. 61, no. 17, pp. 4372-4386, Sep. 2013.
  • [29] J. G. Proakis, Digital Communications, McGraw-Hill, New York, p. 42, 2001.
  • [30] O. Dabeer and A. Karnik, “Signal parameter estimation using 1-bit dithered quantization,” IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5389-5405, Dec. 2006.
  • [31] G. O. Balkan and S. Gezici, “CRLB based optimal noise enhanced parameter estimation using quantized observations,” IEEE Signal Processing Letters, vol. 17, no. 5, pp. 477-480, May 2010.
  • [32] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
  • [33] A. Geletu, “Solving Optimization Problems using the Matlab Optimization Toolbox - a Tutorial,” available at http://www.tu-ilmenau.de/fileadmin/media/simulation/Lehre/Vorlesungsskripte/Lecture_materials_Abebe/OptimizatioWithMatlab.pdf, Dec. 2007.
  • [34] A. Papoulis, Probability, Random Variables, and Stochastic Processes, 3rd edition, New York: McGraw-Hill, 1991.
  • [35] S. M. Kay, Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory, Englewood Cliffs, NJ: Prentice Hall, pp. 211-212, 1993.
  • [36] S. M. Kay, Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory, Englewood Cliffs, NJ: Prentice Hall, pp. 45-46, 1993.
  • [37] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, MD, 3rd edition, 1996.
  • [38] J. B. Lasserre, “A trace inequality for matrix product,” IEEE Transactions on Automatic Control, vol. 40, no. 8, pp. 1500-1501, Aug. 1995.