Asymptotic Analysis of MAP Estimation via the Replica Method and Applications to Compressed Sensing

Abstract

The replica method is a non-rigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an $n$-dimensional vector "decouples" into $n$ scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdú.

The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, lasso, linear estimation with thresholding, and zero norm-regularized estimation. In the case of lasso estimation the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation it reduces to a hard-thresholding operator. Among other benefits, the replica method provides a computationally-tractable method for precisely predicting various performance metrics including mean-squared error and sparsity pattern recovery probability.

1 Introduction

Estimating a vector $\mathbf{x} \in \mathbb{R}^n$ from measurements of the form

$$\mathbf{y} = \mathbf{\Phi}\mathbf{x} + \mathbf{w},$$

where $\mathbf{\Phi} \in \mathbb{R}^{m \times n}$ represents a known measurement matrix and $\mathbf{w}$ represents measurement errors or noise, is a generic problem that arises in a range of circumstances. When the noise $\mathbf{w}$ is i.i.d. zero-mean Gaussian with variance $\sigma^2$ and $\mathbf{x}$ is i.i.d. with components having a probability distribution function $p(x_j)$, the maximum a posteriori (MAP) estimate is given by

$$\hat{\mathbf{x}}_{\rm map} = \arg\min_{\mathbf{x}}\; \frac{1}{2\sigma^2}\left\|\mathbf{y} - \mathbf{\Phi}\mathbf{x}\right\|^2 + \sum_{j=1}^{n} f(x_j),$$

where $f(x_j) = -\log p(x_j)$. Estimators of this form are also used with the regularization function $f$ or noise level parameter not matching the true prior or noise level, either since those quantities are not known or since the optimization using the true values is too difficult to compute. In such cases, the estimator can be interpreted as a MAP estimate for a postulated distribution and noise level, and we will thus call estimators of this form postulated MAP estimators.

Due to their prevalence, characterizing the behavior of postulated MAP estimators is of interest in a wide range of applications. However, for most regularization functions $f$, the postulated MAP estimator is nonlinear and not easy to analyze. Even if, for the purpose of analysis, one assumes a separable prior on $\mathbf{x}$ and a separable regularization function $f$, the analysis of the postulated MAP estimate may be difficult since the matrix $\mathbf{\Phi}$ couples the unknown components of $\mathbf{x}$ with the measurements in the vector $\mathbf{y}$.

This paper provides a general analysis of postulated MAP estimators based on the replica method—a non-rigorous but widely-used method from statistical physics for analyzing large random systems. It is shown that, under a key assumption of replica symmetry described below, the replica method predicts that with certain large random $\mathbf{\Phi}$ and Gaussian $\mathbf{w}$, there is an asymptotic decoupling of the vector postulated MAP estimate into $n$ scalar MAP estimators. Specifically, the replica method predicts that the joint distribution of each component $x_j$ of $\mathbf{x}$ and its corresponding component $\hat{x}_j$ in the estimate vector is asymptotically identical to the output of a simple system where $\hat{x}_j$ is a postulated MAP estimate of the scalar random variable $x_j$ observed in Gaussian noise. Using this scalar equivalent model, one can then readily compute the asymptotic joint distribution of $(x_j, \hat{x}_j)$ for any component $j$.

The replica method’s non-rigorous but simple prescription for computing the asymptotic joint componentwise distributions has three key, attractive features:

  • Sharp predictions:

    Most importantly, the replica method provides—under the replica assumptions—not just bounds, but sharp predictions of the asymptotic behavior of postulated MAP estimators. From the joint distribution, various further computations can be made, to provide precise predictions of quantities such as the mean-squared error (MSE) and the error probability of any componentwise hypothesis test computed from a postulated MAP estimate.

  • Computational tractability:

    Since the scalar equivalent model involves only a scalar random variable $x_j$, scalar Gaussian noise, and a scalar postulated MAP estimate $\hat{x}_j$, any quantity derived from the joint distribution can be computed numerically from one- or two-dimensional integrals (see the sketch following this list).

  • Generality:

    The replica analysis can incorporate arbitrary separable distributions on $\mathbf{x}$ and regularization functions $f$. It thus applies to a large class of estimators and test scenarios.
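To make the tractability concrete, the following Python sketch (our own illustrative code, not from the paper) evaluates the MSE of an arbitrary scalar estimator in the scalar equivalent model by a plain two-dimensional numerical integral; the prior and estimator shown are placeholder choices.

```python
import numpy as np

def scalar_mse(x_hat, prior_pdf, mu, grid=np.linspace(-8, 8, 801)):
    """MSE of a scalar estimator x_hat(z) when x ~ prior_pdf and
    z = x + v with v ~ N(0, mu), by 2-D numerical integration."""
    dx = grid[1] - grid[0]
    x = grid[:, None]   # signal values
    v = grid[None, :]   # noise values
    gauss = np.exp(-v**2 / (2 * mu)) / np.sqrt(2 * np.pi * mu)
    err2 = (x_hat(x + v) - x) ** 2
    return float(np.sum(err2 * prior_pdf(x) * gauss) * dx * dx)

# Example: soft-thresholding estimator under a Laplacian prior.
soft = lambda z, t=0.5: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
laplacian = lambda x: 0.5 * np.exp(-np.abs(x))
print(scalar_mse(soft, laplacian, mu=0.1))
```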

1.1 Replica Method and Contributions of this Work

The replica method was originally developed by Edwards and Anderson [1] to study the statistical mechanics of spin glasses. Although not fully rigorous from the perspective of probability theory, the technique was able to provide explicit solutions for a range of complex problems where many other methods had previously failed. Indeed, the replica method and related ideas from statistical mechanics have found success in a number of classic NP-hard problems including the traveling salesman problem [2], graph partitioning [3], $k$-SAT [4] and others [5]. Statistical physics methods have also been applied to the study of error correcting codes [6]. There are now several general texts on the replica method [8].

The replica method was first applied to the study of nonlinear MAP estimation problems by Tanaka [12]. That work applied what is called a replica symmetric analysis to multiuser detection for large CDMA systems with random spreading sequences. Müller [13] considered a mathematically-similar problem for MIMO communication systems. In the context of the estimation problem considered here, Tanaka's and Müller's papers essentially characterized the behavior of the MAP estimator of a vector $\mathbf{x}$ with i.i.d. binary components observed through linear measurements of the form (Equation 1) with a large random $\mathbf{\Phi}$ and Gaussian $\mathbf{w}$.

Tanaka’s results were then generalized in a remarkable paper by Guo and Verdú [14] to vectors with arbitrary separable distributions. Guo and Verdú’s result was also able to incorporate a large class of postulated minimum mean squared error (MMSE) estimators, where the estimator may assume a prior that is different from the actual prior. Replica analyses have also been applied to related communication problems such as lattice precoding for the Gaussian broadcast channel [15]. A brief review of the replica method analysis by Tanaka [12] and Guo and Verdú [14] is provided in Appendix Section 8.

The result in this paper is derived from Guo and Verdú [14] by a standard hardening argument. Specifically, the postulated MAP estimator is first expressed as a limit of the postulated MMSE estimators analyzed in [14]. Then, the behavior of the postulated MAP estimator can be derived by taking appropriate limits of the results in [14] on postulated MMSE estimators. This hardening technique is well-known and is used in Tanaka’s original work [12] in the analysis of MAP estimators with binary and Gaussian priors.

Through the limiting analysis via hardening, the postulated MAP results here follow from the postulated MMSE results in [14]. Thus, the central contribution of this paper is to evaluate these limits to provide a set of equations for a general class of postulated MAP estimators. In particular, while Tanaka has derived the equations for replica predictions of MAP estimates for binary and Gaussian priors, the results here provide explicit equations for general priors and regularization functions.

1.2 Replica Assumptions

The non-rigorous aspect of the replica method involves a set of assumptions that include a self-averaging property, the validity of a “replica trick,” and the ability to exchange certain limits. Importantly, this work is based on an additional strong assumption of replica symmetry. As described in Appendix Section 8, the replica method reduces the calculation of a certain free energy to an optimization problem over covariance matrices. The replica symmetric (RS) assumption is that the maxima in this optimization satisfy certain symmetry properties. This RS assumption is not always valid, and indeed Appendix Section 8 provides several examples from other applications of the replica method where replica symmetry breaking (RSB) solutions are known to be needed to provide correct predictions.

For the analysis of postulated MMSE estimators, [12] and [14] derive analytic conditions for the validity of the RS assumption only in some limited cases. Our analysis of postulated MAP estimators depends on [14], and, unfortunately, we have not provided a general analytic test for the validity of the RS assumption in this work. Following [14], our approach instead is to compare, where possible, the predictions under the RS assumption to numerical simulations of the postulated MAP estimator. As we will see in Section 6, the RS predictions appear to be accurate, at least for many common estimators arising in compressed sensing. That being said, the RS analysis can also provide predictions for optimal MMSE and zero norm-regularized estimators that cannot be simulated tractably. Extra caution must be applied in assuming the validity of the RS predictions for these estimators.

To emphasize our dependence on these unproven assumptions—notably replica symmetry—we will refer to the general MMSE analysis in Guo and Verdú’s work [14] as the replica symmetric postulated MMSE decoupling property. Our main result will be called the replica symmetric postulated MAP decoupling property.

1.3 Connections to Belief Propagation

Although not explored in this work, it is important to point out that the results of the replica analysis of postulated MMSE and MAP estimation are similar to those derived for belief propagation (BP) estimation. Specifically, there is now a large body of work analyzing BP and approximate BP algorithms for estimation of vectors observed through linear measurements of the form (Equation 1) with large random $\mathbf{\Phi}$. For both certain large sparse random matrices [16], and more recently for certain large dense random matrices [23], several results now show that BP estimates exhibit an asymptotic decoupling property similar to RS predictions for postulated MMSE and MAP estimators. Graphical model arguments have also been used to establish a decoupling property under a very general, random sparse observation model [27].

The effective noise level in the scalar equivalent model for BP and approximate BP methods can be predicted by certain state evolution equations similar to density evolution analysis of BP decoding of LDPC codes [28]. It turns out that in several cases, the fixed point equations for state evolution are identical to the equations for the effective noise level predicted by the RS analysis of postulated MMSE and MAP estimators. In particular, the equations in [23] agree exactly with the RS predictions for LASSO estimation given in this work.

These connections are significant in several regards: Firstly, the state evolution analysis of BP algorithms can be made fully rigorous under suitable assumptions and thus provides an independent, rigorous justification for some of the RS claims.

Secondly, the replica method provides only an analysis of estimators, but no method to actually compute those estimators. In contrast, the BP and approximate BP algorithms provide a possible tractable method for achieving the performance predicted by the replica method.

Finally, the BP analysis provides an algorithmic intuition as to why decoupling may occur (and hence when replica symmetry may be valid): As described in [30], BP and approximate BP algorithms can be seen as iterative procedures where the vector estimation problem is reduced to a sequence of "decoupled" scalar estimation problems. This decoupling is based essentially on the principle that, in each iteration, when estimating one component $x_j$, the uncertainty in the other components can be aggregated as Gaussian noise. Based on the state evolution analysis of BP algorithms, we know that this Central Limit Theorem-based approximation is asymptotically valid when the components of the mixing matrix are sufficiently dense and independent. Thus, the validity of RS is possibly connected to validity of this Gaussian approximation.

1.4 Applications to Compressed Sensing

As an application of our main result, we will develop a few analyses of estimation problems that arise in compressed sensing [31]. In compressed sensing, one estimates a sparse vector $\mathbf{x}$ from random linear measurements. A vector $\mathbf{x}$ is sparse when its number of nonzero entries $k$ is smaller than its length $n$. Generically, optimal estimation of $\mathbf{x}$ with a sparse prior is NP-hard [34]. Thus, most attention has focused on greedy heuristics such as matching pursuit [35] and convex relaxations such as basis pursuit [39] or lasso [40]. While successful in practice, these algorithms are difficult to analyze precisely.

Compressed sensing of a sparse $\mathbf{x}$ through measurements of the form (Equation 1) (using inner products with rows of $\mathbf{\Phi}$) is mathematically identical to sparse approximation of $\mathbf{y}$ with respect to the columns of $\mathbf{\Phi}$. An important set of results for both sparse approximation and compressed sensing are the deterministic conditions on the coherence of $\mathbf{\Phi}$ that are sufficient to guarantee good performance of the suboptimal methods mentioned above [41]. These conditions can be satisfied with high probability for certain large random measurement matrices. Compressed sensing has provided many sufficient conditions that are easier to satisfy than the initial coherence-based conditions. However, despite this progress, the exact performance of most sparse estimators is still not known precisely, even in the asymptotic case of large random measurement matrices. Most results describe the estimation performance via bounds, and the tightness of these bounds is generally not known.

There are, of course, notable exceptions including [44] and [45], which provide matching necessary and sufficient conditions for recovery of strictly sparse vectors with basis pursuit and lasso. However, even these results only consider exact recovery and are limited to measurements that are noise-free or measurements with a signal-to-noise ratio (SNR) that scales to infinity.

Many common sparse estimators can be seen as MAP estimators with certain postulated priors. Most importantly, lasso and basis pursuit are MAP estimators assuming a Laplacian prior. Other commonly-used sparse estimation algorithms, including linear estimation with and without thresholding and zero norm-regularized estimators, can also be seen as postulated MAP-based estimators. For these postulated MAP-based sparse estimation algorithms, the replica method can provide non-rigorous but sharp, easily-computable predictions for the asymptotic behavior. In the context of compressed sensing, this analysis can predict various performance metrics such as MSE or fraction of support recovery. The expressions can apply to arbitrary ratios $k/n$, $n/m$, and signal-to-noise ratio. Due to the generality of the replica analysis, the methodology can also incorporate arbitrary distributions on $\mathbf{x}$ including several sparsity models, such as Laplacian, generalized Gaussian, and Gaussian mixture priors. Discrete distributions can also be studied.

It should be pointed out that this work is not the first to use ideas from statistical physics for the study of sparse estimation. Guo, Baron and Shamai [46] have provided a replica analysis of compressed sensing that characterizes not just the postulated MAP or postulated MMSE estimate, but the asymptotic posterior marginal distribution. That work also shows an independence property across finite sets of components. Merhav, Guo and Shamai [47] consider, among other applications, the estimation of a sparse vector $\mathbf{x}$ from measurements of the form $\mathbf{y} = \mathbf{x} + \mathbf{w}$. In their model, there is no measurement matrix such as in (Equation 1), but the components of $\mathbf{x}$ are possibly correlated. Their work derives explicit expressions for the MMSE as a function of the probability distribution on the number of nonzero components. The analysis does not rely on replica assumptions and is fully rigorous. More recently, Kabashima, Wadayama and Tanaka [48] have used the replica method to derive precise conditions on which sparse signals can be recovered with $\ell_1$-based relaxations such as lasso. Their analysis does not consider noise, but can find conditions on recovery of the entire vector $\mathbf{x}$, not just individual components. Also, using free probability theory [49], a recent analysis [51] extends the replica analysis of compressed sensing to larger classes of matrices, including matrices that are possibly not i.i.d.

1.5 Outline

The remainder of the paper is organized as follows. The precise estimation problem is described in Section 2. We review the RS postulated MMSE decoupling property of Guo and Verdú in Section 3. We then present our main result, an RS postulated MAP decoupling property, in Section 4. The results are applied to the analysis of compressed sensing algorithms in Section 5, which is followed by numerical simulations in Section 6. Conclusions and possible avenues for future work are given in Section 7. The proof of the main result is somewhat long and given in a set of appendices; Appendix Section 9 provides an overview of the proof and a guide through the appendices with detailed arguments.

2 Estimation Problem and Assumptions

Consider the estimation of a random vector $\mathbf{x} \in \mathbb{R}^n$ from linear measurements of the form

$$\mathbf{y} = \mathbf{A}\mathbf{S}^{1/2}\mathbf{x} + \mathbf{w},$$

where $\mathbf{y} \in \mathbb{R}^m$ is a vector of observations; $\mathbf{A} \in \mathbb{R}^{m \times n}$ is a measurement matrix; $\mathbf{S}$ is a diagonal matrix of positive scale factors,

$$\mathbf{S} = \mathrm{diag}(s_1, \ldots, s_n), \qquad s_j > 0,$$

and $\mathbf{w} \in \mathbb{R}^m$ is zero-mean, white Gaussian noise. We consider a sequence of such problems indexed by $n$, with $n \to \infty$. For each $n$, the problem is to determine an estimate $\hat{\mathbf{x}}$ of $\mathbf{x}$ from the observations $\mathbf{y}$ knowing the measurement matrix $\mathbf{A}$ and scale factor matrix $\mathbf{S}$.

The components $x_j$ of $\mathbf{x}$ are modeled as zero mean and i.i.d. with some prior probability distribution $p_0(x_j)$. The per-component variance of the Gaussian noise is $\mathbb{E}[w_i^2] = \sigma_0^2$. We use the subscript "0" on the prior and noise level to differentiate these quantities from certain "postulated" values to be defined later. When we develop applications in Section 5, the prior $p_0$ will incorporate presumed sparsity of $\mathbf{x}$.

In (Equation 3), we have factored $\mathbf{\Phi} = \mathbf{A}\mathbf{S}^{1/2}$ so that even with the i.i.d. assumption on $\mathbf{x}$ above and an i.i.d. assumption on the entries of $\mathbf{A}$, the model can capture variations in the powers of the components of $\mathbf{x}$ that are known a priori at the estimator. Specifically, multiplication by $\mathbf{S}^{1/2}$ scales the variance of the $j$th component of $\mathbf{x}$ by a factor $s_j$. Variations in the power of $\mathbf{x}$ that are not known to the estimator should be captured in the distribution of $\mathbf{x}$.

We summarize the situation and make additional assumptions to specify the problem precisely as follows:

  • The number of measurements $m = m(n)$ is a deterministic quantity that varies with $n$ and satisfies

    $$\lim_{n \to \infty} \frac{n}{m} = \beta$$

    for some $\beta > 0$. (The dependence of $m$ on $n$ is usually omitted for brevity.)

  • The components of $\mathbf{x}$ are i.i.d. with probability distribution $p_0(x_j)$. All moments of $x_j$ are finite.

  • The noise $\mathbf{w}$ is Gaussian with $\mathbf{w} \sim \mathcal{N}(0, \sigma_0^2 \mathbf{I}_m)$.

  • The components of the matrix $\mathbf{A}$ are i.i.d. and distributed as $A_{ij} \sim (1/\sqrt{m})\, A$ for some random variable $A$ with zero mean, unit variance and all other moments of $A$ finite.

  • The scale factors $s_j$ are i.i.d., satisfy $s_j > 0$ almost surely, and all moments of $s_j$ are finite.

  • The scale factor matrix $\mathbf{S}$, measurement matrix $\mathbf{A}$, vector $\mathbf{x}$, and noise $\mathbf{w}$ are all independent.
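For concreteness, the following Python sketch draws one problem instance satisfying these assumptions; the Bernoulli-Gaussian prior and the uniform scale-factor distribution are illustrative placeholders, not choices mandated by the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_instance(n=200, beta=2.0, sigma0_sq=0.1, rho=0.1):
    """One instance of y = A S^{1/2} x + w with i.i.d. entries.

    A has i.i.d. zero-mean entries of variance 1/m; x is i.i.d. from an
    (illustrative) Bernoulli-Gaussian prior with unit variance; the s_j
    are i.i.d. positive scale factors."""
    m = int(round(n / beta))
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    s = rng.uniform(0.5, 1.5, n)                 # positive scale factors
    x = (rng.random(n) < rho) * rng.standard_normal(n) / np.sqrt(rho)
    w = np.sqrt(sigma0_sq) * rng.standard_normal(m)
    y = A @ (np.sqrt(s) * x) + w
    return y, A, s, x
```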

3 Review of the Replica Symmetric Postulated MMSE Decoupling Property

We begin by reviewing the RS postulated MMSE decoupling property of Guo and Verdú [14].

3.1 Postulated MMSE Estimators

To define the concept of a postulated MMSE estimator, suppose one is given a "postulated" prior distribution $p_{\rm post}$ and a postulated noise level $\sigma_{\rm post}^2$ that may be different from the true values $p_0$ and $\sigma_0^2$. We define the postulated minimum MSE (PMMSE) estimate of $\mathbf{x}$ as

$$\hat{\mathbf{x}}_{\rm pmmse} = \mathbb{E}\left[\mathbf{x} \mid \mathbf{y};\, p_{\rm post}, \sigma_{\rm post}^2\right] = \int \mathbf{x}\, p(\mathbf{x} \mid \mathbf{y};\, p_{\rm post}, \sigma_{\rm post}^2)\, d\mathbf{x},$$

where $p(\mathbf{x} \mid \mathbf{y};\, p, \sigma^2)$ is the conditional distribution of $\mathbf{x}$ given $\mathbf{y}$ under the distribution and noise variance specified as parameters after the semicolon. We will use this sort of notation throughout the rest of the paper, including the use of $p$ without a subscript for the p.d.f. of the scalar or vector quantity understood from context. In this case, due to the Gaussianity of the noise, we have

$$p(\mathbf{x} \mid \mathbf{y};\, p, \sigma^2) = C^{-1} \exp\left(-\frac{1}{2\sigma^2}\left\|\mathbf{y} - \mathbf{A}\mathbf{S}^{1/2}\mathbf{x}\right\|^2\right) \prod_{j=1}^n p(x_j),$$

where the normalization constant is

$$C = \int \exp\left(-\frac{1}{2\sigma^2}\left\|\mathbf{y} - \mathbf{A}\mathbf{S}^{1/2}\mathbf{x}\right\|^2\right) \prod_{j=1}^n p(x_j)\, d\mathbf{x},$$

and $p(\mathbf{y}, \mathbf{x};\, p, \sigma^2)$ is the joint p.d.f. of $\mathbf{y}$ and $\mathbf{x}$ under the postulated prior and noise level.

In the case when $p_{\rm post} = p_0$ and $\sigma_{\rm post}^2 = \sigma_0^2$, so that the postulated and true values agree, the PMMSE estimate reduces to the true MMSE estimate.

3.2 Decoupling under Replica Symmetric Assumption

The essence of the RS PMMSE decoupling property is that the asymptotic behavior of the PMMSE estimator is described by an equivalent scalar estimator. Let $p(x)$ be a probability distribution defined on some set $\mathcal{X}$. Given $\mu > 0$, let $p(x \mid z;\, p, \mu)$ be the conditional distribution

$$p(x \mid z;\, p, \mu) = \frac{p(x)\, \phi(z - x;\, \mu)}{\int_{\mathcal{X}} p(x')\, \phi(z - x';\, \mu)\, dx'},$$

where $\phi(\cdot;\, \mu)$ is the Gaussian distribution

$$\phi(v;\, \mu) = \frac{1}{\sqrt{2\pi\mu}} \exp\left(-\frac{v^2}{2\mu}\right).$$

The distribution $p(x \mid z;\, p, \mu)$ is the conditional distribution of the scalar random variable $x \sim p(x)$ given an observation of the form

$$z = x + v,$$

where $v \sim \mathcal{N}(0, \mu)$. Using this distribution, we can define the scalar conditional MMSE estimate

$$\hat{x}^{\rm scalar}_{\rm mmse}(z;\, p, \mu) = \int_{\mathcal{X}} x\, p(x \mid z;\, p, \mu)\, dx.$$

Also, given two distributions, $p_0$ and $p_1$, and two noise levels, $\mu_0$ and $\mu_1$, define

$$\mathrm{mse}(p_1, p_0, \mu_1, \mu_0, z) = \int_{\mathcal{X}} \left|x - \hat{x}^{\rm scalar}_{\rm mmse}(z;\, p_1, \mu_1)\right|^2 p(x \mid z;\, p_0, \mu_0)\, dx,$$

which is the MSE in estimating the scalar $x$ from the variable $z$ in (Equation 9) when $x$ has a true distribution $p_0$ and the noise level is $\mu_0$, but the estimator assumes a distribution $p_1$ and noise level $\mu_1$.

Replica Symmetric Postulated MMSE Decoupling Property [14]:

Consider the estimation problem in Section 2. Let $\hat{\mathbf{x}}_{\rm pmmse}$ be the PMMSE estimator based on a postulated prior $p_{\rm post}$ and postulated noise level $\sigma_{\rm post}^2$. For each $n$, let $j = j(n)$ be some deterministic component index with $j(n) \in \{1, \ldots, n\}$. Then under replica symmetry, there exist effective noise levels $\sigma_{\rm eff}^2$ and $\gamma_p$ such that:

  • As $n \to \infty$, the random vectors $(x_j, s_j, \hat{x}_j)$ converge in distribution to the random vector $(x, s, \hat{x})$ consistent with the block diagram in Fig. ?. Here $x$, $s$, and $v$ are independent with $x \sim p_0(x)$, $s \sim p_S(s)$, $v \sim \mathcal{N}(0, 1)$, and

    $$z = x + \sqrt{\sigma_{\rm eff}^2 / s}\; v, \qquad \hat{x} = \hat{x}^{\rm scalar}_{\rm mmse}(z;\, p_{\rm post}, \gamma_p / s).$$

  • The effective noise levels satisfy the equations

    $$\sigma_{\rm eff}^2 = \sigma_0^2 + \beta\, \mathbb{E}\left[s\, \mathrm{mse}(p_{\rm post}, p_0, \gamma_p/s, \sigma_{\rm eff}^2/s, z)\right],$$
    $$\gamma_p = \sigma_{\rm post}^2 + \beta\, \mathbb{E}\left[s\, \mathrm{mse}(p_{\rm post}, p_{\rm post}, \gamma_p/s, \gamma_p/s, z)\right],$$

    where the expectations are taken over $s \sim p_S(s)$ and $z$ generated by ( ?).

This result asserts that the asymptotic behavior of the joint estimation of the $n$-dimensional vector $\mathbf{x}$ can be described by $n$ equivalent scalar estimators. In the scalar estimation problem, a component $x \sim p_0(x)$ is corrupted by additive Gaussian noise yielding a noisy measurement $z$. The additive noise variance is $\sigma_{\rm eff}^2/s$, which is the effective noise $\sigma_{\rm eff}^2$ divided by the scale factor $s$. The estimate of that component is then described by the (generally nonlinear) scalar estimator $\hat{x}^{\rm scalar}_{\rm mmse}(z;\, p_{\rm post}, \gamma_p/s)$.

The effective noise levels $\sigma_{\rm eff}^2$ and $\gamma_p$ are described by the solutions to the fixed-point equations ( ?). Note that $\sigma_{\rm eff}^2$ and $\gamma_p$ appear implicitly on the left- and right-hand sides of these equations via the terms $\sigma_{\rm eff}^2/s$ and $\gamma_p/s$. In general, there is no closed-form solution to these equations. However, the expectations can be evaluated via (one-dimensional) numerical integration.
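In practice the equations can be solved by a damped fixed-point iteration. The following Python sketch is a generic solver under the assumption that the two right-hand sides have been wrapped in a single update function; the function names are ours.

```python
def solve_fixed_point(update, sigma_eff_sq0, gamma_p0,
                      damp=0.5, tol=1e-9, max_iter=2000):
    """Damped fixed-point iteration for the two effective noise levels.

    `update(sigma_eff_sq, gamma_p)` must return the right-hand sides of
    the two fixed-point equations, with the inner expectations evaluated
    by one-dimensional numerical integration or Monte Carlo sampling."""
    sigma_eff_sq, gamma_p = sigma_eff_sq0, gamma_p0
    for _ in range(max_iter):
        s_rhs, g_rhs = update(sigma_eff_sq, gamma_p)
        s_new = (1 - damp) * sigma_eff_sq + damp * s_rhs
        g_new = (1 - damp) * gamma_p + damp * g_rhs
        if abs(s_new - sigma_eff_sq) + abs(g_new - gamma_p) < tol:
            return s_new, g_new
        sigma_eff_sq, gamma_p = s_new, g_new
    return sigma_eff_sq, gamma_p  # caller should verify convergence
```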

It is important to point out that there may, in general, be multiple solutions to the fixed-point equations ( ?). In this case, it turns out that the true solution is the minimizer of a certain Gibbs’ function described in [14].

3.3 Effective Noise and Multiuser Efficiency

To understand the significance of the effective noise level $\sigma_{\rm eff}^2$, it is useful to consider the following estimation problem with side information. Suppose that when estimating the component $x_j$ an estimator is given as side information the values of all the other components $\{x_\ell,\ \ell \ne j\}$. Then, this hypothetical estimator with side information can "subtract out" the effect of all the known components and compute

$$z_j = \frac{1}{\|\mathbf{a}_j\|^2}\, \mathbf{a}_j^{\mathsf{T}} \Big(\mathbf{y} - \sum_{\ell \ne j} \mathbf{a}_\ell x_\ell\Big),$$

where $\mathbf{a}_j$ is the $j$th column of the measurement matrix $\mathbf{A}\mathbf{S}^{1/2}$. It is easily checked that

$$z_j = x_j + v_j,$$

where

$$v_j = \frac{1}{\|\mathbf{a}_j\|^2}\, \mathbf{a}_j^{\mathsf{T}} \mathbf{w}.$$

Thus, (Equation 12) shows that with side information, estimation of $x_j$ reduces to a scalar estimation problem where $x_j$ is corrupted by additive noise $v_j$. Since $\mathbf{w}$ is Gaussian with mean zero and per-component variance $\sigma_0^2$, $v_j$ is Gaussian with mean zero and variance $\sigma_0^2 / \|\mathbf{a}_j\|^2$. Also, since $\mathbf{a}_j$ is an $m$-dimensional vector whose components are i.i.d. with variance $s_j/m$, $\|\mathbf{a}_j\|^2 \to s_j$ as $m \to \infty$. Therefore, for large $m$, the variance of $v_j$ will approach $\sigma_0^2 / s_j$.

Comparing (Equation 12) with ( ?), we see that the equivalent scalar model predicted by the RS PMMSE decoupling property ( ?) is identical to the estimation with perfect side information (Equation 12), except that the noise level is increased by a factor

$$\frac{1}{\eta} = \frac{\sigma_{\rm eff}^2}{\sigma_0^2}.$$

In multiuser detection, the factor $\eta = \sigma_0^2 / \sigma_{\rm eff}^2$ is called the multiuser efficiency [52].

The multiuser efficiency can be interpreted as degradation in the effective signal-to-noise ratio (SNR): With perfect side-information, an estimator using $z_j$ in (Equation 12) can estimate $x_j$ with an effective SNR of

$$\mathrm{SNR}_0 = \frac{1}{\sigma_0^2}\, \mathbb{E}\left[s\, x^2\right].$$

In CDMA multiuser detection, the factor $\mathrm{SNR}_0$ is called the post-despreading SNR with no multiple access interference. The RS PMMSE decoupling property shows that without side information, the effective SNR is given by

$$\mathrm{SNR}_{\rm eff} = \frac{1}{\sigma_{\rm eff}^2}\, \mathbb{E}\left[s\, x^2\right].$$

Therefore, the multiuser efficiency $\eta$ in (Equation 13) is the ratio of the effective SNR without perfect side information to the effective SNR with it.

4 Analysis of Postulated MAP Estimators via Hardening

The main result of the paper is developed in this section.

4.1 Postulated MAP Estimators

Let $\mathcal{X}$ be some (measurable) set and consider an estimator of the form

$$\hat{\mathbf{x}}_{\rm pmap} = \arg\min_{\mathbf{x} \in \mathcal{X}^n}\; \frac{1}{2\gamma} \left\|\mathbf{y} - \mathbf{A}\mathbf{S}^{1/2}\mathbf{x}\right\|^2 + \sum_{j=1}^n f(x_j),$$

where $\gamma > 0$ is an algorithm parameter and $f: \mathcal{X} \to \mathbb{R}$ is some scalar-valued, nonnegative cost function. We will assume that the objective function in (Equation 16) has a unique essential minimizer for almost all $\mathbf{y}$.

The estimator (Equation 16) can be interpreted as a MAP estimator. To see this, suppose that for $u$ sufficiently large,

$$\int_{\mathcal{X}} \exp(-u f(x))\, dx < \infty,$$

where we have extended the notation $f$ to vector arguments such that

$$f(\mathbf{x}) = \sum_{j=1}^n f(x_j).$$

When (Equation 17) is satisfied, we can define a prior probability distribution depending on $u$:

$$p_u(x) = \frac{\exp(-u f(x))}{\int_{\mathcal{X}} \exp(-u f(x'))\, dx'}.$$

Also, let

$$\sigma_u^2 = \frac{\gamma}{u}.$$

Substituting (Equation 19) and (Equation 20) into (Equation 6), we see that

$$p(\mathbf{x} \mid \mathbf{y};\, p_u, \sigma_u^2) = C^{-1} \exp\left(-u\left[\frac{1}{2\gamma}\left\|\mathbf{y} - \mathbf{A}\mathbf{S}^{1/2}\mathbf{x}\right\|^2 + f(\mathbf{x})\right]\right)$$

for some constant $C$ that does not depend on $\mathbf{x}$. (The scaling of the noise variance along with $u$ enables the factorization in the exponent of (Equation 21).) Comparing to (Equation 16), we see that

$$\hat{\mathbf{x}}_{\rm pmap} = \arg\max_{\mathbf{x} \in \mathcal{X}^n}\; p(\mathbf{x} \mid \mathbf{y};\, p_u, \sigma_u^2)$$

for all sufficiently large $u$. Thus for all sufficiently large $u$, we indeed have a MAP estimate—assuming the prior $p_u$ and noise level $\sigma_u^2$.

4.2 Decoupling under Replica Symmetric Assumption

To analyze the postulated MAP (PMAP) estimator, we consider a sequence of postulated MMSE estimators indexed by $u$. For each $u$, let

$$\hat{\mathbf{x}}_u = \mathbb{E}\left[\mathbf{x} \mid \mathbf{y};\, p_u, \sigma_u^2\right],$$

which is the MMSE estimator of $\mathbf{x}$ under the postulated prior $p_u$ in (Equation 19) and noise level $\sigma_u^2$ in (Equation 20). Using a standard large deviations argument, one can show that under suitable conditions

$$\lim_{u \to \infty} \hat{\mathbf{x}}_u = \hat{\mathbf{x}}_{\rm pmap}$$

for all $\mathbf{y}$. A formal proof is given in Appendix Section 11 (see Lemma ?). Under the assumption that the behaviors of the postulated MMSE estimators are described by the RS PMMSE decoupling property, we can then extrapolate the behavior of the postulated MAP estimator. This will yield our main result.

In statistical physics the parameter $u$ has the interpretation of inverse temperature (see a general discussion in [54]). Thus, the limit as $u \to \infty$ can be interpreted as a cooling or "hardening" of the system.
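The limit can be made intuitive by Laplace's principle: as the inverse temperature grows, the postulated posterior concentrates all of its mass on the minimizer of the objective, so the posterior mean converges to the MAP estimate. Schematically, under the notation above,

$$p(\mathbf{x} \mid \mathbf{y};\, p_u, \sigma_u^2) \;\propto\; \exp\left(-u\left[\frac{1}{2\gamma}\left\|\mathbf{y} - \mathbf{A}\mathbf{S}^{1/2}\mathbf{x}\right\|^2 + f(\mathbf{x})\right]\right) \;\longrightarrow\; \delta\left(\mathbf{x} - \hat{\mathbf{x}}_{\rm pmap}\right) \quad \text{as } u \to \infty,$$

where the convergence is in distribution and relies on the assumed uniqueness of the minimizer.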

In preparation for the main result, define the scalar MAP estimator

$$\hat{x}^{\rm scalar}_{\rm map}(z;\, \lambda) = \arg\min_{x \in \mathcal{X}} F(x, z, \lambda),$$

where

$$F(x, z, \lambda) = \frac{|z - x|^2}{2\lambda} + f(x).$$

The estimator (Equation 23) plays a similar role as the scalar MMSE estimator (Equation 10).
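For cost functions without a closed-form minimizer, the scalar MAP estimate can be evaluated numerically. The following Python sketch uses a brute-force grid search; the function names are ours, and the grid bounds would need to suit the problem at hand.

```python
import numpy as np

def scalar_map(z, lam, f, grid=np.linspace(-10.0, 10.0, 20001)):
    """Scalar MAP estimate: argmin over x of |z - x|^2 / (2*lam) + f(x),
    approximated by searching a fine grid of candidate values."""
    cost = (z - grid) ** 2 / (2.0 * lam) + f(grid)
    return grid[np.argmin(cost)]

# Example: f(x) = |x| recovers the soft-thresholding rule.
print(scalar_map(1.3, 0.5, np.abs))   # approximately 0.8
```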

The main result pertains to the estimator (Equation 16) applied to the sequence of estimation problems defined in Section 2. Our assumptions are as follows:

Assumption ? is simply stated to again point out that we are assuming the validity of replica symmetry for the postulated MMSE estimates. We make the additional Assumptions ? and ?, which are also difficult to verify but similar in spirit. Taken together, Assumptions ?? reflect the main limitations of the replica symmetric analysis and precisely state the manner in which the analysis is non-rigorous.

Assumptions ?? are technical conditions on the existence and uniqueness of the MAP estimate. Assumption ? will be true for any strictly convex regularization , although it is difficult to verify in the non-convex case. The other two assumptions, Assumptions ? and ?, will be verified for the problems of interest. In fact, we will explicitly calculate .

We can now state our extension of the RS PMMSE decoupling property.

Replica Symmetric Postulated MAP Decoupling Property:

Consider the estimation problem in Section 2. Let $\hat{\mathbf{x}}_{\rm pmap}$ be the postulated MAP estimator (Equation 16) defined for some $f$ and $\gamma$ and satisfying Assumptions ??. For each $n$, let $j = j(n)$ be some deterministic component index with $j(n) \in \{1, \ldots, n\}$. Then under replica symmetry (as part of Assumption ?):

  • As $n \to \infty$, the random vectors $(x_j, s_j, \hat{x}_j)$ converge in distribution to the random vector $(x, s, \hat{x})$ consistent with the block diagram in Fig. ? for the limiting effective noise levels $\sigma_{\rm eff}^2$ and $\gamma_p$ in Assumption ?. Here $x$, $s$, and $v$ are independent with $x \sim p_0(x)$, $s \sim p_S(s)$, $v \sim \mathcal{N}(0, 1)$, and

    $$z = x + \sqrt{\sigma_{\rm eff}^2 / s}\; v, \qquad \hat{x} = \hat{x}^{\rm scalar}_{\rm map}(z;\, \gamma_p / s).$$

  • The limiting effective noise levels $\sigma_{\rm eff}^2$ and $\gamma_p$ satisfy the equations

    $$\sigma_{\rm eff}^2 = \sigma_0^2 + \beta\, \mathbb{E}\left[s\, |x - \hat{x}|^2\right], \qquad \gamma_p = \gamma + \beta\, \mathbb{E}\left[s\, \sigma_p^2(z, \gamma_p/s)\right],$$

    where the expectations are taken over $x \sim p_0(x)$, $s \sim p_S(s)$, and $z$, with $\hat{x}$ and $\sigma_p^2(z, \lambda)$ defined in ( ?).

Analogously to the RS PMMSE decoupling property, the RS PMAP decoupling property asserts that the asymptotic behavior of the PMAP estimate of any single component of $\mathbf{x}$ is described by a simple equivalent scalar estimator. In the equivalent scalar model, the component of the true vector is corrupted by Gaussian noise and the estimate of that component is given by a scalar PMAP estimate of the component from the noise-corrupted version.

5 Analysis of Compressed Sensing

Our results thus far hold for any separable distribution for $\mathbf{x}$ (see Section 2) and under mild conditions on the cost function $f$ (see especially Assumption ?, but other assumptions also implicitly constrain $f$). In this section, we provide additional details on replica analysis for choices of $f$ that yield PMAP estimators relevant to compressed sensing. Since the role of $f$ is to determine the estimator, this is not the same as choosing sparse priors for $\mathbf{x}$. Numerical evaluations of asymptotic performance with sparse priors for $\mathbf{x}$ are given in Section 6.

5.1 Linear Estimation

We first apply the RS PMAP decoupling property to the simple case of linear estimation. Linear estimators only use second-order statistics and generally do not directly exploit sparsity or other aspects of the distribution of the unknown vector $\mathbf{x}$. Nonetheless, for sparse estimation problems, linear estimators can be used as a first step in estimation, followed by thresholding or other nonlinear operations [55]. It is therefore worthwhile to analyze the behavior of linear estimators even in the context of sparse priors.

The asymptotic behavior of linear estimators with large random measurement matrices is well known. For example, using the Marčenko–Pastur theorem [57], Verdú and Shamai [58] characterized the behavior of linear estimators with large i.i.d. matrices and constant scale factors ($\mathbf{S} = \mathbf{I}$). Tse and Hanly [59] extended the analysis to general $\mathbf{S}$. Guo and Verdú [14] showed that both of these results can be recovered as special cases of the general RS PMMSE decoupling property. We show here that the RS PMAP decoupling property can also recover these results. Although the calculations are very similar to [14], and indeed we arrive at precisely the same results, walking through the computations will illustrate how the RS PMAP decoupling property is used.

To simplify the notation, suppose that the true prior on $\mathbf{x}$ is such that each component has zero mean and unit variance. Choose the cost function

$$f(x) = \frac{x^2}{2},$$

which corresponds to the negative log of a Gaussian prior also with zero mean and unit variance. With this cost function, the PMAP estimator (Equation 16) reduces to the linear estimator

$$\hat{\mathbf{x}} = \mathbf{S}^{1/2}\mathbf{A}^{\mathsf{T}}\left(\mathbf{A}\mathbf{S}\mathbf{A}^{\mathsf{T}} + \gamma \mathbf{I}\right)^{-1}\mathbf{y}.$$

When $\gamma = \sigma_0^2$, the true noise variance, the estimator (Equation 25) is the linear MMSE estimate.

Now, let us compute the effective noise levels from the RS PMAP decoupling property. First note that $F$ in (Equation 24) is given by

$$F(x, z, \lambda) = \frac{|z - x|^2}{2\lambda} + \frac{x^2}{2},$$

and therefore the scalar MAP estimator in (Equation 23) is given by

$$\hat{x}^{\rm scalar}_{\rm map}(z;\, \lambda) = \frac{z}{1 + \lambda}.$$

A simple calculation also shows that $\sigma_p^2(z, \lambda)$ in ( ?) is given by

$$\sigma_p^2(z, \lambda) = \frac{\lambda}{1 + \lambda}.$$
As part (a) of the RS PMAP decoupling property, let $z = x + \sqrt{\sigma_{\rm eff}^2/s}\, v$ and $\hat{x} = z/(1 + \gamma_p/s)$. Observe that

$$\mathbb{E}\left[s\, |x - \hat{x}|^2\right] \stackrel{(a)}{=} \mathbb{E}\left[s\, \left|x - \frac{z}{1 + \gamma_p/s}\right|^2\right] \stackrel{(b)}{=} \mathbb{E}\left[s\, \frac{\left|(\gamma_p/s)\, x - \sqrt{\sigma_{\rm eff}^2/s}\, v\right|^2}{(1 + \gamma_p/s)^2}\right] \stackrel{(c)}{=} \mathbb{E}\left[\frac{\gamma_p^2/s + \sigma_{\rm eff}^2}{(1 + \gamma_p/s)^2}\right],$$

where (a) follows from (Equation 26); (b) follows from ( ?); and (c) follows from the fact that $x$ and $v$ are uncorrelated with zero mean and unit variance. Substituting (Equation 27) and (Equation 28) into the fixed-point equations ( ?), we see that the limiting noise levels $\sigma_{\rm eff}^2$ and $\gamma_p$ must satisfy

$$\sigma_{\rm eff}^2 = \sigma_0^2 + \beta\, \mathbb{E}\left[\frac{\sigma_{\rm eff}^2 + \gamma_p^2/s}{(1 + \gamma_p/s)^2}\right], \qquad \gamma_p = \gamma + \beta\, \mathbb{E}\left[\frac{\gamma_p\, s}{s + \gamma_p}\right],$$

where the expectation is over $s \sim p_S(s)$. In the case when $\gamma = \sigma_0^2$, it can be verified that a solution to these fixed-point equations is $\gamma_p = \sigma_{\rm eff}^2$, which results in the single equation

$$\sigma_{\rm eff}^2 = \sigma_0^2 + \beta\, \mathbb{E}\left[\frac{s\, \sigma_{\rm eff}^2}{s + \sigma_{\rm eff}^2}\right].$$

The expression (Equation 29) is precisely the Tse–Hanly formula [59] for the effective interference. Given a distribution on $s$, this expression can be solved numerically for $\sigma_{\rm eff}^2$. In the special case of constant $s$, (Equation 29) reduces to Verdú and Shamai's result in [60] and can be solved via a quadratic equation.
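As a sanity check, the constant-scale special case can be solved in closed form. The following Python sketch assumes the fixed point reduces, for $s \equiv 1$ and a unit-variance prior, to $\sigma_{\rm eff}^2 = \sigma_0^2 + \beta\, \sigma_{\rm eff}^2 / (1 + \sigma_{\rm eff}^2)$, which rearranges to a quadratic.

```python
import numpy as np

def lmmse_effective_noise(sigma0_sq, beta):
    """Positive root of s^2 + (1 - beta - sigma0_sq) s - sigma0_sq = 0
    in s = sigma_eff^2, the assumed constant-scale fixed point for the
    linear MMSE estimator."""
    b = 1.0 - beta - sigma0_sq
    return (-b + np.sqrt(b * b + 4.0 * sigma0_sq)) / 2.0

# Example: 10 dB SNR (sigma0_sq = 0.1) and beta = n/m = 2.
print(lmmse_effective_noise(0.1, 2.0))
```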

The RS PMAP decoupling property now states that for any component index $j$, the asymptotic joint distribution of $(x_j, s_j, \hat{x}_j)$ is described by $x_j$ corrupted by additive Gaussian noise with variance $\sigma_{\rm eff}^2/s_j$ followed by a scalar linear estimator.

As described in [14], the above analysis can also be applied to other linear estimators including the matched filter (where $\gamma \to \infty$) or the decorrelating receiver ($\gamma \to 0$).

5.2 Lasso Estimation

We next consider lasso estimation, which is widely used for estimation of sparse vectors. The lasso estimate [40] (sometimes referred to as basis pursuit denoising [39]) is given by

$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}}\; \frac{1}{2\gamma}\left\|\mathbf{y} - \mathbf{A}\mathbf{S}^{1/2}\mathbf{x}\right\|^2 + \|\mathbf{x}\|_1,$$

where $\gamma > 0$ is an algorithm parameter. The estimator is essentially a least-squares estimator with an additional $\ell_1$ regularization term to encourage sparsity in the solution. The parameter $\gamma$ is selected to trade off the sparsity of the estimate with the prediction error. An appealing feature of lasso estimation is that the minimization in (Equation 30) is convex; lasso thus enables computationally-tractable algorithms for finding sparse estimates.

The lasso estimator (Equation 30) is identical to the PMAP estimator (Equation 16) with the cost function

$$f(x) = |x|.$$

With this cost function, $F$ in (Equation 24) is given by

$$F(x, z, \lambda) = \frac{|z - x|^2}{2\lambda} + |x|,$$

and therefore the scalar MAP estimator in (Equation 23) is given by

$$\hat{x}^{\rm scalar}_{\rm map}(z;\, \lambda) = \hat{x}^{\rm soft}_{\lambda}(z),$$

where $\hat{x}^{\rm soft}_{t}(\cdot)$ is the soft thresholding operator

$$\hat{x}^{\rm soft}_{t}(z) = \begin{cases} z - t, & z > t, \\ 0, & |z| \le t, \\ z + t, & z < -t. \end{cases}$$
The RS PMAP decoupling property now states that there exist effective noise levels $\sigma_{\rm eff}^2$ and $\gamma_p$ such that for any component index $j$, the random vector $(x_j, s_j, \hat{x}_j)$ converges in distribution to the vector $(x, s, \hat{x})$ where $x \sim p_0(x)$, $s \sim p_S(s)$, and $\hat{x}$ is given by

$$\hat{x} = \hat{x}^{\rm soft}_{\lambda}(z), \qquad z = x + v,$$

where $\lambda = \gamma_p / s$, $v \sim \mathcal{N}(0, \sigma_{\rm eff}^2 / s)$, and $v$ is independent of $x$. Hence, the asymptotic behavior of lasso has a remarkably simple description: the asymptotic distribution of the lasso estimate $\hat{x}_j$ of the component $x_j$ is identical to $x_j$ being corrupted by Gaussian noise and then soft-thresholded to yield the estimate $\hat{x}_j$.

This soft-threshold description has an appealing interpretation. Consider the case when $m = n$ and the measurement matrix is the identity, $\mathbf{A} = \mathbf{I}$. In this case, the lasso estimator (Equation 30) reduces to $n$ scalar estimates,

$$\hat{x}_j = \hat{x}^{\rm soft}_{\gamma/s_j}(z_j), \qquad z_j = x_j + w_j/\sqrt{s_j},$$

where $w_j \sim \mathcal{N}(0, \sigma_0^2)$. Comparing (Equation 33) and (Equation 34), we see that the asymptotic distribution of $(x_j, s_j, \hat{x}_j)$ with large random $\mathbf{A}$ is identical to the distribution in the trivial case where $\mathbf{A} = \mathbf{I}$, except that the noise levels $\sigma_0^2$ and $\gamma$ are replaced by the effective noise levels $\sigma_{\rm eff}^2$ and $\gamma_p$.
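Given effective noise levels from the fixed-point equations, any asymptotic metric for lasso can be estimated by sampling this scalar model. A Python sketch, with unit scale factors, our own function names, and the soft threshold applied at the level $\gamma_p$ from the scalar model above:

```python
import numpy as np

rng = np.random.default_rng(0)

def lasso_scalar_model(sample_x, sigma_eff_sq, gamma_p, n=10**6):
    """Sample (x, z, x_hat) from the scalar equivalent model for lasso
    with unit scale factors: z = x + v, v ~ N(0, sigma_eff_sq), and
    x_hat the soft threshold of z at level gamma_p."""
    x = sample_x(n)
    z = x + np.sqrt(sigma_eff_sq) * rng.standard_normal(n)
    x_hat = np.sign(z) * np.maximum(np.abs(z) - gamma_p, 0.0)
    return x, z, x_hat

# Predicted MSE under an illustrative Bernoulli-Gaussian prior:
bg = lambda n, rho=0.1: (rng.random(n) < rho) * rng.standard_normal(n) / np.sqrt(rho)
x, _, x_hat = lasso_scalar_model(bg, sigma_eff_sq=0.05, gamma_p=0.1)
print(np.mean((x_hat - x) ** 2))
```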

To calculate the effective noise levels, one can perform a simple calculation to show that $\sigma_p^2(z, \lambda)$ in ( ?) is given by

$$\sigma_p^2(z, \lambda) = \lambda\, \mathbf{1}_{\{|z| > \lambda\}}.$$

Hence,

$$\mathbb{E}\left[s\, \sigma_p^2(z, \gamma_p/s)\right] = \gamma_p\, \Pr\left(|z| > \gamma_p/s\right),$$

where we have used the fact that $\lambda = \gamma_p/s$. Substituting (Equation 31) and (Equation 36) into ( ?), we obtain the fixed-point equations

$$\sigma_{\rm eff}^2 = \sigma_0^2 + \beta\, \mathbb{E}\left[s\, |x - \hat{x}|^2\right], \qquad \gamma_p = \gamma + \beta\, \gamma_p\, \Pr\left(|z| > \gamma_p/s\right),$$

where the expectations are taken with respect to $x \sim p_0(x)$, $s \sim p_S(s)$, and $z$ in (Equation 33). Again, while these fixed-point equations do not have a closed-form solution, they can be relatively easily solved numerically given distributions of $x$ and $s$.

5.3 Zero Norm-Regularized Estimation

Lasso can be regarded as a convex relaxation of the zero norm-regularized estimator

$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}}\; \frac{1}{2\gamma}\left\|\mathbf{y} - \mathbf{A}\mathbf{S}^{1/2}\mathbf{x}\right\|^2 + \|\mathbf{x}\|_0,$$

where $\|\mathbf{x}\|_0$ is the number of nonzero components of $\mathbf{x}$. For certain strictly sparse priors, zero norm-regularized estimation may provide better performance than lasso. While computing the zero norm-regularized estimate is generally very difficult, we can use the replica analysis to provide a simple prediction of its performance. This analysis can provide a bound on the performance achievable by practical algorithms.

To apply the RS PMAP decoupling property to the zero norm-regularized estimator (Equation 37), we observe that the zero norm-regularized estimator is identical to the PMAP estimator (Equation 16) with the cost function

$$f(x) = \mathbf{1}_{\{x \ne 0\}} = \begin{cases} 1, & x \ne 0, \\ 0, & x = 0. \end{cases}$$

Technically, this cost function does not satisfy the conditions of the RS PMAP decoupling property. For one thing, without bounding the range of $x$, the bound (Equation 17) is not satisfied. Also, the minimum of (Equation 23) does not agree with the essential infimum. To avoid these problems, we can consider an approximation of (Equation 38),

$$f_{\delta, M}(x) = \mathbf{1}_{\{|x| > \delta\}},$$

which is defined on the set $\mathcal{X} = [-M, M]$. We can then take the limits $\delta \to 0$ and $M \to \infty$. For space considerations and to simplify the presentation, we will just apply the decoupling property with $f$ in (Equation 38) and omit the details of taking the appropriate limits.

With $f$ given by (Equation 38), the scalar MAP estimator in (Equation 23) is given by

$$\hat{x}^{\rm scalar}_{\rm map}(z;\, \lambda) = \hat{x}^{\rm hard}_{t}(z), \qquad t = \sqrt{2\lambda},$$

where $\hat{x}^{\rm hard}_{t}$ is the hard thresholding operator,

$$\hat{x}^{\rm hard}_{t}(z) = \begin{cases} z, & |z| > t, \\ 0, & |z| \le t. \end{cases}$$
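The threshold value follows directly from the assumed scalar objective $F(x, z, \lambda) = |z - x|^2/(2\lambda) + \mathbf{1}_{\{x \ne 0\}}$: the only candidate minimizers are $x = z$ (cost $1$) and $x = 0$ (cost $z^2/(2\lambda)$), so

$$\hat{x} = z \iff \frac{z^2}{2\lambda} > 1 \iff |z| > \sqrt{2\lambda}.$$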
Now, similar to the case of lasso estimation, the RS PMAP decoupling property states that there exist effective noise levels $\sigma_{\rm eff}^2$ and $\gamma_p$ such that for any component index $j$, the random vector $(x_j, s_j, \hat{x}_j)$ converges in distribution to the vector $(x, s, \hat{x})$ where $x \sim p_0(x)$, $s \sim p_S(s)$, and $\hat{x}$ is given by

$$\hat{x} = \hat{x}^{\rm hard}_{t}(z), \qquad z = x + v,$$

where $v \sim \mathcal{N}(0, \sigma_{\rm eff}^2/s)$ is independent of $x$, and the threshold is

$$t = \sqrt{2\gamma_p / s}.$$

Thus, the zero norm-regularized estimation of a vector is equivalent to $n$ scalar components corrupted by some effective noise level $\sigma_{\rm eff}^2/s$ and hard-thresholded based on an effective noise level $\gamma_p/s$.

The fixed-point equations for the effective noise levels $\sigma_{\rm eff}^2$ and $\gamma_p$ can be computed similarly to the case of lasso. Specifically, one can verify that (Equation 35) and (Equation 36) are both satisfied for the hard thresholding operator as well. Substituting (Equation 36) and (Equation 39) into ( ?), we obtain the fixed-point equations

$$\sigma_{\rm eff}^2 = \sigma_0^2 + \beta\, \mathbb{E}\left[s\, |x - \hat{x}|^2\right], \qquad \gamma_p = \gamma + \beta\, \gamma_p\, \Pr\left(|z| > t\right),$$

where the expectations are taken with respect to $x \sim p_0(x)$, $s \sim p_S(s)$, $z$ in (Equation 41), and $t$ given by (Equation 42). These fixed-point equations can be solved numerically.

5.4 Optimal Regularization

The lasso estimator (Equation 30) and zero norm-regularized estimator (Equation 37) require the setting of a regularization parameter $\gamma$. Qualitatively, the parameter provides a mechanism to trade off the sparsity level of the estimate with the fitting error. One of the benefits of the replica analysis is that it provides a simple mechanism for optimizing the parameter level given the problem statistics.

Consider first the lasso estimator (Equation 30) with some $\gamma$ and distributions $p_0(x)$ and $p_S(s)$. Observe that there exists a solution to ( ?) with $\gamma_p < \infty$ if and only if

$$\beta\, \Pr\left(|z| > \gamma_p/s\right) < 1.$$

This leads to a natural optimization: we consider an optimization over two variables $\gamma_p$ and $\sigma_{\rm eff}^2$, where we minimize the resulting MSE subject to ( ?) and (Equation 43).

One simple procedure for performing this minimization is as follows: Start with $t = 0$ and some initial value of $\sigma_{\rm eff}^2(0)$. For any iteration $t$, we update $\sigma_{\rm eff}^2(t)$ with the minimization

$$\sigma_{\rm eff}^2(t+1) = \sigma_0^2 + \beta\, \min_{\gamma_p}\, \mathbb{E}\left[s\, |x - \hat{x}|^2\right],$$

where, on the right-hand side, the expectation is taken over $x \sim p_0(x)$, $s \sim p_S(s)$, $z$ in (Equation 33) with noise level $\sigma_{\rm eff}^2(t)$, and $\hat{x} = \hat{x}^{\rm soft}_{\gamma_p/s}(z)$. The minimization in (Equation 44) is over $\gamma_p$ subject to (Equation 43). One can show that with a sufficiently high initial condition, the sequence $\sigma_{\rm eff}^2(t)$ monotonically decreases to a local minimum of the objective function. Given the final value for $\gamma_p$, one can then recover $\gamma$ from ( ?). A similar procedure can be used for the zero norm-regularized estimator.
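A cruder but serviceable alternative is a grid search: for each candidate regularization level, solve the fixed-point equations and keep the level with the smallest predicted MSE. A Python sketch (function names ours):

```python
import numpy as np

def optimize_gamma(predicted_mse, gamma_grid=np.logspace(-3, 1, 60)):
    """Select the regularization level minimizing the replica-predicted
    MSE.  `predicted_mse(gamma)` should solve the fixed-point equations
    for the given gamma and return the resulting MSE (or infinity when
    no finite fixed point exists)."""
    mses = [predicted_mse(g) for g in gamma_grid]
    i = int(np.argmin(mses))
    return gamma_grid[i], mses[i]
```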

6 Numerical Simulations

6.1 Bernoulli–Gaussian Mixture Distribution

As discussed above, the replica method is based on certain unproven assumptions and even then the decoupling results under replica symmetry are only asymptotic for the large dimension limit. To validate the predictive power of the RS PMAP decoupling property for finite dimensions, we first performed numerical simulations where the components of $\mathbf{x}$ are a zero-mean Bernoulli–Gaussian process, or equivalently a two-component, zero-mean Gaussian mixture where one component has zero variance. Specifically,

$$x_j \sim \begin{cases} \mathcal{N}(0, 1), & \text{with probability } \rho, \\ 0, & \text{with probability } 1 - \rho, \end{cases}$$

where $\rho$ represents a sparsity ratio. In the experiments, $\rho = 0.1$. This is one of many possible sparse priors.

We took the vector $\mathbf{x}$ to have i.i.d. components with this prior, and we varied the number of measurements $m$ for 10 different values of $\beta$ from 0.5 to 3. For the measurements (Equation 3), we took a measurement matrix $\mathbf{A}$ with i.i.d. Gaussian components and a constant scale factor matrix $\mathbf{S} = \mathbf{I}$. The noise level was set so that $\mathrm{SNR}_0$ = 10 dB, where $\mathrm{SNR}_0$ is the signal-to-noise ratio with perfect side information defined in (Equation 14).

We simulated various estimators and compared their performances against the asymptotic values predicted by the replica analysis. For each value of $\beta$, we performed 1000 Monte Carlo trials of each estimator. For each trial, we measured the normalized squared error (SE) in dB,

$$\mathrm{SE} = 10 \log_{10} \frac{\|\hat{\mathbf{x}} - \mathbf{x}\|^2}{\|\mathbf{x}\|^2},$$

where $\hat{\mathbf{x}}$ is the estimate of $\mathbf{x}$. The results are shown in Fig. ?, with each set of 1000 trials represented by the median normalized SE in dB.
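For reference, one trial of this experiment for the linear MMSE estimator can be reproduced with a few lines of Python; the dimensions and scalings here are illustrative assumptions rather than the exact settings of the original experiment.

```python
import numpy as np

rng = np.random.default_rng(1)

def trial_lmmse_se_db(n=100, beta=2.0, rho=0.1, snr_db=10.0):
    """One Monte Carlo trial: Bernoulli-Gaussian signal (normalized to
    unit variance), i.i.d. Gaussian measurement matrix, linear MMSE
    estimate, normalized SE in dB."""
    m = int(round(n / beta))
    x = (rng.random(n) < rho) * rng.standard_normal(n) / np.sqrt(rho)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    sigma0_sq = 10 ** (-snr_db / 10)              # unit signal variance
    y = A @ x + np.sqrt(sigma0_sq) * rng.standard_normal(m)
    x_hat = A.T @ np.linalg.solve(A @ A.T + sigma0_sq * np.eye(m), y)
    return 10 * np.log10(np.sum((x_hat - x) ** 2) / np.sum(x ** 2))

print(np.median([trial_lmmse_se_db() for _ in range(1000)]))
```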

The top curve shows the performance of the linear MMSE estimator (Equation 25). As discussed in Section 5.1, the RS PMAP decoupling property applied to the case of a constant scale matrix reduces to Verdú and Shamai’s result in [60]. As can be seen in Fig. ?, the result predicts the simulated performance of the linear estimator extremely well.

The next curve shows the lasso estimator (Equation 30) with the factor $\gamma$ selected to minimize the MSE as described in Section 5.4. To compute the predicted value of the MSE from the RS PMAP decoupling property, we numerically solve the fixed-point equations ( ?) to obtain the effective noise levels $\sigma_{\rm eff}^2$ and $\gamma_p$. We then use the scalar MAP model with the estimator (Equation 31) to predict the MSE. We see from Fig. ? that the predicted MSE matches the median SE within 0.3 dB over a range of $\beta$ values. At the time of initial dissemination of this work [61], precise prediction of lasso's performance given a specific noise variance and prior was not achievable with any other method. Now, as discussed in Section 1.3, such asymptotic performance predictions can also be proven rigorously through connections with approximate belief propagation.

Fig. ? also shows the theoretical minimum MSE (as computed with the RS PMMSE decoupling property) and the theoretical MSE from the zero norm-regularized estimator as computed in Section 5.3. For these two cases, the estimators cannot be simulated since they involve NP-hard computations. But we have depicted the curves to show that the replica method can be used to calculate the gap between practical and impractical algorithms. Interestingly, we see that there is about a 2.0 to 2.5 dB gap between lasso and zero norm-regularized estimation, and another 1 to 2 dB gap between zero norm-regularized estimation and optimal MMSE.

It is, of course, not surprising that zero norm-regularized estimation performs better than lasso for the strictly sparse prior considered in this simulation, and that optimal MMSE performs better yet. However, what is valuable is that replica analysis can quantify the precise performance differences.

In Fig. ?, we plotted the median SE since there is actually considerable variation in the SE over the random realizations of the problem parameters. To illustrate the degree of variability, Fig. ? shows the CDF of the SE values over the 1000 Monte Carlo trials. Each trial has different noise and measurement matrix realizations, and both contribute to SE variations. We see that the variation of the SE is especially large at the smaller of the two simulated dimensions. While the median value agrees well with the theoretical replica limit, any particular instance of the problem can vary considerably from that limit. This is a significant drawback of the replica method: at lower dimensions, the replica method may provide accurate predictions of the median behavior, but it does not bound the variations from the median.

As one might expect, at the higher dimension, the level of variability is reduced and the observed SE begins to concentrate around the replica limit. In his original paper [12], Tanaka assumes that concentration of the SE will occur; he calls this the self-averaging assumption. Fig. ? provides some empirical evidence that self-averaging does indeed occur. However, even at the higher dimension, the variation is not insignificant. As a result, caution should be exercised in using the replica predictions on particular low-dimensional instances.

6.2 Discrete Distribution with Dynamic Range

The RS PMAP decoupling property can also be used to study the effects of dynamic range in power levels. To validate the replica analysis with power variations, we ran the following experiment: the vector $\mathbf{x}$ was generated with i.i.d. components

$$x_j = \sqrt{p_j}\, u_j,$$

where $p_j$ is a random power level and $u_j$ is a discrete three-valued random variable with probability mass function

$$\Pr(u_j = 1) = \Pr(u_j = -1) = \frac{\rho}{2}, \qquad \Pr(u_j = 0) = 1 - \rho.$$

As before, the parameter $\rho$ represents the sparsity ratio and we chose a value of $\rho = 0.1$. The measurements were generated by

$$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{w},$$

where $\mathbf{A}$ is an i.i.d. Gaussian measurement matrix and $\mathbf{w}$ is Gaussian noise. As in the previous section, the post-despreading SNR with side-information was normalized to 10 dB.

The factor $p_j$ in (Equation 45) accounts for power variations in $x_j$. We considered two random distributions for $p_j$: (a) $p_j = 1$, so that the power level is constant; and (b) $p_j$ is uniform (in dB scale) over a 10 dB range with unit average power.
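Case (b) can be generated as follows; this is a minimal Python sketch, with the rescaling to unit mean being our reading of "unit average power".

```python
import numpy as np

rng = np.random.default_rng(2)

def power_uniform_db(n, range_db=10.0):
    """Power levels uniform in dB over a `range_db` window, rescaled so
    that the average power is exactly one."""
    p = 10.0 ** (rng.uniform(0.0, range_db, n) / 10.0)
    return p / p.mean()
```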

In case (b), when there is variation in the power levels, we can analyze two different scenarios for the lasso estimator:

  • Power variations unknown:

    If the power level $p_j$ in (Equation 45) is unknown to the estimator, then we can apply the standard lasso estimator:

    $$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}}\; \frac{1}{2\gamma}\left\|\mathbf{y} - \mathbf{A}\mathbf{x}\right\|^2 + \|\mathbf{x}\|_1,$$

    which does not need knowledge of the power levels $p_j$. To analyze the behavior of this estimator with the replica method, we simply incorporate variations of both $p_j$ and $u_j$ into the prior of $x_j$ and assume a constant scale factor $s_j = 1$ in the replica equations.

  • Power variations known:

    If the power levels are known, the estimator can compute

    $$\hat{\mathbf{u}} = \arg\min_{\mathbf{u}}\; \frac{1}{2\gamma}\left\|\mathbf{y} - \mathbf{A}\mathbf{P}^{1/2}\mathbf{u}\right\|^2 + \|\mathbf{u}\|_1, \qquad \mathbf{P} = \mathrm{diag}(p_1, \ldots, p_n),$$

    and then take $\hat{x}_j = \sqrt{p_j}\, \hat{u}_j$. This can be analyzed with the replica method by incorporating the distribution of $p_j$ into the scale factors $s_j$.

Fig. ? shows the performance of the lasso estimator for the different power range scenarios. As before, for each $\beta$, the figure plots the median SE over 1000 Monte Carlo simulation trials. Fig. ? also shows the theoretical asymptotic performance as predicted with the RS PMAP decoupling property. Simulated values are based on a fixed vector dimension $n$ and optimal selection of $\gamma$ as described in Section 5.4.

We see that in all three cases (constant power and power variations unknown and known to the estimator), the replica prediction is in excellent agreement with the simulated performance. With one exception, the replica method matches the simulated performance within 0.2 dB. The one exception is one value of $\beta$ with constant power, where the replica method underpredicts the median SE by about 1 dB. A simulation at a higher dimension (not shown here) reduced this discrepancy to 0.2 dB, suggesting that the replica method is still asymptotically correct.

We can also observe two interesting phenomena in Fig. ?. First, the lasso method's performance with constant power is almost identical to the performance with unknown power variations for smaller values of $\beta$. However, at higher values of $\beta$, the power variations actually improve the performance of the lasso method, even though the average power is the same in both cases. Wainwright's analysis [44] demonstrated the significance of the minimum component power in dictating lasso's performance. The above simulation and the corresponding replica predictions suggest that dynamic range may also play a role in the performance of lasso. That increased dynamic range can improve the performance of sparse estimation has been observed for other estimators [62].

A second phenomenon we see in Fig. ? is that knowing the power variations and incorporating them into the measurement matrix can actually degrade the performance of lasso. Indeed, knowing the power variations appears to result in a 1 to 2 dB loss in MSE performance.

Of course, one cannot conclude from this one simulation that these effects of dynamic range hold more generally. The study of the effect of dynamic range is interesting and beyond the scope of this work. The point is that the replica method provides a simple analytic method for quantifying the effect of dynamic range that appears to match actual performance well.

6.3 Support Recovery with Thresholding

In estimating vectors with strictly sparse priors, one important problem is to detect the locations of the nonzero components in the vector $\mathbf{x}$. This problem, sometimes called support recovery, arises for example in subset selection in linear regression [64], where finding the support of the vector $\mathbf{x}$ corresponds to determining a subset of features with strong linear influence on some observed data $\mathbf{y}$. Several works have attempted to find conditions under which the support of a sparse vector can be fully detected [44] or partially detected [66]. Unfortunately, with the exception of [44], the only available results are bounds that are not tight.

One of the uses of the RS PMAP decoupling property is to exactly predict the fraction of support that can be detected correctly. To see how to predict the support recovery performance, observe that the decoupling property provides the asymptotic joint distribution for the vector $(x_j, s_j, \hat{x}_j)$, where $x_j$ is the component of the unknown vector, $s_j$ is the corresponding scale factor and $\hat{x}_j$ is the component estimate. Now, in support recovery, we want to estimate $\theta_j$, the indicator function that $x_j$ is nonzero,

$$\theta_j = \mathbf{1}_{\{x_j \ne 0\}}.$$

One natural estimate for $\theta_j$ is to compare the magnitude of the component estimate to some scale-dependent threshold $t(s_j)$,

$$\hat{\theta}_j = \mathbf{1}_{\{|\hat{x}_j| > t(s_j)\}}.$$

This idea of using thresholding for sparsity detection has been proposed in [55] and [69]. Using the joint distribution $(x_j, s_j, \hat{x}_j)$, one can then compute the probability of sparsity misdetection

$$P_{\rm err} = \Pr\left(\hat{\theta}_j \ne \theta_j\right).$$

The probability of error can be minimized over the threshold levels $t(s)$.

To verify this calculation, we generated random vectors $\mathbf{x}$ with i.i.d. components given by (Equation 45) and (Equation 46). We used a constant power ($p_j = 1$) and a sparsity fraction of $\rho = 0.1$. As before, the observations were generated with an i.i.d. Gaussian matrix $\mathbf{A}$ with $\mathrm{SNR}_0$ = 10 dB.

Fig. ? compares the theoretical probability of sparsity misdetection predicted by the replica method against the actual probability of misdetection based on the average of 1000 Monte Carlo trials. We tested two algorithms: linear MMSE estimation and lasso estimation. For lasso, the regularization parameter $\gamma$ was selected for minimum MSE as described in Section 5.4. The results show a good match.

7 Conclusions and Future Work

We have applied the replica method from statistical physics for computing the asymptotic performance of postulated MAP estimators of non-Gaussian vectors with large random linear measurements, under a replica symmetric assumption. The method can be readily applied to problems in compressed sensing. While the method is not theoretically rigorous, simulations show an excellent ability to predict the performance for a range of algorithms, performance metrics, and input distributions. Indeed, we believe that the replica method provides the only method to date for asymptotically-exact prediction of performance of compressed sensing algorithms that can apply in a large range of circumstances.

Moreover, we believe that the availability of a simple scalar model that exactly characterizes certain sparse estimators opens up numerous avenues for analysis. For one thing, it would be useful to see if the replica analysis of lasso can be used to recover the scaling laws of Wainwright [44] and Donoho and Tanner [45] for support recovery and to extend the latter to the noisy setting. Also, the best known bounds for MSE performance in sparse estimation are given by Haupt and Nowak [70] and Candès and Tao [71]. Since the replica analysis is asymptotically exact (subject to various assumptions), we may be able to obtain much tighter analytic expressions. In a similar vein, several researchers have attempted to find information-theoretic lower bounds with optimal estimation [72]. Using the replica analysis of optimal estimators, one may be able to improve these scaling laws as well.

Finally, there is a well-understood connection between statistical mechanics and belief propagation-based decoding of error correcting codes [6]. These connections may suggest improved iterative algorithms for sparse estimation as well.

8 Review of the Replica Method

We provide a brief summary of the replica method, with a focus on some of the details of the replica symmetric analysis of postulated MMSE estimation in [12]. This review will elucidate some of the key assumptions, notably the assumption of replica symmetry. General descriptions of the replica method can be found in texts such as [8].

The replica method is based on evaluating variants of the so-called asymptotic free energy

$$\mathcal{F} = -\lim_{n \to \infty} \frac{1}{n}\, \mathbb{E}\left[\log Z_n\right],$$

where $Z_n$ is the postulated partition function

$$Z_n = \int_{\mathcal{X}^n} \exp\left(-\frac{1}{2\sigma_{\rm post}^2}\left\|\mathbf{y} - \mathbf{A}\mathbf{S}^{1/2}\mathbf{x}\right\|^2\right) \prod_{j=1}^n p_{\rm post}(x_j)\, d\mathbf{x},$$

and the expectation in $\mathcal{F}$ is with respect to the true distribution on $\mathbf{y}$. For the replica PMMSE and PMAP analyses in [12], various joint moments of the variables $x_j$ and $\hat{x}_j$ are computed from certain variants of the free energy, and the convergence of the joint distribution of $(x_j, s_j, \hat{x}_j)$ is then analyzed based on these moments.

To evaluate the asymptotic free energy, the replica method uses the identity that, for any positive random variable $Z$,

$$\mathbb{E}\left[\log Z\right] = \lim_{\nu \to 0} \frac{\partial}{\partial \nu} \log \mathbb{E}\left[Z^{\nu}\right].$$

Therefore, the asymptotic free energy can be rewritten as