Rigorous Dynamics and Consistent Estimation in Arbitrarily Conditioned Linear Systems


Alyson K. Fletcher, Mojtaba Sahraee-Ardakan, Philip Schniter, and Sundeep Rangan. A. K. Fletcher and M. Sahraee-Ardakan are with the University of California, Los Angeles. P. Schniter is with The Ohio State University. S. Rangan is with New York University.
Abstract

The problem of estimating a random vector x from noisy linear measurements y = Ax + w, with unknown parameters on the distributions of x and w that must also be learned, arises in a wide range of statistical learning and linear inverse problems. We show that a computationally simple iterative message-passing algorithm can provably obtain asymptotically consistent estimates in a certain high-dimensional large-system limit (LSL) under very general parameterizations. Previous message-passing techniques have required i.i.d. sub-Gaussian matrices A and often fail when the matrix is ill-conditioned. The proposed algorithm, called adaptive vector approximate message passing (Adaptive VAMP) with auto-tuning, applies to all right-rotationally invariant A. Importantly, this class includes matrices with arbitrarily bad conditioning. We show that the parameter estimates and the mean squared error (MSE) of x in each iteration converge to deterministic limits that can be precisely predicted by a simple set of state evolution (SE) equations. In addition, a simple testable condition is provided under which the MSE matches the Bayes-optimal value predicted by the replica method. The paper thus provides a computationally simple method with provable guarantees of optimality and consistency over a large class of linear inverse problems.

I Introduction

Consider the problem of estimating a random vector x ∈ R^N from linear measurements y of the form

y = A x + w,   w ~ N(0, (1/θ2) I_N),   (1)

where A ∈ R^{N×N} is a known matrix, p(x|θ1) is a density on x with parameters θ1, w is additive white Gaussian noise (AWGN) independent of x, and θ2 > 0 is the noise precision (inverse variance). The goal is to estimate x, while simultaneously learning the unknown parameters θ := (θ1, θ2), from the data y and A. This problem arises in Bayesian forms of linear inverse problems in signal processing, as well as in linear regression in statistics.
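As a concrete illustration of the measurement model (1), the following Python sketch draws a synthetic problem instance; the Bernoulli-Gaussian prior, the i.i.d. Gaussian choice of A, and the dimensions are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

def generate_problem(N=1000, sparsity=0.1, noise_precision=100.0, seed=0):
    """Draw (y, A, x) from y = A x + w with an illustrative Bernoulli-Gaussian
    prior on x and AWGN of precision theta2 = noise_precision."""
    rng = np.random.default_rng(seed)
    # Illustrative prior parameters theta1 = (sparsity, mean=0, variance=1).
    x = rng.binomial(1, sparsity, N) * rng.normal(0.0, 1.0, N)
    A = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # i.i.d. Gaussian A for simplicity
    w = rng.normal(0.0, 1.0 / np.sqrt(noise_precision), N)
    y = A @ x + w
    return y, A, x

y, A, x_true = generate_problem()
```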

Exact estimation of the parameters θ via maximum likelihood or other methods is generally intractable. One promising class of approximate methods combines approximate message passing (AMP) [1] with expectation-maximization (EM). AMP and its generalizations (e.g., [2]) constitute a powerful, relatively new class of algorithms based on expectation propagation [3] (EP)-type techniques. The resulting AMP algorithms are computationally fast and have been successfully applied to a wide range of problems, e.g., [4, 5, 6, 7, 8, 9, 10, 11]. Most importantly, for large, zero-mean, sub-Gaussian i.i.d. random matrices A, the performance of these AMP methods can be exactly predicted by a scalar state evolution (SE) [12, 13] that provides testable conditions for optimality, even for non-convex priors. When the parameters θ are unknown, AMP can be easily combined with EM for joint learning of the parameters θ and the vector x [14, 15, 16]. Under more general conditions on A, however, not only do the theoretical results not hold, but standard AMP techniques often diverge and require a variety of modifications for stability [17, 18, 19, 20]. For example, when A has nonzero mean, its largest singular value grows with the problem size, making A arbitrarily poorly conditioned.

A recent work [21] combined EM with the so-called Vector AMP (VAMP) method of [22]. Similar to AMP, VAMP is based on EP-like [3] approximations of belief propagation [22] and can also be considered a special case of expectation consistent (EC) approximate inference [23, 24, 25]. VAMP’s key attraction is that it applies to a larger class of matrices A than the original AMP method. It has provable SE analyses and convergence guarantees that apply to all right-rotationally invariant matrices [22, 26] – a significantly larger class of matrices than i.i.d. sub-Gaussian. Under further mild conditions, the mean-squared error (MSE) of VAMP matches the replica prediction for optimality [27, 28, 29]. When the distributions on x and w are unknown, the work [21] proposed to combine EM and VAMP using the approximate inference framework of [30]. While [21] provided numerical simulations suggesting excellent performance for EM-VAMP on several synthetic problems, there were no provable convergence guarantees.

The contributions of this work are as follows:

  • Rigorous state evolution analysis: We provide a rigorous analysis of a generalization of EM-VAMP that we call Adaptive VAMP. Similar to the analysis of VAMP, we consider a certain large-system limit (LSL) where the matrix A is random and right-rotationally invariant. Importantly, this class of matrices includes much more than the i.i.d. Gaussian matrices used in the original LSL analysis of Bayati and Montanari [12]. It is shown (Theorem 1) that, in this LSL, the parameter estimates at each iteration converge to deterministic limits that can be computed from a set of SE equations that extend those of VAMP. The analysis also exactly characterizes the asymptotic joint distribution of the estimates and the true vector x⁰. The SE equations depend only on the initial parameter estimate, the adaptation function (to be discussed), and statistics on the matrix A, the vector x⁰, and the noise w.

  • Asymptotic consistency: It is also shown (Theorem 2) that, under an additional identifiability condition and a simple auto-tuning procedure, Adaptive VAMP can yield provably consistent parameter estimates in the LSL. This approach is inspired by an ML-estimation approach from [16]. Remarkably, the result is true under very general problem formulations.

  • Bayes optimality: In the case when the parameter estimates converge to the true value, the behavior of adaptive VAMP matches that of VAMP. In this case, it is shown in [22] that, when the SE equations have a unique fixed point, the MSE of VAMP matches the MSE of the Bayes optimal estimator, as predicted by the replica method [27, 28, 29].

In this way, we have developed a computationally efficient inference scheme for a large class of linear inverse problems. In a certain high-dimensional limit, our scheme guarantees that (i) the performance of the algorithm can be exactly characterized, (ii) the parameter estimates are asymptotically consistent, and (iii) the algorithm has testable conditions for when the signal estimates match the replica prediction of Bayes optimality.

II VAMP with Adaptation

We assume that the prior density on x can be written as

p(x|θ1) = (1/Z1(θ1)) exp( −Σ_{n=1}^N f(x_n, θ1) ),   (2)

where f is a separable penalty function, θ1 is a parameter vector and Z1(θ1) is a normalization constant. With some abuse of notation, we have used f(·, θ1) both for the function on the vector x and for its components x_n. Since f is separable, x has i.i.d. components conditioned on θ1. The likelihood function under the AWGN model (1) can be written as

p(y|x, θ2) = (θ2/2π)^{N/2} exp( −(θ2/2) ‖y − Ax‖² ).   (3)

The joint density of (x, y) given the parameters θ = (θ1, θ2) is then

p(x, y | θ) = p(x|θ1) p(y|x, θ2).   (4)

The problem is to estimate the parameters θ along with the vector x.
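For concreteness, the factorization (4) can be evaluated (up to the normalization constants) for any separable penalty f; a minimal sketch with an illustrative Gaussian penalty is given below.

```python
import numpy as np

def neg_log_joint(x, y, A, f, theta1, theta2):
    """-log p(x, y | theta) up to additive constants (log Z1(theta1) and the
    Gaussian normalizer), using the factorization (4):
    p(x, y | theta) = p(x | theta1) * p(y | x, theta2)."""
    prior_term = np.sum(f(x, theta1))                     # separable penalty from (2)
    residual = y - A @ x
    likelihood_term = 0.5 * theta2 * residual @ residual  # AWGN likelihood (3)
    return prior_term + likelihood_term

# Illustrative quadratic (Gaussian) penalty f(x, theta1) = x^2 / (2 * theta1)
gaussian_penalty = lambda x, var: x**2 / (2.0 * var)
```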

0:  Matrix A, measurement vector y, denoiser function g1(·, ·, θ1), statistic function φ1(·), adaptation function T1(·) and number of iterations Nit.
1:   Select initial r10, γ10 ≥ 0, θ̂10, θ̂20.
2:  for k = 0, 1, …, Nit − 1 do
3:     // Input denoising
4:     x̂1k = g1(r1k, γ1k, θ̂1k),   η1k = γ1k / ⟨g1′(r1k, γ1k, θ̂1k)⟩
5:     γ2k = η1k − γ1k
6:     r2k = (η1k x̂1k − γ1k r1k) / γ2k
7:     
8:     // Input parameter update
9:      μ1k = ⟨φ1(r1k, γ1k, θ̂1k)⟩,   θ̂1,k+1 = T1(μ1k)
10:     
11:     // Output estimation
12:     x̂2k = (θ̂2k AᵀA + γ2k I)⁻¹ (θ̂2k Aᵀy + γ2k r2k),   η2k = N / tr[(θ̂2k AᵀA + γ2k I)⁻¹]
13:     γ1,k+1 = η2k − γ2k
14:     r1,k+1 = (η2k x̂2k − γ2k r2k) / γ1,k+1
15:     
16:     
17:     // Output parameter update
18:     1/θ̂2,k+1 = (1/N) [ ‖y − A x̂2k‖² + tr( A (θ̂2k AᵀA + γ2k I)⁻¹ Aᵀ ) ]
19:  end for
Algorithm 1 Adaptive VAMP

The steps of the proposed Adaptive VAMP algorithm that performs this estimation are shown in Algorithm 1. Adaptive VAMP is a generalization of the EM-VAMP method in [21]. At each iteration k, the algorithm produces, for i = 1, 2, estimates θ̂ik of the parameter θi, along with estimates x̂ik of the vector x. The algorithm is tuned by selecting three key functions: (i) a denoiser function g1(·); (ii) an adaptation statistic φ1(·); and (iii) a parameter selection function T1(·). The denoiser is used to produce the estimates x̂1k, while the adaptation statistic and parameter estimation functions produce the estimates θ̂1k.
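To make the flow of Algorithm 1 concrete, the following Python sketch implements an adaptive VAMP loop in the square-matrix case. It follows the VAMP updates of [22] together with the parameter updates described above; the exact per-line arithmetic, the absence of damping, and the dense matrix inverse are simplifying assumptions for illustration, not a verbatim transcription of Algorithm 1.

```python
import numpy as np

def adaptive_vamp(y, A, g1, phi1, T1, theta1, theta2, n_iter=20, gamma1=1.0):
    """Schematic Adaptive VAMP loop: denoising, input parameter update from a
    finite-dimensional statistic, LMMSE output estimation, and an EM-style
    update of the noise precision."""
    N = y.size
    r1 = np.zeros(N)
    AtA, Aty = A.T @ A, A.T @ y
    for k in range(n_iter):
        # Input denoising: estimate and average derivative (divergence).
        x1, dg1 = g1(r1, gamma1, theta1)        # g1 returns (x_hat, <g1'>)
        eta1 = gamma1 / dg1
        gamma2 = eta1 - gamma1
        r2 = (eta1 * x1 - gamma1 * r1) / gamma2
        # Input parameter update from the empirical statistic.
        mu1 = phi1(r1, gamma1, theta1).mean(axis=-1)
        theta1 = T1(mu1)
        # Output (LMMSE) estimation under the current noise precision.
        Q = theta2 * AtA + gamma2 * np.eye(N)
        Qinv = np.linalg.inv(Q)
        x2 = Qinv @ (theta2 * Aty + gamma2 * r2)
        eta2 = N / np.trace(Qinv)
        gamma1_new = eta2 - gamma2
        r1 = (eta2 * x2 - gamma2 * r2) / gamma1_new
        gamma1 = gamma1_new
        # EM-style output parameter (noise precision) update.
        resid = y - A @ x2
        theta2 = N / (resid @ resid + np.trace(A @ Qinv @ A.T))
        # Damping and stopping criteria omitted for clarity in this sketch.
    return x1, theta1, theta2
```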

Denoiser function

The denoiser function g1(·) is discussed in detail in [22] and is generally based on the prior p(x|θ1). In the original EM-VAMP algorithm [21], g1(·) is selected as the so-called minimum mean-squared error (MMSE) denoiser. Specifically, in each iteration, the variables r1k, γ1k and θ̂1k were used to construct belief estimates,

b1k(x) ∝ exp( −f(x, θ̂1k) − (γ1k/2) ‖x − r1k‖² ),   (5)

which represent estimates of the posterior density p(x|y, θ). To keep the notation symmetric with the output stage, we have written the belief in this form even though the first penalty function does not depend on the measurements y. The EM-VAMP method then selects x̂1k to be the mean of the belief estimate,

x̂1k = g1(r1k, γ1k, θ̂1k) := E[ x | b1k ].   (6)

For line 4 of Algorithm 1, we define g1′(r1k, γ1k, θ̂1k) := ∂g1(r1k, γ1k, θ̂1k)/∂r1k, and we use ⟨u⟩ to denote the empirical mean of a vector u = (u1, …, uN), i.e., ⟨u⟩ = (1/N) Σ_n u_n. Hence, η1k in line 4 is a scaled inverse divergence. It is shown in [22] that, for the MMSE denoiser (6), η1k is the inverse average posterior variance.
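As a concrete example of an MMSE denoiser of the form (6) and of the divergence used in line 4, the sketch below treats a zero-mean Bernoulli-Gaussian prior with parameters θ1 = (ρ, σ²); this particular prior and parameterization are illustrative assumptions.

```python
import numpy as np

def bg_mmse_denoiser(r, gamma1, theta1):
    """MMSE denoiser for x ~ rho * N(0, sigma2) + (1 - rho) * delta_0,
    observed as r = x + N(0, 1/gamma1). Returns the posterior mean and the
    average derivative <g1'> (the divergence)."""
    rho, sigma2 = theta1
    tau = 1.0 / gamma1                          # effective noise variance on r
    v_on = sigma2 + tau
    # Posterior probability that each component is "active".
    log_odds = (np.log(rho / (1 - rho))
                + 0.5 * np.log(tau / v_on)
                + 0.5 * r**2 * (1.0 / tau - 1.0 / v_on))
    p_on = 1.0 / (1.0 + np.exp(-log_odds))
    # Conditional posterior mean and variance given the active state.
    m_on = (sigma2 / v_on) * r
    v_post_on = sigma2 * tau / v_on
    x_hat = p_on * m_on
    # Per-component posterior variance; <g1'> = gamma1 * average posterior variance.
    var_post = p_on * (v_post_on + m_on**2) - x_hat**2
    divergence = gamma1 * var_post.mean()
    return x_hat, divergence
```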

Estimation of θ1 with finite statistics

For the EM-VAMP algorithm [21], the parameter update for θ1 is performed via the maximization

θ̂1,k+1 = arg max_{θ1} E[ ln p(x|θ1) | r1k, γ1k, θ̂1k ],   (7)

where the expectation is with respect to the belief estimate in (5). It is shown in [21] that using (7) is equivalent to approximating the M-step in the standard EM method. In the Adaptive VAMP method in Algorithm 1, the M-step maximization (7) is replaced by line 9. Note that line 9 again uses ⟨·⟩ to denote the empirical average,

μ1k = ⟨φ1(r1k, γ1k, θ̂1k)⟩ = (1/N) Σ_{n=1}^N φ1(r1k,n, γ1k, θ̂1k),   (8)

so μ1k is the empirical average of some d-dimensional statistic φ1(·) over the components of r1k. The parameter estimate update is then computed from some function of this statistic, θ̂1,k+1 = T1(μ1k).

We show in Appendix A that there are two important cases where the EM update (7) can be computed from a finite-dimensional statistic as in line 9: (i) the prior p(x|θ1) is given by an exponential family, with f(x, θ1) = −θ1ᵀφ(x) for some sufficient statistic φ(x); and (ii) there are a finite number of values for the parameter θ1. For other cases, we can approximate more general parameterizations via discretization of the parameter values θ1. The updates in line 9 can also incorporate other types of updates, as we will see below. But we stress that it is preferable to compute the estimate for θ1 directly from the maximization (7) – the use of a finite-dimensional statistic is for the sake of analysis.
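Continuing the Bernoulli-Gaussian illustration, a possible choice of adaptation statistic φ1 and selection function T1 for line 9 is sketched below: the statistic averages per-component posterior responsibilities and second moments, and T1 maps that average to EM-style updates of the sparsity level and active-component variance. This specific statistic is an illustrative construction, not the one prescribed in Appendix A.

```python
import numpy as np

def bg_statistic(r, gamma1, theta1):
    """phi1: per-component quantities for a Bernoulli-Gaussian EM step:
    (responsibility, responsibility-weighted second moment of the active part)."""
    rho, sigma2 = theta1
    tau = 1.0 / gamma1
    v_on = sigma2 + tau
    log_odds = (np.log(rho / (1 - rho)) + 0.5 * np.log(tau / v_on)
                + 0.5 * r**2 * (1.0 / tau - 1.0 / v_on))
    p_on = 1.0 / (1.0 + np.exp(-log_odds))
    m_on = (sigma2 / v_on) * r
    second_moment_on = sigma2 * tau / v_on + m_on**2
    return np.vstack([p_on, p_on * second_moment_on])     # shape (2, N)

def bg_adaptation(mu1):
    """T1: map the averaged statistic to the new parameter estimate theta1."""
    mean_p_on, mean_weighted_m2 = mu1
    rho_new = mean_p_on
    sigma2_new = mean_weighted_m2 / max(mean_p_on, 1e-12)
    return (rho_new, sigma2_new)

# Usage with the loop sketched earlier:
# mu1 = bg_statistic(r1, gamma1, theta1).mean(axis=-1); theta1 = bg_adaptation(mu1)
```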

Estimation of θ2 with finite statistics

It will be useful to also write the adaptation of θ̂2 in line 18 of Algorithm 1 in a form similar to line 9. First, take a singular value decomposition (SVD) of A of the form

A = U S Vᵀ,   S = Diag(s),   s = (s1, …, sN),   (9)

and define the transformed error and transformed noise,

qk = Vᵀ(r2k − x⁰),   ξ = Uᵀ w,   (10)

where x⁰ and w denote the true vector and noise realizations in (1). Then, it is shown in Appendix B that θ̂2,k+1 in line 18 can be written as

1/θ̂2,k+1 = ⟨ φ2(qk, ξ, s, γ2k, θ̂2k) ⟩,   (11)

where

φ2(q, ξ, s, γ2, θ2) := γ2² (ξ − s q)² / (θ2 s² + γ2)² + s² / (θ2 s² + γ2).   (12)

Of course, we cannot directly compute qk in (10) since we do not know the true vector x⁰. Nevertheless, this form will be useful for analysis.
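Assuming the explicit forms written above for line 18 and for (11)-(12) (a reconstruction, not a quotation of the original listing), the rewriting can be checked numerically: the direct update computed from y, A and x̂2k must equal the empirical average of φ2 evaluated on the transformed error and noise.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
U, _ = np.linalg.qr(rng.normal(size=(N, N)))
V, _ = np.linalg.qr(rng.normal(size=(N, N)))
s = rng.uniform(0.5, 2.0, N)
A = U @ np.diag(s) @ V.T

theta2_true, theta2_hat, gamma2 = 25.0, 20.0, 1.3
x0 = rng.normal(size=N)
w = rng.normal(scale=1.0 / np.sqrt(theta2_true), size=N)
y = A @ x0 + w
r2 = x0 + rng.normal(scale=0.7, size=N)

# Direct form of the update in line 18 (as reconstructed above).
Q = theta2_hat * A.T @ A + gamma2 * np.eye(N)
Qinv = np.linalg.inv(Q)
x2 = Qinv @ (theta2_hat * A.T @ y + gamma2 * r2)
direct = (np.sum((y - A @ x2) ** 2) + np.trace(A @ Qinv @ A.T)) / N

# Transformed form (11)-(12) using q = V^T (r2 - x0) and xi = U^T w.
q, xi = V.T @ (r2 - x0), U.T @ w
d = theta2_hat * s**2 + gamma2
phi2 = gamma2**2 * (xi - s * q) ** 2 / d**2 + s**2 / d
print(np.isclose(direct, phi2.mean()))   # True up to numerical precision
```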

III State Evolution in the Large System Limit

III-A Large System Limit

Similar to the analysis of VAMP in [22], we analyze Algorithm 1 in a certain large-system limit (LSL). The LSL framework was developed by Bayati and Montanari in [12] and we review some of the key definitions in Appendix C. As in the analysis of VAMP, the LSL considers a sequence of problems indexed by the vector dimension N. For each N, we assume that there is a “true” vector x⁰ ∈ R^N that is observed through measurements of the form

y = A x⁰ + w,   w ~ N(0, (1/θ2⁰) I_N),   (13)

where A ∈ R^{N×N} is a known transform and w is white Gaussian noise with “true” precision θ2⁰. The noise precision θ2⁰ does not change with N.

Identical to [22], the transform A is modeled as a large, right-orthogonally invariant random matrix. Specifically, we assume that it has an SVD of the form (9), where U and V are orthogonal matrices such that U is deterministic and V is Haar distributed (i.e., uniformly distributed on the set of orthogonal matrices). As described in [22], although we have assumed a square matrix A, we can consider general rectangular A by adding zero singular values.

Using the definitions in Appendix C, we assume that the components of the singular-value vector s in (9) converge empirically with second-order moments as

lim_{N→∞} {sn} = S   (PL(2)),   (14)

for some non-negative random variable S with E[S] > 0 and S ≤ Smax for some finite maximum value Smax. Additionally, we assume that the components of the true vector x⁰ and the initial denoiser input r10 converge empirically as

lim_{N→∞} {(x⁰n, r10,n)} = (X⁰, R10),   R10 = X⁰ + P0,   P0 ~ N(0, τ10),   (15)

where X⁰ is a random variable representing the true distribution of the components x⁰n; P0 is an initial error independent of X⁰ and τ10 is an initial error variance. The variable X⁰ may be distributed as X⁰ ~ p(·|θ1⁰) for some true parameter θ1⁰. However, in order to incorporate under-modeling, the existence of such a true parameter is not required. We also assume that the initial second-order term and parameter estimates converge almost surely as

lim_{N→∞} (γ10, θ̂10, θ̂20) = (γ̄10, θ̄10, θ̄20),   (16)

for some γ̄10 > 0 and (θ̄10, θ̄20).

III-B Error and Sensitivity Functions

We next need to introduce parametric forms of two key terms from [22]: error functions and sensitivity functions. The error functions describe the MSE of the denoiser and output estimators under AWGN measurements. Specifically, for the denoiser g1(·), we define the error function as

E1(γ1, τ1, θ1) := E[ ( g1(R1, γ1, θ1) − X⁰ )² ],   R1 = X⁰ + P,   P ~ N(0, τ1),   (17)

where X⁰ is distributed according to the true distribution of the components x⁰n (see above). The function E1(γ1, τ1, θ1) thus represents the MSE of the estimate x̂1 = g1(r1, γ1, θ1) from a measurement r1 corrupted by Gaussian noise of variance τ1, under the parameter estimate θ1. For the output estimator, we define the error function as

E2(γ2, τ2, θ2) := lim_{N→∞} (1/N) E ‖ g2(r2, γ2, θ2) − x⁰ ‖²,   r2 = x⁰ + N(0, τ2 I),   (18)

which is the average per-component error of the vector estimate x̂2 = g2(r2, γ2, θ2), the output estimator in line 12 of Algorithm 1, under Gaussian noise on r2 and the measurement model (13). The dependence on the true noise precision, θ2⁰, is suppressed.

The sensitivity functions describe the expected divergence of the estimator. For the denoiser, the sensitivity function is defined as

A1(γ1, τ1, θ1) := E[ g1′(R1, γ1, θ1) ],   R1 = X⁰ + P,   P ~ N(0, τ1),   (19)

which is the average derivative under a Gaussian noise input. For the output estimator, the sensitivity is defined as

A2(γ2, τ2, θ2) := lim_{N→∞} (1/N) E[ tr( ∂g2(r2, γ2, θ2)/∂r2 ) ],   (20)

where r2 is distributed as in (18). The paper [22] discusses the error and sensitivity functions in detail and shows how these functions can be easily evaluated.
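In practice, E1 and A1 in (17) and (19) can be approximated by Monte Carlo averaging over the scalar model R = X⁰ + N(0, τ1). The sketch below does this for any componentwise denoiser, using a finite-difference derivative; the soft-threshold example and the sampling choices are illustrative.

```python
import numpy as np

def error_and_sensitivity(g1, x0_samples, gamma1, tau1, theta1, eps=1e-5, seed=0):
    """Monte Carlo estimates of E1(gamma1, tau1, theta1) = E[(g1(R) - X0)^2]
    and A1(gamma1, tau1, theta1) = E[g1'(R)] with R = X0 + N(0, tau1)."""
    rng = np.random.default_rng(seed)
    r = x0_samples + rng.normal(scale=np.sqrt(tau1), size=x0_samples.shape)
    xhat = g1(r, gamma1, theta1)
    mse = np.mean((xhat - x0_samples) ** 2)                 # E1
    deriv = (g1(r + eps, gamma1, theta1) - xhat) / eps      # finite-difference g1'
    sensitivity = np.mean(deriv)                            # A1
    return mse, sensitivity

# Example with a soft-threshold denoiser (illustrative choice of g1):
soft = lambda r, gamma1, lam: np.sign(r) * np.maximum(np.abs(r) - lam / gamma1, 0.0)
x0 = (np.random.default_rng(1).random(100000) < 0.1) * np.random.default_rng(2).normal(size=100000)
print(error_and_sensitivity(soft, x0, gamma1=2.0, tau1=0.5, theta1=1.0))
```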

III-C State Evolution Equations

We can now describe our main result, which is the SE equations for Adaptive VAMP. The equations are an extension of those in the VAMP paper [22], with modifications for the parameter estimation. For a given iteration k ≥ 0, consider the set of components

{ (x⁰n, r1k,n, x̂1k,n),  n = 1, …, N }.

This set represents the components of the true vector x⁰, its corresponding estimate x̂1k and the denoiser input r1k. We will show that, under certain assumptions, these components converge empirically as

lim_{N→∞} { (x⁰n, r1k,n, x̂1k,n) } = (X⁰, R1k, X̂1k)   (PL(2)),   (21)

where the random variables (X⁰, R1k, X̂1k) are given by

R1k = X⁰ + Pk,   Pk ~ N(0, τ1k) independent of X⁰,   X̂1k = g1(R1k, γ̄1k, θ̄1k),   (22)

for constants γ̄1k, θ̄1k and τ1k that will be defined below. We will also see that θ̂1k → θ̄1k, so θ̄1k represents the asymptotic parameter estimate. The model (22) shows that each component r1k,n appears as the true component x⁰n plus Gaussian noise. The corresponding estimate x̂1k,n then appears as the denoiser output with r1k,n as the input and θ̄1k as the parameter estimate. Hence, the asymptotic behavior of any component x⁰n and its corresponding x̂1k,n is identical to a simple scalar system. We will refer to (21)-(22) as the denoiser’s scalar equivalent model.

We will also show that the transformed errors qk and noise ξ in (10) and the singular values s converge empirically to a set of independent random variables (Qk, Ξ, S) given by

lim_{N→∞} { (qk,n, ξn, sn) } = (Qk, Ξ, S)   (PL(2)),   Qk ~ N(0, τ2k),   Ξ ~ N(0, 1/θ2⁰),   (23)

where S has the distribution of the singular values of A, τ2k is a variance that will be defined below and θ2⁰ is the true noise precision in the measurement model (13). All the variables in (23) are independent. Thus, (23) is a scalar equivalent model for the output estimator.

The variance terms are defined recursively through the state evolution equations,

ᾱ1k = A1(γ̄1k, τ1k, θ̄1k),   η̄1k = γ̄1k / ᾱ1k,   γ̄2k = η̄1k − γ̄1k,   (24a)
μ̄1k = E[ φ1(R1k, γ̄1k, θ̄1k) ],   θ̄1,k+1 = T1(μ̄1k),   (24b)
τ2k = ( E1(γ̄1k, τ1k, θ̄1k) − ᾱ1k² τ1k ) / (1 − ᾱ1k)²,   (24c)
ᾱ2k = A2(γ̄2k, τ2k, θ̄2k),   η̄2k = γ̄2k / ᾱ2k,   γ̄1,k+1 = η̄2k − γ̄2k,   (24d)
1/θ̄2,k+1 = E[ φ2(Qk, Ξ, S, γ̄2k, θ̄2k) ],   (24e)
τ1,k+1 = ( E2(γ̄2k, τ2k, θ̄2k) − ᾱ2k² τ2k ) / (1 − ᾱ2k)²,   (24f)

which are initialized with the τ10 from (15) and the (γ̄10, θ̄10, θ̄20) from (16). The expectation in (24b) is with respect to the random variables in (21) and the expectation in (24e) is with respect to the random variables in (23).
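The sketch below shows how an SE recursion of this general shape can be iterated numerically, given implementations of the error and sensitivity functions and of the parameter-update expectations. It is a template following the VAMP state evolution of [22] plus the parameter trajectories in (24b) and (24e); the exact update ordering inside one iteration is an assumption of this sketch.

```python
def state_evolution(A1, E1, A2, E2, update_theta1, update_theta2,
                    tau1, gamma1, theta1, theta2, n_iter=20):
    """Template SE recursion: alternate denoiser-side and output-side
    half-iterations, tracking (gamma, tau) and the parameter limits theta."""
    history = []
    for k in range(n_iter):
        # Denoiser side.
        alpha1 = A1(gamma1, tau1, theta1)
        gamma2 = gamma1 * (1 - alpha1) / alpha1
        tau2 = (E1(gamma1, tau1, theta1) - alpha1**2 * tau1) / (1 - alpha1) ** 2
        theta1 = update_theta1(gamma1, tau1, theta1)   # E[phi1] followed by T1, as in (24b)
        # Output side.
        alpha2 = A2(gamma2, tau2, theta2)
        gamma1 = gamma2 * (1 - alpha2) / alpha2
        tau1 = (E2(gamma2, tau2, theta2) - alpha2**2 * tau2) / (1 - alpha2) ** 2
        theta2 = update_theta2(gamma2, tau2, theta2)   # expectation of phi2, as in (24e)
        history.append((tau1, tau2, theta1, theta2))
    return history
```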

Theorem 1.

Consider the outputs of Algorithm 1. Under the above assumptions and definitions, assume additionally that for all iterations k:

  1. The solution ᾱ1k from the SE equations (24) satisfies ᾱ1k ∈ (0, 1).

  2. The functions Ai(·), Ei(·) and T1(·) are continuous at the values (γ̄ik, τik, θ̄ik) generated by the SE equations.

  3. The denoiser function g1(r1, γ1, θ1) and its derivative g1′(r1, γ1, θ1) are uniformly Lipschitz in r1 at (γ1, θ1) = (γ̄1k, θ̄1k). (See Appendix C for a precise definition of uniform Lipschitz continuity.)

  4. The adaptation statistic φ1(r1, γ1, θ1) is uniformly pseudo-Lipschitz of order 2 in r1 at (γ1, θ1) = (γ̄1k, θ̄1k).

Then, for any fixed iteration k ≥ 0,

lim_{N→∞} (γ1k, γ2k, θ̂1k, θ̂2k) = (γ̄1k, γ̄2k, θ̄1k, θ̄2k)   (25)

almost surely. In addition, the empirical limit (21) holds almost surely for all k ≥ 0, and (23) holds almost surely for all k ≥ 0.

Theorem 1 shows that, in the LSL, the parameter estimates θ̂ik converge to deterministic limits θ̄ik that can be precisely predicted by the state-evolution equations. The SE equations incorporate the true distribution of the components of x⁰, the true noise precision θ2⁰, and the specific parameter estimation and denoiser functions used by the Adaptive VAMP method. In addition, similar to the SE analysis of VAMP in [22], the SE equations also predict the asymptotic joint distribution of x⁰ and its estimates x̂ik. This joint distribution can be used to measure various performance metrics such as MSE – see [22]. In this way, we have provided a rigorous and precise characterization of a class of Adaptive VAMP algorithms that includes EM-VAMP.

IV Consistent Parameter Estimation with Variance Auto-Tuning

By comparing the deterministic limits θ̄ik with the true parameters θi⁰, one can determine under which problem conditions the parameter estimates of Adaptive VAMP are asymptotically consistent. In this section, we show that, with a particular choice of parameter estimation functions, one can obtain provably asymptotically consistent parameter estimates under suitable identifiability conditions. We call the method variance auto-tuning; it generalizes the approach in [16].

Definition 1.

Let {p(x|θ1), θ1 ∈ Θ1} be a parameterized set of densities. Given a finite-dimensional statistic φ1(·), consider the mapping

(θ1, τ) ↦ E[ φ1(R) ],   R = X + P,   X ~ p(·|θ1),   P ~ N(0, τ) independent of X,   (26)

where the expectation is with respect to the model (46). We say the family is identifiable in Gaussian noise if there exists a finite-dimensional statistic φ1(·) such that (i) φ1(·) is pseudo-Lipschitz continuous of order 2; and (ii) the mapping (26) has a continuous inverse.

Theorem 2.

Under the assumptions of Theorem 1, suppose that X⁰ follows p(·|θ1⁰) for some true parameter θ1⁰. If the family p(·|θ1) is identifiable in Gaussian noise, there exists an adaptation rule such that, for any iteration k, the parameter estimate θ̂1k and the associated estimate of the effective noise variance on r1k are asymptotically consistent, in that θ̂1k → θ1⁰ and the noise-variance estimate converges to τ1k, almost surely.

The theorem is proved in Appendix F, which also provides details on how to perform the adaptation. The Appendix also provides a similar result for consistent estimation of the noise precision θ2. The result is remarkable as it shows that a simple variant of EM-VAMP can provide provably consistent parameter estimates under extremely general distributions.
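To illustrate the idea behind variance auto-tuning, the sketch below jointly fits the prior parameters and the effective noise variance τ from the components of r under the scalar model R = X + N(0, τ), here by direct maximum likelihood for an (assumed) Bernoulli-Gaussian family; the family, the reparameterization, and the use of a generic optimizer are illustrative choices rather than the construction in Appendix F.

```python
import numpy as np
from scipy.optimize import minimize

def fit_bg_in_gaussian_noise(r):
    """Jointly estimate (rho, sigma2, tau) from r_n = x_n + N(0, tau) with
    x_n ~ rho * N(0, sigma2) + (1 - rho) * delta_0, by maximum likelihood.
    Each r_n is then marginally a two-component Gaussian mixture."""
    def neg_log_lik(params):
        rho = 1.0 / (1.0 + np.exp(-params[0]))          # unconstrained reparameterization
        sigma2, tau = np.exp(params[1]), np.exp(params[2])
        on = rho * np.exp(-0.5 * r**2 / (sigma2 + tau)) / np.sqrt(2 * np.pi * (sigma2 + tau))
        off = (1 - rho) * np.exp(-0.5 * r**2 / tau) / np.sqrt(2 * np.pi * tau)
        return -np.sum(np.log(on + off + 1e-300))
    res = minimize(neg_log_lik, x0=np.array([-2.0, 0.0, -1.0]), method="Nelder-Mead")
    rho_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
    return rho_hat, np.exp(res.x[1]), np.exp(res.x[2])
```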

V Numerical Simulations

Sparse signal recovery

We consider a sparse linear inverse problem of estimating a vector x from measurements y in (1) without knowing the signal parameters θ1 or the noise precision θ2. The paper [21] presented several numerical experiments to assess the performance of EM-VAMP relative to other methods. To support the results in this paper, our goal is to demonstrate that state evolution can correctly predict the performance of EM-VAMP, and to validate the consistency of EM-VAMP with auto-tuning. Details are given in Appendix G. Briefly, to model the sparsity, x⁰ is drawn with an i.i.d. Bernoulli-Gaussian (i.e., spike-and-slab) prior with unknown sparsity level, mean and variance; the true values are given in Appendix G. Following [17, 18], we take A to be a random right-orthogonally invariant matrix whose dimensions and condition number are given in Appendix G (high-condition-number matrices are known to be a problem for conventional AMP methods). The left panel of Fig. 1 shows the normalized mean square error (NMSE) for various algorithms. Appendix G describes the algorithms in detail and also shows similar results for additional settings.

Fig. 1: Numerical simulations. Left panel: Sparse signal recovery: NMSE versus iteration for a random matrix A with a fixed (large) condition number. Right panel: NMSE for sparse image recovery (in AWGN at 40 dB SNR) as a function of the condition number of A.

We see several important features. First, for all variants of VAMP and EM-VAMP, the SE equations provide an excellent prediction of the per-iteration performance of the algorithm. Second, consistent with the simulations in [22], the oracle VAMP converges remarkably fast (roughly 10 iterations). Third, the performance of EM-VAMP with auto-tuning is virtually indistinguishable from oracle VAMP, suggesting that the parameter estimates are near perfect from the very first iteration. Fourth, the EM-VAMP method performs initially worse than oracle VAMP, but these errors are exactly predicted by the SE. Finally, all the VAMP and EM-VAMP algorithms exhibit much faster convergence than EM-BG-AMP. In fact, consistent with observations in [21], EM-BG-AMP begins to diverge at higher condition numbers. In contrast, the VAMP algorithms are stable.

Compressed sensing image recovery

While the theory is developed for idealized signal priors, we demonstrate that the proposed EM-VAMP algorithm can be effective on natural images. Specifically, we repeat the experiments in [31] for recovery of a sparse image. Again, see Appendix G for details, including a picture of the image and the various reconstructions. An image of a satellite with a small number of non-zero pixels is observed through a linear transform of the form A = S P H Diag(d) in AWGN at 40 dB SNR, where H is the fast Hadamard transform, P is a random sub-selection to M measurements, S is a scaling to adjust the condition number, and d is a random vector. As in the previous example, the image vector is modeled as sparse Bernoulli-Gaussian, and the EM-VAMP algorithm is used to estimate the sparsity ratio, signal variance, and noise variance. The measurement ratio M/N is given in Appendix G. We see from the right panel of Fig. 1 that EM-VAMP is able to reconstruct the images with improved performance over the standard basis pursuit denoising method SPGL1 [32] and the EM-BG-GAMP method from [15].
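A sketch of how a measurement operator of this general kind might be constructed is given below (random sign diagonal, Hadamard transform, random row sub-selection, and a geometric singular-value profile that sets the condition number); the specific scaling profile and dimensions are illustrative and not the exact construction used in the experiments.

```python
import numpy as np
from scipy.linalg import hadamard

def structured_operator(N, M, cond, seed=0):
    """A = S P H D: random sign diagonal D, orthogonal Hadamard transform H,
    random row sub-selection P to M rows, and a geometric row scaling S that
    sets the condition number to `cond`. N must be a power of two."""
    rng = np.random.default_rng(seed)
    D = np.diag(rng.choice([-1.0, 1.0], size=N))
    H = hadamard(N) / np.sqrt(N)
    rows = rng.choice(N, size=M, replace=False)
    P = np.eye(N)[rows]
    scales = np.geomspace(1.0, 1.0 / cond, M)   # singular-value profile
    S = np.diag(scales)
    return S @ P @ H @ D

A = structured_operator(N=1024, M=512, cond=100)
print(np.linalg.cond(A))   # roughly the requested condition number
```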

VI Conclusions

Due to its analytic tractability, computational simplicity, and potential for Bayes-optimal inference, VAMP is a promising technique for statistical linear inverse problems. Importantly, it does not suffer the convergence issues of standard AMP methods when A is ill-conditioned. However, a key challenge in using VAMP and related methods is the need to precisely specify the problem parameters. This work provides a rigorous foundation for analyzing VAMP in combination with various parameter adaptation techniques, including EM. The analysis reveals that VAMP, with appropriate tuning, can also provide consistent parameter estimates under very general settings, thus yielding a powerful approach for statistical linear inverse problems.

Appendix A Adaptation with Finite-Dimensional Statistics

We provide two simple examples where the EM parameter update (47) can be computed from finite-dimensional statistics.

Finite number of parameter values

Suppose that θ1 takes only a finite number of values, θ1 ∈ {θ1(1), …, θ1(L)}. Define the statistics μ1k = (μ1k(1), …, μ1k(L)) with components

μ1k(ℓ) = ⟨ E[ ln p(xn | θ1(ℓ)) | r1k, γ1k, θ̂1k ] ⟩,   ℓ = 1, …, L,

where the outer expectation is with respect to the belief estimate given in (5). Hence, the EM update in (47) is given by θ̂1,k+1 = θ1(ℓ*), where ℓ* = arg max_ℓ μ1k(ℓ).

Exponential family

Suppose that p(x|θ1) is given by an exponential family, where the penalty function in (2) is given by f(x, θ1) = −θ1ᵀφ(x) for some sufficient statistics φ(x). Define the statistic

μ1k = ⟨ E[ φ(xn) | r1k, γ1k, θ̂1k ] ⟩,

where the expectation is with respect to the belief estimate in (5). Then, the EM parameter update (47) can be computed by

θ̂1,k+1 = arg max_{θ1} { θ1ᵀ μ1k − ln z1(θ1) },

where z1(θ1) is the per-component normalization constant, Z1(θ1) = z1(θ1)^N.

Hence, we see that the EM parameter update can be computed from a finite set of statistics.

Appendix B Proof of (11)

Using the SVD (9) and the equations for x̂2k and η2k in line 12 of Algorithm 1, we obtain

Uᵀ( y − A x̂2k ) = γ2k ( θ̂2k S² + γ2k I )⁻¹ ( ξ − S qk ).

Thus, the parameter estimate θ̂2,k+1 in line 18 is given by

1/θ̂2,k+1 = (1/N) [ ‖y − A x̂2k‖² + tr( A (θ̂2k AᵀA + γ2k I)⁻¹ Aᵀ ) ]
         =(a) (1/N) [ ‖Uᵀ(y − A x̂2k)‖² + tr( A (θ̂2k AᵀA + γ2k I)⁻¹ Aᵀ ) ]
         =(b) (1/N) Σ_{n=1}^N [ γ2k² (ξn − sn qk,n)² / (θ̂2k sn² + γ2k)² + sn² / (θ̂2k sn² + γ2k) ],

where (a) follows since U is unitary, so that ‖y − A x̂2k‖ = ‖Uᵀ(y − A x̂2k)‖, and (b) follows from the SVD (9) and some simple manipulations. Therefore, if we define φ2 as in (12), we obtain (11).

Appendix C Convergence of Vector Sequences

We review some definitions from the Bayati and Montanari paper [12], since we will use the same analysis framework in this paper. Fix a dimension d ≥ 1, and suppose that, for each N, x(N) is a block vector of the form

x(N) = ( x1(N), …, xN(N) ),

where each component xn(N) ∈ R^d. Thus, the total dimension of x(N) is dN. In this case, we will say that x(N) is a block vector sequence that scales with N under blocks xn(N) ∈ R^d. When d = 1, so that the blocks are scalar, we will simply say that x(N) is a vector sequence that scales with N. Such vector sequences can be deterministic or random. In most cases, we will omit the notational dependence on N and simply write x.

Now, given p ≥ 1, a function f : R^d → R is called pseudo-Lipschitz of order p if there exists a constant C > 0 such that, for all x1, x2 ∈ R^d,

|f(x1) − f(x2)| ≤ C ( 1 + ‖x1‖^{p−1} + ‖x2‖^{p−1} ) ‖x1 − x2‖.

Observe that in the case p = 1, pseudo-Lipschitz continuity reduces to the standard Lipschitz continuity.

Given p ≥ 1, we will say that the block vector sequence x(N) converges empirically with p-th order moments if there exists a random variable X ∈ R^d such that

  1. E‖X‖^p < ∞; and

  2. for any scalar-valued pseudo-Lipschitz continuous function f of order p,

lim_{N→∞} (1/N) Σ_{n=1}^N f( xn(N) ) = E[ f(X) ].

Thus, the empirical mean of the components f(xn(N)) converges to the expectation E[f(X)]. In this case, with some abuse of notation, we will write

lim_{N→∞} { xn } = X   (PL(p)),   (27)

where, as usual, we have omitted the dependence on N. Importantly, empirical convergence can be defined on deterministic vector sequences, with no need for a probability space. If x(N) is a random vector sequence, we will often require that the limit (27) holds almost surely.

Finally, let φ(r, θ) be a function on r ∈ R^s, dependent on a parameter vector θ ∈ R^q. We say that φ(r, θ) is uniformly pseudo-Lipschitz continuous of order p in r at θ = θ̄ if there exists an open neighborhood U of θ̄ such that (i) φ(r, θ) is pseudo-Lipschitz of order p in r for all θ ∈ U; and (ii) there exists a constant L such that

| φ(r, θ) − φ(r, θ′) | ≤ L ( 1 + ‖r‖^p ) ‖θ − θ′‖,   (28)

for all r ∈ R^s and θ, θ′ ∈ U. It can be verified that if φ(r, θ) is uniformly pseudo-Lipschitz continuous of order p in r at θ̄, the sequence r(N) converges empirically with p-th order moments to a random variable R, and θ(N) → θ̄, then

lim_{N→∞} (1/N) Σ_{n=1}^N φ( rn(N), θ(N) ) = E[ φ(R, θ̄) ].

In the case when φ is uniformly pseudo-Lipschitz continuous of order p = 1, we will simply say φ is uniformly Lipschitz.

Appendix D A General Convergence Result

The proof of Theorem 1 uses the following minor modification of the general convergence result in [22, Theorem 4]. We are given a dimension N, an orthogonal matrix V ∈ R^{N×N}, an initial vector, and disturbance vectors. Then, we generate a sequence of iterates by the following recursion:

(29a)
(29b)
(29c)