AMP-Inspired Deep Networks for Sparse Linear Inverse Problems


Mark Borgerding, Philip Schniter, and Sundeep Rangan M. Borgerding (email: borgerding.7@osu.edu) and P. Schniter (email: schniter.1@osu.edu) are with the Department of Electrical and Computer Engineering, The Ohio State University, Columbus OH. Their work was supported in part by the National Science Foundation under grants 1527162 and 1539960. S. Rangan (email: srangan@nyu.edu) is with the Department of Electrical and Computer Engineering, New York University, Brooklyn, NY, 11201. His work was supported by the National Science Foundation under Grants 1302336, 1547332, and 1564142.Portions of this work were presented at the 2016 IEEE Global Conference on Signal and Information Processing [1].
Abstract

Deep learning has gained great popularity due to its widespread success on many inference problems. We consider the application of deep learning to the sparse linear inverse problem, where one seeks to recover a sparse signal from a few noisy linear measurements. In this paper, we propose two novel neural-network architectures that decouple prediction errors across layers in the same way that the approximate message passing (AMP) algorithms decouple them across iterations: through Onsager correction. First, we propose a “learned AMP” network that significantly improves upon Gregor and LeCun’s “learned ISTA.” Second, inspired by the recently proposed “vector AMP” (VAMP) algorithm, we propose a “learned VAMP” network that offers increased robustness to deviations in the measurement matrix from i.i.d. Gaussian. In both cases, we jointly learn the linear transforms and scalar nonlinearities of the network. Interestingly, with i.i.d. signals, the linear transforms and scalar nonlinearities prescribed by the VAMP algorithm coincide with the values learned through back-propagation, leading to an intuitive interpretation of learned VAMP. Finally, we apply our methods to two problems from 5G wireless communications: compressive random access and massive-MIMO channel estimation.

Deep learning, compressive sensing, approximate message passing, random access, massive MIMO.

I Introduction

We consider the problem of recovering a signal s_0 ∈ R^N from a noisy linear measurement y ∈ R^M of the form

y = Φ s_0 + w,    (1)

where Φ ∈ R^{M×N} represents a linear operator and w ∈ R^M is additive white Gaussian noise (AWGN). (Although we focus on real-valued quantities for ease of illustration, the methods in this paper could be easily extended to the complex-valued case.) In many cases of interest, M ≪ N. We will assume that the signal vector s_0 has an (approximately) sparse representation in a known orthonormal basis Ψ ∈ R^{N×N}, i.e., that s_0 = Ψ x_0 for some (approximately) sparse vector x_0 ∈ R^N. (Although we focus on sparse signals, the methods in this paper can be applied to other signals, such as the finite-alphabet signals used in digital communications.) Thus we define A ≜ ΦΨ ∈ R^{M×N}, write (1) as

y = A x_0 + w,    (2)

and seek to recover a sparse x_0 from y. In the sequel, we will refer to this problem as the “sparse linear inverse” problem. The resulting estimate x̂ of x_0 can then be converted into an estimate ŝ of s_0 via ŝ = Ψ x̂.

The sparse linear inverse problem has received enormous attention over the last few years, in large part because it is central to compressive sensing [2]. Many methods have been developed to solve this problem. Most of the existing methods involve a reconstruction algorithm that inputs a pair (y, A) and produces a sparse estimate x̂. A myriad of such algorithms have been proposed, including both sequential (e.g., greedy) and iterative varieties. Some relevant algorithms will be reviewed in Section II-A.

Recently, a different approach to solving this problem has emerged along the lines of “deep learning” [3], whereby a many-layer neural network is optimized to minimize reconstruction mean-squared error (MSE) on a large set of training examples {(x_d, y_d)}. (Since Ψ is orthonormal, i.e., Ψ^T Ψ = I, training examples of the form {(s_d, y_d)} can be converted to {(x_d, y_d)} via x_d = Ψ^T s_d.) Once trained, the network can be used to predict the sparse x_0 that corresponds to a new input y. Although the operator A and the signal/noise statistics are not explicitly used during training, the learned network will be implicitly dependent on them. Previous work (e.g., [4, 5, 6, 7, 8]) has shown that the deep-learning approach to solving sparse linear inverse problems has the potential to offer significant improvements, in both accuracy and complexity, over traditional algorithms like ISTA [9] and FISTA [10]. A short review of relevant concepts from deep learning will be provided in Section II-B.

In this paper, we show how recent advances in iterative reconstruction algorithms suggest modifications to traditional neural-network architectures that yield improved accuracy and complexity when solving sparse linear inverse problems. In particular, we show how “Onsager correction,” which lies at the heart of the approximate message passing (AMP) [11] and vector AMP (VAMP) [12] algorithms, can be employed to construct deep networks that i) require fewer layers to reach a given level of accuracy and ii) yield greater accuracy overall. To our knowledge, the use of Onsager correction in deep networks is novel.

The contributions of our work are as follows. First, in Section III, we show how the soft-thresholding-based AMP algorithm from [11] can be “unfolded” to form a feedforward neural network whose MSE-optimal parameters can be learned using a variant of back-propagation. The structure of the resulting “learned AMP” (LAMP) network is similar to that of learned ISTA (LISTA) [4] but contains additional “bypass” paths whose gains are set in a particular way. While bypass paths can also be found in recently proposed “residual networks” [13, 14] and “highway networks” [15], the bypass paths in LAMP have a different topology and a different gain-control mechanism. We show numerically that LAMP’s outputs are more accurate than those of LISTA at each iteration, in some cases by a large factor. To isolate the effects of LAMP’s change in network topology, the aforementioned experiments restrict the shrinkage function to classical soft-thresholding.

Next, in Section IV, we show that the accuracy of LAMP can be significantly improved by jointly learning MSE-optimal shrinkage functions and linear transforms. In particular, we consider several families of shrinkage functions, each controlled by a small number of learnable parameters: piecewise linear functions, exponential shrinkage functions, cubic B-splines, and Bernoulli-Gaussian denoisers. Our work in this section is inspired by [6], which learned cubic B-splines for ISTA, but goes farther in that it i) considers shrinkage families beyond splines, ii) jointly learns the shrinkage functions and linear transforms, and iii) includes Onsager correction.

Then, in Section V, we show how the VAMP algorithm from [12] can be unfolded to form a feedforward neural network whose MSE-optimal linear transforms and shrinkage functions can be jointly learned using a variant of back-propagation. Interestingly, we find that the learned LVAMP parameters are nearly identical to the prescribed matched-VAMP parameters (i.e., VAMP under statistically matched prior and likelihood) when the signal is i.i.d. In this sense, matched VAMP “predicts” the parameters learned by back-propagation. Furthermore, since the parameters prescribed by VAMP have an intuitive interpretation based on MMSE estimation principles, VAMP “explains” the parameters learned by back-propagation.

Finally, in Section VII, we apply the proposed networks to two problems arising in 5th-generation (5G) wireless communications: the compressive random access problem and the massive-MIMO channel-estimation problem.

An early version of this work appeared in [1]. There, we proposed the LAMP-ℓ1 algorithm and compared it to LISTA. In this work, we go beyond [1] by i) providing justification for our LAMP-ℓ1 parameterization (in Appendix A), ii) jointly optimizing the shrinkage functions and the linear stages of LAMP, iii) proposing the LVAMP method, and iv) detailing two applications to 5G communications.

Notation

We use capital boldface letters like A for matrices, small boldface letters like a for vectors, (·)^T for transposition, and a_j to denote the jth element of a. Also, we use ‖A‖_2 for the spectral norm of A, ‖a‖_p for the ℓ_p norm of a when p ≥ 1, and ‖a‖_0 for the ℓ_0 or “counting” pseudo-norm of a. Likewise, we use Diag(a) for the diagonal matrix created from vector a, I_N for the N×N identity matrix, and 0 for the zero vector. For a random vector x, we denote its probability density function (pdf) by p(x) and its expectation by E[x]. For a random variable x, we denote its variance by var[x]. Similarly, we use p(x|y), E[x|y], and var[x|y] for the pdf, expectation, and variance (respectively) conditioned on y. We refer to the Dirac delta pdf using δ(x) and to the pdf of a Gaussian random vector x with mean a and covariance C using N(x; a, C). Finally, we use sgn(·) to denote the signum function, where sgn(x) = 1 when x ≥ 0 and sgn(x) = −1 when x < 0.

II Iterative Algorithms and Deep Learning

II-A Iterative Algorithms

One of the best known algorithmic approaches to solving the sparse linear inverse problem is through solving the convex optimization problem [16, 17]

x̂ = arg min_x  (1/2) ‖y − A x‖_2^2 + λ ‖x‖_1,    (3)

where λ > 0 is a tunable parameter that controls the tradeoff between sparsity and measurement fidelity in x̂. The convexity of (3) leads to provably convergent algorithms and bounds on the performance of the estimate x̂ (see, e.g., [18]). In the sequel, we will refer to (3) as the “ℓ1” problem.

II-A1 ISTA

One of the simplest approaches to solving (3) is the iterative shrinkage/thresholding algorithm (ISTA) [9], which iterates the steps (for t = 0, 1, 2, … and x̂_0 = 0)

v_t = y − A x̂_t    (4a)
x̂_{t+1} = η_st( x̂_t + β A^T v_t ; λ ),    (4b)

where β > 0 is a stepsize, v_t is the iteration-t residual measurement error, and η_st(·; λ) is the “soft thresholding” shrinkage function, defined componentwise as

[η_st(r; λ)]_j = sgn(r_j) max{ |r_j| − λ, 0 }.    (5)
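To make the recursion (4)–(5) concrete, the following minimal NumPy sketch implements ISTA; the stepsize beta, threshold lam, and iteration count are caller-supplied assumptions rather than values prescribed by the paper.

import numpy as np

def soft_threshold(r, lam):
    # Componentwise soft thresholding, eq. (5).
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

def ista(y, A, lam, beta, num_iters=1000):
    # Minimal ISTA sketch for the l1 problem (3), following (4a)-(4b).
    xhat = np.zeros(A.shape[1])
    for _ in range(num_iters):
        v = y - A @ xhat                                     # residual, (4a)
        xhat = soft_threshold(xhat + beta * (A.T @ v), lam)  # shrinkage, (4b)
    return xhat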

II-A2 FISTA

Although ISTA is guaranteed to converge when the stepsize satisfies β < 1/‖A‖_2^2 [19], it converges somewhat slowly, and so many modifications have been proposed to speed it up. Among the most famous is “fast ISTA” (FISTA) [10],

v_t = y − A x̂_t    (6a)
x̂_{t+1} = η_st( x̂_t + β A^T v_t + ((t − 2)/(t + 1)) (x̂_t − x̂_{t−1}) ; λ ),    (6b)

which converges in roughly an order-of-magnitude fewer iterations than ISTA (see Fig. 1).
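A minimal sketch of the FISTA update, mirroring the momentum form recalled in (6) and reusing soft_threshold (and the numpy import) from the ISTA sketch above; the momentum coefficient and the initialization are assumptions that should be checked against [10] before serious use.

def fista(y, A, lam, beta, num_iters=200):
    # Minimal FISTA sketch: ISTA plus a momentum term built from the
    # previous two iterates, cf. (6a)-(6b).
    N = A.shape[1]
    xhat, xprev = np.zeros(N), np.zeros(N)
    for t in range(num_iters):
        v = y - A @ xhat                                    # residual, (6a)
        momentum = max(t - 2, 0) / (t + 1)                  # extrapolation weight
        r = xhat + beta * (A.T @ v) + momentum * (xhat - xprev)
        xprev = xhat
        xhat = soft_threshold(r, lam)                       # shrinkage, (6b)
    return xhat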

II-A3 AMP

Recently, an approximate message passing (AMP) algorithm [11, 20] was proposed for the ℓ1 problem. The resulting algorithm, which we call AMP-ℓ1, manifests as

v_t = y − A x̂_t + b_t v_{t−1}    (7a)
x̂_{t+1} = η_st( x̂_t + A^T v_t ; λ_t ),    (7b)

where x̂_0 = 0, v_{−1} = 0, t ∈ {0, 1, 2, …}, and

b_t = (1/M) ‖x̂_t‖_0    (8)
λ_t = (α/√M) ‖v_t‖_2.    (9)

In (9), α is a tuning parameter that has a one-to-one correspondence with λ in (3) [20]. Comparing AMP-ℓ1 to ISTA, we see two major differences: i) AMP’s residual v_t in (7a) includes the “Onsager correction” term b_t v_{t−1}, and ii) AMP’s shrinkage threshold λ_t in (7b) takes the prescribed, t-dependent value (9). In the sequel, we explain the rationale behind these differences.
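A minimal NumPy sketch of AMP-ℓ1 (7)–(9), reusing soft_threshold from the ISTA sketch above; note the Onsager term b_t v_{t−1} and the data-dependent threshold, which are the two differences just listed. The iteration count is an assumption.

def amp_l1(y, A, alpha, num_iters=30):
    # Minimal AMP-l1 sketch following (7)-(9).
    M, N = A.shape
    xhat, v = np.zeros(N), np.zeros(M)
    for _ in range(num_iters):
        b = np.count_nonzero(xhat) / M                 # Onsager gain, (8)
        v = y - A @ xhat + b * v                       # corrected residual, (7a)
        lam = alpha * np.linalg.norm(v) / np.sqrt(M)   # threshold, (9)
        xhat = soft_threshold(xhat + A.T @ v, lam)     # shrinkage, (7b)
    return xhat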

AMP can in fact be used with any Lipschitz-continuous shrinkage function. For this, we write the AMP algorithm as

v_t = y − A x̂_t + b_t v_{t−1}    (10a)
x̂_{t+1} = η( x̂_t + A^T v_t ; σ_t, θ_t ),    (10b)

where x̂_0 = 0, v_{−1} = 0, t ∈ {0, 1, 2, …}, and

b_t = (1/M) Σ_{j=1}^N ∂[ η( r_{t−1}; σ_{t−1}, θ_{t−1} ) ]_j / ∂r_j    (11)
σ_t^2 = (1/M) ‖v_t‖_2^2.    (12)

In writing (10b), we assume that the shrinkage function η accepts the noise-standard-deviation estimate σ_t as an argument. Although this is not a required feature of AMP, we find it useful in the sequel. It is straightforward to show that AMP in (10)-(12) reduces to AMP-ℓ1 from (7)-(9) when η(r; σ, θ) = η_st(r; θσ) and θ_t = α, in which case b_t in (11) reduces to ‖x̂_t‖_0 / M.

When A is a typical realization of a large i.i.d. sub-Gaussian random matrix with variance-1/M entries, and the shrinkage η(·; σ, θ) is separable with identical Lipschitz scalar components, the Onsager correction decouples the AMP iterations in the sense that the input to the shrinkage function,

r_t ≜ x̂_t + A^T v_t,    (13)

can be accurately modeled as

r_t = x_0 + e_t,   e_t ∼ N(0, σ_t^2 I_N),    (14)

with σ_t^2 from (12). (The AMP model (14), with σ_t from (12), is provably accurate in the large-system limit, i.e., as M, N → ∞ with M/N converging to a positive constant [21, 22].) In other words, the Onsager correction ensures that the shrinkage input r_t is an AWGN-corrupted version of the true signal x_0 with known variance σ_t^2. (See Fig. 5(b) for numerical evidence.) The resulting “denoising” problem, that of estimating x_0 from r_t, is well understood.

For example, when the elements of x_0 are statistically independent with known prior p(x), the MSE-optimal denoiser (AMP with MSE-optimal denoising was first described in [23]) is simply the posterior mean estimator, i.e., x̂_{t+1,j} = E[x_j | r_{t,j}; σ_t], which can be computed in closed form for many distributions p(x). In the case that p(x) is unknown, we may be more interested in the minimax denoiser, i.e., the minimizer of the maximum MSE over an assumed family of priors. Remarkably, for generic sparse priors, i.e., p(x) = (1 − γ) δ(x) + γ p̃(x) with sparsity rate γ and arbitrary unknown p̃(·), soft-thresholding (5) with a threshold proportional to the AWGN standard deviation (i.e., λ_t = α σ_t, recalling (12)) is nearly minimax optimal [20]. Thus, we can interpret the AMP-ℓ1 algorithm (7) as a nearly minimax approach to the sparse linear inverse problem under an unknown sparse prior.

The behavior of AMP is well understood when A is i.i.d. sub-Gaussian [21, 22], but even small deviations from this model can lead AMP to diverge [24] or at least behave in ways that are not well understood.

II-A4 Vector AMP

Very recently, the VAMP algorithm (see Algorithm 1) was proposed in [12] to address AMP’s fragility with respect to the matrix A. The VAMP algorithm retains all the desirable properties of the original AMP (i.e., low per-iteration complexity, very few iterations to convergence, and shrinkage inputs that obey the AWGN model (14)), but over a much larger class of matrices A: those that are large and right-rotationally invariant.

A right-rotationally invariant matrix A is a random matrix whose distribution remains the same after right multiplication by any fixed orthogonal matrix. An intuitive understanding of such matrices arises from their singular value decomposition (SVD). Suppose that

A = U Diag(s) V^T    (15)

is the economy-sized SVD of A. (By “economy-sized,” we mean that if R = rank(A) and s ∈ R^R contains the positive singular values of A, then U ∈ R^{M×R}, V ∈ R^{N×R}, and U^T U = V^T V = I_R.) For right-rotationally invariant A, the matrix V will contain the first R columns of a matrix that is uniformly distributed on the group of N×N orthogonal matrices. Note that i.i.d. Gaussian matrices are a special case of right-rotationally invariant ones, where U is also random orthogonal and s has a particular distribution. Importantly, VAMP behaves well under any orthogonal matrices U, V and any singular values s, as long as the dimensions M, N are large enough [12].

0:   Require: LMMSE estimator η̃(·; σ̃, θ̃) from (16), shrinkage η(·; σ, θ), max iterations T, parameters θ̃ and θ.
1:  Select initial r̃_1 and σ̃_1 > 0.
2:  for t = 1, 2, …, T do
3:     // LMMSE stage:
4:     x̃_t = η̃(r̃_t; σ̃_t, θ̃)
5:     ã_t = ⟨η̃′(r̃_t; σ̃_t, θ̃)⟩
6:     r_t = (x̃_t − ã_t r̃_t)/(1 − ã_t)
7:     σ_t^2 = σ̃_t^2 ã_t/(1 − ã_t)
8:     // Shrinkage stage:
9:     x̂_t = η(r_t; σ_t, θ)
10:     a_t = ⟨η′(r_t; σ_t, θ)⟩
11:     r̃_{t+1} = (x̂_t − a_t r_t)/(1 − a_t)
12:     σ̃_{t+1}^2 = σ_t^2 a_t/(1 − a_t)
13:  end for
14:  Return x̂_T.
Algorithm 1 Vector AMP [12]

The VAMP algorithm is defined in Algorithm 1. The algorithm can be seen to consist of two stages, each comprising the same four steps: estimation (lines 4 and 9), divergence computation (lines 5 and 10), Onsager correction (lines 6 and 11), and variance computation (lines 7 and 12). The only difference between the two stages is their choice of estimator. The first stage uses

η̃(r̃_t; σ̃_t, θ̃) = V Diag(d_t) ( (1/σ_w^2) Diag(s) U^T y + (1/σ̃_t^2) V^T r̃_t ) + ( r̃_t − V V^T r̃_t ),   [d_t]_i = ( s_i^2/σ_w^2 + 1/σ̃_t^2 )^{−1},    (16)

which depends on the measurements y and the parameters

θ̃ ≜ {U, s, V, σ_w^2},    (17)

while the second stage performs componentwise nonlinear shrinkage via η(r_t; σ_t, θ), just as in step (10b) of the AMP algorithm.

Lines 5 and 10 in Algorithm 1 compute the average of the diagonal entries of the Jacobian of η̃(·; σ̃_t, θ̃) and η(·; σ_t, θ), respectively. That is,

⟨η′(r; σ, θ)⟩ ≜ (1/N) Σ_{j=1}^N ∂[η(r; σ, θ)]_j / ∂r_j.    (18)

From (16), we see that the Jacobian of η̃(·; σ̃_t, θ̃) is

∂η̃(r̃_t; σ̃_t, θ̃)/∂r̃_t = (1/σ̃_t^2) V Diag(d_t) V^T + ( I_N − V V^T ),    (19)

and so the average of its diagonal (or N^{−1} times its trace) is

ã_t = (1/N) [ (1/σ̃_t^2) Σ_{i=1}^R [d_t]_i + (N − R) ].    (20)

The first-stage estimator η̃ in (16) can be interpreted as computing the MMSE estimate of x_0 under the likelihood function

ℓ(x; y) = N(y; A x, σ_w^2 I_M),    (21)

which follows from (2) under the assumption that w ∼ N(0, σ_w^2 I_M), and the pseudo-prior

p̃(x; r̃_t, σ̃_t) = N(x; r̃_t, σ̃_t^2 I_N).    (22)

We refer to (22) as a “pseudo” prior because it is constructed internally by VAMP at each iteration t. The MMSE estimate of x_0 is then given by the conditional mean E[x | y, r̃_t], which in the case of (21)-(22) is

x̃_t = ( (1/σ_w^2) A^T A + (1/σ̃_t^2) I_N )^{−1} ( (1/σ_w^2) A^T y + (1/σ̃_t^2) r̃_t ).    (23)

Replacing A in (23) with its SVD from (15) yields the expression in (16). Since the estimate is linear in (y, r̃_t), we refer to the first stage as the “linear MMSE” (LMMSE) stage.
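The computational payoff of the SVD form can be seen in the sketch below, which evaluates (23) via the economy SVD (15) without forming or inverting an N×N matrix; the function and variable names are illustrative, not taken from the paper’s code.

import numpy as np

def lmmse_svd(y, U, s, V, r, sigw2, sig2):
    # LMMSE estimate (23) computed via the economy SVD (15):
    # xhat = (A^T A / sigw2 + I / sig2)^{-1} (A^T y / sigw2 + r / sig2).
    d = 1.0 / (s**2 / sigw2 + 1.0 / sig2)          # per-singular-direction gains
    z = s * (U.T @ y) / sigw2 + (V.T @ r) / sig2   # coordinates in the row space of A
    return V @ (d * z) + (r - V @ (V.T @ r))       # plus the untouched orthogonal part

Under this form, the divergence in (20) can be read off directly as (sum(d)/sig2 + N − len(s)) / N, without any trace computation on a dense matrix.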

The second-stage estimator η(·; σ_t, θ), in line 9 of Algorithm 1, essentially denoises the pseudo-measurement

r_t = x_0 + e_t,   e_t ∼ N(0, σ_t^2 I_N).    (24)

The AWGN-corruption model (24) holds under large, right-rotationally invariant A and separable η(·) with identical components, as proven in [12]. If the prior p(x) on x_0 were known (although the prior and noise variance are often unknown in practice, they can be learned online using the EM-VAMP approach from [25]), then it would be appropriate to choose the MMSE denoiser for η:

[η(r_t; σ_t)]_j = E[ x_j | r_{t,j} = x_j + e,  e ∼ N(0, σ_t^2) ].    (25)

With an i.i.d. signal x_0 and MMSE denoiser, VAMP produces a sequence x̂_t whose fixed points have MSE consistent with the replica prediction of MMSE from [26]. In the sequel, we shall refer to VAMP with MMSE i.i.d.-signal denoising and known σ_w^2 as “matched VAMP.”

In summary, VAMP alternates between i) MMSE inference of x_0 under the likelihood N(y; Ax, σ_w^2 I_M) and the pseudo-prior N(x; r̃_t, σ̃_t^2 I_N), and ii) MMSE inference of x_0 under the pseudo-likelihood N(r_t; x, σ_t^2 I_N) and the prior p(x). The intermediate quantities r_t and r̃_{t+1} are updated in each stage of VAMP using the Onsager correction terms −ã_t r̃_t and −a_t r_t, respectively, where ã_t and a_t are the divergences associated with the estimators η̃ and η. (Notice that the Onsager correction term in AMP step (10a) also involves an (N/M)-scaled divergence, b_t, defined in (11).) Essentially, the Onsager correction acts to decouple the two stages (and iterations) of VAMP from each other so that local MSE optimization at each stage leads to global MSE optimization of the algorithm.
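The sketch below summarizes one pass of Algorithm 1 in variance form. A direct matrix inverse stands in for the SVD-based LMMSE stage (16) to keep the code short, `denoise(r, sig2)` is a placeholder for any separable denoiser that also returns the average of the diagonal of its Jacobian, and the initialization is an arbitrary assumption.

import numpy as np

def vamp(y, A, sigw2, denoise, num_iters=20):
    # Minimal VAMP sketch (variance form of Algorithm 1). In practice the
    # LMMSE stage would use the SVD form (16) instead of an N x N inverse.
    M, N = A.shape
    r, sig2 = np.zeros(N), 1.0                     # initial r-tilde and variance
    xhat = np.zeros(N)
    for _ in range(num_iters):
        # LMMSE stage: posterior mean under likelihood (21), pseudo-prior (22)
        C = np.linalg.inv(A.T @ A / sigw2 + np.eye(N) / sig2)
        xtil = C @ (A.T @ y / sigw2 + r / sig2)    # cf. (23)
        atil = np.trace(C) / (N * sig2)            # divergence (line 5)
        r2 = (xtil - atil * r) / (1 - atil)        # Onsager correction (line 6)
        sig2_2 = sig2 * atil / (1 - atil)          # variance update (line 7)
        # Shrinkage stage: componentwise denoising of the pseudo-measurement (24)
        xhat, a = denoise(r2, sig2_2)              # estimate + divergence (lines 9-10)
        r = (xhat - a * r2) / (1 - a)              # Onsager correction (line 11)
        sig2 = sig2_2 * a / (1 - a)                # variance update (line 12)
    return xhat

For example, `denoise` could implement soft-thresholding, returning as its divergence the fraction of outputs that remain nonzero.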

II-A5 Comparison of ISTA, FISTA, AMP-ℓ1, and VAMP-ℓ1

For illustration, we now compare the average per-iteration behavior of ISTA, FISTA, AMP-ℓ1, and VAMP-ℓ1 in two scenarios: i) for an A drawn i.i.d. Gaussian, and ii) when the singular values of the same A are replaced by a geometric series that yields a large condition number. That is, the ith singular value was set to s_i = s_1 ρ^{i−1}, with ρ chosen to achieve the desired condition number and s_1 chosen to preserve the scale of A. In both cases, the problem dimensions satisfied M < N; the elements of x_0 were drawn i.i.d. Gaussian with a fixed activation probability and were otherwise set to zero (i.e., x_0 was Bernoulli-Gaussian); and the noise w was i.i.d. Gaussian, with variance set to yield a fixed signal-to-noise ratio (SNR). Recall that ISTA, FISTA, AMP-ℓ1, and VAMP-ℓ1 all estimate x_0 by iteratively minimizing (3) for a chosen value of λ (selected via α in the case of AMP and VAMP). We chose the minimax optimal value of α for AMP and VAMP [20], and we used the corresponding λ for ISTA and FISTA.

Figure 1 shows the average normalized MSE (NMSE) versus iteration t, where NMSE ≜ ‖x̂_t − x_0‖_2^2 / ‖x_0‖_2^2 and the results were averaged over many realizations of (x_0, w). In Fig. 1(a), we see that AMP-ℓ1 required an order-of-magnitude fewer iterations than FISTA, which required an order-of-magnitude fewer iterations than ISTA. Meanwhile, we see that VAMP-ℓ1 required about half the iterations of AMP-ℓ1. In Fig. 1(b), AMP-ℓ1 is not shown because it diverged. But VAMP-ℓ1 required an order-of-magnitude fewer iterations than FISTA, which required an order-of-magnitude fewer iterations than ISTA.

Fig. 1: Average NMSE versus iteration for VAMP-ℓ1, AMP-ℓ1, FISTA, and ISTA under (a) i.i.d. Gaussian A and (b) A with a large condition number. Note that the horizontal axis is plotted on a log scale.

II-B Deep Learning

In deep learning [3], training data {(x_d, y_d)} composed of (feature, label) pairs are used to train the parameters of a deep neural network, with the goal of accurately predicting the unknown label x_0 associated with a newly observed feature y. The deep network accepts y and subjects it to many layers of processing, where each layer usually consists of a linear transformation followed by a simple, componentwise nonlinearity.

Typically, the label space is discrete (e.g., y is an image and x_0 is its class in {cat, dog, …, tree}). In our sparse linear inverse problem, however, the “labels” x_0 are continuous and high-dimensional. Remarkably, Gregor and LeCun demonstrated in [4] that a well-constructed deep network can accurately predict even labels such as these.

The neural network architecture proposed in [4] is closely related to the ISTA algorithm discussed in Section II-A1. To understand the relation, we rewrite the ISTA iteration (4) as

x̂_{t+1} = η_st( S x̂_t + B y ; λ ),   with B ≜ β A^T and S ≜ I_N − B A,    (26)

and “unfold” the iterations t = 0, …, T − 1, resulting in the T-layer feed-forward neural network shown in Fig. 2.

Fig. 2: The feed-forward neural network constructed by unfolding T iterations of ISTA.

Whereas ISTA uses the values of B and S prescribed in (26) and a common value of λ at all layers, Gregor and LeCun [4] proposed to use layer-dependent thresholds λ ≜ [λ_0, …, λ_{T−1}] and “learn” both the thresholds λ and the matrices B, S from the training data by minimizing the quadratic loss

L_T(Θ) = (1/D) Σ_{d=1}^D ‖ x̂_T(y_d; Θ) − x_d ‖_2^2.    (27)

Here, Θ = {B, S, λ} denotes the set of learnable parameters and x̂_T(y_d; Θ) the output of the T-layer network with input y_d and parameters Θ. The resulting approach was coined “learned ISTA” (LISTA).

The LISTA network generated estimates of comparable MSE with significantly fewer matrix-vector multiplications than existing algorithms for the ℓ1 problem (3) with optimally tuned regularization parameters (e.g., λ or α). As an example, for the i.i.d. Gaussian version of the problem described in Section II-A5, LISTA needed fewer layers to reach a given NMSE than AMP-ℓ1 needed iterations, which in turn was one or more orders of magnitude fewer than FISTA and ISTA required. (The computational complexity of one layer of LISTA is essentially equal to that of one iteration of ISTA, FISTA, or AMP.) More details will be given in Section VI-A.

Other authors have also applied ideas from deep learning to the sparse linear inverse problem. For example, [5] extended the LISTA approach [4] to handle structured sparsity and dictionary learning (when the training data are {y_d} and A is unknown). More recently, [7, 8] extended LISTA from the ℓ1 objective of (3) to the ℓ0 objective, and [6] proposed to learn the MSE-optimal scalar shrinkage functions by learning the parameters of a B-spline. It has also been proposed to recover signals using deep networks other than the “unfolded” type. For example, convolutional neural networks and stacked denoising autoencoders have been applied to speech enhancement [27], image denoising [28], image deblurring [29, 30], image super resolution [31], 3D imaging [32], compressive imaging [33, 34, 35], and video compressive sensing [36].

III Learned AMP-ℓ1

As described earlier, LISTA learns the value of the linear transform S ∈ R^{N×N} that minimizes MSE on the training data. As noted in [4], however, LISTA’s performance does not degrade after imposing the structure

S = I_N − B A,    (28)

where B ∈ R^{N×M} and A ∈ R^{M×N}, as suggested by (26). Since the form of S in (28) involves only the MN free parameters in B, it is advantageous (in memory and training) over an unstructured S ∈ R^{N×N} when M < N, which is often the case in compressive sensing. The structured S from (28) leads to network layers of the form shown in Fig. 3, with first-layer inputs x̂_0 = 0 and v_0 = y.

Although not considered in [4], the network in Fig. 3 allows both B and λ to vary with the layer t, allowing for a modest performance improvement (as will be demonstrated in Section VI-A) at the expense of a T-fold increase in memory and training complexity. We will refer to networks that use a fixed B and λ over all layers as “tied,” and those that allow t-dependent B_t and λ_t as “untied.”

Fig. 3: The tth layer of the LISTA network, with learnable parameters A_t, B_t, and λ_t.
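As a concrete illustration of the structured LISTA layer just described, the sketch below runs a forward pass under the parameterization (28) (so that each layer computes η_st(x̂ + B_t(y − A x̂); λ_t)) and evaluates the training loss (27); the back-propagation step that actually learns {B_t, λ_t} is omitted, and all names are illustrative assumptions.

import numpy as np

def soft_threshold(r, lam):
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

def lista_forward(y, A, B_list, lam_list):
    # Forward pass of a T-layer LISTA network under the structure (28):
    # xhat_{t+1} = eta_st(xhat_t + B_t (y - A xhat_t); lam_t), xhat_0 = 0.
    # Tied LISTA passes the same B in every entry of B_list; untied LISTA
    # passes a different B_t per layer.
    xhat = np.zeros(A.shape[1])
    for B, lam in zip(B_list, lam_list):
        xhat = soft_threshold(xhat + B @ (y - A @ xhat), lam)
    return xhat

def training_loss(X, Y, A, B_list, lam_list):
    # Quadratic loss (27) over training pairs stored as columns of X and Y.
    errs = [lista_forward(Y[:, d], A, B_list, lam_list) - X[:, d]
            for d in range(Y.shape[1])]
    return np.mean([e @ e for e in errs])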

III-A The LAMP-ℓ1 Network

We propose to construct a neural network by unfolding the iterations of AMP-ℓ1 from (7). We then propose to learn the MSE-optimal values of the network parameters, Θ, from training data {(x_d, y_d)}. We will refer to this approach as “learned AMP-ℓ1” (LAMP-ℓ1). The hope is that it will require fewer layers than LISTA to yield an accurate reconstruction, just as AMP-ℓ1 requires many fewer iterations than ISTA to do the same (when A is drawn i.i.d. Gaussian).

Figure 4 shows one layer of the LAMP-ℓ1 network. Comparing LAMP-ℓ1 to LISTA, we see two main differences:

  1. LAMP-ℓ1 includes a “bypass” path from v_{t−1} to v_t that is not present in LISTA. This path implements an “Onsager correction” whose goal is to decouple the layers of the network, just as it decoupled the iterations of the AMP algorithm (recall Section II-A3).

  2. LAMP-ℓ1’s tth shrinkage threshold λ_t = (α_t/√M) ‖v_t‖_2 varies with the realization of v_t, whereas LISTA’s does not.

Fig. 4: The tth layer of the LAMP-ℓ1 network, with learnable parameters A_t, B_t, and α_t.

III-B Parameterizing LAMP-ℓ1

It is important to realize that LAMP-ℓ1 implements a generalization of the AMP-ℓ1 algorithm (7), wherein the matrices (A, A^T) manifest as a learnable pair (A_t, B_t) at iteration t. In other words, the AMP algorithm enforces A_t = A and B_t = A^T, whereas the LAMP-ℓ1 network does not. An important question is whether this generalization preserves the independent-Gaussian nature (14) of the shrinkage input error—the most important feature of AMP. We will show, numerically, that the desired behavior does seem to occur when

A_t = β_t A,    (29)

with a learnable scalar β_t > 0 (and a freely learnable B_t), at least when A is i.i.d. Gaussian.

Note that, in (29), “A” refers to the true measurement matrix from (2). If A were unknown, we could instead use an estimate of A computed from the training data, as described in Section III-C. But, in many applications of the sparse linear inverse problem, A is known. Furthermore, if matrix-vector multiplication with A was known to have a fast implementation (e.g., FFT), then it could be exploited in (29).

In Appendix A, we show that, under the parameterization (29) and some redefinitions of variables, the tth layer of the LAMP-ℓ1 network can be summarized as

x̂_{t+1} = β_t η_st( x̂_t + B_t v_t ; (α_t/√M) ‖v_t‖_2 )    (30a)
v_{t+1} = y − A x̂_{t+1} + (β_t/M) ‖x̂_{t+1}‖_0 v_t,    (30b)

with first-layer inputs x̂_0 = 0 and v_0 = y. The LAMP-ℓ1 parameters are then Θ = {B, {α_t, β_t}_{t=0}^{T−1}} in the tied case, or Θ = {B_t, α_t, β_t}_{t=0}^{T−1} in the untied case.
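A sketch of one untied LAMP-ℓ1 layer as recalled in (30), with learnable (B_t, α_t, β_t) and reusing soft_threshold from the earlier sketches; the exact placement of β_t relative to the shrinkage follows the parameterization above and should be read as illustrative.

def lamp_l1_layer(y, A, xhat, v, B, alpha, beta):
    # One LAMP-l1 layer, cf. (30): Onsager-corrected bypass path plus a
    # threshold that scales with the current residual norm.
    M = y.shape[0]
    lam = alpha * np.linalg.norm(v) / np.sqrt(M)     # data-dependent threshold
    xhat_new = beta * soft_threshold(xhat + B @ v, lam)
    b = (beta / M) * np.count_nonzero(xhat_new)      # Onsager gain
    v_new = y - A @ xhat_new + b * v                 # corrected residual
    return xhat_new, v_new

Starting from xhat = 0 and v = y and chaining T such layers (with a shared B or a per-layer B_t) gives the tied or untied network, respectively.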

Figure 5(c) shows a quantile-quantile (QQ) plot for the error in the input to untied LAMP-ℓ1’s shrinkage function, r_t − x_0, at a middle layer t, using the data from Fig. 1(a). Also shown are the corresponding shrinkage input errors for ISTA and AMP-ℓ1. The figure shows that the quantiles of AMP-ℓ1 and LAMP-ℓ1 fall on the dashed diagonal line, confirming that they are Gaussian distributed. In contrast, the quantiles of ISTA are heavy-tailed.

Fig. 5: QQ plots of the shrinkage input error evaluated at the first iteration/layer for which the NMSE drops below a fixed threshold (shown for ISTA, AMP-ℓ1, and untied LAMP-ℓ1). The plots show that ISTA’s error is heavy tailed while AMP-ℓ1’s and LAMP-ℓ1’s errors are Gaussian due to Onsager correction.

III-C Learning the LAMP-ℓ1 Parameters

For the “tied” case of LAMP-ℓ1, we aim to learn the parameters Θ = {B, {α_t, β_t}_{t=0}^{T−1}} that minimize the MSE on the training data, i.e., the loss (27). In a first attempt to do this, we tried the standard back-propagation approach, where all parameters were jointly optimized from a common initialization, but we found that the parameters converged to a bad local minimum. We conjecture that this failure was a result of overfitting, since B alone contributes MN free parameters, which was large in our experiments. Thus we propose a hybrid of “layer-wise” and “global” optimization that appears to avoid this problem.

Roughly speaking, our approach is to learn Θ_0, then Θ_1, and so on, until Θ_{T−1}. Recall that Θ_t are not the parameters of layer t alone but the parameters of all layers up to and including layer t. The details of our approach are specified in Algorithm 2. There, line 5 performs layer-wise learning (of layer t) and line 6 performs global learning (of all layers up to and including t). Note that, in line 2, we do not learn the parameter β_0 but instead leave it at its initial value. The reason is that the triple (B, α_0, β_0) is over-parameterized, in that (cB, cα_0, β_0/c) gives the same layer-0 output for any c > 0, due to the scaling property of the soft-thresholding function. To avoid this over-parameterization, we fix the value of β_0.

1:  Initialize B = A^T, α_0 = 1, β_0 = 1
2:  Learn Θ_0 = {B, α_0} with β_0 fixed
3:  for t = 1 to T − 1 do
4:     Initialize (α_t, β_t) = (α_{t−1}, β_{t−1})
5:     Learn {α_t, β_t} with Θ_{t−1} fixed
6:     Re-learn Θ_t = {B, α_0, β_0, …, α_t, β_t}
7:  end for
8:  Return Θ_{T−1}
Algorithm 2 Tied LAMP-ℓ1 parameter learning

For the untied case of LAMP-ℓ1, we aim to learn the parameters Θ = {B_t, α_t, β_t}_{t=0}^{T−1}. Here we found that extra care was needed to avoid bad local minima. To this end, we implemented a bootstrapping method based on the following rationale: a network that can choose a different B_t for each layer t should perform at least as well as one that is constrained to use the same B for all layers t. In particular, our bootstrapping method checks performance against tied LAMP-ℓ1 at each layer and reinitializes using the tied parameters when appropriate. The details are given in Algorithm 3.

1:  Compute the tied parameters {Θ_t^tied}_{t=0}^{T−1} using Algorithm 2
2:  Initialize B_0 = A^T, α_0 = 1, β_0 = 1
3:  Learn Θ_0 = {B_0, α_0}
4:  for t = 1 to T − 1 do
5:     Initialize (B_t, α_t, β_t) = (B_{t−1}, α_{t−1}, β_{t−1})
6:     Learn {B_t, α_t, β_t} with Θ_{t−1} fixed
7:     Set Θ_t = Θ_{t−1} ∪ {B_t, α_t, β_t}
8:     if Θ_t^tied performs better than Θ_t then
9:        Replace Θ_t with Θ_t^tied (setting B_0 = … = B_t = B)
10:     end if
11:     Re-learn Θ_t
12:  end for
13:  Return Θ_{T−1}
Algorithm 3 Untied LAMP-ℓ1 parameter learning

As described in Section III-B, our LAMP-ℓ1 parameterization (29) assumes that A is known. If A is unknown, it could be estimated using a least-squares (LS) fit to the training data and further optimized along with the parameters Θ to minimize the loss L_T(Θ) from (27). (For the least-squares learning of A, one could either use the one-shot approach A = Y X^+, where the columns of Y and X contain the training pairs {(y_d, x_d)} and X^+ is the pseudo-inverse of X, or one could use back-propagation to minimize the loss Σ_d ‖y_d − A x_d‖_2^2.) Empirically, we find (in experiments not detailed here) that there is essentially no difference between the final test MSEs of LAMP-ℓ1 networks trained with known or LS-estimated A.
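The one-shot least-squares option amounts to the following sketch, with training pairs stacked as columns of Y (M×D) and X (N×D); the names are illustrative.

import numpy as np

def estimate_A_ls(Y, X):
    # One-shot LS fit: A_hat = argmin_A ||Y - A X||_F^2 = Y X^+,
    # where X^+ is the pseudo-inverse of X.
    return Y @ np.linalg.pinv(X)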

III-D Discussion

In this section, we proposed a LAMP network whose nonlinear stages were constrained to the soft-thresholding shrinkage from (5). Under this constraint, the resulting LAMP-ℓ1 network differs from LISTA only in the presence of Onsager correction, allowing us to study the effect of Onsager correction in deep networks. The numerical experiments in Section VI-A show that, as expected, the LAMP-ℓ1 network outperforms the LISTA network at every layer for the numerical data used to create Fig. 1.

IV Learned AMP

We now consider the use of generic shrinkage functions within LAMP with the goal of improving its performance over that of LAMP-ℓ1. In particular, we aim to learn the jointly MSE-optimal shrinkage functions and linear transforms across all layers of the LAMP network. To make this optimization tractable, we consider several families of shrinkage functions, where each family is parameterized by a finite-dimensional vector θ_t at layer t. We then use back-propagation to learn the jointly MSE-optimal values of {θ_t} and the linear-transform parameters.

IV-A The LAMP Network

For LAMP, we unfold the generic AMP algorithm (10) into a network. As with AMP-ℓ1, we relax the linear transform pair (A, A^T) to the layer-dependent learnable pair (A_t, B_t), and then place restrictions on (A_t, B_t) to facilitate Onsager correction. With AMP-ℓ1, the restrictions came in the form of (29), where B_t and β_t emerged as the tunable parameters. It was then shown, in Appendix A, that β_t acted to scale the output of the soft-thresholding function. Since the shrinkage functions that we use in this section will have their own scaling mechanisms, it now suffices to use (29) with β_t = 1. Under this parameterization, the tth layer of (general) LAMP becomes

x̂_{t+1} = η( x̂_t + B_t v_t ; σ_t, θ_t )    (31a)
v_{t+1} = y − A x̂_{t+1} + b_{t+1} v_t,    (31b)

with learnable parameters B_t and θ_t, and with b_{t+1} and σ_t computed as in (11)-(12). See Fig. 6 for an illustration.

Fig. 6: The tth layer of the (general) LAMP network, with learnable parameters B_t and θ_t.

IV-B Parameterizing the Shrinkage Functions

In the sequel, we consider families of shrinkage functions η(r; σ, θ) that are both separable and odd symmetric. By separable, we mean that [η(r; σ, θ)]_j = η(r_j; σ, θ) for some scalar function η(·), and by odd symmetric we mean that η(−r; σ, θ) = −η(r; σ, θ) for all r. Several such shrinkage families are detailed below.

IV-B1 Scaled Soft-Threshold

We first consider

η_sst(r; σ, θ) ≜ θ_1 sgn(r) max{ |r| − θ_2 σ, 0 },    (32)

which can be recognized as a scaled version of the soft-threshold operator from (5). Note that θ = [θ_1, θ_2]. It can be readily seen that LAMP-ℓ1 from (30) is a special case of LAMP from (31) for which η = η_sst and θ_t = [β_t, α_t].

IV-B2 Piecewise Linear

Next we consider (odd symmetric) piecewise linear functions with five segments:

η_pwlin(r; σ, θ) ≜
  θ_3 r,                                                           |r| ≤ θ_1 σ
  sgn(r) [ θ_3 θ_1 σ + θ_4 ( |r| − θ_1 σ ) ],                       θ_1 σ < |r| ≤ θ_2 σ
  sgn(r) [ θ_3 θ_1 σ + θ_4 (θ_2 − θ_1) σ + θ_5 ( |r| − θ_2 σ ) ],   |r| > θ_2 σ.    (33)

Here, the shrinkage-family parameters θ = [θ_1, …, θ_5] determine the abscissae of the four vertices where the line segments meet (i.e., ±θ_1 σ and ±θ_2 σ) and the slopes of the five segments (i.e., θ_3 on the middle segment and θ_4, θ_5 on the symmetric pairs of outer segments). The shrinkage in (33) can be considered a generalization of (32) from three to five segments with a possibly non-zero slope on the middle segment. It is inspired by the design from [37, Eq. (13)-(15)] but has a different parameterization and includes a dependence on the estimated noise level σ.
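One plausible realization of such a five-segment shrinkage, consistent with the description above but not necessarily matching the paper’s exact parameterization, is sketched below.

import numpy as np

def eta_pwlin(r, sigma, theta):
    # Odd-symmetric piecewise-linear shrinkage with five segments, cf. (33).
    # theta = [t1, t2, s_mid, s_in, s_out]: vertices at +-t1*sigma and
    # +-t2*sigma (0 < t1 < t2), with slopes s_mid / s_in / s_out on the
    # middle / intermediate / outer segments.
    t1, t2, s_mid, s_in, s_out = theta
    a, b = t1 * sigma, t2 * sigma
    absr, sgn = np.abs(r), np.sign(r)
    return np.where(
        absr <= a, s_mid * r,
        np.where(absr <= b,
                 sgn * (s_mid * a + s_in * (absr - a)),
                 sgn * (s_mid * a + s_in * (b - a) + s_out * (absr - b))))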

IV-B3 Exponential

We now consider the exponential shrinkage family

η_exp(r; σ, θ) ≜ θ_2 r + θ_3 r exp( −r^2 / (2 θ_1^2 σ^2) ).    (34)

The parameters θ = [θ_1, θ_2, θ_3] control the asymptotic slope (i.e., θ_2), the slope at the origin (i.e., θ_2 + θ_3), and the rate of transition between those two slopes (where a larger θ_1 gives a slower transition). The shrinkage in (34) is inspired by the design from [37, Eq. (19)-(20)] but includes a dependence on the estimated noise level σ.

IV-B4 Spline

Next we consider the spline shrinkage family

η_spline(r; σ, θ) ≜ θ_2 r + θ_3 r b( r / (θ_1 σ) ),    (35)

where b(·) is the cubic B-spline [38]

b(z) ≜
  2/3 − z^2 + |z|^3/2,   0 ≤ |z| ≤ 1
  (2 − |z|)^3 / 6,       1 ≤ |z| ≤ 2
  0,                     |z| > 2.    (36)

Similar to (34), the parameters θ = [θ_1, θ_2, θ_3] in (35) control the asymptotic slope (i.e., θ_2), the slope at the origin (i.e., θ_2 + 2θ_3/3), and the rate of transition between those two slopes (where a larger θ_1 gives a slower transition). The shrinkage in (35) is inspired by that used in [6], but is parameterized differently. The shrinkage in [6] was constructed using shifts of b(·) spread uniformly over the dynamic range of the signal, each scaled by an adjustable weight. By contrast, the shrinkage in (35) has only three adjustable parameters but includes a dependence on the noise level σ. Furthermore, [6] used identical shrinkage parameters at all layers of the ISTA network, whereas we allow the shrinkage parameters to vary across the layers of the LAMP network.

IV-B5 Bernoulli-Gaussian

Finally, we consider shrinkage functions that correspond to MSE-optimal denoisers under zero-mean Bernoulli-Gaussian (BG) priors. That is, η(r; σ, θ) = E[x | r], where x has the BG prior

p(x) = (1 − γ) δ(x) + γ N(x; 0, φ)    (37)

(with sparsity rate γ ∈ (0, 1) and variance φ > 0) and r is an AWGN-corrupted measurement of x:

r = x + e,   e ∼ N(0, σ^2).    (38)

The MSE-optimal denoiser is then (see, e.g., [39])

E[x | r] = ( (φ/(φ + σ^2)) r ) / ( 1 + ((1 − γ)/γ) √((φ + σ^2)/σ^2) exp( −r^2 φ / (2 σ^2 (φ + σ^2)) ) ).    (39)

To turn (39) into a learnable shrinkage function, we replace the prior parameters (γ, φ) with learnable quantities θ = [θ_1, θ_2] and then simplify, giving

η_bg(r; σ, θ) ≜ ( (θ_2/(θ_2 + σ^2)) r ) / ( 1 + ((1 − θ_1)/θ_1) √((θ_2 + σ^2)/σ^2) exp( −r^2 θ_2 / (2 σ^2 (θ_2 + σ^2)) ) ).    (40)
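For reference, the sketch below evaluates the BG posterior-mean denoiser (39) directly in terms of the prior parameters (γ, φ), computed in log-odds form for numerical stability; replacing (γ, φ) with learnable θ = [θ_1, θ_2] gives the shrinkage family (40). The function name is illustrative.

import numpy as np

def eta_bg(r, sigma2, gamma, phi):
    # MMSE denoiser for the Bernoulli-Gaussian prior (37) under the AWGN
    # model (38): E[x|r] = P(x nonzero | r) * (phi/(phi+sigma2)) * r.
    s = phi + sigma2
    log_odds = (np.log(gamma / (1.0 - gamma))
                - 0.5 * np.log(s / sigma2)
                + 0.5 * r**2 * phi / (sigma2 * s))
    slab_prob = 1.0 / (1.0 + np.exp(-log_odds))   # posterior support probability
    return slab_prob * (phi / s) * r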

IV-C Learning the LAMP Parameters

As with LAMP-ℓ1, we consider two cases of LAMP: the “tied” case, where the same linear transform B is used at all layers of the network, and the “untied” case, where a different linear transform B_t is allowed in each layer. Thus, the parameters for tied LAMP are Θ = {B, {θ_t}_{t=0}^{T−1}} and those for untied LAMP are Θ = {B_t, θ_t}_{t=0}^{T−1}. The LAMP parameters are then learned using the method described in Section III-C, now with (α_t, β_t) replaced by θ_t.

IV-D Discussion

In this section, we constructed a “LAMP” deep network by unfolding the AMP algorithm from [11], parameterizing its linear and nonlinear stages in novel ways, and learning the parameters using a hybrid of layer-wise and global optimization. The numerical experiments in Section VI suggest that LAMP performs quite well with i.i.d. Gaussian A. For example, after a modest number of layers, untied LAMP’s NMSE comes close to the support-oracle bound and is significantly better than that of the (tied) LISTA approach from [4].

For non-i.i.d.-Gaussian A, and especially ill-conditioned A, however, the performance of LAMP suffers. Also, it is not clear how to interpret the parameters learned by LAMP, even in the case of i.i.d. Gaussian A. Both problems stem from the fact that LAMP can be viewed as a generalization of AMP that uses the matrices (A_t, B_t) in place of (A, A^T) at the tth iteration. We aim to resolve these issues using the method presented in the next section.

V Learned Vector-AMP

As described in Section II-A, the behavior of AMP is well understood when A is i.i.d. sub-Gaussian, but even small deviations from this model can lead AMP to diverge or at least behave in ways that are not well understood. Very recently, however, the VAMP algorithm has been proposed as a partial solution to this problem. That is, VAMP enjoys the same benefits as AMP but works with a much larger class of matrices A: those that are right-rotationally invariant. Perhaps, by building a deep network around the VAMP algorithm, we can circumvent the problems with LAMP that arise with non-i.i.d.-Gaussian matrices.

V-A The LVAMP Network

We propose to unfold the VAMP algorithm into a network and learn the MSE-optimal values of its parameters. The tth layer of the learned VAMP (LVAMP) network is illustrated in Fig. 7. Essentially it consists of four operations: 1) LMMSE estimation, 2) decoupling, 3) shrinkage, and 4) decoupling, where the two decoupling stages are identical.

With an i.i.d. signal, the LMMSE estimator takes the form (23). Plugging the SVD (15) into (23) yields (16). Thus, since VAMP assumes an i.i.d. signal, its LMMSE stage is parameterized by θ̃ = {U, s, V, σ_w^2} for all iterations (recall (17)). For generality, we allow the LVAMP network to vary these parameters with the layer t, giving θ̃_t.

With non-i.i.d. (e.g., correlated) signals, the LMMSE estimator also depends on the signal covariance matrix, which may not be explicitly known. In this case, it makes more sense to parameterize