# Generalised Approximate Message Passing for Non-I.I.D. Sparse Signals

###### Abstract

Generalised approximate message passing (GAMP) is an approximate Bayesian estimation algorithm for signals observed through a linear transform with a possibly non-linear subsequent measurement model. By leveraging prior information about the observed signal, such as sparsity in a known dictionary, GAMP can for example reconstruct signals from under-determined measurements – known as compressed sensing. In the sparse signal setting, most existing signal priors for GAMP assume the input signal to have i.i.d. entries. Here we present sparse signal priors for GAMP to estimate non-i.i.d. signals through a non-uniform weighting of the input prior, for example allowing GAMP to support model-based compressed sensing.


## 1 Introduction

Generalised approximate message passing (GAMP) was introduced by \citeauthorRangan2011 in [Rangan2011, Rangan2011Full]. GAMP addresses the estimation of signals $\boldsymbol{\alpha} \in \mathbb{R}^n$ observed through a linear transform as follows

(1) $\quad \boldsymbol{y} = \boldsymbol{A}\boldsymbol{\alpha} + \boldsymbol{w}$

where $\boldsymbol{A} \in \mathbb{R}^{m \times n}$. Here we express $\boldsymbol{w}$ as an additive noise which is classical in linear measurement models. We define the intermediate measurements without noise:

(2) $\quad \boldsymbol{z} = \boldsymbol{A}\boldsymbol{\alpha}$

The noise term $\boldsymbol{w}$ does not have to be a strictly additive term independent of $\boldsymbol{z}$. It can more generally be a (possibly non-linear) separable measurement channel expressed through a p.d.f. $p(\boldsymbol{y} \mid \boldsymbol{z}; \boldsymbol{\theta}_{\mathrm{out}})$; where $\boldsymbol{\theta}_{\mathrm{out}}$ represents the parameters of the channel.

GAMP can be used quite generally for estimation of signals $\boldsymbol{\alpha}$ from many different distributions. Here we consider the compressed sensing setting (see [Candes2008]) where $m < n$, i.e. an under-determined system which may be solved if

(3) $\quad k = \|\boldsymbol{\alpha}\|_0 \ll m$

where the operator $\|\cdot\|_0$ counts the number of non-zero entries in $\boldsymbol{\alpha}$. GAMP can solve (1) in the compressed sensing setting when a sparse prior can be imposed on $\boldsymbol{\alpha}$ to model (3).
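As a concrete illustration of this setting, the following numpy sketch builds an under-determined system of the form (1)-(3); the dimensions, the $1/\sqrt{m}$ column scaling, and the Gaussian non-zero entries are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 128, 256, 10  # under-determined: m < n, with a k-sparse signal

# i.i.d. Gaussian measurement matrix (illustrative 1/sqrt(m) scaling)
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

# k-sparse signal: k non-zero entries drawn from a standard Gaussian
alpha = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
alpha[support] = rng.normal(size=k)

y = A @ alpha  # noiseless intermediate measurements, cf. (2)
```

Here `np.count_nonzero(alpha)` plays the role of the $\ell_0$ count in (3): the system has fewer equations than unknowns, but far fewer than $m$ of the unknowns are actually non-zero.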

For the compressed sensing setting, the approximate message passing (AMP) algorithm was proposed to estimate $\boldsymbol{\alpha}$ with an i.i.d. Laplacian prior and i.i.d. additive white Gaussian noise [Donoho2009]. GAMP can be seen as a generalisation of AMP that allows for a wider range of probability distributions on the signal $\boldsymbol{\alpha}$ and on the measurements $\boldsymbol{y}$ given $\boldsymbol{z}$.

Another prior that can be used to model sparse signals in Bayesian estimators such as GAMP is the so-called spike-and-slab model [Mitchell1988]. According to this model, each entry $\alpha_i$ of $\boldsymbol{\alpha}$ is distributed according to a linear combination of a Dirac delta p.d.f. and another p.d.f. $\phi$:

(4) $\quad p(\alpha_i) = (1 - \tau)\,\delta(\alpha_i) + \tau\,\phi(\alpha_i; \boldsymbol{\theta})$

The Dirac delta $\delta(\alpha_i)$ models the fact that many of the entries are zero (by the sparsity of $\boldsymbol{\alpha}$). The variable $\tau$ controls the sparsity of $\boldsymbol{\alpha}$, i.e. how likely the entries are to be zero. The function $\phi$ can be chosen to represent the p.d.f. of entries of $\boldsymbol{\alpha}$ that are not zero; $\boldsymbol{\theta}$ represents the parameters of $\phi$.

One example of such a spike-and-slab prior is the Bernoulli-Gauss distribution [Vila2011]:

(5) $\quad p(\alpha_i) = (1 - \tau)\,\delta(\alpha_i) + \tau\,\mathcal{N}(\alpha_i; \mu, \sigma^2)$

where the p.d.f. representing the non-zero entries is the Gaussian distribution with mean $\mu$ and std. deviation $\sigma$ (corresponding to the parameters $\boldsymbol{\theta} = (\mu, \sigma)$). The term Bernoulli-Gauss (BG) reflects the fact that each entry can be seen as the product of a Bernoulli random variable (values 0 or 1) and a Gaussian random variable.
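The product-of-variables interpretation makes the BG model trivial to sample. The numpy sketch below draws a long BG vector (the values of $\tau$, $\mu$, $\sigma$ are illustrative assumptions) and checks that the empirical fraction of non-zero entries concentrates around the non-zero probability.

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau = 100_000, 0.1   # tau: probability that an entry is non-zero
mu, sigma = 0.0, 1.0    # mean / std. deviation of the non-zero (slab) part

# Each entry is the product of a Bernoulli(tau) and a Gaussian variable
b = rng.random(n) < tau
alpha = b * rng.normal(mu, sigma, size=n)

frac_nonzero = np.count_nonzero(alpha) / n  # should be close to tau
```

With $n = 10^5$ samples, the empirical non-zero fraction deviates from $\tau = 0.1$ by only about $10^{-3}$.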

Applying GAMP with the signal prior (5) assumes that the entries of $\boldsymbol{\alpha}$ are i.i.d. In many cases, signals of interest exhibit additional structure that can be exploited to estimate them more accurately [Baraniuk2010].

Many different algorithmic approaches to modelling and leveraging such signal structure can be taken. Schniter et al. have for example produced substantial results on incorporating the ability to learn the structure of non-i.i.d. priors into the GAMP framework, see e.g. [Ziniel2012]. We take a different approach here and propose a weighted spike-and-slab model that can be used to model non-i.i.d. signals. We present the corresponding GAMP equations derived for this prior model on $\boldsymbol{\alpha}$, as well as results from numerical simulations that show the improvements in reconstruction capabilities that are achievable using such a structured prior.

## 2 Weighted-Prior GAMP

We re-state the uniform variance MMSE GAMP algorithm [Rangan2011Full] in Algorithm 1 as given in [Oxvig2017] with some variable changes to match (1).

We stress that our proposed prior may equally well be used with the non-uniform variants of GAMP that track individual variances. Due to limited space, we refer the reader to [Oxvig2017] and references therein for a detailed explanation of the algorithms.
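To make the iteration concrete, the following is a minimal numpy sketch of a sum-product GAMP loop with a BG input channel and an additive white Gaussian noise output channel, following the standard formulation in [Rangan2011Full]. Variable names, the fixed iteration count, and all numerical values are our assumptions for illustration; the sketch does not correspond line-by-line to Algorithm 1 or to the reference implementation in [Oxvig2017].

```python
import numpy as np

def gamp_bg(A, y, tau, mu, sig2, noise_var, iters=50):
    """Sum-product GAMP with a Bernoulli-Gauss input channel and an
    AWGN output channel (an illustrative sketch)."""
    m, n = A.shape
    A2 = A ** 2
    # initialise at the mean and variance of the BG prior
    x_hat = np.full(n, tau * mu)
    v_x = np.full(n, tau * (sig2 + mu ** 2) - (tau * mu) ** 2)
    s_hat = np.zeros(m)
    for _ in range(iters):
        # output linear step
        v_p = A2 @ v_x
        p_hat = A @ x_hat - v_p * s_hat
        # AWGN output channel (simplified closed form)
        s_hat = (y - p_hat) / (v_p + noise_var)
        v_s = 1.0 / (v_p + noise_var)
        # input linear step
        v_r = 1.0 / (A2.T @ v_s)
        r_hat = x_hat + v_r * (A.T @ s_hat)
        # BG input channel: posterior probability pi that an entry is non-zero,
        # computed in the log domain for numerical stability
        log_ratio = (np.log((1.0 - tau) / tau)
                     + 0.5 * np.log((v_r + sig2) / v_r)
                     - r_hat ** 2 / (2.0 * v_r)
                     + (r_hat - mu) ** 2 / (2.0 * (v_r + sig2)))
        pi = 1.0 / (1.0 + np.exp(np.clip(log_ratio, -500.0, 500.0)))
        gamma = (r_hat * sig2 + mu * v_r) / (sig2 + v_r)  # slab posterior mean
        nu = v_r * sig2 / (v_r + sig2)                    # slab posterior variance
        x_hat = pi * gamma
        v_x = pi * (nu + gamma ** 2) - x_hat ** 2
    return x_hat

# usage: reconstruct a k-sparse signal from m < n noisy measurements
rng = np.random.default_rng(2)
m, n, k = 150, 250, 10
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
alpha = np.zeros(n)
alpha[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = A @ alpha + rng.normal(0.0, 1e-4, size=m)
alpha_hat = gamp_bg(A, y, tau=k / n, mu=0.0, sig2=1.0, noise_var=1e-8)
rel_err = np.linalg.norm(alpha_hat - alpha) / np.linalg.norm(alpha)
```

With these dimensions the problem lies well inside the BG phase transition, so the loop typically converges to a near-exact reconstruction within a few tens of iterations without damping.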

In order to apply the GAMP algorithm with a specific entry-wise input prior $p(\alpha_i)$ and measurement channel function $p(y_j \mid z_j)$, it is necessary to derive the corresponding posterior p.d.f.s. The evaluation of the input posterior and measurement channel functions is incorporated in the GAMP iterations in Algorithm 1: the input channel functions in lines 11-12 and the output channel functions in lines 5-6. We refer to these as the GAMP input and output channel functions, respectively.

Channel functions for the BG input channel (5) can be found in [Krzakala2012a, JasonParker_PhDThesis, Vila2011] and for the additive white Gaussian noise output channel in [Rangan2011, Rangan2011Full].

Here we propose a modified Bernoulli-Gauss input channel that supports non-uniform sparsity over the signal $\boldsymbol{\alpha}$, i.e. different entries can have different probabilities of being zero. We extend the spike-and-slab model (4) as follows [Oxvig2017, p. 15]:

(6) $\quad p(\alpha_i) = (1 - \tau w_i)\,\delta(\alpha_i) + \tau w_i\,\phi(\alpha_i; \boldsymbol{\theta})$

where the weights $w_i$ allow the probability of a non-zero entry to vary across the entries of $\boldsymbol{\alpha}$.

General input channel functions have been derived for this model [Oxvig2017, p. 16]. As one example of such a weighted spike-and-slab input channel, we have derived the following closed-form expressions for the weighted BG input channel functions [Oxvig2017, p. 26], i.e. where $\phi(\alpha_i; \boldsymbol{\theta}) = \mathcal{N}(\alpha_i; \mu, \sigma^2)$:

(7) $\quad f_{a_i}(r_i, s) = \pi_i \gamma_i$

(8) $\quad f_{c_i}(r_i, s) = \pi_i \left(\nu_i + \gamma_i^2\right) - \left(\pi_i \gamma_i\right)^2$

where the function $\pi_i$ is given in eqs. (3.46)-(3.50) in [Oxvig2017, p. 15] and $\gamma_i$ and $\nu_i$ are the mean and the variance, respectively, of the Gaussian term in each entry of the GAMP input channel.
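A sketch of how such weighted BG channel functions can be implemented is given below. It uses the standard BG posterior computations with an entry-wise non-zero probability $\tau w_i$; the symbols `pi`, `gamma`, and `nu` correspond to the posterior non-zero probability and the mean and variance of the Gaussian (slab) part of the posterior. The exact reference expressions are those in [Oxvig2017]; this is our illustrative reconstruction, not the reference implementation.

```python
import numpy as np

def weighted_bg_channel(r, s, tau, w, mu, sig2):
    """Posterior mean / variance of alpha_i under a weighted BG prior with
    entry-wise non-zero probability tau * w_i, given pseudo-data
    r_i ~ N(alpha_i, s_i). An illustrative sketch."""
    tau_i = tau * w
    # posterior probability that alpha_i is non-zero (log domain for stability)
    log_ratio = (np.log((1.0 - tau_i) / tau_i)
                 + 0.5 * np.log((s + sig2) / s)
                 - r ** 2 / (2.0 * s)
                 + (r - mu) ** 2 / (2.0 * (s + sig2)))
    pi = 1.0 / (1.0 + np.exp(np.clip(log_ratio, -500.0, 500.0)))
    # mean and variance of the Gaussian (slab) part of the posterior
    gamma = (r * sig2 + mu * s) / (sig2 + s)
    nu = s * sig2 / (s + sig2)
    f_a = pi * gamma                        # posterior mean
    f_c = pi * (nu + gamma ** 2) - f_a ** 2  # posterior variance
    return f_a, f_c
```

The weighting acts as expected: an entry with a large weight and confident pseudo-data follows the data, while an entry with a very small weight is shrunk towards zero regardless of moderate pseudo-data.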

With the proposed weighted input prior (6) it is possible to model signals with varying probability of zero entries across the signal $\boldsymbol{\alpha}$. Additionally, similar to the mechanism for the i.i.d. BG input channel in [Vila2011], we have derived formulas for updating the parameters $\tau$ and $\boldsymbol{\theta}$ using expectation-maximization (EM) as part of the GAMP algorithm [Oxvig2017, sec. 6]. We retain separate parameters $\tau$ and $\boldsymbol{w}$ to enable estimating the overall sparsity $\tau$ via EM without too many free parameters.

Next, we demonstrate by numerical experiments how the proposed model can improve estimation of compressively sensed signals with a known non-uniform sparsity structure.

## 3 Numerical Example

We perform a set of numerical experiments to demonstrate the benefits of the proposed weighted sparse prior for GAMP when such a model matches the signal of interest. We simulate the algorithm’s reconstruction capabilities in the form of a phase transition diagram over the full sparsity / under-sampling parameter space. See [Donoho2009a] for an introduction to the phase transition in compressed sensing problems.

We simulate sparse BG signals according to the proposed weighted model (6) where the weights $w_i$ are selected to have a Gaussian shape over the support of the signal vector $\boldsymbol{\alpha}$:

(9) $\quad w_i \propto \exp\left(-\frac{(i - n/2)^2}{2\sigma_w^2}\right), \quad i = 1, \ldots, n$

where $\sigma_w$ controls the width of the Gaussian shape.
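One way such a Gaussian-shaped weight profile can be generated is sketched below; the width parameter `sigma_w` and the normalisation to a maximum weight of 1 are our assumptions for illustration (the exact parametrisation used in the experiments is available with the source code in [Oxvig2017_results]).

```python
import numpy as np

n = 256
sigma_w = n / 8  # assumed width of the Gaussian weight profile
i = np.arange(n)

# Gaussian bell centred on the middle of the signal support
w = np.exp(-(i - n / 2) ** 2 / (2.0 * sigma_w ** 2))
w /= w.max()     # normalise so the largest weight is 1 (an assumption)
```

Entries near the centre of the support then have the highest probability of being non-zero, and the probability decays smoothly towards the edges.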

We partition the parameter space of the under-sampling ratio $\delta = m/n$ and the sparsity ratio $\rho = k/m$ into a grid of points, in each of which we reconstruct 10 random signals with $k$ non-zero entries on average, measured with a randomly generated matrix $\boldsymbol{A}$ with i.i.d. random Gaussian entries. The reconstruction success (exact to within a small tolerance) of each signal is evaluated, and the success rates in the points are used to estimate the phase transitions shown in Figure 1. The results shown here are a small subset of the simulation results available along with source code in [Oxvig2017_results].

For reference, the figure includes the theoretical $\ell_1$-optimisation phase transition curve [Donoho2009a], simulation results for a non-weighted BG prior (5), as well as simulation results for the implied Laplacian model used by AMP, “DMM AMP” [Donoho2009]. We present results for a weighted prior that matches our Gaussian weights in (9) in two configurations: 1) the algorithm knows the true model parameters (genie), and 2) EM is used with a re-weighting scheme to estimate the parameters [Oxvig2017, sec. 6]. Comparing the two, we can see that knowing the underlying sparsity structure of the signal of interest improves the reconstruction capabilities substantially compared to simply assuming no weighting. Note, however, that it is important to allow the GAMP algorithm some “slack” in the form of estimating the parameters $\tau$, $\mu$, and $\sigma$ using EM to get the best performance. The error bars shown on the result curves correspond to the 10% to 90% percentile range of the logistic sigmoid functions fitted to the reconstruction outcomes to estimate the phase transition.

## 4 Conclusion

We have proposed a model for a class of non-uniformly structured sparse signals for use in the generalised approximate message passing (GAMP) algorithm. The proposed approach models sparse signals where the probability of zero vs. non-zero entries can vary across the support of the signal of interest. We have demonstrated through numerical examples how exploiting such structure when present in signals can substantially improve reconstruction of compressively sensed signals.