Bayesian Compressive Sensing
via Belief Propagation
Abstract
Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can complement conventional CS methods based on linear programming or greedy algorithms. We perform approximate Bayesian inference using belief propagation (BP) decoding, which represents the CS encoding matrix as a graphical model. Fast computation is obtained by reducing the size of the graphical model with sparse encoding matrices. To decode a length-$N$ signal containing $K$ large coefficients, our CSBP decoding algorithm uses $O(K\log N)$ measurements and $O(N\log^2 N)$ computation. Finally, although we focus on a two-state mixture Gaussian model, CSBP is easily adapted to other signal models.
1 Introduction
Many signal processing applications require the identification and estimation of a few significant coefficients from a high-dimensional vector. The wisdom behind this is the ubiquitous compressibility of signals: in an appropriate basis, most of the information contained in a signal often resides in just a few large coefficients. Traditional sensing and processing first acquires the entire data, only to later throw away most coefficients and retain the few significant ones [2]. Interestingly, the information contained in the few large coefficients can be captured (encoded) by a small number of random linear projections [3]. The groundbreaking work in compressive sensing (CS) [4, 5, 6] has proved for a variety of settings that the signal can then be decoded in a computationally feasible manner from these random projections.
1.1 Compressive sensing
Sparsity and random encoding: In a typical compressive sensing (CS) setup, a signal vector $x \in \mathbb{R}^N$ has the form $x = \Psi\theta$, where $\Psi$ is an orthonormal basis and $\theta$ satisfies $\|\theta\|_0 = K \ll N$. (We use $\|\cdot\|_0$ to denote the $\ell_0$ “norm” that counts the number of nonzero elements.) Owing to the sparsity of $x$ relative to the basis $\Psi$, there is no need to sample all $N$ values of $x$. Instead, the CS theory establishes that $x$ can be decoded from a small number of projections onto an incoherent set of measurement vectors [4, 5]. To measure (encode) $x$, we compute $M \ll N$ linear projections of $x$ via the matrix-vector multiplication $y = \Phi x$, where $\Phi \in \mathbb{R}^{M \times N}$ is the encoding matrix.
In addition to strictly sparse signals, where $\|\theta\|_0 = K \ll N$, other signal models are possible. Approximately sparse signals have $K$ large coefficients, while the remaining $N - K$ coefficients are small but not necessarily zero. Compressible signals have coefficients that, when sorted, decay quickly according to a power law. Similarly, both noiseless and noisy signals and measurements may be considered. We emphasize noiseless measurement of approximately sparse signals in this paper.
Decoding via sparsity: Our goal is to decode $x$ given $y$ and $\Phi$. Although decoding $x$ from $y = \Phi x$ appears to be an ill-posed inverse problem, the prior knowledge of sparsity in $x$ enables us to decode $x$ from $M \ll N$ measurements. Decoding often relies on an optimization that searches for the sparsest coefficients $\theta$ that agree with the measurements $y$. If $M$ is sufficiently large and $\theta$ is strictly sparse, then $\theta$ is the solution to the $\ell_0$ minimization
$$\widehat{\theta} = \arg\min \|\theta\|_0 \quad \text{subject to} \quad y = \Phi\Psi\theta.$$
Unfortunately, solving this $\ell_0$ optimization is NP-complete [7].
The revelation that supports the CS theory is that a computationally tractable optimization problem yields an equivalent solution. We need only solve for the $\ell_1$-sparsest coefficients that agree with the measurements $y$ [4, 5]:

(1) $\widehat{\theta} = \arg\min \|\theta\|_1 \quad \text{subject to} \quad y = \Phi\Psi\theta,$

as long as $\Phi\Psi$ satisfies some technical conditions, which hold with overwhelming probability when the entries of $\Phi$ are independent and identically distributed (iid) sub-Gaussian random variables [4]. This optimization problem (1), also known as Basis Pursuit [8], can be solved with linear programming methods. The $\ell_1$ decoder requires only $M = O(K\log(N/K))$ projections [9, 10]. However, encoding by a dense Gaussian $\Phi$ is slow, and $\ell_1$ decoding requires cubic computation in general [11].
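To make (1) concrete, Basis Pursuit can be recast as a linear program by splitting $\theta$ into its positive and negative parts. The sketch below (using `scipy.optimize.linprog`; the problem sizes and the dense Gaussian $\Phi$ are illustrative choices, not the paper's setup) solves the resulting LP:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve min ||theta||_1 subject to Phi @ theta = y as a linear program.

    Split theta = u - v with u, v >= 0, so that ||theta||_1 = sum(u + v)."""
    M, N = Phi.shape
    c = np.ones(2 * N)                     # objective: sum(u) + sum(v)
    A_eq = np.hstack([Phi, -Phi])          # Phi @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    return res.x[:N] - res.x[N:]

rng = np.random.default_rng(1)
N, M, K = 32, 16, 2
theta = np.zeros(N)
theta[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N))          # dense Gaussian encoder, for illustration
y = Phi @ theta
theta_hat = basis_pursuit(Phi, y)
```

For a strictly sparse $\theta$ and sufficiently many measurements, the LP solution typically coincides with $\theta$ itself; in any case it is feasible ($\Phi\widehat{\theta} = y$) and has $\ell_1$ norm no larger than that of $\theta$.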
1.2 Fast CS decoding
While $\ell_1$ decoders figure prominently in the CS literature, their cubic complexity still renders them impractical for many applications. For example, current digital cameras acquire images with $N = 10^6$ pixels or more, and fast decoding is critical. The slowness of $\ell_1$ decoding has motivated a flurry of research into faster algorithms.
One line of research involves iterative greedy algorithms. The Matching Pursuit (MP) [12] algorithm, for example, iteratively selects the vectors from the matrix $\Phi\Psi$ that contain most of the energy of the measurement vector $y$. MP has been proven to successfully decode the acquired signal with high probability [12, 13]. Algorithms inspired by MP include OMP [12], tree matching pursuit [14], stagewise OMP [15], CoSaMP [16], IHT [17], and Subspace Pursuit [18]; several of these have been shown to attain guarantees similar to those of their optimization-based counterparts [19, 20, 21].
While the CS algorithms discussed above typically use a dense encoding matrix $\Phi$, a class of methods has emerged that employs structured $\Phi$. For example, subsampling an orthogonal basis that admits a fast implicit algorithm also leads to fast decoding [4]. Encoding matrices that are themselves sparse can also be used. Cormode and Muthukrishnan proposed fast streaming algorithms based on group testing [22, 23], which consider subsets of signal coefficients in which we expect at most one “heavy hitter” coefficient to lie. Gilbert et al. [24] propose the Chaining Pursuit algorithm, which works best for extremely sparse signals.
1.3 Bayesian CS
CS decoding algorithms rely on the sparsity of the signal $x$. In some applications, a statistical characterization of the signal is available, and Bayesian inference offers the potential for more precise estimation of $x$ or a reduction in the number of CS measurements. Ji et al. [25] have proposed a Bayesian CS framework where relevance vector machines are used for signal estimation. For certain types of hierarchical priors, their method can approximate the posterior density of $x$ and is somewhat faster than $\ell_1$ decoding. Seeger and Nickisch [26] extend these ideas to experimental design, where the encoding matrix is designed sequentially based on previous measurements. Another Bayesian approach by Schniter et al. [27] approximates the conditional expectation by extending the maximum likelihood approach to a weighted mixture of the most likely models. There are also many related results on the application of Bayesian methods to sparse inverse problems (cf. [28] and references therein).
Bayesian approaches have also been used for multiuser decoding (MUD) in communications. In MUD, users modulate their symbols with different spreading sequences, and the received signal is a superposition of these sequences. Because most users are inactive, MUD algorithms extract information from a sparse superposition in a manner analogous to CS decoding. Guo and Wang [29] perform MUD using sparse spreading sequences and decode via belief propagation (BP) [30, 31, 32, 33, 34, 35]; our paper also uses sparse encoding matrices and BP decoding. A related algorithm for decoding low density lattice codes (LDLC) by Sommer et al. [36] uses BP on a factor graph whose self and edge potentials are Gaussian mixtures. Convergence results for the LDLC decoding algorithm have been derived for Gaussian noise [36].
1.4 Contributions
In this paper, we develop a sparse encoding matrix $\Phi$ and a belief propagation (BP) decoder to accelerate CS encoding and decoding under the Bayesian framework. We call our algorithm CSBP. Although we emphasize a two-state mixture Gaussian model as a prior for sparse signals, CSBP is flexible to variations in the signal and measurement models.
Encoding by sparse CS matrix: The dense sub-Gaussian CS encoding matrices [4, 5] are reminiscent of Shannon’s random code constructions. However, although dense matrices capture the information content of sparse signals, they may not be amenable to fast encoding and decoding. Low density parity check (LDPC) codes [37, 38] offer an important insight: encoding and decoding are fast, because multiplication by a sparse matrix is fast; nonetheless, LDPC codes achieve rates close to the Shannon limit. Indeed, in a previous paper [39], we used an LDPC-like sparse $\Phi$ for the special case of noiseless measurement of strictly sparse signals; similar matrices were also proposed for CS by Berinde and Indyk [40]. Although LDPC decoding algorithms may not have provable convergence, the recent extension of LDPC to LDLC codes [36] offers provable convergence, which may lead to similar future results for CS decoding.
We encode (measure) the signal using sparse Rademacher ($\pm 1$) LDPC-like matrices. Because the nonzero entries of $\Phi$ are restricted to $\pm 1$, encoding only requires sums and differences of small subsets of the coefficient values of $x$. The design of $\Phi$, including characteristics such as column and row weights, is based on the relevant signal and measurement models, as well as the accompanying decoding algorithm.
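A minimal sketch of such an encoding matrix, assuming a constant row weight $L$ with nonzero entries drawn uniformly from $\pm 1$ (the function name and parameters are illustrative; the paper's construction may impose additional structure, e.g., on column weights):

```python
import numpy as np

def cs_ldpc_matrix(M, N, L, seed=0):
    """Sketch of a CSLDPC-style encoding matrix: each row has exactly L
    nonzero entries, each equal to -1 or +1, at randomly chosen positions."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((M, N))
    for m in range(M):
        cols = rng.choice(N, size=L, replace=False)   # support of row m
        Phi[m, cols] = rng.choice([-1.0, 1.0], size=L)
    return Phi

M, N, L = 12, 32, 4
Phi = cs_ldpc_matrix(M, N, L)
x = np.zeros(N)
x[3], x[17] = 5.0, -2.0        # an approximately sparse signal (illustrative values)
y = Phi @ x                     # encoding: only sums and differences of coefficients
```

Because the matrix is sparse and its entries are $\pm 1$, computing each measurement touches only $L$ coefficients and uses no multiplications.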
Decoding by BP: We represent the sparse $\Phi$ as a sparse bipartite graph. In addition to accelerating the algorithm, the sparse structure reduces the number of loops in the graph and thus assists the convergence of the message passing method that solves a Bayesian inference problem. Our estimate for $x$ explains the measurements while offering the best match to the prior. We employ BP in a manner similar to LDPC channel decoding [38, 37, 34]. To decode a length-$N$ signal containing $K$ large coefficients, our CSBP decoding algorithm uses $O(K\log N)$ measurements and $O(N\log^2 N)$ computation. Although CSBP is not guaranteed to converge, numerical results are quite favorable.
The remainder of the paper is organized as follows. Section 2 defines our signal model, and Section 3 describes our sparse CSLDPC encoding matrix. The CSBP decoding algorithm is described in Section 4, and its performance is demonstrated numerically in Section 5. Variations and applications are discussed in Section 6, and Section 7 concludes.
2 Mixture Gaussian signal model
We focus on a two-state mixture Gaussian model [41, 42, 43] as a prior that succinctly captures our prior knowledge about the approximate sparsity of the signal. Bayesian inference using a two-state mixture model has been studied well before the advent of CS, for example by George and McCulloch [44] and Geweke [45]; the model was proposed for CS in [1] and also used by He and Carin [46]. More formally, let $X$ be a random vector in $\mathbb{R}^N$, and consider the signal $x$ as an outcome of $X$. Because our approximately sparse signal consists of a small number of large coefficients and a large number of small coefficients, we associate with each coefficient $X_i$ a state variable $Q_i$ that can take on two values. Large and small magnitudes correspond to zero-mean Gaussian probability density functions (pdf's) with high and low variances, implied by $Q_i = 1$ and $Q_i = 0$, respectively, with $\sigma_1^2 > \sigma_0^2$. Let $Q$ be the state random vector associated with the signal; the actual configuration $q$ is one of $2^N$ possible outcomes. We assume that the $Q_i$'s are iid. (The model can be extended to capture dependencies between coefficients, as suggested by Ji et al. [25].) To ensure that we have approximately $K$ large coefficients, we choose the probability mass function (pmf) of the state variable to be Bernoulli with $\Pr(Q_i = 1) = S$ and $\Pr(Q_i = 0) = 1 - S$, where $S = K/N$ is the sparsity rate.
The resulting model for signal coefficients is a two-state mixture Gaussian distribution, as illustrated in Figure 1. This mixture model is completely characterized by three parameters: the sparsity rate $S$ and the variances $\sigma_0^2$ and $\sigma_1^2$ of the Gaussian pdf's corresponding to each state.
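The three-parameter prior can be sampled directly; the following sketch draws a signal and its state vector (all parameter values here are illustrative, not taken from the paper's experiments):

```python
import numpy as np

def sample_mixture_signal(N, S, sigma0, sigma1, seed=0):
    """Draw x from the two-state mixture Gaussian prior:
    q_i ~ Bernoulli(S);  x_i | q_i ~ N(0, sigma1^2) if q_i = 1, else N(0, sigma0^2)."""
    rng = np.random.default_rng(seed)
    q = (rng.random(N) < S).astype(int)          # state vector: 1 marks a large coefficient
    sigma = np.where(q == 1, sigma1, sigma0)     # per-coefficient standard deviation
    x = sigma * rng.standard_normal(N)
    return x, q

x, q = sample_mixture_signal(N=1000, S=0.1, sigma0=0.1, sigma1=10.0)
```

On average a fraction $S$ of the coefficients come from the high-variance state, matching the intended approximate sparsity.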
Mixture Gaussian models have been successfully employed in image processing and inference problems, because they are simple yet effective in modeling real-world signals [41, 42, 43]. Theoretical connections have also been made between wavelet coefficient mixture models and the fundamental parameters of Besov spaces, which have proved invaluable for characterizing real-world images. Moreover, arbitrary densities with a finite number of discontinuities can be approximated arbitrarily closely by increasing the number of states and allowing nonzero means [47]. We leave these extensions for future work, and focus on two-state mixture Gaussian distributions for modeling the signal coefficients.
3 Sparse encoding
Sparse CS encoding matrix: We use a sparse $\Phi$ to accelerate both CS encoding and decoding. Our CS encoding matrices are dominated by zero entries, with a small number of nonzeros in each row and each column. We focus on CSLDPC matrices whose nonzero entries are $\pm 1$. (CSLDPC matrices are slightly different from LDPC parity check matrices, which only contain the binary entries 0 and 1. We have observed numerically that allowing negative entries offers improved performance. At the expense of additional computation, further minor improvement can be attained using sparse matrices with Gaussian nonzero entries.) Each measurement involves only sums and differences of a small subset of the coefficients of $x$. Although the coherence between a sparse $\Phi$ and $\Psi$, which is the maximal inner product between rows of $\Phi$ and columns of $\Psi$, may be higher than the coherence using a dense $\Phi$ [48], as long as $\Phi$ is not too sparse (see Theorem 1 below) the measurements capture enough information about $x$ to decode the signal. A CSLDPC $\Phi$ can be represented as a bipartite graph $G$, which is also sparse. Each edge of $G$ connects a coefficient node to an encoding node and corresponds to a nonzero entry of $\Phi$ (Figure 2).
In addition to the core structure of $\Phi$, we can introduce other constraints to tailor the measurement process to the signal model. The constant row weight constraint ensures that each row of $\Phi$ contains exactly $L$ nonzero entries. The row weight $L$ can be chosen based on signal properties such as sparsity, possible measurement noise, and details of the decoding process. Another option is to use a constant column weight constraint, which fixes the number of nonzero entries in each column of $\Phi$ to a constant value.
Although our emphasis is on noiseless measurement of approximately sparse signals, we briefly discuss noisy measurement of a strictly sparse signal, and show that a constant row weight ensures that the measurements are approximated by two-state mixture Gaussians. To see this, consider a strictly sparse $x$ with sparsity rate $S$ and Gaussian variance $\sigma_1^2$ for the nonzero coefficients. We now have $y = \Phi x + z$, where $z$ is additive white Gaussian noise (AWGN). In our approximately sparse setting, each row of $\Phi$ instead picks up a number of small-magnitude coefficients; if the noise variance is comparable to their total contribution, then the few large coefficients will be obscured by similar noise artifacts.
Our definition of $\Phi$ relies on the implicit assumption that $x$ is sparse in the canonical sparsifying basis, i.e., $\Psi = I$. In contrast, if $x$ is sparse in some other basis $\Psi$, then more complicated encoding matrices may be necessary. We defer the discussion of these issues to Section 6, but emphasize that in many practical situations our methods can be extended to support the sparsifying basis $\Psi$ in a computationally tractable manner.
Information content of sparsely encoded measurements: The sparsity of our CSLDPC matrix $\Phi$ may yield measurements that contain less information about the signal than a dense Gaussian $\Phi$. The following theorem, whose proof appears in the Appendix, verifies that $y = \Phi x$ retains enough information to decode $x$ well; $M = O(K\log N)$ measurements are sufficient.
Theorem 1
Let $x$ be a two-state mixture Gaussian signal with sparsity rate $S = K/N$ and variances $\sigma_0^2$ and $\sigma_1^2$, and let $\Phi$ be a CSLDPC matrix with constant row weight $L$. If the number of measurements $M$ satisfies
(2) 
then $x$ can be decoded to $\widehat{x}$ such that the decoding error is small with high probability.
The proof of Theorem 1 relies on a result by Wang et al. [49, Theorem 1]. Their proof partitions $\Phi$ into submatrices and estimates each signal coefficient as a median of inner products with these submatrices. The performance guarantee relies on the union bound; a less stringent guarantee yields a reduction in the required number of measurements $M$. Moreover, the row weight $L$ can be reduced if we increase the number of measurements accordingly. Based on numerical results, we propose the following modified values of $M$ and $L$ as rules of thumb,
(3) 
Noting that each measurement requires $O(L)$ additions and subtractions, and using our rules of thumb for $M$ and $L$ (3), the computation required for encoding is $O(ML)$, which is significantly lower than the $O(MN)$ required for a dense Gaussian $\Phi$.
4 CSBP decoding of approximately sparse signals
Decoding approximately sparse random signals can be treated as a Bayesian inference problem. We observe the measurements $y = \Phi x$, where $x$ is a mixture Gaussian signal. Our goal is to estimate $x$ given $\Phi$ and $y$. Because the set of equations $y = \Phi x$ is underdetermined, there are infinitely many solutions. All solutions lie along a hyperplane of dimension $N - M$. We locate the solution within this hyperplane that best matches our prior signal model. Consider the minimum mean square error (MMSE) and maximum a posteriori (MAP) estimates,
$$\widehat{x}_{\mathrm{MMSE}} = \arg\min_{\widehat{x}} E\big[\|x - \widehat{x}\|_2^2\big], \qquad \widehat{x}_{\mathrm{MAP}} = \arg\max_{\widehat{x}} f(x = \widehat{x} \mid y),$$
where the expectation is taken over the prior distribution for $x$. The MMSE estimate can be expressed as the conditional mean, $\widehat{x}_{\mathrm{MMSE}} = E[X \mid Y = y]$, where $Y$ is the random vector that corresponds to the measurements. Although the precise computation of $\widehat{x}_{\mathrm{MMSE}}$ may require the evaluation of $2^N$ terms, one for each state configuration, a close approximation to the MMSE estimate can be obtained using the (usually small) set of state configuration vectors with dominant posterior probability [27]. Indeed, exact inference in graphical models is NP-hard [50], because of loops in the graph induced by $\Phi$. However, the sparse structure of $\Phi$ reduces the number of loops and enables us to use low-complexity message-passing methods to estimate $x$ approximately.
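The conditional-mean structure can be made concrete in a scalar toy case: observe $y = x + n$ with Gaussian noise and a two-state mixture prior on $x$. The posterior is again a two-component mixture, and the MMSE estimate is a posterior-weighted combination of per-state Wiener shrinkage estimates. A sketch (all parameter values are illustrative):

```python
import math

def scalar_mmse(y, S, s0, s1, sn):
    """MMSE estimate E[x | y] for y = x + n, where x has a two-state mixture
    Gaussian prior (weights 1-S, S; VARIANCES s0, s1) and n ~ N(0, sn)."""
    def norm_pdf(t, var):
        return math.exp(-t * t / (2 * var)) / math.sqrt(2 * math.pi * var)
    # evidence of each state: y | q=k is N(0, s_k + sn)
    e0 = (1 - S) * norm_pdf(y, s0 + sn)
    e1 = S * norm_pdf(y, s1 + sn)
    w1 = e1 / (e0 + e1)                        # posterior probability of the large state
    # per-state posterior mean is the Wiener shrinkage estimate s_k/(s_k + sn) * y
    return (1 - w1) * (s0 / (s0 + sn)) * y + w1 * (s1 / (s1 + sn)) * y

S, s0, s1, sn = 0.1, 0.01, 4.0, 0.25           # illustrative parameter values
est = scalar_mmse(1.7, S, s0, s1, sn)          # ≈ 1.35: posterior favors the large state
```

The full vector problem replaces this closed form with a sum over $2^N$ state configurations, which is what the message-passing approximation below avoids.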
4.1 Decoding algorithm
We now employ belief propagation (BP), an efficient method for solving inference problems by iteratively passing messages over graphical models [30, 31, 32, 33, 34, 35]. Although BP has not been proved to converge, for graphs with few loops it often offers a good approximation to the solution to the MAP inference problem. BP relies on factor graphs, which enable fast computation of global multivariate functions by exploiting the way in which the global function factors into a product of simpler local functions, each of which depends on a subset of variables [51].
Factor graph for CSBP: The factor graph shown in Figure 2 captures the relationship between the states $q$, the signal coefficients $x$, and the observed CS measurements $y$. The graph is bipartite and contains two types of vertices; all edges connect variable nodes (black) and constraint nodes (white). There are three types of variable nodes, corresponding to state variables $Q_i$, coefficient variables $X_i$, and measurement variables $Y_j$. The factor graph also has three types of constraint nodes, which encapsulate the dependencies that their neighbors in the graph (variable nodes) are subjected to. First, prior constraint nodes impose the Bernoulli prior on state variables. Second, mixing constraint nodes impose the conditional distribution on coefficient variables given the state variables. Third, encoding constraint nodes impose the encoding matrix structure on measurement variables.
Message passing: CSBP approximates the marginal distributions of all coefficient and state variables in the factor graph, conditioned on the observed measurements $y$, by passing messages between variable nodes and constraint nodes. Each message encodes the marginal distribution of the variable associated with one of the edges. Given the approximate marginals of each coefficient $X_i$ and state $Q_i$, one can extract MAP and MMSE estimates for each coefficient.
Denote the message sent from a variable node $v$ to one of its neighbors in the bipartite graph, a constraint node $c$, by $\mu_{v\to c}(v)$; a message from $c$ to $v$ is denoted by $\mu_{c\to v}(v)$. The message $\mu_{v\to c}(v)$ is updated by taking the product of all messages received by $v$ on all other edges. The message $\mu_{c\to v}(v)$ is computed in a similar manner, but the constraint associated with $c$ is applied to the product and the result is marginalized. More formally,

(4) $\mu_{v\to c}(v) = \prod_{u \in n(v)\setminus\{c\}} \mu_{u\to v}(v),$

(5) $\mu_{c\to v}(v) = \sum_{\sim\{v\}} \mathrm{con}(n(c)) \prod_{w \in n(c)\setminus\{v\}} \mu_{w\to c}(w),$

where $n(v)$ and $n(c)$ are the sets of neighbors of $v$ and $c$, respectively, $\mathrm{con}(n(c))$ is the constraint on the set of variable nodes $n(c)$, $n(v)\setminus\{c\}$ is the set of neighbors of $v$ excluding $c$, and $\sum_{\sim\{v\}}$ marginalizes out all variables except $v$. We interpret these two types of message processing as multiplication of beliefs at variable nodes (4) and convolution at constraint nodes (5). Finally, the marginal distribution for a given variable node is obtained from the product of all the most recent incoming messages along the edges connecting to that node,

(6) $f(v) \propto \prod_{u \in n(v)} \mu_{u\to v}(v).$
Based on the marginal distribution, various statistical characterizations can be computed, including MMSE, MAP, error bars, and so on.
We also need a method to encode beliefs. One method is to sample the relevant pdf’s uniformly and then use the samples as messages. Another encoding method is to approximate the pdf by a mixture Gaussian with a given number of components, where mixture parameters are used as messages. These two methods offer different tradeoffs between modeling flexibility and computational requirements; details appear in Sections 4.2 and 4.3. We leave alternative methods such as particle filters and importance sampling for future research.
Protecting against loopy graphs and message quantization errors: BP converges to the exact conditional distribution in the ideal situation where the following conditions are met: (i) the factor graph is cycle-free; and (ii) messages are processed and propagated without errors. In CSBP decoding, both conditions are violated. First, the factor graph is loopy — it contains cycles. Second, message encoding methods introduce errors. These non-idealities may lead CSBP to converge to imprecise conditional distributions, or more critically, lead CSBP to diverge [52, 53, 54]. To some extent these problems can be reduced by (i) using CSLDPC matrices, which have a relatively modest number of loops; and (ii) carefully designing our message encoding methods (Sections 4.2 and 4.3). We stabilize CSBP against these non-idealities using message damped belief propagation (MDBP) [55], where messages are weighted averages between old and new estimates. Despite the damping, CSBP is not guaranteed to converge, yet the numerical results of Section 5 demonstrate that its performance is quite promising. We conclude with a prototype algorithm; Matlab code is available at http://dsp.rice.edu/CSBP.
CSBP Decoding Algorithm

Initialization: Initialize the iteration counter $t = 1$. Set up data structures for the factor graph messages $\mu_{v\to c}(v)$ and $\mu_{c\to v}(v)$. Initialize the messages from variable to constraint nodes with the signal prior.

Convolution: For each measurement $y_c$, which corresponds to constraint node $c$, compute $\mu_{c\to v}(v)$ via convolution (5) for all neighboring variable nodes $v$. If measurement noise is present, then convolve further with a noise prior. Apply damping methods such as MDBP [55] by weighting the new estimates from iteration $t$ with estimates from previous iterations.

Multiplication: For each variable node $v$, compute $\mu_{v\to c}(v)$ via multiplication (4) for all neighboring constraint nodes $c$, applying damping as above. If the iteration limit has not been reached, increment $t$ and return to the convolution step.

Output: For each coefficient, compute MMSE or MAP estimates (or alternative statistical characterizations) based on the marginal distribution (6). Output the requisite statistics.
4.2 Samples of the pdf as messages
Having described the main aspects of the CSBP decoding algorithm, we now focus on the two message encoding methods, starting with samples. In this method, we sample the pdf and send the samples as messages. Multiplication of pdf's (4) corresponds to pointwise multiplication of messages; convolution (5) is computed efficiently in the frequency domain. (Fast convolution via FFT has also been used in BP decoding of LDPC codes over large alphabets [34].)
The main advantage of using samples is flexibility to different prior distributions for the coefficients; for example, mixture Gaussian priors are easily supported. Additionally, both multiplication and convolution are computed efficiently. However, sampling has large memory requirements and introduces quantization errors that reduce precision and hamper the convergence of CSBP [52]. Precise decoding also requires a fine sampling grid; we propose to sample the pdf's with a spacing smaller than $\sigma_0$, the standard deviation of the small coefficients.
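The two message operations on sampled pdf's can be sketched as follows: pointwise multiplication for (4), and FFT-based convolution for (5). The grid spacing and range below are illustrative choices, not the paper's settings:

```python
import numpy as np

# Messages are sampled pdfs on a uniform grid. Variable-node updates multiply
# messages pointwise; constraint-node updates convolve them, done here in the
# frequency domain via zero-padded FFTs.
dx = 0.05
grid = np.arange(-20, 20, dx)           # 800 samples

def gaussian_pdf(mu, var):
    p = np.exp(-(grid - mu) ** 2 / (2 * var))
    return p / (p.sum() * dx)           # normalize on the grid

def multiply(msgs):
    """Pointwise product of sampled pdfs, renormalized (Eq. (4)-style)."""
    p = np.prod(msgs, axis=0)
    return p / (p.sum() * dx)

def convolve(a, b):
    """Linear convolution of sampled pdfs via FFT (Eq. (5)-style)."""
    n = len(grid)
    c = np.fft.irfft(np.fft.rfft(a, 2 * n) * np.fft.rfft(b, 2 * n), 2 * n)
    c = c[n // 2 : n // 2 + n] * dx     # keep the window of sums landing on the grid
    return c / (c.sum() * dx)

a = gaussian_pdf(1.0, 0.5)
b = gaussian_pdf(2.0, 1.5)
c = convolve(a, b)                      # pdf of the sum of the two variables
mean_c = (grid * c).sum() * dx
```

The convolution of two Gaussian messages is again Gaussian: here the result has mean $1.0 + 2.0 = 3.0$ and variance $0.5 + 1.5 = 2.0$, up to discretization error.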
We analyze the computational requirements of this method. Let each message be a vector of $p$ samples. Each iteration performs multiplication at coefficient nodes (4) and convolution at constraint nodes (5). Outgoing messages are modified,

(7) $\mu_{v\to c}(v) = \dfrac{\prod_{u \in n(v)} \mu_{u\to v}(v)}{\mu_{c\to v}(v)},$

and analogously at constraint nodes, where the division is performed in the frequency domain. The denominators are nonzero, because mixture Gaussian pdf's are strictly positive. The modifications (7) reduce computation, because the numerators are computed once and then reused for all messages leaving the node being processed.
Assuming that the column weight is fixed (Section 3), the computation required for message processing at a variable node scales linearly in the message length $p$ per iteration, because we multiply a constant number of vectors of length $p$; with $N$ variable nodes, each iteration requires $O(Np)$ computation at the variable nodes. For constraint nodes, we perform convolution in the frequency domain, and so the computational cost per node is $O(p \log p)$ up to a factor depending on the node degree. To complete the computational analysis, we note first that the number of CSBP iterations is proportional to the diameter of the graph [56], which grows logarithmically in $N$ for sparse graphs. Second, the number of samples $p$ is chosen so that the grid covers the maximal plausible coefficient amplitude at the proposed spacing. Overall, the computation scales as $O(N \log^2 N)$ for constant signal model parameters, as claimed in Section 1.
4.3 Mixture Gaussian parameters as messages
In this method, we approximate the pdf by a mixture Gaussian with at most a given number of components, and then send the mixture parameters as messages. For both multiplication (4) and convolution (5), the resulting number of components in the mixture is multiplicative in the number of constituent components. To keep the message representation tractable, we perform model reduction using the Iterative Pairwise Replacement Algorithm (IPRA) [57], in which a sequence of reduced-order mixture models is computed iteratively.
The advantage of using mixture Gaussians to encode pdf’s is that the messages are short and hence consume little memory. This method works well for mixture Gaussian priors, but could be difficult to adapt to other priors. Model order reduction algorithms such as IPRA can be computationally expensive [57], and introduce errors in the messages, which impair the quality of the solution as well as the convergence of CSBP [52].
Again, we analyze the computational requirements. Because it is impossible to undo the multiplications in (4) and (5), we cannot use the modified form (7). Let $C$ be the maximum model order. Model order reduction using IPRA [57] dominates the per-node cost of message processing, and this cost grows with $C$. With $N$ coefficient nodes and $M$ constraint nodes, the per-iteration cost scales with the number of nodes; accounting for the CSBP iterations, which again scale with the diameter of the graph, the overall computation grows log-linearly in $N$ for constant model order.
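The core operation in IPRA-style reduction is a moment-matched merge of two weighted Gaussian components; a sketch (the function and symbols are illustrative, not the full IPRA procedure, which also defines a similarity criterion for choosing which pair to merge):

```python
# Moment-matched merge of two weighted Gaussian components (w, mu, var).
# The merged component preserves the pair's total weight, mean, and variance.
def merge(c1, c2):
    (w1, m1, v1), (w2, m2, v2) = c1, c2
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    # law of total variance: within-component + between-component spread
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return (w, m, v)

merged = merge((0.3, 0.0, 1.0), (0.7, 2.0, 0.5))
# merged weight 1.0, mean 1.4, variance 1.49
```

Repeatedly merging the most similar pair of components drives the model order down to the budget $C$ while preserving the low-order moments of the message.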
4.4 Properties of CSBP decoding
Table 1: Computational and storage requirements of CSBP decoding.

Messages           | Parameter       | Computation   | Storage
Samples of pdf     | $p$ samples     | (Section 4.2) | edges $\times\ p$
Mixture Gaussians  | $C$ components  | (Section 4.3) | edges $\times\ C$
We briefly describe several properties of CSBP decoding. The computational characteristics of the two methods for encoding beliefs about conditional distributions were evaluated in Sections 4.2 and 4.3. The storage requirements are mainly for the message representations on the edges. For encoding with pdf samples, the message length is $p$, and so the storage requirement scales with the number of edges times $p$. For encoding with mixture Gaussian parameters, the message length is proportional to the maximum model order $C$, and so the storage requirement scales with the number of edges times $C$. Computational and storage requirements are summarized in Table 1.
Several additional properties are now featured. First, we have progressive decoding; more measurements will improve the precision of the estimated posterior probabilities. Second, if we are only interested in an estimate of the state configuration vector but not in the coefficient values, then less information must be extracted from the measurements. Consequently, the number of measurements can be reduced. Third, we have robustness to noise, because noisy measurements can be incorporated into our model by convolving the noiseless version of the estimated pdf (5) at each encoding node with the pdf of the noise.
5 Numerical results
To demonstrate the efficacy of CSBP, we simulated several different settings. In our first setting, we considered noiseless decoding of a two-state mixture Gaussian signal. We used samples of the pdf as messages, with the number of samples per message chosen to provide fast FFT computation. Figure 3 plots the MMSE decoding error as a function of the number of measurements $M$ for a variety of row weights $L$. The figure emphasizes with dashed lines the average norm of the signal (top) and of the small coefficients (bottom); increasing $M$ reduces the decoding error, until it reaches the energy level of the small coefficients. A small row weight may miss some of the large coefficients and is thus bad for decoding; as we increase $L$, fewer measurements are needed to obtain the same precision. However, there is an optimal row weight beyond which any performance gains are marginal. Furthermore, excessively large values of $L$ give rise to divergence in CSBP, even with damping. An example of the output of the CSBP decoder and how it compares to the signal appears in Figure 4; although the signal is much longer, we plotted only its first portion for ease of visualization.
To compare the performance of CSBP with other CS decoding algorithms, we also simulated: (i) $\ell_1$ decoding (1) via linear programming; (ii) GPSR [20], an optimization method that minimizes an $\ell_1$-penalized least squares objective; (iii) CoSaMP [16], a fast greedy solver; and (iv) IHT [17], an iterative thresholding algorithm. We simulated all five methods in the same noiseless setting, running them with the same CSLDPC encoding matrix $\Phi$, the same signal $x$, and therefore the same measurements $y$. Figure 5 plots the MMSE decoding error as a function of the number of measurements $M$ for the five methods. For small to moderate $M$, CSBP exploits its knowledge about the approximately sparse structure of $x$, and has a smaller decoding error. CSBP requires 20–30% fewer measurements than the optimization methods LP and GPSR to obtain the same MMSE decoding error; the advantage over the greedy solvers IHT and CoSaMP is even greater. However, as $M$ increases, the advantage of CSBP over LP and GPSR becomes less pronounced.
To compare the speed of CSBP to other methods, we ran the same five methods as before, varying the signal length $N$ over several orders of magnitude while holding the other parameters fixed; the measurements are noiseless. We mention in passing that some of the algorithms that were evaluated can be accelerated using linear algebra routines optimized for sparse matrices; the improvement is quite modest, and the runtimes presented here do not reflect this optimization. Figure 6 plots the runtimes of the five methods in seconds as a function of $N$. It can be seen that LP scales more poorly than the other algorithms, and so we did not simulate it for the largest signal lengths. (Our LP solver is based on interior point methods.) CoSaMP also seems to scale relatively poorly, although it is possible that our conjugate gradient implementation can be improved using the pseudoinverse approach instead [16]. The runtimes of CSBP seem to scale somewhat better than those of IHT and GPSR. Although the asymptotic computational complexity of CSBP is good, at the signal lengths simulated it is still slower than IHT and GPSR; whereas IHT and GPSR essentially perform matrix-vector multiplications, CSBP is slowed by the FFT computations performed in each iteration for all nodes in the factor graph. Additionally, although our choice of the number of samples $p$ yields good asymptotic complexity, FFT computation with that many samples is somewhat slow in practice. That said, our main contribution is a computationally feasible Bayesian approach, which allows us to reduce the number of measurements (Figure 5); a comparison between CSBP and previous Bayesian approaches to CS [25, 26] would be favorable.
To demonstrate that CSBP deals well with measurement noise, recall the noisy measurement setting of Section 3, where $z$ is AWGN with variance $\sigma_Z^2$. Our algorithm deals with noise by convolving the noiseless version of the estimated pdf (5) with the noise pdf. We simulated decoding problems over a range of noise variances. Figure 7 plots the MMSE decoding error as a function of the number of measurements $M$ and the noise variance $\sigma_Z^2$. To put things in perspective, the average measurement picks up a Gaussian contribution from the many small coefficients of the signal. Although the decoding error increases with $\sigma_Z^2$, as long as the noise variance is comparable to or smaller than this contribution it has little impact on the decoding error; CSBP offers a graceful degradation to measurement noise.
Our final experiment considers model mismatch, where CS-BP has an imprecise statistical characterization of the signal. Instead of the two-state mixture Gaussian signal model used before, in which large coefficients have variance and occur with probability , we defined a multi-component mixture model in which is interpreted as a background signal level that appears in all coefficients. Whereas the two-state model adds a “true signal” component of variance to the background signal, here the large components each occur with probability and the amplitudes of the true signals are , where is chosen to preserve the total signal energy. At the same time, we did not change the signal priors in CS-BP, which used the same two-state mixture model as before. We simulated decoding problems where , , , , , , the measurements are noiseless, and . Figure 8 plots the MMSE decoding error as a function of and . The figure also shows how IHT and GPSR perform, in order to evaluate whether they are more robust than the Bayesian approach of CS-BP. We did not simulate CoSaMP and decoding, since their MMSE performance is comparable to that of IHT and GPSR. As the number of mixture components increases, the MMSE provided by CS-BP increases. However, even for , the sparsity rate effectively doubles from to , so an increase in the required number of measurements is expected. Interestingly, the greedy IHT method also degrades significantly, perhaps because it implicitly makes an assumption about the number of large mixture components. GPSR, on the other hand, degrades more gracefully.
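For reference, drawing a length-N signal from the two-state mixture Gaussian prior used throughout these experiments can be sketched as follows; all parameter values are illustrative, not the paper's settings.

```python
import numpy as np

# Hedged sketch: each coefficient is "large" (std sigma_large) with
# probability q, else "small" (std sigma_small). Parameters illustrative.
rng = np.random.default_rng(0)

N = 10_000          # signal length (illustrative)
q = 0.05            # sparsity rate: probability of a large coefficient
sigma_large = 10.0  # std of large components
sigma_small = 0.1   # std of small (background) components

large = rng.random(N) < q
std = np.where(large, sigma_large, sigma_small)
x = rng.normal(0.0, 1.0, N) * std

# Expected number of large coefficients is q * N.
```

The mismatched multi-component model of this experiment replaces the single large-variance state with several states while the decoder keeps the two-state prior above.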
6 Variations and enhancements
Supporting an arbitrary sparsifying basis : Until now, we have assumed that the canonical sparsifying basis is used, i.e., . In this case, itself is sparse. We now explain how CS-BP can be modified to support the case where is sparse in an arbitrary basis . In the encoder, we multiply the CS-LDPC matrix by and encode as , where denotes the transpose operator. In the decoder, we use BP to form the approximation , and then transform via to . Constructing the modified encoding matrix and later transforming to requires extra computation; this extra cost is in general. Fortunately, in many practical situations is structured (e.g., Fourier or wavelet bases) and amenable to fast computation, and so extending our methods to such bases is feasible.
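The basis-change bookkeeping can be sketched as follows. Since the symbols are elided here, the sketch adopts one consistent convention: if x = Psi @ theta with theta sparse, then measuring x with the modified matrix Phi @ Psi.T is equivalent to measuring theta with Phi. The matrix Phi below is a toy dense ±1 matrix, not a CS-LDPC matrix.

```python
import numpy as np

# Hedged sketch of encoding a signal sparse in an arbitrary orthonormal
# basis Psi. Convention assumed: x = Psi @ theta, so theta = Psi.T @ x and
# y = (Phi @ Psi.T) @ x = Phi @ theta.
rng = np.random.default_rng(1)

N, M = 64, 32
Psi = np.linalg.qr(rng.normal(size=(N, N)))[0]   # random orthonormal basis
theta = np.zeros(N)
theta[:4] = rng.normal(size=4)                   # sparse coefficient vector
x = Psi @ theta                                  # signal, sparse in Psi

Phi = rng.choice([-1.0, 1.0], size=(M, N))       # toy encoding matrix
Phi_mod = Phi @ Psi.T                            # modified encoding matrix
y = Phi_mod @ x

# After BP estimates theta_hat from y, the signal estimate is Psi @ theta_hat.
```

When Psi admits a fast transform (Fourier, wavelet), the products with Psi and Psi.T above are replaced by the fast transform rather than explicit matrix multiplication.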
Exploiting statistical dependencies: In many signal representations, the coefficients are not iid. For example, wavelet representations of natural images often exhibit correlations between the magnitudes of parent and child coefficients [2, 43]. Consequently, an algorithm that allocates different distributions to different coefficients can decode signals from fewer measurements [58, 46]. By modifying the dependencies imposed by the prior constraint nodes (Section 4.1), CS-BP decoding supports such signal models.
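A simple instance of a nonuniform prior can be sketched by giving each coefficient its own probability of being large; the coarse-scale/fine-scale split and all numbers below are hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

# Hedged sketch: per-coefficient sparsity probabilities q[i], with
# low-index ("coarse-scale") coefficients more likely to be significant.
# All values are illustrative placeholders.
rng = np.random.default_rng(3)

N = 4096
q = np.where(np.arange(N) < N // 8, 0.3, 0.01)   # coarse scales denser
large = rng.random(N) < q
x = rng.normal(0.0, 1.0, N) * np.where(large, 10.0, 0.1)
```

In CS-BP, such a prior would be realized by assigning different distributions at the prior constraint nodes rather than by sampling, but the statistical structure is the same.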
Feedback: Feedback from the decoder to the encoder can be used in applications where measurements may be lost, for example because of transmission over faulty channels. In a manner analogous to a digital fountain [59], the marginal distributions (6) enable us to identify when sufficient information for signal decoding has been received. At that stage, the decoder notifies the encoder that decoding is complete, and the stream of measurements is stopped.
Irregular CS-LDPC matrices: In channel coding, LDPC matrices with irregular row and column weights come closer to the Shannon limit, because a small number of rows or columns with large weights requires only modest additional computation yet greatly reduces the block error rate [38]. In an analogous manner, we expect irregular CS-LDPC matrices to enable a further reduction in the number of measurements required.
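An irregular sparse measurement matrix of the kind envisioned above can be sketched as follows; the weight distribution and dimensions are illustrative, not a proposed design.

```python
import numpy as np

# Hedged sketch: a sparse {-1, 0, +1} matrix with an irregular
# column-weight profile, in the spirit of irregular LDPC constructions.
# The weight distribution here is an illustrative placeholder.
rng = np.random.default_rng(2)

M, N = 200, 1000
# Mostly light columns, with a few heavy ones.
weights = rng.choice([2, 3, 10], size=N, p=[0.5, 0.45, 0.05])

Phi = np.zeros((M, N))
for j, w in enumerate(weights):
    rows = rng.choice(M, size=w, replace=False)   # w distinct nonzero rows
    Phi[rows, j] = rng.choice([-1.0, 1.0], size=w)

# The average column weight stays small even with a few heavy columns,
# so encoding and BP decoding remain fast.
avg_weight = (Phi != 0).sum() / N
```

As in irregular LDPC design, the open question is which weight distribution minimizes the number of measurements for a given decoding complexity.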
7 Discussion
This paper has developed a sparse encoding matrix and a belief propagation decoding algorithm that accelerate CS encoding and decoding under the Bayesian framework. Although we focus on decoding approximately sparse signals, CS-BP can be extended to signals that are sparse in other bases, is flexible to modifications in the signal model, and can address measurement noise.
Despite these significant benefits, CS-BP is not universal, in the sense that the encoding matrix and decoding methods must be modified in order to apply our framework to arbitrary bases. Nonetheless, the necessary modifications only require multiplication by the sparsifying basis or its transpose .
Our method resembles low-density parity-check (LDPC) codes [37, 38], which use a sparse Bernoulli parity check matrix. Although any linear code can be represented as a bipartite graph, for LDPC codes the sparsity of the graph accelerates the encoding and decoding processes. LDPC codes are celebrated for achieving rates close to the Shannon limit. A similar comparison of the MMSE performance of CS-BP with information-theoretic bounds on CS performance is left for future research. Additionally, although CS-BP is not guaranteed to converge, the recent convergence proofs for low-density lattice codes (LDLC) [36] suggest that future work on extensions of CS-BP may also yield convergence proofs.
In comparison to previous work on Bayesian aspects of CS [25, 26], our method is much faster, requiring only computation. At the same time, CS-BP offers significant flexibility, and should not be viewed as merely another fast CS decoding algorithm. However, CS-BP relies on the sparsity of CS-LDPC matrices, and future research can consider the applicability of such matrices in different applications.
Outline of the proof of Theorem 1: The proof begins with derivations of probabilistic bounds on and . Next, we review a result by Wang et al. [49, Theorem 1]. The proof is completed by combining these bounds with the result by Wang et al.
Upper bound on : Consider , where the random variable (RV) has a mixture distribution
Recall the moment generating function (MGF), . The MGF of a chi-squared RV satisfies . For the mixture RV ,
Additionally, because the are iid, . Invoking the Chernoff bound, we have
for . We aim to show that decays faster than as is increased. To do so, let , where . It suffices to prove that there exists some for which
Let and . It is easily seen via Taylor series that and , and so
Because of the negative term , which dominates the higher-order term for small , there exists , independent of , for which . Using this , the Chernoff bound provides an upper bound on that decays exponentially with . In summary,
(8) 
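For completeness, the generic Chernoff bound instantiated in the argument above follows from Markov's inequality applied to the exponentiated variable:

```latex
% Chernoff bound: for any s > 0 and RV Z with MGF M_Z(s),
\[
\Pr(Z \ge a)
  = \Pr\!\left(e^{sZ} \ge e^{sa}\right)
  \le e^{-sa}\,\mathbb{E}\!\left[e^{sZ}\right]
  = e^{-sa}\,M_Z(s),
\]
% where optimizing over s > 0 yields the tightest exponent.
```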
Lower bound on : In a similar manner, MGFs and the Chernoff bound can be used to obtain a probabilistic bound on the number of large Gaussian mixture components
(9) 
Taking into account the limited number of large components and the expected squared norm, , we have
(10) 
We omit the (similar) details for brevity.
Bound on : The upper bound on is obtained by considering first the large mixture components and then the small ones. For the large Gaussian mixture components, denote .
(11)  
(12)  
where is the cumulative distribution function of the standard normal distribution and is its pdf, the inequality (11) relies on and on the possibility that is strictly smaller than , (12) relies on the bound , and the inequality (7) is motivated by for . Noting that increases with , for large we have
(14) 
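The Gaussian tail bound invoked in (12) is presumably the standard one, sketched here for completeness:

```latex
% Standard Gaussian tail bound: for x > 0, since t/x >= 1 on [x, \infty),
\[
1 - \Phi(x) = \int_x^{\infty} \phi(t)\,dt
  \le \int_x^{\infty} \frac{t}{x}\,\phi(t)\,dt
  = \frac{\phi(x)}{x},
\]
% where \phi and \Phi denote the standard normal pdf and cdf.
```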
Now consider the small Gaussian mixture components, and denote . As before,
where in (7) the number of small mixture components is often less than . Because , for large we have
(16) 
Combining (9), (14) and (16), for large we have
(17) 
Result by Wang et al. [49, Theorem 1]:
Theorem 2 ([49])
Consider that satisfies the condition
(18) 
In addition, let be any set of vectors . Suppose a sparse random matrix satisfies
where is the fraction of nonzero entries in . Let
(19) 
Then with probability at least , the random projections and can produce an estimate for satisfying
Application of Theorem 2 to the proof of Theorem 1: Combining (8), (10), and (17), the union bound demonstrates that with probability lower bounded by we have and . (The terms (8) and (10) demonstrate that there exists some such that for all the upper and lower bounds on each hold with probability lower bounded by , resulting in a probability lower bounded by via the union bound. Because the expression (2) for the number of measurements is an order term, the case where is inconsequential.) When these and bounds hold, we can apply Theorem 2.
To apply Theorem 2, we must specify (i) (18); (ii) the test vectors ; (iii) the matrix sparsity ; and (iv) the parameter. First, the bounds on and indicate that . Second, we choose to be the canonical vectors of the identity matrix , providing . Third, our choice of offers . Fourth, we set
Using these parameters, Theorem 2 demonstrates that all approximations satisfy
with probability lower bounded by . Combining the probability that the and bounds hold and the decoding probability offered by Theorem 2, we have
(20) 
with probability lower bounded by .
We complete the proof by computing the number of measurements required in (19). Because , we need
measurements.
Acknowledgments
Thanks to David Scott, Danny Sorensen, Yin Zhang, Marco Duarte, Michael Wakin, Mark Davenport, Jason Laska, Matthew Moravec, Elaine Hale, Christine Kelley, and Ingmar Land for informative and inspiring conversations. Thanks to Phil Schniter for bringing his related work [27] to our attention. Special thanks to Ramesh Neelamani, Alexandre de Baynast, and Predrag Radosavljevic for providing helpful suggestions for implementing BP; to Danny Bickson and Harel Avissar for improving our implementation; and to Marco Duarte for wizardry with the figures. Additionally, the first author thanks the Department of Electrical Engineering at the Technion for generous hospitality while parts of the work were being performed, and in particular the support of Yitzhak Birk and Tsachy Weissman. Final thanks to the anonymous reviewers, whose superb comments helped to greatly improve the quality of the paper.
References
 [1] S. Sarvotham, D. Baron, and R. G. Baraniuk, “Compressed sensing reconstruction via belief propagation,” Tech. Rep. TREE0601, Rice University, Houston, TX, July 2006.
 [2] R. A. DeVore, B. Jawerth, and B. J. Lucier, “Image compression through wavelet transform coding,” IEEE Trans. Inf. Theory, vol. 38, no. 2, pp. 719–746, Mar. 1992.
 [3] I. F. Gorodnitsky and B. D. Rao, “Sparse signal reconstruction from limited data using FOCUSS: A reweighted minimum norm algorithm,” IEEE Trans. Signal Process., vol. 45, no. 3, pp. 600–616, March 1997.
 [4] E. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.
 [5] D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
 [6] R. G. Baraniuk, “A lecture on compressive sensing,” IEEE Signal Process Mag., vol. 24, no. 4, pp. 118–121, 2007.
 [7] E. Candès, M. Rudelson, T. Tao, and R. Vershynin, “Error correction via linear programming,” Found. Comp. Math., pp. 295–308, 2005.
 [8] S. Chen, D. Donoho, and M. Saunders, “Atomic decomposition by basis pursuit,” SIAM J. Sci. Comp., vol. 20, no. 1, pp. 33–61, 1998.
 [9] D. Donoho and J. Tanner, “Neighborliness of randomly projected simplices in high dimensions,” Proc. Nat. Academy Sciences, vol. 102, no. 27, pp. 9452–9457, 2005.
 [10] D. Donoho, “Highdimensional centrally symmetric polytopes with neighborliness proportional to dimension,” Discrete Comput. Geometry, vol. 35, no. 4, pp. 617–652, Mar. 2006.
 [11] P. M. Vaidya, “An algorithm for linear programming which requires O(((m+n)n^2 + (m+n)^{1.5}n)L) arithmetic operations,” in STOC ’87: Proc. 19th ACM Symp. Theory of Computing, New York, NY, USA, 1987, pp. 29–38, ACM.
 [12] J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655–4666, Dec. 2007.
 [13] A. Cohen, W. Dahmen, and R. A. DeVore, “Near optimal approximation of arbitrary vectors from highly incomplete measurements,” 2007, Preprint.
 [14] M. F. Duarte, M. B. Wakin, and R. G. Baraniuk, “Fast reconstruction of piecewise smooth signals from random projections,” in Proc. SPARS05, Rennes, France, Nov. 2005.
 [15] D. L. Donoho, Y. Tsaig, I. Drori, and J.-L. Starck, “Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit,” Mar. 2006, Preprint.
 [16] D. Needell and J. A. Tropp, “CoSaMP: Iterative signal recovery from incomplete and inaccurate samples,” Appl. Comput. Harmonic Analysis, vol. 26, no. 3, pp. 301–321, 2008.
 [17] T. Blumensath and M. E. Davies, “Iterative hard thresholding for compressed sensing,” to appear in Appl. Comput. Harmonic Analysis, 2008.
 [18] W. Dai and O. Milenkovic, “Subspace pursuit for compressive sensing: Closing the gap between performance and complexity,” IEEE Trans. Inf. Theory, vol. 55, no. 5, pp. 2230–2249, May 2009.
 [19] E. Hale, W. Yin, and Y. Zhang, “Fixedpoint continuation for minimization: Methodology and convergence,” 2007, Submitted.
 [20] M. Figueiredo, R. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Sign. Proces., Dec. 2007.
 [21] E. van den Berg and M. P. Friedlander, “Probing the Pareto frontier for basis pursuit solutions,” Tech. Rep. TR200801, Department of Computer Science, University of British Columbia, Jan. 2008, To appear in SIAM J. Sci. Comp.
 [22] G. Cormode and S. Muthukrishnan, “Towards an algorithmic theory of compressed sensing,” DIMACS Technical Report TR 200525, 2005.
 [23] G. Cormode and S. Muthukrishnan, “Combinatorial algorithms for compressed sensing,” DIMACS Technical Report TR 200540, 2005.
 [24] A. C. Gilbert, M. J. Strauss, J. Tropp, and R. Vershynin, “Algorithmic linear dimension reduction in the norm for sparse vectors,” Apr. 2006, Submitted.
 [25] S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE Trans. Signal Process., vol. 56, no. 6, pp. 2346–2356, June 2008.
 [26] M. W. Seeger and H. Nickisch, “Compressed sensing and Bayesian experimental design,” in ICML ’08: Proc. 25th Int. Conf. Machine learning, 2008, pp. 912–919.
 [27] P. Schniter, L. C. Potter, and J. Ziniel, “Fast Bayesian matching pursuit: Model uncertainty and parameter estimation for sparse linear models,” IEEE Trans. Signal Process., March 2009.
 [28] T. Hastie, R. Tibshirani, and J. H. Friedman, The Elements of Statistical Learning, Springer, August 2001.
 [29] G. Guo and C.C. Wang, “Multiuser detection of sparsely spread CDMA,” IEEE J. Sel. Areas Commun., vol. 26, no. 3, pp. 421–431, 2008.
 [30] J. Pearl, “Probabilistic reasoning in intelligent systems: Networks of plausible inference,” Morgan Kaufmann, 1988.
 [31] F. V. Jensen, “An introduction to Bayesian networks,” Springer-Verlag, 1996.
 [32] B. J. Frey, “Graphical models for machine learning and digital communication,” MIT Press, 1998.
 [33] J. S. Yedidia, W. T. Freeman, and Y. Weiss, “Understanding belief propagation and its generalizations,” Mitsubishi Tech. Rep. TR2001022, Jan. 2002.
 [34] D. J. C. MacKay, “Information theory, inference and learning algorithms,” Cambridge University Press, 2002.
 [35] R. G. Cowell, A. P. Dawid, S. L. Lauritzen, and D. J. Spiegelhalter, “Probabilistic networks and expert systems,” Springer-Verlag, 2003.
 [36] N. Sommer, M. Feder, and O. Shalvi, “Low-density lattice codes,” IEEE Trans. Inf. Theory, vol. 54, no. 4, pp. 1561–1585, 2008.
 [37] R. G. Gallager, “Low-density parity-check codes,” IEEE Trans. Inf. Theory, vol. 8, pp. 21–28, Jan. 1962.
 [38] T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke, “Design of capacity-approaching irregular low-density parity-check codes,” IEEE Trans. Inf. Theory, vol. 47, pp. 619–637, Feb. 2001.
 [39] S. Sarvotham, D. Baron, and R. G. Baraniuk, “Sudocodes – Fast measurement and reconstruction of sparse signals,” in Proc. Int. Symp. Inf. Theory (ISIT2006), Seattle, WA, July 2006.
 [40] R. Berinde and P. Indyk, “Sparse recovery using sparse random matrices,” Tech. Rep. MIT-CSAIL-TR-2008-001, 2008.
 [41] J.-C. Pesquet, H. Krim, and E. Hamman, “Bayesian approach to best basis selection,” in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Process. (ICASSP), 1996, pp. 2634–2637.
 [42] H. Chipman, E. Kolaczyk, and R. McCulloch, “Adaptive Bayesian wavelet shrinkage,” J. Amer. Stat. Assoc., vol. 92, 1997.
 [43] M. S. Crouse, R. D. Nowak, and R. G. Baraniuk, “Waveletbased signal processing using hidden Markov models,” IEEE Trans. Signal Process., vol. 46, pp. 886–902, April 1998.
 [44] E. I. George and R. E. McCulloch, “Variable selection via Gibbs sampling,” J. Am. Stat. Assoc., vol. 88, pp. 881–889, 1993.
 [45] J. Geweke, “Variable selection and model comparison in regression,” in Bayesian Statistics 5, 1996, pp. 609–620.
 [46] L. He and L. Carin, “Exploiting structure in waveletbased Bayesian compressed sensing,” to appear in IEEE Trans. Signal Process., 2008.
 [47] H. W. Sorenson and D. L. Alspach, “Recursive Bayesian estimation using Gaussian sums,” Automatica, vol. 7, pp. 465–479, 1971.
 [48] J. A. Tropp, “Greed is good: Algorithmic results for sparse approximation,” IEEE Trans. Inf. Theory, vol. 50, pp. 2231–2242, 2004.
 [49] W. Wang, M. Garofalakis, and K. Ramchandran, “Distributed sparse random projections for refinable approximation,” in Proc. Inf. Process. Sensor Networks (IPSN2007), 2007, pp. 331–339.
 [50] G. Cooper, “The computational complexity of probabilistic inference using Bayesian belief networks,” Artificial Intelligence, vol. 42, pp. 393–405, 1990.
 [51] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, “Factor graphs and the sum-product algorithm,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 498–519, Feb. 2001.
 [52] E. Sudderth, A. Ihler, W. Freeman, and A. S. Willsky, “Nonparametric belief propagation,” MIT LIDS Tech. Rep. 2551, Oct. 2002.
 [53] B. J. Frey and D. J. C. MacKay, “A revolution: Belief propagation in graphs with cycles,” Adv. Neural Inf. Process. Systems, M. Jordan, M. S. Kearns and S. A. Solla (Eds.), vol. 10, 1998.
 [54] A. Ihler, J. Fisher, and A. S. Willsky, “Loopy belief propagation: Convergence and effects of message errors,” J. Machine Learning Res., vol. 6, pp. 905–936, May 2005.
 [55] M. Pretti, “A MessagePassing Algorithm with Damping,” J. Stat. Mech., Nov. 2005.
 [56] D. J. C. MacKay, “Good errorcorrecting codes based on very sparse matrices,” IEEE Trans. Inf. Theory, vol. 45, pp. 399–431, Mar. 1999.
 [57] D. W. Scott and W. F. Szewczyk, “From kernels to mixtures,” Technometrics, vol. 43, pp. 323–335, Aug. 2001.
 [58] R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde, “Modelbased compressive sensing,” 2008, Preprint.
 [59] J. W. Byers, M. Luby, and M. Mitzenmacher, “A digital fountain approach to asynchronous reliable multicast,” IEEE J. Sel. Areas Commun., vol. 20, no. 8, pp. 1528–1540, Oct. 2002.