Fusion of Sparse Reconstruction Algorithms for Multiple Measurement Vectors


Abstract

We consider the recovery of sparse signals that share a common support from multiple measurement vectors. The performance of several algorithms developed for this task depends on parameters such as the dimension of the sparse signal, the dimension of the measurement vectors, the sparsity level, and the measurement noise power. We propose a fusion framework in which several multiple-measurement-vector reconstruction algorithms participate, and the final signal estimate is obtained by combining the signal estimates of the participating algorithms. We present conditions for achieving a better reconstruction performance than the participating algorithms. Numerical simulations demonstrate that the proposed fusion algorithm often performs better than the participating algorithms.

1 Introduction

Consider the standard Compressed Sensing (CS) measurement setup where a $K$-sparse signal $\mathbf{x} \in \mathbb{R}^{N}$ is acquired through $M$ linear measurements via

$$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{w},$$

where $\mathbf{A} \in \mathbb{R}^{M \times N}$ ($M < N$) denotes the measurement matrix, $\mathbf{y} \in \mathbb{R}^{M}$ represents the measurement vector, and $\mathbf{w} \in \mathbb{R}^{M}$ denotes the additive measurement noise present in the system. The reconstruction problem of estimating $\mathbf{x}$ from $\mathbf{y}$ using $\mathbf{A}$ and $K$ is known as the Single Measurement Vector (SMV) problem. In this work, we consider the Multiple Measurement Vector (MMV) problem [1] where we have $L$ measurements: $\mathbf{y}_{l} = \mathbf{A}\mathbf{x}_{l} + \mathbf{w}_{l}$, $l = 1, 2, \dots, L$. The vectors $\mathbf{x}_{1}, \mathbf{x}_{2}, \dots, \mathbf{x}_{L}$ are assumed to have a common sparse support-set. The problem is to estimate $\{\mathbf{x}_{l}\}_{l=1}^{L}$. Instead of recovering the signals individually, the attempt in the MMV problem is to recover all the signals simultaneously. The MMV problem arises in many applications such as the neuromagnetic inverse problem in Magnetoencephalography (a modality for imaging the brain) [2], array processing [4], non-parametric spectrum analysis of time series [5], and equalization of sparse communication channels [6].
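As a concrete illustration of the measurement model, the following sketch generates a synthetic jointly sparse MMV instance (a minimal numpy example; the dimensions and noise level are our arbitrary choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, L, K = 32, 128, 4, 5                    # illustrative sizes, not from the paper

A = rng.standard_normal((M, N)) / np.sqrt(M)  # measurement matrix

# Jointly K-sparse signal matrix: all L columns share one support-set.
support = rng.choice(N, size=K, replace=False)
X = np.zeros((N, L))
X[support] = rng.standard_normal((K, L))

W = 0.01 * rng.standard_normal((M, L))        # additive measurement noise
Y = A @ X + W                                 # the L measurement vectors, stacked as columns
```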

Recently, many algorithms have been proposed to recover signal vectors with a common sparse support. Some among them are diversity-minimization methods like $\ell_{2,1}$ minimization [7] and M-FOCUSS [1], greedy methods like M-OMP and M-ORMP [1], and Bayesian methods like MSBL [8] and T-MSBL [9].

However, it has been observed that the performance of many algorithms depends on parameters like the dimension of the measurement vector, the sparsity level, the statistical distribution of the non-zero elements of the signal, the measurement noise power, etc. [9]. Thus, it becomes difficult to choose the best sparse reconstruction algorithm without a priori knowledge of these parameters.

Suppose we have the sparse signal estimates given by various algorithms. It may be possible to merge these estimates to form a more accurate estimate of the original signal. This idea of fusing multiple estimators has been proposed in the context of signal denoising in [10], where fusion was performed by plain averaging. Recently, Ambat et al. [11] proposed fusing the estimates of sparse reconstruction algorithms to improve the sparse signal reconstruction performance for the SMV problem.

In this paper, we propose a framework which uses several MMV reconstruction algorithms and combines their sparse signal support estimates to determine the final signal estimate. We refer to this scheme as MMV-Fusion of Algorithms for Compressed Sensing (MMV-FACS). We present an upper bound on the reconstruction error of MMV-FACS. We also present a sufficient condition for achieving a better reconstruction performance than any participating algorithm. Using Monte Carlo simulations, we show that fusion of viable algorithms leads to improved reconstruction performance for the MMV problem.

Notations:

Matrices and vectors are denoted by bold upper-case and bold lower-case letters, respectively. Sets are represented by upper-case Greek alphabets and calligraphic letters. $\mathbf{A}_{\Omega}$ denotes the column sub-matrix of $\mathbf{A}$ where the indices of the columns are the elements of the set $\Omega$. $\mathbf{X}_{\Omega}$ denotes the sub-matrix formed by those rows of $\mathbf{X}$ whose indices are listed in the set $\Omega$. $[\mathbf{X}]_{K}$ is the matrix obtained from $\mathbf{X}$ by keeping its $K$ rows with the largest $\ell_{2}$-norm and by setting all other rows to zero, breaking ties lexicographically. $\mathrm{supp}(\mathbf{X})$ denotes the set of indices of non-zero rows of $\mathbf{X}$. For a matrix $\mathbf{X}$, $\mathbf{x}_{j}$ denotes the $j$-th column vector of $\mathbf{X}$. $\hat{\mathbf{X}}_{p}$ denotes the matrix reconstructed by the $p$-th participating algorithm. The complement of the set $\Omega$ with respect to the set $\{1, 2, \dots, N\}$ is denoted by $\Omega^{c}$. For two sets $\Lambda$ and $\Omega$, $\Lambda \setminus \Omega$ denotes the set difference. $|\Omega|$ denotes the cardinality of the set $\Omega$. $\mathbf{A}^{\dagger}$ and $\mathbf{A}^{T}$ denote the pseudo-inverse and the transpose of the matrix $\mathbf{A}$, respectively. The mixed norm $\|\mathbf{X}\|_{2,1}$ of the matrix $\mathbf{X} \in \mathbb{R}^{N \times L}$ is defined as

$$\|\mathbf{X}\|_{2,1} = \sum_{i=1}^{N} \Big( \sum_{j=1}^{L} X_{i,j}^{2} \Big)^{1/2}.$$

The Frobenius norm of a matrix $\mathbf{X}$ is denoted as $\|\mathbf{X}\|_{F}$.
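In code, the row-selection operator $[\mathbf{X}]_{K}$ and the mixed norm above might look as follows (a minimal numpy sketch; the function names are ours):

```python
import numpy as np

def mixed_norm_21(X):
    # l_{2,1} mixed norm: l2-norm of each row, summed over the rows.
    return np.sum(np.linalg.norm(X, axis=1))

def keep_k_rows(X, K):
    # [X]_K: keep the K rows with largest l2-norm, zero out the rest.
    # A stable descending sort breaks ties by row index (lexicographically).
    order = np.argsort(-np.linalg.norm(X, axis=1), kind="stable")
    out = np.zeros_like(X)
    out[order[:K]] = X[order[:K]]
    return out
```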

2 Problem Formulation

The MMV problem involves solving the following $L$ under-determined systems of linear equations:

$$\mathbf{y}_{l} = \mathbf{A}\mathbf{x}_{l} + \mathbf{w}_{l}, \quad l = 1, 2, \dots, L, \qquad (1)$$

where $\mathbf{A} \in \mathbb{R}^{M \times N}$ ($M < N$) represents the measurement matrix, $\mathbf{y}_{l} \in \mathbb{R}^{M}$ represents the $l$-th measurement vector, and $\mathbf{x}_{l} \in \mathbb{R}^{N}$ denotes the corresponding $K$-sparse source vector. That is, $\mathbf{x}_{1}, \mathbf{x}_{2}, \dots, \mathbf{x}_{L}$ share a common support-set of size at most $K$. $\mathbf{w}_{l} \in \mathbb{R}^{M}$ represents the additive measurement noise. We can rewrite (1) as

$$\mathbf{Y} = \mathbf{A}\mathbf{X} + \mathbf{W}, \qquad (2)$$

where $\mathbf{Y} = [\mathbf{y}_{1}, \mathbf{y}_{2}, \dots, \mathbf{y}_{L}]$, $\mathbf{X} = [\mathbf{x}_{1}, \mathbf{x}_{2}, \dots, \mathbf{x}_{L}]$, and $\mathbf{W} = [\mathbf{w}_{1}, \mathbf{w}_{2}, \dots, \mathbf{w}_{L}]$.

For a matrix $\mathbf{X} \in \mathbb{R}^{N \times L}$, we define

$$\mathrm{supp}(\mathbf{X}) \triangleq \{ i \in \{1, 2, \dots, N\} : \mathbf{X}_{i,:} \neq \mathbf{0} \},$$

where $\mathbf{X}_{i,:}$ denotes the $i$-th row of $\mathbf{X}$. In (2), we assume that $\mathbf{X}$ is jointly $K$-sparse. That is, $|\mathrm{supp}(\mathbf{X})| \leq K$: there are at most $K$ rows in $\mathbf{X}$ that contain non-zero elements. We assume that $K < M$ and that $K$ is known a priori.

3 Fusion of Algorithms for the Multiple Measurement Vector Problem

In this paper, we propose to employ multiple sparse reconstruction algorithms independently for estimating $\mathbf{X}$ from (2) and to fuse the resultant estimates to yield a better sparse signal estimate. Let $P$ denote the number of different participating algorithms employed to estimate the sparse signal. Let $\hat{\mathcal{T}}_{p}$ denote the support-set estimated by the $p$-th participating algorithm and let $\mathcal{T}$ denote the true support-set. Denote the union of the estimated support-sets as $\Gamma$, i.e., $\Gamma = \bigcup_{p=1}^{P} \hat{\mathcal{T}}_{p}$, and assume that $|\Gamma| \leq M$. We hope that different participating algorithms work on different principles and that the support-set estimated by each participating algorithm includes partially correct information about the true support-set $\mathcal{T}$. It may also be observed that the union of the estimated support-sets, $\Gamma$, is richer in terms of true atoms than the support-set estimated by any individual participating algorithm. Also note that, once the support-set is estimated, the non-zero rows of $\mathbf{X}$ can be estimated by solving a Least-Squares (LS) problem on an over-determined system of linear equations. Hence, if we can identify all the true atoms included in the joint support-set $\Gamma$, we can achieve a better sparse signal estimate.
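For instance, if an oracle revealed the true support-set $\mathcal{T}$, this LS step would take the familiar restricted form (our rendering, using the pseudo-inverse notation defined above):

$$\hat{\mathbf{X}}_{\mathcal{T}} = \mathbf{A}_{\mathcal{T}}^{\dagger}\,\mathbf{Y}, \qquad \hat{\mathbf{X}}_{\mathcal{T}^{c}} = \mathbf{0},$$

which is an over-determined LS problem whenever $|\mathcal{T}| \leq M$; the same computation applied to $\Gamma$ underlies the fusion step described next.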

Since we are estimating the support atoms only from $\Gamma$, we only need to solve the following problem, which is lower-dimensional as compared to the original problem (2):

$$\mathbf{Y} = \mathbf{A}_{\Gamma}\,\mathbf{X}_{\Gamma} + \mathbf{W}, \qquad (3)$$

where $\mathbf{A}_{\Gamma}$ denotes the sub-matrix formed by the columns of $\mathbf{A}$ whose indices are listed in $\Gamma$, $\mathbf{X}_{\Gamma}$ denotes the sub-matrix formed by the rows of $\mathbf{X}$ whose indices are listed in $\Gamma$, and $|\Gamma| \leq M$. The matrix equation (3) represents a system of linear equations which is over-determined in nature. We use the method of LS to find an approximate solution to the over-determined system of equations in (3). Let $\tilde{\mathbf{B}}$ denote the LS solution of (3). We choose the support-set estimate of MMV-FACS as the support of $[\tilde{\mathbf{B}}]_{K}$, i.e., the indices of those $K$ rows of $\tilde{\mathbf{B}}$ having the largest $\ell_{2}$-norm. Once the non-zero rows are identified, solving the resultant over-determined system using LS, we can estimate the non-zero entries of $\mathbf{X}$. MMV-FACS is summarized in Algorithm 1.
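The fusion steps just described can be sketched in a few lines of numpy (an illustrative rendering of Algorithm 1, not the authors' code; `mmv_facs` and its argument names are ours):

```python
import numpy as np

def mmv_facs(Y, A, supports, K):
    # supports: one estimated support-set per participating algorithm.
    # Step 1: union of the estimated support-sets (Gamma).
    gamma = np.array(sorted(set().union(*supports)))
    assert len(gamma) <= A.shape[0], "reduced LS problem must be over-determined"

    # Step 2: LS solution B of the reduced problem Y ~ A_Gamma @ B.
    B = np.linalg.lstsq(A[:, gamma], Y, rcond=None)[0]

    # Step 3: keep the K rows of B with the largest l2-norm (support estimate).
    order = np.argsort(-np.linalg.norm(B, axis=1), kind="stable")
    T_hat = np.sort(gamma[order[:K]])

    # Step 4: final LS estimate restricted to the K selected rows.
    X_hat = np.zeros((A.shape[1], Y.shape[1]))
    X_hat[T_hat] = np.linalg.lstsq(A[:, T_hat], Y, rcond=None)[0]
    return X_hat, T_hat
```

With the synthetic instance from the Introduction, `mmv_facs(Y, A, [T1, T2], K)` would fuse two hypothetical support estimates `T1` and `T2`.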

Remark:

An alternative approach for solving an MMV problem is to stack all the columns of $\mathbf{Y}$ to get a single measurement vector. Then, in the noiseless case, (2) becomes

$$\bar{\mathbf{y}} = \begin{bmatrix} \mathbf{y}_{1}^{T} & \mathbf{y}_{2}^{T} & \cdots & \mathbf{y}_{L}^{T} \end{bmatrix}^{T} = \left( \mathbf{I}_{L} \otimes \mathbf{A} \right) \bar{\mathbf{x}}, \qquad (4)$$

where $\bar{\mathbf{x}} = \begin{bmatrix} \mathbf{x}_{1}^{T} & \mathbf{x}_{2}^{T} & \cdots & \mathbf{x}_{L}^{T} \end{bmatrix}^{T}$, and $\mathbf{y}_{j}$ and $\mathbf{x}_{j}$ ($1 \leq j \leq L$) denote the $j$-th column of $\mathbf{Y}$ and $\mathbf{X}$, respectively. Now, we have the following SMV problem:

$$\bar{\mathbf{y}} = \bar{\mathbf{A}}\,\bar{\mathbf{x}}, \quad \text{where } \bar{\mathbf{A}} = \mathbf{I}_{L} \otimes \mathbf{A}. \qquad (5)$$

In principle, we can solve (5) using FACS [11] with sparsity level $KL$. Note that, after stacking column-wise, we lose the joint-sparsity constraint imposed on $\mathbf{X}$ in the MMV problem (2). The non-zero elements estimated from (5) using FACS can come from more than $K$ different rows of $\mathbf{X}$. In the worst case, the estimate of FACS may include non-zero elements from $KL$ different rows of $\mathbf{X}$, and we will end up with an estimate of $\mathbf{X}$ with $KL$ non-zero rows, which is highly undesirable. Hence, stacking the columns of the observation matrix and solving the resulting SMV problem using FACS is not advisable. Note that Step 3 in Algorithm 1 ensures that MMV-FACS estimates only $K$ non-zero rows of $\mathbf{X}$.

4 Theoretical Studies of MMV-FACS

In this section, we theoretically analyse the performance of MMV-FACS. We consider the general case of an arbitrary signal matrix, and subsequently study the average-case performance of MMV-FACS.

The performance analysis is characterized by the Signal-to-Reconstruction-Error Ratio (SRER), extended here to the MMV setting, which is defined as

$$\mathrm{SRER} = \frac{\|\mathbf{X}\|_{F}^{2}}{\|\mathbf{X} - \hat{\mathbf{X}}\|_{F}^{2}},$$

where $\mathbf{X}$ and $\hat{\mathbf{X}}$ denote the actual and reconstructed signal matrices, respectively.
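Assuming the Frobenius-norm definition above, SRER is straightforward to compute; reporting it in dB is a common convention (our assumption, not stated in the text):

```python
import numpy as np

def srer_db(X, X_hat):
    # SRER = ||X||_F^2 / ||X - X_hat||_F^2, reported in dB.
    err = np.linalg.norm(X - X_hat, "fro") ** 2
    return 10.0 * np.log10(np.linalg.norm(X, "fro") ** 2 / err)
```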

The proofs of the supporting lemmas used below are given in the appendices.

4.1 Performance Analysis for Arbitrary Signals under Measurement Perturbations

We analyse the performance of MMV-FACS for arbitrary signals and give an upper bound on the reconstruction error in Theorem ?. We also derive a sufficient condition for MMV-FACS to improve upon any given participating algorithm.

Proof:
i) We have,

Consider,

Using the relations (from Algorithm 1) and , we get

For $1 \leq j \leq L$, consider the $j$-th columns of the matrices involved. Now, from Proposition 3.1 and Corollary 3.3 of [16], we obtain the following relations.

Consider , we get

Summing the above equation over $j$, we obtain

Similarly, summing the relations in and , we obtain

Substituting , and in , we get

Substituting in , we get

Next, we will find an upper bound for .
Define . That is, is the set formed by the atoms in $\Gamma$ which are discarded by Algorithm 1. Since , we have and hence we obtain

We also have,

Note that $[\tilde{\mathbf{B}}]_{K}$ contains the $K$ rows of $\tilde{\mathbf{B}}$ with the highest row $\ell_{2}$-norm. Therefore, using , we get

Substituting in , we get

Now, consider

Using , and in , we get

Using and in , we get

Let denote the $j$-th column of the matrix . Then, we have

Using Lemma ? and , we get

Substituting in , we get

where .
Substituting in and using the definitions of , and , we get


ii) Using and the definitions of and , we get

Hence, we obtain the relation for the SRER of MMV-FACS, in the case of arbitrary signals, as

Hence, MMV-FACS provides an SRER gain of at least over algorithm if .
Note that .

4.2 Exactly $K$-sparse Matrix

Theorem ? considered the case when $\mathbf{X}$ is an arbitrary matrix. If $\mathbf{X}$ is an exactly $K$-sparse matrix, then we have and . Thus, it follows from Theorem ? that MMV-FACS provides an SRER gain of at least over the participating algorithm if . Thus, the improvement in the SRER gain provided by MMV-FACS over the participating algorithm for a $K$-sparse matrix is greater than that for an arbitrary matrix by a factor of .

The second part of Theorem ? considers the case when and . If , then . Also, implies . Suppose ; then the support-set is correctly estimated by the algorithm, and further performance improvement by MMV-FACS is not possible. Hence, we consider the case where , and derive the condition for exact reconstruction by MMV-FACS in the following proposition.

Proof: We have

From Algorithm 1, we have where , and

If , then and (). Thus MMV-FACS estimates the support-set correctly from .

In practice, the original signal is not known, and hence it is not possible to evaluate the performance with respect to the true signal. Hence, in applications, the decrease in the energy of the residual is often treated as a measure of performance improvement. Proposition ? gives a sufficient condition for a decrease in the energy of the residual matrix obtained by MMV-FACS relative to any participating algorithm.
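In code, this criterion amounts to comparing the squared Frobenius norms of the residual matrices (a sketch under the notation used here):

```python
import numpy as np

def residual_energy(Y, A, X_hat):
    # Energy of the residual matrix Y - A @ X_hat.
    return np.linalg.norm(Y - A @ X_hat, "fro") ** 2

# MMV-FACS is deemed to improve on the p-th algorithm when
# residual_energy(Y, A, X_facs) < residual_energy(Y, A, X_p).
```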

We have,