Optimal Sparse Recovery for Multi-Sensor Measurements
Many practical sensing applications involve multiple sensors simultaneously acquiring measurements of a single object. In contrast, most existing sparse recovery guarantees in compressed sensing concern only single-sensor acquisition scenarios. In this paper, we address the optimal recovery of compressible signals from multi-sensor measurements using compressed sensing techniques, thereby confirming the benefits of multi- over single-sensor environments. Throughout the paper we consider a broad class of sensing matrices and two fundamentally different sampling scenarios (so-called distinct and identical sampling), both of which are relevant to applications. For the case of diagonal sensor profile matrices (which characterize environmental conditions between a source and the sensors), this paper presents two key improvements over existing results: first, a simpler optimal recovery guarantee for distinct sampling, and second, an improved recovery guarantee for identical sampling, based on the so-called sparsity in levels signal model.
The standard single-sensor problem in compressed sensing (CS) involves the recovery of a sparse signal $x \in \mathbb{C}^N$ from noisy measurements
$$y = Ax + e,$$
where $A \in \mathbb{C}^{m \times N}$ is the measurement matrix and $e \in \mathbb{C}^m$ is noise. As is well known, subject to appropriate conditions on the measurement matrix $A$ (e.g. incoherence) it is possible to recover $x$ from a number of measurements $m$ that scales linearly with its sparsity $s$.
I-A System models
In the multi-sensor problem, $C \ge 1$ sensors simultaneously acquire noisy measurements of the same signal $x \in \mathbb{C}^N$:
$$y_c = A_c x + e_c, \qquad c = 1,\ldots,C. \qquad (1.2)$$
Here $A_c \in \mathbb{C}^{m_c \times N}$ is the matrix corresponding to the measurements taken in the $c$th sensor and $e_c$ is noise. Throughout, we assume that
$$A_c = \tilde{A}_c H_c, \qquad c = 1,\ldots,C, \qquad (1.3)$$
where the $\tilde{A}_c \in \mathbb{C}^{m_c \times N}$ are standard CS matrices (e.g. random subgaussian, subsampled isometry or random convolution matrices), and the $H_c \in \mathbb{C}^{N \times N}$ are fixed, deterministic matrices, referred to as sensor profile matrices. These matrices model environmental conditions in the sensing problem; for example, a communication channel between $x$ and the sensors, the geometric position of the sensors relative to $x$, or the effectiveness of the sensors in acquiring $x$. As in standard CS, our recovery algorithm will be basis pursuit:
$$\min_{z \in \mathbb{C}^N} \|z\|_1 \quad \text{subject to} \quad \|Az - y\|_2 \le \eta,$$
where $y \in \mathbb{C}^m$ and $A \in \mathbb{C}^{m \times N}$, $m = m_1 + \cdots + m_C$, denote the stacked measurement vector and measurement matrix respectively. Here $\eta \ge 0$ is such that $\|e\|_2 \le \eta$, where $e \in \mathbb{C}^m$ is the stacked noise vector.
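To make the recovery step concrete, the following minimal sketch assembles the stacked system and solves basis pursuit numerically. Real-valued data, Gaussian $\tilde{A}_c$, randomly generated diagonal profiles $H_c$ and the cvxpy solver are illustrative assumptions, not choices prescribed by the paper.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, C, m_c, s = 128, 4, 16, 6          # signal length, sensors, rows per sensor, sparsity

# Ground-truth s-sparse signal
x = np.zeros(N)
x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)

# Sensor profiles H_c (diagonal) and standard CS matrices A~_c (Gaussian)
H = [np.diag(rng.uniform(0.5, 1.5, N)) for _ in range(C)]
A_tilde = [rng.standard_normal((m_c, N)) / np.sqrt(C * m_c) for _ in range(C)]

# Stacked measurements y_c = A_c x + e_c with A_c = A~_c H_c
A = np.vstack([A_tilde[c] @ H[c] for c in range(C)])
e = 1e-3 * rng.standard_normal(C * m_c)
y = A @ x + e
eta = np.linalg.norm(e)

# Basis pursuit: minimize ||z||_1 subject to ||A z - y||_2 <= eta
z = cp.Variable(N)
prob = cp.Problem(cp.Minimize(cp.norm1(z)), [cp.norm(A @ z - y, 2) <= eta])
prob.solve()
print("relative recovery error:", np.linalg.norm(z.value - x) / np.linalg.norm(x))
```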
Within this setup we consider two different sampling scenarios:
Distinct sampling. Here the matrices $\tilde{A}_1,\ldots,\tilde{A}_C$ are independent; that is, drawn independently from possibly different distributions.
Identical sampling. Here $\tilde{A}_1 = \cdots = \tilde{A}_C = \tilde{A}$ and $m_1 = \cdots = m_C$, where $\tilde{A}$ is a standard CS matrix. That is, the measurement process in each sensor is identical, the only difference being in the sensor profiles $H_c$ (see the sketch below).
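The only difference between the two scenarios is how the $\tilde{A}_c$ are drawn, as the following sketch makes explicit; the Gaussian draws and uniformly random diagonal profiles are stand-ins for illustration.

```python
import numpy as np

def measurement_matrices(H, m_c, identical, rng):
    """Return the per-sensor matrices A_c = A~_c H_c.

    H         -- list of C sensor profile matrices (N x N)
    m_c       -- number of measurements per sensor
    identical -- if True, every sensor reuses one draw A~ (identical sampling);
                 otherwise each sensor draws its own A~_c (distinct sampling)
    """
    C, N = len(H), H[0].shape[0]
    if identical:
        A_tilde = rng.standard_normal((m_c, N)) / np.sqrt(m_c)
        return [A_tilde @ H[c] for c in range(C)]
    return [(rng.standard_normal((m_c, N)) / np.sqrt(m_c)) @ H[c] for c in range(C)]

rng = np.random.default_rng(1)
N, C, m_c = 64, 3, 8
H = [np.diag(rng.uniform(0.5, 1.5, N)) for _ in range(C)]
A_distinct = measurement_matrices(H, m_c, identical=False, rng=rng)   # independent A~_c
A_identical = measurement_matrices(H, m_c, identical=True, rng=rng)   # shared A~
```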
Parallel acquisition systems are found in numerous applications, and are employed for a variety of different reasons.
I-B1 Parallel magnetic resonance imaging
Parallel MRI (pMRI) techniques are commonly used in place of single-coil MRI to reduce scan duration. The most general system model in pMRI is an example of identical sampling with diagonal sensor profiles [2, 3, 4]. In this case, the model (1.2)–(1.3) corresponds to the well-known CS SENSE technique for pMRI [3, 2, 5, 6].
I-B2 Multi-view imaging
In multi-view imaging – with applications to satellite imaging, remote sensing, super-resolution imaging and more – cameras with differing alignments simultaneously image a single object. Following the work of [7, 8], this can be understood in terms of the above framework, with the sensor profiles corresponding to geometric features of the scene.
I-B3 Generalized sampling
Papoulis’ generalized sampling theorem [9, 10] is a well-known extension of the classical Shannon sampling theorem in which a bandlimited signal is recovered from samples of convolutions of the original signal taken at a lower rate (precisely $1/C$ of the Nyquist rate when $C$ convolutions are used). Common examples include jittered or derivative sampling, with applications to super-resolution and seismic imaging respectively. Our identical sampling framework gives rise to a sparse, discrete version of generalized sampling.
I-B4 Other applications
Besides acquisition time or cost reduction (e.g. pMRI and generalized sampling) or the recovery of higher-dimensional/resolution signals (e.g. in multi-view or light-field imaging), parallel acquisition systems are also used for power reduction (e.g. in wireless sensor networks), and they arise naturally in a number of other applications, including system identification. We refer to [1] for details.
The work [1] introduced the first CS framework and theoretical analysis for the system (1.2)–(1.3). We refer to this paper for further information and background. Our work builds on it by introducing new recovery guarantees for the distinct and identical sampling scenarios. Specifically, in Corollaries 3.5 and 3.6 respectively we present new sufficient conditions on the sensor profile matrices so that the total number of required measurements is linear in the sparsity $s$ and independent of the number of sensors $C$. Since this implies that the average number of measurements required per sensor behaves like $s/C$, these results provide a theoretical foundation for the successful use of CS in the aforementioned applications. To verify our recovery guarantees we provide numerical results showing phase transition curves.
Write $\|\cdot\|_p$ for the usual $\ell^p$-norm on $\mathbb{C}^N$ and denote the canonical basis by $\{e_i\}_{i=1}^{N}$. If $\Delta \subseteq \{1,\ldots,N\}$ then we use the notation $P_\Delta$ for both the orthogonal projection $P_\Delta \in \mathbb{C}^{N \times N}$ with
$$(P_\Delta x)_i = \begin{cases} x_i, & i \in \Delta, \\ 0, & \text{otherwise}, \end{cases}$$
and the $|\Delta| \times N$ matrix that retains only the entries of a vector indexed by $\Delta$. The conversion of a vector into a diagonal matrix is denoted by $\mathrm{diag}(\cdot)$. Distinct from the index $i$, we denote the imaginary unit by $\mathrm{i} = \sqrt{-1}$. In addition, we use the notation $a \lesssim b$ or $a \gtrsim b$ to mean that there exists a constant $c > 0$ independent of all relevant parameters (in particular, the number of sensors $C$) such that $a \le c b$ or $a \ge c b$ respectively.
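As a small illustration of the two uses of $P_\Delta$ (the index set and vector below are arbitrary examples; indices are 0-based in code):

```python
import numpy as np

N = 6
Delta = [1, 3, 4]                       # an example index set (0-based here)
x = np.arange(1.0, N + 1.0)             # x = (1, 2, ..., 6)

# P_Delta as the N x N orthogonal projection: zeroes out entries outside Delta
P_proj = np.zeros((N, N))
P_proj[Delta, Delta] = 1.0
print(P_proj @ x)                       # [0. 2. 0. 4. 5. 0.]

# P_Delta as the |Delta| x N selection matrix: keeps only the entries in Delta
P_sel = np.eye(N)[Delta, :]
print(P_sel @ x)                        # [2. 4. 5.]
```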
II Abstract framework
For some $n \ge 1$, let $\mathcal{F}$ be a distribution of $n \times N$ complex matrices. We assume that $\mathcal{F}$ is isotropic in the sense that
$$\mathbb{E}(B^* B) = I, \qquad B \sim \mathcal{F}. \qquad (2.4)$$
If $\bar{m} = m/n$ (assumed to be an integer), let $B_1,\ldots,B_{\bar{m}}$ be a sequence of i.i.d. random matrices drawn from $\mathcal{F}$. Then we define the measurement matrix $A \in \mathbb{C}^{m \times N}$ by
$$A = \frac{1}{\sqrt{\bar{m}}} \sum_{i=1}^{\bar{m}} e_i \otimes B_i = \frac{1}{\sqrt{\bar{m}}} \begin{bmatrix} B_1 \\ \vdots \\ B_{\bar{m}} \end{bmatrix},$$
where $\otimes$ denotes the Kronecker product (here $e_i$ is the $i$th canonical basis vector of $\mathbb{C}^{\bar{m}}$).
This framework is an extension of the well-known setup of [11] for standard single-sensor CS, which corresponds to isotropic distributions of complex vectors (i.e. $n = 1$), to arbitrary matrices. It is sufficiently general to allow us to consider both the distinct and identical sampling scenarios:
II-A1 Distinct sampling, $n = 1$
In the $c$th sensor, suppose that the sampling arises from random draws from an isotropic distribution $\mathcal{G}_c$ of vectors on $\mathbb{C}^N$. Define $\mathcal{F}_c$ so that $B = a^* H_c$ if $a \sim \mathcal{G}_c$, for $c = 1,\ldots,C$. Now let $T$ be a uniformly-distributed random variable taking values in $\{1,\ldots,C\}$. Then define the distribution $\mathcal{F}$ on $\mathbb{C}^{1 \times N}$ so that, when conditioned on the event $\{T = c\}$, we have $\mathcal{F} = \mathcal{F}_c$. Since $\mathcal{F}$ should be isotropic in the sense of (2.4), this means that we require the joint isometry condition
$$\frac{1}{C} \sum_{c=1}^{C} H_c^* H_c = I$$
for the sensor profiles.
II-A2 Identical sampling, $n = C$
Let $\mathcal{G}$ be an isotropic distribution of vectors on $\mathbb{C}^N$. Define the distribution $\mathcal{F}$ on $\mathbb{C}^{C \times N}$ so that $B$ is the matrix with $c$th row $a^* H_c$, $c = 1,\ldots,C$, if $a \sim \mathcal{G}$. In this case, we require the joint isometry condition
$$\sum_{c=1}^{C} H_c^* H_c = I$$
for $\mathcal{F}$ to satisfy the condition (2.4).
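The joint isometry conditions are precisely what make these two constructions isotropic. The sketch below checks condition (2.4) empirically for both; the forms of the draws follow the constructions described above, and the complex Gaussian choice for the underlying isotropic vector distribution is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
N, C, trials = 16, 4, 50_000

# Diagonal profiles, normalized entrywise so that
#   (1/C) sum_c |h_c|^2 = 1  (distinct)   or   sum_c |h_c|^2 = 1  (identical)
raw = rng.uniform(0.5, 1.5, (C, N))
h_distinct = raw * np.sqrt(C / (raw**2).sum(axis=0))
h_identical = raw * np.sqrt(1.0 / (raw**2).sum(axis=0))

def isotropic_vector(rng, N):
    # complex Gaussian with E[a a^*] = I (one possible isotropic distribution)
    return (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Distinct sampling: B = a^* H_c with the sensor index c uniform on {1,...,C}
acc = np.zeros((N, N), dtype=complex)
for _ in range(trials):
    c = rng.integers(C)
    B = (isotropic_vector(rng, N).conj() * h_distinct[c])[None, :]   # row a^* H_c
    acc += B.conj().T @ B
print("distinct deviation from I: ", np.linalg.norm(acc / trials - np.eye(N)))

# Identical sampling: B has rows a^* H_1, ..., a^* H_C for a single draw a
acc = np.zeros((N, N), dtype=complex)
for _ in range(trials):
    B = isotropic_vector(rng, N).conj()[None, :] * h_identical       # C x N matrix
    acc += B.conj().T @ B
print("identical deviation from I:", np.linalg.norm(acc / trials - np.eye(N)))
```

Both printed deviations shrink towards zero as the number of trials grows, reflecting the isotropy of the two distributions.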
II-B Signal model
As discussed in [1], it is often not possible in multi-sensor systems to recover all sparse vectors under an optimal measurement condition. This is due to the potential clustering of the nonzeros of a sparse vector. Instead, we shall consider a signal model that prohibits such clustering:
Definition 2.1 (Sparsity in levels).
Let $\{I_1,\ldots,I_r\}$ be a partition of $\{1,\ldots,N\}$ and $\mathbf{s} = (s_1,\ldots,s_r)$, where $s_k \le |I_k|$, $k = 1,\ldots,r$. A vector $x \in \mathbb{C}^N$ is $\mathbf{s}$-sparse in levels if $|\mathrm{supp}(x) \cap I_k| \le s_k$ for $k = 1,\ldots,r$.
Definition 2.2 (Sparse and distributed vectors).
Let $\{I_1,\ldots,I_r\}$ be a partition of $\{1,\ldots,N\}$ and $1 \le s \le N$. For $\lambda \ge 1$, we say that an $s$-sparse vector $x \in \mathbb{C}^N$ is sparse and $\lambda$-distributed with respect to the levels if $x$ is $\mathbf{s}$-sparse in levels for some $\mathbf{s} = (s_1,\ldots,s_r)$ satisfying
$$\max_{k=1,\ldots,r} s_k \le \lambda \frac{s}{r}.$$
We denote the set of such vectors as $\Sigma_{s,\lambda}$ and, for an arbitrary $x \in \mathbb{C}^N$, write $\sigma_{s,\lambda}(x)_1$ for the $\ell^1$-norm error of the best approximation of $x$ by a vector in $\Sigma_{s,\lambda}$.
Note that our interest lies with the case where $\lambda$ is independent of $C$; that is, when none of the local sparsities $s_k$ greatly exceeds the average $s/r$.
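In code, checking membership of this signal model reduces to counting nonzeros per level. A minimal sketch follows, assuming the distribution condition takes the form $\max_k s_k \le \lambda s/r$; the precise inequality in Definition 2.2 may be stated differently.

```python
import numpy as np

def local_sparsities(x, levels):
    """Number of nonzeros of x in each level (levels = list of index arrays partitioning {0,...,N-1})."""
    return np.array([np.count_nonzero(x[I]) for I in levels])

def is_sparse_and_distributed(x, levels, s, lam):
    """Check the sparse-and-distributed model: total sparsity <= s and no level
    holds more than lam * s / r nonzeros (assumed form of Definition 2.2)."""
    sk = local_sparsities(x, levels)
    return sk.sum() <= s and sk.max() <= lam * s / len(levels)

# Example: N = 12 split into r = 3 equal levels
N = 12
levels = np.array_split(np.arange(N), 3)

x = np.zeros(N)
x[[0, 5, 9]] = 1.0                       # one nonzero per level
print(is_sparse_and_distributed(x, levels, s=3, lam=1.0))   # True

x = np.zeros(N)
x[[0, 1, 2]] = 1.0                       # all nonzeros clustered in the first level
print(is_sparse_and_distributed(x, levels, s=3, lam=1.0))   # False
```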
II-C Abstract recovery guarantee
Our first result concerns recovery over an arbitrary support set $\Delta \subseteq \{1,\ldots,N\}$. For this, we require the following (see [1]):
Definition 2.3 (Coherence relative to $\Delta$).
Let $\mathcal{F}$ be as in §II-A and $\Delta \subseteq \{1,\ldots,N\}$. The local coherence of $\mathcal{F}$ relative to $\Delta$ is $\mu(\mathcal{F}; \Delta) = \max\{\mu_1, \mu_2\}$, where $\mu_1$, $\mu_2$ are the smallest numbers such that
III Results for diagonal sensor profile matrices
III-A Distinct sampling
Let $H_c = \mathrm{diag}(h_c)$, $c = 1,\ldots,C$, be diagonal sensor profiles. We now let $\mathcal{G}_c$ and $\mathcal{F}$, $c = 1,\ldots,C$, be as in §II-A1. For simplicity, we suppose that $m_c = m/C$ for $c = 1,\ldots,C$. We also assume that the distributions $\mathcal{G}_c$ are incoherent, i.e. $\mu(\mathcal{G}_c) \lesssim 1$ for $c = 1,\ldots,C$. (The coherence $\mu(\mathcal{G})$ of a distribution $\mathcal{G}$ of vectors in $\mathbb{C}^N$ is defined as the smallest number such that $\|a\|_\infty^2 \le \mu(\mathcal{G})$ almost surely for $a \sim \mathcal{G}$.)
By Theorem 2.4 it suffices to estimate the coherence for subsets of the form , where and for . Fix , and . If , where then
Hence . Also,
where in the last step we use the fact that is isotropic. Observe that
Also, the normalization condition implies that . Substituting these into the previous bound now gives . To complete the proof, we now let be the index set of the largest entries of restricted to , where satisfies . Then as required. ∎
III-B Identical sampling
Now let $H_c = \mathrm{diag}(h_c)$, $c = 1,\ldots,C$, and $\mathcal{G}$, $\mathcal{F}$ be as in §II-A2. We shall assume that $\mathcal{G}$ is incoherent, i.e. $\mu(\mathcal{G}) \lesssim 1$.
As in the previous proof, it suffices by Theorem 2.4 to estimate the coherence for subsets of the form , where and for . Let , and . If then
Therefore . Similarly,
where in the final step we use the fact that is isotropic. Hence
Since the are diagonal, the normalization condition is equivalent to , . In particular,
It now follows that and therefore . Combining this with the result for completes the proof. ∎
For distinct and identical sampling respectively, Corollaries 3.5 and 3.6 provide optimal recovery guarantees, provided the partition and sensor profiles are such that the relevant coherence quantities are independent of $C$. Note that these quantities may grow with $C$ in general, which agrees with the worst-case bounds derived in [1]. Yet, it is possible to construct large families of sensor profile matrices for which they are independent of $C$, thus yielding optimal recovery. We consider several such examples in §IV.
IV Examples of diagonal sensor profiles
We now introduce several different families of diagonal sensor profiles that lead to optimal recovery guarantees for both distinct and identical sampling.
IV-A Piecewise constant sensor profiles
The following example was first presented in [1]. Let $\{I_1,\ldots,I_C\}$ be a partition of $\{1,\ldots,N\}$, where each $I_k \neq \emptyset$, and suppose that $U = (u_{ck})_{c,k=1}^{C} \in \mathbb{C}^{C \times C}$ is an isometry, i.e. $U^* U = I$. Define the sensor profile matrices
$$H_c = \alpha \sum_{k=1}^{C} u_{ck} P_{I_k}, \qquad c = 1,\ldots,C,$$
where, as in §II-A, $\alpha = \sqrt{C}$ for distinct sampling and $\alpha = 1$ for identical sampling. Observe that
$$\sum_{c=1}^{C} H_c^* H_c = \alpha^2 \sum_{k=1}^{C} \Big( \sum_{c=1}^{C} |u_{ck}|^2 \Big) P_{I_k} = \alpha^2 I,$$
so the profiles satisfy the respective joint isometry conditions. Furthermore, in the distinct case
the relevant coherence quantity is bounded by $C \mu(U)$, where $\mu(U) = \max_{c,k} |u_{ck}|^2$ is the coherence of the matrix $U$. Hence, for distinct sampling, we obtain an optimal recovery guarantee whenever $U$ is incoherent, i.e. $\mu(U) \lesssim C^{-1}$. (Since $U$ is a $C \times C$ isometry, its coherence satisfies $C^{-1} \le \mu(U) \le 1$.) Note that this holds independently of the number of partitions $C$. In particular, when $\mu(U) \asymp C^{-1}$ (e.g. when $U$ is the unitary DFT matrix) we get optimal recovery of all $s$-sparse vectors.
Conversely, in the identical case a similar calculation applies, and we obtain an optimal recovery guarantee for a suitable choice of the number of partitions. Note that this does not require $U$ to be incoherent, as in the case of distinct sampling. However, it only ensures recovery of vectors that are sparse and distributed, as opposed to all sparse vectors.
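A sketch of the piecewise constant construction, taking $U$ to be the unitary DFT matrix (which attains the minimal coherence $\mu(U) = C^{-1}$); the scalings $\alpha = \sqrt{C}$ and $\alpha = 1$ and the two joint isometry normalizations follow the conventions used above and should be read as assumptions.

```python
import numpy as np

def piecewise_constant_profiles(N, C, alpha):
    """Diagonal profiles h_c that are constant on each level I_k: h_c = alpha * sum_k u_{ck} 1_{I_k},
    with U the C x C unitary DFT matrix (|u_ck|^2 = 1/C, i.e. maximally incoherent)."""
    U = np.fft.fft(np.eye(C)) / np.sqrt(C)            # unitary DFT matrix
    levels = np.array_split(np.arange(N), C)
    h = np.zeros((C, N), dtype=complex)
    for c in range(C):
        for k, I in enumerate(levels):
            h[c, I] = alpha * U[c, k]
    return h

N, C = 64, 4
h_distinct = piecewise_constant_profiles(N, C, alpha=np.sqrt(C))    # (1/C) sum_c |h_c|^2 = 1
h_identical = piecewise_constant_profiles(N, C, alpha=1.0)          # sum_c |h_c|^2 = 1
print(np.allclose((np.abs(h_distinct)**2).sum(axis=0) / C, 1.0))    # True
print(np.allclose((np.abs(h_identical)**2).sum(axis=0), 1.0))       # True
```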
IV-B Banded sensor profiles
Let $\{I_1,\ldots,I_C\}$ be a partition of $\{1,\ldots,N\}$ and suppose that the $H_c = \mathrm{diag}(h_c)$ are banded, i.e.
$$\mathrm{supp}(h_c) \subseteq I_{c-r} \cup \cdots \cup I_{c+r},$$
for some fixed bandwidth $r \ge 0$ (note that we set $I_k = \emptyset$ if $k < 1$ or $k > C$). Since the joint isometry conditions are assumed to hold with the same scaling $\alpha$ as in the previous example, it follows that the relevant coherence quantities are bounded in terms of the bandwidth. Hence in both cases we get an optimal recovery guarantee whenever the profiles are suitably bounded and the bandwidth $r$ is independent of $C$.
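The banded structure is simple to instantiate. The sketch below builds diagonal profiles supported on the $2r+1$ neighbouring intervals and rescales them entrywise so that the identical-sampling joint isometry normalization used above holds; the particular decaying weights are purely illustrative.

```python
import numpy as np

def banded_profiles(N, C, r):
    """Diagonal profiles h_c supported on I_{c-r} U ... U I_{c+r}, then rescaled
    entrywise so that sum_c |h_c|^2 = 1 (joint isometry, identical-sampling convention)."""
    levels = np.array_split(np.arange(N), C)
    h = np.zeros((C, N))
    for c in range(C):
        for k in range(max(0, c - r), min(C, c + r + 1)):
            # illustrative weight, decaying away from the centre interval I_c
            h[c, levels[k]] = 1.0 / (1.0 + abs(k - c))
    h /= np.sqrt((h**2).sum(axis=0))            # enforce sum_c |h_{c,i}|^2 = 1 for every i
    return h

h = banded_profiles(N=60, C=5, r=1)
print(np.allclose((h**2).sum(axis=0), 1.0))     # True: joint isometry holds
print((h != 0).sum(axis=1))                     # each sensor covers at most (2r+1) intervals
```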
A specific example of a banded sensor profile stemming from applications is a smooth sensor profile with compact support [1, Fig. 1(c)]. This corresponds to a sharply decaying coil sensitivity in a one-dimensional example of the pMRI application; see [1] for further details. For these sensor profiles, we take smooth, compactly supported profiles of this form. Note that this specific example corresponds to a banded sensor profile with bandwidth $r = 1$; therefore the above conditions hold for any $C$, which leads to an optimal recovery guarantee. This theoretical result is verified in Fig. 1(b), where empirical phase transition curves are computed for both types of sampling.
[Figure: empirical phase transition curves. (a) Distinct sampling. (b) Identical sampling.]
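For completeness, the following is one possible stand-in for a smooth, compactly supported (coil-sensitivity-like) profile with bandwidth $r = 1$. The exact profile used in the experiments is not reproduced here, so the raised-cosine shape below is an assumption for illustration only.

```python
import numpy as np

def smooth_bump_profiles(N, C):
    """Stand-in smooth, compactly supported profiles: a raised-cosine bump centred on
    interval I_c and vanishing outside I_{c-1} U I_c U I_{c+1} (bandwidth r = 1)."""
    t = (np.arange(N) + 0.5) / N                     # grid on [0, 1]
    centres = (np.arange(C) + 0.5) / C
    width = 1.0 / C                                   # support radius: one interval each side
    h = np.zeros((C, N))
    for c in range(C):
        d = np.abs(t - centres[c])
        inside = d < width
        h[c, inside] = np.cos(0.5 * np.pi * d[inside] / width) ** 2
    return h / np.sqrt((h**2).sum(axis=0))            # normalize: sum_c |h_{c,i}|^2 = 1

h = smooth_bump_profiles(N=128, C=8)
```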
BA wishes to acknowledge the support of the Alfred P. Sloan Research Foundation and the Natural Sciences and Engineering Research Council of Canada through grant 611675. BA and IYC acknowledge the support of the National Science Foundation through DMS grant 1318894.
[1] I. Y. Chun and B. Adcock, “Compressed sensing and parallel acquisition,” submitted to IEEE Trans. Inf. Theory, Jan. 2016. [Online]. Available: http://arxiv.org/abs/1601.06214
[2] I. Y. Chun, B. Adcock, and T. M. Talavage, “Efficient compressed sensing SENSE pMRI reconstruction with joint sparsity promotion,” IEEE Trans. Med. Imag., vol. 35, no. 1, pp. 354–368, Jan. 2016.
[3] I. Y. Chun, B. Adcock, and T. Talavage, “Efficient compressed sensing SENSE parallel MRI reconstruction with joint sparsity promotion and mutual incoherence enhancement,” in Proc. IEEE EMBS, Chicago, IL, Aug. 2014, pp. 2424–2427.
[4] K. P. Pruessmann, M. Weiger, M. B. Scheidegger, and P. Boesiger, “SENSE: sensitivity encoding for fast MRI,” Magn. Reson. Med., vol. 42, no. 5, pp. 952–962, Jul. 1999.
[5] F. Knoll, C. Clason, K. Bredies, M. Uecker, and R. Stollberger, “Parallel imaging with nonlinear reconstruction using variational penalties,” Magn. Reson. Med., vol. 67, no. 1, pp. 34–41, Jan. 2012.
[6] H. She, R. R. Chen, D. Liang, E. V. DiBella, and L. Ying, “Sparse BLIP: BLind Iterative Parallel imaging reconstruction using compressed sensing,” Magn. Reson. Med., vol. 71, no. 2, pp. 645–660, Feb. 2014.
[7] J. Y. Park and M. B. Wakin, “A geometric approach to multi-view compressive imaging,” EURASIP J. Adv. Signal Process., vol. 2012, no. 1, pp. 1–15, Dec. 2012.
[8] Y. Traonmilin, S. Ladjal, and A. Almansa, “Robust multi-image processing with optimal sparse regularization,” J. Math. Imaging Vis., vol. 51, no. 3, pp. 413–429, Mar. 2015.
[9] A. Papoulis, “Generalized sampling expansion,” IEEE Trans. Circuits Syst., vol. 24, no. 11, pp. 652–654, Nov. 1977.
[10] M. Unser, “Sampling-50 years after Shannon,” Proc. IEEE, vol. 88, no. 4, pp. 569–587, Apr. 2000.
[11] E. J. Candes and Y. Plan, “A probabilistic and RIPless theory of compressed sensing,” IEEE Trans. Inf. Theory, vol. 57, no. 11, pp. 7235–7254, Nov. 2011.
[12] B. Adcock, A. C. Hansen, C. Poon, and B. Roman, “Breaking the coherence barrier: a new theory for compressed sensing,” arXiv preprint cs.IT/1302.0561, Feb. 2013.
[13] B. Adcock, A. C. Hansen, and B. Roman, “A note on compressed sensing of structured sparse wavelet coefficients from subsampled Fourier measurements,” arXiv preprint math.FA/1403.6541, Mar. 2014.