Optimal Sparse Recovery for Multi-Sensor Measurements

Il Yong Chun
Department of Mathematics
Purdue University
West Lafayette, IN 47907 USA
Email: chuni@purdue.edu

Ben Adcock
Department of Mathematics
Simon Fraser University
Burnaby, BC V5A 1S6 Canada
Email: ben_adcock@sfu.ca
Abstract

Many practical sensing applications involve multiple sensors simultaneously acquiring measurements of a single object. In contrast, most existing sparse recovery guarantees in compressed sensing concern single-sensor acquisition scenarios only. In this paper, we address the optimal recovery of compressible signals from multi-sensor measurements using compressed sensing techniques, thereby confirming the benefits of multi- over single-sensor environments. Throughout the paper we consider a broad class of sensing matrices and two fundamentally different sampling scenarios (distinct and identical respectively), both of which are relevant to applications. For the case of diagonal sensor profile matrices (which characterize environmental conditions between a source and the sensors), this paper presents two key improvements over existing results: first, a simpler optimal recovery guarantee for distinct sampling; and second, an improved recovery guarantee for identical sampling, based on the so-called sparsity in levels signal model.

I Introduction

The standard single-sensor problem in compressed sensing (CS) involves the recovery of a sparse signal $x \in \mathbb{C}^n$ from measurements

$y = Ax + e$,   (1.1)

where $A \in \mathbb{C}^{m \times n}$ and $e \in \mathbb{C}^m$ is noise. As is well known, subject to appropriate conditions on the measurement matrix $A$ (e.g. incoherence) it is possible to recover $x$ from a number of measurements $m$ that scales linearly with its sparsity $s$.

I-A System models

In this paper, we consider the generalization of (1.1) to a so-called parallel acquisition system [1], where $C \geq 1$ sensors simultaneously measure $x$:

$y_c = A_c x + e_c, \quad c = 1, \ldots, C$.   (1.2)

Here $A_c \in \mathbb{C}^{m_c \times n}$ is the matrix corresponding to the measurements taken in the $c$th sensor and $e_c$ is noise. Throughout, we assume that

$A_c = \tilde{A}_c H_c, \quad c = 1, \ldots, C$,

where $\tilde{A}_c \in \mathbb{C}^{m_c \times n}$ are standard CS matrices (e.g. random subgaussian, subsampled isometry or random convolution matrices), and $H_c \in \mathbb{C}^{n \times n}$ are fixed, deterministic matrices, referred to as sensor profile matrices. These matrices model environmental conditions in the sensing problem; for example, a communication channel between $x$ and the sensors, the geometric position of the sensors relative to $x$, or the effectiveness of the sensors in measuring $x$. As in standard CS, our recovery algorithm will be basis pursuit:

$\min_{z \in \mathbb{C}^n} \|z\|_1 \ \text{subject to} \ \|Az - y\|_2 \leq \eta$.   (1.3)

Here $A = [A_1^* \ \cdots \ A_C^*]^* \in \mathbb{C}^{m \times n}$ and $y = [y_1^* \ \cdots \ y_C^*]^*$ are the stacked measurement matrix and measurement vector, $m = m_1 + \cdots + m_C$ is the total number of measurements, and $\eta$ is such that $\|e\|_2 \leq \eta$ for the stacked noise vector $e = [e_1^* \ \cdots \ e_C^*]^*$.
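To make the setup concrete, the following is a minimal numerical sketch of (1.2)–(1.3) (with (1.1) recovered as the special case $C = 1$). The Gaussian matrices, the arbitrary random diagonal profiles and the use of the cvxpy solver are illustrative assumptions, not choices prescribed by this paper:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, C, m_c, s = 128, 4, 16, 5          # signal length, sensors, rows per sensor, sparsity

# s-sparse ground truth
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

# Diagonal sensor profiles H_c (arbitrary here; Section III imposes a
# joint isometry normalization) and standard CS matrices A~_c (Gaussian)
H = [np.diag(rng.uniform(0.5, 1.5, n)) for _ in range(C)]
A_tilde = [rng.standard_normal((m_c, n)) / np.sqrt(m_c) for _ in range(C)]

# Stacked system y = A x + e, with A_c = A~_c H_c
A = np.vstack([A_tilde[c] @ H[c] for c in range(C)])
e = 1e-3 * rng.standard_normal(C * m_c)
y = A @ x + e
eta = np.linalg.norm(e)

# Basis pursuit (1.3): min ||z||_1 subject to ||A z - y||_2 <= eta
z = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm1(z)), [cp.norm(A @ z - y, 2) <= eta])
problem.solve()
print("recovery error:", np.linalg.norm(z.value - x))
```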

Within this setup we consider two distinct types of problem:

  • Distinct sampling. Here the matrices $\tilde{A}_1, \ldots, \tilde{A}_C$ are independent; that is, drawn independently from possibly different distributions.

  • Identical sampling. Here $m_1 = \cdots = m_C = m/C$ and $\tilde{A}_c = \tilde{A}$ for $c = 1, \ldots, C$, where $\tilde{A}$ is a standard CS matrix. That is, the measurement process in each sensor is identical, the only difference being in the sensor profiles $H_c$. A sketch contrasting the two constructions is given below.
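The only difference between the two scenarios is whether each sensor draws its own CS matrix. A minimal sketch, again with Gaussian matrices as a stand-in for any standard CS matrix:

```python
import numpy as np

def build_blocks(n, C, m_c, H, rng, identical=False):
    """Return the per-sensor matrices A_c = A~_c H_c for the two scenarios.

    identical=False: each sensor draws its own A~_c (distinct sampling).
    identical=True : a single shared A~ is reused by every sensor.
    """
    if identical:
        A_shared = rng.standard_normal((m_c, n)) / np.sqrt(m_c)
        A_tilde = [A_shared] * C
    else:
        A_tilde = [rng.standard_normal((m_c, n)) / np.sqrt(m_c) for _ in range(C)]
    return [A_tilde[c] @ H[c] for c in range(C)]
```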

I-B Applications

Parallel acquisition systems are found in numerous applications, and are employed for a variety of different reasons.

I-B1 Parallel magnetic resonance imaging

Parallel MRI (pMRI) techniques are commonly used over single-coil MRI to reduce scan duration. The most general system model in pMRI is an example of identical sampling with diagonal sensor profiles [2, 3, 4]. In this case, the model (1.2)–(1.3) is the well-known CS SENSE technique for pMRI [3, 2, 5, 6].

I-B2 Multi-view imaging

In multi-view imaging – with applications to satellite imaging, remote sensing, super-resolution imaging and more – cameras with differing alignments simultaneously image a single object. Following the work of [7, 8], this can be understood in terms of the above framework, with the sensor profiles corresponding to geometric features of the scene.

I-B3 Generalized sampling

Papoulis’ generalized sampling theorem [9, 10] is a well-known extension of the classical Shannon sampling theorem in which a bandlimited signal is recovered from samples of $C$ convolutions of the original signal, each taken at a lower rate (precisely $1/C$ of the Nyquist rate). Common examples include jittered or derivative sampling, with applications to super-resolution and seismic imaging respectively. Our identical sampling framework gives rise to a sparse, discrete version of generalized sampling, as illustrated below.
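As an illustration of this connection, the following hypothetical sketch measures a signal through $C$ filtered channels, each subsampled by a factor of $C$; the circular convolution and the choice of filters are assumptions for illustration only:

```python
import numpy as np

def multichannel_measurements(x, filters):
    """Discrete analogue of Papoulis-type generalized sampling: each of the
    C channels convolves x with its own filter, then keeps every C-th
    sample, so each channel runs at 1/C of the full sampling rate."""
    C = len(filters)
    y = []
    for h in filters:
        # circular convolution of x with the channel filter h
        conv = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x))))
        y.append(conv[::C])           # subsample by a factor of C
    return np.concatenate(y)          # roughly len(x) samples in total
```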

I-B4 Other applications

Besides acquisition time or cost reduction (e.g. pMRI and generalized sampling) or the recovery of higher-dimensional/resolution signals (e.g. in multi-view or light-field imaging), parallel acquisition systems are also used for power reduction (e.g. in wireless sensor networks), and also naturally arise in a number of other applications, including system identification. We refer to [1] for details.

I-C Contributions

The work [1] introduced the first CS framework and theoretical analysis for the system (1.2)–(1.3); we refer to that paper for further information and background. Our work builds on it by introducing new recovery guarantees for the distinct and identical sampling scenarios. Specifically, in Corollaries 3.5 and 3.6 respectively we present new sufficient conditions on the sensor profile matrices $H_c$ so that the total number of required measurements $m$ is linear in the sparsity $s$ and independent of the number of sensors $C$. Since this implies that the average number of measurements required per sensor behaves like $s/C$, these results provide a theoretical foundation for the successful use of CS in the aforementioned applications. To verify our recovery guarantees we provide numerical results showing phase transition curves.

I-D Notation

Write $\|\cdot\|_p$ for the $\ell^p$-norm on $\mathbb{C}^n$ and denote the canonical basis by $e_1, \ldots, e_n$. If $\Delta \subseteq \{1, \ldots, n\}$ then we use the notation $P_\Delta$ for both the orthogonal projection $P_\Delta \in \mathbb{C}^{n \times n}$ with

$(P_\Delta x)_i = x_i$ if $i \in \Delta$ and $(P_\Delta x)_i = 0$ otherwise,

and the matrix $P_\Delta \in \mathbb{C}^{|\Delta| \times n}$ with

$P_\Delta x = (x_i)_{i \in \Delta}$,

formed from the rows of the identity matrix indexed by $\Delta$. The conversion of a vector $h$ into a diagonal matrix is denoted by $\mathrm{diag}(h)$. Distinct from the index $i$, we denote the imaginary unit by $\mathrm{i} = \sqrt{-1}$. In addition, we use the notation $a \lesssim b$ or $a \gtrsim b$ to mean there exists a constant $c > 0$ independent of all relevant parameters (in particular, the number of sensors $C$) such that $a \leq c b$ or $a \geq c b$ respectively.
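For concreteness, both usages of $P_\Delta$ can be realized as follows (a small sketch; note that it uses 0-based indices, unlike the paper):

```python
import numpy as np

n = 8
Delta = [1, 4, 6]                       # an index set (0-based here)

# P_Delta as an n x n orthogonal projection: zeroes entries outside Delta
P_proj = np.zeros((n, n))
P_proj[Delta, Delta] = 1.0

# P_Delta as a |Delta| x n selector: extracts the entries indexed by Delta
P_sel = np.eye(n)[Delta, :]

x = np.arange(n, dtype=float)
assert np.allclose(P_sel @ x, x[Delta])                                    # selector form
assert np.allclose(P_proj @ x, np.where(np.isin(np.arange(n), Delta), x, 0.0))  # projection form
```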

II Abstract framework

Following [1], we now introduce an abstract framework that is sufficiently general to include both the identical and distinct sampling scenarios. For more details we refer to [1].

II-A Setup

For some $N \geq 1$, let $\mathcal{D}$ be a distribution of $N \times n$ complex matrices. We assume that $\mathcal{D}$ is isotropic in the sense that

$\mathbb{E}(B^* B) = I, \quad B \sim \mathcal{D}$.   (2.4)

If $\tilde{m} = m/N$ (assumed to be an integer), let $B_1, \ldots, B_{\tilde{m}}$ be a sequence of i.i.d. random matrices drawn from $\mathcal{D}$. Then we define the measurement matrix $A \in \mathbb{C}^{m \times n}$ by

$A = \frac{1}{\sqrt{\tilde{m}}} \sum_{i=1}^{\tilde{m}} e_i \otimes B_i$,   (2.5)

where $\otimes$ denotes the Kronecker product; that is, $A$ is the block matrix formed by stacking the normalized blocks $B_1, \ldots, B_{\tilde{m}}$.
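As a sanity check of this construction, the following sketch assembles $A$ from i.i.d. blocks and verifies (2.4) empirically; the randomly subsampled DFT blocks are one admissible isotropic choice, assumed here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, m_tilde = 64, 4, 256            # m = N * m_tilde rows in total

def draw_block():
    # One admissible isotropic distribution D: N uniformly random rows of
    # the unitary DFT matrix, rescaled so that E[B* B] = I holds.
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)
    rows = rng.integers(0, n, size=N)
    return np.sqrt(n / N) * F[rows, :]

# A = (1/sqrt(m~)) sum_i e_i (x) B_i, i.e. the stacked normalized blocks
A = np.vstack([draw_block() for _ in range(m_tilde)]) / np.sqrt(m_tilde)

# Empirical check of the isotropy condition E[A* A] = I
print(np.linalg.norm(A.conj().T @ A - np.eye(n)) / np.sqrt(n))
```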

This framework extends the well-known setup of [11] for standard single-sensor CS, which corresponds to isotropic distributions of complex vectors (i.e. $N = 1$), to arbitrary matrices. It is sufficiently general to allow us to consider both the distinct and identical sampling scenarios:

II-A1 Distinct sampling, $N = 1$

In the $c$th sensor, suppose that the sampling arises from random draws from an isotropic distribution $\mathcal{D}_c$ on $\mathbb{C}^n$. Define $\mathcal{F}_c$ so that $b \sim \mathcal{F}_c$ if $b = H_c^* a$ for $a \sim \mathcal{D}_c$. Now let $c$ be a uniformly-distributed random variable taking values in $\{1, \ldots, C\}$. Then define the distribution $\mathcal{D}$ on $\mathbb{C}^{1 \times n}$ so that, when conditioned on the event $c = \bar{c}$, we have $B^* \sim \mathcal{F}_{\bar{c}}$ for $B \sim \mathcal{D}$. Since $\mathcal{D}$ should be isotropic in the sense of (2.4), this means that we require the joint isometry condition

$\frac{1}{C} \sum_{c=1}^{C} H_c^* H_c = I$

for the sensor profiles.

II-A2 Identical sampling, $N = C$

Let $\mathcal{D}'$ be an isotropic distribution of vectors on $\mathbb{C}^n$. Define the distribution $\mathcal{D}$ on $\mathbb{C}^{C \times n}$ so that $B \sim \mathcal{D}$ if $B = [H_1^* a \ \cdots \ H_C^* a]^*$ for $a \sim \mathcal{D}'$. In this case, we require the joint isometry condition

$\sum_{c=1}^{C} H_c^* H_c = I$

for $\mathcal{D}$ to satisfy the condition (2.4).
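The two constructions differ only in whether the draw of $a$ is shared across sensors. A sketch of both samplers, with a random DFT row standing in for an arbitrary isotropic vector distribution:

```python
import numpy as np

def draw_isotropic_row(n, rng):
    # Illustrative isotropic choice: a random DFT row with |a_i| = 1,
    # so that E[a a*] = I.
    k = rng.integers(0, n)
    return np.exp(2j * np.pi * k * np.arange(n) / n)

def draw_distinct(H, rng):
    """Distinct sampling (N = 1): pick a sensor c uniformly at random, draw
    an isotropic vector a, and return the 1 x n block (H_c^* a)^*."""
    C, n = len(H), H[0].shape[0]
    c = rng.integers(0, C)
    a = draw_isotropic_row(n, rng)
    return (H[c].conj().T @ a)[None, :].conj()

def draw_identical(H, rng):
    """Identical sampling (N = C): a single draw a is shared by all C
    sensors; the block's c-th row is (H_c^* a)^*."""
    C, n = len(H), H[0].shape[0]
    a = draw_isotropic_row(n, rng)
    return np.stack([(H[c].conj().T @ a).conj() for c in range(C)])
```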

II-B Signal model

As discussed in [1], it is often not possible in multi-sensor systems to recover all sparse vectors with an optimal measurement condition. This is due to the potential clustering of the nonzeros of a sparse vector. Instead, we shall consider a signal model that prohibits such clustering:

Definition 2.1 (Sparsity in levels).

Let $\mathcal{I} = \{I_1, \ldots, I_D\}$ be a partition of $\{1, \ldots, n\}$ and $\mathbf{s} = (s_1, \ldots, s_D)$, where $s_d \leq |I_d|$, $d = 1, \ldots, D$. A vector $x \in \mathbb{C}^n$ is $(\mathbf{s}, \mathcal{I})$-sparse in levels if $|\mathrm{supp}(x) \cap I_d| \leq s_d$ for $d = 1, \ldots, D$.

Note that sparsity in levels was first introduced in [12] as a way to consider the asymptotic sparsity of wavelet coefficients (see also [13]).

Definition 2.2 (Sparse and distributed vectors).

Let $\mathcal{I} = \{I_1, \ldots, I_D\}$ be a partition of $\{1, \ldots, n\}$ and $1 \leq s \leq n$. For $\lambda \geq 1$, we say that an $s$-sparse vector $x$ is sparse and $\lambda$-distributed with respect to the levels $\mathcal{I}$ if $x$ is $(\mathbf{s}, \mathcal{I})$-sparse in levels for some $\mathbf{s} = (s_1, \ldots, s_D)$ satisfying

$s_d \leq \lambda s / D, \quad d = 1, \ldots, D$.

We denote the set of such vectors as $\Sigma_{s,\lambda}$ and, for an arbitrary $x \in \mathbb{C}^n$, write $\sigma_{s,\lambda}(x)_1$ for the $\ell^1$-norm error of the best approximation of $x$ by a vector in $\Sigma_{s,\lambda}$.

Note that our interest lies with the case where $\lambda$ is a constant independent of the remaining parameters; that is, when none of the local sparsities $s_d$ greatly exceeds the average $s/D$.
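In code, the local sparsities and the smallest admissible $\lambda$ for a given vector and partition can be computed directly (a sketch under the normalization $s_d \leq \lambda s / D$ stated above):

```python
import numpy as np

def local_sparsities(x, levels, tol=0.0):
    """Number of nonzeros of x in each level I_d of the partition."""
    return np.array([np.count_nonzero(np.abs(x[idx]) > tol) for idx in levels])

def distribution_factor(x, levels):
    """Smallest lambda such that x is sparse and lambda-distributed:
    max_d s_d / (s / D), the worst local sparsity over the average."""
    s_d = local_sparsities(x, levels)
    s, D = s_d.sum(), len(levels)
    return s_d.max() * D / s if s > 0 else 0.0
```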

II-C Abstract recovery guarantee

Our first result concerns recovery over an arbitrary support set $\Delta \subseteq \{1, \ldots, n\}$. For this, we require the following (see [1]):

Definition 2.3 (Coherence relative to ).

Let $\mathcal{D}$ be as in §II-A and $\Delta \subseteq \{1, \ldots, n\}$. The local coherence of $\mathcal{D}$ relative to $\Delta$ is $\Gamma(\mathcal{D}, \Delta) = \max\{\Gamma_1(\mathcal{D}, \Delta), \Gamma_2(\mathcal{D}, \Delta)\}$, where $\Gamma_1(\mathcal{D}, \Delta)$, $\Gamma_2(\mathcal{D}, \Delta)$ are the smallest numbers such that

$\|B P_\Delta\|_2^2 \leq \Gamma_1(\mathcal{D}, \Delta)$

and

$\max_{1 \leq i \leq n} \|P_\Delta B^* B e_i\|_2 \leq \Gamma_2(\mathcal{D}, \Delta)$

almost surely, for $B \sim \mathcal{D}$.

Theorem 2.4 (Abstract recovery theorem [1]).

For $0 < \epsilon < 1$ and $x \in \mathbb{C}^n$, let $\mathcal{D}$ be a distribution on $\mathbb{C}^{N \times n}$ satisfying (2.4) and suppose that $\Delta \subseteq \{1, \ldots, n\}$ with $|\Delta| = s$ and $m/N \in \mathbb{N}$. Let $y = Ax + e$ and draw $A$ according to (2.5), where $\|e\|_2 \leq \eta$. Then for any minimizer $\hat{x}$ of

$\min_{z \in \mathbb{C}^n} \|z\|_1 \ \text{subject to} \ \|Az - y\|_2 \leq \eta$,

we have

$\|\hat{x} - x\|_2 \lesssim \|x - P_\Delta x\|_1 + \sqrt{s}\, \eta$,

with probability at least $1 - \epsilon$, provided

$m \gtrsim \Gamma(\mathcal{D}, \Delta) \cdot L$,

where $L$ is a factor logarithmic in $m$, $n$, $s$ and $\epsilon^{-1}$; see [1] for its precise form.

III Results for diagonal sensor profile matrices

III-A Distinct sampling

Let $H_c = \mathrm{diag}(h_c)$, $c = 1, \ldots, C$, be diagonal sensor profiles, where $h_c \in \mathbb{C}^n$. We now let $\mathcal{D}$ and $\mathcal{D}_c$, $c = 1, \ldots, C$, be as in §II-A1. For simplicity, we suppose that $m_c = m/C$ for $c = 1, \ldots, C$. We also assume that the distributions $\mathcal{D}_c$ are incoherent, i.e. $\mu(\mathcal{D}_c) \lesssim 1$ for $c = 1, \ldots, C$.¹

¹The coherence of a distribution $\mathcal{D}$ of vectors in $\mathbb{C}^n$ is defined as the smallest number $\mu(\mathcal{D})$ such that $\|a\|_\infty^2 \leq \mu(\mathcal{D})$ almost surely for $a \sim \mathcal{D}$ [11].

Corollary 3.5.

Let $\mathcal{I} = \{I_1, \ldots, I_D\}$ be a partition of $\{1, \ldots, n\}$, $\lambda \geq 1$, $0 < \epsilon < 1$ and $x \in \mathbb{C}^n$. Let $N = 1$ and let $H_c = \mathrm{diag}(h_c)$, $c = 1, \ldots, C$, be diagonal matrices satisfying the joint isometry condition

$\frac{1}{C} \sum_{c=1}^{C} H_c^* H_c = I$.

Let $\mathcal{D}$ be as in §II-A1 and draw $A$ according to (2.5). If $y = Ax + e$, $\|e\|_2 \leq \eta$, then for any minimizer $\hat{x}$ of

$\min_{z \in \mathbb{C}^n} \|z\|_1 \ \text{subject to} \ \|Az - y\|_2 \leq \eta$,

we have

$\|\hat{x} - x\|_2 \lesssim \sigma_{s,\lambda}(x)_1 + \sqrt{s}\, \eta$,

with probability at least $1 - \epsilon$, provided

$m \gtrsim \lambda \cdot s \cdot \Gamma \cdot L$,

where $L$ is as in Theorem 2.4 and $\Gamma = \Gamma(\mathcal{I}, \{H_c\})$ is a coherence-type quantity determined by the largest entries of the profiles $h_c$ on each level $I_d$.

Proof.

By Theorem 2.4 it suffices to estimate the coherence $\Gamma(\mathcal{D}, \Delta)$ for subsets of the form $\Delta = \Delta_1 \cup \cdots \cup \Delta_D$, where $\Delta_d \subseteq I_d$ and $|\Delta_d| = s_d \leq \lambda s / D$ for $d = 1, \ldots, D$. Fix $c$ and let $a \sim \mathcal{D}_c$. If $B \sim \mathcal{D}$, where $B = (H_c^* a)^*$ for the selected sensor $c$, then

$\|B P_\Delta\|_2^2 = \sum_{d=1}^{D} \sum_{i \in \Delta_d} |(h_c)_i|^2 |a_i|^2 \leq \mu(\mathcal{D}_c) \sum_{d=1}^{D} s_d \max_{i \in I_d} |(h_c)_i|^2$.

Hence $\Gamma_1(\mathcal{D}, \Delta) \lesssim \lambda s \Gamma$. Also, an analogous computation bounds $\Gamma_2(\mathcal{D}, \Delta)$, where in the last step we use the fact that $\mathcal{D}_c$ is isotropic. Observe that the incoherence assumption gives $\mu(\mathcal{D}_c) \lesssim 1$. Also, the normalization condition implies that $\frac{1}{C} \sum_{c=1}^{C} |(h_c)_i|^2 = 1$ for $i = 1, \ldots, n$. Substituting these into the previous bound now gives $\Gamma_2(\mathcal{D}, \Delta) \lesssim \lambda s \Gamma$. To complete the proof, we now let $\Delta$ be the index set of the $s$ largest entries of $x$, restricted so that at most $s_d$ of them lie in each $I_d$, where $\mathbf{s} = (s_1, \ldots, s_D)$ satisfies $s_d \leq \lambda s / D$. Then $\|x - P_\Delta x\|_1 \leq \sigma_{s,\lambda}(x)_1$, as required. ∎

We remark that Corollary 3.5 is simpler than our previous result [1, Cor. 3.5]. Specifically, it requires only one condition on the sensor profile matrices.

III-B Identical sampling

Now let , and , be as in §II-A. We shall assume that is incoherent; .

Corollary 3.6.

Let $\mathcal{I} = \{I_1, \ldots, I_D\}$ be a partition of $\{1, \ldots, n\}$, $\lambda \geq 1$, $0 < \epsilon < 1$ and $x \in \mathbb{C}^n$. Let $N = C$ and let $H_c = \mathrm{diag}(h_c)$, $c = 1, \ldots, C$, be diagonal matrices satisfying the joint isometry condition

$\sum_{c=1}^{C} H_c^* H_c = I$.

Let $\mathcal{D}$ be defined as in §II-A2 and draw $A$ according to (2.5). If $y = Ax + e$, $\|e\|_2 \leq \eta$, then for any minimizer $\hat{x}$ of

$\min_{z \in \mathbb{C}^n} \|z\|_1 \ \text{subject to} \ \|Az - y\|_2 \leq \eta$,

we have

$\|\hat{x} - x\|_2 \lesssim \sigma_{s,\lambda}(x)_1 + \sqrt{s}\, \eta$,

with probability at least $1 - \epsilon$, provided

$m \gtrsim \lambda \cdot s \cdot \Lambda \cdot L$,

where $L$ is as in Theorem 2.4 and $\Lambda = \Lambda(\mathcal{I}, \{H_c\})$ is a coherence-type quantity determined by the interactions of the profiles $h_1, \ldots, h_C$ on each level $I_d$.

Proof.

As in the previous proof, it suffices by Theorem 2.4 to estimate the coherence $\Gamma(\mathcal{D}, \Delta)$ for subsets of the form $\Delta = \Delta_1 \cup \cdots \cup \Delta_D$, where $\Delta_d \subseteq I_d$ and $|\Delta_d| = s_d \leq \lambda s / D$ for $d = 1, \ldots, D$. Let $a \sim \mathcal{D}'$ and $B \sim \mathcal{D}$, so that the $c$th row of $B$ is $(H_c^* a)^*$. Then

$\|B P_\Delta\|_2^2 \leq \sum_{c=1}^{C} \|P_\Delta H_c^* a\|_2^2 = \sum_{d=1}^{D} \sum_{i \in \Delta_d} |a_i|^2 \sum_{c=1}^{C} |(h_c)_i|^2$.

Therefore $\Gamma_1(\mathcal{D}, \Delta) \lesssim \lambda s \Lambda$. Similarly, an analogous computation bounds $\Gamma_2(\mathcal{D}, \Delta)$, where in the final step we use the fact that $\mathcal{D}'$ is isotropic. Since the $H_c$ are diagonal, the normalization condition is equivalent to

$\sum_{c=1}^{C} |(h_c)_i|^2 = 1, \quad i = 1, \ldots, n$.

It now follows that $\Gamma_2(\mathcal{D}, \Delta) \lesssim \lambda s \Lambda$, and therefore $\Gamma(\mathcal{D}, \Delta) \lesssim \lambda s \Lambda$. Combining this with the result for $\Gamma_1(\mathcal{D}, \Delta)$ completes the proof. ∎

This bound improves on our previous result [1, Cor. 4.2], since it depends on the quantity $\Lambda$, whereas the bound of Corollary 4.2 of [1] depends linearly on the number of sensors $C$.

III-C Discussion

For distinct and identical sampling respectively, Corollaries 3.5 and 3.6 provide optimal recovery guarantees, provided the partition $\mathcal{I}$ and the sensor profiles $H_c$ are such that $\Gamma$ and $\Lambda$ are independent of $C$. Note that in general $\Gamma$ and $\Lambda$ can grow linearly in $C$, which agrees with the worst-case bounds derived in [1]. Yet it is possible to construct large families of sensor profile matrices for which $\Gamma$ and $\Lambda$ are independent of $C$, thus yielding optimal recovery. We consider several such examples in §IV.

Interestingly, $\Gamma$ and $\Lambda$ are computable in $\mathcal{O}(nC)$ and $\mathcal{O}(nC^2)$ operations respectively, so optimal recovery can easily be checked numerically. Thus, Corollaries 3.5 and 3.6 give a practical means to approach the question of optimal design of sensor profiles within the constraints of a particular application. An illustrative sketch of such checks follows.
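The sketch below screens a family of diagonal profiles: it verifies the joint isometry conditions and computes a per-level worst-case energy of the kind that enters $\Gamma$. These functions are simplified diagnostics under the normalizations stated above, not the exact quantities of Corollaries 3.5 and 3.6:

```python
import numpy as np

def joint_isometry_gap(h, distinct=True):
    """h: C x n array of diagonal profile entries h_{c,i}.
    Returns the worst deviation from the joint isometry condition:
    sum_c |h_{c,i}|^2 = C (distinct) or = 1 (identical), for every i.
    Cost is O(nC), so candidate profile designs are cheap to screen."""
    target = h.shape[0] if distinct else 1.0
    col_energy = np.sum(np.abs(h) ** 2, axis=0)
    return np.max(np.abs(col_energy - target))

def level_max_energy(h, levels):
    """Per-level worst-case profile energy max_c max_{i in I_d} |h_{c,i}|^2,
    an O(nC) diagnostic of the kind entering Gamma."""
    return np.array([np.max(np.abs(h[:, idx]) ** 2) for idx in levels])
```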

IV Examples of diagonal sensor profiles

We now introduce several different families of diagonal sensor profiles that lead to optimal recovery guarantees for both distinct and identical sampling.

IV-A Piecewise constant sensor profiles

The following example was first presented in [1]. Let $\{J_1, \ldots, J_D\}$ be a partition of $\{1, \ldots, n\}$, where $D \leq C$, and suppose that $U = (u_{c,d}) \in \mathbb{C}^{C \times D}$ is an isometry, i.e. $U^* U = I$. Define the sensor profile matrices

$H_c = \alpha \sum_{d=1}^{D} u_{c,d} P_{J_d}, \quad c = 1, \ldots, C$,

where, as in §II-A, $\alpha = \sqrt{C}$ for distinct sampling and $\alpha = 1$ for identical sampling. Observe that $\sum_{c=1}^{C} H_c^* H_c = \alpha^2 I$, so the profiles satisfy the respective joint isometry conditions. Furthermore, in the distinct case

$\Gamma \lesssim C \mu(U)$,

where $\mu(U) = \max_{c,d} |u_{c,d}|^2$ is the coherence of the matrix $U$. Hence, for distinct sampling, we obtain an optimal recovery guarantee whenever $U$ is incoherent, i.e. $\mu(U) \lesssim C^{-1}$.² Note that this holds independently of the number of partitions $D$. In particular, when $D = 1$ we get optimal recovery of all $s$-sparse vectors.

²Since $U$ is an isometry, its columns have unit norm and hence its coherence satisfies $C^{-1} \leq \mu(U) \leq 1$.

Conversely, in the identical case

$\Lambda \lesssim C / D$.

Hence we obtain an optimal recovery guarantee whenever the number of partitions satisfies $D \gtrsim C$. Note that this does not require $U$ to be incoherent, as in the case of distinct sampling. However, it only ensures recovery of vectors that are sparse and distributed, as opposed to all sparse vectors.
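A sketch of this construction, using columns of the unitary DFT matrix as a maximally incoherent isometry $U$ (one convenient choice; any isometry works):

```python
import numpy as np

def piecewise_constant_profiles(levels, U, alpha):
    """H_c = alpha * sum_d u_{c,d} P_{J_d}, returned as diagonal entry vectors.
    U: C x D isometry (U^* U = I); alpha = sqrt(C) for distinct sampling,
    alpha = 1 for identical sampling.
    levels: list of D index arrays partitioning {0, ..., n-1}."""
    C, D = U.shape
    n = sum(len(idx) for idx in levels)
    h = np.zeros((C, n), dtype=complex)
    for d, idx in enumerate(levels):
        h[:, idx] = alpha * U[:, d][:, None]
    return h

# Example: D columns of the unitary DFT matrix of size C (C >= D) form an
# isometry that is maximally incoherent, mu(U) = max |u_{c,d}|^2 = 1/C.
C, D, n = 8, 4, 64
U = (np.fft.fft(np.eye(C)) / np.sqrt(C))[:, :D]
levels = np.array_split(np.arange(n), D)
h = piecewise_constant_profiles(levels, U, alpha=np.sqrt(C))
assert np.allclose(np.sum(np.abs(h) ** 2, axis=0), C)   # distinct joint isometry
```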

IV-B Banded sensor profiles

Let $\{J_1, \ldots, J_C\}$ be a partition of $\{1, \ldots, n\}$ and suppose that the $H_c = \mathrm{diag}(h_c)$ are banded, i.e.

$(h_c)_i = 0 \ \text{whenever} \ i \notin J_{c-k} \cup \cdots \cup J_{c+k}$,

for some fixed bandwidth $k$ and maximal amplitude $\gamma = \max_{c,i} |(h_c)_i|$ (note that $J_d = \emptyset$ if $d < 1$ or $d > C$). Since $H_c^* H_c \preceq \gamma^2 \sum_{d=c-k}^{c+k} P_{J_d}$, where $P_{J_d}$ is as in the previous example, it follows that $\Gamma$ and $\Lambda$ are controlled by $\gamma$ and $k$ alone. Hence in both cases we get an optimal recovery guarantee whenever $\gamma$ is such that the respective joint isometry condition holds and the bandwidth $k$ is independent of $C$.

A specific example of banded sensor profiles stemming from applications is a smooth sensor profile with compact support [1, Fig. 1(c)]. This corresponds to a sharply decaying coil sensitivity in a one-dimensional example of the pMRI application; see [2] for further details. For these sensor profiles, we take each $h_c$ to be a smooth bump supported on a fixed number of neighbouring blocks $J_d$. Note that this corresponds to a banded sensor profile with bandwidth $k$ independent of $C$, which leads to an optimal recovery guarantee for any $C$. This theoretical result is verified in Fig. 1, where empirical phase transition curves are computed for both types of sampling.
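The following sketch builds profiles of this flavour: smooth, compactly supported bumps rescaled so that the joint isometry condition holds exactly. The raised-cosine shape is an assumption for illustration and is not the specific profile used in Fig. 1:

```python
import numpy as np

def banded_profiles(n, C, width, distinct=False):
    """Smooth, compactly supported (hence banded) diagonal profiles:
    raised-cosine bumps centered at equispaced points, then rescaled so
    that the joint isometry condition holds exactly. The rescaling is a
    common design trick; the bump shape itself is illustrative."""
    t = np.arange(n)
    centers = (np.arange(C) + 0.5) * n / C
    h = np.zeros((C, n))
    for c in range(C):
        # circular distance from each index to the c-th bump center
        dist = np.minimum(np.abs(t - centers[c]), n - np.abs(t - centers[c]))
        mask = dist < width
        h[c, mask] = 0.5 * (1 + np.cos(np.pi * dist[mask] / width))
    energy = np.sum(h ** 2, axis=0)
    # need width > n/(2C) so that every index is covered by some bump
    assert np.all(energy > 0), "width too small: some indices are uncovered"
    # rescale columns: sum_c |h_{c,i}|^2 = C (distinct) or 1 (identical)
    target = C if distinct else 1.0
    h *= np.sqrt(target / energy)
    return h
```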

Fig. 1: Empirical phase transitions for random Fourier sensing with banded diagonal sensor profile matrices and varying numbers of sensors $C$. (a) Distinct sampling. (b) Identical sampling. Phase transition curves for a fixed empirical success probability are presented (for details of the phase transition experiment, see [1]). For both sampling scenarios, the empirical probability of successful recovery increases as $C$ increases. The results are in agreement with our theoretical results.

Acknowledgment

BA wishes to acknowledge the support of the Alfred P. Sloan Foundation and the Natural Sciences and Engineering Research Council of Canada through grant 611675. BA and IYC acknowledge the support of the National Science Foundation through DMS grant 1318894.

References

  • [1] I. Y. Chun and B. Adcock, “Compressed sensing and parallel acquisition,” submitted to IEEE Trans. Inf. Theory, Jan. 2016. [Online]. Available: http://arxiv.org/abs/1601.06214
  • [2] I. Y. Chun, B. Adcock, and T. M. Talavage, “Efficient compressed sensing SENSE pMRI reconstruction with joint sparsity promotion,” IEEE Trans. Med. Imag., vol. 35, no. 1, pp. 354–368, Jan. 2016.
  • [3] I. Y. Chun, B. Adcock, and T. Talavage, “Efficient compressed sensing SENSE parallel MRI reconstruction with joint sparsity promotion and mutual incoherence enhancement,” in Proc. IEEE EMBS, Chicago, IL, Aug. 2014, pp. 2424–2427.
  • [4] K. P. Pruessmann, M. Weiger, M. B. Scheidegger, and P. Boesiger, “SENSE: sensitivity encoding for fast MRI,” Magn. Reson. Med., vol. 42, no. 5, pp. 952–962, Jul. 1999.
  • [5] F. Knoll, C. Clason, K. Bredies, M. Uecker, and R. Stollberger, “Parallel imaging with nonlinear reconstruction using variational penalties,” Magn. Reson. Med., vol. 67, no. 1, pp. 34–41, Jan. 2012.
  • [6] H. She, R. R. Chen, D. Liang, E. V. DiBella, and L. Ying, “Sparse BLIP: BLind Iterative Parallel imaging reconstruction using compressed sensing,” Magn. Reson. Med., vol. 71, no. 2, pp. 645–660, Feb. 2014.
  • [7] J. Y. Park and M. B. Wakin, “A geometric approach to multi-view compressive imaging,” EURASIP J. Adv. Signal Process., vol. 2012, no. 1, pp. 1–15, Dec. 2012.
  • [8] Y. Traonmilin, S. Ladjal, and A. Almansa, “Robust multi-image processing with optimal sparse regularization,” J. Math. Imaging Vis., vol. 51, no. 3, pp. 413–429, Mar. 2015.
  • [9] A. Papoulis, “Generalized sampling expansion,” IEEE Trans. Circuits Syst., vol. 24, no. 11, pp. 652–654, Nov. 1977.
  • [10] M. Unser, “Sampling-50 years after Shannon,” Proc. IEEE, vol. 88, no. 4, pp. 569–587, Apr. 2000.
  • [11] E. J. Candes and Y. Plan, “A probabilistic and RIPless theory of compressed sensing,” IEEE Trans. Inf. Theory, vol. 57, no. 11, pp. 7235–7254, Nov. 2011.
  • [12] B. Adcock, A. C. Hansen, C. Poon, and B. Roman, “Breaking the coherence barrier: a new theory for compressed sensing,” arXiv pre-print cs.IT/1302.0561, Feb. 2013.
  • [13] B. Adcock, A. C. Hansen, and B. Roman, “A note on compressed sensing of structured sparse wavelet coefficients from subsampled Fourier measurements,” arXiv pre-print math.FA/1403.6541, Mar. 2014.