Random subdictionaries and coherence conditions for sparse signal recovery

Abstract.

The most frequently used condition on sampling matrices employed in compressive sampling is the restricted isometry property (RIP) of the matrix when restricted to sparse signals. At the same time, imposing this condition makes it difficult to find explicit matrices that support recovery of signals from sketches of the optimal (smallest possible) dimension. A number of attempts have been made to relax or replace the RIP property in sparse recovery algorithms. We focus on the relaxation under which the near-isometry property holds for most rather than for all submatrices of the sampling matrix, known as the statistical RIP, or StRIP, condition. We show that $m \times N$ sampling matrices with sufficiently small maximum coherence $\mu$ and mean square coherence $\bar{\mu}^2$ support stable recovery of $k$-sparse signals using Basis Pursuit. These assumptions are satisfied in many examples. As a result, we are able to construct sampling matrices that support recovery with low error for sparsity $k$ higher than $\sqrt{m}$, which exceeds the range of parameters of the known classes of RIP matrices.

Date: March 6, 2018.
Dept. of Electrical and Computer Engineering and Institute for Systems Research, University of Maryland, College Park, MD 20742, and Institute for Problems of Information Transmission, Russian Academy of Sciences, Moscow, Russia. Email: abarg@umd.edu. Research supported in part by NSF grants CCF0916919, CCF1217245, CCF1217894, DMS1101697, and NSA H98230-12-1-0260.
Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139. This work was done while the author was at the University of Maryland, College Park, MD. Email: aryam@mit.edu
Dept. of Mathematics, University of Maryland, College Park, MD 20742. Email: rwang928@math.umd.edu

1. Introduction

One of the important problems in the theory of compressed sampling is the construction of sampling operators that support algorithmic procedures of sparse recovery. A universal sufficient condition for stable reconstruction is given by the restricted isometry property (RIP) of sampling matrices [14]. It has been shown that sparse signals compressed to low-dimensional images using linear RIP maps can be reconstructed using $\ell_1$-minimization procedures such as Basis Pursuit and Lasso [20, 18, 14, 11].

Let $x$ be an $N$-dimensional real signal that has a sparse representation in a suitably chosen basis. We will assume that $x$ has $k$ nonzero coordinates (it is a $k$-sparse vector) or is approximately sparse in the sense that it has at most $k$ significant coordinates, i.e., entries of large magnitude compared to the other entries. The observation vector $y \in \mathbb{R}^m$ is formed as a linear transformation of $x$, i.e.,

$$ y = \Phi x + z, $$

where $\Phi$ is an $m \times N$ real matrix, and $z$ is a noise vector. We assume that $z$ has bounded energy (i.e., $\|z\|_2 \le \epsilon$). The objective of the estimator is to find a good approximation of the signal after observing $y$. This is obviously impossible for general signals, but becomes tractable if we seek a sparse approximation $\hat{x}$ which satisfies

$$ \|\hat{x} - x\|_p \le C_1 k^{1/p - 1} \|x - x_{(k)}\|_1 + C_2 \epsilon \qquad (1) $$

for some $1 \le p \le 2$ and constants $C_1, C_2 > 0$, where $x_{(k)}$ denotes the best $k$-term approximation of $x$. Note that if $x$ itself is $k$-sparse, then (1) implies that the recovery error is at most proportional to the $\ell_2$ norm of the noise. Moreover, it implies that the recovery is stable in the sense that if $x$ is approximately $k$-sparse, then the recovery error is small. If the estimate satisfies an inequality of the type (1), we say that the recovery procedure satisfies an $\ell_p/\ell_1$ error guarantee.

Among the most studied estimators is the Basis Pursuit algorithm [23]. This is an $\ell_1$-minimization algorithm that provides an estimate of the signal through solving the convex programming problem

$$ \hat{x} = \arg\min_{\tilde{x} \in \mathbb{R}^N} \|\tilde{x}\|_1 \quad \text{subject to} \quad \|y - \Phi \tilde{x}\|_2 \le \epsilon. \qquad (2) $$

Basis Pursuit is known to provide both $\ell_1/\ell_1$ and $\ell_2/\ell_1$ error guarantees under the conditions on $\Phi$ discussed in the next section.
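In practice, problem (2) can be handed to any general-purpose convex solver. The following minimal sketch is our illustration rather than part of the original development; it assumes the cvxpy package and a toy Gaussian dictionary:

```python
import numpy as np
import cvxpy as cp

def basis_pursuit(Phi, y, eps):
    """Solve (2): minimize ||x||_1 subject to ||Phi x - y||_2 <= eps."""
    x = cp.Variable(Phi.shape[1])
    problem = cp.Problem(cp.Minimize(cp.norm1(x)),
                         [cp.norm(Phi @ x - y, 2) <= eps])
    problem.solve()
    return x.value

# Toy run: a 5-sparse sign vector observed through a random 64 x 256 matrix.
rng = np.random.default_rng(0)
m, N, k, eps = 64, 256, 5, 1e-2
Phi = rng.standard_normal((m, N))
Phi /= np.linalg.norm(Phi, axis=0)        # unit-length columns
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
z = rng.standard_normal(m)
z *= eps / np.linalg.norm(z)              # noise with ||z||_2 = eps
x_hat = basis_pursuit(Phi, Phi @ x0 + z, eps)
```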

Another popular estimator for which the recovery guarantees are proved using coherence properties of the sampling matrix is Lasso [45, 23]. Assume that the noise vector $z$ is independent of the signal and formed of independent identically distributed Gaussian random variables with zero mean and variance $\sigma^2$. Lasso is a regularized version of the least-squares minimization problem, written as follows:

$$ \hat{x} = \arg\min_{\tilde{x} \in \mathbb{R}^N} \frac{1}{2} \|y - \Phi \tilde{x}\|_2^2 + \lambda \sigma \|\tilde{x}\|_1. \qquad (3) $$

Here $\lambda$ is a regularization parameter which controls the complexity (sparsity) of the optimizer.
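Since (3) is an unconstrained convex problem, it can be solved, for instance, by proximal gradient descent (ISTA). The sketch below is our own illustration of this standard method under the parameterization of (3); the step size is the usual $1/L$ choice and the iteration count is arbitrary:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (entrywise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(Phi, y, lam_sigma, n_iter=500):
    """Minimize 0.5*||y - Phi x||_2^2 + lam_sigma*||x||_1 by ISTA."""
    L = np.linalg.norm(Phi, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)      # gradient of the quadratic term
        x = soft_threshold(x - grad / L, lam_sigma / L)
    return x
```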

Compressed sensing is just one of a large group of applications in which severely ill-posed problems are solved under the sparsity assumption. An extensive recent overview of such applications is given in [10]. It is this multitude of concrete applications that makes the study of sparse recovery such an appealing area of signal processing and applied statistics.

1.1. Properties of sampling matrices

One of the main questions related to sparse recovery is the derivation of sufficient conditions for the convergence and error guarantees of the reconstruction algorithms. Here we discuss some properties of sampling matrices that are relevant to our results, focusing on incoherence and near-isometry of random submatrices of the sampling matrix.

Let $\Phi$ be an $m \times N$ real matrix and let $\phi_1, \dots, \phi_N$ be its columns. Without loss of generality, throughout this paper we assume that the columns are unit-length vectors. Let $[N] = \{1, \dots, N\}$ and let $I$ be a $k$-subset of the set of coordinates $[N]$. By $\binom{[N]}{k}$ we denote the set of all $k$-subsets of $[N]$. Below we write $\Phi_I$ to refer to the submatrix of $\Phi$ formed of the columns with indices in $I$. Given a vector $x \in \mathbb{R}^N$, we denote by $x_I$ the $k$-dimensional vector given by the projection of $x$ on the coordinates in $I$.

It is known that at least $m = \Omega(k \log(N/k))$ samples are required for any recovery algorithm with an error guarantee of the form (1) (see, for example, [36, 37]). Matrices with random Gaussian or Bernoulli entries with high probability provide the best known error guarantees from the sketch dimension that matches this lower bound [20, 21, 19]. The estimates become more conservative once we try to construct sampling matrices explicitly.

We say that $\Phi$ satisfies the coherence property if the inner products between distinct columns are uniformly small, and call
$$ \mu := \max_{i \ne j} |\langle \phi_i, \phi_j \rangle| $$
the coherence parameter of the matrix. The importance of incoherent dictionaries has been recognized in a large number of papers on compressed sensing, among them [46, 49, 30, 17, 15, 16, 11]. The coherence condition plays an essential role in proofs of recovery guarantees in these and many other studies. We also define the mean square coherence $\bar{\mu}^2$ and the maximum average square coherence $\hat{\mu}^2$ of the dictionary:
$$ \bar{\mu}^2 := \frac{1}{N(N-1)} \sum_{i=1}^{N} \sum_{j \ne i} \langle \phi_i, \phi_j \rangle^2, \qquad \hat{\mu}^2 := \max_{1 \le i \le N} \frac{1}{N-1} \sum_{j \ne i} \langle \phi_i, \phi_j \rangle^2. $$
Of course, $\bar{\mu}^2 \le \hat{\mu}^2$, with equality if and only if for every $i$ the inner sum takes the same value. Our reliance on two coherence parameters of the sampling matrix somewhat resembles the approach in [3, 4]; however, unlike those papers, our results imply recovery guarantees for Basis Pursuit. Our proof methods are also materially different from these works. More details are provided below in this section, where we comment on previous results.
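All three coherence parameters are immediate to compute from the Gram matrix of the dictionary. A small sketch (ours; unit-length columns are assumed, and the normalizations follow the definitions above):

```python
import numpy as np

def coherence_parameters(Phi):
    """Return (mu, mean square coherence, max average square coherence)."""
    N = Phi.shape[1]
    G = Phi.T @ Phi                  # Gram matrix of the dictionary
    np.fill_diagonal(G, 0.0)         # exclude the diagonal terms i = j
    mu = np.abs(G).max()
    mu_bar_sq = (G ** 2).sum() / (N * (N - 1))
    mu_hat_sq = (G ** 2).sum(axis=1).max() / (N - 1)
    return mu, mu_bar_sq, mu_hat_sq
```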

The RIP property

The matrix $\Phi$ satisfies the RIP property (is $(k, \delta)$-RIP) if

$$ (1 - \delta) \|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta) \|x\|_2^2 \qquad (4) $$

holds for all $k$-sparse vectors $x$, where $\delta \in (0,1)$ is a parameter. Equivalently, $\Phi$ is $(k, \delta)$-RIP if $\|\Phi_I^T \Phi_I - \mathrm{Id}\| \le \delta$ holds for all $I \in \binom{[N]}{k}$, where $\|\cdot\|$ is the spectral norm and $\mathrm{Id}$ is the identity matrix. The RIP property provides a sufficient condition for the solution of (2) to satisfy the error guarantees of Basis Pursuit [20, 18, 14, 11]. In particular, by [14], $(2k, \sqrt{2}-1)$-RIP suffices for both $\ell_1/\ell_1$ and $\ell_2/\ell_1$ error estimates, while [11] improves this to $(2k, \delta)$-RIP with a larger admissible constant $\delta$.
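Verifying (4) exactly requires examining all $\binom{N}{k}$ submatrices, which is intractable in general; on a single support, however, the equivalent spectral form is a one-line computation. A sketch (ours):

```python
import numpy as np

def rip_constant_on_support(Phi, I):
    """delta(I) = ||Phi_I^T Phi_I - Id||, the spectral form of (4) on support I."""
    Phi_I = Phi[:, list(I)]
    return np.linalg.norm(Phi_I.T @ Phi_I - np.eye(len(I)), 2)
```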

As is well known (see [46], [26]), coherence and RIP are related: a matrix with coherence parameter $\mu$ is $(k, (k-1)\mu)$-RIP. This connection has served as the starting point in a number of studies on constructing RIP matrices from incoherent dictionaries. To implement this idea, one starts with a set of $N$ unit vectors in $\mathbb{R}^m$ with maximum coherence $\mu$. In other words, we seek a well-separated collection of lines through the origin in $\mathbb{R}^m$, or, reformulating again, a good packing of the real projective space. One way of constructing such packings begins with taking a set of binary $m$-dimensional vectors whose pairwise Hamming distances are concentrated around $m/2$. Call the maximum deviation of the distances from $m/2$ the width $w$ of the set. An incoherent dictionary is obtained by mapping the bits of a small-width code to bipolar signals and normalizing, i.e., $0 \mapsto 1/\sqrt{m}$, $1 \mapsto -1/\sqrt{m}$. The resulting coherence and width are related by $\mu = 2w/m$.
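The sketch below (our illustration) carries out this construction for an arbitrary set of binary codewords. The relation $\mu = 2w/m$ follows because two columns obtained from codewords at Hamming distance $d$ have inner product $(m - 2d)/m$:

```python
import numpy as np

def code_to_dictionary(C):
    """Map an (N, m) 0/1 array of codewords to unit-norm bipolar columns."""
    m = C.shape[1]
    return ((-1.0) ** C).T / np.sqrt(m)   # one column per codeword

def width(C):
    """Maximum deviation of the pairwise Hamming distances from m/2."""
    N, m = C.shape
    dev = [abs(np.sum(C[i] != C[j]) - m / 2)
           for i in range(N) for j in range(i + 1, N)]
    return max(dev)

# For Phi = code_to_dictionary(C), the coherence satisfies mu = 2 * width(C) / m.
```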

One of the first papers to put forward the idea of constructing RIP matrices from binary vectors was the work by DeVore [25]. While [25] did not make a connection to error-correcting codes, a number of later papers pursued both its algorithmic and constructive aspects [6, 12, 13, 24]. Examples of codes with small width are given in [2], where they are studied under the name of small-bias probability spaces; the RIP matrices obtained from the constructions in [2] have sketch dimension quadratic in the sparsity up to logarithmic factors. Ben-Aroya and Ta-Shma [7] recently improved this tradeoff. The advantage of obtaining RIP matrices from binary or spherical codes is low construction complexity: in many instances it is possible to define the matrix using only $O(\log N)$ columns, while the remaining columns can be computed as their linear combinations. We also note a result by Bourgain et al. [8], who gave the first (and the only known) construction of RIP matrices with sparsity $k$ on the order of $m^{1/2+\epsilon}$ (i.e., greater than $\sqrt{m}$). An overview of the state of the art in the construction of RIP matrices is given in a recent paper [5].

At the same time, in practical problems we still need to write out the entire matrix, so constructions of complexity polynomial in $N$ are an acceptable choice. Under these assumptions, the best tradeoff between $m$ and $k$ for RIP matrices based on codes and coherence is obtained from Gilbert-Varshamov type code constructions: namely, it is possible to construct $(k, \delta)$-RIP matrices with $m = O((k/\delta)^2 \log N)$. At the same time, already [2] observes that the sketch dimension in RIP matrices constructed from binary codes is at least proportional to $k^2$.

Statistical incoherence properties

The limitations on incoherent dictionaries discussed in the previous section suggest relaxing the RIP condition. An intuitively appealing idea is to require that condition (4) hold for almost all rather than all $k$-subsets, replacing RIP with a version of it in which the near-isometry property holds with high probability with respect to the choice of $I$. Statistical RIP (StRIP) matrices are arguably easier to construct, so they have the potential of supporting provable recovery guarantees from shorter sketches compared to the known constructive schemes relying on RIP.

A few words on notation. Let $[N] = \{1, \dots, N\}$ and let $\binom{[N]}{k}$ denote the set of $k$-subsets of $[N]$. The usual notation $P$ is used to refer to a probability measure when there is no ambiguity. At the same time, we use separate notation for some frequently encountered probability spaces. In particular, we use $P_I$ to denote the uniform probability distribution on $\binom{[N]}{k}$. If we need to choose a random $k$-subset $I$ and a random index $j \in [N] \setminus I$, we use the notation $P_{I,j}$. We use $P_x$ to denote any probability measure on $\mathbb{R}^N$ which assigns equal probability to each of the $2^N$ orthants (i.e., with uniformly distributed signs).

The following definition is essentially due to Tropp [49, 48], where it is called conditioning of random subdictionaries.

Definition 1.

An $m \times N$ matrix $\Phi$ satisfies the statistical RIP property (is $(k, \delta, \gamma)$-StRIP) if
$$ P_I\big( \|\Phi_I^T \Phi_I - \mathrm{Id}\| \le \delta \big) \ge 1 - \gamma. $$

In other words, the inequality

$$ (1 - \delta) \|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta) \|x\|_2^2 \qquad (5) $$

holds for at least a proportion $1 - \gamma$ of all $k$-subsets $I \in \binom{[N]}{k}$ and for all $x$ supported on $I$.

A related but different definition was given later in several papers such as [12, 3, 30], as well as some others. In these works, a matrix is called StRIP if inequality (5) holds for at least a proportion $1 - \gamma$ of $k$-sparse unit vectors $x$. While several well-known classes of matrices were shown to have this property, it alone is not sufficient for sparse recovery procedures. Several additional properties, as well as specialized recovery procedures that make signal reconstruction possible, were investigated in [12].

In this paper we focus on the statistical isometry property as given by Def. 1 and mean this definition whenever we mention StRIP matrices. We note that condition (5) is scalable, so the restriction to unit vectors is not essential.
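Because Def. 1 quantifies over a uniformly random support, the StRIP proportion lends itself to Monte Carlo estimation. A sketch (ours; the number of trials is an arbitrary choice):

```python
import numpy as np

def strip_proportion(Phi, k, delta, n_trials=2000, seed=0):
    """Estimate the fraction of k-subsets I with ||Phi_I^T Phi_I - Id|| <= delta."""
    rng = np.random.default_rng(seed)
    N = Phi.shape[1]
    hits = 0
    for _ in range(n_trials):
        I = rng.choice(N, size=k, replace=False)
        G = Phi[:, I].T @ Phi[:, I]
        hits += np.linalg.norm(G - np.eye(k), 2) <= delta
    return hits / n_trials
```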

Definition 2.

An $m \times N$ matrix $\Phi$ satisfies a statistical incoherence condition (is $(k, \alpha, \gamma)$-SINC) if

$$ P_I\Big( \max_{j \notin I} \|\Phi_I^T \phi_j\|_2 \le \alpha \Big) \ge 1 - \gamma. \qquad (6) $$

This condition is discussed in [29, 47], and more explicitly in [48]. Following [48], it appears in the proofs of sparse recovery in [15] and below in this paper. A somewhat similar average coherence condition was also introduced in [3, 4]. The reason that (6) is less restrictive than the coherence property is as follows. Collections of unit vectors with small coherence (large separation) cannot be too large, so as not to contradict universal bounds on packings of the projective space. At the same time, for the norm $\|\Phi_I^T \phi_j\|_2$ to be large it is necessary that a given column $\phi_j$ be close to the majority of the vectors from the set $\{\phi_i, i \in I\}$, which is easier to rule out.
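The SINC proportion can be estimated in the same Monte Carlo fashion as the StRIP proportion above; a sketch (ours):

```python
import numpy as np

def sinc_proportion(Phi, k, alpha, n_trials=2000, seed=0):
    """Estimate the fraction of k-subsets I with max_j ||Phi_I^T phi_j||_2 <= alpha."""
    rng = np.random.default_rng(seed)
    N = Phi.shape[1]
    hits = 0
    for _ in range(n_trials):
        I = rng.choice(N, size=k, replace=False)
        mask = np.ones(N, dtype=bool)
        mask[I] = False
        cross = Phi[:, I].T @ Phi[:, mask]       # k x (N-k) cross-correlations
        hits += np.linalg.norm(cross, axis=0).max() <= alpha
    return hits / n_trials
```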

Nevertheless, the above relaxed conditions are still restrictive enough to rule out many deterministic matrices: the problem is that for almost all supports $I$ we require that $\|\Phi_I^T \phi_j\|_2$ be small for all $j \notin I$. We observe that this condition can be further relaxed. Namely, let
$$ \mathcal{A} := \{\, |\langle \phi_i, \phi_j \rangle| : i, j \in [N],\; i \ne j \,\} $$
be the set of all values taken by the coherence parameter. Let us introduce the following definition.

Definition 3.

An $m \times N$ matrix $\Phi$ is said to satisfy a weak statistical incoherence condition (to be a WSINC matrix) if

(7)

where $f$ is a positive increasing function of its argument.

We note that this definition is informative if $f$ takes values smaller than one; otherwise, replacing $f$ with 1, we get back the SINC condition. Below we use a specific choice of the function $f$. This definition takes account of the distribution of values of the quantity $\|\Phi_I^T \phi_j\|_2$ for different choices of the support $I$ and a column $\phi_j$ outside it. For binary dictionaries, the WSINC property relies on the distribution of sums of Hamming distances between a column and a collection of columns, taken with weights that decrease as the sum increases.

Definition 4.

We say that a signal $x \in \mathbb{R}^N$ is drawn from a generic random signal model $\mathcal{S}_k$ if

1) The locations of the $k$ coordinates of $x$ with largest magnitudes are chosen among all $k$-subsets of $[N]$ with a uniform distribution;

2) Conditional on the support $I$, the signs of the coordinates $x_i$, $i \in I$, are i.i.d. uniform Bernoulli random variables taking values in the set $\{-1, +1\}$.

Using previously defined notation, the probability induced by the generic model decomposes as the product of the uniform measure $P_I$ on the choice of the support and the uniform measure on the $2^k$ sign patterns.
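A generic random signal is straightforward to sample: the sketch below (ours) draws the support uniformly and assigns i.i.d. uniform signs. The magnitudes are left as an arbitrary modeling choice, since the model constrains only the support and the signs:

```python
import numpy as np

def generic_random_signal(N, k, magnitudes=None, seed=None):
    """Draw x from the generic random signal model: uniform support, i.i.d. signs."""
    rng = np.random.default_rng(seed)
    if magnitudes is None:
        magnitudes = np.ones(k)                # arbitrary; not fixed by the model
    x = np.zeros(N)
    I = rng.choice(N, size=k, replace=False)   # 1) uniform k-subset
    x[I] = rng.choice([-1.0, 1.0], size=k) * magnitudes   # 2) i.i.d. uniform signs
    return x, I
```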

1.2. Contributions of this paper

Our results are as follows. First, we show that a combination of the StRIP and SINC conditions suffices for stable recovery of sparse signals. In large part, these results are due to [49]. We incorporate some additional elements, such as a stability analysis of Basis Pursuit based on these assumptions, and give the explicit values of the constants involved in the assumptions. We also show that the WSINC condition together with StRIP is sufficient for bounding the off-support error of Basis Pursuit.

One of the main results of [49, 48] is a sufficient condition, formulated in terms of the coherence and the spectral norm of the matrix, for an $m \times N$ matrix to act nearly isometrically on most sparse vectors; see [49], Theorem B. For this condition to be applicable, one needs a sufficiently small coherence. For sampling matrices that satisfy this condition, we obtain a near-optimal relation between the parameters. Some examples of this kind are given below in Sect. 5. As one of our main results, we extend the region of parameters that suffice for StRIP; the precise relation is proved in Theorem 4.7. This improvement comes at the expense of an additional requirement on the mean square coherence $\bar{\mu}^2$ (or a similar inequality for the maximum average square coherence $\hat{\mu}^2$), but this is easily satisfied in a large class of examples discussed below in the paper. These examples, in conjunction with Theorem 4.1 and the results in Section 2, establish provable error guarantees for some new classes of sampling matrices.

We note a group of papers by Bajwa and Calderbank [3, 4, 13] which is centered around the analysis of a threshold decoding procedure (OST) defined in [3]. The sufficient conditions in these works are formulated in terms of the maximum coherence and the maximum average coherence. Reliance on two coherence parameters of $\Phi$ for establishing sufficient conditions for error estimates in [3] is a feature shared between these papers and our research. At the same time, the OST procedure relies on additional assumptions, such as the minimum-to-average ratio of signal components being bounded away from zero (in experiments, OST is efficient for uniform-looking signals, and is less so for sparse signals with occasional small components). Some other similar assumptions are required for the proofs for the noisy version of OST [4].

We note that there are a number of other studies that establish sufficient conditions for sampling matrices to provide bounded-error approximations in sparse recovery procedures, e.g., [16, 34, 35]. At the same time, these conditions are formulated in terms different from our assumptions, so no immediate comparison can be made with our results.

As a side result, we also calculate the parameters for the StRIP and SINC conditions that suffice to derive an error estimate for sparse recovery using Lasso. This result is implicit in the work of Candès and Plan [15], which also uses the SINC property of sampling matrices. The condition on sparsity for Lasso is of the form $k = O(N / (\|\Phi\|^2 \log N))$, so if $\|\Phi\|^2 = O(N/m)$ this yields $k = O(m / \log N)$. This range of parameters exceeds the range in which Basis Pursuit is shown to have good error guarantees, even with the improvement obtained in our paper. At the same time, both [15] and our calculations find error estimates in the form of bounds on $\|\Phi(\hat{x} - x)\|_2$ rather than $\|\hat{x} - x\|_2$, i.e., on the compressed version of the recovered signal.

In the final section of the paper we collect examples of incoherent dictionaries that satisfy our sufficient conditions for approximate recovery using Basis Pursuit. Two new examples with nearly optimal parameters that emerge are the Delsarte-Goethals dictionaries [39] and deterministic sub-Fourier dictionaries [31]. For instance, in the Delsarte-Goethals case we obtain a sketch dimension on the order of the sparsity $k$ times logarithmic factors, which is near-optimal and in line with the comments made above.

We also show that a restricted independence property of the dictionary suffices to establish the StRIP condition. Using sets of binary vectors known as orthogonal arrays, we find StRIP dictionaries with a favorable relation between the sparsity and the sketch dimension. At the same time, we are not able to show that restricted independence gives rise to the SINC property with good parameter estimates, so this result has no consequences for linear programming decoders.

Acknowledgment: We are grateful to Waheed Bajwa for useful feedback on an early version of this work.

2. Statistical Incoherence Properties and Basis Pursuit

In this section we prove approximation error bounds for recovery by Basis Pursuit from linear sketches obtained using deterministic matrices with the StRIP and SINC properties.

2.1. StRIP Matrices with incoherence property

It was proved in [49] that random sparse signals sampled using matrices with the StRIP property can be recovered with high probability from low-dimensional sketches using linear programming. In this section we prove a similar result that in addition incorporates stability analysis.

Theorem 2.1.

Suppose that $x$ is a generic random signal from the model $\mathcal{S}_k$. Let $y = \Phi x + z$ and let $\hat{x}$ be the approximation of $x$ by the Basis Pursuit algorithm. Let $I$ be the set of the $k$ largest coordinates of $x$. If

  1. $\Phi$ is $(k, \delta, \gamma_1)$-StRIP;

  2. $\Phi$ is $(k, \alpha, \gamma_2)$-SINC,

then with probability at least

and

This theorem implies that if the signal itself is $k$-sparse, then the Basis Pursuit algorithm will recover it exactly. Otherwise, its output will be a tight sparse approximation of $x$.

Theorem 2.1 will follow from the next three lemmas. Some of the ideas involved in their proofs are close to the techniques used in [21]. Let $h = \hat{x} - x$ be the error in the recovery of Basis Pursuit. In the following, $I$ refers to the support of the $k$ largest coordinates of $x$.

Lemma 2.2.

Let $h = \hat{x} - x$, and suppose that $\|\Phi_I^T \Phi_I - \mathrm{Id}\| \le \delta$ and that $\max_{j \notin I} \|\Phi_I^T \phi_j\|_2 \le \alpha$.

Then

Proof.

Clearly, so and

We obtain

as required.  

Next we show that the error outside $I$ cannot be large. Below, $\mathrm{sgn}(\cdot)$ is the $\{\pm 1\}$-valued vector of signs of the argument vector.

Lemma 2.3.

Suppose that there exists a vector $v$ such that

  1. $v$ is contained in the row space of $\Phi$, say $v = \Phi^T w$;

  2. $v_I = \mathrm{sgn}(x_I)$;

  3. $|v_j| \le 1/2$ for all $j \notin I$.

Then

(8)
Proof.

By (2) we have

Here we have used an elementary inequality valid for any two vectors, together with the triangle inequality. From this we obtain

Further, using the properties of we have

The statement of the lemma is now evident.  

Now we prove that a vector $v$ with the properties required in the last lemma indeed exists.

Lemma 2.4.

Let $x$ be a generic random signal from the model $\mathcal{S}_k$. Suppose that the support $I$ of the $k$ largest coordinates of $x$ is fixed. Under the assumptions of Lemma 2.2, the vector

$$ v = \Phi^T \Phi_I (\Phi_I^T \Phi_I)^{-1} \, \mathrm{sgn}(x_I) $$

satisfies (i)-(iii) of Lemma 2.3 with high probability; the precise bound is given in (11) below.

Proof.

From the definition of $v$ it is clear that it belongs to the row space of $\Phi$, and that $v_I = \mathrm{sgn}(x_I)$. For $j \notin I$ we have $v_j = \langle w_j, \mathrm{sgn}(x_I) \rangle$, where
$$ w_j = (\Phi_I^T \Phi_I)^{-1} \Phi_I^T \phi_j. $$

We will show that $|v_j| \le 1/2$ holds for all $j \notin I$ with high probability.

Since the coordinates of $\mathrm{sgn}(x_I)$ are i.i.d. uniform random variables taking values in the set $\{-1, +1\}$, we can use Hoeffding's inequality to claim that

$$ P(|v_j| > t) \le 2 \exp\Big( -\frac{t^2}{2 \|w_j\|_2^2} \Big). \qquad (9) $$

On the other hand, for all $j \notin I$,

$$ \|w_j\|_2 \le \frac{\|\Phi_I^T \phi_j\|_2}{1 - \delta} \le \frac{\alpha}{1 - \delta}. \qquad (10) $$

Equations (9) and (10) together imply, for any $j \notin I$, that $P(|v_j| > 1/2) \le 2 \exp\big( -\frac{(1-\delta)^2}{8 \alpha^2} \big)$.

Using the union bound, we now obtain the following relation:

$$ P\Big( \max_{j \notin I} |v_j| > \frac{1}{2} \Big) \le 2 (N - k) \exp\Big( -\frac{(1-\delta)^2}{8 \alpha^2} \Big). \qquad (11) $$

Hence $|v_j| \le 1/2$ holds for all $j \notin I$ with probability at least $1 - 2(N-k) \exp\big( -\frac{(1-\delta)^2}{8 \alpha^2} \big)$.  
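The certificate of Lemma 2.4 is convenient for numerical experiments as well; the sketch below (ours) constructs $v$ for a given support and sign pattern and exposes the off-support coordinates whose magnitudes the lemma controls:

```python
import numpy as np

def dual_certificate(Phi, I, signs):
    """Build v = Phi^T Phi_I (Phi_I^T Phi_I)^{-1} sgn(x_I) from Lemma 2.4."""
    Phi_I = Phi[:, list(I)]
    u = np.linalg.solve(Phi_I.T @ Phi_I, signs)  # (Phi_I^T Phi_I)^{-1} sgn(x_I)
    return Phi.T @ (Phi_I @ u)

# On I the certificate reproduces the sign pattern exactly; off I, the lemma
# asserts |v_j| <= 1/2 with high probability over the random signs.
```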

Now we are ready to prove Theorem 2.1.

Proof of Theorem 2.1.

The matrix $\Phi$ is $(k, \delta, \gamma_1)$-StRIP. Hence, $\|\Phi_I^T \Phi_I - \mathrm{Id}\| \le \delta$ with probability at least $1 - \gamma_1$. At the same time, from the SINC assumption we have, with probability at least $1 - \gamma_2$ over the choice of $I$,

$\|\Phi_I^T \phi_j\|_2 \le \alpha$ for all $j \notin I$. Thus, by the union bound, $I$ will have these two properties simultaneously with probability at least $1 - \gamma_1 - \gamma_2$. Then from Lemma 2.2 we obtain that

with this probability. Furthermore, from Lemmas 2.3 and 2.4,

with the claimed probability. This completes the proof.  

2.2. StRIP Matrices with weak incoherence property

In this section we establish a recovery guarantee for Basis Pursuit under the weak SINC condition defined earlier in the paper.

Theorem 2.5.

Suppose that the sampling matrix $\Phi$ is StRIP and WSINC. Suppose also that the signal $x$ is chosen from the generic random signal model $\mathcal{S}_k$, and let $\hat{x}$ be the approximation of $x$ found by Basis Pursuit. Then with high probability we have

If $x$ is $k$-sparse and satisfies the conditions above, then this theorem asserts that Basis Pursuit will find the support of $x$. If in addition $x$ is the only $k$-sparse solution of $y = \Phi x$, then we have $\hat{x} = x$. Note that the WSINC property alone is not sufficient for the $\ell_2$ error guarantee. However, once the correct support $I$ is detected, the signal can be found by solving the overdetermined system $\Phi_I x_I = y$.
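Once the support estimate is available, this refitting step is a single least-squares solve on the $m \times k$ subdictionary; a sketch (ours):

```python
import numpy as np

def refit_on_support(Phi, y, I):
    """Least-squares refit: solve the overdetermined system Phi_I x_I = y."""
    I = list(I)
    sol, *_ = np.linalg.lstsq(Phi[:, I], y, rcond=None)
    x = np.zeros(Phi.shape[1])
    x[I] = sol
    return x
```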

To prove Theorem 2.5, we refine the ideas used to establish Lemma 2.4.

Lemma 2.6.

Suppose that the sampling matrix satisfies the conditions of Theorem 2.5. For any and define Let

Then

Proof.

As in the proof of Lemma 2.4, we define the vector

$$ v = \Phi^T \Phi_I (\Phi_I^T \Phi_I)^{-1} \, \mathrm{sgn}(x_I) $$

and let $v_j$ be the $j$th coordinate of the vector $v$. From now on we write $v_j$, omitting the dependence on $I$ and $\mathrm{sgn}(x_I)$. For a support $I$ satisfying (5), the StRIP property of $\Phi$ implies that

By definition, for any

Now we split the target probability into three parts:

where $\mathcal{J}$ is the set of supports appearing in the definition of the WSINC property. If $I \in \mathcal{J}$, i.e., it supports both the StRIP and SINC properties, then (11) implies that the first term on the right-hand side equals 0. The third term refers to supports with no SINC property, whose total probability is bounded by assumption. Estimating the second term by the Markov inequality, we have

(12)

where $\mathbb{1}(\cdot)$ denotes the indicator random variable. We have

(13)

Let us first estimate the individual terms by invoking Hoeffding's inequality (9):

Substituting this result into (13), we obtain

where the last step is on account of (12) and the WSINC assumption.  

Proof of Theorem 2.5: Define the set by

Recall that Theorem 2.5 is stated with respect to the random signal $x$. Therefore, let us estimate the probability

We have from Lemma 2.4 and from Lemma 2.6, so

This implies that, with the probability claimed in the theorem, the signal $x$ chosen from the generic random signal model satisfies the conditions of Lemma 2.3, i.e.,

This completes the proof.  

3. Incoherence Properties and Lasso

In this section we prove that sparse signals can be approximately recovered from low-dimensional observations using Lasso if the sampling matrices have statistical incoherence properties. The result is a modification of the methods developed in [15, 49] in that we prove that the conditions used there to bound the error of the Lasso estimate hold with high probability if $\Phi$ has both the StRIP and SINC properties. The precise claim is given in the following statement.

Theorem 3.1.

Let $x$ be a random $k$-sparse signal whose support satisfies the two properties of the generic random signal model $\mathcal{S}_k$. Denote by $\hat{x}$ its estimate from $y = \Phi x + z$ via Lasso (3), where $z$ is an i.i.d. Gaussian vector with zero mean and variance $\sigma^2$, and where $\lambda = 2\sqrt{2 \log N}$. Suppose that $k \le c N / (\|\Phi\|^2 \log N)$, where $c$ is a positive constant, and that the matrix $\Phi$ satisfies the following two properties:

  1. $\Phi$ is StRIP.

  2. $\Phi$ is SINC.

Then we have

with probability at least , where is an absolute constant and

The following theorem is implicit in [15]; see Theorem 1.2 and Sect. 3.2 in that paper.

Theorem 3.2.

(Candès and Plan) Suppose that $x$ is a $k$-sparse signal drawn from the model $\mathcal{S}_k$, where

where $c$ is a constant. Let $I$ be the support of $x$ and suppose the following three conditions are satisfied:

Then

where is an absolute constant.

Our aim will be to prove that conditions (1)-(3) of this theorem hold with high probability under the assumptions of Theorem 3.1.

First, it is clear that condition (1) holds with high probability. This follows simply because $z$ is an independent Gaussian vector, and has been discussed in [15] (this is also the reason for selecting the particular value of $\lambda$). The main part of the argument is contained in the following lemma, whose proof uses some ideas of [15].

Lemma 3.3.

Suppose that and that for all

Then Condition (3) of Theorem 3.2 holds with probability at least for .

Proof.

Let Define and where

Let and We will show that with high probability and