
# New and improved Johnson-Lindenstrauss embeddings via the Restricted Isometry Property

Felix Krahmer and Rachel Ward
###### Abstract

Consider an $m \times N$ matrix $\Phi$ with the Restricted Isometry Property of order $k$ and level $\delta$, that is, the norm of any $k$-sparse vector in $\mathbb{R}^N$ is preserved to within a multiplicative factor of $1 \pm \delta$ under application of $\Phi$. We show that by randomizing the column signs of such a matrix $\Phi$, the resulting map with high probability embeds any fixed set of $p = O(e^k)$ points in $\mathbb{R}^N$ into $\mathbb{R}^m$ without distorting the norm of any point in the set by more than a factor of $1 \pm 4\delta$. Consequently, matrices with the Restricted Isometry Property and with randomized column signs provide optimal Johnson-Lindenstrauss embeddings up to logarithmic factors in $N$. In particular, our results improve the best known bounds on the necessary embedding dimension $m$ for a wide class of structured random matrices; for partial Fourier and partial Hadamard matrices, we improve the recent bound $m = O(\varepsilon^{-4}\log p\,\log^4 N)$ appearing in Ailon and Liberty [3] to $m = O(\varepsilon^{-2}\log p\,\log^4 N)$, which is optimal up to the logarithmic factors in $N$. Our results also have a direct application in the area of compressed sensing for redundant dictionaries.

¹ Hausdorff Center for Mathematics, Universität Bonn, Bonn, Germany
² Courant Institute of Mathematical Sciences, New York University, New York, NY, USA

## 1 Introduction

The Johnson-Lindenstrauss (JL) Lemma states that any set of $p$ points in high-dimensional Euclidean space can be embedded into $m = O(\varepsilon^{-2}\log p)$ dimensions, without distorting the distance between any two points by more than a factor between $1-\varepsilon$ and $1+\varepsilon$. In its original form, the Johnson-Lindenstrauss Lemma reads as follows.

###### Theorem 1.1 (Johnson-Lindenstrauss Lemma [28]).

Let $\varepsilon \in (0,1)$ and let $x_1, \dots, x_p \in \mathbb{R}^N$ be arbitrary points. Let $m \ge C\varepsilon^{-2}\log p$ be a natural number, where $C > 0$ is a universal constant. Then there exists a Lipschitz map $f: \mathbb{R}^N \to \mathbb{R}^m$ such that

$$(1-\varepsilon)\|x_i - x_j\|_2^2 \le \|f(x_i) - f(x_j)\|_2^2 \le (1+\varepsilon)\|x_i - x_j\|_2^2 \qquad (1)$$

for all $i, j \in \{1, \dots, p\}$. Here $\|\cdot\|_2$ stands for the Euclidean norm in $\mathbb{R}^N$ or $\mathbb{R}^m$, respectively.
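The following self-contained numerical sketch (our illustration, not from the paper; the constant 8 in the choice of $m$ and all problem sizes are arbitrary demonstration values) realizes $f$ as a normalized Gaussian matrix – one standard construction, as discussed below – and measures the worst pairwise distortion in (1):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, eps = 2000, 50, 0.2
m = int(np.ceil(8 * np.log(p) / eps**2))  # m = O(eps^-2 log p); the constant 8 is illustrative

X = rng.standard_normal((p, N))                  # p arbitrary points in R^N
Phi = rng.standard_normal((m, N)) / np.sqrt(m)   # normalized Gaussian map: a JL embedding w.h.p.

# Check the pairwise distortion (1) over all pairs of points.
worst = 0.0
for i in range(p):
    for j in range(i + 1, p):
        d_old = np.sum((X[i] - X[j]) ** 2)
        d_new = np.sum((Phi @ (X[i] - X[j])) ** 2)
        worst = max(worst, abs(d_new / d_old - 1))
print(f"worst squared-distance distortion: {worst:.3f} (target: {eps})")
```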

As shown in [5], the bound $m = O(\varepsilon^{-2}\log p)$ for the size of $m$ is tight up to an $O(\log(1/\varepsilon))$ factor. In the original paper of Johnson and Lindenstrauss, it was shown that a random orthogonal projection, suitably normalized, provides such an embedding with high probability [28]. Later, this property was also verified for Gaussian random matrices, among other random matrix constructions [21, 15]. As a consequence, the JL Lemma has become a valuable tool for dimensionality reduction in a myriad of applications ranging from computer science [26], numerical linear algebra [36, 23, 18], and manifold learning [6], to compressed sensing [10, 40].

In most of these frameworks, the map $f$ under consideration is a linear map represented by an $m \times N$ matrix $\Phi$. In this case, one can consider the set of differences $E = \{x_i - x_j : i, j \in \{1,\dots,p\}\}$; to prove the theorem, one then needs to show that

$$(1-\varepsilon)\|y\|_2^2 \le \|\Phi y\|_2^2 \le (1+\varepsilon)\|y\|_2^2, \quad \text{for all } y \in E. \qquad (2)$$

When $\Phi$ is a random matrix, the proof that $\Phi$ satisfies the JL Lemma with high probability boils down to showing a concentration inequality of the type

$$\mathbb{P}\big((1-\varepsilon)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1+\varepsilon)\|x\|_2^2\big) \ge 1 - 2\exp(-c_0\varepsilon^2 m), \qquad (3)$$

for an arbitrary fixed $x \in \mathbb{R}^N$, where $c_0$ is an absolute constant in the optimal case, and in addition possibly mildly dependent on $N$ in almost-optimal scenarios, as for example in [3]. Indeed, it directly follows by a union bound over $y \in E$ (as in the proof of Theorem 3.1 below) that (2) holds with high probability.
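For concreteness, the union-bound step can be spelled out as follows (a standard calculation, included here for convenience): since $E$ contains at most $p^2$ difference vectors, applying (3) to each $y \in E$ gives

$$\mathbb{P}\big(\exists\, y \in E \text{ violating } (2)\big) \;\le\; 2p^2\exp(-c_0\varepsilon^2 m),$$

which is at most a prescribed $\eta$ as soon as $m \ge c_0^{-1}\varepsilon^{-2}\big(2\log p + \log(2/\eta)\big)$; this is the source of the embedding dimension $m = O(\varepsilon^{-2}\log p)$.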

In order to reduce the storage space and implementation time of such embeddings, the design of structured random JL embeddings has been an active area of research in recent years [4, 37, 3, 29]; see these works and the references therein for an overview of these efforts. Of particular importance in this context is whether fast (i.e. $O(N\log N)$) multiplication algorithms are available for the resulting matrices. Fast JL embeddings with optimal embedding dimension were first constructed by Ailon and Chazelle [1], but their embeddings are fast only for $p \le e^{O(N^{1/3})}$ vectors. This restriction on the number of vectors was later weakened to $p \le e^{O(N^{1/2-\mu})}$, for arbitrary $\mu > 0$ [2]. In [3], fast JL embeddings were constructed without any restrictions on the number of vectors, but the authors only provide the sub-optimal embedding dimension $m = O(\varepsilon^{-4}\log p\,\log^4 N)$. In this paper, we provide the first unrestricted fast JL construction with optimal embedding dimension up to logarithmic factors in $N$. Note that in the range not covered by the constructions in [1, 2], one has $\log p \gtrsim N^{1/2-\mu}$, so a logarithmic factor in $N$ is bounded by a constant multiple of $\log\log p$ and thus plays a minor role.

#### The Johnson-Lindenstrauss Lemma in Compressed Sensing.

One of the more recent applications of the Johnson-Lindenstrauss Lemma is to the area of compressed sensing, which is centered around the following phenomenon: for many underdetermined systems of linear equations $\Phi x = y$, the solution of minimal $\ell_1$-norm is also the sparsest solution. To be precise, a vector $x \in \mathbb{R}^N$ is $k$-sparse if it has at most $k$ nonzero entries. A by now classical sufficient condition on the matrix $\Phi$ for guaranteeing equivalence between the minimal $\ell_1$-norm solution and the sparsest solution is the so-called Restricted Isometry Property (RIP) [11, 13, 17].

###### Definition 1.2.

A matrix $\Phi \in \mathbb{R}^{m \times N}$ is said to have the Restricted Isometry Property of order $k$ and level $\delta \in (0,1)$ (equivalently, $(k,\delta)$-RIP) if

$$(1-\delta)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1+\delta)\|x\|_2^2 \quad \text{for all } k\text{-sparse } x \in \mathbb{R}^N. \qquad (4)$$

The restricted isometry constant $\delta_k$ of $\Phi$ is defined as the smallest value of $\delta$ for which (4) holds.

In particular, if $\Phi$ has $(2k,\delta)$-RIP with $\delta$ below a fixed threshold (see, e.g., [19] for explicit values), and if the system $\Phi x = y$ admits a $k$-sparse solution $x^{\#}$, then $x^{\#} = \operatorname{argmin}_{z:\,\Phi z = y}\|z\|_1$.
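For very small toy sizes, the restricted isometry constant can be computed by brute force directly from Definition 1.2, as $\delta_k = \max_{|S| = k}\|\Phi_S^*\Phi_S - \mathrm{Id}\|$; the following sketch (illustrative only – its cost is exponential in $N$, which is precisely why RIP constants are bounded probabilistically in practice) does exactly this:

```python
import numpy as np
from itertools import combinations

def restricted_isometry_constant(Phi, k):
    """Brute-force delta_k: the largest deviation of the squared singular values
    of any m x k column submatrix of Phi from 1. Exponential in N; toy sizes only."""
    N = Phi.shape[1]
    delta = 0.0
    for S in combinations(range(N), k):
        sv = np.linalg.svd(Phi[:, list(S)], compute_uv=False)
        delta = max(delta, abs(sv[0] ** 2 - 1), abs(sv[-1] ** 2 - 1))
    return delta

rng = np.random.default_rng(1)
m, N, k = 40, 12, 2
Phi = rng.standard_normal((m, N)) / np.sqrt(m)   # Gaussian matrices have small delta_k w.h.p.
print(restricted_isometry_constant(Phi, k))
```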

Gaussian and Bernoulli random matrices have $(k,\delta)$-RIP with high probability if the embedding dimension $m \gtrsim \delta^{-2}k\log(N/k)$ [7]. Up to the constant, lower bounds for Gelfand widths of $\ell_1$-balls [22, 20] show that this dependence on $k$ and $N$ in $m$ is optimal. The Restricted Isometry Property also holds for a rich class of structured random matrices, where usually the best known bounds for $m$ have additional log factors in $N$. All known deterministic constructions of RIP matrices require that $m \gtrsim k^2$, or at least $m \gtrsim k^{2-\mu}$ for some small constant $\mu > 0$.

The similarity between the expressions in (2) and (4) suggests a connection between the JL Lemma and the Restricted Isometry Property. A first result in this direction was established in [7], wherein it was shown that random matrices satisfying a concentration inequality of type (3) (and hence the JL Lemma) satisfy the RIP of optimal order. More precisely, the authors prove the following theorem.

###### Theorem 1.3 (Theorem 5.2 in [7]).

Suppose that $m$, $N$, and $0 < \delta < 1$ are given. If the probability distribution generating the $m \times N$ matrices $\Phi$ satisfies the concentration inequality (3) with $\varepsilon = \delta$ and absolute constant $c_0$, then there exist absolute constants $c_1, c_2$ such that with probability at least $1 - 2\exp(-c_2\delta^2 m)$ the RIP (4) holds for $\Phi$ with the prescribed $\delta$ and any $k \le c_1\delta^2 m/\log(N/k)$.

In this sense, the JL Lemma implies the Restricted Isometry Property.

#### Contribution of this work.

We prove a converse result to Theorem 1.3: We show that RIP matrices, with randomized column signs, provide Johnson-Lindenstrauss embeddings that are optimal up to logarithmic factors in the ambient dimension $N$. In particular, RIP matrices of optimal order provide Johnson-Lindenstrauss embeddings of optimal order, up to a logarithmic factor in $N$ (see Theorem 3.1). Note that without randomization, such a converse is impossible, as vectors in the null space of the fixed parent matrix are always mapped to zero.

This observation has several consequences in the area of compressed sensing, and also allows us to obtain improved JL embedding results for several matrix constructions with existing RIP bounds [13, 35, 31, 38, 33]. Of particular interest is the random partial Fourier or the random partial Hadamard matrix, which is formed by choosing a random subset of $m$ rows from the $N \times N$ discrete Fourier or Hadamard matrix respectively, and with high probability has $(k,\delta)$-RIP if the embedding dimension $m \gtrsim \delta^{-2}k\log^4 N$. For these matrices with randomized column signs, the running time for a matrix-vector multiplication is $O(N\log N)$, as opposed to the running time of $O(mN)$ for purely random matrices. For such constructions, the previous best-known embedding dimension to ensure that (2) holds with probability $1-\eta$, given by Ailon and Liberty [3], is $m = O(\varepsilon^{-4}\log p\,\log^4 N)$. We can improve their result to have optimal dependence on the distortion $\varepsilon$, showing that $m = O(\varepsilon^{-2}\log p\,\log^4 N)$ rows suffice for the embedding.

This paper is structured as follows: Section 2 introduces necessary notation. In Section 3, we state our main results, and Section 4 gives concrete examples of how these results improve on the best-known JL bounds for several matrix constructions as well as applications of our findings in compressed sensing. In Section 5 we give the relevant concentration inequalities and explicit RIP-based matrix inequalities that are needed for the proofs, which are then carried out in Section 6.

## 2 Notation

Before continuing, let us fix some notation to be used in the remainder. For $N \in \mathbb{N}$, we denote $[N] = \{1, \dots, N\}$. The $\ell_p$-norm of a vector $x \in \mathbb{R}^N$ is defined as

$$\|x\|_p = \Big(\sum_{j=1}^{N}|x_j|^p\Big)^{1/p}, \quad 1 \le p < \infty,$$

and $\|x\|_\infty = \max_{j \in [N]}|x_j|$ as usual. For a matrix $\Phi$, its operator norm is $\|\Phi\| = \sup_{\|x\|_2 = 1}\|\Phi x\|_2$, and its Frobenius norm is defined by

$$\|\Phi\|_F := \Big(\sum_{j=1}^{m}\sum_{\ell=1}^{N}|\Phi_{j,\ell}|^2\Big)^{1/2}.$$

For two functions $f, g: \Omega \to \mathbb{R}$, $\Omega$ an arbitrary set, we write $f \lesssim g$ if there is a constant $C > 0$ such that $f(\omega) \le C g(\omega)$ for all $\omega \in \Omega$; we write $f \asymp g$ if $f \lesssim g$ and $g \lesssim f$. Let $k \le N$ be given and set $s = k/2$. For given $x \in \mathbb{R}^N$, we say that $x$ is in decreasing arrangement if one has $|x_j| \ge |x_{j+1}|$ for $j \in [N-1]$. For vectors $x$ in decreasing arrangement, we decompose $x$ into blocks of size $s$, i.e. $x = (x_{(1)}, x_{(2)}, \dots, x_{(R)})$; the last block is potentially of smaller size. We will also consider the coarse decomposition $x = (x_{(1)}, x_{(\flat)})$, where $x_{(\flat)} = (x_{(2)}, \dots, x_{(R)})$. Denote by $]J[$ the indices corresponding to the $J$-th block. For $j, \ell \in [N]$ we write $j \sim \ell$ if the two indices are associated to the same block, and we write $j \nsim \ell$ otherwise. Given a matrix $\Phi$, write $\Phi_j$ to denote the $j$-th column, $\Phi_{(J)}$ to denote the matrix that is the restriction of $\Phi$ to the columns indexed by $]J[$ (again with the obvious modification for $\Phi_{(\flat)}$), and $\Phi_{(\flat)}$ to denote the restriction of $\Phi$ to all but the first $s$ columns. Finally, for a vector $x \in \mathbb{R}^N$, we denote by $D_x \in \mathbb{R}^{N \times N}$ the diagonal matrix satisfying $(D_x)_{j,j} = x_j$.
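To make the block notation concrete, the following small helper (our own sketch, not part of the paper) puts a vector into decreasing arrangement and cuts it into blocks of size $s$:

```python
import numpy as np

def blocks_decreasing(x, s):
    """Sort the entries of x by decreasing magnitude and split them into
    consecutive blocks of size s (the last block possibly smaller),
    mirroring the decomposition x = (x_(1), ..., x_(R))."""
    x_sorted = x[np.argsort(-np.abs(x))]          # decreasing arrangement
    return [x_sorted[i:i + s] for i in range(0, len(x), s)]

x = np.array([0.1, -3.0, 0.5, 2.0, -0.2, 1.0])
for J, blk in enumerate(blocks_decreasing(x, s=2), start=1):
    print(f"x_({J}) =", blk)
```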

## 3 The main results

###### Theorem 3.1.

Fix $\eta > 0$ and $\varepsilon \in (0,1)$, and consider a finite set $E \subset \mathbb{R}^N$ of cardinality $|E| = p$. Set $k \ge 40\log\frac{4p}{\eta}$, and suppose that $\Phi \in \mathbb{R}^{m \times N}$ satisfies the Restricted Isometry Property of order $k$ and level $\delta \le \frac{\varepsilon}{4}$. Let $\xi \in \mathbb{R}^N$ be a Rademacher sequence, i.e., uniformly distributed on $\{-1,1\}^N$. Then with probability exceeding $1-\eta$,

$$(1-\varepsilon)\|x\|_2^2 \le \|\Phi D_\xi x\|_2^2 \le (1+\varepsilon)\|x\|_2^2 \qquad (5)$$

uniformly for all $x \in E$.
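As a numerical illustration of Theorem 3.1 (our sketch: the sizes below are scaled down from the theorem's conservative constants so that the demo runs quickly, and a Gaussian parent matrix is used because it has the RIP with high probability):

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, p, eps = 2048, 400, 100, 0.25

# Parent matrix: Gaussian, which has (k, delta)-RIP w.h.p. for k ~ delta^2 m / log(N/k).
Phi = rng.standard_normal((m, N)) / np.sqrt(m)
xi = rng.choice([-1.0, 1.0], size=N)             # Rademacher column signs: Phi D_xi

E = rng.standard_normal((p, N))
E /= np.linalg.norm(E, axis=1, keepdims=True)    # normalize so that ||x||_2 = 1

sq_norms = np.sum((Phi @ (xi * E).T) ** 2, axis=0)   # ||Phi D_xi x||_2^2 for each x in E
print("max | ||Phi D_xi x||^2 - 1 | :", np.abs(sq_norms - 1).max(), " target:", eps)
```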

Along the way, our method provides a direct converse to Theorem 1.3:

###### Proposition 3.2.

Fix $N \in \mathbb{N}$ and $m \le N$, and suppose that there is a constant $c_3$ such that for all pairs $(k, \delta)$ that are admissible in the sense that $k \le c_3\,\delta^2 m/\log(N/k)$, the matrix $\Phi \in \mathbb{R}^{m \times N}$ has the Restricted Isometry Property of order $k$ and level $\delta$. Fix $\varepsilon \in (0,1)$ and let $\xi \in \mathbb{R}^N$ be a Rademacher sequence, i.e., uniformly distributed on $\{-1,1\}^N$. Then there exists a constant $c_4$ such that for all $x \in \mathbb{R}^N$, the matrix $\Phi D_\xi$ satisfies the concentration inequality (3) for $c_0 = c_4\log^{-1}(N/k')$, where $k'$ is any integer such that $(k', \varepsilon/4)$ is admissible.

## 4 Concrete examples and applications

Using Theorem 3.1, we can improve on the best Johnson-Lindenstrauss bounds for several matrix constructions that are known to have the Restricted Isometry Property:

#### 1. Matrices arising from bounded orthonormal systems.

Consider an orthonormal system of real-valued functions $\phi_j$, $j \in [N]$, on a measurable space $\mathcal{X}$ with respect to an orthogonalization measure $\nu$. Such systems are called bounded orthonormal systems if $\sup_{j \in [N]}\|\phi_j\|_\infty \le K$ for some constant $K \ge 1$. We may associate to such a system the $m \times N$ matrix $\Phi$ with entries $\Phi_{i,j} = \frac{1}{\sqrt{m}}\phi_j(p_i)$, where the sampling points $p_i$, $i \in [m]$, are drawn independently according to the orthogonalization measure $\nu$. As shown in [13, 35, 31], matrices arising as such have $(k,\delta)$-RIP with high probability if $m \gtrsim \delta^{-2}k\log^4 N$. By Theorem 3.1, these embeddings with randomized column signs satisfy the JL Lemma for $m \gtrsim \varepsilon^{-2}\log p\,\log^4 N$, which is optimal up to the $\log^4 N$ factor.¹

¹ Actually, the bounds in [31] yield a slightly smaller sufficient embedding dimension, with some of the $\log N$ factors replaced by factors of $\log k$ and $\log m$; however, in order to work with simpler expressions, we bound all of these factors by $\log N$ in the logarithmic factors.

For measures with discrete support, such constructions are equivalent to choosing $m$ rows at random from an $N \times N$ matrix with orthonormal rows and uniformly bounded entries. Examples include the random partial Fourier matrix and the random partial Hadamard matrix, formed from the discrete Fourier matrix and the discrete Hadamard matrix, respectively. (In the Fourier case, we distribute the resulting real and imaginary parts over different coordinates, inducing an additional factor of 2 in the number of rows.) Note that the structure of these matrices allows for fast matrix-vector multiplication. Recently, Ailon and Liberty [3] verified the JL Lemma for such constructions, with column signs randomized, when $m \gtrsim \varepsilon^{-4}\log p\,\log^4 N$. Our result improves the factor of $\varepsilon^{-4}$ in their bound to the optimal dependence $\varepsilon^{-2}$. We note that while their proof also uses the RIP, it additionally requires arguments from [35] that are specific to discrete bounded orthonormal systems.
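A sketch of such a fast embedding, $x \mapsto \sqrt{N/m}\,P_\Omega F D_\xi x$ with $F$ the unitary DFT and $P_\Omega$ a random row selection (the normalization, the complex-valued output, and all constants below are our choices for illustration, not prescriptions from the paper):

```python
import numpy as np

def fast_jl_partial_fourier(X, m, rng):
    """Embed the rows of X by sign-flipping coordinates, applying the unitary
    FFT (O(N log N) per vector), and keeping m random rows, rescaled so that
    squared norms are preserved in expectation. Illustrative sketch."""
    p, N = X.shape
    xi = rng.choice([-1.0, 1.0], size=N)          # random column signs D_xi
    rows = rng.choice(N, size=m, replace=False)   # random row subset Omega
    Y = np.fft.fft(X * xi, axis=1, norm="ortho")  # unitary DFT of D_xi x
    return np.sqrt(N / m) * Y[:, rows]

rng = np.random.default_rng(3)
p, N, m = 200, 4096, 600
X = rng.standard_normal((p, N))
X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = fast_jl_partial_fourier(X, m, rng)
print("max | ||y||^2 - 1 | :", np.abs(np.sum(np.abs(Y) ** 2, axis=1) - 1).max())
```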

Examples of bounded orthonormal systems connected to continuous measures include the trigonometric polynomials and Chebyshev polynomials, which are orthogonal with respect to the uniform and Chebyshev measures, respectively. The Legendre system, while not uniformly bounded, can still be transformed via preconditioning to a bounded orthonormal system with respect to the Chebyshev measure [33]. Note that all of these constructions have an associated fast transform.

#### 2. Partial circulant matrices.

Other classes of structured random matrices known to have the RIP include partial circulant matrices [34, 30, 32]. In one such set-up, the first row of the matrix is a Gaussian or Rademacher random vector, and each subsequent row is created by rotating one element to the right relative to the preceding row vector. Again, $m$ rows of the resulting $N \times N$ circulant matrix are sampled, but in contrast to partial Fourier or Hadamard matrices, the selection need not be random. Using that convolution corresponds to multiplication in the Fourier domain, these matrices have associated fast matrix-vector multiplication routines. In [32], such matrices were shown to have the $(k,\delta)$-RIP with high probability for $m \gtrsim \delta^{-2}k^{3/2}\log^{3/2}N$.

On the other hand, such a matrix composed with a diagonal matrix of random signs was shown to be a JL embedding with high probability as long as $m \gtrsim \varepsilon^{-2}\log^2 p$ [39]. Through Theorem 3.1, the same result also obtains if $m \gtrsim \varepsilon^{-2}(\log p)^{3/2}\log^{3/2}N$. For large $p$, this is an improvement compared to [39].
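A sketch of the partial circulant embedding via FFT-based circular convolution (taking the first $m$ rows deterministically and scaling by $1/\sqrt{m}$ are our illustrative choices):

```python
import numpy as np

def partial_circulant_embed(X, m, rng):
    """Apply a random circulant matrix (generated by a Rademacher vector) to the
    sign-flipped rows of X via FFT-based circular convolution in O(N log N),
    then keep the first m coordinates. Illustrative sketch."""
    p, N = X.shape
    c = rng.choice([-1.0, 1.0], size=N)     # generating Rademacher row
    xi = rng.choice([-1.0, 1.0], size=N)    # column-sign randomization D_xi
    H = np.fft.fft(c)
    Y = np.fft.ifft(np.fft.fft(X * xi, axis=1) * H, axis=1).real  # rows of C @ (D_xi x)
    return Y[:, :m] / np.sqrt(m)

rng = np.random.default_rng(4)
p, N, m = 100, 4096, 800
X = rng.standard_normal((p, N))
X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = partial_circulant_embed(X, m, rng)
print("max | ||y||^2 - 1 | :", np.abs(np.sum(Y ** 2, axis=1) - 1).max())
```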

#### 3. Deterministic constructions.

Several deterministic constructions of RIP matrices are known, including a recent result in [9] that requires only $m \gtrsim k^{2-\mu}$ for some small constant $\mu > 0$. We refer the reader to the exposition in [9] for a good overview in this direction; we highlight two such deterministic constructions here. Using finite fields, DeVore [16] provides deterministic constructions of cyclic $\{0,1\}$-valued matrices with $(k,\delta)$-RIP for $m \gtrsim \delta^{-2}k^2\log^2 N$. Iwen [27] provides deterministic constructions of $\{0,1\}$-valued matrices whose number-theoretic properties allow their products with Discrete Fourier Transform (DFT) matrices to be well approximated using a few highly sparse matrix multiplications. Both the binary-valued matrices and their products with the DFT yield $(k,\delta)$-RIP matrices with $m \gtrsim \delta^{-2}k^2\log^2 N$. By Theorem 3.1, the class of matrices that results by randomizing the column signs of either of these deterministic constructions satisfies the JL Lemma with $m \gtrsim \varepsilon^{-2}\log^2 p\,\log^2 N$.

Note that the amount of randomness needed to construct such embeddings is still comparable to the first two examples, requiring $N$ random bits for the column signs. Under the model assumption that the entries of each vector to be embedded have random signs, however, the required randomness in the matrix is removed completely.

In addition to their fast multiplication properties, these examples have the advantage that the construction of the matrix embedding only uses $O(m\log N + N)$, $O(N)$, and $N$ independent random bits, respectively, compared to $O(mN)$ bits for matrices with independent entries. We note that stronger embedding results are known with fewer bits if one imposes restrictions on the norm of the vectors to be embedded – see [2] and [14].

For each of the aforementioned examples, we summarize the number of dimensions $m$ that are known to be sufficient for the $(k,\delta)$-RIP to hold. We also list the previously best known bound for the JL embedding dimension (if there is one), along with the JL bounds obtained from Theorem 3.1. Where Theorem 3.1 yields a better bound than previously known, at least for some range of parameters, we highlight the result in bold face. In each of the bounds, we list only the dependence on $k$, $\delta$, and $N$, or on $p$, $\varepsilon$, and $N$, omitting absolute constants.

#### 4. Compressed sensing in redundant dictionaries.

As shown recently in [10], concentration inequalities of type (3) allow for the extension of the compressed sensing methodology to redundant dictionaries – in particular, tight frames – as opposed to orthonormal bases only. Since signals with sparse representations in redundant dictionaries comprise a much more realistic model of nature, this extension of compressed sensing is fundamental. Our results show that essentially all random matrix constructions arising in the standard theory of compressed sensing (i.e., those based on RIP estimates) also yield compressed sensing matrices for the redundant framework.

#### 5. Compressed sensing with cross validation.

Compressed sensing algorithms are designed to recover approximately sparse signals; if this assumption is violated, they may yield solutions far from the input signal. In [40], a method of cross validation is introduced to detect such situations, and to obtain tight bounds on the error incurred by compressed sensing reconstruction algorithms in general. There, a subset of the measurements is held out from the reconstruction algorithm, and only the remaining measurements are used to produce a candidate approximation $\hat{x}$ to the unknown signal $x$. If the hold-out matrix $\Psi$ satisfies the Johnson-Lindenstrauss Lemma, then the observable quantity $\|\Psi\hat{x} - \Psi x\|_2$ (computable, since $\Psi x$ is precisely the vector of held-out measurements) can be used as a reliable proxy for the unknown error $\|\hat{x} - x\|_2$. Our work shows that any RIP matrix as in the standard compressed sensing framework can be used for cross validation, up to a randomization of its column signs.
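Schematically (an illustrative sketch; `x_hat` below is a synthetic stand-in for the output of a compressed sensing solver, since running an actual reconstruction is beside the point here):

```python
import numpy as np

rng = np.random.default_rng(5)
N, m_cv = 1000, 60

x = np.zeros(N)
x[rng.choice(N, 10, replace=False)] = rng.standard_normal(10)   # unknown sparse signal
x_hat = x + 0.05 * rng.standard_normal(N)                       # stand-in reconstruction

Psi = rng.standard_normal((m_cv, N)) / np.sqrt(m_cv)  # hold-out (JL) matrix
y_cv = Psi @ x                                        # held-out measurements (observable)

proxy = np.linalg.norm(Psi @ x_hat - y_cv)   # observable: ||Psi(x_hat - x)||_2
truth = np.linalg.norm(x_hat - x)            # unobservable true error
print(f"observable proxy: {proxy:.4f}   true error: {truth:.4f}")
```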

#### 6. Optimal asymptotics in δ for RIP to hold.

As mentioned above, it can be shown using a Gelfand width argument that $m \asymp k\log(N/k)$ is the optimal asymptotics (in $k$ and $N$) of the embedding dimension for a matrix with the restricted isometry property (4). Our results – combined with the known optimality of the asymptotics $m \asymp \varepsilon^{-2}\log p$ for the embedding dimension in the Johnson-Lindenstrauss Lemma (Theorem 1.1) – imply that up to a logarithmic factor in $N$, $\delta^{-2}$ is the optimal asymptotics of the embedding dimension in the restricted isometry constant $\delta$ for fixed $k$ and $N$ as $\delta \to 0$. Recall that this rate is realized by many of the above examples, such as Gaussian random matrices.

## 5 Proof Ingredients

The proof of Theorem 3.1 relies on concentration inequalities for Rademacher sequences and explicit RIP-based norm estimates. The first concentration result is a classical inequality by Hoeffding [25].

###### Proposition 5.1 (Hoeffding’s Inequality).

Let $x \in \mathbb{R}^N$, and let $\xi \in \mathbb{R}^N$ be a Rademacher sequence. Then, for any $t > 0$,

$$\mathbb{P}\Big(\Big|\sum_j \xi_j x_j\Big| > t\Big) \le 2\exp\Big(-\frac{t^2}{2\|x\|_2^2}\Big). \qquad (6)$$

The second concentration of measure result is a deviation bound for Rademacher chaos. There are many such bounds in the literature; the following inequality dates back to [24], but appeared with explicit constants and with a much simplified proof in [8].

###### Proposition 5.2.

Let $X$ be the $N \times N$ matrix with entries $x_{i,j}$, and assume that $x_{i,i} = 0$ for all $i \in [N]$. Let $\xi$ be a Rademacher sequence. Then, for any $t > 0$,

$$\mathbb{P}\Big(\Big|\sum_{i,j}\xi_i\xi_j x_{i,j}\Big| > t\Big) \le 2\exp\Big(-\frac{1}{64}\min\Big(\frac{96}{65}\cdot\frac{t}{\|X\|},\; \frac{t^2}{\|X\|_F^2}\Big)\Big). \qquad (7)$$

We also need the following basic estimate for RIP matrices (see for instance [31]).

###### Proposition 5.3.

Suppose that $\Phi$ has the Restricted Isometry Property of order $2s$ and level $\delta$. Then for any two disjoint subsets $J, L \subset [N]$ of size at most $s$,

$$\|\Phi_{(J)}^*\Phi_{(L)}\| \le \delta.$$

The proof of our norm estimate for RIP matrices uses Proposition 5.3, and relies on the observation commonly used in the theory of compressed sensing (see for example [13]) that for $x$ in decreasing arrangement with $\|x\|_2 \le 1$, and for $j \in\, ]J[$ with $J \ge 2$, one has $|x_j| \le \frac{1}{\sqrt{s}}\|x_{(J-1)}\|_2$ and thus $\|x_{(J)}\|_\infty \le \frac{1}{\sqrt{s}}\|x_{(J-1)}\|_2$.

###### Proposition 5.4.

Let $s \in \mathbb{N}$ and $\delta > 0$. Let $\Phi$ have the $(2s,\delta)$-Restricted Isometry Property, let $x \in \mathbb{R}^N$ be in decreasing arrangement with $\|x\|_2 \le 1$, and consider the symmetric matrix

$$C \in \mathbb{R}^{N\times N}, \qquad C_{j,\ell} = \begin{cases} x_j\,\Phi_j^*\Phi_\ell\, x_\ell, & j \nsim \ell,\ j,\ell > s,\\ 0, & \text{else},\end{cases}$$

and, for $b \in \{-1,1\}^s$, the vector

$$v = D_{x_{(\flat)}}\Phi_{(\flat)}^*\Phi_{(1)}D_{x_{(1)}}b \in \mathbb{R}^{N-s}.$$

The following bounds hold: $\|C\| \le \frac{\delta}{s}$ and $\|v\|_2 \le \frac{\delta}{\sqrt{s}}$.

###### Proof.
$$\begin{aligned}
\|C\| &= \sup_{\|y\|_2=1}|\langle y, Cy\rangle| \\
&\le \sup_{\|y\|_2=1}\sum_{\substack{J,L=2\\ J\neq L}}^{R}\big|\big\langle y_{(J)},\, D_{x_{(J)}}\Phi_{(J)}^*\Phi_{(L)}D_{x_{(L)}} y_{(L)}\big\rangle\big| \\
&\le \sup_{\|y\|_2=1}\sum_{\substack{J,L=2\\ J\neq L}}^{R}\|y_{(J)}\|_2\|y_{(L)}\|_2\big\|D_{x_{(J)}}\Phi_{(J)}^*\Phi_{(L)}D_{x_{(L)}}\big\| \\
&\le \sup_{\|y\|_2=1}\sum_{\substack{J,L=2\\ J\neq L}}^{R}\|y_{(J)}\|_2\|y_{(L)}\|_2\|x_{(J)}\|_\infty\|x_{(L)}\|_\infty\,\delta \qquad (8)\\
&\le \sup_{\|y\|_2=1}\sum_{J,L=2}^{R}\|y_{(J)}\|_2\|y_{(L)}\|_2\,\frac{1}{\sqrt{s}}\|x_{(J-1)}\|_2\,\frac{1}{\sqrt{s}}\|x_{(L-1)}\|_2\,\delta \\
&\le \sup_{\|y\|_2=1}\frac{\delta}{s}\sum_{J,L=2}^{R}\Big(\tfrac12\|x_{(J-1)}\|_2^2 + \tfrac12\|y_{(J)}\|_2^2\Big)\Big(\tfrac12\|x_{(L-1)}\|_2^2 + \tfrac12\|y_{(L)}\|_2^2\Big) \qquad (9)\\
&\le \frac{\delta}{s}.
\end{aligned}$$

To obtain (9), we use the inequality of arithmetic and geometric means; to obtain (8), we use Proposition 5.3.

Similarly,

$$\begin{aligned}
\|v\|_2 &\le \sup_{\|y\|_2=1}\sum_{L=2}^{R}\big\langle y_{(L)},\, D_{x_{(L)}}\Phi_{(L)}^*\Phi_{(1)}D_{x_{(1)}}b\big\rangle \\
&\le \sup_{\|y\|_2=1}\sum_{L=2}^{R}\|y_{(L)}\|_2\|x_{(L)}\|_\infty\big\|\Phi_{(L)}^*\Phi_{(1)}\big\|\,\|b\|_\infty\|x_{(1)}\|_2 \\
&\le \sup_{\|y\|_2=1}\sum_{L=2}^{R}\|y_{(L)}\|_2\,\frac{1}{\sqrt{s}}\|x_{(L-1)}\|_2\,\delta \\
&\le \frac{\delta}{\sqrt{s}}\sup_{\|y\|_2=1}\sum_{L=2}^{R}\Big(\tfrac12\|y_{(L)}\|_2^2 + \tfrac12\|x_{(L-1)}\|_2^2\Big) \\
&\le \frac{\delta}{\sqrt{s}}.
\end{aligned}$$

For the Frobenius norm, we estimate:

$$\begin{aligned}
\|C\|_F^2 &= \sum_{\substack{j,\ell=s+1\\ j\nsim \ell}}^{N}\big(x_j\Phi_j^*\Phi_\ell x_\ell\big)^2
= \sum_{L=2}^{R}\sum_{\substack{j=s+1\\ j\notin\, ]L[}}^{N} x_j^2\,\Phi_j^*\Phi_{(L)}D_{x_{(L)}}^2\Phi_{(L)}^*\Phi_j \\
&= \sum_{L=2}^{R}\sum_{\substack{j=s+1\\ j\notin\, ]L[}}^{N} x_j^2\big\|D_{x_{(L)}}\Phi_{(L)}^*\Phi_j\big\|_2^2
\le \sum_{L=2}^{R}\sum_{\substack{j=s+1\\ j\notin\, ]L[}}^{N} x_j^2\|x_{(L)}\|_\infty^2\big\|\Phi_{(L)}^*\Phi_j\big\|^2 \\
&\le \sum_{L=2}^{R}\frac{\delta^2}{s}\|x_{(L-1)}\|_2^2\sum_{j=1}^{N}x_j^2 \le \frac{\delta^2}{s}.
\end{aligned}$$

## 6 Proof of the main results

We begin by proving Theorem 3.1. Without loss of generality, we assume that all $x \in E$ are normalized so that $\|x\|_2 = 1$. Furthermore, assume that $k$ is even, and recall that $s = k/2$.

We first consider a fixed $x \in E$, eventually taking a union bound over all of $E$. We further assume that $x$ is in decreasing arrangement. To achieve this, we reorder the entries of $x$, and permute the columns of $\Phi$ accordingly. This has no impact on the following estimates, as the Restricted Isometry Property of the matrix $\Phi$ is invariant under permutations of its columns. We need to estimate

$$\|\Phi D_\xi x\|_2^2 = \|\Phi D_x\xi\|_2^2 = \sum_{J=1}^{R}\big\|\Phi_{(J)}D_{x_{(J)}}\xi_{(J)}\big\|_2^2 + 2\,\xi_{(1)}^* D_{x_{(1)}}\Phi_{(1)}^*\Phi_{(\flat)}D_{x_{(\flat)}}\xi_{(\flat)} + \sum_{\substack{J,L=2\\ J\neq L}}^{R}\big\langle \Phi_{(J)}D_{x_{(J)}}\xi_{(J)},\, \Phi_{(L)}D_{x_{(L)}}\xi_{(L)}\big\rangle. \qquad (10)$$

We will bound the three terms separately.

1. As $\Phi$ has the Restricted Isometry Property of order $k$ and level $\delta$, it also has the RIP of order $s$ and level $\delta$, and each $\Phi_{(J)}$ is thus almost an isometry. Hence, noting that $\|D_{x_{(J)}}\xi_{(J)}\|_2 = \|x_{(J)}\|_2$ and $\sum_{J=1}^R\|x_{(J)}\|_2^2 = \|x\|_2^2$, the first term can be estimated as follows.

$$(1-\delta)\|x\|_2^2 \le \sum_{J=1}^{R}\big\|\Phi_{(J)}D_{x_{(J)}}\xi_{(J)}\big\|_2^2 \le (1+\delta)\|x\|_2^2.$$

Thus, using that $\delta \le \frac{\varepsilon}{4}$,

$$\Big(1-\frac{\varepsilon}{4}\Big)\|x\|_2^2 \le \sum_{J=1}^{R}\big\|\Phi_{(J)}D_{x_{(J)}}\xi_{(J)}\big\|_2^2 \le \Big(1+\frac{\varepsilon}{4}\Big)\|x\|_2^2.$$
2. To estimate the second term, fix $b = \xi_{(1)} \in \{-1,1\}^s$ and consider the random variable

$$X = b^* D_{x_{(1)}}\Phi_{(1)}^*\Phi_{(\flat)}D_{x_{(\flat)}}\xi_{(\flat)} = \langle v, \xi_{(\flat)}\rangle$$

with $v$ as in Proposition 5.4. By Hoeffding's inequality (Proposition 5.1) combined with the bound $\|v\|_2 \le \delta/\sqrt{s}$ from Proposition 5.4,

$$\mathbb{P}\big(|X| \ge \gamma\varepsilon\big) \le 2\exp\Big(-\frac{s\gamma^2\varepsilon^2}{2\delta^2}\Big). \qquad (11)$$

Taking a union bound, one obtains:

$$\mathbb{P}\big(\exists\, x \in E : |X| \ge \gamma\varepsilon\big) \le \exp\Big(\log p + \log 2 - \frac{\gamma^2 s\varepsilon^2}{2\delta^2}\Big).$$

In order for this probability to be less than $\frac{\eta}{2}$, we need:

$$\log(2p) - \frac{s\gamma^2\varepsilon^2}{2\delta^2} \le \log\frac{\eta}{2},$$

that is,

$$\delta \le \frac{\varepsilon}{4}\sqrt{\frac{8\gamma^2 s}{\log(4p/\eta)}}. \qquad (12)$$
3. We can rewrite the third term as

$$\sum_{\substack{J,L=2\\ J\neq L}}^{R}\big\langle \Phi_{(J)}D_{x_{(J)}}\xi_{(J)},\, \Phi_{(L)}D_{x_{(L)}}\xi_{(L)}\big\rangle = \langle \xi, C\xi\rangle = \sum_{j,\ell=s+1}^{N}\xi_j\xi_\ell\, C_{j\ell},$$

where $C$ is the matrix from Proposition 5.4. By Proposition 5.4, we have $\|C\| \le \delta/s$ and $\|C\|_F \le \delta/\sqrt{s}$; hence, by Proposition 5.2,

$$\mathbb{P}\Big(\Big|\sum_{j,\ell=s+1}^{N}\xi_j\xi_\ell C_{j\ell}\Big| \ge \tau\varepsilon\Big) \le 2\exp\Big(-\frac{1}{64}\min\Big(\frac{s\tau^2\varepsilon^2}{\delta^2},\; \frac{96\,\tau s\varepsilon}{65\,\delta}\Big)\Big). \qquad (13)$$

Using a union bound, one obtains:

$$\mathbb{P}\Big(\exists\, x \in E : \Big|\sum_{j,\ell=s+1}^{N}\xi_j\xi_\ell C_{j\ell}\Big| \ge \tau\varepsilon\Big) \le 2\exp\Big(\log p - \frac{s}{64}\min\Big(\frac{\tau^2\varepsilon^2}{\delta^2},\; \frac{96\,\tau\varepsilon}{65\,\delta}\Big)\Big).$$

In order for this probability to be less than $\frac{\eta}{2}$, we need:

$$\log(2p) - \frac{s}{64}\min\Big(\frac{\tau^2\varepsilon^2}{\delta^2},\; \frac{96\,\tau\varepsilon}{65\,\delta}\Big) \le \log\frac{\eta}{2},$$

that is,

$$\delta \le \frac{\varepsilon}{4}\min\Bigg(\sqrt{\frac{\tau^2 s}{4\log(4p/\eta)}},\; \frac{96}{65}\cdot\frac{\tau s}{16\log(4p/\eta)}\Bigg). \qquad (14)$$

By assumption, $k \ge 40\log(4p/\eta)$, so that $s = k/2 \ge 20\log(4p/\eta)$, and conditions (12) and (14) are satisfied by setting $\gamma = \frac{1}{10}$ and $\tau = \frac{11}{20}$ (that is, $2\gamma + \tau = \frac{3}{4}$). Then the second term is bounded by $2\gamma\varepsilon = \frac{\varepsilon}{5}$ in absolute value, and the last term is bounded by $\tau\varepsilon = \frac{11\varepsilon}{20}$. Together with the deterministic RIP-based estimate for the first term, and since $\frac{\varepsilon}{4} + \frac{\varepsilon}{5} + \frac{11\varepsilon}{20} = \varepsilon$, this implies the Theorem. ∎

#### Proof of Proposition 3.2.

Fix $\varepsilon \in (0,1)$, and suppose that there is a constant $c_3$ such that for all pairs $(k,\delta)$ with $k \le c_3\,\delta^2 m/\log(N/k)$, the matrix $\Phi$ has the Restricted Isometry Property of order $k$ and level $\delta$. Now let $(k, \varepsilon/4)$ be admissible. An elementary monotonicity argument shows that there exists $k' \ge k$ such that $(k', \varepsilon/4)$ is admissible and $k' \asymp \varepsilon^2 m/\log(N/k')$. Fix $x \in \mathbb{R}^N$ and let $\xi$ be a Rademacher sequence. Then, for any fixed vector $x$, the estimates in equations (11) and (13) with parameters $s = k'/2$ and $\delta = \varepsilon/4$ imply the existence of a constant $c_5$ for which

$$\mathbb{P}\Big(\big|\,\|\Phi D_\xi x\|_2^2 - \|x\|_2^2\,\big| \ge \varepsilon\|x\|_2^2\Big) \le 2\exp(-c_5 k') \le 2\exp\big(-c_4\,\varepsilon^2 m\,\log^{-1}(N/k')\big), \qquad (15)$$

where $c_4$ depends only on $c_3$ and $c_5$. ∎

#### Remarks:

Although we have stated the main result for the real setting, $\Phi \in \mathbb{R}^{m \times N}$ and $E \subset \mathbb{R}^N$, all of the analysis holds also in the complex setting, $\Phi \in \mathbb{C}^{m \times N}$ and $E \subset \mathbb{C}^N$.

As shown in [7], a random matrix whose entries follow a subgaussian distribution is known to have, with high probability, the Restricted Isometry Property of best possible order; that is, one can choose $m \asymp \delta^{-2}k\log(N/k)$. When $k \asymp \log(4p/\eta)$ and $\delta \asymp \varepsilon$, the matrix $\Phi D_\xi$ is a JL embedding by Theorem 3.1, and our resulting bound for $m$ is optimal up to a single logarithmic factor in $N$. This shows that Theorem 3.1 must also be optimal up to a single logarithmic factor in $N$.

### Acknowledgments

The authors would like to thank Holger Rauhut, Deanna Needell, Jan Vybíral, Mark Tygert, Mark Iwen, Justin Romberg, Mark Davenport, and Arie Israel for valuable discussions on this topic. Rachel Ward gratefully acknowledges the partial support of a National Science Foundation Postdoctoral Research Fellowship. Felix Krahmer gratefully acknowledges the partial support of the Hausdorff Center for Mathematics. Finally, both authors are grateful for the support of the Institute for Advanced Study through the Park City Math Institute, where this project was initiated.

## References

[1] N. Ailon and B. Chazelle. Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform. STOC '06: Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, 2006.
[2] N. Ailon and E. Liberty. Fast dimension reduction using Rademacher series on dual BCH codes. SODA '08: Proceedings of the nineteenth annual ACM-SIAM symposium on Discrete algorithms, pages 1–9, 2008.
[3] N. Ailon and E. Liberty. Almost optimal unrestricted fast Johnson-Lindenstrauss transform. Symposium on Discrete Algorithms (SODA), to appear, 2011.
[4] N. Ailon, E. Liberty, and A. Singer. Dense fast random projections and lean Walsh transforms. Proceedings of the 12th International Workshop on Randomization and Computation (RANDOM), pages 512–522, 2008.
[5] N. Alon. Problems and results in extremal combinatorics. Discrete Math., 273:31–53, 2003.
[6] R. Baraniuk and M. Wakin. Random projections of smooth manifolds. In Foundations of Computational Mathematics, pages 941–944, 2006.
[7] R. G. Baraniuk, M. Davenport, R. A. DeVore, and M. Wakin. A simple proof of the Restricted Isometry Property for random matrices. Constr. Approx., 28(3):253–263, 2008.
[8] S. Boucheron, G. Lugosi, and P. Massart. Concentration inequalities using the entropy method. Ann. Probab., 31(3):1583–1614, 2003.
[9] J. Bourgain, S. Dilworth, K. Ford, S. Konyagin, and D. Kutzarova. Explicit constructions of RIP matrices and related problems. Preprint, 2010.
[10] E. Candès, Y. Eldar, and D. Needell. Compressed sensing with coherent and redundant dictionaries. Appl. Comput. Harmon. Anal., to appear, 2011.
[11] E. J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52(2):489–509, 2006.
[12] E. J. Candès, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59(8):1207–1223, 2006.
[13] E. J. Candès and T. Tao. Near optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. Inform. Theory, 52(12):5406–5425, 2006.
[14] A. Dasgupta, R. Kumar, and T. Sarlós. A sparse Johnson-Lindenstrauss transform. STOC, pages 341–350, 2010.
[15] S. Dasgupta and A. Gupta. An elementary proof of a theorem of Johnson and Lindenstrauss. Random Structures and Algorithms, 22:60–65, 2003.
[16] R. DeVore. Deterministic constructions of compressed sensing matrices. J. Complexity, 23:918–925, 2007.
[17] D. L. Donoho. For most large underdetermined systems of linear equations the minimal $\ell_1$-norm solution is also the sparsest solution. Comm. Pure Appl. Math., 59(6):797–829, 2006.
[18] E. Liberty, F. Woolfe, P.-G. Martinsson, V. Rokhlin, and M. Tygert. Randomized algorithms for the low-rank approximation of matrices. Proceedings of the National Academy of Sciences, 104(51):20167–20172, 2007.
[19] S. Foucart. A note on guaranteed sparse recovery via $\ell_1$-minimization. Appl. Comput. Harmon. Anal., 29(1):97–103, 2010.
[20] S. Foucart, A. Pajor, H. Rauhut, and T. Ullrich. The Gelfand widths of $\ell_p$-balls for $0 < p \le 1$. J. Complexity, 26:629–640, 2010.
[21] P. Frankl and H. Maehara. The Johnson-Lindenstrauss Lemma and the sphericity of some graphs. Journal of Combinatorial Theory, Series B, 44:355–362, 1988.
[22] A. Garnaev and E. Gluskin. The widths of a Euclidean ball. Dokl. Akad. Nauk SSSR, 277:1048–1052, 1984.
[23] N. Halko, P. Martinsson, and J. Tropp. Finding structure with randomness: Stochastic algorithms for constructing approximate matrix decompositions. SIAM Rev., Survey and Review section, to appear, 2011.
[24] D. L. Hanson and F. T. Wright. A bound on tail probabilities for quadratic forms in independent random variables. Ann. Math. Statist., 42:1079–1083, 1971.
[25] W. Hoeffding. Probability inequalities for sums of bounded random variables. J. Amer. Statist. Assoc., 58:13–30, 1963.
[26] P. Indyk. Algorithmic applications of low-distortion embeddings. Proc. 42nd IEEE Symposium on Foundations of Computer Science, 2001.
[27] M. Iwen. Simple deterministically constructible RIP matrices with sublinear Fourier sampling requirements. 43rd Annual Conference on Information Sciences and Systems (CISS 2009), pages 870–875, 2009.
[28] W. B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemp. Math., 26:189–206, 1984.
[29] D. Kane and J. Nelson. A derandomized sparse Johnson-Lindenstrauss transform. Preprint, 2010.
[30] H. Rauhut. Circulant and Toeplitz matrices in compressed sensing. In Proc. SPARS'09, Saint-Malo, France, 2009.
[31] H. Rauhut. Compressive sensing and structured random matrices. In M. Fornasier, editor, Theoretical Foundations and Numerical Methods for Sparse Recovery, volume 9 of Radon Series Comp. Appl. Math., pages 1–92. deGruyter, 2010.
[32] H. Rauhut, J. Romberg, and J. Tropp. Restricted isometries for partial random circulant matrices. Preprint, 2010.
[33] H. Rauhut and R. Ward. Sparse Legendre expansions via $\ell_1$-minimization. Preprint, 2010.
[34] J. Romberg. Compressive sensing by random convolution. SIAM Journal on Imaging Sciences, 2(4):1098–1128, 2009.
[35] M. Rudelson and R. Vershynin. On sparse reconstruction from Fourier and Gaussian measurements. Comm. Pure Appl. Math., 61:1025–1045, 2008.
[36] T. Sarlos. Improved approximation algorithms for large matrices via random projections. Proceedings of the 47th IEEE Symposium on Foundations of Computer Science (FOCS), 2006.
[37] T. Do, L. Gan, Y. Chen, N. Nguyen, and T. Tran. Fast and efficient dimensionality reduction using structurally random matrices. Proc. of ICASSP, 2009.
[38] J. Tropp, J. Laska, M. Duarte, J. Romberg, and R. G. Baraniuk. Beyond Nyquist: Efficient sampling of sparse, bandlimited signals. IEEE Trans. Inform. Theory, 56(1):520–544, 2010.
[39] J. Vybíral. A variant of the Johnson-Lindenstrauss lemma for circulant matrices. Journal of Functional Analysis, 260(4):1096–1105, 2011.
[40] R. Ward. Compressed sensing with cross validation. IEEE Trans. Inform. Theory, 55:5773–5782, 2009.