New and improved Johnson–Lindenstrauss embeddings via the Restricted Isometry Property
Abstract
Consider an $m \times N$ matrix $\Phi$ with the Restricted Isometry Property of order $k$ and level $\delta$, that is, the norm of any $k$-sparse vector in $\mathbb{R}^N$ is preserved to within a multiplicative factor of $(1 \pm \delta)$ under application of $\Phi$. We show that by randomizing the column signs of such a matrix $\Phi$, the resulting map with high probability embeds any fixed set of $p = O(e^k)$ points in $\mathbb{R}^N$ into $\mathbb{R}^m$ without distorting the norm of any point in the set by more than a factor of $(1 \pm 4\delta)$. Consequently, matrices with the Restricted Isometry Property and with randomized column signs provide optimal Johnson–Lindenstrauss embeddings up to logarithmic factors in $N$. In particular, our results improve the best known bounds on the necessary embedding dimension $m$ for a wide class of structured random matrices; for partial Fourier and partial Hadamard matrices, we improve the recent bound $m = O(\varepsilon^{-4} \log(p) \log^4(N))$ appearing in Ailon and Liberty [3] to $m = O(\varepsilon^{-2} \log(p) \log^4(N))$, which is optimal up to the logarithmic factors in $N$. Our results also have a direct application in the area of compressed sensing for redundant dictionaries.
1 Introduction
The Johnson–Lindenstrauss (JL) Lemma states that any set of $p$ points in high dimensional Euclidean space can be embedded into $O(\varepsilon^{-2} \log p)$ dimensions, without distorting the distance between any two points by more than a factor between $1 - \varepsilon$ and $1 + \varepsilon$. In its original form, the Johnson–Lindenstrauss Lemma reads as follows.
Theorem 1.1 (Johnson–Lindenstrauss Lemma [28]).
Let $\varepsilon \in (0,1)$ and let $x_1, \dots, x_p \in \mathbb{R}^N$ be arbitrary points. Let $m = O(\varepsilon^{-2} \log p)$ be a natural number. Then there exists a Lipschitz map $f: \mathbb{R}^N \to \mathbb{R}^m$ such that
(1) $(1-\varepsilon)\|x_i - x_j\|_2^2 \le \|f(x_i) - f(x_j)\|_2^2 \le (1+\varepsilon)\|x_i - x_j\|_2^2$
for all $i, j \in [p]$. Here $\|\cdot\|_2$ stands for the Euclidean norm in $\mathbb{R}^N$ or $\mathbb{R}^m$, respectively.
As shown in [5], the bound for the size of $m$ is tight up to an $O(\log(1/\varepsilon))$ factor. In the original paper of Johnson and Lindenstrauss, it was shown that a random orthogonal projection, suitably normalized, provides such an embedding with high probability [28]. Later, this property was also verified for Gaussian random matrices, among other random matrix constructions [21, 15]. As a consequence, the JL Lemma has become a valuable tool for dimensionality reduction in a myriad of applications ranging from computer science [26] and numerical linear algebra [36, 23, 18] to manifold learning [6] and compressed sensing [7, 40, 10].
In most of these frameworks, the map $f$ under consideration is a linear map represented by an $m \times N$ matrix $\Phi$. In this case, one can consider the set of differences $E = \{x_i - x_j : i, j \in [p]\} \subset \mathbb{R}^N$; to prove the theorem, one then needs to show that
(2) $(1-\varepsilon)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1+\varepsilon)\|x\|_2^2 \quad \text{for all } x \in E.$
When $\Phi$ is a random matrix, the proof that $\Phi$ satisfies the JL lemma with high probability boils down to showing a concentration inequality of the type
(3) $\mathbb{P}\Big(\Big| \|\Phi x\|_2^2 - \|x\|_2^2 \Big| \ge \varepsilon \|x\|_2^2\Big) \le 2\exp(-c\,\varepsilon^2 m)$
for an arbitrary fixed $x \in \mathbb{R}^N$, where $c$ is an absolute constant in the optimal case, and possibly mildly dependent on $N$ in almost-optimal scenarios as for example in [3]. Indeed, it directly follows by a union bound over the at most $p^2$ elements of $E$ (as in the proof of Theorem 3.1 below) that (2) holds with high probability.
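The concentration phenomenon behind (3) is easy to observe numerically. The following sketch estimates the empirical failure probability for a Gaussian random matrix with i.i.d. $N(0, 1/m)$ entries; the dimensions, the distortion level, and the trial count are illustrative choices, not parameters from the text.

```python
import numpy as np

# Empirical check of the concentration inequality (3) for a Gaussian
# random matrix Phi with i.i.d. N(0, 1/m) entries, applied to one
# fixed unit vector over many independent draws of Phi.
rng = np.random.default_rng(0)
N, m, eps, trials = 400, 200, 0.3, 500

x = rng.standard_normal(N)
x /= np.linalg.norm(x)              # fixed unit-norm vector

failures = 0
for _ in range(trials):
    Phi = rng.standard_normal((m, N)) / np.sqrt(m)
    y = Phi @ x
    if abs(np.dot(y, y) - 1.0) >= eps:
        failures += 1

empirical = failures / trials
print(f"empirical failure probability: {empirical:.4f}")
# The right-hand side of (3) predicts an exponentially small failure
# probability in m; the empirical rate here is indeed tiny.
assert empirical < 0.05
```

Since $\|\Phi x\|_2^2$ is a rescaled chi-square variable with $m$ degrees of freedom, its standard deviation is about $\sqrt{2/m} = 0.1$ here, so deviations of size $0.3$ are rare, as the experiment confirms.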
In order to reduce storage space and implementation time of such embeddings, the design of structured random JL embeddings has been an active area of research in recent years [4, 37, 3, 29]; see [4] or [29] for a good overview of these efforts. Of particular importance in this context is whether fast (i.e., $O(N \log N)$) multiplication algorithms are available for the resulting matrices. Fast JL embeddings with optimal embedding dimension were first constructed by Ailon and Chazelle [1], but their embeddings are fast only for $p = O(e^{N^{1/2}})$ vectors. This restriction on the number of vectors was later weakened to $p = O(e^{N^{1/3}})$ [2]. In [3], fast JL embeddings were constructed without any restrictions on the number of vectors, but the authors only provide the suboptimal embedding dimension $m = O(\varepsilon^{-4} \log(p) \log^4(N))$. In this paper, we provide the first unrestricted fast JL construction with optimal embedding dimension up to logarithmic factors in $N$. Note that in the range $p = \Omega(e^{N^{1/3}})$ not covered by the constructions in [1, 2], a logarithmic factor in $N$ is bounded by $O(\log p)$, and thus plays a minor role.
The Johnson–Lindenstrauss Lemma in Compressed Sensing.
One of the more recent applications of the Johnson–Lindenstrauss Lemma is to the area of compressed sensing, which is centered around the following phenomenon: for many underdetermined systems of linear equations $\Phi x = y$, the solution of minimal $\ell_1$ norm is also the sparsest solution. To be precise, a vector $x \in \mathbb{R}^N$ is called $k$-sparse if it has at most $k$ nonzero entries. A by now classical sufficient condition on the matrix $\Phi$ for guaranteeing equivalence between the minimal $\ell_1$ norm solution and the sparsest solution is the so-called Restricted Isometry Property (RIP) [11, 13, 17].
Definition 1.2.
A matrix $\Phi \in \mathbb{R}^{m \times N}$ is said to have the Restricted Isometry Property of order $k$ and level $\delta \in (0,1)$ (equivalently, $(k,\delta)$-RIP) if
(4) $(1-\delta)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1+\delta)\|x\|_2^2 \quad \text{for all } k\text{-sparse } x \in \mathbb{R}^N.$
The restricted isometry constant $\delta_k$ is defined as the smallest value of $\delta$ for which (4) holds.
In particular, if $\Phi$ has $(2k,\delta)$-RIP with $\delta < 3/(4+\sqrt{6}) \approx 0.465$, and if $\Phi x = y$ admits a $k$-sparse solution, then this solution is recovered exactly by $\ell_1$ minimization [19].
Gaussian and Bernoulli random matrices have $(k,\delta)$-RIP with high probability if the embedding dimension $m \gtrsim \delta^{-2} k \log(N/k)$ [7]. Up to the constant, lower bounds for Gelfand widths of $\ell_1$ balls [22, 20] show that this dependence on $N$ and $k$ in $m$ is optimal. The Restricted Isometry Property also holds for a rich class of structured random matrices, where usually the best known bounds for $m$ have additional log factors in $N$. All known deterministic constructions of RIP matrices require that $m \gtrsim k^2$, or at least $m \gtrsim k^{2-\mu}$ for some small constant $\mu > 0$ [9].
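The recovery phenomenon described above is easy to reproduce on a small instance. The sketch below recovers a sparse vector from Gaussian measurements; for simplicity it uses orthogonal matching pursuit as a stand-in for $\ell_1$ minimization (both succeed under the RIP), and all sizes, seeds, and the greedy solver are illustrative choices rather than anything prescribed in the text.

```python
import numpy as np

# Sketch: recovering a k-sparse vector from m < N Gaussian measurements.
# Orthogonal matching pursuit (OMP) is used here as a simple stand-in
# for l1-minimization; the parameters are illustrative.
rng = np.random.default_rng(1)
N, m, k = 120, 60, 4

Phi = rng.standard_normal((m, N)) / np.sqrt(m)
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=k)
y = Phi @ x                        # underdetermined measurements

# OMP: greedily pick the column most correlated with the residual,
# then refit by least squares on the selected support.
S = []
r = y.copy()
for _ in range(k):
    S.append(int(np.argmax(np.abs(Phi.T @ r))))
    coef, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
    r = y - Phi[:, S] @ coef

x_hat = np.zeros(N)
x_hat[S] = coef
print("recovery error:", np.linalg.norm(x_hat - x))
assert np.linalg.norm(x_hat - x) < 1e-6
```

After each least-squares refit the residual is orthogonal to the selected columns, so no column is selected twice; with $m = 60$ measurements and $k = 4$ nonzeros, recovery is exact up to numerical precision.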
The similarity between the expressions in (2) and (4) suggests a connection between the JL lemma and the Restricted Isometry Property. A first result in this direction was established in [7], wherein it was shown that random matrices satisfying a concentration inequality of type (3) (and hence the JL Lemma) satisfy the RIP of optimal order. More precisely, the authors prove the following theorem.
Theorem 1.3 (Theorem 5.2 in [7]).
Suppose that $m$, $N$, and $\delta \in (0,1)$ are given. If the probability distribution generating the $m \times N$ matrices $\Phi$ satisfies the concentration inequality (3) with $\varepsilon = \delta$, then there exist constants $c_1, c_2 > 0$ depending only on $\delta$ such that, with probability at least $1 - 2e^{-c_2 m}$, the Restricted Isometry Property (4) holds for $\Phi$ with the prescribed $\delta$ and any $k \le c_1 m / \log(N/k)$.
In this sense, the JL Lemma implies the Restricted Isometry Property.
Contribution of this work.
We prove a converse result to Theorem 1.3: we show that RIP matrices, with randomized column signs, provide Johnson–Lindenstrauss embeddings that are optimal up to logarithmic factors in the ambient dimension $N$. In particular, RIP matrices of optimal order $m \asymp \delta^{-2} k \log(N/k)$ provide Johnson–Lindenstrauss embeddings of optimal order $m \asymp \varepsilon^{-2} \log p$, up to a logarithmic factor in $N$ (see Theorem 3.1). Note that without randomization, such a converse is impossible, as vectors in the null space of the fixed parent matrix are always mapped to zero.
This observation has several consequences in the area of compressed sensing, and also allows us to obtain improved JL embedding results for several matrix constructions with existing RIP bounds [13, 35, 31, 38, 33]. Of particular interest is the random partial Fourier or the random partial Hadamard matrix, which is formed by choosing a random subset of $m$ rows from the $N \times N$ discrete Fourier or Hadamard matrix, respectively, and with high probability has $(k,\delta)$-RIP if the embedding dimension $m \gtrsim \delta^{-2} k \log^4(N)$. For these matrices with randomized column signs, the running time for matrix-vector multiplication is $O(N \log N)$, as opposed to the running time of $O(mN)$ for purely random matrices. For such constructions, the previous best-known embedding dimension to ensure that (2) holds with high probability, given by Ailon and Liberty [3], is $m = O(\varepsilon^{-4} \log(p) \log^4(N))$. We can improve their result to have optimal dependence on the distortion, $\varepsilon^{-2}$, showing that $m = O(\varepsilon^{-2} \log(p) \log^4(N))$ rows suffice for the embedding.
This paper is structured as follows: Section 2 introduces necessary notation. In Section 3, we state our main results, and Section 4 gives concrete examples of how these results improve on the bestknown JL bounds for several matrix constructions as well as applications of our findings in compressed sensing. In Section 5 we give the relevant concentration inequalities and explicit RIPbased matrix inequalities that are needed for the proofs, which are then carried out in Section 6.
2 Notation
Before continuing, let us fix some notation to be used in the remainder. For $N \in \mathbb{N}$, we denote $[N] = \{1, \dots, N\}$. The $\ell_p$ norm of a vector $x \in \mathbb{R}^N$ is defined as
$\|x\|_p = \Big(\sum_{j=1}^N |x_j|^p\Big)^{1/p}, \quad 1 \le p < \infty,$
and $\|x\|_\infty = \max_{j \in [N]} |x_j|$ as usual. For a matrix $A$, its operator norm is $\|A\|_{2\to 2} = \max_{\|x\|_2 = 1} \|Ax\|_2$, and its Frobenius norm is defined by $\|A\|_F = \big(\sum_{j,\ell} |a_{j\ell}|^2\big)^{1/2}$.
For two functions $f, g: \Omega \to \mathbb{R}$, $\Omega$ an arbitrary set, we write $f \lesssim g$ if there is a constant $C > 0$ such that $f(\omega) \le C g(\omega)$ for all $\omega \in \Omega$; we write $f \asymp g$ if $f \lesssim g$ and $g \lesssim f$. Let $k \le N$ be given, with $k$ even. For given $x \in \mathbb{R}^N$, we say that $x$ is in decreasing arrangement if one has $|x_j| \ge |x_\ell|$ for $j \le \ell$. For vectors in decreasing arrangement, we decompose $[N]$ into blocks of size $k/2$, i.e., $x = (x^{(1)}, x^{(2)}, \dots)$; the last block is potentially of smaller size. We will also consider the coarse decomposition $x = (x^{(1)}, x^\flat)$, where $x^\flat = (x^{(2)}, x^{(3)}, \dots)$. Denote by $B_j \subset [N]$ the indices corresponding to the $j$th block. For $i, j \in [N]$ we write $i \sim j$ if the two indices are associated to the same block, and we write $i \nsim j$ otherwise. Given a matrix $\Phi$, write $\Phi_j$ to denote the $j$th column, $\Phi_{B_j}$ to denote the matrix that is the restriction of $\Phi$ to the columns indexed by $B_j$, and $\Phi^\flat$ to denote the restriction of $\Phi$ to all but the first $k/2$ columns. Finally, for a vector $\xi \in \mathbb{R}^N$, we denote by $D_\xi \in \mathbb{R}^{N \times N}$ the diagonal matrix satisfying $(D_\xi)_{jj} = \xi_j$.
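The decreasing arrangement and block decomposition can be made concrete with a short sketch; the block size `s` below is a free illustrative parameter, and the final inequality is the standard compressed-sensing estimate relating consecutive blocks.

```python
import numpy as np

# Illustrative sketch of the block decomposition: sort a vector into
# decreasing arrangement and split it into consecutive blocks of size s,
# keeping the first block and the "flat" remainder separately.
rng = np.random.default_rng(2)
N, s = 20, 4

x = rng.standard_normal(N)
order = np.argsort(-np.abs(x))          # indices of decreasing arrangement
x_sorted = x[order]

blocks = [x_sorted[i:i + s] for i in range(0, N, s)]
x_head, x_flat = blocks[0], np.concatenate(blocks[1:])

# Decreasing arrangement: |x_1| >= |x_2| >= ...
assert all(abs(a) >= abs(b) for a, b in zip(x_sorted, x_sorted[1:]))

# Each entry of block j+1 is at most the average magnitude of block j,
# which yields ||x^(j+1)||_2 <= ||x^(j)||_1 / sqrt(s).
for prev, nxt in zip(blocks, blocks[1:]):
    assert np.linalg.norm(nxt) <= np.linalg.norm(prev, 1) / np.sqrt(s) + 1e-12
```

This inequality between consecutive blocks is exactly the observation invoked before Proposition 5.4 in Section 5.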
3 The main results
Theorem 3.1.
Fix $\eta > 0$ and $\varepsilon \in (0,1)$, and consider a finite set $E \subset \mathbb{R}^N$ of cardinality $|E| = p$. Set $k \ge 40 \log\frac{4p}{\eta}$, and suppose that $\Phi \in \mathbb{R}^{m \times N}$ satisfies the Restricted Isometry Property of order $k$ and level $\delta \le \varepsilon/4$. Let $\xi \in \mathbb{R}^N$ be a Rademacher sequence, i.e., uniformly distributed on $\{-1,1\}^N$. Then with probability exceeding $1 - \eta$,
(5) $(1-\varepsilon)\|x\|_2^2 \le \|\Phi D_\xi x\|_2^2 \le (1+\varepsilon)\|x\|_2^2$
uniformly for all $x \in E$.
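The map of Theorem 3.1 is simple to instantiate numerically. The sketch below uses a Gaussian matrix as a convenient stand-in for an RIP matrix (its restricted isometry constant is not verified here), randomizes its column signs via $D_\xi$, and measures the worst distortion over a random point set; all sizes are illustrative.

```python
import numpy as np

# Sketch of the embedding x -> Phi @ D_xi @ x from Theorem 3.1, with a
# Gaussian Phi standing in for an RIP matrix and a random finite set E.
rng = np.random.default_rng(3)
N, m, p = 512, 256, 50

Phi = rng.standard_normal((m, N)) / np.sqrt(m)
xi = rng.choice([-1.0, 1.0], size=N)        # Rademacher column signs

def embed(x):
    return Phi @ (xi * x)                   # Phi @ D_xi @ x

E = rng.standard_normal((p, N))
E /= np.linalg.norm(E, axis=1, keepdims=True)   # normalize ||x||_2 = 1

distortions = [abs(np.linalg.norm(embed(x)) ** 2 - 1.0) for x in E]
print("max distortion over E:", max(distortions))
assert max(distortions) < 0.5
```

With $m = 256$ the typical per-point distortion is on the order of $\sqrt{2/m} \approx 0.09$, so even the maximum over $50$ points stays well below the asserted bound.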
Along the way, our method provides a direct converse to Theorem 1.3:
Proposition 3.2.
Fix $\delta \in (0,1)$, and suppose that there is a constant $C$ such that for all pairs $(k,m)$ that are admissible in the sense that $m \ge C\,\delta^{-2} k \log(N/k)$, the matrix $\Phi \in \mathbb{R}^{m \times N}$ has the Restricted Isometry Property of order $k$ and level $\delta$. Fix $x \in \mathbb{R}^N$ and let $\xi \in \mathbb{R}^N$ be a Rademacher sequence, i.e., uniformly distributed on $\{-1,1\}^N$. Then there exists a constant $c > 0$ such that $\Phi D_\xi$ satisfies the concentration inequality (3) for $\varepsilon \asymp \delta$, with exponent $c\,k$, where $k$ is any integer such that $(k,m)$ is admissible.
4 Concrete examples and applications
Using Theorem 3.1, we can improve on the best Johnson–Lindenstrauss bounds for several matrix constructions that are known to have the Restricted Isometry Property:
1. Matrices arising from bounded orthonormal systems.
Consider an orthonormal system of real-valued functions $\psi_1, \dots, \psi_N$ on a measurable space $\mathcal{D}$ with respect to an orthogonalization measure $\nu$. Such systems are called bounded orthonormal systems if $\sup_{j \in [N]} \|\psi_j\|_\infty \le K$ for some constant $K \ge 1$. We may associate to such a system the matrix $\Phi \in \mathbb{R}^{m \times N}$ with entries $\Phi_{\ell,j} = \frac{1}{\sqrt{m}}\,\psi_j(t_\ell)$, where the sampling points $t_1, \dots, t_m$ are drawn independently according to the orthogonalization measure $\nu$. As shown in [13, 35, 31], matrices arising as such have $(k,\delta)$-RIP with high probability if $m \gtrsim \delta^{-2} K^2 k \log^4(N)$. By Theorem 3.1, these embeddings with randomized column signs satisfy the JL Lemma for $m \gtrsim \varepsilon^{-2} \log(p) \log^4(N)$, which is optimal up to the $\log(N)$ factors.^1
^1 Actually, the bounds in [31] yield a slightly smaller sufficient embedding dimension for $\Phi$ to have RIP with high probability, and hence a slightly smaller JL embedding dimension for $\Phi D_\xi$. However, in order to work with simpler expressions, we absorb the difference into the logarithmic factors.
For measures with discrete support, such constructions are equivalent to choosing $m$ rows at random from an $N \times N$ matrix with orthonormal rows and uniformly bounded entries. Examples include the random partial Fourier matrix and the random partial Hadamard matrix, formed from the discrete Fourier matrix and the discrete Hadamard matrix, respectively. (In the Fourier case, we distribute the resulting real and imaginary parts into different coordinates, inducing an additional factor of $2$.) Note that the structure of these matrices allows for fast matrix-vector multiplication. Recently, Ailon and Liberty [3] verified the JL Lemma for such constructions, with column signs randomized, when $m \gtrsim \varepsilon^{-4} \log(p) \log^4(N)$. Our result improves the factor of $\varepsilon^{-4}$ in their result to the optimal dependence $\varepsilon^{-2}$. We note that while their proof also uses the RIP, it additionally requires arguments from [35] that are specific to discrete bounded orthonormal systems.
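A sketch of such a fast embedding, assuming a subsampled Hadamard matrix with randomized column signs; the iterative fast Walsh-Hadamard transform below runs in $O(N \log N)$ time, and all parameter and seed choices are illustrative.

```python
import numpy as np

# Sketch of a fast JL map from a partial Hadamard matrix with randomized
# column signs: y = sqrt(N/m) * (H @ D_xi @ x)[rows], where the
# orthonormal Hadamard transform H is applied via butterflies.
rng = np.random.default_rng(4)
N, m = 1024, 256                      # N must be a power of two

def fwht(v):
    """Fast Walsh-Hadamard transform (iterative butterflies), O(N log N)."""
    v = v.copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            a, b = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
            v[i:i + h], v[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return v / np.sqrt(len(v))        # orthonormal normalization

xi = rng.choice([-1.0, 1.0], size=N)  # random column signs
rows = rng.choice(N, size=m, replace=False)

def embed(x):
    return np.sqrt(N / m) * fwht(xi * x)[rows]

x = rng.standard_normal(N)
x /= np.linalg.norm(x)
y = embed(x)
print("squared norm after embedding:", np.dot(y, y))
assert abs(np.dot(y, y) - 1.0) < 0.5
```

Because the normalized transform is orthonormal, the full vector $H D_\xi x$ has unit norm; sampling $m$ of its $N$ coordinates and rescaling by $\sqrt{N/m}$ preserves the squared norm in expectation.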
Examples of bounded orthonormal systems connected to continuous measures include the trigonometric polynomials and Chebyshev polynomials, which are orthogonal with respect to the uniform and Chebyshev measures, respectively. The Legendre system, while not uniformly bounded, can still be transformed via preconditioning to a bounded orthonormal system with respect to the Chebyshev measure [33]. Note that all of these constructions have an associated fast transform.
2. Partial circulant matrices.
Other classes of structured random matrices known to have the RIP include partial circulant matrices [34, 30, 32]. In one such setup, the first row of the matrix is a Gaussian or Rademacher random vector, and each subsequent row is created by rotating one element to the right relative to the preceding row vector. Again, $m$ rows of this matrix are sampled, but in contrast to partial Fourier or Hadamard matrices, the selection need not be random. Using that convolution corresponds to multiplication in the Fourier domain, these matrices have associated fast matrix-vector multiplication routines. In [32], such matrices were shown to have $(k,\delta)$-RIP with high probability for $m \gtrsim \delta^{-2} (k \log N)^{3/2}$.
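The FFT-based multiplication can be sketched as follows; the Rademacher generator, the sizes, and the deterministic choice of the first $m$ rows are illustrative choices, and the result is cross-checked against the naive $O(N^2)$ product.

```python
import numpy as np

# Sketch of a partial circulant product: each row of the full matrix is
# the previous row rotated one element to the right, so its action on x
# is a circular correlation with the generator g, computable via the FFT
# in O(N log N) time.
rng = np.random.default_rng(5)
N, m = 256, 64

g = rng.choice([-1.0, 1.0], size=N)      # Rademacher first row

# Row i of the circulant matrix is np.roll(g, i); its action on x equals
# circular convolution of x with h, the index-reversed generator.
h = np.roll(g[::-1], 1)

def circulant_apply(x):
    return np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real

x = rng.standard_normal(N)
fast = circulant_apply(x)[:m]                       # keep the first m rows
C = np.array([np.roll(g, i) for i in range(N)])     # naive O(N^2) check
naive = (C @ x)[:m]
assert np.allclose(fast, naive)
print("fast and naive partial circulant products agree")
```

The agreement of the two products is exact up to floating-point error, by the convolution theorem.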
3. Deterministic constructions.
Several deterministic constructions of RIP matrices are known, including a recent result in [9] that requires only $m \gtrsim k^{2-\mu}$ for some small constant $\mu > 0$. We refer the reader to the exposition in [9] for a good overview in this direction; we highlight two such deterministic constructions here. Using finite fields, DeVore [16] provides deterministic constructions of cyclic $\{0,1\}$-valued matrices with $(k,\delta)$-RIP for $m \gtrsim \delta^{-2} k^2 \log^2(N)$. Iwen [27] provides deterministic constructions of $\{0,1\}$-valued matrices whose number-theoretic properties allow their products with Discrete Fourier Transform (DFT) matrices to be well approximated using a few highly sparse matrix multiplications. Both the binary-valued matrices and their products with the DFT yield RIP matrices with $m \gtrsim \delta^{-2} k^2 \log^2(N)$. By Theorem 3.1, the class of matrices that results by randomizing the column signs of either of these deterministic constructions satisfies the JL Lemma with $m \gtrsim \varepsilon^{-2} \log^2(p) \log^2(N)$.
Note that the amount of randomness needed to construct such embeddings is still comparable to the first two examples, requiring $N$ random bits for the column signs. Under the model assumption that the entries of each vector to be embedded have random signs, however, the required randomness in the matrix is removed completely.
In addition to their fast multiplication properties, these examples have the advantage that the construction of the matrix embedding requires only a small number of independent random bits, compared to the $mN$ independent entries needed for matrices with independent entries. We note that stronger embedding results are known with fewer bits, if one imposes restrictions on the norm of the vectors to be embedded; see [29] and [14].
For each of the aforementioned examples, we summarize the number of dimensions $m$ that are known to be sufficient for the RIP to hold. We also list the previously best known bound for the JL embedding dimension (if there is one), along with the JL bounds obtained from Theorem 3.1. Where Theorem 3.1 yields a better bound than previously known, at least for some range of parameters, we highlight the result in bold face. In each of the bounds, we list only the dependence on $\delta$, $k$, and $N$, or on $\varepsilon$, $p$, and $N$, omitting absolute constants.
 | RIP bound $m \gtrsim$ | Previous JL bound $m \gtrsim$ | JL bound from Theorem 3.1 $m \gtrsim$
Partial Fourier | $\delta^{-2} k \log^4(N)$ | $\varepsilon^{-4} \log(p) \log^4(N)$ [3] | $\boldsymbol{\varepsilon^{-2} \log(p) \log^4(N)}$
Partial Circulant | $\delta^{-2} (k \log N)^{3/2}$ | $\varepsilon^{-2} \log^2(p)$ [39] | $\boldsymbol{\varepsilon^{-2} (\log(p) \log(N))^{3/2}}$
Deterministic (DeVore, Iwen) | $\delta^{-2} k^2 \log^2(N)$ | (none) | $\varepsilon^{-2} \log^2(p) \log^2(N)$
Subgaussian | $\delta^{-2} k \log(N/k)$ | $\varepsilon^{-2} \log(p)$ | $\varepsilon^{-2} \log(p) \log(N/\log p)$
4. Compressed sensing in redundant dictionaries.
As shown recently in [10], concentration inequalities of type (3) allow for the extension of the compressed sensing methodology to redundant dictionaries, in particular tight frames, as opposed to orthonormal bases only. Since signals with sparse representations in redundant dictionaries constitute a much more realistic model of nature, this extension of compressed sensing is fundamental. Our results show that essentially all random matrix constructions arising in the standard theory of compressed sensing (i.e., those based on RIP estimates) also yield compressed sensing matrices for the redundant framework.
5. Compressed sensing with cross validation.
Compressed sensing algorithms are designed to recover approximately sparse signals; if this assumption is violated, they may yield solutions far from the input signal. In [40], a method of cross validation is introduced to detect such situations, and to obtain tight bounds on the error incurred by compressed sensing reconstruction algorithms in general. There, a subset of the measurements is held out from the reconstruction algorithm, and only the remaining measurements are used to produce a candidate approximation $\hat{x}$ to the unknown signal $x$. If the holdout matrix $\Psi$, consisting of the held-out rows of $\Phi$, satisfies the Johnson–Lindenstrauss Lemma, then the observable quantity $\|\Psi \hat{x} - \Psi x\|_2$ can be used as a reliable proxy for the unknown error $\|\hat{x} - x\|_2$. Our work shows that any RIP matrix as in the standard compressed sensing framework can be used for cross validation, up to a randomization of its column signs.
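A toy sketch of the cross validation idea: the holdout residual, suitably rescaled, tracks the unobservable reconstruction error. The deliberately crude least-squares "reconstruction" (a support guess that misses one true entry, so the error is nonzero) and all parameters are illustrative choices, not the method of [40].

```python
import numpy as np

# Sketch of compressed sensing with cross validation: hold out m_cv rows
# of the measurement matrix, reconstruct from the remaining rows, and use
# the rescaled holdout residual as a proxy for ||x_hat - x||.
rng = np.random.default_rng(6)
N, m, m_cv = 200, 80, 20

Phi = rng.standard_normal((m, N)) / np.sqrt(m - m_cv)
x = np.zeros(N)
x[:5] = [1.0, -1.0, 1.0, 1.0, -1.0]
y = Phi @ x

Phi_rec, y_rec = Phi[m_cv:], y[m_cv:]     # rows used for reconstruction
Phi_cv, y_cv = Phi[:m_cv], y[:m_cv]       # held-out rows

guess = [0, 1, 2, 3]                       # support guess, misses index 4
coef, *_ = np.linalg.lstsq(Phi_rec[:, guess], y_rec, rcond=None)
x_hat = np.zeros(N)
x_hat[guess] = coef

true_err = np.linalg.norm(x_hat - x)
proxy = np.linalg.norm(Phi_cv @ x_hat - y_cv) * np.sqrt((m - m_cv) / m_cv)
print(f"true error {true_err:.3f}, holdout proxy {proxy:.3f}")
assert 0.3 * true_err < proxy < 3.0 * true_err
```

The key point is that the error vector $\hat{x} - x$ depends only on the reconstruction rows, so it is independent of the holdout rows, which therefore act on it like a fresh JL embedding.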
6. Optimal asymptotics in $\delta$ for RIP to hold.
As mentioned above, it can be shown using a Gelfand width argument that $m \asymp k \log(N/k)$ is the optimal asymptotics (in $k$ and $N$) of the embedding dimension for a matrix with the restricted isometry property (4). Our results, combined with the known optimality of the asymptotics $m \asymp \varepsilon^{-2} \log p$ for the embedding dimension in the Johnson–Lindenstrauss Lemma (Theorem 1.1), imply that, up to a logarithmic factor in $N$, $\delta^{-2}$ is the optimal asymptotics in the restricted isometry constant $\delta$ for fixed $k$ and $N$. Recall that this rate is realized by many of the above examples, such as Gaussian random matrices.
5 Proof Ingredients
The proof of Theorem 3.1 relies on concentration inequalities for Rademacher sequences and explicit RIP-based norm estimates. The first concentration result is a classical inequality by Hoeffding [25].
Proposition 5.1 (Hoeffding’s Inequality).
Let $x \in \mathbb{R}^N$, and let $\xi \in \{-1,1\}^N$ be a Rademacher sequence. Then, for any $t > 0$,
(6) $\mathbb{P}\Big(\Big|\sum_{j=1}^N \xi_j x_j\Big| \ge t\Big) \le 2\exp\Big(-\frac{t^2}{2\|x\|_2^2}\Big).$
The second concentration of measure result is a deviation bound for Rademacher chaos. There are many such bounds in the literature; the following inequality dates back to [24], but appeared with explicit constants and with a much simplified proof in [8].
Proposition 5.2.
Let $A = (a_{ij})_{i,j=1}^N$ be a symmetric matrix and assume that $a_{ii} = 0$ for all $i \in [N]$. Let $\xi \in \{-1,1\}^N$ be a Rademacher sequence. Then, for any $t > 0$,
(7) $\mathbb{P}\Big(\Big|\sum_{i \neq j} \xi_i \xi_j a_{ij}\Big| \ge t\Big) \le 2\exp\Big(-c \min\Big\{\frac{t^2}{\|A\|_F^2},\; \frac{t}{\|A\|_{2\to 2}}\Big\}\Big)$
for an absolute constant $c > 0$.
We also need the following basic estimate for RIP matrices (see for instance [31]).
Proposition 5.3.
Suppose that $\Phi$ has the Restricted Isometry Property of order $k$ and level $\delta$. Then for any two disjoint subsets $S, T \subset [N]$ with $|S| + |T| \le k$,
$\|\Phi_S^* \Phi_T\|_{2\to 2} \le \delta.$
The proof of our norm estimate for RIP matrices uses Proposition 5.3, and relies on the observation commonly used in the theory of compressed sensing (see for example [12]) that for $x$ in decreasing arrangement, decomposed into blocks $x^{(j)}$ of size $k/2$, one has $\|x^{(j+1)}\|_2 \le \sqrt{2/k}\,\|x^{(j)}\|_1$ and thus $\sum_{j \ge 2} \|x^{(j)}\|_2 \le \sqrt{2/k}\,\|x\|_1$.
Proposition 5.4.
Let $x \in \mathbb{R}^N$ with $\|x\|_2 = 1$ be in decreasing arrangement, and let $\Phi$ have the Restricted Isometry Property of order $k$ and level $\delta$. Consider the symmetric matrix $A \in \mathbb{R}^{N \times N}$ with entries
$A_{ij} = x_i \langle \Phi_i, \Phi_j \rangle x_j$ for $i \nsim j$ with $i, j \notin B_1$, and $A_{ij} = 0$ otherwise,
and, for $\ell \in B_1$ and arbitrary fixed signs $\xi^\flat$, the vector $v$ with entries
$v_\ell = x_\ell \big\langle \Phi_\ell,\; \Phi^\flat D_{\xi^\flat} x^\flat \big\rangle.$
The following bounds hold: $\|v\|_2 \lesssim \delta/\sqrt{k}$, and $\|A\|_F \lesssim \delta/\sqrt{k}$, $\|A\|_{2\to 2} \lesssim \delta/k$.
6 Proof of the main results
We begin by proving Theorem 3.1. Without loss of generality, we assume that all $x \in E$ are normalized so that $\|x\|_2 = 1$. Furthermore, assume that $k$ is even.
We first consider a fixed $x \in E$, eventually taking a union bound over all elements of $E$. We further assume that $x$ is in decreasing arrangement. To achieve this, we reorder the entries of $x$, and permute the columns of $\Phi$ accordingly. This has no impact on the following estimates, as the Restricted Isometry Property of the matrix is invariant under permutations of its columns. We need to estimate
(10) $\|\Phi D_\xi x\|_2^2 = \sum_{j} \big\|\Phi_{B_j} D_{\xi_{B_j}} x^{(j)}\big\|_2^2 + 2\big\langle \Phi_{B_1} D_{\xi_{B_1}} x^{(1)},\; \Phi^\flat D_{\xi^\flat} x^\flat \big\rangle + \sum_{\substack{j \neq l \\ j, l \ge 2}} \big\langle \Phi_{B_j} D_{\xi_{B_j}} x^{(j)},\; \Phi_{B_l} D_{\xi_{B_l}} x^{(l)} \big\rangle.$
We will bound the three terms separately.

As $\Phi$ has the Restricted Isometry Property of order $k$ and level $\delta$, it also has the RIP of order $k/2$ and level $\delta$, and each $\Phi_{B_j}$ is almost an isometry. Hence, noting that $D_{\xi_{B_j}} x^{(j)}$ is supported on a single block of size $k/2$ and satisfies $\|D_{\xi_{B_j}} x^{(j)}\|_2 = \|x^{(j)}\|_2$, the first term can be estimated as follows:
$(1-\delta) \sum_j \|x^{(j)}\|_2^2 \le \sum_j \big\|\Phi_{B_j} D_{\xi_{B_j}} x^{(j)}\big\|_2^2 \le (1+\delta) \sum_j \|x^{(j)}\|_2^2.$
Thus, using that $\sum_j \|x^{(j)}\|_2^2 = \|x\|_2^2 = 1$, the first term lies in $[1-\delta, 1+\delta]$.

To estimate the second term, fix $\xi^\flat$ and consider the random variable
$X := 2\langle \xi_{B_1}, v \rangle,$
with $v$ as in Proposition 5.4. By Hoeffding's inequality (Proposition 5.1) combined with Proposition 5.4,
(11) $\mathbb{P}\big(|X| \ge \varepsilon/2\big) \le 2\exp\Big(-\frac{\varepsilon^2}{32\|v\|_2^2}\Big) \le 2\exp\Big(-c\,\frac{\varepsilon^2}{\delta^2}\,k\Big).$
As this bound is uniform in $\xi^\flat$, it also holds unconditionally. Taking a union bound over the $p$ elements of $E$, one obtains
$\mathbb{P}\Big(\sup_{x \in E} |X| \ge \varepsilon/2\Big) \le 2p\exp\Big(-c\,\frac{\varepsilon^2}{\delta^2}\,k\Big).$
In order for this probability to be less than $\eta/2$, we need
$c\,\frac{\varepsilon^2}{\delta^2}\,k \ge \log\frac{4p}{\eta},$
that is,
(12) $k \gtrsim \frac{\delta^2}{\varepsilon^2}\,\log\frac{4p}{\eta}.$
We can rewrite the third term as
$Z := \sum_{i \neq j} \xi_i \xi_j A_{ij},$
where $A$ is the matrix as in Proposition 5.4. By Proposition 5.4, we have $\|A\|_F \lesssim \delta/\sqrt{k}$ and $\|A\|_{2\to 2} \lesssim \delta/k$, hence by Proposition 5.2,
(13) $\mathbb{P}\big(|Z| \ge \varepsilon/4\big) \le 2\exp\Big(-c\min\Big\{\frac{\varepsilon^2 k}{\delta^2},\; \frac{\varepsilon k}{\delta}\Big\}\Big).$
Using a union bound over the $p$ elements of $E$, one obtains
$\mathbb{P}\Big(\sup_{x \in E} |Z| \ge \varepsilon/4\Big) \le 2p\exp\Big(-c\min\Big\{\frac{\varepsilon^2 k}{\delta^2},\; \frac{\varepsilon k}{\delta}\Big\}\Big).$
In order for this probability to be less than $\eta/2$, we need
$c\min\Big\{\frac{\varepsilon^2 k}{\delta^2},\; \frac{\varepsilon k}{\delta}\Big\} \ge \log\frac{4p}{\eta},$
that is,
(14) $k \gtrsim \max\Big\{\frac{\delta^2}{\varepsilon^2},\; \frac{\delta}{\varepsilon}\Big\}\,\log\frac{4p}{\eta}.$
By assumption, $k \ge 40\log\frac{4p}{\eta}$ and $\delta \le \varepsilon/4$, so conditions (12) and (14) are satisfied. Then the second term is bounded by $\varepsilon/2$ in absolute value, and the last term is bounded by $\varepsilon/4$. Together with the deterministic RIP-based estimate for the first term, which lies in $[1-\varepsilon/4,\, 1+\varepsilon/4]$, this implies the Theorem. ∎
Proof of Proposition 3.2.
Fix $\delta \in (0,1)$, and suppose that there is a constant $C$ such that for all pairs $(k,m)$ with $m \ge C\,\delta^{-2} k \log(N/k)$, $\Phi$ has the Restricted Isometry Property of order $k$ and level $\delta$. Now let $(k,m)$ be admissible. An elementary monotonicity argument shows that there exists $k' \ge k$ such that $(k', m)$ is admissible and $m \asymp \delta^{-2} k' \log(N/k')$. Fix $x \in \mathbb{R}^N$ and let $\xi$ be a Rademacher sequence. Then, for any fixed vector $x$, the estimates in equations (11) and (13) with parameters $k'$ and $\delta$ imply the existence of a constant $c$ for which
(15) $\mathbb{P}\Big(\Big|\|\Phi D_\xi x\|_2^2 - \|x\|_2^2\Big| \ge \varepsilon\|x\|_2^2\Big) \le 2\exp(-c\,k'),$
where $\varepsilon \asymp \delta$. Since $k' \asymp \delta^2 m / \log(N/k')$, this is of the form (3), with a constant mildly dependent on $N$. ∎
Remarks:
Although we have stated the main results for the real setting $\Phi \in \mathbb{R}^{m \times N}$ and $x \in \mathbb{R}^N$, all of the analysis also holds in the complex setting, for $\Phi \in \mathbb{C}^{m \times N}$ and $x \in \mathbb{C}^N$.
As shown in [7], a random matrix whose entries follow a subgaussian distribution is known to have, with high probability, the Restricted Isometry Property of best possible order, that is, one can choose $m \asymp \delta^{-2} k \log(N/k)$. When $k \asymp \log p$, $\Phi D_\xi$ is a JL embedding by Theorem 3.1, and our resulting bound for $m$ is optimal up to a single logarithmic factor in $N$. This shows that Theorem 3.1 must also be optimal up to a single logarithmic factor in $N$.
Acknowledgments
The authors would like to thank Holger Rauhut, Deanna Needell, Jan Vybíral, Mark Tygert, Mark Iwen, Justin Romberg, Mark Davenport, and Arie Israel for valuable discussions on this topic. Rachel Ward gratefully acknowledges the partial support of a National Science Foundation Postdoctoral Research Fellowship. Felix Krahmer gratefully acknowledges the partial support of the Hausdorff Center for Mathematics. Finally, both authors are grateful for the support of the Institute for Advanced Study through the Park City Math Institute, where this project was initiated.
References
 [1] N. Ailon and B. Chazelle. Approximate nearest neighbors and the fast Johnson–Lindenstrauss transform. STOC: Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, 2006.
 [2] N. Ailon and E. Liberty. Fast dimension reduction using Rademacher series on dual BCH codes. SODA '08: Proceedings of the nineteenth annual ACM-SIAM symposium on Discrete algorithms, pages 1–9, 2008.
 [3] N. Ailon and E. Liberty. Almost optimal unrestricted fast Johnson–Lindenstrauss transform. Symposium on Discrete Algorithms (SODA), to appear, 2011.
 [4] N. Ailon, E. Liberty, and A. Singer. Dense fast random projections and Lean Walsh transforms. Proceedings of the 12th International Workshop on Randomization and Computation (RANDOM), pages 512–522, 2008.
 [5] N. Alon. Problems and results in extremal combinatorics. Discrete Math, 273:31–53, 2003.
 [6] R. Baraniuk and M. Wakin. Random projections of smooth manifolds. In Foundations of Computational Mathematics, pages 941–944, 2006.
 [7] R. G. Baraniuk, M. Davenport, R. A. DeVore, and M. Wakin. A simple proof of the Restricted Isometry Property for random matrices. Constr. Approx., 28(3):253–263, 2008.
 [8] S. Boucheron, G. Lugosi, and P. Massart. Concentration inequalities using the entropy method. Ann. Probab., 31(3):1583–1614, 2003.
 [9] J. Bourgain, S. Dilworth, K. Ford, S. Konyagin, and D. Kutzarova. Explicit constructions of RIP matrices and related problems. Preprint, 2010.
 [10] E. Candès, Y. Eldar, and D. Needell. Compressed sensing with coherent and redundant dictionaries. Appl. Comput. Harmon. Anal., to appear, 2011.
 [11] E. J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52(2):489–509, 2006.
 [12] E. J. Candès, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59(8):1207–1223, 2006.
 [13] E. J. Candès and T. Tao. Near optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. Inform. Theory, 52(12):5406–5425, 2006.
 [14] A. Dasgupta, R. Kumar, and T. Sarlós. A sparse Johnson–Lindenstrauss transform. STOC, pages 341–350, 2010.
 [15] S. Dasgupta and A. Gupta. An elementary proof of a theorem of Johnson and Lindenstrauss. Random Structures and Algorithms, 22:60–65, 2003.
 [16] R. DeVore. Deterministic constructions of compressed sensing matrices. J. Complexity, 23:918–925, 2007.
 [17] D. L. Donoho. For most large underdetermined systems of linear equations the minimal $\ell_1$-norm solution is also the sparsest solution. Comm. Pure Appl. Math., 59(6):797–829, 2006.
 [18] E. Liberty, F. Woolfe, P.-G. Martinsson, V. Rokhlin, and M. Tygert. Randomized algorithms for the low-rank approximation of matrices. Proceedings of the National Academy of Sciences, 104(51):20167–20172, 2007.
 [19] S. Foucart. A note on guaranteed sparse recovery via $\ell_1$-minimization. Appl. Comput. Harmon. Anal., 29(1):97–103, 2010.
 [20] S. Foucart, A. Pajor, H. Rauhut, and T. Ullrich. The Gelfand widths of $\ell_p$-balls for $0 < p \le 1$. J. Complexity, 26:629–640, 2010.
 [21] P. Frankl and H. Maehara. The Johnson–Lindenstrauss Lemma and the sphericity of some graphs. Journal of Combinatorial Theory B, 44:355–362, 1988.
 [22] A. Garnaev and E. Gluskin. The widths of a Euclidean ball. Doklady An. SSSR, 277:1048–1052, 1984.
 [23] N. Halko, P. Martinsson, and J. Tropp. Finding structure with randomness: Stochastic algorithms for constructing approximate matrix decompositions. SIAM Rev., Survey and Review section, to appear, 2011.
 [24] D. L. Hanson and F. T. Wright. A bound on tail probabilities for quadratic forms in independent random variables. Ann. Math. Statist., 42:1079–1083, 1971.
 [25] W. Hoeffding. Probability inequalities for sums of bounded random variables. J. Amer. Statist. Assoc., 58:13–30, 1963.
 [26] P. Indyk. Algorithmic applications of low-distortion embeddings. Proc. 42nd IEEE Symposium on Foundations of Computer Science, 2001.
 [27] M. Iwen. Simple deterministically constructible RIP matrices with sublinear Fourier sampling requirements. Information Sciences and Systems, 2009. CISS 2009. 43rd Annual Conference on, pages 870–875, 2009.
 [28] W. B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemp. Math., 26:189–206, 1984.
 [29] D. Kane and J. Nelson. A derandomized sparse Johnson–Lindenstrauss transform. Preprint, 2010.
 [30] H. Rauhut. Circulant and Toeplitz matrices in compressed sensing. In Proc. SPARS'09, Saint-Malo, France, 2009.
 [31] H. Rauhut. Compressive sensing and structured random matrices. In M. Fornasier, editor, Theoretical Foundations and Numerical Methods for Sparse Recovery, volume 9 of Radon Series Comp. Appl. Math., pages 1–92. de Gruyter, 2010.
 [32] H. Rauhut, J. Romberg, and J. Tropp. Restricted isometries for partial random circulant matrices. Preprint, 2010.
 [33] H. Rauhut and R. Ward. Sparse Legendre expansions via $\ell_1$-minimization. Preprint, 2010.
 [34] J. Romberg. Compressive sensing by random convolution. SIAM Journal on Imaging Sciences, 2(4):1098–1128, 2009.
 [35] M. Rudelson and R. Vershynin. On sparse reconstruction from Fourier and Gaussian measurements. Comm. Pure Appl. Math., 61:1025–1045, 2008.
 [36] T. Sarlos. Improved approximation algorithms for large matrices via random projections. Proceedings of the 47th IEEE Symposium on Foundations of Computer Science (FOCS), 2006.
 [37] Thong Do, Lu Gan, Yi Chen, Nam Nguyen, and Trac Tran. Fast and efficient dimensionality reduction using structurally random matrices. Proc. of ICASSP, 2009.
 [38] J. Tropp, J. Laska, M. Duarte, J. Romberg, and R. G. Baraniuk. Beyond Nyquist: Efficient sampling of sparse, bandlimited signals. IEEE Trans. Inform. Theory, 56(1):520–544, 2010.
 [39] J. Vybíral. A variant of the Johnson–Lindenstrauss lemma for circulant matrices. Journal of Functional Analysis, 260(4):1096–1105, 2011.
 [40] R. Ward. Compressed sensing with cross validation. IEEE Trans. Inform. Theory, 55:5773–5782, 2009.