Sparsity Lower Bounds for Dimensionality Reducing Maps


Jelani Nelson (Institute for Advanced Study). Supported by NSF CCF-0832797 and NSF DMS-1128155.    Huy L. Nguyễn (Princeton University). Supported in part by NSF CCF-0832797 and a Gordon Wu fellowship.

We give near-tight lower bounds for the sparsity required in several dimensionality reducing linear maps. First, consider the Johnson-Lindenstrauss (JL) lemma, which states that for any set of n vectors in ℝ^d there is a matrix A ∈ ℝ^{m×d} with m = O(ε^{-2} log n) such that mapping by A preserves the pairwise Euclidean distances of these n vectors up to a 1 ± ε factor. We show that there exists a set of n vectors such that any such matrix A with at most s non-zero entries per column must have s = Ω(ε^{-1} log n / log(1/ε)) as long as m = O(ε^{-2} log n). This improves the lower bound of s = Ω(min{ε^{-2}, ε^{-1}√(log_m d)}) by [Dasgupta-Kumar-Sarlós, STOC 2010], which only held against the stronger property of distributional JL, and only against a certain restricted class of distributions. Meanwhile our lower bound is against the JL lemma itself, with no restrictions. Our lower bound matches the sparse Johnson-Lindenstrauss upper bound of [Kane-Nelson, SODA 2012] up to an O(log(1/ε)) factor.

Next, we show that any m × d matrix with the k-restricted isometry property (RIP) with constant distortion must have at least Ω(k log(d/k)) non-zeroes per column if m = O(k log(d/k)), the optimal number of rows of RIP matrices, and k < d/polylog(d). This improves the previous lower bound of Ω(min{k, d/m}) by [Chandar, 2010] and shows that for virtually all k it is impossible to have a sparse RIP matrix with an optimal number of rows.

Both lower bounds above also offer a tradeoff between sparsity and the number of rows.

Lastly, we show that any oblivious distribution over subspace embedding matrices with 1 non-zero entry per column and preserving all distances in a d-dimensional subspace up to a constant factor must have at least Ω(d²) rows. This matches one of the upper bounds in [Nelson-Nguyễn, 2012] and shows the impossibility of obtaining the best of both constructions in that work, namely 1 non-zero entry per column and Õ(d) rows.

1 Introduction

The last decade has witnessed a burgeoning interest in algorithms for large-scale data. A common feature in many of these works is the exploitation of data sparsity to achieve algorithmic efficiency, for example to have running times proportional to the actual complexity of the data rather than the dimension of the ambient space it lives in. This approach has found applications in compressed sensing [CT05, Don06], dimension reduction [BOR10, DKS10, KN10, KN12, WDL09], and numerical linear algebra [CW12, MM12, MP12, NN12]. Given the success of these algorithms, it is important to understand their limitations. For most of these problems, however, it is still not known how far one can reduce the running time on sparse inputs. In this work we take a step toward understanding the performance of algorithms for sparse data and show several tight lower bounds.

In this work we provide three main contributions. We give near-optimal or optimal sparsity lower bounds for Johnson-Lindenstrauss transforms, matrices satisfying the restricted isometry property for use in compressed sensing, and subspace embeddings used in numerical linear algebra. These three contributions are discussed in Section 1.1, Section 1.2, and Section 1.3, respectively.

1.1 Johnson-Lindenstrauss

The following lemma, due to Johnson and Lindenstrauss [JL84], has been used widely in many areas of computer science to reduce data dimension.

Theorem 1 (Johnson-Lindenstrauss (JL) lemma [JL84]).

For any 0 < ε < 1/2 and any x_1, …, x_n in ℝ^d, there exists A ∈ ℝ^{m×d} with m = O(ε^{-2} log n) such that for all i, j ∈ [n]¹,

‖Ax_i − Ax_j‖₂ = (1 ± ε) ‖x_i − x_j‖₂.

¹Here and throughout this paper, [n] denotes the set {1, …, n}.

Typically one uses the lemma in algorithm design by mapping some instance of a high-dimensional computational geometry problem to a lower dimension. The running time to solve the instance then becomes the time needed for the lower-dimensional problem, plus the time to perform the matrix-vector multiplications Ax_i; see [Ind01, Vem04] for further discussion. This latter step highlights the importance of having a JL matrix supporting fast matrix-vector multiplication. The original proofs of the JL lemma took A to be a random dense matrix, e.g. with i.i.d. Gaussian, Rademacher, or even subgaussian entries [Ach03, AV06, DG03, FM88, IM98, JL84, Mat08]. The time to compute Ax then becomes O(m · ‖x‖₀), where ‖x‖₀ is the number of non-zero entries of x.

A beautiful work of Ailon and Chazelle [AC09] described a construction of a JL matrix supporting matrix-vector multiplication in time O(d log d + m³), also with m = O(ε^{-2} log n). This was improved to O(d log m) time [AL09] with the same m for any m = O(d^{1/2−γ}) for constant γ > 0, or to O(d log d) time with m larger by polylogarithmic factors [AL11, KW11]. Thus one can obtain nearly-linear embedding time with the same target dimension as the original JL lemma for a restricted range of m, or one can obtain nearly-linear time for any setting of m by increasing the target dimension slightly by polylogarithmic factors.

While the previous paragraph may seem to present the end of the story, in fact note that the “nearly-linear” O(d log d) embedding time is actually much worse than the original O(m · ‖x‖₀) time of dense JL matrices when ‖x‖₀ is very small, i.e. when x is sparse. Indeed, in several applications we expect x to be sparse. Consider the bag of words model in information retrieval: in for example an email spam collaborative filtering system for Yahoo! Mail [WDL09], each email is treated as a d-dimensional vector where d is the size of the lexicon. The i-th entry of the vector is some weighted count of the number of occurrences of word i (frequent words like “the” should be weighted less heavily). A machine learning algorithm is employed to learn a spam classifier, which involves dot products of email vectors with some learned classifier vector, and JL dimensionality reduction is used to speed up the repeated dot products that are computed during training. Note that in this scenario we expect x to be sparse, since most emails do not contain nearly every word in the lexicon. An even starker scenario is the turnstile streaming model, where the vector x may receive coordinate-wise updates in a data stream. In this case maintaining Ax in a stream given some update of the form “add v to x_i” requires adding v · Ae_i, i.e. v times the i-th column of A, to the compression Ax stored in memory. For a dense A this takes Θ(m) time, which we would not like to spend per streaming update.
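To make the turnstile update cost concrete, here is a minimal sketch (assuming a hypothetical column-sparse sketching matrix with s entries of ±1/√s per column; all sizes here are illustrative, not taken from the paper) of maintaining the compression Ax under updates “add v to x_i” while touching only the s non-zero entries of column i:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, s = 1000, 64, 4  # illustrative sizes

# Hypothetical column-sparse matrix A: each column has s entries equal to
# +/- 1/sqrt(s), placed in s distinct random rows.
rows = np.array([rng.choice(m, size=s, replace=False) for _ in range(d)])
signs = rng.choice([-1.0, 1.0], size=(d, s)) / np.sqrt(s)

y = np.zeros(m)  # the compression Ax, maintained in memory
x = np.zeros(d)  # the conceptual stream vector, kept only for checking

def update(i, v):
    """Process "add v to x_i" in O(s) time by adding v times column i of A."""
    y[rows[i]] += v * signs[i]
    x[i] += v

for i, v in [(3, 1.0), (10, -2.5), (3, 0.5)]:
    update(i, v)

# The maintained sketch matches A @ x recomputed from scratch.
A = np.zeros((m, d))
for i in range(d):
    A[rows[i], i] = signs[i]
assert np.allclose(y, A @ x)
```

Each `update` call does O(s) arithmetic regardless of m, which is the point of column sparsity in the streaming setting.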

The intuition behind all the works [AC09, AL09, AL11, KW11] to obtain O(d log d) embedding time was as follows. Picking A to be a scaled sampling matrix (where each row has a single non-zero entry in a random location) gives the correct expectation for ‖Ax‖₂², but the variance may be too high. Indeed, the variance is high exactly when x is sparse; consider the extreme case x = e_1, where sampling is not even expected to see the non-zero coordinate unless m = Ω(d). These works then all essentially proceed by randomly preconditioning x to ensure that it is very well-spread (i.e. far from sparse) with high probability, so that sampling works, and thus fundamentally cannot take advantage of input sparsity. One way of obtaining faster matrix-vector multiplication for sparse inputs is to have sparse JL matrices A. Indeed, if A has at most s non-zero entries per column then Ax can be computed in O(s · ‖x‖₀) time. A line of work [Ach03, Mat08, DKS10, BOR10, KN10, KN12] investigated the smallest value of s achievable in a JL matrix, culminating in [KN12] showing that it is possible to simultaneously have s = O(ε^{-1} log n) and m = O(ε^{-2} log n). Such a sparse JL transform thus speeds up embeddings by a factor of roughly ε^{-1} without increasing the target dimension.
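The contrast between sampling and a sparse matrix can be illustrated with a small experiment. The construction below is only a sketch in the spirit of the sparse JL transforms discussed above; the parameters are illustrative and no claim is made that they match the cited constructions:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, s = 2000, 256, 8  # illustrative parameters

def sparse_jl(d, m, s, rng):
    # Each column: s entries of +/- 1/sqrt(s) in s distinct random rows.
    A = np.zeros((m, d))
    for j in range(d):
        r = rng.choice(m, size=s, replace=False)
        A[r, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return A

A = sparse_jl(d, m, s, rng)

x_dense = rng.standard_normal(d)
x_dense /= np.linalg.norm(x_dense)
x_sparse = np.zeros(d)
x_sparse[0] = 1.0  # the extreme sparse case x = e_1

# Both norms are roughly preserved; for x = e_1 the embedded norm is
# exactly 1 since each column of A has unit norm.
for x in (x_dense, x_sparse):
    assert 0.5 < np.linalg.norm(A @ x) < 1.5

# By contrast, a sampling matrix (one nonzero per row) misses the single
# nonzero coordinate of e_1 with probability (1 - 1/d)^m, about 0.88 here.
S = np.zeros((m, d))
S[np.arange(m), rng.integers(0, d, size=m)] = np.sqrt(d / m)
print(np.linalg.norm(S @ x_sparse))
```

The sampling matrix only becomes reliable after preconditioning spreads the mass of x, which is exactly the step that destroys input sparsity.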

Our Contribution I:

We show that for any ε and any n, there exists a set of n vectors such that any JL matrix for this set of vectors with m = O(ε^{-2} log n) rows requires column sparsity s = Ω(ε^{-1} log n / log(1/ε)). Thus the sparse JL transforms of [KN12] achieve optimal sparsity up to an O(log(1/ε)) factor. In fact this lower bound on s continues to hold, with a graceful tradeoff against the number of rows, even when m is allowed to be considerably larger (see Section 2).

Note that for m = n one can simply take A to be the identity matrix, which achieves s = 1; thus some upper bound restriction on m is necessary, and ours is nearly optimal. Also note that ε cannot be taken too small, since otherwise m ≥ n is already required in any JL matrix [Alo09], so this restriction on ε is harmless. Furthermore, if all the entries of A are required to be equal in magnitude, our lower bound holds for an even wider range of parameters (see Section 2.1).

Before our work, only a restricted lower bound of s = Ω(min{ε^{-2}, ε^{-1}√(log_m d)}) had been shown [DKS10]. In fact this lower bound only applied to the distributional JL problem, a much stronger guarantee in which one wants to design a distribution over matrices A such that any fixed vector x has ‖Ax‖₂ = (1 ± ε)‖x‖₂ with probability 1 − δ over the choice of A. Indeed, any distributional JL construction yields the JL lemma by setting δ < 1/n² and union bounding over all the difference vectors x_i − x_j. Thus, aside from the weaker lower bound on s, [DKS10] only provided a lower bound against this stronger guarantee, and furthermore only for a certain restricted class of distributions that made certain independence assumptions amongst matrix entries, and also assumed certain bounds on the sum of fourth moments of matrix entries in each row.

It was shown by Alon [Alo09] that m = Ω(ε^{-2} log n / log(1/ε)) is required for the set of points {0, e_1, …, e_n} as long as ε ≥ 1/√n. Here e_i is the i-th standard basis vector. Simple manipulations show that, when appropriately scaled, any JL matrix for this set of vectors is O(ε)-incoherent, in the sense that all its columns have unit norm and the dot products between pairs of distinct columns are all at most O(ε) in magnitude. We study this exact same hard input to the JL lemma; what we show is that any such matrix must have column sparsity s = Ω(ε^{-1} log n / log(1/ε)).

In some sense our lower bound can be viewed as a generalization of the Singleton bound for error-correcting codes in a certain parameter regime. The Singleton bound states that for any code C with block length t, alphabet size q, and minimum distance ℓ, it must be that |C| ≤ q^{t − ℓ + 1}. If the code has relative distance 1 − ε then ℓ = (1 − ε)t, so that the Singleton bound implies |C| ≤ q^{εt + 1}. The connection to incoherent matrices (and thus the JL lemma), observed in [Alo09], is the following. For any such code C with n codewords, form a matrix A with m = qt rows and n columns. The rows are partitioned into t chunks each of size q. In the column of A corresponding to codeword c, in the i-th chunk we put a 1/√t in the row of that chunk corresponding to the symbol c_i, and we put zeroes everywhere else in that column. All columns then have unit ℓ₂ norm, and the code having relative distance 1 − ε implies that all pairs of distinct columns have dot products at most ε. The Singleton bound thus implies that any ε-incoherent matrix formed from codes in this way has t = Ω(ε^{-1} log n / log q). Note the column sparsity of A is exactly t, and thus for q = poly(1/ε) this matches our lower bound. Our sparsity lower bound thus recovers this Singleton-like bound, without the requirement that the matrix take the special structure of being formed from a code in the manner described above. One reason this is perhaps surprising is that incoherent matrices formed from codes have all entries nonnegative; our lower bound thus implies that the use of negative entries cannot be exploited to obtain sparser incoherent matrices.
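The code-to-matrix construction can be made concrete with a small Reed-Solomon code (an illustrative sketch; the parameters q = 7, k = 2 are arbitrary choices, not from the paper). Codewords are evaluations of degree-< k polynomials over F_q, so two distinct codewords agree in at most k − 1 positions, and the resulting matrix is (k−1)/t-incoherent with column sparsity t:

```python
import numpy as np
from itertools import product

q, k = 7, 2   # prime alphabet size; codewords = degree-<k polynomials
t = q         # block length: evaluate at every point of F_q
n = q ** k    # number of codewords (columns)
m = q * t     # rows: t chunks of q rows each

# Column for polynomial p: in chunk i, put 1/sqrt(t) in the row of that
# chunk indexed by the symbol p(i) mod q; zeroes elsewhere.
A = np.zeros((m, n))
for j, coeffs in enumerate(product(range(q), repeat=k)):
    for i in range(t):
        symbol = sum(c * i**e for e, c in enumerate(coeffs)) % q
        A[i * q + symbol, j] = 1.0 / np.sqrt(t)

G = A.T @ A
coherence = np.max(np.abs(G - np.eye(n)))

# Distinct degree-<k polynomials agree on at most k-1 evaluation points,
# so columns have unit norm and pairwise dot products at most (k-1)/t.
assert np.allclose(np.diag(G), 1.0)
assert coherence <= (k - 1) / t + 1e-9
```

Here every column has exactly t = 7 non-zeroes and all entries are nonnegative, matching the structural features discussed above.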

1.2 Compressed sensing and the restricted isometry property

Another object of interest are matrices satisfying the restricted isometry property (RIP). Such matrices are widely used in compressed sensing.

Definition 2 ([CT05, CRT06b, Can08]).

For any integer 1 ≤ k ≤ d, a matrix A ∈ ℝ^{m×d} is said to have the k-restricted isometry property with distortion δ if (1 − δ)‖x‖₂² ≤ ‖Ax‖₂² ≤ (1 + δ)‖x‖₂² for all x with at most k non-zero entries.

The goal of the area of compressed sensing is to take few nonadaptive linear measurements of a vector x ∈ ℝ^d to allow for later recovery from those measurements. That is to say, if those measurements are organized as the rows of some matrix A ∈ ℝ^{m×d}, we would like to recover x from Ax. Furthermore, we would like to do so with m ≪ d so that Ax is a compressed representation of x. Of course if m < d we cannot recover all vectors x with any meaningful guarantee, since then A has a non-trivial kernel, and x and x + z are indistinguishable for any z in that kernel. The compressed sensing literature has typically focused on the case of x being sparse [CRT06a, Don06], in which case a recovery algorithm could hope to recover x by finding the sparsest x̃ such that Ax̃ = Ax.

The works [Can08, CRT06b, CT05] show that if A satisfies the 2k-RIP with a sufficiently small constant distortion, and if x is k-sparse, then given Ax there is a polynomial-time solvable linear program to recover x. In fact for any x, not necessarily sparse, the linear program recovers a vector x̃ satisfying

‖x − x̃‖₂ ≤ O(1/√k) · min_{‖y‖₀ ≤ k} ‖x − y‖₁,

known as the ℓ₂/ℓ₁ guarantee. That is, the recovery error depends on the ℓ₁ norm of the best k-sparse approximation to x.
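The linear program in question is basis pursuit: minimize ‖z‖₁ subject to Az = Ax. Below is a minimal sketch using scipy; the Gaussian measurement matrix, sizes, and tolerance are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
d, m, k = 100, 40, 3  # ambient dim, measurements, sparsity (illustrative)

A = rng.standard_normal((m, d)) / np.sqrt(m)  # Gaussian A is RIP w.h.p.
x = np.zeros(d)
x[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x  # the m linear measurements

# Basis pursuit: min ||z||_1 s.t. Az = y. Writing z = u - v with
# u, v >= 0 turns it into an LP with objective sum(u) + sum(v).
res = linprog(
    c=np.ones(2 * d),
    A_eq=np.hstack([A, -A]),
    b_eq=y,
    bounds=[(0, None)] * (2 * d),
)
z = res.x[:d] - res.x[d:]

assert res.status == 0
assert np.linalg.norm(z - x) < 1e-4  # the k-sparse x is recovered
```

The LP has Θ(d) variables, which is why recovery via this route, while polynomial time, is expensive for large d.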

It is known [BIPW10, GG84, Kaš77] that any matrix allowing for the ℓ₂/ℓ₁ guarantee simultaneously for all vectors x, and thus any RIP matrix, must have m = Ω(k log(d/k)) rows. For completeness we give a proof of a stronger lower bound involving the distortion ε in Section 5, though we remark here that current uses of RIP all take ε = Θ(1).

Although the recovery x̃ can be found in polynomial time as mentioned above, this polynomial is quite large as the algorithm involves solving a linear program with Θ(d) variables and constraints. This downside has led researchers to design alternative measurement and/or recovery schemes which allow for much faster sparse recovery, sometimes even at the cost of obtaining a recovery guarantee weaker than the ℓ₂/ℓ₁ guarantee for the sake of algorithmic performance. Many of these schemes are iterative, such as CoSaMP [NT09], Expander Matching Pursuit [IR08], and several others [BI09, BIR08, BD08, DTDlS12, Fou11, GK09, NV09, NV10, TG07], and several of their running times depend on the product of the number of iterations and the time required to multiply by A or A* (here A* denotes the conjugate transpose of A). Several of these algorithms furthermore only ever apply A to vectors which are themselves sparse. Thus, recovery time is improved significantly in the case that A is sparse. Previously the only known lower bound for the column sparsity s of an RIP matrix with an optimal number of rows was s = Ω(min{k, d/m}) [Cha10]. Note that if an RIP construction existed matching the [Cha10] column sparsity lower bound, application to a k-sparse vector would take time O(k · min{k, d/m}), which is always O(d) and can be very fast for small k. Furthermore, in several applications of compressed sensing m is very close to d, in which case an Ω(min{k, d/m}) lower bound on column sparsity does not rule out very sparse RIP matrices. For example, in applications of compressed sensing to magnetic resonance imaging, [LDP07] recommended setting the number of measurements to be a constant fraction of d to obtain good performance for recovery of brain and angiogram images. We remark that one could also obtain speedup by using structured RIP matrices, such as those obtained by sampling rows of the discrete Fourier matrix [CT06], though such constructions require matrix-vector multiplication time independent of input sparsity.

Another upside of sparse RIP matrices is that they allow faster algorithms for encoding x. If A has s non-zeroes per column and x receives, for example, turnstile streaming updates, then the compression Ax can be maintained on the fly in O(s) time per update (assuming the non-zero entries of any column of A can be recovered in O(s) time).

Our Contribution II:

We show that as long as k < d/polylog(d), any k-RIP matrix with distortion Θ(1) and m = O(k log(d/k)) rows with s non-zero entries per column must have s = Ω(k log(d/k)). That is, RIP matrices with the optimal number of rows must be dense for almost the full range of k up to d. This lower bound strongly rules out any hope for faster recovery and compression algorithms for compressed sensing by using sparse RIP matrices as mentioned above.

We note that any sparsity lower bound must fail as k approaches d, since the d × d identity matrix trivially satisfies the k-RIP for any k with distortion 0 and has column sparsity 1. Thus, our lower bound holds for almost the full range of parameters for k.

1.3 Oblivious Subspace Embeddings

The last problem we consider is the oblivious subspace embedding (OSE) problem. Here one aims to design a distribution D over matrices S ∈ ℝ^{m×n} such that for any d-dimensional subspace W ⊆ ℝ^n,

P_{S∼D}(∀x ∈ W : ‖Sx‖₂ = (1 ± ε)‖x‖₂) ≥ 2/3.

Sarlós showed in [Sar06] that OSE’s are useful for approximate least squares regression and low rank approximation, and they have also been shown useful for approximating statistical leverage scores [DMIMW12], an important concept in statistics and machine learning. See [CW12] for an overview of several applications of OSE’s.

To give more details of how OSE's are typically used, consider the example of solving an overconstrained least-squares regression problem, where one must compute argmin_x ‖Ax − b‖₂ for some A ∈ ℝ^{n×d} and b ∈ ℝ^n. By overconstrained we mean n > d, and really one should imagine n ≫ d in what follows. There is a closed form solution x = A⁺b for the minimizing vector, which requires computing the Moore-Penrose pseudoinverse A⁺ of A. The total running time is O(nd^{ω−1}), where ω is the exponent of square matrix multiplication.
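As a quick illustration of the closed-form solution (a sketch with arbitrary small sizes):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 4  # overconstrained: n >> d (illustrative sizes)
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

# Closed-form least-squares minimizer: x* = A^+ b, with A^+ the
# Moore-Penrose pseudoinverse; the cost is dominated by matrix
# products/factorizations involving the n x d matrix A.
x_star = np.linalg.pinv(A) @ b

# Sanity checks: agrees with a standard solver, and the residual is
# orthogonal to the column space of A (the normal equations).
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_star, x_lstsq)
assert np.allclose(A.T @ (A @ x_star - b), 0.0, atol=1e-7)
```

For huge n this exact computation is the bottleneck that sketching is meant to avoid.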

Now suppose we are only interested in finding some x̃ so that

‖Ax̃ − b‖₂ ≤ (1 + ε) · min_x ‖Ax − b‖₂.

Then it suffices to have a matrix S which preserves the norms of all vectors in the subspace spanned by b and the columns of A, in which case we could obtain such an x̃ by solving the new least squares regression problem of computing argmin_x ‖SAx − Sb‖₂. If S has m rows, the new running time is the sum of three terms: (1) the time to compute SA, (2) the time to compute Sb, and (3) the O(md^{ω−1}) time required to solve the new least-squares problem. It turns out it is possible to obtain such an S with m = O(d/ε²) rows by choosing, for example, a matrix with appropriately scaled independent Gaussian entries (see e.g. [Gor88, KM05]), but then computing SA is as expensive as solving the original regression problem, providing no benefit.

The work of Sarlós picked S with special structure so that SA can be computed in time O(nd log n), namely by using the Fast Johnson-Lindenstrauss Transform of [AC09] (see also [Tro11]). Unfortunately this time bound holds even for sparse matrices A, and several applications require solving numerical linear algebra problems on sparse matrix inputs. For example in the Netflix matrix, where rows are users, columns are movies, and an entry is some rating score, the matrix is very sparse since most users rate only a tiny fraction of all movies [ZWSP08]. If nnz(A) denotes the number of non-zero entries of A, we would like running times closer to nnz(A) to multiply S by A. Such a running time would be possible, for example, if S only had O(1) non-zero entries per column.

In a recent and surprising work, Clarkson and Woodruff [CW12] gave an OSE with m = poly(d/ε) rows and s = 1 non-zero entry per column, thus providing fast numerical linear algebra algorithms for sparse matrices. For example, the running time for least-squares regression becomes O(nnz(A)) plus a polynomial in d/ε. The dependence on d was improved in [NN12] to m = O(d²/ε²). The work [NN12] also showed how to obtain m = O(d^{1+γ}/ε²) with s = O(1/ε) for any constant γ > 0 (the constant in the big-Oh depends polynomially on 1/γ), or m = O(d · polylog(d)/ε²) with s = O(polylog(d)/ε). It is thus natural to ask whether one can obtain the best of both worlds: can there be an OSE with s = 1 and m = Õ(d)?
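A minimal sketch-and-solve illustration in the spirit of the s = 1 OSE's discussed above (the CountSketch-style construction and the parameters here, with m = Θ(d²) rows, are assumptions for illustration, not the cited constructions verbatim):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 5000, 5
m = 40 * d * d  # Theta(d^2) rows, matching the s = 1 regime

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# CountSketch-style S: one +/-1 per column of S, i.e. each of the n
# coordinates is hashed to one of m buckets with a random sign, so
# S @ A costs O(nnz(A)) time.
h = rng.integers(0, m, size=n)
sigma = rng.choice([-1.0, 1.0], size=n)
SA = np.zeros((m, d))
Sb = np.zeros(m)
np.add.at(SA, h, sigma[:, None] * A)
np.add.at(Sb, h, sigma * b)

x_tilde, *_ = np.linalg.lstsq(SA, Sb, rcond=None)
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)

# The sketched solution's residual is within a small factor of optimal.
ratio = np.linalg.norm(A @ x_tilde - b) / np.linalg.norm(A @ x_star - b)
assert ratio < 1.5
```

The lower bound of Contribution III says that with one non-zero per column, Θ(d²) rows as used here cannot be improved to Õ(d).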

Our Contribution III:

In this work we show that any OSE such that all matrices in its support have m rows and 1 non-zero entry per column must have m = Ω(d²) when the distortion is a sufficiently small constant. Thus for constant distortion and large d, the m = O(d²/ε²) upper bound of [NN12] is optimal.

1.4 Organization

In Section 2 we prove our lower bound for the sparsity required in JL matrices. In Section 3 we give our sparsity lower bound for RIP matrices, and in Section 4 we give our lower bound on the number of rows for OSE's having column sparsity 1. In Section 5 we give a lower bound involving the distortion ε on the number of rows of an RIP matrix, and in Section 6 we state an open problem.

2 JL Sparsity Lower Bound

Define an ε-incoherent matrix as any matrix whose columns have unit norm, and such that every pair of distinct columns has dot product at most ε in magnitude. A simple observation of [Alo09] is that any JL matrix for the set of vectors {0, e_1, …, e_n}, when its columns are scaled by their norms, must be O(ε)-incoherent.

In this section, we consider an ε-incoherent matrix A with m rows and at most s non-zero entries per column. We show a lower bound on s in terms of m and ε. In particular, if m = O(ε^{-2} log n) is the number of rows guaranteed by the JL lemma, we show that s = Ω(ε^{-1} log n / log(1/ε)). In fact, if all the non-zero entries in A are equal in magnitude, we show that the lower bound holds for an even wider range of parameters.

In Section 2.1 we give the lower bound on s in the case that all non-zero entries of A are equal in magnitude. In Section 2.2 we give our lower bound without making any assumption on the magnitudes of the entries of A. Before proceeding further, we prove a couple of lemmas used throughout this section, and also later in this paper. Throughout this section, A is always an ε-incoherent matrix.

Lemma 3.

For any , cannot have any row with at least entries greater than , nor can it have any row with at least entries less than .

Proof.  For the sake of contradiction, suppose did have such a row, say the th row. Suppose for some , where (the case where they are each less than is argued identically). Let denote the th column of . Let be but with the th coordinate replaced with . Then for any

Thus we have

and rearranging gives the contradiction .

Lemma 4.

Let be positive reals with and . Then if it must be the case that .

Proof.  Define the function . Then is increasing for . Then since , for for constant we have the equality , where the term goes to zero as . Thus for sufficiently small we have that the term must be less than , so in order to have , since is increasing we must have .

2.1 Sign matrices

In this section we consider the case that all non-zero entries of A have the same magnitude, and show a lower bound on s in this case.

Lemma 5.

Suppose and all entries of are in . Then .

Proof.  For the sake of contradiction suppose . There are non-zero entries in and thus at least of these entries have the same sign by the pigeonhole principle; wlog let us say appears at least times. Then again by pigeonhole some row of has values that are . The claim now follows by Lemma 3 with .

We now show how to improve the bound to the desired form.

Theorem 6.

Suppose and all entries of are in . Then .

Proof.  We know by Lemma 5. Let . Every has subsets of size of non-zero coordinates. Thus by pigeonhole there exists a set of rows and columns such that for each row all entries in those columns are in magnitude and have the same sign (the signs may vary across rows). Letting be but with those coordinates set to , we have

Thus we have

so that rearranging gives

Suppose for some small constant so that . Then


Taking the natural logarithm of both sides gives

Define , . Then , since . By [Alo09] we must have , so for smaller than some fixed constant. Thus by Lemma 4 we have . The theorem follows since [Alo09].

Corollary 7.

Suppose and all entries of are in . Then .

2.2 General matrices

We now consider arbitrary sparse and nearly orthogonal matrices A. That is, we no longer require the non-zero entries of A to be equal in magnitude.

Lemma 8.

Suppose . Then .

Proof.  For the sake of contradiction suppose . We know by Lemma 3 that for any , no row of can have more than entries of value at least in magnitude and of the same sign. Define . Let be the subset of indices in with , and define . Let denote the square of a random positive value from . Then

By analogously bounding the sum of squares of entries in , we have that the sum of squares of entries at least in magnitude is never more than in the th row of , for any . Thus the total sum of squares of all entries in the matrix less than in magnitude is at most . Meanwhile the sum of all other entries is at most . Thus the sum of squares of all entries in the matrix is at most , by our assumption on . This quantity must be , since every column of has unit norm. However for our stated value of this is impossible since , a contradiction.

We now show how to obtain the extra factor of in the lower bound.

Lemma 9.

Let . Suppose each have and , and furthermore for . Then for any with , we must have with

Proof.  We label each vector by its -type, defined in the following way. The -type of a vector is the set of locations of the largest coordinates in magnitude, as well as the signs of those coordinates, together with a rounding of those top coordinates so that their squares round to the nearest integer multiple of . In the rounding, values halfway between two multiples are rounded arbitrarily; say downward, to be concrete. Note that the amount of mass contained in the top coordinates of any after such a rounding is at most , and thus the number of roundings possible is at most the number of ways to write a positive integer in as a sum of positive integers, which is . Thus the total number of possible -types is at most ( choices of the largest coordinates, choices of their signs, and choices for how they round). Thus by the pigeonhole principle, there exist vectors each with the same -type such that .

Now for these vectors , let of size be the set of the largest coordinates (in magnitude) in each . Define ; that is, we zero out the coordinates in . Then for ,


The last inequality used that . Also we pick to ensure so that the right hand side of Eq.(1) is less than . The penultimate inequality follows by Cauchy-Schwarz. Thus we have


However we also have , which implies by rearranging Eq.(2).

Theorem 10.

There is some fixed so that the following holds. Let . Suppose each have and , and furthermore for . Then as long as .

Proof.  By Lemma 8, . Set so that Lemma 9 applies. Then by Lemma 9, as long as ,

where is as in Lemma 9. Taking the natural logarithm on both sides,

In other words,

Define . Thus we have . We have that is always the case for since then and we have that . Also note for smaller than some constant we have that since by [Alo09]. Thus by Lemma 4 we have . Using that since , and that for our setting of when gives . Since [Alo09], this is equivalent to our lower bound in the theorem statement.

Corollary 11.

Let be as in Theorem 10. Then as long as .

Remark 12.

From Theorem 10, we can deduce that for constant ε, in order for the sparsity s to be a constant independent of n, it must be the case that m = n^{Ω(1)}. This fact rules out very sparse mappings even when we significantly increase the target dimension.

3 RIP Sparsity Lower Bound

Consider a k-RIP matrix A with distortion ε in which each column has at most s non-zero entries. We will show that for k < d/polylog(d), s cannot be very small when A has the optimal number of rows m = Θ(k log(d/k)).

Theorem 13.

Assume , for some fixed universal small constant , . Then we must have .

Proof.  Assume for the sake of contradiction that . Consider the th column of for some fixed . By -RIP, the norm of each column of is at least , so the sum of squares of entries greater than in magnitude is at least . Therefore, there exists a scale such that the number of entries of absolute value greater than or equal to is at least . To see this, let be the set of rows such that . For the sake of contradiction, suppose that every scale has strictly fewer than values that are at least in magnitude (note this also implies ). Let be the square of a random element of . Then

a contradiction. Let a pattern at scale be a subset of size of along with signs. There are patterns where for all and the signs of match the signs of .

There are possible patterns at scale . By an averaging argument, there exists a scale , and a pattern such that the number of columns of with this pattern is at least . Consider 2 cases.

Case 1 ():

Pick an arbitrary set of such columns. Consider the vector with ones at locations corresponding to those columns and zeroes everywhere else. We have and for each , we have


This contradicts the assumption that .

Case 2 ():

Consider the vector with ones at locations corresponding to those columns and zeroes everywhere else. We have and for each , we have . Consider 2 subcases.

Case 2.1 ():

Then , so


This contradicts the assumption that .

Case 2.2 ():

. We have


Eq.(4) follows from . Eq.(5) follows from the fact that is monotonically decreasing for . Indeed,