Uniform Sampling for Matrix Approximation
Abstract
Random sampling has become a critical tool in solving massive matrix problems. For linear regression, a small, manageable set of data rows can be randomly selected to approximate a tall, skinny data matrix, improving processing time significantly. For theoretical performance guarantees, each row must be sampled with probability proportional to its statistical leverage score. Unfortunately, leverage scores are difficult to compute. A simple alternative is to sample rows uniformly at random. While this often works, uniform sampling will eliminate critical row information for many natural instances.
We take a fresh look at uniform sampling by examining what information it does preserve. Specifically, we show that uniform sampling yields a matrix that, in some sense, well approximates a large fraction of the original. While this weak form of approximation is not enough for solving linear regression directly, it is enough to compute a better approximation.
This observation leads to simple iterative row sampling algorithms for matrix approximation that run in input-sparsity time and preserve row structure and sparsity at all intermediate steps. In addition to an improved understanding of uniform sampling, our main proof introduces a structural result of independent interest: we show that every matrix can be made to have low coherence by reweighting a small subset of its rows.
1 Introduction
Many fast, randomized algorithms for solving massive regression problems rely on the fundamental building block of spectral approximation. For a tall, narrow data matrix $A$, these methods find a shorter approximate data matrix, $\tilde{A}$, such that, for all vectors $x$, $\|\tilde{A}x\|_2 \approx \|Ax\|_2$. A recent explosion in work on this problem has led to extremely fast algorithms, all of which rely on variations of Johnson-Lindenstrauss random projections [CW13, MM13, NN12, LMP13].
By re-examining uniform sampling, a heuristic known to work for low coherence data, we give spectral approximation algorithms that avoid projection entirely. Our methods are the first to match state-of-the-art runtimes while preserving row structure and sparsity in all matrix operations.
It is known that for a data matrix $A \in \mathbb{R}^{n \times d}$, a spectral approximation can be obtained by sampling $O(d \log d)$ rows, each with probability proportional to its statistical leverage score [DMM06, SS08]. The leverage score of $A$'s $i$-th row, $a_i$, is $\tau_i = a_i^\top (A^\top A)^+ a_i$. A higher leverage score indicates that $a_i$ is more important in composing the spectrum of $A$.
Unfortunately, leverage scores are difficult to calculate – finding them involves computing $(A^\top A)^+$, which is as slow as solving our regression problem in the first place! In practice, data is often assumed to have low coherence [MT11], in which case simply selecting rows uniformly at random works [AMT10, KMT12]. However, uniform sampling could be disastrous – if $A$ contains a row with some component orthogonal to all other rows, removing it will reduce the rank of $A$ and thus we cannot possibly preserve all vector products ($\tilde{A}$ will start sending some vectors to $0$). Any uniform sampling scheme is likely to drop any such single row. (When sampling by leverage scores, such a row would have the highest possible leverage score of $1$.)
Possible fixes include randomly “mixing” data points to avoid degeneracies [AMT10]. However, this approach sacrifices sparsity and structure in our data matrix, increasing storage and runtime costs. Is there a more elegant fix? First note that sampling by approximate leverage scores is fine, but we may need to select more than the optimal $O(d \log d)$ rows. With that in mind, consider the following straightforward algorithm for iterative sampling (inspired by [LMP13]):

1. Reduce $A$ significantly by sampling rows uniformly.
2. Approximate $A$ using this smaller matrix and estimate leverage scores for $A$'s rows.
3. Resample rows from $A$ using these estimates, obtaining a spectral approximation $\tilde{A}$.
4. Repeat from Step 1 to reduce $\tilde{A}$ further and obtain a smaller approximation.
While intuitive, this scheme was not previously known to work! Our main technical result is proving that it does. This process (and related schemes) will quickly converge on a small spectral approximation to $A$ – i.e. one with just $O(d \log d)$ rows.
A few results come close to an analysis of such a routine – in particular, two iterative sampling schemes are analyzed in [LMP13]. However, the first ultimately requires Johnson-Lindenstrauss projections that mix rows, something we were hoping to avoid. The second almost maintains sparsity and row structure (except for possibly including rows of the identity matrix in intermediate approximations), but its convergence rate depends on the condition number of $A$.
More importantly, both of these results are similar in that they rely on the primitive that a (possibly poor) spectral approximation to $A$ is sufficient for approximately computing leverage scores, which are in turn good enough for obtaining an even better spectral approximation. As mentioned, uniform sampling will not in general give a spectral approximation – it does not preserve information about all singular values. Our key contribution is a better understanding of what information uniform sampling does preserve. It turns out that, although weaker than a spectral approximation, the matrix obtained from uniform sampling can nonetheless give leverage score estimates that are good enough to obtain increasingly better approximations to $A$.
1.1 Our Approach
Suppose we compute a set of leverage score estimates, $\tilde{\tau}_i$, using $\tilde{A}$ in place of $A$ for some already obtained matrix approximation $\tilde{A}$. As long as our leverage score approximations are upper bounds on the true scores ($\tilde{\tau}_i \ge \tau_i$) we can use them for sampling and still obtain a spectral approximation to $A$ [LMP13]. However, the number of samples we take will increase to
$$c \log d \cdot \sum_i \tilde{\tau}_i,$$
where $c$ is some fixed constant. Note that, when sampling by exact leverage scores, it can be shown that $\sum_i \tau_i = \operatorname{rank}(A) \le d$, so we take just $O(d \log d)$ rows.
Thus, to prove that our proposed iterative algorithm works, we need to show that, if we uniformly sample a relatively small number of rows from $A$ (Step 1) and estimate leverage scores using these rows (Step 2), then the sum of our estimates will be small. Then, when we sample by these estimated leverage scores in Step 3, we can sufficiently reduce the size of $A$. Note that we will not aim to reduce $A$ to height $O(d \log d)$ in one shot – we just need our leverage estimates to have a small enough sum that sampling by them selects, say, $n/2$ rows, which allows us to cut the large matrix in half at each step.
In prior work, the sum of overestimates was bounded by estimating each leverage score to within a multiplicative factor. This requires a spectral approximation, which is why previous iterative sampling schemes could only boost poor spectral approximations to better spectral approximations. Of course, a “for each” statement is not required, and we will not get one through uniform sampling. Thus, our core result avoids this technique. Specifically, we show,
Theorem 1 (Leverage Score Approximation via Uniform Sampling – informal statement).
If we uniformly sample $m$ rows of $A$ and use them to estimate the leverage scores of all of $A$'s rows, then the estimates upper bound the true leverage scores and their expected sum is $O(nd/m)$.
The validity of our proposed iterative sampling scheme immediately follows from Theorem 1. For example, if we set $m = O(d \log d)$ with a high enough constant, sampling by the resulting estimates selects just $n/2$ rows, allowing us to cut our matrix in half. Alternatively, if we uniformly sample many rows (say $n/2$) then the estimates sum to $O(d)$, so we can cut our matrix down to $O(d \log d)$ rows. There is a convenient tradeoff – the more rows uniformly sampled in Step 1, the more we can cut $A$ down by in Step 3. This tradeoff leads to natural recursive and iterative algorithms for row sampling.
We give a proof of Theorem 1 using a clean expectation argument. By considering the estimated leverage score for a row computed using our uniformly sampled matrix unioned with that row itself, we can bound $\tilde{\tau}_i$ for all $i$.
Although not stated here for conciseness, we also prove versions of Theorem 1 with slightly different guarantees (Theorems 3 and 4) using a technique that we believe is of independent interest. It is well known that, if $A$ has low coherence – that is, a low maximum leverage score – then uniform sampling from the matrix is actually sufficient for obtaining a full spectral approximation. The uniform rate will upper bound the leverage score rate for every row. With this in mind, we show a powerful fact: while many matrices do not have low coherence, for any $A$, we can decrease the weight on a small subset of rows to make the matrix have low coherence. Specifically,
Lemma 1 (Coherence Reducing Reweighting).
For any matrix $A \in \mathbb{R}^{n \times d}$ and any coherence upper bound $\alpha$, there exists a diagonal reweighting matrix $W$ with all entries in $[0, 1]$ and just $d/\alpha$ entries not equal to $1$, such that every leverage score of $WA$ is at most $\alpha$.
Intuitively, this lemma shows that uniform sampling gives a matrix that spectrally approximates a large submatrix of the original data. It follows from our more general Theorem 2, which describes exactly how the leverage scores of a matrix can be manipulated through row reweighting.
We never actually find $W$ explicitly – simply its existence implies our uniform sampling theorems! As explained, since $WA$ has low coherence, uniform sampling would give a spectral approximation to the reweighted matrix and thus a multiplicatively good approximation to each of its leverage scores. Thus, the sum of estimated leverage scores for $WA$ will be low, i.e. $O(d)$. It can be shown that, for any row that is not reweighted, the leverage score estimate computed using a uniformly sampled $A$ is never greater than the corresponding estimate computed using a uniformly sampled $WA$. Thus, the sum of approximate leverage scores for rows in $A$ that are not reweighted is small by comparison to their corresponding leverage scores in $WA$. How about the rows that are reweighted in $WA$? Lemma 1 claims there are not too many of these – we can trivially bound their leverage score estimates by $1$ and even then the total sum of estimated leverage scores will be small.
This argument gives the result we need: even if a uniformly sampled $A$ cannot be used to obtain a good multiplicative approximation to each individual leverage score, it is sufficient for ensuring that the sum of all leverage score estimates is not too high.
1.2 Road Map
Section 2: Survey prior work on randomized linear algebra and spectral matrix approximation.
Section 3: Review frequently used notation and important foundational lemmas.
Section 4: Prove that uniform sampling is sufficient for leverage score estimation (Theorem 1).
Section 5: Prove our coherence reducing reweighting result (Theorem 2).
Section 6: Strengthen and generalize Theorem 1 to leverage score approximation via undersampling (Theorems 3 and 4).
Section 7: Describe simple and efficient iterative algorithms for spectral matrix approximation.
2 Background
2.1 Randomized Numerical Linear Algebra
In the past decade, fast randomized algorithms for matrix problems have risen to prominence. Numerous results give improved running times for matrix multiplication, linear regression, and low rank approximation – helpful surveys of this work include [Mah11] and [HMT11]. In addition to asymptotic runtime gains, randomized alternatives to standard linear algebra tools tend to offer significant gains in terms of data access patterns and required working memory.
Algorithms for randomized linear algebra often work by generically reducing problem size – large matrices are compressed (using randomness) to smaller approximations which are processed deterministically via standard linear algebraic methods. Methods for matrix reduction divide roughly into two categories – random projection methods [CW09, CW13, MM13, NN12, Sar06] and sampling methods [DKM04, DKM06a, DKM06b, DMM08, DM10, LMP13].
Random projection methods recombine rows or columns from a large matrix to form a much smaller problem that approximates the original. Descending from the Johnson-Lindenstrauss Lemma [JL84] and related results, these algorithms are impressive for their simplicity and speed – reducing a large matrix simply requires multiplication by an appropriately chosen random matrix.
Sampling methods, on the other hand, seek to approximate large matrices by judiciously selecting (and reweighting) few rows or columns. Sampling itself is even simpler and faster than random projection – the challenge becomes efficiently computing the correct measure of “importance” for rows or columns. More important rows or columns are selected with higher probability.
2.2 Approximate Linear Regression
We focus on linear regression, i.e. solving overdetermined systems, which requires our matrix reduction step to produce a spectral approximation to the data matrix $A$. One possibility is to obtain a $(1 \pm \epsilon)$ approximation with $O(d \log d / \epsilon^2)$ rows, and to solve regression on the smaller problem to give an approximate solution. To improve stability and achieve $\log(1/\epsilon)$ dependence on the error parameter, randomized schemes can be combined with known iterative regression algorithms. These methods only require a constant factor spectral approximation with $O(d \log d)$ rows and are addressed in [AMT10, CRT11, CW13, MSM14, RT08].
When random projections are used, $\tilde{A} = \Pi A$ for some randomly generated matrix $\Pi$, which is known as a subspace embedding. Work on subspace embeddings goes back to [PTRV98] and [Sar06]. Recent progress has significantly sped up the process of computing $\Pi A$, leading to the first input-sparsity time algorithms for linear regression (or nearly input-sparsity time if iterative methods are employed) [CW13, MM13, NN12].
2.3 Row Sampling
As discussed, an alternative route to spectral matrix approximation is importance sampling. Specifically, rows can be sampled with probability proportional to their leverage scores, as suggested in [DMM06] and proved by Spielman and Srivastava [SS08]. Spielman and Srivastava were specifically focused on spectral approximations for the edge-vertex incidence matrix of a graph. This is more commonly referred to as spectral sparsification – a primitive that has been very important in literature on graph algorithms. Each row in a graph's (potentially tall) edge-vertex incidence matrix corresponds to an edge and the row's leverage score is exactly the edge's weighted effective resistance, which is used as the sampling probability in [SS08].
This application illustrates an important point: when applied to spectral graph sparsification, it is critical that the matrix is compressed via sampling instead of random projection. Sampling ensures that $\tilde{A}$ contains only reweighted rows from $A$ – i.e. it remains an edge-vertex incidence matrix. In general, sampling is interesting because it preserves row structure. Even if that structure is just a certain level of sparsity, it can reduce memory requirements and accelerate matrix operations.
While leverage scores for the edge-vertex incidence matrix of a graph can be computed quickly [KMP10, ST04], in general, computing leverage scores requires evaluating $(A^\top A)^+$, which is as difficult as solving regression in the first place. Li, Miller, and Peng address this issue with methods for iteratively computing good row samples [LMP13]. Their algorithms achieve input-sparsity time regression, but are fairly involved and rely on intermediate operations that ultimately require Johnson-Lindenstrauss projections, mixing rows and necessitating dense matrix operations. An alternative approach from [LMP13] does preserve row structure (except for possible additions of rows from the identity to intermediate matrices) but converges in a number of steps that depends on $A$'s condition number.
3 Notation and Preliminaries
3.1 Singular Value Decomposition and Pseudoinverse
For any $A \in \mathbb{R}^{n \times d}$ with rank $r$, we write the reduced singular value decomposition, $A = U \Sigma V^\top$. Here $U \in \mathbb{R}^{n \times r}$ and $V \in \mathbb{R}^{d \times r}$ have orthonormal columns and $\Sigma \in \mathbb{R}^{r \times r}$ is diagonal and contains the nonzero singular values of $A$. Let $A^+ = V \Sigma^{-1} U^\top$ denote the Moore-Penrose pseudoinverse of $A$. Note that $A^\top A = V \Sigma^2 V^\top$ and $(A^\top A)^+ = V \Sigma^{-2} V^\top$.
3.2 Spectral Approximation
For any $\epsilon \in [0, 1)$, we say that $\tilde{A}$ is a $(1 \pm \epsilon)$ spectral approximation of $A$ if, for all $x \in \mathbb{R}^d$,
(1) $(1 - \epsilon) \|Ax\|_2^2 \le \|\tilde{A}x\|_2^2 \le (1 + \epsilon) \|Ax\|_2^2.$
Letting $\sigma_i(\cdot)$ denote the $i$-th singular value of a matrix, spectral approximation implies:
$$(1 - \epsilon)\, \sigma_i^2(A) \le \sigma_i^2(\tilde{A}) \le (1 + \epsilon)\, \sigma_i^2(A).$$
So, a spectral approximation preserves the magnitude of matrix-vector multiplication with $A$, the value of $A$'s quadratic form, and consequently, each singular value of $A$. For conciseness, we sometimes write $(1 - \epsilon) A^\top A \preceq \tilde{A}^\top \tilde{A} \preceq (1 + \epsilon) A^\top A$, where $C \preceq D$ indicates that $D - C$ is positive semidefinite. Even more succinctly, $\tilde{A}^\top \tilde{A} \approx_\epsilon A^\top A$ denotes the same condition.
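To make the definition concrete, the following small sketch (our own illustration; the function name and the full-column-rank assumption are ours, not the paper's) checks condition (1) by whitening $\tilde{A}^\top \tilde{A}$ with $(A^\top A)^{-1/2}$ and inspecting the resulting eigenvalues:

```python
import numpy as np

def is_spectral_approx(A, A_tilde, eps):
    """Check (1-eps)||Ax||^2 <= ||A_tilde x||^2 <= (1+eps)||Ax||^2 for all x.

    Assuming A has full column rank, this holds iff all eigenvalues of
    (A^T A)^{-1/2} (A_tilde^T A_tilde) (A^T A)^{-1/2} lie in [1-eps, 1+eps].
    """
    w, V = np.linalg.eigh(A.T @ A)
    whiten = V @ np.diag(w ** -0.5) @ V.T          # (A^T A)^{-1/2}
    M = whiten @ (A_tilde.T @ A_tilde) @ whiten
    eigs = np.linalg.eigvalsh(M)
    return bool(eigs.min() >= 1 - eps and eigs.max() <= 1 + eps)

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 4))
```

For example, $A$ trivially approximates itself, while $2A$ fails even a loose test since every eigenvalue of the whitened matrix equals $4$.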
3.3 Leverage Scores
The leverage score of the $i$-th row $a_i^\top$ of $A$ is:
(2) $\tau_i(A) \overset{\mathrm{def}}{=} a_i^\top (A^\top A)^+ a_i.$
We also define the related cross leverage score as $\tau_{ij}(A) = a_i^\top (A^\top A)^+ a_j$. Let $\tau(A)$ be a vector containing $A$'s $n$ leverage scores. $\tau(A)$ is the diagonal of $A (A^\top A)^+ A^\top = U U^\top$, which is a projection matrix. Therefore, $\tau_i(A) \le 1$ for all $i$. In addition to this individual bound, because $U U^\top$ is a projection matrix, the sum of $A$'s leverage scores is equal to the matrix's rank:
(3) $\sum_{i=1}^n \tau_i(A) = \operatorname{tr}(U U^\top) = \operatorname{rank}(A).$
A row's leverage score measures how important it is in composing the row space of $A$. If a row has a component orthogonal to all other rows, its leverage score is $1$. Removing it would decrease the rank of $A$, completely changing its row space. If all rows are the same, each has leverage score $1/n$. The coherence of $A$ is $\|\tau(A)\|_\infty$, its maximum leverage score. If $A$ has low coherence, no particular row is especially important. If $A$ has high coherence, it contains at least one row whose removal would significantly affect the composition of $A$'s row space. A characterization that helps with this intuition follows:
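Since $\tau(A)$ is the diagonal of $UU^\top$, leverage scores are straightforward to compute from the reduced SVD; the helper below (our own, for illustration) also lets us verify the individual bound and the sum identity (3) numerically:

```python
import numpy as np

def leverage_scores(A):
    """tau_i(A) = a_i^T (A^T A)^+ a_i = ||U_i||^2, with U from the reduced SVD."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    U = U[:, s > 1e-12 * s.max()]   # keep only directions with nonzero singular value
    return np.sum(U ** 2, axis=1)

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 5))
tau = leverage_scores(A)
# Each score lies in [0, 1]; the scores sum to rank(A) = 5, as in (3).
```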
Lemma 2.
For all $A \in \mathbb{R}^{n \times d}$ and $i \in [n]$ we have that
$$\tau_i(A) = \min_{x \in \mathbb{R}^n :\ A^\top x = a_i} \|x\|_2^2.$$
Let $x^*$ denote the optimal $x$ for row $a_i$. The $j$-th entry of $x^*$ is given by $x^*_j = \tau_{ij}(A)$.
Proof.
For the solution $x$ to have minimal norm, we must have $x$ orthogonal to the kernel of $A^\top$. Thus, $x$ lies in the column space of $A$ and we can write $x = A y$ for some $y \in \mathbb{R}^d$. Using the constraints of the optimization problem we have that $A^\top A y = a_i$. Thus $y = (A^\top A)^+ a_i$, so $x^* = A (A^\top A)^+ a_i$. This gives $x^*_j = a_j^\top (A^\top A)^+ a_i = \tau_{ij}(A)$. Furthermore:
$$\|x^*\|_2^2 = a_i^\top (A^\top A)^+ A^\top A (A^\top A)^+ a_i = a_i^\top (A^\top A)^+ a_i = \tau_i(A).$$
∎
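Lemma 2 is easy to check numerically on a full-column-rank example (our own sanity check, not the paper's code): the minimizer $x^* = A(A^\top A)^+ a_i$ is feasible and its squared norm equals the leverage score.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 4))
Gp = np.linalg.pinv(A.T @ A)

i = 7
x_star = A @ Gp @ A[i]        # candidate minimizer from the proof of Lemma 2
tau_i = A[i] @ Gp @ A[i]      # leverage score of row i

feasible = np.allclose(A.T @ x_star, A[i])        # A^T x* = a_i
norm_matches = np.isclose(x_star @ x_star, tau_i) # ||x*||^2 = tau_i(A)
```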
We often approximate the leverage scores of $A$ by computing them with respect to some other matrix $B$. We define the generalized leverage score:
(4) $\tau_i^B(A) \overset{\mathrm{def}}{=} \begin{cases} a_i^\top (B^\top B)^+ a_i & \text{if } a_i \perp \ker(B), \\ \infty & \text{otherwise.} \end{cases}$
If $a_i$ has a component in $\ker(B)$, we set its generalized leverage score to $\infty$, since $a_i$ might be the only row in $A$ pointing in this direction. Thus, when sampling rows, we cannot remove it. We could set the generalized leverage score to $1$, but using $\infty$ simplifies notation in some of our proofs. If $B$ is a spectral approximation for $A$, then every generalized leverage score is a good multiplicative approximation to its corresponding true leverage score:
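Definition (4) can be implemented directly; in the sketch below (our own, with an ad hoc tolerance standing in for the exact kernel test), a score is set to infinity whenever $a_i$ has a residual outside $B$'s row space:

```python
import numpy as np

def generalized_leverage_scores(A, B, tol=1e-8):
    """tau_i^B(A) = a_i^T (B^T B)^+ a_i, or infinity when a_i has a
    component in ker(B), following definition (4)."""
    Gp = np.linalg.pinv(B.T @ B)
    P = np.linalg.pinv(B) @ B                   # orthogonal projector onto rowspace(B)
    scores = np.einsum('ij,jk,ik->i', A, Gp, A)
    resid = np.linalg.norm(A - A @ P, axis=1)   # nonzero iff a_i leaves rowspace(B)
    scores[resid > tol * (1 + np.linalg.norm(A, axis=1))] = np.inf
    return scores

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
same = generalized_leverage_scores(A, A)   # B = A recovers ordinary leverage scores

B = rng.standard_normal((40, 3))
B[:, 2] = 0.0                              # row space of B misses the third direction
out = generalized_leverage_scores(np.array([[0.0, 0.0, 1.0]]), B)
```

With $B = A$ the scores sum to the rank, while a row pointing outside $B$'s row space receives an infinite score, matching the convention above.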
Lemma 3 (Leverage Score Approximation via Spectral Approximation – Lemma 4.3 of [LMP13]).
If $B$ is a spectral approximation of $A$, so $(1 - \epsilon) A^\top A \preceq B^\top B \preceq (1 + \epsilon) A^\top A$, then $\frac{1}{1+\epsilon}\, \tau_i(A) \le \tau_i^B(A) \le \frac{1}{1-\epsilon}\, \tau_i(A)$.
Proof.
This follows simply from the definition of leverage scores and generalized leverage scores and the fact that $(1 - \epsilon) A^\top A \preceq B^\top B \preceq (1 + \epsilon) A^\top A$ implies $\frac{1}{1+\epsilon} (A^\top A)^+ \preceq (B^\top B)^+ \preceq \frac{1}{1-\epsilon} (A^\top A)^+$. ∎
3.4 Leverage Score Sampling
Sampling rows from $A$ according to their exact leverage scores gives a spectral approximation of $A$ with high probability. Sampling by leverage score overestimates also suffices. Formally:
Lemma 4 (Spectral Approximation via Leverage Score Sampling).
Given an error parameter $0 < \epsilon < 1$, let $u$ be a vector of leverage score overestimates, i.e., $u_i \ge \tau_i(A)$ for all $i$. Let $\alpha$ be a sampling rate parameter and let $c$ be a fixed positive constant. For each row, we define a sampling probability $p_i = \min(1, \alpha\, c \log d \cdot u_i)$. Furthermore, let $\mathrm{Sample}(u, \alpha)$ denote a function which returns a random diagonal matrix $S$ with independently chosen entries: $S_{ii} = 1/\sqrt{p_i}$ with probability $p_i$ and $S_{ii} = 0$ otherwise.
If we set $\alpha = \epsilon^{-2}$, then $S = \mathrm{Sample}(u, \alpha)$ has at most $O(\epsilon^{-2} \|u\|_1 \log d)$ nonzero entries and $SA$ is a $(1 \pm \epsilon)$ spectral approximation for $A$ with probability at least $1 - d^{-\Theta(c)}$.
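A minimal version of this sampling procedure, with illustrative constants standing in for the lemma's unspecified $c$ (a sketch, not the paper's implementation), can be written as:

```python
import numpy as np

def leverage_score_sample(A, u, eps, c=4.0, rng=None):
    """Sample(u, alpha)-style row sampling with alpha = 1/eps^2: keep row i with
    probability p_i = min(1, c * log(d) * u_i / eps^2), reweighted by 1/sqrt(p_i)."""
    rng = rng or np.random.default_rng(0)
    n, d = A.shape
    p = np.minimum(1.0, c * np.log(d + 1) * u / eps ** 2)
    keep = rng.random(n) < p
    return A[keep] / np.sqrt(p[keep])[:, None]

A = np.arange(12.0).reshape(4, 3)
S = leverage_score_sample(A, np.ones(4), 0.5)
```

Note that rows whose probability saturates at $p_i = 1$ are kept with weight $1$; in the tiny example above every $p_i$ saturates, so the "sample" is exactly $A$.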
4 Leverage Score Estimation via Uniform Sampling
In this section, we prove Theorem 1 using a simple expectation argument. We restate a more complete version of the theorem below:
Theorem 1 (Full Statement).
Given any $m \le n$, let $S$ denote a uniformly random sample of $m$ rows from $A$ and let $C \in \mathbb{R}^{n \times n}$ be its diagonal indicator matrix (i.e. $C_{ii} = 1$ for $i \in S$, $C_{ii} = 0$ otherwise). Define
$$\tilde{\tau}_i \overset{\mathrm{def}}{=} \min\left\{1,\ \tau_i^{CA}(A)\right\}.$$
Then, $\tilde{\tau}_i \ge \tau_i(A)$ for all $i$ and
$$\mathbb{E}\left[\sum_{i=1}^n \tilde{\tau}_i\right] \le d + 2\,\frac{(n-m)\,d}{m+1} \le \frac{3nd}{m}.$$
Proof.
First we show that our estimates are valid leverage score upper bounds, i.e. $\tilde{\tau}_i \ge \tau_i(A)$. Let $C_i$ be the diagonal indicator matrix for $S \cup \{i\}$. We claim that, for all $i$,
(5) $\tau_i^{C_i A}(A) \le \tilde{\tau}_i \le 2\, \tau_i^{C_i A}(A).$
This is proved case by case:

When $i \in S$, $C_i = C$ and $\tau_i^{CA}(A) \le 1$, so the claim holds trivially.

When $i \notin S$ and $\tau_i^{CA}(A) = \infty$, then by definition $a_i$ has a component outside the row space of $CA$, so adding row $i$ to the sample gives it leverage score exactly $1$: $\tilde{\tau}_i = 1$ and $\tau_i^{C_i A}(A) = 1$.

When $i \notin S$ and $\tau_i^{CA}(A) = t < \infty$, then by the Sherman-Morrison formula for pseudoinverses [Mey73, Thm 3],
$$\tau_i^{C_i A}(A) = a_i^\top \left(A^\top C A + a_i a_i^\top\right)^+ a_i = t - \frac{t^2}{1+t} = \frac{t}{1+t}, \qquad \text{(Sherman-Morrison formula)}$$
and the claim follows since $\frac{t}{1+t} \le \min\{1, t\} \le \frac{2t}{1+t}$ for all $t \ge 0$.
By (5) and the fact that $\tau_i^{C_i A}(A) \ge \tau_i(A)$ (removing rows can only increase generalized leverage scores; see Lemma 3), we have $\tilde{\tau}_i \ge \tau_i(A)$, so our estimates are upper bounds as desired. It remains to upper bound the expected sum of the $\tilde{\tau}_i$. We can break down the sum as
$$\sum_i \tilde{\tau}_i = \sum_{i \in S} \tilde{\tau}_i + \sum_{i \notin S} \tilde{\tau}_i \le \sum_{i \in S} \tau_i(CA) + 2 \sum_{i \notin S} \tau_i^{C_i A}(A).$$
The first term is simply the sum of $CA$'s leverage scores, so it is equal to $\operatorname{rank}(CA) \le d$ by (3). To bound the second term, consider a random process that first selects $S$, then selects a uniformly random row $i \notin S$ and returns $\tau_i^{C_i A}(A)$. There are always exactly $n - m$ rows $i \notin S$, so the value returned by this random process is, in expectation, exactly equal to $\frac{1}{n-m}\, \mathbb{E}\left[\sum_{i \notin S} \tau_i^{C_i A}(A)\right]$.
This random process is also equivalent to randomly selecting a set $S'$ of $m+1$ rows, then randomly choosing a row $i \in S'$ and returning its leverage score in $A_{S'}$! In expectation it is therefore equal to the average leverage score in $A_{S'}$. $A_{S'}$ has $m+1$ rows and its leverage scores sum to its rank, so we can bound its average leverage score by $\frac{d}{m+1}$. Overall we have:
$$\mathbb{E}\left[\sum_i \tilde{\tau}_i\right] \le d + 2\,(n-m)\cdot\frac{d}{m+1} \le \frac{3nd}{m}.$$
∎
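The expectation bound is easy to probe numerically. The following self-contained experiment (our own sanity check on an arbitrary Gaussian matrix, with a generous slack constant) verifies that the estimates dominate the true scores and that their sum is on the order of $nd/m$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 2000, 4, 400
A = rng.standard_normal((n, d))

S = rng.choice(n, size=m, replace=False)          # uniform row sample
Gp = np.linalg.pinv(A[S].T @ A[S])
tau_tilde = np.minimum(1.0, np.einsum('ij,jk,ik->i', A, Gp, A))

# True leverage scores via the SVD, for comparison.
true_tau = np.sum(np.linalg.svd(A, full_matrices=False)[0] ** 2, axis=1)
# Estimates upper bound true scores; their sum is on the order of n*d/m (= 20 here).
```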
5 Coherence Reducing Reweighting
In this section, we prove Theorem 2, which shows that we can reweight a small number of rows in any matrix to make it have low coherence. This structural result may be of independent interest. It is also fundamental in proving Theorem 3, a slightly stronger and more general version of Theorem 1 that we will prove in Section 6.
Actually, for Theorem 2 we prove a more general statement, studying how to select a diagonal row reweighting matrix $W$ to arbitrarily control the leverage scores of $WA$. One simple conjecture would be that, given a vector $u$, there always exists a $W$ such that $\tau_i(WA) = u_i$ for all $i$. This conjecture is unfortunately not true – if $A$ is the identity matrix, then $\tau_i(WA) = 1$ if $W_{ii} \neq 0$ and $\tau_i(WA) = 0$ otherwise. Instead, we show the following:
Theorem 2 (Leverage Score Bounding Row Reweighting).
For any $A \in \mathbb{R}^{n \times d}$ and any vector $u$ with $0 < u_i \le 1$ for all $i$, there exists a diagonal matrix $W$ with $0 \preceq W \preceq I$ such that:
(6) $\tau_i(WA) \le u_i$ for all $i$,
and
(7) $\sum_{i\, :\, W_{ii} \neq 1} u_i \le d.$
Note that (6) is easy to satisfy – it holds if we set $W = 0$. Hence, the main result is the second claim, (7). Not only does a $W$ exist that gives the desired leverage score bounds, but it is only necessary to reweight rows with a low total weight in terms of $u$.
For any incoherence parameter $\alpha$, if we set $u_i = \alpha$ for all $i$, then this theorem shows the existence of a reweighting that reduces coherence to $\alpha$. Such a reweighting has $\sum_{i : W_{ii} \neq 1} \alpha \le d$ and therefore at most $d/\alpha$ rows are reweighted. So, we see that Lemma 1 follows as a special case of Theorem 2.
In order to prove Theorem 2, we first give two technical lemmas, which are proved in Appendix A. Lemma 5 describes how the leverage scores of $WA$ evolve when a single row of $A$ is reweighted. We show that, when we decrease the weight of a row, that row's leverage score decreases and the leverage scores of all other rows increase.
Lemma 5 (Leverage Score Changes Under Rank 1 Updates).
Given any $A \in \mathbb{R}^{n \times d}$, an index $i$, and $\gamma \in [0, 1]$, let $W$ be a diagonal matrix such that $W_{ii} = \gamma$ and $W_{jj} = 1$ for all $j \neq i$. Then,
$$\tau_i(WA) \le \tau_i(A), \quad \text{with } \tau_i(WA) \to 0 \text{ as } \gamma \to 0 \text{ whenever } \tau_i(A) < 1,$$
and for all $j \neq i$,
$$\tau_j(WA) \ge \tau_j(A).$$
Next we claim that leverage scores are lower semicontinuous in the weighting of the rows. This allows us to reason about weights that arise as the limit of Algorithm 1 for computing them.
Lemma 6 (Leverage Scores are Lower Semicontinuous).
$\tau_i(WA)$ is lower semicontinuous in the diagonal matrix $W$, i.e. for any sequence of nonnegative diagonal matrices $W^{(k)}$ with $W^{(k)} \to W$, we have
$$\tau_i(WA) \le \liminf_{k \to \infty} \tau_i(W^{(k)} A).$$
Proof of Theorem 2.
We prove the existence of the required $W$ by considering the limit of the following algorithm (Algorithm 1) for computing a reweighting matrix: while some row $i$ has $\tau_i(WA) > u_i$, decrease $W_{ii}$ so that $\tau_i(WA) = u_i$, or set $W_{ii} = 0$ if no positive weight achieves this.
For all $k$, let $W^{(k)}$ be the value of $W$ after the $k$-th update to the weights. We show that $\bar{W} = \lim_{k \to \infty} W^{(k)}$ meets the conditions of Theorem 2. First note that Algorithm 1 is well defined and that all entries of $W^{(k)}$ are nonnegative for all $k$. To see this, suppose we need to decrease $W_{ii}$ so that $\tau_i(WA) = u_i$. Note that the condition $\tau_i(WA) > u_i$ gives $\tau_i(WA) > 0$.
Therefore, Lemma 5 shows that we can make $\tau_i(WA)$ arbitrarily small by setting $W_{ii}$ close enough to $0$ (or, if row $i$'s score stays at $1$ for every nonzero weight, we set $W_{ii} = 0$). Since the leverage score for row $i$ is continuous in $W_{ii}$, this implies that the required value of $W_{ii}$ exists as desired.
Since the entries of $W^{(k)}$ are nonnegative and decrease monotonically by construction, the limit $\bar{W}$ clearly exists. Furthermore, since each update sets a weight so that the corresponding leverage score equals its bound, we see that, by construction,
$$\liminf_{k \to \infty} \tau_i(W^{(k)} A) \le u_i \text{ for all } i.$$
Therefore, by Lemma 6 we have that $\tau_i(\bar{W} A) \le u_i$ for all $i$, establishing (6).
It only remains to show that $\sum_{i : \bar{W}_{ii} \neq 1} u_i \le d$. Let $I$ be the set of rows such that $\bar{W}_{ii} \neq 1$ and let $I_0 \subseteq I$ be the rows with $\bar{W}_{ii} = 0$. Since decreasing the weight of one row only increases the leverage scores of all other rows, each row $i \in I \setminus I_0$ has $\tau_i(\bar{W} A) = u_i$, so
$$\sum_{i \in I \setminus I_0} u_i \le \sum_i \tau_i(\bar{W} A) = \operatorname{rank}(\bar{W} A).$$
When we set $\bar{W}_{ii} = 0$, it must be the case that row $i$'s leverage score remains $1$ for every nonzero weight, i.e. $a_i$ has a component orthogonal to all other reweighted rows. In this case, removing the row decreases the rank of $\bar{W} A$ by $1$, and hence $|I_0| \le d - \operatorname{rank}(\bar{W} A)$. Therefore, using $u_i \le 1$,
$$\sum_{i \in I} u_i \le \operatorname{rank}(\bar{W} A) + |I_0| \le d.$$
∎
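Although the proof is purely existential, the limiting reweighting process is easy to imitate numerically. The toy loop below (our own illustration; the shrink factor and iteration cap are arbitrary) repeatedly down-weights rows whose leverage score exceeds a coherence target $\alpha$, here on a matrix with one dominant row:

```python
import numpy as np

def reduce_coherence(A, alpha, shrink=0.9, iters=1000):
    """Shrink the weights of rows whose leverage score in W A exceeds alpha."""
    n, _ = A.shape
    w = np.ones(n)
    tau = np.zeros(n)
    for _ in range(iters):
        WA = w[:, None] * A
        Gp = np.linalg.pinv(WA.T @ WA)
        tau = np.einsum('ij,jk,ik->i', WA, Gp, WA)
        over = tau > alpha
        if not over.any():
            break                    # coherence target reached
        w[over] *= shrink
    return w, tau

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))
A[0] *= 100.0                        # one dominant, high-leverage row
w, tau = reduce_coherence(A, alpha=0.5)
# Only a few weights drop below 1, yet the reweighted matrix has coherence <= 0.5.
```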
6 Leverage Score Approximation via Undersampling
Theorem 1 alone is enough to prove that a variety of iterative methods for spectral matrix approximation work. However, in this section we prove Theorem 3, a slight strengthening and generalization that improves runtime bounds, proves correctness for some alternative sampling schemes, and gives some more intuition for why uniform sampling allows us to obtain leverage score estimates with low total sum.
Theorem 3 relies on Theorem 2, which intuitively shows that a large fraction of our matrix has low coherence. Sampling rows uniformly will give a spectral approximation for this portion of our matrix. Then, since few rows are reweighted in $WA$, even loose upper bounds on the leverage scores for those rows will allow us to bound the total sum of estimated leverage scores when we sample uniformly.
Formally, we show an upper bound on the sum of estimated leverage scores obtained from undersampling according to any set of leverage score upper bounds. Uniform sampling can simply be viewed as undersampling when all we know is that each leverage score is upper bounded by $1$. That is, in the uniform case, we set $u_i = 1$ for all $i$.
The bound in Theorem 3 holds with high probability, rather than in expectation like the bound in Theorem 1. This gain comes at a cost of requiring our sampling rate to be higher by a factor of $O(\log d)$. At the end of this section we show how the $\log d$ factor can be removed at least in the case of uniform sampling, giving a high probability statement that matches the bound of Theorem 1.
Theorem 3 (Leverage Score Approximation via Undersampling).
Let $u$ be a vector of leverage score overestimates, i.e., $u_i \ge \tau_i(A)$ for all $i$. For some undersampling parameter $\alpha \le 1$, let $S = \mathrm{Sample}(u, \alpha)$. Let
$$\tilde{\tau}_i \overset{\mathrm{def}}{=} \min\left\{u_i,\ \tfrac{3}{2}\, \tau_i^{\sqrt{\alpha}\, S A}(A)\right\}.$$
Then, with high probability, $\tilde{\tau}$ is a leverage score overestimate, i.e. $\tilde{\tau}_i \ge \tau_i(A)$ for all $i$, and
$$\sum_i \tilde{\tau}_i = O(d/\alpha).$$
Furthermore, $S$ has $O(\alpha \|u\|_1 \log d)$ nonzeros.
Proof.
Let $\hat{S} = \mathrm{Sample}(u, 1)$. Since $u$ is a set of leverage score overestimates, Lemma 4 (with $\epsilon = 1/2$) shows that, with high probability,
$$\frac{1}{2} A^\top A \preceq A^\top \hat{S}^\top \hat{S} A \preceq \frac{3}{2} A^\top A.$$
In $\hat{S}$, when we include a row, we reweight it by $1/\sqrt{\hat{p}_i}$. For $S$, we sample at a rate lower by a factor of $\alpha$ as compared with $\hat{S}$, so we weight included rows by a factor of $1/\sqrt{\alpha}$ higher. The $\sqrt{\alpha}$ multiplying $S$ makes up for this difference. Thus, under a natural coupling, $\sqrt{\alpha}\, S A$ is equivalent to $\hat{S} A$ with some rows removed. Therefore:
$$\alpha\, A^\top S^\top S A \preceq A^\top \hat{S}^\top \hat{S} A.$$
So, for all $i$, $\tau_i^{\sqrt{\alpha} S A}(A) \ge \tau_i^{\hat{S} A}(A) \ge \frac{2}{3}\, \tau_i(A)$ by Lemma 3. By assumption $u_i \ge \tau_i(A)$, so this proves that $\tilde{\tau}_i \ge \tau_i(A)$.
By Theorem 2, applied with the vector of bounds $\alpha u$, there is a diagonal matrix $W \preceq I$ such that $\tau_i(WA) \le \alpha u_i$ for all $i$ and $\sum_{i : W_{ii} \neq 1} u_i \le d/\alpha$. For this $W$, using the fact that $W \preceq I$, we have
(8) $A^\top W S^\top S W A \preceq A^\top S^\top S A.$
Using $\tau_i(WA) \le \alpha u_i$, we have
(9) $p_i = \min(1, \alpha\, c \log d \cdot u_i) \ge \min(1, c \log d \cdot \tau_i(WA)).$
Now, note that $S W A = W S A$. Since $\alpha u$ is an overestimate of the leverage scores of $WA$, Lemma 4 (again with $\epsilon = 1/2$) shows that $W S A$ is a constant factor spectral approximation of $WA$ with high probability. Hence (9) along with Lemma 3 shows that, for every row with $W_{ii} = 1$,
$$\tau_i^{\sqrt{\alpha} S A}(A) = \frac{1}{\alpha}\, \tau_i^{S A}(A) \le \frac{1}{\alpha}\, \tau_i^{W S A}(W A) \le \frac{2}{\alpha}\, \tau_i(W A).$$
Combining with (8) and summing, using $\sum_i \tau_i(WA) \le d$, we have
$$\sum_i \tilde{\tau}_i \le \sum_{i : W_{ii} \neq 1} u_i + \frac{3}{2} \sum_{i : W_{ii} = 1} \tau_i^{\sqrt{\alpha} S A}(A) \le \frac{d}{\alpha} + \frac{3d}{\alpha} = O(d/\alpha).$$
∎
Choosing an undersampling rate $\alpha$ is equivalent to choosing a desired sampling rate and setting $\alpha$ accordingly. From this perspective, it is clear that the above theorem gives an extremely simple way to iteratively improve leverage scores. Start with $u_i = 1$ for all $i$, so $\|u\|_1 = n$. Undersample at rate $\alpha = \Theta(d/n)$ to obtain a sample of size $O(d \log d)$, which gives new leverage score estimates $\tilde{\tau}$ with $\|\tilde{\tau}\|_1 \le n/2$. Repeat this process, cutting the sum of leverage score estimates in half with each iteration. Recall that we restrict $\alpha \le 1$, so once the sum of leverage score estimates converges to $O(d)$, this halving process halts – as expected, we cannot keep cutting the sum further.
6.1 Improved Bound for Uniform Sampling
The algorithm just described corresponds to Algorithm 3 in Section 7 and differs somewhat from approaches discussed earlier (e.g. our proposed iterative algorithm from Section 1). It always maintains a sample of just $O(d \log d)$ rows that is improved iteratively.
Consider instead sampling few rows from $A$ with the goal of estimating leverage scores well enough to obtain a spectral approximation with $O(d \log d)$ rows. In the uniform sampling case, when $u_i = 1$ for all $i$, if we set $\alpha = \Theta(d \log d / n)$ for example, then sampling $O(d \log^2 d)$ rows uniformly will give us leverage score estimates summing to $O(n / \log d)$. This is good enough to cut our original matrix in half. However, we see that we have lost a $\log d$ factor to Theorem 1, which let us cut our matrix down to $n/2$ rows by sampling just $O(d \log d)$ rows uniformly.
At least when $u_i = 1$ for all $i$, this factor can be eliminated. In Theorem 3, we set $S = \mathrm{Sample}(u, \alpha)$, meaning that rows selected for the sample are included with weight $1/\sqrt{p_i}$. Instead of reweighting rows, consider simply setting all nonzero values in $S$ to be $1$. We know that our leverage score estimates will still be overestimates, as we still have $A^\top S^\top S A \preceq A^\top A$ and so $\tau_i^{SA}(A) \ge \tau_i(A)$. Furthermore, the sum of the new estimates is not much larger.
Formally, consider two cases:

$p = 1$: In this case, $SA$ is simply $A$ itself, so we know our leverage score estimates are exact and thus their sum is $\operatorname{rank}(A) \le d$. We can use them to obtain a spectral approximation with $O(d \log d)$ rows.

$p < 1$: In this case, Theorem 3 would reweight each sampled row by $1/\sqrt{p}$. Thus, increasing the weights in our unweighted sample to $1/\sqrt{p}$ would reduce our leverage score estimates by a factor of $p$; equivalently, our unweighted estimates exceed the reweighted estimates of Theorem 3 by a factor of at most $1/p$. So overall, the sum of our estimates remains bounded.
Recall from Lemma 4 that sampling by the estimates $\tilde{\tau}$ actually gives a matrix with $O(\epsilon^{-2} \|\tilde{\tau}\|_1 \log d)$ rows. Thus, we obtain a spectral approximation to $A$ whose number of rows is proportional to the sum of our estimates.
Setting $\alpha$ so that $\mathrm{Sample}(u, \alpha)$ samples rows at rate $m/n$ yields the following theorem:
Theorem 4.
Given $A \in \mathbb{R}^{n \times d}$, suppose we sample rows uniformly and independently at rate $m/n$, without reweighting, to obtain $SA$. Computing $\tilde{\tau}_i = \min\{1, \tau_i^{SA}(A)\}$ for each row and resampling from $A$ by these estimates will, with high probability, return a $(1 \pm \epsilon)$ spectral approximation to $A$ with at most $O\!\left(\epsilon^{-2}\, \frac{nd}{m} \log d\right)$ rows.
Choosing $m = \Theta(d \log d)$ with a large enough constant allows us to find a spectral approximation of size $n/2$, as long as $n = \Omega(d \log d)$. This matches the bound of Theorem 1, but holds with high probability.
7 Applications to Row Sampling Algorithms
As discussed in the introduction, Theorems 1, 3, and 4 immediately yield new, extremely simple algorithms for spectral matrix approximation. For clarity, we initially present versions running in nearly input-sparsity time. However, we later explain how our first algorithm can be modified with standard techniques to remove log factors, achieving input-sparsity time and thus matching state-of-the-art results [CW13, MM13, NN12]. Our algorithms rely solely on row sampling, which preserves matrix sparsity and structure, possibly improving space usage and runtime for intermediate system solves.
7.1 Algorithm Descriptions
The first algorithm we present, Repeated Halving, is a simple recursive procedure. We uniformly sample $n/2$ rows from $A$ to obtain $A'$. By Theorems 1 and 4, estimating leverage scores of $A$ with respect to this sample allows us to immediately find a spectral approximation to $A$ with $O(d \log d)$ rows. Of course, $A'$ is still large, so computing these estimates directly would be slow. Thus, we recursively find a spectral approximation of $A'$ and use this smaller matrix to compute the estimated leverage scores.
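The recursion can be sketched in a few lines (a sketch with illustrative constants and base-case size; exact small-matrix pseudoinverses stand in for the fast approximate solvers discussed in Section 7.2):

```python
import numpy as np

def repeated_halving(A, min_rows=200, c=8.0, rng=None):
    """Sketch of Repeated Halving: uniformly halve A, recursively approximate the
    half, then use it to estimate leverage scores of A and resample."""
    rng = rng or np.random.default_rng(0)
    n, d = A.shape
    if n <= min_rows:
        return A
    half = rng.choice(n, size=n // 2, replace=False)       # Step 1: uniform halving
    A_small = repeated_halving(A[half], min_rows, c, rng)   # Step 2: recurse on A'
    Gp = np.linalg.pinv(A_small.T @ A_small)                # Step 3: estimate scores
    tau = np.minimum(1.0, np.einsum('ij,jk,ik->i', A, Gp, A))
    p = np.minimum(1.0, c * np.log(d + 1) * tau)            # resample by estimates
    keep = rng.random(n) < p
    return A[keep] / np.sqrt(p[keep])[:, None]

rng = np.random.default_rng(0)
A = rng.standard_normal((4000, 3))
B = repeated_halving(A, rng=rng)
# B has far fewer rows than A while spanning the same column space.
```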
The second algorithm, Refinement Sampling, makes critical use of Theorem 3, which shows that, given a set of leverage score upper bounds, we can undersample by these estimates and still significantly improve their quality with each iteration. We start with all of our leverage score upper bounds set to $1$, so we have $\|u\|_1 = n$. In each iteration, we sample $O(d \log d)$ rows according to our upper bounds, meaning that we undersample at rate $\alpha = O(d / \|u\|_1)$. By Theorem 3, we cut $\|u\|_1$ by a constant fraction in each iteration. Thus, within $O(\log n)$ rounds, $\|u\|_1$ will be $O(d)$ and we can simply use our estimates to directly obtain a spectral approximation to $A$ with $O(d \log d)$ rows.
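This iteration can likewise be sketched compactly (again our own illustration, with arbitrary constants and without the high-probability bookkeeping of the formal analysis):

```python
import numpy as np

def refinement_sampling(A, m=300, rounds=10, c=8.0, rng=None):
    """Sketch of Refinement Sampling: maintain leverage score estimates u,
    refining them with a fresh ~m-row undersample each round."""
    rng = rng or np.random.default_rng(0)
    n, d = A.shape
    u = np.ones(n)                        # trivial initial estimates
    for _ in range(rounds):
        rate = c * np.log(d + 1) * u
        alpha = min(1.0, m / rate.sum())  # undersample when full rate exceeds ~m rows
        p = np.minimum(1.0, alpha * rate)
        keep = rng.random(n) < p
        B = A[keep] / np.sqrt(p[keep])[:, None]
        est = np.einsum('ij,jk,ik->i', A, np.linalg.pinv(B.T @ B), A)
        u = np.minimum(u, np.maximum(est, 1e-12))
    return u

rng = np.random.default_rng(0)
A = rng.standard_normal((3000, 3))
u = refinement_sampling(A, rng=rng)
# The estimates' sum falls from n = 3000 toward a small multiple of d = 3.
```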
7.2 Runtime Analysis
In analyzing the runtimes of these algorithms, we assume $n \le \mathrm{poly}(d)$, which is a reasonable assumption for any practical regression problem. (A simple method for handling even larger values of $n$ is outlined in [LMP13].) Furthermore, we use the fact that a $d \times d$ linear system can be solved in time $O(d^\omega)$, where $\omega$ is the matrix multiplication exponent. However, we emphasize that, depending on the structure and sparsity of $A$, alternative system solving methods may yield faster results or runtimes with different trade-offs. For example, if the rows of $A$ are sparse, solving a system in $B^\top B$, where $B$ consists of rescaled rows from $A$, may be accelerated by using iterative conjugate gradient or other Krylov subspace methods (which can also avoid explicitly computing $B^\top B$). It is best to think of $O(d^\omega)$ as the runtime of the fastest available system solver in your domain, and the quoted runtimes as general guidelines that will change somewhat depending on exactly how the above algorithms are implemented.
First, we give an important primitive showing that estimates of generalized leverage scores can be computed efficiently. Computing exact generalized leverage scores is slow and we only need constant factor approximations, which will only increase our sampling rates and hence number of rows sampled by a constant factor.
Lemma 7.
Given $B$ containing $O(d \log d)$ rescaled rows of $A$, for any error parameter $\theta$, it is possible to compute estimates $\tilde{\tau}^B_i$ of the generalized leverage scores $\tau^B_i(A)$ in time $O(\theta^{-2} (\operatorname{nnz}(A) + d^\omega) \log n)$ such that, w.h.p. in $n$, $(1 - \theta)\, \tau^B_i(A) \le \tilde{\tau}^B_i \le (1 + \theta)\, \tau^B_i(A)$ for all $i$.
By setting $\theta$ to a constant, we can obtain constant factor approximations to generalized leverage scores in time $O((\operatorname{nnz}(A) + d^\omega) \log n)$.
Proof Sketch.
Lemma 7 follows from a standard technique that uses Johnson-Lindenstrauss projections [AT11, LMP13, SS08]. Presuming $a_i \perp \ker(B)$, the general idea is to write $\tau^B_i(A) = \|B (B^\top B)^+ a_i\|_2^2$. If we instead compute $\|G B (B^\top B)^+ a_i\|_2^2$, where $G$ is an appropriately scaled random Gaussian matrix with $O(\theta^{-2} \log n)$ rows, then by the Johnson-Lindenstrauss Lemma, with high probability, the approximation will be within a