# Dimensionality Reduction of Massive Sparse Datasets Using Coresets

Dan Feldman, Mikhail Volkov, and Daniela Rus
###### Abstract

In this paper we present a practical solution, with performance guarantees, to the problem of dimensionality reduction for very large scale sparse matrices. We show applications of our approach to computing the low-rank approximation (reduced SVD) of such matrices. Our solution uses coresets: a coreset here is a weighted subset of scaled rows of the input matrix that approximates the sum of squared distances from the matrix rows to every k-dimensional subspace of R^d, up to a factor of 1±ε. An open theoretical problem has been whether such a coreset can be of size independent of the input dimensions while still being a weighted subset of the input rows. We answer this question affirmatively. Our main technical result is a novel technique for deterministic coreset construction, based on a reduction to the problem of ℓ2 approximation of item frequencies.

## 1 Introduction

Algorithms for dimensionality reduction usually aim to project an input set of n d-dimensional vectors (database records) onto a k-dimensional affine subspace that minimizes the sum of squared distances to these vectors, under some constraints. Special cases include Principal Component Analysis (PCA), linear regression, low-rank approximation (k-SVD), Latent Dirichlet Allocation (LDA) and Non-negative Matrix Factorization (NNMF). Learning algorithms such as k-means clustering can then be applied to the low-dimensional data to obtain fast approximations with provable guarantees.

However, there are still no practical algorithms with provable guarantees that compute dimension reduction for sparse large-scale data. Much of the large-scale, high-dimensional data available today (e.g., image streams, adjacency matrices of graphs and social networks, text streams) is sparse. For example, consider the English Wikipedia. We can associate a matrix with Wikipedia in which the (mostly English) words define the columns (approximately 8 million terms) and the individual documents define the rows (approximately 3.7 million documents). This large-scale matrix is sparse because every document uses only a tiny fraction of the possible English words.

In this paper we present an algorithm with provable approximation guarantees for dimensionality reduction of very large scale sparse datasets such as Wikipedia. Our approach uses coresets to address the time and space challenges.

Given a matrix A, a coreset C in this paper is defined as a weighted subset of the rows of A such that the sum of squared distances from any given k-dimensional subspace to the rows of A is approximately the same as the sum of squared weighted distances to the rows in C. Formally,

###### Definition 1 ((ε,k)-Coreset).

Given a matrix A in R^{n×d} whose n rows a_1, …, a_n are points (vectors) in R^d, an error parameter ε > 0, and an integer k ≥ 1 that represents the desired dimensionality reduction, an (ε,k)-coreset for A is a weighted subset C = {w_i a_i : w_i > 0} of the rows of A, where w = (w_1, …, w_n) is a non-negative weight vector, such that for every k-subspace S of R^d we have

$$\Bigl|\,\sum_i \operatorname{dist}(a_i,S)^2 \;-\; \sum_i \operatorname{dist}(w_i a_i,S)^2 \,\Bigr| \;\le\; \varepsilon \sum_i \operatorname{dist}(a_i,S)^2.$$

That is, the sum of squared distances from the points to S is approximated by the sum of squared weighted distances to S, up to a multiplicative factor of 1±ε. By choosing w = (1, …, 1) we obtain a trivial (ε,k)-coreset. In a more efficient coreset, however, most of the weights are zero and the corresponding rows in A can be discarded. The cardinality of the coreset is thus the sparsity of w:

$$|C| \;=\; \|w\|_0 \;:=\; \bigl|\{\, i \in \{1,\dots,n\} : w_i \neq 0 \,\}\bigr|.$$

If |C| is small, then the computation is efficient. Because C is a weighted subset of the rows of A, if A is sparse, then C is also sparse. A long-open research question has been whether there is such a coreset that is both of size independent of the input dimensions (n and d) and a weighted subset of the original input rows. In this paper we answer this question affirmatively as follows.
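The coreset property of Definition 1 can be checked numerically. The sketch below, with names of our own choosing, verifies it for the trivial coreset w = (1,…,1): distances to a k-subspace S spanned by the columns of an orthonormal d×k matrix V are computed via dist(a,S)² = ‖a‖² − ‖aV‖².

```python
import numpy as np

# Sum of squared distances from the rows of A to the subspace spanned by
# the orthonormal columns of V: dist(a, S)^2 = ||a||^2 - ||a V||^2.
def sum_sq_dist(A, V):
    return (A * A).sum() - np.linalg.norm(A @ V) ** 2

rng = np.random.default_rng(0)
n, d, k = 200, 30, 3
A = rng.standard_normal((n, d))
V, _ = np.linalg.qr(rng.standard_normal((d, k)))  # a random k-subspace

w = np.ones(n)                  # the trivial coreset: every row kept
C = w[:, None] * A              # weighted rows w_i * a_i
err = abs(sum_sq_dist(A, V) - sum_sq_dist(C, V)) / sum_sq_dist(A, V)
sparsity = np.count_nonzero(w)  # |C| = ||w||_0, here equal to n
print(err, sparsity)            # err is 0 for the trivial coreset
```

An efficient coreset would drive most entries of `w` to zero while keeping `err` below ε for every query subspace, not just this one.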

###### Theorem 1.

For every input matrix A in R^{n×d}, every error parameter ε > 0, and every integer k ≥ 1, there is an (ε,k)-coreset of size independent of n and d; see Definition 1.

Our proof is constructive, and we provide an efficient algorithm for computing such a coreset C.

### 1.1 Why Coresets?

The motivation for using data reduction techniques, and coresets as defined above in particular, is given below.

Fast approximations. The k-subspace that minimizes the sum of squared distances to the weighted rows of the coreset C also minimizes the sum of squared distances to the rows of A, up to a factor of 1±ε, over every k-subspace of R^d. If the (ε,k)-coreset C is small, it can be used to efficiently compute the low-rank approximation (reduced SVD) of the original matrix. More generally, using an (ε,k)-coreset we can compute the optimal k-subspace under some given constraints, as desired by several popular dimensionality reduction techniques.

Streaming and parallel computation. An algorithm that constructs an (ε,k)-coreset offline on a single machine can easily be turned into an algorithm that maintains an (ε,k)-coreset for a (possibly unbounded) stream of d-dimensional input vectors, using one pass over these vectors and memory polylogarithmic in the number n of vectors seen so far. Using the same merge-and-reduce technique, we can compute the coreset in an embarrassingly parallel fashion on M distributed machines on a cloud, network, or GPGPU, and reduce the construction time by a factor of M. In both the streaming and parallel models the coreset size grows, since ε is replaced by O(ε / log n) in the memory and running-time computations. For our specific technique, which is based on the Frank-Wolfe algorithm, similar streaming versions do not introduce this log n factor.
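The merge-and-reduce idea can be sketched in a few lines. This is a generic skeleton, not the paper's construction: `reduce` below is a placeholder compression step (keeping the largest-norm rows, purely for illustration), while in the paper it would be the coreset construction itself.

```python
import numpy as np

# Placeholder "coreset" step: compress a batch of rows down to m rows.
# In the paper this would be the actual (eps,k)-coreset construction.
def reduce(rows, m):
    rows = sorted(rows, key=lambda r: -np.linalg.norm(r))
    return rows[:m]

def stream_coreset(stream, batch=64, m=16):
    levels = {}                        # level -> pending compressed batch
    for i in range(0, len(stream), batch):
        c, lvl = reduce(stream[i:i + batch], m), 0
        while lvl in levels:           # merge equal-level pairs, then reduce
            c = reduce(levels.pop(lvl) + c, m)
            lvl += 1
        levels[lvl] = c                # O(log n) levels kept in memory
    out = []
    for c in levels.values():          # final merge of the surviving levels
        out = reduce(out + c, m) if out else c
    return out

rng = np.random.default_rng(1)
pts = list(rng.standard_normal((1000, 5)))
core = stream_coreset(pts)
print(len(core))                       # one pass, logarithmically many buffers
```

Only one compressed buffer per level is ever stored, which is where the polylogarithmic memory bound comes from.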

Applications to sparse data. Let s denote the sparsity of A, i.e., the maximum number of non-zero entries in any single row of A. Note that this definition differs from the more common count of non-zero entries (nnz) in the entire matrix. The memory (space) used by the coreset C is then O(|C| · s) words (real numbers). In particular, if the cardinality of C is independent of the dimensions n and d of A, then so is the memory required to store C and to compute its low-rank approximation. This property allows us to handle massive datasets with unbounded numbers of rows and dimensions, with the natural limitation on the sparsity of each row.

Sparse reduced SVD. Many algorithms and papers aim to compute a low-rank approximation that is spanned by few input points. In particular, if each input point (row) is sparse, then this low-rank approximation is sparse. For example, in the case of text mining, each topic (singular vector) is a linear combination of few words, rather than all the words in the dictionary. We get this property "for free" in the case of coresets: if the coreset consists of few input rows, its optimal k-subspace (k-rank approximation) is also spanned by few input points. The optimal k-subspaces of the sketches presented in Section 1.3 do not have this property in general.

Interpretation and factorization. Since a coreset consists of few weighted rows, it tells us which records in the data are most important, and what their importance is. For example, a coreset for the Wikipedia matrix tells us which documents are most important for approximating the reduced SVD (Latent Semantic Analysis). This property and the previous one are the motivations behind the well-known CX and CUR decompositions, where C is a subset of the columns of the input matrix A and R is a subset of its rows. A coreset represents an even simpler decomposition WA, where W is a sparse diagonal matrix holding the weights on its diagonal, or a decomposition DR, where D is a diagonal matrix holding the non-zero weights and R is the corresponding subset of rows from A.

### 1.2 Our contribution

Our main result is the first algorithm for computing an (ε,k)-coreset of size independent of both n and d, for any given input matrix A in R^{n×d}. Until now, it was not clear that such a coreset exists. The polynomial dependency on d of previous coresets made them useless for fat or square input matrices, such as the Wikipedia matrix, images in a sparse feature-space representation, or the adjacency matrix of a graph. The output coreset has all the properties discussed in Section 1.1. We summarize this result in Theorem 1.

Open problems. We answer two recent open problems: the existence of coresets of size independent of d for low-rank approximation [7] and for 1-mean queries [2]. Both answers are based on our new technical tool for coreset construction; see Section 2.

Efficient construction. We suggest a deterministic algorithm that efficiently computes the above coreset for every given matrix. The construction time is dominated by the time it takes to compute the k-rank approximation of the input matrix. Any such (ε,k)-coreset can be computed in both the parallel and streaming models as explained in [5], or using existing streaming versions of the Frank-Wolfe algorithm.

Efficient implementation. Although our algorithm runs on points in R^{d²} that represent d×d matrices, we show how it can actually be implemented using operations in R^d, similarly to kernel methods in machine learning. The running time of the algorithm is dominated by the time it takes to compute a constant-factor approximation to the k-rank approximation of the input matrix, and the time it takes to approximate the farthest input point from a given query point. In recent years several efficient algorithms have been suggested for these problems using pre-processing, in particular for sparse data (e.g., [3], fast Johnson-Lindenstrauss transforms, and approximate farthest-neighbour search).

### 1.3 Related Work

Existing software and implementations. We have tried to run modern reduced-SVD implementations such as GenSim [11], which uses random projections, and Matlab's svds function, which uses the classic LAPACK library, in turn based on a variant of the Power (Lanczos) method for handling sparse data. All of them crashed on inputs of a few thousand documents, as expected from the analysis of their algorithms. Even for small k, running the implementation of svds in Hadoop [12] (also a version of the Power Method) took several days [8].

Coresets. Following a decade of research, it was recently proved in [13] that an (ε,k)-coreset whose size depends on d but not on n exists for every input matrix. The proof is based on a general framework for constructing different kinds of coresets known as sensitivity [4, 9]. This coreset is efficient for tall matrices, since its cardinality is independent of n. However, it is useless for "fat" or square matrices (such as the Wikipedia matrix above), where d is of the order of n, which is the main motivation for our paper. In [2], the Frank-Wolfe algorithm was used to construct coresets of types different from ours, and for different problems. Our approach is based on a solution that we give to an open problem in [2]; however, we can see how it can be used to compute the coresets in [2] and vice versa.

Sketches. A sketch in the context of matrices is a set of vectors in R^d such that the sum of squared distances from the input points to every k-dimensional subspace of R^d can be approximated from the sketch up to a multiplicative factor of 1±ε.

Note that even if the input vectors are sparse, the sketched vectors are in general not sparse, unlike the case of coresets. A sketch of cardinality d can be constructed with no approximation error (ε = 0) by defining the sketch to be the d rows of the matrix ΣVᵀ, where A = UΣVᵀ is the SVD of A. It was proved in [5] that taking only the first O(k/ε) rows of ΣVᵀ yields such a sketch, i.e., one of size independent of both n and d. Using the merge-and-reduce technique, this sketch can be computed in the streaming and parallel computation models; the final sketch size is then increased by a factor polynomial in log n. For the streaming case, it was shown in [10, 7] that such a strong sketch of the same size can be maintained without the additional log n factor. The first sketch for sparse matrices was suggested in [3], but like more recent results, it assumes that the complete matrix fits in memory.
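The exact (ε = 0) sketch above is easy to verify numerically: for B = ΣVᵀ, ‖AX‖ = ‖BX‖ for every query X, because A = UΣVᵀ and U has orthonormal columns. The snippet below, with illustrative dimensions of our own choosing, also shows why such a sketch loses sparsity: B is dense even when A is not.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((80, 12))
U, S, Vt = np.linalg.svd(A, full_matrices=False)
B = np.diag(S) @ Vt                                # the d = 12 sketch rows

X, _ = np.linalg.qr(rng.standard_normal((12, 4)))  # an arbitrary query
diff = abs(np.linalg.norm(A @ X) - np.linalg.norm(B @ X))
print(diff)                                        # zero up to rounding
```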

Lower bounds. Recently, [7, 10] proved a lower bound of Ω(k/ε) on the cardinality of a strong sketch. A lower bound for strong coresets was proved for a special case in [1].

The Lanczos Algorithm. The Lanczos method and its variants, such as the Power Method, are based on multiplying a large matrix A by a vector for a few iterations to obtain its largest eigenvector v₁. The computation then proceeds recursively after projecting the matrix onto the hyperplane orthogonal to v₁. Multiplying a matrix by a vector can be done easily in streaming mode, without keeping the whole matrix in memory. The main problem with running the Lanczos method on large sparse data is that it is efficient only for the case k = 1: the largest eigenvector of a sparse matrix is in general not sparse, even if A is sparse. Hence, when we project A onto the subspace orthogonal to v₁, the resulting matrix is dense for the rest of the computation (k > 1). Indeed, our experimental results show that the sparse SVD function in MATLAB, which uses this method, runs faster than the exact SVD, but crashes on large inputs, even for small k.
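The matrix-vector-only access pattern that makes these methods streamable can be sketched as follows; this is generic power iteration for the top right singular vector, with iteration count and sizes chosen by us for illustration.

```python
import numpy as np

# Power iteration for the top right singular vector of A. Only the
# products A @ v and A.T @ u are needed, so A never has to be held
# densely in memory -- which is exactly what breaks once deflation
# for k > 1 densifies the matrix.
def power_method(A, iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[1])
    for _ in range(iters):
        v = A.T @ (A @ v)          # one step of power iteration on A^T A
        v /= np.linalg.norm(v)
    return v

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 20))
v1 = power_method(A)
_, _, Vt = np.linalg.svd(A)
# v1 matches the top right singular vector of A up to sign
gap = min(np.linalg.norm(v1 - Vt[0]), np.linalg.norm(v1 + Vt[0]))
print(gap)
```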

## 2 Novel Coreset Constructions

Our main technical result is a new approach that yields coresets of size significantly smaller than the state of the art. While this paper focuses on coresets for computing the reduced SVD, our approach implies a technique for constructing coresets that may improve most previous coreset constructions.

Recall that coresets as defined in this paper must approximate every query in a given set of queries; in this paper the queries are k-subspaces of R^d. Unlike sketches, coresets must also be weighted subsets of the input. In this sense, all existing coreset constructions that we are aware of can be described as computing a sensitivity (or leverage score, or importance) for each point, as described earlier, and then computing what is known as an ε-sample; see [4]. For a given set F of size n and a set of queries (subsets) of F, a subset C of F is an ε-sample if for every query, the fraction f of F covered by the query is ε-approximated by the fraction f̃ of C that it covers, i.e.,

$$|f - \tilde f| \le \varepsilon. \qquad (1)$$

Such an ε-sample can usually be constructed deterministically in time exponential in the pseudo-dimension of the query set (a complexity measure of shapes similar to the VC-dimension [4]), or by uniform random sampling of size roughly linear in that dimension. The second approach yields randomized constructions, which are known to be sub-optimal compared to deterministic ones [1].

In this section we suggest a new, gradient-descent type of deterministic coreset construction, obtained by reducing coreset problems to what we call Item Frequency ℓ2-Approximation. The name comes from the well-known Item Frequency Approximation (IFA) problem, which was recently used in [10] to deterministically compute a small sketch for the reduced SVD problem. Unlike the ε-sample and previous deterministic approaches, our approach yields deterministic coresets of size independent of the pseudo-dimension. This property translates into coresets of size independent of d, both for the reduced SVD problem and for the simpler 1-mean coreset described below.

To show the relation to (1), we present the IFA problem a bit differently than in existing papers. In the IFA problem there is a universe of d items and a stream of n item appearances. The frequency f_i of an item i is the fraction of times i appears in the stream (i.e., its number of occurrences divided by n). It is trivial to produce all item frequencies using O(d) space, simply by keeping a counter for each item. The goal is to use less space and produce approximate frequencies f̃_i such that

$$|f_i - \tilde f_i| \le \varepsilon.$$

Note the surprising similarity to ε-samples in (1).

As stated in [10, 7], IFA is the fundamental tool used to construct small sketches. However, it was also proved in [7] that this approach cannot be used to construct coresets (weighted subsets), and this was posed there as an open problem. In this paper we solve this problem by generalizing the IFA problem as follows.

IFA equals ℓ∞-IFA. First observe that the input vectors a_1, …, a_n for the IFA problem can be considered as (repetitions of) the rows of the d×d identity matrix. We then wish to approximate each entry of the resulting mean vector by a small weighted subset,

$$\max_i |f_i - \tilde f_i| \;=\; \Bigl\| \sum_i \frac{a_i}{n} - \sum_i w_i a_i \Bigr\|_\infty \;\le\; \varepsilon,$$

where w is a non-negative sparse vector of weights, and ‖x‖_∞ = max_i |x_i| is known as the ℓ∞ norm. More generally, we can replace the rows of the identity matrix by an arbitrary set of unit vectors in R^d.

ℓ2-IFA. In this paper we define the ℓ2 version of IFA as the problem of computing a sparse non-negative weight vector w such that

$$\Bigl\| \sum_i \frac{a_i}{n} - \sum_i w_i a_i \Bigr\|_2 \;\le\; \varepsilon,$$

where ‖·‖₂ is known as the ℓ2, Frobenius, or Euclidean norm.

Let Δ denote the set of all vectors z in [0,1]^n that represent a distribution, i.e., ∑_i z_i = 1. Our first technical result is that for any finite set of unit vectors a_1, …, a_n in R^d, any distribution z in Δ, and every ε > 0, we can compute a distribution w in Δ of sparsity (number of non-zero entries) O(1/ε²). This result is in fact straightforward, by applying the Frank-Wolfe algorithm [2] after normalizing the input points.

###### Theorem 2 (Item Frequency ℓ2 Approximation).

Let z in Δ be a distribution over n unit vectors a_1, …, a_n in R^d, and let ε > 0. There is a distribution w in Δ of sparsity ‖w‖₀ = O(1/ε²) such that

$$\Bigl\| \sum_i z_i a_i - \sum_i w_i a_i \Bigr\|_2 \;\le\; \varepsilon.$$

The Caratheodory Theorem proves Theorem 2 for the special case ε = 0, using only d+1 points. Our approach and algorithms for ℓ2-IFA can thus be considered as ε-approximations of the Caratheodory Theorem that yield coresets of size independent of d. Note that a Frank-Wolfe-style algorithm might run for more than d+1 iterations without reaching zero error, since the same point may be selected in several iterations. Computing in each iteration the closest point to the origin that is spanned by all the points selected in the previous iterations would guarantee coresets of size at most d+1, and fewer iterations. Of course, the computation time of each iteration would be much higher in this case. Due to lack of space we omit further details.
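A minimal sketch of how the Frank-Wolfe algorithm yields the sparse distribution of Theorem 2. The function name `fw_ifa`, the fixed step size 2/(t+2), and the iteration budget are our choices for illustration, not the authors' exact algorithm; the rows are assumed pre-normalized to unit norm, as the text requires.

```python
import numpy as np

# l2 Item Frequency Approximation via Frank-Wolfe: given unit rows
# a_1..a_n and a target distribution z, find a sparse distribution w
# with || sum_i z_i a_i - sum_i w_i a_i ||_2 small.
def fw_ifa(A, z, iters):
    c = z @ A                        # target point, inside the convex hull
    w = np.zeros(A.shape[0])
    w[0] = 1.0                       # start at a vertex of the simplex
    for t in range(iters):
        g = 2 * A @ (w @ A - c)      # gradient of ||A^T w - c||^2 in w
        j = int(np.argmin(g))        # linear minimization over the simplex
        gamma = 2.0 / (t + 2)        # standard Frank-Wolfe step size
        w *= 1 - gamma
        w[j] += gamma
    return w

rng = np.random.default_rng(3)
A = rng.standard_normal((500, 10))
A /= np.linalg.norm(A, axis=1, keepdims=True)  # unit vectors, as assumed
z = np.full(500, 1 / 500)                      # uniform distribution
w = fw_ifa(A, z, iters=400)
err = np.linalg.norm(z @ A - w @ A)
print(err, np.count_nonzero(w))                # small error, sparse w
```

The O(1/t) convergence of Frank-Wolfe is what gives sparsity O(1/ε²): after t iterations at most t+1 vertices have nonzero weight.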

### 2.1 Warm up: reduction to 1-mean ε-coresets.

Given a set of n points in R^d and ε > 0, we define a 1-mean ε-coreset to be a weight vector w such that, for every center c in R^d, the sum of squared distances from c to the input points equals the weighted sum of squared distances from c to the weighted points, up to a multiplicative factor of 1±ε.

Using the sensitivity framework [4], such a coreset can be computed with high probability [9]. A corollary of our result also answers the open problem in [2]. Although the approximation property must hold for every center c, we reduce the 1-mean coreset problem to the IFA problem, whose solution depends only on the input points. By scaling and normalizing the input points, we prove that Theorem 2 can be used to deterministically compute the first coreset of size independent of d for the 1-mean problem, in time that is not exponential in the dimension.

###### Corollary 1 (1-mean ε-coreset).

For every set of n vectors in R^d and every ε > 0, a 1-mean ε-coreset of size O(1/ε²) can be computed deterministically.
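The moment identity behind this reduction is easy to verify: the 1-mean cost of any center q decomposes as sum_i w_i‖a_i − q‖² = sum_i w_i‖a_i‖² − 2⟨q, sum_i w_i a_i⟩ + (sum_i w_i)‖q‖², so a weighted subset matching the total weight, the weighted mean, and the weighted sum of squared norms preserves the cost for every q simultaneously. A numerical check of the identity (names are ours):

```python
import numpy as np

def one_mean_cost(A, w, q):
    # Direct weighted sum of squared distances to the center q.
    return float(sum(wi * np.linalg.norm(a - q) ** 2 for wi, a in zip(w, A)))

rng = np.random.default_rng(4)
A = rng.standard_normal((100, 6))
w = np.ones(100)
q = rng.standard_normal(6)

lhs = one_mean_cost(A, w, q)
rhs = (w @ (A * A).sum(axis=1)        # weighted sum of squared norms
       - 2 * q @ (w @ A)              # cross term with the weighted mean
       + w.sum() * (q @ q))           # total weight times ||q||^2
print(abs(lhs - rhs))                 # identical up to rounding
```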

## 3 Reduction from Coreset to ℓ2-Item Frequency Approximation

In the previous section we introduced the ℓ2-IFA problem and several algorithms for solving it. We then suggested a new construction for the 1-mean ε-coreset, based on a simple reduction to the ℓ2-IFA problem using scaled versions of the input points. In this section we prove a more involved reduction for the main problem of this paper: computing an (ε,k)-coreset for the reduced SVD problem of a matrix A. In this case the input to the ℓ2-IFA problem is a set of matrices, i.e., vectors in R^{d²}. Each such matrix has the form v_iᵀv_i, where v_i is related to the ith row of the matrix U in the Singular Value Decomposition A = UΣVᵀ. The proof of Theorem 1 then follows by bounding the right-hand side of the inequality in Claim 1 using Theorem 2. This is done by normalizing the input set, where the mean is translated and scaled appropriately. To get an ε-additive error after this scaling, the number of required iterations is bounded in terms of the sum of the norms of the matrices, which yields a coreset of size independent of n and d.

Notation. We denote by R^{n×d} the set of all n×d real matrices. For a given integer n ≥ 1 we denote [n] = {1, …, n}. For a pair of indexes (integers) i in [n], j in [d], and a given matrix A in R^{n×d}, we denote by A_{i,j} its entry in the ith row and jth column. As in Matlab, we denote by i:j the set of indexes {i, …, j}. The ith row of A is denoted by A_{i,:} and the jth column by A_{:,j}. The ith entry of a vector v is denoted by v_i.

The Frobenius norm (square root of the sum of squared entries) of a matrix or vector X is denoted by ‖X‖. The identity matrix is denoted by I and the matrix whose entries are all zeroes is denoted by 0; the sizes of I and 0 are determined by the context. The inner product of a pair of matrices X and Y is denoted by X ∙ Y = ∑_{i,j} X_{i,j} Y_{i,j}.

We are now ready to prove the reduction from coresets to the ℓ2 Item Frequency Approximation problem. Theorem 1 then follows by applying Theorem 2 to the right-hand side of the following claim.

###### Claim 1.

Let A in R^{n×d} be a matrix of rank d, and let A = UΣVᵀ denote its full SVD. Let k < d be a positive integer, and for every i in [n] let

$$v_i = \Bigl( U_{i,1:k}, \;\; \frac{U_{i,k+1:d}\,\Sigma_{k+1:d,\,k+1:d}}{\|\Sigma_{k+1:d,\,k+1:d}\|}, \;\; 1 \Bigr). \qquad (2)$$

Let W in R^{n×n} be a diagonal matrix. Then for every X such that ‖AX‖ > 0 we have

$$\Bigl| 1 - \frac{\|WAX\|^2}{\|AX\|^2} \Bigr| \;\le\; 5 \cdot \Bigl\| \sum_{i=1}^n \bigl(1 - W_{i,i}^2\bigr)\, v_i^\top v_i \Bigr\|.$$
###### Proof.

For every i in [n] let t_i = 1 − W_{i,i}², and let ε = ‖∑_i t_i v_iᵀ v_i‖. Fix X such that ‖AX‖ > 0. Without loss of generality we assume ‖AX‖ = 1; otherwise we replace X by X/‖AX‖. It thus suffices to prove that

$$\Bigl| \sum_i t_i \|A_{i,:}X\|^2 \Bigr| \;\le\; 5\varepsilon \|AX\|^2. \qquad (3)$$

Using the triangle inequality,

$$\Bigl| \sum_i t_i \|A_{i,:}X\|^2 \Bigr| \;\le\; \Bigl| \sum_i t_i \|A_{i,:}X\|^2 - \sum_i t_i \|(A_{i,1:k},0)X\|^2 \Bigr| \qquad (4)$$
$$+\; \Bigl| \sum_i t_i \|(A_{i,1:k},0)X\|^2 \Bigr|. \qquad (5)$$

We now bound the last two expressions.

Bound on (5): It was proven in [6] that for every pair of subspaces S and T in R^d there is a number u ≥ 0 and a subspace Z such that the distance from every point of S to T equals its distance to Z multiplied by u. By letting S denote the k-subspace spanned by the first k standard vectors of R^d, letting T denote the subspace orthogonal to each column of X, and letting y be a unit vector orthogonal to Z, we obtain that for every row vector p in R^k,

$$\|(p,0)X\|^2 = u^2 (p\,y)^2. \qquad (6)$$

After defining x = Σ_{1:k,1:k} y / ‖Σ_{1:k,1:k} y‖, (5) is bounded by

$$\sum_i t_i \|(A_{i,1:k},0)X\|^2 = \sum_i t_i\, u^2 (A_{i,1:k}\,y)^2 = u^2 \sum_i t_i (U_{i,1:k}\,\Sigma_{1:k,1:k}\,y)^2 = u^2 \|\Sigma_{1:k,1:k}\,y\|^2 \sum_i t_i (U_{i,1:k}\,x)^2. \qquad (7)$$

The factor u²‖Σ_{1:k,1:k} y‖² in (7) is bounded by substituting p = Σ_{j,1:k} in (6) for every j in [k], as

$$u^2 \|\Sigma_{1:k,1:k}\,y\|^2 = \sum_{j=1}^k u^2 (\Sigma_{j,1:k}\,y)^2 = \sum_{j=1}^k \|(\Sigma_{j,1:k},0)X\|^2 = \sum_{j=1}^k \sigma_j^2 \|X_{j,:}\|^2 \le \sum_{j=1}^d \sigma_j^2 \|X_{j,:}\|^2 = \|\Sigma X\|^2 = \|U\Sigma X\|^2 = \|AX\|^2. \qquad (8)$$

The right hand side of (7) is bounded by

$$\Bigl| \sum_i t_i (U_{i,1:k}\,x)^2 \Bigr| = \Bigl| \sum_i t_i (U_{i,1:k})^\top U_{i,1:k} \bullet x x^\top \Bigr| = \Bigl| x x^\top \bullet \sum_i t_i (U_{i,1:k})^\top U_{i,1:k} \Bigr| \le \|x x^\top\| \cdot \Bigl\| \sum_i t_i (U_{i,1:k})^\top U_{i,1:k} \Bigr\| \qquad (9)$$
$$\le \Bigl\| \sum_i t_i (v_{i,1:k})^\top v_{i,1:k} \Bigr\| \le \Bigl\| \sum_i t_i v_i^\top v_i \Bigr\| = \varepsilon, \qquad (10)$$

where (9) is by the Cauchy-Schwarz inequality and the fact that ‖xxᵀ‖ = ‖x‖² = 1, and in (10) we used the fact that v_{i,1:k} = U_{i,1:k} for every i in [n].

Plugging (8) and (10) in (7) bounds (5) as

$$\Bigl| \sum_i t_i \|(A_{i,1:k},0)X\|^2 \Bigr| \le \varepsilon \|AX\|^2. \qquad (11)$$

Bound on (4): For every we have

$$\|A_{i,:}X\|^2 - \|(A_{i,1:k},0)X\|^2 = 2\,(A_{i,1:k},0)\,XX^\top (0,A_{i,k+1:d})^\top + \|(0,A_{i,k+1:d})X\|^2$$
$$= 2\,A_{i,1:k}\,X_{1:k,:}\,(X_{k+1:d,:})^\top (A_{i,k+1:d})^\top + \|(0,A_{i,k+1:d})X\|^2$$
$$= 2\sum_{j=1}^k A_{i,j}\,X_{j,:}\,(X_{k+1:d,:})^\top (A_{i,k+1:d})^\top + \|(0,A_{i,k+1:d})X\|^2$$
$$= \sum_{j=1}^k 2\sigma_j X_{j,:}(X_{k+1:d,:})^\top \cdot \|\sigma_{k+1:d}\|\, v_{i,j} (v_{i,k+1:d})^\top + \|\sigma_{k+1:d}\|^2 \|(0,v_{i,k+1:d})X\|^2.$$

Summing this over i in [n] with multiplicative weights t_i and using the triangle inequality bounds (4) by

$$\Bigl| \sum_i t_i \|A_{i,:}X\|^2 - \sum_i t_i \|(A_{i,1:k},0)X\|^2 \Bigr| \le \Bigl| \sum_i t_i \sum_{j=1}^k 2\sigma_j X_{j,:}(X_{k+1:d,:})^\top \cdot \|\sigma_{k+1:d}\|\, v_{i,j}(v_{i,k+1:d})^\top \Bigr| \qquad (13)$$
$$+\; \Bigl| \sum_i t_i \|\sigma_{k+1:d}\|^2 \|(0,v_{i,k+1:d})X\|^2 \Bigr|. \qquad (14)$$

Expression (13) is bounded by

$$\Bigl| \sum_{j=1}^k 2\sigma_j X_{j,:}(X_{k+1:d,:})^\top \cdot \|\sigma_{k+1:d}\| \sum_i t_i v_{i,j}(v_{i,k+1:d})^\top \Bigr| \le \sum_{j=1}^k 2\sigma_j \|X_{j,:}(X_{k+1:d,:})^\top\| \cdot \|\sigma_{k+1:d}\| \Bigl\| \sum_i t_i v_{i,j}\,v_{i,k+1:d} \Bigr\| \qquad (15)$$
$$\le \sum_{j=1}^k \Bigl( \varepsilon\,\sigma_j^2 \|X_{j,:}\|^2 + \frac{\|\sigma_{k+1:d}\|^2}{\varepsilon} \Bigl\| \sum_i t_i v_{i,j}\,v_{i,k+1:d} \Bigr\|^2 \Bigr) \qquad (16)$$
$$\le 2\varepsilon \|AX\|^2, \qquad (17)$$

where (15) is by the Cauchy-Schwarz inequality, and (16) is by the inequality 2ab ≤ εa² + b²/ε. In (17) we used the fact that v_{i,j}·v_{i,k+1:d} is a block in the matrix v_iᵀv_i, and that ‖σ_{k+1:d}‖² ≤ ‖AX‖².

Next, we bound (14). Let Y be a matrix such that YᵀY = I and XᵀY = 0. Hence, the columns of Y span the k-subspace that is orthogonal to each of the columns of X. By using the Pythagorean Theorem and then the triangle inequality,

$$\|\sigma_{k+1:d}\|^2 \Bigl| \sum_i t_i \|(0,v_{i,k+1:d})X\|^2 \Bigr| \qquad (19)$$
$$= \|\sigma_{k+1:d}\|^2 \Bigl| \sum_i t_i \|(0,v_{i,k+1:d})\|^2 - \sum_i t_i \|(0,v_{i,k+1:d})Y\|^2 \Bigr|$$
$$\le \|\sigma_{k+1:d}\|^2 \Bigl| \sum_i t_i \|v_{i,k+1:d}\|^2 \Bigr| \qquad (20)$$
$$+\; \|\sigma_{k+1:d}\|^2 \Bigl| \sum_i t_i \|(0,v_{i,k+1:d})Y\|^2 \Bigr|. \qquad (21)$$

For bounding (21), observe that Y corresponds to a k-subspace, and (0,v_{i,k+1:d}) is contained in the subspace spanned by the last d−k standard vectors. Using the same observations as above (6), there is a unit vector y such that ‖(0,v_{i,k+1:d})Y‖² = ‖v_{i,k+1:d}\,y‖² for every i in [n]. Summing this over i yields

$$\Bigl| \sum_i t_i \|(0,v_{i,k+1:d})Y\|^2 \Bigr| = \Bigl| \sum_i t_i \|v_{i,k+1:d}\,y\|^2 \Bigr| = \Bigl| \sum_i t_i \sum_{j=k+1}^d v_{i,j}^2\, y_{j-k}^2 \Bigr| = \Bigl| \sum_{j=k+1}^d y_{j-k}^2 \sum_i t_i v_{i,j}^2 \Bigr|.$$

Replacing (21) in (19) by the last inequality yields

$$\|\sigma_{k+1:d}\|^2 \Bigl| \sum_i t_i \|(0,v_{i,k+1:d})X\|^2 \Bigr| \le \|\sigma_{k+1:d}\|^2 \Bigl( \Bigl|\sum_i t_i v_{i,d+1}^2\Bigr| + \sum_{j=k+1}^d y_{j-k}^2 \Bigl\|\sum_i t_i v_i^\top v_i\Bigr\| \Bigr) \qquad (22)$$
$$\le \|\sigma_{k+1:d}\|^2 \Bigl( \varepsilon + \varepsilon \sum_{j=k+1}^d y_{j-k}^2 \Bigr) \le 2\varepsilon\|AX\|^2, \qquad (23)$$

where (22) follows since v_{i,d+1}² is an entry in the matrix v_iᵀv_i, and in (23) we used (18) and the fact that ‖y‖ = 1. Plugging (17) in (13) and (23) in (14) gives the desired bound on (4):

$$\Bigl| \sum_i t_i \|A_{i,:}X\|^2 - \sum_i t_i \|(A_{i,1:k},0)X\|^2 \Bigr| \le 4\varepsilon \|AX\|^2.$$

Finally, combining (11) as the bound on (5) with the last inequality as the bound on (4) proves the desired bound (3). ∎

### 3.1 Implementation.

We now give a brief overview that bridges the gap between the theoretical results and the practical implementation presented in Algorithm 1. The coreset construction described in this section uses O(d²) storage per point. The suggested implementation, however, uses O(d) space per point. To achieve this we leverage several observations: (1) we do not need to compute the center itself, only its norm, which we can update recursively from a starting value of 1; (2) some terms need to be computed only once, and their values can be stored for further iterations that find the same farthest point.

The algorithm works as follows. First we restructure the input points using their k-rank SVD decomposition into a new input matrix, and normalize it. We arbitrarily select a starting point and set its weight to 1, with all other weights set to zero. Next, we compute the farthest point from the current center by projecting the weighted points onto the current point. The update steps then compute the norm of the new center recursively, without ever computing the actual center itself (an O(d²) computation). Finally, the approximation ratio is updated based on the distances between the current point, the current center, and the new center, and the weights are updated accordingly. The algorithm runs for O(1/ε²) iterations, or until the ratio has converged to 1. The output of the algorithm is a sparse vector of weights satisfying the coreset guarantees.
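A schematic sketch of this pipeline (not the authors' exact Algorithm 1): build the points v_i from the k-SVD of A following Eq. (2), normalize them, and greedily move a sparse weighted mean toward the true mean with a Frank-Wolfe-style update. Function names, the step size, and the stopping budget are our illustrative choices.

```python
import numpy as np

def coreset_weights(A, k, iters):
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    tail = np.linalg.norm(S[k:])
    # v_i = (U_{i,1:k}, U_{i,k+1:d} Sigma_{k+1:d} / ||Sigma_{k+1:d}||, 1)
    V = np.hstack([U[:, :k], U[:, k:] * S[k:] / tail, np.ones((len(U), 1))])
    V /= np.linalg.norm(V, axis=1, keepdims=True)  # normalize the points
    target = V.mean(axis=0)
    w = np.zeros(len(V))
    w[0] = 1.0                         # arbitrary starting point, weight 1
    for t in range(iters):
        g = V @ (w @ V - target)       # projections onto the current gap
        j = int(np.argmin(g))          # the most-improving (farthest) point
        gamma = 2.0 / (t + 2)
        w = (1 - gamma) * w
        w[j] += gamma                  # sparse update; sum of weights stays 1
    return w

rng = np.random.default_rng(5)
A = rng.standard_normal((300, 40))
w = coreset_weights(A, k=5, iters=150)
print(np.count_nonzero(w), w.sum())    # sparse, non-negative, sums to 1
```

Note that this sketch runs on the vectors v_i directly rather than on the d×d outer products v_iᵀv_i, mirroring the O(d)-per-point trick described above.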

## 4 Conclusion

We present a new approach for dimensionality reduction using coresets. The key feature of our algorithm is that it computes coresets that are small in size and subsets of the original data. Using synthetic data as ground truth we show that our algorithm provides a good approximation.

## References

• [1] Joshua Batson, Daniel A Spielman, and Nikhil Srivastava. Twice-ramanujan sparsifiers. SIAM Journal on Computing, 41(6):1704–1721, 2012.
• [2] Kenneth L Clarkson. Coresets, sparse greedy approximation, and the frank-wolfe algorithm. ACM Transactions on Algorithms (TALG), 6(4):63, 2010.
• [3] Kenneth L Clarkson and David P Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the forty-fifth annual ACM symposium on Theory of computing, pages 81–90. ACM, 2013.
• [4] D. Feldman and M. Langberg. A unified framework for approximating and clustering data. In Proc. 41st Ann. ACM Symp. on Theory of Computing (STOC), 2010. Manuscript available at arXiv.org.
• [5] D. Feldman, M. Schmidt, and C Sohler. Turning big data into tiny data: Constant-size coresets for k-means, pca and projective clustering. Proceedings of ACM-SIAM Symposium on Discrete Algorithms (SODA), 2013.
• [6] Dan Feldman. Phd thesis. Technical report, PhD thesis, Tel-Aviv University, 2010.
• [7] Mina Ghashami, Edo Liberty, Jeff M Phillips, and David P Woodruff. Frequent directions: Simple and deterministic matrix sketching. arXiv preprint arXiv:1501.01711, 2015.
• [8] Nathan P Halko. Randomized methods for computing low-rank approximations of matrices. PhD thesis, University of Colorado, 2012.
• [9] M. Langberg and L. J. Schulman. Universal approximators for integrals. Proceedings of ACM-SIAM Symposium on Discrete Algorithms (SODA), 2010.
• [10] Edo Liberty. Simple and deterministic matrix sketching. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 581–588. ACM, 2013.
• [11] Radim Řehůřek, Petr Sojka, et al. Gensim: Statistical semantics in Python. 2011.
• [12] Konstantin Shvachko, Hairong Kuang, Sanjay Radia, and Robert Chansler. The hadoop distributed file system. In Mass Storage Systems and Technologies (MSST), 2010 IEEE 26th Symposium on, pages 1–10. IEEE, 2010.
• [13] Kasturi Varadarajan and Xin Xiao. On the sensitivity of shape fitting problems. arXiv preprint arXiv:1209.4893, 2012.