# On Convolutional Approximations to Linear Dimensionality Reduction Operators for Large-Scale Data Processing

Swayambhoo Jain  and Jarvis Haupt  The authors are with the Department of Electrical and Computer Engineering at the University of Minnesota – Twin Cities. Email: {jainx174, jdhaupt}@umn.edu. Manuscript submitted February 24, 2015. This work was supported in part by DARPA/ONR Grant No. N66001-11-1-4090 and the DARPA Young Faculty Award, Grant No. N66001-14-1-4047.
###### Abstract

In this paper, we examine the problem of approximating a general linear dimensionality reduction (LDR) operator, represented as a matrix $A \in \mathbb{R}^{m \times n}$ with $m < n$, by a partial circulant matrix, a matrix whose rows are related by circular shifts. Partial circulant matrices admit fast implementations via Fourier transform methods and subsampling operations; our investigation here is motivated by a desire to leverage these potential computational improvements in large-scale data processing tasks. We establish a fundamental result: most large LDR matrices (those whose row spaces are uniformly distributed) cannot, in fact, be well-approximated by partial circulant matrices. We then propose a natural generalization of the partial circulant approximation framework that entails approximating the range space of a given LDR operator over a restricted domain of inputs, using a matrix formed as the product of a partial circulant matrix having $k \geq m$ rows and an $m \times k$ “post-processing” matrix. We introduce a novel algorithmic technique, based on sparse matrix factorization, for identifying the factors comprising such approximations, and provide preliminary evidence to demonstrate the potential of this approach.

Keywords: Subspace learning, circulant matrices, matrix factorization, sparse regularization, big data

## I Introduction

Numerous tasks in signal processing, statistics, and machine learning employ dimensionality reduction methods to facilitate the processing, visualization, and analysis of (ostensibly) high-dimensional data in (more tractable) low-dimensional spaces. Among the myriad dimensionality reduction techniques in the literature, linear dimensionality reduction (LDR) methods remain some of the most popular and widely used.

One well-known example is principal component analysis (PCA) [1], characterized by an LDR operator designed to maximally preserve the variance of the original data in the projected space; PCA has been widely used for data compression, denoising, and as a dimensionality-reducing pre-processing step for other analyses (e.g., clustering, classification, etc.) [2]. Other classical data analysis methods that employ specialized LDR operators include linear discriminant analysis (where the operator is designed to preserve separations among original data points belonging to disparate classes) and canonical correlation analysis (where the operator maximizes correlations among projected data points). See the survey paper [3] for additional examples.

In recent years, LDR methods have also been utilized for universal “precompression” in high-dimensional inference tasks. For example, fully random LDR operators were central to the initial investigations into compressed sensing (CS) (see, e.g., [4, 5, 6, 7]), and other, more structured LDR operators, both non-adaptive (see, e.g., [8, 9, 10, 11, 12, 13, 14, 15, 16, 17]) and adaptive (see, e.g., [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]), have also been examined in the context of CS and sparse inference.

The computational efficiency of LDR methods is often cited as one of their primary virtues; however, as data of interest become increasingly large-scale (high-dimensional, and numerous), even the relatively low computational complexity associated with LDR methods can become significant. In this paper, we investigate the utility of employing partial circulant approximations to general LDR operators; partial circulant matrices admit fast implementations (via convolution or Fourier transform methods, and subsampling), and their use as surrogates to arbitrary LDR matrices may provide potentially significant computational efficiency improvements in practice.

### I-A Problem Statement and Our Contributions

We represent an arbitrary (real) linear dimensionality reduction (LDR) operator as a matrix $A \in \mathbb{R}^{m \times n}$, with $m < n$. Operating with $A$ on an arbitrary vector $x \in \mathbb{R}^n$ generally requires $O(mn)$ operations, which can be superlinear in $n$ even for modest values of $m$ (e.g., when $m = O(\sqrt{n})$ the complexity is $O(n^{3/2})$). Here, we seek computationally efficient approximations of $A$ implementable via convolution and downsampling (and, perhaps, modest post-processing).

We first consider approximating $A$ as $SC$, where $C$ is an $n \times n$ circulant matrix and $S$ is an $m \times n$ matrix whose rows are a (permuted) subset of the rows of $I_n$ (the $n \times n$ identity). These partial circulant approximations enjoy an implementation complexity of $O(n \log n)$, owing to fast Fourier transform methods. Despite these potential implementation complexity benefits, our first contribution establishes that most LDR matrices, in fact, cannot be well-approximated by partial circulant matrices.

We then propose a generalization that uses approximations of the form $PSC$, where $C$ and $S$ are as above, except that $S$ now has $k \geq m$ rows and $P$ is an $m \times k$ “post-processing” matrix. Operating with such matrices requires $O(n \log n + km)$ operations in general, which can be as low as $O(n \log n)$, e.g., when $km = O(n \log n)$. Within this framework, we also exploit the fact that signals of interest often reside on a restricted input domain (e.g., they lie on a union of subspaces, a manifold, etc.), so we may restrict our approximation to mimicking the action of $A$ on these inputs. We describe a data-driven approach to learning the factors of the approximating matrix in these settings, and provide empirical evidence to demonstrate the viability of this approximation approach.

### I-B Connections to Existing Work

Circulant approximations to square matrices are classical in linear algebra; for example, circulant preconditioners for linear systems were examined in [33, 34, 35]. In a line of work motivated by “optical information processing,” several efforts have examined fundamental aspects of approximating square matrices by products of circulant and diagonal matrices [36, 37, 38, 39, 40]. Our focus here is on LDR matrices (not square matrices), so results from these works are not directly applicable.

Random partial circulant matrices have been studied recently in compressed sensing tasks [41, 42, 43, 44, 45], and random partial circulant matrices with diagonal pre-processors have been proposed as computationally efficient methods for Johnson-Lindenstrauss (JL) embeddings in [46, 47]. The work [48] established the viability of using random partial circulant matrices for embedding manifold-structured data. In contrast to these works, our aim here is to approximate the action of a given LDR matrix, not necessarily to perform JL embeddings.

### I-C Outline

Following the introduction of a few preliminaries (below), we establish in Sect. II a fundamental approximation result regarding partial circulant approximations to general LDR matrices. In Sect. III we propose a generalized approach to the partial circulant approximation problem, and provide empirical evidence to demonstrate its efficacy. A brief discussion is provided in Sect. IV; proofs appear in the Appendix.

### I-D Preliminaries

We introduce some preliminary concepts and notation that will be used throughout the sequel. First, we define

$$R = \begin{bmatrix} 0_{(n-1)\times 1} & I_{n-1} \\ 1 & 0_{1 \times (n-1)} \end{bmatrix},$$

to be the “right rotation” matrix, in that post-multiplication of a row vector by $R$ implements a circular shift to the right by one position. Analogously, post-multiplying a row vector by $R^T$ implements a circular shift to the left by one position. To simplify notation we write $L \triangleq R^T$; note that $L = R^{-1}$, since $R$ is a permutation matrix.
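For example, when $n = 3$ (a direct instance of the definition above),

$$R = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}, \qquad \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix} R = \begin{bmatrix} v_3 & v_1 & v_2 \end{bmatrix}, \qquad \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix} L = \begin{bmatrix} v_2 & v_3 & v_1 \end{bmatrix}.$$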

We represent an $n \times n$ (real) circulant matrix by

$$C = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \\ c_n & c_1 & \cdots & c_{n-1} \\ & & \ddots & \\ c_2 & c_3 & \cdots & c_1 \end{bmatrix} = \begin{bmatrix} c^T \\ c^T R \\ \vdots \\ c^T R^{n-1} \end{bmatrix}, \tag{1}$$

where $c^T = [c_1 \ c_2 \ \cdots \ c_n]$. We let $\mathcal{C}_n$ denote the set of all (real) $n$-dimensional circulant matrices of the form (1). For $m \leq n$, the set of all real $m \times n$ partial circulant matrices is

$$\mathcal{PC}_{m,n} = \left\{ SC \in \mathbb{R}^{m \times n} \ \middle|\ S \in \mathcal{S}_m,\ C \in \mathcal{C}_n \right\},$$

where $\mathcal{S}_m$ denotes the set of $m \times n$ row sampling matrices whose rows comprise $m$ different canonical basis vectors of $\mathbb{R}^n$.
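Part of the appeal of $\mathcal{PC}_{m,n}$ is computational: by the correlation theorem, $Cx$ is a circular cross-correlation computable with FFTs, and $SCx$ then follows by subsampling. The following is a minimal numpy sketch of this $O(n \log n)$ implementation; the helper name and the 0-indexed shift convention are ours, chosen to match (1).

```python
import numpy as np

def partial_circulant_apply(c, rows, x):
    """Compute (SC)x in O(n log n) time, with C as in Eq. (1): row i of C is
    c circularly right-shifted i times, and S keeps the indices in `rows`."""
    # Cx is the circular cross-correlation of c with x, so by the
    # correlation theorem it equals ifft(conj(fft(c)) * fft(x)).
    full = np.fft.ifft(np.conj(np.fft.fft(c)) * np.fft.fft(x)).real
    return full[rows]

# Sanity check against an explicit dense construction of Eq. (1).
rng = np.random.default_rng(0)
n, m = 8, 3
c, x = rng.standard_normal(n), rng.standard_normal(n)
C = np.stack([np.roll(c, i) for i in range(n)])   # row i = c shifted right i times
rows = rng.choice(n, size=m, replace=False)       # one choice of S
assert np.allclose(C[rows] @ x, partial_circulant_apply(c, rows, x))
```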

For a matrix $A \in \mathbb{R}^{m \times n}$, we denote its individual rows by $A_{i,:}$ for $i = 1, \dots, m$ and its columns by $A_{:,j}$ for $j = 1, \dots, n$. We write the squared Frobenius norm of $A$ as $\|A\|_F^2 = \sum_{i=1}^m \|A_{i,:}\|_2^2$, where $\|\cdot\|_2$ denotes the Euclidean norm of its vector argument.

## II A Fundamental (Negative) Approximation Result for Partial Circulant Matrices

As above, we let $A \in \mathbb{R}^{m \times n}$ denote the LDR matrix we aim to approximate using a partial circulant matrix of the same dimensions. Here, we consider a tractable (and somewhat natural) choice of the approximation error metric, and seek to find the matrix in $\mathcal{PC}_{m,n}$ closest to $A$ in the Frobenius sense. In this setting, the minimum approximation error is given by

$$E_{\mathcal{PC}_{m,n}}(A) = \min_{Z \in \mathcal{PC}_{m,n}} \|A - Z\|_F^2. \tag{2}$$

Finding the minimizer of (2) is a non-convex optimization problem, owing to the product nature of the elements of $\mathcal{PC}_{m,n}$. Nevertheless, we obtain a precise characterization of the minimum achievable approximation error for a given $A$ via the following lemma (whose proof appears in the Appendix).

###### Lemma II.1

For $A \in \mathbb{R}^{m \times n}$, we have

$$E_{\mathcal{PC}_{m,n}}(A) = \|A\|_F^2 - \mathcal{R}^2(A), \tag{3}$$

where

$$\mathcal{R}(A) = \max_{f \in \mathcal{F}} \frac{\left\| \sum_{i=1}^m A_{i,:} L^{f_i} \right\|_2}{\sqrt{m}}, \tag{4}$$

and $\mathcal{F} = \left\{ f = (f_1, \dots, f_m) \in \{0, 1, \dots, n-1\}^m : f_i \neq f_j \text{ for } i \neq j \right\}$.

We call the term $\mathcal{R}(A)$ in (4) the Rubik’s Score of the matrix $A$, inspired by the fact that it is maximized when the circular shifts of the rows of $A$ are “maximally aligned” (indeed, the numerator of (4) is the norm of a sum of rotated rows of $A$).
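For intuition, the Rubik’s Score of a small matrix can be evaluated by brute force, enumerating the $n!/(n-m)!$ shift assignments in $\mathcal{F}$. A short numpy sketch (our own illustration; practical only for tiny $m$ and $n$):

```python
import numpy as np
from itertools import permutations

def rubiks_score(A):
    """Brute-force evaluation of the Rubik's Score (4) for a small matrix A.
    Here A[i] @ L^f is A[i] circularly shifted left by f positions."""
    m, n = A.shape
    shifts = [[np.roll(A[i], -f) for f in range(n)] for i in range(m)]
    return max(np.linalg.norm(sum(shifts[i][fi] for i, fi in enumerate(f)))
               for f in permutations(range(n), m)) / np.sqrt(m)

# By Lemma II.1, the optimal approximation error (2) is then
#   E = np.linalg.norm(A, 'fro')**2 - rubiks_score(A)**2.
```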

Evidently, $0 \leq E_{\mathcal{PC}_{m,n}}(A) \leq \|A\|_F^2$, with $E_{\mathcal{PC}_{m,n}}(A) = 0$ for all $A \in \mathcal{PC}_{m,n}$, but beyond these “boundary” cases, Lemma II.1 yields limited interpretational insight into the achievable error for any particular $A$. We gain additional insight here using a probabilistic technique: instead of quantifying the approximation error for a fixed $A$, we consider matrices whose row spaces are distributed uniformly at random on $\mathfrak{G}_{n,m}$, the Grassmannian manifold of $m$-dimensional linear subspaces of $\mathbb{R}^n$. Using this model, we can quantify the proportion of matrices so drawn whose optimal partial circulant approximation error is at most a fixed fraction (say $\delta \in (0,1)$) of their squared Frobenius norm.

To this end, we exploit the fact that matrices whose row spaces are uniformly distributed on $\mathfrak{G}_{n,m}$ may be modeled as matrices having iid zero-mean Gaussian elements. With this, we establish the following theorem (proved in the Appendix).

###### Theorem II.1

For $m < n$, let $A \in \mathbb{R}^{m \times n}$ have iid $\mathcal{N}(0,1)$ entries. Then for any fixed $\delta \in (0, 1/8)$ and $n$ sufficiently large, there exists a positive constant $c(\delta)$ such that

$$\Pr\left( E_{\mathcal{PC}_{m,n}}(A) \leq \delta \|A\|_F^2 \right) = O\left( e^{-c(\delta) \cdot mn} \right).$$

Simply put, the content of Theorem II.1 is that the proportion of large matrices (with uniformly distributed row spaces) that can be approximated to high accuracy by partial circulant matrices is exponentially small in the product of the matrix dimensions. In the next section we propose a more general framework designed to facilitate accurate partial circulant approximations in a number of practical applications.

## III A Generalized “Data-Driven” Partial Circulant Approximation Approach

That the conclusion of Theorem II.1 is pessimistic is, perhaps, intuitive, since $A$ has $mn$ “degrees of freedom,” while any element of $\mathcal{PC}_{m,n}$ is specified by only $n$ parameters (along with the discrete choice of sampled rows). These structural limitations are, of course, fundamental. We gain additional leverage here by adopting a more general approximation model, and by exploiting additional latent “signal space characteristics” that arise in many practical applications.

Formally, we consider approximations of $A$ of the form $PSC$, where $C \in \mathcal{C}_n$, $S$ is a $k \times n$ matrix comprising a permuted subset of $k \geq m$ rows of the $n \times n$ identity, and $P$ is an $m \times k$ “post-processing” matrix. This extension allows approximations of $A$ whose rows are general linear combinations of the rows of a partial circulant matrix, rather than rows of a partial circulant matrix themselves, facilitating accurate approximation of any matrix whose row space is spanned by vectors related by circular shifts.
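Applying such an approximation retains the FFT speedup; e.g., continuing the sketch from Sect. I-D (with hypothetical factors `P`, `c`, and sampled row indices `rows`):

```python
# y = P(S(Cx)): O(n log n) for the convolution plus O(km) for the
# post-processing, in place of the O(mn) dense product Ax.
y = P @ partial_circulant_apply(c, rows, x)
```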

Further, we exploit the fact that in many applications where LDR methods are deployed (e.g., PCA, LDA, compressive sensing, etc.), the high-dimensional data to be processed are not arbitrary, but instead lie in some restricted input domain (e.g., a low-dimensional subspace, a union of low-dimensional subspaces, distinct clusters, etc.). In these cases, our approximation task reduces to the simpler task of mimicking the action of $A$ on these restricted inputs.

Suppose $X \in \mathbb{R}^{n \times p}$ is a matrix whose columns are “representative” of the restricted input domain for the problem of interest. Then, minimizing an objective of the form $\|AX - PSCX\|_F^2$ essentially ensures that the approximation is accurate in an (empirical) average sense. We propose an algorithmic approach based on alternating minimization and regularized matrix factorization for identifying the factors comprising this more general approximation; the method is summarized as Algorithm 1. Note that in our approach, we effectively combine the action of the sampling and post-processing matrices: we let $\widetilde{P} = PS \in \mathbb{R}^{m \times n}$, and seek column sparsity in $\widetilde{P}$ using the regularization term $\sum_{j=1}^n \|\widetilde{P}_{:,j}\|_2$, which penalizes the number of nonzero columns of $\widetilde{P}$ (the resulting number of nonzero columns depends on the regularization parameter $\lambda$). We also include a (simple) regularization term for $C$ to fix scaling ambiguities; leaving $C$ unconstrained would allow the elements of $\widetilde{P}$ to become arbitrarily small (with a compensating growth in $C$), which would circumvent the effect of the column-sparsity regularization. Both of the resulting sub-problems are convex, and can be solved using off-the-shelf software (e.g., SLEP [49]).
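The following numpy sketch illustrates this alternating scheme. It is our own rendering rather than the authors' exact Algorithm 1: a proximal-gradient inner loop stands in for an off-the-shelf solver such as SLEP, and the regularization weights `lam` and `mu` are hypothetical parameter names.

```python
import numpy as np

def circ_apply(c, X):
    # C(c) @ X, where row i of C(c) is c circularly right-shifted i times
    # (the convention of Eq. (1)); computed via the FFT correlation identity.
    return np.fft.ifft(np.conj(np.fft.fft(c))[:, None] * np.fft.fft(X, axis=0),
                       axis=0).real

def fit_psc(A, X, lam, mu, outer=20, inner=100):
    """Alternating minimization for
        min_{Ptil, c}  0.5*||A X - Ptil C(c) X||_F^2
                       + lam * sum_j ||Ptil[:, j]||_2 + 0.5*mu*||c||_2^2,
    where the column sparsity of Ptil identifies the rows retained by S."""
    m, n = A.shape
    c = np.random.default_rng(0).standard_normal(n) / np.sqrt(n)
    Ptil = np.zeros((m, n))
    T = A @ X                                    # target AX, fixed throughout
    for _ in range(outer):
        # Ptil-step: group lasso over columns, via proximal gradient descent.
        Z = circ_apply(c, X)                     # C(c) X, shape (n, p)
        step = 1.0 / (np.linalg.norm(Z, 2) ** 2 + 1e-12)
        for _ in range(inner):
            W = Ptil - step * (Ptil @ Z - T) @ Z.T   # gradient step
            col = np.linalg.norm(W, axis=0, keepdims=True)
            Ptil = W * np.maximum(0.0, 1.0 - step * lam / (col + 1e-12))
        # c-step: ridge regression, since Ptil C(c) X is linear in c.
        D = np.stack([(Ptil @ circ_apply(e, X)).ravel() for e in np.eye(n)],
                     axis=1)                     # (m*p, n) design matrix
        c = np.linalg.solve(D.T @ D + mu * np.eye(n), D.T @ T.ravel())
    return Ptil, c
```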

To evaluate the performance of this approach, we chose $X$ to be a matrix of training data whose columns are vectorized images of a single digit class from the USPS handwritten digit dataset (available at http://www.cs.nyu.edu/~roweis/data.html), and held out the remaining images of that digit to form a “test” data set. We selected the rows of the matrix $A$ to be a subset of the top principal component vectors of the training set, and we considered two settings, corresponding to $A$ having different numbers of rows. Then, we applied the procedure of Alg. 1 with varying values of $\lambda$ to obtain different factorizations $\widetilde{P}$ and $C$. For each, we quantified the normalized error $\|AX - \widetilde{P}CX\|_F^2 / \|AX\|_F^2$ on the training set (on a logarithmic scale) vs. the column sparsity of $\widetilde{P}$. For a convenient choice of $\lambda$, we also computed histograms of the normalized approximation errors $\|Ax - \widetilde{P}Cx\|_2^2 / \|Ax\|_2^2$ for each point $x$ in the testing and training data sets. The results are provided in Fig. 1, and illustrate that accurate approximations can indeed be obtained for reasonable column sparsities in each case, and that the approximations generalize well to the test data, in that the approximation error histogram is concentrated around small values in each setting.
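A toy end-to-end run mirroring this protocol, with synthetic low-rank data standing in for the USPS digits and using the hypothetical `fit_psc` and `circ_apply` sketched above:

```python
rng = np.random.default_rng(2)
n, m, p = 64, 8, 200
X = rng.standard_normal((n, 4)) @ rng.standard_normal((4, p))  # low-rank inputs
A = np.linalg.svd(X, full_matrices=False)[0][:, :m].T          # top principal dirs.
for lam in (0.1, 1.0, 10.0):
    Ptil, c = fit_psc(A, X, lam=lam, mu=1e-3)
    k = int((np.linalg.norm(Ptil, axis=0) > 1e-8).sum())       # column sparsity
    err = (np.linalg.norm(A @ X - Ptil @ circ_apply(c, X)) /
           np.linalg.norm(A @ X)) ** 2                         # normalized error
    print(k, err)
```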

## IV Discussion and Conclusions

In this work we established a fundamental result regarding partial circulant approximations of general matrices, and proposed and demonstrated a general approach for approximating the action of general linear dimensionality reduction (LDR) operators over restricted input domains. The convolutional nature of the approximations we consider here, in addition to their computational expediency, makes them viable for implementation using LTI systems, or for specialized applications (e.g., RADAR) where convolutional models arise naturally.

A more general investigation motivated by computational complexity considerations would also include other specialized matrix structures with low implementation complexities (e.g., sparse matrices or “fast” or sparse Johnson-Lindenstrauss embeddings [50, 51]). A thorough, unifying analysis of such “low-complexity” approximations is the topic of our ongoing work, and will be reported in a future publication.

### -A Proof of Lemma II.1

We first write $E_{\mathcal{PC}_{m,n}}(A)$ in (2) equivalently as

$$E_{\mathcal{PC}_{m,n}}(A) = \min_{S \in \mathcal{S}_m} \min_{C \in \mathcal{C}_n} \|A - SC\|_F^2. \tag{5}$$

Now, a key insight is that each choice of $S$ corresponds to a vector $f$ from the set $\mathcal{F}$ defined in Lemma II.1, as the $i$-th row of $SC$ is some row of $C$, which is given from (1) by $c^T R^{f_i}$ for some $f_i \in \{0, 1, \dots, n-1\}$ (with the $f_i$ distinct). Using this we may parameterize the choice of $S$ in terms of $f$, and rewrite the objective function in (5) as

$$\|A - SC\|_F^2 = \sum_{i=1}^m \left\| A_{i,:} - c^T R^{f_i} \right\|_2^2 = \sum_{i=1}^m \left( \|A_{i,:}\|_2^2 + c^T R^{f_i} L^{f_i} c - 2 A_{i,:} L^{f_i} c \right).$$

Thus, since $R^{f_i} L^{f_i} = I_n$, we have

$$E_{\mathcal{PC}_{m,n}}(A) = \min_{f \in \mathcal{F}} \min_{c \in \mathbb{R}^n} \ \|A\|_F^2 + m\, c^T c - 2 \left( \sum_{i=1}^m A_{i,:} L^{f_i} \right) c. \tag{6}$$

We first minimize the objective with respect to $c$, keeping $f$ fixed at an arbitrary value in $\mathcal{F}$. This is an unconstrained, strictly convex quadratic problem whose minimizer can be obtained by equating the gradient (with respect to $c$) to zero; the optimal choice is $c^* = \frac{1}{m} \left( \sum_{i=1}^m A_{i,:} L^{f_i} \right)^T$. Substituting $c^*$ into (6), and simplifying, we obtain

$$E_{\mathcal{PC}_{m,n}}(A) = \min_{f \in \mathcal{F}} \ \|A\|_F^2 - \frac{1}{m} \left\| \sum_{i=1}^m A_{i,:} L^{f_i} \right\|_2^2,$$

which gives (3), using the definition (4) of the Rubik’s Score.

### -B Proof of Theorem II.1

Note that $|\mathcal{F}| = n!/(n-m)! \leq n^m$ (since $|\mathcal{F}|$ is just the number of ways of choosing $m$ elements out of $n$ without replacement), and we have (trivially, from (3)) that $E_{\mathcal{PC}_{m,n}}(A) \leq \delta \|A\|_F^2$ exactly when $(1-\delta)\|A\|_F^2 \leq \mathcal{R}^2(A)$, so

$$\begin{aligned} \Pr\left( E_{\mathcal{PC}_{m,n}}(A) \leq \delta \|A\|_F^2 \right) &= \Pr\left( (1-\delta) \|A\|_F^2 \leq \sup_{f \in \mathcal{F}} \frac{\left\| \sum_{i=1}^m A_{i,:} L^{f_i} \right\|_2^2}{m} \right) \\ &= \Pr\left( \bigcup_{f \in \mathcal{F}} \left[ (1-\delta) \|A\|_F^2 \leq \frac{\left\| \sum_{i=1}^m A_{i,:} L^{f_i} \right\|_2^2}{m} \right] \right) \\ &\leq n^m \, \Pr\left( (1-\delta) \|A\|_F^2 \leq \frac{\left\| \sum_{i=1}^m A_{i,:} L^{f_i} \right\|_2^2}{m} \right), \end{aligned}$$

where the last step follows from the union bound.

Now, we can simplify further by introducing the shorthand notation $a = [A_{1,:} \ A_{2,:} \ \cdots \ A_{m,:}]^T \in \mathbb{R}^{mn}$ (the row-wise vectorization of $A$) and $L_f = [\,(L^{f_1})^T \ (L^{f_2})^T \ \cdots \ (L^{f_m})^T\,]^T \in \mathbb{R}^{mn \times n}$, so that $\|A\|_F^2 = a^T a$ and $\left\| \sum_{i=1}^m A_{i,:} L^{f_i} \right\|_2^2 = a^T L_f L_f^T a$. It is easy to check that $L_f^T L_f = m I_n$, which implies that the columns of $m^{-1/2} L_f$ are eigenvectors of the matrix $\frac{1}{m} L_f L_f^T$ corresponding to eigenvalue $1$. Further, since $\frac{1}{m} L_f L_f^T$ is symmetric, it admits an eigendecomposition $\frac{1}{m} L_f L_f^T = U_f \Sigma_f U_f^T$ with $U_f$ orthonormal and $\Sigma_f$ diagonal, and since it is rank $n$, $\Sigma_f$ has exactly $n$ diagonal entries equal to $1$ (and the rest $0$).

Incorporating this insight into the probabilistic analysis above, we obtain that

$$\begin{aligned} \Pr\left( (1-\delta) \|A\|_F^2 \leq \frac{\left\| \sum_{i=1}^m A_{i,:} L^{f_i} \right\|_2^2}{m} \right) &= \Pr\left( a^T U_f \left( (1-\delta) I_{mn} - \Sigma_f \right) U_f^T a \leq 0 \right) \\ &= \Pr\left( \tilde{a}^T \left( (1-\delta) I_{mn} - \Sigma_f \right) \tilde{a} \leq 0 \right), \end{aligned}$$

where the components of $\tilde{a} \triangleq U_f^T a$ are iid $\mathcal{N}(0,1)$, due to the unitary invariance of the Gaussian distribution. Thus, with a slight overloading of notation, we may write that

$$\Pr\left( E_{\mathcal{PC}_{m,n}}(A) \leq \delta \|A\|_F^2 \right) \leq n^m \, \Pr\left( \sum_{i=2}^m \|A_{i,:}\|_2^2 \leq \delta \sum_{i=1}^m \|A_{i,:}\|_2^2 \right). \tag{7}$$

At this point, we note that vectorizing (row-wise) a random matrix having iid zero-mean Gaussian elements yields an $mn$-dimensional vector whose direction is selected uniformly at random from the unit sphere in $\mathbb{R}^{mn}$. Thus, $\sum_{i=1}^m \|A_{i,:}\|_2^2$ quantifies the squared length of the vector, while $\sum_{i=2}^m \|A_{i,:}\|_2^2$ describes the energy retained after projecting the vector onto the fixed $(m-1)n$-dimensional subspace spanned by the last $(m-1)n$ coordinates. It follows that the probability on the right-hand side of (7) may be interpreted in terms of the energy retained after projecting a fixed $mn$-dimensional unit-norm vector onto a subspace selected uniformly at random from $\mathfrak{G}_{mn,(m-1)n}$. To quantify this, we use the following.

###### Lemma .1 (Adapted from Thm. 2.14 of [52])

Let $v$ be a fixed unit-length vector in $\mathbb{R}^d$, let $W$ be a randomly oriented $k$-dimensional subspace of $\mathbb{R}^d$, and let $w$ be the projection of $v$ onto $W$. For $\varepsilon \in (0,1)$,

$$\Pr\left( \|w\|_2 \leq (1-\varepsilon)\sqrt{k/d} \right) \leq 3\, e^{-k \varepsilon^2 / 64}.$$
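As a quick numerical sanity check on this concentration (our own illustration, not part of the original proof), one can project a fixed unit vector onto random subspaces drawn via QR factorizations of Gaussian matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, trials = 400, 100, 2000
v = np.zeros(d); v[0] = 1.0                          # fixed unit vector
norms = np.array([np.linalg.norm(np.linalg.qr(
    rng.standard_normal((d, k)))[0].T @ v) for _ in range(trials)])
print(norms.mean(), np.sqrt(k / d))                  # empirical mean vs. sqrt(k/d)
eps = 0.25
print((norms <= (1 - eps) * np.sqrt(k / d)).mean(),  # empirical tail probability
      3 * np.exp(-k * eps ** 2 / 64))                # the bound (loose at this k)
```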

Using this result (with $d = mn$, $k = (m-1)n$, and $1 - \varepsilon = \sqrt{\delta / (1 - 1/m)}$), we obtain that for $0 < \delta < 1 - 1/m$,

$$\begin{aligned} \Pr\left( E_{\mathcal{PC}_{m,n}}(A) \leq \delta \|A\|_F^2 \right) &\leq n^m \, \Pr\left( \frac{\sum_{i=2}^m \|A_{i,:}\|_2^2}{\sum_{i=1}^m \|A_{i,:}\|_2^2} \leq \delta \right) \\ &\leq 3\, n^m \, e^{-\frac{n(m-1)}{64} \left( 1 - \sqrt{\frac{\delta}{1 - 1/m}} \right)^2} \\ &\leq 3\, e^{\, m \log n \, - \, \frac{n(m-1)}{64} \left( 1 - 2\sqrt{\frac{\delta}{1 - 1/m}} \right)}. \end{aligned}$$

Now, it is straightforward to verify that for $\delta < 1/8$, $m \geq 2$, and $n$ sufficiently large (so that $\frac{\log n}{n} < \frac{1}{128}\left(1 - 2\sqrt{2\delta}\right)$), there exists a positive constant

$$c(\delta) < \frac{1}{128}\left( 1 - 2\sqrt{2\delta} \right) - \frac{\log n}{n}$$

for which

$$m \log n - \frac{n(m-1)}{64} \left( 1 - 2\sqrt{\frac{\delta}{1 - 1/m}} \right) \leq -c(\delta)\, mn.$$

In this case, it follows that

$$\Pr\left( E_{\mathcal{PC}_{m,n}}(A) \leq \delta \|A\|_F^2 \right) \leq 3\, e^{-c(\delta) \cdot mn}.$$

## References

• [1] K. Pearson, “On lines and planes of closest fit to systems of points in space,” The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, vol. 2, no. 11, pp. 559–572, 1901.
• [2] I. Jolliffe, Principal component analysis, Wiley Online Library, 2002.
• [3] J. P. Cunningham and Z. Ghahramani, “Unifying linear dimensionality reduction,” arXiv preprint arXiv:1406.0873, 2014.
• [4] E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal recovery from highly incomplete frequency information,” IEEE Trans. on Inform. Theory, vol. 52, no. 2, pp. 489–509, 2006.
• [5] D. Donoho, “Compressed sensing,” IEEE Trans. on Inform. Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
• [6] E. J. Candès and T. Tao, “Near optimal signal recovery from random projections: Universal encoding strategies?,” IEEE Trans. on Inform. Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
• [7] J. Haupt and R. Nowak, “Signal reconstruction from noisy random projections,” IEEE Transactions on Information Theory, vol. 52, no. 9, pp. 4036–4048, 2006.
• [8] M. Elad, “Optimized projections for compressed sensing,” IEEE Trans. Signal Processing, vol. 55, no. 12, pp. 5695–5702, 2007.
• [9] M. Rudelson and R. Vershynin, “On sparse reconstruction from Fourier and Gaussian measurements,” Communications on Pure and Applied Mathematics, vol. 61, no. 8, pp. 1025–1045, 2008.
• [10] A. Ashok, P. K. Baheti, and M. A. Neifeld, “Compressive imaging system design using task-specific information,” Applied Optics, vol. 47, no. 25, pp. 4457–4471, 2008.
• [11] J. M. Duarte-Carvajalino and G. Sapiro, “Learning to sense sparse signals: Simultaneous sensing matrix and sparsifying dictionary optimization,” IEEE Trans. Image Processing, vol. 18, no. 7, pp. 1395–1408, 2009.
• [12] R. Calderbank, S. Howard, and S. Jafarpour, “Construction of a large class of deterministic sensing matrices that satisfy a statistical isometry property,” IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 358–374, 2010.
• [13] J. Haupt, L. Applebaum, and R. Nowak, “On the restricted isometry of deterministically subsampled Fourier matrices,” in IEEE Conf. on Information Sciences and Systems, 2010, pp. 1–6.
• [14] L. Zelnik-Manor, K. Rosenblum, and Y. C. Eldar, “Sensing matrix optimization for block-sparse decoding,” IEEE Trans. Signal Processing, vol. 59, no. 9, pp. 4300–4312, 2011.
• [15] A. Ashok and M. A. Neifeld, “Compressive imaging: hybrid measurement basis design,” J. Optical Society of America A, vol. 28, no. 6, pp. 1041–1050, 2011.
• [16] W. R. Carson, M. Chen, M. R. D. Rodrigues, R. Calderbank, and L. Carin, “Communications-inspired projection design with application to compressive sensing,” SIAM Journal on Imaging Sciences, vol. 5, no. 4, pp. 1185–1212, 2012.
• [17] S. Jain, A. Soni, and J. Haupt, “Compressive measurement designs for estimating structured signals in structured clutter: A Bayesian experimental design approach,” in Asilomar Conf. on Signals, Systems and Computers. IEEE, 2013, pp. 163–167.
• [18] S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE Trans. on Signal Processing, vol. 56, no. 6, pp. 2346–2356, 2008.
• [19] J. Haupt, R. Baraniuk, R. Castro, and R. Nowak, “Compressive distilled sensing: Sparse recovery using adaptivity in compressive measurements,” in Proc. Asilomar Conf. on Signals, Systems, and Computers, 2009, pp. 1551–1555.
• [20] J. Haupt, R. Castro, and R. Nowak, “Adaptive sensing for sparse signal recovery,” in Proc. IEEE DSP Workshop and Workshop on Sig. Processing Education, 2009, pp. 702–707.
• [21] J. Haupt and R. Nowak, “Adaptive sensing for sparse recovery,” in Compressed Sensing: Theory and applications, Y. Eldar and G. Kutyniok, Eds. Cambridge University Press, 2011.
• [22] P. Indyk, E. Price, and D. P. Woodruff, “On the power of adaptivity in sparse recovery,” in Proc. IEEE Foundations of Computer Science, 2011, pp. 285–294.
• [23] A. Ashok, J. L. Huang, and M. A. Neifeld, “Information-optimal adaptive compressive imaging,” in Asilomar Conf. on Signals, Systems and Computers, 2011, pp. 1255–1259.
• [24] E. Price and D. P. Woodruff, “Lower bounds for adaptive sparse recovery,” arXiv preprint arXiv:1205.3518, 2012.
• [25] M. Iwen and A. Tewfik, “Adaptive group testing strategies for target detection and localization in noisy environments,” IEEE Trans. on Sig. Processing, vol. 60, no. 5, pp. 2344–2353, 2012.
• [26] J. Haupt, R. Baraniuk, R. Castro, and R. Nowak, “Sequentially designed compressed sensing,” in Proc. IEEE Statistical Sig. Processing Workshop, 2012, pp. 401–404.
• [27] M. A. Davenport and E. Arias-Castro, “Compressive binary search,” in Proc. IEEE Intl. Symp. on Inform. Theory, 2012, pp. 1827–1831.
• [28] E. Arias-Castro, E. J. Candès, and M. A. Davenport, “On the fundamental limits of adaptive sensing,” IEEE Trans. on Inform. Theory, vol. 59, no. 1, pp. 472–481, 2013.
• [29] J. M. Duarte-Carvajalino, G. Yu, L. Carin, and G. Sapiro, “Task-driven adaptive statistical compressive sensing of gaussian mixture models,” IEEE Trans. Signal Processing, vol. 61, no. 3, pp. 585–600, 2013.
• [30] A. Soni and J. Haupt, “On the fundamental limits of recovering tree sparse vectors from noisy linear measurements,” IEEE Trans. on Inform. Theory, vol. 60, no. 1, pp. 133–149, 2014.
• [31] M. Malloy and R. Nowak, “Near-optimal adaptive compressive sensing,” IEEE Trans. on Inform. Theory, vol. 60, no. 4, pp. 4001–4012, 2014.
• [32] R. M. Castro, “Adaptive sensing performance lower bounds for sparse signal detection and support estimation,” Bernoulli, vol. 20, no. 4, pp. 2217–2246, 2014.
• [33] G. Strang, “A proposal for Toeplitz matrix calculations,” Studies in Applied Mathematics, vol. 74, no. 2, pp. 171–176, 1986.
• [34] T. F. Chan, “An optimal circulant preconditioner for Toeplitz systems,” SIAM Journal on Scientific and Statistical Computing, vol. 9, no. 4, pp. 766–771, 1988.
• [35] E. E. Tyrtyshnikov, “Optimal and superoptimal circulant preconditioners,” SIAM Journal on Matrix Analysis and Applications, vol. 13, no. 2, pp. 459–473, 1992.
• [36] J. Müller-Quade, H. Aagedal, T. Beth, and M. Schmid, “Algorithmic design of diffractive optical systems for information processing,” Physica D: Nonlinear Phenomena, vol. 120, no. 1, pp. 196–205, 1998.
• [37] M. Schmid, R. Steinwandt, J. Müller-Quade, M. Rötteler, and T. Beth, “Decomposing a matrix into circulant and diagonal factors,” Linear Algebra and its Applications, vol. 306, no. 1, pp. 131–143, 2000.
• [38] M. Huhtanen, “How real is your matrix?,” Linear algebra and its applications, vol. 424, no. 1, pp. 304–319, 2007.
• [39] M. Huhtanen, “Factoring matrices into the product of two matrices,” BIT Numerical Mathematics, vol. 47, no. 4, pp. 793–808, 2007.
• [40] M. Huhtanen, “Approximating ideal diffractive optical systems,” Journal of Mathematical Analysis and Applications, vol. 345, no. 1, pp. 53–62, 2008.
• [41] J. Romberg, “Compressive sensing by random convolution,” SIAM Journal on Imaging Sciences, vol. 2, no. 4, pp. 1098–1128, 2009.
• [42] J. Haupt, W. U. Bajwa, G. Raz, and R. Nowak, “Toeplitz compressed sensing matrices with applications to sparse channel estimation,” IEEE Transactions on Information Theory, vol. 56, no. 11, pp. 5862–5875, 2010.
• [43] H. Rauhut, “Compressive sensing and structured random matrices,” in Theoretical Foundations and Numerical Methods for Sparse Recovery, M. Fornasier, Ed., vol. 9, pp. 1–92. deGruyter, 2010.
• [44] H. Rauhut, J. Romberg, and J. A. Tropp, “Restricted isometries for partial random circulant matrices,” Applied and Computational Harmonic Analysis, vol. 32, no. 2, pp. 242–254, 2012.
• [45] W. Yin, S. Morgan, J. Yang, and Y. Zhang, “Practical compressive sensing with Toeplitz and circulant matrices,” in Visual Communications and Image Processing 2010. International Society for Optics and Photonics, 2010, p. 77440K.
• [46] F. Krahmer and R. Ward, “New and improved Johnson-Lindenstrauss embeddings via the restricted isometry property,” SIAM Journal on Mathematical Analysis, vol. 43, no. 3, pp. 1269–1281, 2011.
• [47] H. Zhang and L. Cheng, “New bounds for circulant Johnson-Lindenstrauss embeddings,” arXiv preprint arXiv:1308.6339, 2013.
• [48] H. L. Yap, M. B. Wakin, and C. J. Rozell, “Stable manifold embeddings with structured random matrices,” IEEE Journal of Selected Topics in Signal Processing, vol. 7, no. 4, pp. 720–730, 2013.
• [49] J. Liu, S. Ji, and J. Ye, SLEP: Sparse Learning with Efficient Projections, Arizona State University, 2009.
• [50] N. Ailon and B. Chazelle, “Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform,” in Proc. ACM Symp. on Theory of Computing, 2006, pp. 557–563.
• [51] A. Dasgupta, R. Kumar, and T. Sarlós, “A sparse Johnson-Lindenstrauss transform,” in Proc. ACM Symp. on Theory of Computing, 2010, pp. 341–350.
• [52] J. Hopcroft and R. Kannan, Foundations of Data Science, April 2014, Available online: http://www.cs.cornell.edu/jeh/book11April2014.pdf.