TripleSpin: a generic compact paradigm for fast machine learning computations
Abstract
We present a generic compact computational framework relying on structured random matrices that can be applied to speed up several machine learning algorithms with almost no loss of accuracy. The applications include new fast LSH-based algorithms, efficient kernel computations via random feature maps, convex optimization algorithms, quantization techniques and many more. Certain models of the presented paradigm are even more compressible since they apply only bit matrices. This makes them suitable for deployment on mobile devices. All our findings come with strong theoretical guarantees. In particular, as a byproduct of the presented techniques and by using a relatively new Berry-Esseen-type CLT for random vectors, we give the first theoretical guarantees for one of the most efficient existing LSH algorithms, which is based on a structured matrix [1]. These guarantees, as well as theoretical results for the other aforementioned applications, follow from one general theoretical principle that we present in the paper. Our structured family contains as special cases all previously considered structured schemes, including the recently introduced model of [2]. Experimental evaluation confirms the accuracy and efficiency of TripleSpin matrices.
1 Introduction
Consider a randomized machine learning algorithm operating on a dataset and parameterized by a Gaussian matrix G with i.i.d. entries drawn from $\mathcal{N}(0,1)$. We assume that G is used to calculate Gaussian projections that are then passed to other, possibly highly nonlinear, functions.
Many machine learning algorithms are of this form. Examples include several variants of the Johnson-Lindenstrauss Transform applying random projections to reduce data dimensionality while approximately preserving Euclidean distance [3, 4], quantization techniques using random projection trees, where the split in each node is determined by a projection of the data onto a Gaussian direction [5], algorithms solving convex optimization problems with random sketches of Hessian matrices [6, 7], kernel approximation techniques applying random feature maps produced from linear projections with Gaussian matrices followed by nonlinear mappings [2, 8, 9], several LSH schemes [1, 10, 11] (such as some of the most effective cross-polytope LSH methods) and many more.
If the data is high-dimensional then computing the random projections $\mathbf{Gx}$, which takes quadratic time for a square Gaussian matrix, often occupies a significant fraction of the overall run time. Furthermore, storing the matrix G frequently becomes a bottleneck in terms of space complexity. In this paper we propose a "structured variant" of such an algorithm, where the Gaussian matrix G is replaced by a structured matrix taken from the TripleSpin-family of matrices that we define. The name comes from the fact that each such matrix is a product of three structured building blocks, which include rotations.
Replacing G by a TripleSpin matrix gives computational speedups, since the corresponding matrix-vector product can be calculated in subquadratic time: with the use of techniques such as the Fast Fourier Transform, the time complexity is reduced to $O(n \log n)$. Furthermore, with matrices from the TripleSpin-family the space complexity can also be substantially reduced: to subquadratic, usually at most linear, sometimes even constant.
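To make the savings concrete, here is a minimal sketch (ours, not the paper's implementation) of multiplying a single HD block, a Rademacher diagonal matrix followed by the normalized Hadamard matrix, against a vector in $O(n \log n)$ time via an iterative fast Walsh-Hadamard transform; only NumPy is assumed.

```python
import numpy as np

def fwht(x):
    """Normalized fast Walsh-Hadamard transform in O(n log n); n must be a power of 2."""
    y = np.asarray(x, dtype=float).copy()
    n, h = len(y), 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = y[i:i + h].copy()
            b = y[i + h:i + 2 * h].copy()
            y[i:i + h] = a + b            # butterfly: sums
            y[i + h:i + 2 * h] = a - b    # butterfly: differences
        h *= 2
    return y / np.sqrt(n)

def hd_times(x, d):
    """Compute (H D) x, where D = diag(d) with Rademacher entries d and
    H is the normalized Hadamard matrix, without ever materializing H."""
    return fwht(d * x)
```

For a full TripleSpin-style product one would apply three such blocks in sequence; the $O(n^2)$ dense matrix is never formed.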
To the best of our knowledge, we are the first to provide a comprehensive theoretical explanation of the effectiveness of the structured approach. So far such an explanation was given only for some specific applications and specific structured matrices. The proposed TripleSpin-family contains all previously considered structured matrices as special cases, including the recently introduced model of [2], yet its flexible three-block structure provides mechanisms for constructing many others. Our structured model is also the first one that proposes purely discrete Hadamard-based constructions with strong theoretical guarantees. We empirically show that these surpass their unstructured counterparts.
As a byproduct, we provide a theoretical explanation of the efficiency of the cross-polytope LSH method based on the structured matrix $\mathbf{HD}_3\mathbf{HD}_2\mathbf{HD}_1$ [1], where H stands for the Hadamard matrix and the $\mathbf{D}_i$'s are random diagonal matrices. Thus we solve the open problem posed in [1]. These guarantees, as well as theoretical results for the other aforementioned applications, arise from the same general theoretical principle that we present in the paper. Our theoretical methods apply relatively new Berry-Esseen-type Central Limit Theorem results for random vectors.
2 Related work
Structured matrices were previously used mainly in the context of the Johnson-Lindenstrauss Transform (JLT), where the goal is to linearly transform high-dimensional data and embed it into a much lower dimensional space in such a way that the Euclidean distance is approximately preserved [3, 4, 12]. Most of these structured constructions involve sparse or circulant matrices [12, 13], providing computational speedups and space compression.
Specific structured matrices were used to approximate angular distance and Gaussian kernels [9, 14]. Very recently, structured matrices coming from the so-called P-model [2] were applied to speed up random feature map computations for some special kernels (angular, arc-cosine and Gaussian). The presented techniques did not work for discrete structured constructions such as the aforementioned Hadamard-based matrix, since they focus on matrices with low (polylog) chromatic number of the corresponding coherence graphs, and these do not include such matrices or their direct non-discrete modifications.
The TripleSpin mechanism gives in particular a highly parameterized family of structured methods for approximating general kernels with random feature maps. Among them are purely discrete computational schemes providing the most aggressive compression with just minimal loss of accuracy.
Several LSH methods use random Gaussian matrices to construct compact codes of data points and in turn speed up such tasks as approximate nearest neighbor search. Among them are the cross-polytope methods proposed in [11]. In the angular cross-polytope setup the hash family is designed for points taken from the unit sphere $S^{n-1}$, where $n$ stands for the data dimensionality. To construct hashes, a random matrix G with i.i.d. Gaussian entries is built. The hash of a given data point $\mathbf{x}$ is defined as $h(\mathbf{x}) = \eta\left(\frac{\mathbf{Gx}}{\|\mathbf{Gx}\|_2}\right)$, where $\eta(\mathbf{y})$ returns the closest vector to $\mathbf{y}$ from the set $\{\pm\mathbf{e}_1, \dots, \pm\mathbf{e}_n\}$, and $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ stands for the canonical basis. The fastest known variant of the cross-polytope LSH [1] replaces the unstructured Gaussian matrix G with the product $\mathbf{HD}_3\mathbf{HD}_2\mathbf{HD}_1$. No theoretical guarantees regarding that variant were known. We provide a theoretical explanation here. As in the previous setting, matrices from the TripleSpin-model lead to several fast structured cross-polytope LSH algorithms not considered before.
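As an illustration, the cross-polytope hash above can be sketched as follows (our code, not from [1]; A is any projection matrix, Gaussian or structured):

```python
import numpy as np

def cross_polytope_hash(x, A):
    """Return the closest signed canonical direction to Ax / ||Ax||,
    encoded as (coordinate index, sign)."""
    y = A @ x
    y = y / np.linalg.norm(y)        # normalization does not change the argmax
    i = int(np.argmax(np.abs(y)))
    return (i, 1 if y[i] >= 0 else -1)
```

Two points collide under this hash when they are assigned the same (index, sign) pair.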
Recently, a new method for speeding up algorithms solving convex optimization problems by approximating Hessian matrices (a bottleneck of the overall computational pipeline) with their random sketches was proposed [6, 7]. One of the presented sketches, the so-called Newton Sketch, leads to a sequence of iterates for optimizing a given convex function $f$, given by the following recursion:
$$\mathbf{x}^{t+1} = \arg\min_{\mathbf{x}} \left\{ \frac{1}{2}\left\|\mathbf{S}^{t}\left(\nabla^{2} f(\mathbf{x}^{t})\right)^{1/2}\left(\mathbf{x}-\mathbf{x}^{t}\right)\right\|_{2}^{2} + \left\langle \nabla f(\mathbf{x}^{t}),\, \mathbf{x}-\mathbf{x}^{t} \right\rangle \right\} \qquad (1)$$
where $\{\mathbf{S}^{t}\}$ is the sequence of so-called sketch matrices. Initially, sub-Gaussian sketches based on i.i.d. sub-Gaussian random variables were used. The disadvantage of sub-Gaussian sketches lies in the fact that computing the sketch of a given matrix, needed for convex optimization with sketches, requires time proportional to the product of the sketch dimension and the matrix dimensions. Thus the method is too slow in practice. This is where structured matrices can be applied. Some structured approaches were already considered in [6], where sketches based on randomized orthonormal systems were proposed. In that approach a sketching matrix S is constructed by sampling i.i.d. rows of the form $\sqrt{n}\,\mathbf{e}_{j}^{\top}\mathbf{HD}$, where $\mathbf{e}_{j}$ is chosen uniformly at random from the set of $n$ canonical vectors and D is a random diagonal matrix. We show that our class of TripleSpin matrices can also be used in that setting.
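The randomized-orthonormal-system construction just described can be sketched as follows (our illustration, with the normalized Hadamard matrix built explicitly for clarity; in practice the rows would be applied via a fast Hadamard transform, and the $1/\sqrt{m}$ scaling is our convention so that the sketch preserves the Gram matrix in expectation):

```python
import numpy as np

def ros_sketch(A, m, rng=None):
    """Sketch S A with m i.i.d. rows of S of the form sqrt(n) * e_j^T H D,
    where e_j is a uniformly random canonical vector and D is a Rademacher
    diagonal matrix. n (number of rows of A) must be a power of 2."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[0]
    H = np.array([[1.0]])
    while H.shape[0] < n:                       # Sylvester construction of H
        H = np.block([[H, H], [H, -H]])
    H = H / np.sqrt(n)
    d = rng.choice([-1.0, 1.0], size=n)
    rows = rng.integers(0, n, size=m)           # sampled canonical directions
    S = np.sqrt(n) * H[rows] * d                # every entry of S is +-1
    return (S @ A) / np.sqrt(m)
```

Since a single row s satisfies E[s sᵀ] = I, the Gram matrix of the sketch concentrates around AᵀA as m grows.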
3 The TripleSpin-family
We now present the TripleSpin-family. If not specified otherwise, the random diagonal matrix D is a diagonal matrix with diagonal entries taken independently at random from $\{-1, +1\}$. For a sequence $(t_1, \dots, t_n)$ we denote by $\mathrm{diag}(t_1, \dots, t_n)$ the diagonal matrix with diagonal $(t_1, \dots, t_n)$. For a matrix $\mathbf{A}$, let $\|\mathbf{A}\|_F$ and $\|\mathbf{A}\|_2$ denote its Frobenius and spectral norms, respectively. We denote by H the normalized Hadamard matrix. We say that r is a random Rademacher vector if every element of r is chosen independently at random from $\{-1, +1\}$.
For a vector $\mathbf{r} \in \mathbb{R}^{n}$ and $k \in \mathbb{N}$, consider the matrix whose first row is $\mathbf{r}^{\top}$ and in which each subsequent row is obtained from the previous one by right-shifting it, in a circulant manner, by $k$ positions. For a sequence of matrices, we denote the matrix obtained by stacking them vertically.
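For the standard circulant case (shift by one position), the matrix-vector product is a circular convolution and can therefore be computed via the FFT; a minimal sketch (ours), using the first-column convention:

```python
import numpy as np

def circulant_matvec(c, x):
    """Compute C x in O(n log n), where C is the circulant matrix with
    first column c, i.e. C[i, j] = c[(i - j) mod n]."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
```

A Gaussian circulant matrix is obtained by drawing c from $\mathcal{N}(0, \mathbf{I})$; Toeplitz and Hankel matrix-vector products reduce to circulant ones by standard embedding.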
Each structured matrix from the TripleSpin-family is a product of three main structured components, i.e.:

$$\mathbf{G}_{struct} = \mathbf{B}_{3}\mathbf{B}_{2}\mathbf{B}_{1}, \qquad (2)$$

where the matrices $\mathbf{B}_{1}$, $\mathbf{B}_{2}$ and $\mathbf{B}_{3}$ satisfy the following conditions:
If all three conditions are satisfied then we say that the matrix is a TripleSpin-matrix with the corresponding parameters. Below we explain these conditions.
Definition 1 (balanced matrices)
A randomized matrix $\mathbf{M} \in \mathbb{R}^{n \times n}$ is balanced if for every $\mathbf{x} \in \mathbb{R}^{n}$ with $\|\mathbf{x}\|_{2} = 1$ we have, with high probability, $\|\mathbf{Mx}\|_{\infty} = O\!\left(\frac{\log n}{\sqrt{n}}\right)$.
Remark 1
One can take $\mathbf{HD}$ as such a matrix since, as we will show in the Appendix, the matrix $\mathbf{HD}$ is balanced.
Definition 2 (smooth sets)
A deterministic set of matrices is smooth if:

for , where stands for the column of ,

for and we have: ,

and
Remark 2
If the unstructured matrix G has rows taken from a general multivariate Gaussian distribution with diagonal covariance matrix, then one needs to rescale the vectors r accordingly. For clarity we assume here that the covariance matrix is the identity, and we present our theoretical results for that setting.
All structured matrices previously considered are special cases of the TripleSpin-family (for clarity we will explicitly show this for some important special cases). Others, not considered before, are also covered by the family. We have:
Lemma 1
The matrix $\mathbf{HD}_{3}\mathbf{HD}_{2}\mathbf{HD}_{1}$ and matrices built from a Gaussian circulant matrix are valid TripleSpin-matrices for appropriate choices of the parameters. The same is true if one replaces the Gaussian circulant matrix by a Gaussian Hankel or Toeplitz matrix.
3.1 Stacking together TripleSpin-matrices
We described TripleSpin-matrices as square matrices, but in practice we are not restricted to square matrices. We can construct an $m \times n$ TripleSpin matrix for $m < n$ from the square $n \times n$ TripleSpin-matrix by taking its first $m$ rows. We can then stack vertically such independently constructed matrices to obtain an $m \times n$ matrix for both $m < n$ and $m > n$. We think of the number of rows per independent block as another parameter of the model that tunes the "structuredness" level. Larger blocks indicate a more structured approach, while smaller blocks lead to more random matrices (the case of single-row blocks is the fully unstructured one).
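The stacking construction above can be sketched generically (our helper, not the paper's code; make_block is any callable returning a fresh, independently drawn n x n structured block):

```python
import numpy as np

def stack_blocks(make_block, m, n):
    """Build an m x n matrix by stacking ceil(m / n) independently drawn
    n x n blocks vertically and keeping only the first m rows."""
    num_blocks = -(-m // n)                     # ceiling division
    return np.vstack([make_block() for _ in range(num_blocks)])[:m]
```

The same helper covers both regimes: for m < n a single truncated block is returned, for m > n several independent blocks are stacked.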
4 Computing general kernels with TripleSpin-matrices
Previous works regarding approximating kernels with structured matrices covered only some special kernels, namely the Gaussian, arc-cosine and angular kernels. We explain here how the structured approach (in particular, our TripleSpin-family) can be used to approximate most kernels well. Theoretical guarantees that also cover this special case are given in the subsequent section.
For kernels that can be represented as an expectation over Gaussian random variables it is natural to approximate them using structured matrices. We start our analysis with the so-called Pointwise Nonlinear Gaussian kernels (PNG), which are of the form:

$$K(\mathbf{x}, \mathbf{y}) = \mathbb{E}\left[f(\mathbf{g}^{\top}\mathbf{x})\, f(\mathbf{g}^{\top}\mathbf{y})\right], \qquad (3)$$

where the expectation is over a multivariate Gaussian $\mathbf{g} \sim \mathcal{N}(0, \Sigma)$ and $f$ is a fixed nonlinear function (the positive-semidefiniteness of the kernel follows from it being an expectation of dot products). The expectation, interpreted in the Monte-Carlo approximation setup, leads to an unbiased estimator given by the normalized dot product $\frac{1}{m} f(\mathbf{Gx})^{\top} f(\mathbf{Gy})$, where $\mathbf{G}$ is a random Gaussian matrix with $m$ rows and $f$ is applied pointwise, i.e. $f((v_1, \dots, v_m)) = (f(v_1), \dots, f(v_m))$.
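The Monte-Carlo estimator just described can be sketched as follows (our illustration; the nonlinearity f and the sample count m are free choices):

```python
import numpy as np

def png_kernel_mc(x, y, f, m=20000, seed=0):
    """Unbiased Monte-Carlo estimate of the PNG kernel E[f(g.x) f(g.y)]
    over g ~ N(0, I), using m independent Gaussian directions."""
    G = np.random.default_rng(seed).standard_normal((m, len(x)))
    return float(np.mean(f(G @ x) * f(G @ y)))
```

For example, with f = np.sign this recovers the angular kernel $1 - \frac{2\theta}{\pi}$, where $\theta$ is the angle between x and y.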
In this setting, matrices from the TripleSpin-family can replace Gaussian matrices G with diagonal covariance matrices to speed up computations and reduce storage complexity. This idea of using random structured matrices to evaluate such kernels is common in the literature, e.g. [9, 15], but no theoretical guarantees were given so far for general PNG kernels (even under the restriction of a diagonal covariance matrix $\Sigma$).
PNGs define a large family of kernels characterized by the nonlinearity $f$. Prominent examples include the Euclidean and angular distance [9], the arc-cosine kernel [16] and the sigmoidal neural network [17]. Since sums of kernels are again kernels, we can construct an even larger family of kernels by summing PNGs. A simple example is the Gaussian kernel, which can be represented as a sum of two PNGs with $f$ replaced by the trigonometric functions $\cos$ and $\sin$.
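The decomposition of the Gaussian kernel into two trigonometric PNGs can be checked numerically (a sketch, under the unit-bandwidth convention $e^{-\|\mathbf{x}-\mathbf{y}\|^2/2}$):

```python
import numpy as np

def gaussian_kernel_two_pngs(x, y, m=20000, seed=0):
    """Estimate exp(-||x - y||^2 / 2) as the sum of the cos-PNG and the
    sin-PNG: E[cos(g.x)cos(g.y)] + E[sin(g.x)sin(g.y)] = E[cos(g.(x - y))]."""
    G = np.random.default_rng(seed).standard_normal((m, len(x)))
    return float(np.mean(np.cos(G @ x) * np.cos(G @ y)
                         + np.sin(G @ x) * np.sin(G @ y)))
```

The identity used in the comment is just the cosine angle-difference formula combined with Bochner's theorem for the Gaussian kernel.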
Since the Laplacian, exponential and rational quadratic kernels can all be represented as mixtures of Gaussian kernels with different variances, they can be easily approximated by finite sums of PNGs. Remarkably, virtually all kernels can be represented as a (potentially infinite) sum of PNGs with diagonal covariance matrices. Recall that a kernel $K$ is stationary if $K(\mathbf{x}, \mathbf{y})$ depends only on the difference $\mathbf{x} - \mathbf{y}$, and nonstationary otherwise. Harnessing Bochner's and Wiener's Tauberian theorems [18], we show that all stationary kernels may be approximated arbitrarily well by sums of PNGs.
Theorem 4.1 (stationary kernels)
The family of functions
given by finite sums of PNGs with trigonometric nonlinearities is dense in the family of stationary real-valued kernels with respect to pointwise convergence.
This family corresponds to the spectral mixture kernels of [19]. We can extend these results to arbitrary, nonstationary, kernels; the precise statement and its proof can be found in the Appendix.
Theorem 4.1 and its nonstationary analogue show that the family of sums of PNGs contains virtually all kernel functions. If a kernel can be well approximated by a sum of only a small number of PNGs, then we can use the TripleSpin-family to evaluate it efficiently. It has been found in the Gaussian Process literature that a small sum of kernels tuned using training data is often sufficient to explain phenomena remarkably well [19]. The same should also be true for sums of PNGs. This suggests a methodology of learning the parameters of a sum of PNGs from training data and then applying it out of sample. We leave this investigation for future research.
5 Theoretical results
We now show that matrices from the TripleSpin-family can replace their unstructured counterparts in many machine learning algorithms with minimal loss of accuracy.
Consider a randomized machine learning algorithm operating on a dataset. We assume that it uses certain functions such that each of them uses a given Gaussian matrix G (the matrix G can be the same or different for different functions) with independent rows taken from a multivariate Gaussian distribution with a diagonal covariance matrix. The algorithm may also use other functions that do not depend directly on Gaussian matrices but may, for instance, operate on the outputs of the former. We assume that each of the former functions applies G to vectors from some linear space of dimensionality at most $d$.
Remark 3
In the kernel approximation setting with random feature maps, one can match each pair of vectors $(\mathbf{x}, \mathbf{y})$ with a different function. Each such function computes the approximate value of the kernel for the vectors x and y. Thus in that scenario one can take $d = 2$, since the relevant linear space can be taken to be the one spanned by x and y.
Remark 4
In vector quantization algorithms using random projection trees, one can take the algorithm itself as a single such function, and $d$ is bounded by the intrinsic dimensionality of the given dataset (random projection trees are often used when this intrinsic dimensionality is much smaller than the data dimensionality).
Fix a function using an unstructured Gaussian matrix G and applying it on a linear space $\mathcal{L}$ of dimensionality $d$. Note that the outputs of G on vectors from $\mathcal{L}$ are determined by the sequence $(\mathbf{Gb}_1, \dots, \mathbf{Gb}_d)$, where $\{\mathbf{b}_1, \dots, \mathbf{b}_d\}$ stands for a fixed orthonormal basis of $\mathcal{L}$. Thus they are determined by the following vector, obtained from "stacking together" the vectors $\mathbf{Gb}_1, \dots, \mathbf{Gb}_d$: $\mathbf{q} = (\mathbf{Gb}_1; \dots; \mathbf{Gb}_d)$.
Notice that $\mathbf{q}$ is a Gaussian vector with independent entries (this comes from the fact that the rows of G are independent and the observation that the projections of a Gaussian vector onto orthogonal directions are independent). Thus the covariance matrix of $\mathbf{q}$ is the identity.
Definition 3 (similarity)
We say that a multivariate Gaussian distribution is $\epsilon$-close to the multivariate Gaussian distribution with covariance matrix I if its covariance matrix is equal to I on the diagonal and has all other entries of absolute value at most $\epsilon$.
To measure the "closeness" of the algorithm with TripleSpin-model matrices to the original one, we will measure how "close" in distribution the corresponding stacked vector for the structured setting is to its unstructured counterpart. The following definition proposes a quantitative way to measure this closeness. Without loss of generality we will assume from now on that each structured matrix consists of just one block, since different blocks of the structured matrix are chosen independently.
Definition 4
Let the algorithm and $d$ be as above. For given $\epsilon, \eta > 0$, the class of algorithms $\mathcal{A}(\epsilon, \eta)$ is the set of algorithms obtained from the original one by replacing its unstructured Gaussian matrices with their structured counterparts such that for any linear space $\mathcal{L}$ with $\dim(\mathcal{L}) \leq d$ and any convex set $S$ the following holds:

$$\left|\mathbb{P}\left[\mathbf{q}_{struct} \in S\right] - \mathbb{P}\left[\mathbf{q}_{\epsilon} \in S\right]\right| \leq \eta, \qquad (4)$$

where $\mathbf{q}_{struct}$ is the stacked vector for the structured matrix and $\mathbf{q}_{\epsilon}$ is drawn from some multivariate Gaussian distribution that is $\epsilon$-close to $\mathcal{N}(0, \mathbf{I})$.
The smaller $\epsilon$ and $\eta$, the closer in distribution the structured and unstructured settings are, and the more accurate the structured version of the algorithm is. Now we show that TripleSpin-matrices lead to algorithms from $\mathcal{A}(\epsilon, \eta)$ with small $\epsilon$ and $\eta$.
Theorem 5.1 (structured ML algorithms)
Consider a randomized algorithm using unstructured Gaussian matrices G, with the dimensionality bound $d$ as at the beginning of the section. Replace the unstructured matrix G by one of its structured variants from the TripleSpin-family defined in Section 3, with blocks of rows constructed as in Section 3.1. Then, for $n$ large enough and with probability at least:
(5)
the structured version of the algorithm belongs to the class $\mathcal{A}(\epsilon, \eta)$, where the remaining parameters are as in the definition of the TripleSpin-family from Section 3, and the probability is with respect to the random choices made in the construction of the structured matrix.
Theorem 5.1 implies strong accuracy guarantees for the specific matrices from the family. As a corollary we get for instance:
Theorem 5.2
Under the assumptions of Theorem 5.1, the stated probability that the structured version of the algorithm belongs to $\mathcal{A}(\epsilon, \eta)$ holds for the structured matrices $\mathbf{HD}_{3}\mathbf{HD}_{2}\mathbf{HD}_{1}$, as well as for structured matrices built from a Gaussian circulant, Gaussian Toeplitz or Gaussian Hankel matrix.
As a corollary of Theorem 5.2, we obtain the following result showing the effectiveness of the cross-polytope LSH with structured matrices, which was only heuristically confirmed before [1].
Theorem 5.3
Let $\mathbf{x}, \mathbf{y}$ be two unit-norm vectors. Consider the stochastic vector indexed by all ordered pairs of canonical directions $(\pm\mathbf{e}_{i}, \pm\mathbf{e}_{j})$, where the value of the entry indexed by a given pair is the probability that the hashes $h(\mathbf{x})$ and $h(\mathbf{y})$ equal the corresponding directions, and $h(\mathbf{v})$ stands for the hash of v. Then, with high probability with respect to the random choices of $\mathbf{D}_{1}$, $\mathbf{D}_{2}$ and $\mathbf{D}_{3}$, the version of this stochastic vector for the unstructured Gaussian matrix G and its structured counterpart for the matrix $\mathbf{HD}_{3}\mathbf{HD}_{2}\mathbf{HD}_{1}$ are close (up to a universal constant) for $n$ large enough.
The proof for the discrete structured setting applies Berry-Esseen-type results for random vectors (details in the Appendix), showing that for $n$ large enough, random Rademacher vectors r act similarly to Gaussian vectors.
6 Experiments
Experiments have been carried out on a single-processor machine (Intel Core i7-5600U CPU @ 2.60GHz, 4 hyperthreads) with 16GB RAM for the first two applications and a dual-processor machine (Intel Xeon E5-2640 v3 @ 2.60GHz, 32 hyperthreads) with 128GB RAM for the last one. Every experiment was conducted using Python. In particular, NumPy is linked against a highly optimized BLAS library (Intel MKL). The Fast Fourier Transform is performed using numpy.fft and the Fast Hadamard Transform uses ffht from [1]. For a fair comparison, we configured the environment so that every experiment is done on a single thread. Every parameter corresponding to a TripleSpin-matrix is computed in advance, so that the reported speedups take only matrix-vector products into account. All figures should be viewed in color.
6.1 Locality-Sensitive Hashing (LSH) application
In the first experiment, to support our theoretical results for the TripleSpin-matrices in the cross-polytope LSH, we compared collision probabilities in Figure 2 for the low-dimensional case. Results are shown for one hash function (averaged over multiple runs). For each distance interval, the collision probability has been computed for sampled pairs of points, for a random Gaussian matrix G and four other types of TripleSpin-matrices (in descending order of the number of parameters), including constructions based on Gaussian Toeplitz and Gaussian skew-circulant matrices.
We can see that all TripleSpin-matrices show high collision probabilities for small distances and low ones for large distances. All the curves are almost identical. As theoretically predicted, there is no loss of accuracy (sensitivity) from using matrices from the TripleSpin-family.
Table 2: Speedups over the unstructured Gaussian matrix-vector product for increasing matrix dimensions (one row per TripleSpin-matrix variant):

x1.4   x3.4   x6.4    x12.9   x28.0   x42.3    x89.6
x1.5   x3.6   x6.8    x14.9   x31.2   x49.7    x96.5
x2.3   x6.0   x13.8   x31.5   x75.7   x137.0   x308.8
x2.2   x6.0   x14.1   x33.3   x74.3   x140.4   x316.8
6.2 Kernel approximation
In the second experiment, we compared feature maps obtained with Gaussian random matrices and specific TripleSpin-matrices for the Gaussian and angular kernels. To test the quality of the structured kernels' approximations, we compute the corresponding Gram-matrix reconstruction error using the Frobenius norm metric as in [2]: $\frac{\|\mathbf{K} - \tilde{\mathbf{K}}\|_{F}}{\|\mathbf{K}\|_{F}}$, where $\mathbf{K}$ and $\tilde{\mathbf{K}}$ are respectively the exact and approximate Gram-matrices, as a function of the number of random features. When the number of random features is greater than the data dimensionality, we apply the block mechanism described in Section 3.1. We used the USPST dataset (test set), which consists of scans of handwritten digits from envelopes by the U.S. Postal Service. It contains 2007 points of dimensionality 256, corresponding to descriptors of 16 x 16 grayscale images. For the Gaussian kernel, a fixed bandwidth is used. The results are averaged over multiple runs.
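The error metric above can be computed in one line (a sketch; np.linalg.norm defaults to the Frobenius norm for matrices):

```python
import numpy as np

def gram_reconstruction_error(K_exact, K_approx):
    """Relative Frobenius-norm Gram-matrix reconstruction error."""
    return np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
```

An error of 0 means the approximate Gram matrix is exact, while an all-zeros approximation yields an error of 1.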
Results on the USPST dataset:
The following matrices have been tested: a Gaussian random matrix G and several TripleSpin-matrices.
In Figure 3, for both kernels, all TripleSpin-matrices perform similarly to a random Gaussian matrix, with one of them giving the best results (see Figure 5 in the Appendix for additional experiments). The efficiency of the TripleSpin-mechanism does not depend on the dataset.
Table 2 shows significant speedups obtained by the TripleSpin-matrices. These are defined as the ratio of the matrix-vector product runtime for a random Gaussian matrix G to the corresponding runtime for a TripleSpin matrix T.
6.3 Newton sketches
Our last experiment covers the Newton sketch approach initially proposed in [6] as a generic optimization framework. In this experiment we show that TripleSpin matrices can be used for this purpose and can thus speed up solvers for several convex optimization problems. The logistic regression problem is considered (see the Appendix for more details). Our goal is to find the parameter vector which minimizes the logistic regression cost, given a dataset whose inputs are sampled according to a centered multivariate Gaussian distribution and whose labels are generated at random. Various sketching matrices are considered.
We first show that equivalent convergence properties are exhibited for the various TripleSpin-matrices. Figure 4 illustrates the convergence of the Newton sketch algorithm, as measured by the optimality gap defined in [6], versus the iteration number. While it is to be expected that sketched versions of the algorithm do not converge as quickly as the exact Newton approach, the figure confirms that the various TripleSpin-matrices exhibit similar convergence behavior.
As shown in [6], when the dimensionality of the problem increases, the computational cost of computing the Hessian in the exact Newton approach becomes very large, whereas the structured Newton-sketch approach with TripleSpin-matrices is substantially cheaper. Wall-clock times of computing single Hessian matrices are illustrated in Figure 4. This figure confirms that the increase in the number of iterations of the Newton sketch compared to the exact approach is compensated by the efficiency of the sketched computations; in particular, Hadamard-based sketches yield improvements already at the lowest dimensions.
References
 [1] Alexandr Andoni, Piotr Indyk, Thijs Laarhoven, Ilya P. Razenshteyn, and Ludwig Schmidt. Practical and optimal LSH for angular distance. In NIPS, pages 1225–1233, 2015.
 [2] Krzysztof Choromanski and Vikas Sindhwani. Recycling randomness with structure for sublinear time kernel expansions. To appear in ICML, 2016.
 [3] Edo Liberty, Nir Ailon, and Amit Singer. Dense fast random projections and lean Walsh transforms. In RANDOM, pages 512–522, 2008.
 [4] Nir Ailon and Edo Liberty. An almost optimal unrestricted fast JohnsonLindenstrauss transform. ACM Transactions on Algorithms (TALG), 9(3):21, 2013.
 [5] Sanjoy Dasgupta and Yoav Freund. Random projection trees and low dimensional manifolds. In Proceedings of the 40th STOC, pages 537–546, 2008.
 [6] Mert Pilanci and Martin J. Wainwright. Newton sketch: A lineartime optimization algorithm with linearquadratic convergence. CoRR, abs/1505.02250, 2015.
 [7] Mert Pilanci and Martin J. Wainwright. Randomized sketches of convex programs with sharp guarantees. In ISIT, pages 921–925, 2014.
 [8] Ali Rahimi and Benjamin Recht. Random features for largescale kernel machines. In NIPS, pages 1177–1184, 2007.
 [9] Anna Choromanska, Krzysztof Choromanski, Mariusz Bojarski, Tony Jebara, Sanjiv Kumar, and Yann LeCun. Binary embeddings with structured hashed projections. To appear in ICML, 2016.
 [10] Moses Charikar. Similarity estimation techniques from rounding algorithms. In Proceedings on 34th STOC, pages 380–388, 2002.
 [11] Kengo Terasawa and Yuzuru Tanaka. Spherical LSH for approximate nearest neighbor search on unit hypersphere. In WADS, pages 27–38, 2007.
 [12] Nir Ailon and Bernard Chazelle. Approximate nearest neighbors and the fast JohnsonLindenstrauss transform. In Proceedings of the 38th STOC, pages 557–563. ACM, 2006.
 [13] Jan Vybíral. A variant of the JohnsonLindenstrauss lemma for circulant matrices. Journal of Functional Analysis, 260(4):1096–1105, 2011.
 [14] Chang Feng, Qinghua Hu, and Shizhong Liao. Random feature mapping with signed circulant matrix projection. In Proceedings of the 24th IJCAI, pages 3490–3496, 2015.
 [15] Quoc Le, Tamás Sarlós, and Alexander Smola. Fastfood: computing Hilbert space expansions in loglinear time. In Proceedings of the 30th ICML, pages 244–252, 2013.
 [16] Youngmin Cho and Lawrence K. Saul. Kernel methods for deep learning. In NIPS, pages 342–350, 2009.
 [17] Christopher K. I. Williams. Computation with infinite neural networks. Neural Comput., 10(5):1203–1216, July 1998.
 [18] YvesLaurent Kom Samo and Stephen Roberts. Generalized spectral kernels. arXiv preprint arXiv:1506.02236, 2015.
 [19] Andrew Gordon Wilson and Ryan Prescott Adams. Gaussian process kernels for pattern discovery and extrapolation. arXiv preprint arXiv:1302.4245, 2013.
 [20] Vidmantas Bentkus. On the dependence of the Berry–Esseen bound on dimension. Journal of Statistical Planning and Inference, 113(2):385–402, 2003.
Appendix A
In the Appendix we prove all theorems presented in the main body of the paper.
A.1 Computing general kernels with TripleSpin-matrices
We prove here Theorem 4.1 and its nonstationary analogue. For the convenience of the reader, we restate both theorems. We start with Theorem 4.1, which we restate as follows.
Theorem 4.1 (stationary kernels) The family of functions
given by finite sums of PNGs with trigonometric nonlinearities is dense in the family of stationary real-valued kernels with respect to pointwise convergence.
Proof:
Theorem 3 of [18] states that:
"Let K be a real-valued, positive semidefinite, continuous, and integrable function. The family of functions
with appropriate parameters is dense in the family of stationary real-valued kernels with respect to pointwise convergence."
Let $\odot$ denote the elementwise product. If we choose the Gaussian density, as suggested in [18], then it follows that
(6)  
(7)  
(8)  
(9)  
(10)  
is dense in the family of stationary real-valued kernels with respect to pointwise convergence. Equation (6) follows from Bochner's theorem, (7) from integration by substitution, (8) holds since sine is an odd function, (9) follows from the cosine angle-sum identity, and (10) from writing the argument as a linear transform of g. Absorbing the constants into the remaining parameters and relaxing the corresponding conditions completes the proof.
Now we show the analogous version of that result for nonstationary kernels.
Theorem A.1 (nonstationary kernels)
The family of functions
with appropriate parameters is dense in the family of real-valued continuous bounded nonstationary kernels with respect to pointwise convergence of functions.
Proof:
Theorem 7 of [18] states that:
"Let K be a real-valued, positive semidefinite, continuous, and integrable function. The family
with appropriate parameters is dense in the family of real-valued continuous bounded nonstationary kernels with respect to pointwise convergence of functions."
If we choose K as the Gaussian kernel, then the family takes the stated form. Absorbing the constants into the remaining parameters and relaxing the corresponding conditions completes the proof.
A.2 Structured machine learning algorithms with TripleSpin-matrices
A.2.1 Proof of Remark 1
This result first appeared in [12]. The following proof was given in [2]; we repeat it here for completeness. We will use the following standard concentration result.
Lemma 2 (Azuma's Inequality)
Let $X_{0}, X_{1}, \dots, X_{n}$ be a martingale such that $|X_{i} - X_{i-1}| \leq c_{i}$ for some positive constants $c_{1}, \dots, c_{n}$. Then for every $a > 0$ the following is true:

$$\mathbb{P}\left[|X_{n} - X_{0}| \geq a\right] \leq 2 e^{-\frac{a^{2}}{2 \sum_{i=1}^{n} c_{i}^{2}}}. \qquad (11)$$
Proof:
Denote by $\mathbf{v} = \mathbf{HDx}$ the image of $\mathbf{x}$ under the transformation HD. Note that the $i$-th dimension of $\mathbf{v}$ is given by the formula $v_{i} = \sum_{j=1}^{n} h_{i,j} d_{j} x_{j}$, where $h_{i,j}$ stands for the element in the $i$-th row and $j$-th column of the normalized Hadamard matrix H and $d_{j}$ is the $j$-th diagonal entry of D. First we use Azuma's Inequality to find an upper bound on the probability that $|v_{i}| > t$ for a given threshold $t > 0$. Since for $\|\mathbf{x}\|_{2} = 1$ the martingale differences are bounded by $c_{j} = \frac{|x_{j}|}{\sqrt{n}}$ and hence $\sum_{j} c_{j}^{2} = \frac{1}{n}$, Azuma's Inequality gives:

$$\mathbb{P}\left[|v_{i}| > t\right] \leq 2 e^{-\frac{t^{2} n}{2}}. \qquad (12)$$

Choosing $t$ of order $\sqrt{\frac{\log n}{n}}$ makes this probability polynomially small. Now we take a union bound over all $n$ dimensions and the proof is completed.
A.2.2 TripleSpin: equivalent definition
We introduce here an equivalent definition of the TripleSpin-model that is more technical (thus we did not give it in the main body of the paper), yet more convenient to work with in the proofs.
Note that from the definition of the TripleSpin-family we can conclude that each structured matrix from the TripleSpin-family is a product of three main structured blocks, i.e.:

$$\mathbf{G}_{struct} = \mathbf{B}_{3}\mathbf{B}_{2}\mathbf{B}_{1}, \qquad (13)$$

where the matrices $\mathbf{B}_{1}$, $\mathbf{B}_{2}$, $\mathbf{B}_{3}$ satisfy two conditions that we give below.
Below we give the definition of randomness.
Definition 5 (randomness)
A pair of matrices is random if there exist a random vector r and a set of linear isometries such that:

r is either a random Rademacher vector with i.i.d. entries or a Gaussian vector with identity covariance matrix,

for every the element of Zx is of the form: ,

there exists a set of i.i.d. sub-Gaussian random variables with bounded sub-Gaussian norm, mean 0 and the same second moments, together with a smooth set of matrices, such that the stated representation holds for every x.
A.2.3 Proof of Lemma 1
Proof:
Let us first assume the Gaussian circulant setting (the analysis for the Gaussian Toeplitz or Gaussian Hankel case is completely analogous). In that setting it is easy to see that one can take r to be a Gaussian vector (this vector corresponds to the first row of