TripleSpin - a generic compact paradigm for fast machine learning computations

Krzysztof Choromanski
Google Research
kchoro@google.com
   Francois Fagan
Columbia University
ff2316@columbia.edu
   Cedric Gouy-Pailler
CEA, LIST, LADIS
cedric.gouy-pailler@cea.fr
   Anne Morvan
CEA, LIST, LADIS & Universite Paris-Dauphine
anne.morvan@cea.fr
   Tamas Sarlos
Google Research
stamas@google.com
   Jamal Atif
Universite Paris-Dauphine
jamal.atif@dauphine.fr
May 15, 2016
Abstract

We present a generic compact computational framework relying on structured random matrices that can be applied to speed up several machine learning algorithms with almost no loss of accuracy. The applications include new fast LSH-based algorithms, efficient kernel computations via random feature maps, convex optimization algorithms, quantization techniques and many more. Certain models of the presented paradigm are even more compressible since they use only bit matrices, which makes them suitable for deployment on mobile devices. All our findings come with strong theoretical guarantees. In particular, as a byproduct of the presented techniques and by using a relatively new Berry-Esseen-type CLT for random vectors, we give the first theoretical guarantees for one of the most efficient existing LSH algorithms, based on the structured matrix HD₃HD₂HD₁ [1]. These guarantees, as well as theoretical results for the other aforementioned applications, follow from the same general theoretical principle that we present in the paper. Our structured family contains as special cases all previously considered structured schemes, including the recently introduced P-model [2]. Experimental evaluation confirms the accuracy and efficiency of TripleSpin matrices.

1 Introduction

Consider a randomized machine learning algorithm operating on a dataset and parameterized by a Gaussian matrix G with i.i.d. entries taken from N(0, 1). We assume that G is used to calculate Gaussian projections that are then passed to other, possibly highly nonlinear, functions.

Many machine learning algorithms are of this form. Examples include several variants of the Johnson-Lindenstrauss Transform applying random projections to reduce data dimensionality while approximately preserving Euclidean distance [3, 4], quantization techniques using random projection trees, where the splitting in each node is determined by a projection of data onto a Gaussian direction [5], algorithms solving convex optimization problems with random sketches of Hessian matrices [6, 7], kernel approximation techniques applying random feature maps produced from linear projections with Gaussian matrices followed by nonlinear mappings [2, 8, 9], several LSH-schemes [1, 10, 11] (such as some of the most effective cross-polytope LSH methods) and many more.

If data is high-dimensional, then computing random projections Gx with a dense m × n matrix G takes O(mn) time and often occupies a significant fraction of the overall run time. Furthermore, storing the matrix G frequently becomes a bottleneck in terms of space complexity. In this paper we propose a "structured variant" of such an algorithm, where the Gaussian matrix G is replaced by a structured matrix taken from the TripleSpin-family of matrices that we define. The name comes from the fact that each such matrix is a product of three other matrices, its building components, which include rotations.

Replacing G by a TripleSpin-matrix gives computational speedups since the corresponding matrix-vector products can be calculated in sub-quadratic time: with the use of techniques such as the Fast Fourier Transform, time complexity is reduced to O(n log n). Furthermore, with matrices from the TripleSpin-family, space complexity can also be substantially reduced to sub-quadratic, usually at most linear, sometimes even constant.
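To illustrate where the speedup comes from, here is a minimal sketch (assuming NumPy and a power-of-two dimension; `fwht` and `hd_product` are our own illustrative names, not from the paper) of applying three HD-style blocks in O(n log n) time per block:

```python
import numpy as np

def fwht(x):
    """Iterative Fast Walsh-Hadamard Transform (unnormalized), O(n log n)."""
    x = x.copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def hd_product(x, d):
    """Apply one normalized block: (1/sqrt(n)) * H * diag(d) * x, d in {-1,+1}^n."""
    return fwht(d * x) / np.sqrt(len(x))

rng = np.random.default_rng(0)
n = 8                                   # must be a power of two
x = rng.standard_normal(n)
y = x
for _ in range(3):                      # three independent HD blocks
    y = hd_product(y, rng.choice([-1.0, 1.0], size=n))
```

Each block is an isometry, so the three-block product preserves Euclidean norms while never materializing an n × n matrix.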

To the best of our knowledge, we are the first to provide a comprehensive theoretical explanation of the effectiveness of the structured approach. So far such an explanation was given only for some specific applications and specific structured matrices. The proposed TripleSpin-family contains all previously considered structured matrices as special cases, including the recently introduced P-model [2], yet its flexible three-block structure provides mechanisms for constructing many others. Our structured model is also the first to propose purely discrete Hadamard-based constructions with strong theoretical guarantees. We empirically show that these surpass their unstructured counterparts.

As a byproduct, we provide a theoretical explanation of the efficiency of the cross-polytope LSH method based on the structured matrix HD₃HD₂HD₁ [1], where H stands for the Hadamard matrix and the Dᵢ's are random diagonal ±1-matrices. Thus we solve the open problem posed in [1]. These guarantees, as well as theoretical results for the other aforementioned applications, arise from the same general theoretical principle that we present in the paper. Our theoretical methods apply relatively new Berry-Esseen-type Central Limit Theorem results for random vectors.

2 Related work

Structured matrices were previously used mainly in the context of the Johnson-Lindenstrauss Transform (JLT), where the goal is to linearly transform high-dimensional data and embed it into a much lower dimensional space in such a way that the Euclidean distance is approximately preserved [3, 4, 12]. Most of these structured constructions involve sparse or circulant matrices [12, 13] providing computational speedups and space compression.

Specific structured matrices were used to approximate angular distance and Gaussian kernels [9, 14]. Very recently, structured matrices coming from the so-called P-model [2] were applied to speed up random feature map computations for some special kernels (angular, arc-cosine and Gaussian). The presented techniques did not work for discrete structured constructions such as the structured matrix HD₃HD₂HD₁, since they focus on matrices with low (polylog) chromatic number of the corresponding coherence graphs, and these do not include matrices such as HD₃HD₂HD₁ or their direct non-discrete modifications.

The TripleSpin-mechanism gives in particular a highly parameterized family of structured methods for approximating general kernels with random feature maps. Among them are purely discrete computational schemes providing the most aggressive compression with only minimal loss of accuracy.

Several LSH methods use random Gaussian matrices to construct compact codes of data points and in turn speed up tasks such as approximate nearest neighbor search. Among them are the cross-polytope methods proposed in [11]. In the angular cross-polytope setup the hash family is designed for points taken from the unit sphere S^(n−1), where n stands for the data dimensionality. To construct hashes, a random matrix G with i.i.d. Gaussian entries is built. The hash of a given data point x is defined as h(x) = η(Gx / ‖Gx‖₂), where η(y) returns the vector closest to y from the set {±e₁, …, ±eₙ}, and e₁, …, eₙ stands for the canonical basis. The fastest known variant of the cross-polytope LSH [1] replaces the unstructured Gaussian matrix G with the product HD₃HD₂HD₁. No theoretical guarantees regarding that variant were known; we provide a theoretical explanation here. As in the previous setting, matrices from the TripleSpin model lead to several fast structured cross-polytope LSH algorithms not considered before.
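The hashing step described above can be sketched as follows (a simplified NumPy illustration; `cross_polytope_hash` is our own name, and the hash is encoded as an (index, sign) pair rather than the vector ±eᵢ itself):

```python
import numpy as np

def cross_polytope_hash(x, M):
    """Hash of x: the vector from {+-e_1, ..., +-e_n} closest to Mx / ||Mx||_2,
    encoded as (coordinate index, sign)."""
    y = M @ x
    y = y / np.linalg.norm(y)
    i = int(np.argmax(np.abs(y)))        # closest canonical axis
    return i, 1 if y[i] >= 0 else -1     # sign selects +e_i or -e_i

rng = np.random.default_rng(1)
n = 16
x = rng.standard_normal(n); x /= np.linalg.norm(x)
x_near = x + 0.05 * rng.standard_normal(n); x_near /= np.linalg.norm(x_near)
x_far = rng.standard_normal(n); x_far /= np.linalg.norm(x_far)
```

Replacing the dense Gaussian M with a structured matrix changes only how the product Mx is computed; nearby points still collide far more often than distant ones.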

Recently, a new method was proposed for speeding up algorithms solving convex optimization problems by approximating Hessian matrices (a bottleneck of the overall computational pipeline) with their random sketches [6, 7]. One of the presented sketches, the so-called Newton Sketch, leads to a sequence of iterates for optimizing a given function f, given by the following recursion:

x^(t+1) = argmin_x { (1/2)‖Sᵗ(∇²f(xᵗ))^(1/2)(x − xᵗ)‖₂² + ∇f(xᵗ)ᵀ(x − xᵗ) },    (1)

where {Sᵗ} is the sequence of so-called sketch matrices. Initially, sub-Gaussian sketches based on i.i.d. sub-Gaussian random variables were used. The disadvantage of sub-Gaussian sketches lies in the fact that computing the sketch SA of a given n × d data matrix A with an m × n sketching matrix S, needed for convex optimization with sketches, requires O(mnd) time. Thus the method is too slow in practice. This is where structured matrices can be applied. Some structured approaches were already considered in [6], where sketches based on randomized orthonormal systems were proposed. In that approach a sketching matrix S is constructed by sampling i.i.d. rows of the form √n·eᵢᵀHD, with each index i chosen with probability 1/n, where eᵢ is chosen uniformly at random from the set of n canonical vectors and D is a random diagonal ±1-matrix. We show that our class of TripleSpin matrices can also be used in that setting.
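A minimal sketch of such a randomized-orthonormal-system sketching matrix (assuming NumPy, a power-of-two n, and our own illustrative function names; the explicit Hadamard matrix stands in for a fast transform):

```python
import numpy as np

def hadamard(n):
    """Normalized Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(np.array([[1.0, 1.0], [1.0, -1.0]]), H)
    return H / np.sqrt(n)

def ros_sketch(A, m, rng):
    """Sketch S @ A where each of the m rows of S is sqrt(n/m) * e_i^T H D
    for a uniformly sampled canonical vector e_i and a random diagonal +-1 matrix D."""
    n = A.shape[0]
    d = rng.choice([-1.0, 1.0], size=n)      # diagonal of D
    HDA = hadamard(n) @ (d[:, None] * A)     # apply D, then normalized H
    idx = rng.integers(0, n, size=m)         # sample m canonical directions
    return np.sqrt(n / m) * HDA[idx]

rng = np.random.default_rng(0)
SA = ros_sketch(np.eye(16), 8, rng)          # the 8 x 16 sketching matrix S itself
```

Since HD is orthonormal, E[SᵀS] = I, so the sketch preserves quadratic forms in expectation while costing far less than a dense sub-Gaussian sketch.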

3 The TripleSpin-family

We now present the TripleSpin-family. If not specified otherwise, a random diagonal matrix D is a diagonal matrix with diagonal entries taken independently at random from {−1, +1}. For a sequence (a₁, …, aₙ) we denote by diag(a₁, …, aₙ) the diagonal matrix with diagonal equal to (a₁, …, aₙ). For a matrix M, let ‖M‖_F denote its Frobenius norm and ‖M‖₂ its spectral norm. We denote by H the L₂-normalized Hadamard matrix. We say that r is a random Rademacher vector if every element of r is chosen independently at random from {−1, +1}.

For a vector v ∈ ℝⁿ and a shift s, let C(v, s) be the matrix whose first row is v and in which each subsequent row is obtained from the previous one by right-shifting it in a circulant manner by s. For a sequence of matrices M₁, …, M_k we denote by (M₁; …; M_k) the matrix obtained by stacking them vertically.

Each structured matrix from the TripleSpin-family is a product of three main structured components, i.e.:

M = M₃M₂M₁,    (2)

where the matrices M₁, M₂ and M₃ satisfy the following conditions:

Condition 1: The matrices M₁ and M₂M₁ are (δ(n), p(n))-balanced isometries. Condition 2: The matrix M₂ is built from some (Δ_F(n), Δ₂(n))-smooth set of matrices and some i.i.d. sub-Gaussian random variables with sub-Gaussian norm K. Condition 3: The matrix M₃ is derived from a random vector r, where r is Rademacher or Gaussian.

If all three conditions are satisfied, then we say that M is a TripleSpin-matrix with parameters δ(n), p(n), K, Δ_F(n) and Δ₂(n). Below we explain these conditions.

Definition 1 ((δ, p)-balanced matrices)

A randomized matrix M ∈ ℝⁿˣⁿ is (δ, p)-balanced if for every x with ‖x‖₂ = 1 we have: P[‖Mx‖_∞ > δ/√n] ≤ p.

Remark 1

One can take as M₁ the matrix HD since, as we show in the Appendix, the matrix HD is a (δ(n), p(n))-balanced isometry with δ(n) logarithmic in n and p(n) decaying super-polynomially in n.
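A quick numerical check of this balancedness property (a sketch, assuming the property bounds the largest coordinate of HDx for unit x; function and variable names are ours):

```python
import numpy as np

def hadamard(n):
    """Normalized Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(np.array([[1.0, 1.0], [1.0, -1.0]]), H)
    return H / np.sqrt(n)

rng = np.random.default_rng(0)
n = 256
H = hadamard(n)
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
# Largest coordinate of HDx over independent draws of the diagonal D:
maxima = [np.max(np.abs(H @ (rng.choice([-1.0, 1.0], size=n) * x)))
          for _ in range(200)]
```

Empirically the largest coordinate stays on the order of log(n)/√n, i.e. HD spreads the mass of x almost uniformly over the coordinates, which is exactly what the balancedness condition asks for.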

Definition 2 ((Δ_F, Δ₂)-smooth sets)

A deterministic set of matrices is (Δ_F, Δ₂)-smooth if:

  • for , where stands for the column of ,

  • for and we have: ,

  • and

Remark 2

If the unstructured matrix G has rows taken from a general multivariate Gaussian distribution with diagonal covariance matrix, then one needs to rescale the vectors r accordingly. For clarity we assume here that the covariance matrix is the identity, and we present our theoretical results for that setting.

All structured matrices previously considered are special cases of the TripleSpin-family (for clarity we will explicitly show it for some important special cases). Others, not considered before, are also covered by the TripleSpin-family. We have:

Lemma 1

Matrices HD₃HD₂HD₁, as well as matrices of the form G_circ D₂HD₁, where G_circ is a Gaussian circulant matrix, are valid TripleSpin-matrices for appropriate settings of the parameters δ(n), p(n), K, Δ_F(n) and Δ₂(n). The same is true if one replaces G_circ by a Gaussian Hankel or Toeplitz matrix.

3.1 Stacking together TripleSpin-matrices

We described TripleSpin-matrices as square matrices, but in practice we are not restricted to square matrices. We can construct an m × n TripleSpin matrix for m < n from a square n × n TripleSpin-matrix by taking its first m rows. We can then stack vertically such independently constructed matrices to obtain an m × n matrix for both m < n and m > n. We think of the number of rows taken from each independent structured block as another parameter of the model that tunes the "structuredness" level. Larger values indicate a more structured approach, while smaller values lead to more random matrices, with the fully unstructured matrix as the limiting case.
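The stacking construction can be sketched as follows (NumPy; we use an HD₃HD₂HD₁-style square block as the example TripleSpin block, with our own function names; the explicit Hadamard matrix stands in for a fast transform):

```python
import numpy as np

def hadamard(n):
    """Normalized Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(np.array([[1.0, 1.0], [1.0, -1.0]]), H)
    return H / np.sqrt(n)

def square_block(n, rng):
    """One square structured block of the form HD_3 HD_2 HD_1."""
    H, B = hadamard(n), np.eye(n)
    for _ in range(3):
        d = rng.choice([-1.0, 1.0], size=n)
        B = H @ (d[:, None] * B)           # left-multiply by H * diag(d)
    return B

def stacked(m, n, rng):
    """m x n matrix: stack independent square blocks, truncating the last one."""
    rows, blocks = 0, []
    while rows < m:
        take = min(n, m - rows)
        blocks.append(square_block(n, rng)[:take])
        rows += take
    return np.vstack(blocks)
```

Each square block is independently drawn, so taking fewer rows per block makes the resulting matrix more random at the cost of more blocks to generate.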

4 Computing general kernels with TripleSpin-matrices

Previous works on approximating kernels with structured matrices covered only some special kernels, namely the Gaussian, arc-cosine and angular kernels. We explain here how the structured approach (in particular, our TripleSpin-family) can be used to approximate most kernels well. Theoretical guarantees that also cover this special case are given in the subsequent section.

For kernels that can be represented as an expectation over Gaussian random variables it is natural to approximate them using structured matrices. We start our analysis with the so-called Pointwise Nonlinear Gaussian kernels (PNG), which are of the form:

K(x, y) = E[f(gᵀx)·f(gᵀy)],    (3)

where g ∼ N(0, Σ), the expectation is over this multivariate Gaussian, and f is a fixed nonlinear function (the positive-semidefiniteness of the kernel follows from it being an expectation of dot products). The expectation, interpreted in the Monte-Carlo approximation setup, leads to an unbiased estimation by the normalized dot product (1/m)·f(Gx)ᵀf(Gy), where G is a random Gaussian matrix with m rows and f is applied pointwise, i.e. coordinate-wise to the projected vectors.

In this setting, matrices from the TripleSpin-family can replace G (for diagonal covariance matrices) to speed up computations and reduce storage complexity. The idea of using random structured matrices to evaluate such kernels is common in the literature, e.g. [9, 15], but no theoretical guarantees were given so far for general PNG kernels (even under the restriction of a diagonal covariance matrix).

PNGs define a large family of kernels characterized by the choice of nonlinearity. Prominent examples include the Euclidean and angular distances [9], the arc-cosine kernel [16] and the sigmoidal neural network kernel [17]. Since sums of kernels are again kernels, we can construct an even larger family of kernels by summing PNGs. A simple example is the Gaussian kernel, which can be represented as a sum of two PNGs with the nonlinearity replaced by the trigonometric functions cos and sin.
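The cos/sin decomposition of the Gaussian kernel can be checked numerically; below is a hedged sketch (NumPy; `gaussian_rf` is our own name, and the unstructured Gaussian G could be swapped for a TripleSpin-matrix):

```python
import numpy as np

def gaussian_rf(X, G):
    """Random feature map Z with Z(x)^T Z(y) ~= exp(-||x - y||^2 / 2):
    Z(x) = [cos(Gx); sin(Gx)] / sqrt(m) for an m x d Gaussian matrix G."""
    P = X @ G.T
    return np.hstack([np.cos(P), np.sin(P)]) / np.sqrt(G.shape[0])

rng = np.random.default_rng(0)
d, m = 3, 4096
X = rng.standard_normal((5, d))
G = rng.standard_normal((m, d))
Z = gaussian_rf(X, G)
K_approx = Z @ Z.T
K_exact = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1))
```

The identity cos(a)cos(b) + sin(a)sin(b) = cos(a − b) turns each feature pair into an unbiased Monte-Carlo sample of E[cos(gᵀ(x − y))] = exp(−‖x − y‖²/2).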

Since the Laplacian, exponential and rational quadratic kernels can all be represented as mixtures of Gaussian kernels with different variances, they can be easily approximated by a finite sum of PNGs. Remarkably, virtually all kernels can be represented as a (potentially infinite) sum of PNGs with diagonal covariance matrices. Recall that a kernel K(x, y) is stationary if it depends only on x − y, and non-stationary otherwise. Harnessing Bochner's and Wiener's Tauberian theorems [18], we show that all stationary kernels may be approximated arbitrarily well by sums of PNGs.

Theorem 4.1 (stationary kernels)

The family of functions

with , , , is dense in the family of stationary real-valued kernels with respect to pointwise convergence.

This family corresponds to the spectral mixture kernels of [19]. We can extend these results to arbitrary, non-stationary, kernels; the precise statement and its proof can be found in the Appendix.

Theorem 4.1 and its non-stationary analogue show that the family of sums of PNGs contains virtually all kernel functions. If a kernel can be well approximated by the sum of only a small number of PNGs then we can use the TripleSpin-family to efficiently evaluate it. It has been found in the Gaussian Process literature that often a small sum of kernels tuned using training data is sufficient to explain phenomena remarkably well [19]. The same should also be true for the sum of PNGs. This suggests a methodology for learning the parameters of a sum of PNGs from training data and then applying it out of sample. We leave this investigation for future research.

5 Theoretical results

We now show that matrices from the TripleSpin-family can replace their unstructured counterparts in many machine learning algorithms with minimal loss of accuracy.

Let A be a randomized machine learning algorithm operating on a dataset. We assume that A uses certain functions f₁, f₂, … such that each fᵢ uses a given Gaussian matrix G (the matrix G can be the same or different for different functions) with independent rows taken from a multivariate Gaussian distribution with diagonal covariance matrix. A may also use other functions that do not depend directly on Gaussian matrices but may, for instance, operate on the outputs of the fᵢ. We assume that each fᵢ applies G to vectors from some linear space of dimensionality at most d.

Remark 3

In the kernel approximation setting with random feature maps, one can match each pair of vectors (x, y) with a different function. Each such function computes the approximate value of the kernel for the vectors x and y. Thus in that scenario each function applies G to vectors from a linear space of dimensionality at most 2 (since one can take the space spanned by x and y).

Remark 4

In vector quantization algorithms using random projection trees one can take the algorithm itself as a single such function, applied to a space whose dimensionality is the intrinsic dimensionality of the given dataset (random projection trees are often used when the intrinsic dimensionality is much smaller than the data dimensionality).

Fix a function of A using an unstructured Gaussian matrix G and applying it to a linear space of dimensionality d. Note that the outputs of G on vectors from that space are determined by the sequence (Gv₁, …, Gv_d), where v₁, …, v_d stands for a fixed orthonormal basis of the space. Thus they are determined by the following vector, obtained from "stacking together" the vectors Gv₁, …, Gv_d: ((Gv₁)ᵀ, …, (Gv_d)ᵀ)ᵀ.

Notice that this stacked vector is a Gaussian vector with independent entries (this comes from the fact that the rows of G are independent and the observation that the projections of a Gaussian vector onto orthogonal directions are independent). Thus its covariance matrix is the identity.

Definition 3 (ϵ-similarity)

We say that a multivariate Gaussian distribution is ϵ-similar to the multivariate Gaussian distribution with covariance matrix I if its covariance matrix has all diagonal entries equal to 1 and all off-diagonal entries of absolute value at most ϵ.

To measure the "closeness" of the algorithm with TripleSpin-model matrices to the original algorithm, we will measure how "close" in distribution the corresponding stacked vector of the structured version is to that of the unstructured version. The following definition proposes a quantitative way to measure this closeness. Without loss of generality we will assume from now on that each structured matrix consists of just one block, since different blocks of a structured matrix are chosen independently.

Definition 4

Let A be as above. For given η, ϵ > 0, the class of algorithms A(η, ϵ) is the set of algorithms obtained from A by replacing the unstructured Gaussian matrices with their structured counterparts in such a way that for any x with ‖x‖₂ = 1 and any convex set S the following holds:

|P[v_struct ∈ S] − P[v ∈ S]| ≤ η,    (4)

where v_struct denotes the corresponding stacked vector of the structured version and v is taken from some multivariate Gaussian distribution that is ϵ-similar to the standard one.

The smaller these parameters, the closer in distribution the structured and unstructured versions are, and the more accurate the structured version of the algorithm is. Now we show that TripleSpin-matrices lead to algorithms in this class with small parameters.

Theorem 5.1 (structured ML algorithms)

Let A be the randomized algorithm using unstructured Gaussian matrices G, and let the corresponding functions and linear spaces be as at the beginning of the section. Replace the unstructured matrix G by one of its structured variants from the TripleSpin-family defined in Section 3, built from independent blocks of rows as in Section 3.1. Then, for n large enough, with probability at least:

(5)

the structured version of the algorithm belongs to the class from Definition 4, where the parameters δ(n), p(n), K, Δ_F(n) and Δ₂(n) are as in the definition of the TripleSpin-family from Section 3 and the probability is with respect to the random choices of the structured matrices.

Theorem 5.1 implies strong accuracy guarantees for specific matrices from the TripleSpin-family. As a corollary we get, for instance:

Theorem 5.2

Under the assumptions of Theorem 5.1, the probability that the structured version of the algorithm belongs to the class from Definition 4 is at least the stated bound for the structured matrices HD₃HD₂HD₁, as well as for structured matrices of the form G_circ D₂HD₁, where G_circ is a Gaussian circulant, Gaussian Toeplitz or Gaussian Hankel matrix.

As a corollary of Theorem 5.2, we obtain the following result showing the effectiveness of the cross-polytope LSH with structured matrices that was only heuristically confirmed before [1].

Theorem 5.3

Let x, y be two unit L₂-norm vectors. Let P(x, y) be the vector indexed by all ordered pairs of canonical directions (±eᵢ, ±eⱼ), where the value of the entry indexed by (u, w) is the probability that h(x) = u and h(y) = w, and h(v) stands for the hash of v. Then, with probability at least the bound of Theorem 5.2, the stochastic vector for the unstructured Gaussian matrix G and its structured counterpart for the matrix HD₃HD₂HD₁ are close in the infinity-norm for n large enough, up to a universal constant c. The probability above is taken with respect to the random choices of D₁, D₂ and D₃.

The proof for the discrete structured setting applies Berry-Esseen-type results for random vectors (details in the Appendix), showing that for n large enough the random vectors r act similarly to Gaussian vectors.

6 Experiments

Experiments have been carried out on a single processor machine (Intel Core i7-5600U CPU @ 2.60GHz, 4 hyper-threads) with 16GB RAM for the first two applications and a dual processor machine (Intel Xeon E5-2640 v3 @ 2.60GHz, 32 hyper-threads) with 128GB RAM for the last one. Every experiment was conducted using Python. In particular, NumPy is linked against a highly optimized BLAS library (Intel MKL). The Fast Fourier Transform is performed using numpy.fft and the Fast Hadamard Transform using ffht from [1]. To have a fair comparison, we configured the numerical libraries so that every experiment is done on a single thread. Every parameter corresponding to a TripleSpin-matrix is computed in advance, so that the reported speedups take only matrix-vector products into account. All figures should be viewed in color.

6.1 Locality-Sensitive Hashing (LSH) application

In the first experiment, to support our theoretical results for the TripleSpin-matrices in the cross-polytope LSH, we compared collision probabilities (Figure 1) for the low-dimensional case. Results are shown for one hash function (averaged over multiple runs). For each distance interval, the collision probability has been computed for sampled points for a random Gaussian matrix G and four other types of TripleSpin-matrices (in descending order of number of parameters), including HD₃HD₂HD₁ and constructions built respectively from Gaussian Toeplitz and Gaussian skew-circulant matrices.

We can see that all TripleSpin-matrices show high collision probabilities for small distances and low ones for large distances. All the curves are almost identical. As theoretically predicted, there is no loss of accuracy (sensitivity) by using matrices from the TripleSpin-family.

Figure 1: Cross-polytope LSH - collision probabilities

Figure 2: Speedups for Gaussian kernel approximation (one row per TripleSpin construction; matrix dimensions increase from left to right):
x1.4 x3.4 x6.4 x12.9 x28.0 x42.3 x89.6
x1.5 x3.6 x6.8 x14.9 x31.2 x49.7 x96.5
x2.3 x6.0 x13.8 x31.5 x75.7 x137.0 x308.8
x2.2 x6.0 x14.1 x33.3 x74.3 x140.4 x316.8

6.2 Kernel approximation

In the second experiment, we compared feature maps obtained with Gaussian random matrices and specific TripleSpin-matrices for the Gaussian and angular kernels. To test the quality of the structured kernels' approximations, we compute the Gram-matrix reconstruction error in the Frobenius norm, ‖K − K̃‖_F / ‖K‖_F as in [2], where K and K̃ are respectively the exact and approximate Gram-matrices, as a function of the number of random features. When the number of random features is greater than the data dimensionality, we apply the described block-mechanism. We used the USPST dataset (test set), which consists of scans of handwritten digits from envelopes of the U.S. Postal Service. It contains 2007 points of dimensionality 258, corresponding to descriptors of 16 x 16 grayscale images. For the Gaussian kernel, the bandwidth is fixed in advance. The results are averaged over multiple runs.
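The error metric used above can be sketched as follows (NumPy; `gram_error` is our own name for the quantity defined in the text):

```python
import numpy as np

def gram_error(K_exact, K_approx):
    """Relative Gram-matrix reconstruction error ||K - K~||_F / ||K||_F."""
    return (np.linalg.norm(K_exact - K_approx, ord='fro')
            / np.linalg.norm(K_exact, ord='fro'))

K = np.eye(4)
err = gram_error(K, 0.9 * K)   # relative error of a uniformly damped Gram matrix
```

The same scalar is computed for the unstructured and each structured feature map, so the curves in Figure 3 are directly comparable.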

Results on the USPST dataset:

The following matrices have been tested: a Gaussian random matrix G and several TripleSpin-matrices.

In Figure 3, for both kernels, all TripleSpin-matrices perform similarly to a random Gaussian matrix, with one of them giving the best results (see Figure 5 in the Appendix for additional experiments). The efficiency of the TripleSpin-mechanism does not depend on the dataset.

Figure 3: Accuracy of random feature map kernel approximation

Figure 2 shows significant speedups obtained with the TripleSpin-matrices. These are defined as the runtime ratio t_G / t_T, where G is a random Gaussian matrix, T is a TripleSpin matrix and t_M stands for the runtime of the corresponding matrix-vector product.

6.3 Newton sketches

Our last experiment covers the Newton sketch approach initially proposed in [6] as a generic optimization framework. We show that TripleSpin matrices can be used for this purpose and thus can speed up solvers for several convex optimization problems. The logistic regression problem is considered (see the Appendix for more details). Our goal is to find the parameter vector minimizing the logistic regression cost on a dataset whose points are sampled from a centered multivariate Gaussian distribution with a randomly generated covariance matrix and whose labels are generated at random. Various sketching matrices are considered.

We first show that equivalent convergence properties are exhibited for various TripleSpin-matrices. Figure 4 illustrates the convergence of the Newton sketch algorithm, as measured by the optimality gap defined in [6], versus the iteration number. While it is expected that sketched versions of the algorithm do not converge as quickly as the exact Newton approach, the figure confirms that the various TripleSpin-matrices exhibit similar convergence behavior.

As shown in [6], when the dimensionality of the problem increases, the computational cost of computing the Hessian in the exact Newton approach becomes very large. The complexity of the structured Newton-sketch approach with TripleSpin-matrices is substantially lower. Wall-clock times for computing single Hessian matrices are illustrated in Figure 4. This figure confirms that the increase in the number of iterations of the Newton sketch compared to the exact method is compensated by the efficiency of the sketched computations; in particular, Hadamard-based sketches yield improvements already at the lowest dimensions.

Figure 4: Numerical illustration of convergence (left) and computational complexity (right) of the Newton sketch algorithm with various TripleSpin-matrices. (left) Various sketching structures are compared in terms of convergence against iteration number. (right) Wall-clock times of TripleSpin structures are compared in various dimensionality settings.

References

  • [1] Alexandr Andoni, Piotr Indyk, Thijs Laarhoven, Ilya P. Razenshteyn, and Ludwig Schmidt. Practical and optimal LSH for angular distance. In NIPS, pages 1225–1233, 2015.
  • [2] Krzysztof Choromanski and Vikas Sindhwani. Recycling randomness with structure for sublinear time kernel expansions. To appear in ICML, 2016.
  • [3] Edo Liberty, Nir Ailon, and Amit Singer. Dense fast random projections and lean Walsh transforms. In RANDOM, pages 512–522, 2008.
  • [4] Nir Ailon and Edo Liberty. An almost optimal unrestricted fast Johnson-Lindenstrauss transform. ACM Transactions on Algorithms (TALG), 9(3):21, 2013.
  • [5] Sanjoy Dasgupta and Yoav Freund. Random projection trees and low dimensional manifolds. In Proceedings of the 40th STOC, pages 537–546, 2008.
  • [6] Mert Pilanci and Martin J. Wainwright. Newton sketch: A linear-time optimization algorithm with linear-quadratic convergence. CoRR, abs/1505.02250, 2015.
  • [7] Mert Pilanci and Martin J. Wainwright. Randomized sketches of convex programs with sharp guarantees. In ISIT, pages 921–925, 2014.
  • [8] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In NIPS, pages 1177–1184, 2007.
  • [9] Anna Choromanska, Krzysztof Choromanski, Mariusz Bojarski, Tony Jebara, Sanjiv Kumar, and Yann LeCun. Binary embeddings with structured hashed projections. To appear in ICML, 2016.
  • [10] Moses Charikar. Similarity estimation techniques from rounding algorithms. In Proceedings on 34th STOC, pages 380–388, 2002.
  • [11] Kengo Terasawa and Yuzuru Tanaka. Spherical LSH for approximate nearest neighbor search on unit hypersphere. In WADS, pages 27–38, 2007.
  • [12] Nir Ailon and Bernard Chazelle. Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform. In Proceedings of the 38th STOC, pages 557–563. ACM, 2006.
  • [13] Jan Vybíral. A variant of the Johnson-Lindenstrauss lemma for circulant matrices. Journal of Functional Analysis, 260(4):1096–1105, 2011.
  • [14] Chang Feng, Qinghua Hu, and Shizhong Liao. Random feature mapping with signed circulant matrix projection. In Proceedings of the 24th IJCAI, pages 3490–3496, 2015.
  • [15] Quoc Le, Tamás Sarlós, and Alexander Smola. Fastfood: computing Hilbert space expansions in loglinear time. In Proceedings of the 30th ICML, pages 244–252, 2013.
  • [16] Youngmin Cho and Lawrence K. Saul. Kernel methods for deep learning. In NIPS, pages 342–350, 2009.
  • [17] Christopher K. I. Williams. Computation with infinite neural networks. Neural Comput., 10(5):1203–1216, July 1998.
  • [18] Yves-Laurent Kom Samo and Stephen Roberts. Generalized spectral kernels. arXiv preprint arXiv:1506.02236, 2015.
  • [19] Andrew Gordon Wilson and Ryan Prescott Adams. Gaussian process kernels for pattern discovery and extrapolation. arXiv preprint arXiv:1302.4245, 2013.
  • [20] Vidmantas Bentkus. On the dependence of the Berry–Esseen bound on dimension. Journal of Statistical Planning and Inference, 113(2):385–402, 2003.

Appendix A Appendix

In the Appendix we prove all theorems presented in the main body of the paper.

A.1 Computing general kernels with TripleSpin-matrices

We prove here Theorem 4.1 and its non-stationary analogue. For the convenience of the reader, we restate both theorems. We start with Theorem 4.1, which we restate as follows.

Theorem 4.1 (stationary kernels) The family of functions

with , , , is dense in the family of stationary real-valued kernels with respect to pointwise convergence.

Proof:

Theorem 3 of [18] states that:

“Let be a real-valued positive semi-definite, continuous, and integrable function such that . The family of functions

with , , is dense in the family of stationary real-valued kernels with respect to pointwise convergence.” Here .

Let ⊙ denote the element-wise product. If we choose the Gaussian kernel, as suggested in [18], then it follows that

(6)
(7)
(8)
(9)
(10)

is dense in the family of stationary real-valued kernels with respect to pointwise convergence. Equation (6) follows from Bochner's theorem, (7) from integration by substitution, (8) from sine being an odd function, (9) from the cosine angle-sum identity, and (10) from rewriting the argument as a linear transform of g. Absorbing the constants and relaxing the constraints completes the proof.

Now we will show the analogous version of that result for non-stationary kernels.

Theorem A.1 (non-stationary kernels)

The family of functions

where , with is dense in the family of real-valued continuous bounded non-stationary kernels with respect to the pointwise convergence of functions.

Proof:

Theorem 7 of [18] states that:

“Let be a real-valued, positive semi-definite, continuous, and integrable function such that . The family

where , with is dense in the family of real-valued continuous bounded non-stationary kernels with respect to the pointwise convergence of functions.”

If we choose as the Gaussian kernel:

with then

with the corresponding parameters. Absorbing the constants and relaxing the constraints completes the proof.

A.2 Structured machine learning algorithms with TripleSpin-matrices

We now prove Lemma 1, Remark 1, as well as Theorem 5.1 and Theorem 5.2.

A.2.1 Proof of Remark 1

This result first appeared in [12]. The following proof was given in [2]; we repeat it here for completeness. We will use the following standard concentration result.

Lemma 2

(Azuma's Inequality) Let X₀, X₁, …, Xₙ be a martingale and assume that |Xᵢ − Xᵢ₋₁| ≤ cᵢ for some positive constants cᵢ. Then for every a > 0 the following is true:

P[|Xₙ − X₀| ≥ a] ≤ 2·exp(−a² / (2·Σᵢ cᵢ²)).    (11)
Proof:

Denote by x̃ the image of a unit vector x under the transformation HD. Note that the i-th dimension of x̃ is given by the formula x̃ᵢ = Σⱼ Hᵢⱼ dⱼ xⱼ, where Hᵢⱼ stands for the corresponding entry of the normalized Hadamard matrix and dⱼ for the j-th diagonal entry of D. First we use Azuma's Inequality to find an upper bound on the probability that |x̃ᵢ| > a for a fixed threshold a. The partial sums of x̃ᵢ form a martingale with increments bounded by cⱼ = |Hᵢⱼ xⱼ| = |xⱼ|/√n, so that Σⱼ cⱼ² = ‖x‖₂²/n = 1/n. By Azuma's Inequality, we have:

P[|x̃ᵢ| ≥ a] ≤ 2·exp(−a²n/2).    (12)

Now we take the union bound over all n dimensions and the proof is completed.

A.2.2 TripleSpin-equivalent definition

We introduce here an equivalent definition of the TripleSpin-model that is more technical (thus we did not give it in the main body of the paper), yet more convenient to work with in the proofs.

Note that from the definition of the TripleSpin-family we can conclude that each structured matrix from the TripleSpin-family is a product of three main structured blocks, i.e.:

M = M₃M₂M₁,    (13)

where the matrices M₁, M₂ and M₃ satisfy the two conditions that we give below.

Condition 1: The matrices M₁ and M₂M₁ are (δ(n), p(n))-balanced isometries. Condition 2: The pair of matrices (M₂, M₃) is (K, Δ_F(n), Δ₂(n))-random.

Below we give the definition of -randomness.

Definition 5 ((K, Δ_F, Δ₂)-randomness)

A pair of matrices (M₂, M₃) is (K, Δ_F, Δ₂)-random if there exist a random vector r and a set of linear isometries such that:

  • r is either a ±1-vector with i.i.d. entries or Gaussian with identity covariance matrix,

  • for every the element of Zx is of the form: ,

  • there exists a set of i.i.d. sub-Gaussian random variables with sub-Gaussian norm at most , mean , the same second moments and a -smooth set of matrices such that for every we have: .

a.2.3 Proof of Lemma 1

Proof:

Let us first assume the Gaussian circulant setting (the analysis for the Toeplitz Gaussian or Hankel Gaussian case is completely analogous). In that setting it is easy to see that one can take r to be a Gaussian vector (this vector corresponds to the first row of