Fast Sparse Least-Squares Regression with Non-Asymptotic Guarantees
In this paper, we study a fast approximation method for the large-scale high-dimensional sparse least-squares regression problem that exploits Johnson-Lindenstrauss (JL) transforms, which embed a set of high-dimensional vectors into a low-dimensional space. In particular, we propose to apply a JL transform to the data matrix and the target vector and then to solve a sparse least-squares problem on the compressed data with a slightly larger regularization parameter. Theoretically, we establish optimization error bounds of the learned model for two different sparsity-inducing regularizers, i.e., the elastic net and the $\ell_1$ norm. Compared with previous relevant work, our analysis is non-asymptotic and offers more insight into the bound, the sample complexity and the regularization. As an illustration, we also provide an error bound for the Dantzig selector under JL transforms.
Tianbao Yang, Lijun Zhang, Qihang Lin, Rong Jin
The University of Iowa, Nanjing University, Alibaba Group
Given a data matrix $A \in \mathbb{R}^{n\times d}$, with each row representing an instance (where $n$ is the number of instances and $d$ is the number of features), and a target vector $y \in \mathbb{R}^n$, sparse least-squares regression (SLSR) solves the following optimization problem:
where $R(w)$ is a sparsity-inducing norm. In this paper, we consider two widely used sparsity-inducing norms: (i) the $\ell_1$ norm, which leads to a formulation also known as LASSO; and (ii) the mixture of the $\ell_1$ and $\ell_2$ norms, which leads to a formulation known as the Elastic Net. Although the $\ell_1$ norm has been widely explored and studied in SLSR, the elastic net usually yields better performance when there are highly correlated variables. Most previous studies on SLSR revolve around two intertwined topics: sparse recovery analysis and efficient optimization algorithms. We aim to present a fast approximation method for solving SLSR with a strong guarantee on the optimization error.
Recent years have witnessed unprecedented growth in both the scale and the dimensionality of data. As the size of data continues to grow, solving problem (1) remains computationally difficult because (i) memory limitations can introduce additional costs (e.g., I/O costs, or communication costs in a distributed environment); and (ii) a large number of instances or a high feature dimension usually implies slow convergence of optimization (i.e., a large iteration complexity). In this paper, we study a fast approximation method that employs JL transforms to reduce the size of $A$ and $y$. In particular, letting $\Pi \in \mathbb{R}^{m\times n}$ denote a linear transformation that obeys the JL lemma (c.f. Lemma 1), we transform the data matrix and the target vector into $\hat A = \Pi A$ and $\hat y = \Pi y$. Then we optimize a slightly modified SLSR problem using the compressed data $\hat A$ and $\hat y$ to obtain an approximate solution $\hat w$. The proposed method is supported by (i) a theoretical analysis that provides a strong guarantee on the optimization error of $\hat w$ in both the $\ell_2$ norm and the $\ell_1$ norm, i.e., $\|\hat w - w_*\|_2$ and $\|\hat w - w_*\|_1$; and (ii) empirical studies on a synthetic dataset and a real dataset. We emphasize that besides large-scale learning, the approximation method by JL transforms can also be used in privacy-sensitive applications, which is beyond the scope of this work.
In fact, our work is not the first that employs random reduction techniques to reduce the size of the data for SLSR and studies the theoretical guarantee of the approximate solution. The most relevant work is presented by Zhou, Lafferty and Wasserman (referred to as Zhou's work). Below we highlight several key differences from Zhou's work, which also summarize our contributions:
Our formulation on the compressed data is different from that in Zhou's work, which simply solves the same SLSR problem using the compressed data. We introduce a slightly larger $\ell_1$ norm regularizer, which enjoys an intuitive geometric explanation. As a result, it also sheds light on the Dantzig selector under JL transforms, for which a theoretical result is also presented.
Zhou's work focused on $\ell_1$-regularized least-squares regression and the Gaussian random projection. We consider two sparsity-inducing regularizers, namely the elastic net and the $\ell_1$ norm. Moreover, since our analysis is based only on the JL lemma, any JL transform is applicable.
Zhou's theoretical analysis is asymptotic, i.e., it only holds when the number of instances approaches infinity, and it requires strong assumptions on the data matrix and other parameters to obtain sparsistency (i.e., recovery of the support set) and persistency (i.e., generalization performance). In contrast, our analysis of the optimization error relies on relaxed assumptions and is non-asymptotic. In particular, for the $\ell_1$ norm we assume the standard restricted eigen-value condition from sparse recovery analysis. For the elastic net, by exploiting the strong convexity of the regularizer, we can even dispense with the restricted eigen-value condition, and we derive better bounds when the condition does hold.
The remainder of the paper is organized as follows. In Section 2, we review related work. We present the proposed method and main results in Sections 3 and 4. Numerical experiments are presented in Section 5, followed by conclusions.
2 Related Work
Sparse Recovery Analysis.
The LASSO problem has been one of the core problems in statistics and machine learning; it essentially learns a high-dimensional sparse vector from (potentially noisy) linear measurements. A rich theoretical literature [22, 29, 23] describes the consistency, in particular the sign consistency, of various sparse regression techniques. A stringent "irrepresentable condition" has been established for achieving sign consistency. To circumvent this stringent assumption, several studies [11, 18] have proposed to precondition the data matrix and/or the target vector by a matrix $\Pi$ before solving the LASSO problem. Oracle inequalities for the solution to LASSO and for other sparse estimators (e.g., the Dantzig selector) have also been established under restricted eigen-value conditions on the data matrix and a Gaussian noise assumption. The focus in these studies is on the setting where the number of measurements is much less than the number of features, i.e., $n \ll d$. Different from these works, we consider that both $n$ and $d$ are significantly large (a setting that has recently received increasing interest) and aim to derive fast algorithms for solving the SLSR problem approximately by exploiting the JL transforms. Our recovery analysis is centered on the optimization error of the learned model with respect to the optimal solution to (1), which, together with the oracle inequality of the latter, automatically leads to an oracle inequality for the learned model under the Gaussian noise assumption.
Approximate Least-squares Regression.
In numerical linear algebra, one important problem is the over-constrained least-squares problem, i.e., finding a vector $w$ that minimizes the Euclidean norm of the residual error $\|Aw - y\|_2$, where the data matrix has $n \gg d$. An exact solver takes $O(nd^2)$ time. Several works have proposed randomized algorithms for finding an approximate solution to the above problem [9, 8]. These works share the same paradigm: apply an appropriate random matrix $\Pi$ to both $A$ and $y$ and solve the induced subproblem on $(\Pi A, \Pi y)$. Relative-error bounds for the residual and for the solution have been developed. Although the proposed method uses a similar idea to reduce the size of the data, there is a striking difference between our work and these studies in that we consider the sparsity-regularized least-squares problem when both $n$ and $d$ are very large. As a consequence, the analysis and the required condition on $m$ are substantially different. The analysis for over-constrained least-squares relies on the low rank of the data matrix $A$, while our analysis hinges on the inherent sparsity of the optimal solution $w_*$. In terms of the value of $m$ for accurate recovery, approximate least-squares regression requires $m$ to scale with the rank of $A$ (essentially $d$ in the full-rank case). In contrast, for the proposed method our analysis exhibits that $m$ scales with the sparsity $s$ of the optimal solution to (1), up to logarithmic factors. In addition, the proposed method can utilize any JL transforms as long as they obey the JL lemma. Therefore, our method can benefit from recent advances in sparser JL transforms, leading to a fast transformation of the data.
Random Projection based Learning.
Random projection has been employed to address the computational challenge of high-dimensional learning problems. In particular, letting $x_1, \ldots, x_n$ denote a set of instances, random projection reduces the high-dimensional features into a low-dimensional feature space by $\hat x_i = \Pi x_i$, where $\Pi$ is a random projection matrix. Several works have studied theoretical properties of learning in the low-dimensional space. For example, one line of work considered the following problem and its reduced counterpart (R):
Paul et al. focused on SVMs and showed that the margin and the minimum enclosing ball in the reduced feature space are preserved to within a small relative error, provided that the data matrix is of low rank. Zhang et al. studied the problem of recovering the original optimal solution and proposed a dual recovery approach, i.e., using the dual variable learned in the reduced feature space to recover the model in the original feature space. They also established a recovery error bound under the low-rank assumption on the data matrix. Recently, the low-rank assumption has been relaxed to a sparsity assumption. Zhang et al. considered the case when the optimal solution is sparse, and Yang et al. assumed that the optimal dual solution is sparse and proposed to solve a regularized dual formulation using the reduced data. They both established a recovery error bound that scales with the sparsity of the optimal primal solution or of the optimal dual solution. Random projection for feature reduction has also been applied to the ridge regression problem. However, these methods do not apply to the SLSR problem, and their analysis is developed mainly for the $\ell_2$ norm square regularizer. In order to maintain the sparsity of the solution, we compress the instances instead of the features, so that the sparse regularizer is maintained for encouraging sparsity. Moreover, our analysis exhibits a recovery error that depends on the empirical error of the optimal solution, whose magnitude could be much smaller.
The JL Transforms.
The JL transforms refer to a class of transforms that obey the JL lemma, which states that any $N$ points in Euclidean space can be embedded into $O(\epsilon^{-2}\log N)$ dimensions so that all pairwise Euclidean distances are preserved up to a factor of $1 \pm \epsilon$. Since the original Johnson-Lindenstrauss result, many transforms have been shown to satisfy the JL lemma, including Gaussian random matrices, sub-Gaussian random matrices, the randomized Hadamard transform, and sparse JL transforms by random hashing [6, 13]. The analysis presented in this work builds upon the JL lemma, and therefore our method can enjoy the computational benefits of sparse JL transforms, including less memory and fast computation.
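As an illustration of a sparse JL transform by random hashing, the following minimal numpy sketch (our own illustration, not code from the paper or from [6, 13]) maps each input row to one of $m$ output rows with a random sign, so the sketch is computed in time proportional to the number of nonzeros rather than $m \cdot n$ per column:

```python
import numpy as np

def count_sketch(A, m, seed=0):
    """Sparse JL transform by random hashing (CountSketch-style).

    Each of the n input rows is hashed to one of m output rows and
    multiplied by a random sign; duplicate buckets are accumulated.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    h = rng.integers(0, m, size=n)           # hash bucket for each row
    s = rng.choice([-1.0, 1.0], size=n)      # random sign for each row
    out = np.zeros((m, A.shape[1]))
    np.add.at(out, h, s[:, None] * A)        # accumulate signed rows
    return out
```

The sketch preserves squared Euclidean norms in expectation, and averaging over independent sketches concentrates around the true norm.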
3 A Fast Sparse Least-Squares Regression
Let $(x_1, y_1), \ldots, (x_n, y_n)$ be a set of training instances, where $x_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$. We refer to $A \in \mathbb{R}^{n\times d}$ as the data matrix and to $y \in \mathbb{R}^n$ as the target vector, and we let $\bar A_j$ denote the $j$-th column of $A$. To facilitate our analysis, let $R$ be an upper bound on the column norms, i.e., $\max_j \|\bar A_j\|_2 \le R$. Denote by $\|\cdot\|_1$ and $\|\cdot\|_2$ the $\ell_1$ norm and the $\ell_2$ norm of a vector. A function $f(w)$ is $\beta$-strongly convex with respect to a norm $\|\cdot\|$ if it satisfies $f(u) \ge f(v) + \partial f(v)^\top (u - v) + \frac{\beta}{2}\|u - v\|^2$ for all $u, v$. A function $f(w)$ is $L$-smooth with respect to $\|\cdot\|$ if $f(u) \le f(v) + \nabla f(v)^\top(u - v) + \frac{L}{2}\|u - v\|^2$ for all $u, v$, where $\partial f$ and $\nabla f$ denote the sub-gradient and the gradient, respectively. In the analysis below for the LASSO problem, we will use the following restricted eigen-value condition.
Assumption 1 (Restricted eigen-value condition). For any integer $s > 0$, the matrix $A$ satisfies the restricted eigen-value condition at sparsity level $s$ if there exist positive constants $\rho^-_s$ and $\rho^+_s$ such that
$$\rho^-_s \|w\|_2^2 \le \frac{1}{n}\|Aw\|_2^2 \le \rho^+_s \|w\|_2^2 \quad \text{for all } w \text{ with } \|w\|_0 \le s.$$
The goal of SLSR is to learn an optimal vector $w_*$ that minimizes the sum of the least-squares error and a sparsity-inducing regularizer. We consider two different sparsity-inducing regularizers: (i) the $\ell_1$ norm, $R(w) = \lambda\|w\|_1$; and (ii) the elastic net, $R(w) = \lambda\|w\|_1 + \frac{\tau}{2}\|w\|_2^2$. Thus, we rewrite the problem in (1) into the following form:
When $\tau = 0$, the problem is the LASSO problem, and when $\tau > 0$, it is the Elastic Net problem. Although many optimization algorithms have been developed for solving (2), they can still suffer from high computational complexities for large-scale high-dimensional data due to (i) an $O(nd)$ memory complexity and (ii) an iteration complexity that grows with $n$ and $d$.
To alleviate these two complexities, we consider using the JL transforms to reduce the size of the data; these transforms are discussed in more detail in subsection 3.2. In particular, we let $\Pi \in \mathbb{R}^{m\times n}$ denote the transformation matrix corresponding to a JL transform, compute the compressed data by $\hat A = \Pi A$ and $\hat y = \Pi y$, and then solve the following problem:
where the $\ell_1$ regularization parameter is enlarged by a factor $(1+\gamma)$ with $\gamma > 0$, whose theoretical value is exhibited later. We emphasize that to obtain a bound on the optimization error of the solution $\hat w$, i.e., $\|\hat w - w_*\|$, it is important to increase the value of the regularization parameter before the $\ell_1$ norm. Intuitively, after compressing the data the optimal solution may become less sparse, so increasing the regularization parameter pulls the solution closer to the original optimal solution.
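A minimal sketch of the overall pipeline, assuming a Gaussian JL matrix and a plain proximal-gradient (ISTA) lasso solver. The function names and the concrete `(1 + eps)` inflation factor below are illustrative stand-ins of our own, not the paper's prescribed solver or its theoretical value of the enlarged parameter:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(A, y, lam, iters=500):
    """Solve min_w 0.5*||A w - y||^2 + lam*||w||_1 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ w - y)
        w = soft_threshold(w - grad / L, lam / L)
    return w

def compressed_lasso(A, y, m, lam, eps=0.2, seed=0):
    """Compress (A, y) with a Gaussian JL matrix, then solve the lasso
    with a slightly enlarged parameter lam * (1 + eps), mirroring the
    paper's enlarged regularizer (eps here is a placeholder value)."""
    rng = np.random.default_rng(seed)
    Pi = rng.standard_normal((m, A.shape[0])) / np.sqrt(m)
    return lasso_ista(Pi @ A, Pi @ y, lam * (1 + eps))
```

On a well-conditioned problem with a planted sparse solution, the compressed solve recovers the support while only ever touching the $m \times d$ sketch.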
Geometric Interpretation. We can also explain the added regularization parameter, denoted $\gamma$ below, from a geometric viewpoint, which sheds light on its theoretical value and on the analysis for the Dantzig selector under JL transforms. Without loss of generality, we consider $\tau = 0$. Since $w_*$ is the optimal solution to the original problem, there exists a sub-gradient $g \in \partial\|w_*\|_1$ such that $A^\top(Aw_* - y) + \lambda g = 0$. Since $\|g\|_\infty \le 1$, $w_*$ must satisfy $\|A^\top(y - Aw_*)\|_\infty \le \lambda$, which is also the constraint in the Dantzig selector. Similarly, the compressed problem (3) also defines a domain containing its optimal solution, i.e.,
$$\hat\Omega = \left\{w: \|\hat A^\top(\hat y - \hat A w)\|_\infty \le \lambda(1+\gamma)\right\}.$$
It turns out that the enlarged parameter is chosen to ensure that the original optimal solution $w_*$ also lies in this domain, which can be verified as follows:
$$\|\hat A^\top(\hat y - \hat A w_*)\|_\infty \le \|A^\top(y - Aw_*)\|_\infty + \|\hat A^\top(\hat y - \hat A w_*) - A^\top(y - Aw_*)\|_\infty \le \lambda + \|\hat A^\top(\hat y - \hat A w_*) - A^\top(y - Aw_*)\|_\infty.$$
Hence, if we set the increase of the regularization parameter to be at least $\|\hat A^\top(\hat y - \hat A w_*) - A^\top(y - Aw_*)\|_\infty$, it is guaranteed that $w_*$ also lies in the domain. Lemma 2 in subsection 3.3 provides an upper bound on this quantity, and therefore exhibits a theoretical value for the increase. The above explanation also sheds light on the Dantzig selector under JL transforms, as presented in Section 4.
Before presenting the theoretical guarantee for the obtained solution $\hat w$, we compare the optimization of the original problem (2) and the compressed problem (3). In particular, we focus on the elastic net ($\tau > 0$), since the optimization of the problem with only the $\ell_1$ norm can be handled by adding the $\ell_2$ norm square with a small value of $\tau$.
We choose the recently proposed accelerated stochastic proximal coordinate gradient method (APCG). The reasons are threefold: (i) it achieves an accelerated convergence rate for optimizing (2), i.e., linear convergence with a square-root dependence on the condition number; (ii) it updates randomly selected coordinates of $w$, which is well suited for solving (3) since the dimensionality $d$ is much larger than the reduced number of examples $m$; and (iii) it admits a much simpler analysis of the condition number for the compressed problem (3). First, we write the objective functions in (2) and (3) in the following general form:
where the objective is split into a smooth least-squares part and the regularizer. For simplicity, we consider the case when each block of coordinates corresponds to only one coordinate. The key assumption of APCG is that the smooth part should be coordinate-wise smooth. To this end, we let $e_j$ denote the $j$-th column of the identity matrix and note that
Assume $\max_j \|\bar A_j\|_2 \le R$; then for any coordinate $j$, the curvature of the objective along $e_j$ is at most $\|\bar A_j\|_2^2 + \tau \le R^2 + \tau$. Therefore the objective is coordinate-wise smooth with smoothness parameter $R^2 + \tau$. On the other hand, it is also a $\tau$-strongly convex function. Therefore the condition number that affects the iteration complexity is $\kappa = (R^2 + \tau)/\tau$, and the iteration complexity is given by
$$O\left(d\sqrt{\kappa}\,\log(1/\varepsilon)\right),$$
where $\varepsilon$ is the target accuracy of optimization. Since the per-iteration complexity of APCG for (5) is proportional to the number of rows of the data matrix, the total time complexity follows, where $\tilde O(\cdot)$ suppresses the logarithmic term. Next, we analyze and compare the time complexity of optimization for (2) and (3). For (2), the number of rows is $n$ and the column-norm bound is $R$. For (3), by the JL lemma (Lemma 1), with high probability the column norms of $\hat A$ are close to those of $A$; letting $m$ be sufficiently large, we can conclude that the condition number for (3) is of the same order as that for (2). Therefore, the time complexities of APCG for solving (2) and (3) scale with $n$ and $m$, respectively:
Hence, we can see that the optimization time complexity of APCG for solving (3) can be reduced by up to a factor of $n/m$, which is substantial when $m \ll n$. The total time complexity is discussed after we introduce the JL lemma.
3.2 JL Transforms and Running Time
Since the proposed method builds on the JL transforms, we present the JL lemma and mention several JL transforms.
Lemma 1 (JL Lemma). For any integer $n > 0$ and any $0 < \epsilon, \delta < 1/2$, there exists a probability distribution on $m \times n$ real matrices $\Pi$ and a small universal constant $c_0$ such that, for any fixed $x \in \mathbb{R}^n$, with probability at least $1 - \delta$ we have
$$\left| \|\Pi x\|_2^2 - \|x\|_2^2 \right| \le c_0 \sqrt{\frac{\log(1/\delta)}{m}}\, \|x\|_2^2.$$
In other words, in order to preserve the Euclidean norm of any fixed vector within a relative error $\epsilon$, we need to have $m = \Omega(\epsilon^{-2}\log(1/\delta))$. Proofs of the JL lemma can be found in many studies (e.g., [7, 1, 2, 6, 13]). The value of $m$ in the JL lemma is optimal. In these studies, different JL transforms are also exhibited, including Gaussian random matrices, sub-Gaussian random matrices, the randomized Hadamard transform, and sparse JL transforms [6, 13]. For more discussions on these JL transforms, we refer the readers to the literature.
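The norm-preservation statement can be checked empirically with a Gaussian JL matrix; the constant 8 below is a heuristic stand-in for the lemma's universal constant, not a value from the paper:

```python
import numpy as np

# Empirical check of the JL lemma with a Gaussian random matrix:
# m rows scaled by 1/sqrt(m) preserve the norm of a fixed vector
# up to a (1 +/- eps) factor with probability at least 1 - delta.
rng = np.random.default_rng(42)
n, eps, delta = 10_000, 0.2, 0.01
m = int(np.ceil(8 * np.log(1 / delta) / eps ** 2))  # heuristic constant

x = rng.standard_normal(n)
Pi = rng.standard_normal((m, n)) / np.sqrt(m)
ratio = np.linalg.norm(Pi @ x) / np.linalg.norm(x)
print(f"m = {m}, norm ratio = {ratio:.3f}")  # ratio should lie in [1-eps, 1+eps]
```

Note the embedding dimension depends on $\epsilon$ and $\delta$ but not on the ambient dimension $n$, which is what makes the compression worthwhile for large $n$.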
Transformation time complexity and total amortized time complexity. Among all the JL transforms mentioned above, the transform using Gaussian random matrices is the most expensive, taking $O(mnd)$ time when applied to $A$, while the randomized Hadamard transform and sparse JL transforms can reduce it to $\tilde O(nd)$, where $\tilde O$ suppresses only a logarithmic factor. Although the transformation time complexity still scales with the size of the data, the computational benefit of the JL transform becomes more prominent when we consider the amortized time complexity. In particular, in machine learning we usually need to tune the regularization parameters (via cross-validation) to achieve better generalization performance. Let $k$ denote the total number of times (2) or (3) is solved; then the amortized time complexity is given by $T_{tr}/k + T_{opt}$, where $T_{tr}$ refers to the time of the transformation (zero for solving (2)) and $T_{opt}$ is the optimization time. Since $T_{opt}$ for (3) is reduced significantly, the total amortized time complexity of the proposed method for SLSR is much reduced.
3.3 Theoretical Guarantees
Next, we present the theoretical guarantees on the optimization error of the obtained solution $\hat w$. We emphasize that one can easily obtain oracle inequalities for $\hat w$ by combining the optimization error with the oracle inequalities of $w_*$ under the Gaussian noise model; these are omitted here. Again, we denote by $R$ the upper bound on the column norms of the data matrix, i.e., $\max_j \|\bar A_j\|_2 \le R$. We first present two technical lemmas. All proofs are included in the appendix.
Let . With a probability at least , we have
where is the universal constant in the JL Lemma.
Let . If satisfies the restricted eigen-value condition as in Assumption 1, then with a probability at least , we have
where is the universal constant in the JL lemma.
Theorem 2 (Optimization Error for Elastic Net).
Remark: First, we can see that the enlarged regularization parameter is larger than $\lambda$ with high probability due to Lemma 2, which is consistent with our geometric interpretation. The upper bound of the optimization error exhibits several interesting properties: (i) a term scaling with the sparsity occurs commonly in theoretical results of sparse recovery; (ii) a term related to the condition number of the optimization problem (2) reflects the intrinsic difficulty of optimization; and (iii) a term related to the empirical error of the optimal solution $w_*$. This last term makes sense because if the optimal solution satisfies $y = Aw_*$ (zero empirical error), then it is straightforward to verify that the solution of the compressed problem also satisfies the optimality condition of (2); due to the uniqueness of the optimal solution to (2), it follows that $\hat w = w_*$.
Theorem 3 (Optimization Error for LASSO).
4 Dantzig Selector under JL transforms
In light of our geometric explanation of the enlarged regularizer, we present the Dantzig selector under JL transforms and its theoretical guarantee. The original Dantzig selector is the optimal solution to the following problem:
Under JL transforms, we propose the following estimator
From the previous analysis, we know that $w_*$ satisfies the constraint in (8) provided that the enlargement of the constraint parameter is set appropriately, which is the key to establishing the following result.
Theorem 4 (Optimization Error for Dantzig Selector).
Remark: Compared to the result in Theorem 3, the definition of the enlarged parameter is slightly different, and there is an additional term in the bound. This additional term seems unavoidable, since feasibility of $w_*$ for (8) does not necessarily imply that $w_*$ is also the optimal solution to (8). However, this should not be a concern if we consider the oracle inequality of $\hat w$ via the oracle inequality of the Dantzig selector, which holds under the Gaussian noise assumption.
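To make the estimator concrete, the Dantzig selector can be solved as a linear program via scipy; the variable split $w = u - v$ and the function below are our own illustration, and the same code applies to the compressed data by passing $\hat A$, $\hat y$ and the enlarged parameter:

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(A, y, lam):
    """Solve  min ||w||_1  s.t.  ||A^T (y - A w)||_inf <= lam  as an LP.

    Split w = u - v with u, v >= 0 so the objective sum(u + v) is linear,
    and turn the infinity-norm constraint into 2d linear inequalities.
    """
    d = A.shape[1]
    G, b = A.T @ A, A.T @ y
    M = np.hstack([G, -G])                   # M @ [u; v] = G @ (u - v)
    A_ub = np.vstack([M, -M])                # |G w - b| <= lam, both sides
    b_ub = np.concatenate([b + lam, lam - b])
    res = linprog(np.ones(2 * d), A_ub=A_ub, b_ub=b_ub,
                  bounds=(0, None), method="highs")
    return res.x[:d] - res.x[d:]
```

Any least-squares solution is feasible for this LP (its correlation residual is zero), so the program is always feasible and the returned $w$ has $\ell_1$ norm no larger than that of any feasible sparse solution.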
5 Numerical Experiments
In this section, we present numerical experiments to complement the theoretical results. We conduct experiments on two datasets: a synthetic dataset and a real dataset. The synthetic data is generated similarly to previous studies on sparse signal recovery. In particular, we generate a random matrix $A$ whose entries are drawn independently from a uniform distribution over a symmetric interval. A sparse vector $w_*$ is generated with the same distribution at randomly chosen coordinates. The noise $e$ is a dense vector with independent random entries drawn from a uniform distribution scaled by the noise magnitude $\sigma$. We scale the data matrix such that all entries have unit variance and scale the noise vector accordingly. Finally, the target vector is obtained as $y = Aw_* + e$. For the elastic net on the synthetic data, we try two different values of $\tau$; note that the chosen parameter values are not intended to optimize the performance of the elastic net and lasso on the synthetic data. The real data used in the experiments is the E2006-tfidf dataset; we use the version available on the libsvm website (http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/). We normalize the training data such that each dimension has mean zero and unit variance, and the testing data is normalized using the statistics computed on the training data. For the JL transform, we use random hashing.
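For concreteness, the generation procedure can be sketched as follows; since the paper's exact sizes and noise magnitude did not survive extraction, the values of `n`, `d`, `s` and `sigma` below are illustrative placeholders only:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s, sigma = 1000, 200, 10, 0.01   # placeholder sizes, not the paper's

A = rng.uniform(-1.0, 1.0, size=(n, d))
A /= A.std()                                 # scale entries to unit variance
support = rng.choice(d, size=s, replace=False)
w_star = np.zeros(d)
w_star[support] = rng.uniform(-1.0, 1.0, size=s)
e = sigma * rng.uniform(-1.0, 1.0, size=n)   # dense noise, scaled by sigma
y = A @ w_star + e
```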
The experimental results on the synthetic data under different settings are shown in Figure 2. In the left plot, we compare the optimization error for the elastic net under two different values of $\tau$. The horizontal axis is the value of the added regularization parameter. We can observe that adding a slightly larger $\ell_1$ regularizer to the compressed-data problem indeed reduces the optimization error. When the added parameter exceeds some threshold, the error increases again, which is consistent with our theoretical results; in particular, the threshold value differs between the two values of $\tau$. In the middle plot, we compare the optimization error for the elastic net under two different values of the regularization parameter $\lambda$. Similar trends of the optimization error versus the added parameter are observed. In addition, it is interesting to see that the optimization error for one setting is smaller than that for the other, which seems to contradict the theoretical results at first glance due to the explicit inverse dependence on the regularization. However, the optimization error also depends on the empirical error of the corresponding optimal model; we find that the setting with the smaller optimization error also has a smaller empirical error, which explains the result in the middle plot. For the right plot, we repeat the same experiments for lasso as in the left plot for the elastic net, and we observe similar results.
The experimental results on the E2006-tfidf dataset for lasso are shown in Figure 2. In the left plot, we show the root mean square error (RMSE) on the testing data of models learned from the original data with different values of $\lambda$. In the middle and right plots, we fix the value of $\lambda$, increase the value of the added parameter, and plot the relative optimization error and the RMSE on the testing data. Again, the empirical results are consistent with the theoretical results and verify that, with JL transforms, a slightly larger regularizer yields better performance.
6 Conclusions
In this paper, we have considered a fast approximation method for sparse least-squares regression that exploits the JL transforms. We proposed a slightly different formulation on the compressed data and interpreted it from a geometric viewpoint. We also established theoretical guarantees on the optimization error of the obtained solution for the elastic net, lasso and the Dantzig selector on the compressed data. The theoretical results are also validated by numerical experiments on a synthetic dataset and a real dataset.
-  D. Achlioptas. Database-friendly random projections: Johnson-lindenstrauss with binary coins. Journal of Computer and System Sciences., 66:671–687, 2003.
-  N. Ailon and B. Chazelle. Approximate nearest neighbors and the fast johnson-lindenstrauss transform. In Proceedings of the ACM Symposium on Theory of Computing (STOC), pages 557–563, 2006.
-  M.-F. Balcan, A. Blum, and S. Vempala. Kernels as features: on kernels, margins, and low-dimensional mappings. Machine Learning, 65(1):79–94, 2006.
-  P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of lasso and dantzig selector. ANNALS OF STATISTICS, 37(4), 2009.
-  E. Candes and T. Tao. The dantzig selector: Statistical estimation when $p$ is much larger than $n$. Ann. Statist., 35(6):2313–2351, 2007.
-  A. Dasgupta, R. Kumar, and T. Sarlós. A sparse johnson-lindenstrauss transform. In Proceedings of the ACM Symposium on Theory of Computing (STOC), pages 341–350, 2010.
-  S. Dasgupta and A. Gupta. An elementary proof of a theorem of johnson and lindenstrauss. Random Structures & Algorithms, 22(1):60–65, 2003.
-  P. Drineas, M. W. Mahoney, and S. Muthukrishnan. Sampling algorithms for l2 regression and applications. In ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1127–1136, 2006.
-  P. Drineas, M. W. Mahoney, S. Muthukrishnan, and T. Sarlós. Faster least squares approximation. Numerische Mathematik, 117(2):219–249, Feb. 2011.
-  T. S. Jayram and D. P. Woodruff. Optimal bounds for johnson-lindenstrauss transforms and streaming problems with subconstant error. ACM Transactions on Algorithms, 9(3):26, 2013.
-  J. Jia and K. Rohe. Preconditioning to comply with the irrepresentable condition. 2012.
-  W. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. In Conference in modern analysis and probability (New Haven, Conn., 1982), volume 26, pages 189–206. 1984.
-  D. M. Kane and J. Nelson. Sparser johnson-lindenstrauss transforms. Journal of the ACM, 61:4:1–4:23, 2014.
-  V. Koltchinskii. Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems. springer, 2011.
-  V. Koltchinskii. Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems: École d'Été de Probabilités de Saint-Flour XXXVIII-2008. Ecole d'été de probabilités de Saint-Flour. Springer, 2011.
-  Q. Lin, Z. Lu, and L. Xiao. An accelerated proximal coordinate gradient method. In NIPS, pages 3059–3067, 2014.
-  O. Maillard and R. Munos. Compressed least-squares regression. In NIPS, pages 1213–1221, 2009.
-  D. Paul, E. Bair, T. Hastie, and R. Tibshirani. Preconditioning for feature selection and regression in high-dimensional problems. The Annals of Statistics, 36:1595–1618, 2008.
-  S. Paul, C. Boutsidis, M. Magdon-Ismail, and P. Drineas. Random projections for support vector machines. In AISTATS, pages 498–506, 2013.
-  Y. Plan and R. Vershynin. One-bit compressed sensing by linear programming. CoRR, abs/1109.4299, 2011.
-  S. Shalev-Shwartz and T. Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. In ICML, pages 64–72, 2014.
-  R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society (Series B), 58:267–288, 1996.
-  M. J. Wainwright. Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting. IEEE Transactions on Information Theory, 55(12):5728–5741, 2009.
-  L. Xiao and T. Zhang. A proximal-gradient homotopy method for the sparse least-squares problem. SIAM Journal on Optimization, 23(2):1062–1091, 2013.
-  T. Yang, L. Zhang, R. Jin, and S. Zhu. Theory of dual-sparse regularized randomized reduction. CoRR, 2015.
-  I. E. Yen, T. Lin, S. Lin, P. K. Ravikumar, and I. S. Dhillon. Sparse random feature algorithm as coordinate descent in hilbert space. In NIPS, pages 2456–2464, 2014.
-  L. Zhang, M. Mahdavi, R. Jin, T. Yang, and S. Zhu. Recovering the optimal solution by dual random projection. In Proceedings of the Conference on Learning Theory (COLT), pages 135–157, 2013.
-  L. Zhang, M. Mahdavi, R. Jin, T. Yang, and S. Zhu. Random projections for classification: A recovery approach. IEEE Transactions on Information Theory (IEEE TIT), 60(11):7300–7316, 2014.
-  P. Zhao and B. Yu. On model selection consistency of lasso. JMLR, 7:2541–2563, 2006.
-  S. Zhou, J. D. Lafferty, and L. A. Wasserman. Compressed regression. In NIPS, pages 1713–1720, 2007.
-  H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67:301–320, 2003.
Appendix A Proofs of main theorems
a.1 Proof of Theorem 2
Recall the definitions
First, we note that
By the optimality of $\hat w$ and the strong convexity of the objective, for any $w$ we have
By the optimality condition of $w_*$, there exists a sub-gradient $g$ such that
By utilizing the above equation in (A.1), we have
Let $S$ denote the support set of $w_*$ and $S^c$ denote its complement. Since $g$ could be any sub-gradient of the $\ell_1$ norm at $w_*$, we fix a particular choice. Then we have
where the last inequality uses and . Combining the above inequality with (12), we have
By splitting and reorganizing the above inequality we have
If , then we have
Note that inequality (14) holds regardless of that value. Since
by combining the above inequalities with (13), we can get
We can then complete the proof of Theorem 2 by noting the upper bound in Lemma 2 and by setting the enlarged parameter according to the theorem.
a.2 Proof of Theorem 3
When , the reduced problem becomes
From the proof of Theorem 2, we have
Then we have the following lemma, whose proof is deferred to the next section.
If the data matrix satisfies the restricted eigen-value condition at the appropriate sparsity level, then