## Abstract

In modern data analysis, random sampling is an efficient and widely used strategy for overcoming the computational difficulties brought by large sample sizes. In previous studies, researchers conducted random sampling according to the input data but independently of the response variable; however, the response variable may also be informative for sampling. In this paper we propose an adaptive sampling method, called gradient-based sampling, which depends on both the input data and the output, for fast solving of least-squares (LS) problems. We draw data points from the full data by random sampling according to their gradient values. This sampling is computationally cheap, since the running time of computing the sampling probabilities is reduced to $O(nd)$, where $n$ is the full sample size and $d$ is the dimension of the input. Theoretically, we establish an error bound analysis for general importance sampling with respect to the LS solution from the full data. This result establishes an improved performance guarantee for the use of our gradient-based sampling. Synthetic and real data sets are used to argue empirically that gradient-based sampling has a clear advantage over existing sampling methods in both statistical efficiency and computational saving.

## 1 Introduction

In recent years, modern data analysis has increasingly had to address enormous data sets. Facing ever-larger samples, computational savings play a major role in the analysis. One simple way to reduce the computational cost is to perform random sampling, that is, to use a small proportion of the data as a surrogate for the full sample in model fitting and statistical inference. Among random sampling strategies, uniform sampling is simple but naive, since it fails to exploit the unequal importance of the data points. As an alternative, leverage-based sampling performs random sampling with nonuniform probabilities that depend on the empirical statistical leverage scores of the input matrix $X$. It has been intensively studied in the machine learning community and has been proved to achieve much better results than uniform sampling for worst-case input [1, 2, 3, 4]. However, leverage-based sampling relies on the input data but is independent of the output variable, so it does not make use of the information in the output. Another shortcoming is that computing the leverage scores is itself expensive, although approximations of the leverage scores have been proposed to further reduce the computational cost [5, 6, 7].

In this paper, we propose an adaptive importance sampling scheme, gradient-based sampling, for solving the least-squares (LS) problem. This sampling attempts to make full use of the data information, including both the input data and the output variable. The adaptive process can be summarized as follows: given a pilot estimate (a good "guess") for the LS solution, determine the importance of each data point by calculating its gradient value, then draw a subsample from the full data by importance sampling according to the gradient values. One key contribution of this sampling is that it saves more computational time than leverage-based sampling: the running time of computing the probabilities is reduced to $O(nd)$, where $n$ is the sample size and $d$ is the input dimension. It is worth noting that, although we apply gradient-based sampling to the LS problem, we believe it may be extended to fast solving of other large-scale optimization problems, as long as the gradient of the objective function is available. However, this is beyond the scope of this paper, so we do not pursue the extension here.

Theoretically, we give a risk analysis, i.e., an error bound, for the LS solution obtained from random sampling. [8] and [9] gave risk analyses for approximating LS by Hadamard-based projection and covariance-thresholded regression, respectively. However, no such analysis exists for importance sampling. Our error bound analysis is a general result that applies to any importance sampling as long as its conditions hold. Using this result, we establish an improved performance guarantee for the use of our gradient-based sampling. It is improved in the sense that gradient-based sampling makes the bound approximately attain its minimum, whereas previous sampling methods cannot achieve this. Additionally, the non-asymptotic result provides a way of balancing the tradeoff between the subsample size and the statistical accuracy.

Empirically, we conduct detailed experiments on datasets generated from mixture Gaussians and on real datasets. These empirical studies show that gradient-based sampling is not only more statistically efficient than leverage-based sampling but also computationally much cheaper. Another important aim of the detailed experiments on synthetic datasets is to guide the use of the sampling in the different situations that users may encounter in practice.

The remainder of the paper is organized as follows. In Section 2, we formally describe the random sampling algorithm for solving LS; we then introduce gradient-based sampling in Section 3. The non-asymptotic analysis is provided in Section 4. We study the empirical performance on synthetic and real-world datasets in Section 5.

Notation: For a symmetric matrix $A$, we define $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ as its largest and smallest eigenvalues. For a vector $a$, we define $\|a\|$ as its $L_2$ norm.

## 2 Problem Set-up

For the LS problem, suppose there are an $n \times d$ input matrix $X$ and an $n \times 1$ response vector $y$. We focus on the setting $n \gg d$. The LS problem is to minimize the sample risk function of the parameters $\beta$ as follows:

$$\sum_{i=1}^{n} (y_i - x_i^T\beta)^2/2 =: \sum_{i=1}^{n} l_i. \tag{1}$$

The solution of equation (1) takes the form of

$$\hat\beta_n = (n^{-1}X^TX)^{-1}(n^{-1}X^Ty) =: \Sigma_n^{-1} b_n, \tag{2}$$

where $\Sigma_n = n^{-1}X^TX$ and $b_n = n^{-1}X^Ty$. However, the challenge of large sample size exists even in this simple problem: the sample size $n$ can be so large that the computational cost of calculating the LS solution (2) is very expensive or even unaffordable.

We perform the random sampling algorithm as follows:
(a) Assign sampling probabilities $\{\pi_i\}_{i=1}^n$ to all data points such that $\sum_{i=1}^n \pi_i = 1$;
(b) Draw a subsample $S$ by random sampling according to these probabilities;
(c) Minimize a weighted loss function to get an estimate

$$\tilde\beta = \arg\min_{\beta\in\mathbb{R}^d} \sum_{i\in S} \frac{1}{2\pi_i}(y_i - x_i^T\beta)^2 = \Sigma_s^{-1} b_s, \tag{3}$$

where $\Sigma_s = n^{-1}\sum_{i\in S}\pi_i^{-1}x_i x_i^T$ and $b_s = n^{-1}\sum_{i\in S}\pi_i^{-1}x_i y_i$ are the subsample counterparts of $\Sigma_n$ and $b_n$, computed from the rows of $X$ and $y$ corresponding to the subsample $S$ of size $r$. Note that the last equality in (3) holds under the assumption that $\Sigma_s$ is invertible. Throughout this paper, we assume for convenience that $\Sigma_s$ is invertible, since $r \gg d$ in our setting; it can be replaced by a regularized version if it is not invertible.
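As a concrete illustration of steps (a)–(c), the following is a minimal numpy sketch (not the authors' code; the function name, the Poisson-sampling step, and the capping of inclusion probabilities at 1 are our own choices here):

```python
import numpy as np

def subsample_ls(X, y, probs, r, rng=None):
    """Steps (a)-(c): draw roughly r rows by Poisson sampling with
    probabilities proportional to `probs` (which sum to 1), then solve
    the inverse-probability-weighted LS problem as in (3)."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    # inclusion probability of row i is min(r * probs[i], 1)
    incl = np.minimum(r * probs, 1.0)
    keep = rng.random(n) < incl
    w = 1.0 / incl[keep]                  # inverse-probability weights
    Xs, ys = X[keep], y[keep]
    # weighted normal equations: (Xs^T W Xs) beta = Xs^T W ys
    A = Xs.T @ (w[:, None] * Xs)
    b = Xs.T @ (w * ys)
    return np.linalg.solve(A, b)
```

On noiseless data any full-rank subsample recovers the LS solution exactly, which makes the estimator easy to sanity-check.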

How to construct the sampling probabilities $\{\pi_i\}$ is a key component of the random sampling algorithm. One simple method is uniform sampling, i.e., $\pi_i = 1/n$; another is leverage-based sampling, i.e., $\pi_i = h_{ii}/\sum_{j=1}^n h_{jj}$, where $h_{ii} = x_i^T(X^TX)^{-1}x_i$ is the leverage score of the $i$-th data point. In the next section, we introduce a new efficient method, gradient-based sampling, which draws data points according to the gradient value of each data point.

Related Work. [10, 11, 4] developed leverage-based sampling for matrix decomposition, and [10, 12] applied the sampling method to approximate the LS solution. [13] derived the bias and variance formulas for the leverage-based sampling algorithm in linear regression using a Taylor series expansion. [14] further provided upper bounds on the mean-squared error and the worst-case error of randomized sketching for the LS problem. [15] proposed a sampling-dependent error bound and then derived a better sampling distribution from this bound. Fast algorithms for approximating leverage scores have been proposed to further reduce the computational cost [5, 6, 7].

## 3 Gradient-Based Sampling

Gradient-based sampling uses a pilot solution of the LS problem to compute the gradient of the objective function, and then samples a subsample according to the calculated gradient values. It differs from leverage-based sampling in that the sampling probability is allowed to depend on the output $y$ as well as the input data. Given a pilot estimate (good guess) $\beta_0$ for the parameters, we calculate the gradient for the $i$-th data point:

$$g_i = \frac{\partial l_i(\beta_0)}{\partial \beta} = x_i(y_i - x_i^T\beta_0). \tag{4}$$

The gradient represents the slope of the tangent of the loss function; logically, if the gradients of some data points are large in some sense, those points are important for finding the optimum. Our sampling strategy makes use of the gradient values $g_i$; specifically,

$$\pi_i^0 = \|g_i\| \Big/ \sum_{i=1}^{n} \|g_i\|. \tag{5}$$

Equations (4) and (5) show that $\pi_i^0$ combines two parts of information: one is $\|x_i\|$, which is the information provided by the input data, and the other is the residual $|y_i - x_i^T\beta_0|$, which measures the adjustment needed to move from the pilot estimate to a better estimate. Figure 1 illustrates the efficiency benefit of gradient-based sampling on a simple constructed example. The figure shows that data points with larger residuals are probably more important for approximating the solution. On the other hand, given the residual, we prefer data points with larger $\|x_i\|$ values, since these probably make the approximate solution more efficient. From the computational view, calculating the $g_i$ costs $O(nd)$, so gradient-based sampling is very cheap computationally.
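The $O(nd)$ cost is easy to see in code: computing all the gradient norms takes one matrix–vector product plus $n$ row norms. A minimal sketch of (4)–(5) (the function name is illustrative, not from the paper):

```python
import numpy as np

def gradient_probs(X, y, beta0):
    """Sampling probabilities (5): pi_i proportional to ||g_i||, where
    g_i = x_i (y_i - x_i^T beta0) as in (4), so that
    ||g_i|| = |y_i - x_i^T beta0| * ||x_i||.  Total cost is O(nd)."""
    resid = y - X @ beta0                       # n residuals, O(nd)
    grad_norms = np.abs(resid) * np.linalg.norm(X, axis=1)
    return grad_norms / grad_norms.sum()
```

Note that a data point whose residual under the pilot estimate is exactly zero receives probability zero, reflecting that it carries no gradient information at $\beta_0$.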

Choosing the pilot estimate $\beta_0$. In many applications there may be a natural choice of pilot estimate; for instance, the fit from last time is a natural choice for this time. Another simple way is to use a pilot estimate computed from an initial subsample of size $r_0$ obtained by uniform sampling. The extra computational cost is $O(r_0 d^2)$, which is assumed to be small since a modest $r_0$ is good enough. We empirically show the effect of a small $r_0$ on the performance of gradient-based sampling by simulations, and argue that one does not need to be careful when choosing $r_0$ to get a pilot estimate (see Supplementary Material, Section S1).

Poisson sampling vs. sampling with replacement. In this study, we do not use sampling with replacement as previous studies did, but instead apply Poisson sampling. Poisson sampling is executed in the following way: proceed down the list of elements and carry out one randomized experiment for each element, which results either in the selection or the non-selection of that element [16]. Poisson sampling can thus improve efficiency in some contexts compared to sampling with replacement, since it avoids repeatedly drawing the same data points, especially as the sampling ratio increases. We empirically illustrate this advantage of Poisson sampling over sampling with replacement (see Supplementary Material, Section S2).
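The two schemes can be contrasted in a short sketch (function names are illustrative; we take the Poisson inclusion probability of element $i$ to be $\min(r\pi_i, 1)$, matching the capping remark on Algorithm 1 below):

```python
import numpy as np

def poisson_sample(probs, r, rng=None):
    """Poisson sampling: one independent Bernoulli trial per element.
    The subsample size is random with expectation about r, and no
    element can be drawn twice."""
    rng = np.random.default_rng(rng)
    incl = np.minimum(r * np.asarray(probs), 1.0)
    return np.flatnonzero(rng.random(len(probs)) < incl), incl

def replacement_sample(probs, r, rng=None):
    """Sampling with replacement: exactly r draws, duplicates possible."""
    rng = np.random.default_rng(rng)
    return rng.choice(len(probs), size=r, replace=True, p=probs)
```

The duplicate-free property of Poisson sampling is what drives its advantage at larger sampling ratios: with replacement, repeated draws of the same high-probability points waste part of the budget.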

Independence from model assumptions. The LS solution is well known to be statistically efficient under the linear regression model with homogeneous errors, but model misspecification is ubiquitous in real applications. On the other hand, from the algorithmic view, LS is simply an optimization problem requiring no linear model assumption. To numerically demonstrate the independence of gradient-based sampling from model assumptions, we carry out simulation studies and find that it is an efficient sampling method from the algorithmic perspective (see Supplementary Material, Section S3).

Now as a summary we present the gradient-based sampling in Algorithm 1.

Remarks on Algorithm 1. (a) The subsample size $|S|$ from Poisson sampling is random in Algorithm 1. Since $|S|$ is a sum of independent Bernoulli variables with expectation $r$ and variance $\sum_{i=1}^n r\pi_i(1 - r\pi_i)$, the range of probable values of $|S|$ can be assessed by an interval. In practice we just need to set the expected subsample size $r$. (b) If the $\pi_i$'s are so large that $r\pi_i > 1$ for some data points, we cap the inclusion probability at 1 for those points, i.e., include them with certainty.
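Putting the pieces together, Algorithm 1 might be sketched as follows. This is a hedged illustration, not the authors' implementation: the default pilot size, the placement of the capping rule from remark (b), and the inverse-probability weighting are our reading of the text.

```python
import numpy as np

def gradient_based_ls(X, y, r, r0=None, rng=None):
    """Sketch of the gradient-based sampling algorithm:
    (i)  pilot estimate from a uniform subsample of size r0,
    (ii) gradient-based probabilities (5), capped as in remark (b),
    (iii) Poisson sampling, (iv) weighted LS solve as in (3)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    r0 = r0 or max(5 * d, 100)                 # illustrative default
    # (i) pilot estimate from a uniform subsample
    idx0 = rng.choice(n, size=min(r0, n), replace=False)
    beta0, *_ = np.linalg.lstsq(X[idx0], y[idx0], rcond=None)
    # (ii) gradient norms ||g_i|| = |y_i - x_i^T beta0| * ||x_i||
    gn = np.abs(y - X @ beta0) * np.linalg.norm(X, axis=1)
    pi = gn / gn.sum()
    incl = np.minimum(r * pi, 1.0)             # remark (b): cap at 1
    # (iii) Poisson sampling, (iv) inverse-probability-weighted LS
    keep = rng.random(n) < incl
    w = 1.0 / incl[keep]
    A = X[keep].T @ (w[:, None] * X[keep])
    b = X[keep].T @ (w * y[keep])
    return np.linalg.solve(A, b)
```

With moderate noise and $r$ well above $d$, the result should track the full-data LS solution closely.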

## 4 Error Bound Analysis of Sampling Algorithms

Our main theoretical result establishes the excess risk, i.e., an upper error bound for the subsample estimator $\tilde\beta$ as an approximation to $\hat\beta_n$, for any random sampling method. Given sampling probabilities $\{\pi_i\}$, the excess risk of the subsample estimator with respect to $\hat\beta_n$ is given in Theorem 1 (see Section S4 in the Supplementary Material for the proof). Using this general result, we explain why the gradient-based sampling algorithm is statistically efficient.

###### Theorem 1

Define $R = \max_{1\le i\le n}\|x_i\|^2$, $\sigma_\Sigma^2 = n^{-2}\sum_{i=1}^n \pi_i^{-1}\|x_i\|^4$, and $\sigma_b^2 = n^{-2}\sum_{i=1}^n \pi_i^{-1} e_i^2\|x_i\|^2$, where $e_i = y_i - x_i^T\hat\beta_n$. For any $\delta \in (0,1)$, if

$$r > \frac{2\sigma_\Sigma^2 \log d}{\delta^2\big(2^{-1}\lambda_{\min}(\Sigma_n) - (3n\delta)^{-1}R\log d\big)^2}$$

holds, the excess risk of $\tilde\beta$ for approximating $\hat\beta_n$ is bounded, with probability at least $1-\delta$, as

$$\|\tilde\beta - \hat\beta_n\| \le C r^{-1/2}, \tag{6}$$

where the constant $C$ depends on $\sigma_b$, $\lambda_{\min}(\Sigma_n)$ and $\delta$.

Theorem 1 indicates that $\|\tilde\beta - \hat\beta_n\|$ can be bounded by $Cr^{-1/2}$. From (6), the choice of sampling method has no effect on the decreasing rate of the bound, $r^{-1/2}$, but it influences the constant $C$. Thus, a theoretical measure of efficiency for a sampling method is whether it makes the constant $C$ attain its minimum. In Corollary 1 (see Section S5 in the Supplementary Material for the proof), we show that Algorithm 1 approximately achieves this.

Remarks on Theorem 1. (a) Theorem 1 can be used to guide the choice of $r$ in practice so as to guarantee the desired accuracy of the solution with high probability. (b) The constants $R$, $\sigma_\Sigma$ and $\sigma_b$ can be estimated from the subsample. (c) A bound on the risk of $X\tilde\beta$ for predicting $X\hat\beta_n$ follows directly from equation (6). (d) Although Theorem 1 is established under Poisson sampling, the error bound extends easily to sampling with replacement by following the technical proofs in the Supplementary Material, since each draw in sampling with replacement is independent.
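To illustrate remark (b), a plug-in computation of the bound's constants might look like the following. This is an illustrative sketch: the definitions of $R$, $\sigma_\Sigma^2$ and $\sigma_b^2$ used here are our reconstruction from the proofs in the Supplementary Material, and in practice one would evaluate these sums on the weighted subsample rather than the full data.

```python
import numpy as np

def bound_constants(X, y, probs, beta_hat):
    """Plug-in values of the constants appearing in the error bound:
    R           = max_i ||x_i||^2
    sigma_Sigma = n^{-2} sum_i pi_i^{-1} ||x_i||^4
    sigma_b     = n^{-2} sum_i pi_i^{-1} e_i^2 ||x_i||^2,
    with e_i the residuals under beta_hat."""
    n = X.shape[0]
    e = y - X @ beta_hat                   # residuals e_i
    xnorm2 = np.sum(X**2, axis=1)          # ||x_i||^2 row by row
    R = float(np.max(xnorm2))
    sigma_Sigma2 = float(np.sum(xnorm2**2 / probs)) / n**2
    sigma_b2 = float(np.sum(e**2 * xnorm2 / probs)) / n**2
    return R, sigma_Sigma2, sigma_b2
```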

###### Corollary 1

If $\beta_0 - \hat\beta_n = o_p(1)$, then $C$ is approximately minimized by Algorithm 1, that is,

$$C(\pi_i^0) - \min_{\pi} C = o_p(1), \tag{7}$$

where $C(\pi_i^0)$ denotes the value of $C$ corresponding to our gradient-based sampling.

The significance of Corollary 1 is that it explains why gradient-based sampling is statistically efficient. The corollary establishes an improved performance guarantee for the use of gradient-based sampling. It is improved in the sense that gradient-based sampling makes the bound approximately attain its minimum as long as the condition on the pilot estimate is satisfied, whereas neither uniform sampling nor leverage-based sampling can achieve this. The condition $\beta_0 - \hat\beta_n = o_p(1)$ provides a benchmark for whether the pilot estimate is a good guess of $\hat\beta_n$. Note that the condition is satisfied by the initial estimate from a uniform subsample of size $r_0$, since then $\beta_0 - \hat\beta_n = O_p(r_0^{-1/2})$.

## 5 Numerical Experiments

Detailed numerical experiments are conducted to compare the excess risk of $\tilde\beta$ under $L_2$ loss against the expected subsample size $r$ on various synthetic datasets and real data examples. In this section, we report several representative studies.

### 5.1 Performance of gradient-based sampling

The design matrix $X$ is generated with elements drawn independently from the mixture Gaussian distributions below: (1) a single Gaussian distribution (referred to as GA data); (2) a mixture of a small and a relatively large variance (MG1 data); (3) a mixture of a small and a very large variance (MG2 data); (4) a mixture of two symmetric peaks (MG3 data). We also ran simulations with $X$ generated from multivariate mixture Gaussian distributions with an AR(1) covariance matrix, but obtained performance similar to the setting above, so we do not report them here. Given $X$, we generate $y$ from the model $y = X\beta + \varepsilon$, where each element of $\beta$ is drawn from a normal distribution and then fixed, and $\varepsilon$ is homoscedastic Gaussian noise. Note that we also considered a heteroscedastic setting in which $\varepsilon$ is drawn from a mixture Gaussian, and obtained results similar to the homoscedastic setting, so we do not report them here. We set $d = 100$ and vary $n$ among 20K, 50K, 100K, 200K, and 500K.

We calculate the full-sample LS solution $\hat\beta_n$ for each dataset, and repeatedly apply the various sampling methods $B$ times to get subsample estimates $\tilde\beta_b$, $b = 1,\dots,B$. We calculate the empirical risk under $L_2$ loss (MSE) as follows:

$$\mathrm{MSE} = B^{-1}\sum_{b=1}^{B} \|\tilde\beta_b - \hat\beta_n\|^2.$$
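The empirical risk above is a one-liner to compute; a small sketch (names illustrative):

```python
import numpy as np

def empirical_mse(estimates, beta_full):
    """Empirical risk over B repetitions:
    B^{-1} * sum_b ||beta_b - beta_full||^2, where `estimates` is a
    (B, d) array of subsample estimates."""
    diffs = np.asarray(estimates) - beta_full
    return float(np.mean(np.sum(diffs**2, axis=1)))
```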

Two sampling ratio values are considered: 0.01 and 0.05. We compare uniform sampling (UNIF), leverage-based sampling (LEV), and gradient-based sampling (GRAD) on these data sets. For GRAD, we use a small uniform subsample of size $r_0$ to obtain the pilot estimate $\beta_0$.

Figure 2 gives boxplots of the logarithm of the sampling probabilities of LEV and GRAD, where the logarithm is taken to show the distributions clearly. We make some observations from the figure. (1) For all four datasets, GRAD has heavier tails than LEV; that is, GRAD makes the sampling probabilities more dispersed than LEV does. (2) MG2 tends to have the most heterogeneous sampling probabilities, MG1 is less heterogeneous than MG2, whereas MG3 and GA have the most homogeneous sampling probabilities. This indicates that mixing large and small variances affects the distribution of the sampling probabilities, while mixing different peak locations does not.

We plot the logarithm of the MSE values for GA, MG1, and MG2 in Figure 3, where the logarithm is taken to show the relative values clearly. We do not report the results for MG3, as there is little difference between MG3 and GA. Several interesting results appear in Figure 3. (1) GRAD performs better than the others, and its advantage becomes more obvious as the subsample size increases. (2) For GA, LEV shows performance similar to UNIF, whereas GRAD clearly outperforms UNIF. (3) As $n$ increases, a smaller sampling ratio suffices for GRAD to outperform the others.

From the computational view, we compare the cost of UNIF, approximate LEV (ALEV) [5, 6], and GRAD in Table 1, since ALEV has been shown to be a computationally efficient approximation to LEV. From the table, UNIF is the cheapest, and the time cost of GRAD is much less than that of ALEV. This indicates that GRAD is also efficient from the computational view, since its running time is $O(nd)$. Additionally, Table 2 summarizes the computational complexity of several sampling methods for fast solving of LS problems.

### 5.2 Real Data Examples

In this section, we compare the performance of the various sampling algorithms on two UCI datasets: CASP ($n = 45{,}730$, $d = 9$) and OnlineNewsPopularity (NEWS) ($n = 39{,}644$, $d = 58$). First, we plot boxplots of the logarithm of the sampling probabilities of LEV and GRAD in Figure 4. From it, and similarly to the synthetic datasets, we see that the sampling probabilities of GRAD look more dispersed than those of LEV.

The MSE values are reported in Table 3, from which we make two observations. First, GRAD has smaller MSE values than the others when the subsample size is large. Second, as the sampling ratio increases, the advantage of Poisson sampling over sampling with replacement becomes obvious for the various methods. A similar observation is obtained in the simulations (see Supplementary Material, Section S2).

## 6 Conclusion

In this paper we have proposed a gradient-based sampling algorithm for approximating the LS solution. The algorithm is not only statistically efficient but also computationally cheap. Theoretically, we provide an error bound analysis, which supplies a justification for the algorithm and gives a tradeoff between the subsample size and the approximation efficiency. We also argue from empirical studies that: (1) since the gradient-based sampling algorithm is justified without a linear model assumption, it works better than leverage-based sampling under different model specifications; (2) Poisson sampling is much better than sampling with replacement as the sampling ratio increases.

An interesting problem remains for future study. Although gradient-based sampling is proposed here to approximate the LS solution, we believe the method can be applied to other optimization problems in large-scale data analysis, since the gradient gives the steepest direction toward a (local) optimum. Applying this idea to other optimization problems is thus an interesting direction.

#### Acknowledgments

This research was supported by National Natural Science Foundation of China grants 11301514 and 71532013. We thank Xiuyuan Cheng for comments in a preliminary version.

## Supplementary Material

#### The influence of the pilot estimate

In gradient-based sampling, we need to obtain the pilot estimate $\beta_0$ by uniformly sampling a subsample of size $r_0$. We investigate the effect of $r_0$ by plotting, in Figure 5, the MSE for various values of $r_0$ relative to a baseline choice, on the GA, MG1, and MG2 datasets. We observe that once $r_0$ exceeds a modest size, the MSE values go flat. Thus, we argue that the initial size $r_0$ used to obtain a pilot estimate need not be chosen carefully.

#### The advantage of Poisson sampling

We now empirically compare Poisson sampling (PS) with sampling with replacement (SR). We compare their risk performance for two sampling ratios: 0.01 and 0.05. We report the results in Table 4, where we do not report the performance of UNIF and LEV because of the similarity they share with GRAD. From Table 4, there is little difference between PS and SR at the smaller ratio; however, PS becomes better than SR at the larger one. This observation indicates that PS outperforms SR as the sampling ratio increases.

#### The Robustness to Model Specification

The gradient-based sampling algorithm does not rely on the model assumption. We empirically investigate the effect of model specification on the various sampling methods. Three kinds of model misspecification are considered, i.e., the data-generating models are as follows:
(I) heteroscedasticity,

$$y = \sum_{k=1}^{10} x^{(k)}\beta_k + \varepsilon^* \quad \text{with} \quad \varepsilon^* = \rho_1 x^{(11)} + \varepsilon,$$

where $x^{(11)}$ is ignored in the LS computation, and $\rho_1$, which controls the severity of the misspecification, is varied over a grid of values;
(II) model error dependence,

$$y = \sum_{k=1}^{10} x^{(k)}\beta_k + \varepsilon, \quad \text{with} \quad \varepsilon_i \sim N\big(\rho_2\varepsilon_{i-1}, (1-\rho_2^2)\sigma^2\big),$$

where $\rho_2$, which controls the degree of dependence among the model errors, is varied over a grid of values;
(III) correlation between error and predictor,

$$y = \sum_{k=1}^{10} x^{(k)}\beta_k + \varepsilon \quad \text{with} \quad \varepsilon_i = \big(1 + \rho_3 x_i^{(1)}\big)\,N(0, \sigma^2),$$

where $\rho_3$, which controls the correlation between the model error and the predictor, is varied over a grid of values.

We report the results on the MG1 dataset in Table 5, and omit the results on the other datasets because of their similarity. From Table 5, firstly and most importantly, GRAD still works better than UNIF and LEV. Secondly, Types I and III can have a serious effect, with Type III causing the most serious one, while Type II seems to have little effect on the efficiency of the sampling methods. These observations confirm that GRAD is a good choice from the model-robustness viewpoint.

### Technical Results

#### Lemma for proving Theorem 1

To analyze the risk, our key step is to apply the matrix Bernstein expectation bound (Theorem 6.1 in [17]) to a matrix Bernoulli series. The lemma below presents the expectation bound for the matrix Bernoulli series.

###### Lemma 1

Consider a finite sequence $\{A_i = x_i x_i^T,\ i = 1,\dots,n\}$ of Hermitian matrices, where each $x_i$ is a $d \times 1$ vector. Let $\{I_i,\ i = 1,\dots,n\}$, with means $r\pi_i$ respectively, be a finite sequence of independent Bernoulli variables. Let $R = \max_{1\le i\le n}\|x_i\|^2$ and

$$\sigma_\Sigma^2 = \frac{1}{n^2}\sum_{i=1}^{n} \pi_i^{-1}\|x_i\|^4.$$

Define the matrix Bernoulli series $\Delta = n^{-1}\sum_{i=1}^n \big(1 - (r\pi_i)^{-1}I_i\big)A_i$. We have

$$\mathbb{E}\,\lambda_{\max}(\Delta) \le r^{-1/2}\sigma_\Sigma\sqrt{2\log d} + \frac{R}{3n}\log d.$$

Since $\{(1 - (r\pi_i)^{-1}I_i)A_i/n\}$ is a sequence of independent, mean-zero random Hermitian matrices with $\lambda_{\max}\big((1 - (r\pi_i)^{-1}I_i)A_i/n\big) \le \lambda_{\max}(A_i)/n \le R/n$, and

$$\lambda_{\max}(\mathbb{E}\Delta^2) \le \frac{1}{n^2}\sum_{i=1}^{n}\big((r\pi_i)^{-1} - 1\big)\lambda_{\max}(A_i^2) \le \frac{r^{-1}}{n^2}\sum_{i=1}^{n}\pi_i^{-1}\|x_i\|^4,$$

applying the matrix Bernstein inequality of Theorem 6.1 in [17] yields

$$\mathbb{E}\,\lambda_{\max}(\Delta) \le r^{-1/2}\sigma_\Sigma\sqrt{2\log d} + \frac{R}{3n}\log d.$$

#### Proof of Theorem 1

We have that

$$\|\tilde\beta - \hat\beta_n\| = \|\Sigma_s^{-1}b_s - \Sigma_s^{-1}\Sigma_s\hat\beta_n\| \le \lambda_{\max}(\Sigma_s^{-1})\,\|b_s - \Sigma_s\hat\beta_n\|. \tag{A.1}$$

Note that $\lambda_{\max}(\Sigma_s^{-1}) = \lambda_{\min}^{-1}(\Sigma_s)$. For convenience, we assume without loss of generality that $\Sigma_s$ is invertible. If the event

$$E_1 := \big\{\lambda_{\max}(\Sigma_n - \Sigma_s) < 2^{-1}\lambda_{\min}(\Sigma_n)\big\} \tag{A.2}$$

holds, then we have that

$$\lambda_{\max}(\Sigma_s^{-1}) \le \big[\lambda_{\min}(\Sigma_n) - \lambda_{\max}(\Sigma_n - \Sigma_s)\big]^{-1},$$

and combining (A.2),

$$\|\tilde\beta - \hat\beta_n\| \le \frac{\|b_s - \Sigma_s\hat\beta_n\|}{\lambda_{\min}(\Sigma_n) - \lambda_{\max}(\Sigma_n - \Sigma_s)} < \big[\lambda_{\min}^{-1}(\Sigma_n) + 2\lambda_{\min}^{-2}(\Sigma_n)\lambda_{\max}(\Sigma_n - \Sigma_s)\big]\,\|b_s - \Sigma_s\hat\beta_n\|, \tag{A.3}$$

where the second inequality follows from the fact that $(1-t)^{-1} < 1 + 2t$ for $0 < t < 1/2$, together with the event $E_1$ holding. For any $\delta \in (0,1)$, define

$$E_2 := \bigg\{\|b_s - \Sigma_s\hat\beta_n\| \le \frac{\sigma_b}{r^{1/2}\delta}\bigg\}, \qquad E_3 := \bigg\{\lambda_{\max}(\Sigma_n - \Sigma_s) \le \frac{\sigma_\Sigma\sqrt{2\log d}}{r^{1/2}\delta} + \frac{R\log d}{3n\delta}\bigg\}.$$

Since

$$\mathbb{E}\|b_s - \Sigma_s\hat\beta_n\|^2 = \mathbb{E}\bigg[\frac{1}{n^2}\sum_{i=1}^{n}\Big(\frac{I_i}{r\pi_i}-1\Big)x_i^T e_i \sum_{i=1}^{n}\Big(\frac{I_i}{r\pi_i}-1\Big)x_i e_i\bigg] = \frac{1}{n^2}\sum_{i=1}^{n}\Big(\frac{1}{r\pi_i}-1\Big)e_i^2\|x_i\|^2 \le r^{-1}\sigma_b^2,$$

by Markov's inequality we have that

$$\Pr(E_2^c) \le \delta^2 \le \delta. \tag{A.4}$$

Lemma 1, together with Markov's inequality, shows that

$$\Pr(E_3^c) \le \bigg(\frac{\sigma_\Sigma\sqrt{2\log d}}{r^{1/2}\delta} + \frac{R\log d}{3n\delta}\bigg)^{-1}\mathbb{E}\big[\lambda_{\max}(\Sigma_n - \Sigma_s)\big] \le \delta. \tag{A.5}$$

For (A.2), we have that $E_3$ implies $E_1$ if

$$r > \frac{2\sigma_\Sigma^2 \log d}{\delta^2\big(2^{-1}\lambda_{\min}(\Sigma_n) - (3n\delta)^{-1}R\log d\big)^2} \tag{A.6}$$

and

$$\delta > \frac{2R\log d}{3n\lambda_{\min}(\Sigma_n)} \tag{A.7}$$

holds. Thus, combining (A.3), (A.4), (A.5), (A.6) and (A.7), we get

$$\Pr\big\{\|\tilde\beta - \hat\beta_n\| \le C_1 r^{-1/2} + C_2 r^{-1}\big\} \ge 1 - \delta, \tag{A.8}$$

where $C_1$ and $C_2$ collect the constants $\sigma_b$, $\sigma_\Sigma$, $R$, $\log d$, $\lambda_{\min}(\Sigma_n)$ and $\delta$ arising from (A.3)–(A.5). From (A.6), the $C_2 r^{-1}$ term is of smaller order than the $C_1 r^{-1/2}$ term. Thus Theorem 1 is proved.

#### Proof of Corollary 1

Let

$$\pi_i^e = \|e_i x_i\| \Big/ \sum_{j=1}^{n} \|e_j x_j\|. \tag{A.9}$$

By the Cauchy–Schwarz inequality, $\sigma_b^2$ is minimized at $\pi_i = \pi_i^e$, and the minimum of $\sigma_b^2$ is

$$\sigma_b^2(\pi_i^e) = \bigg(\frac{1}{n}\sum_{i=1}^{n}\|e_i x_i\|\bigg)^2. \tag{A.10}$$

On the other hand, for the sampling probabilities of the gradient-based sampling,

$$\sigma_b^2(\pi_i^0) = \bigg(\frac{1}{n}\sum_{i=1}^{n}\|\tilde e_i x_i\|\bigg)\bigg(\frac{1}{n}\sum_{i=1}^{n}\|x_i\| e_i^2/|\tilde e_i|\bigg), \tag{A.11}$$

where $\tilde e_i = y_i - x_i^T\beta_0$. From (A.10) and (A.11), we have that if $\beta_0 - \hat\beta_n = o_p(1)$ (so that $\tilde e_i$ approaches $e_i$), then

$$\sigma_b^2(\pi_i^0) - \sigma_b^2(\pi_i^e) = o_p(1). \tag{A.12}$$

From (A.12) and the definition of $C$, Corollary 1 is proved.

### References

1. P. Drineas, R. Kannan, and M.W. Mahoney. Fast Monte Carlo algorithms for matrices I: Approximating matrix multiplication. SIAM Journal on Computing, 36:132–157, 2006.
2. P. Drineas, R. Kannan, and M.W. Mahoney. Fast Monte Carlo algorithms for matrices II: Computing a low-rank approximation to a matrix. SIAM Journal on Computing, 36:158–183, 2006.
3. P. Drineas, R. Kannan, and M.W. Mahoney. Fast Monte Carlo algorithms for matrices III: Computing a compressed approximate matrix decomposition. SIAM Journal on Computing, 36:184–206, 2006.
4. M.W. Mahoney and P. Drineas. CUR matrix decompositions for improved data analysis. Proceedings of the National Academy of Sciences, 106:697–702, 2009.
5. P. Drineas, M. Magdon-Ismail, M.W. Mahoney, and D.P. Woodruff. Fast approximation of matrix coherence and statistical leverage. Journal of Machine Learning Research, 13:3475–3506, 2012.
6. K.L. Clarkson and D.P. Woodruff. Low rank approximation and regression in input sparsity time. In STOC, 2013.
7. M.B. Cohen, Y.T. Lee, C. Musco, C. Musco, R. Peng, and A. Sidford. Uniform sampling for matrix approximation. arXiv:1408.5099, 2014.
8. P. Dhillon, Y. Lu, D.P. Foster, and L. Ungar. New subsampling algorithms for fast least squares regression. In Advances in Neural Information Processing Systems, volume 26, pages 360–368, 2013.
9. D. Shender and J. Lafferty. Computation-risk tradeoffs for covariance-thresholded regression. In Proceedings of the 30th International Conference on Machine Learning, 2013.
10. P. Drineas, M.W. Mahoney, and S. Muthukrishnan. Sampling algorithms for regression and applications. In Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1127–1136, 2006.
11. P. Drineas, M.W. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decomposition. SIAM Journal on Matrix Analysis and Applications, 30:844–881, 2008.
12. P. Drineas, M.W. Mahoney, S. Muthukrishnan, and T. Sarlos. Faster least squares approximation. Numerische Mathematik, 117:219–249, 2011.
13. P. Ma, M.W. Mahoney, and B. Yu. A statistical perspective on algorithmic leveraging. In Proceedings of the 31st International Conference on Machine Learning, 2014.
14. G. Raskutti and M.W. Mahoney. A statistical perspective on randomized sketching for ordinary least-squares. In Proc. of the 32nd ICML Conference, 2015.
15. T. Yang, L. Zhang, R. Jin, and S. Zhu. An explicit sampling dependent spectral error bound for column subset selection. In Proc. of the 32nd ICML Conference, 2015.
16. C.E. Särndal, B. Swensson, and J.H. Wretman. Model Assisted Survey Sampling. Springer, New York, 2003.
17. J.A. Tropp. User-friendly tools for random matrices: An introduction. In Advances in Neural Information Processing Systems, 2012.