Optimal Rates for Multi-pass Stochastic Gradient Methods

Junhong Lin (junhong.lin@iit.it)
Laboratory for Computational and Statistical Learning
Istituto Italiano di Tecnologia and Massachusetts Institute of Technology
Bldg. 46-5155, 77 Massachusetts Avenue, Cambridge, MA 02139, USA

Lorenzo Rosasco (lrosasco@mit.edu)
DIBRIS, Università di Genova
Via Dodecaneso, 35 — 16146 Genova, Italy
Laboratory for Computational and Statistical Learning
Istituto Italiano di Tecnologia and Massachusetts Institute of Technology
Bldg. 46-5155, 77 Massachusetts Avenue, Cambridge, MA 02139, USA

J.L. is now with the Laboratory for Information and Inference Systems, École Polytechnique Fédérale de Lausanne, Lausanne 1015, Switzerland (jhlin5@hotmail.com).
Abstract

We analyze the learning properties of the stochastic gradient method when multiple passes over the data and mini-batches are allowed. We study how regularization properties are controlled by the step-size, the number of passes and the mini-batch size. In particular, we consider the square loss and show that for a universal step-size choice, the number of passes acts as a regularization parameter, and optimal finite sample bounds can be achieved by early-stopping. Moreover, we show that larger step-sizes are allowed when considering mini-batches. Our analysis is based on a unifying approach, encompassing both batch and stochastic gradient methods as special cases. As a byproduct, we derive optimal convergence results for batch gradient methods (even in the non-attainable cases).

1 Introduction

Modern machine learning applications require computational approaches that are at the same time statistically accurate and numerically efficient (Bousquet and Bottou, 2008). This has motivated a recent interest in stochastic gradient methods (SGM), since on the one hand they enjoy good practical performance, especially in large-scale scenarios, and on the other hand they are amenable to theoretical study. In particular, unlike other learning approaches, such as empirical risk minimization or Tikhonov regularization, theoretical results on SGM naturally integrate statistical and computational aspects.

Most generalization studies on SGM consider the case where only one pass over the data is allowed and the step-size is appropriately chosen, see (Cesa-Bianchi et al., 2004; Nemirovski et al., 2009; Ying and Pontil, 2008; Tarres and Yao, 2014; Dieuleveut and Bach, 2016; Orabona, 2014) and references therein, possibly considering averaging (Poljak, 1987). In particular, recent works show how the step-size can be seen to play the role of a regularization parameter whose choice controls the bias and variance properties of the obtained solution (Ying and Pontil, 2008; Tarres and Yao, 2014; Dieuleveut and Bach, 2016; Lin et al., 2016a). These works show that, by balancing these contributions, it is possible to derive a step-size choice leading to optimal learning bounds. Such a choice typically depends on unknown properties of the data generating distribution, and in practice it can be chosen by cross-validation.

While processing each data point only once is natural in streaming/online scenarios, in practice SGM is often used to process large data-sets and multiple passes over the data are typically considered. In this case, the number of passes over the data, as well as the step-size, then need to be determined. While the role of multiple passes is well understood if the goal is empirical risk minimization (see e.g., Boyd and Mutapcic, 2007), its effect on generalization is less clear. A few works have recently started to tackle this question. In particular, results in this direction have been derived in (Hardt et al., 2016) and (Lin et al., 2016a). The former work considers a general stochastic optimization setting and studies stability properties of SGM, which allow deriving convergence results as well as finite sample bounds. The latter work, restricted to supervised learning, further develops these results to compare the respective roles of the step-size and the number of passes, and shows how different parameter settings can lead to optimal error bounds. In particular, it shows that there are two extreme cases: either the step-size or the number of passes is fixed a priori, while the other acts as a regularization parameter and needs to be chosen adaptively. The main shortcoming of these latter results is that they are worst-case, in the sense that they do not consider the possible effect of benign assumptions on the problem (Zhang, 2005; Caponnetto and De Vito, 2007) that can lead to faster rates for other learning approaches, such as Tikhonov regularization. Further, these results do not consider the possible effect on generalization of using mini-batches, rather than a single point, in each gradient step (Shalev-Shwartz et al., 2011; Dekel et al., 2012; Sra et al., 2012; Ng, 2016). This latter strategy is often considered especially for parallel implementations of SGM.

The study in this paper fills in these gaps in the case where the loss function is the least squares loss. We consider a variant of SGM for least squares, where gradients are sampled uniformly at random and mini-batches are allowed. The number of passes, the step-size and the mini-batch size are then parameters to be determined. Our main results highlight the respective roles of these parameters and show how they can be chosen so that the corresponding solutions achieve optimal learning errors in a variety of settings. In particular, we show for the first time that multi-pass SGM with early stopping and a universal step-size choice can achieve optimal convergence rates, matching those of ridge regression (Smale and Zhou, 2007; Caponnetto and De Vito, 2007). Further, our analysis shows how the mini-batch size and the step-size choice are tightly related. Indeed, larger mini-batch sizes allow considering larger step-sizes while keeping the optimal learning bounds. This result gives insights on how to exploit mini-batches for parallel computations while preserving optimal statistical accuracy. Finally, we note that a recent work (Rosasco and Villa, 2015) is related to the analysis in this paper. The generalization properties of a multi-pass incremental gradient method are analyzed in (Rosasco and Villa, 2015), for a cyclic, rather than a stochastic, choice of the gradients and with no mini-batches. The analysis in this latter case appears to be harder, and the results in (Rosasco and Villa, 2015) give good learning bounds only in a restricted setting, and for the iterates rather than the excess risk. Compared to (Rosasco and Villa, 2015), our results show how stochasticity can be exploited to get fast rates, and they analyze the role of mini-batches. The basic idea of our proof is to approximate the SGM learning sequence in terms of the batch gradient descent sequence; see Subsection 3.7 for further details. This allows us to study batch and stochastic gradient methods simultaneously, and may also be useful for analyzing other learning algorithms.

This paper is an extended version of a prior conference paper (Lin and Rosasco, 2016). In (Lin and Rosasco, 2016), we give convergence results with optimal rates for the attainable case (i.e., assuming the existence of at least one minimizer of the expected risk over the hypothesis space) in a fixed step-size setting. In this new version, we give convergence results with optimal rates for both the attainable and non-attainable cases, and we consider more general step-size choices. The extension from the attainable case to the non-attainable case is non-trivial. As will be seen from the proof, in contrast to the attainable case, a different and refined estimation is needed for the non-attainable case. Interestingly, as a byproduct of this paper, we also derive optimal rates for batch gradient descent methods in the non-attainable case. To the best of our knowledge, such a result is the first of its kind for batch gradient methods that does not require any extra unlabeled data, as in (Caponnetto and Yao, 2010). Finally, we also add novel convergence results for the iterates, showing that they converge to the minimal norm solution of the expected risk with optimal rates.

The rest of this paper is organized as follows. Section 2 introduces the learning setting and the SGM algorithm. Main results with discussions and proof sketches are presented in Section 3. Preliminary lemmas necessary for the proofs are given in Section 4, while detailed proofs are carried out in Sections 5 to 8. Finally, simple numerical simulations are presented in Section 9 to complement our theoretical results.

Notation

For any $a, b \in \mathbb{R}$, $a \vee b$ denotes the maximum of $a$ and $b$. $\mathbb{N}$ is the set of all positive integers. For any $T \in \mathbb{N}$, $[T]$ denotes the set $\{1, \dots, T\}$. For any two positive sequences $\{a_t\}_{t \in [T]}$ and $\{b_t\}_{t \in [T]}$, the notation $a_t \lesssim b_t$ for all $t \in [T]$ means that there exists a positive constant $C$, independent of $t$, such that $a_t \le C b_t$ for all $t \in [T]$.

2 Learning with SGM

We begin by introducing the learning setting we consider, and then describe the SGM learning algorithm. Following (Rosasco and Villa, 2015), the formulation we consider is close to the setting of functional regression, and covers the reproducing kernel Hilbert space (RKHS) setting as a special case, see Appendix A. In particular, it reduces to standard linear regression for finite dimensions.

2.1 Learning Problems

Let $H$ be a separable Hilbert space, with inner product and induced norm denoted by $\langle \cdot, \cdot \rangle_H$ and $\|\cdot\|_H$, respectively. Let the input space be $X \subseteq H$ and the output space be $Y \subseteq \mathbb{R}$. Let $\rho$ be an unknown probability measure on $Z = X \times Y$, $\rho_X(\cdot)$ the induced marginal measure on $X$, and $\rho(\cdot \mid x)$ the conditional probability measure on $Y$ with respect to $x \in X$ and $\rho$.

Considering the square loss function, the problem under study is the minimization of the risk,

(1)

when the measure is known only through a sample of size , drawn independently and identically distributed (i.i.d.) according to . In the following, we measure the quality of an approximate solution (an estimator) by considering the excess risk, i.e.,

(2)

Throughout this paper, we assume that there exists a constant , such that

(3)
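For reference, with $H$, $X$, $Y$, $\rho$, and $\rho_X$ as above, a standard formulation of the quantities in this subsection, which we adopt only as an illustration for the examples that follow, is the risk
\[
\mathcal{E}(\omega) = \int_{X \times Y} \bigl( \langle \omega, x \rangle_H - y \bigr)^2 \, d\rho(x, y), \qquad \omega \in H,
\]
the excess risk $\mathcal{E}(\hat\omega) - \inf_{\omega \in H} \mathcal{E}(\omega)$ of an estimator $\hat\omega$, and a boundedness condition of the form $\|x\|_H \le \kappa$, $\rho_X$-almost surely, for some constant $\kappa \ge 1$.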

2.2 Stochastic Gradient Method

We study the following variant of SGM, possibly with mini-batches. Unlike some of the variants studied in the literature, the algorithm we consider does not involve any explicit penalty term or projection step, so there is no penalty/projection parameter to tune.

Algorithm 1

Let . Given any sample , the -minibatch stochastic gradient method is defined by and

(4)

where is a step-size sequence. Here, are i.i.d. random variables drawn from the uniform distribution on . (Note that these random variables are conditionally independent given the sample .)

We add some comments on the above algorithm. First, different choices of the mini-batch size lead to different algorithms. In particular, for , the above algorithm corresponds to simple SGM, while for it is a stochastic version of batch gradient descent. In this paper, we are particularly interested in the cases of and . Second, other choices of the initial value, rather than , are possible. In fact, as can be seen from our proofs, the convergence results stated in the next subsections still hold for other choices of the initial value. Finally, the total number of iterations can be larger than the number of sample points . This indicates that we can use the sample more than once or, in other words, run the algorithm with multiple passes over the data. Here and in what follows, the number of ‘passes’ over the data after a given number of iterations refers to the total number of points processed up to that iteration ( per iteration) divided by the sample size .
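For concreteness, the following is a minimal NumPy sketch of the mini-batch update for a finite-dimensional linear least-squares model. The function name `minibatch_sgm`, the constant step-size, and all parameter values are illustrative choices, not the tuned values prescribed by the theory below.

```python
import numpy as np

def minibatch_sgm(X, y, step_size=0.05, batch_size=1, n_passes=5, seed=0):
    """Mini-batch SGM for least squares: at each iteration, batch_size indices
    are drawn i.i.d. uniformly at random (with replacement) and the iterate
    moves along the negative gradient of the squared error on that mini-batch."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)                              # start from the zero vector
    n_iters = n_passes * n // batch_size         # roughly n_passes passes over the data
    for _ in range(n_iters):
        j = rng.integers(0, n, size=batch_size)  # i.i.d. uniform indices
        residual = X[j] @ w - y[j]               # <w, x_{j_i}> - y_{j_i}
        w -= step_size * (X[j].T @ residual) / batch_size
    return w

# Toy usage on synthetic data.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
w_star = rng.standard_normal(5)
y = X @ w_star + 0.1 * rng.standard_normal(200)
print(np.linalg.norm(minibatch_sgm(X, y, batch_size=10, n_passes=10) - w_star))
```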

The aim of this paper is to derive excess risk bounds for Algorithm 1. Throughout this paper, we assume that is non-increasing, and with . We denote by the set and by the set .

3 Main Results with Discussions

In this section, we first state some basic assumptions. Then, we present and discuss our main results.

3.1 Assumptions

The following assumption is related to a moment condition on . It is weaker than the often considered bounded output assumption, such as in binary classification problems where

Assumption 1

There exist constants and such that

(5)

-almost surely.

To present our next assumption, we introduce the operator , defined by Here, is the Hilbert space of square integrable functions from to with respect to , with norm,

Under Assumption (3), can be proved to be positive trace class operators (Cucker and Zhou, 2007), and hence with can be defined using spectral theory.
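A standard way to write the space and the operator just introduced, which we use only as a reference for the sketches below, is
\[
L^2_{\rho_X} = \Bigl\{ f : X \to \mathbb{R} \;\Big|\; \|f\|_\rho^2 = \int_X |f(x)|^2 \, d\rho_X(x) < \infty \Bigr\},
\qquad
(\mathcal{L} f)(x') = \int_X \langle x', x \rangle_H \, f(x)\, d\rho_X(x),
\]
i.e., $\mathcal{L}$ is the integral operator with kernel $\langle x', x \rangle_H$; under (3) it is positive and trace class, so its fractional powers $\mathcal{L}^a$, $a \ge 0$, are well defined through its spectral decomposition.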

It is well known (see e.g., Cucker and Zhou, 2007) that the function minimizing over all measurable functions is the regression function, given by

(6)

Define another Hilbert space Under Assumption (3), it is easy to see that is a subspace of Let be the projection of the regression function onto the closure of in It is easy to see that the search for a solution of Problem (1) is equivalent to the search for a linear function in to approximate . From this point of view, bounds on the excess risk of a learning algorithm on or naturally depend on the following assumption, which quantifies how well the target function can be approximated by .

Assumption 2

There exist and , such that

The above assumption is fairly standard in non-parametric regression (Cucker and Zhou, 2007; Rosasco and Villa, 2015). The bigger is, the more stringent the assumption is, since

In particular, for we are making no assumption, while for we are requiring , since (Rosasco and Villa, 2015)

(7)

In the case of , , which implies Problem (1) has at least one solution in the space . In this case, we denote as the solution with the minimal -norm.
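A common way to formalize a regularity (source) condition of this kind, given here only as an illustrative form consistent with the notation above and writing $f_H$ for the projection of the regression function just discussed, is to require that, for some $\zeta \ge 0$ and $R > 0$,
\[
f_H = \mathcal{L}^{\zeta} g_0 \qquad \text{for some } g_0 \in L^2_{\rho_X} \text{ with } \|g_0\|_\rho \le R .
\]
Larger $\zeta$ corresponds to a smoother target: for $\zeta = 0$ no restriction is imposed, while for $\zeta \ge 1/2$ the target is attained by some element of $H$, consistently with the attainable case discussed above.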

Finally, the last assumption relates to the capacity of the hypothesis space.

Assumption 3

For some and , satisfies

(8)

The left-hand side of (8) is called the effective dimension (Caponnetto and De Vito, 2007), or the degrees of freedom (Zhang, 2005). It can be related to covering/entropy number conditions, see (Steinwart and Christmann, 2008) for further details. Assumption 3 is always true for and , since is a trace class operator, which implies that the eigenvalues of , denoted as , satisfy This is referred to as the capacity independent setting. Assumption 3 with allows one to derive better error rates. It is satisfied, e.g., if the eigenvalues of satisfy a polynomial decay condition , or with if is of finite rank.
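The effective dimension in (8) is determined by the spectrum of the operator introduced above. The sketch below, with an assumed polynomial eigenvalue decay chosen purely for illustration (we write `gamma` for the exponent in Assumption 3), shows numerically how it scales like $\lambda^{-\gamma}$:

```python
import numpy as np

def effective_dimension(eigenvalues, lam):
    """N(lam) = trace(L (L + lam I)^{-1}) = sum_i s_i / (s_i + lam)."""
    s = np.asarray(eigenvalues)
    return float(np.sum(s / (s + lam)))

# Polynomial eigenvalue decay s_i = i^{-1/gamma} gives N(lam) = O(lam^{-gamma}).
gamma = 0.5
s = np.arange(1, 100_001) ** (-1.0 / gamma)
for lam in (1e-1, 1e-2, 1e-3):
    print(f"lam={lam:g}  N(lam)={effective_dimension(s, lam):9.2f}  "
          f"lam^(-gamma)={lam ** -gamma:9.2f}")
```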

3.2 Optimal Rates for SGM and Batch GM: Simplified Versions

We start with the following corollaries, which are the simplified versions of our main results stated in the next subsections.

Corollary 1 (Optimal Rate for SGM)

Under Assumptions 2 and 3, let almost surely for some Let if , or with otherwise. Consider the SGM with
1) , for all and
If and , then with probability at least (here, ‘high probability’ refers to the sample), it holds

(9)

Furthermore, the above also holds for the SGM with (here, we assume that is an integer)
2) for all and
In the above, and are positive constants depending on , a polynomial of and , and also on (and also on in the case that ).

We add some comments on the above result. First, it asserts that, after passes over the data, the SGM with two different choices of fixed step-size and fixed mini-batch size achieves optimal learning error bounds, matching (or improving) those of ridge regression (Smale and Zhou, 2007; Caponnetto and De Vito, 2007). Second, according to the above result, using mini-batches allows one to use a larger step-size while achieving the same optimal error bounds. Finally, the above result can be further simplified in some special cases. For example, if we consider the capacity independent case, i.e., , and assume that , which is equivalent to making Assumption 2 with as mentioned before, then the error bound is , while the number of passes is

Remark 1 (Finite Dimensional Case)

With a simple modification of our proofs, we can derive similar results for the finite dimensional case, i.e., , where in this case, . In particular, letting under the same assumptions of Corollary 1, if one considers the SGM with and for all then with high probability, provided that

Remark 2

From the proofs, one can easily see that if and are replaced respectively by and , in both the assumptions and the error bounds, then all theorems and their corollaries of this paper are still true, as long as satisfies . As a result, if we assume that satisfies Assumption 2 (with replaced by ), as typically done in (Smale and Zhou, 2007; Caponnetto and De Vito, 2007; Steinwart et al., 2009; Caponnetto and Yao, 2010) for the RKHS setting, we have that with high probability,

In this case, the factor from the upper bounds for the case is exactly and can be controlled by the condition (and more generally, by Assumption 1). Since many common RKHSs are universally consistent (Steinwart and Christmann, 2008), making Assumption 2 on is natural and moreover, deriving error bounds with respect to seems to be more interesting in this case.

As a byproduct of our proofs in this paper, we derive the following optimal results for batch gradient methods (GM), defined by and

(10)
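For comparison with Algorithm 1, the sketch below implements the corresponding batch gradient iteration on the empirical risk for the same finite-dimensional linear model; the name `batch_gm`, the constant step-size, and the iteration count are again illustrative, with the number of iterations playing the role of the regularization parameter.

```python
import numpy as np

def batch_gm(X, y, step_size=0.1, n_iters=200):
    """Batch gradient descent on the empirical least-squares risk:
    each step uses the full-sample gradient (1/n) X^T (X w - y)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        w -= step_size * X.T @ (X @ w - y) / n
    return w
```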
Corollary 2 (Optimal Rate for Batch GM)

Under the assumptions and notations of Corollary 1, consider batch GM (10) with . If is large enough, then with high probability, (9) holds for

In the above corollary, the convergence rates are optimal for . To the best of our knowledge, these results are the first with minimax rates (Caponnetto and De Vito, 2007; Blanchard and Mücke, 2016) for batch GM in the non-attainable case. In particular, they improve on results in the previous literature; see Subsection 3.6 for more discussion.

Corollaries 1 and 2 cover the main contributions of this paper. In the following subsections, we present the main theorems of this paper, followed by several corollaries and brief discussions, from which one can derive the simplified versions stated in this subsection. In the next subsection, we present results for SGM in the attainable case, while results for the non-attainable case are given in Subsection 3.4, as the bounds for these two cases are different and, in particular, their proofs require different estimates. Finally, results with more specific convergence rates for batch GM are presented in Subsection 3.5.

3.3 Main Results for SGM: Attainable Case

In this subsection, we present convergence results in the attainable case, i.e., , followed by simple discussions. One of our main theorems in the attainable case is stated next and provides error bounds for the studied algorithm. For the sake of readability, we only present results in a fixed step-size setting in this section. Results in a general setting ( with ) can be found in Section 7.

Theorem 1

Under Assumptions 1, 2 and 3, let , , for all with If , then the following holds with probability at least : for all

(11)

Here, and are positive constants depending on , and also on (which will be given explicitly in the proof).

There are three terms in the upper bound of (11). The first term depends on the regularity of the target function and arises from bounding the bias, while the last two terms result from estimating the sample variance and the computational variance (due to the random choice of the points), respectively. To derive optimal rates, it is necessary to balance these three terms. Solving this trade-off problem leads to different choices of , , and , corresponding to different regularization strategies, as shown in the subsequent corollaries.

The first corollary gives generalization error bounds for simple SGM, with a universal step-size depending on the number of sample points.

Corollary 3

Under Assumptions 1, 2 and 3, let , , and for all . If then with probability at least , there holds

(12)

and in particular,

(13)

where Here, is exactly the same as in Theorem 1.

Remark 3

Ignoring the logarithmic term and letting , Eq. (12) becomes

A smaller may lead to a larger bias, while a larger may lead to a larger sample error. From this point of view, has a regularization effect.

The second corollary provides error bounds for SGM with a fixed mini-batch size and a fixed step-size (which depend on the number of sample points).

Corollary 4

Under Assumptions 1, 2 and 3, let , , and for all . If then with probability at least , there holds

(14)

and particularly,

(15)

where

The above two corollaries follow from Theorem 1, with the simple observation that when a small step-size is chosen, the dominating terms in (11) are those related to the bias and the sample variance. The only free parameter in (12) and (14) is the number of iterations/passes. The ideal stopping rule is achieved by balancing the two terms related to the bias and the sample variance, showing the regularization effect of the number of passes. Since the ideal stopping rule depends on the unknown parameters and , a hold-out cross-validation procedure is often used to tune the stopping rule in practice. Using an argument similar to that in Chapter 6 of (Steinwart and Christmann, 2008), it is possible to show that this procedure can achieve the same convergence rate.
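As an illustration of the hold-out procedure just mentioned, the sketch below selects the stopping iteration by monitoring the validation error along the SGM path. The split ratio, the parameter values, and the function names `sgm_path` and `early_stopping_sgm` are arbitrary choices made for this example.

```python
import numpy as np

def sgm_path(X, y, step_size, batch_size, n_iters, seed=0):
    """Run mini-batch SGM for least squares and record every iterate."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    path = [w.copy()]
    for _ in range(n_iters):
        j = rng.integers(0, n, size=batch_size)
        w = w - step_size * X[j].T @ (X[j] @ w - y[j]) / batch_size
        path.append(w.copy())
    return path

def early_stopping_sgm(X, y, step_size=0.05, batch_size=10, n_iters=500):
    """Pick the iterate with the smallest squared error on a hold-out split."""
    n = len(y)
    n_train = int(0.8 * n)                       # 80/20 split, arbitrary choice
    X_tr, y_tr = X[:n_train], y[:n_train]
    X_val, y_val = X[n_train:], y[n_train:]
    path = sgm_path(X_tr, y_tr, step_size, batch_size, n_iters)
    val_errors = [np.mean((X_val @ w - y_val) ** 2) for w in path]
    t_star = int(np.argmin(val_errors))          # early-stopping iteration
    return path[t_star], t_star
```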

We give some further remarks. First, the upper bound in (13) is optimal up to a logarithmic factor, in the sense that it matches the minimax lower rate in (Caponnetto and De Vito, 2007; Blanchard and Mücke, 2016). Second, according to Corollaries 3 and 4, passes over the data are needed to obtain optimal rates in both cases. Finally, comparing simple SGM and mini-batch SGM, Corollaries 3 and 4 show that a larger step-size can be used for the latter.

In the next result, both the step-size and the stopping rule are tuned to obtain optimal rates for simple SGM with multiple passes. In this case, the step-size and the number of iterations are the regularization parameters.

Corollary 5

Under Assumptions 1, 2 and 3, let , , and for all If and then (13) holds with probability at least

The next corollary shows that for some suitable mini-batch sizes, optimal rates can be achieved with a constant step-size (which is nearly independent of the number of sample points) by early stopping.

Corollary 6

Under Assumptions 1, 2 and 3, let , and for all . If and then (13) holds with probability at least

According to Corollaries 5 and 6, around passes over the data are needed to achieve the best performance in the above two strategies. In comparison with Corollaries 3 and 4, where around passes are required, the strategies of Corollaries 5 and 6 seem to require fewer passes over the data. However, in this case, one might have to run the algorithm multiple times to tune the step-size or the mini-batch size.

Remark 4

1) If we make no assumption on the capacity, i.e., , Corollary 5 recovers the result in (Ying and Pontil, 2008) for one pass SGM.
2) If we make no assumption on the capacity and assume that , from Corollaries 5 and 6, we see that the optimal convergence rate can be achieved after one pass over the data in both of these two strategies. In this special case, Corollaries 5 and 6 recover the results for one pass SGM in, e.g., (Shamir and Zhang, 2013; Dekel et al., 2012).

The next result gives generalization error bounds for ‘batch’ SGM with a constant step-size (nearly independent of the number of sample points).

Corollary 7

Under Assumptions 1, 2 and 3, let , and for all If and then (13) holds with probability at least

Theorem 1 and its corollaries give convergence results with respect to the target function values. In the next theorem and corollary, we will present convergence results in -norm.

Theorem 2

Under the assumptions of Theorem 1, the following holds with probability at least for all

(16)

Here, and are positive constants depending on , and (which can be given explicitly in the proof).

The proof of the above theorem is similar to that of Theorem 1, and will be given in Section 8. Again, the upper bound in (16) is composed of three terms related to the bias, the sample variance, and the computational variance. Balancing these three terms leads to different choices of , , and , as shown in the following corollary.

Corollary 8

With the same assumptions and notations from any one of Corollaries 3 to 7, the following holds with probability at least

The convergence rate in the above corollary is optimal up to a logarithmic factor, as it matches the minimax rate shown in (Blanchard and Mücke, 2016).

In the next subsection, we will present convergence results in the non-attainable case, i.e., .

3.4 Main Results for SGM: Non-attainable Case

Our main theorem in the non-attainable case is stated next, and provides error bounds for the studied algorithm. Here, we present results with a fixed step-size, whereas general results with a decaying step-size will be given in Section 7.

Theorem 3

Under Assumptions 1, 2 and 3, let , , for all with . Then the following holds for all with probability at least : 1) if and then

(17)

2) if and for some , , then

Here, (or ), and are positive constants depending only on , , and (or ) also on (and ).

The upper bounds in (11) (for the attainable case) and (17) (for the non-attainable case) are similar, but the latter has an extra logarithmic factor. Consequently, in the subsequent corollaries, we derive for the non-attainable case. In comparison with the attainable case, the convergence rate for the non-attainable case has an extra factor.

Similar to Corollaries 3 and 4, and as direct consequences of the above theorem, we have the following generalization error bounds for the studied algorithm with different choices of parameters in the non-attainable case.

Corollary 9

Under Assumptions 1, 2 and 3, let , , and for all . With probability at least , the following holds:
1) if , and , then

(18)

2) if , and for some , , and then

(19)

Here, and are given by Theorem 3.

Corollary 10

Under Assumptions 1, 2 and 3, let , , and for all . With probability at least , there holds
1) if , and , then (18) holds;
2) if , for some , , and then (19) holds.

The convergence rates in the above corollaries, i.e., if or otherwise, match those in (Dieuleveut and Bach, 2016) for one pass SGM with averaging, up to a logarithmic factor. Also, in the capacity independent case, i.e., , the convergence rates in the above corollary read as (since is always bigger than ), which are exactly the same as those in (Ying and Pontil, 2008) for one pass SGM.

Results similar to Corollaries 5–7 can also be derived for the non-attainable case by applying Theorem 3. Refer to Appendix B for more details.

3.5 Main Results for Batch GM

In this subsection, we present convergence results for batch GM. As a byproduct of our proofs in this paper, we have the following convergence rates for batch GM.

Theorem 4

Under Assumptions 1, 2 and 3, set , for all Let if , or with otherwise. Then with probability at least (), the following holds for the learning sequence generated by (10):
1) if and , then

2) if and , then

3) if and , then

Here, (or ), and all the constants in the upper bounds are positive and depend only on , , and (or ) also on (and ).

3.6 Discussions

We now compare our results with previous work. For non-parametric regression with the square loss, one pass SGM has been studied in, e.g., (Ying and Pontil, 2008; Shamir and Zhang, 2013; Tarres and Yao, 2014; Dieuleveut and Bach, 2016). In particular, Ying and Pontil (2008) proved a capacity independent rate of order with a fixed step-size , and Dieuleveut and Bach (2016) derived capacity dependent error bounds of order (when ) for the average. Note also that a regularized version of SGM has been studied in (Tarres and Yao, 2014), where the derived convergence rate is of order assuming that In comparison with these existing convergence rates, our rates from (13) are comparable, either involving the capacity condition or allowing a broader regularity parameter (which thus improves the rates). For the finite dimensional case, it has been shown in (Bach and Moulines, 2013) that one pass SGM with averaging and a constant step-size achieves the optimal convergence rate of In comparison, our results for multi-pass SGM with a smaller step-size seem to be suboptimal in terms of computational complexity, as we need passes over the data to achieve the same rate. The reason for this may be “the computational error” that will be introduced later, or the fact that we do not consider an averaging step as done in (Bach and Moulines, 2013). We hope that, by considering a larger step-size and averaging, the computational complexity of multi-pass SGM can be reduced in future work while achieving the same rate.

More recently, Rosasco and Villa (2015) studied multi-pass SGM with a fixed ordering at each pass, also called the incremental gradient method. Making no assumption on the capacity, they derived rates of order (in -norm) with a universal step-size. In comparison, Corollary 3 achieves better rates, while taking the capacity assumption into account. Note also that Rosasco and Villa (2015) proved a sharp rate in -norm for in the capacity independent case. In comparison, we derive optimal capacity-dependent rates, also considering mini-batches.

The idea of using mini-batches (and parallel implementations) to speed up SGM in a general stochastic optimization setting can be found, e.g., in (Shalev-Shwartz et al., 2011; Dekel et al., 2012; Sra et al., 2012; Ng, 2016). Our theoretical findings, especially the interplay between the mini-batch size and the step-size, can give further insight into parallelized learning. Besides, it has been shown in (Cotter et al., 2011; Dekel et al., 2012) that for one pass mini-batch SGM with a fixed step-size and a smooth loss function, assuming the existence of at least one solution in the hypothesis space for the expected risk minimization, the convergence rate is of order when an averaging scheme is considered. When adapted to the learning setting we consider, this reads as: if , the convergence rate for the average is . Note that does not necessarily belong to in general. Also, the convergence rate we derive in Corollary 4 is better when the regularity parameter is greater than or is smaller than .

For batch GM in the attainable case, convergence results with optimal rates have been derived in, e.g., (Bauer et al., 2007; Caponnetto and Yao, 2010; Blanchard and Mücke, 2016; Dicker et al., 2017). In particular, Bauer et al. (2007) proved convergence rates without considering Assumption 3, and Caponnetto and Yao (2010) derived convergence rates For the non-attainable case, convergence results with suboptimal rates can be found in (Yao et al., 2007), and to the best of our knowledge, the only result with optimal rates is the one derived by Caponnetto and Yao (2010), but it requires extra unlabeled data. In contrast, Theorem 4 of this paper does not require any extra unlabeled data, while achieving the same optimal rates (up to a logarithmic factor). To the best of our knowledge, Theorem 4 may be the first optimal result in the non-attainable case for batch GM.

We end this discussion with some further comments on batch GM and simple SGM. First, according to Corollaries 1 and 2, it seems that both simple SGM (with step-size ) and batch GM (with step-size ) have the same computational complexities (which are related to the number of passes) and the same orders of upper bounds. However, there is a subtle difference between these two algorithms. As we see from (22) in the coming subsection, every iterations of simple SGM (with step-size ) correspond to one iteration of batch GM (with step-size ). In this sense, SGM discretizes and refines the regularization path of batch GM, which may thus lead to smaller generalization errors. This phenomenon can be further understood by comparing our derived bounds, (11) and (73), for these two algorithms. Indeed, if the computational error can be ignored, one can easily show that the minimum (over ) of the right-hand side of (11) with is always smaller than that of (73) with . Finally, by Corollary 6, using a larger step-size for SGM allows one to stop earlier (while sharing the same optimal rates), which thus reduces the computational complexity. This suggests that SGM may have some computational advantage over batch GM.

3.7 Proof Sketch (Error Decomposition)

The key to our proof is a novel error decomposition, which may also be used in analyzing other learning algorithms. One could alternatively use the approach in (Bousquet and Bottou, 2008; Lin et al., 2016b,a), which is based on the following error decomposition,

where is some suitably chosen intermediate element and denotes the empirical risk over , i.e.,

(20)

However, with this decomposition one can only derive a sub-optimal convergence rate, since the proof procedure involves upper bounding the learning sequence in order to estimate the sample error (the first term on the right-hand side). Also, in this case, the ‘regularity’ of the regression function cannot be fully utilized for estimating the bias (the last term). Thanks to the properties of the square loss, we can exploit a different error decomposition leading to better results.

To describe the decomposition, we need to introduce two sequences. The population iteration is defined by and

(21)

The above iterative procedure is ideal and cannot be implemented in practice, since the distribution is unknown in general. Replacing by the empirical measure and by , we derive the sample iteration (associated with the sample ), i.e., (10). Clearly, is deterministic, while is a -valued random variable depending on Given the sample , the sequence has a natural relationship with the learning sequence , since

(22)

Indeed, taking the expectation with respect to on both sides of (4), and noting that depends only on (given any ), one has

and thus,

which satisfies the iterative relationship given in (10). By an induction argument, (22) can then be proved.
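This relationship can also be checked numerically: with the sample held fixed, averaging the SGM iterate over many independent draws of the indices recovers the batch iterate run with the same step-size. Below is a small Monte Carlo sketch on toy data; all sizes and parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T, eta, b = 50, 3, 20, 0.05, 1
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Sample (batch) iteration: full empirical gradient at each step.
nu = np.zeros(d)
for _ in range(T):
    nu -= eta * X.T @ (X @ nu - y) / n

# SGM iterates averaged over many independent draws of the indices.
n_runs, avg = 20_000, np.zeros(d)
for _ in range(n_runs):
    w = np.zeros(d)
    for _ in range(T):
        j = rng.integers(0, n, size=b)
        w -= eta * X[j].T @ (X[j] @ w - y[j]) / b
    avg += w / n_runs

print(np.linalg.norm(avg - nu))  # close to zero, up to Monte Carlo error
```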

Let be the linear map defined by We have the following error decomposition.

Proposition 1

We have

(23)

Proof  For any , we have (Rosasco and Villa, 2015)

Thus, and

Using (22) in the above equality, we get,

The proof is finished by considering,

 
There are three terms in the upper bound of the error decomposition (23). We refer to the deterministic term as the bias, to the term depending on as the sample variance, and to as the computational variance. The bias term, which is deterministic, has been well studied in the literature, see e.g., (Yao et al., 2007) and also (Rosasco and Villa, 2015). The main novelties of this paper are the estimates of the sample and computational variances, the difficult part being the estimate of the computational variance. The proof of these results is quite lengthy and makes use of some ideas from (Yao et al., 2007; Smale and Zhou, 2007; Bauer et al., 2007; Ying and Pontil, 2008; Tarres and Yao, 2014; Rudi et al., 2015). These three error terms will be estimated in Sections 5 and 6. The bounds in Theorems 1 and 3 then follow by plugging these estimates into the error decomposition, see Section 7 for more details. The proof of Theorem 2 is similar, see Section 8 for the details.

4 Preliminary Analysis

In this section, we introduce some notation and preliminary lemmas that are necessary to our proofs.

4.1 Notation

We first introduce some notation. For for and for any operator where is a Hilbert space and denotes the identity operator on . denotes the expectation of a random variable For a given bounded operator , denotes the operator norm of , i.e., . We will use the standard conventions for summation and product: and

We next introduce some auxiliary operators. Let