# Can we speed up the convergence rate of stochastic gradient methods to $O(1/k^2)$ by a gradient averaging strategy?

## Abstract

In this paper we consider the question of whether it is possible to apply a gradient averaging strategy to improve on the sublinear convergence rate $O(1/k)$ of stochastic gradient methods without any increase in storage. Our analysis reveals that a positive answer requires an appropriate averaging strategy together with iterations that satisfy the variance dominant condition. Interestingly, we show that if the iterative variance we define is always dominant, even slightly, in the stochastic gradient iterations, the proposed gradient averaging strategy can increase the convergence rate to $O(1/k^2)$ in probability for strongly convex objectives with Lipschitz gradients. This conclusion suggests how the stochastic gradient iterations should be controlled to improve the rate of convergence.

X. Xu and X. Luo

**Keywords:** Stochastic optimization, Stochastic gradient, Convergence rate, Speed up, Strongly convex

## 1 Introduction

In this paper we consider the question of whether it is possible to apply a gradient averaging strategy to improve on the sublinear convergence rate $O(1/k)$ without any increase in storage for stochastic gradient (SG) methods. The SG method is a popular methodology (Zinkevich, 2003; Zhang, 2004; Bottou and Bousquet, 2007; Nemirovski et al., 2009; Shalev-Shwartz et al., 2011) for solving the following class of stochastic optimization problems:

$$\min_{x \in \mathbb{R}^d} F(x), \qquad (1)$$

where the real-valued function $F$ is defined by

$$F(x) = \mathbb{E}_{\xi}\big[f(x; \xi)\big], \qquad (2)$$

and $\{f(\cdot\,; \xi)\}$ can be viewed as a collection of real-valued functions indexed by a random variable $\xi$ with a certain probability distribution over its index set.

With an initial point $x_1$, these methods are characterized by the iteration

$$x_{k+1} = x_k - \alpha_k\, g(x_k, \xi_k), \qquad (3)$$

where $\alpha_k > 0$ is the stepsize and $g(x_k, \xi_k)$ is the stochastic gradient defined by

$$g(x_k, \xi_k) = \nabla f(x_k; \xi_k), \qquad (4)$$

which is usually assumed to be an unbiased estimate of the so-called full gradient $\nabla F(x_k)$, i.e., $\mathbb{E}_{\xi_k}[g(x_k, \xi_k)] = \nabla F(x_k)$ (Shapiro et al., 2009; Bottou et al., 2018). The SG method was originally developed by Robbins and Monro (1951) for smooth stochastic approximation problems. It is guaranteed to achieve the sublinear convergence rate $O(1/k)$ for strongly convex objectives (Nemirovski et al., 2009; Bottou et al., 2018), and this theoretical rate is also supported by practical experience (Bottou et al., 2018). In particular, the practical performance of stochastic gradient methods with momentum has made them popular in the community working on training DNNs (Sutskever et al., 2013); in fact, this approach can be viewed as a gradient averaging strategy. While momentum can lead to improved practical performance, it is still not known to lead to a faster convergence rate. Although the gradient averaging strategy and its variants can usually improve the constants in the convergence rate (Xiao, 2010), they do not improve on the sublinear convergence rates of SG methods. However, owing to the successful practical performance of gradient averaging, it is worth considering whether it is possible to improve the convergence rate itself, which forms the starting point of this work.
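As a concrete illustration of the basic iteration (3), the following sketch runs SG with a diminishing $O(1/k)$ stepsize on a toy least-squares problem. The quadratic objective, the sample sizes, and the stepsize constants are illustrative assumptions, not part of the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star                      # noiseless targets, so x_star minimizes F

def stochastic_grad(x, i):
    # Gradient of the single component f_i(x) = 0.5 * (a_i^T x - b_i)^2,
    # an unbiased estimate of the full gradient of F.
    return (A[i] @ x - b[i]) * A[i]

x = np.zeros(d)
for k in range(1, 5001):
    i = rng.integers(n)             # realization of the random variable xi_k
    alpha = 1.0 / (k + 10)          # diminishing O(1/k) stepsize
    x = x - alpha * stochastic_grad(x, i)
```

With a strongly convex quadratic and a diminishing stepsize, the iterates drift toward the minimizer at the sublinear rate the text describes.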

The primary contribution of this work is to show that under the variance dominant condition (Assumption 3), the proposed gradient averaging strategy can improve the convergence rate to $O(1/k^2)$ in probability, without any increase in storage, for strongly convex objectives with Lipschitz gradients. This result also suggests how the stochastic gradient iterations should be controlled to improve the rate of convergence in practice. In particular, our averaging strategy coordinates the relationship between the mean and variance of the increment of the iteration, so that the growth of the expectation can be controlled while the variance is reduced.

### 1.1 Related Work

We briefly review several methods related to the new strategy, mainly including stochastic gradient with momentum (SGM), gradient averaging, stochastic variance reduced gradient (SVRG), SAGA and iterate averaging.

SGM. With an initial point $x_1$, two scalar sequences $\{\alpha_k\}$ and $\{\beta_k\}$ that are either predetermined or set dynamically, and $x_0 = x_1$, SGM uses iterations of the form (Tseng, 1998; Bottou et al., 2018)

$$x_{k+1} = x_k - \alpha_k\, g(x_k, \xi_k) + \beta_k (x_k - x_{k-1}).$$

They are procedures in which each step is chosen as a combination of the stochastic gradient direction and the most recent iterate displacement. It is common to set $\alpha_k \equiv \alpha$ and $\beta_k \equiv \beta$ as fixed constants, and in this case we can rewrite the SGM iteration as

$$x_{k+1} = x_k - \alpha \sum_{j=1}^{k} \beta^{k-j}\, g(x_j, \xi_j). \qquad (5)$$

So it is clear that SGM is a weighted average of all previous stochastic gradient directions. In deterministic settings, it is referred to as the heavy ball method (Polyak, 1964). While SGM can lead to improved practical performance, it is still not known to lead to a faster convergence rate. Moreover, see Remark 4.3 in Section 4 for a variance analysis of SGM.
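The claim that fixed-coefficient SGM averages all previous directions can be checked numerically: unrolling the momentum recursion $m_k = g_k + \beta m_{k-1}$ yields geometric weights $\beta^{k-j}$. A small sketch with synthetic gradient vectors (the value of $\beta$ and the gradient sequence are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 0.9                          # fixed momentum coefficient (illustrative)
grads = [rng.standard_normal(3) for _ in range(50)]

# Recursive momentum buffer: m_k = g_k + beta * m_{k-1}.
m = np.zeros(3)
for g in grads:
    m = g + beta * m

# Closed form: m_K = sum_j beta^(K-1-j) * g_j, i.e. a geometrically
# weighted average of all previous stochastic gradient directions.
K = len(grads)
m_closed = sum(beta ** (K - 1 - j) * g for j, g in enumerate(grads))
```

The two computations agree to floating-point precision, confirming the weighted-average interpretation of the SGM step.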

Gradient Averaging. Similar to SGM, gradient averaging uses the average of all previous stochastic gradients,

$$x_{k+1} = x_k - \frac{\alpha_k}{k} \sum_{j=1}^{k} g(x_j, \xi_j). \qquad (6)$$

This approach is used in the dual averaging method (Nesterov, 2009). Compared with our new strategy, this method reduces the variance to a similar order without taking the stepsize $\alpha_k$ into account; however, its expectation is not well controlled (for details, see Remark 4.2 in Section 4). So, as mentioned above, it can improve the constants in the convergence rate (Xiao, 2010) but does not improve on the sublinear convergence rates.
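As a side note on implementation, the average in (6) need not store past gradients: a running-mean recursion reproduces it exactly. A minimal numerical check with synthetic gradient vectors (not tied to any real objective):

```python
import numpy as np

rng = np.random.default_rng(2)
grads = [rng.standard_normal(4) for _ in range(100)]

# Incremental mean: d_k = d_{k-1} + (g_k - d_{k-1}) / k reproduces the
# plain average of g_1, ..., g_k without storing any past gradient.
d = np.zeros(4)
for k, g in enumerate(grads, start=1):
    d += (g - d) / k
```

This storage-free recursion is the same trick that makes the averaging strategies in this paper possible "without any increase in storage".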

SVRG. SVRG is designed to minimize an objective function that has the form of a finite sum (Johnson and Zhang, 2013), i.e.,

$$F(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x).$$

The method is able to achieve a linear convergence rate for strongly convex problems, i.e., the optimality gap $\mathbb{E}[F(x_k)] - F_*$ decreases at a geometric rate $O(\rho^k)$ for some $\rho \in (0, 1)$.

SVRG needs to compute batch gradients and has two parameters that need to be set: besides the stepsize $\alpha$, there is an additional parameter $m$, namely the number of iterations per inner loop. In order to guarantee linear convergence theoretically, one needs to choose $\alpha$ and $m$ such that

$$\frac{1}{c\,\alpha (1 - 2L\alpha)\, m} + \frac{2L\alpha}{1 - 2L\alpha} < 1,$$

where $c$ and $L$ are given in Assumption 1. Without explicit knowledge of $c$ and $L$, the length of the inner loop and the stepsize are typically both chosen by experimentation. This improved rate is achieved by either an increase in computation or an increase in storage. Hence, SVRG usually cannot beat SG for very large $n$ (Bottou et al., 2018).
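The two-loop structure described above can be sketched as follows, again on an illustrative least-squares finite sum; the stepsize and inner-loop length below are hand-tuned for the toy problem, not derived from the theoretical condition.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 100, 5
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star

def grad_i(x, i):
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    return A.T @ (A @ x - b) / n

alpha, m = 0.05, 2 * n              # stepsize and inner-loop length (hand-tuned)
x = np.zeros(d)
for epoch in range(30):
    x_snap, mu = x.copy(), full_grad(x)   # batch gradient at the snapshot
    for _ in range(m):
        i = rng.integers(n)
        # Variance-reduced gradient: unbiased, and its variance vanishes
        # as both x and x_snap approach the minimizer.
        v = grad_i(x, i) - grad_i(x_snap, i) + mu
        x -= alpha * v
```

Note the extra cost relative to plain SG: one full batch gradient per outer epoch, plus a second component-gradient evaluation per inner step.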

SAGA. SAGA has its origins in the stochastic average gradient (SAG) algorithm (Le Roux et al., 2012; Schmidt et al., 2017); the SAG algorithm is in turn a randomized version of the incremental aggregated gradient (IAG) method proposed in Blatt et al. (2007) and analyzed in Gürbüzbalaban et al. (2017). Compared with SVRG, SAGA applies an iteration that is closer in form to SG in that it does not compute batch gradients except possibly at the initial point, and SAGA has the practical advantage that there is only one parameter (the stepsize) to tune instead of two. Beyond its initialization phase, the per-iteration cost of SAGA is the same as that of an SG method; and it has been shown that the method can also achieve a linear rate of convergence for strongly convex problems (Defazio et al., 2014). However, the price paid to reach this rate is the need to store $n$ stochastic gradient vectors in the general case (logistic and least squares regression are exceptions; Bottou et al., 2018), which would be prohibitive in many large-scale applications.

Iterate Averaging. Rather than averaging the gradients, some authors propose to perform the basic SG iteration and use an average over the iterates as the final estimator (Polyak, 1991; Polyak and Juditsky, 1992). Since SG generates noisy iterate sequences that tend to oscillate around minimizers during the optimization process, the iterate average exhibits less noisy behavior (Bottou et al., 2018). It has been shown that suitable iterate averaging strategies attain an $O(1/k)$ rate for strongly convex problems even for non-smooth objectives (Hazan and Kale, 2014; Rakhlin et al., 2012). However, none of these methods improve on the sublinear convergence rates (Schmidt et al., 2017).
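A minimal sketch of Polyak-Ruppert iterate averaging: run plain SG with a slower $O(1/\sqrt{k})$ stepsize, but report the running average of the iterates as the final estimate. The noisy linear model and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 500, 5
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star + 0.1 * rng.standard_normal(n)   # noisy targets

x = np.zeros(d)
x_bar = np.zeros(d)
for k in range(1, 20001):
    i = rng.integers(n)
    g = (A[i] @ x - b[i]) * A[i]
    x -= (0.05 / k ** 0.5) * g      # slower O(1/sqrt(k)) stepsize
    x_bar += (x - x_bar) / k        # running average of the iterates
```

The raw iterates oscillate around the minimizer because of the gradient noise, while the average smooths those oscillations out, which is exactly the effect the text attributes to iterate averaging.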

### 1.2 Paper Organization

The next section introduces the assumptions we use for establishing convergence results, especially the variance dominant condition (Assumption 3). The new strategy is then discussed in detail in Section 3. In Section 4, we show that under the variance dominant condition, the proposed strategy can increase the convergence rate to $O(1/k^2)$ in probability for strongly convex objectives with Lipschitz gradients, which suggests how the stochastic gradient iterations should be controlled to improve the rate of convergence. Finally, we draw some conclusions in Section 5.

## 2 Assumptions

### 2.1 Assumption on the objective

First, let us begin with a basic assumption of smoothness of the objective function. Such an assumption is essential for convergence analyses of most gradient-based methods (Bottou et al., 2018).

###### Assumption 1 (Strongly convex objectives with Lipschitz-continuous gradients)

The objective function $F : \mathbb{R}^d \to \mathbb{R}$ is continuously differentiable and there exist constants $0 < c \le L$ such that, for all $x, y \in \mathbb{R}^d$,

$$\|\nabla F(x) - \nabla F(y)\|_2 \le L \|x - y\|_2, \qquad (7)$$

$$F(y) \ge F(x) + \nabla F(x)^{\mathrm{T}} (y - x) + \frac{c}{2} \|y - x\|_2^2. \qquad (8)$$

The inequality (7) ensures that the gradient of the objective does not change arbitrarily quickly with respect to the parameter vector, which implies that

$$F(y) \le F(x) + \nabla F(x)^{\mathrm{T}} (y - x) + \frac{L}{2} \|y - x\|_2^2. \qquad (9)$$

This inequality (9) is an important basis for performing the so-called mean-variance analyses of stochastic iterative sequences (Bottou et al., 2018; Luo and Xu, 2019). The inequality (8) is called a strong convexity condition, which is often used to ensure a sublinear convergence rate for stochastic gradient methods; the role of strong convexity may be essential for such rates of convergence (Nemirovski et al., 2009; Bottou et al., 2018). Under the strong convexity assumption, the gap between the value of the objective and the minimum can be bounded by the squared $\ell_2$-norm of the gradient of the objective:

$$F(x) - F_* \le \frac{1}{2c} \|\nabla F(x)\|_2^2, \qquad (10)$$

where $F_*$ denotes the minimal value of $F$.

This is referred to as the Polyak-Łojasiewicz inequality which was originally introduced by Polyak (1963). It is a sufficient condition for gradient descent to achieve a linear convergence rate; and it is also a special case of the Łojasiewicz inequality proposed in the same year (Łojasiewicz, 1963), which gives an upper bound for the distance of a point to the nearest zero of a given real analytic function.
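To make the sufficiency claim concrete, here is the standard one-step argument (a sketch, using the descent property (9), the PL inequality (10), and the constants $c, L$ of Assumption 1) showing that full-gradient descent with stepsize $1/L$ contracts the optimality gap linearly:

```latex
% One gradient-descent step x_{k+1} = x_k - (1/L) * grad F(x_k):
\begin{aligned}
F(x_{k+1}) &\le F(x_k) - \tfrac{1}{2L}\,\|\nabla F(x_k)\|_2^2
  && \text{(by (9) with stepsize } 1/L\text{)} \\
           &\le F(x_k) - \tfrac{c}{L}\,\bigl(F(x_k) - F_*\bigr)
  && \text{(by the PL inequality (10))} \\
\Longrightarrow\quad F(x_{k+1}) - F_* &\le \Bigl(1 - \tfrac{c}{L}\Bigr)\bigl(F(x_k) - F_*\bigr).
\end{aligned}
```

Since $0 < c \le L$, the factor $1 - c/L$ lies in $[0, 1)$, giving the linear rate.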

### 2.2 Assumption on the variance

We follow Bottou et al. (2018) in making the following assumption about the variance of the stochastic gradients $g(x_k, \xi_k)$. It states that the variance of $g(x_k, \xi_k)$ is restricted in a relatively minor manner.

###### Assumption 2 (Variance limit)

The objective function $F$ and the stochastic gradient $g(x_k, \xi_k)$ are such that there exist scalars $M \ge 0$ and $M_V \ge 0$ such that, for all $k$,

$$\mathbb{V}_{\xi_k}\big[g(x_k, \xi_k)\big] \le M + M_V \|\nabla F(x_k)\|_2^2. \qquad (11)$$

### 2.3 Assumption on the iteration

Now we make the following variance dominant assumption. It states that the iterative variance is always dominant, even slightly, in the stochastic gradient iterations. This assumption guarantees that the proposed strategy can achieve the convergence rate $O(1/k^2)$ in probability for strongly convex objectives.

###### Assumption 3 (Variance dominant)

The sequence of iterates $\{x_k\}$ satisfies, for all $k$, that there is a fixed constant (which could be arbitrarily small) such that

(12) |

where $\mathbb{E}[\cdot]$ denotes the historical expectation operator, taken with respect to the joint distribution of the random variables $\xi_1, \xi_2, \dots, \xi_k$, and the variance of the iterate is defined as

$$\mathbb{V}[x_{k+1}] = \mathbb{E}\big[\|x_{k+1}\|_2^2\big] - \big\|\mathbb{E}[x_{k+1}]\big\|_2^2. \qquad (13)$$

## 3 Algorithms

### 3.1 Methods

Our accelerated method is a procedure in which each step is chosen as a weighted average of all historical stochastic gradients. Specifically, with an initial point $x_1$, the method is characterized by the iteration

$$x_{k+1} = x_k - \alpha_k d_k, \qquad (14)$$

where the weighted average direction $d_k$ is defined by

(15) |

Here, $d_k$ is the weighted average of the past gradients, and different values of the weighting parameter correspond to different weighting methods. A larger value makes the average focus on more recent gradients, as shown in Figure 1; the recommended choice of the weighting parameter uses roughly the most recent historical gradients.

Moreover, the method (14) can be equivalently rewritten as the iteration

(16) |

where the direction vector is recursively defined as

which can be viewed as the classical stochastic gradient with momentum, where the decay factor depends on the iteration count $k$.

We now define our accelerated method as Algorithm 1. The algorithm presumes that three computational tools exist: (i) a mechanism for generating a realization of the random variable $\xi_k$ (with $\{\xi_k\}$ representing a sequence of jointly independent random variables); (ii) given an iteration number $k$, a mechanism for computing a scalar stepsize $\alpha_k$; and (iii) given an iterate $x_k$ and the realization of $\xi_k$, a mechanism for computing a stochastic vector $g(x_k, \xi_k)$ and a scalar weight.
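The weighted average $d_k$ can be maintained recursively, without storing past gradients, as the text around (16) indicates. The exact weights of (15) did not survive extraction, so the polynomial weights $w_i = i^m$ below are purely a hypothetical stand-in to illustrate the mechanics; larger $m$ concentrates weight on more recent gradients.

```python
import numpy as np

rng = np.random.default_rng(5)
m = 2.0                             # hypothetical weighting exponent (illustrative)
grads = [rng.standard_normal(3) for _ in range(60)]

# Recursive update: no past gradients are stored, matching the paper's
# "no increase in storage" requirement.
d = np.zeros(3)
W = 0.0
for k, g in enumerate(grads, start=1):
    w = k ** m                      # weight of the k-th gradient (assumed form)
    W += w                          # running total weight
    d += (w / W) * (g - d)          # d is now the w-weighted average of g_1..g_k
```

The update $d \leftarrow d + (w_k/W_k)(g_k - d)$ is algebraically identical to recomputing the full weighted average at every step, which is why the convex-combination form of (16) reproduces (14)-(15).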

### 3.2 Stepsize Policy

For strongly convex objectives, we consider the stepsize sequence taking the form

(17) |

where the constant will be discussed in Lemma 4.3.

Notice that the accelerated method and the stochastic gradient method are exactly the same in the first iteration. So we assume, without loss of generality, that the first $k$ iterations generated by (14) have the sublinear convergence rate $O(1/k)$ under Assumptions 1 and 2; that is, for every $j \le k$, we have

$$\mathbb{E}[F(x_j)] - F_* = O(1/j), \qquad (18)$$

then we shall prove by induction on $k$ that the accelerated method maintains the sublinear convergence rate $O(1/k)$ under Assumptions 1 and 2; furthermore, we shall also prove that this method can achieve an $O(1/k^2)$ convergence rate under Assumptions 1 to 3.

It follows from (18) and Assumption 1 that

(19) |

Since , it follows from (19) that

and then, we obtain

(20) |

Together with Assumption 3, we further obtain

(21) |

We will finally show in Section 4 what (21) actually implies.

On the basis of (20), (21) and the stepsize policy (17), we first prove two lemmas that are necessary for the subsequent convergence analysis. {lemma} Under the conditions of (18), suppose that the sequence of iterates $\{x_k\}$ is generated by (14) with a stepsize sequence taking the form (17). Then, there is a constant such that for any given diagonal matrix with , the inequality

(22) |

holds in probability. {proof} Note that for any ,

(23) |

together with (17), we have ; thus, to prove (22), we only need to show that

holds in probability. Using the mean-variance analysis, it follows from and (20) that

meanwhile, according to (20), we also have

(24) |

Using Chebyshev’s inequality, there is a such that for ,

which gives the inequality (22) in probability.

Under Assumption 3, it is clear that (22) could be further strengthened. {lemma} Suppose the conditions of Lemma 3.2 and Assumption 3 hold. Then, there is such that for any given diagonal matrix with , the inequality

(25) |

holds in probability, where . {proof} Noting (23), we have ; thus, to prove (25), we only need to show that

holds in probability. First, it follows from and (21) that

together with (24) and using Chebyshev’s inequality, there is a such that

It is worth noting that when , the variance will become the principal part; so the proof is complete.

## 4 Convergence Results

### 4.1 Mean-Variance Framework

The mean-variance framework can be described as a fundamental lemma for any iteration based on random steps, which relies only on Assumption 1 and is a slight generalization of Lemma 4.2 in Bottou et al. (2018).

Under Assumption 1, if for every , is any random vector independent of and is a stochastic step depending on , then the iteration

satisfies the following inequality

where the variance of is defined as

(26) |

According to the inequality (9), the iteration satisfies

Noting that is independent of and taking expectations in these inequalities with respect to the distribution of , we obtain

Recalling (26), we finally get the desired bound.

Regardless of the states before the current iterate, the expected decrease in the objective function yielded by the $k$th stochastic step can be bounded above by a quantity involving the expectation and variance of that step.

### 4.2 Expectation Analysis

Now we will analyze the mean of to get the bounds of and , where is the historical expectation of . First, according to the definition (15) of the weighted average direction , we have

Further, from Assumption 1, we have

then there is a diagonal matrix with such that

(27) |

Therefore, could be written as

(28) |

where is a diagonal matrix with .

Together with Lemma 3.2, we get the following bounds of and . {theorem} Suppose the conditions of Lemma 3.2 hold. Then for every , the following conditions

(29) | ||||

(30) |

hold in probability. {remark} For the gradient averaging method (6) mentioned in Subsection 1.1, which uses the average of all previous gradients,

where

By (27), the historical expectation of could be written as

where

And the resulting bound decays more slowly than the corresponding bound described in Theorem 4.2. {proof} According to (28) and Lemma 3.2, it follows that

where the vector and . Thus, one obtains (29) and (30) by noting that

and

so the proof is complete.

Under Assumption 3, both (29) and (30) can be further improved. {theorem} Suppose the conditions of Lemma 3.2 hold. Then for every , the following conditions

(31) | ||||

(32) |

hold in probability, where . {proof} According to Lemma 3.2, we have , and the desired results could be proved in the same way as the proof of Theorem 4.2.

### 4.3 Variance Analysis

Now we will analyze the variance of the direction to get the bound of its decay. As an important result, we will show that this variance tends to zero as $k$ grows. {lemma} Under Assumption 2, suppose that the sequence of iterates $\{x_k\}$ is generated by (14) with a stepsize sequence taking the form (17). Then

(33) |

where is a positive real constant. {remark} For the SGM method (5) mentioned in Subsection 1.1, which uses a weighted average of all previous gradients,

where

Since

it follows that

Together with the proof below, one can find that the variance could be controlled only up to a fixed fraction. {proof} It follows from (27) and that

together with the Arithmetic Mean Geometric Mean inequality, we have

Hence, along with Assumption 2, we obtain

According to (23), and ; therefore, we have , and the proof is complete.

Combining Theorem 4.2 and Lemma 4.3, we can obtain a bound for each iteration of the accelerated method. {lemma} Under the conditions of Theorem 4.2 and Lemma 4.3, suppose that the stepsize sequence satisfies . Then, the inequality

holds in probability, where and ; further,

According to (29), together with the Arithmetic Mean Geometric Mean inequality, one obtains