Reducing the variance in online optimization by transporting past gradients
Abstract
Most stochastic optimization methods use gradients once before discarding them. While variance reduction methods have shown that reusing past gradients can be beneficial when there is a finite number of datapoints, they do not easily extend to the online setting. One issue is the staleness due to using past gradients. We propose to correct this staleness using the idea of implicit gradient transport (IGT) which transforms gradients computed at previous iterates into gradients evaluated at the current iterate without using the Hessian explicitly. In addition to reducing the variance and bias of our updates over time, IGT can be used as a drop-in replacement for the gradient estimate in a number of well-understood methods such as heavy ball or Adam. We show experimentally that it achieves state-of-the-art results on a wide range of architectures and benchmarks. Additionally, the IGT gradient estimator yields the optimal asymptotic convergence rate for online stochastic optimization in the restricted setting where the Hessians of all component functions are equal. (Open-source implementation available at: https://github.com/seba1511/igt.pth)
Sébastien M. R. Arnold (work done while at Mila), University of Southern California, Los Angeles, CA, seb.arnold@usc.edu; Pierre-Antoine Manzagol, Google Brain, Montréal, QC, manzagop@google.com; Reza Babanezhad, University of British Columbia, Vancouver, BC, rezababa@cs.ubc.ca; Ioannis Mitliagkas, Mila, Université de Montréal, Montréal, QC, ioannis@iro.umontreal.ca; Nicolas Le Roux, Mila, Google Brain, Montréal, QC, nlr@google.com
Preprint. Under review.
1 Introduction
We wish to solve the following minimization problem:
$$\min_{\theta}\; f(\theta) := \mathbb{E}_{x \sim p}\big[f(x, \theta)\big] \qquad (1)$$
where we only have access to samples $x \sim p$ and to a first-order oracle that gives us, for a given $x$ and a given $\theta$, the derivative of $f(x, \theta)$ with respect to $\theta$, i.e. $\nabla_\theta f(x, \theta)$. It is known robbins1951stochastic that, when $f$ is smooth and strongly convex, there is a converging algorithm for Problem 1 that takes the form $\theta_{t+1} = \theta_t - \alpha_t \nabla_\theta f(x_t, \theta_t)$, where $x_t$ is a sample from $p$. This algorithm, dubbed stochastic gradient (SG), has a convergence rate of $O(1/t)$ (see for instance bubeck2015convex), within a constant factor of the minimax rate for this problem. When one has access to the true gradient $\nabla_\theta f(\theta)$ rather than just a sample, this rate dramatically improves to $O(\rho^t)$ for some $0 \le \rho < 1$.
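As a concrete illustration (our own sketch, not taken from the paper), the SG recursion with a decreasing stepsize can be written as follows, on the toy objective $f(\theta) = \theta^2/2$ with additive Gaussian gradient noise:

```python
import random

def sgd_quadratic(theta0=5.0, steps=2000, seed=0):
    """Illustrative sketch: stochastic gradient on f(theta) = 0.5 * theta^2
    with a noisy first-order oracle and a decreasing stepsize alpha_t = 1/t."""
    rng = random.Random(seed)
    theta = theta0
    for t in range(1, steps + 1):
        grad = theta + rng.gauss(0.0, 1.0)  # noisy oracle: f'(theta) + noise
        theta -= (1.0 / t) * grad           # decreasing stepsize alpha_t = 1/t
    return theta

final = sgd_quadratic()  # iterate ends up close to the optimum at 0
```

With a constant stepsize instead of `1.0 / t`, the iterate would only reach a noise ball around the optimum, which is the behaviour discussed next.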
In addition to hurting the convergence speed, noise in the gradient makes optimization algorithms harder to tune. Indeed, while full gradient descent is convergent for a constant stepsize $\alpha$, and also amenable to line searches to find a good value for that stepsize, the stochastic gradient method from robbins1951stochastic with a constant stepsize only converges to a ball around the optimum schmidt2014convergence (under some conditions, it does converge linearly to the optimum, e.g., Vaswani19). Thus, to achieve convergence, one needs to use a decreasing stepsize. While this seems like a simple modification, the precise decrease schedule can have a dramatic impact on the convergence speed. While theory prescribes $\alpha_t \propto t^{-\beta}$ with $\beta \in (1/2, 1]$ in the smooth case, practitioners often use larger stepsizes like $\alpha_t \propto t^{-1/2}$ or even constant stepsizes.
When the distribution $p$ has finite support, Eq. 1 becomes a finite sum and, in that setting, it is possible to achieve efficient variance reduction and drive the noise to zero, allowing stochastic methods to achieve linear convergence rates leroux2012stochastic; johnson2013accelerating; zhang2013linear; mairal2013optimization; ShalevShwartz2013sdca; defazio2014saga. Unfortunately, the finite support assumption is critical to these algorithms which, while valid in many contexts, does not have the broad applicability of the standard SG algorithm. Several works have extended these approaches to the online setting by applying these algorithms to a growing subset of the samples babanezhad2015wasting; hofmann2015variance, but they need to revisit past examples multiple times and are not truly online.
Another line of work reduces variance by averaging iterates polyak1992acceleration; lacoste2012simpler; bach2013non; flammarion2015averaging; dieuleveut2017harder; dieuleveut2017bridging; JMLR:v18:16595. While these methods converge for a constant stepsize in the stochastic case (under some conditions on $f$), their practical speed is heavily dependent on the fraction of iterates kept in the averaging, a hyperparameter that is thus hard to tune, and they are rarely used in deep learning.
Our work combines two existing ideas and adds a third: a) At every step, it updates the parameters using a weighted average of past gradients, like in SAG (leroux2012stochastic; schmidt2017minimizing), albeit with a different weighting scheme; b) It reduces the bias and variance induced by the use of these old gradients by transporting them to "equivalent" gradients computed at the current point, similar to gower2017tracking; c) It does so implicitly by computing the gradient at a parameter value different from the current one. The resulting gradient estimator can then be used as a plug-in replacement of the stochastic gradient within any optimization scheme. Experimentally, both SG using our estimator and its momentum variant outperform the most commonly used optimizers in deep learning.
2 Momentum and other approaches to dealing with variance
Stochastic variance reduction methods use an average of past gradients to reduce the variance of the gradient estimate. At first glance, it seems like their updates are similar to that of momentum polyak1964some, also known as the heavy ball method, which performs the following updates (this is slightly different from the standard formulation but equivalent for constant $\gamma$):

$$v_t = \gamma\, v_{t-1} + (1 - \gamma)\, g_t(\theta_{t-1})\;, \qquad \theta_t = \theta_{t-1} - \alpha\, v_t\;,$$

where $g_t(\theta_{t-1})$ is the stochastic gradient computed at iterate $\theta_{t-1}$. When $v_0 = 0$, this leads to $v_t = (1 - \gamma) \sum_{i=1}^{t} \gamma^{t-i} g_i(\theta_{i-1})$. Hence, the heavy ball method updates the parameters of the model using an average of past gradients, bearing similarity with SAG leroux2012stochastic, albeit with exponential instead of uniform weights.
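The equivalence between the recursive form of the velocity and its exponentially weighted closed form can be checked numerically. The sketch below uses made-up gradient values and assumes the $(1-\gamma)$-scaled formulation:

```python
def velocity(grads, gamma):
    """Heavy-ball velocity via the recursion v_t = gamma * v_{t-1} + (1 - gamma) * g_t."""
    v = 0.0
    for g in grads:
        v = gamma * v + (1.0 - gamma) * g
    return v

def exp_weighted_sum(grads, gamma):
    """Closed form with v_0 = 0: v_t = (1 - gamma) * sum_i gamma^(t - i) * g_i."""
    t = len(grads)
    return (1.0 - gamma) * sum(gamma ** (t - i) * g
                               for i, g in enumerate(grads, start=1))

grads = [1.0, -2.0, 0.5, 3.0]  # arbitrary stochastic gradients
recursive = velocity(grads, 0.9)
closed = exp_weighted_sum(grads, 0.9)
```

Both forms agree to machine precision, which is the "exponential instead of uniform weights" observation above.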
Interestingly, while momentum is a popular method for training deep networks, its theoretical analysis in the stochastic setting is limited (sutskever2013importance), except in the particular setting when the noise converges to 0 at the optimum (loizou2017momentum). Also surprising is that, despite the apparent similarity with stochastic variance reduction methods, current convergence rates are slower when using momentum in the presence of noise Schmidt11_inexact, although this might be a limitation of the analysis.
2.1 Momentum and variance
We propose here an analysis of how, on quadratics, using past gradients as done in momentum does not lead to a decrease in variance. If gradients are stochastic, then $\theta_t$ is a random variable. Denoting $\epsilon_i$ the noise at timestep $i$, i.e. $g_i(\theta_{i-1}) = \nabla f(\theta_{i-1}) + \epsilon_i$, and writing $\theta_t - \theta^*$ as the sum of a deterministic term and $\sum_{i=1}^{t} N_{t,i}\,\epsilon_i$, with $N_{t,i}$ the impact of the noise of the $i$-th datapoint on the $t$-th iterate, we may now analyze the total impact of each $\epsilon_i$ on the iterates. Figure 1 shows the impact of $\epsilon_i$ on $\theta_t$, as measured by $\|N_{t,i}\|^2$, for three datapoints ($i = 1$, $i = 25$ and $i = 50$) as a function of $t$, for stochastic gradient (left) and momentum (right). As we can see, when using momentum, the variance due to a given datapoint first increases as the noise influences both the next iterate (through the parameter update) and the subsequent updates (through the velocity). Due to the $1 - \gamma$ weight applied when a point is first sampled, a larger value of $\gamma$ leads to a lower immediate impact of the noise of a given point on the iterates. However, a larger $\gamma$ also means that the noise of a given gradient is kept longer, leading to little or no decrease of the total variance (dashed blue curve). Even in the case of stochastic gradient, the noise at a given timestep carries over to subsequent timesteps, even if the old gradients are not used for the update, as the iterate itself depends on the noise.
At every timestep, the contributions to the noise of the 1st, the 25th and the 50th points in Fig. 1 are unequal. If we assume that the $\epsilon_i$ are i.i.d., then the total variance would be minimal if the contribution from each point was equal. Further, one can notice that the impact of datapoint $i$ on iterate $t$ is only a function of $t - i$ and not of $t$. This guarantees that the total noise will not decrease over time.
To address these two points, one can increase the momentum parameter $\gamma_t$ over time. In doing so, the noise of new datapoints will have a decreasing impact on the total variance as their gradient is multiplied by $1 - \gamma_t$. Figure 1(c) shows the impact of each noise for an increasing momentum $\gamma_t$. The peak of noise for later datapoints is indeed lower than that of earlier ones. However, the variance still does not go to 0. This is because, as the momentum parameter increases, the update is an average of many gradients, including stale ones. Since these gradients were computed at iterates already influenced by the noise over previous datapoints, that past noise is amplified, as testified by the higher early peak for the increasing momentum. Ultimately, increasing momentum does not lead to a convergent algorithm in the presence of noise when using a constant stepsize.
2.2 SAG and Hessian modelling
The impact of the staleness of the gradients on the convergence is not limited to momentum. In SAG, for instance, the excess error after $t$ updates is proportional to $\big(1 - \min\{\frac{1}{16\kappa}, \frac{1}{8n}\}\big)^t$, compared to the excess error of the full gradient method, which is $\big(1 - \frac{1}{\kappa}\big)^t$, where $\kappa$ is the condition number of the problem. (The $\kappa$ in the convergence rate of SAG is generally larger than the $\kappa$ in the full gradient algorithm.) The difference between the two rates is larger when the minimum in the SAG rate is attained by the second term. This happens either when $\kappa$ is small, i.e. the problem is well conditioned and a lot of progress is made at each step, or when $n$ is large, i.e. there are many points in the training set. Both cases imply that a large distance has been travelled between two draws of the same datapoint.
Recent works showed that correcting for that staleness by modelling the Hessian wai2017curvature; gower2017tracking leads to improved convergence. As momentum uses stale gradients, the velocity is an average of current and past gradients and thus can be seen as an estimate of the true gradient at a point which is not the current one but rather a convex combination of past iterates. As past iterates depend on the noise of previous gradients, this bias in the gradients amplifies the noise and leads to a non-converging algorithm. We shall thus "transport" the old stochastic gradients to make them closer to their corresponding value at the current iterate, $\theta_t$. Past works did so using the Hessian or an explicit approximation thereof, which can be expensive and difficult to compute and maintain. We will instead resort to implicit transport, a new method that aims at compensating for the staleness of past gradients without making explicit use of the Hessian.
3 Converging optimization through implicit gradient transport
Before showing how to combine the advantages of both increasing momentum and gradient transport, we demonstrate how to transport gradients implicitly. This transport is only exact under a strong assumption that will not hold in practice. However, this result will serve to convey the intuition behind implicit gradient transport. We will show in Section 4 how to mitigate the effect of the unsatisfied assumption.
3.1 Implicit gradient transport
Let us assume that we have received $t$ samples $x_1, \dots, x_t$ in an online fashion. We wish to approach the full gradient $g_t = \frac{1}{t}\sum_{i=1}^{t} \nabla f(x_i, \theta_t)$ as accurately as possible. We also assume here that a) We have a noisy estimate $v_{t-1}$ of $g_{t-1}$; b) We can compute the gradient $\nabla f(x_t, \theta)$ at any location $\theta$. We shall seek a $\theta$ such that

$$\frac{t-1}{t}\, v_{t-1} + \frac{1}{t}\, \nabla f(x_t, \theta) \;\approx\; g_t\;.$$
To this end, we shall make the following assumption:
Assumption 3.1.
All individual functions are quadratics with the same Hessian .
This is the same assumption as (flammarion2015averaging, Section 4.1). Although it is unlikely to hold in practice, we shall see that our method still performs well when that assumption is violated.
Under Assumption 3.1, we then have (see details in Appendix)

$$g_t \;=\; \frac{t-1}{t}\, g_{t-1} \;+\; \frac{1}{t}\, \nabla f\big(x_t,\; \theta_t + (t-1)(\theta_t - \theta_{t-1})\big)\;.$$
Thus, we can transport our current estimate of the gradient by computing the gradient on the new point at the shifted location $\theta_t + (t-1)(\theta_t - \theta_{t-1})$. This extrapolation step is reminiscent of Nesterov's acceleration, with the difference that the factor in front of $\theta_t - \theta_{t-1}$, namely $t - 1$, is not bounded.
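On a toy one-dimensional problem where all components share the same curvature, this transport identity can be verified exactly. All numbers below are arbitrary and chosen purely for illustration:

```python
# Hypothetical 1-D check: f_i(theta) = h/2 * (theta - b_i)^2, all with Hessian h.
h = 2.0
b = [0.3, -1.2, 0.8, 2.5]        # arbitrary component minima
grad = lambda i, theta: h * (theta - b[i])

theta_prev, theta_cur = 1.7, 0.4  # two arbitrary consecutive iterates
t = len(b)                        # the t-th sample has just arrived

# Exact average of the first t-1 gradients, evaluated at the previous iterate.
g_prev = sum(grad(i, theta_prev) for i in range(t - 1)) / (t - 1)

# Transported estimate: reuse g_prev and evaluate only the NEW gradient
# at the extrapolated point theta_cur + (t - 1) * (theta_cur - theta_prev).
shifted = theta_cur + (t - 1) * (theta_cur - theta_prev)
transported = (t - 1) / t * g_prev + grad(t - 1, shifted) / t

# Ground truth: full average of all t gradients at the current iterate.
full = sum(grad(i, theta_cur) for i in range(t)) / t
```

Under the equal-Hessian assumption, `transported` and `full` coincide to machine precision without ever forming the Hessian explicitly.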
3.2 Combining increasing momentum and implicit gradient transport
We now describe our main algorithm, Implicit Gradient Transport (IGT). IGT uses an increasing momentum $\gamma_t = \frac{t}{t+1}$, so that the velocity weighs all transported gradients equally. At each step, when updating the velocity, it computes the gradient of the new point at an extrapolated location so that the velocity $v_t$ is a good estimate of the true gradient $\nabla f(\theta_t)$:

$$v_t = \gamma_t\, v_{t-1} + (1 - \gamma_t)\, \nabla f\Big(x_t,\; \theta_t + \frac{\gamma_t}{1 - \gamma_t}\,(\theta_t - \theta_{t-1})\Big)\;, \qquad \theta_{t+1} = \theta_t - \alpha\, v_t\;.$$
We can rewrite the updates to eliminate the velocity $v_t$, leading to the update:
$$\theta_{t+1} \;=\; \theta_t + \gamma_t\,(\theta_t - \theta_{t-1}) \;-\; \alpha\,(1 - \gamma_t)\, \nabla f\Big(x_t,\; \theta_t + \frac{\gamma_t}{1 - \gamma_t}\,(\theta_t - \theta_{t-1})\Big) \qquad \text{(IGT)}$$
We see in Fig. 1(d) that IGT allows a reduction in the total variance, thus leading to convergence with a constant stepsize. This is captured by the following proposition:
Proposition 3.1.
If $f$ is a quadratic function with positive definite Hessian $H$ with largest eigenvalue $L$ and condition number $\kappa$, and if the stochastic gradients satisfy $g_t(\theta) = \nabla f(\theta) + \epsilon_t$ with $\epsilon_t$ a random i.i.d. noise with covariance bounded by $B I$, then Eq. IGT with stepsize $\alpha = 1/L$ leads to iterates satisfying

$$\mathbb{E}\big[\|\theta_t - \theta^*\|^2\big] \;\le\; \Big(1 - \frac{1}{\kappa}\Big)^{2t} \|\theta_0 - \theta^*\|^2 \;+\; O\!\Big(\frac{B}{t}\Big)$$

for every $t$.
The proof of Prop. 3.1 is provided in the appendix.
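A minimal sketch of the IGT recursion on a noisy one-dimensional quadratic makes the constant-stepsize convergence tangible. The objective, noise scale, and stepsize below are our own illustrative choices, not the paper's experimental setup:

```python
import random

def igt_quadratic(h=1.0, alpha=0.5, steps=4000, seed=0):
    """Sketch of the IGT recursion on f(theta) = h/2 * theta^2 with additive
    gradient noise. gamma_t = t / (t + 1) weighs all transported gradients
    equally, so the noise in the velocity averages out as O(1/t)."""
    rng = random.Random(seed)
    noisy_grad = lambda x: h * x + rng.gauss(0.0, 1.0)
    theta, theta_prev, v = 2.0, 2.0, 0.0
    for t in range(steps):
        gamma = t / (t + 1.0)
        # Transport: evaluate the new gradient at an extrapolated point;
        # gamma / (1 - gamma) equals t here.
        shifted = theta + (gamma / (1.0 - gamma)) * (theta - theta_prev)
        v = gamma * v + (1.0 - gamma) * noisy_grad(shifted)
        theta_prev, theta = theta, theta - alpha * v
    return theta
```

On this equal-Hessian problem the velocity equals the true gradient plus the average of all past noises, so the iterate approaches the optimum at 0 despite the constant stepsize, in contrast with plain SGD.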
Despite this theoretical result, two limitations remain: First, Prop. 3.1 shows that IGT does not improve the dependency on the conditioning of the problem; Second, the assumption of equal Hessians is unlikely to be true in practice, leading to an underestimation of the bias. We address the conditioning issue in the next section and the assumption on the Hessians in Section 4.
3.3 IGT as a plugin gradient estimator
We demonstrated that the IGT estimator has lower variance than the stochastic gradient estimator for quadratic objectives. IGT can also be used as a drop-in replacement for the stochastic gradient in an existing, popular first-order method: the heavy ball (HB). This is captured by the following two propositions:
Proposition 3.2 (Nonstochastic).
In the non-stochastic case, where $B = 0$, the variance is equal to $0$ and Heavyball-IGT achieves the accelerated linear rate $O\big(\big(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\big)^t\big)$ using the known, optimal heavy ball tuning, $\alpha^* = \big(\frac{2}{\sqrt{L}+\sqrt{\mu}}\big)^2$, $\gamma^* = \big(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\big)^2$.
Proposition 3.3 (Online, stochastic).
When $B > 0$, there exist constant hyperparameters $\alpha > 0$, $\gamma > 0$ such that $\|\mathbb{E}[\theta_t] - \theta^*\|$ converges to zero linearly, and the variance is $\tilde{O}(1/t)$.
The pseudocode can be found in Algorithm 1.
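Since Algorithm 1 itself is not reproduced in this text, here is a hedged sketch of heavy ball driven by the IGT estimate on a noisy one-dimensional quadratic; the structure and all constants are our own assumptions, not necessarily the paper's exact algorithm:

```python
import random

def heavyball_igt(h=1.0, alpha=0.3, mu=0.5, steps=4000, seed=1):
    """Hedged sketch of Heavyball-IGT: the IGT buffer v replaces the
    stochastic gradient inside an otherwise standard heavy-ball update."""
    rng = random.Random(seed)
    noisy_grad = lambda x: h * x + rng.gauss(0.0, 1.0)
    theta, theta_prev, v, w = 2.0, 2.0, 0.0, 0.0
    for t in range(steps):
        gamma = t / (t + 1.0)
        shifted = theta + (gamma / (1.0 - gamma)) * (theta - theta_prev)
        v = gamma * v + (1.0 - gamma) * noisy_grad(shifted)  # IGT estimate
        w = mu * w - alpha * v                               # heavy-ball step
        theta_prev, theta = theta, theta + w
    return theta
```

Because the transport identity only involves consecutive iterates and the shared Hessian, the IGT estimate stays exact on this problem regardless of how the heavy-ball dynamics move the parameters.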
4 IGT and Anytime Tail Averaging
So far, IGT weighs all gradients equally. This is because, with equal Hessians, one can perfectly transport these gradients irrespective of the distance travelled since they were computed. In practice, the individual Hessians are not equal and might change over time. In that setting, the transport induces an error which grows with the distance travelled. We wish to average a linearly increasing number of gradients, to maintain the $O(1/t)$ rate on the variance, while forgetting the oldest gradients to decrease the bias. To this end, we shall use anytime tail averaging leroux2019anytime, named in reference to the tail averaging technique used in optimization JMLR:v18:16595.
Tail averaging is an online averaging technique where only the last points, usually a constant fraction $c$ of the total number of points seen, are kept. Maintaining the exact average at every timestep is memory inefficient, and anytime tail averaging instead performs an approximate averaging with constant memory. We refer the reader to leroux2019anytime for additional details.
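For intuition, the exact (memory-hungry) tail average that anytime tail averaging approximates can be sketched as follows; `fraction` is our own name for the kept fraction $c$, and this is not the constant-memory algorithm of leroux2019anytime:

```python
import math

def tail_average(stream, fraction=0.1):
    """Exact tail averaging: at step t, average the last ceil(fraction * t)
    values. Memory grows linearly with t, which is precisely what the
    anytime tail averaging approximation avoids."""
    seen = []
    averages = []
    for x in stream:
        seen.append(x)
        k = max(1, math.ceil(fraction * len(seen)))
        tail = seen[-k:]                    # keep only the most recent fraction
        averages.append(sum(tail) / len(tail))
    return averages

avgs = tail_average([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0],
                    fraction=0.5)
```

With `fraction=0.5` on the stream above, the running tail average tracks recent values (ending at 8.0, the mean of 6 through 10) while discarding the stale ones, which is the bias-reduction effect sought here.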
5 Impact of IGT on bias and variance in the ideal case
To understand the behaviour of IGT when Assumption 3.1 is verified, we minimize a strongly convex quadratic function whose Hessian has condition number $\kappa$, with access to the gradient corrupted by additive i.i.d. noise $\epsilon$. In that scenario where all Hessians are equal and implicit gradient transport is exact, Fig. 2(a) confirms the $O(1/t)$ rate of IGT with a constant stepsize while SGD and HB only converge to a ball around the optimum.
To further understand the impact of IGT, we study the quality of the gradient estimate. Standard stochastic methods control the variance of the parameter update by scaling it with a decreasing stepsize, which slows the optimization down. With IGT, we hope to have a low variance while maintaining a norm of the update comparable to that obtained with gradient descent. To validate the quality of our estimator, we optimized a quadratic function using IGT, collecting iterates $\theta_t$. For each iterate, we computed the squared error between the true gradient and either the stochastic or the IGT gradient. In this case where both estimators are unbiased, this is the trace of the noise covariance of our estimators. The results in Figure 2(b) show that, as expected, this noise decreases linearly for IGT and is constant for SGD.
We also analyse the direction and magnitude of the IGT gradient on the same quadratic setup. Figure 2(c) displays the cosine similarity between the true gradient and either the stochastic or the IGT gradient, as a function of the distance to the optimum. We see that, for the same distance, the IGT gradient is much more aligned with the true gradient than the stochastic gradient is, confirming that variance reduction happens without the need for scaling the estimate.
6 Experiments
While Section 5 confirms the performance of IGT in the ideal case, the assumption of identical Hessians almost never holds in practice. In this section, we present results on more realistic and larger scale machine learning settings. All experiments are extensively described in the Appendix A and additional baselines compared in Appendix B.
6.1 Supervised learning
CIFAR10 image classification
We first consider the task of training a ResNet56 model DBLP:journals/corr/HeZRS15 on the CIFAR10 image classification dataset cifar10. We use the TF official models code and setup TfResnet, varying only the optimizer: SGD, HB, Adam and our algorithm with anytime tail averaging, both on its own (ITA) and combined with Heavy Ball (HB-ITA). We tuned the step size for each algorithm by running experiments using a logarithmic grid. To factor in ease of tuning wilson2017marginal, we used Adam's default parameter values and a value of 0.9 for HB's parameter. We used a linearly decreasing stepsize as it was shown to be simple and perform well shallue2018measuring. For each optimizer we selected the hyperparameter combination that is fastest to reach a consistently attainable target train loss shallue2018measuring. Selecting the hyperparameter combination reaching the lowest training loss yields qualitatively identical curves. Figure 3 presents the results, showing that IGT with the exponential anytime tail average performs favourably, both on its own and combined with Heavy Ball: the learning curves show faster improvement and are much less noisy.
ImageNet image classification
We also consider the task of training a ResNet50 model DBLP:journals/corr/HeZRS15 on the larger ImageNet dataset ILSVRC15. The setup is similar to the one used for CIFAR10 with the difference that we trained using larger minibatches (1024 instead of 128). In Figure 4, one can see that IGT is as fast as Adam for the train loss, faster for the train accuracy and reaches the same final performance, which Adam does not. We do not see the noise reduction we observed with CIFAR10, which could be explained by the larger batch size (see Appendix A.1).
IMDb sentiment analysis
We train a bidirectional LSTM on the IMDb Large Movie Review Dataset maas2011learning for 200 epochs. We observe that while the training convergence is comparable to HB, HB-ITA performs better in terms of validation and test accuracy. In addition to the baseline and IGT methods, we also train a variant of Adam using the ITA gradients, dubbed Adam-ITA, which performs similarly to Adam.
6.2 Reinforcement learning
Linear-quadratic regulator
We cast the classical linear-quadratic regulator (LQR) kwakernaak1972linear as a policy learning problem to be optimized via gradient descent. This setting is extensively described in Appendix A. Note that despite their simple linear dynamics and a quadratic cost functional, LQR systems are notoriously difficult to optimize due to the non-convexity of the loss landscape fazel2018global.
The left chart in Figure 5 displays the evaluation cost computed along training and averaged over three random seeds. The first method (Optimal) indicates the cost attained when solving the algebraic Riccati equation of the LQR – this is the optimal solution of the problem. SGD minimizes the costs using the REINFORCE williams1992simple gradient estimator, averaged over 600 trajectories. ITA is similar to SGD but uses the ITA gradient computed from the REINFORCE estimates. Finally, GD uses the analytical gradient by taking the expectation over the policy.
We make two observations from the above chart. First, ITA initially suffers from the stochastic gradient estimate but rapidly matches the performance of GD. Notably, both of them converge to a solution significantly better than SGD, demonstrating the effectiveness of the variance reduction mechanism. Second, the convergence curve is smoother for ITA than for SGD, indicating that the ITA iterates are more likely to induce similar policies from one iteration to the next. This property is particularly desirable in reinforcement learning, as demonstrated by the popularity of trust-region methods in large-scale applications schulman2015trust; openai2018learning.
6.3 Meta-learning
Model-agnostic meta-learning
We now investigate the use of IGT in the model-agnostic meta-learning (MAML) setting finn2017model. We replicate the 5-way classification setup with 5 adaptation steps on tasks from the MiniImagenet dataset ravi2016optimization. This setting is interesting because of the many sources contributing to noise in the gradient estimates: the stochastic meta-gradient depends on the product of 5 stochastic Hessians computed over only 10 data samples, and is averaged over only 4 tasks. We substitute the meta-optimizer with each method, select the stepsize that maximizes the validation accuracy after 10K iterations, and use it to train the model for 100K iterations.
The right graph of Figure 5 compares validation accuracies for three random seeds. We observe that methods from the IGT family significantly outperform their stochastic meta-gradient counterparts, both in terms of convergence rate and final accuracy. Those results are also reflected in the final test accuracies, where Adam-ITA performs best, followed by HB-ITA, then Adam, and finally HB.
7 Conclusion and open questions
We proposed a simple optimizer which, by reusing past gradients and transporting them, offers excellent performance on a variety of problems. While it adds an additional parameter, the fraction of examples to be kept in the tail averaging, it remains competitive across a wide range of such values. Further, by providing a higher quality gradient estimate that can be plugged into any existing optimizer, we expect it to be applicable to a wide range of problems. As IGT is similar to momentum, this further raises the question of the links between variance reduction and curvature adaptation. Whether there is a way to combine the two without using momentum on top of IGT remains to be seen.
Acknowledgments
The authors would like to thank Liyu Chen for his help with the LQR experiments and Fabian Pedregosa for insightful discussions.
References
Appendix A Experimental Details
This section provides additional information regarding the experiments included in the main text.
For each experimental setting we strive to follow the reproducibility checklist, and provide:

a description and citation of the dataset,

a description of preprocessing steps,

training / validation / testing splits,

a description of the hyperparameter search process and chosen values for each method,

the exact number of evaluation runs,

a clear definition of statistics reported, and

a description of the computing infrastructure.
a.1 CIFAR10 image classification
Dataset
The CIFAR10 dataset [krizhevsky2009learning] consists of 50k training and 10k testing images, partitioned over 10 classes. We download and preprocess the images using the TensorFlow models package, available at the following URL: https://github.com/tensorflow/models
Model
We use a residual convolutional network [DBLP:journals/corr/HeZRS15] with 56 layers as defined in the models package. Specifically, we use the second version, whose blocks are built as a batch normalization, then a ReLU activation, and then a convolutional layer [he2016identity].
Hyperparameters
We use the exact setup from https://github.com/tensorflow/models/officials/resnet. As such, training is carried out with minibatches of 128 examples for 182 epochs and the training data is augmented with random crops and horizontal flips. Also note this setup multiplies the step size by the size of the minibatch. One deviation from the setup is our use of a linearly decaying learning rate instead of an explicit schedule. The linearly decaying learning rate schedule is simple and was shown to perform well [shallue2018measuring]. This schedule is specified using two parameters: the decay rate, a multiplier specifying the final step size (0.1 or 0.01), and the decay step, specifying the step at which the fully decayed rate is reached (always set to 90% of the training steps). To factor in ease of tuning [wilson2017marginal], we used Adam's default parameter values and a value of 0.9 for HB's parameter. We used IGT with the exponential Anytime Tail Averaging approach. For the tail fraction, we tried two values: the number of epochs and a tenth of that number (180 and 18). We ran using the following learning rates: (1e0, 3e-1, 1e-1, 3e-2, 1e-2) for SGD, HB and the IGT variants, and (1e-2, 3e-3, 1e-3, 3e-4, 1e-4) for Adam. We ran a grid search over the base learning rate and its decay rate with a single run per combination. For each optimizer we selected the hyperparameter combination that is fastest to reach a consistently attainable target train loss of 0.2 [shallue2018measuring]. Note that selecting the hyperparameter combination reaching the lowest training loss yields qualitatively identical curves.
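The linearly decaying schedule described above can be sketched as follows (the function and parameter names are ours, not those of the TensorFlow models package):

```python
def linear_decay(step, base_lr, decay_rate, decay_step):
    """Linearly interpolate from base_lr at step 0 down to
    base_lr * decay_rate at decay_step, and stay constant afterwards."""
    if step >= decay_step:
        return base_lr * decay_rate
    frac = step / decay_step
    # convex combination of the initial and fully decayed rates
    return base_lr * (1.0 - frac) + (base_lr * decay_rate) * frac

# e.g. a base rate of 0.3 decaying by a factor 0.01, with the fully
# decayed rate reached at 90% of the total training steps.
```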
The resulting hyperparameters are:

SGD stepsize 0.3, decay 0.01

HB stepsize 0.03, decay 0.01

Adam stepsize 0.001, decay 0.01

ITA stepsize 0.3, decay 0.01, tail fraction 18

HBITA stepsize 0.03, decay 0.1, tail fraction 18
Infrastructure and Runs
The experiments were run using P100 GPUs (single GPU).
Additional Results
We provide all learning curves for the methods comparison presented in the main manuscript in figure 6.
Ablation study: importance of IGT correction
We confirm the importance of the implicit gradient transport correction by running an experiment in which an increasing momentum is used without transport. The results appear in figure 7.
The resulting hyperparameters are:

ATA stepsize 0.3, decay 0.01, tail fraction 18

HBATA stepsize 0.03, decay 0.01, tail fraction 18
Effect of the batch size
We look into the effect of the batch size. To do so, we plot the number of steps required to reach a reliably attainable training loss of 0.4 as a function of the batch size. We ran using the following minibatch sizes: 32, 128, 512 and 2048. Running with larger minibatches led to out-of-memory errors on our single-GPU setup. The results presented in figure 8 show that the benefit of IGT diminishes as the batch size increases. Note that Adam's ability to keep benefiting from larger batch sizes is consistent with previous observations.
a.2 ImageNet image classification
Dataset
We use the 2015 edition of the ImageNet LargeScale Visual Recognition Challenge (ImageNet) [ILSVRC15] dataset. This dataset consists of 1.2M images partitioned into 1’000 classes. We use the preprocessing and loading utilities of the TensorFlow models package, available at the following URL: https://github.com/tensorflow/models
Model
Our model is again a large residual network, consisting of 50 layers. Similar to our CIFAR10 experiments above, we use the implementation described in [he2016identity].
Hyperparameters
We used the same setup and approach as for the CIFAR10 experiments. The setup trains for 90 epochs using minibatches of 1024 examples. We used a larger grid for the learning rate schedule: decay (0.1, 0.01, 0.001) and decay step fraction (0.7, 0.8, 0.9).
The resulting hyperparameters are:

SGD stepsize 0.3, decay 0.01, decay step 0.8

HB stepsize 0.03, decay 0.001, decay step 0.9

Adam stepsize 0.0001, decay 0.01, decay step 0.9

ITA stepsize 0.3, decay 0.01, tail fraction 90, decay step 0.9

HBITA stepsize 0.03, decay 0.01, tail fraction 90, decay step 0.9
Infrastructure and Runs
We ran these experiments using Google TPUv2.
Additional Results
We include error and accuracy curves for training and testing in Figure 9.
a.3 IMDb sentiment analysis
Dataset
The Internet Movie Database (IMDb) [maas2011learning] consists of 25'000 training and 25'000 test movie reviews. The objective is binary sentiment classification based on the review's text. We randomly split the training set in two folds of 17'536 and 7'552 reviews, the former being used for training and the latter for validation. The data is downloaded, split, and preprocessed with the torchtext package, available at the following URL: https://github.com/pytorch/text More specifically, we tokenize the text at the word level using the spaCy package, and embed the tokens using the 100-dimensional GloVe 6B [pennington2014glove] distributed representations.
Model
The model consists of an embedding lookup table, followed by a bidirectional LSTM with dropout, and then by a fully-connected layer. The LSTM uses 256 hidden units and the dropout rate is set to 0.5. The whole model consists of 3.2M trainable parameters, with the embedding lookup table initialized with the GloVe vectors. The model is trained to minimize the BCEWithLogitsLoss with a minibatch size of 64.
Hyperparameters
For each method, we used a grid search to find the stepsize minimizing validation error after 15 epochs. The grid starts at 0.00025 and doubles until reaching 0.1024, so as to ensure that no chosen value lies on its boundaries. When applicable, the momentum factor is jointly optimized over values 0.1 to 0.95. The final hyperparameters are displayed in the following table for each method.
                     HB      Adam     ASGD    HB-IGT   HB-ITA
  stepsize           0.032   0.0005   0.064   0.128    0.064
  momentum           0.95    0.95     n/a     0.9      0.9
  averaging start    n/a     n/a      100     n/a      n/a
  tail fraction      n/a     n/a      n/a     n/a
Infrastructure and Runs
All IMDb experiments use a single NVIDIA GTX 1080, with PyTorch v0.3.1.post2, CUDA 8.0, and cuDNN v7.0.5. We run each final configuration with 5 different random seeds and always report the mean tendency ± one standard deviation. Each run lasts approximately three hours and thirty minutes.
Additional Results
In addition to the results reported in the main text, we include training, validation, and testing curves for each method in Figure 10. Shading indicates the one standard deviation interval. Note that our focus is explicitly on optimization: in the specific case of IMDb, training for 200 epochs is completely unnecessary from a generalization standpoint, as performance degrades rapidly after 15-20 epochs.
a.4 Linear-quadratic regulator
Setup
Our linear-quadratic regulator [kwakernaak1972linear] implements the following equations. The cost functional is evaluated at every timestep and is given by

$$c_t = s_t^\top Q\, s_t + u_t^\top R\, u_t \qquad (2)$$

for random symmetric positive definite matrices $Q$ and $R$, each with condition number 3. The initial state $s_0$ is sampled around the origin, and the subsequent states evolve according to

$$s_{t+1} = A\, s_t + B\, u_t + \epsilon_t \qquad (3)$$

where the entries of each transition matrix are independently sampled from a Normal distribution and then scaled such that the matrix has unit Frobenius norm. The actions are given by the linear stochastic policy $u_t \sim \mathcal{N}(K s_t, \sigma^2 I)$, where $K$ and $\sigma$ are the parameters to be optimized.
Gradient methods in this manuscript optimize the sum of costs using the REINFORCE estimate [williams1992simple] given by

$$\nabla\, \mathbb{E}\Big[\sum_t c_t\Big] = \mathbb{E}\Big[\Big(\sum_t c_t\Big) \sum_t \nabla \log \pi(u_t \mid s_t)\Big]\;. \qquad (4)$$
In our experiments, the above expectation is approximated by the average of 600 trajectory rollouts. Due to the noisy dynamics of the system, it is possible for the gradient norm to explode, leading to numerical instabilities, especially when using larger stepsizes. To remedy this issue, we simply discard such problematic trajectories from the gradient estimator.
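A hedged one-dimensional sketch of this REINFORCE estimator, including the trajectory-discarding heuristic, is shown below. All constants, the function name, and the 1-D simplification are our own; the paper's experiments use multivariate dynamics:

```python
import random

def reinforce_lqr_1d(K=-0.5, sigma=0.3, horizon=10, n_traj=200, seed=0):
    """Illustrative 1-D LQR REINFORCE sketch: dynamics s' = a*s + b*u + noise,
    cost q*s^2 + r*u^2, policy u ~ N(K*s, sigma^2). Returns the estimated
    gradient of the total cost w.r.t. K, discarding rollouts whose
    per-trajectory gradient explodes (mirroring the heuristic above)."""
    rng = random.Random(seed)
    a, b, q, r = 0.9, 0.5, 1.0, 0.1          # arbitrary stable-loop constants
    grads = []
    for _ in range(n_traj):
        s = rng.gauss(0.0, 0.1)              # initial state near the origin
        total_cost, score = 0.0, 0.0
        for _ in range(horizon):
            u = rng.gauss(K * s, sigma)      # sample action from the policy
            total_cost += q * s * s + r * u * u
            score += (u - K * s) * s / (sigma ** 2)  # d log pi / dK
            s = a * s + b * u + rng.gauss(0.0, 0.01)
        g = total_cost * score               # REINFORCE: cost times score
        if abs(g) < 1e3:                     # drop numerically unstable rollouts
            grads.append(g)
    return sum(grads) / len(grads)
```

In the paper's setting, the resulting stochastic estimates are what SGD consumes directly and what ITA further averages with transport.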
For each training iteration, we first gather 600 trajectories used for learning and then 600 more used to report evaluation metrics.
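As a concrete illustration, one REINFORCE iteration can be sketched as below. This is a schematic sketch under assumed standard LQR forms (quadratic cost, linear-Gaussian dynamics and policy, here with a scalar policy covariance for simplicity); all names, shapes, and constants are illustrative, and the discard rule mirrors the heuristic described above.

```python
import numpy as np

def reinforce_gradient(K, sigma, A, B, Q, R, horizon=20, n_traj=600, rng=None):
    """One REINFORCE estimate of d(total cost)/dK for the policy
    u = K x + eps, eps ~ N(0, sigma^2 I). Schematic sketch only."""
    rng = np.random.default_rng() if rng is None else rng
    d = A.shape[0]
    grads, costs = [], []
    for _ in range(n_traj):
        x = 0.1 * rng.standard_normal(d)       # initial state near the origin
        score = np.zeros_like(K)               # sum_t d log pi(u_t|x_t) / dK
        cost = 0.0
        for _ in range(horizon):
            u = K @ x + sigma * rng.standard_normal(K.shape[0])
            cost += x @ Q @ x + u @ R @ u      # quadratic cost at every timestep
            score += np.outer(u - K @ x, x) / sigma**2  # Gaussian score function
            x = A @ x + B @ u + 0.01 * rng.standard_normal(d)  # noisy dynamics
        g = cost * score                       # REINFORCE: cost-weighted score
        if np.isfinite(g).all() and np.linalg.norm(g) < 1e6:
            grads.append(g)                    # discard exploded trajectories
            costs.append(cost)
    return np.mean(grads, axis=0), np.mean(costs)
```

In the actual experiments, a separate batch of 600 evaluation rollouts is gathered after each such learning step.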
Hyperparameters
Due to the simplicity of the considered methods, the only hyperparameter is the stepsize. For each method, we choose the stepsize from a logarithmically-spaced grid so as to minimize the evaluation cost after 600 iterations on a single seed. Incidentally, the optimal stepsize is 0.0002 for GD, SGD, and ITA alike.
Infrastructure and Runs
We use an Intel Core i7-5820K CPU to run the LQR experiments. All methods are implemented using numpy v1.15.4. We present results averaged over 3 random seeds and also report the standard deviation. For the stochastic gradient methods (SGD, ITA), training for 20K iterations takes about 3h; for the full-gradient method (GD), about 10h; computing the solution of the Riccati equation takes less than 5 seconds.
Additional Results
In addition to the evaluation cost reported in the main text, we also include the cost witnessed during training (and used for optimization) in Figure 11.
We notice that the training cost curve of ITA is not as smooth as the evaluation one. Similarly, the observed learning costs never reach as low a value as the evaluation ones. This phenomenon is easily explained: during learning, ITA estimates the gradient using the shifted parameters as opposed to the true parameters. Those shifted parameters do not benefit from the reduced variance, which explains both the observed noise in the cost and the deteriorated convergence.
a.5 Model-agnostic meta-learning
Dataset
We use the MiniImagenet dataset [ravi2016optimization] in our model-agnostic meta-learning (MAML) [finn2017model] experiments. This dataset comprises 64 training, 12 validation, and 24 test classes. For each of the train, validation, and test splits, tasks are constructed by sampling 5 classes from the corresponding split, and further sampling 5 images per class. Images are downsampled to 84x84x3 tensors of RGB values. For more details, please refer to the official code repository – which we carefully replicated – at the following URL: https://github.com/cbfinn/maml
Our implementation departs in two ways from the reference. First, we train our models for 100k iterations as opposed to 60k, and use only 5 image samples to compute the meta-gradient, whereas the reference implementation uses 15. Second, we use only 5 adaptation steps at evaluation time, whereas the reference uses 10.
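The 5-way 5-shot task construction described above can be sketched as follows. Names are illustrative: `images_by_class` is assumed to map a class id to its list of images for one split.

```python
import random

def sample_task(images_by_class, n_ways=5, n_shots=5, rng=random):
    """Sample one few-shot task: `n_ways` classes from the split,
    `n_shots` images per class, relabeled 0..n_ways-1."""
    classes = rng.sample(sorted(images_by_class), n_ways)
    task = []
    for label, cls in enumerate(classes):
        for img in rng.sample(images_by_class[cls], n_shots):
            task.append((img, label))   # task-local labels
    rng.shuffle(task)
    return task
```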
Model
The model closely replicates the convolutional neural network of MAML [finn2017model]. It consists of 4 layers, each with 32 3x3 kernels, followed by batch normalization and ReLU activations. For specific implementation details, we refer the reader to the above reference implementation.
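A sketch of this backbone is given below: four blocks of 32 3x3 convolutions, each followed by batch normalization and ReLU. The pooling layers and the final linear head are assumptions based on the reference implementation rather than details stated in the text.

```python
import torch
import torch.nn as nn

class MAMLConvNet(nn.Module):
    """Sketch of the 4-layer convolutional backbone described above."""
    def __init__(self, n_classes=5):
        super().__init__()
        layers, in_ch = [], 3
        for _ in range(4):
            layers += [nn.Conv2d(in_ch, 32, 3, padding=1),
                       nn.BatchNorm2d(32),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]   # assumed: halves spatial size each block
            in_ch = 32
        self.features = nn.Sequential(*layers)
        # 84 -> 42 -> 21 -> 10 -> 5 after four pooling stages
        self.classifier = nn.Linear(32 * 5 * 5, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(start_dim=1))
```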
Hyperparameters
We only tune the meta-stepsize for the MAML experiment. We set the momentum constant to 0.9, the adaptation stepsize to 0.01, and average the meta-gradient over 4 tasks per iteration. Due to the reduced variance in the gradients, we found it necessary to increase the $\epsilon$ of Adam-ITA to 0.01.
For each method, we search over stepsize values on a logarithmicallyspaced grid and select those values that maximize validation accuracy after 10k metaiterations. These values are reported in Table 2.
          HB     Adam   HB-ITA  Adam-ITA
stepsize  0.008  0.001  0.008   0.0005
Infrastructure and Runs
Each MAML experiment is run on a single NVIDIA GTX TITAN X, with PyTorch v1.1.0, CUDA 8.0, and cuDNN v7.0.5. We run each configuration with 3 different random seeds and report the mean ± one standard deviation. Each run takes approximately 36 hours, and we evaluate the validation and testing accuracy every 100 iterations.
Additional Results
We complete the MAML validation curves from the main manuscript with training and testing accuracy curves in Figure 12. Moreover, we recall the final test accuracies for each method: AdamITA reaches , HBITA , Adam , and HB .
Appendix B Additional Experiments
This section presents additional experiments to complement those included in the main text.
b.1 Baseline comparisons
While the experiments in Section 5 highlighted properties of IGT and HB-IGT when the assumption of identical, constant Hessians was verified, we now turn to more realistic scenarios, where the individual functions are neither quadratic nor share the same Hessian, and compare our proposed methods against popular baselines for the online stochastic optimization setting. We target optimization benchmarks and focus on training loss minimization.
We investigate three different scenarios: (a) linear-mnist: a logistic regression model on MNIST; (b) mnist: a modified version of LeNet-5 [lecun2010mnist] on MNIST; and (c) cifar-small: the MobileNet-v2 architecture [sandler2018mobilenetv2], consisting of 19 convolutional layers, on CIFAR-10. All models are trained with a minibatch size of 64; the remaining hyperparameters are available in Tables 4, 5, and 6.
For each of the above tasks, models are trained for 200 epochs. We compare the following methods:

- HB: the heavy ball method [polyak1992acceleration],
- Adam [kingma2014adam],
- ASGD [jain2018accelerating],
- SVRG [johnson2013accelerating],
- SGDdec: the stochastic gradient method with an exponentially decaying learning rate schedule,
- HB-IGT: heavy ball using IGT as a plug-in gradient estimator, and
- HB-ITA: same as HB-IGT, but using anytime tail averaging to forget the oldest gradients.
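To make the plug-in structure concrete, here is a schematic sketch of heavy ball with an IGT-style estimator on a deterministic quadratic. The averaging weights ($\gamma_t = t/(t+1)$) and the extrapolated evaluation point below are our plausible instantiation for illustration; consult Algorithm 1 for the exact form used in the experiments.

```python
import numpy as np

def heavyball_igt(grad_fn, theta0, lr=0.01, momentum=0.9, steps=100):
    """Sketch of heavy ball with an IGT-style plug-in gradient estimator.
    Past gradients are averaged after being 'transported' to the current
    iterate by evaluating grad_fn at an extrapolated point."""
    theta, theta_prev = theta0.copy(), theta0.copy()
    g = np.zeros_like(theta0)   # running IGT gradient estimate
    v = np.zeros_like(theta0)   # heavy-ball velocity
    for t in range(steps):
        gamma = t / (t + 1.0)
        # extrapolated point: theta + (gamma / (1 - gamma)) * (theta - theta_prev)
        shift = theta + t * (theta - theta_prev)
        g = gamma * g + (1.0 - gamma) * grad_fn(shift)
        v = momentum * v - lr * g
        theta_prev = theta
        theta = theta + v
    return theta
```

On a noiseless quadratic with a single Hessian, this estimator recovers the exact gradient at the current iterate, so the sketch reduces to plain heavy ball.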
The hyperparameters of each method, and in particular the stepsize, are tuned independently according to a logarithmic grid so as to minimize the mean training error after epoch 15 on one seed. We then use those parameters on 5 random seeds and report the mean and standard deviation of the performance.
Figure 13 shows the training curves for the five algorithms in the three settings.
First, we observe that, for logistic regression, HB-IGT performs on par with HB-ITA and far better than all other methods, even though the assumption on the Hessians is violated. When using a ConvNet, however, HB-IGT is not competitive with state-of-the-art methods such as Adam or ASGD. HB-ITA, on the other hand, with its weaker reliance on the assumption, once again performs much better than all other methods: it not only converges to a lower final training error but also enjoys a faster initial rate.
While our focus is on optimization, we also report generalization metrics in Table 3. For each algorithm, we compute the best mean accuracy after each epoch on the test set and report this value together with its standard deviation. The importance of the anytime tail-averaging mechanism is again apparent: without it, Heavyball-IGT is unable to improve for more than a few epochs on the CIFAR-10 validation set, regardless of the stepsize choice. These results also make it evident that the solutions found by Heavyball-ITA are competitive with those discovered by the other optimization algorithms.
               LinearMNIST   MNIST         CIFAR10       IMDb
Heavyball      92.52 ± 0.04  99.08 ± 0.07  91.55 ± 0.25  86.90 ± 0.67
Adam           92.57 ± 0.10  98.99 ± 0.05  89.36 ± 0.75  85.62 ± 0.63
ASGD           92.47 ± 0.08  99.15 ± 0.07  91.45 ± 0.20  87.31 ± 0.21
SVRG           92.51 ± 0.04  99.06 ± 0.08  86.84 ± 0.17  n/a
SGDdec         92.52 ± 0.06  99.11 ± 0.06  87.53 ± 0.23  86.73 ± 0.34
Heavyball-IGT  92.47 ± 0.10  99.00 ± 0.05  12.05 ± 0.21  86.61 ± 0.23
Heavyball-ITA  92.50 ± 0.10  99.19 ± 0.02  90.37 ± 0.31  87.26 ± 0.24
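The "best mean accuracy after each epoch" protocol described above amounts to the following computation (a small sketch; the array name is illustrative):

```python
import numpy as np

def best_mean_accuracy(acc):
    """`acc` is an (n_seeds, n_epochs) array of per-epoch test accuracies.
    Returns the best across-epoch mean accuracy and the across-seed
    standard deviation at that epoch."""
    means = acc.mean(axis=0)       # mean over seeds, per epoch
    best = int(np.argmax(means))   # epoch with the best mean accuracy
    return means[best], acc[:, best].std()
```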
          HB      Adam    ASGD  HB-IGT  HB-ITA
stepsize  0.0128  0.0002
momentum  0.1     0.95    n/a   0.9     0.1
          n/a     n/a     n/a   n/a
          n/a     n/a     n/a   n/a
          HB      Adam    ASGD    HB-IGT  HB-ITA
stepsize  0.0064  0.0016  0.0128  0.0032  0.0032
momentum  0.9     0.95    n/a     0.95    0.95
          n/a     n/a     n/a     n/a
          n/a     n/a     n/a     n/a
          HB      Adam    ASGD    HB-IGT  HB-ITA
stepsize  0.0512  0.0512  0.1024  0.0128  0.0512
momentum  0.95    0.9     n/a     0.9     0.1
          n/a     n/a     100     n/a     n/a
          n/a     n/a     n/a     n/a
Appendix C Proofs
c.1 Transport formula
$\nabla f_i(\theta_t) = \nabla f_i(\theta_{t-1}) + H_i(\theta_t - \theta_{t-1})$  (Quadratic $f_i$)
$\qquad = \nabla f_i(\theta_{t-1}) + H(\theta_t - \theta_{t-1})$  (Identical Hessians)
$\qquad \approx \nabla f_i(\theta_{t-1}) + \hat{H}(\theta_t - \theta_{t-1})$  ($\hat{H}$ is an approximation)
Appendix D Proof of Prop. 3.1
In this proof, we assume that is a strongly-convex quadratic function with Hessian .
At timestep , we have access to a stochastic gradient where the are i.i.d. with variance .
We first prove a simple lemma:
Lemma D.1.
If and, for , we have
then
Proof.
Per our assumption, this is true for . Now let us prove the result by induction. Assume this is true for . Then we have:
(recurrence assumption)  
( is quadratic)  
This concludes the proof. ∎
Lemma D.2.
Let us assume we perform the following iterative updates:
starting from . Then, denoting , we have
with
Proof.
The result is true for . We now prove the result for all by induction. Let us assume this is true for . Using Lemma D.1, we have
and thus, using ,
(recurrence assumption)  
with
This concludes the proof. ∎
For the following lemma, we will assume that the Hessian is diagonal and will focus on one dimension with eigenvalue . Indeed, we know that there are no interactions between the eigenspaces and that we can analyze each of them independently [o2015adaptive].
Lemma D.3.
Denote . We assume . Then, for any and any , we have
(Positivity)  
(Zerostart)  
(Constant bound)  
(Decreasing bound) 
Proof.
The Zerostart case is immediate from the recursion of Lemma D.2. The Positivity property of is also immediate from the recursion since the stepsize is such that is positive.
We now turn to the Constant bound property. We have, for ,
Thus, . Summing these inequalities, we obtain a telescoping sum and, finally:
This bound is trivial in the case . In that case, we keep the first term in the sum separate and get
In the remainder, we shall keep the bound for simplicity. The upper bound on the right-hand side is increasing with and its value for is thus an upper bound for all smaller values of . Replacing with leads to
This proves the third inequality.
We shall now prove the Decreasing bound by induction. This bound states that, for large enough, each decreases as . Using the second and third inequalities, we have
The maximum will help us prove the last property. Thus, for , we have
with . The Decreasing bound is verified for .
We now show that if, for any , we have , then . Assume that there is such a . Then
We now shall prove that is negative. First, we have that
Then,
since . Thus, the property is true for every . In addition, we have
and the property is also true for every . This concludes the proof. ∎
Finally, we can prove the Proposition 3.1:
Proof.
The expectation of is immediate using Lemma D.2 and the fact that the are independent, zero-mean noises. The variance is equal to . While our analysis was only along one eigenspace of the Hessian with associated eigenvalue , we must now sum over all dimensions. We will thus define
which is, for every , the maximum across all dimensions. We get
Since we have
we get
This concludes the proof. ∎
Appendix E Proof of Proposition 3.2 and Proposition 3.3
In this section, we list and prove all lemmas used in the proofs of Proposition 3.2 and Proposition 3.3; all lemmas are stated under the same conditions as the propositions.
We start with the following proposition:
Proposition E.1.
Let be a quadratic function with positive definite Hessian with largest eigenvalue and condition number , and suppose the stochastic gradients satisfy with a random uncorrelated noise with covariance bounded by .
Then, Algorithm 1 leads to iterates satisfying
(5) 
where
(6) 
governs the dynamics of this bias. In particular, when its spectral radius is less than 1, the iterates converge linearly to .
In a similar fashion, the variance dynamics of HeavyballIGT are governed by the matrix
If the spectral radius of , , is strictly less than 1 for all , then there exist constants and for which
where is a bound on the variance of noise variables .
Lemma E.2 (IGT estimator as true gradient plus noise average).
If and for we have
then
This lemma was already proved in the previous section for the IGT estimator (Lemma D.1) and is repeated here for completeness. We will use this result in the next few lemmas.
Lemma E.3 (The IGT gradient estimator is unbiased on quadratics).
For the IGT gradient estimator, , corresponding to parameters we have
where the expectation is over all gradient noise vectors .
Proof.
The proof proceeds by induction. The base case holds as we have
For the inductive case, we can write
where the third equality uses the inductive assumption, and the last equality holds because the gradient of a quadratic function is linear. ∎
Lemma E.4 (Bounding the IGT gradient variance).
Let be the IGT gradient estimator. Then
where is the variance of the homoscedastic noise .