Don’t Decay the Learning Rate,
Increase the Batch Size
Abstract
It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate $\epsilon$ and scaling the batch size $B \propto \epsilon$. Finally, one can increase the momentum coefficient $m$ and scale $B \propto 1/(1-m)$, although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train Inception-ResNet-V2 on ImageNet to 77% validation accuracy in under 2500 parameter updates, efficiently utilizing training batches of 65536 images.
Samuel L. Smith*, Pieter-Jan Kindermans* & Quoc V. Le
*Both authors contributed equally. Work performed as members of the Google Brain Residency Program (g.co/brainresidency).
Google Brain 
{slsmith, pikinder, qvl}@google.com 
1 Introduction
Stochastic gradient descent (SGD) remains the dominant optimization algorithm of deep learning. However, while SGD finds minima that generalize well (Zhang et al., 2016; Wilson et al., 2017), each parameter update takes only a small step towards the objective. Increasing interest has focused on large batch training (Goyal et al., 2017; Hoffer et al., 2017; You et al., 2017a), in an attempt to increase the step size and reduce the number of parameter updates required to train a model. Large batches can be parallelized across many machines, reducing training time. Unfortunately, when we increase the batch size the test set accuracy often falls (Keskar et al., 2016; Goyal et al., 2017).
To understand this surprising observation, Smith & Le (2017) argued that one should interpret SGD as integrating a stochastic differential equation. They showed that the scale of random fluctuations in the SGD dynamics is $g = \epsilon (\frac{N}{B} - 1)$, where $\epsilon$ is the learning rate, $N$ the training set size and $B$ the batch size. Furthermore, they found empirically that there is an optimum fluctuation scale $g$ which maximizes the test set accuracy (at constant learning rate). This leads to the emergence of an optimal batch size proportional to the learning rate when $B \ll N$. Goyal et al. (2017) had already observed such a scaling rule, and exploited it to train ResNet-50 on ImageNet in one hour. In this work we show:

When one decays the learning rate, one simultaneously decays the scale of random fluctuations in the SGD dynamics. Decaying the learning rate is simulated annealing.

We propose an alternative procedure: instead of decaying the learning rate, we increase the batch size during training. This strategy achieves near-identical model performance on the test set with the same number of training epochs, but dramatically fewer parameter updates. Our proposal does not require any fine-tuning, as we follow pre-existing training schedules; when the learning rate drops by a factor of $\alpha$, we instead increase the batch size by a factor of $\alpha$.

As shown previously, we can further reduce the number of parameter updates without any drop in test set accuracy by increasing the learning rate $\epsilon$ and scaling $B \propto \epsilon$. As a method of last resort, one can also increase the momentum coefficient $m$ and scale $B \propto 1/(1-m)$.

Combining these strategies, we train Inception-ResNet-V2 on ImageNet in under 2500 parameter updates, reaching a validation set accuracy of 77%. To achieve this, we train on vast batches of 65536 images. By contrast, Goyal et al. (2017) required approximately 14000 parameter updates to reach 76.3% validation accuracy with ResNet-50, using batches of 8192 images.
We note that a number of recent works have discussed increasing the batch size during training (Friedlander & Schmidt, 2012; Byrd et al., 2012; Balles et al., 2016; Bottou et al., 2016; De et al., 2017), but to our knowledge no paper has shown empirically that increasing the batch size and decaying the learning rate are quantitatively equivalent. A key contribution of our work is to demonstrate that decaying learning rate schedules can be directly converted into increasing batch size schedules, and vice versa, providing a straightforward pathway towards large batch training.
In section 2 we discuss the convergence criteria for SGD in strongly convex minima, in section 3 we interpret decaying learning rates as simulated annealing, and in section 4 we discuss the difficulties of combining decaying learning rates with large momentum coefficients. Finally, in section 5 we present conclusive experimental evidence that the empirical benefits of decaying the learning rate in deep learning can be obtained by instead increasing the batch size during training. We exploit this observation, and other tricks, to achieve efficient large batch training on CIFAR-10 and ImageNet.
2 Stochastic gradient descent and convex optimization
SGD is a computationally efficient alternative to full-batch training, but it introduces noise into the gradient, which can obstruct optimization. It is often stated that to reach the minimum of a strongly convex function we should decay the learning rate, such that (Robbins & Monro, 1951):
(1) $\sum_{i=1}^{\infty} \epsilon_i = \infty$
(2) $\sum_{i=1}^{\infty} \epsilon_i^2 < \infty$
Here $\epsilon_i$ denotes the learning rate at the $i$th gradient update. Intuitively, equation 1 ensures we can reach the minimum, no matter how far away our parameters are initialized, while equation 2 ensures that the learning rate decays sufficiently quickly that we converge to the minimum, rather than bouncing around it due to gradient noise (Welling & Teh, 2011). For example, the classic schedule $\epsilon_i = \epsilon_0 / i$ satisfies both conditions, since $\sum_i 1/i$ diverges while $\sum_i 1/i^2$ converges. However, although these equations appear to imply that the learning rate must decay during training, equation 2 holds only if the batch size is constant. (Strictly speaking, equation 2 holds if the batch size is bounded by a value below the training set size.) To consider how to proceed when the batch size can vary, we follow recent work by Smith & Le (2017) and interpret SGD as integrating the stochastic differential equation below,
(3) $\frac{d\omega}{dt} = -\frac{dC}{d\omega} + \eta(t)$
Here $C$ represents the cost function (summed over all training examples), and $\omega$ represents the parameters, which evolve in continuous "time" $t$ towards their final values. Meanwhile $\eta(t)$ represents Gaussian random noise, which models the consequences of estimating the gradient on a mini-batch. They showed that the noise has mean $\langle \eta(t) \rangle = 0$ and variance $\langle \eta(t) \eta(t') \rangle = g F(\omega) \delta(t - t')$, where $F(\omega)$ describes the covariances in gradient fluctuations between different parameters. They also proved that the "noise scale" $g = \epsilon (\frac{N}{B} - 1)$, where $\epsilon$ is the learning rate, $N$ the training set size and $B$ the batch size. This noise scale is a scalar which controls the magnitude of the random fluctuations.
Usually $B \ll N$, and so we may approximate $g \approx \epsilon N / B$. When we decay the learning rate, the noise scale falls, enabling us to converge to the minimum of the cost function (this is the origin of equation 2 above). However we can achieve the same reduction in noise scale at constant learning rate by increasing the batch size. The main contribution of this work is to show that it is possible to make efficient use of vast training batches, if one increases the batch size during training at constant learning rate until $B \sim N/10$. After this point, we revert to the use of decaying learning rates.
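To make this recipe concrete, the following minimal Python sketch (ours, not part of the original implementation; the function name and schedule values are illustrative assumptions) converts a conventional step-decay learning rate schedule into an increasing batch size schedule, reverting to learning rate decay once $B$ approaches $N/10$:

    # A sketch (ours) of the conversion described above: each decay step
    # would normally divide the learning rate by alpha; instead we multiply
    # the batch size by alpha, so the noise scale g ~ eps * N / B follows
    # the same trajectory. Schedule values below are illustrative.

    def convert_schedule(eps0, B0, decay_factors, N):
        """Return a list of (learning rate, batch size) phases."""
        eps, B = eps0, B0
        phases = [(eps, B)]
        for alpha in decay_factors:
            if B * alpha <= N // 10:   # keep B well below N (B ~ N/10 cap)
                B *= alpha             # decay the noise scale by growing B ...
            else:
                eps /= alpha           # ... reverting to learning rate decay
            phases.append((eps, B))
        return phases

    N = 50000  # e.g. the CIFAR-10 training set size
    for eps, B in convert_schedule(eps0=0.1, B0=128, decay_factors=[5, 5, 5], N=N):
        print(f"eps={eps:.4f}  B={B:5d}  g ~ eps*N/B = {eps * N / B:.2f}")

Each phase divides the noise scale by the same factor as the original learning rate schedule, while the first steps leave the learning rate untouched.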
3 Simulated annealing and the generalization gap
To the surprise of many researchers, it is now increasingly accepted that small batch training often generalizes better to the test set than large batch training. This "generalization gap" was explored extensively by Keskar et al. (2016). Smith & Le (2017) observed an optimal batch size $B_{opt}$ which maximized the test set accuracy at constant learning rate. They argued that this optimal batch size arises when the noise scale $g \approx \epsilon N / B$ is also optimal, and supported this claim by demonstrating empirically that $B_{opt} \propto \epsilon N$. Earlier, Goyal et al. (2017) exploited a linear scaling rule between batch size and learning rate to train ResNet-50 on ImageNet in one hour with batches of 8192 images.
These results indicate that gradient noise can be beneficial, especially in non-convex optimization. It has been proposed that noise helps SGD escape "sharp minima" which generalize poorly (Hochreiter & Schmidhuber, 1997; Chaudhari et al., 2016; Keskar et al., 2016; Smith & Le, 2017). Given these results, it is unclear to the present authors whether equations 1 and 2 are relevant in deep learning. Supporting this view, we note that most researchers employ early stopping (Prechelt, 1998), whereby we intentionally prevent the network from reaching a minimum. Nonetheless, decaying learning rates are empirically successful. To understand this, we note that introducing random fluctuations whose scale falls during training is also a well-established technique in non-convex optimization: simulated annealing. The initial noisy optimization phase allows us to explore a larger fraction of the parameter space without becoming trapped in local minima. Once we have located a promising region of parameter space, we reduce the noise to fine-tune the parameters.
Finally, we note that this interpretation may explain why conventional learning rate decay schedules, like square-root or exponential decay, have become less popular in deep learning in recent years. Increasingly, researchers favor sharper decay schedules like cosine decay (Loshchilov & Hutter, 2016) or step-function drops (Zagoruyko & Komodakis, 2016). To interpret this shift, we note that in the physical sciences it is well known that slowly annealing the temperature (noise scale) helps the system converge to the global minimum, which may be sharp. Meanwhile, annealing the temperature in a series of discrete steps can trap the system in a "robust" minimum whose cost may be higher but whose curvature is lower. We suspect a similar intuition may hold in deep learning.
4 The effective learning rate and the accumulation variable
Many researchers no longer use vanilla SGD, preferring instead SGD with momentum. The update step applies an exponentially decaying average of the preceding gradient estimates, enhancing the rate of convergence. Smith & Le (2017) extended their analysis of SGD to include momentum, and found that the “noise scale” of random fluctuations in the dynamics,
(4) $g = \frac{\epsilon}{1-m} \left( \frac{N}{B} - 1 \right)$
(5) $\approx \frac{\epsilon N}{B(1-m)}$
This reduces to the noise scale of vanilla SGD as the momentum coefficient $m \rightarrow 0$. Intuitively, $\epsilon_{eff} = \epsilon/(1-m)$ is the effective learning rate. They proposed one could reduce the number of parameter updates required to train a model by increasing the learning rate and momentum coefficient, while simultaneously scaling $B \propto \epsilon/(1-m)$. We find that increasing the initial learning rate $\epsilon$ and scaling $B \propto \epsilon$ performs well. However, when combined with a decaying noise scale, increasing the momentum coefficient while scaling $B \propto 1/(1-m)$ slightly reduces the final test set accuracy. To analyze this observation, consider the momentum update equations,
(6) $\Delta A = -(1-m)A + \frac{d\hat{C}}{d\omega}$
(7) $\Delta \omega = -A\epsilon$
Here $A$ is the "accumulation", while $\frac{d\hat{C}}{d\omega}$ is the mean gradient per training example, estimated on a batch of size $B$. In Appendix A we analyze the growth of the accumulation at the start of training. This variable tracks an exponentially decaying average of gradient estimates, but it is initialized to zero. We find that the accumulation grows exponentially towards its steady-state value over a "timescale" of approximately $B/(N(1-m))$ training epochs. During this time, the magnitude of the parameter updates is suppressed, reducing the rate of convergence. Consequently, when training at high momentum one must introduce additional epochs to allow the dynamics to catch up.
Unfortunately, this same issue will arise every time we change the noise scale $g$. The change enables the dynamics to explore a new region of parameter space, which will change the gradient estimates, but it will take approximately $B/(N(1-m))$ additional epochs for the accumulation to adjust. As a result, we recommend that one only increases the momentum coefficient $m$ as a method of last resort, after first identifying the largest stable learning rate (while scaling $B \propto \epsilon$).
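To see why this matters quantitatively, the short sketch below (ours; it evaluates the Appendix A estimate for the CIFAR-10 settings of section 5.2, purely as an illustration) shows that scaling $B \propto 1/(1-m)$ holds the noise scale fixed while the lost-epoch penalty grows like $1/(1-m)^2$:

    # A sketch (ours): the "lost epochs" estimate N_lost = B / (N * (1 - m))
    # derived in Appendix A, evaluated for section 5.2-style settings
    # (N = 50000 for CIFAR-10). Scaling B ~ 1/(1-m) keeps the noise scale
    # fixed, but N_lost grows like 1/(1-m)^2.

    N = 50000

    def lost_epochs(B, m):
        return B / (N * (1.0 - m))

    print(lost_epochs(B=640, m=0.9))    # 0.128 epochs: negligible
    print(lost_epochs(B=3200, m=0.98))  # 3.2 epochs: no longer negligible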
5 Experiments
In section 5.1, we demonstrate the equivalence between increasing the batch size and decreasing the learning rate on CIFAR-10, with a "16-4" wide ResNet and a range of optimizers. In section 5.2, we demonstrate that we can further reduce the number of parameter updates by increasing the effective learning rate and scaling the batch size. Finally, in section 5.3, we apply our approach to train Inception-ResNet-V2 on ImageNet in 2500 parameter updates, reaching 77% validation accuracy.
5.1 Simulated annealing in a wide ResNet
Our first experiments are performed on CIFAR-10, using a "16-4" wide ResNet architecture, following the implementation of Zagoruyko & Komodakis (2016). We use ghost batch norm (Hoffer et al., 2017), with a ghost batch size of 128. This ensures the mean gradient is independent of batch size, as required by the analysis of Smith & Le (2017). To demonstrate the equivalence between decreasing the learning rate and increasing the batch size, we consider three different training schedules, as shown in figure 1. "Decaying learning rate" follows the original implementation: the batch size is constant, while the learning rate repeatedly decays by a factor of 5 at a sequence of "steps". "Hybrid" holds the learning rate constant at the first step, instead increasing the batch size by a factor of 5. However, after this first step the batch size is constant and the learning rate decays by a factor of 5 at each subsequent step. This schedule mimics how one might proceed if hardware imposes a limit on the maximum achievable batch size. In "Increasing batch size", we hold the learning rate constant throughout training, and increase the batch size by a factor of 5 at every step.
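The structure of the three schedules can be summarized in code. The sketch below (ours) captures their logic; the mode names and the step boundaries in epochs are illustrative assumptions rather than the exact values from our experiments:

    # A sketch (ours) of the three noise-reduction schedules in figure 1.
    # Step boundaries (in epochs) are illustrative; each step decays the
    # noise scale by a factor of 5.

    def make_schedule(mode, eps0=0.1, B0=128, steps=(60, 120, 160), factor=5):
        """Return a function: epoch -> (learning rate, batch size)."""
        def schedule(epoch):
            eps, B = eps0, B0
            for i, boundary in enumerate(steps):
                if epoch < boundary:
                    break
                if mode == "increasing_batch" or (mode == "hybrid" and i == 0):
                    B *= factor        # decay the noise scale by growing B
                else:
                    eps /= factor      # conventional learning rate decay
            return eps, B
        return schedule

    decay = make_schedule("decaying_lr")
    hybrid = make_schedule("hybrid")
    grow = make_schedule("increasing_batch")
    for epoch in (0, 60, 120, 160):
        print(epoch, decay(epoch), hybrid(epoch), grow(epoch))

All three functions reduce the noise scale by the same factor at each step; they differ only in whether the reduction is achieved through $\epsilon$ or $B$.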
If the learning rate itself must decay during training, then these schedules should show different learning curves (as a function of the number of training epochs) and reach different final test set accuracies. Meanwhile if it is the noise scale which should decay, all three schedules should be indistinguishable. We plot the evolution of the training set cross entropy in figure 2a, where we train using SGD with momentum and a momentum parameter of 0.9. The three training curves are almost identical, despite showing marked drops as we pass through the first two steps (where the noise scale is reduced). These results suggest that it is the noise scale which is relevant, not the learning rate.
To emphasize the potential benefits of increasing the batch size, we replot the training cross-entropy in figure 2b, but as a function of the number of parameter updates rather than the number of epochs. While all three schedules match up to the first "step", after this point increasing the batch size dramatically reduces the number of parameter updates required to train the model. Finally, to confirm that our alternative learning schedules generalize equally well to the test set, in figure 3a we exhibit the test set accuracy as a function of the number of epochs (so each curve can be directly compared). Once again, the three schedules are almost identical. We conclude that we can achieve all of the benefits of decaying the learning rate in these experiments by instead increasing the batch size.
We present additional results to establish that our proposal holds for a range of optimizers, all using the schedules presented in figure 1. In figure 3b, we present the test set accuracy when training with Nesterov momentum (Nesterov, 1983) and momentum parameter 0.9, observing three near-identical curves. In figure 4a, we repeat the same experiment with vanilla SGD, again obtaining three highly similar curves (in this case, there is no clear benefit of decaying the learning rate after the first step). Finally, in figure 4b we repeat the experiment with Adam (Kingma & Ba, 2014). We use the default parameter settings of TensorFlow, such that the initial base learning rate was $10^{-3}$, with $\beta_1 = 0.9$ and $\beta_2 = 0.999$. Thus the learning rate schedule is obtained by dividing the schedule in figure 1a by $10^{2}$. Remarkably, even here the three curves closely track each other.
5.2 Increasing the effective learning rate
We now focus on our secondary objective: minimizing the number of parameter updates required to train a model. As shown above, the first step is to replace decaying learning rates by increasing batch sizes. We show here that we can also increase the effective learning rate $\epsilon_{eff}$ at the start of training, while scaling the initial batch size $B \propto \epsilon_{eff}$. All experiments are conducted using SGD with momentum. There are 50000 images in the CIFAR-10 training set, and since the scaling rules only hold when $B \ll N$, we decided to set a maximum batch size $B_{max} = 5120$.
We consider four training schedules, all of which decay the noise scale by a factor of five in a series of three steps. "Original training schedule" follows the implementation of Zagoruyko & Komodakis (2016), using an initial learning rate of 0.1 which decays by a factor of 5 at each step, a momentum coefficient of 0.9, and a batch size of 128. "Increasing batch size" also uses a learning rate of 0.1, initial batch size of 128 and momentum coefficient of 0.9, but the batch size increases by a factor of 5 at each step. These schedules are identical to "Decaying learning rate" and "Increasing batch size" in section 5.1 above. "Increased initial learning rate" also uses increasing batch sizes during training, but additionally uses an initial learning rate of 0.5 and an initial batch size of 640. Finally, "Increased momentum coefficient" combines increasing batch sizes during training and the increased initial learning rate of 0.5, with an increased momentum coefficient of 0.98 and an initial batch size of 3200. Note that we only increase the batch size until it reaches $B_{max}$; after this point, we achieve subsequent decays in noise scale by decreasing the learning rate.
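As a consistency check (ours, purely illustrative), one can verify that all four hyper-parameter settings above launch from the same initial noise scale $g = \epsilon N / (B(1-m))$, which is why we expect similar learning curves per epoch:

    # A sketch (ours): all four schedules above start from the same noise
    # scale g = eps * N / (B * (1 - m)), with N = 50000 for CIFAR-10.

    N = 50000
    schedules = {
        "Original training schedule":      dict(eps=0.1, B=128,  m=0.9),
        "Increasing batch size":           dict(eps=0.1, B=128,  m=0.9),
        "Increased initial learning rate": dict(eps=0.5, B=640,  m=0.9),
        "Increased momentum coefficient":  dict(eps=0.5, B=3200, m=0.98),
    }
    for name, h in schedules.items():
        g = h["eps"] * N / (h["B"] * (1.0 - h["m"]))
        print(f"{name}: g = {g:.1f}")   # all four give g ~ 390.6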
We plot the evolution of the test set accuracy in figure 5, as a function of the number of parameter updates. Our implementation of the original training schedule requires 80000 updates, and reaches a final test accuracy of 94.3% (the original paper reports 95% accuracy, which we have not been able to replicate). "Increasing batch size" requires 29000 updates, reaching a final accuracy of 94.4%. "Increased initial learning rate" requires under 6500 updates, reaching a final accuracy of 94.5%. The difference in accuracy between these runs likely arises from the random initialization.
Finally, "Increased momentum coefficient" requires less than 2500 parameter updates, more than 30 times fewer than the original implementation! Unfortunately, it reaches a lower test accuracy of 93.3%. It is clear from figure 5 that this final schedule had not converged within the 90 epochs, so it is likely that this performance gap could be reduced; however, our goal in this paper is to demonstrate the gains that can be achieved without tuning the hyper-parameters or the training schedule. We discussed a potential explanation for this performance gap in section 4.
5.3 Training ImageNet in 2500 parameter updates
We now apply our insights to reduce the number of parameter updates required to train ImageNet. Goyal et al. (2017) trained a ResNet-50 on ImageNet in one hour, reaching 76.3% validation accuracy. To achieve this, they used batches of 8192 images, with an initial learning rate of 3.2 and a momentum coefficient of 0.9. They completed 90 training epochs, decaying the learning rate by a factor of ten at the 30th, 60th and 80th epoch. ImageNet contains around 1.28 million images, so this corresponds to approximately 14000 parameter updates. They also introduced a warm-up phase at the start of training, in which the learning rate and batch size were gradually increased.
We also train for 90 epochs and follow the same schedule, decaying the noise scale by a factor of ten at the 30th, 60th and 80th epoch. However, we did not include a warm-up phase. To set a stronger baseline, we replaced ResNet-50 by Inception-ResNet-V2 (Szegedy et al., 2017). Initially we used a ghost batch size of 32. In figure 6, we train with a learning rate of 3.0 and a momentum coefficient of 0.9. The initial batch size was 8192. For "Decaying learning rate", we hold the batch size fixed and decay the learning rate, while in "Increasing batch size" we increase the batch size to 81920 at the first step, but decay the learning rate at the following two steps. We repeat each schedule twice, and find that all four runs exhibit a very similar evolution of the test set accuracy during training. The final accuracies of the two "Decaying learning rate" runs are 78.7% and 77.8%, while the final accuracies of the two "Increasing batch size" runs are 78.1% and 76.8%. Although there is a slight drop, the difference in final test accuracies is similar to the variance between training runs. Increasing the batch size reduces the number of parameter updates during training from just over 14000 to below 6000. Note that the training curves appear unusually noisy because we reduced the number of test set evaluations to reduce the model training time.
Goyal et al. (2017) already increased the learning rate close to its maximum stable value. To further reduce the number of parameter updates we must increase the momentum coefficient. In the following, we decided to introduce a maximum batch size, $B_{max} = 2^{16} = 65536$. This ensures $B \ll N$, and it also improved the stability of our distributed training. We also increased the ghost batch size to 64, matching the batch size of our GPUs and reducing the training time. We compare three different schedules, all of which share the same base schedule, decaying the noise scale by a factor of ten at the 30th, 60th and 80th epoch. We use an initial learning rate of 3.0 throughout. "Momentum 0.9" uses an initial batch size of 8192, "Momentum 0.95" uses an initial batch size of 16384, and "Momentum 0.975" uses an initial batch size of 32768. For all schedules, we decay the noise scale by increasing the batch size until reaching $B_{max}$, and then decay the learning rate. We plot the test set accuracy in figure 7. "Momentum 0.9" achieves a final accuracy of 78.8% in just under 6000 updates. We performed two runs of "Momentum 0.95", achieving final accuracies of 78.1% and 77.8% in under 3500 updates. Finally, "Momentum 0.975" achieves final accuracies of 77.5% and 76.8% in under 2500 updates. It is clear from figure 8b that most of the runs had not converged after 90 epochs, but we chose to stick to the schedule of Goyal et al. (2017) for easier comparison.
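The initial batch sizes above follow directly from the scaling rule $B \propto 1/(1-m)$ at fixed noise scale; a quick check (ours, purely arithmetic):

    # A sketch (ours): batch sizes implied by B ~ 1/(1 - m) at fixed noise
    # scale, starting from the m = 0.9, B = 8192 baseline above.
    B0, m0 = 8192, 0.9
    for m in (0.9, 0.95, 0.975):
        B = round(B0 * (1.0 - m0) / (1.0 - m))
        print(f"m = {m}: B = {B}")   # 8192, 16384, 32768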
6 Related work
This paper extends the analysis of SGD in Smith & Le (2017) to include decaying learning rates. Mandt et al. (2017) also interpreted SGD as a stochastic differential equation, in order to discuss how SGD could be modified to perform approximate Bayesian posterior sampling. However they state that their analysis holds only in the neighborhood of a minimum, while Keskar et al. (2016) showed that the beneficial effects of noise are most pronounced at the start of training. The relationship between SGD and stochastic differential equations was also recently explored by Li et al. (2017), who proposed the use of control theory to set the learning rate and momentum coefficient.
Goyal et al. (2017) observed a linear scaling rule between batch size and learning rate, $\epsilon \propto B$, and used this rule to reduce the time required to train ResNet-50 on ImageNet to one hour. To our knowledge, this scaling rule was first adopted by Krizhevsky (2014). Bottou et al. (2016) (section 4.2) demonstrated that SGD converges to strongly convex minima in similar numbers of training epochs if the learning rate $\epsilon \propto B$. Hoffer et al. (2017) proposed an alternative square-root scaling rule, $\epsilon \propto \sqrt{B}$.
You et al. (2017a) proposed Layer-wise Adaptive Rate Scaling (LARS), which applies different learning rates to different parameters in the network, and used it to train ImageNet in 24 minutes (You et al., 2017b), albeit to a lower final accuracy of 58.5%. It remains to be seen whether LARS can be combined with the techniques proposed here. K-FAC (Martens & Grosse, 2015) is also gaining popularity as an efficient alternative to SGD. Wilson et al. (2017) argued that adaptive optimization methods tend to generalize less well than SGD and SGD with momentum (although they did not include K-FAC in their study), while our work reduces the gap in convergence speed.
7 Conclusions
In deep learning, we can often achieve the benefits of decaying the learning rate by instead increasing the batch size. We support this claim with experiments on CIFAR-10 and ImageNet, and with a number of optimizers including SGD, SGD with momentum, and Adam. Our findings enable the efficient use of vast batch sizes, significantly reducing the number of parameter updates required to train a model. This has the potential to dramatically reduce model training times. We further increase the batch size by increasing the learning rate $\epsilon$ and momentum parameter $m$, while scaling $B \propto \epsilon/(1-m)$. Combining these strategies, we train Inception-ResNet-V2 on ImageNet to 77% validation accuracy in under 2500 parameter updates, using batches of 65536 images. Most strikingly, we achieve this without any hyper-parameter tuning, since our scaling rules enable us to directly convert existing hyper-parameter choices from the literature for large batch training.
Acknowledgments
We thank Prajit Ramachandran, Vijay Vasudevan, Gabriel Bender, Chris Ying, Matthew Johnson and Martin Abadi for helpful discussions.
References
 Balles et al. (2016) Lukas Balles, Javier Romero, and Philipp Hennig. Coupling adaptive batch sizes with learning rates. arXiv preprint arXiv:1612.05086, 2016.
 Bottou et al. (2016) Léon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838, 2016.
 Byrd et al. (2012) Richard H Byrd, Gillian M Chin, Jorge Nocedal, and Yuchen Wu. Sample size selection in optimization methods for machine learning. Mathematical programming, 134(1):127–155, 2012.
 Chaudhari et al. (2016) Pratik Chaudhari, Anna Choromanska, Stefano Soatto, and Yann LeCun. Entropy-SGD: Biasing gradient descent into wide valleys. arXiv preprint arXiv:1611.01838, 2016.
 De et al. (2017) Soham De, Abhay Yadav, David Jacobs, and Tom Goldstein. Big batch SGD: Automated inference with adaptive batches. In Artificial Intelligence and Statistics, pp. 1504–1513, 2017.
 Friedlander & Schmidt (2012) Michael P Friedlander and Mark Schmidt. Hybrid deterministic-stochastic methods for data fitting. SIAM Journal on Scientific Computing, 34(3):A1380–A1405, 2012.
 Goyal et al. (2017) Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
 Hochreiter & Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.
 Hoffer et al. (2017) Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. arXiv preprint arXiv:1705.08741, 2017.
 Keskar et al. (2016) Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016.
 Kingma & Ba (2014) Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 Krizhevsky (2014) Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014.
 Li et al. (2017) Qianxiao Li, Cheng Tai, and E Weinan. Stochastic modified equations and adaptive stochastic gradient algorithms. arXiv preprint arXiv:1511.06251, 2017.
 Loshchilov & Hutter (2016) Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with restarts. arXiv preprint arXiv:1608.03983, 2016.
 Mandt et al. (2017) Stephan Mandt, Matthew D Hoffman, and David M Blei. Stochastic gradient descent as approximate bayesian inference. arXiv preprint arXiv:1704.04289, 2017.
 Martens & Grosse (2015) James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In International Conference on Machine Learning, pp. 2408–2417, 2015.
 Nesterov (1983) Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k^2). In Soviet Mathematics Doklady, volume 27, pp. 372–376, 1983.
 Prechelt (1998) Lutz Prechelt. Early stopping - but when? Neural Networks: Tricks of the Trade, pp. 553–553, 1998.
 Robbins & Monro (1951) Herbert Robbins and Sutton Monro. A stochastic approximation method. The annals of mathematical statistics, pp. 400–407, 1951.
 Smith & Le (2017) Samuel L. Smith and Quoc V. Le. A bayesian perspective on generalization and stochastic gradient descent. arXiv preprint arXiv:1710.06451, 2017.
 Szegedy et al. (2017) Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In AAAI, pp. 4278–4284, 2017.
 Welling & Teh (2011) Max Welling and Yee W Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 681–688, 2011.
 Wilson et al. (2017) Ashia C Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, and Benjamin Recht. The marginal value of adaptive gradient methods in machine learning. arXiv preprint arXiv:1705.08292, 2017.
 You et al. (2017a) Yang You, Igor Gitman, and Boris Ginsburg. Scaling SGD batch size to 32k for imagenet training. arXiv preprint arXiv:1708.03888, 2017a.
 You et al. (2017b) Yang You, Zhao Zhang, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. ImageNet training in 24 minutes. arXiv preprint arXiv:1709.05011, 2017b.
 Zagoruyko & Komodakis (2016) Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
 Zhang et al. (2016) Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
Appendix A The growth of the accumulation at the start of training
The update equations for SGD with momentum,
(8) $\Delta A = -(1-m)A + \frac{d\hat{C}}{d\omega}$
(9) $\Delta \omega = -A\epsilon$
Here $A$ is the "accumulation" variable, while $\frac{d\hat{C}}{d\omega}$ is the mean gradient per training example, estimated on a batch of size $B$. We initialize the accumulation to zero, and it takes a number of updates for the magnitude of the accumulation to "grow in". During this time, the size of the parameter updates is suppressed, reducing the effective learning rate. We can model the growth of the accumulation by assuming that the gradient at the start of training is approximately constant, such that $\frac{d\hat{C}}{d\omega}$ is fixed. Consequently the accumulation integrates an underlying differential equation,
(10) $\frac{dA}{dt} = -(1-m)A + \frac{d\hat{C}}{d\omega}$
The variable $t$ describes the number of parameter updates performed. Since $\frac{d\hat{C}}{d\omega}$ is constant, this differential equation has the solution $A(t) = \frac{1}{(1-m)} \frac{d\hat{C}}{d\omega} \left( 1 - e^{-(1-m)t} \right)$. We note that $t = N N_{epoch} / B$ to obtain,
(11) $A = \frac{1}{(1-m)} \frac{d\hat{C}}{d\omega} \left( 1 - e^{-N N_{epoch} (1-m)/B} \right)$
Here $N_{epoch}$ denotes the number of training epochs performed. The accumulation variable grows in exponentially, and consequently we can estimate the effective number of "lost" training epochs,
(12) $N_{lost} = \int_0^{\infty} dN_{epoch}\, e^{-N N_{epoch} (1-m)/B}$
(13) $= \frac{B}{N(1-m)}$
Since the batch size $B \propto 1/(1-m)$, we find $N_{lost} \propto 1/(1-m)^2$. We must either introduce additional training epochs to compensate, or ensure that the number of lost training epochs is negligible when compared to the total number of training epochs performed before decaying the noise scale. Note that $N_{lost}$ rises most rapidly when one increases the momentum coefficient.
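The closed form above can also be checked numerically. A minimal simulation (ours; the specific values of $B$, $m$ and the constant gradient are illustrative) integrates equation 8 under a constant gradient and accumulates the fractional shortfall of $A$ from its steady-state value; up to discretization error, the total matches $B/(N(1-m))$:

    # A sketch (ours): numerically verify N_lost = B / (N * (1 - m)) by
    # iterating the accumulation update (equation 8) under a constant
    # gradient and summing the fractional shortfall from steady state.
    N, B, m, grad = 50000, 3200, 0.98, 1.0
    updates_per_epoch = N / B
    steady = grad / (1.0 - m)          # steady-state accumulation
    A, lost = 0.0, 0.0
    for _ in range(10000):             # 10000 updates: ample for convergence
        A += -(1.0 - m) * A + grad     # equation 8
        lost += (steady - A) / steady / updates_per_epoch
    print("simulated:", round(lost, 3))                   # ~3.14 (discrete sum)
    print("closed form:", round(B / (N * (1.0 - m)), 3))  # 3.2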