Landscape of the Empirical Risk of Overparametrized Deep Networks
CBMM Memo No. 066 July 3, 2019
Theory of Deep Learning II: Landscape of the Empirical Risk in Deep Learning
by
Qianli Liao and Tomaso Poggio
Center for Brains, Minds, and Machines, McGovern Institute for Brain Research,
Massachusetts Institute of Technology, Cambridge, MA, 02139.
Abstract:
Previous theoretical work on deep learning and neural network optimization tends to focus on avoiding saddle points and local minima. However, the practical observation is that, at least in the case of the most successful Deep Convolutional Neural Networks (DCNNs), practitioners can always increase the network size to fit the training data (an extreme example is [1]). The most successful DCNNs, such as VGG and ResNets, are best used with a degree of "overparametrization". In this work, we characterize with a mix of theory and experiments the landscape of the empirical risk of overparametrized DCNNs. We first prove, in the regression framework, the existence of a large number of degenerate global minimizers with zero empirical error (modulo inconsistent equations). The argument, which relies on Bezout's theorem, is rigorous when the RELUs are replaced by a polynomial nonlinearity (which empirically works as well). As described in our Theory III paper [2], the same minimizers are degenerate and thus very likely to be found by SGD, which will furthermore select with higher probability the most robust zero-minimizer. We further experimentally explore and visualize the landscape of the empirical risk of a DCNN on CIFAR10 during the entire training process, and especially near the global minima. Finally, based on our theoretical and experimental results, we propose an intuitive model of the landscape of the DCNN's empirical loss surface, which might not be as complicated as commonly believed.
This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
Contents
 1 Introduction
 2 Framework
 3 Landscape of the Empirical Risk: Theoretical Analyses
 4 The Landscape of the Empirical Risk: Visualizing and Analysing the Loss Surface During the Entire Training Process (on CIFAR10)
 5 The Landscape of the Empirical Risk: Towards an Intuitive Baseline Model
 6 Previous theoretical work
 7 Discussion and Conclusions
 A Landscape of the Empirical Risk: Theoretical Analyses
 B The Landscape of the Empirical Risk: Visualizing and Analysing the Loss Surface During the Entire Training Process (on CIFAR10)
 C The Landscape of the Empirical Risk: Towards an Intuitive Baseline Model
 D Overparametrizing DCNN does not harm generalization
 E Study the flat global minima by perturbations on CIFAR10 (with smaller perturbations)
1 Introduction
There are at least three main parts in a theory of Deep Neural Networks. The first part is about approximation: how and when can deep neural networks avoid the curse of dimensionality [3]? The second part is about the landscape of the minima of the empirical risk: what can we say in general about global and local minima? The third part is about generalization: why can SGD (Stochastic Gradient Descent) generalize so well despite standard overparametrization of deep neural networks [2]? In this paper we focus on the second part: the landscape of the empirical risk.
Our main results: we characterize the landscape of the empirical risk from three perspectives:

Theoretical Analyses (Section 3): We study the nonlinear system of equations corresponding to critical points of the gradient of the loss and to zero minimizers, corresponding to interpolating solutions. In these equations the functions representing the network's output contain the RELU nonlinearity. We consider an ε-approximation of it in the sup norm, using a polynomial approximation or the corresponding Legendre expansion. We can then invoke Bezout's theorem to conclude that there is a very large number of zero-error minima, and that the zero-error minima are highly degenerate, whereas the local nonzero minima, if they exist, may not be degenerate. In the case of classification, zero error implies the existence of a margin, that is, a flat region in all dimensions around zero error.

Visualizations and Experimental Explorations (Section 4): The theoretical results indicate that there are degenerate global minima in the loss surface of a DCNN. However, it is unclear what the rest of the landscape looks like. To gain more insight, we visualize the landscape of the entire training process using Multidimensional Scaling. We also probe the landscape locally at different locations by perturbation and interpolation experiments.

A simple model of the landscape (Section 5): Summarizing our theoretical and experimental results, we propose a simple baseline model for the landscape of the empirical risk, as shown in Figure 1. We conjecture that the loss surface of a DCNN is not as complicated as commonly believed. At least in the case of overparametrized DCNNs, the loss surface might simply be a collection of (high-dimensional) basins, which have some of the following interesting properties: 1. Every basin reaches a flat global minimum. 2. The basins may be rugged, so that any perturbation or noise leads to a slightly different convergence path. 3. Despite being perhaps locally rugged, a basin has a relatively regular overall landscape, so that the average of two models within a basin gives a model whose error is roughly the average of (or even lower than) the errors of the original two models. 4. Interpolation between basins, on the other hand, may significantly raise the error. 5. Each basin may have the good property of containing no local minima: we did not encounter any local minima on CIFAR10, even when training with batch gradient descent without noise.
2 Framework
We assume a deep network of the convolutional type and overparametrization, that is, more weights than data points, since this is how successful deep networks have been used. Under these conditions, we will show that imposing zero empirical error provides a system of equations (at the zeros) that has a large number of degenerate solutions in the weights. The equations are polynomial in the weights, with coefficients reflecting components of the data vectors (one vector per data point). The system of equations is underdetermined (more unknowns than equations, i.e., than data points) because of the assumed overparametrization. Because the global minima are degenerate, that is, flat in many of the dimensions, they are more likely to be found by SGD than local minima, which are less degenerate. Although generalization is not the focus of this work, such flat minima are likely to generalize better than sharp minima, which might be the reason why overparametrized deep networks do not seem to overfit.
3 Landscape of the Empirical Risk: Theoretical Analyses
The following theoretical analysis of the landscape of the empirical risk is based on a few assumptions: (1) We assume that the network is overparametrized, typically using several times more parameters (the weights of the network) than data points. In practice, even with data augmentation (in most of the experiments in this paper we do not use data augmentation), one can always make the model larger to achieve overparametrization without sacrificing either the training or generalization performance. (2) This section assumes a regression framework. We study how many solutions in the weights lead to perfect prediction of the training labels. In classification settings, such solutions are a subset of all solutions.
Among the critical points of the gradient of the empirical loss, we consider first the zeros of the loss function, given by the set of equations f(x_i; W) = y_i, i = 1, ..., n, where n is the number of training examples, (x_i, y_i) are the data points and f is the network parametrized by the weights W.
The function realized by a deep neural network is polynomial if each of the RELU units is replaced by a univariate polynomial. Of course, each RELU can be approximated within any desired accuracy ε in the sup norm by a polynomial. In the well-determined case (as many unknown weights as equations, that is, data points), Bezout's theorem provides an upper bound on the number of solutions. The number of distinct zeros (counting points at infinity, using projective space, assigning an appropriate multiplicity to each intersection point, and excluding degenerate cases) would be equal to the product of the degrees of the equations. Since the system of equations is usually underdetermined – as many equations as data points but more unknowns (the weights) – we expect an infinite number of global minima, in the form of regions of zero empirical error. If the equations are inconsistent, there are still many global minima of the squared error that are solutions of systems of equations with a similar form.
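The ε-approximation of the RELU that underlies this argument is easy to check numerically. A minimal sketch, using a least-squares fit on a dense grid as a stand-in for the best sup-norm (Chebyshev-type) approximation; the degrees chosen are arbitrary illustrations:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Fit a univariate polynomial of degree d to the ReLU on [-1, 1] and
# measure the sup-norm error on the grid. A least-squares fit is not the
# optimal sup-norm approximant, but the qualitative trend is the same.
x = np.linspace(-1.0, 1.0, 2001)
errors = {}
for d in (2, 4, 8, 16):
    coeffs = np.polyfit(x, relu(x), d)
    errors[d] = float(np.max(np.abs(np.polyval(coeffs, x) - relu(x))))
    print(f"degree {d:2d}: sup-norm error ~ {errors[d]:.4f}")
```

Increasing the degree d drives the sup-norm error down, which is the trade-off the text refers to: smaller ε requires higher d, and d in turn controls the degree of the polynomial system.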
We assume for simplicity that the equations have a particular compositional form (see [4]). The degree of each approximating equation is determined by the desired accuracy ε for approximating the ReLU activation by a univariate polynomial of degree d and by the number of layers ℓ.
The argument based on the approximation of RELUs for estimating the number of zeros is a qualitative one, since good approximation of the function does not by itself imply – via Bezout's theorem – good approximation of the number of its zeros. Notice, however, that we could abandon the approximation approach completely and just consider the number of zeros when the RELUs are replaced by a low-order univariate polynomial. The argument then would not strictly apply to RELU networks but would still carry weight, because the two types of networks – with polynomial activations and with RELUs – behave empirically (see Figure 2) in a similar way.
In the Supporting Material we provide a simple example of a network with the associated equations for the exact zeros. They are a system of underconstrained polynomial equations of degree d^ℓ. In general, there are as many constraints as data points, n, for a much larger number W of unknown weights. There are no solutions if the system is inconsistent – which happens if and only if 1 is a linear combination (with polynomial coefficients) of the equations (this is Hilbert's Nullstellensatz). Otherwise, it has infinitely many complex solutions: the set of all solutions is an algebraic set of dimension at least W − n. If the underdetermined system is chosen at random, the dimension is equal to W − n with probability one.
Even in the nondegenerate case (as many data as parameters), Bezout's theorem suggests that there are many solutions. With ℓ layers the degree of each polynomial equation is d^ℓ. With n data points the Bezout upper bound on the number of zeros in the weights is d^(nℓ). Even if the number of real zeros – corresponding to zero empirical error – is much smaller (see Shub and Smale [5] for estimates), the number is still enormous for a CIFAR-sized problem.
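The magnitude of such a bound is easy to appreciate numerically. A sketch, with illustrative values of the activation degree d, depth ℓ and training-set size n; these are assumptions chosen for a CIFAR10-sized problem, not quantities fixed by the text:

```python
import math

# Order of magnitude of the Bezout bound d**(n*l): n equations (one per
# data point), each of degree d**l. Rather than computing the astronomically
# large integer, we count its decimal digits via log10.
d, l, n = 2, 5, 50_000   # illustrative: quadratic activation, 5 layers, CIFAR10-sized n
digits = n * l * math.log10(d)
print(f"Bezout upper bound is roughly 10^{digits:.0f}")
```

Even with the smallest nontrivial degree d = 2, the bound has tens of thousands of digits, consistent with the "enormous" count claimed above.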
As mentioned, in several cases we expect absolute zeros to exist with zero empirical error. If the equations are inconsistent it seems likely that global minima with similar properties exist.
It is interesting now to consider the critical points of the gradient. They satisfy ∇_W L(W) = 0, which gives as many equations as weights: Σ_i ∇_W ℓ(f(x_i; W), y_i) = 0, where ℓ is the loss function.
Approximating each ReLU in f within ε in the sup norm with a fixed polynomial yields again a system of polynomial equations in the weights, of higher order than in the case of the zero-minimizers. They are of course satisfied by the degenerate zeros of the empirical error, but also by additional solutions that are, in the general case, nondegenerate.
Thus, we have Proposition 1: there is a very large number of zero-error minima, which are highly degenerate, unlike the local nonzero minima.
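The degeneracy in Proposition 1 can be illustrated with a toy example of our own (not the memo's network): the two-weight model f(x) = w2 · w1 · x fit to a single data point has an entire curve of zero-error global minima, w1 · w2 = 1, and gradient descent reaches different points on that curve depending on the initialization:

```python
# Gradient descent on the square loss (w1*w2 - 1)**2 for the overparametrized
# toy model f(x) = w2 * w1 * x with one data point (x, y) = (1, 1).
def train(w1, w2, lr=0.1, steps=2000):
    for _ in range(steps):
        r = w1 * w2 - 1.0                      # residual on the one data point
        w1, w2 = w1 - lr * 2 * r * w2, w2 - lr * 2 * r * w1
    return w1, w2

a = train(0.5, 0.5)
b = train(2.0, 0.3)
for w1, w2 in (a, b):
    print(f"w1*w2 = {w1 * w2:.6f}, at (w1, w2) = ({w1:.3f}, {w2:.3f})")
```

Both runs reach (numerically) zero loss, but at different parameter vectors: the set of global minima is a one-dimensional manifold, the simplest instance of the degenerate zero-error sets the proposition describes.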
4 The Landscape of the Empirical Risk: Visualizing and Analysing the Loss Surface During the Entire Training Process (on CIFAR10)
4.1 Experimental Settings
In the empirical work described below we analyze a classification problem with the cross-entropy loss. Our theoretical analyses in the regression framework provide a lower bound on the number of solutions of the classification problem.
Unless mentioned otherwise, we trained a 6-layer (with the 1st layer being the input) Deep Convolutional Neural Network (DCNN) on CIFAR10. All the layers are 3x3 convolutional layers with stride 2. No pooling is performed. Batch Normalization (BN) [6] is used between hidden layers. The shifting and scaling parameters of BN are not used. No data augmentation is performed, so the training set is fixed (size = 50,000). There are 188,810 parameters in the DCNN.
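The parameter bookkeeping for such a stack of 3x3 convolutions can be sketched as follows. The channel widths below are illustrative guesses of ours (the memo does not list them), so the total will not reproduce the reported 188,810; only the counting rule is shown:

```python
# Parameter count for a stack of 3x3 convolutional layers.
def conv3x3_params(c_in, c_out, bias=True):
    # a 3x3 kernel has 9 weights per (input channel, output channel) pair
    return 9 * c_in * c_out + (c_out if bias else 0)

widths = [3, 32, 64, 64, 64, 64]   # RGB input followed by five conv layers (assumed)
total = sum(conv3x3_params(i, o) for i, o in zip(widths[:-1], widths[1:]))
# BN without the shifting/scaling ("affine") parameters adds no trainable weights.
print(total)
```

Note that disabling the BN shift and scale parameters, as the memo does, removes 2 parameters per channel from the count.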
Multidimensional Scaling. The core of our visualization approach is Multidimensional Scaling (MDS) [7]. We record a large number of intermediate models during several training schemes. Each model is a high-dimensional point, with the number of dimensions being the number of parameters. The strain-based MDS algorithm is applied to these points, and a corresponding set of 2D points is found such that the dissimilarity matrix of the 2D points is as similar as possible to that of the high-dimensional points. One minus the cosine similarity is used as the dissimilarity metric; this is more robust to scaling of the weights, which is usually normalized out by BN. Euclidean distance gives qualitatively similar results.
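The strain-based (classical) MDS step can be sketched directly in NumPy. The snapshot matrix below is random stand-in data for the recorded checkpoints; in the experiments each row would be the flattened weights of one intermediate model:

```python
import numpy as np

def cosine_dissimilarity(X):
    # 1 - cosine similarity between rows of X (models as weight vectors).
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return 1.0 - Xn @ Xn.T

def classical_mds(D, k=2):
    # Strain-based (classical/Torgerson) MDS: double-center the squared
    # dissimilarities, then embed with the top-k eigenvectors.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy "checkpoints": 10 models of dimension 50; real runs would use the
# full parameter vectors (188,810-dimensional) in the same way.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((10, 50))
coords = classical_mds(cosine_dissimilarity(snapshots))
print(coords.shape)  # -> (10, 2)
```

The resulting 2D coordinates are what the trajectory figures plot; only pairwise dissimilarities enter the computation, so the embedding is insensitive to a global rescaling of the weights.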
4.2 Global Visualization of SGD Training Trajectories
We show in Figure 3 the optimization trajectories of Stochastic Gradient Descent (SGD) throughout the entire process of training a DCNN on CIFAR10. The SGD trajectories follow the minibatch approximations of the training loss surface. Although the trajectories are noisy due to SGD, the points collected along them provide a good visualization of the landscape of the empirical risk. We show the visualization based on the weights of layer 2. The results from other layers (e.g., layer 5) are qualitatively similar and are shown in the Appendix.
4.3 Global Visualization of Training Loss Surface with Batch Gradient Descent
Next, we visualize in Figure 4 the exact training loss surface by training the models using Batch Gradient Descent (BGD). In addition to training, we also performed perturbation and interpolation experiments as described in Figure 4. The main trajectory, the branches and the interpolated models together provide a good visualization of the landscape of the empirical risk.
4.4 More Detailed Analyses of Several Local Landscapes (especially the flat global minima)
We perform more detailed analyses at several locations of the landscape. In particular, we would like to check whether the global minimum is flat. We train a 6-layer (with the 1st layer being the input) DCNN on CIFAR10 with 60 epochs of SGD (batch size = 100) and 400 epochs of Batch Gradient Descent (BGD). BGD is performed to get as close to the global minimum as possible. Next we select three models from this learning trajectory: (1) the model at SGD epoch 5; (2) the model at SGD epoch 30; (3) the final model after 60 epochs of SGD and 400 epochs of BGD. The results for (3) are shown in Figure 5, while those for (1) and (2) are shown in the Appendix.
5 The Landscape of the Empirical Risk: Towards an Intuitive Baseline Model
In this section, we propose a simple baseline model for the landscape of empirical risk that is consistent with all of our theoretical and experimental findings.
In the case of overparametrized DCNNs, here is a recapitulation of our main observations:
Theoretically, we show that there are a large number of global minimizers with zero (or small) empirical error. The same minimizers are degenerate.
Whether training with Stochastic Gradient Descent (SGD) or Batch Gradient Descent (BGD), a small perturbation of the model almost always leads to a slightly different convergence path. The earlier in training the perturbation occurs, the more different the final model is.
Interpolating two "nearby" convergence paths leads to another convergence path with similar errors at every epoch. Interpolating two "distant" models leads to raised errors.
We do not observe local minima, even when training with BGD.
There is a simple model that is consistent with the above observations. As a first-order characterization, we believe that the landscape of the empirical risk is simply a collection of (hyper-)basins, each of which has a flat global minimum. Illustrations are provided in Figure 1.
As shown in Figure 1, the building block of the landscape is a basin. What does a basin look like in high dimensions? Is there any evidence for this model? One definition of a hyper-basin is that, as the loss decreases, the hyper-volume of the corresponding parameter space decreases (see Figure 1 (H) for an example). With the same amount of scaling in each dimension, the volume shrinks much faster as the number of dimensions increases: with a linear decrease in each dimension, the hyper-volume decreases as an exponential function of the number of dimensions. With the number of dimensions being the number of parameters, the volume shrinks incredibly fast. This leads to a phenomenon we all observe experimentally: whenever one perturbs a model by adding some significant noise, the loss almost never goes down. The larger the perturbation, the more the error increases. The reason is simple if the local landscape is a hyper-basin: the volume of a lower-loss area is so small that a random perturbation has almost no chance of landing there. The larger the perturbation, the more likely it is to land in a much higher-loss area.
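The exponential shrinkage can be made concrete with a one-line computation. Scaling each of d dimensions by a factor r keeps a fraction r^d of the hyper-volume; with d equal to the parameter count of the network above, this fraction is vanishingly small:

```python
import math

# Fraction of hyper-volume left after shrinking every side by 10% (r = 0.9),
# reported as log10 to avoid floating-point underflow at large d.
r = 0.9
for d in (1, 10, 100, 188_810):   # 188,810 = parameter count of the DCNN above
    print(f"d = {d:>6}: log10(volume fraction) = {d * math.log10(r):.1f}")
```

At d = 188,810 the remaining fraction is about 10^-8639, which is why a random perturbation essentially never lands in the lower-loss region.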
There are, nevertheless, other plausible variants of this model that can explain our experimental findings. In Figure 1 (G), we show an alternative model we call a "basin-fractal". This model is more elegant while also being consistent with most of the above observations. The key difference between simple basins and the "basin-fractal" is that in a "basin-fractal" one should be able to find "walls" (raised errors) between two models within the same basin. Since it is a fractal, these "walls" should be present at all levels of error. For the moment, we have only discovered "walls" between two models whose trajectories are very different (obtained either by splitting very early in training, as shown in branch 1 of Figure 4, or by a very significant perturbation, as shown in the Appendix). We have not found other significant "walls" in any of the other perturbation and interpolation experiments. So a first-order model of the landscape would be just a collection of simple basins. Nevertheless, we do find the "basin-fractal" elegant, and perhaps the "walls" in the low-loss areas are just too flat to be noticed.
Another surprising finding about the basins is that they seem to be so "smooth" that there are no local minima. Even when training with batch gradient descent, we do not encounter any local minima. When trained long enough with small enough learning rates, one always reaches 0 classification error and negligible cross-entropy loss.
6 Previous theoretical work
Deep Learning references start with Hinton's backpropagation and with LeCun's convolutional networks (see [8] for a review). Of course, multilayer convolutional networks have been around at least as far back as the optical processing era of the 70s. The Neocognitron [9] was a convolutional neural network trained to recognize characters. The property of compositionality was a main motivation for hierarchical models of visual cortex such as HMAX, which can be regarded as a pyramid of AND and OR layers [10], that is, a sequence of conjunctions and disjunctions. [3] provided formal conditions under which deep networks can avoid the curse of dimensionality. More specifically, several papers have appeared on the landscape of the training error for deep networks. Techniques borrowed from the physics of spin glasses (which in turn were based on old work by Marc Kac on the zeros of algebraic equations) were used [11] to suggest the existence of a band of local minima of high quality, as measured by the test error. The argument, however, depends on a number of assumptions which are rather implausible (see [12] and [13] for comments and further work on the problem). Soudry and Carmon [12] show that with mild overparametrization and dropout-like noise, the training error for a neural network with one hidden layer and piecewise linear activation is zero at every local minimum. All these results suggest that the energy landscape of deep neural networks should be easy to optimize. They more or less hold in practice – it is easy to optimize a prototypical deep network to near-zero loss on the training set.
7 Discussion and Conclusions
Are the results shown in this work data dependent? We visualized the SGD trajectories in the case of fitting random labels. There is no qualitative difference between those results and the ones obtained with normal labels, so it is safe to say that the results are at least not label dependent. We will further check whether fitting random input data to random labels gives similar results.
What about generalization? It is experimentally observed (see Figure 40 in the Appendix) that, at least in all our experiments, overparametrization (e.g., 60x more parameters than data) does not hurt generalization at all.
Conclusions: Overall, we characterize the landscape of empirical risk of overparametrized DCNNs with a mix of theoretical analyses and experimental explorations. We provide a simple baseline model of the landscape that can account for all of our theoretical and experimental results. Nevertheless, as the final model is so simple, it is hard to believe that it would completely characterize the true loss surface of DCNN. Further research is warranted.
References
 [1] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals, “Understanding deep learning requires rethinking generalization,” arXiv preprint arXiv:1611.03530, 2016.
 [2] C. Zhang, Q. Liao, A. Rakhlin, K. Sridharan, B. Miranda, N. Golowich, and T. Poggio, “Theory of deep learning iii: Generalization properties of sgd,” tech. rep., Center for Brains, Minds and Machines (CBMM), 2017.
 [3] T. Poggio, H. Mhaskar, L. Rosasco, B. Miranda, and Q. Liao, “Why and when can deep–but not shallow–networks avoid the curse of dimensionality: a review,” arXiv preprint arXiv:1611.00740, 2016.
 [4] H. Mhaskar, Q. Liao, and T. Poggio, “Learning real and boolean functions: When is deep better than shallow?,” Center for Brains, Minds and Machines (CBMM) Memo No. 45, also in arXiv, 2016.
 [5] M. Shub and S. Smale, "Complexity of Bezout's theorem V: Polynomial time," Theoretical Computer Science, vol. 133, pp. 141–164, 1994.
 [6] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, 2015.
 [7] I. Borg and P. J. Groenen, Modern multidimensional scaling: Theory and applications. Springer Science & Business Media, 2005.
 [8] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, pp. 436–444, 2015.
 [9] K. Fukushima, "Neocognitron: A self-organizing neural network for a mechanism of pattern recognition unaffected by shift in position," Biological Cybernetics, vol. 36, no. 4, pp. 193–202, 1980.
 [10] M. Riesenhuber and T. Poggio, “Hierarchical models of object recognition in cortex,” Nature Neuroscience, vol. 2, pp. 1019–1025, Nov. 1999.
 [11] A. Choromanska, Y. LeCun, and G. Ben Arous, "Open problem: The landscape of the loss surfaces of multilayer networks," JMLR: Workshop and Conference Proceedings, 28th Annual Conference on Learning Theory, pp. 1–5, 2015.
 [12] D. Soudry and Y. Carmon, "No bad local minima: Data independent training error guarantees for multilayer neural networks," arXiv preprint arXiv:1605.08361, 2016.
 [13] K. Kawaguchi, “Deep learning without poor local minima,” in Advances in Neural Information Processing Systems (NIPS), 2016.
Appendix A Landscape of the Empirical Risk: Theoretical Analyses
A.1 Optimization of compositional functions: Bezout's theorem
The following analysis of the landscape of the empirical risk is based on two assumptions that hold true in most applications of deep convolutional networks:

overparametrization of the network, typically using several times more parameters (the weights of the network) than data points. In practice, even with data augmentation, one can always make the model larger to achieve overparametrization without sacrificing either the training or generalization performance.

each of the equations corresponding to zeros of the empirical risk (we assume a regression framework attempting to minimize a loss such as the square loss) can be approximated by a polynomial equation in the weights, via a polynomial approximation within ε (in the sup norm) of the RELU nonlinearity.
The main observation is that the degree of each approximating equation is determined by the accuracy ε we desire for approximating the ReLU activation by a univariate polynomial of degree d and by the number of layers ℓ. (Of course, we have to constrain the range of values of the argument of the RELUs in order to set the degree of the polynomial that achieves accuracy ε.) This approximation argument is of course a qualitative one, since good approximation of the function does not by itself imply good approximation of the number of its zeros. Notice that we could abandon the approximation approach completely and just consider the number of zeros when the RELUs are exactly a low-order univariate polynomial. The argument then would not apply to RELU networks but would still carry weight, because the two types of networks – with polynomial activations and with RELUs – behave theoretically and empirically in a similar way. In the well-determined case (as many unknown weights as equations, that is, data points), Bezout's theorem provides an upper bound on the number of solutions. The number of distinct zeros (counting points at infinity, using projective space, assigning an appropriate multiplicity to each intersection point, and excluding degenerate cases) would be equal to the product of the degrees of the equations. Since the system of equations is usually underdetermined – as many equations as data points but more unknowns (the weights) – we expect an infinite number of global minima, in the form of regions of zero empirical error. If the equations are inconsistent, there are still many global minima of the squared error that are solutions of systems of equations with a similar form.
A.2 Global minima with zero error
We consider a simple example of the zeros of the empirical error, that is, exact solutions of the set of equations obtained by setting f(x_i; W) = y_i, where (x_i, y_i) are the data points and f is the network parametrized by the weights W. In particular, we consider the zeros on the training set of a network with ReLU activations approximating a function of four variables with the following compositional structure:
(1) g(x_1, x_2, x_3, x_4) = h_2(h_{11}(x_1, x_2), h_{12}(x_3, x_4)).
We assume that the deep approximation network uses ReLUs σ(z) = max(0, z) as follows
(2) h_{11}(x_1, x_2) = Σ_i a_i σ(⟨w_i, (x_1, x_2)⟩ + b_i)
and
(3) h_{12}(x_3, x_4) = Σ_j a'_j σ(⟨w'_j, (x_3, x_4)⟩ + b'_j),
(4) h_2(u, v) = Σ_k a''_k σ(⟨w''_k, (u, v)⟩ + b''_k).
There are usually quite a few more units in the first and second layers than in the example above.
This example generalizes to the case in which the kernel support is larger than two inputs per node (for instance 3x3, as in ResNets). In the standard case, each node (for instance in the first layer) contains quite a few units and as many outputs. However, the effective outputs are much fewer, so that each effective output is a linear combination of several ReLU units.
Consider Figure 7. The approximating polynomial equations for the zeros of the empirical error for this network, which could be part of a larger network, are, for i = 1, ..., n, where (x_i, y_i) are the data points:
(5) p(x_i; W) − y_i = 0, i = 1, ..., n,
(6) where p(x; W) denotes the network of Figure 7 with each ReLU σ replaced by a univariate polynomial of degree d approximating it within ε.
The above equations describe the simple case of one ReLU per node for the network of Figure 7. Equations 5 are a system of underconstrained polynomial equations of degree d^ℓ. In general, there are as many constraints as data points, n, for a much larger number W of unknown weights. There are no solutions if the system is inconsistent – which happens if and only if 1 is a linear combination (with polynomial coefficients) of the equations (this is Hilbert's Nullstellensatz). Otherwise, it has infinitely many complex solutions: the set of all solutions is an algebraic set of dimension at least W − n. If the underdetermined system is chosen at random, the dimension is equal to W − n with probability one.
Even in the nondegenerate case (as many data as parameters), Bezout's theorem suggests that there are many solutions. With ℓ layers the degree of each polynomial equation is d^ℓ. With n data points the Bezout upper bound on the number of zeros in the weights is d^(nℓ). Even if the number of real zeros – corresponding to zero empirical error – is much smaller (see Shub and Smale [5] for estimates), the number is still enormous for a CIFAR-sized problem.
A.3 Minima
As mentioned, in several cases we expect absolute zeros to exist with zero empirical error. If the equations are inconsistent it seems likely that global minima with similar properties exist.
Corollary 1
In general, the zero-error global minima are degenerate, with dimensionality of the order of the number of weights W minus the number of data points n; nonzero minima, if they exist, are generically less degenerate. This is true in the linear case and also in the presence of ReLUs.
Let us consider the same example as before, looking at the critical points of the gradient. With a square loss function the critical points of the gradient are:
(7) ∇_W Σ_{i=1}^{n} (f(x_i; W) − y_i)^2 = 0,
which gives the equations
(8) Σ_{i=1}^{n} (f(x_i; W) − y_i) ∇_W f(x_i; W) = 0.
Approximating each ReLU in f within ε in the sup norm with a fixed polynomial yields again a system of polynomial equations in the weights, of higher order than in the case of the zero-minimizers. They are of course satisfied by the degenerate zeros of the empirical error, but also by additional solutions that are, in the general case, nondegenerate.
We summarize our main observations on the approximating system of equations in the following
Proposition 1
There is a very large number of zero-error minima, which are highly degenerate, unlike the local nonzero minima.
Appendix B The Landscape of the Empirical Risk: Visualizing and Analysing the Loss Surface During the Entire Training Process (on CIFAR10)
In the empirical work described below we analyze a classification problem with the cross-entropy loss. Our theoretical analyses in the regression framework provide a lower bound on the number of solutions of the classification problem.
B.1 Experimental Settings
Unless mentioned otherwise, we trained a 6-layer (with the 1st layer being the input) Deep Convolutional Neural Network (DCNN) on CIFAR10. All the layers are 3x3 convolutional layers with stride 2. No pooling is performed. Batch Normalization (BN) is used between hidden layers. The shifting and scaling parameters of BN are not used. No data augmentation is performed, so the training set is fixed (size = 50,000). There are 188,810 parameters in the DCNN.
Multidimensional Scaling. The core of our visualization approach is Multidimensional Scaling (MDS) [7]. We record a large number of intermediate models during several training schemes. Each model is a high-dimensional point, with the number of dimensions being the number of parameters. The strain-based MDS algorithm is applied to these points, and a corresponding set of 2D points is found such that the dissimilarity matrix of the 2D points is as similar as possible to that of the high-dimensional points. One minus the cosine similarity is used as the dissimilarity metric; this is more robust to scaling of the weights, which is usually normalized out by BN. Euclidean distance gives qualitatively similar results.
B.2 Global Visualization of SGD Training Trajectories
We show the optimization trajectories of Stochastic Gradient Descent (SGD), since this is what is used in practice. The SGD trajectories follow the minibatch approximations of the training loss surface. Although the trajectories are noisy due to SGD, the points collected along them provide a visualization of the landscape of the empirical risk.
We train a 6-layer (with the 1st layer being the input) convolutional network on CIFAR10 with stochastic gradient descent (batch size = 100). We divide the training process into 12 stages. In each stage, we perform 8 parallel SGD runs with learning rate 0.01 for 10 epochs, resulting in 8 parallel trajectories, denoted by different colors in each subfigure of Figures 9 and 11. Trajectories 1 to 4 in each stage start from the final model (call it M) of trajectory 1 of the previous stage. Trajectories 5 to 8 in each stage start from a perturbed version of M. The perturbation is performed by adding Gaussian noise to the weights of each layer, with standard deviation equal to 0.01 times the layer's standard deviation. In general, we observe that running any trajectory with SGD again almost always leads to a slightly different model.
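The per-layer perturbation rule can be sketched as follows. The layer shapes below are placeholders, not the memo's architecture:

```python
import numpy as np

# Add Gaussian noise to each layer's weights with standard deviation equal
# to `scale` times that layer's own standard deviation (scale = 0.01 above).
def perturb(layers, scale=0.01, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    return [w + rng.standard_normal(w.shape) * scale * w.std() for w in layers]

rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64, 3, 3)), rng.standard_normal((10, 64))]
perturbed = perturb(layers, rng=rng)
print([round(float(np.abs(p - w).mean()), 4) for p, w in zip(perturbed, layers)])
```

Scaling the noise by each layer's own standard deviation keeps the relative size of the perturbation comparable across layers, regardless of how BN has left the weights scaled.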
Taking the layer-2 weights as an example, we plot the global MDS results of stages 1 to 12 in Figure 8. The detailed parallel trajectories of stages 1 to 3 are plotted separately in Figure 9.
The results of stages beyond 5 are quite cluttered, so we applied a separate MDS to stages 5 to 12 and show the results in Figure 10. The detailed parallel trajectories of stages 5 to 7 are plotted separately in Figure 11.
The weights of different layers tend to produce qualitatively similar results; we show the results for layer 5 in Figures 12 and 13.
B.3 Global Visualization of the Training Loss Surface with Batch Gradient Descent
Next, we visualize the exact training loss surface by training models with Batch Gradient Descent (BGD). We adopt the following procedure: we train a model from scratch using BGD, and at epochs 0, 10, 50 and 200 we create a branch by perturbing the model, adding Gaussian noise to all layers. The standard deviation of the Gaussian noise is a metaparameter; we tried 0.25*S, 0.5*S and 1*S, where S denotes the standard deviation of the weights in each layer.
We also interpolate (by averaging) the models between the branches and the main trajectory, epoch by epoch. The interpolated models are evaluated on the entire training set to obtain their performance (in terms of error percentage).
The main trajectory, the branches and the interpolated models together provide a good visualization of the landscape of the empirical risk.
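The interpolation step is simply a layer-wise convex combination of two models' weights; a sketch, again using hypothetical NumPy weight dictionaries in place of the real DCNN:

```python
import numpy as np

def interpolate_models(model_a, model_b, alpha=0.5):
    """Layer-wise convex combination; alpha=0.5 is the plain average used here."""
    return {name: (1.0 - alpha) * model_a[name] + alpha * model_b[name]
            for name in model_a}

# Two hypothetical models with identical layer shapes.
model_a = {"conv2": np.zeros((3, 3, 16, 16))}
model_b = {"conv2": np.ones((3, 3, 16, 16))}
midpoint = interpolate_models(model_a, model_b)   # all entries equal 0.5
```

Each interpolated model would then be evaluated on the full training set to read off the error along the straight line between the branch and the main trajectory.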
B.4 More Detailed Analyses of Several Local Landscapes (Especially the Flat Global Minima)
After the global visualization of the loss surface, we perform more detailed analyses at several locations of the landscape. In particular, we would like to check whether the global minima are flat. We train a 6-layer (with the 1st layer being the input) DCNN on CIFAR-10 with 60 epochs of SGD (batch size = 100) followed by 400 epochs of Batch Gradient Descent (BGD). BGD is performed to get as close to a global minimum as possible.
Next, we select three models from this learning trajectory:
- the model at SGD epoch 5;
- the model at SGD epoch 30;
- the final model, after 60 epochs of SGD and 400 epochs of BGD.
We perturb the weights of each of these models and retrain them with BGD. This procedure was repeated multiple times for each model to get an idea of the nearby empirical risk landscape.
The results are consistent with the previous theoretical arguments:
- Global minima are easily found, with zero classification error and negligible cross-entropy loss.
- The global minima seem "flat" under perturbations, with zero error corresponding to different parameter values.
- The local landscapes at different levels of error seem very similar. Perturbing a model always leads to a different convergence path, resulting in a similar but distinct model. We tried smaller perturbations and observed the same effect.
B.4.1 Perturbing the Model at SGD Epoch 5
B.4.2 Perturbing the Model at SGD Epoch 30
B.4.3 Perturbing the Final Model (SGD 60 + BGD 400 Epochs)
Appendix C The Landscape of the Empirical Risk: Towards an Intuitive Baseline Model
In this section, we propose a simple baseline model for the landscape of the empirical risk that is consistent with all of our theoretical and experimental findings. For overparametrized DCNNs, our main observations are:
- Theoretically, we show that there are a large number of global minimizers with zero (or small) empirical error, and that these minimizers are degenerate.
- Whether training with Stochastic Gradient Descent (SGD) or Batch Gradient Descent (BGD), a small perturbation of the model almost always leads to a slightly different convergence path. The earlier in training the perturbation occurs, the more different the final model is.
- Interpolating two "nearby" convergence paths leads to another convergence path with similar errors at every epoch. Interpolating two "distant" models leads to raised errors.
- We do not observe local minima, even when training with BGD.
There is a simple model consistent with the above observations. As a first-order characterization, we believe that the landscape of the empirical risk is simply a collection of (hyper) basins, each with a flat global minimum. Illustrations are provided in Figure 6 and Figure 38 (a concrete 3D case).
As shown in Figure 6 and Figure 38, the building block of the landscape is a basin. What does a basin look like in high dimensions? Is there any evidence for this model? One defining property of a hyper-basin is that, as the loss decreases, the hyper-volume of the corresponding parameter region decreases: 1D (a slice of 2D), 2D and 3D examples are shown in Figure 38 (A), (B), (C), respectively. With the same amount of scaling in each dimension, the volume shrinks much faster as the number of dimensions increases: with a linear decrease in each dimension, the hyper-volume decreases as an exponential function of the number of dimensions. With the number of dimensions equal to the number of parameters, the volume shrinks extremely fast. This explains a phenomenon we consistently observe: whenever one perturbs a model by adding significant noise, the loss almost never goes down, and the larger the perturbation, the more the error increases. The reason is simple if the local landscape is a hyper-basin: the volume of a lower-loss region is so small that a random perturbation has almost no chance of landing there, and the larger the perturbation, the more likely it is to land in a much higher-loss region.
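The exponential shrinkage is easy to check numerically. In this sketch, a per-dimension linear shrinkage factor `s` (an arbitrary illustrative value, not a measured quantity) gives a volume ratio of `s**d` in `d` dimensions:

```python
# If each dimension of a low-loss region shrinks by factor s, its volume
# shrinks by s**d. With d equal to the number of parameters (188,810 for
# the DCNN above), the low-loss volume is vanishingly small, so a random
# perturbation almost never lands in it.
s = 0.9  # illustrative per-dimension shrinkage factor
for d in [1, 2, 3, 100, 188_810]:
    print(f"d = {d:7d}  volume ratio = {s ** d:.3e}")
# The ratio underflows to zero long before d reaches the parameter count.
```

At d = 100 the ratio is already below 1e-4, and at d = 188,810 it underflows double precision entirely, which is consistent with the observation that perturbed models essentially never decrease the loss.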
There are, nevertheless, other plausible variants of this model that can explain our experimental findings. In Figure 39, we show one alternative that we call a "basin-fractal". This model is more elegant while still being consistent with most of the above observations. The key difference between simple basins and the basin-fractal is that in a basin-fractal one should be able to find "walls" (raised errors) between two models within the same basin; since it is a fractal, these walls should be present at all levels of error. So far, we have only discovered walls between two models whose trajectories diverged significantly, obtained either by splitting very early in training (as shown in Figure 15 (a) and Figure 17 (a)) or by a very significant perturbation (as shown in Figure 17 (b)). We have not found other significant walls in any of the other perturbation and interpolation experiments. So a first-order model of the landscape is just a collection of simple basins. Nevertheless, we do find the basin-fractal elegant, and perhaps the walls in the low-loss regions are simply too flat to be noticed.
Another surprising finding about the basins is that they seem to be so "smooth" that there are no local minima. Even when training with batch gradient descent, we do not encounter any local minima. When trained long enough with small enough learning rates, one always reaches 0 classification error and negligible cross-entropy loss.
Appendix D Overparametrizing DCNNs Does Not Harm Generalization
See Figure 40.
Appendix E Studying the Flat Global Minima by Perturbations on CIFAR-10 (with Smaller Perturbations)
Zero-error model: We first train a 6-layer (with the 1st layer being the input) convolutional network on CIFAR-10. The model reaches 0 training classification error after 60 epochs of stochastic gradient descent (batch size = 100) and 372 epochs of batch gradient descent (batch size = training set size); we call this the zero-error model. Next, we perturb the weights of this zero-error model and continue training it. This procedure was repeated multiple times to see whether the weights converge to the same point. Note that no data augmentation is performed, so the training set is fixed (size = 50,000).
The procedure is essentially the same as described in Section B.4. The main difference is that the perturbations are smaller: the classification error remains 0 even after the perturbation and throughout the entire subsequent training.