Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes
It is widely observed that deep learning models with learned parameters generalize well, even when equipped with many more parameters than the number of training samples. We systematically investigate the underlying reasons why deep neural networks often generalize well, and reveal the difference between the minima (with the same training error) that generalize well and those that do not. We show that it is the characteristics of the landscape of the loss function that explain the good generalization capability. For the loss landscape of deep networks, the volume of the basins of attraction of good minima dominates over that of poor minima, which guarantees that optimization methods with random initialization converge to good minima. We theoretically justify our findings by analyzing 2-layer neural networks, and show that the low-complexity solutions have a small norm of the Hessian matrix with respect to the model parameters. For deeper networks, extensive numerical evidence supports our arguments.
Recently, deep learning has achieved remarkable success in various application areas. In spite of its powerful modeling capability, we know little about why deep learning works so well from a theoretical perspective. This is widely known as the "black-box" nature of deep learning.
One key observation is that most deep neural networks with learned parameters generalize very well empirically, even when equipped with many more effective parameters than the number of training samples, i.e. with high capacity. According to conventional statistical learning theory (including VC dimension and Rademacher complexity measures), in such over-parameterized and non-convex models the system easily gets stuck in local minima that generalize badly, and some regularization is required to control the generalization error. However, as has been shown empirically, high-capacity neural networks without any regularization can still obtain low-complexity solutions and generalize well; suitable regularization only improves the test error by a small margin. Thus, statistical learning theory cannot explain the generalization ability of deep learning models.
It is worth noting that we call solutions (or minima) with the same small training error "good" or "bad" according to whether their generalization performance, i.e. test accuracy, differs significantly. Take MNIST digit classification as an example: with the same training accuracy, we are curious about the striking difference between the minima achieving high test accuracy and the bad ones performing like a random guess, rather than the small differences among the high-accuracy solutions. Since such bad solutions are rarely observed in normal training procedures, we find them by intentionally adding attacking data to the original training set. To the best of our knowledge, this is the first time that bad solutions (with the same small training error as good ones) are made accessible, see Section 5.1. This directly provides the possibility of comparing good and bad solutions.
In this work, we aim to answer two crucial questions towards understanding the generalization of deep learning:
Question 1: What is the property that distinguishes the good solutions (obtained from optimizers) from those that generalize poorly?
Question 2: Although there exist many solutions with bad generalization performance, why do optimizers with random initialization almost surely converge to minima that generalize well?
We provide reasonable explanations to both questions.
For the first one, we find that local minima with a large volume of attraction basin often lead to good generalization performance, as theoretically studied in Section 3.2 and Section 4. A Hessian-based analysis is proposed for quantifying the volume of the attractor, based on which extensive numerical evidence (see Section 5) reveals the important relationship between generalization performance and the volume of the basin of attraction.
For the second question, we systematically investigate several factors that might endow neural networks with powerful generalization capability, including initialization, optimization, and the characteristics of the landscape of the objective function. Through extensive theoretical and empirical justification, we exclude the effect of optimization, as illustrated in Section 3.1. We conjecture that it is the characteristics of the loss landscape that automatically guarantee that optimization methods with random initialization converge to good minima almost surely, see Section 3.2. This property is independent of the type of optimization method adopted during training.
Our findings dramatically differ from the traditional understanding of generalization in neural networks, which attributes it to particular optimizers (e.g. stochastic gradient descent) or certain regularization techniques (e.g. weight decay or Dropout). These factors can only explain the small differences between good solutions, rather than the significant difference between good and bad minima. We conjecture that the "mystery" of small generalization error is due to the special structure of neural networks. This is justified by theoretically analyzing the landscape of 2-layer neural networks, as shown in Section 4.
Different approaches have been employed to discuss the generalization of neural networks. Some works explored the implicit regularization property of SGD. Another perspective relies on the geometry of the loss function around a global minimum, arguing that solutions that generalize well lie in flat valleys, while bad ones are located in sharp ravines. This observation dates back to early work on flat minima. Recent work adopts this perspective to explain why small-batch SGD often converges to solutions generalizing better than those of large-batch SGD, and other authors proposed to use controllable noise to bias SGD towards flat solutions. Similar works along this path consider discrete networks.
However, the existing research only considers the small differences between good solutions, and thus does not address the two key questions described previously. Through several numerical experiments, it has been suggested that both explicit and implicit regularizers, when well tuned, help reduce the generalization error by a small margin. However, this falls short of explaining the essence of generalization ability.
2 Deep neural networks learn low-complexity solutions
In general, supervised learning involves minimizing the empirical risk, $\min_{f \in \mathcal{H}} L_n(f)$, with
$L_n(f) = \frac{1}{n}\sum_{i=1}^{n} \ell\big(f(x_i), y_i\big),$
where $\{(x_i, y_i)\}_{i=1}^{n}$ denotes the training set of $n$ i.i.d. samples, $\ell$ is the loss function, $\mathcal{H}$ denotes the whole hypothesis space, and the hypothesis $f$ is often parameterized by model parameters $\theta$, as in deep neural networks. According to the central limit theorem (CLT), the generalization error (i.e. the population risk) of one particular learned model $\hat{f}$, $L(\hat{f}) = \mathbb{E}\,\ell\big(\hat{f}(x), y\big)$, can be decomposed into two terms,
$L(\hat{f}) = L_n(\hat{f}) + \big(L(\hat{f}) - L_n(\hat{f})\big),$
where the last term, which by the CLT fluctuates on the scale $\sqrt{\operatorname{Var}[\ell(\hat{f}(x), y)]/n}$, is closely related to the complexity of the solution $\hat{f}$, i.e. the complexity of the input-output mapping $x \mapsto \hat{f}(x)$. So, with the same small training error, simple solutions generalize better than complex ones. This intuitive explanation is known as Occam's razor, is connected with the No Free Lunch theorem, and is also related to the minimum description length (MDL) theory.
2.1 Optimizers converge to low-complexity solutions
In deep learning, the training error can almost always be driven below a negligible threshold. So it is the complexity of the candidate solutions that determines the generalization error, according to the decomposition above. To get some intuition about this, we use fully connected neural networks (FNNs).
From Figure 1, we can easily observe that the optimizer did converge to low-complexity solutions for FNNs with different numbers of layers. Notably, even the deepest network still generalizes well, with far more parameters than the size of the training set. One might think this is because the hypothesis space of FNNs is not as complicated as we imagine. However, this is not true: many high-complexity solutions exist, one of which is shown as the dashed line in Figure 1. The overfitting solution in the figure can be found by intentionally attacking the training set; see Section 5.1 for more details. On the other hand, kernel ridge regression (KRR) inevitably produces overfitting solutions when its capacity is increased. To control the complexity of the solutions for KRR models, we have to resort to some regularization.
2.2 Connection with classical learning theory
Classical statistical learning theory gives a generalization error bound that holds uniformly over the hypothesis space (we present it non-rigorously for simplicity),
$L(f) \le L_n(f) + \text{complexity}(\mathcal{H}) \quad \text{for all } f \in \mathcal{H},$
where the complexity term measures the complexity of the whole hypothesis class $\mathcal{H}$, e.g. via its VC dimension or Rademacher complexity. In spite of its similarity with the decomposition above, we emphasize that this bound is universal for the whole hypothesis space $\mathcal{H}$, and even holds for the worst solution. It is therefore not surprising that it only yields a trivial upper bound for over-parameterized networks. However, in practice, what we really care about is the complexity of the specific solutions found by the optimizers, not the worst case.
As shown in Figure 1 and Section 5.1, for high-capacity deep neural networks, the solution set consists of many solutions with diverse generalization performance, some of which generalize no better than a random guess. Surprisingly, optimizers with random initialization rarely converge to these bad solutions. As a comparison, traditional basis expansion methods do not have this nice property of converging to low-complexity solutions (see the analysis in the Supplementary Materials). Thus, conventional learning theory cannot answer Question 2.
3 Perspective of loss landscape for understanding generalization
The key to understanding the generalization capability of high-capacity deep learning models is to figure out the mechanism that guides the optimizers to converge towards low-complexity solution areas, rather than to pursue tighter bounds on the complexity of the hypothesis class or the solution set. Recalling the optimization dynamics of SGD, there exist only three factors that might endow deep learning models with good generalization performance: (1) the stochasticity introduced by the mini-batch approximation; (2) the specific initialization; (3) the special structure of the loss landscape. After systematic investigation of these three factors, our central finding can be summarized as follows:
The geometric structure of the loss function of deep neural networks guides the optimizers to converge to low-complexity solutions: the volume of the basins of good minima dominates over that of poor ones, so that random initialization lands the starting parameters in good basins with overwhelming probability, leading to almost sure convergence to good solutions.
3.1 SGD is not the magic
The stochastic approximation of the gradient was originally proposed to overcome the computational bottleneck of classic gradient descent. It seems natural to imagine that the noise introduced by this stochastic approximation helps the system escape saddle points and local minima. Some researchers have also suggested that SGD acts as an implicit regularizer, guiding the system to converge to small-norm or otherwise better solutions.
To evaluate these observations, we trained three networks with both full-batch gradient descent and SGD on the MNIST dataset. Here we only use the first part of the training images as our training set, due to the limited GPU RAM available for evaluating the full-batch gradient. The architectures of the networks are described in the Supplementary Materials. As shown in the table below, the models trained by SGD do perform better than those trained by full-batch gradient descent, similar to previously observed results; however, this improvement is limited. We thus conclude that SGD alone cannot explain the good generalization capability of deep neural networks, since the generalization we consider concerns the significant difference between good solutions with high test accuracy and poor ones no better than a random guess.
[Table: test accuracies of the three networks trained with full-batch gradient descent vs. mini-batch SGD; the numeric entries were not recovered.]
3.2 The role of landscape and random initialization
To describe the characteristics of the loss landscape, we borrow the concept of basin of attraction from dynamical systems: the region such that any initial point in it is eventually iterated into the attractor (i.e. the minimum). Consider the basins of attraction of the good and bad minima with respect to the optimization dynamics. The empirical observation (see Table 1) that random initialization converges to good minima indicates that the starting parameters almost never lie in the basins of bad minima.
If the initialization distribution is chosen to be uniform, then the probability of starting in the basins of good minima equals the ratio of their volume to the total volume of all basins. In terms of the Lebesgue measure, the basins of bad minima form an effectively zero-measure set. Accordingly, a random initialization of the parameters will be located in a basin of good minima with overwhelming probability, and consequently optimizers will converge to well-generalizing solutions almost surely. We therefore conjecture that this dominance of the volume ratio is the reasonable answer to Question 2 in Section 1. Now we empirically demonstrate that various random initializations indeed result in convergence to good solutions.
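As a toy illustration of this volume-ratio argument (our own construction, not an experiment from the paper), consider gradient descent on a one-dimensional loss with one wide and one narrow basin: the fraction of uniform random initializations reaching each minimum matches that basin's share of the initialization volume.

```python
import numpy as np

# Toy construction (not from the paper): a 1-D loss with a wide, flat basin
# around x = -3 (loss 0.1*(x+3)^2 for x < 0) and a narrow, sharp basin around
# x = 0.5 (loss 10*(x-0.5)^2 for x >= 0). Uniform initialization on [-6, 2]
# puts probability 6/8 = 0.75 on the wide basin.
def grad(x):
    return np.where(x < 0.0, 0.2 * (x + 3.0), 20.0 * (x - 0.5))

rng = np.random.default_rng(0)
x = rng.uniform(-6.0, 2.0, size=4000)   # random initializations
for _ in range(4000):                   # plain gradient descent
    x -= 0.01 * grad(x)

frac_wide = float(np.mean(x < 0.0))     # fraction converged to the wide basin
print(f"fraction in wide basin: {frac_wide:.3f} (volume ratio 0.750)")
```

With the fixed seed, the empirical fraction matches the 0.75 volume ratio up to sampling noise, illustrating why an optimizer "almost surely" lands in the larger basin when its volume dominates.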
We numerically tried several different strategies of random initialization, including the scaled initialization proposed by He et al., in which the variance is adapted to the number of inputs of each node. No data augmentation or regularization is applied in this experiment. For each network and each method of random initialization, we ran the experiment 6 times. The results are reported in Table 1. It can easily be observed that all cases converge to good solutions under these strategies of random initialization. This partially supports our conjecture that random initialization places the starting parameters in the basins of good minima almost surely.
Table 1: Training / test accuracy (%) under four random initialization strategies, mean ± std over 6 runs.

|Initialization|LeNet (MNIST)|ResNet-18 (CIFAR10)|
|---|---|---|
| |99.92 ± 0.15 / 99.00 ± 0.19|100.00 ± 0.00 / 84.48 ± 0.20|
| |99.82 ± 0.33 / 98.99 ± 0.23|100.00 ± 0.00 / 79.06 ± 0.59|
| |99.97 ± 0.01 / 99.19 ± 0.03|100.00 ± 0.00 / 81.54 ± 0.28|
| |99.97 ± 0.01 / 99.13 ± 0.11|100.00 ± 0.00 / 84.56 ± 0.40|
4 Landscape of 2-layer networks
In this section, we analyze the landscape of 2-layer networks to show that low-complexity solutions are indeed located in "flat" regions with large basins of attraction.
The hypothesis represented by the 2-layer network can be written as
$f(x; \theta) = \sum_{k=1}^{K} a_k\, \sigma(b_k^{\top} x),$
where $K$ is the number of hidden nodes, $\sigma$ denotes the activation function, and $\theta = \{a_k, b_k\}_{k=1}^{K}$ denotes all the parameters. Assume the least-squares loss is used; then fitting becomes minimizing $L_n(\theta) = \frac{1}{2n}\sum_{i=1}^{n} \big(f(x_i; \theta) - y_i\big)^2$. Here the Hessian matrix of $L_n$ can be decomposed into two terms: Fisher information and fitting residual.
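Written out for the least-squares loss (a standard Gauss-Newton computation, reconstructed here to be consistent with the surrounding text; the normalization is our assumption), the decomposition reads:

```latex
\nabla^2_\theta L_n(\theta)
  = \underbrace{\frac{1}{n}\sum_{i=1}^{n}
      \nabla_\theta f(x_i;\theta)\,\nabla_\theta f(x_i;\theta)^{\top}}_{\text{empirical Fisher information}}
  + \underbrace{\frac{1}{n}\sum_{i=1}^{n}
      \bigl(f(x_i;\theta)-y_i\bigr)\,\nabla^2_\theta f(x_i;\theta)}_{\text{fitting residual}}.
```

Near a minimum with small residual, the second term is small, so the Hessian is close to the empirical Fisher information matrix.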
The first term on the R.H.S. of this decomposition is the empirical Fisher information matrix. The corresponding population version is defined by replacing the empirical average with the expectation over the data distribution; for any parameter value, the two are close when $n$ is large.
To measure the complexity of a hypothesis $f$, we choose the norm of the expected input gradient, due to its merit of considering derivatives w.r.t. the input $x$, which reflects the spatial fluctuation of $f$. By the decomposition above, this quantity can be related to the Fisher information matrix w.r.t. the model parameters $\theta$. By the Cauchy-Schwarz inequality, we can then obtain the following theorem relating the complexity of the hypothesis to the Fisher information matrix w.r.t. the model parameters; see the Supplementary Materials for the proof.
The above theorem establishes the relationship between the hypothesis complexity, measured by the norm of the expected input gradient, and the Fisher information matrix. Additionally, the latter is related to the Hessian of $L_n$ through the decomposition above, which leads to the following characterization of the landscape of $L_n$.
The upper bound reveals some remarkable properties of the landscape of $L_n$. The last term can be ignored if the number of training samples is large enough. For ReLU networks, $\sigma'' = 0$ almost everywhere, so the complexity of a small-norm hypothesis is bounded by the Frobenius norm of the Hessian. For general activation functions, this holds for hypotheses with small enough training error.
Without loss of generality, we can constrain the parameters to a small norm ball around zero. Since $f$ is invariant to node-scaling (for example, for positively homogeneous activations such as ReLU, $a_k\,\sigma(b_k^{\top} x) = \lambda a_k\,\sigma(b_k^{\top} x / \lambda)$ for any $\lambda > 0$), this constraint does not shrink the hypothesis space: any hypothesis has at least one parameterization inside the constraint set. Under this constraint, the bound implies that low-complexity solutions lie in areas with small Hessian. This indicates that low-complexity solutions are located in flat and large basins of attraction, while high-complexity solutions lie in sharp and small ones. Therefore, a random initialization tends to produce starting parameters located in a basin of good minima with high probability, giving rise to almost sure convergence to good minima using gradient-based methods.
In practice, we do not explicitly impose such a constraint. What we do is randomly initialize the system close to zero, which implicitly results in the optimizer exploring the landscape in a small vicinity of zero. Within this area, the high-complexity minima that generalize like random guessing have much smaller attraction basins. Therefore, we empirically never observe optimizers converging to these bad solutions, even though they do exist in this area.
It has been argued that properties of the Hessian cannot be directly applied to explain generalization, the reason being that although $f$ is invariant to node-scaling, the Hessian is not. However, in most cases of neural networks, the learned solutions are close to zero (i.e. have small norms) due to the near-zero random initialization, and thus the bound is dominated by the Hessian term. Therefore, it is reasonable to apply the property of the Hessian to explain the generalization ability.
Our theoretical analysis sheds light on the difference between the minima that generalize well and those that generalize badly, answering Question 1 raised in Section 1. This part only provides a rough analysis of hypothesis-dependent generalization for 2-layer neural networks; for deeper networks, a more elaborate analysis is left as future work.
5 Experiments
For deep neural networks, it is difficult to analytically analyze the landscape of $L_n$, so we resort to numerical evidence, as shown in this section.
5.1 Constructing solutions with diverse generalization performance
To numerically demonstrate the properties described above, we need to find a large number of minima with diverse generalization ability, particularly including solutions that perform nearly as badly as random guessing on the test set. However, this is not easy when relying only on the training set, since random initialization almost always converges to solutions that generalize well. To overcome this difficulty, we design an extra attack dataset to "fool" the networks into producing bad generalization performance.
For each model, we prepare three datasets representing the training, test, and attack sets, respectively. All the data points in the attack set are intentionally assigned wrong labels. Then, instead of the original problem, we minimize the training loss plus $\gamma$ times the loss on the wrongly-labeled attack set.
Because of the high capacity of over-parameterized neural networks, we can obtain various solutions achieving near-zero loss on both the training and attack sets, and thus the same small training error as good solutions. But due to the attack term, performance on the test set is harmed severely. In practice, by tuning the hyperparameter $\gamma$ and the size of the attack set, a series of solutions can be found whose generalization error ranges up to very bad.
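A minimal sketch of this attack construction (our own toy setup with an over-parameterized random-feature model and squared loss; all names and data are illustrative, not the paper's architecture):

```python
import numpy as np

# Toy sketch of the attack objective (our construction, not the paper's setup):
#   minimize  L_train(theta) + gamma * L_attack(theta),
# where the attack set carries intentionally flipped labels. With an
# over-parameterized random-feature model, both terms can be driven to ~0,
# yielding zero training error but damaged test performance.
rng = np.random.default_rng(0)
d, m = 2, 400                                    # input dim, number of features
W = rng.standard_normal((d, m)) / np.sqrt(d)
phi = lambda X: np.tanh(X @ W)                   # fixed random-feature map

X_tr = rng.standard_normal((40, d)); y_tr = np.sign(X_tr[:, 0])   # true labels
X_at = rng.standard_normal((40, d)); y_at = -np.sign(X_at[:, 0])  # wrong labels
X_te = rng.standard_normal((500, d)); y_te = np.sign(X_te[:, 0])

gamma = 1.0
# With squared loss, the combined objective is a weighted least-squares
# problem, solvable in closed form by stacking the two terms.
A = np.vstack([phi(X_tr), np.sqrt(gamma) * phi(X_at)])
b = np.concatenate([y_tr, np.sqrt(gamma) * y_at])
theta, *_ = np.linalg.lstsq(A, b, rcond=None)

acc = lambda X, y: float(np.mean(np.sign(phi(X) @ theta) == y))
print("train acc:", acc(X_tr, y_tr), "attack-label acc:", acc(X_at, y_at),
      "test acc:", acc(X_te, y_te))
```

The solution interpolates both the correctly and the wrongly labeled points, so training accuracy is perfect while test accuracy degrades: exactly the kind of "bad minimum with small training error" the attack set is designed to expose.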
5.2 Spectral analysis of Hessian matrices
Since the volume of an attraction basin is a global quantity, it is hard to estimate directly. Fortunately, a large basin often implies that the local valley around the attractor is flat, and vice versa. Similar ideas have been explored before; however, previous numerical experiments only considered good solutions. We investigate the difference between good and bad solutions, to understand why optimizers with random initialization rarely converge to the bad ones.
Figure 2 shows an example of the spectrum of the Hessian of a small CNN around minima. The following observations are not unique to this model; they are shared across different models and datasets. (1) There are some negative eigenvalues, since the optimizer is terminated before it converges to a strict minimum. (2) Most of the eigenvalues concentrate around zero. In accordance with previous work suggesting that good solutions form a connected manifold (allowing small energy barriers), we conjecture that the large number of zero eigenvalues might imply that the dimension of this manifold is large: the eigenvectors of the zero eigenvalues span the tangent space of the attractor manifold, while the eigenvectors of the remaining large eigenvalues correspond to directions away from it. We leave the justification of this conjecture as future work. (3) The rightmost plot shows that bad solutions have much larger eigenvalues than good ones. This indicates that good solutions lie in wide valleys while bad solutions sit in narrow ones, consistent with our analysis of 2-layer networks.
Based on the above analysis of the spectrum of the Hessian, it is natural to use the product of the top-$k$ positive eigenvalues to quantify the inverse volume of the attractor. For a given Hessian matrix around a solution, we utilize the logarithm of the product of the top-$k$ eigenvalues, $\sum_{i=1}^{k} \log \lambda_i(H)$, to approximate the inverse volume of the basin of attraction.
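A minimal sketch of this proxy on synthetic Hessians (not the paper's networks; the choice of $k$ and the eigenvalue ranges are our assumptions):

```python
import numpy as np

# Sketch of the inverse-volume proxy: the log of the product of the top-k
# positive Hessian eigenvalues, i.e. the sum of their logs. Flat (wide) basins
# give small values; sharp (narrow) basins give large ones. The two Hessians
# below are synthetic, built with prescribed eigenvalue ranges.
def log_inverse_volume(H, k=10):
    eigs = np.linalg.eigvalsh(H)             # eigenvalues in ascending order
    top = eigs[-k:]
    top = top[top > 0]                       # keep positive eigenvalues only
    return float(np.sum(np.log(top)))

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))          # random orthogonal basis

H_wide = Q @ np.diag(rng.uniform(0.01, 0.1, 50)) @ Q.T      # flat valley
H_narrow = Q @ np.diag(rng.uniform(10.0, 100.0, 50)) @ Q.T  # sharp valley

v_wide = log_inverse_volume(H_wide)
v_narrow = log_inverse_volume(H_narrow)
print(f"log-proxy wide: {v_wide:.2f}, narrow: {v_narrow:.2f}")
```

The sharper Hessian yields a much larger log-proxy, i.e. a much smaller approximate basin volume, which is the ordering the text relies on.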
5.3 Numerical evidence for deep neural networks
Small neural networks. We train dozens of mLeNets on two datasets, MNIST and SVHN. For each experiment, the first part of the training data is used as our new training set, while the rest is used as the attack set to help generate diverse solutions according to the attack scheme of Section 5.1. Different optimizers are adopted to increase the diversity of the solutions.
Large neural networks. A ResNet-32 is used to fit the CIFAR-10 dataset, where the first part of the training set is selected as our new training data, with the remainder as the attack set. Performance is evaluated on the whole test set. No regularization or data augmentation is used. This model has far more parameters than the number of training samples. Due to the prohibitive cost of computing the full Hessian spectrum, we employ a statistical estimate of the Frobenius norm of the Hessian, which, although not perfect, is computationally feasible for large networks.
For any matrix $A$, the squared Frobenius norm can be estimated by $\|A\|_F^2 = \mathbb{E}_{v \sim \mathcal{N}(0, I)} \|A v\|^2$. Therefore, in practice, after replacing the Hessian-vector product with a finite difference of gradients, we have the following estimator:
$\|\nabla^2 L_n(\theta)\|_F^2 \approx \frac{1}{m}\sum_{j=1}^{m} \Big\| \frac{\nabla L_n(\theta + \epsilon v_j) - \nabla L_n(\theta)}{\epsilon} \Big\|^2,$
where $v_1, \dots, v_m$ are i.i.d. random samples drawn from $\mathcal{N}(0, I)$. This estimate of the squared Frobenius norm of the Hessian becomes exact as $m \to \infty$ and $\epsilon \to 0$. In this experiment, $m$ and $\epsilon$ are fixed to moderate values.
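A minimal numpy sketch of this estimator, checked on a quadratic loss whose Hessian is known exactly (for a quadratic, the finite difference of gradients yields $Hv$ exactly; the values of $m$ and $\epsilon$ here are our own choices):

```python
import numpy as np

# Randomized Frobenius-norm estimator:
#   ||H||_F^2 = E_{v ~ N(0, I)} ||H v||^2,
# with the Hessian-vector product H v replaced by a finite difference of
# gradients: H v ~= (grad(theta + eps*v) - grad(theta)) / eps.
rng = np.random.default_rng(0)
d = 20
H_true = rng.standard_normal((d, d))
H_true = (H_true + H_true.T) / 2                 # a known symmetric Hessian
grad = lambda theta: H_true @ theta              # gradient of 0.5 * theta' H theta

def estimate_fro2(grad, theta, m=5000, eps=1e-3):
    total = 0.0
    for _ in range(m):
        v = rng.standard_normal(theta.shape)
        hv = (grad(theta + eps * v) - grad(theta)) / eps   # finite difference
        total += float(hv @ hv)
    return total / m

theta = rng.standard_normal(d)
est = estimate_fro2(grad, theta)
exact = float(np.sum(H_true ** 2))
print(f"estimated ||H||_F^2: {est:.2f}, exact: {exact:.2f}")
```

Only gradient evaluations are needed, which is what makes the estimator feasible for networks whose full Hessian is too large to form.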
The numerical results for both small and large networks are shown in Figure 3, revealing the relationship between test accuracy and the inverse volume of the basins. We can easily observe the following.
Good minima are located in very large valleys, while bad ones sit in small valleys. Estimated via the eigenvalue product or the Frobenius norm above, the relative volume of the bad basins can be exponentially small due to the high dimensionality. This evidently supports our finding that the volume of the basins of good minima dominates over those generalizing poorly, leading to optimization methods with random initialization converging to good solutions almost surely.
There exists some variance in the relationship between generalization error and basin volume. Also, it is almost impossible to use this Hessian-based quantity to distinguish minima with equivalent generalization performance. We conjecture that the reason is that the Hessian-based characterization of basin volume is only a rough estimate; a better non-local quantity is necessary.
6 Conclusion
In this work, we attempt to answer two important questions towards understanding the generalization of deep learning: what is the difference between the minima that generalize well and poorly, and why do training methods converge to good minima with overwhelming probability. We analyze 2-layer networks to show that low-complexity solutions have a small norm of the Hessian matrix w.r.t. the model parameters, which directly reveals the difference between good and bad minima. We also investigate this property for deeper neural networks through various numerical experiments, though theoretical justification remains a challenge. This property of the Hessian implies that the volume of the basins of good minima dominates over that of poor ones, leading to almost sure convergence to good solutions, as demonstrated by various empirical results.
Appendix A: Model details
The following list gives the details of the models used in this paper:
A modified LeNet (mLeNet). This model is mainly used to compute Hessians. Since we need to compute a large number of Hessian matrices, the last fully connected layer of LeNet is replaced by a convolutional layer plus a global average pooling layer, keeping the model small enough for repeated Hessian computation.
A small Network in Network (NIN) model. This model is used for the full-batch gradient descent experiments. Because of the limited GPU memory, the numbers of features of the three NIN blocks are reduced.
Standard model (batch normalization is used).
Appendix B: Landscape of convex models
For many shallow models, such as basis-function models and kernel methods, the good and bad solutions are not distinguishable using only the information of the empirical risk $\hat{L}(\theta)$. For instance, consider the linear model
$f_\theta(x) = \theta^{\top} \phi(x),$
where $\theta$ denotes the parameters and $\phi$ is the feature map. The second-order derivative can be written as
$\nabla^2_\theta \hat{L}(\theta) = \frac{1}{n}\sum_{i=1}^{n} \ell''\big(f_\theta(x_i), y_i\big)\, \phi(x_i)\phi(x_i)^{\top}.$
For most loss functions used in practice, $\ell''$ is a constant at zero loss, e.g. the $\ell_2$ loss, the hinge loss, etc. It follows that around the global minima, the loss surface carries the same first-order and second-order information: the gradient vanishes and the Hessian is identical. This directly implies that optimizers by themselves are unable to find solutions that generalize better, due to the indistinguishability between different global minima. Thus, in order to steer the optimizers towards low-complexity solutions, the only way is to introduce some proper regularization to shrink the hypothesis space.
Different from such models, the loss surface of deep neural networks has minima that are distinguishable from each other; see the detailed analysis in Section 3.2 and Section 4. This possibility stems from the non-convexity of neural networks, which makes it possible to separate the good and bad solutions into different valleys, distinguished by the information contained in the empirical risk.
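This indistinguishability can be checked concretely for the squared loss (a minimal sketch of our own construction): every interpolating minimum of an over-parameterized linear model shares exactly the same, constant Hessian.

```python
import numpy as np

# For squared loss with a linear model f(x) = <theta, phi(x)>, the Hessian is
# (2/n) * Phi^T Phi, independent of theta: all global minima look identical to
# any curvature-based (second-order) criterion.
rng = np.random.default_rng(0)
n, p = 20, 50                                  # over-parameterized: p > n
Phi = rng.standard_normal((n, p))              # feature matrix, rows phi(x_i)
y = Phi @ rng.standard_normal(p)

# Two different interpolating minima: the min-norm solution, and the same
# solution shifted along the null space of Phi.
theta1, *_ = np.linalg.lstsq(Phi, y, rcond=None)
null_dir = np.linalg.svd(Phi)[2][-1]           # right-singular vector with Phi @ v ~ 0
theta2 = theta1 + 5.0 * null_dir

H = (2.0 / n) * Phi.T @ Phi                    # the Hessian, same at every theta

res1 = float(np.max(np.abs(Phi @ theta1 - y)))
res2 = float(np.max(np.abs(Phi @ theta2 - y)))
print("train residuals:", res1, res2, "| Hessian is theta-independent")
```

Both parameter vectors attain zero training error, yet no gradient or curvature information at the minima can tell them apart, which is exactly why a convex model needs explicit regularization to select among them.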
Appendix C: Proof details of theoretical results
Proof of theorem
Because and , so
Proof of corollary
According to central limit theorem, we have
Since , we have
Next we assume the second-order derivative of the activation function is bounded, i.e. $\sup_t |\sigma''(t)|$ is finite. This assumption is satisfied by commonly used activation functions, like sigmoid, tanh, ReLU, etc. In particular, for ReLU, $\sigma'' = 0$ almost everywhere. So we have,
- ReLU is used as our activation function throughout all the experiments in this paper.
References

- C. Baldassi, C. Borgs, J. Chayes, A. Ingrosso, C. Lucibello, L. Saglietti, and R. Zecchina. Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic algorithmic schemes. Proceedings of the National Academy of Sciences, 113(48):E7655–E7662, 2016.
- C. Baldassi, A. Ingrosso, C. Lucibello, L. Saglietti, and R. Zecchina. Subdominant dense clusters allow for simple learning and high computational performance in neural networks with discrete synapses. Physical Review Letters, 115(12):128101, 2015.
- P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463–482, 2002.
- P. Chaudhari, A. Choromanska, S. Soatto, and Y. LeCun. Entropy-SGD: Biasing gradient descent into wide valleys. In International Conference on Learning Representations (ICLR), 2017.
- L. Dinh, R. Pascanu, S. Bengio, and Y. Bengio. Sharp minima can generalize for deep nets. arXiv preprint arXiv:1703.04933, 2017.
- C. D. Freeman and J. Bruna. Topology and geometry of half-rectified network optimization. In International Conference on Learning Representations (ICLR), 2017.
- M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. In Proceedings of the 33rd International Conference on Machine Learning, pages 1225–1234, 2016.
- K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034, 2015.
- M. W. Hirsch, S. Smale, and R. L. Devaney. Differential Equations, Dynamical Systems, and an Introduction to Chaos. Academic Press, 2012.
- S. Hochreiter and J. Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.
- S. Hochreiter and J. Schmidhuber. Simplifying neural nets by discovering flat minima. In Advances in Neural Information Processing Systems, pages 529–536, 1995.
- N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. T. P. Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In International Conference on Learning Representations (ICLR), 2017.
- Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
- M. Lin, Q. Chen, and S. Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
- I. J. Myung. The importance of complexity in model selection. Journal of Mathematical Psychology, 44(1):190–204, 2000.
- V. N. Vapnik. Statistical Learning Theory, volume 1. Wiley, New York, 1998.
- B. Neyshabur, R. R. Salakhutdinov, and N. Srebro. Path-SGD: Path-normalized optimization in deep neural networks. In Advances in Neural Information Processing Systems, pages 2422–2430, 2015.
- J. Rissanen. Information and Complexity in Statistical Modeling. Springer Science & Business Media, 2007.
- L. Sagun, L. Bottou, and Y. LeCun. Singularity of the Hessian in deep learning. arXiv preprint arXiv:1611.07476, 2016.
- S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
- N. Ye, Z. Zhu, and R. K. Mantiuk. Langevin dynamics with continuous tempering for high-dimensional non-convex optimization. arXiv preprint arXiv:1703.04379, 2017.
- C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. In International Conference on Learning Representations (ICLR), 2017.