Empirical Analysis of the Hessian of Over-Parametrized Neural Networks

Abstract

We study the properties of common loss surfaces through their Hessian matrix. In particular, in the context of deep learning, we empirically show that the spectrum of the Hessian is composed of two parts: (1) the bulk centered near zero, and (2) outliers away from the bulk. We present numerical evidence and mathematical justifications for the following conjectures laid out in [23]: fixing the data and increasing the number of parameters merely scales the bulk of the spectrum; fixing the dimension and changing the data (for instance adding more clusters or making the data less separable) only affects the outliers. We believe that our observations have striking implications for non-convex optimization in high dimensions. First, the flatness of such landscapes (which can be measured by the singularity of the Hessian) implies that classical notions of basins of attraction may be quite misleading, and that the discussion of wide versus narrow basins may need a new perspective around over-parametrization and redundancy, which can create large connected components at the bottom of the landscape. Second, the dependence of the small number of large eigenvalues on the data distribution can be linked to the spectrum of the covariance matrix of gradients of model outputs. With this in mind, we may reevaluate the connections within the data-architecture-algorithm framework of a model, hoping to shed light on the geometry of high-dimensional and non-convex spaces in modern applications. In particular, we present a case that links the two observations: small and large batch gradient descent appear to converge to different basins of attraction, but we show that they are in fact connected through their flat region and so belong to the same basin.

1 Introduction

In this paper we study the geometry of the loss surface of supervised learning problems through the lens of their second order properties. To introduce the framework, suppose we are given data in the form of input-label pairs $\{(x_i, y_i)\}_{i=1}^N$, where $x_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$, sampled i.i.d. from a possibly unknown distribution $\mathcal{P}$, and a model $f(x; w)$ that is parametrized by $w \in \mathbb{R}^M$; so that the number of examples is $N$ and the number of parameters of the system is $M$. The supervised learning process aims to solve for $w$ so that the predictor satisfies $f(x; w) \approx y$. To make the '$\approx$' precise, we use a non-negative loss function $\ell(f(x; w), y)$ that measures how close the predictor is to the true label. We wish to find a parameter $w^*$ such that $w^* = \arg\min_w \mathcal{L}(w)$, where

$$\mathcal{L}(w) = \frac{1}{N} \sum_{i=1}^{N} \ell\big(f(x_i; w), y_i\big). \qquad (1)$$

In particular, one is curious about the relationship between the empirical loss $\mathcal{L}(w)$ and its expectation $\mathbb{E}_{(x,y)\sim\mathcal{P}}\,\ell(f(x; w), y)$. By the law of large numbers, at a given point $w$, the former converges to the latter almost surely as $N \to \infty$ for fixed $M$. However, in modern applications, especially in deep learning, the number of parameters $M$ is comparable to the number of examples $N$ (if not much larger), and the behaviour of the two quantities may be drastically different (for a recent analysis on provable estimates see [18]).

A classical algorithm to find $w^*$ is gradient descent (GD), in which the optimization is carried out using the gradient of $\mathcal{L}$. A new parameter is found iteratively by taking a step in the direction of the negative gradient, whose size is determined by a step size chosen via line-search minimization. Two problems emerge: (1) the gradient computation can be expensive, and (2) the line search can be expensive. More involved algorithms, such as Newton-type methods, make use of second order information [19]. Under sufficient regularity conditions we may write

$$\mathcal{L}(w + \Delta w) \approx \mathcal{L}(w) + \nabla \mathcal{L}(w)^\top \Delta w + \frac{1}{2}\, \Delta w^\top \nabla^2 \mathcal{L}(w)\, \Delta w.$$

A third problem emerges beyond the even more expensive computational cost of the Hessian: (3) most methods require the Hessian to be non-degenerate to a certain extent.

When the gradients are computationally expensive, one can alternatively use the stochastic version (SGD), which replaces the above gradient with the gradient of the average loss over a subset of the data (such a subset is called the mini-batch; see [6] for a classical reference). The benefit of SGD in terms of wall-clock time is obvious, and GD may be impractical for many problems. In any case, the stochastic gradient can be seen as an approximation to the true gradient, and hence it is important to understand how the two directions are related in the parameter space. Therefore, the discussion around the geometry of the loss surface can be enlightening when comparing the two algorithms: does SGD locate solutions of a different nature than GD? Do they follow different paths? If so, which one is better in terms of generalization performance?
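To make the comparison concrete, here is a minimal sketch (ours, on a toy least-squares problem with illustrative sizes, not an experiment from this paper) contrasting a full-batch GD update with a mini-batch SGD update:

```python
# A minimal sketch (illustrative sizes): full-batch GD vs. mini-batch SGD
# on a least-squares loss.
import numpy as np

rng = np.random.default_rng(0)
N, d = 1000, 20
X = rng.normal(size=(N, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=N)

def grad(w, idx):
    # Gradient of the mean-squared loss over the examples indexed by idx.
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)

eta, batch = 0.05, 32
w_gd, w_sgd = np.zeros(d), np.zeros(d)
for step in range(500):
    w_gd -= eta * grad(w_gd, np.arange(N))                           # GD: full gradient
    w_sgd -= eta * grad(w_sgd, rng.choice(N, batch, replace=False))  # SGD: mini-batch gradient
print(np.mean((X @ w_gd - y) ** 2), np.mean((X @ w_sgd - y) ** 2))
```

Both updates use the same step size; only the subset of examples over which the gradient is averaged differs.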

For the second problem of expensive line search, there are two classical solutions: using a small, constant step size, or scheduling the step size according to a certain rule. In practice, in the context of deep learning, the values for both approaches are determined heuristically, by trial and error. More involved optimal step size choices involve some kind of second order information that can be obtained from the Hessian of the loss function [24]. From a computational point of view, obtaining the full Hessian is extremely expensive; however, obtaining some of its largest and smallest eigenvalues and eigenvectors is not. Is it enough to know only those eigenvalues and eigenvectors that are large in magnitude? How do they change through training? Would such a method work for SGD as well as it would for GD?
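As an aside on cost, the following is a minimal sketch (our assumption about a possible implementation, not the paper's method) of estimating the largest-magnitude Hessian eigenvalue by power iteration on Hessian-vector products; each product costs roughly one extra backward pass, so the full Hessian is never formed:

```python
# Sketch: largest-magnitude Hessian eigenvalue via power iteration on
# Hessian-vector products computed with automatic differentiation.
import torch

def top_hessian_eigenvalue(loss, params, iters=50):
    # loss: scalar tensor built from `params`; params: tensors with requires_grad=True.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    for _ in range(iters):
        gv = sum((g * u).sum() for g, u in zip(grads, v))        # <grad, v>
        hv = torch.autograd.grad(gv, params, retain_graph=True)  # Hessian-vector product
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [h / norm for h in hv]                               # renormalize the iterate
    gv = sum((g * u).sum() for g, u in zip(grads, v))
    hv = torch.autograd.grad(gv, params, retain_graph=True)
    return sum((h * u).sum() for h, u in zip(hv, v)).item()      # Rayleigh quotient v^T H v

# Illustrative usage (names are hypothetical):
# eig = top_hessian_eigenvalue(loss_fn(model(x), y), list(model.parameters()))
```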

For the third problem, let us look at the Hessian a little more closely. A critical point is a point $w$ at which the gradient vanishes, $\nabla \mathcal{L}(w) = 0$, and its nature can be determined by looking at the signs of the eigenvalues of the Hessian matrix: if all eigenvalues are positive, the point is called a local minimum; if $k$ of them are negative and the rest are positive, it is called a saddle point of index $k$. At the critical point, the eigenvectors indicate the directions in which the value of the function locally changes, and the changes are proportional to the corresponding (signed) eigenvalue. Under sufficient regularity conditions, it is rather straightforward to show that gradient-based methods converge to points where the gradient is zero. Recently, [15] showed that they indeed converge to minimizers. However, a significant and untested assumption used to establish these convergence results is that the loss is non-degenerate. A relaxation of the above convergence to the case of non-isolated critical points can be found in [20]. What about the critical points of machine learning loss functions? Do they satisfy the non-degeneracy assumptions? If they don't, can we still apply the results of provable theorems to gain intuition?
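A tiny illustration of this classification rule (the Hessian below is made up for the example):

```python
# Classify a critical point by the signs of the Hessian eigenvalues.
import numpy as np

H = np.diag([2.0, 0.5, -0.3])                     # Hessian at a hypothetical critical point
index = int(np.sum(np.linalg.eigvalsh(H) < 0))    # number of negative eigenvalues
print("local minimum" if index == 0 else f"saddle point of index {index}")
```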

1.1 A historical overview

One of the first instances of the comparison of GD and SGD in the context of neural networks dates back to the late eighties and early nineties. [5] points out that large eigenvalues of the Hessian of the loss can create the illusion of a local minimum where GD can get stuck, and further claims that the inherent noise in SGD may help escape such obstacles. The origin of this observation, along with numerical justifications, is due to [7]. However, given the computational limits of the time, these experiments relied on low-dimensional neural networks with few hidden units. The picture may be drastically different in higher dimensions. In fact, provable results in statistical physics tell us that, for certain real-valued non-convex functions, the local minima concentrate at an error level near that of the global minima. A theoretical review on this can be found in [1], while [22] and [3] provide an experimental simulation as well as a numerical study for neural networks. They notably find that high-error local minima traps do not appear when the model is over-parametrized.

These concentration results can help explain why we find that the solutions attained by different optimizers like GD and SGD often have comparable training accuracies. However, while these methods find comparable solutions in terms of training error, there is no guarantee that they generalize equally well. A recent work in this direction compares the generalization performance of small batch and large batch methods [13]. It demonstrates that the large batch methods consistently generalize a little bit worse, even when they have similar training accuracies. The paper further observes that the basins found by small batch methods are wider, thereby contributing to the claim that wide basins, as opposed to narrow ones, generalize better 1.

The final part of the historical account is devoted to the observation of flatness of the landscape in neural networks and its consequences, through the lens of the Hessian. In the early nineties, [12] remarks that there are parts of the landscape in which the weights can be perturbed without significantly changing the loss value. Such regions at the bottom of the landscape are called flat minima, which can be considered another way of describing very wide minima. It is further noted that such minima have better generalization properties, and a new loss function that makes use of the Hessian of the loss and targets flat minima is proposed. The computational complexity issues were addressed using the $\mathcal{R}$-operator of [21]; however, the new loss requires all the entries of the Hessian, and even with the $\mathcal{R}$-operator it is unimaginably slow for today's large networks. More recently, an exact numerical calculation of the Hessian has been carried out by [23]. It turns out that the Hessian is degenerate at any given point, including the randomly chosen initial point, and that its spectrum is composed of two parts: (1) the bulk, and (2) the outliers. The bulk is mostly full of zero eigenvalues with a fast-decaying tail, and the outliers are only a handful, and they appear to depend on the data. This implies that, locally, most directions in the weight space are flat and lead to little or no change in the loss value, except for the directions of the eigenvectors that correspond to the large eigenvalues of the Hessian.

In this work, we study the problems of the third class and discuss their impact on the first two.

1.2 Overview of results

In an attempt to develop an understanding of some of the classical and modern problems described above, and hopefully to gain further insight into non-convex optimization over high-dimensional spaces, we study the loss function through its second order properties, namely the spectrum of its Hessian matrix.

Figure 1: Spectrum at a random point, and at the end of GD when the random point is taken as the initial point.

The first observation is that the Hessian is not just slightly singular but extremely so, with almost all of its eigenvalues at or near zero. And this observation seems to hold even at random points of the space. For example, consider a small artificial neural network with two layers trained using gradient descent. Figure 1 shows the full spectrum of the Hessian at the random initial point of training and at the final point of training. Intuitively, this kind of singularity should be expected to emerge if (1) the data is essentially low dimensional but lives in a larger ambient space, and (2) the model is over-parametrized. In the context of deep learning, over-parametrization is a key feature of the model, and in most cases the data is expected to lie essentially on a low dimensional manifold.
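A minimal sketch (ours, with illustrative sizes, not the network of Figure 1) of how such a full spectrum can be computed exactly for a small two-layer network at a random point of the parameter space, assuming PyTorch is available:

```python
# Sketch: exact Hessian spectrum of a tiny two-layer network with MSE loss.
import math
import torch

torch.manual_seed(0)
N, d, h = 200, 10, 8                        # samples, input dim, hidden units
X = torch.randn(N, d)
y = torch.randn(N, 1)

shapes = [(h, d), (h,), (1, h), (1,)]       # W1, b1, W2, b2
M = sum(math.prod(s) for s in shapes)

def loss_of(w_flat):
    # Unpack the flat parameter vector and evaluate the mean-squared loss.
    chunks, i = [], 0
    for s in shapes:
        n = math.prod(s)
        chunks.append(w_flat[i:i + n].reshape(s))
        i += n
    W1, b1, W2, b2 = chunks
    out = torch.tanh(X @ W1.T + b1) @ W2.T + b2
    return ((out - y) ** 2).mean()

w0 = 0.1 * torch.randn(M)                   # a random point of the landscape
H = torch.autograd.functional.hessian(loss_of, w0)   # exact M x M Hessian
eigs = torch.linalg.eigvalsh(H)
print(f"{M} eigenvalues, fraction with |lambda| < 1e-3:",
      (eigs.abs() < 1e-3).float().mean().item())
```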

We study the Hessian using a decomposition into a sum of two matrices, where the first is the sample covariance matrix of the gradients of the model outputs and the second is a weighted sum of the Hessians of the model outputs. We argue that the second term can be ignored as training progresses, which leaves us with the covariance term. This connection to the covariance of the gradients is then used to explain why having more parameters than samples results in degenerate Hessian matrices. Finally, we empirically examine the spectrum to uncover intricate dependencies within the data-architecture-algorithm triangle. We begin by discussing the decomposition of the Hessian.

2 Generalized Gauss-Newton decomposition of the Hessian

In order to study its spectrum, we describe how the Hessian can be decomposed into two meaningful matrices [14]. Suppose the loss for a given example is a composition of two functions: the model function $f(x_i; w)$, the real-valued output of the network, which depends on the parameters $w$; and an outer loss $\ell(\cdot, y_i)$, which is a convex function. Here, the index $i$ refers to the given example. Examples include regression, where the mean-square loss is composed with a real-valued output function, and classification, where the negative log-likelihood loss is composed with the dot product of the output of a softmax layer and the label vector.

For ease of reading, we indicate the dependence of the functions $f$ and $\ell$ on the data by the index $i$ of the example, or omit it altogether when it is not necessary, and unless noted otherwise the gradients are taken with respect to $w$. The gradient and the Hessian of the loss for a given example are given by

$$\nabla \ell(f_i, y_i) = \ell'(f_i, y_i)\, \nabla f_i, \qquad \nabla^2 \ell(f_i, y_i) = \ell''(f_i, y_i)\, \nabla f_i \nabla f_i^\top + \ell'(f_i, y_i)\, \nabla^2 f_i, \qquad (2)$$

where $\ell'_i$ and $\ell''_i$ denote the first and second derivatives of $\ell$ with respect to its first argument, evaluated at $(f_i, y_i)$, and $\top$ denotes the transpose operation (here the gradient is a column vector). Note that since $\ell$ is convex, $\ell'' \ge 0$ and we can take its square root, which allows us to rewrite the Hessian of the full loss as follows:

$$\nabla^2 \mathcal{L}(w) = \frac{1}{N} \sum_{i=1}^{N} \Big(\sqrt{\ell''_i}\, \nabla f_i\Big) \Big(\sqrt{\ell''_i}\, \nabla f_i\Big)^\top + \frac{1}{N} \sum_{i=1}^{N} \ell'_i\, \nabla^2 f_i.$$

Next, we argue how we can ignore the second term in this decomposition.

2.1 Hessian at the bottom of the landscape

In general, it isn't straightforward to characterize the spectrum of a sum of matrices by looking at the summands individually. Nevertheless, looking at the decomposition, we can still infer what we should expect. At a point close to a local minimum, the average gradient is close to zero; however, this doesn't necessarily imply that the gradients for individual samples are also zero. If the residual derivatives $\ell'_i$ and the model Hessians $\nabla^2 f_i$ are not correlated, the second term averages out and can be ignored, and so the Hessian can be approximated by the first term:

$$\nabla^2 \mathcal{L}(w) \approx \frac{1}{N} \sum_{i=1}^{N} \Big(\sqrt{\ell''_i}\, \nabla f_i\Big) \Big(\sqrt{\ell''_i}\, \nabla f_i\Big)^\top. \qquad (3)$$

Here, Equation 3 is the sum of $N$ rank-one matrices (the outer products of the gradients of $f$, each multiplied by a non-negative number); therefore, the sum can be written as $\frac{1}{N} A A^\top$, where $A$ is an $M \times N$ matrix whose columns are the scaled gradients $\sqrt{\ell''_i}\, \nabla f_i$. Immediately, this implies that when $M > N$ there are at least $M - N$ trivial eigenvalues of the Hessian at zero 2.
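A small numerical sketch of this rank argument (the sizes are illustrative and random vectors stand in for the scaled gradients of $f$):

```python
# With M parameters and N samples, (1/N) A A^T for an M x N matrix A has at
# most N nonzero eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
M, N = 500, 60                       # parameters >> samples
A = rng.normal(size=(M, N))          # column i stands in for sqrt(l''_i) * grad_w f(x_i; w)
G = A @ A.T / N                      # first term of the decomposition
eigs = np.linalg.eigvalsh(G)
print("numerically nonzero eigenvalues:", int(np.sum(eigs > 1e-8)), "out of", M)
```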

3 Implications of flatness for the geometry of the energy landscape

In this section we focus on the implications of the above observations for training. We discuss how the data, the model, and the algorithm affect the spectrum of the Hessian of the loss.

3.1 The relation between data and eigenvalues

In many cases of practical interest, the data contains redundancies. In such cases, the number of non-trivial eigenvalues could be even smaller than $N$. For instance, if one deals with a classification problem where the training data has $k$ classes, each with relatively small deviation around its center, it is reasonable to expect on the order of $k$ non-trivial eigenvalues of the Hessian (assuming the approximation above is valid).

Figure 2: PCA of the synthetic input data for various numbers of classes.

To test this idea, we used a feed-forward neural network with a 100-dimensional input layer, two hidden layers of 30 hidden units each, and a $k$-dimensional output layer combined with softmax for $k$-class classification. We sampled $k$ random Gaussian clusters in the input space and normalized the data globally. Then we carried out the training using SGD on the ReLU network for the following numbers of clusters: $k \in \{2, 5, 10, 20, 50\}$. We look at the data through the lens of principal component analysis (PCA), and alongside it we calculate the spectrum of the Hessian of the loss function.

Figure 3: Scree plot of eigenvalues with a visible gap. The number of crosses above the gap matches the number of categories.

Even though the number of large principal components does not match the number of clusters exactly in Figure 2, the number of large eigenvalues above the gap in Figure 3 matches exactly the number of classes in the dataset. Please refer to the table in the appendix for more experiments on this.
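A minimal sketch (ours, with smaller illustrative sizes than the full experiment) of the synthetic setup just described: $k$ Gaussian clusters in a 100-dimensional input space, globally normalized, fed to a 100-30-30-$k$ ReLU network trained with SGD and a softmax (cross-entropy) loss:

```python
# Sketch of the synthetic cluster experiment (data generation and training only).
import torch
import torch.nn as nn

def make_clusters(k, d=100, n_per=100, scale=1.0, seed=0):
    g = torch.Generator().manual_seed(seed)
    centers = torch.randn(k, d, generator=g) * 3
    X = torch.cat([c + scale * torch.randn(n_per, d, generator=g) for c in centers])
    y = torch.arange(k).repeat_interleave(n_per)
    return (X - X.mean()) / X.std(), y            # global normalization

k = 10
X, y = make_clusters(k)
model = nn.Sequential(nn.Linear(100, 30), nn.ReLU(),
                      nn.Linear(30, 30), nn.ReLU(),
                      nn.Linear(30, k))            # softmax is folded into the loss
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for step in range(5000):
    idx = torch.randint(0, len(X), (32,))          # mini-batch SGD
    opt.zero_grad()
    nn.functional.cross_entropy(model(X[idx]), y[idx]).backward()
    opt.step()
# The exact Hessian at the end point can then be computed as in the earlier
# two-layer sketch; the number of eigenvalues above the gap is expected to track k.
```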

3.2 The relation between number of parameters and eigenvalues

We test the effect of growing the size of the network with fixed data, architecture, and algorithm. We sample 1K examples from the MNIST dataset, and train all the networks with the same step size until a sufficient stopping condition is satisfied. Then we compute the exact Hessian and plot its eigenvalues in order. In the figures, we show the left and right edges of the spectrum, as the rest are very close to zero. Note also that for Figure 5 the x-axis indicates the ordered counts of eigenvalues, whereas for Figure 6 the x-axis is scaled to indicate the ratios of eigenvalues.

Figure 4: Right edge of the spectrum of the Hessian of the loss for a fully connected network on MNIST with increasing dimensions.

Our first observation is that there exist negative eigenvalues. This suggests that theoretical convergence results may only be reached at time scales much larger than what is required to train a network. The second immediate observation is that the magnitudes of the negative eigenvalues (the left edge) are tiny compared to those of the positive eigenvalues (the right edge). Also note that, for the right edge of the spectrum (that is, for the large positive eigenvalues), the shape of the plot remains invariant as the number of parameters increases (Figure 4).

Figure 5: Left edge of the spectrum.
Figure 6: Left edge of the spectrum, eigenvalues scaled by their ratio.

For the left edge of the spectrum, that is, for the small but negative eigenvalues, the shape of the plot of ordered eigenvalues shifts as the number of parameters increases (Figure 5). However, once we look at the ratios of the negative eigenvalues, we observe that increasing the number of dimensions keeps the ratio of small negative eigenvalues fixed; moreover, the curves appear to converge to the same shape (Figure 6). This observation confirms our previous suspicions: (1) the effect of the ignored term in the decomposition is small (strictly speaking, its contribution is small for the negative eigenvalues), (2) adding more nodes in the hidden layers proportionally enlarges the bulk, and (3) the large positive eigenvalues depend on the data, so their number should not, in principle, change with the increasing dimensionality of the parameter space.

3.3 The relation between the algorithm and eigenvalues

Finally, we turn to describing the nature of solutions found by the large batch (LB) and small batch (SB) methods on the training landscape. We train a convnet on MNIST using large and small batch methods; we find a learning rate for which both algorithms converge and use that same rate for both trainings. Note that the stochastic gradients are averaged over the mini-batch. Therefore, fixing the learning rate allows the algorithms to take steps whose lengths are proportional to the norms of the corresponding stochastic gradients averaged over the mini-batch. This way, the variances of the steps differ, but their means are similar. We train until the same number of iterations has been reached. Then, we calculate the Hessian and its spectrum (see Figures 7 and 8).

Figure 7: LB-SB left edge of the spectrum.
Figure 8: LB-SB right edge of the spectrum.

We observe that the negative eigenvalues are orders of magnitude smaller than the large positive ones, and they elbow at similar levels for both methods. More importantly, we observe that the number of large eigenvalues is the same for both methods, but their magnitudes are different: the large batch method locates points with larger positive eigenvalues. This observation is consistent with [13], as the way they measure flatness calculates local rates of increase in a neighborhood of the solution, which is intimately linked to the size of the large eigenvalues.

In the next and final section, we will argue that, contrary to what is believed in [13] and [8], the two algorithms do not have to be falling into different basins.

4 Discussion on basins of solutions and generalization

Finally, we revisit the issue through the lens of the following question: what does over-parametrization imply for the discussion around GD vs. SGD (or large batch vs. small batch), especially regarding their generalization properties?

As noted in the introduction, for a while the common-sense explanation of why SGD works as well as (in fact better than) GD or large batch methods was that the non-convex landscape has local minima at high energies which would trap large-batch or full-batch methods, something that SGD with small batches shouldn't suffer from due to the inherent noise in the algorithm. However, various experiments carried out in the past show that, for reasonably large systems, this is not the case. For instance, [22] demonstrates that a two-hidden-layer fully connected network on MNIST can be trained by GD to reach the same level of loss values as SGD 3. In fact, when the step size is fixed to the same value for both algorithms, they reach the same loss value after the same number of iterations. The training accuracies of both algorithms are the same, and the gap between test accuracies diminishes as the size of the network increases, with GD falling ever so slightly behind. It is also shown in [13] that training accuracies for both large and small batch methods are comparably good. Furthermore, [26] demonstrates that the training landscape is easy to optimize even when there is no clear notion of generalization. Such observations are consistent with ours: over-parametrization (due to the architecture of the model) leads to flatness at the bottom of the landscape which is easy to optimize.

When we turn our attention to generalization, [13] note that LB methods find a basin that is different from the one found by SB methods, and that the two are characterized by how wide they are. As seen in the spectra above (Figures 7 and 8), the large eigenvalues are indeed larger for LB than for SB, but is this enough to conclude that they lie in different basins, especially given that the number of flat directions is enormous?

4.1 Are they really different basins? Case of CIFAR10 with LB vs SB

A common way to plot training profiles in larger-scale neural networks is to stop every epoch and reserve extra computational power to calculate various statistics of the model at its current position. This becomes problematic when one compares training with different batch sizes, primarily because the larger-batch model takes fewer steps in a given epoch. Recall that the overall loss is averaged; therefore, for a fixed point in the weight space, the empirical average of the gradients is an unbiased estimator of the expected gradient. Hence it is reasonable to expect the gradient norms of the large batch method to match those of the small batch method, and for a fair comparison one should use the same learning rate for both training procedures. This suggests that a better comparison between GD and SGD (or LB and SB) should be scaled by the number of steps, so that, on average, both algorithms take a similar number of steps of comparable sizes. Moreover, since random initial points in high-dimensional spaces are almost always nearly orthogonal, the fact that training from different random initial points may lead to different basins should not be surprising. However, the claim that LB converges to tighter basins that are separated by wells from the wider basins found by SB is the observation that triggered attention. To test this idea further we conduct the following experiment:

  1. Part I: Train a bare AlexNet (no momentum, no dropout, no batch normalization) on the full CIFAR10 dataset with a large batch size. Record every 100 steps, 250 times.

  2. Part II: Continue training from the end point of Part I with a smaller batch size of 32. Everything else, including the constant learning rate, is kept the same, and we train for another 250 periods of 100 steps each (a minimal sketch of this two-phase schedule follows below).
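The following is a minimal sketch of the two-phase schedule (ours, not the exact training script; `model` and the CIFAR10 `train_set` are assumed to exist, the learning rate value is an illustrative assumption, and the Part I batch size is left unspecified as in the text):

```python
# Two-phase training: same optimizer and constant learning rate, only the
# batch size changes between Part I and Part II.
import torch
from torch.utils.data import DataLoader

def run_phase(model, train_set, batch_size, n_records=250, steps_per_record=100, lr=0.01):
    opt = torch.optim.SGD(model.parameters(), lr=lr)     # plain SGD, no momentum
    loss_fn = torch.nn.CrossEntropyLoss()
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    it, history = iter(loader), []
    for rec in range(n_records):
        for _ in range(steps_per_record):
            try:
                x, y = next(it)
            except StopIteration:
                it = iter(loader)
                x, y = next(it)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
        history.append(loss.item())                      # record every 100 steps
    return history

# Part I (large batch), then Part II (small batch) from the same end point:
# hist_lb = run_phase(model, train_set, batch_size=...)  # LB size not specified here
# hist_sb = run_phase(model, train_set, batch_size=32)
```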

Figure 9: Large batch training immediately followed by small batch training on the full dataset of CIFAR10 with a raw version of AlexNet.

The key observation is the jump in the training and test losses, and a corresponding drop in the accuracies, at the start of Part II (Figure 9). Toward the end of Part II the small batch method reaches a slightly better accuracy (about 1%). This looks in line with the observations in [13]: it appears that the LB and SB solutions are separated by a barrier, and that the latter generalizes better. Moreover, line interpolations extending away from either end point appear to confirm the sharpness of the LB solution. However, the straight-line interpolation connecting the end points of Part I and Part II turns out not to contain any barriers (Figure 10). This suggests that while the small and large batch methods converge to different solutions, these solutions have been in the same basin all along. This raises the striking possibility that other seemingly different solutions may be similarly connected by a flat region to form a larger basin.
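A minimal sketch (ours, not the authors' code) of the straight-line interpolation check, assuming `model`, `loss_fn`, and `data_loader` already exist and `w_lb`, `w_sb` are the state dicts of the two end points:

```python
# Evaluate the loss along the line (1 - alpha) * w_lb + alpha * w_sb.
import copy
import torch

def interpolate_losses(model, w_lb, w_sb, loss_fn, data_loader, n_points=21):
    losses = []
    probe = copy.deepcopy(model)
    for alpha in torch.linspace(0.0, 1.0, n_points):
        blended = {k: (1 - alpha) * w_lb[k] + alpha * w_sb[k] for k in w_lb}
        probe.load_state_dict(blended)
        probe.eval()
        total, count = 0.0, 0
        with torch.no_grad():
            for x, y in data_loader:
                total += loss_fn(probe(x), y).item() * len(y)
                count += len(y)
        losses.append(total / count)
    return losses  # a barrier would show up as a bump between the two endpoints
```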

Figure 10: Loss and accuracy evaluation on the straight line that contains the LB and SB solutions. The accuracy of the SB solution is about 1% better than that of the LB solution, but there is no barrier between the two points.

One way to interpret the results goes through the Gauss-Newton decomposition introduced earlier. When we decrease the batch size, we increase the noise in the covariance of the gradients, and hence the first term starts to dominate. Even when the weight space has large flat regions, the fluctuations due to the stochastic noise should be precisely in the directions of the large eigenvalues. Therefore, at the beginning of Part II, the loss increases because the point fluctuates along the directions corresponding to the large eigenvalues, and it eventually settles at a point in the interior of the same level set, essentially staying in the same basin.

4.2 Further discussions

One of the most striking implications of flatness may be the connected structure of the solution space. We may wonder whether two given solutions can be connected by a continuous path of solutions. This question has been explored in a recent work: [10] shows that for one-hidden-layer rectified neural networks the solution space is connected, which is consistent with the flatness of the landscape.

The classical notion of basins of attraction may not be the suitable object to study for neural networks. Rather, we may look at exploring the interiors of level sets of the landscape. We may be tempted to speculate that such an exploration could indeed result in points that generalize better. However, the flat space itself is very high dimensional, which comes with its own computational issues.

The training curve can be seen as composed of two parts: (1) a high-gain part where the norm of the gradient is large, and (2) a part where the noise of the gradients is large relative to the size of the stochastic gradients (see [25] for a recent reference). We speculate that the first part is relatively easy, and that even a large batch method can locate a large level set that contains points that generalize better than what is initially found.

From a practical point of view, using larger batches with larger step sizes can in fact accelerate training. An example of this can be found in [11], where training Imagenet with a minibatch size of 8192 can match small batch performance.

On a final note for further consideration, we remark that we used standard pre-processing and initialization methods that are commonly used in practice. Fixing these two aspects, we modified the data, model, and algorithm in order to study their relative effects. However, the effects of pre-processing and initialization on the Hessian are highly non-trivial and deserve separate attention.

5 Conclusion

We have shown that the degree of singularity of the Hessian cannot be ignored in theoretical considerations. Furthermore, we use the generalized Gauss-Newton decomposition of the Hessian to argue that the cluster of zero eigenvalues is to be expected in practical applications. This allows us to reconsider the division between the initial fast decay and the final slow progress of training. We see that even large batch methods are able to reach the level that small batch methods reach. As opposed to the common intuition, the observed generalization gap between the two is not due to the small batch method finding a different, better, wider basin; instead, the two solutions appear to lie in the same basin. This lack of a barrier between solutions is demonstrated by finding paths between the two points that lie in the same level set. To conclude, we propose a major shift in perspective when considering the energy landscape in deep learning problems.

Acknowledgments

We thank Yann Ollivier, Afonso Bandeira, Yann LeCun, and Mihai Nica for their valuable comments. The first author was partially supported by the grant from the Simons Foundation (454935, Giulio Biroli).


A Outlier eigenvalues

In the subsequent experiments, we used a feed-forward neural network with a 100-dimensional input layer, two hidden layers of 30 hidden units each, and a $k$-dimensional output layer combined with softmax for $k$-class classification. We sampled $k$ random Gaussian clusters in the input space and normalized the data globally. Then we carried out the training for the following sets of parameters: number of clusters $k \in \{2, 5, 10, 20, 50\}$, algorithm: {GD, SGD}, non-linearity: {tanh, ReLU}, initial multiplier for the covariance of the input distribution: {1.0, 10.0}. Then we counted the number of large eigenvalues according to three different cutoff methods: largest consecutive gap, largest consecutive ratio, and a heuristic method of determining the threshold by searching for the elbow in the scree plot (see Figure 3); a sketch of these counting rules follows Table 1. In Table 1 we marked the counts that are off.

Table 1: Counting outliers for matching the number of blobs. Dictionary of table elements: {cov: scale of covariance for inputs, mean&std: of the eigenvalues, nc: number of clusters, gap: largest consecutive gaps, ratio: largest consecutive ratios, heur: heuristic threshold, cutoff: the value of the heur.}
alg nonlin size cov mean&std nc gap ratio heur cutoff
GD tanh 4022 1.0 1.19e-05, 1.19e-05 2 2 3 N/A N/A
GD tanh 4115 1.0 2.52e-05, 2.52e-05 5 2 17 N/A N/A
GD tanh 4270 1.0 3.75e-05, 3.75e-05 10 2 4 N/A N/A
GD tanh 4580 1.0 5.37e-05, 5.37e-05 20 3 4 N/A N/A
GD ReLU 4022 1.0 7.13e-06, 7.13e-06 2 3 4 2 0.003
GD ReLU 4115 1.0 1.55e-05, 1.55e-05 5 6 7 5 0.003
GD ReLU 4270 1.0 3.04e-05, 3.04e-05 10 11 12 10 0.003
GD ReLU 4580 1.0 5.65e-05, 5.65e-05 20 4 21 21 0.003
GD ReLU 5510 1.0 1.16e-04, 1.16e-04 50 50 51 49 0.003
SGD ReLU 4022 1.0 8.96e-06, 8.96e-06 2 3 4 2 0.002
SGD ReLU 4115 1.0 1.62e-05, 1.62e-05 5 6 7 7 0.002
SGD ReLU 4270 1.0 2.97e-05, 2.97e-05 10 2 14 15 0.002
SGD ReLU 4580 1.0 4.53e-05, 4.53e-05 20 2 3 21 0.002
SGD ReLU 5510 1.0 6.78e-05, 6.78e-05 50 50 51 49 0.002
SGD ReLU 4022 10.0 9.49e-05, 9.49e-05 2 2 4 2 0.015
SGD ReLU 4115 10.0 1.68e-04, 1.68e-04 5 5 6 5 0.015
SGD ReLU 4270 10.0 1.92e-04, 1.92e-04 10 2 11 9 0.015
SGD ReLU 4580 10.0 3.10e-04, 3.10e-04 20 20 21 19 0.015
SGD ReLU 5510 10.0 1.67e-04, 1.67e-04 50 50 51 49 0.005
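The following is a minimal sketch (ours) of the three cutoff rules used in Table 1, applied to a descending list of eigenvalues:

```python
# Count outlier eigenvalues by largest gap, largest ratio, and a fixed
# heuristic threshold (the elbow value found from the scree plot).
import numpy as np

def count_outliers(eigs, heuristic_cutoff=None):
    eigs = np.sort(np.asarray(eigs))[::-1]            # descending order
    gaps = eigs[:-1] - eigs[1:]                       # consecutive gaps
    ratios = eigs[:-1] / np.maximum(eigs[1:], 1e-12)  # consecutive ratios
    by_gap = int(np.argmax(gaps)) + 1                 # eigenvalues above the largest gap
    by_ratio = int(np.argmax(ratios)) + 1             # eigenvalues above the largest ratio
    by_heur = (int(np.sum(eigs > heuristic_cutoff))   # eigenvalues above the threshold
               if heuristic_cutoff is not None else None)
    return by_gap, by_ratio, by_heur

print(count_outliers([3.1, 2.9, 0.02, 0.01, 0.005, 0.001], heuristic_cutoff=0.003))
```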

B The spectrum of the generalized Gauss-Newton matrix

In this section, we show that the spectrum of the generalized Gauss-Newton matrix can be characterized theoretically under some conditions. Suppose that we can express the scaled gradient $\sqrt{\ell''_i}\,\nabla f_i$ (from Equation 3) as a linear function of the example, $A(w)\, x_i$, with the matrix $A(w)$ depending only on the parameters, which is the case for linear models. Then the first term can be written as $\frac{1}{N} A X X^\top A^\top$, where $X$ is the matrix whose columns are the examples $x_i$. Furthermore, without loss of generality, we assume that the examples are normalized such that the entries of $X$ are independent with zero mean and unit variance. One of the first steps in studying this matrix goes through understanding its principal components. In particular, we would like to understand how the eigenvalues and eigenvectors of the sample covariance matrix $\frac{1}{N} X X^\top$ are related to those of the population covariance $\Sigma = \mathbb{E}[x\, x^\top]$.

In the simplest case, we have $\Sigma = I$, so that the gradients are uncorrelated, and the eigenvalues of the sample covariance matrix are distributed according to the Marčenko-Pastur law in the limit where the number of samples and the dimension grow to infinity with a fixed ratio. The result dates back to the sixties and can be found in [16]. Note that if the dimension exceeds the number of samples, then there are trivial eigenvalues at zero. Also, the width of the nontrivial part of the distribution essentially depends on the ratio of the two. Clearly, setting the expected covariance to the identity is very limiting. One of the earliest relaxations appears in [2], which proves a phase transition for the largest eigenvalue of the sample covariance matrix, now known as the BBP phase transition; a particular case of this transition is relevant for our setup.
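For reference, here is the standard form of the Marčenko-Pastur law referred to above (our paraphrase of the classical statement, not a quotation from [16]). If $A$ is an $M \times N$ matrix with i.i.d. zero-mean, unit-variance entries and $M, N \to \infty$ with $M/N \to \gamma$, the eigenvalue distribution of $\frac{1}{N} A A^\top$ converges to

$$\mu_{\mathrm{MP}}(dx) = \Big(1 - \tfrac{1}{\gamma}\Big)_{+}\, \delta_0(dx) + \frac{\sqrt{(\gamma_+ - x)(x - \gamma_-)}}{2 \pi \gamma x}\, \mathbf{1}_{[\gamma_-,\, \gamma_+]}(x)\, dx, \qquad \gamma_\pm = (1 \pm \sqrt{\gamma})^2,$$

so that when $M > N$ a fraction $1 - N/M$ of the eigenvalues sits exactly at zero, and the width of the non-trivial part is governed by the ratio $\gamma$ alone.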

Typically, due to the correlations in the problem, the population covariance is neither the identity matrix nor a diagonal matrix with spikes. This makes the analysis of the spectrum a lot more difficult. A solution for this slightly more general case with non-trivial correlations has been provided only recently by [4]. We briefly review these results here to see how they relate to the first term of the above decomposition.

This theorem essentially describes the way in which one obtains outlier eigenvalues in the sample covariance matrix assuming the population covariance is known. Here is an example:
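As a numerical illustration of this mechanism (our own simulation in the simplest spiked case, not the worked example of [4]): a single spike in the population covariance produces an outlier in the sample spectrum only once it exceeds the threshold $1 + \sqrt{M/N}$, and the outlier then sits near the classical BBP prediction.

```python
# Spiked population covariance diag(spike, 1, ..., 1): the top sample
# eigenvalue separates from the Marchenko-Pastur bulk only above the threshold.
import numpy as np

rng = np.random.default_rng(0)
M, N = 400, 2000
gamma = M / N
edge = (1 + np.sqrt(gamma)) ** 2                 # right edge of the MP bulk

for spike in [1.2, 1.0 + np.sqrt(gamma), 4.0]:   # below, at, and above the threshold
    sigma = np.ones(M)
    sigma[0] = spike                             # population covariance spike
    A = rng.normal(size=(M, N)) * np.sqrt(sigma)[:, None]
    top = np.linalg.eigvalsh(A @ A.T / N)[-1]    # largest sample eigenvalue
    predicted = spike * (1 + gamma / (spike - 1)) if spike > 1 + np.sqrt(gamma) else edge
    print(f"spike={spike:.2f}  top sample eig={top:.2f}  prediction={predicted:.2f}")
```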

Footnotes

  1. A general remark is that the terms wide/narrow and high/low are all relative. In a recent study, [9] shows how sharp minima can still generalize with proper modifications to the loss function. We note that it takes a non-linear transformation to deform the relative widths of basins. And, in this work, we focus on relative values as opposed to absolute values in order to get a consistent comparison across different setups.
  2. From a theoretical point of view, there are provable results that map the eigenvalues of the sample covariance matrix to those of the population matrix; a recent result can be found in [4]. Some machine learning problems can be put into such a framework; however, the results require independent inputs, and extensions to correlated data appear to be unavailable to the best of our knowledge. Please refer to the appendix for such a theoretical approach to this eigenvalue problem.
  3. Another important observation in this work that will be useful for our purposes is that constant-rate GD (as well as SGD) does not stall as training progresses; the loss keeps decreasing ever so slightly. This means that even in the flat region there is a small amount of signal coming from the total gradient that allows the algorithm to keep moving on the loss surface. We emphasize that this doesn't contradict the theory, but it implies that convergence may be achieved at time scales much larger than the ones we observe in practice.

References

  1. Random matrices and complexity of spin glasses.
    Antonio Auffinger, Gérard Ben Arous, and Jiří Černy. Communications on Pure and Applied Mathematics
  2. Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices.
    Jinho Baik, Gérard Ben Arous, Sandrine Péché, et al. The Annals of Probability
  3. Energy landscapes for machine learning.
    Andrew J Ballard, Ritankar Das, Stefano Martiniani, Dhagash Mehta, Levent Sagun, Jacob D Stevenson, and David J Wales. Physical Chemistry Chemical Physics
  4. On the principal components of sample covariance matrices.
    Alex Bloemendal, Antti Knowles, Horng-Tzer Yau, and Jun Yin. Probability Theory and Related Fields
  5. Stochastic gradient learning in neural networks.
    Léon Bottou. Proceedings of Neuro-Nimes
  6. Large-scale machine learning with stochastic gradient descent.
    Léon Bottou. In Proceedings of COMPSTAT’2010, pp. 177–186. Physica-Verlag HD, 2010.
  7. Parallelization of a neural network learning algorithm on a hypercube.
    J Bourrely. Hypercube and distributed computers. Elsevier Science Publishing
  8. Entropy-sgd: Biasing gradient descent into wide valleys.
    Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, and Riccardo Zecchina. arXiv preprint arXiv:1611.01838
  9. Sharp minima can generalize for deep nets.
    Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. arXiv preprint arXiv:1703.04933
  10. Topology and geometry of deep rectified network optimization landscapes.
    C Daniel Freeman and Joan Bruna. arXiv preprint arXiv:1611.01540
  11. Accurate, large minibatch sgd: Training imagenet in 1 hour.
    Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. arXiv preprint arXiv:1706.02677
  12. Flat minima.
    Sepp Hochreiter and Jürgen Schmidhuber. Neural Computation
  13. On large-batch training for deep learning: Generalization gap and sharp minima.
    Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. arXiv preprint arXiv:1609.04836
  14. Efficient backprop.
    Y LeCun, L Bottou, GB ORR, and K-R Müller. Lecture notes in computer science
  15. Gradient descent converges to minimizers.
    Jason D Lee, Max Simchowitz, Michael I Jordan, and Benjamin Recht. University of California, Berkeley
  16. Distribution of eigenvalues for some sets of random matrices.
    Vladimir A Marčenko and Leonid Andreevich Pastur. Mathematics of the USSR-Sbornik
  17. Deep learning via hessian-free optimization.
    James Martens. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 735–742, 2010.
  18. The landscape of empirical risk for non-convex losses.
    Song Mei, Yu Bai, and Andrea Montanari. arXiv preprint arXiv:1607.06534
  19. Numerical optimization, second edition.
    Jorge Nocedal and Stephen J Wright. Numerical optimization
  20. Gradient descent only converges to minimizers: Non-isolated critical points and invariant regions.
    Ioannis Panageas and Georgios Piliouras. arXiv preprint arXiv:1605.00405
  21. Fast exact multiplication by the hessian.
    Barak A Pearlmutter. Neural computation
  22. Explorations on high dimensional landscapes.
    Levent Sagun, V Uğur Güney, Gérard Ben Arous, and Yann LeCun. ICLR 2015 Workshop Contribution, arXiv:1412.6615
  23. Singularity of the hessian in deep learning.
    Levent Sagun, Léon Bottou, and Yann LeCun. arXiv preprint arXiv:1611.07476
  24. No more pesky learning rates.
    Tom Schaul, Sixin Zhang, and Yann LeCun. ICML (3)
  25. Opening the black box of deep neural networks via information.
    Ravid Shwartz-Ziv and Naftali Tishby. arXiv preprint arXiv:1703.00810
  26. Understanding deep learning requires rethinking generalization.
    Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. arXiv preprint arXiv:1611.03530