Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning


Charles H. Martin Calculation Consulting, 8 Locksley Ave, 6B, San Francisco, CA 94122, charles@CalculationConsulting.com.    Michael W. Mahoney ICSI and Department of Statistics, University of California at Berkeley, Berkeley, CA 94720, mmahoney@stat.berkeley.edu.
Abstract

Random Matrix Theory (RMT) is applied to analyze the weight matrices of Deep Neural Networks (DNNs), including both production quality, pre-trained models such as AlexNet and Inception, and smaller models trained from scratch, such as LeNet5 and a miniature-AlexNet. Empirical and theoretical results clearly indicate that the DNN training process itself implicitly implements a form of Self-Regularization, implicitly sculpting a more regularized energy or penalty landscape. In particular, the empirical spectral density (ESD) of DNN layer matrices displays signatures of traditionally-regularized statistical models, even in the absence of exogenously specifying traditional forms of explicit regularization, such as Dropout or Weight Norm constraints. Building on relatively recent results in RMT, most notably its extension to Universality classes of Heavy-Tailed matrices, and applying them to these empirical results, we develop a theory to identify 5+1 Phases of Training, corresponding to increasing amounts of Implicit Self-Regularization. These phases can be observed during the training process as well as in the final learned DNNs. For smaller and/or older DNNs, this Implicit Self-Regularization is like traditional Tikhonov regularization, in that there is a “size scale” separating signal from noise. For state-of-the-art DNNs, however, we identify a novel form of Heavy-Tailed Self-Regularization, similar to the self-organization seen in the statistical physics of disordered systems (such as classical models of actual neural activity). This results from correlations arising at all size scales, which for DNNs arises implicitly due to the training process itself. This implicit Self-Regularization can depend strongly on the many knobs of the training process. In particular, by exploiting the generalization gap phenomena, we demonstrate that we can cause a small model to exhibit all 5+1 phases of training simply by changing the batch size. This demonstrates that—all else being equal—DNN optimization with larger batch sizes leads to less-well implicitly-regularized models, and it provides an explanation for the generalization gap phenomena. Our results suggest that large, well-trained DNN architectures should exhibit Heavy-Tailed Self-Regularization, and we discuss the theoretical and practical implications of this.

1 Introduction

Very large, very deep neural networks (DNNs) have received attention as a general purpose tool for solving problems in machine learning (ML) and artificial intelligence (AI), and they perform remarkably well on a wide range of traditionally hard if not impossible problems, such as speech recognition, computer vision, and natural language processing. The conventional wisdom seems to be “the bigger the better,” “the deeper the better,” and “the more hyper-parameters the better.” Unfortunately, this usual modus operandi leads to large, complicated models that are extremely hard to train, that are extremely sensitive to the parameter settings, and that are extremely difficult to understand, reason about, and interpret. Relatedly, these models seem to violate what one would expect from the large body of theoretical work that is currently popular in ML, optimization, statistics, and related areas. This leads to theoretical results that fail to provide guidance to practice as well as to confusing and conflicting interpretations of empirical results. For example, current optimization theory fails to explain phenomena like the so-called Generalization Gap—the curious observation that DNNs generalize better when trained with smaller batch sizes—and it often does not provide even qualitative guidance as to how stochastic algorithms perform on non-convex landscapes of interest; and current statistical learning theory, e.g., VC-based methods, fails to provide even qualitative guidance as to the behavior of this class of learning methods, which seems to have next to unlimited capacity and yet generalizes without overtraining.

1.1 A historical perspective

The inability of optimization and learning theory to explain and predict the properties of NNs is not a new phenomenon. From the earliest days of DNNs, it was suspected that VC theory did not apply to these systems. For example, in 1994, Vapnik, Levin, and LeCun [144] said:

[T]he [VC] theory is derived for methods that minimize the empirical risk. However, existing learning algorithms for multilayer nets cannot be viewed as minimizing the empirical risk over [the] entire set of functions implementable by the network.

It was originally assumed that local minima in the energy/loss surface were responsible for the inability of VC theory to describe NNs [144], and that the mechanism for this was that getting trapped in local minima during training limited the number of possible functions realizable by the network. However, it was very soon realized that the presence of local minima in the energy function was not a problem in practice [79, 39]. (More recently, this fact seems to have been rediscovered [108, 37, 56, 135].) Thus, another reason for the inapplicability of VC theory was needed. At the time, there did exist other theories of generalization based on statistical mechanics [127, 147, 60, 43], but for various technical and nontechnical reasons these fell out of favor in the ML/NN communities. Instead, VC theory and related techniques continued to remain popular, in spite of their obvious problems.

More recently, theoretical results of Choromanska et al. [30] (which are related to [127, 147, 60, 43]) suggested that the Energy/optimization Landscape of modern DNNs resembles the Energy Landscape of a zero-temperature Gaussian Spin Glass; and empirical results of Zhang et al. [156] have again pointed out that VC theory does not describe the properties of DNNs. Motivated by these results, Martin and Mahoney then suggested that the Spin Glass analogy may be useful to understand severe overtraining versus the inability to overtrain in modern DNNs [93].

Many puzzling questions about regularization and optimization in DNNs abound. In fact, it is not even clear how to define DNN regularization. In traditional ML, regularization can be either explicit or implicit. Let’s say that we are optimizing some loss function $L(\cdot)$, specified by some parameter vector or weight matrix $W$. When regularization is explicit, it involves making the loss function “nicer” or “smoother” or “more well-defined” by adding an explicit capacity control term directly to the loss, i.e., by considering a modified objective of the form $L(W) + \alpha \|W\|$. In this case, we tune the regularization parameter $\alpha$ by cross validation. When regularization is implicit, we instead have some adjustable operational procedure like early stopping of an iterative algorithm or truncating small entries of a solution vector. In many cases, we can still relate this back to the more familiar form of optimizing an effective function of the form $L(W) + \alpha \|W\|$. For a precise statement in simple settings, see [89, 115, 53]; and for a discussion of implicit regularization in a broader context, see [88] and references therein.

With DNNs, the situation is far less clear. The challenge in applying these well-known ideas to DNNs is that DNNs have many adjustable “knobs and switches,” independent of the Energy Landscape itself, most of which can affect training accuracy, in addition to many model parameters. Indeed, nearly anything that improves generalization is called regularization, and a recent review presents a taxonomy of over 50 different regularization techniques for Deep Learning [76]. The most common include ML-like Weight Norm regularization, so-called “tricks of the trade” like early stopping and decreasing the batch size, and DNN-specific methods like Batch Normalization and Dropout. Evaluating and comparing these methods is challenging, in part since there are so many, and in part since they are often constrained by systems or other not-traditionally-ML considerations. Moreover, Deep Learning avoids cross validation (since there are simply too many parameters), and instead it simply drives training error to zero (followed by subsequent fiddling of knobs and switches). Of course, it is still the case that test information can leak into the training process (indeed, perhaps even more severely for DNNs than for traditional ML methods). Among other things, this argues for unsupervised metrics to evaluate model quality.

Motivated by this situation, we are interested here in two related questions.

  • Theoretical Question. Why is regularization in deep learning seemingly quite different than regularization in other areas of ML; and what is the right theoretical framework with which to investigate regularization for DNNs?

  • Practical Question. How can one control and adjust, in a theoretically-principled way, the many knobs and switches that exist in modern DNN systems, e.g., to train these models efficiently and effectively, to monitor their effects on the global Energy Landscape, etc.?

That is, we seek a Practical Theory of Deep Learning, one that is prescriptive and not just descriptive. This theory would provide useful tools for practitioners wanting to know How to characterize and control the Energy Landscape to engineer larger and better DNNs; and it would also provide theoretical answers to broad open questions such as Why Deep Learning even works. For example, it would provide metrics to characterize qualitatively-different classes of learning behaviors, as predicted in recent work [93]. Importantly, VC theory and related methods do not provide a theory of this form.

1.2 Overview of our approach

Let us write the Energy Landscape (or optimization function) for a typical DNN with layers, with activation functions , and with weight matrices and biases and , as follows:

(1)

For simplicity, we do not indicate the structural details of the layers (e.g., Dense or not, Convolutions or not, Residual/Skip Connections, etc.). We imagine training this model on some labeled data , using Backprop, by minimizing the loss (i.e., the cross-entropy), between and the labels , as follows:

(2)

We can initialize the DNN using random initial weight matrices $W_l^0$, or we can use other methods such as transfer learning (which we will not consider here). There are various knobs and switches to tune, such as the choice of solver, batch size, learning rate, etc.

Most importantly, to avoid overtraining, we must usually regularize our DNN. Perhaps the most familiar approach from ML for implementing this regularization explicitly constrains the norm of the weight matrices, e.g., modifying Objective (2) to give:

$$\min_{W_l, b_l} \; \sum_i \mathcal{L}\big(E_{DNN}(d_i), y_i\big) + \alpha \sum_l \|W_l\| \qquad (3)$$

where $\|\cdot\|$ is some matrix norm, and where $\alpha$ is an explicit regularization control parameter.
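To make the role of the explicit norm penalty concrete, here is a minimal Keras sketch of Objective (3); it is not the paper's code, and the layer sizes and the value of alpha are illustrative placeholders.

```python
# Minimal sketch (not the paper's code): an explicit Weight Norm penalty added to the
# training objective, as in Objective (3).  Layer sizes and alpha are placeholders.
from tensorflow.keras import layers, models, regularizers

alpha = 1e-4  # explicit regularization control parameter

model = models.Sequential([
    layers.Flatten(input_shape=(32, 32, 3)),
    layers.Dense(512, activation="relu",
                 kernel_regularizer=regularizers.l2(alpha)),  # adds alpha * ||W_1||_F^2
    layers.Dense(10, activation="softmax",
                 kernel_regularizer=regularizers.l2(alpha)),  # adds alpha * ||W_2||_F^2
])
model.compile(optimizer="sgd", loss="categorical_crossentropy")
```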

The point of Objective (3) is that explicit regularization shrinks the norm(s) of the $W_l$ matrices. We may expect similar results to hold for implicit regularization. We will use advanced methods from Random Matrix Theory (RMT), developed in the theory of self-organizing systems, to characterize DNN layer weight matrices, $W_l$, during and after the training process. (We consider weight matrices computed for individual dense layers and other layers that can easily be represented as matrices, i.e., 2-index tensors. Nothing in our RMT-based theory, however, requires matrices to be constructed from dense layers; the same methods could easily be applied to matrices constructed from convolutional (or other) layers. For example, we could stack/reshape weight matrices in various ways, e.g., to study multi-dimensional tensors like Convolutional Layers or how different layers interact. We have unpublished results that indicate this is a promising direction, but we do not consider these variants here.)

Here is an important (but often under-appreciated) point. We call $E_{DNN}$ the Energy Landscape. By this, we mean that part of the optimization problem parameterized by the heretofore unknown elements of the weight matrices and bias vectors, for a fixed $\alpha$ (in (3)), and as defined by the data $\mathcal{D}$. Because we run Backprop training, we pass the data through the Energy function multiple times. Each time, we adjust the values of the weight matrices and bias vectors. In this sense, we may think of the total Energy Landscape (i.e., the optimization function that is nominally being optimized) as changing at each epoch.

1.3 Summary of our results

We analyze the distribution of eigenvalues, i.e., the Empirical Spectral Density (ESD), $\rho_N(\lambda)$, of the correlation matrix associated with the layer weight matrix $W_l$. We do this for a wide range of large, pre-trained, readily-available state-of-the-art models, including the original LeNet5 convolutional net (which, due to its age, we retrain) and pre-trained models available in Keras and PyTorch such as AlexNet and Inception. In some cases, the ESDs are very well-described by Marchenko-Pastur (MP) RMT. In other cases, the ESDs are well-described by MP RMT, with the exception of one or more large eigenvalues that can be modeled by a Spiked-Covariance model [92, 68]. In still other cases—including nearly every current state-of-the-art model we have examined—the ESDs are poorly-described by traditional RMT, and instead they are more consistent with Heavy-Tailed behavior seen in the statistical physics of disordered systems [134, 24]. Based on our observations, we develop a practical theory of Implicit Self-Regularization in DNNs. This theory takes the form of an operational theory characterizing 5+1 phases of DNN training. To test and validate our theory, we consider two smaller models, a 3-layer MLP (MLP3) and a miniature version of AlexNet (MiniAlexNet), trained on CIFAR10, that we can train ourselves repeatedly, adjusting various knobs and switches along the way.
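As a concrete illustration of this kind of analysis, the sketch below computes the ESD of one fully-connected layer of a pretrained torchvision model; the choice of AlexNet and of classifier[1] as the first FC layer reflects the torchvision packaging and is only one possible example.

```python
# Sketch: compute the ESD of one fully-connected layer of a pretrained model.
# AlexNet and classifier[1] (the first Linear layer in torchvision's AlexNet) are
# example choices; any layer that can be represented as a matrix works the same way.
import numpy as np
import torchvision.models as models

model = models.alexnet(pretrained=True)          # older torchvision argument; newer versions use weights=...
W = model.classifier[1].weight.detach().numpy()  # FC1 weight matrix
N, M = max(W.shape), min(W.shape)                # convention: N >= M, aspect ratio Q = N/M

# (Uncentered) correlation matrix X, normalized by the larger dimension N.
X = (W @ W.T if W.shape[0] < W.shape[1] else W.T @ W) / N
eigvals = np.linalg.eigvalsh(X)                  # the ESD is the histogram of these

print("Q =", N / M, " lambda_max =", eigvals.max())
```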

Main Empirical Results.

Our main empirical results consist in evaluating empirically the ESDs (and related RMT-based statistics) for weight matrices for a suite of DNN models, thereby probing the Energy Landscapes of these DNNs. For older and/or smaller models, these results are consistent with implicit Self-Regularization that is Tikhonov-like; and for modern state-of-the-art models, these results suggest novel forms of Heavy-Tailed Self-Regularization.

  • Capacity Control Metrics. We study simple capacity control metrics, the Matrix Entropy, the linear algebraic or Hard Rank, and the Stable Rank. We also use MP RMT to define a new metric, the MP Soft Rank. These metrics track the amount of Self-Regularization that arises in a weight matrix $W$, either during training or in a pre-trained DNN.

  • Self-Regularization in old/small models. The ESDs of older/smaller DNN models (like LeNet5 and a toy MLP3 model) exhibit weak Self-Regularization, well-modeled by a perturbative variant of MP theory, the Spiked-Covariance model. Here, a small number of eigenvalues pull out from the random bulk, and thus the MP Soft Rank and Stable Rank both decrease. This weak form of Self-Regularization is like Tikhonov regularization, in that there is a “size scale” that cleanly separates “signal” from “noise,” but it is different than explicit Tikhonov regularization in that it arises implicitly due to the DNN training process itself.

  • Heavy-Tailed Self-Regularization. The ESDs of larger, modern DNN models (including AlexNet and Inception and nearly every other large-scale model we have examined) deviate strongly from the common Gaussian-based MP model. Instead, they appear to lie in one of the very different Universality classes of Heavy-Tailed random matrix models. We call this Heavy-Tailed Self-Regularization. Here, the MP Soft Rank vanishes, and the Stable Rank decreases, but the full Hard Rank is still retained. The ESD appears fully (or partially) Heavy-Tailed, but with finite support. In this case, there is not a “size scale” (even in the theory) that cleanly separates “signal” from “noise.”

Main Theoretical Results.

Our main theoretical results consist in an operational theory for DNN Self-Regularization. Our theory uses ideas from RMT—both vanilla MP-based RMT as well as extensions to other Universality classes based on Heavy-Tailed distributions—to provide a visual taxonomy for Phases of Training, corresponding to increasing amounts of Self-Regularization.

  • Modeling Noise and Signal. We assume that a weight matrix $W$ can be modeled as $W \simeq W^{rand} + \Delta_{sig}$, where $W^{rand}$ is “noise” and where $\Delta_{sig}$ is “signal.” For small to medium sized signal, $W$ is well-approximated by an MP distribution—with elements drawn from the Gaussian Universality class—perhaps after removing a few eigenvectors. For large and strongly-correlated signal, $W^{rand}$ gets progressively smaller, but we can model the non-random strongly-correlated signal $\Delta_{sig}$ by a Heavy-Tailed random matrix, i.e., a random matrix with elements drawn from a Heavy-Tailed (rather than Gaussian) Universality class.

  • 5+1 Phases of Regularization. Based on this approach to modeling noise and signal, we construct a practical, visual taxonomy for 5+1 Phases of Training. Each phase is characterized by stronger, visually distinct signatures in the ESD of DNN weight matrices, and successive phases correspond to decreasing MP Soft Rank and increasing amounts of Self-Regularization. The 5+1 phases are: Random-like, Bleeding-out, Bulk+Spikes, Bulk-decay, Heavy-Tailed, and Rank-collapse.

  • Rank-collapse. One of the predictions of our RMT-based theory is the existence of a pathological phase of training, the Rank-collapse or “+1” Phase, corresponding to a state of over-regularization. Here, one or a few very large eigenvalues dominate the ESD, and the rest of the weight matrix loses nearly all Hard Rank.

Based on these results, we speculate that all well optimized, large DNNs will display Heavy-Tailed Self-Regularization in their weight matrices.

Evaluating the Theory.

We provide a detailed evaluation of our theory using a smaller MiniAlexNet model that we can train and retrain.

  • Effect of Explicit Regularization. We analyze ESDs of MiniAlexNet by removing all explicit regularization (Dropout, Weight Norm constraints, Batch Normalization, etc.) and characterizing how the ESDs of the weight matrices behave during and at the end of Backprop training, as we systematically add back in different forms of explicit regularization.

  • Implementation Details. Since the details of the methods that underlie our theory (e.g., fitting Heavy-Tailed distributions, finite-size effects, etc.) are likely not familiar to ML and NN researchers, and since the details matter, we describe these issues in detail.

  • Exhibiting the 5+1 Phases. We demonstrate that we can exhibit all 5+1 phases by appropriate modification of the various knobs of the training process. In particular, by decreasing the batch size from 500 to 2, we can make the ESDs of the fully-connected layers of MiniAlexNet vary continuously from Random-like to Heavy-Tailed, while increasing generalization accuracy along the way. These results illustrate the Generalization Gap phenomenon [64, 72, 57], and they explain that phenomenon as being caused by the implicit Self-Regularization associated with models trained with smaller and smaller batch sizes. By adding extreme Weight Norm regularization, we can also induce the Rank-collapse phase.

Main Methodological Contribution.

Our main methodological contribution consists in using empirical observations as well as recent developments in RMT to motivate a practical predictive DNN theory, rather than developing a descriptive DNN theory based on general theoretical considerations. Essentially, we treat the training of different DNNs as if we are running novel laboratory experiments, and we follow the traditional scientific method:

Make Observations → Form Hypotheses → Build a Theory → Test the theory, literally.

In particular, this means that we can observe and analyze many large, production-quality, pre-trained models directly, without needing to retrain them, and we can also observe and analyze smaller models during the training process. In adopting this approach, we are interested in both “scientific questions” (e.g., “Why is regularization in deep learning seemingly quite different …?”) as well as “engineering questions” (e.g., “How can one control and adjust …?”).

To accomplish this, recall that, given an architecture, the Energy Landscape is completely defined by the DNN weight matrices. Since its domain is exponentially large, the Energy Landscape is challenging to study directly. We can, however, analyze the weight matrices, as well as their correlations. (This is analogous to analyzing the expected moments of a complicated distribution.) In principle, this permits us to analyze both local and global properties of the Energy Landscape, as well as something about the class of functions (e.g., VC class, Universality class, etc.) being learned by the DNN. Since the weight matrices of many DNNs exhibit strong correlations and can be modeled by random matrices with elements drawn from the Universality class of Heavy-Tailed distributions, this severely restricts the class of functions learned. It also connects back to the Energy Landscape since it is known that the Energy Landscape of Heavy-Tailed random matrices is very different than that of Gaussian-like random matrices.

1.4 Outline of the paper

In Section 2, we provide a warm-up, including simple capacity metrics and their transitions during Backprop. Then, in Sections 3 and 4, we review background on RMT necessary to understand our experimental methods, and we present our initial experimental results. Based on this, in Section 5, we present our main theory of 5+1 Phases of Training. Then, in Sections 6 and 7, we evaluate our main theory, illustrating the effect of explicit regularization, and demonstrating implications for the generalization gap phenomenon. Finally, in Section 8, we provide a discussion of our results in a broader context. The accompanying code is available at https://github.com/CalculatedContent/ImplicitSelfRegularization. For reference, we provide in Table 1 and Table 2 a summary of acronyms and notation used in the following.

Acronym Description
DNN Deep Neural Network
ML Machine Learning
SGD Stochastic Gradient Descent
RMT Random Matrix Theory
MP Marchenko Pastur
ESD Empirical Spectral Density
PL Power Law
HT Heavy-Tailed
TW Tracy Widom (Law)
SVD Singular Value Decomposition
FC Fully Connected (Layer)
VC Vapnik-Chervonenkis (Theory)
SMTOG Statistical Mechanics Theory of Generalization
Table 1: Definitions of acronyms used in the text.
Notation Description
DNN layer weight matrix of size , with
DNN layer weight matrix for layer
DNN layer weight matrix for layer at epoch
random rectangular matrix, elements from truncated Normal distribution
random rectangular matrix, elements from Pareto distribution
normalized correlation matrix for layer weight matrix
aspect ratio of
singular value of
eigenvalue of
maximum eigenvalue in an ESD
eigenvalue at edge of MP Bulk
eigenvalue lying outside MP Bulk,
actual ESD, from some matrix
theoretical ESD, infinite limit
theoretical ESD, finite size
theoretical empirical density of singular values, infinite limit
elementwise variance of , used to define MP distribution
elementwise variance of , as measured after random shuffling
elementwise variance of , after removing/ignoring all spikes
elementwise variance of , determined empirically
Hard Rank, number of non-zero singular values, Eqn. (5)
Matrix Entropy, as defined on , Eqn. (6)
Stable Rank, measures decay of singular values, Eqn. (7)
MP Soft Rank, applied after and depends on MP fit, Eqn. (11)
Vector Entropy, as defined on vector
Localization Ratio, as defined on vector
Participation Ratio, as defined on vector
Pareto distribution, parameterized by
Pareto distribution, parameterized by
theoretical relation, for ESD of , between and (for )
empirical relation, for ESD of , between and (for )
empirical uncertainty, due to finite-size effects,
in theoretical MP bulk edge
model of perturbations and/or strong correlations in
Table 2: Definitions of notation used in the text.

2 Simple Capacity Metrics and Transitions during Backprop

In this section, we describe simple spectral metrics to characterize these DNN weight matrices, as well as initial empirical observations on the capacity properties of DNNs during training.

2.1 Simple capacity control metrics

A DNN is defined by its detailed architecture and the values of the weights and biases at each layer. We seek a simple capacity control metric for a learned DNN model that: is easy to compute both during training and for already-trained models; can describe changes in the gross behavior of weight matrices during the Backprop training process; and can identify the onset of subtle structural changes in the weight matrices.

One possibility is to use the Euclidean distance between the initial weight matrix, $W_l^0$, and the weight matrix at epoch $e$ of training, $W_l^e$, i.e., $\|W_l^e - W_l^0\|$. This distance, however, is not scale invariant. In particular, during training, and with regularization turned off, the weight matrices may shift in scale, gaining or losing Frobenius mass or variance (we use these terms interchangeably, i.e., even if the matrices are not mean-centered), and this distance metric is sensitive to that change. Indeed, the whole point of a BatchNorm layer is to try to prevent this. To start, then, we will consider two scale-invariant measures of capacity control: the Matrix Entropy $\mathcal{S}(W)$ and the Stable Rank $\mathcal{R}_s(W)$. For an arbitrary matrix $W$, both of these metrics are defined in terms of its spectrum.

Consider (real-valued) layer weight matrices $W_{N \times M}$, where $N \ge M$. Let the Singular Value Decomposition of $W$ be

$$W = U \Sigma V^T ,$$

where $\nu_i = \Sigma_{ii}$ is the $i$-th singular value (we use $\nu$ rather than $\sigma$ to denote singular values since $\sigma$ denotes variance elsewhere) of $W$, and let $\nu_{max} = \max_i \nu_i$. We also define the associated (uncentered) correlation matrix

$$X = X_l = \frac{1}{N} W_l^T W_l , \qquad (4)$$

where we sometimes drop the subscript $l$ for $X$, and where $X$ is normalized by $1/N$. We compute the eigenvalues of $X$,

$$X \, v_i = \lambda_i \, v_i ,$$

where the $\lambda_i$ are, up to the $1/N$ normalization, the squares of the singular values: $\lambda_i = \nu_i^2 / N$. Given the singular values of $W$ and/or eigenvalues of $X$, there are several well-known matrix complexity metrics.

  • The Hard Rank (or linear algebraic rank),

    $$\mathcal{R}(W) = \#\{\, \nu_i : \nu_i > 0 \,\} , \qquad (5)$$

    is the number of singular values greater than zero, $\nu_i > 0$, to within a numerical cutoff.

  • The Matrix Entropy,

    $$\mathcal{S}(W) = \frac{-1}{\log \mathcal{R}(W)} \sum_i p_i \log p_i , \quad \text{where } p_i = \frac{\lambda_i}{\sum_j \lambda_j} , \qquad (6)$$

    is also known as the Generalized von-Neumann Matrix Entropy. (This resembles the entanglement Entropy, suggested for the analysis of convolutional nets [81].)

  • The Stable Rank,

    $$\mathcal{R}_s(W) = \frac{\|W\|_F^2}{\|W\|_2^2} = \frac{\sum_i \lambda_i}{\lambda_{max}} , \qquad (7)$$

    the ratio of the Frobenius norm to the Spectral norm, is a robust variant of the Hard Rank.

We also refer to the Matrix Entropy $\mathcal{S}(X)$ and Stable Rank $\mathcal{R}_s(X)$ of $X$. By this, we mean the metrics computed with the associated eigenvalues. Note that $\mathcal{S}(X) = \mathcal{S}(W)$ and $\mathcal{R}_s(X) = \mathcal{R}_s(W)$.

It is known that a random matrix has maximum Entropy, and that lower values for the Entropy correspond to more structure/regularity. If $W$ is a random matrix, then $\mathcal{S}(W) \approx 1$. For example, if we initialize our weight matrices with a truncated Normal random matrix $W^0$, then $\mathcal{S}(W^0) \lesssim 1$. When $W$ has significant and observable non-random structure, we expect $\mathcal{S}(W) < 1$. We will see, however, that in practice these differences are quite small, and we would prefer a more discriminative metric. In nearly every case, for well-trained DNNs, all the weight matrices retain full Hard Rank $\mathcal{R}(W) = M$; but the weight matrices do “shrink,” in a sense captured by the Stable Rank. Both $\mathcal{S}(W)$ and $\mathcal{R}_s(W)$ measure matrix capacity, and, up to a scale factor, we will see that they exhibit qualitatively similar behavior.
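As a minimal sketch (assuming the eigenvalue-based normalization written in Eqn. (6)), the three metrics can be computed directly from the singular values of a weight matrix:

```python
# Sketch of the three simple capacity metrics above, computed from the singular
# values of a weight matrix W.  The numerical cutoff `tol` is a placeholder choice.
import numpy as np

def capacity_metrics(W, tol=1e-10):
    nu = np.linalg.svd(W, compute_uv=False)      # singular values nu_i
    lam = nu ** 2 / max(W.shape)                 # eigenvalues of X = W^T W / N
    hard_rank = int(np.sum(nu > tol))            # Eqn. (5): number of nonzero singular values
    p = lam / lam.sum()                          # spectral "probability" distribution
    p = p[p > 0]
    entropy = -np.sum(p * np.log(p)) / np.log(hard_rank)   # Eqn. (6), normalized to [0, 1]
    stable_rank = lam.sum() / lam.max()          # Eqn. (7): ||W||_F^2 / ||W||_2^2
    return hard_rank, entropy, stable_rank

# A random (Gaussian) matrix should have entropy close to 1 and full hard rank.
W0 = np.random.normal(size=(1000, 500))
print(capacity_metrics(W0))
```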

2.2 Empirical results: Capacity transitions while training

We start by illustrating the behavior of two simple complexity metrics during Backprop training on MLP3, a simple 3-layer Multi-Layer Perceptron (MLP), described in Table 4. MLP3 consists of 3 fully connected (FC), or dense, layers with ReLU activations, followed by a final FC layer with softmax activation. This gives 4 layer weight matrices $W_l$ of shape $N \times M$, with $N \ge M$.

For the training, each matrix is initialized with a Glorot Normal initialization [54]. The model is trained on CIFAR10, up to 100 epochs, with SGD (with fixed learning rate and momentum) and with a stopping criterion on the MSE loss. (Code will be available at https://github.com/CalculatedContent/ImplicitSelfRegularization/MLP3.)
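For reference, a minimal Keras sketch of this setup is below; the hidden-layer width (512), batch size, and SGD hyperparameters are illustrative placeholders, not the exact values used in the experiments.

```python
# Sketch of an MLP3-style model trained on CIFAR10.  The hidden width (512), the SGD
# hyperparameters, and the batch size are illustrative placeholders.
from tensorflow.keras import layers, models, optimizers, datasets, utils

(x_tr, y_tr), _ = datasets.cifar10.load_data()
x_tr = x_tr.reshape(len(x_tr), -1) / 255.0           # flatten 32x32x3 images to 3072-vectors
y_tr = utils.to_categorical(y_tr, 10)

model = models.Sequential([
    layers.Dense(512, activation="relu", kernel_initializer="glorot_normal",
                 input_shape=(3072,)),
    layers.Dense(512, activation="relu", kernel_initializer="glorot_normal"),
    layers.Dense(512, activation="relu", kernel_initializer="glorot_normal"),
    layers.Dense(10, activation="softmax", kernel_initializer="glorot_normal"),
])
model.compile(optimizer=optimizers.SGD(learning_rate=0.01, momentum=0.9),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Snapshot each layer's weight matrix every epoch, so the entropy, stable rank, and
# ESD can be tracked as a function of training epoch (cf. Figures 1-3).
snapshots = []
for epoch in range(100):
    model.fit(x_tr, y_tr, epochs=1, batch_size=128, verbose=0)
    snapshots.append([layer.get_weights()[0].copy() for layer in model.layers])
```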

Figure 1 presents the layer entropy (in Figure 1(a)) and the stable rank (in Figure 1(b)), plotted as a function of training epoch, for FC1 and FC2. Both metrics decrease during training (note the scales of the Y axes): the stable rank decreases by approximately a factor of two, and the matrix entropy decreases by a much smaller amount (this is for FC2, and there is an even more modest change for FC1). Both metrics track nearly the same changes, and the stable rank is more informative for our purposes; but we will see that the changes to the matrix entropy, while subtle, are significant.

(a) MLP3 Entropies.
(b) MLP3 Stable Ranks.
Figure 1: The behavior of two complexity measures, the Matrix Entropy $\mathcal{S}(W)$ and the Stable Rank $\mathcal{R}_s(W)$, for Layers FC1 and FC2, during Backprop training, for MLP3. Both measures display a transition during Backprop training.

Figure 2 presents scree plots for the initial and final weight matrices for the FC1 and FC2 layers of our MLP3. A scree plot shows the decreasing variability in the matrix as a function of the increasing index of the corresponding eigenvector [59]. Thus, such scree plots present similar information to the stable rank—e.g., observe the Y-axis of Figure 2(b), which shows that there is a slight increase in the largest eigenvalue for FC1 (again, note the scales of the Y axes) and a larger increase in the largest eigenvalue for FC2, which is consistent with the changes in the stable rank in Figure 1(b)—but they too give only a coarse picture of the matrix. In particular, they lack the detailed insight into subtle changes in the entropy and rank associated with the Self-Regularization process, e.g., changes that reside in just a few singular values and vectors, that we will need in our analysis.

(a) MLP3 Initial Scree Plot.
(b) MLP3 Final Scree Plot.
Figure 2: Scree plots for initial and final configurations for Layers FC1 and FC2, during Backprop training, for MLP3.
(a) Initial/final singular value density for Layer FC2.
(b) Initial/final eigenvalue density (ESD, $\rho_N(\lambda)$) for Layer FC2.
Figure 3: Histograms of the Singular Values $\nu_i$ and associated Eigenvalues $\lambda_i$, comparing initial and final weight matrices (which are $N \times M$, with $N \ge M$) for Layer FC2 of MLP3 trained on CIFAR10.

Limitations of these metrics.

We can gain more detailed insight into changes in $W_l$ during training by creating histograms of the singular values and/or eigenvalues. Figure 3(a) displays the density of singular values of the initial and final weight matrices for the FC2 layer of the MLP3 model. Figure 3(b) displays the associated eigenvalue densities, $\rho_N(\lambda)$, which we call the Empirical Spectral Density (ESD) plots (defined in detail below). Observe that the initial density of singular values (shown in red/purple) resembles a quarter circle (the initial weight matrix is just a random Glorot Normal matrix), and the final density of singular values (blue) consists of a bulk quarter circle, of about the same width, with several spikes of singular value density beyond the bulk’s edge. Observe that the similar heights, widths, and shapes of the bulks imply that the variance, or Frobenius norm, does not change much. Observe also that the initial ESD (red/purple) is crisply bounded between $\lambda^-$ and $\lambda^+$ (and similarly for the density of singular values at the square roots of these values), whereas the final ESD (blue) has less density at $\lambda^+$ and several spikes well beyond it, including a largest eigenvalue $\lambda_{max}$ well above the bulk edge. We now see why the stable rank for FC2 decreases by roughly a factor of two: the Frobenius norm does not change much, but the squared Spectral norm $\lambda_{max}$ is substantially larger.

The fine-scale structure that is largely hidden from Figures 1 and 2 but that is easily-revealed by singular/eigen value density plots of Figure 3 suggests that a RMT analysis might be fruitful.

3 Basic Random Matrix Theory (RMT)

In this section, we summarize results from RMT that we use. RMT provides a kind-of Central Limit Theorem for matrices, with unique results for both square and rectangular matrices. Perhaps the most well-known results from RMT are the Wigner Semicircle Law, which describes the eigenvalues of random square symmetric matrices, and the Tracy-Widom (TW) Law, which states how the maximum eigenvalue of a (more general) random matrix is distributed. Two issues arise with applying these well-known versions of RMT to DNNs. First, very rarely do we encounter symmetric weight matrices. Second, in training DNNs, we only have one instantiation of each weight matrix, and so it is not generally possible to apply the TW Law. (This is for “production” use, where one performs only a single training run; below, we will generate ensembles of weight matrices for our smaller models to denoise and better illustrate their properties.) Several overviews of RMT are available [143, 41, 71, 142, 22, 42, 110, 24]. Here, we will describe a more general form of RMT, the Marchenko-Pastur (MP) theory, applicable to rectangular matrices, including (but not limited to) DNN weight matrices $W$.

3.1 Marchenko-Pastur (MP) theory for rectangular matrices

MP theory considers the density of singular values $\rho(\nu)$ of random rectangular matrices $W$. This is equivalent to considering the density of eigenvalues $\rho(\lambda)$, i.e., the ESD, of matrices of the form $X = \frac{1}{N} W^T W$. MP theory then makes strong statements about such quantities as the shape of the distribution in the infinite limit, its bounds, expected finite-size effects, such as fluctuations near the edge, and rates of convergence. When applied to DNN weight matrices, MP theory assumes that $W$, while trained on very specific datasets, exhibits statistical properties that do not depend on the specific details of the elements $W_{i,j}$, and this holds even at finite size. This Universality concept is “borrowed” from Statistical Physics, where it is used to model, among other things, strongly-correlated systems and so-called critical phenomena in nature [134].

To apply RMT, we need only specify the number of rows and columns of $W$ and assume that the elements $W_{i,j}$ are drawn from a specific distribution that is a member of a certain Universality class (there are different results for different Universality classes). RMT then describes properties of the ESD, even at finite size; and one can compare predictions of RMT with empirical results. Most well-known and well-studied is the Universality class of Gaussian distributions. This leads to the basic or vanilla MP theory, which we describe in this section. More esoteric—but ultimately more useful for us—are the Universality classes of Heavy-Tailed distributions. In Section 3.2, we describe this important variant.

Gaussian Universality class.

We start by modeling $W$ as an $N \times M$ random matrix, with elements drawn from a Gaussian distribution, such that:

$$W_{i,j} \sim N(0, \sigma_{mp}^2) .$$

Then, MP theory states that the ESD of the correlation matrix, $X = \frac{1}{N} W^T W$, has the limiting density given by the MP distribution $\rho(\lambda)$:

$$\rho(\lambda) = \frac{Q}{2\pi \sigma_{mp}^2} \, \frac{\sqrt{(\lambda^+ - \lambda)(\lambda - \lambda^-)}}{\lambda} \quad \text{if } \lambda \in [\lambda^-, \lambda^+], \text{ and } 0 \text{ otherwise.} \qquad (8)$$

Here, $\sigma_{mp}^2$ is the element-wise variance of the original matrix, $Q = N/M \ge 1$ is the aspect ratio of the matrix, and the minimum and maximum eigenvalues, $\lambda^{\pm}$, are given by

$$\lambda^{\pm} = \sigma_{mp}^2 \left( 1 \pm \frac{1}{\sqrt{Q}} \right)^2 . \qquad (9)$$

(a) Different aspect ratios $Q$
(b) Different variance parameters $\sigma_{mp}$
Figure 4: Marchenko-Pastur (MP) distributions, see Eqns. (8) and (9), as the aspect ratio $Q$ and variance parameter $\sigma_{mp}$ are modified.

The shape of the MP distribution only depends on two parameters, the variance $\sigma_{mp}^2$ and the aspect ratio $Q$. See Figure 4 for an illustration. In particular, see Figure 4(a) for a plot of the MP distribution of Eqns. (8) and (9), for several values of $Q$; and see Figure 4(b) for a plot of the MP distribution for several values of $\sigma_{mp}$.

As a point of reference, for the reference values shown in blue in both subfigures, the mass of $\rho(\lambda)$ skews slightly to the left and is bounded within $[\lambda^-, \lambda^+]$. For fixed $\sigma_{mp}$, as $Q$ increases, the support (i.e., $[\lambda^-, \lambda^+]$) narrows, and $\rho(\lambda)$ becomes less skewed. As $Q \to 1$, the support widens and $\rho(\lambda)$ skews more leftward. Also, $\rho(\lambda)$ is concave for larger $Q$, and it is partially convex for smaller $Q$.

Although the MP distribution depends on $Q$ and $\sigma_{mp}^2$, in practice $Q$ is fixed, and thus we are interested in how $\sigma_{mp}^2$ varies—distributionally for random matrices, and empirically for weight matrices. Due to Eqn. (9), if $\sigma_{mp}^2$ is fixed, then $\lambda^+$ (i.e., the largest eigenvalue of the bulk, as well as $\lambda^-$) is determined, and vice versa. (In practice, relating $\sigma_{mp}^2$ and $\lambda^+$ raises some subtle technical issues, and we discuss these in Section 6.3.)
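A small NumPy sketch of Eqns. (8) and (9), useful for overlaying the theoretical MP density on an empirical ESD (the matrix sizes below are placeholders):

```python
# Sketch: the MP density of Eqn. (8) and the bulk edges of Eqn. (9), compared against
# the ESD of a Gaussian random matrix.  Matrix sizes here are placeholder choices.
import numpy as np

def mp_edges(Q, sigma2=1.0):
    """Bulk edges lambda_-, lambda_+ for aspect ratio Q = N/M >= 1 and variance sigma2."""
    return sigma2 * (1 - np.sqrt(1.0 / Q)) ** 2, sigma2 * (1 + np.sqrt(1.0 / Q)) ** 2

def mp_density(lam, Q, sigma2=1.0):
    """MP density rho(lambda); zero outside [lambda_-, lambda_+]."""
    lmin, lmax = mp_edges(Q, sigma2)
    rho = np.zeros_like(lam, dtype=float)
    inside = (lam > lmin) & (lam < lmax)
    rho[inside] = (Q / (2 * np.pi * sigma2)) * \
        np.sqrt((lmax - lam[inside]) * (lam[inside] - lmin)) / lam[inside]
    return rho

N, M = 2000, 1000                                   # Q = 2
W = np.random.normal(scale=1.0, size=(N, M))        # sigma_mp^2 = 1
lam_emp = np.linalg.eigvalsh(W.T @ W / N)           # empirical ESD of X = W^T W / N
grid = np.linspace(0.0, 1.1 * lam_emp.max(), 400)
rho_theory = mp_density(grid, Q=N / M, sigma2=1.0)  # should match a histogram of lam_emp
```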

The Quarter Circle Law for $Q = 1$.

A special case of Eqn. (8) arises when $Q = 1$, i.e., when $W$ is a square non-symmetric matrix. In this case, the eigenvalue density $\rho(\lambda)$ is very peaked with a bounded tail, and it is sometimes more convenient to consider the density of singular values of $W$, $\rho(\nu)$, which takes the form of a Quarter-Circle:

$$\rho(\nu) = \frac{1}{\pi \sigma_{mp}^2} \sqrt{4\sigma_{mp}^2 - \nu^2}, \qquad \nu \in [0, 2\sigma_{mp}] .$$

We will not pursue this further, but we saw this earlier, in Figure 3(b), with our toy MLP3 model.

Finite-size Fluctuations at the MP Edge.

In the infinite limit, all fluctuations in $\rho_N(\lambda)$ concentrate very sharply at the MP edge, $\lambda^{\pm}$, and the distribution of the maximum eigenvalues is governed by the TW Law. Even for a single finite-sized matrix, however, MP theory states that the upper edge of $\rho(\lambda)$ is very sharp; and even when the MP Law is violated, the TW Law, with finite-size corrections, works very well at describing the edge statistics. When these laws are violated, this is very strong evidence for the onset of more regular non-random structure in the DNN weight matrices, which we will interpret as evidence of Self-Regularization.

In more detail, in many cases, one or more of the empirical eigenvalues will extend beyond the sharp edge predicted by the MP fit, i.e., such that $\lambda_{max} > \lambda^+$ (where $\lambda_{max}$ is the largest eigenvalue of $X$). It will be important to distinguish the case that $\lambda_{max} \gtrsim \lambda^+$ simply due to the finite size of $W$ from the case that $\lambda_{max} \gg \lambda^+$ is “truly” outside the MP bulk. According to MP theory [22], for finite $(N, M)$, and with the $1/N$ normalization, the fluctuations at the bulk edge scale as

$$\Delta\lambda_M := |\lambda_{max} - \lambda^+| \sim \lambda^+ \, M^{-2/3} ,$$

where $\lambda^+$ is given by Eqn. (9). Since $Q = N/M$, we can also express this in terms of $N$, but with different prefactors [68]. Most importantly, within MP theory (and even more generally), the fluctuations, centered and rescaled, will follow TW statistics.

In the DNNs we consider, $M$ is at least several hundred, and so the maximum deviation due to these finite-size fluctuations is only a few percent of $\lambda^+$. In many cases, it will be obvious whether a given $\lambda$ is an outlier. When it is not, one could generate an ensemble of runs and study the information content of the eigenvalues (shown below) and/or apply TW theory (not discussed here).
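As a rough numerical sketch of this scale (prefactors and the precise N-versus-M dependence are omitted):

```python
# Rough sketch of the expected finite-size fluctuation scale at the MP bulk edge,
# Delta_lambda ~ lambda_+ * M^(-2/3); prefactors and exact N vs. M dependence omitted.
import numpy as np

def edge_uncertainty(Q, M, sigma2=1.0):
    lam_plus = sigma2 * (1 + np.sqrt(1.0 / Q)) ** 2
    return lam_plus * M ** (-2.0 / 3.0)

print(edge_uncertainty(Q=2.0, M=500))    # roughly a percent-level fraction of lambda_+
```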

Fitting MP Distributions.

Several technical challenges with fitting MP distributions, i.e., selecting the bulk edge $\lambda^+$ (equivalently, the variance $\sigma_{mp}^2$), are discussed in Section 6.3.

3.2 Heavy-Tailed extensions of MP theory


Generative Model | w/ elements from Universality class | Finite-$N$ Global shape $\rho_N(\lambda)$ | Limiting Global shape $\rho(\lambda)$, $N \to \infty$ | Bulk edge Local stats | (far) Tail Local stats
Basic MP | Gaussian | MP, i.e., Eqn. (8) | MP | TW | No tail.
Spiked-Covariance | Gaussian, + low-rank perturbations | MP + Gaussian spikes | MP | TW | Gaussian
Heavy tail, $4 < \mu$ | (Weakly) Heavy-Tailed | MP + PL tail | MP | Heavy-Tailed | Heavy-Tailed
Heavy tail, $2 < \mu < 4$ | (Moderately) Heavy-Tailed (or “fat tailed”) | PL | PL | No edge. | Frechet
Heavy tail, $0 < \mu < 2$ | (Very) Heavy-Tailed | PL | PL | No edge. | Frechet
Table 3: Basic MP theory, and the spiked and Heavy-Tailed extensions we use, including known, empirically-observed, and conjectured relations between them. The bulk edge and far tail statistics for the weakly Heavy-Tailed ($4 < \mu$) class are best described as following “TW with large finite size corrections” that are likely Heavy-Tailed [20], leading to bulk edge statistics and far tail statistics that are indistinguishable. The finite-$N$ global shapes for the Heavy-Tailed classes are phenomenological fits, describing large ($2 < \mu < 4$) or small ($4 < \mu$) finite-size corrections on the $N \to \infty$ behavior. See [38, 20, 19, 111, 7, 40, 8, 26, 22, 21] for additional details.

MP-based RMT is applicable to a wide range of matrices (even those with large low-rank perturbations to i.i.d. normal behavior); but it is not in general applicable when matrix elements are strongly-correlated. Strong correlations appear to be the case for many well-trained, production-quality DNNs. In statistical physics, it is common to model strongly-correlated systems by Heavy-Tailed distributions [134]. The reason is that these models exhibit, more or less, the same large-scale statistical behavior as natural phenomena in which strong correlations exist [134, 22]. Moreover, recent results from MP/RMT have shown that new Universality classes exist for matrices with elements drawn from certain Heavy-Tailed distributions [22].

We use these Heavy-Tailed extensions of basic MP/RMT to build an operational and phenomenological theory of Regularization in Deep Learning; and we use these extensions to justify our analysis of both Self-Regularization and Heavy-Tailed Self-Regularization. (The Universality of RMT is a concept broad enough to apply to classes of problems that appear well beyond its apparent range of validity. It is in this sense that we apply RMT to understand DNN Regularization.) Briefly, our theory for simple Self-Regularization is inspired by the Spiked-Covariance model of Johnstone [68] and its interpretation as a form of Self-Organization by Sornette [92]; and our theory for more sophisticated Heavy-Tailed Self-Regularization is inspired by the application of MP/RMT tools in quantitative finance by Bouchaud, Potters, and coworkers [49, 77, 78, 20, 19, 22, 24], as well as the relation of Heavy-Tailed phenomena more generally to Self-Organized Criticality in Nature [134]. Here, we highlight basic results for this generalized MP theory; see [38, 20, 19, 111, 7, 40, 8, 26, 22, 21] in the physics and mathematics literature for additional details.

Universality classes for modeling strongly correlated matrices.

Consider modeling $W$ as an $N \times M$ random matrix, with elements drawn from a Heavy-Tailed—e.g., a Pareto or Power Law (PL)—distribution:

$$P(W_{i,j}) \sim \frac{1}{|W_{i,j}|^{1+\mu}}, \qquad \mu > 0 . \qquad (10)$$

In these cases, if $W$ is element-wise Heavy-Tailed (Heavy-Tailed phenomena have many subtle properties [121]; we consider here only the simplest cases), then the ESD likewise exhibits Heavy-Tailed properties, either globally for the entire ESD and/or locally at the bulk edge.

Table 3 summarizes these (relatively) recent results, comparing basic MP theory, the Spiked-Covariance model, and Heavy-Tailed extensions of MP theory, including associated Universality classes. (We discuss Heavy-Tailed extensions to MP theory in this section. Extensions to large low-rank perturbations are more straightforward and are described in Section 5.3.) To apply the MP theory, at finite sizes, to matrices with elements drawn from a Heavy-Tailed distribution of the form given in Eqn. (10), then, depending on the value of $\mu$, we have one of the following three Universality classes (both $\mu = 2$ and $\mu = 4$ are “corner cases” that we do not expect to be able to resolve at finite $N$):

  • (Weakly) Heavy-Tailed, $4 < \mu$: Here, the ESD $\rho_N(\lambda)$ exhibits “vanilla” MP behavior in the infinite limit, and the expected mean value of the bulk edge is the MP edge $\lambda^+$ of Eqn. (9). Unlike standard MP theory, which exhibits TW statistics at the bulk edge, here the edge exhibits PL / Heavy-Tailed fluctuations at finite $N$. These finite-size effects appear in the edge / tail of the ESD, and they make it hard or impossible to distinguish the edge versus the tail at finite $N$.

  • (Moderately) Heavy-Tailed, $2 < \mu < 4$: Here, the ESD $\rho_N(\lambda)$ is Heavy-Tailed / PL in the infinite limit, approaching the form $\rho(\lambda) \sim \lambda^{-(1+\mu/2)}$. In this regime of $\mu$, there is no bulk edge. At finite size, the global ESD can be modeled by a PL of the form $\rho_N(\lambda) \sim \lambda^{-\alpha}$, for all $\lambda$ above some minimum value, but the slope and intercept must be fit, as they display very large finite-size effects. The maximum eigenvalues follow Frechet (not TW) statistics, and they have large finite-size effects. Even if the ESD tends to zero, the raw number of eigenvalues in the tail can still grow—just not as quickly as $N$ (i.e., we may expect some $\lambda_{max} \to \infty$ in the infinite limit, even though the eigenvalue density there tends to zero). Thus, at any finite $N$, $\rho_N(\lambda)$ is Heavy-Tailed, but the tail decays moderately quickly.

  • (Very) Heavy-Tailed, $0 < \mu < 2$: Here, the ESD $\rho_N(\lambda)$ is Heavy-Tailed / PL for all finite $N$, and as $N \to \infty$ it converges more quickly to a PL distribution with tails $\rho(\lambda) \sim \lambda^{-(1+\mu/2)}$. In this regime, there is no bulk edge, and the maximum eigenvalues follow Frechet (not TW) statistics. Finite-size effects exist here, but they are much smaller than in the $2 < \mu < 4$ regime.

(a) Three log-log histograms.
(b) Zoomed-in histogram.
Figure 5: Log-log histogram plots of the ESD for three Heavy-Tailed random matrices with the same aspect ratio $Q$ and different values of $\mu$, corresponding to the three Heavy-Tailed Universality classes ($0 < \mu < 2$ vs $2 < \mu < 4$ and $4 < \mu$) described in Table 3.

Visualizing Heavy-Tailed distributions.

It is often fruitful to perform visual exploration and classification of ESDs by plotting them on linear-linear coordinates, log-linear coordinates (linear horizontal/X axis and logarithmic vertical/Y axis), and/or log-log coordinates (logarithmic horizontal/X axis and logarithmic vertical/Y axis). It is known that data from a PL distribution will appear as a convex curve in a linear-linear plot and a log-linear plot and as a straight line in a log-log plot; and that data from a Gaussian distribution will appear as a bell-shaped curve in a linear-linear plot, as an inverted parabola in a log-linear plot, and as a strongly concave curve in a log-log plot. Examining data from an unknown ESD on different axes suggests a classification for them. (See Figures 5 and 16.) More quantitative analysis may lead to more definite conclusions, but that too comes with technical challenges.

To illustrate this, we provide a visual and operational approach to understand the limiting forms for different $\mu$. See Figure 5. Figure 5(a) displays the log-log histograms of the ESDs for three Heavy-Tailed random matrices, one drawn from each of the three Universality classes. For the smallest $\mu$ (blue, $0 < \mu < 2$), the log-log histogram is linear over roughly 5 log scales. If $N$ increases (not shown), $\lambda_{max}$ will grow, but this plot will remain linear, and the tail will not decay. In the infinite limit, the ESD will still be Heavy-Tailed. Contrast this with the ESD drawn from the same distribution, except with $2 < \mu < 4$ (green). Here, due to larger finite-size effects, most of the mass is confined to one or two log scales, and it starts to vanish beyond that. This effect is amplified for $4 < \mu$ (red), which shows almost no mass for eigenvalues beyond the MP bulk (i.e., $\lambda > \lambda^+$). Zooming in, in Figure 5(b), we see that the log-log plot is linear—in the central region only—and the tail vanishes very quickly. If $N$ increases (not shown), the ESD will remain Heavy-Tailed, but the mass will grow much more slowly than when $\mu$ is small. This illustrates that, while ESDs can be Heavy-Tailed at finite size, the tails decay at different rates for the different Heavy-Tailed Universality classes ($0 < \mu < 2$ vs $2 < \mu < 4$ vs $4 < \mu$).
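The qualitative behavior in Figure 5 can be reproduced with a short sketch like the following; the matrix sizes, the particular μ values, and the 1/N normalization (the vanilla MP convention rather than the proper scaling for μ < 2) are illustrative choices.

```python
# Sketch in the style of Figure 5: log-log ESD histograms for rectangular random
# matrices with Pareto-distributed elements.  Sizes, mu values, and the 1/N
# normalization (the vanilla MP convention) are illustrative choices.
import numpy as np
import matplotlib.pyplot as plt

def pareto_esd(mu, N=2000, M=1000, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.pareto(mu, size=(N, M)) * rng.choice([-1.0, 1.0], size=(N, M))
    return np.linalg.eigvalsh(W.T @ W / N)

for mu in (1.0, 3.0, 5.0):            # one value per Heavy-Tailed Universality class
    lam = pareto_esd(mu)
    lam = lam[lam > 0]
    bins = np.logspace(np.log10(lam.min()), np.log10(lam.max()), 100)
    plt.hist(lam, bins=bins, histtype="step", density=True, label=f"mu = {mu}")
plt.xscale("log"); plt.yscale("log")
plt.xlabel("lambda"); plt.ylabel("ESD"); plt.legend(); plt.show()
```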

Fitting PL distributions to ESD plots.

Once we have identified PL distributions visually (using a log-log histogram of the ESD, and looking for the visual characteristics of Figure 5), we can fit the ESD to a PL in order to obtain the exponent $\alpha$. For this, we use the Clauset-Shalizi-Newman (CSN) approach [33], as implemented in the python PowerLaw package [2] (see https://github.com/jeffalstott/powerlaw), which computes an $\alpha$ such that

$$\rho_{emp}(\lambda) \sim \lambda^{-\alpha}, \qquad \lambda > \lambda_{min} .$$
Generally speaking, fitting a PL has many subtleties, most beyond the scope of this paper [33, 55, 91, 100, 16, 73, 35, 2, 145, 58]. For example, care must be taken to ensure the distribution is actually linear (in some regime) on a log-log scale before applying the PL estimator, lest it give spurious results; and the PL estimator only works reasonably well for exponents within a limited range.
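A minimal sketch of the CSN fit using the powerlaw package cited above (the test matrix, its size, and μ = 3 are placeholders):

```python
# Sketch: CSN power-law fit of an ESD using the `powerlaw` package [2].  The test
# matrix and mu = 3 are placeholders; in practice `lam` would be a DNN layer's ESD.
import numpy as np
import powerlaw

rng = np.random.default_rng(0)
W = rng.pareto(3.0, size=(2000, 1000))           # Heavy-Tailed test matrix
lam = np.linalg.eigvalsh(W.T @ W / 2000)

fit = powerlaw.Fit(lam[lam > 0])                 # CSN estimator; xmin chosen automatically
print("fitted alpha =", fit.power_law.alpha, " xmin =", fit.xmin)

# Sanity-check the PL hypothesis against an alternative before trusting alpha.
R, p = fit.distribution_compare("power_law", "lognormal")
print("log-likelihood ratio R =", R, " p-value =", p)
```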

To illustrate this, consider Figure 6. In particular, Figure 6(a) shows that the CSN estimator performs well for the regime $0 < \mu < 2$, while for $2 < \mu < 4$ there are substantial deviations due to finite-size effects, and for $4 < \mu$ no reliable results are obtained; and Figures 6(b) and 6(c) show that the finite-size effects can be quite complex, depending on how the matrix dimensions are varied.

(a) $\alpha$ vs $\mu$.
(b) $\alpha$ vs $\mu$, for fixed $M$.
(c) $\alpha$ vs $\mu$, for fixed $N$.
Figure 6: Dependence of $\alpha$ (the fitted PL parameter) on $\mu$ (the hypothesized limiting PL parameter). In (6(a)), the PL exponent $\alpha$ is fit, using the CSN estimator, for the ESD of a random, rectangular Heavy-Tailed matrix, with elements drawn from a Pareto distribution. For $0 < \mu < 2$, finite-size effects are modest, and the ESD follows the theoretical prediction $\rho(\lambda) \sim \lambda^{-(1+\mu/2)}$. For $2 < \mu < 4$, the ESD still shows roughly linear behavior, but with significant finite-size effects, giving a more general phenomenological relation between $\alpha$ and $\mu$. For $4 < \mu$, the CSN method is known to fail to perform well. In (6(b)) and (6(c)), plots are shown for varying matrix dimensions, with $M$ and $N$ held fixed, respectively.

Identifying the Universality class.

Given a fitted $\alpha$, we identify the corresponding $\mu$ (as illustrated in Figure 6) and thus which of the three Heavy-Tailed Universality classes ($0 < \mu < 2$ or $2 < \mu < 4$ or $4 < \mu$, as described in Table 5) is appropriate to describe the system. For our theory, the following are particularly important points. First, observing a Heavy-Tailed ESD may indicate the presence of a scale-free DNN. This suggests that the underlying DNN is strongly-correlated, and that we need more than just a few separated spikes, plus some random-like bulk structure, to model the DNN and to understand DNN regularization. Second, this does not necessarily imply that the matrix elements of $W$ form a Heavy-Tailed distribution. Rather, the Heavy-Tailed distribution arises since we posit it as a model of the strongly correlated, highly non-random matrix $W$. Third, we conjecture that this is more general, and that very well-trained DNNs will exhibit Heavy-Tailed behavior in the ESDs of many of their weight matrices (as we have observed so far with many pre-trained models).

3.3 Eigenvector localization

When entries of a random matrix are drawn from distributions in the Gaussian Universality class, and under typical assumptions, eigenvectors tend to be delocalized, i.e., the mass of the eigenvector tends to be spread out on most or all the components of that vector. For other models, eigenvectors can be localized. For example, spike eigenvectors in Spiked-Covariance models as well as extremal eigenvectors in Heavy-Tailed random matrix models tend to be more localized [110, 24]. Eigenvector delocalization, in traditional RMT, is modeled using the Thomas Porter Distribution [118]. Since a typical bulk eigenvector $v$ should have maximum entropy, its components $v_i$ should be Gaussian distributed, according to:

Thomas Porter Distribution: $\quad P(v_i) \sim \exp\!\left(-\dfrac{v_i^2}{2\sigma_v^2}\right)$

Here, we normalize $v$ such that the empirical variance of the elements is unity, $\sigma_v^2 = 1$. Based on this, we can define several related eigenvector localization metrics.

  • The Generalized Vector Entropy, $\mathcal{S}(v)$, is computed using a histogram estimator.

  • The Localization Ratio, $\mathcal{L}(v)$, measures the sum of the absolute values of the elements of $v$, relative to the largest absolute value of an element of $v$, i.e., $\mathcal{L}(v) = \|v\|_1 / \|v\|_\infty$.

  • The Participation Ratio, $\mathcal{P}(v)$, is a robust variant of the Localization Ratio.

For all three metrics, the lower the value, the more localized the eigenvector tends to be. We use deviations from delocalization as a diagnostic that the corresponding eigenvector is more structured/regularized.
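A minimal sketch of these three metrics for a unit-normalized vector v (the histogram bin count and the closed form chosen for the Participation Ratio are standard choices assumed here, not taken verbatim from the paper):

```python
# Sketch of the three eigenvector localization metrics.  The histogram estimator's
# bin count and the standard participation-ratio formula are assumed choices here.
import numpy as np

def vector_entropy(v, bins=50):
    hist, _ = np.histogram(v, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log(p))

def localization_ratio(v):
    return np.abs(v).sum() / np.abs(v).max()        # ||v||_1 / ||v||_inf

def participation_ratio(v):
    return (v ** 2).sum() ** 2 / (v ** 4).sum()     # (sum v_i^2)^2 / sum v_i^4

# A delocalized (Gaussian-like) vector versus a localized one, for comparison:
# all three metrics come out much smaller for the localized vector.
rng = np.random.default_rng(0)
v_deloc = rng.normal(size=4096); v_deloc /= np.linalg.norm(v_deloc)
v_loc = np.zeros(4096); v_loc[:8] = rng.normal(size=8); v_loc /= np.linalg.norm(v_loc)
for v in (v_deloc, v_loc):
    print(vector_entropy(v), localization_ratio(v), participation_ratio(v))
```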

4 Empirical Results: ESDs for Existing, Pretrained DNNs

In this section, we describe our main empirical results for existing, pretrained DNNs. (A practical theory for DNNs should be applicable to very large—production-quality, and even pre-trained—models, as well as to models during training.) Early on, we observed that small DNNs and large DNNs have very different ESDs. For smaller models, ESDs tend to fit the MP theory well, with well-understood deviations, e.g., low-rank perturbations. For larger models, the ESDs almost never fit the theoretical MP distribution, and they frequently have a completely different functional form. We use RMT to compare and contrast the ESDs of a smaller, older NN and many larger, modern DNNs. For the small model, we retrain a modern variant of one of the very early and well-known Convolutional Nets—LeNet5. We use Keras (version 2), and we train LeNet5 on MNIST. For the larger, modern models, we examine selected layers from AlexNet, InceptionV3, and many other models (as distributed with pyTorch). Table 4 provides a summary of the models we analyzed in detail.



Used for: Section Layer Key observation in ESD(s)
MLP3
Initial illustration
of entropy and
spectral properties
2
FC1
FC2
MP Bulk+Spikes
LeNet5
Old state-of-the-art model
4.1 FC1
MP Bulk+Spikes,
with edge Bleeding-out
AlexNet
More recent pre-trained
state-of-the-art model
4.2
FC1
ESD Bulk-decay
into a Heavy-Tailed
Typical properties
from Heavy-Tailed
Universality classes
5.5 FC2
FC3
InceptionV3
More recent pre-trained
state-of-the-art model
with unusual properties
4.3 L226
Bimodal ESD w/
Heavy-Tailed envelope
4.3 L302
Bimodal and “fat” or
Heavy-Tailed ESD
MiniAlexNet
Detailed analysis
illustrating properties
as training knobs change
6
FC1
FC2
Exhibits all 5+1
Phases of Training
by changing batch size
Table 4: Description of main DNNs used in our analysis and the key observations about the ESDs of the specific layer weight matrices using RMT. Names in the “Key observation” column are defined in Section 5 and described in Table 7.

4.1 Example: LeNet5 (1998)

LeNet5 predates both the current Deep Learning revolution and the so-called AI Winter, dating back to the late 1990s [79]. It is the prototype early model for DNNs; it is the most widely-known example of a Convolutional Neural Network (CNN); and it was used in production systems for recognizing handwritten digits [79]. The basic design consists of Convolutional (Conv2D) and MaxPooling layers, followed by 2 Dense, or Fully Connected (FC), layers, FC1 and FC2. This design inspired modern DNNs for image classification, e.g., AlexNet, VGG16, and VGG19. All of these latter models consist of a few Conv2D and MaxPooling layers, followed by a few FC layers. Since LeNet5 is older, we actually recoded and retrained it. We used Keras 2.0, training with the AdaDelta optimizer on the MNIST data set. This model achieves high training and test accuracy on the default MNIST split. We analyze the ESD of the FC1 Layer (but not the FC2 Layer, since it has only 10 eigenvalues). The FC1 weight matrix is an $N \times M$ matrix with $M = 500$, and thus it yields 500 eigenvalues.

FC1: MP Bulk+Spikes, with edge Bleeding-out.

Figure 7 presents the ESD for FC1 of LeNet5, with Figure 7(a) showing the full ESD and Figure 7(b) showing the same ESD, zoomed-in along the X-axis to highlight smaller peaks outside the main bulk of our MP fit. In both cases, we show (red curve) our fit to the MP distribution $\rho(\lambda)$. Several things are striking. First, the bulk of the density has a large, MP-like shape, and the MP distribution fits this part of the ESD very well, including the fact that the ESD just below the best-fit bulk edge $\lambda^+$ is concave. Second, some eigenvalue mass is bleeding out from the MP bulk just above $\lambda^+$, although it is quite small. Third, beyond the MP bulk and this bleeding-out region, there are several clear outliers, or spikes, extending well beyond the bulk edge.

Summary.

The shape of the ESD, the quality of the global bulk fit, and the statistics and crisp shape of the local bulk edge all agree well with standard MP theory, or at least the variant of MP theory augmented with a low-rank perturbation. In this sense, this model can be viewed as a real-world example of the Spiked-Covariance model [68].

(a) Full ESD
(b) ESD, zoomed-in (with original histogram bins)
Figure 7: Full and zoomed-in ESD for LeNet5, Layer FC1. Overlaid (in red) are gross fits of the MP distribution (which fit the bulk of the ESD very well).

4.2 Example: AlexNet (2012)

AlexNet was the first modern DNN, and its spectacular performance opened the door for today’s revolution in Deep Learning. Specifically, it won the ImageNet ILSVRC2012 classification task [74], achieving a top-5 error far ahead of the first runner-up. AlexNet resembles a scaled-up version of the LeNet5 architecture; it consists of 5 convolutional layers, followed by 3 FC layers (the last being a softmax classifier). (It was the first CNN to win this competition, and it was also the first DNN to use ReLUs in a wide-scale competition. It also used a novel Local Contrast Normalization layer, which is no longer in widespread use, having been mostly replaced with Batch Normalization or similar methods.) We will analyze the version of AlexNet currently distributed with pyTorch (version 0.4.1). In this version, FC1 is a $9216 \times 4096$ matrix, with $Q \approx 2.25$; FC2 is a $4096 \times 4096$ matrix, with $Q = 1$; and FC3 is a $4096 \times 1000$ matrix, with $Q \approx 4.1$. Notice that FC3 is the final layer and connects AlexNet to the labels.

Figures 8, 9, and 10 present the ESDs for the weight matrices of AlexNet Layers FC1, FC2, and FC3, with Figures 8(a), 9(a), and 10(a) showing the full ESDs, and Figures 8(b), 9(b), and 10(b) showing the results “zoomed-in” along the X-axis. In each case, we present the best MP fits, as determined by holding the aspect ratio Q fixed, adjusting the σ parameter, and selecting the best bulk fit by visual inspection. Fitting σ fixes the bulk edge λ+, and the estimates differ for different layers because the matrices have different aspect ratios Q. In each case, the ESDs exhibit moderate to strong deviations from the best standard MP fit.
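The fitting procedure just described can be sketched as follows: Q is fixed by the matrix shape, the MP density is evaluated for several candidate values of σ, and the bulk fit is judged visually. This is our own minimal rendering of that procedure, with arbitrary stand-in values.

```python
# Sketch: Marchenko-Pastur density for aspect ratio Q and element variance
# sigma^2, overlaid on an ESD histogram for several candidate sigma values.
import numpy as np
import matplotlib.pyplot as plt

def mp_density(lam, Q, sigma=1.0):
    lam_minus = sigma**2 * (1.0 - 1.0 / np.sqrt(Q))**2
    lam_plus = sigma**2 * (1.0 + 1.0 / np.sqrt(Q))**2
    rho = np.zeros_like(lam)
    inside = (lam > lam_minus) & (lam < lam_plus)
    rho[inside] = (Q / (2.0 * np.pi * sigma**2 * lam[inside])
                   * np.sqrt((lam_plus - lam[inside]) * (lam[inside] - lam_minus)))
    return rho

W = np.random.randn(4096, 1000)                  # stand-in for a layer weight matrix
eigs = np.linalg.eigvalsh(W.T @ W / W.shape[0])  # its ESD
lam = np.linspace(1e-6, eigs.max(), 1000)

plt.hist(eigs, bins=100, density=True, alpha=0.5, label="ESD")
for sigma in (0.8, 1.0, 1.2):                    # candidate bulk fits
    plt.plot(lam, mp_density(lam, Q=4096 / 1000, sigma=sigma), label=f"MP, sigma={sigma}")
plt.xlabel("eigenvalue"); plt.ylabel("density"); plt.legend(); plt.show()
```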

FC1: Bulk-decay into Heavy-Tailed.

Consider first AlexNet FC1 (in Figures 8(a) and 8(b)). The overall range of the eigenvalues is comparable to that of LeNet5, but the full ESD is shaped very differently than any theoretical MP density ρ(λ), for any value of σ. The best MP fit (in red in Figure 8) does capture a good part of the eigenvalue mass, but there are important differences: the peak of the MP fit is not filled in, there is substantial eigenvalue mass bleeding out from the bulk, and the shape of the ESD is convex in the region near to and just above the best-fit bulk edge λ+. Contrast this with the excellent MP fit for the ESD of FC1 of LeNet5 (Figure 7(b)), where the red curve captures all of the bulk mass, and only a few outlying spikes appear. Moreover, and very importantly, in AlexNet FC1 the bulk edge is not crisp. In fact, it is not visible at all; it is defined only operationally, by the choice of the σ parameter. As such, the edge fluctuations do not resemble a TW distribution, and the bulk itself appears simply to decay into the heavy tail. Finally, a PL fit gives a good fit, with exponent α ≈ 2.29 (Table 5), suggesting a Heavy-Tailed μ (after accounting for finite-size effects).

FC2: (nearly very) Heavy-Tailed ESD.

Consider next AlexNet FC2 (in Figures 9(a) and 9(b)). This ESD differs even more profoundly from standard MP theory. Here, we could find no good MP fit, even by adjusting σ and the putative bulk edge λ+ simultaneously. The best MP fit (in red) does not fit the bulk part of the ESD at all: the fit suggests there should be significantly more bulk eigenvalue mass (i.e., a larger empirical variance) than is actually observed. In addition, as with FC1, the bulk edge is indeterminate by inspection; it is defined only by the crude fit we present, and any edge statistics obviously do not exhibit TW behavior. In contrast with MP curves, which are convex near the bulk edge, the entire ESD is concave (nearly) everywhere. Here, a PL fit gives a good fit, with α ≈ 2.25 (Table 5), smaller than for FC1 and FC3, indicating a correspondingly smaller (heavier-tailed) μ.

FC3: Heavy-Tailed ESD.

Consider finally AlexNet FC3 (in Figures 10(a) and 10(b)). Here, too, the ESD deviates strongly from the predictions of MP theory, both for the global bulk properties and for the local edge properties. A PL fit gives a good fit, with α ≈ 3.02 (Table 5), which is larger than for FC1 and FC2. This suggests a correspondingly larger μ (which is also shown with a log-log histogram plot in Figure 16 in Section 5 below).

Summary.

For all three layers, the shape of the ESD ρ(λ), the quality of the global bulk fit, and the statistics and shape of the local bulk edge are poorly described by standard MP theory. Even when we might think we have a moderately good MP fit, because the bulk shape is qualitatively captured by MP theory (at least by visual inspection), we may see a complete breakdown of RMT at the bulk edge, where we expect crisp TW statistics (or at least a concave envelope of support). In other cases, MP theory may be a poor description of even the bulk.

(a) Full ESD for FC1
(b) Zoomed-in ESD for FC1
Figure 8: ESD for Layer FC1 of AlexNet. Overlaid (in red) are gross fits of the MP distribution.
(a) Full ESD for FC2
(b) Zoomed-in ESD for FC2
Figure 9: ESD for Layer FC2 of AlexNet. Overlaid (in red) are gross fits of the MP distribution.
(a) Full ESD for FC3
(b) Zoomed-in ESD for FC3
Figure 10: ESD for Layer FC3 of AlexNet. Overlaid (in red) are gross fits of the MP distribution.

4.3 Example: InceptionV3 (2014)

In the few years after AlexNet, several new, deeper DNNs started to win the ILSVRC ImageNet competitions, including ZFNet (2013) [155], VGG (2014) [130], GoogLeNet/Inception (2014) [139], and ResNet (2015) [61]. We have observed that nearly all of these DNNs have properties that are similar to AlexNet. Rather than describe them all in detail, in Section 4.4 we perform power law fits on the Linear/FC layers in many of these models. Here, we want to look more deeply at the Inception model, since it displays some unique properties. (Indeed, these results suggest that Inception models do not truly account for all the correlations in the data.)

In 2014, the VGG [130] and GoogLeNet [139] models were close competitors in the ILSVRC2014 challenges. For example, GoogLeNet won the classification challenge, but VGG performed better on the localization challenge. These models were quite deep, with GoogLeNet having 22 layers, and VGG having 19 layers. The VGG model is 2X as deep as AlexNet, but it replaces each larger AlexNet filter with more, smaller filters. Presumably this deeper architecture, with more non-linearities, can capture the correlations in the network better. The VGG features of the second to last FC layer generalize well to other tasks. A downside of the VGG models is that they have a lot of parameters and that they use a lot of memory.

The GoogLeNet/Inception design resembles the VGG architecture, but it is even more computationally efficient, which (practically) means smaller matrices, fewer parameters (12X fewer than AlexNet), and a very different architecture, including no internal FC layers, except those connected to the labels. In particular, it was noted that most of the activations in these DNNs are redundant because they are so strongly correlated, so a sparse architecture should perform just as well, but with much less computational cost, if implemented properly to take advantage of low-level BLAS calculations on the GPU. Thus, the Inception module was designed. This module approximates a sparse Convolutional Net using many smaller, dense matrices, leading to many small filters of different sizes, concatenated together. The Inception modules are then stacked on top of each other to give the full DNN. GoogLeNet also replaces the later FC layers (i.e., those in AlexNet-like architectures) with global average pooling, leaving only a single FC / Dense layer, which connects the DNN to the labels. Because the network is so deep, it is also necessary to include an Auxiliary block that likewise connects to the labels, similar to the final FC layer. From this Auxiliary block, we can extract a single rectangular weight matrix. This gives 2 FC layers to analyze.

For our analysis of InceptionV3 [139], we select a layer (L226) from the Auxiliary block, as well as the final (L302) FC layer. Figure 11 presents the ESDs for InceptionV3 for Layer L226 and Layer L302, two large, fully-connected weight matrices with aspect ratios Q ≈ 1.3 and Q ≈ 2.05, respectively (see Table 5). We also show typical MP fits for matrices with the same aspect ratios Q. As with AlexNet, the ESDs for both the L226 and L302 layers display distinct and strong deviations from MP theory.

L226: Bimodal ESDs.

Consider first L226 of InceptionV3. Figure 11(a) displays the L226 ESD. (Recall that this is not a true Dense layer; it is part of the Inception Auxiliary module, and it looks very different from the other FC layers, both in AlexNet and below.) At first glance, we might hope to select a bulk edge λ+ near the dip between the two modes and treat the remaining eigenvalue mass as an extended spike; but this visually gives a terrible MP fit (not shown). Selecting a larger σ produces an MP fit with a reasonable shape for the envelope of support of the bulk; but this fit strongly over-estimates the bulk variance / Frobenius mass (in particular near the lower mode), and it strongly under-estimates the spike mass at larger eigenvalues. We expect this fit would fail any reasonable statistical confidence test for an MP distribution. As in all cases, numerous spikes extend all the way out to the largest eigenvalue, showing a longer, heavier tail than any MP fit. It is unclear whether or not the edge statistics are TW. There is no good MP fit for the ESD of L226, but it is also unclear whether this distribution is “truly” Heavy-Tailed or simply appears Heavy-Tailed as a result of the bimodality. Visually, at least the envelope of the L226 ESD resembles a Heavy-Tailed MP distribution. It is also possible that the DNN itself is not fully optimized, and we hypothesize that further refinements could lead to a true Heavy-Tailed ESD.

L302: Bimodal fat or Heavy-Tailed ESDs.

Consider next L302 of InceptionV3 (in Figure 11(b)). The ESD for L302 is slightly bimodal (on a log-log plot), but nowhere near as strongly as L226, and we cannot visually select any bulk edge λ+. The bulk barely fits any MP density; our best attempt is shown. Also, the global ESD has the wrong shape; and the MP fit is concave near the edge, where the ESD is convex, illustrating that the edge decays into the tail. For any MP fit, significant eigenvalue mass extends out continuously, forming a long tail reaching all the way to the largest eigenvalue. The ESD of L302 resembles that of the Heavy-Tailed FC2 layer of AlexNet, except for the small bimodal structure. These initial observations illustrate that we need a more rigorous approach to make strong statements about the specific kind of distribution (i.e., Pareto vs. other Heavy-Tailed distributions) and which Universality class it may lie in. We present an approach to resolving these technical details in Section 5.5.

(a) ESD for L226
(b) ESD for L302
Figure 11: ESD for Layers L226 and L302 in InceptionV3, as distributed with pyTorch. Overlaid (in red) are gross fits of the MP distribution (neither of which fits the ESD well).

4.4 Empirical results for other pre-trained DNNs

In addition to the models from Table 4 that we analyzed in detail, we have also examined the properties of a wide range of other pre-trained models, including models from both Computer Vision and Natural Language Processing (NLP). This includes models trained on ImageNet, distributed with the pyTorch package, including VGG16, VGG19, ResNet50, InceptionV3, etc. See Table 5. This also includes different NLP models, distributed in AllenNLP [51], including models for Machine Comprehension, Constituency Parsing, Semantic Role Labeling, Coreference Resolution, and Named Entity Recognition, giving a total of 84 linear layers. See Table 6. Rather remarkably, we have observed similar Heavy-Tailed properties, visually and in terms of Power Law fits, in all of these larger, state-of-the-art DNNs, leading to results that are nearly universal across these widely different architectures and domains. We have also seen Hard Rank deficiency in layers in several of these models. We provide a brief summary of those results here.

Model          Layer    Q      alpha  D       Best Fit
alexnet        17/FC1   2.25   2.29   0.0527  PL
               20/FC2   1      2.25   0.0372  PL
               22/FC3   4.1    3.02   0.0186  PL
densenet121    432      1.02   3.32   0.0383  PL
densenet161    572      2.21   3.45   0.0322  PL
densenet169    600      1.66   3.38   0.0396  PL
densenet201    712      1.92   3.41   0.0332  PL
inception v3   L226     1.3    5.26   0.0421  PL
               L302     2.05   4.48   0.0275  PL
resnet101      286      2.05   3.57   0.0278  PL
resnet152      422      2.05   3.52   0.0298  PL
resnet18       67       1.95   3.34   0.0342  PL
resnet34       115      1.95   3.39   0.0257  PL
resnet50       150      2.05   3.54   0.027   PL
vgg11          24       6.12   2.32   0.0327  PL
               27       1      2.17   0.0309  TPL
               30       4.1    2.83   0.0398  PL
vgg11 bn       32       6.12   2.07   0.0311  TPL
               35       1      1.95   0.0336  TPL
               38       4.1    2.99   0.0339  PL
vgg16          34       6.12   2.3    0.0277  PL
               37       1      2.18   0.0321  TPL
               40       4.1    2.09   0.0403  TPL
vgg16 bn       47       6.12   2.05   0.0285  TPL
               50       1      1.97   0.0363  TPL
               53       4.1    3.03   0.0358  PL
vgg19          40       6.12   2.27   0.0247  PL
               43       1      2.19   0.0313  PL
               46       4.1    2.07   0.0368  TPL
vgg19 bn       56       6.12   2.04   0.0295  TPL
               59       1      1.98   0.0373  TPL
               62       4.1    3.03   0.035   PL

Table 5: Fits of PL exponents for the ESDs of selected (2D Linear) layer weight matrices in pre-trained models distributed with pyTorch. Layer is identified by the enumerated id of the pyTorch model; Q is the aspect ratio of the weight matrix; alpha is the PL exponent, fit using the numerical method described in the text; D is the Kolmogorov–Smirnov distance, measuring the goodness-of-fit of the numerical fitting; and “Best Fit” indicates whether the fit is better described as a PL (Power Law) or TPL (Truncated Power Law) (no fits were found to be better described by an Exponential or LogNormal distribution).
Model Problem Domain Linear Layers
MC Machine Comprehension 16
CP Constituency Parsing 17
SRL Semantic Role Labeling 32
COREF Coreference Resolution 4
NER Named Entity Recognition 16
Table 6: Allen NLP Models and number of Linear Layers per model examined.

Power Law Fits.

We have performed Power Law (PL) fits for the ESDs of selected (linear) layers from all of these pre-trained ImageNet and NLP models. (We use the default method, MLE for continuous distributions, and we set xmax to the maximum eigenvalue of the ESD.) Table 5 summarizes the detailed results for the ImageNet models. Several observations can be made. First, all of our fitted exponents α, except for certain layers in InceptionV3, appear to lie in a range where the CSN method is known to perform well. Second, we also check whether a PL is the best fit by comparing the distribution to a Truncated Power Law (TPL), as well as to exponential, stretched-exponential, and log-normal distributions. The column “Best Fit” reports the best distributional fit. In all cases, we find that either a PL or a TPL fits best, with TPL being more common for smaller values of α. Third, even when taking into account the large finite-size effects illustrated in Figure 6, nearly all of the ESDs appear to fall into the Heavy-Tailed Universality class with 2 < μ < 4. Figure 12 displays the distribution of PL exponents for each set of models: Figure 12(a) shows the fitted power law exponents for all of the linear layers in the pre-trained ImageNet models available in pyTorch (Table 5), and Figure 12(b) shows the same for the pre-trained models available in AllenNLP (Table 6). Overall, counting both the ImageNet layers of Table 5 and the 84 AllenNLP FC layers, the vast majority of layers have exponents α in this Heavy-Tailed range, and nearly all of the rest are only slightly larger. One exception, InceptionV3, was discussed above, precisely because it is unusual, leading to an anomalously large value of α due to the dip in its ESD.
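For concreteness, the following sketch shows one way to carry out such a fit with the open-source powerlaw package (an implementation of the Clauset–Shalizi–Newman estimators); the package choice and the synthetic data are our assumptions, not a record of the exact pipeline used here.

```python
# Sketch: PL fit of an ESD via MLE for continuous data, with xmax set to the
# maximum eigenvalue, plus a likelihood-ratio comparison against a TPL.
import numpy as np
import powerlaw

def fit_esd_powerlaw(eigs):
    eigs = np.asarray(eigs)
    fit = powerlaw.Fit(eigs, xmax=eigs.max())
    R, p = fit.distribution_compare('power_law', 'truncated_power_law')
    best = 'PL' if R > 0 else 'TPL'
    return fit.power_law.alpha, fit.power_law.D, best, p

# Synthetic heavy-tailed "ESD" as a stand-in for a real layer spectrum.
eigs = np.random.pareto(2.5, size=4096) + 1.0
alpha, D, best, p = fit_esd_powerlaw(eigs)
print(f"alpha={alpha:.2f}  KS distance D={D:.4f}  best fit={best}  (p={p:.3f})")
```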

Rank Collapse.

RMT also predicts that, for matrices with Q > 1, the minimum singular value will be greater than zero, i.e., bounded away from zero. We test this by again looking at all of the FC layers in the pre-trained ImageNet and AllenNLP models. See Figure 13 for a summary of the results. While the ImageNet models mostly follow this rule, a small number of their FC layers have a minimum singular value very close to zero, near the numerical threshold for being identically zero. In these few cases, the ESD still exhibits Heavy-Tailed properties, but there is rank loss, ranging from a single near-zero eigenvalue up to a noticeable fraction of the eigenvalue mass. For the NLP models, we see no rank collapse, i.e., all of the AllenNLP layers have a minimum singular value bounded away from zero.
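A simple way to screen for this kind of Hard Rank loss is to compute the minimum singular value of every Linear layer in a pre-trained model; the sketch below does this for two torchvision models (the model list and the pretrained flag are stand-ins for whatever is being audited).

```python
# Sketch: flag potential rank collapse by checking the minimum singular value
# of every Linear layer in a pre-trained model.
import torch
import torchvision.models as models

def min_singular_values(model):
    out = {}
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Linear):
            W = module.weight.detach().cpu()
            out[name] = torch.linalg.svdvals(W).min().item()
    return out

for arch in ("resnet18", "vgg16"):
    model = getattr(models, arch)(pretrained=True)
    for name, smin in min_singular_values(model).items():
        print(f"{arch} {name}: min singular value = {smin:.3e}")
```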

(a) ImageNet pyTorch models
(b) AllenNLP models
Figure 12: Distribution of power law exponents for linear layers in pre-trained models trained on ImageNet, available in pyTorch, and for those NLP models, available in AllenNLP.
(a) ImageNet pyTorch models
(b) AllenNLP models
Figure 13: Distribution of minimum singular values for linear layers in pre-trained models trained on ImageNet, available in pyTorch, and for those NLP models, available in AllenNLP.

4.5 Towards a theory of Self-Regularization

In a few cases (e.g., LeNet5 in Section 4.1), MP theory appears to apply to the bulk of the ESD, with only a few outlying eigenvalues larger than the bulk edge. In other, more realistic cases (e.g., AlexNet and InceptionV3 in Sections 4.2 and 4.3, respectively, and every other large-scale DNN we have examined, as summarized in Section 4.4), the ESDs do not resemble anything predicted by standard RMT/MP theory. This should not be unexpected: a well-trained DNN should have highly non-random, strongly-correlated weight matrices W, in which case MP theory would not seem to apply. Moreover, except for InceptionV3, which was chosen to illustrate several unusual properties, nearly every DNN displays Heavy-Tailed properties such as those seen in AlexNet.

These empirical results suggest the following: first, that we can construct an operational and phenomenological theory (both to obtain fundamental insights into DNN regularization and to help guide the training of very large DNNs); and second, that we can build this theory by applying the full machinery of modern RMT to characterize the state of the DNN weight matrices.

For older and/or smaller models, like LeNet5, the bulk of the ESD can be well-fit by a theoretical MP density ρ(λ), potentially with several distinct, outlying spikes above the bulk edge λ+. This is consistent with the Spiked-Covariance model of Johnstone [68], a simple perturbative extension of the standard MP theory. (It is also consistent with Heavy-Tailed extensions of MP theory, for larger values of the PL exponent.) This is also reminiscent of traditional Tikhonov regularization, in that there is a “size scale” separating signal (spikes) from noise (bulk). In this sense, the small NNs of yesteryear, and the smallish models used in many research studies, may in fact behave more like traditional ML models. In the context of disordered systems theory, as developed by Sornette [92], this model is a form of Self-Organization. Putting this all together demonstrates that the DNN training process itself engineers a form of implicit Self-Regularization into the trained model.

For large, deep, state-of-the-art DNNs, our observations suggest that there are profound deviations from traditional RMT. These networks are reminiscent of strongly-correlated disordered systems that exhibit Heavy-Tailed behavior. What is this regularization, and how is it related to our observations of implicit Tikhonov-like regularization in LeNet5?

To answer this, recall that similar behavior arises in strongly-correlated physical systems, where it is known that strongly-correlated systems can be modeled by random matrices with entries drawn from non-Gaussian Universality classes [134], e.g., PL or other Heavy-Tailed distributions. Thus, when we observe that W has Heavy-Tailed properties, we can hypothesize that W is strongly-correlated, and we can model it with a Heavy-Tailed distribution. (For DNNs, these correlations arise in the weight matrices during Backprop training, at least when training on data of reasonable quality; that is, the weight matrices “learn” the correlations in the data.) Then, upon closer inspection, we find that the ESDs of large, modern DNNs behave as expected when viewed through the lens of Heavy-Tailed variants of RMT. Importantly, unlike the Spiked-Covariance case, which has a sharp scale cut-off at the bulk edge λ+, in these very strongly Heavy-Tailed cases, correlations appear on every size scale, and we cannot find a clean separation between the MP bulk and the spikes. These observations demonstrate that modern, state-of-the-art DNNs exhibit a new form of Heavy-Tailed Self-Regularization.
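To illustrate the contrast between the two Universality classes, the following sketch (our own toy construction, with arbitrary sizes and exponent) compares the ESD of a matrix with Gaussian entries to that of a matrix with heavy-tailed (Pareto) entries; the latter produces a long tail with no sharp bulk edge, mimicking the behavior described above.

```python
# Sketch: ESD of a random matrix with Gaussian entries vs. heavy-tailed
# (Pareto) entries. The heavy-tailed ESD decays into a long tail with no
# sharp bulk edge.
import numpy as np
import matplotlib.pyplot as plt

N, M, mu = 2000, 1000, 2.5            # mu < 4: heavy-tailed entry distribution
W_gauss = np.random.randn(N, M)
signs = np.random.choice([-1, 1], size=(N, M))
W_heavy = signs * np.random.pareto(mu, size=(N, M))

for W, label in [(W_gauss, "Gaussian entries"), (W_heavy, f"Pareto entries (mu={mu})")]:
    eigs = np.linalg.eigvalsh(W.T @ W / N)
    plt.hist(np.log10(eigs + 1e-12), bins=100, density=True, alpha=0.5, label=label)
plt.xlabel("log10(eigenvalue)")
plt.ylabel("density")
plt.legend()
plt.show()
```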

In the next few sections, we construct and test (on miniature AlexNet) our new theory.

5 5+1 Phases of Regularized Training

In this section, we develop an operational and phenomenological theory for DNN Self-Regularization that is designed to address questions such as the following. How does DNN Self-Regularization differ between older models like LeNet5 and newer models like AlexNet or Inception? What happens to the Self-Regularization when we adjust the numerous knobs and switches of the solver itself during SGD/Backprop training? How are knobs such as early stopping, batch size, and learning rate related to more familiar regularizers like Weight Norm constraints and Tikhonov regularization? Our theory builds on the empirical results of Section 4, and it has consequences and makes predictions that we test in Section 6.

MP Soft Rank.

We first define a metric, the MP Soft Rank (R_mp), that is designed to capture the “size scale” of the noise part of the layer weight matrix W, relative to the largest eigenvalue λ_max of its ESD. Going beyond purely spectral measures, this metric exploits MP theory in an essential way.

Let us first assume that MP theory fits at least the bulk of ρ(λ). Then, we can identify a bulk edge λ+ and a bulk variance σ², and define the MP Soft Rank as the ratio of λ+ and λ_max:

R_mp(W) := λ+ / λ_max.    (11)

Clearly, 0 ≤ R_mp(W) ≤ 1; R_mp(W) = 1 for a purely random matrix (as in Section 5.1); and for a matrix with an ESD with outlying spikes (as in Section 5.3), λ_max > λ+, and R_mp(W) < 1. If there is no good MP fit because the entire ESD is well-approximated by a Heavy-Tailed distribution (as described in Section 5.5, e.g., for a strongly-correlated weight matrix), then we can define λ+ = 0 and still use Eqn. (11), in which case R_mp(W) = 0.

The MP Soft Rank should be interpreted differently than the Stable Rank (R_s), which is proportional to the bulk MP variance divided by λ_max:

R_s(W) = ||W||_F² / ||W||_2² = (Σ_i λ_i) / λ_max.    (12)

As opposed to the Stable Rank, the MP Soft Rank is defined in terms of the MP distribution, and it depends on how the bulk of the ESD is fit. While the Stable Rank indicates how many eigencomponents are necessary for a relatively good low-rank approximation of an arbitrary matrix, the MP Soft Rank describes how well MP theory fits part of the matrix ESD ρ(λ). Empirically, R_mp and R_s often correlate and track similar changes. Importantly, though, there may be no good low-rank approximation of the layer weight matrices of a DNN, especially a well-trained one.
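As a minimal sketch of how both quantities might be computed in practice (the σ used for the bulk edge is whatever value was selected by the MP fit, so it is an input here, not something the code determines):

```python
# Sketch: Stable Rank R_s = sum_i(lambda_i) / lambda_max and MP Soft Rank
# R_mp = lambda_plus / lambda_max, with lambda_plus taken from a fitted MP bulk.
import numpy as np

def stable_rank(eigs):
    return eigs.sum() / eigs.max()

def mp_soft_rank(eigs, Q, sigma):
    lam_plus = sigma**2 * (1.0 + 1.0 / np.sqrt(Q))**2   # MP bulk edge for the fit
    return lam_plus / eigs.max()

# Toy example: a random bulk plus one artificial spike.
N, M = 2000, 1000
W = np.random.randn(N, M)
eigs = np.linalg.eigvalsh(W.T @ W / N)
eigs = np.append(eigs, 10.0)                             # add a spike well above the bulk
print("Stable Rank:  ", stable_rank(eigs))
print("MP Soft Rank: ", mp_soft_rank(eigs, Q=N / M, sigma=1.0))
```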


Phase | Operational Definition | Informal Description via Eqn. (13) | Edge/tail Fluctuation Comments | Illustration and Description
Random-like | ESD well-fit by MP with appropriate σ | W_rand random; Δ zero or small | λ+ is sharp, with TW statistics | Fig. 14(a) and Sxn. 5.1
Bleeding-out | ESD Random-like, excluding eigenmass just above λ+ | W has eigenmass at the bulk edge as spikes “pull out”; Δ medium | BPP transition; λ_max and λ+ separate | Fig. 14(b) and Sxn. 5.2
Bulk+Spikes | ESD Random-like plus spikes well above λ+ | W_rand well-separated from low-rank Δ; Δ larger | λ+ is TW, λ_max is Gaussian | Fig. 14(c) and Sxn. 5.3
Bulk-decay | ESD less Random-like; Heavy-Tailed eigenmass above λ+; some spikes | Complex Δ with correlations that don’t fully enter the spike | Edge above λ+ is not concave | Fig. 14(d) and Sxn. 5.4
Heavy-Tailed | ESD better described by Heavy-Tailed RMT than Gaussian RMT | W_rand is small; Δ is large and strongly-correlated | No good λ+; λ_max ≫ λ+ | Fig. 14(e) and Sxn. 5.5
Rank-collapse | ESD has large-mass spike at λ = 0 | W very rank-deficient; over-regularization | - | Fig. 14(f) and Sxn. 5.6
Table 7: The 5+1 phases of learning we identified in DNN training. We observed Bulk+Spikes and Heavy-Tailed in existing trained models (LeNet5 and AlexNet/InceptionV3, respectively; see Section 4); and we exhibited all 5+1 phases in a simple model (MiniAlexNet; see Section 7).

Visual Taxonomy.

We characterize implicit Self-Regularization, both for DNNs during SGD training as well as for pre-trained DNNs, as a visual taxonomy of 5+1 Phases of Training (Random-like, Bleeding-out, Bulk+Spikes, Bulk-decay, Heavy-Tailed, and Rank-collapse). See Table 7 for a summary. The 5+1 phases can be ordered, with each successive phase corresponding to a smaller Stable Rank / MP Soft Rank and to progressively more Self-Regularization than previous phases. Figure 14 depicts typical ESDs for each phase, with the MP fits (in red). Earlier phases of training correspond to the final state of older and/or smaller models like LeNet5 and MLP3. Later phases correspond to the final state of more modern models like AlexNet, Inception, etc. Thus, while we can describe this in terms of SGD training, this taxonomy does not just apply to the temporal ordering given by the training process. It also allows us to compare different architectures and/or amounts of regularization in a trained—or even pre-trained—DNN.

Each phase is visually distinct, and each has a natural interpretation in terms of RMT. One consideration is the global properties of the ESD: how well all or part of the ESD is fit by an MP distribution, for some value of σ, or how well all or part of the ESD is fit by a Heavy-Tailed or PL distribution, for some value of a PL parameter. A second consideration is the local properties of the ESD: the form of the fluctuations, in particular around the edge λ+ or around the largest eigenvalue λ_max. For example, the shape of the ESD near to and immediately above λ+ is very different in Figure 14(a) and Figure 14(c) (where there is a crisp edge) versus Figure 14(b) (where the ESD is concave) versus Figure 14(d) (where the ESD is convex). Gaussian-based RMT (when elements are drawn from the Gaussian Universality class) versus Heavy-Tailed RMT (when elements are drawn from a Heavy-Tailed Universality class) provides guidance, as we describe below.

As an illustration, Figure 15 depicts the 5+1 phases for a typical (hypothetical) run of Backprop training for a modern DNN. Figure 15(a) illustrates that we can track the decrease in MP Soft Rank as W changes from an initial random (Gaussian-like) matrix to its final form; and Figure 15(b) illustrates that (at least for the early phases) we can fit its ESD (or the bulk of its ESD) using MP theory, with eigenvalues above the bulk edge λ+ corresponding to non-random signal eigendirections. Observe that there are eigendirections (below λ+) that fit the MP bulk very well, there are eigendirections (well above λ+) that correspond to a spike, and there are eigendirections (just slightly above λ+) with (convex) curvature more like Figure 14(d) than (the concave curvature of) Figure 14(b). Thus, Figure 15(b) is Bulk+Spikes, with early indications of Bulk-decay.

(a) Random-like.
(b) Bleeding-out.
(c) Bulk+Spikes.
(d) Bulk-decay.
(e) Heavy-Tailed.
(f) Rank-collapse.
Figure 14: Taxonomy of trained models. Starting off with an initial random or Random-like model (14(a)), training can lead to a Bulk+Spikes model (14(c)), with data-dependent spikes on top of a random-like bulk. Depending on the network size and architecture, properties of training data, etc., additional training can lead to a Heavy-Tailed model (14(e)), a high-quality model with long-range correlations. An intermediate Bleeding-out model (14(b)), where spikes start to pull out from the bulk, and an intermediate Bulk-decay model (14(d)), where correlations start to degrade the separation between the bulk and spikes, leading to a decay of the bulk, are also possible. In extreme cases, a severely over-regularized model (14(f)) is possible.
(a) Depiction of how decreasing MP Soft Rank corresponds to different phases of training.
(b) Depiction of how each phase is related to the MP Soft Rank.
Figure 15: Pictorial illustration of the 5 Phases of Training and the MP Soft Rank

Theory of Each Phase.

RMT provides more than simple visual insights, and we can use RMT to differentiate between the 5+1 Phases of Training using simple models that qualitatively describe the shape of each ESD. In each phase, we model the weight matrices W as “noise plus signal,” where the “noise” is modeled by a random matrix W_rand, with entries drawn from the Gaussian Universality class (well-described by traditional MP theory), and the “signal” is a (small or very large) correction Δ:

W ≃ W_rand + Δ.    (13)

Table 7 summarizes the theoretical model for each phase. Each model uses RMT to describe the global shape of the ESD ρ(λ), the local shape of the fluctuations at the bulk edge, and the statistics and information in the outlying spikes, including possible Heavy-Tailed behaviors.

In the first phase (Random-like), the ESD is well-described by traditional MP theory, in which a random matrix has entries drawn from the Gaussian Universality class. This does not mean that the weight matrix W is random, but it does mean that the signal in W is too weak to be seen when viewed through the lens of the ESD. In the next phases (Bleeding-out, Bulk+Spikes), and/or for small networks such as LeNet5, Δ is a relatively small perturbative correction to W_rand, and vanilla MP theory (as reviewed in Section 3.1) can be applied, at least to the bulk of the ESD. In these phases, we model the noise part by a vanilla random matrix W_rand (for appropriate parameters Q and σ), and the MP Soft Rank remains relatively large. In the Bulk+Spikes phase, the model resembles a Spiked-Covariance model, and the Self-Regularization resembles Tikhonov regularization.

In later phases (Bulk-decay, Heavy-Tailed), and/or for modern DNNs such as AlexNet and InceptionV3, Δ becomes more complex and increasingly dominates over W_rand. For these more strongly-correlated phases, W_rand is relatively much weaker, and the MP Soft Rank collapses. Consequently, vanilla MP theory is not appropriate, and instead the Self-Regularization becomes Heavy-Tailed. In these phases, we treat the noise term W_rand as small, and we model the properties of Δ with Heavy-Tailed extensions of vanilla MP theory (as reviewed in Section 3.2), i.e., with the Heavy-Tailed, non-Gaussian Universality classes that are more appropriate for modeling strongly-correlated systems. In these phases, the strongly-correlated model is still regularized, but in a very non-traditional way. The final phase, the Rank-collapse phase, is a degenerate case that is a prediction of the theory.
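The following sketch is our own toy illustration of the “noise plus signal” decomposition of Eqn. (13), under arbitrary sizes and strengths: a low-rank Δ added to a Gaussian W_rand yields a Bulk+Spikes ESD, while a heavy-tailed Δ swamps the bulk and yields a Bulk-decay/Heavy-Tailed ESD.

```python
# Sketch: the "noise plus signal" model W = W_rand + Delta of Eqn. (13).
# A low-rank Delta produces a Bulk+Spikes ESD; a heavy-tailed Delta produces
# a Heavy-Tailed ESD.
import numpy as np

def esd(W):
    N = W.shape[0]
    return np.linalg.eigvalsh(W.T @ W / N)

N, M, rank = 2000, 1000, 5
W_rand = np.random.randn(N, M)

# Bulk+Spikes: add a rank-5 "signal" to the random base.
U, V = np.random.randn(N, rank), np.random.randn(M, rank)
Delta_low_rank = 0.2 * U @ V.T
print("Bulk+Spikes, top eigenvalues: ", np.sort(esd(W_rand + Delta_low_rank))[-8:])

# Heavy-Tailed: model the strongly-correlated signal with Pareto entries.
signs = np.random.choice([-1, 1], size=(N, M))
Delta_heavy = signs * np.random.pareto(2.5, size=(N, M))
print("Heavy-Tailed, top eigenvalues:", np.sort(esd(W_rand + Delta_heavy))[-8:])
```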

We now describe in more detail each phase in turn.

5.1 Random-like

In the first phase, the Random-like phase, shown in Figure 14(a), the DNN weight matrices resemble a Gaussian random matrix. The ESD is easily fit by an MP distribution with the same aspect ratio Q, by fitting the empirical variance σ². Here, σ² is the element-wise variance (which depends on the normalization of W).

Of course, an initial random weight matrix W will show a near-perfect MP fit. Even in well-trained DNNs, however, the empirical ESDs may be Random-like, even when the model has a non-zero, and even somewhat large, generalization accuracy. (In particular, as described below, we observe this with toy models when trained with very large batch sizes.) That is, being fit well by an MP distribution does not imply that the weight matrix is random. It simply implies that W, while having structure, can be modeled as the sum of a random “noise” matrix W_rand, with the same Q and σ, and some small-sized matrix Δ_small, as:

W ≃ W_rand + Δ_small,

where Δ_small represents the “signal” learned during the training process. In this case, λ_max is sharply bounded to the edge of the MP distribution, up to small finite-size (TW-scale) fluctuations.

5.2 Bleeding-out

In the second phase, the Bleeding-out phase, shown in Figure 14(b), the bulk of the ESD still looks reasonably random, except for one or a small number of eigenvalues that extend at or just beyond the MP edge λ+. That is, for the given value of Q, we can choose a σ (or λ+) parameter so that: (1) most of the ESD is well-fit; and (2) the part of the ESD that is not well-fit consists of a “shelf” of eigenvalue mass, much more than expected by chance, lying just above λ+.

This corresponds to modeling W as the sum of a random “noise” matrix W_rand and some medium-sized matrix Δ_medium, as:

W ≃ W_rand + Δ_medium,

where Δ_medium represents the “signal” learned during the training process.

As the spikes just begin to pull out from the bulk, i.e., when the bleeding-out mass is small, it may be difficult to determine unambiguously whether any particular eigenvalue is spike or bulk. The reason is that, since the matrix is of finite size, we expect the spike locations to be Gaussian-distributed, with fluctuations that shrink with the matrix size. One option is to try to estimate λ+ precisely from a single run. Another option is to perform an ensemble of runs and plot the ESD of the ensemble. Then, if the model is in the Bleeding-out phase, there will be a small bump of eigenvalue mass, shaped like a Gaussian (we are being pedantic here since, in later phases, the bleeding-out mass will be Heavy-Tailed, not Gaussian), which is very close to, but bleeding out from, the bulk edge.

When modeling DNN training in terms of RMT and MP theory, the transition from Random-like to Bleeding-out corresponds to the so-called BPP phase transition [9, 23, 45, 20]. This transition represents a “condensation” of the eigenvector corresponding to the largest eigenvalue λ_max onto the eigenvalue of the rank-one (or, more generally, rank-k, if the perturbation is of higher rank) perturbation Δ [23].
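The ensemble experiment described above can be sketched as follows; the matrix sizes and the rank-one perturbation strength are our own toy choices, tuned to sit just above the BPP threshold so that λ_max fluctuates in a narrow bump just outside the MP bulk edge.

```python
# Sketch: ensemble of runs near the BPP transition. A rank-one perturbation of
# strength just above threshold makes lambda_max "bleed out": it concentrates
# in a narrow, approximately Gaussian bump just outside the MP bulk edge.
import numpy as np

N, M = 1000, 500
Q = N / M
bulk_edge = (1.0 + 1.0 / np.sqrt(Q))**2        # MP lambda_plus for sigma = 1

rng = np.random.default_rng(0)
u = rng.standard_normal(N); u /= np.linalg.norm(u)
v = rng.standard_normal(M); v /= np.linalg.norm(v)

lam_max = []
for _ in range(100):                           # ensemble of runs
    W = rng.standard_normal((N, M)) + 30.0 * np.outer(u, v)   # weak rank-one signal
    lam_max.append(np.linalg.eigvalsh(W.T @ W / N).max())

print(f"bulk edge       = {bulk_edge:.3f}")
print(f"lambda_max mean = {np.mean(lam_max):.3f}, std = {np.std(lam_max):.4f}")
```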

5.3 Bulk+Spikes

In the third phase, the Bulk+Spikes phase, shown in Figure 14(c), the bulk of the ESD still looks reasonably random, except for one or a small number of eigenvalues that extend well beyond the MP edge λ+. That is, for the given value of Q, we can choose a σ (or λ+) parameter so that: (1) most of the ESD is well-fit; and (2) the part of the ESD that is not well-fit consists of several (K) eigenvalues, or Spikes, that are much larger than λ+: