Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning
Abstract
Random Matrix Theory (RMT) is applied to analyze the weight matrices of Deep Neural Networks (DNNs), including both production quality, pretrained models such as AlexNet and Inception, and smaller models trained from scratch, such as LeNet5 and a miniature AlexNet. Empirical and theoretical results clearly indicate that the DNN training process itself implicitly implements a form of Self-Regularization, implicitly sculpting a more regularized energy or penalty landscape. In particular, the empirical spectral density (ESD) of DNN layer matrices displays signatures of traditionally-regularized statistical models, even in the absence of exogenously specifying traditional forms of explicit regularization, such as Dropout or Weight Norm constraints. Building on relatively recent results in RMT, most notably its extension to Universality classes of Heavy-Tailed matrices, and applying them to these empirical results, we develop a theory to identify 5+1 Phases of Training, corresponding to increasing amounts of Implicit Self-Regularization. These phases can be observed during the training process as well as in the final learned DNNs. For smaller and/or older DNNs, this Implicit Self-Regularization is like traditional Tikhonov regularization, in that there is a “size scale” separating signal from noise. For state-of-the-art DNNs, however, we identify a novel form of Heavy-Tailed Self-Regularization, similar to the self-organization seen in the statistical physics of disordered systems (such as classical models of actual neural activity). This results from correlations arising at all size scales, which for DNNs arises implicitly due to the training process itself. This implicit Self-Regularization can depend strongly on the many knobs of the training process. In particular, by exploiting the generalization gap phenomena, we demonstrate that we can cause a small model to exhibit all 5+1 phases of training simply by changing the batch size.
This demonstrates that—all else being equal—DNN optimization with larger batch sizes leads to less-well implicitly-regularized models, and it provides an explanation for the generalization gap phenomenon. Our results suggest that large, well-trained DNN architectures should exhibit Heavy-Tailed Self-Regularization, and we discuss the theoretical and practical implications of this.
Contents
 1 Introduction
 2 Simple Capacity Metrics and Transitions during Backprop
 3 Basic Random Matrix Theory (RMT)
 4 Empirical Results: ESDs for Existing, Pretrained DNNs
 5 5+1 Phases of Regularized Training
 6 Empirical Results: Detailed Analysis on Smaller Models
 7 Explaining the Generalization Gap by Exhibiting the Phases
 8 Discussion and Conclusion
1 Introduction
Very large, very deep neural networks (DNNs) have received attention as a general purpose tool for solving problems in machine learning (ML) and artificial intelligence (AI), and they perform remarkably well on a wide range of traditionally hard if not impossible problems, such as speech recognition, computer vision, and natural language processing. The conventional wisdom seems to be “the bigger the better,” “the deeper the better,” and “the more hyperparameters the better.” Unfortunately, this usual modus operandi leads to large, complicated models that are extremely hard to train, that are extremely sensitive to the parameter settings, and that are extremely difficult to understand, reason about, and interpret. Relatedly, these models seem to violate what one would expect from the large body of theoretical work that is currently popular in ML, optimization, statistics, and related areas. This leads to theoretical results that fail to provide guidance to practice as well as to confusing and conflicting interpretations of empirical results. For example, current optimization theory fails to explain phenomena like the so-called Generalization Gap—the curious observation that DNNs generalize better when trained with smaller batch sizes—and it often does not provide even qualitative guidance as to how stochastic algorithms perform on non-convex landscapes of interest; and current statistical learning theory, e.g., VC-based methods, fails to provide even qualitative guidance as to the behavior of this class of learning methods, which seem to have next to unlimited capacity and yet generalize without overtraining.
1.1 A historical perspective
The inability of optimization and learning theory to explain and predict the properties of NNs is not a new phenomenon. From the earliest days of DNNs, it was suspected that VC theory did not apply to these systems. For example, in 1994, Vapnik, Levin, and LeCun [144] said:
[T]he [VC] theory is derived for methods that minimize the empirical risk. However, existing learning algorithms for multilayer nets cannot be viewed as minimizing the empirical risk over [the] entire set of functions implementable by the network.
It was originally assumed that local minima in the energy/loss surface were responsible for the inability of VC theory to describe NNs [144], and that the mechanism for this was that getting trapped in local minima during training limited the number of possible functions realizable by the network. However, it was very soon realized that the presence of local minima in the energy function was not a problem in practice [79, 39]. (More recently, this fact seems to have been rediscovered [108, 37, 56, 135].) Thus, another reason for the inapplicability of VC theory was needed. At the time, there did exist other theories of generalization based on statistical mechanics [127, 147, 60, 43], but for various technical and non-technical reasons these fell out of favor in the ML/NN communities. Instead, VC theory and related techniques continued to remain popular, in spite of their obvious problems.
More recently, theoretical results of Choromanska et al. [30] (which are related to [127, 147, 60, 43]) suggested that the Energy/optimization Landscape of modern DNNs resembles the Energy Landscape of a zero-temperature Gaussian Spin Glass; and empirical results of Zhang et al. [156] have again pointed out that VC theory does not describe the properties of DNNs. Motivated by these results, Martin and Mahoney then suggested that the Spin Glass analogy may be useful to understand severe overtraining versus the inability to overtrain in modern DNNs [93].
Many puzzling questions about regularization and optimization in DNNs abound. In fact, it is not even clear how to define DNN regularization. In traditional ML, regularization can be either explicit or implicit. Let’s say that we are optimizing some loss function L(x), specified by some parameter vector or weight matrix x. When regularization is explicit, it involves making the loss function “nicer” or “smoother” or “more well-defined” by adding an explicit capacity control term directly to the loss, i.e., by considering a modified objective of the form L(x) + α‖x‖. In this case, we tune the regularization parameter α by cross validation. When regularization is implicit, we instead have some adjustable operational procedure like early stopping of an iterative algorithm or truncating small entries of a solution vector. In many cases, we can still relate this back to the more familiar form of optimizing an effective function of the form L(x) + α‖x‖. For a precise statement in simple settings, see [89, 115, 53]; and for a discussion of implicit regularization in a broader context, see [88] and references therein.
With DNNs, the situation is far less clear. The challenge in applying these well-known ideas to DNNs is that DNNs have many adjustable “knobs and switches,” independent of the Energy Landscape itself, most of which can affect training accuracy, in addition to many model parameters. Indeed, nearly anything that improves generalization is called regularization, and a recent review presents a taxonomy of over 50 different regularization techniques for Deep Learning [76]. The most common include ML-like Weight Norm regularization, so-called “tricks of the trade” like early stopping and decreasing the batch size, and DNN-specific methods like Batch Normalization and Dropout. Evaluating and comparing these methods is challenging, in part since there are so many, and in part since they are often constrained by systems or other not-traditionally-ML considerations. Moreover, Deep Learning avoids cross validation (since there are simply too many parameters), and instead it simply drives training error to zero (followed by subsequent fiddling of knobs and switches). Of course, it is still the case that test information can leak into the training process (indeed, perhaps even more severely for DNNs than traditional ML methods). Among other things, this argues for unsupervised metrics to evaluate model quality.
Motivated by this situation, we are interested here in two related questions.

Theoretical Question. Why is regularization in deep learning seemingly quite different than regularization in other areas of ML; and what is the right theoretical framework with which to investigate regularization for DNNs?

Practical Question. How can one control and adjust, in a theoreticallyprincipled way, the many knobs and switches that exist in modern DNN systems, e.g., to train these models efficiently and effectively, to monitor their effects on the global Energy Landscape, etc.?
That is, we seek a Practical Theory of Deep Learning, one that is prescriptive and not just descriptive. This theory would provide useful tools for practitioners wanting to know How to characterize and control the Energy Landscape to engineer larger and better DNNs; and it would also provide theoretical answers to broad open questions such as Why Deep Learning even works. For example, it would provide metrics to characterize qualitatively-different classes of learning behaviors, as predicted in recent work [93]. Importantly, VC theory and related methods do not provide a theory of this form.
1.2 Overview of our approach
Let us write the Energy Landscape (or optimization function) for a typical DNN with L layers, with activation functions h_l(·), and with weight matrices and biases W_l and b_l, as follows:
(1)  E_DNN = h_L(W_L × h_{L-1}(W_{L-1} × ⋯ h_1(W_1 x + b_1) ⋯ + b_{L-1}) + b_L)
For simplicity, we do not indicate the structural details of the layers (e.g., Dense or not, Convolutions or not, Residual/Skip Connections, etc.). We imagine training this model on some labeled data {d_i, y_i} ∈ D, using Backprop, by minimizing the loss L (i.e., the cross-entropy), between E_DNN(d_i) and the labels y_i, as follows:
(2)  min_{W_l, b_l} L(Σ_i E_DNN(d_i) − y_i)
We can initialize the DNN using random initial weight matrices W_l^0, or we can use other methods such as transfer learning (which we will not consider here). There are various knobs and switches to tune, such as the choice of solver, batch size, learning rate, etc.
Most importantly, to avoid overtraining, we must usually regularize our DNN. Perhaps the most familiar approach from ML for implementing this regularization explicitly constrains the norm of the weight matrices, e.g., modifying Objective (2) to give:
(3)  min_{W_l, b_l} L(Σ_i E_DNN(d_i) − y_i) + α Σ_l ‖W_l‖
where ‖·‖ is some matrix norm, and where α is an explicit regularization control parameter.
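To make the role of the penalty in Objective (3) concrete, here is a minimal numpy sketch (not the paper's training setup; the function name, step size, and α value are illustrative) of how a Frobenius-norm penalty enters a gradient step:

```python
import numpy as np

def penalized_step(W, grad, lr=0.1, alpha=0.01):
    # One gradient step on L(W) + (alpha/2) * ||W||_F^2.
    # The Frobenius-norm penalty contributes alpha * W to the gradient,
    # so each step shrinks W toward zero (the familiar "weight decay").
    return W - lr * (grad + alpha * W)

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 100))
norm_before = np.linalg.norm(W)

# With the loss gradient zeroed out, the penalty alone contracts W
# by a factor (1 - lr * alpha) per step.
for _ in range(10):
    W = penalized_step(W, np.zeros_like(W))
norm_after = np.linalg.norm(W)
```

With the loss gradient zeroed out, each step contracts W by exactly (1 − lr·α), which is the norm-shrinking effect the next paragraph refers to.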
The point of Objective (3) is that explicit regularization shrinks the norm(s) of the W_l matrices. We may expect similar results to hold for implicit regularization. We will use advanced methods from Random Matrix Theory (RMT), developed in the theory of self-organizing systems, to characterize DNN layer weight matrices, W_l,^1 during and after the training process. ^1We consider weight matrices computed for individual dense layers and other layers that can be easily represented as matrices, i.e., 2-index tensors. Nothing in our RMT-based theory, however, requires matrices to be constructed from dense layers—they could easily be applied to matrices constructed from convolutional (or other) layers. For example, we could stack/reshape weight matrices in various ways, e.g., to study multi-dimensional tensors like Convolutional Layers or how different layers interact. We have unpublished results that indicate this is a promising direction, but we don’t consider these variants here.
Here is an important (but often underappreciated) point. We call E_DNN the Energy Landscape. By this, we mean that part of the optimization problem parameterized by the heretofore unknown elements of the weight matrices and bias vectors, for a fixed α (in (3)), and as defined by the data {d_i, y_i}. Because we run Backprop training, we pass the data through the Energy function multiple times. Each time, we adjust the values of the weight matrices and bias vectors. In this sense, we may think of the total Energy Landscape (i.e., the optimization function that is nominally being optimized) as changing at each epoch.
1.3 Summary of our results
We analyze the distribution of eigenvalues, i.e., the Empirical Spectral Density (ESD), ρ_N(λ), of the correlation matrix X associated with the layer weight matrix W. We do this for a wide range of large, pretrained, readily-available state-of-the-art models, including the original LeNet5 convolutional net (which, due to its age, we retrain) and pretrained models available in Keras and PyTorch such as AlexNet and Inception. In some cases, the ESDs are very well-described by Marchenko-Pastur (MP) RMT. In other cases, the ESDs are well-described by MP RMT, with the exception of one or more large eigenvalues that can be modeled by a Spiked-Covariance model [92, 68]. In still other cases—including nearly every current state-of-the-art model we have examined—the ESDs are poorly-described by traditional RMT, and instead they are more consistent with Heavy-Tailed behavior seen in the statistical physics of disordered systems [134, 24]. Based on our observations, we develop a practical theory of Implicit Self-Regularization in DNNs. This theory takes the form of an operational theory characterizing 5+1 phases of DNN training. To test and validate our theory, we consider two smaller models, a 3-layer MLP (MLP3) and a miniature version of AlexNet (MiniAlexNet), trained on CIFAR10, that we can train ourselves repeatedly, adjusting various knobs and switches along the way.
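Computing an ESD reduces to a few lines of linear algebra. The sketch below (assuming numpy; a random Gaussian matrix stands in for an actual layer weight matrix, and the sizes are illustrative) follows the definition of the correlation matrix X = (1/N) WᵀW used throughout the paper:

```python
import numpy as np

def esd(W):
    # Eigenvalues of the correlation matrix X = W^T W / N
    # for an N x M layer weight matrix W (N >= M).
    N, M = W.shape
    X = W.T @ W / N
    return np.linalg.eigvalsh(X)  # real, ascending

# Stand-in for a layer weight matrix: i.i.d. Gaussian entries.
rng = np.random.default_rng(0)
N, M = 1000, 250                       # aspect ratio Q = N / M = 4
W = rng.normal(scale=1.0, size=(N, M))
evals = esd(W)

# The eigenvalues of X are the squared singular values of W, scaled by 1/N.
svals = np.linalg.svd(W, compute_uv=False)
```

A histogram of `evals` is the ESD plot; for a trained layer, one would pass the layer's actual weight matrix instead of the random stand-in.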
Main Empirical Results.
Our main empirical results consist in evaluating empirically the ESDs (and related RMT-based statistics) for weight matrices for a suite of DNN models, thereby probing the Energy Landscapes of these DNNs. For older and/or smaller models, these results are consistent with implicit Self-Regularization that is Tikhonov-like; and for modern state-of-the-art models, these results suggest novel forms of Heavy-Tailed Self-Regularization.

Capacity Control Metrics. We study simple capacity control metrics: the Matrix Entropy, the linear algebraic or Hard Rank, and the Stable Rank. We also use MP RMT to define a new metric, the MP Soft Rank. These metrics track the amount of Self-Regularization that arises in a weight matrix, either during training or in a pretrained DNN.

Self-Regularization in old/small models. The ESDs of older/smaller DNN models (like LeNet5 and a toy MLP3 model) exhibit weak Self-Regularization, well-modeled by a perturbative variant of MP theory, the Spiked-Covariance model. Here, a small number of eigenvalues pull out from the random bulk, and thus the MP Soft Rank and Stable Rank both decrease. This weak form of Self-Regularization is like Tikhonov regularization, in that there is a “size scale” that cleanly separates “signal” from “noise,” but it is different than explicit Tikhonov regularization in that it arises implicitly due to the DNN training process itself.

Heavy-Tailed Self-Regularization. The ESDs of larger, modern DNN models (including AlexNet and Inception and nearly every other large-scale model we have examined) deviate strongly from the common Gaussian-based MP model. Instead, they appear to lie in one of the very different Universality classes of Heavy-Tailed random matrix models. We call this Heavy-Tailed Self-Regularization. Here, the MP Soft Rank vanishes, and the Stable Rank decreases, but the full Hard Rank is still retained. The ESD appears fully (or partially) Heavy-Tailed, but with finite support. In this case, there is not a “size scale” (even in the theory) that cleanly separates “signal” from “noise.”
Main Theoretical Results.
Our main theoretical results consist in an operational theory for DNN Self-Regularization. Our theory uses ideas from RMT—both vanilla MP-based RMT as well as extensions to other Universality classes based on Heavy-Tailed distributions—to provide a visual taxonomy for Phases of Training, corresponding to increasing amounts of Self-Regularization.

Modeling Noise and Signal. We assume that a weight matrix W can be modeled as W ≃ W_rand + Δ_sig, where W_rand is “noise” and where Δ_sig is “signal.” For small to medium sized signal, W is well-approximated by an MP distribution—with elements drawn from the Gaussian Universality class—perhaps after removing a few eigenvectors. For large and strongly-correlated signal, W_rand gets progressively smaller, but we can model the non-random strongly-correlated signal Δ_sig by a Heavy-Tailed random matrix, i.e., a random matrix with elements drawn from a Heavy-Tailed (rather than Gaussian) Universality class.

5+1 Phases of Regularization. Based on this approach to modeling noise and signal, we construct a practical, visual taxonomy for 5+1 Phases of Training. Each phase is characterized by stronger, visually distinct signatures in the ESD of DNN weight matrices, and successive phases correspond to decreasing MP Soft Rank and increasing amounts of Self-Regularization. The 5+1 phases are: Random-like, Bleeding-out, Bulk+Spikes, Bulk-decay, Heavy-Tailed, and Rank-collapse.

Rank-collapse. One of the predictions of our RMT-based theory is the existence of a pathological phase of training, the Rank-collapse or “+1” Phase, corresponding to a state of over-regularization. Here, one or a few very large eigenvalues dominate the ESD, and the rest of the weight matrix loses nearly all Hard Rank.
Based on these results, we speculate that all well-optimized, large DNNs will display Heavy-Tailed Self-Regularization in their weight matrices.
Evaluating the Theory.
We provide a detailed evaluation of our theory using a smaller MiniAlexNet model that we can train and retrain.

Effect of Explicit Regularization. We analyze ESDs of MiniAlexNet by removing all explicit regularization (Dropout, Weight Norm constraints, Batch Normalization, etc.) and characterizing how the ESDs of weight matrices behave during and at the end of Backprop training, as we systematically add back in different forms of explicit regularization.

Implementation Details. Since the details of the methods that underlie our theory (e.g., fitting Heavy-Tailed distributions, finite-size effects, etc.) are likely not familiar to ML and NN researchers, and since the details matter, we describe these issues in detail.

Exhibiting the 5+1 Phases. We demonstrate that we can exhibit all 5+1 phases by appropriate modification of the various knobs of the training process. In particular, by decreasing the batch size from 500 to 2, we can make the ESDs of the fully-connected layers of MiniAlexNet vary continuously from Random-like to Heavy-Tailed, while increasing generalization accuracy along the way. These results illustrate the Generalization Gap phenomenon [64, 72, 57], and they explain that phenomenon as being caused by the implicit Self-Regularization associated with models trained with smaller and smaller batch sizes. By adding extreme Weight Norm regularization, we can also induce the Rank-collapse phase.
Main Methodological Contribution.
Our main methodological contribution consists in using empirical observations as well as recent developments in RMT to motivate a practical predictive DNN theory, rather than developing a descriptive DNN theory based on general theoretical considerations. Essentially, we treat the training of different DNNs as if we are running novel laboratory experiments, and we follow the traditional scientific method:
Make Observations → Form Hypotheses → Build a Theory → Test the theory, literally.
In particular, this means that we can observe and analyze many large, production-quality, pretrained models directly, without needing to retrain them, and we can also observe and analyze smaller models during the training process. In adopting this approach, we are interested in both “scientific questions” (e.g., “Why is regularization in deep learning seemingly quite different …?”) as well as “engineering questions” (e.g., “How can one control and adjust …?”).
To accomplish this, recall that, given an architecture, the Energy Landscape is completely defined by the DNN weight matrices. Since its domain is exponentially large, the Energy Landscape is challenging to study directly. We can, however, analyze the weight matrices, as well as their correlations. (This is analogous to analyzing the expected moments of a complicated distribution.) In principle, this permits us to analyze both local and global properties of the Energy Landscape, as well as something about the class of functions (e.g., VC class, Universality class, etc.) being learned by the DNN. Since the weight matrices of many DNNs exhibit strong correlations and can be modeled by random matrices with elements drawn from the Universality class of Heavy-Tailed distributions, this severely restricts the class of functions learned. It also connects back to the Energy Landscape since it is known that the Energy Landscape of Heavy-Tailed random matrices is very different than that of Gaussian-like random matrices.
1.4 Outline of the paper
In Section 2, we provide a warmup, including simple capacity metrics and their transitions during Backprop. Then, in Sections 3 and 4, we review background on RMT necessary to understand our experimental methods, and we present our initial experimental results. Based on this, in Section 5, we present our main theory of 5+1 Phases of Training. Then, in Sections 6 and 7, we evaluate our main theory, illustrating the effect of explicit regularization, and demonstrating implications for the generalization gap phenomenon. Finally, in Section 8, we provide a discussion of our results in a broader context. The accompanying code is available at https://github.com/CalculatedContent/ImplicitSelfRegularization. For reference, we provide in Table 1 and Table 2 a summary of acronyms and notation used in the following.
Acronym  Description 
DNN  Deep Neural Network 
ML  Machine Learning 
SGD  Stochastic Gradient Descent 
RMT  Random Matrix Theory 
MP  Marchenko-Pastur 
ESD  Empirical Spectral Density 
PL  Power Law 
HT  Heavy-Tailed 
TW  Tracy-Widom (Law) 
SVD  Singular Value Decomposition 
FC  Fully Connected (Layer) 
VC  Vapnik-Chervonenkis (Theory) 
SMTOG  Statistical Mechanics Theory of Generalization 
Notation  Description  
2 Simple Capacity Metrics and Transitions during Backprop
In this section, we describe simple spectral metrics to characterize these DNN weight matrices, as well as initial empirical observations on the capacity properties of training DNNs.
2.1 Simple capacity control metrics
A DNN is defined by its detailed architecture and the values of the weights and biases at each layer. We seek a simple capacity control metric for a learned DNN model that: is easy to compute both during training and for alreadytrained models; can describe changes in the gross behavior of weight matrices during the Backprop training process; and can identify the onset of subtle structural changes in the weight matrices.
One possibility is to use the Euclidean distance between the initial weight matrix, W_l^0, and the weight matrix at epoch e of training, W_l^e, i.e., ‖W_l^e − W_l^0‖. This distance, however, is not scale invariant. In particular, during training, and with regularization turned off, the weight matrices may shift in scale, gaining or losing Frobenius mass or variance,^2 and this distance metric is sensitive to that change. ^2We use these terms interchangeably, i.e., even if the matrices are not mean-centered. Indeed, the whole point of a BatchNorm layer is to try to prevent this. To start, then, we will consider two scale-invariant measures of capacity control: the Matrix Entropy S(W), and the Stable Rank R_s(W). For an arbitrary matrix W, both of these metrics are defined in terms of its spectrum.
Consider (real-valued) layer weight matrices W_l ∈ ℝ^{N×M}, where N ≥ M. Let the Singular Value Decomposition of W be
W = UΣVᵀ,
where ν_i = Σ_ii is the i-th singular value^3 of W, and let Q = N/M. ^3We use ν rather than σ to denote singular values since σ denotes variance elsewhere. We also define the associated (uncentered) correlation matrix
(4)  X = X_l = (1/N) W_lᵀ W_l,
where we sometimes drop the subscript l for X_l, and where X is normalized by 1/N. We compute the eigenvalues of X,
X v_i = λ_i v_i,
where the λ_i are the squares of the singular values, up to the 1/N normalization: λ_i = ν_i² / N. Given the singular values of W and/or eigenvalues of X, there are several well-known matrix complexity metrics.

The Hard Rank (or linear algebraic rank),
(5)  R(W) ≜ #{ν_i : ν_i > 0},
is the number of singular values greater than zero, to within a numerical cutoff.

The Matrix Entropy,
(6)  S(W) ≜ (−1 / log R(W)) Σ_i p_i log p_i,  where p_i = λ_i / Σ_j λ_j,
is also known as the Generalized von Neumann Matrix Entropy.^4 ^4This resembles the entanglement Entropy, suggested for the analysis of convolutional nets [81].

The Stable Rank,
(7)  R_s(W) ≜ ‖W‖_F² / ‖W‖_2² = Σ_i λ_i / λ_max,
the ratio of the Frobenius norm to the Spectral norm, is a robust variant of the Hard Rank.
We also refer to the Matrix Entropy S(X) and Stable Rank R_s(X) of X. By this, we mean the metrics computed with the associated eigenvalues. Note S(X) = S(W) and R_s(X) = R_s(W).
It is known that a random matrix has maximum Entropy, and that lower values of the Entropy correspond to more structure/regularity. If W is a random matrix, then S(W) ≈ 1. For example, if we initialize our weight matrices with a truncated random matrix W^0, then S(W^0) ≲ 1. When W has significant and observable non-random structure, we expect S(W) < 1. We will see, however, that in practice these differences are quite small, and we would prefer a more discriminative metric. In nearly every case, for well-trained DNNs, all the weight matrices retain full Hard Rank R; but the weight matrices do “shrink,” in a sense captured by the Stable Rank. Both S(W) and R_s(W) measure matrix capacity, and, up to a scale factor, we will see that they exhibit qualitatively similar behavior.
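All three metrics can be computed from a single SVD. The following numpy sketch is a minimal illustration (the entropy normalization by log M and the Hard Rank cutoff are common but illustrative choices, not necessarily the paper's exact conventions):

```python
import numpy as np

def capacity_metrics(W, cutoff=1e-10):
    # Hard Rank, Matrix Entropy, and Stable Rank of a weight matrix W.
    # Entropy is normalized by log(M) so a random-like, maximum-entropy
    # spectrum gives a value near 1.
    N, M = W.shape
    lam = np.linalg.svd(W, compute_uv=False) ** 2 / N  # eigenvalues of X
    hard_rank = int(np.sum(lam > cutoff * lam.max()))
    p = lam / lam.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log(p)) / np.log(len(lam))
    stable_rank = lam.sum() / lam.max()  # ||W||_F^2 / ||W||_2^2
    return hard_rank, entropy, stable_rank

rng = np.random.default_rng(0)
W = rng.normal(size=(1000, 250))       # random-like, so S should be near 1
R, S, Rs = capacity_metrics(W)
```

For a random Gaussian matrix, the Hard Rank is full (M), the entropy is close to its maximum of 1, and the Stable Rank is already well below M; training moves the entropy and Stable Rank lower still.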
2.2 Empirical results: Capacity transitions while training
We start by illustrating the behavior of two simple complexity metrics during Backprop training on MLP3, a simple 3-layer Multi-Layer Perceptron (MLP), described in Table 4. MLP3 consists of 3 fully connected (FC) / dense layers with ReLU activations, followed by a final FC layer with softmax activation. This gives 4 layer weight matrices W_l.
For the training, each matrix is initialized with Glorot normalization [54]. The model is trained on CIFAR10, up to 100 epochs, with SGD (fixed learning rate and momentum) and with a stopping criterion on the MSE loss.^5 ^5Code will be available at https://github.com/CalculatedContent/ImplicitSelfRegularization/MLP3.
Figure 1 presents the layer entropy (in Figure 1(a)) and the stable rank (in Figure 1(b)), plotted as a function of training epoch, for FC1 and FC2. Both metrics decrease during training (note the scales of the Y axes): the stable rank decreases by approximately a factor of two, and the matrix entropy decreases by a small amount (this is for FC2, and there is an even more modest change for FC1). They both track nearly the same changes; the stable rank is more informative for our purposes; but we will see that the changes to the matrix entropy, while subtle, are significant.
Figure 2 presents scree plots for the initial and final weight matrices for the FC1 and FC2 layers of our MLP3. A scree plot shows the decreasing variability in the matrix as a function of the increasing index of the corresponding eigenvector [59]. Thus, such scree plots present similar information to the stable rank—e.g., observe the Y-axis of Figure 2(b), which shows that there is a slight increase in the largest eigenvalue for FC1 (again, note the scales of the Y axes) and a larger increase in the largest eigenvalue for FC2, which is consistent with the changes in the stable rank in Figure 1(b)—but they too give only a coarse picture of the matrix. In particular, they lack the detailed insight into subtle changes in the entropy and rank associated with the Self-Regularization process, e.g., changes that reside in just a few singular values and vectors, that we will need in our analysis.
Limitations of these metrics.
We can gain more detailed insight into changes in W_l during training by creating histograms of the singular values and/or eigenvalues. Figure 3(a) displays the density of singular values of the initial and final weight matrices for the FC2 layer of the MLP3 model. Figure 3(b) displays the associated eigenvalue densities, ρ_N(λ), which we call Empirical Spectral Density (ESD) plots (defined in detail below). Observe that the initial density of singular values (shown in red/purple) resembles a quarter circle,^6 and the final density of singular values (blue) consists of a bulk quarter circle, of about the same width, with several spikes of singular value density beyond the bulk’s edge. ^6The initial weight matrix is just a random (Glorot Normal) matrix. Observe that the similar heights and widths and shapes of the bulks imply that the variance, or Frobenius norm, does not change much. Observe also that the initial ESD (red/purple) is crisply bounded above at the bulk edge (and similarly for the density of singular values at the square root of this value), whereas the final ESD (blue) has less density at the bulk edge and several spikes beyond it. The largest final eigenvalue lies well beyond the bulk. We see now why the stable rank for FC2 decreases by a factor of two: the Frobenius norm does not change much, but the squared Spectral norm is larger.
3 Basic Random Matrix Theory (RMT)
In this section, we summarize results from RMT that we use. RMT provides a kind of Central Limit Theorem for matrices, with unique results for both square and rectangular matrices. Perhaps the most well-known results from RMT are the Wigner Semicircle Law, which describes the eigenvalues of random square symmetric matrices, and the Tracy-Widom (TW) Law, which states how the maximum eigenvalue of a (more general) random matrix is distributed. Two issues arise with applying these well-known versions of RMT to DNNs. First, very rarely do we encounter symmetric weight matrices. Second, in training DNNs, we only have one instantiation of each weight matrix, and so it is not generally possible to apply the TW Law.^7 ^7This is for “production” use, where one performs only a single training run; below, we will generate such distributions of weight matrices for our smaller models to denoise and illustrate better their properties. Several overviews of RMT are available [143, 41, 71, 142, 22, 42, 110, 24]. Here, we will describe a more general form of RMT, the Marchenko-Pastur (MP) theory, applicable to rectangular matrices, including (but not limited to) DNN weight matrices W.
3.1 Marchenko-Pastur (MP) theory for rectangular matrices
MP theory considers the density of singular values ρ(ν) of random rectangular matrices W. This is equivalent to considering the density of eigenvalues ρ(λ), i.e., the ESD, of matrices of the form X = (1/N) WᵀW. MP theory then makes strong statements about such quantities as the shape of the distribution in the infinite limit, its bounds, expected finite-size effects, such as fluctuations near the edge, and rates of convergence. When applied to DNN weight matrices, MP theory assumes that W, while trained on very specific datasets, exhibits statistical properties that do not depend on the specific details of the elements W_{ij}, and holds even at finite size. This Universality concept is “borrowed” from Statistical Physics, where it is used to model, among other things, strongly-correlated systems and so-called critical phenomena in nature [134].
To apply RMT, we need only specify the number of rows and columns of W and assume that the elements W_{ij} are drawn from a specific distribution that is a member of a certain Universality class (there are different results for different Universality classes). RMT then describes properties of the ESD, even at finite size; and one can compare predictions of RMT with empirical results. Most well-known and well-studied is the Universality class of Gaussian distributions. This leads to the basic or vanilla MP theory, which we describe in this section. More esoteric—but ultimately more useful for us—are Universality classes of Heavy-Tailed distributions. In Section 3.2, we describe this important variant.
Gaussian Universality class.
We start by modeling W as an N × M random matrix, with elements drawn from a Gaussian distribution, such that:

  W_ij ∼ N(0, σ²_mp).

Then, MP theory states that the ESD of the correlation matrix, X = (1/N) WᵀW, has the limiting density given by the MP distribution ρ(λ):

  ρ(λ) = ( Q / (2π σ²_mp) ) · √( (λ⁺ − λ)(λ − λ⁻) ) / λ,  for λ ∈ [λ⁻, λ⁺], and 0 otherwise.   (8)

Here, σ²_mp is the element-wise variance of the original matrix, Q = N/M ≥ 1 is the aspect ratio of the matrix, and the minimum and maximum eigenvalues, λ±, are given by

  λ± = σ²_mp ( 1 ± 1/√Q )².   (9)
Figure 4: The MP distribution for different aspect ratios Q and variance parameters σ²_mp.
The shape of the MP distribution only depends on two parameters, the variance σ²_mp and the aspect ratio Q. See Figure 4 for an illustration. In particular, see Figure 4(a) for a plot of the MP distribution of Eqns. (8) and (9), for several values of Q; and see Figure 4(b) for a plot of the MP distribution for several values of σ²_mp.
As a point of reference, when Q = 1 and σ²_mp = 1 (blue in both subfigures), the mass of ρ(λ) skews slightly to the left and is bounded in [0, 4]. For fixed σ²_mp, as Q increases, the support (i.e., [λ⁻, λ⁺]) narrows, and ρ(λ) becomes less skewed. As Q → 1, the support widens and ρ(λ) skews more leftward. Also, ρ(λ) is concave for larger Q, and it is partially convex for smaller Q.
Although the MP distribution depends on both σ²_mp and Q, in practice Q is fixed, and thus we are interested in how σ²_mp varies—distributionally for random matrices, and empirically for weight matrices. Due to Eqn. (9), if σ²_mp is fixed, then λ⁺ (i.e., the largest eigenvalue of the bulk, as well as λ⁻) is determined, and vice versa. (Footnote 8: In practice, relating σ²_mp and λ⁺ raises some subtle technical issues, and we discuss these in Section 6.3.)
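For concreteness, Eqns. (8) and (9) can be evaluated directly. The following is a minimal numpy sketch under our conventions (the function names `mp_edges` and `mp_density` are ours; we assume W is N × M with Q = N/M ≥ 1 and X = (1/N) WᵀW):

```python
import numpy as np

def mp_edges(sigma2, Q):
    """Bulk edges (lambda_minus, lambda_plus) of the MP law, Eqn. (9)."""
    lam_minus = sigma2 * (1.0 - 1.0 / np.sqrt(Q)) ** 2
    lam_plus = sigma2 * (1.0 + 1.0 / np.sqrt(Q)) ** 2
    return lam_minus, lam_plus

def mp_density(lam, sigma2, Q):
    """MP density rho(lambda) of Eqn. (8); zero outside [lambda_minus, lambda_plus]."""
    lam = np.asarray(lam, dtype=float)
    lam_minus, lam_plus = mp_edges(sigma2, Q)
    rho = np.zeros_like(lam)
    inside = (lam > lam_minus) & (lam < lam_plus)
    li = lam[inside]
    rho[inside] = (Q / (2.0 * np.pi * sigma2)) * np.sqrt((lam_plus - li) * (li - lam_minus)) / li
    return rho
```

For σ²_mp = 1 and Q = 1, this recovers the familiar bulk support [0, 4], and the density integrates to one over the bulk.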
The Quarter Circle Law for Q = 1.
A special case of Eqn. (8) arises when Q = 1, i.e., when W is a square non-symmetric matrix. In this case, the eigenvalue density ρ(λ) is very peaked with a bounded tail, and it is sometimes more convenient to consider the density of singular values of W, ρ(ν), with ν = √λ, which takes the form of a Quarter-Circle:

  ρ(ν) = ( 1 / (π σ²_mp) ) √( 4σ²_mp − ν² ),  for ν ∈ [0, 2σ_mp].
We will not pursue this further, but we saw this earlier, in Figure 3(b), with our toy MLP3 model.
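The Quarter-Circle Law is easy to check numerically; a sketch under the conventions above (with W rescaled by 1/√N, and σ_mp = 1, the mean singular value should approach ∫ ν ρ(ν) dν = 8/(3π) ≈ 0.849):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
W = rng.normal(0.0, 1.0, size=(N, N))

# singular values of W / sqrt(N) should follow the Quarter-Circle on [0, 2]
nu = np.linalg.svd(W / np.sqrt(N), compute_uv=False)
mean_nu = nu.mean()   # should be close to 8 / (3 * pi)
```

The largest singular value should also sit near the edge at 2, up to small finite-size fluctuations.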
Finite-size Fluctuations at the MP Edge.
In the infinite limit, all fluctuations in ρ(λ) concentrate very sharply at the MP edge, λ⁺, and the distribution of the maximum eigenvalue is governed by the TW Law. Even for a single finite-sized matrix, however, MP theory states that the upper edge of ρ(λ) is very sharp; and even when the MP Law is violated, the TW Law, with finite-size corrections, works very well at describing the edge statistics. When these laws are violated, this is very strong evidence for the onset of more regular, non-random structure in the DNN weight matrices, which we will interpret as evidence of Self-Regularization.
In more detail, in many cases, one or more of the empirical eigenvalues will extend beyond the sharp edge predicted by the MP fit, i.e., such that λ_max > λ⁺ (where λ_max is the largest eigenvalue of X). It will be important to distinguish the case that λ_max > λ⁺ simply due to the finite size of W from the case that λ_max is "truly" outside the MP bulk. According to MP theory [22], for finite (N, M), and with 1/N normalization, the fluctuations at the bulk edge scale as M^(−2/3):

  Δλ := |λ_max − λ⁺| ∼ O( λ⁺ M^(−2/3) ),

where λ⁺ is given by Eqn. (9). Since ν = √λ, we can also express this in terms of the singular values ν, but with different prefactors [68]. Most importantly, within MP theory (and even more generally), the fluctuations, centered and rescaled, will follow TW statistics.
In the DNNs we consider, M ≳ 400, and so the maximum relative deviation is only Δλ/λ⁺ ≈ M^(−2/3) ≈ 2%. In many cases, it will be obvious whether a given λ_max is an outlier. When it is not, one could generate an ensemble of runs and study the information content of the eigenvalues (shown below) and/or apply TW theory (not discussed here).
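This edge scale is easy to check empirically. A sketch (our variable names) that draws Gaussian random matrices and compares the observed deviation of λ_max from λ⁺ against the M^(−2/3) scale:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 2000, 500                                   # Q = N/M = 4
Q, sigma2 = N / M, 1.0
lam_plus = sigma2 * (1.0 + 1.0 / np.sqrt(Q)) ** 2  # bulk edge from Eqn. (9)

devs = []
for _ in range(10):
    W = rng.normal(0.0, np.sqrt(sigma2), size=(N, M))
    X = W.T @ W / N                                # correlation matrix
    lam_max = np.linalg.eigvalsh(X)[-1]
    devs.append(abs(lam_max - lam_plus))

# fluctuations should be on the order of lam_plus * M**(-2/3), i.e., a few percent
edge_scale = lam_plus * M ** (-2.0 / 3.0)
```

Across the ensemble, the mean deviation sits at (or below) the predicted scale; a "spike" many times larger than `edge_scale` is then a genuine outlier rather than a finite-size fluctuation.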
Fitting MP Distributions.
Several technical challenges with fitting MP distributions, i.e., with selecting the bulk edge λ⁺, are discussed in Section 6.3.
3.2 Heavy-Tailed extensions of MP theory

Table 3: Basic MP theory, and its Spiked-Covariance and Heavy-Tailed extensions, with the associated Universality classes and the global and local statistics of their ESDs.

Generative Model | Elements from Universality class | Finite-N global shape ρ_N(λ) | Limiting global shape ρ(λ), N → ∞ | Bulk edge, local stats | (Far) tail, local stats
Basic MP | Gaussian | MP, i.e., Eqn. (8) | MP | TW | No tail.
Spiked-Covariance | Gaussian, + low-rank perturbations | MP + Gaussian spikes | MP | TW | Gaussian
Heavy tail, 4 < μ | (Weakly) Heavy-Tailed | MP + PL tail | MP | Heavy-Tailed | Heavy-Tailed
Heavy tail, 2 < μ < 4 | (Moderately) Heavy-Tailed (or "fat tailed") | PL | PL | No edge. | Frechet
Heavy tail, 0 < μ < 2 | (Very) Heavy-Tailed | PL | PL | No edge. | Frechet
MP-based RMT is applicable to a wide range of matrices (even those with large low-rank perturbations to i.i.d. normal behavior); but it is not in general applicable when matrix elements are strongly-correlated. Strong correlations appear to be the case for many well-trained, production-quality DNNs. In statistical physics, it is common to model strongly-correlated systems by Heavy-Tailed distributions [134]. The reason is that these models exhibit, more or less, the same large-scale statistical behavior as natural phenomena in which strong correlations exist [134, 22]. Moreover, recent results from MP/RMT have shown that new Universality classes exist for matrices with elements drawn from certain Heavy-Tailed distributions [22].
We use these Heavy-Tailed extensions of basic MP/RMT to build an operational and phenomenological theory of Regularization in Deep Learning; and we use these extensions to justify our analysis of both Self-Regularization and Heavy-Tailed Self-Regularization. (Footnote 9: The Universality of RMT is a concept broad enough to apply to classes of problems that appear well beyond its apparent range of validity. It is in this sense that we apply RMT to understand DNN Regularization.) Briefly, our theory for simple Self-Regularization is inspired by the Spiked-Covariance model of Johnstone [68] and its interpretation as a form of Self-Organization by Sornette [92]; and our theory for more sophisticated Heavy-Tailed Self-Regularization is inspired by the application of MP/RMT tools in quantitative finance by Bouchaud, Potters, and coworkers [49, 77, 78, 20, 19, 22, 24], as well as the relation of Heavy-Tailed phenomena more generally to Self-Organized Criticality in Nature [134]. Here, we highlight basic results for this generalized MP theory; see [38, 20, 19, 111, 7, 40, 8, 26, 22, 21] in the physics and mathematics literature for additional details.
Universality classes for modeling strongly correlated matrices.
Consider modeling W as an N × M random matrix, with elements drawn from a Heavy-Tailed—e.g., a Pareto or Power Law (PL)—distribution:

  P(W_ij) ∼ |W_ij|^(−1−μ),  μ > 0.   (10)
In these cases, if W is element-wise Heavy-Tailed, (Footnote 10: Heavy-Tailed phenomena have many subtle properties [121]; we consider here only the simplest cases.) then the ESD ρ_N(λ) likewise exhibits Heavy-Tailed properties, either globally for the entire ESD and/or locally at the bulk edge.
Table 3 summarizes these (relatively) recent results, comparing basic MP theory, the Spiked-Covariance model, (Footnote 11: We discuss Heavy-Tailed extensions to MP theory in this section. Extensions to large low-rank perturbations are more straightforward and are described in Section 5.3.) and Heavy-Tailed extensions of MP theory, including associated Universality classes. To apply the MP theory, at finite sizes, to matrices with elements drawn from a Heavy-Tailed distribution of the form given in Eqn. (10), then, depending on the value of μ, we have one of the following three (Footnote 12: Both μ = 2 and μ = 4 are "corner cases" that we don't expect to be able to resolve at finite N.) Universality classes:

(Weakly) Heavy-Tailed, 4 < μ: Here, the ESD ρ_N(λ) exhibits "vanilla" MP behavior in the infinite limit, and the expected mean value of the bulk edge λ⁺ is finite. Unlike standard MP theory, which exhibits TW statistics at the bulk edge, here the edge exhibits PL / Heavy-Tailed fluctuations at finite N. These finite-size effects appear in the edge / tail of the ESD, and they make it hard or impossible to distinguish the edge versus the tail at finite N.

(Moderately) Heavy-Tailed, 2 < μ < 4: Here, the ESD is Heavy-Tailed / PL in the infinite limit, approaching the form ρ(λ) ∼ λ^(−1−μ/2). In this regime of μ, there is no bulk edge. At finite size, the global ESD can be modeled by the form ρ_N(λ) ∼ λ^(−(aμ+b)), for λ above some λ_min, but the slope a and intercept b must be fit, as they display very large finite-size effects. The maximum eigenvalues follow Frechet (not TW) statistics, with λ_max ∼ M^(4/μ−1), and they have large finite-size effects. Even if the ESD tends to zero, the raw number of large eigenvalues can still grow—just not as quickly as N (i.e., we may expect some λ_max ≫ λ⁺ in the infinite limit, but with eigenvalue density ρ(λ) → 0 there). Thus, at any finite N, ρ_N(λ) is Heavy-Tailed, but the tail decays moderately quickly.

(Very) Heavy-Tailed, 0 < μ < 2: Here, the ESD is Heavy-Tailed / PL for all finite N, and as N → ∞ it converges more quickly to a PL distribution with tails ρ(λ) ∼ λ^(−1−μ/2). In this regime, there is no bulk edge, and the maximum eigenvalues follow Frechet (not TW) statistics. Finite-size effects exist here, but they are much smaller than in the 2 < μ < 4 regime.
Visualizing Heavy-Tailed distributions.
It is often fruitful to perform visual exploration and classification of ESDs by plotting them on linear-linear coordinates, log-linear coordinates (linear horizontal/X axis and logarithmic vertical/Y axis), and/or log-log coordinates (logarithmic horizontal/X axis and logarithmic vertical/Y axis). It is known that data from a PL distribution will appear as a convex curve in a linear-linear plot and in a log-linear plot, and as a straight line in a log-log plot; and that data from a Gaussian distribution will appear as a bell-shaped curve in a linear-linear plot, as an inverted parabola in a log-linear plot, and as a strongly concave curve in a log-log plot. Examining data from an unknown ESD on different axes suggests a classification for them. (See Figures 5 and 16.) More quantitative analysis may lead to more definite conclusions, but that too comes with technical challenges.
To illustrate this, we provide a visual and operational approach to understand the limiting forms for different μ. See Figure 5. Figure 5(a) displays the log-log histograms of the ESD ρ_N(λ) for three Heavy-Tailed random matrices, with μ = 1.0 (blue), μ = 3.0 (green), and μ = 5.0 (red). For μ = 1.0 (blue), the log-log histogram is linear over 5 log scales. If N increases (not shown), λ_max will grow, but this plot will remain linear, and the tail will not decay. In the infinite limit, the ESD will still be Heavy-Tailed. Contrast this with the ESD drawn from the same distribution, except with μ = 3.0 (green). Here, due to larger finite-size effects, most of the mass is confined to one or two log scales, and it starts to vanish beyond that. This effect is amplified for μ = 5.0 (red), which shows almost no mass for eigenvalues beyond the MP bulk (i.e., λ > λ⁺). Zooming in, in Figure 5(b), we see that the log-log plot is linear—in the central region only—and the tail vanishes very quickly. If N increases (not shown), the ESD will remain Heavy-Tailed, but the mass will grow much more slowly than when μ = 1.0. This illustrates that, while ESDs can be Heavy-Tailed at finite size, the tails decay at different rates for different Heavy-Tailed Universality classes (0 < μ < 2 or 2 < μ < 4 or 4 < μ).
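Matrices of this kind are easy to generate; a sketch (our helper names; note that numpy's `pareto` draws from a Lomax distribution, which we shift and sign to get Pareto-tailed elements with exponent μ):

```python
import numpy as np

def heavy_tailed_esd(mu, N=1000, M=1000, seed=0):
    """ESD of X = (1/N) W^T W, with elements P(|W_ij|) ~ |W_ij|^(-1-mu)."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=(N, M))
    W = signs * (1.0 + rng.pareto(mu, size=(N, M)))   # Pareto tail with exponent mu
    return np.linalg.eigvalsh(W.T @ W / N)

def loglog_hist(evals, nbins=40):
    """Histogram counts over log-spaced bins, for log-log inspection of the tail."""
    evals = evals[evals > 0]
    bins = np.logspace(np.log10(evals.min()) - 1e-9,
                       np.log10(evals.max()) + 1e-9, nbins)
    return np.histogram(evals, bins=bins)
```

For μ = 1.0, the resulting eigenvalue mass spreads over many log scales; for μ = 5.0, almost all mass sits inside an MP-like bulk, consistent with the three classes above.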
Fitting PL distributions to ESD plots.
Once we have identified PL distributions visually (using a log-log histogram of the ESD, and looking for the visual characteristics of Figure 5), we can fit the ESD to a PL in order to obtain the exponent α. For this, we use the Clauset-Shalizi-Newman (CSN) approach [33], as implemented in the python PowerLaw package [2], (Footnote 13: See https://github.com/jeffalstott/powerlaw.) which computes an α such that

  ρ_emp(λ) ∼ λ^(−α),  for λ ≥ λ_min.
Generally speaking, fitting a PL has many subtleties, most beyond the scope of this paper [33, 55, 91, 100, 16, 73, 35, 2, 145, 58]. For example, care must be taken to ensure the distribution is actually linear (in some regime) on a log-log scale before applying the PL estimator, lest it give spurious results; and the PL estimator only works reasonably well for exponents in the range 1.5 < α ≲ 3.5.
To illustrate this, consider Figure 6. In particular, Figure 6(a) shows that the CSN estimator performs well in the regime 0 < μ < 2, while for 2 < μ < 4 there are substantial deviations due to finite-size effects, and for 4 < μ no reliable results are obtained; and Figures 6(b) and 6(c) show that the finite-size effects can be quite complex (for fixed M, increasing μ leads to larger finite-size effects, while for fixed μ, decreasing M leads to larger finite-size effects).
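Our fits rely on the PowerLaw package; the core of the CSN method, for a given lower cutoff λ_min, is the continuous MLE for α (the full method additionally selects λ_min by minimizing the KS distance). A minimal sketch (our function names):

```python
import numpy as np

def pl_alpha_mle(evals, lam_min):
    """Continuous MLE for the PL exponent alpha, at a fixed lower cutoff lam_min."""
    tail = np.asarray(evals, dtype=float)
    tail = tail[tail >= lam_min]
    return 1.0 + tail.size / np.sum(np.log(tail / lam_min))

def ks_distance(evals, lam_min, alpha):
    """KS distance between the empirical tail CDF and the fitted PL CDF."""
    tail = np.sort(np.asarray(evals)[np.asarray(evals) >= lam_min])
    cdf_fit = 1.0 - (tail / lam_min) ** (1.0 - alpha)
    cdf_emp = np.arange(1, tail.size + 1) / tail.size
    return np.max(np.abs(cdf_emp - cdf_fit))
```

On synthetic PL data with a known exponent (e.g., inverse-CDF samples with α = 2.5), the estimator recovers α to within sampling error, and the KS distance of the fit is small.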
Identifying the Universality class.
Given α, we identify the corresponding μ (as illustrated in Figure 6) and thus which of the three Heavy-Tailed Universality classes (0 < μ < 2 or 2 < μ < 4 or 4 < μ, as described in Table 3) is appropriate to describe the system. For our theory, the following points are particularly important. First, observing a Heavy-Tailed ESD may indicate the presence of a scale-free DNN. This suggests that the underlying DNN is strongly-correlated, and that we need more than just a few separated spikes, plus some random-like bulk structure, to model the DNN and to understand DNN regularization. Second, this does not necessarily imply that the matrix elements of W form a Heavy-Tailed distribution. Rather, the Heavy-Tailed distribution arises since we posit it as a model of the strongly-correlated, highly non-random matrix W. Third, we conjecture that this is more general, and that very well-trained DNNs will exhibit Heavy-Tailed behavior in the ESDs of many of their weight matrices (as we have observed so far with many pretrained models).
3.3 Eigenvector localization
When the entries of a random matrix are drawn from distributions in the Gaussian Universality class, and under typical assumptions, eigenvectors tend to be delocalized, i.e., the mass of the eigenvector tends to be spread out over most or all the components of that vector. For other models, eigenvectors can be localized. For example, spike eigenvectors in Spiked-Covariance models as well as extremal eigenvectors in Heavy-Tailed random matrix models tend to be more localized [110, 24]. Eigenvector delocalization, in traditional RMT, is modeled using the Porter-Thomas Distribution [118]. Since a typical bulk eigenvector v should have maximum entropy, its components v_i should be Gaussian distributed, according to:

  Porter-Thomas Distribution:  P(v_i) ∝ exp( −v_i² / 2 ).

Here, we normalize v such that the empirical variance of the elements is unity, ⟨v_i²⟩ = 1. Based on this, we can define several related eigenvector localization metrics.

The Generalized Vector Entropy, S(v) := −Σ_i p_i ln p_i, is computed using a histogram estimator p of the distribution of the components of v.

The Localization Ratio, L(v) := ‖v‖₁ / ‖v‖∞, measures the sum of the absolute values of the elements of v, relative to the largest absolute value of an element of v.

The Participation Ratio, P(v) := ( Σ_i v_i² )² / Σ_i v_i⁴, is a robust variant of the Localization Ratio.
For all three metrics, the lower the value, the more localized the eigenvector tends to be. We use deviations from delocalization as a diagnostic that the corresponding eigenvector is more structured/regularized.
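All three metrics are straightforward to compute. A sketch in numpy (exact normalizations in our experiments may differ; the histogram bin count for the entropy is a free parameter):

```python
import numpy as np

def vector_entropy(v, nbins=50):
    """Generalized vector entropy, via a histogram estimator of the components of v."""
    h, _ = np.histogram(v, bins=nbins)
    p = h[h > 0] / h.sum()
    return -np.sum(p * np.log(p))

def localization_ratio(v):
    """L(v) = ||v||_1 / ||v||_inf: small when the mass sits on few components."""
    return np.sum(np.abs(v)) / np.max(np.abs(v))

def participation_ratio(v):
    """P(v) = (sum v_i^2)^2 / sum v_i^4: a robust variant of the localization ratio."""
    return np.sum(v ** 2) ** 2 / np.sum(v ** 4)
```

As a sanity check, a delocalized Gaussian vector of length 1000 scores high on all three metrics, while a one-hot vector scores the minimum (1) on both ratios.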
4 Empirical Results: ESDs for Existing, Pretrained DNNs
In this section, we describe our main empirical results for existing, pretrained DNNs. (Footnote 14: A practical theory for DNNs should be applicable to very large—production-quality, and even pretrained—models, as well as to models during training.) Early on, we observed that small DNNs and large DNNs have very different ESDs. For smaller models, ESDs tend to fit the MP theory well, with well-understood deviations, e.g., low-rank perturbations. For larger models, the ESDs almost never fit the theoretical ρ_mp(λ), and they frequently have a completely different functional form. We use RMT to compare and contrast the ESDs of a smaller, older NN and many larger, modern DNNs. For the small model, we retrain a modern variant of one of the very early and well-known Convolutional Nets—LeNet5. We use Keras 2, and we train LeNet5 on MNIST. For the larger, modern models, we examine selected layers from AlexNet, InceptionV3, and many other models (as distributed with pyTorch). Table 4 provides a summary of the models we analyzed in detail.

Table 4: Summary of the models analyzed in detail.

Model | Used for: Section | Layer | Key observation in ESD(s)
MLP3 | 2 | FC1, FC2 | MP Bulk+Spikes
LeNet5 | 4.1 | FC1 | MP Bulk+Spikes, with edge Bleeding-out
AlexNet | 4.2, 5.5 | FC1 | Bulk-decay into Heavy-Tailed
AlexNet | 4.2, 5.5 | FC2 | (nearly very) Heavy-Tailed
AlexNet | 4.2, 5.5 | FC3 | Heavy-Tailed
InceptionV3 | 4.3 | L226 | Bimodal
InceptionV3 | 4.3 | L302 | Bimodal, fat or Heavy-Tailed
MiniAlexNet | 6 | FC1, FC2 | Exhibits all 5+1 Phases of Training, by varying batch size
4.1 Example: LeNet5 (1998)
LeNet5 predates both the current Deep Learning revolution and the so-called AI Winter, dating back to the late 1990s [79]. It is the prototype early model for DNNs; it is the most widely-known example of a Convolutional Neural Network (CNN); and it was used in production systems for recognizing handwritten digits [79]. The basic design consists of Convolutional (Conv2D) and MaxPooling layers, followed by 2 Dense, or Fully Connected (FC), layers, FC1 and FC2. This design inspired modern DNNs for image classification, e.g., AlexNet, VGG16, and VGG19. All of these latter models consist of a few Conv2D and MaxPooling layers, followed by a few FC layers. Since LeNet5 is older, we actually recoded and retrained it. We used Keras 2.0, training with the AdaDelta optimizer on the MNIST data set. This model attains good training and test accuracy on the default MNIST split. We analyze the ESD of the FC1 Layer (but not the FC2 Layer, since it has only 10 eigenvalues). The FC1 weight matrix has M = 500 columns, and thus it yields 500 eigenvalues.
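The ESD itself is computed directly from the layer's weight matrix. A sketch (the random W below is a stand-in with 500 eigenvalues, like FC1; with Keras one would instead pull the trained kernel via `layer.get_weights()[0]`, and `'fc1'` is a hypothetical layer name):

```python
import numpy as np

def esd(W):
    """Eigenvalues of X = (1/N) W^T W for an N x M weight matrix W (Q = N/M >= 1)."""
    N, M = W.shape
    assert N >= M, "transpose W so that Q = N/M >= 1"
    return np.linalg.eigvalsh(W.T @ W / N)

# stand-in for the trained FC1 weights; a real analysis would use, e.g.,
#   W = model.get_layer('fc1').get_weights()[0]   # hypothetical layer name
W = np.random.default_rng(0).normal(size=(2450, 500))
evals = esd(W)   # 500 eigenvalues, as for FC1 in the text
```

For a trained layer, one then histograms `evals` and compares against the best MP fit, as in Figure 7.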
FC1: MP Bulk+Spikes, with edge Bleeding-out.
Figure 7 presents the ESD for FC1 of LeNet5, with Figure 7(a) showing the full ESD and Figure 7(b) showing the same ESD, zoomed-in along the X-axis to highlight smaller peaks outside the main bulk of our MP fit. In both cases, we show (red curve) our fit to the MP distribution ρ(λ). Several things are striking. First, the bulk of the density has a large, MP-like shape, and the MP distribution fits this part of the ESD very well, including the fact that the ESD just below the best fit of λ⁺ is concave. Second, some eigenvalue mass is bleeding out from the MP bulk for λ > λ⁺, although it is quite small. Third, beyond the MP bulk and this bleeding-out region are several clear outliers, or spikes.
Summary.
The shape of ρ(λ), the quality of the global bulk fit, and the statistics and crisp shape of the local bulk edge all agree well with standard MP theory, or at least with the variant of MP theory augmented with a low-rank perturbation. In this sense, this model can be viewed as a real-world example of the Spiked-Covariance model [68].
4.2 Example: AlexNet (2012)
AlexNet was the first modern DNN, and its spectacular performance opened the door for today's revolution in Deep Learning. Specifically, it won the ImageNet ILSVRC2012 classification task [74], achieving a top-5 error far ahead of the first runner-up. AlexNet resembles a scaled-up version of the LeNet5 architecture; it consists of 8 layers: 5 convolutional, followed by 3 FC layers (the last being a softmax classifier). (Footnote 15: It was the first CNN to win this competition, and it was also the first DNN to use ReLUs in a wide-scale competition. It also used a novel Local Contrast Normalization layer, which is no longer in widespread use, having been mostly replaced with Batch Normalization or similar methods.) We will analyze the version of AlexNet currently distributed with pyTorch (version 0.4.1). In this version, FC1 has a 9216 × 4096 matrix, with Q = 2.25; FC2 has a 4096 × 4096 matrix, with Q = 1; and FC3 has a 4096 × 1000 matrix, with Q = 4.1. Notice that FC3 is the final layer and connects AlexNet to the labels.
Figures 8, 9, and 10 present the ESDs for the weight matrices of AlexNet for Layers FC1, FC2, and FC3, with Figures 8(a), 9(a), and 10(a) showing the full ESD, and Figures 8(b), 9(b), and 10(b) showing the results "zoomed-in" along the X-axis. In each case, we present the best MP fits, as determined by holding Q fixed, adjusting the σ²_mp parameter, and selecting the best bulk fit by visual inspection. Fitting σ²_mp fixes λ⁺, and the estimates differ for different layers because the matrices have different aspect ratios Q. In each case, the ESDs exhibit moderate to strong deviations from the best standard MP fit.
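Our fits are selected by visual inspection; one can mimic the procedure programmatically by holding Q fixed and grid-searching the MP variance against the ESD histogram. A rough sketch (our function names; a real fit would restrict attention to the bulk and handle outlying spikes):

```python
import numpy as np

def mp_rho(lam, sigma2, Q):
    # MP density of Eqn. (8); zero outside the bulk [lm, lp]
    lm = sigma2 * (1.0 - Q ** -0.5) ** 2
    lp = sigma2 * (1.0 + Q ** -0.5) ** 2
    rho = np.zeros_like(lam)
    m = (lam > lm) & (lam < lp)
    rho[m] = Q / (2.0 * np.pi * sigma2) * np.sqrt((lp - lam[m]) * (lam[m] - lm)) / lam[m]
    return rho

def fit_sigma2(evals, Q, grid=np.linspace(0.2, 3.0, 281)):
    """Grid-search sigma2, minimizing squared error between ESD histogram and MP density."""
    dens, edges = np.histogram(evals, bins=60, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    errs = [np.mean((dens - mp_rho(centers, s, Q)) ** 2) for s in grid]
    return grid[int(np.argmin(errs))]
```

On a clean Gaussian random matrix the grid search recovers the true element-wise variance; on the AlexNet layers, no choice on the grid produces a convincing bulk fit, which is the point of this section.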
FC1: Bulk-decay into Heavy-Tailed.
Consider first AlexNet FC1 (in Figures 8(a) and 8(b)). The eigenvalues range from near zero up to well beyond the bulk, just as with LeNet5. The full ESD, however, is shaped very differently than any theoretical ρ_mp(λ), for any value of σ²_mp. The best MP fit (in red in Figure 8) does capture a good part of the eigenvalue mass, but there are important differences: the peak is not filled in, there is substantial eigenvalue mass bleeding out from the bulk, and the shape of the ESD is convex in the region near to and just above the best fit for λ⁺ of the bulk edge. Contrast this with the excellent MP fit for the ESD for FC1 of LeNet5 (Figure 7(b)), where the red curve captures all of the bulk mass, and only a few outlying spikes appear. Moreover, and very importantly, in AlexNet FC1, the bulk edge is not crisp. In fact, it is not visible at all; it is solely defined operationally by selecting the σ²_mp parameter. As such, the edge fluctuations, Δλ, do not resemble a TW distribution, and the bulk itself appears to just decay into the heavy tail. Finally, a PL fit gives a good fit with α ≈ 2.29 (Table 5), suggesting (due to finite-size effects) a μ in the 2 < μ < 4 range.
FC2: (nearly very) Heavy-Tailed ESD.
Consider next AlexNet FC2 (in Figures 9(a) and 9(b)). This ESD differs even more profoundly from standard MP theory. Here, we could find no good MP fit, even by adjusting σ²_mp and Q simultaneously. The best MP fit (in red) does not fit the bulk part of ρ(λ) at all. The fit suggests there should be significantly more bulk eigenvalue mass (i.e., a larger empirical variance) than is actually observed. In addition, as with FC1, the bulk edge is indeterminate by inspection. It is only defined by the crude fit we present, and any edge statistics obviously do not exhibit TW behavior. In contrast with MP curves, which are convex near the bulk edge, the entire ESD is concave (nearly) everywhere. Here, a PL fit gives a good fit with α ≈ 2.25 (Table 5), smaller than for FC1 and FC3, indicating a smaller μ, close to the boundary of the (very) Heavy-Tailed regime.
FC3: Heavy-Tailed ESD.
Consider finally AlexNet FC3 (in Figures 10(a) and 10(b)). Here, too, the ESDs deviate strongly from the predictions of MP theory, both for the global bulk properties and for the local edge properties. A PL fit gives a good fit with α ≈ 3.02 (Table 5), which is larger than for FC1 and FC2. This suggests a larger μ, still within the 2 < μ < 4 class (which is also shown with a log-log histogram plot in Figure 16 in Section 5 below).
Summary.
For all three layers, the shape of ρ(λ), the quality of the global bulk fit, and the statistics and shape of the local bulk edge are poorly-described by standard MP theory. Even when we may think we have a moderately good MP fit, because the bulk shape is qualitatively captured by MP theory (at least by visual inspection), we may see a complete breakdown of RMT at the bulk edge, where we expect crisp TW statistics (or at least a concave envelope of support). In other cases, MP theory may be a poor estimator for even the bulk.
4.3 Example: InceptionV3 (2014)
In the few years after AlexNet, several new, deeper DNNs started to win the ILSVRC ImageNet competitions, including ZFNet (2013) [155], VGG (2014) [130], GoogLeNet/Inception (2014) [139], and ResNet (2015) [61]. We have observed that nearly all of these DNNs have properties that are similar to AlexNet. Rather than describe them all in detail, in Section 4.4, we perform power law fits on the Linear/FC layers in many of these models. Here, we want to look more deeply at the Inception model, since it displays some unique properties. (Footnote 16: Indeed, these results suggest that Inception models do not truly account for all the correlations in the data.)
In 2014, the VGG [130] and GoogLeNet [139] models were close competitors in the ILSVRC2014 challenges. For example, GoogLeNet won the classification challenge, but VGG performed better on the localization challenge. These models were quite deep, with GoogLeNet having 22 layers and VGG having 19 layers. The VGG model is over 2X as deep as AlexNet, but it replaces each larger AlexNet filter with more, smaller filters. Presumably this deeper architecture, with more non-linearities, can better capture the correlations in the network. The VGG features of the second-to-last FC layer generalize well to other tasks. A downside of the VGG models is that they have a lot of parameters and use a lot of memory.
The GoogLeNet/Inception design resembles the VGG architecture, but it is even more computationally efficient, which (practically) means smaller matrices, fewer parameters (12X fewer than AlexNet), and a very different architecture, including no internal FC layers, except those connected to the labels. In particular, it was noted that most of the activations in these DNNs are redundant because they are so strongly correlated. So, a sparse architecture should perform just as well, but with much less computational cost—if implemented properly to take advantage of low-level BLAS calculations on the GPU. So, an Inception module was designed. This module approximates a sparse Convolutional Net, but using many smaller, dense matrices, leading to many small filters of different sizes, concatenated together. The Inception modules are then stacked on top of each other to give the full DNN. GoogLeNet also replaces the later FC layers (i.e., those in AlexNet-like architectures) with global average pooling, leaving only a single FC / Dense layer, which connects the DNN to the labels. Being so deep, it is necessary to include an Auxiliary block that also connects to the labels, similar to the final FC layer. From this, we can extract a single rectangular tensor. This gives 2 FC layers to analyze.
For our analysis of InceptionV3 [139], we select a layer (L226) from the Auxiliary block, as well as the final (L302) FC layer. Figure 11 presents the ESDs for InceptionV3 for Layer L226 and Layer L302, two large, fully-connected weight matrices with aspect ratios Q ≈ 1.3 and Q ≈ 2.05, respectively. We also show typical MP fits for matrices with the same aspect ratios Q. As with AlexNet, the ESDs for both the L226 and L302 layers display distinct and strong deviations from MP theory.
L226: Bimodal ESDs.
Consider first L226 of InceptionV3. Figure 11(a) displays the L226 ESD. (Recall this is not a true Dense layer, but it is part of the Inception Auxiliary module, and it looks very different from the other FC layers, both in AlexNet and below.) At first glance, we might hope to select a bulk edge λ⁺ and treat the remaining eigenvalue mass as an extended spike; but this visually gives a terrible MP fit (not shown). Selecting a larger variance produces an MP fit with a reasonable shape to the envelope of support of the bulk; but this fit strongly overestimates the bulk variance / Frobenius mass (in particular near the peak), and it strongly underestimates the spike mass further out. We expect this fit would fail any reasonable statistical confidence test for an MP distribution. As in all cases, numerous spikes extend all the way out to the largest eigenvalue, showing a longer, heavier tail than any MP fit. It is unclear whether or not the edge statistics are TW. There is no good MP fit for the ESD of L226, but it is unclear whether this distribution is "truly" Heavy-Tailed or simply appears Heavy-Tailed as a result of the bimodality. Visually, at least the envelope of the L226 ESD resembles a Heavy-Tailed MP distribution. It is also possible that the DNN itself is not fully optimized, and we hypothesize that further refinements could lead to a true Heavy-Tailed ESD.
L302: Bimodal, fat or Heavy-Tailed ESDs.
Consider next L302 of InceptionV3 (in Figure 11(b)). The ESD for L302 is slightly bimodal (on a log-log plot), but nowhere near as strongly as L226, and we cannot visually select any bulk edge λ⁺. The bulk barely fits any MP density; our best attempt is shown. Also, the global ESD has the wrong shape; and the MP fit is concave near the edge, where the ESD is convex, illustrating that the edge decays into the tail. For any MP fit, significant eigenvalue mass extends out continuously, forming a long tail extending all the way to the largest eigenvalue. The ESD of L302 resembles that of the Heavy-Tailed FC2 layer of AlexNet, except for the small bimodal structure. These initial observations illustrate that we need a more rigorous approach to make strong statements about the specific kind of distribution (i.e., Pareto vs. other Heavy-Tailed) and which Universality class it may lie in. We present an approach to resolve these technical details in Section 5.5.
4.4 Empirical results for other pretrained DNNs
In addition to the models from Table 4 that we analyzed in detail, we have also examined the properties of a wide range of other pretrained models, including models from both Computer Vision and Natural Language Processing (NLP). This includes models trained on ImageNet, distributed with the pyTorch package, including VGG16, VGG19, ResNet50, InceptionV3, etc. See Table 5. This also includes different NLP models, distributed in AllenNLP [51], including models for Machine Comprehension, Constituency Parsing, Semantic Role Labeling, Coreference Resolution, and Named Entity Recognition, giving a total of 84 linear layers. See Table 6. Rather remarkably, we have observed similar Heavy-Tailed properties, visually and in terms of Power Law fits, in all of these larger, state-of-the-art DNNs, leading to results that are nearly universal across these widely different architectures and domains. We have also seen Hard Rank deficiency in layers in several of these models. We provide a brief summary of those results here.
Table 5: Power Law fits for the ESDs of linear layers in pretrained ImageNet models distributed with pyTorch.

Model | Layer | Q | α | D | Best Fit
alexnet | 17/FC1 | 2.25 | 2.29 | 0.0527 | PL
alexnet | 20/FC2 | 1 | 2.25 | 0.0372 | PL
alexnet | 22/FC3 | 4.1 | 3.02 | 0.0186 | PL
densenet121 | 432 | 1.02 | 3.32 | 0.0383 | PL
densenet161 | 572 | 2.21 | 3.45 | 0.0322 | PL
densenet169 | 600 | 1.66 | 3.38 | 0.0396 | PL
densenet201 | 712 | 1.92 | 3.41 | 0.0332 | PL
inception v3 | L226 | 1.3 | 5.26 | 0.0421 | PL
inception v3 | L302 | 2.05 | 4.48 | 0.0275 | PL
resnet101 | 286 | 2.05 | 3.57 | 0.0278 | PL
resnet152 | 422 | 2.05 | 3.52 | 0.0298 | PL
resnet18 | 67 | 1.95 | 3.34 | 0.0342 | PL
resnet34 | 115 | 1.95 | 3.39 | 0.0257 | PL
resnet50 | 150 | 2.05 | 3.54 | 0.027 | PL
vgg11 | 24 | 6.12 | 2.32 | 0.0327 | PL
vgg11 | 27 | 1 | 2.17 | 0.0309 | TPL
vgg11 | 30 | 4.1 | 2.83 | 0.0398 | PL
vgg11 bn | 32 | 6.12 | 2.07 | 0.0311 | TPL
vgg11 bn | 35 | 1 | 1.95 | 0.0336 | TPL
vgg11 bn | 38 | 4.1 | 2.99 | 0.0339 | PL
vgg16 | 34 | 6.12 | 2.3 | 0.0277 | PL
vgg16 | 37 | 1 | 2.18 | 0.0321 | TPL
vgg16 | 40 | 4.1 | 2.09 | 0.0403 | TPL
vgg16 bn | 47 | 6.12 | 2.05 | 0.0285 | TPL
vgg16 bn | 50 | 1 | 1.97 | 0.0363 | TPL
vgg16 bn | 53 | 4.1 | 3.03 | 0.0358 | PL
vgg19 | 40 | 6.12 | 2.27 | 0.0247 | PL
vgg19 | 43 | 1 | 2.19 | 0.0313 | PL
vgg19 | 46 | 4.1 | 2.07 | 0.0368 | TPL
vgg19 bn | 56 | 6.12 | 2.04 | 0.0295 | TPL
vgg19 bn | 59 | 1 | 1.98 | 0.0373 | TPL
vgg19 bn | 62 | 4.1 | 3.03 | 0.035 | PL

Table 6: Pretrained AllenNLP models analyzed, and the number of linear layers in each.

Model | Problem Domain | Linear Layers
MC | Machine Comprehension | 16
CP | Constituency Parsing | 17
SRL | Semantic Role Labeling | 32
COREF | Coreference Resolution | 4
NER | Named Entity Recognition | 16
Power Law Fits.
We have performed Power Law (PL) fits for the ESDs of selected (linear) layers from all of these pretrained ImageNet and NLP models. (Footnote 17: We use the default fitting method (MLE, for continuous distributions), and we set xmax to λ_max, i.e., to the maximum eigenvalue of the ESD.) Table 5 summarizes the detailed results for the ImageNet models. Several observations can be made. First, all of our fits, except for certain layers in InceptionV3, appear to be in the range 1.5 < α ≲ 3.5 (where the CSN method is known to perform well). Second, we also check whether a PL is the best fit by comparing the distribution to a Truncated Power Law (TPL), as well as to exponential, stretched-exponential, and log-normal distributions. The column "Best Fit" reports the best distributional fit. In all cases, we find that either a PL or a TPL fits best, with TPL being more common for smaller values of α. Third, even when taking into account the large finite-size effects in the range 2 < μ < 4, as illustrated in Figure 6, nearly all of the ESDs appear to fall into the 2 < μ < 4 Universality class. Figure 12 displays the distribution of PL exponents α for each set of models. Figure 12(a) shows the fitted power law exponents for all of the linear layers in the pretrained ImageNet models available in pyTorch (in Table 5); and Figure 12(b) shows the same for the pretrained models available in AllenNLP (in Table 6). The large majority of the layers have exponents in the moderately Heavy-Tailed range, and nearly all of the rest lie close to it. One of these, InceptionV3, was discussed above, precisely since it was unusual, leading to an anomalously large value of α due to the dip in its ESD.
Rank Collapse.
RMT also predicts that for matrices with Q > 1, the minimum singular value will be greater than zero, i.e., ν_min > 0. We test this by again looking at all of the FC layers in the pretrained ImageNet and AllenNLP models. See Figure 13 for a summary of the results. While the ImageNet models mostly follow this rule, a few of the FC layers have ν_min ≈ 0, i.e., close to the numerical threshold for rank collapse. In these few cases, the ESD still exhibits HeavyTailed properties, but the rank loss ranges from a single zero eigenvalue up to a sizable fraction of the eigenvalue mass. For the NLP models, we see no rank collapse, i.e., all of the AllenNLP layers have ν_min > 0.
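This check is straightforward to reproduce. A minimal sketch, assuming Gaussian weights for the healthy case and an explicitly rank-deficient matrix (a hypothetical stand-in for a rank-collapsed layer) for contrast:

```python
import numpy as np

def min_singular_value(W):
    """Smallest singular value nu_min of a layer weight matrix W."""
    return np.linalg.svd(W, compute_uv=False).min()

rng = np.random.default_rng(1)

# Rectangular Gaussian matrix with aspect ratio Q = N/M = 4 > 1:
# MP theory predicts a strictly positive lower bulk edge, hence nu_min > 0.
W = rng.standard_normal((1000, 250))
nu_min = min_singular_value(W)

# A rank-deficient matrix, by contrast, has nu_min = 0 ("rank collapse").
W_collapsed = np.outer(rng.standard_normal(1000), rng.standard_normal(250))
nu_min_collapsed = min_singular_value(W_collapsed)
```

For real models one would loop this over every FC layer, as done for Figure 13.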
4.5 Towards a theory of SelfRegularization
In a few cases (e.g., LeNet5 in Section 4.1), MP theory appears to apply to the bulk of the ESD, with only a few outlying eigenvalues larger than the bulk edge. In other, more realistic cases (e.g., AlexNet and InceptionV3 in Sections 4.2 and 4.3, respectively, and every other largescale DNN we have examined, as summarized in Section 4.4), the ESDs do not resemble anything predicted by standard RMT/MP theory. This should not be unexpected: a welltrained DNN should have highly nonrandom, stronglycorrelated weight matrices W, in which case MP theory would not seem to apply. Moreover, except for InceptionV3, which was chosen to illustrate several unusual properties, nearly every DNN displays HeavyTailed properties such as those seen in AlexNet.
These empirical results suggest the following: first, that we can construct an operational and phenomenological theory (both to obtain fundamental insights into DNN regularization and to help guide the training of very large DNNs); and second, that we can build this theory by applying the full machinery of modern RMT to characterize the state of the DNN weight matrices.
For older and/or smaller models, like LeNet5, the bulk of their ESDs can be wellfit to the theoretical MP density, potentially with several distinct, outlying spikes above the bulk edge λ⁺. This is consistent with the SpikedCovariance model of Johnstone [68], a simple perturbative extension of the standard MP theory. (Footnote 18: It is also consistent with HeavyTailed extensions of MP theory, for larger values of the PL exponent.) This is also reminiscent of traditional Tikhonov regularization, in that there is a “size scale” separating signal (spikes) from noise (bulk). In this sense, the small NNs of yesteryear, and the smallish models used in many research studies, may in fact behave more like traditional ML models. In the context of disordered systems theory, as developed by Sornette [92], this model is a form of SelfOrganization. Putting this all together demonstrates that the DNN training process itself engineers a form of implicit SelfRegularization into the trained model.
For large, deep, stateoftheart DNNs, our observations suggest that there are profound deviations from traditional RMT. These networks are reminiscent of stronglycorrelated disordered systems that exhibit HeavyTailed behavior. What is this regularization, and how is it related to our observations of implicit Tikhonovlike regularization on LeNet5?
To answer this, recall that similar behavior arises in stronglycorrelated physical systems, where it is known that stronglycorrelated systems can be modeled by random matrices with entries drawn from nonGaussian Universality classes [134], e.g., PL or other HeavyTailed distributions. Thus, when we observe that the ESD of W has HeavyTailed properties, we can hypothesize that W is stronglycorrelated, and we can model it with a HeavyTailed distribution. (Footnote 19: For DNNs, these correlations arise in the weight matrices during Backprop training, at least when training on data of reasonable quality. That is, the weight matrices “learn” the correlations in the data.) Then, upon closer inspection, we find that the ESDs of large, modern DNNs behave as expected when using the lens of HeavyTailed variants of RMT. Importantly, unlike the SpikedCovariance case, which has a scale cutoff at the bulk edge λ⁺, in these very strongly HeavyTailed cases, correlations appear on every size scale, and we cannot find a clean separation between the MP bulk and the spikes. These observations demonstrate that modern, stateoftheart DNNs exhibit a new form of HeavyTailed SelfRegularization.
In the next few sections, we construct and test (on miniature AlexNet) our new theory.
5 5+1 Phases of Regularized Training
In this section, we develop an operational and phenomenological theory for DNN SelfRegularization that is designed to address questions such as the following. How does DNN SelfRegularization differ between older models like LeNet5 and newer models like AlexNet or Inception? What happens to the SelfRegularization when we adjust the numerous knobs and switches of the solver itself during SGD/Backprop training? How are knobs such as early stopping, batch size, and learning rate related to more familiar regularizers like Weight Norm constraints and Tikhonov regularization? Our theory builds on the empirical results from Section 4; and our theory has consequences and makes predictions that we test in Section 6.
MP Soft Rank.
We first define a metric, the MP Soft Rank (R_mp), that is designed to capture the “size scale” of the noise part of the layer weight matrix W, relative to the largest eigenvalue λ_max of its ESD. Going beyond spectral methods, this metric exploits MP theory in an essential way.
Let’s first assume that MP theory fits at least the bulk of the ESD of W. Then, we can identify a bulk edge λ⁺ and a bulk variance σ²_bulk, and define the MP Soft Rank as the ratio of λ⁺ and λ_max:

R_mp(W) := λ⁺ / λ_max.   (11)

Clearly, 0 ≤ R_mp ≤ 1; R_mp = 1 for a purely random matrix (as in Section 5.1); and for a matrix whose ESD has outlying spikes (as in Section 5.3), λ_max > λ⁺ and R_mp < 1. If there is no good MP fit because the entire ESD is wellapproximated by a HeavyTailed distribution (as described in Section 5.5, e.g., for a strongly correlated weight matrix), then we can take λ⁺ = 0 and still use Eqn. (11), in which case R_mp = 0.
The MP Soft Rank is interpreted differently than the Stable Rank (R_s), which is proportional to the bulk MP variance divided by λ_max:

R_s(W) := ||W||_F² / ||W||₂² = (Σ_i λ_i) / λ_max.   (12)
As opposed to the Stable Rank, the MP Soft Rank is defined in terms of the MP distribution, and it depends on how the bulk of the ESD is fit. While the Stable Rank R_s indicates how many eigencomponents are necessary for a relatively good lowrank approximation of an arbitrary matrix, the MP Soft Rank R_mp describes how well MP theory fits part of the matrix ESD. Empirically, R_mp and R_s often correlate and track similar changes. Importantly, though, there may be no good lowrank approximation of the layer weight matrices of a DNN, especially a well trained one.
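Eqns. (11) and (12) can be computed directly from a layer matrix. The sketch below assumes the convention X = WᵀW/N with Q = N/M and Gaussian entries with σ² = 1, so that the bulk edge λ⁺ has the usual MP closed form; for a real layer, λ⁺ would instead come from the MP fit:

```python
import numpy as np

def esd(W):
    """Eigenvalues of the correlation matrix X = W^T W / N (the ESD support)."""
    N = W.shape[0]
    return np.linalg.svd(W, compute_uv=False) ** 2 / N

def stable_rank(W):
    """Stable Rank R_s = ||W||_F^2 / ||W||_2^2 = sum_i lambda_i / lambda_max."""
    eigs = esd(W)
    return eigs.sum() / eigs.max()

def mp_soft_rank(W, lam_plus):
    """MP Soft Rank R_mp = lambda_plus / lambda_max, for a fitted bulk edge."""
    return lam_plus / esd(W).max()

rng = np.random.default_rng(2)
N, M = 1000, 500
W = rng.standard_normal((N, M))
# For Gaussian entries with sigma^2 = 1 and Q = N/M, the MP bulk edge is
# lam_plus = sigma^2 * (1 + 1/sqrt(Q))^2.
lam_plus = (1 + 1 / np.sqrt(N / M)) ** 2
r_mp = mp_soft_rank(W, lam_plus)  # ~1 for a purely random matrix
r_s = stable_rank(W)              # large: no good low-rank approximation exists
```

This also illustrates the contrast drawn above: for a Randomlike matrix, R_mp is near 1 while R_s is large, since the matrix is essentially full rank.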

Phase  Operational Definition  Informal Description via Eqn. (13)  Edge/tail Fluctuation Comments  Illustration and Description 
Randomlike  ESD wellfit by MP with appropriate λ⁺  Δ random; zero or small  λ_max is sharp, with TW statistics  Fig. 14(a) and Sxn. 5.1 
Bleedingout  ESD Randomlike, excluding eigenmass just above λ⁺  W has eigenmass at bulk edge as spikes “pull out”; medium Δ  BBP transition; λ_max and λ⁺ separate  Fig. 14(b) and Sxn. 5.2 
Bulk+Spikes  ESD Randomlike plus spikes well above λ⁺  Δ wellseparated from lowrank W^{rand}; larger Δ  λ⁺ is TW, λ_max is Gaussian  Fig. 14(c) and Sxn. 5.3 
Bulkdecay  ESD less Randomlike; HeavyTailed eigenmass above λ⁺; some spikes  Complex Δ with correlations that don’t fully enter spike  Edge above λ⁺ is not concave  Fig. 14(d) and Sxn. 5.4 
HeavyTailed  ESD betterdescribed by HeavyTailed RMT than Gaussian RMT  W^{rand} is small; Δ is large and stronglycorrelated  No good λ⁺; λ_max ≫ λ⁺  Fig. 14(e) and Sxn. 5.5 
Rankcollapse  ESD has largemass spike at λ = 0  W very rankdeficient; overregularization  —  Fig. 14(f) and Sxn. 5.6 
Visual Taxonomy.
We characterize implicit SelfRegularization, both for DNNs during SGD training as well as for pretrained DNNs, as a visual taxonomy of 5+1 Phases of Training (Randomlike, Bleedingout, Bulk+Spikes, Bulkdecay, HeavyTailed, and Rankcollapse). See Table 7 for a summary. The 5+1 phases can be ordered, with each successive phase corresponding to a smaller Stable Rank / MP Soft Rank and to progressively more SelfRegularization than previous phases. Figure 14 depicts typical ESDs for each phase, with the MP fits (in red). Earlier phases of training correspond to the final state of older and/or smaller models like LeNet5 and MLP3. Later phases correspond to the final state of more modern models like AlexNet, Inception, etc. Thus, while we can describe this in terms of SGD training, this taxonomy does not just apply to the temporal ordering given by the training process. It also allows us to compare different architectures and/or amounts of regularization in a trained—or even pretrained—DNN.
Each phase is visually distinct, and each has a natural interpretation in terms of RMT. One consideration is the global properties of the ESD: how well all or part of the ESD is fit by an MP distribution, for some value of the bulk edge λ⁺, or how well all or part of the ESD is fit by a HeavyTailed or PL distribution, for some value of a PL parameter. A second consideration is local properties of the ESD: the form of fluctuations, in particular around the edge λ⁺ or around the largest eigenvalue λ_max. For example, the shape of the ESD near to and immediately above λ⁺ is very different in Figure 14(a) and Figure 14(c) (where there is a crisp edge) versus Figure 14(b) (where the ESD is concave) versus Figure 14(d) (where the ESD is convex). Gaussianbased RMT (when elements are drawn from the Gaussian Universality class) versus HeavyTailed RMT (when elements are drawn from a HeavyTailed Universality class) provides guidance, as we describe below.
As an illustration, Figure 15 depicts the 5+1 phases for a typical (hypothetical) run of Backprop training for a modern DNN. Figure 15(a) illustrates that we can track the decrease in MP Soft Rank as W changes from an initial random (Gaussianlike) matrix to its final form; and Figure 15(b) illustrates that (at least for the early phases) we can fit its ESD (or the bulk of its ESD) using MP theory, with eigenvalues above λ⁺ corresponding to nonrandom signal eigendirections. Observe that there are eigendirections (below λ⁺) that fit the MP bulk very well, there are eigendirections (well above λ⁺) that correspond to a spike, and there are eigendirections (just slightly above λ⁺) with (convex) curvature more like Figure 14(d) than (the concave curvature of) Figure 14(b). Thus, Figure 15(b) is Bulk+Spikes, with early indications of Bulkdecay.
Theory of Each Phase.
RMT provides more than simple visual insights, and we can use RMT to differentiate between the 5+1 Phases of Training using simple models that qualitatively describe the shape of each ESD. In each phase, we model the weight matrices W as “noise plus signal,” where the “noise” is modeled by a random matrix W^{rand}, with entries drawn from the Gaussian Universality class (welldescribed by traditional MP theory), and the “signal” is a (small or very large) correction Δ:

W ≃ W^{rand} + Δ.   (13)
Table 7 summarizes the theoretical model for each phase. Each model uses RMT to describe the global shape of the ESD, the local shape of the fluctuations at the bulk edge, and the statistics and information in the outlying spikes, including possible HeavyTailed behaviors.
In the first phase (Randomlike), the ESD is welldescribed by traditional MP theory, in which a random matrix W^{rand} has entries drawn from the Gaussian Universality class. This does not mean that the weight matrix W is random, but it does mean that the signal in Δ is too weak to be seen when viewed via the lens of the ESD. In the next phases (Bleedingout, Bulk+Spikes), and/or for small networks such as LeNet5, Δ is a relatively small perturbative correction to W^{rand}, and vanilla MP theory (as reviewed in Section 3.1) can be applied, at least to the bulk of the ESD. In these phases, we will model the matrix W by a vanilla W^{rand} matrix (for appropriate parameters), and the MP Soft Rank is relatively large. In the Bulk+Spikes phase, the model resembles a SpikedCovariance model, and the SelfRegularization resembles Tikhonov regularization.
In later phases (Bulkdecay, HeavyTailed), and/or for modern DNNs such as AlexNet and InceptionV3, Δ becomes more complex and increasingly dominates over W^{rand}. For these more stronglycorrelated phases, W^{rand} is relatively much weaker, and the MP Soft Rank collapses toward zero. Consequently, vanilla MP theory is not appropriate, and instead the SelfRegularization becomes HeavyTailed. In these phases, we will treat the noise term W^{rand} as small, and we will model the properties of Δ with HeavyTailed extensions of vanilla MP theory (as reviewed in Section 3.2) to HeavyTailed nonGaussian Universality classes that are more appropriate for modeling stronglycorrelated systems. In these phases, the stronglycorrelated model is still regularized, but in a very nontraditional way. The final phase, the Rankcollapse phase, is a degenerate case that is a prediction of the theory.
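The contrast between the two Universality classes is easy to see numerically. A minimal sketch, where the sign-symmetrized Pareto entries with tail exponent μ = 1.5 are an illustrative choice (not a model of any particular trained layer):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 1000, 500
Q = N / M
lam_plus = (1 + 1 / np.sqrt(Q)) ** 2  # Gaussian MP bulk edge (sigma^2 = 1)

def esd(W):
    """Eigenvalues of X = W^T W / N."""
    return np.linalg.svd(W, compute_uv=False) ** 2 / N

# Gaussian Universality class: all eigenvalue mass stays inside the MP bulk.
eigs_gauss = esd(rng.standard_normal((N, M)))

# HeavyTailed Universality class: Pareto entries with mu = 1.5 (infinite
# variance). The ESD itself becomes HeavyTailed, with eigenvalues extending
# far beyond any MP-like bulk -- there is no clean scale separation.
signs = rng.choice([-1.0, 1.0], size=(N, M))
eigs_heavy = esd(signs * (rng.pareto(1.5, size=(N, M)) + 1.0))
```

For the Gaussian matrix, the largest eigenvalue sits essentially at λ⁺; for the HeavyTailed matrix, it is orders of magnitude above the typical eigenvalue, mirroring the Bulkdecay/HeavyTailed phases.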
We now describe in more detail each phase in turn.
5.1 Randomlike
In the first phase, the Randomlike phase, shown in Figure 14(a), the DNN weight matrices W resemble a Gaussian random matrix. The ESDs are easily fit to an MP distribution, with the same aspect ratio Q, by fitting the empirical variance σ²_emp. Here, σ²_emp is the elementwise variance (which depends on the normalization of W).
Of course, an initial random weight matrix will show a nearly perfect MP fit. Even in welltrained DNNs, however, the empirical ESDs may be Randomlike, even when the model has a nonzero, and even somewhat large, generalization accuracy. (Footnote 20: In particular, as described below, we observe this with toy models when trained with very large batch sizes.) That is, being fit well by an MP distribution does not imply that the weight matrix W is random. It simply implies that W, while having structure, can be modeled as the sum of a random “noise” matrix W^{rand}, with the same Q and σ²_emp, and some smallsized matrix Δ^{small}, as:

W ≃ W^{rand} + Δ^{small},

where Δ^{small} represents “signal” learned during the training process. In this case, λ_max is sharply bounded, to within finitesize (TW) fluctuations, to the edge of the MP distribution.
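The MP fit used throughout this phase can be sketched directly from its closed form. The sketch below assumes the convention X = WᵀW/N with Q = N/M ≥ 1 and σ² = 1; for a real layer, σ²_emp would be fit rather than fixed:

```python
import numpy as np

def mp_edges(Q, sigma2=1.0):
    """Bulk edges [lam_minus, lam_plus] of the MP density for Q = N/M >= 1."""
    lam_minus = sigma2 * (1 - 1 / np.sqrt(Q)) ** 2
    lam_plus = sigma2 * (1 + 1 / np.sqrt(Q)) ** 2
    return lam_minus, lam_plus

def mp_density(lam, Q, sigma2=1.0):
    """MP density rho(lam) = Q/(2 pi sigma2 lam) * sqrt((lam+ - lam)(lam - lam-))."""
    lam_minus, lam_plus = mp_edges(Q, sigma2)
    lam = np.asarray(lam, dtype=float)
    rho = np.zeros_like(lam)
    inside = (lam > lam_minus) & (lam < lam_plus)
    rho[inside] = (Q / (2 * np.pi * sigma2 * lam[inside])) * np.sqrt(
        (lam_plus - lam[inside]) * (lam[inside] - lam_minus)
    )
    return rho

rng = np.random.default_rng(4)
N, M = 2000, 1000
W = rng.standard_normal((N, M))
eigs = np.linalg.svd(W, compute_uv=False) ** 2 / N
lam_minus, lam_plus = mp_edges(N / M)
# For a Randomlike matrix, essentially all eigenvalue mass lies in the bulk.
frac_in_bulk = np.mean((eigs > lam_minus - 0.05) & (eigs < lam_plus + 0.05))
```

Plotting `mp_density` over a histogram of `eigs` reproduces the red MP fits of Figure 14(a).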
5.2 Bleedingout
In the second phase, the Bleedingout phase, shown in Figure 14(b), the bulk of the ESD still looks reasonably random, except for one or a small number of eigenvalues that extend at or just beyond the MP edge λ⁺. That is, for the given value of Q, we can choose a λ⁺ (or σ²_bulk) parameter so that: (1) most of the ESD is wellfit; and (2) the part of the ESD that is not wellfit consists of a “shelf” of mass, much more than expected by chance, just above λ⁺.
This corresponds to modeling W as the sum of a random “noise” matrix W^{rand} and some mediumsized matrix Δ^{medium}, as:

W ≃ W^{rand} + Δ^{medium},

where Δ^{medium} represents “signal” learned during the training process.
As the spikes just begin to pull out from the bulk, it may be difficult to determine unambiguously whether any particular eigenvalue is spike or bulk. The reason is that, since the matrix is of finite size, we expect the spike locations to be Gaussiandistributed, with finitesize fluctuations around their mean. One option is to try to estimate λ⁺ precisely from a single run. Another option is to perform an ensemble of runs and plot the ESD for the ensemble. Then, if the model is in the Bleedingout phase, there will be a small bump of eigenvalue mass, shaped like a Gaussian, which is very close to but bleedingout from the bulk edge. (Footnote 21: We are being pedantic here since, in later phases, bleedingout mass will be HeavyTailed, not Gaussian.)
When modeling DNN training in terms of RMT and MP theory, the transition from Randomlike to Bleedingout corresponds to the socalled BBP phase transition [9, 23, 45, 20]. This transition represents a “condensation” of the eigenvector corresponding to the largest eigenvalue onto the eigenvalue of the rankone (or, more generally, rankk, if the perturbation is of higher rank) perturbation [23].
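The qualitative effect of such a rankone perturbation can be sketched directly from Eqn. (13). The strength parameter theta and its normalization below are hypothetical illustrative choices (the exact BBP threshold is not computed here); the point is only that a strong enough rankone signal pulls λ_max well out of the bulk:

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 1000, 500
Q = N / M
lam_plus = (1 + 1 / np.sqrt(Q)) ** 2  # Gaussian MP bulk edge (sigma^2 = 1)

def lam_max(theta):
    """Largest ESD eigenvalue of W = W_rand + theta * sqrt(N) * u v^T,
    a rank-one "signal" perturbation of (hypothetical) strength theta."""
    W = rng.standard_normal((N, M))
    u = rng.standard_normal(N); u /= np.linalg.norm(u)
    v = rng.standard_normal(M); v /= np.linalg.norm(v)
    W = W + theta * np.sqrt(N) * np.outer(u, v)
    return (np.linalg.svd(W, compute_uv=False) ** 2).max() / N

lam_weak = lam_max(0.0)    # Randomlike: lam_max sticks to the bulk edge
lam_strong = lam_max(5.0)  # Bulk+Spikes: lam_max condenses well above lam_plus
```

Sweeping theta from 0 upward traces the Randomlike to Bleedingout to Bulk+Spikes progression described above.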
5.3 Bulk+Spikes
In the third phase, the Bulk+Spikes phase, shown in Figure 14(c), the bulk of the ESD still looks reasonably random, except for one or a small number of eigenvalues that extend well beyond the MP edge λ⁺. That is, for the given value of Q, we can choose a λ⁺ (or σ²_bulk) parameter so that: (1) most of the ESD is wellfit; and (2) the part of the ESD that is not wellfit consists of several (K) eigenvalues, or Spikes, that are much larger than λ⁺: