# A Fine-Grained Spectral Perspective on Neural Networks

Greg Yang
Microsoft Research AI
gregyang@microsoft.com
Work done as part of the Microsoft AI Residency Program
###### Abstract

Are neural networks biased toward simple functions? Does depth always help learn more complex features? Is training the last layer of a network as good as training all layers? These questions seem unrelated at face value, but in this work we give all of them a common treatment from the spectral perspective. We will study the spectra of the Conjugate Kernel, CK, (also called the Neural Network-Gaussian Process Kernel), and the Neural Tangent Kernel, NTK. Roughly, the CK and the NTK tell us respectively “what a network looks like at initialization” and “what a network looks like during and after training.” Their spectra then encode valuable information about the initial distribution and the training and generalization properties of neural networks. By analyzing the eigenvalues, we offer novel insights into the questions put forth at the beginning, and we verify these insights with extensive experiments on neural networks.

We believe the computational tools we develop here for analyzing the spectra of CK and NTK serve as a solid foundation for future studies of deep neural networks. We have open-sourced the code for it and for generating the plots in this paper at github.com/thegregyang/NNspectra.


## 1 Introduction

Understanding the behavior of neural networks and why they generalize has been a central pursuit of the theoretical deep learning community. Recently, Valle-Pérez et al. (2018) observed that neural networks have a certain “simplicity bias” and proposed this as a solution to the generalization question. One of the ways in which they argued that this bias exists is the following experiment: they drew a large sample of boolean functions by randomly initializing neural networks and thresholding the output. They observed a bias toward some “simple” functions, which get sampled disproportionately more often. However, their experiments were only done for relu networks. Can one expect this “simplicity bias” to hold universally, for any architecture?

A priori, this seems difficult, as the nonlinear nature of neural networks seems to present an obstacle to reasoning about the distribution of random networks. However, this question turns out to be more easily treated if we allow the width to go to infinity. A long line of works starting with Neal (1995) and extended recently by Lee et al. (2018); Novak et al. (2018); Yang (2019) have shown that randomly initialized, infinite-width networks are distributed as Gaussian processes. These Gaussian processes also describe finite width random networks well Valle-Pérez et al. (2018). We will refer to the corresponding kernels as the Conjugate Kernels (CK), following the terminology of Daniely et al. (2016). Given the CK $K$, the simplicity bias of a wide neural network can be read off quickly from the spectrum of $K$: if the largest eigenvalue of $K$ accounts for most of its trace, then a typical random network looks like a function from the top eigenspace of $K$.

In this paper, we will use this spectral perspective to probe not only the simplicity bias, but more generally, questions regarding how hyperparameters affect the generalization of neural networks.

Via the usual connection between Gaussian processes and linear models with features, the CK can be thought of as the kernel matrix associated to training only the last layer of a wide randomly initialized network. It is a remarkable recent advance Jacot et al. (2018); Allen-Zhu et al. (2018a, c); Du et al. (2018) that, under a certain regime, a wide neural network of any depth evolves like a linear model even when training all parameters. The associated kernel is called the Neural Tangent Kernel, which is typically different from the CK. While its theory was initially derived in the infinite width setting, Lee et al. (2019) confirmed with extensive experiments that this limit is predictive of finite width neural networks as well. Thus, just as the CK reveals information about what a network looks like at initialization, the NTK reveals information about what a network looks like after training. As such, if we can understand how hyperparameters change the NTK, we can also hope to understand how they affect the performance of the corresponding finite-width network.

#### Our Contributions

In this paper, in addition to showing that the simplicity bias is not universal, we will attempt a first step at understanding the effects of the hyperparameters on generalization from a spectral perspective.

At the foundation is a spectral theory of the CK and the NTK on the boolean cube. In Section 3, we show that these kernels, as integral operators on functions over the boolean cube, are diagonalized by the natural Fourier basis, echoing similar results for kernels over the sphere Smola et al. (2001). We also partially diagonalize the kernels over the standard Gaussian, and show that, as expected, the kernels over the different distributions (boolean cube, sphere, standard Gaussian) behave very similarly in high dimensions. However, the spectrum is much easier to compute over the boolean cube: while the sphere and Gaussian eigenvalues would require integration against a family of orthogonal polynomials known as the Gegenbauer polynomials, the boolean ones only require calculating a linear combination of a small number of terms. For this reason, in the rest of the paper we focus on analyzing the eigenvalues over the boolean cube.

Just as the usual Fourier basis over $\mathbb{R}$ has a notion of frequency that can be interpreted as a measure of complexity, so does the boolean Fourier basis (there it is just the degree; see Section 3.1). While not perfect, we adopt this natural notion of complexity in this work; a “simple” function is then one that is well approximated by “low frequencies.”

This spectral perspective immediately yields that the simplicity bias is not universal (Section 4). In particular, while it seems to hold more or less for relu networks, for sigmoidal networks, the simplicity bias can be made arbitrarily weak by changing the weight variance and the depth. In the extreme case, the random function obtained from sampling a deep erf network with large weights is distributed like a “white noise.” However, there is a very weak sense in which the simplicity bias does hold: the eigenvalues of more “complex” eigenspaces cannot be bigger than those of less “complex” eigenspaces (Fig. 1).

Next, we examine how hyperparameters affect the performance of neural networks through the lens of the NTK and its spectrum. To do so, we first need to understand the simpler question of how a kernel affects the accuracy of the function learned by kernel regression. A coarse-grained theory, concerned with big-O asymptotics, exists in the classical kernel literature Yao et al. (2007); Raskutti et al. (2013); Wei et al.; Lin and Rosasco; Schölkopf and Smola (2002). However, the fine-grained details, required for discerning the effect of hyperparameters, have been much less studied. We make a first attempt at a heuristic, fractional variance (i.e. the fraction of the kernel’s trace contributed by an eigenspace), for understanding how a minute change in the kernel effects a change in performance. Intuitively, if an eigenspace has very large fractional variance, so that it accounts for most of the trace, then a ground truth function from this eigenspace should be very easy to learn.

Using this heuristic, we make two predictions about neural networks, motivated by observations in the spectra of NTK and CK, and verify them with extensive experiments.

• Deeper networks learn more complex features, but excess depth can be detrimental as well. Spectrally, depth can increase fractional variance of an eigenspace, but past an optimal depth, it will also decrease it. (Section 5) Thus, deeper is not always better.

• Training all layers is better than training just the last layer when it comes to more complex features, but the opposite is true for simpler features. Spectrally, fractional variances of more “complex” eigenspaces for the NTK are larger than the corresponding quantities of the CK. (Section 6)

We emphasize that our theory and experiments only concern data distributed uniformly over the boolean cube, the sphere, or as a standard Gaussian. These distributions are theoretically interesting but insights found over them do not always carry over to real datasets. We leave for future work the extension to more practically relevant data distributions.

The code for computing the eigenvalues and for reproducing the plots of this paper is available at github.com/thegregyang/NNspectra. Our predictions made using fractional variance are good but not perfect, and we hope others can build on our open-sourced code to improve the understanding of the effect of hyperparameters on neural network training.

## 2 Kernels Associated to Neural Networks

As mentioned in the introduction, we now know several kernels associated to infinite width, randomly initialized neural networks. The most prominent of these are the neural tangent kernel (NTK) Jacot et al. (2018) and the conjugate kernel (CK) Daniely et al. (2016), which is also called the NNGP kernel Lee et al. (2018). We briefly review them below. First we introduce the following notation that we will repeatedly use.

**Definition.** For $\phi: \mathbb{R} \to \mathbb{R}$, write $V_\phi$ for the map that takes a PSD (positive semidefinite) kernel function $K$ to a PSD kernel of the same domain by the formula

$$V_\phi(K)(x,x') = \mathbb{E}_{f \sim \mathcal{N}(0,K)}\, \phi(f(x))\,\phi(f(x')).$$
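To make the definition concrete, here is a minimal Monte Carlo sketch of $V_\phi$ restricted to a finite set of points (the function name and sample count are ours, for illustration only):

```python
import numpy as np

def V_phi_mc(K, phi, n_samples=200_000, seed=0):
    """Monte Carlo estimate of V_phi(K) on a finite set of n points.

    K:   (n, n) PSD matrix, the kernel evaluated on those points.
    phi: elementwise nonlinearity.
    Returns an (n, n) estimate of E_{f ~ N(0,K)} phi(f(x)) phi(f(x'))."""
    rng = np.random.default_rng(seed)
    f = rng.multivariate_normal(np.zeros(len(K)), K, size=n_samples)  # (S, n)
    return phi(f).T @ phi(f) / n_samples

# Sanity check: with phi = identity, V_phi(K) = K exactly.
K = np.array([[1.0, 0.5], [0.5, 1.0]])
est = V_phi_mc(K, lambda z: z)
```

For relu and erf this Gaussian expectation also has closed forms, which is what makes the kernels of the next paragraphs efficiently computable.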

#### Conjugate Kernel

Neural networks are commonly thought of as learning a high-quality embedding of inputs to the latent space represented by the network’s last hidden layer, and then using its final linear layer to read out a classification given the embedding. The conjugate kernel is just the kernel associated to the embedding induced by a random initialization of the neural network. Consider an MLP with widths $\{n^l\}_l$, weight matrices $\{W^l \in \mathbb{R}^{n^l \times n^{l-1}}\}_l$, and biases $\{b^l \in \mathbb{R}^{n^l}\}_l$, $l = 1, \dots, L$. For simplicity of exposition, in this paper, we will only consider scalar output, $n^L = 1$. Suppose it is parametrized by the NTK parametrization, i.e. its computation is given recursively as

$$h^1(x) = \frac{\sigma_w}{\sqrt{n^0}} W^1 x + \sigma_b b^1 \quad\text{and}\quad h^l(x) = \frac{\sigma_w}{\sqrt{n^{l-1}}} W^l \phi(h^{l-1}(x)) + \sigma_b b^l \tag{MLP}$$

with some hyperparameters $\sigma_w, \sigma_b$ that are fixed throughout training. (SGD with a given learning rate in this parametrization is roughly equivalent to SGD with a suitably rescaled learning rate in the standard parametrization with Glorot initialization; see Lee et al. (2018).) At initialization time, suppose $W^l_{\alpha\beta}, b^l_\alpha \sim \mathcal{N}(0,1)$ for each $\alpha, \beta, l$. It can be shown that, for each $\alpha$, $h^l_\alpha$ is a Gaussian process with zero mean and kernel function $\Sigma^l$ in the limit as all hidden layers become infinitely wide ($n^l \to \infty$ for $l = 1, \dots, L-1$), where $\Sigma^l$ is defined inductively on $l$ as

$$\Sigma^1(x,x') \overset{\mathrm{def}}{=} \sigma_w^2 (n^0)^{-1} \langle x, x' \rangle + \sigma_b^2, \qquad \Sigma^l \overset{\mathrm{def}}{=} \sigma_w^2 V_\phi(\Sigma^{l-1}) + \sigma_b^2 \tag{CK}$$

The kernel $\Sigma^L$ corresponding to the last layer is the network’s conjugate kernel, and the associated Gaussian process limit is the reason for its alternative name, the Neural Network-Gaussian Process kernel. In short, if we were to train a linear model with features given by the last hidden layer’s embedding when the network parameters are randomly sampled as above, then the CK is the kernel of this linear model. See Daniely et al. (2016); Lee et al. (2018) and Appendix C for more details.
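As a concrete illustration, the CK recursion above has a closed form for $\phi = \mathrm{erf}$ via the classical arcsine kernel of Williams (1997). The sketch below (function names and default hyperparameters are ours, for illustration) iterates Eq. CK on a finite input set:

```python
import numpy as np

def V_erf(K):
    """Closed form of V_phi for phi = erf (Williams, 1997): an arcsine kernel."""
    d = np.diag(K)
    return (2 / np.pi) * np.arcsin(2 * K / np.sqrt(np.outer(1 + 2 * d, 1 + 2 * d)))

def conjugate_kernel(X, depth, sw2=1.0, sb2=0.0):
    """Iterate Eq. CK for an erf MLP on the rows of X (so n^0 = X.shape[1])."""
    S = sw2 * (X @ X.T) / X.shape[1] + sb2      # Sigma^1
    for _ in range(depth - 1):
        S = sw2 * V_erf(S) + sb2                # Sigma^l
    return S

# CK of a depth-3 erf MLP on a few boolean inputs:
X = np.array([[1., 1., 1., 1.], [1., 1., 1., -1.], [1., -1., 1., -1.]])
CK = conjugate_kernel(X, depth=3)
```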

#### Neural Tangent Kernel

On the other hand, the NTK corresponds to training the entire model instead of just the last layer. Intuitively, if we let $\theta$ be the entire set of parameters of Eq. MLP, then for $\theta$ close to its initialized value $\theta_0$, we expect

$$h^L(x;\theta) - h^L(x;\theta_0) \approx \langle \nabla_\theta h^L(x;\theta_0),\, \theta - \theta_0 \rangle$$

via a naive first-order Taylor expansion. In other words, $h^L(x;\theta) - h^L(x;\theta_0)$ behaves like a linear model with features of $x$ given by the gradient taken w.r.t. the initial network, $\nabla_\theta h^L(x;\theta_0)$, and the weights of this linear model are the deviation $\theta - \theta_0$ of $\theta$ from its initial value. It turns out that, in the limit as all hidden layer widths tend to infinity, this intuition is correct Jacot et al. (2018); Lee et al. (2018); Yang (2019), and the following inductive formula computes the corresponding infinite-width kernel of this linear model:

$$\Theta^1 \overset{\mathrm{def}}{=} \Sigma^1, \qquad \Theta^l(x,x') \overset{\mathrm{def}}{=} \Sigma^l(x,x') + \sigma_w^2\, \Theta^{l-1}(x,x')\, V_{\phi'}(\Sigma^{l-1})(x,x'). \tag{NTK}$$
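For $\phi = \mathrm{relu}$, both $V_\phi$ and $V_{\phi'}$ also have closed forms (the arc-cosine kernels of Cho and Saul (2009)), so Eq. NTK can be iterated directly. A sketch, with our own function names and illustrative default hyperparameters:

```python
import numpy as np

def _cos_theta(K):
    d = np.sqrt(np.diag(K))
    return np.clip(K / np.outer(d, d), -1.0, 1.0)

def V_relu(K):
    """Closed form of V_phi for phi = relu (Cho & Saul, 2009)."""
    d, t = np.sqrt(np.diag(K)), np.arccos(_cos_theta(K))
    return np.outer(d, d) * (np.sin(t) + (np.pi - t) * np.cos(t)) / (2 * np.pi)

def V_relu_prime(K):
    """Closed form of V_{phi'} for phi = relu (phi' is the step function)."""
    return (np.pi - np.arccos(_cos_theta(K))) / (2 * np.pi)

def ntk(X, depth, sw2=2.0, sb2=0.0):
    """Iterate Eqs. CK and NTK for a relu MLP; returns (Sigma^L, Theta^L)."""
    S = sw2 * (X @ X.T) / X.shape[1] + sb2   # Sigma^1
    T = S.copy()                             # Theta^1 = Sigma^1
    for _ in range(depth - 1):
        S_new = sw2 * V_relu(S) + sb2        # Sigma^l
        T = S_new + sw2 * T * V_relu_prime(S)
        S = S_new
    return S, T

X = np.array([[1., 1., 1., 1.], [1., 1., 1., -1.], [1., -1., 1., -1.]])
Sigma, Theta = ntk(X, depth=4)
```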

#### Computing CK and NTK

While in general, computing $V_\phi$ and $V_{\phi'}$ requires evaluating a multivariate Gaussian expectation, in specific cases, such as when $\phi = \mathrm{relu}$ or $\mathrm{erf}$, there exist explicit, efficient formulas that only require pointwise evaluation of some simple functions (see Appendix C). This allows us to evaluate the CK and NTK on a set of inputs $\mathcal{X}$ in only $O(|\mathcal{X}|^2 L)$ time.

#### What Do the Spectra of CK and NTK Tell Us?

In summary, the CK governs the distribution of a randomly initialized neural network and also the properties of training only the last layer of a network, while the NTK governs the dynamics of training (all parameters of) a neural network. A study of their spectra thus informs us of the “implicit prior” of a randomly initialized neural network as well as the “implicit bias” of GD in the context of training neural networks.

In regards to the implicit prior at initialization, we know from Lee et al. (2018) that a randomly initialized network as in Eq. MLP is distributed as a Gaussian process $\mathcal{N}(0, K)$, where $K$ is the corresponding CK, in the infinite-width limit. If we have the eigendecomposition

$$K = \sum_{i \ge 1} \lambda_i\, u_i \otimes u_i \tag{1}$$

with eigenvalues $\lambda_i$ in decreasing order and corresponding eigenfunctions $u_i$, then each sample from this GP can be obtained as

$$\sum_{i \ge 1} \sqrt{\lambda_i}\, \omega_i u_i, \qquad \omega_i \sim \mathcal{N}(0, 1).$$

If, for example, $\lambda_1 \gg \sum_{i \ge 2} \lambda_i$, then a typical sample function is just a very small perturbation of $u_1$. We will see that for relu, this is indeed the case (Section 4), and this explains the “simplicity bias” in relu networks found by Valle-Pérez et al. (2018).
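The sampling formula above is easy to realize numerically on a small cube. A toy sketch (the kernel $\Phi(c) = e^c$ is our own choice, made only because it is PSD on the cube; it is not an actual neural kernel):

```python
import numpy as np
from itertools import product

d = 8
X = np.array(list(product([1.0, -1.0], repeat=d)))     # the cube {+-1}^8, 2^8 points
K = np.exp(X @ X.T / d)                                # toy PSD kernel Phi(c) = e^c

# The integral operator w.r.t. the uniform measure is K / 2^d; its eigenfunctions
# are the eigenvectors rescaled by sqrt(2^d) so that E_x u_i(x)^2 = 1.
lam, U = np.linalg.eigh(K / len(X))
lam, U = lam[::-1], U[:, ::-1] * np.sqrt(len(X))       # decreasing eigenvalues

# One GP sample: f = sum_i sqrt(lam_i) * omega_i * u_i with omega_i ~ N(0, 1).
rng = np.random.default_rng(0)
f = U @ (np.sqrt(np.clip(lam, 0.0, None)) * rng.standard_normal(len(lam)))
```

With this normalization the sample `f` has covariance exactly `K`, matching the GP prior described above.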

Training the last layer of a randomly initialized network via full batch gradient descent for an infinite amount of time corresponds to Gaussian process inference with the CK as kernel Lee et al. (2018, 2019). A similar intuition holds for the NTK: training all parameters of the network (Eq. MLP) for an infinite amount of time yields the mean prediction of the GP $\mathcal{N}(0, \Theta^L)$ in expectation; see Lee et al. (2019) and Section C.4 for more discussion.

Thus, the more the GP prior (governed by the CK or the NTK) is consistent with the ground truth function $f^*$, the more we expect Gaussian process inference and GD training to generalize well. We can measure this consistency by the “alignment” between the eigenvalues $\lambda_i$ and the squared coefficients of $f^*$’s expansion in the eigenbasis $\{u_i\}_i$. The former can be interpreted as the expected magnitude (squared) of the $u_i$-component of a sample $f \sim \mathcal{N}(0, K)$, and the latter can be interpreted as the actual magnitude squared of that component of $f^*$. In this paper, we will investigate an even cleaner setting where $f^*$ is an eigenfunction. Thus we would hope to use a kernel whose eigenvalue for $f^*$’s eigenspace is as large as possible.

#### Neural Kernels

From the forms of Eqs. NTK and CK and the fact that $\Sigma^1$ only depends on $\langle x, x' \rangle$, $\|x\|^2$, and $\|x'\|^2$, we see that the CK or NTK of an MLP takes the form

$$K(x,y) = \Phi\!\left( \frac{\langle x, y \rangle}{\|x\|\,\|y\|},\; \frac{\|x\|^2}{d},\; \frac{\|y\|^2}{d} \right) \tag{2}$$

for some function $\Phi: \mathbb{R}^3 \to \mathbb{R}$. We will refer to this kind of kernel as a Neural Kernel in this paper.

#### Kernels as Integral Operators

We will consider input spaces $\mathcal{X} \subseteq \mathbb{R}^d$ of various forms, equipped with some probability measure. Then a kernel function $K$ acts as an integral operator on functions $f: \mathcal{X} \to \mathbb{R}$ by

$$Kf(x) = (Kf)(x) = \mathbb{E}_{y \sim \mathcal{X}}\, K(x, y) f(y).$$

We will use the “juxtaposition syntax” $Kf$ to denote this application of the integral operator. (In cases when $\mathcal{X}$ is finite, $K$ can be also thought of as a big matrix and $f$ as a vector — but do not confuse the operator application $Kf$ with their matrix multiplication! If we use $\circ$ to denote matrix multiplication, then the operator application $Kf$ is the same as the matrix multiplication $K \circ D \circ f$, where $D$ is the diagonal matrix encoding the probability values of each point in $\mathcal{X}$.) Under certain assumptions, it then makes sense to speak of the eigenvalues and eigenfunctions of the integral operator $K$. While we will appeal to an intuitive understanding of eigenvalues and eigenfunctions in the main text below, we include a more formal discussion of Hilbert-Schmidt operators and their spectral theory in Appendix D for completeness. In the next section, we investigate the eigendecomposition of neural kernels as integral operators over different distributions.

## 3 The Spectra of Neural Kernels

### 3.1 Boolean Cube

We first consider a neural kernel $K$ on the boolean cube $\mathcal{X} = \{\pm 1\}^d$, equipped with the uniform measure. In this case, since each $x \in \{\pm 1\}^d$ has the same norm $\sqrt{d}$, $\Phi$ effectively only depends on $\langle x, x' \rangle / d$, so we will treat $\Phi$ as a single variate function in this section, $\Phi(c) = \Phi(c, 1, 1)$.

#### Brief review of basic Fourier analysis on the boolean cube $\{\pm 1\}^d$ (O’Donnell (2014)).

The space of real functions on $\{\pm 1\}^d$ is $2^d$-dimensional. Any such function has a unique expansion into a multilinear polynomial (a polynomial whose monomials do not contain $x_i^2$ for any variable $x_i$). For example, the majority function over 3 bits has the following unique multilinear expansion

$$\mathrm{maj}_3: \{\pm 1\}^3 \to \{\pm 1\}, \qquad \mathrm{maj}_3(x_1, x_2, x_3) = \frac{1}{2}(x_1 + x_2 + x_3 - x_1 x_2 x_3).$$

In the language of Fourier analysis, the multilinear monomial functions

$$\chi_S(x) \overset{\mathrm{def}}{=} x^S \overset{\mathrm{def}}{=} \prod_{i \in S} x_i, \qquad \text{for each } S \subseteq [d] \tag{3}$$

form a Fourier basis of the function space $\{f: \{\pm 1\}^d \to \mathbb{R}\}$, in the sense that their inner products satisfy

$$\mathbb{E}_{x \sim \{\pm 1\}^d}\, \chi_S(x) \chi_T(x) = \mathbb{1}(S = T).$$

It turns out that $K$ is always diagonalized by this Fourier basis $\{\chi_S\}_{S \subseteq [d]}$.

**Theorem.** On the $d$-dimensional boolean cube $\{\pm 1\}^d$, for every $S \subseteq [d]$, $\chi_S$ is an eigenfunction of $K$ with eigenvalue

$$\mu_{|S|} \overset{\mathrm{def}}{=} \mathbb{E}_{x \in \{\pm 1\}^d}\, x^S K(x, \mathbb{1}) = \mathbb{E}_{x \in \{\pm 1\}^d}\, x^S\, \Phi\!\Big(\sum_i x_i / d\Big), \tag{4}$$

where $\mathbb{1} = (1, \dots, 1) \in \{\pm 1\}^d$. This definition of $\mu_{|S|}$ does not depend on the choice of $S$, only on its cardinality $|S|$. These are all of the eigenfunctions of $K$ by dimensionality considerations. (Readers familiar with boolean Fourier analysis may be reminded of the noise operator $T_\rho$ (O’Donnell, 2014, Defn 2.46). In the language of this work, $T_\rho$ is a neural kernel with eigenvalues $\mu_k = \rho^k$.)
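The theorem is easy to check numerically with a toy kernel ($\Phi(c) = e^c$ below is our own choice for illustration, not an actual neural kernel):

```python
import numpy as np
from itertools import product

d = 8
X = np.array(list(product([1.0, -1.0], repeat=d)))
K = np.exp(X @ X.T / d)                    # toy neural-kernel-shaped matrix

def eigval_of(S):
    chi = np.prod(X[:, S], axis=1)         # chi_S(x) = prod_{i in S} x_i
    K_chi = K @ chi / len(X)               # integral operator applied to chi_S
    mu = K_chi @ chi / len(X)              # Rayleigh quotient; chi_S has norm 1
    assert np.allclose(K_chi, mu * chi)    # chi_S really is an eigenfunction
    return mu

mu_a, mu_b = eigval_of([0, 1]), eigval_of([3, 6])   # two sets with |S| = 2
```

As the theorem states, both sets of cardinality 2 give the same eigenvalue.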

Define $T_\Delta$ to be the shift operator on functions over $[-1, 1]$ that sends $\Phi(\cdot)$ to $\Phi(\cdot - \Delta)$, where $\Delta \overset{\mathrm{def}}{=} 2/d$. Then we can re-express the eigenvalue as follows.

**Lemma.** With $\mu_k$ as in the theorem above,

$$\mu_k = 2^{-d} (I - T_\Delta)^k (I + T_\Delta)^{d-k}\, \Phi(1) = 2^{-d} \sum_{r=0}^{d} C^{d-k,k}_r\, \Phi\!\left(\left(\tfrac{d}{2} - r\right)\Delta\right)$$

where

$$C^{d-k,k}_r \overset{\mathrm{def}}{=} \sum_{j=0}^{r} (-1)^{r+j} \binom{d-k}{j} \binom{k}{r-j}. \tag{5}$$
###### Proof.

Because $\langle x, \mathbb{1} \rangle / d$ only takes on the values $(\frac{d}{2} - r)\Delta$, where $r$ is the number of $-1$’s in $x$, we can collect like terms in Eq. 4 and obtain

$$\mu_k = 2^{-d} \sum_{r=0}^{d}\;\; \sum_{x \text{ has } r \text{ } {-1}\text{'s}} \left( \prod_{i=1}^{k} x_i \right) \Phi\!\left(\left(\tfrac{d}{2} - r\right)\Delta\right)$$

which can easily be shown to be equal to

$$\mu_k = 2^{-d} \sum_{r=0}^{d} C^{d-k,k}_r\, \Phi\!\left(\left(\tfrac{d}{2} - r\right)\Delta\right),$$

proving the second expression of the lemma. Finally, observe that $C^{d-k,k}_r$ is also the coefficient of $x^r$ in the polynomial $(1-x)^k (1+x)^{d-k}$. Some operator arithmetic then yields the first expression. ∎
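The second expression of the lemma translates directly into code. A sketch (function names are ours; $\Phi = \exp$ is just an arbitrary smooth test function), cross-checked against the definition in Eq. 4 by brute force on a small cube:

```python
import numpy as np
from math import comb
from itertools import product

def mu_boolean(Phi, d, k):
    """mu_k = 2^{-d} sum_r C_r^{d-k,k} Phi((d/2 - r) Delta), with Delta = 2/d."""
    C = lambda r: sum((-1) ** (r + j) * comb(d - k, j) * comb(k, r - j)
                      for j in range(r + 1))
    return sum(C(r) * Phi(1 - 2 * r / d) for r in range(d + 1)) / 2 ** d

# Brute-force check of Eq. 4: mu_k = E_x x^S Phi(<x, 1>/d) with S = {1, ..., k}.
d, k = 10, 3
X = np.array(list(product([1, -1], repeat=d)))
brute = np.mean(np.prod(X[:, :k], axis=1) * np.exp(X.sum(axis=1) / d))
```

This naive alternating sum is fine at small $d$, but Section 3.5 explains why it becomes numerically unstable at large $d$.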

The lemma above will be important for computational purposes, and we will come back to discuss this more in Section 3.5. It also turns out that $\mu_k$ affords a pretty expression via the Fourier series coefficients of $\Phi$. As this is not essential to the main text, we relegate its exposition to Section E.1.

### 3.2 Sphere

Now let’s consider the case when $\mathcal{X}$ is the radius-$\sqrt{d}$ sphere in $\mathbb{R}^d$ equipped with the uniform measure. Again, because all $x \in \mathcal{X}$ have the same norm, we will treat $\Phi$ as a univariate function, $\Phi(c) = \Phi(c, 1, 1)$. As has long been known (Schoenberg, 1942; Gneiting, 2013; Xu and Cheney, 1992; Smola et al., 2001), $K$ is diagonalized by spherical harmonics, and the eigenvalues are given by the coefficients of $\Phi$ against a system of orthogonal polynomials called Gegenbauer polynomials. We relegate a complete review of this topic to Section E.2.

### 3.3 Isotropic Gaussian

Now let’s consider $\mathcal{X} = \mathbb{R}^d$ equipped with the standard isotropic Gaussian $\mathcal{N}(0, I_d)$, so that $K$ behaves like

$$Kf(x) = \mathbb{E}_{y \sim \mathcal{N}(0, I)} K(x, y) f(y) = \mathbb{E}_{y \sim \mathcal{N}(0, I)}\, \Phi\!\left( \frac{\langle x, y \rangle}{\|x\|\,\|y\|},\; \frac{\|x\|^2}{d},\; \frac{\|y\|^2}{d} \right) f(y)$$

for any $f: \mathbb{R}^d \to \mathbb{R}$. In contrast to the previous two sections, $K$ will essentially depend on the effect of the norms $\|x\|^2/d$ and $\|y\|^2/d$ on $\Phi$.

Nevertheless, because an isotropic Gaussian vector can be obtained by sampling its direction uniformly from the sphere and its magnitude from a chi distribution, $K$ can still be partially diagonalized into a sum of products between spherical harmonics and kernels on $\mathbb{R}_{\ge 0}$ equipped with a chi distribution (Section E.3). In certain cases, we can obtain complete eigendecompositions, for example when $\Phi$ is positive homogeneous. See Section E.3 for more details.

### 3.4 Kernel is the Same over Boolean Cube, Sphere, or Gaussian in High Dimension

The reason we have curtailed a detailed discussion of neural kernels on the sphere and on the standard Gaussian is because, in high dimension, the kernel behaves the same under these distributions as under the uniform distribution over the boolean cube. Indeed, by intuition along the lines of the central limit theorem, we expect that the uniform distribution over a high dimensional boolean cube should approximate a high dimensional standard Gaussian. Similarly, by concentration of measure, most of the mass of a standard Gaussian is concentrated in a thin shell of radius $\sqrt{d}$. Thus, morally, we expect the same kernel function to induce approximately the same integral operator on these three distributions in high dimension, and as such, their eigenvalues should also approximately coincide. We verify numerically that this is indeed the case in Section E.4.

### 3.5 Computing the Eigenvalues

As the eigenvalues of $K$ over the different distributions are very close, we will focus in the rest of this paper on the eigenvalues over the boolean cube. This has the additional benefit of being much easier to compute.

Each eigenvalue over the sphere and the standard Gaussian requires an integration of $\Phi$ against a Gegenbauer polynomial. In high dimension $d$, these Gegenbauer polynomials vary wildly in a sinusoidal fashion and blow up toward the boundary (see Fig. 10 in the Appendix). As such, it is difficult to obtain a numerically stable estimate of this integral in an efficient manner when $d$ is large.

In contrast, we have multiple ways of computing the boolean cube eigenvalues, via the theorem and lemma of Section 3.1. In either case, we just take a linear combination of the values of $\Phi$ on a grid of points in $[-1, 1]$, spaced apart by $\Delta = 2/d$. While the coefficients $C^{d-k,k}_r$ (defined in Eq. 5) are relatively efficient to compute, their alternating signs make this procedure numerically unstable for large $d$. Instead, we use the lemma’s operator expression to isolate the alternating part and evaluate it in a numerically stable way: since $\tilde\Phi \overset{\mathrm{def}}{=} 2^{-k}(I - T_\Delta)^k \Phi$ can be evaluated via finite differences, we first compute $\tilde\Phi$ and then compute

$$\left( \frac{I + T_\Delta}{2} \right)^{d-k} \tilde\Phi(1) = \frac{1}{2^{d-k}} \sum_{r=0}^{d-k} \binom{d-k}{r} \tilde\Phi(1 - r\Delta). \tag{6}$$

When $\Phi$ arises from the CK or the NTK of an MLP, all derivatives of $\Phi$ at 0 are nonnegative (Appendix F). Thus, intuitively, the finite differences should all be nonnegative as well, and this sum can be evaluated without worrying about floating point errors from the cancellation of large terms.

A slightly more clever way to improve the numerical stability when $2k \le d$ is to note that

$$(I + T_\Delta)^{d-k} (I - T_\Delta)^{k}\, \Phi(1) = (I + T_\Delta)^{d-2k} (I - T_{2\Delta})^{k}\, \Phi(1).$$

So an improved algorithm is to first compute the $k$th finite difference of $\Phi$ with the larger step size $2\Delta$, and then compute the sum as in Eq. 6, with $d - k$ replaced by $d - 2k$.
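A sketch of this stable procedure (the function name is ours; $\Phi = \exp$ is used as an arbitrary analytic test function), which runs comfortably at $d = 128$ where the raw alternating sum of Eq. 5 would lose precision:

```python
import numpy as np
from math import comb

def mu_stable(Phi, d, k):
    """mu_k for 2k <= d: take the k-th finite difference of Phi with step
    2*Delta, then the nonnegative binomial sum of Eq. 6 (d-k -> d-2k)."""
    D = 2 / d                                           # Delta
    Phi_t = lambda c: sum((-1) ** j * comb(k, j) * Phi(c - 2 * j * D)
                          for j in range(k + 1))        # (I - T_{2 Delta})^k Phi
    return sum(comb(d - 2 * k, r) * Phi_t(1 - r * D)
               for r in range(d - 2 * k + 1)) / 2 ** d

mu4 = mu_stable(np.exp, 128, 4)   # fine even at d = 128
```

For $\Phi = \exp$ the shift operator acts by $T_\Delta \Phi = e^{-\Delta}\Phi$, giving the closed form $\mu_k = 2^{-d}(1 - q)^k (1 + q)^{d-k} e$ with $q = e^{-2/d}$, which is what the test below checks against.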

## 4 Clarifying the “Simplicity Bias” of Random Neural Networks

As mentioned in the introduction, Valle-Pérez et al. (2018) claims that neural networks are biased toward simple functions. We show that this phenomenon depends crucially on the nonlinearity, the sampling variances, and the depth of the network. In Fig. 1(a), we have repeated their experiment for random functions obtained by sampling relu neural networks with 2 hidden layers, 40 neurons each, following Valle-Pérez et al. (2018)’s architectural choices. (Valle-Pérez et al. (2018) actually performed their experiments over the $\{0,1\}^d$ cube, not the $\{\pm 1\}^d$ cube we are using here; this does not affect our conclusion. See Appendix G for more discussion.) We also do the same for erf networks of the same depth and width, varying as well the sampling variances $\sigma_w^2, \sigma_b^2$ of the weights and biases, as shown in the legend. As discussed in Valle-Pérez et al. (2018), for relu there is indeed this bias, where a single function gets sampled more than 10% of the time. However, for erf, as we increase $\sigma_w^2$, we see this bias disappear, and every function in the sample gets sampled only once.

This phenomenon can be explained by looking at the eigendecomposition of the CK, which is the Gaussian process kernel of the distribution of the random networks as their hidden widths tend to infinity. In Fig. 1(b), we plot the normalized eigenvalues $\mu_k$ for the CKs corresponding to the networks sampled in Fig. 1(a). Immediately, we see that for the relu kernel, the degree 0 eigenspace, corresponding to constant functions, accounts for more than 80% of the variance. This means that a typical infinite-width relu network of 2 layers is expected to be almost constant, and this should be even more true after we threshold the network to be a boolean function. On the other hand, for erf with zero bias variance, the even degree $\mu_k$’s all vanish, and most of the variance comes from the degree 1 components (i.e. linear functions). This concentration in degree 1 also lessens as $\sigma_w^2$ increases. But because this variance is spread across a dimension 7 eigenspace, we don’t see duplicate function samples nearly as much as in the relu case. As $\sigma_w^2$ increases, we also see the eigenvalues become more equally distributed, which corresponds to the flattening of the probability-vs-rank curve in Fig. 1(a). Finally, we observe that a 32-layer erf network with large $\sigma_w^2$ has all of its nonzero eigenvalues (associated to odd degrees) equal (see the marked points in Fig. 1(b)). This means that its distribution is a “white noise” on the space of odd functions, and the distribution of boolean functions obtained by thresholding the Gaussian process samples is the uniform distribution on odd functions. This is a complete lack of simplicity bias modulo the oddness constraint.

However, from the spectral perspective, there is a weak sense in which a simplicity bias holds for all neural network-induced CKs and NTKs.

**Theorem** (Weak Spectral Simplicity Bias). Let $K$ be the CK or NTK of an MLP on a boolean cube $\{\pm 1\}^d$. Then the eigenvalues $\mu_k$, $k = 0, \dots, d$, satisfy

$$\mu_0 \ge \mu_2 \ge \cdots \ge \mu_{2k} \ge \cdots, \qquad \mu_1 \ge \mu_3 \ge \cdots \ge \mu_{2k+1} \ge \cdots. \tag{7}$$

Even though it’s not true that the fraction of variance contributed by the degree-$k$ eigenspace is decreasing with $k$, the eigenvalues themselves follow a nonincreasing pattern across the even and odd degrees separately. In fact, if we fix $k$ and let $d \to \infty$, then we can show that (Appendix F)

$$\mu_k = \Theta(d^{-k}).$$

Of course, as we have seen, this is a very weak sense of simplicity bias, as it doesn’t prevent “white noise” behavior as in the case of the erf CK with large $\sigma_w^2$ and large depth.

## 5 Deeper Networks Learn More Complex Features — But Not Too Deep

In the rest of this work, we compute the eigenvalues $\mu_k$ over the 128-dimensional boolean cube ($\{\pm 1\}^d$ with $d = 128$) for a large number of different hyperparameters, and analyze how the latter affect the former. We vary the degree $k$, the nonlinearity between relu and erf, the depth (number of hidden layers) from 1 to 128, and the bias variance $\sigma_b^2$. We fix the weight variance $\sigma_w^2$ for relu kernels, but additionally vary it for erf kernels. Comprehensive contour plots of how these hyperparameters affect the kernels are included in Appendix A, but in the main text we summarize several trends we see.

We will primarily measure the change in the spectrum by the degree fractional variance, which is just

$$\text{degree-}k \text{ fractional variance} \overset{\mathrm{def}}{=} \frac{\binom{d}{k} \mu_k}{\sum_{i=0}^{d} \binom{d}{i} \mu_i}.$$

This terminology comes from the fact that, if we were to sample a function $f$ from a Gaussian process with kernel $K$, then we expect this fraction of the total variance of $f$ to come from the degree-$k$ components of $f$. If we were to try to learn a homogeneous degree-$k$ polynomial using a kernel $K$, intuitively we should try to choose $K$ such that its degree-$k$ fractional variance is maximized, relative to the other degrees. Fig. 3(a) shows that this is indeed the case even with neural networks: over a large number of different hyperparameter settings, degree-$k$ fractional variance is inversely related to the validation loss incurred when learning a degree-$k$ polynomial. However, this plot also shows that there does not seem to be a precise, clean relationship between fractional variance and validation loss. Obtaining a better measure for predicting generalization is left for future work.
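The fractional variance is immediate to compute once the eigenvalues are in hand. A small helper (names ours), fed here with toy eigenvalues decaying like $d^{-k}$ in the spirit of the asymptotics of Section 4:

```python
import numpy as np
from math import comb

def fractional_variance(mu):
    """Degree-k fractional variances C(d,k) mu_k / sum_i C(d,i) mu_i,
    computed from the vector mu = (mu_0, ..., mu_d)."""
    d = len(mu) - 1
    w = np.array([comb(d, k) for k in range(d + 1)], float) * np.asarray(mu)
    return w / w.sum()

# Toy eigenvalues mu_k ~ d^{-k}, loosely mimicking mu_k = Theta(d^{-k}):
d = 16
mu = (1.0 / d) ** np.arange(d + 1)
fv = fractional_variance(mu)
```

Note that even with eigenvalues decaying this fast, the multiplicity $\binom{d}{k}$ keeps a nontrivial fraction of variance in the low but nonzero degrees.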

Before we continue, we remark that the fractional variance of a fixed degree $k$ converges to a fixed value as the input dimension $d \to \infty$:

**Theorem** (Asymptotic Fractional Variance). Let $K$ be the CK or NTK of an MLP on a boolean cube $\{\pm 1\}^d$. Then $K$ can be expressed as $K(x, y) = \Phi(\langle x, y \rangle / d)$ for some analytic function $\Phi: [-1, 1] \to \mathbb{R}$. If we fix $k$ and let the input dimension $d \to \infty$, then the fractional variance of degree $k$ converges to

$$\frac{(k!)^{-1} \Phi^{(k)}(0)}{\Phi(1)} = \frac{(k!)^{-1} \Phi^{(k)}(0)}{\sum_{j \ge 0} (j!)^{-1} \Phi^{(j)}(0)}$$

where $\Phi^{(k)}$ denotes the $k$th derivative of $\Phi$. For the fractional variances we compute in this paper, their values at $d = 128$ are already very close to their limits, so we focus on the $d = 128$ case experimentally.
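For intuition, the theorem can be checked by hand for the toy choice $\Phi(c) = e^c$ (ours, not an actual neural kernel): the shift operator acts on it by $T_\Delta \Phi = e^{-\Delta}\Phi$, so the lemma of Section 3.1 gives $\mu_k = 2^{-d}(1 - q)^k (1 + q)^{d-k} e$ with $q = e^{-2/d}$, the fractional variances form a Binomial$(d, (1-q)/2)$ distribution in $k$, and as $d \to \infty$ this converges to Poisson$(1)$, i.e. to $(k!)^{-1}/e = (k!)^{-1}\Phi^{(k)}(0)/\Phi(1)$, matching the theorem:

```python
import numpy as np
from math import comb, factorial

def frac_var_exp(d, k):
    """Degree-k fractional variance of the toy kernel Phi(c) = e^c on {+-1}^d."""
    q = np.exp(-2 / d)
    return comb(d, k) * ((1 - q) / 2) ** k * ((1 + q) / 2) ** (d - k)

limit = lambda k: 1 / (np.e * factorial(k))   # (k!)^{-1} Phi^{(k)}(0) / Phi(1)
```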

If $K$ is the CK or NTK of a relu or erf MLP, then we find that for higher $k$, the depth of the network helps increase the degree-$k$ fractional variance. In Fig. 2(a) and (b), we plot, for each degree $k$, the depth that (with some combination of the other hyperparameters, such as $\sigma_w^2$ and $\sigma_b^2$) maximizes the degree-$k$ fractional variance, for respectively relu and erf kernels. Clearly, the maximizing depths are increasing with $k$ for relu, and also for erf when considering either the odd or the even degrees only. The slightly differing behavior between even and odd degrees is expected, given the form of Fig. 1. Note the different scales of the y-axes for relu and erf — the depth effect is much stronger for erf than relu.

For relu NTK and CK, zero bias variance ($\sigma_b = 0$) maximizes fractional variance in general, and the same holds for erf NTK and CK in the odd degrees (see Appendix A). In Fig. 2(c) and Fig. 2(d) we give a more fine-grained look at the $\sigma_b = 0$ slice, via heatmaps of fractional variance against degree and depth. Brighter color indicates higher variance, and we see that the optimal depth for each degree clearly increases with $k$ for the relu NTK, and likewise for the odd degrees of the erf NTK. However, note that as $k$ increases, the difference between the maximal fractional variance and slightly suboptimal ones becomes smaller and smaller, reflected by the suppressed range of color moving to the right. The heatmaps for relu and erf CKs look similar and are omitted.

We verify this increase of optimal depth with degree in Fig. 3(b). There we have trained relu networks of varying depth against a ground truth multilinear polynomial of varying degree. We see clearly that the optimal depth is increasing with degree.

Note that implicit in our results here is a highly nontrivial observation: past some point (the optimal depth), more depth can be detrimental to the performance of the network, beyond just the difficulty of training, and this detriment can already be seen in the corresponding NTK or CK. In particular, it’s not true that the optimal depth is infinite. This adds significant nuance to the folk wisdom that “depth increases expressivity and allows neural networks to learn more complex features.”

## 6 NTK Favors More Complex Features Than CK

We generally find the degree-$k$ fractional variance of the NTK to be higher than that of the CK when $k$ is large, and vice versa when $k$ is small, as shown in Fig. 4. This means that, if we train only the last layer of a neural network (i.e. CK dynamics), we intuitively should expect to learn simpler features better, while, if we train all parameters of the network (i.e. NTK dynamics), we should expect to learn more complex features better. Similarly, if we were to sample a function from a Gaussian process with the CK as kernel (recall this is just the distribution of randomly initialized infinite width MLPs Lee et al. (2018)), this function is more likely to be accurately approximated by low degree polynomials than the same with the NTK.

We verify this intuition by training a large number of neural networks against ground truth functions that are homogeneous polynomials of various degrees, and show a scatterplot of how training the last layer only measures against training all layers (Fig. 3(c)). Consistent with our theory, the only place training the last layer works meaningfully better than training all layers is when the ground truth is a constant function. However, we reiterate that fractional variance is an imperfect indicator of performance. Even though for erf neural networks the degree-$k$ fractional variance of the NTK is not always greater than that of the CK, we do not see any instance where training the last layer of an erf network is better than training all layers. We leave an investigation of this discrepancy to future work.

## 7 Related Works

The Gaussian process behavior of neural networks was found by Neal (1995) for shallow networks and then extended over the years to different settings and architectures (Williams, 1997; Le Roux and Bengio, 2007; Hazan and Jaakkola, 2015; Daniely et al., 2016; Lee et al., 2018; Matthews et al., 2018; Novak et al., 2018). This connection was exploited implicitly or explicitly to build new models (Cho and Saul, 2009; Lawrence and Moore, 2007; Damianou and Lawrence, 2013; Wilson et al., 2016a, b; Bradshaw et al., 2017; van der Wilk et al., 2017; Kumar et al., 2018; Blomqvist et al., 2018; Borovykh, 2018; Garriga-Alonso et al., 2018; Novak et al., 2018; Lee et al., 2018). The Neural Tangent Kernel is a much more recent discovery by Jacot et al. (2018), and later Allen-Zhu et al. (2018a, c, b), Du et al. (2018), Arora et al. (2019b), and Zou et al. (2018) came upon the same reasoning independently. Like the CK, the NTK has also been applied toward building new models or algorithms (Arora et al., 2019a; Achiam et al., 2019).

Closely related to the discussion of CK and NTK is the signal propagation literature, which tries to understand how to prevent pathological behaviors in randomly initialized neural networks when they are deep (Poole et al., 2016; Schoenholz et al., 2017; Yang and Schoenholz, 2017, 2018; Hanin, 2018; Hanin and Rolnick, 2018; Chen et al., 2018; Yang et al., 2019; Pennington et al., 2017a; Hayou et al., 2018; Philipp and Carbonell, 2018). This line of work can trace its origin at least to the advent of the Glorot and He initialization schemes for deep networks Glorot and Bengio (2010); He et al. (2015). The investigation of forward signal propagation, or how random neural networks change with depth, corresponds to studying the infinite-depth limit of CK, and the investigation of backward signal propagation, or how gradients of random networks change with depth, corresponds to studying the infinite-depth limit of NTK. Some of the quite remarkable results from this literature include how to train a 10,000 layer CNN Xiao et al. (2017) and that, counterintuitively, batch normalization causes gradient explosion Yang et al. (2019).

This signal propagation perspective can be refined via random matrix theory Pennington et al. (2017b, 2018). In these works, free probability is leveraged to compute the singular value distribution of the input-output map given by the random neural network, as the input dimension and width tend to infinity together. Other works also investigate various questions of neural network training and generalization from the random matrix perspective (Pennington and Worah, 2017; Pennington and Bahri, 2017; Pennington and Worah, 2018).

Yang (2019) presents a common framework, known as Tensor Programs, unifying the GP, NTK, signal propagation, and random matrix perspectives, as well as extending them to new scenarios, like recurrent neural networks. It proves the existence of and allows the computation of a large number of infinite-width limits (including ones relevant to the above perspectives) by expressing the quantity of interest as the output of a computation graph and then manipulating the graph mechanically.

Several other works also adopt a spectral perspective on neural networks (Candès, 1999; Sonoda and Murata, 2017; Eldan and Shamir, 2016; Barron, 1993; Xu et al., 2018; Zhang et al., 2019; Xu et al., 2019; Xu, 2018); here we highlight a few most relevant to us. Rahaman et al. (2018) study the real Fourier frequencies of relu networks and perform experiments on real as well as synthetic data. They convincingly show that relu networks learn low-frequency components first. They also investigate the subtleties when the data manifold is low dimensional and embedded in various ways in the input space. In contrast, our work focuses on the spectra of the CK and NTK (which indirectly inform the Fourier frequencies of a typical network). Nevertheless, our results are complementary to theirs, as they readily explain the low frequency bias in relu that they found. Karakida et al. (2018) study the spectrum of the Fisher information matrix, which shares its nonzero eigenvalues with the NTK. They compute the mean, variance, and maximum of the Fisher eigenvalues (taking the width to infinity first, and then considering a finite amount of data sampled iid from a Gaussian). In comparison, our spectral results yield all eigenvalues of the NTK (and thus also all nonzero eigenvalues of the Fisher) as well as eigenfunctions.

Finally, we note that several recent (Xie et al., 2016) or concurrent (Bietti and Mairal, 2019; Basri et al., 2019) works studied one-hidden-layer relu networks over the sphere, building on Smola et al. (2001)’s observation that spherical harmonics diagonalize dot product kernels; the latter two works are concurrent to ours. This is in contrast to our focus on the boolean cube, which allows us to study the fine-grained effect of hyperparameters on the spectra, leading to a variety of insights into neural networks’ generalization properties.

## 8 Conclusion

In this work, we have taken a first step at studying how hyperparameters change the initial distribution and the generalization properties of neural networks through the lens of neural kernels and their spectra. We obtained interesting insights by computing kernel eigenvalues over the boolean cube and relating them to generalization through the fractional variance heuristic. While it inspired valid predictions that are backed up by experiments, fractional variance is clearly just a rough indicator. We hope future work can refine this idea to produce a much more precise prediction of test loss. Nevertheless, we believe the spectral perspective is the right line of research that will not only shed light on mysteries in deep learning but also inform design choices in practice.

## Acknowledgements

We thank Zeyuan Allen-Zhu, Sebastien Bubeck, Jaehoon Lee, Yasaman Bahri, Samuel Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein for very helpful feedback throughout the process of writing this paper.

## Appendix A Visualizing the Spectral Effects of σ_w², σ_b², and Depth

While in the main text we summarized several trends of interest in several plots, they do not give the entire picture of how eigenvalues and fractional variances vary with σ_w², σ_b², and depth. Here we try to present this relationship more completely in a series of contour plots. Fig. 5 shows how varying depth and σ_b² changes the fractional variances of each degree, for relu CK and NTK. We fix σ_w² in the CK plots, as the fractional variances only depend on the ratio between σ_b² and σ_w²; even though this is not true for relu NTK, we fix σ_w² as well for consistency. For erf, however, the fractional variance crucially depends on both σ_w² and σ_b², so we present 3D contour plots of how σ_w², σ_b², and depth change the fractional variance in Fig. 8. Complementarily, we also show in Figs. 6 and 7 a few slices of these 3D contour plots for different fixed values of σ_b², for erf CK and NTK.

## Appendix B Experimental Details

Fig. 3(a), (b) and (c) differ in the set of hyperparameters they involve (to be specified below), but in all of them, we train relu networks against a randomly generated ground truth multilinear polynomial over the boolean cube, with L2 loss.

#### Training

We perform SGD with batch size 1000. In each iteration we freshly sample a new batch, and we train for a total of 100,000 iterations, so the network potentially sees 10^8 different examples. Every 1000 iterations, we validate the current network on a freshly drawn batch of 10,000 examples. We thus record a total of 100 validation losses, and we take the lowest to be the “best validation loss.”
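As a rough illustration of this protocol, the sketch below trains only the last layer of a random relu network (i.e. CK dynamics) by SGD on freshly sampled boolean batches, with periodic validation on fresh data. All sizes, the ground truth, the learning rate, and the He-style initialization are toy stand-ins, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; the paper uses batch size 1000, width 1000, 100,000 iterations.
d, width, batch = 20, 512, 100
W = rng.normal(0.0, np.sqrt(2.0 / d), size=(width, d))  # He-style init (an assumption)
v = np.zeros(width)  # trainable readout; training only this layer gives CK dynamics

def f_star(x):
    # Stand-in ground truth: a single degree-1 monomial.
    return x[:, 0]

def features(x):
    return np.maximum(x @ W.T, 0.0) / np.sqrt(width)

lr, best_val = 0.1, np.inf
for step in range(2000):
    x = rng.choice([-1.0, 1.0], size=(batch, d))  # fresh batch every iteration
    F = features(x)
    err = F @ v - f_star(x)
    v -= lr * F.T @ err / batch                   # SGD step on the L2 loss
    if step % 100 == 0:                           # periodic validation on fresh data
        xv = rng.choice([-1.0, 1.0], size=(2000, d))
        val = 0.5 * np.mean((features(xv) @ v - f_star(xv)) ** 2)
        best_val = min(best_val, val)
```

Since the untrained readout gives loss 0.5 on this ground truth, any successful training drives `best_val` well below that.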

#### Generating the Ground Truth Function

The ground truth function is generated by first sampling 10 monomials of the given degree, then randomly sampling 10 coefficients a_1, …, a_10 for them. The final function is obtained by normalizing so that the sum of the squared coefficients is 1:

 f^*(x) \overset{\mathrm{def}}{=} \sum_{i=1}^{10} a_i m_i(x) \Big/ \sqrt{\sum_{j=1}^{10} a_j^2}.
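A minimal sketch of this sampling procedure. The text does not specify the coefficient distribution, so standard Gaussian coefficients are our assumption; the function name `sample_ground_truth` and the sizes in the usage example are illustrative:

```python
import numpy as np
from itertools import combinations

def sample_ground_truth(d, degree, n_monomials=10, rng=None):
    """Sample a ground truth polynomial on {-1,1}^d as in the text: 10 random
    monomials of the given degree, with coefficients normalized so that the
    sum of their squares is 1. (Gaussian coefficients are our assumption.)"""
    rng = rng if rng is not None else np.random.default_rng()
    monos = list(combinations(range(d), degree))
    picked = [monos[i] for i in rng.choice(len(monos), size=n_monomials, replace=False)]
    a = rng.normal(size=n_monomials)
    a /= np.sqrt(np.sum(a ** 2))  # now sum_j a_j^2 = 1
    def f_star(x):  # x: array of shape (batch, d) with entries +-1
        return sum(ai * np.prod(x[:, list(m)], axis=1) for ai, m in zip(a, picked))
    return f_star

f = sample_ground_truth(d=16, degree=3, rng=np.random.default_rng(0))
x = np.random.default_rng(1).choice([-1.0, 1.0], size=(8, 16))
y = f(x)  # one ground-truth value per row of x
```

Because distinct monomials are orthonormal under the uniform measure on the cube, this normalization makes the ground truth have unit second moment.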

#### Hyperparameters for Fig. 3(a)

• The learning rate is half the theoretical maximum learning rate. (Note that the maximum learning rate depends on the normalization convention for the L2 loss: with the loss ½‖f − f*‖² used here it is 2/λ_max, while with the convention ‖f − f*‖² it would be 1/λ_max; see Fig. 1.)

• Ground truth degree

• Depth

• activation = relu

• width = 1000

• 10 random seeds per hyperparameter combination

• training last layer (marked “ck”), or all layers (marked “ntk”)

#### Hyperparameters for Fig. 3(b)

• The learning rate is half the theoretical maximum learning rate

• Ground truth degree

• Depth

• activation = relu

• width = 1000

• 100 random seeds per hyperparameter combination

• training last layer weight and bias only

#### Hyperparameters for Fig. 3(c)

• The learning rate

• Ground truth degree

• Depth

• activation

• for relu, but for erf

• width = 1000

• 1 random seed per hyperparameter combination

• Training all layers

## Appendix C Review of the Theory of Neural Tangent Kernels

### c.1 Convergence of Infinite-Width Kernels at Initialization

#### Conjugate Kernel

Via a central-limit-like intuition, each unit of Eq. MLP should behave like a Gaussian as width → ∞, as it is a sum of a large number of roughly independent random variables [Poole et al., 2016, Schoenholz et al., 2017, Yang and Schoenholz, 2017]. The devil, of course, is in what “roughly independent” means and how to apply the central limit theorem (CLT) to this setting. It can be done, however [Lee et al., 2018, Matthews et al., 2018, Novak et al., 2018], and in the most general case, using a “Gaussian conditioning” technique, this result can be rigorously generalized to almost any architecture [Yang, 2019]. In any case, the consequence is that, for any finite set of inputs S,

 \{h^l_\alpha(x)\}_{x \in S} \text{ converges in distribution to } \mathcal{N}(0, \Sigma^l(S, S)),

as the width → ∞, where Σ^l is the CK as given in Eq. CK.

#### Neural Tangent Kernel

By a slightly more involved version of the “Gaussian conditioning” technique, Yang [2019] also showed that, for any inputs x and y,

 \langle \nabla_\theta h^L(x), \nabla_\theta h^L(y) \rangle \text{ converges almost surely to } \Theta^L(x, y)

as the widths tend to infinity, where Θ^L is the NTK as given in Eq. NTK.

### c.2 Fast Evaluations of CK and NTK

For certain φ like relu or erf, V_φ and V_{φ′} can be evaluated very quickly, so that both the CK and NTK can be computed in O(|X|² L) time, where X is the set of points we want to compute the kernel function over, and L is the number of layers.

Fact (Cho and Saul [2009]). For any kernel K,

 V_{\mathrm{relu}}(K)(x, x') = \frac{1}{2\pi}\left(\sqrt{1 - c^2} + (\pi - \arccos c)\, c\right)\sqrt{K(x,x)\,K(x',x')},
 V_{\mathrm{relu}'}(K)(x, x') = \frac{1}{2\pi}(\pi - \arccos c),

where c = K(x, x') / \sqrt{K(x,x)\,K(x',x')}.

Fact (Neal [1995]). For any kernel K,

 V_{\mathrm{erf}}(K)(x, x') = \frac{2}{\pi} \arcsin \frac{K(x, x')}{\sqrt{(K(x,x) + 0.5)(K(x',x') + 0.5)}},
 V_{\mathrm{erf}'}(K)(x, x') = \frac{4}{\pi \sqrt{(1 + 2K(x,x))(1 + 2K(x',x')) - 4K(x,x')^2}}.
Fact. Let φ(x) = exp(x/σ) for some σ > 0. For any kernel K,

 V_\phi(K)(x, x') = \exp\left( \frac{K(x,x) + 2K(x,x') + K(x',x')}{2\sigma^2} \right).
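To illustrate how these closed forms make the kernels fast to evaluate, here is a sketch that iterates the relu formulas above through the layers to build the CK and NTK on a set of inputs. The layer recursion Σ^{l+1} = σ_w² V(Σ^l) + σ_b², Θ^{l+1} = Σ^{l+1} + Θ^l ⊙ σ_w² V′(Σ^l) is the standard one; the function names and the hyperparameter defaults are our own illustrative choices, not the paper's settings:

```python
import numpy as np

def V_relu(K):
    """Cho-Saul arccos formula for E[relu(z) relu(z')] with (z, z') ~ N(0, K)."""
    diag = np.sqrt(np.diag(K))
    norms = np.outer(diag, diag)
    c = np.clip(K / norms, -1.0, 1.0)  # correlation matrix
    return norms * (np.sqrt(1.0 - c ** 2) + (np.pi - np.arccos(c)) * c) / (2.0 * np.pi)

def V_relu_prime(K):
    """E[step(z) step(z')] for the same Gaussian pair."""
    diag = np.sqrt(np.diag(K))
    c = np.clip(K / np.outer(diag, diag), -1.0, 1.0)
    return (np.pi - np.arccos(c)) / (2.0 * np.pi)

def ck_and_ntk(X, depth, sw2=2.0, sb2=0.0):
    """Iterate Sigma^{l+1} = sw2 * V_relu(Sigma^l) + sb2, accumulating
    Theta^{l+1} = Sigma^{l+1} + Theta^l * sw2 * V_relu'(Sigma^l)."""
    Sigma = sw2 * (X @ X.T) / X.shape[1] + sb2
    Theta = Sigma.copy()
    for _ in range(depth):
        Sdot = sw2 * V_relu_prime(Sigma)
        Sigma = sw2 * V_relu(Sigma) + sb2
        Theta = Sigma + Theta * Sdot
    return Sigma, Theta
```

Each layer costs only elementwise work on a |X| × |X| matrix, which is the source of the fast evaluation claimed above.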

### c.3 Linear Evolution of Neural Network under GD

Remarkably, the NTK governs the evolution of the neural network function under gradient descent in the infinite-width limit. First, let’s consider how the parameters θ and the neural network function f evolve under continuous-time gradient flow. Suppose f is only defined on a finite input space X = {x¹, …, x^k}. We will visualize

 f(X) = \begin{pmatrix} f(x^1) \\ \vdots \\ f(x^k) \end{pmatrix}, \quad \nabla_f L = \begin{pmatrix} \frac{\partial L}{\partial f(x^1)} \\ \vdots \\ \frac{\partial L}{\partial f(x^k)} \end{pmatrix}, \quad \theta = \begin{pmatrix} \theta_1 \\ \vdots \\ \theta_n \end{pmatrix}, \quad \nabla_\theta f = \begin{pmatrix} \frac{\partial f(x^1)}{\partial \theta_1} & \cdots & \frac{\partial f(x^k)}{\partial \theta_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial f(x^1)}{\partial \theta_n} & \cdots & \frac{\partial f(x^k)}{\partial \theta_n} \end{pmatrix}

Then under continuous-time gradient descent with learning rate η,

 \partial_t \theta_t = -\eta\, \nabla_\theta L(f_t) = -\eta\, \nabla_\theta f_t \cdot \nabla_f L(f_t),
 \partial_t f_t = \nabla_\theta f_t^\top \cdot \partial_t \theta_t = -\eta\, \nabla_\theta f_t^\top \cdot \nabla_\theta f_t \cdot \nabla_f L(f_t) = -\eta\, \Theta_t \cdot \nabla_f L(f_t),

where Θ_t = \nabla_\theta f_t^\top \nabla_\theta f_t is of course the (finite width) NTK.

Thus f_t undergoes kernel gradient descent with (functional) loss L(f_t) and kernel Θ_t. This kernel of course changes as θ_t evolves, but remarkably, it in fact stays constant when f is an infinitely wide MLP Jacot et al. [2018]:

 \partial_t f_t = -\eta\, \Theta \cdot \nabla_f L(f_t), (Training All Layers)

where Θ is the infinite-width NTK corresponding to f. A similar equation holds for the CK Σ if we train only the last layer,

 \partial_t f_t = -\eta\, \Sigma \cdot \nabla_f L(f_t). (Training Last Layer)

If L is the square loss against a ground truth function f^*, then \nabla_f L(f) = f - f^*, and the equations above become linear differential equations. However, typically we only have a training set X_train ⊆ X of size far less than |X|. In this case, the loss function is effectively

 L(f) = \frac{1}{2|X_{\mathrm{train}}|} \sum_{x \in X_{\mathrm{train}}} (f(x) - f^*(x))^2,

 \nabla_f L(f) = \frac{1}{|X_{\mathrm{train}}|} D_{\mathrm{train}} \cdot (f - f^*),

where D_train is the diagonal matrix of size |X| × |X| whose diagonal is 1 on X_train and 0 elsewhere. Then our function still evolves linearly:

 \partial_t f_t = -\eta (K \cdot D_{\mathrm{train}}) \cdot (f_t - f^*), (8)

where K is the CK or the NTK depending on which parameters are trained.
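Eq. 8 can be simulated directly. The sketch below forward-Euler-discretizes the ODE on a toy finite input space, with a randomly generated stand-in PSD kernel playing the role of the CK or NTK (all sizes and the kernel itself are illustrative). On the training points, f_t converges to f^*:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_train = 8, 5                        # |X| = 8 points, 5 of them in X_train
G = rng.normal(size=(n, n))
K = G @ G.T / n + 0.1 * np.eye(n)        # stand-in PSD kernel (plays the CK/NTK role)
D_train = np.diag([1.0] * n_train + [0.0] * (n - n_train))
f_star = rng.normal(size=n)              # ground truth values on all of X

eta, f = 0.5, np.zeros(n)                # start from f_0 = 0 for simplicity
for _ in range(5000):                    # forward-Euler discretization of Eq. (8)
    f -= eta / n_train * K @ D_train @ (f - f_star)

train_err = np.abs((f - f_star)[:n_train]).max()  # -> ~0: training points are fit
```

Note that on the points outside X_train, f_∞ is determined by the kernel, not by f^*, which is exactly the source of the generalization behavior discussed in the main text.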

### c.4 Relationship to Gaussian Process Inference.

Recall that the initial f_0 in Eq. 8 is distributed as a Gaussian process in the infinite-width limit. As Eq. 8 is a linear differential equation, the distribution of f_t will remain a Gaussian process for all t, whether K is the CK or the NTK. Under suitable conditions, it can be shown [Lee et al., 2019] that, in the limit as t → ∞, if we train only the last layer, then the resulting function f_∞ is distributed as a Gaussian process with mean given by

 \bar{f}_\infty(x) = \Sigma(x, X_{\mathrm{train}})\, \Sigma(X_{\mathrm{train}}, X_{\mathrm{train}})^{-1} f^*(X_{\mathrm{train}})

and kernel given by

 \mathrm{Var}\, f_\infty(x, x') = \Sigma(x, x') - \Sigma(x, X_{\mathrm{train}})\, \Sigma(X_{\mathrm{train}}, X_{\mathrm{train}})^{-1} \Sigma(X_{\mathrm{train}}, x').

These formulas precisely describe the posterior distribution of f given the GP prior f ~ \mathcal{GP}(0, \Sigma) and the data \{(x, f^*(x)) : x \in X_{\mathrm{train}}\}.
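These posterior formulas are easy to instantiate numerically. Below is a sketch using an illustrative RBF stand-in for Σ (the CK itself would require the layer recursions above); the data, kernel, and jitter constant are all our own toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def Sigma(A, B):
    """Illustrative RBF stand-in for the CK; any positive-definite kernel works."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

X_train = rng.normal(size=(6, 2))
y_train = np.sin(X_train[:, 0])          # plays the role of f*(X_train)
X_test = rng.normal(size=(3, 2))

K_tt = Sigma(X_train, X_train) + 1e-8 * np.eye(6)  # small jitter for stability
K_st = Sigma(X_test, X_train)

# Posterior mean and covariance, exactly the formulas above:
mean = K_st @ np.linalg.solve(K_tt, y_train)
cov = Sigma(X_test, X_test) - K_st @ np.linalg.solve(K_tt, K_st.T)
```

At the training points themselves, the posterior mean interpolates the data exactly (up to the jitter), reflecting that the formulas condition on noiseless observations of f^*.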

If we train all layers, then similarly as , the function is distributed as a Gaussian process with mean given by [Lee et al., 2019]

 ¯f∞(x)=Θ(x,Xt