Neural Tangent Kernel: Convergence and Generalization in Neural Networks
Abstract
At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit (Neal, 1996; Lee et al., 2018), thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function $f_\theta$ (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central to describing the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We prove the positive-definiteness of the limiting NTK when the data is supported on the sphere and the nonlinearity is non-polynomial.
We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping.
Finally we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.
1 Introduction
Artificial neural networks (ANNs) have achieved impressive results in numerous areas of machine learning. While it has long been known that ANNs can approximate any function with sufficiently many hidden neurons (Hornik et al., 1989; Leshno et al., 1993), it is not known what the optimization of ANNs converges to. Indeed the loss surface of neural network optimization problems is highly non-convex: the high number of saddle points slows down the convergence (Dauphin et al., 2014). A number of results (Choromanska et al., 2015; Pascanu et al., 2014; Pennington and Bahri, 2017) suggest that for wide enough networks, there are very few "bad" local minima, i.e. local minima with much higher cost than the global minimum.
Another mysterious feature of ANNs is their good generalization properties in spite of their usual overparametrization (Sagun et al., 2017). It seems paradoxical that a reasonably large neural network can fit random labels, while still obtaining good test accuracy when trained on real data (Zhang et al., 2017). It can be noted that in this case, kernel methods have the same properties (Belkin et al., 2018).
In the infinite-width limit, ANNs have a Gaussian distribution described by a kernel (Neal, 1996; Lee et al., 2018). These kernels are used in Bayesian inference or Support Vector Machines, yielding results comparable to ANNs trained with gradient descent (Lee et al., 2018; Cho and Saul, 2009). We will see that in the same limit, the behavior of ANNs during training is described by a related kernel, which we call the neural tangent kernel (NTK).
1.1 Contribution
We study the network function $f_\theta$ of an ANN, which maps an input vector to an output vector, where $\theta$ is the vector of the parameters of the ANN. In the limit as the widths of the inner layers tend to infinity, the network function $f_\theta$ at initialization converges to a Gaussian distribution (Neal, 1996; Lee et al., 2018).
In this paper, we investigate fully connected networks in this infinite-width limit, and describe the dynamics of the network function $f_\theta$ during training:

During gradient descent, we show that the dynamics of $f_\theta$ follows that of the so-called kernel gradient descent in function space with respect to a limiting kernel, which only depends on the depth of the network, the choice of nonlinearity and the initialization variance.

The convergence properties of ANNs during training can then be related to the positive-definiteness of the infinite-width limit NTK. In the case when the dataset is supported on a sphere, we prove this positive-definiteness using recent results on dual activation functions (Daniely et al., 2016).

For a least-squares regression loss, the network function follows a linear differential equation in the infinite-width limit, and the eigenfunctions of the Jacobian are the kernel principal components of the input data. This shows a direct connection to kernel methods and motivates the use of early stopping to reduce overfitting in the training of ANNs.

Finally we investigate these theoretical results numerically for an artificial dataset (of points on the unit circle) and for the MNIST dataset. In particular we observe that the behavior of wide ANNs is close to the theoretical limit.
2 Neural networks
In this article, we consider fully-connected ANNs with layers numbered from $0$ (input) to $L$ (output), each containing $n_0, \dots, n_L$ neurons, and with a Lipschitz, twice differentiable nonlinearity function $\sigma : \mathbb{R} \to \mathbb{R}$ with bounded second derivative.
This paper focuses on the ANN realization function $F^{(L)} : \mathbb{R}^P \to \mathcal{F}$, mapping parameters $\theta$ to functions $f_\theta$ in a space $\mathcal{F}$. The dimension of the parameter space is $P = \sum_{\ell=0}^{L-1} (n_\ell + 1) n_{\ell+1}$: the parameters consist of the connection matrices $W^{(\ell)} \in \mathbb{R}^{n_{\ell+1} \times n_\ell}$ and bias vectors $b^{(\ell)} \in \mathbb{R}^{n_{\ell+1}}$ for $\ell = 0, \dots, L-1$. In our setup, the parameters are initialized as iid Gaussians $\mathcal{N}(0, 1)$.
For a fixed distribution $p^{in}$ on the input space $\mathbb{R}^{n_0}$, the function space $\mathcal{F}$ is defined as $\{ f : \mathbb{R}^{n_0} \to \mathbb{R}^{n_L} \}$. On this space, we consider the seminorm $\| \cdot \|_{p^{in}}$, defined in terms of the bilinear form
$$\langle f, g \rangle_{p^{in}} = \mathbb{E}_{x \sim p^{in}}\left[ f(x)^T g(x) \right].$$
In this paper, we assume that the input distribution $p^{in}$ is the empirical distribution on a finite dataset $x_1, \dots, x_N$, i.e. the sum of Dirac measures $\frac{1}{N} \sum_{i=1}^N \delta_{x_i}$.
We define the network function by $f_\theta(x) := \tilde{\alpha}^{(L)}(x; \theta)$, where the functions $\tilde{\alpha}^{(\ell)}(\cdot; \theta)$ (called preactivations) and $\alpha^{(\ell)}(\cdot; \theta)$ (called activations) are defined from the $0$-th to the $L$-th layer by:
$$\alpha^{(0)}(x; \theta) = x$$
$$\tilde{\alpha}^{(\ell+1)}(x; \theta) = \frac{1}{\sqrt{n_\ell}} W^{(\ell)} \alpha^{(\ell)}(x; \theta) + \beta b^{(\ell)}$$
$$\alpha^{(\ell)}(x; \theta) = \sigma\left( \tilde{\alpha}^{(\ell)}(x; \theta) \right),$$
where the nonlinearity $\sigma$ is applied entrywise. The scalar $\beta \geq 0$ is a parameter which allows us to tune the influence of the bias on the training.
Remark 1.
Our definition of the realization function $F^{(L)}$ slightly differs from the classical one. Usually, the factors $\frac{1}{\sqrt{n_\ell}}$ and the parameter $\beta$ are absent and the parameters are initialized using the so-called Xavier initialization (Glorot and Bengio, 2010), taking $W^{(\ell)}_{ij} \sim \mathcal{N}(0, \frac{1}{n_\ell})$ (or sometimes $\mathcal{N}(0, \frac{2}{n_\ell})$) to compensate. While the set of representable functions $f_\theta$ is the same for both parametrizations (with or without the factors $\frac{1}{\sqrt{n_\ell}}$ and $\beta$), the derivatives of the realization function with respect to the connections $W^{(\ell)}$ and bias $b^{(\ell)}$ are scaled by $\frac{1}{\sqrt{n_\ell}}$ and $\beta$ respectively in comparison to the classical parametrization.
The factors $\frac{1}{\sqrt{n_\ell}}$ are key to obtaining a consistent asymptotic behavior of neural networks as the widths of the hidden layers grow to infinity. However a side effect of these factors is that they greatly reduce the influence of the connection weights during training when $n_\ell$ is large: the factor $\beta$ is introduced to balance the influence of the bias and connection weights. Taking a suitable value of $\beta$ makes the behavior comparable to that of a classical ANN parametrization. Also, as a result of the presence of the factors $\frac{1}{\sqrt{n_\ell}}$, one may take a larger learning rate than in the case of classical parametrizations. The choices of $\beta$ and of the learning rate in numerical experiments are further discussed in Section 6.
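To make the parametrization concrete, here is a minimal NumPy sketch of the forward pass with the $\frac{1}{\sqrt{n_\ell}}$ factors and the bias parameter $\beta$. The function names and the ReLU choice are ours, not from the paper; all parameters are drawn iid $\mathcal{N}(0,1)$ as described above.

```python
import numpy as np

def init_params(layer_widths, rng):
    """All weights and biases are iid N(0, 1); scaling happens in the forward pass."""
    params = []
    for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]):
        W = rng.standard_normal((n_out, n_in))
        b = rng.standard_normal(n_out)
        params.append((W, b))
    return params

def forward(params, x, beta=0.1, relu=lambda z: np.maximum(z, 0.0)):
    """NTK parametrization: each layer computes (1/sqrt(n_in)) W a + beta b."""
    a = x
    for ell, (W, b) in enumerate(params):
        pre = W @ a / np.sqrt(W.shape[1]) + beta * b
        a = relu(pre) if ell < len(params) - 1 else pre  # no nonlinearity on the output
    return a
```

Because the scaling lives in the forward pass rather than in the initialization variance, the preactivations stay of order $1$ at any width, which is what makes the infinite-width limit consistent.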
3 Kernel gradient
The training of an ANN consists in optimizing $f_\theta$ in the function space $\mathcal{F}$ with respect to a functional cost $C : \mathcal{F} \to \mathbb{R}$, such as a regression or cross-entropy cost. Even for a convex functional cost $C$, the composite cost $C \circ F^{(L)} : \mathbb{R}^P \to \mathbb{R}$ is in general highly non-convex (Choromanska et al., 2015). We will show that during training, the network function $f_\theta$ follows a descent along the kernel gradient with respect to the Neural Tangent Kernel (NTK) which we introduce in Section 4. This makes it possible to study the training of ANNs in the function space $\mathcal{F}$, on which the cost $C$ is convex.
A multidimensional kernel $K$ is a function $\mathbb{R}^{n_0} \times \mathbb{R}^{n_0} \to \mathbb{R}^{n_L \times n_L}$, which maps any pair $(x, x')$ to an $n_L \times n_L$ matrix such that $K(x, x') = K(x', x)^T$ (equivalently $K$ is a symmetric tensor in $\mathcal{F} \otimes \mathcal{F}$). Such a kernel defines a bilinear map on $\mathcal{F}$, taking the expectation over independent $x, x' \sim p^{in}$:
$$\langle f, g \rangle_K := \mathbb{E}_{x, x' \sim p^{in}}\left[ f(x)^T K(x, x') g(x') \right].$$
The kernel $K$ is positive definite with respect to $\| \cdot \|_{p^{in}}$ if $\| f \|_{p^{in}} > 0 \implies \| f \|_K > 0$.
We denote by $\mathcal{F}^*$ the dual of $\mathcal{F}$ with respect to $p^{in}$, i.e. the set of linear forms $\mu : \mathcal{F} \to \mathbb{R}$ of the form $\mu = \langle d, \cdot \rangle_{p^{in}}$ for some $d \in \mathcal{F}$. Two elements of $\mathcal{F}$ define the same linear form if and only if they are equal on the data. The constructions in the paper do not depend on the element $d \in \mathcal{F}$ chosen in order to represent $\mu$ as $\langle d, \cdot \rangle_{p^{in}}$. Using the fact that the partial application of the kernel $K_{i, \cdot}(x, \cdot)$ is a function in $\mathcal{F}$, we can define a map $\Phi_K : \mathcal{F}^* \to \mathcal{F}$ mapping a dual element $\mu = \langle d, \cdot \rangle_{p^{in}}$ to the function $f_\mu = \Phi_K(\mu)$ with values:
$$f_{\mu, i}(x) = \mu\left( K_{i, \cdot}(x, \cdot) \right) = \left\langle d, K_{i, \cdot}(x, \cdot) \right\rangle_{p^{in}}.$$
For our setup, which is that of a finite dataset $x_1, \dots, x_N$, the cost functional $C$ only depends on the values of $f \in \mathcal{F}$ at the data points. As a result, the (functional) derivative of the cost $C$ at a point $f_0 \in \mathcal{F}$ can be viewed as an element of $\mathcal{F}^*$, which we write $\partial^{in}_f C|_{f_0}$. We denote by $d|_{f_0} \in \mathcal{F}$ a corresponding dual element, such that $\partial^{in}_f C|_{f_0} = \langle d|_{f_0}, \cdot \rangle_{p^{in}}$.
The kernel gradient $\nabla_K C|_{f_0} \in \mathcal{F}$ is defined as $\Phi_K\left( \partial^{in}_f C|_{f_0} \right)$. In contrast to $\partial^{in}_f C$ which is only defined on the dataset, the kernel gradient generalizes to values $x$ outside the dataset thanks to the kernel $K$:
$$\nabla_K C|_{f_0}(x) = \frac{1}{N} \sum_{j=1}^N K(x, x_j)\, d|_{f_0}(x_j).$$
A time-dependent function $f(t)$ follows the kernel gradient descent with respect to $K$ if it satisfies the differential equation
$$\partial_t f(t) = -\nabla_K C|_{f(t)}.$$
During kernel gradient descent, the cost $C(f(t))$ evolves as
$$\partial_t C|_{f(t)} = -\left\langle d|_{f(t)}, \nabla_K C|_{f(t)} \right\rangle_{p^{in}} = -\left\| d|_{f(t)} \right\|_K^2.$$
Convergence to a critical point of $C$ is hence guaranteed if the kernel $K$ is positive definite with respect to $\| \cdot \|_{p^{in}}$: the cost is then strictly decreasing except at points such that $\| d|_{f(t)} \|_{p^{in}} = 0$. If the cost is convex and bounded from below, the function $f(t)$ therefore converges to a global minimum as $t \to \infty$.
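On a finite dataset, kernel gradient descent can be discretized directly on the data values. The sketch below assumes a least-squares cost and a precomputed Gram matrix; it is an illustration of the dynamics above (with our own function names), not code from the paper.

```python
import numpy as np

def kernel_gradient_descent(K, y, f0, lr=1.0, steps=500):
    """Discretized kernel gradient descent for the least-squares cost
    C(f) = (1/2N) * ||f - y||^2 on a finite dataset of size N.

    K  : (N, N) Gram matrix of the kernel on the data (assumed PSD).
    On the data points, the flow reads  df/dt = -(1/N) K (f - y),
    discretized as  f <- f - lr * (1/N) * K @ (f - y)."""
    N = len(y)
    f = f0.copy()
    for _ in range(steps):
        f -= lr / N * K @ (f - y)
    return f
```

If $K$ is positive definite on the data, the iterates converge to the labels $y$, in line with the convergence statement above.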
3.1 Random functions approximation
As a starting point to understand the convergence of ANN gradient descent to kernel gradient descent in the infinite-width limit, we introduce a simple model, inspired by the approach of Rahimi and Recht (2008).
A kernel $K$ can be approximated by a choice of $P$ random functions $f^{(p)}$ sampled independently from any distribution on $\mathcal{F}$ whose (non-centered) covariance is given by the kernel $K$:
$$\mathbb{E}\left[ f^{(p)}_k(x) f^{(p)}_{k'}(x') \right] = K_{k k'}(x, x').$$
These functions define a random linear parametrization $F^{lin} : \mathbb{R}^P \to \mathcal{F}$:
$$\theta \mapsto f^{lin}_\theta = \frac{1}{\sqrt{P}} \sum_{p=1}^P \theta_p f^{(p)}.$$
The partial derivatives of the parametrization are given by
$$\partial_{\theta_p} F^{lin}(\theta) = \frac{1}{\sqrt{P}} f^{(p)}.$$
Optimizing the cost $C \circ F^{lin}$ through gradient descent, the parameters follow the ODE:
$$\partial_t \theta_p(t) = -\partial_{\theta_p} \left( C \circ F^{lin} \right)(\theta(t)) = -\frac{1}{\sqrt{P}} \left\langle d|_{f^{lin}_{\theta(t)}}, f^{(p)} \right\rangle_{p^{in}}.$$
As a result the function $f^{lin}_{\theta(t)}$ evolves according to
$$\partial_t f^{lin}_{\theta(t)} = \frac{1}{\sqrt{P}} \sum_{p=1}^P \partial_t \theta_p(t)\, f^{(p)} = -\frac{1}{P} \sum_{p=1}^P \left\langle d|_{f^{lin}_{\theta(t)}}, f^{(p)} \right\rangle_{p^{in}} f^{(p)},$$
where the right-hand side is equal to the kernel gradient $-\nabla_{\tilde{K}} C$ with respect to the tangent kernel
$$\tilde{K} = \sum_{p=1}^P \partial_{\theta_p} F^{lin}(\theta) \otimes \partial_{\theta_p} F^{lin}(\theta) = \frac{1}{P} \sum_{p=1}^P f^{(p)} \otimes f^{(p)}.$$
This is a random $n_L$-dimensional kernel with values
$$\tilde{K}_{k k'}(x, x') = \frac{1}{P} \sum_{p=1}^P f^{(p)}_k(x) f^{(p)}_{k'}(x').$$
Performing gradient descent on the cost $C \circ F^{lin}$ is therefore equivalent to performing kernel gradient descent with the tangent kernel $\tilde{K}$ in the function space. In the limit as $P \to \infty$, by the law of large numbers, the (random) tangent kernel $\tilde{K}$ tends to the fixed kernel $K$, which makes this method an approximation of kernel gradient descent with respect to the limiting kernel $K$.
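The random-function construction above can be illustrated with the classical Rahimi-Recht random features: cosine features whose non-centered covariance is the Gaussian RBF kernel. The choice of kernel and the function name are ours, for illustration only.

```python
import numpy as np

def rf_tangent_kernel(X, P, rng):
    """Approximate the RBF kernel K(x, x') = exp(-||x - x'||^2 / 2) by P random
    cosine features (Rahimi & Recht, 2008). The tangent kernel of the linear
    parametrization f_theta = (1/sqrt(P)) sum_p theta_p f^(p) is the empirical
    covariance  K~(x, x') = (1/P) sum_p f^(p)(x) f^(p)(x')."""
    n = X.shape[1]
    omega = rng.standard_normal((P, n))           # frequencies ~ N(0, I)
    phase = rng.uniform(0, 2 * np.pi, size=P)     # random phases
    feats = np.sqrt(2.0) * np.cos(X @ omega.T + phase)   # (N, P) feature matrix
    return feats @ feats.T / P                    # empirical tangent kernel on the data
```

As $P$ grows, the empirical tangent kernel concentrates around the fixed kernel $K$ at rate $O(1/\sqrt{P})$, mirroring the law-of-large-numbers argument above.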
4 Neural tangent kernel
For ANNs trained using gradient descent on the composition $C \circ F^{(L)}$, the situation is very similar to that studied in Section 3.1. During training, the network function $f_\theta$ evolves along the (negative) kernel gradient
$$\partial_t f_{\theta(t)} = -\nabla_{\Theta^{(L)}} C|_{f_{\theta(t)}}$$
with respect to the neural tangent kernel (NTK)
$$\Theta^{(L)}(\theta) = \sum_{p=1}^P \partial_{\theta_p} F^{(L)}(\theta) \otimes \partial_{\theta_p} F^{(L)}(\theta).$$
However, in contrast to $F^{lin}$, the realization function $F^{(L)}$ of ANNs is not linear. As a consequence, the derivatives $\partial_{\theta_p} F^{(L)}(\theta)$ and the neural tangent kernel depend on the parameters $\theta$. The NTK is therefore random at initialization and varies during training, which makes the analysis of the convergence of $f_\theta$ more delicate.
In the next subsections, we show that, in the infinite-width limit, the NTK becomes deterministic at initialization and stays constant during training. Since $f_\theta$ at initialization is Gaussian in the limit, the asymptotic behavior of $f_\theta$ during training can be made explicit in the function space $\mathcal{F}$.
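For a finite network, the (random) NTK can be estimated directly from its definition as a sum of outer products of parameter derivatives. The sketch below uses finite differences on a scalar-output network in the parametrization of Section 2; it is deliberately naive (automatic differentiation would be used in practice), and all names are ours.

```python
import numpy as np

def empirical_ntk(params, xs, beta=0.1, eps=1e-4):
    """Empirical NTK  Theta(x, x') = sum_p (df(x)/d theta_p)(df(x')/d theta_p),
    estimated via central finite differences for a scalar-output ReLU network."""
    def flatten(ps):
        return np.concatenate([w.ravel() for pair in ps for w in pair])
    def unflatten(vec, like):
        out, i = [], 0
        for W, b in like:
            out.append((vec[i:i + W.size].reshape(W.shape),
                        vec[i + W.size:i + W.size + b.size]))
            i += W.size + b.size
        return out
    def f(vec, x):
        a = x
        ps = unflatten(vec, params)
        for ell, (W, b) in enumerate(ps):
            pre = W @ a / np.sqrt(W.shape[1]) + beta * b
            a = np.maximum(pre, 0.0) if ell < len(ps) - 1 else pre
        return a[0]   # scalar output (n_L = 1)
    theta = flatten(params)
    J = np.empty((len(xs), theta.size))   # Jacobian of outputs w.r.t. parameters
    for r, x in enumerate(xs):
        for p in range(theta.size):
            tp = theta.copy(); tp[p] += eps
            tm = theta.copy(); tm[p] -= eps
            J[r, p] = (f(tp, x) - f(tm, x)) / (2 * eps)
    return J @ J.T   # Gram matrix of the NTK on the inputs xs
```

By construction the result is a symmetric positive semi-definite Gram matrix, random through its dependence on the parameters.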
4.1 Initialization
As observed in (Neal, 1996; Lee et al., 2018), the output functions $f_{\theta, i}$ for $i = 1, \dots, n_L$ tend to iid Gaussian functions in the infinite-width limit (a proof in our setup is given in the appendix):
Proposition 1.
For a network of depth $L$ at initialization, with a Lipschitz nonlinearity $\sigma$, and in the limit as $n_1, \dots, n_{L-1} \to \infty$, the output functions $f_{\theta, k}$, for $k = 1, \dots, n_L$, tend (in law) to iid centered Gaussian functions of covariance $\Sigma^{(L)}$, where $\Sigma^{(L)}$ is defined recursively by:
$$\Sigma^{(1)}(x, x') = \frac{1}{n_0} x^T x' + \beta^2$$
$$\Sigma^{(L+1)}(x, x') = \mathbb{E}_{f \sim \mathcal{N}(0, \Sigma^{(L)})}\left[ \sigma(f(x))\, \sigma(f(x')) \right] + \beta^2,$$
taking the expectation with respect to a centered Gaussian function $f$ of covariance $\Sigma^{(L)}$.
Remark 2.
Strictly speaking, the existence of a suitable Gaussian measure with covariance $\Sigma^{(L)}$ is not needed: we only deal with the values of $f$ at $x, x'$ (the joint measure on $f(x), f(x')$ is simply a Gaussian vector in 2D). For the same reasons, in the proof of Proposition 1 and Theorem 1, we will freely speak of Gaussian processes without discussing their existence.
The first key result of our paper (proven in the appendix) is the following: in the same limit, the Neural Tangent Kernel (NTK) converges in probability to an explicit deterministic limit.
Theorem 1.
For a network of depth $L$ at initialization, with a Lipschitz nonlinearity $\sigma$, and in the limit as the layer widths $n_1, \dots, n_{L-1} \to \infty$, the NTK $\Theta^{(L)}$ converges in probability to a deterministic limiting kernel:
$$\Theta^{(L)} \to \Theta^{(L)}_\infty \otimes Id_{n_L}.$$
The scalar kernel $\Theta^{(L)}_\infty : \mathbb{R}^{n_0} \times \mathbb{R}^{n_0} \to \mathbb{R}$ is defined recursively by
$$\Theta^{(1)}_\infty(x, x') = \Sigma^{(1)}(x, x')$$
$$\Theta^{(L+1)}_\infty(x, x') = \Theta^{(L)}_\infty(x, x')\, \dot{\Sigma}^{(L+1)}(x, x') + \Sigma^{(L+1)}(x, x'),$$
where for $L \geq 1$
$$\dot{\Sigma}^{(L+1)}(x, x') = \mathbb{E}_{f \sim \mathcal{N}(0, \Sigma^{(L)})}\left[ \dot{\sigma}(f(x))\, \dot{\sigma}(f(x')) \right],$$
taking the expectation with respect to a centered Gaussian function $f$ of covariance $\Sigma^{(L)}$, and where $\dot{\sigma}$ denotes the derivative of $\sigma$.
Remark 3.
By Rademacher's theorem, $\dot{\sigma}$ is defined everywhere, except perhaps on a set of zero Lebesgue measure.
Note that the limiting $\Theta^{(L)}_\infty$ only depends on the choice of $\sigma$, the depth of the network and the variance of the parameters at initialization (which is equal to $1$ in our setting).
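For the ReLU nonlinearity, the Gaussian expectations in the recursion have closed forms (the arc-cosine kernels of Cho and Saul (2009)), so the limiting NTK can be evaluated exactly. A sketch, with $\beta = 0.1$ as in the numerical experiments; the function name is ours.

```python
import numpy as np

def relu_limiting_ntk(x1, x2, depth, beta=0.1):
    """Limiting NTK recursion of Theorem 1 for sigma = ReLU, using the
    closed-form dual activations of ReLU for a centered Gaussian pair:
      E[relu(u) relu(v)] = sqrt(s11 s22) (sin g + (pi - g) rho) / (2 pi)
      E[relu'(u) relu'(v)] = (pi - g) / (2 pi),   g = arccos(rho)."""
    n0 = len(x1)
    # Layer-1 covariances of the pair (f(x1), f(x2)):  Sigma^(1) = x.x'/n0 + beta^2
    s11 = x1 @ x1 / n0 + beta**2
    s22 = x2 @ x2 / n0 + beta**2
    s12 = x1 @ x2 / n0 + beta**2
    theta = s12                       # Theta^(1) = Sigma^(1)
    for _ in range(depth - 1):
        rho = np.clip(s12 / np.sqrt(s11 * s22), -1.0, 1.0)
        g = np.arccos(rho)
        new12 = np.sqrt(s11 * s22) * (np.sin(g) + (np.pi - g) * rho) / (2 * np.pi) + beta**2
        dot12 = (np.pi - g) / (2 * np.pi)
        new11 = s11 / 2 + beta**2     # E[relu(u)^2] = Var(u)/2 for centered u
        new22 = s22 / 2 + beta**2
        theta = theta * dot12 + new12  # Theta^(L+1) = Theta^(L) Sigma_dot + Sigma^(L+1)
        s11, s22, s12 = new11, new22, new12
    return theta
```

At depth $1$ the recursion reduces to $\Sigma^{(1)}$, and by symmetry of the updates the resulting kernel is symmetric in its two arguments.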
4.2 Training
Our second key result is that the NTK stays asymptotically constant during training. This applies for a slightly more general definition of training: the parameters are updated according to a training direction $d_t \in \mathcal{F}$:
$$\partial_t \theta_p(t) = \left\langle \partial_{\theta_p} F^{(L)}(\theta(t)), d_t \right\rangle_{p^{in}}.$$
In the case of gradient descent, $d_t = -d|_{f_{\theta(t)}}$ (see Section 3), but the direction may depend on another network, as is the case for e.g. Generative Adversarial Networks (Goodfellow et al., 2014). We only assume that the integral $\int_0^T \| d_t \|_{p^{in}} dt$ stays stochastically bounded as the width tends to infinity, which is verified for e.g. least-squares regression, see Section 5.
Theorem 2.
Assume that $\sigma$ is a Lipschitz, twice differentiable nonlinearity function, with bounded second derivative. For any $T$ such that the integral $\int_0^T \| d_t \|_{p^{in}} dt$ stays stochastically bounded, as $n_1, \dots, n_{L-1} \to \infty$, we have, uniformly for $t \in [0, T]$,
$$\Theta^{(L)}(t) \to \Theta^{(L)}_\infty \otimes Id_{n_L}.$$
As a consequence, in this limit, the dynamics of $f_\theta$ is described by the differential equation
$$\partial_t f_{\theta(t)} = \Phi_{\Theta^{(L)}_\infty \otimes Id_{n_L}}\left( \left\langle d_t, \cdot \right\rangle_{p^{in}} \right).$$
Remark 4.
As the proof of the theorem (in the appendix) shows, the variation during training of the individual activations in the hidden layers shrinks as their width grows. However their collective variation is significant, which allows the parameters of the lower layers to learn: in the formula of the limiting NTK in Theorem 1, the second summand represents the learning due to the last layer, while the first summand represents the learning performed by the lower layers.
As discussed in Section 3, the convergence of kernel gradient descent to a critical point of the cost $C$ is guaranteed for positive definite kernels. The limiting NTK is positive definite if the span of the derivatives $\partial_{\theta_p} F^{(L)}$, $p = 1, \dots, P$, becomes dense in $\mathcal{F}$ w.r.t. the $p^{in}$-norm as the width grows to infinity. It seems natural to postulate that the span of the preactivations of the last layer (which themselves appear in the derivatives corresponding to the connection weights of the last layer) becomes dense in $\mathcal{F}$, for a large family of measures $p^{in}$ and nonlinearities (see e.g. Hornik et al. (1989); Leshno et al. (1993) for classical theorems about ANNs and approximation). In the case when the dataset is supported on a sphere, the positive-definiteness of the limiting NTK can be shown using Gaussian integration techniques and existing positive-definiteness criteria, as given by the following proposition, proven in Appendix A.4:
Proposition 2.
For a non-polynomial Lipschitz nonlinearity $\sigma$, for any input dimension $n_0$, the restriction of the limiting NTK $\Theta^{(L)}_\infty$ to the unit sphere $\mathbb{S}^{n_0 - 1} = \{ x \in \mathbb{R}^{n_0} : x^T x = 1 \}$ is positive-definite if $L \geq 2$.
5 Leastsquares regression
Given a goal function $f^*$ and input distribution $p^{in}$, the least-squares regression cost is
$$C(f) = \frac{1}{2} \left\| f - f^* \right\|^2_{p^{in}} = \frac{1}{2} \mathbb{E}_{x \sim p^{in}}\left[ \left\| f(x) - f^*(x) \right\|^2 \right].$$
Theorems 1 and 2 apply to an ANN trained on such a cost. Indeed the norm of the training direction $\| f^* - f \|_{p^{in}}$ is strictly decreasing during training, bounding the integral. We are therefore interested in the behavior of a function $f_t$ during kernel gradient descent with a kernel $K$ (we are of course especially interested in the case $K = \Theta^{(L)}_\infty \otimes Id_{n_L}$):
$$\partial_t f_t = \Phi_K\left( \left\langle f^* - f_t, \cdot \right\rangle_{p^{in}} \right).$$
The solution of this differential equation can be expressed in terms of the map $\Pi : f \mapsto \Phi_K\left( \langle f, \cdot \rangle_{p^{in}} \right)$:
$$f_t = f^* + e^{-t \Pi}\left( f_0 - f^* \right),$$
where $e^{-t \Pi} = \sum_{k=0}^\infty \frac{(-t)^k}{k!} \Pi^k$ is the exponential of $-t \Pi$. If $\Pi$ can be diagonalized by eigenfunctions $f^{(i)}$ with eigenvalues $\lambda_i$, the exponential $e^{-t \Pi}$ has the same eigenfunctions with eigenvalues $e^{-t \lambda_i}$.
For a finite dataset $x_1, \dots, x_N$ of size $N$, the map $\Pi$ takes the form
$$\Pi(f)_k(x) = \frac{1}{N} \sum_{i=1}^N \sum_{k'=1}^{n_L} f_{k'}(x_i)\, K_{k k'}(x_i, x).$$
The map $\Pi$ has at most $N n_L$ positive eigenfunctions, and they are the kernel principal components of the data with respect to the kernel $K$ (Schölkopf et al., 1998; Shawe-Taylor and Cristianini, 2004). The corresponding eigenvalues $\lambda_i$ correspond to the variance captured by the components.
Decomposing the difference $(f^* - f_0) = \Delta^0_f + \Delta^1_f + \dots + \Delta^{N n_L}_f$ along the eigenspaces of $\Pi$, the trajectory of the function $f_t$ reads
$$f_t = f^* + \Delta^0_f + \sum_{i=1}^{N n_L} e^{-t \lambda_i} \Delta^i_f,$$
where $\Delta^0_f$ is in the kernel (null-space) of $\Pi$ and $\Delta^i_f \propto f^{(i)}$.
The above decomposition can be seen as a motivation for the use of early stopping. The convergence is indeed faster along the eigenspaces corresponding to larger eigenvalues $\lambda_i$. Early stopping hence focuses the convergence on the most relevant kernel principal components, while avoiding fitting the ones in eigenspaces with lower eigenvalues (such directions are typically the 'noisier' ones: for instance, in the case of the RBF kernel, lower eigenvalues correspond to high-frequency functions).
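On a finite dataset the map $\Pi$ acts as a symmetric PSD matrix, so the trajectory above can be computed by eigendecomposition, making the eigenvalue-dependent decay rates explicit. A sketch with our own function name:

```python
import numpy as np

def kernel_flow(K, y_star, f0, t):
    """Least-squares kernel gradient flow on a finite dataset (Section 5):
    on the data points, Pi acts as (1/N) K, so
        f_t = y* + expm(-t K / N) (f0 - y*),
    computed through the eigendecomposition of the symmetric PSD matrix K."""
    N = len(y_star)
    lam, U = np.linalg.eigh(K / N)                  # Pi = U diag(lam) U^T
    decay = U @ np.diag(np.exp(-t * lam)) @ U.T     # expm(-t Pi)
    return y_star + decay @ (f0 - y_star)
```

Components of $f_0 - f^*$ lying in large-eigenvalue eigenspaces vanish first; truncating the flow at a finite $t$ is exactly the early-stopping effect described above.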
Note that by the linearity of the map $e^{-t \Pi}$, if $f_0$ is initialized with a Gaussian distribution (as is the case for ANNs in the infinite-width limit), then $f_t$ is Gaussian for all times $t$. Assuming that the kernel is positive definite on the data (implying that the Gram matrix $\tilde{K}$ is invertible), in the $t \to \infty$ limit, we get that $f_\infty = f_{t = \infty}$ takes the form
$$f_\infty = \kappa^T_{x, \cdot} \tilde{K}^{-1} y^* + \left( f_0 - \kappa^T_{x, \cdot} \tilde{K}^{-1} y_0 \right),$$
with the vectors $\kappa_{x, \cdot} = \left( K_{k k'}(x, x_i) \right)_{i, k'}$, $y^* = \left( f^*_k(x_i) \right)_{i, k}$ and $y_0 = \left( f_{0, k}(x_i) \right)_{i, k}$,
and where the Gram matrix is given by $\tilde{K} = \left( K_{k k'}(x_i, x_j) \right)_{i k, j k'}$. The first term is the kernel ridge regression estimator (Shawe-Taylor and Cristianini, 2004) without regularization ($\lambda = 0$) and the second term is a centered Gaussian whose variance vanishes on the points of the dataset.
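The mean of the limiting distribution is thus an ordinary ridgeless kernel regression, which can be sketched in a couple of lines (assuming an invertible Gram matrix; names are ours):

```python
import numpy as np

def ridgeless_mean(K_train, K_test_train, y_star):
    """Mean of the limiting distribution f_infinity (Section 5): the kernel
    ridge regression estimator with regularization lambda = 0,
        f_infinity(x) = kappa(x)^T K~^{-1} y*,
    where K~ is the Gram matrix on the data and kappa(x) collects the kernel
    values between a test point x and the training points."""
    return K_test_train @ np.linalg.solve(K_train, y_star)
```

Evaluated at the training points themselves, the estimator interpolates the labels exactly, consistent with the vanishing variance on the dataset.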
Remark 5.
From Proposition 2 and the discussion above, we obtain in particular that in a least-squares regression setup, when the dataset is supported on the unit sphere $\mathbb{S}^{n_0 - 1}$, the infinite-width limit of the ANN training converges, as $t \to \infty$, to a global minimum of the cost.
6 Numerical experiments
In the following numerical experiments, fully connected ANNs of various widths are compared to the theoretical infinite-width limit. We choose the size of the hidden layers to all be equal to the same value $n := n_1 = \dots = n_{L-1}$ and we take the ReLU nonlinearity $\sigma(x) = \max(0, x)$.
In the first two experiments, we consider the case $n_0 = 2$. Moreover, the input elements are taken on the unit circle. This can be motivated by the structure of high-dimensional data, where the centered data points often have roughly the same norm.
In all experiments, we took $n_L = 1$ (note that by our results, a network with $n_L$ outputs behaves asymptotically like $n_L$ networks with scalar outputs which evolve independently). Finally, the value of the parameter $\beta$ is chosen as $0.1$, see Remark 1.
6.1 Convergence of the NTK
The first experiment illustrates the convergence of the NTK $\Theta^{(L)}$ of a network of depth $L = 4$ for two different widths $n = 500, 10000$. The function $\Theta^{(4)}(x_0, x)$ is plotted for a fixed $x_0 = (1, 0)$ and $x = (\cos(\gamma), \sin(\gamma))$ on the unit circle. To observe the distribution of the NTK, $10$ independent initializations are performed for both widths. The kernels are plotted at initialization $t = 0$ and then after $200$ steps of gradient descent with learning rate $1.0$ (i.e. at $t = 200$). The cost is a regression cost approximating the function $f^*(x) = x_1 x_2$ on random $\mathcal{N}(0, 1)$ inputs.
For the wider network, the NTK shows less variance and is smoother. It is interesting to note that the expectation of the NTK is very close for both network widths. During training, we observe that the NTK tends to "inflate". As expected, this is much more apparent for the smaller network ($n = 500$) than for the wider network ($n = 10000$), where the NTK stays almost fixed.
6.2 Kernel regression
For a regression cost, the infinite-width limit network function $f_t$ has a Gaussian distribution for all times $t$ (see Section 5). We compare this theoretical limit to the distribution of the network function at convergence for finite-width networks. We optimized a least-squares regression cost on $4$ points of the unit circle and plotted the function $f_t$ on the unit circle for a large time $t = 1000$ ($1000$ steps with learning rate $1.0$) for two different widths $n = 50, 1000$ and for $10$ random initializations each.
We also approximated the kernels $\Theta^{(L)}_\infty$ and $\Sigma^{(L)}$ using a large-width network ($n = 10000$) and used them to calculate and plot the 10th, 50th and 90th percentiles of the $t \to \infty$ limiting Gaussian distribution.
The distributions of the network functions are very similar for both widths: their mean and variance appear to be close to those of the limiting distribution. The NTK gives a good indication of what the neural network will converge to, even for relatively small widths ($n = 50$).
6.3 Convergence along a principal component
We now illustrate our result on the MNIST dataset of handwritten digits made up of grayscale images of dimension $28 \times 28$, yielding an input dimension of $n_0 = 784$.
We computed the first 3 principal components of a batch of $N = 512$ digits with respect to the NTK of a high-width network $n = 10000$ (giving an approximation of the limiting kernel) using a power iteration method, with respective eigenvalues $\lambda_1$, $\lambda_2$ and $\lambda_3$. The PCA is not centered: the first component is therefore almost equal to the constant function, which explains the large gap between the first and second eigenvalues.
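The power iteration procedure is standard; below is a sketch with deflation, applied to a precomputed NTK Gram matrix (the function is our own illustration, not the paper's code):

```python
import numpy as np

def top_kernel_components(K, k=3, iters=500, rng=None):
    """Top-k (uncentered) kernel principal components of the Gram matrix K
    via power iteration with deflation. Returns eigenvalues (descending)
    and the corresponding unit eigenvectors."""
    rng = rng or np.random.default_rng(0)
    K = K.copy()
    vals, vecs = [], []
    for _ in range(k):
        v = rng.standard_normal(K.shape[0])
        for _ in range(iters):
            v = K @ v
            v /= np.linalg.norm(v)
        lam = v @ K @ v               # Rayleigh quotient of the converged vector
        vals.append(lam)
        vecs.append(v)
        K = K - lam * np.outer(v, v)  # deflate the component just found
    return np.array(vals), np.array(vecs)
```

With the NTK Gram matrix of a batch of inputs as `K`, the returned vectors are the kernel principal components along which kernel gradient descent converges fastest.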
We have seen in Section 5 how the convergence of kernel gradient descent follows the kernel principal components. If the difference $f_0 - f^*$ at initialization is equal (or proportional) to one of the principal components $f^{(i)}$, then the function $f_t$ will converge in a straight line (in the function space) to $f^*$ at an exponential rate $e^{-\lambda_i t}$.
We tested whether ANNs of various widths behave in a similar manner. We set the goal of the regression cost to $f^* = f_{\theta(0)} + f^{(2)}$ and let the network converge. At each time step $t$, we decomposed the difference $f_{\theta(t)} - f^*$ into a component $g_t$ proportional to $f^{(2)}$ and another one $h_t$ orthogonal to $f^{(2)}$. In the infinite-width limit, the first component decays exponentially fast, $\| g_t \|_{p^{in}} = \| g_0 \|_{p^{in}} e^{-\lambda_2 t}$, while the second is null ($h_t = 0$), as the function converges along a straight line.
As expected, we see in Figure 2(b) that the wider the network, the less it deviates from the straight line (for each width we performed two independent trials). As the width grows, the trajectory along the 2nd principal component (shown in Figure 2(c)) converges to the theoretical limit shown in blue.
A surprising observation is that smaller networks appear to converge faster than wider ones. This may be explained by the inflation of the NTK observed in our first experiment. Indeed, multiplying the NTK by a factor $a$ is equivalent to multiplying the learning rate by the same factor. However, note that since the NTK of large-width networks is more stable during training, larger learning rates can in principle be taken. One must hence be careful when comparing the convergence speed in terms of the number of steps (rather than in terms of the time $t$): both the inflation effect and the learning rate must be taken into account.
7 Conclusion
This paper introduces a new tool to study ANNs, the Neural Tangent Kernel (NTK), which describes the local dynamics of an ANN during gradient descent. This leads to a new connection between ANN training and kernel methods: in the infinite-width limit, an ANN can be described in the function space directly by the limit of the NTK, an explicit constant kernel $\Theta^{(L)}_\infty$, which only depends on its depth, nonlinearity and parameter initialization variance. More precisely, in this limit, ANN gradient descent is shown to be equivalent to a kernel gradient descent with respect to $\Theta^{(L)}_\infty$. The limit of the NTK is hence a powerful tool to understand the generalization properties of ANNs, and it allows one to study the influence of the depth and nonlinearity on the learning abilities of the network. The analysis of training using the NTK allows one to relate convergence of ANN training with the positive-definiteness of the limiting NTK and leads to a characterization of the directions favored by early stopping methods.
Appendix A Appendix
This appendix is dedicated to proving the key results of this paper, namely Proposition 1 and Theorems 1 and 2, which describe the asymptotics of neural networks at initialization and during training, as well as Proposition 2, which guarantees the positivedefiniteness of the limiting NTK on the sphere.
We study the limit of the NTK as $n_1, \dots, n_{L-1} \to \infty$ sequentially, i.e. we first take $n_1 \to \infty$, then $n_2 \to \infty$, etc. This leads to much simpler proofs, but our results could in principle be strengthened to the more general setting when $\min(n_1, \dots, n_{L-1}) \to \infty$.
A natural choice of convergence to study the NTK is with respect to the operator norm on kernels:
$$\| K \|_{op} = \max_{\| f \|_{p^{in}} \leq 1} \| f \|_K = \max_{\| f \|_{p^{in}} \leq 1} \sqrt{ \mathbb{E}_{x, x'}\left[ f(x)^T K(x, x') f(x') \right] },$$
where the expectation is taken over two independent $x, x' \sim p^{in}$. This norm depends on the input distribution $p^{in}$. In our setting, $p^{in}$ is taken to be the empirical measure of a finite dataset $x_1, \dots, x_N$ of distinct samples. As a result, the operator norm of $K$ is equal to the leading eigenvalue of the $N n_L \times N n_L$ Gram matrix $\left( K_{k k'}(x_i, x_j) \right)_{i k, j k'}$. In our setting, convergence in operator norm is hence equivalent to pointwise convergence of $K$ on the dataset.
A.1 Asymptotics at Initialization
It has already been observed (Neal, 1996; Lee et al., 2018) that the output functions $f_{\theta, i}$ for $i = 1, \dots, n_L$ tend to iid Gaussian functions in the infinite-width limit.
Proposition 1.
For a network of depth $L$ at initialization, with a Lipschitz nonlinearity $\sigma$, and in the limit as $n_1, \dots, n_{L-1} \to \infty$ sequentially, the output functions $f_{\theta, k}$, for $k = 1, \dots, n_L$, tend (in law) to iid centered Gaussian functions of covariance $\Sigma^{(L)}$, where $\Sigma^{(L)}$ is defined recursively by:
$$\Sigma^{(1)}(x, x') = \frac{1}{n_0} x^T x' + \beta^2$$
$$\Sigma^{(L+1)}(x, x') = \mathbb{E}_{f \sim \mathcal{N}(0, \Sigma^{(L)})}\left[ \sigma(f(x))\, \sigma(f(x')) \right] + \beta^2,$$
taking the expectation with respect to a centered Gaussian function $f$ of covariance $\Sigma^{(L)}$.
Proof.
We prove the result by induction. When $L = 1$, there are no hidden layers and $f_\theta$ is a random affine function of the form:
$$f_\theta(x) = \frac{1}{\sqrt{n_0}} W^{(0)} x + \beta b^{(0)}.$$
All output functions $f_{\theta, k}$ are hence independent and have covariance $\Sigma^{(1)}$ as needed.
The key to the induction step is to consider an $(L+1)$-network as the following composition: an $L$-network mapping the input to the preactivations $\tilde{\alpha}^{(L)}_i$, followed by an elementwise application of the nonlinearity $\sigma$ and then a random affine map $\mathbb{R}^{n_L} \to \mathbb{R}^{n_{L+1}}$. The induction hypothesis gives that in the limit as $n_1, \dots, n_{L-1} \to \infty$ sequentially, the preactivations $\tilde{\alpha}^{(L)}_i$ tend to iid Gaussian functions with covariance $\Sigma^{(L)}$. The outputs
$$f_{\theta, k}(x) = \frac{1}{\sqrt{n_L}} \sum_{i=1}^{n_L} W^{(L)}_{k i}\, \sigma\left( \tilde{\alpha}^{(L)}_i(x) \right) + \beta b^{(L)}_k,$$
conditioned on the values of the preactivations $\tilde{\alpha}^{(L)}$, are iid centered Gaussians with covariance
$$\tilde{\Sigma}^{(L+1)}(x, x') = \frac{1}{n_L} \sum_{i=1}^{n_L} \sigma\left( \tilde{\alpha}^{(L)}_i(x) \right) \sigma\left( \tilde{\alpha}^{(L)}_i(x') \right) + \beta^2.$$
By the law of large numbers, as $n_L \to \infty$, this covariance tends in probability to the expectation
$$\tilde{\Sigma}^{(L+1)}(x, x') \to \Sigma^{(L+1)}(x, x') = \mathbb{E}_{f \sim \mathcal{N}(0, \Sigma^{(L)})}\left[ \sigma(f(x))\, \sigma(f(x')) \right] + \beta^2.$$
In particular the covariance is deterministic and hence independent of $W^{(L)}$ and $b^{(L)}$. As a consequence, the conditioned and unconditioned distributions of $f_{\theta, k}$ are equal in the limit: they are iid centered Gaussians of covariance $\Sigma^{(L+1)}$. ∎
In the infinite-width limit, the neural tangent kernel, which is random at initialization, converges in probability to a deterministic limit.
Theorem 1.
For a network of depth $L$ at initialization, with a Lipschitz nonlinearity $\sigma$, and in the limit as the layer widths $n_1, \dots, n_{L-1} \to \infty$ sequentially, the NTK $\Theta^{(L)}$ converges in probability to a deterministic limiting kernel:
$$\Theta^{(L)} \to \Theta^{(L)}_\infty \otimes Id_{n_L}.$$
The scalar kernel $\Theta^{(L)}_\infty : \mathbb{R}^{n_0} \times \mathbb{R}^{n_0} \to \mathbb{R}$ is defined recursively by
$$\Theta^{(1)}_\infty(x, x') = \Sigma^{(1)}(x, x')$$
$$\Theta^{(L+1)}_\infty(x, x') = \Theta^{(L)}_\infty(x, x')\, \dot{\Sigma}^{(L+1)}(x, x') + \Sigma^{(L+1)}(x, x'),$$
where for $L \geq 1$
$$\dot{\Sigma}^{(L+1)}(x, x') = \mathbb{E}_{f \sim \mathcal{N}(0, \Sigma^{(L)})}\left[ \dot{\sigma}(f(x))\, \dot{\sigma}(f(x')) \right],$$
taking the expectation with respect to a centered Gaussian function $f$ of covariance $\Sigma^{(L)}$, and where $\dot{\sigma}$ denotes the derivative of $\sigma$.
Proof.
The proof is again by induction. When $L = 1$, there is no hidden layer and therefore no limit to be taken. The neural tangent kernel is a sum over the entries of $W^{(0)}$ and those of $b^{(0)}$:
$$\Theta^{(1)}_{k k'}(x, x') = \frac{1}{n_0} \sum_{i=1}^{n_0} x_i x'_i\, \delta_{k k'} + \beta^2 \delta_{k k'} = \Sigma^{(1)}(x, x')\, \delta_{k k'}.$$
Here again, the key to prove the induction step is the observation that a network of depth $L + 1$ is an $L$-network mapping the inputs to the preactivations $\tilde{\alpha}^{(L)}$ of the $L$-th layer, followed by a nonlinearity and a random affine function. For a network of depth $L + 1$, let us therefore split the parameters into the parameters $\tilde{\theta}$ of the first $L$ layers and those of the last layer, $W^{(L)}$ and $b^{(L)}$.
By Proposition 1 and the induction hypothesis, as $n_1, \dots, n_{L-1} \to \infty$ sequentially, the preactivations $\tilde{\alpha}^{(L)}_i$ are iid centered Gaussian with covariance $\Sigma^{(L)}$ and the neural tangent kernel $\tilde{\Theta}^{(L)}$ of the smaller network converges to a deterministic limit:
$$\tilde{\Theta}^{(L)} \to \Theta^{(L)}_\infty \otimes Id_{n_L}.$$
We can split the neural tangent kernel into a sum over the parameters $\tilde{\theta}$ of the first $L$ layers and the remaining parameters $W^{(L)}$ and $b^{(L)}$.
For the first sum, let us observe that by the chain rule:
$$\partial_{\tilde{\theta}_p} f_{\theta, k}(x) = \frac{1}{\sqrt{n_L}} \sum_{i=1}^{n_L} \partial_{\tilde{\theta}_p} \tilde{\alpha}^{(L)}_i(x)\, \dot{\sigma}\left( \tilde{\alpha}^{(L)}_i(x) \right) W^{(L)}_{k i}.
$$
By the induction hypothesis, the contribution of the parameters $\tilde{\theta}$ to the neural tangent kernel therefore converges as $n_1, \dots, n_{L-1} \to \infty$:
$$\sum_p \partial_{\tilde{\theta}_p} f_{\theta, k}(x)\, \partial_{\tilde{\theta}_p} f_{\theta, k'}(x') \to \frac{1}{n_L} \sum_{i=1}^{n_L} \Theta^{(L)}_\infty(x, x')\, \dot{\sigma}\left( \tilde{\alpha}^{(L)}_i(x) \right) \dot{\sigma}\left( \tilde{\alpha}^{(L)}_i(x') \right) W^{(L)}_{k i} W^{(L)}_{k' i}.$$
By the law of large numbers, as $n_L \to \infty$, this tends to its expectation, which is equal to
$$\Theta^{(L)}_\infty(x, x')\, \dot{\Sigma}^{(L+1)}(x, x')\, \delta_{k k'}.$$
It is then easy to see that the second part of the neural tangent kernel, the sum over $W^{(L)}$ and $b^{(L)}$, converges to $\Sigma^{(L+1)}(x, x')\, \delta_{k k'}$ as $n_1, \dots, n_L \to \infty$. ∎
A.2 Asymptotics during Training
Given a training direction $t \mapsto d_t \in \mathcal{F}$, a neural network is trained in the following manner: the parameters $\theta_p$ are initialized as iid $\mathcal{N}(0, 1)$ and follow the differential equation:
$$\partial_t \theta_p(t) = \left\langle \partial_{\theta_p} F^{(L)}(\theta(t)), d_t \right\rangle_{p^{in}}.$$
In this context, in the infinite-width limit, the NTK stays constant during training:
Theorem 2.
Assume that $\sigma$ is a Lipschitz, twice differentiable nonlinearity function, with bounded second derivative. For any $T$ such that the integral $\int_0^T \| d_t \|_{p^{in}} dt$ stays stochastically bounded, as $n_1, \dots, n_{L-1} \to \infty$ sequentially, we have, uniformly for $t \in [0, T]$,
$$\Theta^{(L)}(t) \to \Theta^{(L)}_\infty \otimes Id_{n_L}.$$
As a consequence, in this limit, the dynamics of $f_\theta$ is described by the differential equation
$$\partial_t f_{\theta(t)} = \Phi_{\Theta^{(L)}_\infty \otimes Id_{n_L}}\left( \left\langle d_t, \cdot \right\rangle_{p^{in}} \right).$$