Interpretable Deep Gaussian Processes with Moments
October 10, 2019
Deep Gaussian Processes (DGPs) combine the expressiveness of Deep Neural Networks (DNNs) with the quantified uncertainty of Gaussian Processes (GPs). Both the expressive power and the intractable inference result from the non-Gaussian distribution over composition functions. We propose an interpretable DGP, based on approximating the DGP as a GP by calculating the exact moments, which additionally identify the heavy-tailed nature of some DGP distributions. Consequently, our approach admits interpretation both as NNs with specified activation functions and as a variational approximation to the DGP. We identify the expressivity parameter of DGPs and find non-local and non-stationary correlation arising from DGP composition. We provide general recipes for deriving the effective kernels for DGPs of two, three, or infinitely many layers, composed of homogeneous or heterogeneous kernels. Results illustrate the expressiveness of our effective kernels through samples from the prior and inference on simulated and real data, demonstrate the advantages of interpretability through analysis of the analytic forms, and draw relations and equivalences across kernels.
The success of deep learning models is generally perceived as stemming from greater expressivity, which results in powerful generalization (Goodfellow et al., 2016). However, deep learning models are viewed as black boxes because their complexity arises from the enormous number of parameters and possible choices of structures and activation units. Understanding these models remains an open and challenging problem (Zhang et al., 2016; Belkin et al., 2018; Mei et al., 2018; Advani et al., 2013; Liao and Couillet, 2018). Williams (1997) demonstrated that the characteristics of single-layer neural networks (NNs) can be understood from their effective kernels, which show non-stationary and non-local correlation. It is thus appealing to create more interpretable methods through the correspondence between deep learning and kernel-based methods, which have the advantage of an explicit mathematical formalization (Schölkopf et al., 1998; Rahimi and Recht, 2008; Cho and Saul, 2009; Mairal et al., 2014; Wilson et al., 2014, 2016; Van der Wilk et al., 2017; Sun et al., 2018).
Quantifying the uncertainty of inferences is also a critical issue for deep learning models (Wang and Manning, 2013; Gal and Ghahramani, 2016). Gaussian processes (GPs) (Rasmussen and Williams, 2006), the infinite-width limit of single-layer neural networks with random weight parameters (Neal, 2012), are attractive alternative models capable of quantifying uncertainty through Bayesian inference. Moreover, GPs can be composed into DGPs (Damianou and Lawrence, 2013), with inference via variational approximations (Titsias and Lawrence, 2010; Bui et al., 2016; Cutajar et al., 2017; Salimbeni et al., 2019; Salimbeni and Deisenroth, 2017) and sampling-based methods (Havasi et al., 2018), to achieve expressiveness comparable to DNNs. However, as for DNNs, this expressivity comes with difficulty in interpretability. It is desirable to leverage the interpretability of the explicit mathematical formalizations associated with kernel-based methods, while preserving expressivity and uncertainty quantification.
A brief review of GPs and the intractability of DGPs is given in Sec. 2. The multi-modal posterior distribution is shown to result from latent-function symmetry in the nested probabilistic model. We introduce the calculation of the second and fourth moments in Secs. 3 and 4, respectively. With the second moment, we approximate the DGP as a GP in Sec. 5, where the validity of the approximation and the heavy-tailed nature of the DGP are discussed by examining the fourth moments. This can be regarded as a new variational approximation to Bayesian inference in DGPs that effectively represents any depth as a single layer. In Sec. 6, with these analytic effective covariance functions, we interpret DGPs as generating non-local, non-stationary, and multi-scale correlation through the depth of homogeneous or heterogeneous function composition, which leads to the expressiveness of DGPs. Sec. 7 demonstrates the expressiveness of DGPs by examining the optimized marginal likelihood for a three-layer DGP against the expressivity parameter defined for DNN models.
1.1 Related Works
DGP (Damianou and Lawrence, 2013) and the earlier Warped GP (Snelson et al., 2004; Lázaro-Gredilla, 2012) were proposed to generalize the idea of a GP to serve as a prior distribution over composition functions in the Bayesian learning framework. The need for approximate inference presents challenges, however. Using variational approaches (Titsias, 2009; Titsias and Lawrence, 2010; Salimbeni and Deisenroth, 2017) and expectation propagation (Minka, 2001; Bui et al., 2016), DGPs have demonstrated superior expressivity over GPs. Our contribution is the development of Bayesian DGPs with analytic forms for compositional homogeneous and heterogeneous kernels, as well as identification of the heavy-tailed distributions associated with some DGP models. Because activation functions of DNNs correspond with kernels in DGPs, our results should also help interpret heterogeneous networks using different activation units.
Williams (1997) was the first to derive analytic kernels of one-layer neural networks, using sigmoidal and Gaussian activation functions. Later, Cho and Saul (2009) extended the analytic results to polynomial activation functions and further to deep models. Duvenaud et al. (2014) studied pathologies in DGPs, primarily focusing on squared exponential kernels and sampling functions from the prior. Daniely et al. (2016) extended these results for homogeneous deep networks by proposing the dual activation for the Hermite, step, and exponential functions. Poole et al. (2016) studied DNN models and identified an expressivity parameter indicating chaotic and ordered phases from the correlation map. Our approach differs by focusing on DGPs for Bayesian inference and by allowing heterogeneous kernels.
2 GP and DGP Overview
A Gaussian process (GP) is a prior distribution over continuous functions such that any finite subset of function values follows a joint multivariate Gaussian distribution specified by a mean function and a covariance function. The properties of the function, such as smoothness, are encoded in the covariance function. For example, functions sampled from the squared exponential (SE) kernel are infinitely differentiable, while the non-stationary neural network kernel (Williams, 1997) may generate functions which resemble step functions. Under the Bayesian framework, the posterior distribution is used to make predictions. The hyperparameters are determined by maximizing the marginal likelihood. The benefit of Bayesian learning is that overfitting is naturally avoided by the two competing terms, data fit and complexity penalty, in the marginal likelihood.
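The marginal-likelihood trade-off described above can be made concrete. The following is a minimal sketch (our own, not taken from the paper) of the standard GP log marginal likelihood with explicit data-fit and complexity-penalty terms; the SE kernel, noise level, and data here are illustrative placeholders:

```python
import numpy as np

def gp_log_marginal_likelihood(X, y, kernel, noise=1e-2):
    """log p(y|X) = -0.5 y^T K^{-1} y - 0.5 log|K| - (n/2) log(2 pi):
    the first term is the data fit, the second the complexity penalty."""
    K = kernel(X[:, None], X[None, :]) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    data_fit = -0.5 * y @ alpha
    complexity = -np.sum(np.log(np.diag(L)))  # equals -0.5 log|K|
    return data_fit + complexity - 0.5 * len(X) * np.log(2 * np.pi)

# Placeholder SE kernel with unit signal variance and length scale.
se = lambda a, b: np.exp(-(a - b)**2 / 2.0)
X = np.linspace(0.0, 1.0, 5)
y = np.sin(2 * np.pi * X)
print(gp_log_marginal_likelihood(X, y, se))
```

Maximizing this quantity over kernel hyperparameters is exactly the learning procedure the text describes; the Cholesky factorization keeps both terms numerically stable.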
Deep GP serves as a prior distribution over both homogeneous and heterogeneous compositions of functions, where the resultant function from successive layers of warping is given by
with the hidden functions
The zeroth-layer variable represents the input. The prior mean functions and covariance functions in the latent layers are predetermined. Homogeneous deep GPs compose functions with the same kernel, whereas heterogeneous deep GPs compose functions with different kernels. For a finite set of function values associated with the inputs, the joint distribution can be expressed as
in which each distribution in the product on right hand side is Gaussian. The DGP is defined as the marginal distribution,
over the composition function. However, the above marginal distribution is not Gaussian, as the latent variables appear in the inverse of the covariance matrix.
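In standard notation (the symbols are our own, a sketch consistent with the verbal description above), the composition, joint, and marginal can be written as:

```latex
% Composition of L layers (h_0 = x):
f(x) = f_L \circ f_{L-1} \circ \cdots \circ f_1(x), \qquad
h_\ell = f_\ell(h_{\ell-1}).

% Joint over the output f and the latent layers, each factor Gaussian:
p(f, h_{1:L-1} \mid x) = p\!\left(f \mid h_{L-1}\right)
  \prod_{\ell=1}^{L-1} p\!\left(h_\ell \mid h_{\ell-1}\right).

% DGP marginal over the composition function:
p(f \mid x) = \int p(f, h_{1:L-1} \mid x)\, \mathrm{d}h_{1:L-1}.
```

Each conditional is Gaussian with a covariance matrix built from the previous layer's values, which is why the latent variables enter through an inverse covariance and the marginal over f is not Gaussian.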
Although the marginalization over the latent variables in Eq. (4) is intractable, we have the following observations regarding the joint and posterior distributions of the DGP, which are consistent with the sampling-based DGP in (Havasi et al., 2018).
The joint distribution in Eq. (3) is invariant under a sign change of the latent variables if the prior mean is zero.
The proof proceeds by noting that the latent variable appears in only two of the distributions in Eq. (3). In the former distribution, the quadratic form in the exponent of the Gaussian permits the sign change. In the latter distribution, the covariance function is also invariant because the pairwise distances and the distance to the origin are unchanged by the sign change. Therefore the joint distribution is left invariant.
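The invariance is easy to confirm numerically. A minimal sketch (our own, assuming an SE kernel on the hidden-layer values):

```python
import numpy as np

def se_kernel(h, hp, sigma=1.0, ell=1.0):
    """Squared exponential kernel on hidden-layer values."""
    return sigma**2 * np.exp(-(h - hp)**2 / (2 * ell**2))

rng = np.random.default_rng(0)
h = rng.normal(size=5)
# The pairwise kernel matrix is unchanged under the sign flip h -> -h,
# because it depends on h only through the distances |h_i - h_j|.
K_plus = se_kernel(h[:, None], h[None, :])
K_minus = se_kernel(-h[:, None], -h[None, :])
assert np.allclose(K_plus, K_minus)
```

Since the Gaussian density over the latent values and the kernel matrix fed to the next layer are both invariant, the full joint is invariant, as claimed.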
If a set of function values associated with each latent layer is found to correspond to a maximum of the posterior distribution, with zero prior mean in each latent layer, then the sign-flipped sets are also maxima of the posterior distribution. Therefore, such a DGP posterior distribution is multi-modal.
In the following, we shall consider families of few-layer DGPs. Since DGP allows heterogeneous composition, it is convenient to label a DGP by its covariance functions. For example, SE[SC] stands for the two-layer DGP whose covariance functions in the output and input layers, respectively, are the squared exponential and squared cosine functions. The key finding in this paper is that, despite the fact that marginalization in DGP inference is intractable, the calculation of moments is tractable and analytic for some families of DGP.
To illustrate, we first consider the two-layer DGP, where the prior mean at the output layer is zero. It follows from the zero-mean assumption that the first moment of the marginal distribution in Eq. (4) is zero. Likewise, the third moment, as well as all other odd-order moments, is zero. Moreover, the second moment can be shown to be the expected value of the output-layer covariance function with respect to the multivariate normal distribution associated with the input-layer GP,
As will be shown later, the second moment is useful when the DGP is approximated by a GP, where the effective covariance function is just the second moment. Additionally, the fourth moment is
where, according to Isserlis' theorem (Isserlis, 1918), the sum is over all distinct ways of partitioning the four variables into pairs, with an abbreviated notation used for convenience. Although the marginal distribution for the DGP in Eq. (4) is intractable, the fourth moments are able to reveal additional information about the true distribution and shed light on the validity of the approximation scheme in this paper.
3 Second Moment of DGP
We first consider the second moments of the DGP distributions SE and SC, where the covariance function in the input layer is arbitrary but that in the output layer is in the exponential family with zero prior mean. The analytic calculation of the second moment is possible due to the marginal property of the Gaussian, which leads to the bivariate distribution,
where the integration is over all latent variables except the pair of interest, whose mean vector and covariance matrix are given by the corresponding block. Explicitly,
and the matrix elements take the corresponding kernel values. As such, following the expression in Eq. (5), the second moment of SE is obtained as
with the pair of random variables sampled from the bivariate normal distribution in Eq. (7). Similarly, the second moment of SC is given by
The signal magnitude and length scale are the hyperparameters in the output layer GP.
3.1 Squared Exponential DGP: SE & NuN
The following lemma is useful when calculating the expected value of a random exponential quadratic function.
The expected value of the random exponential quadratic form with respect to multivariate normal distribution is given by
where denotes the symmetric matrix of dimension and the matrix .
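The lemma can be checked by Monte Carlo. The sketch below (our own notation; the matrices are arbitrary placeholders) verifies the closed form E[exp(-x^T A x)] = |I + 2ΣA|^{-1/2} exp(-μ^T A (I + 2ΣA)^{-1} μ) for x ~ N(μ, Σ), which we take to be the form the lemma states:

```python
import numpy as np

rng = np.random.default_rng(1)
# A symmetric matrix A and a bivariate normal N(mu, Sigma); the values
# here are arbitrary placeholders.
A = np.array([[0.5, 0.1], [0.1, 0.3]])
mu = np.array([0.2, -0.4])
Sigma = np.array([[1.0, 0.3], [0.3, 0.8]])

# Closed form: E[exp(-x^T A x)] = |I + 2 Sigma A|^{-1/2}
#              * exp(-mu^T A (I + 2 Sigma A)^{-1} mu).
M = np.eye(2) + 2 * Sigma @ A
closed = np.linalg.det(M) ** -0.5 * np.exp(-mu @ A @ np.linalg.solve(M, mu))

# Monte Carlo estimate of the same expectation.
x = rng.multivariate_normal(mu, Sigma, size=500_000)
mc = np.mean(np.exp(-np.einsum('ni,ij,nj->n', x, A, x)))
assert abs(mc - closed) / closed < 0.02
```

The agreement between the two estimates is what makes the closed-form second moments in this section possible.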
For SE with the prior mean being zero in the input and output layers, the calculation of Eq. (9) for the covariance function in the above representation leads to the second moment,
Moreover, we may consider a more general Gaussian kernel as the covariance function in the output layer. This kernel in fact corresponds to the neural network with a Gaussian activation function [Eq. (13) in (Williams, 1997)]. For convenience, we label such a DGP as NuN. It should be noted that NuN reduces to SE in the appropriate limit. Writing out the corresponding matrix, the resultant second moment reads,
Note that the determinant term does not appear in the stationary SE case. In addition, the non-stationarity, i.e. the dependence of the covariance function on the inputs even as the distance between the pair vanishes, persists as long as the stated condition holds. Otherwise, Eq. (13) reduces to a constant.
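The two-layer SE result can be checked numerically. Below is a minimal Monte Carlo sketch (our own; the unit hyperparameters are placeholders) comparing a direct estimate of the second moment E[k₂(h, h′)] with the closed form σ₂² (1 + Var(h − h′)/ℓ₂²)^{−1/2}, which is the form we take the SE second moment to have:

```python
import numpy as np

def se(d2, sigma, ell):
    """SE kernel as a function of squared distance d2."""
    return sigma**2 * np.exp(-d2 / (2 * ell**2))

rng = np.random.default_rng(2)
x, xp = 0.0, 1.3
s1, l1, s2, l2 = 1.0, 1.0, 1.0, 1.0  # placeholder hyperparameters

# Inner-layer covariance of the pair (h(x), h(x')).
k11 = se(0.0, s1, l1)
k12 = se((x - xp)**2, s1, l1)
K = np.array([[k11, k12], [k12, k11]])

# Monte Carlo estimate of the second moment E[k2(h, h')].
h = rng.multivariate_normal([0.0, 0.0], K, size=500_000)
mc = np.mean(se((h[:, 0] - h[:, 1])**2, s2, l2))

# Closed form: sigma2^2 / sqrt(1 + Var(h - h') / l2^2).
var_d = 2 * k11 - 2 * k12
closed = s2**2 / np.sqrt(1 + var_d / l2**2)
assert abs(mc - closed) / closed < 0.02
```

The closed form follows from the one-dimensional case of the exponential-quadratic lemma, with the hidden-layer difference h − h′ normally distributed with variance k(x,x) + k(x′,x′) − 2k(x,x′).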
3.2 Squared Cosine DGP: SC
To capture periodic patterns in data, we may be interested in the squared cosine (SC) kernel. The expectation of a general exponential linear function is given by the following lemma.
The expected value of the random exponential linear function with respect to the bivariate normal distribution is given by
Using this lemma, the second moment of SC with zero prior mean in the input layer, Eq. (10), becomes
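A numerical check of the SC case; since the display formulas are not reproduced here, we assume the SC kernel takes the form σ² cos²((h − h′)/ℓ), for which the Gaussian characteristic function gives a closed-form second moment:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, ell = 1.0, 0.7   # placeholder output-layer hyperparameters
var_d = 1.4             # Var(h - h') induced by the input layer at some (x, x')

# Monte Carlo over the hidden-layer difference d = h - h' ~ N(0, var_d).
d = rng.normal(0.0, np.sqrt(var_d), size=500_000)
mc = np.mean(sigma**2 * np.cos(d / ell)**2)

# Closed form from cos^2(t) = (1 + cos 2t)/2 and
# E[cos(t d)] = exp(-t^2 var_d / 2) with t = 2 / ell:
closed = 0.5 * sigma**2 * (1 + np.exp(-2 * var_d / ell**2))
assert abs(mc - closed) / closed < 0.02
```

As the input-layer variance var_d grows, the closed form decays toward σ²/2 rather than zero, reflecting the constant offset that squared-cosine compositions carry.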
3.3 Three-layer DGP: SE[SE[SE]]
For DGPs with more than two layers, the present approach for obtaining the second moment is still valid, but an exact closed form seems unlikely. To shed light on the effect of depth on the second moment, we consider the special case of the three-layer DGP, SE[SE[SE]], where the SE kernel is employed in all three layers and all prior means are set to zero. Following from the second moment of SE[SE], Eq. (12), we can show that the second moment for the three-layer SE[SE[SE]] is,
with the pair of random variables sampled from the bivariate normal distribution. We may proceed by approximating the random second moment inside the braces in Eq. (16) as
As such, we obtain the following approximate second moment for three-layer DGP,
where erf(·) represents the error function and the remaining parameter collects the layer hyperparameters.
4 Fourth Moment of SE
The fourth moment is useful in revealing some important properties, e.g. heavy-tailedness, of a distribution. For a univariate distribution, the excess kurtosis measures the departure from Gaussianity. For example, the heavy-tailed Student-t distribution has positive excess kurtosis whereas the normal distribution has zero. Here, we quantify the non-Gaussianity of the DGP in Eq. (4) by investigating the fourth moments of the special case SE.
The product of two exponential quadratic functions has the expected value with respect to the multivariate distribution ,
where corresponds to the second moment for SE. The term , with the ’s denoting the matrix elements of off-diagonal block of covariance matrix .
The proof follows the generalization of Lemma 2 to the case of four random variables. The calculation of the determinant is tedious but can be completed with the help of the stated equality.
5 Approximating DGP with GP
In the previous sections, we have shown the second and fourth moments of some DGP models. Despite the fact that the DGP distribution in Eq. (4) is intractable and non-Gaussian, these moments are exact and analytic (except in the case of SE[SE[SE]]). Here, we propose to approximate the true DGP distribution with a GP. Namely, the variational distribution,
is a multivariate normal distribution which minimizes the KL divergence. Due to the Gaussianity of the approximating distribution, the second moments of the true distribution in Eq. (4) correspond exactly,
Therefore, the second moment serves as the effective covariance function.
Considering the case of SE[SE], the approximate distribution acquires the effective covariance function,
where the signal magnitudes and length scales are the hyperparameters of the respective layers. In addition, the heterogeneous composition SE[Lin] with a linear kernel in the input layer gives rise to a rational quadratic kernel. The composition can also include non-stationary kernels such as the neural network kernel (Williams, 1997) in the first layer.
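The SE[Lin] case can be checked numerically. A minimal Monte Carlo sketch (our own; we assume the linear kernel takes the form k₁(x, x′) = s₁² x x′ and use placeholder hyperparameters), which recovers a rational-quadratic form of order 1/2:

```python
import numpy as np

rng = np.random.default_rng(4)
s1, s2, l2 = 0.8, 1.0, 1.0  # placeholder hyperparameters
x, xp = 0.5, 2.0

# Assumed linear input-layer kernel k1(x, x') = s1^2 x x', realized
# exactly by the random line h(x) = w x with w ~ N(0, s1^2).
w = rng.normal(0.0, s1, size=500_000)
d2 = (w * x - w * xp)**2

# Monte Carlo second moment of SE[Lin].
mc = np.mean(s2**2 * np.exp(-d2 / (2 * l2**2)))

# Var(h(x) - h(x')) = s1^2 (x - x')^2 gives a rational-quadratic form
# of order 1/2:
closed = s2**2 / np.sqrt(1 + s1**2 * (x - xp)**2 / l2**2)
assert abs(mc - closed) / closed < 0.02
```

The closed form decays polynomially rather than exponentially in the input distance, which is the signature of the rational quadratic family.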
Likewise, the composition SC results in many novel and interesting kernels as well. The composition SC[SE] generates an exponential Gaussian kernel. Surprisingly, the periodic kernel (MacKay, 1998) is identical to the composition SC[SC], and SC[Lin] turns out to be the sum of an SE kernel and a constant kernel. See the Supplementary Material for the effective covariance function of SC[NuN].
For the two-layer composition SE, the first layer can be regarded as a deterministic mapping when its signal magnitude is set to zero; in this limit, the two-layer DGP reduces to a single-layer GP. The nature of the DGP distribution becomes transparent by inspecting the fourth moment. The fourth moment of the approximate distribution for SE is
whereas that of the true distribution is
If we treat the terms inside the bracket using the stated inequality, then Eq. (23) is larger than or equal to Eq. (22). Nevertheless, for large values of the relevant hyperparameter, Eqs. (22) and (23) agree to leading order in the expansion. Therefore, the approximating distribution for SE[SE] is a good approximation to the true one in that regime. More generally, the following remark states the fourth moments from the true and approximating distributions and the nature of the distribution of SE.
The proof follows from the fact that the multivariate normal distribution has its fourth moments given by the summation over all three distinct ways of partitioning the quartet into two doublets (Isserlis, 1918). For the true distribution, the partition is the same but each term is instead given by Eq. (18), which is greater than the product of the second moments. The particular fourth moment considered in Eqs. (22) and (23) is reproduced by identifying the random variables accordingly.
An important implication of the above remark is that the true distribution for this particular DGP might be a heavy-tailed non-Gaussian distribution. The above analysis is analogous to the comparison between a GP and a Student-t process (TP) (Shah et al., 2014). A TP with the same covariance function as a GP can be shown to possess larger fourth moments than the GP (Press, 2005), which may account for the more extreme behavior in TPs.
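The heavy-tail claim can be illustrated by simulation. The sketch below (our own; unit hyperparameters assumed throughout) samples a two-layer SE composition and compares its fourth cross-moment with the Isserlis value of a Gaussian matched to the same second moments; the DGP value is strictly larger because the conditional correlation fluctuates with the hidden layer:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400_000
x, xp = 0.0, 1.5  # placeholder input pair

# Layer 1: sample (h(x), h(x')) from a unit SE input-layer GP.
k12 = np.exp(-(x - xp)**2 / 2.0)
K1 = np.array([[1.0, k12], [k12, 1.0]])
h = rng.multivariate_normal([0.0, 0.0], K1, size=n)

# Layer 2: given h, (f(x), f(x')) is Gaussian with unit variances and
# correlation k2 = exp(-(h - h')^2 / 2); sample via the Cholesky factor.
k2 = np.exp(-(h[:, 0] - h[:, 1])**2 / 2.0)
z = rng.normal(size=(n, 2))
f1 = z[:, 0]
f2 = k2 * z[:, 0] + np.sqrt(1 - k2**2) * z[:, 1]

# Fourth cross-moment of the DGP vs. a GP with the same second moments.
m4_dgp = np.mean(f1**2 * f2**2)
k_eff = np.mean(k2)              # effective covariance k_eff(x, x')
m4_gp = 1.0 + 2.0 * k_eff**2     # Isserlis: k_xx k_x'x' + 2 k_xx'^2
assert m4_dgp > m4_gp
```

Conditionally on the hidden layer, E[f₁²f₂² | h] = 1 + 2k₂², so the gap between the two fourth moments is 2 Var(k₂) > 0: exactly the excess that marks a heavier tail than the matched GP.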
5.1 Recursive Composition
The above composition for the two-layer DGP can be generalized to deeper cases in a recursive manner. For homogeneous composition, e.g. SE[SE[SE]], one can treat the SE[SE] inside the bracket as a GP with zero mean and the effective covariance function given in Eq. (21). Thus, the new effective covariance function for the three-layer DGP is obtained by plugging this kernel back into Eq. (12). This trick can be applied recursively, so in general the following relation holds,
which relates the current covariance function to the covariance after another layer of composition. A similar approach can be taken to obtain other homogeneous composition kernels SC[SC[SC[…]]] (Eq. 15) and the non-stationary Gaussian kernels (Eq. 13).
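The recursion is straightforward to implement. A sketch (our own; we assume the layer-to-layer update k → σ²(1 + 2(σ² − k)/ℓ²)^{−1/2}, which follows from the two-layer SE second moment together with the fact that each layer's effective variance stays σ²):

```python
import numpy as np

def se_effective_kernel(tau, depth, sigma=1.0, ell=1.0):
    """Recursive effective kernel of a homogeneous SE[SE[...]] stack,
    evaluated at input distance tau.

    Starts from the base SE kernel, then repeatedly applies
    k -> sigma^2 / sqrt(1 + 2 (sigma^2 - k) / ell^2),
    one application per additional layer.
    """
    k = sigma**2 * np.exp(-tau**2 / (2 * ell**2))
    for _ in range(depth - 1):
        k = sigma**2 / np.sqrt(1 + 2 * (sigma**2 - k) / ell**2)
    return k

# Long-range behavior: a plain SE kernel vanishes as tau grows, whereas
# the composed kernel saturates at a nonzero plateau.
print(se_effective_kernel(10.0, depth=1))  # essentially zero (plain SE)
print(se_effective_kernel(10.0, depth=2))  # nonzero plateau
```

At zero distance the recursion is a fixed point (k = σ² maps to σ²), so the effective variance is preserved at every depth while the tail behavior changes qualitatively.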
For heterogeneous composition, e.g. SE[SC[NuN]], one can obtain the effective covariance function in a similar manner, which leads to the following expression for the effective covariance function,
where the NuN covariance function can be the general Gaussian kernel or the arcsine kernel [Eq. (11) in (Williams, 1997)].
5.2 Characteristics of Effective Kernel
Here we consider the distribution over derivatives of composite functions sampled from the approximate DGP in order to make a connection with the expressivity parameter defined for NN models (Poole et al., 2016). By definition, any two values of a function must follow a joint Gaussian distribution with a two-by-two covariance matrix. In order to shed light on the expressiveness of our effective kernels, we calculate the following expectation,
which asymptotically approaches the expected squared derivative as the inputs approach each other. Consequently, we obtain the expected value of the squared derivative,
When applied to the effective kernel of SE[SE] in Eq. (12), one may show that the normalized and squared average derivative takes the form
where we follow Poole et al. (2016) and define the parameter important for the analysis of expressivity,
which was suggested to signal the phase transition between chaotic and ordered phases in neural networks (Sompolinsky et al., 1988; Poole et al., 2016). With the chain rule, we can also show that the effective SE[SE[SE]] kernel has the analogous normalized and squared average derivative. A similar construction is applicable to squared cosine DGPs. This characteristic of the derivative of our effective kernels is consistent with (Duvenaud et al., 2014).
6 Interpretation of the Effective DGP
Our approximation based on GPs yields benefits in interpretability for both DGPs and DNNs. Prior results establish that DNNs can be viewed as GPs (Williams, 1997; Cho and Saul, 2009; Lee et al., 2017). In both the DGP and DNN paradigms, practitioners tend toward homogeneous kernels or activation functions, in part because there are no effective tools for predicting how compositions will behave a priori. In Table 1, we list a few representative analytic kernels from stacking two GPs. Our analysis allows iterative generation of heterogeneous kernels such as SE[SC[NuN]] and homogeneous ones such as SE[SE[SE[…]]] of any depth. Moreover, because our approach sits between these two paradigms, we can view our compositions as arising from activation functions (Daniely et al., 2016), kernels (Schölkopf et al., 1998; Duvenaud et al., 2013), or compositions of both.
Table 1: representative effective kernels from stacking two GPs (e.g., SC[Lin] yields SE + Const.).
We may leverage interpretability to analyze differences in the expressivity of SE[SE] and SE. First, SE[SE] approaches RQ in the small-distance limit. Because the rational quadratic kernel is known to arise from summing over infinitely many SE kernels with a Gamma distribution over the inverse squared length scale (Rasmussen and Williams, 2006), the effective SE[SE] kernel also possesses the multi-length-scale feature. Second, the function composition leads to long-range correlation: at large input separation, the SE[SE] kernel approaches a nonzero constant while SE becomes vanishingly small. This can be seen by noting that the first-layer outputs are nearly independent if the inputs are far apart. Nevertheless, the distance between the first-layer outputs, which falls within a bounded range with high probability, is greatly reduced before being fed into the second layer if the first-layer signal magnitude is small, which leads to finite correlation between the composed outputs.
The composition using a non-stationary kernel, i.e. NuN, is also of interest. Taking NuN[SE] for example, the determinant in Eq. (13) generates a squared term, and as such the corresponding combination appears in the effective covariance function.
6.1 SE vs. SE[SE]
Figure 1(a)–(d) shows functions generated from SE and SE[SE]. The functions generated by SE[SE] seem to possess two length scales of variation: the rapidly varying blue line in Figure 1(d) seems to have a slowly varying underlying trend similar to a shifted version of the black dashed line. This is broadly consistent with the discussion in Section 6 of the additional expressivity of multi-length-scale behavior in deeper structures.
To further demonstrate the expressivity of the SE[SE] kernel over the SE kernel, we use them to fit a dataset with two length scales (the data are from the MATLAB fitrgp example). Figure 1(e) shows that the smooth SE kernel does not fit the small variation, while the SE[SE] kernel is able to capture these fast variations on top of the slowly varying one. Figure 1(f) shows that the SE[SE] kernel is better than the SE kernel at prediction not only with larger amounts of data but also in the limited-data regime. We also remark in passing that the prediction accuracy for the SE[SE] kernel is comparable with that of the RQ kernel, which is known to be multi-length scale.
Given the ability of the SE[SE] kernel to capture fast variation, one might be concerned that it will fit to noise. To explore this possibility, we trained both SE and SE[SE] kernels on pure-noise data. In Figure 1(g) and (h), we show the prediction mean (black) from training on 90 noise data points sampled from a normal distribution. For this particular noise data, the SE kernel in (g) generates a non-zero prediction mean, while the SE[SE] kernel in (h) nicely predicts the underlying zero function.
We also apply the SE and SE[SE] kernels to two UCI regression datasets, the House Price dataset and the Abalone dataset. We investigated the test error as a function of the fraction of data used for training. For the House Price dataset, the SE[SE] kernel obtains lower test error than the SE kernel does when the fraction of training data is larger than 30% (see Supplementary Material Figure 1(a)). For the Abalone dataset, the two kernels have similar test error (see Supplementary Material Figure 1(b)).
6.2 Deep SE Compositions
We examine two different methods for obtaining the effective covariance function of SE[SE[SE]]. The straightforward way deals with the expectation of the second moment from output to input layers; with approximation, Eq. (17) is obtained. On the other hand, the recursive approach first treats the two layers SE[SE] inside the bracket as a GP, then treats the three-layer DGP as another two-layer DGP, which results in Eq. (25). Fig. 2(a) and (b), respectively, demonstrate the effective kernel functions from Eq. (17) and Eq. (25) for different hyperparameters. The prominent difference is the presence of a plateau at small distance in Fig. 2(a). For smaller hyperparameter values, the two approaches have closer results (green solid and dashed curves). The functions sampled from these kernels are shown in panels (c) and (d), respectively. With larger hyperparameter values, the sampled functions in both panels seem to possess variation on shorter length scales. However, the sampled functions in panel (c) are more restricted in range, but they are much less smooth than the functions obtained using the kernel in (b).
7 Hyperparameter Optimization and Expressivity
From the above analysis, it is clear that this parameter characterizes both the decay of the kernel function [Fig. 2(a) and (b)] and the degree of variation in sampled functions. Considering the recursive kernel of depth L, the parameter increases exponentially with depth if the ratio is kept constant in the model. In Poole et al. (2016), the analogously defined parameter characterizes the mapping of manifolds in DNNs and serves as an expressivity parameter, which also signals the transition between chaotic and ordered phases in a mean-field theory for neural networks (Sompolinsky et al., 1988).
As a demonstration that DGPs with a larger expressivity parameter are more expressive, we generate data from the non-periodic kernels in Table 1. The hyperparameters of the data-generating kernel are chosen randomly, and we use the SE[SE[SE]] kernel to fit the data by maximizing the marginal likelihood. We run the optimization 20 times with 20 random initializations, where the hyperparameters are sampled uniformly from (-10, 10) in log space. Figure 3 shows the final log marginal likelihood versus the value of the expressivity parameter after optimization. A dramatic increase of the marginal likelihood is observed near the transition point in the top row of panels, while the transition is shifted in the bottom row. Therefore, the three-layer DGP has superior expressivity in the chaotic regime.
We have presented interpretable deep Gaussian processes that combine the increased expressiveness associated with deep NNs with the uncertainty quantification of GPs. Marginalization over the latent function variables is not tractable, but calculation of moments is. Our approach is based on approximating deep GPs as GPs, which enables analytic integration, yielding effectively deep, single-layer kernels. The heavy-tailed nature of some deep Gaussian process distributions is revealed by the fourth moment. We have provided a recipe for constructing effective kernels for cases including homogeneous and heterogeneous kernels (equivalently, activation functions), derived a variety of such kernels, analyzed their behavior, and confirmed that behavior by prior and posterior predictive simulation. Simpler than alternative approaches to variational inference, our approach yields strong benefits in interpretability while retaining remarkable expressivity.
- Advani et al. (2013) Advani, M., Lahiri, S., and Ganguli, S. (2013). Statistical mechanics of complex neural systems and high dimensional data. Journal of Statistical Mechanics: Theory and Experiment, 2013(03):P03014.
- Belkin et al. (2018) Belkin, M., Ma, S., and Mandal, S. (2018). To understand deep learning we need to understand kernel learning. arXiv preprint arXiv:1802.01396.
- Bui et al. (2016) Bui, T., Hernández-Lobato, D., Hernandez-Lobato, J., Li, Y., and Turner, R. (2016). Deep gaussian processes for regression using approximate expectation propagation. In International Conference on Machine Learning, pages 1472–1481.
- Cho and Saul (2009) Cho, Y. and Saul, L. K. (2009). Kernel methods for deep learning. In Advances in neural information processing systems, pages 342–350.
- Cutajar et al. (2017) Cutajar, K., Bonilla, E. V., Michiardi, P., and Filippone, M. (2017). Random feature expansions for deep gaussian processes. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 884–893. JMLR.org.
- Damianou and Lawrence (2013) Damianou, A. and Lawrence, N. (2013). Deep gaussian processes. In Artificial Intelligence and Statistics, pages 207–215.
- Daniely et al. (2016) Daniely, A., Frostig, R., and Singer, Y. (2016). Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. In NIPS.
- Duvenaud et al. (2013) Duvenaud, D., Lloyd, J. R., Grosse, R., Tenenbaum, J. B., and Ghahramani, Z. (2013). Structure discovery in nonparametric regression through compositional kernel search. arXiv preprint arXiv:1302.4922.
- Duvenaud et al. (2014) Duvenaud, D., Rippel, O., Adams, R., and Ghahramani, Z. (2014). Avoiding pathologies in very deep networks. In Artificial Intelligence and Statistics, pages 202–210.
- Gal and Ghahramani (2016) Gal, Y. and Ghahramani, Z. (2016). Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pages 1050–1059.
- Goodfellow et al. (2016) Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep learning. MIT press.
- Havasi et al. (2018) Havasi, M., Hernández-Lobato, J. M., and Murillo-Fuentes, J. J. (2018). Inference in deep gaussian processes using stochastic gradient hamiltonian monte carlo. In Advances in Neural Information Processing Systems, pages 7506–7516.
- Isserlis (1918) Isserlis, L. (1918). On a formula for the product-moment coefficient of any order of a normal frequency distribution in any number of variables. Biometrika, 12(1/2):134–139.
- Lázaro-Gredilla (2012) Lázaro-Gredilla, M. (2012). Bayesian warped gaussian processes. In Advances in Neural Information Processing Systems, pages 1619–1627.
- Lee et al. (2017) Lee, J., Bahri, Y., Novak, R., Schoenholz, S. S., Pennington, J., and Sohl-Dickstein, J. (2017). Deep neural networks as gaussian processes. arXiv preprint arXiv:1711.00165.
- Liao and Couillet (2018) Liao, Z. and Couillet, R. (2018). The dynamics of learning: a random matrix approach. arXiv preprint arXiv:1805.11917.
- MacKay (1998) MacKay, D. J. (1998). Introduction to gaussian processes. NATO ASI Series F Computer and Systems Sciences, 168:133–166.
- Mairal et al. (2014) Mairal, J., Koniusz, P., Harchaoui, Z., and Schmid, C. (2014). Convolutional kernel networks. In Advances in neural information processing systems, pages 2627–2635.
- Mei et al. (2018) Mei, S., Montanari, A., and Nguyen, P.-M. (2018). A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33):E7665–E7671.
- Minka (2001) Minka, T. P. (2001). Expectation propagation for approximate bayesian inference. In Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence, pages 362–369. Morgan Kaufmann Publishers Inc.
- Neal (2012) Neal, R. M. (2012). Bayesian learning for neural networks, volume 118. Springer Science & Business Media.
- Poole et al. (2016) Poole, B., Lahiri, S., Raghu, M., Sohl-Dickstein, J., and Ganguli, S. (2016). Exponential expressivity in deep neural networks through transient chaos. In NIPS.
- Press (2005) Press, S. J. (2005). Applied multivariate analysis: using Bayesian and frequentist methods of inference. Courier Corporation.
- Rahimi and Recht (2008) Rahimi, A. and Recht, B. (2008). Random features for large-scale kernel machines. In Advances in neural information processing systems, pages 1177–1184.
- Rasmussen and Williams (2006) Rasmussen, C. E. and Williams, C. K. I. (2006). Gaussian Processes for Machine Learning. MIT press, Cambridge, MA.
- Salimbeni and Deisenroth (2017) Salimbeni, H. and Deisenroth, M. (2017). Doubly stochastic variational inference for deep gaussian processes. In Advances in Neural Information Processing Systems.
- Salimbeni et al. (2019) Salimbeni, H., Dutordoir, V., Hensman, J., and Deisenroth, M. P. (2019). Deep gaussian processes with importance-weighted variational inference. arXiv preprint arXiv:1905.05435.
- Schölkopf et al. (1998) Schölkopf, B., Smola, A., and Müller, K.-R. (1998). Nonlinear component analysis as a kernel eigenvalue problem. Neural computation, 10(5):1299–1319.
- Shah et al. (2014) Shah, A., Wilson, A., and Ghahramani, Z. (2014). Student-t processes as alternatives to gaussian processes. In Artificial intelligence and statistics, pages 877–885.
- Snelson et al. (2004) Snelson, E., Ghahramani, Z., and Rasmussen, C. E. (2004). Warped gaussian processes. In Advances in neural information processing systems, pages 337–344.
- Sompolinsky et al. (1988) Sompolinsky, H., Crisanti, A., and Sommers, H.-J. (1988). Chaos in random neural networks. Physical review letters, 61(3):259.
- Sun et al. (2018) Sun, S., Zhang, G., Wang, C., Zeng, W., Li, J., and Grosse, R. (2018). Differentiable compositional kernel learning for gaussian processes. arXiv preprint arXiv:1806.04326.
- Titsias (2009) Titsias, M. (2009). Variational learning of inducing variables in sparse gaussian processes. In Artificial Intelligence and Statistics, pages 567–574.
- Titsias and Lawrence (2010) Titsias, M. and Lawrence, N. (2010). Bayesian gaussian process latent variable model. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 844–851.
- Van der Wilk et al. (2017) Van der Wilk, M., Rasmussen, C. E., and Hensman, J. (2017). Convolutional gaussian processes. In Advances in Neural Information Processing Systems, pages 2849–2858.
- Wang and Manning (2013) Wang, S. and Manning, C. (2013). Fast dropout training. In international conference on machine learning, pages 118–126.
- Williams (1997) Williams, C. K. (1997). Computing with infinite networks. In Advances in neural information processing systems, pages 295–301.
- Wilson et al. (2014) Wilson, A. G., Gilboa, E., Nehorai, A., and Cunningham, J. P. (2014). Fast kernel learning for multidimensional pattern extrapolation. In Advances in Neural Information Processing Systems, pages 3626–3634.
- Wilson et al. (2016) Wilson, A. G., Hu, Z., Salakhutdinov, R., and Xing, E. P. (2016). Deep kernel learning. In Artificial Intelligence and Statistics, pages 370–378.
- Zhang et al. (2016) Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. (2016). Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530.