Training Deep Gaussian Processes using Stochastic Expectation Propagation and Probabilistic Backpropagation
Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers. DGPs are probabilistic and non-parametric and as such are arguably more flexible, have a greater capacity to generalise, and provide better calibrated uncertainty estimates than alternative deep models. The focus of this paper is scalable approximate Bayesian learning of these networks. The paper develops a novel and efficient extension of probabilistic backpropagation, a state-of-the-art method for training Bayesian neural networks, that can be used to train DGPs. The new method leverages a recently proposed method for scaling expectation propagation, called stochastic expectation propagation. The method is able to automatically discover useful input warping, expansion or compression, and is therefore a flexible form of Bayesian kernel design. We demonstrate the success of the new method for supervised learning on several real-world datasets, showing that it typically outperforms GP regression and is never much worse.
Thang D. Bui, University of Cambridge (email@example.com); José Miguel Hernández-Lobato, Harvard University (firstname.lastname@example.org); Yingzhen Li, University of Cambridge (email@example.com); Daniel Hernández-Lobato, Universidad Autónoma de Madrid (firstname.lastname@example.org); Richard E. Turner, University of Cambridge (email@example.com)
1 Introduction

Gaussian processes (GPs) are powerful nonparametric distributions over continuous functions that can be used for both supervised and unsupervised learning problems. In this article, we study a multi-layer hierarchical generalisation of GPs, deep Gaussian processes (DGPs), for supervised learning tasks. A GP is equivalent to an infinitely wide neural network with a single hidden layer; similarly, a DGP is a multi-layer neural network with multiple infinitely wide hidden layers. The mapping between layers in this type of network is parameterised by a GP, and as a result DGPs are arguably more flexible, have a greater capacity to generalise, and are able to provide better calibrated predictive uncertainty estimates than standard multi-layer models.
More formally, suppose we have a training set comprising N D-dimensional input vectors {x_n} and corresponding real-valued scalar observations {y_n}. The probabilistic representation of a DGP comprising L layers can be written as follows:

p(f_l) = GP(f_l; 0, K_l), for l = 1, ..., L,
p(h_{n,l} | f_l, h_{n,l-1}) = N(h_{n,l}; f_l(h_{n,l-1}), σ_l²), with h_{n,0} = x_n,
p(y_n | f_L, h_{n,L-1}) = N(y_n; f_L(h_{n,L-1}), σ_L²).
Here the hidden layers are denoted h_l and the mapping functions between layers, f_l, are drawn from GPs. A Gaussian (regression) likelihood is used in this work, but a different likelihood can easily be accommodated. (Hidden variables in the intermediate layers can and will generally have multiple dimensions, but we omit this here to lighten the notation.)
The DGP collapses back to a standard GP when L = 1 (when there are no hidden layers) or when only one of the functions f_l is non-linear. The addition of non-linear hidden layers can potentially overcome practical limitations of shallow GPs. First, modelling real-world complex datasets often requires rich, hand-designed covariance functions. DGPs can perform input warping or dimensionality compression or expansion, and hence automatically learn to construct a kernel that works well for the data at hand. Second, the functional mapping from inputs to outputs specified by a DGP is non-Gaussian, which is a more general and flexible modelling choice. Third, DGPs can repair damage done by sparse approximations to the representational power of each GP layer. For example, inducing-point-based approximation methods for GPs trade model complexity for a lower computational complexity of O(LNM²), where L is the number of layers, N is the number of datapoints and M is the number of inducing points. This complexity scales quadratically in M, whereas the dependence on the number of layers L is only linear. Therefore, it can be cheaper to increase the representational power of the model by adding extra layers rather than by adding more inducing points.
The focus of this paper is Bayesian learning of DGPs, which involves inferring the posterior over the mappings between layers and tuning hyperparameters using the marginal likelihood. Unfortunately, exact Bayesian learning in this model is analytically intractable, and as such approximate inference is needed. Recent work on this problem has largely focused on variational free-energy approaches. We introduce an alternative approximation scheme based on three approximations. First, in order to sidestep the cubic computational cost of GPs, we leverage a well-known inducing-point sparse approximation [5, 6]. Second, expectation propagation is used to approximate the analytically intractable posterior over the inducing points. Here we utilise stochastic expectation propagation (SEP), which prevents the memory overhead from increasing with the number of datapoints. Third, the SEP moment computation is itself analytically intractable and requires one final approximation, for which we use the probabilistic backpropagation approximation. The proposed method enables the advantages of the DGP model to be realised through a computationally efficient, scalable and easy-to-implement algorithm.
2 The Fully Independent Training Conditional approximation
The computational complexity of full GP models scales cubically with the number of training instances, making them intractable in practice for large datasets. Sparse approximation techniques are therefore often employed. They can be coarsely divided into two classes: ones that explicitly sparsify and create a parametric representation that approximates the original model, and ones that retain the original nonparametric properties and perform sparse approximation to the exact posterior. The method we describe and use here, the Fully Independent Training Conditional (FITC) approximation, falls into the first category. The FITC approximation is formed by considering a small set of M function values u in the infinite-dimensional vector f and assuming conditional independence between the remaining values given this set [5, 6]. This set is often called the inducing points or pseudo datapoints, and their input locations can be chosen by optimising the approximate marginal likelihood so that the approximate model is closer to the original model. The resulting model can be written as follows,
p(u) = N(u; 0, K_uu), p(f | u) = N(f; K_fu K_uu^{-1} u, diag(K_ff − Q_ff)),

where Q_ff = K_fu K_uu^{-1} K_uf and K_ab denotes the covariance function evaluated between the inputs indexed by a and b. The FITC approximation creates a parametric model, but one which is cleverly structured so that the induced non-stationary noise captures the uncertainty introduced by the sparsification.
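As a concrete illustration of the structure above, the following minimal NumPy sketch evaluates the FITC log marginal likelihood for a unit-variance exponentiated quadratic kernel. The function names (`rbf`, `fitc_log_marginal`) and the naive O(N³) dense implementation are our own for clarity; practical implementations exploit the low-rank-plus-diagonal structure of the covariance to achieve the O(NM²) cost discussed earlier.

```python
import numpy as np

def rbf(X, Z, ell=1.0, sf2=1.0):
    """Exponentiated quadratic kernel between rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell ** 2)

def fitc_log_marginal(X, y, Z, noise=0.1, jitter=1e-6):
    """Log marginal likelihood of the FITC model:
    y ~ N(0, Q_ff + diag(K_ff - Q_ff) + noise * I)."""
    Kuu = rbf(Z, Z) + jitter * np.eye(len(Z))
    Kfu = rbf(X, Z)
    Qff = Kfu @ np.linalg.solve(Kuu, Kfu.T)
    kff = np.full(len(X), 1.0)  # diag(K_ff) for a unit-variance RBF kernel
    # Low-rank part plus heteroscedastic diagonal correction and noise
    cov = Qff + np.diag(kff - np.diag(Qff) + noise)
    L = np.linalg.cholesky(cov)
    alpha = np.linalg.solve(L, y)
    return (-0.5 * alpha @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2 * np.pi))
```

A useful sanity check is that when the inducing inputs coincide with the training inputs, Q_ff equals K_ff and FITC recovers the full GP marginal likelihood.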
3 Stochastic expectation propagation for deep, sparse Gaussian processes
Having specified a probabilistic model for the data using deep sparse Gaussian processes, we now consider inference for the inducing outputs u and learning of the inducing inputs and hyperparameters. The posterior distribution over the inducing points can be written as p(u | X, y) ∝ p(u) ∏_n p(y_n | u, x_n). This quantity can then be used for prediction of the output given a test input, via p(y* | x*, X, y) = ∫ du p(y* | x*, u) p(u | X, y). However, the posterior over u is not analytically tractable when there is more than one GP layer in the model. As such, approximate inference is needed; here we use stochastic expectation propagation (SEP), a recently proposed modification of expectation propagation (EP).
In SEP, the posterior is approximated by q(u) ∝ p(u) g(u)^N, where the factor g(u) can be thought of as an average data factor that captures the average effect of a likelihood term on the posterior. This form, though seemingly limited at first, in practice performs almost as well as full EP, in which there is one factor per datapoint, while significantly reducing EP's memory footprint. Specifically, for our model the memory complexity of EP is O(NM²), as we need to store the mean and covariance matrix of a factor for each datapoint; in contrast, SEP requires only O(M²) regardless of the number of training points.
The SEP procedure involves looping through the dataset multiple times and performing the following steps: 1. remove the average factor g(u) from the approximate posterior to form the cavity distribution; 2. incorporate a likelihood term into the cavity to form the tilted distribution; 3. moment match the approximate posterior to this distribution; and, in addition to standard EP, 4. perform a small update to the average factor g(u). We choose a Gaussian form for both the approximate posterior and the average factor, and as a result steps 1 and 4 are analytically tractable. We discuss how to deal with the intermediate steps given a training datapoint in the next section.
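The four steps above can be sketched on a toy model where moment matching is exact: inferring the mean of a Gaussian from Gaussian observations. This is not the DGP model of this paper, just a minimal illustration of the SEP bookkeeping (one shared average factor, natural-parameter updates of size 1/N); the function name and parameterisation are our own.

```python
import numpy as np

def sep_gaussian_mean(y, prior_var=10.0, lik_var=1.0, n_sweeps=50):
    """SEP for q(theta) ∝ p(theta) g(theta)^N, Gaussian prior and likelihood.

    Factors are stored as natural parameters (precision, precision * mean).
    Because the tilted distribution is Gaussian here, step 3 is exact.
    """
    N = len(y)
    g_prec, g_pm = 0.0, 0.0  # average data factor g, initialised to uniform
    for _ in range(n_sweeps):
        for yn in y:
            q_prec = 1.0 / prior_var + N * g_prec  # current approximation
            q_pm = N * g_pm                        # (prior mean is zero)
            # 1. cavity: divide out one copy of g
            c_prec, c_pm = q_prec - g_prec, q_pm - g_pm
            # 2.+3. tilted distribution and moment matching (exact here)
            t_prec = c_prec + 1.0 / lik_var
            t_pm = c_pm + yn / lik_var
            # 4. small (1/N) update to the shared average factor
            g_prec += (t_prec - q_prec) / N
            g_pm += (t_pm - q_pm) / N
    q_prec = 1.0 / prior_var + N * g_prec
    return N * g_pm / q_prec, 1.0 / q_prec  # posterior mean and variance
```

Only the single factor g is stored, regardless of N; in this conjugate toy case the iteration converges to (approximately) the exact posterior.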
4 Probabilistic backpropagation for deep, sparse Gaussian processes
The moment matching step in SEP is analytically intractable, as it involves propagating a Gaussian distribution through a DGP and computing the moments of the resulting complex distribution. However, for certain choices of covariance function, it is possible to use an efficient and accurate approximation which propagates a Gaussian through the first layer of the network and projects this non-Gaussian distribution back to a moment-matched Gaussian before propagating through the next layer and repeating the same steps. This scheme is a central part of the probabilistic backpropagation algorithm that has been applied to standard neural networks.
In more detail, let q_cav(u) ∝ q(u)/g(u) be the cavity distribution. The difficult steps above are equivalent to the following updates to the mean m and covariance V of the approximate posterior:

m = m_cav + V_cav ∇_{m_cav} log Z,
V = V_cav − V_cav [∇_{m_cav} log Z (∇_{m_cav} log Z)ᵀ − 2 ∇_{V_cav} log Z] V_cav,
where Z = ∫ du p(y_n | u, x_n) q_cav(u) is the normalising constant of the tilted distribution. The inference scheme therefore reduces to evaluating Z and its gradients. By reintroducing the hidden variables in the middle layers, we perform a Gaussian approximation to Z in a sequential fashion; taking the two-GP-layer case as an example:

Z = ∫ dh_1 du_1 du_2 p(y_n | h_1, u_2) p(h_1 | u_1, x_n) q_cav(u_1) q_cav(u_2).
We can exactly marginalise out the inducing points for each GP layer, leading to Z = ∫ dh_1 q(h_1) N(y_n; m_2(h_1), v_2(h_1)), where q(h_1) = N(h_1; m_1, v_1), and the means and variances of each layer are given by the standard sparse GP predictive equations evaluated at the cavity moments of that layer's inducing points.
Following the literature on GP prediction with uncertain inputs, we can approximate the difficult integral in the equation above by a Gaussian whose mean and variance are given by the laws of total expectation and variance:

m_y = E_{q(h_1)}[m_2(h_1)],
v_y = E_{q(h_1)}[v_2(h_1)] + var_{q(h_1)}[m_2(h_1)].
The equations above require the expectations of the kernel matrix under a Gaussian distribution over the inputs, which are analytically tractable for widely used kernels such as the exponentiated quadratic, the linear kernel, or the more general class of spectral mixture kernels. Importantly, the computation graph of the approximation to log Z and its gradients can be easily programmed using symbolic packages such as Theano.
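For the exponentiated quadratic kernel in one dimension these kernel expectations have a simple closed form, which the following sketch verifies against Monte Carlo sampling (the function names are ours, and the multi-dimensional ARD case follows the same pattern dimension-wise):

```python
import numpy as np

def rbf(h, z, ell=1.0, sf2=1.0):
    """1-D exponentiated quadratic kernel."""
    return sf2 * np.exp(-0.5 * (h - z) ** 2 / ell ** 2)

def expected_rbf(m, v, z, ell=1.0, sf2=1.0):
    """Closed form for E_{h ~ N(m, v)}[k(h, z)]: the Gaussian over h is
    convolved with the (Gaussian-shaped) kernel, inflating its width."""
    return (sf2 * np.sqrt(ell ** 2 / (ell ** 2 + v))
            * np.exp(-0.5 * (m - z) ** 2 / (ell ** 2 + v)))
```

Differentiating such closed forms with respect to m and v is what makes the gradient of the approximate log Z available to a symbolic package.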
5 Stochastic optimisation of hyperparameters and inducing point locations
We complete the main text by discussing the optimisation of model hyperparameters and inducing point locations. Given the approximate posterior obtained using SEP as described above, one can also obtain the approximate marginal likelihood and its gradients. As a result, parameter training now involves iterating between running SEP and updating the hyperparameters based on these gradients. However, this procedure can only be made efficient and scalable by following two observations: 1. we do not need to wait for (S)EP to converge before making an update to the parameters, and 2. the gradients involve a sum across the whole training set, enabling fast optimisation using stochastic gradients computed on minibatches of datapoints.
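The second observation amounts to a standard stochastic-gradient training loop; a generic NumPy sketch of Adam on minibatch gradients is below, applied to a toy sum-over-datapoints objective rather than the DGP marginal likelihood (all names here are illustrative):

```python
import numpy as np

def adam_minibatch(grad_fn, theta, data, batch_size=50, lr=0.01, iters=3000,
                   b1=0.9, b2=0.999, eps=1e-8, seed=0):
    """Minimise a sum-over-datapoints objective with Adam, using gradients
    estimated on random minibatches."""
    rng = np.random.default_rng(seed)
    m = np.zeros_like(theta)
    v = np.zeros_like(theta)
    for t in range(1, iters + 1):
        idx = rng.choice(len(data), size=batch_size, replace=False)
        g = grad_fn(theta, data[idx])     # stochastic gradient estimate
        m = b1 * m + (1 - b1) * g         # first-moment estimate
        v = b2 * v + (1 - b2) * g * g     # second-moment estimate
        m_hat = m / (1 - b1 ** t)         # bias correction
        v_hat = v / (1 - b2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Toy objective: 0.5 * mean_n (theta - y_n)^2, whose minibatch gradient is:
def grad_fn(theta, batch):
    return theta - batch.mean()
```

In our setting `grad_fn` would return the minibatch estimate of the gradient of the (negative) approximate marginal likelihood with respect to hyperparameters and inducing inputs.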
6 Experimental results
We test our approximation method on several DGP architectures for regression on several real-world datasets. We obtain 20 random splits of each dataset (90% for training, 10% for testing) and report the average results and their standard deviations in Table 1. The prediction errors are evaluated using two metrics: root mean squared error (RMSE) and mean log loss (MLL). We use an exponentiated quadratic kernel with ARD lengthscales. The lengthscales and inducing points of the first GP layer are sensibly initialised based on the median distance between datapoints in the input space and the k-means cluster centres respectively. For the higher layers we use long initial lengthscales and suitably placed initial inducing points so that they start close to an identity mapping. For all results reported here, we use Adam with a minibatch size of 50 datapoints and run the optimiser for 4000 iterations. The learning rate is selected by optimising the predictive errors on a small subset of training points using Bayesian optimisation. We experiment with two one-hidden-layer DGP networks with hidden variables of one and two dimensions and 50 inducing points per layer, denoted [DGP, 1, 50] (this is the same as a Bayesian warped GP: the one-dimensional output of the first layer is warped through a GP to form the prediction) and [DGP, 2, 50] respectively. We compare them against sparse GP regression with the same number of inducing points, [GP, 50]. The results in Table 1 show that, overall, the DGP with two-dimensional hidden variables performs as well as or better than the sparse GP, and almost always better than the architecture with one-dimensional hidden variables. Taking the Boston Housing dataset as an example, the mean test log likelihood using [DGP, 2, 50] is -2.12, which is, to the best of our knowledge, better than state-of-the-art results obtained using Bayesian neural networks with probabilistic backpropagation (-2.57), dropout (-2.46) or SGLD (-2.31).
| Dataset | N | D | RMSE: GP, 50 | RMSE: DGP, 1, 50 | RMSE: DGP, 2, 50 | MLL: GP, 50 | MLL: DGP, 1, 50 | MLL: DGP, 2, 50 |
In addition, we vary the number of inducing points per layer for the above networks and trace out the speed-accuracy frontier. Preliminary results indicate that DGPs are very efficient under our inference scheme and, with a small number of inducing points, can obtain predictive performance that would require many more inducing points in a shallower architecture.
7 Conclusion

We have proposed a novel approximation scheme for deep Gaussian processes for supervised learning. Our method extends probabilistic backpropagation for Bayesian neural networks, combining it with an inducing-point-based sparse GP approximation and a recently proposed method for scalable approximate Bayesian inference, stochastic expectation propagation. We systematically evaluate our approach on several regression datasets, and the initial experimental results demonstrate the validity of our method and the effectiveness of DGPs compared to GPs. Our method is fast, easy to implement and, promisingly, gives state-of-the-art performance in various regression tasks.
Current work includes performing experiments on large-scale datasets, comparing our method to the variational approaches mentioned above, extending the method to classification and unsupervised learning, and understanding the effect of the network architecture on prediction quality.
TB thanks Google for funding his European Doctoral Fellowship. JMHL acknowledges support from the Rafael del Pino Foundation. DHL and JMHL acknowledge support from Plan Nacional I+D+i, Grant TIN2013-42351-P, and from CAM, Grant S2013/ICE-2845 CASI-CAM-CM. YL thanks the Schlumberger Foundation for her Faculty for the Future PhD fellowship. RET thanks EPSRC grants EP/G050821/1 and EP/L000776/1.
-  C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2005.
-  A. C. Damianou and N. D. Lawrence, “Deep Gaussian processes,” in 16th International Conference on Artificial Intelligence and Statistics, pp. 207–215, 2013.
-  A. Damianou, Deep Gaussian processes and variational propagation of uncertainty. PhD thesis, University of Sheffield, 2015.
-  J. Hensman and N. D. Lawrence, “Nested variational compression in deep Gaussian processes,” arXiv preprint arXiv:1412.1370, 2014.
-  E. Snelson and Z. Ghahramani, “Sparse Gaussian processes using pseudo-inputs,” in Advances in Neural Information Processing Systems 19, pp. 1257–1264, 2006.
-  J. Quiñonero-Candela and C. E. Rasmussen, “A unifying view of sparse approximate Gaussian process regression,” The Journal of Machine Learning Research, vol. 6, pp. 1939–1959, 2005.
-  Y. Li, J. M. Hernandez-Lobato, and R. E. Turner, “Stochastic expectation propagation,” in Advances in Neural Information Processing Systems 29, 2015.
-  J. M. Hernández-Lobato and R. P. Adams, “Probabilistic backpropagation for scalable learning of Bayesian neural networks,” in 32nd International Conference on Machine Learning, 2015.
-  T. P. Minka, A family of algorithms for approximate Bayesian inference. PhD thesis, Massachusetts Institute of Technology, 2001.
-  A. Girard, C. E. Rasmussen, J. Quiñonero-Candela, and R. Murray-Smith, “Gaussian process priors with uncertain inputs — application to multiple-step ahead time series forecasting,” in Advances in Neural Information Processing Systems 15, pp. 529–536, 2003.
-  A. Wilson and R. Adams, “Gaussian process kernels for pattern discovery and extrapolation,” in 30th International Conference on Machine Learning, pp. 1067–1075, 2013.
-  F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. J. Goodfellow, A. Bergeron, N. Bouchard, and Y. Bengio, “Theano: new features and speed improvements.” Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
-  D. Hernández-Lobato and J. M. Hernández-Lobato, “Scalable Gaussian process classification via expectation propagation,” arXiv preprint arXiv:1507.04513, 2015.
-  D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” in 3rd International Conference on Learning Representations, 2015.
-  J. Snoek, H. Larochelle, and R. P. Adams, “Practical Bayesian optimization of machine learning algorithms,” in Advances in Neural Information Processing Systems 25, pp. 2951–2959, 2012.
-  Y. Gal and Z. Ghahramani, “Dropout as a Bayesian approximation: Representing model uncertainty in deep learning,” arXiv preprint arXiv:1506.02142, 2015.
-  A. Korattikara, V. Rathod, K. Murphy, and M. Welling, “Bayesian dark knowledge,” in Advances in Neural Information Processing Systems 29, 2015.