Adaptive Multiple Importance Sampling for Gaussian Processes
Abstract
In applications of Gaussian processes where quantification of uncertainty is a strict requirement, it is necessary to accurately characterize the posterior distribution over Gaussian process covariance parameters. Normally, this is done by means of standard Markov chain Monte Carlo (MCMC) algorithms. Motivated by the issues related to the complexity of calculating the marginal likelihood that can make MCMC algorithms inefficient, this paper develops an alternative inference framework based on Adaptive Multiple Importance Sampling (AMIS). This paper studies the application of AMIS in the case of a Gaussian likelihood, and proposes the Pseudo-Marginal AMIS for non-Gaussian likelihoods, where the marginal likelihood is unbiasedly estimated. The results suggest that the proposed framework outperforms MCMC-based inference of covariance parameters in a wide range of scenarios and remains competitive for moderately large dimensional parameter spaces.
Keywords:
Gaussian processes · Bayesian inference · Markov chain Monte Carlo · Importance sampling
1 Introduction
Gaussian Processes (GPs) have proven to be a successful class of statistical inference methods for data analysis in several applied domains, such as pattern recognition (Rasmussen and Williams 2006; Bishop 2007; Filippone and Girolami 2014), neuroimaging (Filippone et al. 2012), signal processing (Kim et al. 2014), Bayesian optimization (Jones et al. 1998), and emulation and calibration of computer codes (Kennedy and O'Hagan 2001). The features that make GPs appealing are the nonparametric formulation that makes it possible to model data flexibly, and the parameterization that makes it possible to gain some understanding of the system under study. These properties hinge on the parameterization of the GP covariance function and on the way GP covariance parameters are optimized or inferred.
It is established that optimizing covariance parameters can severely affect the ability of the model to quantify uncertainty in predictions (Neal 1999; Filippone and Girolami 2014; Filippone et al. 2012; Taylor and Diggle 2012). Therefore, in applications where this is undesirable, it is necessary to accurately characterize the posterior distribution over covariance parameters and propagate this source of uncertainty forward to predictions. This task is particularly challenging when dealing with GPs. Inference of GP covariance parameters is generally analytically intractable, but a further complication arises from the difficulties associated with the repeated computation of the so-called marginal likelihood, which is necessary when employing any standard inference method. In particular, in the case of a Gaussian likelihood, the marginal likelihood is computable but extremely costly due to the cubic scaling with the number of input vectors. When the likelihood function is not Gaussian, e.g., in classification, in ordinal regression, or in Cox processes, the marginal likelihood is not even computable analytically.
In response to the challenges above, a large body of the literature develops approximate inference methods (Williams and Barber 1998; Opper and Winther 2000; Kuss and Rasmussen 2005; Rasmussen and Williams 2006; Nickisch and Rasmussen 2008; Hensman et al. 2015), which, although successful in many cases, give no guarantees on the effect on the quantification of uncertainty in practice. To quantify uncertainty without introducing any bias, there have been attempts in the literature to employ Markov chain Monte Carlo (MCMC) techniques; we can broadly divide such attempts into works that propose reparameterization techniques (Neal 1999; Murray and Adams 2010; Vanhatalo and Vehtari 2007; Filippone et al. 2013), and methods that carry out inference based on unbiased computations of the marginal likelihood (Filippone and Girolami 2014; Filippone 2014; Murray and Graham 2015). Although these approaches proved successful in a variety of scenarios, employing MCMC algorithms may lead to inefficiencies; for instance, optimal acceptance rates for popular MCMC algorithms such as the Metropolis-Hastings (MH) algorithm (around 0.234 (Roberts et al. 1997)) and the Hybrid Monte Carlo (HMC) algorithm (about 0.651 (Beskos et al. 2013; Neal 2011)) indicate that several expensive computations are wasted. Furthermore, it is established that introducing adaptivity into MCMC proposal mechanisms to improve efficiency may lead to convergence issues (Andrieu and Robert 2001).
In this paper we develop a general framework to learn GPs aimed at overcoming the aforementioned limitations of MCMC methods for GPs, where expectations under the posterior distribution over covariance parameters are carried out by means of the Adaptive Multiple Importance Sampling (AMIS) algorithm (Cornuet et al. 2012). The application of this framework to the Gaussian likelihood case, although novel, is relatively straightforward given that the likelihood is computable. In the case of non-Gaussian likelihoods, the impossibility of computing the likelihood exactly motivates us to propose a novel version of AMIS where the likelihood is only (unbiasedly) estimated. Inspired by Pseudo-Marginal MCMC approaches (Andrieu and Roberts 2009), we therefore propose the Pseudo-Marginal AMIS (PM-AMIS) algorithm, and provide a theoretical analysis showing under which conditions PM-AMIS yields expectations under the posterior over covariance parameters without introducing any bias. The proposed PM-AMIS is an instance of the Importance Sampling squared (IS²) algorithms (Pitt et al. 2012; Tran et al. 2014) that are gaining popularity as practical Bayesian inference methods.
Summarizing, the main contributions of this work are: (i) the application of AMIS to learn GPs with any likelihood; (ii) a theoretical analysis of PM-AMIS; (iii) an extensive comparison of the convergence speed, with respect to computational complexity, of AMIS versus MCMC methods.
The results demonstrate the value of our proposal. In particular, the results indicate that AMIS is competitive with MCMC algorithms in terms of convergence speed over computational cost when calculating expectations under the posterior distribution over covariance parameters. Furthermore, the results suggest that AMIS is a valid alternative to MCMC algorithms even in the case of moderately large dimensional parameter spaces, which is common when employing richly parameterized covariances (e.g., Automatic Relevance Determination (ARD) covariances (MacKay 1994)). Overall, the results suggest a promising direction to speed up inference over GP covariance parameters, given that importance-sampling-based inference methods, unlike MCMC algorithms, are inherently parallel.
The paper is organised as follows. In Section 2 we provide a brief review of GP regression and Bayesian inference. Section 3 presents the proposed Adaptive Multiple Importance Sampling for Gaussian Processes. Section 4 reports the experiments and the results. In Section 5, we conclude the paper.
2 Bayesian Gaussian Processes
2.1 Gaussian Processes
Let $X = \{\mathbf{x}_1, \ldots, \mathbf{x}_n\}$ be a set of input vectors and let $\mathbf{y}$ be the vector consisting of the corresponding observations $y_1, \ldots, y_n$. In most GP models, the assumption is that observations are conditionally independent given a set of latent variables. Such latent variables are modeled as realizations of a function $f$ at the input vectors $\mathbf{x}_i$, i.e., $f_i = f(\mathbf{x}_i)$. Latent variables are used to express the likelihood function, which under the assumption of independence becomes $p(\mathbf{y} \mid \mathbf{f}) = \prod_{i=1}^{n} p(y_i \mid f_i)$, with $p(y_i \mid f_i)$ depending on the particular type of data being modeled (e.g., Gaussian for regression, Bernoulli for probit classification with probability $\Phi(f_i)$, where $\Phi$ is defined as the cumulative normal distribution).
What characterizes GP models is the way the latent variables are specified. In particular, in GP models the assumption is that the function $f$ is distributed as a GP, which implies that the latent function values are jointly distributed as a Gaussian $p(\mathbf{f} \mid \theta) = \mathcal{N}(\mathbf{f} \mid \mathbf{0}, K)$, where $K$ is the covariance matrix. The entries $K_{ij}$ of the covariance matrix are specified by a covariance (kernel) function $k(\mathbf{x}_i, \mathbf{x}_j \mid \theta)$ with hyperparameters $\theta$, evaluated between pairs of input vectors. In this work, two kinds of covariance function are considered. The first is the RBF (Radial Basis Function) kernel defined as:
$k(\mathbf{x}_i, \mathbf{x}_j \mid \theta) = \sigma^2 \exp\left( -\frac{\|\mathbf{x}_i - \mathbf{x}_j\|^2}{2\tau^2} \right)$ (1)
The parameter $\tau$ defines the lengthscale of the interaction between the input vectors, and $\sigma^2$ represents the marginal variance for each latent variable. The second is the ARD kernel, which takes the form:
$k(\mathbf{x}_i, \mathbf{x}_j \mid \theta) = \sigma^2 \exp\left( -\frac{1}{2} \sum_{r=1}^{d} \eta_r (x_{ir} - x_{jr})^2 \right)$ (2)
The advantage of the ARD kernel is that it accounts for the influence of each feature on the prediction of $\mathbf{y}$, with larger values of the parameters $\eta_r$ indicating a higher influence of the corresponding features (Kim et al. 2014). For simplicity of notation, in the remainder of the paper we will denote by $\theta$ the vector of all kernel parameters.
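For concreteness, both covariance functions can be evaluated with a few lines of numpy. This is a minimal sketch; the function names and the weight-based parameterization of the ARD kernel (one weight per dimension, larger weight meaning more influence) are our own illustrative choices.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma2=1.0, lengthscale=1.0):
    # sigma2 * exp(-||x - x'||^2 / (2 * lengthscale^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sigma2 * np.exp(-0.5 * d2 / lengthscale ** 2)

def ard_kernel(X1, X2, sigma2=1.0, weights=None):
    # One weight per input dimension; a larger weight means the
    # corresponding feature has more influence on the covariance.
    if weights is None:
        weights = np.ones(X1.shape[1])
    d2 = (weights * (X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sigma2 * np.exp(-0.5 * d2)

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel(X, X)  # 2x2 covariance matrix of the two inputs
```

Setting a weight to zero in the ARD kernel removes the corresponding feature from the covariance entirely, which is the mechanism behind automatic relevance determination.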
When making predictions, using a point estimate of $\theta$ has been reported to potentially underestimate the uncertainty in predictions or yield inaccurate assessments of the relative influence of different features (Filippone et al. 2012; Filippone and Girolami 2014; Bishop 2007). Therefore, a Bayesian approach is usually adopted to overcome these limitations, which entails characterizing the posterior distribution over covariance parameters. In order to do so, it is necessary to employ methods, such as MCMC, that require computing the marginal likelihood $p(\mathbf{y} \mid \theta)$ every time $\theta$ is updated. We now discuss the challenges associated with the computation of the marginal likelihood for the special case of a Gaussian likelihood and the more general case of a non-Gaussian likelihood.
2.1.1 Gaussian likelihood
In the GP regression setting, the observations are modeled as Gaussian distributed with mean $\mathbf{f}$ (the latent variables) and covariance $\lambda I$, where $I$ denotes the identity matrix and $\lambda$ is the variance of the noise on the observations. In this setting, the likelihood and the GP prior form a conjugate pair, so latent variables can be integrated out of the model, leading to $p(\mathbf{y} \mid \theta) = \mathcal{N}(\mathbf{y} \mid \mathbf{0}, K_{\mathbf{y}})$, where $K_{\mathbf{y}} = K + \lambda I$. This gives direct access to the log-marginal likelihood
$\log p(\mathbf{y} \mid \theta) = -\frac{1}{2} \mathbf{y}^{\top} K_{\mathbf{y}}^{-1} \mathbf{y} - \frac{1}{2} \log |K_{\mathbf{y}}| - \frac{n}{2} \log 2\pi.$
Although computable, the log-marginal likelihood requires computing the log-determinant of $K_{\mathbf{y}}$ and solving a linear system involving $K_{\mathbf{y}}$. These calculations are usually carried out by factorizing the matrix using the Cholesky decomposition, giving $K_{\mathbf{y}} = L L^{\top}$, with $L$ lower triangular. The Cholesky algorithm requires $O(n^3)$ operations, but after that computing the terms of the marginal likelihood requires at most $O(n^2)$ operations (Rasmussen and Williams 2006).
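The Cholesky-based computation described above can be sketched in a few lines of numpy (the function name is ours; the factorization is the standard one):

```python
import numpy as np

def gp_log_marginal(y, K, noise_var):
    """log N(y | 0, K + noise_var * I), computed via the Cholesky factor."""
    n = y.shape[0]
    Ky = K + noise_var * np.eye(n)
    L = np.linalg.cholesky(Ky)                           # O(n^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # Ky^{-1} y, two O(n^2) triangular solves
    log_det = 2.0 * np.log(np.diag(L)).sum()             # log|Ky| from the diagonal of L
    return -0.5 * y @ alpha - 0.5 * log_det - 0.5 * n * np.log(2.0 * np.pi)
```

Reusing the factor `L` for both the quadratic form and the log-determinant is what keeps everything after the single $O(n^3)$ factorization at $O(n^2)$ cost.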
2.1.2 NonGaussian likelihood
In the case of non-Gaussian likelihoods, the likelihood and the GP prior are no longer conjugate. As a consequence, it is not possible to solve the integral needed to integrate out the latent variables,
$p(\mathbf{y} \mid \theta) = \int p(\mathbf{y} \mid \mathbf{f})\, p(\mathbf{f} \mid \theta)\, d\mathbf{f},$
and this requires some form of approximation. A notable example is GP probit classification, which is what we explore in detail in this paper, where the observations are assumed to be Bernoulli distributed with success probability given by:
$p(y_i = 1 \mid f_i) = \Phi(f_i)$ (3)
For GPs with non-Gaussian likelihoods, there have been several proposals on how to carry out approximations to integrate out the latent variables, or to avoid approximations altogether. The focus of this paper is on methods that do not introduce any bias in the calculation of expectations under the posterior over covariance parameters, so we will discuss these approaches in detail in the next sections.
2.2 Bayesian inference of covariance parameters
For simplicity of notation, we will denote the posterior distribution over covariance parameters by
$p(\theta \mid \mathbf{y}) = \frac{p(\mathbf{y} \mid \theta)\, p(\theta)}{p(\mathbf{y})}$ (4)
where $p(\theta)$ encodes any prior knowledge on the parameters $\theta$. Within the Bayesian framework, we are usually interested in calculating expectations of functions of $\theta$ with respect to the posterior distribution, i.e., $E[h(\theta)] = \int h(\theta)\, p(\theta \mid \mathbf{y})\, d\theta$. For instance, setting $h(\theta) = p(y_* \mid \mathbf{y}, \theta)$, we obtain the predictive distribution for the label $y_*$ associated with a new input vector $\mathbf{x}_*$.
The denominator $p(\mathbf{y})$ needed to normalize the posterior distribution is intractable, so it is not possible to characterize the posterior distribution analytically. Despite this complication, it is possible to resort to a Monte Carlo approximation to compute expectations under the posterior distribution of $\theta$:
$\hat{E}[h(\theta)] = \frac{1}{N} \sum_{i=1}^{N} h(\theta^{(i)})$ (5)
where $\theta^{(i)}$ denotes the $i$th of $N$ samples from $p(\theta \mid \mathbf{y})$. However, as it is usually not feasible to draw samples from $p(\theta \mid \mathbf{y})$ directly, MCMC methods are usually employed to generate samples from the posterior.
An alternative way to compute expectations, is by means of importance sampling, which takes the following form:
$E[h(\theta)] = \int h(\theta)\, \frac{p(\theta \mid \mathbf{y})}{q(\theta)}\, q(\theta)\, d\theta$ (6)
where $q(\theta)$ is the importance distribution. The corresponding Monte Carlo approximation is of the form:
$\hat{E}[h(\theta)] = \frac{\sum_{i=1}^{N} w_i\, h(\theta^{(i)})}{\sum_{i=1}^{N} w_i}, \qquad w_i = \frac{p(\theta^{(i)} \mid \mathbf{y})}{q(\theta^{(i)})}$ (7)
where now the samples $\theta^{(i)}$ are drawn from the importance distribution $q(\theta)$. The key to making this Monte Carlo estimator accurate is to choose $q(\theta)$ to be similar to the function that needs to be integrated. It is easy to verify that when $q(\theta)$ is proportional to the function that needs to be integrated, the variance of the importance sampling estimator is zero. Therefore, the success of importance sampling relies on constructing a tractable importance distribution $q(\theta)$ that well approximates $p(\theta \mid \mathbf{y})$. In the remainder of this paper we will discuss and employ methods that adaptively construct $q(\theta)$.
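A self-normalized importance sampling estimator of this kind fits in a few lines. The toy one-dimensional target and the deliberately over-dispersed Gaussian importance distribution below are illustrative choices of ours; the self-normalization is what allows the target to be known only up to a constant, exactly as with an unnormalized posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized target: a standard normal known only up to a constant.
def log_target(x):
    return -0.5 * x ** 2

# Importance distribution q = N(0, 2^2), deliberately wider than the target.
q_scale = 2.0
x = rng.normal(0.0, q_scale, size=200_000)
log_q = -0.5 * (x / q_scale) ** 2 - np.log(q_scale)  # log q up to a constant

# Self-normalized weights: the unknown normalizing constants cancel.
log_w = log_target(x) - log_q
w = np.exp(log_w - log_w.max())  # subtract the max for numerical stability
w /= w.sum()

est_mean = np.sum(w * x)         # estimate of E[x] under the target (true value 0)
est_second = np.sum(w * x ** 2)  # estimate of E[x^2] under the target (true value 1)
```

Choosing `q_scale` too small (narrower than the target) would produce occasional huge weights and a high-variance estimator, which is the failure mode the adaptive methods in the next section are designed to avoid.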
Both Monte Carlo approximations in equations (5) and (7) converge to the desired expectation, and in practice, they can estimate the desired integral to a given level of precision (Gelman and Rubin 1992a; Flegal et al. 2007). The experimental part of this work is devoted to the study of the convergence properties of the expectation with respect to the computational effort needed to carry out the Monte Carlo approximations in Equations (5) and (7).
2.3 PseudoMarginal MCMC for inference of covariance parameters
Difficulties in computing the expectation in Equation (5) arise when the likelihood is not Gaussian, thus leading to the impossibility of calculating the marginal likelihood exactly, which is necessary to employ standard MCMC algorithms to draw from the posterior $p(\theta \mid \mathbf{y})$. In cases where the marginal likelihood can be unbiasedly estimated, it is possible to resort to so-called Pseudo-Marginal MCMC approaches. Taking the Metropolis-Hastings algorithm as an example, it is possible to replace the exact calculation of the Hastings ratio
$z = \frac{p(\mathbf{y} \mid \theta')\, p(\theta')}{p(\mathbf{y} \mid \theta)\, p(\theta)} \cdot \frac{q(\theta \mid \theta')}{q(\theta' \mid \theta)}$
by an approximation where the marginal likelihood is only unbiasedly estimated,
$\tilde{z} = \frac{\tilde{p}(\mathbf{y} \mid \theta')\, p(\theta')}{\tilde{p}(\mathbf{y} \mid \theta)\, p(\theta)} \cdot \frac{q(\theta \mid \theta')}{q(\theta' \mid \theta)},$
where $\tilde{p}(\mathbf{y} \mid \theta)$ denotes such an unbiased estimate. Interestingly, the introduction of this approximation does not affect the properties of the MCMC approach, which still yields samples from the correct posterior $p(\theta \mid \mathbf{y})$. The effect of introducing this approximation is that the efficiency of the corresponding MCMC approach is reduced; this is due to the possibility that the algorithm accepts a proposal with a largely overestimated value of the marginal likelihood, making it difficult for any new proposals to be accepted.
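The pseudo-marginal mechanism can be illustrated on a toy target. In this sketch of ours, lognormal multiplicative noise with mean exactly one stands in for an unbiasedly estimated marginal likelihood; despite every density evaluation being noisy, the chain still targets the exact posterior.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unbiased but noisy evaluation of an unnormalized N(0, 1) target:
# lognormal noise with E[noise] = 1 mimics an unbiased marginal-likelihood estimate.
def noisy_target(theta, noise_sd=0.3):
    noise = np.exp(rng.normal(-0.5 * noise_sd ** 2, noise_sd))  # E[noise] = 1
    return np.exp(-0.5 * theta ** 2) * noise

theta, p_curr = 0.0, noisy_target(0.0)
chain = []
for _ in range(50_000):
    prop = theta + rng.normal(0.0, 1.0)
    p_prop = noisy_target(prop)
    # Key pseudo-marginal rule: the stored estimate for the current state is
    # reused, never recomputed; this is what keeps the invariant distribution exact.
    if rng.random() < p_prop / p_curr:
        theta, p_curr = prop, p_prop
    chain.append(theta)

chain = np.array(chain[5_000:])  # discard burn-in
```

The sticking behavior described in the text is visible here too: a lucky upward noise draw in `p_curr` makes subsequent proposals hard to accept, and larger `noise_sd` makes the effect worse.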
By inspecting the GP marginal likelihood
$p(\mathbf{y} \mid \theta) = \int p(\mathbf{y} \mid \mathbf{f})\, p(\mathbf{f} \mid \theta)\, d\mathbf{f}$ (8)
we observe that we can attempt to unbiasedly estimate this integral using importance sampling:
$\tilde{p}(\mathbf{y} \mid \theta) = \frac{1}{N_{\mathrm{imp}}} \sum_{i=1}^{N_{\mathrm{imp}}} \frac{p(\mathbf{y} \mid \mathbf{f}^{(i)})\, p(\mathbf{f}^{(i)} \mid \theta)}{q(\mathbf{f}^{(i)})}, \qquad \mathbf{f}^{(i)} \sim q(\mathbf{f})$ (9)
Here $N_{\mathrm{imp}}$ is the number of samples drawn from the importance density $q(\mathbf{f})$. The motivation for attempting this approximation is that in the literature there have been many efforts to construct accurate approximations to the posterior distribution over the latent variables $p(\mathbf{f} \mid \mathbf{y}, \theta)$. This suggests that we can hope that the resulting estimator for the marginal likelihood is accurate enough to introduce an acceptable amount of noise in the calculation of the Hastings ratio, making the resulting MCMC approach reasonably efficient. In this paper, we investigate approximations to the posterior over $\mathbf{f}$ obtained by the Laplace Approximation (LA) and Expectation Propagation (EP) algorithms (Rasmussen and Williams 2006).
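A one-dimensional toy of ours makes the estimator in Equation (9) concrete. With a Gaussian likelihood the exact marginal is available to check against, and using the exact posterior as the importance density (the role played by the Laplace or EP Gaussian in classification) makes every importance ratio equal to the marginal itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-d analogue: prior f ~ N(0, k), likelihood y | f ~ N(f, s2), so the
# exact marginal p(y) = N(y | 0, k + s2) is available for comparison.
k, s2, y = 1.5, 0.5, 0.7

def log_lik(f):
    return -0.5 * (y - f) ** 2 / s2 - 0.5 * np.log(2 * np.pi * s2)

def log_prior(f):
    return -0.5 * f ** 2 / k - 0.5 * np.log(2 * np.pi * k)

# Importance density over the latent: here the exact Gaussian posterior;
# in GP classification this role is played by the LA or EP Gaussian.
post_var = 1.0 / (1.0 / k + 1.0 / s2)
post_mean = post_var * y / s2
f = rng.normal(post_mean, np.sqrt(post_var), size=100_000)
log_q = -0.5 * (f - post_mean) ** 2 / post_var - 0.5 * np.log(2 * np.pi * post_var)

# Unbiased estimator: average of p(y|f) p(f) / q(f) over draws from q.
p_hat = np.exp(log_lik(f) + log_prior(f) - log_q).mean()

p_exact = np.exp(-0.5 * y ** 2 / (k + s2)) / np.sqrt(2 * np.pi * (k + s2))
```

Because $q$ here equals the exact posterior, every ratio equals $p(y)$ and the estimator has zero variance; with an approximate $q$, the ratios fluctuate around $p(y)$ and the estimator remains unbiased but noisy, which is precisely the noise that enters the Hastings ratio.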
3 Adaptive Multiple Importance Sampling for Gaussian Processes
Inefficiencies arising from the use of MCMC methods to sample from the posterior distribution over covariance parameters are due to the fact that several proposals are rejected. In order to mitigate this issue, some adaptation mechanisms for the proposals can be used based on previous MCMC samples, but the chain resulting from the adaptation is no longer Markovian. As a result, elaborate ergodicity results are needed to establish convergence to the true posterior distribution (Haario et al. 1999; 2001; Andrieu and Robert 2001).
In response to this, Cappe et al. (2004) proposed a universal adaptive sampling scheme called population Monte Carlo (PMC), where the difference from Sequential Monte Carlo (SMC) (Doucet et al. 2001) is that the target distribution becomes static. This method is reported to have better adaptivity than MCMC due to the fact that the use of importance sampling makes ergodicity not an issue. At each iteration of PMC, sampling importance resampling (Rubin 1988) is used to generate samples that are assumed to be marginally distributed from the target distribution; hence the approach is unbiased and can be stopped at any time. Moreover, the importance distribution can be adapted using part (generated at each iteration) or all of the importance sample sequence. Douc et al. (2007a; b) also introduced updating mechanisms for the weights of the mixture in the so-called D-kernel PMC, which leads to a reduction either in the Kullback divergence between the mixture and the target distribution or in the asymptotic variance for a function of interest. An earlier adaptive importance sampling strategy is illustrated in Oh and Berger (1992).
Cornuet et al. (2012) proposed a new perspective of adaptive importance sampling (AIS), called adaptive multiple importance sampling (AMIS), where the difference with the aforementioned PMC methods is that the importance weights of all simulations, produced previously as well as currently, are reevaluated at each iteration. This method follows the “deterministic multiple mixture” sampling scheme of Owen and Zhou (2000). The corresponding importance weight takes the form
$\omega_i^t = \frac{\pi(\theta_i^t)}{\frac{1}{\Omega_T} \sum_{l=0}^{T} N_l\, q(\theta_i^t;\, \hat{\xi}_l)}$ (10)
where $T$ is the total number of iterations, $\pi$ denotes the target distribution up to a constant, i.e., $\pi(\theta) \propto p(\mathbf{y} \mid \theta)\, p(\theta)$, $q(\cdot\,;\, \hat{\xi}_t)$ denotes the importance density at iteration $t$ with sequentially updated parameters $\hat{\xi}_t$, $\theta_i^t$ are samples drawn from $q(\cdot\,;\, \hat{\xi}_t)$ with $i = 1, \ldots, N_t$, and $\Omega_T = \sum_{l=0}^{T} N_l$.
The fixed denominator in (10) gives the name "deterministic multiple mixture". The motivation is that this construction can achieve an upper bound on the asymptotic variance of the estimator without rejecting any simulations (Owen and Zhou 2000). In AMIS, the parameters of a parametric importance function are sequentially updated using the entire sequence of weighted importance samples, based on efficiency criteria such as moment matching, minimum Kullback divergence with respect to the target, or minimum variance of the weights (see, e.g., Ortiz and Kaelbling (2000) for stochastic gradient-based optimization of these efficiency criteria). This leads to a sequence of importance distributions $q(\cdot\,;\,\hat{\xi}_0), \ldots, q(\cdot\,;\,\hat{\xi}_T)$. Algorithm 1 gives the pseudocode of the generic AMIS algorithm.
Algorithm 1 Generic AMIS as analyzed by Cornuet et al. (2012)

At iteration $t = 0$,

Generate $N_0$ independent samples $\theta_1^0, \ldots, \theta_{N_0}^0$ from the initial importance density $q(\cdot\,;\,\hat{\xi}_0)$;

For $i = 1, \ldots, N_0$, compute $\omega_i^0 = \pi(\theta_i^0) / q(\theta_i^0;\, \hat{\xi}_0)$;

Estimate $\hat{\xi}_1$ using the weighted samples $(\theta_1^0, \omega_1^0)$, …, $(\theta_{N_0}^0, \omega_{N_0}^0)$ and a well-chosen efficiency criterion for estimation.

At iteration $t = 1, \ldots, T$,

Generate $N_t$ independent samples $\theta_1^t, \ldots, \theta_{N_t}^t$ from $q(\cdot\,;\,\hat{\xi}_t)$;

For $i = 1, \ldots, N_t$, compute the multiple mixture at $\theta_i^t$,
$\delta_i^t = \frac{1}{\Omega_t} \sum_{l=0}^{t} N_l\, q(\theta_i^t;\, \hat{\xi}_l),$
and derive the importance weights of $\theta_i^t$, $\omega_i^t = \pi(\theta_i^t) / \delta_i^t$;

For $l = 0, \ldots, t-1$ and $i = 1, \ldots, N_l$, update the past importance weights as
$\omega_i^l = \pi(\theta_i^l) \Big/ \frac{1}{\Omega_t} \sum_{k=0}^{t} N_k\, q(\theta_i^l;\, \hat{\xi}_k);$

Estimate $\hat{\xi}_{t+1}$ using all the weighted samples and the same efficiency criterion for estimation.
In this paper, we use a Gaussian importance density with mean $\mu_t$ and covariance $\Sigma_t$, i.e., $q(\cdot\,;\,\hat{\xi}_t) = \mathcal{N}(\mu_t, \Sigma_t)$. We also choose moment matching as the efficiency criterion, estimating $\hat{\xi}_{t+1} = (\mu_{t+1}, \Sigma_{t+1})$ using the self-normalized AMIS estimator:
$\mu_{t+1} = \frac{\sum_{l=0}^{t} \sum_{i=1}^{N_l} \omega_i^l\, \theta_i^l}{\sum_{l=0}^{t} \sum_{i=1}^{N_l} \omega_i^l}, \qquad \Sigma_{t+1} = \frac{\sum_{l=0}^{t} \sum_{i=1}^{N_l} \omega_i^l\, (\theta_i^l - \mu_{t+1})(\theta_i^l - \mu_{t+1})^{\top}}{\sum_{l=0}^{t} \sum_{i=1}^{N_l} \omega_i^l}.$
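The deterministic-multiple-mixture reweighting and moment-matching update can be sketched on a toy one-dimensional target. This is an illustrative sketch of ours, assuming equal sample sizes $N_t = N$ at every iteration (so the mixture denominator is a plain average over past proposals), a univariate Gaussian proposal, and a deliberately mis-placed initialization.

```python
import numpy as np

rng = np.random.default_rng(2)

# Unnormalized 1-d target: N(3, 1) up to a constant.
def log_pi(x):
    return -0.5 * (x - 3.0) ** 2

T, N = 10, 500
mus, sigmas = [0.0], [5.0]  # initial proposal deliberately mis-placed and wide
samples = []

for t in range(T):
    x_t = rng.normal(mus[-1], sigmas[-1], size=N)
    samples.append(x_t)
    all_x = np.concatenate(samples)
    # Deterministic multiple mixture: past and current samples are all
    # re-weighted against the mixture of every proposal used so far.
    mix = np.mean([np.exp(-0.5 * ((all_x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
                   for m, s in zip(mus, sigmas)], axis=0)
    w = np.exp(log_pi(all_x)) / mix
    w /= w.sum()
    # Moment matching on the whole weighted sample updates the proposal.
    mu = np.sum(w * all_x)
    var = np.sum(w * (all_x - mu) ** 2)
    mus.append(mu)
    sigmas.append(np.sqrt(var))

post_mean, post_std = mus[-1], sigmas[-1]  # approach the target's moments (3, 1)
```

Within a few iterations the proposal migrates from $\mathcal{N}(0, 5^2)$ to roughly the target's own moments, and no sample is ever discarded, in contrast to the rejections incurred by MCMC.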
Despite the efficiency brought by AMIS compared with other AIS techniques, proving convergence of this algorithm is not straightforward. The work in Marin et al. (2014) proposed a modified version of AMIS, called MAMIS, aiming at obtaining a variant of AMIS where convergence can be more easily established. The difference is that the new parameters $\hat{\xi}_{t+1}$ are estimated based only on the samples produced at iteration $t$, i.e., $\theta_1^t, \ldots, \theta_{N_t}^t$, with classical weights $\omega_i^t = \pi(\theta_i^t) / q(\theta_i^t;\, \hat{\xi}_t)$. Then the weights of all simulations are updated according to (10) to give the final output. The sample size $N_t$ is suggested to grow at each iteration so as to improve the accuracy of $\hat{\xi}_t$. MAMIS effectively resolves any convergence issues of AMIS, but convergence is slower due to the fact that fewer samples are used to update the importance distribution.
3.1 PseudoMarginal AMIS
The above AMIS/MAMIS algorithms are designed for the case in which the marginal likelihood is analytically computable, such as GP regression. In this paper, we propose using AMIS to sample from the posterior $p(\theta \mid \mathbf{y})$ when the marginal likelihood is analytically intractable but can be unbiasedly estimated. In this case, we can attempt to employ AMIS, replacing the exact calculation of the marginal likelihood by an unbiased estimate and thereby obtaining an unbiased estimate of the posterior up to a normalizing constant:
$\tilde{\pi}(\theta) = \tilde{p}(\mathbf{y} \mid \theta)\, p(\theta)$ (11)
We refer to this as Pseudo-Marginal AMIS (PM-AMIS), inspired by the name Pseudo-Marginal MCMC given to the class of MCMC algorithms that replace exact calculations of the likelihood by unbiased estimates (Andrieu and Roberts 2009). The pseudocode of PM-AMIS is similar to that of AMIS described in Algorithm 1, except that the target distribution up to a constant $\pi$ is replaced by the above unbiased estimate $\tilde{\pi}$.
It is known that, despite the fact that calculations are approximate, Pseudo-Marginal MCMC methods yield samples from the correct posterior distribution over covariance parameters, so a natural question is whether the same arguments hold for our proposal. In the remainder of this section, we provide an analysis of the properties of Pseudo-Marginal AMIS, discussing the conditions under which it yields unbiased expectations with respect to the posterior distribution over covariance parameters. As in Tran et al. (2014) and Pitt et al. (2012), we introduce a random variable $z$ whose distribution (denoted by $p(z \mid \theta)$ herein) is determined by the randomness occurring when carrying out the unbiased estimation of the marginal likelihood. Define:
$\tilde{p}(\mathbf{y} \mid \theta, z) := e^{z}\, p(\mathbf{y} \mid \theta)$ (12)
i.e.,
$z = \log \tilde{p}(\mathbf{y} \mid \theta, z) - \log p(\mathbf{y} \mid \theta).$ (13)
Due to the unbiasedness property ($E_{p(z \mid \theta)}[\tilde{p}(\mathbf{y} \mid \theta, z)] = p(\mathbf{y} \mid \theta)$), we readily verify that $\int e^{z}\, p(z \mid \theta)\, dz = 1$. For the sake of clarity, it is useful to define the unnormalized joint density of $\theta$ and $z$ as:
$\tilde{\pi}(\theta, z) = e^{z}\, p(\mathbf{y} \mid \theta)\, p(\theta)\, p(z \mid \theta)$ (14)
with a corresponding normalized version
$\pi(\theta, z) = \frac{e^{z}\, p(\mathbf{y} \mid \theta)\, p(\theta)\, p(z \mid \theta)}{p(\mathbf{y})}$ (15)
Marginalizing this joint density with respect to $z$,
$\int \pi(\theta, z)\, dz = \frac{p(\mathbf{y} \mid \theta)\, p(\theta)}{p(\mathbf{y})},$ (16)
yields the target posterior of interest defined in Equation (4).
Recall that our objective is analyzing expectations under the posterior over the parameters of some function $h(\theta)$,
$E[h(\theta)] = \int h(\theta)\, p(\theta \mid \mathbf{y})\, d\theta.$ (17)
We begin our analysis by substituting Equation (16) into Equation (17), obtaining
$E[h(\theta)] = \int\!\!\int h(\theta)\, \pi(\theta, z)\, dz\, d\theta.$ (18)
In PM-AMIS, let $N_t$ denote the number of samples generated at each iteration $t$, and $q(\cdot\,;\,\hat{\xi}_t)$ denote the importance density for $\theta$ at iteration $t$, for $t = 0, \ldots, T$. We also define
$q_t(\theta, z) = q(\theta;\, \hat{\xi}_t)\, p(z \mid \theta)$ (19)
as the joint importance density at iteration $t$, and $(\theta_i^t, z_i^t)$ as samples drawn from $q_t$ with $i = 1, \ldots, N_t$.
Since in a practical setting $\tilde{\pi}$ is the only function that we can evaluate, the expectation defined in Equation (18) is estimated by the self-normalized PM-AMIS estimator:
$\hat{E}[h(\theta)] = \frac{\sum_{t=0}^{T} \sum_{i=1}^{N_t} \omega(\theta_i^t, z_i^t)\, h(\theta_i^t)}{\sum_{t=0}^{T} \sum_{i=1}^{N_t} \omega(\theta_i^t, z_i^t)}$ (20)
where the weights of this estimator are computed as
$\omega(\theta_i^t, z_i^t) = \frac{\tilde{\pi}(\theta_i^t, z_i^t)}{\frac{1}{\Omega_T} \sum_{l=0}^{T} N_l\, q_l(\theta_i^t, z_i^t)}.$ (21)
Expanding the terms in the computation of the weights, namely substituting Equations (14) and (19) into Equation (21), we have
$\omega(\theta_i^t, z_i^t) = \frac{e^{z_i^t}\, p(\mathbf{y} \mid \theta_i^t)\, p(\theta_i^t)\, p(z_i^t \mid \theta_i^t)}{\frac{1}{\Omega_T} \sum_{l=0}^{T} N_l\, q(\theta_i^t;\, \hat{\xi}_l)\, p(z_i^t \mid \theta_i^t)} = \frac{e^{z_i^t}\, p(\mathbf{y} \mid \theta_i^t)\, p(\theta_i^t)}{\frac{1}{\Omega_T} \sum_{l=0}^{T} N_l\, q(\theta_i^t;\, \hat{\xi}_l)},$ (22)
which can be rewritten in terms of the unbiased estimate of the marginal likelihood as
$\omega(\theta_i^t, z_i^t) = \frac{\tilde{p}(\mathbf{y} \mid \theta_i^t, z_i^t)\, p(\theta_i^t)}{\frac{1}{\Omega_T} \sum_{l=0}^{T} N_l\, q(\theta_i^t;\, \hat{\xi}_l)}.$ (23)
Equation (23) shows how the importance weights can be computed from the unbiased estimator of the marginal likelihood.
We now appeal to Lemma 1 in Cornuet et al. (2012), which gives the conditions under which the self-normalized estimator of AMIS converges to Equation (17). Following those conditions, when $T$ and $\hat{\xi}_1, \ldots, \hat{\xi}_T$ are fixed, and when $N_0$ goes to infinity, the estimator based on the weights in Equation (21) satisfies
$\frac{1}{\Omega_T} \sum_{t=0}^{T} \sum_{i=1}^{N_t} \omega(\theta_i^t, z_i^t)\, h(\theta_i^t) \;\longrightarrow\; \int\!\!\int h(\theta)\, \tilde{\pi}(\theta, z)\, dz\, d\theta = p(\mathbf{y})\, E[h(\theta)].$ (24)
Then we have $\hat{E}[h(\theta)] \longrightarrow E[h(\theta)]$,
where the normalizing constant $p(\mathbf{y})$ is estimated by $\frac{1}{\Omega_T} \sum_{t=0}^{T} \sum_{i=1}^{N_t} \omega(\theta_i^t, z_i^t)$.
Therefore, under the conditions that $T$ and $\hat{\xi}_1, \ldots, \hat{\xi}_T$ are fixed and that $N_0$ goes to infinity, which are the same conditions mentioned in Lemma 1 in Cornuet et al. (2012), the estimator of Equation (20) converges to $E[h(\theta)]$ without introducing any bias. As noted in Cornuet et al. (2012), these conditions might prove restrictive in practice; however, they provide solid grounds on which convergence can be established for AMIS. Furthermore, we note that, in a practical setting, when in doubt as to whether convergence might be an issue, it is always possible to switch to the modified version of AMIS (Marin et al. 2014) during execution.
4 Experiments
4.1 Competing sampling methods
In this section, we present the state-of-the-art MCMC and AIS sampling methods considered in this work. The aim is to find out whether adaptive importance sampling (AMIS/MAMIS) can improve the speed of convergence, with respect to computational complexity, compared to MCMC approaches (MH (Metropolis et al. 1953; Hastings 1970), HMC (Duane et al. 1987; Neal 1993), NUTS and NUTS-DA (Hoffman and Gelman 2012), and SS (Slice Sampling) (Neal 2003b)). The competing sampling algorithms considered in this work are given in Table 1.
Sampler  Tuning parameters 

Metropolis-Hastings (MH)  Covariance matrix of the proposal 
Hybrid Monte Carlo (HMC)  Mass matrix, leapfrog stepsize, number of leapfrog steps 
No-U-Turn Sampler (NUTS)  Mass matrix, leapfrog stepsize 
NUTS with Dual Averaging (NUTS-DA)  Mass matrix 
Slice Sampling (SS)  Width of the initial bracket 
4.2 Datasets
The sampling methods considered in this work are tested on six UCI datasets (Asuncion and Newman 2007). The Concrete, Housing and Parkinsons datasets are used for GP regression, whereas the Glass, Thyroid and Breast datasets are used for GP classification. The number of data points and the dimension of the features for each data point are given in Table 2. For the original Parkinsons dataset we randomly sampled a fixed number of records for each patient, resulting in 168 data points in total.
Datasets for regression  Datasets for classification  
Concrete  Housing  Parkinsons  Glass  Thyroid  Breast  
n  1030  506  168  214  215  682 
d  8  13  20  9  5  9 
4.3 Experimental setup
4.3.1 Settings for GP regression
We compare three different covariances for the proposals of the MH algorithm. The first is proportional to the identity matrix $I$. The second and third covariances are proportional to the inverse of the negative Hessian (denoted by $H$) evaluated at the mode (denoted by $\hat{\theta}$); one uses the full matrix $H^{-1}$, whereas the other uses its diagonal only, namely $\mathrm{diag}(H^{-1})$. The mode $\hat{\theta}$ is found by maximum likelihood optimization using the "BFGS" method.
Thus the proposals that we compare in this work take the form $\mathcal{N}(\theta, s\, I)$, $\mathcal{N}(\theta, s\, \mathrm{diag}(H^{-1}))$, and $\mathcal{N}(\theta, s\, H^{-1})$, where $s$ is a tuning parameter. We tune $s$ in pilot runs until we get the desired acceptance rate (around 0.234), as suggested by Roberts et al. (1997).
The approximate distribution $\mathcal{N}(\hat{\theta}, H^{-1})$ is used as the initial importance density for AMIS/MAMIS. This approximation is also used to initialize several independent sequences of samples from the other samplers considered in this work (listed in Table 1). In this way, valid summary inference from multiple independent sequences can be obtained (Gelman and Rubin 1992a). For AMIS/MAMIS, we explored two kinds of update of the covariance of the importance density: one updates the full covariance, whereas the other updates only the diagonal of the covariance. The first two rows of Table 3 show the experimental settings for AMIS/MAMIS.
Motivated by the fact that using knowledge of the scales and correlations of the position variables can improve the performance of HMC (Neal 2011), we also chose three kinds of mass matrix for HMC, namely the identity matrix, the inverse of the approximate covariance, and the inverse of the diagonal of the approximate covariance. We set a cap on the number of leapfrog steps, and then tune the stepsize until a suggested acceptance rate (around 0.651) is reached (Beskos et al. 2013; Neal 2011). The three forms of mass matrix apply to NUTS and NUTS-DA as well; a full description of the pseudocode of these algorithms can be found in Algorithms 3 and 6, respectively, in Hoffman and Gelman (2012). NUTS requires the tuning of a stepsize, which we fixed after a few trials. Although tuning the number of leapfrog steps and the stepsize is not an issue in NUTS-DA, the parameters of the dual averaging scheme have to be tuned by hand to produce reasonable results. After trying a few settings for each parameter, a single configuration was used in both the RBF and ARD kernel cases.
The slice sampling algorithm adopted in this paper makes componentwise updates of the parameters, where a new sample is drawn according to the "stepping out" and "shrinkage" procedures described in Neal (2003b). In our implementation, the estimate of the typical size of a slice was set to a fixed value.
  RBF kernel  ARD kernel  
  $T$  $N_t$  $T$  $N_t$  
AMIS  1120  25  280  100 
MAMIS  46  5  
PM-AMIS  60  400  60  400 
4.3.2 Settings for GP classification
We compared only PM-AMIS and Pseudo-Marginal MH (PM-MH) in the case of GP classification. Since the likelihood is analytically intractable and thus only unbiasedly estimated, the critical properties of reversibility and preservation of volume of HMC, NUTS, and NUTS-DA are no longer satisfied. Therefore, these algorithms are not considered in the GP classification case. It can be proved that slice sampling with the noisy estimate is still valid, but the adaptation of the slice estimate, such as the "stepping out" and "doubling" procedures in Neal (2003b), is not proper for this use case, and naively running standard SS with the noisy estimate worked very poorly (Murray and Graham 2015). Consequently, SS is also not compared in this case.
Both the EP and LA approximations are used as the importance densities to estimate the marginal likelihood. The last row of Table 3 shows the settings of PM-AMIS in both the RBF and ARD cases, except for the Breast dataset in the ARD case using the LA approximation, where the total number of iterations is reduced for the sake of presentation. The initial importance density is obtained by the same optimization method as described in Section 4.3.1, except that the gradient required to perform the optimization cannot be computed analytically but is estimated from the EP or LA approximations. We update the full covariance of the importance density during the adaptation process. The proposal of PM-MH also takes the form $\mathcal{N}(\theta, s\, H^{-1})$, where $H$ is the Hessian matrix obtained again from the approximate marginal likelihood given by the EP or LA algorithms. We tuned $s$ until we reached the recommended acceptance rate of around 0.234 and used this tuned proposal to generate new samples.
4.4 Convergence analysis
Since the classic $\hat{R}$ score (Gelman and Rubin 1992a) is designed for MCMC convergence analysis and is not suitable for importance sampling, convergence analysis here is performed based on the IQR (interquartile range) of the estimated expectation of the norm of the parameters ($E[\|\theta\|]$) over several repetitions, plotted against the number of $O(n^3)$ operations. This is reported to be a more reliable measure of complexity than running time, as many other factors, such as implementation details that do not relate directly to the actual computational complexity of the algorithms, can affect the running time (Filippone et al. 2013). The IQR is computed over a number of independent replicates for GP regression and over a number of repetitions for GP classification.
For AMIS/MAMIS/SS/MH, the computational cost lies in the evaluation of the target at each value of $\theta$, where one $O(n^3)$ operation is required to perform the Cholesky decomposition of the covariance matrix $K_{\mathbf{y}}$. For HMC/NUTS/NUTS-DA, where computing the gradient is necessary, two extra $O(n^3)$ operations are needed for the computation of the inverse of the covariance matrix.
For PM-AMIS/PM-MH, the computational cost largely comes from the EP or LA approximation of the posterior over the latent variables, which is needed to compute the unbiased estimate $\tilde{p}(\mathbf{y} \mid \theta)$. Both the EP and LA approximations take two $O(n^3)$ operations to perform Cholesky decompositions: one for the Cholesky decomposition of the covariance matrix of the GP prior, the other for the Cholesky decomposition of the covariance of the approximating Gaussian. Each iteration of EP and LA requires three and one $O(n^3)$ operations, respectively. In the LA approximation, two extra $O(n^3)$ operations are needed to compute the covariance of the Gaussian approximation.
4.5 Results
4.5.1 Notation of samplers
Table 4 shows the notation for the different samplers considered in this paper.
AMIS/MAMIS: AMIS/MAMIS for GP regression, where the full covariance matrix of the proposal distribution is updated at each iteration.
AMISD/MAMISD: AMIS/MAMIS for GP regression, where only the diagonal of the covariance matrix of the proposal distribution is updated at each iteration.
MHI: MH for GP regression, where the covariance of the starting proposal distribution for tuning is the identity matrix.
MHD: MH for GP regression, where the covariance of the starting proposal distribution for tuning is the diagonal of the approximate covariance from the optimization.
MHH: MH for GP regression, where the covariance of the starting proposal distribution for tuning is the approximate covariance from the optimization.
HMCI/NUTSI/NUTSDAI: HMC family for GP regression, where the mass matrix is the identity matrix.
HMCD/NUTSD/NUTSDAD: HMC family for GP regression, where the mass matrix is the inverse of the diagonal of the approximate covariance from the optimization.
HMCH/NUTSH/NUTSDAH: HMC family for GP regression, where the mass matrix is the inverse of the approximate covariance from the optimization.
PMAMIS: AMIS for GP classification, where the full covariance matrix of the proposal distribution is updated at each iteration.
PMMH: MH for GP classification, where the covariance of the starting proposal distribution for tuning is the approximate covariance from the optimization.
4.5.2 Convergence of samplers for GP regression
This section compares the convergence of the samplers for GP regression considered in this paper.
Detailed convergence results for the AMIS family (AMIS/MAMIS), the MH family (MHI/MHD/MHH) and the HMC family (standard HMC, NUTS, NUTSDA) can be found in Appendices A and B. Fig. 1 shows the final result of AMIS, the best of MAMIS, the best of the MH family, the best of the HMC family, and SS for the three datasets in both the RBF and ARD kernel cases. Interestingly, AMIS/MAMIS performs best among all methods in terms of convergence speed in the RBF kernel case. In the ARD kernel case, AMIS also converges much faster than the other approaches. However, our experiments show that in the ARD kernel case, although MAMIS converges faster than the other approaches on the Concrete dataset, it converges slowly on the Housing and Parkinsons datasets, which is probably due to the higher dimensionality compared to the previous cases.
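To make the adaptive scheme concrete, the following is a stripped-down sketch of AMIS in the spirit of Cornuet et al. (2012): at each iteration, draw from the current Gaussian proposal, reweight all past draws with the deterministic mixture of all past proposals, and refit the proposal's full mean and covariance by weighted moment matching (the variant labelled AMIS above). This is a toy illustration on a generic target; sample sizes, the jitter term, and all names are our own assumptions, not the experimental configuration.

```python
import numpy as np

def gauss_logpdf(x, mean, cov):
    # Log-density of a multivariate Gaussian.
    d = len(mean)
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(cov, diff))

def amis(log_target, mean, cov, n_iter=10, n_per_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    samples, proposals = [], []
    for _ in range(n_iter):
        proposals.append((mean.copy(), cov.copy()))
        L = np.linalg.cholesky(cov)
        samples += [mean + L @ rng.standard_normal(len(mean))
                    for _ in range(n_per_iter)]
        X = np.array(samples)
        # Deterministic-mixture denominator over all proposals used so far.
        logq = np.array([[gauss_logpdf(x, m, c) for m, c in proposals]
                         for x in X])
        mmax = logq.max(axis=1, keepdims=True)
        logmix = mmax.ravel() + np.log(np.exp(logq - mmax).mean(axis=1))
        logw = np.array([log_target(x) for x in X]) - logmix
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Moment-matching update of the full proposal mean and covariance.
        mean = w @ X
        diff = X - mean
        cov = diff.T @ (diff * w[:, None]) + 1e-8 * np.eye(len(mean))
    return mean, cov, X, w
```

Because every past sample is reweighted against the mixture of every past proposal, early poorly placed draws are automatically down-weighted once the proposal has adapted, which is the mechanism behind the fast convergence observed for AMIS.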
In cases where MAMIS converges slowly, we can exploit the fact that AMIS converges faster than MAMIS by running AMIS for a fixed number of iterations and then switching to MAMIS. In this way, we obtain the fast convergence of the adaptive scheme while ensuring that the scheme converges without issues. In the experiments, we tested this AMISMAMIS combination in the cases where MAMIS converges slowly. We treated the samples from AMIS as tuning cost for MAMIS, used to obtain an accurate initial importance density, as shown in the bottom-right of Fig. 7 and Fig. 8, with the end of tuning (EOT) indicated by the vertical dotted line. Three settings (Table 5) of AMISMAMIS were tested for the Parkinsons dataset.
Setting         for MAMIS   tuning samples for MAMIS   tuning cost
AMISMAMIS                   13000                      4333
AMISMAMIS’                  13000                      4333
AMISMAMIS”                  26000                      8667
For the Housing dataset, we tested only the AMISMAMIS setting from Table 5. The results for the Housing and Parkinsons datasets in the ARD kernel case demonstrate the convergence of AMISMAMIS. In particular, AMISMAMIS and AMISMAMIS” compete well with the other MCMC approaches in terms of convergence for the Housing dataset and the Parkinsons dataset, respectively. As shown in the bottom-right of Fig. 8, the superior performance of AMISMAMIS” on the Parkinsons dataset suggests that for higher dimensional problems, a more accurate initialization and a larger sample size at each iteration of MAMIS are necessary to achieve faster convergence.
Another attempt we made in this paper to improve the convergence speed of the adaptive importance sampling schemes is to regularize the estimation of the parameters of the importance distribution, as illustrated in Šmídl and Hofman (2014). The regularization stems from the use of an informative prior on the parameters of the importance distribution of MAMIS, treating the update of these parameters in a Bayesian fashion (Kulhavý 1996). This construction makes it possible to avoid situations where the importance distribution degenerates to low rank because a few importance weights dominate all the others. In this work, we use an informative prior based on a Gaussian approximation to the posterior over covariance parameters. We denote this method by MAMISP; in the ARD kernel case it was tested only on the Housing dataset. The result indicates that even though MAMISP improves on MAMIS, its convergence is slower than that of AMISMAMIS (bottom-right of Fig. 7).
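The regularized update can be sketched as a shrinkage of the weighted sample moments toward the informative prior, so that a handful of dominating weights cannot collapse the covariance to low rank. The conjugate-style combination below and the `prior_strength` constant are our own illustrative assumptions, not the exact construction of Šmídl and Hofman (2014).

```python
import numpy as np

def regularized_moment_update(X, w, prior_mean, prior_cov, prior_strength=50.0):
    """Shrinkage update of a Gaussian importance density's parameters.
    The weighted moments of the samples X (rows) with weights w are blended
    with an informative prior (e.g. a Gaussian approximation to the posterior
    over covariance parameters); prior_strength acts like a prior sample
    size, and the blend weight is driven by the effective sample size."""
    w = w / w.sum()
    ess = 1.0 / np.sum(w**2)                      # effective sample size
    sample_mean = w @ X
    diff = X - sample_mean
    sample_cov = diff.T @ (diff * w[:, None])
    a = ess / (ess + prior_strength)              # data weight in [0, 1)
    mean = a * sample_mean + (1 - a) * prior_mean
    cov = a * sample_cov + (1 - a) * prior_cov
    return mean, cov
```

When the weights degenerate (ESS near one), the update falls back almost entirely on the prior covariance, keeping the importance density full rank; with well-spread weights it approaches the plain moment-matching update.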
4.5.3 Convergence of samplers for GP classification
The comparison of convergence of samplers for GP classification (PMAMIS and PMMH) is presented in this section.
Fig. 2 shows the results of PMAMIS and PMMH using the EP and LA approximations with Nimp=64, where Nimp denotes the number of importance samples of the latent variables used to estimate the marginal likelihood. The results indicate that PMAMIS is competitive with PMMH in terms of convergence speed in all the EP approximation cases and in most of the LA approximation cases. The results also suggest that PMAMIS/PMMH converge faster with the EP approximation than with the LA approximation in most cases, probably because EP yields a more accurate approximation than LA, as reported in Kuss and Rasmussen (2005) and Nickisch and Rasmussen (2008). We also tested the performance of PMAMIS and PMMH with Nimp=1, the results of which are shown in Appendix C. As seen from the figures, both PMAMIS and PMMH with the higher number of importance samples converge much faster than those with the lower number, as expected.
5 Conclusions
In this paper we proposed the use of adaptive importance sampling techniques to compute expectations under the posterior distribution of covariance parameters in Gaussian processes. The motivation for our proposal stems from a number of observations related to the complexity of calculating the marginal likelihood. In GPs with a Gaussian likelihood, calculating the marginal likelihood and its gradient with respect to covariance parameters is expensive, and standard MCMC algorithms reject proposals, leading to wasted computation. In GPs with a non-Gaussian likelihood, pseudo-marginal MCMC approaches bypass the need to compute the marginal likelihood, but may suffer from inefficiencies: when a proposal is accepted and the estimated marginal likelihood is largely overestimated, it becomes difficult for the chain to accept any other proposal. A further motivation behind our proposal is that importance sampling-based algorithms are generally easy to implement and tune, and can be massively parallelized.
The extensive set of results reported in this paper supports our intuition that importance sampling-based inference of covariance parameters is competitive with MCMC algorithms. In particular, the results indicate that it is possible to achieve convergence of expectations under the posterior distribution of covariance parameters faster than with MCMC methods in a wide range of scenarios. Even in the case of around twenty parameters, where importance sampling-based methods start to degrade in performance, our proposal remains competitive with MCMC approaches.
References
 Andrieu and Robert (2001) Andrieu, C., Robert, C. P., Controlled MCMC for optimal sampling, Bernoulli 9, 395–422 (2001)
 Andrieu and Roberts (2009) Andrieu, C., Roberts, G. O., The pseudomarginal approach for efficient Monte Carlo computations, The Annals of Statistics 37, 697–725 (2009)
 Asuncion and Newman (2007) Asuncion, A., Newman, D. J., UCI Machine Learning Repository (2007)
 Beskos et al. (2013) Beskos, A., Pillai, N., Roberts, G. O., SanzSerna, J. M., Stuart, A. M., Optimal tuning of hybrid Monte Carlo algorithm, Bernoulli 19, 1501–1534 (2013)
 Bishop (2007) Bishop, C. M., Pattern Recognition and Machine Learning (Information Science and Statistics), 1st Edition. Springer (2007)
 Cappe et al. (2004) Cappe, O., Guillin, A., Marin, J. M., Robert, C. P., Population Monte Carlo, Journal of Computational and Graphical Statistics 13, 907–929 (2004)
 Cornuet et al. (2012) Cornuet, J.M., Marin, J.M., Mira, A., Robert, C. P., Adaptive multiple importance sampling, Scandinavian Journal of Statistics 39, 798–812 (2012)
 Douc et al. (2007a) Douc, R., Guillin, A., Marin, J.M., Robert, C., Convergence of adaptive mixtures of importance sampling schemes, Ann. Statist. 35, 420–448 (2007a)
 Douc et al. (2007b) Douc, R., Guillin, A., Marin, J.M., Robert, C., Minimum variance importance sampling via population Monte Carlo, ESAIM: Probab. Stat. 11, 427–447 (2007b)
 Doucet et al. (2001) Doucet, A., de Freitas, N., Gordon, N., Sequential Monte Carlo Methods in Practice. SpringerVerlag, New York (2001)
 Duane et al. (1987) Duane, S., Kennedy, A. D., Pendleton, B. J., Roweth, D., Hybrid Monte Carlo, Physics Letters B 195 (2), 216–222 (1987)
 Filippone (2014) Filippone, M., Bayesian inference for Gaussian process classifiers with annealing and pseudomarginal MCMC, In: 22nd International Conference on Pattern Recognition, ICPR 2014, Stockholm, Sweden, August 2428, 2014. IEEE, pp. 614–619 (2014)
 Filippone and Engler (2015) Filippone, M., Engler, R., Enabling scalable stochastic gradientbased inference for Gaussian processes by employing the Unbiased LInear System SolvEr (ULISSE), In: Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, July 611 (2015)
 Filippone and Girolami (2014) Filippone, M., Girolami, M., Pseudomarginal Bayesian inference for Gaussian processes, IEEE Transactions on Pattern Analysis and Machine Intelligence 36 (11), 2214–2226 (2014)
 Filippone et al. (2012) Filippone, M., Marquand, A. F., Blain, C. R. V., Williams, S. C. R., MourãoMiranda, J., Girolami, M., Probabilistic Prediction of Neurological Disorders with a Statistical Assessment of Neuroimaging Data Modalities, Annals of Applied Statistics 6 (4), 1883–1905 (2012)
 Filippone et al. (2013) Filippone, M., Zhong, M., Girolami, M., A comparative evaluation of stochasticbased inference methods for Gaussian process models, Machine Learning 93 (1), 93–114 (2013)
 Flegal et al. (2007) Flegal, J. M., Haran, M., Jones, G. L., Markov Chain Monte Carlo: Can We Trust the Third Significant Figure? Statistical Science 23 (2), 250–260 (2007)
 Gelman et al. (1996) Gelman, A., Roberts, G. O., Gilks, W. R., Efficient Metropolis jumping rules, In: Bayesian statistics, 5 (Alicante, 1994). Oxford Sci. Publ. Oxford Univ. Press, New York, pp. 599–607 (1996)
 Gelman and Rubin (1992a) Gelman, A., Rubin, D. B., Inference from iterative simulation using multiple sequences, Statistical Science 7 (4), 457–472 (1992a)
 Gilks et al. (1998) Gilks, W., Roberts, G., Sahu, S., Adaptive Markov chain Monte Carlo through regeneration, J. Am. Stat. Assoc. 93, 1045–1054 (1998)
 Haario et al. (1999) Haario, H., Saksman, E., Tamminen, J., Adaptive proposal distribution for random walk Metropolis algorithm, Computational Statistics 14, 375–395 (1999)
 Haario et al. (2001) Haario, H., Saksman, E., Tamminen, J., An adaptive Metropolis algorithm, Bernoulli 7, 223–242 (2001)
 Haario et al. (2003) Haario, H., Saksman, E., Tamminen, J., Componentwise adaptation for MCMC, Tech. Rep. Preprint 342, Dept. of Mathematics, University of Helsinki (2003)
 Hastings (1970) Hastings, W., Monte Carlo sampling methods using Markov chains and their applications, Biometrika 57, 97–109 (1970)
 Hensman et al. (2015) Hensman, J., Matthews, A. G. de G., Filippone, M., Ghahramani, Z., MCMC for variationally sparse Gaussian processes, Tech. rep., arXiv:1506.04000 (2015)
 Hoffman and Gelman (2012) Hoffman, M. D., Gelman, A., The NoUTurn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo, Journal of Machine Learning Research to appear (2012)
 Jones et al. (1998) Jones, D. R., Schonlau, M., Welch, W. J., Efficient Global Optimization of Expensive BlackBox Functions, Journal of Global Optimization 13 (4), 455–492 (1998)
 Kennedy and O’Hagan (2001) Kennedy, M. C., O’Hagan, A., Bayesian calibration of computer models, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 63 (3), 425–464 (2001)
 Kim et al. (2014) Kim, S., Valente, F., Filippone, M., Vinciarelli, A., Predicting continuous conflict perception with bayesian gaussian processes, IEEE Transactions on Affective Computing to appear (2014)
 KnorrHeld and Rue (2002) KnorrHeld, L., Rue, H., On Block Updating in Markov Random Field Models for Disease Mapping, Scandinavian Journal of Statistics 29 (4), 597–614 (2002)
 Kulhavý (1996) Kulhavý, R., Recursive nonlinear estimation: A geometric approach. Springer (1996)
 Kuss and Rasmussen (2005) Kuss, M., Rasmussen, C. E., Assessing Approximate Inference for Binary Gaussian Process Classification, Journal of Machine Learning Research 6, 1679–1704 (2005)
 MacKay (1994) MacKay, D. J. C., Bayesian Nonlinear Modelling for the Prediction Competition, ASHRAE Transactions 100 (2), 1053–1062 (1994)
 Marin et al. (2014) Marin, J.M., Pudlo, P., Sedki, M., Consistency of the adaptive multiple importance sampling, eprint arXiv:1211.2548v2 (2014)
 Metropolis et al. (1953) Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., Teller, E., Equation of state calculations by fast computing machines, The Journal of Chemical Physics 21, 1087–1092 (1953)
 Murray and Adams (2010) Murray, I., Adams, R. P., Slice sampling covariance hyperparameters of latent Gaussian models, In: Lafferty, J. D., Williams, C. K. I., ShaweTaylor, J., Zemel, R. S., Culotta, A. (Eds.), Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010. Proceedings of a meeting held 69 December 2010, Vancouver, British Columbia, Canada. Curran Associates, Inc., pp. 1732–1740 (2010)
 Murray and Graham (2015) Murray, I., Graham, M. M., PseudoMarginal Slice Sampling, arXiv:1510.02958v1 (2015)
 Neal (1993) Neal, R. M., Probabilistic inference using Markov chain Monte Carlo methods, Tech. Rep. CRGTR931, Dept. of Computer Science, University of Toronto (1993)
 Neal (1999) Neal, R. M., Regression and classification using Gaussian process priors (with discussion), Bayesian Statistics 6, 475–501 (1999)
 Neal (2003b) Neal, R. M., Slice Sampling, Annals of Statistics 31, 705–767 (2003b)
 Neal (2011) Neal, R. M., Handbook of Markov Chain Monte Carlo, chapter 5: MCMC Using Hamiltonian Dynamics. CRC Press (2011)
 Nesterov (2009) Nesterov, Y., Primaldual subgradient methods for convex problems, Mathematical Programming 120 (1), 221–259 (2009)
 Nickisch and Rasmussen (2008) Nickisch, H., Rasmussen, C. E., Approximations for Binary Gaussian Process Classification, Journal of Machine Learning Research 9, 2035–2078 (2008)
 Oh and Berger (1992) Oh, M. S., Berger, J. O., Adaptive importance sampling in Monte Carlo integration, Journal of Statistical Computing and Simulation 41, 143–168 (1992)
 Opper and Winther (2000) Opper, M., Winther, O., Gaussian processes for classification: Meanfield algorithms, Neural Computation 12 (11), 2655–2684 (2000)
 Ortiz and Kaelbling (2000) Ortiz, L., Kaelbling, L., Adaptive importance sampling for estimation in structured domains, In: Proceedings of the Sixteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI2000). Morgan Kaufmann Publishers, San Francisco, CA., pp. 446–454 (2000)
 Owen and Zhou (2000) Owen, A., Zhou, Y., Safe and effective importance sampling, J. Amer. Statist. Assoc. 95, 135–143 (2000)
 Petelin et al. (2014) Petelin, D., Gasperin, M., Smidl, V., Adaptive importance sampling for Bayesian inference in Gaussian process models, In: Proceedings of the 19th IFAC World Congress. pp. 5011–5016 (2014)
 Pitt et al. (2012) Pitt, M.K., Silva, R.S., Giordani, P., Kohn, R., On some properties of Markov chain Monte Carlo simulation methods based on the particle filter, Journal of Econometrics 171, 134–151 (2012)
 Rasmussen and Williams (2006) Rasmussen, C. E., Williams, C., Gaussian Processes for Machine Learning. MIT Press (2006)
 Roberts and Rosenthal (2001) Roberts, G., Rosenthal, J., Optimal scaling for various metropolishastings algorithms, Statistical Science 16, 351–367 (2001)
 Roberts et al. (1997) Roberts, G. O., Gelman, A., Gilks, W. R., Weak convergence and optimal scaling of random walk Metropolis algorithms, Annals of Applied Probability 7, 110–120 (1997)
 Roberts and Sahu (1997) Roberts, G. O., Sahu, S. K., Updating Schemes, Correlation Structure, Blocking and Parameterization for the Gibbs Sampler, Journal of the Royal Statistical Society, Series B (Methodological) 59 (2) (1997)
 Rubin (1988) Rubin, D., Using the SIR algorithm to simulate posterior distributions, In Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.), pp. 395–402. Oxford Univ. Press (1988)
 Sahu and Zhigljavsky (2003) Sahu, S., Zhigljavsky, A., Self regenerative Markov chain Monte Carlo with adaptation, Bernoulli 9, 395–422 (2003)
 Šmídl and Hofman (2014) Šmídl, V., Hofman, R., Efficient Sequential Monte Carlo Sampling for Continuous Monitoring of a Radiation Situation, Technometrics 56 (4), 514–528 (2014)
 Taylor and Diggle (2012) Taylor, B. M., Diggle, P. J., INLA or MCMC? A Tutorial and Comparative Evaluation for Spatial Prediction in log-Gaussian Cox Processes, arXiv:1202.1738 (2012)
 Tran et al. (2014) Tran, M.N., Scharth, M., Pitt, M.K., Kohn, R. Importance sampling squared for Bayesian inference in latent variable models, arXiv:1309.3339 (2014)
 Vanhatalo and Vehtari (2007) Vanhatalo, J., Vehtari, A., Sparse Log Gaussian Processes via MCMC for Spatial Epidemiology, Journal of Machine Learning Research  Proceedings Track 1, 73–89 (2007)
 Williams and Barber (1998) Williams, C. K. I., Barber, D., Bayesian classification with Gaussian processes, IEEE Transactions on Pattern Analysis and Machine Intelligence 20, 1342–1351 (1998)
Appendices
Appendices A and B show the convergence results of the samplers for GP regression with the RBF covariance (RBF kernel case) and the ARD covariance (ARD kernel case), respectively. The top-left panels of the figures in A and B show the results of AMIS/MAMIS. It can be seen that the AMIS/MAMIS variant that exploits the full covariance structure of the proposal distribution performs better than the one that only updates the diagonal of the covariance matrix of the proposal density. For the MH family (MHI/MHD/MHH) and the HMC family (standard HMC, NUTS, NUTSDA), the figures in A and B show that the methods that make use of the scales and correlations of the parameters perform better than those that do not in most cases. Also, NUTS/NUTSDA turn out to converge much faster than standard HMC, since standard HMC has to be tuned at considerable cost in pilot runs. For MH and standard HMC, the computational cost of tuning is counted when comparing convergence, as shown in the top-center and top-right panels of the figures in A and B, where the end of tuning (EOT) is indicated by three vertical dotted lines, corresponding to the three variants from left to right. For NUTSDA, the computational cost of tuning the parameters of the dual averaging scheme is also counted, as displayed in the bottom-center panels of the figures in A and B, with the EOT indicated by three vertical dotted lines, again relating to the three variants from left to right. Table 6 shows the corresponding computational cost of tuning:
             Concrete         Housing          Parkinsons
             RBF      ARD     RBF      ARD     RBF      ARD
HMCI         6747     5910    4779     3924    1561     1340
HMCD         6042     7316    7281     7726    8883     8469
HMCH         10851    9451    10987    8860    10871    8736
NUTSDAI      1402     3528    1193     7433    1338     6488
NUTSDAD      1357     1582    1124     2424    975      1951
NUTSDAH      682      1023    670      1866    728      1794
Appendix C shows the convergence results of PMAMIS/PMMH for the RBF (Fig. 9) and ARD (Fig. 10) cases, respectively. As can be seen from the figures, both PMAMIS and PMMH with the higher number of importance samples (Nimp=64) converge much faster than those with the lower number (Nimp=1), in both the EP and LA approximation cases, as expected. The results also indicate that PMAMIS is competitive with PMMH in terms of convergence speed in most of the EP and LA approximation cases. Moreover, PMAMIS/PMMH seem to converge faster with the EP approximation than with the LA approximation in most cases, probably because EP yields a more accurate approximation than LA, as reported in Kuss and Rasmussen (2005) and Nickisch and Rasmussen (2008).
Appendix A Convergence of samplers for GP regression with the RBF covariance
Concrete dataset  RBF covariance
Housing dataset  RBF covariance
Parkinsons dataset  RBF covariance
Appendix B Convergence of samplers for GP regression with the ARD covariance
Concrete dataset  ARD covariance
Housing dataset  ARD covariance

Parkinsons dataset  ARD covariance

Appendix C Convergence of samplers for GP classification
RBF

ARD
