The controlled thermodynamic integral for Bayesian model comparison
Bayesian model comparison relies upon the model evidence, yet for many models of interest the model evidence is unavailable in closed form and must be approximated. Many of the estimators for evidence that have been proposed in the Monte Carlo literature suffer from high variability. This paper considers the reduction of variance that can be achieved by exploiting control variates in this setting. Our methodology is based on thermodynamic integration and applies whenever the gradient of both the log-likelihood and the log-prior with respect to the parameters can be efficiently evaluated. Results obtained on regression models and popular benchmark datasets demonstrate a significant and sometimes dramatic reduction in estimator variance and provide insight into the wider applicability of control variates to Bayesian model comparison.
Keywords. model evidence, control variates, variance reduction
Author Footnote. Chris J. Oates (E-mail: firstname.lastname@example.org) is Research Fellow, Theodore Papamarkou (E-mail: email@example.com) is Research Associate and Mark Girolami (E-mail: firstname.lastname@example.org) is Professor, Department of Statistics, University of Warwick, Coventry, CV4 7AL. This work was supported by UK EPSRC EP/D002060/1, EP/J016934/1, EU Grant 259348 (Analysing and Striking the Sensitivities of Embryonal Tumours) and a Royal Society Wolfson Research Merit Award. The authors are grateful to Christian Robert for discussions on the use of control variates in this setting.
In hypothesis-driven research we are presented with data y that are assumed to have arisen under one of two (or more) putative models, each characterised by a probability density p(y|M_i). Given a priori model probabilities p(M_i), the data induce a posteriori probabilities p(M_i|y) that are the basis for Bayesian model comparison. Since any prior probability distribution is transformed into a posterior probability distribution through consideration of the data, the transformation itself represents the evidence provided by the data (Kass and Raftery, 1995). For the simple case of two models, this transformation follows from Bayes' rule as

p(M_2|y) / p(M_1|y) = [p(y|M_2) / p(y|M_1)] × [p(M_2) / p(M_1)].    (1)
Thus the influence of the data on the posterior probability distribution is captured through the Bayes factor B_21 = p(y|M_2)/p(y|M_1) in favour of Model 2 over Model 1. Rearranging, we can interpret the Bayes factor as the ratio of the posterior odds to the prior odds. A natural approach to the computation of Bayes factors is to directly compute the evidence

p(y|M_i) = ∫ p(y|θ_i, M_i) p(θ_i|M_i) dθ_i    (2)

provided by data y in favour of model M_i, where θ_i are the parameters associated with model M_i. Yet for almost all models of interest, the evidence is unavailable in closed form and must be approximated. Numerous techniques have been proposed to approximate the model evidence (Eqn. 2), a selection of which includes path sampling (Ogata, 1989; Gelman and Meng, 1998), harmonic means (Gelfand and Dey, 1994), Chib's method (Chib and Jeliazkov, 2001), nested sampling (Skilling, 2006), particle filters (Del Moral et al., 2006), multicanonical algorithms (Marinari and Parisi, 1992; Geyer and Thompson, 1995), approximate Bayesian computation (Didelot et al., 2011) and variational approximations (Corduneanu and Bishop, 2001). Alternatively one can directly target the Bayes factor that compares two models. Here too numerous methods have been proposed, including importance sampling (Gelman and Meng, 1998; Chen et al., 2000), ratio importance sampling (Torrie and Valleau, 1977), bridge sampling (Gelman and Meng, 1998; Chen et al., 2000), sequential Monte Carlo (Zhou et al., 2013), annealed importance sampling (Neal, 2001), reversible-jump Markov chain Monte Carlo (MCMC; Green, 1995) and also again approximate Bayesian computation (Toni et al., 2009). Recent reviews of these methodologies include Vyshemirsky and Girolami (2008); Marin and Robert (2010); Friel and Wyse (2012).
Of the estimators of evidence that are based on Monte Carlo sampling, it remains the case that estimator variance can in general be extremely high. General approaches to reduction of Monte Carlo error that have been proposed in the literature include antithetic variables (Green and Han, 1992), control variates and Rao-Blackwellisation (Robert and Casella, 2004), Riemann sums (Philippe and Robert, 2001) and a plethora of MCMC schemes that aim to improve mixing (e.g. Girolami and Calderhead, 2011). These methods could all be used to reduce the variance of estimators for model evidence that are based on computing Monte Carlo expectations. In this paper we extend the zero-variance (ZV) control variate technique, introduced in the physics literature by Assaraf and Caffarel (1999), to estimators of model evidence that are based on MCMC and thermodynamic integration (TI; Frenkel and Smit, 2002). The methodology applies whenever the gradient of the log-likelihood (and the log-prior) can be evaluated and therefore can be used “for free” when differential geometric sampling schemes are employed in construction of the Markov chain (Papamarkou et al., 2014). Theoretical results are provided that guide maximal variance reduction in practice. Results on popular benchmark datasets demonstrate a substantial reduction in variance compared to existing estimators and the method is shown to be exact in the special case of Bayesian linear regression.
The paper proceeds as follows: Section 2 recalls key ideas from TI and ZV that we use in our methodology. In Section 3 we derive control variates for TI and provide theoretical results that guide maximal variance reduction in practice. Section 4 compares the proposed methodology to state-of-the-art estimators of model evidence on popular benchmark datasets. Section 5 investigates scenarios where the proposed methodology is likely to fail. Finally, Section 6 provides more general insight into the use of control variates in estimation of model evidence, drawing an important distinction between "equilibrium" and "non-equilibrium" estimators that determines whether or not control variates may be applicable.
2.1 Thermodynamic integration
Path sampling and the closely related technique of TI emerged from the physics community as a computational approach to compute normalising constants (Gelman and Meng, 1998). Recent empirical investigations, including Vyshemirsky and Girolami (2008); Friel and Wyse (2012), have revealed that TI is among the most promising approaches to estimation of model evidence. Below we provide relevant background on TI, referring the reader to Calderhead and Girolami (2009) for a detailed discussion of implementational details.
TI targets the model evidence directly; in what follows we therefore implicitly condition upon a model M and aim to compute the evidence p(y) provided by data y in favour of that model. Following the presentation of Friel and Pettitt (2008), the power posterior is defined as p(θ|y, t) = p(y|θ)^t p(θ) / z(t), where the normalising constant is given by z(t) = ∫ p(y|θ)^t p(θ) dθ. Here t ∈ [0, 1] is known as an inverse temperature parameter and by analogy the process of increasing t is known as annealing. Note that p(θ|y, 0) is the density of the prior distribution, whereas p(θ|y, 1) is the density of the posterior distribution. Varying t produces a continuous path between these two distributions and in this paper it is assumed that all intermediate distributions exist and are well-defined. The normalising constant z(0) is equal to one and z(1) is equal to p(y), the model evidence that we aim to estimate.
The standard thermodynamic identity is

log p(y) = ∫_0^1 E_{θ|y,t}[log p(y|θ)] dt,    (3)
where the expectation in the integrand is with respect to the power posterior whose density is given above. The correctness of Eqn. 3 is established in e.g. Friel and Pettitt (2008). In TI, this one-dimensional integral is evaluated numerically using a quadrature approximation over a discrete temperature ladder, whereas in the related approach of path sampling this integral is evaluated using MCMC. Note that the use of quadrature methods introduces bias into the estimator of model (log-)evidence; it is therefore important to select an accurate quadrature approximation (Appendix A).
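As a concrete illustration, the quadrature step can be sketched in a few lines of Python. The helper below is hypothetical (not the authors' code): it takes Monte Carlo draws of the log-likelihood at each rung of a temperature ladder and applies the trapezoidal rule to Eqn. 3.

```python
import numpy as np

def ti_log_evidence(loglik_samples, temps):
    """Thermodynamic integration with trapezoidal quadrature.

    loglik_samples: list of 1-d arrays; entry i contains log-likelihood
        values of MCMC draws from the power posterior at temperature temps[i].
    temps: increasing array of inverse temperatures, temps[0] = 0, temps[-1] = 1.
    """
    temps = np.asarray(temps, dtype=float)
    # Monte Carlo estimate of E_{theta|y,t}[log p(y|theta)] at each rung
    means = np.array([np.mean(s) for s in loglik_samples])
    # trapezoidal rule over the temperature ladder
    widths = temps[1:] - temps[:-1]
    return 0.5 * np.sum(widths * (means[1:] + means[:-1]))
```

Note that this first order rule carries quadrature bias; the more accurate second order correction of Friel et al. (2014) additionally involves the variance of the log-likelihood at each rung (Appendix A).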
2.2 Control variates and the ZV technique
Control variates are often employed when we aim to estimate, with reduced variance, the expectation of a function f of a random variable θ that is distributed according to a (possibly unnormalised) density π(θ). In this paper we focus on real-valued f(θ) and we aim to approximate

μ(f) = E_{θ~π}[f(θ)].    (4)
The generic control variate principle relies on constructing an auxiliary function f̃(θ) = f(θ) + h(θ) that satisfies E_{θ~π}[h(θ)] = 0 and so μ(f̃) = μ(f). Write σ²(f) for the variance of the function f of a random variable θ whose (unnormalised) density is π. In many cases it is possible to choose h such that σ²(f̃) < σ²(f), leading to a reduction in Monte Carlo variance. Intuitively, greater variance reduction can occur when h is negatively correlated with f under π, since much of the randomness "cancels out" in the auxiliary function f̃. In the classical literature h is formed as a sum h(θ) = Σ_i a_i h_i(θ), where the h_i have zero mean under π and are known as control variates, whilst the a_i are coefficients that must be specified. For estimation based on Markov chains, Andradóttir et al. (1993) proposed control variates for discrete state spaces. Later Mira et al. (2003) extended this approach to continuous state spaces, observing that the optimal choice of h is intimately associated with the solution of the Poisson equation and proposing to solve this equation numerically. Further work on constructing control variates for Markov chains includes Hammer and Tjelmeland (2008) for Metropolis-Hastings chains and Dellaportas and Kontoyiannis (2012) for Gibbs samplers.
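The principle can be demonstrated in a few lines of Python. This toy example (ours, not from the paper) estimates E[exp(X)] for X ~ N(0, 1) using the zero-mean control variate h(x) = x with a plug-in coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)

f = np.exp(x)   # target function; E[f] = exp(1/2)
h = x           # control variate with known zero mean under N(0, 1)

# plug-in coefficient a = -Cov(h, f) / Var(h) minimises the variance of f + a*h
a = -np.cov(h, f)[0, 1] / np.var(h, ddof=1)
f_tilde = f + a * h   # auxiliary function with the same expectation as f

print(np.var(f_tilde) < np.var(f))   # the controlled estimator has lower variance
```

Here h is positively correlated with f, so the fitted coefficient a is negative, making a*h negatively correlated with f as the intuition above suggests.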
In this paper we consider the particularly tractable class of ZV control variates that are expressed as functions of the gradient of the log-target density (i.e. the score function). More specifically, Mira et al. (2013) proposed to use

h(θ) = ΔP(θ)/2 + ∇P(θ) · u(θ),    (5)

where the trial function P(θ) is a polynomial in θ and

u(θ) = ∇_θ log π(θ) / 2    (6)

is proportional to the score function. In this paper we adopt the convention that both ∇P(θ) and u(θ) are column vectors. The thermodynamic identity (Eqn. 3) is based on expected values of log-likelihoods log p(y|θ). Since ∇_θ log π(θ) is closely related to ∇_θ log p(y|θ), ZV control variates appear as a natural strategy to achieve variance reduction in TI. As shown in Mira et al. (2013), ZV control variates arise naturally in certain Gaussian models, leading, in some cases, to exact (i.e. deterministic) estimators that have zero variance. Intuitively, any density that approximates a Gaussian forms a suitable candidate for implementing the ZV scheme. Theoretical conditions for asymptotic unbiasedness of ZV have been established (Appendix B).
ZV control variates are particularly tractable for two reasons: (i) For many models of interest it is possible to obtain a closed-form expression for Eqn. 5, compared to alternatives that require numerical solution of the Poisson equation; (ii) As recently noticed by Papamarkou et al. (2014), the ZV technique can be applied essentially “for free” inside differential-geometric MCMC sampling schemes for which the score function is a pre-requisite for sampling (Girolami and Calderhead, 2011).
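To illustrate the ZV construction, the following sketch (a hypothetical toy, assuming the degree 1 convention of Eqns. 5 and 6) targets a one-dimensional Gaussian N(mu, 1), for which the posterior-mean estimator becomes exactly zero-variance, echoing the Gaussian exactness noted above.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 1.5
theta = rng.normal(mu, 1.0, size=1_000)   # draws from the target pi = N(mu, 1)

f = theta                 # estimand: E[theta] = mu
u = -(theta - mu) / 2.0   # u(theta) = (1/2) d/dtheta log pi(theta)

# degree 1 trial function P(theta) = a*theta gives control variate h = a*u
a = -np.cov(u, f)[0, 1] / np.var(u, ddof=1)   # plug-in optimal coefficient
f_tilde = f + a * u   # every entry collapses to mu: a zero-variance estimator
```

Because u is an exact affine function of theta here, the fitted coefficient is a = 2 and f + a*u = mu identically, so the Monte Carlo estimator is deterministic.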
In section 3.1 we develop a control variate scheme for the estimation of model evidence, taking TI as our base estimator whose variance we propose to reduce. The main methodological challenge in this setting is the elicitation of both the optimal control variate coefficients and the optimal temperature ladder that underlies TI. In section 3.2 we derive optimal expressions for both these quantities and in section 3.3 we describe how coefficients and temperature ladders are selected in practice.
3.1 The controlled thermodynamic integral
Taking the target density π(θ) to be the power posterior p(θ|y, t), it follows from Eqn. 6 that

u(θ|y, t) = [t ∇_θ log p(y|θ) + ∇_θ log p(θ)] / 2.    (7)

The ZV control variates (Eqn. 5) are then

h(θ|t) = ΔP(θ)/2 + ∇P(θ) · u(θ|y, t),    (8)

where u(θ|y, t) is as defined in Eqn. 7. Here the coefficients a(y, t) of the polynomial P will in general depend on both the data y and the inverse temperature t. Integrating these control variates into TI, we obtain the "controlled thermodynamic integral" (CTI)

log p(y) = ∫_0^1 E_{θ|y,t}[log p(y|θ) + h(θ|t)] dt.    (9)
In order to use CTI to estimate the model (log-)evidence we need to specify both (i) polynomial coefficients and (ii) an appropriate discretisation (the temperature ladder) of the one dimensional integral. Specification of both polynomial coefficients and temperature ladder should be targeted at minimising the variance of CTI (see below).
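Combining the two ingredients, a minimal CTI sketch (hypothetical code, degree 1 trial polynomial for a scalar parameter, first order quadrature) applies a plug-in ZV correction at each rung before the trapezoidal rule:

```python
import numpy as np

def cti_log_evidence(loglik_samples, score_samples, temps):
    """Controlled thermodynamic integration, degree 1 trial polynomial.

    loglik_samples: list of 1-d arrays of log-likelihood draws, one per rung.
    score_samples: list of 1-d arrays; entry i holds u(theta|y, t_i) evaluated
        at the same draws (scalar parameter for simplicity).
    temps: increasing inverse-temperature ladder from 0 to 1.
    """
    temps = np.asarray(temps, dtype=float)
    means = []
    for f, u in zip(loglik_samples, score_samples):
        a = -np.cov(u, f)[0, 1] / np.var(u, ddof=1)   # plug-in coefficient
        means.append(np.mean(f + a * u))              # controlled expectation
    means = np.array(means)
    widths = temps[1:] - temps[:-1]
    return 0.5 * np.sum(widths * (means[1:] + means[:-1]))
```

When the log-likelihood draws are an affine function of the score statistic, as in the Gaussian example above, each controlled expectation is exact and only quadrature error remains.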
3.2 Optimal coefficients and ladders
We derive the jointly optimal, variance-minimising, polynomial coefficients and temperature ladder. For the latter, note that there is a surjective mapping from temperature ladders 0 = t_0 < t_1 < … < t_m = 1 to probability distributions on [0, 1] with density function q(t). For the development below it is convenient to focus on optimising the density q(t), mapping back to the temperature ladder during implementation (see Section 3.3 below). For clarity of exposition write f(θ) = log p(y|θ), where we temporarily suppress dependence on both data y and model M. The CTI identity can be rewritten as

log p(y) = E_{t~q}[ E_{θ|y,t}[f(θ) + h(θ|t)] / q(t) ],    (10)
where the final expectation is taken with respect to the distribution with density q(t). Under an approximation that Monte Carlo samples are obtained independently, so-called "perfect transitions", the variance of the estimator of the model (log-)evidence is given by

V(a, q) = (1/n) ∫_0^1 σ²_t(f + h) / q(t) dt,    (11)

where n is the number of Monte Carlo samples and σ²_t denotes variance under the power posterior p(θ|y, t).
The optimal choice of polynomial coefficients and temperature ladder is defined as the pair that jointly minimises Eqn. 11. Specifically, we seek to minimise the Lagrangian

L(a, q, λ) = ∫_0^1 σ²_t(f + h)/q(t) dt + λ [ ∫_0^1 q(t) dt − 1 ]    (12)

over (a, q), where the dimension of a depends on the degree of the trial polynomial P that is being employed. Here λ is a Lagrange multiplier that will be used to ensure ∫_0^1 q(t) dt = 1. Below we consider degree 1 polynomials, so that P(θ) = aᵀθ, but the derivation applies analogously to higher degree polynomials, as explained in Appendix C. The solution of the Lagrangian optimisation problem (Eqn. 12) is

a*(t) = −V_t[u]⁻¹ C_t[u, f],    (13)
q*(t) ∝ σ_t(f + a*(t)ᵀu),    (14)

where V_t and C_t denote respectively variance and cross-covariance matrices under the power posterior (since ΔP = 0 for degree 1 polynomials, h = aᵀu). Notice that the optimal temperature ladder for CTI is not the same as the optimal ladder for standard TI, which is given by q(t) ∝ σ_t(f) (Calderhead and Girolami, 2009).
It can be shown (Rubinstein and Marcus, 1985) that this choice of polynomial coefficients is characterised as the minimiser of the variance ratio

R(t) = σ²_t(f + aᵀu) / σ²_t(f),    (15)

and at this minimum

R(t) = 1 − ρ²(t),    (16)

where ρ(t) is the canonical correlation between u and f under the power posterior, so that greater variance reduction is expected in the case where a linear combination of the elements of the vector u is highly correlated with the target function f.
For most models of interest neither Eqn. 13 nor Eqn. 14 possesses a closed-form expression and it becomes necessary to employ estimates or approximations to the optimal values. We begin by noting that Eqn. 13 actually defines the optimal, variance-minimising, coefficients a*(t) independently of the choice of temperature ladder q(t); this is directly verified from the Euler-Lagrange equations applied to Eqn. 12 with q held fixed. This observation allows us to discuss these two aspects of the implementation separately:
3.3.1 Polynomial coefficients
Optimal coefficients for control variates are typically estimated from the same sequence of MCMC samples that will subsequently be used to compute the controlled expectations (Robert and Casella, 2004). Specifically, to estimate the optimal control variate coefficients we exploit MCMC samples to estimate both the covariance V_t[u] and the cross-covariance C_t[u, f]. These estimates are then plugged directly into Eqn. 13 in order to obtain an estimate

â(t) = −V̂_t[u]⁻¹ Ĉ_t[u, f]    (17)

for the optimal coefficients. Further discussion of "plug-in" estimators for control variate coefficients can be found in Dellaportas and Kontoyiannis (2012).
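For a d-dimensional parameter the plug-in step amounts to solving a small linear system at each rung. The hypothetical helper below estimates â = −V̂[u]⁻¹Ĉ[u, f] from sample matrices:

```python
import numpy as np

def plugin_coefficients(u, f):
    """Plug-in estimate of the optimal degree 1 ZV coefficients.

    u: (n, d) array of score-based statistics u(theta) for n MCMC draws.
    f: (n,) array of the target function (here the log-likelihood).
    """
    u_centred = u - u.mean(axis=0)
    f_centred = f - f.mean()
    var_u = u_centred.T @ u_centred    # d x d (unnormalised) covariance
    cov_uf = u_centred.T @ f_centred   # d-vector of cross-covariances
    # the common normalising constant cancels inside the solve
    return -np.linalg.solve(var_u, cov_uf)
```

When f is exactly an affine function of u, the controlled values f + u @ a are constant, recovering the zero-variance behaviour described in Section 2.2.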
3.3.2 Temperature ladder
For estimating the optimal temperature ladder of Eqn. 14, one obvious numerical approach would be firstly to estimate q*(t) up to proportionality over a uniform grid on [0, 1], using a preliminary MCMC run to estimate both σ_t(f) and the covariance and cross-covariance matrices V_t[u] and C_t[u, f]. Nonparametric density estimation could then be applied in order to obtain an estimate of the optimal ladder. However this two-step procedure is computationally burdensome. Neal (1996) showed that a geometric temperature ladder is optimal for annealing on the scale parameter of a Gaussian and Behrens et al. (2012) extended this result to target distributions of the same form as the power posterior, a class that includes Gaussians. In this paper we fix a quintic temperature ladder t_i = (i/m)^5, i = 0, 1, …, m, for use in all applications; this ladder is widely used in the TI literature and has demonstrated strong performance in empirical studies (e.g. Calderhead and Girolami, 2009; Friel et al., 2014). The question of how to select appropriate temperature ladders in practice is an ongoing area of research and the recent contributions of Miasojedow et al. (2012); Behrens et al. (2012); Zhou et al. (2013); Friel et al. (2014) are compatible with our methodology.
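The quintic ladder is straightforward to construct; for instance (function name hypothetical):

```python
import numpy as np

def quintic_ladder(m):
    """Inverse temperatures t_i = (i/m)^5, i = 0, ..., m."""
    return (np.arange(m + 1) / m) ** 5

temps = quintic_ladder(50)   # 51 rungs, heavily skewed towards t = 0
```

The fifth power concentrates rungs near t = 0, where the integrand changes most rapidly and estimator variance is highest.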
The second order quadrature method of Friel et al. (2014), described in Appendix A, requires us also to estimate the variance σ²_t(f) at each rung of the temperature ladder. In the experiments below we use ZV control variates to estimate this variance, using the identity

σ²_t(f) = E_{θ|y,t}[f(θ)²] − E_{θ|y,t}[f(θ)]²

and applying control variates in the estimation of each of these expectations.
We present several empirical studies that compare CTI to the state-of-the-art TI estimators of Friel et al. (2014). In all applications below we base estimation on the output of a population MCMC sampler (Jasra et al., 2007) limited to a fixed number of iterations at each of the 51 rungs of the temperature ladder. In brief, the within-temperature proposal was provided by the manifold Metropolis-adjusted Langevin algorithm (mMALA) of Girolami and Calderhead (2011), whilst the between-temperature proposal randomly chooses a pair of (inverse) temperatures, proposing to swap their state vectors with probability given by the Metropolis-Hastings ratio (Calderhead and Girolami, 2009). To ensure fairness, the same samples were used as the basis for all estimators of model evidence, so that all estimators require essentially the same amount of computation (since the score function is computed as a matter of course in mMALA). Moreover, to explore the statistical properties of the estimators themselves, we generated 100 independent realisations of the population MCMC and thus 100 realisations of each estimator. Full details are provided in the Supplement.
4.1 Bayesian linear regression
4.1.1 Known precision
We begin with an analytically tractable problem in Bayesian linear regression. The log-likelihood function is given by

log p(y|θ) = (n/2) log(τ/2π) − (τ/2) ‖y − Xθ‖²,

where y is an n × 1 vector of responses, X is an n × d design matrix, θ is a d × 1 vector of regression coefficients and τ is the noise precision. In the simulations below n, d and τ were fixed. The design matrix X was populated by drawing each entry independently from the standard normal distribution and then data y were generated from the model; both X and y were then fixed for all experiments below. From the Bayesian perspective we take a conjugate Gaussian prior on θ. In this section we assume τ is fixed and known, but we relax this assumption in the next section. Thus the unknown model parameters here are θ and we aim to compute the evidence p(y) by marginalising over these parameters. This example is an ideal benchmark since many relevant quantities can be obtained in closed form; see Appendix D.1 for full details.
Before applying CTI we are required to check that the sufficient conditions for the unbiasedness of ZV estimators are satisfied (see Appendix B). This amounts to noticing that the tails of the power posterior decay exponentially in ‖θ‖ (Appendix D.1). Using the plug-in estimates (Eqn. 17) we obtain estimates â(t) for the optimal coefficients, shown in SFig. 6. For degree 2 polynomials we see that the plug-in estimator is deterministic. Indeed, by direct calculation we see that u(θ|y, t) is an invertible affine transformation of the parameter vector θ. This allows us to intuit that CTI based on degree 2 polynomials will produce an exact estimate of the (log-)evidence (up to quadrature error), as we explain below. Indeed, by another invertible affine transformation we can map u(θ|y, t) back to θ which, when multiplied into the polynomial, produces a quantity that is perfectly correlated with the log-likelihood under the power posterior. It then follows from Eqn. 15 that CTI will possess zero variance. This argument is made rigorous in the Supplement.
In SFig. 7 we plot 100 independent estimates of the integrand E_{θ|y,t}[log p(y|θ)] at each of the 51 temperatures in the ladder for polynomial trial functions of degree 0 (i.e. standard TI), 1 and 2. It is apparent that estimator variance is greatest at lower values of t; this motivates the heavily skewed temperature ladder used by ourselves and others, as we wish to target our computational effort on this high-variance region. We quantify the reduction in estimator variance at an (inverse) temperature t using the variance ratio R(t) as estimated from the MCMC samples. Fig. 1 shows that degree 1 polynomials achieve (on average) variance reduction at all temperatures, with the greatest reduction occurring in the region where t is small. This is encouraging, as the region where t is small is the most important for variance reduction of TI, as discussed above. For degree 2 polynomials we have R(t) ≈ 0 for all t, which recapitulates the exactness of the CTI estimator in this example.
Finally we explore the quality of the estimators of model evidence themselves. For this model the (log-)evidence is available in closed form (Appendix D.1) and this allows us to compute the mean square error (MSE) over all 100 independent realisations of each estimator. Results, shown in Table 1, demonstrate that CTI with degree 2 polynomials achieves a 2-fold reduction in MSE compared to standard TI when both estimators are based on first order quadrature. However, first order quadrature is known to lead to significant estimator bias (Friel et al., 2014) and when estimators are instead based on more accurate second order quadrature, CTI is seen to be substantially better than TI in terms of MSE; a dramatic difference. We also compared the TI approaches against annealed importance sampling (AIS; Neal, 2001), as described in the Supplement. In this case CTI (degree 2) is considerably more accurate than AIS (SFig. 8(a)).
4.1.2 Unknown precision (Radiata Pine)
We now relax the assumption of known precision; we will see that in these circumstances CTI is no longer exact. Specifically we consider data from Williams (1959) on specimens of radiata pine. This dataset is well known in the multivariate statistics literature and was recently used by Friel and Wyse (2012); Friel et al. (2014) in order to benchmark estimators of model evidence. The data consist of the maximum compression strength parallel to the grain y_i as a function of density x_i and density adjusted for resin content z_i. It is wished to determine whether density or resin-adjusted density is the better predictor of compression strength parallel to the grain. Following Friel et al. (2014) we consider Bayesian model comparison between a pair of competing models:

Model 1:  y_i = α + β(x_i − x̄) + ε_i,  ε_i ~ N(0, τ⁻¹),
Model 2:  y_i = γ + δ(z_i − z̄) + η_i,  η_i ~ N(0, λ⁻¹).

Here x̄ and z̄ are the sample means of the x_i and z_i respectively. The priors for (α, β) and (γ, δ) are both Gaussian with common mean and fixed precisions. Both τ and λ were assigned gamma priors with fixed shape and rate parameters. To compare between these models we consider estimates for the log-Bayes factor that are obtained as the difference between independent estimates of the log-evidence of each model.
This application is interesting for two reasons: Firstly, one can directly calculate the Bayes factor for this example in closed form, so that we have a gold-standard performance benchmark. Secondly, when the precision τ (or λ) is unknown, ZV methods are no longer exact. We therefore have an opportunity to assess the performance of CTI in a non-trivial setting.
Formulae in Appendix D.2 demonstrate that the sufficient condition for unbiasedness of ZV methods is satisfied. Results in Fig. 2 show that CTI (degree 1) achieves a modest reduction in variance across temperatures, whereas CTI (degree 2) achieves a massive variance reduction. Computing the MSE relative to the true Bayes factor, we see that CTI (degree 2) is substantially more accurate than TI, though the variance of the estimator is not identically equal to zero in this case (Table 1). As before, the MSE is further reduced as a result of applying second order quadrature. AIS performed slightly worse than the methods based on TI in this example (SFig. 8(b)).
4.2 Bayesian logistic regression (Pima Indians)
Here we examine a dataset recording incidence of diabetes, together with a range of possible diabetes indicators, for women who were at least 21 years old, of Pima Indian heritage and living near Phoenix, Arizona. This dataset is frequently used as a benchmark for supervised learning methods (e.g. Marin and Robert, 2010). Friel et al. (2014) considered seven predictors of diabetes recorded for this group: number of pregnancies (NP); plasma glucose concentration (PGC); diastolic blood pressure (BP); triceps skin fold thickness (TST); body mass index (BMI); diabetes pedigree function (DP) and age (AGE). Diabetes incidence y_i ∈ {0, 1} in person i is modelled by the binomial likelihood

p(y|θ) = ∏_i p_i^{y_i} (1 − p_i)^{1 − y_i},

where the probability of incidence p_i for person i is related to the covariates x_i and the parameters θ by

p_i = 1 / (1 + exp(−θᵀx_i)).

Bayesian model comparison is to be performed between two candidate models, each containing a subset of the available predictors, subject to an independent Gaussian prior belief on the components of θ. Following Friel and Wyse (2012) the prior variance was fixed.
The unbiasedness criterion in Appendix B is seen to be satisfied and we have

∇_θ log p(y|θ) = Xᵀ(y − p),

where the ith row of the design matrix X is x_iᵀ and p is the vector of incidence probabilities p_i. In Fig. 3 we see that degree 1 ZV methods achieve a greater variance reduction at smaller t, but moreover we see that degree 2 ZV methods continue to achieve a substantial variance reduction at all temperatures. In Table 2 we display the mean of each estimator of the Bayes factor, computed over all 100 independent runs of population MCMC, together with the standard deviation of this collection of estimates. We see that this variance reduction transfers to the estimates of the Bayes factor themselves, where the standard deviation of the CTI estimators is substantially lower compared to estimators based on TI. Although no exact expression for the Bayes factor is available, Friel et al. (2014) computed the log-evidence for both models using an extended run of 2,000 temperatures within standard TI, yielding a benchmark estimate of the Bayes factor. This estimate, obtained at considerable computational expense, closely matches the estimates obtained by CTI (degree 2), which is based on far fewer evaluations of the likelihood function. AIS performs comparably with standard TI in this example (SFig. 8(c)).
5 Limitations of CTI
We have demonstrated, using standard benchmark datasets, that CTI is well-suited to Bayesian model comparison between regression models. Regression analyses continue to be widely applicable in disciplines such as econometrics, epidemiology, political science, psychology and sociology, so that these findings have significant implications. Nevertheless in many disciplines such as engineering, geophysics and systems biology, statistical models are significantly more complex, often based on a mechanistic understanding of the underlying process. Below we provide such an example and find that CTI offers little improvement over TI; this allows us to explore the limitations of our approach and, conversely, to understand in what circumstances it is likely to be successful.
5.1 A negative example (Goodwin Oscillator)
We consider nonlinear dynamical systems of the form

dx/dt = f(x, θ),  x(0) = x_0.    (28)

We assume only a subset of the variables x_i are observed under noise, so that y_ij = x_i(t_j) + ε_ij and Y is a matrix of noisy observations of the observed variables at a set of discrete time points t_j. Model comparison for systems specified by nonlinear differential equations is known to be profoundly challenging (Calderhead and Girolami, 2011).
Write t_1 < t_2 < … < t_T for the times at which observations are obtained. We consider a Gaussian observation process with likelihood

p(Y|θ) = ∏_{i,j} N(y_ij; x_i(t_j; θ), σ²),

where x(t; θ) denotes the solution of the system in Eqn. 28. For the Gaussian observation model it can be shown that a sufficient condition for unbiasedness of ZV is that the parameter prior density vanishes in the tails at a polynomial rate determined by the degree of the trial polynomial that is being employed (see Appendix B).
Assuming the sufficient condition for ZV is satisfied, we have

∇_θ log p(Y|θ) = (1/σ²) Σ_j S(t_j)ᵀ (y_j − x(t_j; θ)),    (30)

where S(t) is a matrix of sensitivities with entries S_ik(t) = ∂x_i(t; θ)/∂θ_k. Note that in Eqn. 30, i ranges over indices corresponding only to the observed variables. In general the sensitivities will be unavailable in closed form, but they may be computed numerically by augmenting the system of ordinary differential equations (ODEs) in Eqn. 28, as described in Appendix E. Indeed, these sensitivities are already computed when differential-geometric sampling schemes are employed, so that the evaluation of Eqn. 30 incurs negligible computational cost.
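The augmentation can be illustrated on the scalar ODE dx/dt = −θx, whose sensitivity s = ∂x/∂θ obeys ds/dt = −x − θs with s(0) = 0 (a textbook construction, not the Goodwin system itself). A short fixed-step Runge-Kutta sketch in Python:

```python
import numpy as np

def rhs(z, theta):
    """Augmented system: z = (x, s) with s = dx/dtheta."""
    x, s = z
    return np.array([-theta * x,        # original ODE:  dx/dt = -theta * x
                     -x - theta * s])   # sensitivity:   ds/dt = -x - theta * s

def rk4(z0, theta, t_end, n_steps):
    """Classical fourth order Runge-Kutta integration of the augmented system."""
    h = t_end / n_steps
    z = np.array(z0, dtype=float)
    for _ in range(n_steps):
        k1 = rhs(z, theta)
        k2 = rhs(z + h * k1 / 2, theta)
        k3 = rhs(z + h * k2 / 2, theta)
        k4 = rhs(z + h * k3, theta)
        z = z + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return z

x, s = rk4([2.0, 0.0], theta=0.5, t_end=1.0, n_steps=100)
# analytic solution: x(t) = x0 * exp(-theta*t), s(t) = -t * x0 * exp(-theta*t)
```

The same device extends to systems: each sensitivity column obeys a linear ODE driven by the Jacobian of f, integrated alongside the state.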
We focus on a dynamical model of oscillatory enzymatic control due to Goodwin (1965), which was recently considered in the context of Bayesian model comparison by Calderhead and Girolami (2009). This kinetic model, specified by a system of ODEs, describes how a negative feedback loop between protein expression and mRNA transcription can induce oscillatory dynamics, as experimentally observed in circadian regulation (Locke et al., 2005). A full specification is provided in Appendix E. As shown in Calderhead and Girolami (2009), the Goodwin oscillator induces a highly multi-modal posterior distribution that renders estimation of the model evidence extremely challenging. We consider Bayesian comparison of two models: a simple model with one intermediate protein species and a more complex model with two intermediate protein species.
Fig. 4 demonstrates that in this extremely challenging example the benefits of the control variate schemes that we previously observed are heavily reduced. Since the variance ratio is related to the canonical correlation between the control variates and the log-likelihood under the power posterior (Eqn. 16), we hypothesise that the extreme multi-modality of the power posterior distribution is limiting the extent to which strong canonical correlation can be achieved. This is confirmed in Fig. 5, where we plot values of the target function against the control variates obtained from MCMC sampling in the posterior. We observe much reduced correlation in the case of the Goodwin oscillator, a consequence of the complex nature of the likelihood surface. Turning to the Bayes factor itself, in Table 2 we display the mean of each estimator of the Bayes factor, together with the standard deviation of this collection of estimates. We find that CTI (degree 1) provides a negligible reduction in variance and CTI (degree 2) provides only a slight reduction in variance.
In this example AIS consistently produced lower estimates for Bayes factors (SFig. 8(d)). This likely reflects the low number of Monte Carlo iterations that are characteristic of such computationally demanding applications.
To the best of our knowledge this is the first paper to consider the use of control variates for the purpose of Bayesian model comparison. Motivated by previous empirical studies, we focussed on TI estimators for the model (log-)evidence. However, in general, control variate techniques could be leveraged in Bayesian model comparison whenever estimators of the evidence (or Bayes factors) take the form of a Monte Carlo expectation. General control variate schemes for MCMC rely on the fact that the expectation of the control variates along the MCMC sample path will be approximately zero. We thus draw a distinction between “equilibrium” Monte Carlo estimators for the model evidence, such as TI and path sampling, that require the underlying Markov chain to have converged, and “non-equilibrium” estimators such as AIS and sequential Monte Carlo that do not require convergence. The former class are amenable to existing control variate schemes whereas the latter are not. This motivates the “equilibration” of these non-equilibrium estimators.
Given its close connection with TI (Gelman and Meng, 1998), we considered whether an equilibrated version of AIS, which jointly samples from all rungs of the temperature ladder at once, would benefit from the application of ZV control variates. In contrast to CTI, the controlled AIS estimator (CAIS) demonstrated an increase in variance compared to standard AIS. Full details are provided in the Supplement, in addition to results on each of the applications considered in this paper. To understand these counter-intuitive results, notice that control variates must be constructed simultaneously over all rungs of the temperature ladder, so that for degree 1 polynomials the number of coefficients to be jointly estimated scales with the product of the number of model parameters and the number of rungs, and for degree 2 polynomials the number of polynomial coefficients is larger still. To achieve this using the plug-in principle, we must estimate correspondingly large covariance matrices. Our results are therefore consistent with the finding that poor estimation of the polynomial coefficients can actually increase estimator variance (Glasserman, 2004). It remains unclear how to develop control variates for these non-equilibrium estimators.
We exploited the ZV control variate scheme due to Mira et al. (2013) that permits the automatic construction of control variates for any statistical model in which the gradient of the log-likelihood (and the log-prior) are available. More generally, we envisage that for models where these gradients are unavailable in closed form, the use of numerical approximations could provide a successful strategy (Calderhead and Sustik, 2012). Results on benchmark datasets demonstrate that CTI outperforms standard TI, but that the difference in performance is reduced when the likelihood function is strongly multi-modal. A natural direction for further research is to explore whether alternative control variates are better suited to these challenging problems.
CTI clearly inherits the theoretical and methodological challenges that are associated with control variates more generally. In particular, ZV control variates are not parametrisation-invariant and it is unclear how to select an optimal variance-minimising parametrisation. Specific to CTI, the optimal coefficients vary smoothly with (inverse) temperature (SFig. 5(b)), yet the conventional plug-in approach to estimation treats each rung of the temperature ladder independently, leading to rough coefficient trajectories (SFig. 5(a)). It would therefore be interesting to design an information-sharing scheme that jointly estimates coefficients across all rungs.
The development of low-cost computational approaches to Bayesian model comparison is necessary for the widespread adoption of Bayesian methodology in hypothesis-driven research. The extension of control variate strategies to this important setting offers a promising route towards achieving this goal.
Appendix A Quadrature for TI
Implementations of TI employ quadrature to approximate the one-dimensional integral in Eqn. 3. Friel and Pettitt (2008) originally employed a simple trapezoidal rule whereby the (inverse) temperature domain $[0,1]$ was partitioned using a ladder $0 = t_{1} < t_{2} < \dots < t_{m} = 1$ and the (log-)evidence was approximated by
$$\log p(\mathbf{y}) \approx \sum_{i=1}^{m-1} \frac{t_{i+1}-t_{i}}{2}\left(\mathbb{E}_{\theta|\mathbf{y},t_{i+1}}[\log p(\mathbf{y}|\theta)] + \mathbb{E}_{\theta|\mathbf{y},t_{i}}[\log p(\mathbf{y}|\theta)]\right). \qquad (31)$$
The use of quadrature introduces bias into the resulting estimator. To reduce this quadrature error and thus the estimator bias, Friel et al. (2014) proposed the second order correction term
$$\sum_{i=1}^{m-1}\frac{(t_{i+1}-t_{i})^{2}}{12}\left(\mathbb{V}_{\theta|\mathbf{y},t_{i+1}}[\log p(\mathbf{y}|\theta)] - \mathbb{V}_{\theta|\mathbf{y},t_{i}}[\log p(\mathbf{y}|\theta)]\right) \qquad (32)$$
that is subtracted from Eqn. 31. Here $\mathbb{V}_{\theta|\mathbf{y},t}[\log p(\mathbf{y}|\theta)]$ denotes the variance of the function $\log p(\mathbf{y}|\theta)$, where $\theta$ has the power posterior distribution with density $p(\theta|\mathbf{y},t) \propto p(\mathbf{y}|\theta)^{t}p(\theta)$.
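As a concrete check of these two rules, the following sketch applies them to a hypothetical conjugate model (a single Gaussian observation with a Gaussian prior, invented for illustration and not one of the paper's examples), for which the per-rung expectations and variances of the log-likelihood are available exactly:

```python
import math

# Toy conjugate model (illustration only): y ~ N(theta, 1), theta ~ N(0, 1),
# a single observation y. The power posterior theta | y, t is
# N(t*y/(t+1), 1/(t+1)), so per-rung moments of log p(y|theta) are exact.

y = 1.5

def E_loglik(t):
    mu, var = y / (t + 1.0), 1.0 / (t + 1.0)       # moments of (y - theta)
    return -0.5 * math.log(2 * math.pi) - 0.5 * (mu**2 + var)

def V_loglik(t):
    mu, var = y / (t + 1.0), 1.0 / (t + 1.0)
    # Var(0.5*(y-theta)^2) for (y-theta) ~ N(mu, var)
    return 0.5 * var**2 + mu**2 * var

ts = [i / 5.0 for i in range(6)]                    # uniform temperature ladder

# Trapezoidal rule
trap = sum((ts[i+1] - ts[i]) / 2.0 * (E_loglik(ts[i+1]) + E_loglik(ts[i]))
           for i in range(len(ts) - 1))

# Second-order correction, subtracted from the trapezoidal estimate
corr = sum((ts[i+1] - ts[i])**2 / 12.0 * (V_loglik(ts[i+1]) - V_loglik(ts[i]))
           for i in range(len(ts) - 1))
corrected = trap - corr

truth = -0.5 * math.log(4 * math.pi) - y**2 / 4.0   # exact log evidence
print(abs(trap - truth), abs(corrected - truth))    # correction shrinks the bias
```

Because the integrand here is smooth and concave in $t$, the plain trapezoidal rule underestimates the integral, and the second-order correction removes the leading error term.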
Appendix B Asymptotic unbiasedness
Propositions 1 and 2 of Mira et al. (2013) show that a sufficient condition for asymptotic unbiasedness of ZV control variates, i.e. $\mathbb{E}[\tilde{h}] = \mathbb{E}[h]$, is that, in the case where the parameter space $\Theta$ is unbounded, $\oint_{\partial\Theta_{i}} \pi(\theta)\,\nabla P(\theta)\cdot\mathbf{n}(\theta)\,\mathrm{d}S \rightarrow 0$, where $\Theta_{i} \uparrow \Theta$ is a sequence of bounded subsets and $\mathbf{n}(\theta)$ denotes the versor orthogonal to the boundary $\partial\Theta_{i}$. This condition can be difficult to verify directly; below we contribute a sufficient condition that is easily verified. Consider a $d$-dimensional hypercube $C_{r}$ with side length $2r$ and surface area $2d(2r)^{d-1}$ and let $Q$ be the degree of the polynomial $P$. Then crude bounds give $\sup_{\theta\in\partial C_{r}}|\nabla P(\theta)| \leq Cr^{Q-1}$ for some constant $C$. Since the surface area grows as $\mathcal{O}(r^{d-1})$, it follows that a sufficient condition for unbiasedness of ZV is
$$r^{\,d+Q-2}\sup_{\theta\in\partial C_{r}}\pi(\theta)\rightarrow 0 \text{ as } r\rightarrow\infty. \qquad (33)$$
In practice this requires that the tails of the (unnormalised) density vanish sufficiently quickly, with faster convergence required when higher degree polynomials are to be used.
Appendix C Second degree polynomials
Second degree polynomials can be expressed as $P(\theta) = a^{T}\theta + \theta^{T}B\theta$, where $a$ is $d\times 1$ and $B$ is $d\times d$. This leads to ZV control variates of the form
$$\tilde{h}(\theta) = h(\theta) + \mathrm{tr}(B) + (a + 2B\theta)^{T}\mathbf{z}(\theta), \qquad (34)$$
where $a$ and $B$ denote the quadratic polynomial coefficients and $\mathrm{tr}(B)$ is the trace of $B$. We assume that $B$ is symmetric, but this is not required in general. Following Mira et al. (2013), it is possible to rearrange the terms on the right hand side of Eqn. 34 into the form $\boldsymbol{a}^{*T}\boldsymbol{x}^{*}$, where the column vectors $\boldsymbol{a}^{*}$, $\boldsymbol{x}^{*}$ have $d(d+3)/2$ elements each, and are defined as $\boldsymbol{a}^{*} = (a^{T}, \mathrm{diag}(B)^{T}, \boldsymbol{b}^{T})^{T}$, where $\mathrm{diag}(B)$ is the diagonal of $B$ and $\boldsymbol{b}$ is a column vector with $d(d-1)/2$ elements, whose element in the position $(j,k)$, $j > k$, is the lower-diagonal $(j,k)$-th element of $B$, and $\boldsymbol{x}^{*} = (\mathbf{z}^{T}, (\mathbf{1} + 2\theta\circ\mathbf{z})^{T}, \boldsymbol{w}^{T})^{T}$, where $\circ$ and $\mathbf{1}$ denote the Hadamard product and the unit vector respectively, while $\boldsymbol{w}$ is a column vector comprising $d(d-1)/2$ elements, whose element in the position $(j,k)$, $j > k$, equals $2(\theta_{j}z_{k} + \theta_{k}z_{j})$.
The same derivation used to obtain Eqn. 13 can be followed to deduce that optimal coefficients in the case of second order polynomials are given by $\boldsymbol{a}^{*} = -\mathbb{V}[\boldsymbol{x}^{*}]^{-1}\mathrm{Cov}[\boldsymbol{x}^{*}, h]$. Similarly the ZV strategy with degree 2 polynomials can be expected to reduce variance when a linear combination of the components of $\boldsymbol{x}^{*}$ is highly correlated with the target function $h$.
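To make the construction concrete, the sketch below applies degree 2 ZV with plug-in coefficients to a toy one-dimensional target (a standard Gaussian with $h(\theta)=\theta^{2}$ and $\mathbf{z}=\frac{1}{2}\nabla\log\pi$, all invented for illustration); in this special case the degree 2 scheme is exact up to coefficient estimation error:

```python
import random
import statistics

# Toy degree-2 ZV illustration: target pi = N(0,1), h(theta) = theta^2,
# z = (1/2) d/dtheta log pi = -theta/2. With d = 1 the vector x* has
# d(d+3)/2 = 2 components: z (coefficient a) and 1 + 2*theta*z (coefficient B).
random.seed(1)
thetas = [random.gauss(0.0, 1.0) for _ in range(20000)]

h  = [t * t for t in thetas]
x1 = [-t / 2.0 for t in thetas]                     # z
x2 = [1.0 + 2.0 * t * (-t / 2.0) for t in thetas]   # 1 + 2*theta*z

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (len(u) - 1)

# Plug-in optimal coefficients a* = -Var(x*)^{-1} Cov(x*, h)  (2x2 solve)
s11, s12, s22 = cov(x1, x1), cov(x1, x2), cov(x2, x2)
c1, c2 = cov(x1, h), cov(x2, h)
det = s11 * s22 - s12 * s12
a1 = -( s22 * c1 - s12 * c2) / det
a2 = -(-s12 * c1 + s11 * c2) / det

h_tilde = [hi + a1 * u + a2 * v for hi, u, v in zip(h, x1, x2)]
print(statistics.variance(h), statistics.variance(h_tilde))
```

Here the estimated coefficients approach $(a, B) = (0, 1)$, for which $\tilde{h}(\theta) = \theta^{2} + (1 - \theta^{2}) = 1$ identically, so the controlled estimator has near-zero variance while retaining the correct mean.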
Appendix D Formulae for Bayesian linear regression
D.1 Known precision
In this conjugate setting the power posterior $p(\theta|\mathbf{y},t) \propto p(\mathbf{y}|\theta)^{t}p(\theta)$ is multivariate Gaussian with mean and covariance available in closed form, whilst the integrand $\mathbb{E}_{\theta|\mathbf{y},t}[\log p(\mathbf{y}|\theta)]$ also has a closed-form expression, and the model evidence itself is available analytically.
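For intuition, the sketch below evaluates the known-precision evidence in two equivalent ways for a simple single-coefficient regression; the data, noise scale $\sigma$ and prior scale $\tau$ are invented for the example and are not the paper's specification:

```python
import math
import random

# Bayesian linear regression with known noise precision (illustration):
# y_i = theta * x_i + eps_i, eps_i ~ N(0, sigma^2), prior theta ~ N(0, tau^2).
random.seed(0)
sigma, tau = 0.5, 2.0
xs = [0.1 * i for i in range(10)]
theta_true = 1.3
ys = [theta_true * x + random.gauss(0.0, sigma) for x in xs]
n = len(xs)

# Route 1: conjugate update (complete the square in theta)
lam_n = 1.0 / tau**2 + sum(x * x for x in xs) / sigma**2    # posterior precision
mu_n = (sum(x * y for x, y in zip(xs, ys)) / sigma**2) / lam_n
log_ev1 = (-0.5 * n * math.log(2 * math.pi * sigma**2)
           - 0.5 * math.log(lam_n * tau**2)
           - sum(y * y for y in ys) / (2 * sigma**2)
           + 0.5 * mu_n**2 * lam_n)

# Route 2: marginal y ~ N(0, sigma^2 I + tau^2 x x^T), evaluated with the
# matrix determinant lemma and Sherman-Morrison for the rank-one covariance.
s2 = sigma**2
xtx = sum(x * x for x in xs)
xty = sum(x * y for x, y in zip(xs, ys))
logdet = n * math.log(s2) + math.log(1.0 + tau**2 * xtx / s2)
quad = (sum(y * y for y in ys) / s2
        - (tau**2 / s2**2) * xty**2 / (1.0 + tau**2 * xtx / s2))
log_ev2 = -0.5 * n * math.log(2 * math.pi) - 0.5 * logdet - 0.5 * quad

print(log_ev1, log_ev2)   # the two routes agree
```

Agreement of the two routes provides a useful check on any Monte Carlo evidence estimator applied to this model.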
D.2 Unknown precision
Using a logarithmic transformation of the precision parameter we can ensure that the posterior is defined on an unbounded Euclidean space and has exponentially vanishing tails so that, by Eqn. 33, the unbiasedness condition is satisfied. For Model 1 the required gradients of the log-likelihood and log-prior are available in closed form, with the components ordered according to the chosen parametrisation.
Write $X$ for the $n \times d$ design matrix with $i$th row $x_{i}^{T}$. The model evidence, that is the object we wish to estimate, is given in closed form by
$$p(\mathbf{y}) = \frac{1}{(2\pi)^{n/2}}\sqrt{\frac{\det(\Lambda_{0})}{\det(\Lambda_{n})}}\,\frac{b_{0}^{a_{0}}}{b_{n}^{a_{n}}}\,\frac{\Gamma(a_{n})}{\Gamma(a_{0})}$$
where $\Lambda_{n} = \Lambda_{0} + X^{T}X$, $a_{n} = a_{0} + n/2$ and $b_{n} = b_{0} + \frac{1}{2}\left(\mathbf{y}^{T}\mathbf{y} + m_{0}^{T}\Lambda_{0}m_{0} - m_{n}^{T}\Lambda_{n}m_{n}\right)$ with $m_{n} = \Lambda_{n}^{-1}(\Lambda_{0}m_{0} + X^{T}\mathbf{y})$. Derivations for Model 2 are analogous.
Appendix E Formulae for Goodwin Oscillator
The Goodwin oscillator with $g$ species is given by
Here represents the concentration of mRNA for a target gene and represents its corresponding protein product. Additional variables represent intermediate protein species that facilitate a cascade of enzymatic activation that ultimately leads to a negative feedback, via , on the rate at which mRNA is transcribed. The solution of this dynamical system depends upon synthesis rate constants , and degradation rate constants , . The Goodwin oscillator permits oscillatory solutions only when . Following Calderhead and Girolami (2009) we therefore set as a fixed parameter. A -variable Goodwin model as described above therefore has uncertain parameters . The Goodwin oscillator does not permit a closed form solution, meaning that each evaluation of the likelihood function requires the numerical integration of the system in Eqn. 37. Due to the substantive computational challenges associated with model comparison in this setting, we considered only 10 independent runs of population MCMC, each using only iterations.
We consider a realistic setting where only mRNA and protein product are observed, corresponding to . We assume and are both known and take sampling times to be . Parameters were assigned independent prior distributions. We generated data using , , , , , which produce oscillatory dynamics that do not depend heavily upon initial conditions (SFig. 8).
In practice we work with log-transformed parameters. In particular this allows us to verify that ZV methods are valid, since the tails of the transformed posterior vanish exponentially quickly. Sensitivities $s_{j} = \partial\mathbf{x}/\partial\theta_{j}$, defined in the main text, satisfy
$$\frac{\mathrm{d}s_{j}}{\mathrm{d}t} = \frac{\partial\mathbf{f}}{\partial\mathbf{x}}\,s_{j} + \frac{\partial\mathbf{f}}{\partial\theta_{j}}, \qquad (38)$$
where $s_{j} = \mathbf{0}$ at $t = 0$. Eqn. 38 provides a route to compute the sensitivities numerically, when they cannot be obtained analytically, by augmenting the state vector of the dynamical system to include the $s_{j}$.
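The augmentation can be sketched on a toy one-parameter system (an invented stand-in for the Goodwin equations, chosen because its sensitivity is available analytically for checking): for $\mathrm{d}x/\mathrm{d}t = -\theta x$, the sensitivity $s = \partial x/\partial\theta$ solves $\mathrm{d}s/\mathrm{d}t = -\theta s - x$ with $s(0) = 0$.

```python
import math

# Forward sensitivity by state augmentation (toy example):
# dx/dt = -theta*x, ds/dt = -theta*s - x, integrated jointly with RK4.

def rhs(state, theta):
    x, s = state
    return (-theta * x, -theta * s - x)   # augmented state (x, s)

def rk4(state, theta, dt, steps):
    for _ in range(steps):
        k1 = rhs(state, theta)
        k2 = rhs(tuple(v + 0.5 * dt * k for v, k in zip(state, k1)), theta)
        k3 = rhs(tuple(v + 0.5 * dt * k for v, k in zip(state, k2)), theta)
        k4 = rhs(tuple(v + dt * k for v, k in zip(state, k3)), theta)
        state = tuple(v + dt / 6.0 * (a + 2 * b + 2 * c + d)
                      for v, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

theta, x0, T = 0.7, 2.0, 1.5
x_num, s_num = rk4((x0, 0.0), theta, dt=T / 1000, steps=1000)

x_exact = x0 * math.exp(-theta * T)        # analytic solution
s_exact = -T * x0 * math.exp(-theta * T)   # analytic sensitivity dx/dtheta
print(x_num - x_exact, s_num - s_exact)    # both errors are tiny
```

For the Goodwin system the same augmentation applies, with the Jacobian terms supplied by the ODE right-hand side.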
References
- Andradóttir et al. (1993), Variance reduction through smoothing and control variates for Markov chain simulations. ACM T. M. Comput. S. 3(3):167-189.
- Assaraf, R., and Caffarel, M. (1999), Zero-Variance Principle for Monte Carlo Algorithms. Phys. Rev. Lett. 83(23):4682-4685.
- Behrens, G., Friel, N., and Hurn, M. (2012), Tuning tempered transitions. Stat. Comput. 22(1):65-78.
- Calderhead, B., and Girolami, M. (2009), Estimating Bayes factors via thermodynamic integration and population MCMC. Comput. Stat. Data An. 53(12):4028-4045.
- Calderhead, B., and Girolami, M. (2011), Statistical analysis of nonlinear dynamical systems using differential geometric sampling methods. Interface Focus 1(6):821-835.
- Calderhead, B., and Sustik, M. (2012), Sparse Approximate Manifolds for Differential Geometric MCMC. Adv. Neur. In. 25:2888-2896.
- Chen, M. H., Shao, Q. M., and Ibrahim, J. G. (2000), Monte Carlo Methods in Bayesian Computation. Springer New York.
- Chib, S., and Jeliazkov, I. (2001), Marginal likelihood from the Metropolis-Hastings output. J. Am. Stat. Assoc. 96(453):270-281.
- Corduneanu, A., and Bishop, C. M. (2001), Variational Bayesian model selection for mixture distributions. Proceedings of the 8th International Conference on Artificial Intelligence and Statistics: 27-34.
- Del Moral, P., Doucet, A., and Jasra, A. (2006), Sequential Monte Carlo samplers. J. R. Statist. Soc. B 68(3):411-436.
- Dellaportas, P., and Kontoyiannis, I. (2012), Control variates for estimation based on reversible Markov chain Monte Carlo samplers. J. R. Statist. Soc. B 74(1):133-161.
- Didelot et al. (2011), Likelihood-free estimation of model evidence. Bayesian Analysis 6(1):49-76.
- Frenkel, D., and Smit, B. (2002), Understanding Molecular Simulation: From Algorithms to Applications (2nd ed.). Academic Press.
- Friel, N., and Pettitt, A. N. (2008), Marginal likelihood estimation via power posteriors. J. R. Statist. Soc. B 70(3):589-607.
- Friel, N., and Wyse, J. (2012), Estimating the statistical evidence: a review. Stat. Neerl. 66:288-308.
- Friel, N., Hurn, M. A., and Wyse, J. (2014), Improving power posterior estimation of statistical evidence. Stat. Comput., in press.
- Gelfand, A. E., and Dey, D. K. (1994), Bayesian model choice: asymptotics and exact calculations. J. R. Statist. Soc. B 56(3):501-514.
- Gelman, A., and Meng, X.-L. (1998), Simulating normalizing constants: From importance sampling to bridge sampling to path sampling. Stat. Sci. 13(2):163-185.
- Geyer, C. J., and Thompson, E. A. (1995), Annealing Markov chain Monte Carlo with applications to ancestral inference. J. Am. Stat. Assoc. 90(431):909-920.
- Glasserman, P. (2004), Monte Carlo Methods in Financial Engineering. Springer-Verlag New York.
- Girolami, M., and Calderhead, B. (2011), Riemann manifold Langevin and Hamiltonian Monte Carlo methods. J. R. Statist. Soc. B 73(2):1-37.
- Goodwin, B. (1965), Oscillatory behavior in enzymatic control processes. Adv. Enzyme Regul. 3:425-438.
- Green, P., and Han, X. (1992), Metropolis methods, Gaussian proposals, and antithetic variables. Lecture Notes in Statistics, Stochastic Methods and Algorithms in Image Analysis 74:142-164.
- Green, P. J. (1995), Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika 82(4):711-732.
- Hammer, H., and Tjelmeland, H. (2008), Control variates for the Metropolis-Hastings algorithm. Scand. J. Stat. 35(3):400-414.
- Jasra, A., Stephens, D., and Holmes, C. (2007), On population-based simulation for static inference. Stat. Comput. 17(3):263-279.
- Kass, R. E., and Raftery, A. E. (1995), Bayes factors. J. Am. Stat. Assoc. 90(430):773-795.
- Locke, J., Millar, A., and Turner, M. (2005), Modelling genetic networks with noisy and varied experimental data: the circadian clock in Arabidopsis thaliana. J. Theor. Biol. 234(3):383-393.
- Marin, J. M., and Robert, C. P. (2011), Importance sampling methods for Bayesian discrimination between embedded models. In Frontiers of Statistical Decision Making and Bayesian Analysis, Springer New York.
- Marinari, E., and Parisi, G. (1992), Simulated tempering: a new Monte Carlo scheme. Europhys. Lett. 19(6):451.
- Miasojedow, B., Moulines, E., and Vihola, M. (2012), Adaptive Parallel Tempering Algorithm. arXiv:1205.1076.
- Mira, A., Tenconi, P., and Bressanini, D. (2003), Variance reduction for MCMC. Technical Report 2003/29, Università degli Studi dell'Insubria, Italy.
- Mira, A., Solgi, R., and Imparato, D. (2013), Zero Variance Markov Chain Monte Carlo for Bayesian Estimators. Stat. Comput. 23(5):653-662.
- Neal, R. (1996), Sampling from multimodal distributions using tempered transitions. Stat. Comput. 6(4):353-366.
- Neal, R. M. (2001), Annealed importance sampling. Stat. Comput. 11(2):125-139.
- Ogata, Y. (1989), A Monte Carlo method for high dimensional integration. Numer. Math. 55(2):137-157.
- Papamarkou, T., Mira, A., and Girolami, M. (2014), Zero Variance Differential Geometric Markov Chain Monte Carlo Algorithms. Bayesian Analysis, in press.
- Philippe, A., and Robert, C. (2001), Riemann sums for MCMC estimation and convergence monitoring. Stat. Comput. 11(2):103-115.
- Robert, C., and Casella, G. (2004), Monte Carlo Statistical Methods (2nd ed.). Springer-Verlag New York.
- Rubinstein, R. Y., and Marcus, R. (1985), Efficiency of Multivariate Control Variates in Monte Carlo Simulation. Oper. Res. 33(3):661-677.
- Skilling, J. (2006), Nested sampling for general Bayesian computation. Bayesian Analysis 1(4):833-859.
- Toni et al. (2009), Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems. J. R. Soc. Interface 6(31):187-202.
- Torrie, G. M., and Valleau, J. P. (1977), Nonphysical sampling distributions in Monte Carlo free-energy estimation: Umbrella sampling. J. Comput. Phys. 23(2):187-199.
- Vyshemirsky, V., and Girolami, M. A. (2008), Bayesian ranking of biochemical system models. Bioinformatics 24(6):833-839.
- Williams, E. (1959), Regression Analysis. Wiley New York.
- Zhou, Y., Johansen, A. M., and Aston, J. A. D. (2013), Towards Automatic Model Comparison: an Adaptive Sequential Monte Carlo Approach. CRiSM Technical Report 13-04, University of Warwick.
Proof of exactness
In this section we prove that CTI (degree 2) is exact (up to quadrature error) for the Bayesian linear regression model with known precision.
We have from Eqn. 15 that the minimum variance ratio is given by
$$\frac{\mathbb{V}[\tilde{h}]}{\mathbb{V}[h]} = 1 - \max\rho^{2}(h, P),$$
where $\rho$ denotes correlation. Plugging in the expression of Eqn. 34 for the control variate we obtain
$$\frac{\mathbb{V}[\tilde{h}]}{\mathbb{V}[h]} = 1 - \max\rho^{2}\!\left(h,\; \mathrm{tr}(B) + (a + 2B\theta)^{T}\mathbf{z}\right),$$
where the maximum is taken over all symmetric $d\times d$ matrices $B$ and real $d\times 1$ vectors $a$.