
Variational Bayesian Monte Carlo

Luigi Acerbi
Department of Basic Neuroscience
University of Geneva
luigi.acerbi@unige.ch
Website: luigiacerbi.com. Alternative e-mail: luigi.acerbi@gmail.com.
Abstract

Many probabilistic models of interest in scientific computing and machine learning have expensive, black-box likelihoods that prevent the application of standard techniques for Bayesian inference, such as MCMC, which would require access to the gradient or a large number of likelihood evaluations. We introduce here a novel sample-efficient inference framework, Variational Bayesian Monte Carlo (VBMC). VBMC combines variational inference with Gaussian-process based, active-sampling Bayesian quadrature, using the latter to efficiently approximate the intractable integral in the variational objective. Our method produces both a nonparametric approximation of the posterior distribution and an approximate lower bound of the model evidence, useful for model selection. We demonstrate VBMC both on several synthetic likelihoods and on a neuronal model with data from real neurons. Across all tested problems and dimensions (up to $D = 10$), VBMC performs consistently well in reconstructing the posterior and the model evidence with a limited budget of likelihood evaluations, unlike other methods that work only in very low dimensions. Our framework shows great promise as a novel tool for posterior and model inference with expensive, black-box likelihoods.

 


32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada.

1 Introduction

In many scientific, engineering, and machine learning domains, such as computational neuroscience and big data, complex black-box computational models are routinely used to estimate model parameters and compare hypotheses instantiated by different models. Bayesian inference allows us to do so in a principled way that accounts for parameter and model uncertainty by computing the posterior distribution over parameters and the model evidence, also known as marginal likelihood or Bayes factor. However, Bayesian inference is generally analytically intractable, and the statistical tools of approximate inference, such as Markov Chain Monte Carlo (MCMC) or variational inference, generally require knowledge about the model (e.g., access to the gradients) and/or a large number of model evaluations. Neither of these requirements can be met by black-box probabilistic models with computationally expensive likelihoods, precluding the application of standard Bayesian techniques of parameter and model uncertainty quantification to domains that would most need them.

Given a dataset $\mathcal{D}$ and model parameters $\theta \in \mathbb{R}^D$, here we consider the problem of computing both the posterior $p(\theta|\mathcal{D})$ and the marginal likelihood (or model evidence) $p(\mathcal{D})$, defined as, respectively,

$p(\theta | \mathcal{D}) = \frac{p(\mathcal{D} | \theta)\, p(\theta)}{p(\mathcal{D})}, \qquad p(\mathcal{D}) = \int p(\mathcal{D} | \theta)\, p(\theta)\, d\theta, \qquad (1)$

where $p(\mathcal{D}|\theta)$ is the likelihood of the model of interest and $p(\theta)$ is the prior over parameters. Crucially, we consider the case in which $p(\mathcal{D}|\theta)$ is a black-box, expensive function for which we have a limited budget of function evaluations (of the order of a few hundred).

A promising approach to deal with such computational constraints consists of building a probabilistic model-based approximation of the function of interest, for example via Gaussian processes (GP) [1]. This statistical surrogate can be used in lieu of the original, expensive function, allowing faster computations. Moreover, uncertainty in the surrogate can be used to actively guide sampling of the original function to obtain a better approximation in regions of interest for the application at hand. This approach has been extremely successful in Bayesian optimization [2, 3, 4, 5] and in Bayesian quadrature for the computation of intractable integrals [6, 7].

In particular, recent works have applied GP-based Bayesian quadrature to the estimation of the marginal likelihood [7, 8, 9, 10], and GP surrogates to build approximations of the posterior [11, 12]. However, none of the existing approaches deals simultaneously with posterior and model inference. Moreover, it is unclear how these approximate methods would deal with likelihoods with realistic properties, such as medium dimensionality (up to $D \sim 10$), mild multi-modality, heavy tails, and parameters that exhibit strong correlations—all common issues of real-world applications.

In this work, we introduce Variational Bayesian Monte Carlo (VBMC), a novel approximate inference framework that combines variational inference and active-sampling Bayesian quadrature via GP surrogates (code available at https://github.com/lacerbi/vbmc). Our method affords simultaneous approximation of the posterior and of the model evidence in a sample-efficient manner. We demonstrate the robustness of our approach by testing VBMC and other inference algorithms on a variety of synthetic likelihoods with realistic, challenging properties. We also apply our method to a real problem in computational neuroscience, by fitting a model of neuronal selectivity in visual cortex [13]. Among the tested methods, VBMC is the only one with consistently good performance across problems, showing promise as a novel tool for posterior and model inference with expensive likelihoods in scientific computing and machine learning.

2 Theoretical background

2.1 Variational inference

Variational Bayes is an approximate inference method whereby the posterior $p(\theta|\mathcal{D})$ is approximated by a simpler distribution $q(\theta) \equiv q_\phi(\theta)$ that usually belongs to a parametric family [14, 15]. The goal of variational inference is to find the variational parameters $\phi$ for which the variational posterior $q_\phi$ "best" approximates the true posterior. In variational methods, the mismatch between the two distributions is quantified by the Kullback-Leibler (KL) divergence,

$\text{KL}\left[ q_\phi(\theta) \,\|\, p(\theta|\mathcal{D}) \right] = \mathbb{E}_\phi \left[ \log \frac{q_\phi(\theta)}{p(\theta|\mathcal{D})} \right], \qquad (2)$

where we adopted the compact notation $\mathbb{E}_\phi \equiv \mathbb{E}_{q_\phi}$. Inference is then reduced to an optimization problem, that is, finding the variational parameter vector $\phi$ that minimizes Eq. 2. We rewrite Eq. 2 as

$\text{KL}\left[ q_\phi(\theta) \,\|\, p(\theta|\mathcal{D}) \right] = -\mathcal{F}\left[ q_\phi \right] + \log p(\mathcal{D}), \qquad (3)$

where

$\mathcal{F}\left[ q_\phi \right] = \mathbb{E}_\phi\left[ \log \frac{p(\mathcal{D}|\theta)\, p(\theta)}{q_\phi(\theta)} \right] = \mathbb{E}_\phi\left[ f(\theta) \right] + \mathcal{H}\left[ q_\phi(\theta) \right] \qquad (4)$

is the negative free energy, or evidence lower bound (ELBO). Here $f(\theta) \equiv \log p(\mathcal{D}|\theta)\, p(\theta) = \log p(\mathcal{D}, \theta)$ is the log joint probability and $\mathcal{H}[q_\phi(\theta)]$ is the entropy of $q_\phi(\theta)$. Note that since the KL divergence is always non-negative, from Eq. 3 we have $\mathcal{F}[q_\phi] \le \log p(\mathcal{D})$, with equality holding if $q_\phi(\theta) \equiv p(\theta|\mathcal{D})$. Thus, maximization of the variational objective, Eq. 4, is equivalent to minimization of the KL divergence, and produces both an approximation of the posterior and a lower bound on the marginal likelihood, which can be used as a metric for model selection.

Normally, $q(\theta)$ is chosen to belong to a family (e.g., a factorized posterior, or mean field) such that the expected log joint in Eq. 4 and the entropy can be computed analytically, possibly providing closed-form equations for a coordinate ascent algorithm. Here, we assume that $f(\theta)$, like many computational models of interest, is an expensive black-box function, which prevents a direct computation of Eq. 4 analytically or via simple numerical integration.

2.2 Bayesian quadrature

Bayesian quadrature, also known as Bayesian Monte Carlo, is a means to obtain Bayesian estimates of the mean and variance of non-analytical integrals of the form $\langle f \rangle = \int f(x)\, \pi(x)\, dx$, defined on a domain $\mathcal{X} = \mathbb{R}^D$ [6, 7]. Here, $f$ is a function of interest and $\pi$ a known probability distribution. Typically, a Gaussian process (GP) prior is specified for $f(x)$.

Gaussian processes

GPs are a flexible class of models for specifying prior distributions over unknown functions $f: \mathcal{X} \subseteq \mathbb{R}^D \to \mathbb{R}$ [1]. GPs are defined by a mean function $m: \mathcal{X} \to \mathbb{R}$ and a positive definite covariance, or kernel function $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$. In Bayesian quadrature, a common choice is the Gaussian kernel $k(x, x') = \sigma_f^2 \exp\left\{ -\frac{1}{2} (x - x')^\top \Sigma_\ell^{-1} (x - x') \right\}$, with $\Sigma_\ell = \text{diag}\left[ \ell^{(1)2}, \ldots, \ell^{(D)2} \right]$, where $\sigma_f$ is the output scale and $\ell$ is the vector of input length scales. Conditioned on training inputs $\mathbf{X} = \{x_1, \ldots, x_n\}$ and associated function values $\mathbf{y} = f(\mathbf{X})$, the GP posterior will have latent posterior conditional mean $\bar{f}_\Xi(x)$ and covariance $C_\Xi(x, x')$ in closed form (see [1]), where $\Xi = \{\mathbf{X}, \mathbf{y}\}$ is the set of training function data for the GP and $\psi$ is a hyperparameter vector for the GP mean, covariance, and likelihood.
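To make the GP surrogate concrete, here is a minimal NumPy sketch (not the paper's implementation) of the GP posterior mean and covariance with a squared exponential kernel, a constant mean, and observation noise. All hyperparameter values and the toy 1-D log density are illustrative, and the kernel omits the rescaled-Gaussian normalization convention used later in the text (the difference is a constant factor folded into the output scale).

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma_f, ell):
    """Squared exponential kernel with output scale sigma_f and length scales ell."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) / ell) ** 2   # scaled squared distances
    return sigma_f**2 * np.exp(-0.5 * d2.sum(-1))

def gp_posterior(X, y, Xstar, sigma_f, ell, sigma_obs, m0=0.0):
    """Posterior mean and covariance at test points Xstar, for a GP with constant mean m0."""
    K = gaussian_kernel(X, X, sigma_f, ell) + sigma_obs**2 * np.eye(len(X))
    Ks = gaussian_kernel(Xstar, X, sigma_f, ell)
    Kss = gaussian_kernel(Xstar, Xstar, sigma_f, ell)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y - m0))
    mean = m0 + Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    cov = Kss - v.T @ v
    return mean, cov

# Toy usage: surrogate of a 1-D log density from a handful of evaluations
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(8, 1))
y = -0.5 * X[:, 0] ** 2                      # log density of a standard normal, up to a constant
Xstar = np.linspace(-3, 3, 5)[:, None]
mu, C = gp_posterior(X, y, Xstar, sigma_f=1.0, ell=np.array([1.0]), sigma_obs=1e-3)
```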

Bayesian integration

Since integration is a linear operator, if $f$ is a GP, the posterior mean and variance of the integral $\int f(x)\, \pi(x)\, dx$ are [7]

$\mathbb{E}_{f | \Xi}\left[ \langle f \rangle \right] = \int \bar{f}_{\Xi}(x)\, \pi(x)\, dx, \qquad \mathbb{V}_{f | \Xi}\left[ \langle f \rangle \right] = \int \int C_{\Xi}(x, x')\, \pi(x)\, \pi(x')\, dx\, dx'. \qquad (5)$

Crucially, if $f$ has a Gaussian kernel and $\pi$ is a Gaussian or mixture of Gaussians (among other functional forms), the integrals in Eq. 5 can be computed analytically.
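As a sanity check of this claim, the following sketch (assuming a 1-D setting with illustrative hyperparameter values) verifies numerically the Gaussian identity that makes the integrals in Eq. 5 analytic when the kernel is Gaussian and $\pi$ is a Gaussian.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Assumed 1-D setup: squared exponential kernel and Gaussian pi(x) = N(x; mu_pi, sd_pi^2)
sigma_f, ell = 1.3, 0.7          # illustrative kernel hyperparameters
mu_pi, sd_pi = 0.5, 1.2
x_n = -0.4                        # one training input

def k(x, xp):
    return sigma_f**2 * np.exp(-0.5 * (x - xp) ** 2 / ell**2)

# Analytic kernel integral: int k(x, x_n) pi(x) dx
#   = sigma_f^2 * sqrt(2*pi) * ell * N(x_n; mu_pi, ell^2 + sd_pi^2)
z_analytic = sigma_f**2 * np.sqrt(2 * np.pi) * ell * norm.pdf(x_n, mu_pi, np.sqrt(ell**2 + sd_pi**2))

# Numerical check of the same integral
z_numeric, _ = quad(lambda x: k(x, x_n) * norm.pdf(x, mu_pi, sd_pi), -np.inf, np.inf)

print(z_analytic, z_numeric)      # the two values agree to numerical precision
```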

Active sampling

For a given budget of samples $n_{\max}$, a smart choice of the input samples $\mathbf{X}$ would aim to minimize the posterior variance of the final integral (Eq. 5) [10]. Interestingly, for a standard GP and fixed GP hyperparameters $\psi$, the optimal variance-minimizing design does not depend on the function values at $\mathbf{X}$, thereby allowing precomputation of the optimal design. However, if the GP hyperparameters are updated online, or the GP is warped (e.g., via a log transform [8] or a square-root transform [9]), the variance of the posterior will depend on the function values obtained so far, and an active sampling strategy is desirable. The acquisition function $a: \mathcal{X} \to \mathbb{R}$ determines which point in $\mathcal{X}$ should be evaluated next via a proxy optimization $x_{\text{next}} = \operatorname{argmax}_x a(x)$. Examples of acquisition functions for Bayesian quadrature include the expected entropy, which minimizes the expected entropy of the integral after adding $x$ to the training set [8], and the much faster to compute uncertainty sampling strategy, which maximizes the variance of the integrand at $x$ [9].

3 Variational Bayesian Monte Carlo (VBMC)

We introduce here Variational Bayesian Monte Carlo (VBMC), a sample-efficient inference method that combines variational Bayes and Bayesian quadrature, particularly useful for models with (moderately) expensive likelihoods. The main steps of VBMC are described in Algorithm 1, and an example run of VBMC on a nontrivial 2-D target density is shown in Fig. 1.

VBMC in a nutshell

In each iteration $t$, the algorithm: (1) sequentially samples a batch of 'promising' new points that maximize a given acquisition function, and evaluates the (expensive) log joint $f$ at each of them; (2) trains a GP model of the log joint $f$, given the training set $\Xi_t$ of points evaluated so far; (3) updates the variational posterior approximation, indexed by $\phi_t$, by optimizing the ELBO. This loop repeats until the budget of function evaluations is exhausted, or some other termination criterion is met (e.g., based on the stability of the found solution). VBMC includes an initial warm-up stage to avoid spending computations in regions of low posterior probability mass (see Section 3.5). In the following sections, we describe various features of VBMC.

1: Input: target log joint $f$, starting point $x_0$, plausible bounds PLB, PUB, additional options
2: Initialization: $t \leftarrow 0$, initialize variational posterior $\phi_0$, StopSampling $\leftarrow$ false
3: repeat
4:     $t \leftarrow t + 1$
5:     if $t = 1$ then ▷ Initial design, Section 3.5
6:         Evaluate $y_0 \leftarrow f(x_0)$ and add $(x_0, y_0)$ to the training set $\Xi$
7:         for $2 \ldots n_{\text{init}}$ do
8:             Sample a new point $x_{\text{new}}$ uniformly inside the plausible box
9:             Evaluate $y_{\text{new}} \leftarrow f(x_{\text{new}})$ and add $(x_{\text{new}}, y_{\text{new}})$ to the training set $\Xi$
10:     else
11:         for $1 \ldots n_{\text{active}}$ do ▷ Active sampling, Section 3.3
12:             Actively sample a new point $x_{\text{new}} \leftarrow \operatorname{argmax}_x a(x)$
13:             Evaluate $y_{\text{new}} \leftarrow f(x_{\text{new}})$ and add $(x_{\text{new}}, y_{\text{new}})$ to the training set $\Xi$
14:             for each GP hyperparameter sample $\psi_s$, perform rank-1 update of the GP posterior
15:     if not StopSampling then ▷ GP hyperparameter training, Section 3.4
16:         $\{\psi_1, \ldots, \psi_{n_{\text{gp}}}\} \leftarrow$ Sample GP hyperparameters
17:     else
18:         $\psi \leftarrow$ Optimize GP hyperparameters (MAP estimate)
19:     $K_t \leftarrow$ Update number of variational components ▷ Section 3.6
20:     $\phi_t \leftarrow$ Optimize ELBO via stochastic gradient descent ▷ Section 3.2
21:     Evaluate whether to StopSampling and other TerminationCriteria
22: until fevals $>$ MaxFunEvals or TerminationCriteria ▷ Stopping criteria, Section 3.7
23: return variational posterior $\phi_t$, $\mathbb{E}\left[\text{ELBO}\right]$, $\sqrt{\mathbb{V}\left[\text{ELBO}\right]}$
Algorithm 1: Variational Bayesian Monte Carlo
Figure 1: Example run of VBMC on a 2-D pdf. A. Contour plots of the variational posterior at different iterations of the algorithm. Red crosses indicate the centers of the variational mixture components, black dots are the training samples. B. ELBO as a function of iteration. Shaded area is 95% CI of the ELBO in the current iteration as per the Bayesian quadrature approximation (not the error wrt ground truth). The black line is the true log marginal likelihood (LML). C. True target pdf.

3.1 Variational posterior

We choose for the variational posterior a flexible “nonparametric” family, a mixture of Gaussians with shared covariances, modulo a scaling factor,

$q(\theta) \equiv q_\phi(\theta) = \sum_{k=1}^{K} w_k\, \mathcal{N}\left( \theta; \mu_k, \sigma_k^2 \Sigma \right), \qquad (6)$

where $w_k$, $\mu_k$, and $\sigma_k$ are, respectively, the mixture weight, mean, and scale of the $k$-th component, and $\Sigma$ is a covariance matrix common to all elements of the mixture. In the following, we assume a diagonal matrix $\Sigma \equiv \text{diag}\left[ \lambda^{(1)2}, \ldots, \lambda^{(D)2} \right]$. The variational posterior for a given number of mixture components $K$ is parameterized by $\phi \equiv \left( w_1, \ldots, w_K, \mu_1, \ldots, \mu_K, \sigma_1, \ldots, \sigma_K, \lambda \right)$, which has $K(D + 2) + D$ parameters. The number of components $K$ is set adaptively (see Section 3.6).
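For illustration, here is a minimal NumPy/SciPy sketch (not the reference implementation) of the variational family in Eq. 6, evaluating its density and drawing reparameterized samples; all parameter values below are made up.

```python
import numpy as np
from scipy.stats import multivariate_normal

def q_pdf(theta, w, mu, sigma, lam):
    """Density of the mixture in Eq. 6 at points theta (shape [n, D])."""
    dens = np.zeros(theta.shape[0])
    for wk, mk, sk in zip(w, mu, sigma):
        dens += wk * multivariate_normal.pdf(theta, mean=mk, cov=np.diag((sk * lam) ** 2))
    return dens

def q_sample(n, w, mu, sigma, lam, rng):
    """Draw n reparameterized samples from the mixture."""
    D = mu.shape[1]
    ks = rng.choice(len(w), size=n, p=w)            # pick mixture components
    eps = rng.standard_normal((n, D))                # standard normal noise
    return mu[ks] + sigma[ks, None] * lam[None, :] * eps

# Illustrative 2-D example with K = 2 components
rng = np.random.default_rng(1)
w = np.array([0.3, 0.7])
mu = np.array([[0.0, 0.0], [2.0, 1.0]])
sigma = np.array([0.5, 1.5])
lam = np.array([1.0, 0.4])
samples = q_sample(1000, w, mu, sigma, lam, rng)
print(q_pdf(samples[:5], w, mu, sigma, lam))
```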

3.2 The evidence lower bound

We approximate the ELBO (Eq. 4) in two ways. First, we approximate the log joint probability $f(\theta)$ with a GP with a squared exponential (rescaled Gaussian) kernel, a Gaussian likelihood with observation noise $\sigma_{\text{obs}} > 0$ (for numerical stability [16]), and a negative quadratic mean function, defined as

$m(x) = m_0 - \frac{1}{2} \sum_{i=1}^{D} \frac{\left( x^{(i)} - x_m^{(i)} \right)^2}{\omega^{(i)2}}, \qquad (7)$

where $m_0$ is the maximum value of the mean, $x_m$ is the location of the maximum, and $\omega$ is a vector of length scales. This mean function, unlike for example a constant mean, ensures that the posterior GP predictive mean is a proper log probability distribution (that is, it is integrable when exponentiated). Crucially, our choice of variational family (Eq. 6) and of kernel, likelihood, and mean function of the GP affords an analytical computation of the posterior mean and variance of the expected log joint (using Eq. 5), and of their gradients (see Supplementary Material for details). Second, we approximate the entropy of the variational posterior, $\mathcal{H}[q_\phi]$, via simple Monte Carlo sampling, and we propagate its gradient through the samples via the reparametrization trick [17]. (We also tried a deterministic approximation of the entropy proposed in [18], with mixed results.) Armed with expressions for the mean expected log joint, the entropy, and their gradients, we can efficiently optimize the (negative) mean ELBO via stochastic gradient descent [19].
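Below is a minimal NumPy sketch of the Monte Carlo entropy estimate described above; in the actual algorithm the gradient of this estimate is propagated through the reparameterized samples (e.g., by an automatic differentiation framework), which the sketch omits.

```python
import numpy as np

def entropy_mc(w, mu, sigma, lam, n_samples, rng):
    """Monte Carlo estimate of the entropy of the mixture q_phi (Eq. 6), with samples
    drawn via the reparameterization theta = mu_k + sigma_k * lam * eps."""
    K, D = mu.shape
    H = 0.0
    for k in range(K):
        eps = rng.standard_normal((n_samples, D))
        theta = mu[k] + sigma[k] * lam * eps          # reparameterized samples from component k
        logq = np.full(n_samples, -np.inf)            # accumulate log q_phi(theta)
        for j in range(K):
            cov_diag = (sigma[j] * lam) ** 2
            logNj = -0.5 * (np.sum((theta - mu[j]) ** 2 / cov_diag, axis=1)
                            + np.sum(np.log(2 * np.pi * cov_diag)))
            logq = np.logaddexp(logq, np.log(w[j]) + logNj)
        H -= w[k] * np.mean(logq)
    return H

rng = np.random.default_rng(0)
w = np.array([0.5, 0.5]); mu = np.array([[0.0, 0.0], [3.0, 0.0]])
sigma = np.array([1.0, 1.0]); lam = np.array([1.0, 1.0])
print(entropy_mc(w, mu, sigma, lam, n_samples=2000, rng=rng))
```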

Evidence lower confidence bound

We define the evidence lower confidence bound (ELCBO) as

$\text{ELCBO}(\phi, f) = \mathbb{E}_{f | \Xi}\left[ \mathbb{E}_\phi[f] \right] + \mathcal{H}\left[ q_\phi \right] - \beta_{\text{LCB}} \sqrt{ \mathbb{V}_{f | \Xi}\left[ \mathbb{E}_\phi[f] \right] }, \qquad (8)$

where the first two terms are the ELBO (Eq. 4) estimated via Bayesian quadrature, and the last term is the uncertainty in the computation of the expected log joint multiplied by a risk-sensitivity parameter $\beta_{\text{LCB}}$ (we set $\beta_{\text{LCB}} = 3$ unless specified otherwise). Eq. 8 establishes a probabilistic lower bound on the ELBO, used to assess the improvement of the variational solution (see following sections).

3.3 Active sampling

In VBMC, we are performing active sampling to compute a sequence of integrals $\mathbb{E}_{\phi_1}[f], \mathbb{E}_{\phi_2}[f], \ldots, \mathbb{E}_{\phi_T}[f]$, across iterations $1, \ldots, T$, such that (1) the sequence of variational parameters $\phi_t$ converges to the variational posterior that minimizes the KL divergence with the true posterior, and (2) we have minimum variance on our final estimate of the ELBO. Note how this differs from active sampling in simple Bayesian quadrature, for which we only care about minimizing the variance of a single fixed integral. The ideal acquisition function for VBMC will correctly balance exploration of uncertain regions and exploitation of regions with high probability mass to ensure a fast convergence of the variational posterior as closely as possible to the ground truth.

We describe here two acquisition functions for VBMC based on uncertainty sampling. Let $V_\Xi(\theta)$ be the posterior GP variance at $\theta$ given the current training set $\Xi$. 'Vanilla' uncertainty sampling for VBMC is $a_{\text{us}}(\theta) = V_\Xi(\theta)\, q_\phi(\theta)^2$, where $q_\phi$ is the current variational posterior. Since $a_{\text{us}}$ only maximizes the variance of the integrand under the current variational parameters, we expect it to be lacking in exploration. To promote exploration, we introduce prospective uncertainty sampling,

$a_{\text{pro}}(\theta) = V_\Xi(\theta)\, q_\phi(\theta)\, \exp\left( \bar{f}_\Xi(\theta) \right), \qquad (9)$

where $\bar{f}_\Xi(\theta)$ is the GP posterior predictive mean. $a_{\text{pro}}$ aims at reducing uncertainty of the variational objective both for the current posterior and at prospective locations where the variational posterior might move to in the future, if not already there (high GP posterior mean). The variational posterior in $a_{\text{pro}}$ acts as a regularizer, preventing active sampling from following too eagerly fluctuations of the GP mean. For numerical stability of the GP, we include in all acquisition functions a regularization factor to prevent selection of points too close to existing training points (see Supplementary Material).
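As an illustration, the following sketch implements the two acquisition functions as described above; the GP predictive mean and variance of the log joint and the variational density are assumed to be available as callables, and all function names are illustrative. The crude random-search `propose_next` stand-in replaces the CMA-ES optimization used by VBMC.

```python
import numpy as np

def acq_prospective(theta, gp_mean_fn, gp_var_fn, q_pdf_fn):
    """Prospective uncertainty sampling (Eq. 9): V(theta) * q(theta) * exp(mean(theta))."""
    return gp_var_fn(theta) * q_pdf_fn(theta) * np.exp(gp_mean_fn(theta))

def acq_uncertainty(theta, gp_var_fn, q_pdf_fn):
    """'Vanilla' uncertainty sampling: variance of the integrand q*f under the GP."""
    return gp_var_fn(theta) * q_pdf_fn(theta) ** 2

def propose_next(candidates, gp_mean_fn, gp_var_fn, q_pdf_fn):
    """Pick the candidate point that maximizes the prospective acquisition."""
    vals = acq_prospective(candidates, gp_mean_fn, gp_var_fn, q_pdf_fn)
    return candidates[np.argmax(vals)]
```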

At the beginning of each iteration after the first, VBMC actively samples $n_{\text{active}}$ points ($n_{\text{active}} = 5$ by default in this work). We select each point sequentially, by optimizing the chosen acquisition function via CMA-ES [20], and apply fast rank-one updates of the GP posterior after each acquisition.

3.4 Adaptive treatment of GP hyperparameters

The GP model in VBMC has $3D + 3$ hyperparameters, $\psi = \left( \ell, \sigma_f, \sigma_{\text{obs}}, m_0, x_m, \omega \right)$. We impose an empirical Bayes prior on the GP hyperparameters based on the current training set (see Supplementary Material), and we sample from the posterior over hyperparameters via slice sampling [21]. In each iteration, we collect $n_{\text{gp}}$ samples, where $n_{\text{gp}}$ is a decreasing function of the size $n$ of the current GP training set, with the rationale that we require fewer samples as the posterior over hyperparameters becomes narrower due to more observations. Given samples $\{\psi_1, \ldots, \psi_{n_{\text{gp}}}\}$, and a random variable $\chi$ that depends on $\psi$, we compute the expected mean and variance of $\chi$ as

$\mathbb{E}\left[ \chi | \Xi \right] = \frac{1}{n_{\text{gp}}} \sum_{j=1}^{n_{\text{gp}}} \mathbb{E}\left[ \chi | \Xi, \psi_j \right], \qquad \mathbb{V}\left[ \chi | \Xi \right] = \frac{1}{n_{\text{gp}}} \sum_{j=1}^{n_{\text{gp}}} \mathbb{V}\left[ \chi | \Xi, \psi_j \right] + \text{Var}\left[ \left\{ \mathbb{E}\left[ \chi | \Xi, \psi_j \right] \right\}_{j=1}^{n_{\text{gp}}} \right], \qquad (10)$

where $\text{Var}[\cdot]$ is the sample variance. We use Eq. 10 to compute the GP posterior predictive mean and variances for the acquisition function, and to marginalize the expected log joint over hyperparameters.
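A minimal sketch of the combination rule in Eq. 10, assuming per-hyperparameter-sample posterior means and variances of some quantity are available (the exact sample-variance estimator is an implementation detail):

```python
import numpy as np

def combine_over_hyp_samples(means, variances):
    """Combine per-sample means and variances (Eq. 10): average the means, and average
    the variances plus the sample variance of the means."""
    means = np.asarray(means)
    variances = np.asarray(variances)
    return means.mean(), variances.mean() + means.var(ddof=1)

# e.g., expected log joint estimated under 4 GP hyperparameter samples
m, v = combine_over_hyp_samples([-12.3, -12.1, -12.6, -12.2], [0.04, 0.05, 0.03, 0.06])
```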

The algorithm adaptively switches to a faster maximum-a-posteriori (MAP) estimation of the hyperparameters (via gradient-based optimization) when the additional variability of the expected log joint brought by multiple samples falls below a threshold for several iterations, a signal that sampling is bringing little advantage to the precision of the computation.

3.5 Initialization and warm-up

The algorithm is initialized by providing a starting point $x_0$ (ideally, in a region of high posterior probability mass) and vectors of plausible lower/upper bounds PLB, PUB, that identify a region of high posterior probability mass in parameter space. In the absence of other information, we obtained good results with plausible bounds containing the peak of prior mass in each coordinate dimension, such as the top ~68% probability region (that is, mean ± 1 SD for a Gaussian prior). The initial design consists of the provided starting point(s) $x_0$ and additional points generated uniformly at random inside the plausible box, for a total of $n_{\text{init}}$ points. The plausible box also sets the reference scale for each variable, and in future work might inform other aspects of the algorithm [5]. The VBMC algorithm works in an unconstrained space ($x \in \mathbb{R}^D$), but bound constraints on the variables can be easily handled via a nonlinear remapping of the input space, with an appropriate Jacobian correction of the log probability density [22] (see Section 4.2 and Supplementary Material). (The available code for VBMC currently supports both unbounded variables and bound constraints.)

Warm-up

We initialize the variational posterior with $K = 2$ components in the vicinity of $x_0$, and with small values of $\sigma_1$, $\sigma_2$, and $\lambda$ (relative to the width of the plausible box). The algorithm starts in warm-up mode, during which VBMC tries to quickly improve the ELBO by moving to regions with higher posterior probability. During warm-up, $K$ is clamped to only two components with $w_1 = w_2 = \frac{1}{2}$, and we collect a reduced maximum number of hyperparameter samples. Warm-up ends when the ELCBO (Eq. 8) shows an improvement of less than 1 for three consecutive iterations, suggesting that the variational solution has started to stabilize. At the end of warm-up, we trim the training set by removing points whose value of the log joint probability is more than a given threshold lower than the maximum value observed so far. While not necessary in theory, we found that trimming generally increases the stability of the GP approximation, especially when VBMC is initialized in a region of very low probability under the true posterior. To allow the variational posterior to adapt, we do not actively sample new points in the first iteration after the end of warm-up.

3.6 Adaptive number of variational mixture components

After warm-up, we add and remove variational components following a simple set of rules.

Adding components

We define the current variational solution as improving if the ELCBO of the last iteration is higher than the ELCBO of the past few iterations. In each iteration, we increment the number of components $K$ by 1 if the solution is improving and no mixture component was pruned in the last iteration (see below). To speed up adaptation of the variational solution to a complex true posterior when the algorithm has nearly converged, we further add two extra components if the solution is stable (see below) and no component was recently pruned. Each new component is created by splitting and jittering a randomly chosen existing component. We set a maximum number of components $K_{\max}$, which grows with the size $n$ of the current training set $\Xi$.

Removing components

At the end of each variational optimization, we consider as a candidate for pruning a random mixture component $k$ with mixture weight $w_k < w_{\min}$. We recompute the ELCBO without the selected component (normalizing the remaining weights). If the 'pruned' ELCBO differs from the original ELCBO by less than a threshold $\varepsilon_{\text{prune}}$, we remove the selected component. We iterate the process through all components with weights below threshold. For VBMC, both $w_{\min}$ and $\varepsilon_{\text{prune}}$ are set to small default values.

3.7 Termination criteria

At the end of each iteration, we assign a reliability index $\rho(t) \ge 0$ to the current variational solution based on the following features: change in ELBO between the current and the previous iteration; estimated variance of the ELBO; KL divergence between the current and previous variational posterior (see Supplementary Material for details). By construction, a $\rho(t) \lesssim 1$ is suggestive of a stable solution. The algorithm terminates when obtaining a stable solution for a prescribed number of consecutive iterations (with at most one non-stable iteration in-between), or when reaching a maximum number of function evaluations. The algorithm returns the estimate of the mean and standard deviation of the ELBO (a lower bound on the marginal likelihood), and the variational posterior, from which we can cheaply draw samples for estimating distribution moments, marginals, and other properties of the posterior. If the algorithm terminates before achieving long-term stability, it warns the user and returns a recent solution with the best ELCBO, computed with a conservative (larger) value of $\beta_{\text{LCB}}$.

4 Experiments

We tested VBMC and other common inference algorithms on several artificial and real problems consisting of a target likelihood and an associated prior. The goal of inference consists of approximating the posterior distribution and the log marginal likelihood (LML) with a fixed budget of likelihood evaluations, assumed to be (moderately) expensive.

Algorithms

We tested VBMC with the 'vanilla' uncertainty sampling acquisition function $a_{\text{us}}$ (VBMC-U) and with prospective uncertainty sampling, $a_{\text{pro}}$ (VBMC-P). We also tested simple Monte Carlo (SMC), annealed importance sampling (AIS), the original Bayesian Monte Carlo (BMC), doubly-Bayesian quadrature (BBQ [8]; we also tested BBQ* with approximate GP hyperparameter marginalization, which performed similarly to BBQ), and warped sequential active Bayesian integration (WSABI, both in its linearized and moment-matching variants, WSABI-L and WSABI-M [9]). For the basic setup of these methods, we follow [9]. Most of these algorithms only compute an approximation of the marginal likelihood based on a set of sampled points, but do not directly compute a posterior distribution. We obtain a posterior by training a GP model (equal to the one used by VBMC) on the log joint evaluated at the sampled points, and then drawing $2 \cdot 10^4$ MCMC samples from the GP posterior predictive mean via parallel slice sampling [21, 23]. We also tested two methods for posterior estimation via GP surrogates, BAPE [11] and AGP [12]. Since these methods only compute an approximate posterior, we obtain a crude estimate of the log normalization constant (the LML) as the average difference between the log of the approximate posterior and the evaluated log joint at the top 20% points in terms of posterior density. For all algorithms, we use default settings, allowing only changes based on knowledge of the mean and (diagonal) covariance of the provided prior.

Procedure

For each problem, we allow a fixed budget of $50 \times (D + 2)$ likelihood evaluations, where $D$ is the number of variables. Given the limited number of samples, we judge the quality of the posterior approximation in terms of its first two moments, by computing the "Gaussianized" symmetrized KL divergence (gsKL) between posterior approximation and ground truth. The gsKL is defined as the symmetrized KL between two multivariate normal distributions with mean and covariance equal, respectively, to the moments of the approximate posterior and the moments of the true posterior. We measure the quality of the approximation of the LML in terms of absolute error from ground truth, the rationale being that differences of LML are used for model comparison. Ideally, we want the LML error to be of order 1 or less, since much larger errors could severely affect the results of a comparison (e.g., differences of LML of 10 points or more are often presented as decisive evidence in favor of one model [24]). On the other hand, errors much smaller than 1 can be considered negligible; higher precision is unnecessary. For each algorithm, we ran at least 20 separate runs per test problem with different random seeds, and report the median gsKL and LML error and the 95% CI of the median calculated by bootstrap. For each run, we draw the starting point (if requested by the algorithm) uniformly from a box within 1 prior standard deviation (SD) from the prior mean. We use the same box to define the plausible bounds for VBMC.
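For reference, here is a minimal NumPy sketch of the gsKL metric as described above, moment-matching the approximate posterior samples to a Gaussian. Whether the symmetrized KL is defined as the sum or the average of the two directions is a convention; the sketch uses the average.

```python
import numpy as np

def gauss_kl(mu0, S0, mu1, S1):
    """KL divergence KL[N(mu0, S0) || N(mu1, S1)] between two multivariate normals."""
    D = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - D
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def gskl(approx_samples, true_mean, true_cov):
    """'Gaussianized' symmetrized KL between moment-matched approximation and ground truth."""
    mu_a = approx_samples.mean(axis=0)
    S_a = np.cov(approx_samples, rowvar=False)
    return 0.5 * (gauss_kl(mu_a, S_a, true_mean, true_cov)
                  + gauss_kl(true_mean, true_cov, mu_a, S_a))

# Toy check: samples from the true posterior give a gsKL close to zero
rng = np.random.default_rng(0)
samples = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=5000)
print(gskl(samples, np.zeros(2), np.eye(2)))
```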

4.1 Synthetic likelihoods

Problem set

We built a benchmark set of synthetic likelihoods belonging to three families that represent typical features of target densities (see Supplementary Material for details). Likelihoods in the lumpy family are built out of a mixture of 12 multivariate normals with component means drawn randomly in the unit $D$-hypercube, distinct diagonal covariances with SDs in a fixed range, and mixture weights drawn from a Dirichlet distribution with unit concentration parameter. The lumpy distributions are mildly multimodal, in that modes are nearby and connected by regions with non-negligible probability mass. In the Student family, the likelihood is a multivariate Student's $t$-distribution with diagonal covariance and degrees of freedom equally spaced in a fixed low range across different coordinate dimensions. These distributions have heavy tails which might be problematic for some methods. Finally, in the cigar family the likelihood is a multivariate normal in which one axis is 100 times longer than the others, and the covariance matrix is non-diagonal after a random rotation. The cigar family tests the ability of an algorithm to explore non-axis-aligned directions. For each family, we generated test functions for $D \in \{2, 4, 6, 8, 10\}$, for a total of 15 synthetic problems. For each problem, we pick as a broad prior a multivariate normal with mean centered at the expected mean of the family of distributions, and diagonal covariance matrix with SD equal to 3-4 times the SD in each dimension. For all problems, we compute ground truth values for the LML and the posterior mean and covariance analytically or via multiple 1-D numerical integrals.

Results

We show the results for these problems in Fig. 2 (see Supplementary Material for full results, in higher resolution). Almost all algorithms perform reasonably well in very low dimension ($D = 2$), and in fact several algorithms converge faster than VBMC to the ground truth (e.g., WSABI-L). However, as we increase in dimension, we see that all algorithms start failing, with only VBMC performing consistently well across problems. In particular, besides the simple $D = 2$ case, only VBMC obtains acceptable results with non-axis-aligned distributions (cigar). Some algorithms (such as AGP and BAPE) exhibited large numerical instabilities on the cigar family, despite our best attempts at regularization, such that many runs were unable to complete.

Figure 2: Synthetic likelihoods. A. Median absolute error of the LML estimate with respect to ground truth, as a function of number of likelihood evaluations, on the lumpy (top), Student (middle), and cigar (bottom) problems, for increasing dimension $D$ (columns). B. Median "Gaussianized" symmetrized KL divergence between the algorithm's posterior and ground truth. For both metrics, shaded areas are 95% CI of the median, and we consider a desirable threshold to be below one (dashed line).

4.2 Real likelihoods of neuronal model

Problem set

For a test with real models and data, we consider a computational model of neuronal orientation selectivity in visual cortex [13]. We fit the neural recordings of one V1 and one V2 cell with the authors' neuronal model that combines effects of filtering, suppression, and response nonlinearity [13]. The model is analytical but still computationally expensive due to large datasets and a cascade of several nonlinear operations. For the purpose of our benchmark, we fix some parameters of the original model to their MAP values, yielding an inference problem with $D = 7$ free parameters of experimental interest. We transform bounded parameters to unconstrained space via a logit transform [22], and we place a broad Gaussian prior on each of the transformed variables, based on estimates from other neurons in the same study [13] (see Supplementary Material for more details on the setup). For both datasets, we computed the ground truth with a large number of samples from the posterior, obtained via parallel slice sampling after a long burn-in. We calculated the ground truth LML from posterior MCMC samples via Geyer's reverse logistic regression [25], and we independently validated it with a Laplace approximation, obtained via numerical calculation of the Hessian at the MAP (for both datasets, Geyer's and Laplace's estimates of the LML are within 1 point of each other).

Figure 3: Neuronal model likelihoods. A. Median absolute error of the LML estimate, as a function of number of likelihood evaluations, for two distinct neurons ($D = 7$). B. Median "Gaussianized" symmetrized KL divergence between the algorithm's posterior and ground truth. See also Fig. 2.
Results

For both datasets, VBMC is able to find a reasonable approximation of the LML and of the posterior, whereas no other algorithm produces a usable solution (Fig. 3). Importantly, the behavior of VBMC is fairly consistent across runs (see Supplementary Material). We argue that the superior results of VBMC stem from a better exploration of the posterior landscape, and from a better approximation of the log joint (used in the ELBO), related but distinct features. To show this, we first trained GPs (as we did for the other methods) on the samples collected by VBMC (see Supplementary Material). The posteriors obtained by sampling from the GPs trained on the VBMC samples scored a better gsKL than the other methods (and occasionally better than VBMC itself). Second, we estimated the marginal likelihood with WSABI-L using the samples collected by VBMC. The LML error in this hybrid approach is much lower than the error of WSABI-L alone, but still higher than the LML error of VBMC. These results combined suggest that VBMC builds better (and more stable) surrogate models and obtains higher-quality samples than the compared methods.

The performance of VBMC-U and VBMC-P is similar on synthetic functions, but the 'prospective' acquisition function converges faster on the real problem set, so we recommend $a_{\text{pro}}$ as the default. Besides scoring well on quantitative metrics, VBMC is able to capture nontrivial features of the true posteriors (see Supplementary Material for examples). Moreover, VBMC achieves these results with a relatively small computational cost (see Supplementary Material for discussion).

5 Conclusions

In this paper, we have introduced VBMC, a novel Bayesian inference framework that combines variational inference with active-sampling Bayesian quadrature for models with expensive black-box likelihoods. Our method affords both posterior estimation and model inference by providing an approximate posterior and a lower bound to the model evidence. We have shown on both synthetic and real model-fitting problems that, given a contained budget of likelihood evaluations, VBMC is able to reliably compute valid, usable approximations in realistic scenarios, unlike previous methods whose applicability seems to be limited to very low dimension or simple likelihoods. Our method, thus, represents a novel useful tool for approximate inference in science and engineering.

We believe this is only the starting point to harness the combined power of variational inference and Bayesian quadrature. Not unlike the related field of Bayesian optimization, VBMC paves the way for a wealth of both theoretical work (e.g., analysis of convergence, development of principled acquisition functions) and applied work (e.g., application to case studies of interest, extension to noisy likelihood evaluations, algorithmic improvements), which we plan to pursue as future directions.

Acknowledgments

We thank Robbe Goris for sharing data and code for the neuronal model; Michael Schartner and Rex Liu for comments on an earlier version of the paper; and three anonymous reviewers for useful feedback.

References

  • [1] Rasmussen, C. & Williams, C. K. I. (2006) Gaussian Processes for Machine Learning. (MIT Press).
  • [2] Jones, D. R., Schonlau, M., & Welch, W. J. (1998) Efficient global optimization of expensive black-box functions. Journal of Global Optimization 13, 455–492.
  • [3] Brochu, E., Cora, V. M., & De Freitas, N. (2010) A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599.
  • [4] Shahriari, B., Swersky, K., Wang, Z., Adams, R. P., & de Freitas, N. (2016) Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE 104, 148–175.
  • [5] Acerbi, L. & Ma, W. J. (2017) Practical Bayesian optimization for model fitting with Bayesian adaptive direct search. Advances in Neural Information Processing Systems 30, 1834–1844.
  • [6] O’Hagan, A. (1991) Bayes–Hermite quadrature. Journal of Statistical Planning and Inference 29, 245–260.
  • [7] Ghahramani, Z. & Rasmussen, C. E. (2002) Bayesian Monte Carlo. Advances in Neural Information Processing Systems 15, 505–512.
  • [8] Osborne, M., Garnett, R., Ghahramani, Z., Duvenaud, D. K., Roberts, S. J., & Rasmussen, C. E. (2012) Active learning of model evidence using Bayesian quadrature. Advances in Neural Information Processing Systems 25, 46–54.
  • [9] Gunter, T., Osborne, M. A., Garnett, R., Hennig, P., & Roberts, S. J. (2014) Sampling for inference in probabilistic models with fast Bayesian quadrature. Advances in Neural Information Processing Systems 27, 2789–2797.
  • [10] Briol, F.-X., Oates, C., Girolami, M., & Osborne, M. A. (2015) Frank-Wolfe Bayesian quadrature: Probabilistic integration with theoretical guarantees. Advances in Neural Information Processing Systems 28, 1162–1170.
  • [11] Kandasamy, K., Schneider, J., & Póczos, B. (2015) Bayesian active learning for posterior estimation. Twenty-Fourth International Joint Conference on Artificial Intelligence.
  • [12] Wang, H. & Li, J. (2018) Adaptive Gaussian process approximation for Bayesian inference with expensive likelihood functions. Neural Computation pp. 1–23.
  • [13] Goris, R. L., Simoncelli, E. P., & Movshon, J. A. (2015) Origin and function of tuning diversity in macaque visual cortex. Neuron 88, 819–831.
  • [14] Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., & Saul, L. K. (1999) An introduction to variational methods for graphical models. Machine Learning 37, 183–233.
  • [15] Bishop, C. M. (2006) Pattern Recognition and Machine Learning. (Springer).
  • [16] Gramacy, R. B. & Lee, H. K. (2012) Cases for the nugget in modeling computer experiments. Statistics and Computing 22, 713–722.
  • [17] Kingma, D. P. & Welling, M. (2013) Auto-encoding variational Bayes. Proceedings of the 2nd International Conference on Learning Representations.
  • [18] Gershman, S., Hoffman, M., & Blei, D. (2012) Nonparametric variational inference. Proceedings of the 29th International Coference on Machine Learning.
  • [19] Kingma, D. P. & Ba, J. (2014) Adam: A method for stochastic optimization. Proceedings of the 3rd International Conference on Learning Representations.
  • [20] Hansen, N., Müller, S. D., & Koumoutsakos, P. (2003) Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary Computation 11, 1–18.
  • [21] Neal, R. M. (2003) Slice sampling. Annals of Statistics 31, 705–741.
  • [22] Carpenter, B., Gelman, A., Hoffman, M. D., Lee, D., Goodrich, B., Betancourt, M., Brubaker, M., Guo, J., Li, P., & Riddell, A. (2017) Stan: A probabilistic programming language. Journal of Statistical Software 76.
  • [23] Gilks, W. R., Roberts, G. O., & George, E. I. (1994) Adaptive direction sampling. The Statistician 43, 179–189.
  • [24] Kass, R. E. & Raftery, A. E. (1995) Bayes factors. Journal of the American Statistical Association 90, 773–795.
  • [25] Geyer, C. J. (1994) Estimating normalizing constants and reweighting mixtures. (Technical report).
  • [26] Knuth, D. E. (1992) Two notes on notation. The American Mathematical Monthly 99, 403–422.
  • [27] Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2013) Bayesian Data Analysis (3rd edition). (CRC Press).
  • [28] Yao, Y., Vehtari, A., Simpson, D., & Gelman, A. (2018) Yes, but did it work?: Evaluating variational inference. arXiv preprint arXiv:1802.02538.

Supplementary Material

In this Supplement we include a number of derivations, implementation details, and additional results omitted from the main text.

Code used to generate the results in the paper is available at https://github.com/lacerbi/infbench. The VBMC algorithm is available at https://github.com/lacerbi/vbmc.


Appendix A Computing and optimizing the ELBO

For ease of reference, we recall the expression for the ELBO (negative free energy), for a variational posterior $q_\phi(\theta)$ with $\theta \in \mathbb{R}^D$,

$\mathcal{F}\left[ q_\phi \right] = \mathbb{E}_\phi\left[ f(\theta) \right] + \mathcal{H}\left[ q_\phi(\theta) \right], \qquad \text{(S1)}$

with $f(\theta) \equiv \log p(\mathcal{D}|\theta)\, p(\theta)$ the log joint probability, $\mathbb{E}_\phi \equiv \mathbb{E}_{q_\phi}$, and $\mathcal{H}[q_\phi(\theta)]$ the entropy of the variational posterior,

$q(\theta) \equiv q_\phi(\theta) = \sum_{k=1}^{K} w_k\, \mathcal{N}\left( \theta; \mu_k, \sigma_k^2 \Sigma \right), \qquad \text{(S2)}$

where $w_k$, $\mu_k$, and $\sigma_k$ are, respectively, the mixture weight, mean, and scale of the $k$-th component, and $\Sigma \equiv \text{diag}\left[ \lambda^{(1)2}, \ldots, \lambda^{(D)2} \right]$ is a diagonal covariance matrix common to all elements of the mixture. The variational posterior for a given number of mixture components $K$ is parameterized by $\phi \equiv \left( w_1, \ldots, w_K, \mu_1, \ldots, \mu_K, \sigma_1, \ldots, \sigma_K, \lambda \right)$.

In the following paragraphs we derive expressions for the ELBO and for its gradient. Then, we explain how we optimize it with respect to the variational parameters.

a.1 Stochastic approximation of the entropy

We approximate the entropy of the variational distribution via simple Monte Carlo sampling as follows. Let $\theta_{s,k}$ denote the $s$-th sample drawn from the $k$-th mixture component, and $N_s$ be the number of samples per mixture component. We have

$\mathcal{H}\left[ q_\phi(\theta) \right] = -\int q_\phi(\theta) \log q_\phi(\theta)\, d\theta \approx -\frac{1}{N_s} \sum_{s=1}^{N_s} \sum_{k=1}^{K} w_k \log q_\phi\left( \theta_{s,k} \right), \qquad \theta_{s,k} = \mu_k + \sigma_k \lambda \odot \varepsilon_{s,k}, \quad \varepsilon_{s,k} \sim \mathcal{N}\left( 0, \mathbb{I}_D \right), \qquad \text{(S3)}$

where we used the reparameterization trick [17]. For VBMC, we use a relatively small $N_s$ during the variational optimization, and a much larger $N_s$ for evaluating the ELBO with high precision at the end of each iteration.

a.1.1 Gradient of the entropy

The derivative of the entropy with respect to a variational parameter $\phi_j$ (that is, not a mixture weight) is

(S4)

where from the second to the third row we used the fact that the expected value of the score is zero, $\mathbb{E}_{q_\phi}\left[ \nabla_\phi \log q_\phi(\theta) \right] = 0$.

In particular, for the mixture means $\mu_k^{(i)}$, with $1 \le i \le D$ and $1 \le k \le K$,

(S5)

where we used the expression for the derivative of a multivariate normal density with respect to its mean.

For the mixture scales $\sigma_k$, with $1 \le k \le K$,

(S6)

where we used the expression for the derivative of a multivariate normal density with respect to its scale.

For the common length scales $\lambda^{(i)}$, with $1 \le i \le D$,

(S7)

where we used the expression for the derivative of a multivariate normal density with respect to its length scales.

Finally, the derivative with respect to the variational mixture weight $w_k$, for $1 \le k \le K$, is

(S8)

a.2 Expected log joint

For the expected log joint we have

$\mathbb{E}_\phi\left[ f \right] = \int f(\theta)\, q_\phi(\theta)\, d\theta = \sum_{k=1}^{K} w_k \int f(\theta)\, \mathcal{N}\left( \theta; \mu_k, \sigma_k^2 \Sigma \right) d\theta. \qquad \text{(S9)}$

To solve the integrals in Eq. S9 we approximate $f(\theta)$ with a Gaussian process (GP) with a squared exponential (that is, rescaled Gaussian) covariance function,

$\kappa\left( x, x' \right) = \sigma_f^2\, \Lambda\, \mathcal{N}\left( x; x', \Sigma_\ell \right), \qquad \Sigma_\ell = \text{diag}\left[ \ell^{(1)2}, \ldots, \ell^{(D)2} \right], \qquad \Lambda \equiv (2\pi)^{D/2} \prod_{i=1}^{D} \ell^{(i)}, \qquad \text{(S10)}$

where $\Lambda$ is equal to the normalization factor of the Gaussian. (This choice of notation makes it easy to apply Gaussian identities used in Bayesian quadrature.) For the GP we also assume a Gaussian likelihood with observation noise variance $\sigma_{\text{obs}}^2$ and, for the sake of exposition, a constant mean function $m(x) \equiv m_0$. We will later consider the case of a negative quadratic mean function, as per the main text.

a.2.1 Posterior mean of the integral and its gradient

The posterior predictive mean of the GP, given training data $\Xi = \{\mathbf{X}, \mathbf{y}\}$, where $\mathbf{X}$ are training inputs with associated observed values $\mathbf{y}$, is

$\bar{f}(x) = \kappa\left( x, \mathbf{X} \right) \left[ \kappa\left( \mathbf{X}, \mathbf{X} \right) + \sigma_{\text{obs}}^2 \mathbb{I}_n \right]^{-1} \left( \mathbf{y} - m_0 \right) + m_0. \qquad \text{(S11)}$

Thus, for each integral in Eq. S9 we have in expectation over the GP posterior

$\mathbb{E}_{f | \Xi}\left[ \int f(\theta)\, \mathcal{N}\left( \theta; \mu_k, \sigma_k^2 \Sigma \right) d\theta \right] = \mathbf{z}_k^\top \left[ \kappa\left( \mathbf{X}, \mathbf{X} \right) + \sigma_{\text{obs}}^2 \mathbb{I}_n \right]^{-1} \left( \mathbf{y} - m_0 \right) + m_0, \qquad \text{(S12)}$

where $\mathbf{z}_k$ is an $n$-dimensional vector with entries $z_k^{(p)} = \int \kappa\left( x_p, \theta \right) \mathcal{N}\left( \theta; \mu_k, \sigma_k^2 \Sigma \right) d\theta$ for $1 \le p \le n$. In particular, defining $\Sigma_k \equiv \sigma_k^2 \Sigma + \Sigma_\ell$ for $1 \le k \le K$,

$z_k^{(p)} = \sigma_f^2\, \Lambda\, \mathcal{N}\left( x_p; \mu_k, \Sigma_k \right). \qquad \text{(S13)}$

We can compute derivatives with respect to the variational parameters as

(S14)

The derivative of Eq. S9 with respect to the mixture weight $w_k$ is simply the $k$-th expected integral, Eq. S12.

a.2.2 Posterior variance of the integral

We compute the variance of Eq. S9 under the GP approximation as [7]

$\mathbb{V}_{f | \Xi}\left[ \mathbb{E}_\phi[f] \right] = \int\!\!\int q_\phi(\theta)\, C_\Xi\left( \theta, \theta' \right) q_\phi(\theta')\, d\theta\, d\theta' = \sum_{j=1}^{K} \sum_{k=1}^{K} w_j w_k \int\!\!\int \mathcal{N}\left( \theta; \mu_j, \sigma_j^2 \Sigma \right) C_\Xi\left( \theta, \theta' \right) \mathcal{N}\left( \theta'; \mu_k, \sigma_k^2 \Sigma \right) d\theta\, d\theta', \qquad \text{(S15)}$

where $C_\Xi(\theta, \theta')$ is the GP posterior predictive covariance,

$C_\Xi\left( \theta, \theta' \right) = \kappa\left( \theta, \theta' \right) - \kappa\left( \theta, \mathbf{X} \right) \left[ \kappa\left( \mathbf{X}, \mathbf{X} \right) + \sigma_{\text{obs}}^2 \mathbb{I}_n \right]^{-1} \kappa\left( \mathbf{X}, \theta' \right). \qquad \text{(S16)}$

Thus, each term in Eq. S15 can be written as

(S17)
a.2.3 Negative quadratic mean function

We consider now a GP with a negative quadratic mean function,

$m(x) = m_0 - \frac{1}{2} \sum_{i=1}^{D} \frac{\left( x^{(i)} - x_m^{(i)} \right)^2}{\omega^{(i)2}}. \qquad \text{(S18)}$

With this mean function, for each integral in Eq. S9 we have in expectation over the GP posterior,

(S19)

where we defined

(S20)

a.3 Optimization of the approximate ELBO

In the following paragraphs we describe how we optimize the ELBO in each iteration of VBMC, so as to find the variational posterior that best approximates the current GP model of the posterior.

a.3.1 Reparameterization

For the purpose of the optimization, we reparameterize the variational parameters such that they are defined in a potentially unbounded space. The mixture means, $\mu_k$, remain the same. We switch from mixture scale parameters $\sigma_k$ to their logarithms, $\log \sigma_k$, and similarly from coordinate length scales, $\lambda^{(i)}$, to $\log \lambda^{(i)}$. Finally, we parameterize mixture weights as unbounded variables, $\eta_k \in \mathbb{R}$, such that $w_k \equiv e^{\eta_k} / \sum_{l=1}^{K} e^{\eta_l}$ (softmax function). We compute the appropriate Jacobian for the change of variables and apply it to the gradients calculated in Sections A.1 and A.2.

a.3.2 Choice of starting points

In each iteration, we first perform a quick exploration of the ELBO landscape in the vicinity of the current variational posterior by generating a number of candidate starting points, obtained by randomly jittering, rescaling, and reweighting components of the current variational posterior. In this phase we also add new mixture components, if so requested by the algorithm, by randomly splitting and jittering existing components. We evaluate the ELBO at each candidate starting point, and pick the point with the best ELBO as starting point for the subsequent optimization.

For most iterations we use a small number of candidate starting points, except for the first iteration and the first iteration after the end of warm-up, for which we perform a more extensive search over candidates.

a.3.3 Stochastic gradient descent

We optimize the (negative) ELBO via stochastic gradient descent, using a customized version of Adam [19]. Our modified version of Adam includes a time-decaying learning rate, defined as

$\alpha_t = \alpha_{\min} + \left( \alpha_{\max} - \alpha_{\min} \right) \exp\left( -\frac{t}{\tau} \right), \qquad \text{(S21)}$

where $t$ is the current iteration of the optimizer, $\alpha_{\min}$ and $\alpha_{\max}$ are, respectively, the minimum and maximum learning rate, and $\tau$ is the decay constant. We stop the optimization when the estimated change in function value or in the parameter vector across a moving window of past iterations of the optimization goes below a given threshold.

We set as hyperparameters of the optimizer the standard Adam momentum terms $\beta_1$ and $\beta_2$, a small $\epsilon$ (square root of double precision), the minimum and maximum learning rates $\alpha_{\min}$ and $\alpha_{\max}$, and the decay constant $\tau$. Some of these settings take different values during warm-up and thereafter.
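A minimal sketch of an Adam step with a time-decaying learning rate, under the exponential-decay form assumed in Eq. S21 above; all hyperparameter values shown are illustrative, except that `eps` is set near the square root of double precision, as mentioned in the text.

```python
import numpy as np

def decayed_lr(t, alpha_min, alpha_max, tau):
    """Learning rate decaying exponentially from alpha_max to alpha_min with time constant tau."""
    return alpha_min + (alpha_max - alpha_min) * np.exp(-t / tau)

def adam_step(theta, grad, state, t, alpha_min=1e-3, alpha_max=1e-1, tau=200,
              beta1=0.9, beta2=0.99, eps=1.5e-8):
    """One Adam update with the decaying learning rate (hyperparameter values illustrative)."""
    m, v = state
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)                      # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                      # bias-corrected second moment
    theta = theta - decayed_lr(t, alpha_min, alpha_max, tau) * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v)

# Toy usage: minimize a simple quadratic 0.5 * ||theta - target||^2
theta, state, target = np.zeros(2), (np.zeros(2), np.zeros(2)), np.array([1.0, -2.0])
for t in range(1, 201):
    theta, state = adam_step(theta, theta - target, state, t)
print(theta)    # approaches [1, -2]
```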

Appendix B Algorithmic details

We report here several implementation details of the VBMC algorithm omitted from the main text.

b.1 Regularization of acquisition functions

Active sampling in VBMC is performed by maximizing an acquisition function $a: \mathcal{X} \to \mathbb{R}$, where $\mathcal{X} \subseteq \mathbb{R}^D$ is the support of the target density. In the main text we describe two such functions, uncertainty sampling ($a_{\text{us}}$) and prospective uncertainty sampling ($a_{\text{pro}}$).

A well-known problem with GPs, in particular when using smooth kernels such as the squared exponential, is that they become numerically unstable when the training set contains points which are too close to each other, producing an ill-conditioned Gram matrix. Here we reduce the chance of this happening by introducing a correction factor as follows. For any acquisition function $a(\theta)$, its regularized version $a^{\text{reg}}(\theta)$ is defined as

$a^{\text{reg}}(\theta) = a(\theta)\, \exp\left\{ -\left( \frac{V_{\text{reg}}}{V_{\text{tot}}(\theta)} - 1 \right) \left[\!\left[ V_{\text{tot}}(\theta) < V_{\text{reg}} \right]\!\right] \right\}, \qquad \text{(S22)}$

where $V_{\text{tot}}(\theta)$ is the total posterior predictive variance of the GP at $\theta$ for the given training set $\Xi$, $V_{\text{reg}}$ a regularization parameter, and we denote with $[\![ \cdot ]\!]$ Iverson's bracket [26], which takes value 1 if the expression inside the bracket is true, 0 otherwise. Eq. S22 enforces that the regularized acquisition function does not pick points too close to points in $\mathbf{X}$. For VBMC, we set $V_{\text{reg}}$ to a small positive constant.
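A minimal sketch of this regularization, under the smooth penalty form assumed in Eq. S22 above (acquisition values are left untouched where the total predictive variance exceeds `v_reg`, and exponentially suppressed where it falls below):

```python
import numpy as np

def regularized_acquisition(a_vals, v_tot, v_reg):
    """Apply the correction factor of Eq. S22 (assumed form) to raw acquisition values."""
    v_tot = np.maximum(np.asarray(v_tot, dtype=float), 1e-300)   # guard against division by zero
    penalty = np.where(v_tot < v_reg, np.exp(-(v_reg / v_tot - 1.0)), 1.0)
    return np.asarray(a_vals) * penalty

# A point with tiny predictive variance (too close to training data) is strongly penalized
print(regularized_acquisition([1.0, 1.0], [1e-8, 1e-2], v_reg=1e-4))
```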

b.2 GP hyperparameters and priors

The GP model in VBMC has $3D + 3$ hyperparameters, $\psi = \left( \ell, \sigma_f, \sigma_{\text{obs}}, m_0, x_m, \omega \right)$. We define all scale hyperparameters, that is $\left\{ \ell^{(1)}, \ldots, \ell^{(D)}, \sigma_f, \sigma_{\text{obs}}, \omega^{(1)}, \ldots, \omega^{(D)} \right\}$, in log space.

We assume independent priors on each hyperparameter. For some hyperparameters, we impose as prior a broad Student's $t$ distribution with a given mean, scale, and a small number of degrees of freedom. Following an empirical Bayes approach, mean and scale of the prior might depend on the current training set. For all other hyperparameters we assume a uniform flat prior. GP hyperparameters and their priors are reported in Table S1.

Hyperparameter | Description | Prior mean | Prior scale
$\log \ell^{(i)}$ | Input length scale ($i$-th dimension) | SD |
$\log \sigma_f$ | Output scale | Uniform |
$\log \sigma_{\text{obs}}$ | Observation noise | | 0.5
$m_0$ | Mean function maximum | |
$x_m^{(i)}$ | Mean function location ($i$-th dim.) | Uniform |
$\log \omega^{(i)}$ | Mean function length scale ($i$-th dim.) | Uniform |
Table S1: GP hyperparameters and their priors. See text for more information.

In Table S1, SD[$\cdot$] denotes the sample standard deviation and diam[$\cdot$] the diameter of a set, that is, the maximum element minus the minimum. We define the high posterior density training set, $\Xi_{\text{HPD}}$, constructed by keeping a fraction $f_{\text{HPD}}$ of the training points with highest target density values. For VBMC, we use a value of $f_{\text{HPD}}$ close to 1 (that is, we only ignore a small fraction of the points in the training set).

b.3 Transformation of variables

In VBMC, the problem coordinates are defined in an unbounded internal working space, $x \in \mathbb{R}^D$. All original problem coordinates $x_{\text{orig}}^{(i)}$, for $1 \le i \le D$, are independently transformed by a mapping $g$ defined as follows.

Unbounded coordinates are 'standardized' with respect to the plausible box, that is, recentered and rescaled based on PLB and PUB, where PLB and PUB are here, respectively, the plausible lower bound and plausible upper bound of the coordinate under consideration.

Bounded coordinates are first mapped to an unbounded space via a logit transform, $x = \log \frac{z}{1 - z}$ with $z = \frac{x_{\text{orig}} - \text{LB}}{\text{UB} - \text{LB}}$, where LB and UB are here, respectively, the lower and upper bound of the coordinate under consideration. The remapped variables are then 'standardized' as above, using the remapped PLB and PUB values after the logit transform.

Note that probability densities are transformed under a change of coordinates by a multiplicative factor equal to the inverse of the determinant of the Jacobian of the transformation. Thus, the value of the observed log joint $f$ used by VBMC relates to the value $f_{\text{orig}}$ of the log joint density, observed in the original (untransformed) coordinates, as follows,

$f(x) = f_{\text{orig}}\left( g^{-1}(x) \right) - \sum_{i=1}^{D} \log g'^{(i)}\left( x_{\text{orig}}^{(i)} \right), \qquad \text{(S23)}$

where $g'^{(i)}$ is the derivative of the transformation for the $i$-th coordinate, and $x_{\text{orig}} = g^{-1}(x)$. See for example [22] for more information on transformations of variables.
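A minimal sketch of the mapping for a bounded coordinate and of its log-Jacobian term; the exact standardization constants (taken here from the remapped plausible bounds) are an assumption about the convention.

```python
import numpy as np

def logit(z):
    return np.log(z) - np.log1p(-z)

def to_unbounded(x_orig, lb, ub, plb, pub):
    """Map a bounded coordinate to the internal space and return the log-Jacobian correction."""
    u = logit((x_orig - lb) / (ub - lb))                        # logit-transformed value
    dudx = 1.0 / (x_orig - lb) + 1.0 / (ub - x_orig)            # derivative of the logit mapping
    # standardize using the plausible bounds mapped through the same logit
    u_lo, u_hi = logit((plb - lb) / (ub - lb)), logit((pub - lb) / (ub - lb))
    center, scale = 0.5 * (u_lo + u_hi), 0.5 * (u_hi - u_lo)
    x = (u - center) / scale
    log_jacobian = np.log(dudx) - np.log(scale)                 # log of d x / d x_orig
    return x, log_jacobian

# Per Eq. S23, the transformed log joint is f_orig(x_orig) minus the returned log-Jacobian
x, lj = to_unbounded(0.7, lb=0.0, ub=1.0, plb=0.2, pub=0.8)
print(x, lj)
```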

b.4 Termination criteria

The VBMC algorithm terminates when reaching a maximum number of target density evaluations, or when achieving long-term stability of the variational solution, as described below.

b.4.1 Reliability index

At the end of each iteration $t$ of the VBMC algorithm, we compute a set of reliability features $\rho_i(t)$ of the current variational solution.

  1. The absolute change in mean ELBO from the previous iteration:

    $\rho_1(t) = \frac{\left| \mathbb{E}\left[ \text{ELBO}(t) \right] - \mathbb{E}\left[ \text{ELBO}(t-1) \right] \right|}{\Delta_{\text{SD}}}, \qquad \text{(S24)}$

    where $\Delta_{\text{SD}}$ is a tolerance parameter on the error of the ELBO.

  2. The uncertainty of the current ELBO:

    $\rho_2(t) = \frac{\sqrt{ \mathbb{V}\left[ \text{ELBO}(t) \right] }}{\Delta_{\text{SD}}}. \qquad \text{(S25)}$
  3. The change in symmetrized KL divergence between the current variational posterior and the one from the previous iteration:

    $\rho_3(t) = \frac{ \text{KL}\left[ q_{\phi_t} \,\|\, q_{\phi_{t-1}} \right] + \text{KL}\left[ q_{\phi_{t-1}} \,\|\, q_{\phi_t} \right] }{ 2\, \Delta_{\text{KL}} }, \qquad \text{(S26)}$

    where for Eq. S26 we use the Gaussianized KL divergence (that is, we compare solutions only based on their mean and covariance), and $\Delta_{\text{KL}}$ is a tolerance parameter for differences in the variational posterior.

The tolerance parameters $\Delta_{\text{SD}}$ and $\Delta_{\text{KL}}$ are chosen such that $\rho_i(t) \lesssim 1$, for $1 \le i \le 3$, for features that are deemed indicative of a good solution. For VBMC, we set both tolerances to small default values.

The reliability index $\rho(t)$ at iteration $t$ is obtained by averaging the individual reliability features $\rho_i(t)$.
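A minimal sketch of the reliability index computation, under the assumption (as reconstructed in Eqs. S24-S26) that each feature is simply the corresponding quantity divided by its tolerance; all numerical values below are illustrative.

```python
import numpy as np

def reliability_index(delta_elbo, elbo_sd, skl_change, tol_elbo, tol_kl):
    """Reliability features (Eqs. S24-S26, assumed forms) and their average."""
    rho = np.array([
        abs(delta_elbo) / tol_elbo,   # change in mean ELBO from the previous iteration
        elbo_sd / tol_elbo,           # uncertainty (SD) of the current ELBO
        skl_change / tol_kl,          # Gaussianized symmetrized KL between successive posteriors
    ])
    return rho, rho.mean()

rho, rho_index = reliability_index(delta_elbo=0.05, elbo_sd=0.1, skl_change=0.02,
                                   tol_elbo=0.1, tol_kl=0.05)
stable = bool(np.all(rho < 1) and rho_index < 1)
```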

b.4.2 Long-term stability termination condition

The long-term stability termination condition is reached at iteration $t$ when:

  1. all reliability features $\rho_i(t)$ are below 1;

  2. the reliability index $\rho$ has remained below 1 for the past $n_{\text{stable}}$ iterations (with the exception of at most one iteration, excluding the current one);

  3. the slope of the ELCBO computed across the past $n_{\text{stable}}$ iterations is below a given threshold, suggesting that the ELCBO is stationary.

For VBMC, we use default values for $n_{\text{stable}}$ and the ELCBO slope threshold. For computing the ELCBO we use the default value of $\beta_{\text{LCB}}$ (see Eq. 8 in the main text).

b.4.3 Validation of VBMC solutions

Long-term stability of the variational solution is suggestive of convergence of the algorithm to a (local) optimum, but it should not be taken as a conclusive result without further validation. In fact, without additional information, there is no way to know whether the algorithm has converged to a good solution, let alone to the global optimum. For this reason, we recommend running the algorithm multiple times and comparing the solutions, and performing posterior predictive checks [27]. See also [28] for a discussion of methods to validate the results of variational inference.

Appendix C Experimental details and additional results

c.1 Synthetic likelihoods

We plot in Fig. S1 synthetic target densities belonging to the test families described in the main text (lumpy, Student, cigar), for the $D = 2$ case. We also plot examples of solutions returned by VBMC after reaching long-term stability, and indicate the number of iterations.

Figure S1: Synthetic target densities and example solutions. Top: Contour plots of two-dimensional synthetic target densities. Bottom: Contour plots of example variational posteriors returned by VBMC, and iterations until convergence.

Note that VBMC, despite being overall the best-performing algorithm on the cigar family in higher dimensions, still underestimates the variance along the major axis of the distribution. This is because the variational mixture components have axis-aligned (diagonal) covariances, and thus many mixture components are needed to approximate non-axis aligned densities. Future work should investigate alternative representations of the variational posterior to increase the expressive power of VBMC, while keeping its computational efficiency and stability.

We plot in Fig. S2 the performance of selected algorithms on the synthetic test functions, for $D \in \{2, 4, 6, 8, 10\}$. These results are the same as those reported in Fig. 2 in the main text, but with higher resolution. To avoid clutter, we exclude algorithms with particularly poor performance or whose plots are redundant with others. In particular, the performance of VBMC-U is virtually identical to VBMC-P here, so we only report the latter. Analogously, with a few minor exceptions, WSABI-M performs similarly to or worse than WSABI-L across all problems. AIS suffers from the lack of problem-specific tuning, performing no better than SMC here, and the AGP algorithm diverges on most problems. Finally, we did not manage to get BAPE to run on the cigar family in higher dimensions without systematically incurring numerical issues with the GP approximation (with and without regularization of the BAPE acquisition function, as per Section B.1), so these plots are missing.

Figure S2: Full results on synthetic likelihoods. A. Median absolute error of the LML estimate with respect to ground truth, as a function of number of likelihood evaluations, on the lumpy (top), Student (middle), and cigar (bottom) problems, for $D \in \{2, 4, 6, 8, 10\}$ (columns). B. Median "Gaussianized" symmetrized KL divergence between the algorithm's posterior and ground truth. For both metrics, shaded areas are 95% CI of the median, and we consider a desirable threshold to be below one (dashed line). This figure reproduces Fig. 2 in the main text with more details. Note that panels here may have different vertical axes.

c.2 Neuronal model

As a real model-fitting problem, we considered in the main text a neuronal model that combines effects of filtering, suppression, and response nonlinearity, applied to two real data sets (one V1 and one V2 neuron) [13]. The purpose of the original study was to explore the origins of diversity of neuronal orientation selectivity in visual cortex via a combination of novel stimuli (orientation mixtures) and modeling [13]. This model was also previously considered as a case study for a benchmark of Bayesian optimization and other black-box optimization algorithms [5].

c.2.1 Model parameters

In total, the original model has 12 free parameters: 5 parameters specifying properties of a linear filtering mechanism, 2 parameters specifying nonlinear transformation of the filter output, and 5 parameters controlling response range and amplitude. For the analysis in the main text, we considered a subset of $D = 7$ parameters deemed 'most interesting' by the authors of the original study [13], while fixing the others to their MAP values found by our previous optimization benchmark [5].

The seven model parameters of interest from the original model, their ranges, and the chosen plausible bounds are reported in Table S2.

Parameter | Description | LB | UB | PLB | PUB
 | Preferred direction of motion (deg) | 0 | 360 | 90 | 270
 | Preferred spatial frequency (cycles per deg) | 0.05 | 15 | 0.5 | 10
 | Aspect ratio of 2-D Gaussian | 0.1 | 3.5 | 0.3 | 3.2
 | Derivative order in space | 0.1 | 3.5 | 0.3 | 3.2