Integrals of linearly constrained multivariate Gaussian densities are a frequent problem in machine learning and statistics, arising in tasks like generalized linear models and Bayesian optimization. Yet they are notoriously hard to compute, and to further complicate matters, the numerical values of such integrals may be very small. We present an efficient black-box algorithm that exploits geometry for the estimation of integrals over a small, truncated Gaussian volume, and to simulate therefrom. Our algorithm uses the Holmes-Diaconis-Ross (hdr) method combined with an analytic version of elliptical slice sampling (ess). Adapted to the linear setting, ess allows for rejection-free sampling, because intersections of ellipses and domain boundaries have closed-form solutions. The key idea of hdr is to decompose the integral into easier-to-compute conditional probabilities by using a sequence of nested domains. Remarkably, it allows for direct computation of the logarithm of the integral value and thus enables the computation of extremely small probability masses. We demonstrate the effectiveness of our tailored combination of hdr and ess on high-dimensional integrals and on entropy search for Bayesian optimization.
Integrals over Gaussians under Linear Domain Constraints
Alexandra Gessner, Oindrila Kanjilal, Philipp Hennig
University of Tübingen and MPI for Intelligent Systems
Technical University of Munich
Multivariate Gaussian densities are omnipresent in statistics and machine learning. Yet, Gaussian probabilities are hard to compute, since they require solving an integral over a constrained Gaussian volume, owing to the intractability of the multivariate version of the Gaussian cumulative distribution function (cdf). The probability mass that lies within a domain $\mathcal{M}$ restricted by $M$ linear constraints can be written as
$$F(\mathcal{M}) = \int_{\mathbb{R}^D} \prod_{m=1}^{M} \theta\!\left(a_m^\top x + b_m\right) \mathcal{N}(x; 0, \mathbb{I})\, \mathrm{d}x, \qquad (1)$$
with the Heaviside step function $\theta(z) = 1$ if $z \geq 0$ and zero otherwise. We take the integration measure to be a standard normal without loss of generality, because any correlated multivariate Gaussian can be whitened by linearly transforming the integration variable.
Gaussian models with linear domain constraints occur in a myriad of applications that span all disciplines of applied statistics and include biostatistics \citepThiebautJ2004, medicine \citepChenC2007, environmental sciences \citepWani2017, robotics and control \citepFisac2018, machine learning \citepSu2016 and more.
A common occurrence of this integral is in spatial statistics, such as Markov random fields \citepBolinL2015, the statistical modeling of spatial extreme events called max-stable processes \citepHuserD2013, GentonYH2011, or in modeling uncertainty regions for latent Gaussian models. An example for the latter is to find regions that are likely to exceed a given reference level, e.g., pollution levels in geostatistics and environmental monitoring \citepBolinL2015, or in climatology \citepFrenchS2013.
Another area where integrals like Eq. (1) are often encountered is in reliability analysis \citepAuB2001b,Melchers2018,AndersenLR2018,StraubSBK2020.
A key problem there is to estimate the probability that a rare event occurs (e.g., a flood) or that a mechanical system enters a failure mode.
In machine learning, there are many Bayesian models in which linearly constrained multivariate normal distributions play a role, such as Gaussian processes under linear constraints \citepLopezLoperaBDR2017, LopezLoperaJD2019, Agrell2019, DaVeigaM2012, inference in graphical models \citepMulgraveG2018, multi-class Gaussian process classification \citepRasmussenW2006, ordinal and probit regression \citepLawrence2008, Ashford1970, incomplete data classification \citepLiao2007, and Bayesian optimization \citepHennigS2012, Wang2016, to name a few.
This practical relevance has fed a slow-burn research effort in the integration of truncated Gaussians over decades \citepGeweke1991, Genz1992, Joe1995, Vijverberg1997, Nomura2014. \citetGassmannDS2002 and \citetGenzB2009 provide comparisons and attest that the algorithm by \citetGenz1992 provides the best accuracy across a wide range of test problems, which has made it a default choice in the literature. Genz's method applies a sequence of transformations that map the integration region to the unit cube and then solves the integral numerically using quasi-random integration points. Other methods focus on specialized settings such as bivariate or trivariate Gaussian probabilities \citepGenz2004, HayterL2013, or on orthant probabilities \citepMiwaHK2003, Craig2008, Nomura2016, HayterL2012. Yet, these methods are only feasible for at most a few tens of variables. Only recent advances have targeted higher-dimensional integrals: \citetAzzimontiG2017 study high-dimensional orthant probabilities and \citetGentonKT2017 consider the special case where the structure of the covariance matrix allows for hierarchical decomposition to reduce computational complexity. \citetPhinikettos2011 employ a combination of four variance reduction techniques to solve such integrals with Monte Carlo methods. \citetBotev2016 constructs an exponential tilting of an importance sampling measure that builds on the method by \citetGenz1992 and reports effectiveness in high-dimensional settings. A different approach has been suggested by \citetCunninghamHL2011: They use expectation propagation to approximate the constrained normal integrand of Eq. (1) by a moment-matched multivariate normal density. This allows for fast integration, at the expense of accuracy guarantees. Indeed, the authors report cases in which ep is far off the ground-truth integral.
Closely related to integration is simulation from linearly constrained Gaussians, yet these tasks have rarely been considered concurrently, except for \citetBotev2016 who proposes an accept-reject sampler alongside the integration scheme. Earlier attempts employ Gibbs sampling \citepGeweke1991, or other Monte Carlo techniques \citepCongCZ2017. \citetKochB2019 recently introduced an algorithm for exact simulation from truncated Gaussians. Their method iteratively samples from transformed univariate truncated Gaussians that satisfy the box constraints.
In our work, we jointly address the sampling and the normalization problem for linearly constrained domains in a Gaussian space, making the following contributions:
We present an adapted version of elliptical slice sampling (ess), which we call lin-ess, that allows for rejection-free sampling from the linearly constrained domain $\mathcal{M}$. Its effectiveness is not compromised even if the probability mass of $\mathcal{M}$ is very small (cf. Section 2.1).
Based on the above lin-ess algorithm, we introduce an efficient integrator for truncated Gaussians. It relies on a sequence of nested domains to decompose the integral into multiple, easier-to-solve, conditional probabilities. The method is an adapted version of the Holmes-Diaconis-Ross algorithm \citepDiaconisH1995,Ross2012,KroeseTB2011 (cf. Section 2.2).
With increasing dimension $D$, the integral value can become extremely small. hdr with a lin-ess sampler allows such integrals to be computed efficiently, and even yields the logarithm of the integral directly.
With lin-ess, sampling is sufficiently efficient to also compute derivatives of the probability with respect to the parameters of the Gaussian using expectations.
We provide a Python implementation available at https://github.com/alpiges/LinConGauss.
We first introduce an adapted version of elliptical slice sampling, lin-ess, which permits efficient sampling from a linearly constrained Gaussian domain of arbitrarily small mass once an initial sample within the domain is known. This routine is a special case of elliptical slice sampling that leverages the analytic tractability of intersections of ellipses and hyperplanes to speed up the ess loop. lin-ess acts at the back-end of the integration method, which is introduced in Section 2.2.
For further consideration, it is convenient to write the linear constraints of Eq. (1) in vectorial form, $A^\top x + b \geq 0$, where $A \in \mathbb{R}^{D \times M}$ collects the constraint vectors $a_m$ as columns and $b \in \mathbb{R}^{M}$. The integration domain $\mathcal{M}$ is given by the intersection of the regions where all $M$ constraints exceed zero. For example, orthant probabilities of a correlated Gaussian $\mathcal{N}(y; \mu, \Sigma)$ can be written in the form of Eq. (1) by using the transformation $y = Lx + \mu$, where $L$ is the Cholesky factor of $\Sigma = LL^\top$. Typically, we expect $M \geq D$, i.e., there are at least as many linear constraints as dimensions. This is because if $M < D$, there exists a linear transformation of $x$ such that $D - M$ dimensions can be integrated out in closed form, and an $M$-dimensional integral with $M$ constraints remains. However, there are situations in which integrating out dimensions might be undesired. This is the case, e.g., when samples from the untransformed integrand are required.
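As a concrete illustration of this whitening step, the following numpy sketch (the helper name is ours, not part of the LinConGauss package) maps an orthant event $\{y \geq 0\}$ under a correlated Gaussian to linear constraints on a standard normal variable:

```python
import numpy as np

def whiten_orthant_constraints(mu, Sigma):
    """Rewrite the orthant event {y >= 0}, y ~ N(mu, Sigma), as linear
    constraints A @ z + b >= 0 on a standard normal z, via y = L z + mu."""
    L = np.linalg.cholesky(Sigma)   # Sigma = L @ L.T
    return L, mu.copy()             # constraint rows A = L, offset b = mu

# 2-d example: the whitened constraints hold exactly when y lands in the orthant.
mu = np.array([0.5, -0.3])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])
A, b = whiten_orthant_constraints(mu, Sigma)
rng = np.random.default_rng(0)
z = rng.standard_normal(2)
y = np.linalg.cholesky(Sigma) @ z + mu
in_orthant = np.all(y >= 0)
satisfies_constraints = np.all(A @ z + b >= 0)   # same event by construction
```

Here the constraints are stored as rows of `A` for convenience; the paper's column convention is the transpose of this.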
2.1 Sampling from truncated Gaussians
Elliptical slice sampling (ess) by \citetMurrayAM2010 is a Markov chain Monte Carlo (mcmc) algorithm to draw samples from a posterior when the prior is a multivariate normal distribution. Given an initial location $x_0$, an auxiliary vector $\nu \sim \mathcal{N}(0, \mathbb{I})$ is drawn to construct an ellipse $x(\theta) = x_0 \cos\theta + \nu \sin\theta$, parameterized by the angle $\theta$. In the general case, the algorithm proceeds similarly to regular slice sampling \citepNeal2003, but on the angular domain. A likelihood threshold is defined, and rejected proposals (in $\theta$) with likelihood values below the threshold are used to adapt the bracket to sample from, until a proposal is accepted that serves as the new $x_0$ (see \citetMurrayAM2010 for details).
ess is designed for generic likelihood functions. The special form of the likelihood in Eq. (1) can be leveraged to significantly simplify the ess algorithm:
The selector $\mathbb{1}_{\mathcal{M}}$ can take only the values 0 and 1. Hence there is no need for a likelihood threshold; the domain to sample from is always defined by $\mathbb{1}_{\mathcal{M}}(x(\theta)) = 1$ for points $x(\theta)$ on the ellipse.
The intersections between the ellipse and the linear constraints have closed-form solutions. The angular domain(s) to sample from can be constructed analytically, and lin-ess is thus rejection-free. The typical bisection search of slice sampling becomes a simple analytic expression.
With these simplifications of ess, each sample from $\mathcal{M}$ requires exactly one auxiliary normal sample and one scalar uniform sample to draw from the angular domain. Fig. 1 illustrates the process of drawing a sample from the domain of interest (blue shaded area) using our version of ess. Given the two base vectors $x_0$ and $\nu$, the ellipse $x(\theta) = x_0 \cos\theta + \nu \sin\theta$ is parameterized by its angle $\theta$. The intersections between the ellipse and the domain boundaries can be expressed in closed form in terms of angles on the ellipse as solutions to the set of equations $a_m^\top x(\theta) + b_m = 0$. For the $m$-th constraint, this equation typically has either zero or two solutions,
$$\theta^{(m)}_{1,2} = \varphi_m \pm \arccos\!\left(-\frac{b_m}{r_m}\right), \qquad (2)$$
with $r_m = \sqrt{(a_m^\top x_0)^2 + (a_m^\top \nu)^2}$ and $\varphi_m = \operatorname{atan2}(a_m^\top \nu,\, a_m^\top x_0)$. A single solution occurs in the case of a tangential intersection, which is unlikely. Not all intersection angles lie on the domain boundary, and we need to identify those active intersections where the indicator $\mathbb{1}_{\mathcal{M}}$ switches on or off. To identify the potentially multiple brackets, we sort the angles in increasing order and check for each of them whether adding/subtracting a small $\epsilon$ causes a likelihood jump. If there is no jump, the angle is discarded; otherwise the sign of the jump is stored (whether from 0 to 1 or the reverse), in order to know the direction of the relevant domain on the slice. Pseudocode for lin-ess can be found in Algorithm 2 in the appendix.
The computational cost of drawing one sample on the ellipse is dominated by the inner products that need to be computed for the intersections, hence the complexity is $\mathcal{O}(MD)$ for $M$ constraints in $D$ dimensions. This is comparable with standard ess, for which drawing from a multivariate normal distribution is $\mathcal{O}(D^2)$ given a factorized covariance, but the suppressed constant can be much smaller because there is no need to evaluate a likelihood function in lin-ess. This version of ess is a rejection-free method to sample from a truncated Gaussian of arbitrarily small mass, except that it requires an initial point within the domain from which to launch the Markov chain. How to obtain such a sample is discussed in Section 2.2.2.
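The lin-ess update described above can be sketched in numpy as follows. This is a simplified illustration under our own variable names, not the authors' reference implementation; it assumes the starting point `x0` already satisfies the constraints, and tests arc feasibility at midpoints rather than tracking jump signs:

```python
import numpy as np

def liness_step(x0, A, b, rng):
    """One rejection-free elliptical-slice move within {x : A @ x + b >= 0}.
    A is (M, D) with rows a_m^T; x0 must already satisfy the constraints."""
    nu = rng.standard_normal(x0.shape)          # auxiliary Gaussian direction
    x_of = lambda t: x0 * np.cos(t) + nu * np.sin(t)
    p, q = A @ x0, A @ nu                       # projections onto constraints
    r, phi = np.hypot(p, q), np.arctan2(q, p)
    # Closed-form intersection angles of the ellipse with each hyperplane.
    angles = [0.0]                              # artificial breakpoint at theta = 0
    for m in range(A.shape[0]):
        if r[m] > 0.0 and np.abs(b[m]) <= r[m]:
            d = np.arccos(-b[m] / r[m])
            angles += [(phi[m] + d) % (2 * np.pi), (phi[m] - d) % (2 * np.pi)]
    angles = np.sort(angles)
    ends = np.append(angles[1:], angles[0] + 2 * np.pi)
    # Feasibility is constant on each arc, so testing the midpoint suffices.
    feasible = np.array([np.all(A @ x_of(t) + b >= 0)
                         for t in 0.5 * (angles + ends)])
    lengths = (ends - angles) * feasible
    # Draw theta uniformly from the union of feasible arcs.
    csum = np.cumsum(lengths)
    u = rng.uniform(0.0, csum[-1])
    k = np.searchsorted(csum, u, side='right')
    theta = angles[k] + u - (csum[k - 1] if k > 0 else 0.0)
    return x_of(theta)
```

A chain of samples from, e.g., the positive quadrant is then just a loop of `liness_step` calls starting from any feasible point.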
2.2 Computing Gaussian probabilities
2.2.1 The Holmes-Diaconis-Ross algorithm
The Holmes-Diaconis-Ross algorithm (hdr) \citepDiaconisH1995, Ross2012, KroeseTB2011 is a specialized method for constructing an unbiased estimator for probabilities of the form $\mathbb{P}(x \in \mathcal{M})$ under an arbitrary prior measure and a domain $\mathcal{M} = \{x \mid f(x) \geq 0\}$ with a deterministic function $f$. If this domain has very low probability mass, the probability is expensive to compute with direct Monte Carlo, because most samples are rejected. hdr mitigates this by using a sequence of nested domains $\mathbb{R}^D = \mathcal{M}_0 \supset \mathcal{M}_1 \supset \dots \supset \mathcal{M}_T = \mathcal{M}$. The probability mass of the domain of interest can be decomposed into a product of conditional probabilities,
$$F(\mathcal{M}) = \prod_{t=1}^{T} \mathbb{P}\!\left(x \in \mathcal{M}_t \mid x \in \mathcal{M}_{t-1}\right). \qquad (3)$$
Each of the conditional probabilities is much closer to 1 than $F(\mathcal{M})$ itself, and thus requires quadratically fewer samples to estimate, reducing the overall cost despite the linear increase in the number of individual sampling problems. Noting that $\mathbb{P}(x \in \mathcal{M}_0) = 1$ and introducing the shorthand $p_t := \mathbb{P}(x \in \mathcal{M}_t \mid x \in \mathcal{M}_{t-1})$, Eq. (3) can be written in logarithmic form as $\ln F(\mathcal{M}) = \sum_{t=1}^{T} \ln p_t$.
hdr does not deal with the construction of these nested domains; a method to obtain them is discussed in Section 2.2.2. For now, they are assumed to be given in terms of a decreasing sequence of positive scalar shift values $\gamma_1 > \gamma_2 > \dots > \gamma_T$, where $\gamma_T = 0$. Each shifted domain $\mathcal{M}_t$ can then be defined through its corresponding shift value $\gamma_t$. In the general setting, this is $\mathcal{M}_t = \{x \mid f(x) + \gamma_t \geq 0\}$; in our specific problem of linear constraints, $x \in \mathcal{M}_t$ if $A^\top x + b + \gamma_t \geq 0$. Any positive shift $\gamma$ thus induces a domain that contains all domains with smaller shifts, and that engulfs a larger volume than $\mathcal{M}$. The shift $\gamma = 0$ identifies $\mathcal{M}$ itself.
Given the shift sequence $\gamma_t$, the hdr algorithm proceeds as follows: Initially, $N$ samples are drawn from the integration measure, in our case a standard normal. The outermost domain $\mathcal{M}_0 = \mathbb{R}^D$ corresponds to $\mathbb{P}(x \in \mathcal{M}_0) = 1$, which is ignored in the sequence. The conditional probability $p_1$ is estimated as the fraction of these samples that also fall into $\mathcal{M}_1$. To estimate the subsequent conditional probabilities $p_t$ for $t > 1$ as the fraction of samples from $\mathcal{M}_{t-1}$ falling into $\mathcal{M}_t$, standard hdr uses an mcmc sampler to simulate from $\mathcal{M}_{t-1}$. If the sequence of nestings is chosen well and initial seeds in the domain are known, these samplers achieve a high acceptance rate. This procedure is repeated until $\mathcal{M}_T = \mathcal{M}$ is reached. With the estimated conditional probabilities $\hat{p}_t$, the estimator for the probability mass is then
$$\hat{F}(\mathcal{M}) = \prod_{t=1}^{T} \hat{p}_t. \qquad (4)$$
In our adapted version of hdr, the lin-ess algorithm (cf. Section 2.1) comes into play, which achieves a 100% acceptance rate for simulating from the nested domains. In order to simulate rejection-free from $\mathcal{M}_t$, lin-ess requires an initial sample from that domain, which is obtained from the previous iteration of the algorithm. Every location sampled requires evaluating the $M$ linear constraints, hence the cost for each subset in hdr with $N$ samples is $\mathcal{O}(NMD)$. Pseudocode for this algorithm is shown in Algorithm 1, where LinESS is a call to the lin-ess sampler (cf. Section 2.1 and Algorithm 2 in the appendix) that simulates from the linearly constrained domain.
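The HDR decomposition itself can be illustrated on a 1-d toy problem, estimating the logarithm of a Gaussian tail probability through nested thresholds. This is a sketch with our own function names; the toy conditional sampler uses simple rejection, which the paper replaces by lin-ess:

```python
import numpy as np

def hdr_log_prob(shifts, sample_level, n_per_level):
    """Holmes-Diaconis-Ross estimate of log P(x in M) via nested domains.
    shifts: decreasing thresholds c_1 > ... > c_T with M_t = {x > c_t},
            a 1-d stand-in for the shifted linear constraints.
    sample_level(c, n): draws n samples from N(0, 1) conditioned on x > c.
    Returns the *logarithm* of the probability, summing log p_t per level."""
    log_p, prev = 0.0, -np.inf                  # M_0 = R, no constraint
    for c in shifts:
        x = sample_level(prev, n_per_level)
        log_p += np.log(np.mean(x > c))         # conditional probability p_t
        prev = c
    return log_p

def sample_trunc(c, n, rng=np.random.default_rng(0)):
    """Toy conditional sampler by rejection (lin-ess plays this role in hdr)."""
    out = np.empty(0)
    while out.size < n:
        x = rng.standard_normal(4 * n)
        out = np.append(out, x[x > c])
    return out[:n]

# Estimate log P(x > 2) under N(0, 1) with three nestings.
log_est = hdr_log_prob([0.7, 1.4, 2.0], sample_trunc, 20_000)
```

Because only logarithms of the per-level fractions are accumulated, the same code structure remains numerically stable when the final probability is far below floating-point underflow.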
2.2.2 Obtaining nested domains
As the final missing ingredient, the hdr algorithm requires a sequence of nested domains or level sets defined by positive shifts $\gamma_t$, $t = 1, \dots, T$. In theory, the nested domains should ideally have conditional probabilities of $1/2$ (then each nesting improves the precision by one bit). Yet, from a practical standpoint, the computational overhead for constructing the nested domains should also be small. In practice, the shift sequence is often chosen in an ad hoc way, hoping that the conditional probabilities are large enough to enable a decently accurate estimation via hdr \citepKanjilalM2015. This is not straightforward and requires problem-specific knowledge.
We suggest constructing the nestings via subset simulation \citepAuB2001b, which is very similar to hdr.
It only differs in that the conditional probabilities are fixed a priori to a value $\rho$, and the shift values are then computed such that a fraction $\rho$ of the samples drawn from $\mathcal{M}_{t-1}$ falls into the subsequent domain $\mathcal{M}_t$.
The construction of the nested domains is depicted in Fig. 2. To find the shifts, samples are drawn from the integration measure initially (cf. Fig. 2, left). Then the first (and largest) shift $\gamma_1$ is determined such that a fraction $\rho$ of the samples falls into the domain $\mathcal{M}_1$. This is achieved by computing, for each sample, by how much the linear constraints would need to be shifted to encompass it. For the subsequent shifts, samples are simulated from the current domain $\mathcal{M}_t$, and the next shift $\gamma_{t+1}$ is again set s.t. a fraction $\rho$ of the samples falls into the next domain $\mathcal{M}_{t+1}$ (Fig. 2, center). This requires an initial sample from $\mathcal{M}_t$ to launch the lin-ess sampler, which is obtained from the samples gathered in the previous nesting that also lie in $\mathcal{M}_t$, while all other samples are discarded to reduce dependencies. This nesting procedure is repeated until more than a fraction $\rho$ of the samples falls into the domain of interest $\mathcal{M}$ (cf. Fig. 2, right). We set $\rho = 1/2$ to maximize the entropy of the binary distribution over whether samples fall inside or outside the next nested domain; in reliability analysis a common choice is $\rho = 0.1$ \citepAuB2001a, which has the advantage of requiring fewer nestings (at the expense of more samples per nesting). Pseudocode can be found in Algorithm 3 in the appendix.
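The shift-selection step above can be sketched in a few lines of numpy. For each sample, we compute the smallest nonnegative shift that brings it into the domain and take the $\rho$-quantile of those values (the helper name is ours):

```python
import numpy as np

def next_shift(X, A, b, rho=0.5):
    """Subset-simulation shift selection: pick gamma so that a fraction rho
    of the rows of X satisfies A @ x + b + gamma >= 0. Per sample, gamma(x)
    is the smallest nonnegative shift bringing x into the shifted domain."""
    gam = np.maximum(0.0, np.max(-(X @ A.T + b), axis=1))
    return np.quantile(gam, rho)

# Example: a far-away corner domain {x_1 >= 3, x_2 >= 3} under N(0, I).
rng = np.random.default_rng(0)
X = rng.standard_normal((4000, 2))
A, b = np.eye(2), np.array([-3.0, -3.0])
gamma1 = next_shift(X, A, b)   # first (and largest) shift, gamma1 > 0
```

By construction, roughly half of `X` lies inside the domain shifted by `gamma1`, which is exactly the property the nesting sequence needs.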
In fact, subset simulation itself also permits the estimation of the integral $F(\mathcal{M})$, without appealing to hdr: Since the subsets are constructed such that the conditional probabilities take a predefined value, the estimator for the integral is $\hat{F}(\mathcal{M}) = \rho^{T-1} \hat{p}_T$, where $\hat{p}_T$ is the conditional probability for the last domain. For $\rho = 1/2$, the number of nestings $T$ is roughly the negative binary logarithm of the integral estimate (cf. Fig. 3). The main reason not to rely on subset simulation alone is that its estimator is biased, because the samples are used both to construct the domains and to estimate $F(\mathcal{M})$. We thus use hdr for the integral estimation and subset simulation only for the construction of the level sets.
Both subset simulation and hdr are instances of a wider class of so-called multilevel splitting methods which are related to sequential Monte Carlo (smc) in that they are concerned with simulating from a sequence of probability distributions. smc methods (aka. particle filters) were conceived for online inference in state space models, but can be extended to non-Markovian latent variable models \citepNaessethLS2019. In this form, smc methods have gained popularity for the estimation of rare events \citepDelMoralDJ2006, BectLV2017, CerouDFG2013.
2.2.3 Derivatives of Gaussian probabilities
Many applications (e.g., Bayesian optimization, see below) additionally require derivatives of the Gaussian probability w.r.t. parameters of the integration measure or the linear constraints. The absence of such derivatives in classic quadrature sub-routines (such as that of \citetGenz1992) has thus sometimes been mentioned as an argument against them \citep[e.g.][]CunninghamHL2011. Our method allows such derivatives to be computed efficiently, because it can produce samples. This leverages the classic result that derivatives of exponential families with respect to their parameters can be computed from expectations of the sufficient statistics. To do so, it is advantageous to rephrase Eq. (1) as the integral over a correlated Gaussian with mean $\mu$ and covariance matrix $\Sigma$ under axis-aligned constraints (or constraints that are independent of the parameters). The derivative w.r.t. a parameter $\beta \in \{\mu, \Sigma\}$ can then be expressed as an expected value,
$$\frac{\partial \ln F(\mathcal{M})}{\partial \beta} = \mathbb{E}\!\left[\frac{\partial \ln \mathcal{N}(x; \mu, \Sigma)}{\partial \beta}\right], \qquad (5)$$
where the expectation is taken with respect to the normalized truncated density, i.e., the integrand of Eq. (1). Since lin-ess permits us to simulate from this density, derivatives can be estimated via sample averages. We demonstrate in Section 3.2 that this is far more efficient than finite differences, which require $F(\mathcal{M})$ to be estimated twice at considerably higher accuracy.
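The derivative-as-expectation identity can be checked on a 1-d toy problem, where the derivative of the log tail probability has a closed form (this sketch uses rejection sampling purely for illustration; lin-ess replaces it in higher dimensions):

```python
import numpy as np
from math import erf, exp, sqrt, pi

# Toy check of Eq. (5) in 1-d: for F(mu) = P(x > c) with x ~ N(mu, 1),
#   d log F / d mu = E[ d log N(x; mu, 1) / d mu ] = E[x - mu | x > c].
mu, c = 0.0, 1.0
rng = np.random.default_rng(0)
x = mu + rng.standard_normal(200_000)
x = x[x > c]                          # rejection sampling (toy only)
grad_mc = np.mean(x - mu)             # Monte Carlo estimate of the gradient

phi = exp(-0.5 * (c - mu) ** 2) / sqrt(2 * pi)   # standard normal pdf at c
F = 0.5 * (1 - erf((c - mu) / sqrt(2)))          # tail probability P(x > c)
grad_exact = phi / F                             # analytic d log F / d mu
```

The two quantities agree up to Monte Carlo error, confirming that samples from the truncated density suffice to estimate the gradient.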
To shed light on the interplay of subset simulation, hdr, and lin-ess, we consider a 500-dimensional synthetic integration problem with a closed-form solution. Further 1000-d integrals can be found in Section B.1. We then turn to Bayesian optimization and demonstrate our algorithm’s ability to estimate derivatives.
3.1 Synthetic experiments
As an initial integration problem we consider axis-aligned constraints in a 500-dimensional space. Since this task amounts to computing the mass of a shifted orthant under a standard normal distribution, it allows comparison to an exact analytic answer. The goal of this setup is two-fold: 1) to demonstrate that our method can compute small Gaussian probabilities to high accuracy, and 2) to explore configurations for the construction of nested domains using subset simulation. The domain is a shifted orthant whose true mass is known in closed form and extremely small. Estimating this integral naïvely by sampling from the Gaussian would require an astronomically large number of samples for a single one to fall into the domain of interest; with a standard library like numpy.random.randn, this would take many times the age of the universe.
First, we compute the shift sequence using subset simulation for various numbers of samples per subset and a fixed conditional probability of $\rho = 1/2$.
Since the contributing factor of each nesting is $\rho = 1/2$, the integral estimate after $T$ nestings is roughly $2^{-T}$ for our choice of $\rho$ (cf. Section 2.2.2).
The relation between the number of subsets and the estimated integral value is visualized in Fig. 3. It shows the sequences of shift values for increasing sample sizes and the resulting integral estimates.
The final nesting has shift value $\gamma_T = 0$ and is the only subset with a conditional probability that deviates from the chosen value of $\rho = 1/2$; the number of subsets $T$ is therefore a good indicator for the value of the negative binary logarithm of the estimated integral. Hence we use the same axis to display the number of subsets and the negative binary logarithm of the estimate.
The plot highlights the bias of subset simulation: for small numbers of samples per subset, the integral is severely underestimated.
This bias is caused by the dependency of the subset construction method on the samples themselves:
Since we are using an mcmc method for simulating from the current domain, samples are correlated and do not fall into the true next subset with probability exactly $\rho$.
This is why we thin the Markov chain, accepting only every few samples, to diminish this effect when constructing the subsets.
For the subsequent hdr simulation, we accepted every second sample from the ess procedure.
We choose powers of 2 for the number of samples per subset and observe that, from 16 samples per subset onward, the subset sequence is good enough to be handed to hdr for more accurate and unbiased estimation. This low requirement of 16 samples per nesting also means that subset simulation is a low-cost preparation for hdr, and causes only minor computational overhead.
Fig. 4 shows the results achieved by hdr for the nine subset sequences obtained with increasing numbers of samples per subset, and for different numbers of samples per nesting in hdr. The top left panel of Fig. 4 shows the binary logarithm of the hdr integral estimator. The bad performance for the subsets created with 2, 4, or 8 samples per nesting indicates that a good nesting sequence is essential for the effectiveness of hdr, but also that such a sequence can be found using only about 16 samples per subset (this is thus the number used for all subsequent experiments). The bottom left panel displays the relative error of the hdr estimator. Bear in mind that an estimator that is off by an order of magnitude already incurs a relative error of 9, so the plot indicates that hdr achieves the right order of magnitude with a relatively low sample demand. The right panel of Fig. 4 shows the values of the conditional probabilities found by hdr. If subset simulation were perfectly reliable, these should ideally be $\rho = 1/2$. The plot confirms that all conditional probabilities found by hdr are far from 0 and 1, warranting the efficiency of hdr.
3.2 Bayesian optimization
Bayesian optimization is a sample-efficient approach to global optimization of expensive-to-evaluate black-box functions (see \citetShahriariSWAF2016 for a review). A surrogate over the objective function serves to build a utility function and ultimately derive a policy to determine the next query point. Information-based utilities are directly concerned with the posterior distribution over the minimizer, $p_{\min}(x) = p(x = \arg\min_{x'} f(x') \mid \mathcal{D})$, where $\mathcal{D}$ summarizes previous evaluations of the objective $f$. Entropy search \citepHennigS2012 seeks to evaluate the objective function at the location that bears the most information about the minimizer. The expression $p_{\min}$ is an infinite-dimensional integral itself, but for practical purposes, it can be discretized considering the distribution over so-called representer points. The probability of the $i$-th representer point to be the minimum can be approximated as
$$p_{\min}(x_i) \approx \int \mathcal{N}(f; \mu, \Sigma) \prod_{j \neq i} \theta(f_j - f_i)\, \mathrm{d}f, \qquad (6)$$
where $\mu$ and $\Sigma$ are the posterior mean and covariance of the Gaussian process over $f$ at the representer points, respectively. Clearly, this is a linearly constrained Gaussian integral in the form of Eq. (1), which has to be solved for all representer points. Eq. (6) is stated in matrix form in the appendix, Section B.2. The original paper and implementation use expectation propagation (ep) to approximate this integral.
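The constraint structure behind this integral can be illustrated with a short numpy sketch that builds the constraint matrix for the $i$-th representer point and validates it against brute-force argmin counting (the helper name is ours):

```python
import numpy as np

def min_constraint_matrix(n, i):
    """Rows (e_j - e_i)^T for j != i: A @ f >= 0 iff f_i = min_j f_j.
    A sketch of the linear-constraint form of the p_min integral."""
    A = np.delete(np.eye(n), i, axis=0)   # rows e_j for j != i
    A[:, i] = -1.0                        # subtract e_i from every row
    return A

# Sanity check against argmin counting on a random Gaussian "posterior".
rng = np.random.default_rng(0)
n, i = 5, 2
mu = rng.standard_normal(n)
Lr = rng.standard_normal((n, n)) * 0.3
Sigma = Lr @ Lr.T + 0.5 * np.eye(n)       # a valid covariance matrix
f = rng.multivariate_normal(mu, Sigma, size=50_000)
A = min_constraint_matrix(n, i)
p_constraints = np.mean(np.all(f @ A.T >= 0, axis=1))
p_argmin = np.mean(np.argmin(f, axis=1) == i)
```

Both estimates count the same event, so they coincide up to ties of measure zero; the constrained form is what lin-ess and hdr operate on.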
Probability of minimum
For our experiment, we consider the one-dimensional Forrester function \citepForrester2007 with three initial evaluations. The top plot in Fig. 5 shows the ground truth distribution over the minimum obtained by Thompson sampling, i.e., drawing samples from the discretized posterior gp and recording their respective minima, and the approximation of this distribution obtained by ep. It is apparent that ep fails to accurately represent $p_{\min}$. For hdr, we consider four locations (indicated by the vertical lines) and show that, while it takes longer to compute, the estimate obtained by hdr converges to the true solution (see bottom plot of Fig. 5). In the experiment we use 200 representer points, an unusually high number for a 1-d problem, to show that our method can deal with integrals of that dimension. Also note that we report cpu time, which means that due to automatic parallelization in Python the wall-clock time is considerably lower.
Entropy search requires derivatives of Eq. (6) to construct a first-order approximation of the predictive information gain from evaluating at a new location.
We can estimate derivatives using expectations (cf. Section 2.2.3 and B.2).
Initially we choose 5 representer points to validate the approach of computing derivatives via moments against finite differences.
The latter requires estimating $p_{\min}$ at very high accuracy and thus has a high sample demand even in this low-dimensional setting, for which we employ both rejection sampling and hdr.
The derivatives computed via moments from rejection sampling and lin-ess take only a fraction of the time required to reach a similar accuracy with finite differences.
Unsurprisingly, rejection sampling is faster in this case, since only a small share of the samples from the posterior over $f$ needs to be discarded to obtain independent draws that have their minimum at the chosen representer point.
lin-ess only outperforms rejection sampling at higher rejection rates common to higher-dimensional problems.
Therefore, we also consider 20 representer points, which corresponds to a 20-d linearly constrained space to sample from. In this setting, we consider a location of very low probability, which renders an estimation via finite differences impossible and highly disfavors rejection sampling even for computing the moments. lin-ess, however, enables us to estimate the gradient of the normal distribution w.r.t. its mean and covariance matrix with a small relative standard deviation on the 2-norm, at an average cpu time of 325 s, for a problem that was previously infeasible. A badly conditioned covariance matrix in Eq. (5) deteriorates the runtime (which is already apparent in the considered case), since it requires estimating the moments at very high accuracy to compensate for numerical errors.
3.3 Constrained samples
We emphasize that lin-ess allows samples to be drawn from linearly constrained Gaussians without rejection. In the Gaussian process setting, this permits efficiently drawing samples that are subject to linear restrictions \citepAgrell2019, LopezLoperaBDR2017, DaVeigaM2012. In particular, the time required for sampling is essentially independent of the probability mass of the domain of interest. This probability mass only affects the precomputation required to find an initial sample in the domain for lin-ess (cf. Section 2.2.2). Since this can be achieved with few samples per subset (cf. Section 3.1), this initial runtime is typically negligible compared to the actual sampling. Fig. 6 displays the posterior distribution of a gp conditioned on the location of the minimum from the Bayesian optimization context, estimated from lin-ess samples. This distribution is required in predictive entropy search \citepHernandezHG2014, a reformulation of the original entropy search, where it is approximated by imposing several related constraints (e.g., on the derivatives at the minimizer). The probability for the given location to be the minimizer is vanishingly small, which renders direct sampling virtually impossible. The unaltered ess algorithm also fails on this problem, because the domain selector is a binary likelihood.
We have introduced a black-box algorithm that computes Gaussian probabilities (i.e., integrals over linearly constrained Gaussian densities) with high numerical precision, even if the integration domain is high-dimensional and the probability to be computed is very small. This was achieved by adapting two separate pieces of existing prior art and carefully matching them to the problem domain: We designed a special version of elliptical slice sampling that takes explicit advantage of the linearly constrained Gaussian setting, and used it as an internal step of the hdr algorithm. We showed that, because this algorithm not only computes integrals but also produces samples from the nestings alongside, it also permits the evaluation of derivatives of the integral with respect to the parameters of the measure. One current limitation is that, because our algorithm was designed to be unbiased, it has a comparatively high computational cost (but also superior numerical precision) relative to alternatives like expectation propagation. This cost could be mitigated if one is willing to accept a bias and thus reuse samples. Furthermore, both hdr and lin-ess are highly parallelizable (as opposed to ep) and thus offer margin for implementational improvement.
AG and PH gratefully acknowledge financial support by the European Research Council through ERC StG Action 757275 / PANAMA; the DFG Cluster of Excellence “Machine Learning - New Perspectives for Science”, EXC 2064/1, project number 390727645; the German Federal Ministry of Education and Research (BMBF) through the Tübingen AI Center (FKZ: 01IS18039A); and funds from the Ministry of Science, Research and Arts of the State of Baden-Württemberg. The work was carried out while OK was at the University of Tuebingen, funded by the German Research Foundation (Research Unit 1735). OK also acknowledges financial support through the Alexander von Humboldt Foundation. AG is grateful to the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for support.
Appendix A Algorithms
Appendix B Details on Experiments
b.1 Synthetic experiments
We further consider three similar synthetic integrals over orthants of 1000-d correlated Gaussians with a fixed mean and a randomly drawn covariance matrix. Table 1 shows the mean and std. dev. of the binary logarithm of the integral estimator averaged over five runs of hdr using a fixed number of samples per nesting for integration, as well as the average cpu time (on 6 cpus, the wall-clock time was 20 min per run).
b.2 Bayesian optimization
Probability of minimum
After having chosen $N$ representer points, the approximate probability for $x_i$ to be the minimum, Eq. (6), can be rephrased in terms of Eq. (1) by writing the linear constraints in matrix form. This matrix is an $N \times N$ identity matrix with a vector of $-1$ added in the $i$-th column, such that the constraints read $f_j - f_i \geq 0$ for all $j$.
In order to compute a first-order approximation to the objective function in entropy search, we need the derivatives of $p_{\min}$ w.r.t. the parameters $\mu$ and $\Sigma$. The algorithm requires the following derivative, where $\beta \in \{\mu, \Sigma\}$,
$$\frac{\partial p_{\min}}{\partial \beta} = \int \frac{\partial \mathcal{N}(f; \mu, \Sigma)}{\partial \beta} \prod_{j \neq i} \theta(f_j - f_i)\, \mathrm{d}f = p_{\min}\, \mathbb{E}\!\left[\frac{\partial \ln \mathcal{N}(f; \mu, \Sigma)}{\partial \beta}\right],$$
using $\partial \mathcal{N} / \partial \beta = \mathcal{N}\, \partial \ln \mathcal{N} / \partial \beta$. Hence all we need is to compute the derivatives of the log normal distribution w.r.t. its parameters, and the expected values thereof w.r.t. the integrand. The required derivatives are
$$\frac{\partial \ln \mathcal{N}(f; \mu, \Sigma)}{\partial \mu} = \Sigma^{-1}(f - \mu), \qquad \frac{\partial \ln \mathcal{N}(f; \mu, \Sigma)}{\partial \Sigma} = \frac{1}{2}\left(\Sigma^{-1}(f - \mu)(f - \mu)^\top \Sigma^{-1} - \Sigma^{-1}\right),$$
and the second derivative
$$\frac{\partial^2 \ln \mathcal{N}(f; \mu, \Sigma)}{\partial \mu\, \partial \mu^\top} = -\Sigma^{-1}.$$
Hence we only need the moments $\mathbb{E}[f]$ and $\mathbb{E}[f f^\top]$ under the truncated density to compute the following gradients,
$$\frac{\partial p_{\min}}{\partial \mu} = p_{\min}\, \Sigma^{-1}\!\left(\mathbb{E}[f] - \mu\right), \qquad \frac{\partial p_{\min}}{\partial \Sigma} = \frac{p_{\min}}{2}\left(\Sigma^{-1}\, \mathbb{E}\!\left[(f - \mu)(f - \mu)^\top\right] \Sigma^{-1} - \Sigma^{-1}\right),$$
and the Hessian w.r.t. $\mu$,
$$\frac{\partial^2 p_{\min}}{\partial \mu\, \partial \mu^\top} = p_{\min}\left(\Sigma^{-1}\, \mathbb{E}\!\left[(f - \mu)(f - \mu)^\top\right] \Sigma^{-1} - \Sigma^{-1}\right).$$
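These moment-based gradients can be sanity-checked numerically in a simple independent 2-d case, where $p$ and its gradient w.r.t. $\mu$ have closed forms. A sketch with our own variable names, using brute-force truncation in place of lin-ess:

```python
import numpy as np
from math import erf, exp, sqrt, pi

Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))        # standard normal cdf
phi = lambda z: exp(-0.5 * z * z) / sqrt(2 * pi)    # standard normal pdf

# Gradient of p = P(f >= 0), f ~ N(mu, I), via the moment identity
#   dp/dmu = p * Sigma^{-1} (E[f] - mu),  expectation over the truncated density.
mu = np.array([0.3, -0.2])
rng = np.random.default_rng(0)
f = mu + rng.standard_normal((400_000, 2))
inside = np.all(f >= 0, axis=1)                     # brute-force truncation
p = inside.mean()
grad_mc = p * (f[inside] - mu).mean(axis=0)         # Sigma = I here

# Analytic reference for the independent case: p = Phi(mu_1) * Phi(mu_2).
grad_exact = np.array([phi(mu[0]) * Phi(mu[1]),
                       Phi(mu[0]) * phi(mu[1])])
```

The Monte Carlo gradient matches the analytic one up to sampling error, which is the same check that validates the expressions above in the correlated case.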