Algorithms for Kullback-Leibler Approximation of Probability Measures in Infinite Dimensions

F. J. Pinski (Department of Physics, University of Cincinnati, Cincinnati, OH 45221, USA), G. Simpson (Department of Mathematics, Drexel University, Philadelphia, PA 19104, USA), A. M. Stuart (Mathematics Institute, University of Warwick, Coventry CV4 7AL, UK), and H. Weber (Mathematics Institute, University of Warwick, Coventry CV4 7AL, UK)
Abstract

In this paper we study algorithms to find a Gaussian approximation to a target measure defined on a Hilbert space of functions; the target measure itself is defined via its density with respect to a reference Gaussian measure. We employ the Kullback-Leibler divergence as a distance and find the best Gaussian approximation by minimizing this distance. It then follows that the approximate Gaussian must be equivalent to the Gaussian reference measure, defining a natural function space setting for the underlying calculus of variations problem. We introduce a computational algorithm which is well-adapted to the required minimization, seeking to find the mean as a function, and parameterizing the covariance in two different ways: through low rank perturbations of the reference covariance; and through Schrödinger potential perturbations of the inverse reference covariance. Two applications are shown: to a nonlinear inverse problem in elliptic PDEs, and to a conditioned diffusion process. We also show how the Gaussian approximations we obtain may be used to produce improved pCN-MCMC methods which are not only well-adapted to the high-dimensional setting, but also behave well with respect to small observational noise (resp. small temperatures) in the inverse problem (resp. conditioned diffusion).

1 Introduction

Probability measures on infinite dimensional spaces arise in a variety of applications, including the Bayesian approach to inverse problems [29] and conditioned diffusion processes [16]. Obtaining quantitative information from such problems is computationally intensive, requiring approximation of the infinite dimensional space on which the measures live. We present a computational approach applicable to this context: we demonstrate a methodology for computing the best approximation to the measure, from within a subclass of Gaussians. In addition we show how this best Gaussian approximation may be used to speed up Markov chain Monte Carlo (MCMC) sampling. The measure of “best” is taken to be the Kullback-Leibler (KL) divergence, or relative entropy, a methodology widely adopted in machine learning applications [4]. In the recent paper [24], KL-approximation by Gaussians was studied using the calculus of variations. The theory from that paper provides the mathematical underpinnings for the algorithms presented here.

1.1 Abstract Framework

Assume we are given a measure $\mu$ on a separable Hilbert space $\mathcal{H}$, equipped with the Borel $\sigma$-algebra, specified by its density with respect to a reference measure $\mu_0$. We wish to find the closest element $\nu$ to $\mu$, with respect to KL divergence, from a subset of the Gaussian probability measures on $\mathcal{H}$. We assume the reference measure $\mu_0$ is itself a Gaussian on $\mathcal{H}$. The measure $\mu$ is thus defined by

\[
\frac{d\mu}{d\mu_0}(x) \;=\; \frac{1}{Z_\mu}\,\exp\bigl(-\Phi(x)\bigr), \tag{1.1}
\]

where we assume that $\Phi$ is continuous on some Banach space $X$ of full measure with respect to $\mu_0$, and that $\exp\bigl(-\Phi(x)\bigr)$ is integrable with respect to $\mu_0$. Furthermore, $Z_\mu = \int \exp\bigl(-\Phi(x)\bigr)\,\mu_0(dx)$, ensuring that $\mu$ is indeed a probability measure. We seek an approximation $\nu$ of $\mu$ which minimizes $D_{\mathrm{KL}}(\nu\|\mu)$, the KL divergence between $\nu$ and $\mu$, over the chosen subset of Gaussians. Under these assumptions it is necessarily the case that $\nu$ is equivalent to $\mu_0$ (we write $\nu \sim \mu_0$; two measures are equivalent if they are mutually absolutely continuous), since otherwise $D_{\mathrm{KL}}(\nu\|\mu) = +\infty$. This imposes restrictions on the pair of mean and covariance of $\nu$, and we build these restrictions into our algorithms. Broadly speaking, we will seek to minimize over all sufficiently regular mean functions, whilst we will parameterize the covariance either through operators of finite rank, or through a function appearing as a potential in an inverse covariance representation.

Once we have found the best Gaussian approximation we will use this to improve upon known MCMC methods. Here, we adopt the perspective of considering only MCMC methods that are well-defined in the infinite-dimensional setting, so that they are robust to finite-dimensional approximation [9]. The best Gaussian approximation is used to make Gaussian proposals within MCMC which are simple to implement, yet which contain sufficient information about $\mu$ to yield significant reduction in the autocovariance of the resulting Markov chain, when compared with the methods developed in [9].

1.2 Relation to Previous Work

In addition to the machine learning applications mentioned above [4], approximation with respect to KL divergence has been used in a variety of applications in the physical sciences, including climate science [13], coarse graining for molecular dynamics [19, 27] and data assimilation [1].

On the other hand, improving the efficiency of MCMC algorithms is a topic attracting a great deal of current interest, as many important PDE based inverse problems result in target distributions for which $\Phi$ is computationally expensive to evaluate. In [21], the authors develop a stochastic Newton MCMC algorithm, which resembles our improved pCN-MCMC Algorithm 5.2 in that it uses Gaussian approximations that are adapted to the problem within the proposal distributions. However, while we seek to find minimizers of KL in an offline computation, the work in [21] makes a quadratic approximation of the log-posterior at each step along the MCMC sequence; in this sense it has similarities with the Riemannian Manifold MCMC methods of [14].

As will become apparent, a serious question is how to characterize, numerically, the covariance operator of the Gaussian measure $\nu$. Recognizing that the covariance operator is compact, with decaying spectrum, it may be well-approximated by a low rank matrix. Low rank approximations are used in [21, 28], and in the earlier work [12]. In [12] the authors discuss how, even in the case where the target measure is itself Gaussian, there are significant computational challenges motivating the low rank methodology.

Other active areas in MCMC methods for high dimensional problems include the use of polynomial chaos expansions for proposals [22], and local interpolation of $\Phi$ to reduce computational costs [8]. For methods which go beyond MCMC, we mention the paper [11] in which the authors present an algorithm for solving the optimal transport PDE relating $\mu_0$ to $\mu$.

1.3 Outline

In Section 2, we examine these algorithms in the context of a scalar problem, motivating many of our ideas. The general methodology is introduced in Section 3, where we describe the approximation of $\mu$ defined via (1.1) by a Gaussian, summarizing the calculus of variations framework which underpins our algorithms. We describe the problem of Gaussian approximation in general, and then consider two specific parameterizations of the covariance which are useful in practice: the first via a finite rank perturbation of the covariance of the reference measure $\mu_0$, and the second via a Schrödinger potential shift from the inverse covariance of $\mu_0$. Section 4 describes the structure of the Euler-Lagrange equations for minimization, and recalls the Robbins-Monro algorithm for locating the zeros of functions defined via an expectation. In Section 5 we describe how the Gaussian approximation found via KL minimization can be used as the basis for new MCMC methods, well-defined on function space and hence robust to discretization, but also taking into account the change of measure via the best Gaussian approximation. Section 6 contains illustrative numerical results, for a Bayesian inverse problem arising in a model of groundwater flow, and for a conditioned diffusion process prototypical of problems in molecular dynamics. We conclude in Section 7.

2 Scalar Example

The main challenges and ideas of this work can be exemplified in a scalar problem, which we examine here as motivation. Consider the measure $\mu$ defined via its density with respect to Lebesgue measure:

\[
\mu(dx) \;=\; \frac{1}{Z}\,\exp\Bigl(-\tfrac{1}{\varepsilon}\,V(x)\Bigr)\,dx, \tag{2.1}
\]

where $\varepsilon$ is a small parameter. Furthermore, let the potential $V$ be such that $\mu$ is non-Gaussian. As a concrete example, take

\hb@xt@.01(2.2)

We now explain our ideas in the context of this simple example, referring to algorithms which are detailed later; additional details are given in Section LABEL:s:scalar.

In order to link to the infinite dimensional setting, where Lebesgue measure is not defined and a Gaussian measure is used as the reference measure, we write $\mu$ via its density with respect to a unit Gaussian $\mu_0$:

We find the best fit $\nu = N(m,\sigma^2)$, optimizing over $m$ and $\sigma$, noting that $D_{\mathrm{KL}}(\nu\|\mu)$ may be written as

The change of measure is then

\hb@xt@.01(2.3)

For the potential (2.2), the KL divergence can be integrated analytically, yielding

\hb@xt@.01(2.4)

In subsection 2.1 we illustrate an algorithm to find the best Gaussian approximation numerically, whilst subsection 2.2 demonstrates how this minimizer may be used to improve MCMC methods. Appendix LABEL:s:SE contains further details of the numerical results, as well as a theoretical analysis of the improved MCMC method for this problem.
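To make the above concrete, the following sketch computes and minimizes the KL divergence between a Gaussian $N(m,\sigma^2)$ and a scalar target of this form by numerical quadrature. Since the specific potential (2.2) is not reproduced above, the sketch uses a hypothetical double-well $V(x)=(1-x^2)^2$ and an illustrative value of $\varepsilon$; it is a minimal illustration, not the analytic route of (2.4).

\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

eps = 0.05                                  # illustrative temperature (assumption)
V = lambda x: (1.0 - x**2) ** 2             # hypothetical double-well potential

# log normalization constant of the target density rho(x) = exp(-V(x)/eps) / Z
logZ = np.log(quad(lambda x: np.exp(-V(x) / eps), -10, 10)[0])

def kl(params):
    """D_KL( N(m, s^2) || mu ), evaluated by quadrature."""
    m, log_s = params
    s = np.exp(log_s)
    log_gauss = lambda x: -0.5 * ((x - m) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))
    integrand = lambda x: np.exp(log_gauss(x)) * (log_gauss(x) - (-V(x) / eps - logZ))
    return quad(integrand, m - 10 * s, m + 10 * s)[0]

# with a double well the fitted Gaussian sits in one of the wells, so the
# answer depends on the starting point (multimodality of the minimization)
res = minimize(kl, x0=[0.8, np.log(0.2)], method="Nelder-Mead")
print(res.x[0], np.exp(res.x[1]))           # optimal mean and standard deviation
\end{verbatim}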

2.1 Estimation of the Minimizer

The Euler-Lagrange equations for (2.4) can then be solved to obtain a minimizer; the optimal $(m,\sigma)$ satisfies

\hb@xt@.01(2.5)

In more complex problems, $D_{\mathrm{KL}}$ is not analytically tractable and is only defined via expectation. In this setting, we rely on the Robbins-Monro algorithm (Algorithm 4.1) to compute solutions of the Euler-Lagrange equations defining minimizers. Figure 2.1 depicts the convergence of the Robbins-Monro iterates towards the desired root, for our illustrative scalar example. It also shows that $D_{\mathrm{KL}}$ is reduced along the iteration.

(a) Convergence of $m$
(b) Convergence of $\sigma$
(c) Minimization of $D_{\mathrm{KL}}$
Figure 2.1: Convergence of $m$ and $\sigma$ towards the values found via deterministic root finding for the scalar problem with potential (2.2). The iterates are generated using Algorithm 4.1, Robbins-Monro applied to KL minimization. Also plotted are values of the KL divergence along the iteration sequence. The true optimal value is recovered, and the KL divergence is reduced. To ensure convergence, $m$ and $\sigma$ are constrained to lie in bounded intervals.

2.2 Sampling of the Target Distribution

Having obtained values of $m$ and $\sigma$ that minimize $D_{\mathrm{KL}}$, we may use $\nu = N(m,\sigma^2)$ to develop an improved MCMC sampling algorithm for the target measure $\mu$. We compare the performance of the standard pCN method of Algorithm 5.1, which uses no information about the best Gaussian fit $\nu$, with the improved pCN Algorithm 5.2, based on knowledge of $\nu$. The improved performance, gauged by acceptance rate and autocovariance, is shown in Figure 2.2.

(a) Acceptance Rate
(b) Autocovariance
Figure 2.2: Acceptance rates and autocovariances for sampling from (2.1) with potential (2.2). The curves labeled $\nu$ correspond to samples generated using our improved MCMC, Algorithm 5.2, which uses the KL-optimized $\nu$ for proposals. The curves labeled $\mu_0$ correspond to samples generated using Algorithm 5.1, which relies on $\mu_0$ for proposals. Algorithm 5.2 shows an order of magnitude improvement over Algorithm 5.1. For clarity, only a subset of the data is plotted in the figures.

All of this is summarized by Figure 2.3, which shows the three distributions $\mu$, $\mu_0$ and the KL-optimized $\nu$, together with a histogram generated by samples from the KL-optimized MCMC Algorithm 5.2. Clearly, $\nu$ better characterizes $\mu$ than $\mu_0$ does, and this is reflected in the higher acceptance rate and reduced autocovariance. Though this is merely a scalar problem, these ideas are universal. In all of our examples, we have a non-Gaussian distribution $\mu$ we wish to sample from, an uninformed reference measure $\mu_0$ which gives poor sampling performance, and an optimized Gaussian $\nu$ which better captures the target measure and can be used to improve sampling.

Figure 2.3: Distributions of $\mu$ (target), $\mu_0$ (reference) and $\nu$ (KL-optimized Gaussian) for the scalar problem with potential (2.2). Posterior samples have also been plotted, as a histogram. By inspection, $\nu$ better captures $\mu$ than $\mu_0$ does, leading to improved performance.

3 Parameterized Gaussian Approximations

We start in subsection 3.1 by describing some general features of the KL distance. Then in subsection 3.2 we discuss the case where $\nu$ is Gaussian. Subsections 3.3 and 3.4 describe two particular parameterizations of the Gaussian class that we have found useful in practice.

3.1 General Setting

Let $\nu$ be a measure defined by

\hb@xt@.01(3.1)

where we assume that the function appearing in the density is continuous on $X$. We aim to choose the best approximation to $\mu$, given by (1.1), from within some class of measures; this class will place restrictions on the form of $\nu$. Our best approximation is found by choosing the free parameters in $\nu$ to minimize the KL divergence between $\nu$ and $\mu$. This is defined as

\[
D_{\mathrm{KL}}(\nu\|\mu) \;=\; \int_{\mathcal{H}} \log\Bigl(\frac{d\nu}{d\mu}(x)\Bigr)\,\nu(dx)
\;=\; \mathbb{E}^{\nu}\Bigl[\log\frac{d\nu}{d\mu}\Bigr]. \tag{3.2}
\]

Recall that $D_{\mathrm{KL}}$ is not symmetric in its two arguments, and our reason for choosing $D_{\mathrm{KL}}(\nu\|\mu)$ relates to the possibility of capturing multiple modes individually; minimizing $D_{\mathrm{KL}}(\mu\|\nu)$ corresponds to moment matching in the case where the approximating class is the set of all Gaussians [4, 24].

Provided $\nu$ is equivalent to $\mu$, we can write

\hb@xt@.01(3.3)

where

\hb@xt@.01(3.4)

Integrating this identity with respect to $\nu$ gives

\hb@xt@.01(3.5)

Combining (3.2) with (3.3) and (3.5), we have

\hb@xt@.01(3.6)

The computational task in this paper is to minimize (3.6) over the parameters that characterize our class of approximating measures, which for us will be subsets of Gaussians. These parameters enter both the density and the normalization constant of $\nu$. It is noteworthy, however, that the normalization constants of $\mu$ and $\nu$ do not enter this expression for the distance and are hence not explicitly needed in our algorithms.
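To see why the normalization constant of $\mu$ plays no role, note that (1.1) gives the following standard decomposition (a short calculation in the notation above):
\[
D_{\mathrm{KL}}(\nu\|\mu)
\;=\; \mathbb{E}^{\nu}\Bigl[\log\frac{d\nu}{d\mu_0}\Bigr] + \mathbb{E}^{\nu}\bigl[\Phi\bigr] + \log Z_\mu
\;=\; D_{\mathrm{KL}}(\nu\|\mu_0) + \mathbb{E}^{\nu}\bigl[\Phi\bigr] + \log Z_\mu ,
\]
so that, up to the additive constant $\log Z_\mu$, which is independent of $\nu$, minimizing $D_{\mathrm{KL}}(\nu\|\mu)$ is equivalent to minimizing $D_{\mathrm{KL}}(\nu\|\mu_0)+\mathbb{E}^{\nu}[\Phi]$.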

To this end, it is useful to find the Euler-Lagrange equations of (3.6). Imagine that $\nu$ is parameterized by a finite set of parameters and that we wish to differentiate with respect to one of them. We rewrite the relevant expectation as an integral with respect to $\mu_0$, rather than $\nu$, differentiate under the integral, and then convert back to integrals with respect to $\nu$. From (3.3), we obtain

\hb@xt@.01(3.7)

Hence, from (3.3),

\hb@xt@.01(3.8)

Thus we obtain, from (3.2),

\hb@xt@.01(3.9)

and

Therefore, differentiating with respect to the parameter,

Using (LABEL:eq:b) we may rewrite this as an integration with respect to $\nu$, and we obtain

\hb@xt@.01(3.10)

Thus, this derivative is zero if and only if the two factors appearing in the integrand are uncorrelated under $\nu$.

3.2 Gaussian Approximations

Recall that the reference measure $\mu_0$ is Gaussian, with covariance operator $C_0$. We assume that $C_0$ is a strictly positive-definite trace class operator on $\mathcal{H}$ [6]. We let $\{(\lambda_j,\varphi_j)\}_{j\ge 1}$ denote the eigenvalue/eigenfunction pairs for $C_0$. Positive (resp. negative) fractional powers of $C_0$ are thus defined (resp. densely defined) on $\mathcal{H}$ by the spectral theorem, and we may define the Cameron-Martin space of the measure $\mu_0$, namely the range of $C_0^{1/2}$ equipped with the norm $\|C_0^{-1/2}\cdot\|$. We assume that the mean shift lies in this space, so that the corresponding Gaussian is equivalent to $\mu_0$ by the Cameron-Martin Theorem [6]. We seek to approximate $\mu$ given in (1.1) by a Gaussian $\nu = N(m,C)$ drawn from a subset of the Gaussian measures on $\mathcal{H}$. It is shown in [24] that this implies that $\nu$ is equivalent to $\mu_0$ in the sense of measures, and this in turn implies that the mean lies in the Cameron-Martin space and that the covariance $C$ is a perturbation of $C_0$; the operator characterizing this perturbation, given by

\hb@xt@.01(3.11)

satisfies

\hb@xt@.01(3.12)

here the operator space appearing in (3.12) is the space of Hilbert-Schmidt operators on $\mathcal{H}$.

For practical reasons, we do not attempt to recover the full covariance operator itself, but instead introduce low dimensional parameterizations. Two such parameterizations are introduced in this paper. In one, we introduce a finite rank operator, associated with a vector of parameters. In the other, we employ a multiplication operator characterized by a potential function. In both cases, the mean is an element of the Cameron-Martin space. Thus minimization will be over the mean together with either the finite rank operator or the potential.
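For orientation, in the simplest case in which only the mean is varied (so that $\nu = N(m, C_0)$ with $m$ in the Cameron-Martin space, and taking the reference mean to be zero for this illustration), the Cameron-Martin theorem gives the explicit change of measure
\[
\frac{d\nu}{d\mu_0}(x)
\;=\; \exp\Bigl( \bigl\langle C_0^{-1/2} m,\; C_0^{-1/2} x \bigr\rangle
\;-\; \tfrac{1}{2}\,\bigl\| C_0^{-1/2} m \bigr\|^{2} \Bigr),
\]
where the pairing is defined for $\mu_0$-almost every $x$ as a stochastic integral. The change of measure is thus controlled entirely by the Cameron-Martin norm of the mean shift; allowing $C \neq C_0$ adds a quadratic correction in $x$, which is what the two parameterizations below are designed to capture.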

In this Gaussian case the expressions for $D_{\mathrm{KL}}$ and its derivative, given by equations (3.6) and (3.10), can be simplified. Defining

\hb@xt@.01(3.13)

we observe that, under the assumptions above,

\hb@xt@.01(3.14)

This may be substituted into the definition in (LABEL:e:D), and used to calculate the objective and its derivative according to (LABEL:eq:KL2) and (3.10). However, we may derive alternate expressions as follows. Let us introduce the centered versions of $\nu$ and of $\mu_0$. Then, using the Cameron-Martin formula,

\hb@xt@.01(3.15)

where

\hb@xt@.01(3.16)

We also define a reduced function which will play a role in our computations:

\hb@xt@.01(3.17)

The consequence of these calculations is that, in the Gaussian case, (3.6) becomes

\hb@xt@.01(3.18)

Although the normalization constant now enters the expression for the objective function, it is irrelevant in the minimization since it does not depend on the unknown parameters in $\nu$. To better see the connection between (3.6) and (3.18), note that

\hb@xt@.01(3.19)

Working with (3.18), the Euler-Lagrange equations to be solved are:

\hb@xt@.01(3.20a)
\hb@xt@.01(3.20b)

Here, the differentiation in the second equation is with respect to any of the parameters that define the covariance operator of the Gaussian $\nu$. Equation (3.20a) is obtained by direct differentiation of (3.18), while (3.20b) is obtained in the same way as (3.10). These expressions are simpler for computations for two reasons. First, for the variation in the mean, we do not need the full covariance expression of (3.10). Second, the reduced function of (3.17) has fewer terms to compute.

3.3 Finite Rank Parameterization

Let $P$ denote orthogonal projection onto the span of the first $K$ eigenvectors of $C_0$, and define $Q = I - P$. We then parameterize the covariance of $\nu$ in the form

\hb@xt@.01(3.21)

In words, the inverse covariance of $\nu$ agrees with the inverse covariance of $\mu_0$ on $Q\mathcal{H}$, and is given by a symmetric positive definite matrix on the $K$-dimensional subspace $P\mathcal{H}$. Because that matrix is necessarily symmetric, it is essentially parametrized by a vector of dimension $K(K+1)/2$. We minimize over the mean together with this matrix. This is a well-defined minimization problem, as demonstrated in Example 3.7 of [24], in the sense that minimizing sequences have weakly convergent subsequences in the admissible set. Minimizers need not be unique, and we should not expect them to be, as multimodality is to be expected, in general, for measures defined by (1.1).
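Since the precise form of (3.21) is not reproduced above, the following sketch adopts one plausible realization of the verbal description, purely for illustration: in the eigenbasis of $C_0$ the covariance of $\nu$ acts as $C_0$ on all but the first $K$ modes, and as a dense symmetric positive definite $K\times K$ block on those modes. All names, and the parameterization itself, should be read as assumptions.

\begin{verbatim}
import numpy as np

def sample_finite_rank_gaussian(m, lam, Gamma, n_samples, rng=None):
    """Draw samples from a Gaussian N(m, C) expressed in the eigenbasis of C_0.

    Assumed (hypothetical) parameterization: C acts as C_0 = diag(lam) on all
    modes beyond the first K, and as the dense SPD matrix Gamma (K x K) on the
    span of the first K eigenvectors.  All vectors are coefficient vectors in
    the eigenbasis of C_0.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, K = len(lam), Gamma.shape[0]
    L = np.linalg.cholesky(Gamma)                    # factor of the low rank block
    xi = rng.standard_normal((n_samples, n))
    samples = np.empty_like(xi)
    samples[:, :K] = xi[:, :K] @ L.T                 # correlated leading modes
    samples[:, K:] = xi[:, K:] * np.sqrt(lam[K:])    # unchanged reference modes
    return m + samples

# minimal usage with a synthetic spectrum lam_j ~ j^{-2} and a hypothetical block
lam = 1.0 / np.arange(1, 65) ** 2
Gamma = np.diag(lam[:4]) + 0.01 * np.ones((4, 4))
draws = sample_finite_rank_gaussian(np.zeros(64), lam, Gamma, n_samples=5)
print(draws.shape)
\end{verbatim}

Working in the eigenbasis keeps the cost of handling the low rank block at $O(K^2)$ per sample, independent of the discretization dimension.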

3.4 Schrödinger Parameterization

In this subsection we assume that $\mathcal{H}$ comprises a Hilbert space of functions defined on a bounded open subset of $\mathbb{R}^d$. We then seek the operator appearing in (3.11) in the form of a multiplication operator, acting as pointwise multiplication by a potential function. Whilst minimization over the pair of mean and potential, with the mean in the Cameron-Martin space and the induced operator satisfying (3.12), is well-posed [24], minimizing sequences can behave very poorly with respect to the corresponding sequence of potentials. For this reason we regularize the minimization problem and seek to minimize

where the regularization parameter is positive and $H^1$ denotes the Sobolev space of functions on the domain with square integrable derivatives, with boundary conditions chosen appropriately for the problem at hand. The minimization of this regularized objective over the mean and the potential is well-defined; see Section 3.3 of [24].
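Schematically, and writing the multiplication potential as $B$ (a symbol introduced here only for illustration; the precise functional is that of Section 3.3 of [24]), the regularized objective is of Tikhonov-Phillips type,
\[
J_\delta(m, B) \;=\; D_{\mathrm{KL}}\bigl(\nu_{m,B}\,\|\,\mu\bigr) \;+\; \frac{\delta}{2}\,\| B \|_{H^1}^{2},
\qquad \delta > 0,
\]
so that the $H^1$ penalty rules out wildly oscillating potentials along minimizing sequences.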

4 Robbins-Monro Algorithm

In order to minimize $D_{\mathrm{KL}}$ we will use the Robbins-Monro algorithm [26, 2, 23, 20]. In its most general form this algorithm calculates zeros of functions defined via an expectation. We apply it to the Euler-Lagrange equations to find critical points of a non-negative objective function, defined via an expectation. This leads to a form of gradient descent in which we seek to integrate the equations

until they have reached a critical point. This requires two approximations. First, as (3.20) involves expectations, the right-hand sides of these differential equations are evaluated only approximately, by sampling. Second, a time discretization must be introduced. The key idea underlying the algorithm is that, provided the step-length of the algorithm is sent to zero judiciously, the sampling error averages out and is diminished as the step length goes to zero.

4.1 Background on Robbins-Monro

In this section we review some of the structure in the Euler-Lagrange equations for the desired minimization of $D_{\mathrm{KL}}$. We then describe the particular variant of the Robbins-Monro algorithm that we use in practice. Suppose we have a parameterized distribution, $\pi_\theta$, from which we can generate samples, and we seek a value $\theta^\ast$ for which

\hb@xt@.01(4.1)

Then an estimate of the zero, $\theta^\ast$, can be obtained via the recursion

\hb@xt@.01(4.2)

Note that the two approximations alluded to above are included in this procedure: sampling and (Euler) time-discretization. The methodology may be adapted to seek solutions to

\hb@xt@.01(4.3)

where the distribution is given, fixed, and independent of the parameter $\theta$. (This setup arises, for example, in (3.20a), where the relevant distribution is fixed and the parameter in question is the mean.) A change of variables recasts this in the previous form: mapping samples of the fixed distribution through the parameter-dependent integrand induces a parameterized distribution by push-forward, with the pre-image taken with respect to the sample argument, and the problem again has the form of (4.1). As suggested in the extensive Robbins-Monro literature, we take the step sequence $\{a_n\}$ to satisfy

\[
\sum_{n=1}^{\infty} a_n = \infty, \qquad \sum_{n=1}^{\infty} a_n^2 < \infty. \tag{4.4}
\]

A suitable choice is thus $a_n = a_0 n^{-\gamma}$ with $\gamma \in (1/2, 1]$. The smaller the value of $\gamma$, the more “large” steps will be taken, helping the algorithm to explore the configuration space. On the other hand, once the sequence is near the root, the smaller $\gamma$ is, the larger the Markov chain variance will be. In addition to the choice of the sequence $\{a_n\}$, the recursion (4.2) introduces an additional parameter, the number of samples to be generated per iteration. See [7, 2] and references therein for commentary on sample size.
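A minimal sketch of this recursion, with the sampler, the integrand $f$, the per-iteration sample size (called $M$ here), and the projection all supplied by the user (these names are illustrative, not the paper's):

\begin{verbatim}
import numpy as np

def robbins_monro(f, sample, theta0, n_iters=1000, M=10, a0=1.0, gamma=0.75,
                  project=lambda t: t, rng=None):
    """Generic Robbins-Monro iteration for E_{x ~ pi_theta}[ f(x; theta) ] = 0.

    f(x, theta)           : integrand whose expectation should vanish at the root
    sample(theta, M, rng) : draws M samples from the parameterized distribution
    project               : optional projection onto an admissible set
    The steps a_n = a0 * n^{-gamma} satisfy the summability conditions (4.4)
    for gamma in (1/2, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    for n in range(1, n_iters + 1):
        a_n = a0 * n ** (-gamma)
        xs = sample(theta, M, rng)
        drift = np.mean([f(x, theta) for x in xs], axis=0)  # Monte Carlo estimate
        theta = project(theta - a_n * drift)                # noisy gradient-type step
    return theta

# toy usage: recover the mean of N(2, 1) by driving E[theta - x] to zero
root = robbins_monro(f=lambda x, t: t - x,
                     sample=lambda t, M, rng: rng.normal(2.0, 1.0, size=M),
                     theta0=0.0)
print(root)   # approximately 2
\end{verbatim}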

The conditions needed to ensure convergence, and the kind of convergence obtained, have been relaxed significantly through the years. In their original paper, Robbins and Monro assumed that the observed random quantities were almost surely uniformly bounded, with a constant independent of the parameter. If they also assumed monotonicity, together with a nondegeneracy condition at the root, they could obtain convergence in mean square. With somewhat weaker assumptions, but still requiring that the zero be simple, Blum established convergence with probability one [5]. All of this was subsequently generalized to the arbitrary finite dimensional case; see [2, 20, 23].

As will be relevant to this work, there is the question of the applicability to the infinite dimensional case when we seek, for instance, a mean function in a separable Hilbert space. This has also been investigated; see [30, 10] along with references mentioned in the preface of [20]. In this work, we do not verify that our problems satisfy convergence criteria; this is a topic for future investigation.

A variation on the algorithm that is commonly applied is the enforcement of constraints which ensure that the iterates remain in some bounded set; see [20] for an extensive discussion. We replace (4.2) by

\hb@xt@.01(4.5)

where the projection maps onto a bounded admissible set, returning the nearest point of that set. This is important in our work, as the parameters that define $\nu$ must correspond to covariance operators. They must be positive definite, symmetric, and trace-class. Our method automatically produces symmetric trace-class operators, but the positivity has to be enforced by a projection.

4.2 Robbins-Monro Applied to KL

We seek minimizers of $D_{\mathrm{KL}}$ as stationary points of the associated Euler-Lagrange equations, (3.20). Before applying Robbins-Monro to this problem, we observe that we are free to precondition the Euler-Lagrange equations. In particular, we can apply bounded, positive, invertible operators so that the preconditioned gradient will lie in the same function space as the parameter; this makes the iteration scheme well posed. For (3.20a), we have found pre-multiplying by a fixed covariance operator to be sufficient. For (3.20b), the operator will be problem specific, depending on how the parameter enters the covariance, and also on whether there is a regularization; we denote the preconditioner for this second equation generically in what follows. Thus, the preconditioned Euler-Lagrange equations are

\hb@xt@.01(4.6a)
\hb@xt@.01(4.6b)

We must also ensure that the mean and the covariance parameters correspond to a well defined Gaussian; in particular, the induced operator must be a covariance operator. Consequently, the Robbins-Monro iteration scheme is:

Algorithm 4.1
  1. Set $n = 0$. Pick the initial mean and covariance parameters in the admissible set, and choose a step sequence $\{a_n\}$ satisfying (4.4).

  2. Update the mean and the covariance parameters according to:

    \hb@xt@.01(4.7a)
    \hb@xt@.01(4.7b)
  3. Increment $n$ and return to 2.

Typically, we have some a priori knowledge of the magnitude of the mean. For instance, the mean may correspond to a path joining two fixed endpoints, and we know it to be confined to some interval. In this case we choose

\hb@xt@.01(4.8)

For the covariance parameters, it is necessary to compute part of the spectrum of the operator that the current iterate induces, check that it is positive, and, if it is not, project the value to something satisfactory. In the case of the finite rank operators discussed in Section 3.3, the matrix must be positive. One way of handling this, for symmetric real matrices, is to make the following choice:

\hb@xt@.01(4.9)

where $\Gamma = V \Lambda V^{\top}$ is the spectral decomposition of the matrix, and $\lambda_{\min}$ and $\lambda_{\max}$ are constants chosen a priori. It can be shown that this projection gives the closest, with respect to the Frobenius norm, symmetric matrix with spectrum constrained to $[\lambda_{\min}, \lambda_{\max}]$ [17]. (Recall that the Frobenius norm is the finite dimensional analog of the Hilbert-Schmidt norm.)
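A small sketch of this spectral projection (argument names illustrative):

\begin{verbatim}
import numpy as np

def project_spectrum(A, lam_min, lam_max):
    """Closest symmetric matrix to A, in Frobenius norm, whose spectrum lies
    in [lam_min, lam_max]; A is assumed symmetric."""
    w, V = np.linalg.eigh(A)                  # spectral decomposition A = V diag(w) V^T
    w_clamped = np.clip(w, lam_min, lam_max)  # clamp the eigenvalues
    return (V * w_clamped) @ V.T

# usage: repair an indefinite symmetric matrix produced by a noisy update
A = np.array([[1.0, 2.0], [2.0, -0.5]])
print(np.linalg.eigvalsh(project_spectrum(A, 1e-3, 10.0)))  # eigenvalues >= 1e-3
\end{verbatim}

Applied after each Robbins-Monro update of the low rank block, this guarantees that the iterate defines a valid covariance.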

5 Improved MCMC Sampling

The idea of the Metropolis-Hastings variant of MCMC is to create an ergodic Markov chain which is reversible, in the sense of Markov processes, with respect to the measure of interest; in particular the measure of interest is invariant under the Markov chain. In our case we are interested in the measure $\mu$ given by (1.1). Since this measure is defined on an infinite dimensional space, it is advisable to use MCMC methods which are well-defined in the infinite dimensional setting, thereby ensuring that the resulting methods have mixing rates independent of the dimension of the finite dimensional approximation space. This philosophy is explained in the paper [9]. The pCN algorithm is perhaps the simplest MCMC method for (1.1) meeting these requirements. It has the following form:

Algorithm 5.1

Define the acceptance probability $a(u,v)=\min\bigl\{1,\exp\bigl(\Phi(u)-\Phi(v)\bigr)\bigr\}$ and the pCN proposal $v^{(k)}=\sqrt{1-\beta^{2}}\,u^{(k)}+\beta\,\xi^{(k)}$, where $\xi^{(k)}\sim N(0,C_0)$ is drawn independently at each step and $\beta\in(0,1]$.

  1. Set $k=0$ and pick $u^{(0)}$.

  2. Set $u^{(k+1)}=v^{(k)}$ with probability $a\bigl(u^{(k)},v^{(k)}\bigr)$.

  3. Set $u^{(k+1)}=u^{(k)}$ otherwise.

  4. $k\to k+1$ and return to 2.

This algorithm has a spectral gap which is independent of the dimension of the discretization space under quite general assumptions on $\Phi$ [15]. However, it can still behave poorly if $\Phi$, or its gradients, are large. This leads to poor acceptance probabilities unless $\beta$ is chosen very small so that proposed moves are localized; either way, the correlation decay is slow and mixing is poor in such situations. This problem arises because the underlying Gaussian $\mu_0$ used in the algorithm construction is far from the target measure $\mu$. This suggests a potential resolution in cases where we have a good Gaussian approximation to $\mu$, such as the measure $\nu$. Rather than basing the pCN approximation on (1.1) we base it on (3.3); this leads to the following algorithm:

Algorithm 5.2

Define the acceptance probability $a_{\nu}(u,v)$ analogously, with the negative log-density of $\mu$ with respect to $\nu$ in place of $\Phi$, and the proposal $v^{(k)}=m+\sqrt{1-\beta^{2}}\,\bigl(u^{(k)}-m\bigr)+\beta\,\xi^{(k)}$, where $\xi^{(k)}\sim N(0,C)$ is drawn independently at each step.

  1. Set $k=0$ and pick $u^{(0)}$.

  2. Set $u^{(k+1)}=v^{(k)}$ with probability $a_{\nu}\bigl(u^{(k)},v^{(k)}\bigr)$.

  3. Set $u^{(k+1)}=u^{(k)}$ otherwise.

  4. $k\to k+1$ and return to 2.

We expect the negative log-density of $\mu$ with respect to $\nu$ to be smaller than $\Phi$, at least in regions of high probability. This suggests that, for given $\beta$, Algorithm 5.2 will have better acceptance probability than Algorithm 5.1, leading to more rapid sampling. We show in what follows that this is indeed the case.
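The following sketch implements both samplers on a discretized space; it assumes the Gaussian reference is supplied through its mean and a square root of its covariance, and that the function `phi` evaluates the appropriate negative log-density ($\Phi$ for Algorithm 5.1, the negative log-density of $\mu$ with respect to $\nu$ for Algorithm 5.2). Names and interfaces are illustrative rather than the paper's.

\begin{verbatim}
import numpy as np

def pcn(phi, m, cov_sqrt, u0, beta=0.2, n_iters=10_000, rng=None):
    """pCN sampler, reversible with respect to the measure with density
    proportional to exp(-phi(u)) relative to N(m, C), where C = S S^T and
    S = cov_sqrt.  Taking (m, C) as the reference Gaussian and phi = Phi
    corresponds to Algorithm 5.1; taking (m, C) from the KL-optimized Gaussian
    and phi the negative log-density of mu w.r.t. nu corresponds to Algorithm 5.2.
    """
    rng = np.random.default_rng() if rng is None else rng
    u, phi_u = np.array(u0, dtype=float), phi(u0)
    accepted, samples = 0, []
    for _ in range(n_iters):
        xi = cov_sqrt @ rng.standard_normal(len(u))
        v = m + np.sqrt(1.0 - beta**2) * (u - m) + beta * xi   # pCN proposal
        phi_v = phi(v)
        if rng.uniform() < np.exp(min(0.0, phi_u - phi_v)):    # accept w.p. min{1, e^{phi(u)-phi(v)}}
            u, phi_u = v, phi_v
            accepted += 1
        samples.append(u.copy())
    return np.array(samples), accepted / n_iters

# toy usage on a one dimensional state with a quartic change of measure
Phi = lambda u: float(u[0] ** 4)
chain, rate = pcn(Phi, m=np.zeros(1), cov_sqrt=np.eye(1), u0=np.zeros(1))
print(rate)
\end{verbatim}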

6 Numerical Results

In this section we describe our numerical results. These concern both the solution of the relevant minimization problem, to find the best Gaussian approximation from within a given class using Algorithm 4.1 applied to the two parameterizations given in subsections 3.3 and 3.4, and results illustrating the new pCN Algorithm 5.2, which employs the best Gaussian approximation within MCMC. We consider two model problems: a Bayesian inverse problem arising in PDEs, and a conditioned diffusion problem motivated by molecular dynamics. Some details on the path generation algorithms used in these two problems are given in Appendix LABEL:a:samples.

6.1 Bayesian Inverse Problem

We consider an inverse problem arising in groundwater flow. The forward problem is modelled by the Darcy constitutive model for porous medium flow. The objective of the forward problem is to find the hydraulic head, given the log permeability, from the equation

\hb@xt@.01(6.1a)
\hb@xt@.01(6.1b)

The inverse problem is to find the log permeability given noisy observations

where the observation functionals are continuous linear functionals on the solution space. This corresponds to determining the log permeability from measurements of the hydraulic head (height of the water-table). Letting the forward map be the solution operator of (6.1) composed with the vector of linear functionals, we may write, in vector form,

We assume that the observational noise is Gaussian and place a Gaussian prior on the log permeability. Then the Bayesian inverse problem has the form (1.1), where

We consider this problem in dimension one, on the unit interval, employing pointwise observation at a finite set of points as the linear functionals. As prior we take a Gaussian, with covariance operator

restricted to the subspace of periodic mean zero functions. In one dimension we may solve the forward problem (6.1), with the given boundary conditions, explicitly to obtain

\hb@xt@.01(6.2)

and

\hb@xt@.01(6.3)
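Although (6.1)-(6.3) are not reproduced above, the following sketch indicates how a one-dimensional Darcy forward solve of this type reduces to quadrature. It assumes a source-free equation $-(e^{u}p')'=0$ on $[0,1]$ with Dirichlet data at both ends, which may differ in detail from the problem actually solved in the paper; the observation points in the usage lines are likewise illustrative.

\begin{verbatim}
import numpy as np

def darcy_1d_forward(u, p_left=0.0, p_right=1.0):
    """Solve -(e^{u(x)} p'(x))' = 0 on [0,1] with p(0)=p_left, p(1)=p_right.

    The flux q = e^{u} p' is constant, so p(x) = p_left + q * int_0^x e^{-u(s)} ds,
    with q fixed by the right-hand boundary condition; u is given on a uniform grid.
    """
    x = np.linspace(0.0, 1.0, len(u))
    inv_perm = np.exp(-u)                    # e^{-u(x)}
    I = np.concatenate([[0.0],
                        np.cumsum(0.5 * (inv_perm[1:] + inv_perm[:-1]) * np.diff(x))])
    q = (p_right - p_left) / I[-1]           # constant Darcy flux
    return p_left + q * I                    # hydraulic head on the grid

# usage: evaluate the head at (illustrative) observation points
grid = np.linspace(0.0, 1.0, 101)
u = np.sin(2 * np.pi * grid)
p = darcy_1d_forward(u)
print(np.interp([0.2, 0.4, 0.6, 0.8], grid, p))
\end{verbatim}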

Following the methodology of [18], to compute derivatives of $\Phi$, we must solve the adjoint problem:

\hb@xt@.01(6.4)

Again, we can write the solution explicitly via quadrature:

\hb@xt@.01(6.5)

Using (6.2) and (6.5),

\hb@xt@.01(6.6)

For this application we use a finite rank approximation of the covariance of the approximating measure $\nu$, as explained in subsection 3.3. In computing with the finite rank matrix (3.21), it is useful, for good convergence, to work with a suitable transformation of that matrix. The preconditioned derivatives, (4.6), also require the term given by (LABEL:e:D0). To characterize this term, we work with the first $K$ coefficients of the relevant functions in the eigenbasis of $C_0$. Then, for the finite rank approximation,

\hb@xt@.01(6.7)

Then using our parameterization with respect to the matrix ,

\hb@xt@.01(6.8)

As a preconditioner for (LABEL:e:preDJ_Gauss_C) we found that it was sufficient to multiply by .

We solve this problem with Ranks 2, 4, and 6, first minimizing $D_{\mathrm{KL}}$, and then running the pCN Algorithm 5.2 to sample from the posterior. The common parameters are:

  • , , and ;

  • There are uniformly spaced grid points in ;

  • Equations (6.2) and (6.5) are solved via trapezoidal rule quadrature;

  • The true value of ;

  • The dimension of the data is four, with samples at ;

  • and , ;

  • is estimated spectrally;

  • iterations of the Robbins-Monro algorithm are performed with samples per iteration;

  • and ;

  • The eigenvalues of are constrained to the interval and the mean is constrained to ;

  • pCN Algorithms 5.1 and 5.2 are implemented with , and iterations.

The results of the optimization phase of the problem, using the Robbins-Monro Algorithm 4.1, appear in Figure 6.1. This figure shows: the convergence of the mean in the Rank 2 case; the convergence of the eigenvalues of the low rank block for Ranks 2, 4, and 6; and the minimization of $D_{\mathrm{KL}}$. We only present the convergence of the mean in the Rank 2 case, as the others are quite similar. At the termination of the Robbins-Monro step, the matrices are:

\hb@xt@.01(6.9)
\hb@xt@.01(6.10)
\hb@xt@.01(6.11)

Note there is consistency as the rank increases, and this is reflected in the eigenvalues shown in Figure 6.1. As in the case of the scalar problem, more iterations of Robbins-Monro are computed than are needed to ensure convergence.

(a) Convergence of the mean for Rank 2
(b) The mean for Rank 2 at particular iterations
(c) Convergence of the eigenvalues of the low rank block
(d) Minimization of $D_{\mathrm{KL}}$
Figure 6.1: Convergence of the Robbins-Monro Algorithm 4.1 applied to the Bayesian inverse problem. Figures (a) and (b) show the convergence of the mean in the case of Rank 2, while Figure (c) shows the convergence of the eigenvalues for Ranks 2, 4 and 6. Figure (d) shows the minimization of $D_{\mathrm{KL}}$. The observational noise is the larger of the two values considered below. The figures indicate that Ranks 2, 4 and 6 have all converged within the number of iterations shown.

The posterior sampling, by means of Algorithms 5.1 and 5.2, is described in Figure 6.2. There is good posterior agreement in the means and variances in all cases, and the low rank Gaussian approximations provide not just good means but also good variances. This is reflected in the high acceptance rates and low autocovariances; there is approximately an order of magnitude improvement of Algorithm 5.2, which is informed by the best Gaussian approximation, over Algorithm 5.1, which is not.

However, notice in Figure 6.1 that the posterior, even when one standard deviation is included, does not capture the truth. The results are more favorable when we consider the pressure field, and this hints at the origin of the disagreement. The values at the observation points at and below 0.4, and to a lesser extent at 0.6, are dominated by the noise. Our posterior estimates reflect the limitations of what we are able to predict given our assumptions. If we repeat the experiment with smaller observational noise, we see better agreement, and also variation in performance with respect to approximations of different ranks. These results appear in Figure 6.3. In this smaller noise case, there is a two order of magnitude improvement in performance.

(a) Log permeability
(b) Pressure
(c) Acceptance Rate
(d) Autocovariance
Figure 6.2: Behavior of MCMC Algorithms 5.1 and 5.2 for the Bayesian inverse problem at the larger observational noise level. The true posterior distribution, $\mu$, is sampled using $\mu_0$ (Algorithm 5.1) and $\nu$, with Ranks 2, 4 and 6 (Algorithm 5.2). The resulting posterior approximations are labeled $\mu_0$ (Algorithm 5.1) and Rank 2, etc. (Algorithm 5.2). The notation $\mu_0$ and Rank $K$ is also used for the prior and the best Gaussian approximations of the corresponding rank. The distributions of the log permeability, in Figure (a), for the optimized Rank 2 approximation and for the posterior overlap, but are still far from the truth. The results for Ranks 4 and 6 are similar. Figures (c) and (d) compare the performance of Algorithm 5.2 when using Rank $K$ for the proposal, with $K = 2$, 4, and 6, against Algorithm 5.1. Rank 2 gives an order of magnitude improvement in posterior sampling over $\mu_0$. There is no significant improvement when using Ranks 4 and 6 over using Rank 2. Shaded regions enclose one standard deviation.
(a) Log permeability
(b) Pressure
(c) Acceptance Rate
(d) Autocovariance
Figure 6.3: Behavior of MCMC Algorithms 5.1 and 5.2 for the Bayesian inverse problem at the smaller observational noise level. Notation as in Figure 6.2. The distributions of the log permeability, shown in Figure (a), for both the optimized Rank 6 approximation and the posterior overlap, and are close to the truth. Unlike the larger noise case, Figures (c) and (d) show improvement in using Rank 6 within Algorithm 5.2 over Ranks 2 and 4. However, all three cases of Algorithm 5.2 are at least two orders of magnitude better than Algorithm 5.1, which uses only $\mu_0$. Shaded regions enclose one standard deviation.

6.2 Conditioned Diffusion Process

Next, we consider the measure $\mu$ given by (1.1) in the case where $\mu_0$ is the distribution of a unit Brownian bridge connecting the prescribed endpoints over the given time interval, and

a double well potential. This also has an interpretation as a conditioned diffusion [25]. Note that and with with
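For concreteness, reference paths distributed according to such a bridge can be generated as follows; the time interval $[0,1]$ and the endpoint values $0$ and $1$ are assumptions made for illustration (the paper's path generation is described in its appendix).

\begin{verbatim}
import numpy as np

def brownian_bridge_paths(n_paths, n_steps, x0=0.0, x1=1.0, T=1.0, rng=None):
    """Sample discretized unit Brownian bridge paths connecting x0 to x1 on [0, T]."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(0.0, T, n_steps + 1)
    dW = np.sqrt(T / n_steps) * rng.standard_normal((n_paths, n_steps))
    W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
    bridge = W - (t / T) * W[:, -1:]             # pin the Brownian path to zero at T
    return x0 + (x1 - x0) * (t / T) + bridge     # shift to the prescribed endpoints

paths = brownian_bridge_paths(n_paths=4, n_steps=200)
print(paths[:, 0], paths[:, -1])                 # endpoints are exactly x0 and x1
\end{verbatim}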

We seek the approximating measure $\nu$ in the form $N(m, C)$, with the mean and the potential to be varied, where

and the potential is either a constant, or is a function viewed as a multiplication operator.

We examine both cases of this problem, performing the optimization, followed by pCN sampling. The results were then compared against the uninformed prior, $\mu_0$. For the constant case, no preconditioning of the potential was performed, and a constant initial guess was used. For the variable potential, a Tikhonov-Phillips regularization was introduced,

\hb@xt@.01(6.12)

For computing the gradients (4.6) and estimating $D_{\mathrm{KL}}$,

\hb@xt@.01(6.13a)
\hb@xt@.01(6.13b)

No preconditioning is applied for (6.13b) in the case that the potential is a constant, while in the case that it is variable, the preconditioned gradient is

Because of the regularization, we must invert a second order differential operator, requiring the specification of boundary conditions. By a symmetry argument, we specify a Neumann boundary condition at one endpoint. At the other endpoint, we specify a Dirichlet condition, a “far field” approximation.
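As an illustration of the boundary conditions just described, the following sketch assembles a second order centered finite-difference matrix with a Neumann condition at the left endpoint and a homogeneous Dirichlet condition at the right endpoint; the form of the regularized system being solved, $(I+\delta A)$, is an assumption made for illustration.

\begin{verbatim}
import numpy as np

def neumann_dirichlet_laplacian(n, h):
    """Second order centered finite-difference matrix for -d^2/dx^2 on n grid
    points with spacing h, with a Neumann condition at the left endpoint
    (ghost-point treatment) and a homogeneous Dirichlet condition at the right
    endpoint (boundary value eliminated)."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2.0
        if i > 0:
            A[i, i - 1] = -1.0
        if i < n - 1:
            A[i, i + 1] = -1.0
    A[0, 1] = -2.0          # mirror condition u_{-1} = u_1 for the Neumann end
    return A / h**2

# usage: solve a regularized system of the assumed form (I + delta * A) x = b
n, h, delta = 100, 1.0 / 100, 1e-2
M = np.eye(n) + delta * neumann_dirichlet_laplacian(n, h)
x = np.linalg.solve(M, np.ones(n))
print(x[:3])
\end{verbatim}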

The common parameters used are:

  • The temperature ;

  • There were uniformly spaced grid points in ;

  • As the endpoints of the mean path are 0 and 1, we constrained our paths to lie in ;

  • and were constrained to lie in , to ensure positivity of the spectrum;

  • The standard second order centered finite difference scheme was used for ;

  • Trapezoidal rule quadrature was used to estimate and , with second order centered differences used to estimate the derivatives;

  • , , , the right endpoint value;

  • iterations of the Robbins-Monro algorithm are performed with samples per iteration;

  • and