Proposals which speed-up function-space MCMC

K. J. H. Law, Warwick Mathematics Institute, University of Warwick, Coventry CV4 7AL, UK. Email: k.j.h.law@warwick.ac.uk
Abstract

Inverse problems lend themselves naturally to a Bayesian formulation, in which the quantity of interest is a posterior distribution of state and/or parameters given some uncertain observations. In the common case in which the forward operator is smoothing, the inverse problem is ill-posed. Well-posedness is imposed via regularisation in the form of a prior, which is often Gaussian. Under quite general conditions, it can be shown that the posterior is absolutely continuous with respect to the prior, and it may be well-defined on function space in terms of its density with respect to the prior. In this case, by constructing a proposal for which the prior is invariant, one can define Metropolis-Hastings schemes for MCMC which are well-defined on function space (Stuart10 (); CRSW12 ()), and hence do not degenerate as the dimension of the underlying quantity of interest increases to infinity, e.g. under mesh refinement when approximating a PDE in finite dimensions. However, in practice, despite the attractive theoretical properties of the currently available schemes, they may still suffer from long correlation times, particularly if the data is very informative about some of the unknown parameters. In fact, in this case it may be the directions of the posterior which coincide with the (already known) prior that decorrelate most slowly. The information incorporated into the posterior through the data is often contained within some finite-dimensional subspace, in an appropriate basis, perhaps even one defined by eigenfunctions of the prior. We aim to exploit this fact and improve the mixing time of function-space MCMC by careful rescaling of the proposal. To this end, we introduce two new basic methods of increasing complexity, involving (i) characteristic function truncation of high frequencies and (ii) Hessian information to interpolate between low and high frequencies. The second, more sophisticated version bears some similarities with recent methods which exploit local curvature information, for example RMHMC girolami2011riemann (), and stochastic Newton martin2012stochastic (). These ideas are illustrated with numerical experiments on the Bayesian inversion of the heat equation and Navier-Stokes equation, given noisy observations.

1 Introduction

Sampling distributions over high-dimensional state-spaces is a notoriously difficult problem, and a lot of effort has been devoted to developing methods to do this effectively BRSV08 (); GGR97 (); robtwe96 (); CRSW12 (). If a given target distribution is known pointwise, then Markov chain Monte Carlo is a general method which prescribes a Markov chain having this target distribution as its invariant distribution. Examples include Metropolis-Hastings schemes, developed first by MRRT53 () and later generalized for arbitrary proposals by Has70 (). It is well known that Metropolis-Hastings schemes developed for finite-dimensional systems degenerate in the limit of inference on infinite-dimensional spaces and a lot of work has been devoted to examining the actual rate with which they degenerate GGR97 (). Indeed this analysis has led to improved performance of chains in finite dimensions through arguments about optimal scaling of the acceptance probability. Recently, it has been shown in a series of works by BRSV08 (); CRSW12 (); Stuart10 () that it is possible to overcome this limitation by defining the Metropolis-Hastings scheme in function space, thus confronting the issue of high dimensions head on. This is particularly useful, for example, in the case where the system is a finite-dimensional approximation of a PDE. In such a case, the convergence properties of the function-space scheme are independent of mesh refinement which increases the dimension of the finite-dimensional approximating system. However, it may (and often does in practice) happen that, even though the scheme is defined on function-space and is hence independent of mesh refinement, the integrated autocorrelation may be quite large for certain degrees of freedom, in particular those which have small effect on the likelihood. Hence, there is clearly a paradox in that the degrees of freedom which are not informed by the data, and hence are governed by the prior, are the rate limiting factor in the convergence of MCMC. But, at the same time, more is known a priori about such degrees of freedom. We aim to maintain the positive aspect of being defined in function space while removing the deficit of mixing poorly in directions which are better known a priori. This will be accomplished by appropriate rescaling based on curvature which leverages a priori known information. Inspired by the works BRSV08 (); CRSW12 (); Stuart10 () we will focus on sampling distributions over some Hilbert space which have density with respect to a Gaussian reference measure of the following form

(1.1)

where , and . For simplicity, we let

(1.2)

and we will further assume that is a compact operator. This function arises as the negative log likelihood of given a realization of an observation , where independent of . The posterior distribution on , conditioned on the particular observation , is given by (1.1). Furthermore, we will assume that for some trace-class operator .

The rest of this paper will be organized as follows. In Section 2, we will provide a review of necessary concepts, including the recently developed function-space MCMC method pCN CRSW12 () which will serve as a starting point for this work. In Section 3 we illustrate the remaining decorrelation issues pertaining to proposal chains of the pCN form. In Section 4 we introduce a class of constant operator rescaled proposals which address the issues highlighted in Section 3 and yet still fit into the function-space sampler framework, hence taking full advantage of its benefits. In Section 5, we investigate the performance of these algorithms computationally against standard pCN for the inverse heat equation and inversion of Navier-Stokes equation. Finally, we provide conclusions and possible future directions in Section 6.

2 Background

In this section we give some background on Markov chain Monte Carlo (MCMC) methods, and in particular the Metropolis-Hastings (MH) variants. We then introduce the preconditioned Crank-Nicolson (pCN) proposal that yields an MH algorithm which is well-defined on function spaces for a wide range of posterior distributions of the form (1.1), for example those arising from the Bayesian interpretation of the solution to an inverse problem with Gaussian prior and observational noise. Finally, we will describe the way that correlations enter into approximations based on the resulting set of samples and define the effective sample size, which will be relevant in the derivation of the new methods which follow.

2.1 MCMC

MCMC methods aim to sample from a target distribution $\mu$ over a state space $\mathcal{H}$ by designing a Markov transition kernel $P$ such that $\mu$ is invariant under the action of $P$:

(2.1)   $\int_{\mathcal{H}} P(u, A)\, \mu(du) = \mu(A) \quad \text{for all measurable } A \subseteq \mathcal{H},$

with shorthand $\mu P = \mu$, where the integral on the left-hand side is with respect to $u$. The condition known as detailed balance between a transition kernel $P$ and a probability distribution $\mu$ says that

(2.2)   $\mu(du)\, P(u, dv) = \mu(dv)\, P(v, du).$

Integrating with respect to $u$, one can see that detailed balance implies $\mu P = \mu$. Metropolis-Hastings methods prescribe an accept-reject move based on proposals from an arbitrary transition kernel $Q$ in order to define a kernel $P$ such that detailed balance with an arbitrary probability distribution $\mu$ is guaranteed. In order to make sense of MH in infinite dimensions, first define the measures Tie ()

(2.3)   $\nu(du, dv) = Q(u, dv)\, \mu(du), \qquad \nu^{\top}(du, dv) = Q(v, du)\, \mu(dv).$

Provided $\nu^{\top} \ll \nu$, where $\ll$ denotes absolute continuity between measures, one can define the MH kernel as follows. Given the current state $u$, a proposal is drawn $v \sim Q(u, \cdot)$, and then accepted with probability

(2.4)   $\alpha(u, v) = \min\left\{ 1, \frac{d\nu^{\top}}{d\nu}(u, v) \right\}.$

The resulting chain has transition kernel given by

(2.5)   $P(u, dv) = \alpha(u, v)\, Q(u, dv) + \delta_u(dv) \int_{\mathcal{H}} \big(1 - \alpha(u, w)\big)\, Q(u, dw).$

So, we have that

(2.6)   $\mu(du)\, \alpha(u, v)\, Q(u, dv) = \min\big\{ \nu(du, dv),\ \nu^{\top}(du, dv) \big\},$

which one can see satisfies the necessary symmetry condition (2.2). Under an additional assumption of geometric ergodicity, one has that $P^n(u_0, \cdot) \to \mu$ for any initial state $u_0$ HSV12 (). Therefore, after a sufficiently long equilibration period, or burn-in, one has a prescription for generating samples approximately distributed according to $\mu$, from which integrals with respect to $\mu$ can be approximated. These samples are correlated, however, and this will be revisited below.

Since there is great freedom in the choice of proposal kernel $Q$, one can imagine that there is great discrepancy in the performance of the algorithms resulting from different choices. It is popular to choose symmetric kernels, such as the classical random walk algorithm, in which the difference between the current and proposed states is taken to be a centered Gaussian distribution, e.g.

$v = u + \beta \xi, \qquad \xi \sim N(0, \mathcal{C}),$

where $\mathcal{C}$ is the covariance of the prior $\mu_0$. For this proposal, the absolute continuity condition $\nu^{\top} \ll \nu$ is violated, so the MH algorithm is defined only in finite dimensions Stuart10 () and degenerates under mesh refinement. However, a small modification to the classical random walk proposal,

(2.7)   $v = \left(1 - \beta^2\right)^{1/2} u + \beta \xi, \qquad \xi \sim N(0, \mathcal{C}),$

yields an algorithm which satisfies $\nu^{\top} \ll \nu$, since $Q$ and $\mu_0$ are in detailed balance. This algorithm is therefore defined on function space, and it is referred to as pCN in CRSW12 (). The key point here is that the posterior has a density with respect to the infinite-dimensional prior Gaussian measure, and so designing a proposal such as the one above which is reversible with respect to this measure leads to the necessary condition $\nu^{\top} \ll \nu$. The condition that $\nu^{\top} \ll \nu$ for algorithms on function space ensures that the methods are well-behaved with respect to mesh refinement; only such methods are considered in this paper (see CRSW12 () for more details on that point). The choice of proposal which satisfies detailed balance with the prior is natural and elegant because it leads to an acceptance probability which depends only on the ratio of the likelihood evaluated at its arguments:

(2.8)   $\alpha(u, v) = \min\left\{ 1, \exp\big( \Phi(u) - \Phi(v) \big) \right\}.$
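
To make the preceding construction concrete, the following is a minimal finite-dimensional sketch of an MH sampler with the pCN proposal (2.7) and acceptance probability (2.8). The names phi (the negative log-likelihood $\Phi$) and C_sqrt (a square root of the prior covariance) are placeholders for problem-specific components, not objects defined in the paper.

```python
import numpy as np

def pcn_mh(phi, C_sqrt, u0, beta=0.1, n_steps=10_000, rng=None):
    """Minimal sketch of Metropolis-Hastings with the pCN proposal (2.7),
    targeting a (discretized) posterior with density exp(-phi(u)) with respect
    to the Gaussian prior N(0, C), where C = C_sqrt @ C_sqrt.T."""
    rng = np.random.default_rng() if rng is None else rng
    u, phi_u = np.array(u0, dtype=float), phi(u0)
    samples, n_accept = [], 0
    for _ in range(n_steps):
        xi = C_sqrt @ rng.standard_normal(u.shape)      # xi ~ N(0, C)
        v = np.sqrt(1.0 - beta**2) * u + beta * xi      # pCN proposal (2.7)
        phi_v = phi(v)
        # accept with probability min{1, exp(Phi(u) - Phi(v))}, eq. (2.8)
        if np.log(rng.uniform()) < phi_u - phi_v:
            u, phi_u, n_accept = v, phi_v, n_accept + 1
        samples.append(u.copy())
    return np.array(samples), n_accept / n_steps
```

Because the acceptance probability depends only on $\Phi$, the same sketch applies unchanged as the discretization dimension grows, which is the robustness-to-mesh-refinement property discussed above.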

2.2 Correlations

Let $f : \mathcal{H} \to \mathbb{R}^n$, and suppose we would like to estimate the integral $\bar{f} = \int_{\mathcal{H}} f(u)\, \mu(du)$ by samples $\{u^{(k)}\}_{k=1}^{K}$, where $u^{(k)} \sim \mu$. Denoting this estimate by $\hat{f}_K$, we have

$\hat{f}_K = \frac{1}{K} \sum_{k=1}^{K} f\big(u^{(k)}\big).$

Then $\mathbb{E}\,\hat{f}_K = \bar{f}$, so the estimator is without bias. Furthermore, we denote the variance of $f(u)$ by

$\sigma^2 = \mathbb{E}\,\big| f(u) - \bar{f} \big|^2 .$

So, we also have that

$\mathbb{E}\,\big| \hat{f}_K - \bar{f} \big|^2 = \frac{1}{K^2} \sum_{k,l=1}^{K} \mathbb{E}\Big[ \big\langle f(u^{(k)}) - \bar{f},\, f(u^{(l)}) - \bar{f} \big\rangle \Big].$

We restrict attention to the case $n = 1$, for ease of exposition only. Assume also that the samples are correlated in such a way that

$\mathbb{E}\Big[ \big( f(u^{(k)}) - \bar{f} \big)\big( f(u^{(k+j)}) - \bar{f} \big) \Big] = \sigma^2 \rho_j .$

For example, such samples arise from a sample path of a homogeneous Markov chain, such as in MCMC. Then

(2.9)   $\mathbb{E}\,\big| \hat{f}_K - \bar{f} \big|^2 \approx \frac{\sigma^2}{K}\Big( 1 + 2 \sum_{j=1}^{J} \rho_j \Big) \approx \frac{\sigma^2 \tau}{K},$

where we assumed $\rho_j \approx 0$ for all $j$ with $j > J$ (and $J \ll K$), and we denoted the integrated autocorrelation by $\tau = 1 + 2 \sum_{j \geq 1} \rho_j$. If the $u^{(k)}$ are independent then $\tau = 1$ and we recover the constant prefactor $\sigma^2$ to $1/K$. Otherwise, the constant prefactor reduces the effective number of samples with respect to the case of independent samples. It is therefore informative to define the effective sample size by $K_{\mathrm{eff}} = K / \tau$. Phrased another way, one needs $\tau$ samples in order to get another approximately independent sample. Hence, $\{\rho_j\}$ and $\tau$ are referred to as the autocorrelation function and the integrated autocorrelation, respectively. The general case is dealt with similarly, working elementwise. In this case, the effective sample size is given by the minimum over effective sample sizes of individual degrees of freedom, strictly speaking.
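
As a concrete illustration of these definitions, the following sketch estimates the autocorrelation function $\rho_j$, the integrated autocorrelation $\tau$, and the effective sample size from a scalar chain; the simple truncation of the sum at max_lag is an illustrative choice rather than anything prescribed in the text.

```python
import numpy as np

def effective_sample_size(x, max_lag=None):
    """Estimate rho_j, tau = 1 + 2 * sum_j rho_j, and K_eff = K / tau
    from a scalar chain x_1, ..., x_K (e.g. one observable of an MCMC run)."""
    x = np.asarray(x, dtype=float)
    K = len(x)
    max_lag = K // 10 if max_lag is None else max_lag
    xc = x - x.mean()
    var = np.mean(xc**2)
    rho = np.array([np.mean(xc[: K - j] * xc[j:]) / var
                    for j in range(1, max_lag + 1)])   # empirical rho_j
    tau = 1.0 + 2.0 * rho.sum()                        # integrated autocorrelation
    return rho, tau, K / tau                           # K_eff = K / tau
```

For vector-valued chains one applies this per degree of freedom and, as noted above, takes the minimum of the resulting effective sample sizes.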

From the above discussion, it is clear that the performance of an MH algorithm in approximating integrals with respect to a distribution is limited by the correlation between the samples it generates. The rest of the paper will concern this point about correlations.

3 Proposal Chains for MCMC

In this section, we explore some drawbacks of the function-space sampling algorithm defined in the previous section. We will first motivate the need for modification with a simple one dimensional example. Then, we describe how this same idea lifts into higher dimensions.

3.1 Principle in 1D

Figure 1: Illustration of slow decorrelation in proposal chain , and fast decorrelation in the informed chain . Left plot shows autocorrelations, as compared also to the analytical autocorrelation of . Right plot shows an interval of the evolution over which has barely decorrelated, but has clearly decorrelated within a small sub-interval.

As an example, consider the following Markov chain

$u_{n+1} = \left(1 - \beta^2\right)^{1/2} u_n + \beta \xi_n,$

where $\xi_n \sim N(0, 1)$ i.i.d. for all $n$ and $u_0 \sim N(0, 1)$. So, $u_n \sim N(0, 1)$ for all $n$, and

$\mathbb{E}\left[ u_n u_{n+k} \right] = \left(1 - \beta^2\right)^{k/2}.$

Therefore, $\rho_k = (1 - \beta^2)^{k/2}$, and $\tau = 1 + 2 \sum_{k \geq 1} (1 - \beta^2)^{k/2} = \dfrac{1 + (1 - \beta^2)^{1/2}}{1 - (1 - \beta^2)^{1/2}}.$ Notice also that this is independent of $n$.

Now, we run an experiment. Let , with and . Consider the following Metropolis-Hastings chain with proposal given by :

(3.1)

The above chain samples the posterior distribution with observation and prior . We will denote the chains by and Figure 1 illustrates the decorrelation time and evolution of the proposal chain, , and the informed chain for a given choice of which yields reasonable acceptance probability. The speedup is astonishing, and indeed counterintuitive: one only accepts or rejects moves from the proposal chain, so intuition may suggest that the informed chain would have larger autocorrelation. But actually, if the distribution is dominated by the likelihood, then the integrated autocorrelation is determined by the accept/reject, and is indeed smaller than that of the proposal. On the other hand, if the distribution is dominated by the prior, then the integrated autocorrelation is dictated by the proposal chain and is then increased by the acceptance probability. If the state-space were two dimensional, with an observation only in the first dimension, then for a given choice of scalar the chain will mix like in the observed component and like in the unobserved component. The same idea extends to higher dimensions, motivating the use of direction-dependent proposal step-sizes.
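
A minimal simulation of this experiment is sketched below; since the specific observation, noise level, and step-size used above are not reproduced here, the values of y, gamma, and beta are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
y, gamma, beta, n_steps = 1.0, 0.1, 0.05, 200_000   # illustrative values only

def phi(x):                        # negative log-likelihood for y | x ~ N(x, gamma^2)
    return 0.5 * (y - x) ** 2 / gamma**2

u = np.zeros(n_steps)              # proposal chain (prior-invariant pCN in 1D)
v = np.zeros(n_steps)              # informed MH chain built on the same proposal
for n in range(n_steps - 1):
    u[n + 1] = np.sqrt(1 - beta**2) * u[n] + beta * rng.standard_normal()
    prop = np.sqrt(1 - beta**2) * v[n] + beta * rng.standard_normal()
    if np.log(rng.uniform()) < phi(v[n]) - phi(prop):   # acceptance (2.8)
        v[n + 1] = prop
    else:
        v[n + 1] = v[n]
# Comparing the empirical autocorrelations of u and v (e.g. with the
# effective_sample_size sketch above) reproduces the qualitative picture of
# Fig. 1: the informed chain decorrelates much faster when the likelihood dominates.
```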

Figure 2: Expected acceptance probability (top) and lag 1 autocorrelation (bottom) over with (left) and over with fixed (right), for proposals which are reversible with respect to the prior (blue) and the posterior (red dashed). Also shown at the bottom is the lag 1 autocorrelation of the proposal chain, , in black dash-dotted.

We now digress momentarily to explain this counterintuitive phenomenon in more detail for the simple analytically solvable example above. This is crucial to understanding the methods which will be developed in subsequent sections. In the above example, the posterior distribution is given by , where and . Assuming that , we know from Eqs. 3.1 that the correlation between subsequent steps is given by the following analytically known, yet horrendous, integral

(3.2)

where . It is instructive to also consider in Eq. (3.1) the alternative proposal which keeps the posterior invariant:

(3.3)

in which case in Eq. (3.2) and the acceptance (2.8) is modified accordingly. Indeed, it turns out that , with equality in the limit of . See also GGR97 (). We compute this integral using a Riemann sum and plot the corresponding values of and for varying with fixed, and varying with fixed in Fig. 2. In the top left panel, we see that for proposal (3.1) with the acceptance decreases from 1 to 0 as ranges from 0 to 1. In the bottom left panel below this we can see that the minimum lag 1 autocorrelation corresponds to the acceptance probability ; c.f. the well-known optimal scaling results of GGR97 (). For the proposal (3.3), the acceptance is 1 for all . It is clear from the bottom left plot that in this example we should use (3.3) with and do independence sampling on the known posterior. This obvious result in this idealistic example is instructive, however. It is not possible to do independence sampling on the posterior for non-Gaussian posteriors (or we would be done) and even independence sampling on the Gaussian approximation to the posterior may lead to unacceptably small acceptance probability. Indeed we can see that the prior proposal can perform as well as or better than the posterior proposal for appropriate , a step-size that is probably much too large in a Gaussian proposal whenever the target is non-Gaussian. This makes it clear that the crucial thing is to choose the appropriate scale of the proposal steps, rather than naively trying to match the target with the proposal (although this principle does hold for the case presented above in which the proposal keeps the target posterior invariant, and in general this does lead to reasonably effective proposals). From the top right panel, we can see that the acceptance probability for the posterior proposal is again 1 for all for a fixed , while the autocorrelation (bottom right) also remains approximately (which is quite large in this case). However, as this choice of is tuned to the prior proposal for , we see that the acceptance probability for prior proposal increases from to almost 1 as ranges from to 5. The corresponding lag 1 autocorrelation ranges from to approximately as increases and the posterior approaches the prior.
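
The integral (3.2) itself is not reproduced above, but the quantities plotted in Fig. 2 for the prior-reversible proposal can be approximated by a Riemann sum along the following lines; the prior N(0, 1), the observation model y | x ~ N(x, gamma^2), and any parameter values passed in are assumptions made for illustration.

```python
import numpy as np

def acc_and_lag1(y, gamma, beta, L=8.0, n=400):
    """Expected acceptance and lag-1 autocorrelation of the MH chain (3.1) with
    the prior-reversible pCN proposal, for prior N(0,1) and y | x ~ N(x, gamma^2),
    computed at stationarity by a midpoint Riemann sum."""
    s2 = gamma**2 / (1.0 + gamma**2)                 # posterior variance
    m = y / (1.0 + gamma**2)                         # posterior mean
    x = m + np.sqrt(s2) * np.linspace(-L, L, n)      # grid for the current state
    xi = np.linspace(-L, L, n)                       # grid for the proposal noise
    dx, dxi = x[1] - x[0], xi[1] - xi[0]
    X, XI = np.meshgrid(x, xi, indexing="ij")
    V = np.sqrt(1 - beta**2) * X + beta * XI         # proposed state
    phi = lambda z: 0.5 * (y - z) ** 2 / gamma**2
    alpha = np.minimum(1.0, np.exp(phi(X) - phi(V)))  # acceptance (2.8)
    w = (np.exp(-(X - m) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
         * np.exp(-XI**2 / 2) / np.sqrt(2 * np.pi)) * dx * dxi
    mean_acc = np.sum(alpha * w)
    # the next state is V with probability alpha, otherwise X
    lag1 = np.sum((alpha * (V - m) + (1 - alpha) * (X - m)) * (X - m) * w) / s2
    return mean_acc, lag1
```

Scanning beta over (0, 1] for a fixed, strongly informative observation reproduces the qualitative behaviour of the left panels of Fig. 2.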

3.2 Higher dimensions

Recall that in one dimension if the distribution is determined by the likelihood then the integrated autocorrelation is close to 1, and if the distribution is determined by the prior then the integrated autocorrelation is dictated by the proposal chain. Suppose that some of the space is dominated by the prior, and some is dominated by the likelihood. If the observations have small variance, then the step-size in the chain will have to be small in order to accommodate a reasonable acceptance probability. But, the part of the space which is dominated by the prior will then have a large autocorrelation, as dictated by the proposal chain with its small step-size.

Now consider the Bayesian inverse problem on a Hilbert space $\mathcal{H}$. Let $\{\phi_k\}_{k \geq 1}$ denote an orthonormal basis for the space $\mathcal{H}$. Let $P$ project onto a finite basis $\{\phi_k\}_{k \leq K}$ and let $P^{\perp} = I - P$. Suppose we wish to sample from the posterior distribution $\mu$ with prior distribution $\mu_0$, where $d(P^{\perp}\mu, P^{\perp}\mu_0) < \epsilon$ for some small $\epsilon$ and some distance function $d$ between probability measures (for example Hellinger metric or Kullback-Leibler distance). Thus $P$ projects onto the “important part” of the space, in the sense that the observations only inform the projection of the posterior distribution on this part of the space. Suppose we use Metropolis-Hastings to sample from the posterior $\mu$ with prior $\mu_0 = N(0, \mathcal{C})$. Let $\xi_n \sim N(0, \mathcal{C})$ i.i.d., so that the pCN chain

(3.4)   $u_{n+1} = \left(1 - \beta^2\right)^{1/2} u_n + \beta \xi_n$

leaves the prior invariant. Without loss of generality, we represent this chain in the basis in which $\mathcal{C}$ is diagonal, so the proposal for each degree of freedom is an independent version of the above and clearly has decorrelation time $\tau = \big(1 + (1 - \beta^2)^{1/2}\big) / \big(1 - (1 - \beta^2)^{1/2}\big)$. The distribution on $P^{\perp}\mathcal{H}$ is approximately the prior, so when we impose accept/reject using the above chain as proposal, we are sampling from $P^{\perp}\mu \approx P^{\perp}\mu_0$ with decorrelation time of order $\tau / \bar{\alpha}$, where $\bar{\alpha}$ is the expected acceptance probability. But, we may as well sample independently from the known distribution $P^{\perp}\mu_0$ in this part of the space.

Following the reasoning above, one can consider modifying the above proposal chain to sample from the pCN chain (3.4) on $P\mathcal{H}$ and sample independently from the prior on $P^{\perp}\mathcal{H}$. This corresponds to setting the chain above to

(3.5)   $u_{n+1} = \left(1 - \beta^2\right)^{1/2} P u_n + \beta P \xi_n + P^{\perp} \xi_n .$

Our first and simplest variant of weighted proposal follows directly from this. In the next section we will consider a more elaborate extension of this proposal using curvature information.

4 Operator weighted proposal

Inspired by the discussion in the preceding section, we let $B$ be an operator weighting different directions differently, and define the proposal chain to be

(4.1)   $u_{n+1} = \left(I - B^2\right)^{1/2} u_n + B\, \xi_n,$

with $\xi_n$ i.i.d. draws from the prior. Notice that this proposal preserves the prior in the sense that if $u_n$ is distributed according to the prior, then so is $u_{n+1}$. This form of proposal for $B$ constant in $u$ can be derived as a generalization of the pCN proposal from CRSW12 (), for some general pre-conditioner in their notation.

Recall the form of distribution we consider, given by (1.1) and (1.2), with prior of the form $\mu_0 = N(0, \mathcal{C})$. For simplicity of exposition, we will consider standard normal priors, without any loss of generality. In the general case, a change of variables $u \mapsto \mathcal{C}^{-1/2} u$ is imposed to normalize the prior, such that a chain of the form (4.1) keeps the prior $N(0, I)$ invariant. The samples $u_n$ of the resulting MH chain are then transformed back as $\mathcal{C}^{1/2} u_n$ for inference.
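
The prior-preservation claim can be checked directly in these whitened coordinates, under the assumption (made here for illustration) that $B$ is self-adjoint with spectrum in $[0, 1]$: if $u_n \sim N(0, I)$ and $\xi_n \sim N(0, I)$ is independent of $u_n$, then $u_{n+1}$ in (4.1) is a centered Gaussian with covariance

$\operatorname{Cov}(u_{n+1}) = \left(I - B^2\right)^{1/2}\, I\, \left(I - B^2\right)^{1/2} + B\, I\, B = \left(I - B^2\right) + B^2 = I,$

so $u_{n+1} \sim N(0, I)$ as well, and the proposal leaves the (normalized) prior invariant.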

For the moment, we revert to the case in which $\mathcal{H}$ is finite-dimensional in order to motivate the form of the proposals. In this case we have $\frac{d\mu}{d\lambda}(u) \propto \exp\left(-I(u)\right)$, where $\lambda$ is Lebesgue measure, and after transformation to the normalized (standard normal prior) variable

(4.2)   $I(u) = \Phi\big(\mathcal{C}^{1/2} u\big) + \tfrac{1}{2}\, |u|^2,$

with $\Phi$ given by (1.2). At a given point the local curvature is given by the Hessian of this functional, which can be approximated by the positive definite operator

(4.3)   $H(u) = \left( D\mathcal{G}\big(\mathcal{C}^{1/2} u\big)\, \mathcal{C}^{1/2} \right)^{*} \Gamma^{-1} \left( D\mathcal{G}\big(\mathcal{C}^{1/2} u\big)\, \mathcal{C}^{1/2} \right) + I,$

where $D\mathcal{G}(w)$ denotes the derivative of $\mathcal{G}$ with respect to $w$, evaluated at $w = \mathcal{C}^{1/2} u$. (The term neglected in this approximation involves the second derivative of $\mathcal{G}$ contracted with the residual, and it is close to $0$, for example, if the distribution is concentrated close to the data or if $\Phi$ is close to quadratic at $u$.) If $(\lambda_i, \phi_i)$ are eigenpairs associated with this operator, then:

  • if $\lambda_i \gg 1$, then the distribution is dominated by the data likelihood in the direction $\phi_i$;

  • if $\lambda_i \approx 1$, then the distribution is dominated by the prior in the direction $\phi_i$.

The direction of the largest curvature at a given point will determine the smallest scale of the probability distribution at that point. If $\lambda_1$ is the largest eigenvalue of (4.3), then the smallest local scale of the density will be $\lambda_1^{-1/2}$. This scale will directly determine the step-size necessary to obtain a reasonable acceptance probability within the MH algorithm. But, such a small step-size is only necessary in the direction $\phi_1$. So, rather than prescribing this as the step-size in the remaining directions, we would like to rescale them according to the operator given in (4.3).

There is no log-density with respect to Lebesgue measure in function space, but the above functional (4.2) is still well-defined, and indeed its minimizer gives the point of maximum probability over small sets ourmap (). This functional will be the basis for choosing $B$ in (4.1). Based on the above intuition, one can devise a whole hierarchy of effective such operators. In the simplest instance, one can allow for $B$ constant, either during the duration of the algorithm, after a transient adaptive phase, or even for some lag-time where it is only updated every so often. In this manuscript we consider only constant $B$, and we consider three simple levels of complexity. The rigorous theory underlying the algorithms will be deferred to future work, although we note that the case of constant $B$ is easier to handle theoretically, since one does not need to consider the conditions for ergodicity of adaptive MCMC roberts2007coupling ().

The first level of complexity consists of letting

(4.4)   $B = \beta I, \qquad \beta \in (0, 1].$

The resulting method corresponds to the original pCN proposal, and we denote this method by O.

The second level of complexity consists of letting

(4.5)   $B_{ij} = \delta_{ij} \left( \beta\, \chi_{\{ i \leq k \}} + \chi_{\{ i > k \}} \right),$

where $\chi$ is the characteristic function, $\delta_{ij}$ is the Kronecker delta function, and $\beta$ and the truncation index $k$ are chosen during a transient adaptive phase, either together or one at a time, yielding (3.5). The simple idea is that the curvature is constant and prescribed by the prior for sufficiently high frequencies, and this has already been motivated by the discussion in the previous section. The method resulting from this proposal will be denoted by C.
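
In whitened coordinates (standard normal prior), and with the modes ordered so that the first k are the ones retained inside the truncation radius, one MH step with this proposal can be sketched as follows; the ordering convention and function names are illustrative rather than taken from the paper.

```python
import numpy as np

def mh_step_C(u, phi_u, phi, beta, k, rng):
    """One MH step with the truncated proposal (4.5): pCN with step-size beta on
    the first k (data-informed) modes, independent prior resampling on the rest.
    Assumes whitened coordinates, i.e. a standard normal prior."""
    b = np.ones_like(u)
    b[:k] = beta                                 # diagonal of B in (4.5)
    xi = rng.standard_normal(u.shape)
    v = np.sqrt(1.0 - b**2) * u + b * xi         # proposal (4.1) with diagonal B
    phi_v = phi(v)
    if np.log(rng.uniform()) < phi_u - phi_v:    # acceptance (2.8) is unchanged
        return v, phi_v, True
    return u, phi_u, False
```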

The third level of complexity consists of utilizing a relaxed version of the Hessian for an arbitrary given point close to a mode of the distribution. (In the case of a multi-modal distribution, this would have to be updated at least each time the chain passes to a new peak of the distribution. However, in the case of high-dimensional sampling, it is rare that one might expect to sample more than a single peak within a single chain, since it is expensive to compute individual samples and many such samples are usually necessary to reach the first passage time.) Therefore, we separate the issues of finding a peak of the distribution, where gradient information is invaluable, from exploring the peak, where gradient information is expected to be less valuable. On the other hand, curvature information may be useful in the former, and is surely invaluable in the latter. Let

(4.6)

where is the scalar tuning parameter as in the above cases, is another tuning parameter, and is any point. The method resulting from this proposal will be denoted by H. We consider a low rank approximation of the resulting operator, since it is assumed to be compact. Notice that for this choice of , the covariance of the search direction is given by

(4.7)

For linear we let , and the right-hand side of the above approaches the covariance of the posterior distribution as , giving exactly the correct rescaling for the search direction while still keeping the prior invariant. The linear case is rather special, since we could just do independence sampling of the posterior, but it provides a good benchmark. In practice, adaptation of in this case leads to small, but non-zero, . For nonlinear we choose , and the scaling of the search direction is commensurate with the local curvature except with increased weight on the log-likelihood component, given by the first term of (4.7). The second term is a perturbation which is larger for the (changing) directions dictated by the Hessian of the log-likelihood and approaches zero for directions which are dictated by the prior, in which the curvature is constant. Provided is chosen small enough, and the curvature does not change too much, the step-size of a given move will not exceed the scale of the distribution. Furthermore, the step-size will interpolate appropriately between the direction of the smallest scale and the directions which are determined by the prior, corresponding to scale 1. We note that a whole array of different proposals are possible by choosing different , and there may be better options since an exhaustive search has not been carried out. Investigation into various other proposals which incorporate the appropriate (re)scaling in different ways, however, suggests that their performance is similar TC ().
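
Since the precise form (4.6) is not reproduced above, the sketch below shows one plausible way to assemble an operator of this general type from a low-rank approximation of the (Gauss-Newton) Hessian of the negative log-likelihood at a point u0, in whitened coordinates: directions with large eigenvalues receive a reduced step-size, while prior-dominated directions keep the step-size beta. The construction and the names gamma, u0, evecs, and evals are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def build_B_H(evecs, evals, beta, gamma):
    """Assemble a curvature-weighted operator B (a sketch in the spirit of H,
    not necessarily the exact form (4.6)).

    evecs : (d, r) orthonormal columns, leading eigenvectors of the Gauss-Newton
            Hessian of the negative log-likelihood at some point u0
    evals : (r,) corresponding nonnegative eigenvalues
    beta  : scalar step-size, as for O and C
    gamma : tuning parameter weighting the curvature information
    """
    d, r = evecs.shape
    scale = 1.0 / np.sqrt(1.0 + gamma * evals)   # shrink likelihood-dominated directions
    # acts as beta*scale_i on the eigenvectors and as beta on their complement
    return beta * (np.eye(d) + evecs @ (np.diag(scale) - np.eye(r)) @ evecs.T)

def mh_step_B(u, phi_u, phi, B, sqrt_I_minus_B2, rng):
    """One MH step of the operator-weighted proposal (4.1), standard normal prior.
    sqrt_I_minus_B2 is a precomputed symmetric square root of (I - B @ B)."""
    v = sqrt_I_minus_B2 @ u + B @ rng.standard_normal(len(u))
    phi_v = phi(v)
    if np.log(rng.uniform()) < phi_u - phi_v:
        return v, phi_v, True
    return u, phi_u, False
```

Since all eigenvalues of this B lie in (0, beta], the operator I - B^2 is positive semi-definite and its square root can be precomputed once, consistent with B being held constant during the run.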

Both methods C and H can be considered as splitting methods in which part of the space is targeted using the acceptance probability and independence sampling is done on the complement. C splits the spectral space into a finite inner radius in which pCN is done, and its infinite-dimensional complement where independence sampling is done. H more delicately splits the space into the finite dominant eigenspace of the Hessian of the likelihood and its complement, and then furthermore considers an appropriate rescaling of the dominant eigenspace. Notice that the complement space absorbs the infinite-dimensionality. Such proposals which separate the space allow any general form of proposal on the finite-dimensional target subspace and maintain the property that $\nu^{\top} \ll \nu$ as long as the proposal is reversible with respect to the prior on the infinite-dimensional complement subspace. For example, a simple pCN proposal with another choice of $\beta$ can be used on this subspace. This idea appears in the forthcoming works TC (); bckj ().

Note that one could instead incorporate this curvature information by devising a chain which keeps a Gaussian approximation to the posterior invariant, similarly to (3.3). For example, one can use the following proposal which preserves :

(4.8)

with and . The curvature information can be incorporated for example by letting

or some close by point, and . In this case, it would be sensible to revert to the case of scalar since the relevant scaling is already intrinsic in the form of the Gaussian. Empirical statistics can also be incorporated in this way. If is genuinely low-rank then equivalence is immediate. If not, one must look more closely. However, in this case if one imposes a low-rank approximation, then equivalence is again immediate based on the discussion above. The results in Sec. 3.1, and in particular those presented in Fig. 2 illustrate that this strategy should not be considered as preferable to using a proposal of the form (4.1) for appropriate choice of . These variations do however warrant further investigation. We limit the current study to the case of proposals of the form (4.1) because (i) these methods are defined on function space, following immediately from the reversibility with respect to the prior and the framework of Tie (), (ii) the acceptance probability takes the simple and elegant form of (2.8) which is easy to implement, and (iii) as described in Sec. 3, it is not clear that a more elaborate kernel such as the one following from the proposal (4.8) would yield better results.

5 Numerical Results

In this section, we will investigate the performance of the MH methods introduced in the previous section on two prototypical examples. First, we briefly explore the inverse heat equation on the circle as an illustrative example. Next, we look in more depth at the more complicated problem of the inverse Navier-Stokes equation on the 2D torus.

5.1 Heat Equation

Consider the one-dimensional heat equation

$\partial_t v = \partial_{xx} v, \qquad v(\cdot, 0) = u .$

We are interested in the inverse problem of recovering the initial condition $u$ from noisy observations of the solution at a later time:

where , , with , and We let , and .

The problem may be solved explicitly in the Fourier sine basis, and the coefficients of the mean and variance of the posterior distribution can be represented in terms of the coefficients of the data and the prior :

(5.1)

The posterior distribution is strongly concentrated on the mode, in the sense that and for all , and the discrepancy between and will necessitate a very small scalar . For the case of scalar weighting O, all modes will decorrelate slowly except for . See Fig. 3 for a comparison between scalar weighting O and curvature-based weighting H. In this case, we will omit the intermediate proposal which simply proposes independent samples outside a certain radius in the spectral domain.
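
For reference, the conjugate per-mode computation behind (5.1) can be sketched as follows; the unit diffusion coefficient (so that mode k of the solution decays like exp(-k^2 T)) and the variable names are assumptions made for illustration, since the specific parameter values used above are not reproduced here.

```python
import numpy as np

def heat_posterior_modes(y, T, prior_var, gamma):
    """Per-mode Gaussian posterior for the linear inverse heat equation
    (a sketch of the computation behind (5.1)).

    y         : observed Fourier(-sine) coefficients of the solution at time T
    T         : observation time
    prior_var : prior variances of the initial-condition modes
    gamma     : observational noise standard deviation
    """
    k = np.arange(1, len(y) + 1)
    g = np.exp(-k**2 * T)                               # forward map, diagonal in this basis
    post_var = 1.0 / (1.0 / prior_var + g**2 / gamma**2)
    post_mean = post_var * g * y / gamma**2
    return post_mean, post_var
```

The rapid decay of g with k is what makes the high-frequency modes prior-dominated, which is precisely the regime the rescaled proposals are designed to exploit.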

Figure 3: Illustration of the autocorrelation of modes and , in the case of O (left), and H (right).

5.2 Navier-Stokes Equation

In this section, we consider the inverse problem of determining the initial condition of Navier-Stokes equation on the two dimensional torus given noisy observations. This problem is relevant to data assimilation applications in oceanography and meteorology.

Consider the 2D Navier-Stokes equation on the torus $\mathbb{T}^2$ with periodic boundary conditions:

$\partial_t v - \nu \Delta v + (v \cdot \nabla) v + \nabla p = f, \qquad \nabla \cdot v = 0, \qquad v(\cdot, 0) = u .$

Here $v$ is a time-dependent vector field representing the velocity, $p$ is a time-dependent scalar field representing the pressure, $f$ is a vector field representing the forcing (which we assume to be time-independent for simplicity), and $\nu$ is the viscosity. We are interested in the inverse problem of determining the initial velocity field $u$ from pointwise measurements of the velocity field at later times. This is a model for the situation in weather forecasting where observations of the atmosphere are used to improve the initial condition used for forecasting. For simplicity we assume that the initial velocity field is divergence free and integrates to zero over $\mathbb{T}^2$, noting that this property will be preserved in time.

Define

and as the closure of with respect to the norm. We define to be the Leray-Helmholtz orthogonal projector (see book:Robinson2001 ()). Given , define . Then an orthonormal basis for is given by , where

for . Thus for we may write

where, since is a real-valued function, we have the reality constraint . Using the Fourier decomposition of , we define the fractional Sobolev spaces

(5.2)

with the norm , where . If , the Stokes’ operator, then . We assume that for some .

Let , for , and let be the set of pointwise values of the velocity field given by . Note that each depends on and define by . We let be a set of random variables in which perturbs the points to generate the observations in given by

(5.3)

We let denote the accumulated data up to time , with similar notation for , and define by . We now solve the inverse problem of finding from . We assume that the prior distribution on is a Gaussian , with the property that and that the observational noise is i.i.d. in , independent of , with distributed according to a Gaussian measure .

We let noting that if , then almost surely for all ; in particular . Thus as required. The forcing in is taken to be , where and with the canonical skew-symmetric matrix, and . Furthermore, we let the viscosity , and the interval between observations , where is the timestep of the numerical scheme, and so that the total time interval is . In this regime, the nonlinearity is mild and the attractor is a fixed point, similar to the linear case. Nonetheless, the dynamics are sufficiently nonlinear that the optimal proposal used in the previous section does not work. We take evaluated at the gridpoints, such that the observations include all numerically resolved, and hence observable, wavenumbers in the system. The observational noise standard deviation is taken to be .

The truth and the mean vorticity initial conditions and solutions at the end of the observation window are given in Fig. 4. The truth (top panels) clearly resembles a draw from a distribution with mean given by the bottom panels. Fig. 5 shows the diagonal of the rescaling operator (i.e. for ) for C, given by (4.5), and H, given by (4.6). For O the operator is given by the identity, i.e. all one along the diagonal. Some autocorrelation functions (approximated with one chain of each) are shown in Fig. 6. Also given for comparison is the exponential fit to the (acceptance lagged) autocorrelation function for the proposal chain for O, i.e. . This differs for each, because and differ. In particular, for O and (approximated by the sample mean), for C and , and for H and . Presumably this accounts for the slight discrepancy in the autocorrelation of mode , given in the top left panel. For the mode (top right), the difference becomes more pronounced between H and the other two. In the bottom left it is clear that the mode decorrelates very quickly for H, while it decorrelates approximately like the proposal chain for both other methods. The mode (bottom right) is independently sampled for C, so has decorrelation time . It decorrelates almost as fast for H, and is decorrelating with the proposal chain for O.

Figure 7 shows the relative error in the mean (left) and variance (right) with respect to the converged values as a function of sample number for small sample sizes. The accumulated benefit of H is very apparent here, yielding almost an order of magnitude improvement. For each of the proposals, we run chains of length , both for the benchmark mean and variance used in Fig. 7, and to compare the so-called potential scale reduction factor (PSRF) convergence diagnostic brooks1998general (). In Fig. 8 we plot the PSRF for each method on the same color scale from . Although it cannot be seen for O (left) and C (middle) since the values of most modes are saturated on this scale, every mode for every method is clearly converged according to the PSRF, which should be smaller than 1.1 and eventually converges to 1. We can see for O that the low frequencies, where the mean and uncertainty are concentrated, converge faster (as indicated in Fig. 6), and other modes converge more or less uniformly slowly. C clearly outperforms O in the high frequencies outside the truncation radius (where the mean and uncertainty are negligible) and is comparable for the smallest frequencies, but may even be worse than O for the intermediate frequencies. This may be explained by the different and . Again, H quite clearly outperforms both of the other two methods. Note that these results are for particular parameter choices, and the performance of the methods may be sensitive to the choice of the tuning parameters.
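
For completeness, a basic version of the PSRF diagnostic used here (without the finite-sample correction factors of brooks1998general ()) can be computed per mode as in the following sketch.

```python
import numpy as np

def psrf(chains):
    """Basic potential scale reduction factor (R-hat) for one scalar quantity
    monitored across several chains; chains has shape (m, n), i.e. m chains of
    length n. Values close to 1 indicate convergence."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()        # mean within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)      # between-chain variance (times n)
    var_plus = (n - 1) / n * W + B / n           # pooled variance estimate
    return np.sqrt(var_plus / W)
```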

Figure 4: Truth (top) and expected value of the posterior (bottom). Initial condition is given to the left and solution at final time is given to the right.
Figure 5: The diagonal of the rescaling operator for C, given by (4.5) and H, given by (4.6).
Figure 6: The autocorrelation for each method O, C, and H, for a variety of frequencies. Also given for comparison is the exponential fit to the (acceptance lagged) autocorrelation function for the proposal chain for O, i.e. . This differs for each, because and differ.
Figure 7: The relative error in the mean (left) and variance (right) with respect to the converged values as a function of sample number for small sample sizes.
Figure 8: The PSRF plots for O (left), C (middle) and H (right). The images are saturated on the same scale so that H is visible.

6 Conclusions

This manuscript introduces and investigates a new class of proposals for function-space MCMC, giving rise to a new class of algorithms. These proposals involve an operator which appropriately rescales the proposal step-sizes in different directions in order to decrease autocorrelation and hence improve convergence rate. For the curvature-inspired rescaling H introduced here, there are two design parameters which need to be hand-chosen. But, it seems to be effective to choose sufficiently small and then allow to be chosen during a transient adaptive phase, as in the other methods. It should be possible to achieve the same result with one design parameter. Furthermore, there exists a whole range of intermediate models between those investigated here and the more sophisticated but expensive RMHMC girolami2011riemann () and stochastic Newton martin2012stochastic (). This gives rise to exciting opportunities for future investigation. It will be of great interest to establish rigorous theoretical results, such as geometric ergodicity, for these methods.

Acknowledgements The author would like to thank EPSRC and ONR for financial support. I also thank The Mathematics Institute and Centre for Scientific Computing at Warwick University, as well as The Institute for Computational and Experimental Research in Mathematics and The Center for Computation and Visualization at Brown University for supplying valuable computation time. I would like to extend thanks to Tiangang Cui, Youssef Marzouk, and Gareth Roberts for interesting discussions related to these results, and also to the referee for a careful read and insightful suggestions for improvement. Finally, I thank Andrew Stuart for interesting discussions related to these results and for valuable input in the revision of this manuscript.

References

  • (1) A. Stuart, Inverse problems: a Bayesian approach, Acta Numer. 19 (2010) 451–559.
  • (2) S. Cotter, G. Roberts, A. Stuart, D. White, MCMC methods for functions: modifying old algorithms to make them faster, Arxiv preprint arXiv:1202.0709.
  • (3) M. Girolami, B. Calderhead, Riemann manifold Langevin and Hamiltonian Monte Carlo methods, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 73 (2) (2011) 123–214.
  • (4) J. Martin, L. Wilcox, C. Burstedde, O. Ghattas, A stochastic Newton MCMC method for large-scale statistical inverse problems with application to seismic inversion, SIAM Journal on Scientific Computing 34 (3) (2012) 1460–1487.
  • (5) A. Beskos, G. O. Roberts, A. M. Stuart, J. Voss, MCMC methods for diffusion bridges, Stochastic Dynamics 8 (3) (2008) 319–350. doi:10.1142/S0219493708002378.
  • (6) G. Roberts, A. Gelman, W. Gilks, Weak convergence and optimal scaling of random walk Metropolis algorithms, Ann. Appl. Prob. 7 (1997) 110–120.
  • (7) G. Roberts, R. Tweedie, Exponential convergence of Langevin distributions and their discrete approximations, Bernoulli 2 (4) (1996) 341–363.
  • (8) N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, E. Teller, Equation of state calculations by fast computing machines, J. Chem. Phys. 21 (1953) 1087–1092.
  • (9) W. Hastings, Monte Carlo sampling methods using Markov chains and their applications, Biometrika 57 (1970) 97–109.
  • (10) L. Tierney, A note on Metropolis-Hastings kernels for general state spaces, Ann. Appl. Probab. 8 (1) (1998) 1–9.
  • (11) M. Hairer, A. Stuart, S. Vollmer, Spectral gaps for a Metropolis-Hastings algorithm in infinite dimensions, Arxiv preprint arXiv:1112.1392.
  • (12) G. Roberts, J. Rosenthal, Coupling and ergodicity of adaptive Markov chain Monte Carlo algorithms, Journal of Applied Probability 44 (2) (2007) 458–475.
  • (13) J. C. Robinson, Infinite-Dimensional Dynamical Systems, Cambridge Texts in Applied Mathematics, Cambridge University Press, Cambridge, 2001.
  • (14) S. Brooks, A. Gelman, General methods for monitoring convergence of iterative simulations, Journal of Computational and Graphical Statistics 7 (4) (1998) 434–455.
  • (15) M. Dashti, K. J. H. Law, A. M. Stuart, J. Voss, MAP estimators and posterior consistency in Bayesian nonparametric inverse problems, to appear in Inverse Problems (2013). http://arxiv.org/abs/1303.4795
  • (16) T. Cui, K. J. H. Law, Y. Marzouk, Likelihood-informed sampling of Bayesian nonparametric inverse problems, in preparation (2013).
  • (17) A. Beskos, A. Jasra, N. Kantas, submitted (2013). http://arxiv.org/abs/1307.6127