Adaptive Dimension Reduction to Accelerate Infinite-Dimensional Geometric Markov Chain Monte Carlo

Shiwei Lan (slan@caltech.edu), Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125, USA
Abstract

Bayesian inverse problems rely heavily on efficient and effective inference methods for uncertainty quantification (UQ). Infinite-dimensional MCMC algorithms, directly defined on function spaces, are robust under refinement (through discretization or spectral approximation) of physical models. Recent developments of this class of algorithms have started to incorporate the geometry of the posterior informed by data, so that they are capable of exploring complex probability structures that frequently arise in UQ for PDE-constrained inverse problems. However, the required geometric quantities, including the Gauss-Newton Hessian operator or the Fisher information metric, are usually expensive to obtain in high dimensions. On the other hand, most geometric information of the unknown parameter space in this setting is concentrated in an intrinsic finite-dimensional subspace. To mitigate the computational cost and scale up the applications of infinite-dimensional geometric MCMC (∞-GMC), we apply geometry-informed algorithms to the intrinsic subspace to probe its complex structure, and simpler methods like preconditioned Crank-Nicolson (pCN) to its geometry-flat complementary subspace.

In this work, we take advantage of dimension reduction techniques to accelerate the original ∞-GMC algorithms. More specifically, partial spectral decomposition (e.g. through randomized linear algebra) of the (prior or posterior) covariance operator is used to identify a certain number of principal eigen-directions as a basis for the intrinsic subspace. The combination of dimension-independent algorithms, geometric information, and dimension reduction yields a more efficient implementation: (adaptive) dimension-reduced infinite-dimensional geometric MCMC. With a small amount of computational overhead, we can achieve over 70 times speed-up compared to pCN using a simulated elliptic inverse problem and an inverse problem involving turbulent combustion. A number of error bounds comparing various MCMC proposals are presented to predict the asymptotic behavior of the proposed dimension-reduced algorithms.

keywords:
Infinite-dimensional Geometric Markov Chain Monte Carlo; High-dimensional sampling; Dimension Reduction; Bayesian Inverse Problems; Uncertainty Quantification.

1 Introduction

In Bayesian inverse problems, the objective is to identify an unknown parameter function which is assumed to be in a separable Hilbert space . Given finite-dimensional observations , for , is connected to through the following mapping:

where is the forward operator that maps unknown parameter onto the data space , and is the distribution of noise . If we assume the density of noise distribution, still denoted as , to exist with respect to the Lebesgue measure, then we can define the negative log-likelihood, a.k.a. potential function, as:

with indicating the density function for a given . The noise distribution could be simple, but the forward mapping is usually non-linear, thus the potential function can be complicated and computationally expensive to evaluate. For example, if we assume Gaussian noise , for some symmetric, positive-definite , then the potential function can be written as

(1)

where we have considered the scaled inner product .

In the Bayesian setting, a prior measure is imposed on . In this paper we assume a Gaussian prior with the covariance being a positive, self-adjoint and trace-class operator on . Now we can get the posterior of , denoted as , using Bayes’ theorem (stuart10; dashti2017):

Notice that the posterior can exhibit strongly non-Gaussian behavior, with finite-dimensional projections having complex non-elliptic contours.

Sampling from posterior distributions in the context of PDE-constrained inverse problems is typically a challenging task due to the high dimensionality of the target, the non-Gaussianity of the posterior, and the intensive computation of repeated PDE solutions for evaluating the likelihood function at different parameters. Traditional Metropolis-Hastings algorithms are characterized by deteriorating mixing times upon mesh-refinement of the finite-dimensional projection of parameter . This has prompted the recent development of a class of 'dimension-independent' MCMC methods (beskos08; beskos11; cotter13; law2014; beskos14; pinski15; rudolf2018; cui16; constantine2016; beskos2017) that overcome this deficiency. Compared to standard Metropolis-Hastings, the new algorithms are well-defined on the infinite-dimensional Hilbert space and thus yield the important computational benefit of mesh-independent mixing times for the practical finite-dimensional algorithms run on the computer.

Among those works, beskos14; cui16; beskos2017 incorporate the geometry of the posterior informed by data to make MCMC more capable of exploring complicated distributions that deviate significantly from Gaussian. In particular, infinite-dimensional geometric MCMC (∞-GMC) (beskos2017) puts a series of 'dimension-independent' MCMC algorithms in the context of increasingly adopting geometry (gradient, Hessian). With the help of such geometric information, beskos2017 show that, with the prior-based splitting strategy, ∞-GMC algorithms can achieve up to two orders of magnitude speed-up in sampling efficiency compared to vanilla pCN. However, fully computing the required geometric quantities is prohibitive in the discretized parameter space with thousands of dimensions. Therefore, it is natural to consider approximations to the gradient vector and Hessian (Fisher) matrix and to compute them in a subspace with reduced dimensions. The key to dimension reduction in this setting is to identify an intrinsic low-dimensional subspace and apply geometric methods to effectively explore its complex structure, while simpler methods can be used on its complementary subspace with larger step sizes. cui16 seek the intrinsic subspace, known as the likelihood-informed subspace (LIS) (cui14), by detecting the eigen-subspace of some globalized Hessian; constantine2016 investigate the active subspace (AS) (constantine14; constantine15) by probing the principal eigen-directions of a prior-averaged empirical Fisher matrix.

In this paper we make a number of contributions toward speeding up ∞-GMC for Bayesian solutions to large-scale PDE-constrained inverse problems. First, with dimension reduction techniques, we accelerate the original ∞-GMC methods to make them applicable to problems with thousands of dimensions. Instead of sampling (low) frequency modes (beskos2017), we obtain posterior samples of the unknown parameter function evaluated over the whole solution domain. Second, we propose dimension reduction directly based on partial spectral decomposition of the (prior or posterior) covariance operator, which simplifies dimension-independent likelihood-informed MCMC (DILI) (cui16) and produces more efficient algorithms. Unlike cui16, the posterior covariance projected onto the intrinsic subspace is not empirically updated, but rather approximated in a diagonal form that can still capture most of the variation of the projected posterior. Such an approximation can be adopted either position-wise or adapted towards a global LIS within the burn-in stage of MCMC. We establish interesting connections between our adaptive algorithm and DILI, and also apply the same dimension reduction to 'HMC' type algorithms, which can be viewed as generalizations of DILI. Third, theoretical bounds comparing several dimension-independent MCMC proposals are derived to help understand their asymptotic behavior as well as their differences. Lastly, we demonstrate the numerical advantage of our proposed algorithms in this high-dimensional setting, with over 70 times speed-up compared to the pCN method, using a simulated elliptic inverse problem and an inverse problem involving a turbulent jet.

The rest of the paper is organized as follows. Section 2 reviews infinite-dimensional geometric MCMC (∞-GMC) (beskos2017) and dimension-independent likelihood-informed MCMC (DILI) (cui16). Section 3 describes the details of the dimension reduction we adopt, based on both the prior and the posterior. Section 4 applies these dimension reduction techniques to ∞-GMC to achieve acceleration, establishes the validity of the proposed methods, and provides error bounds for comparing various algorithms. In Section 5 we show the numerical advantage of our algorithms using a simulated elliptic inverse problem and an inverse problem involving turbulent combustion. Finally, we discuss the results and conclude with a few future directions in Section 6.

2 Review of ∞-GMC and DILI

For simplicity we drop from terms involved, so we denote the posterior as and the potential function as . For the target and many proposal kernels in the sequel, we define the bivariate law:

Following the theory of Metropolis-Hastings on general spaces (tierney98), the acceptance probability is non-trivial when

where denotes mutual absolute continuity, that is, and . The acceptance probability is:

(2)

where denotes the minimum of . We first review ∞-GMC (beskos2017).

2.1 Infinite-Dimensional Geometric MCMC (∞-GMC)

We start with the preconditioned Crank-Nicolson (pCN) method (neal10; beskos08; cotter13), whose proposal does not use any data information. It modifies standard random-walk Metropolis (RWM) to make a proposal movement from the current position towards a random point, with its size controlled by a free parameter :

pCN is well-defined on the Hilbert space with the proposal being prior-preserving, whereas standard RWM can only be defined on finite-dimensional discretizations and has diminishing acceptance probability for fixed step-size and increasing resolution (roberts97). Thus, pCN mixes faster than RWM in high-enough dimensions, and the disparity in mixing rates becomes greater upon mesh-refinement (cotter13).
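As a concrete illustration of the pCN transition just described, the following Python sketch performs one pCN step on a discretized parameter. The names (`Phi`, `C_sqrt`, `beta`) and the toy likelihood are illustrative assumptions, not the paper's implementation; in practice the prior covariance square root would come from the Karhunen-Loève factorization discussed in Section 3.1.

```python
import numpy as np

def pcn_step(u, Phi, C_sqrt, beta, rng):
    """One preconditioned Crank-Nicolson (pCN) step; a sketch, not the paper's code."""
    xi = C_sqrt @ rng.standard_normal(u.shape)       # draw xi ~ N(0, C) via C^{1/2}
    u_prop = np.sqrt(1.0 - beta**2) * u + beta * xi  # prior-preserving proposal
    log_alpha = Phi(u) - Phi(u_prop)                 # prior terms cancel; only the likelihood enters
    if np.log(rng.uniform()) < min(0.0, log_alpha):
        return u_prop, True
    return u, False

# Illustrative usage on a toy Gaussian likelihood
rng = np.random.default_rng(0)
n = 100
C_sqrt = np.diag(1.0 / np.arange(1, n + 1))          # stand-in prior covariance square root
Phi = lambda u: 0.5 * np.sum((u - 1.0) ** 2)         # stand-in negative log-likelihood
u = np.zeros(n)
u, accepted = pcn_step(u, Phi, C_sqrt, 0.2, rng)
```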

One approach for developing data-informed methods is to take advantage of gradient information in a steepest-descent setting. Consider the Langevin SDE on , preconditioned by some operator :

(3)

with denoting the Fréchet derivative of and being the cylindrical Wiener process. If we let , the scales of these dynamics are tuned to the prior. In this setting, SDE (3) preserves the posterior and can be used as effective MCMC proposals (beskos08; cotter13). beskos08 use a semi-implicit Euler scheme to discretize the above SDE and develop infinite-dimensional MALA (∞-MALA) with the following proposal for an algorithmic parameter and some small step-size :

(4)

Following beskos08, under the assumption that , -a.s. in , one can use Theorem 2.21 of da14 on translations of Gaussian measures on separable Hilbert spaces, to obtain the Radon-Nikodym derivative in the acceptance probability (2).

To further incorporate local geometric information of the target distribution, one can consider a location-specific pre-conditioner as the covariance of a local Gaussian approximation to the posterior, defined through

(5)

Notice that for the resulting dynamics do not, in general, preserve the target as they omit the higher-order (and computationally expensive) Christoffel symbol terms, see e.g. girolami11 and the discussion in xifara:14. However, the dynamics in (3) can still capture an important part of the local curvature structure of the target and can provide an effective balance between mixing and computational cost (girolami11). beskos2017 develop infinite-dimensional manifold MALA (∞-mMALA) with the following proposal obtained by a semi-implicit scheme similar to that in beskos08

(6)

for defined as in (4) and we have:

(7)

With the assumptions 3.1-3.3 in beskos2017, one can use the Feldman-Hajek theorem (see e.g. Theorem 2.23 in da14) to derive the acceptance probability (2). See more details in beskos2017. It is interesting to notice that when (), ∞-mMALA coincides with the stochastic Newton (SN) MCMC method (martin12), with .

One can generalize ‘MALA’ type algorithms to multiple steps. This is equivalent to investigating the following continuous-time Hamiltonian dynamics:

(8)

The Hamiltonian Monte Carlo (HMC) algorithm (neal10) makes use of the Strang splitting scheme to develop the following Störmer-Verlet symplectic integrator (verlet67), for as defined in (7):

(9)

Equation (9) gives rise to the leapfrog map . Given a time horizon and current position , the MCMC mechanism proceeds by concatenating steps of the leapfrog map consecutively:

where denotes the projection onto the -argument. For , this yields infinite-dimensional HMC (∞-HMC) (beskos11) with , and infinite-dimensional manifold HMC (∞-mHMC) (beskos2017) with , respectively. The well-posedness of these 'HMC' type algorithms can be established under the same assumptions 3.1-3.3 of beskos2017. The following diagram illustrates graphically the connections between the various algorithms.
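As a plain finite-dimensional stand-in for the concatenated leapfrog map above, the following sketch implements the standard Störmer-Verlet integrator. It deliberately omits the function-space-specific splitting used in (9), and the names (`grad_U`, `M_inv`, `h`) are illustrative assumptions.

```python
import numpy as np

def leapfrog(u, v, grad_U, h, n_steps, M_inv):
    """Stormer-Verlet leapfrog map concatenated n_steps times (finite-dimensional sketch)."""
    v = v - 0.5 * h * grad_U(u)          # initial half step for the velocity
    for i in range(n_steps):
        u = u + h * (M_inv @ v)          # full step for the position
        if i < n_steps - 1:
            v = v - h * grad_U(u)        # full velocity step between position updates
    v = v - 0.5 * h * grad_U(u)          # final half step for the velocity
    return u, v
```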

2.2 Dimension-Independent Likelihood-Informed MCMC (DILI)

Now we review the MCMC algorithm DILI (cui16), which is closely related to our proposed methods. The idea of DILI is to identify a low-dimensional LIS on which likelihood-informed methods are applied to make inhomogeneous proposals that exploit the posterior structure deviating from the prior, while the complementary space can be efficiently explored with simpler prior-based methods.

Inspired by the low-rank approximations to the Hessian of the log-posterior in martin12, DILI (cui16) obtains the intrinsic low-dimensional LIS by comparing the Hessian of the log-likelihood with the prior covariance to identify directions in parameter space along which the posterior distribution differs most strongly from the prior. DILI also uses the Langevin equation (3) preconditioned by an operator as the proposal kernel. However, strictly speaking, the preconditioner is not the same as the location-specific , but is rather globalized to aggregate the local geometry informed by data. More specifically, DILI considers the following prior-preconditioned Gauss-Newton Hessian (ppGNH):

where, under the assumption (1), is also called the Fisher metric.

The ppGNH stems from the local Rayleigh ratio , which quantifies how strongly the likelihood constrains variation in the direction relative to the prior, and it can be converted to the GNH w.r.t. the whitened parameter

Therefore, transforming and applying on both sides simplifies (3) into

(10)

where and we have

Note that the whitened variable has the prior , where the identity covariance operator is not trace-class on . However, random draws from are square-integrable in the weighted space , and (10) can still serve as a well-defined function-space proposal for the parameter after inverting the transformation.
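For concreteness, a minimal sketch of the whitening transform and its inverse in discretized form is given below, assuming the leading eigen-pairs of the (discretized) prior covariance are available; `eigvals` and `eigvecs` are hypothetical names for those eigen-pairs.

```python
import numpy as np

def whiten(u, eigvals, eigvecs):
    """Whitened coordinates v = C^{-1/2} u, using (truncated) prior eigen-pairs."""
    coords = eigvecs.T @ u                   # coordinates of u in the prior eigen-basis
    return coords / np.sqrt(eigvals)         # rescale so that v ~ N(0, I) under the prior

def unwhiten(v, eigvals, eigvecs):
    """Invert the transform: u = C^{1/2} v."""
    return eigvecs @ (np.sqrt(eigvals) * v)
```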

The intrinsic low-dimensional subspace is obtained through a low-rank approximation of the globalized (expected) GNH . Suppose the operator has eigen-pairs on . Then by thresholding the largest eigenvalues one can define

(11a)
(11b)

Note provides the basis for the LIS and one can have the following decomposition for :

where is the projection of into the LIS and lies in the complementary space dominated by the prior ; they are independent under the approximated posterior . Therefore one can approximate the posterior covariance as follows

(12)

where is computed empirically and has eigendecomposition ; . By applying and to (10) respectively we obtain the final splitting proposal as follows:

where and ; and are algorithmic parameters indicating whether (set to 1) or not (set to 0) gradient information is included in the intrinsic subspace and in its complement, respectively.
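A minimal sketch of this LIS/complement splitting in whitened, discretized coordinates is shown below, assuming an orthonormal LIS basis `Phi_r` and an r-by-r covariance `Sigma_r` for the LIS coordinates; the exact operators in (12) are not reproduced here.

```python
import numpy as np

def split_whitened(v, Phi_r):
    """Split a whitened state into its LIS coordinates and complement (sketch)."""
    v_r = Phi_r.T @ v             # coordinates in the likelihood-informed subspace
    v_perp = v - Phi_r @ v_r      # component in the prior-dominated complement
    return v_r, v_perp

def approx_posterior_cov(Phi_r, Sigma_r, n):
    """Whitened analogue of (12): Sigma_r on the LIS, identity (prior) on its complement."""
    P_r = Phi_r @ Phi_r.T
    return Phi_r @ Sigma_r @ Phi_r.T + (np.eye(n) - P_r)
```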

In the next section, we will derive similar intrinsic subspaces and splitting proposals directly from partial spectral decomposition. Based on prior or posterior covariance operators, different dimension reduction strategies can be achieved, applicable to different scenarios depending on how informative the data are.

3 Dimension Reduction

We focus on dimension reduction through partial spectral decomposition. Suppose we have eigen-pairs of some operator (prior covariance or posterior covariance) with . The intrinsic low-dimensional subspace can be defined through principal eigen-functions, that is, . Let . Then we define the projection operator as follows:

For example, if we truncate on the -dimensional subspace

then we can approximate by replacing with in (5). In this section we investigate two types of dimension reduction based on prior and likelihood respectively, with particular connection to DILI (cui16).
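As noted in the abstract, the partial spectral decomposition can be obtained with randomized linear algebra. The sketch below implements a basic randomized eigen-solver in the spirit of Halko et al. for a symmetric positive semi-definite operator available only through its action; it is an illustrative assumption, not the specific solver used in the paper.

```python
import numpy as np

def randomized_eig(apply_A, n, r, n_oversample=10, rng=None):
    """Basic randomized partial eigendecomposition of a symmetric PSD operator given by its action."""
    rng = rng or np.random.default_rng()
    Omega = rng.standard_normal((n, r + n_oversample))   # random test matrix
    Q, _ = np.linalg.qr(apply_A(Omega))                  # orthonormal basis for the sampled range
    B = Q.T @ apply_A(Q)                                 # small projected matrix
    d, V = np.linalg.eigh(B)
    idx = np.argsort(d)[::-1][:r]                        # keep the r largest eigenvalues
    return d[idx], Q @ V[:, idx]

# e.g. vals, vecs = randomized_eig(lambda X: H @ X, n=1000, r=50) for a dense stand-in H
```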

3.1 Prior-Based Dimension Reduction

Let , be the eigenvalues and eigenfunctions of the prior covariance operator such that , . Assume is a sequence of positive reals with (this enforces the trace-class condition for ), and is an orthonormal basis of . We make the usual correspondence between an element and its coordinates w.r.t. the basis , that is . Using the Karhunen-Loève expansion of a Gaussian measure (adler10; bogachev98; dashti2017) we have the representation:

(14)

Define the Sobolev spaces corresponding to the basis :

so that and if . Typically, we make the following decay assumption:

Assumption 3.1.

for some in the sense that there exist constants such that for all .

Thus, the prior (and hence also the posterior) concentrates on for any . By thresholding the largest eigenvalues , the eigen-basis can also serve as a basis for the low-dimensional subspace, and one can define projections similar to those in (11) and (12). Such prior-based dimension reduction can work well in a class of inverse problems, especially when they are prior-dominated (see beskos2017 for more details).
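Once the leading prior eigen-pairs are available, the Karhunen-Loève representation (14) can be sampled directly. The short sketch below draws a truncated KL prior sample; the polynomial decay of `eigvals` and the stand-in eigenfunctions are illustrative assumptions in the spirit of Assumption 3.1.

```python
import numpy as np

def kl_prior_sample(eigvals, eigvecs, rng):
    """Draw a prior sample via a truncated Karhunen-Loeve expansion, cf. (14)."""
    xi = rng.standard_normal(len(eigvals))                 # i.i.d. standard normal coefficients
    return eigvecs @ (np.sqrt(eigvals) * xi)

rng = np.random.default_rng(0)
n, r = 200, 50
j = np.arange(1, r + 1)
eigvals = j ** (-2.0)                                      # illustrative decay, cf. Assumption 3.1
eigvecs, _ = np.linalg.qr(rng.standard_normal((n, r)))     # stand-in orthonormal eigenfunctions
u = kl_prior_sample(eigvals, eigvecs, rng)
```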

With the eigen-pairs , the prior covariance operator can be written and approximated as

with and defined similarly as in (11). Then we can approximate the posterior covariance

where , and . By the Sherman-Morrison-Woodbury formula,

where . This implies . By applying and to (3) respectively and using the above approximations we get

where and .

Remark 3.2.

The eigen-pairs can be pre-computed/pre-specified. At each step we only need to calculate an -dimensional matrix at , which can be efficiently obtained by Hessian actions through adjoint methods for the underlying PDE.
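To make Remark 3.2 concrete, the sketch below forms the small projected Gauss-Newton Hessian through Hessian actions and the corresponding reduced posterior covariance of the leading prior-eigen coordinates. The explicit form Lambda_r^{-1} + H_r for the projected posterior precision is an assumption made for illustration, and for small r a direct inverse replaces the Sherman-Morrison-Woodbury manipulation used above for the full-space operator.

```python
import numpy as np

def reduced_posterior_cov(Phi_r, Lambda_r, gnh_action):
    """Reduced posterior covariance of the r prior-eigen coordinates (illustrative form).

    Phi_r      : (n, r) leading prior eigenfunctions (orthonormal columns)
    Lambda_r   : (r,) leading prior eigenvalues
    gnh_action : matrix-free Gauss-Newton Hessian action X -> H @ X at the current state
    """
    H_r = Phi_r.T @ gnh_action(Phi_r)                    # r x r projected GNH via Hessian actions
    # assumed projected posterior precision: Lambda_r^{-1} + H_r; invert the small matrix directly
    return np.linalg.inv(np.diag(1.0 / Lambda_r) + H_r)
```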

3.2 Likelihood-Based Dimension Reduction

When the data are more informative, the above prior-based dimension reduction may not perform well, and thus we need reduction techniques that are likelihood-based, like DILI (cui16) or AS (constantine2016). We consider the following generalized eigen-problem , to find the eigen-pairs such that

(16)

which can be shown to be equivalent to the eigen-problem of the ppGNH for eigen-pairs

(17)

with ; it can be written as where

For simplicity of notation, we drop the dependence on in the following, but readers should keep in mind that , and are defined and approximated pointwise .
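The equivalence between the generalized eigen-problem (16) and the ppGNH eigen-problem (17) can be checked numerically on small stand-in matrices; the sketch below uses random SPD/PSD matrices in place of the actual prior covariance and Gauss-Newton Hessian.

```python
import numpy as np
from scipy.linalg import eigh, sqrtm

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
C = A @ A.T + n * np.eye(n)            # stand-in prior covariance (SPD)
J = rng.standard_normal((3, n))
H = J.T @ J                            # stand-in Gauss-Newton Hessian (PSD)

lam_gen, _ = eigh(H, np.linalg.inv(C))           # generalized problem (16): H v = lam * C^{-1} v
C_half = np.real(sqrtm(C))
lam_pp, _ = eigh(C_half @ H @ C_half)            # ppGNH problem (17)
print(np.allclose(np.sort(lam_gen), np.sort(lam_pp)))    # True: identical spectra
```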

If we work with the whitened coordinates , then we can define the (local) posterior precision

which has a direct -dimensional low-rank approximation

(19)

Thus the local posterior covariance can be approximated using the Sherman-Morrison-Woodbury formula

(20)

where .
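In whitened coordinates, the Sherman-Morrison-Woodbury step from the low-rank precision (19) to the covariance (20) reduces to a simple rank-r identity that can be verified numerically; the eigenvectors and eigenvalues below are illustrative stand-ins for the leading ppGNH eigen-pairs.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 8, 3
V, _ = np.linalg.qr(rng.standard_normal((n, r)))   # stand-in leading ppGNH eigenvectors (orthonormal)
lam = np.array([50.0, 10.0, 2.0])                  # illustrative leading eigenvalues

precision = np.eye(n) + V @ np.diag(lam) @ V.T              # low-rank whitened precision, cf. (19)
cov_smw = np.eye(n) - V @ np.diag(lam / (1.0 + lam)) @ V.T  # SMW inverse, cf. (20)-(21)
print(np.allclose(cov_smw, np.linalg.inv(precision)))       # True
```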

Remark 3.3.

With the approximation (20), we directly have

(21)

Therefore we can rewrite the approximation (20) analogously to the approximation (12) in DILI (cui16)

In this sense, the approximation (20) to the posterior covariance is consistent with the low-rank approximation in DILI. However, we avoid the empirical computation of , and since is already in the diagonal form (21), we also avoid the rotation used in DILI (cui16).

By applying and to (10) respectively and using the approximation (20), we have the following proposal split on the low-dimensional subspace and its complement :

(22a)
(22b)

where and .
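A heavily simplified sketch of such a split update is given below. It uses a generic preconditioned Langevin-type move on the LIS coordinates and a pCN-type move on the complement, and it does not reproduce the exact coefficients of the semi-implicit discretization in (22); all names (`grad_r`, `D_r`, step sizes) are illustrative.

```python
import numpy as np

def split_proposal(v, grad_r, Phi_r, D_r, h_r, h_perp, rng):
    """Illustrative split move in whitened coordinates (not the exact scheme (22))."""
    v_r = Phi_r.T @ v
    v_perp = v - Phi_r @ v_r
    # likelihood-informed Langevin-type move on the LIS, preconditioned by D_r = lam/(1+lam)
    v_r_new = (v_r + 0.5 * h_r * D_r * grad_r
               + np.sqrt(h_r * D_r) * rng.standard_normal(v_r.shape))
    # pCN-type, prior-preserving move on the complement
    xi = rng.standard_normal(v.shape)
    xi_perp = xi - Phi_r @ (Phi_r.T @ xi)
    v_perp_new = np.sqrt(1.0 - h_perp ** 2) * v_perp + h_perp * xi_perp
    return Phi_r @ v_r_new + v_perp_new
```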

4 Dimension Reduced Algorithms

In this section we apply the dimension reduction techniques discussed in Section 3 to two -GMC algorithms, -mMALA and -mHMC. Since the prior-based dimension reduction has been implemented in beskos2017, we focus on the likelihood-based dimension reduction. We derive new efficient algorithms and compare them to DILI.

4.1 Dimension-Reduced ∞-mMALA

We can apply a semi-implicit Euler scheme similar to that in beskos08 to (22), or equivalently use the approximation (20) in the following whitened proposal, which is a reformulation of the manifold MALA proposal (6) under the transformation

(23)

for defined as in (4) and setting , in (7) we have:

(24)

where , with and chosen to be or to indicate whether or not to include gradient information on the low-dimensional subspace and its complement, respectively. Proposal (23) can be reformulated as follows

(25)

where , , and .

With the approximation (20), it is straightforward to verify

(26)

Substituting the approximations (20) and (26) into (25) yields the following proposal

(27)

With and , the proposal (27) can be rewritten as

Remark 4.1.

If we choose as in DILI (cui16), then proposal (27) can be further simplified as

(28)

Comparing with the following operator-weighted proposal as in DILI (Equations (17) and (36) of cui16; note that when is already in the diagonal form),

we find that proposal (28) corresponds to Proposal 3.2 (LI-Langevin) of cui16 with the following parameters in the above equation:

(29)

A similar connection to Proposal 3.1 (LI-Prior) of cui16 can be drawn with , that is, .

Proposal (28) is analogous to but different from the DILI proposal in two aspects: (i) while , , in DILI are fixed once the global LIS is obtained, , and implicitly depend on location; thus our proposal (28) is more general, with the freedom of choosing between a position-specific and an adaptively globalized implementation, as detailed in the following. (ii) Inherited from the semi-implicit scheme of ∞-mMALA, are all functions of the step size ; therefore there is only one tuning parameter, the discretized step size, in our proposal (28), while DILI has separate step sizes for and respectively, which may increase the difficulty of tuning.

With the proposal (27), we denote the position-specific implementation as dimension-reduced infinite-dimensional manifold MALA (DR-∞-mMALA). To prove the well-posedness of the resulting MCMC algorithm and derive its acceptance probability, we need the following assumption on the forward mapping :

Assumption 4.2.

For some , the mappings are Fréchet differentiable on with derivatives .

The following theorem establishes the validity of DR-∞-mMALA.

Theorem 4.3.

Under Assumptions 3.1 and 4.2, DR-∞-mMALA with proposal (27) is well-defined on the Hilbert space , with the acceptance probability specified as follows

(30)

where and .

Proof.

See LABEL:apx:thm-DR-infmMALA.

Corollary 4.4.

With the setting (29), the acceptance probability (30) differs from that of DILI (Equation (40) of cui16) by a determinant term , which adjusts for the change of local geometry.
