Data-Driven Model Reduction for the Bayesian Solution of Inverse Problems
Abstract
One of the major challenges in the Bayesian solution of inverse problems governed by partial differential equations (PDEs) is the computational cost of repeatedly evaluating numerical PDE models, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. This paper proposes a data-driven projection-based model reduction technique to reduce this computational cost. The proposed technique has two distinctive features. First, the model reduction strategy is tailored to inverse problems: the snapshots used to construct the reduced-order model are computed adaptively from the posterior distribution. Posterior exploration and model reduction are thus pursued simultaneously. Second, to avoid repeated evaluations of the full-scale numerical model as in a standard MCMC method, we couple the full-scale model and the reduced-order model together in the MCMC algorithm. This maintains accurate inference while reducing its overall computational cost. In numerical experiments considering steady-state flow in a porous medium, the data-driven reduced-order model achieves better accuracy than a reduced-order model constructed using the classical approach. It also improves posterior sampling efficiency by several orders of magnitude compared to a standard MCMC method.
Tiangang Cui, Youssef M. Marzouk, Karen E. Willcox¹
¹Correspondence to: K. E. Willcox, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. Emails: tcui@mit.edu, ymarz@mit.edu, kwillcox@mit.edu
Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
KEY WORDS: model reduction; inverse problem; adaptive Markov chain Monte Carlo; approximate Bayesian inference
Contract/grant sponsor: DOE Office of Advanced Scientific Computing Research (ASCR); contract/grant numbers: DE-FG02-08ER2585 and DE-SC0009297
1 Introduction and motivation
An important and challenging task in computational modeling is the solution of inverse problems, which convert noisy and indirect observational data into useful characterizations of the unknown parameters of a numerical model. In this process, statistical methods—Bayesian methods in particular—play a fundamental role in modeling various information sources and quantifying the uncertainty of the model parameters [1, 2]. In the Bayesian framework, the unknown parameters are modeled as random variables and hence can be characterized by their posterior distribution. Markov chain Monte Carlo (MCMC) methods [3] provide a powerful and flexible approach for sampling from posterior distributions. The Bayesian framework has been applied to inverse problems in various fields, for example, geothermal reservoir modeling [4], groundwater modeling [5], ocean dynamics [6], remote sensing [7], and seismic inversion [8].
To generate a sufficient number of samples from the posterior distribution, MCMC methods require sequential evaluations of the posterior probability density at many different points in the parameter space. Each evaluation of the posterior density involves a solution of the forward model used to define the likelihood function, which is typically a computationally intensive undertaking (e.g., the solution of a system of PDEs). In this many-query situation, one way to address the computational burden of evaluating the forward model is to replace it with a computationally efficient surrogate. Surrogate models have been applied to inverse problems in several settings; for example, [9, 10] employed generalized polynomial chaos expansions, [11] employed Gaussian process models, and [12, 13, 14, 15] used projection-based reduced-order models. In this work, we also focus on projection-based reduced-order models (although the data-driven strategy underlying our approach should be applicable to other types of surrogate models). A projection-based reduced-order model reduces the computational complexity of the original or “full” forward model by solving a projection of the full model onto a reduced subspace. For the model reduction methods we consider, the construction of the reduced subspace requires evaluating the full model at representative samples drawn from the parameter space. The solutions of the full model at these samples are referred to as snapshots [16]. Their span defines the reduced subspace, represented via an orthogonal basis.
The quality of the reduced-order model relies crucially on the choice of the samples for computing the snapshots. To construct reduced-order models targeting the Bayesian solution of the inverse problem, we employ existing projection-based model reduction techniques. Our innovation is a data-driven approach that adaptively selects samples from the posterior distribution for the snapshot evaluations. This approach has two distinctive features:

We integrate the reduced-order model construction process into an adaptive MCMC algorithm that simultaneously samples the posterior and selects posterior samples for computing the snapshots. During the sampling process, the numerical accuracy of the reduced-order model is adaptively improved.

The approximate posterior distribution defined by the reduced-order model is used to increase the efficiency of MCMC sampling. We either couple the reduced-order model and the full model together to accelerate sampling of the full posterior distribution†, or directly explore the approximate posterior distribution induced by the reduced-order model. In the latter case, sampling the approximate distribution yields a biased Monte Carlo estimator, but the bias can be controlled using error indicators or estimators.
†The term “full posterior” refers to the posterior distribution induced by the original or full forward model.
Compared to classical offline approaches that build the reduced-order model before using it in the many-query setting, the motivation for collecting snapshots during posterior exploration is to build a reduced-order model that focuses on a more concentrated region of the parameter space. Because the solution of the inverse problem is unknown until the data are available, reduced-order models built offline must retain a level of numerical accuracy over a rather large region of the parameter space, covering the support of the posterior distributions for all possible observed data sets. For example, samples used for computing the snapshots are typically drawn from the prior distribution [13]. In comparison, our data-driven approach focuses only on the posterior distribution resulting from a particular data set. As the observed data necessarily increase the information divergence of the prior distribution from the posterior distribution [17], the support of the posterior distribution is more compact than that of the prior distribution.
Figure 1 illustrates a simple twodimensional inference problem, where the prior distribution and the posterior distribution are represented by the blue and red contours, respectively. The left plot of Figure 1 shows 50 randomly drawn prior samples for computing the snapshots, each of which has a low probability of being in the support of the posterior. In comparison, the samples selected by our datadriven approach, as shown in the right plot of Figure 1, are scattered within the region of high posterior probability.
By retaining numerical accuracy only in a more concentrated region, the data-driven reduced-order model requires a basis of lower dimension to achieve the same level of accuracy as a reduced-order model built offline. For the same reason, the data-driven model reduction technique can potentially scale better with parameter dimension than the offline approach.
We note that goal-oriented model reduction approaches have been developed in the context of PDE-constrained optimization [18, 19, 20, 21], in which the reduced-order model is constructed simultaneously with the optimization process. In these methods, the snapshots for constructing the reduced basis are evaluated only at points in the parameter space that are close to the trajectory of the optimization algorithm.
The remainder of this paper is organized as follows. In Section 2, we outline the Bayesian framework for solving inverse problems and discuss the sampling efficiency and Monte Carlo error of MCMC. In Section 3, we introduce the data-driven model reduction approach, and construct the data-driven reduced-order model within an adaptive delayed acceptance algorithm to speed up MCMC sampling. We also provide results on the ergodicity of the algorithm. Section 4 analyzes some properties of the data-driven reduced-order model. In Section 5, we discuss a modified framework that adaptively constructs the reduced-order model while directly exploring the induced approximate posterior distribution. We also provide an analysis of the mean square error of the resulting Monte Carlo estimator. In Section 6, we demonstrate and discuss various aspects of our methods through numerical examples. Section 7 offers concluding remarks.
2 Samplebased inference for inverse problems
The first part of this section provides a brief overview of the Bayesian framework for inverse problems. Further details can be found in [1, 22, 2]. The second part of this section discusses the efficiency of MCMC sampling for computationally intensive inverse problems.
2.1 Posterior formulation and sampling
Given a physical system, let x ∈ ℝ^{N_p} denote the N_p-dimensional unknown parameter, and d_obs ∈ ℝ^{N_d} denote the N_d-dimensional observed data. The forward model d = F(x) maps a given parameter x to the observable model outputs d.
In a Bayesian setting, the first task is to construct the prior model and the likelihood function as probability distributions. The prior density π₀(x) is a stochastic model representing knowledge of the unknown x before any measurements. The likelihood function L(d_obs | x) specifies the probability density of the observation d_obs for a given set of parameters x. We assume that the data and the model parameters follow the stochastic relationship
d_obs = F(x) + e, (1)
where the random vector e captures the measurement noise and other uncertainties in the observation-model relationship, including model errors. Without additional knowledge of the measurement process and model errors, e is modeled as a zero-mean Gaussian, e ∼ N(0, Σ_e), where Σ_e is the covariance. Let
η(x) = (1/2) ‖ Σ_e^{−1/2} ( F(x) − d_obs ) ‖² (2)
denote the data-misfit function. The resulting likelihood function is proportional to exp(−η(x)). By Bayes’ formula, the posterior probability density is
π(x | d_obs) = (1/Z) exp(−η(x)) π₀(x), (3)
where
Z = ∫ exp(−η(x)) π₀(x) dx (4)
is the normalizing constant.
In the general framework set up by [23], we can explore the posterior distribution given by (3) using MCMC methods such as the Metropolis-Hastings algorithm [24, 25]. The Metropolis-Hastings algorithm uses a proposal distribution and an acceptance/rejection step to construct the transition kernel of a Markov chain that converges to the desired target distribution.
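As a concrete illustration, the sketch below implements a random-walk Metropolis-Hastings sampler for an unnormalized log-posterior built from a Gaussian likelihood and a user-supplied prior. The names `forward`, `log_prior`, and the scalar `noise_std` are illustrative placeholders, not the forward models used later in this paper.

```python
import numpy as np

def log_posterior(x, forward, d_obs, noise_std, log_prior):
    """Unnormalized log-posterior: log-prior minus the data-misfit."""
    misfit = 0.5 * np.sum(((forward(x) - d_obs) / noise_std) ** 2)
    return log_prior(x) - misfit

def random_walk_metropolis(x0, n_steps, step_size, logpost, rng):
    """Random-walk Metropolis-Hastings with an isotropic Gaussian proposal."""
    x = np.array(x0, dtype=float)
    lp = logpost(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        x_prop = x + step_size * rng.standard_normal(x.size)
        lp_prop = logpost(x_prop)
        # Symmetric proposal: accept with probability min(1, pi(x')/pi(x)).
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = x_prop, lp_prop
        chain[i] = x
    return chain
```

For example, with an identity forward model, unit noise, and a standard normal prior, the chain targets a Gaussian posterior centered halfway between the prior mean and the data.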
2.2 Sampling efficiency and Monte Carlo error
Once we draw a sufficient number of samples from the posterior distribution, expectations of functions over the posterior distribution can be estimated by Monte Carlo integration. Suppose we wish to estimate the expectation of a function h(x) over the posterior distribution,
E_π[h] = ∫ h(x) π(x | d_obs) dx, (5)
using N posterior samples x^(i) ∼ π(x | d_obs), i = 1, …, N. The resulting Monte Carlo estimator of E_π[h] is
ĥ_N = (1/N) ∑_{i=1}^{N} h(x^(i)), (6)
which is an unbiased estimator with the mean square error
MSE(ĥ_N) = E[ ( ĥ_N − E_π[h] )² ] = Var(ĥ_N). (7)
Because the samples drawn by an MCMC algorithm are correlated, the variance of ĥ_N depends on the effective sample size,
ESS(N) = N / τ(h), so that Var(ĥ_N) = Var_π(h) / ESS(N), (8)
where
τ(h) = 1 + 2 ∑_{k=1}^{∞} ρ_k(h), with ρ_k(h) denoting the lag-k autocorrelation of the series h(x^(i)),
is the integrated autocorrelation of h(x); see [3, Chapter 5.8] for a detailed derivation and further references.
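In practice the integrated autocorrelation and effective sample size can be estimated from a chain as sketched below, using a simple truncation at the first non-positive empirical autocorrelation. This is a minimal sketch, not the exact estimator discussed in [3].

```python
import numpy as np

def integrated_autocorr_time(y, max_lag=None):
    """Estimate tau = 1 + 2 * sum_k rho(k), truncating the sum at the
    first non-positive empirical autocorrelation."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    yc = y - y.mean()
    var = yc @ yc / n
    max_lag = max_lag or n // 3
    tau = 1.0
    for k in range(1, max_lag):
        rho = (yc[:-k] @ yc[k:]) / (n * var)  # lag-k autocorrelation
        if rho <= 0.0:
            break
        tau += 2.0 * rho
    return tau

def effective_sample_size(y):
    """ESS = N / tau, the number of effectively independent samples."""
    return len(y) / integrated_autocorr_time(y)
```

For an i.i.d. sequence the ESS is close to the chain length, while for a strongly correlated chain (e.g., an AR(1) process with coefficient 0.9) it is only a small fraction of it.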
In practice, we wish to improve the computational efficiency of the MCMC algorithm, which is defined by the effective sample size for a given budget of CPU time. There are two ways to achieve this:

Improve the statistical efficiency. To increase the effective sample size for a given number of MCMC iterations, one can reduce the sample correlation by designing proposals that traverse the parameter space more efficiently. For example, adaptive MCMC methods [26, 27] use adaptation to learn the covariance structure of the posterior distribution online, and use this information to design proposal distributions. Algorithms such as stochastic Newton [8] and Riemannian manifold MCMC [28] use local derivative information to construct efficient proposals that adapt to the local geometry of the posterior distribution.

Increase the number of MCMC steps for a given amount of CPU time. Because simulating the forward model in the data-misfit function (2) is CPU-intensive, computing the posterior density at every iteration is the major bottleneck of the standard MCMC algorithm. By using a fast approximation to the forward model, the amount of computing time can be reduced. This effort complements the use of MCMC algorithms that require local derivatives, since fast approximations of the forward model also enable fast evaluations of its gradient and even higher-order derivatives.‡
‡Using approximate derivatives does not impact the ergodicity of the MCMC algorithm, since the bias caused by the approximate derivatives is corrected by the acceptance/rejection step in the Metropolis-Hastings algorithm.
One important remark is that by drawing samples directly from an approximate posterior distribution (i.e., the one induced by our approximation of the forward model), the resulting Monte Carlo estimator (6) is biased. Algorithms such as surrogate transition [29] or delayed acceptance [30] MCMC can be used to couple the posterior distribution and its approximation together to ensure the samples are drawn from the full posterior distribution. On the other hand, if the accuracy of the posterior approximation can be tightly controlled, then some bias may be acceptable if it enables significant variance reduction, and thus an overall reduction in the mean squared error of an estimate of a posterior expectation, for a given computational effort. We explore both options in this work.
3 Data-driven reduced-order model and full target algorithm
This section introduces our data-driven model reduction approach and the adaptive sampling framework for simultaneously constructing the reduced-order model and exploring the posterior distribution.
3.1 Posterior approximation
Suppose the system of interest can be described by a system of nonlinear steady partial differential equations (PDEs), with a finite element or finite difference discretization that results in a discretized system in the form of
A(x) u + f(u, x) + q(x) = 0. (9)
In Equation (9), u ∈ ℝ^n represents the discretized state of the system, n is the dimension of the system (the number of unknowns), A(x) ∈ ℝ^{n×n} is a discretized linear operator, f(u, x) represents the discretized nonlinear terms of the governing PDE, and q(x) denotes the forcing terms. All of A, f, and q can be parameterized by the unknown parameter x. An observation operator C maps the state of the system to the observable model outputs, i.e.,
d = C(u, x). (10)
Equations (9) and (10) define the forward model, d = F(x), that maps a realization of the unknown parameter x to observable model outputs d.
Our data-driven model reduction approach selects a set of posterior samples {x^(1), …, x^(m)} and computes the snapshots by solving Equation (9) for each sample x^(i). Orthogonalizing the span of the snapshots yields the reduced basis V ∈ ℝ^{n×r}, and the state u can be approximated by a linear combination of the reduced basis vectors, i.e., u ≈ V u_r. Then Equation (9) can be approximated by applying the Galerkin projection:
Vᵀ A(x) V u_r + Vᵀ f(V u_r, x) + Vᵀ q(x) = 0, (11)
and the associated model outputs are
d_r = C(V u_r, x). (12)
If r ≪ n, the dimension of the unknown state u_r ∈ ℝ^r in (11) and (12) is greatly reduced compared to that of the original full system (9) and (10). Thus (11) and (12) define a reduced-order model d_r = F_r(x) that maps a realization of the unknown parameter x to an approximation of the observable model outputs.
Care must be taken to ensure efficient solution of the reduced-order model, since in the presence of general parametric dependence, the equations (11) and (12) have low state dimension but are not necessarily fast to solve. This is because for each new parameter x, solution of the reduced-order model requires evaluating the full-scale system matrices or residual, projecting those matrices or residual onto the reduced basis, and then solving the resulting low-dimensional system. Since many elements of these computations depend on n, the dimension of the original system, this process is typically computationally expensive (unless there is special structure to be exploited, such as affine parametric dependence). To reduce the computational cost of this process, methods such as missing point estimation [31], the empirical interpolation method [32], and its discrete variant [33] approximate the nonlinear term in the reduced-order model by selective spatial sampling.
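As a minimal illustration of the projection steps in (11) and (12), the sketch below builds a reduced basis from snapshots and solves a Galerkin-projected system. It assumes a linear full model with affine parameter dependence, A(x) = A0 + x·A1, a simplification under which the projected matrices could be precomputed offline; the hyper-reduction techniques cited above are not included, and all names are illustrative.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-8):
    """Orthonormal reduced basis V from snapshot columns via a thin SVD,
    truncating directions with relative singular value below tol."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    r = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :r]

def reduced_solve(V, A0, A1, q, x):
    """Galerkin projection of (A0 + x*A1) u = q onto span(V):
    solve the r-by-r system V^T A(x) V u_r = V^T q, return V u_r."""
    # For affine dependence, V^T A0 V, V^T A1 V, V^T q are precomputable.
    Ar = V.T @ (A0 @ V) + x * (V.T @ (A1 @ V))
    ur = np.linalg.solve(Ar, V.T @ q)
    return V @ ur  # approximate full state
```

Building the basis from a handful of full solves at sampled parameter values, the reduced solve then reproduces the full solution at the snapshot parameters and approximates it well nearby.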
We note that for systems that exhibit a wide range of behaviors, recently developed localization approaches such as [34, 35, 36, 37] adaptively construct multiple local reduced bases, each tailored to a particular system behavior that is associated with a subdomain of the state space or the parameter space. Our use of adaptation is different, focusing on the adaptive selection of posterior samples for evaluating the snapshots. In this paper we consider only the case of a global reduced basis; however, future work could combine our adaptive posterior sampling approach with a localization strategy to build multiple local reduced bases, each adapted to the local structure of the posterior.
The data-misfit function (2) can be approximated by replacing the forward model F(x) with the reduced-order model F_r(x), which gives
η_r(x) = (1/2) ‖ Σ_e^{−1/2} ( F_r(x) − d_obs ) ‖². (13)
Then the resulting approximate posterior distribution has the form
π_r(x | d_obs) = (1/Z_r) exp(−η_r(x)) π₀(x), (14)
where Z_r = ∫ exp(−η_r(x)) π₀(x) dx is the normalizing constant.
Our data-driven approach adaptively selects the sample x at which to evaluate the next snapshot, choosing a point where the scaled error of the reduced-order model outputs,
t(x) = Σ_e^{−1/2} ( F(x) − F_r(x) ), (15)
for the current reduced basis V exceeds a user-specified threshold. The whitening transformation Σ_e^{−1/2} measures the error of the reduced-order model relative to the noise level of the observed data, which has standard deviation 1 after the transformation. In cases where we wish to bypass full forward model evaluations, so that the error t(x) cannot be computed directly, an error indicator or an error estimator, t̂(x) ≈ t(x), can be used instead. In this work, we use the dual-weighted residual technique [38] to compute an error indicator.
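A sketch of this whitened error criterion, assuming the noise covariance is supplied as a positive-definite matrix `noise_cov` (names are illustrative; the dual-weighted residual indicator itself is not shown):

```python
import numpy as np

def scaled_error(d_full, d_rom, noise_cov):
    """Whitened output error: each component is the reduced-order model
    output error measured in units of one noise standard deviation."""
    L = np.linalg.cholesky(noise_cov)  # noise_cov = L L^T
    return np.linalg.solve(L, d_full - d_rom)

def needs_refinement(d_full, d_rom, noise_cov, eps):
    """Trigger a basis update when any whitened output error exceeds eps."""
    return np.max(np.abs(scaled_error(d_full, d_rom, noise_cov))) > eps
```

With a diagonal noise covariance this simply divides each output error by the corresponding noise standard deviation.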
3.2 Full target algorithm
To achieve simultaneous model reduction and posterior exploration, we employ the adaptive delayed acceptance algorithm of [39]. Suppose we have a reduced-order model constructed from an initial reduced basis. At each iteration of the MCMC sampling, we first sample the approximate posterior distribution based on the reduced-order model for a certain number of steps, using a standard Metropolis-Hastings algorithm. This first-stage subchain should have a sufficient number of steps that its initial state and last state are uncorrelated. The last state of the subchain is then used as the proposal candidate in the second stage of the algorithm, and the acceptance probability for this candidate is computed from the ratio of the full posterior density to the approximate posterior density. This delayed acceptance algorithm employs the first-stage subchain to decrease the sample correlation by sampling the computationally fast approximate posterior, then uses the acceptance/rejection step in the second stage to ensure that the algorithm correctly samples the full posterior distribution.
The statistical efficiency of the delayed acceptance algorithm relies on the accuracy of the approximate posterior. An approximate posterior induced by an inaccurate reduced-order model can result in a higher second-stage rejection rate, which decreases the statistical efficiency of the delayed acceptance algorithm. This is because duplicated MCMC samples generated by rejections increase the sample correlation (we refer to [30] for a formal justification). To maintain statistical efficiency, we aim to construct a sufficiently accurate reduced-order model, so that the resulting second-stage acceptance probability is close to 1. To achieve this, we introduce adaptive reduced basis enrichment into the delayed acceptance algorithm.
After each full posterior density evaluation, the state of the associated forward model evaluation can be used as a potential new snapshot. We compute the scaled error (15) of the reduced-order model outputs at each new posterior sample, and the reduced basis is updated with the new snapshot when the error exceeds a user-given threshold ε. By choosing an appropriate threshold ε, we can control the maximum allowable error of the reduced-order model.
The resulting reduced-order model is data-driven, since it uses the information provided by the observed data (in the form of the posterior distribution) to select samples for computing the snapshots. It is also an online model reduction approach, since the reduced-order model is built concurrently with posterior sampling.
3.2.1 The algorithm.
Since the adaptive sampling procedure samples from the full posterior distribution, hereafter we refer to it as the “full target algorithm.” Details of the full target algorithm are given in Algorithm 1.
Lines 1–13 of Algorithm 1 simulate a Markov chain with invariant distribution π_r(x | d_obs) for L steps. In Lines 14–19 of Algorithm 1, to ensure that the algorithm samples from the full posterior distribution, we post-process the last state of the subchain using a second-stage acceptance/rejection based on the full posterior distribution.
The second-stage acceptance probability is controlled by the accuracy of the reduced-order model and the subchain length L. Ideally, we want a large L to give uncorrelated samples. However, if the second-stage acceptance probability is low, the effort spent on simulating the subchain is wasted, because the proposal is likely to be rejected in the second stage. To avoid this situation, we dynamically monitor the accuracy of the reduced-order model by evaluating its error indicator at each state of the subchain (Lines 10–12). The subchain is terminated early if the L∞ norm of the scaled error indicator exceeds a given threshold.
Lines 20–22 of Algorithm 1 describe the adaptation of the reduced basis, which is controlled by the scaled error (15) of the reduced-order model and a given threshold ε. Three criteria must be satisfied for the reduced basis to be updated: (i) the scaled error at the new posterior sample must exceed the threshold ε; (ii) the dimension of the reduced basis must not exceed the maximum allowable dimension r_max; and (iii) the finite adaptation criterion (Definition 1) should not yet be satisfied. The finite adaptation criterion is defined precisely as follows.
Definition 1.
Finite adaptation criterion. The average number of MCMC steps used for each reduced basis enrichment exceeds ⌈1/(c₀ ε²)⌉, for some user-specified c₀ ∈ (0, 1), where ε is the error threshold used in Algorithm 1.
The finite adaptation criterion is a threshold for how infrequent updates to the reduced-order model should become before model adaptation is stopped entirely. When more and more MCMC steps are used between updates, the reduced-order model has satisfied the accuracy threshold over larger and larger regions of the posterior. The threshold of at least ⌈1/(c₀ ε²)⌉ steps between updates can thus be understood as a measure of “sufficient coverage” by the reduced-order model. Further discussion and justification of the finite adaptation criterion is deferred to Section 4.
Algorithm 1 uses the adaptation in Lines 20–22 to drive the online construction of the reduced-order model. Once the adaptation stops, the algorithm runs as a standard delayed acceptance algorithm, with an L-step subchain in the first stage.
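One outer step of such a delayed acceptance sampler can be sketched as follows. This is a simplified sketch without the error monitoring and basis adaptation of Algorithm 1; `logpost_full` and `logpost_rom` are unnormalized log-densities of the full and approximate posteriors, and `propose` is a symmetric proposal.

```python
import numpy as np

def delayed_acceptance_step(x, logpost_full, logpost_rom, propose, sub_len, rng):
    """First stage: an L-step Metropolis subchain targeting the approximate
    (reduced-order) posterior. Second stage: accept the subchain endpoint y
    with probability min{1, [pi(y) pi_r(x)] / [pi(x) pi_r(y)]}, which makes
    the overall kernel reversible with respect to the full posterior."""
    y, lr_y = x.copy(), logpost_rom(x)
    for _ in range(sub_len):
        y_prop = propose(y, rng)
        lr_prop = logpost_rom(y_prop)
        if np.log(rng.uniform()) < lr_prop - lr_y:
            y, lr_y = y_prop, lr_prop
    log_alpha = (logpost_full(y) - logpost_full(x)) - (lr_y - logpost_rom(x))
    if np.log(rng.uniform()) < log_alpha:
        return y
    return x
```

Even when the approximate posterior is deliberately shifted and inflated relative to the full posterior, the second-stage correction keeps the chain targeting the full posterior.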
To prevent the reduced basis dimension from growing so large that the reduced-order model becomes computationally inefficient, we limit the reduced basis dimension to at most a user-given threshold r_max. When the reduced basis dimension reaches r_max but the finite adaptation criterion is not yet satisfied, one strategy is to stop the reduced basis enrichment and continue simulating Algorithm 1. This does not affect the ergodicity of Algorithm 1, as discussed in Section 3.2.2. However, the error of the resulting reduced-order model can be large in the subregion of the support of the posterior that remains unexplored. Consequently, this large reduced-order model error may decrease the second-stage acceptance rate and the statistical efficiency of the algorithm.
A wide range of proposal distributions can be used in Algorithm 1. In this work, we use the grouped-components adaptive Metropolis [39], a variant of adaptive Metropolis [26]. More advanced algorithms such as stochastic Newton [8] or manifold MCMC [28] can also be used within Algorithm 1. The computational cost of evaluating derivative information for these algorithms can likewise be reduced by using the reduced-order model.
3.2.2 Ergodicity of the full target algorithm.
Throughout this work, the following assumption on the forward model and the reducedorder model is used:
Assumption 2.
The forward model F(x) and the reduced-order model F_r(x) are Lipschitz continuous and bounded on supp(π₀), the support of the prior distribution.
Since the data-misfit functions η(x) and η_r(x) are quadratic in the model outputs, we have η(x) ≥ 0 and η_r(x) ≥ 0. Assumption 2 implies that there exists a constant K > 0 such that η(x) ≤ K and η_r(x) ≤ K on supp(π₀). We also note that Lipschitz continuity of F(x) implies the Lipschitz continuity of exp(−η(x)). Similarly, exp(−η_r(x)) is Lipschitz continuous.
We first establish the ergodicity of a nonadaptive version of Algorithm 1, where the reducedorder model is assumed to be given and fixed.
Lemma 3.
Suppose Assumption 2 holds, and let the reduced-order model F_r be given and fixed. Then the non-adaptive version of Algorithm 1 is ergodic with respect to the full posterior distribution π(x | d_obs).
Proof.
The last state of the first-stage subchain is a proposal drawn from an L-fold composition of a Metropolis-Hastings kernel that is reversible with respect to π_r(x | d_obs), and this composition is itself reversible with respect to π_r(x | d_obs). The second-stage acceptance probability is exactly the Metropolis-Hastings correction for this proposal, so the resulting transition kernel is reversible with respect to π(x | d_obs). Ergodicity then follows as in the analysis of the surrogate transition and delayed acceptance methods [29, 30]. ∎
The adaptation used in Algorithm 1 is different from that of standard adaptive MCMC algorithms such as [26] and the original adaptive delayed acceptance algorithm [39]. These algorithms carry out an infinite number of adaptations, because modifications to the proposal distribution or to the approximation continue throughout MCMC sampling. The number of adaptations in Algorithm 1, on the other hand, cannot exceed the maximum allowable dimension of the reduced basis, and hence is finite. Furthermore, the adaptation stops after finite time because of the finite adaptation criterion. Since, by Lemma 3, Algorithm 1 is ergodic for each of the reduced-order models constructed during adaptation, Proposition 2 in [27] ensures the ergodicity of Algorithm 1. Lemma 3 also shows that Algorithm 1 converges to the full posterior distribution regardless of the accuracy of the reduced-order model.
4 Properties of the approximation
In Algorithm 1, the accuracy of the reduced-order model is adaptively improved during MCMC sampling, and thus it is important to analyze the error of the approximate posterior induced by the reduced-order model, compared to the full posterior. Since bounds on the bias of the resulting Monte Carlo estimator can be derived from the error of the approximate posterior, this analysis is particularly useful in situations where we want to use the approximate posterior directly for further reduction of CPU time. This error analysis also justifies the finite adaptation criterion (Definition 1) used in Algorithm 1 to terminate the enrichment of the reduced basis.
We provide here an analysis of the Hellinger distance between the full posterior distribution and the approximate posterior distribution,
d_H(π, π_r) = ( (1/2) ∫ ( √(π(x | d_obs)) − √(π_r(x | d_obs)) )² dx )^{1/2}. (16)
The Hellinger distance translates directly into bounds on expectations, and hence we use it as a metric to quantify the error of the approximate posterior distribution.
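This statement can be made precise through a standard inequality (see, e.g., [22]): for a function h(x) with finite second moments under both distributions,

```latex
\left| \mathbb{E}_{\pi}[h] - \mathbb{E}_{\pi_r}[h] \right|
  \;\le\;
  2 \left( \mathbb{E}_{\pi}\!\left[h^{2}\right] + \mathbb{E}_{\pi_r}\!\left[h^{2}\right] \right)^{1/2}
  d_{\mathrm{H}}(\pi, \pi_r),
```

so controlling the Hellinger distance between the two posteriors controls the bias incurred by computing posterior expectations with the approximation.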
The framework set by [22] is adapted here to analyze the Hellinger distance between the full posterior distribution and its approximation induced by the reduced-order model constructed in Algorithm 1, with respect to a given threshold ε. Given the set of samples at which the snapshots are computed, and the associated reduced basis V, we define the feasible set and the associated posterior measure:
Definition 4.
For a given ε > 0 and a reduced-order model F_r, define the feasible set as
Θ(ε) = { x ∈ supp(π₀) : ‖ Σ_e^{−1/2} ( F(x) − F_r(x) ) ‖_∞ ≤ ε }. (17)
The set Θ(ε) has posterior measure
μ(Θ(ε)) = ∫_{Θ(ε)} π(x | d_obs) dx. (18)
The complement of the feasible set is given by Θ(ε)ᶜ = supp(π₀) \ Θ(ε), which has posterior measure 1 − μ(Θ(ε)).
On any measurable subset of the feasible set, the error of the reduced-order model is bounded above by ε. The Hellinger distance (16) can then be bounded in terms of the user-specified ε and the posterior measure of Θ(ε)ᶜ, which is the region that has not been well approximated by the reduced-order model. We formalize this notion through the following propositions and theorems.
Proposition 5.
Given a reduced-order model F_r and some ε > 0, there exists a constant K₁ > 0 such that
| exp(−η(x)) − exp(−η_r(x)) | ≤ K₁ ε for all x ∈ Θ(ε).
Proof.
This result directly follows from the definition of the feasible set and Assumption 2. ∎
Proposition 6.
Given a reduced-order model F_r and an ε > 0, there exist constants K₂ > 0 and K₃ > 0 such that
| Z − Z_r | ≤ K₂ ε + K₃ ( 1 − μ(Θ(ε)) ).
Proof.
From the definition of the feasible set, we have a bound on the difference of the normalizing constants:
| Z − Z_r | ≤ ∫_{Θ(ε)} | exp(−η(x)) − exp(−η_r(x)) | π₀(x) dx + ∫_{Θ(ε)ᶜ} | exp(−η(x)) − exp(−η_r(x)) | π₀(x) dx.
By Proposition 5, there exists a constant K₂ such that the first integral is bounded by K₂ ε. Since exp(−η(x)) and exp(−η_r(x)) are Lipschitz continuous, bounded, and greater than zero, the prior measure of Θ(ε)ᶜ is bounded by a constant multiple of its posterior measure, and hence there exists a constant K₃ such that the second integral is bounded by K₃ ( 1 − μ(Θ(ε)) ). Thus, we have | Z − Z_r | ≤ K₂ ε + K₃ ( 1 − μ(Θ(ε)) ). ∎
Theorem 7.
Suppose we have the full posterior distribution π(x | d_obs) and its approximation π_r(x | d_obs) induced by a reduced-order model F_r. For a given ε > 0, there exist constants K₄ > 0 and K₅ > 0 such that
d_H(π, π_r) ≤ K₄ ε + K₅ √( 1 − μ(Θ(ε)) ). (19)
Proof.
Following Theorem 4.6 of [22] we have
d_H(π, π_r)² ≤ I₁ + I₂,
where
I₁ = (2/Z) ∫ ( exp(−η(x)/2) − exp(−η_r(x)/2) )² π₀(x) dx and I₂ = 2 | Z^{−1/2} − Z_r^{−1/2} |² Z_r.
Following the same derivation as Proposition 6, splitting the integral in I₁ over Θ(ε) and Θ(ε)ᶜ shows that there exist constants K and K′ such that
I₁ ≤ K ε² + K′ ( 1 − μ(Θ(ε)) ).
Since | Z^{−1/2} − Z_r^{−1/2} |² ≤ max( Z^{−3}, Z_r^{−3} ) | Z − Z_r |², Proposition 6 yields a bound of the same form for I₂. Combining the above results we have
d_H(π, π_r)² ≤ K″ ε² + K‴ ( 1 − μ(Θ(ε)) ),
and since √(a + b) ≤ √a + √b for a, b ≥ 0, the above inequality can be rearranged as (19) for some constants K₄ > 0 and K₅ > 0. ∎
Under certain technical conditions [40, 41], the pointwise error of the reduced-order model decreases asymptotically as the dimension of the reduced basis increases. Thus the feasible set Θ(ε) asymptotically grows with the reduced basis enrichment, and hence 1 − μ(Θ(ε)) asymptotically decays. If the posterior distribution is sufficiently well sampled that
1 − μ(Θ(ε)) ≤ ε², (20)
then the Hellinger distance (16) is characterized entirely by ε, as shown in Theorem 7. Thus, by adaptively updating the data-driven reduced-order model until condition (20) is satisfied, we can build an approximate posterior distribution whose error is proportional to the user-specified error threshold ε.
In practice, it is not feasible to check condition (20) within MCMC sampling, so we use heuristics to motivate the finite adaptation criterion in Definition 1. Consider a stationary Markov chain that has invariant distribution π(x | d_obs). We assume that the probability of the chain visiting Θ(ε)ᶜ is proportional to its posterior measure. Suppose 1 − μ(Θ(ε)) > ε²; then the probability of the Markov chain visiting Θ(ε)ᶜ is about c ε², for some c > 1. In this case, the average number of MCMC steps needed before the next reduced basis refinement is about 1/(c ε²). Since the posterior measure of Θ(ε)ᶜ decays asymptotically with refinement of the reduced basis, the average number of steps needed for basis refinement asymptotically increases. Thus we treat condition (20) as holding once the average number of steps between reduced basis refinements exceeds ⌈1/(c₀ ε²)⌉, and we then terminate the adaptation. We recommend choosing a small value for c₀, for example c₀ = 0.1, to delay the stopping time of the adaptation. In this way, the adaptive construction process can search the parameter space more thoroughly, increasing the likelihood that condition (20) is satisfied.
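A sketch of this stopping heuristic, assuming the Definition 1 threshold takes the form 1/(c₀ε²) with a user-specified constant `c0` (e.g., 0.1); names are illustrative:

```python
import math

def finite_adaptation_reached(total_steps, n_updates, eps, c0=0.1):
    """Stop enriching the basis once the average number of MCMC steps per
    basis update exceeds ceil(1 / (c0 * eps**2)) (assumed form of the
    Definition 1 threshold)."""
    avg_steps_per_update = total_steps / max(n_updates, 1)
    return avg_steps_per_update > math.ceil(1.0 / (c0 * eps ** 2))
```

For example, with c₀ = 0.1 and ε = 0.1, adaptation stops once updates arrive less often than once per 1000 MCMC steps.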
5 Extension to approximate Bayesian inference
The full target algorithm introduced in Section 3.2 has to evaluate the forward model after each subchain simulation to preserve ergodicity. The resulting Monte Carlo estimator (6) is unbiased. However, the effective sample size (ESS) for a given budget of computing time is still limited by the number of full model evaluations. To increase the ESS, we also consider an alternative approach that directly samples from the approximate posterior distribution once the reduced-order model has sufficient accuracy.
Suppose that the scaled error indicator t̂(x) provides a reliable estimate of the scaled true error of the reduced-order model. Then the reliability and the refinement of the reduced-order model can be controlled through the error indicator. During MCMC sampling, we evaluate the full model, to correct a Metropolis acceptance and to update the reduced-order model, only when the error indicator exceeds the threshold ε. When a reduced-order model evaluation has error indicator less than ε, we treat the reduced-order model as a sufficient approximation to the full model. In this case, decisions in the MCMC algorithm are based on the approximate posterior distribution.
Compared to the full target algorithm, which draws samples from the full posterior distribution, this approach samples only from an approximation that mimics the full posterior up to the error threshold. We refer to the proposed approach as the “approximate algorithm.” Even though sampling the approximate posterior distribution introduces bias into the Monte Carlo estimator (6), this bias may be acceptable if the resulting estimator achieves a smaller mean squared error (MSE) for a given amount of computational effort than that of the standard Metropolis-Hastings algorithm. In the remainder of this section, we provide details of the algorithm and analyze the bias of the resulting Monte Carlo estimator.
5.1 The approximate algorithm
Algorithm 2 details the approximate algorithm. During the adaptation, for a proposed candidate, we discard the reduced-order model evaluation if its error indicator exceeds a new upper threshold, which we set to one standard deviation of the measurement noise; that is, Algorithm 2 does not use information from the reduced-order model when the model's estimated error is greater than one standard deviation of the noise. In this case (Lines 3–6), we run the full model directly to evaluate the posterior distribution and the acceptance probability. If the error indicator of the reduced-order model lies between the lower threshold and the upper threshold (Lines 7–15), the reduced-order model is considered to be a reasonable approximation, and the delayed acceptance scheme is used to make the correction. If the error indicator is less than the lower threshold, or if the adaptation has stopped (Lines 17–19), the reduced-order model is considered a sufficiently accurate approximation to the full model and is used to accept or reject the proposal directly.
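The three-way branch just described can be summarized in code. The sketch below is schematic, not the paper's implementation: the posterior densities, the error indicator, and the thresholds are hypothetical placeholders, and the delayed-acceptance bookkeeping is reduced to returning both density values.

```python
import numpy as np

# Hypothetical stand-ins; names and forms are illustrative only.
def rom_logpost(x):       # reduced-order-model posterior (cheap)
    return -0.5 * np.sum(x**2)

def full_logpost(x):      # full-model posterior (expensive)
    return -0.5 * np.sum(x**2)

def error_indicator(x):   # scaled error indicator of the ROM
    return 0.05 * np.linalg.norm(x)

def evaluate(x, eps, eps_max=1.0, adapting=True):
    """Decide how to treat a proposal in the approximate algorithm:
    'full'    -- indicator > eps_max: discard the ROM, run the full model;
    'delayed' -- eps < indicator <= eps_max: ROM first stage, full-model
                 correction via delayed acceptance (basis may be refined);
    'rom'     -- indicator <= eps, or adaptation stopped: trust the ROM."""
    t = error_indicator(x)
    if adapting and t > eps_max:
        return "full", full_logpost(x)
    if adapting and t > eps:
        return "delayed", (rom_logpost(x), full_logpost(x))
    return "rom", rom_logpost(x)
```

Here `eps_max` plays the role of the upper threshold (one noise standard deviation) and `eps` the lower threshold; once adaptation stops, every proposal falls through to the reduced-order model.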
The adaptation criterion used in Line 2 of Algorithm 2 has two conditions: the dimension of the reduced basis should not exceed a specified threshold, and the finite adaptation criterion (Definition 1) should not yet be triggered. The reduced basis is updated using the full model evaluations at proposals accepted by MCMC.
When the reduced basis dimension reaches its threshold but the finite adaptation criterion is not yet satisfied, it is not appropriate to use the approximate algorithm with the prescribed error threshold, because the large reduced-order model error can potentially result in unbounded bias in the Monte Carlo estimator. In this situation, we should instead use the full target algorithm, for which convergence is guaranteed.
5.2 Monte Carlo error of the approximate algorithm
To analyze the performance of the approximate algorithm, we compare the MSE of the resulting Monte Carlo estimator with that of a standard single-stage MCMC algorithm that samples the full posterior distribution.
We wish to compute the expectation of a function $h(x)$ over the posterior distribution $\pi(x)$, i.e.,

(21) $I(h) = \mathbb{E}_{\pi}[h] = \int h(x)\,\pi(x)\,\mathrm{d}x,$

where the first and second moments of $h$ under $\pi$ are assumed to be finite. Suppose a single-stage MCMC algorithm can sample the full posterior distribution for $N$ steps in a fixed amount of CPU time, and that $N_{\mathrm{eff}}$ effective samples are produced. The resulting Monte Carlo estimator

(22) $\hat{I}(h) = \frac{1}{N}\sum_{i=1}^{N} h\big(x^{(i)}\big), \qquad x^{(i)} \sim \pi,$

has MSE

(23) $\mathrm{MSE}\big(\hat{I}(h)\big) = \frac{\mathrm{Var}_{\pi}(h)}{N_{\mathrm{eff}}},$

which is characterized by the ESS $N_{\mathrm{eff}}$ and the variance of $h$ over $\pi$.
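In practice the MSE in (23) is estimated from a chain by first estimating the ESS from the chain's autocorrelation. The sketch below uses an AR(1) process as a stand-in for MCMC output, and a simple initial-positive-sequence truncation of the integrated autocorrelation time; both are illustrative choices, not the estimators used in the paper.

```python
import numpy as np

def ess(chain):
    """Effective sample size from the integrated autocorrelation time,
    using an initial-positive-sequence truncation (a common heuristic)."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    n = len(x)
    # FFT-based autocovariance, zero-padded to avoid circular wrap-around.
    f = np.fft.rfft(x, n=2 * n)
    acov = np.fft.irfft(f * np.conj(f))[:n] / n
    rho = acov / acov[0]
    tau = 1.0
    for k in range(1, n):
        if rho[k] <= 0.0:
            break
        tau += 2.0 * rho[k]
    return n / tau

# AR(1) chain with correlation 0.9: integrated autocorrelation time ~ 19.
rng = np.random.default_rng(1)
chain = np.empty(50_000)
chain[0] = 0.0
for i in range(1, len(chain)):
    chain[i] = 0.9 * chain[i - 1] + rng.standard_normal()

n_eff = ess(chain)
mse_estimate = chain.var() / n_eff  # MSE of the posterior-mean estimator, cf. (23)
```

For this chain the ESS is a small fraction of the nominal chain length, which is exactly the penalty that the speedup factor of the approximate algorithm is designed to offset.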
By sampling the approximate posterior distribution $\tilde{\pi}(x)$, the expectation $I(h)$ can be approximated by

(24) $\tilde{I}(h) = \mathbb{E}_{\tilde{\pi}}[h] = \int h(x)\,\tilde{\pi}(x)\,\mathrm{d}x.$

Suppose we can sample the approximate posterior for $\tilde{N}$ steps in the same amount of CPU time as sampling the full posterior, and that these samples have effective sample size

(25) $\tilde{N}_{\mathrm{eff}} = S\,N_{\mathrm{eff}},$

where $S$ is the speedup factor that depends on the computational expense of the reduced-order model. The Monte Carlo estimator

(26) $\hat{\tilde{I}}(h) = \frac{1}{\tilde{N}}\sum_{i=1}^{\tilde{N}} h\big(x^{(i)}\big), \qquad x^{(i)} \sim \tilde{\pi},$

has the MSE

(27) $\mathrm{MSE}\big(\hat{\tilde{I}}(h)\big) = \frac{\mathrm{Var}_{\tilde{\pi}}(h)}{\tilde{N}_{\mathrm{eff}}} + \mathrm{Bias}\big(\tilde{I}(h)\big)^{2},$

where the bias is defined by

(28) $\mathrm{Bias}\big(\tilde{I}(h)\big) = \big|\,\tilde{I}(h) - I(h)\,\big|.$
where the bias is defined by
(28) 
We are interested in the situation for which sampling the approximation leads to a smaller MSE than sampling the full posterior distribution, i.e.,
(29) 
Rearranging the above inequality gives
(30) 
Equation (30) reveals that when our target ESS for drawing from the full posterior distribution does not exceed this bound, sampling the approximate posterior will produce a smaller MSE. This suggests that the approximate algorithm is more accurate, for a fixed computational cost, than single-stage MCMC whenever the target ESS of the single-stage algorithm satisfies (30). In such cases, the MSE of the Monte Carlo estimator is dominated by the variance rather than the bias.
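As a concrete illustration of this crossover, the sketch below evaluates both MSE expressions for hypothetical values of the variance, bias, and speedup factor (all made up for illustration), taking the rearranged bound in (30) in the form that follows directly from equating the MSE expressions in (23) and (27).

```python
def mse_full(var_pi, n_eff):
    """MSE of the full-posterior estimator, cf. (23)."""
    return var_pi / n_eff

def mse_approx(var_approx, bias, n_eff, speedup):
    """MSE of the approximate-posterior estimator, cf. (25) and (27)."""
    return var_approx / (speedup * n_eff) + bias**2

def ess_bound(var_pi, var_approx, bias, speedup):
    """Largest target ESS at which the approximate algorithm still wins,
    obtained by equating the two MSE expressions, cf. (29)-(30)."""
    return (var_pi - var_approx / speedup) / bias**2

# Hypothetical numbers: variance of h, ROM-induced bias, and speedup factor.
var_pi = var_approx = 4.0
bias, speedup = 0.05, 200.0
bound = ess_bound(var_pi, var_approx, bias, speedup)
```

Below the bound the variance term dominates and the bias of the approximation is irrelevant; above it, the squared bias becomes the limiting term and sampling the full posterior is preferable.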
The bias is characterized by the Hellinger distance between the full posterior distribution and its approximation (Lemma 6.37 of [22]), i.e., $\mathrm{Bias}\big(\tilde{I}(h)\big) \le 2\,\big(\mathbb{E}_{\pi}[h^{2}] + \mathbb{E}_{\tilde{\pi}}[h^{2}]\big)^{1/2}\, d_{\mathrm{Hell}}(\pi, \tilde{\pi}).$
We assume that there exists an error threshold and a set of samples such that the resulting reduced-order model satisfies the condition (20). Applying Theorem 7, the ratio of variance to squared bias can be bounded in terms of the error threshold, for some constant, and hence we have

(31)

For problems that have reliable error indicators or estimators, the approximate algorithm provides a viable way to select the set of samples for computing the snapshots. However, it is computationally infeasible to verify that the condition (20) holds in practice. We thus employ the finite adaptation criterion (Definition 1) to perform a heuristic check on the condition (20), as discussed in Section 4.

The bound is characterized by the user-given error threshold and by the speedup factor, a problem-specific quantity governed by the convergence rate and computational complexity of the reduced-order model. For a sufficiently accurate reduced-order model, there exists an error threshold such that the MSE of sampling the approximate posterior is less than the MSE of sampling the full posterior for the same amount of CPU time. In the regime where the reduced-order model yields a sufficiently large speedup factor, the bound is dominated by the error threshold, and hence decreasing the threshold raises the bound. However, there is a trade-off between numerical accuracy and speedup: we should avoid choosing a very small threshold, because this can potentially lead to a high-dimensional reduced basis and a correspondingly expensive reduced-order model whose speedup factor is close to one. In this case, sampling the approximate posterior can be less efficient than sampling the full posterior.
6 Numerical results and discussion
To benchmark the proposed algorithms, we use a model of isothermal steady flow in porous media, which is a classical test case for inverse problems.
6.1 Problem setup
Let $\Omega$ be the problem domain, $\partial\Omega$ the boundary of the domain, and $s \in \Omega$ denote the spatial coordinate. Let $k(s)$ be the unknown permeability field, $u(s)$ the pressure head, and $f(s)$ the source/sink term. The pressure head for a given realization of the permeability field is governed by

(32) $-\nabla\cdot\big(k(s)\,\nabla u(s)\big) = f(s), \qquad s \in \Omega,$

where the source/sink term $f(s)$ is defined by the superposition of four weighted Gaussian plumes with prescribed standard deviation, centers, and weights. A zero-flux Neumann boundary condition

(33) $k(s)\,\nabla u(s)\cdot \vec{n}(s) = 0, \qquad s \in \partial\Omega,$

is prescribed, where $\vec{n}(s)$ is the outward normal vector on the boundary. To make the forward problem well-posed, we impose the extra boundary condition

(34) $\int_{\partial\Omega} u(s)\,\mathrm{d}s = 0.$

Equation (32) with boundary conditions (33) and (34) is solved by the finite element method with linear elements. This leads to the system of equations (9).
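To make the discretization concrete, the sketch below solves a small instance of (32)-(33) with a finite-difference scheme on a uniform grid; the paper uses linear finite elements, and the grid size, permeability, and source placement here are illustrative only. The singular Neumann system is made invertible by pinning the pressure at one node, a simple stand-in for the normalization condition (34).

```python
import numpy as np

def darcy_solve(k, f, h):
    """Solve -div(k grad u) = f on a uniform n-by-n grid with zero-flux
    Neumann boundaries (a finite-difference analogue of the paper's
    linear-FEM discretization). Face permeabilities are arithmetic means."""
    n = k.shape[0]
    A = np.zeros((n * n, n * n))
    b = f.flatten() * h * h
    idx = lambda i, j: i * n + j
    for i in range(n):
        for j in range(n):
            r = idx(i, j)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    kf = 0.5 * (k[i, j] + k[ii, jj])  # face permeability
                    A[r, r] += kf
                    A[r, idx(ii, jj)] -= kf
    # The pure-Neumann system is singular; pin u at one node, cf. (34).
    A[0, :] = 0.0
    A[0, 0] = 1.0
    b[0] = 0.0
    return np.linalg.solve(A, b).reshape(n, n)

n, h = 20, 1.0 / 19
k = np.ones((n, n))            # homogeneous permeability for illustration
f = np.zeros((n, n))
f[5, 5], f[14, 14] = 1.0, -1.0  # balanced source/sink pair
u = darcy_solve(k, f, h)
```

The computed pressure head is highest near the source and lowest near the sink, as expected for a conservative flux balance.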
In Section 6.2, we use a nine-dimensional example to benchmark various aspects of our algorithms. In this example, the spatially distributed permeability field is projected onto a set of radial basis functions, and hence inference is carried out on the weights associated with the radial basis functions. In Section 6.3, we apply our algorithms to a higher-dimensional problem, where the parameters are defined on the computational grid and endowed with a Gaussian process prior. Both examples use fixed Gaussian proposal distributions, whose covariances are estimated from a short run of the approximate algorithm. In Section 6.4, we offer additional remarks on the performance of the approximate algorithm.
6.2 The 9D inverse problem
The permeability field is defined by a set of radial basis functions:

(35) $k(s) = \sum_{i=1}^{p} w_i\, b_i(s),$

where the $b_i(s)$ are Gaussian radial basis functions and the $s_i$ are their centers. These radial basis functions are shown in Figure 2. The prior distributions on the weights $w_i$ are independent and lognormal, and hence we have

(36) $\log(w_i) \sim N(\mu_i, \sigma_i^2), \qquad i = 1, \ldots, p,$

where $w = (w_1, \ldots, w_p)$ and $p = 9$. The true permeability field used to generate the test data and the corresponding pressure head are shown in Figure 3. The measurement sensors are evenly distributed over the domain with uniform grid spacing, and the observed data have a prescribed signal-to-noise ratio.
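A minimal sketch of the parameterization in (35)-(36) follows: nine Gaussian radial basis functions with independent lognormal weights. The centers, kernel width, and unit log-scale below are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative 3x3 grid of RBF centers on the unit square.
cx, cy = np.meshgrid(np.linspace(0.25, 0.75, 3), np.linspace(0.25, 0.75, 3))
centers = np.column_stack([cx.ravel(), cy.ravel()])   # (9, 2)
width = 0.15                                          # kernel width (assumed)

def permeability(weights, X):
    """Evaluate the RBF expansion (35) at points X (m-by-2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (m, 9)
    return np.exp(-0.5 * d2 / width**2) @ weights

# Independent lognormal priors on the nine weights (unit log-scale here).
weights = np.exp(rng.standard_normal(9))

# Evaluate the field on a 30x30 grid covering the domain.
gx, gy = np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 30))
grid = np.column_stack([gx.ravel(), gy.ravel()])
field = permeability(weights, grid)
```

Because the weights are lognormal and the kernels are strictly positive, every realization of the permeability field is positive, as required for the elliptic forward model (32) to be well-posed.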
Numerical experiments with several choices of the error threshold are carried out to test the computational efficiency of both algorithms and the dimensionality of the reduced basis in the reduced-order model. For each threshold, we run the full target algorithm with a fixed subchain length. To make a fair comparison in terms of the number of posterior evaluations, we run the approximate algorithm for a matching number of iterations, with the same subchain length. For both algorithms, the reduced-order model construction process is started at the beginning of the MCMC simulation, and the finite adaptation criterion of Definition 1 is used. As a reference, we run a single-stage MCMC algorithm with the same proposal distribution; the reference algorithm uses only the full posterior distribution. The initial samples generated by the full target algorithm are discarded as burn-in; similarly, initial samples (matched to the number of reduced-order model evaluations in the full target algorithm) are discarded for the simulations using the approximate algorithm and the reference algorithm.
In the remainder of this subsection, we provide several benchmarks of our data-driven model reduction approach and the two algorithms. These include:

A comparison of the full target algorithm and the approximate algorithm with the reference algorithm.

A comparison of the data-driven reduced-order model with a reduced-order model built with respect to the prior distribution.

A demonstration of the impact of observed data on the data-driven reduced-order model.
Table I. Comparison of the reference algorithm with the full target and approximate algorithms (one column per error threshold). Rows report: the error threshold; the average second-stage acceptance probability; the number of full model evaluations; the number of reduced basis vectors; the CPU time (sec); the ESS; the ESS per unit CPU time; and the speedup factor.
6.2.1 Computational efficiency.
Table I summarizes the number of full model evaluations, the dimension of the reduced basis, the CPU time, the ESS, and the speedup factor, comparing the full target algorithm and the approximate algorithm with the reference algorithm. For the reduced-order models generated by the adaptive construction process, we provide estimates of the posterior measure of the complement of the feasible set. We also report the average second-stage acceptance probability for the full target algorithm.
For the full target algorithm, the average second-stage acceptance probabilities are uniformly high across all three error thresholds in this test case. This shows that the reduced-order models produced at all three thresholds are reasonably accurate compared to the full model, and hence simulating the approximate posterior distribution in the first stage usually yields the same Metropolis acceptance decision as simulating the full posterior distribution. As we enhance the accuracy of the reduced-order model by decreasing the error threshold, the dimension of the resulting reduced basis increases, and thus the reduced-order model takes longer to evaluate. Because the full target algorithm evaluates the full model once per subchain, its computational cost is dominated by the number of full model evaluations; thus the speedup factors for all three thresholds are similar (approximately 40). Since all three reduced-order models are reasonably accurate here, the efficiency gain from a small threshold is not significant. In this situation, one could consider simulating the first-stage subchain for more iterations (by increasing the subchain length) when the error threshold is small.
The approximate algorithm produces speedup factors substantially higher than that of the full target algorithm in this test case. A larger error threshold produces a larger speedup factor, because the dimension of the associated reduced basis is smaller.
To assess the sampling accuracy of the approximate algorithm, Figure 4 provides a visual comparison of the marginal distributions of each component of the parameter, along with contours of the marginal distributions of each pair of components. The black lines represent results generated by the reference algorithm, the blue lines results of the full target algorithm, and the red lines results of the approximate algorithm, the latter two at the largest error threshold. Results from the more accurate simulations with smaller error thresholds are not shown, as they are visually close to those plotted. The plots in Figure 4 suggest that all the algorithms generate similar marginal distributions in this test case. We note that both the reference algorithm and the full target algorithm sample from the full posterior distribution, and thus the small differences in the contours produced by the various algorithms are likely caused by Monte Carlo error.
An alternative way to assess the computational efficiency and sampling accuracy of the approximate algorithm is to compare the number of effective samples generated by the approximate algorithm and the reference algorithm in a fixed amount of CPU time. As shown in Table I, the approximate algorithm generates many more effective samples than the reference algorithm can produce in the same amount of CPU time. In situations where the desired number of effective samples is at least an order of magnitude larger than the speedup factor, using the approximate algorithm is clearly advantageous to using the reference algorithm.
For both the full target algorithm and the approximate algorithm, and for each choice of the error threshold, we use samples generated by the reference algorithm to compute a Monte Carlo estimate of the posterior measure of the complement of the feasible set for the final reduced-order model produced by the adaptive construction process. As shown in Table I, this estimated measure is small for all the reduced-order models. This suggests that the Hellinger distance between the full posterior distribution and its approximation is controlled by the error threshold in all three cases, and thus the finite adaptation criterion (Definition 1) provides a useful indicator for terminating adaptation.
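The diagnostic just described can be sketched as follows: given reference-chain samples and the reduced-order model's error indicator, the posterior measure of the region where the indicator exceeds the threshold is estimated by a simple sample fraction. The Gaussian samples and the synthetic indicator below are placeholders for the reference chain and the paper's actual indicator.

```python
import numpy as np

def measure_outside_feasible(samples, error_indicator, eps):
    """Monte Carlo estimate of the posterior measure of the set on which
    the reduced-order model's error indicator exceeds eps."""
    t = np.array([error_indicator(x) for x in samples])
    return float(np.mean(t > eps))

# Synthetic stand-ins: 2D Gaussian "posterior" samples and an indicator
# that grows with distance from the posterior mode.
rng = np.random.default_rng(3)
samples = rng.standard_normal((10_000, 2))
indicator = lambda x: 0.02 * np.linalg.norm(x)
mu_hat = measure_outside_feasible(samples, indicator, eps=0.03)
```

A small estimated measure indicates that the chain rarely leaves the region where the reduced-order model is trusted, which is the condition under which the bias of the approximate algorithm remains controlled.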
For a fixed error threshold, we note that the dimensions of the reduced bases produced by the full target algorithm and the approximate algorithm differ. This is because the snapshots are evaluated at selected samples that are randomly drawn from the posterior, and the spread of the sample set slightly affects the accuracy of the reduced-order model. Nonetheless, both reduced-order models achieve the desired level of accuracy, since the estimated posterior measures of the complement of the feasible set remain small.