Simulation-based optimal Bayesian experimental design for nonlinear systems
The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters.
Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics.
Keywords: Uncertainty quantification, Bayesian inference, Optimal experimental design, Nonlinear experimental design, Stochastic approximation, Shannon information, Chemical kinetics

1 Introduction
Experimental data play an essential role in developing and refining models of physical systems. For example, data may be used to update knowledge of parameters in a model or to discriminate among competing models. Whether obtained through field observations or laboratory experiments, however, data may be difficult and expensive to acquire. Even controlled experiments can be time-consuming or delicate to perform. In this context, maximizing the value of experimental data—designing experiments to be “optimal” by some appropriate measure—can dramatically accelerate the modeling process. Experimental design thus encompasses questions of where and when to measure, which variables to interrogate, and what experimental conditions to employ.
These questions have received much attention in the statistics community and in many science and engineering applications. When observables depend linearly on parameters of interest, common solution criteria for the optimal experimental design problem are written as functionals of the information matrix Atkinson and Donev (2007). These criteria include the well-known ‘alphabetic optimality’ conditions, e.g., A-optimality to minimize the average variance of parameter estimates, or G-optimality to minimize the maximum variance of model predictions. Bayesian analogues of alphabetic optimality, reflecting prior and posterior uncertainty in the model parameters, can be derived from a decision-theoretic point of view Chaloner and Verdinelli (1995). For instance, Bayesian D-optimality can be obtained from a utility function containing Shannon information, while Bayesian A-optimality may be derived from a squared error loss. In the case of linear-Gaussian models, the criteria of Bayesian alphabetic optimality reduce to mathematical forms that parallel their non-Bayesian counterparts Chaloner and Verdinelli (1995).
For nonlinear models, however, exact evaluation of optimal design criteria is much more challenging. More tractable design criteria can be obtained by imposing additional assumptions, effectively changing the form of the objective; these assumptions include linearizations of the forward model, Gaussian approximations of the posterior distribution, and additional assumptions on the marginal distribution of the data Chaloner and Verdinelli (1995). In the Bayesian setting, such assumptions lead to design criteria that may be understood as approximations of an expected utility. Most of these involve prior expectations of the Fisher information matrix Chu and Hahn (2008). Cruder “locally optimal” approximations require selecting a “best guess” value of the unknown model parameters and maximizing some functional of the Fisher information evaluated at this point Ford et al. (1989). None of these approximations, though, is suitable when the parameter distribution is broad or when it departs significantly from normality Clyde (1993). A more general design framework, free of these limiting assumptions, is preferred Müller (1998); Guest and Curtis (2009).
More rigorous information theoretic criteria have been proposed throughout the literature. The seminal paper of Lindley Lindley (1956) suggests using expected gain in Shannon information, from prior to posterior, as a measure of the information provided by an experiment; the same objective can be justified from a decision theoretic perspective Lindley (1972); Loredo (2010). Sebastiani and Wynn Sebastiani and Wynn (2000) propose selecting experiments for which the marginal distribution of the data has maximum Shannon entropy; this may be understood as a special case of Lindley’s criterion. Maximum entropy sampling (MES) has seen use in applications ranging from astronomy Loredo and Chernoff (2003) to geophysics van den Berg et al. (2003), and is well suited to nonlinear models. Reverting to Lindley’s criterion, Ryan Ryan (2003) introduces a Monte Carlo estimator of expected information gain to design experiments for a model of material fatigue. Terejanu et al. Terejanu et al. (2012) use a kernel estimator of mutual information (equivalent to expected information gain) to identify parameters in a chemical kinetic model. The latter two studies evaluate their criteria on every element of a finite set of possible designs (on the order of ten designs in these examples), and thus sidestep the challenge of optimizing the design criterion over general design spaces. And both report significant limitations due to computational expense; Ryan (2003) concludes that “full blown search” over the design space is infeasible, and that two order-of-magnitude gains in computational efficiency would be required even to discriminate among the enumerated designs.
The application of optimization methods to experimental design has thus favored simpler design objectives. The chemical engineering community, for example, has tended to use linearized and locally optimal Mosbach et al. (2012) design criteria or other objectives Russi et al. (2008) for which deterministic optimization strategies are suitable. But in the broader context of decision theoretic design formulations, sampling is required. Müller and Parmigiani (1995) propose a curve-fitting scheme wherein the expected utility is fit with a regression model, using Monte Carlo samples over the design space. This scheme relies on problem-specific intuition about the character of the expected utility surface. Clyde et al. Clyde et al. (1995) explore the joint design, parameter, and data space with a Markov chain Monte Carlo (MCMC) sampler; this strategy combines integration with optimization, such that the marginal distribution of sampled designs is proportional to the expected utility. This idea is extended with simulated annealing in Müller et al. (2004) to achieve more efficient maximization of the expected utility. Clyde et al. (1995); Müller et al. (2004) use expected utilities as design criteria but do not pursue information theoretic design metrics. Indeed, direct optimization of information theoretic metrics has seen much less development. Building on the enumeration approaches of van den Berg et al. (2003); Ryan (2003); Terejanu et al. (2012) and the one-dimensional design space considered in Loredo and Chernoff (2003), Guest and Curtis (2009) iteratively find MES designs in multi-dimensional spaces by greedily choosing one component of the design vector at a time. Hamada et al. Hamada et al. (2001) also find “near-optimal” designs for linear and nonlinear regression problems by maximizing expected information gain via genetic algorithms.
But the coupling of rigorous information theoretic design criteria, complex physics-based models, and efficient optimization strategies remains an open challenge.
This paper addresses exactly these issues. Our interest is in physically realistic and hence computationally intensive models. We advance the state of the art by introducing flexible approximation and optimization strategies that yield optimal experimental designs for nonlinear systems, using a full information theoretic formalism, efficiently and with few limiting assumptions.
In particular, we employ a Bayesian statistical approach and focus on the case of parameter inference. Expected Shannon information gain is taken as our design criterion; this objective naturally incorporates prior information about the model parameters and accommodates very general probabilistic relationships among the experimental observables, model parameters, and design conditions. The need for such generality is illustrated in the numerical examples (Sections 5 and 6). To make evaluations of expected information gain computationally tractable, we introduce a generalized polynomial chaos surrogate Ghanem and Spanos (2012); Xiu and Karniadakis (2002) that captures smooth dependence of the observables jointly on parameters and design conditions. The surrogate carries no a priori restrictions on the degree of nonlinearity and uses dimension-adaptive sparse quadrature Gerstner and Griebel (2003) to identify and exploit anisotropic parameter and design dependencies for efficiency in high dimensions. We link the surrogate with stochastic approximation algorithms and use the resulting scheme to maximize the design objective. This formulation allows us to plan single experiments without discretizing the design space, and to rigorously identify optimal “batch” designs of multiple experiments over the product space of design conditions.
Figure 1 shows the key components of our approach, embedded in a flowchart describing a design–experimentation–model improvement cycle. The upper boxes focus on experimental design: the design criterion is formulated in Sections 2.1 and 2.2; estimation of the objective function is described in Section 2.3; and stochastic optimization approaches are described in Section 2.4. The construction of polynomial chaos surrogates for computationally intensive models is presented in Section 3. Section 4 briefly reviews computational approaches for Bayesian parameter inference, which come into play after the selected experiments have been performed and data have been collected. All of these tools are demonstrated on two example problems: a simple nonlinear model in Section 5 and a shock tube autoignition experiment with detailed chemical kinetics in Section 6.
2 Experimental design formulation
2.1 Experimental goals
Optimal experimental design relies on the construction of a design criterion, or objective function, that reflects how valuable or relevant an experiment is expected to be. A fundamental consideration in specifying this objective is the application of interest—i.e., what does the user intend to do with the results of the experiments? For example, if one would like to estimate a particular physical constant, then a good objective function might reflect the uncertainty in the inferred values, or the error in a point estimate. On the other hand, if one’s ultimate goal is to make accurate model predictions, then a more appropriate objective function should directly consider the distribution of the model outputs conditioned on data. If one would like to find the “best” model among a set of candidate models, then the objective function should reflect how well the data are expected to support each model, favoring experiments that maximize the ability of the data to discriminate. These considerations motivate the intuitive notion that an objective function should be based on specific experimental goals.
In this paper, we shall assume that the experimental goal is to infer a finite number of model parameters of interest. Parameter inference is of course an integral part of calibrating models from experimental data Kennedy and O’Hagan (2001). The expected utility framework developed below can be generalized to other experimental goals, and we will mention this where appropriate. Note that one could also augment the objective function by adding a penalty that reflects experimental effort or cost. More broadly, one can always add resource constraints to the experimental design optimization problem. In the interest of simplicity, and since costs and constraints are inevitably problem-specific, we do not pursue such additions here.
2.2 Design criterion and expected utility
We will formulate our experimental design criterion in a Bayesian setting. Bayesian statistics offers a foundation for inference from noisy, indirect, and incomplete data; a mechanism for incorporating physical constraints and heterogeneous sources of information; and a complete assessment of uncertainty in parameters, models, and predictions. The Bayesian approach also provides natural links to decision theory Berger (1985), a framework we will exploit below. (For discussions contrasting Bayesian and frequentist approaches to statistics, see Gelman (2008) and Stark and Tenorio (2010).)
The Bayesian paradigm Sivia and Skilling (2006) treats unknown parameters as random variables. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, where $\Omega$ is a sample space, $\mathcal{F}$ is a $\sigma$-field, and $\mathbb{P}$ is a probability measure on $(\Omega, \mathcal{F})$. Let the vector of real-valued random variables $\boldsymbol{\theta} \in \mathbb{R}^{n_\theta}$ denote the uncertain parameters of interest, i.e., the parameters to be conditioned on experimental data. $\boldsymbol{\theta}$ is associated with a measure $\mu$ on $\mathbb{R}^{n_\theta}$, such that $\mu(A) = \mathbb{P}\left(\boldsymbol{\theta}^{-1}(A)\right)$ for $A \in \mathcal{B}(\mathbb{R}^{n_\theta})$. We then define $p(\boldsymbol{\theta})$ to be the density of $\mu$ with respect to Lebesgue measure. For the present purposes, we will assume that such a density always exists. Similarly, we treat the data $\mathbf{y}$ as an $\mathbb{R}^{n_y}$-valued random variable endowed with an appropriate density. $\mathbf{d} \in \mathbb{R}^{n_d}$ denotes the design variables or experimental conditions. Hence $n_\theta$ is the number of uncertain parameters, $n_y$ is the number of observations, and $n_d$ is the number of design variables. If one performs an experiment at conditions $\mathbf{d}$ and observes a realization of the data $\mathbf{y}$, then the change in one’s state of knowledge about the model parameters is given by Bayes’ rule:
$$ p(\boldsymbol{\theta} \mid \mathbf{y}, \mathbf{d}) = \frac{p(\mathbf{y} \mid \boldsymbol{\theta}, \mathbf{d}) \, p(\boldsymbol{\theta} \mid \mathbf{d})}{p(\mathbf{y} \mid \mathbf{d})}. $$
Here $p(\boldsymbol{\theta} \mid \mathbf{d})$ is the prior density, $p(\mathbf{y} \mid \boldsymbol{\theta}, \mathbf{d})$ is the likelihood function, $p(\boldsymbol{\theta} \mid \mathbf{y}, \mathbf{d})$ is the posterior density, and $p(\mathbf{y} \mid \mathbf{d})$ is the evidence. It is reasonable to assume that prior knowledge on $\boldsymbol{\theta}$ does not vary with the experimental design, leading to the simplification $p(\boldsymbol{\theta} \mid \mathbf{d}) = p(\boldsymbol{\theta})$.
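As a minimal concrete instance of this update, consider a conjugate Gaussian case (a standard illustration, not one of the paper's examples): a scalar parameter with a Gaussian prior and a single Gaussian observation, for which the posterior in Bayes' rule has a closed form.

```python
import numpy as np

# Conjugate Gaussian illustration of Bayes' rule: prior theta ~ N(mu0, s0^2),
# likelihood y | theta ~ N(theta, se^2). The posterior is Gaussian with
#   s_post^2 = 1 / (1/s0^2 + 1/se^2)
#   mu_post  = s_post^2 * (mu0/s0^2 + y/se^2)
def gaussian_posterior(mu0, s0, y, se):
    prec = 1.0 / s0**2 + 1.0 / se**2   # posterior precision
    var = 1.0 / prec
    mean = var * (mu0 / s0**2 + y / se**2)
    return mean, np.sqrt(var)

# observing y = 2 pulls the prior mean 0 halfway toward the data
mean, std = gaussian_posterior(mu0=0.0, s0=1.0, y=2.0, se=1.0)
# mean = 1.0, std = sqrt(0.5)
```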
Taking a decision theoretic approach, Lindley Lindley (1972) suggests that an objective for experimental design should have the following general form:
$$ U(\mathbf{d}) = \int_{\mathcal{Y}} \int_{\boldsymbol{\Theta}} u(\mathbf{d}, \mathbf{y}, \boldsymbol{\theta}) \, p(\boldsymbol{\theta}, \mathbf{y} \mid \mathbf{d}) \, d\boldsymbol{\theta} \, d\mathbf{y}, $$
where $u(\mathbf{d}, \mathbf{y}, \boldsymbol{\theta})$ is a utility function, $U(\mathbf{d})$ is the expected utility, $\boldsymbol{\Theta}$ is the support of $p(\boldsymbol{\theta})$, and $\mathcal{Y}$ is the support of $p(\mathbf{y} \mid \mathbf{d})$. The utility function should be chosen to reflect the usefulness of an experiment at conditions $\mathbf{d}$, given a particular value of the parameters $\boldsymbol{\theta}$ and a particular outcome $\mathbf{y}$. Since we do not know the precise value of $\boldsymbol{\theta}$ and we cannot know the outcome of the experiment before it is performed, we take the expectation of $u$ over the joint distribution of $\boldsymbol{\theta}$ and $\mathbf{y}$.
Our choice of utility function is rooted in information theory. In particular, following Lindley (1956), we put $u$ equal to the relative entropy, or Kullback-Leibler (KL) divergence, from the posterior to the prior. For generic distributions $P_A$ and $P_B$, the KL divergence from $P_A$ to $P_B$ is
$$ D_{\mathrm{KL}}\left( P_A \,\|\, P_B \right) = \int_{\boldsymbol{\Theta}} p_A(\boldsymbol{\theta}) \ln \frac{p_A(\boldsymbol{\theta})}{p_B(\boldsymbol{\theta})} \, d\boldsymbol{\theta}, $$
where $p_A$ and $p_B$ are probability densities and $\boldsymbol{\Theta}$ is the support of $p_A$; by convention, points with $p_A(\boldsymbol{\theta}) = 0$ contribute zero to the integral. This quantity is non-negative, non-symmetric, and reflects the difference in information carried by the two distributions (in units of nats) Cover and Thomas (2006); MacKay (2003). Specializing to the inference problem at hand, the KL divergence from the posterior to the prior is
$$ D_{\mathrm{KL}}\left( p(\cdot \mid \mathbf{y}, \mathbf{d}) \,\|\, p(\cdot) \right) = \int_{\boldsymbol{\Theta}} p(\boldsymbol{\theta} \mid \mathbf{y}, \mathbf{d}) \ln \frac{p(\boldsymbol{\theta} \mid \mathbf{y}, \mathbf{d})}{p(\boldsymbol{\theta})} \, d\boldsymbol{\theta}. $$
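The defining properties above are easy to check numerically. The sketch below (an illustration, not from the paper) compares the closed-form KL divergence between two univariate Gaussians against direct numerical integration of the definition, and exhibits the asymmetry.

```python
import numpy as np

# KL divergence between two Gaussians, evaluated two ways: the closed form
#   D_KL( N(m1,s1^2) || N(m2,s2^2) ) = ln(s2/s1) + (s1^2 + (m1-m2)^2)/(2 s2^2) - 1/2
# and a direct Riemann-sum quadrature of  integral p1 ln(p1/p2) dtheta.
def kl_gauss(m1, s1, m2, s2):
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def pdf(t, m, s):
    return np.exp(-0.5 * ((t - m) / s)**2) / (s * np.sqrt(2 * np.pi))

theta = np.linspace(-20.0, 20.0, 200001)
p1, p2 = pdf(theta, 0.0, 1.0), pdf(theta, 1.0, 2.0)
kl_num = np.sum(p1 * np.log(p1 / p2)) * (theta[1] - theta[0])

# kl_num matches kl_gauss(0,1,1,2); swapping the arguments gives a
# different value, illustrating the non-symmetry of the divergence
```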
Note that this choice of utility function involves an “internal” integration over the parameter space ($\tilde{\boldsymbol{\theta}}$ is a dummy variable representing the parameters), therefore it is not a function of the parameters $\boldsymbol{\theta}$. Thus we have
$$ U(\mathbf{d}) = \int_{\mathcal{Y}} \int_{\boldsymbol{\Theta}} p(\tilde{\boldsymbol{\theta}} \mid \mathbf{y}, \mathbf{d}) \ln \frac{p(\tilde{\boldsymbol{\theta}} \mid \mathbf{y}, \mathbf{d})}{p(\tilde{\boldsymbol{\theta}})} \, d\tilde{\boldsymbol{\theta}} \; p(\mathbf{y} \mid \mathbf{d}) \, d\mathbf{y}. \tag{5} $$
To simplify notation, $\tilde{\boldsymbol{\theta}}$ in Equation (5) is replaced by $\boldsymbol{\theta}$, yielding
$$ U(\mathbf{d}) = \int_{\mathcal{Y}} \int_{\boldsymbol{\Theta}} \ln \frac{p(\boldsymbol{\theta} \mid \mathbf{y}, \mathbf{d})}{p(\boldsymbol{\theta})} \, p(\boldsymbol{\theta}, \mathbf{y} \mid \mathbf{d}) \, d\boldsymbol{\theta} \, d\mathbf{y}. \tag{6} $$
The expected utility $U(\mathbf{d})$ is therefore the expected information gain in $\boldsymbol{\theta}$. The intuition behind this expression is that a large KL divergence from posterior to prior implies that the data decrease entropy in $\boldsymbol{\theta}$ by a large amount, and hence those data are more informative for parameter inference. As we have only a distribution for the data that may be observed, we are interested in maximizing information gain on average. We also note that $U(\mathbf{d})$ is equivalent to the mutual information between the parameters $\boldsymbol{\theta}$ and the data $\mathbf{y}$: using the definition of conditional probability, we have
$$ U(\mathbf{d}) = \int_{\mathcal{Y}} \int_{\boldsymbol{\Theta}} p(\boldsymbol{\theta}, \mathbf{y} \mid \mathbf{d}) \ln \frac{p(\boldsymbol{\theta}, \mathbf{y} \mid \mathbf{d})}{p(\boldsymbol{\theta}) \, p(\mathbf{y} \mid \mathbf{d})} \, d\boldsymbol{\theta} \, d\mathbf{y}, $$
which is the mutual information between parameters and data, given the design. When applied to a linear-Gaussian design problem, $U(\mathbf{d})$ reduces to the Bayesian D-optimality condition. (D-optimality maximizes the determinant of the information matrix in a linear design problem Atkinson and Donev (2007); Bayesian D-optimality, in a linear-Gaussian problem, maximizes the determinant of the sum of the information matrix and the inverse prior covariance Chaloner and Verdinelli (1995).)
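For intuition, the linear-Gaussian reduction can be verified in a few lines. Under a linear model $\mathbf{y} = G\boldsymbol{\theta} + \boldsymbol{\epsilon}$ with Gaussian prior and noise, the posterior covariance does not depend on the realized data, and the expected information gain has the closed form $\frac{1}{2}\ln\det(I + \Gamma_{\mathrm{prior}} G^{T} \Gamma_{\epsilon}^{-1} G)$, a log-determinant (Bayesian D-optimal) criterion. The matrices below are illustrative.

```python
import numpy as np

# Expected information gain for y = G theta + eps, eps ~ N(0, Ge),
# prior theta ~ N(mu, Gp). Because the Gaussian posterior covariance is
# data-independent, U = 0.5 * ln det(Gp) - 0.5 * ln det(Gpost), which
# simplifies to the log-determinant below.
def eig_linear_gaussian(G, Gp, Ge):
    n = Gp.shape[0]
    M = np.eye(n) + Gp @ G.T @ np.linalg.inv(Ge) @ G
    return 0.5 * np.log(np.linalg.det(M))

G = np.array([[1.0, 0.0], [0.0, 2.0]])   # illustrative design/observation matrix
Gp = np.eye(2)                           # prior covariance
Ge = np.eye(2)                           # noise covariance
U = eig_linear_gaussian(G, Gp, Ge)       # = 0.5 * ln(2 * 5)
```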
Finally, the expected utility must be maximized over the design space $\mathcal{D}$ to find the optimal experimental design:
$$ \mathbf{d}^* = \arg\max_{\mathbf{d} \in \mathcal{D}} U(\mathbf{d}). \tag{7} $$
What if resources allow multiple (say $n$) experiments to be carried out, but time and setup constraints require them to be planned (and perhaps performed) simultaneously? If $\mathbf{d}^*$ is the optimal design for a single experiment, the best choice is not necessarily to repeat the experiment at $\mathbf{d}^*$ $n$ times; this does not generally yield the optimal expected information gain from all the experiments. (Appendix A shows that the expected utility of two experiments is not, in general, equal to the sum of the expected utilities of the individual experiments. Section 5 provides a numerical example of this situation.) Instead, all of the experiments should be incorporated into the likelihood function, where now $\mathbf{y} \in \mathbb{R}^{n \cdot n_y}$, $\mathbf{d} \in \mathbb{R}^{n \cdot n_d}$, and the data from the different experiments are conditionally independent given $\boldsymbol{\theta}$ and the augmented $\mathbf{d}$. The new optimal design $\mathbf{d}^*$ then carries the sets of conditions for all $n$ experiments, maximizing the expected total information gain when these experiments are simultaneously performed. It is interesting to note that a simpler objective function often used in experimental design—the predictive variance of the data $\mathbf{y}$—would always suggest repeating all $n$ experiments at the single-experiment design optimum.
If the experiments need not be carried out simultaneously, then sequential experimental design may be performed. In general, a sequential design uses the results of one set of experiments (i.e., the data $\mathbf{y}$ that are actually observed) to help plan the next set of experiments. In one possible approach—in fact, a greedy approach—an optimal experiment is initially computed and carried out, and its data are used to perform inference. The resulting posterior is then used as the prior in the design of the next experiment, and the process is repeated. This approach is not necessarily optimal over a horizon of many experiments, however. A more rigorous treatment would involve formulating the sequential design problem as a dynamic programming problem, but this is beyond the scope of the present paper. Intuitively, a sequential experimental design should be at least as good as a fixed design, due to the extra information gained in the intermediate stages.
2.3 Numerical evaluation of the expected utility
Typically, the expected utility in Equation (6) has no closed form and must be approximated numerically. One approach is to rewrite $U(\mathbf{d})$ as
$$ U(\mathbf{d}) = \int_{\mathcal{Y}} \int_{\boldsymbol{\Theta}} \ln\left[ p(\mathbf{y} \mid \boldsymbol{\theta}, \mathbf{d}) \right] p(\mathbf{y}, \boldsymbol{\theta} \mid \mathbf{d}) \, d\boldsymbol{\theta} \, d\mathbf{y} \; - \; \int_{\mathcal{Y}} \ln\left[ p(\mathbf{y} \mid \mathbf{d}) \right] p(\mathbf{y} \mid \mathbf{d}) \, d\mathbf{y}, \tag{8} $$
where the second equality is due to the application of Bayes’ theorem to the quantities both inside and outside the logarithm. In the special case where the Shannon entropy of $p(\mathbf{y} \mid \boldsymbol{\theta}, \mathbf{d})$ is independent of the design variables $\mathbf{d}$, the first term in Equation (8) becomes constant for all designs Shewry and Wynn (1987) and can be dropped from the objective function. Maximizing the remaining term—which is the entropy of $p(\mathbf{y} \mid \mathbf{d})$—is then equivalent to the maximum entropy sampling approach of Sebastiani and Wynn Sebastiani and Wynn (2000). Here we retain the more general formulation of Equation (8) in order to accommodate, for example, likelihood functions containing a measurement error whose magnitude depends on $\boldsymbol{\theta}$ or $\mathbf{d}$.
Monte Carlo sampling can then be used to estimate the integral in Equation (8):
$$ U(\mathbf{d}) \approx \frac{1}{N} \sum_{i=1}^{N} \left\{ \ln\left[ p(\mathbf{y}^{(i)} \mid \boldsymbol{\theta}^{(i)}, \mathbf{d}) \right] - \ln\left[ p(\mathbf{y}^{(i)} \mid \mathbf{d}) \right] \right\}, \tag{9} $$
where $\boldsymbol{\theta}^{(i)}$ are drawn from the prior $p(\boldsymbol{\theta})$; $\mathbf{y}^{(i)}$ are drawn from the conditional distribution $p(\mathbf{y} \mid \boldsymbol{\theta}^{(i)}, \mathbf{d})$ (i.e., the likelihood); and $N$ is the number of samples in this “outer” Monte Carlo estimate. The evidence evaluated at $\mathbf{y}^{(i)}$, $p(\mathbf{y}^{(i)} \mid \mathbf{d})$, typically does not have an analytical form, but it can be approximated using yet another importance sampling estimate:
$$ p(\mathbf{y}^{(i)} \mid \mathbf{d}) \approx \frac{1}{M} \sum_{j=1}^{M} p(\mathbf{y}^{(i)} \mid \boldsymbol{\theta}^{(i,j)}, \mathbf{d}), \tag{10} $$
where $\boldsymbol{\theta}^{(i,j)}$ are drawn from the prior $p(\boldsymbol{\theta})$ and $M$ is the number of samples in this “inner” Monte Carlo sum. The combination of Equations (9) and (10) yields a biased estimator of $U(\mathbf{d})$ Ryan (2003). The variance of this estimator is proportional to $a/N + b/(NM)$, where $a$ and $b$ are terms that depend only on the distributions at hand. The bias is proportional to $1/M$ Ryan (2003). Hence $M$ controls the bias while $N$ controls the variance.
Evaluating and sampling from the likelihood for each new sample of $\boldsymbol{\theta}$ constitutes the most significant computational cost above (see Section 3). In order to mitigate the cost of the nested Monte Carlo estimator, we draw a fresh batch of prior samples $\{\boldsymbol{\theta}^{(j)}\}_{j=1}^{M}$ for every $\mathbf{d}$, and use this set for both the outer Monte Carlo sum (i.e., $\boldsymbol{\theta}^{(i)} = \boldsymbol{\theta}^{(j=i)}$, with $N = M$) and all the inner Monte Carlo estimates at that $\mathbf{d}$ (i.e., $\boldsymbol{\theta}^{(i,j)} = \boldsymbol{\theta}^{(j)}$, and consequently the same forward model evaluations). This treatment reduces the computational cost for a fixed $N = M$ from $N(M+1)$ forward model evaluations to $N$. In practice, sample reuse also avoids producing near-zero evidence estimates (and hence infinite values for the expected utility) at small sample sizes. The reuse of samples contributes to the bias of the estimator, but this effect is very small Huan (2010). See B for a numerical study of the bias.
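A compact sketch of this estimator with sample reuse is given below, using a hypothetical scalar model $y = \theta^3 d + \epsilon$ (chosen for illustration; it is not one of the paper's examples). Note how reuse guarantees a strictly positive evidence estimate, since the $j = i$ term of the inner sum is never negligible.

```python
import numpy as np

# Nested Monte Carlo estimate of expected information gain, reusing one
# batch of prior samples for both the outer sum and the inner evidence
# estimates (so N = M and only N forward solves are needed per design).
# Toy model (illustrative assumption): y = theta^3 * d + eps, theta ~ N(0,1),
# eps ~ N(0, sigma^2).
rng = np.random.default_rng(0)
sigma = 0.1

def forward(theta, d):
    return theta**3 * d

def eig_estimate(d, N=2000):
    theta = rng.standard_normal(N)           # prior samples, reused inner/outer
    g = forward(theta, d)                    # one forward solve per sample
    y = g + sigma * rng.standard_normal(N)   # y^(i) ~ p(y | theta^(i), d)
    # log-likelihood ln p(y^(i) | theta^(i), d) for Gaussian noise
    loglik = -0.5 * ((y - g) / sigma)**2 - np.log(sigma * np.sqrt(2 * np.pi))
    # evidence p(y^(i) | d) ~= (1/N) sum_j p(y^(i) | theta^(j), d)
    L = np.exp(-0.5 * ((y[:, None] - g[None, :]) / sigma)**2) \
        / (sigma * np.sqrt(2 * np.pi))
    logev = np.log(L.mean(axis=1))           # j = i term keeps this positive
    return np.mean(loglik - logev)

# a larger |d| amplifies the signal relative to the noise, so the design
# d = 1 should be estimated as more informative than d = 0.1
u_lo, u_hi = eig_estimate(0.1), eig_estimate(1.0)
```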
2.4 Stochastic optimization
Now that the expected utility can be estimated at any value of the design variables, we turn to the optimization problem (7). Maximizing $U(\mathbf{d})$ via a grid search over $\mathcal{D}$ is clearly impractical, since the number of grid points grows exponentially with dimension. Since only a Monte Carlo estimate of the objective function is available, another naïve approach would be to use a large sample size at each $\mathbf{d}$ and then apply a deterministic optimization algorithm, but this is still too expensive. (And even with large sample sizes, the estimate of $U(\mathbf{d})$ is effectively non-smooth.) Instead, we would like to use only a few Monte Carlo samples to evaluate the objective at any given $\mathbf{d}$, and thus we need algorithms suited to noisy objective functions. Two such algorithms are simultaneous perturbation stochastic approximation (SPSA) and the Nelder-Mead nonlinear simplex (NMNS).
SPSA, proposed by Spall Spall (1998a, b), is a stochastic approximation method that has received considerable attention Spall (2008). The method is similar to a steepest-descent method using finite difference estimates of the gradient, except that SPSA only uses two random perturbations to estimate the gradient regardless of the problem’s dimension:
$$ \mathbf{d}_{k+1} = \mathbf{d}_k + a_k \hat{\mathbf{g}}_k(\mathbf{d}_k), \qquad \hat{g}_{k,i}(\mathbf{d}_k) = \frac{\hat{U}(\mathbf{d}_k + c_k \boldsymbol{\Delta}_k) - \hat{U}(\mathbf{d}_k - c_k \boldsymbol{\Delta}_k)}{2 c_k \Delta_{k,i}}, $$
where $k$ is the iteration number,
$$ a_k = \frac{a}{(A + k + 1)^{\alpha}}, \qquad c_k = \frac{c}{(k + 1)^{\gamma}}, $$
and $a$, $c$, $A$, $\alpha$, and $\gamma$ are algorithm parameters with recommended values available, e.g., in Spall (1998b). $\boldsymbol{\Delta}_k$ is a random vector whose entries are i.i.d. draws from a symmetric distribution with finite inverse moments Spall (1998a); here, we choose $\Delta_{k,i} = \pm 1$ with equal probability (a symmetric Bernoulli distribution). Common random numbers are also used to evaluate each pair of estimates $\hat{U}(\mathbf{d}_k + c_k \boldsymbol{\Delta}_k)$ and $\hat{U}(\mathbf{d}_k - c_k \boldsymbol{\Delta}_k)$ at a given iteration, in order to reduce variance in estimating the gradient Kleinman et al. (1999).
An intuitive justification for SPSA is that error in the gradient “averages out” over a large number of iterations Spall (1998a). Convergence proofs with varying conditions and assumptions can be found in Spall (1988, 1992); He et al. (2003). Randomness introduced through the noisy objective and the finite-difference-like perturbations allows for a global convergence property Maryak and Chin (2004). Constraints in SPSA are handled by projection: if the current position does not remain feasible under all possible random perturbations, then it is projected to the nearest point that does satisfy this condition.
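The SPSA update can be sketched in a few lines. The gain-sequence forms and Bernoulli $\pm 1$ perturbations follow the recommendations cited above; the specific constants and the noisy quadratic test objective are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal SPSA sketch for maximizing a noisy objective. Gains a_k and c_k
# use the commonly recommended exponents alpha = 0.602, gamma = 0.101.
rng = np.random.default_rng(1)

def noisy_objective(d):
    # hypothetical objective: smooth maximum at d = (1, 2), plus noise
    return -np.sum((d - np.array([1.0, 2.0]))**2) + 0.01 * rng.standard_normal()

def spsa(d0, iters=2000, a=0.1, c=0.1, A=100, alpha=0.602, gamma=0.101):
    d = np.array(d0, dtype=float)
    for k in range(iters):
        ak = a / (A + k + 1)**alpha
        ck = c / (k + 1)**gamma
        delta = rng.choice([-1.0, 1.0], size=d.size)   # Bernoulli +/-1
        # two evaluations estimate the full gradient, regardless of dimension
        ghat = (noisy_objective(d + ck * delta)
                - noisy_objective(d - ck * delta)) / (2 * ck) / delta
        d = d + ak * ghat   # ascent step (maximization)
    return d

d_opt = spsa([0.0, 0.0])   # approaches the maximizer (1, 2)
```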
The NMNS algorithm Nelder and Mead (1965) has been well studied and is widely used for deterministic optimization. The details of the algorithm are thus omitted from this discussion but can be found, e.g., in Nelder and Mead (1965); Barton and Ivey Jr. (1996); Spall (2003). This algorithm has a natural advantage in dealing with noisy objective functions because it requires only a relative ordering of function values, rather than the magnitudes of differences (as in estimating gradients). Minor modifications to the algorithm parameters can improve optimization performance for noisy functions Barton and Ivey Jr. (1996). Constraints in NMNS are handled simply by projecting from the infeasible point to the nearest feasible point.
There are advantages and disadvantages to both algorithms. SPSA is a gradient-based approach, taking advantage of any regularity in the underlying objective function while requiring only two function evaluations per step to estimate the gradient, instead of the $2 n_d$ evaluations of a full finite-difference scheme. However, a very high noise level can lead to slow convergence and cause the algorithm to stagnate in local optima. NMNS is relatively less sensitive to the noise level, but the simplex can be unfavorably distorted due to the projection treatment of constraints, leading to slow or false convergence.
Using either of the algorithms described in this section, we can approximately solve the stochastic optimization problem posed in Equation (7) and obtain the best experimental design. In a sense, this completes the experimental design phase of Figure 1. But a remaining difficulty is one of computational cost. Even with an effective Monte Carlo estimator of the expected utility, and with efficient algorithms for stochastic optimization, the complex physical model embedded in Equation (9) still must be evaluated repeatedly, over many values of the model parameters and design variables. Methods for making this task more tractable are discussed in the next section.
3 Polynomial chaos surrogate
Expensive physical models can render the evaluation and maximization of expected information gain impractical. Models enter the formulation through the likelihood function $p(\mathbf{y} \mid \boldsymbol{\theta}, \mathbf{d})$. For example, a simple likelihood function might allow for an additive discrepancy between experimental observations and model predictions:
$$ \mathbf{y} = \mathbf{G}(\boldsymbol{\theta}, \mathbf{d}) + \boldsymbol{\epsilon}. $$
Here, $\boldsymbol{\epsilon}$ is a random variable with density $p_{\boldsymbol{\epsilon}}$; we leave the form of this density non-specific for now. The “forward model” of the experiment is $\mathbf{G}(\boldsymbol{\theta}, \mathbf{d})$; it maps both the design variables and the parameters into the data space. Drawing a realization of $\mathbf{y}$ from $p(\mathbf{y} \mid \boldsymbol{\theta}, \mathbf{d})$ thus requires evaluating $\mathbf{G}$ at a particular $(\boldsymbol{\theta}, \mathbf{d})$. Evaluating the density $p(\mathbf{y} \mid \boldsymbol{\theta}, \mathbf{d})$ again requires evaluating $\mathbf{G}$.
To make these calculations tractable, one would like to replace $\mathbf{G}$ with a cheaper “surrogate” model that is accurate over the entire prior support $\boldsymbol{\Theta}$ and the entire design space $\mathcal{D}$. Many options exist, ranging from projection-based model reduction Bui-Thanh et al. (2007); Frangos et al. (2011) to spectral methods based on polynomial chaos (PC) expansions Wiener (1938); Ghanem and Spanos (2012); Xiu and Karniadakis (2002); Debusschere et al. (2004); Najm (2009); Xiu (2009); Le Maître and Knio (2010). The latter approaches do not reduce the internal state of a deterministic model; rather, they explicitly exploit regularity in the dependence of model outputs on uncertain input parameters. Polynomial chaos has seen extensive use in a range of engineering applications (e.g., Hosder et al. (2006); Reagan et al. (2003); Walters (2003); Xiu and Karniadakis (2003)) including parameter estimation and inverse problems (e.g., Marzouk et al. (2007); Marzouk and Xiu (2009); Marzouk and Najm (2009)), and this is the approach we shall use.
Let $\xi_i$, $i = 1, 2, \ldots$, be i.i.d. random variables defined on a probability space $(\Omega, \mathcal{U}, \mathbb{P})$, where $\Omega$ is the sample space, $\mathcal{U}$ is the $\sigma$-field generated by all the $\xi_i$, and $\mathbb{P}$ is the probability measure. Then any random variable $X$, measurable with respect to $\mathcal{U}$ and possessing finite variance, can be represented as follows:
$$ X(\omega) = \sum_{|\mathbf{i}| = 0}^{\infty} x_{\mathbf{i}} \, \Psi_{\mathbf{i}}\left( \xi_1(\omega), \xi_2(\omega), \ldots \right), \tag{15} $$
where $\omega$ is an element of the sample space; $\mathbf{i} = (i_1, i_2, \ldots)$, $i_k \in \mathbb{N}_0$, is an infinite-dimensional multi-index; $|\mathbf{i}| = i_1 + i_2 + \cdots$ is the $\ell_1$ norm; $x_{\mathbf{i}}$ are the expansion coefficients; and
$$ \Psi_{\mathbf{i}}\left( \xi_1, \xi_2, \ldots \right) = \prod_{k=1}^{\infty} \psi_{i_k}(\xi_k) $$
are multivariate polynomial basis functions Xiu and Karniadakis (2002). Here $\psi_{i_k}$ is an orthogonal polynomial of order $i_k$ in the variable $\xi_k$, where orthogonality is with respect to the distribution of $\xi_k$,
$$ \mathbb{E}\left[ \psi_m(\xi_k) \, \psi_n(\xi_k) \right] = \int_{\Xi} \psi_m(\xi) \, \psi_n(\xi) \, p_{\xi}(\xi) \, d\xi = \delta_{mn} \, \mathbb{E}\left[ \psi_m^2(\xi_k) \right], $$
and $\Xi$ is the support of $\xi_k$. The expansion (15) is convergent in the mean-square sense Cameron and Martin (1947). For computational purposes, the infinite sum and infinite dimension must be truncated to some finite stochastic dimension $n_s$ and polynomial order $p$. A common choice is the “total-order” truncation $|\mathbf{i}| \leq p$:
$$ X(\omega) \approx \sum_{|\mathbf{i}| = 0}^{p} x_{\mathbf{i}} \, \Psi_{\mathbf{i}}\left( \xi_1(\omega), \ldots, \xi_{n_s}(\omega) \right). $$
The total number of terms in this expansion is
$$ P + 1 = \frac{(n_s + p)!}{n_s! \, p!}. $$
The choice of $p$ is influenced by the degree of nonlinearity in the relationship between $X$ and $\boldsymbol{\xi}$, and the choice of $n_s$ reflects the degrees of freedom needed to capture the stochasticity of the system. These choices might also be constrained by the availability of computational resources, as $P$ grows quickly when these numerical parameters are increased.
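The cardinality formula is worth a quick sanity check, since it drives the cost of the expansion:

```python
from math import comb

# Number of terms in a total-order polynomial chaos expansion:
# card{ i : |i| <= p } in n_s dimensions = (n_s + p)! / (n_s! p!) = C(n_s + p, p)
def num_pc_terms(ns, p):
    return comb(ns + p, p)

# growth with dimension and order
n_small = num_pc_terms(2, 3)    # 10
n_large = num_pc_terms(10, 4)   # 1001
```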
3.1 Joint expansion for design variables
In the Bayesian setting, the model parameters $\boldsymbol{\theta}$ are random variables, for which PC expansions are easily applied. But the model outputs also depend on the design conditions, and constructing a separate PC expansion at each value of $\mathbf{d}$ required during optimization would be impractical. Instead, we can construct a single PC expansion for each component of $\mathbf{G}$, depending jointly on $\boldsymbol{\theta}$ and $\mathbf{d}$. (Similar suggestions have recently appeared in the context of robust design Eldred (2011).) To proceed, we increase the stochastic dimension by the number of design dimensions, putting $n_s = n_\theta + n_d$, where we have assigned one stochastic dimension to each component of $\boldsymbol{\theta}$ and one to each component of $\mathbf{d}$ for simplicity. Further, we assume an affine transformation between each component of $\mathbf{d}$ and the corresponding $\xi_i$; any value of $\mathbf{d}$ can thus be uniquely associated with a vector of these $\xi_i$. Since the design parameters will usually be supported on a bounded domain (e.g., inside some hyper-rectangle), the corresponding $\xi_i$ are given uniform distributions. (The corresponding univariate polynomials $\psi$ are thus Legendre polynomials.) These distributions effectively define a uniform weight function over the design space that governs where the $L^2$-convergent PC expansions should be accurate.
3.2 Pseudospectral projection
Constructing the PC expansion involves computing the coefficients $x_{\mathbf{i}}$; this generally can proceed via two alternative approaches, intrusive and non-intrusive. The intrusive approach results in a new system of equations that is larger than the original deterministic system, but it needs to be solved only once. The difficulty of this step depends strongly on the character of the original equations, however, and may be prohibitive for arbitrary nonlinear systems. The non-intrusive approach computes the expansion coefficients by directly projecting the quantity of interest (e.g., the model output) onto the basis functions $\Psi_{\mathbf{i}}$. One advantage of this method is that the deterministic solver can be reused and treated as a black box. The deterministic problem needs to be solved many times, but typically at carefully chosen parameter values. The non-intrusive approach also offers flexibility in choosing arbitrary functionals of the state trajectory as observables; these functionals may depend smoothly on $(\boldsymbol{\theta}, \mathbf{d})$ even when the state itself has a less regular dependence. (The combustion model in Section 6 provides an example of such a situation.)
Taking advantage of orthogonality, the PC coefficients are simply:
$$ G_{k,\mathbf{i}} = \frac{\mathbb{E}\left[ G_k \Psi_{\mathbf{i}} \right]}{\mathbb{E}\left[ \Psi_{\mathbf{i}}^2 \right]}, \tag{21} $$
where $G_{k,\mathbf{i}}$ is the PC coefficient with multi-index $\mathbf{i}$ for the $k$th observable. (Here we are equating the dimension of the forward model output with the number of observables $n_y$. If the data contain repeated observations of the same quantity, for instance, in the case of multiple experiments, then the same PC approximation can be used for all model-based predictions of that quantity.) Analytical expressions are available for the denominators $\mathbb{E}\left[ \Psi_{\mathbf{i}}^2 \right]$, but the numerators must be evaluated via numerical quadrature, because the integrand contains the forward model $\mathbf{G}$. The resulting approach is termed pseudospectral projection, or non-intrusive spectral projection (NISP). When the evaluations of the integrand (and hence the forward model) are expensive and $n_s$ is large, an efficient method for high-dimensional integration is essential.
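In one dimension with a uniform germ, the projection reduces to computing Legendre coefficients by Gauss quadrature. The sketch below (illustrative, with a trivial "forward model") recovers the exact expansion of $\xi^2$.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Pseudospectral projection in one dimension: project a model output G(xi),
# xi ~ Uniform(-1, 1), onto Legendre polynomials P_0..P_p. The coefficients are
#   g_i = E[G P_i] / E[P_i^2],  with E[P_i^2] = 1/(2i+1) under the uniform
# density, and the numerator evaluated by Gauss-Legendre quadrature.
def legendre_pc_coeffs(G, p, nquad=20):
    x, w = leggauss(nquad)     # nodes/weights for unit weight on [-1, 1]
    w = w / 2.0                # normalize to the uniform density 1/2
    coeffs = []
    for i in range(p + 1):
        Pi = np.polynomial.legendre.Legendre.basis(i)(x)
        num = np.sum(w * G(x) * Pi)          # E[G P_i] by quadrature
        coeffs.append(num * (2 * i + 1))     # divide by E[P_i^2] = 1/(2i+1)
    return np.array(coeffs)

# G(xi) = xi^2 has the exact expansion (1/3) P_0 + (2/3) P_2
c = legendre_pc_coeffs(lambda x: x**2, p=3)
```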
3.3 Dimension-adaptive sparse quadrature
A host of useful methods are available for numerical integration Morokoff and Caflisch (1995); Sobol (1967); Smolyak (1963); Barthelmann et al. (2000); Gerstner and Griebel (1998); Bungartz and Dirnstorfer (2004). In the present context, we seek a method that can evaluate the numerator of Equation (21) efficiently in high dimensions, i.e., with a minimal number of integrand evaluations, taking advantage of regularity and anisotropy in the dependence of on and . We thus employ the dimension-adaptive sparse quadrature (DASQ) algorithm of Gerstner and Griebel Gerstner and Griebel (2003), an efficient extension of Smolyak sparse quadrature that adaptively tensorizes quadrature rules in each coordinate direction. It has a weak dependence on dimension, making it an excellent candidate for problems of moderate size (e.g., ). Its formulation is briefly described below.
Let be the th level (with quadrature points) of some univariate quadrature rule, where are the weights, are the abscissae, and is the integrand. The level is usually defined to take advantage of any nestedness in the quadrature rule and to reduce the overall computational cost. We have chosen to use Clenshaw-Curtis (CC) quadrature for ’s with compact support, with the following level definition
The CC rule is especially appealing because it is accurate, nested, and easy to construct. (Although Gauss-Legendre quadrature, which is not nested, has a higher degree of polynomial exactness, Trefethen (2008) notes that “the Clenshaw-Curtis and Gauss formulas have essentially the same accuracy unless is analytic in a sizable neighborhood of the interval of the integration—in which case both methods converge so fast that the difference hardly matters.” The abscissae are simply , and the weights can be computed very efficiently via FFT Gentleman (1972a, b), requiring only time and introducing very little roundoff error.)
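As an illustration of nestedness and the level definition, the sketch below constructs a CC rule by matching Chebyshev moments rather than by the FFT route cited above; the mapping from level to point count (1 point at level 0, 2^l + 1 points thereafter) is a common nesting convention and is assumed here rather than taken from the text.

```python
import numpy as np

def clenshaw_curtis(level):
    """Nested Clenshaw-Curtis rule on [-1, 1]: n = 1 point at level 0,
    n = 2**level + 1 points otherwise, so each level reuses all lower-level
    abscissae. Weights are found by matching exact Chebyshev moments."""
    n = 1 if level == 0 else 2**level + 1
    if n == 1:
        return np.array([0.0]), np.array([2.0])     # midpoint rule
    x = np.cos(np.pi * np.arange(n) / (n - 1))      # Chebyshev extrema
    k = np.arange(n)
    # integral of T_k over [-1, 1]: 2 / (1 - k^2) for even k, 0 for odd k
    # (the "+ (k == 1)" guard only avoids a spurious divide-by-zero at k = 1)
    moments = np.where(k % 2 == 0, 2.0 / (1.0 - k**2 + (k == 1)), 0.0)
    T = np.cos(np.outer(k, np.arccos(x)))           # T[i, j] = T_i(x_j)
    w = np.linalg.solve(T, moments)                 # interpolatory weights
    return x, w
```

An n-point interpolatory rule of this kind integrates polynomials up to degree n - 1 exactly, and the level-1 abscissae are a subset of the level-2 abscissae.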
The difference formulas, defined by
are the differences between 1D quadrature rules at two consecutive levels. The subtraction is carried out by subtracting the weights at the quadrature points of the lower level. Then, for (where each entry of the multi-index represents the level in that dimension, with a total of dimensions), the multivariate quadrature rule is defined to be
where is some set determined by the adaptation algorithm, to be described below. For example, where is some user-defined level, corresponds to the Smolyak sparse quadrature, while corresponds to a tensor-product quadrature.
The original DASQ algorithm can be found in Gerstner and Griebel (2003). The idea is to divide all the multi-indices into two sets: an old set and an active set. A member of the active set is able to propose new candidates by increasing the level in any dimension by 1. However, a candidate can only be accepted if all its backward neighbors are in the old set; this so-called admissibility condition ensures the validity of the telescoping expansion of the general sparse quadrature formulas via the differences . Finally, each multi-index has an error indicator, which is proportional to its corresponding summand value in Equation (25). Intuitively, if this term contributes little to the overall integral estimate, then integration error due to this term should be small. New candidates are proposed from the multi-index corresponding to the highest error estimate. The process iterates until the sum of error indicators for the active set members falls below some user-specified tolerance. More details, including proposed data structures for this algorithm, can be found in Gerstner and Griebel (2003). One drawback of DASQ is that parallelization can only be implemented within the evaluation of each , which is not as efficient as the parallelization in non-adaptive methods. The original DASQ algorithm also does not address how integrand evaluations at nested quadrature points can easily be identified and reused as adaptation proceeds. Huan Huan (2010) proposes an algorithm to solve this problem, taking advantage of the specific quadrature structure.
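The old/active bookkeeping just described can be sketched as follows. The error indicator is a stand-in supplied by the caller; in the actual algorithm it would be the magnitude of the difference-rule contribution of Equation (25) attached to each multi-index.

```python
def admissible(idx, old):
    """A candidate is admissible iff all its backward neighbors are in the
    old set, preserving the telescoping sum over difference rules."""
    return all(
        tuple(idx[j] - (j == d) for j in range(len(idx))) in old
        for d in range(len(idx)) if idx[d] > 0
    )

def dasq_index_sets(error_indicator, dim, tol, max_iter=500):
    """Index-set management of dimension-adaptive sparse quadrature
    (a sketch; error_indicator(idx) stands in for the summand magnitude)."""
    root = (0,) * dim
    old, active = set(), {root: error_indicator(root)}
    for _ in range(max_iter):
        if sum(active.values()) < tol:
            break                                   # global error estimate met
        # move the active index with the largest indicator to the old set
        idx = max(active, key=active.get)
        del active[idx]
        old.add(idx)
        # propose its forward neighbors; accept only admissible ones
        for d in range(dim):
            cand = tuple(idx[j] + (j == d) for j in range(dim))
            if cand not in old and cand not in active and admissible(cand, old):
                active[cand] = error_indicator(cand)
    return old, active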
The ultimate goal of quadrature is to compute the polynomial chaos coefficients of the model outputs in Equation (21). There are a total of (see Equation (20)) coefficients for each output variable, and a total of model outputs, yielding a total of integrals. To simplify notation, let the PC coefficients , be re-indexed by . It would be very inefficient to compute each integral from scratch, since the corresponding quadrature points will surely overlap and any evaluations of ought to be reused. To realize these computational savings, the original DASQ algorithm is altered to integrate for all the coefficients simultaneously. We guide all the integrations via a single adaptation route, which uses a “total effect” local error indicator that reflects all the local error indicators from the integrals. The total effect indicator at a given may be defined as the max or 2-norm of the local error indicators . The new algorithm is presented as Algorithm 1.
Lastly, compensated summation (the Kahan algorithm Kahan (1965)) is used throughout our implementation, as it significantly reduces numerical error when summing long sequences of finite-precision floating point numbers as required above.
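For reference, the compensated summation loop is only a few lines:

```python
def kahan_sum(values):
    """Kahan compensated summation: tracks the low-order bits lost to
    rounding in a separate compensation term and folds them back in."""
    s, c = 0.0, 0.0
    for v in values:
        y = v - c          # apply the correction from the previous step
        t = s + y          # low-order bits of y may be lost here
        c = (t - s) - y    # recover (algebraically zero, numerically not)
        s = t
    return s
```

Summing one thousand copies of 0.1 this way is accurate to within a few units in the last place, whereas naive left-to-right summation accumulates a noticeably larger error.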
4 Bayesian parameter inference
Once data are collected by performing an optimal experiment, they can be used in the manner specified by the original experimental goal. In the present case, the goal is to infer the model parameters by exploring or characterizing the posterior distribution in Equation (1). Ideally the data will lead to a narrow posterior such that, with high probability, the parameters can only take on a small range of values.
The posterior can be evaluated pointwise up to a constant factor, but computing it on a grid is immediately impractical as the number of dimensions increases. A more economical method is to generate independent samples from the posterior, but given the arbitrary form of this distribution (particularly for nonlinear ), direct Monte Carlo sampling is seldom possible. Instead, one must resort to Markov chain Monte Carlo (MCMC) sampling. Using only pointwise evaluations of the unnormalized posterior density, MCMC constructs a Markov chain whose stationary and limiting distribution is the posterior. Samples generated in this way are correlated, such that the effective sample size is smaller than the number of MCMC steps. Nonetheless, a well-tuned MCMC algorithm can be reasonably efficient. The resulting samples can then be used in various ways—to evaluate marginal posterior densities, for instance, or to approximate posterior expectations
with the -sample average
where are samples extracted from the chain (perhaps after burn-in or thinning). For example, the minimum mean square error (MMSE) estimator is simply the mean of the posterior, while the corresponding Bayes risk is the posterior variance, both of which can be estimated using MCMC.
A very simple and powerful MCMC method is the Metropolis-Hastings (MH) algorithm, first proposed by Metropolis et al. Metropolis et al. (1953), and later generalized by Hastings Hastings (1970); details of the algorithm can be found in Tierney (1994); Gilks et al. (1996); Andrieu et al. (2003); Robert and Casella (2004). Two useful improvements to MH are the concepts of delayed rejection (DR) Green and Mira (2001); Mira (2001) and adaptive Metropolis (AM) Haario et al. (2001); combining these leads to the DRAM algorithm of Haario et al. Haario et al. (2006). While countless other MCMC algorithms exist or are under active development, some involving derivatives (e.g., Langevin) or even Hessians of the posterior density, DRAM offers a good balance of simplicity and efficiency in the present context.
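A bare-bones random-walk Metropolis sketch (without the DR or AM refinements) illustrates the core accept/reject mechanism; the Gaussian proposal and its step size are illustrative tuning choices.

```python
import numpy as np

def metropolis_hastings(logpost, theta0, n_steps, step_size, rng):
    """Random-walk Metropolis: the Gaussian proposal is symmetric, so the
    Hastings ratio reduces to a ratio of unnormalized posterior densities,
    evaluated here in log space for numerical stability."""
    theta = np.array(theta0, dtype=float)
    lp = logpost(theta)
    chain = np.empty((n_steps, theta.size))
    accepts = 0
    for i in range(n_steps):
        proposal = theta + step_size * rng.standard_normal(theta.size)
        lp_prop = logpost(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept w.p. min(1, ratio)
            theta, lp = proposal, lp_prop
            accepts += 1
        chain[i] = theta                            # rejected steps repeat theta
    return chain, accepts / n_steps
```

Running the chain on a standard normal log-density recovers its mean and standard deviation to within Monte Carlo error; the repeated states of rejected proposals are what make the samples correlated and the effective sample size smaller than the chain length.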
Even with efficient proposals, MCMC typically requires a tremendous number of samples (tens of thousands or even millions) to compute posterior estimates with acceptable accuracy. Since each MCMC step requires evaluation of the posterior density, which in turn requires evaluation of the likelihood and thus the forward model , surrogate models for the dependence of on can offer tremendous computational savings. Polynomial chaos surrogates, as described in Section 3, can be quite helpful in this context Marzouk et al. (2007); Marzouk and Xiu (2009); Marzouk and Najm (2009).
5 Application: nonlinear model
We first illustrate the optimal design of experiments using a simple algebraic model, nonlinear in both the parameters and the design variables. Since the model is inexpensive to evaluate, we use it to illustrate features of the core formulation—estimating expected information gain, designing single and multiple experiments, and the role of prior information—leaving demonstrations of stochastic optimization and polynomial chaos surrogates to the next section.
5.1 Design of a single experiment
Consider a simple nonlinear model with a scalar observable , one uncertain parameter , and one design variable :
where denotes the model output (without noise) and is an additive Gaussian measurement error. Let the prior be and the design space be .
Suppose our experimental goal is to infer the uncertain parameter based on a single measurement . The expected utility in Equation (6) and its estimate in Equation (9) are appropriate choices, and our ultimate goal is to maximize . Figure 2(a) shows estimates of the expected utility, using , plotted along a -node uniform grid spanning the entire design space. Local maxima appear at and , a pattern which can be understood by examining Equation (28). A value away from 0.2 or 1.0 (such as ) would lead to an observation that is dominated by the noise , which is not useful for inferring the uncertain parameter . But if is chosen close to 0.2 or 1.0, such that the noise is insignificant compared to the first or second term of the equation, then would be very informative for .
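The structure of the two-stage Monte Carlo estimator can be sketched as follows. Since Equation (28) is not reproduced here, the `toy` model below is a stand-in with the same qualitative features (one term activated near d = 0.2, another growing toward d = 1); the noise level and sample sizes are likewise illustrative.

```python
import numpy as np

def eig_estimate(model, d, sigma, n_out, n_in, rng):
    """Two-stage Monte Carlo estimate of expected information gain at design d.
    Outer loop: draw (theta_i, y_i) from the prior predictive.
    Inner loop: Monte Carlo estimate of the evidence p(y_i | d).
    The Gaussian normalization constants cancel between the two terms."""
    theta = rng.uniform(0.0, 1.0, n_out)            # outer prior draws
    g = model(theta, d)
    y = g + sigma * rng.standard_normal(n_out)      # simulated data
    log_like = -0.5 * ((y - g) / sigma) ** 2        # log p(y_i | theta_i, d) + const
    g_in = model(rng.uniform(0.0, 1.0, n_in), d)    # inner prior draws, reused
    z = -0.5 * ((y[:, None] - g_in[None, :]) / sigma) ** 2
    m = z.max(axis=1)                               # log-sum-exp for stability
    log_ev = m + np.log(np.mean(np.exp(z - m[:, None]), axis=1))
    return float(np.mean(log_like - log_ev))

# stand-in model, NOT Equation (28): informative near d = 0.2 and d = 1
toy = lambda theta, d: theta**3 * d**2 + theta * np.exp(-np.abs(0.2 - d))
```

Shrinking the noise makes the data more informative and the estimate rises accordingly, while noise on the scale of the signal drives the expected information gain toward zero.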
5.2 Design of two experiments
Consider the “batch” or fixed design of two experiments (where the results of one experiment cannot be used to design the other, as described in Section 2.2). Moreover, assume that both experiments are described by the same model; this is not a requirement, but an assumption adopted here for simplicity. Then the overall algebraic model is simply extended to
where the subscripts and denote variables associated with experiments 1 and 2, respectively. Note that there is still a single common parameter . The errors and are i.i.d. with .
Again using a prior on , the expected utility is plotted in Figure 3(a). First, note the symmetry in the contours along the line, which is expected since the two experiments have identical structure. Second, the optimal pair of experiments is not just a repeat of the optimal single-experiment design: , and instead we have or . Some insight can be obtained by examining Figure 4, which plots the single-experiment model output as a function of the uncertain parameter at the two locally optimal designs: and . Intuitively, a high slope of should be more informative for the inference of , as the output is then more sensitive to variations in the input. The plot shows that neither design has a greater slope over the entire range of the prior . Instead, the slope is greater for with design , and greater for with design , where .
Let us then examine the cases of “restricted” priors and . Expected utilities for a single experiment, under either of these priors, are shown in Figures 2(b) and 2(c). The optimal design for is at 0.2 and for it is at 1.0, supporting intuition from the analysis of slopes. Next, the expected utilities of two experiments, under the restricted priors, are shown in Figures 3(b) and 3(c). Since in both cases, a single design point can give maximum slope over the entire restricted prior range of , it is not surprising that the optimal pair of experiments involves repeating the respective single-experiment optima. In contrast, the lack of a “clear winner” over the full prior intuitively explains why a combination of different design conditions may yield more informative data overall. Note that we have only focused on the two local optima and from the original analysis, but it is possible that new local or globally optimal design points could emerge as the prior is changed.
6 Application: combustion kinetics
Experimental diagnostics play an essential role in the development and refinement of chemical kinetic models for combustion Frenklach (2007); Davidson and Hanson (2004). Available diagnostics are often indirect, imprecise, and incomplete, leaving significant uncertainty in relevant rate parameters and thermophysical properties Baulch et al. (1994, 2005); Reagan et al. (2003); Phenix et al. (1998). Uncertainties are particularly acute when developing kinetic models for new combustion regimes or for fuels derived from new feedstocks, such as biofuels. Questions of experimental design—e.g., which species to interrogate and under what conditions—are thus of great practical importance in this context.
6.1 Model description
We demonstrate our optimal experimental design framework on shock tube ignition experiments, which are a canonical source of kinetic data. In a shock tube experiment, the mixture behind the reflected shock experiences a sharp rise in temperature and pressure; if conditions are suitable, this mixture then ignites after some time, known as the ignition delay time. Ignition delays and other observables extracted from the experiment carry indirect information about the elementary chemical kinetic processes occurring in the mixture. These experiments are well described by the dynamics of a spatially homogeneous, adiabatic, constant-pressure chemical mixture.
We model the evolution of the mixture using ordinary differential equations (ODEs) expressing conservation of energy and of individual chemical species. Governing equations are detailed in C. We consider an initial mixture of hydrogen and oxygen. (Note that H-O kinetics are a key subset of the reaction mechanisms associated with the combustion of complex hydrocarbons.) Our baseline kinetic model is a 19-reaction mechanism proposed in Yetter et al. (1991), reproduced in Table 1. Detailed chemical kinetics lead to a stiff set of nonlinear ODEs, with state variables consisting of temperature and species mass or molar fractions. The initial condition of the system is specified by the initial temperature and the fuel-oxidizer equivalence ratio . Species production rates depend on the mixture conditions and on a set of kinetic parameters: pre-exponential factors , temperature exponents , and activation energies , where is the reaction number in Table 1. These parameters are important in determining combustion characteristics and are of great interest in practice. Thermodynamic parameters and reaction rates in the governing equations are evaluated with the help of Cantera 1.7.0 Goodwin (2002); can (2009), an open-source chemical kinetics software package. ODEs are solved implicitly, using the variable-order backwards differentiation formulas implemented in CVODE Cohen and Hindmarsh (1996).
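For reference, each elementary reaction's rate constant takes the modified Arrhenius form k(T) = A T^b exp(-E_a / (R T)), with the pre-exponential factor, temperature exponent, and activation energy named above. The numerical values in the example below are placeholders, not parameters from Table 1.

```python
import numpy as np

R_GAS = 8.314  # universal gas constant, J/(mol K)

def arrhenius(A, b, Ea, T):
    """Modified Arrhenius rate constant k = A * T**b * exp(-Ea / (R*T)).
    A: pre-exponential factor, b: temperature exponent,
    Ea: activation energy [J/mol], T: temperature [K]."""
    return A * T**b * np.exp(-Ea / (R_GAS * T))
```

For a positive activation energy the rate constant grows steeply with temperature, which is why the post-shock temperature rise triggers ignition.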
6.2 Experimental goals
In this study, the experimental goal is to infer selected kinetic parameters (, , and ) associated with the elementary reactions in Table 1. For demonstration, we let the kinetic parameters of interest be and . Reaction 1 is a chain-branching reaction, leading to a net increase in the number of radical species in the system. Reaction 3 is a chain-propagating reaction, exchanging one radical for another, but nonetheless relevant to the overall dynamics. (As the methodology explored here is quite general, we have the freedom to select any parameters appearing in the mechanism; the selection reflects the particular goals of the experimentalist or investigator. We also note that the “evaluated” combustion kinetic data in Baulch et al. (1994, 2005) can help select parameters to target for inference and help define their prior ranges.) We infer rather than directly, where is the nominal value of in Yetter et al. (1991); this transformation ensures positivity and lets us easily impose a log-uniform prior on , which is appropriate since the pre-exponential is a multiplicative factor. The design variables are the initial temperature and equivalence ratio .
We once again use the expected utility defined in Equation (6) (and its estimator in Equation (9)) as the objective to be maximized for optimal design. Unlike the algebraic model in Section 5, however, this combustion problem offers many possible choices of observable. Some observables are more informative than others; we explore this choice in the next section.
6.3 Observables and likelihood function
Typical trajectories of the state variables are shown in Figure 5. The temperature rises suddenly upon ignition; reactant species are rapidly consumed and product species are produced as the mixture comes to equilibrium. The most complete and detailed set of system observables is the full time history of the state variables. One could simply discretize the time domain to produce a finite-dimensional data vector . Too few discretization points might fail to capture the state behaviour, however. And because the kinetic parameters affect ignition delay, the state at any given time may have a nearly discontinuous dependence on the parameters. (This is due to the sharpness of the ignition front; at a fixed time, the state is most probably either pre-ignition or post-ignition.) Such a dependence makes construction of a polynomial chaos surrogate far more challenging Najm et al. (2009). It is desirable to transform the state into alternate observables that somehow “compress” the information and depend relatively smoothly on the kinetic parameters, while retaining features that are relevant to the experimental goals. We would also like to select observables that are easy to obtain experimentally.
Taking the above factors into consideration, we will use the observables in Table 2. The observables are the peak value of the heat release rate, the peak concentrations of various intermediate chemical species (O, H, HO, HO), and the times at which these peak values occur. Examples of , , , and are shown in Figure 6. The time of peak heat release coincides with the time at which temperature rises most rapidly. We thus take it as our definition of ignition delay, . We use the logarithm of all the characteristic time variables in our actual implementation, as the times are positive and vary over several orders of magnitude as a function of the kinetic parameters and design variables.
The likelihood is defined using the ODE model predictions and independent additive Gaussian measurement errors: , with components . For the concentration observables, the standard deviation of the measurement error is taken to be 10% of the value of the corresponding signal:
For the characteristic-time observables, we add a small constant s to the standard deviation, reflecting the minimum resolution of the timing technology:
Note that the noise magnitude depends implicitly on both the kinetic parameters and the design variables. Both terms contributing to the expected information gain in Equation (8) are therefore influential, and one would expect a maximum entropy sampling approach to yield different results than the present experimental design methodology.
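The heteroscedastic Gaussian likelihood described above can be sketched as follows; the time-resolution floor `tau` is a placeholder value, since the constant itself is not reproduced here.

```python
import numpy as np

def log_likelihood(y, g, is_time, tau=1e-6):
    """Independent Gaussian errors whose standard deviation is 10% of the
    model prediction g, plus a constant floor tau (placeholder value) for
    the characteristic-time observables flagged by is_time."""
    sigma = 0.1 * np.abs(g) + tau * np.asarray(is_time, dtype=float)
    r = (y - g) / sigma
    return float(np.sum(-0.5 * r**2 - np.log(sigma * np.sqrt(2.0 * np.pi))))
```

Because sigma depends on the model prediction, the noise level varies implicitly with both the kinetic parameters and the design variables, which is the point made above about maximum entropy sampling.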
6.4 Polynomial chaos construction
Each solve of the ODE system defining is expensive, and thus we employ a polynomial chaos surrogate. In practice, since non-intrusive construction of the surrogate requires many forward model evaluations, the surrogate is only worth forming if the total number of model evaluations required for optimization of the expected utility exceeds the number required for surrogate construction. A detailed analysis of this tradeoff and the potential computational gains can be found in Section 6.7.
Uniform priors are assigned to the model parameters and uniform input distributions are assumed for the design variables (see Section 3.1), with the supports given in Table 3. The polynomial chaos expansions thus use Legendre polynomials, with . Our goal now is to construct PC expansions for the model outputs , using the projection given in Equation (21). For each desired PC coefficient, the numerator in that equation is evaluated using the modified DASQ algorithm described in Section 3.3. The expansions are truncated at a total order of , and DASQ is stopped once a total of function evaluations have been exceeded. The degree of this expansion is admittedly (and deliberately) chosen rather high. The performance of lower-order expansions is examined below and in Section 6.7.
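The size of a total-order truncation grows combinatorially with dimension and order; a small helper makes the count concrete (for stochastic dimension dim and total order p, the number of terms is the binomial coefficient C(dim + p, p)).

```python
from itertools import product
from math import comb

def total_order_multi_indices(dim, p):
    """All multi-indices alpha with total order |alpha| = sum(alpha) <= p.
    Each index selects one tensor-product Legendre basis polynomial."""
    return [a for a in product(range(p + 1), repeat=dim) if sum(a) <= p]
```

This enumeration is a sketch for counting purposes; practical implementations generate the indices without scanning the full tensor grid.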
Indeed, the accuracy of the PC surrogate can be analyzed more rigorously by evaluating its relative error over a range of and values:
For the th observable, is the output of the original ODE model and is the corresponding PC surrogate. and are affine functions of (the PC expansions of the model inputs). Accurately evaluating the error is expensive, certainly more expensive than computing in the first place. But additional integration error must be minimized, and the integrals in Equation (32) are thus evaluated using a level-15 isotropic Clenshaw-Curtis sparse quadrature rule, containing 3,502,081 distinct abscissae.
Figure 7 shows contours of the error over a range of and , for the PC expansion of the peak enthalpy release rate. The values are approximate, as DASQ is terminated at the end of the iteration that exceeds . When is too small, the error is dominated by aliasing (integration) error and increases with . When a sufficiently large is used such that truncation error dominates, exponential convergence with respect to can be observed, as expected for smooth functions. Ideally, and should be selected at the “knees” of these contour plots, since little accuracy can be gained when is increased any further, but these locations can be difficult to pinpoint a priori.
6.5 Design of a single experiment
Figures 8(a) and 8(b) show contours of the expected utility estimates, , in the two-dimensional design space, constructed using the full ODE model (with estimator parameters ) and the PC surrogate (with estimator parameters ), respectively. Contours from the PC surrogate are very similar to those from the full model, though the former have less variability due to the larger number of Monte Carlo samples used to compute . Most importantly, both plots yield the same optimal experimental design at around .
To test how well the expected information gain anticipates the performance of an experiment, the inference problem is solved at three different design points , , and , listed in Table 4 and illustrated in Figure 8(b). Since the expected utility is highest at design , then the posterior is expected to reflect the largest information gain at that experimental condition. We use the full ODE model to generate artificial data at each of the three design conditions, then perform inference. Contours of posterior density are shown in Figure 9, using the full ODE model and the PC surrogate. The posteriors of the full model and PC surrogate match very well; hence the PC surrogate is suitable not only for experimental design, but also for inference. As expected from the expected utility plots, the posterior distribution of the kinetic parameters is tightest at design ; this was the most informative of the three experimental conditions. Posterior modes of the ODE model and PC results are not precisely the same, however, due to the modeling error associated with the PC surrogate. Also, the posterior modes obtained with the full ODE model do not exactly match the values used to generate the artificial data, due to the noise in the likelihood model.
What if a different set of observables is used? Two cases are explored: first, using only the characteristic time observables (i.e., the first five rows of Table 2); and second, using only the peak value observables (i.e., the last five rows of Table 2). The corresponding expected utility plots are given in Figure 10 (using the PC surrogate only). Several remarks can be made. First, the characteristic time observables are more informative than the peak value observables, as demonstrated by the higher expected utility values in Figure 10(b) than Figure 10(c). Second, the choice of observables can greatly influence the optimal value of the design parameters. Third, even though the observables from the two cases form a partition of the full observable set, their expected utility values do not simply sum to that of the full-observable case. (This is a special case of the analysis in A.) The lesson is that the selection of appropriate observables is a very important part of the design procedure, especially if one is forced to select only a few modes of observation. This selection could be made into an argument of the objective function, augmenting and leading to a mixed-integer optimization problem.
6.6 Design of two experiments: stochastic optimization
Now we perform a study analogous to that in Section 5.2, designing two ignition experiments (of the same structure) simultaneously. The experimental goal of inferring and is unchanged, and for computational efficiency we use only the , PC surrogate. The design space is now four-dimensional, with . Stochastic optimization is used to find the optimal experimental design, as a grid search is entirely impractical.
Coupling stochastic optimization schemes (Section 2.4) with the estimator of expected information gain introduces a few new numerical tradeoffs. The number of samples in the outer loop of the estimator controls the variance of , which dictates the noise level of the objective function. Lower noise in the objective function might imply fewer optimization iterations overall, while a noisier objective may require many more iterations of either SPSA or NMNS to make progress towards the optimum. On the other hand, noise should not be reduced too much for SPSA, since the usefulness of its gradient approximation relies on the existence of a non-negligible noise level. In general, the task of balancing against the number of optimization iterations, in order to minimize the number of model evaluations, is not trivial. We therefore test different values of to understand their impact on the SPSA and NMNS optimization schemes. We fix at in order to maintain a low bias contribution from the inner loop.
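A minimal SPSA sketch for maximization shows the two-evaluation gradient estimate that makes the method attractive when each noisy objective evaluation is expensive; the gain-sequence constants below follow common defaults from Spall's work and are not the values used in this study.

```python
import numpy as np

def spsa_maximize(f_noisy, x0, n_iter, a=0.1, c=0.1, alpha=0.602, gamma=0.101,
                  bounds=None, rng=None):
    """Simultaneous-perturbation stochastic approximation (ascent form):
    two noisy objective evaluations per iteration yield a gradient estimate
    in any dimension, using a random Rademacher perturbation direction."""
    rng = rng or np.random.default_rng()
    x = np.array(x0, dtype=float)
    for k in range(n_iter):
        ak = a / (k + 1) ** alpha                   # step-size gain sequence
        ck = c / (k + 1) ** gamma                   # perturbation gain sequence
        delta = rng.choice([-1.0, 1.0], size=x.size)
        ghat = ((f_noisy(x + ck * delta) - f_noisy(x - ck * delta))
                / (2.0 * ck)) * (1.0 / delta)       # simultaneous-perturbation gradient
        x = x + ak * ghat                           # ascent step (maximization)
        if bounds is not None:
            x = np.clip(x, bounds[0], bounds[1])
    return x
```

On a noisy concave quadratic the iterates settle near the maximizer; as noted above, the approximation is designed to tolerate, and in fact relies on, a non-negligible noise level in the objective.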
Since the results of stochastic optimization are themselves random, we use an ensemble of 100 independent optimization runs at any given parameter setting to analyze performance. Each optimization run is capped at evaluations of the noisy objective—i.e., of the summand in Equation (9)—where each noisy objective evaluation itself involves evaluations of the model or the surrogate . We can thus compare performance at a fixed computational cost.
Runs are performed for both SPSA and NMNS, with ranging from 1 to 100. The final design conditions and convergence histories (the latter plotted for 10 runs only) are shown in Figures 11 and 12, respectively. In Figure 11, each connected blue cross and red circle represents a pair of final experimental designs, where the red circle is arbitrarily chosen to represent the lower design. The optimization results indicate that both experiments should be performed near K, although the best is less precisely determined and less influential. This pattern is similar to that of a single-experiment optimal design. Overall, a tighter clustering of the final design points is observed as is increased. SPSA groups the majority of the final design points more tightly, but it also yields more outliers than NMNS; in other words, it results in more of a “hit-or-miss” situation. Figure 12 indicates that, for the NMNS cases, a lower lets the algorithm reach the convergence “plateau” more quickly. This result is affected both by the shrinkage rate of the simplex and by the fact that a higher simply requires more model evaluations even to complete one iteration. The choice of should therefore take into consideration how many model evaluations are available. Even then, the best choice may depend on the shape of the expected utility surface and the variance of its estimator (which is not stationary in ).
Our observations so far are based on an assessment of the design locations and convergence history. A more quantitative analysis should focus on the expected utility value of the final designs, which is what really matters in the end. Figure 13 shows histograms of expected utility for the 100 final design points resulting from each optimization case. To compare the design points, we want to make the error incurred in estimating relatively negligible, and thus we employ a high-quality estimator with . This is not the small-sample estimator used inside the optimization algorithms; it is a more expensive estimator used afterwards, only for diagnostic purposes. The histograms indicate that increasing is actually not very effective for SPSA, as the persistence of outliers creates a spread in the final values, supporting our suspicion that too small a noise level may be bad for SPSA. On the other hand, increasing is effective for NMNS. The bimodal structure of the histograms is due to the two groups: good designs on the center “ridge” and outliers, with few designs having expected utility values in between. Overall, NMNS performs better than SPSA in this study, both in terms of the asymptotic distribution of design parameters and how quickly the convergence plateau or “knee” is reached.
To validate the results, the parameter inference problem is solved with data from three two-experiment design points (labeled , , and ) and with data from a four-experiment factorial design. All of the experimental conditions are listed in Table 5. Design , a pair of experiments lying on the ridge of “good designs,” is expected to have the tightest posterior among the two-experiment designs. The posteriors are shown in Figure 14, using the PC surrogate only. Indeed, design has the tightest posterior, and is much better than the four-experiment factorial design even though it uses fewer experiments! The factorial design blindly picks all the corners in the design space, which are in general not good design points. (The number of experiments in a factorial design would also increase exponentially with the number of design parameters, becoming impractical very quickly.) Model-based optimal experimental design is far more robust and capable than this traditional method.
6.7 Is using a PC surrogate worthwhile?
In producing the previous results, we used a PC surrogate with and , though analysis for one observable in Figure 7 suggests that this polynomial truncation and adapted quadrature rule are perhaps of higher quality than necessary for our problem. To quantitatively determine whether a polynomial surrogate offers efficiency gains in this study (or any other), we must (i) check if a lower quality PC surrogate may be used while still achieving results of similar quality, and (ii) analyze the computational cost.
To check if a lower-quality PC surrogate would suffice, the same two-experiment optimal experimental design problem is solved but now with . We find that is roughly the smallest that still yields reasonable results. The final optimization positions (obtained via NMNS only), the convergence history, and histogram of final expected utilities are shown in Figure 15. In fact, the histogram appears to be even tighter than that obtained with the surrogate.
To analyze the computational cost, we assume that optimization is terminated after 5000 noisy objective function evaluations, as the practical convergence plateau is reached by that point. If each noisy objective evaluation requires inner Monte Carlo samples and each PC surrogate evaluation is negligible in cost compared to a full model solve, then using the full ODE model requires model evaluations, whereas the construction of the PC surrogate requires full ODE model evaluations. The surrogate thus provides a speedup of roughly 3.5 orders of magnitude, saving 49,990,000 full model evaluations (roughly 4 months of computational time for the present problem, if run serially).
Even though a low-quality surrogate may suffice for optimal experimental design, it may not be accurate enough for parameter inference. For example, the two-experiment and four-experiment posteriors obtained using the lower-order surrogate are shown in Figure 16; these posterior density contours show a substantial loss of accuracy compared to the corresponding plots in Figure 14. Because inference involves neither averaging over the data space nor broad exploration of the design space, and because it generally concentrates on a more restricted range of the model parameters, it can be more sensitive to local errors than the optimal experimental design formulation. There are two possible solutions to this issue:
1. Build a high-order polynomial chaos surrogate at the outset of the analysis, and use it for both optimal experimental design and inference. Because inference typically employs MCMC and thus requires many thousands or even millions of model evaluations, the combined computational savings make such a surrogate almost certainly worth constructing.
2. A more efficient approach is to use a low-order polynomial chaos expansion to perform the optimal experimental design, and then, upon reaching the optimal design conditions, construct a new PC expansion for inference at that design only. The new expansion need not capture dependence on the design variables, and thus involves a smaller dimension and fewer interactions; this less expensive local PC expansion can more easily be made sufficiently accurate for the inference problem.
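The dimension argument behind the second option can be quantified with a simple point count. The dimensions and points per dimension below are hypothetical, and the paper's adapted sparse quadrature would scale more gently than the full tensor grids counted here, but the conclusion is the same: dropping the design variables from the expansion shrinks the grid dramatically.

```python
# Tensor-product quadrature grids grow exponentially with dimension, so a local
# expansion in the model parameters alone is much cheaper than a joint expansion
# over parameters and design variables. All sizes here are illustrative.
n_params, n_design, pts_per_dim = 3, 2, 9
joint_grid = pts_per_dim ** (n_params + n_design)  # over (theta, d) jointly
local_grid = pts_per_dim ** n_params               # over theta at the fixed optimum
ratio = joint_grid // local_grid                   # model runs saved per surrogate
```

With these illustrative sizes the local surrogate needs 81 times fewer model runs, and the gap widens with every additional design variable.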
7 Conclusions and future work

This paper presents a systematic framework and a set of computational tools for the optimal design of experiments. The framework can incorporate nonlinear and computationally intensive models of the experiment, linking them to rigorous information theoretic design criteria while requiring essentially no simplifying assumptions. A flowchart of the overall framework is given in Figure 1, showing the steps of optimal design and their role in a larger design–experimentation–model improvement cycle.
We focus on the experimental goal of parameter inference in a Bayesian statistical setting, where a well-founded design criterion is to maximize the expected information gain from prior to posterior. A two-stage Monte Carlo estimator of expected information gain is coupled to algorithms for stochastic optimization. Estimating the expected information gain, which would otherwise be prohibitive with computationally intensive models, is substantially accelerated through the use of flexible and accurate polynomial chaos surrogates.
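The two-stage (nested) Monte Carlo estimator can be sketched for an illustrative scalar problem: outer samples from the prior and the data-generating model estimate the expectation, while an inner prior sample estimates the evidence for each simulated datum. The linear-Gaussian toy model, noise level, and sample sizes here are assumptions, not the paper's combustion setup.

```python
import numpy as np

def expected_information_gain(d, n_outer=2000, n_inner=200, sigma=0.4, seed=0):
    """Two-stage Monte Carlo estimate of expected information gain U(d) for the
    toy model y = theta * d + eps, theta ~ N(0,1), eps ~ N(0, sigma^2):
    U(d) ~= mean_i[ log p(y_i | theta_i, d) - log p(y_i | d) ]."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(n_outer)                   # outer prior samples
    y = theta * d + sigma * rng.standard_normal(n_outer)   # simulated data
    norm = sigma * np.sqrt(2 * np.pi)
    # log-likelihood of each datum under its generating parameter
    log_like = -0.5 * ((y - theta * d) / sigma) ** 2 - np.log(norm)
    # inner prior samples estimate the evidence p(y_i | d) for each outer sample
    theta_in = rng.standard_normal(n_inner)
    resid = (y[:, None] - theta_in[None, :] * d) / sigma
    log_evidence = np.log(np.exp(-0.5 * resid**2).mean(axis=1) / norm)
    return np.mean(log_like - log_evidence)

# Larger |d| makes the data more informative about theta.
u_small = expected_information_gain(0.1)
u_large = expected_information_gain(2.0)
```

For this toy model the exact value is 0.5 * log(1 + d^2 / sigma^2), so the estimator's ranking of designs can be checked directly; in the paper, the same estimator is evaluated with the PC surrogate standing in for the forward model.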
We demonstrate our method first on a simple nonlinear algebraic model, then on shock tube ignition experiments described by a stiff system of nonlinear ODEs. The latter system is challenging to approximate, as certain model observables depend sharply on the combustion kinetic parameters and design conditions, and ignition delays vary over several orders of magnitude. In both these examples, we illustrate the design of single and multiple experiments. We analyze the impact of prior information on the optimal designs, and examine the selection of observables according to their information value. We also investigate numerical tradeoffs between variance in the estimator of expected utility and performance of the stochastic optimization schemes.
Overall we find that inference at optimal design conditions is indeed very informative about the targeted parameters, and that model-based optimal experiments are far more informative than those obtained with simple heuristics. The use of surrogates offers significant computational savings over stochastic optimization with the full model, more than three orders of magnitude in the examples tested here. Moreover, we find that the polynomial surrogate used in optimal experimental design need not be extremely accurate in order to reveal the correct design points; surrogate requirements for optimal design are less stringent than for parameter inference.
Several promising avenues exist for future work. More efficient means of constructing polynomial chaos expansions, adaptively and in conjunction with stochastic optimization, may offer considerable computational gains. Uncertainty in the design parameters themselves can also be incorporated into the framework, as in real-world experiments where the design conditions cannot be precisely imposed; this additional uncertainty could be treated with a hierarchical Bayesian approach. Structural inadequacy of the model is another important issue; how successful is an “optimal” design (or indeed an inference process) based on a forward model that cannot fully resolve the physical processes at hand? Our current experience on design with lower-order polynomial surrogates provides a glimpse into issues of structural uncertainty, but a much more thorough exploration is needed. Finally, the Bayesian optimal design methodology has a natural extension to sequential experimental design, where one set of experiments can be performed and analyzed before designing the next set. A rigorous approach to sequential design, incorporating ideas from dynamic programming and perhaps sequential Monte Carlo, may be quite effective.
The authors would like to acknowledge support from the KAUST Global Research Partnership and from the US Department of Energy, Office of Science, Office of Advanced Scientific Computing Research (ASCR) under Grant No. DE-SC0003908.
| Reaction No. | Elementary Reaction |
| --- | --- |