On optimal experimental designs for Sparse Polynomial Chaos Expansions
Uncertainty quantification (UQ) has received much attention in the literature in the past decade. In this context, sparse polynomial chaos expansions (PCE) have been shown to be among the most promising methods because of their ability to approximate highly complex models at relatively low computational cost. A least-square minimization technique may be used to determine the coefficients of the sparse PCE by relying on the so-called experimental design (ED), i.e. the sample points where the original computational model is evaluated. An efficient sampling strategy is then needed to generate an accurate PCE at low computational cost. This paper is concerned with the problem of identifying an optimal experimental design that maximizes the accuracy of the surrogate model over the whole input space within a given computational budget. A novel sequential adaptive strategy, in which the ED is enriched iteratively by capitalizing on the sparsity of the underlying metamodel, is introduced. A comparative study between several state-of-the-art methods is performed on four numerical models with varying input dimensionality and computational complexity. It is shown that the optimal sequential design based on the S-value criterion yields accurate, stable and computationally efficient PCE.
Keywords: Optimal Experimental Design – Sparse Polynomial Chaos Expansions – Sequential Experimental design
Prediction of the behavior of complex physical and engineering systems is affected by diverse types of uncertainties, including imperfect knowledge of the system parameters and their variability. These uncertainties need to be appropriately considered and their impact on the quality of model predictions needs to be quantified in a rigorous way. In this context, uncertainty quantification (UQ) is a flexible framework within which parametric uncertainties can be effectively tackled. However, UQ may become intractable in the presence of expensive-to-evaluate computational models. In such situations, surrogate modelling techniques have been increasingly adopted to enable complex uncertainty quantification analyses that would otherwise be impossible.
Amongst the available surrogate modeling techniques, polynomial chaos expansions (PCE) are one of the most promising methods for uncertainty quantification (Ghanem and Spanos, 2003; Xiu and Hesthaven, 2005). In PCE, the model response is expanded onto a basis of multivariate polynomials that are orthonormal with respect to a suitable probability measure. They allow one to uncover the relationships between different input parameters and how they affect the model responses. Among the available strategies to construct PCE approximations, this paper focuses on non-intrusive least-square minimization and sparse-regression-based approaches, meaning that the computational model is considered as a "black-box" model. They have been extensively proven to be particularly effective in the presence of complex models (Blatman and Sudret, 2011a). The least-square calculation of the PCE coefficients relies on the so-called experimental design (ED), a sample from the joint distribution of the input parameters, and the corresponding model responses. A common property of least-square techniques is that their accuracy and efficiency (in terms of the number of required full-model evaluations) depend on how the experimental design is sampled. In this context, different sampling strategies have been extensively investigated in the literature (Giunta et al., 2003; Goel et al., 2008; Simpson et al., 2001). This is the focus of the design of experiments (DOE) field of research.
Two different classes of DOE sampling strategies are of particular interest in this paper. The first class involves space-filling designs that aim at filling the space of the input parameters uniformly according to some appropriate measure. Several authors have shown that space-filling designs are well suited to metamodelling (Fang et al., 2005; Simpson et al., 2001). Among such designs, Latin hypercube sampling (LHS) methods have become very popular. Oversampling with respect to the number of unknown coefficients is recommended to obtain reliable results. These designs do not satisfy any specific optimality measure.
The second class of interest in this paper comprises optimal experimental design strategies which rely on existing prior knowledge about the surrogate model properties. They aim at maximizing the quality of a given metamodel with respect to some optimality criteria. A number of optimality criteria have been proposed and discussed in the literature in the context of regression applications (Myers and Montgomery, 2002). Among recent examples involving optimal experimental designs in the context of metamodelling, Shin and Xiu (2016) proposed a simple technique to efficiently generate quasi-optimal experimental designs for ordinary least squares (OLS) regression. It is an optimization algorithm that aims at finding the design of experiments of fixed size such that the obtained results are as close as possible to those obtained with a larger sample, prior to performing model evaluations. The optimality criterion is related to the orthogonality of the columns of the information matrix. An alternative method was introduced in Zein et al. (2013), where a D-optimal criterion is optimized to make the PCE construction computationally feasible.
In this paper, we are interested in selecting the best design of experiments in order to ensure the accuracy of the surrogate model over the whole input parameter space with a minimum number of evaluations of the computational model. Furthermore, recent literature has shown that it is more appropriate to build up the ED sequentially by iteratively enriching an initial ED, especially when model evaluations are time consuming. In the metamodelling context, sequential sampling methodologies have been identified as a more efficient strategy than one-stage sampling methods, where sample points are generated all at once (Jin et al., 2002; Lin et al., 2008). dos Santos and dos Santos (2007) proposed a sequential design to improve the overall accuracy of nonlinear regression metamodels for one-dimensional and two-dimensional inputs. The approach improves the accuracy of the resulting metamodel compared to space-filling designs. Crombecq et al. (2011) performed a comparison and analysis of different space-filling sequential strategies for simulation-based modelling. A review of sequential experimental designs for metamodelling in engineering can be found in Jin et al. (2002).
In this paper, we propose a novel adaptive design strategy for PCE that iteratively updates the experimental design by capitalizing on the sparsity of the underlying metamodel. The idea is to generate new sample points according to an optimality criterion based on the current metamodel estimate, so as to improve its accuracy. The new sampling points are then added to the ED and the metamodel is updated.
The outline of the paper is as follows. In Section 2, sparse polynomial chaos expansions are introduced. In Section 3, several commonly adopted sampling strategies are reviewed. In Section 4, several sequential design strategies are reviewed. In Section 5, a novel sequential approach optimal for sparse PCE is proposed. Finally, in Section 6, a comprehensive comparative study of the different sampling strategies is performed on four numerical models with varying input dimensionality and computational complexity.
2 Sparse Polynomial Chaos Expansions
2.1 Polynomial Chaos Expansions
Let us consider a physical system whose behaviour is represented by a computational model . We suppose that this model is a function of uncertain input parameters that are represented by independent random variables with marginal probability density functions (PDFs) . As a consequence, the model scalar response denoted by is also a random variable:
Provided that the output random variable has a finite variance, it can be represented as a polynomial chaos expansion as follows:
where are the expansion coefficients to be determined, are multivariate polynomials, and is a multi-index that identifies the degree of the multivariate polynomial in each of the input variables . The multivariate polynomials are assembled as the tensor product of univariate polynomials orthogonal w.r.t. the marginal PDF of the corresponding variable, i.e.:
where is an orthogonal polynomial in the -th variable of degree , and if and otherwise. Due to this construction, it follows that the multivariate polynomials are orthonormal w.r.t. the input random vector . For instance, if the input random variables are standard normal, a possible basis is the family of multivariate Hermite polynomials, which are orthogonal with respect to the Gaussian measure. Other common distributions can be used together with basis functions from the Askey scheme. A more general case can be treated through an isoprobabilistic transformation of the input random vector into a standard random vector. For computational purposes, the series in Eq. (2) is truncated after a finite number of terms , yielding the following PC approximation:
where is a finite set of multi-indices of cardinality . A standard truncation scheme, which corresponds to selecting all polynomials in the input variables of total degree not exceeding , is commonly used:
The cardinality of the set reads:
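For concreteness, the total-degree basis and its cardinality (the binomial coefficient in the number of inputs and the degree) can be enumerated directly. The sketch below is illustrative only; the function name `total_degree_indices` is ours:

```python
from itertools import product
from math import comb

def total_degree_indices(M, p):
    """All multi-indices alpha in N^M whose total degree |alpha| does not exceed p."""
    return [a for a in product(range(p + 1), repeat=M) if sum(a) <= p]

# The cardinality matches the closed-form binomial formula C(M + p, p):
M, p = 3, 4
basis = total_degree_indices(M, p)
assert len(basis) == comb(M + p, p)  # 35 terms for M = 3, p = 4
```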
The next step is the computation of the polynomial chaos coefficients . Several intrusive approaches (e.g. the Galerkin scheme) and non-intrusive approaches (e.g. projection, least-square regression) (Sudret, 2008; Xiu, 2010) have been proposed in the literature. We herein focus our analysis on least-square methods, originally introduced by Berveiller et al. (2006); Choi et al. (2004) under the generic name of regression. In this approach, the exact expansion is written as the sum of a truncated series and a residual:
where corresponds to the truncated terms.
The term non-intrusive indicates that the polynomial chaos coefficients are estimated from a set of input realizations , referred to as the experimental design (ED). One can employ Monte Carlo sampling methods, Latin hypercube sampling (LHS, see McKay et al. (1979)), or quasi-random sequences such as the Sobol’ or Halton sequences to sample the points of the ED. Denoting by the set of outputs of the computational model for each point in the ED , the set of coefficients is then computed by minimizing the least-square residual of the polynomial approximation over the ED :
The ordinary least-square solution of Eq. (9) can be formulated equivalently in matrix notation as follows:
in which is the model matrix that contains the values of all the polynomial basis functions evaluated at the experimental design points.
Solving Eq. (10) requires a minimum number of sample points: one typically selects a sample size larger than the number of unknown coefficients to ensure the well-posedness of the regression problem. As a rule of thumb, the number of samples is often chosen proportional to the number of unknown coefficients. Due to the sharp increase in the number of terms in Eq. (5) (see Eq. (7)), the required number of model evaluations becomes prohibitive when the polynomial degree is large. The truncation scheme proposed in Eq. (6) may thus lead to unaffordable calculations, especially in high dimensions. This problem is known as the curse of dimensionality. As a way to mitigate this limitation, a so-called hyperbolic truncation scheme was proposed in Blatman and Sudret (2010). It corresponds to selecting all multi-indices that satisfy:
and . Lower values of limit the number of high order interaction terms, leading to sparse solutions. The value corresponds to the standard truncation shown in Eq. (6).
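A minimal sketch of the hyperbolic truncation rule (names are ours; the small tolerance guards against floating-point round-off on the boundary of the q-norm ball):

```python
from itertools import product

def hyperbolic_indices(M, p, q):
    """Multi-indices whose q-(quasi)norm (sum alpha_i^q)^(1/q) does not
    exceed the target degree p; q = 1 recovers total-degree truncation."""
    full = [a for a in product(range(p + 1), repeat=M) if sum(a) <= p]
    return [a for a in full if sum(ai ** q for ai in a) ** (1.0 / q) <= p + 1e-10]

# Lower q prunes high-order interaction terms while keeping univariate ones:
assert len(hyperbolic_indices(3, 4, 0.5)) < len(hyperbolic_indices(3, 4, 1.0))
assert (4, 0, 0) in hyperbolic_indices(3, 4, 0.5)   # pure terms survive
```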
2.2 Sparse Polynomial Chaos Expansions
To further sparsify the solution, without sacrificing possibly important interaction terms, Blatman and Sudret (2011a) proposed an adaptive sparse PCE calculation strategy based on the least angle regression (LAR) algorithm (Efron et al., 2004). Based on a given experimental design, only regressors that have the greatest impact on the model response are retained. LAR consists in directly modifying the least-square minimization problem in Eq. (9) by adding a penalty term of the form , i.e. solving:
in which is the regularization term that forces the minimization to favour sparse solutions. In this study, we adopt the hybrid least angle regression (LAR) method for building sparse PCE. For further details, the reader is referred to Blatman and Sudret (2011a).
2.3 Measures of accuracy
Once the metamodel is built, it is important to assess its quality. A natural measure is provided by the mean square error , defined as:
where the validation set consists of samples of the input vector that do not belong to the experimental design, and is the surrogate model. In practice, one needs a sufficiently large validation set (typically in the order of ) to achieve a stable estimate of the . A low corresponds to an accurate metamodel. A common variant of the MSE in Eq. (14) is the relative mean square error , defined as:
where is the variance of the model response or its estimator:
In practice, this error is only affordable for analytical functions, since it requires evaluating a large number of model responses. As an alternative, an error estimate based on the ED may be used instead. In particular, the leave-one-out (LOO) error, denoted as , has been proposed (Geisser, 1975; Stone, 1974) in the context of PCE. The LOO error is a cross-validation technique developed in statistical learning theory. It consists in sequentially rebuilding metamodels, denoted , using the original experimental design with one point excluded, and then calculating the prediction error at the withheld point (see, e.g., Blatman and Sudret (2011a)). The general formulation of the leave-one-out error is:
When the model is a linear superimposition of orthogonal terms, as is the case for PCE, the LOO error does not require the actual calculation of independent surrogates on experimental design points each. Instead, it can be calculated analytically from a single surrogate based on all the points according to the following equation:
where is the diagonal term of matrix .
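The analytic LOO shortcut can be sketched in a few lines and checked against the brute-force definition; `loo_error` is an illustrative name and the snippet assumes a plain OLS fit:

```python
import numpy as np

def loo_error(A, y):
    """Analytic leave-one-out MSE for an OLS fit y ~ A c, using the
    diagonal h_i of the hat matrix A (A^T A)^{-1} A^T."""
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    h = np.diag(A @ np.linalg.solve(A.T @ A, A.T))
    residual = y - A @ c
    return np.mean((residual / (1.0 - h)) ** 2)

# Sanity check against the brute-force definition (refit N times):
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
y = rng.standard_normal(20)
brute = []
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    ci, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
    brute.append((y[i] - A[i] @ ci) ** 2)
assert np.isclose(loo_error(A, y), np.mean(brute))
```

For OLS the shortcut is an exact identity, so no refitting is ever needed in practice.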
3 Space-filling designs of experiments
The LAR algorithm allows one to automatically detect the significant terms in the PC expansion. However, its accuracy greatly depends on the sampling strategy adopted to generate the ED. Space-filling techniques are a common choice in this context. This class of designs aims at filling the input space as uniformly as possible for a given number of samples . In this section, we briefly review the space-filling sampling techniques adopted in our study. Without loss of generality, we consider independent uniform sampling of the unit hypercube. The more general case (for example, sampling of non-uniform distributions) can be handled through isoprobabilistic transforms (e.g. the Nataf or Rosenblatt transformation (Rosenblatt, 1952)).
3.1 Monte Carlo sampling
Monte Carlo is a popular method for generating samples from a random vector. Samples are drawn randomly according to the probability distributions of the input variables, using a random number generator. However, the MC sampling method is not particularly efficient. A problem of clustering arises when small sample sizes are considered. In other words, it is likely that there are regions of the parameter space that are scarcely covered by the sampling, as well as regions in which random points are clustered together. Therefore, to reliably cover the entire parameter space, a large number of simulations is often needed, hence resulting in extensive computational costs.
3.2 Quasi Monte Carlo sampling
Quasi Monte Carlo sampling, also known as low-discrepancy sequences (LDS), is a family of sampling strategies based on the deterministic placement of sample points as uniformly as possible, avoiding large gaps or clusters. Uniformity is quantified in terms of discrepancy from the uniform distribution. The advantage of using low-discrepancy sequences is in general a faster convergence rate of stochastic estimators with respect to the standard MC method. Several sequences are available in the literature, e.g. the Halton (Halton, 1960), Faure (Faure, 1982) and Sobol’ (Sobol’, 1967) sequences, all of which are based on the Van der Corput sequence. Several practical studies have shown that the Sobol’ sequence outperforms the other LDS in regression applications (Kucherenko et al., 2015); it is therefore the sequence of choice in this paper.
3.3 Latin hypercube sampling
Latin hypercube sampling is widely applied in computational engineering. It was originally introduced by McKay et al. (1979) as an improvement over the Monte Carlo sampling technique. It is a form of stratified sampling that improves convergence w.r.t. MCS thanks to a better representation of the sample space. It was shown in Stein (1987) that the variance of the LHS sample mean is asymptotically (i.e. in the limit of a large number of samples) lower than that of traditional MCS. In its original formulation in McKay et al. (1979), LHS consists in dividing the domain of each input variable into intervals of equal probability, followed by drawing a random sample inside each interval. A Latin hypercube design of points in dimensions can be built from a matrix , where each column is a random permutation of the sequence . The design points are then defined as follows:
where is a realization of a uniform random variable . It is easy to verify that the one-dimensional projections of an LHS design are uniformly spaced. However, this property is not guaranteed in principle in higher-dimensional space. Because of the random nature of LHS, some configurations exist in which the input variables are correlated, resulting in poor space-filling qualities. Several modifications of the sampling scheme have been proposed in the literature, leading to the so-called optimal Latin hypercube sampling designs (OLHS). One common strategy is to combine standard LHS with a space-filling criterion such as entropy, maximin distance (Pronzato and Müller, 2012), low discrepancy or minimum correlation between the samples. In general, an OLHS is achieved by creating several independent LHS designs of the desired size, then choosing the one that best satisfies the chosen optimality criterion.
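The basic LHS construction of this section (one random permutation of the strata per dimension, plus uniform jitter inside each stratum) can be sketched as follows; names are illustrative:

```python
import numpy as np

def latin_hypercube(N, M, seed=None):
    """Basic LHS on [0,1]^M: one uniformly drawn point inside each of the
    N equiprobable strata per dimension, with randomly paired strata."""
    rng = np.random.default_rng(seed)
    # Each column is an independent random permutation of the strata 0..N-1.
    P = np.column_stack([rng.permutation(N) for _ in range(M)])
    U = rng.random((N, M))          # uniform jitter inside each stratum
    return (P + U) / N

X = latin_hypercube(8, 2, seed=42)
# Every 1-D projection hits each of the N strata exactly once:
for j in range(2):
    assert sorted(np.floor(X[:, j] * 8).astype(int)) == list(range(8))
```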
The main difficulty with LHS designs is that the sampling size has to be selected a priori. To circumvent this problem, Blatman and Sudret (2010); Rennen (2009); Wang (2003) developed the so-called nested-LHS designs (NLHS). The main idea is to enrich an existing LHS design such that the resulting combined design meets the LHS requirements as closely as possible. Enriching an existing LHS of size with points can be achieved by substituting the partitioning of the unit hypercube in Eq. (19) with a finer one of size . Then, empty partitions are identified and a random sample is drawn in each of them. While this procedure does not guarantee that the final design is still an LHS (due to the possibility that multiple points of the original design lie in the same stratum after the new partitioning), the properties of the final design tend to be closer to those of an LHS. For more details, see Blatman and Sudret (2010); Wang (2003).
3.4 Measures of quality for space filling designs
A number of different criteria for assessing the space filling quality of a particular experimental design can be found in the literature. In the following, we briefly recall two widely used space-filling criteria, allowing for enhanced discrimination when choosing among candidate designs. The first one is the maximin distance criterion (Johnson et al., 1990), which aims at maximizing the smallest distance between neighboring points, i.e.:
where is the Euclidean distance between two sample points and . A second criterion was proposed to assess the uniformity of the design based on discrepancy measures. To this purpose, a common and computationally efficient tool is the centred discrepancy criterion, which reads:
A design is called uniform if it shows a low centred discrepancy.
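As an illustration, the maximin criterion of this section reduces to a few lines; `maximin_distance` is our name, and the snippet assumes the design points are given as rows of an array:

```python
import numpy as np

def maximin_distance(X):
    """Smallest pairwise Euclidean distance of a design
    (larger is better under the maximin criterion)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)     # ignore zero self-distances
    return D.min()

# The four corners of the unit square have maximin distance 1:
corners = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
assert maximin_distance(corners) == 1.0
```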
4 Optimal designs of experiments
Optimal designs leverage the knowledge about the properties of the surrogate model that is being considered. This can be achieved by searching for a design that optimizes a given optimality criterion. For instance, in the context of regression-based surrogate models (e.g. PCE), Zein et al. (2013) proposed a sampling method for regression-based polynomial chaos expansion that consists in maximizing the determinant of the information matrix in Eq. (10) for a given number of samples, by using the D-optimality criterion. Recently, Shin and Xiu (2016) proposed a point selection approach to construct an optimal experimental design with the aim of providing an accurate OLS estimation. The optimal design is determined by maximizing a quantity that combines both the determinant and the column orthogonality of the information matrix, denoted the -value criterion. In the following sections, we briefly review the D-optimality criterion and the -value criterion for regression-based PCE.
4.1 D-optimal designs
The D-optimality criterion aims at minimizing the variance of the estimated regression parameters. The latter is derived from Eq. (10) as follows:
Under the assumption of independent normal model errors with constant variance (), we obtain:
Reducing the variance of the PCE coefficients leads to the minimization of the determinant . To avoid the matrix inversion, one equivalently maximizes the determinant of the information matrix . The -optimal design, denoted , is then the solution of the following optimization problem over the input design space :
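A lightweight sketch of D-optimal row selection by point exchange follows (a simplified stand-in for the Fedorov exchange algorithm mentioned later; all names are illustrative and the search only reaches a local optimum):

```python
import numpy as np

def d_optimal_exchange(Phi, n, seed=0, max_sweeps=20):
    """Select n rows of the candidate model matrix Phi so as to increase
    det(Phi_S^T Phi_S) by simple point exchange."""
    rng = np.random.default_rng(seed)
    K = Phi.shape[0]
    S = list(rng.choice(K, size=n, replace=False))

    def logdet(rows):
        sign, val = np.linalg.slogdet(Phi[rows].T @ Phi[rows])
        return val if sign > 0 else -np.inf

    best = logdet(S)
    for _ in range(max_sweeps):
        improved = False
        for i in range(n):                       # try replacing each design point
            for k in set(range(K)) - set(S):     # ...by each unused candidate
                trial = S[:i] + [k] + S[i + 1:]
                val = logdet(trial)
                if val > best:
                    S, best, improved = trial, val, True
        if not improved:                         # local optimum of exchange moves
            break
    return sorted(S)
```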
4.2 S-optimal designs
We herein briefly review the optimal approach developed in Shin and Xiu (2016). The goal is to find the optimal -subset within a large-size sample of size (typically in the order of ) that provides the most accurate PCE. It is a subset selection problem where potential solutions are viewed as a matrix consisting of exactly different rows of . The minimization problem in Eq. (9) over the ED reads:
Equivalently, the minimization problem in Eq. (9) over the large ED, , reads:
The aim is to find a subset such that the estimated PCE coefficients based on the optimal design are as close as possible to those based on the large ED, . This is mathematically equivalent to the following optimization problem:
This can be intuitively interpreted as a row selection problem, such that:
where denotes a row selection matrix, each row of which is a length- vector with all zeros except for a one at the corresponding location . As in many subset selection algorithms, a QR decomposition is used to find the most representative rows. Let denote the model matrix obtained when considering and the corresponding model evaluations . Let be the factorization of . It is then possible to prove that:
where is the least square solution to
where is the projection operator to the orthogonal complement of the column space of .
It is then clear that achieving a smaller leads to the minimization of the difference . To do so, one seeks to: (i) maximize the determinant of , so that the variance of is minimized; and (ii) maximize the column orthogonality of , so that the correlation between design points is minimized. For this purpose, a selection criterion combining these two conditions was proposed in Shin and Xiu (2016). This criterion is hereafter called the -value. For the sake of simplicity, the -value will be defined for a matrix of size as follows:
where is the -th column of . The optimal sample is thus the solution of the following optimization problem:
It is worth noting that in the context of PCE, the columns of the model matrix consist of orthonormal basis elements. The decomposition is then not necessary and the matrix can be used instead of . Thus, we seek:
The optimality condition Eq. (33) can then be equivalently rewritten as follows:
In their recent study, Shin and Xiu (2016) considered a standard truncation scheme to select the set of basis elements. Although their optimal design has proven powerful, it may still face challenges in high-dimensional cases. As already mentioned, one can reduce the size of the candidate set by considering other truncation strategies, e.g. the hyperbolic truncation. However, it may not be known in advance which candidate set will perform best. This motivates the need for an adaptive strategy that iteratively selects the optimal candidate basis and thus the associated optimal design.
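A minimal sketch of the S-value computation described above; the normalization (ratio of the Gram determinant's square root to the product of the column norms, taken to the power of one over the number of columns) follows our reading of Shin and Xiu (2016) and should be treated as illustrative. By Hadamard's inequality the quantity lies in [0, 1], with 1 attained exactly for mutually orthogonal columns:

```python
import numpy as np

def s_value(A):
    """Illustrative S-value: sqrt(det(A^T A)) divided by the product of
    the column norms, normalized by the number of columns n."""
    n = A.shape[1]
    gram_det = max(np.linalg.det(A.T @ A), 0.0)   # clip tiny negative round-off
    col_norms = np.prod(np.linalg.norm(A, axis=0))
    return (np.sqrt(gram_det) / col_norms) ** (1.0 / n)

assert np.isclose(s_value(np.eye(4)), 1.0)        # orthogonal columns: maximal
assert s_value(np.array([[1., 1.], [0., 1.]])) < 1.0   # correlated columns
```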
5 Sequential experimental designs
In the presence of expensive computational models, it can be preferable to adopt a greedy iterative strategy in the construction of an experimental design for surrogate modelling. The class of so-called sequential experimental designs is based on constructing a relatively small initial space-filling design, training a surrogate model and assessing its accuracy. If needed, the initial design can then be enriched sequentially until a stopping criterion (e.g. a target number of points or the relative error in Eq. (15)) is satisfied.
In this paper we explore two different strategies to address this problem. The first is to enrich the initial experimental design according to a space-filling criterion, without assuming any knowledge of the functional form of the surrogate model. The second assumes some degree of knowledge of the behaviour of the surrogate model. Note that, in any case, none of the criteria used in this study depend on the model responses at the experimental design points.
In the following, the initial experimental design of size is enriched at each iteration with data points. The new points are to be selected, according to a specific criterion, from an available ED of size such that .
5.1 Sequential selection using space filling criteria
In this section we review a sequential approach for enriching the initial experimental design according to a maximin distance criterion. This strategy was proposed by Kersaudy (2015). Note that this idea was originally developed by Ravi et al. (1991) as a method for the dispersion problem, and was later applied by Rennen (2009) to subset selection for Kriging metamodels, where it is referred to as greedy maximin selection. At each iteration, the algorithm selects the point that maximizes the minimum distance to all the points in the current experimental design . The resulting experimental design of size is therefore generated according to the following algorithm:
(i) Initialize the current design with the initial experimental design (note that it is a subset of the candidate set);
(ii) add to the design the candidate point that maximizes the minimum distance to all points already selected;
(iii) repeat step (ii) until the set contains the desired number of points .
The main advantage of this method is that it is easy to implement. Its main disadvantage is that one needs to compute all the distances between every point of and every point , which can be challenging in high dimension. In the following, the design obtained in this way is referred to as the sequential maximin design. A similar idea has also been proposed in Iooss et al. (2010), where the centred discrepancy measure in Eq. (21) is used instead of the maximin criterion. In this case, the algorithm enriches the initial experimental design with points that minimize its discrepancy. It operates in the same way as the greedy maximin algorithm. In this paper, we choose to limit our study to the maximin criterion based on the Euclidean distance.
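The greedy maximin selection described above can be sketched as follows (illustrative names, Euclidean distance):

```python
import numpy as np

def greedy_maximin_enrich(design, candidates, n_add):
    """Greedy maximin selection: repeatedly move into the design the
    candidate whose minimum distance to the current design is largest."""
    X = [np.asarray(p, dtype=float) for p in design]
    C = [np.asarray(p, dtype=float) for p in candidates]
    for _ in range(n_add):
        # distance from each remaining candidate to its nearest design point
        scores = [min(np.linalg.norm(c - x) for x in X) for c in C]
        X.append(C.pop(int(np.argmax(scores))))
    return np.array(X)

X = greedy_maximin_enrich([[0., 0.]], [[0.1, 0.1], [1., 1.], [0.5, 0.5]], 1)
assert np.allclose(X[-1], [1., 1.])   # the farthest candidate is added first
```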
5.2 Adaptive selection of the optimal design
The optimal design strategies presented until now rely on the assumption that the regressors in an OLS problem are known. However, in the context of sparse PCE, the optimal polynomial basis is in general not known a priori. To address this problem, we propose an iterative strategy in which the estimated sparse PCE basis is automatically updated while the experimental design is enriched. To construct a sparse PCE, we propose the use of a basis selection method based on compressive sensing techniques. In particular, we rely herein on the least angle regression (LAR) algorithm (Efron et al., 2004). Other alternatives (e.g. orthogonal matching pursuit (OMP) (Jakeman et al., 2015; Mallat and Zhang, 1993) or stepwise regression (Hesterberg et al., 2008)) are also directly compatible with our procedure.
5.2.1 Sparse PCE: Least Angle regression
The LAR algorithm is an iterative regression method applied in the context of PCE for basis selection (Blatman and Sudret, 2011a). The basic idea of LAR is to select a parsimonious number of regressors that have the greatest impact on the model response, among a possibly large set of candidates. LAR eventually provides a sparse polynomial chaos expansion, i.e. one that only contains a small number of regressors compared to a classical full representation.
It consists in iteratively moving regressors from a candidate set to an active set. The next regressor is chosen based on its correlation with the current residual. At each iteration, analytical relations are used to identify the best set of regression coefficients for that particular active set, by imposing that every active regressor is equicorrelated with the current residual. At the end of the process, the optimal sparse PC basis is chosen according to a suitable measure of accuracy. In particular, the leave-one-out (LOO) error, , is a natural candidate in the context of PCE.
The LAR algorithm in the context of PCE (Blatman and Sudret, 2011a) can be summarized as follows:
Initialize the set of candidate regressors to the full basis and the active set of regressors to .
Initialize all coefficients equal to . Set the residual equal to the output vector.
Find the regressor that is most correlated with the current residual
Move all the coefficients of the current active set towards their least-square values, until the active regressors are as correlated with the residual as some other regressor in the candidate set. This regressor then becomes the most correlated with the residual and will be selected at the next iteration.
Calculate and store the accuracy estimates, , for the current iteration
Update all the active coefficients and move from the candidate set to the active set
Repeat the previous steps (iii-v) until the size of the active set is equal to
Choose the best iteration based on the prescribed error estimate, .
Note that after detecting the relevant basis by LAR, the coefficients are recomputed using the ordinary least-square method (hybrid-LAR). Readers are referred to Blatman and Sudret (2010) for further details.
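A full LAR implementation is beyond the scope of a short snippet. As a compact illustration of the same select-then-refit pattern, the sketch below uses greedy correlation-based forward selection (OMP-style, one of the alternatives mentioned in Section 5.2) followed by an OLS refit on the retained basis, mimicking the hybrid step. All names are illustrative:

```python
import numpy as np

def hybrid_forward_selection(Phi, y, k):
    """Pick k regressors by greatest correlation with the running residual
    (an OMP-style stand-in for LAR), then refit them by ordinary least
    squares, mimicking the hybrid step described above."""
    active, residual = [], y.copy()
    for _ in range(k):
        corr = np.abs(Phi.T @ residual)
        corr[active] = -np.inf               # never pick a regressor twice
        active.append(int(np.argmax(corr)))
        c, *_ = np.linalg.lstsq(Phi[:, active], y, rcond=None)
        residual = y - Phi[:, active] @ c
    coeffs = np.zeros(Phi.shape[1])
    coeffs[active] = c
    return coeffs, sorted(active)

# With orthonormal regressors the true sparse support is recovered exactly:
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((50, 8)))
y = 2.0 * Q[:, 1] - 1.0 * Q[:, 4]
coeffs, active = hybrid_forward_selection(Q, y, 2)
assert active == [1, 4]
assert np.allclose(coeffs[[1, 4]], [2.0, -1.0])
```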
5.2.2 Adaptive selection of the optimal design
The LAR algorithm presented in the previous subsection allows one to detect the most relevant regressors in the PC expansion on the basis of the current experimental design. However, it relies on an arbitrarily fixed experimental design. To address this problem, we propose an adaptive strategy that iteratively updates both the experimental design and the estimated sparse PCE basis. The initial ED, generated using a space-filling technique, is enriched sequentially according to an optimality criterion. At each iteration, the optimality criterion is reformulated based on the updated sparse basis. The proposed algorithm works as follows:
Initial experimental design: we start with a small sample set of size , typically produced by LHS or quasi-Monte Carlo sequences.
Basis selection: a degree-adaptive sparse PCE (Blatman and Sudret, 2011b) is then constructed using LAR to identify the sparse set of polynomials (out of a candidate set) that best approximates the computational model based on the current experimental design.
Formulation of the model matrix: This step is the core of the sequential process. The information matrix is updated according to the basis obtained in the previous step. The optimality criterion (-optimal or -optimal, see Sections 4.1 and 4.2, respectively) is then re-evaluated on to identify the next set of candidate points .
Experimental design augmentation: The identified - or -optimal data points are added to the current experimental design, at which the computational model is then evaluated. The sparse PCE is subsequently updated.
The procedure (steps ii, iii and iv) is repeated until a stopping criterion is fulfilled, e.g. a target accuracy of the metamodel (estimated with cross-validation techniques) or a maximum number of sample points. For easier comparison between the methods, we fix the maximum number of sample points as the stopping criterion in the following.
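The loop above can be sketched in a one-dimensional toy setting. Here a full (non-sparse) Legendre basis stands in for the LARS selection step and D-optimality serves as the enrichment criterion; both simplifications, and all function names, are illustrative only.

```python
import numpy as np

def legendre_design(x, degree):
    """Model matrix of Legendre polynomials evaluated at points x in [-1, 1]."""
    return np.polynomial.legendre.legvander(x, degree)

def sequential_design(model, degree, n_init, n_add, rng):
    """Minimal sketch of the sequential strategy (steps i-iv) in 1D."""
    x = rng.uniform(-1.0, 1.0, n_init)          # (i) initial design
    candidates = np.linspace(-1.0, 1.0, 201)    # candidate pool
    for _ in range(n_add):
        A = legendre_design(x, degree)          # (ii)-(iii) basis + info matrix
        # (iii) evaluate the enrichment criterion for each candidate point:
        # here, log-determinant of the augmented information matrix.
        scores = []
        for c in candidates:
            row = legendre_design(np.array([c]), degree)
            A_new = np.vstack([A, row])
            scores.append(np.linalg.slogdet(A_new.T @ A_new)[1])
        x = np.append(x, candidates[int(np.argmax(scores))])  # (iv) enrich
    # Fit the PCE coefficients on the final design by least squares.
    coef, *_ = np.linalg.lstsq(legendre_design(x, degree), model(x), rcond=None)
    return x, coef
```

For a cubic model the final least-squares fit on the enriched design recovers the exact Legendre coefficients of $t^3 = 0.6\,P_1(t) + 0.4\,P_3(t)$.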
The effectiveness of the adaptive methods depends on the optimization algorithm that searches for the optimal sample points. Various search strategies have been proposed in the literature. In this work, the D-optimal experimental design is generated using a built-in Matlab function from the Statistics Toolbox, which implements the Fedorov exchange algorithm (Miller and Nguyen, 1994). The S-optimal experimental design is generated using the greedy algorithm proposed in Shin and Xiu (2016), to which the reader is referred for implementation details.
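A minimal sketch of greedy selection driven by the S-value of Shin and Xiu (2016), here taken as the square root of the determinant of the information matrix normalized by the product of its column norms (so that orthogonal, well-scaled columns score highest). The published algorithm is more refined; this version assumes the starting subset already yields a full-rank matrix.

```python
import numpy as np

def log_s_value(A):
    """log of the (unnormalized) S-value: sqrt(det(A^T A)) divided by
    the product of the column norms of A."""
    sign, logdet = np.linalg.slogdet(A.T @ A)
    if sign <= 0:
        return -np.inf
    return 0.5 * logdet - np.sum(np.log(np.linalg.norm(A, axis=0)))

def greedy_s_optimal(rows, n_select, n_start):
    """Greedy sketch: starting from the first n_start candidate rows,
    repeatedly add the candidate row that maximizes the S-value."""
    chosen = list(range(n_start))
    for _ in range(n_select - n_start):
        remaining = [i for i in range(len(rows)) if i not in chosen]
        best = max(remaining,
                   key=lambda i: log_s_value(rows[chosen + [i]]))
        chosen.append(best)
    return chosen
```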
6 Results and Discussion
In this section, we investigate the efficiency of the proposed adaptive sequential sampling methods, when either the D-optimality or the S-value criterion is used to generate the sequential samples. For the sake of clarity, the sequential sampling strategies corresponding to these two settings are denoted D-optimal and S-optimal, respectively. A comparative study with other commonly used sampling methods is performed. Four other sampling techniques are considered: a maximin-selected LHS, a correlation-selected LHS, Sobol' sequences and a sequential space-filling design, denoted maximin. The first LHS variant is obtained by selecting, from a set of five candidate LHS designs, the one with the largest maximin distance; the second is obtained similarly, but with a minimum-correlation selection criterion. For this purpose, we use the corresponding built-in Matlab function.
As already mentioned, D-optimal designs are generated with a built-in Matlab function. To compare the space-filling properties of the sample points, the measures described in Section 3.4 are used. In addition, the condition number (CN) of the information matrix is considered as a measure of the orthogonality of its columns.
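As a small illustration of the condition-number measure, the snippet below compares the information matrix of a deliberately ill-conditioned monomial basis with an orthonormalized version of the same columns (an assumed toy setup, not the paper's basis):

```python
import numpy as np

# Condition number of the information matrix as an orthogonality
# measure: kappa ~ 1 for orthonormal columns and grows as the columns
# become collinear.
x = np.linspace(0.0, 1.0, 50)
A = np.vander(x, 6, increasing=True)     # monomials 1, x, ..., x^5
Q, _ = np.linalg.qr(A)                   # orthonormalized columns
cn_monomial = np.linalg.cond(A.T @ A)    # Hilbert-like, very large
cn_orth = np.linalg.cond(Q.T @ Q)        # ~1
```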
In this section, the proposed adaptive sampling methods are validated on four models with varying input dimensionality and complexity. They comprise two analytical functions, namely the Ishigami function and the Sobol’ g-function, and two finite element models, namely a truss structure and a 1D diffusion problem. Because all of the presented benchmarks are inexpensive to evaluate, the relative generalization error (RMSE) on a validation set is then employed to assess the accuracy of the resulting polynomial chaos expansions. All of the PCE metamodels, as well as the LARS basis selection, are developed within the UQLab software framework (Marelli and Sudret, 2014, 2015).
Since the performance of the ED is sensitive to the random nature of the sampling methods, each analysis is replicated 50 times. To generate randomized Sobol' sequences, several parallel sequences are generated from different starting points. The results are then summarized in box-plots. These replications aim at assessing the effect of stochastic variations in the generation of the designs.
6.1 Ishigami function
The Ishigami function (Ishigami and Homma, 1990) is a well-known benchmark for polynomial chaos expansions. It is a three-dimensional analytical function, non-monotonic and highly non-linear, given by:
$$f(\mathbf{x}) = \sin x_1 + a \sin^2 x_2 + b\, x_3^4 \sin x_1$$
The input random vector consists of three i.i.d. uniform random variables $x_i \sim \mathcal{U}(-\pi, \pi)$. The values $a = 7$ and $b = 0.1$, standard in the literature, are chosen for this example. Note that this model is sparse in nature.
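For reference, a compact implementation of the Ishigami function and its Monte Carlo sampling (the sample size is arbitrary):

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Ishigami function; a = 7, b = 0.1 are the values most commonly
    used in the literature."""
    x1, x2, x3 = x[..., 0], x[..., 1], x[..., 2]
    return np.sin(x1) + a * np.sin(x2) ** 2 + b * x3 ** 4 * np.sin(x1)

# i.i.d. uniform inputs on [-pi, pi], as in the benchmark definition.
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(10_000, 3))
y = ishigami(X)
```

The analytical mean of the function is $a/2 = 3.5$, which the Monte Carlo estimate reproduces.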
A sparse, degree-adaptive PCE approach with maximum degree in the range 3-20 is chosen. The PCE coefficients are computed with the hybrid-LAR method (Blatman and Sudret, 2010). The accuracy of the PCE is assessed with a MCS-based validation set. The sequential sampling methods (S-optimal, D-optimal and maximin) are initialized with an experimental design generated by maximin LHS, iteratively augmented with additional points. For the other sampling methods (the two LHS variants), predefined ED sizes are considered. As already mentioned, the analysis is replicated 50 times to assess the statistical uncertainty. The performance of the different sampling methods is compared in terms of the RMSE on the validation set for varying ED sizes.
As a first step, the performance of the various design strategies with respect to the space-filling quality measures introduced in Section 3.4 is compared. The average values of the different measures, calculated over the repetitions for various sizes of the ED, are plotted in Figure 1. The top-left panel of Figure 1 reports results for the maximin distance criterion. As expected, the sequential maximin design systematically shows the largest minimum distance across all designs. All the other designs show a consistent decrease in the minimum distance with increasing ED size, leading to a deterioration of their space-filling properties. The worst results are obtained with the D-optimal design.
In the same format, the top-right panel of Figure 1 reports results for the discrepancy criterion. All the non-sequential strategies tested (LHS and QMC) show an approximately constant discrepancy regardless of the experimental design size. The sequential strategies, in contrast, show a steady increase in discrepancy with the experimental design size, meaning that they tend to produce samples that deviate from the uniform distribution. In other words, design strategies that are optimal for regression problems do not necessarily exhibit a low-discrepancy behaviour.
The condition number of the model matrix is plotted against the size of the ED in the lower panel of Figure 1. As expected, S-optimal designs show a consistently smaller condition number for all experimental design sizes. The D-optimal design and the two LHS variants exhibit comparable behaviours, with a monotonic increase in condition number. The remaining maximin ED and Sobol' sequences show a somewhat more erratic behaviour. Overall, Figure 1 shows that S-optimal designs provide a good balance between the three criteria.
To gain an intuitive understanding of the different experimental designs, Figure 2 compares 2D scatter plots and histograms of the final EDs generated by LHS and the three sequential designs maximin, D-optimal and S-optimal. Uniform space coverage is obtained only with LHS. The designs obtained with the maximin and S-optimal strategies appear fairly uniformly distributed, with some accumulation on the boundaries. On the contrary, D-optimal sample points tend to cluster near the boundaries of the design space.
The metamodelling performance of the different sampling strategies is compared in Figure 3. The boxplots represent the median RMSE, the lower and upper percentiles and the extreme values (outliers) of the 50 independent runs for varying ED sizes. We observe that the S-optimal design consistently outperforms the other sampling strategies. In addition, it generally shows a more stable behaviour, with smaller variability between repetitions, especially as the size of the experimental design increases. Accurate results are also achieved with the maximin design, whose improvement appears more pronounced for larger experimental designs. Comparable results are obtained with the two LHS variants and Sobol' sequences.
6.2 Sobol' g-function
Let us consider now the so-called Sobol' g-function:
$$g(\mathbf{x}) = \prod_{i=1}^{M} \frac{|4 x_i - 2| + a_i}{1 + a_i}$$
The input parameters are i.i.d. uniform random variables over $[0, 1]$.
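A compact implementation of the g-function; the dimension and the coefficients $a_i$ below are illustrative choices, not those used in the paper:

```python
import numpy as np

def sobol_g(x, a):
    """Sobol' g-function: each factor has unit mean over U(0, 1), so
    the function itself has unit mean; large a_i make x_i nearly inert."""
    return np.prod((np.abs(4.0 * x - 2.0) + a) / (1.0 + a), axis=-1)

a = np.array([0.0, 1.0, 4.5, 9.0, 99.0])       # example coefficients
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(20_000, 5))    # i.i.d. U(0, 1) inputs
y = sobol_g(X, a)
```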
The constants $a_i \geq 0$ control the complexity of the function: the smaller $a_i$, the more influential the input $x_i$. The input dimension and the values of the $a_i$ are fixed for this numerical application. The same analysis presented for the Ishigami function was conducted to illustrate the performance of the proposed approach for this benchmark. The Sobol' g-function is highly complex (strongly nonlinear, non-monotonic and non-polynomial), therefore requiring a high polynomial degree even in low dimension. The corresponding truncation options are used for the PCE calculation. The initial experimental design is generated by a maximin LHS and augmented iteratively with additional points.
The performance of the various design strategies with respect to the space-filling quality measures has been compared, but is not shown here because the general conclusions drawn for the Ishigami function still hold. The metamodelling performance of the different sampling strategies is compared in Figure 4. As in the previous example, the S-optimal design outperforms the other sampling methods. Good performance is also achieved with the maximin design, as for the Ishigami function. Furthermore, the S-optimal design shows much smaller variance across replications (smaller boxplots), leading to the conclusion that this class of designs is robust. Sobol' sequences give the least accurate results. This behaviour may be due to the increase in the dimensionality of the problem, coupled with the tendency of Sobol' sequences to cluster on regular sub-manifolds in high dimension (Morokoff and Caflisch, 1994).
6.3 Maximum deflection of a truss structure
Let us consider the truss structure shown in Figure 5. Ten independent input variables are considered: the six vertical loads, the cross-sectional area and Young's modulus of the horizontal bars, and the cross-sectional area and Young's modulus of the vertical bars (Figure 5).
The model input parameters are described by independent random variables whose distributions are reported in Table 1. An isoprobabilistic transformation of the input vector into standard normal variables is employed. In the underlying deterministic problem, the truss deflection is computed with a finite-element Matlab code. The same study as in the two previous examples is conducted for this 10-dimensional model.
A sparse, degree-adaptive PCE approach is chosen, with the maximum degree selected within a prescribed range. The accuracy of the PCE is assessed with a MCS-based validation set. The sequential sampling methods (S-optimal, D-optimal and maximin) are initialized with an experimental design generated by maximin LHS, augmented iteratively with additional points. For the other sampling methods (the two LHS variants), predefined ED sizes are considered. The performance of the different sampling methods is compared in terms of the RMSE on the validation set for varying ED sizes.
As in the previous cases, the performance of the various design strategies with respect to the space-filling quality measures was compared; the results are not shown for the sake of conciseness, as they are in line with the previous two cases. The metamodelling performance of the different sampling strategies is compared in Figure 6. As in the two previous examples, the S-optimal design performs better than the other sampling methods, with a faster decrease of the validation error. The resulting computational budget savings grow with the size of the ED. Fairly accurate results are also obtained with the maximin design, and similar accuracies with the two LHS variants. Again, poor performance is observed for Sobol' sequences and D-optimal designs, especially for large experimental designs.
6.4 One-dimensional diffusion problem
In this section, we consider a standard benchmark in UQ computations (Shin and Xiu, 2016), a one-dimensional stochastic diffusion problem subject to uncertainty in the diffusion coefficient. Let us consider the following boundary-value problem, defined over a one-dimensional domain:
Furthermore, assume that the diffusion coefficient is a lognormal random field, obtained as the exponential of an underlying standard stationary Gaussian random field with an exponential autocorrelation function $\rho(x, x') = \exp(-|x - x'|/\ell)$, where $\ell$ is the correlation length.
For the numerical test, the numerical values of the field parameters and of the correlation length are fixed. The number of terms to be retained in the truncated expansion (40) is selected such that a prescribed fraction of the field variance is preserved; the resulting number of terms makes this the highest-dimensional example considered. The scalar model response is taken as the quantity of interest.
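The variance-based truncation step can be sketched numerically: discretize the exponential autocorrelation kernel on a grid, and keep the smallest number of eigenmodes whose cumulative eigenvalue sum reaches a prescribed fraction of the total variance. The correlation length and threshold below are illustrative, not the paper's values.

```python
import numpy as np

def kl_truncation(ell, threshold, n_grid=200):
    """Number of Karhunen-Loeve modes needed to retain a prescribed
    fraction of the variance of a field with exponential
    autocorrelation exp(-|s - t| / ell) on the unit interval."""
    s = np.linspace(0.0, 1.0, n_grid)
    # Discretized correlation matrix of the kernel.
    C = np.exp(-np.abs(s[:, None] - s[None, :]) / ell)
    eigvals = np.linalg.eigvalsh(C)[::-1]          # descending order
    frac = np.cumsum(eigvals) / np.sum(eigvals)
    # Smallest number of modes whose cumulative sum reaches threshold.
    return int(np.searchsorted(frac, threshold) + 1)

n_terms = kl_truncation(ell=0.5, threshold=0.99)
```

Shorter correlation lengths produce slower eigenvalue decay and hence require more terms for the same variance threshold.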
A sparse, degree-adaptive PCE approach is chosen, with the maximum degree selected within a prescribed range. It is worth noting that the model is extremely sparse with a low effective dimension, despite its high nominal dimensionality. For the sequential sampling methods, an initial experimental design generated with LHS is iteratively augmented, as in the previous examples.
We now investigate the metamodelling performance of the different sampling strategies. Figure 7 shows boxplots of the RMSE. As in the previous examples, the S-optimal design performs better than the other sampling methods. Unlike the previous examples, maximin designs do not perform better than the two LHS variants. It is worth noting that the maximin criterion searches the full-dimensional design space and does not capture the sparsity of the model (i.e. the fact that a few variables are significantly more influential than the others). Again, poor performance is observed for the sequential D-optimal designs.
6.5 Effects of the initial design size and the enrichment step
Determining the optimal sample size of the initial design in the sequential construction is a difficult task. In the context of sequential sampling, previous literature does not provide a well-defined rule of thumb for the number of initial runs given a total budget. According to Bernardo et al. (1992), the initial design should contain at least three observations per input variable. Jones et al. (1998) recommended larger initial designs, with up to ten observations per input variable. A similar question holds for the choice of the enrichment step size. The optimal choices likely depend on the nature of the model to be approximated and on its degree of sparsity.
To provide some insight into the effects of these choices on the S-optimal design class, a parametric study was performed on the examples presented in the previous sections. For each example, several choices of the initial design size and of the enrichment step were compared in terms of the final surrogate RMSE. The most significant results are reported in Figure 8 for S-optimal designs.
The results for the Ishigami function suggest that the choice of the initial design size and of the enrichment step indeed plays a significant role in the accuracy of the PCE for all of the considered ED sizes, with the best performance achieved with a relatively large initial experimental design. However, no comparable trends were observed for any of the remaining benchmarks in the other panels of Figure 8.
These results show that no reliable rule of thumb can be determined for choosing the initial design size and the enrichment step. A reasonable strategy is to start with a relatively small sample to obtain initial information about the global behaviour of the model, which is then updated through the iterations. However, if the initial design is too small, the available information is not sufficient for LARS to accurately estimate the optimal basis, leading to a poor optimization of the S-value criterion in the early iterations. Conversely, a very large initial design does not improve the results despite the increased computational effort.
7 Conclusion
This paper addressed the problem of selecting the best design of experiments to ensure optimal accuracy of sparse polynomial chaos expansions over the whole input space within a given computational budget. The coefficients of the PCE are computed by least-squares analysis relying on the so-called experimental design. An efficient approach for constructing an optimal experimental design for sparse polynomial chaos expansions was proposed. The complete approach includes two major elements: (a) the use of degree-adaptive sparse PCE based on LARS for selecting the optimal basis, and (b) the choice of an efficient criterion for selecting the optimal sample points. The ED is built up sequentially by enriching an initial design generated with a space-filling technique. The added points are chosen according to an optimality criterion that depends on the current sparse basis. In this work, two optimality criteria are used: (i) D-optimality and (ii) the S-value. The D-optimality criterion is related to the determinant of the information matrix, whereas the S-value criterion is related to the orthogonality of its columns.
A comprehensive comparative study of the different sampling strategies was performed on four numerical models with varying input dimensionality and complexity. To complete our investigation, a sequential strategy based on optimizing the well-known maximin distance criterion was also studied. Our results lead to the following major conclusions.
The use of a sequential adaptive design according to the S-value criterion (S-optimal) ensures the best performance in terms of the relative generalization error. The proposed approach provides a significant accuracy and stability improvement over the other sampling strategies.
The D-optimal approach showed poor performance.
The sequential maximin designs performed well when the number of random input parameters is low to moderate, say less than 10, but showed an erratic behaviour in higher-dimensional cases.
Finally, further work may include using other basis selection approaches such as Orthogonal Matching Pursuit (OMP) (Jakeman et al., 2015; Mallat and Zhang, 1993) as well as metamodelling techniques such as Low Rank Approximations (LRA) (Chevreuil et al., 2015; Konakli and Sudret, 2016a, b, c) to identify the optimal ED. Future research may also investigate the use of the sequential adaptive designs in the framework of reliability analysis.
- Bernardo et al. (1992) Bernardo, M. C., R. Buck, L. Liu, W. A. Nazaret, J. Sacks, and W. J. Welch (1992). Integrated circuit design optimization using a sequential strategy. IEEE Trans. Comput.-Aided Design Integr. Circuits Syst. 11(3), 361–372.
- Berveiller et al. (2006) Berveiller, M., B. Sudret, and M. Lemaire (2006). Stochastic finite elements: a non intrusive approach by regression. Eur. J. Comput. Mech. 15(1-3), 81–92.
- Blatman and Sudret (2010) Blatman, G. and B. Sudret (2010). An adaptive algorithm to build up sparse polynomial chaos expansions for stochastic finite element analysis. Prob. Eng. Mech. 25, 183–197.
- Blatman and Sudret (2011a) Blatman, G. and B. Sudret (2011a). Adaptive sparse polynomial chaos expansion based on Least Angle Regression. J. Comput. Phys. 230, 2345–2367.
- Blatman and Sudret (2011b) Blatman, G. and B. Sudret (2011b). Adaptive sparse polynomial chaos expansion based on Least Angle Regression. J. Comput. Phys. 230, 2345–2367.
- Chevreuil et al. (2015) Chevreuil, M., R. Lebrun, A. Nouy, and P. Rai (2015). A least-squares method for sparse low rank approximation of multivariate functions. SIAM/ASA J. Uncertain. Quantif 3(1), 897–921.
- Choi et al. (2004) Choi, S., R. Grandhi, R. Canfield, and C. Pettit (2004). Polynomial chaos expansion with Latin Hypercube sampling for estimating response variability. AIAA Journal 45, 1191–1198.
- Crombecq et al. (2011) Crombecq, K., E. Laermans, and T. Dhaene (2011). Efficient space-filling and non-collapsing sequential design strategies for simulation-based modeling. Eur. J. Oper. Res. 214(3), 683–696.
- dos Santos and dos Santos (2007) dos Santos, M. I. R. and P. M. R. dos Santos (2007). Sequential designs for simulation experiments: nonlinear regression metamodeling. In Proc. 26th IASTED Int. Conf. Modell. Identif. Control, pp. 88–93. ACTA Press.
- Efron et al. (2004) Efron, B., T. Hastie, I. Johnstone, and R. Tibshirani (2004). Least angle regression. Ann. Stat. 32, 407–499.
- Fang et al. (2005) Fang, K.-T., R. Li, and A. Sudjianto (2005). Design and modeling for computer experiments. CRC Press.
- Faure (1982) Faure, H. (1982). Discrépance de suites associées à un système de numération (en dimension s). Acta Arith. 41(4), 337–351.
- Geisser (1975) Geisser, S. (1975). The predictive sample reuse method with applications. J. Amer. Stat. Assoc. 70, 320–328.
- Ghanem and Spanos (1991) Ghanem, R. and P. Spanos (1991). Stochastic finite elements – A spectral approach. Springer Verlag, New York. (Reedited by Dover Publications, Mineola, 2003).
- Ghanem and Spanos (2003) Ghanem, R. G. and P. D. Spanos (2003). Stochastic finite elements: a spectral approach. Courier Corporation.
- Giunta et al. (2003) Giunta, A. A., S. F. Wojtkiewicz, M. S. Eldred, et al. (2003). Overview of modern design of experiments methods for computational simulations. In Proc. 41st AIAA Aerospace Sciences Meeting and Exhibit, AIAA-2003-0649.
- Goel et al. (2008) Goel, T., R. T. Haftka, W. Shyy, and L. T. Watson (2008). Pitfalls of using a single criterion for selecting experimental designs. Int. J. Numer. Meth. Eng. 75(2), 127–155.
- Halton (1960) Halton, J. H. (1960). On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals. Numer. Math. 2(1), 84–90.
- Hesterberg et al. (2008) Hesterberg, T., N. H. Choi, L. Meier, C. Fraley, et al. (2008). Least angle and l1 penalized regression: A review. Stat. Surv. 2, 61–93.
- Iooss et al. (2010) Iooss, B., L. Boussouf, V. Feuillard, and A. Marrel (2010). Numerical studies of the metamodel fitting and validation processes. Int. J. Adv. Sys. Meas. 3, 11–21.
- Ishigami and Homma (1990) Ishigami, T. and T. Homma (1990). An importance quantification technique in uncertainty analysis for computer models. In Proc. ISUMA’90, First Int. Symp. Unc. Mod. An, pp. 398–403. University of Maryland.
- Jakeman et al. (2015) Jakeman, J., M. Eldred, and K. Sargsyan (2015). Enhancing -minimization estimates of polynomial chaos expansions using basis selection. J. Comput. Phys. 289, 18–34.
- Jin et al. (2002) Jin, R., W. Chen, and A. Sudjianto (2002). On sequential sampling for global metamodeling in engineering design. In ASME 2002 Proc. Int. Des. Eng. Tech. Conf. Comput. Inform. Eng. Conf, pp. 539–548. American Society of Mechanical Engineers.
- Johnson et al. (1990) Johnson, M. E., L. M. Moore, and D. Ylvisaker (1990). Minimax and maximin distance designs. J. Stat. Plan. Infer. 26(2), 131–148.
- Jones et al. (1998) Jones, D. R., M. Schonlau, and W. J. Welch (1998). Efficient global optimization of expensive black-box functions. J. Global. Optim. 13(4), 455–492.
- Kersaudy (2015) Kersaudy, P. (2015). Modélisation statistique de l’exposition humaine aux ondes radiofréquences. Ph. D. thesis, Université Paris-Est.
- Konakli and Sudret (2016a) Konakli, K. and B. Sudret (2016a). Global sensitivity analysis using low-rank tensor approximations. Reliab. Eng. Syst. Safe 156, 64 – 83.
- Konakli and Sudret (2016b) Konakli, K. and B. Sudret (2016b). Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions. J. Comput. Phys.
- Konakli and Sudret (2016c) Konakli, K. and B. Sudret (2016c). Reliability analysis of high-dimensional models using low-rank tensor approximations. Probab. Eng. Mech. 46, 18–36.
- Kucherenko et al. (2015) Kucherenko, S., D. Albrecht, and A. Saltelli (2015). Exploring multi-dimensional spaces: a comparison of Latin Hypercube and quasi Monte Carlo sampling techniques. arXiv preprint arXiv:1505.02350.
- Lin et al. (2008) Lin, Y., D. Luo, T. Bailey, R. Khire, J. Wang, and T. W. Simpson (2008). Model validation and error modeling to support sequential sampling. In ASME 2008 Proc. Int. Des. Eng. Tech. Conf. Comput. Inform. Eng. Conf, pp. 1255–1264. American Society of Mechanical Engineers.
- Loève (1977) Loève, M. (1977). Probability theory. Springer-Verlag, New York.
- Mallat and Zhang (1993) Mallat, S. G. and Z. Zhang (1993). Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process 41(12), 3397–3415.
- Marelli and Sudret (2014) Marelli, S. and B. Sudret (2014). UQLab: A framework for uncertainty quantification in Matlab. In Vulnerability, Uncertainty, and Risk (Proc. 2nd Int. Conf. on Vulnerability, Risk Analysis and Management (ICVRAM2014), Liverpool, United Kingdom), pp. 2554–2563.
- Marelli and Sudret (2015) Marelli, S. and B. Sudret (2015). UQLab user manual – Polynomial chaos expansions. Technical report, Chair of Risk, Safety & Uncertainty Quantification, ETH Zurich. Report # UQLab-V0.9-104.
- McKay et al. (1979) McKay, M. D., R. J. Beckman, and W. J. Conover (1979). A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 2, 239–245.
- Miller and Nguyen (1994) Miller, A. J. and N.-K. Nguyen (1994). Algorithm AS 295: A Fedorov exchange algorithm for D-optimal design. J. R. Stat. Soc. Series C (Applied Statistics) 43(4), 669–677.
- Morokoff and Caflisch (1994) Morokoff, W. J. and R. E. Caflisch (1994). Quasi-random sequences and their discrepancies. Siam. J. Sci. Comput. 15(6), 1251–1279.
- Myers and Montgomery (2002) Myers, R. and D. Montgomery (2002). Response surface methodology: process and product optimization using designed experiments (2nd ed.). J. Wiley & Sons.
- Pronzato and Müller (2012) Pronzato, L. and W. G. Müller (2012). Design of computer experiments: space filling and beyond. Stat. Comput 22(3), 681–701.
- Ravi et al. (1991) Ravi, S., D. J. Rosenkrantz, and G. K. Tayi (1991). Facility dispersion problems: Heuristics and special cases. In Algorithms and Data Structures, pp. 355–366. Springer.
- Rennen (2009) Rennen, G. (2009). Subset selection from large datasets for Kriging modeling. Struct. Multidiscip. O. 38(6), 545–569.
- Rosenblatt (1952) Rosenblatt, M. (1952). Remarks on a multivariate transformation. Ann. Math. Stat. 23(3), 470–472.
- Shin and Xiu (2016) Shin, Y. and D. Xiu (2016). Nonadaptive quasi-optimal points selection for least-squares linear regression. Siam. J. Sci. Comput. 38(1), A385–A411.
- Simpson et al. (2001) Simpson, T. W., D. K. Lin, and W. Chen (2001). Sampling strategies for computer experiments: design and analysis. Int. J. Rel. App. 2(3), 209–240.
- Simpson et al. (2001) Simpson, T. W., J. Poplinski, P. N. Koch, and J. K. Allen (2001). Metamodels for computer-based engineering design: survey and recommendations. Eng. Comput. 17(2), 129–150.
- Sobol' (1967) Sobol', I. M. (1967). On the distribution of points in a cube and the approximate evaluation of integrals. Zh. Vychisl. Mat. Mat. Fiz. 7(4), 784–802.
- Stein (1987) Stein, M. (1987). Large sample properties of simulations using Latin Hypercube sampling. Technometrics 29(2), 143–151.
- Stone (1974) Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions. J. Royal Stat. Soc., Series B 36, 111–147.
- Sudret (2008) Sudret, B. (2008). Global sensitivity analysis using polynomial chaos expansions. Reliab. Eng. Sys. Safety 93, 964–979.
- Sudret and Der Kiureghian (2000) Sudret, B. and A. Der Kiureghian (2000). Stochastic finite elements and reliability: a state-of-the-art report. Technical Report UCB/SEMM-2000/08, University of California, Berkeley (173 pages).
- Wang (2003) Wang, G. (2003). Adaptive response surface method using inherited Latin Hypercube design points. J. Mech. Design. 125, 210–220.
- Xiu (2010) Xiu, D. (2010). Numerical methods for stochastic computations – A spectral method approach. Princeton University press.
- Xiu and Hesthaven (2005) Xiu, D. and J. S. Hesthaven (2005). High-order collocation methods for differential equations with random inputs. Siam. J. Sci. Comput. 27(3), 1118–1139.
- Zein et al. (2013) Zein, S., B. Colson, and F. Glineur (2013). An efficient sampling method for regression-based polynomial chaos expansion. Commun. Comput. Phys. 13(04), 1173–1188.