Sparse polynomial surrogates for aerodynamic computations with random inputs
This paper deals with some of the methodologies used to construct polynomial surrogate models based on generalized polynomial chaos (gPC) expansions for applications to uncertainty quantification (UQ) in aerodynamic computations. A core ingredient in gPC expansions is the choice of a dedicated sampling strategy, so as to define the most significant scenarios to be considered for the construction of such metamodels. A desirable feature of the proposed rules is their ability to handle several random inputs simultaneously. Methods to identify the relative "importance" of those variables or uncertain data are ideally considered as well. The present work is more particularly dedicated to the development of sampling strategies based on sparsity principles. Sparse multi-dimensional cubature rules based on general one-dimensional Gauss-Jacobi-type quadratures are first addressed. These sets are non-nested, but they are well adapted to the probability density functions with compact support considered for the random inputs in this study. On the other hand, observing that the aerodynamic quantities of interest (outputs) depend only weakly on the cross-interactions between the variable inputs, it is argued that only low-order polynomials contribute significantly to their surrogates. This "sparsity-of-effects" trend prompts the use of reconstruction techniques benefiting from the sparsity of the outputs, such as compressed sensing (CS). CS relies on the observation that one only needs a number of samples proportional to the compressed size of the outputs, rather than their uncompressed size, to construct reliable surrogate models. The results obtained with the test case considered in this work corroborate this expected feature.
Key words and phrases: Computational fluid dynamics, Reynolds-averaged Navier-Stokes equations, Uncertainty quantification, Polynomial chaos, Compressed sensing
Nomenclature

Pitching moment coefficient
Static pressure coefficient
Total pressure coefficient
Parameters space dimension
Generic physical model
Discretized computational model
Polynomial chaos expansion coefficient of the computational model
Free-stream Mach number
Number of cubature points
Level of the cubature rule
Total order of the polynomial surrogates
Number of polynomials
Probability density function of the random input parameters
Polynomial set of total order
Sparsity (the number of non-zero entries)
Output quantities of interest
Angle of attack
Beta law of the first kind
Restricted isometry constant
Polynomial truncation error
-th sensing vector
-th representation vector (polynomial chaos)
One-dimensional quadrature rule
Random input parameters
-th cubature weight
-th cubature point in the parameters space
Index of the polynomial chaos
Index of a cubature point
Surrogate models are typically used to perform optimization or uncertainty quantification (UQ) studies involving a complex modeling and simulation process, as encountered in computational fluid dynamics (CFD) among other engineering sciences applications. The principle of a surrogate model relies on an interpolation or regression procedure to estimate a scalar or vector field using a sampling data set constituted by carefully chosen outputs of the aforementioned complex process. Since the latter is often computationally expensive to run, the surrogate model should provide reliable estimates of its outputs at no extra computational cost other than the evaluation of the surrogate itself. It thus offers a non-intrusive alternative to expensive runs of a complex process in order to sweep across the parameter space that influences it. When considering large parameter spaces, efficient algorithms are needed to provide an accurate surrogate representation of such parametric outputs. The kriging procedure has gained considerable attention over the past decades owing to its robustness, accuracy, and ability to provide an estimate of the error committed by the procedure; see the reviews by Kleijnen or Bompard and references therein. It has been applied to the simulation of air flow around a wing profile with the consideration of variable geometrical parameters in a previous study.
The polynomial chaos (PC) expansion is also a powerful tool for constructing spectral-like surrogate models of a parameterized process. A general methodology based on the Galerkin method was originally proposed by Ghanem & Spanos for the computation of the PC coefficients of the solution of a parameterized partial differential equation (PDE). This original scheme is heavily intrusive and has prompted the development of non-intrusive schemes, especially for PDEs arising in fluid dynamic models. Two approaches for computing the coefficients of a PC expansion have usually been considered: (i) a projection approach, in which they are computed by structured (e.g. Gauss quadratures) or unstructured (e.g. Monte-Carlo or quasi Monte-Carlo sampling) quadratures; or (ii) a regression approach, minimizing some error measure or truncation tolerance. Both techniques suffer from some well-identified shortcomings when the dimension of the parameter space, and the number of model evaluations alike, increase. Indeed, a PC expansion of total degree p in d variable parameters contains (p+d)!/(p!d!) coefficients. A direct way to compute them is to use a tensor product grid in the parameter space, requiring a number of evaluations of the process that grows exponentially with d, the number of evaluations needed for one particular direction in that space being related to p. These runs are very often unaffordable for large parameter spaces and complex configurations, as in CFD for example. Fortunately, the Smolyak algorithm defines sparse grid quadratures involving significantly fewer points while preserving a satisfactory level of accuracy. Consequently, collocation techniques with sparse quadratures or adaptive regression strategies have been developed in order to circumvent this dimensionality concern [8, 9, 10, 11].
Other directions may be considered to deal with large parameter spaces. In the work presented in this paper we aim at benefiting from the sparsity of the process outputs themselves to reconstruct their PC representations in a non-adaptive way. Indeed, we rely on the common observation that many cross-interactions between the input parameters are actually smoothened, or even negligible, once they have been propagated to some global quantities of interest processed from, say, complex aerodynamic computations. We can therefore expect to achieve a successful output recovery by the techniques known under the terminology of compressed sensing (CS) [13, 14]. In this theory the reconstruction of a sparse signal on a given, known basis requires only a limited number of evaluations at randomly selected points, at least significantly less than the a priori dimension of the basis. We thus resort to unstructured sampling sets to recover sparse outputs. We may also resort to highly structured sampling sets, such as nested sets in large parameter spaces. The objective in this case is to be able to enrich the surrogate models by structured samples compatible with the structured samples used in a previous evaluation, without the need to re-evaluate the whole process. Classical nested quadratures in one dimension go from a set of n points to a set of 2n+1 points (for Fejér's second rule) or 2n-1 points (for the Clenshaw-Curtis rule [16, 17]), such that nested rules involve sets of nodes whose sizes roughly double each time their so-called level is incremented. These sets can subsequently be used with the Smolyak algorithm to construct nested sparse sets in higher dimensions, together with adaptive strategies as developed by Gerstner & Griebel for example.
We will not consider nested rules in this paper though, leaving these developments to another publication. As mentioned above, we rather focus on sparse multi-dimensional cubature rules based on one-dimensional Gauss-type quadratures, on the one hand. These sets are non-nested, but they are well adapted to probability density functions with compact supports for the input parameters considered in the example addressed in this work. On the other hand, the "sparsity-of-effects" trend observed in complex simulations prompts the use of dedicated reconstruction techniques benefiting from the sparsity of the outputs. Since only low-order polynomials will significantly contribute to the output surrogates, one may infer from CS theory that only a number of samples proportional to the compressed size of the output, rather than the uncompressed size, is actually needed. In addition, these sampling points should be chosen randomly, i.e. they are unstructured. Therefore we may invoke sparsity patterns either for the input parameters or the output quantities of interest, to construct polynomial surrogates of the latter.
The rest of the paper is organized as follows. Section 2 introduces the basic configuration and CFD tools, namely a two-dimensional RAE 2822 transonic airfoil [20, 21] and the Onera in-house elsA software [22, 23]. This example serves as a guideline for the metamodeling techniques based on generalized PC expansions introduced in the subsequent sections: construction by quadrature rules (structured grids in the parameter space) in section 3 or by compressed sensing (using unstructured grids) in section 5, while the intermediate section 4 details the particular application of the former approach to the RAE 2822 airfoil. Some general conclusions and perspectives are drawn in section 6.
2. Problem setup
We start by introducing the problem setup and the numerical tools used to solve it. We consider a two-dimensional transonic flow around an RAE 2822 transonic airfoil solved by steady-state Reynolds-Averaged Navier-Stokes (RANS) equations together with a Spalart-Allmaras turbulence model closure . The nominal flow conditions are the ones described in Cook et al.  for the test case #6 together with the wall interference correction formulas derived in [24, pp. 386–387] and their slight modifications suggested in [25, p. 130] (see also the CFD verification and validation web-site of the NPARC Alliance ). The nominal free-stream Mach number , angle of attack , and Reynolds number (based on the airfoil chord length , fluid velocity, temperature and molecular viscosity at infinity) arise from the corrections and given in [25, p. 130] for the test case #6 outlined in Cook et al. , for which , , and .
2.1. Numerical nominal model
The nominal problem is discretized using the mesh shown in Fig. (1) and Fig. (2), where the boundary at infinity was left intentionally far (at about from the airfoil). These values proved to be sufficient to avoid spurious reflections at the far-field boundary. The discretized numerical solution is computed using the cell-centered finite volume CFD software elsA [22, 23]. It is based on:
First order Roe fluxes for the advection term of the turbulent variable;
Corrected second-order diffusive terms based on the corrected mean of the cell-centered gradients of the two adjacent cells (referred to as the "5p_cor" approach);
Source terms for the turbulent transport computed using the temperature gradients at the center of the cells.
The flow is attached with a weak shockwave on the suction side. The contour plot of the velocity magnitude is displayed on Fig. (3) and the static pressure profile at the wall is displayed on Fig. (4). Given the large number of simulations to run, the numerical parameters of the steady-state algorithm proved to be essential to ensure fast convergence. This was performed using the following:
An implicit algorithm based on the Lower-Upper Symmetric Successive Overrelaxation (LU-SSOR) scheme  using relaxation cycles, increasing the CFL number after iterations to ;
A multigrid approach for the Navier-Stokes system over two grid levels with two iterations on the coarser grid;
A single fine level iteration for the turbulent equation alternating with a multigrid iteration for the RANS system.
Once the numerical parameters have been fixed, the number of iterations is determined from the evolution of the resulting global forces. A number of iterations (the discrete residuals of all equations and their decrease being checked at every iteration) appeared to be acceptable. Hence this number of iterations has been retained for all calculations so far. Further discussions on this issue are available in Dumont et al.  (available on demand).
2.2. Definition of the uncertainties
The aim of this work is to characterize the influence of uncertainties on the free-stream Mach number , angle of attack , and thickness-to-chord ratio on some aerodynamic quantities of interest, such as the drag, lift, or pitching moment coefficients , , or , respectively. These variable parameters are assumed to be independent and to follow Beta laws of the first kind . Therefore their probability density functions (PDF) read:
In the above is the usual Gamma function, and is the compact support of the random parameter . The parameters as well as the bounds for the three variable parameters , , are gathered in Table 1 below. We note in passing that the model is the one arising from Jaynes' maximum entropy principle [29, 30] when constraints on (i) the boundedness of the support, and (ii) the values of the averages and , are imposed. Here and in the subsequent developments the usual notation for is employed.
3. Polynomial surrogates
The computation of the first moments (mean, standard deviation, skewness, kurtosis…) of the aerodynamic quantities of interest, when the variability of the parameters above is accounted for, is done thanks to surrogate models, or response surfaces. We more particularly focus on polynomial surrogates in this study.
Let be a generic physical model involving parameters , such that the quantity of interest is given as:
Let be the set of -dimensional polynomials with total order with respect to . We first note that this set has cardinality such that:
A polynomial surrogate model of order for the model is obtained as:
where is the marginal PDF of the random parameters with values in . The accuracy of this approximation may be assessed by considering the limit of the mean-square norm as . However such a "convergence" does not necessarily hold, and it depends on the probability measure .
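The cardinality of the set of d-variate polynomials of total order at most p, introduced above, can be checked by enumerating multi-indices. Below is a minimal sketch with hypothetical values d = 3 and p = 4 (three variable parameters, as in the test case of section 4):

```python
from itertools import product
from math import comb

def total_order_multi_indices(d, p):
    """All multi-indices alpha in N^d with |alpha| = alpha_1 + ... + alpha_d <= p."""
    return [a for a in product(range(p + 1), repeat=d) if sum(a) <= p]

# The cardinality matches the binomial formula (p + d choose d) = (p+d)!/(p!d!).
d, p = 3, 4
idx = total_order_multi_indices(d, p)
assert len(idx) == comb(p + d, d)  # 35 polynomials for d = 3, p = 4
```

Each multi-index labels one polynomial of the basis, which is how the gPC expansions below are indexed in practice.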
There are otherwise several ways to construct polynomial surrogates: embedded projection (this is the original spectral stochastic finite element method of Ghanem & Spanos , which is highly intrusive), non-intrusive projection (or collocation) [10, 11], kriging [1, 2, 3], etc.; see also Le Maître & Knio  for a detailed introduction. Regression is also an alternative, whereby an optimization problem is formed. For that purpose, a set of sampling points is needed in order to discretize the minimization problem (2). It is assumed in the following that these points are chosen as the integration points of a cubature rule , which provides positive weights and nodes in such that for a smooth function one can evaluate its average by:
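As a sketch of such a quadrature-based average, the following uses a one-dimensional Gauss-Legendre rule normalized against the uniform probability density on [-1, 1], a stand-in for the Gauss-Jacobi rules adapted to the Beta densities used later in the paper:

```python
import numpy as np

# 5-point Gauss-Legendre nodes/weights on [-1, 1]; dividing the weights by 2
# makes them integrate against the uniform probability density on [-1, 1].
nodes, weights = np.polynomial.legendre.leggauss(5)
weights = weights / 2.0

f = np.cos  # any smooth function standing in for the physical model
approx = np.sum(weights * f(nodes))
exact = np.sin(1.0)  # (1/2) * integral of cos(x) over [-1, 1] = sin(1)
assert abs(approx - exact) < 1e-8
```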
3.1. Regression approach
Using the foregoing cubature rule, the regression approach is formulated as a weighted least-squares minimization problem for the coefficients of the polynomial surrogate of expanded on the monomials of partial total order (i.e. with ). Let where is given by Eq. (1), then:
where , , and ; also stands for the transpose of . Numerous methods are available to solve this problem whenever ; we do not follow this approach in the subsequent developments though. We are rather interested in the situation whereby , and more interestingly . It is addressed subsequently in section 5.
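The weighted least-squares regression above can be sketched in one dimension as follows; the quadrature rule, degree, and coefficients are hypothetical stand-ins, not those of the paper's test case:

```python
import numpy as np

# Weighted least squares for the coefficients of a degree-2 Legendre expansion,
# sampled at Gauss-Legendre nodes with the cubature weights as regression weights.
nodes, w = np.polynomial.legendre.leggauss(8)
w = w / 2.0  # normalized to the uniform probability density on [-1, 1]

true_c = np.array([1.0, -0.5, 0.25])               # coefficients on P0, P1, P2
y = np.polynomial.legendre.legval(nodes, true_c)   # "model evaluations"

# Design matrix Phi[i, j] = P_j(node_i); weighting enters via sqrt(w) scaling.
Phi = np.polynomial.legendre.legvander(nodes, 2)
sw = np.sqrt(w)
c_hat, *_ = np.linalg.lstsq(sw[:, None] * Phi, sw * y, rcond=None)
assert np.allclose(c_hat, true_c, atol=1e-10)
```

Since here the number of samples exceeds the number of coefficients, the problem is overdetermined and the coefficients are recovered exactly; the underdetermined regime is the one addressed by compressed sensing in section 5.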
3.2. Projection approach
We assume now that a polynomial basis of is available. Then we construct the polynomial surrogate of by standard projection on the finite dimensional subspace of spanned by the truncated family of orthonormal polynomials up to the total order denoted by , where is again given by Eq. (1). The orthonormalization of this basis reads:
Then where , . Using the cubature rule of Eq. (3), these expansion coefficients are approximated by:
This corresponds to the approximation . Such expansions are referred to as polynomial chaos (PC) expansions in the dedicated literature, provided that the variable parameters follow a multi-dimensional Gaussian (normal) distribution [5, 6]. They are otherwise called generalized polynomial chaos (gPC) expansions for other distributions [31, 32].
3.3. Application to Uncertainty Quantification (UQ)
Once the polynomial surrogate model has been derived, the first moments and/or cumulants of the quantity of interest can be computed using the cubature rule and evaluations of the physical model at these nodes. Indeed, for a regular function on one can estimate a mean output functional by:
The mean is obtained for , the variance is obtained for , the skewness for , the kurtosis for , etc. More generally, the -th moment is obtained for , and may be used to compute the characteristic function :
where by the causality principle (or transport of PDFs) for one has:
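For the first two moments, orthonormality of the gPC basis gives the mean and variance directly from the expansion coefficients, without further quadrature. A sketch with a hypothetical 1D Legendre surrogate, cross-checked by resampling:

```python
import numpy as np

# For u = sum_k c_k phi_k on an orthonormal gPC basis, orthonormality yields
# E[u] = c_0 and Var[u] = sum_{k>=1} c_k^2; higher moments require resampling
# the surrogate (or products of basis polynomials), as done in the paper.
c = np.array([2.0, 0.3, -0.1, 0.05])  # hypothetical gPC coefficients
mean, var = c[0], np.sum(c[1:] ** 2)

# Cross-check by Monte-Carlo resampling of the 1D Legendre surrogate.
rng = np.random.default_rng(1)
xi = rng.uniform(-1.0, 1.0, 200_000)
basis = np.stack([np.polynomial.legendre.legval(xi, np.eye(4)[k]) * np.sqrt(2 * k + 1)
                  for k in range(4)])
u = c @ basis
assert abs(u.mean() - mean) < 5e-3
assert abs(u.var() - var) < 5e-3
```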
Sensitivity indices may be computed alike. Denoting by the set of indices corresponding to the polynomials of depending only on the –th variable parameter , the main-effect gPC-based Sobol’ indices are given by (see e.g. Sudret ):
owing to the normalization condition (4). More generally, if is the set of indices corresponding to the polynomials of depending only on the parameters , the –fold joint sensitivity indices are:
In the subsequent application to the two-dimensional configuration described in section 2, we will consider the main-effect sensitivity indices and the –fold joint sensitivity indices .
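The gPC-based Sobol' indices above amount to sorting squared coefficients by which variables their multi-indices involve. A minimal sketch with hypothetical coefficients in d = 2, p = 2:

```python
from itertools import product

# Main-effect index S_i sums c_alpha^2 over multi-indices alpha non-zero only in
# component i; joint indices sum over alphas involving both variables. All
# divided by the total variance sum_{alpha != 0} c_alpha^2.
d, p = 2, 2
alphas = [a for a in product(range(p + 1), repeat=d) if 0 < sum(a) <= p]
c = dict(zip(alphas, [0.4, 0.1, 0.3, 0.05, 0.02]))  # hypothetical coefficients

var = sum(v ** 2 for v in c.values())
S1 = sum(v ** 2 for a, v in c.items() if a[0] > 0 and a[1] == 0) / var
S2 = sum(v ** 2 for a, v in c.items() if a[0] == 0 and a[1] > 0) / var
S12 = sum(v ** 2 for a, v in c.items() if a[0] > 0 and a[1] > 0) / var

# Main-effect and joint indices partition the total variance.
assert abs(S1 + S2 + S12 - 1.0) < 1e-12
```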
4. Application to the two-dimensional RAE 2822 transonic airfoil
From the previous analysis, we see that the main ingredients requested for the construction of polynomial surrogates are the cubature rule and the truncated polynomial basis , for integration nodes and a multi-dimensional polynomial total order . In addition we have here for the parameter space dimension. Owing to the choices made for the variable parameters considered for this case (see Table 1), we have and . Therefore the integration points should be chosen from a Gauss-Jacobi quadrature rule, and the polynomial basis should be constituted by the multi-dimensional Jacobi polynomials which are orthogonal with respect to the weight function .
4.1. Polynomial basis
The polynomial basis adapted to the parameters PDF is constituted by the three-dimensional Jacobi polynomials , , such that . They are computed by:
where is the family of one-dimensional orthonormal Jacobi polynomials with respect to the weight function .
In the present study the polynomial surrogates constructed for the evaluation of the drag, lift and pitching moment coefficients , and , respectively, are truncated up to the total order . Therefore multi-dimensional Jacobi polynomials are ultimately retained in those gPC expansions.
4.2. Cubature rules
One-dimensional Gauss-Jacobi quadratures based on integration points are tailored to integrate on a smooth function :
such that this rule turns out to be exact for polynomials up to the order . Here is the number of fixed nodes of the rule, typically the bounds . Depending on the choice of , different terminologies are used:
is the classical Gauss-Jacobi rule;
is the Gauss-Jacobi-Radau (GJR) rule, choosing or for instance;
is the Gauss-Jacobi-Lobatto (GJL) rule, choosing and for instance.
Since in our case we have chosen a total order , GJL points are needed to recover exactly the orthonormality property (4) for the corresponding one-dimensional Jacobi polynomials. Indeed should be defined such that in this situation. The –points Gauss-Jacobi rules are illustrated on Fig. (5) for various values of the parameters of the Jacobi weight function , and the –points Gauss-Jacobi-Lobatto rules are illustrated on Fig. (6). The blue dots correspond to and thus pertain to the PDF. The reason why we include the boundary nodes in the quadrature rule stems from the fact that the basic engineering practice would consider the evaluation of the physical model at the bounds of the support of the variable parameters. The main advantage of using Gauss-Jacobi quadratures is that they do not add integration points for the increased order induced by the weight function . The Clenshaw-Curtis rule  for example is typically suited for polynomials of order , though higher orders are actually achieved in practice . Thus if one uses nodes from this rule to compute the left-hand side of (5) an exact evaluation is achieved for polynomials up to the order , instead of with a GJL rule. However, the latter does not have the nesting property of the former.
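A one-dimensional Gauss-Jacobi rule can be obtained from scipy; the sketch below checks two known closed forms (the mass of the Jacobi weight and the mean of the associated Beta-type density) for hypothetical parameters a = 1, b = 2:

```python
import numpy as np
from scipy.special import roots_jacobi, beta as beta_fn

# n-point Gauss-Jacobi rule on [-1, 1] with weight (1-x)^a (1+x)^b, exact for
# polynomials up to degree 2n-1. Normalizing by the mass of the weight turns it
# into an averaging rule against a Beta-type probability density.
a, b, n = 1.0, 2.0, 6
x, w = roots_jacobi(n, a, b)

# Mass of the weight: 2^(a+b+1) * B(a+1, b+1).
mass = w.sum()
assert abs(mass - 2 ** (a + b + 1) * beta_fn(a + 1, b + 1)) < 1e-10

# Mean of the normalized density: E[x] = (b - a) / (a + b + 2).
avg_x = np.sum(w * x) / mass
assert abs(avg_x - (b - a) / (a + b + 2)) < 1e-12
```

Gauss-Jacobi-Radau and Gauss-Jacobi-Lobatto variants fix one or both endpoints as nodes, at the cost of a lower attainable polynomial degree for the same number of points.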
Multi-dimensional quadratures (cubatures) may subsequently be obtained by tensorization and/or sparsification of these one-dimensional rules. Firstly, a multi-dimensional tensorized grid is obtained by the straightforward product rule:
It contains grid points, i.e. for the present test case (, ). Of course the order could be adapted depending on the –th variable parameter. Secondly, a sparse cubature rule can be derived thanks to the Smolyak algorithm . The so-called –th level, -dimensional sparse rule is obtained by the following linear combination of product formulas :
For example, if and one has for the total number of grid points using a one-dimensional GJL quadrature as the generating rule, and:
By a direct extension of the arguments devised by Novak & Ritter  or Heiss & Winschel , it can be shown that such a –th level, -dimensional sparse rule based on the GJL one-dimensional rule is exact for -dimensional polynomials of total order . The total number of integration points of the rule is given for by the estimate (see e.g. Heiss & Winschel  and references therein), which typically outperforms the product rule with for . It is gathered in Table 2 below for various combinations . The sparse rule is plotted in Fig. (8) for and , and in Fig. (10) for and together with the corresponding product rule in Fig. (7) and Fig. (9), respectively, for illustration purposes. We note that since the underlying one-dimensional GJL rule is not nested, the multi-dimensional rules are not nested either. Also the weights of the latter may be negative for some nodes, although the underlying one-dimensional rules have positive weights.
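The Smolyak combination formula can be sketched as follows; as an assumption for illustration, the level-l one-dimensional rule is taken to be the l-point Gauss-Legendre rule (normalized to the uniform density), rather than the GJL rules of the paper:

```python
import numpy as np
from itertools import product
from math import comb

def smolyak_rule(d, level, rule1d):
    """Isotropic Smolyak sparse rule: a signed combination of small tensor-product
    rules over multi-indices l with level+1 <= |l| <= level+d, l_i >= 1."""
    q = level + d
    nodes = {}
    for l in product(range(1, level + 2), repeat=d):
        s = sum(l)
        if not (q - d + 1 <= s <= q):
            continue
        coef = (-1.0) ** (q - s) * comb(d - 1, q - s)  # combination coefficient
        grids = [rule1d(li) for li in l]
        for combo in product(*(range(len(g[0])) for g in grids)):
            pt = tuple(grids[i][0][combo[i]] for i in range(d))
            wt = coef * np.prod([grids[i][1][combo[i]] for i in range(d)])
            nodes[pt] = nodes.get(pt, 0.0) + wt  # merge coincident nodes
    return nodes

def gl(l):
    """Level-l 1D rule: l-point Gauss-Legendre, normalized to the uniform density."""
    x, w = np.polynomial.legendre.leggauss(l)
    return x, w / 2.0

rule = smolyak_rule(2, 2, gl)
# Weights sum to 1 (constants integrated exactly), and E[x1^2] = 1/3 is recovered.
assert abs(sum(rule.values()) - 1.0) < 1e-12
assert abs(sum(w * x[0] ** 2 for x, w in rule.items()) - 1 / 3) < 1e-10
```

Note that some combination coefficients are negative, which is the mechanism behind the negative cubature weights mentioned above.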
The first four moments of the drag, lift, and pitching moment coefficients , , and , respectively, are gathered in the Table 3 below for calls to the model using the elsA software [22, 23] with the –th level full product rule (6). Table 4 gathers the same results using a -th level sparse rule (7) for which from Table 2. The reasons why we have not used a –th level sparse rule for a consistent comparison with the –th level product rule are twofold. Firstly, for this case , which is not competitive with the product rule. Secondly, the gPC coefficients are observed to be sparse so that the higher-order three-dimensional polynomials contribute only marginally to the surrogates. This observation is elaborated further on in the subsequent section 5 dealing with the sparse reconstruction approach we have applied invoking the theory of compressed sensing. Using the -th level sparse rule we are able to integrate exactly three-dimensional polynomials up to total order , hence reconstruct polynomial surrogates up to total order , corresponding to multi-dimensional Jacobi polynomials in those gPC expansions.
The main-effect sensitivity indices computed with the –th level product rule are gathered in Table 5 below, while the joint sensitivity indices are gathered in Table 6. Tables 7 and 8 display the same indices computed with the –th level sparse rule. It is clearly apparent from these results that the free-stream Mach number is the chief parameter controlling the variability of the aerodynamic coefficients in the range of analysis considered for this test case. We may also emphasize the discrepancies between the sensitivity indices computed with a full and a sparse rule, although their theoretical accuracies differ in the present case. These differences are even more pronounced for the joint sensitivity indices. The sparse reconstruction approach outlined in the next section yields sensitivities very comparable to the ones derived with the product rule, at a much lower computational cost though.
The PDFs of the three aerodynamic coefficients considered in this study are displayed on Fig. (11) through Fig. (13) and Fig. (14) through Fig. (16) using the –th level product rule and the –th level sparse rule, respectively. They were estimated from evaluations of the gPC surrogates and smoothing out the resulting histograms by a normal kernel density function. The means are shown on the plots with vertical blue lines.
Finally, the mean total pressure coefficients along the profile are displayed on Fig. (17) using the –th level product rule and on Fig. (18) using the –th level sparse rule. Error bars at , where is the standard deviation of the ’s, are also shown, together with the nominal total pressure coefficient (dotted line) obtained from a computation with the nominal parameters , , , and the nominal thickness-to-chord ratio. We observe an unexpected drop of the standard deviation locally at on the suction side when the –th level sparse rule is used. This may be related to the fact that some weights of sparse rules are negative; however, we have not been able to find a more detailed explanation of this probable anomaly so far.
5. Non-adapted sparse reconstruction by –minimization
The results of the previous section, especially the main-effect and joint sensitivity indices, suggest that only low-order interactions exist between the variable parameters for their effect on the aerodynamic coefficients of interest. This indicates that among the gPC coefficients to be computed for the construction of their polynomial surrogates , the ones pertaining to the one-dimensional polynomials (depending on a single variable parameter) dominate the others. Hence the vector of gPC coefficients of has many negligible components, so that it is compressible in the terminology of the theory of compressed sensing [13, 14, 37]. In this setting it is argued that only a number of samples proportional to the compressed size, rather than the uncompressed size, of the sought signal is needed in order to reconstruct it. This observation challenges the conventional approaches to sampling or imaging according to Shannon’s theorem, which states that the sampling rate (the Nyquist rate) must be at least twice the maximum frequency present in the signal. Compressed sensing, or compressive sampling (CS), asserts that one can recover some signals from far fewer samples or measurements than traditionally used in the widespread signal acquisition techniques. CS relies on two principles to make this possible:
Sparsity, which expresses the fact that many signals may have a concise representation once they are expressed in a proper basis ;
Incoherence, which expresses the fact that signals having a sparse representation in a given basis are actually spread out in the domain in which they are acquired. In other words, the sensing functions used to probe the signal have a dense representation in the basis .
The reconstruction procedure consists in correlating the signal with a small number of predefined sensing functions (for example, sinusoids if one aims at computing a Fourier transform) which are incoherent with the basis in which the signal is sparse. It is non-adapted because it identifies the sparsity pattern, that is, the order (location) of the negligible components of the signal in its sparsifying basis, and the leading components at the same time. The procedure can therefore efficiently capture the relevant information of a sparse signal without trying to comprehend that signal . This is clearly a much desirable feature for practical industrial applications.
5.1. Theoretical background
We basically follow the introductory paper of Candès & Wakin  in this overview of CS. The sparse signal to be reconstructed is the polynomial surrogate model of total order , for which information is obtained by recording values of the model :
Typically is a sinusoid and is a Fourier coefficient if a Fourier transform is applied, or a Dirac function if the model is evaluated at some sampling point : . This latter sensing procedure is the one considered in this work. Letting be the sensing matrix whose rows are the vectors ( is the conjugate transpose of ), the process of recovering a discretized version of the model from the observation of outputs reads:
This problem is generally ill-posed whenever , but CS theory tells us that a unique solution may nevertheless be obtained if the vector is sparse. This is actually the case once the model is expanded on the orthonormal basis constituted by the polynomial chaos pertaining to the randomly variable parameters introduced in section 2. This property is observed a posteriori for the present problem from the results expounded in the foregoing section 4. Incidentally, it should be noted that sparsity can be proved a priori for some parametric, possibly non-linear, elliptic and parabolic partial differential equations in a general framework; see Chkifa et al.  and references therein. However, the extension of these analyses to the present RANS system supplemented with a turbulence closure model, not to say the full Navier-Stokes equations, does not currently seem feasible.
The discretized version of the model in terms of the polynomial surrogate reads in its sparse representation on the basis . Applying the sensing procedure (8) to the polynomial surrogate yields:
where is the sparse vector of the gPC coefficients in the basis to be computed, and is the so-called measurement matrix given by . Using as done here, it is a Vandermonde-type matrix with . Again, Eq. (9) is ill-posed whenever , unless the sparsity of the sought solution is accounted for. Regularized versions of Eq. (9) exist for this case, which in turn ensure its well-posedness. Since the polynomial chaos expansion truncated at the total order is not complete for the exact representation of , a truncation error must also be accounted for in the solution process. Together with the sparsity of the gPC coefficients, this can be accommodated by reformulating Eq. (9) as the following –minimization problem, known as Basis Pursuit Denoising (BPDN) :
for some tolerance on the polynomial chaos truncation. The norms above are defined by , and is some diagonal weighting matrix. Its role is to prevent the algorithm from biasing toward the non-negligible entries of whose associated columns in have large norms [12, 40]. Now the strategy for our present study is to solve with a number of runs of the physical model significantly lower than the number of coefficients to be identified. CS shows that this is achievable provided that the target is actually sparse, or nearly sparse (compressible), and some constraints on the measurement matrix are fulfilled.
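In the noiseless limit, BPDN reduces to basis pursuit, which can be solved as a linear program. A sketch with a hypothetical Gaussian measurement matrix (not the paper's Vandermonde-type matrix) and illustrative sizes:

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit: min ||c||_1 subject to Phi c = y, recast as an LP via the
# split c = c_plus - c_minus with c_plus, c_minus >= 0.
rng = np.random.default_rng(0)
m, n = 25, 40                              # measurements << unknowns
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
c_true = np.zeros(n)
c_true[[3, 17, 31]] = [1.5, -2.0, 0.7]     # 3-sparse coefficient vector
y = Phi @ c_true

res = linprog(np.ones(2 * n), A_eq=np.hstack([Phi, -Phi]), b_eq=y,
              bounds=[(0, None)] * (2 * n), method="highs")
c_rec = res.x[:n] - res.x[n:]
assert res.status == 0
assert np.max(np.abs(c_rec - c_true)) < 1e-5  # exact recovery from m < n samples
```

In practice dedicated solvers (e.g. SPGL1-type algorithms) handle the noisy BPDN formulation with the truncation tolerance directly.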
As already stated above, a key requirement for the successful recovery of a sparse signal is incoherence between the sensing basis and the representation basis . It is quantified by the following mutual coherence :
where stands for the –th column of . It is a measure of how close to orthogonality the measurement matrix is. Based on this coherency measure, the following theorem from Candès & Romberg  asserts that if is sufficiently sparse in , the recovery of its gPC coefficients by –minimization is exact.
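The mutual coherence is straightforward to compute from the Gram matrix of the normalized columns; this sketch checks the two extremes (orthonormal columns versus a duplicated column):

```python
import numpy as np

def mutual_coherence(A):
    """mu(A) = max over j != k of |<a_j, a_k>| / (||a_j|| ||a_k||), a_j = columns of A."""
    G = A.T @ A
    norms = np.sqrt(np.diag(G))
    C = np.abs(G) / np.outer(norms, norms)
    np.fill_diagonal(C, 0.0)
    return C.max()

# An orthogonal matrix has zero coherence; duplicating a column drives it to 1.
Q = np.linalg.qr(np.random.default_rng(2).standard_normal((8, 8)))[0]
assert mutual_coherence(Q) < 1e-12
assert abs(mutual_coherence(np.hstack([Q, Q[:, :1]])) - 1.0) < 1e-12
```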
Assume that is –sparse on the gPC basis , that is it has at most non-zero entries. Then if sampling points are selected at random to form the measurement matrix , and:
for some constant , the solution of is exact with "overwhelming" (sic) probability.
More precise results with structured random measurement matrices are given by e.g. Rauhut & Ward. It should be noted that the role of coherence in this result is transparent: the smaller the coherence, the closer the measurement matrix is to a unitary matrix, and the fewer sampling points are needed.
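The mutual coherence of a given measurement matrix is straightforward to evaluate numerically. A small sketch (the function name is ours):

```python
import numpy as np

def mutual_coherence(A):
    """Mutual coherence of the columns of A: the largest absolute
    off-diagonal entry of the Gram matrix of the normalized columns."""
    An = A / np.linalg.norm(A, axis=0)   # unit-norm columns
    G = np.abs(An.T @ An)                # pairwise |cosines|
    np.fill_diagonal(G, 0.0)             # ignore self-correlations
    return float(G.max())
```

An orthogonal matrix has zero coherence, while two colinear columns drive it to its maximal value of one; practical Vandermonde-type matrices fall in between.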
The previous Theorem 5.1 is however not entirely satisfactory from a practical point of view because (i) it does not allow for some truncation error, or noisy/imprecise measurements; (ii) it does not deal with approximately sparse (compressive) signals, for which a large subset of entries are negligible rather than strictly zero. These shortcomings may be alleviated simultaneously as established by Candès et al. To achieve this, a constraint on the measurement matrix needs to be added to gain robustness in CS, the so-called restricted isometry property (RIP, also known as the uniform uncertainty principle). For each integer $S$, the isometry constant $\delta_S$ of $\boldsymbol{A}$ is defined as the smallest number such that:
$$(1-\delta_S)\|\boldsymbol{c}\|_2^2\le\|\boldsymbol{A}\boldsymbol{c}\|_2^2\le(1+\delta_S)\|\boldsymbol{c}\|_2^2$$
for all $S$-sparse vectors $\boldsymbol{c}$. Then $\boldsymbol{A}$ is said to satisfy the RIP of order $S$ if, say, $\delta_S$ is not too close to $1$. This property amounts to saying that all $S$-column submatrices of $\boldsymbol{A}$ are numerically well-conditioned, or that columns selected arbitrarily in $\boldsymbol{A}$ are nearly orthogonal. The following theorem by Candès et al. [13, 37] then states that the BPDN problem can be solved efficiently: assume that $\delta_{2S}<\sqrt{2}-1$ and $\|\boldsymbol{A}\boldsymbol{c}-\boldsymbol{y}\|_2\le\varepsilon$; then the solution $\boldsymbol{c}^\star$ of the BPDN problem satisfies $\|\boldsymbol{c}^\star-\boldsymbol{c}\|_2\le C_0\,S^{-1/2}\|\boldsymbol{c}-\boldsymbol{c}_S\|_1+C_1\varepsilon$, where $\boldsymbol{c}_S$ denotes $\boldsymbol{c}$ with all but its $S$ largest-magnitude entries set to zero, and $C_0$, $C_1$ are explicit constants.
This result calls for several comments. First, it is more general than Theorem 5.1 since, if the signal is exactly $S$-sparse, the reconstruction is exact in the noiseless case $\varepsilon=0$. Second, it deals with all signals, not solely the $S$-sparse ones. Third, it is deterministic and does not involve any probability. Lastly, the bound $\delta_{2S}<\sqrt{2}-1$ on the isometry constant is the one originally proposed by Candès & Wakin, but it can be improved as proposed by e.g. Mo & Li; such improvements are an active field of research at present.
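Computing the isometry constant exactly is combinatorial in the number of supports, but a Monte Carlo sample of $S$-column submatrices yields a lower bound on it. The following sketch (with illustrative function and argument names of our own choosing) makes the "well-conditioned submatrices" reading of the RIP concrete:

```python
import numpy as np

def rip_constant_estimate(A, s, n_trials=2000, seed=None):
    """Monte Carlo lower bound on the RIP constant delta_s of A.

    Random s-column submatrices are drawn and the worst deviation of
    their Gram spectrum from 1 is tracked; columns are normalized so
    that the isometry is measured in the usual unit-norm convention.
    """
    rng = np.random.default_rng(seed)
    A = A / np.linalg.norm(A, axis=0)          # unit-norm columns
    m, P = A.shape
    delta = 0.0
    for _ in range(n_trials):
        cols = rng.choice(P, size=s, replace=False)
        w = np.linalg.eigvalsh(A[:, cols].T @ A[:, cols])
        delta = max(delta, abs(w[-1] - 1.0), abs(1.0 - w[0]))
    return delta
```

Since only a subset of supports is visited, the returned value underestimates the true $\delta_S$; it is nevertheless a useful diagnostic of how far a candidate measurement matrix is from the near-orthogonality the theorem requires.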
5.2. Application to the two-dimensional RAE 2822 transonic airfoil
We now apply the foregoing CS procedure to the non-adaptive computation of the gPC coefficients of the surrogate models for the aerodynamic coefficients. We use sampling points drawn at random following the PDFs defined in section 2. This sampling set is displayed in Fig. 19 below. The sole reason why we have chosen this sampling size is its ease of use with the multithreading setup of our CFD software elsA [22, 23].
We subsequently apply BPDN to compute the gPC coefficients. For that purpose we use the spectral projected gradient algorithm developed by van den Berg & Friedlander and implemented in the Matlab package SPGL1 to solve this $\ell_1$-minimization problem. The tolerance $\varepsilon$ was fixed a priori, and we were able to find a solution for all surrogates with this choice without resorting to cross-validation, for example. Further investigations should be carried out on this topic, though. It should also be noted that no particular sampling strategy, such as stratification, low-discrepancy series, or preconditioning, has been applied at this stage to select the sampling set. Moreover, the weighting matrix $\boldsymbol{W}$ was selected as the identity matrix. Alternative sampling and weighting strategies are nevertheless outlined in several recent works [12, 40, 45, 46, 47, 48].
The mean and standard deviation of the drag, lift, and pitching moment coefficients, respectively, are gathered in Table 9 below. The main-effect sensitivity indices are gathered in Table 10, while the joint sensitivity indices are gathered in Table 11. These results are very close to the ones obtained with the product rule. The PDFs of the aerodynamic coefficients considered here are displayed in Fig. 20 through Fig. 22. As for Fig. 11 through Fig. 16, they were estimated from evaluations of the gPC surrogates, smoothing out the resulting histograms by a normal kernel density function. The means are again shown on the plots with vertical blue lines. We finally compare in Fig. 23 through Fig. 25 the PDFs computed by the three approaches considered in this work. We observe that the product rule (with structured sampling points) and $\ell_1$-minimization (with randomly selected sampling points) yield comparable results, but at a much lower computational cost for the latter technique.
In this paper we have addressed various methodologies with relevance to the construction of polynomial surrogates for parameterized complex processes as encountered in CFD problems. The present work has been more particularly focused on the development of dedicated sampling sets in the parameter space using either structured or unstructured grids. These techniques were illustrated with the example of an RAE 2822 airfoil in the transonic regime considering variable geometrical (the thickness-to-chord ratio) and operational (the free-stream Mach number and angle of attack) parameters.
Firstly, multi-dimensional sparse cubature rules based on one-dimensional Gauss-Jacobi rules have been used for uncertainty quantification of this two-dimensional aerodynamic computation. The quantities of interest are the usual drag, lift, and pitching moment coefficients, for which polynomial surrogates are sought using the aforementioned sampling sets as learning sets. More particularly, Gauss-Jacobi-Lobatto points have been considered since the probability density functions of the variable parameters have finite supports. Indeed, engineering practice typically includes the boundary points of the parameter space in the learning sets.
Secondly, observing a posteriori that the aerodynamic quantities of interest are sparse in that parametric space when projected on the multi-dimensional orthogonal polynomials associated with the parameters' probability density functions, an $\ell_1$-minimization procedure has been applied in the framework of the theory of compressed sensing. The latter allows one to recover the expansion coefficients of the quantities of interest at a much lower computational cost than with the sparse grids addressed in the first approach. Unstructured sampling points, selected randomly in the parameter space, are needed in this process. Their number is typically less than the dimension of the polynomial space where the surrogates are sought, and thus typically much less than the number of points of the multi-dimensional sparse rules required for a given level of accuracy. The $\ell_1$-minimization procedure is non-adaptive in the sense that it identifies both the amplitudes of the leading expansion coefficients and their orders. It thus constitutes a promising direction for future developments in practical applications to more complex geometries and flows, where adaptive strategies within the parametric space, weighted $\ell_1$-minimization, or preconditioned sampling sets may be needed.
This work has been supported by the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement #ACP3-GA-2013-605036 (UMRIDA Project www.umrida.eu).
- Krige, D.G. A statistical approach to some basic mine valuation problems on the Witwatersrand. J. Chem. Metal. Mining Soc. South Africa, 52(6):119-139 (1951).
- Kleijnen, J.P.J. Kriging metamodeling in simulation: A review. Eur. J. Operat. Research, 192(3):707-716 (2009).
- Bompard, M. Modèles de substitution pour l’optimisation globale de forme en aérodynamique et méthode locale sans paramétrisation. PhD Thesis, Université Nice-Sophia Antipolis, December 2011.
- Dumont, A., Passaggia, P.-Y., Peter, J., Salah el Din, I., Savin, É. Metamodels for the numerical errors with non intrusive UQ methods. UMRIDA Technical Report D2.3-12(9), Onera, Châtillon (2014).
- Ghanem, R., Spanos, P.D. Stochastic Finite Elements: A Spectral Approach. Springer, New York NY (1991).
- Le Maître, O., Knio, O. Spectral Methods for Uncertainty Quantification. With Applications to Computational Fluid Dynamics. Springer, Dordrecht (2010).
- Smolyak, S.A. Quadrature and interpolation formulas for tensor products of certain classes of functions. Soviet Math. Dokl., 4:240-243 (1963).
- Eldred, M., Burkardt, J. Comparison of non-intrusive polynomial chaos and stochastic collocation methods for uncertainty quantification. AIAA paper #2009-976 (2009).
- Blatman, G., Sudret, B. An adaptive algorithm to build up sparse polynomial chaos expansions for stochastic finite element analysis. Prob. Engng. Mech., 25(2):183-197 (2010).
- Ghiocel, D.M., Ghanem, R.G. Stochastic finite-element analysis of seismic soil-structure interaction. ASCE J. Engng. Mech., 128(1):66-77 (2002).
- Xiu, D., Hesthaven, J.S. High-order collocation methods for differential equations with random inputs. SIAM J. Sci. Comput., 27(3):1118-1139 (2005).
- Doostan, A., Owhadi, H. A non-adapted sparse approximation of PDEs with stochastic inputs. J. Comput. Phys., 230(8):3015-3034 (2011).
- Candès, E.J., Romberg, J.K., Tao, T. Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math., 59(8):1207-1223 (2006).
- Donoho, D.L. Compressed sensing. IEEE Trans. Inform. Theory, 52(4):1289-1306 (2006).
- Fejér, L. Mechanische Quadraturen mit positiven Cotesschen Zahlen. Math. Z., 37(1):287-309 (1933).
- Clenshaw, C.W., Curtis, A.R. A method for numerical integration on an automatic computer. Numer. Math., 2(1):197-205 (1960).
- Trefethen, L.N. Is Gauss quadrature better than Clenshaw-Curtis? SIAM Rev., 50(1):67-87 (2008).
- Gerstner, T., Griebel, M. Numerical integration using sparse grids. Numer. Algorithms, 18(3-4):209-232 (1998).
- Spalart, P., Allmaras, S. A one-equation turbulence model for aerodynamic flows. AIAA Paper #1992-0439 (1992).
- Cook, P.H., McDonald, M.A., and Firmin, M.C.P. Aerofoil RAE 2822 – Pressure distributions, and boundary layer and wake measurements. In Experimental Data Base for Computer Program Assessment. AGARD Advisory Report No. 138, NATO, May 1979; Appendix A6.
- http://www.grc.nasa.gov/WWW/wind/valid/raetaf/raetaf01/raetaf01.html. Accessed April 21, 2015.
- Cambier, L., Heib, S., Plot, S. The Onera elsA CFD software: input from research and feedback from industry. Mechanics & Industry, 14(3):159-174 (2013).
- http://elsa.onera.fr/. Accessed May 30, 2015.
- Garner, H.C., Rogers, E.W.E., Acum, W.E.A., and Maskell, E.C. Subsonic Wind Tunnel Wall Corrections. AGARDograph No. 109, NATO, October 1966; Chapter 6.
- Haase, W., Bradsma, F., Elsholz, E., Leschziner, M., and Schwamborn, D. (eds.). EUROVAL–An European initiative on validation of CFD codes. Notes on Numerical Fluid Mechanics 42, Vieweg Verlag, Wiesbaden (1993); Section 5.1.
- van Leer, B. Towards the ultimate conservative difference scheme, V. A second order sequel to Godunov’s method. J. Comput. Phys., 32(1):101-136 (1979).
- van Albada, G.D., van Leer, B., Roberts, W.W. A comparative study of computational methods in cosmic gas dynamics. Astronomy & Astrophysics 108(1):76-84 (1982).
- Yoon, S.-K., Jameson, A. An LU-SSOR scheme for the Euler and Navier-Stokes equations. AIAA Paper #1987-0600 (1987).
- Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev., 106(4):620-630 (1957).
- Jaynes, E.T. Information theory and statistical mechanics II. Phys. Rev., 108(2):171-190 (1957).
- Xiu, D., Karniadakis, G.E. The Wiener–Askey polynomial chaos for stochastic differential equations. SIAM J. Sci. Comput., 24(2):619-644 (2002).
- Soize, C., Ghanem, R. Physical systems with random uncertainties: chaos representations with arbitrary probability measure. SIAM J. Sci. Comput., 26(2):395-410 (2004).
- Sudret, B. Global sensitivity analysis using polynomial chaos expansions. Reliab. Eng. Syst. Safe., 93(7):964-979 (2008).
- Wasilkowski, G.W., Woźniakowski, H. Explicit cost bounds of algorithms for multivariate tensor product problems, J. Complexity, 11(1):1-56 (1995).
- Novak, E., Ritter, K. Simple cubature formulas with high polynomial exactness. Constructive Approx. 15(4):499-522 (1999).
- Heiss, F., Winschel, V. Likelihood approximation by numerical integration on sparse grids. J. Econometrics, 144(1):62-80 (2008).
- Candès, E.J., Wakin, M.B. An introduction to compressive sampling. IEEE Sig. Proc. Mag., 25(2):21-30 (2008).
- Chkifa, A., Cohen, A., Schwab, C. Breaking the curse of dimensionality in sparse polynomial approximation of parametric PDEs. J. Math. Pure Appl., 103(2):400-428 (2014).
- Chen, S.C., Donoho, D.L., Saunders, M.A. Atomic decomposition by basis pursuit. SIAM J. Sci. Comput., 20(1):33-61 (1998).
- Peng, J., Hampton, J., Doostan, A. A weighted $\ell_1$-minimization approach for sparse polynomial chaos expansions. J. Comput. Phys., 267:92-111 (2014).
- Candès, E., Romberg, J.K. Sparsity and incoherence in compressive sampling. Inverse Probl., 23(3):969-985 (2007).
- Mo, Q., Li, S. New bounds on the restricted isometry constant $\delta_{2k}$. Appl. Comput. Harm. Anal., 31(3):460-468 (2011).
- van den Berg, E., Friedlander, M.P. Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput., 31(2):890-912 (2008).
- van den Berg, E., Friedlander, M.P. SPGL1: A solver for large-scale sparse reconstruction. June 2007.
- Rauhut, H., Ward, R. Sparse Legendre expansions via $\ell_1$-minimization. J. Approx. Th., 164(5):517-533 (2012).
- Yan, L., Guo, L., Xiu, D. Stochastic collocation algorithms using $\ell_1$-minimization. Int. J. Uncertainty Quantification, 2(3):279-293 (2012).
- Yang, X., Karniadakis, G.E. Reweighted $\ell_1$ minimization method for stochastic elliptic differential equations. J. Comput. Phys., 248:87-108 (2013).
- Hampton, J., Doostan, A. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies. J. Comput. Phys., 280:363-386 (2015).