
# Global sensitivity analysis using low-rank tensor approximations

K. Konakli and B. Sudret — Chair of Risk, Safety and Uncertainty Quantification, ETH Zurich, Stefano-Franscini-Platz 5, 8093 Zurich, Switzerland
###### Abstract

In the context of global sensitivity analysis, the Sobol’ indices constitute a powerful tool for assessing the relative significance of the uncertain input parameters of a model. We herein introduce a novel approach for evaluating these indices at low computational cost, by post-processing the coefficients of polynomial meta-models belonging to the class of low-rank tensor approximations. Meta-models of this class can be particularly efficient in representing responses of high-dimensional models, because the number of unknowns in their general functional form grows only linearly with the input dimension. The proposed approach is validated in example applications, where the Sobol’ indices derived from the meta-model coefficients are compared to reference indices, the latter obtained by exact analytical solutions or Monte-Carlo simulation with extremely large samples. Moreover, low-rank tensor approximations are compared with the popular polynomial chaos expansion meta-models in case studies that involve analytical rank-one functions and finite-element models pertinent to structural mechanics and heat conduction. In the examined applications, indices based on the novel approach tend to converge faster to the reference solution with increasing size of the experimental design used to build the meta-model.

Keywords: Global sensitivity analysis – Sobol’ indices – Low-rank approximations – Polynomial chaos expansions

## 1 Introduction

Robust predictions via computer simulation necessitate accounting for the prevailing uncertainties in the parameters of the computational model. Uncertainty quantification provides the mathematically rigorous framework for propagating the uncertainties surrounding the model input to a response quantity of interest. It comprises three fundamental steps (Sudret, 2007; De Rocquigny, 2012): First, the model representing the physical system under consideration is defined; the model maps a given set of input parameters to a unique value of the response quantity of interest. The second step involves the probabilistic description of the input parameters by incorporating available data, expert judgment or a combination of both. In the third step, the uncertainty in the input is propagated upon the response quantity of interest through repeated evaluations of the model for appropriate combinations of the input parameters. In cases when the uncertainty in the response proves excessive or when it is of interest to reduce the dimensionality of the model, sensitivity analysis may be employed to rank the input parameters with respect to their significance for the response variability. Accordingly, important parameters may be described in further detail, whereas unimportant ones may be fixed to nominal values.

Methods of sensitivity analysis can be generally classified into local and global methods. Local methods are limited to examining effects of variations of the input parameters in the vicinity of nominal values. Global methods provide more complete information by accounting for variations of the input parameters in their entire domain. Under the simplifying assumption of linear or nearly linear behavior of the model, global sensitivity measures can be computed by fitting a linear-regression model to a set of input samples and the respective responses (see e.g. Iooss and Lemaître (2014); Wei et al. (2015) for definitions of such measures). The same methods can be employed in cases with models that behave nonlinearly but monotonically, after applying a rank transformation on the available data. Variance-based methods represent a more powerful and versatile approach, also applicable to nonlinear and non-monotonic models. These methods, known as functional ANOVA (denoting ANalysis Of VAriance) techniques in statistics, rely upon the decomposition of the response variance as a sum of contributions from each input parameter or their combinations (Efron and Stein, 1981). The Sobol’ indices, originally proposed in Sobol’ (1993), constitute the most popular tool thereof. Although these indices have proven powerful in a wide range of applications, their definition is ambiguous in cases with dependent input variables, which has led to different extensions of the original framework (Da Veiga et al., 2009; Li et al., 2010; Kucherenko et al., 2012; Mara and Tarantola, 2012; Zhang et al., 2015). An alternative perspective is offered by the distribution-based indices, which are well-defined regardless of the dependence structure of the input (Borgonovo, 2007; Liu and Homma, 2010; Borgonovo et al., 2014; Zhou et al., 2015; Greegar and Manohar, 2016). 
The key idea is to use an appropriate distance measure to evaluate the effect of suppressing the uncertainty in selected variables on the Probability Density Function (PDF) or the Cumulative Distribution Function (CDF) of the response. These indices are especially useful when consideration of the variance only is not deemed sufficient to characterize the response uncertainty. However, contrary to the Sobol’ indices, they do not sum up to unity, which may hinder interpretation. For further information on global sensitivity analysis methods, the interested reader is referred to the review papers (Saltelli et al., 2008; Iooss and Lemaître, 2014; Wei et al., 2015; Borgonovo and Plischke, 2016).

It should be mentioned that different classifications of sensitivity analysis techniques can be found in the literature. In cases when one needs to perform a fast exploration of the model behavior with respect to a possibly large number of uncertain input parameters, the so-called screening methods may be employed. These methods can provide a preliminary ranking of the importance of the various input parameters at low computational cost before more precise and costly methods are applied. The Cotter method (Cotter, 1979) and the Morris method (Morris, 1991) are widely used screening methods, with the latter covering the input space in a more exhaustive manner. The more recently proposed derivative-based global sensitivity measures can also be classified into this category, while they also provide upper bounds for the Sobol’ indices (Sobol’ and Kucherenko, 2009; Lamboni et al., 2013; Sudret and Mai, 2015).

The focus of the present paper is on sensitivity analysis by means of Sobol’ indices. We limit our attention to cases with independent input and address the computation of these indices for high-dimensional expensive-to-evaluate models, which are increasingly used across engineering and sciences. Various methods have been investigated for computing the Sobol’ indices based on Monte Carlo simulation (Archer et al., 1997; Sobol’, 2001; Saltelli, 2002; Sobol’ and Kucherenko, 2005; Saltelli et al., 2010); because of the large number of model evaluations required, these methods are not affordable for computationally costly models. To overcome this limitation, more efficient estimators have recently been proposed (Sobol et al., 2007; Janon et al., 2013; Kucherenko et al., 2015; Owen, 2013). A different approach is to substitute a complex model by a meta-model, which has similar statistical properties while maintaining a simple functional form (see e.g. Oakley and O’Hagan (2004); Marrel et al. (2009); Storlie et al. (2009); Zuniga et al. (2013); Zhang and Pandey (2014); Le Gratiet et al. (2016) for global sensitivity analysis with various types of meta-models). Sudret (2008) proposed to compute the Sobol’ indices by post-processing the coefficients of Polynomial Chaos Expansion (PCE) meta-models. The key concept in PCE is to expand the model response onto a basis made of orthogonal multivariate polynomials in the input variables. The computational cost of the associated Sobol’ indices essentially reduces to the cost of estimating the PCE coefficients, which can be curtailed by using sparse PCE (Blatman and Sudret, 2010b). 
The PCE-based approach for computing the Sobol’ indices is employed by a growing number of researchers in various fields including hydrogeology (Fajraoui et al., 2011; Formaggia et al., 2013; Deman et al., 2015), geotechnics (Al Bittar and Soubra, 2013), ocean engineering (Alexanderian et al., 2012), biomedical engineering (Huberts et al., 2014), hybrid dynamic simulation (Abbiati et al.) and electromagnetism (Kersaudy et al., 2014; Yuzugullu et al., 2015). Unfortunately, the PCE approach faces the so-called curse of dimensionality, meaning the exploding size of the candidate basis with increasing dimension.

The goal of this paper is to derive a novel approach for solving global sensitivity analysis problems in high dimensions. To this end, we make use of a recently emerged technique for building meta-models with polynomial functions based on Low-Rank Approximations (LRA) (Nouy, 2010; Chevreuil et al., 2013; Doostan et al., 2013; Hadigol et al., 2014; Validi, 2014; Konakli and Sudret, 2015b). LRA express the model response as a sum of a small number of rank-one tensors, which are products of univariate functions in each of the input parameters. Because the number of unknown coefficients in LRA grows only linearly with the input dimension, this technique is particularly promising for dealing with cases of high dimensionality. We herein derive analytical expressions for the Sobol’ sensitivity indices based on the general functional form of LRA with polynomial bases. As in the case of PCE, the computational cost of the LRA-based Sobol’ indices reduces to the cost of estimating the coefficients of the meta-model. Once a LRA meta-model of the response quantity of interest is available, the Sobol’ indices can be calculated with elementary operations at nearly zero additional computational cost.

The paper is organized as follows: In Section 2, we review the basic concepts of Sobol’ sensitivity analysis and define the corresponding sensitivity indices. In Section 3, we describe the mathematical setup of non-intrusive meta-modeling and define error measures that characterize the meta-model accuracy. After reviewing the computation of Sobol’ indices using PCE meta-models in Section 4, we introduce the LRA-based approach in Section 5, where we first detail a greedy algorithm for the construction of LRA in a non-intrusive manner and then use their general functional form to derive analytical expressions for the Sobol’ indices. In Section 6, we demonstrate the accuracy of the proposed method by comparing the LRA-based indices to reference ones, with the latter representing the exact solution or Monte-Carlo estimates relying on a large sample of responses of the actual model. Furthermore, we examine the comparative performance of the LRA- and PCE-based approaches in example applications that involve analytical rank-one functions and finite-element models pertinent to structural mechanics and heat conduction. The main findings are summarized in Section 7.

## 2 Sobol’ Sensitivity analysis

We consider a computational model describing the behavior of a physical or engineered system of interest. Let X denote the M-dimensional input random vector of the model with prescribed joint PDF fX. Due to the input uncertainties embodied in X, the model response becomes random. By limiting our focus to a scalar response quantity Y, the computational model represents the map:

 X∈DX⊂RM⟼Y=M(X)∈R, (1)

where DX denotes the support of X.

Sobol’ sensitivity analysis aims at apportioning the uncertainty in Y, described by its variance, to contributions arising from the uncertainty in individual input variables and their interactions. As explained in the Introduction, the theoretical framework described in the sequel is confined to the case when the components of X are independent. Under this assumption, the joint PDF fX is the product of the marginal PDFs of the input parameters.

### 2.1 Sobol’ decomposition

Assuming that M is a square-integrable function with respect to the probability measure associated with fX, its Sobol’ decomposition into summands of increasing dimension is given by (Sobol’, 1993):

 M(X)=M0+M∑i=1Mi(Xi)+∑1≤i<j≤MMi,j(Xi,Xj)+…+M1,2,…,M(X1,…,XM), (2)

where u={i1,…,is}, 1≤s≤M, denotes a subset of {1,…,M} and Xu is the subvector of X containing the variables whose indices comprise u.

The uniqueness of the decomposition is ensured by choosing summands that satisfy the conditions:

 M0=E[M(X)] (3)

and

 E[Mu(Xu)Mv(Xv)]=0,u≠v. (4)

Note that the above condition implies that all summands Mu, u≠∅, in Eq. (2) have zero mean values. A recursive construction of summands satisfying the above conditions is obtained as:

 Mi(Xi)=E[M(X)|Xi]−M0Mi,j(Xi,Xj)=E[M(X)|Xi,Xj]−Mi(Xi)−Mj(Xj)−M0 (5)

and so on. By introducing:

 ˜Mu(Xu)=E[M(X)|Xu], (6)

the above recursive relationship is written in the general form:

 Mu(Xu)=˜Mu(Xu)−∑v⊂uv≠uMv(Xv),M∅≡M0. (7)

### 2.2 Sobol’ sensitivity indices

The uniqueness and orthogonality properties of the Sobol’ decomposition allow the following decomposition of the variance of Y:

 D=Var[M(X)]=M∑i=1Di+∑1≤i<j≤MDi,j+…+D1,2,…,M, (8)

where Du denotes the partial variance:

 Du=Var[Mu(Xu)]=E[(Mu(Xu))2]. (9)

The Sobol’ index Su, defined as:

 Su=DuD=Var[Mu(Xu)]Var[M(X)], (10)

represents the fraction of the total variance that is due to the interaction between the components of Xu, i.e. it describes the influence arising from the joint action of the variables {Xi, i∈u}. By definition, the Sobol’ indices satisfy:

 M∑i=1Si+∑1≤i<j≤MSi,j+…+S1,2,…,M=1. (11)

Accordingly, the first-order index Si for a single variable Xi is defined as:

 Si=DiD=Var[Mi(Xi)]Var[M(X)] (12)

and describes the influence of Xi considered separately, also called the main effect. It is noted that the first-order Sobol’ indices are equivalent to the first-order indices obtained by the Fourier Amplitude Sensitivity Test (FAST) method (Cukier et al., 1978; Saltelli et al., 1999). The total Sobol’ index, denoted by STi, represents the total effect of Xi, accounting for its main effect and all interactions with other input variables. It is derived from the sum of all partial indices Su that involve the variable Xi:

 STi=∑u⊂{1,…,M}i∈uSu. (13)

First-order and total Sobol’ indices are also defined for groups of variables (Sobol’, 1993; Kucherenko et al., 2012; Owen, 2013). We herein denote by ˜Su and ˜STu, respectively, the first-order and total Sobol’ indices of the subvector Xu of X. The first-order index ˜Su describes the influence of the elements of Xu only, including their main effects and all interactions among them:

 ˜Su=∑v⊂uv≠∅Sv. (14)

The total index ˜STu describes the total effect of the components of Xu, including their main effects and all interactions with any other variables in X:

 ˜STu=∑v⊂{1,…,M}v∩u≠∅Sv. (15)

The total index can equivalently be obtained as:

 ˜STu=1−˜S∖u, (16)

where ∖u denotes the complementary set of u, so that u∪(∖u)={1,…,M}. For u={i}, Eq. (14) and Eq. (15) respectively reduce to Eq. (12) and Eq. (13).

To elaborate on the above definitions of the Sobol’ indices, let us consider a model with three input variables, i.e. X={X1,X2,X3}. Setting u={1,2} for instance, the first-order and total Sobol’ indices of the subvector (X1,X2) are respectively given by:

 ˜S1,2=S1+S2+S1,2 (17)

and

 ˜ST1,2=S1+S2+S1,2+S1,3+S2,3+S1,2,3=1−S3. (18)

Eq. (17) follows from the specialization of Eq. (14) to the considered case. In Eq. (18), the first equality is consistent with the definition in Eq. (15), whereas the second follows from Eq. (16).
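As a concrete numerical illustration of these identities, consider the toy model Y = X1 + X2 + X1·X2 with X1, X2 independent and uniform on (0,1); this hypothetical example is not taken from the paper, but its Sobol’ decomposition is available in closed form, giving S1 = S2 = 27/55 and S1,2 = 1/55. The sketch below checks the closed-form values against a standard pick-freeze Monte-Carlo estimate of the first-order index.

```python
import numpy as np

# Toy model with an interaction term: Y = X1 + X2 + X1*X2,
# X1, X2 independent Uniform(0, 1).  Closed-form Sobol' decomposition:
#   M0      = 5/4
#   M1(x1)  = 1.5*x1 - 0.75          -> D1  = 2.25/12 = 27/144
#   M2(x2)  = 1.5*x2 - 0.75          -> D2  = 27/144
#   M12     = (x1 - 0.5)*(x2 - 0.5)  -> D12 = 1/144
# hence D = 55/144, S1 = S2 = 27/55, S12 = 1/55, and the indices sum to one (Eq. (11)).
S1_exact, S12_exact = 27 / 55, 1 / 55

def model(x):
    return x[:, 0] + x[:, 1] + x[:, 0] * x[:, 1]

# Pick-freeze Monte-Carlo estimate of the first-order index S1
rng = np.random.default_rng(0)
N = 200_000
A, B = rng.random((N, 2)), rng.random((N, 2))
C1 = B.copy()
C1[:, 0] = A[:, 0]                  # freeze X1, resample X2
yA, yC1 = model(A), model(C1)
S1_hat = (np.mean(yA * yC1) - yA.mean() * yC1.mean()) / yA.var()
print(f"S1 exact = {S1_exact:.4f}, Monte-Carlo estimate = {S1_hat:.4f}")
```

Note how the interaction term contributes only 1/55 of the variance here, so the total index ST1 = 28/55 barely exceeds the first-order index S1 = 27/55.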

Note that the first-order index of the subvector Xu can alternatively be obtained as:

 ˜Su=Var[E[M(X)|Xu]]Var[M(X)]=Var[˜Mu(Xu)]Var[M(X)]. (19)

The equivalence between the above equation and Eq. (14) can be verified easily by considering the recursive relationship in Eq. (7) and the orthogonality property in Eq. (4). For the total index of the subvector Xu, Eq. (16) in conjunction with Eq. (19) leads to:

 ˜STu=1−Var[E[M(X)|X∖u]]Var[M(X)]=1−Var[˜M∖u(X∖u)]Var[M(X)]. (20)

The Sobol’ indices can be computed by estimating the variance of the summands in Eq. (7) using sampling-based formulas (Sobol’, 1993; Saltelli, 2002). However, this computation typically requires a large number of model evaluations per index to obtain accurate estimates; it thus becomes cumbersome or even intractable in cases when a single evaluation of the model is time-consuming. In Sections 4 and 5, we demonstrate that the Sobol’ indices can be evaluated analytically in terms of the coefficients of LRA or PCE meta-models. Consequently, once a PCE or LRA representation of Y is available, the Sobol’ indices can be obtained at nearly zero additional computational cost. In the case of PCE meta-models, the derivation is based on Eq. (14) and Eq. (15), following the original idea presented in Sudret (2008). In the case of LRA meta-models, analytical expressions for the Sobol’ indices are herein derived by relying on Eq. (19) and Eq. (20).

## 3 Non-intrusive meta-modeling and error estimation

A meta-model is an analytical function that mimics the behavior of the model M in Eq. (1). In non-intrusive approaches, which are of interest herein, the meta-model is developed using a set of realizations of the input vector X, called the Experimental Design (ED), and the corresponding responses of the original model. Thus, non-intrusive meta-modeling treats the original model as a black box.

In the following, we describe measures of accuracy of the meta-model response ˆY=ˆM(X). To this end, we introduce the discrete semi-norm of a function a, given by:

 ∥a∥X=(1nn∑i=1a2(xi))1/2, (21)

where X={x1,…,xn} denotes a set of n realizations of the input vector.

A good measure of the accuracy of the meta-model is the generalization error ErrG, which represents the mean-square of the difference M(X)−ˆM(X). ErrG can be estimated by:

 ˆErrG=∥∥M−ˆM∥∥2Xval, (22)

where Xval is a sufficiently large set of realizations of the input vector, called the validation set. The estimate of the relative generalization error, denoted by ˆerrG, is obtained by normalizing ˆErrG with the empirical variance of the set of model responses at the validation set.

However, meta-models are typically used in cases when only a limited number of model evaluations is affordable. It is thus desirable to obtain an estimate of ErrG by relying solely on the ED. One such error estimate is the empirical error ˆErrE, given by:

 ˆErrE=∥∥M−ˆM∥∥2E, (23)

in which the subscript E indicates that the semi-norm is evaluated at the points of the ED. The relative empirical error, denoted by ˆerrE, is obtained by normalizing ˆErrE with the empirical variance of the set of model responses at the ED. Unfortunately, ˆErrE tends to underestimate the actual generalization error, severely so in cases of overfitting. Indeed, it can even decrease down to zero if the obtained meta-model interpolates the data at the ED, while it is not necessarily accurate at other points.

By using the information contained in the ED only, a fair approximation of the generalization error can be obtained with cross-validation techniques. In k-fold cross-validation, (i) the ED is randomly partitioned into k sets of approximately equal size, (ii) for i=1,…,k, a meta-model is built considering all but the i-th partition and the excluded set is used to evaluate the respective generalization error, (iii) the generalization error of the meta-model built with the full ED is estimated as the average of the errors of the meta-models obtained in (ii). Leave-one-out cross-validation corresponds to the case k=N, with N denoting the size of the ED, i.e. one point of the ED is set apart in turn (Allen, 1971).
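The contrast between the optimistic empirical error of Eq. (23) and a cross-validation estimate is easy to demonstrate. The sketch below is a generic illustration (not the authors' implementation): it fits a deliberately overparameterized polynomial to noisy data and compares the two error estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 30
x = rng.uniform(-1, 1, N)
y = x**2 + 0.1 * rng.standard_normal(N)   # noisy quadratic data

def fit_predict(x_tr, y_tr, x_te, degree=9):
    """OLS polynomial fit, deliberately overparameterized."""
    coeffs = np.polyfit(x_tr, y_tr, degree)
    return np.polyval(coeffs, x_te)

# Relative empirical error (cf. Eq. (23)): evaluated on the ED itself
err_emp = np.mean((y - fit_predict(x, y, x))**2) / y.var()

# k-fold cross-validation: average the validation errors of k refits
def k_fold_cv_error(x, y, k=3):
    idx = np.array_split(rng.permutation(len(x)), k)
    sq_errs = []
    for i in range(k):
        te = idx[i]                                           # held-out partition
        tr = np.concatenate([idx[j] for j in range(k) if j != i])
        y_hat = fit_predict(x[tr], y[tr], x[te])
        sq_errs.append(np.mean((y[te] - y_hat)**2))
    return np.mean(sq_errs) / y.var()

err_cv = k_fold_cv_error(x, y, k=3)
print(f"relative empirical error = {err_emp:.3g}, 3-fold CV error = {err_cv:.3g}")
```

For this overfit model the empirical error is well below the cross-validation estimate, illustrating why the ED-only error of Eq. (23) should not be trusted on its own.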

## 4 Sobol’ sensitivity analysis using polynomial chaos expansions

### 4.1 Formulation and construction of polynomial chaos expansions

A meta-model of Y in Eq. (1) belonging to the class of PCE has the form (Xiu and Karniadakis, 2002):

 YPCE=MPCE(X)=∑α∈AyαΨα(X), (24)

where A⊂NM denotes a set of multi-indices α=(α1,…,αM), {Ψα, α∈A} is a set of multivariate polynomials that are orthonormal with respect to fX and {yα, α∈A} is the set of polynomial coefficients. The orthonormality condition reads:

 E[Ψα(X)Ψβ(X)]=δαβ, (25)

where δαβ is the Kronecker delta, equal to one if α=β and zero otherwise.

The multivariate polynomials that comprise the PCE basis are obtained by tensorization of appropriate univariate polynomials:

 Ψα(X)=M∏i=1P(i)αi(Xi). (26)

In the above equation, P(i)αi is a polynomial of degree αi in the i-th input variable, belonging to a family of polynomials that are orthonormal with respect to fXi, thus satisfying:

 E[P(i)j(Xi)P(i)k(Xi)]=δjk. (27)

For standard distributions, the associated family of orthonormal polynomials is well-known, e.g. a standard normal variable is associated with the family of Hermite polynomials, whereas a uniform variable over [−1,1] is associated with the family of Legendre polynomials. A general case can be treated through an isoprobabilistic transformation of the input random vector to a basic random vector, e.g. a vector with independent standard normal components or independent uniform components over [−1,1].
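The orthonormality relation of Eq. (27) is easy to verify numerically for these families. The sketch below (illustrative, using NumPy's polynomial modules) checks it by Gauss quadrature for the normalized probabilists' Hermite family (standard normal weight) and the normalized Legendre family (uniform weight on [−1,1]).

```python
import numpy as np
from math import factorial, sqrt, pi

# Probabilists' Hermite polynomials He_n are orthogonal w.r.t. exp(-x^2/2);
# dividing by sqrt(n!) makes them orthonormal w.r.t. the standard normal PDF.
x, w = np.polynomial.hermite_e.hermegauss(20)   # Gauss nodes/weights, weight exp(-x^2/2)
def hermite_on(n, x):
    c = np.zeros(n + 1); c[n] = 1.0
    return np.polynomial.hermite_e.hermeval(x, c) / sqrt(factorial(n))

G_hermite = np.array([[np.sum(w * hermite_on(j, x) * hermite_on(k, x)) / sqrt(2 * pi)
                       for k in range(5)] for j in range(5)])

# Legendre polynomials P_n are orthogonal on [-1, 1]; multiplying by
# sqrt(2n+1) makes them orthonormal w.r.t. the Uniform(-1, 1) PDF (density 1/2).
x, w = np.polynomial.legendre.leggauss(20)
def legendre_on(n, x):
    c = np.zeros(n + 1); c[n] = 1.0
    return np.polynomial.legendre.legval(x, c) * sqrt(2 * n + 1)

G_legendre = np.array([[0.5 * np.sum(w * legendre_on(j, x) * legendre_on(k, x))
                        for k in range(5)] for j in range(5)])

# Both Gram matrices reduce to the identity, as required by Eq. (27)
print(np.allclose(G_hermite, np.eye(5)), np.allclose(G_legendre, np.eye(5)))
```

Since the integrands are polynomials of degree at most 8, the 20-point Gauss rules evaluate the expectations exactly up to round-off.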

The set of multi-indices A in Eq. (24) is determined by an appropriate truncation scheme. A typical scheme consists in selecting multivariate polynomials up to a total degree pt, i.e. A={α∈NM:∥α∥1≤pt}, with ∥α∥1=α1+…+αM. The corresponding number of terms in the truncated series is:

 cardA=(M+ptpt)=(M+pt)!M!pt!, (28)

which grows rapidly with M and pt, thus giving rise to the so-called curse of dimensionality. Because the terms that include interactions between input variables are usually less significant, Blatman and Sudret (2010a) proposed to use a hyperbolic truncation scheme, where the set of retained multi-indices is defined as A={α∈NM:∥α∥q≤pt}, with the q-norm given by:

 ∥α∥q=(M∑i=1αiq)1/q,0<q≤1. (29)

According to Eq. (29), the lower the value of q, the smaller the number of interaction terms in the PCE basis. The case q=1 corresponds to the common truncation scheme using a maximal total degree pt, whereas in the limit q→0 only univariate polynomials are retained, yielding an additive model.
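The sparsification effect of the hyperbolic truncation can be quantified by enumeration. The snippet below (illustrative) counts the retained multi-indices for M = 5 and pt = 5, recovering the binomial count of Eq. (28) for q = 1 and a much smaller basis for q = 0.5.

```python
from itertools import product
from math import comb

def card_hyperbolic(M, pt, q):
    """Number of multi-indices alpha with ||alpha||_q <= pt (Eq. (29))."""
    count = 0
    for alpha in product(range(pt + 1), repeat=M):
        # small tolerance guards against floating-point round-off at the boundary
        if sum(a**q for a in alpha) ** (1.0 / q) <= pt + 1e-9:
            count += 1
    return count

M, pt = 5, 5
full = card_hyperbolic(M, pt, q=1.0)      # q=1: total-degree truncation
sparse = card_hyperbolic(M, pt, q=0.5)    # q=0.5: hyperbolic truncation
print(full, comb(M + pt, pt), sparse)
```

For q = 1 the count equals the binomial coefficient (M+pt choose pt) = 252 of Eq. (28); for q = 0.5 only 36 multi-indices survive, almost all interaction terms having been discarded.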

Once the basis has been specified, the set of coefficients may be computed by minimizing the mean-square error of the approximation over the ED. More efficient solution schemes can be devised by considering the respective regularized problem:

 y=argminυ∈RcardA∥∥ ∥∥M−∑α∈AυαΨα∥∥ ∥∥2E+λP(υ), (30)

in which P(υ) is an appropriate regularization functional of υ and λ≥0 is a penalty coefficient. If P(υ) is selected as the L1-norm of υ, insignificant terms may be disregarded from the set of predictors, leading to sparse solutions. Blatman and Sudret (2011) proposed to use the hybrid Least Angle Regression (LAR) method for building sparse PCE. This method employs the LAR algorithm (Efron et al., 2004) to select the best set of predictors and subsequently estimates the coefficients using Ordinary Least Squares (OLS). Other techniques to derive sparse expansions can be found in e.g. Doostan and Owhadi (2011); Yang and Karniadakis (2013).

A good measure of the PCE accuracy is the leave-one-out error. As mentioned in Section 3, this corresponds to the cross-validation error for the case k=N. Using algebraic manipulations, the leave-one-out error can be computed based on a single PCE that is built with the full ED. Let hi denote the i-th diagonal term of the matrix Ψ(ΨTΨ)−1ΨT, where Ψ denotes the matrix collecting the evaluations of the basis polynomials at the points of the ED. The leave-one-out error can then be computed as (Blatman, 2009):

 ˆErrLOO=∥∥∥M−MPCE1−h∥∥∥2E. (31)

The relative leave-one-out error, denoted by ˆerrLOO, is obtained by normalizing ˆErrLOO with the empirical variance of the set of model responses at the ED. Because ˆerrLOO may be too optimistic, the following corrected estimate may be used instead (Chapelle et al., 2002):

 ˆerr∗LOO=ˆerrLOO(1−cardAN)−1(1+tr((ΨTΨ)−1)). (32)

This corrected leave-one-out error is a good compromise between fair error estimation and affordable computational cost.
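Eq. (31) is a standard OLS identity: the leave-one-out residuals follow from a single fit via the diagonal of the "hat" matrix, with no need for N refits. The sketch below (illustrative, for a generic polynomial regression basis rather than an actual PCE) verifies the identity against explicit refits.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 40, 6
x = rng.uniform(-1, 1, N)
Psi = np.vander(x, P, increasing=True)      # generic regression matrix (monomial basis)
y = np.cos(3 * x) + 0.05 * rng.standard_normal(N)

# Single OLS fit and hat-matrix diagonal h_i (cf. Eq. (31))
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)
h = np.diag(Psi @ np.linalg.inv(Psi.T @ Psi) @ Psi.T)
err_loo_fast = np.mean(((y - Psi @ coef) / (1 - h)) ** 2)

# Brute force: refit N times, each time leaving one point out
res = []
for i in range(N):
    mask = np.arange(N) != i
    c_i, *_ = np.linalg.lstsq(Psi[mask], y[mask], rcond=None)
    res.append(y[i] - Psi[i] @ c_i)
err_loo_slow = np.mean(np.asarray(res) ** 2)

print(np.isclose(err_loo_fast, err_loo_slow))
```

The two estimates agree to machine precision, which is why the leave-one-out error can be reported at negligible cost once the PCE has been fit.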

### 4.2 Computation of Sobol’ sensitivity indices using polynomial chaos expansions

Let us consider the PCE representation in Eq. (24). It is straightforward to obtain the Sobol’ decomposition of MPCE in an analytical form by observing that the summands in Eq. (2) can be written as:

 MPCEu(Xu)=∑α∈AuyαΨα(X),Au={α∈A|αk≠0⇔k∈u} (33)

and

 MPCE0=y0. (34)

Because of the orthonormality of the PCE basis, the conditions in Eq. (3) and Eq. (4) are satisfied, ensuring the uniqueness of the decomposition. Note that Eq. (3) in conjunction with Eq. (34) leads to:

 E[MPCE(X)]=y0. (35)

Furthermore, the orthonormality of the PCE basis makes it possible to obtain the total variance and any partial variance of YPCE by a mere combination of the squares of the PCE coefficients. These quantities are respectively given by:

 DPCE=Var[MPCE(X)]=∑α∈A∖{0}yα2 (36)

and

 DPCEu=Var[MPCEu(Xu)]=∑α∈Auyα2. (37)

By utilizing Eq. (36) and Eq. (37), the first-order and total Sobol’ indices of a subvector Xu of X are respectively obtained as:

 ˜SPCEu=∑α∈˜Auyα2/DPCE,˜Au={α∈A∖{0}|αi=0∀i∉u} (38)

and

 ˜ST,PCEu=∑α∈˜ATuyα2/DPCE,˜ATu={α∈A|∃i∈u:αi>0}. (39)

To elaborate on the above equations, let us consider a model with three input variables, i.e. X={X1,X2,X3}, and focus on the case u={1,2}. Then, the set ˜Au includes the multi-indices of the form (α1,0,0), (0,α2,0) or (α1,α2,0), where αi, i=1,2,3, denotes a non-zero element. The set ˜ATu is a superset of ˜Au, additionally including the multi-indices of the form (α1,0,α3), (0,α2,α3) or (α1,α2,α3). Note that the set Au in Eq. (33) includes only the multi-indices of the form (α1,α2,0).

By specializing Eq. (38) and Eq. (39) to the case u={i}, the first-order and total Sobol’ indices of a single variable Xi are respectively obtained as:

 SPCEi=∑α∈Aiyα2/DPCE,Ai={α∈A|αi>0,αj=0∀j≠i} (40)

and

 ST,PCEi=∑α∈ATiyα2/DPCE,ATi={α∈A|αi>0}. (41)
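In practice, Eqs. (36)-(41) reduce to a few set operations on the list of multi-indices. Below is a minimal sketch; the index set and coefficient values are hypothetical, chosen only to make the arithmetic easy to follow.

```python
# Hypothetical sparse PCE for M = 2: multi-indices alpha mapped to coefficients y_alpha
coeffs = {(0, 0): 2.0, (1, 0): 3.0, (0, 1): 4.0, (1, 1): 1.0}

# Total variance (Eq. (36)): sum of squared coefficients, mean term alpha = 0 excluded
D = sum(c**2 for a, c in coeffs.items() if any(a))   # 9 + 16 + 1 = 26

def first_order(i):
    """S_i, Eq. (40): alpha_i > 0 and all other components zero."""
    return sum(c**2 for a, c in coeffs.items()
               if a[i] > 0 and all(a[j] == 0 for j in range(len(a)) if j != i)) / D

def total(i):
    """S_i^T, Eq. (41): alpha_i > 0, regardless of the other components."""
    return sum(c**2 for a, c in coeffs.items() if a[i] > 0) / D

# S1 = 9/26, S1^T = 10/26, S2 = 16/26, S2^T = 17/26
print(first_order(0), total(0), first_order(1), total(1))
```

The total index of each variable exceeds its first-order index by the interaction contribution 1/26 carried by the multi-index (1,1), and the first-order indices plus that interaction term sum to one, as required by Eq. (11).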

## 5 Sobol’ sensitivity analysis using low-rank tensor approximations

### 5.1 Formulation and construction of low-rank tensor approximations

Let us consider again the mapping in Eq. (1). A rank-one function of the input vector X has the form:

 w(X)=M∏i=1v(i)(Xi), (42)

where v(i) denotes a univariate function of Xi. A representation of M(X) as a sum of a finite number of rank-one functions constitutes a canonical decomposition, with rank equal to the number of rank-one components. Naturally, of interest are representations where the exact response is approximated with sufficient accuracy by using a relatively small number of terms, leading to the name low-rank approximations.

A meta-model of belonging to the class of LRA can be written as:

 YLRA=MLRA(X)=R∑l=1bl(M∏i=1v(i)l(Xi)), (43)

where R is the rank of the approximation, v(i)l is a univariate function of Xi in the l-th rank-one component and bl, l=1,…,R, are scalars that can be viewed as normalizing constants. In order to obtain a representation in terms of polynomial functions, we expand v(i)l onto a polynomial basis {P(i)k} that is orthonormal with respect to fXi, i.e. satisfies the condition in Eq. (27). This leads to the expression:

 v(i)l(Xi)=pi∑k=0z(i)k,lP(i)k(Xi), (44)

where P(i)k is the k-th degree univariate polynomial in the i-th input variable, pi is the maximum degree retained in dimension i and z(i)k,l is the coefficient of P(i)k in the l-th rank-one term. Appropriate families of univariate polynomials satisfying the orthonormality condition can be determined as discussed in Section 4.1. By substituting Eq. (44) into Eq. (43), we obtain:

 YLRA=MLRA(X)=R∑l=1bl(M∏i=1(pi∑k=0z(i)k,lP(i)k(Xi))). (45)

Disregarding the redundant parameterization arising from the normalizing constants, the number of unknowns in the above equation is R∑Mi=1(pi+1). Note that this number grows only linearly with the input dimension M. A representation of Y in the form of LRA drastically reduces the number of unknowns compared to PCE. In order to emphasize this, we consider PCE with the candidate basis determined by the tensor-product truncation scheme {α∈NM:αi≤pi, i=1,…,M}, so that the expansion relies on the same polynomial functions as those used in Eq. (45). For the case when pi=p, i=1,…,M, the resulting number of unknowns is (p+1)^M in PCE versus R·M·(p+1) in LRA. Considering a typical engineering problem with M=10 and low-degree polynomials with p=3, the aforementioned formulas yield 4^10≈10^6 unknowns in PCE versus 40R unknowns in LRA; for a low rank, say R≤5, the number of unknowns in LRA does not exceed a mere 200. In other words, the LRA meta-model constitutes a compressed representation of a PCE meta-model, with the basis of the latter defined by constraining the maximum polynomial degree in each dimension. As will be seen in the following, the compressed formulation leads to a fundamentally different construction algorithm.
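Counting the unknowns is pure arithmetic. The snippet below illustrates the compression, assuming (as above) that the candidate PCE basis is the full tensor-product set {α: αi≤p} and using the M = 10, p = 3 setting mentioned in the text.

```python
# Number of unknowns: tensor-product PCE basis versus rank-R LRA
M, p = 10, 3

n_pce = (p + 1) ** M                  # all multi-indices alpha with alpha_i <= p
def n_lra(R):
    return R * M * (p + 1)            # R rank-one terms, M factors, p+1 coefficients each

print(n_pce)                          # about a million unknowns in PCE
for R in (1, 3, 5):
    print(R, n_lra(R))                # tens to a couple hundred unknowns in LRA
```

The ratio n_pce / n_lra(R) exceeds five thousand even for R = 5, which is the essence of the "compressed representation" argument.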

Non-intrusive algorithms proposed in the literature for building LRA in the form of Eq. (45) rely on Alternated Least-Squares (ALS) minimization for the computation of the polynomial coefficients (Chevreuil et al., 2013; Doostan et al., 2013; Rai, 2014). The ALS approach consists in sequentially solving a least-squares minimization problem along each dimension separately, while “freezing” the coefficients in all remaining dimensions. We herein employ the greedy algorithm proposed in Chevreuil et al. (2013) and further investigated in Konakli and Sudret (2015a), which involves progressive increase of the rank by successively adding rank-one components.

Let MLRAr denote the rank-r approximation of Y:

 YLRAr=MLRAr(X)=r∑l=1blwl(X), (46)

with:

 wl(X)=M∏i=1v(i)l(Xi)=M∏i=1(pi∑k=0z(i)k,lP(i)k(Xi)). (47)

The employed algorithm comprises a sequence of pairs of a correction step and an updating step, so that the r-th correction step yields the rank-one term wr and the r-th updating step yields the set of coefficients {b1,…,br}. These steps are detailed in the sequel.

Correction step: Let Rr denote the residual after the completion of the r-th iteration:

 Rr(X)=M(X)−MLRAr(X). (48)

The sequence is initiated by setting MLRA0=0, leading to R0=M. Based on the available experimental design, the rank-one tensor wr is obtained as the solution of:

 wr=argminω∈W∥Rr−1−ω∥2E, (49)

where W represents the space of rank-one tensors. By employing an ALS scheme, the minimization problem in Eq. (49) is substituted by a series of smaller minimization problems, each involving the coefficients along one dimension only:

 z(j)r=argminζ∈Rpj∥∥ ∥∥Rr−1−⎛⎝∏i≠jv(i)r⎞⎠(pj∑k=0ζkP(j)k)∥∥ ∥∥2E,j=1,…,M. (50)

The correction step is initiated by assigning arbitrary values to the coefficient vectors z(i)r, i=1,…,M, e.g. unity values, and may involve several iterations over the set of dimensions {1,…,M}. The stopping criterion proposed in Konakli and Sudret (2015a) combines the number of iterations, denoted by Ir, with the decrease in the relative empirical error between two successive iterations, denoted by Δˆerrr, where the relative empirical error is given by:

 ˆerrr=∥Rr−1−wr∥2E^Var[Y]. (51)

In the above equation, ^Var[Y] represents the empirical variance of the set comprising the model responses at the ED. Accordingly, the algorithm exits the r-th correction step if either Ir reaches a maximum allowable value, denoted by Imax, or Δˆerrr becomes smaller than a prescribed threshold, denoted by Δˆerrmin. Konakli and Sudret (2015a) showed that the choice of Imax and Δˆerrmin may have a significant effect on the accuracy of LRA; based on a number of example investigations, they proposed appropriate values for these parameters.

Updating step: After the completion of a correction step, the algorithm moves to an updating step, in which the set of coefficients b={b1,…,br} is obtained as the solution of the minimization problem:

 b=argminβ∈Rr∥∥ ∥∥M−r∑l=1βlwl∥∥ ∥∥2E. (52)

Note that in each updating step, the size of the vector b is increased by one. In the r-th updating step, the value of the new element br is determined for the first time, whereas the values of the existing elements b1,…,br−1 are recomputed (updated).

Construction of a rank-R approximation in the form of Eq. (45) requires repeating pairs of a correction and an updating step for r=1,…,R. The algorithm is summarized below.

Algorithm 1: Non-intrusive construction of a rank-R approximation of Y with polynomial bases:

1. Set R0=M.
2. For r=1,…,R, repeat steps (a)-(e):
    - (a) Initialize: z(i)r=(1,…,1), i=1,…,M; Ir=0; Δˆerrr=∞.
    - (b) While Ir<Imax and Δˆerrr>Δˆerrmin, repeat steps i-iv:
        - i. Set Ir=Ir+1.
        - ii. For j=1,…,M, repeat steps A-B (correction step):
            - A. Determine z(j)r (Eq. (50)).
            - B. Use the current values of z(j)r to update v(j)r (Eq. (44)).
        - iii. Use the current functional forms of v(i)r, i=1,…,M, to update wr (Eq. (47)).
        - iv. Compute ˆerrr (Eq. (51)) and update Δˆerrr.
    - (c) Determine b={b1,…,br} (Eq. (52), updating step).
    - (d) Evaluate MLRAr at the points of the ED (Eq. (46)).
    - (e) Evaluate Rr at the points of the ED (Eq. (48)).

Note that the above algorithm relies on the solution of several small-size minimization problems: each iteration in a correction step involves $M$ minimization problems of size $p_j + 1$, $j = 1, \dots, M$ (usually small), whereas the $r$-th updating step involves one minimization problem of size $r$ (recall that small ranks are of interest in LRA). Because of the small number of unknowns, Eq. (50) and Eq. (52) can be efficiently solved with OLS, as shown in Konakli and Sudret (2015a). An alternative approach, employed in Chevreuil et al. (2013), is to substitute these equations with respective regularized problems.
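For illustration, the greedy correction-updating construction can be sketched in a few lines of Python. The following is a minimal sketch, not the reference implementation: it assumes uniform inputs on $[-1,1]$ with orthonormal Legendre bases, replaces the full stopping criterion by a fixed number of ALS iterations, and solves Eq. (50) and Eq. (52) with OLS via `numpy.linalg.lstsq`; the function names are our own.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def basis(x, p):
    # Orthonormal Legendre polynomials on [-1, 1] w.r.t. the uniform measure:
    # P~_k = sqrt(2k+1) P_k, returned as an (n, p+1) design matrix.
    return np.column_stack([np.sqrt(2 * k + 1) * legval(x, [0] * k + [1])
                            for k in range(p + 1)])

def build_lra(X, Y, R, p, n_iter=20):
    """Greedy rank-R construction: correction (ALS) + updating (OLS) steps."""
    n, M = X.shape
    Phi = [basis(X[:, j], p) for j in range(M)]   # per-dimension design matrices
    z, W = [], np.empty((n, 0))                   # z[l][j]: coefficients of v_l^{(j)}
    res = Y.copy()                                # residual R_{r-1} at the ED points
    for r in range(R):
        zr = [np.ones(p + 1) for _ in range(M)]   # initialize with unity values
        for _ in range(n_iter):                   # correction step (cf. Eq. (50))
            for j in range(M):
                c = np.prod([Phi[i] @ zr[i] for i in range(M) if i != j], axis=0)
                zr[j] = np.linalg.lstsq(c[:, None] * Phi[j], res, rcond=None)[0]
        w = np.prod([Phi[j] @ zr[j] for j in range(M)], axis=0)
        W = np.column_stack([W, w])
        b = np.linalg.lstsq(W, Y, rcond=None)[0]  # updating step (cf. Eq. (52))
        res = Y - W @ b                           # residual R_r for the next rank
        z.append(zr)
    return b, z
```

Since the toy target below is itself rank-one in the polynomial basis, a single rank-1 term suffices to reproduce it almost exactly.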

In a typical application, the optimal rank is not known a priori. As noted earlier, the progressive construction of LRA results in a set of decompositions of increasing rank. Thus, one may set $r = 1, \dots, R_{\max}$ in Step 2 of Algorithm 1, where $R_{\max}$ is a maximum allowable candidate rank, and at the end, select the optimal-rank approximation using error-based criteria. In the present work, we select the optimal rank $R \in \{1, \dots, R_{\max}\}$ based on the 3-fold cross-validation error, as proposed by Chevreuil et al. (2013) (see Section 3 for details on $k$-fold cross-validation). Konakli and Sudret (2015a) investigated the accuracy of this approach in a number of applications and showed that it leads to optimal or nearly optimal rank selection in terms of the relative generalization error estimated with a large validation set.
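A minimal sketch of the 3-fold cross-validation error used for such error-based selection is given below; the `fit`/`predict` interface and all function names are illustrative assumptions, and the error is normalized by the empirical variance of the responses.

```python
import numpy as np

def cv3_error(X, Y, fit, predict, rng=None):
    """3-fold cross-validation error, normalized by the empirical variance of Y.

    `fit(X, Y)` returns a meta-model; `predict(model, X)` evaluates it.
    A candidate rank (or degree) is selected by minimizing this error.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    idx = rng.permutation(len(Y))
    folds = np.array_split(idx, 3)
    sse = 0.0
    for k in range(3):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(3) if j != k])
        model = fit(X[train], Y[train])
        sse += np.sum((Y[test] - predict(model, X[test])) ** 2)
    return sse / np.sum((Y - Y.mean()) ** 2)

# Example: select a 1-D polynomial degree among candidates
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, 200)
Y = X ** 3 - 0.5 * X
errs = {p: cv3_error(X, Y, lambda Xt, Yt, p=p: np.polyfit(Xt, Yt, p),
                     lambda m, Xt: np.polyval(m, Xt)) for p in (1, 2, 3, 4)}
```

For the noiseless cubic above, the cross-validation error drops by many orders of magnitude once the candidate degree reaches 3, which is the behavior the rank/degree selection exploits.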

### 5.2 Computation of Sobol’ sensitivity indices using low-rank tensor approximations

In this section, we consider the LRA meta-model in Eq. (45) and derive analytical expressions for the Sobol’ indices in terms of the polynomial coefficients $z_{k,l}^{(i)}$ and the normalizing constants $b_l$. To this end, we rely on the definitions of the first-order and total Sobol’ indices for a subset $X_u$ of $X$ given in Eq. (19) and Eq. (20), respectively. In the specific case $u = \{i\}$, these equations yield the first-order and total Sobol’ indices of a single variable $X_i$.

Employing the definition of variance and the property:

 $\mathbb{E}\big[\tilde{\mathcal{M}}_u(X_u)\big] = \mathbb{E}\big[\mathbb{E}[\mathcal{M}(X) \mid X_u]\big] = \mathbb{E}[\mathcal{M}(X)],$ (53)

Eq. (19) is equivalently written as:

 $\tilde{S}_u = \dfrac{\mathbb{E}\big[(\tilde{\mathcal{M}}_u(X_u))^2\big] - \big(\mathbb{E}[\mathcal{M}(X)]\big)^2}{\mathrm{Var}[\mathcal{M}(X)]}.$ (54)

We next derive analytical expressions for each of the quantities in the right-hand side of Eq. (54), considering $\mathcal{M}^{\mathrm{LRA}}$ in place of $\mathcal{M}$. In these derivations, we make use of the equalities:

 $\mathbb{E}\big[v_l^{(i)}(X_i)\big] = z_{0,l}^{(i)}$ (55)

and

 $\mathbb{E}\big[v_l^{(i)}(X_i)\, v_{l'}^{(i)}(X_i)\big] = \sum_{k=0}^{p_i} z_{k,l}^{(i)}\, z_{k,l'}^{(i)},$ (56)

which follow from Eq. (44) in conjunction with the orthonormality condition in Eq. (27).

Employing the expression for the LRA response in Eq. (43), its mean value can be computed as:

 $\mathbb{E}\big[\mathcal{M}^{\mathrm{LRA}}(X)\big] = \sum_{l=1}^{R} b_l \left( \prod_{i=1}^{M} \mathbb{E}\big[v_l^{(i)}(X_i)\big] \right) = \sum_{l=1}^{R} b_l \left( \prod_{i=1}^{M} z_{0,l}^{(i)} \right).$ (57)

In the first part of the above equation, we have utilized the property that the mean of a product of independent random variables equals the product of their means; note that because the components of $X$ are assumed independent, the quantities $v_l^{(i)}(X_i)$, $i = 1, \dots, M$, are also independent. The second part of Eq. (57) follows directly from Eq. (55).

The mean-square of the LRA response in Eq. (43) can be computed as:

 $\mathbb{E}\big[(\mathcal{M}^{\mathrm{LRA}}(X))^2\big] = \sum_{l=1}^{R} \sum_{l'=1}^{R} b_l\, b_{l'} \left( \prod_{i=1}^{M} \mathbb{E}\big[v_l^{(i)}(X_i)\, v_{l'}^{(i)}(X_i)\big] \right) = \sum_{l=1}^{R} \sum_{l'=1}^{R} b_l\, b_{l'} \prod_{i=1}^{M} \left( \sum_{k=0}^{p_i} z_{k,l}^{(i)}\, z_{k,l'}^{(i)} \right).$ (58)

The first part of the above equation relies on the independence of the quantities $v_l^{(i)}(X_i)$, $i = 1, \dots, M$, which again allows writing the mean of their product as the product of their means, whereas the second part follows directly from Eq. (56).

Substituting Eq. (57) and Eq. (58) into the definition of the variance, we have:

 $\mathrm{Var}\big[\mathcal{M}^{\mathrm{LRA}}(X)\big] = \sum_{l=1}^{R} \sum_{l'=1}^{R} b_l\, b_{l'} \prod_{i=1}^{M} \left( \sum_{k=0}^{p_i} z_{k,l}^{(i)}\, z_{k,l'}^{(i)} \right) - \left( \sum_{l=1}^{R} b_l \prod_{i=1}^{M} z_{0,l}^{(i)} \right)^2.$ (59)
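To illustrate, the sketch below (our own helper names; uniform inputs on $[-1,1]$ with orthonormal Legendre bases and arbitrary illustrative coefficients) evaluates Eq. (57) and Eq. (59) directly from the coefficients and checks the results against Monte Carlo estimates:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def onb(x, k):
    # orthonormal Legendre polynomial of degree k on [-1, 1] (uniform weight)
    return np.sqrt(2 * k + 1) * legval(x, [0] * k + [1])

def lra_mean(b, z):
    # Eq. (57): E[M^LRA] = sum_l b_l prod_i z_{0,l}^{(i)}
    return sum(bl * np.prod([zi[0] for zi in zl]) for bl, zl in zip(b, z))

def lra_var(b, z):
    # Eqs. (58)-(59): double sum over rank terms, inner products per dimension
    m2 = sum(bl * bm * np.prod([zi @ zj for zi, zj in zip(zl, zm)])
             for bl, zl in zip(b, z) for bm, zm in zip(b, z))
    return m2 - lra_mean(b, z) ** 2

# rank-2, 3-dimensional example with arbitrary coefficients z[l][i][k]
rng = np.random.default_rng(1)
b = [1.5, -0.4]
z = [[rng.standard_normal(3) for _ in range(3)] for _ in range(2)]

# Monte Carlo check of the closed-form moments
X = rng.uniform(-1, 1, (200000, 3))
Y = sum(bl * np.prod([sum(zi[k] * onb(X[:, i], k) for k in range(3))
                      for i, zi in enumerate(zl)], axis=0)
        for bl, zl in zip(b, z))
```

The coefficient-based mean and variance agree with the sample moments of `Y` up to Monte Carlo error, with no model evaluations beyond those used to build the coefficients.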

In the following, we derive an analytical expression for $\mathbb{E}\big[(\tilde{\mathcal{M}}_u^{\mathrm{LRA}}(X_u))^2\big]$, beginning with the case $u = \{i\}$. Noting that $X_i$ represents a constant in the quantity $\mathbb{E}[\mathcal{M}^{\mathrm{LRA}}(X) \mid X_i]$, substitution of Eq. (43) into the definition of $\tilde{\mathcal{M}}_i^{\mathrm{LRA}}$ (Eq. (6)) yields:

 $\tilde{\mathcal{M}}_i^{\mathrm{LRA}}(X_i) = \sum_{l=1}^{R} b_l\, v_l^{(i)}(X_i) \left( \prod_{j \neq i} \mathbb{E}\big[v_l^{(j)}(X_j)\big] \right) = \sum_{l=1}^{R} b_l \left( \prod_{j \neq i} z_{0,l}^{(j)} \right) v_l^{(i)}(X_i).$ (60)

Using the above expression, we have:

 $\mathbb{E}\big[(\tilde{\mathcal{M}}_i^{\mathrm{LRA}}(X_i))^2\big] = \sum_{l=1}^{R} \sum_{l'=1}^{R} b_l\, b_{l'} \left( \prod_{j \neq i} z_{0,l}^{(j)}\, z_{0,l'}^{(j)} \right) \left( \sum_{k=0}^{p_i} z_{k,l}^{(i)}\, z_{k,l'}^{(i)} \right).$ (61)

It is straightforward to extend Eq. (60) to a general subset $u$, considering that the components of $X_u$ represent constants in the quantity $\mathbb{E}[\mathcal{M}^{\mathrm{LRA}}(X) \mid X_u]$. We thereby obtain:

 $\tilde{\mathcal{M}}_u^{\mathrm{LRA}}(X_u) = \sum_{l=1}^{R} b_l \left( \prod_{i \in u} v_l^{(i)}(X_i) \right) \left( \prod_{j \notin u} \mathbb{E}\big[v_l^{(j)}(X_j)\big] \right) = \sum_{l=1}^{R} b_l \left( \prod_{j \notin u} z_{0,l}^{(j)} \right) \left( \prod_{i \in u} v_l^{(i)}(X_i) \right)$ (62)

and finally:

 $\mathbb{E}\big[(\tilde{\mathcal{M}}_u^{\mathrm{LRA}}(X_u))^2\big] = \sum_{l=1}^{R} \sum_{l'=1}^{R} b_l\, b_{l'} \left( \prod_{j \notin u} z_{0,l}^{(j)}\, z_{0,l'}^{(j)} \right) \prod_{i \in u} \left( \sum_{k=0}^{p_i} z_{k,l}^{(i)}\, z_{k,l'}^{(i)} \right).$ (63)

Eq. (57), Eq. (59) and Eq. (63) provide all the elements required to compute the first-order index $\tilde{S}_u$ for any subvector $X_u$ according to Eq. (54). As explained above, Eq. (63) simplifies to Eq. (61) for the case of a single variable $X_i$.

Respective analytical expressions for the total Sobol’ indices can be derived using Eq. (20), which is equivalently written as:

 $S_u^T = 1 - \dfrac{\mathbb{E}\big[(\tilde{\mathcal{M}}_{\setminus u}(X_{\setminus u}))^2\big] - \big(\mathbb{E}[\mathcal{M}(X)]\big)^2}{\mathrm{Var}[\mathcal{M}(X)]}.$ (64)

By considering that $\setminus u$ is the complementary set of $u$ with respect to $\{1, \dots, M\}$, we can obtain an expression for $\mathbb{E}\big[(\tilde{\mathcal{M}}_{\setminus u}^{\mathrm{LRA}}(X_{\setminus u}))^2\big]$ by simply interchanging the index sets $u$ and $\setminus u$ in Eq. (63):

 $\mathbb{E}\big[(\tilde{\mathcal{M}}_{\setminus u}^{\mathrm{LRA}}(X_{\setminus u}))^2\big] = \sum_{l=1}^{R} \sum_{l'=1}^{R} b_l\, b_{l'} \left( \prod_{i \in u} z_{0,l}^{(i)}\, z_{0,l'}^{(i)} \right) \prod_{j \notin u} \left( \sum_{k=0}^{p_j} z_{k,l}^{(j)}\, z_{k,l'}^{(j)} \right).$ (65)

For the special case $u = \{i\}$, Eq. (65) reduces to:

 $\mathbb{E}\big[(\tilde{\mathcal{M}}_{\setminus i}^{\mathrm{LRA}}(X_{\setminus i}))^2\big] = \sum_{l=1}^{R} \sum_{l'=1}^{R} b_l\, b_{l'}\, z_{0,l}^{(i)}\, z_{0,l'}^{(i)} \prod_{j \neq i} \left( \sum_{k=0}^{p_j} z_{k,l}^{(j)}\, z_{k,l'}^{(j)} \right).$ (66)

Note that by appropriate combinations of first-order indices for single variables and groups of variables, we can obtain any higher-order index representing the effect from the interaction between a set of variables. For instance, the second-order index representing the effect from the interaction between $X_i$ and $X_j$ can be obtained as $S_{ij} = \tilde{S}_{ij} - S_i - S_j$. A general expression can be derived by dividing each part of Eq. (7) by $\mathrm{Var}[\mathcal{M}(X)]$, leading to:

 $S_u = \tilde{S}_u - \sum_{\substack{v \subset u \\ v \neq u}} S_v.$ (67)
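The above expressions can be implemented directly. The following minimal sketch (function names are our own) computes first-order and total indices from the LRA coefficients via Eq. (54), Eq. (63) and Eqs. (64)-(65), and is verified on the additive toy model $Y = X_1 + X_2$ with $X_i \sim \mathcal{U}(-1,1)$, written as an exact rank-2 LRA, for which both the first-order and the total index of each variable equal 0.5:

```python
import numpy as np

def lra_mean(b, z):
    # Eq. (57)
    return sum(bl * np.prod([zi[0] for zi in zl]) for bl, zl in zip(b, z))

def lra_var(b, z):
    # Eqs. (58)-(59)
    m2 = sum(bl * bm * np.prod([zi @ zj for zi, zj in zip(zl, zm)])
             for bl, zl in zip(b, z) for bm, zm in zip(b, z))
    return m2 - lra_mean(b, z) ** 2

def cond_msq(b, z, u):
    # Eq. (63): E[(M~_u)^2]; inner products over i in u, z_0 products elsewhere
    M = len(z[0])
    return sum(bl * bm
               * np.prod([zl[i] @ zm[i] for i in u])
               * np.prod([zl[j][0] * zm[j][0] for j in range(M) if j not in u])
               for bl, zl in zip(b, z) for bm, zm in zip(b, z))

def first_order(b, z, u):
    # Eq. (54), assembled from Eqs. (57), (59) and (63)
    return (cond_msq(b, z, u) - lra_mean(b, z) ** 2) / lra_var(b, z)

def total(b, z, u):
    # Eqs. (64)-(65): mean square over the complementary set
    comp = [j for j in range(len(z[0])) if j not in u]
    return 1 - (cond_msq(b, z, comp) - lra_mean(b, z) ** 2) / lra_var(b, z)

# Y = X1 + X2 as an exact rank-2 LRA: x = (1/sqrt(3)) * P~_1(x) on [-1, 1]
s = 1 / np.sqrt(3)
b = [1.0, 1.0]
z = [[np.array([0.0, s]), np.array([1.0, 0.0])],   # term 1: x1 * 1
     [np.array([1.0, 0.0]), np.array([0.0, s])]]   # term 2: 1 * x2
print(first_order(b, z, [0]), total(b, z, [0]))    # both equal 0.5
```

All indices are obtained by post-processing `b` and `z` alone, with no further model evaluations, which is the computational advantage exploited in this work.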

## 6 Example applications

In this section, we perform global sensitivity analysis for the responses of four models with different characteristics and dimensionality. The first two models correspond to analytical functions, namely a common benchmark function in uncertainty quantification and a structural-mechanics model; both of these analytical models have a rank-one structure. The subsequent applications involve finite-element models, namely a structural-mechanics model and a heat-conduction model with thermal conductivity described by a random field. The LRA-based Sobol’ indices are compared to respective PCE-based indices and to reference indices based on the actual model. The latter are computed either analytically or by using a Monte Carlo Simulation (MCS) approach with large samples, following Janon et al. (2013). The computations of the PCE- and MCS-based indices are performed with the software UQLab (Marelli and Sudret, 2014; Marelli et al., 2015).

The meta-models are built using two types of EDs, based on Sobol quasi-random sequences (Niederreiter, 1992) and the so-called maximin Latin Hypercube Sampling (LHS). Each ED of the latter type represents the best among 5 random LHS-based designs, where the selection criterion is maximization of the minimum distance between the points. The LRA meta-models are built by implementing Algorithm 1 in Section 5.1. A common polynomial degree $p_1 = \dots = p_M = p$ is considered, with its optimal value selected as the one leading to the minimum 3-fold cross-validation error (see Konakli and Sudret (2015a) for an investigation of the accuracy of the approach). The involved minimization problems are solved using the OLS method. In building the PCE meta-models, a candidate basis is first determined by employing a hyperbolic truncation scheme; then, a sparse expansion is obtained by evaluating the PCE coefficients with the hybrid LAR method. The optimal combination of the maximum total polynomial degree and the parameter $q$ controlling the hyperbolic truncation is selected as the one leading to the minimum corrected leave-one-out error (see Section 4.1 for details).
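The maximin-LHS selection described above amounts to the following minimal sketch (illustrative helper names): draw several random Latin hypercube designs and retain the one maximizing the minimum inter-point distance.

```python
import numpy as np

def lhs(n, M, rng):
    # One random Latin hypercube design in [0, 1]^M: a random permutation of
    # the n strata per column, with a uniform jitter within each stratum.
    return (np.argsort(rng.uniform(size=(n, M)), axis=0)
            + rng.uniform(size=(n, M))) / n

def min_dist(D):
    # minimum pairwise Euclidean distance between design points
    d = np.linalg.norm(D[:, None, :] - D[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min()

rng = np.random.default_rng(0)
candidates = [lhs(50, 3, rng) for _ in range(5)]
best = max(candidates, key=min_dist)   # maximin selection among 5 LHS designs
```

Each retained design keeps the one-point-per-stratum property of LHS in every dimension while spreading the points more evenly than a typical random LHS draw.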

### 6.1 Analytical functions

#### 6.1.1 Sobol function

In the first example, we consider the Sobol function:

 $Y = \prod_{i=1}^{M} \dfrac{|4 X_i - 2| + c_i}{1 + c_i},$ (68)

where $X_i$, $i = 1, \dots, M$, are independent random variables uniformly distributed over $[0, 1]$ and $c_i$, $i = 1, \dots, M$, are non-negative constants. We examine the case with the constants given by Kersaudy et al. (2015):

 c={