Structural reliability analysis for p-boxes using multi-level meta-models

In modern engineering, computer simulations are a popular tool to analyse, design, and optimize systems. Furthermore, concepts of uncertainty and the related reliability analysis and robust design are of increasing importance. Hence, an efficient quantification of uncertainty is an important aspect of the engineer’s workflow. In this context, the characterization of uncertainty in the input variables is crucial. In this paper, input variables are modelled by probability-boxes, which account for both aleatory and epistemic uncertainty. Two types of probability-boxes are distinguished: free and parametric (also called distributional) p-boxes. The use of probability-boxes generally increases the complexity of structural reliability analyses compared to traditional probabilistic input models. In this paper, the complexity is handled by two-level approaches which use Kriging meta-models with adaptive experimental designs at different levels of the structural reliability analysis. For both types of probability-boxes, the extensive use of meta-models allows for an efficient estimation of the failure probability at a limited number of runs of the performance function. The capabilities of the proposed approaches are illustrated through a benchmark analytical function and two realistic engineering problems.

Keywords: Structural reliability analysis – Kriging – probability-boxes – design enrichment – uncertainty quantification

2 Introduction

Nowadays, computer simulations are a popular engineering tool to design systems of ever-increasing complexity and to determine their performance. These simulations aim at reproducing the physical process and hence provide a solution to the underlying governing equations. As an example, finite element models have become a standard tool in modern civil and mechanical engineering. Such models tend to exploit all the available computing power, meaning that a single run of the model can take hours of computation or more.

At the same time, uncertainties are intrinsic to these physical processes and to simulation models. Typically, the model parameters are not perfectly known, but inferred from data and then modelled probabilistically. However, a common situation in practice is to have a limited budget for data acquisition and to end up with scarce datasets. This introduces epistemic uncertainty (lack of knowledge) alongside aleatory uncertainty (natural variability). Then, a more general framework is required to characterize the uncertainty in model parameters beyond probability theory.

Imprecise probabilities describe a set of methodologies that are more general than the well-established probability theory. They include Dempster-Shafer’s theory of evidence (Dempster, 1967; Shafer, 1976), Bayesian hierarchical models (Gelman, 2006; Gelman et al., 2009), possibility theory (De Cooman et al., 1995; Dubois and Prade, 1988), probability-boxes (Ferson and Ginzburg, 1996; Ferson and Hajagos, 2004), info-gap models (Ben-Haim, 2006; Kanno and Takewaki, 2006), and fuzzy sets (Zadeh, 1978; Möller and Beer, 2004). Among the various methods, probability-boxes provide a simple structure to distinguish aleatory and epistemic uncertainty naturally.

In this context, engineers are concerned with the reliability of their designs, which is assessed by structural reliability analysis. The goal of such an analysis is the estimation of the failure probability, which describes the probability that the system performs below a minimal acceptable threshold value. Typical values of the failure probability in engineering are very small, so that failures are rare events. Altogether, the complexity of structural reliability analysis comprises three main factors: (i) a high-fidelity computer simulation, (ii) uncertainty in the parameters, and (iii) the estimation of rare events (i.e. the estimation of small failure probabilities).

In the presence of probabilistic variables, a number of methods are known to estimate failure probabilities (Lemaire, 2009; Morio et al., 2014), including Monte Carlo simulation, subset simulation (Au and Beck, 2001, 2003), importance sampling, the first/second order reliability method (FORM/SORM) (Breitung, 1989; Hohenbichler and Rackwitz, 1988), and line sampling (Koutsourelakis et al., 2004; De Angelis et al., 2015). In the presence of imprecise probabilities, however, fewer publications are available, some of which are mentioned here. Eldred and Swiler (2009) and Hurtado (2013) use nested Monte Carlo simulations to cope with imprecise probabilities. Alvarez and Hurtado (2014) combine subset simulation with random sets. Zhang et al. (2015) generalize FORM/SORM for Dempster-Shafer’s theory of evidence. A combination of line sampling and imprecise probabilities is discussed in De Angelis et al. (2015). These publications typically require a large number of model evaluations and hence rely on an inexpensive-to-evaluate performance function to make the analysis tractable.

Meta-models are a way to reduce the number of model evaluations and hence to allow for expensive-to-evaluate limit-state functions. Meta-modelling techniques approximate the expensive-to-evaluate limit-state function by a fast-to-evaluate surrogate based on a relatively small number of runs of the original one. The meta-model is then used in the structural reliability analysis. In particular, meta-models with adaptive experimental designs have proven efficient in the estimation of failure probabilities. As an example, Bichon et al. (2008), Echard et al. (2011), and Dubourg et al. (2013) combine Kriging (Gaussian process modelling) (Santner et al., 2003) with structural reliability analysis in the context of probability theory.

However, only a few authors combine meta-modelling techniques with imprecise probabilities for structural reliability analysis. Balesdent et al. (2014) and Morio and Balesdent (2016) use an adaptive-Kriging importance sampling algorithm for distributions with interval-valued parameters. A combination of Karhunen-Loève expansions and evidence theory is presented in Oberguggenberger (2014). Zhang et al. (2014) use radial basis expansions in the context of evidence theory.

Based on these developments, a set of algorithms is proposed in this paper to further increase the efficiency of structural reliability analysis in the presence of epistemic uncertainties. In particular, free and parametric p-boxes are presented separately and compared later in this paper. The application of Kriging meta-models at several stages of the imprecise structural reliability analysis promises a reduction of the overall computational costs at the same level of accuracy.

The paper is organized as follows. Section 3 defines probability-boxes as a generalization of probability theory. In Section 4, the basic structural reliability analysis is set up and Monte Carlo simulation is presented as a solution algorithm. Then, the proposed algorithms to solve imprecise structural reliability analyses are discussed in Sections 5 and 6 for free and parametric probability-boxes, respectively. Finally, the performance of the proposed algorithms is assessed on a benchmark analytical function and realistic engineering problem settings, presented in Section 7.

3 Probability-boxes

3.1 Definition

A probability space is defined as the triplet $(\Omega, \mathcal{F}, \mathbb{P})$, where $\Omega$ is the universal event space, $\mathcal{F}$ is the associated $\sigma$-algebra, and $\mathbb{P}$ is the probability measure. A random variable $X$ defines the mapping $X:\ \Omega \to \mathcal{D}_X \subset \mathbb{R},\ \omega \mapsto X(\omega)$, where $\omega \in \Omega$ is an elementary event, $\mathcal{D}_X$ is the support of $X$, and $\mathbb{R}$ is the set of real numbers. In the context of probability theory, $X$ is typically characterized by its cumulative distribution function (CDF) $F_X$ and, in the case of continuous variables, by its probability density function (PDF) $f_X$. The CDF describes the probability of the event $\{X \le x\}$, i.e. $F_X(x) = \mathbb{P}(X \le x)$. Its PDF is defined as $f_X(x) = \mathrm{d}F_X(x)/\mathrm{d}x$ and describes the likelihood of $X$ lying in the neighbourhood of a given value $x$.

Probability theory is based on the assumption that the probability measure is known precisely, resulting in a well-defined CDF. In cases of incomplete and/or sparse knowledge of $X$, however, aleatory uncertainty (natural variability) and epistemic uncertainty (lack of knowledge) can occur simultaneously. Epistemic uncertainty may not be considered adequately alongside aleatory uncertainty in the context of probability theory (Beer et al., 2016). Hence, a more general formulation of the probability measure, such as in Dempster-Shafer’s theory of evidence and probability-boxes, is required to capture the uncertainty fully.

Dempster-Shafer’s theory of evidence accounts for the epistemic uncertainty by replacing the probability measure with two values: belief and plausibility (Dempster, 1967; Shafer, 1976). Belief measures the minimum amount of probability that must be associated to an event, whereas plausibility measures the maximum amount of probability that could be associated to the same event. A probability-box (p-box) is a special case of the theory of evidence, considering only the nested set of events $\{X \le x\}$, $x \in \mathbb{R}$, which in turn is related to the definition of the CDF of a probabilistic random variable.

P-boxes define the CDF of a variable $X$ by lower and upper bounds, denoted by $\underline{F}_X$ and $\overline{F}_X$, respectively (Ferson and Ginzburg, 1996; Ferson and Hajagos, 2004). For any value $x$, the true-but-unknown CDF value lies within these bounds, i.e. $\underline{F}_X(x) \le F_X(x) \le \overline{F}_X(x)$. The two boundary curves enclose an intermediate area, hence the name probability-box. In the literature, two types of p-boxes are distinguished, namely free and parametric p-boxes, which are discussed in the following.

3.2 Free p-box

Free p-boxes are defined as introduced in the previous section, i.e. only by lower and upper bounds of the CDF. This implies that the true CDF can have any arbitrary shape as long as it fulfils the characteristics of a generic CDF and lies within the bounds of the p-box. Figure 1(A) shows the boundary curves of a free p-box and a number of possible realizations of the true CDF. Because the shape of the true CDF is not specified, different types of curves are possible, including non-smooth ones (see realization #3 in Figure 1(A)).

((A)) Free p-box
((B)) Parametric p-box
Figure 1: Definition of p-box types and realizations of possible CDFs

3.3 Parametric p-box

Parametric p-boxes (a.k.a. distributional p-boxes) are defined as cumulative distribution function families whose parameters are known only within intervals:

$$F_X(x) \in \left\{ F_X(x \mid \boldsymbol{\theta}),\ \boldsymbol{\theta} \in \mathcal{D}_{\boldsymbol{\Theta}} \right\},$$

where $\mathcal{D}_{\boldsymbol{\Theta}}$ denotes the interval domain of the distribution parameters $\boldsymbol{\theta}$ of dimension $n_{\theta}$. When the intervals are independent of each other, $\mathcal{D}_{\boldsymbol{\Theta}}$ denotes a hyper-rectangular domain.

Parametric p-boxes allow for a clear separation of aleatory and epistemic uncertainty: aleatory uncertainty is represented by the distribution function family, whereas epistemic uncertainty is represented by the intervals of the distribution parameters. However, parametric p-boxes are more restrictive than free p-boxes because of the required knowledge of the distribution family. In other words, free p-boxes can be interpreted as a generalization of parametric p-boxes in which the distribution family has an unlimited number of parameters.

Parametric p-boxes are related to different concepts in the field of imprecise probabilities, including Bayesian hierarchical models and fuzzy distributions. The parametric p-box construction resembles a Bayesian hierarchical model (Gelman, 2006) in which the distribution of the hyper-parameters is replaced by an interval. From a different point of view, the parametric p-box is a fuzzy distribution function whose membership function of the distribution parameters is equal to one for $\boldsymbol{\theta} \in \mathcal{D}_{\boldsymbol{\Theta}}$ and zero otherwise (Möller and Beer, 2004).

Figure 1(B) illustrates a parametric p-box consisting of a Gaussian distribution family with mean value $\mu \in [\underline{\mu}, \overline{\mu}]$ and standard deviation $\sigma \in [\underline{\sigma}, \overline{\sigma}]$. The lower and upper boundary curves of the parametric p-box drawn in Figure 1(B) are obtained by:

$$\underline{F}_X(x) = \min_{\boldsymbol{\theta} \in \mathcal{D}_{\boldsymbol{\Theta}}} F_X(x \mid \boldsymbol{\theta}), \qquad \overline{F}_X(x) = \max_{\boldsymbol{\theta} \in \mathcal{D}_{\boldsymbol{\Theta}}} F_X(x \mid \boldsymbol{\theta}),$$

where $\boldsymbol{\theta} = (\mu, \sigma)$. Hence, the boundary curves shown in Figure 1(B) have the characteristics of a CDF but are not necessarily realizations associated with a single parameter combination $\boldsymbol{\theta}$; they can consist of sections of different realizations. As an example, the lower boundary CDF is a combination of realization #2 and realization #3.
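The boundary-curve construction above can be traced numerically by sweeping the parameter intervals on a grid and taking point-wise extremes of the family CDF. The following sketch uses made-up parameter intervals (mean in $[-0.5, 0.5]$, standard deviation in $[0.7, 1.0]$); the grid resolution controls how closely the envelope is approximated:

```python
import numpy as np
from scipy.stats import norm

# hypothetical parameter intervals: mu in [-0.5, 0.5], sigma in [0.7, 1.0]
mus = np.linspace(-0.5, 0.5, 21)
sigmas = np.linspace(0.7, 1.0, 11)
x = np.linspace(-4.0, 4.0, 201)

# evaluate every member of the distribution family on the parameter grid
cdfs = np.stack([norm.cdf(x, loc=m, scale=s) for m in mus for s in sigmas])

# boundary curves of the parametric p-box: point-wise extremes over the grid
F_lower = cdfs.min(axis=0)
F_upper = cdfs.max(axis=0)
```

Note that `F_lower` and `F_upper` are valid (non-decreasing) CDF-shaped curves, yet each can switch between different parameter combinations along the x-axis, as discussed above.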

4 Structural reliability analysis

4.1 Limit-state function

A limit-state function $g$ describes the performance of a process or system as a function of a set of input parameters. The deterministic mapping is defined as:

$$g:\ \boldsymbol{x} \in \mathcal{D}_{\boldsymbol{X}} \subset \mathbb{R}^M \mapsto y = g(\boldsymbol{x}) \in \mathbb{R}, \qquad (3)$$

where $\boldsymbol{x}$ is an $M$-dimensional vector defined in the input domain $\mathcal{D}_{\boldsymbol{X}}$ and $y$ is the output scalar indicating the performance. The sign of $y$, and hence of $g$, determines whether an input vector corresponds to a safe system ($g(\boldsymbol{x}) > 0$) or a failed system ($g(\boldsymbol{x}) \le 0$). The limit-state function is interpreted as a black box of which only the input vector $\boldsymbol{x}$ and the corresponding response $y$ are accessible.

4.2 Failure probability

Due to uncertainties in the definition of the input vector, $\boldsymbol{X}$ is modelled traditionally by probability distributions and in this paper by p-boxes. Considering a probabilistic input vector $\boldsymbol{X}$ with joint PDF $f_{\boldsymbol{X}}$, the failure probability is defined as the probability that the limit-state function takes non-positive values:

$$P_f = \mathbb{P}\left(g(\boldsymbol{X}) \le 0\right),$$

which can be recast as an integral:

$$P_f = \int_{\mathcal{D}_f} f_{\boldsymbol{X}}(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}, \qquad (5)$$

where $\mathcal{D}_f = \{\boldsymbol{x}:\ g(\boldsymbol{x}) \le 0\}$ is the failure domain and $f_{\boldsymbol{X}}$ is the joint probability density function of the input vector $\boldsymbol{X}$. The integration in Eq. (5) cannot be performed analytically in the general case where the failure domain has a complex shape. Hence, numerical estimates for the failure probability were developed, such as Monte Carlo simulation. Considering a large sample set of $\boldsymbol{X}$ denoted by $\mathcal{S} = \{\boldsymbol{x}^{(1)}, \ldots, \boldsymbol{x}^{(N)}\}$, the failure probability can be estimated by:

$$\widehat{P}_f = \frac{N_f}{N} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}_{\left[g(\boldsymbol{x}^{(i)}) \le 0\right]},$$

where $N_f$ is the number of failure samples (i.e. samples for which $g(\boldsymbol{x}^{(i)}) \le 0$), $N$ is the total number of samples, and $\mathbb{1}_{[\cdot]}$ is the indicator function, equal to one for a true subscript statement and zero otherwise. Note that the use of the indicator function transforms the structural reliability problem into a classification problem where only the sign of $g$ is relevant.
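The crude Monte Carlo estimator can be sketched in a few lines. The limit-state function and input model below are hypothetical placeholders, chosen linear so that an exact reference value is available:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def g(x):
    # hypothetical linear limit-state: failure when x1 + x2 > 5
    return 5.0 - x[:, 0] - x[:, 1]

N = 200_000
X = rng.standard_normal((N, 2))   # probabilistic input model: independent N(0, 1)
failed = g(X) <= 0                # indicator of the failure domain
pf_hat = failed.mean()            # estimator N_f / N

# exact value available here because g is linear: X1 + X2 ~ N(0, 2)
pf_exact = norm.cdf(-5.0 / np.sqrt(2.0))
```

Because the estimator only uses the sign of `g`, replacing `g` by any model with the same zero-level set leaves `pf_hat` unchanged, which is the classification viewpoint mentioned above.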

In the context of p-boxes, the joint probability density function $f_{\boldsymbol{X}}$ is not defined deterministically. Hence, Eq. (5) leads to a range of failure probabilities $[\underline{P}_f, \overline{P}_f]$, the bounds of which are defined as:

$$\underline{P}_f = \min_{f_{\boldsymbol{X}}} \int_{\mathcal{D}_f} f_{\boldsymbol{X}}(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}, \qquad \overline{P}_f = \max_{f_{\boldsymbol{X}}} \int_{\mathcal{D}_f} f_{\boldsymbol{X}}(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x},$$

where $\min$ (resp. $\max$) means that the optimization is carried out over all PDFs $f_{\boldsymbol{X}}$ that satisfy the constraints related to the definition of the underlying p-box. This imprecise structural reliability analysis (ISRA) is not straightforward, as it involves an optimization with a multi-dimensional distribution as the argument of the objective function. In the following sections, solution algorithms for the two types of p-boxes, i.e. free and parametric p-boxes, are discussed.

5 Imprecise structural reliability analysis (ISRA) for free p-boxes

5.1 Problem conversion

For the case of free p-boxes, the imprecise structural reliability problem can be recast as two structural reliability problems in the probabilistic sense (Zhang et al., 2010; Zhang, 2012; Schöbi and Sudret, 2015a). These can be solved by conventional structural reliability methods, such as Monte Carlo simulations.

Consider a random vector $\boldsymbol{C}$ of length $M$ which follows a uniform distribution in the unit-hypercube domain $[0,1]^M$. The component $C_i$ describes the CDF value of $X_i$. For the sake of simplicity, the input variables are assumed to be statistically independent throughout this paper. When $X_i$ is modelled by a free p-box, each realization $c_i$ can be transformed into an interval in the $X_i$-domain by applying the inverse CDFs of the p-box bounds:

$$\underline{x}_i(c_i) = \overline{F}_{X_i}^{-1}(c_i), \qquad \overline{x}_i(c_i) = \underline{F}_{X_i}^{-1}(c_i).$$

For a given realization $\boldsymbol{c}$ of $\boldsymbol{C}$, let us denote the corresponding interval domain by $[\underline{\boldsymbol{x}}(\boldsymbol{c}), \overline{\boldsymbol{x}}(\boldsymbol{c})]$. The boundary curves of the p-box of the response variable are then obtained by optimization:

$$\underline{y}(\boldsymbol{c}) = \min_{\boldsymbol{x} \in [\underline{\boldsymbol{x}}(\boldsymbol{c}),\, \overline{\boldsymbol{x}}(\boldsymbol{c})]} g(\boldsymbol{x}), \qquad \overline{y}(\boldsymbol{c}) = \max_{\boldsymbol{x} \in [\underline{\boldsymbol{x}}(\boldsymbol{c}),\, \overline{\boldsymbol{x}}(\boldsymbol{c})]} g(\boldsymbol{x}). \qquad (9)$$
Note that, due to the definition of $\underline{y}$ and $\overline{y}$, taking the maximum response value naturally leads to a realization of the lower-bound CDF of the response, and vice versa. Based on those boundary curves and the definition of the failure domain $\mathcal{D}_f$, the range of failure probabilities $[\underline{P}_f, \overline{P}_f]$ is obtained by:

$$\underline{P}_f = \mathbb{P}\left(\overline{y}(\boldsymbol{C}) \le 0\right), \qquad \overline{P}_f = \mathbb{P}\left(\underline{y}(\boldsymbol{C}) \le 0\right). \qquad (10)$$
Then, the bounds of the failure probability can be estimated by sampling the probabilistic input vector $\boldsymbol{C}$. Note that the two boundary values can be computed independently.

A drawback of this problem conversion and the subsequent Monte Carlo simulation, however, is the high computational cost, which has three distinct causes. Firstly, the computational model can be expensive to evaluate in many applications, such as finite element models. Secondly, the number of model evaluations may be large due to the optimization operations in Eq. (9), which must be carried out for each sample $\boldsymbol{c}$. Thirdly, failure probabilities are generally low in engineering applications, which requires a large number of samples in Monte Carlo simulation to estimate them accurately: approximately $N \approx 100/P_f$ samples are needed for a 10 % coefficient of variation of the estimate. In order to cope with these three main contributions to the total computational effort, meta-models are introduced at different levels of the ISRA for free p-boxes.
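Under the simplifying assumption of a monotone limit-state function (so that the optima in Eq. (9) sit at the corners of each interval box), the double-loop procedure of this section can be sketched as follows; the p-box bounds and the limit-state function are hypothetical:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# hypothetical free p-box per input: bounds given by two Gaussian CDFs;
# the inverse of the UPPER cdf gives the lower x-bound and vice versa
ppf_of_lower_cdf = norm(loc=1.0, scale=1.0).ppf
ppf_of_upper_cdf = norm(loc=-1.0, scale=1.0).ppf

def g(x):
    # hypothetical limit-state: failure when x1 + x2 > 6
    return 6.0 - x[..., 0] - x[..., 1]

N = 20_000
C = rng.uniform(size=(N, 2))             # samples of the unit-hypercube vector C
x_lo = ppf_of_upper_cdf(C)               # lower interval bounds per sample
x_hi = ppf_of_lower_cdf(C)               # upper interval bounds per sample

# g is decreasing in each input, so the optima of Eq. (9) sit at the corners
y_min = g(x_hi)
y_max = g(x_lo)

# Eq. (10): bounds on the failure probability
pf_low = np.mean(y_max <= 0)
pf_high = np.mean(y_min <= 0)
```

For a non-monotone `g`, the corner evaluation must be replaced by a constrained optimization over each box, which is exactly the cost the meta-models of the following sections aim to reduce.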

5.2 Adaptive Kriging Monte Carlo simulation


5.2.1 Kriging

Consider again the limit-state function $g$ in Eq. (3) and a probabilistic input vector $\boldsymbol{X}$. Kriging is a meta-modelling approach that considers the limit-state function to be a realization of a Gaussian process (Santner et al., 2003):

$$\widehat{g}(\boldsymbol{x}) = \boldsymbol{f}(\boldsymbol{x})^{\mathsf{T}}\boldsymbol{\beta} + \sigma Z(\boldsymbol{x}, \omega),$$

where $\boldsymbol{f}(\boldsymbol{x})^{\mathsf{T}}\boldsymbol{\beta}$ is the mean value of the Gaussian process, consisting of regression functions $\boldsymbol{f}$ and a vector of coefficients $\boldsymbol{\beta}$, $\sigma^2$ is the variance of the Gaussian process, and $Z(\boldsymbol{x}, \omega)$ is a zero-mean, unit-variance stationary Gaussian process, characterized by an autocorrelation function $R(\boldsymbol{x}, \boldsymbol{x}'; \boldsymbol{\rho})$ and its hyper-parameter(s) $\boldsymbol{\rho}$.

Training a Kriging meta-model is based on a set of input realizations $\mathcal{X} = \{\chi_1, \ldots, \chi_n\}$, called the experimental design, and the corresponding values of the limit-state function $\boldsymbol{\mathcal{Y}} = \{g(\chi_1), \ldots, g(\chi_n)\}$. Then, the Kriging parameters are obtained by the generalized least-squares solution (Santner et al., 2003):

$$\widehat{\boldsymbol{\beta}} = \left(\mathbf{F}^{\mathsf{T}} \mathbf{R}^{-1} \mathbf{F}\right)^{-1} \mathbf{F}^{\mathsf{T}} \mathbf{R}^{-1} \boldsymbol{\mathcal{Y}}, \qquad \widehat{\sigma}^2 = \frac{1}{n}\left(\boldsymbol{\mathcal{Y}} - \mathbf{F}\widehat{\boldsymbol{\beta}}\right)^{\mathsf{T}} \mathbf{R}^{-1} \left(\boldsymbol{\mathcal{Y}} - \mathbf{F}\widehat{\boldsymbol{\beta}}\right),$$

where $\mathbf{R}_{ij} = R(\chi_i, \chi_j; \boldsymbol{\rho})$ is the correlation matrix and $\mathbf{F}_{ij} = f_j(\chi_i)$ is the information matrix. When the correlation parameters $\boldsymbol{\rho}$ are unknown, their values can be estimated by maximum likelihood or cross-validation (Bachoc, 2013).

Given an arbitrary point $\boldsymbol{x}$ of the input domain, the prediction of the limit-state function is a Gaussian variable with mean value and variance:

$$\mu_{\widehat{g}}(\boldsymbol{x}) = \boldsymbol{f}(\boldsymbol{x})^{\mathsf{T}} \widehat{\boldsymbol{\beta}} + \boldsymbol{r}(\boldsymbol{x})^{\mathsf{T}} \mathbf{R}^{-1} \left(\boldsymbol{\mathcal{Y}} - \mathbf{F}\widehat{\boldsymbol{\beta}}\right),$$

$$\sigma_{\widehat{g}}^2(\boldsymbol{x}) = \widehat{\sigma}^2 \left(1 - \boldsymbol{r}(\boldsymbol{x})^{\mathsf{T}} \mathbf{R}^{-1} \boldsymbol{r}(\boldsymbol{x}) + \boldsymbol{u}(\boldsymbol{x})^{\mathsf{T}} \left(\mathbf{F}^{\mathsf{T}} \mathbf{R}^{-1} \mathbf{F}\right)^{-1} \boldsymbol{u}(\boldsymbol{x})\right),$$

where $r_i(\boldsymbol{x}) = R(\boldsymbol{x}, \chi_i; \boldsymbol{\rho})$ and $\boldsymbol{u}(\boldsymbol{x}) = \mathbf{F}^{\mathsf{T}} \mathbf{R}^{-1} \boldsymbol{r}(\boldsymbol{x}) - \boldsymbol{f}(\boldsymbol{x})$. The Kriging predictor interpolates the experimental design, meaning that $\mu_{\widehat{g}}(\chi_i) = g(\chi_i)$ and $\sigma_{\widehat{g}}^2(\chi_i) = 0$ for $i = 1, \ldots, n$.
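A minimal ordinary-Kriging implementation illustrates the correlation structure, the GLS trend estimate, and the interpolation property. This is a sketch with a fixed correlation length and, for brevity, the simplified (simple-Kriging) variance that omits the trend-estimation correction term of the full formula:

```python
import numpy as np

def corr(A, B, theta):
    # Gaussian autocorrelation: R(a, b) = exp(-theta * ||a - b||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-theta * d2)

def fit_ordinary_kriging(X, y, theta=1.0, nugget=1e-10):
    n = len(X)
    R = corr(X, X, theta) + nugget * np.eye(n)
    Rinv = np.linalg.inv(R)
    ones = np.ones(n)
    beta = ones @ Rinv @ y / (ones @ Rinv @ ones)  # GLS estimate of the constant trend
    res = y - beta
    sigma2 = res @ Rinv @ res / n                  # process variance estimate

    def predict(Xnew):
        r = corr(Xnew, X, theta)                   # cross-correlations r(x)
        mu = beta + r @ Rinv @ res                 # Kriging mean
        # simplified variance, omitting the trend-estimation term
        var = sigma2 * np.clip(1.0 - np.einsum('ij,jk,ik->i', r, Rinv, r), 0.0, None)
        return mu, np.sqrt(var)

    return predict

# toy 1-D experimental design
X_ed = np.linspace(0.0, 1.0, 6)[:, None]
y_ed = np.sin(4.0 * X_ed[:, 0])
predict = fit_ordinary_kriging(X_ed, y_ed, theta=10.0)
mu_ed, sd_ed = predict(X_ed)                 # predictions at the design points
mu_new, sd_new = predict(np.array([[0.1]]))  # prediction between design points
```

At the design points the mean reproduces the data and the standard deviation (almost) vanishes, while between points the predictive uncertainty grows; this is exactly the signal exploited by the adaptive enrichment below.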

5.2.2 Adaptive experimental designs

Kriging is an interpolation algorithm which approximates the limit-state function most accurately close to the points of the experimental design. However, these points are not necessarily suitable to approximate the boundary between the safety and failure domains, i.e. the limit-state surface $g(\boldsymbol{x}) = 0$, and hence to estimate the failure probability. An active learning algorithm can then improve the estimation of the failure probability by enriching the experimental design in a guided way.

Adaptive Kriging Monte Carlo simulation (AK-MCS) combines Kriging with an enrichment scheme and Monte Carlo simulation, as proposed in Echard et al. (2011) and further discussed in Schöbi et al. (2016); Marelli et al. (2015). The main steps of AK-MCS are described here:

  1. Generate a small initial experimental design $\mathcal{X}$ and compute the corresponding responses of the limit-state function $\boldsymbol{\mathcal{Y}} = \{g(\chi),\ \chi \in \mathcal{X}\}$.

  2. Train a Kriging meta-model $\widehat{g}$ based on $\{\mathcal{X}, \boldsymbol{\mathcal{Y}}\}$. In this paper, ordinary Kriging meta-models are used, based on a constant (yet unknown) trend.

  3. Generate a large set of candidate samples $\mathcal{S}$ from the input distribution and evaluate the responses of the meta-model, i.e. $\mu_{\widehat{g}}$ and $\sigma_{\widehat{g}}$, at each candidate sample.

  4. Compute the failure probability estimate and its confidence bounds:

$$\widehat{P}_f = \mathbb{P}\left(\mu_{\widehat{g}}(\boldsymbol{X}) \le 0\right), \qquad \widehat{P}_f^{\pm} = \mathbb{P}\left(\mu_{\widehat{g}}(\boldsymbol{X}) \mp k\,\sigma_{\widehat{g}}(\boldsymbol{X}) \le 0\right),$$

    where the probabilities are estimated with the candidate set $\mathcal{S}$. In practice, $k = 1.96$ is selected, which approximately corresponds to a point-wise 95 % confidence interval on the prediction of $g$.

  5. Check the convergence criterion for the estimate of the failure probability:

$$\frac{\widehat{P}_f^{+} - \widehat{P}_f^{-}}{\widehat{P}_f} \le \epsilon_{P_f}.$$

    It has been shown that a small threshold $\epsilon_{P_f}$ gives accurate results at reasonable costs (Schöbi et al., 2016). If the criterion is fulfilled, terminate the adaptive algorithm and return the last meta-model $\widehat{g}$. Otherwise, continue with the next step.

  6. Compute the probability of misclassification for every $\boldsymbol{x}^{(i)} \in \mathcal{S}$ (Bect et al., 2012):

$$P_m\left(\boldsymbol{x}^{(i)}\right) = \Phi\left(-\frac{\left|\mu_{\widehat{g}}(\boldsymbol{x}^{(i)})\right|}{\sigma_{\widehat{g}}(\boldsymbol{x}^{(i)})}\right),$$

    where $\Phi$ is the CDF of the standard normal variable. Select the best next sample $\boldsymbol{x}^*$ to be added to the experimental design as the one that maximizes the probability of misclassification:

$$\boldsymbol{x}^* = \arg\max_{\boldsymbol{x}^{(i)} \in \mathcal{S}} P_m\left(\boldsymbol{x}^{(i)}\right).$$
  7. Add $\boldsymbol{x}^*$ to the experimental design and compute the corresponding response $g(\boldsymbol{x}^*)$. Then, go back to step 2 to update the meta-model with the enriched experimental design.

After the termination of the adaptive Kriging iterations, the failure probability is estimated based on the last meta-model and the Monte Carlo sample .
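The quantities driving the enrichment (steps 4 and 6) can be sketched as follows, with stand-in arrays in place of actual Kriging predictions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# stand-ins for Kriging predictions on a Monte Carlo candidate set S
mu = rng.normal(size=10_000)                           # mean predictions
sd = np.abs(rng.normal(0.3, 0.1, size=10_000)) + 1e-6  # std predictions (> 0)

# step 4: failure-probability estimate and its confidence bounds (k = 1.96)
pf = np.mean(mu <= 0)
pf_plus = np.mean(mu - 1.96 * sd <= 0)
pf_minus = np.mean(mu + 1.96 * sd <= 0)

# step 6: probability of misclassifying the sign of g at each candidate
pm = norm.cdf(-np.abs(mu) / sd)
i_next = int(np.argmax(pm))   # index of the best next sample to evaluate
```

The misclassification probability is largest where the predicted mean is close to zero relative to the predictive standard deviation, so the enrichment concentrates evaluations near the limit-state surface.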

5.3 Meta-modelling the limit-state surface

A first meta-model is applied to the limit-state function $g$, in particular to model the limit-state surface $g(\boldsymbol{x}) = 0$. However, in order to conduct an AK-MCS analysis, a probabilistic input vector is required. When the input is modelled by free p-boxes in ISRA, an auxiliary input vector $\tilde{\boldsymbol{X}}$ is created by condensation (Schöbi and Sudret, 2016). In this paper, the auxiliary input variables are characterized by the average value of the boundary curves of the p-box:

$$F_{\tilde{X}_i}(x) = \frac{\underline{F}_{X_i}(x) + \overline{F}_{X_i}(x)}{2}. \qquad (21)$$

The auxiliary distribution covers the shape of the p-box. Hence, the resulting meta-model is accurate in the neighbourhood of the part of the limit-state surface that contributes to the value of the failure probability in the p-box setting.
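Sampling from an auxiliary distribution defined as the average of the boundary curves can be done by numerically inverting the averaged CDF; the p-box bounds here are hypothetical Gaussian curves:

```python
import numpy as np
from scipy import optimize, stats

# hypothetical free p-box bounds: two Gaussian CDFs (upper curve above lower)
F_lo = stats.norm(loc=1.0, scale=1.0).cdf    # lower CDF bound
F_hi = stats.norm(loc=-1.0, scale=1.0).cdf   # upper CDF bound

def F_aux(x):
    # auxiliary CDF: average of the two boundary curves
    return 0.5 * (F_lo(x) + F_hi(x))

def F_aux_inv(c):
    # numerical inverse for inverse-transform sampling
    return optimize.brentq(lambda x: F_aux(x) - c, -12.0, 12.0)

rng = np.random.default_rng(3)
samples = np.array([F_aux_inv(c) for c in rng.uniform(size=1000)])
```

The resulting samples spread over the whole support of the p-box, which is what makes the first-level meta-model accurate wherever the true CDF could place probability mass.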

5.4 Meta-modelling the limit-state surfaces $\underline{y} = 0$ and $\overline{y} = 0$

The second level of meta-modelling is applied to the functions $\underline{y}(\boldsymbol{c})$ and $\overline{y}(\boldsymbol{c})$ for the estimation of the bounds of the failure probability (see Eqs. (9) and (10)). By using the first-level approximation of $g$, Eq. (9) may be replaced by:

$$\underline{y}(\boldsymbol{c}) \approx \min_{\boldsymbol{x} \in [\underline{\boldsymbol{x}}(\boldsymbol{c}),\, \overline{\boldsymbol{x}}(\boldsymbol{c})]} \mu_{\widehat{g}}(\boldsymbol{x}), \qquad \overline{y}(\boldsymbol{c}) \approx \max_{\boldsymbol{x} \in [\underline{\boldsymbol{x}}(\boldsymbol{c}),\, \overline{\boldsymbol{x}}(\boldsymbol{c})]} \mu_{\widehat{g}}(\boldsymbol{x}), \qquad (22)$$

where $\mu_{\widehat{g}}$ is the mean predictor of the meta-model resulting from the first-level approximation. Therefore, the bounds of the failure probability $\underline{P}_f$ and $\overline{P}_f$ can be estimated by two independent AK-MCS analyses of $\overline{y}$ and $\underline{y}$, respectively (see also Eq. (10)). Note that here the input consists of the probabilistic vector $\boldsymbol{C}$, which makes the definition of an auxiliary distribution as in Section 5.3 unnecessary.

To improve the convergence of the AK-MCS algorithms, however, an auxiliary distribution may still be beneficial. In fact, the auxiliary distribution defined in Eq. (21) is suitable when the p-boxes are unbounded, as shown in Schöbi and Sudret (2015a). In that case, an isoprobabilistic transform maps $\boldsymbol{C}$ to the auxiliary vector $\tilde{\boldsymbol{X}}$ and vice versa, and the bounds of the failure probability are estimated by the two AK-MCS analyses in the auxiliary space.

5.5 Two-level meta-modelling approach

Figure 2 summarizes the procedure, which consists of two sequential meta-modelling levels connected by a model conversion step. The first-level meta-model surrogates the original limit-state function $g$, whereas the two second-level ones surrogate the limit-state functions for estimating the lower and upper bounds of the failure probability. Both levels use auxiliary random variables, i.e. $\tilde{\boldsymbol{X}}$ on the first level and $\boldsymbol{C}$ (or its transform) on the second level.

Figure 2: Imprecise structural reliability analysis for free p-boxes – flowchart of the two-level meta-modelling algorithm

6 Imprecise structural reliability analysis (ISRA) for parametric p-boxes

6.1 Nested algorithm

The definition of parametric p-boxes indicates a hierarchical model where the distribution of $\boldsymbol{X}$ is defined conditionally on its distribution parameters $\boldsymbol{\theta}$. Hence, nested simulation algorithms can be applied in the context of ISRA, as discussed in Eldred and Swiler (2009) and Schöbi and Sudret (2015b). In other words, the bounds of the failure probability can be found by estimating the conditional failure probabilities $P_f(\boldsymbol{\theta})$ obtained with the conditional input distribution $F_{\boldsymbol{X}}(\boldsymbol{x} \mid \boldsymbol{\theta})$, $\boldsymbol{\theta} \in \mathcal{D}_{\boldsymbol{\Theta}}$, and minimizing/maximizing them with respect to $\boldsymbol{\theta}$ to obtain the bounds $\underline{P}_f = \min_{\boldsymbol{\theta}} P_f(\boldsymbol{\theta})$ and $\overline{P}_f = \max_{\boldsymbol{\theta}} P_f(\boldsymbol{\theta})$.
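A brute-force version of this nested idea, with a hypothetical limit-state function and parameter intervals, brackets the failure probability by evaluating the conditional probability on a parameter grid; re-using the same random seed across parameter values mimics the sample recycling discussed in Section 6.3:

```python
import numpy as np

def g(x):
    # hypothetical limit-state: failure when x1 + x2 > 6
    return 6.0 - x[:, 0] - x[:, 1]

def pf_conditional(mu, sigma, N=50_000, seed=0):
    # conditional failure probability for X_i ~ N(mu, sigma), i = 1, 2;
    # the fixed seed plays the role of re-using the same MC population
    rng = np.random.default_rng(seed)
    X = rng.normal(mu, sigma, size=(N, 2))
    return np.mean(g(X) <= 0)

# hypothetical interval-valued distribution parameters (parametric p-box)
mus = np.linspace(-1.0, 1.0, 5)
sigmas = np.linspace(0.8, 1.2, 5)
pfs = np.array([[pf_conditional(m, s) for s in sigmas] for m in mus])
pf_min, pf_max = pfs.min(), pfs.max()
```

The grid search is only tractable for a handful of parameters; the EGO algorithm described below replaces it by an adaptive optimization on a Kriging surrogate of $P_f(\boldsymbol{\theta})$.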

Making use of the same tools as in Section 5 for free p-boxes, the bounds of the failure probability can be found with the help of Kriging meta-models. The conditional failure probabilities are estimated by AK-MCS, whereas the efficient global optimization (EGO) algorithm (Jones et al., 1998) is used to optimize over $\mathcal{D}_{\boldsymbol{\Theta}}$. Figure 3 illustrates the main components of the proposed algorithm. In the following two sections, AK-MCS and EGO are discussed in more depth.

Figure 3: Imprecise structural reliability analysis for parametric p-boxes – overview of the nested algorithm

6.2 AK-MCS for conditional failure probabilities

In the context of parametric p-boxes, the core task consists of efficiently estimating the conditional failure probability $P_f(\boldsymbol{\theta})$ by AK-MCS. Given a parameter vector $\boldsymbol{\theta} \in \mathcal{D}_{\boldsymbol{\Theta}}$ and consequently the conditional input distribution $F_{\boldsymbol{X}}(\boldsymbol{x} \mid \boldsymbol{\theta})$, AK-MCS can be applied to estimate the conditional failure probability as described in Section 5.2.2.

6.3 Adaptive Kriging for the lower bound

EGO is a global optimization algorithm based on Kriging meta-models with a design enrichment strategy similar to AK-MCS (Echard et al., 2011; Jones et al., 1998). In the context of ISRA for parametric p-boxes, EGO is used to find the extreme values of the failure probability by optimizing over the distribution parameters $\boldsymbol{\theta}$. The main steps for finding the minimum failure probability are discussed here:

  1. Given the distribution parameter domain $\mathcal{D}_{\boldsymbol{\Theta}}$, generate a small initial experimental design of parameter vectors.

  2. Compute the corresponding conditional failure probabilities by AK-MCS. Note that for the first evaluation of such a conditional failure probability, the procedure of Section 5.2.2 is applicable one-to-one with the conditional distribution $F_{\boldsymbol{X}}(\boldsymbol{x} \mid \boldsymbol{\theta})$. For the subsequent evaluations, however, the limit-state function evaluations of the previous AK-MCS analyses (i.e. the samples of the final experimental design) are kept as the initial experimental design. The difference between two parameter samples then lies solely in a modified conditional distribution and hence a modified Monte Carlo population. Due to the similarity of the MC populations, the number of additional limit-state function evaluations can be kept small (see also the application in Section 7.2).

  3. Train a Kriging meta-model based on the parameter samples and the associated conditional failure probabilities. This meta-model approximates the conditional failure probability $P_f(\boldsymbol{\theta})$ as a function of $\boldsymbol{\theta}$.

  4. Search for the optimal sample $\boldsymbol{\theta}^*$ to be added to the experimental design for minimizing the failure probability:

$$\boldsymbol{\theta}^* = \arg\max_{\boldsymbol{\theta} \in \mathcal{D}_{\boldsymbol{\Theta}}} EI(\boldsymbol{\theta}), \qquad (23)$$

    where the expected improvement is defined as (Jones et al., 1998; Mockus et al., 1978):

$$EI(\boldsymbol{\theta}) = \left(P_{f,\min} - \mu_{\widehat{P}_f}(\boldsymbol{\theta})\right) \Phi(z) + \sigma_{\widehat{P}_f}(\boldsymbol{\theta})\,\varphi(z), \qquad z = \frac{P_{f,\min} - \mu_{\widehat{P}_f}(\boldsymbol{\theta})}{\sigma_{\widehat{P}_f}(\boldsymbol{\theta})},$$

    where $EI(\boldsymbol{\theta})$ is the expected improvement for minimizing $P_f(\boldsymbol{\theta})$, $P_{f,\min}$ is the current minimum failure probability in the experimental design, $\mu_{\widehat{P}_f}$ and $\sigma_{\widehat{P}_f}$ are the Kriging mean and standard deviation, and $\Phi$ and $\varphi$ are the CDF and PDF of a standard normal variable, respectively. As explained in Jones et al. (1998), the expected improvement allows for a balance between local and global search.

  5. Check the convergence criterion proposed by Jones et al. (1998), applied to the minimum estimate of the failure probability:

$$\frac{EI(\boldsymbol{\theta}^*)}{P_{f,\min}} \le \epsilon_{EI}. \qquad (25)$$

    If the criterion is fulfilled, terminate the optimization algorithm and return the last estimate of the minimum failure probability $\underline{\widehat{P}}_f = P_{f,\min}$. Otherwise, continue with Step 6. Note that a small threshold value $\epsilon_{EI}$ was identified as a reliable choice for the applications in Section 7.

  6. Add the best next sample to the experimental design according to Eq. (23).

  7. Evaluate the conditional failure probability corresponding to $\boldsymbol{\theta}^*$ and add it to the experimental design. Note that the limit-state function evaluations computed in previous AK-MCS iterations can be recycled: they are used as the initial experimental design in the AK-MCS analysis estimating $P_f(\boldsymbol{\theta}^*)$ (see also Step 2 for further details). Finally, continue with Step 3.
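The expected-improvement computation at the heart of Step 4 can be sketched as a stand-alone function; the candidate predictions below are made-up values:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sd, y_best):
    """EI for minimization: E[max(y_best - Y, 0)] with Y ~ N(mu, sd)."""
    mu = np.asarray(mu, dtype=float)
    sd = np.asarray(sd, dtype=float)
    z = (y_best - mu) / sd
    return (y_best - mu) * norm.cdf(z) + sd * norm.pdf(z)

# made-up Kriging predictions of conditional failure probabilities
mu_pf = np.array([1e-3, 5e-4, 2e-3])   # predicted means
sd_pf = np.array([1e-4, 3e-4, 1e-4])   # predicted standard deviations
ei = expected_improvement(mu_pf, sd_pf, y_best=8e-4)
i_next = int(np.argmax(ei))            # candidate maximizing the EI
```

The first EI term rewards candidates predicted to improve on the current best (exploitation), while the second rewards large predictive uncertainty (exploration), which is the local/global balance mentioned above.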

6.4 Adaptive Kriging for the upper bound

The upper boundary value of the failure probability can be estimated by an EGO algorithm analogously to the lower boundary value as follows. The best next sample is determined by replacing Eq. (23) by:

$$\boldsymbol{\theta}^* = \arg\max_{\boldsymbol{\theta} \in \mathcal{D}_{\boldsymbol{\Theta}}} \overline{EI}(\boldsymbol{\theta}), \qquad (26)$$

where the expected improvement for maximization is defined by:

$$\overline{EI}(\boldsymbol{\theta}) = \left(\mu_{\widehat{P}_f}(\boldsymbol{\theta}) - P_{f,\max}\right) \Phi(\overline{z}) + \sigma_{\widehat{P}_f}(\boldsymbol{\theta})\,\varphi(\overline{z}), \qquad \overline{z} = \frac{\mu_{\widehat{P}_f}(\boldsymbol{\theta}) - P_{f,\max}}{\sigma_{\widehat{P}_f}(\boldsymbol{\theta})}, \qquad (27)$$

where $P_{f,\max}$ is the current maximum failure probability in the experimental design. Furthermore, the algorithm is terminated with a stopping criterion similar to Eq. (25):

$$\frac{\overline{EI}(\boldsymbol{\theta}^*)}{P_{f,\max}} \le \epsilon_{EI}. \qquad (28)$$

When this criterion is not fulfilled, the best next sample $\boldsymbol{\theta}^*$ is added to the experimental design. After termination of the iterative algorithm, the maximum failure probability is estimated by $\overline{\widehat{P}}_f = P_{f,\max}$.

6.5 Joint optimization

The optimizations for the minimum and the maximum failure probability are discussed separately in Sections 6.3 and 6.4, respectively. However, Steps 4 and 5 of the EGO algorithm allow for a simultaneous optimization of the two bounds of the failure probability. For simultaneous optimization, both boundary values are estimated with the same meta-model; in each iteration of the EGO algorithm, the two candidate samples are determined by Eqs. (23) and (26), respectively. Depending on the two stopping criteria, both candidates are added if Eqs. (25) and (28) are both not fulfilled, only the minimization candidate is added if only Eq. (28) is fulfilled, and only the maximization candidate is added if only Eq. (25) is fulfilled. If both criteria are fulfilled, the iterative algorithm is terminated.

For separate optimization, the lower and upper bounds of the failure probability are estimated one after the other, as presented in Sections 6.3 and 6.4. The effect of the two optimization modes is discussed in Section 7.1.

6.6 Comparison to free p-boxes

ISRA for parametric p-boxes consists of two interconnected Kriging models, the experimental designs of which are enriched quasi-simultaneously, as shown in Figure 3. In the context of free p-boxes, in contrast, two levels of meta-models are computed sequentially (see Figure 2). Additionally, an optimization operation is required for free p-boxes, which increases the complexity and potentially the computational effort in this case. The increased complexity originates from the basic definition of the free p-box, which can be interpreted as a generalization of the parametric p-box, as discussed in Section 3.3.

Assuming, however, that the computational costs are dominated by the evaluation of the limit-state function, the total computational costs for estimating the boundary values of the failure probability may be similar for free and parametric p-boxes. The total number of limit-state function evaluations may be similar because the same method (i.e. AK-MCS) is used to surrogate the limit-state surface $g(\boldsymbol{x}) = 0$ in both cases.

7 Applications

7.1 Two-dimensional toy function

Problem statement

The first example is a simple two-dimensional problem used to illustrate the machinery of the proposed approaches. The limit-state function is defined as:


Failure is defined as $g(\boldsymbol{x}) \le 0$, and hence the failure probability is $P_f = \mathbb{P}(g(\boldsymbol{X}) \le 0)$. The two input variables are modelled by p-boxes. In the following, two cases are distinguished: (i) the input is modelled by free p-boxes and (ii) the input is modelled by parametric p-boxes. In the former case, the bounds of the free p-box are defined as:


where $\Phi(x; \mu, \sigma)$ denotes the CDF of a Gaussian distribution with mean value $\mu$ and standard deviation $\sigma$. For the parametric p-box case, the distribution parameters are chosen so that the boundary curves are the same as in the case of free p-boxes:



The settings for the ISRA are the following. The meta-models and Monte Carlo simulations are computed using the Matlab-based uncertainty quantification framework UQLab (Marelli and Sudret, 2014), including its implementation of AK-MCS (Marelli et al., 2015). All Kriging models are defined by a Gaussian autocorrelation function and a constant trend (a.k.a. ordinary Kriging). Small initial experimental designs are used for AK-MCS and EGO, generated by the Latin-hypercube sampling method (McKay et al., 1979) in the unit hypercube and a subsequent isoprobabilistic transform to the corresponding input domains. For free p-boxes, the auxiliary input distribution is defined by Eq. (21). For parametric p-boxes, the EGO stopping criterion of Eq. (25) is applied with a small threshold value. The failure probabilities are estimated with a large set of Monte Carlo samples. The analysis is repeated 50 times with different initial experimental designs in order to obtain statistically significant results.
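The initial-design generation described above (Latin-hypercube sampling in the unit hypercube followed by an isoprobabilistic transform) can be sketched with SciPy's `qmc` module; the marginals used for the transform are hypothetical standard Gaussians:

```python
import numpy as np
from scipy.stats import norm, qmc

# Latin-hypercube design of n = 12 points in the unit hypercube [0, 1]^2
sampler = qmc.LatinHypercube(d=2, seed=42)
U = sampler.random(n=12)

# isoprobabilistic transform to the physical space (hypothetical N(0, 1) marginals)
X0 = norm.ppf(U)
```

By construction, each one-dimensional projection of `U` places exactly one point per equiprobable stratum, which gives better space-filling than plain random sampling for small designs.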


Free p-boxes Figure 4 visualizes a typical realization of the ISRA for free p-boxes. Note that the stopping criterion in AK-MCS was deactivated in this simulation run to visualize the general behaviour of the method. Figure 4(A) illustrates the experimental design of the final first-level meta-model. The initial experimental design is marked by the grey squares. The additional samples (blue squares) group around the limit-state surface (black solid line) in the physical domain, which implies an efficient selection of additional samples. Further, Figures 4(B) and 4(C) show the constrained optimization domains of Eq. (22) in the physical space for the experimental designs of the second-level meta-models. Again, the optimization domains are selected in an efficient manner, as seen from the positioning of the blue rectangles. For the minimum response in Figure 4(B), the minimum value of the limit-state function within each blue rectangle is achieved at a point which lies close to the limit-state surface; thus, the rectangles are mostly contained in the safety domain. Analogously, for the maximum response in Figure 4(C), the maximum value of the limit-state function within each blue rectangle is close to the limit-state surface; hence, the rectangles are mostly contained in the failure domain.

((A)) Experimental design for (first-level meta-model)
((B)) Optimization domains in for (second-level meta-model)
((C)) Optimization domains in for (second-level meta-model)
Figure 4: Two-dimensional toy function – free p-boxes – adaptive experimental designs of a single run of the analysis

Accounting for the 50 replications of the ISRA, the estimates of the failure probability are summarized in Table 1. The reference value of the failure probability (denoted by ) is obtained by an Importance Sampling analysis of and with a sample size of . The table shows that all three meta-models accurately estimate the corresponding failure probabilities in terms of the mean estimate. However, in terms of the coefficient of variation, the second-level meta-models are less accurate than the expected Monte Carlo sampling-related values: for the expected value is and for the expected value is . The large variation in is caused by the larger number of samples in the failure domain, and hence the larger length of the limit-state surface relevant for the failure probability estimation, compared to the auxiliary space . Selecting a different auxiliary distribution may reduce this phenomenon.
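The "expected Monte Carlo sampling-related values" referenced above follow from the standard coefficient of variation of the crude Monte Carlo estimator of a failure probability, CoV = sqrt((1 - Pf) / (N * Pf)). A short sketch (the numerical values are illustrative, not those of the table):

```python
import math

def mc_cov(pf, n):
    """Coefficient of variation of the crude Monte Carlo estimator of a
    failure probability pf with n samples: sqrt((1 - pf) / (n * pf))."""
    return math.sqrt((1.0 - pf) / (n * pf))

# illustrative: pf = 1e-3 estimated with 1e6 samples gives a CoV of about 3 %
cov = mc_cov(1e-3, 10**6)
```

The formula makes the comparison in the table quantitative: for a fixed sample size, rarer failure events yield a larger estimator variability.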

Table 1: Two-dimensional toy function – free p-boxes – failure probability estimation (results of 50 replications of the same analysis with different sample sets and initial experimental designs)

The lower part of Table 1 shows the number of samples in the experimental designs of the final meta-models. and denote the numbers of model evaluations on the first-level and second-level meta-models, respectively (averaged over the 50 replications of the full algorithm). is low due to the smooth shape of the limit-state surface. In fact, the addition of points on average is sufficient to model it with sufficient accuracy. On the second level, the number of samples is larger due to the shapes of and and the corresponding limit-state surfaces.

Parametric p-boxes A typical realization of the ISRA for parametric p-boxes is visualized in Figure 5(C). Figure 5(A) shows the experimental design of the final AK-MCS meta-model. Similar to Figure 4(A), the additional samples (red squares) cluster efficiently around the limit-state surface, which is represented by the solid black line. The experimental design of the EGO algorithm is shown in Figure 5(B), which illustrates the exploratory behaviour of the EGO algorithm. Despite the small number of added samples, the boundary values of the failure probability are estimated accurately, as seen in Figure 5(C). The left side of Figure 5(C) shows the evolution of the bounds of the failure probability using the initial experimental design (i.e. samples in are generated simultaneously and the corresponding are computed sequentially), whereas the right side shows them during each iteration of EGO. After iteration 3, the optimization algorithm converges to a stable value of the failure probability bounds.

((A)) Experimental design for (first-level meta-model)
((B)) Experimental design in (EGO)
((C)) Evolution of the estimate of the failure probability bounds (EGO)
Figure 5: Two-dimensional toy function – parametric p-boxes – single realization of the ISRA

The results of the 50 replications of the analysis are summarized in Table 2 in terms of failure probability estimates and experimental design sizes. Further, a distinction is made between using the EGO algorithm to simultaneously optimize for and (left block of values) and performing EGO separately for and (right block of values). Note that for the separate optimization case, samples from one optimization are not re-used in the other one. In terms of the failure probability, both settings result in accurate estimates. Moreover, the coefficient of variation is close to the expected values of and for and , respectively, for a Monte Carlo simulation with samples.
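EGO drives its sample selection by the expected improvement (EI) criterion, which balances exploitation of the current minimum with exploration of uncertain regions; the stopping threshold on EI mentioned in the settings refers to this quantity. A minimal sketch of EI for a minimization problem (following the classical formulation of Jones et al., 1998; the numerical inputs here are hypothetical):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_min):
    """Expected improvement for minimization: EI = (y_min - mu) * Phi(z)
    + sigma * phi(z), with z = (y_min - mu) / sigma, where mu and sigma are
    the Kriging predictor mean and standard deviation and y_min is the best
    observed value so far."""
    z = (y_min - mu) / sigma
    return (y_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# hypothetical example: current best value 0.0, two candidate predictions
ei = expected_improvement(np.array([0.0, -1.0]), np.array([1.0, 1.0]), 0.0)
```

A candidate whose predicted mean already undercuts the current best (here mu = -1.0) receives a larger EI than one at the same level (mu = 0.0); once the maximal EI over the domain falls below the threshold, EGO stops.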

Table 2: Two-dimensional toy function – parametric p-boxes – failure probability estimation (results of 50 replications of the same analysis with different sample sets and initial experimental designs)

Further, the number of evaluations of the true limit-state function is similar for the simultaneous and separate optimization cases. The number of samples added during AK-MCS is generally low. At the level of EGO, however, the simultaneous optimization requires more samples () than the separate optimization. Even when adding up the samples of the separate optimization case, is less than . Hence, the separate optimization is more efficient than the simultaneous optimization in this case. This behaviour has also been observed on other toy functions. Based on this observation, the remaining application examples are analysed using separate optimizations.

Comparison The algorithms for free and parametric p-boxes originate from two different paradigms: interval analysis and nested Monte Carlo simulations, respectively. However, the cost statistics show that the two proposed two-level meta-modelling algorithms perform similarly in terms of model evaluations. Free p-boxes required evaluations of the limit-state function, whereas parametric p-boxes required on average. This can be intuitively understood from the identical limit-state surface in both settings. Contrary to , the numbers of evaluations on the second-level meta-models, i.e. , are not comparable between the free and parametric p-box cases, due to the different algorithms used.

7.2 Single-degree-of-freedom oscillator

Problem setup

In order to visualize the effects of non-monotone functions on the failure probability, the following example is analysed. Consider the non-linear undamped single-degree-of-freedom (SDOF) oscillator sketched in Figure 6 (Echard et al., 2011, 2013; Schueremans and Van Gemert, 2005). The corresponding limit-state function reads:

\[
g(c_1, c_2, m, r, t_1, F_1) = 3r - \left| \frac{2F_1}{m\,\omega_0^2} \sin\!\left(\frac{\omega_0 t_1}{2}\right) \right|, \qquad \omega_0 = \sqrt{\frac{c_1 + c_2}{m}},
\]

where $m$ is the mass, $c_1, c_2$ are the spring constants of the primary and secondary springs, $r$ is the displacement at which the secondary spring yields, $t_1$ is the duration of the loading, $F_1$ is the amplitude of the force and $\omega_0$ is the natural frequency of the oscillator. Failure is defined as $g \leq 0$ and the associated failure probability is $P_f = \mathbb{P}\left(g \leq 0\right)$.
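As a concrete sketch, the SDOF limit-state function in the form used by the cited benchmark (Echard et al., 2011) can be implemented as follows; the input values below are hypothetical and serve only to illustrate the call.

```python
import numpy as np

def oscillator_limit_state(c1, c2, m, r, t1, F1):
    """Limit-state function of the non-linear undamped SDOF oscillator
    benchmark (Echard et al., 2011); failure corresponds to g <= 0."""
    omega0 = np.sqrt((c1 + c2) / m)  # natural frequency of the oscillator
    return 3.0 * r - np.abs(2.0 * F1 / (m * omega0**2)
                            * np.sin(omega0 * t1 / 2.0))

# hypothetical input values, for illustration only
g = oscillator_limit_state(c1=1.0, c2=0.1, m=1.0, r=0.5, t1=1.0, F1=1.0)
```

The sine term makes the function non-monotone in $t_1$, which is exactly the feature this example exploits to contrast free and parametric p-boxes.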

Figure 6: SDOF oscillator – geometry sketch and definition of the variables

The input vector is modelled by a mix of probabilistic variables and p-boxes, accounting for the different levels of knowledge. The description of the input variables is provided in Table 3. It is assumed that the spring stiffnesses and the mass are well known; hence, they are modelled by precise CDFs. On the other hand, knowledge of the yield displacement, the load duration, and the force amplitude is scarce; hence, these variables are modelled by p-boxes. The two cases of free and parametric p-boxes are distinguished and compared in the following. As seen in Table 3, the parametric p-box is characterized by a distribution function with an interval-valued mean value. For the case of free p-boxes, the same p-box boundary curves are used as for parametric p-boxes. Note that refers to a Gaussian distribution with mean value and standard deviation .
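For a Gaussian variable with interval-valued mean, the boundary curves of the corresponding p-box follow in closed form: since the CDF decreases with the mean, the upper bound is obtained with the lower endpoint of the mean interval and vice versa. A minimal Python sketch (the mean interval and standard deviation below are hypothetical):

```python
import numpy as np
from scipy.stats import norm

def gaussian_pbox_bounds(x, mu_lo, mu_hi, sigma):
    """CDF bounds of a Gaussian parametric p-box with interval-valued mean
    [mu_lo, mu_hi]: the CDF is decreasing in the mean, so the upper bound uses
    mu_lo and the lower bound uses mu_hi."""
    f_upper = norm.cdf(x, loc=mu_lo, scale=sigma)
    f_lower = norm.cdf(x, loc=mu_hi, scale=sigma)
    return f_lower, f_upper

# hypothetical p-box: mean in [0, 1], unit standard deviation
x = np.linspace(-3.0, 5.0, 9)
lo, up = gaussian_pbox_bounds(x, mu_lo=0.0, mu_hi=1.0, sigma=1.0)
```

These are exactly the curves used as boundary CDFs when the same p-box is treated as a free p-box, which is why the two models share identical envelopes but different admissible CDF families.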

Table 3: Oscillator – definition of the input vector for the cases of parametric and free p-boxes


The settings for the imprecise structural reliability analyses are kept the same as in Section 7.1, except for the size of the initial experimental designs. The size of the initial experimental designs for all meta-models (including EGO) is set to . Further, the threshold value in EI is set to . For free p-boxes, the auxiliary distributions for are defined by , , and , respectively.


Efficiency of the proposed approaches Table 4 summarizes the results of a single run of the ISRA. The estimates of the failure probability are compared to a reference Monte Carlo simulation obtained with samples and the exact limit-state function. The corresponding optimizations are performed using Matlab's genetic algorithm. All four failure probabilities are estimated accurately. Further, the number of evaluations of the limit-state function is low. The case of parametric p-boxes results in a larger compared to the free p-box case (). The reason for this is the samples added during the iterations of EGO. In comparison, the two levels of the free p-box analysis are independent and hence the total is lower.

Table 4: Oscillator – results of the ISRA (reference solution obtained by Monte Carlo simulation with samples)

The numbers of samples in the second-level meta-models vary more than those in the first-level meta-models. In the case of parametric p-boxes, a few samples are sufficient to find the optimal distribution parameters. In the case of free p-boxes, has the same order of magnitude as because the analysis is of the same type, i.e. AK-MCS. However, the difference in between estimating and is large, which reflects the different complexity of the limit-state surfaces and , respectively.

For the case of parametric p-boxes, Table 4 also shows the optimal mean values for , denoted by , which result from the EGO optimization. The results confirm the intuition: the maximal failure probability is obtained by the smallest yield displacement in combination with a long activation time and an eccentric force ( or ). On the other hand, the minimal failure probability is obtained by the largest yield displacement in combination with a short activation time and a concentric force ().

Evolution of experimental designs As mentioned above, the evolution of the experimental design differs between the two types of p-boxes. For parametric p-boxes, the details are shown in Figure 7(B), where the size of is plotted as a function of the iteration in EGO. Interestingly, the major part of the evaluations is required to estimate the first sample of EGO's initial experimental design. Only a few additional samples are evaluated in order to refine the conditional failure probability estimates. Hence, the recycling of limit-state function evaluations is an efficient tool to keep the computational costs at a low level.

((A)) Failure probability estimates
((B)) Experimental design size
Figure 7: Oscillator – parametric p-boxes – convergence of the ISRA

In the same figure, the evolution of the boundary values of the failure probability is also given (see Figure 7(A)). Interestingly, the failure probabilities evolve with each iteration in EGO, while at the same time remains constant. In other words, the limit-state surface is modelled accurately enough to estimate the extreme failure probabilities, i.e. for different realizations .

Effect of non-monotonicity on failure probability In this example, the two types of p-boxes have the same boundary curves, as seen in Table 3. If the limit-state function were monotone, the failure probability bounds would be identical: the same realizations of the input CDFs would lead to the boundary values of the failure probability.

In the case of the oscillator, though, the parametric and free p-box models result in significantly different failure probability bounds, as seen from Table 4. In fact, the imprecise failure probabilities of the free p-box case encapsulate those of the parametric p-box case. As free p-boxes are by definition more general than parametric p-boxes, more extreme failure probabilities are possible. In this example, free p-boxes result in a -times lower and a -times larger compared to the parametric p-boxes.

7.3 Two-dimensional truss structure

Problem statement

Hurtado (2013) introduced a two-dimensional linear-elastic truss structure, whose geometry is shown in Figure 8. The truss is subjected to seven loads which are modelled by p-boxes. The parametric p-boxes are defined by lognormal distributions with mean value and standard deviation . The geometry of the structure and the material properties are assumed constant. The modulus of elasticity is , whereas the cross-sectional area varies along the different bars: for the bars marked by , for the bars marked by , and for the remaining bars.
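Since the load p-boxes are specified by the mean and standard deviation of lognormal distributions, implementing them requires converting these moments to the parameters of the underlying Gaussian. A short sketch (the moment values are illustrative, not those of the benchmark):

```python
import math

def lognormal_params(mean, std):
    """Convert the mean and standard deviation of a lognormal variable to the
    parameters (lambda, zeta) of the underlying Gaussian, i.e. the mean and
    standard deviation of the variable's natural logarithm."""
    zeta = math.sqrt(math.log(1.0 + (std / mean) ** 2))
    lam = math.log(mean) - 0.5 * zeta**2
    return lam, zeta

# illustrative load moments (hypothetical values)
lam, zeta = lognormal_params(mean=100.0, std=15.0)
```

An interval-valued mean of the lognormal then maps to an interval of (lambda, zeta) pairs, which is how the parametric p-box enters the EGO optimization over distribution parameters.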

Figure 8: Truss structure – sketch of the geometry and definition of the input variables

In the following analysis, a second scenario is considered, where the former parametric p-boxes are modelled by free p-boxes. As in the other examples, the boundary curves of the p-boxes should coincide, hence:


In the context of structural reliability analysis, the limit-state function is defined as