mfEGRA: Multifidelity Efficient Global Reliability Analysis
Abstract
This paper develops mfEGRA, a multifidelity active learning method that uses data-driven, adaptively refined surrogates for locating the failure boundary in reliability analysis. This work addresses the prohibitive cost of reliability analysis using Monte Carlo sampling with expensive-to-evaluate high-fidelity models by using cheaper-to-evaluate approximations of the high-fidelity model. The method builds on the Efficient Global Reliability Analysis (EGRA) method, a surrogate-based method that uses adaptive sampling to refine a Gaussian process surrogate for failure boundary location using a single-fidelity model. Our method introduces a two-stage adaptive sampling criterion that uses a multifidelity Gaussian process surrogate to leverage multiple information sources with different fidelities. The method combines the expected feasibility criterion from EGRA with a one-step lookahead information gain to refine the surrogate around the failure boundary. The computational savings from mfEGRA depend on the discrepancy between the different models and on the cost of evaluating the different models relative to the high-fidelity model. We show that accurate estimation of reliability using mfEGRA leads to computational savings of around 50% for an analytical multimodal test problem and 24% for an acoustic horn problem, when compared to single-fidelity EGRA.

Keywords: active learning, adaptive sampling, probability of failure, contour location, classification, Gaussian process, kriging, multiple information source, EGRA, metamodel
1 Introduction
The presence of uncertainties in the manufacturing and operation of systems makes reliability analysis critical for system safety. The reliability analysis of a system requires estimating the probability of failure, which can be computationally prohibitive when the high-fidelity model is expensive to evaluate. In this work, we develop a method for efficient reliability estimation that leverages multiple sources of information with different fidelities to build a multifidelity approximation for the limit state function.
Reliability analysis for strongly nonlinear systems typically requires Monte Carlo sampling, which can incur substantial cost because of the numerous evaluations of expensive-to-evaluate high-fidelity models, as seen in Figure 1 (a). Several methods improve the convergence rate of Monte Carlo methods to decrease computational cost through variance reduction, such as importance sampling [1, 2], the cross-entropy method [3], and subset simulation [4, 5]. However, such methods are outside the scope of this paper and will not be discussed further. Another class of methods reduces the computational cost by using approximations of the failure boundary or of the entire limit state function. The popular methods in the first category are the first- and second-order reliability methods (FORM and SORM), which approximate the failure boundary with linear and quadratic approximations around the most probable failure point [6, 7]. FORM and SORM can be efficient for mildly nonlinear problems but cannot handle systems with multiple failure regions. The methods in the second category reduce computational cost by replacing the high-fidelity model evaluations in the Monte Carlo simulation with cheaper evaluations from adaptive surrogates of the limit state function, as seen in Figure 1 (b).
Estimating reliability requires accurately classifying samples as failed or safe, which in turn requires surrogates that accurately predict the limit state function around the failure boundary. Thus, the surrogates need to be refined only in the region of interest (in this case, around the failure boundary) and do not require globally accurate prediction of the limit state function. The development of sequential active learning methods for refining the surrogate around the failure boundary has been addressed in the literature using only a single high-fidelity information source. Such methods fall in the same category as adaptively refining surrogates for identifying stability boundaries, contour location, classification, sequential design of experiments (DOE) for a target region, etc. Typically, these methods use either Gaussian process (GP) surrogates or support vector machines (SVM). Adaptive SVM methods have been implemented for reliability analysis and contour location [8, 9, 10]. In this work, we focus on GP-based methods (sometimes referred to as kriging-based) that use the GP prediction mean and prediction variance to develop greedy and lookahead adaptive sampling methods. Efficient Global Reliability Analysis (EGRA) adaptively refines the GP surrogate around the failure boundary by sequentially adding points that have maximum expected feasibility [11]. A weighted integrated mean square error criterion for refining the kriging surrogate was developed by Picheny et al. [12]. Echard et al. [13] proposed an adaptive kriging method that refines the surrogate within a restricted set of samples defined by a Monte Carlo simulation. Dubourg et al. [14] proposed a population-based adaptive sampling technique for refining the kriging surrogate around the failure boundary. One-step lookahead strategies for GP surrogate refinement for estimating the probability of failure were proposed by Bect et al. [15] and Chevalier et al. [16].
A review of surrogate-based methods for reliability analysis can be found in Ref. [17]. However, all the methods mentioned above use a single source of information: the high-fidelity model, as illustrated in Figure 1 (b). This work presents a novel multifidelity active learning method that adaptively refines the surrogate around the limit state function failure boundary using multiple sources of information, thus further reducing the computational effort of active learning, as seen in Figure 1 (c).
For several applications, in addition to an expensive high-fidelity model, there are potentially cheaper lower-fidelity models, such as simplified-physics models, coarse-grid models, data-fit models, and reduced-order models, that are readily available or can be built. This motivates the development of multifidelity methods that can take advantage of these multiple information sources [18]. In the context of reliability analysis using active learning surrogates, few multifidelity methods are available. Dribusch et al. [19] proposed a hierarchical bi-fidelity adaptive SVM method for locating the failure boundary. The recently developed CLoVER method [20] is a multifidelity active learning algorithm that uses a one-step lookahead entropy-reduction-based adaptive sampling strategy for refining GP surrogates around the failure boundary. In this work, we develop a multifidelity extension of the popular EGRA method [11].
We propose mfEGRA (multifidelity EGRA), which leverages multiple sources of information with different fidelities and costs to accelerate active learning of surrogates for failure boundary identification. For single-fidelity methods, the adaptive sampling criterion chooses where to sample next to refine the surrogate around the failure boundary. The challenge in developing a multifidelity adaptive sampling criterion is that we now have to answer two questions: (i) where to sample next, and (ii) which information source to use for evaluating the next sample. This work proposes a new adaptive sampling criterion that allows the use of multiple fidelity models. In our mfEGRA method, we combine the expected feasibility function used in EGRA with a proposed weighted lookahead information gain to define the adaptive sampling criterion for the multifidelity case. The key advantage of the mfEGRA method is the reduction in computational cost compared to single-fidelity active learning methods, because it can utilize additional information from multiple cheaper low-fidelity models along with the high-fidelity model. We demonstrate the computational efficiency of the proposed mfEGRA method using a multimodal analytic test problem and an acoustic horn problem with disjoint failure regions.
The rest of the paper is structured as follows. Section 2 provides the problem setup for reliability analysis using multiple information sources. Section 3 describes the details of the proposed mfEGRA method along with the complete algorithm. The effectiveness of mfEGRA is shown using an analytical multimodal test problem and an acoustic horn problem in Section 4. The conclusions are presented in Section 5.
2 Problem Setup
The inputs to the system are the random variables $\mathbf{Z}$ with probability density function $\pi$, defined on the random sample space $\Omega$. The vector of a realization of the random variables is denoted by $\mathbf{z} \in \Omega$.
The probability of failure of the system is $P_F = \mathbb{P}[g(\mathbf{Z}) > 0]$, where $g : \Omega \to \mathbb{R}$ is the limit state function. In this work, without loss of generality, the failure of the system is defined as $g(\mathbf{z}) > 0$. The failure boundary is defined as the zero contour of the limit state function, $g(\mathbf{z}) = 0$, and any other failure boundary, $g(\mathbf{z}) = c$, can be reformulated as a zero contour (i.e., $\tilde{g}(\mathbf{z}) = g(\mathbf{z}) - c = 0$).
One way to estimate the probability of failure for nonlinear systems is Monte Carlo simulation. The Monte Carlo estimate of the probability of failure is
$$\hat{P}_F = \frac{1}{m} \sum_{i=1}^{m} \mathbb{I}_{\mathcal{F}}(\mathbf{z}_i), \quad (1)$$
where $\mathbf{z}_i,\ i = 1, \dots, m$, are samples from the probability density $\pi$, $\mathcal{F} = \{\mathbf{z} \in \Omega : g(\mathbf{z}) > 0\}$ is the failure set, and $\mathbb{I}_{\mathcal{F}} : \Omega \to \{0, 1\}$ is the indicator function defined as
$$\mathbb{I}_{\mathcal{F}}(\mathbf{z}) = \begin{cases} 1, & \mathbf{z} \in \mathcal{F}, \\ 0, & \text{otherwise.} \end{cases} \quad (2)$$
The probability of failure estimation requires many evaluations of the expensive-to-evaluate high-fidelity model for the limit state function, which can make reliability analysis computationally prohibitive. The computational cost can be substantially reduced by replacing the high-fidelity model evaluations with cheap-to-evaluate surrogate model evaluations. However, to make accurate estimates of the probability of failure using a surrogate model, the zero contour of the surrogate model needs to approximate the failure boundary well. Adaptively refining the surrogate around the failure boundary, while trading off global accuracy, is an efficient way of addressing this.
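As a concrete illustration of the estimator in Equations (1) and (2), the sketch below computes a Monte Carlo failure probability; the limit state function, the failure sign convention ($g > 0$), and the sampler are illustrative stand-ins, not the models used in this paper.

```python
import numpy as np

def mc_failure_probability(limit_state, sampler, m=100_000, seed=0):
    """Monte Carlo estimate of the probability of failure, Eq. (1).

    A sample z is counted as failed when limit_state(z) > 0; averaging
    the indicator over the m draws is Eq. (2) applied sample by sample.
    """
    rng = np.random.default_rng(seed)
    z = sampler(rng, m)                          # m draws from the input density
    g = np.array([limit_state(zi) for zi in z])  # limit state at each sample
    return float(np.mean(g > 0.0))               # fraction of failed samples

# Illustrative stand-in: standard normal input, failure when z > 2,
# so the true failure probability is 1 - Phi(2).
p_hat = mc_failure_probability(
    lambda z: z[0] - 2.0,
    lambda rng, m: rng.standard_normal((m, 1)),
)
```

The standard error of this estimator scales as $\sqrt{P_F(1-P_F)/m}$, which is exactly why small failure probabilities demand the large $m$ that motivates surrogate-based approaches.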
The goal of this work is to make the adaptive refinement of surrogate models around the failure boundary more efficient by using multiple models with different fidelities and costs instead of only the high-fidelity model. We develop a multifidelity active learning method that utilizes multiple information sources to efficiently refine the surrogate to accurately locate the failure boundary. Let $\{f_0, f_1, \dots, f_k\}$ be a collection of $k+1$ models for $g$, with associated cost $c_l(\mathbf{z})$ of evaluating model $f_l$ at location $\mathbf{z}$, where the subscript $l \in \{0, 1, \dots, k\}$ denotes the information source. We define $f_0$ to be the high-fidelity model for the limit state function, and $f_1, \dots, f_k$ to be the low-fidelity models of $g$. We use a multifidelity surrogate to simultaneously approximate all information sources while encoding the correlations between them. The adaptively refined multifidelity surrogate model predictions are used for the probability of failure estimation. Next, we describe the multifidelity surrogate model used in this work and the multifidelity active learning method used to sequentially refine the surrogate around the failure boundary.
3 mfEGRA: Multifidelity EGRA with Information Gain
In this section, we introduce multifidelity EGRA (mfEGRA), which leverages the available information sources to efficiently build an adaptively refined multifidelity surrogate for locating the failure boundary.
3.1 mfEGRA method overview
The proposed mfEGRA method is a multifidelity extension of the EGRA method [11]. Section 3.2 briefly describes the multifidelity GP surrogate used in this work to combine the different information sources. The multifidelity GP surrogate is built using an initial DOE, and the mfEGRA method then refines the surrogate using a two-stage adaptive sampling criterion that:

selects the next location to be sampled using an expected feasibility function as described in Section 3.3;

selects the information source to be used to evaluate the next sample using a weighted lookahead information gain criterion as described in Section 3.4.
The adaptive sampling criterion developed in this work enables us to use the surrogate prediction mean and the surrogate prediction variance to make the decision of where and which information source to sample next. Note that both of these quantities are available from the multifidelity GP surrogate used in this work. Section 3.5 provides the implementation details and the algorithm for the proposed mfEGRA method. Figure 2 shows a flowchart outlining the mfEGRA method.
3.2 Multifidelity Gaussian process
We use the multifidelity GP surrogate introduced by Poloczek et al. [21], which built on earlier work by Lam et al. [22], to combine information from the $k+1$ information sources into a single GP surrogate, $\bar{f}(l, \mathbf{z})$, that can simultaneously approximate all the information sources. The multifidelity GP surrogate can provide predictions for any information source $l \in \{0, \dots, k\}$ and random variable realization $\mathbf{z} \in \Omega$.
The multifidelity GP is built by making two modeling choices: (1) a GP approximation $\bar{f}(0, \cdot)$ for the high-fidelity model $f_0$, and (2) independent GP approximations $\delta_l$ for the model discrepancy between the high-fidelity model and the lower-fidelity models $f_l$, for $l = 1, \dots, k$. Here, $\mu_l$ denotes the mean function and $\Sigma_l$ the covariance kernel of the corresponding GP for $l = 0, \dots, k$.
The surrogate for model $f_l$ is then constructed using the definition $\bar{f}(l, \mathbf{z}) = \bar{f}(0, \mathbf{z}) + \delta_l(\mathbf{z})$. These modeling choices lead to a surrogate model with prior mean function $\mu(l, \mathbf{z})$ and prior covariance kernel $\Sigma\big((l, \mathbf{z}), (l', \mathbf{z}')\big)$. The priors for $l = 0$ are
$$\mathbb{E}[\bar{f}(0, \mathbf{z})] = \mu_0(\mathbf{z}), \qquad \mathrm{Cov}\big(\bar{f}(0, \mathbf{z}), \bar{f}(0, \mathbf{z}')\big) = \Sigma_0(\mathbf{z}, \mathbf{z}'), \quad (3)$$
and the priors for $l, l' \in \{1, \dots, k\}$ are
$$\mathbb{E}[\bar{f}(l, \mathbf{z})] = \mu_0(\mathbf{z}) + \mu_l(\mathbf{z}), \qquad \mathrm{Cov}\big(\bar{f}(l, \mathbf{z}), \bar{f}(l', \mathbf{z}')\big) = \Sigma_0(\mathbf{z}, \mathbf{z}') + \delta_{l,l'}\, \Sigma_l(\mathbf{z}, \mathbf{z}'), \quad (4)$$
where $\delta_{l,l'}$ denotes the Kronecker delta. Once the prior mean function and the prior covariance kernels are defined using Equations (3) and (4), we can compute the posterior using standard rules of GP regression [23]. A more detailed description of the assumptions and the implementation of the multifidelity GP surrogate can be found in Ref. [21].
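The prior covariance structure of Equations (3) and (4) can be sketched directly: a kernel shared by all sources, plus an independent discrepancy kernel that is active only when both inputs come from the same lower-fidelity source. The squared-exponential kernels and their hyperparameters below are illustrative stand-ins, not the tuned choices of the paper.

```python
import numpy as np

def sq_exp(z, zp, length=1.0, var=1.0):
    # Squared-exponential kernel; a stand-in for Sigma_0 and the Sigma_l's.
    z, zp = np.asarray(z, float), np.asarray(zp, float)
    return var * np.exp(-0.5 * np.sum((z - zp) ** 2) / length ** 2)

def prior_cov(l, z, lp, zp):
    """Prior covariance of the multifidelity GP, Eq. (4):
    Sigma_0(z, z') plus the discrepancy kernel when l = l' >= 1."""
    cov = sq_exp(z, zp)                   # shared high-fidelity kernel Sigma_0
    if l == lp and l >= 1:
        cov += sq_exp(z, zp, length=0.5)  # independent discrepancy kernel Sigma_l
    return cov
```

Note that the cross-covariance between a low-fidelity input and a high-fidelity input reduces to the shared kernel $\Sigma_0$, which is the mechanism by which low-fidelity data informs the high-fidelity prediction.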
At any given $(l, \mathbf{z})$, the surrogate model posterior distribution of $\bar{f}(l, \mathbf{z})$ is the normal distribution with posterior mean $\mu(l, \mathbf{z})$ and posterior variance $\sigma^2(l, \mathbf{z})$. Consider that $n$ samples $\{(l_i, \mathbf{z}_i)\}_{i=1}^{n}$ have been evaluated and that these samples are used to fit the present multifidelity GP surrogate. Note that $(l, \mathbf{z})$ is the augmented vector of inputs to the multifidelity GP. The surrogate is then refined around the failure boundary by sequentially adding samples. The next sampling location $\mathbf{z}_{n+1}$ and the next information source $l_{n+1}$ used to refine the surrogate are found using the two-stage adaptive sampling method of mfEGRA, as described below.
3.3 Location selection: Maximize expected feasibility function
The first stage of mfEGRA selects the location of the next sample. The expected feasibility function (EFF), which was used as the adaptive sampling criterion in EGRA [11], is used in this work to select the location of the next sample $\mathbf{z}_{n+1}$. The EFF defines the expectation of the sample lying within an $\epsilon(\mathbf{z})$-band around the failure boundary (here, around the zero contour of the limit state function). The prediction mean $\mu(0, \mathbf{z})$ and prediction variance $\sigma^2(0, \mathbf{z})$ at any $\mathbf{z} \in \Omega$ are provided by the multifidelity GP for the high-fidelity surrogate model, so the surrogate prediction at $\mathbf{z}$ is the normal distribution $G \sim \mathcal{N}(\mu(0, \mathbf{z}), \sigma^2(0, \mathbf{z}))$. The feasibility function at any $\mathbf{z}$ is defined as being positive within the band around the failure boundary and zero otherwise, as given by
$$F(\mathbf{z}) = \epsilon(\mathbf{z}) - \min\big(|\hat{g}|,\, \epsilon(\mathbf{z})\big), \quad (5)$$
where $\hat{g}$ is a realization of $G$. The EFF is defined as the expectation of being within the band around the failure boundary, as given by
$$EF(\mathbf{z}) = \mathbb{E}_G\big[F(\mathbf{z})\big]. \quad (6)$$
We will use $\mu(\mathbf{z})$ and $\sigma(\mathbf{z})$ to denote $\mu(0, \mathbf{z})$ and $\sigma(0, \mathbf{z})$ in the rest of the paper. The integration in Equation (6) can be carried out analytically to obtain [11]
$$EF(\mathbf{z}) = \mu\left[2\Phi\!\left(\frac{-\mu}{\sigma}\right) - \Phi\!\left(\frac{-\epsilon - \mu}{\sigma}\right) - \Phi\!\left(\frac{\epsilon - \mu}{\sigma}\right)\right] - \sigma\left[2\phi\!\left(\frac{-\mu}{\sigma}\right) - \phi\!\left(\frac{-\epsilon - \mu}{\sigma}\right) - \phi\!\left(\frac{\epsilon - \mu}{\sigma}\right)\right] + \epsilon\left[\Phi\!\left(\frac{\epsilon - \mu}{\sigma}\right) - \Phi\!\left(\frac{-\epsilon - \mu}{\sigma}\right)\right], \quad (7)$$
where $\Phi$ is the cumulative distribution function and $\phi$ is the probability density function of the standard normal distribution, and the dependence of $\mu$, $\sigma$, and $\epsilon$ on $\mathbf{z}$ is suppressed for brevity. Similar to EGRA [11], we define $\epsilon(\mathbf{z}) = 2\sigma(\mathbf{z})$ to balance exploration and exploitation. As noted before, we describe the method considering the zero contour as the failure boundary for convenience, but the proposed method can be used for locating the failure boundary at any contour level.
The location of the next sample is selected by maximizing the EFF as given by
$$\mathbf{z}_{n+1} = \operatorname*{arg\,max}_{\mathbf{z} \in \Omega}\, EF(\mathbf{z}). \quad (8)$$
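A sketch of the closed-form EFF of Equation (7), assuming the zero-contour failure boundary and the $\epsilon = 2\sigma$ band; the helper takes the surrogate's prediction mean and standard deviation at a point.

```python
import numpy as np
from scipy.stats import norm

def expected_feasibility(mu, sigma, eps=None):
    """Closed-form EFF of Eq. (7) for the zero-contour failure boundary,
    with the EGRA choice eps = 2*sigma by default."""
    if eps is None:
        eps = 2.0 * sigma
    t, tm, tp = -mu / sigma, (-eps - mu) / sigma, (eps - mu) / sigma
    return (mu * (2.0 * norm.cdf(t) - norm.cdf(tm) - norm.cdf(tp))
            - sigma * (2.0 * norm.pdf(t) - norm.pdf(tm) - norm.pdf(tp))
            + eps * (norm.cdf(tp) - norm.cdf(tm)))

# EFF is large where the prediction mean sits near the boundary and the
# prediction is uncertain, and it decays far from the boundary.
near = expected_feasibility(0.0, 1.0)   # mean on the zero contour
far = expected_feasibility(10.0, 1.0)   # mean far from the contour
```

The maximization in Equation (8) then amounts to optimizing this scalar function of $\mathbf{z}$, e.g., with a global optimizer or over a dense candidate set.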
3.4 Information source selection: Maximize weighted lookahead information gain
Given the location of the next sample $\mathbf{z}_{n+1}$ obtained using Equation (8), the second stage of mfEGRA selects the information source $l_{n+1}$ to be used for simulating the next sample. The next information source is selected using a weighted one-step lookahead information gain criterion. This adaptive sampling strategy selects the information source that maximizes the information gain in the GP surrogate prediction, defined by the Gaussian distribution at any $\mathbf{z} \in \Omega$. In this work, information gain is quantified by the Kullback-Leibler (KL) divergence. We measure the KL divergence between the present surrogate-predicted GP and a hypothetical future surrogate-predicted GP obtained when a particular information source $l$ is used to simulate the sample at $\mathbf{z}_{n+1}$.
We represent the present GP surrogate built using the $n$ available training samples by the subscript P for convenience. The present surrogate-predicted Gaussian distribution at any $\mathbf{z} \in \Omega$ is
$$G_P(\mathbf{z}) \sim \mathcal{N}\big(\mu_P(\mathbf{z}), \sigma_P^2(\mathbf{z})\big),$$
where $\mu_P(\mathbf{z})$ is the posterior mean and $\sigma_P^2(\mathbf{z})$ is the posterior prediction variance of the present GP surrogate for the high-fidelity model built using the available training data up to iteration $n$.
A hypothetical future GP surrogate can be understood as a surrogate built using the current GP as a generative model to create hypothetical future simulated data. The hypothetical future simulated data are obtained from the present GP surrogate prediction at the location $\mathbf{z}_{n+1}$ using a possible future information source $l \in \{0, \dots, k\}$. We represent a hypothetical future GP surrogate by the subscript F. A hypothetical future surrogate-predicted Gaussian distribution at any $\mathbf{z} \in \Omega$ is
$$G_F(\mathbf{z}) \sim \mathcal{N}\big(\mu_F(\mathbf{z}), \sigma_F^2(\mathbf{z})\big).$$
The posterior mean of the hypothetical future GP is
$$\mu_F(\mathbf{z}) = \mu_P(\mathbf{z}) + \bar{\sigma}(\mathbf{z}; \mathbf{z}_{n+1}, l)\, \theta,$$
where $\theta \sim \mathcal{N}(0, 1)$ and $\bar{\sigma}(\mathbf{z}; \mathbf{z}_{n+1}, l)$ is the standard deviation of the change in the posterior mean at $\mathbf{z}$ induced by the hypothetical observation [21]. The posterior variance of the hypothetical future GP surrogate depends only on the location $\mathbf{z}_{n+1}$ and the source $l$, not on the value that would be observed, and is denoted $\sigma_F^2(\mathbf{z})$. Note that we do not need any new evaluations of the information source for constructing the future GP. The total lookahead information gain is obtained by integrating over all possible values of $\theta$ as described below.
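The outcome-independence of the future variance follows from standard GP conditioning: adding one observation reduces the posterior variance at $\mathbf{z}$ by a ratio of current posterior covariances, regardless of the value observed. A minimal sketch of that identity follows; the noise term is an assumption for sources with observation noise and is zero for deterministic models.

```python
def lookahead_variance(var_z, cov_z_zstar, var_zstar, noise=0.0):
    """One-step lookahead GP posterior variance at z after a hypothetical
    evaluation at z* (standard GP conditioning). It depends only on the
    current posterior covariances, not on the value to be observed."""
    return var_z - cov_z_zstar ** 2 / (var_zstar + noise)
```

When the candidate point is uncorrelated with $\mathbf{z}$ the variance is unchanged, and the reduction is largest where the candidate is strongly correlated with the prediction point.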
Since both $G_P(\mathbf{z})$ and $G_F(\mathbf{z})$ are Gaussian distributions, we can write the KL divergence between them explicitly. The KL divergence between $G_P(\mathbf{z})$ and $G_F(\mathbf{z})$ for any $\mathbf{z} \in \Omega$ is
$$D_{\mathrm{KL}}\big(G_P(\mathbf{z}) \,\|\, G_F(\mathbf{z})\big) = \log\!\left(\frac{\sigma_F(\mathbf{z})}{\sigma_P(\mathbf{z})}\right) + \frac{\sigma_P^2(\mathbf{z}) + \big(\mu_P(\mathbf{z}) - \mu_F(\mathbf{z})\big)^2}{2\,\sigma_F^2(\mathbf{z})} - \frac{1}{2}. \quad (9)$$
The total KL divergence can then be calculated by integrating over the entire random variable space as given by
$$D(\mathbf{z}_{n+1}, l, \theta) = \int_{\Omega} D_{\mathrm{KL}}\big(G_P(\mathbf{z}) \,\|\, G_F(\mathbf{z})\big)\, \mathrm{d}\mathbf{z}. \quad (10)$$
The total lookahead information gain for any $(\mathbf{z}_{n+1}, l)$ can then be calculated by taking the expectation of Equation (10) over all possible values of $\theta$ as given by
$$D_{\mathrm{IG}}(\mathbf{z}_{n+1}, l) = \mathbb{E}_{\theta}\big[D(\mathbf{z}_{n+1}, l, \theta)\big], \quad (11)$$
where $\theta \sim \mathcal{N}(0, 1)$. In practice, we choose a discrete set $\mathcal{Z} \subset \Omega$ via Latin hypercube sampling to numerically integrate Equation (11) as given by
$$D_{\mathrm{IG}}(\mathbf{z}_{n+1}, l) \approx \mathbb{E}_{\theta}\Bigg[\sum_{\mathbf{z} \in \mathcal{Z}} D_{\mathrm{KL}}\big(G_P(\mathbf{z}) \,\|\, G_F(\mathbf{z})\big)\Bigg]. \quad (12)$$
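Equations (9)-(12) can be sketched as follows: the KL divergence between the two Gaussian predictions is closed-form, the spatial integral becomes a sum over the discrete set, and the expectation is taken over the standard normal variable parameterizing the hypothetical observation. The `mean_shift_sd` argument, giving the standard deviation of the lookahead mean change at each integration point, is an assumed interface of this sketch rather than the paper's exact implementation.

```python
import numpy as np

def kl_gaussians(mu_p, var_p, mu_f, var_f):
    """KL( N(mu_p, var_p) || N(mu_f, var_f) ), elementwise; Eq. (9)."""
    mu_p, var_p, mu_f, var_f = map(np.asarray, (mu_p, var_p, mu_f, var_f))
    return 0.5 * (np.log(var_f / var_p)
                  + (var_p + (mu_p - mu_f) ** 2) / var_f - 1.0)

def lookahead_info_gain(mu_p, var_p, var_f, mean_shift_sd,
                        n_theta=64, seed=0):
    """Lookahead information gain over a discrete set, Eqs. (10)-(12)."""
    mu_p, var_p = np.asarray(mu_p, float), np.asarray(var_p, float)
    var_f, s = np.asarray(var_f, float), np.asarray(mean_shift_sd, float)
    theta = np.random.default_rng(seed).standard_normal(n_theta)
    gains = [np.sum(kl_gaussians(mu_p, var_p, mu_p + s * t, var_f))
             for t in theta]          # one hypothetical outcome per theta draw
    return float(np.mean(gains))      # expectation over future outcomes
```

With no mean shift and no variance change the gain is zero, and it grows as the hypothetical evaluation is expected to move or tighten the prediction.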
The total lookahead information gain evaluated using Equation (12) gives a metric of global information gain over the entire random variable space. However, we are interested in gaining more information around the failure boundary. In order to give more importance to gaining information around the failure boundary, we use a weighted version of the lookahead information gain normalized by the cost of the information source. In this work, we explore three different weighting strategies: (i) no weights, $w(\mathbf{z}_{n+1}) = 1$; (ii) weights defined by the EFF, $w(\mathbf{z}_{n+1}) = EF(\mathbf{z}_{n+1})$; and (iii) weights defined by the probability of feasibility (PF), $w(\mathbf{z}_{n+1}) = PF(\mathbf{z}_{n+1})$. The PF of the sample lying within the $\pm\epsilon$ bounds around the zero contour is
$$PF(\mathbf{z}) = \Phi\!\left(\frac{\epsilon(\mathbf{z}) - \mu(\mathbf{z})}{\sigma(\mathbf{z})}\right) - \Phi\!\left(\frac{-\epsilon(\mathbf{z}) - \mu(\mathbf{z})}{\sigma(\mathbf{z})}\right). \quad (13)$$
Weighting the information gain by either expected feasibility or probability of feasibility gives more importance to gaining information around the target region, in this case, the failure boundary.
The next information source is selected by maximizing the weighted lookahead information gain normalized by the cost of the information source as given by
$$l_{n+1} = \operatorname*{arg\,max}_{l \in \{0, \dots, k\}}\, \frac{w(\mathbf{z}_{n+1})\, D_{\mathrm{IG}}(\mathbf{z}_{n+1}, l)}{c_l(\mathbf{z}_{n+1})}. \quad (14)$$
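The selection rule of Equation (14) then reduces to an argmax over the candidate sources; the weight, gain, and cost numbers below are hypothetical.

```python
import numpy as np

def select_source(weight, info_gains, costs):
    """Eq. (14): argmax over sources l of w(z_{n+1}) * D_IG(z_{n+1}, l) / c_l.

    The weight depends only on z_{n+1}, so it does not change the argmax
    over l; it is kept here to mirror the criterion as written.
    """
    scores = weight * np.asarray(info_gains, float) / np.asarray(costs, float)
    return int(np.argmax(scores))

# Hypothetical numbers: the high-fidelity model (index 0) gains the most
# information but is far more expensive per evaluation.
l_next = select_source(weight=0.8,
                       info_gains=[3.0, 1.2, 0.9],
                       costs=[100.0, 10.0, 1.0])
# gain/cost ratios are 0.03, 0.12, 0.9, so the cheapest source wins here
```

This is where the relative model costs enter the method: a low-fidelity source with modest gain can still be preferred when it is orders of magnitude cheaper.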
3.5 Algorithm and implementation details
An algorithm describing the mfEGRA method is given in Algorithm 1. In this work, we evaluate all the models at the initial DOE. We generate the initial samples using Latin hypercube sampling and run all the models at each of those samples to obtain the initial training set. In practice, we choose a fixed set $\mathcal{Z}$ of realizations at which the information gain is evaluated, as shown in Equation (12), and reuse it for all iterations of mfEGRA. Due to the typically high cost associated with the high-fidelity model, in our implementation we evaluate all the models whenever the high-fidelity model is selected as the information source and then update the GP hyperparameters. All the model evaluations can be done in parallel. The algorithm is stopped when the maximum value of the EFF falls below a user-specified threshold, although other stopping criteria can also be explored.
4 Results
In this section, we demonstrate the effectiveness of the proposed mfEGRA method on an analytic multimodal test problem and an acoustic horn application. The probability of failure is estimated through Monte Carlo simulation using the adaptively refined multifidelity GP surrogate.
4.1 Analytic multimodal test problem
The analytic test problem used in this work has two inputs and three models with different fidelities and costs. This test problem has been used before in the context of reliability analysis in Ref. [11]. The high-fidelity model of the limit state function is
(15) 
where $z_1$ and $z_2$ are uniformly distributed random variables on a bounded rectangular domain. The two low-fidelity models are
(16)  
(17) 
The cost of each fidelity model is taken to be constant over the entire domain. In this case, there is no noise in the observations from the different fidelity models. The failure boundary is defined by the zero contour of the limit state function, $g = 0$, and the failure of the system is defined by $g > 0$. Figure 3 shows the contour plots of the three models used for the analytic test problem, along with the failure boundary predicted by each of them.
We use an initial DOE of size 10 generated using Latin hypercube sampling. All the models are evaluated at these 10 samples to build the initial multifidelity surrogate. The reference probability of failure is calculated using Monte Carlo samples of the high-fidelity model $f_0$. The relative error in the probability of failure estimate obtained using the adaptively refined multifidelity GP surrogate is used to assess the accuracy and computational efficiency of the proposed method. We repeat the calculations for 100 different initial DOEs to get the confidence bands on the results.
We first compare the accuracy of the method when different weights are used for the information gain criterion in mfEGRA, as seen in Figure 4. Comparing the error confidence bands, using weighted information gain (both EFF and PF) performs better than using no weights. EFF-weighted information gain leads to only marginally lower errors in this case than PF-weighted information gain. Since we see no significant advantage of using PF as weights, and the EFF-based criterion is already used to select the sample location, we propose using EFF-weighted information gain to make the implementation more convenient. Note that for other problems, it is possible that PF-weighted information gain may perform better. From here on, mfEGRA is used with the EFF-weighted information gain.
The comparison of mfEGRA with single-fidelity EGRA shows considerable improvement in accuracy at substantially lower computational cost, as seen in Figure 5. In this case, to reach the target median relative error in prediction, mfEGRA requires a computational cost of 28 compared to 55 for EGRA (around a 50% reduction). Note that we start both cases with the same 100 sets of initial samples.
Figure 6 shows the evolution of the expected feasibility function and the weighted lookahead information gain, which are the two stages of the adaptive sampling criterion used in mfEGRA. These metrics, along with the relative error in the probability of failure estimate, can be used to define an efficient stopping criterion, especially when the adaptive sampling needs to be repeated for different sets of parameters (e.g., in reliability-based design optimization).
Figure 7 shows the progress of mfEGRA at several iterations for a particular initial DOE. mfEGRA explores most of the domain using the cheaper models $f_1$ and $f_2$ in this case. The algorithm is stopped after 134 iterations, when the maximum expected feasibility falls below the stopping threshold; we can see that the surrogate contour accurately traces the true failure boundary defined by the high-fidelity model. In this case, mfEGRA makes a total of 35 evaluations of $f_0$, 126 evaluations of $f_1$, and 53 evaluations of $f_2$, including the initial DOE, to reach the stopping value of the EFF.
4.2 Acoustic horn
We demonstrate the effectiveness of mfEGRA for the reliability analysis of an acoustic horn. The acoustic horn model used in this work has been used in the context of robust optimization by Ng et al. [24]. An illustration of the acoustic horn is shown in Figure 8. The inputs to the system are the three random variables listed in Table 1.
Random variable  Description  Distribution  Lower bound  Upper bound  Mean  Standard deviation

$k$  wave number  Uniform  1.3  1.5  –  –
$Z_u$  upper horn wall impedance  Normal  –  –  50  3
$Z_l$  lower horn wall impedance  Normal  –  –  50  3
The output of the model is the reflection coefficient $s$, which is a measure of the horn's efficiency. We define failure of the system as the reflection coefficient exceeding a critical threshold, and define the limit state function so that this threshold corresponds to the failure boundary (the zero contour after the reformulation of Section 2). We use a two-dimensional acoustic horn model governed by the non-dimensional Helmholtz equation. In this case, a finite element model of the Helmholtz equation is the high-fidelity model, with 35895 nodal grid points. The low-fidelity model is a reduced basis model [24, 25]. In this case, the low-fidelity model is 40 times cheaper to evaluate than the high-fidelity model. The cost of evaluating the different models is taken to be constant over the entire random variable space. A more detailed description of the acoustic horn models used in this work can be found in Ref. [24].
The reference probability of failure is estimated using Monte Carlo samples of the high-fidelity model. We repeat the mfEGRA and single-fidelity EGRA runs using 10 different initial DOEs with 10 samples each (generated using Latin hypercube sampling) to get the confidence bands on the results. The convergence of the relative error in the probability of failure is shown in Figure 9 for mfEGRA and single-fidelity EGRA. In this case, mfEGRA needs 19 equivalent high-fidelity solves to reach the target median relative error, compared to 25 required by single-fidelity EGRA, leading to a 24% reduction in computational cost. The reduction in computational cost using mfEGRA is driven by the discrepancy between the models and the relative cost of evaluating the models. In the acoustic horn case, we see computational savings of 24%, compared to around 50% for the analytic test problem in Section 4.1. This can be explained by the substantial difference in relative costs: the acoustic horn problem has a single low-fidelity model that is 40 times cheaper, whereas the analytic test problem has two low-fidelity models that are 100-1000 times cheaper than the high-fidelity model. The evolution of the mfEGRA adaptive sampling criteria can be seen in Figure 10.
Figure 11 shows that classifying the Monte Carlo samples using the high-fidelity model and using the adaptively refined surrogate model leads to very similar results for a particular initial DOE. It also shows that the acoustic horn application has two disjoint failure regions, and the method is able to accurately capture both of them. The locations of the samples from the different models when mfEGRA is used to refine the multifidelity GP surrogate for a particular initial DOE can be seen in Figure 12. The figure shows that most of the high-fidelity samples are selected around the failure boundary. For this DOE, mfEGRA requires 28 evaluations of the high-fidelity model and 69 evaluations of the low-fidelity model to reach the stopping value of the EFF.
Similar to the work in Refs. [13, 15], mfEGRA can also be implemented by limiting the search space for the adaptive sampling location in Equation (8) to a set of Monte Carlo samples drawn from the given random variable distribution. The convergence of the relative error in the probability of failure estimate improves for both mfEGRA and single-fidelity EGRA with this restriction, as can be seen in Figure 13. In this case, mfEGRA requires 12 equivalent high-fidelity solves, compared to 21 for single-fidelity EGRA, to reach the target median relative error, leading to computational savings of around 43%.
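The restricted-search variant replaces the continuous maximization of Equation (8) with a discrete argmax over the pre-drawn Monte Carlo candidates, so refinement effort concentrates where the input density actually places probability mass. A sketch with a stand-in acquisition function follows; the true criterion would be the EFF of the current surrogate.

```python
import numpy as np

def next_location_from_candidates(acquisition, candidates):
    """Discrete version of Eq. (8): maximize the acquisition function
    over a fixed set of Monte Carlo candidate points."""
    values = np.array([acquisition(z) for z in candidates])
    return candidates[int(np.argmax(values))]

rng = np.random.default_rng(1)
candidates = rng.standard_normal((1000, 2))  # MC draws from the input density
# Stand-in acquisition peaking on a mock failure boundary ||z|| = 1.5.
z_next = next_location_from_candidates(
    lambda z: -abs(np.linalg.norm(z) - 1.5), candidates)
```

Because the candidate set is fixed across iterations, the same samples can later be reused for the final Monte Carlo probability-of-failure estimate.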
5 Concluding remarks
This paper introduces the mfEGRA (multifidelity EGRA) method, which refines a surrogate to accurately locate the limit state function failure boundary (or any contour) while leveraging multiple information sources with different fidelities and costs. The method selects the next sampling location based on the expected feasibility function, and the next information source based on a weighted one-step lookahead information gain criterion, to refine the multifidelity GP surrogate of the limit state function around the failure boundary. We show through two numerical examples that mfEGRA efficiently combines information from different models to reduce computational cost. The mfEGRA method leads to computational savings of around 50% for a multimodal test problem and 24% for an acoustic horn problem over the single-fidelity EGRA method when used for estimating the probability of failure. When implemented by restricting the search space to a priori drawn Monte Carlo samples, mfEGRA showed even greater computational efficiency, with a 43% reduction in computational cost compared to the single-fidelity method for the acoustic horn problem. The driving factors for the reduction in computational cost are the discrepancy between the high- and low-fidelity models, and the relative cost of the low-fidelity models compared to the high-fidelity model. This information is directly encoded in the mfEGRA adaptive sampling criterion, helping it make efficient sampling decisions.
Acknowledgements
This work has been supported in part by the Air Force Office of Scientific Research (AFOSR) MURI on managing multiple information sources of multiphysics systems, award numbers FA9550-15-1-0038 and FA9550-18-1-0023, the Air Force Center of Excellence on multifidelity modeling of rocket combustor dynamics, award FA9550-17-1-0195, and the Department of Energy Office of Science AEOLUS MMICC, award DE-SC0019303.
References
 [1] Melchers, R., “Importance sampling in structural systems,” Structural Safety, Vol. 6, No. 1, 1989, pp. 3–10.
 [2] Liu, J. S., Monte Carlo strategies in scientific computing, Springer Science & Business Media, 2008.
 [3] Kroese, D. P., Rubinstein, R. Y., and Glynn, P. W., “The cross-entropy method for estimation,” Handbook of Statistics, Vol. 31, Elsevier, 2013, pp. 19–34.
 [4] Au, S.-K. and Beck, J. L., “Estimation of small failure probabilities in high dimensions by subset simulation,” Probabilistic Engineering Mechanics, Vol. 16, No. 4, 2001, pp. 263–277.
 [5] Papaioannou, I., Betz, W., Zwirglmaier, K., and Straub, D., “MCMC algorithms for subset simulation,” Probabilistic Engineering Mechanics, Vol. 41, 2015, pp. 89–103.
 [6] Hohenbichler, M., Gollwitzer, S., Kruse, W., and Rackwitz, R., “New light on first- and second-order reliability methods,” Structural Safety, Vol. 4, No. 4, 1987, pp. 267–284.
 [7] Rackwitz, R., “Reliability analysis–a review and some perspectives,” Structural Safety, Vol. 23, No. 4, 2001, pp. 365–395.
 [8] Basudhar, A., Missoum, S., and Sanchez, A. H., “Limit state function identification using support vector machines for discontinuous responses and disjoint failure domains,” Probabilistic Engineering Mechanics, Vol. 23, No. 1, 2008, pp. 1–11.
 [9] Basudhar, A. and Missoum, S., “Reliability assessment using probabilistic support vector machines,” International Journal of Reliability and Safety, Vol. 7, No. 2, 2013, pp. 156–173.
 [10] Lecerf, M., Allaire, D., and Willcox, K., “Methodology for dynamic datadriven online flight capability estimation,” AIAA Journal, Vol. 53, No. 10, 2015, pp. 3073–3087.
 [11] Bichon, B. J., Eldred, M. S., Swiler, L. P., Mahadevan, S., and McFarland, J. M., “Efficient global reliability analysis for nonlinear implicit performance functions,” AIAA Journal, Vol. 46, No. 10, 2008, pp. 2459–2468.
 [12] Picheny, V., Ginsbourger, D., Roustant, O., Haftka, R. T., and Kim, N.-H., “Adaptive designs of experiments for accurate approximation of a target region,” Journal of Mechanical Design, Vol. 132, No. 7, 2010, pp. 071008.
 [13] Echard, B., Gayton, N., and Lemaire, M., “AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation,” Structural Safety, Vol. 33, No. 2, 2011, pp. 145–154.
 [14] Dubourg, V., Sudret, B., and Bourinet, J.-M., “Reliability-based design optimization using kriging surrogates and subset simulation,” Structural and Multidisciplinary Optimization, Vol. 44, No. 5, 2011, pp. 673–690.
 [15] Bect, J., Ginsbourger, D., Li, L., Picheny, V., and Vazquez, E., “Sequential design of computer experiments for the estimation of a probability of failure,” Statistics and Computing, Vol. 22, No. 3, 2012, pp. 773–793.
 [16] Chevalier, C., Bect, J., Ginsbourger, D., Vazquez, E., Picheny, V., and Richet, Y., “Fast parallel kriging-based stepwise uncertainty reduction with application to the identification of an excursion set,” Technometrics, Vol. 56, No. 4, 2014, pp. 455–465.
 [17] Moustapha, M. and Sudret, B., “Surrogateassisted reliabilitybased design optimization: a survey and a unified modular framework,” Structural and Multidisciplinary Optimization, 2019, pp. 1–20.
 [18] Peherstorfer, B., Willcox, K., and Gunzburger, M., “Survey of multifidelity methods in uncertainty propagation, inference, and optimization,” SIAM Review, Vol. 60, No. 3, 2018, pp. 550–591.
 [19] Dribusch, C., Missoum, S., and Beran, P., “A multifidelity approach for the construction of explicit decision boundaries: application to aeroelasticity,” Structural and Multidisciplinary Optimization, Vol. 42, No. 5, 2010, pp. 693–705.
 [20] Marques, A., Lam, R., and Willcox, K., “Contour location via entropy reduction leveraging multiple information sources,” Advances in Neural Information Processing Systems, 2018, pp. 5217–5227.
 [21] Poloczek, M., Wang, J., and Frazier, P., “Multiinformation source optimization,” Advances in Neural Information Processing Systems, 2017, pp. 4291–4301.
 [22] Lam, R., Allaire, D., and Willcox, K., “Multifidelity optimization using statistical surrogate modeling for non-hierarchical information sources,” 56th AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 2015.
 [23] Rasmussen, C. E. and Nickisch, H., “Gaussian processes for machine learning (GPML) toolbox,” Journal of Machine Learning Research, Vol. 11, No. Nov, 2010, pp. 3011–3015.
 [24] Ng, L. W. and Willcox, K. E., “Multifidelity approaches for optimization under uncertainty,” International Journal for Numerical Methods in Engineering, Vol. 100, No. 10, 2014, pp. 746–772.
 [25] Eftang, J. L., Huynh, D., Knezevic, D. J., and Patera, A. T., “A twostep certified reduced basis method,” Journal of Scientific Computing, Vol. 51, No. 1, 2012, pp. 28–58.