mfEGRA: Multifidelity Efficient Global Reliability Analysis


Anirban Chaudhuri (Postdoctoral Associate, Department of Aeronautics and Astronautics, anirbanc@mit.edu), Alexandre N. Marques (Postdoctoral Associate, Department of Aeronautics and Astronautics, noll@mit.edu)
Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
Karen E. Willcox (Director, Oden Institute for Computational Engineering and Sciences, kwillcox@oden.utexas.edu)
University of Texas at Austin, Austin, TX, 78712, USA
September 21, 2019
Abstract

This paper develops mfEGRA, a multifidelity active learning method using data-driven adaptively refined surrogates for failure boundary location in reliability analysis. This work addresses the prohibitive cost of reliability analysis using Monte Carlo sampling for expensive-to-evaluate high-fidelity models by using cheaper-to-evaluate approximations of the high-fidelity model. The method builds on the Efficient Global Reliability Analysis (EGRA) method, which is a surrogate-based method that uses adaptive sampling to refine Gaussian process surrogates for failure boundary location using a single fidelity model. Our method introduces a two-stage adaptive sampling criterion that uses a multifidelity Gaussian process surrogate to leverage multiple information sources with different fidelities. The method combines the expected feasibility criterion from EGRA with a one-step lookahead information gain to refine the surrogate around the failure boundary. The computational savings from mfEGRA depend on the discrepancy between the different models and the relative cost of evaluating the different models compared to the high-fidelity model. We show that accurate estimation of reliability using mfEGRA leads to computational savings of around 50% for an analytical multimodal test problem and 24% for an acoustic horn problem, when compared to single fidelity EGRA.

  • Keywords:  active learning, adaptive sampling, probability of failure, contour location, classification, Gaussian process, kriging, multiple information source, EGRA, metamodel

1 Introduction

The presence of uncertainties in the manufacturing and operation of systems makes reliability analysis critical for system safety. The reliability analysis of a system requires estimating the probability of failure, which can be computationally prohibitive when the high-fidelity model is expensive to evaluate. In this work, we develop a method for efficient reliability estimation by leveraging multiple sources of information with different fidelities to build a multifidelity approximation of the limit state function.

Reliability analysis for strongly nonlinear systems typically requires Monte Carlo sampling, which can incur substantial cost because of the numerous evaluations of expensive-to-evaluate high-fidelity models, as seen in Figure 1 (a). There are several methods that improve the convergence rate of Monte Carlo methods to decrease computational cost through variance reduction, such as importance sampling [1, 2], the cross-entropy method [3], and subset simulation [4, 5]. However, such methods are outside the scope of this paper and will not be discussed further. Another class of methods reduces the computational cost by using approximations of the failure boundary or of the entire limit state function. The popular methods that fall in the first category are the first- and second-order reliability methods (FORM and SORM), which approximate the failure boundary with linear and quadratic approximations around the most probable failure point [6, 7]. FORM and SORM can be efficient for mildly nonlinear problems but cannot handle systems with multiple failure regions. The methods that fall in the second category reduce computational cost by replacing the high-fidelity model evaluations in the Monte Carlo simulation with cheaper evaluations from adaptive surrogates of the limit state function, as seen in Figure 1 (b).


Figure 1: Reliability analysis with (a) high-fidelity model, (b) single fidelity adaptive surrogate, and (c) multifidelity adaptive surrogate.

Estimating reliability requires accurately classifying samples as failed or safe, which needs surrogates that accurately predict the limit state function around the failure boundary. Thus, the surrogates need to be refined only in the region of interest (in this case, around the failure boundary) and do not require globally accurate prediction of the limit state function. The development of sequential active learning methods for refining the surrogate around the failure boundary has been addressed in the literature using only a single high-fidelity information source. Such methods fall in the same category as adaptively refining surrogates for identifying stability boundaries, contour location, classification, and sequential design of experiments (DOE) for a target region. Typically, these methods use either Gaussian process (GP) surrogates or support vector machines (SVM). Adaptive SVM methods have been implemented for reliability analysis and contour location [8, 9, 10]. In this work, we focus on GP-based methods (sometimes referred to as kriging-based) that use the GP prediction mean and prediction variance to develop greedy and lookahead adaptive sampling methods. Efficient Global Reliability Analysis (EGRA) adaptively refines the GP surrogate around the failure boundary by sequentially adding points that have maximum expected feasibility [11]. A weighted integrated mean square error criterion for refining the kriging surrogate was developed by Picheny et al. [12]. Echard et al. [13] proposed an adaptive kriging method that refines the surrogate in the restricted set of samples defined by a Monte Carlo simulation. Dubourg et al. [14] proposed a population-based adaptive sampling technique for refining the kriging surrogate around the failure boundary. One-step lookahead strategies for GP surrogate refinement for estimating the probability of failure were proposed by Bect et al. [15] and Chevalier et al. [16].
A review of some surrogate-based methods for reliability analysis can be found in Ref. [17]. However, all the methods mentioned above use a single source of information, which is the high-fidelity model as illustrated in Figure 1 (b). This work presents a novel multifidelity active learning method that adaptively refines the surrogate around the limit state function failure boundary using multiple sources of information, thus, further reducing the active learning computational effort as seen in Figure 1 (c).

For several applications, in addition to an expensive high-fidelity model, there are potentially cheaper lower-fidelity models, such as simplified-physics models, coarse-grid models, data-fit models, and reduced-order models, that are readily available or can be built. This necessitates the development of multifidelity methods that can take advantage of these multiple information sources [18]. In the context of reliability analysis using active learning surrogates, few multifidelity methods are available. Dribusch et al. [19] proposed a hierarchical bi-fidelity adaptive SVM method for locating the failure boundary. The recently developed CLoVER [20] method is a multifidelity active learning algorithm that uses a one-step lookahead entropy-reduction-based adaptive sampling strategy for refining GP surrogates around the failure boundary. In this work, we develop a multifidelity extension of the popular EGRA method [11].

We propose mfEGRA (multifidelity EGRA), which leverages multiple sources of information with different fidelities and costs to accelerate active learning of surrogates for failure boundary identification. For single fidelity methods, the adaptive sampling criterion chooses where to sample next to refine the surrogate around the failure boundary. The challenge in developing a multifidelity adaptive sampling criterion is that we now have to answer two questions: (i) where to sample next, and (ii) which information source to use for evaluating the next sample. This work proposes a new adaptive sampling criterion that allows the use of multiple fidelity models. In our mfEGRA method, we combine the expected feasibility function used in EGRA with a proposed weighted lookahead information gain to define the adaptive sampling criterion for the multifidelity case. The key advantage of the mfEGRA method is the reduction in computational cost compared to single fidelity active learning methods, because it can utilize additional information from multiple cheaper low-fidelity models along with the high-fidelity model information. We demonstrate the computational efficiency of the proposed mfEGRA using a multimodal analytic test problem and an acoustic horn problem with disjoint failure regions.

The rest of the paper is structured as follows. Section 2 provides the problem setup for reliability analysis using multiple information sources. Section 3 describes the details of the proposed mfEGRA method along with the complete algorithm. The effectiveness of mfEGRA is shown using an analytical multimodal test problem and an acoustic horn problem in Section 4. The conclusions are presented in Section 5.

2 Problem Setup

The inputs to the system are the random variables with the probability density function , where denotes the random sample space. The vector of a realization of the random variables is denoted by .

The probability of failure of the system is $P_F = \mathbb{P}[g(Z) \le 0]$, where $g$ is the limit state function. In this work, without loss of generality, the failure of the system is defined as $g(z) \le 0$. The failure boundary is defined as the zero contour of the limit state function, $g(z) = 0$, and any other failure boundary, $g(z) = c$, can be reformulated as a zero contour (i.e., $\tilde{g}(z) = g(z) - c = 0$).

One way to estimate the probability of failure for nonlinear systems is Monte Carlo simulation. The Monte Carlo estimate of the probability of failure is

\[
\hat{P}_F = \frac{1}{m} \sum_{i=1}^{m} \mathbb{I}_F(z_i), \tag{1}
\]

where $z_i$, $i = 1, \ldots, m$, are $m$ samples drawn from the probability density $p$, $F \subset \Omega$ is the failure set, and $\mathbb{I}_F$ is the indicator function defined as

\[
\mathbb{I}_F(z) =
\begin{cases}
1, & z \in F,\\
0, & \text{otherwise.}
\end{cases} \tag{2}
\]

The probability of failure estimation requires many evaluations of the expensive-to-evaluate high-fidelity model for the limit state function, which can make reliability analysis computationally prohibitive. The computational cost can be substantially reduced by replacing the high-fidelity model evaluations with cheap-to-evaluate surrogate model evaluations. However, to estimate the probability of failure accurately using a surrogate model, the zero contour of the surrogate model needs to approximate the failure boundary well. Adaptively refining the surrogate around the failure boundary, while trading off global accuracy, is an efficient way of addressing this.
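As an illustration, the Monte Carlo estimator above can be sketched in a few lines. The limit state, the sampling distribution, and the failure convention ($g(z) \le 0$) in this sketch are all hypothetical stand-ins, not the paper's models:

```python
import random

def mc_failure_probability(g, sampler, m, fails=lambda y: y <= 0.0):
    # Monte Carlo estimate of P_F: the fraction of m random samples whose
    # limit state value indicates failure. The failure convention is
    # assumed here to be g(z) <= 0; adapt `fails` to the problem at hand.
    return sum(fails(g(sampler())) for _ in range(m)) / m

# Hypothetical limit state g(z) = 1 - z with z ~ N(0, 1): the failure set
# {z >= 1} has exact probability 1 - Phi(1), roughly 0.1587.
random.seed(0)
pf_hat = mc_failure_probability(lambda z: 1.0 - z,
                                lambda: random.gauss(0.0, 1.0),
                                200_000)
```

Each sample costs one limit state evaluation, which is exactly why the paper replaces the high-fidelity `g` with a cheap surrogate inside this loop.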

The goal of this work is to make the adaptive refinement of surrogate models around the failure boundary more efficient by using multiple models with different fidelities and costs instead of only using the high-fidelity model. We develop a multifidelity active learning method that utilizes multiple information sources to efficiently refine the surrogate to accurately locate the failure boundary. Let $f_\ell$, $\ell = 0, \ldots, k$, be a collection of $k+1$ models for the limit state function with associated cost $c_\ell(z)$ at location $z$, where the subscript $\ell$ denotes the information source. We define the model $f_0$ to be the high-fidelity model for the limit state function. The low-fidelity models are denoted by $f_\ell$, $\ell = 1, \ldots, k$. We use a multifidelity surrogate to simultaneously approximate all information sources while encoding the correlations between them. The adaptively refined multifidelity surrogate model predictions are used for the probability of failure estimation. Next, we describe the multifidelity surrogate model used in this work and the multifidelity active learning method used to sequentially refine the surrogate around the failure boundary.

3 mfEGRA: Multifidelity EGRA with Information Gain

In this section, we introduce multifidelity EGRA (mfEGRA) that leverages the information sources to efficiently build an adaptively refined multifidelity surrogate to locate the failure boundary.

3.1 mfEGRA method overview

The proposed mfEGRA method is a multifidelity extension to the EGRA method [11]. Section 3.2 briefly describes the multifidelity GP surrogate used in this work to combine the different information sources. The multifidelity GP surrogate is built using an initial DOE and then the mfEGRA method refines the surrogate using a two-stage adaptive sampling criterion that:

  1. selects the next location to be sampled using an expected feasibility function as described in Section 3.3;

  2. selects the information source to be used to evaluate the next sample using a weighted lookahead information gain criterion as described in Section 3.4.

The adaptive sampling criterion developed in this work enables us to use the surrogate prediction mean and the surrogate prediction variance to make the decision of where and which information source to sample next. Note that both of these quantities are available from the multifidelity GP surrogate used in this work. Section 3.5 provides the implementation details and the algorithm for the proposed mfEGRA method. Figure 2 shows a flowchart outlining the mfEGRA method.


Figure 2: Flowchart showing the mfEGRA method.

3.2 Multifidelity Gaussian process

We use the multifidelity GP surrogate introduced by Poloczek et al. [21], which built on earlier work by Lam et al. [22], to combine information from all the information sources into a single GP surrogate that simultaneously approximates all of them. The multifidelity GP surrogate can provide predictions for any information source $\ell$ and random variable realization $z$.

The multifidelity GP is built by making two modeling choices: (1) a GP approximation of the high-fidelity model, $f_0 \sim \mathcal{GP}(\mu_0, \Sigma_0)$, and (2) independent GP approximations of the model discrepancies between the high-fidelity and the lower-fidelity models, $\delta_\ell \sim \mathcal{GP}(\mu_{\delta_\ell}, \Sigma_{\delta_\ell})$ for $\ell = 1, \ldots, k$, where $\mu_{\delta_\ell}$ denotes the mean function and $\Sigma_{\delta_\ell}$ denotes the covariance kernel of the discrepancy GP.

Then the surrogate for model $\ell$ is constructed using the definition $f_\ell(z) = f_0(z) + \delta_\ell(z)$. These modeling choices lead to the surrogate model with prior mean function $\mu$ and prior covariance kernel $\Sigma$. The prior mean is

\[
\mu\big((\ell, z)\big) = \mu_0(z) + \mu_{\delta_\ell}(z), \tag{3}
\]

and the prior covariance kernel is

\[
\Sigma\big((\ell, z), (\ell', z')\big) = \Sigma_0(z, z') + \delta_{\ell \ell'}\, \Sigma_{\delta_\ell}(z, z'), \tag{4}
\]

where $\delta_0 \equiv 0$ (so that $\mu_{\delta_0} = \Sigma_{\delta_0} = 0$) and $\delta_{\ell \ell'}$ denotes the Kronecker delta. Once the prior mean function and the prior covariance kernels are defined using Equations (3) and (4), we can compute the posterior using standard rules of GP regression [23]. A more detailed description of the assumptions and the implementation of the multifidelity GP surrogate can be found in Ref. [21].
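A minimal sketch of this prior covariance structure: a shared kernel for the high-fidelity GP plus a per-source discrepancy kernel gated by the Kronecker delta. The squared-exponential kernels and their hyperparameters here are illustrative assumptions, not choices prescribed by the method:

```python
import math

def rbf(z1, z2, length=0.5, var=1.0):
    # Squared-exponential kernel; an illustrative choice of covariance.
    return var * math.exp(-0.5 * ((z1 - z2) / length) ** 2)

def prior_cov(p, q):
    # p = (l, z): information source index and input location.
    # Covariance = Sigma_0 (shared across all sources) plus a discrepancy
    # kernel Sigma_l, added only when both points come from the same
    # low-fidelity source (Kronecker delta; no discrepancy for l = 0).
    (l1, z1), (l2, z2) = p, q
    cov = rbf(z1, z2)                            # Sigma_0
    if l1 == l2 and l1 != 0:
        cov += rbf(z1, z2, length=0.3, var=0.1)  # Sigma_l for source l
    return cov

# High- and low-fidelity values at the same point are correlated through
# Sigma_0 only; a low-fidelity point carries extra discrepancy variance.
c_cross = prior_cov((0, 0.2), (1, 0.2))  # Sigma_0 only
c_low   = prior_cov((1, 0.2), (1, 0.2))  # Sigma_0 + Sigma_1
```

Stacking this kernel over all training points (with their source indices) gives the joint prior from which the posterior follows by standard GP regression.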

At any given $(\ell, z)$, the surrogate model posterior distribution is defined by the normal distribution with posterior mean $\mu(\ell, z)$ and posterior variance $\sigma^2(\ell, z)$. Consider that $n$ samples have been evaluated and these samples are used to fit the present multifidelity GP surrogate. Note that $(\ell, z)$ is the augmented vector of inputs to the multifidelity GP. Then the surrogate is refined around the failure boundary by sequentially adding samples. The next sample location and the next information source used to refine the surrogate are found using the two-stage adaptive sampling method of mfEGRA as described below.

3.3 Location selection: Maximize expected feasibility function

The first stage of mfEGRA involves selecting the next location to be sampled. The expected feasibility function (EFF), which was used as the adaptive sampling criterion in EGRA [11], is used in this work to select the location of the next sample $z_{\text{next}}$. The EFF defines the expectation of the sample lying within an $\epsilon$-band around the failure boundary (here, around the zero contour of the limit state function). The prediction mean $\mu(0, z)$ and the prediction variance $\sigma^2(0, z)$ at any $z$ are provided by the multifidelity GP for the high-fidelity surrogate model. The multifidelity GP surrogate prediction at $z$ is the normal distribution $\hat{Y}(z) \sim \mathcal{N}\big(\mu(0, z), \sigma^2(0, z)\big)$. The feasibility function at any $z$ is defined to be positive within the $\epsilon$-band around the failure boundary and zero otherwise, as given by

\[
F_\epsilon(z) =
\begin{cases}
\epsilon(z) - |\bar{y}|, & |\bar{y}| < \epsilon(z),\\
0, & \text{otherwise,}
\end{cases} \tag{5}
\]

where $\bar{y}$ is a realization of $\hat{Y}(z)$. The EFF is defined as the expectation of $\hat{Y}(z)$ being within the $\epsilon$-band around the failure boundary, as given by

\[
EF(z) = \mathbb{E}\big[F_\epsilon(z)\big]. \tag{6}
\]

We will use $\mu(z)$ and $\sigma(z)$ to denote $\mu(0, z)$ and $\sigma(0, z)$ in the rest of the paper. The integration in Equation (6) can be solved analytically to obtain [11]

\[
EF(z) = \mu(z)\left[2\Phi\!\left(\frac{-\mu(z)}{\sigma(z)}\right) - \Phi\!\left(\frac{-\epsilon(z) - \mu(z)}{\sigma(z)}\right) - \Phi\!\left(\frac{\epsilon(z) - \mu(z)}{\sigma(z)}\right)\right] - \sigma(z)\left[2\phi\!\left(\frac{-\mu(z)}{\sigma(z)}\right) - \phi\!\left(\frac{-\epsilon(z) - \mu(z)}{\sigma(z)}\right) - \phi\!\left(\frac{\epsilon(z) - \mu(z)}{\sigma(z)}\right)\right] + \epsilon(z)\left[\Phi\!\left(\frac{\epsilon(z) - \mu(z)}{\sigma(z)}\right) - \Phi\!\left(\frac{-\epsilon(z) - \mu(z)}{\sigma(z)}\right)\right], \tag{7}
\]

where $\Phi$ is the cumulative distribution function and $\phi$ is the probability density function of the standard normal distribution. Similar to EGRA [11], we define $\epsilon(z) = 2\sigma(z)$ to balance exploration and exploitation. As noted before, we describe the method considering the zero contour as the failure boundary for convenience, but the proposed method can be used to locate the failure boundary at any contour level.

The location of the next sample is selected by maximizing the EFF, as given by

\[
z_{\text{next}} = \operatorname*{arg\,max}_{z \in \Omega} \; EF(z). \tag{8}
\]
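A minimal sketch of the analytical EFF for the zero-contour target with $\epsilon = 2\sigma$, using only the standard library; the values of $\mu$ and $\sigma$ stand in for a hypothetical GP prediction:

```python
import math

def Phi(t):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def phi(t):
    # Standard normal probability density function.
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def expected_feasibility(mu, sigma, eps):
    # Closed-form expectation of the prediction N(mu, sigma^2) lying
    # within +/- eps of the zero contour (the EGRA EFF).
    a = -mu / sigma             # contour location, standardized
    lo = (-eps - mu) / sigma    # lower band edge, standardized
    hi = (eps - mu) / sigma     # upper band edge, standardized
    return (mu * (2.0 * Phi(a) - Phi(lo) - Phi(hi))
            - sigma * (2.0 * phi(a) - phi(lo) - phi(hi))
            + eps * (Phi(hi) - Phi(lo)))

# A prediction centered on the contour is maximally interesting; one far
# from the contour (relative to sigma) contributes essentially nothing.
ef_on  = expected_feasibility(0.0, 1.0, 2.0)
ef_far = expected_feasibility(5.0, 0.1, 0.2)
```

Maximizing this quantity over a candidate set of $z$ values implements the location-selection stage.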

3.4 Information source selection: Maximize weighted lookahead information gain

Given the location of the next sample $z_{\text{next}}$ obtained using Equation (8), the second stage of mfEGRA selects the information source to be used for simulating the next sample. The next information source is selected by using a weighted one-step lookahead information gain criterion. This adaptive sampling strategy selects the information source that maximizes the information gain in the GP surrogate prediction at any $z$. In this work, information gain is quantified by the Kullback-Leibler (KL) divergence. We measure the KL divergence between the present surrogate predicted GP and a hypothetical future surrogate predicted GP when a particular information source is used to simulate the sample at $z_{\text{next}}$.

We represent the present GP surrogate built using the available training samples by the subscript P for convenience. The present surrogate predicted Gaussian distribution at any $z$ is

\[
G_P(z) = \mathcal{N}\big(\mu_P(z), \sigma_P^2(z)\big),
\]

where $\mu_P(z)$ is the posterior mean and $\sigma_P^2(z)$ is the posterior prediction variance of the present GP surrogate for the high-fidelity model built using the training data available at the current iteration.

A hypothetical future GP surrogate can be understood as a surrogate built using the current GP as a generative model to create hypothetical future simulated data. The hypothetical future simulated data $y_F$ is obtained from the present GP surrogate prediction at the location $z_{\text{next}}$ using a possible future information source $\ell$. We represent a hypothetical future GP surrogate by the subscript F. Then the hypothetical future surrogate predicted Gaussian distribution at any $z$ is

\[
G_F(z) = \mathcal{N}\big(\mu_F(z), \sigma_F^2(z)\big).
\]

The posterior mean $\mu_F(z)$ of the hypothetical future GP is an affine function of the simulated data $y_F$ [21]. The posterior variance of the hypothetical future GP surrogate depends only on the location $z_{\text{next}}$ and the source $\ell$, not on the value of $y_F$. Note that we do not need any new evaluations of the information source for constructing the future GP. The total lookahead information gain is obtained by integrating over all possible values of $y_F$ as described below.

Since both $G_P(z)$ and $G_F(z)$ are Gaussian distributions, we can write the KL divergence between them explicitly. The KL divergence between $G_P(z)$ and $G_F(z)$ for any $z$ is

\[
D_{\mathrm{KL}}\big(G_P(z) \,\|\, G_F(z)\big) = \log\left(\frac{\sigma_F(z)}{\sigma_P(z)}\right) + \frac{\sigma_P^2(z) + \big(\mu_P(z) - \mu_F(z)\big)^2}{2\sigma_F^2(z)} - \frac{1}{2}. \tag{9}
\]
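The KL divergence between two univariate Gaussians has a simple closed form; a sketch, with the present (P) and hypothetical future (F) prediction parameters as inputs:

```python
import math

def kl_gaussian(mu_p, sigma_p, mu_f, sigma_f):
    # D_KL( N(mu_p, sigma_p^2) || N(mu_f, sigma_f^2) ): divergence of the
    # present GP prediction from a hypothetical future one.
    return (math.log(sigma_f / sigma_p)
            + (sigma_p ** 2 + (mu_p - mu_f) ** 2) / (2.0 * sigma_f ** 2)
            - 0.5)

kl_same    = kl_gaussian(0.0, 1.0, 0.0, 1.0)  # identical distributions
kl_shifted = kl_gaussian(0.0, 1.0, 1.0, 1.0)  # mean shifted by one sigma
```

The divergence is zero only when the two predictions coincide, so a large value flags a point whose prediction would change substantially after the candidate evaluation.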

The total KL divergence can then be calculated by integrating over the entire random variable space, as given by

\[
\bar{D}_{\mathrm{KL}}(z_{\text{next}}, \ell, y_F) = \int_{\Omega} D_{\mathrm{KL}}\big(G_P(z) \,\|\, G_F(z)\big)\, \mathrm{d}z. \tag{10}
\]

The total lookahead information gain for any candidate source $\ell$ can then be calculated by taking the expectation of Equation (10) over all possible values of the simulated data $y_F$, as given by

\[
\mathrm{IG}(z_{\text{next}}, \ell) = \mathbb{E}_{y_F}\big[\bar{D}_{\mathrm{KL}}(z_{\text{next}}, \ell, y_F)\big], \tag{11}
\]

where $y_F \sim \mathcal{N}\big(\mu(\ell, z_{\text{next}}), \sigma^2(\ell, z_{\text{next}})\big)$ is drawn from the present surrogate prediction.

In practice, we choose a discrete set $\mathcal{Z} \subset \Omega$ via Latin hypercube sampling to numerically integrate Equation (11), as given by

\[
\mathrm{IG}(z_{\text{next}}, \ell) \approx \sum_{z \in \mathcal{Z}} \mathbb{E}_{y_F}\Big[D_{\mathrm{KL}}\big(G_P(z) \,\|\, G_F(z)\big)\Big]. \tag{12}
\]

The total lookahead information gain evaluated using Equation (12) gives a metric of global information gain over the entire random variable space. However, we are interested in gaining more information around the failure boundary. In order to give more importance to gaining information around the failure boundary, we use a weighted version of the lookahead information gain normalized by the cost of the information source. In this work, we explore three different weighting strategies: (i) no weights, $w(z) = 1$; (ii) weights defined by the EFF, $w(z) = EF(z)$; and (iii) weights defined by the probability of feasibility (PF), $w(z) = PF(z)$. The PF of the sample lying within the $\epsilon(z)$ bounds around the zero contour is

\[
PF(z) = \Phi\!\left(\frac{\epsilon(z) - \mu(z)}{\sigma(z)}\right) - \Phi\!\left(\frac{-\epsilon(z) - \mu(z)}{\sigma(z)}\right). \tag{13}
\]

Weighting the information gain by either expected feasibility or probability of feasibility gives more importance to gaining information around the target region, in this case, the failure boundary.
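The probability-of-feasibility weight follows directly from the Gaussian CDF; a sketch, with $\mu$ and $\sigma$ standing in for a hypothetical GP prediction and $\epsilon = 2\sigma$:

```python
import math

def prob_feasibility(mu, sigma, eps):
    # Probability that the prediction N(mu, sigma^2) lies within +/- eps
    # of the zero contour; used to weight information gain near the
    # candidate failure boundary.
    Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    return Phi((eps - mu) / sigma) - Phi((-eps - mu) / sigma)

pf_on  = prob_feasibility(0.0, 1.0, 2.0)   # centered on the contour
pf_far = prob_feasibility(10.0, 1.0, 2.0)  # far from the contour
```

Points whose predictions straddle the contour get weights near one, while points far from it are effectively ignored in the weighted sum.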

The next information source is selected by maximizing the weighted lookahead information gain normalized by the cost of the information source, as given by

\[
\ell_{\text{next}} = \operatorname*{arg\,max}_{\ell \in \{0, \ldots, k\}} \; \frac{1}{c_\ell(z_{\text{next}})} \sum_{z \in \mathcal{Z}} w(z)\, \mathbb{E}_{y_F}\Big[D_{\mathrm{KL}}\big(G_P(z) \,\|\, G_F(z)\big)\Big]. \tag{14}
\]
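Once the weighted information gain per source is available, the selection rule itself is a one-line argmax. The gains and costs below are hypothetical numbers chosen only to illustrate the trade-off:

```python
def select_source(weighted_ig, cost):
    # Pick the information source with the largest weighted lookahead
    # information gain per unit cost (the argmax of the selection stage).
    return max(weighted_ig, key=lambda l: weighted_ig[l] / cost[l])

# Hypothetical numbers: the high-fidelity source (0) gains more raw
# information, but the low-fidelity source (1) is 100x cheaper, so it
# wins on information gain per unit cost.
gains = {0: 3.0, 1: 1.2}
costs = {0: 1.0, 1: 0.01}
chosen = select_source(gains, costs)
```

Dividing by cost is what lets a cheap low-fidelity model win many iterations even when the high-fidelity model would be individually more informative.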

3.5 Algorithm and implementation details

An algorithm describing the mfEGRA method is given in Algorithm 1. In this work, we evaluate all the models at the initial DOE. We generate the initial samples using Latin hypercube sampling and run all the models at each of those samples to get the initial training set . In practice, we choose a fixed set of realizations at which the information gain is evaluated as shown in Equation (12) for all iterations of mfEGRA. Due to the typically high cost associated with the high-fidelity model, we evaluate all the models when the high-fidelity model is selected as the information source and update the GP hyperparameters in our implementation. All the model evaluations can be done in parallel. The algorithm is stopped when the maximum value of EFF goes below . However, other stopping criteria can also be explored.

Input:  Initial DOE , cost of each information source
Output:  Refined multifidelity GP

1:procedure mfEGRA()
2:      set of training samples
3:     Build initial multifidelity GP using the initial set of training samples
4:     while stopping criterion is not met do
5:         Select next sampling location using Equation (8)
6:         Select next information source using Equation (14)
7:         Evaluate at sample using information source
8:         
9:         Build updated multifidelity GP using
10:         
11:     end while
12:     return
13:end procedure
Algorithm 1 Multifidelity EGRA
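The loop in Algorithm 1 can be sketched end-to-end with stand-ins for the surrogate and the two selection stages. Everything below (the stub surrogate whose variance shrinks at evaluated points, the fixed candidate grid, the tolerance, the trivial source selector) is an illustrative assumption, not the paper's implementation:

```python
import math

def Phi(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def phi(t):
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def expected_feasibility(mu, sigma, eps):
    # EGRA's closed-form EFF for the zero contour.
    a, lo, hi = -mu / sigma, (-eps - mu) / sigma, (eps - mu) / sigma
    return (mu * (2.0 * Phi(a) - Phi(lo) - Phi(hi))
            - sigma * (2.0 * phi(a) - phi(lo) - phi(hi))
            + eps * (Phi(hi) - Phi(lo)))

def mfegra(predict, evaluate, select_source, candidates, eff_tol, max_iter=50):
    # predict(z) -> (mu, sigma) of the current surrogate;
    # evaluate(z, l) runs source l at z and updates the surrogate;
    # select_source(z) implements the information-gain stage.
    history = []
    for _ in range(max_iter):
        # Stage 1: location of maximum expected feasibility (eps = 2 sigma).
        scored = []
        for z in candidates:
            mu, s = predict(z)
            scored.append((expected_feasibility(mu, s, 2.0 * s), z))
        best_eff, z_next = max(scored)
        if best_eff < eff_tol:        # stopping criterion
            break
        # Stage 2: pick the information source, then evaluate and update.
        l_next = select_source(z_next)
        evaluate(z_next, l_next)
        history.append((z_next, l_next))
    return history

# Stand-in surrogate: fixed mean z - 0.5 (contour at z = 0.5); each
# evaluation at z shrinks the local prediction standard deviation.
candidates = [0.0, 0.25, 0.5, 0.75, 1.0]
sig = {z: 1.0 for z in candidates}

def predict(z):
    return z - 0.5, sig[z]

def evaluate(z, l):
    sig[z] *= 0.25

history = mfegra(predict, evaluate, lambda z: 0, candidates, eff_tol=0.01)
```

The sketch terminates once every candidate's EFF drops below the tolerance, mirroring the stopping criterion in the algorithm.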

4 Results

In this section, we demonstrate the effectiveness of the proposed mfEGRA method on an analytic multimodal test problem and an acoustic horn application. The probability of failure is estimated through Monte Carlo simulation using the adaptively refined multifidelity GP surrogate.

4.1 Analytic multimodal test problem

The analytic test problem used in this work has two inputs and three models with different fidelities and costs. This test problem has been used before in the context of reliability analysis in Ref. [11]. The high-fidelity model of the limit state function is

(15)

where and are uniformly distributed random numbers. The domain of the function is . The two low-fidelity models are

(16)
(17)

The cost of each fidelity model is taken to be constant over the entire domain and is given by . In this case, there is no noise in the observations from the different fidelity models. The failure boundary is defined by the zero contour of the limit state function () and the failure of the system is defined by . Figure 3 shows the contour plot of for the three models used for the analytic test problem along with the failure boundary predicted by each of them.

Figure 3: Contours of using the three fidelity models for the analytic test problem. Solid red line represents the zero contour that denotes the failure boundary.

We use an initial DOE of size 10 generated using Latin hypercube sampling. All the models are evaluated at these 10 samples to build the initial multifidelity surrogate. The reference probability of failure is calculated to be using Monte Carlo samples of the high-fidelity model. The relative error in the probability of failure estimate using the adaptively refined multifidelity GP surrogate is used to assess the accuracy and computational efficiency of the proposed method. We repeat the calculations for 100 different initial DOEs to get confidence bands on the results.

We first compare the accuracy of the method when different weights are used for the information gain criterion in mfEGRA, as seen in Figure 4. We can see that using weighted information gain (both EFF and PF) performs better than using no weights when comparing the error confidence bands. EFF-weighted information gain leads to only marginally lower errors in this case as compared to PF-weighted information gain. Since we do not see any significant advantage of using PF as weights, and since we already use the EFF-based criterion to select the sample location, we use EFF-weighted information gain for implementation convenience. Note that for other problems, PF-weighted information gain may perform better. From here on, mfEGRA is used with the EFF-weighted information gain.

Figure 4: Effect of different weights for information gain criterion in mfEGRA for analytic test problem in terms of convergence of relative error in prediction (shown in log-scale) for 100 different initial DOEs. Solid lines represent the median and dashed lines represent the 25 and 75 percentiles.

The comparison of mfEGRA with single fidelity EGRA shows considerable improvement in accuracy at substantially lower computational cost, as seen in Figure 5. In this case, to reach a median relative error of below in prediction, mfEGRA requires a computational cost of 28 equivalent high-fidelity solves compared to 55 for EGRA (around a 50% reduction). Note that we start both cases with the same 100 sets of initial samples.

Figure 5: Comparison of mfEGRA vs single fidelity EGRA for analytic test problem in terms of convergence of relative error in prediction (shown in log-scale) for 100 different initial DOEs.

Figure 6 shows the evolution of the expected feasibility function and the weighted lookahead information gain, which are the two stages of the adaptive sampling criterion used in mfEGRA. These metrics, along with the relative error in the probability of failure estimate, can be used to define an efficient stopping criterion, particularly when the adaptive sampling needs to be repeated for different sets of parameters (e.g., in reliability-based design optimization).

Figure 6: Evolution of adaptive sampling criteria (a) expected feasibility function, and (b) weighted information gain used in mfEGRA for 100 different initial DOEs.

Figure 7 shows the progress of mfEGRA at several iterations for a particular initial DOE. mfEGRA explores most of the domain using the cheaper and models in this case. The algorithm is stopped after 134 iterations when the expected feasibility function reached below ; we can see that the surrogate contour accurately traces the true failure boundary defined by the high-fidelity model. In this case, mfEGRA makes a total of 35 evaluations of , 126 evaluations of , and 53 evaluations of including the initial DOE, to reach a value of EFF below .

Figure 7: Progress of mfEGRA at several iterations showing the surrogate prediction and the samples from different models for a particular initial DOE. HF refers to high-fidelity model , LF1 refers to low-fidelity model , and LF2 refers to low-fidelity model .

4.2 Acoustic horn

We demonstrate the effectiveness of mfEGRA for the reliability analysis of an acoustic horn. The acoustic horn model used in this work has been used in the context of robust optimization by Ng et al. [24]. An illustration of the acoustic horn is shown in Figure 8. The inputs to the system are the three random variables listed in Table 1.

Random variable   Description                 Distribution   Lower bound   Upper bound   Mean   Standard deviation
k                 wave number                 Uniform        1.3           1.5           --     --
z_u               upper horn wall impedance   Normal         --            --            50     3
z_l               lower horn wall impedance   Normal         --            --            50     3
Table 1: Random variables used in the acoustic horn application.
Figure 8: Two-dimensional acoustic horn geometry with and shape of the horn flare described by six equally-spaced half-widths [24]

The output of the model is the reflection coefficient, which is a measure of the horn's efficiency. We define the failure of the system in terms of the reflection coefficient exceeding a threshold, and the limit state function is defined so that the failure boundary is its zero contour. We use a two-dimensional acoustic horn model governed by the non-dimensional Helmholtz equation. In this case, a finite element model of the Helmholtz equation is the high-fidelity model with 35895 nodal grid points. The low-fidelity model is a reduced basis model with basis vectors [24, 25]. In this case, the low-fidelity model is 40 times cheaper to evaluate than the high-fidelity model. The cost of evaluating the different models is taken to be constant over the entire random variable space. A more detailed description of the acoustic horn models used in this work can be found in Ref. [24].

The reference probability of failure is estimated to be using Monte Carlo samples of the high-fidelity model. We repeat the mfEGRA and the single fidelity EGRA runs using 10 different initial DOEs with 10 samples each (generated using Latin hypercube sampling) to get confidence bands on the results. The comparison of the convergence of the relative error in the probability of failure estimate is shown in Figure 9 for mfEGRA and single fidelity EGRA. In this case, mfEGRA needs 19 equivalent high-fidelity solves to reach a median relative error of below as compared to 25 required by single fidelity EGRA, leading to a 24% reduction in computational cost. The reduction in computational cost using mfEGRA is driven by the discrepancy between the models and the relative cost of evaluating the models. In the acoustic horn case, we see computational savings of 24% as compared to around 50% for the analytic test problem in Section 4.1. This can be explained by the substantial difference in relative costs: the acoustic horn problem has a single low-fidelity model that is 40 times cheaper than the high-fidelity model, whereas the analytic test problem has two low-fidelity models that are 100-1000 times cheaper than the high-fidelity model. The evolution of the mfEGRA adaptive sampling criteria can be seen in Figure 10.

Figure 9: Comparing relative error in the estimate of probability of failure (shown in log-scale) using mfEGRA and single fidelity EGRA for the acoustic horn application with 10 different initial DOEs.
Figure 10: Evolution of adaptive sampling criteria (a) expected feasibility function, and (b) weighted information gain for the acoustic horn application with 10 different initial DOEs.

Figure 11 shows that classification of the Monte Carlo samples using the high-fidelity model and using the adaptively refined surrogate model for a particular initial DOE leads to very similar results. It also shows that the acoustic horn application has two disjoint failure regions, and the method is able to accurately capture both. The location of the samples from the different models when mfEGRA is used to refine the multifidelity GP surrogate for a particular initial DOE can be seen in Figure 12. The figure shows that most of the high-fidelity samples are selected around the failure boundary. For this DOE, mfEGRA requires 28 evaluations of the high-fidelity model and 69 evaluations of the low-fidelity model to reach an EFF value below .

Figure 11: Classification of Monte Carlo samples using (a) high-fidelity model, and (b) the final refined multifidelity GP surrogate for a particular initial DOE using mfEGRA for the acoustic horn problem.
Figure 12: Location of samples from different fidelity models using mfEGRA for the acoustic horn problem for a particular initial DOE. The cloud of points are the high-fidelity Monte Carlo samples near the failure boundary.

Similar to the work in Refs. [13, 15], mfEGRA can also be implemented by limiting the search space for the adaptive sampling location in Equation (8) to a set of Monte Carlo samples drawn from the given random variable distribution. As Figure 13 shows, the convergence of the relative error in the probability of failure estimate improves for both mfEGRA and single fidelity EGRA under this restriction. In this case, mfEGRA requires 12 equivalent high-fidelity solves, compared to 21 for single fidelity EGRA, to reach a median relative error below the target value, leading to computational savings of around 43%.
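Restricting the search in this way replaces a continuous optimization of the sampling criterion with a discrete argmax over a precomputed candidate pool. A minimal sketch, assuming a generic acquisition function `acq` standing in for the EFF evaluated through the surrogate (the names here are illustrative):

```python
import numpy as np

def next_sample(acq, candidates):
    """Pick the next adaptive sampling location as the argmax of the
    acquisition value over a fixed pool of Monte Carlo candidate points."""
    values = np.asarray([acq(x) for x in candidates])
    return candidates[int(np.argmax(values))]

rng = np.random.default_rng(1)
pool = rng.standard_normal(1000)      # candidate pool drawn from the input distribution
acq = lambda x: -abs(x - 0.5)         # toy criterion peaked at a boundary at x = 0.5
x_next = next_sample(acq, pool)       # candidate in the pool closest to the boundary
```

A side benefit of this restriction is that the pool concentrates candidates where the input distribution has mass, so refinement effort is not spent on regions that contribute negligibly to the probability of failure.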

Figure 13: Comparing relative error in the estimate of probability of failure (shown in log-scale) using mfEGRA and single fidelity EGRA by limiting the search space for adaptive sampling location to a set of Monte Carlo samples drawn from the given random variable distribution for the acoustic horn application with 10 different initial DOEs.

5 Concluding remarks

This paper introduces mfEGRA, a multifidelity extension of EGRA that refines a surrogate to accurately locate the limit state function failure boundary (or any contour) while leveraging multiple information sources with different fidelities and costs. The method selects the next sampling location using the expected feasibility function and the next information source using a weighted one-step lookahead information gain criterion, refining the multifidelity GP surrogate of the limit state function around the failure boundary. Two numerical examples show that mfEGRA efficiently combines information from different models to reduce computational cost: when used to estimate the probability of failure, mfEGRA yields computational savings of around 50% for a multimodal test problem and 24% for an acoustic horn problem over the single fidelity EGRA method. Restricting the search space to a priori drawn Monte Carlo samples improved efficiency further, with a 43% reduction in computational cost compared to the single fidelity method for the acoustic horn problem. The driving factors behind these savings are the discrepancy between the high- and low-fidelity models and the cost of the low-fidelity models relative to the high-fidelity model. This information is encoded directly in the mfEGRA adaptive sampling criterion, helping it make the most efficient decisions.

Acknowledgements

This work has been supported in part by the Air Force Office of Scientific Research (AFOSR) MURI on managing multiple information sources of multi-physics systems award numbers FA9550-15-1-0038 and FA9550-18-1-0023, the Air Force Center of Excellence on multi-fidelity modeling of rocket combustor dynamics award FA9550-17-1-0195, and the Department of Energy Office of Science AEOLUS MMICC award DE-SC0019303.

References

  • [1] Melchers, R., “Importance sampling in structural systems,” Structural Safety, Vol. 6, No. 1, 1989, pp. 3–10.
  • [2] Liu, J. S., Monte Carlo strategies in scientific computing, Springer Science & Business Media, 2008.
  • [3] Kroese, D. P., Rubinstein, R. Y., and Glynn, P. W., “The cross-entropy method for estimation,” Handbook of Statistics, Vol. 31, Elsevier, 2013, pp. 19–34.
  • [4] Au, S.-K. and Beck, J. L., “Estimation of small failure probabilities in high dimensions by subset simulation,” Probabilistic Engineering Mechanics, Vol. 16, No. 4, 2001, pp. 263–277.
  • [5] Papaioannou, I., Betz, W., Zwirglmaier, K., and Straub, D., “MCMC algorithms for subset simulation,” Probabilistic Engineering Mechanics, Vol. 41, 2015, pp. 89–103.
  • [6] Hohenbichler, M., Gollwitzer, S., Kruse, W., and Rackwitz, R., “New light on first-and second-order reliability methods,” Structural Safety, Vol. 4, No. 4, 1987, pp. 267–284.
  • [7] Rackwitz, R., “Reliability analysis–a review and some perspectives,” Structural Safety, Vol. 23, No. 4, 2001, pp. 365–395.
  • [8] Basudhar, A., Missoum, S., and Sanchez, A. H., “Limit state function identification using support vector machines for discontinuous responses and disjoint failure domains,” Probabilistic Engineering Mechanics, Vol. 23, No. 1, 2008, pp. 1–11.
  • [9] Basudhar, A. and Missoum, S., “Reliability assessment using probabilistic support vector machines,” International Journal of Reliability and Safety, Vol. 7, No. 2, 2013, pp. 156–173.
  • [10] Lecerf, M., Allaire, D., and Willcox, K., “Methodology for dynamic data-driven online flight capability estimation,” AIAA Journal, Vol. 53, No. 10, 2015, pp. 3073–3087.
  • [11] Bichon, B. J., Eldred, M. S., Swiler, L. P., Mahadevan, S., and McFarland, J. M., “Efficient global reliability analysis for nonlinear implicit performance functions,” AIAA Journal, Vol. 46, No. 10, 2008, pp. 2459–2468.
  • [12] Picheny, V., Ginsbourger, D., Roustant, O., Haftka, R. T., and Kim, N.-H., “Adaptive designs of experiments for accurate approximation of a target region,” Journal of Mechanical Design, Vol. 132, No. 7, 2010, pp. 071008.
  • [13] Echard, B., Gayton, N., and Lemaire, M., “AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation,” Structural Safety, Vol. 33, No. 2, 2011, pp. 145–154.
  • [14] Dubourg, V., Sudret, B., and Bourinet, J.-M., “Reliability-based design optimization using kriging surrogates and subset simulation,” Structural and Multidisciplinary Optimization, Vol. 44, No. 5, 2011, pp. 673–690.
  • [15] Bect, J., Ginsbourger, D., Li, L., Picheny, V., and Vazquez, E., “Sequential design of computer experiments for the estimation of a probability of failure,” Statistics and Computing, Vol. 22, No. 3, 2012, pp. 773–793.
  • [16] Chevalier, C., Bect, J., Ginsbourger, D., Vazquez, E., Picheny, V., and Richet, Y., “Fast parallel kriging-based stepwise uncertainty reduction with application to the identification of an excursion set,” Technometrics, Vol. 56, No. 4, 2014, pp. 455–465.
  • [17] Moustapha, M. and Sudret, B., “Surrogate-assisted reliability-based design optimization: a survey and a unified modular framework,” Structural and Multidisciplinary Optimization, 2019, pp. 1–20.
  • [18] Peherstorfer, B., Willcox, K., and Gunzburger, M., “Survey of multifidelity methods in uncertainty propagation, inference, and optimization,” SIAM Review, Vol. 60, No. 3, 2018, pp. 550–591.
  • [19] Dribusch, C., Missoum, S., and Beran, P., “A multifidelity approach for the construction of explicit decision boundaries: application to aeroelasticity,” Structural and Multidisciplinary Optimization, Vol. 42, No. 5, 2010, pp. 693–705.
  • [20] Marques, A., Lam, R., and Willcox, K., “Contour location via entropy reduction leveraging multiple information sources,” Advances in Neural Information Processing Systems, 2018, pp. 5217–5227.
  • [21] Poloczek, M., Wang, J., and Frazier, P., “Multi-information source optimization,” Advances in Neural Information Processing Systems, 2017, pp. 4291–4301.
  • [22] Lam, R., Allaire, D., and Willcox, K., “Multifidelity optimization using statistical surrogate modeling for non-hierarchical information sources,” 56th AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 2015.
  • [23] Rasmussen, C. E. and Nickisch, H., “Gaussian processes for machine learning (GPML) toolbox,” Journal of Machine Learning Research, Vol. 11, No. Nov, 2010, pp. 3011–3015.
  • [24] Ng, L. W. and Willcox, K. E., “Multifidelity approaches for optimization under uncertainty,” International Journal for Numerical Methods in Engineering, Vol. 100, No. 10, 2014, pp. 746–772.
  • [25] Eftang, J. L., Huynh, D., Knezevic, D. J., and Patera, A. T., “A two-step certified reduced basis method,” Journal of Scientific Computing, Vol. 51, No. 1, 2012, pp. 28–58.