An exact framework for uncertainty quantification in Monte Carlo simulation

P Saracco and M G Pia
Abstract

In the context of Monte Carlo (MC) simulation of particle transport, Uncertainty Quantification (UQ) addresses the issue of predicting the non-statistical errors affecting the physical results, i.e. errors deriving mainly from uncertainties in the physics data and/or in the models they embed. In the case of a single uncertainty a simple analytical relation exists between its Probability Density Function (PDF) and the corresponding PDF of the output of the simulation: this allows a complete statistical analysis of the results of the simulation. We examine the extension of this result to the multi-variate case, when more than one physical input parameter is affected by uncertainties: a typical scenario is the prediction of the dependence of the simulation outcome on input cross section tabulations.

  • National Institute for Nuclear Physics (I.N.F.N.)
    Via Dodecaneso, 33
    16146 Genova (Italy)

  • E-mail: paolo.saracco@ge.infn.it, Maria.Grazia.Pia@cern.ch

1 Introduction and problem definition

Uncertainty Quantification (UQ) may refer to a wide variety of specific problems pertaining to different scientific areas: related studies involve many methodologies, techniques and even terminologies [1]-[10], so that it is necessary to better delimit the problem: we are interested in studying how uncertainties of the input data needed for physics simulation transfer into the output. Even within this more restricted domain the problem is quite complex, because the meaning of "input data needed for physics simulation" is rather undetermined: it may refer to the data needed to specify the experimental setup to be simulated - like, e.g., geometrical data or the composition of the materials - to some conditions externally applied to the system - like temperature, pressure or electromagnetic fields - or to physical data - cross sections or, often, physical models - needed to describe the transport of particles. All these data are generally affected by uncertainties, which necessarily imply uncertainties in the output of the simulation. So, from a responsible MC user's perspective, input data should refer both to those that are under their direct experimental control and to the (many) other data whose values and uncertainties come from different experiments: it is remarkable that the latter are often not considered. In this contribution we summarize previous results [11, 12] together with some new ones.

To give our analysis a well established mathematical ground we make a fundamental assumption: to be able to disentangle input data uncertainty from the process of simulation itself. In other words we assume that the process of simulation relies on the knowledge of the values of some parameters $\lambda = (\lambda_1, \dots, \lambda_n)$, with their associated uncertainties, and that these values cannot vary during the simulation (or that these values do not depend on the simulation itself): this assumption allows us to assume as known a priori the Probability Density Function (PDF) $g(\lambda)$ of the input parameters, which can then be treated as stochastic variables.

This is our main hypothesis: its validity in the generic case must be assessed by users case by case.

We just mention two typical cases in which this hypothesis may be invalid or questionable. The first concerns the local temperature of some experimental apparatus we want to simulate: to some extent it can be assumed as fixed by external conditions with their associated uncertainties (the external temperature and/or the cooling system of the apparatus, if it exists), but significant local temperature variations can be present, depending on the local rate of energy deposition due to the transported particles. This last quantity is indeed tracked by the simulation itself, but its consequence in terms of local temperature variation is not normally taken into account automatically by the MC code: the relation between energy deposition and local temperature variations can be accounted for by coupling the MC simulation with some thermo-mechanical code or model, but even if this can be done, not all MC codes are able to use different cross section data (cross section data at different temperatures) in the same simulation. The second case concerns the density of some isotopes, which can vary due to activation/decay processes.

Under the quoted assumption the probability that the MC simulation of some physical observable $F$ yields a result between $F$ and $F + dF$ has density

f_N(F) = \int d\lambda \; g(\lambda) \, \sqrt{\frac{N}{2\pi\,\sigma^2(\lambda)}} \, \exp\!\left[ -\frac{N\,\bigl(F - \overline{F}(\lambda)\bigr)^2}{2\,\sigma^2(\lambda)} \right]    (1)

for a simulation encompassing $N$ events. Here $\overline{F}(\lambda)$ is the sampled mean of the physical observable when the input unknowns assume the values $\lambda = (\lambda_1, \dots, \lambda_n)$ and $\sigma^2(\lambda)$ is its sampled variance. This result derives from the Central Limit Theorem (CLT), provided $N$ is sufficiently large to make $\sigma^2(\lambda)$ independent of $N$ itself. In the limit $N \to \infty$

f_\infty(F) = \lim_{N\to\infty} f_N(F) = \int d\lambda \; g(\lambda) \, \delta\!\bigl(F - \overline{F}(\lambda)\bigr).    (2)

Equation (1) mathematically states how uncertainties in the input data exactly transfer into our knowledge of the observable, because it assigns a probability to each possible outcome of this observable.

Even if we derived (2) from (1), we could as well proceed in the opposite direction: (2) represents the forward propagation of uncertainty for a deterministic problem having solution $\overline{F}(\lambda)$ which, in turn, depends (parametrically, not dynamically) on some unknown input values $\lambda$ whose PDF is given by $g(\lambda)$. Searching for the solution of such a deterministic problem by MC simulation - that is, going back from (2) to (1) - results in a (Gaussian) statistical blurring of the exact result of the underlying deterministic problem. This dual interpretation of (1) and (2) is crucial to our investigation: it is a clear indication of the natural goal of any UQ project - the determination of $f_\infty(F)$ - and of its prerequisite - the knowledge of $g(\lambda)$. Unfortunately, but quite obviously, the determination of $f_\infty(F)$ also requires the knowledge of the parametric dependence of the exact solution $\overline{F}(\lambda)$ on the input uncertainties and of their PDFs: except for very simple examples we are not able to exploit this dependence analytically, but we can clearly use simulation to extract the required information, as we shall see. It is relevant to stress that our attention has turned to the determination of the exact form of $f_\infty(F)$ rather than that of $f_N(F)$: with this passage we obtain an important result both on the conceptual side - as we have seen - and on the practical one. In fact the inherent dependency of $f_N(F)$ on the details of the simulation (from (1) the dependence on the number of generated events $N$ is explicit, but $f_N$ obviously depends also on the specific way the MC simulation is implemented) makes it impossible to determine it other than by statistical sampling: this implies the necessity of running a very large number of simulations, each time sampling $\lambda$ out of $g(\lambda)$. On the contrary the determination of the dependence $\overline{F}(\lambda)$ can be accurately obtained with fewer runs, as we shall discuss.
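As a purely illustrative sketch of this dual interpretation, the following Python fragment uses an invented linear response $\overline{F}(\lambda)$ and a normal $g(\lambda)$ (both arbitrary assumptions, not part of our framework) to sample the input unknowns from $g$ and compare the exact propagation of equation (2) with the Gaussian blurring of equation (1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented linear response: stands in for the exact solution F_bar(lambda)
# of the underlying deterministic problem.
def F_bar(lam):
    return 2.0 + 0.5 * lam

sigma = 1.0          # assumed per-event spread, taken constant
N = 10_000           # events per MC run
runs = 50_000        # number of sampled input configurations

# Input uncertainty: lambda distributed according to g(lambda), here a normal PDF.
lam = rng.normal(loc=1.0, scale=0.2, size=runs)

# Exact propagation, equation (2): only g(lambda) smears the result.
F_exact = F_bar(lam)

# Equation (1): each run adds the CLT (gaussian) blurring of width sigma/sqrt(N).
F_mc = F_bar(lam) + rng.normal(0.0, sigma / np.sqrt(N), size=runs)

# For large N the two spreads coincide: the blurring becomes negligible.
print(F_exact.std(), F_mc.std())
```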

Uncertainty quantification directly derives from (2): in fact, if we know $f_\infty(F)$, the determination of any required statistical information on the physical output, e.g. its confidence intervals, given the unknown input parameters is straightforward. So the task of any UQ project is not the study of how uncertainties of the input data needed for physics simulation transfer into the output of the simulation, that is the determination of $f_N(F)$, but rather the use of simulation to determine the properties of $f_\infty(F)$ as accurately as possible. We note, en passant, that the techniques we are going to develop to study $f_\infty(F)$ are applicable to contexts more general than simulation, for instance when simulations are coupled to deterministic codes.

Moreover we learn that the necessary prerequisite is a knowledge, as precise as possible, of the PDF $g(\lambda)$ of the input data, because it is directly transferred into the probability distribution of the physical output.

In most cases uncertainties in the data which are under the user's control are independent of uncertainties in the physical data needed for transport. Our approach thus enables us to clearly distinguish two very different phases of UQ projects: (i) a validation phase and (ii) a problem specific analysis phase.

  1. The validation phase - All physical data needed for transport should be validated: all general purpose MC codes make use of cross section tabulations as well as of physical models. All these data should be carefully analyzed, giving users at least the knowledge of their confidence intervals (better, of their PDFs). It should be clear that this cannot be a user's responsibility: without a previous validation phase any UQ project is meaningless. On the other hand, how relevant these data individually are in any given problem cannot be established a priori.

  2. The problem specific analysis phase - Most realistic situations under simulation may a priori involve hundreds of physical parameters in their full definition: they include parameters whose confidence intervals (or PDFs) come from the validation phase, as well as problem specific parameters, like, e.g., geometrical or material composition parameters; for the latter it is the user's responsibility to establish at least proper confidence intervals, or better their PDFs. It will become clear that a full UQ process over all parameters is beyond practical possibilities and largely meaningless. So a very detailed analysis of the problem at hand should be carried out to identify those parameters which are likely to be the most relevant in the given experimental configuration. The methods we shall present in Section 2 can be applied to the selected set of parameters. If necessary, successive iterations must be carried out on other parameters.

2 The path to UQ

Once we have selected a set of parameters $\lambda_1, \dots, \lambda_n$ on which to perform a UQ (or sensitivity) analysis, we can proceed on the basis of (1) and (2): in most cases two further assumptions hold, namely (i) the variations of the input parameters are independent and (ii) $\overline{F}(\lambda)$ is linear in each of the parameters within its range of variability. We will discuss these assumptions later. If they hold, (2) simplifies greatly to

f_\infty(F) = \int d\lambda_1 \cdots d\lambda_n \; \prod_{k=1}^{n} g_k(\lambda_k) \; \delta\!\Bigl(F - \overline{F}(\bar\lambda) - \sum_{k=1}^{n} c_k\,(\lambda_k - \bar\lambda_k)\Bigr)    (3)

where the central values $\bar\lambda_k$ and the individual PDFs $g_k(\lambda_k)$ can be assumed as known from the validation phase or from the problem specific analysis phase. The problem defined by (3) is a completely defined mathematical problem once we are able to determine $\overline{F}(\bar\lambda)$ and the coefficients $c_k$: it is obvious that we can obtain a statistical estimate of these required values with a predetermined accuracy by means of $n+1$ simulations, each run with a different set of input parameters, namely $\bar\lambda = (\bar\lambda_1, \dots, \bar\lambda_n)$ and $\bar\lambda^{(k)} = (\bar\lambda_1, \dots, \bar\lambda_k + \Delta\lambda_k, \dots, \bar\lambda_n)$, so that, indicating with $F_{\rm MC}(\lambda)$ the output of the simulation for a given set of values of the input parameters,

c_k \simeq \frac{F_{\rm MC}(\bar\lambda^{(k)}) - F_{\rm MC}(\bar\lambda)}{\Delta\lambda_k} .

Obviously these values are affected by the statistical errors of the simulation, which are of the order of $\sigma(\bar\lambda)/\sqrt{N}$ (we can naturally assume $\sigma(\lambda)$ to be constant for small variations of the parameters): then a rough estimate of the number of events needed in each of the simulations comes from requiring

\frac{\sigma(\bar\lambda)}{\sqrt{N}} \ll |c_k|\, \Delta\lambda_k .

It should be realized that the use of (3) entails a large reduction in the required computer time with respect to any attempt to directly determine $f_N(F)$, a task that would require thousands of MC runs.

From (3) it is easy to extract the mean and the variance of the output; if for instance $\langle \lambda_k \rangle = \bar\lambda_k$, which is often a convenient choice,

\langle F \rangle = \overline{F}(\bar\lambda), \qquad \sigma_F^2 = \sum_{k=1}^{n} c_k^2\, \sigma_k^2

so that $\sigma_F = \bigl(\sum_k c_k^2 \sigma_k^2\bigr)^{1/2}$, where $\sigma_k^2$ is the variance of $g_k$. This quantity can be assumed as a first estimate of the uncertainty, but this is really correct only if $f_\infty(F)$ is - at least approximately - a normal distribution: this is certainly true if the $g_k$ are normal distributions as well, but not in other cases. So we are left with a classical problem in probability theory, namely the determination of the distribution of the weighted sum of independent stochastic variables obeying (a priori) arbitrary distributions.
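A minimal numerical sketch of this procedure, assuming a hypothetical two-parameter problem with a toy run_mc function standing in for the real transport code, estimates the coefficients $c_k$ from $n+1$ runs and forms the quadrature estimate $\sigma_F$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a full MC run: returns the sampled mean of F for
# input parameters lam, with statistical noise of order sigma/sqrt(N).
def run_mc(lam, N=1_000_000, sigma=1.0):
    true_mean = 3.0 + 0.8 * lam[0] - 0.3 * lam[1]   # assumed linear response
    return true_mean + rng.normal(0.0, sigma / np.sqrt(N))

lam_bar   = np.array([1.0, 2.0])    # central values (validation / analysis phase)
delta_lam = np.array([0.1, 0.2])    # shifts used to probe the response
sigma_in  = np.array([0.05, 0.10])  # standard deviations of the input PDFs g_k

# n + 1 runs: one central run plus one shifted run per parameter.
F0 = run_mc(lam_bar)
c = np.empty(lam_bar.size)
for k in range(lam_bar.size):
    shifted = lam_bar.copy()
    shifted[k] += delta_lam[k]
    c[k] = (run_mc(shifted) - F0) / delta_lam[k]    # finite-difference coefficient

# First estimate of the output uncertainty (exact only if f_inf is ~ normal).
sigma_F = np.sqrt(np.sum((c * sigma_in) ** 2))
print(F0, c, sigma_F)
```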

In some cases the calculation of $f_\infty(F)$ can be carried out exactly. We quoted the case when all the $g_k$ are normal distributions: this is a specific example of a general class of distributions whose characteristic functions - the Fourier transforms of the given distributions - are closed under product: these are the so-called stable distributions. In probability theory, a random variable is said to be stable (or to have a stable distribution) if a linear combination of two independent copies of the variable has the same distribution, up to location and scale parameters. The stable distribution family is also sometimes referred to as the Lévy $\alpha$-stable family [13]. Distributions belonging to this family [14, 15] have characteristic functions of the form

\varphi(t; \alpha, \beta, c, \mu) = \exp\!\left[\, i t \mu - |c\,t|^{\alpha} \bigl(1 - i \beta\, \mathrm{sgn}(t)\, \Phi \bigr) \right]    (4)

where $\Phi = \tan(\pi\alpha/2)$ if $\alpha \neq 1$ and $\Phi = -(2/\pi)\log|t|$ if $\alpha = 1$. This is a 4-parameter ($\alpha$, $\beta$, $c$ and $\mu$) family which is closed under product for fixed $\alpha$: the analytic form of the corresponding PDF is known only for some special values of the parameters. Among these we mention the normal distribution ($\alpha = 2$, $\beta = 0$), the Cauchy distribution ($\alpha = 1$, $\beta = 0$), and the Lévy distribution ($\alpha = 1/2$, $\beta = 1$). All these distributions apart from the normal one are heavy tailed, that is their behavior for large $|x|$ is $\sim |x|^{-(1+\alpha)}$. In the limit $c \to 0$ they approach a Dirac delta.

The most common case of applicability of these properties to UQ is when we have to analyze a certain number of input parameters all affected by gaussian statistical errors of different size.
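The closure under weighted sums can be checked numerically; the sketch below, with arbitrarily chosen weights and scales, verifies it for gaussian inputs (where the variances add in quadrature) and for Cauchy inputs (where the scales add linearly and the variance does not even exist):

```python
import numpy as np

rng = np.random.default_rng(2)
c = np.array([0.8, -0.3, 1.5])                 # arbitrary weights c_k

# Gaussian inputs: the weighted sum is again gaussian,
# with variance sum_k (c_k * sigma_k)^2.
sigma_k = np.array([0.5, 1.0, 0.2])
x = rng.normal(0.0, sigma_k, size=(200_000, 3))
print((x @ c).std(), np.sqrt(np.sum((c * sigma_k) ** 2)))

# Cauchy inputs (alpha = 1): the weighted sum is again Cauchy,
# with scale sum_k |c_k| * gamma_k; the variance does not exist.
gamma_k = np.array([0.5, 1.0, 0.2])
y = rng.standard_cauchy(size=(200_000, 3)) * gamma_k
t = y @ c
print(np.median(np.abs(t)), np.sum(np.abs(c) * gamma_k))   # both estimate the scale
```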

Another case useful in practice is when we have a certain number of input parameters with flat distributions: this is the case when the input parameters are given in the form $\bar\lambda_k \pm \Delta\lambda_k$. In this case $f_\infty(F)$ is given by a generalization of the Irwin-Hall distribution [16, 17], recently revisited in [18].
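As an illustration of the basic (equal-interval, unit-weight) case, the following sketch compares the empirical distribution of the sum of flat variables with the classical Irwin-Hall PDF; the generalization to different intervals and weights is the result revisited in [18]:

```python
import numpy as np
from math import comb, factorial

def irwin_hall_pdf(x, n):
    """PDF of the sum of n independent U(0,1) variables."""
    ks = range(int(np.floor(x)) + 1)
    return sum((-1) ** k * comb(n, k) * (x - k) ** (n - 1) for k in ks) / factorial(n - 1)

rng = np.random.default_rng(3)
n = 4
s = rng.random((500_000, n)).sum(axis=1)       # empirical sum of n flat variables

# Compare a crude density estimate with the analytic PDF at a few points.
for x in (0.5, 1.0, 2.0, 3.5):
    density = np.mean(np.abs(s - x) < 0.05) / 0.1
    print(x, density, irwin_hall_pdf(x, n))
```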

Unfortunately all these results apply only to cases in which all the input unknowns obey the same type of distribution with different parameters: for instance they can all be normal distributions, with different means and variances, or all flat distributions, with different means and experimental errors, and so on. It should be clear that in a generic UQ problem this is a strong limitation, because we can equally well have input parameters obeying different types of distributions: this problem is clearly not soluble in full generality, so we must work on some approximation scheme in the search for $f_\infty(F)$.

3 A more general result

We recently succeeded in finding an exact analytical expression for the distribution of the weighted sum of independent stochastic variables obeying arbitrary polynomial distributions supported over bounded intervals; the proof can be found in [19]. Here we simply quote our central result and we show how it can be useful in determining $f_\infty(F)$ within some predetermined error. An arbitrary distribution $g(\lambda)$ supported on a bounded interval $[a, b]$ can always be approximated with a predetermined accuracy by a sequence of polynomials, each defined on a different sub-interval of $[a, b]$: the simplest case of such a procedure is to subdivide the interval into sub-intervals $[a_j, a_{j+1}]$ and to approximate the given distribution with the sequence of segments joining the values $g(a_j)$. More accurate results can be obtained with a lower number of sub-intervals by using splines. The accuracy of an approximation is defined with respect to some given norm, measuring the distance between the exact result and its approximation; in our case, for the purpose of studying the propagation of error, it is convenient to make the choice

\| g_1 - g_2 \| = \max_{\lambda \in [a, b]} \bigl| g_1(\lambda) - g_2(\lambda) \bigr|

so that the distance between two functions is the maximum of their absolute differences. We assert without proof that if $g$ is continuous (or if it has at most a finite number of discontinuities) it is always possible to find a subdivision of $[a, b]$ such that the distance between $g$ and its polynomial approximation is bounded by a predefined $\epsilon$ (this is a consequence of the Weierstrass approximation theorem). Then if we have $n$ such distributions, each approximated by this procedure, the maximum error on the distribution of their sum is clearly bounded by the sum of the individual approximation errors, $n\epsilon$; for a weighted sum the bound must be rescaled by the weights $c_k$. This means that with this procedure we can obtain the required distribution with any predetermined error, provided we are able to perform exactly the convolution of generic polynomial forms defined over different intervals (we remind the reader that the PDF of the (weighted) sum of stochastic variables is expressed by the convolution of their (rescaled) PDFs).
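The segment-joining approximation and its sup-norm error can be illustrated by a short sketch; the example PDF below is an arbitrary choice used only to show how the error decreases with the number of sub-intervals:

```python
import numpy as np

a, b = 0.0, 1.0
g = lambda x: 6.0 * x * (1.0 - x)      # example PDF on [a, b] (arbitrary choice)

def sup_norm_error(n_sub, n_test=10_001):
    """Sup-norm distance between g and the piecewise-linear approximation
    obtained by joining the values g(a_j) on n_sub equal sub-intervals."""
    nodes = np.linspace(a, b, n_sub + 1)
    xt = np.linspace(a, b, n_test)
    approx = np.interp(xt, nodes, g(nodes))
    return np.max(np.abs(g(xt) - approx))

for n_sub in (4, 8, 16, 32):
    print(n_sub, sup_norm_error(n_sub))  # the bound epsilon can be made as small as desired
```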

Thanks to the linearity of the convolution this is equivalent to the ability of performing the convolution of monomial forms with arbitrary exponents supported on different bounded intervals: this is exactly what we proved in [19]. The convolution of $n$ monomials that are different from zero only over the intervals $[a_k, b_k]$, $k = 1, \dots, n$, is given there by an exact closed-form expression, which we do not reproduce here; it appears as a generalization of the result of [18]. We remark that the calculation of $f_\infty(F)$ along the quoted guidelines can be tedious and often long, so we are planning to release a dedicated software tool to automate the process.
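While the exact closed form is the result of [19], the convolution can also be cross-checked numerically on discretized densities; the sketch below uses two arbitrarily chosen polynomial PDFs on different bounded intervals:

```python
import numpy as np

dx = 1e-3
x1 = np.arange(0.0, 1.0, dx)            # first variable, supported on [0, 1)
x2 = np.arange(2.0, 4.0, dx)            # second variable, supported on [2, 4)

g1 = 3.0 * x1 ** 2                      # monomial PDF on [0, 1]
g2 = 0.75 * (x2 - 2.0) * (4.0 - x2)     # polynomial PDF on [2, 4]

# Discretized convolution: PDF of the sum, supported on [2, 5).
conv = np.convolve(g1, g2) * dx
x = x1[0] + x2[0] + np.arange(conv.size) * dx

print(np.trapz(conv, dx=dx))            # ~ 1: the convolution is again normalized
print(x[np.argmax(conv)])               # mode of the distribution of the sum
```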

In any case the outlined procedure is always able to give a definite answer on the expected PDF with a predefined accuracy: it is then possible to extract confidence intervals for the required physical quantity, or any other statistical information we may need.

Two sources of errors must be accurately followed up: the first concerns the (statistical) errors in the values required for the calculation ($\overline{F}(\bar\lambda)$ and the coefficients $c_k$) that, as we said, can be extracted from accurate simulations; the second concerns the propagation of errors in the procedure of approximating the individual input PDFs with polynomial forms over finite intervals.

In the most generic case some of the input PDFs can be defined over infinite intervals - for instance they may be normal distributions: if this is the case we can still follow the described procedure, provided we a priori define some confidence level for the whole calculation; that is to say, a normal distribution can be approximated by a distribution over a finite interval with a given confidence. The worst case, for which we cannot easily apply the described procedure, is when some of the input PDFs are heavy tailed.
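As an illustration, a normal input PDF can be replaced by its truncated, renormalized version on a finite interval chosen to contain a prescribed probability; the parameter values and confidence level below are arbitrary examples:

```python
import numpy as np
from scipy.stats import norm

mu, sigma_0 = 1.0, 0.2          # example normal input PDF (arbitrary values)
confidence = 0.999              # prescribed confidence for the whole calculation

k = norm.ppf(0.5 + confidence / 2.0)        # half-width in units of sigma_0
a, b = mu - k * sigma_0, mu + k * sigma_0

# Truncated and renormalized PDF over the finite interval [a, b].
norm_const = norm.cdf(b, mu, sigma_0) - norm.cdf(a, mu, sigma_0)
g_trunc = lambda x: norm.pdf(x, mu, sigma_0) / norm_const

x = np.linspace(a, b, 1001)
print(k, np.trapz(g_trunc(x), x))           # ~ 3.29 sigma; integral ~ 1
```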

4 Conclusions and outlook

Results presented in the previous section provide a well defined mathematical framework to obtain the probability density function for any observable depending on some given set of input physical parameters and on their associated uncertainties.

We made three assumptions on which some comments are useful.

First, we assumed that the values of the input parameters, and implicitly their probability distributions, do not vary during the simulation. On the basis of the discussion we made about equations (1) and (2), this condition should more properly be reformulated as: the input parameters and their probability distributions do not depend on the dynamical evolution of the system. It should be clear that this condition can, at least in principle, be given up: this possibility depends on our ability to build a more complete dynamical description of the system under examination, including also the possible evolution of these dynamically variable parameters. To remain within the examples we made, local temperatures can be derived from some thermo-mechanical or thermo-hydraulic model coupled to particle transport, and isotope concentrations can be obtained by coupling transport to some solver for Bateman's equations; even if this is in principle possible, in practice it requires a major reformulation of the existing simulation codes.

The second major assumption is the statistical independence of the different input uncertainties: we currently do not see any way around this assumption; however it covers the most common case in practice.

The third assumption is the linearity of the response with respect to each of the input parameters $\lambda_k$, see equation (3): it is clearly possible to give up this assumption by properly subdividing the region of integration in (2), as we noted earlier, for example using methods from [20], but this process introduces additional approximation errors whose propagation must be studied case by case for their effect on the precision of our knowledge of $f_\infty(F)$.

So we can conclude that the presented conceptual mathematical framework for UQ is fully consistent and relies in principle only on the assumption of mutual independence of the input unknowns; however its practical usability is not straightforward in its general formulation because of the error tracking process. We therefore plan the release of a dedicated software tool to automate this task.

References


  • [1] Walker W E et al. 2003 Defining uncertainty: a conceptual basis for uncertainty management in model-based decision support Integrated Assessments 4 5-17
  • [2] Marino S, Hogue I B, Ray C J and Kirschner D E 2008 A methodology for performing global uncertainty and sensitivity analysis in systems biology J. Theor. Biol. 7 178-196
  • [3] Levine R A and Berliner L M 1999 Statistical principles for climate change studies J. Climate 12 564-574
  • [4] Suslick S B and Schiozer D J 2004 Risk analysis applied to petroleum exploration and production: an overview J. Petr. Sc. Eng. 44 1-9
  • [5] Bernardini A and Tonon F 2010 Bounding Uncertainties in Civil Engineering (Berlin: Springer-Verlag)
  • [6] Oberkampf W L, DeLand S M, Rutherford B M, Diegert K V and Alvin K F, Error and uncertainty in modeling and simulation 2002 Reliab. Eng. Syst. Safety 75 333-357
  • [7] Swiler L P, Paez T L and Mayes R L 2009 Epistemic Uncertainty Quantification Tutorial Proc. IMAC-XXVII, Conf. and Exposition on Structural Dynamics (Orlando: Soc. Structural Mech) paper 294
  • [8] Helton J C 2011 Quantification of margins and uncertainties: conceptual and computational basis Reliab. Eng. Syst. Safety 96 976-1013
  • [9] Lin G, Engel D W and Esliner P W 2012 Survey and evaluate uncertainty quantification methodologies Pacific Northwest National Laboratory 20914 Report, (Richland: PNNL)
  • [10] Oberkampf W L and Roy C J 2010 Verification and Validation in Scientific Computing Chapter 13 (Cambridge: CUP)
  • [11] Saracco P, Batic M, Hoff G and Pia M G 2012 ”Uncertainty Quantification (UQ) in generic MonteCarlo simulations” Nuclear Science Symposium Conference Record (NSS/MIC) IEEE
  • [12] Saracco P, Batic M, Hoff G and Pia M G 2013 ”Theoretical ground for the propagation of uncertainties in Monte Carlo particle transport” submitted to IEEE Trans. Nucl. Science
  • [13] Lévy P. 1925 Calcul de Probabilités (Paris: Gauthier Villars)
  • [14] Zolotarev V M, 1986 One-dimensional Stable Distributions (Providence: American Mathematical Society)
  • [15] Fofack H and Nolan J P 1999 ”Tail behavior, modes and other characteristics of stable distributions” Extremes 39-58
  • [16] Irwin J O 1927 ”On the frequency distributions of the means of samples from a population having any law of frequency, with special reference to Pearson’s type 2” Biometrika 19 225-239
  • [17] Hall P 1927 ”The distribution of means for samples of size N drawn from a population in which the variate takes value between 0 and 1, all such values being equally probable” Biometrika 19 240-245
  • [18] Bradley D M and Gupta R C 2002 ”On the distribution of the sum of N non-identically distributed random variables” Ann. Inst. Statist. Math. 54 689-700
  • [19] Saracco P and Pia M G 2013 ”Parameter Uncertainty Quantification and the problem of determining the distribution of the sum of N independent stochastic variables: an exact solution for arbitrary polynomial distributions on different intervals” submitted to Journ. Math. Phys.
  • [20] Luck R and Stevens J W 2004 ”A simple numerical procedure for estimating nonlinear uncertainty propagation” ISA Trans. 43 491-497