Estimating model bias over the complete nuclide chart with sparse Gaussian processes at the example of INCL/ABLA and double-differential neutron spectra
Predictions of nuclear models guide the design of nuclear facilities to ensure their safe and efficient operation. Because nuclear models often do not perfectly reproduce available experimental data, decisions based on their predictions may not be optimal. Awareness of systematic deviations between models and experimental data helps to alleviate this problem. This paper shows how a sparse approximation to Gaussian processes can be used to estimate the model bias over the complete nuclide chart, using the example of inclusive double-differential neutron spectra for incident protons above 100 MeV. A powerful feature of the presented approach is the ability to predict the model bias for energies, angles, and isotopes where data are missing. Thanks to the sparse approximation, the number of experimental data points that can be taken into account is at least in the order of magnitude of $10^5$. The approach is applied to the Liège Intranuclear Cascade Model (INCL) coupled to the evaporation code ABLA. The results suggest that sparse Gaussian process regression is a viable candidate for performing global and quantitative assessments of models. Limitations of a philosophical nature of this (and any other) approach are also discussed.
PACS: 02.50.Ey Stochastic processes; 02.50.Tt Inference methods; 02.50.Sk Multivariate analysis; 25.40.-h Nucleon-induced reactions; 28.20.-v Neutron physics; 29.85.Fj Data analysis; 29.85.-c Computer data analysis; 29.87.+g Nuclear data compilation
Despite theoretical advances, nuclear models are in general not able to reproduce all features of trustworthy experimental data. Because experiments alone do not provide sufficient information to solve problems of nuclear engineering, models are still needed to fill the gaps.
Because reliable nuclear data are nevertheless needed, a pragmatic way to deal with imperfect models is the introduction of a bias term on top of the model. The true prediction is then given as the sum of the model prediction and the bias term. The form of the bias term is in principle arbitrary, and a low-order polynomial could be a possible choice. However, assumptions about how models deviate from reality are usually very vague, which makes it difficult to justify one parametrization over another.
Methods of non-parametric statistics help to a certain extent to overcome this specification problem. In particular, Gaussian process (GP) regression (also known as Kriging, e.g., rasmussen_gaussian_2006 ()) enjoys popularity in various fields, such as geostatistics, remote sensing and robotics, due to its conceptual simplicity and sound embedding in Bayesian statistics. Instead of providing a parameterized function to fit, one specifies a mean function and a covariance function. The definition of these two quantities induces a prior probability distribution on a function space. Several standard specifications of parametrized covariance functions exist, e.g., (rasmussen_gaussian_2006, , Ch. 4), whose parameters regulate the smoothness and the magnitude of variation of the function to be determined by GP regression.
The optimal choice of the values for these so-called hyperparameters is problem dependent and can also be automatically performed by methods such as marginal likelihood optimization and cross validation, e.g., (rasmussen_gaussian_2006, , Ch. 5). In addition, features of various covariance functions can be combined by summing and multiplying them. Both the automatic determination of hyperparameters and the combination of covariance functions will be demonstrated.
From an abstract viewpoint, GP regression is a method to learn a functional relationship based on samples of input-output associations without the need to specify a functional shape. GP regression naturally yields besides estimates also the associated uncertainties. This feature is essential for evaluating nuclear data, because uncertainties of estimates are as important for the design of nuclear facilities as are the estimates themselves. Prior knowledge about the smoothness and the magnitude of the model bias can be taken into account. Furthermore, many existing nuclear data evaluation methods, e.g., muir_global_2007 (); leeb_consistent_2008 (); herman_development_2008 (), can be regarded as special cases. This suggests that the approach discussed in this paper can be combined with existing evaluation methods in a principled way.
The main hurdle for applying GP regression is its bad scalability in the number of data points $N$. The required inversion of an $N \times N$ covariance matrix leads to a computational complexity of $\mathcal{O}(N^3)$. This limits the application of standard GP regression to several thousand data points on contemporary desktop computers. Scaling up GP regression to large datasets is therefore a field of active research. Approaches usually rely on a combination of parallel computing and the introduction of sparsity in the covariance matrix, which means replacing the original covariance matrix by a low-rank approximation, see e.g., quinonero-candela_unifying_2005 ().
In this paper, I investigate the sparse approximation introduced in snelson_sparse_2006 () to estimate the model bias of inclusive double-differential neutron spectra over the complete nuclide chart for incident protons above 100 MeV. The predictions are computed by the C++ version of the Liège Intranuclear Cascade Model (INCL) mancusi_new_2014 (); mancusi_extension_2014 () coupled to the Fortran version of the evaporation code ABLA07 abla07 (). The experimental data are taken from the EXFOR database otuka_towards_2014 ().
The idea of using Gaussian processes to capture deficiencies of a model exists for a long time in the literature, see e.g., blight_bayesian_1975 (), and has also already been studied in the context of nuclear data evaluation, e.g., pigni_uncertainty_2003 (); schnabel_large_2015 (); schnabel_differential_2016 (). The novelty of this contribution is the application of GP regression to a large dataset with isotopes across the nuclide chart, which is possible thanks to the sparse approximation. Furthermore, the way GP regression is applied enables predictions for isotopes without any data.
The exemplary application of sparse GP regression in this paper indicates that the inclusion of hundreds of thousands of data points may be feasible and that isotope extrapolations yield reasonable results if some conditions are met. Therefore, sparse GP regression is a promising candidate to perform global assessments of models and to quantify their imperfections in a principled way.
The structure of this paper is as follows. Section 2 outlines the theory underlying sparse GP regression. In particular, section 2.1 provides a succinct exposition of standard GP regression, section 2.2 sketches how to construct a sparse approximation to a GP and how this approximation is exploited in the computation, and section 2.3 explains the principle used to adjust the hyperparameters of the covariance function based on the data.
The application to INCL/ABLA and inclusive double-differential neutron spectra is then discussed in section 3. After a brief introduction of INCL and ABLA in section 3.1, the specific choice of covariance function is detailed in section 3.2. Some details about the hyperparameter adjustment are given in section 3.3 and the results of GP regression are shown and discussed in section 3.4.
2.1 GP regression
GP regression, e.g., rasmussen_gaussian_2006 (), can be derived in the framework of Bayesian statistics under the assumption that all probability distributions are multivariate normal. Let the vector $\vec{y}_1$ contain the values at the locations $X_1$ of interest and the vector $\vec{y}_2$ the observed values at the locations $X_2$.
For instance, in the application in section 3, the elements in the vector $\vec{y}_1$ represent the relative deviations of the “truth” from the model predictions for neutron spectra at angles and energies of interest. The vector $\vec{y}_2$ contains the relative deviations of the available experimental data from the model predictions for neutron spectra at the angles and energies of the experiments. The underlying assumption is that the experimental measurements differ from the truth by an amount compatible with their associated uncertainties. If this assumption does not hold, model bias should rather be understood as a combination of model and experimental bias.
Given a probabilistic relationship between the vectors $\vec{y}_1$ and $\vec{y}_2$, i.e. we know the conditional probability density function (pdf) of $\vec{y}_2$ given $\vec{y}_1$ (e.g., because of continuity assumptions), the application of the Bayesian update formula yields
The occurring pdfs are referred to as posterior $\pi(\vec{y}_1 \,|\, \vec{y}_2)$, likelihood $\pi(\vec{y}_2 \,|\, \vec{y}_1)$, and prior $\pi(\vec{y}_1)$. The posterior pdf represents an improved state of knowledge.
The form of the pdfs in the Bayesian update formula can be derived from the joint distribution $\pi(\vec{y}_1, \vec{y}_2)$. In the following, we need the multivariate normal distribution
which is characterized by the center vector $\vec{\mu}$ and the covariance matrix $K$. The dimension of the occurring vectors is denoted by $n$. Under the assumption that all pdfs are multivariate normal and centered at zero, the joint distribution of $\vec{y}_1$ and $\vec{y}_2$ can be written as
The compound covariance matrix contains the blocks $K_{11}$ and $K_{22}$ associated with $\vec{y}_1$ and $\vec{y}_2$, respectively. The block $K_{12}$ contains the covariances between the elements of $\vec{y}_1$ and $\vec{y}_2$. Centering the multivariate normal pdf at zero is a reasonable choice for the estimation of model bias. It means that an unbiased model is a priori regarded as the most likely option.
The posterior pdf $\pi(\vec{y}_1 \,|\, \vec{y}_2)$ is related to the joint distribution by
The solution for a multivariate normal pdf is another multivariate normal pdf. For eq. 3, the result is given by
with the posterior mean vector and covariance matrix (e.g., (rasmussen_gaussian_2006, , A. 3)),
The important property of these equations is the fact that the posterior moments depend only on the observed vector $\vec{y}_2$. The introduction of new elements into $\vec{y}_1$ and of associated columns and rows into the covariance matrix in eq. 3 has no impact on the already existing values of the posterior mean vector and covariance matrix. In other words, one is not obliged to calculate all posterior expectations and covariances at once. They can be calculated sequentially.
GP regression is a method to learn a functional relationship $f$ based on samples of input-output associations $(\vec{x}_i, y_i)$. The vector $\vec{y}_2$ introduced above then contains the observed function values, i.e. $y_i = f(\vec{x}_i)$. Assuming all prior expectations to be zero, the missing information to evaluate eqs. 7 and 6 are the covariance matrices. Because predictions should be computable at all possible locations and the same applies to observations, covariances between function values must be available for all possible pairs of locations. This requirement can be met by the introduction of a so-called covariance function $k(\vec{x}, \vec{x}')$. A popular choice is the squared exponential covariance function
The parameter $\delta$ enables the incorporation of prior knowledge about the range function values are expected to span. The parameter $\lambda$ regulates the smoothness of the solution. The larger $\lambda$, the slower the covariance function decays with increasing distance between $\vec{x}$ and $\vec{x}'$, and consequently the more similar function values are at nearby locations.
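The posterior formulas and the squared exponential covariance function can be illustrated with a short numerical sketch (Python with NumPy; the data and all names are invented for illustration and a small nugget term is added for numerical stability):

```python
import numpy as np

def k_se(x1, x2, delta=1.0, lam=0.3):
    """Squared exponential covariance between two sets of 1-D locations."""
    d = x1[:, None] - x2[None, :]
    return delta**2 * np.exp(-0.5 * (d / lam)**2)

# invented observations of the "model bias" at four locations
x_obs = np.array([0.0, 0.4, 0.8, 1.2])
y_obs = np.array([0.1, -0.2, 0.05, 0.3])
nugget = 1e-4  # small diagonal term for numerical stability

x_new = np.linspace(0.0, 1.5, 16)  # locations of interest

K22 = k_se(x_obs, x_obs) + nugget * np.eye(x_obs.size)
K12 = k_se(x_new, x_obs)
K11 = k_se(x_new, x_new)

# posterior mean and covariance for a zero prior mean
K22_inv = np.linalg.inv(K22)
mean_post = K12 @ K22_inv @ y_obs
cov_post = K11 - K12 @ K22_inv @ K12.T
```

The posterior mean passes close to the observations, the posterior variance shrinks near them, and far away from all observations it reverts to the prior variance $\delta^2$.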
2.2 Sparse Gaussian processes
The main hurdle for applying GP regression using many observed pairs is the inversion of the covariance matrix in eqs. 7 and 6. The time to invert this matrix, with $N$ being the number of observations, increases proportionally to $N^3$. This limits the application of standard GP regression to several thousand observations on contemporary desktop computers. Parallelization helps to a certain extent to push this limit. Another measure is the replacement of the covariance matrix by a low-rank approximation. I adopted the sparse approximation described in snelson_sparse_2006 (), which is briefly outlined here. For a survey of different approaches and their connections consult e.g., quinonero-candela_unifying_2005 ().
Suppose that we have not measured the values in $\vec{y}_2$ associated with the locations $X_2$, but instead a vector $\vec{u}$ associated with some other locations $\bar{X}$. We refer to the locations $\bar{X}$ as pseudo-inputs. Now we use eq. 6 to determine the hypothetical posterior expectation of $\vec{y}_2$,
The matrices $K_{2u}$ and $K_{uu}$ are constructed analogously to eq. 10 and eq. 9, respectively, i.e. $(K_{2u})_{ij} = k(\vec{x}_i, \bar{\vec{x}}_j)$ and $(K_{uu})_{ij} = k(\bar{\vec{x}}_i, \bar{\vec{x}}_j)$, with $N$ being the number of observations and $M$ the number of pseudo-inputs. Noteworthy, this posterior expectation is a linear function of $\vec{u}$. Under the assumption of a deterministic relationship, we can replace $\vec{y}_2$ by the posterior expectation. Using the sandwich formula $\operatorname{Cov}(A\vec{u}) = A \operatorname{Cov}(\vec{u}) A^T$ and $\operatorname{Cov}(\vec{u}) = K_{uu}$, we get for the covariance matrix of $\vec{y}_2$
Given that both $K_{2u}$ and $K_{uu}$ have full rank and the number of observations $N$ is bigger than the number of pseudo-inputs $M$, the rank of the resulting approximation $\tilde{K}_{22}$ equals $M$. The approximation is more rigid than the original covariance matrix due to the lower rank.
In order to restore some of the flexibility of the original covariance matrix, the diagonal matrix $\Lambda = \operatorname{diag}(K_{22} - \tilde{K}_{22})$ is added to $\tilde{K}_{22}$. This correction is essential for the determination of the pseudo-input locations via marginal likelihood optimization, as explained in (snelson_sparse_2006, , Sec. 3). Furthermore, to make the approximation exhibit all properties of a GP, it is also necessary to add the analogous diagonal correction to $\tilde{K}_{11}$, as explained in (quinonero-candela_unifying_2005, , Sec. 6).
Noteworthy, the inversion in these expressions only needs to be performed for an $M \times M$ matrix, where $M$ is the number of pseudo-inputs. Typically, $M$ is chosen to be in the order of magnitude of a hundred. The computational cost of inverting the diagonal matrix is negligible.
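The structure of the low-rank approximation plus diagonal correction can be sketched numerically (Python/NumPy; all names are mine and the 1-D toy kernel stands in for the covariance functions used later):

```python
import numpy as np

def k_se(x1, x2, lam=0.3):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lam)**2)

rng = np.random.default_rng(0)
x_obs = np.sort(rng.uniform(0.0, 2.0, 200))  # N = 200 observation locations
x_pseudo = np.linspace(0.0, 2.0, 10)         # M = 10 pseudo-input locations

K22 = k_se(x_obs, x_obs)
K2u = k_se(x_obs, x_pseudo)
Kuu = k_se(x_pseudo, x_pseudo) + 1e-10 * np.eye(x_pseudo.size)

# low-rank approximation of K22 with rank at most M
K_lowrank = K2u @ np.linalg.inv(Kuu) @ K2u.T

# diagonal correction: restores the exact diagonal of K22
Lam = np.diag(np.diag(K22) - np.diag(K_lowrank))
K_sparse = K_lowrank + Lam
```

The corrected matrix has the exact diagonal of the original covariance matrix while its off-diagonal part has rank at most $M$, which is what makes the subsequent inversions cheap.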
2.3 Marginal likelihood maximization
Covariance functions depend on parameters (called hyperparameters) whose values must be specified. In a full Bayesian treatment, the choice of values should not be informed by the same observations that enter the GP regression afterwards. In practice, this ideal is often difficult to achieve due to the scarcity of available data and is therefore frequently abandoned. A full Bayesian treatment may also become computationally intractable if there are too many data.
Two popular approaches to determine the hyperparameters based on the data are marginal likelihood maximization and cross validation, see e.g., (rasmussen_gaussian_2006, , Ch. 5). A general statement about which approach performs better cannot be made. I decided to use marginal likelihood optimization because it can probably be interfaced more easily with existing nuclear data evaluation methods.
Given a covariance function depending on some hyperparameters, e.g., the amplitude and length-scale parameters in eq. 8, the idea of marginal likelihood maximization is to select values for the hyperparameters that maximize the probability density of the observation vector $\vec{y}_2$. In the case of the multivariate normal pdf in eq. 3, it is given by (e.g., (rasmussen_gaussian_2006, , Sec. 5.4.1))
The first term is a constant, the second term is up to a constant the information entropy of the multivariate normal distribution, and the third term is the generalized $\chi^2$-value. The maximization of this expression amounts to balancing two objectives: minimizing the information entropy and minimizing the generalized $\chi^2$-value.
The gradient of the log marginal likelihood with respect to the hyperparameters can be computed analytically, which enables the usage of gradient-based optimization algorithms. In this paper, I use the L-BFGS-B algorithm byrd_limited_1995 () because it can deal with a large number of parameters and allows restrictions to be imposed on their ranges.
Due to the appearance of the determinant and the inverse of the covariance matrix, the optimization is limited to several thousand observations on contemporary desktop computers. However, replacing the covariance matrix by the approximation in eqs. 18 and 17 makes it possible to scale up the number of observations by one or two orders of magnitude. The structure of the approximation is exploited by making use of the matrix determinant lemma, the Woodbury identity, and the invariance of the trace under cyclic permutations.
The approximation of the covariance matrix is not only determined by the hyperparameters but also by the locations of the pseudo-inputs. Hyperparameters and pseudo-input locations can be jointly adjusted by marginal likelihood maximization. The number of pseudo-inputs is usually significantly larger (e.g., hundreds) than the number of hyperparameters (e.g., dozens). In addition, the pseudo-inputs are points in a potentially multi-dimensional space and their specification requires a coordinate value for each axis. For instance, in section 3 sparse GP regression is performed in a five-dimensional space with three hundred pseudo-inputs, which gives 1500 associated parameters. Because eq. 18 has to be evaluated for each parameter in each iteration of the optimization algorithm, its efficient computation is important.
The mathematical details are technical and tedious, and hence only the key ingredient for efficient computation is discussed. Let $\bar{x}_{ij}$ be the $j$-th coordinate of the $i$-th pseudo-input. The crucial observation is that differentiating the covariance matrix between the pseudo-inputs with respect to $\bar{x}_{ij}$ yields a matrix in which only the $i$-th column and row contain non-zero elements. A similar statement holds for the covariance matrix between observations and pseudo-inputs. This feature can be exploited in the matrix multiplications and the trace computation in eq. 18 (with the covariance matrix substituted by its sparse approximation) to achieve a cost of $\mathcal{O}(NM)$ per coordinate of a pseudo-input, with $M$ being the number of pseudo-inputs and $N$ the number of observations. This is much more efficient than the $\mathcal{O}(NM^2)$ for the partial derivative with respect to a hyperparameter.
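The sparsity of the derivative with respect to a single pseudo-input coordinate can be checked numerically (Python/NumPy sketch with a 1-D toy kernel; a finite difference stands in for the analytic derivative):

```python
import numpy as np

def k_se(x1, x2, lam=0.5):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lam)**2)

x_pseudo = np.linspace(0.0, 2.0, 6)  # six pseudo-inputs in one dimension
K0 = k_se(x_pseudo, x_pseudo)

i, h = 3, 1e-6  # perturb the coordinate of pseudo-input i
x_pert = x_pseudo.copy()
x_pert[i] += h

# finite-difference stand-in for the analytic derivative of Kuu
dK = (k_se(x_pert, x_pert) - K0) / h

# entries outside row i and column i do not depend on the perturbed
# coordinate, so the derivative vanishes there
mask = np.ones_like(dK, dtype=bool)
mask[i, :] = False
mask[:, i] = False
```

Only row and column $i$ of the derivative matrix carry non-zero entries, which is exactly the structure exploited to reduce the cost per pseudo-input coordinate.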
The model bias was determined for the C++ version of the Liège Intranuclear Cascade Model (INCL) mancusi_new_2014 (), a Monte Carlo code, coupled to the evaporation code ABLA07 abla07 () because this model combination performs very well according to an IAEA benchmark of spallation models iaea_benchmark (); david_spallation_2015 () and is used in transport codes such as MCNPX and GEANT4. This suggests that the dissemination of a more quantitative performance assessment of INCL coupled to ABLA potentially helps many people to make better informed decisions. Because some model ingredients in INCL are based on views of classical physics (as opposed to quantum physics), the model is mainly used for high-energy reactions above 100 MeV.
The ability of a model to accurately predict the production of neutrons and their kinematic properties may be regarded as one of the most essential features for nuclear engineering applications. Especially for the development of the innovative research reactor MYRRHA myrrha_reactor () driven by a proton accelerator, these quantities need to be well predicted for incident protons.
For these reasons, I applied the approach to determine the model bias in the prediction of inclusive double-differential neutron spectra for incident protons and included almost all nuclei for which I found data in the EXFOR database otuka_towards_2014 (). Roughly ten thousand data points were taken into account. Table 1 gives an overview of the data.
3.2 Design of the covariance function
The covariance function presented in eq. 8 is probably too restrictive to be directly used on double-differential spectra. It incorporates the assumption that the model bias spans about the same range for low and high emission energies. Because the neutron spectrum quickly declines by orders of magnitude with increasing emission energy, it is reasonable to use a covariance function with more flexibility to disentangle the systematics of the model bias associated with these two energy domains.
Assuming two covariance functions $k_1(\vec{x}, \vec{x}')$ and $k_2(\vec{x}, \vec{x}')$, a more flexible covariance function can be constructed in the following ways (e.g., (rasmussen_gaussian_2006, , Sec. 4.2.4))
Taking into account that the purpose of a covariance function is to compute elements of a covariance matrix, the construction in eq. 19 is analogous to the possible construction of an experimental covariance matrix: An experimental covariance matrix can be assembled by adding a diagonal covariance matrix reflecting statistical uncertainties to another one with systematic error contributions. Please note that sparse GP regression as presented in this paper works only with a diagonal experimental covariance matrix. Equation 20 will be used to achieve a transition between the low and high energy domains.
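A small numerical sketch (Python/NumPy, all names mine) illustrates that both the sum and the element-wise product of covariance matrices built from valid covariance functions remain symmetric and positive semi-definite, which is what makes the constructions in eqs. 19 and 20 legitimate:

```python
import numpy as np

def k_se(x1, x2, lam):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lam)**2)

x = np.linspace(0.0, 1.0, 20)

K_a = k_se(x, x, lam=0.5)   # a slowly varying component
K_b = k_se(x, x, lam=0.05)  # a short-range component

K_sum = K_a + K_b    # analogous to adding systematic and statistical parts
K_prod = K_a * K_b   # element-wise product, also a valid covariance matrix

eig_sum = np.linalg.eigvalsh(K_sum)
eig_prod = np.linalg.eigvalsh(K_prod)
```

The positive semi-definiteness of the element-wise product is guaranteed by the Schur product theorem.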
To state the full covariance function used in this paper, the following covariance function on a one-dimensional input space is introduced,
The meaning of the hyperparameter $\lambda$ was explained below eq. 8.
The inclusive double-differential neutron spectra for incident protons over the complete nuclide chart can be thought of as a function that maps points in a five-dimensional input space to spectrum values. The coordinate axes are incident energy (En), mass number (A), charge number (Z), emission angle ($\theta$) and emission energy (E). Using eq. 21, we define two covariance functions $k_\mathrm{low}$ and $k_\mathrm{high}$ associated with the low and high energy domain of the emitted neutrons. Given two input vectors
the form of the covariance function for both $k_\mathrm{low}$ and $k_\mathrm{high}$ is assumed to be
The hyperparameters are the length scales associated with the individual coordinate axes and can take different values for $k_\mathrm{low}$ and $k_\mathrm{high}$.
The transition between the two energy domains, which means switching from $k_\mathrm{low}$ to $k_\mathrm{high}$, is established by the logistic function
Noteworthy, all the variables are vectors. The equation $\langle\vec{w}, \vec{x}\rangle + b = 0$ defines a (hyper)plane with normal vector $\vec{w}$. The function in eq. 25 attains values close to zero for $\vec{x}$ far away from the plane on one side and close to one far away on the other side. Within which distance to the plane the transition from zero to one occurs depends on the length of $\vec{w}$. The larger $|\vec{w}|$, the faster the transition and the narrower the window of transition around the plane.
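The behaviour of the logistic transition function can be sketched as follows (Python/NumPy; the plane parameters are invented for illustration, here a transition along a single axis around the value 20):

```python
import numpy as np

def logistic(x, w, b):
    """Close to 0 on one side of the plane <w, x> + b = 0, close to 1 on the other."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# 1-D illustration: transition along a single axis around the value 20,
# e.g. an emission energy in MeV; w and b are invented for illustration
w = np.array([0.5])  # the length of w controls the width of the transition window
b = -10.0            # the plane is located at x = -b / w = 20

x = np.array([[1.0], [20.0], [100.0]])
sigma = logistic(x, w, b)
```

The function evaluates to one half exactly on the plane; doubling the length of $\vec{w}$ (while moving $b$ accordingly) would halve the width of the transition window.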
With the abbreviations
the full covariance function is given by
Finally, the GP regression is not performed on the absolute difference between a model prediction and an experimental data point, but on the transformed quantity
In words, relative differences are taken for model predictions larger than 0.1 and absolute differences scaled up by a factor of ten for model predictions below 0.1. Relative differences usually fluctuate wildly for spectrum values close to zero—especially for a Monte Carlo code such as INCL—and the switch to absolute differences helps GP regression to find more meaningful solutions with a better ability to extrapolate.
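A minimal sketch of this transformation (Python/NumPy; the threshold 0.1 and the factor of ten are from the text, while the function name and the sign convention of experiment minus model are my assumptions):

```python
import numpy as np

def transformed_difference(model, exper):
    """Relative difference where the model prediction exceeds 0.1,
    absolute difference scaled by ten where it does not."""
    model = np.asarray(model, dtype=float)
    exper = np.asarray(exper, dtype=float)
    return np.where(model > 0.1,
                    (exper - model) / model,
                    10.0 * (exper - model))
```

Note that the two branches agree at a model prediction of exactly 0.1, since $(e - m)/0.1 = 10\,(e - m)$, so the transformation is continuous across the threshold.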
Due to the roughly ten thousand data points, the covariance matrices computed with eq. 29 were replaced by the sparse approximation outlined in section 2.2. I introduced three hundred pseudo-inputs and placed them initially at randomly selected locations of the experimental data. Their locations were then jointly optimized with the hyperparameters, as discussed in section 3.3.
The diagonal matrix $\Lambda$ occurring in the approximation was changed to $\Lambda + \Sigma_\mathrm{exp} + \Sigma_\mathrm{mod}$ to accommodate the statistical uncertainties of the model prediction (due to INCL being a Monte Carlo code) and of the experimental data. Both $\Sigma_\mathrm{exp}$ and $\Sigma_\mathrm{mod}$ are diagonal matrices. The matrix $\Sigma_\mathrm{exp}$ contains the variances corresponding to the statistical uncertainties of the experimental data points. The matrix $\Sigma_\mathrm{mod}$ contains the estimated variances of the model predictions.
A diagonal matrix for the experimental covariance matrix can certainly be challenged because the important systematic errors of experiments reflected in off-diagonal elements are neglected. This is at the moment a limitation of the approach.
3.3 Marginal likelihood maximization
The hyperparameters appearing in eq. 29 and the locations of the three hundred pseudo-inputs determining the approximation in eq. 11 were adjusted via the marginal likelihood maximization described in section 2.3. To be explicit, the hyperparameters considered were the amplitudes and the length scales of the covariance functions for the low and high energy domains,
and also the normal vector $\vec{w}$ and the offset $b$ of the logistic function. The vector $\vec{w}$ was forced to lie in the subspace spanned by the axes associated with En, $\theta$, and E. Further, polar coordinates were introduced and the direction of $\vec{w}$ was fixed, which removes an ambiguity in the plane specification without shrinking the set of possible solutions. Because of these measures, it sufficed to consider the length $|\vec{w}|$ as a hyperparameter. Counting both hyperparameters and pseudo-input coordinates, 1515 parameters were taken into account in the optimization.
I employed the L-BFGS-B algorithm byrd_limited_1995 () as implemented in the optim function of R r_development_core_team_r_2008 (), which makes use of an analytic gradient, can deal with a large number of variables, and permits the specification of range restrictions. The optimization was performed on a cluster using 25 cores and was stopped after 3500 iterations, which took about 10 hours. The obtained solution is reasonably consistent in a statistical sense in terms of its two-sided p-value. Restrictions on parameter ranges were imposed to introduce prior knowledge and to guide the optimization procedure. Noteworthy, lower limits on length scales were introduced to counteract their dramatic reduction due to inconsistent experimental data in the same energy/angle range. Table 2 summarizes the optimization procedure. The evolution of the pseudo-inputs projected onto the (A,Z)-plane is visualized in Fig. 1.
A thorough study of the optimization process exceeds the scope of this paper and hence I content myself with a few remarks. The length scales associated with the emission angle experienced significant changes. Their increase means that the model bias exhibits similarities between emission angles far away from each other and the GP is able to capture them. Concerning the length scales associated with the emission energy, the small value for the low energy domain compared to the larger value for the high energy domain indicates that the features of the model bias in these two domains are indeed different. The most striking feature, however, is that the large length scales associated with the mass and charge number “survived” the optimization, which means that the model bias behaves similarly over large regions of the nuclide chart. Examples of isotope extrapolations are given in section 3.4.
As a final remark, a more rigorous study of the optimization procedure is certainly necessary and there is room for improvement. This is left as future work.
3.4 Results and discussion
The values of the hyperparameters and pseudo-inputs obtained by marginal likelihood maximization were used in the covariance function in eq. 29. This covariance function was employed to compute the required covariance matrices in eqs. 16 and 15 based on the experimental data summarized in table 1. Because the hyperparameters are determined during hyperparameter optimization before being used in the GP regression, they will be referred to as prior knowledge.
Equations 16 and 15 enable the prediction of a plethora of spectrum values and their uncertainties for combinations of incident energy, mass number, charge number, emission angle, and emission energy. The few selected examples of predictions in Fig. 2 serve as the basis to discuss general features of GP regression, the underlying assumptions, and its accuracy and validity.
How well we can interpolate between data and how far we can extrapolate beyond the data depends on the suitability of the covariance function for the problem at hand. The building block for the full covariance function in eq. 29 is the one-dimensional covariance function in eq. 21. Using the latter imposes the assumption that possible solutions have derivatives of any order and hence are very smooth (rasmussen_gaussian_2006, , Ch. 4). Interpolations between data points included in the regression are determined by this smoothness property and the values of the length scales.
The length scales reflect the prior assumption about the similarity between spectrum values of points a certain distance away from each other. This prior assumption directly impacts the uncertainty of the predictions. The farther away a prediction is from an observation point, the higher the associated uncertainty. If a prediction is already multiples of any length scale away from all observations, the uncertainty reverts to its prior value, which is given by the amplitude parameter of the respective energy domain.
In the case of the sparse approximation, the uncertainty is related to the distance to the pseudo-inputs. Because only few pseudo-inputs are located at very high and very low emission energies, the uncertainty bands in Fig. 2 in those energy domains are rather large despite the presence of experimental data.
The important finding in this specific application is that the length scales related to emission angle, mass number, and charge number are very large. GP regression is therefore able to interpolate and extrapolate over large ranges of these axes.
The interpolation of spectrum values between angles and emission energies of an isotope with available data may be considered rather standard. For instance, one can do a $\chi^2$-fit of a low-order Legendre polynomial to the angular distributions at emission energies with data. The coefficients of the Legendre polynomial for intermediate emission energies without data can then be obtained by linear interpolation.
The important difference between such a conventional fit and GP regression is the number of basis functions. Whereas their number is fixed and limited in a conventional fit, GP regression amounts to a fit with an infinite number of basis functions (rasmussen_gaussian_2006, , Sec. 2.2). The length scales regulate the number of basis functions that effectively contribute to the solution. Hyperparameter optimization decreases the length scales for unpredictable data, which leads to a greater number of contributing basis functions and consequently to greater flexibility and larger uncertainties in the predictions. This feature sets GP regression apart from a standard $\chi^2$-fit. In the latter, the uncertainty of the solution depends to a much lesser extent on the (un)predictability of the data and much more on the number of data points and the fixed number of basis functions.
One truly novel element in the approach is the inclusion of the mass and charge number, which enables predictions for isotopes without data. We can easily imagine that different isotopes differ significantly by their physical properties. From this perspective, the idea to extrapolate the model bias to other isotopes should be met with healthy skepticism.
To get a first grasp of the validity of isotope extrapolations, let us consider again the hyperparameter optimization discussed in section 3.3. The hyperparameters were adjusted on the basis of the isotopes in table 1. These data are spread out over the periodic table and cover a range from carbon to thorium. In these data, similar trends of the model bias persist across significant ranges of the mass and charge number, which is the reason that the associated length scales retained high values during the optimization. For instance, the experimental data of carbon and indium in Fig. 2 show comparable structures of the model bias despite their difference in mass.
However, the isotopes considered in the optimization are not very exotic and gaps between them are at times large. Further, these isotopes are certainly not a random sample taken from the periodic table and therefore most theoretical guarantees coming from estimation theory do not hold. So how confident can we be about isotope extrapolation?
To provide a basis for the discussion of this question, Fig. 2 also contains predictions for oxygen and cadmium. Importantly, the associated experimental data have been used neither for the hyperparameter optimization nor in the GP regression. The predictions follow the trend of the experimental data well. The uncertainty bands, however, include fewer of the data points than their nominal coverage suggests. This observation points out that there are systematic differences between isotopes. Therefore, due to the sample of isotopes not being a random sample, uncertainty bands should be interpreted with caution.
One way to alleviate this issue could be to add a covariance function to eq. 29 which only correlates spectrum values of the same isotope but not between isotopes. This measure would lead to an additional uncertainty component for each isotope, which only decreases if associated data are included in the GP regression.
An extreme mismatch between the prediction and the experimental data occurs for oxygen at 60° and emission energies below 1 MeV. The creation of low-energy neutrons is governed by de-excitation processes of the nucleus, suggesting that the angular distribution of the emitted particles is isotropic. This property holds for the model predictions but not for the experimental data. The experimental data exhibit a peak at 60° which is about a factor of five higher than at 30°, 120°, and 150°. The origin of this peculiarity may deserve investigation but is outside the scope of this work. For the sake of argument, I assume that it is indeed a reflection of some property of the nucleus.
Because the data in table 1 do not include any measurements below 1 MeV emission energy in this mass range, neither the hyperparameter optimization nor the GP regression had a chance to be informed about such a feature. In this case, the predictions and uncertainties are determined by nearby values or, if there are none, by the prior expectation.
Had we been aware of such effects, we could have introduced a component in the covariance function to reflect our large uncertainty about the spectrum at low emission energies. Otherwise, the only sensible data-driven way to provide predictions and uncertainties for unobserved domains is to assume that effects there are similar to those in some observed domains. Of course, it is our decision which domains are considered similar. The covariance function employed in eq. 29 incorporates the assumption that domains nearby in terms of mass, charge, angle, etc. are similar. However, we are free to use any other assumption about similarity to construct the covariance function. As a side note, results of other models relying on different assumptions could also serve as a basis for uncertainties in unobserved regions. Such information can be included in GP regression in a principled way.
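The notion that nearby domains are similar can be encoded, for example, by a squared-exponential kernel with a separate length scale per input dimension. The following Python sketch uses invented, illustrative length scales, not the optimized hyperparameters of this work:

```python
import numpy as np

def ard_kernel(X1, X2, length_scales, amp=1.0):
    """Squared-exponential kernel with one length scale per input
    (mass, charge, angle, energy): nearby domains are similar."""
    d = (X1[:, None, :] - X2[None, :, :]) / length_scales
    return amp**2 * np.exp(-0.5 * np.sum(d**2, axis=-1))

# Columns: mass number A, charge number Z, angle [deg], log10(E/MeV).
X = np.array([[12.0, 6.0, 30.0, 0.0],    # carbon
              [16.0, 8.0, 30.0, 0.0]])   # oxygen
ls = np.array([50.0, 25.0, 20.0, 0.5])   # illustrative length scales
K = ard_kernel(X, X, ls)
# Large A and Z length scales encode that the model bias varies slowly
# across the nuclide chart, which is what enables isotope extrapolation.
```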
4 Summary and Outlook
Sparse GP regression, a non-parametric estimation technique from statistics, was employed to estimate the model bias of the C++ version of the Liège Intranuclear Cascade Model (INCL) coupled to the evaporation code ABLA07. Specifically, the model bias in the prediction of double-differential inclusive neutron spectra over the complete nuclide chart was investigated for incident protons above 100 MeV. Experimental data from the EXFOR database served as the basis of this assessment. Roughly ten thousand data points were taken into account. The hyperparameter optimization was done on a computing cluster, whereas the GP regression itself was performed on a desktop computer. The obtained timings indicate that increasing the number of data points by a factor of ten could be feasible.
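To illustrate why the sparse approximation scales to data sets of this size, the following Python sketch implements the predictive mean of a subset-of-regressors approximation, whose cost grows as O(nm²) in the number m of inducing points instead of O(n³) in the number n of data points. All numbers are illustrative and the sketch is not the implementation used in this work:

```python
import numpy as np

def rbf(X1, X2, ls=1.0):
    """One-dimensional squared-exponential kernel."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def sparse_gp_mean(X, y, Xm, Xs, noise=0.1):
    """Subset-of-regressors predictive mean: only m x m systems are solved."""
    Kmm = rbf(Xm, Xm) + 1e-8 * np.eye(len(Xm))   # jitter for stability
    Kmn = rbf(Xm, X)
    A = noise**2 * Kmm + Kmn @ Kmn.T             # m x m instead of n x n
    w = np.linalg.solve(A, Kmn @ y)
    return rbf(Xs, Xm) @ w

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0.0, 10.0, 2000))
y = np.sin(X) + 0.1 * rng.normal(size=2000)
Xm = np.linspace(0.0, 10.0, 30)    # 30 inducing points for 2000 data points
Xs = np.array([2.5, 5.0])
print(sparse_gp_mean(X, y, Xm, Xs))  # close to sin(2.5) and sin(5.0)
```

The dominant cost is the solve of an m × m system, which is why a tenfold increase of n remains tractable as long as m stays moderate.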
For this specific application, it was shown that GP regression produces reasonable results for isotopes that have been included in the procedure. It was argued that the validity of predictions and uncertainties for isotopes not used in the procedure depends on the validity of the assumptions made about similarity between isotopes. As a simple benchmark, experimental data of the isotopes oxygen and cadmium, which had not been taken into account in the procedure, were compared to the respective predictions. The agreement between prediction and experimental data was reasonable, but the 95% confidence band was sometimes misleading and should therefore be interpreted with caution. Accepting the low-energy peak of oxygen at 60° in the data as physical reality, the low-energy spectrum of oxygen served as an example where the similarity assumption between isotopes did not hold.
As for any other uncertainty quantification method, it is hard, if not impossible, to properly take into account unobserved phenomena that bear no systematic relationship to observed ones. Such phenomena are unknown unknowns; their observation is labeled a shock, surprise, or discovery, and they potentially have a huge impact where they appear.
Even though GP regression cannot solve the philosophical problem associated with unknown unknowns, knowledge coming from the observation of new effects can be taken into account by modifying the covariance function. For instance, the peculiar peak in the oxygen spectrum suggests the introduction of a term in the covariance function which increases the uncertainty of the spectrum values in the low emission energy domain. The ability to counter new observations with clearly interpretable modifications of the covariance function represents a principled and transparent way of knowledge acquisition.
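The suggested modification could, for instance, take the form of an indicator-based variance term that is active only below a threshold emission energy. The following Python sketch is hypothetical (threshold and amplitude invented) and not part of the covariance function actually used:

```python
import numpy as np

def low_energy_term(E1, E2, amp=2.0, threshold=1.0):
    """Extra covariance active only where both points lie below the
    threshold emission energy [MeV], inflating prior uncertainty there."""
    low1 = E1[:, None] < threshold
    low2 = E2[None, :] < threshold
    return amp**2 * (low1 & low2).astype(float)

E = np.array([0.5, 0.8, 5.0])          # emission energies in MeV
K_extra = low_energy_term(E, E)
# The prior variance below 1 MeV grows by amp**2, while the point at
# 5 MeV is unaffected; the extra uncertainty shrinks only once
# low-energy data enter the regression.
```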
The scalability and the possibility to incorporate prior assumptions by modeling the covariance function make sparse GP regression a promising candidate for global assessments of nuclear models and the quantification of their uncertainties. Because the formulas of GP regression are structurally identical to those of many existing evaluation methods, GP regression should be regarded as a complement rather than a substitute, one that can be interfaced with those methods.
This work was performed within the work package WP11 of the CHANDA project (605203) financed by the European Commission. Thanks to Sylvie Leray, Jean-Christophe David and Davide Mancusi for useful discussions.
- (1) C.E. Rasmussen, C.K.I. Williams, Gaussian Processes for Machine Learning (MIT Press, Cambridge, Mass., 2006), ISBN 0-262-18253-X 978-0-262-18253-9
- (2) D.W. Muir, A. Trkov, I. Kodeli, R. Capote, V. Zerkin, The Global Assessment of Nuclear Data, GANDR (EDP Sciences, 2007)
- (3) H. Leeb, D. Neudecker, T. Srdinko, Nuclear Data Sheets 109, 2762 (2008)
- (4) M. Herman, M. Pigni, P. Oblozinsky, S. Mughabghab, C. Mattoon, R. Capote, Y.S. Cho, A. Trkov, Tech. Rep. BNL-81624-2008-CP, Brookhaven National Laboratory (2008)
- (5) J. Quiñonero-Candela, C.E. Rasmussen, Journal of Machine Learning Research 6, 1939 (2005)
- (6) E. Snelson, Z. Ghahramani, Sparse Gaussian Processes Using Pseudo-Inputs, in Advances in Neural Information Processing Systems (2006), pp. 1257–1264
- (7) D. Mancusi, A. Boudard, J. Cugnon, J.C. David, P. Kaitaniemi, S. Leray, New C++ Version of the Liège Intranuclear Cascade Model in Geant4 (EDP Sciences, 2014), p. 05209, ISBN 978-2-7598-1269-1
- (8) D. Mancusi, A. Boudard, J. Cugnon, J.C. David, P. Kaitaniemi, S. Leray, Physical Review C 90 (2014)
- (9) A. Kelic, M.V. Ricciardi, K.H. Schmidt (2009)
- (10) N. Otuka, E. Dupont, V. Semkova, B. Pritychenko, A. Blokhin, M. Aikawa, S. Babykina, M. Bossant, G. Chen, S. Dunaeva et al., Nuclear Data Sheets 120, 272 (2014)
- (11) B.J.N. Blight, L. Ott, Biometrika 62, 79 (1975)
- (12) M.T. Pigni, H. Leeb, Uncertainty Estimates of Evaluated 56Fe Cross Sections Based on Extensive Modelling at Energies Beyond 20 MeV, in Proc. Int. Workshop on Nuclear Data for the Transmutation of Nuclear Waste. GSI-Darmstadt, Germany (2003)
- (13) G. Schnabel, Ph.D. thesis, Technische Universität Wien, Vienna (2015)
- (14) G. Schnabel, H. Leeb, EPJ Web of Conferences 111, 09001 (2016)
- (15) R.H. Byrd, P. Lu, J. Nocedal, C. Zhu, SIAM Journal on Scientific Computing 16, 1190 (1995)
- (16) IAEA benchmark 2010, https://www-nds.iaea.org/spallations/
- (17) J.C. David, The European Physical Journal A 51 (2015)
- (18) MYRRHA: An innovative research installation, http://sckcen.be/en/Technology_future/MYRRHA
- (19) R Development Core Team, R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, Vienna, Austria, 2008), ISBN 3-900051-07-0
- (20) W.B. Amian, R.C. Byrd, C.A. Goulding, M.M. Meier, G.L. Morgan, C.E. Moss, D.A. Clark, Nuclear Science and Engineering 112, 78 (1992)
- (21) T. Nakamoto, K. Ishibashi, N. Matsufuji, N. Shigyo, K. Maehata, S.i. Meigo, H. Takada, S. Chiba, M. Numajiri, T. Nakamura et al., Journal of Nuclear Science and Technology 32, 827 (1995)
- (22) K. Ishibashi, H. Takada, T. Nakamoto, N. Shigyo, K. Maehata, N. Matsufuji, S.i. Meigo, S. Chiba, M. Numajiri, Y. Watanabe et al., Journal of Nuclear Science and Technology 34, 529 (1997)