# An adaptive minimum spanning tree multi-element method for uncertainty quantification of smooth and discontinuous responses

## Abstract

A novel approach for non-intrusive uncertainty propagation is proposed. Our approach overcomes the limitation of many traditional methods, such as generalised polynomial chaos methods, which may lack sufficient accuracy when the quantity of interest depends discontinuously on the input parameters. As a remedy we propose an adaptive sampling algorithm based on minimum spanning trees, combined with a domain decomposition method based on support vector machines. The minimum spanning tree determines new sample locations based on both the probability density of the input parameters and the gradient of the quantity of interest. The support vector machine efficiently decomposes the random space into multiple elements, avoiding the appearance of Gibbs phenomena near discontinuities. On each element, local approximations are constructed by means of least orthogonal interpolation, in order to produce stable interpolation on the unstructured sample set. The resulting minimum spanning tree multi-element method does not require initial knowledge of the behaviour of the quantity of interest and automatically detects whether discontinuities are present. We present several numerical examples that demonstrate the accuracy, efficiency and generality of the method.

###### keywords:

Multi-element method, Stochastic collocation, Adaptive sampling, Discontinuous functions, Support vector machines, Minimum spanning trees, Least orthogonal interpolation, Domain decomposition, Fluid dynamics


## 1 Introduction

Uncertainty quantification (UQ) has become increasingly important for complex engineering applications. Determining and quantifying the influence of parametric and model-form uncertainties is essential for a wide range of applications: from turbulent flow phenomena (1); (2), aerodynamics (3) and biology (4); (5) to design optimisation (6); (7); (8). We are interested, among others, in liquid impact problems (9); (10); (11).

For problems with a complex underlying model, one often uses so-called non-intrusive methods, also known as sampling methods: the model is solved deterministically a number of times, and a stochastic solution is constructed from these deterministic samples. A well-known sampling method for propagating uncertainties through a model is the Monte Carlo method (12). Despite its easy implementation and wide applicability, the Monte Carlo method suffers from slow convergence of the approximated Quantity of Interest (QoI) with increasing number of model evaluations. As a consequence of this slow convergence rate, many samples are required to obtain high-quality stochastic solutions, which makes Monte Carlo methods unsuitable for problems with an expensive underlying model. As an alternative, Stochastic Collocation (SC) methods (13); (14); (15) were introduced, which replace the slow convergence of Monte Carlo by an exponential convergence rate and thereby reduce the number of samples required to achieve a given accuracy. For a QoI that depends smoothly on the uncertainties, fast convergence is achieved. However, if the QoI is highly non-linear or discontinuous, Gibbs phenomena (16) may occur, which deteriorate the accuracy globally. To avoid the occurrence of Gibbs phenomena, several alternatives to SC methods were introduced (17); (18), but they focus solely on discontinuous QoIs, leading to a significant increase in the number of samples needed to approximate smooth QoIs. One interesting method is the Multi-Element Stochastic Collocation method (ME-SC) (19); (20). The idea of ME-SC is to decompose the domain spanned by the uncertainties into smaller non-overlapping elements, in each of which the QoI is amenable to an SC method.
Gibbs phenomena still appear in the elements where there is a discontinuity in the QoI, but they are confined to these specific elements. Improving the multi-element approach is an active field of research, focusing on more efficient and robust domain decomposition. Jakeman et al. (21) proposed the minimal multi-element method, which uses polynomial annihilation to detect discontinuities and divides the domain along them. As a result, the Gibbs phenomena are removed completely. Although this discontinuity detection algorithm is accurate, the total number of samples needed to determine the discontinuity location is still too high when sampling the model is expensive. Therefore, a discontinuity detection algorithm which performs well for a lower number of samples was proposed by Gorodetsky et al. (22). This algorithm uses polynomial annihilation in combination with Support Vector Machines (SVMs) to divide the domain into elements along the so-called SVM classification boundary. Even though both discontinuity detection algorithms (21); (22) perform well, using them for approximating smooth QoIs can become prohibitively expensive, as both methods focus solely on finding the discontinuities. It is often unknown in advance whether a QoI is smooth or discontinuous, and choosing a method suited to only one of the two is not recommended. Our goal is to create a surrogate model that works for both smooth and discontinuous QoIs and which requires no initial knowledge about the QoI.

A novel domain decomposition method in combination with adaptive sampling of the QoI is used for constructing this surrogate. The adaptive sampling procedure in our method is based on Minimum Spanning Trees (MST) (23); (24), which add new sample points at places which are associated with a high probability density and/or where the QoI changes rapidly. The adaptively placed samples are classified and an SVM (22); (25); (26); (27); (28) is used to obtain a classification boundary, which serves as an approximation for the discontinuity location. The decomposition of the random space in this way leads to elements on which each local QoI is amenable for interpolation without Gibbs phenomena. For constructing a surrogate model in each element a least orthogonal interpolant (29) is employed, which is suited for interpolation on the scattered data points that we obtain with our adaptive sampling. Our proposed method is abbreviated as MST-ME (Minimum Spanning Tree Multi-Element) method and is designed mainly for the purpose of uncertainty propagation. However, when assuming uniformly distributed probability density functions for the input parameters, the MST-ME method is also well suited to obtain a parametric solution of the partial differential equation (PDE) under consideration. This will also be illustrated in this paper.

This paper is outlined as follows: section 2 briefly introduces the problem. Section 3 introduces the MST-ME method in detail. Finally, section 4 demonstrates efficiency and accuracy of our method when applied to analytical test-cases. Complex test-cases related to sloshing impact problems, i.e., shallow water dam break and 3D dam break, are also discussed.

## 2 Problem Description

Quantifying the effects of uncertainties in computational engineering typically consists of three steps: (i) the input uncertainties are characterised in terms of a probability density function (PDF), which follows from observations or physical evidence; (ii) the uncertainties are propagated through the model; (iii) the outputs are post-processed, where the QoI is expressed in terms of its statistical properties. In the present work we focus on the propagation step, and the input distributions are assumed to be given. The goal is to solve the following stochastic problem:

$$\mathcal{L}\left(\boldsymbol{\xi};\, u(\boldsymbol{\xi})\right) = 0, \qquad (1)$$

where $\boldsymbol{\xi} = (\xi_1, \ldots, \xi_d)$ is the $d$-dimensional vector containing the uncertain inputs, $u(\boldsymbol{\xi})$ the QoI and $\mathcal{L}$ an operator, which represents the model. The operator $\mathcal{L}$ can be a (non-)linear partial differential operator, or any mathematical model that relates the input $\boldsymbol{\xi}$ to the QoI $u$. Both continuous and discontinuous QoIs are considered in this paper. The support set $\Omega \subset \mathbb{R}^d$ of the uncertain inputs is referred to as the random space. The uncertain inputs are assumed to be characterised by a joint PDF $\rho(\boldsymbol{\xi})$. The stochastic problem is solved non-intrusively by sampling the model (1) at different locations in the random space, i.e.:

$$\mathcal{L}\left(\boldsymbol{\xi}^{(k)};\, u^{(k)}\right) = 0, \qquad k = 1, \ldots, N, \qquad (2)$$

where $u^{(k)} = u(\boldsymbol{\xi}^{(k)})$ is the sampled solution at the collocation node $\boldsymbol{\xi}^{(k)}$. Since the sampling is non-intrusive, black-box solvers for the operator $\mathcal{L}$ can be used. In this paper we are interested in finding the entire functional relation $u(\boldsymbol{\xi})$ as a function of the uncertainties $\boldsymbol{\xi}$, in terms of a surrogate model. A surrogate model $\tilde{u}$ of $u$ is constructed by interpolation, such that:

$$\tilde{u}\left(\boldsymbol{\xi}^{(k)}\right) = u\left(\boldsymbol{\xi}^{(k)}\right), \qquad k = 1, \ldots, N. \qquad (3)$$

When $u$ is smooth, it is possible to construct an approximation $\tilde{u}$ which converges exponentially fast to the exact solution $u$. However, if the QoI exhibits highly non-linear or discontinuous behaviour, then the accuracy of the approximation deteriorates globally, due to Gibbs phenomena. Multi-element methods divide the random space $\Omega$ into a set of smaller elements $\Omega_j$, $j = 1, \ldots, N_e$, such that the negative impact of the Gibbs phenomena is confined to a limited number of elements surrounding the discontinuity. The elements are non-overlapping and span the entire random space, i.e.:

$$\Omega = \bigcup_{j=1}^{N_e} \Omega_j, \qquad \Omega_i \cap \Omega_j = \emptyset \;\; \text{for } i \neq j. \qquad (4)$$

A local surrogate $\tilde{u}_j$ is constructed in each $\Omega_j$:

$$\tilde{u}_j\left(\boldsymbol{\xi}^{(k)}\right) = u\left(\boldsymbol{\xi}^{(k)}\right), \qquad \boldsymbol{\xi}^{(k)} \in \Omega_j. \qquad (5)$$

The global surrogate is given by patching the local surrogate models:

$$\tilde{u}(\boldsymbol{\xi}) = \sum_{j=1}^{N_e} \mathbb{1}_{\Omega_j}(\boldsymbol{\xi})\, \tilde{u}_j(\boldsymbol{\xi}), \qquad (6)$$

where $\mathbb{1}_{\Omega_j}$ is the indicator function satisfying $\mathbb{1}_{\Omega_j}(\boldsymbol{\xi}) = 1$ if $\boldsymbol{\xi} \in \Omega_j$ and 0 otherwise. Standard multi-element methods utilise a tensor construction of hypercubes for defining the elements (19); (20). While such a tensor construction removes the global effect of Gibbs phenomena, they can still appear locally in several elements.
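To make the patching of local surrogates concrete, here is a minimal Python sketch for a hypothetical 1D response with a jump. The two known elements and the simple Chebyshev fits are illustrative stand-ins for the paper's SVM-based decomposition and least orthogonal interpolation:

```python
import numpy as np

# Hypothetical 1D QoI with a jump at xi = 0.5; the random space [0, 1]
# is split into two known elements on either side of the jump.
u = lambda xi: np.sin(2 * np.pi * xi) + (xi >= 0.5)

elements = [(0.0, 0.5), (0.5, 1.0)]
local_fits = []
for a, b in elements:
    # a few sample nodes strictly inside the element
    nodes = np.linspace(a, b, 5, endpoint=False) + (b - a) / 10
    local_fits.append(np.polynomial.Chebyshev.fit(nodes, u(nodes), deg=4))

def surrogate(xi):
    """Global surrogate: indicator-weighted sum of local surrogates, cf. (6)."""
    xi = np.atleast_1d(np.asarray(xi, dtype=float))
    out = np.zeros_like(xi)
    for (a, b), fit in zip(elements, local_fits):
        mask = (xi >= a) & (xi < b)  # indicator function of the element
        out[mask] = fit(xi[mask])
    return out

print(np.abs(surrogate([0.25, 0.75]) - u(np.array([0.25, 0.75]))))
```

Because each local fit only ever sees samples from one side of the jump, no Gibbs oscillations pollute the smooth elements.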

An example of the standard tensorised multi-element approach for the approximation of a 2D function is shown in figure 1. Gibbs phenomena appear in those elements where a discontinuity is present in the exact function.

## 3 Minimum Spanning Tree Multi-Element Method

The art of an efficient multi-element stochastic collocation method lies in the choice of the sampling points $\boldsymbol{\xi}^{(k)}$, the choice of the elements $\Omega_j$, and the construction of the local approximations $\tilde{u}_j$. These three choices are the focus of this paper. Accordingly, the MST-ME method introduced here is divided into three stages: (I) sampling, (II) domain decomposition and (III) local approximation construction:

I (choice of $\boldsymbol{\xi}^{(k)}$) Adaptive sampling of the QoI, taking into account both smooth and discontinuous regions, and the underlying PDF.

II (choice of $\Omega_j$) Division of the random space into a minimal number of elements, such that the QoI is smooth within each element.

III (construction of $\tilde{u}_j$) Interpolation of the samples, while producing a stable interpolant.

The QoI is adaptively sampled (I) by taking into account both the PDF and the QoI gradient information. The samples are distributed among different classes, such that within each class the local QoI is smooth. The classified samples are given as input to the domain classification algorithm (II). Instead of using a tensor based domain decomposition, the random space is divided into a minimal number of elements, in which the QoI is amenable for interpolation without Gibbs phenomena. This domain decomposition methodology was already introduced in (21), but in that work the number of samples required for determining proper elements was too high. In contrast, our domain decomposition method (II) uses the samples from the sampling algorithm (I) for determining proper elements, without the need to perform additional sampling. Local approximations (III) are constructed in each element, by using the least orthogonal interpolation method (29). The global approximation, the surrogate model, is given by the patched local approximations (6).

Our method distinguishes itself from other methods, such as (18); (21), by automatically detecting whether a QoI is smooth or discontinuous, while not focusing solely on either the discontinuity or the smooth regions. The nodes are placed by considering both smooth and discontinuous characteristics of the QoI, which results in sample locations that resolve both smooth and discontinuous regions. The combination of these sample locations and an SVM leads to accurate discontinuity detection, without placing samples in parts of the random space where they are not needed.

### I Sampling Algorithm

The main idea behind our adaptive sampling algorithm is that we want to refine our surrogate model based on the QoI behaviour and the associated PDF of the random input variables. We achieve this by creating a graph that links the samples, and by assigning weights to edges of this graph, constructing an MST, and then adding samples on the most important edges of this MST.

The methodology is explained for a 2D QoI, but can be generalised easily to a higher-dimensional QoI.

Initial sample placement

The sampling procedure is started by placing initial sample points. Straightforward choices of the initial sample locations are shown in figure 2. These initial sample configurations are extensible to high-dimensional random spaces. Initial sample placement in both figure 2a and figure 2b introduces an anisotropy in the placement of subsequent samples. The initial sample placement in figure 2c is a good trade-off between the number of initial samples and the isotropy of subsequent samples.

In this paper, the initial sample grid shown in figure 2c is employed, unless stated otherwise.

Graph construction with neighbouring samples

The existing samples are connected based on a Voronoi diagram construction (30). A Voronoi diagram is a partitioning of a $d$-dimensional space into regions based on the distance to a specific set of samples (31). Each Voronoi cell contains one sample, and the cell corresponds to all the points that are closer to this sample than to any other sample. The Voronoi diagram for the initial sample configuration is shown in figure 3 (left).

The boundary of each Voronoi cell contains several vertices. At each of these vertices the neighbouring Voronoi cells are determined. The samples generating these neighbouring cells are then connected to obtain a local graph, see figure 3 (middle). This local graph is similar to a Delaunay triangulation, but gives more isotropic behaviour on regular grids. The local graphs are subsequently connected together to obtain a global graph, see figure 3 (right).
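As an illustrative sketch (using SciPy, and a hypothetical 5-point initial configuration), a neighbour graph can be obtained from a Delaunay triangulation, whose edges connect exactly those samples whose Voronoi cells are adjacent:

```python
import numpy as np
from scipy.spatial import Delaunay

# corners of [0, 1]^2 plus the centre: a hypothetical initial configuration
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])

tri = Delaunay(pts)
edges = set()
for simplex in tri.simplices:
    # each 2D simplex (triangle) contributes its three edges to the graph
    for a in range(3):
        for b in range(a + 1, 3):
            edges.add(tuple(sorted((int(simplex[a]), int(simplex[b])))))

print(sorted(edges))  # 4 boundary edges plus 4 "spokes" to the centre
```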

Assignment of weights to edges

Not all the edges in the total graph are equally attractive for placing new samples. Edges that are long, that show a large variation in QoI values and/or that lie in a region with a high PDF value, are good candidates for refinement. A weight function assigns a weight $w(\boldsymbol{\xi}^{(i)}, \boldsymbol{\xi}^{(j)})$ to the edge between the two graph vertices $\boldsymbol{\xi}^{(i)}$ and $\boldsymbol{\xi}^{(j)}$. A low weight means that the edge is a good candidate for refinement and vice versa. Weighing based solely on either the gradient (21) or the PDF (32); (33) is the most straightforward:

$$w_{\rho}\left(\boldsymbol{\xi}^{(i)}, \boldsymbol{\xi}^{(j)}\right) = \left[\rho\left(\boldsymbol{\xi}^{(ij)}\right)\right]^{-1}, \qquad (7)$$

$$w_{\nabla}\left(\boldsymbol{\xi}^{(i)}, \boldsymbol{\xi}^{(j)}\right) = \left[\frac{\left|u\left(\boldsymbol{\xi}^{(i)}\right) - u\left(\boldsymbol{\xi}^{(j)}\right)\right|}{\left\|\boldsymbol{\xi}^{(i)} - \boldsymbol{\xi}^{(j)}\right\|_{2}}\right]^{-1}, \qquad (8)$$

where $\rho$ is the PDF, evaluated at the edge midpoint $\boldsymbol{\xi}^{(ij)} = \left(\boldsymbol{\xi}^{(i)} + \boldsymbol{\xi}^{(j)}\right)/2$, and $\left\|\cdot\right\|_{2}$ is the Euclidean distance. These weight functions are not always efficient, as they place most samples in either the smooth or the discontinuous regions. We therefore propose a new weight function that takes into account both the PDF and the gradient of the QoI:

$$w\left(\boldsymbol{\xi}^{(i)}, \boldsymbol{\xi}^{(j)}\right) = \left[\hat{\rho}\left(\boldsymbol{\xi}^{(ij)}\right) + \left|\widehat{\nabla u}\right|\left(\boldsymbol{\xi}^{(i)}, \boldsymbol{\xi}^{(j)}\right)\right]^{-1}, \qquad (9)$$

in which the hats denote separately normalised quantities, such that both the PDF term and the gradient term contribute equally to the weight. We will compare this new weight function to the weight functions (7) and (8) in section 4. We multiply each weight function with the length of the edge, to account for regions in the random space that contain only few samples. After weighing all the edges, we normalise by dividing by the maximum weight. When using (9), the PDF term and the gradient term are normalised separately, before adding them together and taking the reciprocal. The resulting weights are normalised again, to ensure values in $(0, 1]$.
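A hedged Python sketch of this combined weighting (the exact normalisation in the MST-ME method may differ; the function names and toy values are illustrative):

```python
import numpy as np

def edge_weights(pts, u_vals, edges, pdf):
    # PDF evaluated at edge midpoints, gradient estimated along each edge
    mids = np.array([(pts[i] + pts[j]) / 2 for i, j in edges])
    dists = np.array([np.linalg.norm(pts[i] - pts[j]) for i, j in edges])
    grads = np.abs(np.array([u_vals[i] - u_vals[j] for i, j in edges])) / dists
    # multiply both terms by the edge length to favour poorly resolved regions
    rho_t = pdf(mids) * dists
    grad_t = grads * dists
    # normalise the two terms separately, add, take the reciprocal
    w = 1.0 / (rho_t / rho_t.max() + grad_t / (grad_t.max() + 1e-12) + 1e-12)
    return w / w.max()  # final normalisation: low weight = refine first

pts = np.array([[0.0], [0.5], [1.0]])
u_vals = np.array([0.0, 0.0, 1.0])  # jump between the last two samples
edges = [(0, 1), (1, 2)]
uniform_pdf = lambda m: np.ones(len(m))
w = edge_weights(pts, u_vals, edges, uniform_pdf)
print(w)  # the high-gradient edge gets the lower weight
```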

Minimum spanning tree for refinement of sample grid

New samples are placed at the midpoints of edges that have a sufficiently low weight. However, if all edges with sufficiently low weight are refined, undesirable clustering of samples may occur at early stages of the sampling procedure.

To prevent this, a Minimum Spanning Tree (MST) is used to obtain a subset of edges that reaches all the samples with a minimal total edge weight. The most important edges are contained in this MST, which still explores a significant portion of the random space, and thereby the undesirable sample clustering at early stages is prevented. An edge $e$ in the MST is refined if its weight $w_e$ is sufficiently low compared to the minimum weight among all edges, $w_{\min}$:

$$w_{e} \leq \tau\, w_{\min}, \qquad (10)$$

where $\tau \geq 1$. The value of $\tau$ determines how many samples are added each iteration, i.e., low values of $\tau$ result in a low number of samples added and vice versa. Low values of $\tau$ produce the most accurate results, but many iterations are needed to reach a specified total number of samples, and iterations can become prohibitively expensive in high-dimensional random spaces. Therefore, in this paper we set $\tau$ to a fixed intermediate value, as a trade-off between the number of samples added per iteration and the number of iterations to be performed. The edge with minimum weight is not necessarily included in the MST; if it is not, it is added, to prevent the sampling algorithm from adding no samples at all.
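The MST step can be sketched with SciPy (the graph, the weights and the trade-off parameter `tau` below are illustrative, not the paper's values):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# toy weighted graph on 4 samples; low weight = attractive for refinement
n = 4
rows, cols = [0, 0, 1, 2], [1, 2, 3, 3]
weights = [0.2, 1.0, 0.4, 0.9]
graph = csr_matrix((weights, (rows, cols)), shape=(n, n))

mst = minimum_spanning_tree(graph)

# refine only MST edges whose weight is close enough to the global minimum;
# tau is a hypothetical trade-off parameter
tau = 3.0
w_min = min(weights)
refine = [(int(i), int(j)) for i, j in zip(*mst.nonzero())
          if mst[i, j] <= tau * w_min]
print(refine)  # new samples would be placed at the midpoints of these edges
```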

Complete sampling algorithm

The complete sampling strategy (I) is an iterative procedure, which is illustrated in figure 4. The procedure starts with choosing the initial sample points. Next, we loop over the three steps: graph construction, edge weighing and MST edge refinement. The loop is terminated when the specified number of iterations has been performed or when the total number of sample points exceeds a specified threshold $N_{\max}$.


### Ii Domain Decomposition

The idea of the domain decomposition step in our method is to divide the random space into non-intersecting elements $\Omega_j$, such that the sampled QoI values from section I exhibit smooth behaviour locally in each element. The elements are constructed by first classifying the QoI values according to the QoI gradients. Second, the sample classes are separated by means of a classification boundary based on SVMs. This classification boundary cuts the random space into several elements.

Sample classification based on QoI gradients

An SVM determines a classification boundary based on a set of classified samples. Since a classification boundary is an approximation to a discontinuity in the QoI, the classification is based on the gradient between two neighbouring samples. A schematic representation of the classification procedure is shown in figure 5. The choice of the threshold in this procedure is crucial. We have observed that the procedure from (22) works very well, and it is therefore employed in this paper. It uses polynomial annihilation to estimate a jump value across an edge and assigns two points to the same class if the difference in their function values is less than the estimated jump value. An in-depth discussion of polynomial annihilation can be found in (34).
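A stripped-down sketch of the classification idea, with a fixed jump threshold standing in for the polynomial-annihilation estimate of (22); classes then follow as connected components of the "smooth" edges:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# 1D samples of a function with a jump at xi = 0.5
xi = np.linspace(0, 1, 9)
u = np.where(xi < 0.5, 0.0, 1.0)

# connect neighbouring samples only when the jump in u across the edge is
# small; the fixed threshold is a crude stand-in for the polynomial-
# annihilation jump estimate used in the actual procedure
threshold = 0.5
rows, cols = [], []
for i in range(len(xi) - 1):
    if abs(u[i + 1] - u[i]) < threshold:
        rows.append(i)
        cols.append(i + 1)
adj = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(len(xi), len(xi)))

n_classes, labels = connected_components(adj, directed=False)
print(n_classes, labels)  # two classes: one on each side of the jump
```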

Classification boundary from support vector machines

SVM is a supervised machine learning technique, used for building a classification boundary between samples that belong to different classes (22); (28). SVM is used in this paper as it is defined by a convex optimisation problem for which efficient methods are available (35), making it viable for high-dimensional problems.

Assume that $N$ adaptive samples have been classified into classes $C_1, \ldots, C_{N_c}$, where $C\left(\boldsymbol{\xi}^{(k)}\right)$ denotes the class belonging to sample point $\boldsymbol{\xi}^{(k)}$. The idea behind an SVM is to construct a classifier of the form:

$$f(\boldsymbol{\xi}) = \sum_{k=1}^{N_{sv}} \alpha_{k}\, K\left(\boldsymbol{\xi}^{(k)}, \boldsymbol{\xi}\right) + b, \qquad (11)$$

where $\alpha_k$ is the coefficient associated with the sample point $\boldsymbol{\xi}^{(k)}$, $b$ a regularisation parameter, $N_{sv}$ the number of support vectors and $K$ a kernel (27). If $\alpha_k \neq 0$, then $\boldsymbol{\xi}^{(k)}$ is a support vector. Depending on the application, different kernels are available (28):

$$K\left(\boldsymbol{\xi}, \boldsymbol{\xi}'\right) = \left\langle \boldsymbol{\xi}, \boldsymbol{\xi}' \right\rangle, \qquad (12)$$

$$K\left(\boldsymbol{\xi}, \boldsymbol{\xi}'\right) = \left(\gamma \left\langle \boldsymbol{\xi}, \boldsymbol{\xi}' \right\rangle + r\right)^{p}, \qquad (13)$$

$$K\left(\boldsymbol{\xi}, \boldsymbol{\xi}'\right) = \exp\left(-\gamma \left\|\boldsymbol{\xi} - \boldsymbol{\xi}'\right\|_{2}^{2}\right), \qquad (14)$$

$$K\left(\boldsymbol{\xi}, \boldsymbol{\xi}'\right) = \tanh\left(\gamma \left\langle \boldsymbol{\xi}, \boldsymbol{\xi}' \right\rangle + r\right), \qquad (15)$$

where $\left\langle \cdot, \cdot \right\rangle$ is the standard inner product in $\mathbb{R}^{d}$, $\gamma$ a regularisation constant, $r$ a translation constant and $p$ the polynomial degree. Radial basis function kernels (14) are commonly used and are also employed in this paper. A default value for the constant $\gamma$ is advised in (35), but we will also investigate other choices in the results section of this paper. High values of $\gamma$ result in a classification boundary with a fine resolution, but this may result in overfitting. Low values of $\gamma$ result in a coarser estimate of the location of the classification boundary. The optimal value of $\gamma$ differs between functions. In section 4 an optimal value of $\gamma$ is determined for a specific family of functions that is important for our test-cases. Once $\gamma$ has been chosen, the classifier is obtained by solving the following least-squares problem:
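For concreteness, the four kernels can be written out in Python (assuming LIBSVM's standard parameterisation; the parameter values are placeholders):

```python
import numpy as np

# The four standard LIBSVM kernels (linear, polynomial, RBF, sigmoid);
# gamma, r and p are the kernel parameters described in the text
def linear(x, z):
    return np.dot(x, z)

def poly(x, z, gamma=1.0, r=0.0, p=3):
    return (gamma * np.dot(x, z) + r) ** p

def rbf(x, z, gamma=1.0):
    d = x - z
    return np.exp(-gamma * np.dot(d, d))

def sigmoid(x, z, gamma=1.0, r=0.0):
    return np.tanh(gamma * np.dot(x, z) + r)

x, z = np.array([1.0, 0.0]), np.array([1.0, 0.0])
print(linear(x, z), rbf(x, z))  # 1.0 1.0
```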

$$\min_{\boldsymbol{\alpha},\, b}\; \sum_{k=1}^{N} \left( y_{k} - f\left(\boldsymbol{\xi}^{(k)}\right) \right)^{2} + \lambda \left\|\boldsymbol{\alpha}\right\|_{2}^{2}, \qquad (16)$$

where $y_k$ denotes the class label of $\boldsymbol{\xi}^{(k)}$ and $\lambda$ is a regularisation parameter. The classification boundary is given by the zero-contour of $f$. It separates the different classes from each other with a hypersurface and is obtained with the LIBSVM library (35). The classification boundary decomposes the domain into several elements. Figure 6 shows an example of a classification boundary for two different classes. SVM can deal with multiple classes, and hence multiple discontinuities, quite easily, which makes it a suitable discontinuity finder for a wide range of QoIs. SVM in combination with the classification procedure is, however, not yet able to detect discontinuities that run only partially through the domain.
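A sketch of the boundary construction using scikit-learn's `SVC` (which wraps LIBSVM); the circular test function, the sample count and the `gamma` value are all illustrative:

```python
import numpy as np
from sklearn.svm import SVC  # scikit-learn's SVC wraps LIBSVM

# samples classified on either side of a circular "discontinuity"
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 0.5).astype(int)

clf = SVC(kernel="rbf", gamma=2.0).fit(X, y)

# the zero contour of the decision function approximates the discontinuity;
# each side of it becomes one element of the decomposition
test_pts = np.array([[0.1, 0.1], [0.9, 0.9]])
print(clf.predict(test_pts))
```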

### Iii Local Approximations

The elements resulting from the domain decomposition (II) are arbitrarily shaped and the samples are distributed in such a way that interpolation is not a trivial task. Least orthogonal interpolation is able to perform interpolation on sample distributions on such arbitrarily shaped domains. For an in-depth discussion of least orthogonal interpolation, see (29). The sampling strategy (I) does not necessarily choose points that are optimal for interpolation. Constructing an interpolant on the full set of interpolation nodes may therefore produce unstable interpolants. We use an extended version of the original least orthogonal interpolation, which selects a subset of samples that is better suited for stable interpolation (21). This enables the MST-ME method to place sample points in the random space where we want to further resolve the QoI, without compromising the stability of the interpolation.

We denote the least orthogonal interpolation operator by $\mathcal{I}_{\Omega_j}$, which operates on a subset of the samples contained in $\Omega_j$, and we assume that the random space is decomposed into $N_e$ elements $\Omega_j$. Each element comprises a single class $C_j$, which consists of the samples $\boldsymbol{\xi}^{(k)} \in \Omega_j$. The global approximation is given by, see equation (6):

$$\tilde{u}(\boldsymbol{\xi}) = \sum_{j=1}^{N_e} \mathbb{1}_{\Omega_j}(\boldsymbol{\xi})\; \mathcal{I}_{\Omega_j} u(\boldsymbol{\xi}), \qquad (17)$$

where $\mathbb{1}_{\Omega_j}$ is the indicator function satisfying $\mathbb{1}_{\Omega_j}(\boldsymbol{\xi}) = 1$ if $\boldsymbol{\xi} \in \Omega_j$ and 0 otherwise.

### Iv Complete Algorithm

The complete MST-ME method, which comprises: sampling (I), domain decomposition (II) and local approximation construction (III), is shown in figure 7. Apart from the weight function and sampling threshold, no additional input from the user is required, which makes the method suitable for generic problems.

Computational cost for $N$ samples in a $d$-dimensional random space

The computational cost of the sampling strategy (I) is determined by the cost of computing the Voronoi diagrams and finding the MST. Computing a Voronoi diagram on $N$ samples in $\mathbb{R}^d$ can be done in $\mathcal{O}\left(N \log N + N^{\lceil d/2 \rceil}\right)$ time (36). In the worst-case scenario, only a single sample is added in each iteration, and hence $\mathcal{O}(N)$ Voronoi diagrams on at most $N$ samples have to be computed. The maximum computational cost of the Voronoi construction is therefore $\mathcal{O}\left(N^2 \log N + N^{\lceil d/2 \rceil + 1}\right)$. Computing the MST can be done with Prim's algorithm (37), which has an algorithmic complexity of $\mathcal{O}\left(N_E \log N_s\right)$, where $N_E$ is the total number of edges and $N_s$ the total number of samples. Upper bounds for $N_E$ and $N_s$ are $N^2$ and $N$, respectively. Again, in the worst-case scenario $\mathcal{O}(N)$ iterations have to be performed, which can be done in $\mathcal{O}\left(N^3 \log N\right)$ time.

The computational cost of the domain decomposition (II) is based on the complexity of the classification procedure and the SVM. Classification of the samples requires a number of operations proportional to the number of edges, i.e., at most $\mathcal{O}\left(N^2\right)$. The SVM training has a complexity ranging between $\mathcal{O}\left(N^2\right)$ and $\mathcal{O}\left(N^3\right)$, depending on the number of classes and the kernel (35). Hence, the domain decomposition can be performed in $\mathcal{O}\left(N^3\right)$ time and is independent of $d$.

The local approximation (III) uses LU and QR-decomposition for determining the interpolant. Computing the QR-decomposition is the dominant factor in (III), and it can be done in $\mathcal{O}\left(N^3\right)$ time, using the standard implementation in Matlab. If least orthogonal interpolation is employed in an adaptive fashion, the complexity increases to $\mathcal{O}\left(N^4\right)$, as $\mathcal{O}(N)$ QR-decompositions are computed in the worst case.

The complexity of the complete algorithm is determined by the complexities of (I), (II) and (III). This results in an overall complexity of $\mathcal{O}\left(N^{\lceil d/2 \rceil + 1}\right)$ if $\lceil d/2 \rceil + 1 > 4$, and $\mathcal{O}\left(N^4\right)$ otherwise.

## 4 Results

In this section we present multiple examples that illustrate the robustness and flexibility of the MST-ME method. For computing the error between the exact solution and the approximation we use the following weighted $L^2$-norm:

$$\epsilon = \left( \frac{1}{N_{MC}} \sum_{k=1}^{N_{MC}} \rho\left(\boldsymbol{\xi}^{(k)}\right) \left( \tilde{u}\left(\boldsymbol{\xi}^{(k)}\right) - u\left(\boldsymbol{\xi}^{(k)}\right) \right)^{2} \right)^{1/2}, \qquad (18)$$

where the surrogate model $\tilde{u}$ is constructed using MST-ME and evaluated at $N_{MC}$ Monte Carlo sample locations in the random space. The exact solution $u$ is the model evaluated at the same Monte Carlo samples. We weight the difference between the surrogate and the exact solution with the PDF $\rho$, to emphasise regions in the random space that are likely to occur. Monte Carlo samples whose distance to the discontinuity is smaller than the minimum distance between the adaptive MST-ME samples are discarded. Discarding these samples is motivated by the fact that they lie below the resolution of the SVM discontinuity detection, where the fidelity of the classification is questionable (21).
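The error measure can be sketched as a Monte Carlo estimate (the function names and the toy surrogate are illustrative; the discarding of near-discontinuity samples is omitted for brevity):

```python
import numpy as np

def weighted_l2_error(surrogate, exact, pdf, n_mc=10000, d=1, seed=0):
    """PDF-weighted L2 distance between surrogate and exact response,
    estimated with n_mc Monte Carlo samples on [0, 1]^d, cf. (18)."""
    xi = np.random.default_rng(seed).uniform(0.0, 1.0, size=(n_mc, d))
    diff = surrogate(xi) - exact(xi)
    return np.sqrt(np.mean(pdf(xi) * diff ** 2))

exact = lambda xi: np.sin(2 * np.pi * xi).ravel()
surrogate = lambda xi: exact(xi) + 0.01     # toy surrogate with uniform bias
uniform_pdf = lambda xi: np.ones(len(xi))
print(weighted_l2_error(surrogate, exact, uniform_pdf))  # ~0.01
```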

The first example shows the approximation of three 2D, piecewise constant functions. This example focuses solely on the domain classification and shows convergence of the SVM domain decomposition (II). A second example shows the approximation of 1D and multidimensional Genz functions (38). The third test-case shows the MST-ME method applied to a more complicated model, which is defined by an underlying PDE, namely, the shallow water equations. Lastly, we apply the MST-ME method to a 3D dam break problem governed by incompressible fluid-flow equations, to show that our methodology can be used for a complex engineering problem.

### 4.1 Domain Classification

The accuracy of the SVM domain decomposition (II) is investigated as a function of the kernel parameter $\gamma$.

SVM domain decomposition works best for discontinuities without corners

The SVM domain decomposition (II) is tested by approximating three piecewise constant 2D functions. The functions have a discontinuity in the shape of a circle, a rectangle and a triangle, respectively. The sampling algorithm (I), with the weight function that focuses solely on the gradient (8), is used to determine the sample locations. The SVM domain decomposition uses the radial basis function kernel (14), in which $\gamma$ is set to the value advised in (35), which depends on the number of classes. The sample locations for the circle test-case are shown in figure 8 (left), along with the definition of the error measure (the misclassified portion of the domain). Clustering of samples appears around the circular discontinuity, because the gradient-based weight function (8) does not lead to refinement of edges that do not intersect the discontinuity. The sample locations initially show symmetry, but after 5 iterations the symmetry is slightly lost, although this is not noticeable in figure 8 (left).

The domain decomposition error as a function of the number of samples is shown in figure 8 (right).

The misclassified portion of the domain decreases rapidly with increasing number of samples, but is in general not monotonically decreasing. Discontinuity lines with sharp corners, in particular those of the square and the triangle, are in general harder for the SVM discontinuity detection to approximate than smooth discontinuity lines. The value of $\gamma$ may significantly influence the accuracy of the discontinuity approximation, and the advised value is in general not optimal. Therefore we search for a value of $\gamma$ that is optimal for the remaining test-cases.

An optimal value of $\gamma$ for our test-cases

We now investigate the optimal value of $\gamma$ by testing several candidate values on a large set of piecewise constant functions. The parameter $\gamma$ may influence the accuracy of the SVM domain classification. Many discontinuous QoIs in engineering applications possess a discontinuity without corners, i.e., the exact discontinuity surface is a smooth hypersurface. For such discontinuities, the advised value of $\gamma$ might not be optimal, and therefore we search for an optimal value among a set of candidates that are perturbations of the advised value. A set of piecewise constant functions, each possessing up to 3 discontinuities, is randomly generated. The discontinuity locations are given by up to 3 non-intersecting polynomial parametric curves of degree up to 5, with random coefficients. For each of these functions, an SVM domain decomposition is performed with each of the candidates for $\gamma$. This domain decomposition is performed on 50 adaptively sampled points from the MST-ME method, based on the weight function (8) and the initial configuration shown in figure 2c. We add up the correctly classified portions for each $\gamma$-candidate and divide by the number of generated functions, to obtain the average correctly classified portion over all randomly generated functions. The number of adaptive samples influences the average correctly classified portion, but a similar trend between the $\gamma$-candidates is obtained. The results are shown in figure 9.

Figure 9 shows which candidate value of $\gamma$ is the most accurate choice for the generated family of piecewise constant functions. The discrepancy between the value of $\gamma$ suggested in the literature and our value is possibly due to the fact that we limit ourselves to a family of discontinuous functions that possess no sharp corners in the discontinuity surface. Hence, our value might not be optimal for other families of discontinuous functions. The discontinuities considered in the remainder of this paper fall within this category, and this value of $\gamma$ is therefore used in the remainder of this paper.

### 4.2 Genz Functions Approximation

To illustrate its accuracy and efficiency, the proposed MST-ME method is applied to a standard benchmark problem, namely the approximation of 1D Genz functions (38).

Edge weighing based on PDF and gradient is most robust

We consider the following four Genz functions (oscillatory, Gaussian peak, $C^0$-continuous and discontinuous, respectively):

$$u_{1}(\xi) = \cos\left(2\pi w + a\,\xi\right), \qquad (19)$$

$$u_{2}(\xi) = \exp\left(-a^{2} \left(\xi - w\right)^{2}\right), \qquad (20)$$

$$u_{3}(\xi) = \exp\left(-a \left|\xi - w\right|\right), \qquad (21)$$

$$u_{4}(\xi) = \begin{cases} 0, & \xi > w, \\ \exp\left(a\,\xi\right), & \text{otherwise}, \end{cases} \qquad (22)$$

where $a$ is a shape parameter and $w$ a shift parameter (38).
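These standard 1D Genz test functions (38) are straightforward to reproduce; the shape and shift parameter values below are hypothetical placeholders:

```python
import numpy as np

a, w = 5.0, 0.5  # shape and shift parameters (hypothetical values)

genz = {
    "oscillatory":   lambda xi: np.cos(2 * np.pi * w + a * xi),
    "gaussian_peak": lambda xi: np.exp(-a ** 2 * (xi - w) ** 2),
    "continuous":    lambda xi: np.exp(-a * np.abs(xi - w)),  # C0, kink at w
    "discontinuous": lambda xi: np.where(xi > w, 0.0, np.exp(a * xi)),
}

xi = np.linspace(0, 1, 5)
for name, f in genz.items():
    print(name, f(xi))
```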

A uniform PDF is assumed and the initial grid consists of the two end points of the interval plus its middle point. As a reference, the MST-ME solution is compared with the stochastic collocation (SC) solution on a Gauss-Legendre grid.

Figure 10 shows that weighing based on the PDF only results in a uniformly spaced sample grid. Interpolation on such a grid is in general ill-advised, as it may produce unstable interpolants. The least orthogonal interpolation method partially circumvents this issue by choosing a subset of these samples for constructing the interpolant. Consequently, the smooth Genz functions $u_1$ and $u_2$ are well approximated, but $u_3$ and $u_4$ are not. Weighing based on the gradient alone leads to improved results for the discontinuous function $u_4$, but to less accurate results for the smooth functions $u_1$ and $u_2$. The standard stochastic collocation method performs well in the smooth cases, but converges slowly and also shows an oscillating error in some cases, because the Gauss-Legendre grid includes the middle point of the domain only for an odd number of samples. In contrast, the weight function based on both the PDF and the gradient performs best overall, by keeping track of the discontinuity location while still maintaining a sample distribution that resolves the parts of the random space away from the discontinuity. In the remainder of this paper we therefore use weighing based on both PDF and gradient, equation (9).

Notice that all weight functions show slow convergence for approximating the Genz function with a discontinuous first derivative. This is due to the absence of the second derivative in the weight function (9). By basing the classification on the second derivative of the QoI, the slow error convergence in the presence of a discontinuity in the first derivative could be circumvented. However, the remaining test-cases in this paper contain no discontinuities in the first derivative, and we therefore use classification based on the first derivative only in the remainder of this paper.

The underlying PDF changes the sample grid

The effect of the underlying PDF is now investigated. The two PDFs that we consider are a symmetric and an asymmetric beta-distribution with different shape parameters. The support of both PDFs is scaled to the interval of interest, and we use the uniform distribution as a reference. The resulting error convergence is shown in figure 11.

The error convergence is similar to the error convergence for the uniform distribution. Again the sample grid is not ideal for interpolation, but the adaptive least orthogonal interpolation circumvents this by using a subset of nodes.

Samples per dimension is constant

To investigate how the MST-ME method performs in multiple dimensions, the error is plotted for a smooth and a discontinuous Genz function with increasing dimension in figure 12 (left). These multidimensional Genz functions are tensor products of the 1D Genz functions. The error for each dimension is based on 1000 adaptively sampled points, with an initial sampling configuration equal to the one shown in figure 2c. The number of samples needed to attain a specific accuracy is also plotted as a function of the dimension of the random space (right).

Apart from the first point in the discontinuous case, the average number of samples per dimension is approximately constant, which means that the total number of samples grows exponentially with the dimension: the well-known curse of dimensionality. Since there is no strong change in the number of samples per dimension, the MST-ME method is applicable to problems with a low to medium number of uncertain inputs.
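The scaling can be made concrete with a quick back-of-the-envelope computation (the per-dimension sample count below is an arbitrary illustrative value):

```python
# If the number of samples per dimension, n = N**(1/d), stays roughly
# constant as the dimension d grows, the total number of samples
# N = n**d grows exponentially in d -- the curse of dimensionality.
n = 5  # illustrative samples per dimension
totals = {d: n**d for d in (1, 2, 3, 6)}
# e.g. 5 samples in 1D, 125 in 3D, 15625 in 6D
```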

### 4.3 Shallow Water dam break

We study the performance of the MST-ME method applied to a system of 1D conservation laws. This system consists of the 1D shallow water equations (SWEs), which describe the inviscid flow of a layer of fluid with free surface, under the action of gravity, with the thickness of the fluid layer small compared to the other length scales (39):

(23) \( \dfrac{\partial h}{\partial t} + \dfrac{\partial (hu)}{\partial x} = 0, \qquad \dfrac{\partial (hu)}{\partial t} + \dfrac{\partial}{\partial x}\!\left( h u^{2} + \tfrac{1}{2} g h^{2} \right) = 0, \)

where \(h\) is the free surface height (thickness of the fluid layer), \(u\) the velocity, and \(g\) the acceleration of gravity. The initial condition for the system of PDEs is given by:

(24) \( (h, u)(x, 0) = \begin{cases} (h_l, u_l), & x < x_0, \\ (h_r, u_r), & x \ge x_0, \end{cases} \)

leading to the Riemann problem shown in figure 13. For these initial conditions, the solution of the Riemann problem can be computed exactly on an infinite spatial domain (40). The solution consists of two characteristic waves travelling through the spatial domain, see figure 13; each wave is either a shock or a rarefaction wave. When solid boundary conditions are imposed at , an exact solution cannot be obtained for all initial conditions. We therefore employ a finite volume method with an exact Riemann solver (41) to compute the cell face fluxes, and solve the SWEs using 256 cells. A Crank-Nicolson scheme is used to integrate the SWEs in time and a ghost-cell method with reflective properties is used for the boundaries. The initial left state is assumed to be uncertain and uniformly distributed, i.e.:

(25) |

The uncertainty in the initial conditions is chosen large in order to obtain different characteristic behaviour of the QoI. The average thickness of the fluid layer is not small compared to the domain length, but this is not important for demonstrating the performance of the MST-ME method. The QoI is defined as the fluid height at the left boundary at a certain time. A schematic representation of this set-up is shown in figure 13.
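To make the set-up concrete, a minimal 1D shallow-water dam-break solver can be sketched as follows. Note the simplifications relative to the paper: we use a Rusanov (local Lax-Friedrichs) flux instead of the exact Riemann solver of (41), and explicit Euler time integration instead of Crank-Nicolson; the domain, resolution and initial states are illustrative.

```python
import numpy as np

def swe_dam_break(h_l=2.0, h_r=1.0, n=100, t_end=0.05, g=9.81):
    """Finite-volume 1D shallow-water dam break on [0, 1] with reflective
    walls. Rusanov fluxes and explicit Euler are used for brevity; the
    paper uses an exact Riemann solver with Crank-Nicolson instead."""
    x = (np.arange(n) + 0.5) / n          # cell centres, dx = 1/n
    h = np.where(x < 0.5, h_l, h_r)       # dam located at x = 0.5
    q = np.zeros(n)                       # momentum h*u
    t = 0.0
    while t < t_end:
        # reflective ghost cells: copy the height, negate the momentum
        H = np.concatenate([[h[0]], h, [h[-1]]])
        Q = np.concatenate([[-q[0]], q, [-q[-1]]])
        u = Q / H
        c = np.abs(u) + np.sqrt(g * H)    # max signal speed per cell
        dt = min(0.4 / (n * c.max()), t_end - t)   # CFL-limited step
        F1, F2 = Q, Q * u + 0.5 * g * H**2         # physical fluxes
        a = np.maximum(c[:-1], c[1:])              # interface speed bound
        f1 = 0.5 * (F1[:-1] + F1[1:]) - 0.5 * a * (H[1:] - H[:-1])
        f2 = 0.5 * (F2[:-1] + F2[1:]) - 0.5 * a * (Q[1:] - Q[:-1])
        h = h - dt * n * (f1[1:] - f1[:-1])
        q = q - dt * n * (f2[1:] - f2[:-1])
        t += dt
    return x, h

x, h = swe_dam_break()
qoi = h[0]   # fluid height at the left wall, cf. the QoI in the text
```

The reflective ghost cells make the wall mass fluxes vanish, so the total fluid volume is conserved to machine precision; the height at the left wall keeps its undisturbed value until the leftward-running wave reaches the boundary.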

Notice that the QoI is time dependent and that its characteristics change significantly as time progresses. Either a transition between a shock and a rarefaction wave, or a difference in wave speeds, can result in a discontinuity in the QoI. This allows us to study the robustness of the MST-ME method, as this test-case comprises both smooth and discontinuous QoI responses. We emphasise that the constructed surrogate for the QoI at a given time cannot be reused for other time instances, because the MST-ME method uses the QoI at the current time as a measure to place new samples.

MST-ME detects if a function is smooth or discontinuous automatically

The MST-ME method is used for three different QoIs, evaluated at three different times, which correspond to a mildly non-linear, a highly non-linear, and a close-to-discontinuous QoI, respectively. The surrogate model and sample grids after 10 iterations are shown in figure 14.

The discontinuity in the QoI at time is caused by a shock wave, which hits the left boundary for certain values in the random space, but does not yet hit the left boundary for other values in the random space.

To investigate the accuracy of the MST-ME method, we determine the error convergence with respect to a reference solution based on Monte Carlo samples. The convergence of the -error (18) is shown in figure 15.

The results show that the error as a function of the number of samples decays fast for the mildly and highly non-linear cases, as expected. The highly non-linear case shows a sudden drop in the error, which is caused by a transition in the domain decomposition. At first, the classification procedure detects a jump in the sampled QoI that is large enough to conclude that a discontinuity is present. As the MST-ME method progresses, samples are added in the area of the suspected discontinuity, until the jump in the QoI values becomes small enough to classify the samples properly. This transition to a correct classification explains the sudden drop in the error for the highly non-linear case: MST-ME automatically detects the smoothness of the QoI as the number of samples increases.
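The classification step described above can be sketched in 1D as follows. The jump-detection threshold (a multiple of the median neighbour-to-neighbour jump) is our illustrative choice; in the actual method the resulting labels are passed to an SVM to construct the element boundary.

```python
import numpy as np

def classify_1d(x, y, jump_factor=3.0):
    """Split 1D samples into elements at edges whose QoI jump is much
    larger than the median jump (illustrative threshold; the paper
    feeds such labels to an SVM to build the decomposition boundary)."""
    order = np.argsort(x)
    jumps = np.abs(np.diff(np.asarray(y, dtype=float)[order]))
    cut = jumps > jump_factor * max(np.median(jumps), 1e-15)
    labels = np.concatenate([[0], np.cumsum(cut)])  # element id per sorted sample
    out = np.empty_like(labels)
    out[order] = labels                             # back to original ordering
    return out

x = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
y = np.where(x > 0.5, 1.0, 0.0) + 0.01 * x   # jump between x = 0.4 and x = 0.6
labels = classify_1d(x, y)
```

With too few samples near the discontinuity, every moderately large jump would be flagged; as samples accumulate, only the true jump survives the threshold, mirroring the transition to a correct classification discussed above.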

### 4.4 3D dam break

As a last test-case, the MST-ME method is applied to a complex engineering problem, namely a 3D fluid dam break problem with parametric initial conditions. This test-case is similar to the shallow water dam break, with the difference that it describes fluid motion in 3D and contains more physics, i.e., viscous effects and no shallow-water assumption. Dam break problems are commonly used as benchmarks in, for example, sloshing applications (9) and have been studied extensively (9); (10); (11). MST-ME can be used to gain physical insight into this parametric problem by constructing a surrogate model in the full parameter space, which is our goal in this test-case. We do not focus on convergence, as it is computationally infeasible to construct a reliable reference solution. A schematic representation of the test-case is shown in figure 16.

We consider two uncertain parameters, namely the length of the tank and the height of the right column of liquid. When the fluid is released, it starts flowing to the left side of the domain and impacts the wall, as shown in figure 16. The QoI is the maximum perpendicular wall force component during first impact. Smoothed particle hydrodynamics (SPH) is used to simulate the free surface flow induced by the initial condition in figure 16. We use an open-source SPH solver, DualSPHysics (42). The acceleration of gravity is set to and the kinematic viscosity is set to , which is the value for water at room temperature. Surface tension is neglected. Approximately particles are used for the simulations, which is considered medium to high resolution for free surface flows of these length scales. The simulations are performed on a single GPU with 2048 cores and the average simulation time is approximately 14 hours. A typical time-dependent result for a height of 3 m and a length of 7 m, along with the perpendicular wall force component, is shown in figure 17. The four consecutive instances of the example simulation show: the initial liquid configuration; wave development due to the pressure gradient; breaking wave impact on the wall; and the liquid after impact. Depending on the height difference between the two liquid columns, wave breaking can occur before the wave impacts the wall. In reality, a gas pocket is entrapped during this process, which is compressed and leads to a pressure build-up inside the gas pocket. However, the simulations performed here are free surface flows in which the gas phase is not taken into account, so the physics of the entrapped gas pocket is ignored.

Since obtaining a parametric solution is the target, both parameters are assumed to be uniformly distributed. The MST-ME method is used to construct the unknown QoI response. A total of 6 iterations of the sampling algorithm is performed with weight function (9), which results in 30 samples. We expect the MST-ME method to automatically distinguish smooth and discontinuous behaviour of the QoI, which is important for this problem, since we have limited initial knowledge of the QoI as a function of the parameters. The results are shown in figure 18.

The results indicate a smooth QoI response, which is approximately constant along lines of constant height/length ratio. This implies that for this free surface flow, the force on the wall depends roughly on the ratio of height to length, and not on their separate values. The wall force increases when this ratio increases, which is intuitive from a physical point of view. Figure 18 (right) shows that this increase is non-linear, which corresponds to results previously reported for dry-bed dam break problems (43).

Interestingly, in contrast to the SWE test-case, the QoI shows no discontinuity in the parameter space. This is possibly due to neglecting the gas phase in the free surface simulations. When simulated with a gas phase, the pressure build-up in the entrapped gas pocket (see figure 17) may lead to a discontinuity in the QoI. The strength of our proposed MST-ME method is that we do not require knowledge about the characteristics of the QoI beforehand, as it distinguishes smooth and discontinuous behaviour automatically.

## 5 Conclusion

In this paper we have presented a novel domain decomposition based interpolation method, the Minimum Spanning Tree Multi-Element (MST-ME) method. The unique property of the MST-ME method is that it adaptively constructs a surrogate model as a function of a set of uncertain parameters for both smooth and discontinuous quantities of interest. The three key ingredients in the MST-ME method are: (I) adaptive sampling based on a minimum spanning tree with a smart weight function, (II) discontinuity detection and sample classification with a support vector machine algorithm, and (III) least orthogonal interpolation to construct local approximations. This combination of robust methods makes the MST-ME method a very practical method that is applicable to a wide range of UQ problems.

The MST-ME method has been applied to several numerical examples: domain decomposition, Genz function approximation, the 1D shallow water equations, and a 3D dam break problem. Any discontinuities present in these test-cases are effectively captured by the method. In all cases, fast convergence is obtained, leading to an accurate surrogate model at a tractable number of model runs. This surrogate model can be used directly as a fast tool for uncertainty quantification (for example with Monte Carlo type methods), but it is also well suited to the parametric solution of black-box models, including partial differential equations. We also foresee application of this surrogate model in the solution of inverse problems. The freedom in the weighting function of the minimum spanning tree offers many further applications, such as adaptive sampling for reliability analysis, where the weighting function may be adapted such that samples are placed in regions of low probability.

Currently, the MST-ME method does not include the option to detect discontinuities that run only partially through the random space. Furthermore, the MST-ME method cannot yet preserve symmetry in the node distributions, which might be advantageous in certain special cases (e.g. when both the model and the underlying PDF of the random variables are symmetric). Lastly, the least orthogonal interpolation does not necessarily use all sample points in the construction of the local approximation. By adding a term to the weight function of the minimum spanning tree that accounts for the stability of the interpolant (as is done, for example, for Leja nodes (44)), the sample locations may be improved further, such that all samples are used in the construction of the interpolant.

## Acknowledgements

This work is part of the research programme "SLING" (Sloshing of Liquefied Natural Gas), which is (partly) financed by the Netherlands Organisation for Scientific Research (NWO).

## Footnotes

- journal: Journal of Computational Physics

## References

- H. Xiao, J. L. Wu, J. X. Wang, R. Sun, C. J. Roy, Quantifying and reducing model-form uncertainties in Reynolds-Averaged Navier-Stokes simulations: a data-driven, physics-informed Bayesian approach, Journal of Computational Physics 324 (2016) 115–136. doi:10.1016/j.jcp.2016.07.038.
- W. N. Edeling, R. P. Dwight, P. Cinnella, Simplex-stochastic collocation method with improved scalability, Journal of Computational Physics 310 (2016) 301–328. doi:10.1016/j.jcp.2015.12.034.
- F. Simon, P. Guillen, P. Sagaut, D. Lucor, A gPC-based approach to uncertain transonic aerodynamics, Computer Methods in Applied Mechanics and Engineering 199 (2010) 1091–1099. doi:10.1016/j.cma.2009.11.021.
- K.-H. Cho, S.-Y. Shin, W. Kolch, O. Wolkenhauer, Experimental design in systems biology, based on parameter sensitivity analysis using a Monte Carlo method: a case study for the TNFα-mediated NF-κB signal transduction pathway, Simulation 79 (2003) 726–739. doi:10.1177/0037549703040943.
- R. Abagyan, M. Totrov, Biased probability Monte Carlo conformational searches and electrostatic calculations for peptides and proteins, Journal of Molecular Biology 235 (1994) 983–1002. doi:10.1006/jmbi.1994.1052.
- E. M. Constantinescu, V. M. Zavala, M. Rocklin, S. Lee, M. Anitescu, A computational framework for uncertainty quantification and stochastic optimization in unit commitment with wind power generation, IEEE Transactions on Power Systems 26 (2011) 431–441. doi:10.1109/TPWRS.2010.2048133.
- J. Mateos, T. Gonzalez, D. Pardo, V. Hoel, A. Cappy, Monte Carlo simulator for the design optimization of low-noise HEMTs, IEEE Transactions on Electron Devices 47 (2000) 1950–1956. doi:10.1109/16.870579.
- M. Papadrakakis, N. D. Lagaros, Reliability-based structural optimization using neural networks and Monte Carlo simulation, Computer Methods in Applied Mechanics and Engineering 191 (2002) 3491–3507. doi:10.1016/S0045-7825(02)00287-6.
- E. J. Hopfinger, V. Baumbach, Liquid sloshing in cylindrical fuel tanks, in: Progress in Propulsion Physics, Vol. 1, EDP Sciences, pp. 279–292. doi:10.1051/eucass/200901279.
- R. Marsooli, W. Wu, 3-D finite-volume model of dam-break flow over uneven beds based on VOF method, Advances in Water Resources 70 (2014) 104–117. doi:10.1016/j.advwatres.2014.04.020.
- L. A. Larocque, J. Imran, M. H. Chaudhry, 3-D numerical simulation of partial breach dam-break flow using the LES and k-ε turbulence models, Journal of Hydraulic Research 51 (2013) 145–157. doi:10.1080/00221686.2012.734862.
- J. Hammersley, Monte Carlo Methods, Springer Verlag, 2013.
- D. Xiu, Fast numerical methods for stochastic computations, Communications in Computational Physics 5 (2009) 242–272.
- I. Babuska, F. Nobile, R. Tempone, A stochastic collocation method for elliptic partial differential equations with random input data, SIAM Journal on Numerical Analysis 45 (2007) 1005–1034. doi:10.1137/050645142.
- F. Nobile, R. Tempone, C. Webster, A sparse grid stochastic collocation method for partial differential equations with random input data, SIAM Journal on Numerical Analysis 46 (2008) 2309–2345. doi:10.1137/060663660.
- C. Canuto, M. Y. Hussaini, A. M. Quarteroni, T. A. Zang, Jr, Spectral Methods in Fluid Dynamics, Springer Science & Business Media, 2012.
- J. A. S. Witteveen, A. Loeven, H. Bijl, An adaptive stochastic finite elements approach based on Newton-Cotes quadrature in simplex elements, Computers & Fluids 38 (2009) 1270–1288. doi:10.1016/j.compfluid.2008.12.002.
- J. D. Jakeman, S. G. Roberts, Local and dimension adaptive sparse grid interpolation and quadrature, arXiv:1110.0010 [cs, math]ArXiv: 1110.0010.
- X. Wan, G. Karniadakis, Multi-element generalized polynomial chaos for arbitrary probability measures, SIAM Journal on Scientific Computing 28 (2006) 901–928. doi:10.1137/050627630.
- J. Foo, X. Wan, G. E. Karniadakis, The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications, Journal of Computational Physics 227 (2008) 9572–9595. doi:10.1016/j.jcp.2008.07.009.
- J. D. Jakeman, A. Narayan, D. Xiu, Minimal multi-element stochastic collocation for uncertainty quantification of discontinuous functions, Journal of Computational Physics 242 (2013) 790–808. doi:10.1016/j.jcp.2013.02.035.
- A. A. Gorodetsky, Y. M. Marzouk, Efficient localization of discontinuities in complex computational simulations, arXiv:1402.2845 [cs].
- J. B. Kruskal, On the shortest spanning subtree of a graph and the traveling salesman problem, Proceedings of the American Mathematical Society 7 (1956) 48–50. doi:10.2307/2033241.
- R. L. Graham, P. Hell, On the history of the minimum spanning tree problem, Annals of the History of Computing 7 (1985) 43–57. doi:10.1109/MAHC.1985.10011.
- G. S. El-tawel, A. K. Helmy, An edge detection scheme based on least squares support vector machine in a contourlet HMT domain, Applied Soft Computing 26 (2015) 418–427. doi:10.1016/j.asoc.2014.10.025.
- M. K. S. Varma, N. K. K. Rao, K. K. Raju, G. P. S. Varma, Pixel-based classification using support vector machine classifier, in: 2016 IEEE 6th International Conference on Advanced Computing (IACC), 2016, pp. 51–55. doi:10.1109/IACC.2016.20.
- C. J. C. Burges, A tutorial on support vector machines for pattern recognition, Data Mining and Knowledge Discovery 2 (1998) 121–167. doi:10.1023/A:1009715923555.
- B. Scholkopf, A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, Cambridge, MA, USA, 2001.
- A. Narayan, D. Xiu, Stochastic collocation methods on unstructured grids in high dimensions via interpolation, SIAM Journal on Scientific Computing 34 (2012) A1729–A1752. doi:10.1137/110854059.
- C. B. Barber, D. P. Dobkin, H. Huhdanpaa, The quickhull algorithm for convex hulls, ACM Transactions on Mathematical Software 22 (1996) 469–483.
- J. E. Goodman, J. O’Rourke (Eds.), Handbook of Discrete and Computational Geometry, CRC Press, Inc., Boca Raton, FL, USA, 1997.
- J. Witteveen, G. Iaccarino, Simplex elements stochastic collocation for uncertainty propagation in robust design optimization, in: 48th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition, American Institute of Aeronautics and Astronautics, 2010.
- J. Witteveen, G. Iaccarino, Simplex stochastic collocation with random sampling and extrapolation for nonhypercube probability spaces, SIAM Journal on Scientific Computing 34 (2012) A814–A838. doi:10.1137/100817504.
- J. D. Jakeman, R. Archibald, D. Xiu, Characterization of discontinuities in high-dimensional stochastic problems on adaptive sparse grids, Journal of Computational Physics 230 (2011) 3977–3997. doi:10.1016/j.jcp.2011.02.022.
- C.-C. Chang, C.-J. Lin, LIBSVM: A library for support vector machines, ACM Trans. Intell. Syst. Technol. 2 (2011) 27:1–27:27. doi:10.1145/1961189.1961199.
- S. Fortune, A Sweepline Algorithm for Voronoi Diagrams, SCG ’86, ACM, New York, NY, USA, 1986, pp. 313–322. doi:10.1145/10515.10549.
- R. C. Prim, Shortest connection networks and some generalizations, The Bell System Technical Journal 36 (1957) 1389–1401. doi:10.1002/j.1538-7305.1957.tb01515.x.
- A. Genz, Testing multidimensional integration routines, in: Proc. of International Conference on Tools, Methods and Languages for Scientific and Engineering Computation, Elsevier North-Holland, Inc., New York, NY, USA, 1984, pp. 81–94.
- C. B. Vreugdenhil, Numerical Methods for Shallow-Water Flow, Springer Verlag, 2013.
- R. J. LeVeque, Finite Volume Methods for Hyperbolic Problems, Cambridge University Press, 2002.
- E. F. Toro, Riemann Solvers and Numerical Methods for Fluid Dynamics: A Practical Introduction, Springer Science & Business Media, 2013.
- A. J. C. Crespo, J. M. Dominguez, B. D. Rogers, M. Gomez-Gesteira, S. Longshaw, R. Canelas, R. Vacondio, A. Barreiro, O. Garcia-Feal, DualSPHysics: Open-source parallel CFD solver based on Smoothed Particle Hydrodynamics (SPH), Computer Physics Communications 187 (2015) 204–216. doi:10.1016/j.cpc.2014.10.004.
- L. Lobovský, E. Botia-Vera, F. Castellana, J. Mas-Soler, A. Souto-Iglesias, Experimental investigation of dynamic pressure loads during dam break, Journal of Fluids and Structures 48 (2014) 407–434. doi:10.1016/j.jfluidstructs.2014.03.009.
- A. Narayan, J. Jakeman, Adaptive Leja sparse grid constructions for stochastic collocation and high-dimensional approximation, SIAM Journal on Scientific Computing 36 (2014) A2952–A2983. doi:10.1137/140966368.