
Uncertainty quantification for hyperbolic conservation laws with flux coefficients given by spatiotemporal random fields

Abstract

In this paper hyperbolic partial differential equations with random coefficients are discussed. We consider the challenging problem of flux functions with coefficients modeled by spatiotemporal random fields. Those fields are given by correlated Gaussian random fields in space and Ornstein–Uhlenbeck processes in time. The resulting system of equations consists of a stochastic differential equation for each random parameter coupled to the hyperbolic conservation law. We define an appropriate solution concept in this setting and analyze errors and convergence of discretization methods. A novel discretization framework, based on Monte Carlo Finite Volume methods, is presented for the robust computation of moments of solutions to those random hyperbolic partial differential equations. We showcase the approach on two examples which appear in applications: The magnetic induction equation and linear acoustics, both with a spatiotemporal random background velocity field.

Keywords: stochastic hyperbolic partial differential equation, uncertainty quantification, spatiotemporal random field, Monte Carlo method, random flux function, finite volume method, Ornstein–Uhlenbeck process, Gaussian random field

1 Introduction

Hyperbolic partial differential equations with random data have been an active research field over the last decades. In many situations measurements are not accurate enough to allow an exact description of a physical phenomenon by a deterministic model. To account for this, uncertainty is introduced in the appropriate parameters and the distribution of the (now stochastic) solution is studied. In this paper we consider linear hyperbolic PDEs with time- and space-dependent flux functions. Important examples of such equations include linear elasticity and the linear shallow water equations, as well as the linear acoustics and magnetic induction equations considered in this article.

1.1 Linear hyperbolic conservation/balance laws

Many important physical phenomena can be modeled by first order linear hyperbolic systems. We consider a system of the general form

 {∂tU(x,t)+→div⋅(→A(x,t) U(x,t))=S(x,t),U(x,0)=U0(x),x∈D⊂Rd,t>0, (1)

with suitable boundary conditions. Here U(x,t) denotes the vector of conserved quantities at a point x in the domain D at time t, and Axr, for r=1,…,d, are the flux functions in the xr–direction, respectively, where Axr(x,t) are linear maps. Each matrix Axr is assumed to be diagonalizable with real eigenvalues. Well-posedness of linear non-autonomous systems of conservation laws is studied for instance in [14, 15, 18, 26, 47]. For a linear advection equation, even in the case of variable coefficients, the characteristics never cross [26]. Looking at the linear advection equation in one (spatial) dimension, one can prove that, for certain coefficients, all characteristics converge to the origin. It becomes clear that solutions may become unbounded and contain Dirac delta functions.

In this article we treat the case where the flux functions depend on coefficients V(x,t) that are modeled as correlated random fields in space and stochastic processes in time, defined on a probability space (Ω,A,P). We assume from now on that the random fields are (P-almost surely) differentiable with respect to x [2]. The stochastic processes considered provide the coefficients implicitly, namely as the solution to a stochastic differential equation (SDE). This means that the system we are considering is given by

 {∂tU(x,t)+→div⋅(→A(x,t,V(x,t)) U(x,t))=S(x,t),∂tV(x,t)=μ(V(x,t))+σ(V(x,t)) ν(x,t), (2)

where the vectors μ and σ are arbitrary functions, and the vector ν(x,t) is a random function in space and time, often referred to as the “noise term”.

In general, the Itô-SDE for the parameter V in Equation (2) has no explicit (strong or weak) solution. Many stochastic schemes fall into the class of stochastic Runge-Kutta (SRK) methods. Weak approximations focus on the expectation of functionals of solutions, whereas strong approximations are concerned with pathwise solutions. For an overview of the theory of SDEs and numerical methods we refer to [24, 23, 36, 29, 41] and references therein.

In general, there are no explicit solution formulas for deterministic variable-coefficient linear hyperbolic systems of the form (1), let alone for the stochastic PDE (2). Numerical methods are therefore widely used to approximate the solutions of those types of equations. Finite Difference, Finite Volume and Discontinuous Galerkin methods are popular approaches to obtain efficient time and space discretizations for the problem at hand, see [26, 18] and references therein.

1.2 Uncertainty quantification

In many applications the parameters of the flux functions in Equation (1) are determined by measurements. Then, in fact, only statistical information is available. Among other phenomena, scarcity of measurements of material properties or background velocity fields leads to uncertainty in the parameters of the flux function. Given the statistical description of the parameters, it is of interest to efficiently quantify the resulting uncertainty in the solution to Equation (1). An appropriate mathematical notion, together with a proof of existence and uniqueness, of random solutions for systems of hyperbolic conservation laws has recently been developed in e.g. [4, 42] (see also references therein).

Efficient numerical methods for uncertainty quantification in the setting of partial differential equations have been intensively studied in recent years. A non-exhaustive list of literature on uncertainty quantification for hyperbolic conservation laws includes [1, 6, 27, 28, 44, 38, 46, 16, 21, 43] and references therein. Among the most popular techniques are stochastic Galerkin methods based on generalized polynomial chaos expansions (gPC). These methods reduce the stochastic model to a (high-dimensional) deterministic one. This comes at the price of being highly intrusive, such that existing numerical schemes for conservation laws cannot be reused. A second class of methods for uncertainty quantification are stochastic collocation methods, which are non-intrusive and easier to parallelize than gPC based methods. Solutions to hyperbolic conservation laws, however, do not have the necessary regularity with respect to the stochastic variables, which in general diminishes the use of both gPC and collocation methods. There are a number of other techniques, namely stochastic Finite Volume methods (see [30]), adaptive analysis of variance, proper generalized decomposition, and Fokker-Planck-Kolmogorov type techniques. The latter can handle low parametric regularity, but assume that the “effective” number of stochastic dimensions is low and often require impractically complex representations of the input random fields.

Here, we use Monte Carlo (MC) methods to quantify the uncertainty; these comprise a class of non-intrusive methods well suited for problems with low parametric (stochastic) regularity. Monte Carlo methods rely on repeated sampling of the probability/parameter space. For each sample the underlying, (then) deterministic, PDE is solved, and the sample solutions are combined to obtain statistical information in the form of moments of the distribution. Monte Carlo methods are very robust with respect to the regularity of the solutions. However, the convergence rate of a (plain) MC method is limited to O(M−1/2) with respect to the number of samples M, which means that typically a large number of realizations of approximations of solutions to the underlying problem (1) has to be computed. In the Monte Carlo approximation of moments of solutions to stochastic partial differential equations the discretization error consists of a statistical error and a spatiotemporal error of the numerical scheme, as can be seen in Equation (31). For non-autonomous linear systems of conservation laws it has been shown in [34] that the number of Monte Carlo samples can be chosen as

 M=O(Δx−2o), (3)

in order to equilibrate the statistical and spatiotemporal errors of an underlying FV scheme with convergence rate o. The computational complexity due to the slow convergence rate can be lowered by Multilevel Monte Carlo (MLMC) methods, which have been proposed in [42] and related papers by the same authors [33, 31, 32]. For a result on the convergence and computational complexity of the multilevel Monte Carlo approximation for general Hilbert-space-valued random variables see [3]. The idea behind MLMC methods is to use a hierarchy of different levels of resolution of the underlying deterministic numerical solver. The level dependent numbers of MC samples are then chosen in such a way that the total computational work is minimal while the sum of all error terms is asymptotically optimized. For the case of non-autonomous linear systems of conservation laws, see [34] for details. Optimal computational complexity is not the main focus of this paper. We remark that it is not particularly challenging to employ an MLMC method, as the problems considered here fall into the class of problems for which a general MLMC method is derived in [3].
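The O(M−1/2) behavior of a plain MC estimator can be illustrated with a short, self-contained sketch; the target quantity E[X²]=1 for X∼N(0,1) is an arbitrary stand-in for a functional of the PDE solution, and the seed and sample counts are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_estimate(M):
    # plain Monte Carlo estimate of E[X^2] = 1 for X ~ N(0,1)
    x = rng.standard_normal(M)
    return (x**2).mean()

# error decays roughly like M^(-1/2): 100x more samples -> ~10x smaller error
errors = [abs(mc_estimate(M) - 1.0) for M in (10**2, 10**4, 10**6)]
```

Increasing the sample count by a factor of 100 reduces the error by roughly a factor of 10, matching the M−1/2 rate.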

Random (scalar) linear transport equations are discussed for instance in [37, 10, 7, 39] and references therein. In [11] the authors present expressions for the distribution of the solution of a linear advection equation with a time-dependent velocity, given in terms of the probability density function of the underlying integral of the stochastic process. Numerical schemes are then introduced in [9, 12]. A stochastic collocation method for the wave equation is introduced in [35]. Uncertainty quantification of acoustic wave propagation in random heterogeneous layered media is presented in [34]. In [21] the linear advection equation with spatiotemporal coefficients is the subject of research. The authors develop numerical methods using polynomial chaos expansions to solve the advection equation with a transport velocity given by a Gaussian or a log-normal distribution.

1.3 Aims and contributions of the paper

We extend the existing framework for random solutions to linear hyperbolic systems presented in, e.g., [42, 33, 31, 32] and the references therein. Random coefficients of the numerical fluxes are modeled as Gaussian random fields in space and Ornstein–Uhlenbeck processes in time. Thus, the system of equations consists of a stochastic differential equation (SDE) for each random parameter coupled to the conservation law. For that purpose,

• we provide the necessary solution concepts for weak random solutions for linear conservation laws with time-dependent random field coefficients, together with a well-posedness result (existence, uniqueness and dependence on initial data).

• we describe algorithms for fast generation of such spatiotemporal random fields. Discretizations of the SDEs with the appropriate (weak) order are presented.

• we present two applications: the magnetic induction equation and linear acoustics, both with background velocity fields modeled by spatiotemporal random fields.

The simulation results are based on a flexible, thread-parallel algorithm for the uncertainty quantification of linear conservation laws.

The remainder of the paper is organized as follows. In Section 2, we provide the necessary theoretical background for time-dependent random fields, as well as random (linear) hyperbolic conservation laws. We present a Monte Carlo Finite Volume framework for approximating moments of the random solutions in Section 3. This is followed by Section 4, showcasing efficiency and robustness on realistic test cases, and some conclusions in Section 5.

2 Stochastic linear hyperbolic conservation/balance laws with spatiotemporal random parameters

This article treats the case of random flux functions with coefficients that are modeled as spatiotemporal random fields. In order to rigorously define uncertainty of the parameters of the flux function, we start by recapitulating the necessary background from probability theory.

2.1 Spatiotemporal random fields

To this end, let (Ω,A) be a measurable space with elementary events ω∈Ω, endowed with a σ-algebra A. Then, the measure space (Ω,A,P) is called a probability space, if P is a σ-additive set function such that P(Ω)=1.

{definition}

[Gaussian random field (GRF)] A random field g(x,ω) (also written as g(x)) is a set of real-valued random variables {g(x,⋅):x∈D} defined on a probability space (Ω,A,P). The field is called Gaussian, if the vector of random variables (g(x1),…,g(xn)) follows the multivariate Gaussian distribution for any x1,…,xn∈D, i.e., (g(x1),…,g(xn))∼N(m,C), where N(m,C) is the normal distribution with mean m and covariance matrix C. Any Gaussian random field is completely defined by its second-order statistics.

We note that the covariance C is a nonnegative, semi-definite, symmetric function. Bochner’s theorem [5] states that C is the Fourier transform of a positive measure on Rd. If we assume that this measure has a Lebesgue density γ which is even and positive, we can construct a GRF by

 g(x)=(F−1γ1/2FW)(x), (4)

where F denotes the d-dimensional Fourier transform with inverse F−1, and W is a centered Gaussian white-noise family (see [25]).

Looking at the time domain, a standard Brownian motion or Wiener process B(t), for t≥0, defined on a probability space (Ω,A,P), is a continuous stochastic process which starts in zero P-a.s. and has independent and normally distributed increments, i.e., B(t)−B(s)∼N(0,t−s) for 0≤s<t. {definition}[Ornstein–Uhlenbeck (OU) process] Let B(t) be a standard Brownian motion. For θ,σ>0 and μ∈R, the Ornstein–Uhlenbeck process is given as the solution of the stochastic differential equation

 da(t)=θ(μ−a(t))dt+σdB(t),a(0)=a0, (5)

In general, the initial condition a0 may be random as well.

The OU process is mean-reverting: Starting at a value a0, over time the process tends to drift towards its long-term mean μ. See Figure 1 for some sample solutions.

Remark \thetheorem

For every t≥0 the random variable a(t) is normally distributed with mean and variance

 E(a(t))=μ+(a0−μ)e−θt,V(a(t))=σ22θ(1−e−2θt). (6)

This can easily be shown by applying Itô’s formula to the function f(t,a)=a eθt and considering the dynamics of eθta(t). Then, the stochastic differential equation (5) has the following solution

 a(t)=μ+e−θt(a0−μ)+σ∫t0e−θ(t−s)dB(s). (7)

From this form we can directly deduce the expectation of a(t); the variance is derived by using the Itô isometry.
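The closed-form moments (6) can be cross-checked by a direct Euler-Maruyama simulation of the SDE (5); the parameter values below are illustrative test inputs, not values used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, mu, sigma, a0, T = 1.0, 2.0, 0.5, 0.0, 1.0
n_steps, n_paths = 200, 20_000
h = T / n_steps

# Euler-Maruyama for da = theta*(mu - a) dt + sigma dB, vectorized over paths
a = np.full(n_paths, a0)
for _ in range(n_steps):
    a += h * theta * (mu - a) + sigma * np.sqrt(h) * rng.standard_normal(n_paths)

# closed-form mean and variance from Equation (6)
mean_exact = mu + (a0 - mu) * np.exp(-theta * T)
var_exact = sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * T))
```

The sample mean and sample variance of the simulated paths agree with the closed-form values up to the statistical and time-discretization error.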

Remark \thetheorem

The realizations of an Ornstein–Uhlenbeck process are continuous and nowhere differentiable with probability 1.

{definition}

[Spatiotemporal random field] For all t≥0 let G(x,t) be a centered Gaussian random field with covariance C. Further, a time-dependent random field Z(x,t) is defined as the solution of the following SDE (cf. Definition 1)

 dZ(x,t)=θ(μ(x)−Z(x,t))dt+σdG(x,t),Z(x,0)=Z0(x), (8)

where μ(x) is a continuously differentiable function in x, and θ,σ are positive real parameters.

The solution has the following properties.

• For a fixed time t, x↦Z(x,t) is a real-valued Gaussian random field.

• For each fixed point x, t↦Z(x,t) is an Ornstein–Uhlenbeck process, i.e., a mean-reverting process.

2.2 Random conservation laws

Equipped with a probability space we can incorporate uncertainties in the flux of Equation (1) by considering the equation

 {∂tU(x,t,ω)+→div⋅(→A(x,t,ω)U(x,t,ω))=S(x,t,ω),U(x,0,ω)=U0(x,ω),x∈D⊂Rd,t>0. (9)

We are interested in cases where some or all of the coefficients of →A are modeled as a spatiotemporal random field according to Definition 1. In order for this to make sense, we follow [42, 34], but extend the solution concept and definition of hyperbolicity to be time-dependent where necessary.

{definition}

[Hyperbolicity] For w=(w1,…,wd) in the unit sphere Sd−1 let Aw(x,t,ω):=∑dr=1wrAxr(x,t,ω)
be the convex combination of the directional random matrices Axr. Consider the eigen-decomposition

 Aw(x,t,ω)=Qw(x,t,ω)Λw(x,t,ω)Qw(x,t,ω)−1,

where Λw is a diagonal matrix consisting of the eigenvalues λwr of Aw, and Qw contains the corresponding eigenvectors as columns. The random linear system of conservation laws (9) is P-a.s. hyperbolic if all eigenvalues of Aw are real P-a.s. for all w∈Sd−1. In addition, for every finite time horizon T>0 there exists a K(ω) such that

 supx∈D,t∈[0,T],w∈Sd−1∥Qw(x,t,ω)∥∥Qw(x,t,ω)−1∥≤K(ω),P-a.s.. (10)

In addition, we require the expected wave speeds to be finite, i.e.,

 ˆλ:=max1≤r≤d supx∈D,w∈Sd−1∣∣E[λwr(x,t,⋅)] ∣∣<∞,∀t∈[0,T]. (11)

We consider a measurable mapping

 U:(Ω,A)→(V,B(V)),ω↦U(x,t,ω), (12)

where B(V) is the Borel σ-algebra of the function space V. Then, we define the concept of a weak solution as follows. {definition}[Pathwise weak solution] A random field U, with values in V, is called a (pathwise) weak solution to the stochastic conservation law (9) on D×[0,T], with D⊂Rd and a finite time horizon T>0, if it is P-a.s. a weak solution. This means that it satisfies the variational formulation

 ∫Rd×[0,T](U⋅φt+d∑r=1AxrU⋅φxr)dxdt+∫RdU0⋅φ|t=0dx=∫Rd×[0,T]S⋅φdxdt, (13)

for all test functions φ∈C1c(Rd×[0,T)), for P-a.e. ω∈Ω.

We have the following result regarding existence and uniqueness of a solution. {theorem}[Well-posedness of stochastic linear hyperbolic conservation laws.] Consider the linear conservation law (2), and assume the following holds:

• the stochastic differential equation for the coefficients (see Equation (20)) admits a unique solution (in the sense of [22, Chapter 5.2])

• the system is hyperbolic according to Definition 2.2, with (pathwise) constant K(ω),

• the moments of the initial data, the source and the flux are bounded in the following sense: there exist non-negative r0, rS, rA such that

 U0∈Lk(Ω,Wr0,∞),S∈Lk(Ω,WrS,∞),Axr∈Lk(Ω,WrA,∞), (14)

where Ω is the sample space of the probability space.

• each random field Axr is stochastically independent of the initial data U0 and the source S.

Then, the system (2) admits a unique pathwise weak solution. Moreover, for all t∈[0,T], we have the following estimates

 ∥U(.,t,ω)∥L2(D)≤K(ω)(∥U0(.,ω)∥L2(D)+t∥S(.)∥L2(D)),P-a.s.,∥U∥Lk(Ω,C([0,T],L2(D))≤¯K(∥U0∥Lk(Ω,L2(D))+t∥S∥Lk(Ω,L2(D))), (15)
{proof}

The proof is an immediate consequence from [42, Theorem 1], if one considers the following modifications to address the time-dependence of the flux function:

1. Using the time-dependent version of the classical existence and uniqueness results summarized in [42, Theorem 1], one can show that the random field U is well defined and that it is P-a.s. a weak solution.

2. To show that the maps ω↦U(⋅,t,ω) are measurable for all t∈[0,T], one uses the fact that L2(D) is a separable Hilbert space, together with the stability estimate of the classical theorem for the deterministic (pathwise) solution.

3. The first estimate in (15) may again be derived from the time-dependent version of the classical case (summarized in [42, Theorem 1]), and the second follows from the first, using the assumption that random fields, initial conditions and sources are independent.

Incorporating the above into the proof of [42, Theorem 1], the assertion follows immediately.

Under the integrability assumptions (14), Theorem 2.2 ensures the existence of moments of order up to k of the random weak solution.

In general, there are no explicit solution formulas. For the special case of the scalar linear transport equation with a time-dependent coefficient (transport driven by the Ornstein–Uhlenbeck process) however, we can derive the distribution of the solution in closed form. This distribution will then be used in an example presented in Section 4.1 to verify our Monte Carlo Finite Volume method. {theorem} Consider the scalar transport equation

 {u(x,t,ω)t+(a(t,ω)u(x,t,ω))x=0u(x,0)=u0(x) (16)

with the coefficient a(t,ω) given by the Ornstein–Uhlenbeck process (5). The moments of the solution to Equation (16) exist and are given by

 E(u(x,t))=∫u0(x−y)fA(^σ2,^μ)(y)dy=(fA(^σ2,0)∗u0)(x−t^μ). (17)

Here, “∗” denotes convolution and the probability density function fA(^σ2,^μ) is given by

 fA(^σ2,^μ)(y)=1√2π^σ2e−(y−^μ)22^σ2,

with diffusion coefficient ^σ2 and transportation speed ^μ.

Remark \thetheorem

Higher moments of the solution may be calculated by

 Mm(u(x,t))=E((u(x,t)−E(u(x,t)))m). (18)
{proof}

The solution for a single realization (fixed ω∈Ω) of Equation (16) is given by u(x,t,ω)=u0(x−∫t0a(s,ω)ds). We start by calculating the first moment of this expression, i.e.

 E(u0(x−∫t0a(s)ds)).

This means, we have to calculate the distribution of the time integral over a(t), i.e. the distribution of the stochastic process

 A(t)=∫t0a(s)ds.

The process A(t) is again a Gaussian process, i.e. A(t)∼N(^μ,^σ2), and therefore completely characterized by its mean and variance. Using Fubini’s theorem we have that

 E(A(t))=∫t0E(a(s))ds=∫t0μ+e−θs(a0−μ)ds=μt−(a0−μ)e−θt−1θ=:^μ. (19)

We express the variance of A(t) via the covariance of A(t) with itself

 V(A(t))=Cov(A(t),A(t))=E((A(t)−E(A(t)))(A(t)−E(A(t)))).

Combining Equations (7) and (19), this yields

 V(A(t)) =E(σ∫t0∫s0e−θ(s−u)dB(u)dsσ∫t0∫r0e−θ(r−v)dB(v)dr) =2σ2∫t0e−θs∫t0e−θrE(∫s0eθudB(u)∫r0eθvdB(v))drds,

using Fubini’s theorem. For a Brownian motion B(t), it is known that

 E(∫s0eθudB(u)∫r0eθvdB(v))=12θ(e2θmin(s,r)−1).

Therefore, we have

 V(A(t)) =2σ2∫t0e−θs∫s0e−θr12θ(e2θmin(s,r)−1)drds =σ2θ∫t0e−θs∫s0e−θr(e2θr−1)drds=σ2θ3(θt+2e−θt−12e−2θt−32)=:^σ2.

This gives us the variance of A(t) as a function of θ, σ and t.

Therefore, the expectation of the solution to Equation (16) is given by

 E(u0(x−∫t0a(s)ds))=E(u0(x−A(t)))=∫∞−∞u0(x−y)fA(y)dy

where fA is the normal density function with parameters ^μ and ^σ2 derived above.

Remark \thetheorem

For the limit θ→0, we recover the corresponding result for a pure Brownian motion process (i.e. da(t)=σdB(t)), where E(A(t))=a0t and V(A(t))=σ2t3/3. This can be shown by a Taylor expansion as

 V(A(t)) =σ2θ3(θt+2e−θt−12e−2θt−32) =σ2θ3(θt+2(1−θt+θ2t2/2−θ3t3/3!+O(θ4)) −12(1−2θt+4θ2t2/2−23θ3t3/3!+O(θ4))−32)=σ2t3/3+O(θ).

A similar Taylor expansion shows the result for the expectation E(A(t)).
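The limit can also be checked numerically by evaluating the exact variance formula at a small θ; the inputs θ=10⁻³, σ=1, t=2 are arbitrary test values:

```python
import math

sigma, t, theta = 1.0, 2.0, 1e-3

# exact variance of A(t) as derived in the proof above
var_exact = sigma**2 / theta**3 * (
    theta * t + 2 * math.exp(-theta * t) - 0.5 * math.exp(-2 * theta * t) - 1.5
)

# Brownian-motion limit sigma^2 t^3 / 3, valid up to O(theta)
var_limit = sigma**2 * t**3 / 3
```

For θ=10⁻³ the two expressions differ only by a term of order θ, i.e. by roughly θt⁴/4 for these values.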

3 Monte Carlo Finite Volume methods

For the approximation of the (moments of the) solution to partial differential equations with random coefficients we have to discretize in space and time, as well as in the “stochastic domain”. We use a Monte Carlo method for the approximation of moments of the random solution. This means that we have to approximate the (deterministic) solution for each realization of Equation (9), i.e.,

 ∂tU(x,t)+→div⋅(→A(x,t,V(x,t)) U(x,t))=S(x,t),∂tV(x,t)=μ(V(x,t))+σ(V(x,t)) ν(x,t), (20)

where each component of is given by the spatiotemporal random field (8). As mentioned in the Introduction, a Multilevel MC method could be used to achieve optimal computational complexity. This is, however, not the goal of this article.

Before we start with a brief recapitulation of Finite Volume methods, we introduce some useful notation. Let the computational domain D⊂Rd be a bounded axiparallel domain. A uniform axiparallel mesh of the domain consists of identical cells Ci,j,k, for i=1,…,Nx, j=1,…,Ny, and k=1,…,Nz, with edge lengths Δx, Δy and Δz in the x-, y- and z-direction, respectively. The cell centers are denoted by xi,j,k, and values at the cell interfaces by xi±1/2,j±1/2,k±1/2. For a function g we set gni,j,k=g(xi,j,k,tn), where tn, for n∈N, is the n-th time step.

3.1 Finite volume methods

A Finite Volume (FV) scheme is obtained by integrating Equation (1) over a cell, or control volume, Ci,j,k, and over a time interval Tn=[tn,tn+1]. Denoting cell averages by Uni,j,k, a fully discrete flux-differencing method has the form

 Un+1i,j,k=Uni,j,k−ΔtnΔx(Fni+1/2,j,k−Fni−1/2,j,k)−ΔtnΔy(Gni,j+1/2,k−Gni,j−1/2,k)−ΔtnΔz(Hni,j,k+1/2−Hni,j,k−1/2), (21)

where Fni±1/2,j,k, Gni,j±1/2,k and Hni,j,k±1/2 are the fluxes in x-, y- and z-direction, respectively. The fluxes approximate the integral

 Fni+1/2,j,k≈1ΔtnΔyΔz∫Tn∫yj+1/2yj−1/2∫zk+1/2zk−1/2f(xi+1/2,y,z)dtdydz. (22)

The approximations for the fluxes in the other directions are defined analogously. Using the flux-differencing form, FV methods are typically based on the reconstruct-evolve-average (REA) algorithm. This algorithm consists of the following steps performed in each time step: First, one reconstructs the cell averages by a piecewise polynomial function. Then, one evolves the hyperbolic equation by defining the numerical fluxes (22). Finally, the new cell averages are computed. The numerical fluxes are based on solutions of local Riemann problems at each cell interface. High order accuracy in space is achieved by using TVD limiters, or (W)ENO schemes [19, 26, 45, 20, 40], and in time by strong stability-preserving (SSP) Runge-Kutta methods, see e.g., [17]. The CFL condition dictates a limit on the time step size for the resulting explicit FV schemes. Let λx, λy, λz denote the eigenvalues of Equation (1) in x-, y-, and z-direction; then the time step size needs to satisfy

 ΔtnΔx¯¯¯¯¯¯λx+ΔtnΔy¯¯¯¯¯λy+ΔtnΔz¯¯¯¯¯λz≤12, (23)

where ¯λx is the largest absolute eigenvalue in x-direction, and similarly in y- and z-direction.
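As a hedged illustration of the flux-differencing update (21) and the CFL restriction (23), the following sketch implements a first-order upwind FV scheme for the 1D scalar advection equation ut+(a(t)u)x=0 with periodic boundaries; the grid size, CFL number and constant speed are illustrative choices:

```python
import numpy as np

def upwind_advection(u0, a, dx, t_end, cfl=0.45):
    # first-order upwind finite volume scheme for u_t + (a(t) u)_x = 0,
    # periodic boundaries; a(t) may depend on time
    u, t = u0.copy(), 0.0
    while t < t_end - 1e-12:
        s = a(t)
        # CFL-limited time step, cf. Equation (23)
        dt = min(cfl * dx / max(abs(s), 1e-14), t_end - t)
        # upwind numerical flux at the right interface of each cell
        F = s * (u if s >= 0 else np.roll(u, -1))
        # flux-differencing update, cf. Equation (21)
        u -= dt / dx * (F - np.roll(F, 1))
        t += dt
    return u

N = 200
x = (np.arange(N) + 0.5) / N          # cell centers on [0, 1)
u0 = np.exp(-100 * (x - 0.3)**2)      # smooth bump
u = upwind_advection(u0, lambda t: 1.0, 1.0 / N, 0.5)
```

Being a monotone scheme under the CFL restriction, the update conserves the total mass exactly and satisfies a discrete maximum principle; the bump is transported from x=0.3 to x=0.8 with some numerical diffusion.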

3.2 Realizations of spatiotemporal random fields

In order to approximate moments of the solution to Equation (9), we need to create realizations of the spatiotemporal random fields given in Definition 1. Therefore, we describe here one way to discretize the SDE (8) in time and space.

Discretization in time

There is a wide variety of numerical methods for SDEs. Methods are classified mainly according to their strong and weak orders of convergence. The order of weak convergence is typically higher than the order of strong convergence of a scheme. For example, the Euler-Maruyama approximation has strong order 1/2 and weak order 1, and the Milstein method has strong and weak order 1. For a classification of higher order SRK schemes see, e.g., [8].

For the Ornstein–Uhlenbeck process, given in Equation (8), the Milstein method is equivalent to the Euler-Maruyama method, since the diffusion coefficient σ is independent of the process. The Milstein method is given by

 Zl+1(x)−Zl(x)=hθ(μ(x)−Zl(x))+σ√hGl(x), (24)

where h=tl+1−tl for all l, and Gl(x) is a Gaussian random field in x at the discrete time tl.

Depending on the parameters, the Ornstein–Uhlenbeck process, given in Equation (8), is stiff and appropriate implicit schemes have to be used. For instance, the implicit Milstein method is given by

 {Zl+1(x)=(Zl(x)+hθμ(x)+σ√hGl(x))/(1+hθ),Z0=μ(x),∀x∈D. (25)

In general, higher order SRK schemes become increasingly involved; for the Ornstein–Uhlenbeck process, however, those schemes simplify considerably, as the diffusion function σ is constant.
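The stability gain of the implicit step can be seen in a deterministic sketch (the noise term is switched off, and the values of θ, μ and h are illustrative): for hθ>2 the explicit step diverges, while the implicit step (25) converges to the mean:

```python
theta, mu, h = 50.0, 1.0, 0.1   # stiff regime: h * theta = 5 exceeds the explicit stability limit
z_exp, z_imp = 0.0, 0.0
for _ in range(50):
    # explicit Euler-Maruyama/Milstein step (24), noise term set to zero
    z_exp = z_exp + h * theta * (mu - z_exp)
    # implicit Milstein step, cf. Equation (25), also without noise
    z_imp = (z_imp + h * theta * mu) / (1 + h * theta)
```

After 50 steps the explicit iterate has blown up (its error is multiplied by |1−hθ|=4 per step), whereas the implicit iterate has converged to μ.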

Discretization in space

The discretization in Equation (25) is only semi-discrete as it is continuous in space. For a fully discrete approximation of a realization of Equation (8) we need to provide an algorithm that approximates realizations of the Gaussian random fields on a Cartesian grid over D at each time step. To this end, we use the approach described in [25], which provides the following algorithm based on the representation formula (4). For simplicity we only present the periodic case. Let F be the discrete Fourier transform with inverse F−1, and let Zi,j,k be random samples from a normal distribution for all i,j,k, normalized by the volume |Ci,j,k| of the cells. Then, a Gaussian random field is given by

 {Gli,j,k}=F−1{√γi,j,kF{Zi,j,k}}, (26)

where the covariance is given by the Fourier transform of the symmetric, positive function γ. A typical family of functions for the Lebesgue density is given by

 γi,j,k=(1+∥pi,j,k∥q2)−l,q,l∈N,q,l≥1, (27)

where the points pi,j,k in the Fourier domain are the discrete frequencies of the mesh. Another possibility would be to employ an exponential covariance function, e.g. C(x,y)=exp(−∥x−y∥/λ), with correlation length λ.
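A minimal numpy sketch of the sampling formula (26) on a periodic 2D grid; the grid size, the spectral exponents q=2, l=2, and the white-noise normalization are illustrative assumptions rather than the choices used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
# integer frequencies p_{i,j} of the periodic n x n grid
kx, ky = np.meshgrid(np.fft.fftfreq(n, d=1.0 / n), np.fft.fftfreq(n, d=1.0 / n))
# Lebesgue density of the spectral measure, cf. Equation (27) with q = 2, l = 2
gamma = (1.0 + kx**2 + ky**2) ** (-2.0)
# filter white noise in Fourier space and transform back, cf. Equation (26)
W = rng.standard_normal((n, n))
G = np.real(np.fft.ifft2(np.sqrt(gamma) * np.fft.fft2(W)))
```

Since γ decays at high frequencies, the filtered field G is much smoother (and has much smaller variance) than the white noise W it is built from.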

3.3 Monte Carlo approach

We employ a Monte Carlo Finite Volume method (MC FV method) in order to approximate the moments of the stochastic PDE (2) where particular coefficients are modeled as spatiotemporal random fields given in Equation (8), such that the system is hyperbolic according to Definition 2.2. As before, we consider a bounded axiparallel domain together with a mesh consisting of identical cells .

The step size of the explicit PDE solver is given by the CFL condition (23) and depends on the eigenvalues, which are influenced by the stochastic parameters V. On the other hand, the interval length h of the discretization of the SDE is constant for each realization. To achieve an optimal convergence rate, the approximation errors of the SDEs and of the FV method should be of the same order. If a bound on the maximum expected eigenvalue ˆλ, as defined in Equation (11), is available, one can choose

 h=cΔxˆλ, (28)

where c is a constant. We require the discrete times tn of the FV scheme to be a subset of the discrete times tl of the SDE approximation, i.e., {tn}⊂{tl}.

The MC FV algorithm consists of three main steps.

1. Generate realizations for each parameter that is modeled as a spatiotemporal random field (8):

• Choose an appropriate interval length h for the approximation of the SDE.

• Generate the required number of Gaussian random fields (one per SDE time step and random parameter) on the given mesh according to Equation (26).

• Use an approximation scheme with the same (weak) order as the FV method.

2. For each generated realization, the deterministic problem of the underlying hyperbolic conservation law is solved on the given mesh. In the underlying Riemann problem, the random field is assumed to be piecewise constant on the time intervals [tl,tl+1]. For the sample ωm we denote by U(⋅,T,ωm) the exact pathwise solution at time T, and by UT(⋅,T,ωm) the numerical approximation.

3. The approximations of the sample solutions, i.e., UT(⋅,T,ωm) for m=1,…,M, are used to approximate moments of the random solution field U.

One is particularly interested in the first two moments, i.e., the expectation E[U] and the variance Var[U]. The sample mean of the approximate solutions is used to estimate the expectation, given by

 EM[UT](x,T)=1MM∑m=1UT(x,T,ωm). (29)
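In code, the estimator (29) and its analogue for the variance reduce to a sample mean and a sample variance over the first (sample) axis; a minimal sketch, with the function name `mc_moments` and the toy data being illustrative:

```python
import numpy as np

def mc_moments(samples):
    # samples: array of shape (M, ...) -- one numerical solution U_T(., T, omega_m) per row
    mean = samples.mean(axis=0)            # sample-mean estimator, cf. Equation (29)
    var = samples.var(axis=0, ddof=1)      # unbiased sample variance as an estimate of Var[U]
    return mean, var

# two toy "sample solutions" on a two-cell grid
m, v = mc_moments(np.array([[1.0, 2.0], [3.0, 2.0]]))
```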

Higher statistical moments of U, such as the variance Var[U], can be approximated similarly. The total approximation error of the expectation in the L1 norm, i.e.

 εappr(T)=|Ci,j,k| ∑i,j,k∣∣EM[UT](xi,j,k,T)−E[U](xi,j,k,T)∣∣, (30)

is bounded by the sum of the numerical approximation error of the base method and the Monte Carlo error

 εappr(t)≤εnum(t)+εMCM(t), (31)

where

 εnum(t)=|Ci,j,k| ∑i,j,k∣∣1MM∑m=1(UT(xi,j,k,T,ωm)−U(xi,j,k,T,ωm))∣∣,εMCM(t)=|Ci,j,k| ∑i,j,k∣∣1MM∑m=1U(xi,j,k,T,ωm)−E[U](xi,j,k,T)∣∣. (32)

Using the triangle inequality, it is trivial to show the relationship (31). If one uses the L1 norm, then equality holds.

The estimate (31) shows that the approximation error is bounded by the dominating part of the sum of the numerical error and the Monte Carlo error. The Monte Carlo method converges with the rate M−1/2 in the number of samples M in mean square, independent of the resolution of the grid, i.e. the size of Δx. On the other hand, a numerical base method of order o converges with O(Δxo) for each single realization, independent of the number of Monte Carlo samples. Therefore, Equation (31) suggests that our Monte Carlo method is most efficient if both error contributions are of the same order. For the corresponding sample numbers in a MLMC approach we refer to [3]. In [34], for instance, it is shown that this can be achieved if the number of Monte Carlo samples is M=O(Δx−2o), where o is the order of the FV method.

{lemma}

Assume that,

• the conditions of Theorem 2.2 are fulfilled for k=2,

• the underlying numerical FV scheme converges (under grid refinement) to the weak solution of Equation (1) with rate ,

• the numerical scheme for the SDE converges at the same rate .

Then, the MC FV method estimates described in this section converge to the first moment of the solution in the mean square sense as Δx→0, with M=O(Δx−2o).

{proof}

The assertion follows directly from the triangle inequality and the convergence of the corresponding FV scheme, together with the specific choice for the number of samples

 E∥E[U]−EM[UT]∥22 ≤2∥E[U]−E[UT]∥22+2E∥EM[UT]−E[UT]∥22 ≤2E∥U−UT∥22+2MVar(UT) =O(Δx2o).

Here we used the standard convergence properties of the sequence of Monte Carlo estimators EM[UT]. The norm ∥⋅∥2 denotes the canonical norm for the (pathwise) solution, i.e., the L2(D) norm.

There is no dependence between different samples ωm, and therefore the described algorithm is trivial to parallelize. Our implementation distributes the workload on as many CPU threads as are available in the computing environment, achieving (trivially) optimal parallelization-efficiency.

4 Examples

We evaluate the proposed approach for uncertainty quantification of linear hyperbolic conservation laws on a suite of test cases. We start with a “degenerate” case, where an autonomous scalar linear transport equation is driven by a (space-independent) Ornstein–Uhlenbeck process. We present an error and convergence analysis, based on the existence of the explicit solution formula (17).

Then, we present two realistic cases of linear systems of conservation laws in two dimensions, namely the equations for linear acoustics and the motion of magnetic fields (induction equation). In both cases, we model the (background) velocity field in x-, and y-direction as a spatiotemporal random field, and we show hyperbolicity of the system. The simulations show the robustness of the approach and reveal interesting features of the moments of the solutions.

4.1 Ornstein–Uhlenbeck process driven scalar linear advection

We
start by considering the scalar linear stochastic conservation law with a parameter given by the Ornstein–Uhlenbeck process (5), i.e.,

 ut+(a(t)u)x=0,u(x,0,ω)=u0(x),x∈D=[0,1]da(t)=θ(μ−a(t))dt+σdB(t),a(0)=a0, (33)

with periodic boundary conditions for u. The eigenvalue of the PDE is the coefficient itself, i.e., λ(t)=a(t), and the normalized eigenvector is 1. The system is hyperbolic according to Definition 2.2, since K(ω)≡1 and, using Equation (6), we have that the expected maximum eigenvalue

 |E[λ(x,t,ω)]|=|E[a(t,ω)]|=∣∣μ+(a0−μ)e−θt∣∣<∞, for all t∈R+ (34)

is finite.

We use the derived explicit solution formula from Theorem 2.2 for the moments of the solution to this equation to show convergence of the proposed MC FV method. To this end, we compute the first two moments of Equation (33) at a fixed time T, started with an initial condition consisting of discontinuities, i.e. u0=χS, where χS is the characteristic function of a subset S⊂D. Furthermore, we choose the initial condition a0 of the OU process to be deterministic. A few sample solutions for these parameters are plotted in Figure 1(a). We can see in Figure 2 that the expectation at time T consists of the initial function transported with speed ^μ and smoothed out wave fronts, in accordance with Theorem 2.2. The largest values of the variance are located around the (smoothed out) discontinuities.

The first order MC FV scheme uses a standard upwind discretization of the deterministic problem, and the Milstein scheme for the OU–process. The second order scheme consists of a minmod flux-limiter in space and a second order strong stability preserving (SSP) Runge-Kutta time-stepping for the deterministic problem, together with a (weak) second order stochastic Runge-Kutta scheme for the OU–process. The discrete time interval h of the OU–process is chosen according to Equation (28), and the number of Monte Carlo samples according to Equation (3). For the described setup, Figure 3 shows the relative approximation error of the first two moments of the solution. Both the first and second order MC FV methods converge to the exact solution, and the second order method has a smaller error constant compared to the first order method. The full convergence order of the second order scheme is not achieved, since the deterministic solution consists of discontinuous piecewise linear data.
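The role of ^μ and ^σ2 in the exact formula (17) can be checked without any PDE solver: since A(t) is Gaussian, E[u0(x−A(t))] can be sampled directly. For the smooth initial condition u0(x)=sin(2πx) (an illustrative choice, not the discontinuous profile used above) the Gaussian smoothing even has a closed form; all parameter values below are arbitrary test inputs:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
theta, mu, a0, sigma, t = 1.0, 0.5, 0.0, 0.3, 1.0

# mean and variance of A(t) = int_0^t a(s) ds, cf. Equation (19) and the proof above
mu_hat = mu * t - (a0 - mu) * (math.exp(-theta * t) - 1.0) / theta
var_hat = sigma**2 / theta**3 * (theta * t + 2 * math.exp(-theta * t)
                                 - 0.5 * math.exp(-2 * theta * t) - 1.5)

# E[u(x,t)] = E[u0(x - A(t))]; for u0(x) = sin(2 pi x) the Gaussian smoothing
# has the closed form exp(-2 pi^2 var_hat) * sin(2 pi (x - mu_hat))
x = 0.25
Y = mu_hat + math.sqrt(var_hat) * rng.standard_normal(200_000)
mc = float(np.mean(np.sin(2 * np.pi * (x - Y))))
exact = math.exp(-2 * math.pi**2 * var_hat) * math.sin(2 * math.pi * (x - mu_hat))
```

The sampled expectation matches the closed form up to the Monte Carlo error, confirming that the pair (^μ, ^σ2) fully determines the smoothing in (17).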

4.2 Linear Acoustics in 2 dimensions

Sound waves can be described using the vector of conserved variables U=(p,u,v)⊤, where p is the pressure, u is the velocity in x-direction, and v the velocity in y-direction. Given a background density ρ and a bulk modulus of compressibility K, the dynamics are governed by the following system of equations

 Ut+(˘A(10)U)x+(˘A(01)U)y=0,˘Aw=˘Aw(x,t