
Numerical methods for the 2nd moment of stochastic ODEs

R. Andreev and K. Kirchner
Univ Paris Diderot, Sorbonne Paris Cité, LJLL (UMR 7598 CNRS), F-75205 Paris, France
Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, SE-412 96 Gothenburg, Sweden
July 5, 2019

Numerical methods for stochastic ordinary differential equations typically estimate moments of the solution from sampled paths. Instead, in this paper we directly target the deterministic equation satisfied by the first and second moments. For the canonical examples with additive noise (Ornstein–Uhlenbeck process) or multiplicative noise (geometric Brownian motion) we derive these deterministic equations in variational form and discuss their well-posedness in detail. Notably, the second moment equation in the multiplicative case is naturally posed on projective–injective tensor products as trial–test spaces. We propose Petrov–Galerkin discretizations based on tensor product piecewise polynomials and analyze their stability and convergence in the natural norms.

Key words and phrases:
Stochastic ordinary differential equations, Additive noise, Multiplicative noise, Variational problem, Hilbert tensor product space, Projective and injective tensor product space, Petrov–Galerkin discretization
2010 Mathematics Subject Classification:
65C30, 60H10, 65L60


1. Introduction

Ordinary and partial differential equations are pervasive in the financial, biological, engineering and social sciences, to name a few. Often, randomness is introduced to model uncertainties in the coefficients, in the geometry of the physical domain, in the boundary or initial conditions, or in the sources (right-hand sides). In this paper we aim at the latter scenario, specifically at ordinary or partial differential evolution equations driven by Brownian noise. The random solution is then a continuous-time stochastic process with values in a certain state space. When the state space is finite-dimensional, it may be possible to approximate numerically the temporal evolution of the probability density function of the stochastic process, but in most applications only the first few statistical moments of the random solution may be of interest or even feasible to compute.

The computation of moments of the random solution is typically based on sampling methods such as Monte Carlo. In general, however, Monte Carlo methods are computationally expensive due to the slow O(N^{−1/2}) convergence of the Monte Carlo estimator in the number N of samples and the high cost of computing sample paths of solutions to stochastic differential equations. Recent developments aiming at reducing the computational cost include multilevel Monte Carlo methods, e.g., [6, 7], quasi-Monte Carlo integration [10], and combinations thereof [8, 11].
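The N^{−1/2} rate can be illustrated with a minimal Monte Carlo experiment. The sketch below samples the Ornstein–Uhlenbeck endpoint X(T) from its exact Gaussian distribution; all parameter values are illustrative choices, not taken from the text.

```python
import numpy as np

# Monte Carlo estimation of E[X(T)] for the Ornstein-Uhlenbeck process
# dX + lam*X dt = rho dW, X(0) = x0, sampling X(T) from its exact Gaussian law.
# Illustrates the O(N^{-1/2}) root-mean-square error of the MC estimator.
# All parameter values (lam, rho, x0, T) are illustrative.

rng = np.random.default_rng(0)
lam, rho, x0, T = 1.0, 0.5, 1.0, 1.0

mean_T = np.exp(-lam * T) * x0                            # exact mean at T
var_T = rho**2 * (1 - np.exp(-2 * lam * T)) / (2 * lam)   # exact variance at T

def mc_rmse(N, reps=200):
    """RMSE of the sample-mean estimator with N samples, over `reps` repetitions."""
    est = rng.normal(mean_T, np.sqrt(var_T), size=(reps, N)).mean(axis=1)
    return np.sqrt(np.mean((est - mean_T) ** 2))

r1, r2 = mc_rmse(100), mc_rmse(10000)
# Increasing N by a factor 100 should shrink the RMSE by about a factor 10.
print(r1 / r2)
```

The observed ratio is close to 10, consistent with the N^{−1/2} rate.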

An alternative to sampling for the covariance of a parabolic stochastic PDE driven by additive Brownian noise was proposed in [12]. It is based on the insight that the second moment satisfies a deterministic equation that can be formulated as a well-posed linear space-time variational formulation on Hilbert tensor products of Bochner spaces. The main promise of space-time variational formulations is in potential savings in computing time and memory through space-time compressive schemes, e.g., using adaptive wavelet methods or low-rank tensor approximations. Multiplicative noise requires a more careful analysis because, firstly, an extra term in the space-time variational formulation constrains it to projective–injective tensor products of those Bochner spaces for the trial–test spaces [9]. Secondly, the well-posedness is self-evident only as long as the volatility of the multiplicative noise is sufficiently small. Consequently, while it is relatively straightforward to derive numerical methods in the case of additive noise (by tensorizing existing space-time discretizations of deterministic parabolic evolution equations), new techniques are necessary in the case of multiplicative noise. To fully explain and address those issues, in this paper we focus entirely on canonical examples of stochastic ODEs driven by additive or multiplicative Brownian noise. However, to facilitate the transition to parabolic stochastic PDEs, our estimates are explicit and sharp in the relevant parameters.

The paper is structured as follows. In Section 2 we introduce the model stochastic ODEs and the necessary definitions, derive the deterministic equations for the first and second moments and discuss their well-posedness. In Section 3 we present conforming Petrov–Galerkin discretizations of those equations and discuss their stability, concluding with a numerical example. Section 4 summarizes the paper.

A comment on notation. If is a Banach space then denotes its unit sphere. We write . The symbol δ denotes the Dirac measure (at the indicated point). The closure of an interval is . The symbol variously denotes the tensor product of two functions or the algebraic tensor product of function spaces, depending on the context.

2. Derivation of the deterministic moment equations

2.1. Model stochastic ODEs

Let 0 < T < ∞ and set J := (0, T). The focus of this paper is on the model real-valued stochastic ODEs with additive noise

dX(t) + λ X(t) dt = ρ dW(t),  t ∈ J,  X(0) = X₀,   (1)

or with multiplicative noise

dX(t) + λ X(t) dt = σ X(t) dW(t),  t ∈ J,  X(0) = X₀,   (2)

where

  • λ > 0 is a fixed positive number that models the action of an elliptic operator,

  • W is a real-valued Brownian motion defined on a complete probability space,

  • ρ, σ ∈ ℝ are parameters specifying the volatility of the noise,

  • the initial value X₀ is a random variable independent of the Brownian motion with known first and second moments (but not necessarily with a known distribution).
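As a concrete reference point, both model equations can be simulated with the Euler–Maruyama scheme. The sketch below assumes the drift −λX and the noise terms ρ dW and σX dW as in (1)–(2), with illustrative parameter values, and checks that both models share the first moment e^{−λt} E[X₀] (a standard fact for these two processes).

```python
import numpy as np

# Euler-Maruyama sample paths for the two model SDEs:
#   additive:        dX + lam*X dt = rho dW    (Ornstein-Uhlenbeck)
#   multiplicative:  dX + lam*X dt = sig*X dW  (geometric Brownian motion)
# Parameter values are illustrative.

rng = np.random.default_rng(1)
lam, rho, sig, x0, T, n = 1.0, 0.5, 0.5, 1.0, 1.0, 1000
dt = T / n

def euler_maruyama(diffusion, paths=50000):
    x = np.full(paths, x0)
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), size=paths)
        x = x - lam * x * dt + diffusion(x) * dW
    return x

x_add = euler_maruyama(lambda x: rho)        # additive noise
x_mul = euler_maruyama(lambda x: sig * x)    # multiplicative noise

# Both processes share the same first moment: E[X(t)] = exp(-lam*t) * x0.
print(x_add.mean(), x_mul.mean(), np.exp(-lam * T) * x0)
```

Both sample means agree with e^{−λT} x₀ up to Monte Carlo and time-discretization error.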

We call ℱ the σ-algebra generated by the initial value X₀ and the Brownian motion W, and (ℱ_t) the resulting filtration. The expectation operator is denoted by E. We refer to [13] for basic notions of stochastic integration and Itô calculus.

A real-valued stochastic process is said to be a (strong continuous) solution of the stochastic differential equation “ on with ” if a) is progressively measurable with respect to , b) the expectation of is finite, c) the integral equation


holds (), and d) is continuous (). By standard theory [13, Theorem 5.2.1], a Lipschitz condition on the coefficients implies existence and uniqueness of such a solution. Moreover, it has finite second moments. For future reference, we state here the integral equations for (1)–(2):

X(t) = X₀ − λ ∫_0^t X(s) ds + ρ W(t),  t ∈ J̄,   (4)

X(t) = X₀ − λ ∫_0^t X(s) ds + σ ∫_0^t X(s) dW(s),  t ∈ J̄.   (5)
The solution processes and their first/second moments are known explicitly:

Additive (4) (Ornstein–Uhlenbeck process): X(t) = e^{−λt} X₀ + ρ ∫_0^t e^{−λ(t−s)} dW(s), with E[X(t)] = e^{−λt} E[X₀] and E[X(s) X(t)] = e^{−λ(s+t)} E[X₀²] + ρ² e^{−λ(s+t)} (e^{2λ(s∧t)} − 1)/(2λ).

Multiplicative (5) (geometric Brownian motion): X(t) = X₀ exp(−(λ + σ²/2) t + σ W(t)), with E[X(t)] = e^{−λt} E[X₀] and E[X(s) X(t)] = e^{−λ(s+t)} e^{σ²(s∧t)} E[X₀²].

In particular, sup_{t∈J̄} E[X(t)²] < ∞.   (6)
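These closed-form moments can be checked by direct sampling. The script below uses the standard facts that X(T) is Gaussian in the additive case and lognormal in the multiplicative case (with deterministic X(0) = x₀); all parameter values are illustrative.

```python
import numpy as np

# Check the standard closed-form second moments at t = T by sampling from the
# exact solution formulas (values of lam, rho, sig, x0, T are illustrative):
#   OU:  E[X(T)^2] = e^{-2 lam T} x0^2 + rho^2 (1 - e^{-2 lam T}) / (2 lam)
#   GBM: E[X(T)^2] = e^{(sig^2 - 2 lam) T} x0^2     (deterministic X(0) = x0)

rng = np.random.default_rng(2)
lam, rho, sig, x0, T, N = 1.0, 0.5, 0.5, 1.0, 1.0, 400000

# OU: X(T) is Gaussian with known mean and variance.
m = np.exp(-lam * T) * x0
v = rho**2 * (1 - np.exp(-2 * lam * T)) / (2 * lam)
ou_mc = np.mean(rng.normal(m, np.sqrt(v), N) ** 2)
ou_exact = m**2 + v

# GBM: X(T) = x0 * exp(-(lam + sig^2/2) T + sig W(T)),  W(T) ~ N(0, T).
W = rng.normal(0.0, np.sqrt(T), N)
gbm_mc = np.mean((x0 * np.exp(-(lam + sig**2 / 2) * T + sig * W)) ** 2)
gbm_exact = np.exp((sig**2 - 2 * lam) * T) * x0**2

print(ou_mc, ou_exact, gbm_mc, gbm_exact)
```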

The square integrability (6), in conjunction with Fubini’s theorem, will be used to interchange the order of integration over the probability space and over J without further mention. Square integrability also implies the useful martingale property (see [13, Corollary 3.2.6] and [13, Definition 3.1.4])


Choosing the smaller time equal to zero shows that the stochastic integral has expectation zero. If and are two square integrable processes adapted to the filtration, the Itô isometry [13, Corollary 3.1.7], along with (7) and the polarization identity, yields the equality


These are the main tools in the derivation of (6). We will write for the real-valued stochastic process on indexed by the parameter space .
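The Itô isometry itself is easy to test numerically for a deterministic integrand, discretizing the stochastic integral as a left-endpoint Riemann sum; the integrand e^{−t} below is an illustrative choice.

```python
import numpy as np

# Monte Carlo check of the Ito isometry E[(∫_0^T f dW)^2] = ∫_0^T f(t)^2 dt
# for the deterministic integrand f(t) = exp(-t) (an illustrative choice).

rng = np.random.default_rng(3)
T, n, N = 1.0, 256, 40000
dt = T / n
t = np.arange(n) * dt                        # left endpoints t_0, ..., t_{n-1}

f = np.exp(-t)
dW = rng.normal(0.0, np.sqrt(dt), size=(N, n))
ito = dW @ f                                 # sum_i f(t_i) (W(t_{i+1}) - W(t_i))

lhs = np.mean(ito**2)
rhs = (1.0 - np.exp(-2.0 * T)) / 2.0         # ∫_0^T exp(-2t) dt
print(lhs, rhs)
```

The two values agree up to quadrature and Monte Carlo error.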

A function M ∈ L²(J × J) is called symmetric if M(s, t) = M(t, s) for a.e. (s, t) ∈ J × J. It is said to be positive semi-definite if

∫_J ∫_J M(s, t) φ(s) φ(t) ds dt ≥ 0 for all φ ∈ L²(J).   (9)
Our first aim will be to derive deterministic equations for the first and the second moments

m(t) := E[X(t)],  t ∈ J̄,  and  M(s, t) := E[X(s) X(t)],  (s, t) ∈ J̄ × J̄,   (10)

as well as for the covariance function

Cov(s, t) := M(s, t) − m(s) m(t),  (s, t) ∈ J̄ × J̄,   (11)
of the stochastic process . The second moment and the covariance are symmetric positive semi-definite.

2.2. Deterministic first moment equations

We first introduce the spaces

E := L²(J)  and  F := {v ∈ H¹(J) : v(T) = 0},   (12)

where the latter denotes the closed subspace of the Sobolev space H¹(J) of functions vanishing at T. Thanks to the embedding H¹(J) ↪ C(J̄), elements of F will be identified with their continuous representatives. These spaces are equipped with the λ-dependent norms


and the obvious corresponding inner products and . The norm on is motivated by the fact that

Lemma 2.1.

Let . Then


Suppose that the supremum of is attained at some . Integrating over , applying the Cauchy–Schwarz and the Young inequalities leads to the estimate in terms of the norms. In a similar way, observing that , we obtain in terms of the norms. Adding the two inequalities gives (15). ∎

The inequality (15) is sharp in general as the function attests:


The deterministic moment equations will be expressed in terms of the continuous bilinear form

b(u, v) := ∫_J u(t) (−v′(t) + λ v(t)) dt.   (17)
We employ the same notation for the induced bounded linear operator


and use whichever is more convenient, as should be evident from the context. The operator arises in the weak formulation of the ordinary differential equation u′ + λu = f. With the definition of the norms (13), it is an isometric isomorphism,


Indeed, is obvious from (13)–(14). To verify , let be arbitrary. Taking as the solution to the ODE with , it follows using (13)–(14) that . Therefore, , and in particular . This shows the isometry property. By a similar argument, for all nonzero . By [3, Theorem 2.1], is an isomorphism .

If a functional f can be expressed as f(v) = ∫_J f̃(t) v(t) dt for some f̃ ∈ L²(J), then the solution u of the corresponding variational problem enjoys the representation

u(t) = ∫_0^t e^{−λ(t−s)} f̃(s) ds,  t ∈ J̄.   (20)
Despite this integral representation, the inverse is not a compact operator (it is an isomorphism).

Applying the expectation operator to (4)–(5) shows that the first moment m(t) = E[X(t)] of the solution satisfies the integral equation

m(t) = E[X₀] − λ ∫_0^t m(s) ds,  t ∈ J̄.   (21)
Testing this equation with the derivative of an arbitrary test function v with v(T) = 0 and integrating by parts in time shows that the first moment of (4)–(5) solves the deterministic variational problem

∫_J m(t) (−v′(t) + λ v(t)) dt = E[X₀] v(0) for all such test functions v.   (22)
2.3. Second moment equations: additive noise

The Hilbert tensor product spaces


are obtained as the closures of the corresponding algebraic tensor products under the norm induced by the tensorized inner product,


and similarly for . We write also for the norm of and for the norm of the dual space of . We recall the canonical isometry [15, Theorem II.10]

L²(J) ⊗ L²(J) ≅ L²(J × J).   (25)
By virtue of square integrability (6), the second moment is an element of . We define the bilinear form


or explicitly as


More precisely, is the unique continuous extension of by bilinearity from the algebraic tensor products to . Boundedness and injectivity of the operator induced by the bilinear form follow readily from the corresponding properties of , so that the operator is an isometry and its inverse is the corresponding continuous extension of . A representation of the inverse analogous to (20) also holds. For example, the integral kernel of the functional is , which gives . As a further illustration, we give a lemma that will be used below.

A functional f on the tensor product space is called positive semi-definite if

f(v ⊗ v) ≥ 0 for all v.   (28)
Lemma 2.2.

The function is positive semi-definite in the sense of (9) if and only if the functional is positive semi-definite.


Identifying with via for all , we observe that . Thus is positive semi-definite iff is. ∎

Finally, we introduce the bounded linear functional


As in [12, Lemma 4.1], [19, Lemma 5.1] could be used to show boundedness of . We give here an elementary quantitative argument. Writing as the integral of over and exploiting the representation (20) of we find . Since is an isometry, the operator norm of is


In particular, this yields the asymptotics for small and for large . In addition, the uniform bound holds, see Remark 2.9.

We are now ready to state the following result (derived for stochastic PDEs in [12]).

Proposition 2.3.

The second moment of the solution to the stochastic ODE (1) with additive noise solves the deterministic variational problem


Inserting the solution (4) in the first term of and integrating it by parts one finds


where the stochastic integration by parts formula [13, Theorem 4.1.5] was used in the second equality. Employing this in with (8) for the term leads to the desired conclusion. ∎

From the equations for the first and second moments, an equation for the covariance function follows:


The proof is straightforward and is omitted.

2.4. Second moment equations: multiplicative noise

Before proceeding with the second moment equation for the multiplicative case we formulate a lemma, which repeats the derivation of the first moment equation (22) without taking the expectation first.

Lemma 2.4.

Let be the solution (5) to the stochastic ODE (2). Then


Let . We employ the definition (5) of the solution in the first term of and integration by parts on the first two summands of the integrand to obtain (observing that the terms at vanish due to )


Inserting this expression in the definition (17) of yields the claimed formula. ∎

The next ingredient in the second moment equation for the multiplicative noise, which appears due to the integral term in (34), is the bilinear form


referred to as the trace product. Again, we use the same symbol for the induced operator, where convenient. Here, denotes the algebraic tensor product. The expression (37) is meaningful because functions in are bounded. As we will see in Lemma 2.8, this bilinear form extends continuously to a form


on the projective and the injective tensor product spaces


These spaces are defined as the closure of the algebraic tensor product under the projective norm


and the injective norm


respectively. Note that, initially, these norms are defined on the algebraic tensor product space. In particular, the sums in (40) are finite and the action of in (41) is well-defined. The spaces in (39) are separable Banach spaces. They are reflexive if and only if their dimension is finite [16, Theorem 4.21]. By [16, Proposition 6.1(a)], these tensor norms satisfy


as well as


We write for the norm of the continuous dual .

Example 2.5.

Consider ℝⁿ with the Euclidean norm. Elements of ℝⁿ ⊗ ℝⁿ can be identified with real n × n matrices. Let s₁(A) ≥ ⋯ ≥ sₙ(A) denote the singular values of the matrix A. The projective, the Hilbert, and the injective norms on ℝⁿ ⊗ ℝⁿ are the nuclear norm s₁(A) + ⋯ + sₙ(A), the Frobenius norm (s₁(A)² + ⋯ + sₙ(A)²)^{1/2}, and the operator norm s₁(A), respectively. They are also known as the Schatten p-norms with p = 1, p = 2, and p = ∞. Note that s₁(A) ≤ (s₁(A)² + ⋯ + sₙ(A)²)^{1/2} ≤ s₁(A) + ⋯ + sₙ(A).
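In this finite-dimensional picture all three norms are computed from one singular value decomposition. A quick check, with an arbitrary random matrix as an illustrative input:

```python
import numpy as np

# Schatten norms of an n x n matrix via its singular values, illustrating
# the ordering operator norm <= Frobenius norm <= nuclear norm
# (Schatten p = infinity, 2, 1). The random 5 x 5 matrix is illustrative.

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))
s = np.linalg.svd(A, compute_uv=False)   # singular values, descending

nuclear = s.sum()                  # projective norm / Schatten-1
frobenius = np.sqrt((s**2).sum())  # Hilbert norm   / Schatten-2
operator = s[0]                    # injective norm / Schatten-infinity

print(operator, frobenius, nuclear)
```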

For a symmetric and positive semi-definite function the operator defined by , is self-adjoint and positive semi-definite. Let denote its eigenvalues. If is finite then the operator is trace-class and , see [14, Theorem 9.1.38 and comments]. The following specialization will be useful.

Lemma 2.6.

If is symmetric positive semi-definite then with from (29).


Let be an orthonormal basis of consisting of eigenvectors of corresponding to the eigenvalues . By symmetry, . Since , we have . ∎
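The identity behind Lemma 2.6 — for a symmetric positive semi-definite kernel, the sum of the eigenvalues of the induced integral operator equals the integral of the kernel over the diagonal — can be checked with a simple quadrature discretization. The Brownian-motion covariance min(s, t) on (0, 1), whose operator eigenvalues sum to 1/2, is an illustrative choice.

```python
import numpy as np

# Quadrature check for a symmetric positive semi-definite kernel:
# the eigenvalues of the integral operator with kernel C(s,t) = min(s,t)
# on (0,1) sum to the diagonal integral ∫_0^1 min(t,t) dt = 1/2.

n = 400
t = (np.arange(n) + 0.5) / n          # midpoint quadrature nodes
K = np.minimum.outer(t, t)            # kernel matrix C(t_i, t_j)
eig = np.linalg.eigvalsh(K) / n       # approximate operator eigenvalues

print(eig.sum())                      # close to 1/2
```

All computed eigenvalues are nonnegative, and their sum reproduces the diagonal integral.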

An arbitrary can be decomposed (via the corresponding integral operator) as with symmetric positive semi-definite and an antisymmetric . This decomposition is stable in the sense that


The tensor product spaces and will be necessary because the trace product is not continuous on the Hilbert tensor product spaces as the following example illustrates.

Example 2.7.

To simplify the notation, suppose , so that . Define by for . Consider the sequence of indicator functions


In view of the isometry (25), this sequence is a null sequence in . However, for all . Therefore, is not continuous on .

The example additionally shows that is not continuous on either, since by (42)–(43) we have , while as .

By contrast, is not a null sequence in . Indeed, Lemma 2.6 gives for all .
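The failure of continuity on the Hilbert tensor product can also be seen with kernels concentrated along the diagonal. The band construction below is our own variant, not the example from the text: scaled bands of width 1/n keep a bounded L²(J × J) norm while their diagonal integral grows like √n.

```python
import numpy as np

# The diagonal trace M -> ∫ M(t,t) dt is unbounded on L2(J x J): kernels
# M_n(s,t) = sqrt(n) * 1_{|s-t| < 1/(2n)} on J = (0,1) have L2 norm close
# to 1 for every n, while their diagonal integral equals sqrt(n).
# (A variant construction; the band shape is our illustrative choice.)

T, m = 1.0, 2000
t = (np.arange(m) + 0.5) * (T / m)
S, U = np.meshgrid(t, t, indexing="ij")

for n in (4, 16, 64):
    M = np.sqrt(n) * (np.abs(S - U) < 1 / (2 * n))
    l2 = np.sqrt(np.sum(M**2) * (T / m) ** 2)         # L2(J x J) norm
    diag = np.sum(np.sqrt(n) * np.ones(m)) * (T / m)  # ∫ M(t,t) dt = sqrt(n)*T
    print(n, round(l2, 3), round(diag, 3))
```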

Lemma 2.8.

The trace product in (38) is continuous on with .


By density it suffices to bound for arbitrary and . By [17, Theorem 2.4] we may assume that . We note first that the point evaluation functionals have norm on by (15). Therefore, if then


and the continuity of follows:


where the integral Cauchy–Schwarz inequality on was used in the last step, together with the fact that . ∎

We point out that the bound is sharp in general. For take with and with as in (16). Then and , and the bound is tight when applying both limits.

Remark 2.9.

Consider the functional from (29). Since and , we have . In view of , see (43), we find . Finally, by the integral Cauchy–Schwarz inequality and Lemma 2.6.

A crucial observation is that the second moment lies not only in the Hilbert tensor product space but in the smaller projective tensor product space, . This follows by passing the norm under the expectation , then using (42) and the square integrability (6) of .

We recall here from [17, Theorems 2.5 and 5.13] the fact that


(whereas the space is isometric to a proper subspace of , see [16, p. 46]). A corollary of this representation is that


because extends to an isometric isomorphism from onto . We denote it also by . This isometry property (49), Lemma 2.2 and Lemma 2.6 produce the useful identity


for any positive semi-definite as in (28), if it is also symmetric:


Here and below, Lemma 2.2 applies to functionals in mutatis mutandis. Similarly, using the decomposition from (44) we can decompose any into symmetric positive semi-definite and antisymmetric parts with


Now we are in a position to introduce the bilinear form


or more explicitly,


The reason for this definition is the following result from [9, Theorem 4.2] derived there for stochastic PDEs. The simplified proof is given here for completeness.

Proposition 2.10.

The second moment of the solution to the stochastic ODE (2) with multiplicative noise solves the deterministic variational problem


It suffices to verify the claim for of the form with . The more general statement follows by linearity and continuity of both sides in . We first observe with Fubini’s theorem that the expectation may be interchanged with the integration over J × J. Next, we insert the expression (34) for both factors and expand the product. The cross-terms vanish because terms of the form vanish in expectation; this is seen by conditioning this term on and employing the martingale property (7). With the identity (8) and we arrive at


It remains to verify that coincides with the last term on the right-hand side. Let us distinguish the two cases s ≤ t and t < s and write that triple integral as


Evaluating the integral in the first summand and the integral in the second summand, we see that . Hence, . This completes the proof. ∎

Using the equations for the first and second moments we obtain an equation for the covariance function from (11):


Identity (50) yields for the functional appearing on the right-hand side of (55) and (58). Similarly, , providing some details on the estimate from Lemma 2.8.

We emphasize that, in the present case of multiplicative noise, it is not possible to replace the pair of trial and test spaces by either pair or , because by Example 2.7 the operator is not continuous there. We note, however, that in the case of additive noise (Section 2.3) the pair could be used instead of . Then with the asymptotics (small ) and (large ).

In order to discuss the well-posedness of the variational problem (55), given a functional , we consider the more general problem:


Owing to and we have . Thus, injectivity of the operator holds under the small-volatility condition σ² < 2λ. A similar condition was imposed in [9, Theorem 5.5]. This is exactly the threshold for the second moment (6) to diverge as t → ∞, although it stays finite for all finite t. We discuss here what happens in the variational formulation (59) for larger volatilities σ² ≥ 2λ, and summarize our findings in Theorem 2.11 below.
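The role of the threshold can be made concrete with the (standard) closed-form second moment of geometric Brownian motion with drift −λX; the parameters below are illustrative and X(0) = 1 is deterministic.

```python
import numpy as np

# The second moment of geometric Brownian motion with drift -lam*X satisfies
# E[X(t)^2] = exp((sig^2 - 2*lam) t) * E[X(0)^2], so sig^2 = 2*lam is exactly
# the threshold between decay and blow-up as t -> infinity. Monte Carlo check
# via the exact solution formula, with deterministic X(0) = 1.

rng = np.random.default_rng(5)
lam, sig, T, N = 1.0, 1.0, 1.0, 400000     # sig^2 = 1 < 2 = 2*lam

W = rng.normal(0.0, np.sqrt(T), N)
X = np.exp(-(lam + sig**2 / 2) * T + sig * W)
mc = np.mean(X**2)
exact = np.exp((sig**2 - 2 * lam) * T)
print(mc, exact)                           # both close to exp(-1)

# Closed-form second moments at t = 10 on both sides of the threshold:
for s2 in (1.0, 2.0, 4.0):
    print(s2, np.exp((s2 - 2 * lam) * 10.0))   # decay, constant, blow-up
```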

Since is an isomorphism, problem (59) is equivalent to . Using the representation of as the double integral of , and the integral representation of through (20), we obtain the integral equation


Defining and we find from (60) the ODE with the initial condition . The solution is




under the integral of (60) provides a unique candidate for . Moreover, . We now estimate in terms of the norm of .

Clearly, not all functionals lead to solutions that are potential second moments. Let us therefore assume that is symmetric positive semi-definite (51)/(28). Then is positive semi-definite (9) by Lemma 2.2. In particular, and . With , the functional