Exponential Integrators for Stochastic Maxwell’s Equations Driven by Itô Noise
†Submitted to the editors in DATE.
Funding: This work was supported by the
National Natural Science Foundation of China
(NO. 91530118, NO. 91130003, NO. 11021101, NO. 91630312 and NO. 11290142),
the Swedish Foundation for International Cooperation in Research and Higher Education
(STINT project nr. ), as well as the Swedish Research Council (VR)
(projects nr. and ).
The computations were performed on resources provided by the Swedish National Infrastructure
for Computing (SNIC) at HPC2N, Umeå University.
Abstract
This article presents explicit exponential integrators for stochastic Maxwell’s equations driven by both multiplicative and additive noise. By utilizing the regularity estimate of the mild solution, we first prove that the strong order of the numerical approximation is for general multiplicative noise. Combining a proper decomposition with the stochastic Fubini theorem, the strong order of the proposed scheme is shown to be for additive noise. Moreover, for the linear stochastic Maxwell’s equation with additive noise, the proposed time integrator is shown to exactly preserve the symplectic structure, the evolution of the energy, as well as the evolution of the divergence in the sense of expectation. Several numerical experiments are presented in order to verify our theoretical findings.
Key words. stochastic Maxwell’s equation, exponential integrator, strong convergence, trace formula, average energy, average divergence.
AMS subject classifications. 60H35, 60H15, 35Q61.
1 Introduction
In the context of electromagnetism, a common way to model precise microscopic origins of randomness (such as the thermal motion of electrically charged microparticles) is by means of stochastic Maxwell’s equations [35]. Further applications of stochastic Maxwell’s equations include the following. In [32], a stochastic model of Maxwell’s field equations in dimension is shown to be a simple modification of a random walk model due to Kac, which provides a basis for the telegraph equations. The work [27] studies the propagation of ultrashort solitons in a cubic nonlinear medium modeled by nonlinear Maxwell’s equations with stochastic variations of media. To simulate a coplanar waveguide with uncertain material parameters, time-harmonic Maxwell’s equations are considered in [4]. For linear stochastic Maxwell’s equations driven by additive noise, the work [21] proves that the problem is a stochastic Hamiltonian partial differential equation whose phase flow preserves the multisymplectic geometric structure. In addition, the averaged energy along the flow increases linearly with respect to time and the flow preserves the divergence in the sense of expectation, see [10]. Let us finally mention that linear stochastic Maxwell’s equations are relevant in various physical applications, see e.g. [35, Chapter 3].
We now review the literature on the numerical discretisation of stochastic Maxwell’s equations. The work [41] performs a numerical analysis of the finite element method and the discontinuous Galerkin method for stochastic Maxwell’s equations driven by colored noise. A stochastic multisymplectic method for dimensional problems with additive noise, based on a stochastic variational principle, is studied in [21]. In particular, it is shown that the implicit numerical scheme preserves a discrete stochastic multisymplectic conservation law. The work [10] examines geometric properties of the stochastic Maxwell’s equation with additive noise, namely the behavior of the averaged energy and divergence, see below for further details. In particular, the authors of [10] investigate three novel stochastic multisymplectic (implicit in time) methods preserving discrete versions of the averaged divergence. None of the proposed numerical schemes exactly preserves the behavior of the averaged energy. The work [22] proposes a stochastic multisymplectic wavelet collocation method for the approximation of stochastic Maxwell’s equations with multiplicative noise (in the Stratonovich sense). For the same stochastic Maxwell’s equation as the one considered in this paper (see below for a precise definition), the recent reference [8] shows that the backward Euler–Maruyama method converges with mean-square convergence rate . Finally, the preprint [9] studies implicit Runge–Kutta schemes for stochastic Maxwell’s equation with additive noise. In particular, a mean-square convergence of order is obtained.
In the present paper, we construct and analyse an exponential integrator for stochastic Maxwell’s equations which is explicit (thus computationally more efficient than the above-mentioned time integrators) and which enjoys excellent long-time behavior. Observe that exponential integrators are widely used for the efficient time integration of deterministic differential equations, see for instance [18, 7, 19, 12] and, more specifically for Maxwell-type equations, [37, 31, 24, 39, 33] and references therein. In recent years, exponential integrators have also been analysed in the context of stochastic (partial) differential equations (S(P)DEs). Without being too exhaustive, we mention analysis and applications of such numerical schemes for the following problems: stochastic differential equations [36, 25, 26]; stochastic parabolic equations [23, 29, 5, 15, 3]; stochastic Schrödinger equations [1, 11, 16]; stochastic wave equations [13, 40, 14, 2, 34] and references therein.
The main contributions of the present paper are:

a strong convergence analysis of an explicit exponential integrator for stochastic Maxwell’s equations in . By making use of regularity estimates of the exact and numerical solutions, the strong convergence order is shown to be for general multiplicative noise. Furthermore, by using a proper decomposition and the stochastic Fubini theorem, we prove that the strong convergence order of the proposed scheme can achieve .

an analysis of long-time conservation properties of an explicit exponential integrator for linear stochastic Maxwell’s equations driven by additive noise. In particular, we show that the proposed explicit time integrator is symplectic and satisfies a trace formula for the energy, i.e., the linear drift of the averaged energy is preserved for all times. In addition, the numerical solution preserves the averaged divergence. This shows that the exponential integrator inherits the geometric structure and the dynamical behavior of the flow of the linear stochastic Maxwell’s equations. This is not the case for classical time integrators such as Euler–Maruyama type schemes.

an efficient numerical implementation of two-dimensional models of stochastic Maxwell’s equations by explicit time integrators.
We would like to remark that the proofs of strong convergence for the exponential integrator use ideas similar to those present in various proofs of strong convergence from the literature. However, to the best of our knowledge, the present paper offers the first explicit time integrator for linear stochastic Maxwell’s equations that is of strong order , symplectic, exactly preserves the linear drift of the averaged energy, and preserves the averaged divergence for all times. A weak convergence analysis of the proposed scheme for stochastic Maxwell’s equations driven by multiplicative noise will be reported elsewhere.
An outline of the paper is as follows. Section LABEL:sec;2 sets the notation and introduces the stochastic Maxwell’s equation. This section also presents assumptions that guarantee existence and uniqueness of the exact solution to the problem and shows its Hölder continuity. The exponential integrator for the stochastic Maxwell’s equation is introduced in Section LABEL:sectEXP, where we also prove its strong order of convergence for additive and multiplicative noise. In Section LABEL:sectLSM, we show that the proposed scheme has several interesting geometric properties: it preserves the evolution law of the averaged energy, the evolution law of the divergence, and the symplectic structure of the original linear stochastic Maxwell’s equations with additive noise. We conclude the paper by presenting numerical experiments supporting our theoretical results in Section LABEL:sectNE.
2 Wellposedness of stochastic Maxwell’s equations
We consider the stochastic Maxwell’s equation driven by multiplicative Itô noise \linenomath
(1) 
supplemented with the boundary condition of a perfect conductor as in [21]. Here, , is a valued function whose domain is a bounded and simply connected domain in with smooth boundary . The unit outward normal vector to is denoted by . Moreover, stands for the formal time derivative of a Wiener process on a stochastic basis . The Wiener process can be written as , where is a sequence of mutually independent and identically distributed valued standard Brownian motions; is an orthonormal basis of consisting of eigenfunctions of a symmetric, nonnegative linear operator of finite trace, i.e., , with for . Assumptions on and are provided below.
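As an illustration, the series representation of the Wiener process above can be sampled by truncating the sum. The following sketch is an assumption-laden toy example, not taken from the paper: the eigenvalues `q(j)`, the sine eigenfunctions `e(j, x)` on the interval (0, 1), and the truncation level `J` are all illustrative choices.

```python
import math
import random

def q(j):
    """Illustrative eigenvalues of the covariance operator Q (summable decay)."""
    return 1.0 / j**2

def e(j, x):
    """Illustrative orthonormal eigenfunctions on the interval (0, 1)."""
    return math.sqrt(2.0) * math.sin(j * math.pi * x)

def wiener_increment(h, x, J, rng):
    """Sample one truncated Q-Wiener increment Delta W(x) over a step of size h:
    a sum of independent Gaussians weighted by sqrt(q_j * h) * e_j(x)."""
    return sum(math.sqrt(q(j) * h) * rng.gauss(0.0, 1.0) * e(j, x)
               for j in range(1, J + 1))
```

With this, the pointwise variance of the increment is approximately `h * sum(q(j) * e(j, x)**2)`, consistent with the covariance structure of the Q-Wiener process.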
The Maxwell’s operator is defined by \linenomath
(2) 
It has the domain , where \linenomath
is termed the space and \linenomath
is the subspace of with zero tangential trace. In addition, and are bounded and uniformly positive definite functions:
with being a positive constant. These conditions on ensure that the Hilbert space can be equipped with the weighted scalar product \linenomath
where stands for the standard Euclidean inner product. This weighted scalar product is equivalent to the standard inner product on . Moreover, the corresponding norm, which stands for the electromagnetic energy of the physical system, induced by this inner product reads \linenomath
with being the Euclidean norm. Based on the norm , the associated graph norm of is defined by \linenomath
It is well known that Maxwell’s operator is closed and that equipped with the graph norm is a Banach space, see e.g. [30]. Moreover, is skewadjoint, in particular, for all , \linenomath
In addition, the operator generates a unitary group via Stone’s theorem, see for example [17]. According to the definition of unitary groups, one has \linenomath
(3) 
which means that the electromagnetic energy is preserved for Maxwell’s operator, see [20]. In addition, the unitary group satisfies the following properties, which will be used in the next section.
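The norm preservation by the unitary group generated by a skew-adjoint operator can be checked directly in a finite-dimensional analogue. A minimal sketch, with a hypothetical 2×2 skew-symmetric matrix standing in for Maxwell’s operator (the exponential of which is a rotation):

```python
import math

def exp_tA(t, a):
    """Matrix exponential of t*A for the skew-symmetric A = [[0, -a], [a, 0]]:
    a rotation by the angle a*t, hence an orthogonal (unitary) matrix."""
    c, s = math.cos(a * t), math.sin(a * t)
    return [[c, -s], [s, c]]

def apply(M, v):
    """Apply a 2x2 matrix to a vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def norm(v):
    """Euclidean norm, the toy analogue of the electromagnetic energy norm."""
    return math.hypot(v[0], v[1])
```

For every `t`, `norm(apply(exp_tA(t, a), v)) == norm(v)` up to rounding, mirroring the energy preservation stated above.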
Lemma 2.1 (Theorem 3 with in [6]).
For the semigroup on , it holds that \linenomath
(4) 
where the constant does not depend on . Here, denotes the space of bounded linear operators from to .
Observe that, throughout the paper, stands for a constant that may vary from line to line.
For two real-valued separable Hilbert spaces and , we denote the set of Hilbert–Schmidt operators from to by . It will be equipped with the norm
where is any orthonormal basis of . Furthermore, let be the unique positive square root of the linear operator (defining the noise ). We also introduce the separable Hilbert space endowed with the inner product for , where we recall that .
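The basis independence of the Hilbert–Schmidt norm defined above can be illustrated in a two-dimensional toy setting; the matrix and the two orthonormal bases below are illustrative choices, not objects from the paper.

```python
import math

def hs_norm(T, basis):
    """Hilbert-Schmidt norm of a 2x2 operator T: the square root of the sum of
    ||T e_i||^2 over an orthonormal basis; the value is basis independent."""
    total = 0.0
    for e in basis:
        Te = [T[0][0] * e[0] + T[0][1] * e[1],
              T[1][0] * e[0] + T[1][1] * e[1]]
        total += Te[0] ** 2 + Te[1] ** 2
    return math.sqrt(total)
```

In this finite-dimensional case the Hilbert–Schmidt norm coincides with the Frobenius norm, and evaluating it in the standard basis or in any rotated orthonormal basis gives the same value.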
Lemma 2.2.
As a consequence of Lemma LABEL:lm;SG, for any and any we have \linenomath
(5) 
Proof. Thanks to Lemma LABEL:lm;SG and the definition of the Hilbert–Schmidt norm, we know that, for an orthonormal basis of , \linenomath
which proves the claim.
To guarantee existence and uniqueness of strong solutions to (LABEL:mod;CMAX), we make the following assumptions:
Assumption 2.1 (Coefficients).
Assume that the coefficients of Maxwell’s operator (LABEL:Mop) satisfy
with some positive constant .
Assumption 2.2 (Initial value).
The initial value of the stochastic Maxwell’s equation (LABEL:mod;CMAX) is a valued stochastic process with for any .
Assumption 2.3 (Nonlinearity).
We assume that the operator is continuous and that there exist constants such that \linenomath
Assumption 2.4 (Noise).
We assume that the operator satisfies
(6) 
where may depend on the operator . We recall that and denote the spaces of Hilbert–Schmidt operators from to , resp. to .
We now present two examples of an operator verifying Assumption LABEL:ap;3 (we only prove one of the inequalities in (LABEL:con;G); the others follow in a similar way).
For the first example (inspired by [21]), let , and consider for two real numbers and . The stochastic Maxwell’s equation (LABEL:mod;CMAX) then becomes an SPDE driven by additive noise. In this case, one chooses the orthonormal basis of to be , for , and . Assuming for example that , where , one can get that for all and thus the last inequality in (LABEL:con;G) holds.
For the second example (inspired by [8]), consider for , the domain and . Taking the same orthonormal basis as above, and assuming in addition that with , one gets for instance
(7) 
Using the definition of the graph norm one gets \linenomath
Denoting and using the definition of the operator , one obtains \linenomath
We now illustrate how to estimate the term as an example. Using the definition of the curl operator, one gets \linenomath
Combining the above estimates, we obtain \linenomath
Using the Sobolev embedding for any , one finally obtains (LABEL:con1;G) and the linear growth property of .
The above assumptions suffice to establish wellposedness and regularity results of solutions to (LABEL:mod;CMAX). This uses similar arguments as, for instance, [28, Theorem 9] (for a more general drift coefficient in (LABEL:mod;CMAX)) and [8, Corollary 3.1].
Lemma 2.3.
Let . Under the Assumptions LABEL:ap;1LABEL:ap;3, the stochastic Maxwell’s equation (LABEL:mod;CMAX) is strongly well posed and its solution satisfies \linenomath
for any . Here, the constant depends on , , , bounds for and , and .
Subsequently we present a lemma on the Hölder regularity in time of solutions to (LABEL:mod;CMAX). This result is important in analysing the approximation error of the proposed time integrator in Section LABEL:sectEXP.
Lemma 2.4.
Let . Under the Assumptions LABEL:ap;1LABEL:ap;3, the solution of the stochastic Maxwell’s equation (LABEL:mod;CMAX) satisfies \linenomath
for any , and . Here, the constant depends on , , , bounds for and , and .
The proof is very similar to that of [8, Proposition 3.2]; we omit it for ease of presentation.
Based on the above regularity results for solutions to the stochastic Maxwell’s equation (LABEL:mod;CMAX), the work [8] shows mean-square convergence order of the backward Euler–Maruyama scheme (in the temporal direction). In the next section, we design and analyse an explicit and efficient numerical scheme, the exponential integrator, which has the rate of convergence and preserves many inherent properties of the original problem (in the case of the stochastic Maxwell’s equations with additive noise).
3 Exponential integrators for stochastic Maxwell’s equations and error analysis
This section is concerned with a convergence analysis in the strong sense of an exponential integrator for the stochastic Maxwell’s equation (LABEL:mod;CMAX). We first show an a priori estimate of the numerical solution. Then the strong convergence rate is studied in two cases, first when equation (LABEL:mod;CMAX) is driven by additive noise and then for multiplicative noise.
Fix a time horizon and an integer . Define a stepsize such that . We then construct a uniform partition of the interval \linenomath
with for . Next, we consider the mild solution of the stochastic Maxwell’s equation (LABEL:mod;CMAX) on the small time interval (with ):
By approximating both integrals in the above mild solution at the left endpoint, one obtains the exponential integrator \linenomath
(8) 
where stands for the Wiener increments. One readily sees that (LABEL:sch;exp) is an explicit numerical approximation of the exact solution of the stochastic Maxwell’s equation (LABEL:mod;CMAX).
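To make the scheme concrete, here is a toy sketch of the exponential integrator for a finite-dimensional linear test equation with additive noise, where a hypothetical 2×2 skew-symmetric matrix plays the role of Maxwell’s operator and the scalar noise intensity `sigma` is an illustrative assumption. With the noise switched off, each step is an exact rotation, so the discrete energy is preserved exactly.

```python
import math
import random

def exp_hA(h, a):
    """Exact matrix exponential of h*A for the skew-symmetric A = [[0, -a], [a, 0]]."""
    c, s = math.cos(a * h), math.sin(a * h)
    return [[c, -s], [s, c]]

def exp_step(u, h, a, dW):
    """One exponential-integrator step u_{n+1} = e^{hA} (u_n + dW_n)
    for the toy linear SDE du = A u dt + dW with additive noise."""
    E = exp_hA(h, a)
    v = [u[0] + dW[0], u[1] + dW[1]]
    return [E[0][0] * v[0] + E[0][1] * v[1],
            E[1][0] * v[0] + E[1][1] * v[1]]

def integrate(u0, h, a, n_steps, sigma, rng):
    """March the scheme over n_steps uniform steps of size h."""
    u = list(u0)
    for _ in range(n_steps):
        dW = [sigma * math.sqrt(h) * rng.gauss(0.0, 1.0) for _ in range(2)]
        u = exp_step(u, h, a, dW)
    return u
```

In this additive-noise toy model, orthogonality of the exponential gives the exact trace formula E||u_n||^2 = ||u_0||^2 + n h Tr(Q) for the scheme (here Tr(Q) = 2 sigma^2), a finite-dimensional counterpart of the linear energy drift discussed for the linear stochastic Maxwell’s equations.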
In order to present a result on the strong error of the exponential integrator (LABEL:sch;exp), we first show an a priori estimate of the numerical solution.
Theorem 3.1.
Under the Assumptions LABEL:ap;1LABEL:ap;3, the numerical solution to the stochastic Maxwell’s equation given by the exponential integrator (LABEL:sch;exp) satisfies \linenomath
for all and .
Proof. The numerical approximation given by the exponential integrator can be rewritten as \linenomath
Taking norm and expectation leads to, for , \linenomath
For the first term, using the definition of the graph norm and property (LABEL:unigro), we obtain \linenomath
which leads to . Based on the linear growth property of and Hölder’s inequality, the second term is estimated as follows \linenomath
One then obtains \linenomath
The third term is equivalent to \linenomath
with being the integer part of . The Burkholder–Davis–Gundy inequality for stochastic integrals and our assumption on give \linenomath
Using Hölder’s inequality, the last term in the above inequality becomes \linenomath
Taking expectation, we then obtain \linenomath
Altogether, we get that \linenomath
A discrete Gronwall inequality concludes the proof.
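For completeness, one standard form of a discrete Gronwall inequality that suffices for this step reads as follows (this particular formulation is our own choice of statement, not quoted from a reference): if $a_n \ge 0$ satisfies
\[
a_{n+1} \le (1 + \lambda\tau)\, a_n + \mu, \qquad n \ge 0,
\]
with constants $\lambda, \tau, \mu > 0$, then
\[
a_n \le e^{\lambda n \tau}\, a_0 + \frac{\mu}{\lambda\tau}\bigl(e^{\lambda n \tau} - 1\bigr),
\]
which follows from iterating the recursion and bounding $(1+\lambda\tau)^n \le e^{\lambda n \tau}$.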
Using the above theorem, we arrive at
Corollary 3.1.
Under the same assumptions as in Theorem LABEL:tm1, for all , there exists a constant such that \linenomath
(9) 
Proof. The main idea to derive the estimate (LABEL:Nmm) is to properly estimate the stochastic integral \linenomath
Based on the unitarity of , the Burkholder–Davis–Gundy inequality, Hölder’s inequality, and our assumptions on , the right-hand side (RHS) of the above equality becomes \linenomath
where we use the result of Theorem LABEL:tm1 in the last step. The estimates of the other terms in the numerical solution are obtained in a similar way as in the previous result.
We are now in a position to show the error estimates of the exponential integrator for the stochastic Maxwell’s equation (LABEL:mod;CMAX) driven by additive noise.
Theorem 3.2.
Let Assumptions LABEL:ap;1LABEL:ap;3 hold. Assume in addition that and does not depend on . The strong error of the exponential integrator (LABEL:sch;exp) when applied to the stochastic Maxwell’s equation (LABEL:mod;CMAX) verifies, for all , \linenomath
where the positive constant depends on bounds for (and its derivatives) and , as well as on , and .
Proof. Let us denote , for . We then have \linenomath
(10) 
We now rewrite the term as \linenomath
We first estimate the term . Using a Taylor expansion, we obtain \linenomath
where , for some , depends on and . Combining this with the mild formulation of the exact solution on the interval , \linenomath
we rewrite the term as
where we define \linenomath