
# Exponential Integrators for Stochastic Maxwell’s Equations Driven by Itô Noise

Submitted to the editors in DATE. Funding: This work was supported by the National Natural Science Foundation of China (No. 91530118, No. 91130003, No. 11021101, No. 91630312 and No. 11290142), the Swedish Foundation for International Cooperation in Research and Higher Education (STINT project nr. CH2016-6729), as well as the Swedish Research Council (VR) (projects nr. 2013-4562 and 2018-04443). The computations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at HPC2N, Umeå University.

David Cohen, Department of Mathematics and Mathematical Statistics, Umeå University, 90187 Umeå, Sweden (david.cohen@umu.se) · Jianbo Cui, 1. LSEC, ICMSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, 100190, China; 2. School of Mathematical Science, University of Chinese Academy of Sciences, Beijing, 100049, China (jianbocui@lsec.cc.ac.cn, hjl@lsec.cc.ac.cn, liyingsun@lsec.cc.ac.cn (corresponding author)) · Jialin Hong, same affiliations as Jianbo Cui · Liying Sun, same affiliations as Jianbo Cui
###### Abstract

This article presents explicit exponential integrators for stochastic Maxwell’s equations driven by both multiplicative and additive noises. By utilizing the regularity estimate of the mild solution, we first prove that the strong order of the numerical approximation is 1/2 for general multiplicative noise. Combining a proper decomposition with the stochastic Fubini theorem, the strong order of the proposed scheme is shown to be 1 for additive noise. Moreover, for linear stochastic Maxwell’s equations with additive noise, the proposed time integrator is shown to preserve exactly the symplectic structure, the evolution of the energy, as well as the evolution of the divergence in the sense of expectation. Several numerical experiments are presented in order to verify our theoretical findings.

Key words. stochastic Maxwell’s equation, exponential integrator, strong convergence, trace formula, average energy, average divergence.

AMS subject classifications. 60H35, 60H15, 35Q61.

## 1 Introduction

In the context of electromagnetism, a common way to model precise microscopic origins of randomness (such as thermal motion of electrically charged micro-particles) is by means of stochastic Maxwell’s equations [35]. Further applications of stochastic Maxwell’s equations are: In [32], a stochastic model of Maxwell’s field equations is shown to be a simple modification of a random walk model due to Kac, which provides a basis for the telegraph equations. The work [27] studies the propagation of ultra-short solitons in a cubic nonlinear medium modeled by nonlinear Maxwell’s equations with stochastic variations of media. To simulate a coplanar waveguide with uncertain material parameters, time-harmonic Maxwell’s equations are considered in [4]. For linear stochastic Maxwell’s equations driven by additive noise, the work [21] proves that the problem is a stochastic Hamiltonian partial differential equation whose phase flow preserves the multi-symplectic geometric structure. In addition, the averaged energy along the flow increases linearly with respect to time, and the flow preserves the divergence in the sense of expectation, see [10]. Let us finally mention that linear stochastic Maxwell’s equations are relevant in various physical applications, see e.g. [35, Chapter 3].

We now review the literature on the numerical discretisation of stochastic Maxwell’s equations. The work [41] performs a numerical analysis of the finite element method and the discontinuous Galerkin method for stochastic Maxwell’s equations driven by colored noise. A stochastic multi-symplectic method for problems with additive noise, based on a stochastic variational principle, is studied in [21]. In particular, it is shown that this implicit numerical scheme preserves a discrete stochastic multi-symplectic conservation law. The work [10] inspects geometric properties of stochastic Maxwell’s equations with additive noise, namely the behavior of the averaged energy and divergence, see below for further details. In particular, the authors of [10] investigate three novel stochastic multi-symplectic (implicit in time) methods preserving discrete versions of the averaged divergence. None of the proposed numerical schemes exactly preserves the behavior of the averaged energy. The work [22] proposes a stochastic multi-symplectic wavelet collocation method for the approximation of stochastic Maxwell’s equations with multiplicative noise (in the Stratonovich sense). For the same stochastic Maxwell’s equation as the one considered in this paper (see below for a precise definition), the recent reference [8] shows that the backward Euler–Maruyama method converges with mean-square convergence rate 1/2. Finally, the preprint [9] studies implicit Runge–Kutta schemes for stochastic Maxwell’s equations with additive noise; in particular, their mean-square convergence order is obtained.

In the present paper, we construct and analyse an exponential integrator for stochastic Maxwell’s equations which is explicit (thus computationally more efficient than the above-mentioned time integrators) and which enjoys excellent long-time behavior. Observe that exponential integrators are widely used for the efficient time integration of deterministic differential equations, see for instance [18, 7, 19, 12] and, more specifically, [37, 31, 24, 39, 33] and references therein for Maxwell-type equations. In recent years, exponential integrators have been analysed in the context of stochastic (partial) differential equations (S(P)DEs). Without being too exhaustive, we mention analysis and applications of such numerical schemes for the following problems: stochastic differential equations [36, 25, 26]; stochastic parabolic equations [23, 29, 5, 15, 3]; stochastic Schrödinger equations [1, 11, 16]; stochastic wave equations [13, 40, 14, 2, 34] and references therein.

The main contributions of the present paper are:

• a strong convergence analysis of an explicit exponential integrator for stochastic Maxwell’s equations. By making use of regularity estimates of the exact and numerical solutions, the strong convergence order is shown to be 1/2 for general multiplicative noise. Furthermore, by using a proper decomposition and the stochastic Fubini theorem, we prove that the strong convergence order of the proposed scheme can achieve 1 for additive noise.

• an analysis of long-time conservation properties of an explicit exponential integrator for linear stochastic Maxwell’s equations driven by additive noise. In particular, we show that the proposed explicit time integrator is symplectic and satisfies a trace formula for the energy, i.e. the linear drift of the averaged energy is preserved for all times. In addition, the numerical solution preserves the averaged divergence. This shows that the exponential integrator inherits the geometric structure and the dynamical behavior of the flow of the linear stochastic Maxwell’s equations. This is not the case for classical time integrators such as Euler–Maruyama type schemes.

• an efficient numerical implementation of two-dimensional models of stochastic Maxwell’s equations by explicit time integrators.
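For the linear additive-noise case referred to in the second item, the trace formula for the energy is a consequence of Itô’s formula and the skew-adjointness of the Maxwell operator $A$; in the notation introduced in Section 2 (and, as a simplifying assumption for this illustration, with vanishing drift nonlinearity $F$), it reads

$$\mathbb{E}\big[\|U(t)\|_V^2\big] = \mathbb{E}\big[\|U(0)\|_V^2\big] + t\,\|G\|_{\mathcal{L}_2(U_0,V)}^2,$$

so the averaged electromagnetic energy grows linearly in time; this linear drift is the quantity that the proposed integrator reproduces exactly.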

We would like to remark that the proofs of strong convergence for the exponential integrator use ideas similar to those present in various proofs of strong convergence from the literature. However, to the best of our knowledge, the present paper offers the first explicit time integrator for linear stochastic Maxwell’s equations that is of strong order 1, symplectic, exactly preserves the linear drift of the averaged energy, and preserves the averaged divergence for all times. A weak convergence analysis of the proposed scheme for stochastic Maxwell’s equations driven by multiplicative noise will be reported elsewhere.

An outline of the paper is as follows. Section 2 sets notations and introduces the stochastic Maxwell’s equation. This section also presents assumptions that guarantee existence and uniqueness of the exact solution to the problem and shows its Hölder continuity. The exponential integrator for the stochastic Maxwell’s equation is introduced in Section 3, where we also prove its strong order of convergence for additive and multiplicative noise. In Section 4, we show that the proposed scheme has several interesting geometric properties: it preserves the evolution law of the averaged energy, the evolution law of the divergence, and the symplectic structure of the original linear stochastic Maxwell’s equations with additive noise. We conclude the paper by presenting numerical experiments supporting our theoretical results in Section 5.

## 2 Well-posedness of stochastic Maxwell’s equations

We consider the stochastic Maxwell’s equation driven by multiplicative Itô noise

$$\mathrm{d}U = AU\,\mathrm{d}t + F(U)\,\mathrm{d}t + G(U)\,\mathrm{d}W, \quad t\in(0,+\infty), \qquad U(0) = (E_0^\top, H_0^\top)^\top, \tag{1}$$

supplemented with the boundary condition of a perfect conductor as in [21]. Here, $U=(E^\top,H^\top)^\top$ is an $\mathbb{R}^6$-valued function defined on a bounded and simply connected domain $\mathcal{O}\subset\mathbb{R}^3$ with smooth boundary $\partial\mathcal{O}$. The unit outward normal vector to $\partial\mathcal{O}$ is denoted by $n$. Moreover, $\dot W$ stands for the formal time derivative of a $Q$-Wiener process $W$ on a stochastic basis $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\geq 0},\mathbb{P})$. The $Q$-Wiener process can be written as $W(t)=\sum_{k\in\mathbb{N}^+} Q^{\frac12}e_k\,\beta_k(t)$, where $(\beta_k)_{k\in\mathbb{N}^+}$ is a sequence of mutually independent and identically distributed real-valued standard Brownian motions; $(e_k)_{k\in\mathbb{N}^+}$ is an orthonormal basis of $U:=L^2(\mathcal{O})$ consisting of eigenfunctions of a symmetric, nonnegative and finite-trace linear operator $Q$, i.e., $Qe_k=\eta_k e_k$ with $\eta_k\geq 0$ for $k\in\mathbb{N}^+$. Assumptions on $F$ and $G$ are provided below.
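The series representation of the Q-Wiener process above lends itself to sampling after truncation. The following is a minimal sketch; the function name, and the grid-based representation of the eigenfunctions, are illustrative assumptions rather than the paper’s concrete setup:

```python
import numpy as np

def sample_q_wiener_increments(eta, e_basis_vals, dt, n_steps, rng):
    """Increments of a truncated Q-Wiener process
    W(t) = sum_k sqrt(eta_k) e_k beta_k(t):
    each increment is sum_k sqrt(eta_k) e_k (beta_k(t+dt) - beta_k(t)).

    eta:          (K,) eigenvalues of Q (finite truncation of a trace-class operator)
    e_basis_vals: (K, M) eigenfunctions e_k evaluated at M spatial grid points
    Returns an (n_steps, M) array of noise increments on the grid."""
    n_modes = eta.shape[0]
    # i.i.d. Brownian increments beta_k(t+dt) - beta_k(t) ~ N(0, dt)
    dbeta = rng.standard_normal((n_steps, n_modes)) * np.sqrt(dt)
    return (dbeta * np.sqrt(eta)) @ e_basis_vals
```

The finite trace of $Q$ (i.e. $\sum_k \eta_k < \infty$) is what makes such a truncation a controlled approximation of the full noise.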

The Maxwell operator $A$ is defined by

$$A\begin{pmatrix}E\\H\end{pmatrix} := \begin{pmatrix}0 & \epsilon^{-1}\nabla\times\\ -\mu^{-1}\nabla\times & 0\end{pmatrix}\begin{pmatrix}E\\H\end{pmatrix} = \begin{pmatrix}\epsilon^{-1}\nabla\times H\\ -\mu^{-1}\nabla\times E\end{pmatrix}. \tag{2}$$

It has the domain $D(A) = H_0(\mathrm{curl},\mathcal{O}) \times H(\mathrm{curl},\mathcal{O})$, where

$$H(\mathrm{curl},\mathcal{O}) := \{U\in (L^2(\mathcal{O}))^3 \colon \nabla\times U \in (L^2(\mathcal{O}))^3\}$$

is termed the $\mathrm{curl}$-space and

$$H_0(\mathrm{curl},\mathcal{O}) := \{U\in H(\mathrm{curl},\mathcal{O}) \colon n\times U|_{\partial\mathcal{O}} = 0\}$$

is the subspace of $H(\mathrm{curl},\mathcal{O})$ with zero tangential trace. In addition, $\epsilon$ and $\mu$ are bounded and uniformly positive definite functions:

$$\epsilon,\mu\in L^\infty(\mathcal{O}), \qquad \epsilon,\mu \geq \kappa > 0$$

with $\kappa$ being a positive constant. These conditions on $\epsilon$ and $\mu$ ensure that the Hilbert space $V := (L^2(\mathcal{O}))^3 \times (L^2(\mathcal{O}))^3$ is equipped with the weighted scalar product

$$\Big\langle \begin{pmatrix}E_1\\H_1\end{pmatrix}, \begin{pmatrix}E_2\\H_2\end{pmatrix}\Big\rangle_V = \int_{\mathcal{O}} \big(\mu\langle H_1,H_2\rangle + \epsilon\langle E_1,E_2\rangle\big)\,\mathrm{d}x,$$

where $\langle\cdot,\cdot\rangle$ stands for the standard Euclidean inner product on $\mathbb{R}^3$. This weighted scalar product is equivalent to the standard inner product on $(L^2(\mathcal{O}))^6$. Moreover, the corresponding norm, which stands for the electromagnetic energy of the physical system, induced by this inner product reads

$$\Big\|\begin{pmatrix}E\\H\end{pmatrix}\Big\|_V^2 = \int_{\mathcal{O}} \big(\mu\|H\|^2 + \epsilon\|E\|^2\big)\,\mathrm{d}x$$

with $\|\cdot\|$ being the Euclidean norm. Based on the norm $\|\cdot\|_V$, the associated graph norm of $A$ is defined by

$$\|V\|_{D(A)}^2 := \|V\|_V^2 + \|AV\|_V^2.$$

It is well known that the Maxwell operator $A$ is closed and that $D(A)$ equipped with the graph norm is a Banach space, see e.g. [30]. Moreover, $A$ is skew-adjoint; in particular, for all $U,V\in D(A)$,

$$\langle AU, V\rangle_V = -\langle U, AV\rangle_V.$$

In addition, the operator $A$ generates a unitary $C_0$-group $S(t)=\mathrm{e}^{tA}$ via Stone’s theorem, see for example [17]. According to the definition of unitary groups, one has

$$\|S(t)V\|_V = \|V\|_V \quad \text{for all } V\in V, \tag{3}$$

which means that the electromagnetic energy is preserved for Maxwell’s operator, see [20]. Besides, the unitary group satisfies the following properties, which will be made use of in the next section.

###### Lemma 2.1 (Theorem 3 with q=0 in [6]).

For the semigroup $S(t)$ on $V$, it holds that

$$\|S(t) - \mathrm{Id}\|_{\mathcal{L}(D(A);V)} \leq Ct, \tag{4}$$

where the constant $C$ does not depend on $t$. Here, $\mathcal{L}(D(A);V)$ denotes the space of bounded linear operators from $D(A)$ to $V$.
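The bound (4) can be observed numerically in a finite-dimensional stand-in: for a skew-symmetric matrix $A$ (mimicking the skew-adjoint Maxwell operator), the operator norm of $S(t)-\mathrm{Id}$ from the graph norm to the Euclidean norm stays below $t$. This sketch is only an illustration of the lemma, not part of the paper’s argument; the specific matrix is an arbitrary choice:

```python
import numpy as np
from scipy.linalg import expm, sqrtm

# Skew-symmetric stand-in for the Maxwell operator; expm(t*A) is then
# orthogonal, mirroring the unitary group S(t) = e^{tA}.
A = np.array([[0.0, 2.0], [-2.0, 0.0]])
n = A.shape[0]

# Operator norm of S(t) - Id from the graph norm ||v||^2 + ||Av||^2 to the
# Euclidean norm: largest singular value of (S(t) - Id)(Id + A^T A)^{-1/2}.
graph_half_inv = np.linalg.inv(sqrtm(np.eye(n) + A.T @ A).real)

def op_norm_S_minus_Id(t):
    return np.linalg.norm((expm(t * A) - np.eye(n)) @ graph_half_inv, 2)

# For skew-symmetric A one even gets the explicit constant C = 1, since
# ||(S(t)-Id)v|| = ||int_0^t S(s) A v ds|| <= t ||Av|| <= t ||v||_{D(A)}.
ratios = [op_norm_S_minus_Id(t) / t for t in (0.5, 0.1, 0.01)]
assert max(ratios) <= 1.0
```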

Observe that, throughout the paper, $C$ stands for a constant that may vary from line to line.

For two real-valued separable Hilbert spaces $H_1$ and $H_2$, we denote the set of Hilbert–Schmidt operators from $H_1$ to $H_2$ by $\mathcal{L}_2(H_1,H_2)$. It will be equipped with the norm

$$\|\Gamma\|_{\mathcal{L}_2(H_1,H_2)}^2 := \sum_{i=1}^{\infty}\|\Gamma\phi_i\|_{H_2}^2,$$

where $(\phi_i)_{i\in\mathbb{N}^+}$ is any orthonormal basis of $H_1$. Furthermore, let $Q^{\frac12}$ be the unique positive square root of the linear operator $Q$ (defining the noise $W$). We also introduce the separable Hilbert space $U_0 := Q^{\frac12}(U)$ endowed with the inner product $\langle u,v\rangle_{U_0} = \langle Q^{-\frac12}u, Q^{-\frac12}v\rangle_U$ for $u,v\in U_0$, where we recall that $U = L^2(\mathcal{O})$.
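In finite dimensions the Hilbert–Schmidt norm reduces to the Frobenius norm and, as the definition requires, its value does not depend on the chosen orthonormal basis. A quick sanity check (the matrix is an arbitrary example):

```python
import numpy as np

G = np.array([[1.0, 2.0], [3.0, 4.0]])

# Hilbert-Schmidt norm computed from the standard orthonormal basis:
# coincides with the Frobenius norm.
hs_standard = np.sqrt(sum(np.linalg.norm(G @ e)**2 for e in np.eye(2)))
assert np.isclose(hs_standard, np.linalg.norm(G, 'fro'))

# Basis independence: a random orthonormal basis gives the same value.
Qb, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((2, 2)))
hs_rotated = np.sqrt(sum(np.linalg.norm(G @ q)**2 for q in Qb.T))
assert np.isclose(hs_standard, hs_rotated)
```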

###### Lemma 2.2.

As a consequence of Lemma 2.1, for any $t\geq 0$ and any $\Phi\in\mathcal{L}_2(U_0,D(A))$ we have

$$\|(S(t)-\mathrm{Id})\Phi\|_{\mathcal{L}_2(U_0,V)} \leq Ct\,\|\Phi\|_{\mathcal{L}_2(U_0,D(A))}. \tag{5}$$

Proof. Thanks to Lemma 2.1 and the definition of the Hilbert–Schmidt norm, we know that, for an orthonormal basis $(Q^{\frac12}e_k)_{k\in\mathbb{N}^+}$ of $U_0$,

$$\|(S(t)-\mathrm{Id})\Phi\|_{\mathcal{L}_2(U_0,V)}^2 = \sum_{k\in\mathbb{N}^+}\big\|(S(t)-\mathrm{Id})\Phi Q^{\frac12}e_k\big\|_V^2 \leq Ct^2\sum_{k\in\mathbb{N}^+}\big\|\Phi Q^{\frac12}e_k\big\|_{D(A)}^2 \leq Ct^2\,\|\Phi\|_{\mathcal{L}_2(U_0,D(A))}^2,$$

which proves the claim.

To guarantee existence and uniqueness of strong solutions to (1), we make the following assumptions:

###### Assumption 2.1 (Coefficients).

Assume that the coefficients of Maxwell’s operator (2) satisfy

$$\epsilon,\mu\in L^\infty(\mathcal{O}), \qquad \epsilon,\mu\geq\kappa>0$$

with some positive constant $\kappa$.

###### Assumption 2.2 (Initial value).

The initial value $U(0)$ of the stochastic Maxwell’s equation (1) is a $D(A)$-valued random variable with $\mathbb{E}\big[\|U(0)\|_{D(A)}^p\big]<\infty$ for any $p\geq 2$.

###### Assumption 2.3 (Nonlinearity).

We assume that the operator $F$ is continuous and that there exist constants $C_F, C_F^1>0$ such that

$$\|F(V_1)-F(V_2)\|_{D(A)} \leq C_F^1\,\|V_1-V_2\|_{D(A)}, \quad V_1,V_2\in D(A),$$
$$\|F(V)\|_V \leq C_F\,(1+\|V\|_V), \quad V\in V,$$
$$\|F(V)\|_{D(A)} \leq C_F^1\,(1+\|V\|_{D(A)}), \quad V\in D(A).$$

###### Assumption 2.4 (Noise).

We assume that the operator $G$ satisfies

$$\|G(V_1)-G(V_2)\|_{\mathcal{L}_2(U_0,V)} \leq C_G\,\|V_1-V_2\|_V, \quad V_1,V_2\in V,$$
$$\|G(V_1)-G(V_2)\|_{\mathcal{L}_2(U_0,D(A))} \leq C_G^1\,\|V_1-V_2\|_{D(A)}, \quad V_1,V_2\in D(A),$$
$$\|G(V)\|_{\mathcal{L}_2(U_0,V)} \leq C_G\,(1+\|V\|_V), \quad V\in V,$$
$$\|G(V)\|_{\mathcal{L}_2(U_0,D(A))} \leq C_G^1\,(1+\|V\|_{D(A)}), \quad V\in D(A), \tag{6}$$

where the constants $C_G, C_G^1$ may depend on the operator $Q$. We recall that $\mathcal{L}_2(U_0,V)$ and $\mathcal{L}_2(U_0,D(A))$ denote the spaces of Hilbert–Schmidt operators from $U_0$ to $V$, resp. from $U_0$ to $D(A)$.

We now present two examples of an operator $G$ verifying Assumption 2.4 (we only prove one of the inequalities in (6), the others follow in a similar way).

For the first example (inspired by [21]), one considers a constant operator $G$ built from two real numbers, so that the stochastic Maxwell’s equation (1) becomes an SPDE driven by additive noise. In this case, one chooses a trigonometric orthonormal basis of $U$ and, assuming sufficient decay of the eigenvalues of $Q$, one gets that the last inequality in (6) holds.

For the second example (inspired by [8]), consider a multiplicative noise operator on the domain $\mathcal{O}$. Taking the same orthonormal basis as above, and assuming in addition that $Q^{\frac12}\in\mathcal{L}_2(U,H^{1+\gamma})$ for an appropriate $\gamma>0$, one gets for instance

$$\|G(V)\|_{\mathcal{L}_2(U_0,D(A))} \leq C\,\|Q^{\frac12}\|_{\mathcal{L}_2(U,H^{1+\gamma})}\,(1+\|V\|_{D(A)}). \tag{7}$$

Using the definition of the graph norm, one gets

$$\|G(V)\|_{\mathcal{L}_2(U_0,D(A))}^2 = \sum_{k\in\mathbb{N}^+}\big\|V Q^{\frac12}e_k\big\|_V^2 + \sum_{k\in\mathbb{N}^+}\big\|A\big(V Q^{\frac12}e_k\big)\big\|_V^2.$$

Denoting $V=(E_V^\top, H_V^\top)^\top$ and using the definition of the operator $A$, one obtains

$$\begin{aligned}
\|G(V)\|_{\mathcal{L}_2(U_0,D(A))}^2 &= \sum_{k\in\mathbb{N}^+}\sum_{i=1,2,3}\big\|E_V^i\,Q^{\frac12}e_k\big\|_U^2 + \sum_{k\in\mathbb{N}^+}\sum_{i=1,2,3}\big\|H_V^i\,Q^{\frac12}e_k\big\|_U^2\\
&\quad + \sum_{k\in\mathbb{N}^+}\Big(\big\|\nabla\times\big(E_V\,Q^{\frac12}e_k\big)\big\|_{U^3}^2 + \big\|\nabla\times\big(H_V\,Q^{\frac12}e_k\big)\big\|_{U^3}^2\Big)\\
&\leq C\sum_{k\in\mathbb{N}^+}\big\|Q^{\frac12}e_k\big\|_{L^\infty(\mathcal{O})}^2\,\|V\|_V^2 + \sum_{k\in\mathbb{N}^+}\Big(\big\|\nabla\times\big(E_V\,Q^{\frac12}e_k\big)\big\|_{U^3}^2 + \big\|\nabla\times\big(H_V\,Q^{\frac12}e_k\big)\big\|_{U^3}^2\Big).
\end{aligned}$$

We now illustrate how to estimate the term $\|\nabla\times(E_V\,Q^{\frac12}e_k)\|_{U^3}^2$ as an example. Using the definition of the curl operator and the product rule, one gets

$$\begin{aligned}
\big\|\nabla\times\big(E_V\,Q^{\frac12}e_k\big)\big\|_{U^3}^2 &= \Big\|\tfrac{\partial}{\partial x_2}\big(E_V^3\,Q^{\frac12}e_k\big) - \tfrac{\partial}{\partial x_3}\big(E_V^2\,Q^{\frac12}e_k\big)\Big\|_U^2\\
&\quad + \Big\|\tfrac{\partial}{\partial x_1}\big(E_V^3\,Q^{\frac12}e_k\big) - \tfrac{\partial}{\partial x_3}\big(E_V^1\,Q^{\frac12}e_k\big)\Big\|_U^2\\
&\quad + \Big\|\tfrac{\partial}{\partial x_1}\big(E_V^2\,Q^{\frac12}e_k\big) - \tfrac{\partial}{\partial x_2}\big(E_V^1\,Q^{\frac12}e_k\big)\Big\|_U^2\\
&\leq C\,\big\|Q^{\frac12}e_k\big\|_{L^\infty(\mathcal{O})}^2\,\|\nabla\times E_V\|_{U^3}^2 + C\,\big\|\nabla Q^{\frac12}e_k\big\|_{L^\infty(\mathcal{O})}^2\,\|E_V\|_{U^3}^2.
\end{aligned}$$

Combining the above estimates, we obtain

$$\|G(V)\|_{\mathcal{L}_2(U_0,D(A))}^2 \leq C\sum_{k\in\mathbb{N}^+}\big\|Q^{\frac12}e_k\big\|_{L^\infty(\mathcal{O})}^2\,\big(\|V\|_V^2 + \|AV\|_V^2\big) + C\sum_{k\in\mathbb{N}^+}\big\|\nabla Q^{\frac12}e_k\big\|_{L^\infty(\mathcal{O})}^2\,\|V\|_V^2.$$

Using the Sobolev embedding $H^{1+\gamma}(\mathcal{O})\hookrightarrow W^{1,\infty}(\mathcal{O})$, valid for $\gamma$ large enough, one finally obtains (7) and the linear growth property of $G$.

The above assumptions suffice to establish well-posedness and regularity results for solutions to (1). This uses similar arguments as, for instance, [28, Theorem 9] (for a more general drift coefficient in (1)) and [8, Corollary 3.1].

###### Lemma 2.3.

Let $T>0$. Under Assumptions 2.1–2.4, the stochastic Maxwell’s equation (1) is strongly well posed and its solution satisfies

$$\mathbb{E}\Big[\sup_{0\leq t\leq T}\|U(t)\|_{D(A)}^p\Big] \leq C$$

for any $p\geq 2$. Here, the constant $C$ depends on $T$, $p$, $Q$, bounds for $\epsilon$ and $\mu$, and on $U(0)$.

Subsequently, we present a lemma on the Hölder regularity in time of solutions to (1). This result is important in analysing the approximation error of the proposed time integrator in Section 3.

###### Lemma 2.4.

Let $T>0$. Under Assumptions 2.1–2.4, the solution of the stochastic Maxwell’s equation (1) satisfies

$$\mathbb{E}\big[\|U(t)-U(s)\|_V^{2p}\big] \leq C\,|t-s|^p$$

for any $0\leq s\leq t\leq T$ and $p\geq 1$. Here, the constant $C$ depends on $T$, $p$, $Q$, bounds for $\epsilon$ and $\mu$, and on $U(0)$.

The proof is very similar to that of [8, Proposition 3.2]; we omit it for ease of presentation.

Based on the above regularity results for solutions to the stochastic Maxwell’s equation (1), the work [8] shows a mean-square convergence order of 1/2 for the backward Euler–Maruyama scheme (in the temporal direction). In the next section, we design and analyse an explicit and effective numerical scheme, the exponential integrator, which attains the rates of convergence stated in the introduction and preserves many inherent properties of the original problem (in the case of stochastic Maxwell’s equations with additive noise).

## 3 Exponential integrators for stochastic Maxwell’s equations and error analysis

This section is concerned with a convergence analysis in the strong sense of an exponential integrator for the stochastic Maxwell’s equation (1). We first show an a priori estimate of the numerical solution. Then the strong convergence rate is studied in two cases, first when equation (1) is driven by additive noise and then for multiplicative noise.

Fix a time horizon $T>0$ and an integer $N\in\mathbb{N}^+$. Define the stepsize $\Delta t$ such that $T=N\Delta t$. We then construct a uniform partition of the interval $[0,T]$

$$0 = t_0 < t_1 < \dots < t_N = T$$

with $t_k = k\Delta t$ for $k=0,1,\dots,N$. Next, we consider the mild solution of the stochastic Maxwell’s equation (1) on the small time interval $[t_k,t_{k+1}]$ (with $k=0,1,\dots,N-1$):

$$U(t_{k+1}) = S(\Delta t)\,U(t_k) + \int_{t_k}^{t_{k+1}} S(t_{k+1}-s)\,F(U(s))\,\mathrm{d}s + \int_{t_k}^{t_{k+1}} S(t_{k+1}-s)\,G(U(s))\,\mathrm{d}W(s).$$

By approximating both integrals in the above mild solution at the left end point, one obtains the exponential integrator

$$U_{k+1} = S(\Delta t)\,U_k + S(\Delta t)\,F(U_k)\,\Delta t + S(\Delta t)\,G(U_k)\,\Delta W_k, \tag{8}$$

where $\Delta W_k := W(t_{k+1}) - W(t_k)$ stands for the Wiener increments. One readily sees that (8) is an explicit numerical approximation of the exact solution of the stochastic Maxwell’s equation (1).
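In a finite-dimensional setting (for instance after a spatial discretisation), scheme (8) can be sketched as follows. The skew-symmetric matrix standing in for the Maxwell operator, the functions `F` and `G`, and the truncation of the noise are all illustrative assumptions, not the paper’s concrete discretisation:

```python
import numpy as np
from scipy.linalg import expm

def exponential_integrator(A, F, G, U0, T, N, rng):
    """One sample path of scheme (8):
    U_{k+1} = S(dt) U_k + S(dt) F(U_k) dt + S(dt) G(U_k) dW_k,
    with S(dt) = expm(dt*A).

    A:  (d, d) matrix standing in for the Maxwell operator
    F:  drift, maps R^d -> R^d
    G:  diffusion, maps R^d to a (d, m) matrix acting on the
        m-dimensional (truncated) Wiener increment"""
    dt = T / N
    S = expm(dt * A)            # the group S(dt), computed once for all steps
    U = U0.copy()
    for _ in range(N):
        dW = rng.standard_normal(G(U).shape[1]) * np.sqrt(dt)
        U = S @ (U + F(U) * dt + G(U) @ dW)   # factored form of (8)
    return U
```

Note that the scheme is fully explicit: each step costs one matrix-vector product with the precomputed propagator, in contrast to the implicit schemes reviewed in the introduction. For vanishing drift and noise, it reproduces the group exactly, and for skew-symmetric `A` the Euclidean norm of the solution is then preserved, mirroring (3).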

In order to present a result on the strong error of the exponential integrator (8), we first show an a priori estimate of the numerical solution.

###### Theorem 3.1.

Under Assumptions 2.1–2.4, the numerical solution of the stochastic Maxwell’s equation (1) given by the exponential integrator (8) satisfies

$$\mathbb{E}\big[\|U_k\|_{D(A)}^{2p}\big] \leq C(U(0), Q, T, p, F, G)$$

for all $k=0,1,\dots,N$ and $p\geq 1$.

Proof. The numerical approximation given by the exponential integrator can be rewritten as

$$U_k = S(t_k)\,U(0) + \Delta t\sum_{j=0}^{k-1} S(t_k-t_j)\,F(U_j) + \sum_{j=0}^{k-1} S(t_k-t_j)\,G(U_j)\,\Delta W_j.$$

Taking the $D(A)$-norm and expectation leads to, for $p\geq 1$,

$$\begin{aligned}
\mathbb{E}\big[\|U_k\|_{D(A)}^{2p}\big] &\leq C\,\mathbb{E}\big[\|S(t_k)U(0)\|_{D(A)}^{2p}\big] + C\,\mathbb{E}\bigg[\Big\|\Delta t\sum_{j=0}^{k-1}S(t_k-t_j)F(U_j)\Big\|_{D(A)}^{2p}\bigg]\\
&\quad + C\,\mathbb{E}\bigg[\Big\|\sum_{j=0}^{k-1}S(t_k-t_j)G(U_j)\Delta W_j\Big\|_{D(A)}^{2p}\bigg].
\end{aligned}$$

For the first term, using the definition of the graph norm, property (3), and the fact that $S(t)$ and $A$ commute, we obtain

$$\|S(t_k)U(0)\|_{D(A)}^{2p} = \big(\|S(t_k)U(0)\|_V^2 + \|S(t_k)AU(0)\|_V^2\big)^{p} = \|U(0)\|_{D(A)}^{2p},$$

which leads to $\mathbb{E}\big[\|S(t_k)U(0)\|_{D(A)}^{2p}\big] \leq C$ by Assumption 2.2. Based on the linear growth property of $F$ and Hölder’s inequality, the second term is estimated as follows:

$$\Big\|\Delta t\sum_{j=0}^{k-1}S(t_k-t_j)F(U_j)\Big\|_{D(A)}^{2p} \leq C + C\,\Delta t^{2p}\Big(\sum_{j=0}^{k-1}\|U_j\|_{D(A)}\Big)^{2p} \leq C + C\,\Delta t^{2p} k^{2p-1}\sum_{j=0}^{k-1}\|U_j\|_{D(A)}^{2p}.$$

One then obtains

$$\mathbb{E}\bigg[\Big\|\Delta t\sum_{j=0}^{k-1}S(t_k-t_j)F(U_j)\Big\|_{D(A)}^{2p}\bigg] \leq C + C\,\Delta t\,\mathbb{E}\bigg[\sum_{j=0}^{k-1}\|U_j\|_{D(A)}^{2p}\bigg].$$

The third term is equivalent to

$$\mathbb{E}\bigg[\Big\|\sum_{j=0}^{k-1}S(t_k-t_j)G(U_j)\Delta W_j\Big\|_{D(A)}^{2p}\bigg] = \mathbb{E}\bigg[\Big\|\int_0^{t_k} S\big(t_k-\lfloor s/\Delta t\rfloor\Delta t\big)\,G\big(U_{\lfloor s/\Delta t\rfloor}\big)\,\mathrm{d}W(s)\Big\|_{D(A)}^{2p}\bigg]$$

with $\lfloor s/\Delta t\rfloor$ being the integer part of $s/\Delta t$. The Burkholder–Davis–Gundy inequality for stochastic integrals and our assumption on $G$ give

$$\begin{aligned}
\mathbb{E}\bigg[\Big\|\int_0^{t_k} S\big(t_k-\lfloor s/\Delta t\rfloor\Delta t\big)\,G\big(U_{\lfloor s/\Delta t\rfloor}\big)\,\mathrm{d}W(s)\Big\|_{D(A)}^{2p}\bigg] &\leq C\,\mathbb{E}\bigg[\Big(\int_0^{t_k}\big\|G\big(U_{\lfloor s/\Delta t\rfloor}\big)\big\|_{\mathcal{L}_2(U_0,D(A))}^2\,\mathrm{d}s\Big)^p\bigg]\\
&\leq C + C\,\mathbb{E}\bigg[\Big(\int_0^{t_k}\big\|U_{\lfloor s/\Delta t\rfloor}\big\|_{D(A)}^2\,\mathrm{d}s\Big)^p\bigg] = C + C\,\mathbb{E}\bigg[\Big(\Delta t\sum_{j=0}^{k-1}\|U_j\|_{D(A)}^2\Big)^p\bigg].
\end{aligned}$$

Using Hölder’s inequality, the last term in the above inequality becomes

$$\Big(\Delta t\sum_{j=0}^{k-1}\|U_j\|_{D(A)}^2\Big)^p \leq \Delta t^p\,k^{p-1}\sum_{j=0}^{k-1}\|U_j\|_{D(A)}^{2p}.$$

Taking expectation, we then obtain

$$\mathbb{E}\bigg[\Big\|\int_0^{t_k} S\big(t_k-\lfloor s/\Delta t\rfloor\Delta t\big)\,G\big(U_{\lfloor s/\Delta t\rfloor}\big)\,\mathrm{d}W(s)\Big\|_{D(A)}^{2p}\bigg] \leq C + C\,\Delta t\sum_{j=0}^{k-1}\mathbb{E}\big[\|U_j\|_{D(A)}^{2p}\big].$$

Altogether, we get that

$$\mathbb{E}\big[\|U_k\|_{D(A)}^{2p}\big] \leq C + C\,\Delta t\sum_{j=0}^{k-1}\mathbb{E}\big[\|U_j\|_{D(A)}^{2p}\big].$$

A discrete Gronwall inequality concludes the proof.
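For completeness, a standard form of the discrete Gronwall inequality used here reads: for nonnegative sequences $(a_k)$ and constants $\alpha,\beta\geq 0$,

$$a_k \leq \alpha + \beta\,\Delta t\sum_{j=0}^{k-1} a_j \;\;(0\leq k\leq N) \quad\Longrightarrow\quad a_k \leq \alpha\,\mathrm{e}^{\beta k\Delta t} \leq \alpha\,\mathrm{e}^{\beta T};$$

applying it with $a_k = \mathbb{E}[\|U_k\|_{D(A)}^{2p}]$ yields the claimed uniform bound.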
Using the above theorem, we arrive at

###### Corollary 3.1.

Under the same assumptions as in Theorem 3.1, for all $p\geq 1$, there exists a constant $C>0$ such that

$$\mathbb{E}\Big[\sup_{0\leq k\leq N}\|U_k\|_{D(A)}^{2p}\Big] \leq C. \tag{9}$$

Proof. The main idea to derive the estimate (9) is to properly estimate the stochastic integral

$$\mathbb{E}\bigg[\sup_{0\leq k\leq N}\Big\|\sum_{j=0}^{k-1}S(t_k-t_j)G(U_j)\Delta W_j\Big\|_{D(A)}^{2p}\bigg] = \mathbb{E}\bigg[\sup_{0\leq k\leq N}\Big\|\int_0^{t_k}S\big(t_k-\lfloor s/\Delta t\rfloor\Delta t\big)\,G\big(U_{\lfloor s/\Delta t\rfloor}\big)\,\mathrm{d}W(s)\Big\|_{D(A)}^{2p}\bigg].$$

Based on the unitarity of $S(\cdot)$, the Burkholder–Davis–Gundy inequality, Hölder’s inequality, and our assumptions on $G$, the right-hand side (RHS) of the above equality satisfies

$$\mathrm{RHS} \leq C + C\,\Delta t\sum_{j=0}^{N-1}\mathbb{E}\big[\|U_j\|_{D(A)}^{2p}\big] \leq C,$$

where we used the result of Theorem 3.1 in the last step. The estimates of the other terms in the numerical solution are obtained in a similar way as in the previous result.

We are now in a position to show the error estimates of the exponential integrator for the stochastic Maxwell’s equation (1) driven by additive noise.

###### Theorem 3.2.

Let Assumptions 2.1–2.4 hold. Assume in addition that $F$ is twice differentiable with bounded derivatives and that $G$ does not depend on $U$ (i.e., the noise is additive). The strong error of the exponential integrator (8) when applied to the stochastic Maxwell’s equation (1) verifies, for all $p\geq 1$,

$$\Big(\mathbb{E}\Big[\max_{k=0,\dots,N}\|U(t_k)-U_k\|_V^{2p}\Big]\Big)^{\frac{1}{2p}} \leq C\,\Delta t,$$

where the positive constant $C$ depends on bounds for $F$ (and its derivatives) and $G$, as well as on $T$, $p$, and $U(0)$.
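For a finite-dimensional toy problem with linear drift ($F=0$) and additive noise, the mean-square error of scheme (8) can be evaluated exactly through the Itô isometry, without Monte Carlo sampling, so the decay $\mathbb{E}\|U(T)-U_N\|^2 = O(\Delta t^2)$ behind this order-one rate can be observed directly. The skew-symmetric matrix and the constant diffusion below are illustrative stand-ins:

```python
import numpy as np
from scipy.linalg import expm

def ms_error_sq(A, G, T, N, quad=50):
    """Exact E||U(T) - U_N||^2 for dU = AU dt + G dW and scheme (8) with F = 0.
    The global error is a sum over steps of stochastic integrals
      S(T - t_{k+1}) \\int_{t_k}^{t_{k+1}} (S(t_{k+1}-s) - S(dt)) G dW(s);
    since S is unitary (A skew-symmetric), the Ito isometry gives per step
      \\int_0^{dt} ||(S(u) - S(dt)) G||_F^2 du   (with u = t_{k+1} - s),
    which is identical for every step; it is computed by midpoint quadrature."""
    dt = T / N
    Sdt = expm(dt * A)
    us = (np.arange(quad) + 0.5) * dt / quad
    per_step = sum(np.linalg.norm((expm(u * A) - Sdt) @ G, 'fro')**2
                   for u in us) * dt / quad
    return N * per_step

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric stand-in
G = np.eye(2)                             # constant (additive) diffusion
e_coarse = ms_error_sq(A, G, 1.0, 8)
e_fine = ms_error_sq(A, G, 1.0, 16)
# Halving dt divides the mean-square error by ~4: strong order 1.
assert 3.5 < e_coarse / e_fine < 4.5
```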

Proof. Let us denote $\epsilon_k := U(t_k) - U_k$ for $k=0,1,\dots,N$. We then have

$$\begin{aligned}
\epsilon_{k+1} &= \sum_{j=0}^{k}\int_{t_j}^{t_{j+1}}\big(S(t_{k+1}-s)F(U(s)) - S(t_{k+1}-t_j)F(U_j)\big)\,\mathrm{d}s\\
&\quad + \sum_{j=0}^{k}\int_{t_j}^{t_{j+1}}\big(S(t_{k+1}-s) - S(t_{k+1}-t_j)\big)G\,\mathrm{d}W(s)\\
&=: \mathrm{Err}_1^k + \mathrm{Err}_2^k. \qquad (10)
\end{aligned}$$

We now rewrite the term $\mathrm{Err}_1^k$ as

$$\begin{aligned}
\mathrm{Err}_1^k &= \sum_{j=0}^{k}\int_{t_j}^{t_{j+1}} S(t_{k+1}-s)\big(F(U(s)) - F(U(t_j))\big)\,\mathrm{d}s\\
&\quad + \sum_{j=0}^{k}\int_{t_j}^{t_{j+1}}\big(S(t_{k+1}-s) - S(t_{k+1}-t_j)\big)F(U(t_j))\,\mathrm{d}s\\
&\quad + \sum_{j=0}^{k}\int_{t_j}^{t_{j+1}} S(t_{k+1}-t_j)\big(F(U(t_j)) - F(U_j)\big)\,\mathrm{d}s\\
&=: \mathrm{I}^k + \mathrm{II}^k + \mathrm{III}^k.
\end{aligned}$$

We first estimate the term $\mathrm{I}^k$. Using a Taylor expansion, we obtain

$$F(U(s)) - F(U(t_j)) = \frac{\partial F}{\partial u}\big(U(t_j)\big)\big(U(s)-U(t_j)\big) + \frac12\,\frac{\partial^2 F}{\partial u^2}(\Theta)\big(U(s)-U(t_j),\,U(s)-U(t_j)\big),$$

where $\Theta = U(t_j) + \theta\big(U(s)-U(t_j)\big)$, for some $\theta\in[0,1]$, depends on $U(s)$ and $U(t_j)$. Combining this with the mild formulation of the exact solution on the interval $[t_j,s]$,

$$U(s) = S(s-t_j)\,U(t_j) + \int_{t_j}^{s} S(s-r)\,F(U(r))\,\mathrm{d}r + \int_{t_j}^{s} S(s-r)\,G\,\mathrm{d}W(r),$$

we rewrite the term $\mathrm{I}^k$ as

$$\mathrm{I}^k = A_1^k + A_2^k,$$

where we define

$$A_1^k = \sum_{j=0}^{k}\int_{t_j}^{t_{j+1}} S(t_{k+1}-s)\,\frac{\partial F}{\partial u}\big(U(t_j)\big)\big(S(s-t_j)-\mathrm{Id}\big)U(t_j)$$