Uniformly accurate numerical schemes for highly oscillatory Klein-Gordon and nonlinear Schrödinger equations

Philippe Chartier (INRIA-Rennes Bretagne Atlantique, IPSO Project), Nicolas Crouseilles (INRIA-Rennes Bretagne Atlantique, IPSO Project), Mohammed Lemou (CNRS and IRMAR, Université de Rennes 1, and INRIA-Rennes Bretagne Atlantique, IPSO Project), Florian Méhats (IRMAR, Université de Rennes 1, and INRIA-Rennes Bretagne Atlantique, IPSO Project)
Abstract

This work is devoted to the numerical simulation of nonlinear Schrödinger and Klein-Gordon equations. We present a general strategy to construct numerical schemes which are uniformly accurate with respect to the oscillation frequency. This is a stronger feature than the usual so-called “asymptotic preserving” property, the latter being also satisfied by our schemes in the highly oscillatory limit. Our strategy makes it possible to simulate the oscillatory problem without using any mesh or time-step refinement, and the orders of our schemes are preserved uniformly in all regimes. In other words, since our numerical method is not based on the derivation and simulation of asymptotic models, it works with the same order of accuracy in the regime where the solution does not oscillate rapidly, in the highly oscillatory limit regime, and in all intermediate regimes. In the same spirit as in [5], the method is based on two main ingredients. First, we embed our problem in a suitable “two-scale” reformulation with the introduction of an additional variable. Then a link is made with classical strategies based on Chapman-Enskog expansions in kinetic theory, despite the dispersive context of the targeted equations, which allows us to separate the fast time scale from the slow one. Uniformly accurate (UA) schemes are eventually derived from this new formulation, and their properties and performances are assessed both theoretically and numerically.

1 Introduction

This work is concerned with the numerical solution of highly-oscillatory differential equations in an infinite dimensional setting. Our two main applications here are the nonlinear Schrödinger equation and the nonlinear Klein-Gordon equation, although, prior to addressing them specifically, we envisage the more general situation of an abstract differential equation in a Hilbert space. To be a bit more specific, we shall consider equations of the form

$$\frac{d u^\varepsilon}{dt}(t) \;=\; F\!\left(\frac{t}{\varepsilon},\, u^\varepsilon(t)\right), \qquad u^\varepsilon(0) = u_0, \qquad\qquad (1.1)$$

where the vector field $F = F(\tau, u)$ is supposed to be periodic of period $2\pi$ with respect to the variable $\tau$ (we shall denote $\mathbb{T} = \mathbb{R}/(2\pi\mathbb{Z})$). The parameter $\varepsilon$ is supposed to have a positive real value in an interval of the form $(0, \varepsilon_0]$ for some $\varepsilon_0 > 0$. However, $\varepsilon$ is not necessarily vanishing and may as well be thought of as being close to $\varepsilon_0$: this means we can consider equation (1.1) simultaneously in different regimes, namely highly-oscillatory for small values of $\varepsilon$ or smooth for larger values of $\varepsilon$, and our aim is to design a versatile numerical method, capable of handling these two extreme regimes as well as all intermediate ones.

Generally speaking, standard numerical methods for equation (1.1) exhibit errors of the form $h^p/\varepsilon^q$ for some positive $p$ and $q$, where $h$ denotes the time step. The user of such methods is thus forced to restrict the step-size to values much smaller than $\varepsilon^{q/p}$ in order to obtain some accuracy. This becomes an unacceptable constraint for vanishing values of $\varepsilon$. Whenever equation (1.1) admits a limit model, Asymptotic-Preserving (AP) schemes [10] have been designed to overcome this restriction: the methods we construct obey the corresponding requirement, i.e. they degenerate into a consistent numerical scheme for the limit model whenever $\varepsilon$ tends to zero.
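To make this restriction concrete, here is a small self-contained illustration in Python (the toy right-hand side below is chosen for illustration only and is not taken from this paper): forward Euler applied to a scalar instance of (1.1) is reliable only once the step resolves the oscillation.

# Toy illustration: forward Euler on  du/dt = (1 + cos(t/eps)) * u,  u(0) = 1,
# whose exact solution is  u(t) = exp(t + eps*sin(t/eps)).
# When h does not resolve eps (here h is a multiple of 2*pi*eps, so the oscillation
# is always sampled at the same phase), the computed solution is O(1) wrong;
# once h is small compared to eps, the error becomes small again (first order in h,
# with an eps-dependent constant).
import numpy as np

def euler_error(eps, h, T=1.0):
    n = int(round(T / h))
    u, t = 1.0, 0.0
    for _ in range(n):
        u += h * (1.0 + np.cos(t / eps)) * u
        t += h
    return abs(u - np.exp(t + eps * np.sin(t / eps)))

eps = 1e-3
for h in (2 * np.pi * eps, eps / 10):
    print(f"h = {h:.1e}   error = {euler_error(eps, h):.1e}")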

As favorable as this property may seem, the error behavior of an AP scheme may deteriorate in “intermediate” regimes where $\varepsilon$ is neither very small nor large. The derivation of asymptotic models for (1.1) has been the subject of many works – see e.g. [3, 4, 15, 16] for time-averaging techniques and [1, 8, 14] for homogenization techniques – and a hierarchy of averaged vector fields and models at different orders of $\varepsilon$ can classically be written from asymptotic expansions of the solution. However, these asymptotic models are valid only when $\varepsilon$ is small enough, and any numerical method based on the direct approximation of such averaged vector fields introduces a truncation order and a corresponding incompressible error of the size of the first neglected power of $\varepsilon$.

In sharp contrast, our strategy in this paper consists in developing numerical schemes that solve (1.1) directly, for a wide range of $\varepsilon$-values, with uniform accuracy. The main output of our work is a family of numerical methods for highly oscillatory equations of type (1.1) which are uniformly accurate (UA) with respect to the parameter $\varepsilon$. These methods, as we shall demonstrate, are able to capture the various scales occurring in the system, while keeping the numerical parameters (for instance the time step) independent of the degree of stiffness ($\varepsilon$).

The main idea underlying our strategy (see also [5]) consists in separating the two time scales naturally present in (1.1), namely the slow time $t$ and the fast time $\tau = t/\varepsilon$. To this aim, we embed the solution $u^\varepsilon(t)$ into a two-variable function $U^\varepsilon(t,\tau)$ while imposing that $U^\varepsilon$ coincides with $u^\varepsilon$ on the diagonal $\tau = t/\varepsilon$, i.e. $u^\varepsilon(t) = U^\varepsilon(t, t/\varepsilon)$. Clearly, this implies that

$$\frac{d u^\varepsilon}{dt}(t) \;=\; \left(\partial_t U^\varepsilon + \frac{1}{\varepsilon}\,\partial_\tau U^\varepsilon\right)\!\left(t, \frac{t}{\varepsilon}\right).$$

By virtue of this “separation” principle, we then consider the equation satisfied by $U^\varepsilon$ over the whole $(t,\tau)$-domain, i.e.

$$\partial_t U^\varepsilon(t,\tau) + \frac{1}{\varepsilon}\,\partial_\tau U^\varepsilon(t,\tau) \;=\; F\big(\tau, U^\varepsilon(t,\tau)\big). \qquad\qquad (1.2)$$

An observation of paramount importance is that no initial condition for (1.2) is evident, since only the value $U^\varepsilon(0,0) = u_0$ is prescribed: consequently, as such, the transport equation (1.2) is not a Cauchy problem and may have many solutions. This apparent obstacle is in fact the way out of our numerical difficulties: given that, for any smooth initial condition $U^\varepsilon(0,\cdot)$ satisfying $U^\varepsilon(0,0) = u_0$, we can recover the solution of (1.1) from the values of $U^\varepsilon$ on the diagonal $\tau = t/\varepsilon$, the missing Cauchy condition should be regarded as an additional degree of freedom.

Now, it turns out that for some specific choice of the initial datum $U^\varepsilon(0,\cdot)$, it is possible to prove that $U^\varepsilon$ and its time-derivatives are bounded, uniformly w.r.t. $\varepsilon$, on compact time intervals. The point is that, in this two-scale formulation (1.2) of (1.1), the stiffness is confined to the sole term $\frac{1}{\varepsilon}\partial_\tau U^\varepsilon$. Interpreting this singularly perturbed term as a “collision” operator, we can derive the asymptotic behavior of $U^\varepsilon$ through a Chapman-Enskog expansion (see for instance [6]), from which averaged models (of first and second order) can easily be obtained. The initial datum is then chosen so as to satisfy this expansion at $t = 0$, a requirement compatible with the constraint $U^\varepsilon(0,0) = u_0$. Two numerical schemes are then proposed for this augmented problem, following the strategy in [5]. In the present work, these schemes are proved to be uniformly accurate with respect to $\varepsilon$: they have respectively orders one and two, uniformly in $\varepsilon$. These properties are assessed by numerical experiments on the nonlinear Klein-Gordon and Schrödinger equations.
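To fix ideas, the following minimal Python sketch (with a toy scalar vector field chosen for illustration; this is not the scheme constructed in this paper, and no uniform accuracy is claimed for it) shows how the augmented problem (1.2) can be discretized by a Fourier method in $\tau$ combined with an exponential step in $t$, and how the original solution is recovered on the diagonal $\tau = t/\varepsilon$.

# A minimal sketch of the two-scale idea on the toy problem
#     du/dt = (1 + cos(t/eps)) * u,   u(0) = 1,
# whose exact solution is u(t) = exp(t + eps*sin(t/eps)).  The augmented equation
#     dU/dt + (1/eps) dU/dtau = (1 + cos(tau)) * U
# is discretized by a Fourier method in tau and a Lawson (exponential Euler) step in t:
# the stiff transport (1/eps) d/dtau is integrated exactly in Fourier space, so the
# time step is not restricted by eps.  Uniform accuracy additionally requires the
# well-prepared initial data and the schemes constructed in Sections 3 and 4.
import numpy as np

def two_scale_solve(eps, T=1.0, dt=1e-2, N=32, u0=1.0):
    tau = 2 * np.pi * np.arange(N) / N            # grid on the torus R/(2*pi*Z)
    k = np.fft.fftfreq(N, d=1.0 / N)              # integer Fourier wave numbers
    F = lambda U: (1.0 + np.cos(tau)) * U         # toy vector field F(tau, U)
    U = u0 * (1.0 + eps * np.sin(tau))            # first-order prepared datum, U(0,0) = u0
    prop = np.exp(-1j * k * dt / eps)             # exact flow of (1/eps) d/dtau over dt
    nsteps = int(round(T / dt))
    for _ in range(nsteps):                       # exponential Euler (Lawson) step
        U = np.real(np.fft.ifft(prop * np.fft.fft(U + dt * F(U))))
    t = nsteps * dt
    Uhat = np.fft.fft(U) / N                      # Fourier interpolant of U(t, .)
    u_num = np.real(np.sum(Uhat * np.exp(1j * k * t / eps)))   # diagonal value U(t, t/eps)
    return u_num, np.exp(t + eps * np.sin(t / eps))

for eps in (1.0, 1e-2, 1e-4):
    u_num, u_ex = two_scale_solve(eps)
    print(f"eps = {eps:8.0e}   |error| = {abs(u_num - u_ex):.2e}")

The schemes analyzed in Sections 3 and 4 are built on the same two-scale formulation, but with the carefully prepared initial data of Subsection 2.3 and with time integrators designed so as to reach first- and second-order accuracy uniformly in $\varepsilon$.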


This paper is organized as follows. In Subsection 2.1, we present the two-scale formulation in a general framework, and we perform, in Subsection 2.3, the Chapman-Enskog expansion of its solution. The question of the choice of the initial datum for the augmented equation (1.2) is also addressed there. In Section 3, a first-order numerical scheme is introduced and analyzed, while a second-order one is similarly studied in Section 4. Finally, Section 5 is devoted to a series of numerical tests which confirm the theoretical properties of our schemes when applied to the Schrödinger and Klein-Gordon equations and demonstrate the relevance of our strategy.

2 Two-scale formulation of the oscillatory equation

In this section, we formulate and analyze the equation obtained by decoupling the slow variable $t$ and the fast one $\tau = t/\varepsilon$.

2.1 Setting of the problem

Given $\varepsilon > 0$, we consider the following highly-oscillatory evolution problem

$$\partial_t u^\varepsilon(t) \;=\; F\!\left(\frac{t}{\varepsilon},\, u^\varepsilon(t)\right), \qquad u^\varepsilon(0) = u_0, \qquad\qquad (2.1)$$

where the unknown $t \mapsto u^\varepsilon(t)$ is a smooth map from a time interval into a Sobolev space $H^s$ (either $H^s(\mathbb{R}^d)$ or $H^s(\mathbb{T}^d)$, with $s$ large enough), and the vector field $F = F(\tau, u)$ is a smooth map, $2\pi$-periodic w.r.t. $\tau$ ($\tau \in \mathbb{T}$). Let us emphasize that $F$ may also depend on $\varepsilon$, although we shall not reflect this dependence specifically: whenever this is the case, all bounds on $F$ and its derivatives implicitly hold uniformly in $\varepsilon$. In order to work in Banach algebras, we require that $s > d/2$, a condition whose necessity will become apparent for the numerical schemes.
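For the reader's convenience, let us recall the standard product estimate behind this requirement (a classical fact, not specific to this paper): for $s > d/2$, the space $H^s$ is a Banach algebra, i.e.
$$\|u\,v\|_{H^s} \;\le\; C_s\, \|u\|_{H^s}\, \|v\|_{H^s},$$
which is what allows nonlinear expressions in $u$ to be estimated within the same Sobolev space.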

As described in the Introduction, we envisage $u^\varepsilon(t)$ as the diagonal value $U^\varepsilon(t, t/\varepsilon)$ of the solution of the following transport equation, which constitutes our starting point:

$$\partial_t U^\varepsilon(t,\tau) + \frac{1}{\varepsilon}\,\partial_\tau U^\varepsilon(t,\tau) \;=\; F\big(\tau, U^\varepsilon(t,\tau)\big), \qquad U^\varepsilon(0,\tau) = U^\varepsilon_0(\tau), \qquad\qquad (2.2)$$

where the unknown is now the two-variable function $(t,\tau) \mapsto U^\varepsilon(t,\tau)$. The choice of the Cauchy condition $U^\varepsilon_0$ is discussed below, but it is already clear that $U^\varepsilon(t, t/\varepsilon)$ and $u^\varepsilon(t)$ coincide provided that $U^\varepsilon_0(0) = u_0$.

For our purposes, we shall need the vector field $F$ to obey the following assumption, where each derivation w.r.t. $\tau$ or $u$ typically costs two derivatives in the space variable. Indeed, for applications to the nonlinear Klein-Gordon or Schrödinger equations – see (5.7) and (5.12) – one has in mind vector fields obtained by composing a smooth function with the filtered free flow of the equation.
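As a purely illustrative example (the formula below is an assumption made here for concreteness; the actual instances used in this paper are those of (5.7) and (5.12)), filtering out a free flow of the type $e^{\,i(t/\varepsilon)\Delta}$ from a semilinear Schrödinger equation with a smooth nonlinearity $f$ leads, on the torus, to a vector field of the form
$$F(\tau, u) \;=\; -\,i\, e^{-i\tau\Delta}\, f\!\big(e^{\,i\tau\Delta} u\big), \qquad \tau \in \mathbb{T},$$
which is $2\pi$-periodic in $\tau$ and for which each derivative with respect to $\tau$ brings down a factor $\Delta$, i.e. costs two derivatives in the space variable.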

Assumption (A)

For all , and , for all such that , the functional is continuous and locally bounded from to

2.2 Bounds on the solution of the transport equation

Several related transport equations will occur in our analysis, with possibly other vector fields than $F$ and initial conditions of various regularities. A preliminary result thus concerns the existence and uniqueness of the solution of the general Cauchy problem

$$\partial_t V(t,\tau) + \frac{1}{\varepsilon}\,\partial_\tau V(t,\tau) \;=\; G\big(t, \tau, V(t,\tau)\big), \qquad V(0,\tau) = V_0(\tau), \qquad\qquad (2.3)$$

where $G$ possibly depends on $\varepsilon$ and is assumed not to be identically zero.

Proposition 2.1

Let , let and suppose that is a locally Lipschitz continuous map from into , that it admits derivatives and which are continuous and locally bounded from into, respectively, and . If is uniformly bounded in with respect to the norm then, for any , there exists such that for all , equation (2.3) has a unique solution and we have

(2.4)

Moreover, has first derivatives w.r.t. both and which are functions of . If in addition, satisfies the estimate

for some positive constants and , then equation (2.3) has a unique solution in satisfying

Proof. Considering a smooth solution $V$ of (2.3) and denoting $v(t,\tau) := V(t, \tau + t/\varepsilon)$, it is easy to check that

$$\partial_t v(t,\tau) \;=\; \left(\partial_t V + \frac{1}{\varepsilon}\,\partial_\tau V\right)\!\left(t, \tau + \frac{t}{\varepsilon}\right),$$

so that the smooth function $t \mapsto v(t,\tau)$, parametrized by $\tau \in \mathbb{T}$, is a solution of the ordinary differential equation

$$\frac{d}{dt}\, v(t,\tau) \;=\; G\!\left(t, \tau + \frac{t}{\varepsilon},\, v(t,\tau)\right), \qquad v(0,\tau) = V_0(\tau). \qquad\qquad (2.5)$$

According to the Cauchy-Lipschitz theorem in a Banach space, equation (2.5) has a unique maximal solution, defined either on an interval of the form $[0, t^*)$ or on the whole time interval, which furthermore satisfies the following inequality

Denote

and

Now, as long as , we have

so that

and estimate (2.4) holds. Now, since (resp. ) is a continuous and locally bounded function from to (resp. to ), then is the unique solution on of the linear differential equation in

Hence has first derivatives in and in . Finally, since , we have

so that it also has first derivatives in . The proof of the remaining assertions in Proposition 2.1 can be done in the same way, the last estimate being a consequence of the Gronwall lemma.

Remark 2.2

From the previous formulae, it appears that the solution exists but that its first derivatives are not necessarily uniformly bounded w.r.t. $\varepsilon$. In order to get a solution with uniformly bounded first derivatives, we have to consider an appropriate $\varepsilon$-dependent initial condition $U^\varepsilon_0$. In the next two subsections, we shall consider a formal expansion of the solution in powers of $\varepsilon$ so as to determine how this initial condition should be prescribed.

2.3 A formal Chapman-Enskog expansion

In this subsection, we analyze formally the behavior of (2.2) in the limit $\varepsilon \to 0$, under the assumption that its solution has uniformly bounded (in $\varepsilon$) time derivatives up to a suitable order. Following [5], we thus consider the linear operator $L := \partial_\tau$, defined for every (regular) $2\pi$-periodic function $h$ by

$$L h \;=\; \partial_\tau h.$$

This operator is skew-adjoint with respect to the $L^2(\mathbb{T})$ scalar product and its kernel is the set of functions that are constant in $\tau$. The $L^2$-projector on this kernel is the averaging operator

$$\Pi h \;:=\; \frac{1}{2\pi}\int_0^{2\pi} h(\tau)\, d\tau,$$

which obviously satisfies $\Pi L = L \Pi = 0$. On the set of functions with vanishing average, $L$ is invertible, with inverse $L^{-1}$ defined by

$$\big(L^{-1} h\big)(\tau) \;:=\; \int_0^\tau h(\sigma)\, d\sigma \;-\; \frac{1}{2\pi}\int_0^{2\pi}\!\!\int_0^{\theta} h(\sigma)\, d\sigma\, d\theta.$$

In order to alleviate notations, we further introduce $A := L^{-1}(I - \Pi)$, which maps the set of periodic functions onto the set of zero-average periodic functions.
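In discrete form, these operators are straightforward to realize on a uniform grid of the torus via the FFT. The following Python sketch (an implementation choice made here for illustration, not code from the paper) realizes the averaging operator $\Pi$, the zero-average inverse $L^{-1}$ of $\partial_\tau$, and the composition $A = L^{-1}(I - \Pi)$; in Fourier variables, $\Pi$ keeps the mode $k = 0$ while $L^{-1}$ divides the $k$-th mode by $ik$.

# Discrete versions of Pi, L^{-1} and A = L^{-1}(I - Pi) for 2*pi-periodic grid
# functions given by N equispaced values on the torus.
import numpy as np

def Pi(h):
    """Average over the torus: (1/2pi) * integral of h."""
    return np.mean(h, axis=-1, keepdims=True)

def Linv(h):
    """Zero-average antiderivative in tau of a zero-average periodic grid function."""
    N = h.shape[-1]
    k = np.fft.fftfreq(N, d=1.0 / N)              # integer wave numbers
    hhat = np.fft.fft(h, axis=-1)
    out = np.zeros_like(hhat)
    out[..., 1:] = hhat[..., 1:] / (1j * k[1:])   # divide mode k by i*k, drop k = 0
    return np.real(np.fft.ifft(out, axis=-1))

def A(h):
    """A = L^{-1}(I - Pi): remove the average, then take the zero-average primitive."""
    return Linv(h - Pi(h))

# Check on the toy field F(tau, u) = (1 + cos(tau)) * u used earlier:
# Pi F = u and A F = u * sin(tau).
N = 64
tau = 2 * np.pi * np.arange(N) / N
u = 0.7
F = (1.0 + np.cos(tau)) * u
print(np.allclose(Pi(F), u), np.allclose(A(F), u * np.sin(tau)))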

The Chapman-Enskog expansion (see for instance [6]) consists in writing the solution of (2.2) in the form

$$U^\varepsilon(t,\tau) \;=\; \underline{U}^\varepsilon(t) + h^\varepsilon(t,\tau), \qquad\qquad (2.6)$$

where

$$\underline{U}^\varepsilon(t) \;:=\; \Pi\, U^\varepsilon(t,\cdot), \qquad \text{so that} \qquad \Pi\, h^\varepsilon(t,\cdot) = 0,$$

and then, under some regularity assumptions on $U^\varepsilon$ with respect to $t$ and $\tau$, one seeks the correction $h^\varepsilon$ as an expansion in powers of $\varepsilon$:

$$h^\varepsilon(t,\tau) \;=\; \varepsilon\, h_1(t,\tau) + \varepsilon^2\, h_2(t,\tau) + \cdots \qquad\qquad (2.7)$$

Inserting the decomposition (2.6) into (2.2) leads to

$$\partial_t \underline{U}^\varepsilon + \partial_t h^\varepsilon + \frac{1}{\varepsilon}\, L\, h^\varepsilon \;=\; F\big(\tau, \underline{U}^\varepsilon + h^\varepsilon\big). \qquad\qquad (2.8)$$

Projecting on the kernel of $L$ and taking into account that $\Pi h^\varepsilon = 0$ and $\Pi L h^\varepsilon = 0$, we obtain

$$\partial_t \underline{U}^\varepsilon \;=\; \Pi\, F\big(\cdot, \underline{U}^\varepsilon + h^\varepsilon\big), \qquad\qquad (2.9)$$

and then, subtracting (2.9) from (2.8),

$$\partial_t h^\varepsilon + \frac{1}{\varepsilon}\, L\, h^\varepsilon \;=\; (I - \Pi)\, F\big(\tau, \underline{U}^\varepsilon + h^\varepsilon\big). \qquad\qquad (2.10)$$

Since $h^\varepsilon$ and $\partial_t h^\varepsilon$ have zero average, the right-hand side of (2.10) belongs to the range of $L$ and we get

$$h^\varepsilon \;=\; \varepsilon\, L^{-1}\Big[(I - \Pi)\, F\big(\cdot, \underline{U}^\varepsilon + h^\varepsilon\big) - \partial_t h^\varepsilon\Big]. \qquad\qquad (2.11)$$

Therefore, provided $h^\varepsilon$ and its first time derivative are uniformly bounded w.r.t. $\varepsilon$, we first deduce from this last equation that $h^\varepsilon = O(\varepsilon)$. Now, if we additionally assume that the second and third time derivatives are uniformly bounded w.r.t. $\varepsilon$, then, by a simple induction on (2.11), we get

$$h^\varepsilon(t,\tau) \;=\; \varepsilon\, h_1(t,\tau) + \varepsilon^2\, h_2(t,\tau) + O(\varepsilon^3),$$

with $h_1$ and $h_2$ defined by

$$h_1(t,\cdot) \;=\; A\, F\big(\cdot, \underline{U}^\varepsilon(t)\big), \qquad\qquad (2.12)$$
$$h_2(t,\cdot) \;=\; A\Big(\partial_u F\big(\cdot, \underline{U}^\varepsilon(t)\big)\, h_1(t,\cdot)\Big) \;-\; L^{-1}\,\partial_t h_1(t,\cdot). \qquad\qquad (2.13)$$

Inserting these corrections into equation (2.9) yields the first- and second-order averaged models

$$\partial_t \underline{U}^\varepsilon \;=\; \Pi F\big(\cdot, \underline{U}^\varepsilon\big) + O(\varepsilon)
\qquad \text{and} \qquad
\partial_t \underline{U}^\varepsilon \;=\; \Pi F\big(\cdot, \underline{U}^\varepsilon\big) + \varepsilon\, \Pi\Big(\partial_u F\big(\cdot, \underline{U}^\varepsilon\big)\, A F\big(\cdot, \underline{U}^\varepsilon\big)\Big) + O(\varepsilon^2).$$

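As a worked illustration (with the operators and formulas as reconstructed above), consider again the toy field $F(\tau, u) = (1 + \cos\tau)\, u$ used in the code sketches: one finds $\Pi F(\cdot, u) = u$ and $A F(\cdot, u)(\tau) = u\sin\tau$, so the first-order averaged model is simply $\partial_t \underline{U} = \underline{U}$; moreover $\Pi\big((1+\cos\tau)\, u\sin\tau\big) = 0$, so that for this particular example the $O(\varepsilon)$ correction in the second-order averaged model vanishes.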
Anticipating the next sections, let us now briefly address the crucial issue of the initial condition for (2.2). According to the above calculations, one expects to get a smooth solution of (2.2) if the initial condition follows the same expansion as above, i.e.

$$U^\varepsilon(0,\tau) \;=\; \underline{U}^\varepsilon_0 + \varepsilon\, h_{1,0}(\tau) + \varepsilon^2\, h_{2,0}(\tau) + \cdots, \qquad\qquad (2.14)$$

where we have denoted by a subindex $0$ the evaluation of functions at $t = 0$, and where $\underline{U}^\varepsilon_0$ is chosen so as to be compatible with $U^\varepsilon(0,0) = u_0$, which is the initial condition for the original problem (2.1). Starting from $\underline{U}^\varepsilon_0 = u_0 + O(\varepsilon)$ and inserting successively higher-order terms in the previous equation, we can obtain the expression of $\underline{U}^\varepsilon_0$ and then of $U^\varepsilon(0,\cdot)$. For instance, we have

so that

which provides an initial condition for our first order numerical scheme (see Subsection 3.3). The explicit computation of second order terms is postponed to Subsection 4.3.
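For completeness, let us sketch how this first-order initial datum can be obtained (the computation below follows the logic just described and uses the expression of $h_1$ as reconstructed in (2.12); the precise formula used by the scheme is the one given in Subsection 3.3). Keeping only the $O(\varepsilon)$ term in (2.14) and imposing $U^\varepsilon(0,0) = u_0$ gives $\underline{U}^\varepsilon_0 = u_0 - \varepsilon\, h_{1,0}(0) + O(\varepsilon^2)$, whence
$$U^\varepsilon(0,\tau) \;=\; u_0 \;+\; \varepsilon\,\Big(A F(\cdot, u_0)(\tau) - A F(\cdot, u_0)(0)\Big) \;+\; O(\varepsilon^2).$$
Discarding the $O(\varepsilon^2)$ remainder yields an initial datum which satisfies the constraint $U^\varepsilon(0,0) = u_0$ and is well prepared at first order.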

2.4 Estimates of time derivatives

In this subsection, we indeed prove that the initial condition (2.14) ensures that the time derivatives of $U^\varepsilon$, up to the order needed in our analysis, are uniformly bounded in $\varepsilon$. In the sequel, the following functional space will be useful:

(2.15)
Proposition 2.3

Suppose that satisfies Assumption (A) and let and . Consider the following initial condition

(2.16)

where is assumed to be uniformly bounded in , where and are given by (2.12) and (2.13), and where the remainder term is assumed to be bounded in uniformly in . Then the following holds:

  1. is uniformly bounded in and there exists such that, for all , equation (2.2), subject to the initial condition (2.16), has a unique solution , which satisfies the uniform bound

    (2.17)
  2. Moreover, for any for which (2.17) holds, the solution satisfies the following estimates

    (2.18)

    for some constant .

Proof. We prove this proposition in several steps.

Existence of the solution and uniform bound. Let us first estimate the initial condition defined by (2.16). From (2.12) and (2.13), one gets

(2.19)

where, for conciseness, we have further omitted the dependence of in and (in the sequel, we explicitly mention the dependence, while stands for and stands for ). We notice that, by Assumption (A), we have

with norms uniformly bounded w.r.t. $\varepsilon$. Hence, observing that $\Pi$ and $L^{-1}$ are bounded operators on , for all , one deduces that belongs to and is uniformly bounded w.r.t. $\varepsilon$. Then, according to Proposition 2.1, the solution exists on an interval independent of $\varepsilon$, and satisfies

Furthermore, the derivatives of the solution w.r.t. $t$ and $\tau$ exist and are functions with values in .

Estimate of the first derivative in $t$. The first time derivative $\partial_t U^\varepsilon$ satisfies the equation

$$\partial_t\big(\partial_t U^\varepsilon\big) + \frac{1}{\varepsilon}\,\partial_\tau\big(\partial_t U^\varepsilon\big) \;=\; \partial_u F\big(\tau, U^\varepsilon\big)\,\partial_t U^\varepsilon, \qquad\qquad (2.20)$$

with initial condition (obtained from (2.2) evaluated at $t = 0$)

$$\partial_t U^\varepsilon(0,\tau) \;=\; -\frac{1}{\varepsilon}\,\partial_\tau U^\varepsilon_0(\tau) + F\big(\tau, U^\varepsilon_0(\tau)\big).$$

From (2.19) and , , we obtain

Taylor-Lagrange expansions with integral remainder at orders one and two give (here, the notation $O(\cdot)$ is used for terms uniformly bounded in $\varepsilon$ with the appropriate Sobolev norm)

where we used that is uniformly bounded in . Therefore (notice that is continuously embedded in )

(2.21)
(2.22)

In particular, and is uniformly bounded in w.r.t. . According to the second part of Proposition 2.1 (for ) with

which is a map from into , we thus have an estimate of the form

Estimate of the second derivative in $t$. We proceed in an analogous way for $\partial_t^2 U^\varepsilon$ by considering

$$\partial_t\big(\partial_t^2 U^\varepsilon\big) + \frac{1}{\varepsilon}\,\partial_\tau\big(\partial_t^2 U^\varepsilon\big) \;=\; \partial_u F\big(\tau, U^\varepsilon\big)\,\partial_t^2 U^\varepsilon + \partial_u^2 F\big(\tau, U^\varepsilon\big)\big(\partial_t U^\varepsilon, \partial_t U^\varepsilon\big). \qquad\qquad (2.23)$$

The initial condition for $\partial_t^2 U^\varepsilon$ can be obtained from (2.20) at $t = 0$ and from (2.22):

(2.24)

which is uniformly bounded in the -norm w.r.t. both and . By Proposition 2.1 applied to with , one gets that is uniformly bounded in .

Estimate of the third derivative in $t$. Finally, we derive the equation for $\partial_t^3 U^\varepsilon$, which reads

$$\partial_t\big(\partial_t^3 U^\varepsilon\big) + \frac{1}{\varepsilon}\,\partial_\tau\big(\partial_t^3 U^\varepsilon\big) \;=\; \partial_u F\big(\tau, U^\varepsilon\big)\,\partial_t^3 U^\varepsilon + 3\,\partial_u^2 F\big(\tau, U^\varepsilon\big)\big(\partial_t U^\varepsilon, \partial_t^2 U^\varepsilon\big) + \partial_u^3 F\big(\tau, U^\varepsilon\big)\big(\partial_t U^\varepsilon, \partial_t U^\varepsilon, \partial_t U^\varepsilon\big). \qquad\qquad (2.25)$$

We then extract from (2.4):