A posteriori error estimates for leap-frog and cosine methods for second order evolution problems


Emmanuil H. Georgoulis, Department of Mathematics, University of Leicester, Leicester LE1 7RH, England UK, and School of Applied Mathematical and Physical Sciences, National Technical University of Athens, Athens 15780, Greece. Email: Emmanuil.Georgoulis@le.ac.uk
Omar Lakkis, Department of Mathematics, University of Sussex, Brighton BN1 9QH, England UK. http://www.maths.sussex.ac.uk/Staff/OL
Charalambos G. Makridakis, Department of Mathematics, University of Sussex, Brighton BN1 9QH, England UK. Email: C.Makridakis@sussex.ac.uk
Juha M. Virtanen, Department of Mathematics, University of Leicester, Leicester LE1 7RH, England UK. Email: jv77@le.ac.uk
July 4, 2019
Abstract

We consider second order explicit and implicit two-step time-discrete schemes for wave-type equations. We derive optimal order a posteriori estimates controlling the time discretization error. Our analysis has been motivated by the need to provide a posteriori estimates for the popular leap-frog method (also known as Verlet’s method in the molecular dynamics literature); it is extended, however, to general cosine-type second order methods. The estimators are based on a novel reconstruction of the time-dependent component of the approximation. Numerical experiments confirm that the convergence rates of the proposed estimators match the theoretical convergence rate of the true error.

1 Introduction

This work is concerned with second order explicit and implicit two-step time-discrete schemes for wave-type equations. Our objective is to derive optimal order a posteriori estimates controlling the time-discretization error. To the best of our knowledge, error control for wave equations discretized by popular methods has so far been limited to first order schemes [9, 13]. Despite the importance of such wave-type problems, the lack of error control for time discretizations used extensively in applications is probably due to the two-step character of these methods and the associated technical issues. Our analysis has been motivated by the need to provide a posteriori estimates for the leap-frog method or, as it is often termed in the molecular dynamics literature, Verlet’s method. It extends, however, to general cosine-type second order methods [5, 6].

Adaptivity and a posteriori error control for parabolic problems have been developed in, e.g., [12, 25, 21, 15, 19, 8, 10, 17]. In particular, as far as time discretization is concerned, all implicit one-step methods can be treated within the framework developed in [2, 20, 3, 4, 18]. Although some of these results also apply (directly or after appropriate modifications) to the wave equation, when it is written as a first order system and discretized by implicit Runge-Kutta or Galerkin schemes, this framework does not cover the popular two-step implicit or explicit time-discretization methods. The recent results in [9, 13] cover only first order time discrete schemes; see also [1] for certain estimators for standard implicit time-stepping finite element approximations of the wave equation. For earlier works on adaptivity for wave equations from various perspectives we refer, e.g., to [16, 7, 23, 24].

Model problem and notation.

Let $H$ be a Hilbert space and let $A$ be a positive definite, self-adjoint, linear operator on $D(A)$, the domain of $A$, which is assumed to be dense in $H$. For time $t \in [0,T]$, $T>0$, we consider the linear second order hyperbolic problem: find $u : [0,T] \to D(A)$, such that

$u''(t) + A u(t) = f(t), \quad t \in (0,T], \qquad u(0) = u_0, \quad u'(0) = v_0,$    (1.1)

where $u_0$, $v_0$ are given initial data and $f$ is a given source term.

Leap-frog time-discrete schemes.

We shall be concerned with the popular leap-frog time-discrete scheme for (1.1). We consider a subdivision of the time interval $[0,T]$ into disjoint subintervals $(t^{n-1}, t^n]$, $n = 1,\dots,N$, with $t^0 = 0$ and $t^N = T$, and we define the time-step $k := t^n - t^{n-1}$. For simplicity of the presentation, we shall assume that $k$ is constant, although this is not a restriction of the analysis below. Despite being two-step, the schemes considered herein can be formulated for variable time steps also, with their consistency and stability properties then being influenced accordingly, cf. [22, 11]; the study of such extensions is out of the scope of this work. We shall use the notation $t^{n+1/2} := \tfrac{1}{2}(t^n + t^{n+1})$.

The time-discrete leap-frog scheme (or Verlet’s method in the terminology of initial value problems or of molecular dynamics) for the wave problem (1.1) is defined by finding approximations $U^n$ of the exact values $u(t^n)$, such that:

$\dfrac{U^{n+1} - 2U^n + U^{n-1}}{k^2} + A U^n = f(t^n), \qquad n = 1, \dots, N-1,$    (1.2)

where ,

(1.3)

with

assuming knowledge of $u_0$ and $v_0$. We set and we define by

(1.4)

where and . This is a widely used and remarkable method in many ways: it is the only two-step explicit scheme for second order problems which is second order accurate, it has important conservation and geometric properties, as it is symplectic, and it is very natural and simple to formulate and implement. We refer to the review article [14] for a thorough discussion. Explicit schemes, such as (1.2), are well suited for the discretization of wave-type partial differential equations, since their implementation requires only a mild CFL-type condition of the form $k \le Ch$ (in contrast to parabolic problems), where $h$ stands for the space-discretization parameter.
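
For illustration, the following minimal Python sketch advances the leap-frog scheme when the abstract operator is replaced by a symmetric positive definite matrix; the start-up value for the second iterate uses a standard Taylor expansion, which is an assumption made for this sketch rather than the paper's definition (1.3).

```python
import numpy as np

def leapfrog(A, f, u0, v0, k, N):
    """Two-step leap-frog (Verlet) scheme for u'' + A u = f(t).

    A      : (d, d) symmetric positive definite matrix (stand-in for the operator)
    f      : callable t -> forcing vector of length d
    u0, v0 : initial data u(0), u'(0)
    k, N   : constant time step and number of steps
    Returns the array of approximations U[n] ~ u(n k).
    """
    U = np.zeros((N + 1, len(u0)))
    U[0] = u0
    # Taylor start-up for U^1 (an assumption here; the paper defines its own U^1 in (1.3))
    U[1] = u0 + k * v0 + 0.5 * k**2 * (f(0.0) - A @ u0)
    for n in range(1, N):
        U[n + 1] = 2.0 * U[n] - U[n - 1] + k**2 * (f(n * k) - A @ U[n])
    return U

# Example: scalar oscillator u'' + omega^2 u = 0 with exact solution cos(omega t)
if __name__ == "__main__":
    omega = 2.0
    A = np.array([[omega**2]])
    U = leapfrog(A, lambda t: np.zeros(1), np.array([1.0]), np.array([0.0]), k=1e-3, N=1000)
    print(abs(U[-1, 0] - np.cos(omega * 1.0)))  # error at t = 1 is O(k^2)
```

The scheme is explicit: each step costs one application of the matrix and no linear solves.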

Cosine methods.

The leap-frog scheme is a member of a general class of two-step methods for second order evolution problems, which are based on the approximation of the cosine and are used extensively in practical computations. In a two-step cosine method, for $n = 1, \dots, N-1$, we seek approximations $U^{n+1}$ such that

(1.5)

where we assume that for second order accuracy; we refer to [5, 6] for a detailed discussion and analysis of general multi-step cosine schemes. In this case, the rational function is a second order approximation to the cosine, in the sense that for sufficiently small,

(1.6)

When the above methods are explicit, and the condition implies that the only explicit second order member of this family is the leap-frog method (1.2).
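
As an illustration of this family, the sketch below assumes the classical Baker-Dougalis-Serbin form $U^{n+1} - 2R(k^2A)U^n + U^{n-1} = k^2 f(t^n)$, where the rational function $R(x) = p(x)/q(x)$ approximates $\cos\sqrt{x}$ to second order; the paper's precise treatment of the right-hand side in (1.5) is not reproduced here, and the two choices of $R$ shown (the leap-frog member and an implicit Padé-type member) are merely illustrative.

```python
import numpy as np

def cosine_two_step(p, q, A, f, u0, u1, k, N):
    """Two-step cosine-type scheme (assumed form)
         U^{n+1} - 2 R(k^2 A) U^n + U^{n-1} = k^2 f(t^n),   R(x) = p(x)/q(x),
    advanced by solving  q(k^2 A) U^{n+1} = 2 p(k^2 A) U^n - q(k^2 A)(U^{n-1} - k^2 f(t^n)).

    p, q   : callables mapping a matrix to a matrix (numerator/denominator of R)
    u0, u1 : the two starting values U^0, U^1 (assumed given)
    """
    Kmat = k**2 * A
    P, Q = p(Kmat), q(Kmat)
    U = np.zeros((N + 1, len(u0)))
    U[0], U[1] = u0, u1
    for n in range(1, N):
        rhs = 2.0 * (P @ U[n]) - Q @ (U[n - 1] - k**2 * f(n * k))
        U[n + 1] = np.linalg.solve(Q, rhs)
    return U

# Leap-frog member: R(x) = 1 - x/2 (explicit, denominator is the identity)
leapfrog_R = (lambda X: np.eye(X.shape[0]) - 0.5 * X, lambda X: np.eye(X.shape[0]))
# Implicit Pade-type member: R(x) = (1 - x/4)/(1 + x/4) (a standard unconditionally stable choice)
pade_R = (lambda X: np.eye(X.shape[0]) - 0.25 * X, lambda X: np.eye(X.shape[0]) + 0.25 * X)
```

With the leap-frog choice the linear solve is trivial and the recursion reduces to (1.2).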

In this work, we derive a posteriori error bounds in the $L^\infty$-in-time, energy-in-space norm of the error. The derived bounds are of optimal order, i.e., of the same order as the error (which is known to be second order) for the class of schemes considered [5, 6]. This is verified by the numerical experiments presented herein. Our approach is based on the following ingredients: first, we rewrite the schemes as a one-step system on staggered time grids. In turn, this can be seen as a second order perturbation of the staggered midpoint method. Further, by introducing appropriate interpolants, we arrive at a form which can be viewed as a perturbation of (1.1) written as a first-order system. Finally, we employ an adaptation of the time reconstruction from [2], yielding the desired a posteriori error estimates. An interesting observation is that our estimates hold without any additional time-step assumption, which at the fully discrete level would correspond to a CFL-type restriction. Thus, in a posteriori analysis, standard stability considerations for time discretization schemes might influence the behaviour of the estimator, but they are not explicitly required; possible instability is sufficiently reflected in the behaviour of the estimator, see Section 4. Although not done here, by employing space reconstruction techniques it would be possible to derive error estimates for fully discrete schemes in various norms, using ideas from [19, 17, 13].

The remainder of this work is organized as follows. In §2 we reformulate the numerical methods appropriately; this is a crucial step in our approach. We start with the leap-frog method and then provide two alternative reformulations of general cosine methods. In §3 we introduce appropriate time reconstructions and derive the error bounds. In §4 we present detailed numerical experiments, which yield experimental orders of convergence for the estimators that are the same as those of the actual error. Finally, in §5 we draw some concluding remarks.

2 Reformulation of the methods

It will be useful for the analysis to reformulate the methods as a system in two staggered grids.

2.1 Leap-frog

Starting with the leap-frog method, we introduce the auxiliary variable

(2.1)

for , and we set . (Note that, then, .) Also, we define and we observe that we have

Further, we introduce the notation

(2.2)

noting that the identity also holds.

We can now write the method (1.2) as a system in the staggered form considered in [14]:

(2.3)

for .
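
A brief sketch of this staggered one-step form is given below, under the assumption that the auxiliary variable of (2.1) is the scaled difference $V^{n+1/2} := (U^{n+1} - U^n)/k$; the sketch also checks numerically that the staggered update reproduces the two-step leap-frog iterates.

```python
import numpy as np

def leapfrog_staggered(A, f, u0, u1, k, N):
    """Leap-frog written as a one-step system on staggered grids (assumed variables):
         V^{n+1/2} = V^{n-1/2} + k (f(t^n) - A U^n),
         U^{n+1}   = U^n + k V^{n+1/2},
       started from V^{1/2} := (U^1 - U^0)/k."""
    U = np.zeros((N + 1, len(u0)))
    U[0], U[1] = u0, u1
    v = (np.asarray(u1, float) - np.asarray(u0, float)) / k   # V^{1/2}
    for n in range(1, N):
        v = v + k * (f(n * k) - A @ U[n])                     # V^{n+1/2}
        U[n + 1] = U[n] + k * v
    return U

if __name__ == "__main__":
    A, k, N = np.array([[4.0]]), 1e-2, 100
    f = lambda t: np.zeros(1)
    u0 = np.array([1.0])
    u1 = u0 + 0.5 * k**2 * (f(0.0) - A @ u0)        # Taylor start with u'(0) = 0
    Us = leapfrog_staggered(A, f, u0, u1, k, N)
    Ut = np.zeros_like(Us); Ut[0], Ut[1] = u0, u1   # two-step form (1.2) for comparison
    for n in range(1, N):
        Ut[n + 1] = 2 * Ut[n] - Ut[n - 1] + k**2 * (f(n * k) - A @ Ut[n])
    print(np.max(np.abs(Us - Ut)))                  # agreement up to round-off
```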

Next, our goal is to recast (2.3) using globally defined piecewise linear functions. We define to be the piecewise linear interpolant of the sequence , at the points , with . In addition, let be the piecewise linear interpolant of the sequence , at the points . Using the notation

(2.4)

we, then, have

(2.5)

for .

Hence, in view of the identities above, (2.3) implies

(2.6)

for . Upon defining the piecewise constant residuals

it is easy to check that, given that the leap-frog method is second order (in both and ), we have and . Hence, (2.6) can be viewed as a second order perturbation of the staggered mid-point method for (1.1), written as a first order system

$u'(t) = v(t), \qquad v'(t) + A u(t) = f(t), \qquad t \in (0,T].$    (2.7)

In what follows, it will be useful to rewrite (2.6) as a perturbation of the continuous system (2.7). To this end, we introduce two time interpolants onto piecewise linear functions defined on the staggered grids: we define to be the piecewise linear interpolant of the sequence and to be the piecewise linear interpolant of the sequence . Then, (2.6) can be written as

(2.8)

where we define the interpolators

(2.9)

This formulation will be the starting point of our analysis in the next section.
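
In computations the interpolants above are ordinary piecewise linear interpolants of the nodal sequences, one on the primal grid and one on the staggered grid; a small helper, with an assumed node placement and placeholder nodal data purely for illustration, is the following.

```python
import numpy as np

def pw_linear(nodes, values):
    """Return a callable evaluating the piecewise linear interpolant of the scalar
    sequence `values` at the increasing time nodes `nodes`."""
    nodes, values = np.asarray(nodes, float), np.asarray(values, float)
    return lambda t: np.interp(t, nodes, values)

k, N = 0.1, 10
t_primal = k * np.arange(N + 1)        # assumed primal nodes t^n
t_stag = k * (np.arange(N) + 0.5)      # assumed staggered nodes t^{n+1/2}
U = pw_linear(t_primal, np.cos(2.0 * t_primal))     # placeholder nodal values
V = pw_linear(t_stag, -2.0 * np.sin(2.0 * t_stag))  # placeholder staggered values
print(U(0.25), V(0.25))
```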

2.2 Cosine methods: Formulation 1

We shall see that cosine methods (1.5) can be reformulated in a similar way as a staggered system. As in the leap-frog case, we introduce the auxiliary variable

(2.10)

and we let

(2.11)

Then the methods (1.5) can be rewritten in system form:

(2.12)

for . Using the same notation and conventions as in the leap-frog case, we observe, respectively,

(2.13)

where we used the fact that . Therefore, as before, we conclude that

(2.14)

where and . Let us now define

As in the leap-frog case, it is easy to check that, given that the method is second order, we have and . Hence, (2.14) can be seen as a second order perturbation of the staggered mid-point method for (2.7). Further, still using the same notation for the time interpolants as in the leap-frog case, we obtain

(2.15)

where . It is interesting to compare (2.15) to (2.8).

2.3 Cosine methods: Formulation 2

We briefly discuss an alternative formulation of cosine methods. This time we let

(2.16)

and, as before,

(2.17)

Then, using again we rewrite the methods (1.5) as

(2.18)

for . Using the same notation and conventions as in the leap-frog case, we finally conclude

(2.19)

where and . Upon defining the perturbations as

it is easy to check, again, that the method is second order ( and ) and, thus, (2.19) can be interpreted as a second order perturbation of the staggered mid-point method for (2.7). Still using the same notation as before, we have

(2.20)

3 A posteriori error bounds

We have seen that all of the above schemes can be written in the form

(3.1)

where are defined in (2.9) and equal to , or , for the leap-frog, the first and the second cosine method formulations, respectively; similarly, is equal to , or for each of the three respective formulations; cf. (2.8), (2.15) and (2.20). It is possible, in principle, to consider non-constant on each time-step; nevertheless, the constant ones considered here are the easiest to implement and suffice to deliver optimal estimator convergence rates, as will be highlighted in the numerical experiments below.

3.1 Reconstructions

We continue by defining appropriate time reconstructions, cf., [2]. To this end, on each interval , for , we define the reconstruction of by

where is a piecewise linear interpolant on the mesh , such that . We observe that , and

using the mid-point rule to evaluate the integral and the first equation in (2.6).

Also, on each interval , for , we define the reconstruction of by

Again, we observe that and that

using the mid-point rule. Notice that each of the above reconstructions is similar in spirit to the Crank-Nicolson reconstruction of [2]; however, we note that, although are both globally continuous functions, their derivatives jump alternatingly at the nodes of the staggered grid.
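
A sketch of such an integral reconstruction is given below, under the assumption that on $[t^n, t^{n+1}]$ the reconstruction of the displacement is $\hat U(t) = U^n + \int_{t^n}^{t} V(s)\,ds$, with $V$ the piecewise linear interpolant of the staggered velocity values; this mirrors the Crank-Nicolson reconstruction of [2] in spirit, but it is not necessarily the paper's exact definition.

```python
import numpy as np

def reconstruct_U(t, n, k, U_nodes, V_nodes_stag):
    """Evaluate Uhat(t) = U^n + int_{t^n}^{t} V(s) ds on [t^n, t^{n+1}] (assumed form),
    where V interpolates the staggered values V^{m+1/2} at t^{m+1/2} = (m + 1/2) k.
    The integral of the piecewise linear V is computed exactly by the trapezoid rule
    over its breakpoints."""
    t_stag = k * (np.arange(len(V_nodes_stag)) + 0.5)
    V = lambda s: np.interp(s, t_stag, V_nodes_stag)
    brk = np.concatenate(([n * k], t_stag[(t_stag > n * k) & (t_stag < t)], [t]))
    return U_nodes[n] + np.trapz(V(brk), brk)

k = 0.1
U_nodes = np.cos(np.arange(6) * k)            # placeholder nodal values U^n
V_nodes = -np.sin((np.arange(5) + 0.5) * k)   # placeholder staggered values V^{n+1/2}
print(reconstruct_U(0.25, 2, k, U_nodes, V_nodes))
```

Being the integral of a piecewise linear function, such a reconstruction is piecewise quadratic in time and globally continuous.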

3.2 Error equation and estimators

Setting and , we deduce

(3.2)

with

(3.3)

For , , we define the bilinear form

It is evident that is an inner product on . This is the standard energy inner product, and the induced norm, denoted by , i.e.,

is the natural energy norm for (1.1).

Then, the a posteriori error estimates will follow by applying standard energy arguments to the error equation (3.2). More specifically, in view of (3.2), we have

using the self-adjointness of . Hence, using the Cauchy-Schwarz inequality, we arrive at

Integrating between and , with such that

we arrive at

which implies

This already gives the following a posteriori bound.

Theorem 3.1

Let be the solution of (1.1), and . Then, the following a posteriori error estimate holds:

where , and are defined in (3.3).

An immediate corollary of Theorem 3.1 is an a posteriori bound for the error, which can be deduced directly through a triangle inequality.

Remark 3.2

Notice that, due to the two-mesh staggering, the computation of the ‘last’ used in the above estimate requires the computation of . This can be obtained by advancing the computation one more time step before estimating.

4 Numerical Experiments

4.1 Fully discrete formulation.

Although the focus of the present work is on time discretization, we shall introduce a fully discrete version of the time-stepping schemes for the numerical experiments below. To this end, let $\Omega \subset \mathbb{R}^d$ be a domain with boundary $\partial\Omega$. We consider the initial-boundary value problem for the wave equation: find $u$ such that

(4.1)
(4.2)
(4.3)
(4.4)

where, for simplicity, we take and . Further, for each , we consider the standard, conforming finite element space , based on a quasiuniform triangulation of consisting of finite elements of polynomial degree , with denoting the largest element diameter. Focusing on time-discretization issues, we shall use the same spatial discretization for all time-steps. The respective discrete spatial operator is denoted by . The fully-discrete leap-frog method is then defined as follows: for each , find , such that

(4.5)

and such that

(4.6)

here for each , where denotes a suitable interpolation or projection operator onto the finite element space, applied to the source function . We also set and . Note that can be calculated as above through , or can be computed as follows: find such that

(4.7)

Formulation (4.7) can be used to overcome the difficulty of evaluating estimators that are defined on the staggered time mesh and depend on the term .
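
In matrix terms, with $M$ and $K$ denoting the finite element mass and stiffness matrices (assumed notation, not taken from the paper), the fully discrete leap-frog step amounts to one solve with $M$ per step, and the staggered velocities corresponding to (4.7) follow from scaled differences; a sketch:

```python
import numpy as np
import scipy.sparse.linalg as spla

def fully_discrete_leapfrog(M, K, F, U0, U1, k, N):
    """Fully discrete leap-frog (assumed matrix form):
         M (U^{n+1} - 2 U^n + U^{n-1}) / k^2 + K U^n = M F^n,
    with M, K sparse finite element mass and stiffness matrices and F(n) the nodal
    vector of the interpolated source at t^n.  Also returns the staggered velocities
    V^{n+1/2} = (U^{n+1} - U^n)/k, in the spirit of (4.7)."""
    solveM = spla.factorized(M.tocsc())                 # factorize M once, reuse each step
    U = [np.asarray(U0, float), np.asarray(U1, float)]
    V = [(U[1] - U[0]) / k]
    for n in range(1, N):
        rhs = M @ (2.0 * U[n] - U[n - 1] + k**2 * F(n)) - k**2 * (K @ U[n])
        Unext = solveM(rhs)
        V.append((Unext - U[n]) / k)
        U.append(Unext)
    return np.array(U), np.array(V)
```

With mass lumping (a diagonal $M$) each step becomes fully explicit, which is the usual practice for the leap-frog method.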

To assess the time-error estimator, we replace by its approximation in the a posteriori estimators discussed above, and in the -inner product and -norm. For brevity, we introduce the notation:

The objective is to study the performance of the a posteriori estimator

(4.8)

from Theorem 3.1.

4.2 Specific tests

For and , we consider the model problem (4.1)–(4.4) in the following setup: and , for constant. Then, the exact solution of problem (4.1)–(4.4) is given by:

(4.9)

where , and . To illustrate the estimator’s behaviour, we have chosen the following sets of parameters as numerical examples:

(4.10)
(4.11)
(4.12)

The solutions of (4.10)–(4.12) are all smooth, but (4.12) oscillates much faster temporally, while (4.11) has greater space-dependence of the error. In the numerical experiments, the C++ library FEniCS/dolfin 1.2.0 and PETSc/SuperLU were used for the finite element formulation and the linear algebra implementation. For each of the examples, we compute the solution of (4.5) using finite element spaces of polynomial degree , and time step size , , for some constant , with $h$ denoting the diameter of the largest element in the mesh associated with . The sequences of mesh sizes considered (with respective colouring in the figures below) are: (cyan), (green), (yellow), (red), (purple), for Examples (4.10) and (4.12), and (cyan), (green), (yellow), (red), (purple) for Example (4.11). Note that the CFL-condition required by the leap-frog method is satisfied by the restriction on the time step

(4.13)

for a sufficiently small constant, which couples the temporal and spatial discretization sizes. We also include an experiment to highlight the behaviour of the estimator when the CFL condition is violated.
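
For instance, a constant time step satisfying such a restriction can be chosen directly from the smallest element diameter; the constant used below is an arbitrary illustrative value, not one taken from the paper.

```python
import math

def cfl_time_step(T, h_min, c_cfl=0.5):
    """Choose a constant time step k <= c_cfl * h_min (cf. (4.13)) that divides the
    final time T into an integer number N of steps; returns (k, N)."""
    N = math.ceil(T / (c_cfl * h_min))
    return T / N, N
```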

We monitor the evolution of the values and the experimental order of convergence of the estimator , and the errors and , as well as the effectivity index over time on a sequence of uniformly refined meshes with mesh sizes given as per each example. We also monitor the energy of the reconstructed solution:

(4.14)
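
Up to the exact definition used in (4.14), the monitored quantity is the usual discrete energy of a displacement/velocity pair; with the mass and stiffness matrices of the previous sketch it can be evaluated as follows (the precise reconstruction entering (4.14) is not reproduced here).

```python
def discrete_energy(M, K, U, V):
    """Discrete energy E = 0.5 * (V^T M V + U^T K U), a stand-in for the energy of the
    reconstructed solution monitored in (4.14)."""
    return 0.5 * (V @ (M @ V) + U @ (K @ U))
```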

We define the experimental order of convergence (EOC) of a given sequence of positive quantities, defined on a sequence of meshes of size , by

(4.15)

and the inverse effectivity index (IEI),

(4.16)

The IEI carries the same information as the (standard) effectivity index and has the advantage of relating directly to the inequality appearing in Theorem 3.1. The results of the numerical experiments on uniform meshes, depicted in the first two sets of figures, indicate that the error estimators are reliable and also efficient, provided the time steps are kept sufficiently small. In the last experiment, the behaviour of the estimator is displayed for the case when the CFL condition (4.13) is violated; the estimator remains reliable in this case also: the instability is reflected in the behaviour of the estimator. Indeed, the inverse effectivity index of the estimator oscillates around a constant value for all times in the course of the numerically unstable behaviour.
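
For reference, the experimental order of convergence (4.15) and the inverse effectivity index (4.16) can be computed from successive refinements as in the following sketch; the inverse effectivity index is assumed here to be the ratio of the true error to the estimator, which is how it relates directly to the reliability inequality of Theorem 3.1.

```python
import numpy as np

def eoc(errors, hs):
    """Experimental orders of convergence log(e_{i+1}/e_i) / log(h_{i+1}/h_i), cf. (4.15)."""
    e, h = np.asarray(errors, float), np.asarray(hs, float)
    return np.log(e[1:] / e[:-1]) / np.log(h[1:] / h[:-1])

def inverse_effectivity(error, estimator):
    """Inverse effectivity index error/estimator, cf. (4.16) (assumed definition);
    values bounded by a moderate constant over time indicate a reliable estimator."""
    return error / estimator
```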

5 Concluding remarks

An a posteriori error bound for the error, measured in the $L^\infty$ norm in time and the energy norm in space, for leap-frog and cosine-type time semi-discretizations of linear second order evolution problems was presented and studied numerically. The estimator was found to be reliable, converging with the same rate as the theoretical convergence rate of the error. In a fully discrete setting, this estimator corresponds to the control of the time discretization error. The estimators were also found to be sharp on uniform meshes, provided that the time steps, and thus the time-dependent part of the error, are kept sufficiently small. Investigation into the suitability of the proposed estimators within an adaptive algorithm remains a future challenge.

References

  • [1] Slimane Adjerid, A posteriori finite element error estimation for second-order hyperbolic problems, Comput. Methods Appl. Mech. Engrg., 191 (2002), pp. 4699–4719.
  • [2] Georgios Akrivis, Charalambos Makridakis, and Ricardo H. Nochetto, A posteriori error estimates for the Crank-Nicolson method for parabolic equations, Math. Comp., 75 (2006), pp. 511–531 (electronic).
  • [3] Georgios Akrivis, Charalambos Makridakis, and Ricardo H. Nochetto, Optimal order a posteriori error estimates for a class of Runge-Kutta and Galerkin methods, Numer. Math., 114 (2009), pp. 133–160.
  • [4] Georgios Akrivis, Charalambos Makridakis, and Ricardo H. Nochetto, Galerkin and Runge-Kutta methods: unified formulation, a posteriori error estimates and nodal superconvergence, Numer. Math., 118 (2011), pp. 429–456.
  • [5] Garth A. Baker, Vassilios A. Dougalis, and Steven M. Serbin, High order accurate two-step approximations for hyperbolic equations, RAIRO Anal. Numér., 13 (1979), pp. 201–226.
  • [6] Garth A. Baker, Vassilios A. Dougalis, and Steven M. Serbin, An approximation theorem for second-order evolution equations, Numer. Math., 35 (1980), pp. 127–142.
  • [7] Wolfgang Bangerth and Rolf Rannacher, Adaptive finite element techniques for the acoustic wave equation, J. Comput. Acoust., 9 (2001), pp. 575–591.