Strong stability preserving explicit Runge–Kutta methods of maximal effective order


Yiannis Hadjimichael, Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science & Technology (KAUST), P.O. Box 4700, Thuwal 23955, Saudi Arabia (yiannis.hadjimichael@kaust.edu.sa, david.ketcheson@kaust.edu.sa). The work of these authors is supported by Award No. FIC/2010/05, made by King Abdullah University of Science and Technology (KAUST).    Colin B. Macdonald, Mathematical Institute, University of Oxford, OX1 3LB, UK (macdonald@maths.ox.ac.uk). The work of this author was supported by NSERC Canada and by Award No. KUK-C1-013-04 made by King Abdullah University of Science and Technology (KAUST).    David I. Ketcheson (same affiliation as Y. Hadjimichael).    James H. Verner, Department of Mathematics, Simon Fraser University, Burnaby, British Columbia, V5A 1S6, Canada (jverner@pims.math.ca). The work of this author was supported by Simon Fraser University.
Abstract

We apply the concept of effective order to strong stability preserving (SSP) explicit Runge–Kutta methods. Relative to classical Runge–Kutta methods, methods with an effective order of accuracy are designed to satisfy a relaxed set of order conditions, but yield higher order accuracy when composed with special starting and stopping methods. We show that this allows the construction of four-stage SSP methods with effective order four (such methods cannot have classical order four). However, we also prove that effective order five methods—like classical order five methods—require the use of non-positive weights and so cannot be SSP. By numerical optimization, we construct explicit SSP Runge–Kutta methods up to effective order four and establish the optimality of many of them. Numerical experiments demonstrate the validity of these methods in practice.

1 Introduction

Strong stability preserving time discretization methods were originally developed for the solution of nonlinear hyperbolic partial differential equations (PDEs). Solutions of such PDEs may contain discontinuities even when the initial conditions are smooth. Many numerical methods for their solution are based on a method-of-lines approach in which the problem is first discretized in space to yield a system of ODEs. The spatial discretization is often chosen so that, when coupled with first-order forward Euler time integration, the solution is total variation diminishing (TVD), in order to avoid the appearance of spurious oscillations near discontinuities. Strong stability preserving (SSP) time discretizations (also known as TVD discretizations [25]) are high-order time discretizations that guarantee the TVD property (or other convex functional bounds), with a possibly different step-size restriction [10]. Section 2 reviews Runge–Kutta methods and the concept of strong stability preserving methods.

Explicit SSP Runge–Kutta methods cannot have order greater than four [23]. However, a Runge–Kutta method may achieve an effective order of accuracy higher than its classical order by the use of special starting and stopping procedures. The conditions for a method to have effective order $q$ are in general less restrictive than the conditions for a method to have classical order $q$. Section 3 presents a brief overview of the algebraic representation of Runge–Kutta methods, following Butcher [5]. This includes the concept of effective order and a list of effective order conditions.

We examine the SSP properties of explicit Runge–Kutta methods whose effective order is greater than their classical order. Previous studies of SSP Runge–Kutta methods have considered only the classical order of the methods. Three natural questions are:

  • Can an SSP Runge–Kutta method have effective order of accuracy greater than four?

  • If we only require methods to have effective order $q$, is it possible to achieve larger SSP coefficients than those obtained in methods with classical order $q$?

  • SSP Runge–Kutta methods of order four require at least five stages. Can SSP methods of effective order four have fewer stages?

We show in Section 4 that the answer to the first question is negative. We answer the second question by numerically solving the problem of optimizing the SSP coefficient over the class of methods with effective order $q$; see Section 5. Most of the methods we find are shown to be optimal, as they achieve a certain theoretical upper bound on the SSP coefficient that is obtained by considering only linear problems [19]. We answer the last question affirmatively by construction, also in Section 5. The paper concludes with numerical experiments in Section 6 and conclusions in Section 7.

2 Strong stability preserving Runge–Kutta methods

Strong stability preserving (SSP) time-stepping methods were originally introduced for time integration of systems of hyperbolic conservation laws [27]

$u_t + f(u)_x = 0,$   (2.1)

with appropriate initial and boundary conditions. A spatial discretization gives the system of ODEs

$u'(t) = F(u(t)),$   (2.2)

where $u$ is a vector of continuous-in-time grid values approximating the solution at discrete grid points. Of course, (2.2) can arise in many ways and need not necessarily represent a spatial discretization. In particular, $F$ may depend on time, but we can always transform the system to autonomous form. In any case, a time discretization then produces a sequence of solutions $u^n \approx u(t_n)$. This work studies explicit Runge–Kutta time discretizations. An explicit $s$-stage Runge–Kutta method takes the form

$u^{n+1} = u^n + \Delta t \sum_{j=1}^{s} b_j F(y_j),$

where

$y_i = u^n + \Delta t \sum_{j=1}^{i-1} a_{ij} F(y_j), \qquad i = 1, \dots, s.$

Such methods are characterized by the coefficient matrix $A = (a_{ij})$, the weight vector $b$, and the abscissae $c$, where $c_i = \sum_j a_{ij}$. The accuracy and stability of the method depend on the coefficients of the Butcher tableau [5].
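In code, one step of such a method reads directly off the tableau. The helper below is our own minimal sketch (the function names are not from the paper), shown with the three-stage Shu–Osher SSPRK(3,3) tableau:

```python
import numpy as np

def rk_step(F, u, dt, A, b):
    # One step of an explicit Runge-Kutta method with Butcher coefficients (A, b):
    # stages y_i = u + dt * sum_j a_ij F(y_j), then u + dt * sum_j b_j F(y_j).
    k = []
    for i in range(len(b)):
        y = u + dt * sum(A[i][j] * k[j] for j in range(i))
        k.append(F(y))
    return u + dt * sum(b[i] * k[i] for i in range(len(b)))

# Shu-Osher SSPRK(3,3) tableau.
A = [[0, 0, 0], [1, 0, 0], [0.25, 0.25, 0]]
b = [1/6, 1/6, 2/3]

# Integrate u' = -u from u(0) = 1 to t = 1; the exact value is exp(-1).
u, dt = np.array([1.0]), 0.01
for _ in range(100):
    u = rk_step(lambda v: -v, u, dt, A, b)
err = abs(u[0] - np.exp(-1))   # small: the method is third-order accurate
```

Because the loop only ever references earlier stages, the same helper works for any explicit tableau.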

In some cases, the solutions of hyperbolic conservation laws satisfy a monotonicity property. For example, if (2.1) is scalar then solutions are monotonic in the total variation semi-norm [15]. For this reason, many popular spatial discretizations are designed such that, for a suitable class of problems, the solution in (2.2) computed with the forward Euler scheme is non-increasing (in time) in some norm, semi-norm, or convex functional; i.e.,

$\|u + \Delta t\, F(u)\| \le \|u\| \quad \text{for all } 0 \le \Delta t \le \Delta t_{\mathrm{FE}}.$   (2.3)

If this is the case, then an SSP method also generates a solution whose norm is non-increasing in time, under a modified time-step restriction.
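As a small illustration of (2.3) in the TVD setting, first-order upwinding for $u_t + u_x = 0$ on a periodic grid keeps the total variation from growing under forward Euler whenever $\Delta t \le \Delta x$ (so here $\Delta t_{\mathrm{FE}} = \Delta x$). The discretization choices below are our own minimal example, not taken from the paper:

```python
import numpy as np

def tv(u):
    # total variation semi-norm on a periodic grid
    return np.sum(np.abs(np.roll(u, -1) - u))

def F(u, dx):
    # first-order upwind semi-discretization of u_t + u_x = 0 (periodic)
    return -(u - np.roll(u, 1)) / dx

m = 100
dx = 1.0 / m
u = (np.arange(m) < m // 2).astype(float)   # step (discontinuous) initial data
dt = dx                                      # the forward Euler TVD limit here
for _ in range(50):
    unew = u + dt * F(u, dx)
    assert tv(unew) <= tv(u) + 1e-12         # TV does not increase
    u = unew
print(tv(u))
```

With this step size the scheme simply shifts the profile one cell per step, so the total variation stays exactly at its initial value of 2.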

Definition 2.1 (Strong Stability Preserving).

A Runge–Kutta method is said to be strong stability preserving with SSP coefficient $\mathcal{C} > 0$ if, whenever the forward Euler condition (2.3) holds and

$\Delta t \le \mathcal{C}\, \Delta t_{\mathrm{FE}},$

the Runge–Kutta method generates a monotonic sequence of solution values satisfying $\|u^{n+1}\| \le \|u^n\|$.

Note that $\Delta t_{\mathrm{FE}}$ is a property of the spatial discretization and is independent of the time discretization. The SSP coefficient $\mathcal{C}$ is a property of the particular time-stepping method and quantifies the allowable time step size relative to that of the forward Euler method. Generally we want the SSP coefficient to be as large as possible for efficiency. To allow a fair comparison of explicit methods with different numbers of stages, we consider the effective SSP coefficient

$\mathcal{C}_{\mathrm{eff}} = \mathcal{C}/s.$

Note that the use of the word effective here is unrelated to the concept of effective order introduced in Section 3.

2.1 Optimal SSP schemes

We say that an SSP Runge–Kutta method is optimal if it has the largest possible SSP coefficient for a given order and a given number of stages. The search for these optimal methods was originally based on expressing the Runge–Kutta method as combinations of forward Euler steps (the Shu–Osher form) and solving a non-linear optimization problem [25, 26, 28, 29, 24, 22]. However, the SSP coefficient is related to the radius of absolute monotonicity [20] and, for irreducible Runge–Kutta methods, the two are equivalent [8, 14]. This gives a simplified algebraic characterization of the SSP coefficient [9]; it is the maximum value of $r$ such that the following conditions hold:

(2.4a) $K(I + rK)^{-1} \ge 0,$
(2.4b) $rK(I + rK)^{-1} e \le e,$

provided that $I + rK$ is invertible. Here

$K = \begin{pmatrix} A & 0 \\ b^T & 0 \end{pmatrix},$

while $e$ denotes the vector of ones of length $s + 1$ and $I$ is the $(s+1) \times (s+1)$ identity matrix. The inequalities are understood component-wise.
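These conditions lend themselves to a direct numerical check. The sketch below (our own helper, not code from the paper) finds the SSP coefficient by bisection on $r$, forming $K$ from a tableau $(A, b)$ and testing the component-wise inequalities (2.4); for the Shu–Osher SSPRK(3,3) method it recovers the known value $\mathcal{C} = 1$:

```python
import numpy as np

def ssp_coefficient(A, b, rmax=10.0, tol=1e-10):
    """Radius of absolute monotonicity of an explicit method (A, b),
    found by bisection on the component-wise conditions (2.4)."""
    s = len(b)
    K = np.zeros((s + 1, s + 1))
    K[:s, :s] = A
    K[s, :s] = b
    e = np.ones(s + 1)

    def feasible(r):
        # For explicit methods I + rK is unit lower triangular, hence invertible.
        P = K @ np.linalg.inv(np.eye(s + 1) + r * K)
        return P.min() >= -1e-12 and (r * (P @ e) <= e + 1e-12).all()

    lo, hi = 0.0, rmax
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

# Shu-Osher SSPRK(3,3): the known SSP coefficient is C = 1.
A = np.array([[0, 0, 0], [1, 0, 0], [0.25, 0.25, 0]])
b = np.array([1/6, 1/6, 2/3])
print(ssp_coefficient(A, b))   # approximately 1.0
```

Bisection suffices because the feasible set in $r$ is an interval $[0, \mathcal{C}]$.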

The optimization problem of finding optimal SSP Runge–Kutta methods can thus be written as follows:

(2.5) $\max_{A,\,b}\; r$ subject to (2.4a), (2.4b), and $\Phi(K) = 0$.

Here $\Phi(K)$ represents the order conditions.

Following [15, 18], we will numerically solve the optimization problem (2.5) to find optimal explicit SSP Runge–Kutta methods for various effective orders of accuracy. However, we first need to define the order conditions for these methods. This is discussed in the next section.

3 The effective order of Runge–Kutta methods

The definition, construction, and application of methods with an effective order of accuracy relies on the use of starting and stopping methods. Specifically, we consider a starting method $R$, a main method $M$, and a stopping method $T$. The successive use of these three methods results in a method $TMR$, which denotes the application of method $R$, followed by method $M$, followed by method $T$. The method $T$ is an "inverse" of method $R$. We want $TMR$ to have order $q$, whereas $M$ might have lower classical order $p$. We then say $M$ has effective order $q$.

When the method $TMR$ is used for $n$ steps,

$(TMR)^n = TM(RT)M(RT)\cdots(RT)MR,$

it turns out that only $M$ need be used repeatedly, as in $TM^nR$, because $RT$ leaves the solution unchanged up to order $q$. The starting method $R$ introduces a perturbation to the solution, which is followed by $n$ time steps of the main method $M$, and finally the stopping method $T$ is used to correct the solution. In Section 5.2, we propose alternative starting and stopping procedures which allow the overall procedure to be SSP.
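The stepping protocol just described (perturb once, take $n$ main steps, correct once) can be sketched as a driver loop. The function names here are hypothetical, not from the paper:

```python
def integrate(starting, main, stopping, u0, n):
    """Apply the starting method R once, then n steps of the main
    method M, then the stopping method T once (the T M^n R pattern).
    Each argument is a hypothetical one-step map u -> u."""
    u = starting(u0)
    for _ in range(n):
        u = main(u)
    return stopping(u)
```

For a method used at its plain classical order, `starting` and `stopping` are simply the identity map.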

The effective order of a Runge–Kutta method is defined in an abstract algebraic context introduced by Butcher [2] and developed further in [3, 13, 6, 4] and others. We follow the book [5] in our derivation of the effective order conditions.

3.1 The algebraic representation of Runge–Kutta methods

Tree $t_i$, elementary weight $\Phi(t_i)$, and density $\gamma(t_i)$, for trees of order up to five (the tree diagrams are not reproducible in this copy):

$t_0 = \emptyset$: $\Phi = 1$;
$t_1$: $\Phi = b^T e$, $\gamma = 1$;  $t_2$: $\Phi = b^T c$, $\gamma = 2$;
$t_3$: $\Phi = b^T c^2$, $\gamma = 3$;  $t_4$: $\Phi = b^T A c$, $\gamma = 6$;
$t_5$: $\Phi = b^T c^3$, $\gamma = 4$;  $t_6$: $\Phi = b^T C A c$, $\gamma = 8$;
$t_7$: $\Phi = b^T A c^2$, $\gamma = 12$;  $t_8$: $\Phi = b^T A^2 c$, $\gamma = 24$;
$t_9$: $\Phi = b^T c^4$, $\gamma = 5$;  $t_{10}$: $\Phi = b^T C^2 A c$, $\gamma = 10$;
$t_{11}$: $\Phi = b^T C A c^2$, $\gamma = 15$;  $t_{12}$: $\Phi = b^T C A^2 c$, $\gamma = 30$;
$t_{13}$: $\Phi = b^T (Ac)^2$, $\gamma = 20$;  $t_{14}$: $\Phi = b^T A c^3$, $\gamma = 20$;
$t_{15}$: $\Phi = b^T A C A c$, $\gamma = 40$;  $t_{16}$: $\Phi = b^T A^2 c^2$, $\gamma = 60$;
$t_{17}$: $\Phi = b^T A^3 c$, $\gamma = 120$.

Table 3.1: Elementary weights and densities of trees up to order five for a Runge–Kutta method with Butcher tableau $(A, b, c)$. Here $C$ is the diagonal matrix with components $c_i$ and exponents of vectors represent component-wise exponentiation. The exact-solution values are $E(t_i) = 1/\gamma(t_i)$. By convention $\Phi(\emptyset) = 1$, where $\emptyset$ denotes the empty tree.
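The first few elementary weights are easy to evaluate for a concrete tableau. The check below (our own sketch) verifies the classical order-three conditions $\Phi(t_i) = 1/\gamma(t_i)$ for the Shu–Osher SSPRK(3,3) method:

```python
import numpy as np

A = np.array([[0, 0, 0], [1, 0, 0], [0.25, 0.25, 0]])
b = np.array([1/6, 1/6, 2/3])
c = A.sum(axis=1)              # abscissae c = A e
e = np.ones(3)

# Elementary weights of trees t1..t4 and their densities gamma.
weights = {"t1": b @ e, "t2": b @ c, "t3": b @ c**2, "t4": b @ (A @ c)}
gammas = {"t1": 1, "t2": 2, "t3": 3, "t4": 6}

# Classical order three: Phi(t) = 1/gamma(t) for all trees of order <= 3.
residual = max(abs(weights[t] - 1 / gammas[t]) for t in weights)
print(residual)   # zero up to rounding
```

Componentwise powers of $c$ translate directly into NumPy elementwise operations, which is why the weights are one-liners.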

According to Butcher’s algebraic theory, irreducible Runge–Kutta methods are placed in one-to-one correspondence with elements of a group $G$, consisting of real-valued functions on the set of rooted trees [5, Theorem 384A]. A Runge–Kutta method corresponds to the map that takes each rooted tree to the corresponding elementary weight of that Runge–Kutta method. Table 3.1 lists the elementary weights for trees of up to degree five; a general recursive formula can be found in [5, Definition 312A]. The ordering of trees given in Table 3.1 is used throughout the remainder of this work; thus $t_i$ refers to the tree whose elementary weight appears in position $i$ of Table 3.1. For a function $\alpha \in G$ we write the values of the elementary weights as $\alpha_i$ for tree $t_i$. A special element $E \in G$ corresponds to the (hypothetical) method that evolves the solution exactly. The values of $E$ are $E(t_i) = 1/\gamma(t_i)$ [5], and the values of $\gamma$ are included in Table 3.1. Classical order conditions are obtained by comparing the elementary weights of a method with these values.

Let $\alpha, \beta \in G$ correspond to Runge–Kutta methods $M_1$ and $M_2$ respectively. The application of method $M_1$ followed by method $M_2$ corresponds to the multiplicative group operation $\alpha\beta$.¹ This product is defined by partitioning the input tree and computing over the resulting forest [5, § 383].

¹We write $M_2 M_1$ to mean the application of $M_1$ followed by the application of $M_2$ (following matrix and operator ordering convention), but when referring to products of elements of $G$ we use the reverse ordering ($\alpha\beta$) to match the convention in [5].

Two Runge–Kutta methods $M_1$ and $M_2$ are equivalent up to order $q$ if their corresponding elements in $G$, $\alpha$ and $\beta$, satisfy $\alpha(t) = \beta(t)$ for every tree $t$ with $|t| \le q$, where $|t|$ denotes the order of the tree (number of vertices). We denote this equivalence relation by $\alpha \equiv_q \beta$.

In this sense, methods have inverses: the product of a method and its inverse must match the identity method up to order $q$. Note that inverse methods up to order $q$ are not unique, and inverse methods of explicit methods need not be implicit. We can then define the effective order of accuracy of a method $M$ with starting method $R$ and stopping method $T$.
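As a concrete (numerical rather than algebraic) illustration of composing methods, the Butcher tableau of "one step of method 1 over $\theta h$, then one step of method 2 over $(1-\theta)h$" can be assembled directly. This is a standard composition construction sketched under our own naming, not code from the paper:

```python
import numpy as np

def rk_step(F, u, dt, A, b):
    # one explicit RK step from the tableau (A, b)
    k = []
    for i in range(len(b)):
        k.append(F(u + dt * sum(A[i][j] * k[j] for j in range(i))))
    return u + dt * sum(b[i] * k[i] for i in range(len(b)))

def compose(A1, b1, A2, b2, theta=0.5):
    # Tableau of "method 1 over theta*h, then method 2 over (1-theta)*h",
    # viewed as a single method over step h.
    s1, s2 = len(b1), len(b2)
    A = np.zeros((s1 + s2, s1 + s2))
    A[:s1, :s1] = theta * A1
    A[s1:, :s1] = theta * np.outer(np.ones(s2), b1)
    A[s1:, s1:] = (1 - theta) * A2
    return A, np.concatenate([theta * b1, (1 - theta) * b2])

# Two half-steps of the explicit midpoint method, composed into one tableau:
Am = np.array([[0.0, 0.0], [0.5, 0.0]]); bm = np.array([0.0, 1.0])
A, b = compose(Am, bm, Am, bm)
F = lambda u: -u + np.sin(u)
u0, h = np.array([0.7]), 0.1
one = rk_step(F, u0, h, A, b)
two = rk_step(F, rk_step(F, u0, 0.5 * h, Am, bm), 0.5 * h, Am, bm)
same = np.allclose(one, two)   # the composition reproduces the two steps
```

The elementary weights of the composed tableau are exactly the group product of the elementary weights of the two factors, which is what the algebraic product formula encodes.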

Definition 3.1.

[5, § 389] Suppose $M$ is a Runge–Kutta method with corresponding $\alpha \in G$. Then the method $M$ is of effective order $q$ if there exist methods $R$ and $T$ (with corresponding $\beta, \tilde{\beta} \in G$) such that

(3.0) $\beta\alpha\tilde{\beta} \equiv_q E,$

where $\tilde{\beta}$ is an inverse of $\beta$ up to order $q$; i.e.

$\beta\tilde{\beta} \equiv_q 1.$

Here $1$ is the identity element and $E$ is the exact evolution operator.

Effective order conditions
[Conditions not recoverable from this copy: each block, separated by order, expresses an elementary weight $\alpha_i$ of the main method in terms of the free parameters $\beta_j$ of the starting method, with the order-five block last; see [5, § 389] for the full list.]
Table 3.2: Effective order five conditions on $\alpha$ (main method $M$) in terms of order conditions on $\beta$ (starting method $R$). See also [5, § 389]. Recall that $\alpha_i$ and $\beta_i$ are the elementary weights associated with the index $i$ in Table 3.1. We assume that $\beta_1 = 0$ (see Section 3.2.1).

3.2 Effective order conditions

For the main method $M$ to have effective order $q$, its coefficients and those of the starting and stopping methods must satisfy a set of algebraic conditions. These effective order conditions can be found by rewriting (3.0) as $\beta\alpha \equiv_q E\beta$ and applying the group product operation. For trees up to order five these are tabulated in Table 3.2 (and also in [5, § 389]). In general, the effective order conditions allow more degrees of freedom for method design than do the classical order conditions. Note that the effective order conditions match the classical order conditions up to second order.

Remark 3.2.

The effective order conditions of the main method for the "tall" trees ($t_1$, $t_2$, $t_4$, $t_8$, $t_{17}, \dots$) match the classical order conditions, and these are precisely the order conditions for linear problems. This follows from inductive application of the group product on the tall trees. Therefore, methods of effective order $q$ have classical order at least $q$ for linear problems.
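The tall-tree conditions take the form $b^T A^{k-1} e = 1/k!$; they fix the coefficients of the method's stability polynomial, which is why they are the linear-problem order conditions. The helper below (our own sketch) returns the largest $k$ for which they hold:

```python
import math
import numpy as np

def linear_order(A, b, kmax=10, tol=1e-12):
    # Largest k with b^T A^(k-1) e = 1/k! (order for linear problems).
    v = np.ones(len(b))          # v = A^(k-1) e
    p = 0
    for k in range(1, kmax + 1):
        if abs(b @ v - 1 / math.factorial(k)) > tol:
            break
        p = k
        v = A @ v
    return p

# SSPRK(3,3) has classical, and hence linear, order three.
A = np.array([[0, 0, 0], [1, 0, 0], [0.25, 0.25, 0]])
b = np.array([1/6, 1/6, 2/3])
print(linear_order(A, b))   # 3
```

For an explicit $s$-stage method $A$ is strictly lower triangular, so $A^s = 0$ and the loop always terminates with $p \le s$.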

3.2.1 Order conditions of the main and starting methods

As recommended in [5], we consider the elementary weights $\beta_i$ of the starting method as free parameters when determining the elementary weights $\alpha_i$ of the main method. The relationship in Table 3.2 between the $\alpha_i$ and the $\beta_i$ is mostly linear (although there are a few nonlinear terms). It is thus straightforward to (mostly) isolate the equations for the $\alpha_i$ and determine the $\beta_i$ as linear combinations of the $\alpha_i$. This separation provides maximal degrees of freedom and minimizes the number of constraints when constructing the method $M$. The resulting effective order conditions for the main method are given in Table 3.3 (up to effective order five). For a specified classical and effective order, these are the equality constraints in the optimization problem (2.5) for method $M$.

Constructing the main method then determines the values $\alpha_i$, and we obtain a set of order conditions on $\beta$ (for that particular choice of $\alpha$). These are given in the right half of Table 3.3. We can also find the order conditions on $\tilde{\beta}$ in terms of the $\beta_i$ (see [5, Table 386(III)]). We note that increasing the classical order of the main method forces more of the $\alpha_i$ to take their classical values and thus, by Table 3.2, requires more of the $\beta_i$ to be zero.

Order conditions for main method $M$ | Order conditions for starting method $R$
[Conditions not recoverable from this copy: one block for each pair of effective order $q$ and classical order $p$, with $(q, p) = (3,2)$, $(4,2)$, $(4,3)$, $(5,2)$, $(5,3)$, $(5,4)$.]
Table 3.3: Effective order $q$ and classical order $p$ conditions on $\alpha$ and $\beta$ for the main and starting methods $M$ and $R$, respectively.

Tables 3.2 and 3.3 both assume that $\beta_1 = 0$ (i.e., the starting and stopping methods perturb the solution but do not advance it in time). This assumption is without loss of generality by [5, Lemma 389A], the proof of which shows that we can always find starting procedures with $\beta_1 \ne 0$ for which the main method has effective order $q$, whenever this holds for a starting method with $\beta_1 = 0$.

4 Explicit SSP Runge–Kutta methods have effective order at most four

The classical order of any explicit SSP Runge–Kutta method cannot be greater than four [23]. It turns out that the effective order of any explicit SSP Runge–Kutta method also cannot be greater than four, although the proof of this result is more involved. We begin by recalling a well-known result.

Lemma 4.1.

(see [20, Theorem 4.2], [23, Lemma 4.2]) Any irreducible Runge–Kutta method with positive SSP coefficient must have positive weights $b > 0$.

Irreducibility [7] is technically important in this result and those that follow because a reducible SSP method might not have positive weights (but it would be reducible to one that does, as per the lemma). The main result of this section is:

Theorem 4.2.

Any explicit Runge–Kutta method with positive weights has effective order at most four.

Proof.

Any method of effective order five must have classical order at least two (see [5] or Table 3.3). Thus it is sufficient to show that any method with all positive weights cannot satisfy the conditions of effective order five and classical order two.

Let $(A, b, c)$ denote the coefficients of an explicit Runge–Kutta method with effective order at least five, classical order at least two, and positive weights $b > 0$. The effective order five and classical order two conditions (see Table 3.3 with $q = 5$ and $p = 2$) include the following:

(4.1a) $b^T e = 1,$
(4.1b) $b^T c = \tfrac{1}{2},$
(4.1c) $b^T A c = \tfrac{1}{2}\, b^T c^2,$
(4.1d) $b^T (Ac)^2 - b^T \big(c^2 \cdot Ac\big) + \tfrac{1}{4}\, b^T c^4 = 0,$

where powers and products of vectors are understood component-wise. Let $x = Ac - \tfrac{1}{2} c^2$. Then (4.1c) says $b^T x = 0$, and (4.1d) says $b^T x^2 = 0$. By (4.1a) the weights $b_i$ are positive and sum to one, so $b^T x$ and $b^T x^2$ are strictly convex combinations of the components of $x$ and $x^2$, respectively. Jensen's inequality (with a strictly convex function, as is the case with the square function here) states $(b^T x)^2 \le b^T x^2$, with equality if and only if all components of $x$ are equal [1, Theorem 12, pg 31]. Here both sides vanish, so all components of $x$ are equal; since $x_1 = 0$ for every explicit method, we deduce $x = 0$, i.e., $Ac = \tfrac{1}{2} c^2$. That implies the method has stage order two, which is not possible for explicit methods [23]. This contradiction completes the proof. ∎
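The obstruction can be seen numerically. With $x = Ac - c^2/2$, the condition $b^T x = 0$ holds for any method of classical order three, but $b^T x^2$ is a positively weighted sum of squares; for the classical four-stage RK4 method (which has positive weights) it is strictly positive, so the pair of conditions in the proof cannot hold unless $x = 0$. This check is our own illustration, not from the paper:

```python
import numpy as np

# Classical four-stage RK4 tableau (positive weights b).
A = np.array([[0.0, 0,   0, 0],
              [0.5, 0,   0, 0],
              [0.0, 0.5, 0, 0],
              [0.0, 0,   1, 0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = A.sum(axis=1)

x = A @ c - c**2 / 2          # x = Ac - c^2/2, componentwise
first = b @ x                  # 0: holds for any order-three method
second = b @ x**2              # 1/96 > 0: equality in Jensen would force x = 0
print(first, second)
```

Here $x = (0, -1/8, 1/8, 0)$, so the first moment cancels while the second cannot.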

Corollary 4.3.

Let $M$ denote an irreducible explicit Runge–Kutta method with $\mathcal{C} > 0$. Then $M$ has effective order at most four.

Proof.

This follows immediately from Lemma 4.1 and Theorem 4.2. ∎

Remark 4.4.

It is worth noting here an additional result that follows directly from what we have proved. Using Theorem 4.2 and [7, Theorem 4.1], it follows that any irreducible explicit Runge–Kutta method with positive radius of circle contractivity has effective order at most four.

5 Optimal explicit SSP Runge–Kutta schemes with maximal effective order

In this section, we use the SSP theory and Butcher’s theory of effective order (Sections 2 and 3) to find optimal explicit SSP Runge–Kutta schemes with prescribed effective order and classical order. According to Corollary 4.3, there are no explicit SSPRK methods of effective order five, and therefore we need only consider methods with effective order up to four.

Recall from Section 3 that methods with an effective order of accuracy involve a main method $M$ as well as starting and stopping methods $R$ and $T$. In Section 5.2 we introduce a novel approach to the construction of starting and stopping methods in order to allow them to be SSP.

We denote by ESSPRK($s, q, p$) an $s$-stage explicit SSP Runge–Kutta method of effective order $q$ and classical order $p$. Also, we write SSPRK($s, p$) for an $s$-stage explicit SSP Runge–Kutta method of order $p$.

5.1 The main method

Our search is carried out in two steps, first searching for optimal main methods $M$ and then for possible corresponding methods $R$ and $T$. For a given number of stages, effective order, and classical order, our aim is thus to find an optimal main method, meaning one with the largest possible SSP coefficient $\mathcal{C}$.

To find a method ESSPRK($s, q, p$) with Butcher tableau $(A, b)$, we consider the optimization problem (2.5) with $\Phi(K)$ representing the conditions for effective order $q$ and classical order $p$ (as per Table 3.3). The methods are found through numerical search, using Matlab's optimization toolbox. Specifically, we use fmincon with a sequential quadratic programming approach [15, 18]. This process does not guarantee a global minimizer, so many searches from random initial guesses are performed to help find methods with the largest possible SSP coefficients.
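The search is easy to reproduce in miniature. The sketch below is an assumption about the setup, not the authors' Matlab code: it scans the one-parameter family of two-stage, second-order explicit methods and selects the member with the largest radius of absolute monotonicity, recovering the optimal SSPRK(2,2) method with $\mathcal{C} = 1$:

```python
import numpy as np

def ssp_coefficient(A, b, rmax=10.0, tol=1e-10):
    # Bisection on r using the component-wise conditions (2.4).
    s = len(b)
    K = np.zeros((s + 1, s + 1))
    K[:s, :s] = A
    K[s, :s] = b
    e = np.ones(s + 1)

    def feasible(r):
        P = K @ np.linalg.inv(np.eye(s + 1) + r * K)
        return P.min() >= -1e-12 and (r * (P @ e) <= e + 1e-12).all()

    lo, hi = 0.0, rmax
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

# Two-stage, second-order family: a21 = alpha, b = [1 - 1/(2 alpha), 1/(2 alpha)].
best = max(
    (ssp_coefficient(np.array([[0.0, 0.0], [a, 0.0]]),
                     np.array([1 - 1 / (2 * a), 1 / (2 * a)])), a)
    for a in np.linspace(0.5, 2.0, 151)
)
print(best)   # close to (1.0, 1.0): alpha = 1 gives SSPRK(2,2)
```

A real search replaces the grid scan by a constrained optimizer over all tableau entries, with the order conditions as equality constraints.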

5.1.1 Optimal SSP coefficients

Useful bounds on the optimal SSP coefficient can be obtained by considering an important relaxation. In the relaxed problem, the method is required to be accurate and strong stability preserving only for linear, constant-coefficient initial value problems. This leads to a reduced set of order conditions and a relaxed absolute monotonicity condition [19, 15, 16]. We denote by $\mathcal{C}^{\mathrm{lin}}_{s,q}$ the maximal SSP coefficient for linear problems (maximized over all methods with order $q$ and $s$ stages).

Let $\mathcal{C}_{s,p}$ denote the maximal SSP coefficient (relevant to non-linear problems) over all methods of $s$ stages with order $p$. Let $\mathcal{C}_{s,q,p}$ denote the object of our study, i.e. the maximal SSP coefficient (relevant to non-linear problems) over all methods of $s$ stages with effective order $q$ and classical order $p$. From Remark 3.2 and the fact that the ESSPRK($s,q,p$) methods form a superclass of the SSPRK($s,q$) methods, we have

(5.1) $\mathcal{C}_{s,q} \le \mathcal{C}_{s,q,p} \le \mathcal{C}^{\mathrm{lin}}_{s,q}.$

The effective SSP coefficients for methods with up to eleven stages are shown in Table 5.1. Recall from Section 4 that $q = 5$ implies a zero SSP coefficient, and from Section 3 that for $p = q$ the class of explicit Runge–Kutta methods with effective order $q$ is simply the class of explicit Runge–Kutta methods with order $q$. Therefore we consider only methods of effective order $q = 3$ and $q = 4$ with $p < q$. Exact optimal values of $\mathcal{C}_{s,q}$ and $\mathcal{C}^{\mathrm{lin}}_{s,q}$ are known for many classes of methods; for example, see [19, 15, 16]. Those results and (5.1) allow us to determine the optimal value of $\mathcal{C}_{s,q,p}$ a priori in the cases where $\mathcal{C}_{s,q} = \mathcal{C}^{\mathrm{lin}}_{s,q}$.

[Table rows not recoverable from this copy: effective SSP coefficients for methods with one to eleven stages, together with the optimal fourth-order linear SSP coefficients.]
Table 5.1: Effective SSP coefficients $\mathcal{C}_{\mathrm{eff}}$ of the best known ESSPRK($s, q, p$) methods. Entries in bold achieve the bound given by the linear SSP coefficient and are therefore optimal. A dash indicates that no method with positive SSP coefficient exists.

5.1.2 Effective order three methods

Since $\mathcal{C}_{s,3} = \mathcal{C}^{\mathrm{lin}}_{s,3}$ for the stage numbers considered, the optimal effective order three methods have SSP coefficients equal to those of the corresponding optimal classical order three methods. In the cases of three and four stages, we are able to determine exact coefficients for families of optimal methods of effective order three.

Theorem 5.1.

A family of optimal three-stage, effective order three SSP Runge–Kutta methods of classical order two, with SSP coefficient $\mathcal{C} = 1$, is given by

[coefficients not recoverable from this copy]

where one coefficient is a free parameter.

Theorem 5.2.

A family of optimal four-stage, effective order three SSP Runge–Kutta methods of classical order two, with SSP coefficient $\mathcal{C} = 2$, is given by

[coefficients not recoverable from this copy]

where one coefficient is a free parameter.

Proof.

In either theorem, feasibility can be verified by direct calculation of the conditions in problem (2.5). Optimality follows because the SSP coefficient attains the bound $\mathcal{C}^{\mathrm{lin}}_{s,3}$. ∎

Theorem 5.1 gives a family of three-stage methods. A particular value of the free parameter recovers the classical Shu–Osher SSPRK(3,3) method [25]. Similarly, in Theorem 5.2 a particular value of the free parameter recovers the usual SSPRK(4,3) method. It seems possible that, for each number of stages, the ESSPRK($s,3,2$) methods form a family in which an optimal SSPRK($s,3$) method is a particular member.

5.1.3 Effective order four methods

The ESSPRK($s,4,p$) methods can have classical order $p = 2$ or $p = 3$. In either case, for larger stage numbers the methods found are optimal because the SSP coefficient attains the upper bound $\mathcal{C}^{\mathrm{lin}}_{s,4}$. For the remaining stage numbers, the new methods still have SSP coefficients up to 30% larger than those of explicit SSPRK($s,4$) methods. In the particular case of four-stage methods we have the following:

Remark 5.3.

In contrast with the non-existence of an SSPRK(4, 4) method [25, 23], we are able to find ESSPRK(4, 4, 2) and ESSPRK(4, 4, 3) methods. The coefficients of these methods are found in Tables 5.3 and 5.4.

Additionally, we find two families of methods with effective order four, for which the effective SSP coefficient asymptotically approaches unity as the number of stages grows. The families consist of second order methods whose SSP coefficients attain the bound of [19, Theorem 5.2(c)], and they are therefore optimal. It is convenient to express the coefficients in the modified Shu–Osher form [10] because of the sparsity of the coefficient matrices; the non-zero elements follow a simple pattern in the number of stages (the explicit formulas are not recoverable from this copy). In [10, § 6.2.2], a similar pattern was found for second-order SSPRK methods.

5.2 Starting and stopping methods

Given an ESSPRK($s,q,p$) scheme to be used as the main method $M$, we want to find perturbation methods $R$ and $T$ such that the Runge–Kutta scheme $TM^nR$ attains classical order $q$, equal to the effective order of method $M$. We also want the resulting overall process to be SSP. However, at least one of the methods $R$ and $T$ is not SSP: if $\beta_1 = 0$ then the weights of each of $R$ and $T$ sum to zero, which implies the presence of at least one negative weight, and thus neither scheme can be SSP. Even if we consider methods with $\beta_1 \ne 0$, one of $R$ or $T$ must step backwards in time and thus that method cannot be SSP (unless we consider the downwind operator [24, 11, 17]).

In order to overcome this problem and achieve "bona fide" SSPRK methods with an effective order of accuracy, we need to choose different starting and stopping methods. We consider methods $\tilde{R}$ and $\tilde{T}$ which each take a positive step, such that $\tilde{R} \equiv_q MR$ and $\tilde{T} \equiv_q TM$. That is, the order conditions of $\tilde{R}$ and $\tilde{T}$ must match those of $MR$ and $TM$, respectively, up to order $q$. This gives a new scheme, $\tilde{T}M^{n-2}\tilde{R}$, which is equivalent up to order $q$ to the scheme $TM^nR$ and attains classical order $q$. Each starting and stopping procedure now takes a positive step forward in time.

To derive order conditions for the methods $\tilde{R}$ and $\tilde{T}$, consider their corresponding functions in the group $G$ to be $\beta_R$ and $\beta_T$, respectively. Then the equivalence is expressed as

(5.2) $\beta_R \equiv_q \beta\alpha, \qquad \beta_T \equiv_q \alpha\tilde{\beta}.$

Rewriting the second condition in (5.2) as $\beta_T\beta \equiv_q \alpha$, the order conditions for the starting and stopping methods can be determined by the usual product formula and are given in Table 5.2. These conditions could be constructed more generally, but here we have assumed $\beta_1 = 0$ (see Section 3.2.1); this will be sufficient for constructing SSP starting and stopping methods.

[Table content not recoverable from this copy; the upper block lists the effective order three conditions and the lower block the additional effective order four conditions.]
Table 5.2: Order conditions on $\beta_R$ and $\beta_T$ up to effective order four for the starting and stopping methods $\tilde{R}$ and $\tilde{T}$, respectively. As in Tables 3.2 and 3.3, we assume $\beta_1 = 0$.

5.2.1 Optimizing the starting and stopping methods

It turns out that the order conditions from (5.2) do not contradict the SSP requirements. We can thus find methods $\tilde{R}$ and $\tilde{T}$ using the optimization procedure described in Section 2.1, with the order conditions given by Table 5.2 playing the role of $\Phi(K)$ in (2.5).

The values of $\alpha_i$ are determined by the main method $M$. Also note that, for effective order $q$, the algebraic expressions on $\beta$ up to order $q - 1$ are already determined by the optimization procedure for the main method (see Table 3.3). However, the values of the order $q$ elementary weights on $\beta$ are not known; these are $\beta_3$ and $\beta_4$ for effective order three, and $\beta_5$, $\beta_6$, $\beta_7$ and $\beta_8$ for effective order four. From Table 5.2, we see that both the