Optimally convergent HDG method for third-order Korteweg-de Vries type equations


Bo Dong Department of Mathematics, University of Massachusetts Dartmouth, 285 Old Westport Road, North Dartmouth, MA 02747, USA bdong@umassd.edu
Abstract.

We develop and analyze a new hybridizable discontinuous Galerkin (HDG) method for solving third-order Korteweg-de Vries type equations. The approximate solutions are defined by a discrete version of a characterization of the exact solution in terms of the solutions to local problems on each element, which are patched together through transmission conditions on element interfaces. We prove that the semi-discrete scheme is stable for proper choices of the stabilization function in the numerical traces. For the linearized equation, we carry out an error analysis and show that the approximations to the exact solution and its derivatives have optimal convergence rates. In numerical experiments, we use an implicit scheme for time discretization and the Newton-Raphson method for solving the systems of nonlinear equations, and we observe optimal convergence rates for both the linear and the nonlinear third-order equations.

2000 Mathematics Subject Classification:
Primary 65M60, 65N30

1. Introduction

In this paper, we develop and analyze a new hybridizable discontinuous Galerkin (HDG) method for the following initial-boundary value problem of the Korteweg-de Vries (KdV) type equation on a finite domain

(1.1)

Here and , where is a constant and an integer. The well-posedness of problem (1.1) and the properties of its solution have been studied both theoretically and numerically; see [4, 18, 3, 5, 17, 30] and the references therein.

KdV type equations play an important role in applications, such as fluid mechanics [26, 7, 25], nonlinear optics [1, 19], acoustics [28, 33], plasma physics [6, 37, 32, 29], and Bose-Einstein condensates [31, 21], among other fields. They have also had an enormous impact on the development of nonlinear mathematical science and theoretical physics, and many modern research areas were opened up as a consequence of basic research on KdV equations. Due to their importance in applications and theoretical studies, there has been considerable interest in developing accurate and efficient numerical methods for KdV equations. In particular, there has been an ongoing effort over the last decade to develop discontinuous Galerkin (DG) methods for KdV type equations. The first DG method for the KdV equation, the local discontinuous Galerkin (LDG) method, was introduced in 2002 by Yan and Shu in [36] and further studied for the linear case in [23, 34, 35, 20]. In [10], a DG method for the equation was devised by using repeated integration by parts. Recently, several conservative DG methods [2, 9, 22] were developed for KdV type equations to preserve quantities such as the mass and the -norm of the solutions. When solving KdV equations, one can use these DG methods for spatial discretization together with explicit time-marching schemes if the coefficient of the third-order derivative is very small. However, when this coefficient is of order one, for example, implicit time-marching methods may be the better choice.

Traditional DG methods, despite their prominent features such as -adaptivity and local conservativity, have been criticized for having a larger number of degrees of freedom than continuous finite element methods when solving steady-state problems or problems that require implicit-in-time solvers. Here, we develop an HDG method which is very well suited to solving KdV equations when implicit time-marching is used. HDG methods [13, 11, 15, 14] were first introduced for diffusion problems, and they provide optimal approximations to both the potential and the flux. Because the globally coupled degrees of freedom live only on element interfaces, these methods are significantly advantageous for solving steady-state problems or time-dependent problems that require implicit time-marching. In [8], we introduced the first family of HDG methods for stationary third-order linear equations, which allow the approximations to the exact solution and its derivatives and to have different polynomial degrees. We proved superconvergence properties of these methods for the projections of the errors and the numerical traces, and numerical results indicate that the HDG method using the same polynomial degree for all three variables is quite robust with respect to the choice of the stabilization function and provides a convergent postprocessed solution of order with the fewest degrees of freedom. This suggests that the HDG method using the same polynomial degree for all variables is the method of choice for solving one-dimensional third-order problems. Therefore, in this paper we extend this HDG method to time-dependent third-order KdV type equations.

To construct the HDG method for KdV equations, we follow the approach used in [8] for stationary third-order equations. That is, given any mesh of the domain, we show that the exact solution can be obtained by solving the equation on each element with prescribed boundary data determined by transmission conditions. We then define the HDG method by a discrete version of this characterization, which ensures that the only globally coupled degrees of freedom are those associated with the numerical traces on element interfaces. In [8], it was shown that HDG methods derived by providing boundary data to the local problems in different ways are in fact equivalent to each other when the stabilization function is finite and nonzero. So here we only need to consider the one that takes the numerical trace of at both ends of the interval and the numerical trace of at the right end as boundary data for the local problems. Our method is different from the HDG method in [27], which was designed from an implementation point of view. That HDG method involves two sets of numerical traces for , and no error analysis was provided for it.

Our way of devising HDG methods from the characterization of the exact solution allows us to carry out stability and error analysis. We first apply an energy argument to find conditions on the stabilization function in the numerical traces under which the HDG method has a unique solution for KdV type equations. Then, by deriving four energy identities and combining them, we prove that the method gives optimal approximations to as well as its derivatives and for linear equations; this technique is similar to that in [35]. In the implementation, implicit time-marching schemes such as BDF or DIRK methods can be used, and at each time step a stationary third-order equation is solved by the HDG method together with the Newton-Raphson method (see Appendix A). Due to the one-dimensional setting of the KdV equations, the global matrix of the HDG method that needs to be numerically inverted at each time step is independent of the polynomial degree of the approximations, its size is only , where is the number of intervals of the mesh, and its condition number is of the order of , where denotes the size of the intervals of the mesh.
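To make this concrete, the following Python fragment sketches how one implicit time step can be organized. It is only a schematic illustration, assuming a backward-Euler step; the routines assemble_global_residual and assemble_global_jacobian are hypothetical placeholders for the condensed trace-only system and do not reproduce the actual implementation described in Appendix A.

import numpy as np

def implicit_hdg_step(traces, u_prev, dt, assemble_global_residual,
                      assemble_global_jacobian, tol=1e-10, max_iter=25):
    """One schematic backward-Euler step with a Newton-Raphson iteration.

    Only the numerical traces on the element interfaces are globally coupled,
    so the Newton iteration acts on a small vector whose length is
    proportional to the number of mesh intervals, independent of the
    polynomial degree used inside the elements.
    """
    lam = traces.copy()
    for _ in range(max_iter):
        # Residual of the condensed (trace-only) nonlinear system at the new
        # time level; the element-local problems are eliminated inside.
        residual = assemble_global_residual(lam, u_prev, dt)
        if np.linalg.norm(residual) < tol:
            break
        # Jacobian of the condensed system; in one space dimension it is a
        # small banded matrix.
        jacobian = assemble_global_jacobian(lam, u_prev, dt)
        lam = lam - np.linalg.solve(jacobian, residual)
    return lam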

The paper is organized as follows. In Section 2, we define the HDG method for third-order KdV type equations and state and discuss our main results. The details of all the proofs are given in Section 3. We show numerical results in Section 4 and some concluding remarks in Section 5. The details on implementation of the method are in Appendix A.

2. Main Results

In this section, we state and discuss our main results. We begin by describing the characterizations of the exact solution that the HDG method is a discrete version of. We then introduce our HDG method for KdV type equations, and state our stability result and optimal a priori error estimate.

2.1. Characterizations of the exact solution

To display the characterizations of the exact solution we are going to work with, let us first rewrite our third-order model equation as the following first-order system:

(2.1a)
with the initial and boundary conditions
(2.1b)
(2.1c)
(2.1d)

We partition the domain as

and introduce the set of the boundaries of its elements, . We also set , and .
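For orientation, a standard way of writing such a partition and the associated notation (a sketch of the usual conventions, with the symbols chosen only for illustration) is
\[
\Omega = (a,b), \qquad a = x_0 < x_1 < \cdots < x_N = b, \qquad I_j = (x_{j-1},x_j), \qquad h_j = x_j - x_{j-1}, \qquad h = \max_{1\le j\le N} h_j .
\]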

We know that, when is smooth enough, if we provide the values and and, for each , solve the local problem

then coincides with the solution of (2.1) if and only if the transmission conditions

and the boundary conditions

are satisfied. There are other possible characterizations of the exact solution corresponding to different choices of boundary data for the local problem; see [8]. Note that for these characterizations, the boundary data of the local problems are the unknowns of a global problem obtained from the transmission conditions and boundary conditions, and the system of equations for the global unknowns is square.

2.2. HDG method

To define our HDG method, we first introduce the finite element spaces to be used. We let the approximations to be in the space where

Here is the space of polynomials of degree at most on the domain . For any function lying in , we denote its values on by (or simply ) and (or simply ). Note that is not necessarily equal to . In contrast, for any in the space , its value at , (or simply ) is uniquely defined; in this case, or mean nothing but .
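In the usual notation for broken polynomial spaces (a sketch, with the symbols chosen only for illustration), the two one-sided values of a function $v$ at an interior node $x_j$ are
\[
v(x_j^{-}) = \lim_{\epsilon \to 0^{+}} v(x_j - \epsilon), \qquad v(x_j^{+}) = \lim_{\epsilon \to 0^{+}} v(x_j + \epsilon),
\]
and these need not coincide, since $v$ may jump across $x_j$.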

To obtain the HDG formulation, we use a discrete version of the characterization of the exact solution. Assuming that the values and are given, for each , we solve a local problem on the element by using a Galerkin method. To describe it, let us introduce the following notation. By , we denote the integral of times on the interval , and by we simply mean the expression . Here denotes the outward unit normal to : and .
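Written out under the assumption that the element is $I_j=(x_{j-1},x_j)$ (a sketch of the standard convention), these pairings are
\[
(w,v)_{I_j} = \int_{I_j} w(x)\,v(x)\,dx, \qquad \langle \zeta, v\,n \rangle_{\partial I_j} = \zeta(x_j^{-})\,v(x_j^{-}) - \zeta(x_{j-1}^{+})\,v(x_{j-1}^{+}),
\]
since the outward unit normal satisfies $n=1$ at the right endpoint and $n=-1$ at the left endpoint of the element.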

On the element , we give and the boundary data and and take the HDG approximate solutions to be the solution of the equations

for all , where the remaining undefined numerical traces are given by

The functions , and are defined on and are called the components of the stabilization function; they have to be properly chosen to ensure that the above problem has a unique solution. In particular, due to the nonlinearity of , the function can be nonlinear in terms of and . In the case of , we simply take .

It remains to impose the transmission conditions

and the boundary conditions

Here, . This completes the definition of the HDG methods using the characterization of the exact solution. Note that this way of defining the HDG methods immediately provides a way to implement them.
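As an illustration of this remark, the following Python fragment sketches the generic two-stage structure of an HDG solver for a linear stationary problem: a square global system in the interface traces (coming from the transmission and boundary conditions), followed by independent element-local solves. It is only a schematic sketch; the function names assemble_trace_system and solve_local_problem are hypothetical placeholders, and the concrete formulas are those given in Appendix A.

import numpy as np

def hdg_solve(mesh, boundary_data, assemble_trace_system, solve_local_problem):
    # Stage 1: assemble and solve the square global system whose unknowns are
    # only the numerical traces on the element interfaces; its size is
    # proportional to the number of elements.
    A, b = assemble_trace_system(mesh, boundary_data)
    traces = np.linalg.solve(A, b)

    # Stage 2: with the traces known, recover the approximate solution inside
    # each element by solving the local problems independently.
    local_solutions = [solve_local_problem(element, traces) for element in mesh]
    return traces, local_solutions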

On the other hand, the above presentation of the HDG methods is not very well suited for their analysis. Thus, we now rewrite it in a more compact form using the notation

Let

The approximation provided by the HDG method, , is the element of which solves the equations

(2.2a)
(2.2b)
(2.2c)
and
(2.2d)
for all , where, on , we have
(2.2e)

It is not difficult to define HDG methods associated with other characterizations of the exact solution, but these methods are actually the same, provided that the corresponding stabilization function allows for the transition from one characterization to the other; see [16, 8]. In fact, the choice of characterization is more relevant to the actual implementation of the HDG method than to its definition. The implementation of the HDG method (2.2) is discussed in the Appendix.

When the above scheme is discretized in time, we can choose the initial approximation () to be the HDG approximate solutions of the stationary equation where and is the initial data of the time-dependent problem (1.1); see [8] for HDG methods for stationary third-order equations. The initial approximation , is the element of which solves the equations

for all , where , and are defined in the same way as , and in (2.2e). Note that the equations above are almost the same as those in (2.2), except for the third one. This way of choosing initial data for time-dependent problems by solving the corresponding stationary problems has been used in [12, 9].

Next, we present our stability result and a priori error estimate of the HDG method under some conditions on the stabilization function.

2.3. Stability

To discuss the -stability of the HDG method, we let

We have the following stability result.

Theorem 2.1.

Assume that . If the stabilization function satisfies

(2.3)

then for the HDG method (2.2), we have

Note that if the nonlinear term , then we have and the condition (2.3) in the theorem above simplifies to

(2.4)

If , we just need to have and take and to satisfy (2.4). Since

where , the stabilization function satisfies the condition if

For other choices of which satisfy the condition , see [24].

2.4. A priori error estimate for linear equations

Now we consider the convergence properties of our HDG method for linear equations in which . We proceed as follows. We first define an auxiliary projection and state its optimal approximation property. Then, we provide an estimate for the -norm of the projections of the errors in the primary and auxiliary variables.

Let us introduce a key auxiliary projection that is tailored to the numerical traces. The projection of the function , , is defined as follows. On an element , the projection is the element of which solves the following equations:

(2.5a)
(2.5b)
(2.5c)
(2.5d)
(2.5e)
(2.5f)

where we use the notation for , and . Note that the last three equations have exactly the same structure as the numerical traces of the HDG method in (2.2e).

The following result on the optimal approximation properties of the projection was shown in [8]. To state it, we use the following notation. The -norm is denoted by . We drop the first subscript if , and the second one if or .

Lemma 2.2.

Suppose that

(2.6)

Then the projection in (2.5) is well defined on any interval . In addition, if and are constants, we have that, for and , there is a constant such that

provided .
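For orientation, optimal approximation results of this kind typically take the following form (a sketch of the generic statement, not a verbatim reproduction of the estimate proved in [8]): for a sufficiently smooth function $w$ and approximations of polynomial degree $k$,
\[
\| \Pi w - w \|_{I_j} \le C\, h^{k+1}\, | w |_{H^{k+1}(I_j)},
\]
so the projection converges with the optimal order $k+1$ in the $L^2$-norm.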

Next, we provide estimates for the -norm of the projection of the errors

and deduce from them the estimates for the -norm of the errors

Theorem 2.3.

Suppose that in the problem (2.1) and the exact solution . If the stabilization function of the HDG method (2.2) satisfies the condition

(2.7)

then for and small enough, we have

where is independent of .

It is easy to see that if the stabilization function satisfies the condition (2.7), then it also satisfies the conditions (2.4) and (2.6). Using Lemma 2.2, Theorem 2.3 and the triangle inequality, we immediately get the following error estimate for the actual errors.

Theorem 2.4.

Suppose that the hypotheses of Theorem 2.3 are satisfied. Then we have

where is independent of .

3. Proofs

In this section, we provide detailed proofs of our main results. We first prove Theorem 2.1 on the -stability of the HDG method for general KdV type equations. Then we combine several energy identities to prove the error estimate in Theorem 2.3 for linear third-order equations.

3.1. -stability

Now let us prove Theorem 2.1 on the stability of the HDG method for the KdV equation. We treat the nonlinear term in a way similar to that in [24].
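The standard device for this term, used below, is the following identity (written here as a sketch, under the assumption that $\tilde f$ denotes an antiderivative of the flux, $\tilde f' = f$): for any function $w$ that is smooth on the element $I_j=(x_{j-1},x_j)$,
\[
\int_{I_j} f(w)\, w_x \, dx = \tilde f\big(w(x_j^{-})\big) - \tilde f\big(w(x_{j-1}^{+})\big),
\]
which converts the element integral of the nonlinear term into boundary contributions that can be combined with the numerical traces.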

Proof.

Taking and in (2.2a)–(2.2c) and adding the three equations together, we get

Using integration by parts and (2.2d), we have

(3.1)

Let be such that . It is easy to see that

Using it for the second term on the right hand side of (3.1), we get that

where

Next, we just need to show that . Let

Using the definition of in (2.2e), we have

By the definition of and in (2.2e), we get

It is easy to check that if the stabilization function satisfies the condition (2.3), then we get and . This shows that

3.2. Error analysis

In this section, we prove the optimal error estimate for the projections of the errors in Theorem 2.3 for linear equations with . First, we obtain the equations for the projection of the errors.

3.2.1. The error equations

From the equations defining the HDG method, (2.2a)–(2.2c), and the fact that the exact solution also satisfies these equations, we obtain the following error equations

for all , where for , and . From (2.2e) and (2.2d), it is easy to see that

and

for all . Now we set

and let

(3.2)