Iterative Galerkin Discretizations for Strongly Monotone Problems


Scott Congreve
Fakultät für Mathematik, Universität Wien
Oskar-Morgenstern-Platz 1, A-1090 Wien, Austria
scott.congreve@univie.ac.at

and

Thomas P. Wihler
Mathematisches Institut, Universität Bern
Sidlerstrasse 5, CH-3012 Bern, Switzerland
wihler@math.unibe.ch
Abstract.

In this article we investigate the use of fixed point iterations to solve the Galerkin approximation of strongly monotone problems. As opposed to Newton's method, which uses information from the previous iterate to reassemble (and thereby recompute) the linearised iteration matrix in each step, the alternative method used in this article exploits the monotonicity properties of the problem, and only needs the iteration matrix to be computed once for all iterations of the fixed point method. We outline the abstract a priori and a posteriori analysis for the iteratively obtained solutions, and apply it to a finite element approximation of a second-order elliptic quasilinear boundary value problem. We show, both theoretically and numerically, how the number of iterations of the fixed point method can be restricted, depending on the mesh size or on the polynomial degree, in order to obtain optimal convergence. Using the a posteriori error analysis, we also devise an adaptive algorithm for the generation of a sequence of Galerkin spaces (adaptively refined finite element meshes in the concrete example) which minimises the number of iterations on each space.

Key words and phrases:
Banach fixed point methods; finite element methods; monotone problems; quasilinear PDEs; nonlinear elliptic PDEs; adaptive mesh refinement.
2010 Mathematics Subject Classification:
65N30
Both authors acknowledge the support of the Swiss National Science Foundation (SNF).

1. Introduction

In this paper we study Galerkin approximations of strongly monotone problems of the form:

(1.1)

Here, is a real Hilbert space, with inner product denoted by  and induced norm . Furthermore, is a possibly nonlinear form such that, for any , the mapping  is linear and bounded. Moreover, we suppose that  satisfies

  1. the strong monotonicity property

    (P1)

    for a constant , and

  2. the Lipschitz continuity condition

    (P2)

    with a constant .

Under these assumptions, there exists a unique solution  of the weak formulation (1.1); see, e.g., [14, Theorem 2.H] or [11]. In addition, the solution can be obtained as the limit of a sequence resulting from the fixed point iteration

(1.2)

for an arbitrary initial value . Indeed, defining the contraction constant

(1.3)

there holds the a priori bound

(1.4)

for the iteration (1.2), i.e., .
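Although the excerpt suppresses the concrete symbols, the contraction argument behind (1.3) and (1.4) is standard; the following sketch assumes the usual notation (Hilbert space $X$, form $a$, right-hand side $\ell$, operator $A\colon X\to X$ with $(Au,v)_X=a(u;v)$, and damping parameter $\delta=\nu/L^2$) — these names are assumptions of the sketch, not fixed by the text:

```latex
% Sketch under assumed notation: the iteration (1.2) is the fixed point map
%   T u = u - \delta\,(A u - R\ell),
% where R\ell denotes the Riesz representative of \ell. For u, w \in X,
% expanding the squared norm gives
\begin{align*}
\|Tu - Tw\|_X^2
  &= \|u - w\|_X^2
     - 2\delta \big( a(u; u - w) - a(w; u - w) \big)
     + \delta^2 \|Au - Aw\|_X^2 \\
  &\le \big( 1 - 2\delta\nu + \delta^2 L^2 \big) \|u - w\|_X^2,
\end{align*}
% by (P1) and (P2). The bracket is minimized by the choice \delta = \nu/L^2,
% which yields the contraction factor q = \sqrt{1 - \nu^2/L^2} < 1; Banach's
% fixed point theorem then gives existence, uniqueness, and the a priori
% bound (1.4).
```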

Restricting the iteration (1.2) to a finite-dimensional linear subspace  leads to an iterative Galerkin approximation scheme for (1.1). More precisely, we consider, for an initial guess  and , the iteration

(1.5)

where  and  are the constants from (P1) and (P2), respectively. We emphasize that the problem of finding  from  in the iteration scheme (1.5) is linear and uniquely solvable. As for (1.2) and (1.1), the fixed point iteration (1.5) converges to the (unique) solution  of the Galerkin formulation

(1.6)

Furthermore, we note the a priori bound

(1.7)

analogous to (1.4).
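The damped iteration above can be mimicked on a one-dimensional toy problem. The nonlinearity, the constants, and all numbers in the following sketch are assumptions chosen for illustration; they do not come from the paper:

```python
import math

# Toy strongly monotone problem (an assumed example, not from the text):
# solve A(u) = f, where A(u) = 2u + sin(u). Since A'(u) = 2 + cos(u),
#   nu = 1  (A' >= 1): strong monotonicity (P1)
#   L  = 3  (A' <= 3): Lipschitz continuity (P2)
def A(u):
    return 2.0 * u + math.sin(u)

nu, L = 1.0, 3.0
delta = nu / L**2                   # damping parameter of the iteration
q = math.sqrt(1.0 - nu**2 / L**2)   # contraction constant, cf. (1.3)

def fixed_point_solve(f, u0=0.0, n_iter=400):
    """Damped fixed point iteration u <- u - delta * (A(u) - f)."""
    u = u0
    for _ in range(n_iter):
        u = u - delta * (A(u) - f)
    return u

u_star = fixed_point_solve(1.0)
print("residual |A(u*) - f| =", abs(A(u_star) - 1.0))
print("contraction constant q =", q)   # q < 1, so the iteration converges
```

Note that the choice of damping parameter is deliberately pessimistic; in this scalar example the observed per-step error reduction is in fact better than the worst-case factor q.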

In solving nonlinear differential equations numerically, two approaches are commonly employed. Either the nonlinear problem under consideration is discretized by means of a suitable numerical scheme, thereby resulting in a (finite-dimensional) nonlinear algebraic system, or the differential equation is approximated by a sequence of (locally) linearized problems which are discretized subsequently. The latter approach is attractive from both a computational and an analytical viewpoint; indeed, working with a sequence of linear problems allows the application of linear solvers as well as the use of a linear numerical analysis framework (e.g., in deriving error estimates). In the context of the fixed point linearization (1.5), yet another benefit comes into play: solving for  from  involves setting up and inverting a mass matrix on the left-hand side of (1.5). We emphasize that this matrix is the same for all iterations; hence, it only needs to be computed once (on a given Galerkin space).

The idea of approximating nonlinear problems within a linear Galerkin framework has been applied in a variety of works. For example, in the article [5], the authors have considered general linearizations of strongly monotone operators, and have derived computable estimators for the total error (consisting of the linearization error and the Galerkin approximation error), with identifiable components for each of the error sources. A more specific linearization approach for monotone problems, which is based on the Newton method, has been presented in [7]. In a related context, linear finite element approximations resulting from adaptive Newton linearization techniques as applied to semilinear problems have been investigated in the papers [1, 2]. Finally, we remark that the linear Galerkin approximation approach for nonlinear problems is not only employed for the purpose of obtaining linearized schemes, but also to address the issue of modelling errors in linearized models; see, e.g., [4, 8].

The aim of the current paper is to derive a priori and a posteriori error bounds for the Galerkin iteration method (1.5). Our error estimates are expressed as the sum of the linearization error resulting from the fixed point formulation and the Galerkin approximation error. In particular, based on the a posteriori error analysis, we will develop an adaptive solution procedure for the numerical solution of (1.1) that features an appropriate interplay between the fixed point iterations and possible Galerkin space enrichments (e.g., mesh refinements for finite elements); specifically, our scheme selects between these two options depending on whichever constitutes the dominant part of the total error. In this way, we aim to keep the number of fixed point iterations at a minimum in the sense that no unnecessary iterations are performed if they are not expected to contribute a substantial reduction of the error on the current Galerkin space.

The outline of the rest of this article is as follows. In Section 2 we derive an abstract analysis for the fixed point iteration (1.5), which includes the derivation of both a priori and a posteriori error bounds; in addition, we formulate an abstract adaptive procedure. The purpose of Section 3 is the application of our abstract theory to the finite element approximation of a second-order quasilinear elliptic diffusion-reaction boundary value problem; in particular, we derive a fully adaptive algorithm based on suitable a posteriori error estimates, and provide a series of numerical experiments. Finally, in Section 4 we summarise the work presented and draw some conclusions.

2. Abstract analysis

2.1. Fixed point Galerkin approximation

As previously discussed, we let  be a finite dimensional linear subspace of a Hilbert space . Then, in order to approximate the solution of (1.1), we consider the Galerkin solution  defined in (1.6). For the purpose of calculating  we consider, in turn, the discrete and linear fixed point iteration scheme (1.5). Evidently, this is equivalent to a linear algebraic system of equations. More precisely, using basis functions , for , where is the number of degrees of freedom, and letting

for some unknown coefficients , we obtain the linear system version of the fixed point iteration (1.5):

Here, is the iteration matrix defined by , and , with

is the vector form of . We can see that the iteration matrix does not depend on the iteration number ; hence, it only needs to be calculated once for all iterations of the fixed point method (on a given Galerkin space).
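This observation can be made concrete in a small, self-contained sketch; the 2×2 matrix, the nonlinearity, and the damping parameter below are illustrative stand-ins, not the paper's FEM quantities. The point is structural: the iteration matrix is factorized once, and every fixed point step reuses the factorization.

```python
import math

# Schematic coefficient-vector form of the iteration:
#   M U^{n+1} = M U^n - delta * (F(U^n) - b),
# with M a stand-in SPD "mass" matrix, F a stand-in monotone nonlinearity.

M = [[2.0, 1.0], [1.0, 2.0]]   # illustrative SPD iteration matrix
b = [1.0, 1.0]

def F(U):
    # component-wise monotone nonlinearity (assumed, with nu = 1, L = 3)
    return [2.0 * u + math.sin(u) for u in U]

def cholesky(A):
    """Return lower-triangular G with A = G G^T (computed once)."""
    n = len(A)
    G = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(G[i][k] * G[j][k] for k in range(j))
            G[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / G[j][j]
    return G

def solve_chol(G, r):
    """Solve G G^T x = r by forward/backward substitution."""
    n = len(G)
    y = [0.0] * n
    for i in range(n):
        y[i] = (r[i] - sum(G[i][k] * y[k] for k in range(i))) / G[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(G[k][i] * x[k] for k in range(i + 1, n))) / G[i][i]
    return x

delta = 1.0 / 9.0          # nu / L^2 for the toy nonlinearity
G = cholesky(M)            # factorize the iteration matrix ONCE
U = [0.0, 0.0]
for _ in range(300):       # every step reuses the same factorization
    FU = F(U)
    r = [delta * (FU[i] - b[i]) for i in range(2)]
    dU = solve_chol(G, r)
    U = [U[i] - dU[i] for i in range(2)]

residual = max(abs(F(U)[i] - b[i]) for i in range(2))
print("residual after 300 reused-factorization steps:", residual)
```

In a genuine finite element code the hand-rolled Cholesky factorization would of course be replaced by a sparse direct or iterative solver, but the once-per-space setup cost is the same design point.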

2.2. A priori error bound

Denoting by

(2.1)

the error between the solution  of (1.1) and  from (1.5), we employ the triangle inequality and (1.7) to obtain

where  is the Galerkin solution defined in (1.6). Furthermore, employing the monotonicity property (P1) leads to

for any . Involving (P2), we conclude

and thus

Combining these estimates we obtain the following result.

Proposition 2.2.

For the error between the solution  of (1.1) and its iterative Galerkin approximation  from (1.5) there holds the a priori error estimate

for any .

2.3. A posteriori error analysis

In order to derive an a posteriori error analysis for (1.5) let us consider the auxiliary problem of finding  such that

(2.3)

We note that is a reconstruction (cf. [10]) in the sense that  from (1.5) is the Galerkin approximation of . We assume that we can bound the error between the solution  of (2.3) and its Galerkin approximation  in terms of an a posteriori computable quantity , i.e.,

(2.4)

Involving the monotonicity property (P1), the error  from (2.1) satisfies

Furthermore, recalling (2.3), we write

Then, using (P2) and applying the Cauchy-Schwarz inequality, we infer that

and dividing by , we obtain

Hence, inserting (2.4), the following result can be deduced.

Proposition 2.5.

For the error between the solution  of (1.1) and its iterative Galerkin approximation  from (1.5) there holds the a posteriori error estimate

(2.6)

where  is given in (2.4).

2.4. An abstract adaptive algorithm

The a posteriori error estimate (2.6) shows that the error  from (2.1) is controlled by two separate parts: a fixed point iteration error given by , and a Galerkin approximation term . When performing the fixed point iteration (1.5), it is worth noting that once the fixed point error is smaller than the Galerkin error, carrying out another iteration will not cause a substantial reduction of the error on the current Galerkin space. Based on this observation we are able to develop an algorithm which generates a sequence of hierarchically enriched Galerkin spaces , with the aim of performing a minimal number of fixed point iterations at each enrichment step. Our algorithm will, therefore, feature an interplay between fixed point iterations and Galerkin space refinements.

On the Galerkin space , , we define the Galerkin approximation error by

and the fixed point error

This allows us to write the a posteriori error bound (2.6) as

Here, we denote by  the Galerkin solution obtained after steps of the fixed point iteration (1.5) on the current space ; for , the initial guess on the current Galerkin space  is obtained as the natural inclusion (or a projection) of the solution  of the last (namely, the -th) iteration on the previous Galerkin space  into the space . In particular, the fixed point iteration index  is reset in each space enrichment step.

Algorithm 2.7.

Choose an initial starting space , and an initial guess .

for  do
     
     repeat
         
         Perform a single fixed point iteration (1.5) to calculate .
     until 
     Perform a hierarchical enrichment of 
      based on the error indicator  from (2.4)
      to obtain a new Galerkin space .
     Define (by inclusion  or by projection)
end for

Here, is a prescribed parameter. The algorithm is stopped if either the iteration number  reaches a given maximum, or if the right-hand side of (2.6) is found to be sufficiently small.
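The interplay in Algorithm 2.7 can be modelled by a short runnable sketch in which the error components are synthetic stand-ins (a contraction by q per fixed point step, and, for illustration, a halving of the Galerkin error per refinement); all parameter values are assumptions, not computed estimators:

```python
import math

q = 0.7            # contraction constant, cf. (1.3); assumed value
lam = 1.0          # steering parameter of the algorithm; assumed value
eta_G = 1.0        # Galerkin error on the initial space (synthetic)
eta_FP = 1.0       # fixed point error of the initial guess (synthetic)

iterations_per_level = []
for level in range(8):
    n = 0
    # repeat ... until the fixed point error drops below lam * eta_G
    while eta_FP > lam * eta_G:
        eta_FP *= q            # one fixed point iteration (1.5)
        n += 1
    iterations_per_level.append(n)
    eta_G *= 0.5               # hierarchical enrichment (mesh refinement)
    # the last iterate is carried over, so eta_FP is kept as is

total_error_bound = eta_FP + eta_G   # mimics the two-term bound (2.6)
print("fixed point iterations per level:", iterations_per_level)
print("final total error bound:", total_error_bound)
```

With these stand-in rates, the inner loop settles at a small, constant number of iterations per refinement level, which is exactly the behaviour the algorithm is designed to achieve: no iterations are wasted once the fixed point error is dominated by the Galerkin error.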

3. Application to quasilinear elliptic PDE

3.1. Problem formulation

In this section, we focus on the numerical approximation of second-order elliptic diffusion-reaction boundary value problems of the form

(3.1)
(3.2)

where is a bounded, open, polygonal domain in , with boundary . Here, we assume the following monotonicity conditions on the nonlinearities and :

  1. , and there exist constants such that the following property is satisfied:

    (3.3)
  2. , and there exist constants such that

    (3.4)

From [9, Lemma 2.1] we note that, as satisfies (3.3), for all vectors and all , we have

(3.5)
(3.6)

Similarly, as satisfies (3.4), it holds that for all and all ,

(3.7)
(3.8)

For ease of notation we shall suppress the dependence of and on and write and instead of and , respectively.

The weak formulation of the boundary value problem (3.1)–(3.2) is to find such that

(3.9)

where

(3.10)

Throughout this section, we use function spaces based on a polygonal Lipschitz domain . We denote by the Sobolev space of order endowed with the norm . In the case that , we set and denote the matching norm by . Furthermore, we define as the space of functions in with zero trace on .

Introducing the inner product

where  and  are the constants from (3.3) and (3.4), respectively, we note the induced norm

on .

3.2. Basic properties

Under the conditions (3.3) and (3.4) we can show that the properties (P1) and (P2) are satisfied for , and . Indeed, noting the Poincaré inequality,

(3.11)

where  is a constant dependent only on , there holds the ensuing result.

Proposition 3.12.

Provided that (3.3) and (3.4) hold, then the form  from (3.10) is both strongly monotone with constant  in (P1), and Lipschitz continuous with constant

(3.13)

in (P2). Here, is the Poincaré constant from (3.11).

Proof.

In order to show (P1) with , we apply (3.6) and (3.8) to arrive at

Furthermore, to prove the Lipschitz continuity property (P2), we recall (3.5) and (3.7). In combination with the Cauchy-Schwarz inequality, this yields

We first consider the case when ; then, noting that , we apply the Poincaré inequality (3.11) to observe that

Otherwise, we introduce a constant  and apply the Poincaré inequality (3.11) to get that

Using the Cauchy-Schwarz inequality yields

where

Minimizing  within the given range, , depends on the constants . More precisely, if  then  is the optimal choice, and there holds ; otherwise, we select

and thereby obtain

This completes the proof. ∎

Remark 3.14.

Incidentally, referring to, e.g., [11, Theorem 3.3.23] or [14, Theorem 2.H], the above result, Proposition 3.12, guarantees the existence of a unique solution  of (3.9).

Remark 3.15.

We note that the fixed point iteration (1.2) for the current problem (3.1)–(3.2) reads in strong form as

in , the dual space of , for .

3.3. Finite element discretization

In order to solve (3.9) by a fixed point Galerkin iteration, we will use a finite element framework.

3.3.1. Meshes and spaces

We consider regular and shape-regular meshes that partition the domain into open disjoint triangles and/or parallelograms , such that . We denote by the elemental diameter of , and let .

With this notation, for a fixed polynomial degree , we are ready to introduce the finite element space

(3.16)

where

Here, denotes the space of polynomials of total order at most on , while is the tensored space of polynomials of order at most in each variable on .

3.3.2. Iterative Galerkin FEM

Based on the class of spaces  introduced before, we can now state the finite element version of the linear fixed point iteration (1.5) applied to (3.9): Given an initial guess , we iterate, for ,

(3.17)
Remark 3.18.

Recalling (1.3) the contraction constant for this iteration is given by

Here, we point out that, in the singularly perturbed case when , for , and , the contraction factor  does not deteriorate to . Indeed, this follows from the fact that the Lipschitz constant  from Proposition 3.12 remains robustly bounded from above in this situation.

3.4. Error Analysis

We will now apply the abstract analysis derived in Section 2 to the iterative Galerkin method (3.17) for the numerical approximation of (3.1)–(3.2).

3.4.1. A priori error bound

Using our abstract a priori error analysis from Proposition 2.2 and applying suitable -approximation results (see, e.g., [3]), we obtain a bound for the error between the numerical solution obtained at the -th step of the fixed point iteration (3.17) and the exact solution of (3.9). For simplicity of presentation, we assume a (quasi-)uniform diameter of all elements.

Theorem 3.19.

Let , with , be the solution to the weak formulation (3.9), any initial guess, and the numerical solution after steps of the fixed point iteration (3.17); then, for , there holds the a priori error estimate

(3.20)

where is a constant independent of , , and  from (3.13), but depends on  and  from (3.3) and (3.4), respectively.

Remark 3.21.

From the above Theorem 3.19 it is possible to predict the (approximate) number of fixed point iterations required to obtain an optimal convergence rate in the linear finite element iteration (3.17). To this end, we ask for the second term on the right-hand side of (3.20) to converge at least at the rate of the first term. In order to discuss the resulting convergence behaviour of the numerical solution  obtained from (3.17), we distinguish two different cases:

  • -FEM: We fix a low polynomial degree  and investigate the convergence properties with respect to the mesh size  as (mesh refinement). Here, for , we need

    and hence, as .

  • -FEM: We now fix the mesh, and suppose that the solution of (3.1)–(3.2) is analytic. Then, as , it can be shown that the FEM converges exponentially (see [12]), i.e., the error bound (3.20) reads

    for some constant . Again, balancing the two terms on the right, we require  iterations as .

We will test these observations with some numerical experiments in Section 3.6.
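The balancing argument of Remark 3.21 amounts to choosing the smallest n for which the fixed point term q^n falls below the Galerkin term. A sketch of this arithmetic, with assumed values for q, the algebraic rate exponent, and the exponential rate b (none of which are fixed by the excerpt):

```python
import math

# Schematic balancing of the two terms in the a priori bound (3.20); the
# Galerkin term is taken as h**p (h-version) or exp(-b*p) (p-version), with
# all concrete numbers assumed for illustration.
q = 0.8                       # contraction constant, assumed

def iterations_h_version(h, p):
    """Smallest n with q**n <= h**p (h-FEM: n grows like |log h|)."""
    return math.ceil(p * math.log(1.0 / h) / math.log(1.0 / q))

def iterations_p_version(p, b=0.5):
    """Smallest n with q**n <= exp(-b*p) (p-FEM: n grows linearly in p)."""
    return math.ceil(b * p / math.log(1.0 / q))

for h in [2.0**-k for k in range(2, 7)]:
    print(f"h = {h:7.5f}:  n >= {iterations_h_version(h, p=1)}")
for p in range(1, 6):
    print(f"p = {p}:  n >= {iterations_p_version(p)}")
```

Halving h thus only adds a fixed number of iterations (logarithmic growth in 1/h), whereas in the p-version the required iteration count grows linearly with the polynomial degree.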

3.4.2. A posteriori error analysis

In this section we obtain an a posteriori bound for the error between the numerical solution obtained at the -th step of the fixed point iteration (3.17) and the exact solution of (3.1)–(3.2). According to our analysis in Section 2.3, the key is to derive an a posteriori error estimate between the reconstruction  from (2.3) and the iterative Galerkin solution  from (1.5) (i.e.,  from (3.17) in the present context); see (2.4).

To establish such a bound, we begin with a quasi-interpolation result.

Lemma 3.22.

Consider a finite element mesh , and a corresponding FEM space  as in (3.16). Moreover, let  be the Clément interpolation operator [6]. Then,

for all , with a constant  independent of the local element sizes, and

(3.23)

for any . Here, and  are the constants from (3.3) and (3.4), respectively.

Proof.

Let . We begin by recalling the following well-known approximation properties of the Clément interpolant:

for any , with a constant  independent of the local element sizes and of ; for  we denote by  the patch of all elements in  adjacent to . In particular, following the approach in [13], this implies that

and that, if , then

Therefore,

and so

(3.24)

Moreover, using the multiplicative trace inequality, that is,

we infer that

Observing that

we now arrive at

(3.25)

Finally, combining (3.24) and (3.25), and summing over all  concludes the argument. ∎

In order to formulate the following result, we consider a sequence of meshes, ; for each mesh we denote the corresponding finite element space by .

Theorem 3.26.

Let be the exact solution to the boundary value problem (3.1)–(3.2), and be an initial mesh with initial guess . Moreover, denote by