Iterative Galerkin Discretizations
for Strongly Monotone Problems
In this article we investigate the use of fixed point iterations to solve the Galerkin approximation of strongly monotone problems. As opposed to Newton's method, which uses information from the previous iterate to linearise the problem, and thereby requires the iteration matrix to be recomputed in each step, the alternative method used in this article exploits the monotonicity properties of the problem, and only needs the iteration matrix to be calculated once for all iterations of the fixed point method. We outline the abstract a priori and a posteriori analysis for the iteratively obtained solutions, and apply this to a finite element approximation of a second-order elliptic quasilinear boundary value problem. We show, both theoretically and numerically, how the number of iterations of the fixed point method can be restricted, depending on the mesh size or on the polynomial degree, to obtain optimal convergence. Using the a posteriori error analysis we also devise an adaptive algorithm for the generation of a sequence of Galerkin spaces (adaptively refined finite element meshes in the concrete example) to minimise the number of iterations on each space.
Key words and phrases: Banach fixed point methods; finite element methods; monotone problems; quasilinear PDEs; nonlinear elliptic PDEs; adaptive mesh refinement.
2010 Mathematics Subject Classification: 65N30
In this paper we study Galerkin approximations of strongly monotone problems of the form: find $u \in X$ such that
\[
\mathsf{a}(u;v)=\ell(v)\qquad\text{for all } v\in X.\tag{1.1}
\]
Here, $X$ is a real Hilbert space, with inner product denoted by $(\cdot,\cdot)_X$ and induced norm $\|\cdot\|_X$, and $\ell:X\to\mathbb{R}$ is a given bounded linear functional. Furthermore, $\mathsf{a}:X\times X\to\mathbb{R}$ is a possibly nonlinear form such that, for any $u\in X$, the mapping $v\mapsto\mathsf{a}(u;v)$ is linear and bounded. Moreover, we suppose that $\mathsf{a}$ satisfies
(P1) the strong monotonicity property
\[
\mathsf{a}(u;u-v)-\mathsf{a}(v;u-v)\ge\nu\,\|u-v\|_X^2\qquad\text{for all } u,v\in X,
\]
for a constant $\nu>0$, and
(P2) the Lipschitz continuity condition
\[
|\mathsf{a}(u;w)-\mathsf{a}(v;w)|\le L\,\|u-v\|_X\,\|w\|_X\qquad\text{for all } u,v,w\in X,
\]
with a constant $L>0$.
Under these assumptions, there exists a unique solution $u\in X$ of the weak formulation (1.1); see, e.g., [14, Theorem 2.H]. In addition, the solution $u$ can be obtained as the limit of the sequence $\{u^n\}_{n\ge 0}\subset X$ resulting from the fixed point iteration
\[
(u^{n+1},v)_X=(u^n,v)_X-\frac{\nu}{L^2}\big(\mathsf{a}(u^n;v)-\ell(v)\big)\qquad\text{for all } v\in X,\tag{1.2}
\]
for an arbitrary initial value $u^0\in X$. Indeed, defining the contraction constant
\[
q:=\sqrt{1-\frac{\nu^2}{L^2}}\in(0,1),\tag{1.3}
\]
there holds the a priori bound
\[
\|u-u^n\|_X\le\frac{q^n}{1-q}\,\|u^1-u^0\|_X\tag{1.4}
\]
for the iteration (1.2), i.e., $u^n\to u$ in $X$ as $n\to\infty$.
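The a priori bound (1.4) can be checked numerically on a simple scalar contraction. The following sketch is a toy illustration (the map $T(u)=\cos(u)$ and the contraction factor $q=\sin(1)$ on $[0,1]$ are assumptions for the demonstration, not the paper's setting):

```python
import math

def banach_bound_check(n, u0=0.5):
    """Return (actual error, a priori bound of type (1.4)) after n steps of
    the toy contraction T(u) = cos(u) on [0, 1], where q = sin(1)."""
    q = math.sin(1.0)                 # contraction constant of T on [0, 1]
    u1 = math.cos(u0)                 # first iterate u^1
    increment = abs(u1 - u0)          # |u^1 - u^0|
    # reference fixed point, iterated far beyond the accuracy we test
    u_star = u0
    for _ in range(1000):
        u_star = math.cos(u_star)
    # n steps of the fixed point iteration
    u = u0
    for _ in range(n):
        u = math.cos(u)
    return abs(u_star - u), q**n / (1 - q) * increment
```

For every $n$, the computed error stays below the bound $q^n(1-q)^{-1}|u^1-u^0|$, and both quantities decay geometrically.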
Restricting the iteration (1.2) to a finite dimensional linear subspace $X_N\subset X$ leads to an iterative Galerkin approximation scheme for (1.1). More precisely, we consider, for an initial guess $u_N^0\in X_N$ and $n\ge 0$, the iteration
\[
(u_N^{n+1},v)_X=(u_N^n,v)_X-\frac{\nu}{L^2}\big(\mathsf{a}(u_N^n;v)-\ell(v)\big)\qquad\text{for all } v\in X_N,\tag{1.5}
\]
where $\nu$ and $L$ are the constants from (P1) and (P2), respectively. We emphasize that the problem of finding $u_N^{n+1}$ from $u_N^n$ in the iteration scheme (1.5) is linear and uniquely solvable. Similarly as for (1.2) and (1.1), the fixed point iteration (1.5) converges to the (unique) solution $u_N\in X_N$ of the Galerkin formulation
\[
\mathsf{a}(u_N;v)=\ell(v)\qquad\text{for all } v\in X_N.\tag{1.6}
\]
Furthermore, we note the a priori bound
\[
\|u_N-u_N^n\|_X\le\frac{q^n}{1-q}\,\|u_N^1-u_N^0\|_X,
\]
analogous to (1.4).
In solving nonlinear differential equations numerically, two approaches are commonly employed. Either the nonlinear problem under consideration is discretized by means of a suitable numerical scheme, thereby resulting in a (finite-dimensional) nonlinear algebraic system, or the differential equation is approximated by a sequence of (locally) linearized problems which are discretized subsequently. The latter approach is attractive from both a computational as well as an analytical viewpoint; indeed, working with a sequence of linear problems allows the application of linear solvers as well as the use of a linear numerical analysis framework (e.g., in deriving error estimates). In the context of the fixed point linearization (1.5), yet another benefit comes into play: solving for $u_N^{n+1}$ from $u_N^n$ involves setting up and inverting a mass matrix on the left-hand side of (1.5). We emphasize that this matrix is the same for all iterations; hence, it only needs to be computed once (on a given Galerkin space).
The idea of approximating nonlinear problems within a linear Galerkin framework has been applied in a variety of works. For example, in the article , the authors have considered general linearizations of strongly monotone operators, and have derived computable estimators for the total error (consisting of the linearization error and the Galerkin approximation error), with identifiable components for each of the error sources. A more specific linearization approach for monotone problems, which is based on the Newton method, has been presented in . In a related context, linear finite element approximations resulting from adaptive Newton linearization techniques as applied to semilinear problems have been investigated in the papers [1, 2]. Finally, we remark that the linear Galerkin approximation approach for nonlinear problems is not only employed for the purpose of obtaining linearized schemes, but also to address the issue of modelling errors in linearized models; see, e.g., [4, 8].
The aim of the current paper is to derive a priori and a posteriori error bounds for the Galerkin iteration method (1.5). Our error estimates are expressed as the sum of the linearization error resulting from the fixed point formulation and the Galerkin approximation error. In particular, based on the a posteriori error analysis, we will develop an adaptive solution procedure for the numerical solution of (1.1) that features an appropriate interplay between the fixed point iterations and possible Galerkin space enrichments (e.g., mesh refinements for finite elements); specifically, our scheme selects between these two options depending on whichever constitutes the dominant part of the total error. In this way, we aim to keep the number of fixed point iterations at a minimum in the sense that no unnecessary iterations are performed if they are not expected to contribute a substantial reduction of the error on the current Galerkin space.
The outline of the rest of this article is as follows. In Section 2 we derive an abstract analysis for the fixed point iteration (1.5), which includes the derivation of both a priori and a posteriori error bounds; in addition, we formulate an abstract adaptive procedure. The purpose of Section 3 is the application of our abstract theory to the finite element approximation of a second-order quasilinear elliptic diffusion-reaction boundary value problem; in particular, we derive a fully adaptive algorithm based on suitable a posteriori error estimates, and provide a series of numerical experiments. Finally, in Section 4 we summarise the work presented and draw some conclusions.
2. Abstract analysis
2.1. Fixed point Galerkin approximation
As previously discussed, we let $X_N$ be a finite dimensional linear subspace of a Hilbert space $X$. Then, in order to approximate the solution $u\in X$ of (1.1), we consider the Galerkin solution $u_N\in X_N$ defined in (1.6). For the purpose of calculating $u_N$ we consider, in turn, the discrete and linear fixed point iteration scheme (1.5). Evidently, this is equivalent to a linear algebraic system of equations. More precisely, using basis functions $\phi_j\in X_N$, for $j=1,\ldots,N$, where $N=\dim X_N$ is the number of degrees of freedom, and letting
\[
u_N^n=\sum_{j=1}^{N}c_j^n\,\phi_j,
\]
for some unknown coefficients $c_1^n,\ldots,c_N^n\in\mathbb{R}$, we obtain the linear system version of the fixed point iteration (1.5):
\[
\mathsf{M}\,\mathbf{c}^{n+1}=\mathsf{M}\,\mathbf{c}^n-\frac{\nu}{L^2}\Big(\mathsf{a}(u_N^n;\phi_i)-\ell(\phi_i)\Big)_{i=1}^{N}.
\]
Here, $\mathsf{M}$ is the iteration matrix defined by $\mathsf{M}_{ij}=(\phi_j,\phi_i)_X$, $1\le i,j\le N$, and $\mathbf{c}^n$, with
\[
\mathbf{c}^n=(c_1^n,\ldots,c_N^n)^\top,
\]
is the vector form of $u_N^n$. We can see that the iteration matrix $\mathsf{M}$ does not depend on the iteration number $n$; hence, it only needs to be calculated once for all iterations of the fixed point method (on a given Galerkin space).
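The computational structure above can be sketched in a few lines. The following toy model is an assumption for illustration only (it is not the paper's numerical example): $-u''+u^3=1$ on $(0,1)$ with homogeneous Dirichlet conditions, discretized by $P_1$ finite elements on a uniform mesh, with $X=H^1_0$ and $(u,v)_X=\int u'v'$, so that the iteration matrix is the stiffness matrix; the lower-order terms are mass-lumped for brevity, and the damping parameter is hand-picked rather than set to $\nu/L^2$:

```python
import numpy as np

def iterative_galerkin(n_el=64, delta=0.5, n_iter=60):
    h = 1.0 / n_el
    N = n_el - 1                                   # interior nodes
    x = np.linspace(h, 1.0 - h, N)
    # tridiagonal stiffness matrix = iteration matrix M, assembled ONCE
    M = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / h
    M_inv = np.linalg.inv(M)                       # inverted/factorized ONCE
    f = np.ones(N)                                 # right-hand side f = 1
    c = np.zeros(N)                                # initial guess u^0 = 0
    for _ in range(n_iter):
        # nonlinear Galerkin residual a(u^n; phi_i) - l(phi_i), mass-lumped
        residual = M @ c + h * c**3 - h * f
        # one fixed point step: M c^{n+1} = M c^n - delta * residual
        c = c - delta * (M_inv @ residual)
    return x, c
```

The key point is that the matrix and its inverse (in practice, a factorization) are computed once and reused in every step; for $f=1$ the computed solution is close to $x(1-x)/2$.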
2.2. A priori error bound
for any $v\in X_N$. Invoking (P2), we conclude
Combining these estimates we obtain the following result.
2.3. A posteriori error analysis
In order to derive an a posteriori error analysis for (1.5), let us consider the auxiliary problem of finding $\widetilde{u}^{\,n+1}\in X$ such that
\[
(\widetilde{u}^{\,n+1},v)_X=(u_N^n,v)_X-\frac{\nu}{L^2}\big(\mathsf{a}(u_N^n;v)-\ell(v)\big)\qquad\text{for all } v\in X.\tag{2.3}
\]
We note that $\widetilde{u}^{\,n+1}$ is a reconstruction (cf. ) in the sense that $u_N^{n+1}$ from (1.5) is the Galerkin approximation of $\widetilde{u}^{\,n+1}$. We assume that we can bound the error between the solution $\widetilde{u}^{\,n+1}$ of (2.3) and its Galerkin approximation $u_N^{n+1}$ in terms of an a posteriori computable quantity $\eta_N^{n+1}$, i.e.,
\[
\|\widetilde{u}^{\,n+1}-u_N^{n+1}\|_X\le\eta_N^{n+1}.\tag{2.4}
\]
Furthermore, recalling (2.3), we write, for $e:=u-u_N^{n+1}$,
\[
\nu\|e\|_X^2\le\mathsf{a}(u;e)-\mathsf{a}(u_N^{n+1};e)=\mathsf{a}(u_N^n;e)-\mathsf{a}(u_N^{n+1};e)-\frac{L^2}{\nu}\,(u_N^n-\widetilde{u}^{\,n+1},e)_X,
\]
where we have employed (P1), the weak formulation (1.1), and the identity $\mathsf{a}(u_N^n;v)-\ell(v)=\frac{L^2}{\nu}(u_N^n-\widetilde{u}^{\,n+1},v)_X$ for all $v\in X$, which follows from (2.3). Then, using (P2) and applying the Cauchy-Schwarz inequality, we infer that
\[
\nu\|e\|_X^2\le L\,\|u_N^{n+1}-u_N^n\|_X\,\|e\|_X+\frac{L^2}{\nu}\,\|u_N^n-\widetilde{u}^{\,n+1}\|_X\,\|e\|_X,
\]
and dividing by $\nu\|e\|_X$, we obtain
\[
\|u-u_N^{n+1}\|_X\le\frac{L}{\nu}\,\|u_N^{n+1}-u_N^n\|_X+\frac{L^2}{\nu^2}\,\|u_N^n-\widetilde{u}^{\,n+1}\|_X.
\]
Hence, applying the triangle inequality $\|u_N^n-\widetilde{u}^{\,n+1}\|_X\le\|u_N^{n+1}-u_N^n\|_X+\|\widetilde{u}^{\,n+1}-u_N^{n+1}\|_X$ and inserting (2.4), the following result can be deduced.
2.4. An abstract adaptive algorithm
The a posteriori error estimate (2.6) shows that the error from (2.1) is controlled by two separate parts: a fixed point iteration error given by $\|u_N^{n+1}-u_N^n\|_X$, and a Galerkin approximation term $\eta_N^{n+1}$. When performing the fixed point iteration (1.5), it is worth noting that, once the fixed point error is smaller than the Galerkin error, carrying out another iteration will not cause a substantial reduction of the error on the current Galerkin space. Based on this observation we are able to develop an algorithm which generates a sequence of hierarchically enriched Galerkin spaces $X_{N_0}\subseteq X_{N_1}\subseteq X_{N_2}\subseteq\cdots\subseteq X$, with the aim of performing a minimal number of fixed point iterations at each enrichment step. Our algorithm will, therefore, feature an interplay between fixed point iterations and Galerkin space refinements.
On the Galerkin space $X_{N_i}$, $i\ge 0$, we define the Galerkin approximation error by
\[
E_{\mathrm{G}}^{n,i}:=\eta_{N_i}^{n},
\]
and the fixed point error
\[
E_{\mathrm{FP}}^{n,i}:=\|u_{N_i}^{n}-u_{N_i}^{n-1}\|_X.
\]
This allows us to write the a posteriori error bound (2.6) as
\[
\|u-u_{N_i}^{n}\|_X\le\frac{L}{\nu}\Big(1+\frac{L}{\nu}\Big)E_{\mathrm{FP}}^{n,i}+\frac{L^2}{\nu^2}\,E_{\mathrm{G}}^{n,i}.
\]
Here, we denote by $u_{N_i}^{n}$ the Galerkin solution obtained after $n$ steps of the fixed point iteration (1.5) on the current space $X_{N_i}$; for $i\ge 1$, the initial guess $u_{N_i}^{0}$ on the current Galerkin space is obtained as the natural inclusion (or a projection) of the solution of the last (namely, the $n_{i-1}$-th) iteration on the previous Galerkin space $X_{N_{i-1}}$ to the space $X_{N_i}$. In particular, the fixed point iteration index $n$ is reset in each space enrichment step.
Choose an initial Galerkin space $X_{N_0}$, and an initial guess $u_{N_0}^{0}\in X_{N_0}$.
Here, $\theta>0$ is a prescribed parameter. The algorithm is stopped if either the iteration number reaches a given maximum, or the right-hand side of (2.6) is found to be sufficiently small.
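The interplay between iterating and enriching described in this section can be sketched abstractly. In the following sketch all callback names (`step`, `estimator`, `enrich`, `transfer`) and the scalar toy model at the bottom are illustrative assumptions, not taken from the paper: on each space we perform fixed point steps while the fixed point error dominates, and enrich the space as soon as the fixed point error drops below a prescribed multiple of the Galerkin error estimator.

```python
def adaptive_fixed_point(u, space, step, estimator, enrich, transfer,
                         theta=0.5, tol=1e-8, max_steps=200, norm=abs):
    for _ in range(max_steps):
        u_new = step(space, u)
        e_fp = norm(u_new - u)           # fixed point (linearization) error
        e_gal = estimator(space, u_new)  # Galerkin (discretization) error
        u = u_new
        if e_fp + e_gal < tol:           # total error estimate small enough
            break
        if e_fp <= theta * e_gal:        # discretization error dominates:
            space = enrich(space)        # enrich the Galerkin space and
            u = transfer(space, u)       # transfer the current iterate
    return u, space

# Synthetic scalar model: contraction u -> u/2 + 1 with fixed point u* = 2,
# and a mock discretization error 1/N**2 on a "space" of dimension N.
u, N = adaptive_fixed_point(
    u=0.0, space=1,
    step=lambda N, u: 0.5 * u + 1.0,
    estimator=lambda N, u: 1.0 / N**2,
    enrich=lambda N: 2 * N,
    transfer=lambda N, u: u,
)
```

In this synthetic run the driver alternates a few fixed point steps with a space enrichment, so that neither error component is reduced far below the other.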
3. Application to quasilinear elliptic PDE
3.1. Problem formulation
In this section, we focus on the numerical approximation of second-order elliptic quasilinear diffusion-reaction boundary value problems of the form
\[
-\nabla\cdot\big(\mu(x,|\nabla u|^2)\,\nabla u\big)+g(x,u)=f\qquad\text{in }\Omega,\tag{3.1}
\]
\[
u=0\qquad\text{on }\Gamma,\tag{3.2}
\]
where $\Omega$ is a bounded, open, polygonal domain in $\mathbb{R}^2$, with boundary $\Gamma=\partial\Omega$. Here, we assume the following monotonicity conditions on the nonlinearities $\mu$ and $g$:
$\mu\in C^0(\overline\Omega\times[0,\infty))$, and there exist constants $m_\mu,M_\mu>0$ such that the following property is satisfied:
\[
m_\mu(t-s)\le\mu(x,t^2)\,t-\mu(x,s^2)\,s\le M_\mu(t-s)\qquad\text{for all } t\ge s\ge 0\text{ and all } x\in\overline\Omega.\tag{3.4}
\]
$g\in C^0(\overline\Omega\times\mathbb{R})$, and there exist constants $m_g,M_g\ge 0$ such that
\[
m_g(t-s)\le g(x,t)-g(x,s)\le M_g(t-s)\qquad\text{for all } t\ge s\text{ and all } x\in\overline\Omega.
\]
Similarly, as $\mu$ satisfies (3.4), it holds that for all $x\in\overline\Omega$ and all $\mathbf{v},\mathbf{w}\in\mathbb{R}^2$,
\[
\big(\mu(x,|\mathbf{v}|^2)\mathbf{v}-\mu(x,|\mathbf{w}|^2)\mathbf{w}\big)\cdot(\mathbf{v}-\mathbf{w})\ge c_1\,|\mathbf{v}-\mathbf{w}|^2,
\qquad
\big|\mu(x,|\mathbf{v}|^2)\mathbf{v}-\mu(x,|\mathbf{w}|^2)\mathbf{w}\big|\le c_2\,|\mathbf{v}-\mathbf{w}|,
\]
with constants $c_1,c_2>0$ depending only on $m_\mu$ and $M_\mu$.
For ease of notation we shall suppress the dependence of $\mu$ and $g$ on $x$, and write $\mu(t)$ and $g(u)$ instead of $\mu(x,t)$ and $g(x,u)$, respectively.
Throughout this section, we use function spaces based on the polygonal Lipschitz domain $\Omega$. We denote by $H^k(\Omega)$ the Sobolev space of order $k\in\mathbb{N}_0$ endowed with the norm $\|\cdot\|_{H^k(\Omega)}$. In the case that $k=0$, we set $L^2(\Omega):=H^0(\Omega)$ and denote the matching norm by $\|\cdot\|_{L^2(\Omega)}$. Furthermore, we define $H_0^1(\Omega)$ as the space of functions in $H^1(\Omega)$ with zero trace on $\Gamma$.
3.2. Basic properties
Recalling the Poincaré inequality
\[
\|v\|_{L^2(\Omega)}\le C_P\,\|\nabla v\|_{L^2(\Omega)}\qquad\text{for all } v\in H_0^1(\Omega),\tag{3.11}
\]
where $C_P>0$ is a constant dependent only on $\Omega$, there holds the ensuing result.
We first consider the case when ; then, noting that , we apply the Poincaré inequality (3.11) to observe that
When we introduce a constant and apply the Poincaré inequality (3.11), to get that
Using the Cauchy-Schwarz inequality yields
Minimizing within the given range, , depends on the constants . More precisely, if then is the optimal choice, and there holds ; otherwise, we select
and thereby obtain
This completes the proof. ∎
3.3. Finite element discretization
In order to solve (3.9) by a fixed point Galerkin iteration, we will use a finite element framework.
3.3.1. Meshes and spaces
We consider regular and shape-regular meshes $\mathcal{T}_h$ that partition the domain $\Omega$ into open disjoint triangles and/or parallelograms $K$, such that $\overline\Omega=\bigcup_{K\in\mathcal{T}_h}\overline{K}$. We denote by $h_K$ the elemental diameter of $K\in\mathcal{T}_h$, and let $h:=\max_{K\in\mathcal{T}_h}h_K$.
With this notation, for a fixed polynomial degree $p\ge 1$, we are ready to introduce the finite element space
\[
V_h:=\big\{v\in H_0^1(\Omega):\; v|_K\in\mathcal{S}_p(K)\text{ for all } K\in\mathcal{T}_h\big\},\qquad
\mathcal{S}_p(K)\in\{\mathcal{P}_p(K),\mathcal{Q}_p(K)\}.
\]
Here, $\mathcal{P}_p(K)$ denotes the space of polynomials of total order at most $p$ on a triangle $K$, while $\mathcal{Q}_p(K)$ is the tensor-product space of polynomials of order at most $p$ in each variable on a parallelogram $K$.
3.3.2. Iterative Galerkin FEM
Recalling (1.3), the contraction constant for this iteration is given by
\[
q=\sqrt{1-\frac{\nu^2}{L^2}},
\]
where $\nu$ and $L$ are the monotonicity and Lipschitz constants of the underlying nonlinear form.
Here, we point out that, in the singularly perturbed case when , for , and , the contraction factor $q$ does not deteriorate to $1$. Indeed, this follows from the fact that the Lipschitz constant $L$ from Proposition 3.12 remains robustly bounded from above in this situation.
3.4. Error analysis
3.4.1. A priori error bound
Using our abstract a priori error analysis from Proposition 2.2 and applying suitable $hp$-approximation results (see, e.g., ), we obtain a bound for the error between the numerical solution obtained at the $n$-th step of the fixed point iteration (3.17) and the exact solution of (3.9). For simplicity of presentation, we assume a (quasi-)uniform diameter of all elements.
Let $u\in H_0^1(\Omega)\cap H^{k}(\Omega)$, with $k\ge 2$, be the solution to the weak formulation (3.9), $u_h^0\in V_h$ any initial guess, and $u_h^n\in V_h$ the numerical solution after $n$ steps of the fixed point iteration (3.17); then, for $n\ge 1$, there holds the a priori error estimate
\[
\|u-u_h^n\|_{H^1(\Omega)}\le C\,\frac{h^{\min(p,k-1)}}{p^{\,k-1}}\,\|u\|_{H^k(\Omega)}+\frac{q^n}{1-q}\,\|u_h^1-u_h^0\|_{H^1(\Omega)},\tag{3.20}
\]
where $C>0$ is a constant independent of $h$, $p$, and $n$.
From the above Theorem 3.19 it is possible to predict the (approximate) number of fixed point iterations required to obtain an optimal convergence rate in the linear finite element iteration (3.17). To this end, we ask for the second term on the right-hand side of (3.20) to converge at least at the rate of the first term. In order to discuss the resulting convergence behaviour of the numerical solution obtained from (3.17), we distinguish two different cases:
$h$-FEM: We fix a low polynomial degree $p$ and investigate the convergence properties with respect to the mesh size as $h\to 0$ (mesh refinement). Here, for $k\ge p+1$, we need
\[
q^{n}\lesssim h^{p},\qquad\text{i.e.,}\qquad n\gtrsim p\,\frac{|\log h|}{|\log q|},
\]
and hence, $n\to\infty$ as $h\to 0$.
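The iteration-count heuristic above can be evaluated directly. The following sketch is a toy illustration (the balancing rule $q^n \le h^p$ is an assumption taken from the discussion above); it shows that the required number of fixed point steps grows only logarithmically under mesh refinement:

```python
import math

def required_iterations(q, p, h):
    """Smallest n with q**n <= h**p, for contraction factor 0 < q < 1,
    polynomial degree p >= 1, and mesh size 0 < h < 1."""
    return math.ceil(p * math.log(1.0 / h) / math.log(1.0 / q))
```

For instance, with a contraction factor $q=0.9$ and degree $p=2$, reducing $h$ from $0.1$ to $0.01$ only doubles the required number of iterations.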
We will test these observations with some numerical experiments in Section 3.6.
3.4.2. A posteriori error analysis
In this section we obtain an a posteriori error bound for the error between the numerical solution obtained at the $n$-th step of the fixed point iteration (3.17) and the exact solution of (3.1)–(3.2). According to our analysis in Section 2.3, the key is to derive an a posteriori error estimate between the reconstruction $\widetilde{u}^{\,n+1}$ from (2.3) and the iterative Galerkin solution from (1.5) (i.e., from (3.17) in the present context); see (2.4).
To establish such a bound, we begin with a quasi-interpolation result.
Let $v\in H_0^1(\Omega)$. We begin by recalling the following well-known approximation properties of the Clément interpolant $\mathsf{I}_h v\in V_h$:
\[
\|v-\mathsf{I}_h v\|_{L^2(K)}\le C\,h_K\,\|\nabla v\|_{L^2(\omega_K)},\qquad
\|\nabla(v-\mathsf{I}_h v)\|_{L^2(K)}\le C\,\|\nabla v\|_{L^2(\omega_K)},
\]
for any $K\in\mathcal{T}_h$, with a constant $C>0$ independent of the local element sizes and of $v$; for $K\in\mathcal{T}_h$ we denote by $\omega_K$ the patch of all elements in $\mathcal{T}_h$ adjacent to $K$. In particular, following the approach in , this implies that
and that, if , then
Moreover, using the multiplicative trace inequality, that is,
\[
\|v\|_{L^2(\partial K)}^2\le C\big(\|v\|_{L^2(K)}\,\|\nabla v\|_{L^2(K)}+h_K^{-1}\,\|v\|_{L^2(K)}^2\big)\qquad\text{for all } v\in H^1(K),
\]
we infer that
we now arrive at
In order to formulate the following result, we consider a sequence of meshes $\{\mathcal{T}_{h_i}\}_{i\ge 0}$; for each mesh $\mathcal{T}_{h_i}$ we denote by $V_{h_i}$ the finite element space on that mesh.