Multigrid Preconditioners for Mixed Finite Element Methods of the Vector Laplacian

Long Chen, Yongke Wu, Lin Zhong, and Jie Zhou
Long Chen: Department of Mathematics, University of California at Irvine, Irvine, CA 92697, USA (chenlong@math.uci.edu)
Yongke Wu: School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China (wuyongke1982@sina.com)
Lin Zhong: Department of Mathematics, University of California at Irvine, Irvine, CA 92697, USA (lzhong1@uci.edu)
Jie Zhou: School of Mathematical and Computational Sciences, Xiangtan University, Xiangtan 411105, China (xnuzj2004@163.com)
July 9, 2019
Abstract.

Due to its indefiniteness and poor spectral properties, the linear algebraic system obtained by discretizing the vector Laplacian with mixed finite element methods is hard to solve. A block diagonal preconditioner has been developed and shown to be effective by Arnold, Falk, and Winther [Acta Numerica, 15:1–155, 2006]. The purpose of this paper is to propose alternative and effective block diagonal and block triangular preconditioners for solving this saddle point system. A variable V-cycle multigrid method with the standard point-wise Gauss-Seidel smoother is proved to be a good preconditioner for the discrete vector Laplacian operator. This multigrid solver is further used to build preconditioners for the saddle point systems of the vector Laplacian and of the Maxwell equations with a divergence-free constraint. The major benefit of our approach is that the point-wise Gauss-Seidel smoother is more algebraic and can be easily implemented as a black-box smoother.

Key words and phrases:
Saddle point system, multigrid methods, mixed finite elements, vector Laplacian, Maxwell equations
2010 Mathematics Subject Classification:
65N55; 65F10; 65N22; 65N30;
L. Chen was supported by NSF Grant DMS-1418934. Y. Wu was supported by the National Natural Science Foundation of China (11501088) and partially supported by NSF Grant DMS-1115961. L. Zhong was supported by NSF Grant DMS-1115961 and DMS-1418934. J. Zhou was supported by doctoral research project of Xiangtan University (09kzkz08050).

1. Introduction

Discretization of the vector Laplacian in the spaces $H(\operatorname{curl})$ and $H(\operatorname{div})$ by mixed finite element methods is well studied in [1]. The discretized linear algebraic system is ill-conditioned and of saddle point form, which leads to slow convergence of classical iterative methods as the size of the system becomes large. In [1], a block diagonal preconditioner was developed and shown to be effective. The purpose of this paper is to present alternative and effective block diagonal and block triangular preconditioners for solving these saddle point systems.

Due to the similarity of the problems arising from the $H(\operatorname{curl})$ and $H(\operatorname{div})$ spaces, we use the mixed formulation of the vector Laplacian in $H(\operatorname{curl})$ as an example to illustrate our approach. Choosing appropriate finite element spaces (a vertex element space for the scalar unknown and an edge element space for the vector unknown), the mixed formulation becomes a discrete saddle point problem whose precise form is given in Section 2.

The corresponding matrix formulation is

(1)

Here the two mass matrices appearing in (1) are those of the vertex element and the face element, respectively; the off-diagonal blocks correspond to a scaled $\operatorname{grad}$ operator and its transpose, and the remaining factor corresponds to the $\operatorname{curl}$ operator.

Based on the stability of (1) in a suitable norm, a block diagonal preconditioner of the form

with appropriately chosen diagonal blocks is proposed in [1], and the preconditioned Krylov subspace method is shown to converge with optimal complexity. To compute the inverse operators in the diagonal, one can use multigrid methods based on additive or multiplicative overlapping Schwarz smoothers [2], multigrid methods based on Hiptmair smoothers [16, 17], or the HX auxiliary space preconditioner [19]. In all these methods, a special smoother taking care of the large kernel of the curl (or div) differential operator is needed to achieve a mesh independent condition number.

In contrast, we shall apply multigrid methods with the standard point-wise Gauss-Seidel (G-S) smoother to the Schur complement obtained by eliminating the mass matrix block

(2)

which is a matrix representation of a discretization of the vector Laplacian identity $-\Delta \boldsymbol{u} = \operatorname{curl}\operatorname{curl}\boldsymbol{u} - \operatorname{grad}\operatorname{div}\boldsymbol{u}$.

In (2), the inverse of the mass matrix is dense. To be practical, the exact Schur complement can be replaced by an approximation

in which the mass matrix inverse is replaced by an easy-to-invert matrix, e.g., the inverse of its diagonal or of a mass lumping.
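To make this concrete, the following is a minimal SciPy sketch of such a mass-lumping approximation of the Schur complement. The matrix names are assumptions used only for illustration and not necessarily the paper's notation: Mv, Me, Mf denote the vertex, edge, and face mass matrices, G the grad matrix mapping vertex to edge degrees of freedom, and C the curl matrix mapping edge to face degrees of freedom.

import numpy as np
import scipy.sparse as sp

def approximate_schur(Mv, Me, Mf, G, C):
    """Sparse approximation of the Schur complement with a lumped vertex mass matrix.

    The exact Schur complement involves the dense inverse of Mv; replacing Mv by
    its row-sum (lumped) diagonal keeps the result sparse and easy to assemble.
    """
    Mv_lumped_inv = sp.diags(1.0 / np.asarray(Mv.sum(axis=1)).ravel())
    B = Me @ G                                   # scaled grad block
    return B @ Mv_lumped_inv @ B.T + C.T @ Mf @ C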

We shall prove that a variable V-cycle multigrid method using the standard point-wise Gauss-Seidel smoother is a good preconditioner for the Schur complement or its approximation. The major benefit of our approach is that the point-wise Gauss-Seidel smoother is more algebraic and can easily be implemented as a black-box smoother. The block smoothers proposed in [2] for the $H(\operatorname{curl})$ and $H(\operatorname{div})$ problems, in contrast, require more geometric information and the solution of local problems on small patches.

Although the finite element spaces are nested and the Schur complement is symmetric positive definite, the bilinear forms on the coarse grids are not inherited from the fine one because of the inverse of the mass matrix. To overcome this difficulty, we follow the multigrid framework developed by Bramble, Pasciak, and Xu [4]. In this framework, we only need to verify two conditions: (1) a regularity and approximation assumption; (2) a smoothing property. Since the Schur complement is symmetric positive definite, the smoothing property of the Gauss-Seidel smoother is well known; see, e.g., [5]. To prove the approximation property, we make use of the $L^2$-error estimates of mixed finite element methods established in [2] and thus have to assume full regularity of the elliptic equations. Numerically, our method works well even when the full regularity assumption does not hold. With the approximation and smoothing properties, we show that one V-cycle is an effective preconditioner. As noticed in [5], a W-cycle or two V-cycles may not yield a valid preconditioner, as the corresponding operator may not be positive definite. In other words, the proposed multigrid method for the Schur complement cannot be used as a standalone iterative method, but one V-cycle can be used as an effective preconditioner.

The multigrid preconditioner for the Schur complement will be used to build preconditioners for (1). We propose a block diagonal preconditioner and a block triangular preconditioner:

(3)

The action of the inverse Schur complement can be further approximated, first by the inverse of the approximate Schur complement and then by one multigrid V-cycle. Following the framework of [20], we prove that the preconditioned systems obtained with these two preconditioners have uniformly bounded condition numbers by establishing a new stability result for the saddle point system (1) in a norm induced by the Schur complement.
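For concreteness, here is a minimal sketch of how block preconditioners of this kind act on a residual split into its two blocks; the exact block entries of (3) are not reproduced here, and the names solve_mass, solve_schur, and Bt are placeholders: solve_schur stands for one multigrid V-cycle applied to the (approximate) Schur complement, solve_mass for a cheap mass matrix solve, and Bt for the coupling block.

def block_diagonal_prec(r_sigma, r_u, solve_mass, solve_schur):
    # Block diagonal action: each block of the residual is treated independently.
    return solve_mass(r_sigma), solve_schur(r_u)

def block_triangular_prec(r_sigma, r_u, solve_mass, solve_schur, Bt):
    # Generic block upper triangular action: solve the Schur block first, then
    # remove its contribution (through the coupling block Bt) before the mass solve.
    u = solve_schur(r_u)
    sigma = solve_mass(r_sigma - Bt @ u)
    return sigma, u

The block diagonal version preserves symmetry and can be used with MINRES, while the triangular version is typically combined with GMRES.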

As an application we further consider a prototype of Maxwell equations with divergence-free constraint

A regularized system obtained by the augmented Lagrangian method [14] has the form

(4)

where the leading block is the discrete vector Laplacian in $H(\operatorname{curl})$ discussed above. We then construct a block diagonal preconditioner and a block triangular preconditioner

(5)

and prove that they are uniformly bounded preconditioners for the Maxwell system (4). Our preconditioners are new and different from the solver proposed in [12].
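As a usage sketch (not the paper's implementation), one of these preconditioners can be hooked into a Krylov solver with SciPy as follows; P_apply is assumed to implement the action of a symmetric positive definite block diagonal preconditioner on a full residual vector, and K is the assembled saddle point matrix, e.g. (1) or (4).

import scipy.sparse.linalg as spla

def solve_with_block_preconditioner(K, b, P_apply):
    n = K.shape[0]
    M = spla.LinearOperator((n, n), matvec=P_apply)
    # MINRES needs a symmetric system and an SPD preconditioner, which matches the
    # block diagonal choice; for the block triangular variant GMRES would be used.
    x, info = spla.minres(K, b, M=M)
    return x, info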

The paper is organized as follows. In Section 2, we introduce the discretization of the mixed formulation of the vector Laplacian and prove stability results. In Section 3, we consider multigrid methods for the discrete vector Laplacian and verify the approximation and smoothing properties. In Section 4, we propose uniform preconditioners for the vector Laplacian and apply them to the Maxwell equations in saddle point form. Finally, we support our theoretical results with numerical experiments.

2. Discretization

In this section, we first recall the function spaces and finite element spaces, and then present discrete formulations of the vector Laplacian problem in both the $H(\operatorname{curl})$ and the $H(\operatorname{div})$ settings. We shall define a new norm using the Schur complement and prove the corresponding Poincaré and inverse inequalities.

We assume that $\Omega$ is a bounded and convex polyhedron in $\mathbb{R}^3$ with simple topology (homeomorphic to a ball), and that it is triangulated into a mesh of size $h$. We assume that the mesh belongs to a shape regular and quasi-uniform family.

2.1. Function Spaces and Finite Element Spaces

We use $L^2(\Omega)$ to denote the space of all square integrable scalar or vector functions on $\Omega$, and $(\cdot,\cdot)$ for both the scalar and vector $L^2$-inner product. Given a differential operator $D$, which is $\operatorname{grad}$, $\operatorname{curl}$, or $\operatorname{div}$, we introduce the Sobolev space $H(D) = \{v \in L^2 : Dv \in L^2\}$. For $D = \operatorname{grad}$, the space $H(\operatorname{grad})$ is the standard $H^1$. For simplicity, we will suppress the domain $\Omega$ in the notation. Let $\boldsymbol{n}$ be the unit outward normal vector of $\partial\Omega$. We further introduce the corresponding Sobolev spaces with homogeneous traces, $H_0^1$, $H_0(\operatorname{curl})$, and $H_0(\operatorname{div})$, consisting of functions with vanishing trace, vanishing tangential trace, and vanishing normal trace on $\partial\Omega$, respectively.

Then, let us recall the following finite element spaces:

  • the well-known Lagrange element space of continuous piecewise polynomials,

  • the edge element space [23, 24],

  • the face element space [25, 23, 7, 24, 6, 8],

  • the space of discontinuous piecewise polynomials.

To discretize the vector Laplacian problem posed in $H(\operatorname{curl})$ or $H(\operatorname{div})$, we start from the de Rham complex connecting these spaces through the $\operatorname{grad}$, $\operatorname{curl}$, and $\operatorname{div}$ operators.

We choose appropriate degrees and types of finite element spaces such that the discrete de Rham complex holds

(6)

Important examples are: the linear Lagrange element for the vertex space, the lowest order Nédélec edge element, the lowest order Raviart–Thomas face element, and the piecewise constant space.
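For orientation, the continuous complex and its discrete counterpart (6) take the standard form recalled below; the symbols $S_h$, $V_h$, $W_h$, and $Q_h$ are placeholder names for the Lagrange, edge, face, and discontinuous spaces (not necessarily the paper's notation), and the version with homogeneous boundary traces is analogous:

\[
H^1 \xrightarrow{\ \operatorname{grad}\ } H(\operatorname{curl}) \xrightarrow{\ \operatorname{curl}\ } H(\operatorname{div}) \xrightarrow{\ \operatorname{div}\ } L^2,
\qquad
S_h \xrightarrow{\ \operatorname{grad}\ } V_h \xrightarrow{\ \operatorname{curl}\ } W_h \xrightarrow{\ \operatorname{div}\ } Q_h.
\]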

We now define weak differential operators and introduce the following exact sequence in the reversed ordering:

(7)

The weak divergence is defined as the adjoint of the $\operatorname{grad}$ operator in the $L^2$-inner product: for a given edge element function, its weak divergence is the vertex element function such that

(8)

The weak $\operatorname{curl}$ operator and the weak $\operatorname{grad}$ operator are defined similarly. For a given face element function, its weak curl is the edge element function defined by

(9)

For a given discontinuous piecewise polynomial function, its weak grad is the face element function defined by

(10)

In the limiting continuous case, these weak differential operators become the so-called co-differential operators, cf. [1].

The exactness of (7) can be easily verified from the definitions and the exactness of (6). Note that the inverses of mass matrices are involved when computing the weak differential operators, so these operators are global rather than local.
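As an illustration of this point, a minimal sketch of how the weak divergence of an edge element function could be computed; the matrix names Mv, Me (vertex and edge mass matrices) and G (grad matrix from vertex to edge degrees of freedom) are assumptions, and the sign follows the usual integration by parts convention.

import scipy.sparse.linalg as spla

def weak_divergence(Mv, Me, G, u):
    """Coefficients of the weak divergence of an edge element function u.

    The adjoint definition (div_h u, v) = -(u, grad v) for all vertex element
    test functions v leads to the linear system Mv d = -G^T Me u; the mass
    matrix solve is exactly what makes the weak operator global.
    """
    rhs = -(G.T @ (Me @ u))
    return spla.spsolve(Mv.tocsc(), rhs)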

We introduce the null space of differential operators:

and the null space of weak differential operators

Similar notation will be used for the null spaces in the continuous level when the subscript is skipped.

According to the exact sequence (6), we have the discrete Hodge decompositions [1]:

The notation used above stands for the $L^2$-orthogonal decomposition. These discrete versions of the Hodge decomposition play an important role in the analysis.

We update the exact sequences as:

(11)

and

(12)

The space at the end of each arrow is the range of the operator above it, and the space at the beginning is its actual domain. The precise characterization of the null spaces can be found by tracing back through the corresponding operators.

2.2. Discrete Formulations of Vector Laplacian.

On the continuous level, the mixed formulation of the vector Laplacian in $H(\operatorname{curl})$ is: find a scalar function and a vector field such that

(13)

The discrete counterpart of problem (13) is: find a vertex element function and an edge element function such that

(14)

Note that the first equation of (14) identifies the auxiliary scalar variable, up to sign, with the weak divergence of the edge element unknown, so the corresponding term in the second equation of (14) represents the weak $\operatorname{grad}\operatorname{div}$ part of the operator. After eliminating the auxiliary variable using the first equation, we can write the discrete vector Laplacian for edge elements as

(15)

which is a discretization of the identity $-\Delta \boldsymbol{u} = \operatorname{curl}\operatorname{curl}\boldsymbol{u} - \operatorname{grad}\operatorname{div}\boldsymbol{u}$.

Choosing appropriate bases for the finite element spaces, we can represent finite element functions by their coefficient vectors. In the following, we shall use the same notation for a function and for its vector representation if no ambiguity arises. Then we have the corresponding operator and matrix formulations:

(16)

Here the mass matrices of the vertex element, the edge element, and the face element appear; one block corresponds to a scaling of the $\operatorname{grad}$ operator and another to the $\operatorname{curl}$ operator. We follow the convention of the Stokes equations and reserve $B$ for the (negative) divergence operator. Note that forming the matrices of the weak differential operators involves the inverses of mass matrices. The Schur complement

(17)

is the matrix representation of the discrete vector Laplacian (15). The system (16) can be reduced to the Schur complement equation

(18)
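A minimal sketch of how a block matrix of this kind could be assembled with SciPy, using the same assumed matrix names as before (Mv, Me, Mf for the vertex, edge, and face mass matrices, G for the grad matrix, C for the curl matrix); the block placement and signs follow the generic convention of this sketch and are not guaranteed to match (16) entry by entry.

import scipy.sparse as sp

def saddle_point_matrix(Mv, Me, Mf, G, C):
    # 2x2 block saddle point matrix: vertex mass block, scaled grad coupling
    # blocks, and the curl-curl block C^T Mf C on the diagonal.
    B = Me @ G
    return sp.bmat([[-Mv, B.T], [B, C.T @ Mf @ C]], format="csr")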

Similarly, the mixed formulation of the vector Laplacian in $H(\operatorname{div})$ is: find a pair of vector fields such that

(19)

The corresponding discrete mixed formulation is: find an edge element function and a face element function such that

(20)

Eliminating the auxiliary variable using the first equation of (20), we obtain the discrete vector Laplacian for face elements as

(21)

and the operator and matrix formulations are:

(22)

where the mass matrix of the discontinuous element also appears. The corresponding Schur complement is the matrix representation of the discrete vector Laplacian (21). Similarly, the reduced equation of (22) is

(23)

We shall consider multigrid methods for solving (18) and (23) and use them to construct efficient preconditioners for the corresponding saddle point systems (16) and (22), respectively.

2.3. Discrete Poincaré Inequality and Inverse Inequality

In this subsection, we define the norms associated with the discrete vector Laplacian, and prove discrete Poincaré and inverse inequalities.

Definition 2.1.

For , define , where the bilinear form is defined as

Similarly, for , define , where the bilinear form is defined as

Lemma 2.2 (Discrete Poincaré Inequality).

We have the following discrete Poincaré inequalities:

(24)
(25)
Proof.

We prove the first inequality (24) and refer to [11] for a proof of (25). By the discrete Hodge decomposition, any function under consideration can be split into two components such that

(26)

Applying to (26), we have , thus

which leads to

(27)

To control the other part, we first prove a discrete Poincaré inequality in the form

(28)

By the exactness of the complex (12), the function can be written as the image of a discrete function under the corresponding differential operator. We recall another Poincaré inequality [22, 18]:

Then we have

Canceling one common factor, we obtain the desired inequality (28).

Applying to the Hodge decomposition (26) and using the inequality (28), we have , thus

which leads to the inequality

(29)

Combining inequalities (27) and (29), we have proved that

Remark 2.3.

The result and the proof can be easily generalized to mixed discretizations of the Hodge Laplacian in discrete differential forms [1]. We keep the concrete form in $H(\operatorname{curl})$- and $H(\operatorname{div})$-conforming finite element spaces for easy access to these results. ∎

It is easy to prove the following inverse inequalities:

3. Multigrid Methods for Discrete Vector Laplacian

In this section, we describe a variable V-cycle multigrid algorithm to solve the Schur complement equations (18) and (23), and prove that it is a good preconditioner.

3.1. Problem Setting

Let us assume that nested tetrahedral partitions of are given as

and the corresponding finite element spaces on each level are

For technical reasons, we assume that the edge element space and the face element space contain the full linear polynomials, which rules out only the lowest order case. When no ambiguity can arise, we replace the mesh-size subscript by the level index.

The discretization (13) of the mixed formulation of the vector Laplacian in $H(\operatorname{curl})$, based on the finite element spaces of each level, can be written as

(30)

Eliminating the auxiliary variable from (30), we get the reduced Schur complement equation

(31)

The discretization (19) of the mixed formulation of the vector Laplacian in $H(\operatorname{div})$ on each level can be written as

(32)

and the reduced Schur complement equation is

(33)

We are interested in preconditioning the Schur complement equations (31) and (33) on the finest level.

Notice that, on the coarser levels, the operators are defined by the discretization of the vector Laplacian on the corresponding triangulation, not by the Galerkin projection of the fine-level operator, since the inverse of a mass matrix is involved. In other words, the coarse-level operators are not inherited from the fine-level ones.

When necessary, notation without superscripts is used to unify the discussion of the two cases, and a single symbol is used to represent both the edge and the face element spaces.

3.2. A Variable V-cycle Multigrid Method

We first introduce some operators. On each level, let a smoothing operator be given, which is assumed to be symmetric and convergent. The prolongation operator from one level to the next finer level is the natural inclusion, since the finite element spaces are nested; its transpose represents the restriction from the finer level back to the coarser one. The Galerkin projection from a finer level to a coarser level is defined as follows: for any given function on the finer level, its projection satisfies

The variable V-cycle multigrid algorithm is as follows.
                                                                                                            
Algorithm 2. Multigrid Algorithm:
                                                                                                            
On the coarsest level, set the multigrid operator to be the exact solver.
For each finer level, assume that the operator on the previous (coarser) level has been defined, and define the operator on the current level as follows:

  • Pre-smoothing: Define for by

  • Coarse-grid correction: Define , where

  • Post-smoothing: Define for by

    Define .

                                                                                                            

In this algorithm, the smoothing count on each level is a positive integer which may vary from level to level and determines the number of smoothing iterations performed on that level; see [4, 5].
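A minimal sketch of a variable V-cycle of the type described in Algorithm 2, under the following assumptions: A[k] is the sparse operator on level k (with k = 0 the coarsest level, solved exactly), P[k] is the prolongation from level k-1 to level k, m[k] is the level-dependent number of smoothing steps, and smooth(A, x, b, steps) is a symmetric smoother such as symmetric Gauss-Seidel.

import numpy as np
import scipy.sparse.linalg as spla

def v_cycle(k, b, A, P, m, smooth):
    """One variable V-cycle for the level-k equation A[k] x = b."""
    if k == 0:
        return spla.spsolve(A[0].tocsc(), b)            # exact solve on the coarsest level
    x = smooth(A[k], np.zeros_like(b), b, m[k])          # pre-smoothing
    r_coarse = P[k].T @ (b - A[k] @ x)                   # restrict the residual
    x = x + P[k] @ v_cycle(k - 1, r_coarse, A, P, m, smooth)  # coarse-grid correction
    return smooth(A[k], x, b, m[k])                      # post-smoothing

One application of v_cycle on the finest level then serves as the preconditioner action inside a Krylov method, in line with the one-V-cycle discussion above.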

3.3. Multigrid Analysis Framework

We employ the multigrid analysis framework developed in [4] and denote by $\lambda_k$ the largest eigenvalue of the operator on level $k$. For the multigrid algorithm to be a good preconditioner, we need to verify the following two assumptions:

(A.1):

“Regularity and approximation assumption”: For some ,

holds with constant independent of ;

(A.2):

“Smoothing property”:

holds with constant independent of .

Following standard arguments, one can show that the largest eigenvalue $\lambda_k$ of the operator on level $k$ is bounded by a constant multiple of $h_k^{-2}$, where $h_k$ denotes the mesh size on that level.

3.4. Smoothing Property

Both the symmetric Gauss-Seidel (SGS) iteration and a properly weighted Jacobi iteration satisfy the smoothing property (A.2); a proof can be found in [5]. For completeness, we present a short proof below.

Recall that the Gauss-Seidel iteration can be understood as a successive subspace correction method applied to the basis decomposition with exact local solvers [26]. On each level, decompose a function into its basis components. By the X-Z identity [27, 10] for the multiplicative method, we have

where the operators appearing in the identity are the orthogonal projections onto the one-dimensional subspaces spanned by the basis functions. For a given index, we denote the set of indices whose basis functions have overlapping support with the basis function of that index. We then estimate the second term as

Here we use the sparsity of the matrix: the repetition in the summation, i.e., the number of indices in each such overlap set, is uniformly bounded by a constant. The last step follows from the stability of the basis decomposition in the $L^2$-norm, which holds for all finite element spaces under consideration.

We have thus proved an estimate which is equivalent to the smoothing property (A.2) by a simple change of variable. A similar proof can be adapted to the weighted Jacobi smoother.
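For completeness, a minimal sketch of the point-wise symmetric Gauss-Seidel smoother discussed here; it needs nothing beyond the sparse matrix itself, which is what makes it a black-box smoother, and its interface matches the smooth(A, x, b, steps) placeholder used in the V-cycle sketch above.

import scipy.sparse as sp
from scipy.sparse.linalg import spsolve_triangular

def symmetric_gauss_seidel(A, x, b, steps=1):
    """Point-wise symmetric Gauss-Seidel sweeps for an SPD sparse matrix A."""
    A = A.tocsr()
    DL = sp.tril(A, 0).tocsr()   # lower triangular part including the diagonal
    DU = sp.triu(A, 0).tocsr()   # upper triangular part including the diagonal
    for _ in range(steps):
        x = x + spsolve_triangular(DL, b - A @ x, lower=True)    # forward sweep
        x = x + spsolve_triangular(DU, b - A @ x, lower=False)   # backward sweep
    return x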

3.5. Regularity Results

In this subsection, we present some regularity results for the Maxwell equations. Recall that we assume $\Omega$ is a bounded and convex polyhedron throughout this paper.

Lemma 3.1 (Theorems 3.7 and 3.9 in [15]).

The spaces $H_0(\operatorname{curl}) \cap H(\operatorname{div})$ and $H(\operatorname{curl}) \cap H_0(\operatorname{div})$ are continuously embedded into $H^1$, with the $H^1$-norm bounded by the $L^2$-norms of the function, its curl, and its divergence, for all functions in either of these two spaces.

In the sequel, we develop a further regularity result for the Maxwell equations.

Lemma 3.2.

Consider functions in either of the spaces of Lemma 3.1 satisfying the given conditions. Then

Proof.

Consider the zero extension of the function from $\Omega$ to $\mathbb{R}^3$, and its Fourier transform defined as usual by

By a careful calculation, we can prove that

The desired result then follows from the properties of the Fourier transform. ∎

We then have the following regularity result for the Maxwell equations.

Lemma 3.3.

For any given right-hand side, define the corresponding solution of

(34)

Then and

(35)
(36)
Proof.

Indeed, (34) implies that the stated equation holds in $L^2$, and the desired regularity (35) then follows from Lemma 3.1.

For any , let . Then equation (34) implies

Thus, we have

Again by Lemma 3.1, it holds

The desired result (36) is then obtained by Lemma 3.2. ∎

3.6. Error Estimate of Several Projection Operators

We define several projection operators onto the null space. Given a function, find its projection in the null space such that

(37)

Equation (37) determines the projection uniquely, since the bilinear form is an inner product on the subspace; this can be proved using the discrete Poincaré inequality (Lemma 2.2). For , we understand as .

Lemma 3.4 (Theorem 2.4 in Monk [21]).

Suppose the solution is sufficiently regular, and let the finite element space contain polynomials of degree less than or equal to a fixed order. Then we have the error estimate