Multigrid Preconditioners for Mixed Finite Element Methods of the Vector Laplacian
Due to its indefiniteness and poor spectral properties, the linear algebraic system arising from discretizing the vector Laplacian by mixed finite element methods is hard to solve. A block diagonal preconditioner has been developed and shown to be effective by Arnold, Falk, and Winther [Acta Numerica, 15:1–155, 2006]. The purpose of this paper is to propose alternative and effective block diagonal and block triangular preconditioners for solving this saddle point system. A variable V-cycle multigrid method with the standard point-wise Gauss-Seidel smoother is proved to be a good preconditioner for a discrete vector Laplacian operator. This multigrid solver is further used to build preconditioners for the saddle point systems of the vector Laplacian and of the Maxwell equations with a divergence-free constraint. The major benefit of our approach is that the point-wise Gauss-Seidel smoother is more algebraic and can be easily implemented as a black-box smoother.
Key words and phrases: Saddle point system, multigrid methods, mixed finite elements, vector Laplacian, Maxwell equations
2010 Mathematics Subject Classification: 65N55; 65F10; 65N22; 65N30
Discretization of the vector Laplacian in the spaces $H(\operatorname{curl})$ and $H(\operatorname{div})$ by mixed finite element methods is well studied in . The discretized linear algebraic system is ill-conditioned and in saddle point form, which leads to slow convergence of classical iterative methods as the size of the system grows. In , a block diagonal preconditioner has been developed and shown to be effective. The purpose of this paper is to present alternative and effective block diagonal and block triangular preconditioners for solving these saddle point systems.
Due to the similarity of the problems arising from the $H(\operatorname{curl})$ and $H(\operatorname{div})$ spaces, we use the mixed formulation of the vector Laplacian in $H(\operatorname{curl})$ as an example to illustrate our approach. Choosing appropriate finite element spaces (a vertex element space and an edge element space), the mixed formulation is: find a solution pair such that
The corresponding matrix formulation is
Here the two mass matrices are those of the vertex element and the face element, respectively; one off-diagonal block corresponds to a scaled discrete gradient operator, and the remaining block corresponds to the discrete curl operator.
With this notation, a block diagonal preconditioner is proposed, and the preconditioned Krylov subspace method is shown to converge with optimal complexity. To compute the inverse operators in the diagonal, multigrid methods based on additive or multiplicative overlapping Schwarz smoothers , multigrid methods based on Hiptmair smoothers [16, 17], or the HX auxiliary space preconditioner  can be used. In all these methods, to achieve a mesh-independent condition number, a special smoother taking care of the large kernel of the curl (or div) differential operator is needed.
In contrast, we shall apply multigrid methods with the standard point-wise Gauss-Seidel (G-S) smoother to the Schur complement obtained by eliminating the mass-matrix block,
which is a matrix representation of the vector Laplacian identity $-\Delta = \operatorname{curl}\,\operatorname{curl} - \operatorname{grad}\,\operatorname{div}$.
In (2), the inverse of the mass matrix is dense. To be practical, the exact Schur complement can be replaced by an approximation
with an easy-to-invert matrix, e.g., the diagonal or a mass lumping of the mass matrix.
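As a concrete illustration of mass lumping (a minimal sketch with a toy 1D P1 mass matrix; the function name `lumped_mass_inverse` and the example matrix are our own, not the paper's notation), the lumped matrix collects each row sum onto the diagonal, so its inverse is diagonal and trivial to apply:

```python
import numpy as np
import scipy.sparse as sp

def lumped_mass_inverse(M):
    """Inverse of the row-sum (lumped) approximation of a sparse SPD mass matrix.

    The lumped matrix is diagonal, so its inverse is diagonal as well,
    in contrast to the dense inverse of M itself.
    """
    lumped = np.asarray(M.sum(axis=1)).ravel()  # row sums -> diagonal entries
    return sp.diags(1.0 / lumped)

# Toy P1 mass matrix on a uniform 1D mesh (scaled): tridiag(1/6, 4/6, 1/6).
n = 5
M = sp.diags([np.full(n - 1, 1 / 6), np.full(n, 4 / 6), np.full(n - 1, 1 / 6)],
             [-1, 0, 1], format="csr")
Minv = lumped_mass_inverse(M)
```

Applying `Minv` costs one diagonal scaling per iteration, whereas applying the exact inverse of `M` would require a linear solve.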
We shall prove that a variable V-cycle multigrid method using the standard point-wise Gauss-Seidel smoother is a good preconditioner for the Schur complement or its approximation. The major benefit of our approach is that the point-wise Gauss-Seidel smoother is more algebraic and can be easily implemented as a black-box smoother. The block smoothers proposed in  for the $H(\operatorname{curl})$ and $H(\operatorname{div})$ problems, however, require more geometric information and the solution of local problems on small patches.
Although the finite element spaces are nested and the Schur complement is symmetric positive definite, due to the inverse of the mass matrix, the bilinear forms on the coarse grids are not inherited from the fine one. To overcome this difficulty, we shall follow the multigrid framework developed by Bramble, Pasciak, and Xu . In this framework, we only need to verify two conditions: (1) the regularity and approximation assumption; (2) the smoothing property. Since the operator is symmetric and positive definite, the smoothing property of the Gauss-Seidel smoother is well known; see, e.g., . To prove the approximation property, we make use of the error estimates of mixed finite element methods established in  and thus have to assume the full regularity of elliptic equations. Numerically, our method works well even when the full regularity does not hold. With the approximation and smoothing properties, we show that one V-cycle is an effective preconditioner. As noticed in , a W-cycle or two V-cycles may not be a valid preconditioner, as the corresponding operator may not be positive definite. In other words, the proposed multigrid method for the Schur complement cannot be used as an iterative method, but one V-cycle can be used as an effective preconditioner.
The multigrid preconditioner for the Schur complement will be used to build preconditioners for (1). We propose a block diagonal preconditioner and a block triangular preconditioner:
The inverse actions appearing in these preconditioners can be further approximated by one V-cycle of multigrid. Following the framework of , we prove that the preconditioned system using these two preconditioners has a uniformly bounded condition number by establishing a new stability result of the saddle point system (1) in an appropriate norm.
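The effect of block diagonal preconditioning of a saddle point system can be sketched as follows (a minimal illustration on a small random system of our own construction; exact factorizations stand in for the V-cycle approximations, and all matrix names are assumptions for the sketch, not the paper's notation):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n, m = 20, 8

# Hypothetical symmetric saddle point system K = [[A, B^T], [B, 0]] with A SPD.
X = sp.random(n, n, density=0.2, random_state=rng)
A = X @ X.T
A = ((A + A.T) / 2 + sp.eye(n)).tocsr()          # symmetrize, make SPD
B = sp.hstack([sp.eye(m), sp.random(m, n - m, density=0.3,
                                    random_state=rng)]).tocsr()
K = sp.bmat([[A, B.T], [B, None]], format="csr")

# Block diagonal preconditioner diag(A, S)^{-1} with Schur complement
# S = B A^{-1} B^T.  Exact factorizations are used here for clarity; the
# paper replaces each block solve by one multigrid V-cycle.
A_solve = spla.factorized(A.tocsc())
S = (B @ spla.inv(A.tocsc()) @ B.T).tocsc()
S_solve = spla.factorized(S)

def apply_P(r):
    """Apply the block diagonal preconditioner to a residual vector."""
    return np.concatenate([A_solve(r[:n]), S_solve(r[n:])])

P = spla.LinearOperator(K.shape, matvec=apply_P)
b = rng.standard_normal(n + m)
x, info = spla.minres(K, b, M=P)
```

MINRES is applicable because the system stays symmetric and the preconditioner is symmetric positive definite; the block triangular variant would instead be combined with a nonsymmetric solver such as GMRES.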
As an application, we further consider a prototype of the Maxwell equations with a divergence-free constraint
A regularized system obtained by the augmented Lagrangian method  has the form
where the leading block is the vector Laplacian in $H(\operatorname{curl})$. We then construct a block diagonal preconditioner and a block triangular preconditioner
The paper is organized as follows. In Section 2, we introduce the discretization of the mixed formulation of the vector Laplacian and prove stability results. In Section 3, we consider multigrid methods for the discrete vector Laplacian and verify the approximation and smoothing properties. In Section 4, we propose uniform preconditioners for the vector Laplacian and apply them to the Maxwell equations in saddle point form. Finally, we support our theoretical results with numerical experiments.
In this section, we first recall the function spaces and finite element spaces, and then present discrete formulations of the vector Laplacian problem in both the $H(\operatorname{curl})$ and $H(\operatorname{div})$ settings. We shall define a new norm using the Schur complement and prove corresponding Poincaré and inverse inequalities.
We assume that the domain is a bounded and convex polyhedron in $\mathbb{R}^3$ with simple topology (homeomorphic to a ball), and that it is triangulated into a mesh with mesh size $h$. We assume that the mesh belongs to a shape-regular and quasi-uniform family.
2.1. Function Spaces and Finite Element Spaces
We use $L^2$ to denote the space of all square integrable scalar or vector functions on the domain and $(\cdot,\cdot)$ for both the scalar and vector $L^2$-inner product. Given a differential operator $d \in \{\operatorname{grad}, \operatorname{curl}, \operatorname{div}\}$, we introduce the Sobolev space $H(d) = \{v \in L^2 : d v \in L^2\}$. For $d = \operatorname{grad}$, $H(\operatorname{grad})$ is the standard $H^1$. For simplicity, we will suppress the domain in the notation. Let $\boldsymbol{n}$ be the unit outward normal vector of the boundary. We further introduce the following Sobolev spaces with homogeneous traces:
Then, let us recall the following finite element spaces:
is the well-known Lagrange element space, i.e., continuous piecewise polynomials,
is the space of discontinuous piecewise polynomials.
To discretize the vector Laplacian problem posed in $H(\operatorname{curl})$ or $H(\operatorname{div})$, we start from the following de Rham complex
We choose appropriate degrees and types of finite element spaces such that the discrete de Rham complex holds
Important examples are: the linear Lagrange element; the lowest order Nédélec edge element; the lowest order Raviart-Thomas element; and the piecewise constant space.
We now define weak differential operators and introduce the following exact sequence in the reversed ordering:
The weak divergence is defined as the adjoint of the grad operator in the $L^2$-inner product, i.e., it is the unique element such that
The weak curl and weak gradient operators are defined similarly. For a given function, the weak curl is defined as
For a given function, the weak gradient is defined as
In the limiting case, these weak differential operators become the so-called co-differential operators, cf. , and will be denoted accordingly.
The exactness of (7) can be easily verified from the definitions and the exactness of (6). Note that the inverses of mass matrices are involved when computing the weak differential operators, and thus they are global operators.
We introduce the null space of differential operators:
and the null space of weak differential operators
Similar notation will be used for the null spaces at the continuous level, where the subscript is omitted.
The notation $\oplus$ stands for the $L^2$-orthogonal decomposition. These discrete versions of the Hodge decomposition play an important role in the analysis.
We update the exact sequences as:
The space at the end of each arrow is the range of the operator above it, and the space at the beginning is its domain. The precise characterization of the null spaces can be found by tracing back the corresponding operators.
2.2. Discrete Formulations of Vector Laplacian.
On the continuous level, the mixed formulation of the vector Laplacian in $H(\operatorname{curl})$ is: find a solution pair such that
The discrete counterpart of problem (13) is: find a pair of finite element functions such that
Note that the first equation of (14) determines the auxiliary variable in terms of the weak divergence of the primary unknown. After eliminating the auxiliary variable using the first equation, we can write the discrete vector Laplacian for edge elements as
which is a discretization of the identity $-\Delta = \operatorname{curl}\,\operatorname{curl} - \operatorname{grad}\,\operatorname{div}$.
Choosing appropriate bases for the finite element spaces, we can represent finite element functions by their coefficient vectors. In the following, we shall use the same notation for the vector representation of a function if no ambiguity arises. Then we have the corresponding operator and matrix formulations:
Here the mass matrices are those of the vertex element, the edge element, and the face element, respectively; one matrix corresponds to a scaling of the grad operator, and another to the curl operator. We follow the convention of the Stokes equations in reserving a dedicated symbol for the (negative) divergence operator. Note that forming the matrices of the weak differential operators involves the inverses of mass matrices. The Schur complement
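In matrix terms, the Schur complement combines the curl-curl stiffness with a term involving the inverse of the vertex mass matrix. A minimal sketch with stand-in matrices (all names, sizes, and entries here are our own illustration, not the paper's discrete operators):

```python
import numpy as np

rng = np.random.default_rng(2)
nv, ne = 4, 9                              # hypothetical vertex / edge dof counts

Z = rng.standard_normal((ne, 5))
C = Z @ Z.T                                # stand-in for the (PSD) curl-curl stiffness
B = rng.standard_normal((ne, nv))          # stand-in for the mixed gradient matrix
Mv = np.diag(rng.uniform(1.0, 2.0, nv))    # stand-in SPD vertex mass matrix

# Schur complement after eliminating the vertex variable: even though C, B,
# and Mv are sparse in practice, the factor Mv^{-1} couples all unknowns,
# which is why an easy-to-invert approximation of Mv is used instead.
A = C + B @ np.linalg.inv(Mv) @ B.T
```

The sketch makes the structural point visible: the result is symmetric and positive semidefinite, but generically dense because of the mass-matrix inverse.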
Similarly, the mixed formulation of the vector Laplacian in $H(\operatorname{div})$ is: find a solution pair such that
The corresponding discrete mixed formulation is: Find such that
Eliminating the auxiliary variable using the first equation of (20), we have the discrete vector Laplacian for face elements as
and the operator and matrix formulations are:
2.3. Discrete Poincaré Inequality and Inverse Inequality
In this subsection, we define the norms associated with the discrete vector Laplacian, and prove discrete Poincaré and inverse inequalities.
For functions in the edge element space, define the associated norm, where the bilinear form is defined as
Similarly, for functions in the face element space, define the associated norm, where the bilinear form is defined as
Lemma 2.2 (Discrete Poincaré Inequality).
We have the following discrete Poincaré inequalities:
Applying to (26), we have , thus
which leads to
To control the other part, we first prove a discrete Poincaré inequality in the form
Then we have
Canceling one common factor, we obtain the desired inequality (28).
which leads to the inequality
The result and the proof can be easily generalized to mixed discretizations of the Hodge Laplacian in discrete differential forms . We keep the concrete form in $H(\operatorname{curl})$- and $H(\operatorname{div})$-conforming finite element spaces for easy access to these results. ∎
It is easy to prove the following inverse inequalities:
3. Multigrid Methods for Discrete Vector Laplacian
3.1. Problem Setting
Let us assume that nested tetrahedral partitions of are given as
and the corresponding vertex, edge, and face element spaces are
For a technical reason, we assume that the edge element space and the face element space contain the full linear polynomials, which rules out only the lowest order case. When no ambiguity can arise, we replace subscripts by the level index.
The discretization (13) of the mixed formulation of the vector Laplacian in $H(\operatorname{curl})$ on each level can be written as
Eliminating the auxiliary variable from (30), we get the reduced Schur complement equation
The discretization (19) of the mixed formulation of the vector Laplacian in $H(\operatorname{div})$ on each level can be written as
and the reduced Schur complement equation is
Notice that, on the coarse levels, the operators are defined by the discretization of the vector Laplacian on the corresponding triangulation, not by the Galerkin projection of the fine-level operator, since the inverse of a mass matrix is involved. In other words, the coarse-level operators are not inherited from the fine-level ones.
When necessary, notation without superscripts is used to unify the discussion, and a single symbol is used to represent both the $H(\operatorname{curl})$ and $H(\operatorname{div})$ spaces.
3.2. A Variable V-cycle Multigrid Method
We first introduce some operators. Let a smoothing operator on each level be given, which is assumed to be symmetric and convergent. The prolongation operator from one level to the next finer one is the natural inclusion, since the finite element spaces are nested; its transpose then represents the restriction. The Galerkin projection from a fine level to the coarser one is defined as follows: for any given function, its projection satisfies
The variable V-cycle multigrid algorithm is as follows.
Algorithm 2. Multigrid Algorithm:
Assume that the preconditioner on the coarser level has been defined, and define the current-level preconditioner as follows:
Pre-smoothing: Define for by
Coarse-grid correction: Define , where
Post-smoothing: Define for by
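The three steps above can be sketched in code as follows (a dense two-level toy for the 1D Laplacian; the names `vcycle`, `gauss_seidel`, and the Galerkin coarse operator are our own simplifications — in the paper the coarse operators are rediscretized, not inherited):

```python
import numpy as np

def gauss_seidel(A, b, x, steps):
    """Forward point Gauss-Seidel sweeps (dense, for illustration only)."""
    for _ in range(steps):
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

def vcycle(level, b, x, A, P, m):
    """One variable V-cycle: m[level] pre/post smoothing steps on each level.

    A[l] is the matrix on level l (A[0] coarsest), P[l] prolongates from
    level l-1 to level l, and restriction is the transpose P[l].T.
    """
    if level == 0:
        return np.linalg.solve(A[0], b)            # exact coarsest-grid solve
    x = gauss_seidel(A[level], b, x, m[level])     # pre-smoothing
    r_c = P[level].T @ (b - A[level] @ x)          # restrict the residual
    e_c = vcycle(level - 1, r_c, np.zeros_like(r_c), A, P, m)
    x = x + P[level] @ e_c                         # coarse-grid correction
    return gauss_seidel(A[level], b, x, m[level])  # post-smoothing

# Two-level hierarchy for the 1D Dirichlet Laplacian with 7 fine unknowns.
A1 = 2 * np.eye(7) - np.eye(7, k=1) - np.eye(7, k=-1)
P1 = np.array([[.5, 0, 0], [1, 0, 0], [.5, .5, 0], [0, 1, 0],
               [0, .5, .5], [0, 0, 1], [0, 0, .5]])   # linear interpolation
A0 = P1.T @ A1 @ P1                                   # Galerkin coarse matrix
A_levels, P_levels, m = [A0, A1], [None, P1], [0, 2]

b = np.ones(7)
x = np.zeros(7)
for _ in range(10):                                    # V-cycle iteration
    x = vcycle(1, b, x, A_levels, P_levels, m)
```

In a variable V-cycle the smoothing counts `m[level]` typically grow toward coarser levels; for this SPD toy problem the cycle also converges as a stand-alone iteration, which, as discussed in the text, is not guaranteed for the non-inherited Schur complement operators.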
3.3. Multigrid Analysis Framework
We employ the multigrid analysis framework developed in . Denote by $\lambda_k$ the largest eigenvalue of the level-$k$ operator. For the multigrid algorithm to be a good preconditioner, we need to verify the following assumptions:
“Regularity and approximation assumption”: For some ,
holds with constant independent of ;
holds with constant independent of .
Following standard arguments, we can show that the largest eigenvalue of the level-$k$ operator satisfies $\lambda_k \lesssim h_k^{-2}$.
3.4. Smoothing Property
The symmetric Gauss-Seidel (SGS) iteration or a properly weighted Jacobi iteration both satisfy the smoothing property (A.2); a proof can be found in . For completeness, we present a short proof below.
Recall that the Gauss-Seidel iteration can be understood as a successive subspace correction method applied to the basis decomposition with exact local solvers . For each level, let the basis decomposition be given. By the X-Z identity [27, 10] for the multiplicative method, we have
where the projection is the orthogonal projection onto the subspace spanned by a single basis function. For each index, we denote the set of indices whose corresponding basis functions have overlapping support with the given basis function. We then estimate the second term as
Here we use the sparsity of the matrix so that the repetition in the summation, i.e., the number of overlapping indices, is uniformly bounded above by a constant. The last step follows from the stability of the basis decomposition in the $L^2$-norm, which holds for all finite element spaces under consideration.
We have thus proved the desired bound, which is equivalent to the smoothing property by a simple change of variable. A similar proof can be adapted to the weighted Jacobi smoother.
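The key algebraic fact behind the SGS smoother can be checked numerically (a small sketch with a random SPD matrix of our own choosing): writing $A = L + D + L^T$ with $D$ diagonal and $L$ strictly lower triangular, one forward plus one backward sweep corresponds to the preconditioner $B = (D+L)D^{-1}(D+L)^T$, and $B - A = LD^{-1}L^T$ is positive semidefinite, so $B$ dominates $A$ in the SPD ordering and the iteration is convergent.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 6))
A = X @ X.T + 6 * np.eye(6)        # random SPD test matrix
D = np.diag(np.diag(A))            # diagonal part
L = np.tril(A, -1)                 # strictly lower triangular part

# Symmetric Gauss-Seidel: forward sweep with (D + L), backward sweep with
# (D + L)^T, equivalent to the symmetric preconditioner B below.
B = (D + L) @ np.linalg.inv(D) @ (D + L).T

# B - A = L D^{-1} L^T is positive semidefinite, hence B >= A in the SPD
# order; this is the algebraic heart of the smoothing property for SGS.
gap = B - A
```

The same identity explains why the point-wise smoother needs no geometric information: only the matrix splitting $A = L + D + L^T$ is used.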
3.5. Regularity Results
In this subsection, we present some regularity results for the Maxwell equations. Recall that we assume the domain is a bounded and convex polyhedron throughout this paper.
Lemma 3.1 (Theorem 3.7 and 3.9 in ).
The spaces $H_0(\operatorname{curl}) \cap H(\operatorname{div})$ and $H(\operatorname{curl}) \cap H_0(\operatorname{div})$ are continuously embedded into $H^1$ and
for all functions in either space.
In the sequel, we develop a regularity result for the Maxwell equations.
For functions in either of the spaces above satisfying the stated conditions, we have
Take the zero extension of the function from $\Omega$ to $\mathbb{R}^3$ and denote its Fourier transform, defined as usual by
By a careful calculation, we can prove that
The desired result follows by the properties of Fourier transform. ∎
We then have the following regularity result for the Maxwell equations.
For any , define to be the solution of
3.6. Error Estimate of Several Projection Operators
We define several projection operators onto the null space. Given a function, find its projection such that
Lemma 3.4 (Theorem 2.4 in Monk ).
Suppose the stated regularity holds and let the finite element space contain all polynomials of degree less than or equal to a given order. Then we have the error estimate