
# Development and Analysis of a Block-Preconditioner for the Phase-Field Crystal Equation

Simon Praetorius (corresponding author: simon.praetorius@tu-dresden.de) and Axel Voigt
Institut für Wissenschaftliches Rechnen, Technische Universität Dresden, D-01062 Dresden, Germany
###### Abstract

We develop a preconditioner for the linear system arising from a finite element discretization of the Phase Field Crystal (PFC) equation. The PFC model serves as an atomic description of crystalline materials on diffusive time scales and thus offers the opportunity to study long time behaviour of materials with atomic details. This requires adaptive time stepping and efficient time discretization schemes, for which we use an embedded Rosenbrock scheme. To resolve spatial scales of practical relevance, parallel algorithms are also required, which scale to large numbers of processors. The developed preconditioner provides such a tool. It is based on an approximate factorization of the system matrix and can be implemented efficiently. The preconditioner is analyzed in detail and shown to speed up the computation drastically.

Key words. Phase-Field Crystal equation, Preconditioner, Finite Element method, Spectral analysis, Rosenbrock time-discretization

AMS subject classifications. 65F08, 65F10, 65N22, 65Y05, 65Z05, 82C21, 82D25

## 1 Introduction

The Phase Field Crystal (PFC) model was introduced as a phenomenological model for solid state phenomena on an atomic scale [26, 27]. However, it can also be motivated and derived through classical dynamic density functional theory (DDFT) [28, 65] and has been used for various applications in condensed and soft matter physics, see the review [31] and the references therein. Applications include non-equilibrium processes in complex fluids [7, 53], dislocation dynamics [19], nucleation processes [12, 9, 35, 13], (dendritic) growth [32, 70, 62] and grain growth [10].

The main solution methods for the PFC model, which is a non-linear 6th order parabolic partial differential equation, are finite-difference discretizations and spectral methods, which are combined with an explicit or semi-implicit time-discretization. Numerical details are described in [20, 37, 38, 63, 29].

Recently, the PFC model has been coupled to other field variables, such as flow [53], orientational order [1, 54] and mesoscopic phase-field parameters [45]. This limits the applicability of spectral methods due to the lack of periodic boundary conditions in these applications. On the other hand, simulations in complex geometries have been considered, e.g. colloids in confinements motivated by studies of DDFT [5], crystallization on embedded manifolds [14, 11, 4, 60] or particle-stabilized emulsions, where the PFC model is considered at fluid-fluid interfaces [2, 3]. These applications further limit the use of spectral and finite-difference methods or sometimes even rule them out. The finite element method provides high flexibility concerning complex geometries and coupling to other partial differential equations, which is the motivation to develop efficient solution methods for the PFC model based on finite element discretizations.

Basic steps of finite element methods include refinement and coarsening of a mesh, error estimation, assembling of a linear system and solving the linear system. Most previous finite element simulations for the PFC model [12, 14, 53] have used direct solvers for the last step, which however restrict the system size due to high memory requirements and only allow computations in 2D. Well-established iterative Krylov-subspace solvers for linear systems, like CG, MINRES, GMRES, TFQMR and BiCGStab, are not directly applicable to the PFC equation, or do not converge, or converge only very slowly, when used without a preconditioner or with standard preconditioners like Jacobi or ILU.

In this paper, we propose a block-preconditioner for the discretized PFC equations and analyze it with respect to the convergence properties of a GMRES method. We have organized the paper as follows: In the next section, we formulate the PFC model in terms of a higher order non-linear partial differential equation. Section 3 introduces a space- and time-discretization of the model, including the treatment of the non-linearity. In Section 4, the preconditioner is introduced and an efficient preconditioning procedure is formulated. The convergence analysis of GMRES is introduced in Section 5, and Section 6 provides an analysis of the preconditioner in terms of a spectral analysis. Finally, in Section 7 we examine the preconditioner in numerical examples and demonstrate its efficiency. Conclusion and outlook are provided in Section 8.

## 2 Modelling

We consider the original model introduced in [26], which is a conserved gradient flow of a Swift-Hohenberg energy and serves as a model system for a regular periodic wave-like order-parameter field that can be interpreted as particle density. The Swift-Hohenberg energy is given here in a simplified form:

 F(ψ) = ∫Ω ¼ψ⁴ + ½ψ(r + (1 + Δ)²)ψ dx,   (2.1)
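The energy (2.1) can be evaluated numerically. The following sketch is our own illustration (not part of the paper): it applies the operator (1 + Δ)² spectrally on a hypothetical 1D periodic domain with a uniform grid; the function name and setup are assumptions for demonstration only.

```python
import numpy as np

def pfc_energy_1d(psi, L, r):
    """Evaluate F(psi) = int 1/4 psi^4 + 1/2 psi (r + (1 + Lap)^2) psi dx
    on a 1D periodic domain of length L (illustrative sketch only)."""
    n = psi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)      # wave numbers
    # apply (1 + Laplacian)^2 in Fourier space: its symbol is (1 - k^2)^2
    op_psi = np.fft.ifft((1.0 - k**2) ** 2 * np.fft.fft(psi)).real
    integrand = 0.25 * psi**4 + 0.5 * psi * (r * psi + op_psi)
    return integrand.sum() * (L / n)                  # rectangle-rule quadrature
```

For ψ = A·cos(x) on [0, 2π] the operator term vanishes, since k = 1 is a root of the symbol (1 − k²)², so the energy reduces to its local polynomial part, consistent with the wave-like minimizers of the PFC model.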

where the order-parameter field ψ describes the deviation from a reference density, the parameter r can be related to the temperature of the system and Ω is the spatial domain. According to the notation in [65] we consider the H⁻¹-gradient flow of F, the PFC2-model:

 ∂tψ = Δ δF[ψ]/δψ,   (2.2)

respectively, as a generalization of (2.2), a Wasserstein gradient flow [43] of F, the PFC1-model:

 ∂tψ = ∇⋅(ψ⁺ ∇ δF[ψ]/δψ),   (2.3)

with a mobility coefficient ψ⁺, which is bounded from below111The lower bound is due to the scaling and shifting of the order-parameter from a physical density, which is itself bounded from below.. By calculus of variations and splitting of higher order derivatives, we can find a set of second order equations, which will be analyzed in this paper:

 μ = ψ³ + (1+r)ψ + 2Δψ + Δω
 ∂tψ = ∇⋅(ψ⁺∇μ)        in Ω × [0,T]   (2.4)
 ω = Δψ,

for a time interval [0,T] and subject to an initial condition for ψ in Ω and boundary conditions on ∂Ω, e.g. homogeneous Neumann boundary conditions

 ∂nψ = ∂nω = ψ⁺∂nμ = 0 on ∂Ω.

## 3 Discrete equations

To transform the partial differential equation (2.4) into a system of linear equations, we discretize in space using finite elements and in time using a backward Euler discretization, respectively a Rosenbrock discretization scheme.

Let Ω ⊂ ℝᵈ (d = 1, 2, 3) be a regular domain with a conforming triangulation Th(Ω), with a discretization parameter h describing the maximal element size in the triangulation. We consider simplicial meshes, i.e. made of line segments in 1D, triangles in 2D and tetrahedra in 3D. Let

 Vh := {v ∈ H¹(Ω) : v|T ∈ Pp(T) ∀T ∈ Th(Ω)}

be the corresponding finite element space, with Pp(T) the space of local polynomials of degree at most p on the element T; a fixed low polynomial degree is used in our simulations. The problem (2.4) in discrete weak form can be stated as:

Find (μh, ψh, ωh) ∈ (Vh)³, with ψh satisfying the initial condition, s.t.

 (μh − ψh³ − (1+r)ψh, ϑh)Ω + (2∇ψh + ∇ωh, ∇ϑh)Ω + (∂tψh, ϑ′h)Ω + (ψh⁺∇μh, ∇ϑ′h)Ω + (ωh, ϑ′′h)Ω + (∇ψh, ∇ϑ′′h)Ω = 0  ∀ϑh, ϑ′h, ϑ′′h ∈ Vh,   (3.1)

with (u, v)Ω := ∫Ω u·v dx denoting the L²(Ω) inner product.

In the following let 0 = t₀ < t₁ < ⋯ < tN = T be a discretization of the time interval [0, T]. Let τk := tk+1 − tk be the timestep width in the k-th iteration and ψk, respectively μk and ωk, the discrete functions at time tk. Applying a semi-implicit Euler discretization to (3.1) results in a time and space discrete system of equations:

Let ψ₀ be given. For k = 0, 1, 2, … find (μk+1, ψk+1, ωk+1) ∈ (Vh)³, s.t.

 a⁽ᵉ⁾((μk+1, ψk+1, ωk+1), (ϑh, ϑ′h, ϑ′′h)) := (μk+1 − (1+r)ψk+1, ϑh)Ω + (2∇ψk+1 + ∇ωk+1, ∇ϑh)Ω + (ψk+1, ϑ′h)Ω + (τk ψk⁺ ∇μk+1, ∇ϑ′h)Ω + (ωk+1, ϑ′′h)Ω + (∇ψk+1, ∇ϑ′′h)Ω
  = (ψk³, ϑh)Ω + (ψk, ϑ′h)Ω =: ⟨F⁽ᵉ⁾, (ϑh, ϑ′h, ϑ′′h)⟩  ∀ϑh, ϑ′h, ϑ′′h ∈ Vh.   (3.2)

Instead of taking the term ψk³ explicitly, it is pointed out in [12] that a linearization of this non-linear term stabilizes the system and allows for larger timestep widths. Therefore, we replace ψk³ by 3ψk²ψk+1 − 2ψk³. Thus (3.2) reads

 a((μk+1, ψk+1, ωk+1), (ϑh, ϑ′h, ϑ′′h)) := (μk+1 − (1+r)ψk+1 − 3ψk²ψk+1, ϑh)Ω + (2∇ψk+1 + ∇ωk+1, ∇ϑh)Ω + (ψk+1, ϑ′h)Ω + (τk ψk⁺ ∇μk+1, ∇ϑ′h)Ω + (ωk+1, ϑ′′h)Ω + (∇ψk+1, ∇ϑ′′h)Ω
  = (−2ψk³, ϑh)Ω + (ψk, ϑ′h)Ω =: ⟨F, (ϑh, ϑ′h, ϑ′′h)⟩  ∀ϑh, ϑ′h, ϑ′′h ∈ Vh.   (3.3)

Let {φi} be a basis of Vh, then we can define the system matrix A and the right-hand side vector b, for the linear system Ax = b, as

 A = ⎡ A00  A01  A02 ⎤
     ⎢ A10  A11  A12 ⎥
     ⎣ A20  A21  A22 ⎦,   b = (b0, b1, b2)ᵀ,

with each block defined via

 [Aij]kl = a(ejφl, eiφk),   [bi]j = ⟨F, eiφj⟩,

where ei denotes the i-th Cartesian unit vector.

Introducing the short cuts M and K for the mass- and stiffness-matrix, K⁺(ψk) for the mobility matrix and N(ψk) for the matrix of the linearized non-linear term, i.e.

 [M]kl = (φl, φk)Ω,   [K]kl = (∇φl, ∇φk)Ω,   [K⁺(ψk)]kl = (ψk⁺∇φl, ∇φk)Ω,   [N(ψk)]kl = −(3ψk²φl, φk)Ω,

we can write A as

 A = ⎡ M           −(1+r)M + N(ψk) + 2K   K ⎤
     ⎢ τkK⁺(ψk)    M                      0 ⎥
     ⎣ 0           K                      M ⎦.   (3.4)
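A block system with the structure of (3.4) can be assembled as follows. This is a minimal sketch under stated assumptions: hypothetical 1D P1 mass/stiffness matrices stand in for M and K, constant mobility K⁺ = K is assumed, and N(ψk) is approximated here by a diagonally scaled mass matrix; none of these specific choices come from the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def p1_matrices_1d(n, L=1.0):
    """P1 mass matrix M and stiffness matrix K on a uniform 1D grid
    (hypothetical stand-ins for the paper's FE matrices)."""
    h = L / (n - 1)
    e = np.ones(n)
    M = sp.diags([h / 6 * e[:-1], 2 * h / 3 * e, h / 6 * e[:-1]], [-1, 0, 1]).tolil()
    M[0, 0] = M[-1, -1] = h / 3
    K = sp.diags([-e[:-1] / h, 2 * e / h, -e[:-1] / h], [-1, 0, 1]).tolil()
    K[0, 0] = K[-1, -1] = 1 / h
    return M.tocsr(), K.tocsr()

n, r, tau = 20, -0.25, 1e-6
M, K = p1_matrices_1d(n)
psi_k = 0.1 * np.sin(np.linspace(0, np.pi, n))
# N(psi_k) approximated by a diagonally scaled mass matrix (illustration only)
N = sp.diags(-3.0 * psi_k**2) @ M
A = sp.bmat([[M, -(1 + r) * M + N + 2 * K, K],
             [tau * K, M, None],            # constant mobility: K+ = K
             [None, K, M]], format="csr")
x = spsolve(A, np.ones(3 * n))
```

The `bmat` helper keeps the three-field block structure explicit, which mirrors the ordering of the unknowns (μ, ψ, ω) used throughout the paper.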

From the last block row we find Mω̲k+1 = −Kψ̲k+1, i.e. ω̲k+1 = −M⁻¹Kψ̲k+1, and we introduce B := KM⁻¹K. Using this, we can define a new matrix A′ to decouple the first two equations from the last equation, i.e.

 A′ = ⎡ M           −(1+r)M + N(ψk) + 2K − B ⎤
      ⎣ τkK⁺(ψk)    M                        ⎦.   (3.5)

With x = (μ̲, ψ̲, ω̲)ᵀ and x′ = (μ̲, ψ̲)ᵀ, where the discrete coefficient vectors correspond to a discretization with the same basis functions as the matrices, i.e.

 ψh = Σi ψ⁽ⁱ⁾φi with coefficients ψ̲ = (ψ⁽ⁱ⁾)i,

and μ̲ and ω̲ defined in the same manner, we have

 (3.3) ⇔ Ax = b ⇔ A′x′ = b′,  Mω̲k+1 = −Kψ̲k+1.

The reduced system can be seen as a discretization of a partial differential equation including the Bi-Laplacian, i.e.

 ∂tψ = ∇⋅(ψ⁺∇μ),  with μ = ψ³ + (1+r)ψ + 2Δψ + Δ²ψ.

In the following, we will drop the underscore for the coefficient vectors for ease of reading.

### 3.1 Rosenbrock time-discretization

To obtain a time discretization with high accuracy, good stability and an easy step size control, we replace the discretization (3.2), respectively (3.3), by an embedded Rosenbrock time-discretization scheme, see e.g. [36, 46, 55, 41, 42].

To this end, consider the abstract general form of a differential algebraic equation

 M∂tx = F[x],

with a linear (mass-)operator M and a (non-linear) differential operator F. Using the notation JF(x)[y] for the Gâteaux derivative of F at x in the direction y, one can write down a general Rosenbrock scheme

 (1/(τkγ)) M yki − JF(xk)[yki] = F[xki] + ∑_{j=1}^{i−1} (cij/τk) M ykj,  for i = 1, …, s,   (3.6)
 xki = xk + ∑_{j=1}^{i−1} aij ykj  (i-th stage solution),
 xk+1 = xk + ∑_{j=1}^{s} mj ykj,   x̂k+1 = xk + ∑_{j=1}^{s} m̂j ykj,   (3.7)

with coefficients γ, aij, cij, mj, m̂j and timestep width τk. The weights mj and m̂j build up linear combinations of the intermediate solutions of two different orders. This can be used in order to estimate the timestep error and control the timestep width. Details about stepsize control can be found in [36, 46]. The coefficients used for the PFC equation are based on the Ros3Pw scheme [55] and are listed in Table 1. This W-method has s = 3 internal stages and is strongly A-stable. As a Rosenbrock method it is of order 3. It avoids order reduction when applied to semidiscretized parabolic PDEs and is thus applicable to our equations.
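The stage loop (3.6)-(3.7) can be sketched generically. The snippet below is our own illustration: it implements a W-method with the Jacobian frozen at xk and, for brevity, is instantiated with the one-stage linearly implicit Euler coefficients (s = 1, γ = 1, m₁ = 1) rather than the Ros3Pw coefficients of Table 1.

```python
import numpy as np

def rosenbrock_step(M, F, J, x_k, tau, gamma, a, c, m):
    """One timestep of the generic Rosenbrock scheme (3.6)-(3.7).
    M: mass matrix; F: right-hand side; J: (approximate) Jacobian of F;
    a, c: strictly lower-triangular stage coefficients; m: weights."""
    s = len(m)
    A_sys = M / (tau * gamma) - J(x_k)      # W-method: Jacobian frozen at x_k
    ys = []
    for i in range(s):
        x_i = x_k + sum(a[i][j] * ys[j] for j in range(i))
        rhs = F(x_i) + sum((c[i][j] / tau) * (M @ ys[j]) for j in range(i))
        ys.append(np.linalg.solve(A_sys, rhs))
    return x_k + sum(m[j] * ys[j] for j in range(s))

# one-stage example (linearly implicit Euler) on a linear model problem
Lop = np.diag([-1.0, -10.0])
x1 = rosenbrock_step(np.eye(2), lambda v: Lop @ v, lambda v: Lop,
                     np.array([1.0, 1.0]), tau=0.1,
                     gamma=1.0, a=[[0.0]], c=[[0.0]], m=[1.0])
```

For a linear problem this one-stage method coincides with the implicit Euler step x_{k+1} = (I − τL)⁻¹x_k; an embedded second weight vector m̂ would additionally provide the error estimator used for step size control.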

In case of the PFC system (2.4) we have x = (μ, ψ, ω)ᵀ and Mx = (0, ψ, 0)ᵀ. The functional F applied to x is given by

 F[x] = ⎡ −μ + ψ³ + (1+r)ψ + 2Δψ + Δω ⎤
        ⎢ ∇⋅(ψ⁺∇μ)                    ⎥
        ⎣ −ω + Δψ                     ⎦.   (3.8)

For the Jacobian of F in the direction y = (dμ, dψ, dω)ᵀ we find

 JF(x)[y] = FLin[y] + ⎡ 3ψ²dψ                          ⎤
                      ⎢ ∇⋅(ψ⁺∇dμ) + ∇⋅(dψ ∂ψ(ψ⁺)∇μ)    ⎥
                      ⎣ 0                              ⎦
          = FLin[y] + ⎡ 3ψ²dψ                ⎤
                      ⎢ ∇⋅(ψ⁺∇dμ) + ∇⋅(dψ∇μ) ⎥   (assuming ψ⁺ = ψ − ψmin).
                      ⎣ 0                    ⎦

By multiplication with test functions ϑ = (ϑ, ϑ′, ϑ′′) and integration over Ω, we can derive a weak form of equation (3.6):

For i = 1, …, s find yki ∈ (Vh)³, s.t.

 (1/(τkγ)) (Myki, ϑ)Ω − JF(xk)[yki, ϑ] = F(xki)[ϑ] + ∑_{j=1}^{i−1} (cij/τk) (Mykj, ϑ)Ω  ∀ϑ ∈ (Vh)³,   (3.9)

with the linear form F(x)[ϑ]:

 F(x)[ϑ] = [(−μ + (1+r)ψ, ϑ)Ω − (2∇ψ + ∇ω, ∇ϑ)Ω − (ω, ϑ′′)Ω − (∇ψ, ∇ϑ′′)Ω] + (ψ³, ϑ)Ω − (ψ⁺∇μ, ∇ϑ′)Ω
         =: FLin(x)[ϑ] + (ψ³, ϑ)Ω − (ψ⁺∇μ, ∇ϑ′)Ω

and the bi-linear form JF(x)[y, ϑ]:

 JF(x)[y, ϑ] = FLin(y)[ϑ] + (3ψ²dψ, ϑ)Ω − (ψ⁺∇dμ + dψ∇μ, ∇ϑ′)Ω,  for y = (dμ, dψ, dω)ᵀ.

Using the definitions of the elementary matrices M, K, K⁺ and N as above and introducing the matrix F(μk) with entries [F(μk)]ij = (φj∇μk, ∇φi)Ω for the derivative of the mobility term, we can write the Rosenbrock discretization in matrix form for the i-th stage iteration:

 AR yki = bRi,   AR = ⎡ M           −(1+r)M + 2K + N(ψk)   K ⎤
                      ⎢ τkK⁺(ψk)    (1/γ)M + τkF(μk)       0 ⎥
                      ⎣ 0           K                      M ⎦,   (3.10)

with bRi the assembled right-hand side of (3.9), with a factor τk multiplied to the second component. The system matrix AR in each stage of one Rosenbrock time iteration is very similar to the matrix A derived for the simple backward Euler discretization in (3.4), up to a factor 1/γ in front of a mass matrix and the derivative F(μk) of the mobility term. The latter simplifies in case of the PFC2 model (2.2) with its constant mobility, where K⁺(ψk) = K and F(μk) = 0.

## 4 Preconditioning the linear systems

To solve the linear system Ax = b, respectively AR yki = bRi, linear solvers must be applied. As direct solvers like UMFPACK [21], MUMPS [6] or SuperLU_DIST [47] suffer from a fast increase in memory requirements and poor scaling for massively parallel problems, iterative solution methods are required. The system matrix A, respectively AR, is non-symmetric, not positive definite and non-normal, which restricts the choice of applicable solvers. We here use a GMRES algorithm [59], respectively the flexible variant FGMRES [58], to allow for preconditioners with (non-linear) iterative inner solvers, like a CG method.

Instead of solving the linear system Ax = b directly, we consider the modified system AP⁻¹y = b with x = P⁻¹y, i.e. a right preconditioning of the matrix A with a preconditioner P. A natural requirement for the preconditioner is that linear systems with P should be simple and fast to solve for arbitrary right-hand sides, since such a solve is applied to the Krylov basis vectors in each iteration of the (F)GMRES method.

We propose a block-preconditioner for the upper-left 2×2 block matrix of A, based on an approach similar to a preconditioner developed for the Cahn-Hilliard equation [17]. Therefore, we first simplify the matrix AR, respectively the corresponding reduced system A′, by considering a fixed timestep τ := τk and using a constant mobility approximation, i.e. replacing K⁺(ψk) by the stiffness matrix K scaled with the mean of the mobility coefficient. For simplicity, we develop the preconditioner for the case of constant unit mobility and a fixed timestep only. For small timestep widths the semi-implicit Euler time-discretization (3.2) is a good approximation of (3.10), so we neglect the non-linear term N(ψk). What remains is the reduced system

 ⎡ M    −(1+r)M + 2K − B ⎤
 ⎣ τK   M                ⎦.

By adding a small perturbation, scaled by a parameter δ > 0, to the diagonal of this matrix, we can find a matrix P that has an explicit triangular block-factorization and that we propose as a preconditioner for the original matrix A′. In each (F)GMRES iteration, the preconditioner is applied to a vector b = (b0, b1)ᵀ, that means solving the linear system Px = b for x = (x0, x1)ᵀ, in four steps:

 (1) My0 = b0
 (2) (M + δK)y1 = b1 − τKy0
 (3) (M − 2δK + δB)x1 = My1
 (4) x0 = y0 + (1/δ)(y1 − x1).

Since the overall system matrix A has a third component, which was removed for the construction of the preconditioner, the third component of the vector has to be preconditioned as well. This can be performed by solving: (5) Mx2 = b2 − Kx1.
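The five solves can be sketched as follows. This is an illustration only, using dense NumPy solves and forming B = KM⁻¹K explicitly, which is exactly what the approximations of the following subsections avoid; step (2) uses the result y0 of step (1) on its right-hand side.

```python
import numpy as np

def apply_preconditioner(M, K, b0, b1, b2, tau, delta):
    """Apply the block preconditioner, steps (1)-(5), to b = (b0, b1, b2).
    Illustrative dense sketch; forming B explicitly is avoided in practice."""
    B = K @ np.linalg.solve(M, K)                                # B = K M^{-1} K
    y0 = np.linalg.solve(M, b0)                                  # (1)
    y1 = np.linalg.solve(M + delta * K, b1 - tau * (K @ y0))     # (2)
    x1 = np.linalg.solve(M - 2 * delta * K + delta * B, M @ y1)  # (3)
    x0 = y0 + (y1 - x1) / delta                                  # (4)
    x2 = np.linalg.solve(M, b2 - K @ x1)                         # (5)
    return x0, x1, x2
```

Each step involves only mass-matrix-like or Helmholtz-like systems, which are cheap to solve iteratively; the single difficult solve is step (3), discussed next.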

In step (3) we have to solve

 Sx1 := (M − 2δK + δKM⁻¹K)x1 = My1,   (4.2)

which requires special care: forming the matrix S explicitly is not an option, as the inverse of the mass matrix is dense and thus S would be dense as well. In the following subsections we give two approximations to solve this problem.

### 4.1 Diagonal approximation of the mass matrix

Approximating the mass matrix M by a diagonal matrix MD leads to a sparse approximation of S. Using the ansatz MD := diag(M), the matrix S can be approximated by

 SD := M − 2δK + δK MD⁻¹ K.

By estimating the eigenvalues of the generalized eigenvalue problem Sx = λSDx we show, similarly as in [16], that the proposed matrix SD is a good approximation of S.

###### Lemma 1

The eigenvalues of the generalized eigenvalue problem Sx = λSDx are bounded by the bounds of the eigenvalues of the generalized eigenvalue problem Mx = λMDx, for the mass-matrix M and its diagonal approximation MD.

Proof. We follow the argumentation of [16, Section 3.2].

Using the matrices K̂ := M^(−1/2) K M^(−1/2) and D̂ := M^(1/2) MD⁻¹ M^(1/2) we can reformulate the eigenvalue problem Sx = λSDx as

 λ M^(1/2)(I − 2δK̂ + δK̂D̂K̂)M^(1/2) x = M^(1/2)(I − 2δK̂ + δK̂K̂)M^(1/2) x.   (4.3)

Multiplying from the left with xᵀM^(1/2), dividing by ∥M^(1/2)x∥² and defining the normalized vector y := M^(1/2)x/∥M^(1/2)x∥ results in a scalar equation for λ:

 λ(1 − 2δk + δk²d) = 1 − 2δk + δk²,

with the Rayleigh quotients k := yᵀK̂y and d := (zᵀD̂z)/(zᵀz) for z := K̂y, assuming that yᵀK̂²y can be identified with k². We arrive at

 λ = (1 − 2δk + δk²) / (1 − 2δk + δk²d),

where the difference in the highest order terms of the rational function is the factor d. From the definition of d and D̂, bounds for λ are given by the bounds of the eigenvalues of the generalized eigenvalue problem Mx = λMDx.

In [67] concrete bounds are provided for linear and quadratic Lagrangian finite elements on triangles and for linear Lagrangian elements on tetrahedra. For the latter, the bound on the eigenvalues of Mx = λMDx translates directly to the bound for λ, and thus SD provides a reasonable approximation of S.
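As a quick numerical illustration (our own, in 1D with P1 elements, not the triangle/tetrahedral elements analyzed in [67]), one can compare the generalized eigenvalues of SD⁻¹S with the spectrum of MD⁻¹M:

```python
import numpy as np
from scipy.linalg import eigvals, eigvalsh

# 1D P1 mass/stiffness matrices on a uniform grid (illustration only)
n = 40
h = 1.0 / (n - 1)
e = np.ones(n)
M = (h / 6) * (4 * np.diag(e) + np.diag(e[:-1], 1) + np.diag(e[:-1], -1))
M[0, 0] = M[-1, -1] = h / 3
K = (1 / h) * (2 * np.diag(e) - np.diag(e[:-1], 1) - np.diag(e[:-1], -1))
K[0, 0] = K[-1, -1] = 1 / h

delta = 1e-2
MD = np.diag(np.diag(M))               # diagonal approximation of M
S  = M - 2 * delta * K + delta * K @ np.linalg.solve(M, K)
SD = M - 2 * delta * K + delta * K @ np.linalg.solve(MD, K)

lam  = np.real(eigvals(np.linalg.solve(SD, S)))   # spectrum of SD^{-1} S
bnds = eigvalsh(M, MD)                            # spectrum of MD^{-1} M
print(lam.min(), lam.max(), bnds.min(), bnds.max())
```

In this experiment the eigenvalues of SD⁻¹S remain in a moderate interval tied to the spectrum of MD⁻¹M, which is the qualitative content of Lemma 1; the quantitative bounds cited above hold for the element types treated in [67].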

###### Remark 1

Other diagonal approximations based on lumped mass matrices could also be used, which however would lead to different eigenvalue bounds.

### 4.2 Relation to a Cahn-Hilliard system

An alternative to the diagonal approximation can be achieved by using the similarity of step (3) in the preconditioning with the discretization of a Cahn-Hilliard equation [18, 17]. This equation can be written using higher order derivatives:

 ∂tc = Δ(c³) − Δc − ηΔ²c,

with a parameter η related to the interface thickness. For an Euler discretization in time with timestep width τ′ and a finite element discretization in space as above, we find the discrete equation

 (M − τ′K + τ′ηB − τ′N′(ck)) ck+1 = Mck.

Setting τ′ = 2δ and η = 1/2, and neglecting the Jacobian operator N′(ck), we recover equation (4.2). A preconditioner for the Cahn-Hilliard equation, see [17, 16, 8], thus might help to solve the equation in step (3), which we rewrite as a block system

 ⎡ M     M − ηK ⎤ ⎛ ∗  ⎞   ⎛ 0   ⎞
 ⎣ τ′K   M      ⎦ ⎝ x1 ⎠ = ⎝ My1 ⎠,   (4.4)

with Schur complement M − τ′K + τ′ηKM⁻¹K, which equals the matrix S from (4.2) for the above choice of τ′ and η. Using the proposed inner preconditioner of [17, p. 13], whose Schur complement SCH serves as a direct approximation of equation (4.4), respectively (4.2), i.e.

 SCHx1 = (M + 2√δK + δKM⁻¹K)x1 = My1,   (4.5)

we arrive at a simple two-step procedure for step (3):

 (3.1) (M + √δK)z = My1,
 (3.2) (M + √δK)x1 = Mz.
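Steps (3.1)-(3.2) factor SCH exactly, since (M + √δK)M⁻¹(M + √δK) = M + 2√δK + δKM⁻¹K, so two solves with the sparse matrix M + √δK replace one solve with the dense SCH. A minimal sketch, with hypothetical dense stand-in matrices:

```python
import numpy as np

def step3_cahn_hilliard(M, K, y1, delta):
    """Approximate step (3) by the two-step procedure (3.1)-(3.2):
    solve (M + sqrt(delta) K) z = M y1, then (M + sqrt(delta) K) x1 = M z.
    This solves S_CH x1 = M y1 exactly, because
    S_CH = (M + sqrt(delta) K) M^{-1} (M + sqrt(delta) K)."""
    A = M + np.sqrt(delta) * K
    z = np.linalg.solve(A, M @ y1)       # (3.1)
    return np.linalg.solve(A, M @ z)     # (3.2)
```

In practice each of the two solves would itself be performed by an inner iterative method, since M + √δK is sparse and well conditioned for small δ.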
###### Lemma 2

The eigenvalues λ of the generalized eigenvalue problem Sx = λSCHx satisfy (1 − √δ)/2 ≤ λ ≤ 1.

Proof. We follow the proof of [52, Theorem 4] and denote by μ an eigenvalue of K⁻¹M with corresponding eigenvector x. We have M and K symmetric and positive definite, hence SCH is positive definite and thus invertible.

 SCH⁻¹Sx = λx
 ⇒ (M + 2√δK + δKM⁻¹K)⁻¹(M − 2δK + δKM⁻¹K)x = λx
 ⇒ (I + √δM⁻¹K)⁻²(I − 2δM⁻¹K + δ(M⁻¹K)²)x = λx.

Thus, for each eigenvalue μ > 0 of K⁻¹M, i.e. each eigenvalue 1/μ of M⁻¹K, we have

 λ(μ) := (μ² − 2δμ + δ)(μ + √δ)⁻²

as an eigenvalue of SCH⁻¹S, and since SCH⁻¹S is similar to a symmetric matrix, all eigenvalues are real and determined in this way.

Since −2δμ ≤ 0 and δ = (√δ)², we find

 λ(μ) ≤ (μ² + (√δ)²)(μ + √δ)⁻² ≤ 1

and λ(μ) ≥ λ(√δ) = (1 − √δ)/2 for all μ ≥ 0, as λ attains its minimum at μ = √δ. This leads to the lower bound (1 − √δ)/2.

With this Lemma we can argue that SCH provides a good approximation of S, at least for small timestep widths τ, respectively small parameters δ.

We can write the matrix P in terms of the Schur complement matrix S. Inserting SCH instead of S gives the precondition-matrix for the Cahn-Hilliard approximation

 PCH := P(SCH) = ⎡ M     −2√δK − B           ⎤
                 ⎣ τK    M + (δ + 2√δ)K + δB ⎦.

## 5 Convergence analysis of the Krylov-subspace method

To analyze the proposed preconditioners for the GMRES algorithm, we have a look at the norm of the residuals rk := b − Axk of the approximate solutions xk obtained in the k-th step of the GMRES algorithm. In our studies, we are interested in estimates of the residual norm of the form

 ∥rk∥₂/∥r₀∥₂ = min_{p∈Πk} ∥p(A)r₀∥₂/∥r₀∥₂ ≤ min_{p∈Πk} ∥p(A)∥₂,   (5.1)

with Πk the space of polynomials p of degree at most k normalized by p(0) = 1, and r₀ the initial residual. The right-hand side corresponds to an ideal-GMRES bound that excludes the influence of the initial residual. In order to get an idea of the convergence behavior, we have to estimate or approximate the right-hand side term by values that are attainable by an analysis of A. Replacing A by the preconditioned matrix AP⁻¹ we hope to get an improvement in the residuals.

A lower bound for the right-hand side of (5.1) can be found by using the spectral mapping theorem σ(p(A)) = p(σ(A)), as

 min_{p∈Πk} max_{λ∈σ(A)} |p(λ)| ≤ min_{p∈Πk} ∥p(A)∥₂,   (5.2)

see [64, 23], and an upper bound can be stated by finding a set S(A) ⊂ ℂ associated with A, so that

 min_{p∈Πk} ∥p(A)∥₂ ≤ C min_{p∈Πk} max_{λ∈S(A)} |p(λ)|,   (5.3)

where C is a constant that depends on the condition number of the eigenvector matrix, on the ϵ-pseudospectrum of A, respectively on the field of values of A.

Both estimates contain the min-max value of polynomials on a set S. In [56, 23] it is shown that the limit

 lim_{k→∞} [ min_{p∈Πk} max_{λ∈S} |p(λ)| ]^(1/k) =: ρS

exists, where ρS is called the estimated asymptotic convergence factor related to the set S. Thus, for large k we expect a behavior for the right-hand side of (5.1) like

 ρ_{σ(A)}^k ⪅ min_{p∈Πk} ∥p(A)∥₂ ⪅ C ρ_{S(A)}^k.

The tilde indicates that these estimates only hold asymptotically, for k → ∞.

In the next two sections, we will summarize known results on how to obtain the asymptotic convergence factor and the constant C in the approximation of the relative residual bound.

### 5.1 The convergence prefactor

The constant C plays an important role in the case of non-normal matrices, as pointed out by [30, 64], and can dominate the convergence in the first iterations. It is shown in Section 6 that the linear operator related to A is non-normal and that the preconditioned operator related to AP⁻¹ is non-normal as well. Thus, we have to take a look at this constant.

An estimate of the convergence constant C, applicable for general non-normal matrices, is related to the ϵ-pseudospectrum of the matrix A. This can be defined via the spectra of perturbed matrices [64, 30]:

 σϵ(A) := {z ∈ ℂ : z ∈ σ(A + E), ∥E∥₂ ≤ ϵ}.

Let Γϵ := ∂σϵ(A) be the boundary of the ϵ-pseudospectrum, respectively a union of Jordan curves approximating this boundary; then

 min_{p∈Πk} ∥p(A)∥₂ ≤ (|Γϵ| / (2πϵ)) min_{p∈Πk} max_{λ∈Γϵ} |p(λ)| ≤ (|Γϵ| / (2πϵ)) min_{p∈Πk} max_{λ∈σϵ(A)} |p(λ)|,   (5.4)

and thus C = |Γϵ|/(2πϵ), with |Γϵ| the length of the curve Γϵ [64]. This estimate is approximated, using the asymptotic convergence factor for large k, by

 min_{p∈Πk} ∥p(A)∥₂ ⪅ (|Γϵ| / (2πϵ)) ρ_{Γϵ}^k ≤ (|Γϵ| / (2πϵ)) ρ_{σϵ(A)}^k.   (5.5)

This constant gives a first insight into the convergence behavior of the GMRES method for the PFC matrix A, respectively the preconditioned matrix AP⁻¹.

### 5.2 The asymptotic convergence factor

The asymptotic convergence factor ρS, where S is a set in the complex plane, e.g. σ(A), σϵ(A) or Γϵ, can be estimated by means of potential theory [23, 44]. Therefore, we have to construct a conformal mapping Φ of the exterior of S to the exterior of the unit disk with Φ(∞) = ∞. We assume that S is connected; otherwise, we take a slightly larger connected set. Having Φ, the convergence factor is then given by

 ρS = 1 / |Φ(0)|.   (5.6)

Let S = [α, β] be a real interval with 0 < α < β and κ := β/α; after rescaling the interval to [1, κ], a conformal mapping from the exterior of the interval to the exterior of the unit circle is given by

 Φ(z) = (2z − κ − 1 − 2√(z² − (κ+1)z + κ)) / (κ − 1),   (5.7)

see [23]222In [23] the sign of the square-root is wrong and thus the exterior of the interval is mapped to the interior of the unit circle. In formula (5.7) this has been corrected., and gives the asymptotic convergence factor

 ρ[α,β] = (√κ − 1) / (√κ + 1),   (5.8)

which is the well-known convergence bound for the CG method for symmetric positive definite matrices, with κ the spectral condition number of the matrix. In the case of non-normal matrices, the value κ is not necessarily connected to the matrix condition number.
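The relation ρ[α,β] = 1/|Φ(0)| can be checked directly. The snippet below is our own illustration; it evaluates (5.7) with the principal branch of the square root, which is adequate at z = 0 and on the interval itself.

```python
import numpy as np

def phi_interval(z, kappa):
    """Conformal map (5.7) for the interval [1, kappa], principal branch."""
    w = np.sqrt(complex(z)**2 - (kappa + 1) * z + kappa)
    return (2 * z - kappa - 1 - 2 * w) / (kappa - 1)

def rho_interval(kappa):
    """Asymptotic convergence factor (5.8) for S = [alpha, beta], kappa = beta/alpha."""
    s = np.sqrt(kappa)
    return (s - 1) / (s + 1)
```

For example, κ = 25 gives Φ(0) = −3/2 and ρ = (√25 − 1)/(√25 + 1) = 2/3 = 1/|Φ(0)|, while points of the interval itself, such as z = 1, are mapped onto the unit circle.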

In the next section, we will apply the given estimates for the asymptotic convergence factor and for the constant C to the Fourier transforms of the operators that define the PFC equation, in order to get an estimate of the behavior of the GMRES solver.

## 6 Spectral analysis of the preconditioner

We analyze the properties and quality of the proposed preconditioner by means of a Fourier analysis. We therefore consider an unbounded, respectively periodic, domain and introduce continuous operators A, A₀ and P, associated with the linear part of (3.3), with the linear part of the 6th order non-splitted version of (2.2), and with the preconditioner, for x = (μ, ψ, ω):

 A[x] := ⎡ μ − (1+r)ψ − 2Δψ − Δω ⎤
         ⎢ −τΔμ + ψ              ⎥,   A₀[ψ] := ψ − τΔ((1+r)ψ + 2Δψ + Δ²ψ),
         ⎣ ω − Δψ                ⎦

with a fixed timestep width τ > 0. The operator that represents the preconditioner reads

 P[x] := ⎡ μ − 2Δψ − Δ²ψ          ⎤
         ⎢ −τΔμ + ψ + δΔψ + δΔ²ψ  ⎥.
         ⎣ ω − Δψ                 ⎦

Using the representation of P in terms of the Schur complement S, we can also formulate the operator PCH that determines the Cahn-Hilliard approximation of the preconditioner by inserting SCH:

 PCH[x] := ⎡ μ + 2√δΔψ − Δ²ψ               ⎤
           ⎢ −τΔμ + ψ − (δ + 2√δ)Δψ + δΔ²ψ ⎥.
           ⎣ ω − Δψ                        ⎦

We denote by k the wave vector with wave number k := ∥k∥. The Fourier transform of a function u will be denoted by û := F[u] and is defined as

 F : u(r) ↦ û(k) = ∫_{ℝ³} e^(−i(k⋅r)) u(r) dr.

Using the inverse Fourier transform, the operators A and P applied to x, respectively A₀ applied to ψ, can be expressed as A[x] = F⁻¹[Â F[x]], and analogously for A₀ and P, with Â, Â₀ and P̂ the symbols of A, A₀ and P, respectively. These symbols can be written in terms of the wave number k:

 Â x̂ = ⎡ 1    −(1+r) + 2k²   k² ⎤ ⎛ μ̂ ⎞
        ⎢ τk²  1              0  ⎥ ⎜ ψ̂ ⎟,   (6.1)
        ⎣ 0    k²             1  ⎦ ⎝ ω̂ ⎠

 Â₀ ψ̂ = (1 + τ[(1+r)k² − 2k⁴ + k⁶]) ψ̂,   (6.2)

 P̂ x̂ = ⎡ 1    2k² − k⁴       0 ⎤ ⎛ μ̂ ⎞
        ⎢ τk²  1 − δk² + δk⁴  0 ⎥ ⎜ ψ̂ ⎟,   (6.3)
        ⎣ 0    k²             1 ⎦ ⎝ ω̂ ⎠

 P̂CH