# Spectral-Galerkin Approximation and Optimal Error Estimate for Stokes Eigenvalue Problems in Polar Geometries
This work is supported in part by the National Natural Science Foundation of China grants No. 11661022, 91130014, 11471312, 91430216, 11471031, and U1530401; and by the US National Science Foundation grant DMS-1419040.

###### Abstract

In this paper we propose and analyze spectral-Galerkin methods for the Stokes eigenvalue problem based on the stream function formulation in polar geometries. We first analyze the stream-function-formulated fourth-order equation in polar coordinates, then derive the pole conditions and reduce the problem on a circular disk to a sequence of equivalent one-dimensional eigenvalue problems that can be solved in parallel. The novelty of our approach lies in the construction of suitably weighted Sobolev spaces according to the pole conditions, based on which the optimal error estimate for the approximated eigenvalues of each one-dimensional problem can be obtained. Further, we extend our method to the non-separable Stokes eigenvalue problem in an elliptic domain and establish optimal error bounds. Finally, we provide some numerical experiments to validate our theoretical results and algorithms.

Keywords: Stokes eigenvalue problem, polar geometry, pole condition, spectral-Galerkin approximation, optimal error analysis

## 1 Introduction

We consider in this paper the Stokes eigenvalue problem, which arises in the stability analysis of stationary solutions of the Navier-Stokes equations [20]:

(1.1) $-\Delta\mathbf{u}+\nabla p=\lambda\mathbf{u}$  in $\Omega$,

(1.2) $\nabla\cdot\mathbf{u}=0$  in $\Omega$,

(1.3) $\mathbf{u}=\mathbf{0}$  on $\partial\Omega$,

where $\mathbf{u}$ is the flow velocity, $p$ is the pressure, $\Delta$ is the Laplacian operator, $\Omega$ is the flow domain, and $\partial\Omega$ denotes the boundary of the flow domain $\Omega$.

Let us introduce the stream function $\psi$ such that $\mathbf{u}=\nabla\times\psi:=(\partial_y\psi,\,-\partial_x\psi)$. Then we derive an alternative formulation for (1.1)-(1.3):

(1.4) $\Delta^2\psi=-\lambda\Delta\psi$  in $\Omega$,

(1.5) $\psi=\partial_n\psi=0$  on $\partial\Omega$,

where $\mathbf{n}$ is the unit outward normal to the boundary $\partial\Omega$. Problem (1.4) is also referred to as the biharmonic eigenvalue problem for plate buckling. The naturally equivalent weak form of (1.4)-(1.5) reads: find $(\lambda,\psi)\in\mathbb{R}\times H_0^2(\Omega)$ with $\psi\neq 0$ such that

(1.6) $a(\psi,v)=\lambda\, b(\psi,v)\qquad \forall\, v\in H_0^2(\Omega),$

where the bilinear forms $a(\cdot,\cdot)$ and $b(\cdot,\cdot)$ are defined by
$$a(\psi,v)=\int_{\Omega}\Delta\psi\,\Delta v\,\mathrm{d}x\,\mathrm{d}y,\qquad b(\psi,v)=\int_{\Omega}\nabla\psi\cdot\nabla v\,\mathrm{d}x\,\mathrm{d}y.$$
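For the reader's convenience, the passage from (1.1)-(1.3) to (1.4)-(1.5) can be sketched as follows; this is a standard computation, under the sign convention $\nabla\times\psi=(\partial_y\psi,\,-\partial_x\psi)$ assumed above:

```latex
% Apply the scalar curl  \nabla\times(v_1,v_2) := \partial_x v_2 - \partial_y v_1
% to the momentum equation (1.1); the pressure drops out, since \nabla\times\nabla p \equiv 0:
-\Delta\,(\nabla\times\mathbf{u}) \;=\; \lambda\,\nabla\times\mathbf{u}.
% With \mathbf{u} = \nabla\times\psi = (\partial_y\psi,\,-\partial_x\psi), one has
% \nabla\times\mathbf{u} = -\partial_x^2\psi - \partial_y^2\psi = -\Delta\psi, hence
\Delta^2\psi \;=\; -\lambda\,\Delta\psi \quad \text{in } \Omega,
% while \mathbf{u} = \mathbf{0} on \partial\Omega yields both conditions in (1.5):
\psi \;=\; \partial_n\psi \;=\; 0 \quad \text{on } \partial\Omega.
```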

There are various numerical approaches to solving (1.4)-(1.5). Mixed finite element methods introduce an auxiliary function to reduce the fourth-order equation to a saddle point problem, and then discretize the reduced second-order equations with $C^0$ continuous finite elements [8, 22, 10, 29]; however, spurious solutions may occur in some situations. The conforming finite element methods, including Argyris elements [2] and the partition of unity finite elements [11], require globally continuously differentiable finite element spaces, which are difficult to construct and implement. The third type of approach uses non-conforming finite elements, such as Adini elements [1], Morley elements [19, 21, 25] and the $C^0$ interior penalty Galerkin method [26]; their disadvantage is that such elements do not come in a natural hierarchy. Both the conforming and nonconforming finite element methods are based on the naturally equivalent variational formulation (1.6), and usually involve low-order polynomials and guarantee only a low order of convergence.

In contrast, it is observed in [31] that the spectral method, whenever it is applicable, has a tremendous advantage over the traditional $h$-version methods. In particular, spectral and spectral element methods using high-order orthogonal polynomials for fourth-order equations yield an exponential order of convergence for smooth solutions [23, 6, 5, 13, 30, 14, 9]. In analogy to the Argyris finite element method, the conforming spectral element method requires globally continuously differentiable element spaces, which are extremely difficult to construct and implement on unstructured (triangular or quadrilateral) meshes. This is exactly the reason why $C^1$-conforming spectral elements are rarely reported in the literature except on rectangular meshes [30]. Hence, spectral methods using globally smooth basis functions are a naturally suitable choice in practice for (1.6) on some fundamental regions, including rectangles, triangles and polar geometries.

To the best of our knowledge, there are few reports on spectral-Galerkin approximations for the Stokes eigenvalue problem via the stream function formulation in polar geometries. The polar transformation introduces polar singularities and variable coefficients involving negative powers of the radial variable $r$ [23, 4], which entails intricate pole conditions and thus brings forth severe difficulties in both the design of approximation schemes and the corresponding error analysis. The aim of the current paper is to propose and analyze an efficient spectral-Galerkin approximation for the stream function formulation of the Stokes eigenvalue problem in polar geometries. As the first step, we use separation of variables in polar coordinates to reduce the original problem on the unit disk to an equivalent infinite sequence of one-dimensional eigenvalue problems which can be solved individually in parallel. The rigorous pole conditions involved are a prerequisite for the equivalence of the original problem and the sequence of one-dimensional eigenvalue problems, and thus play a fundamental role in our further study. It is worth noting, however, that the pole conditions derived for fourth-order source problems in the open literature (such as [23, 4]) are inadequate for our eigenvalue problems, since they would inevitably induce improper/spurious computational results.

Based on the pole conditions, suitable approximation spaces are introduced and spectral-Galerkin schemes are proposed. A rigorous analysis of the optimal error estimate in properly introduced weighted Sobolev spaces is carried out for each one-dimensional eigenvalue problem by using the minimax principle. Finally, we extend our spectral-Galerkin method to the stream function formulation of the Stokes eigenvalue problem in an elliptic region. Owing to its non-separable nature, this problem poses yet another challenge in both computation and analysis. A brief explanation of the implementation of the approximation scheme is first given, and an optimal error estimate is then presented in Cartesian coordinates within the framework of Babuška and Osborn [3].

The rest of this paper is organized as follows. In the next section, the dimension reduction scheme for the Stokes eigenvalue problem is presented. In §3, we derive the weak formulation and prove the error estimates for the sequence of equivalent one-dimensional eigenvalue problems; we also describe the details of an efficient implementation of the algorithm. In §4, we extend our algorithm to the case of an elliptic region. We present several numerical experiments in §5 to demonstrate the accuracy and efficiency of our method. Finally, in §6 we give some concluding remarks.

## 2 Dimensionality reduction and pole conditions

Before coming to the main body of this section, we introduce some notations and conventions which will be used throughout the paper. Let $\omega$ be a generic positive weight function on a bounded domain $\Omega$, which is not necessarily in $L^1(\Omega)$. Denote by $(\cdot,\cdot)_{\omega,\Omega}$ the inner product of $L^2_\omega(\Omega)$, whose norm is denoted by $\|\cdot\|_{\omega,\Omega}$. We use $H^s_\omega(\Omega)$ and $H^s_{0,\omega}(\Omega)$ to denote the usual weighted Sobolev spaces, whose norms are denoted by $\|\cdot\|_{s,\omega,\Omega}$. In cases where no confusion would arise, $\omega$ (if $\omega\equiv 1$) and $\Omega$ may be dropped from the notation. Let $\mathbb{N}$ (resp. $\mathbb{Z}$) be the collection of nonnegative integers (resp. integers). For $N\in\mathbb{N}$, we denote by $\mathbb{P}_N$ the collection of all algebraic polynomials on $\Omega$ with total degree no greater than $N$. We denote by $c$ a generic positive constant independent of any function and of any discretization parameters. We use the expression $A\lesssim B$ to mean that $A\le cB$.

In the current section, we restrict our attention to the unit disk $\Omega=\{(x,y)\in\mathbb{R}^2:\ x^2+y^2<1\}$. We shall employ a classical technique, separation of variables, to reduce the problem to a sequence of equivalent one-dimensional problems.

Throughout this paper, we shall use the polar coordinates $(r,\theta)$ for points in the disk, such that $(x,y)=(r\cos\theta,\ r\sin\theta)$. We associate any function $\psi(x,y)$ in Cartesian coordinates with its partner $\tilde\psi(r,\theta)=\psi(r\cos\theta,r\sin\theta)$ in polar coordinates. If no confusion would arise, we shall use the same notation for $\psi$ and $\tilde\psi$. We now recall that, in polar coordinates,

(2.1) $\Delta\psi=\partial_r^2\psi+\dfrac{1}{r}\,\partial_r\psi+\dfrac{1}{r^2}\,\partial_\theta^2\psi.$
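As a quick sanity check of (2.1), the following sketch verifies the polar form of the Laplacian symbolically; the test function `psi` is a hypothetical example (any smooth function would do), not one used in the paper.

```python
import sympy as sp

r, t = sp.symbols('r theta', positive=True)
X, Y = sp.symbols('X Y')

# Hypothetical smooth test function; not from the paper.
psi = X**2 * Y + Y**3 + X**4

# Cartesian Laplacian, then change variables x = r cos(theta), y = r sin(theta).
lap_cart = (sp.diff(psi, X, 2) + sp.diff(psi, Y, 2)).subs(
    {X: r * sp.cos(t), Y: r * sp.sin(t)})

# Polar form (2.1): d_rr + (1/r) d_r + (1/r^2) d_theta_theta.
f = psi.subs({X: r * sp.cos(t), Y: r * sp.sin(t)})
lap_polar = sp.diff(f, r, 2) + sp.diff(f, r) / r + sp.diff(f, t, 2) / r**2

assert sp.simplify(lap_cart - lap_polar) == 0
```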

Then the bilinear forms $a(\cdot,\cdot)$ and $b(\cdot,\cdot)$ in (1.6) become
$$a(\psi,v)=\int_0^{2\pi}\!\!\int_0^1\Big(\partial_r^2\psi+\frac{1}{r}\partial_r\psi+\frac{1}{r^2}\partial_\theta^2\psi\Big)\Big(\partial_r^2 v+\frac{1}{r}\partial_r v+\frac{1}{r^2}\partial_\theta^2 v\Big)\,r\,\mathrm{d}r\,\mathrm{d}\theta,$$
$$b(\psi,v)=\int_0^{2\pi}\!\!\int_0^1\Big(\partial_r\psi\,\partial_r v+\frac{1}{r^2}\,\partial_\theta\psi\,\partial_\theta v\Big)\,r\,\mathrm{d}r\,\mathrm{d}\theta.$$

Denote $I=(0,1)$ and define the bilinear forms, for functions on $I$,
$$a_m(\psi,v)=\int_0^1\Big(\psi''+\frac{1}{r}\psi'-\frac{m^2}{r^2}\psi\Big)\Big(v''+\frac{1}{r}v'-\frac{m^2}{r^2}v\Big)\,r\,\mathrm{d}r,\qquad
b_m(\psi,v)=\int_0^1\Big(\psi'v'+\frac{m^2}{r^2}\,\psi v\Big)\,r\,\mathrm{d}r,\qquad m\in\mathbb{Z}.$$

Further, let us assume

(2.2) $\psi(r,\theta)=\sum\limits_{m\in\mathbb{Z}}\psi_m(r)\,e^{im\theta}.$

By the orthogonality of the Fourier system $\{e^{im\theta}\}_{m\in\mathbb{Z}}$, one finds that, for $v=\sum_{m\in\mathbb{Z}}v_m(r)e^{im\theta}$,
$$a(\psi,v)=2\pi\sum_{m\in\mathbb{Z}}a_m(\psi_m,\bar v_m),\qquad b(\psi,v)=2\pi\sum_{m\in\mathbb{Z}}b_m(\psi_m,\bar v_m).$$

For the well-posedness of $a_m$ and $b_m$, the following pole conditions for $\psi_m$ (and the same type of pole conditions for $v_m$) should be imposed:

(2.3)

which can be further simplified into the following three categories,

(2.4)

(2.5)

(2.6)

It is worth noting that our pole condition (2.5) is a revision of the pole condition in (4.8) of [23]. A concrete example supporting this revision reads:

The boundary conditions (1.5) state that $\psi_m(1)=\psi_m'(1)=0$ for all integers $m$; combined with the expansion (2.2), they also constrain the behavior of $\psi_m$ at the pole. It is then easy to verify that $a_m$ (resp. $b_m$) induces a Sobolev norm for any function on $I$ which satisfies the corresponding boundary conditions and pole conditions.
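The pole behavior that these conditions encode can be observed directly: for a smooth function, the $m$-th Fourier coefficient vanishes like $r^{|m|}$ (or faster) at the origin. The sketch below checks this for the hypothetical test function $\psi=x^2y$, which is not taken from the paper.

```python
import sympy as sp

r, t = sp.symbols('r theta', positive=True)

# Hypothetical smooth test function psi(x, y) = x^2 * y, written in polar form.
psi = (r * sp.cos(t))**2 * (r * sp.sin(t))

def fourier_mode(f, m):
    """m-th complex Fourier coefficient psi_m(r) of f(r, theta)."""
    return sp.integrate(f * sp.exp(-sp.I * m * t), (t, 0, 2 * sp.pi)) / (2 * sp.pi)

psi1 = sp.simplify(fourier_mode(psi, 1))

# psi_1(r) = -i r^3 / 8: it vanishes to third order at the pole, so in
# particular psi_1(0) = psi_1'(0) = 0, consistent with the pole conditions.
assert sp.simplify(psi1 + sp.I * r**3 / 8) == 0
```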

We now introduce two non-uniformly weighted Sobolev spaces on ,

(2.7)

(2.8) |

which are endowed with energy norms

(2.9) |

Consequently, (1.6) is reduced to a system of infinitely many one-dimensional eigenvalue problems: find $(\lambda_m,\psi_m)$, with $\psi_m\neq 0$ belonging to the weighted Sobolev spaces introduced above, such that

(2.10) $a_m(\psi_m,v)=\lambda_m\, b_m(\psi_m,v)$ for all admissible $v$, $\quad m\in\mathbb{Z}$.
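Once discretized (see §3), each mode $m$ contributes an independent generalized matrix eigenproblem, which is why the modes can be solved individually in parallel. A minimal sketch, using random symmetric positive definite stand-ins for the true stiffness and mass matrices (`A_m` and `B_m` here are hypothetical placeholders, not the matrices of the paper):

```python
import numpy as np
from scipy.linalg import eigh

def solve_mode(A_m, B_m, k=4):
    """Solve the generalized eigenproblem A_m x = lambda B_m x for one Fourier
    mode and return its k smallest eigenvalues (eigh sorts them ascending)."""
    return eigh(A_m, B_m, eigvals_only=True)[:k]

# Hypothetical SPD stand-ins for the matrices of the discretized forms
# a_m and b_m; the modes decouple, so they could be solved in parallel.
rng = np.random.default_rng(0)
eigs = {}
for m in range(3):
    M = rng.standard_normal((8, 8))
    A_m = M @ M.T + 8 * np.eye(8)   # symmetric positive definite "stiffness"
    B_m = np.eye(8)                 # trivial "mass" matrix
    eigs[m] = solve_mode(A_m, B_m)

lam = eigs[2]
assert len(lam) == 4 and np.all(lam > 0) and np.all(np.diff(lam) >= 0)
```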

We now conclude this section with the following lemma on $a_m$ and $b_m$.

###### Lemma 2.1

For ,

(2.11)

(2.12) |

## 3 Spectral Galerkin approximation and its error estimates

Let $\mathbb{P}_N$ be the space of polynomials of degree less than or equal to $N$ on $I$, and set the approximation space as the subspace of $\mathbb{P}_N$ satisfying the boundary and pole conditions. Then the spectral-Galerkin approximation scheme for (2.10) is: find $(\lambda_{m,N},\psi_{m,N})$ in this space, with $\psi_{m,N}\neq 0$, such that

(3.1) $a_m(\psi_{m,N},v_N)=\lambda_{m,N}\, b_m(\psi_{m,N},v_N)$ for all $v_N$ in the approximation space.

Due to the symmetry properties $a_{-m}=a_m$ and $b_{-m}=b_m$, we shall only consider $m\ge 0$ in the remainder of this section.

### 3.1 Minimax principle

For the error analysis, we will make extensive use of the minimax principle.

###### Lemma 3.1

Let $\lambda_{m,1}\le\lambda_{m,2}\le\cdots$ denote the eigenvalues of (2.10), and let $V_k$ be any $k$-dimensional subspace of the underlying solution space. Then, for $k\ge 1$, there holds

(3.2) $\lambda_{m,k}=\min\limits_{\dim V_k=k}\ \max\limits_{0\neq v\in V_k}\ \dfrac{a_m(v,v)}{b_m(v,v)}.$

Proof. See Theorem 3.1 in [18].
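The minimax characterization can be illustrated numerically: for a symmetric positive definite pencil $(A,B)$ standing in for the discretized forms $(a_m,b_m)$, the smallest eigenvalue is the minimum of the Rayleigh quotient, attained at the first eigenvector. The matrices below are arbitrary stand-ins, not the paper's matrices.

```python
import numpy as np
from scipy.linalg import eigh

# A small SPD pencil (A, B); entries are arbitrary stand-ins for the
# discretized bilinear forms a_m (stiffness) and b_m (mass).
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)
B = np.diag(rng.uniform(1.0, 2.0, 6))

vals, vecs = eigh(A, B)                 # generalized eigenvalues, ascending

def rayleigh(v):
    """Rayleigh quotient a(v, v) / b(v, v) for the pencil (A, B)."""
    return (v @ A @ v) / (v @ B @ v)

# lambda_1 bounds the Rayleigh quotient from below ...
assert all(rayleigh(rng.standard_normal(6)) >= vals[0] - 1e-10 for _ in range(200))
# ... and the bound is attained at the first eigenvector.
assert abs(rayleigh(vecs[:, 0]) - vals[0]) < 1e-8
```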

###### Lemma 3.2

Let the eigenvalues of (2.10) be arranged in ascending order, and define

where is the eigenfunction corresponding to the eigenvalue . Then we have

(3.3)

(3.4) |

Proof. See Lemma 3.2 in [18].

###### Lemma 3.3

Let $\lambda_{m,N,1}\le\lambda_{m,N,2}\le\cdots$ denote the eigenvalues of (3.1), and let $V_k$ be any $k$-dimensional subspace of the approximation space. Then, for $k\ge 1$, there holds

(3.5) $\lambda_{m,N,k}=\min\limits_{\dim V_k=k}\ \max\limits_{0\neq v\in V_k}\ \dfrac{a_m(v,v)}{b_m(v,v)}.$

Define the orthogonal projection such that

(3.6) |

###### Theorem 3.1

Proof. According to the coercivity of $a_m$ and $b_m$, we easily derive that the eigenvalues are positive. Since the approximation space is a subspace of the solution space, from (3.2) and (3.5) we obtain $\lambda_{m,k}\le\lambda_{m,N,k}$. Let $E_N$ denote the space spanned by the projections of the first $k$ eigenfunctions. It is obvious that $E_N$ is a $k$-dimensional subspace of the approximation space. From the minimax principle, we have

Since from and the non-negativity of , we have

Thus, we have

This completes the proof of Theorem 3.1.

### 3.2 Error estimates

Denote by the Jacobi weight function of index , which is not necessarily in . Define the -orthogonal projection such that

Further, for , define recursively the -orthogonal projections such that

Next, for any nonnegative integers , define the Sobolev space

We now have the following error estimate.

###### Lemma 3.4 ([15, Theorem 3.1.4])

is a Legendre tau approximation of such that

(3.8)

(3.9)

Further suppose with . Then for ,

(3.10)
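Estimates of the type (3.10) imply faster-than-algebraic convergence for smooth functions. The sketch below illustrates this behavior for an unweighted Legendre fit of a smooth, hypothetical function on $[-1,1]$; `legfit` (a least-squares fit on a fine grid) serves here as a proxy for the true $L^2$ projection.

```python
import numpy as np

# Smooth (analytic) test function on [-1, 1]; hypothetical example.
x = np.linspace(-1.0, 1.0, 2001)
f = np.exp(np.sin(np.pi * x))

def proj_error(N):
    """Max-norm error of the degree-N least-squares Legendre fit."""
    c = np.polynomial.legendre.legfit(x, f, N)
    return np.max(np.abs(f - np.polynomial.legendre.legval(x, c)))

errs = [proj_error(N) for N in (4, 8, 16, 32)]
# Errors drop by orders of magnitude as N doubles: spectral accuracy.
assert all(e2 < e1 for e1, e2 in zip(errs, errs[1:]))
assert errs[-1] < 1e-6
```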

###### Theorem 3.2

Suppose and with and . Then for ,

(3.11)

Proof. Define the differential operator and then set

We shall first prove . By (3.9), we find that

where the last equality follows from the boundary condition.

As a result, and

Further, implies

which, together with the property (3.8) of , gives

Consequently, we deduce that if and . In summary, we conclude that .

###### Theorem 3.3

Let be the -th approximate eigenvalue of (3.1). If with , then we have

Proof. For any , it can be represented by ; we then have

### 3.3 Implementations

We describe in this subsection how to solve the problems (3.1) efficiently. To this end, we first construct a set of basis functions for the approximation space. Let

(3.12)

where is the Jacobi polynomial of degree .

It is clear that

Define if and otherwise. Our basis functions lead to a penta-diagonal stiffness matrix and a deca-diagonal mass matrix instead of the hepta-diagonal and hendeca-diagonal ones in [23].
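The banded structure can be checked mechanically once the matrices are assembled. Below is a small helper for counting nonzero diagonals, applied to a hypothetical penta-diagonal matrix for illustration (not the actual stiffness matrix built from (3.12)):

```python
import numpy as np

def num_diagonals(A, tol=1e-12):
    """Total number of nonzero diagonals of A, i.e. 2*b + 1 for bandwidth b."""
    n = A.shape[0]
    b = max((abs(i - j) for i in range(n) for j in range(n)
             if abs(A[i, j]) > tol), default=0)
    return 2 * b + 1

# Hypothetical penta-diagonal matrix: nonzero main and second off-diagonals.
S = np.diag(2.0 * np.ones(8)) \
    + np.diag(0.5 * np.ones(6), 2) + np.diag(0.5 * np.ones(6), -2)
assert num_diagonals(S) == 5
```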

###### Lemma 3.5

For ,