# A FEAST Algorithm with oblique projection for generalized eigenvalue problems

Guojian Yin Department of Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong (guojianyin@gmail.com).    Raymond H. Chan Department of Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong. Research is supported in part by HKRGC GRF Grant No. CUHK400412, HKRGC CRF Grant No. CUHK2/CRF/11G, HKRGC AoE Grant AoE/M-05/12, CUHK DAG No. 4053007, and CUHK FIS Grant No. 1902036 (rchan@math.cuhk.edu.hk).    Man-Chung Yeung Department of Mathematics, University of Wyoming, Dept. 3036, 1000 East University Avenue, Laramie, WY 82071, USA (myeung@uwyo.edu).
###### Abstract

Contour-integral based eigensolvers are recent approaches for computing the eigenvalues inside a given region in the complex plane. The best-known members are the Sakurai-Sugiura (SS) method, its stable version CIRR, and the FEAST algorithm. An attractive computational advantage of these methods is that they are easily parallelizable. The FEAST algorithm was developed for generalized Hermitian eigenvalue problems. It is stable and accurate. However, it may fail when applied to non-Hermitian problems. In this paper, we extend the FEAST algorithm to non-Hermitian problems. The approach can be summarized as follows: (i) construct a particular contour integral to form a subspace containing the desired eigenspace, and (ii) use the oblique projection technique, with an appropriately chosen test subspace, to extract the desired eigenpairs. The related mathematical framework is established. We also address implementation issues such as how to choose a suitable starting matrix and how to design good stopping criteria. Numerical experiments are provided to illustrate that our method is stable and efficient.

Key words. generalized eigenvalue problems, contour integral, spectral projection

AMS subject classifications. 15A18, 58C40, 65F15

## 1 Introduction

Consider the generalized eigenvalue problem

 Ax = λBx, (1.1)

where A, B ∈ ℂ^{n×n}. The scalars λ and the associated nonzero vectors x are called the eigenvalues and eigenvectors, respectively. In this paper, we are concerned with computing the eigenvalues of (1.1) that are located inside a given region in the complex plane, together with their eigenvectors.

Large-scale generalized eigenvalue problems arise in various areas of science and engineering, such as the dynamic analysis of structures [18], the determination of the linearized stability of 3-D fluid flows [7], the electron energy and position problems in quantum chemistry [13], the widely used principal component analysis [36], and the linear discriminant analysis in statistical data analysis [10]. In some applications, it is not the whole spectrum but rather a specific part of it that is of interest. For example, in the electronic structure calculations of materials [30], one needs to compute the lowest portion of the spectrum of (1.1); and in the model reduction of a linear dynamical system, one only needs to know the response over a range of frequencies, see [4, 15].

Solving (1.1) is a very challenging problem, even though various practical methods and software are available, see [4]. When A and B have no special structure and the whole spectrum is required, the QZ method [25] is the most widely used method. It uses a sequence of unitary equivalence transformations to reduce the original pair (A, B) to generalized Schur form. The algorithm is numerically stable. However, its computational cost is high, requiring O(n³) floating point operations [17].

There are several methods for computing only part of the spectrum of (1.1). The rational Krylov subspace method approximates all eigenvalues in a union of regions around the chosen shifts [28]. However, it needs locking, purging and implicit restart techniques, which create difficulties in practical implementation. The divide-and-conquer approaches, which are based on sign-function or inverse-free techniques, are also popular choices [5]. However, these approaches often suffer from slow convergence or poor stability [26]. When A and B are Hermitian, a shifted block Lanczos algorithm was proposed in [18] to compute the eigenvalues contained in a given interval. The authors designed a shift strategy, combined with the LDLᵀ factorization, to guarantee that all eigenvalues in the given interval can be found. But the shift strategy is complicated to implement in practice, and its efficiency depends on the distribution of the spectrum.

The methods based on contour integrals are recent efforts for computing a partial spectrum of (1.1). When (1.1) is a diagonalizable and non-degenerate system, a contour integral method, called the Sakurai-Sugiura (SS) method, was proposed in [32] for evaluating the eigenvalues inside a specific domain. In this method, the original problem (1.1) is reduced to a small eigenproblem with Hankel matrices. However, since Hankel matrices are usually ill-conditioned [6], the SS method often suffers from numerical instability [3, 33]. Later in [33], Sakurai et al. replaced the Hankel matrix approach with the Rayleigh-Ritz procedure to get a more stable algorithm called CIRR. In [21] and [20], block versions of SS and CIRR were proposed, respectively, to make SS and CIRR applicable to degenerate systems as well. It was shown that SS and CIRR, as well as their corresponding block variants, can be regarded as Krylov subspace techniques [19, 21].

Recently in [27], Polizzi proposed another eigenproblem solver based on contour integrals, called FEAST, to solve (1.1) under the assumptions that A and B are Hermitian and B is positive definite, i.e., (1.1) is a Hermitian system or zB − A is a definite matrix pencil [4]. His algorithm computes all eigenvalues inside a given interval, along with their associated eigenvectors. The FEAST algorithm is accurate and reliable, see [23] for more details. It was shown that the FEAST algorithm can be understood as a standard subspace iteration with the Rayleigh-Ritz procedure [38].

The FEAST algorithm was originally proposed for Hermitian problems. However, when it is applied to non-Hermitian problems, it may fail to find the desired eigenvalues. A simple example (Example 3.1) will be given later to illustrate this. Motivated by this fact, our goal is to generalize the FEAST algorithm to non-Hermitian problems and establish the related mathematical framework. The only requirement in our method is that the corresponding matrix pencil zB − A is regular, i.e., det(zB − A) is not identically zero for all z ∈ ℂ. In other words, our new method can deal with the most common generalized eigenvalue problems [4]. Unlike FEAST, which uses the Rayleigh-Ritz procedure to extract the desired eigenpairs, our generalized FEAST algorithm uses the oblique projection technique to find them.

One of the main drawbacks of the contour-integral based algorithms, including ours, is that information about the number of desired eigenvalues has to be known a priori. This is because we need this information (i) to choose an appropriate size for the starting matrix, and (ii) to determine whether all desired eigenvalues have been captured when the methods stop. In this paper, we use an approach similar to that proposed in [31] to find an upper bound of the number of eigenvalues inside the target region. It helps to choose the starting matrix. We also provide stopping criteria which not only guarantee that all desired eigenpairs are captured, but also indicate the accuracy that our method can achieve. With these efforts, our method is suitable for practical implementation. Comparisons with Matlab’s eig command and the block version of CIRR [20, 31] show that our method is an efficient and stable solver for large generalized eigenvalue problems.

The outline of the paper is as follows. In Section 2, we briefly describe two typical contour-integral based eigensolvers: the CIRR method [33] and the FEAST algorithm [27]. In Section 3, we extend the FEAST algorithm to non-Hermitian problems and establish the related mathematical framework. In Section 4, we present a way to find a suitable upper bound of the number of eigenvalues inside the prescribed region and give the stopping criteria. Then we present the complete algorithm of our method. In Section 5, numerical experiments are reported to illustrate the efficiency and stability of our method.

Throughout the paper, we use the following notation and terminology. The subspace spanned by the columns of a matrix W is denoted by span{W}. The rank and conjugate transpose of W are denoted by rank(W) and W∗, respectively. We denote the submatrix consisting of the first i rows and the first j columns of W by W(1:i,1:j), and the submatrices consisting of the first j columns and the first i rows of W by W(:,1:j) and W(1:i,:), respectively. The algorithms are presented in Matlab style.

## 2 Two typical contour-integral based eigensolvers

In this section, we briefly introduce two typical contour-integral based eigensolvers: the CIRR method and the FEAST algorithm. Before starting our discussion, we present some facts about matrix pencils which will play important roles in the derivation of these methods.

Recall that the matrix pencil zB − A is regular if det(zB − A) is not identically zero for all z ∈ ℂ. The Weierstrass canonical form for regular matrix pencils is given below.

###### Theorem 2.1 ([16])

Let zB − A be a regular matrix pencil of order n. Then there exist nonsingular matrices T and S such that

 TAS = [ J_d, 0; 0, I_{n−d} ]  and  TBS = [ I_d, 0; 0, N_{n−d} ], (2.1)

where J_d is a d × d matrix in Jordan canonical form with its diagonal entries corresponding to the eigenvalues of (1.1), N_{n−d} is an (n−d) × (n−d) nilpotent matrix also in Jordan canonical form, and I_k denotes the identity matrix of order k.

Let J_d be of the form

 J_d = [ J_{d₁}(λ₁), 0, ⋯, 0; 0, J_{d₂}(λ₂), ⋯, 0; ⋮, ⋮, ⋱, ⋮; 0, 0, ⋯, J_{d_m}(λ_m) ], (2.2)

where d₁ + d₂ + ⋯ + d_m = d, and the J_{d_i}(λ_i) are d_i × d_i matrices of the form

 J_{d_i}(λ_i) = [ λ_i, 1, 0, ⋯, 0; 0, λ_i, 1, ⋱, ⋮; ⋮, ⋱, ⋱, ⋱, 0; ⋮, ⋱, ⋱, λ_i, 1; 0, ⋯, ⋯, 0, λ_i ],  i = 1, 2, …, m,

with λ₁, λ₂, …, λ_m being the eigenvalues. Here the λ_i are not necessarily distinct and can be repeated according to their multiplicities.

Let N_{n−d} be of the form

 N_{n−d} = [ N_{d′₁}, 0, ⋯, 0; 0, N_{d′₂}, ⋯, 0; ⋮, ⋮, ⋱, ⋮; 0, 0, ⋯, N_{d′_{m′}} ],

where d′₁ + d′₂ + ⋯ + d′_{m′} = n − d, and the N_{d′_i} are d′_i × d′_i matrices of the form

 N_{d′_i} = [ 0, 1, 0, ⋯, 0; 0, 0, 1, ⋱, ⋮; ⋮, ⋱, ⋱, ⋱, 0; ⋮, ⋱, ⋱, 0, 1; 0, ⋯, ⋯, 0, 0 ],  i = 1, 2, …, m′.

Let us partition S into the block form

 S = [S₁, S₂, …, S_m, S_{m+1}], (2.3)

where S_i ∈ ℂ^{n×d_i} for 1 ≤ i ≤ m, and S_{m+1} ∈ ℂ^{n×(n−d)}. One can easily verify by (2.1) (or see (3.8)) that the first column of each S_i, 1 ≤ i ≤ m, is an eigenvector associated with the eigenvalue λ_i of (1.1).

### 2.1 The CIRR method

In [32], Sakurai et al. used a moment-based technique to formulate a contour-integral based method, now known as the SS method, for finding the eigenvalues of (1.1) inside a given region. Since the SS method suffers from numerical instability, a stable version, called the CIRR method, was later developed using the Rayleigh-Ritz procedure [20, 33].

Below we show how to use the CIRR method to compute the eigenvalues inside Γ, a given positively oriented simple closed curve in the complex plane. Without loss of generality, let the set of eigenvalues of (1.1) enclosed by Γ be {λ₁, λ₂, …, λ_l}, and let s = d₁ + d₂ + ⋯ + d_l be the number of eigenvalues inside Γ with multiplicity taken into account. Define the contour integrals

 F_i := (1 / (2π√−1)) ∮_Γ z^i (zB − A)^{−1} dz,  i = 0, 1, …. (2.4)

It was shown in [20] that

 F_i = S(:,1:s) ((J_d)(1:s,1:s))^i T(1:s,:), (2.5)

where S and T are given by Theorem 2.1. The CIRR method uses the Rayleigh-Ritz procedure to extract the eigenpairs inside Γ [33]. Originally, it was derived under the assumptions that (1.1) is a Hermitian system and the desired eigenvalues are distinct, i.e., they are non-degenerate [33]. However, based on the following theorem, the CIRR method was extended to non-Hermitian cases in [20].

###### Theorem 2.2 ([20])

Let P, D ∈ ℂ^{n×t}, s ≤ t ≤ n, be arbitrary matrices, and let R = F₀D, where F₀ is defined in (2.4). A projected matrix pencil is defined by Â = P∗AR and B̂ = P∗BR. If the ranks of both Â and B̂ are s, the non-singular part of the projected matrix pencil zB̂ − Â is equivalent to zI_s − (J_d)(1:s,1:s).

Theorem 2.2 says that the desired eigenvalues can be obtained by computing the eigenvalues of the projected eigenproblem Ây = λ̂B̂y, provided the ranks of both Â and B̂ are s.

Given a vector v ∈ ℂⁿ, define the columns of D ∈ ℂ^{n×t} to be

 D(:,i) = (T^{−1})(:,1:s) ((J_d)(1:s,1:s))^{i−1} T(1:s,:) v,  i = 1, 2, …, t. (2.6)

It is shown in [21] that the rank of D is s, if all the components of T(1:s,:)v are non-zero and there is no degeneracy in {λ₁, λ₂, …, λ_l}. By (2.4) and (2.6), we have

 R(:,i) = F₀D(:,i) = F_{i−1}v,  i = 1, …, t. (2.7)

Based on Theorem 2.2, the right Ritz space is spanned by the vectors F₀v, F₁v, …, F_{t−1}v. As for the left Ritz space, in the CIRR method for non-Hermitian problems, it is chosen to be the same as the right one.

In order to remove the restriction of non-degeneracy on {λ₁, λ₂, …, λ_l}, a block CIRR method was also proposed in [20], where the random vector v is replaced by a random matrix Y ∈ ℂ^{n×p} of appropriate dimension. The right and the left Ritz spaces are spanned by the columns of F₀Y, F₁Y, …, F_{g−1}Y, where g is a positive integer satisfying gp ≥ s. Then all eigenvalues of k-th order degeneracy, k ≤ p, can be found [20, 31]. Obviously, the main task of the block CIRR method is to evaluate F_iY. In practice, the F_iY are computed approximately by a quadrature scheme according to (2.4). Below is the block CIRR algorithm for non-Hermitian problems.

###### Algorithm 1

Input matrices A and B, a random matrix Y ∈ ℂ^{n×p}, and a positive integer g satisfying gp ≥ s. The function “Block_CIRR” computes the s eigenpairs of (1.1) that are located inside Γ, and they are output in the vector Λ_s and the matrix X_s.

 Function [Λ_s, X_s] = Block_CIRR(A, B, Y, g, Γ)
 1. Compute R_i = F_iY, i = 0, 1, …, g−1, approximately by a quadrature scheme.
 2. Compute the singular value decomposition: [R₀, …, R_{g−1}] = UΣV∗.
 3. Set Ā = U∗AU and B̄ = U∗BU.
 4. Solve the generalized eigenproblem of size t: Āy = λ̄B̄y, to obtain the eigenpairs {(λ̄_i, y_i)}_{i=1}^t.
 5. Compute x̄_i = Uy_i, i = 1, 2, …, t, and select s approximate eigenpairs.

We see that for Algorithm 1, in order to choose the parameters p and g, one needs to know s and the degrees of degeneracy of the desired eigenvalues. In the recent article [31], Sakurai et al. gave a method to choose a suitable p for fixed g. We will describe the method in Section 4.1. Also in [31], the authors suggested performing iterative refinement in case the eigenpairs computed by Algorithm 1 cannot attain the prescribed accuracy.
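To make the steps of Algorithm 1 concrete, the following minimal NumPy sketch (ours, not from the paper) runs one pass of the block CIRR procedure on a small hypothetical pencil A = BM, whose eigenvalues are those of M, namely 1, 2, 3, 4. The F_iY are approximated by a trapezoidal rule on a circle enclosing the eigenvalues 2 and 3; all matrices and parameter choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test pencil (not from the paper): A = B M, so Ax = lam*Bx
# has the same eigenvalues as M, namely 1, 2, 3, 4.
n = 4
M = np.diag([1.0, 2.0, 3.0, 4.0])
B = np.eye(n) + 0.3 * rng.standard_normal((n, n))   # well-conditioned B
A = B @ M

# Circle Gamma with center c and radius r enclosing the eigenvalues 2 and 3.
c, r, N = 2.5, 1.0, 64
theta = 2.0 * np.pi * np.arange(N) / N
z = c + r * np.exp(1j * theta)

# Step 1: R_i = F_i Y, i = 0, ..., g-1, by the trapezoidal rule
# (the paper only asks for "a quadrature scheme").
p, g = 1, 2                        # g * p >= s = 2
Y = rng.standard_normal((n, p))
R = np.zeros((n, g * p), dtype=complex)
for zk, th in zip(z, theta):
    Mk = np.linalg.solve(zk * B - A, Y)        # (z_k B - A)^{-1} Y
    for i in range(g):
        R[:, i * p:(i + 1) * p] += (r / N) * np.exp(1j * th) * zk**i * Mk

# Step 2: SVD of [R_0, ..., R_{g-1}]
U, _, _ = np.linalg.svd(R, full_matrices=False)

# Steps 3-4: Rayleigh-Ritz with the orthonormal basis U
Abar, Bbar = U.conj().T @ A @ U, U.conj().T @ B @ U
ritz = np.linalg.eigvals(np.linalg.solve(Bbar, Abar))
print(np.sort(ritz.real))          # approximately [2., 3.]
```

Since span{F₀Y, F₁Y} approximates the eigenspace of the two enclosed eigenvalues, the two Ritz values agree with 2 and 3 up to the quadrature error.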

### 2.2 The FEAST algorithm

In this section, we give a brief introduction to the FEAST algorithm [27] due to Polizzi. The FEAST algorithm was formulated under the assumptions that A and B are Hermitian and B is positive definite, in which case the eigenvalues of (1.1) are real [4]. It is used to find all eigenvalues of (1.1) within a specified interval, say [σ₁, σ₂], and their associated eigenvectors. Here we also assume that the desired eigenvalues are λ₁, λ₂, …, λ_l. Let Γ be any contour that contains [σ₁, σ₂] inside. For example, Γ can be the circle centered at (σ₁ + σ₂)/2 with radius (σ₂ − σ₁)/2.

When zB − A is a definite matrix pencil, the Weierstrass canonical form (2.1) reduces to

 TAS = Λ = diag(λ₁, λ₂, ⋯, λ_n)  and  TBS = I_n. (2.8)

It is easy to see that one can take T = S∗, and the columns of S are the eigenvectors corresponding to λ₁, λ₂, …, λ_n. Therefore, according to (2.5), we have

 F₀ = S(:,1:s) T(1:s,:) = S(:,1:s) (S(:,1:s))∗. (2.9)

Let W = F₀Y, where Y is an n × t matrix with t ≥ s. Then W forms a basis for the desired eigenspace span{S(:,1:s)} if (S(:,1:s))∗Y is of full rank. In the FEAST algorithm, the elements of Y are chosen to be random numbers to increase the chance that W forms a basis for span{S(:,1:s)}. According to the Rayleigh-Ritz procedure [27, 37], the problem (1.1) is transformed to the problem of computing the eigenpairs of the smaller generalized eigenvalue problem

 Ây = λ̂B̂y

of size t, where Â = W∗AW, B̂ = W∗BW, and the approximate eigenvectors of (1.1) are recovered as x̂ = Wy.

Now the task is to obtain the basis W. Since S(:,1:s) is unknown, one cannot use (2.9). Instead, W is computed by (2.4) numerically using a quadrature scheme such as the Gauss-Legendre quadrature rule [11]. The complete FEAST algorithm is given as follows.

###### Algorithm 2

Input Hermitian matrices A and B with B positive definite, a random matrix Y ∈ ℝ^{n×t}, where t ≥ s, the circle Γ enclosing the interval [σ₁, σ₂], and a convergence tolerance ϵ. The function “Feast” computes the s eigenpairs of (1.1) that satisfy

 λ̂_i ∈ [σ₁, σ₂]  and  the relative change of Σ_{i=1}^s λ̂_i between two successive iterations is less than ϵ, (2.10)

and they are output in the vector Λ_s and the matrix X_s.

 Function [Λ_s, X_s] = Feast(A, B, Y, Γ, ϵ)
 1. Compute W = F₀Y approximately by the Gauss-Legendre quadrature rule.
 2. Set Â = W∗AW and B̂ = W∗BW.
 3. Solve the generalized eigenproblem of size t: Ây = λ̂B̂y, to obtain the eigenpairs {(λ̂_i, y_i)}_{i=1}^t.
 4. Compute x̂_i = Wy_i, i = 1, 2, …, t.
 5. Check if {(λ̂_i, x̂_i)}_{i=1}^t satisfy the convergence criteria (2.10). If s eigenpairs satisfy (2.10), stop. Otherwise, set X_t = [x̂₁, x̂₂, …, x̂_t] and Y = BX_t, then go back to Step 1.

The FEAST algorithm can be understood as a standard subspace iteration combined with the Rayleigh-Ritz procedure [38]. It is an accurate and reliable technique [23]. However, like CIRR, in practice we have to know s in advance in order to choose t for the starting matrix Y and to determine whether all desired eigenpairs are found. In [38], a technique was presented to find an estimate of s under the conditions that A and B are Hermitian and B is positive definite. The estimate can help to select an appropriate starting matrix Y.
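As an illustration of Steps 1–4 of Algorithm 2, here is a minimal NumPy sketch of one FEAST pass on a hypothetical Hermitian pencil (B = I and A with eigenvalues 1, 2, 3, 4; all names and parameters are our assumptions, not the paper's). For simplicity the quadrature below is a trapezoidal rule on the circle rather than Gauss-Legendre.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Hermitian pencil: B = I and A = V diag(1,2,3,4) V^T.
V, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = V @ np.diag([1.0, 2.0, 3.0, 4.0]) @ V.T
B = np.eye(4)

# Circle enclosing the search interval [1.5, 3.5]; eigenvalues 2, 3 inside.
c, r, N = 2.5, 1.0, 64
theta = 2.0 * np.pi * np.arange(N) / N
z = c + r * np.exp(1j * theta)

# Step 1: W = F_0 Y by quadrature on the circle (trapezoidal rule here).
t = 2                                     # t >= s = 2 columns
Y = rng.standard_normal((4, t))
W = np.zeros((4, t), dtype=complex)
for zk, th in zip(z, theta):
    W += (r / N) * np.exp(1j * th) * np.linalg.solve(zk * B - A, Y)
W = W.real                                # F_0 is real symmetric here

# Steps 2-4: Rayleigh-Ritz on span(W)
Ah, Bh = W.T @ A @ W, W.T @ B @ W
lam = np.linalg.eigvals(np.linalg.solve(Bh, Ah))
print(np.sort(lam.real))                  # approximately [2., 3.]
```

One pass already resolves the two enclosed eigenvalues to quadrature accuracy, because span(W) is an accurate approximation of the invariant subspace.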

## 3 A FEAST method with oblique projection

The FEAST method was originally developed for generalized Hermitian eigenvalue problems. We note that it may fail when applied to non-Hermitian problems, as shown by the following example.

Example 3.1: Let A and B be defined as follows:

 A = [ 0, 0, 0, 5; 0, 0, 2, 0; 0, 0.5, 0, 0; 0.2, 0, 0, 0 ]  and  B = [ 0, 0, 0, 1; 0, 0, 1, 0; 0, 1, 0, 0; 1, 0, 0, 0 ].

Then (2.1) holds with

 T = [ 1, 0, 0, 0; 0, 1, 0, 0; 0, 0, 1, 0; 0, 0, 0, 1 ]  and  S = [ 0, 0, 0, 1; 0, 0, 1, 0; 0, 1, 0, 0; 1, 0, 0, 0 ].

In fact, TAS = diag(5, 2, 0.5, 0.2) and TBS = I₄. Suppose we want to find the eigenvalues of zB − A lying inside the unit circle. Obviously, the eigenvalues of interest are 0.5 and 0.2, and the corresponding eigenvectors are e₂ = [0, 1, 0, 0]ᵀ and e₁ = [1, 0, 0, 0]ᵀ, respectively. By (2.5) and (2.9), it is easy to check that the projected matrices Â and B̂ in the FEAST algorithm are zero for any given random matrix Y. Hence any complex number will be an eigenvalue of the corresponding projected eigenproblem Ây = λ̂B̂y!
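This failure can be checked numerically. The following NumPy sketch (ours, not from the paper) approximates F₀ of (2.4) for the unit circle by a trapezoidal rule and verifies that FEAST's Rayleigh-Ritz matrices W∗AW and W∗BW vanish for a random Y.

```python
import numpy as np

# The matrices of Example 3.1.
A = np.array([[0, 0, 0, 5.0],
              [0, 0, 2.0, 0],
              [0, 0.5, 0, 0],
              [0.2, 0, 0, 0]])
B = np.array([[0, 0, 0, 1.0],
              [0, 0, 1.0, 0],
              [0, 1.0, 0, 0],
              [1.0, 0, 0, 0]])

# F_0 of (2.4) for the unit circle, by the trapezoidal rule.
N = 64
F0 = np.zeros((4, 4), dtype=complex)
for k in range(N):
    zk = np.exp(2j * np.pi * k / N)
    F0 += (zk / N) * np.linalg.solve(zk * B - A, np.eye(4))
F0 = F0.real

# F_0 maps everything into span{e1, e2}, but both A and B vanish on that
# span paired with itself, so FEAST's projected matrices are identically zero.
rng = np.random.default_rng(1)
Y = rng.standard_normal((4, 2))
W = F0 @ Y
print(W.T @ A @ W)   # zero matrix
print(W.T @ B @ W)   # zero matrix
```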

In view of the above example, we now extend the FEAST algorithm to non-Hermitian problems. The only requirement of our method is that the corresponding matrix pencil zB − A is regular. Hence our method can deal with the most common generalized eigenvalue problems [4]. Below we establish the related mathematical framework.

Again, without loss of generality, we let the eigenvalues of (1.1) enclosed by Γ be λ₁, λ₂, …, λ_l, and let s = d₁ + d₂ + ⋯ + d_l be the number of eigenvalues inside Γ with multiplicity taken into account. Define the contour integral

 Q := (1 / (2π√−1)) ∮_Γ (zB − A)^{−1}B dz. (3.1)

For z ∈ Γ, the matrix zB − A is invertible, since no eigenvalue of (1.1) lies on Γ. Hence by (2.1), the resolvent operator is given by

 (zB − A)^{−1}B = S [ (zI_d − J_d)^{−1}, 0; 0, (zN_{n−d} − I_{n−d})^{−1} ] TB
              = S [ (zI_d − J_d)^{−1}, 0; 0, (zN_{n−d} − I_{n−d})^{−1} ] [ I_d, 0; 0, N_{n−d} ] S^{−1}
              = S [ (zI_d − J_d)^{−1}, 0; 0, (zN_{n−d} − I_{n−d})^{−1}N_{n−d} ] S^{−1}. (3.2)

Notice that the first diagonal block in (3.2) is of the block diagonal form where each diagonal sub-block is of the form:

 (zI_{d_i} − J_{d_i}(λ_i))^{−1} = [ 1/(z−λ_i), 1/(z−λ_i)², ⋯, 1/(z−λ_i)^{d_i}; 0, 1/(z−λ_i), ⋯, 1/(z−λ_i)^{d_i−1}; ⋮, ⋮, ⋱, ⋮; 0, 0, ⋯, 1/(z−λ_i) ]. (3.3)

Similarly, the second diagonal block in (3.2) is also of the block diagonal form where each diagonal sub-block is of the form:

 (zN_{d′_i} − I_{d′_i})^{−1}N_{d′_i} = −(N_{d′_i} + zN_{d′_i}² + ⋯ + z^{d′_i−2}N_{d′_i}^{d′_i−1}), (3.4)

which is a polynomial in z and hence analytic inside Γ.

Then, according to the residue theorem in complex analysis [1], it follows from (3.2)–(3.4) that

 Q = (1 / (2π√−1)) ∮_Γ (zB − A)^{−1}B dz = S [ I_s, 0; 0, 0 ] S^{−1} = S(:,1:s) (S^{−1})(1:s,:). (3.5)

Using the remark following (2.3), we know that span{S(:,1:s)} contains the eigenspace corresponding to the eigenvalues λ₁, λ₂, …, λ_l. Since Q² = Q, Q is a spectral projector onto span{S(:,1:s)}. Define U = QY, where Y is an appropriately chosen n × t matrix so that U forms a basis for span{S(:,1:s)}. As in FEAST, we choose Y randomly, and we show below that the resulting U does form a basis for span{S(:,1:s)}.

###### Lemma 3.1

Let Y ∈ ℂ^{n×t} with s ≤ t ≤ n. If the entries of Y are random numbers from a continuous distribution and they are independent and identically distributed (i.i.d.), then with probability 1, the matrix (S^{−1})(1:s,:)Y is of full rank s.

Proof. Let G = (S^{−1})(1:s,:)Y(:,1:s). Consider |det(G)|², the square of the absolute value of det(G), as a real-coefficient polynomial in the real and imaginary parts of the elements of Y. We now show that this polynomial is not identically zero, i.e., |det(G)|² ≢ 0.

Since the rank of (S^{−1})(1:s,:) is s, it has an s × s nonsingular submatrix. Without loss of generality, let the left s × s submatrix M of (S^{−1})(1:s,:) be nonsingular. Then, setting Y = [ M^{−1}, 0; 0, 0 ], we have det(G) = 1 ≠ 0. Therefore the polynomial is not identically zero. Hence the set of zeros of the polynomial is of measure zero according to [22, Prop. 4]. When Y is randomly picked, with probability 1 we have det(G) ≠ 0, or equivalently, (S^{−1})(1:s,:)Y is of full rank s.

Since

 U = QY = S(:,1:s) (S^{−1})(1:s,:) Y, (3.6)

and (S^{−1})(1:s,:)Y is of full rank by Lemma 3.1, the columns of U form a basis for span{S(:,1:s)}. Our next step is to project the original problem (1.1) onto a small subspace from which we can extract the required eigenpairs. Unlike CIRR and FEAST, which use the Rayleigh-Ritz procedure to extract the desired eigenpairs, here we resort to the oblique projection method, namely, the Petrov-Galerkin condition [4, 29]. Since K := span{S(:,1:s)} contains the eigenspace corresponding to the eigenvalues inside Γ, it is natural to choose K as the search subspace. Our next task is to seek an appropriate test subspace.

Let us further partition each S_i in (2.3) into S_i = [s_{i1}, s_{i2}, …, s_{id_i}] with s_{ij} ∈ ℂⁿ, 1 ≤ j ≤ d_i, 1 ≤ i ≤ m. Notice that by (2.1), we have for any eigenvalue λ_i, 1 ≤ i ≤ m,

 (λ_iB − A) S [ I_d, 0; 0, N_{n−d} ] = T^{−1} [ λ_iI_d − J_d, 0; 0, (λ_iN_{n−d} − I_{n−d})N_{n−d} ]
                                   = T^{−1} [ I_d, 0; 0, N_{n−d} ] [ λ_iI_d − J_d, 0; 0, λ_iN_{n−d} − I_{n−d} ]
                                   = BS [ λ_iI_d − J_d, 0; 0, λ_iN_{n−d} − I_{n−d} ]. (3.7)

By comparing the corresponding columns on both sides above, we get

 (λ_iB − A)s_{ij} = Bs_{i,j−1},  1 ≤ j ≤ d_i,  1 ≤ i ≤ m, (3.8)

with s_{i0} := 0. In particular, s_{i1} is an eigenvector corresponding to the eigenvalue λ_i for all 1 ≤ i ≤ m. From (3.8), we see that (λ_iB − A)K ⊆ BK for 1 ≤ i ≤ l. Therefore we choose BK as the test subspace. The Petrov-Galerkin condition then becomes:

 (Ax_i − λ_iBx_i) ⊥ BK,  1 ≤ i ≤ l, (3.9)

with x_i ∈ K and λ_i ∈ ℂ.

Now we are in a position to find a basis for BK. From (2.1), we know that the rank of BS(:,1:s) is s. Hence by Lemma 3.1, BU = BS(:,1:s)(S^{−1})(1:s,:)Y is of full rank, which implies that BU forms a basis for BK. Recall that U forms a basis for K, and we seek an x_i = Uy_i ∈ K. Therefore (3.9) can be written in matrix form

 (BU)∗(AUy_i − λ_iBUy_i) = 0, (3.10)

where y_i ∈ ℂᵗ satisfies x_i = Uy_i. Accordingly, we get the projected eigenproblem

 Ãy = λ̃B̃y, (3.11)

with

 Ã = (BU)∗AU  and  B̃ = (BU)∗BU. (3.12)

Our method computes the desired eigenpairs of (1.1) by solving the projected eigenproblem (3.11). The theory behind our method is given in the next theorem.

###### Theorem 3.2
1. Let {(λ̃_i, y_i)}_{i=1}^s be the eigenpairs of the projected eigenproblem (3.11). Then {(λ̃_i, Uy_i)}_{i=1}^s are the eigenpairs of (1.1) located inside Γ.

2. If Y_{λ̃_i} is the eigenspace of (3.11) corresponding to the eigenvalue λ̃_i, then UY_{λ̃_i} is the eigenspace of (1.1) corresponding to λ̃_i.

Proof. (a): First, by (3.8) we have AK ⊆ BK, so AUy_i − λ̃_iBUy_i ∈ BK. Since BU forms a basis for BK, there exist vectors w_i such that

 AUy_i − λ̃_iBUy_i = BUw_i.

By (3.10), we have (BU)∗BUw_i = 0. Since B̃ = (BU)∗BU is full-rank, w_i = 0. Consequently, AUy_i = λ̃_iBUy_i. Thus {(λ̃_i, Uy_i)}_{i=1}^s are eigenpairs of (1.1).

Next we want to show that λ̃₁, …, λ̃_s are exactly the eigenvalues of (1.1) inside Γ. By (3.7), we can easily verify that

 (zB − A)S(:,1:s) = BS(:,1:s)(zI_s − (J_d)(1:s,1:s)),  ∀z ∈ ℂ.

Hence by the definitions in (3.12) and (3.6), we have

 zB̃ − Ã = (BU)∗(zB − A)S(:,1:s)(S^{−1})(1:s,:)Y
        = (BU)∗BS(:,1:s)(zI_s − (J_d)(1:s,1:s))(S^{−1})(1:s,:)Y. (3.13)

Therefore,

 det(zB̃ − Ã) = det((BU)∗BS(:,1:s)) ⋅ det(zI_s − (J_d)(1:s,1:s)) ⋅ det((S^{−1})(1:s,:)Y).

Since BS(:,1:s) is of full rank (see (2.1)), and (S^{−1})(1:s,:)Y is of full rank by Lemma 3.1, the matrix (BU)∗BS(:,1:s) is nonsingular. Hence det(zB̃ − Ã) = 0 if and only if det(zI_s − (J_d)(1:s,1:s)) = 0. By the special structure of J_d (see (2.2)), the zeros of the latter determinant are precisely λ₁, …, λ_l, with multiplicities d₁, …, d_l respectively. Therefore, λ̃₁, …, λ̃_s are precisely all the eigenvalues of (1.1) inside Γ.

(b): Let X_{λ̃_i} be the eigenspace of (1.1) corresponding to the eigenvalue λ̃_i. Then λ̃_i ∈ {λ₁, …, λ_l} by Part (a). From (2.2) and (3.13), it can be seen that dim(Y_{λ̃_i}) is equal to the number of Jordan blocks in (J_d)(1:s,1:s) corresponding to the eigenvalue λ̃_i. On the other hand, the latter coincides with the number of Jordan blocks in J_d corresponding to λ̃_i, which is equal to dim(X_{λ̃_i}). Therefore, we have dim(Y_{λ̃_i}) = dim(X_{λ̃_i}). Since U has full column rank, it follows that

 dim(UY_{λ̃_i}) = dim(Y_{λ̃_i}) = dim(X_{λ̃_i}).

Therefore, since UY_{λ̃_i} ⊆ X_{λ̃_i}, we have UY_{λ̃_i} = X_{λ̃_i}.

Thus, computing the eigenpairs of (1.1) inside Γ is transformed into computing the eigenpairs of the small projected problem (3.11), which can be solved by standard solvers in LAPACK [4, 12], such as xGGES and xGGEV [2].

In order to construct the projected eigenproblem (3.11), the most important task is to compute the matrix U in (3.6). In practice, we have to compute U by using the contour integral in (3.5), i.e.,

 U = QY = (1 / (2π√−1)) ∮_Γ (zB − A)^{−1}BY dz, (3.14)

which can be approximated using, for example, the Gauss-Legendre quadrature rule.
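As a sanity check (ours, not from the paper), the following NumPy sketch performs one pass of this construction on the pencil of Example 3.1, where plain Rayleigh-Ritz extraction fails: it approximates U = QY by a trapezoidal rule on the unit circle, orthonormalizes bases of the search space K and the test space BK, and solves the projected pencil (3.12).

```python
import numpy as np

# The pencil of Example 3.1, for which the Rayleigh-Ritz extraction fails.
A = np.array([[0, 0, 0, 5.0],
              [0, 0, 2.0, 0],
              [0, 0.5, 0, 0],
              [0.2, 0, 0, 0]])
B = np.array([[0, 0, 0, 1.0],
              [0, 0, 1.0, 0],
              [0, 1.0, 0, 0],
              [1.0, 0, 0, 0]])

# U = QY of (3.14), with Gamma the unit circle, by the trapezoidal rule.
rng = np.random.default_rng(2)
Y = rng.standard_normal((4, 2))           # t = s = 2 random columns
N = 64
U = np.zeros((4, 2), dtype=complex)
for k in range(N):
    zk = np.exp(2j * np.pi * k / N)
    U += (zk / N) * np.linalg.solve(zk * B - A, B @ Y)
U = U.real

# Oblique projection: orthonormal bases for the search and test subspaces.
U1, _ = np.linalg.qr(U)                   # search space  K = span(U)
U2, _ = np.linalg.qr(B @ U)               # test space   BK = span(BU)
At, Bt = U2.T @ A @ U1, U2.T @ B @ U1     # projected pencil, cf. (3.12)
lam = np.linalg.eigvals(np.linalg.solve(Bt, At))
print(np.sort(lam.real))                  # approximately [0.2, 0.5]
```

Unlike the Rayleigh-Ritz matrices of Example 3.1, the obliquely projected pencil is nonsingular, and its eigenvalues recover exactly the two eigenvalues inside the unit circle.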

We summarize our above derivation into the following algorithm.

###### Algorithm 3

Input A, B ∈ ℂ^{n×n}, an i.i.d. random matrix Y ∈ ℂ^{n×t} where t ≥ s, a closed curve Γ, a convergence tolerance ϵ, and “max_iter” to control the maximum number of iterations. The function “Eigenpairs” computes the eigenpairs of (1.1) that satisfy

 λ̃_i inside Γ  and  ‖Ax̃_i − λ̃_iBx̃_i‖₂ / (‖Ax̃_i‖₂ + ‖Bx̃_i‖₂) < ϵ. (3.15)

The results are stored in the vector Λ and the matrix X.

 Function [Λ, X] = Eigenpairs(A, B, Y, Γ, ϵ, max_iter)
 1. For k = 1, ⋯, max_iter
 2.   Compute U in (3.14) approximately by the Gauss-Legendre quadrature rule.
 3.   Compute QR decompositions: U = U₁R₁ and BU = U₂R₂.
 4.   Form Ã = U₂∗AU₁ and B̃ = U₂∗BU₁.
 5.   Solve the projected eigenproblem Ãy = λ̃B̃y of size t to obtain eigenpairs {(λ̃_i, y_i)}_{i=1}^t. Set x̃_i = U₁y_i, i = 1, 2, …, t.
 6.   Set Λ = [ ] and X = [ ].
 7.   For i = 1 : t
 8.     If (λ̃_i, x̃_i) satisfies (3.15), then Λ = [Λ, λ̃_i] and X = [X, x̃_i].
 9.   End
 10.  If there are s eigenpairs satisfying (3.15), stop. Otherwise, set Y = U₁.
 11. End.

Algorithm 3 faces the same issue as CIRR and FEAST: we have to know s in advance in order to choose t for the starting matrix Y and to determine whether all desired eigenpairs are found. More precisely, the number of columns t of Y should satisfy t ≥ s. This is because if t < s, then rank(U) ≤ t < s. Consequently, the columns of U cannot form a basis for span{S(:,1:s)}. Moreover, since s is unknown a priori, it is also hard to decide whether all desired eigenvalues have been found, and therefore it is hard to decide when to stop the algorithm. In the next section, we present strategies to address these two problems, which will make the resulting algorithm suitable for practical implementation.

## 4 Our Algorithm

In this section, we first introduce a method to find an upper bound for s. Our method is similar to a technique proposed in [31]. Next, we design stopping criteria to guarantee that all desired eigenvalues are captured. After that, we present the complete algorithm.

### 4.1 Finding an Upper Bound for the Number of Eigenvalues inside Γ

In the following, by Y ∼ N(0, 1)^{n×p} we mean that Y is an n × p matrix with i.i.d. entries drawn from the standard normal distribution N(0, 1).

In [14], an approach was proposed for finding an estimate of s. Here we derive a similar method. Let Y ∼ N(0, 1)^{n×p}. One can easily verify that E[(1/p) trace(Y∗MY)] = trace(M) for any matrix M ∈ ℂ^{n×n}. In particular, by (3.5) and (3.6),

 (1/p) E[trace(Y∗U)] = (1/p) E[trace(Y∗QY)] = trace(Q) = trace(S(:,1:s)(S^{−1})(1:s,:))
                     = trace((S^{−1})(1:s,:)S(:,1:s)) = trace(I_s) = s. (4.1)

So (1/p) trace(Y∗U) is a good initial estimate of s. In [14], the entries of Y are taken to be 1 or −1 with equal probability.
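As an illustration (ours, not from the paper), the following NumPy sketch applies this estimator to the pencil of Example 3.1, for which s = 2 eigenvalues (0.2 and 0.5) lie inside the unit circle. The probe matrix uses ±1 entries as in [14], and Q is again approximated by a trapezoidal rule.

```python
import numpy as np

# The pencil of Example 3.1; s = 2 eigenvalues lie inside the unit circle.
A = np.array([[0, 0, 0, 5.0],
              [0, 0, 2.0, 0],
              [0, 0.5, 0, 0],
              [0.2, 0, 0, 0]])
B = np.array([[0, 0, 0, 1.0],
              [0, 0, 1.0, 0],
              [0, 1.0, 0, 0],
              [1.0, 0, 0, 0]])

rng = np.random.default_rng(3)
p = 8
Y = rng.choice([-1.0, 1.0], size=(4, p))   # random +/-1 probe columns [14]

# U = QY by the trapezoidal rule on the unit circle, cf. (3.14).
N = 64
U = np.zeros((4, p), dtype=complex)
for k in range(N):
    zk = np.exp(2j * np.pi * k / N)
    U += (zk / N) * np.linalg.solve(zk * B - A, B @ Y)

# (1/p) trace(Y^* U) estimates trace(Q) = s, cf. (4.1).
s_est = np.trace(Y.T @ U).real / p
print(s_est)   # close to s = 2
```

Only the p extra linear solves for U are needed, so the estimate comes essentially for free once the quadrature machinery is in place.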

In Algorithm 3, we need to choose an upper bound t of s for the starting matrix Y. However, in practice, our initial estimate may be less than s, and we may not know this fact as we do not know s. Below we present a way to find a better estimate based on (4.1).

Recall that in CIRR, for a given integer g, we need to select a p such that gp ≥ s. In [31], a method was proposed for selecting a suitable p. The method works as follows. Assume an estimate of s is available. Let R = [R₀, R₁, …, R_{g−1}], where R_i = F_iY and Y ∈ ℂ^{n×p}. Compute the R_i (see (2.4)) and the minimum singular value σ_min of R. If σ_min is not small, then p is increased until the σ_min of the updated R is small enough. Similar to this idea, we determine an upper bound for our Algorithm 3 by using the numerical rank of U = QY, where Y ∼ N(0, 1)^{n×t}. The rationale behind our method is as follows.

Let s† be a positive integer and Y_{s†} ∼ N(0, 1)^{n×s†}. Consider

 U_{s†} = QY_{s†}