
# A contour-integral based QZ algorithm for generalized eigenvalue problems

Guojian Yin Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, P.R. China (guojianyin@gmail.com).
###### Abstract

Recently, a class of eigensolvers based on contour integrals has been developed for computing the eigenvalues inside a given region in the complex plane. The CIRR method is a classic example of this class. In this paper, we propose a contour-integral based QZ method, which is likewise devoted to computing a partial spectrum of generalized eigenvalue problems. Our new method takes advantage of the technique used in the CIRR method of constructing, via contour integrals, a particular subspace containing the eigenspace of interest. The main difference between our method and CIRR is the mechanism for extracting the desired eigenpairs. We establish the related framework and address some implementation issues so as to make the resulting method practical. Numerical experiments are reported to illustrate the numerical performance of our new method.

Key words. generalized eigenvalue problems, contour integral, QZ method, generalized Schur decomposition

AMS subject classifications. 15A18, 58C40, 65F15

## 1 Introduction

Let A and B be large n × n matrices. Assume that we have a generalized eigenvalue problem

$$Ax = \lambda Bx, \tag{1.1}$$

and want to compute the eigenvalues λ, along with their eigenvectors x, of (1.1) inside a given region in the complex plane. This problem arises in various areas of scientific and engineering applications; for example, in the model reduction of a linear dynamical system one needs to know the response over a range of frequencies, see [4, 13, 21]. Computing a number of interior eigenvalues of a large problem remains one of the most difficult tasks in computational linear algebra today [10]. In practice, the methods of choice are based on projection techniques, the key to whose success is to construct an approximately invariant subspace enclosing the eigenspace of interest. Krylov subspace methods in conjunction with spectral transformation techniques, such as the shift-and-invert technique, are most often used [22, 26].

Recently, eigensolvers based on contour integrals were developed to compute the eigenvalues inside a prescribed domain in the complex plane. The best-known methods of this kind are the Sakurai-Sugiura (SS) method [24] and the FEAST algorithm [20]. A major computational advantage of these contour-integral based methods is that they can be easily implemented on modern distributed parallel computers [3, 18]. The FEAST algorithm works under the conditions that the matrices A and B are Hermitian and B is positive definite. In the SS method, the original eigenproblem (1.1) is reduced to a small one involving Hankel matrices, provided the number of sought-after eigenvalues is small. However, since Hankel matrices are usually ill-conditioned [5], the SS method often suffers from numerical instability [3, 25]. Noticing this fact, Sakurai et al. later replaced the Hankel matrix approach with the Rayleigh-Ritz procedure in [25] to obtain a more stable algorithm called CIRR.

Originally, the CIRR method was formulated under the assumptions that the matrices A and B are Hermitian and B is positive definite, i.e., (1.1) is a Hermitian problem [4]. Moreover, it is required that the eigenvalues of interest be distinct. In [17], the authors adapted the CIRR method to non-Hermitian cases; meanwhile, they presented a block version of the CIRR method to deal with degenerate systems.

The CIRR method is accurate and powerful. It first constructs a subspace containing the eigenspace of interest through a sequence of particular contour integrals. Then the orthogonal projection technique is used to extract the desired eigenpairs. In our work, we propose a contour-integral based QZ method for computing a partial spectrum of (1.1). The motivation stems from the attempt to use the oblique projection method, instead of the orthogonal one, to extract the desired eigenpairs in the CIRR method. When using the oblique projection technique, the most important task is to find an appropriate left subspace; here we borrow ideas from the JDQZ method [11] and derive our new method. We establish the related mathematical framework. Some implementation issues will also be discussed before giving the resulting algorithm.

The rest of the paper is organized as follows. In Section 2, we briefly review the CIRR method [25]. In Section 3, we derive a contour-integral based QZ method and establish the related mathematical framework. Then we will discuss some implementation issues and present the complete algorithm. Numerical experiments are reported in Section 4 to illustrate the numerical performance of our new method.

Throughout the paper, we use the following notation and terminology. The subspace spanned by the columns of a matrix V is denoted by span{V}, and the rank of V by rank(V). For any matrix V, we denote the submatrix that lies in the first i rows and the first j columns of V by V(1:i,1:j), the submatrix consisting of the first j columns of V by V(:,1:j), and the submatrix consisting of the first i rows of V by V(1:i,:). The algorithms are presented in Matlab style.

## 2 The CIRR method

In [24], Sakurai et al. used a moment-based technique to formulate a contour-integral based method, i.e., the SS method, for finding the eigenvalues of (1.1) inside a given region. In order to improve the numerical stability of the SS method, a variant of it used the Rayleigh-Ritz procedure to extract the desired eigenpairs. This leads to the so-called CIRR method [17, 25]. Originally the CIRR method was derived in [25] under the assumptions that (i) the matrices A and B are Hermitian with B positive definite, and (ii) the eigenvalues inside the given region are distinct. In [17], the authors adapted the CIRR method to the non-Hermitian case; meanwhile, a block version was proposed to deal with degenerate problems. In this section we give a brief review of the block CIRR method.

The matrix pencil A − zB is regular if det(A − zB) is not identically zero [2, 9]. The Weierstrass canonical form of a regular matrix pencil is given as follows.

###### Theorem 2.1 ([14])

Let A − zB be a regular matrix pencil of order n. Then there exist nonsingular matrices T and S such that

$$TAS = \begin{bmatrix} J_d & 0 \\ 0 & I_{n-d} \end{bmatrix} \quad\text{and}\quad TBS = \begin{bmatrix} I_d & 0 \\ 0 & N_{n-d} \end{bmatrix}, \tag{2.1}$$

where J_d is a d × d matrix in Jordan canonical form with its diagonal entries corresponding to the eigenvalues of A − zB, N_{n−d} is an (n−d) × (n−d) nilpotent matrix also in Jordan canonical form, and I_k denotes the identity matrix of order k.

Let J_d in (2.1) be of the form

$$J_d = \begin{bmatrix} J_{d_1}(\lambda_1) & 0 & \cdots & 0 \\ 0 & J_{d_2}(\lambda_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & J_{d_m}(\lambda_m) \end{bmatrix}, \tag{2.2}$$

where d_1 + d_2 + ⋯ + d_m = d, and the J_{d_i}(λ_i), for i = 1, 2, …, m, are d_i × d_i matrices of the form

$$J_{d_i}(\lambda_i) = \begin{bmatrix} \lambda_i & 1 & 0 & \cdots & 0 \\ 0 & \lambda_i & 1 & & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ \vdots & & & \ddots & 1 \\ 0 & \cdots & & 0 & \lambda_i \end{bmatrix}, \quad i = 1, 2, \ldots, m,$$

with λ_1, λ_2, …, λ_m being the eigenvalues. Here the λ_i are not necessarily distinct and can be repeated according to their multiplicities.
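For concreteness, a Jordan block of (2.2) can be assembled directly. The sketch below (plain Python with lists of lists; the helper name `jordan_block` and its arguments are ours, not the paper's) builds a single block J_{d_i}(λ_i) with λ_i on the diagonal and ones on the superdiagonal.

```python
# Sketch: build the d x d Jordan block J_d(lam) of (2.2) as a list of
# lists: lam on the diagonal, ones on the superdiagonal, zeros elsewhere.

def jordan_block(lam, d):
    """d x d Jordan block with eigenvalue lam."""
    return [[lam if j == i else 1.0 if j == i + 1 else 0.0
             for j in range(d)] for i in range(d)]

J = jordan_block(2.0, 3)
# J == [[2.0, 1.0, 0.0], [0.0, 2.0, 1.0], [0.0, 0.0, 2.0]]
```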

Let us partition S into the block form

$$S = [S_1, S_2, \ldots, S_m, S_{m+1}], \tag{2.3}$$

where S_i ∈ ℂ^{n×d_i} for i = 1, 2, …, m, and S_{m+1} ∈ ℂ^{n×(n−d)}. Then the first column of each S_i is an eigenvector associated with the eigenvalue λ_i, for i = 1, 2, …, m [4, 17, 18, 27].

Let Γ be a given positively oriented simple closed curve in the complex plane. Below we show how to use the block CIRR method to compute the eigenvalues of (1.1) inside Γ, along with their associated eigenvectors. Without loss of generality, let the set of eigenvalues of (1.1) enclosed by Γ be {λ_1, λ_2, …, λ_t}, and let s = d_1 + d_2 + ⋯ + d_t be the number of eigenvalues inside Γ with multiplicity taken into account.

Define the contour integrals

$$F_k := \frac{1}{2\pi\sqrt{-1}} \oint_\Gamma z^k (zB - A)^{-1} B \, dz, \quad k = 0, 1, \ldots. \tag{2.4}$$

With the help of the residue theorem in complex analysis [1], it was shown in [18] that

$$F_k = S(:,1:s)\,\bigl(J_d(1:s,1:s)\bigr)^k\,(S^{-1})(1:s,:), \quad k = 0, 1, \ldots. \tag{2.5}$$

Let g and L be two positive integers satisfying gL ≥ s, and let Y be an n × L random matrix. Define

$$U_k := F_k Y, \quad k = 0, \ldots, g-1, \qquad\text{and}\qquad U := [U_0, U_1, \ldots, U_{g-1}]. \tag{2.6}$$

We have the following result for the CIRR method.

###### Theorem 2.2

Let the eigenvalues of (1.1) inside Γ be {λ_1, λ_2, …, λ_t}; then the number of eigenvalues of (1.1) inside Γ, counting multiplicity, is s = d_1 + d_2 + ⋯ + d_t. If rank(U) = s, then we have

$$\mathrm{span}\{U\} = \mathrm{span}\{S(:,1:s)\}. \tag{2.7}$$

Proof. By (2.5) and (2.6), we know that

$$U = S(:,1:s)\,E, \tag{2.8}$$

where

$$E = \Bigl[(S^{-1})(1:s,:)\,Y,\;\; J_d(1:s,1:s)\,(S^{-1})(1:s,:)\,Y,\;\; \ldots,\;\; \bigl(J_d(1:s,1:s)\bigr)^{g-1}(S^{-1})(1:s,:)\,Y\Bigr]. \tag{2.9}$$

Since the rank of U is s, we have that E is full-rank, from which the expression (2.7) holds.

According to Theorem 2.2, we know that span{U} contains the eigenspace corresponding to the desired eigenvalues. The block CIRR method uses the well-known orthogonal projection technique to extract the eigenpairs inside Γ from span{U}, i.e., it imposes the Ritz-Galerkin condition:

$$Ax - \lambda Bx \;\perp\; \mathrm{span}\{U\}, \tag{2.10}$$

where λ ∈ ℂ and x ∈ span{U}.

The main task of the block CIRR method is to evaluate the U_k (cf. (2.6)). In practice, the U_k have to be computed approximately by a numerical integration scheme:

$$U_k \approx \tilde{U}_k = \frac{1}{2\pi\sqrt{-1}} \sum_{j=1}^{q} \omega_j z_j^k (z_j B - A)^{-1} B Y, \quad k = 0, 1, \ldots, g-1, \tag{2.11}$$

where the z_j are the integration points and the ω_j are the corresponding weights. From (2.11), it is easy to see that the dominant work of the block CIRR method is actually solving generalized shifted linear systems of the form

$$(z_j B - A) X_j = BY, \quad j = 1, 2, \ldots, q. \tag{2.12}$$

Noticing that the integration points and the columns of the right-hand sides are independent, the CIRR method can be easily implemented on modern distributed parallel computers.
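To make the filtering effect of (2.4) and its quadrature approximation (2.11) concrete, the following sketch applies the trapezoidal rule on a circle to a diagonal pencil with B = I, an assumption made so each linear solve in (2.12) reduces to a scalar division (the paper itself uses Gauss-Legendre quadrature, and `projector_diag` is our illustrative name). For each eigenvalue a, the corresponding diagonal entry of the approximated F_0 is close to 1 if a lies inside the contour and close to 0 otherwise, which is why F_0 acts as a spectral projector onto the eigenspace of the enclosed eigenvalues.

```python
import cmath

# Sketch: approximate F0 = (1/(2*pi*i)) \oint (zB - A)^{-1} B dz for a
# diagonal pencil (B = I) by the q-point trapezoidal rule on a circle.
# By the residue theorem, the (a,a) entry of the exact F0 is 1 if the
# eigenvalue a lies inside the contour and 0 otherwise.

def projector_diag(eigs, center=0.0, radius=1.0, q=64):
    """Diagonal of the approximated F0 for A = diag(eigs), B = I."""
    diag = []
    for a in eigs:
        acc = 0.0 + 0.0j
        for j in range(q):
            theta = 2.0 * cmath.pi * j / q
            z = center + radius * cmath.exp(1j * theta)
            # dz = i*r*e^{i theta} d(theta); the 1/(2*pi*i) factor cancels,
            # leaving (1/q) * r*e^{i theta} / (z - a) per quadrature node.
            acc += radius * cmath.exp(1j * theta) / (z - a)
        diag.append(acc / q)
    return diag

p = projector_diag([0.3, -0.5, 2.0, 3.0])
# entries for eigenvalues inside the unit circle are ~1, outside ~0
```

Because the integrand is analytic near the circle, the trapezoidal rule converges exponentially here, so even modest q gives filter values extremely close to 0 or 1.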

The complete block CIRR method is summarized as follows.

Algorithm 1: The block CIRR method

Input: Y, g, q, Γ.
Output: Approximate eigenpairs (λ̃_i, x̃_i), i = 1, 2, …, s, inside Γ.

1. Compute Ũ_k, k = 0, 1, …, g−1, approximately by (2.11), and form Ũ = [Ũ_0, Ũ_1, …, Ũ_{g−1}].
2. Compute the singular value decomposition of Ũ.
3. Set Â = Q^*AQ and B̂ = Q^*BQ, where the columns of Q are the leading left singular vectors of Ũ.
4. Solve the generalized eigenproblem of size s: Ây_i = λ̃_i B̂y_i, to obtain the eigenpairs (λ̃_i, y_i).
5. Compute x̃_i = Qy_i, and select the approximate eigenpairs (λ̃_i, x̃_i).

## 3 A contour-integral based QZ algorithm

The contour-integral based methods are recent developments for eigenvalue problems. The CIRR method is a typical example of this class. According to the brief description in the previous section, the basic idea of the block CIRR method can be summarized as follows: (i) construct a particular subspace that contains the desired eigenspace by means of a sequence of contour integrals (cf. (2.4)), and (ii) use the orthogonal projection technique, with respect to the subspace span{U} (cf. (2.6)), to extract the desired eigenpairs. In this section, we derive another contour-integral based eigensolver. The idea stems from the attempt to use the oblique projection technique to extract the desired eigenpairs in the block CIRR method. When applying the oblique projection method, the key step is finding a suitable left subspace. We find an appropriate left subspace by using the QZ method to generate a generalized Schur decomposition associated with the desired eigenvalues. This finally leads us to a contour-integral based QZ method for solving (1.1). We call the resulting algorithm CIQZ for ease of reference.

In this section, we first detail the derivation of our contour-integral based QZ method. We then discuss some implementation issues that it may encounter in practical applications, after which we give the complete CIQZ method.

### 3.1 The derivation of the CIQZ algorithm

The CIRR method uses the orthogonal projection technique to extract the sought-after eigenpairs from span{U}. Here we consider using the oblique projection technique [4, 22], another class of projection methods, to compute the desired eigenpairs.

Since span{U} contains the eigenspace of interest, it is natural to choose span{U} as the right subspace (or search subspace). The oblique projection technique extracts the desired eigenpairs from span{U} by imposing the Petrov-Galerkin condition, which requires orthogonality with respect to some left subspace (or test subspace), say span{W}:

$$Ax - \lambda Bx \;\perp\; \mathrm{span}\{W\}, \tag{3.1}$$

where λ is located inside Γ, x ∈ span{U}, and W is an orthogonal matrix. Let V be a matrix whose columns form an orthogonal basis of span{U}. The orthogonality condition (3.1) leads to the projected eigenproblem

$$W^* A V y = \lambda W^* B V y, \tag{3.2}$$

where y satisfies x = Vy.

Now our task is to seek an appropriate left subspace span{W}. Our discussion begins with a partial generalized Schur form of the matrix pair (A, B).

###### Definition 3.1 ([11])

A partial generalized Schur form of dimension s for a matrix pair (A, B) is the decomposition

$$AQ_s = Z_s H_s, \qquad BQ_s = Z_s G_s, \tag{3.3}$$

where Q_s and Z_s are orthogonal n × s matrices, and H_s and G_s are s × s upper triangular matrices. A column q_i of Q_s is referred to as a generalized Schur vector, and we refer to a pair (λ_i, q_i), with λ_i = H_s(i,i)/G_s(i,i), as a generalized Schur pair.

The formulation (3.3) is equivalent to

$$(Z_s)^* A Q_s = H_s, \qquad (Z_s)^* B Q_s = G_s, \tag{3.4}$$

from which we know that the ratios H_s(i,i)/G_s(i,i), i = 1, …, s, are eigenvalues of (A, B). Let y_i be the eigenvectors of the pair (H_s, G_s) associated with these eigenvalues; then the (λ_i, Q_s y_i) are eigenpairs of (A, B) [11, 19].

Applying the QZ algorithm to (3.2) yields the generalized Schur form

$$(P^L)^* (W^* A V) P^R = H^A \quad\text{and}\quad (P^L)^* (W^* B V) P^R = H^B, \tag{3.5}$$

where P^L and P^R are orthogonal matrices, and H^A and H^B are upper triangular matrices. The eigenvalues of the pair (W^*AV, W^*BV) are H^A(i,i)/H^B(i,i), i = 1, …, s [15, 19].

Comparing (3.4) with (3.5), it is readily seen that we have constructed in (3.5) a partial generalized Schur form for the matrix pair (W^*AV, W^*BV): P^R plays the role of Q_s, and P^L plays the role of Z_s.

Since the desired eigenvalues are finite, the diagonal entries of H^A and H^B are non-zero, which means that H^A and H^B are nonsingular. In view of (3.5), we can conclude that

$$\mathrm{span}\{W P^L\} = \mathrm{span}\{A V P^R\} = \mathrm{span}\{B V P^R\}. \tag{3.6}$$

On the other hand, since P^L and P^R are nonsingular, we have

$$\mathrm{span}\{W\} = \mathrm{span}\{AV\} = \mathrm{span}\{BV\}. \tag{3.7}$$

Motivated by (3.7), we choose the left subspace to be span{AU + BU}. Below we justify this choice.

###### Theorem 3.2

Let P be an arbitrary n × gL matrix, let U be defined by (2.6), and set Ã = P^*AU and B̃ = P^*BU. A projected matrix pencil is defined by zB̃ − Ã. If the ranks of both Ã and B̃ are s, then the finite eigenvalues of zB̃ − Ã are λ_1, λ_2, …, λ_t, i.e., the eigenvalues that are located inside Γ.

The proof is almost identical to that of Theorem 4 in [17], where the contour integrals were defined without the factor B, that is, the term B was dropped compared with the expression (2.4).

Theorem 3.2 says that the desired eigenvalues can be obtained by computing the eigenvalues of the projected eigenproblem zP^*BU − P^*AU, provided the ranks of both P^*AU and P^*BU are s. In view of this, we show the following results.

###### Theorem 3.3

If the rank of E is s, then the ranks of U and AU + BU are s.

Proof. We first show that the rank of U is s. By (2.1) and (2.8), we have

$$(S^{-1})(1:s,:)\,U = (S^{-1})(1:s,:)\,S(:,1:s)\,E = E. \tag{3.8}$$

Since E is full-rank, by (2.8) we know that span{U} = span{S(:,1:s)}. Therefore, the rank of U is s.

Next we show that the rank of AU + BU is s. For convenience, we turn to showing that the rank of (AU + BU)^*, i.e., the conjugate transpose of AU + BU, is s.

Since span{AU} = span{BU} (cf. (3.7)), there exists a nonsingular matrix Δ such that AU = BUΔ. According to (2.1), (2.5), and (2.8), we have

$$\bigl((T^{-1})(:,1:s)\bigr)^*(AU + BU) = \bigl(BS(:,1:s)\bigr)^* BS(:,1:s)\,E\,(\Delta + I). \tag{3.9}$$

In view of (2.1), we know that BS(:,1:s) = (T^{-1})(:,1:s) is full-rank, which means that (BS(:,1:s))^*BS(:,1:s) is nonsingular. By (3.9), we can conclude that ((T^{-1})(:,1:s))^*(AU + BU) is of full rank s, and thus the rank of AU + BU is s.

Based on Theorem 3.2 and Theorem 3.3, we have that the eigenvalues of the pencil z(AU+BU)^*BU − (AU+BU)^*AU are the eigenvalues of (1.1) inside Γ, which justifies our choice of taking the left subspace to be span{AU + BU}. On the other hand, the columns of W and V form bases of span{AU + BU} and span{U}, respectively. As a consequence, there exist nonsingular matrices P_1 and P_2 such that

$$(AU + BU)P_1 = [W, 0], \qquad UP_2 = [V, 0]. \tag{3.10}$$

Now, we have

$$P_1^*\bigl(z(AU+BU)^*BU - (AU+BU)^*AU\bigr)P_2 = \begin{bmatrix} zW^*BV - W^*AV & 0 \\ 0 & 0 \end{bmatrix}. \tag{3.11}$$

Therefore, the pencil z(AU+BU)^*BU − (AU+BU)^*AU shares its finite eigenvalues with zW^*BV − W^*AV, which are H^A(i,i)/H^B(i,i) by (3.5). Let (λ̃_i, y_i) be the eigenpairs of the projected problem (3.2); then, according to (3.4) and (3.5), the (λ̃_i, Vy_i) are exactly the eigenpairs of (1.1) inside Γ.

We use the following algorithm to summarize the above discussion.

Algorithm 2: A contour-integral based QZ algorithm

Input: Y, g, q, Γ.
Output: Approximate eigenpairs (λ̃_i, x̃_i).

1. Compute Ũ_k, k = 0, 1, …, g−1, approximately by (2.11).
2. Form Ũ = [Ũ_0, Ũ_1, …, Ũ_{g−1}] and compute the orthogonalizations: Ũ = VR_V and AŨ + BŨ = WR_W.
3. Compute Â = W^*AV and B̂ = W^*BV.
4. Solve the projected eigenproblem Ây_i = λ̃_i B̂y_i by the QZ algorithm.
5. Compute x̃_i = Vy_i and the residual norms r_i.
6. Select the approximate eigenpairs (λ̃_i, x̃_i).

### 3.2 The implementation issues

If we apply Algorithm 2 to compute the eigenvalues inside Γ, we will encounter some issues in practical implementation, just like other contour-integral based eigensolvers [20, 24, 25]. In this section, we discuss the implementation issues of our new method.

The first issue we have to treat is selecting a suitable size for the starting matrix Y, with a prescribed parameter g. Since U (cf. (2.6)) is expected to span a subspace that contains the eigenspace of interest, we have to choose the parameter L, the number of columns of Y, such that gL ≥ s, the number of eigenvalues inside Γ. A strategy was proposed in [23] for finding a suitable parameter L for the block CIRR method. It starts with finding an estimate of s. Given a positive integer h_0, by "Y^{h_0} = randn(n, h_0)" we mean that Y^{h_0} is an n × h_0 matrix with i.i.d. entries drawn from the standard normal distribution N(0, 1). By (2.5) and (2.6), one can easily verify that the mean

$$\mathbb{E}\bigl[\mathrm{trace}\bigl((Y^{h_0})^* F_0 Y^{h_0}\bigr)\bigr] = s\,h_0. \tag{3.12}$$

Therefore,

$$s_0 := \frac{1}{h_0}\cdot \mathbb{E}\bigl[\mathrm{trace}\bigl((Y^{h_0})^* F_0 Y^{h_0}\bigr)\bigr] \tag{3.13}$$

gives an initial estimate of s [12, 27]; in practice, the expectation is dropped and F_0 is replaced by its quadrature approximation. With this information at hand, the strategy in [23] works as follows: (i) set L such that gL > s_0, (ii) select the starting matrix Y and compute Ũ by (2.11), (iii) if the minimum singular value of Ũ is small enough, a suitable L has been found; otherwise, enlarge L and repeat (ii) and (iii). We observe that the formula (3.13) always gives a good estimate of s. However, the computed s_0 may be much larger than s in some cases, for instance when the matrices A and B are ill-conditioned, which makes computing the singular value decomposition of Ũ potentially expensive. Due to this fact, in our method we turn to the strategy proposed in [27], whose working mechanism is as follows: use the rank-revealing QR factorization [7, 15] to monitor the numerical rank of Ũ; if Ũ is numerically rank-deficient, then the subspace spanned by Ũ already contains the desired eigenspace sufficiently, and as a result a suitable parameter L has been found.
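A minimal sketch of the count estimate (3.13), again for a diagonal pencil with B = I so that F_0 is diagonal (our simplifying assumption; the helper `estimate_count` and the use of the trapezoidal filter are illustrative choices, not the paper's implementation):

```python
import random

# Sketch of the stochastic estimate (3.13) of the eigenvalue count s.
# For a diagonal pencil (B = I), F0 is diagonal with entries ~1 for
# eigenvalues inside the contour and ~0 outside (cf. (2.5)), so
# trace(Y* F0 Y) / h0 concentrates around the number of enclosed
# eigenvalues.  The trapezoidal filter 1/(1 - (a/r)^q) stands in for
# the exact diagonal of F0.

def estimate_count(eigs, h0=1000, r=1.0, q=64, seed=0):
    """Monte-Carlo estimate of trace(Y* F0 Y) / h0 with N(0,1) samples."""
    rng = random.Random(seed)
    filt = [1.0 / (1.0 - (a / r) ** q) for a in eigs]  # approx. diag of F0
    total = 0.0
    for _ in range(h0):
        y = [rng.gauss(0.0, 1.0) for _ in eigs]
        total += sum(f * v * v for f, v in zip(filt, y))  # y* F0 y
    return total / h0

est = estimate_count([0.3, -0.5, 2.0, 3.0])  # two eigenvalues inside the unit circle
```

With two of the four eigenvalues inside the unit circle, the estimate concentrates near 2; its variance shrinks like 1/h_0.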

Another issue we have to address is designing the stopping criteria. The stopping criteria here include two aspects: (i) all computed approximate eigenpairs attain the prescribed accuracy, and (ii) all eigenpairs inside the given region are found.

As for the first aspect of the stopping criteria, since the U_k can only be computed approximately by some quadrature scheme (cf. (2.11)), the approximate eigenpairs computed by Algorithm 2 may fail to attain the prescribed accuracy in practical applications. A natural solution is to refine Ũ (step 2 in Algorithm 2) iteratively. A refinement scheme was suggested in [16]. Let Ũ_0^{(0)} = Ũ_0, and let l be a positive integer; the refinement scheme iteratively computes Ũ_k^{(l)} by a q-point numerical integration scheme:

$$U_k^{(l)} \approx \tilde{U}_k^{(l)} = \frac{1}{2\pi\sqrt{-1}} \sum_{j=1}^{q} \omega_j z_j^k (z_j B - A)^{-1} B\,\tilde{U}_0^{(l-1)}, \quad k = 0, 1, \ldots, g-1, \tag{3.14}$$

and then constructs

$$\tilde{U}^{(l)} = [\tilde{U}_0^{(l)}, \tilde{U}_1^{(l)}, \ldots, \tilde{U}_{g-1}^{(l)}]. \tag{3.15}$$

The refined Ũ^{(l)} is used to form the projected eigenproblem (3.2), through which we compute the approximate eigenpairs. The accuracy of the approximate eigenpairs improves as the iterations proceed; see [23] for more details.
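The effect of the refinement can be seen on a single eigenvalue. The q-point trapezoidal rule on a circle of radius r centered at the origin acts on an eigenvalue a as the rational filter ρ(a) = 1/(1 − (a/r)^q), a known closed form for that rule (the paper uses Gauss-Legendre quadrature, whose filter differs in detail but behaves similarly). Each refinement pass multiplies the eigencomponents by ρ(a) again, so components outside Γ decay geometrically with the iteration count l while components inside stay near 1:

```python
# Sketch: why iterative refinement (3.14) improves accuracy.  A small q
# is used on purpose so the single-pass filter leaks noticeably and the
# gain from repeated passes is visible.

def rho(a, r=1.0, q=8):
    """Trapezoidal-rule filter value at eigenvalue a (circle of radius r)."""
    return 1.0 / (1.0 - (a / r) ** q)

inside, outside = 0.3, 2.0
one_pass = rho(outside)        # small but nonzero leakage
two_passes = rho(outside) ** 2  # leakage squared: much smaller
kept = rho(inside)              # stays close to 1
```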

If all approximate eigenpairs attain the prescribed accuracy after a certain iteration, we can stop the iteration process. However, in general we do not know the number of eigenvalues inside the target region in advance. This fact leads to the second aspect of the stopping criteria: how to guarantee that all desired eigenpairs have been found when the iteration process stops. We take advantage of the idea proposed in [27]. The rationale is that, as the iteration proceeds, the accuracy of the desired eigenpairs improves while that of the spurious ones does not; as a result, there will be a gap in accuracy between the desired eigenpairs and the spurious ones [27]. Based on this observation, a test tolerance δ is introduced to discriminate between the desired eigenpairs and the spurious ones. Specifically, for an approximate eigenpair (λ̃_i, x̃_i), define the corresponding residual norm as

$$r_i = \frac{\|A\tilde{x}_i - \tilde{\lambda}_i B\tilde{x}_i\|}{\|A\tilde{x}_i\| + \|B\tilde{x}_i\|}. \tag{3.16}$$

If r_i < δ, then we view (λ̃_i, x̃_i) as an approximation to a sought-after eigenpair and refer to it as an eigenpair filtered by δ. If the numbers of filtered eigenpairs are the same in two consecutive iterations, then we take this number to be the number of eigenvalues inside Γ; see [27] for more details.
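The residual norm (3.16) is straightforward to evaluate; the sketch below writes it out for small dense pencils stored as lists of lists (the helper names are ours):

```python
# Sketch: the scaled residual norm (3.16) for an approximate eigenpair,
# checked on a tiny 2x2 pencil: an exact eigenpair has residual 0, a
# perturbed eigenvalue does not.

def matvec(M, x):
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def norm(x):
    return sum(abs(v) ** 2 for v in x) ** 0.5

def residual(A, B, lam, x):
    """r = ||A x - lam B x|| / (||A x|| + ||B x||), cf. (3.16)."""
    Ax, Bx = matvec(A, x), matvec(B, x)
    num = norm([a - lam * b for a, b in zip(Ax, Bx)])
    return num / (norm(Ax) + norm(Bx))

A = [[2.0, 0.0], [0.0, 5.0]]
B = [[1.0, 0.0], [0.0, 1.0]]
```

The scaling by ||Ax̃|| + ||Bx̃|| makes the test tolerance δ meaningful independently of the norms of A and B.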

From (3.14) we can see that, in each iteration, the dominant work is solving generalized shifted linear systems of the form

$$(z_i B - A) X_i^{(l-1)} = B\tilde{U}_0^{(l-1)}, \quad i = 1, 2, \ldots, q. \tag{3.17}$$
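Since the q systems in (3.17) are mutually independent, they can be solved concurrently, which is the parallelism noted earlier. A sketch with Python's thread pool, for a diagonal pencil with B = I so that each solve reduces to an entrywise division (an assumption for the sketch; in practice each task would be a sparse factorization and solve, and the shifts below are hypothetical quadrature points):

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch: solve the q independent shifted systems (z_i I - A) x = b
# concurrently for a diagonal matrix A = diag(a_diag).

def solve_shifted(z, a_diag, b):
    """Solve (z I - A) x = b entrywise for diagonal A."""
    return [bi / (z - ai) for ai, bi in zip(a_diag, b)]

a_diag = [0.3, -0.5, 2.0, 3.0]
b = [1.0, 1.0, 1.0, 1.0]
shifts = [1j, -1j, 1.0 + 1j, 1.0 - 1j]  # hypothetical quadrature points
with ThreadPoolExecutor() as pool:
    sols = list(pool.map(lambda z: solve_shifted(z, a_diag, b), shifts))
```

`pool.map` preserves the order of the shifts, so the solutions can be combined with the quadrature weights afterwards exactly as in (2.11)/(3.14).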

Integrating the above strategies into Algorithm 2, we give below the complete CIQZ algorithm for computing the eigenpairs inside the given region Γ.

Algorithm 3: The complete CIQZ method

Input: g, q, h_0, δ, tol, Γ.
Output: Approximate eigenpairs (λ̃_i, x̃_i).

1. Let L = h_0, pick a starting matrix Y = randn(n, L), and compute Ũ_0, Ũ_1, …, Ũ_{g−1} by (2.11).
2. Compute s_0 by (3.13), and set Ũ = [Ũ_0, Ũ_1, …, Ũ_{g−1}].
3. If gL < s_0
4. Pick ΔL > 0 and compute the Ũ_k associated with randn(n, ΔL) by (2.11). Augment Y to [Y, randn(n, ΔL)] and construct the enlarged Ũ.
5. Else
6. Keep Y and construct Ũ.
7. End
8. Compute the rank-revealing QR factorization of Ũ. If Ũ is numerically rank-deficient, a suitable L has been found; otherwise, set L = L + ΔL and go to step 3.
9. Set Ũ^{(1)} = Ũ and l = 1.
10. For l = 1, 2, …
11. Compute the orthogonalizations Ũ^{(l)} = VR_V and AŨ^{(l)} + BŨ^{(l)} = WR_W.
12. Compute Â = W^*AV and B̂ = W^*BV.
13. Compute the eigenpairs (λ̃_i, y_i) of the projected problem Ây = λ̃B̂y.
14. Compute x̃_i = Vy_i.
15. Set n_l = 0 and max_r = 0.
16. For each approximate eigenpair (λ̃_i, x̃_i)
17. Compute r_i by (3.16).
18. If λ̃_i is inside Γ and r_i < δ, then set n_l = n_l + 1 and max_r = max(max_r, r_i).
19. End
20. Collect the n_l filtered eigenpairs.
21. If n_l = n_{l−1} and max_r < tol, stop.
22. Compute Ũ_0^{(l+1)}, …, Ũ_{g−1}^{(l+1)} by the refinement scheme (3.14), and construct Ũ^{(l+1)} by (3.15).
23. End

Here we give some remarks on Algorithm 3.

• Steps 1 to 8 are devoted to determining a suitable parameter L for the starting matrix Y. Meanwhile, a matrix Ũ is also generated.

• The for-loop, steps 16 to 19, is used to detect the spurious eigenvalues. Only the approximate eigenpairs whose residual norms are less than δ are retained.

• Step 21 embodies the stopping criteria, which contain two aspects: (i) the number of eigenpairs filtered by δ is the same as that in the previous iteration, and (ii) the residual norms of all filtered eigenpairs are less than the prescribed tolerance tol.

• By (3.7), theoretically, the left subspace can be chosen as either span{AV} or span{BV}. However, in practical implementations U can only be computed by a quadrature scheme, yielding an approximation Ũ; in Algorithm 3 we choose the left subspace to be span{AŨ + BŨ} so as to include the information of both AŨ and BŨ.

## 4 Numerical Experiments

In this section, we use some numerical experiments to illustrate the performance of our CIQZ method (Algorithm 3). The test problems are from the Matrix Market collection [6]. They are real-world problems from scientific and engineering applications. The descriptions of the related matrices are presented in Table 4.1, where nnz denotes the number of non-zero entries and cond denotes the condition number, computed by the Matlab function condest. All computations are carried out in Matlab version R2014b on a MacBook with an Intel Core i5 2.5 GHz processor and 8 GB RAM.

We use the Gauss-Legendre quadrature rule with q quadrature points on Γ to compute the contour integrals (3.14) [8]. As for solving the generalized shifted linear systems of the form (3.17), we first use the Matlab function lu to compute the LU decomposition of z_iB − A, and then perform the triangular substitutions to get the corresponding solutions. In the experiments, the size h_0 of the sampling matrix and the parameter g are taken to be fixed values for all test problems.

Experiment 4.1 The goal of this experiment is to show the convergence behavior of CIQZ. The test problem is the bounded fineline dielectric waveguide generalized eigenproblem BFW782 (cf. Table 4.1) [6]. It stems from a finite element discretization of the Maxwell equation for the propagating modes and magnetic field profiles of a rectangular waveguide filled with dielectric and PEC structures [4]. We are interested in the eigenvalues inside a given circle Γ. By using the Matlab function eig to compute all eigenvalues of the test problem in dense format, we find that there are 141 eigenvalues within Γ.

Define

$$\mathrm{max\_r} = \max_{1 \le i \le s} r_i, \tag{4.1}$$

where the r_i are the residual norms given by (3.16) and the (λ̃_i, x̃_i) are the filtered eigenpairs. In our CIQZ method, we stop the iteration process when: (i) the numbers of filtered eigenpairs in two consecutive iterations are the same, and (ii) the max_r of the current iteration is less than the prescribed tolerance tol (step 21 in Algorithm 3).

The left picture in Fig. 4.1 depicts the numbers of filtered eigenpairs over ten iterations. Recall that the filtered eigenpairs are those whose residual norms are less than the test tolerance δ; in this experiment δ is fixed. We see that in the first iteration there are 28 approximate eigenpairs whose residual norms are less than δ. But from the second iteration to the last, the number of filtered eigenpairs in each iteration is 141, which is exactly the number of eigenvalues inside Γ.

The right picture in Fig. 4.1 shows the maximum of the residual norms of the filtered eigenpairs, i.e., max_r (cf. (4.1)), in each iteration. From the left picture, we know that the number of filtered eigenpairs reaches the number of eigenvalues inside Γ from the second iteration onwards. Therefore, we plot max_r from the second iteration to the 10th. We see that max_r decreases monotonically and dramatically from the second iteration to the fourth, stays at almost the same level over the next three iterations, and rebounds from the eighth iteration.

Experiment 4.2 This experiment is devoted to showing the numerical performance of our CIQZ method. We compare the CIQZ method with the Matlab built-in function eig and the block CIRR method (Block_CIRR). In [23], the authors addressed some implementation problems of the block CIRR method, including the selection of the size of the starting matrix and an iterative refinement scheme; for the block CIRR method we use the version proposed in [23]. Note that the dominant computation cost of both CIQZ and Block_CIRR comes from solving generalized shifted linear systems of the form (3.17). On the other hand, when using eig to compute the eigenvalues inside the target region, we have to first compute all eigenvalues in dense format and then select the target eigenvalues according to their coordinates, even though the matrices listed in Table 4.1 are sparse. Therefore, for the sake of fairness, we compare the three methods only in terms of accuracy and do not report the CPU time taken by each method.

Block_CIRR and CIQZ are both contour-integral based eigensolvers, and the common parameters q and g are taken to be the same for the two methods. In [23], Block_CIRR performs two iterative refinements, i.e., three iterations in total. For comparison, in the experiment we set the convergence tolerances of our CIQZ method so that the results computed by CIQZ and Block_CIRR are those obtained in the third iteration.

We use max_r (cf. (4.1)) to measure the accuracy achieved by each of the three methods. In Table 4.1, the listed center and radius define the target circle Γ, and the listed count is the number of eigenvalues inside Γ. In the last three columns of Table 4.1, we display the max_r values computed by the three methods for each of the six test problems.

From Table 4.1, we see that of the two contour-integral based eigensolvers, Block_CIRR and CIQZ, the latter outperforms the former on all six test problems. In particular, for problem 7, whose matrices are ill-conditioned, Block_CIRR fails to compute the desired eigenpairs. Therefore, our CIQZ method is more accurate and reliable than Block_CIRR. As for the comparison of the Matlab function eig and CIQZ, the results computed by the two methods agree to almost the same number of digits of accuracy for the first three test problems, while our CIQZ method is more accurate than eig by around two digits for the last three problems, whose matrices are ill-conditioned. We should point out that in the experiment our CIQZ method runs only three iterations; it may obtain more accurate results if more iterations are performed.

## 5 Conclusions

In this paper, we present a new contour-integral based method for computing the eigenpairs inside a given region. Our method is based on the CIRR method. The main difference between the original CIRR method and our CIQZ method is the way to extract the desired eigenpairs. We establish the mathematical framework and address some implementation issues for our new method. Numerical experiments show that our method is reliable and accurate.

## References

• [1] L. Ahlfors, Complex Analysis, 3rd Edition, McGraw-Hill, Inc., 1979.
• [2] E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen, LAPACK Users' Guide, 3rd Edition, SIAM, Philadelphia, 1999.
• [3] A. P. Austin and L. N. Trefethen, Computing eigenvalues of real symmetric matrices with rational filters in real arithmetic, preprint.
• [4] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. Van Der Vorst, Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide, SIAM, Philadelphia, 2000.
• [5] B. Beckermann, G. H. Golub, and G. Labahn, On the numerical condition of a generalized Hankel eigenvalue problem, Numer. Math., 106 (2007), pp. 41–68.
• [6] R. F. Boisvert, R. Pozo, K. Remington, R. Barrett, and J. Dongarra, The matrix market: A web repository for test matrix data, in The Quality of Numerical Software, Assessment and Enhancement, R. Boisvert, ed., Chapman & Hall, London, 1997, pp. 125 –137.
• [7] T. F. Chan, Rank revealing QR factorizations, Lin. Alg. Appl., 88-89 (1987), pp. 67–82.
• [8] P. J. Davis and P. Rabinowitz, Methods of numerical integration, 2nd Edition, Academic Press, Orlando, FL, 1984.
• [9] J. Demmel, Applied Numerical Linear Algebra, SIAM, Philadelphia, 1997.
• [10] H-R. Fang and Y. Saad, A filtered Lanczos procedure for extreme and interior eigenvalue problems, SIAM J. Sci. Comput., 34 (2012), pp. A2220–A2246.
• [11] D. R. Fokkema, G. L. G. Sleijpen, and H. A. Van Der Vorst, Jacobi-Davidson style QR and QZ algorithms for the reduction of matrix pencils, SIAM J. Sci. Comput., 20 (1998), pp. 94–125.
• [12] Y. Futamura, H. Tadano, and T. Sakurai, Parallel stochastic estimation method of eigenvalue distribution, JSIAM Letters 2 (2010), pp.127–130.
• [13] K. Gallivan, E. Grimme, and P. Van Dooren, A rational Lanczos algorithm for model reduction, Numer. Algorithms, 12 (1996), pp. 33–64.
• [14] F. R. Gantmacher, The Theory of Matrices, Chelsea, New York, 1959.
• [15] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd Edition, Johns Hopkins University Press, Baltimore, MD, 1996.
• [16] A. Imakura, L. Du, and T. Sakurai, Error bounds of Rayleigh-Ritz type contour integral-based eigensolver for solving generalized eigenvalue problems, Numer. Algor., 68 (2015).
• [17] T. Ikegami and T. Sakurai, Contour integral eigensolver for non-Hermitian systems: a Rayleigh–Ritz-type approach, Taiwanese J. Math., 14 (2010), pp. 825–837.
• [18] T. Ikegami, T. Sakurai, and U. Nagashima, A filter diagonalization for generalized eigenvalue problems based on the Sakurai-Sugiura projection method, J. Comp. Appl. Math., 233 (2010), pp. 1927–1936.
• [19] C. B. Moler and G. W. Stewart, An algorithm for generalized matrix eigenvalue problems, SIAM J. Numer. Anal., 10 (1973), pp. 241–256.
• [20] E. Polizzi, Density-matrix-based algorithm for solving eigenvalue problems, Phys. Rev. B 79 (2009) 115112.
• [21] A. Ruhe, Rational Krylov: A practical algorithm for large sparse nonsymmetric matrix pencils, SIAM J. Sci. Comput., 19 (1998), pp. 1535–1551.
• [22] Y. Saad, Numerical Methods for Large Eigenvalue Problems, SIAM, Philadelphia, 2011.
• [23] T. Sakurai, Y. Futamura, and H. Tadano, Efficient parameter estimation and implementation of a contour integral-based eigensolver, J. Alg. Comput. Tech., 7 (2013), pp. 249–269.
• [24] T. Sakurai and H. Sugiura, A projection method for generalized eigenvalue problems using numerical integration, J. Comput. Appl. Math., 159 (2003), pp. 119–128.
• [25] T. Sakurai and H. Tadano, CIRR: A Rayleigh–Ritz type method with contour integral for generalized eigenvalue problems, Hokkaido Math. J. 36 (2007), pp. 745–757.
• [26] G. W. Stewart, Matrix Algorithms, Vol. II, Eigensystems, SIAM, Philadelphia, 2001.
• [27] G. Yin, R. Chan, and M.-C. Yeung, A FEAST algorithm for generalized non-Hermitian eigenvalue problems.