Recursive Integral Method with Cayley Transformation
Abstract
Recently, a nonclassical eigenvalue solver, called RIM, was proposed to compute (all) eigenvalues in a given region of the complex plane. Without solving any eigenvalue problem, it tests whether a region contains eigenvalues using an approximate spectral projection. Regions that contain eigenvalues are subdivided and tested recursively until the eigenvalues are isolated to a specified precision. This makes RIM an eigensolver distinct from all existing methods. Furthermore, it requires no a priori spectral information. In this paper, we propose an improved version of RIM for non-Hermitian eigenvalue problems. Using Cayley transformation and Arnoldi's method, the computation cost is reduced significantly. The effectiveness and efficiency of the new method are demonstrated by numerical examples and compared with 'eigs' in Matlab.
1 Introduction
We consider the non-Hermitian generalized eigenvalue problem
(1) Ax = λBx,
where A and B are large sparse matrices. Here B can be singular. Such eigenvalue problems arise in many scientific and engineering applications [6, 11, 13] as well as in emerging areas such as data analysis in social networks [8].
The problem of interest in this paper is to find (all) eigenvalues in a given region S of the complex plane without any spectral information, i.e., the number and distribution of the eigenvalues in S are not known.
In a recent work [2], we developed an eigenvalue solver RIM (recursive integral method). RIM, which is essentially different from all existing eigensolvers, is based on spectral projection and domain decomposition. Briefly speaking, given a region S whose boundary Γ is a simple closed curve, RIM computes an indicator δ_S using the spectral projection defined by a Cauchy contour integral on Γ. The indicator is used to decide whether S contains eigenvalue(s). When the answer is positive, S is divided into subregions and indicators for these subregions are computed. The procedure continues until the size of the region is smaller than the specified precision. The centers of the remaining regions are the approximations of the eigenvalues.
To be specific, for z in the resolvent set of the matrix pencil (A, B), the resolvent is defined as
(2) R(z) = (zB − A)^{−1}.
Let Γ be a simple closed curve lying in the resolvent set of (A, B) on the complex plane. The spectral projection for (1) is given by
(3) P = (1/(2πi)) ∮_Γ (zB − A)^{−1} B dz.
Given a random vector f, it is well known that P projects f onto the generalized eigenspace associated with the eigenvalues enclosed by Γ. Clearly, Pf is zero if there is no eigenvalue inside Γ, and nonzero otherwise.
RIM differs from classical eigensolvers [6] and from recently developed integral-based methods [5, 4]. It simply computes an indicator of a region S using the quadrature approximation of Pf,
(4) Pf ≈ (1/(2πi)) Σ_{j=1}^{n} ω_j x_j,
where the ω_j's are quadrature weights and the x_j's are the solutions of the linear systems
(5) (z_j B − A) x_j = Bf, j = 1, …, n,
at the quadrature points z_j on Γ. Recall that if there is no eigenvalue inside Γ, then Pf = 0 for all f.
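As an illustration of (4) and (5), the following Python sketch (illustrative only; the toy matrices and contour are not from the paper, whose code is in Matlab) approximates Pf by the trapezoidal rule on a circle:

```python
import numpy as np

# Illustrative sketch: approximate P f = (1/(2*pi*i)) * oint (zB - A)^{-1} B f dz
# by the trapezoidal rule on a circle with center c and radius r.
# The matrices below are toy stand-ins, not the paper's test problems.
def projection_quad(A, B, f, c, r, n=32):
    Pf = np.zeros_like(f, dtype=complex)
    for j in range(n):
        z = c + r * np.exp(2j * np.pi * j / n)       # quadrature point on the circle
        dz = 1j * (z - c)                            # derivative of the parametrization
        Pf += np.linalg.solve(z * B - A, B @ f) * dz # one linear solve, as in (5)
    return Pf / (1j * n)                             # includes the 1/(2*pi*i) factor

A = np.diag([1.0, 2.0, 5.0])                         # eigenvalues 1, 2, 5
B = np.eye(3)
f = np.ones(3)
Pf = projection_quad(A, B, f, c=1.5, r=1.0)          # circle encloses 1 and 2 only
# Pf is close to (1, 1, 0): the projection of f onto the eigenspace
# of the two enclosed eigenvalues.
```

Doubling n drives the quadrature error down exponentially, since the integrand is periodic along the circle.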
In practice, one needs a threshold to distinguish between Pf = 0 and Pf ≠ 0. The indicator needs to be robust enough to treat the following problems:

(P1) The randomly selected f may have only a small component in the range of P (which we write as R(P)), in which case Pf may be small even when there are eigenvalues in S.

(P2) Since a quadrature rule is used to approximate Pf, the computed indicator will not be exactly zero (and may not even be very small) when there is no eigenvalue in S.
The strategy of RIM in [2] selects a small threshold δ₀ based on substantial experimentation, i.e., S is declared to contain no eigenvalue if the indicator falls below δ₀. This choice of threshold for discarding a region systematically leans towards further investigation of regions that may potentially contain eigenvalues. Of course, such a strategy leads to a higher computation cost.
To understand the first problem, consider an orthonormal basis {u_1, …, u_k} for R(P), which coincides with the generalized eigenspace associated with all the eigenvalues within S. For a random f,
(6) Pf = Σ_{j=1}^{k} c_j u_j,
where the c_j's are the expansion coefficients. Since f is random, ||Pf|| could be very small. The solution in [2] is to normalize Pf and project again. The indicator is set to be
(7) δ_S = || P ( Pf / ||Pf|| ) ||.
Analytically, δ_S = 1 whenever S contains eigenvalues and Pf ≠ 0. But the numerical approximations of these projections may differ significantly.
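A minimal sketch of the double-projection indicator (7), with the same caveats (toy matrices, trapezoidal quadrature, Python rather than Matlab):

```python
import numpy as np

# Sketch of the indicator (7) from [2]: project a random f, normalize,
# and project again.  apply_P approximates the spectral projection by the
# trapezoidal rule on a circle with center c and radius r (toy setup).
def apply_P(A, B, v, c, r, n=32):
    out = np.zeros_like(v, dtype=complex)
    for j in range(n):
        z = c + r * np.exp(2j * np.pi * j / n)
        out += np.linalg.solve(z * B - A, B @ v) * 1j * (z - c)
    return out / (1j * n)

A = np.diag([1.0, 2.0, 5.0])
B = np.eye(3)
rng = np.random.default_rng(0)
f = rng.standard_normal(3)

Pf = apply_P(A, B, f, c=1.5, r=1.0)              # circle encloses 1 and 2
delta = np.linalg.norm(apply_P(A, B, Pf / np.linalg.norm(Pf), c=1.5, r=1.0))
# delta is close to 1 because the circle encloses eigenvalues; for a circle
# enclosing no eigenvalue, delta would be far from 1.
```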
The following is the basic algorithm for RIM:

RIM(A, B, S, ε, δ₀, f)

Input: matrices A and B, region S, precision ε, threshold δ₀, random vector f.

Output: generalized eigenvalue(s) λ inside S.

Compute δ_S using (7).
Decide if S contains eigenvalue(s):
    If δ_S < δ₀, then exit.
    Otherwise, compute the size h(S) of S.
        If h(S) > ε,
            partition S into subregions S_1, …, S_N.
            for j = 1, …, N
                RIM(A, B, S_j, ε, δ₀, f).
            end
        If h(S) ≤ ε,
            set λ to be the center of S.
            output λ and exit.


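The recursion itself can be sketched as follows (Python; the indicator test is replaced by a hypothetical exact membership test has_eig so that only the subdivision logic is shown, and the half-open boxes are my choice, not the paper's):

```python
import numpy as np

# Sketch of RIM's recursion only: quadtree subdivision of a square region
# (center, half-width h), with the indicator test replaced by an exact test.
def rim(center, h, eps, has_eig, found):
    if not has_eig(center, h):            # stands in for "indicator below threshold"
        return
    if h <= eps:                          # region small enough: output its center
        found.append(center)
        return
    for dx in (-h / 2, h / 2):            # split the square into 4 subsquares
        for dy in (-h / 2, h / 2):
            rim(center + dx + 1j * dy, h / 2, eps, has_eig, found)

A = np.diag([1.0, 2.0, 5.0])              # toy pencil with B = I
eigs = np.linalg.eigvals(A)
def has_eig(c, h):                        # half-open box avoids double counting
    return any(c.real - h <= e.real < c.real + h and
               c.imag - h <= e.imag < c.imag + h for e in eigs)

found = []
rim(center=3.0 + 0.0j, h=3.0, eps=1e-8, has_eig=has_eig, found=found)
# found holds one approximation for each of the eigenvalues 1, 2 and 5.
```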

To compute δ_S, one needs to solve many linear systems
(8) (zB − A) x = Bf,
parameterized by z on Γ. In [2], the Matlab linear solver '\' is used to solve (8). This is clearly not efficient.
In this paper, we propose a new version of RIM, called RIMC, to improve the efficiency. The contributions include: 1) Cayley transformation and Arnoldi's method to speed up the linear solves for the parameterized system (8); and 2) a new indicator to improve robustness and efficiency. The rest of the paper is arranged as follows. In Section 2, we present how to incorporate Cayley transformation and Arnoldi's method into RIM. In Section 3, we introduce a new indicator to decide whether a region contains eigenvalues. Section 4 contains the new algorithm and some implementation details. Numerical examples are presented in Section 5. The paper ends with conclusions and future work in Section 6.
2 Cayley Transformation and Arnoldi’s Method
2.1 Cayley transformation
The computation cost of RIM mainly comes from solving the linear systems (8) to compute the spectral projection Pf. In particular, when the method zooms in around an eigenvalue, it needs to solve linear systems for many close z's. This is done one by one in the first version of RIM [2]. The computation cost can be greatly reduced by taking advantage of the fact that the parameterized linear systems share the same structure.
Without loss of generality, we consider a family of linear systems
(9) (zB − A) x = b,
where z is a complex number. When σB − A is nonsingular for a shift σ, multiplication of (σB − A)^{−1} on both sides of (9) leads to
(10) (σB − A)^{−1} (zB − A) x = (σB − A)^{−1} b.
Note that (σB − A)^{−1}(zB − A) = I + (z − σ)(σB − A)^{−1}B. Given a matrix M, a vector b, and a nonnegative integer m, the Krylov subspace is defined as
(11) K_m(M, b) = span{b, Mb, M²b, …, M^{m−1}b}.
The shift-invariance property of Krylov subspaces says that
(12) K_m(αI + βM, b) = K_m(M, b),
where α and β are two scalars. Thus the Krylov subspace of (σB − A)^{−1}(zB − A) is the same as K_m((σB − A)^{−1}B, b), which is independent of z.
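The shift-invariance (12) is easy to verify numerically; the following sketch (toy data) compares orthonormal bases of the two subspaces:

```python
import numpy as np

# Numerical check (sketch) of shift-invariance:
# K_m(alpha*I + beta*M, b) equals K_m(M, b) for scalars alpha and beta != 0.
def krylov_basis(M, b, m):
    K = np.column_stack([np.linalg.matrix_power(M, j) @ b for j in range(m)])
    Q, _ = np.linalg.qr(K)                # orthonormal basis of the subspace
    return Q

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
b = rng.standard_normal(8)
m = 4

Q1 = krylov_basis(M, b, m)
Q2 = krylov_basis(0.7 * np.eye(8) - 2.3 * M, b, m)   # alpha*I + beta*M

# If the two subspaces coincide, projecting Q2 onto span(Q1) changes nothing.
err = np.linalg.norm(Q2 - Q1 @ (Q1.T @ Q2))
# err is at round-off level.
```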
2.2 Analysis of the preconditioners
Now we look at the connection between the spectra of the original pencil and of the preconditioned matrix (σB − A)^{−1}(zB − A). Assume that σB − A is nonsingular. Let λ be an eigenvalue of (1). Then
(13) μ = (λ − z)/(λ − σ)
is an eigenvalue of (σB − A)^{−1}(zB − A).
The spectrum of (1) might spread over the complex plane such that Krylov subspace based iterative methods may not converge. However, after the Cayley transformation, μ clusters around 1 as λ becomes large (see Fig. 1 for matrices A and B of Example 1 in Section 5). A similar result holds when B is singular. Note that when λ approaches σ, μ will be very large in magnitude. When λ approaches z, μ goes to zero. When λ is away from σ and z, μ is O(1). The key here is that the spectrum of (σB − A)^{−1}(zB − A) has a cluster of eigenvalues around 1 and only a few isolated eigenvalues, which favors fast convergence of Krylov subspace methods.
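The clustering can be illustrated numerically. The sketch below assumes the Cayley map μ = (λ − z)/(λ − σ) for an eigenvalue λ of (1), and uses a toy diagonal pencil:

```python
import numpy as np

# Sketch of the spectrum clustering after Cayley transformation (assumed map:
# an eigenvalue lam of the pencil is sent to mu = (lam - z)/(lam - sigma)).
# Toy diagonal pencil: 2 eigenvalues near the shifts and 48 large ones.
rng = np.random.default_rng(2)
lams = np.concatenate([[0.5, 1.0], 10.0 + 100.0 * rng.random(48)])
A = np.diag(lams)
B = np.eye(50)
sigma, z = 0.9, 0.5 + 0.1j

C = np.linalg.solve(sigma * B - A, z * B - A)    # Cayley-transformed matrix
mu = np.linalg.eigvals(C)
pred = (lams - z) / (lams - sigma)               # predicted images of the lams

# The 48 large eigenvalues are mapped into a tight cluster around 1,
# while the 2 small ones become isolated outliers.
n_clustered = np.sum(np.abs(mu - 1.0) < 0.1)
```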


2.3 Arnoldi Method for Linear Systems
The computation cost can be significantly reduced by exploiting the shift-invariance (12). Consider the orthogonal projection method for a linear system
(14) Mx = b.
Let the initial guess be x₀ = 0. One seeks an approximate solution x_m in the Krylov subspace K_m(M, b) of dimension m by imposing the Galerkin condition [10]
(15) b − M x_m ⊥ K_m(M, b).
The basic Arnoldi process (Algorithm 6.1 of [11]) is as follows.

Choose a vector v_1 of norm 1.
for j = 1, 2, …, m
    h_{i,j} = (M v_j, v_i), i = 1, …, j,
    w_j = M v_j − Σ_{i=1}^{j} h_{i,j} v_i,
    h_{j+1,j} = ||w_j||_2; if h_{j+1,j} = 0, stop,
    v_{j+1} = w_j / h_{j+1,j}.
end
Let V_m be the orthogonal matrix with column vectors v_1, …, v_m and H_m be the m × m upper Hessenberg matrix whose nonzero entries h_{i,j} are defined as above. From Proposition 6.6 of [11], one has, for the matrix M in the Arnoldi process above,
(16) M V_m = V_m H_m + h_{m+1,m} v_{m+1} e_mᵀ,
where e_m is the m-th canonical basis vector, such that V_mᵀ M V_m = H_m.
Let x_m = V_m y_m. The Galerkin condition (15) becomes
(17) V_mᵀ (b − M V_m y_m) = 0.
Since V_mᵀ M V_m = H_m (see Proposition 6.5 of [10]), the following holds: V_mᵀ b − H_m y_m = 0.
From the construction of V_m, V_mᵀ b = β e_1 with β = ||b||_2. Then
(18) y_m = H_m^{−1} (β e_1).
Consequently, the residual of the approximate solution x_m can be written as
(19) b − M x_m = −h_{m+1,m} (e_mᵀ y_m) v_{m+1}.
Due to the shift-invariance property, one has, with M = (σB − A)^{−1}B,
(20) (I + (z − σ)M) V_m = V_m (I + (z − σ)H_m) + (z − σ) h_{m+1,m} v_{m+1} e_mᵀ.
By imposing a Galerkin condition similar to (15) on the transformed system (10) with right-hand side b̃ = (σB − A)^{−1} b and v_1 = b̃/||b̃||, we have that
(21) V_mᵀ ( b̃ − (I + (z − σ)M) V_m y_m(z) ) = 0,
which implies
(22) y_m(z) = (I + (z − σ)H_m)^{−1} (β e_1), β = ||b̃||_2.
From (19), one has that
(23) b̃ − (I + (z − σ)M) x_m(z) = −(z − σ) h_{m+1,m} (e_mᵀ y_m(z)) v_{m+1}.
The matrix V_m is an N × m matrix and H_m is an m × m upper Hessenberg matrix with m ≪ N. Once V_m and H_m are constructed by the Arnoldi process, they can be used to solve (22) for different z's with the residual given by (23). The residual can be monitored at little extra cost.
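The reuse of one Arnoldi factorization across many shifts, cf. (22) and (23), can be sketched as follows (assumed convention: the transformed system is (I + (z − σ)M)x = b̃ with M = (σB − A)^{−1}B and b̃ = (σB − A)^{−1}b; dense toy data stands in for sparse factorizations):

```python
import numpy as np

# Sketch: run Arnoldi once on M = (sigma*B - A)^{-1} B with starting vector
# bt = (sigma*B - A)^{-1} b; each shifted system then reduces to a small
# m x m Hessenberg solve, as in (22).
def arnoldi(apply_M, b, m):
    n = b.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = apply_M(V[:, j])
        for i in range(j + 1):                    # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H, beta

rng = np.random.default_rng(3)
n, m, sigma = 200, 40, 0.3
A = np.diag(np.linspace(1.0, 100.0, n)) + 1e-3 * rng.standard_normal((n, n))
B = np.eye(n)
b = rng.standard_normal(n)

S = np.linalg.inv(sigma * B - A)                  # one factorization, reused below
bt = S @ b
V, H, beta = arnoldi(lambda v: S @ (B @ v), bt, m)

residuals = []
for z in [0.1 + 0.2j, 0.5, 0.5 + 0.5j]:           # several "quadrature points"
    y = np.linalg.solve(np.eye(m) + (z - sigma) * H[:m, :],
                        beta * np.eye(m)[:, 0])   # small m x m solve, cf. (22)
    x = V[:, :m] @ y                              # approximate solution of (zB - A) x = b
    residuals.append(np.linalg.norm(b - (z * B - A) @ x))
# All three residuals are tiny although Arnoldi ran only once.
```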
3 An Efficient Indicator
Another critical problem of RIM is how to define the indicator δ_S. As seen above, the indicator in [2] defined by (7) projects a random vector twice. One needs to solve linear systems with two different right-hand sides, i.e., Bf and B(Pf/||Pf||). Consequently, two Krylov subspaces, rather than one, are constructed for a single shift σ.
In this section, we propose a new indicator that avoids the construction of two Krylov subspaces. The indicator still needs to resolve the two problems (P1 and P2) in Section 1. The idea is to approximate Pf with different numbers of trapezoidal quadrature points, taking advantage of the Cayley transformation and Arnoldi's method discussed in the previous section.
Let P_n f be the approximation of Pf with n quadrature points. It is well known that the trapezoidal quadrature of a periodic function converges exponentially [14, Section 4.6.5], i.e.,
(24) ||P_n f − Pf|| ≤ C ρ^{−n} for some ρ > 1,
where C is a constant depending on f. If there are eigenvalues inside Γ, the spectral projection satisfies
(25) ||P_n f|| → ||Pf|| ≠ 0 as n → ∞.
If there is no eigenvalue inside Γ, then Pf = 0 and P_n f consists of pure quadrature error, so that for a large enough n
(26) ||P_{2n} f|| ≪ ||P_n f||.
The new indicator is set to be
(27) δ_S = ||P_{2n} f|| / ||P_n f||,
which is close to 1 in the first case and small in the second. A threshold value δ₀ is also needed to decide whether there exist eigenvalues in S or not. If δ_S > δ₀, S is said to be admissible, i.e., there exist eigenvalue(s) in S. The value of δ₀ is chosen based on numerical experimentation. Due to (25) and (26), a small n suffices, so the computation cost to evaluate the new indicator is modest.
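A sketch of this indicator (assumed form: the ratio of the norms of two quadrature approximations, δ_S = ||P_{2n} f|| / ||P_n f||, on toy data):

```python
import numpy as np

# Sketch of the ratio indicator: with eigenvalues inside the circle both
# norms approximate ||Pf|| != 0, so the ratio is near 1; with none, P_n f is
# pure quadrature error that shrinks rapidly as n doubles, so the ratio is
# well below 1.  Toy matrices; circles of radius 1 around two centers.
def P_n(A, B, f, c, r, n):
    out = np.zeros_like(f, dtype=complex)
    for j in range(n):
        z = c + r * np.exp(2j * np.pi * j / n)
        out += np.linalg.solve(z * B - A, B @ f) * 1j * (z - c)
    return out / (1j * n)

A = np.diag([1.0, 2.0, 5.0])
B = np.eye(3)
f = np.ones(3)

deltas = []
for c in [1.5, 3.8]:
    d = (np.linalg.norm(P_n(A, B, f, c, 1.0, 16))
         / np.linalg.norm(P_n(A, B, f, c, 1.0, 8)))
    deltas.append(d)
# c = 1.5 encloses the eigenvalues 1 and 2: the ratio is close to 1.
# c = 3.8 encloses no eigenvalue: the ratio is far below 1.
```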
4 The New Algorithm
Now we are ready to give the algorithm in detail. It starts with several shifts σ distributed uniformly in S. The associated Krylov subspaces are constructed and stored. For a quadrature point z_j, the algorithm first attempts to solve the linear system (9) using the Krylov subspace whose shift σ is closest to z_j. If the residual is larger than the given threshold, a Krylov subspace with a new shift is constructed, stored, and used to solve the linear system. Briefly speaking, the algorithm constructs a few Krylov subspaces with different σ's. These subspaces are then used to solve the linear systems for all quadrature points z_j. From (22) and (23), instead of solving a family of linear systems of size N, the algorithm solves linear systems of the reduced size m for most z_j's. This is the key idea to speed up RIM. We denote this improved version of RIM by RIMC (RIM with Cayley transformation).
Given a search region S and a normalized random vector f, we compute the indicator δ_S using (27). Without loss of generality, S is assumed to be a square. The number of quadrature points n in (27) is fixed in advance. If δ_S > δ₀, S is divided uniformly into subregions. The indicators of these regions are computed. This process continues until the size of the region is smaller than ε.

Algorithm RIMC:

RIMC(A, B, S, f, ε, τ, δ₀, m, n)

Input:
    A, B: matrices
    S: search region in the complex plane
    f: a random vector
    ε: precision
    τ: residual threshold
    δ₀: indicator threshold
    m: size of Krylov subspace
    n: number of quadrature points

Output:
    generalized eigenvalues inside S

Choose several shifts σ uniformly in S and construct the associated Krylov subspaces.
Compute δ_S using (27):
    For each quadrature point z_j:
        check if the linear system can be solved using the existing Krylov subspaces with residual less than τ;
        otherwise, choose a new σ, construct a new Krylov subspace, and use it to solve the linear system.
Decide if S contains eigenvalue(s):
    If δ_S < δ₀, exit.
    Compute the size h(S) of S.
    If h(S) > ε, uniformly partition S into subregions S_1, …, S_N.
        for j = 1, …, N
            call RIMC(A, B, S_j, f, ε, τ, δ₀, m, n)
        end
    Otherwise, output the center of S as an eigenvalue and exit.

5 Numerical Examples
In this section, RIMC (implemented in Matlab) is employed to compute all the eigenvalues in a given region. To the authors' knowledge, no existing eigensolver does exactly the same thing. We compare RIMC with 'eigs' in Matlab (IRAM: implicitly restarted Arnoldi method [12]). Although the comparison is not entirely fair to either method, it gives some idea of the performance of RIMC.
The matrices for Examples 1-5 come from a finite element discretization of the transmission eigenvalue problem [7, 3] using different mesh sizes. Therefore, the spectra of these problems are similar. For the Matlab function 'eigs(A,B,K,SIGMA)', 'K' and 'SIGMA' denote the number of eigenvalues to compute and the shift, respectively. For RIMC, the size of the Krylov subspace and the quadrature, residual, and indicator parameters are fixed throughout. All the examples are computed on a MacBook Pro with 16 GB memory and a 3 GHz Intel Core i7.
Example 1: The matrices A and B correspond to the coarsest mesh. For 'eigs', a shift inside the search region S is used. For this problem, the number of eigenvalues in S is known in advance, so 'K' is set accordingly. Note that RIMC does not need this information. The results are shown in Table 1. Both RIMC and 'eigs' compute the eigenvalues and the results are consistent. 'eigs' uses less time than RIMC.
RIMC  ‘eigs’  
Eigenvalues  3.994539018848445  3.994539018856096 
6.939719143800903  6.939719143804773  
6.935053985873570  6.935053985844678  
10.654665853490588  10.654665853441946  
10.658706024650019  10.658706024609756  
CPU time  0.284922s  0.247310s 
Example 2: The matrices A and B correspond to a finer mesh. For 'eigs', a shift inside S is used and, since the number of eigenvalues in S is again known in advance, 'K' is set accordingly. The results are shown in Table 2. Both methods compute the same eigenvalues and 'eigs' is faster.
RIMC  ‘eigs’  
Eigenvalues  23.803023938395199  23.803023938403236 
5.682304314876092i  5.682304314840053i  
24.737027497006540  24.737027497003453  
24.750959635036583  24.750959635022376  
25.278145187465789  25.278145187457707  
25.284501515028143  25.284501515036474  
CPU time  0.558687s  0.333513s 
Example 3: The matrices A and B correspond to an even finer mesh. There are eigenvalues in S. It is well known that the performance of 'eigs' depends highly on the shift. In Table 3, we show the time used by RIMC and by 'eigs' with shifts 5, 10, and 15. Notice that when the shift is not good, 'eigs' uses much more time. In practice, good shifts are not known in advance.
RIMC  ‘eigs’ shift=5  ‘eigs’ shift=10  ‘eigs’ shift=15  

CPU time  2.571800s  0.590186s  7.183679s  0.392902s
Example 4: We consider a larger problem on the finest mesh. There are eigenvalues in S. The results are in Table 4. This example, again, shows that for larger problems without any spectral information, the performance of RIMC is quite stable and consistent. In contrast, the performance of 'eigs' varies a lot with the shift.
RIMC  ‘eigs’ shift=5  ‘eigs’ shift=10  

CPU time  104.228413s  1696.703477s  272.506573s 
Example 5: This example demonstrates the effectiveness and robustness of the new indicator. The same matrices as in Example 3 are used. Consider three regions: the first has one eigenvalue inside, the second has two eigenvalues inside, and the third contains no eigenvalue. Table 5 shows the indicators of these three regions computed using (27). It is seen that the indicator clearly separates the regions with eigenvalues from the region without.
number of quadrature points  region 1  region 2  region 3
4  0.021036161440  0.000256531878  0.001173702609
8  0.020981705584  0.000258504259  0.000044238403
indicator (27)  0.997411  0.992370  0.037691
Table 6 shows the means, minima, maxima, and standard deviations of indicators of these three regions computed using random vectors. The indicators are consistent for different random vectors.
region  mean  min.  max.  std. dev.
region 1  0.99848393687  0.66250246918  1.43123449889  0.08740952445
region 2  0.99926772105  0.92600650392  1.14648387384  0.01788832121
region 3  0.03763601782  0.03734608324  0.03775912970  0.00010228556
6 Conclusions and Future Work
The purpose of this paper is to compute (all) the eigenvalues of a large sparse non-Hermitian problem in a given region. We propose a new eigensolver RIMC, which is an improved version of the recursive integral method based on spectral projection. RIMC uses the Cayley transformation and Arnoldi's method to reduce the computation cost.
To the authors’ knowledge, RIMC is the only eigensolver for this particular purpose. As we mentioned, the comparison of RIMC and ‘eigs’ is unfair to both methods. However, the numerical results do show that RIMC is effective and has the potential to treat large scale problems.
Currently, the algorithm is implemented in Matlab. A parallel version using C++ is under development. For the time being, RIMC only computes eigenvalues, which is sufficient for some applications. However, adding a component to compute the associated eigenvectors is necessary for other applications. It would also be useful to provide the multiplicity of an eigenvalue. These are future works to make RIMC a robust and efficient eigensolver.
References
[1] D.N. Arnold, G. David, D. Jerison, S. Mayboroda and M. Filoche, Effective confining potential of quantum states in disordered media. Physical Review Letters, 116(5), 056602, 2016.
[2] R. Huang, A. Struthers, J. Sun and R. Zhang, Recursive integral method for transmission eigenvalues. Journal of Computational Physics, 327, 830-840, 2016.
[3] J. Sun, Iterative methods for transmission eigenvalues. SIAM J. Numer. Anal., 49(5), 1860-1874, 2011.
[4] E. Polizzi, Density-matrix-based algorithms for solving eigenvalue problems. Phys. Rev. B, 79, 115112, 2009.
[5] T. Sakurai and H. Sugiura, A projection method for generalized eigenvalue problems using numerical integration. J. Comput. Appl. Math., 159(1), 119-128, 2003.
[6] G.H. Golub and H.A. van der Vorst, Eigenvalue computation in the 20th century. J. Comput. Appl. Math., 123(1-2), 35-65, 2000.
[7] X. Ji, J. Sun and T. Turner, A mixed finite element method for Helmholtz transmission eigenvalues. ACM Trans. Math. Software, 38(4), Article 29, 2012.
[8] S. Boccaletti, V. Latora, Y. Moreno, M. Chavez and D.U. Hwang, Complex networks: structure and dynamics. Physics Reports, 424(4-5), 175-308, 2006.
[9] K. Meerbergen, The solution of parametrized symmetric linear systems. SIAM J. Matrix Anal. Appl., 24(4), 1038-1059, 2003.
[10] Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd Ed., Society for Industrial and Applied Mathematics, 2003.
[11] Y. Saad, Numerical Methods for Large Eigenvalue Problems, 2nd Ed., Society for Industrial and Applied Mathematics, 2011.
[12] R.B. Lehoucq, D.C. Sorensen and C. Yang, ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1998.
[13] J. Sun and A. Zhou, Finite Element Methods for Eigenvalue Problems, Chapman and Hall/CRC, 2016.
[14] P.J. Davis and P. Rabinowitz, Methods of Numerical Integration, 2nd Ed., Academic Press, Orlando, FL, 1984.