Implicitly Restarted Generalized Second-order Arnoldi Type Algorithms for the Quadratic Eigenvalue Problem

Supported by the National Basic Research Program of China 2011CB302400 and the National Science Foundation of China (Nos. 11071140, 11201020 and 11371219).

Abstract

We investigate the generalized second-order Arnoldi (GSOAR) method, a generalization of the SOAR method proposed by Bai and Su [SIAM J. Matrix Anal. Appl., 26 (2005): 640–659], and the refined GSOAR (RGSOAR) method for the quadratic eigenvalue problem (QEP). The two methods use the GSOAR procedure to generate an orthonormal basis of a given generalized second-order Krylov subspace, and with such a basis they project the QEP onto the subspace and compute the Ritz pairs and the refined Ritz pairs, respectively. We develop implicitly restarted GSOAR and RGSOAR algorithms, in which we propose certain exact and refined shifts for respective use within the two algorithms. Numerical experiments on real-world problems illustrate the efficiency of the restarted algorithms and the superiority of the restarted RGSOAR to the restarted GSOAR. The experiments also demonstrate that both IGSOAR and IRGSOAR generally perform much better than the implicitly restarted Arnoldi method applied to the corresponding linearization problems, in terms of accuracy and computational efficiency.

Keywords. QEP, GSOAR procedure, GSOAR method, RGSOAR method, Ritz vector, refined Ritz vector, implicit restart, exact shifts, refined shifts.

AMS Subject Classification (2000).  65F15, 15A18

1 Introduction

Consider the large QEP

(1)  $Q(\lambda)x = (\lambda^2 M + \lambda D + K)x = 0$

with $\|x\| = 1$, where $M, D, K$ are $n \times n$ matrices with $M$ nonsingular and $\|\cdot\|$ is the 2-norm of a vector or matrix. Such a QEP arises in a wide variety of scientific and engineering applications [4, 25]. One is often interested in a few largest eigenvalues in magnitude or a few eigenvalues nearest to a target in the complex plane. One of the commonly used approaches is to linearize the QEP and then solve the linearized problem. There are a number of linearizations available [25], of which a commonly used one is to transform (1) to the generalized eigenvalue problem

(2)  $\begin{bmatrix} -D & -K \\ I & 0 \end{bmatrix} \begin{bmatrix} \lambda x \\ x \end{bmatrix} = \lambda \begin{bmatrix} M & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} \lambda x \\ x \end{bmatrix},$

which is equivalent to the standard linear eigenvalue problem

(3)  $L \begin{bmatrix} \lambda x \\ x \end{bmatrix} = \lambda \begin{bmatrix} \lambda x \\ x \end{bmatrix},$

where $L = \begin{bmatrix} A & B \\ I & 0 \end{bmatrix}$ with $A = -M^{-1}D$ and $B = -M^{-1}K$. Clearly, (3) corresponds to the monic QEP

(4)  $(\lambda^2 I - \lambda A - B)x = 0.$

The mathematical theory on (2) and (3) has been well established, and a number of numerical methods are available for solving them [1, 6, 22, 24, 26]. One drawback of linearizations is that general numerical methods do not take the structures of (2) and (3) into account, which makes computations expensive and may cause the approximate eigenpairs to lose their physical structures.

To improve the computational efficiency of the Arnoldi method that is directly applied to some linearization problem of QEP (1), Meerbergen [19] proposes a quadratic Arnoldi (Q-Arnoldi) method, which exploits the structure of the linearization problem to reduce the memory requirements by about a half and can compute a partial Schur form of the linearization problem with respect to the structure of the Schur vectors. He shows that the Q-Arnoldi method can be implicitly restarted. Some similar methods have been proposed very recently in [20, 28]. All of them are special Arnoldi methods applied to certain linearization problems of QEP (1); that is, each of them projects the corresponding linearization problem rather than QEP (1) or (4) onto some Krylov subspace whose orthonormal basis is generated efficiently by a special Arnoldi process. For these methods, the implicit restarting technique [23] is easily applied.

In this paper, we are interested in projection methods that work on QEP (1) directly rather than on its linearizations; such methods preserve some important structures of it. The second-order Arnoldi (SOAR) method proposed by Bai and Su [2] falls into this category and is a Rayleigh–Ritz method. They propose a SOAR procedure that computes an orthonormal basis of a second-order Krylov subspace generated by the matrices $A$ and $B$ simultaneously. The SOAR method then projects (1) onto this subspace and computes the Ritz pairs to approximate the desired eigenpairs of (1). A unified and general convergence theory has recently been established in [9] for the Rayleigh–Ritz method and the refined Rayleigh–Ritz method for the QEP, generalizing some of the known results on the Rayleigh–Ritz method for the linear eigenvalue problem [24]. It is proved in [9] that for a sequence of projection subspaces containing increasingly accurate approximations to a desired eigenvector there is a Ritz value that converges to the desired eigenvalue unconditionally, while the corresponding Ritz vector converges only conditionally and may fail to converge. Alternatively, we can compute a refined Ritz vector whose unconditional convergence is guaranteed.

In the spirit of the Hessenberg-triangular decomposition of a matrix pencil, which reduces to the Hessenberg decomposition of a single matrix, Huang et al. [10] propose a semiorthogonal generalized Arnoldi (SGA) procedure for the matrix pencil resulting from some linearization of the QEP. The SGA method first generates an SGA decomposition and then computes the Rayleigh–Ritz approximations of QEP (1) with respect to the subspace defined by an orthonormal basis generated by the SGA decomposition. To overcome the possible non-convergence of Ritz vectors obtained by the SGA method, they apply the refined projection principle [11] to propose a refined SGA (RSGA) method that computes better refined Ritz vectors. On the basis of implicitly shifted QZ iterations, they have developed the implicitly restarted SGA and RSGA algorithms, abbreviated as IRSGA and IRRSGA, with certain exact shifts and refined shifts suggested, respectively.

One disadvantage of SOAR is that the implicit restarting technique is not directly applicable. In order to make implicit restarting applicable, Otto [21] proposes a modified SOAR procedure that replaces the original special starting vector by a general one. Under the assumption that there is no deflation in the modified SOAR procedure, implicit restarting is directly adapted to this procedure. However, it is hard to interpret and understand the modified SOAR method, unlike the SOAR method, whose convergence is related to that of the Arnoldi method for the linear eigenvalue problem. Wei et al. [3, 29] make a similar modification and propose a generalized second-order Arnoldi (GSOAR) method and its refined variants for solving the QEP and higher degree polynomial eigenvalue problems. Based on the explicit restarting scheme of the Arnoldi algorithm for the linear eigenvalue problem, they have developed explicitly restarted generalized Krylov subspace algorithms. Deflation and breakdown may take place in the SOAR and modified SOAR procedures, but they have completely different consequences [2, 21]: it is proved there that the SOAR method finds some exact eigenpairs of QEP (1) if breakdown occurs, but generally no eigenpair is found when deflation takes place. A remedy strategy is given in [2] to treat deflation so as to continue the SOAR procedure. Similarly, deflation may occur in the GSOAR procedure, but it is not mentioned in [3, 29].

Similar to the modified SOAR procedure, implicit restarting is directly adapted to the GSOAR procedure, but it is usable only conditionally and requires that no deflation occur during implicit restarts. Once deflation takes place, implicit restarting fails completely. Therefore, one must cure deflations, so that implicit restarting can be applied unconditionally. For the success and overall performance of implicitly restarted GSOAR type algorithms, just as for the implicitly restarted Krylov subspace algorithms for the linear eigenvalue problem and SVD problems [8, 12, 13, 15, 16, 18], a proper selection of the shifts turns out to be crucial. Otto [21] has proposed certain exact shifts for his implicitly restarted modified SOAR algorithm, but they can cause convergence problems since some important aspects of the QEP are ignored when determining the shifts.

In this paper, we are concerned with the GSOAR and RGSOAR methods and their implicit restarting. We will explore more properties and features of them, and consider the efficient and reliable computation of refined Ritz vectors. Particularly, we show that there is a close relationship between the subspace generated by the GSOAR procedure and a standard Krylov subspace. With the help of this result, we can interpret the convergence of the GSOAR type methods. Our main concern is a reasonable selection of the shifts when implicitly restarting the GSOAR and RGSOAR algorithms. We advance certain exact shifts, different from those in [21], and refined shifts for respective use within the implicitly restarted GSOAR and RGSOAR algorithms. The refined shifts are based on the refined Ritz vectors and are theoretically better than the exact shifts. Unlike the implicitly restarted algorithms for the linear eigenvalue problem, there are now more exact and refined shift candidates than the number of shifts allowed. We show how to reasonably select the desired shifts among them. We present an efficient algorithm to compute the exact and refined shift candidates reliably. In addition, we propose an effective approach to cure deflations in implicit restarts, so that implicit restarting is usable unconditionally.

The rest of this paper is organized as follows. In Section 2 we review the SOAR and GSOAR procedures, present some of their properties, and describe the SOAR and GSOAR methods. In Section 3, we describe the RGSOAR method and discuss some of its practical issues. In Section 4, we develop implicitly restarted GSOAR and RGSOAR algorithms with the exact and refined shifts suggested. We present an effective approach to treat deflations in implicit restarts. In Section 5, we report numerical experiments to illustrate the efficiency of the restarted algorithms and the superiority of the refined algorithm. We also compare our algorithms with IRSGA and IRRSGA [10], demonstrating that ours perform better. More importantly, we compare our algorithms with the Matlab function eigs, the implicitly restarted Arnoldi method applied to a commonly used linearization problem, showing that ours are generally far superior to eigs in terms of accuracy and computational efficiency. Finally, we conclude the paper in Section 6.

Throughout the paper, we denote by $\|\cdot\|$ the spectral norm of a matrix and the 2-norm of a vector, by $I$ the identity matrix with the order clear from the context, by the superscripts $T$ and $H$ the transpose and conjugate transpose of a vector or matrix, by $\mathbb{C}^n$ the complex vector space of dimension $n$ and by $\mathbb{C}^{n \times k}$ the set of $n \times k$ matrices. We denote by $\sigma_{\min}(\cdot)$ the smallest singular value of a matrix and by the Matlab notation $X(i\!:\!j, k\!:\!l)$ the submatrix consisting of rows $i$ to $j$ and columns $k$ to $l$ of $X$.

2 The SOAR and GSOAR methods

Bai and Su [2] introduce the following concepts.

Definition 1.

Let $A$ and $B$ be matrices of order $n$ and $r_0 \neq 0$ a vector, and define

$r_1 = A r_0, \qquad r_j = A r_{j-1} + B r_{j-2} \quad \text{for } j \geq 2.$

Then $r_0, r_1, \ldots, r_{k-1}$ is called a second-order Krylov sequence based on $A$ and $B$, and $\mathcal{G}_k(A, B; r_0) = \mathrm{span}\{r_0, r_1, \ldots, r_{k-1}\}$ a $k$-th second-order Krylov subspace.
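To make the recurrence concrete, the following minimal MATLAB sketch generates the second-order Krylov sequence of Definition 1; the function name and interface are ours, not from [2].

    % Sketch: generate the second-order Krylov sequence r_0, ..., r_{k-1}
    % for n-by-n matrices A, B and a nonzero starting vector r0.
    function R = so_krylov_seq(A, B, r0, k)
      n = length(r0);
      R = zeros(n, k);
      R(:, 1) = r0;                           % r_0
      if k >= 2
        R(:, 2) = A*r0;                       % r_1 = A r_0
      end
      for j = 3:k
        R(:, j) = A*R(:, j-1) + B*R(:, j-2);  % r_j = A r_{j-1} + B r_{j-2}
      end
    end

In floating-point arithmetic the columns of R quickly become numerically dependent, which is why the SOAR procedure below orthonormalizes the sequence as it is generated.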

Note that (3) is a linearization of (4). Define the matrix

(5)  $L = \begin{bmatrix} A & B \\ I & 0 \end{bmatrix}$

of order $2n$. For a $2n$-dimensional starting vector $v_1$, we can generate a Krylov subspace $\mathcal{K}_k(L, v_1) = \mathrm{span}\{v_1, L v_1, \ldots, L^{k-1} v_1\}$. Particularly, if we choose $v_1 = \begin{bmatrix} r_0 \\ 0 \end{bmatrix}$, we have

(6)  $\mathcal{K}_k(L, v_1) = \mathrm{span}\left\{\begin{bmatrix} r_0 \\ 0 \end{bmatrix}, \begin{bmatrix} r_1 \\ r_0 \end{bmatrix}, \ldots, \begin{bmatrix} r_{k-1} \\ r_{k-2} \end{bmatrix}\right\}.$

We observe the fundamental relation

(7)  $\mathcal{K}_k(L, v_1) \subseteq \mathcal{V}_k,$

where $\mathcal{V}_k$ is the subspace generated by the vector set

$\left\{\begin{bmatrix} u \\ w \end{bmatrix} : u, w \in \mathcal{G}_k(A, B; r_0)\right\}.$

Due to the equivalence of QEP (4) and the eigenproblem of $L$, relation (7) shows that if the eigenvector $\begin{bmatrix} \lambda x \\ x \end{bmatrix}$ of $L$ is contained in $\mathcal{K}_k(L, v_1)$, then the eigenvector $x$ of QEP (4) is contained in $\mathcal{G}_k(A, B; r_0)$. By continuity, if there is a good approximation to $\begin{bmatrix} \lambda x \\ x \end{bmatrix}$ in $\mathcal{K}_k(L, v_1)$, then there must be a good approximation to $x$ contained in $\mathcal{G}_k(A, B; r_0)$.

Bai and Su [2] propose the following procedure for computing an orthonormal basis $q_1, q_2, \ldots, q_k$ of $\mathcal{G}_k(A, B; r_0)$ and an auxiliary vector sequence $p_1, p_2, \ldots, p_k$ such that the vectors $\begin{bmatrix} q_j \\ p_j \end{bmatrix}$ generate $\mathcal{K}_k(L, v_1)$.

Algorithm 1.

SOAR procedure

1: $q_1 = r_0/\|r_0\|$, $p_1 = 0$

2: for $j = 1, 2, \ldots, k$ do

3:   $r = A q_j + B p_j$

4:   $s = q_j$

5:   for $i = 1, 2, \ldots, j$ do

6:     $t_{ij} = q_i^H r$

7:     $r = r - q_i t_{ij}$

8:     $s = s - p_i t_{ij}$

9:   end for

10:  $t_{j+1,j} = \|r\|$

11:  if $t_{j+1,j} = 0$, stop

12:  $q_{j+1} = r/t_{j+1,j}$

13:  $p_{j+1} = s/t_{j+1,j}$

14: end for
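For concreteness, here is a minimal MATLAB sketch of Algorithm 1; all names are ours, and a robust code would refine (reorthogonalize) the classical Gram–Schmidt loop.

    % SOAR procedure (sketch): Q has orthonormal columns spanning
    % G_k(A,B;r0), P is the auxiliary sequence, T holds the t_{ij}.
    function [Q, P, T] = soar(A, B, r0, k)
      n = length(r0);
      Q = zeros(n, k+1); P = zeros(n, k+1); T = zeros(k+1, k);
      Q(:, 1) = r0/norm(r0);               % q_1 = r_0/||r_0||, p_1 = 0
      for j = 1:k
        r = A*Q(:, j) + B*P(:, j);
        s = Q(:, j);
        for ii = 1:j                       % Gram-Schmidt against q_1,...,q_j
          T(ii, j) = Q(:, ii)'*r;
          r = r - T(ii, j)*Q(:, ii);
          s = s - T(ii, j)*P(:, ii);       % same combination applied to the p's
        end
        T(j+1, j) = norm(r);
        if T(j+1, j) == 0                  % deflation or breakdown: stop
          Q = Q(:, 1:j); P = P(:, 1:j); T = T(1:j+1, 1:j);
          return
        end
        Q(:, j+1) = r/T(j+1, j);
        P(:, j+1) = s/T(j+1, j);
      end
    end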

The following basic results hold for this algorithm; see [2].

Theorem 1.

Define $Q_k = [q_1, q_2, \ldots, q_k]$, $P_k = [p_1, p_2, \ldots, p_k]$ and $T_k = (t_{ij})$ the $k \times k$ upper Hessenberg matrix of the orthogonalization coefficients. If Algorithm 1 does not stop before step $k$, then we have

(8)  $\mathrm{span}\{q_1, q_2, \ldots, q_k\} = \mathcal{G}_k(A, B; r_0)$

and the $k$-step SOAR decomposition

(9)  $L \begin{bmatrix} Q_k \\ P_k \end{bmatrix} = \begin{bmatrix} Q_{k+1} \\ P_{k+1} \end{bmatrix} \widetilde T_k,$

where $\widetilde T_k = \begin{bmatrix} T_k \\ t_{k+1,k} e_k^T \end{bmatrix}$ with $e_k$ the $k$-th column of the identity matrix.

Before proceeding, we introduce the following definition.

Definition 2.

[2] If $r_0, r_1, \ldots, r_j$ are linearly dependent but the vectors $\begin{bmatrix} r_0 \\ 0 \end{bmatrix}, \begin{bmatrix} r_1 \\ r_0 \end{bmatrix}, \ldots, \begin{bmatrix} r_j \\ r_{j-1} \end{bmatrix}$ are not, we call this situation deflation; if both sequences are linearly dependent at step $j$, we call this situation breakdown.

According to Definition 2, if Algorithm 1 stops prematurely at step $j < k$, then either deflation or breakdown must occur at that step. Deflation means that $r = 0$ but $s \neq 0$, so the Arnoldi process on $L$ does not terminate at step $j$. As a result, when deflation occurs at step $j$, $\mathcal{K}_j(L, v_1)$ does not contain any exact eigenvector of $L$, which, from (7), implies that $\mathcal{G}_j(A, B; r_0)$ may not contain any exact eigenvector of QEP (1). Therefore, deflation must be remedied to continue the algorithm.

Bai and Su [2] present the following algorithm that detects and remedies deflation.

Algorithm 2.

SOAR procedure with deflation remedy

1: $q_1 = r_0/\|r_0\|$, $p_1 = 0$

2: for $j = 1, 2, \ldots, k$ do

3:   $r = A q_j + B p_j$

4:   $s = q_j$

5:   for $i = 1, 2, \ldots, j$ do

6:     $t_{ij} = q_i^H r$

7:     $r = r - q_i t_{ij}$

8:     $s = s - p_i t_{ij}$

9:   end for

10:  $t_{j+1,j} = \|r\|$

11:  if $t_{j+1,j} = 0$

12:    if $s = 0$ % breakdown

13:      break

14:    else % deflation

15:      reset $t_{j+1,j} = 1$

16:      $q_{j+1} = 0$

17:      $p_{j+1} = s$

18:    end if

19:  else

20:    $q_{j+1} = r/t_{j+1,j}$

21:    $p_{j+1} = s/t_{j+1,j}$

22:  end if

23: end for

In the procedure, if deflation occurs, we simply set $t_{j+1,j}$ to one and take $q_{j+1} = 0$ and $p_{j+1} = s$. To decide if $t_{j+1,j} = 0$ numerically, the Gram–Schmidt orthogonalization with refinement is used [2, 21]. When deflation occurs, the nonzero vectors in the sequence $q_1, q_2, \ldots$ are still orthonormal and span the second-order Krylov subspace, whose dimension is then smaller than $k$. We refer the reader to Bai and Su [2] for details.

We point out that Theorem 1 remains true for Algorithm 2, except that there are zero columns in $Q_{k+1}$ when deflation occurs.

It is easily checked that a serious disadvantage of the SOAR procedure is that the implicit restarting technique is not applicable, since after restarting the updated $p_1$ is no longer zero. Several researchers have proposed replacing $p_1 = 0$ in Algorithms 1–2 by a nonzero vector [3, 21, 29]. This leads to the following generalized second-order Krylov sequence and subspace; see [3, 29].

Definition 3.

Let $A$ and $B$ be $n \times n$ matrices and $u_1, u_2$ nonzero vectors, and define

$r_0 = u_1, \qquad r_1 = A r_0 + B u_2, \qquad r_j = A r_{j-1} + B r_{j-2} \quad \text{for } j \geq 2.$

Then $r_0, r_1, \ldots, r_{k-1}$ is called a generalized second-order Krylov sequence based on $A$, $B$ and $u_1, u_2$, and $\mathcal{G}_k(A, B; u_1, u_2) = \mathrm{span}\{r_0, r_1, \ldots, r_{k-1}\}$ the $k$-th generalized second-order Krylov subspace.

Obviously, $\mathcal{G}_k(A, B; u_1, 0) = \mathcal{G}_k(A, B; u_1)$. For a general $v_1 = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}$, it is seen that

(10)  $\mathcal{K}_k(L, v_1) = \mathrm{span}\left\{\begin{bmatrix} r_0 \\ u_2 \end{bmatrix}, \begin{bmatrix} r_1 \\ r_0 \end{bmatrix}, \ldots, \begin{bmatrix} r_{k-1} \\ r_{k-2} \end{bmatrix}\right\}.$

But different from (7), since $u_2$ is a general vector, the fundamental relation now becomes

(11)  $\mathcal{K}_k(L, v_1) \subseteq \mathcal{W}_k,$

where $\mathcal{W}_k$ is the subspace generated by the vector set

$\left\{\begin{bmatrix} u \\ w \end{bmatrix} : u \in \mathcal{G}_k(A, B; u_1, u_2),\ w \in \mathcal{G}_k(A, B; u_1, u_2) + \mathrm{span}\{u_2\}\right\}.$

Note that if the eigenvector $\begin{bmatrix} \lambda x \\ x \end{bmatrix}$ of $L$ is contained in $\mathcal{K}_k(L, v_1)$ then it also lies in the subspace $\mathcal{W}_k$. If this is the case, (11) shows that the eigenvector $x$ of QEP (1) is contained in $\mathcal{G}_k(A, B; u_1, u_2)$. More generally, by continuity, it is deduced from the above that if $\mathcal{K}_k(L, v_1)$ has a good approximation to $\begin{bmatrix} \lambda x \\ x \end{bmatrix}$ then $\mathcal{W}_k$ has one too, which, in turn, means that there must be a good approximation to $x$ contained in $\mathcal{G}_k(A, B; u_1, u_2)$.

Analogous to Algorithm 2, we can present a GSOAR procedure, i.e., Algorithm 3, that remedies deflation and generates the vector sequence $q_1, q_2, \ldots, q_{k+1}$, whose nonzero members form an orthonormal basis of $\mathcal{G}_{k+1}(A, B; u_1, u_2)$. We point out that the GSOAR procedure in [3, 29] is the same as Algorithm 1 except that $p_1 = 0$ in line 1 is replaced by a general vector.

Algorithm 3.

GSOAR procedure with deflation remedy

1: $q_1 = u_1/\|u_1\|$, $p_1 = u_2/\|u_2\|$

2: for $j = 1, 2, \ldots, k$ do

3:   $r = A q_j + B p_j$

4:   $s = q_j$

5:   for $i = 1, 2, \ldots, j$ do

6:     $t_{ij} = q_i^H r$

7:     $r = r - q_i t_{ij}$

8:     $s = s - p_i t_{ij}$

9:   end for

10:  $t_{j+1,j} = \|r\|$

11:  if $t_{j+1,j} = 0$

12:    if $s = 0$ % breakdown

13:      break

14:    else % deflation

15:      reset $t_{j+1,j} = 1$

16:      $q_{j+1} = 0$

17:      $p_{j+1} = s$

18:    end if

19:  else

20:    $q_{j+1} = r/t_{j+1,j}$

21:    $p_{j+1} = s/t_{j+1,j}$

22:  end if

23: end for
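A matching MATLAB sketch of Algorithm 3 is given below; it differs from the SOAR sketch above only in the two starting vectors and in the deflation branch. The exact zero tests stand in for the tolerance-based tests (Gram–Schmidt with refinement) used in practice, and all names are ours.

    % GSOAR procedure with deflation remedy (sketch).
    function [Q, P, T] = gsoar(A, B, u1, u2, k)
      n = length(u1);
      Q = zeros(n, k+1); P = zeros(n, k+1); T = zeros(k+1, k);
      Q(:, 1) = u1/norm(u1);               % q_1
      P(:, 1) = u2/norm(u2);               % general nonzero p_1 (p_1 = 0 in SOAR)
      for j = 1:k
        r = A*Q(:, j) + B*P(:, j);
        s = Q(:, j);
        for ii = 1:j
          T(ii, j) = Q(:, ii)'*r;
          r = r - T(ii, j)*Q(:, ii);
          s = s - T(ii, j)*P(:, ii);
        end
        T(j+1, j) = norm(r);
        if T(j+1, j) == 0
          if norm(s) == 0
            break                          % breakdown
          else                             % deflation: cure it and continue
            T(j+1, j) = 1;
            Q(:, j+1) = zeros(n, 1);       % record a zero column in Q
            P(:, j+1) = s;
          end
        else
          Q(:, j+1) = r/T(j+1, j);
          P(:, j+1) = s/T(j+1, j);
        end
      end
    end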

It is direct to justify that Theorem 1 holds for this algorithm with a general $p_1$, that is, we have

$\mathrm{span}\{q_1, q_2, \ldots, q_k\} = \mathcal{G}_k(A, B; u_1, u_2)$

with $u_1$ and $u_2$ normalized, and the $k$-step GSOAR decomposition (9), if the algorithm does not break down before step $k$.

Otto [21] defines the modified second-order Krylov sequence and the modified second-order Krylov subspace of dimension $k$ generated by that vector sequence. After the orthonormal $q_1, \ldots, q_j$ are generated, he orthonormalizes the next sequence vector against them to get $q_{j+1}$. This is called the modified SOAR procedure. A disadvantage of it is that there is no compact relationship like (7) or (11). So it is hard to interpret such a modified subspace and to establish definitive results on breakdown and deflation.

The GSOAR method is a Rayleigh–Ritz method: it projects the large QEP (1) onto $\mathcal{G}_k(A, B; u_1, u_2)$ by imposing the Galerkin condition $(\theta^2 M + \theta D + K) Q_k g \perp \mathcal{G}_k(A, B; u_1, u_2)$, leading to the $k$-dimensional QEP

(12)  $(\theta^2 M_k + \theta D_k + K_k) g = 0$

with $\|g\| = 1$, where $M_k = Q_k^H M Q_k$, $D_k = Q_k^H D Q_k$ and $K_k = Q_k^H K Q_k$. Let the $(\theta_j, g_j)$ be the eigenpairs of (12). Then the GSOAR method uses the Rayleigh–Ritz pairs $(\theta_j, Q_k g_j)$ to approximate some of the eigenpairs of (1). We comment that if deflation occurs then $Q_k$ consists of only the nonzero orthonormal vectors and the dimension of (12) is smaller than $k$.
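In MATLAB, this Rayleigh–Ritz extraction can be sketched as follows, where Qk holds the (nonzero) orthonormal basis vectors and M, D, K are the matrices of (1); polyeig solves the small dense QEP (12). The variable names are ours.

    % Project QEP (1) onto span(Qk) and compute the Ritz pairs (sketch).
    Mk = Qk'*(M*Qk); Dk = Qk'*(D*Qk); Kk = Qk'*(K*Qk);  % projected matrices
    [G, theta] = polyeig(Kk, Dk, Mk);   % (theta^2*Mk + theta*Dk + Kk)*g = 0
    Y = Qk*G;                           % Ritz vectors y_j = Qk*g_j
    Y = Y./vecnorm(Y);                  % normalize to unit length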

3 A refined GSOAR (RGSOAR) method

As is known, the Rayleigh–Ritz method may fail to converge when computing eigenvectors of the linear eigenvalue problem and the QEP; see [17] and [9], respectively. To correct this deficiency, a refined projection principle is proposed in [11] (see also [24, 26]) for the linear eigenvalue problem, which leads to the refined Rayleigh–Ritz method. The refined method extracts the best approximate eigenvectors from a given subspace in the sense that the residuals formed with certain available approximate eigenvalues are minimized in the 2-norm over the subspace. A refined GSOAR (RGSOAR) method has been proposed in [3, 29]. We next describe it and give more details on some practical issues.

Suppose that we have computed the Ritz values $\theta_1, \theta_2, \ldots, \theta_{2k}$ by the GSOAR method and select $i$ of them, $\mu_1, \mu_2, \ldots, \mu_i$, to approximate desired eigenvalues of (1). For each chosen $\mu = \mu_j$, the RGSOAR method seeks a unit length vector $\hat u \in \mathcal{G}_k(A, B; u_1, u_2)$ satisfying the optimal requirement

(13)  $\|(\mu^2 M + \mu D + K) \hat u\| = \min_{w \in \mathcal{G}_k(A, B; u_1, u_2),\, \|w\| = 1} \|(\mu^2 M + \mu D + K) w\|$

and uses it as an approximate eigenvector, called the refined Ritz vector. The pairs $(\mu_j, \hat u_j)$ are also called the refined Rayleigh–Ritz approximations. Since the (nonzero) columns of $Q_k$ form an orthonormal basis of $\mathcal{G}_k(A, B; u_1, u_2)$, (13) amounts to seeking a unit length vector $\hat z$ such that $\hat u = Q_k \hat z$ with

(14)  $\hat z = \arg\min_{\|z\| = 1} \|(\mu^2 M + \mu D + K) Q_k z\|,$

the right singular vector of the matrix $(\mu^2 M + \mu D + K) Q_k$ associated with its smallest singular value $\sigma_{\min}$. However, the direct computation of its SVD may be expensive. Precisely, assume that the matrix is real and $n \gg k$. Then the cost of Golub–Reinsch's SVD algorithm is about $4nk^2 + 8k^3$ flops, and that of Chan's SVD algorithm is about $2nk^2 + 11k^3$ flops [6, p. 254]. Keep in mind that $i$ is the number of the desired eigenpairs. The total costs are then $i(4nk^2 + 8k^3)$ and $i(2nk^2 + 11k^3)$ flops, respectively.

The first author in [14] has proposed a cross-product matrix-based algorithm for computing the SVD of a matrix, which can be much more efficient than the above standard SVD algorithms. Applying the algorithm to (13), we form the cross-product matrix

$H(\mu) = \left((\mu^2 M + \mu D + K) Q_k\right)^H (\mu^2 M + \mu D + K) Q_k,$

which is Hermitian positive (semi-)definite. The vector $\hat z$ is then the eigenvector of $H(\mu)$ associated with its smallest eigenvalue $\sigma_{\min}^2$. We compute the eigensystem of $H(\mu)$ by the QR algorithm to get $\hat z$. In finite precision arithmetic, the computed eigenvector is an approximation to $\hat z$ with accuracy $O(\epsilon)$ provided that the second smallest singular value of $(\mu^2 M + \mu D + K) Q_k$ is not very close to the smallest one, where $\epsilon$ is the machine precision.

Let us now look at the computational cost of this algorithm. Define

$W_M = M Q_k, \qquad W_D = D Q_k, \qquad W_K = K Q_k,$

which are available when forming the projected QEP and need no extra cost. Then

(15)  $H(\mu) = |\mu|^4 W_M^H W_M + |\mu|^2 \bar{\mu}\, W_M^H W_D + \bar{\mu}^2 W_M^H W_K + |\mu|^2 \mu\, W_D^H W_M + |\mu|^2 W_D^H W_D + \bar{\mu}\, W_D^H W_K + \mu^2 W_K^H W_M + \mu\, W_K^H W_D + W_K^H W_K,$

where the bar denotes the complex conjugate of a scalar. Assume that $M$, $D$ and $K$ are real, and note that $H(\mu)$ is Hermitian for a complex $\mu$ and real symmetric for a real $\mu$. Then we only need to form the upper (lower) triangular part of $H(\mu)$, which involves the upper (lower) triangular parts of the nine matrices $W_X^H W_Y$, $X, Y \in \{M, D, K\}$. All these cost about $9nk^2$ flops. With these nine matrices available, we only need $O(k^2)$ flops to form $H(\mu)$ for either a real or complex $\mu$, negligible relative to $9nk^2$ flops. So, in total we need about $9nk^2$ flops to form the $i$ Hermitian matrices $H(\mu_j)$ for the approximate eigenvalues $\mu_1, \mu_2, \ldots, \mu_i$. We then compute the complete eigensystems of these $H(\mu_j)$ by the QR algorithm using about $9ik^3$ flops. Therefore, we can compute the $i$ right singular vectors using about $9nk^2$ flops when $ik \ll n$, a natural requirement in practice. As a result, a simple comparison indicates that such a cross-product based algorithm is more efficient than Golub–Reinsch's SVD algorithm when $i \geq 3$ and Chan's SVD algorithm when $i \geq 5$.
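A MATLAB sketch of the cross-product computation (15) of one refined Ritz vector follows; WM, WD, WK are the matrices $W_M, W_D, W_K$ above, mu is a selected Ritz value, and the variable names are ours.

    % Nine small cross-product blocks, reusable for every mu (only six
    % are formed explicitly; the rest are conjugate transposes).
    GMM = WM'*WM; GMD = WM'*WD; GMK = WM'*WK;
    GDD = WD'*WD; GDK = WD'*WK; GKK = WK'*WK;
    % Assemble H(mu) = S'*S with S = (mu^2*M + mu*D + K)*Qk, as in (15).
    H = abs(mu)^4*GMM + abs(mu)^2*conj(mu)*GMD + conj(mu)^2*GMK ...
      + abs(mu)^2*mu*GMD' + abs(mu)^2*GDD + conj(mu)*GDK ...
      + mu^2*GMK' + mu*GDK' + GKK;
    H = (H + H')/2;                   % enforce exact Hermitian symmetry
    [Z, Lam] = eig(H);
    [~, idx] = min(real(diag(Lam)));  % smallest eigenvalue = sigma_min^2
    uhat = Qk*Z(:, idx);              % refined Ritz vector
    uhat = uhat/norm(uhat);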

We can now present a basic (non-restarted) RGSOAR algorithm.

Algorithm 4.

The RGSOAR algorithm

  1. Given the starting vectors $u_1, u_2$, run the GSOAR procedure to generate an orthonormal basis $Q_k$ of $\mathcal{G}_k(A, B; u_1, u_2)$.

  2. Compute $W_M = M Q_k$, $W_D = D Q_k$ and $W_K = K Q_k$.

  3. Compute $M_k = Q_k^H W_M$, $D_k = Q_k^H W_D$ and $K_k = Q_k^H W_K$, solve the projected QEP

    (16)  $(\theta^2 M_k + \theta D_k + K_k) g = 0,$

    and select $i$ Ritz values $\mu_1, \mu_2, \ldots, \mu_i$ as approximations to the desired eigenvalues.

  4. For each chosen $\mu_j$, form $H(\mu_j)$, and compute the eigenvector $\hat z_j$ of $H(\mu_j)$ associated with its smallest eigenvalue and the refined Ritz vector $\hat u_j = Q_k \hat z_j$.

  5. Test convergence of the $(\mu_j, \hat u_j)$ by computing the relative residual norms.
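Step 5 can be sketched in MATLAB as below for one approximate eigenpair (mu, uhat) with unit-length uhat; we use 1-norms of M, D, K as an inexpensive stand-in for the matrix norms in the normalization of (20) below.

    % Relative residual norm of an approximate eigenpair (mu, uhat).
    res = norm((mu^2*M + mu*D + K)*uhat) / ...
          (abs(mu)^2*norm(M, 1) + abs(mu)*norm(D, 1) + norm(K, 1));
    converged = (res <= tol);         % tol: user-prescribed accuracy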

4 Implicitly restarted algorithms

This section consists of three subsections. In Section 4.1, under the assumption that no deflation occurs, we describe how to implicitly restart the GSOAR procedure. In Section 4.2, we discuss how to select best possible shifts, and propose exact and refined shifts for respective use within implicitly restarted GSOAR and RGSOAR algorithms. In Section 4.3, we present an effective approach to cure deflation in implicit restarts, so that implicit restarting can be run unconditionally.

4.1 Implicit restarts

As the number of steps increases, the GSOAR and RGSOAR methods become expensive and impractical due to storage requirements and/or computational cost, so restarting is generally necessary. That is, for a given maximum $m$ and the subspace $\mathcal{G}_m(A, B; u_1, u_2)$ with $u_1$ and $u_2$ normalized, if the methods have not yet converged, then, based on the information available, we select new unit length vectors $u_1^+$ and $u_2^+$ to construct a better subspace $\mathcal{G}_m(A, B; u_1^+, u_2^+)$ that contains richer information on the desired eigenvectors. We then extract new and better approximate eigenpairs with respect to it, and proceed in this way until the methods converge.

If no deflation occurs, it is direct to adapt the implicit restarting scheme [23] to the modified SOAR procedure in [21] and to the GSOAR procedure. Given $p$ shifts $s_1, s_2, \ldots, s_p$, performing $p$ implicit shifted QR iterations on $T_m$ yields the relation

$(T_m - s_1 I)(T_m - s_2 I) \cdots (T_m - s_p I) = V R,$

where $V$ is an $m \times m$ orthogonal (unitary) matrix and $R$ is upper triangular. Specifically, $V$ has only $p$ nonzero subdiagonals. Adapted from the derivation of implicitly restarting the standard Arnoldi process [23], we can establish the following result for the GSOAR procedure.

Theorem 2.

Given $p$ shifts $s_1, s_2, \ldots, s_p$, perform $p$ steps of implicit shifted QR iterations on $T_m$. Let $\check T_m = V^H T_m V$ with $V = V_1 V_2 \cdots V_p$, and define $Q_k^+ = Q_m V(:, 1\!:\!k)$ and $P_k^+ = P_m V(:, 1\!:\!k)$ with $k = m - p$. Assume that no deflation occurs in the $m$-step GSOAR decomposition (9). Then we have an updated $k$-step GSOAR decomposition

(17)  $L \begin{bmatrix} Q_k^+ \\ P_k^+ \end{bmatrix} = \begin{bmatrix} Q_{k+1}^+ \\ P_{k+1}^+ \end{bmatrix} \widetilde T_k^+$

starting with $q_1^+ = Q_m V e_1$, where $Q_{k+1}^+ = [Q_k^+, q_{k+1}^+]$, $P_{k+1}^+ = [P_k^+, p_{k+1}^+]$, $T_k^+ = \check T_m(1\!:\!k, 1\!:\!k)$ is upper Hessenberg and

$\begin{bmatrix} q_{k+1}^+ \\ p_{k+1}^+ \end{bmatrix} t_{k+1,k}^+ = \begin{bmatrix} Q_m \\ P_m \end{bmatrix} V(:, k+1)\, \check t_{k+1,k} + \begin{bmatrix} q_{m+1} \\ p_{m+1} \end{bmatrix} t_{m+1,m}\, \nu$

with $\nu$ the entry of $V$ in position $(m, k)$.

Theorem 2 states that if no deflation occurs then we have naturally obtained a $k$-step GSOAR decomposition (17) after $p$ implicit shifted QR iterations are run on $T_m$, thus generating an orthonormal basis of the $k$-dimensional subspace $\mathcal{G}_k(A, B; u_1^+, u_2^+)$. Decomposition (17) is then extended to an $m$-step one from step $k+1$ upwards in a standard way rather than from scratch, producing an orthonormal basis of the updated $m$-dimensional subspace $\mathcal{G}_m(A, B; u_1^+, u_2^+)$.
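Schematically, one implicit restart can be carried out in MATLAB as follows under the no-deflation assumption; T is the $(m+1) \times m$ matrix $\widetilde T_m$ from (9), Q and P have $m+1$ columns, shifts holds the $p$ shifts, and the update of the residual pair $(q_{k+1}^+, p_{k+1}^+)$ in Theorem 2 is omitted for brevity. Explicit QR is used here for clarity; an implicit bulge-chasing implementation would be used in practice.

    % Implicitly apply p shifts to T_m and truncate the m-step GSOAR
    % decomposition to a k = m-p step one (schematic, names ours).
    m = size(T, 2);  p = numel(shifts);  kk = m - p;
    Tm = T(1:m, 1:m);  V = eye(m);
    for s = shifts(:).'
      [Qs, ~] = qr(Tm - s*eye(m));    % one shifted QR iteration
      Tm = Qs'*Tm*Qs;  V = V*Qs;
    end
    Qk = Q(:, 1:m)*V(:, 1:kk);        % updated orthonormal basis
    Pk = P(:, 1:m)*V(:, 1:kk);        % auxiliary sequence updated alike
    Tk = Tm(1:kk, 1:kk);              % leading k-by-k upper Hessenberg block
    % The decomposition is then extended from step k+1 to m as usual.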

Analogous to the proof of the result on updated starting vectors in [23], it is direct to justify the following theorem.

Theorem 3.

It holds that

(18)  $\begin{bmatrix} q_1^+ \\ p_1^+ \end{bmatrix} = \gamma\, \psi(L) \begin{bmatrix} q_1 \\ p_1 \end{bmatrix}$

with $\psi(\lambda) = \prod_{j=1}^{p} (\lambda - s_j)$ and $\gamma$ a normalizing factor.

4.2 The selection of shifts

The selection of the shifts is one of the keys for the success and overall efficiency of implicitly restarted GSOAR and RGSOAR algorithms. In this subsection we propose the corresponding best possible shifts for respective use within each algorithm.

Assume that $L$ is diagonalizable. It is shown in, e.g., [22], that if the starting vector $v_1$ is a linear combination of $k$ eigenvectors of $L$ then $\mathcal{K}_k(L, v_1)$ is an invariant subspace. Therefore, a fundamental principle of restarting is to select a better vector, in some sense, from the current subspace as an updated starting vector that amplifies the components of the desired eigenvectors and simultaneously dampens those of the unwanted ones, so that the updated subspace contains more accurate approximations to the desired eigenvectors. For implicit restarting, based on formulas for updated starting vectors like (18), it has been shown in [12, 13] for the linear eigenvalue problem and in [15, 16] for the computation of a partial SVD that this goal is achieved by selecting the shifts to approximate some of the unwanted eigenvalues or singular values as accurately as possible within the framework of the underlying method. A general result is that the better the shifts approximate the unwanted eigenvalues, the richer the information on the desired eigenvectors contained in the updated starting vector, so that a better Krylov subspace is generated.

Motivated by the above results, we now investigate a reasonable selection of shifts for use within the implicitly restarted GSOAR and RGSOAR algorithms. Observe that the projected QEP (12) of the large QEP (1) over $\mathcal{G}_k(A, B; u_1, u_2)$ amounts to the generalized eigenvalue problem

(19)  $\begin{bmatrix} -D_k & -K_k \\ I & 0 \end{bmatrix} \begin{bmatrix} \theta g \\ g \end{bmatrix} = \theta \begin{bmatrix} M_k & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} \theta g \\ g \end{bmatrix},$

which is the projected problem of the large generalized eigenvalue problem (2) over the subspace (cf. (11)) spanned by the (nonzero) columns of

$\begin{bmatrix} Q_k & 0 \\ 0 & Q_k \end{bmatrix}.$

The above problem amounts to the standard linear eigenvalue problem

$\begin{bmatrix} -M_k^{-1} D_k & -M_k^{-1} K_k \\ I & 0 \end{bmatrix} \begin{bmatrix} \theta g \\ g \end{bmatrix} = \theta \begin{bmatrix} \theta g \\ g \end{bmatrix}.$

Relation (18) indicates that we should select the shifts as the best possible approximations to the unwanted eigenvalues of $L$, so as to generate increasingly better updated subspaces $\mathcal{K}_m(L, v_1^+)$ with $v_1^+ = \begin{bmatrix} u_1^+ \\ u_2^+ \end{bmatrix}$. In terms of (11) and the comments that follow it, this, in turn, leads to increasingly better updated subspaces $\mathcal{W}_m$ that contain increasingly better approximations to the desired eigenvectors of $L$. As a result, $\mathcal{G}_m(A, B; u_1^+, u_2^+)$ contains more accurate approximations to the desired eigenvectors of (1). So, just as for the linear eigenvalue problem, we should choose the shifts for each implicitly restarted GSOAR type algorithm so that they are the best possible approximations to some of the unwanted eigenvalues of (1).

For the Rayleigh–Ritz method with respect to a given subspace, the Ritz values can be regarded as the best available approximations to some eigenvalues of (1). Otto [21] proposed exact second-order shifts for his implicitly restarted modified SOAR algorithm. Adapted to our setting, one solves the projected QEP (12) and selects $i$ Ritz values as approximations to the desired eigenvalues. Then the remaining $2m - i$ unwanted Ritz values are shift candidates, called the exact second-order shift candidates. A problem is that there are $2m - i$ shift candidates, while for (17) the number of shifts must not exceed $p = m - k$. One must select $p$ shifts among the candidates. Otto simply suggested taking any $p$ shifts among the $2m - i$ ones. We should point out that this situation is unlike that for implicitly restarted Arnoldi type algorithms for the linear eigenvalue problem, where the maximum number of shifts equals the number of candidates; see [23] and [12, 13, 15, 16].

However, the above selection of exact second-order shifts is problematic and susceptible to failure, as elaborated below. It is crucial to keep in mind the basic fact that the QEP may often have two distinct eigenvalues that share the same eigenvector [25]. This means that, for QEP (12), some of the shift candidates and some of the Ritz values used to approximate the desired eigenvalues may share common eigenvector(s). Therefore, if such candidates are unfortunately taken as shifts, restarting will filter out the information on the corresponding desired eigenvectors and thus make the implicitly restarted GSOAR algorithms perform poorly.

In order to avoid the above deficiency, we propose new shift candidates for the implicitly restarted GSOAR and RGSOAR algorithms, respectively, and show how to reasonably select the shifts among the candidates. We first consider the GSOAR method. Project QEP (1) onto the orthogonal complement of $\mathrm{span}\{\tilde y_1, \ldots, \tilde y_i\}$ with respect to $\mathcal{G}_m(A, B; u_1, u_2)$, where the $\tilde y_j$ are the Ritz vectors approximating the desired eigenvectors. Then we obtain an $(m - i)$-dimensional projected QEP and compute its $2(m - i)$ eigenvalues. A remarkable consequence is that these eigenvalues must be approximations to some of the unwanted eigenvalues of QEP (1), because the information on the $\tilde y_j$ has been removed from the subspace. So we can use any $p$ of these candidates as shifts. To be unique, we choose the $p$ ones farthest from the Ritz values that are used to approximate the desired eigenvalues, as sketched after this paragraph. The motivation of this choice is that, based on (18), these shifts can better amplify the information of the updated starting vector on the desired eigenvectors and dampen the components of the undesired eigenvectors in it.
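In MATLAB, the "farthest" selection can be sketched as follows; cand holds the shift candidates, wanted the Ritz values approximating the desired eigenvalues, and the aggregate distance measure is our reading.

    % Select the p candidates farthest from the wanted Ritz values.
    dist = min(abs(cand(:) - wanted(:).'), [], 2);  % distance to nearest wanted value
    [~, idx] = sort(dist, 'descend');
    shifts = cand(idx(1:p));                        % the p farthest candidates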

If we are interested in the eigenvalues nearest to a target and/or the associated eigenvectors, QEP (1) can be equivalently transformed to a shift-invert QEP; see the end of this subsection. In this case, we select the $p$ candidates farthest from the Ritz values used to approximate the desired eigenvalues as shifts. Such a selection of shifts is motivated by an idea from [15, 16], where some of the shifts are taken to be the unwanted Ritz values farthest from the wanted approximate singular values. It was argued there that this selection better dampens the components of the unwanted singular vectors and meanwhile amplifies the components of the desired singular vectors.

We now turn to the selection of shifts for the RGSOAR algorithm. Algorithm 4 computes the refined Ritz vectors $\hat u_1, \ldots, \hat u_i$, which are generally more accurate, and can be much more accurate, than the Ritz vectors [9, 17]. The first author [12, 13] has proposed certain refined shifts for the refined Arnoldi method and the refined harmonic Arnoldi method for the linear eigenvalue problem. It is shown that the refined shifts are generally better than the corresponding exact shifts and can be computed efficiently and reliably. In the same spirit, we next propose certain refined shifts for the RGSOAR algorithm.

Since the refined Ritz vectors $\hat u_j$ are more accurate than the corresponding Ritz vectors $\tilde y_j$, the orthogonal complement of $\mathrm{span}\{\hat u_1, \ldots, \hat u_i\}$ with respect to $\mathcal{G}_m(A, B; u_1, u_2)$ contains richer information on the unwanted eigenvectors than the orthogonal complement of $\mathrm{span}\{\tilde y_1, \ldots, \tilde y_i\}$. As a result, the eigenvalues of the projected QEP of QEP (1) onto this orthogonal complement are more accurate approximate eigenvalues than the exact shift candidates described above. We call them refined shift candidates. We use the same approach as above to select $p$ ones among them as shifts, called the refined shifts, for use within the implicitly restarted RGSOAR algorithm.

Finally, we show how to compute the exact and refined shifts efficiently and reliably. We take the refined shifts as an example; the computation of the exact shifts is analogous. Recall $\hat u_j = Q_m \hat z_j$, and write $\hat Z = [\hat z_1, \hat z_2, \ldots, \hat z_i]$. If QEP (1) is real and two columns $\hat z_j$ and $\hat z_{j+1}$ of $\hat Z$ are complex conjugate, we replace them by their normalized real and imaginary parts, respectively, so that the resulting $\hat Z$ is real. We then compute the full QR decomposition

$\hat Z = [W_1, W_2] \begin{bmatrix} R \\ 0 \end{bmatrix},$

where $W_1$ and $W_2$ are $m \times i$ and $m \times (m - i)$ column orthonormal matrices, respectively, and $R$ is upper triangular. We use the Matlab built-in function qr.m to compute the decomposition in experiments. This costs $O(m^2 i)$ flops, negligible compared with the cost of the $m$-step GSOAR procedure. Obviously, it holds that

$\mathrm{span}\{W_2\} = \mathrm{span}\{\hat Z\}^{\perp}.$

Therefore, $Q_m W_2$ is an orthonormal basis matrix of the orthogonal complement of $\mathrm{span}\{\hat u_1, \ldots, \hat u_i\}$ with respect to $\mathcal{G}_m(A, B; u_1, u_2)$. It is direct to justify that the projected QEP of the original QEP (1) onto $\mathrm{span}\{Q_m W_2\}$ is just the projected QEP of the small QEP (16) onto $\mathrm{span}\{W_2\}$. So we form the projected QEP of the original QEP (1) onto $\mathrm{span}\{Q_m W_2\}$ at a cost of $O(m^2 (m - i))$ flops. We then compute its $2(m - i)$ eigenvalues using $O((m - i)^3)$ flops and select $p$ ones among them as the refined shifts. Since $m \ll n$, the total cost of computing the refined shifts is negligible. For the exact shifts, recall the Ritz vectors $\tilde y_j = Q_m g_j$. Write $\tilde Z = [g_1, g_2, \ldots, g_i]$, replace $\hat Z$ by it, and compute the exact shifts in the same way as above.
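The whole computation of the refined shift candidates can be sketched in MATLAB as follows; Z is the m-by-nw matrix of the (realified) refined Ritz coordinate vectors, nw stands for the number $i$ of kept vectors, and Mm, Dm, Km are the projected matrices of (12) at dimension m (all names ours).

    % Refined shift candidates: project the small QEP onto the
    % orthogonal complement of span(Z) and solve the resulting
    % (m - nw)-dimensional QEP.
    [W, ~] = qr(Z);                   % full QR decomposition of Z
    W2 = W(:, nw+1:end);              % orthonormal basis of the complement
    Mc = W2'*Mm*W2; Dc = W2'*Dm*W2; Kc = W2'*Km*W2;
    cand = polyeig(Kc, Dc, Mc);       % 2(m - nw) refined shift candidates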

Having done the above, we have finally developed the following Algorithm 5.

Algorithm 5.

The implicitly restarted GSOAR type algorithms

  1. Given unit length starting vectors $u_1$ and $u_2$, the number $i$ of desired eigenpairs and the number $p$ of shifts satisfying $i \leq k = m - p$, run the $m$-step GSOAR procedure to generate $Q_{m+1}$, $P_{m+1}$ and $\widetilde T_m$.

  2. Do until convergence

    Project QEP (1) onto $\mathcal{G}_m(A, B; u_1, u_2)$ to get QEP (12), select $i$ Ritz pairs or refined Ritz pairs as approximations to the desired eigenpairs, respectively, and determine their convergence.

  3. If not converged, compute the $p$ exact shifts or refined shifts, and implicitly restart the GSOAR method or the RGSOAR method, respectively.

  4. EndDo

Algorithm 5 includes two algorithms: the implicitly restarted GSOAR algorithm with the exact shifts and the implicitly restarted RGSOAR algorithm with the refined shifts, abbreviated as IGSOAR and IRGSOAR here and hereafter. They can be used to compute a number of largest eigenvalues in magnitude and the associated eigenvectors of QEP (1). We determine the convergence of a Ritz pair $(\theta, \tilde y)$ by requiring

(20)  $\frac{\|(\theta^2 M + \theta D + K) \tilde y\|}{|\theta|^2 \|M\| + |\theta| \|D\| + \|K\|} \leq tol,$

where $tol$ is a user-prescribed accuracy. For the convergence of a refined Ritz pair $(\mu, \hat u)$, we replace the above $(\theta, \tilde y)$ by $(\mu, \hat u)$.

If the eigenvalues closest to a given target $\sigma$ are desired, we use the shift-invert transformation $\mu = \frac{1}{\lambda - \sigma}$ with $\lambda = \sigma + \frac{1}{\mu}$ to transform QEP (1) to the new QEP

$\left(\mu^2 (\sigma^2 M + \sigma D + K) + \mu (2\sigma M + D) + M\right) x = 0,$

whose largest eigenvalues $\mu$ in magnitude correspond to the eigenvalues $\lambda$ of (1) closest to $\sigma$.