Short-recurrence Krylov subspace methods for the overlap Dirac operator at nonzero chemical potential

Abstract

The overlap operator in lattice QCD requires the computation of the sign function of a matrix, which is non-Hermitian in the presence of a quark chemical potential. In previous work we introduced an Arnoldi-based Krylov subspace approximation, which uses long recurrences. Even after the deflation of critical eigenvalues, the low efficiency of the method restricts its application to small lattices. Here we propose new short-recurrence methods which strongly enhance the efficiency of the computational method. Using rational approximations to the sign function we introduce two variants, based on the restarted Arnoldi process and on the two-sided Lanczos method, respectively, which become very efficient when combined with multishift solvers. Alternatively, in the variant based on the two-sided Lanczos method the sign function can be evaluated directly. We present numerical results which compare the efficiencies of a restarted Arnoldi-based method and the direct two-sided Lanczos approximation for various lattice sizes. We also show that our new methods gain substantially when combined with deflation.

Supported by DFG collaborative research center SFB/TR-55 “Hadron Physics from Lattice QCD”.

1 Introduction

While this paper discusses new numerical methods that are expected to be useful in a large number of applications, the main motivation for these new methods comes from quantum chromodynamics (QCD) formulated on a discrete space-time lattice. QCD is the fundamental theory of the strong interaction. Being a non-Abelian gauge theory, it is notoriously difficult to deal with. Lattice QCD is the only systematic non-perturbative approach to compute observables from the theory, and it is amenable to numerical simulations. The main object relevant for our discussion is the Dirac operator, for which there exist several formulations that differ on the lattice but are supposed to give the same continuum limit when the lattice spacing is taken to zero. We are focusing on the overlap Dirac operator (1); (2), which is the cleanest formulation in terms of lattice chiral symmetry (3); (4) but very expensive in terms of the numerical effort it requires. Trying to improve algorithms dealing with the overlap operator is an active field of research, and even small improvements can have an impact on the large-scale lattice simulations that are being run by the lattice QCD collaborations worldwide.

The overlap operator is essentially given by the sign function of its kernel, which we assume to be the usual Hermitian Wilson operator $H_W$ (see (5) for the notation). On the lattice, this operator is represented by a sparse matrix, and on current production lattices the dimension of this matrix can be huge. The main numerical effort lies in the inversion of the overlap operator, which is done by iterative methods and requires the repeated application of the sign function of $H_W$ to a vector. At zero chemical potential $\mu$, $H_W$ is Hermitian, and many sophisticated methods have been developed for this case (see, e.g., (6)). However, one can also study QCD at nonzero quark chemical potential (or, equivalently, density), which is relevant for many physical systems such as neutron stars, relativistic heavy ion collisions, or the physics of the early universe. The overlap operator has been generalized to this case (5); (7). While the result is formally similar to the one at $\mu = 0$, it is in fact more complicated since $H_W(\mu)$ becomes a non-Hermitian matrix, of which we need to compute the sign function. This case is much less studied and the focus of the present paper, which is a natural continuation of earlier work (8). For simplicity we will still refer to $H_W(\mu)$ as the “Hermitian” Wilson operator.

In mathematical terms, we investigate the computation of $f(A)\,b$, where $A \in \mathbb{C}^{n \times n}$ is non-Hermitian and $f$ is a general function defined on the spectrum of $A$ such that the extension of $f$ to matrix arguments is defined. For a simple definition of matrix functions we assume that $A$ is diagonalizable and let $A = U \Lambda U^{-1}$ be the eigendecomposition, with $U \in \mathbb{C}^{n \times n}$ and diagonal $\Lambda$ containing the eigenvalues $\lambda_i$. Then the matrix evaluation of $f$ is defined as

$f(A) = U f(\Lambda)\, U^{-1}\,, \qquad f(\Lambda) = \operatorname{diag}\big(f(\lambda_1), \ldots, f(\lambda_n)\big)\,.$   (1)

Accordingly, if $b = \sum_{i=1}^n c_i u_i$ is a vector expressed in terms of the eigenvectors $u_i$ (the columns of $U$), then

$f(A)\, b = \sum_{i=1}^n c_i\, f(\lambda_i)\, u_i\,.$   (2)

For a thorough treatment of matrix functions see (9); a compact overview is given in (10). The case $f = \operatorname{sign}$ will be of particular interest. We use the standard definition $\operatorname{sign}(z) = z/\sqrt{z^2} = \operatorname{sign}(\operatorname{Re}(z))$ for the sign function of a complex number (9), which in the physics case we are considering was also shown to follow from the domain-wall formalism (7).

If $A$ is large and sparse, $f(A)$ is too costly to compute, whereas $f(A)\,b$ can still be obtained in an efficient manner via a Krylov subspace method.

The foundation for any Krylov subspace method is the computation of an appropriate basis for the Krylov subspace $\mathcal{K}_k(A,b) = \operatorname{span}\{b, Ab, \ldots, A^{k-1}b\}$. For Hermitian matrices an orthonormal basis can be built with short recurrences using the Lanczos process. For non-Hermitian matrices the corresponding process, which again computes an orthonormal basis, is known as the Arnoldi process. It requires long recurrences and is usually summarized via the Arnoldi relation

$A V_k = V_k H_k + h_{k+1,k}\, v_{k+1} e_k^\dagger\,.$   (3)

Here, $V_k = [v_1 | \cdots | v_k] \in \mathbb{C}^{n \times k}$ is the matrix which contains the computed basis vectors (the Arnoldi vectors), $H_k \in \mathbb{C}^{k \times k}$ is the upper Hessenberg matrix containing the recurrence coefficients $h_{ij}$, and $e_k$ denotes the $k$-th unit vector of $\mathbb{C}^k$. $H_k$ being upper Hessenberg reflects the fact that the computation of the next Arnoldi vector $v_{k+1}$ results in a long recurrence, since the projections of $A v_k$ on all previous vectors have to be subtracted from $A v_k$. Long recurrences slow down computation and increase storage requirements, and thus become inefficient or even infeasible if $k$, the dimension of the Krylov subspace, becomes large. This is the reason why in this paper we investigate two ways to circumvent this problem for non-Hermitian matrices, i.e., restarts of the Arnoldi process and the use of the two-sided Lanczos process. We will consider these two methods in combination with a rational function approximation to $f$. In the case of two-sided Lanczos, we will also consider a direct evaluation of the function.
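As an illustration, the following Python/NumPy sketch implements the Arnoldi process underlying (3). It is a minimal version for illustration only (no reorthogonalization and no breakdown handling); the operator A can be any object supporting the @ product.

```python
import numpy as np

def arnoldi(A, b, k):
    """Run k steps of the Arnoldi process for K_k(A, b).

    Returns V (n x (k+1)) with orthonormal columns and the upper
    Hessenberg matrix H ((k+1) x k) such that A @ V[:, :k] == V @ H.
    """
    n = b.shape[0]
    V = np.zeros((n, k + 1), dtype=complex)
    H = np.zeros((k + 1, k), dtype=complex)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):                # long recurrence: orthogonalize
            H[i, j] = np.vdot(V[:, i], w)     # against ALL previous vectors
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)       # breakdown (== 0) not handled
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```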

This paper is organized as follows. In Section 2 we describe the two alternatives just mentioned to obtain short recurrences. In Section 3 we address several aspects of the important issue of deflation. Section 4 contains the descriptions of four different short-recurrence algorithms to compute the sign function, all of which use the preferred method of LR deflation. In Section 5 we discuss the choice of the rational function to approximate the sign function. Our numerical results are presented in Section 6, and conclusions are drawn in Section 7.

2 Short recurrences for non-Hermitian matrices

For simplicity, we assume $\|b\|_2 = 1$ from now on. The standard Krylov subspace approach, introduced in (11) (see also (9)), to obtain approximations to the action $f(A)\,b$ is to compute

$f(A)\, b \approx V_k f(H_k)\, e_1$   (4)

for some suitable $k$. Here, $e_1$ denotes the first unit vector. We refer to (8); (12) for a discussion in the context of the overlap operator at nonzero chemical potential.

The approximation (4) can be viewed as a projection approach. The operator $A$ is orthogonally projected onto $\mathcal{K}_k(A,b)$, the projection being represented by $H_k = V_k^\dagger A V_k$. We then compute $f(H_k)\, e_1$, i.e., we evaluate the matrix function of the projected operator, applied to the projected vector $e_1 = V_k^\dagger b$. This result is finally lifted back to the larger, original space by multiplication with $V_k$. The matrix function $f(H_k)$, where $H_k$ is of small size, can be evaluated using existing schemes for matrix functions, e.g., by computing the eigendecomposition of $H_k$ or by using iterative schemes like, in the case of $f = \operatorname{sign}$, Roberts’ iterative scheme based on Newton’s method, see, e.g., (9).
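A minimal sketch of this projection approach for f = sign, again in Python/NumPy and assuming the arnoldi routine from the sketch above; the small matrix sign function sign(H_k) is evaluated with Roberts' Newton iteration X <- (X + X^{-1})/2.

```python
import numpy as np

def sign_newton(H, tol=1e-14, maxit=50):
    """Roberts' Newton iteration for the sign of a small dense matrix
    (requires no eigenvalues on the imaginary axis)."""
    X = np.array(H, dtype=complex)
    for _ in range(maxit):
        Xnew = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(Xnew - X, 1) <= tol * np.linalg.norm(Xnew, 1):
            return Xnew
        X = Xnew
    return X

def sign_times_vector(A, b, k):
    """Arnoldi projection approximation (4): sign(A) b ~ ||b|| V_k sign(H_k) e_1."""
    V, H = arnoldi(A, b, k)                   # sketch from above
    e1 = np.zeros(k, dtype=complex)
    e1[0] = 1.0
    return np.linalg.norm(b) * (V[:, :k] @ (sign_newton(H[:k, :k]) @ e1))
```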

2.1 Restarting the Arnoldi process

To prevent recurrences from becoming too long for (4) one could, in principle, use a restart procedure. This means that one stops the Arnoldi process after $k$ iterations. At this point we have a, possibly crude, approximation (4) to $f(A)\,b$, and to allow for a restart one now has to express the error of this approximation anew as the action of a matrix function, $\tilde f(A)\, \tilde b$, say. It turns out that this can indeed be done, see (13), at least in theory, with $\tilde f$ defined as a divided difference of $f$ with respect to the eigenvalues of $H_k$ and with $\tilde b = v_{k+1}$, the last Arnoldi vector of the previous step. Unfortunately, however, this may result in a numerically unstable process, so that after a few restarts the numerical results become useless. For details, see (13).

An important exception arises when $f$ is a rational function $g$ of the form

$g(t) = \sum_{i=1}^s \frac{\omega_i}{t - \sigma_i}\,, \qquad \omega_i, \sigma_i \in \mathbb{C}\,.$   (5)

We then have

$g(A)\, b = \sum_{i=1}^s \omega_i x_i\,,$   (6)

where the $x_i$, $i = 1, \ldots, s$, are solutions of the shifted systems

$(A - \sigma_i I)\, x_i = b\,,$   (7)

and $I$ is the unit matrix (we will frequently suppress the index $i$ on $\sigma_i$). For $A$ large and sparse, these shifted systems cannot be solved efficiently by direct methods. Using the Arnoldi projection approach outlined before, the current approximation for $x_i$ is obtained as

$x_i^{(k)} = V_k (H_k - \sigma_i I)^{-1} e_1\,.$   (8)

Note that Krylov subspaces are shift invariant, i.e., $\mathcal{K}_k(A, b) = \mathcal{K}_k(A - \sigma I, b)$, and that the Arnoldi process applied to $A - \sigma I$ instead of $A$ produces the same set of Arnoldi vectors, i.e., the same matrix $V_k$, with $H_k$ replaced by the shifted counterpart $H_k - \sigma I$, see (14); (15). This shows that the vectors $x_i^{(k)}$ in (8) are the iterates of the full orthogonalization method FOM, see (16), for the linear systems

$(A - \sigma_i I)\, x_i = b\,, \qquad i = 1, \ldots, s\,.$   (9)
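This shift invariance is easy to verify numerically. The following snippet, using the arnoldi sketch from above on random data, confirms that the shifted operator reproduces the same Arnoldi vectors while the Hessenberg matrix is simply shifted.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, sigma = 200, 20, 0.3 + 0.1j
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = rng.standard_normal(n).astype(complex)

V1, H1 = arnoldi(A, b, k)                       # sketch from above
V2, H2 = arnoldi(A - sigma * np.eye(n), b, k)

print(np.linalg.norm(V1 - V2))                  # tiny (rounding level)
print(np.linalg.norm(H1[:k, :k] - sigma * np.eye(k) - H2[:k, :k]))  # tiny
```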

A crucial observation is that for any $\sigma_i$ the individual residuals $r_i^{(k)} = b - (A - \sigma_i I)\, x_i^{(k)}$ of the FOM iterates are just scalar multiples of the Arnoldi vector $v_{k+1}$, see, e.g., (17); (18), i.e.,

$r_i^{(k)} = \rho_i^{(k)}\, v_{k+1}\,, \qquad \rho_i^{(k)} \in \mathbb{C}\,,$   (10)

with collinearity factors $\rho_i^{(k)}$. The error of the approximation at step $k$ can therefore be expressed as

$g(A)\, b - \sum_{i=1}^s \omega_i x_i^{(k)} = \sum_{i=1}^s \omega_i (A - \sigma_i I)^{-1} r_i^{(k)} = \tilde g(A)\, v_{k+1}\,, \qquad \tilde g(t) = \sum_{i=1}^s \frac{\omega_i\, \rho_i^{(k)}}{t - \sigma_i}\,.$   (11)

This allows for a simple restart at step $k$ of the Arnoldi process, with the new function $\tilde g$ again being rational with the same poles as $g$. For this reason, the stability problems that are usually encountered with restarts for general functions do not occur here.

The restart process just described can also be regarded as performing restarted FOM for each of the individual systems (7) (and combining the individual iterates appropriately), the point being that, even after a restart, we need only a single Krylov subspace for all $s$ systems, see (18). Restarted FOM is not the only “multishift” solver based on a single Krylov subspace to compute approximations to $g(A)\,b$ by combining approximate solutions to $(A - \sigma_i I)\, x_i = b$. An important alternative to FOM is to use restarted GMRES for families of shifted linear systems as presented in (19). This method also relies on the restarted Arnoldi process, but now a difference has to be made between the seed system, for which “true” restarted GMRES is performed, and the other systems, for which a variant of GMRES is performed which keeps the residuals collinear to that of the seed system. The convergence analysis in (19) shows that this approach is justified if $A$ is positive real (i.e., $\operatorname{Re}(x^\dagger A x) > 0$ for all $x \neq 0$) and all shifts $\sigma_i$ are negative. Indeed then, taking as the seed system the one belonging to the shift $\sigma_1$ which is smallest in modulus, say, the residuals of all the other systems — for which we do not perform “true” GMRES — are smaller in norm than the residual for the seed system. But for the first system we do perform true restarted GMRES, which is known to converge under the assumptions made.
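A compact sketch of restarted multishift FOM (the basis of Algorithm 1 in Section 4, here without deflation), assuming the arnoldi routine from above. One Krylov subspace per restart cycle serves all shifts, and the collinearity factors of (10) carry the residual information across restarts.

```python
import numpy as np

def multishift_fom(A, b, shifts, weights, k, tol=1e-10, maxcycles=200):
    """Approximate g(A) b = sum_i w_i (A - s_i I)^{-1} b by restarted FOM
    with restart length k, sharing one Krylov subspace among all shifts."""
    s = len(shifts)
    y = np.zeros_like(b, dtype=complex)
    rho = np.ones(s, dtype=complex)           # collinearity factors, Eq. (10)
    r = b.astype(complex)                     # common residual direction
    for _ in range(maxcycles):
        beta = np.linalg.norm(r)
        V, H = arnoldi(A, r, k)
        e1 = np.zeros(k, dtype=complex)
        e1[0] = 1.0
        for i in range(s):
            z = np.linalg.solve(H[:k, :k] - shifts[i] * np.eye(k),
                                rho[i] * beta * e1)
            y += weights[i] * (V[:, :k] @ z)
            rho[i] = -H[k, k - 1] * z[k - 1]  # new residual: rho_i * v_{k+1}
        r = V[:, k]                           # unit norm by construction
        if np.max(np.abs(rho)) < tol * np.linalg.norm(b):
            break
    return y
```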

2.2 The two-sided Lanczos process

Another way to obtain short recurrences when computing a basis for the Krylov subspaces of non-Hermitian matrices is to replace the Arnoldi process by the two-sided Lanczos process. The two-sided Lanczos process builds two biorthogonal bases $V_k = [v_1 | \cdots | v_k]$ and $W_k = [w_1 | \cdots | w_k]$ for the two Krylov subspaces $\mathcal{K}_k(A, b)$ and $\mathcal{K}_k(A^\dagger, \tilde b)$, respectively. Here, $\tilde b$ is a so-called shadow vector which can be chosen arbitrarily. We always chose $\tilde b = b$, motivated by the fact that then for Hermitian $A$ one recovers the standard Lanczos method, for which the projection on the Krylov subspace (see (14) below) is orthogonal. With the normalization $W_k^\dagger V_k = I_k$ we thus have $T_k = W_k^\dagger A V_k$, and the resulting recurrences can be summarized as

$A V_k = V_k T_k + t_{k+1,k}\, v_{k+1} e_k^\dagger\,,$   (12)
$A^\dagger W_k = W_k T_k^\dagger + \bar t_{k,k+1}\, w_{k+1} e_k^\dagger\,,$   (13)

where $T_k$ is tridiagonal. Note that an iteration will now require two matrix-vector multiplications, one by $A$ and one by $A^\dagger$. In principle, the choice of $\tilde b$ can substantially influence the two-sided Lanczos process, which can even break down prematurely or run into numerical instabilities. With our choice $\tilde b = b$ such undesirable behavior never occurred in our numerical experiments.
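A sketch of the two-sided Lanczos recurrences (12)-(13) in Python/NumPy, with the normalization W^H V = I fixed by choosing the subdiagonal coefficient as the norm of the new v-vector; there is no look-ahead, so a serious breakdown simply raises an error.

```python
import numpy as np

def two_sided_lanczos(A, b, bt, k):
    """k steps of two-sided Lanczos: biorthogonal V, W (W^H V = I) and
    tridiagonal T with T[:k, :k] = W_k^H A V_k; see Eqs. (12)-(13)."""
    n = b.shape[0]
    V = np.zeros((n, k + 1), dtype=complex)
    W = np.zeros((n, k + 1), dtype=complex)
    T = np.zeros((k + 1, k + 1), dtype=complex)
    V[:, 0] = b / np.linalg.norm(b)
    W[:, 0] = bt / np.vdot(V[:, 0], bt)        # enforce w_1^H v_1 = 1
    for j in range(k):
        Av = A @ V[:, j]
        Ahw = A.conj().T @ W[:, j]
        T[j, j] = np.vdot(W[:, j], Av)                   # diagonal entry
        v = Av - T[j, j] * V[:, j]
        w = Ahw - np.conj(T[j, j]) * W[:, j]
        if j > 0:
            v -= T[j - 1, j] * V[:, j - 1]
            w -= np.conj(T[j, j - 1]) * W[:, j - 1]
        wv = np.vdot(w, v)             # = t_{j+1,j} * t_{j,j+1}
        if abs(wv) < 1e-14 * np.linalg.norm(v) * np.linalg.norm(w):
            raise RuntimeError("serious breakdown: look-ahead would be needed")
        T[j + 1, j] = np.linalg.norm(v)                  # subdiagonal
        T[j, j + 1] = wv / T[j + 1, j]                   # superdiagonal
        V[:, j + 1] = v / T[j + 1, j]
        W[:, j + 1] = w / np.conj(T[j, j + 1])
    return V, W, T
```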

The matrix $T_k$ now represents an oblique projection, and in analogy to (4) we get the approximations

$f(A)\, b \approx V_k f(T_k)\, e_1\,.$   (14)

A first report on the application of (14) to the overlap operator with chemical potential can be found in (21).
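Combining the two_sided_lanczos sketch above with an eigendecomposition of the small tridiagonal matrix gives a minimal version of the direct approximation (14) for f = sign (the approach of Algorithm 4 in Section 4, here without deflation):

```python
import numpy as np

def sign_via_lanczos(A, b, k):
    """Oblique projection approximation (14): sign(A) b ~ ||b|| V_k sign(T_k) e_1,
    with shadow vector bt = b and sign(T_k) formed via eigendecomposition
    (assumes T_k is diagonalizable with no purely imaginary eigenvalues)."""
    V, W, T = two_sided_lanczos(A, b, b.copy(), k)
    d, U = np.linalg.eig(T[:k, :k])
    e1 = np.zeros(k, dtype=complex)
    e1[0] = 1.0
    c = (U * np.sign(d.real)) @ np.linalg.solve(U, e1)   # sign(T_k) e_1
    return np.linalg.norm(b) * (V[:, :k] @ c)
```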

If $g$ is a rational function of the form (5), the approximation (14) can be expressed as

$g(A)\, b \approx V_k g(T_k)\, e_1 = \sum_{i=1}^s \omega_i V_k (T_k - \sigma_i I)^{-1} e_1\,.$   (15)

Since, just as the Arnoldi process, the two-sided Lanczos process creates the same vectors if one passes from $A$ to $A - \sigma I$, with the projected matrix $T_k$ passing to $T_k - \sigma I$, this shows that for all $k$ the vectors $x_i^{(k)} = V_k (T_k - \sigma_i I)^{-1} e_1$ are just the BiCG iterates for the systems $(A - \sigma_i I)\, x_i = b$. In other words: If $g$ is a rational function, the approximation (14) is equivalent to performing “multishift” BiCG, see (17); (22) (and recombining the individual iterates as $\sum_{i=1}^s \omega_i x_i^{(k)}$). Although no breakdowns were observed in the numerical experiments of Ref. (20), for reasons of numerical stability one might prefer using the BiCGStab (23) or QMR method (24) instead of BiCG. Both also rely on the two-sided Lanczos process, and efficient multishift versions exist as well, see (17); (22); (25).

2.3 Summary and comparison

To summarize, so far we have presented the following approaches to developing short-recurrence methods to iteratively approximate $f(A)\,b$:

  1. Methods based on restarted Arnoldi:

    a) Approximate $f$ by a rational function $g$. Then use multishift restarted FOM or multishift restarted GMRES for $g(A)\,b$.

    b) Apply the restarted Arnoldi process directly to $f$. As discussed at the beginning of Section 2.1, this is not possible in computational practice because of stability problems.

  2. Methods based on two-sided Lanczos:

    a) Approximate $f$ by a rational function $g$. Then use multishift BiCG/BiCGStab/QMR for $g(A)\,b$.

    b) Use directly the approximation $f(A)\,b \approx V_k f(T_k)\, e_1$ given by the oblique projection for any $f$, see (14).

The corresponding algorithms will be given in Section 4.

Note that short recurrences, in principle, result in constant work per iteration. However, for approach 2b) we will have to evaluate $f(T_k)$, and this work will become substantial if $k$ is large, see Proposition 2 below. Also, in approach 2b) we have to store all vectors $v_1, \ldots, v_k$, which may become prohibitive, so that a two-pass strategy may be mandatory: The two-sided Lanczos process is run twice. In the first run, $T_k$ is built up, but the vectors $v_j$ are discarded. Once $f(T_k)\, e_1$ has been computed, the Lanczos process is run again, and the vectors $v_j$ are combined with the coefficients from $f(T_k)\, e_1$ to obtain the final approximation. Neither of these drawbacks is present in the other approaches. These, however, rely on the fact that we must be able to replace the computation of $f(A)\,b$ by $g(A)\,b$ with $g$ a sufficiently precise rational approximation to $f$.

3 Deflation

In (8) two approaches to deflate eigenvectors were proposed for the Krylov subspace approximation (4). These deflation techniques use eigenvalue information, namely Schur vectors (Schur deflation) or left and right eigenvectors (LR deflation) corresponding to some “critical” eigenvalues. Critical eigenvalues are those which are close to a singularity of $f$ since, if these are not reflected very precisely in the Krylov subspace, we get a poor approximation. In the case of the sign function the critical eigenvalues are those close to the imaginary axis. In this section we describe both deflation methods and show how they can be combined with multishifts so that they can be used in approaches based on a rational approximation. We point out a serious disadvantage of Schur deflation, leaving LR deflation as the method of choice. For the sake of simplicity we present the deflation techniques without taking restarts into account. We will briefly comment on restarts after (24) below.

We start with Schur deflation. Let $S = [s_1 | \cdots | s_m] \in \mathbb{C}^{n \times m}$ be the matrix whose columns are the Schur vectors corresponding to $m$ critical eigenvalues of the matrix $A$. This means that we have $S^\dagger S = I_m$ and

$A S = S\, T_m\,,$   (16)

where $T_m \in \mathbb{C}^{m \times m}$ is an upper triangular matrix with the critical eigenvalues of $A$ on its diagonal, see (27). Let us note that the Schur vectors span an invariant subspace of $A$, and that they can be computed via orthogonal transformations, which is very stable numerically. The extraction of the eigenvectors themselves is a less stable process if $A$ is non-Hermitian.

In the case of the shifted matrices $A - \sigma_i I$, $i = 1, \ldots, s$, the Schur vectors remain the same, and for the triangular matrices computed with respect to $A - \sigma_i I$ we have

$(A - \sigma_i I)\, S = S\, (T_m - \sigma_i I_m)\,.$   (17)

Clearly, the matrix $P = S S^\dagger$ represents the orthogonal projector onto the subspace $\Omega = \operatorname{span}(S)$. The solutions $x_i$ of (7) are now approximated in augmented Krylov subspaces,

$x_i^{(k)} \in \Omega + \mathcal{K}_k\big((I - P)\, A,\, (I - P)\, b\big)\,.$   (18)

In fact, the projected Krylov subspace $\mathcal{K}_k\big((I-P)A,\, (I-P)b\big)$, which is orthogonal to $\Omega$, is a Krylov subspace again, but now for $(I-P)A(I-P)$ instead of $A$ and $(I-P)\,b$ instead of $b$: Since $\Omega$ is $A$-invariant, i.e., for any $x \in \Omega$ there is a $y \in \Omega$ such that $A x = y$, we have

$(I - P)\, A\, (I - P) = (I - P)\, A\,,$   (19)

and thus

$\mathcal{K}_k\big((I-P)A,\, (I-P)\,b\big) = \mathcal{K}_k\big((I-P)A(I-P),\, (I-P)\,b\big)\,.$   (20)

To build a basis for this Krylov subspace we have to multiply by $(I-P)A$ instead of $A$ in every step, reflecting the fact that we have to project out the $\Omega$-part after every multiplication by $A$. This may result in quite considerable computational work: The work for one projection is of order $O(mn)$, because each of the $m$ Schur vectors is usually non-sparse.

We now turn to LR deflation. The idea is essentially the same as for Schur deflation, except that we use a different projector. As we will see below, this has a useful consequence: It removes the need to apply the projector in every step. Thus the effort for the projection has to be paid only once, instead of once per iteration (but see the comment after Eq. (22)).

The projector we use is an oblique projector onto $\Omega$, defined by $\Pi = R_m L_m^\dagger$, where $R_m = [r_1 | \cdots | r_m]$ is the matrix containing the right eigenvectors and $L_m = [l_1 | \cdots | l_m]$ is the matrix containing the left eigenvectors corresponding to the critical eigenvalues of $A$. With $\Lambda_m$ the diagonal matrix with the critical eigenvalues on its diagonal, the left and right eigenvectors satisfy

$A R_m = R_m \Lambda_m\,, \qquad L_m^\dagger A = \Lambda_m L_m^\dagger\,.$   (21)

The left and right eigenvectors are biorthogonal and are normalized such that $L_m^\dagger R_m = I_m$, thus ensuring $\Pi^2 = \Pi$.

As in the Schur deflation the projected Krylov subspace is a Krylov subspace. It is no longer orthogonal to $\Omega$ because the projector is oblique, but it now is a Krylov subspace for the original matrix $A$, since both $\Omega$ and $\operatorname{range}(I - \Pi)$ are $A$-invariant, so that $\Pi A = A \Pi$. Instead of (3) we now have

$A V_k = V_k H_k + h_{k+1,k}\, v_{k+1} e_k^\dagger\,, \qquad \operatorname{span}(V_k) = \mathcal{K}_k\big(A,\, (I - \Pi)\, b\big) \subseteq \operatorname{range}(I - \Pi)\,.$   (22)

Therefore, no additional projection is needed within the Arnoldi method when we build up a basis for this subspace. In computational practice, however, components outside of $\operatorname{range}(I - \Pi)$ will show up gradually due to rounding effects in floating-point arithmetic. It is thus necessary to apply $I - \Pi$ from time to time in order to eliminate these components. We will come back to this point in Section 4.1.

The numerical accuracy of the computed eigenvectors turned out to always be sufficient in our computations. Therefore, because of its greater efficiency, from now on we concentrate on LR rather than Schur deflation.

The overall approach is thus as follows: With the oblique projector $\Pi$ we split $f(A)\,b$ into the two parts

$f(A)\, b = f(A)\, \Pi b + f(A)\, (I - \Pi)\, b\,.$   (23)

Since we know the left and right eigenvectors which make up $\Pi$, using (2) we directly obtain

$f(A)\, \Pi b = R_m f(\Lambda_m)\, L_m^\dagger b\,.$   (24)

The remaining part $f(A)(I - \Pi)\, b$ can then be approximated iteratively by any of the approaches discussed in Section 2. Since the only effect of LR deflation is the replacement of $b$ by $(I - \Pi)\, b$ in (4), no modifications are necessary when using one of the restarted approaches.
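In code, the splitting (23)-(24) is only a few lines on top of any of the iterative schemes. The sketch below (Python/NumPy; the inputs R, L, lam are my names for the right/left eigenvector matrices with L^H R = I and the vector of critical eigenvalues) uses the Arnoldi-based sign_times_vector from Section 2 for the non-deflated part.

```python
import numpy as np

def sign_lr_deflated(A, b, R, L, lam, k):
    """LR-deflated evaluation of sign(A) b following (23)-(24):
    exact treatment of the m critical eigenpairs, Krylov approximation
    of sign(A) (I - Pi) b with the oblique projector Pi = R L^H."""
    Lb = L.conj().T @ b
    y_crit = R @ (np.sign(lam.real) * Lb)     # R sign(Lambda) L^H b, Eq. (24)
    b_rest = b - R @ Lb                       # (I - Pi) b
    return y_crit + sign_times_vector(A, b_rest, k)
```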

There is a beneficial effect of deflation on the number of poles to use when $f$ is approximated by a rational function $g$. Let $c = (c_1, \ldots, c_n)^T$ be the coefficient vector of $b$ when represented in the basis of right eigenvectors of $A$, i.e., $b = \sum_{i=1}^n c_i r_i$, and assume that we sorted them to put the critical eigenvectors first, i.e.,

$b = \sum_{i=1}^m c_i r_i + \sum_{i=m+1}^n c_i r_i\,.$   (25)

Then $\Pi b = \sum_{i=1}^m c_i r_i$ and $(I - \Pi)\, b = \sum_{i=m+1}^n c_i r_i$. So when approximating $f(A)(I - \Pi)\, b$ via a rational function $g$, we have $g(A)(I - \Pi)\, b = \sum_{i=m+1}^n c_i\, g(\lambda_i)\, r_i$. This shows that we only have to take care that $g$ approximates $f$ well on the non-critical eigenvalues (those in $\{\lambda_{m+1}, \ldots, \lambda_n\}$). Consequently, a good approximation can be obtained using a smaller number of poles as compared to the situation where we would have to approximate $f$ well on the full spectrum of $A$. This pole-reduction phenomenon can be very substantial, even if we deflate only a small number of eigenvalues, see Section 6.

4 Algorithms

In this section we present four algorithms, corresponding to the list in Section 2.3, to compute the action (23) of the sign function of a non-Hermitian matrix on a vector, using LR deflation for the first term $f(A)\, \Pi b$ and short-recurrence Krylov subspace methods for the remaining term $f(A)(I - \Pi)\, b$.

4.1 Restarted Arnoldi with rational functions

In this subsection we discuss methods based on restarted Arnoldi, corresponding to 1a) in Section 2.3. We assume that the original function $f$ is replaced by a rational function $g$ given by (5) which approximates $f$ sufficiently well. The choice of the rational function will be discussed in Section 5.

We start with LR-deflated restarted FOM. The resulting algorithm is given as Algorithm 1, where we use (8) to obtain the iterates for all shifted systems in the current cycle, and where we give the details on how to obtain the collinearity factors $\rho_i$ from (10) for the residuals, see also (18). Here, the notation FOM-LR($m$, $k$) indicates that we LR-deflate a subspace of dimension $m$ and that we restart FOM after a cycle of $k$ iterations. The vector $y$ is the approximation to $\operatorname{sign}(A)\,b$. After the completion of each cycle we perform a projection step to eliminate numerical contamination by components outside of $\operatorname{range}(I - \Pi)$, as discussed in Section 3 after (22).

Algorithm 1

Restarted FOM-LR($m$, $k$)

  {Input $A$, $b$, $s$, $\sigma_1, \ldots, \sigma_s$, $\omega_1, \ldots, \omega_s$, $R_m$, $L_m$, $\Lambda_m$, $k$}
  $y = R_m \operatorname{sign}(\Lambda_m)\, L_m^\dagger b$ {deflated part, see (24)}
  $r = b - R_m (L_m^\dagger b)$ {$r = (I - \Pi)\, b$}
  $\rho_i = 1$, $i = 1, \ldots, s$
  while not all systems are converged do {loop over restart cycles}
     $\beta = \|r\|_2$
     $v_1 = r / \beta$
     compute $V_{k+1}$, $H_k$ by running $k$ steps of Arnoldi with $A$
     for $i = 1, \ldots, s$ do
        $z_i = \rho_i \beta\, (H_k - \sigma_i I)^{-1} e_1$, then $\rho_i = -h_{k+1,k}\, e_k^\dagger z_i$ {collinearity factor, see (10)}
     end for
     $y = y + \sum_{i=1}^s \omega_i V_k z_i$
     $r = v_{k+1}$
     $r = (I - R_m L_m^\dagger)\, r$ {projection step}
  end while

Since Algorithm 1 will be used in our numerical experiments, we now analyze the main contributions to its computational cost.

Proposition 1

Let $C_A$ denote the cost for one matrix-vector multiplication by the matrix $A$, and let $k_{\mathrm{total}}$ be the total number of such matrix-vector multiplications performed. The computational cost of Algorithm 1 is given as

$C_{\mathrm{Alg.\,1}} = k_{\mathrm{total}}\, C_A + O(k\, k_{\mathrm{total}}\, n) + O(m n\, k_{\mathrm{total}} / k)\,.$   (26)

To see this, let us discuss the dominating contributions to the computational cost in one sweep of the while-loop. Computing $V_{k+1}$ and $H_k$ with the Arnoldi process has cost $k\, C_A + O(k^2 n)$, since the $j$-th step requires one matrix-vector multiplication and $j$ inner products, vector additions and scalings. Since we can solve systems with the upper Hessenberg matrices $H_k - \sigma_i I$ with cost $O(k^2)$, the total cost for the computation of all the vectors $z_i$ is $O(s k^2)$, which can be neglected compared to the cost contained in the Arnoldi process. Updating $y$ with the linear combination of the columns of $V_k$ has cost $O(k n)$, which can again be neglected compared to the cost in the Arnoldi process. The final projection step has cost $O(m n)$. Multiplying these costs by the number $k_{\mathrm{total}}/k$ of sweeps through the while-loop gives the total cost. The initial steps prior to the while-loop have cost $O(m n)$, which is dominated by the last term of Eq. (26).

We now formulate the LR-deflated restarted GMRES algorithm. Let us first introduce the matrix

$\hat H_k = \begin{pmatrix} H_k \\ h_{k+1,k}\, e_k^\dagger \end{pmatrix} \in \mathbb{C}^{(k+1) \times k}\,,$   (27)

through which the Arnoldi relation (3) can be summarized as $A V_k = V_{k+1} \hat H_k$. We choose the first system (with shift $\sigma_1$) to be the seed system, i.e., the system for which we run “true” restarted GMRES. This implies that we have to solve a least squares problem involving $\hat H_k - \sigma_1 \hat I_k$ to get the corresponding iterate. Here the matrix $\hat I_k$ denotes the $k$-dimensional identity matrix extended with an extra row of zeros. For the other shifts we impose the collinearity constraint for the residuals. The corresponding iterates are now obtained via solutions of $(k+1) \times (k+1)$ linear systems. For a detailed derivation we refer to (19), and the detailed algorithmic formulation is given in Algorithm 2.

Algorithm 2

Restarted GMRES-LR($m$, $k$)

  {Input $A$, $b$, $s$, $\sigma_1, \ldots, \sigma_s$, $\omega_1, \ldots, \omega_s$, $R_m$, $L_m$, $\Lambda_m$, $k$}
  $y = R_m \operatorname{sign}(\Lambda_m)\, L_m^\dagger b$ {deflated part, see (24)}
  $r = b - R_m (L_m^\dagger b)$ {$r = (I - \Pi)\, b$}
  $\rho_i = 1$, $i = 1, \ldots, s$
  $\beta = \|r\|_2$
  while not all systems are converged do {loop over restart cycles}
     $v_1 = r / \beta$
     compute $V_{k+1}$, $\hat H_k$ by running $k$ steps of Arnoldi for $A$
     compute $z_1$ as the minimizer of $\|\beta e_1 - (\hat H_k - \sigma_1 \hat I_k)\, z\|_2$ and put $\hat c = \beta e_1 - (\hat H_k - \sigma_1 \hat I_k)\, z_1$
     for $i = 2, \ldots, s$ do
        compute $z_i$ and $\rho_i^{\mathrm{new}}$ as the solution of the $(k+1) \times (k+1)$ system $(\hat H_k - \sigma_i \hat I_k)\, z_i + \rho_i^{\mathrm{new}} \hat c = \rho_i \beta e_1$, then put $\rho_i = \rho_i^{\mathrm{new}}$
     end for
     $y = y + \sum_{i=1}^s \omega_i V_k z_i$
     $r = V_{k+1} \hat c$
     $\beta = \|\hat c\|_2$
     $r = (I - R_m L_m^\dagger)\, r$ {projection step}
  end while

4.2 Two-sided Lanczos with rational functions

We now turn to methods based on two-sided Lanczos, corresponding to 2a) in Section 2.3. In this case there is no need for restarts because the two-sided Lanczos process uses only short recurrences anyway. We summarize a high-level view of the resulting computational method using multishift BiCG as Algorithm 3. The changes necessary to obtain multishift BiCGStab/QMR should be obvious.

Algorithm 3

BiCG-LR($m$)

  {Input $A$, $b$, $s$, $\sigma_1, \ldots, \sigma_s$, $\omega_1, \ldots, \omega_s$, $R_m$, $L_m$, $\Lambda_m$}
  $y = R_m \operatorname{sign}(\Lambda_m)\, L_m^\dagger b$ {deflated part, see (24)}
  $\hat b = b - R_m (L_m^\dagger b)$ {$\hat b = (I - \Pi)\, b$}
  $x_i^{(0)} = 0$, $i = 1, \ldots, s$
  for $k = 1, 2, \ldots$ until all systems are converged do
     compute the $k$-th BiCG iterates $x_1^{(k)}, \ldots, x_s^{(k)}$ for the systems $(A - \sigma_i I)\, x_i = \hat b$ with multishift BiCG
  end for
  $y = y + \sum_{i=1}^s \omega_i x_i^{(k)}$

4.3 Direct application of the two-sided Lanczos approach

We now consider the two-sided Lanczos approach for $f(A)\,b$ as given in (14), corresponding to 2b) in Section 2.3. The resulting computational method is summarized as Algorithm 4. Note that due to the deflation this algorithm uses a modified shadow vector: We remove from the shadow vector all components along the right eigenvectors of $A^\dagger$, i.e., the left eigenvectors of $A$, corresponding to critical eigenvalues. With this modified shadow vector, the biorthogonality relation enforced by the two-sided Lanczos process numerically helps keeping the computed basis free of contributions from the right critical eigenvectors, as it should be in exact arithmetic. In Algorithm 4, the parameter $m$ denotes the number of deflated eigenvalues, and $k_{\max}$ is the maximum dimension of the Krylov subspace being built, a parameter which has to be fixed a priori.

Algorithm 4

Direct two-sided Lanczos-LR($m$, $k_{\max}$)

  {Input $A$, $b$, $m$, $k_{\max}$, $R_m$, $L_m$, $\Lambda_m$}
  $y = R_m \operatorname{sign}(\Lambda_m)\, L_m^\dagger b$ {deflated part, see (24)}
  $\hat b = b - R_m (L_m^\dagger b)$ {$\hat b = (I - \Pi)\, b$}
  $\beta = \|\hat b\|_2$
  $\tilde b = \hat b - L_m (R_m^\dagger \hat b)$ {the modified shadow vector}
  put $v_1 = \hat b / \beta$, $w_1 = \tilde b$, choose the normalization of $w_1$ s.t. $w_1^\dagger v_1 = 1$
  for $k = 1, \ldots, k_{\max}$ do
     update $V_k$, compute $T_k$, $v_{k+1}$ and $w_{k+1}$ from the two-sided Lanczos process (12), (13)
  end for
  put $y = y + \beta\, V_{k_{\max}} \operatorname{sign}(T_{k_{\max}})\, e_1$

We now analyze the main contributions to the computational cost of Algorithm 4, which will also be used in our numerical tests.

Proposition 2

Let $C_A$ denote the cost for one matrix-vector multiplication by the matrix $A$ (or $A^\dagger$), and let $k$ be the total number of iterations performed, i.e., $k = k_{\max}$ from Algorithm 4. The computational cost of Algorithm 4 is given as

$C_{\mathrm{Alg.\,4}} = 2 k\, C_A + O(k n) + O(k^3) + O(m n)\,.$   (28)

To see this, we discuss the dominating contributions to the computational cost as we did for Algorithm 1. The initialization phase has cost $O(m n)$, since it involves products with $R_m, L_m \in \mathbb{C}^{n \times m}$. In each sweep through the for-loop, updating $T_k$ and the Lanczos vectors has cost $2 C_A + O(n)$, which gives a total of $2 k\, C_A + O(k n)$. The last line of the algorithm requires $O(k^3)$ operations to compute $\operatorname{sign}(T_{k_{\max}})\, e_1$ and additional $O(k n)$ operations to get $y$.

5 Choice of the rational function

Figure 1: Error of the Zolotarev rational approximation (left) and the Neuberger rational approximation (right) to the sign function, shown as contour lines of the approximation error in the complex plane. Both rational approximations are of the form (5). The white spots on the imaginary axis mark the poles of the rational approximation that lie in the region shown.

In this section we address the issue of how to find good rational approximations to the sign function in the non-Hermitian case.

In the Hermitian case, if we know intervals $[-b, -a]$ and $[a, b]$ (with $0 < a < b$) which contain the (deflated) spectrum of $A$, the sign function of $A$ can be approximated using the Zolotarev best rational approximation, see (28) and, e.g., (29); (6). Using the Zolotarev approximation on non-Hermitian matrices gives rather poor results, unless all eigenvalues are close to the real axis (see the left plot in Figure 1). A better choice for generic non-Hermitian matrices is the rational approximation originally suggested by Kenney and Laub (30) and used by Neuberger (31); (32) for vanishing chemical potential,

$s_n(t) = \frac{(1 + k t)^{2n} - (1 - k t)^{2n}}{(1 + k t)^{2n} + (1 - k t)^{2n}}\,.$   (29)

Note that $s_n(t) = \tanh\big(2 n \operatorname{artanh}(k t)\big)$, and $s_n(t) \to \operatorname{sign}(\operatorname{Re}(t))$ for $n \to \infty$. The partial fraction expansion of $s_n$ is known to be

$s_n(t) = \frac{k t}{n} \sum_{i=1}^{n} \frac{1}{(k t)^2 \cos^2\!\big(\tfrac{(2i-1)\pi}{4n}\big) + \sin^2\!\big(\tfrac{(2i-1)\pi}{4n}\big)}\,,$   (30)

see (30); (31). In (29), $k$ is a parameter which one chooses to minimize the number of poles of the partial fraction expansion (30), see (6) and the discussion after Theorem 1 below. Whereas for the Zolotarev approximation the regions of good approximation are concentrated along the real axis, the approximation $s_n$ approaches $\operatorname{sign}$ well on circles to the left and right of the imaginary axis, see the right plot in Figure 1. For this reason, the Neuberger approximation is better suited for generic non-Hermitian matrices. All we need is some a priori information on the spectrum from which we can determine an appropriate circle $\mathcal{C}$ in the right half-plane, centered on the real axis, such that $\mathcal{C}$ together with $-\mathcal{C}$ contains all the eigenvalues. We then can compute the degree $n$ of the Neuberger approximation such that the sign function is approximated to a given accuracy on $\mathcal{C} \cup (-\mathcal{C})$.
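For concreteness, here is a small sketch that evaluates the Neuberger approximation through its partial fraction expansion (30). As a quick consistency check, for n = 1 it reduces to 2kt/(1 + (kt)^2) = tanh(2 artanh(kt)). Note that, applied to a matrix argument, each pole of (30) contributes one shifted system in the squared operator, which is where the multishift solvers of Section 2 enter.

```python
import numpy as np

def neuberger_poles(n):
    """cos^2 and sin^2 factors of the n poles in the expansion (30)."""
    theta = (2 * np.arange(1, n + 1) - 1) * np.pi / (4 * n)
    return np.cos(theta)**2, np.sin(theta)**2

def s_n(t, n, k=1.0):
    """Evaluate s_n(t) of (29) via the partial fraction expansion (30);
    t may be a complex scalar or an array."""
    c2, s2 = neuberger_poles(n)
    kt = np.asarray(k * t, dtype=complex)
    return (kt / n) * np.sum(1.0 / (np.multiply.outer(kt**2, c2) + s2), axis=-1)
```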

The following theorem gives the insight necessary for this approach to work.

Theorem 1

For given $\varepsilon > 0$ and $n \in \mathbb{N}$ we have (setting $k = 1$ in (29))

$|s_n(z) - 1| \le \varepsilon \quad \text{for all } z \text{ on or inside the circle } \mathcal{C}_r\,,$   (31)

where $\mathcal{C}_r$ is the circle with radius $r = \frac{2\delta}{1 - \delta^2}$ and center $m_r = \frac{1 + \delta^2}{1 - \delta^2}$, with $\delta = \big(\frac{\varepsilon}{2 + \varepsilon}\big)^{1/2n}$.

Proof. Assume that $z$ is in the right half-plane (the case of $z$ in the left half-plane can be treated in a completely analogous manner). With $d = \big(\frac{1-z}{1+z}\big)^{2n}$ we write $s_n(z) = \frac{1-d}{1+d}$ such that $s_n(z) - 1 = \frac{-2d}{1+d}$. Therefore $|s_n(z) - 1| \le \varepsilon$ if and only if $\big|\frac{2d}{1+d}\big| \le \varepsilon$.

Since $\big|\frac{2d}{1+d}\big| \le \frac{2|d|}{1-|d|}$, a sufficient condition for $|s_n(z) - 1| \le \varepsilon$ is $\frac{2|d|}{1-|d|} \le \varepsilon$, which is equivalent to

$|d| \le \frac{\varepsilon}{2 + \varepsilon} = \delta^{2n}\,.$   (32)

Let $z$ be on the circle $\mathcal{C}_r$, i.e., $z = m_r + r e^{i\varphi}$. Then

$\Big|\frac{1-z}{1+z}\Big|^2 = \frac{(1 - m_r)^2 + r^2 - 2 r (1 - m_r) \cos\varphi}{(1 + m_r)^2 + r^2 + 2 r (1 + m_r) \cos\varphi}\,.$   (33)

In fact we have $m_r^2 - r^2 = 1$ and thus

$\Big|\frac{1-z}{1+z}\Big|^2 = \frac{m_r - 1}{m_r + 1} = \delta^2\,, \qquad \text{so that} \quad |d| = \Big|\frac{1-z}{1+z}\Big|^{2n} = \delta^{2n} = \frac{\varepsilon}{2 + \varepsilon}\,.$   (34)

So we have shown that $|s_n(z) - 1| \le \varepsilon$ on the boundary of the circle $\mathcal{C}_r$, and since $s_n - 1$ is analytic inside $\mathcal{C}_r$, by the maximum modulus principle this also holds for $z$ inside the circle.
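A quick numerical check of Theorem 1, using the s_n sketch from above with k = 1: on the boundary circle determined by delta, the error indeed stays below epsilon.

```python
import numpy as np

eps, n = 1e-8, 16
delta = (eps / (2 + eps))**(1.0 / (2 * n))
m_r = (1 + delta**2) / (1 - delta**2)
r = 2 * delta / (1 - delta**2)
z = m_r + r * np.exp(1j * np.linspace(0, 2 * np.pi, 1001))  # boundary of C_r
print(np.max(np.abs(s_n(z, n) - 1.0)) <= eps)               # True
```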

The parameter $k$ in (29) can now be used in order to optimize the number of poles $n$ for a given target accuracy $\varepsilon$ if the spectrum of the operator is known to be contained in the union of two circles $\mathcal{C} \cup (-\mathcal{C})$, where $\mathcal{C}$ is the circle with center $c$ and radius $\rho$, $c$ is real, and $c > \rho > 0$. For symmetry reasons it is again sufficient to discuss only the circle in the right half-plane, $\mathcal{C}$. Note that $n$ is a positive integer. Restricting the function $s_n$ to real arguments, we see that it is positive on $(0, \infty)$, monotonically increasing on $(0, 1/k]$, and that $s_n(1/(k^2 t)) = s_n(t)$ as well as $s_n(1/k) = 1$. The maximum error on the real interval $[a, b] = [c - \rho, c + \rho]$ is therefore smallest if $k$ is chosen such that the scaled interval $[k a, k b]$ is of the form $[x_0, 1/x_0]$. This is the case for $k = 1/\sqrt{ab}$ with $x_0 = \sqrt{a/b}$, see also (6). For this choice of $k$ we see that $z$ is in $\mathcal{C}$ if and only if $k z$ is in $\mathcal{C}_r$ with $r = k\rho$ and $m_r = kc$, which satisfy $m_r^2 - r^2 = 1$. But $\mathcal{C}_r$ is precisely of the form that was considered in Theorem 1, with $\delta = (m_r - 1)/r$. Therefore, if we want the error to be smaller than $\varepsilon$ for $z \in \mathcal{C}$, Theorem 1 tells us that it is sufficient to require $\delta^{2n} \le \varepsilon/(2 + \varepsilon)$. Solving for $n$ we see that this precision is obtained if the number of poles satisfies

$n \ge \frac{\log\big((2 + \varepsilon)/\varepsilon\big)}{2 \log(1/\delta)}\,.$   (35)
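Putting the pieces together, the number of poles for a spectrum enclosed by the circle C (center c, radius rho on the positive real axis) can be computed as in this sketch, which applies the optimal scaling k = 1/sqrt((c - rho)(c + rho)) so that the scaled circle satisfies m_r^2 - r^2 = 1:

```python
import numpy as np

def neuberger_degree(eps, c, rho):
    """Smallest n with |s_n(z) - sign(z)| <= eps on C u (-C), where C is
    the circle with real center c > rho > 0 and radius rho, after rescaling
    with k = 1/sqrt(c**2 - rho**2); see Theorem 1 and Eq. (35)."""
    k = 1.0 / np.sqrt(c**2 - rho**2)
    delta = (k * c - 1.0) / (k * rho)            # = (m_r - 1)/r
    n = np.log((2 + eps) / eps) / (2 * np.log(1 / delta))
    return int(np.ceil(n)), k
```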

6 Numerical results

This section contains the results of several numerical experiments comparing some of the methods developed in this paper. We only present results for Algorithms 1 and 4, since the results for Algorithms 2 and 3 are very similar to those of Algorithm 1 (20). Algorithms 1 and 4 as described in Section 4 were applied to compute $\operatorname{sign}(A)\,b$, where $A$ is the “Hermitian” Wilson Dirac operator $H_W(\mu)$ at nonzero chemical potential, for generic QCD gauge field configurations on lattices of various sizes; see (5) for the notation of the lattice parameters.

In Algorithm 1 one has to decide which rational approximation to use. This decision should be made depending on the spectrum of $A$. Even though in lattice QCD the eigenvalues do not move far away from the real axis for reasonable values of $\mu$, we adopted a conservative strategy and used the Neuberger approximation in our numerical experiments. As we discussed at the end of Section 5, in order to use a Neuberger rational approximation we have to determine circles $\mathcal{C}$ and $-\mathcal{C}$ which should contain all the eigenvalues (except the ones that have been deflated). Of course, we cannot precompute the whole spectrum, so we have to rely on a reasonable heuristics. From the deflation process we know a parameter $r_{\min}$ such that all non-deflated eigenvalues have modulus larger than $r_{\min}$. We also precomputed the eigenvalue which is largest in modulus, with value $r_{\max}$. The heuristics, which is confirmed by additional numerical experiments, is to assume that for reasonable values of $\mu$ all eigenvalues are contained in the two circles centered on the real line and intersecting it at the points $\pm r_{\min}$ and $\pm r_{\max}$. This gives $c = (r_{\min} + r_{\max})/2$ and $\rho = (r_{\max} - r_{\min})/2$. The number of poles to use is now given by (35) together with the corresponding (scaled) Neuberger approximation. Note that this approach is quite defensive, since it allows eigenvalues to deviate substantially from the real axis if their real parts are not close to $\pm r_{\min}$ or $\pm r_{\max}$.