Comparing iterative methods to compute the overlap Dirac operator at nonzero chemical potential


The overlap Dirac operator at nonzero quark chemical potential involves the computation of the sign function of a non-Hermitian matrix. In this talk we present iterative Krylov subspace approximations, with deflation of critical eigenvalues, which we developed to compute the operator on large lattices. We compare the accuracy and efficiency of two alternative approximations based on the Arnoldi and on the two-sided Lanczos method. The short recurrences used in the latter method make it faster and more effective for realistic lattice simulations.


The XXVI International Symposium on Lattice Field Theory
July 14-19, 2008
Williamsburg, Virginia, USA

1 The overlap Dirac operator at nonzero chemical potential

Although chiral symmetry cannot be implemented exactly in a space-time discretization of QCD [Nielsen:1981hk], a lattice version of chiral symmetry can be obtained by a renormalization group blocking transformation. The ensuing lattice chiral symmetry is embodied by the Ginsparg-Wilson relation [Ginsparg:1981bj], which is solved by the overlap Dirac operator proposed by Narayanan and Neuberger [Narayanan:1994gw, Neuberger:1997fp].

Astrophysical objects like neutron stars exhibit an abundance of quarks over anti-quarks, and the study of QCD in such a background necessitates the introduction of a quark chemical potential in the QCD Lagrangian. In this situation chiral symmetry still holds, and in Ref. [Bloch:2006cd] two of the present authors proposed an extension of the overlap Dirac operator to nonzero chemical potential which still satisfies the Ginsparg-Wilson relation,

$$ D_\text{ov}(\mu) = 1 + \gamma_5 \,\text{sgn}\big(H_\text{W}(\mu)\big), \qquad (1) $$

where $H_\text{W}(\mu) = \gamma_5 D_\text{W}(\mu)$ with the Dirac gamma matrices $\gamma_\nu$ in Euclidean space, $\text{sgn}$ is the matrix sign function, and

$$ D_\text{W}(\mu) = 1 - \kappa \sum_{i=1}^{3} \big(T_i^+ + T_i^-\big) - \kappa \big(e^{\mu} T_4^+ + e^{-\mu} T_4^-\big) \qquad (2) $$

is the Wilson Dirac operator at nonzero chemical potential [Hasenfratz:1983ba] with $(T_\nu^\pm)_{yx} = (1 \pm \gamma_\nu)\, U_{x,\pm\nu}\, \delta_{y,x\pm\hat\nu}$, $\kappa = 1/(8 + 2m_\text{W})$, and $U_{x,\nu} \in$ SU(3). The exponential factors $e^{\pm\mu}$ implement the quark chemical potential on the lattice.

For $\mu \neq 0$ the argument of the sign function becomes non-Hermitian, and one is faced with the problem of defining and computing the sign function of a non-Hermitian matrix. If the matrix $A$ of dimension $n$ is diagonalizable, then $A = U \Lambda U^{-1}$ with eigenvalues $\lambda_i$ and eigenvector matrix $U$, and a function $f(A)$ can be computed using the spectral definition [Golub]

$$ f(A) = U f(\Lambda)\, U^{-1}, \qquad f(\Lambda) = \text{diag}\big(f(\lambda_1), \ldots, f(\lambda_n)\big). \qquad (3) $$

If $A$ is not diagonalizable one can still apply an extension of the spectral definition using the Jordan canonical form. As the eigenvalues of a general matrix can be complex, the sign function computed using Eq. (3) requires the sign of a complex number, which can be defined by

$$ \text{sgn}(z) = \frac{z}{\sqrt{z^2}} = \text{sgn}(\text{Re}\, z), \qquad (4) $$

where the branch cut of the square root is chosen along the negative real axis. This definition ensures that $\text{sgn}(z)^2 = 1$ and gives the usual result when $z$ is real. It is straightforward to check that this definition ensures that $\text{sgn}(H_\text{W})^2 = 1$. Therefore the overlap Dirac operator at $\mu \neq 0$ satisfies the Ginsparg-Wilson relation, and the operator can have exact zero modes with definite chirality. The main properties of the operator were discussed in Ref. [Bloch:2007xi].
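The spectral definition (3) with the sign convention (4) can be sketched in a few lines of generic numpy (a brute-force illustration for small dense matrices only; the function names and test matrix are ours, not part of any lattice code):

```python
import numpy as np

def csign(z):
    # sgn(z) = z / sqrt(z^2) with the branch cut of the square root along
    # the negative real axis; this reduces to the sign of Re(z)
    return 1.0 if z.real >= 0 else -1.0

def matrix_sign(A):
    # spectral definition, Eq. (3): A = U diag(lambda) U^{-1},
    # sgn(A) = U diag(sgn(lambda)) U^{-1}; assumes A is diagonalizable
    lam, U = np.linalg.eig(A)
    s = np.array([csign(l) for l in lam])
    return U @ np.diag(s) @ np.linalg.inv(U)

# small non-Hermitian example; sgn(A)^2 should equal the identity
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 0.1j * rng.standard_normal((6, 6))
S = matrix_sign(A)
err = np.linalg.norm(S @ S - np.eye(6))
```

The check on `err` verifies numerically that $\text{sgn}(A)^2 = 1$, the property that guarantees the Ginsparg-Wilson relation above.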

Since its introduction the operator has been validated in a number of studies: its definition is consistent with that of domain-wall fermions at nonzero chemical potential when the extent of the fifth dimension is taken to infinity and its lattice spacing to zero [Bloch:2007xi], its microscopic spectral density and first eigenvalue distribution computed from quenched lattice simulations agree with the predictions of non-Hermitian chiral random matrix theory [Bloch:2006cd, Akemann:2007yj], and the free-fermion energy density was shown to have the correct continuum limit [Gattringer:2007uu, Banerjee:2008ii].

2 Arnoldi approximation for a function of a non-Hermitian matrix

The exact computation of the sign function using the spectral definition (3) is only feasible for small lattice volumes, as the memory required to store the full matrix and the time needed to perform a full diagonalization become prohibitively large for realistic lattice volumes. Therefore it is necessary to develop iterative Krylov subspace methods that compute $f(A)x$ for a given source vector $x$. For Hermitian matrices the corresponding methods have already reached a high level of sophistication and are widely applied, but for non-Hermitian matrices such methods are novel, except for the special cases of the inverse and the exponential function. The Arnoldi approximation discussed below was first introduced in Ref. [Bloch:2007aw], while the two-sided Lanczos approximation of Sec. 4 was presented in Ref. [Breu2007].

Such iterative methods are based on the observation that the unique polynomial $P_{n-1}$ of degree $n-1$ which interpolates $f$ at all the eigenvalues of $A$ satisfies the equality

$$ f(A)\, x = P_{n-1}(A)\, x, $$

as follows from the definition (3). Therefore it is an obvious endeavor to construct a good low-degree polynomial approximation to $f(A)x$, taking into account the spectrum of $A$ and the decomposition of $x$ in the eigenvectors of $A$.

In order to construct such a low-order polynomial approximation to $f(A)x$ we consider the Krylov subspace $\mathcal{K}_k(A,x) = \text{span}(x, Ax, A^2 x, \ldots, A^{k-1}x)$ with $k \ll n$. By definition this subspace contains all the vectors resulting from the action of an arbitrary polynomial of degree $k-1$ in $A$ on the source vector $x$. One of these vectors, namely the orthogonal projection of $f(A)x$ on the Krylov subspace, minimizes $\|f(A)x - P_{k-1}(A)x\|$ over all polynomials $P_{k-1}$ of degree $k-1$.

The Arnoldi method uses the recursive scheme

$$ \beta = \|x\|, \qquad v_1 = x/\beta, \qquad (5) $$

$$ h_{j,k} = v_j^\dagger A\, v_k, \qquad j = 1, \ldots, k, \qquad (6) $$

$$ h_{k+1,k}\, v_{k+1} = A v_k - \sum_{j=1}^{k} h_{j,k}\, v_j \qquad (7) $$

to build an orthonormal basis $V_k = (v_1, \ldots, v_k)$ in $\mathcal{K}_k(A,x)$, where $\beta$ is the normalization of $x$, $v_j$ is the $j$-th basis vector, and $h_{k+1,k} > 0$ is fixed by $\|v_{k+1}\| = 1$. The matrix $H_k = (h_{j,i})$ is a Hessenberg matrix (upper triangular + one subdiagonal) of dimension $k$, whose eigenvalues are called the Ritz values of $A$ w.r.t. $\mathcal{K}_k(A,x)$.

Once the Arnoldi basis has been constructed, the projection of $f(A)x$ on $\mathcal{K}_k(A,x)$ can be written as

$$ f(A)x \big|_{\mathcal{K}_k} = V_k V_k^\dagger\, f(A)\, x. \qquad (8) $$

This formal expression requires the knowledge of the exact solution and is therefore of no practical use. However, using the Ritz approximation

$$ f(A)\, V_k \approx V_k\, f(H_k), \qquad (9) $$

which is based on Eq. (7), allows us to approximate the projection by

$$ f(A)\, x \approx V_k\, f(H_k)\, V_k^\dagger x = \beta\, V_k\, f(H_k)\, e_1. \qquad (10) $$

By construction this approximation is an element of the Krylov subspace $\mathcal{K}_k(A,x)$, and it can be shown that the implicit polynomial constructed by this approximation interpolates $f$ at the Ritz values of $A$ w.r.t. $\mathcal{K}_k(A,x)$ [saad:209]. Note that only the first column of $f(H_k)$ is needed in Eq. (10).
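The Arnoldi approximation can be sketched as follows in generic dense numpy (an illustration of the method, not the authors' lattice code; the inner sign function is here computed by brute-force spectral decomposition, standing in for whatever inner method one prefers):

```python
import numpy as np

def sign_spectral(M):
    # exact sign via spectral decomposition, used for the small inner matrix
    lam, U = np.linalg.eig(M)
    s = np.where(lam.real >= 0, 1.0, -1.0)
    return U @ np.diag(s) @ np.linalg.inv(U)

def arnoldi_funm(A, x, k, f):
    # Approximate f(A) x by beta * V_k f(H_k) e_1, Eq. (10): V_k is an
    # orthonormal Arnoldi basis of K_k(A, x) and H_k the projected
    # Hessenberg matrix.
    n = len(x)
    V = np.zeros((n, k + 1), dtype=complex)
    H = np.zeros((k + 1, k), dtype=complex)
    beta = np.linalg.norm(x)
    V[:, 0] = x / beta
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):                 # long recurrence: orthogonalize
            H[i, j] = np.vdot(V[:, i], w)      # against ALL previous vectors
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                # happy breakdown: the Krylov
            k = j + 1                          # space is exhausted early
            break
        V[:, j + 1] = w / H[j + 1, j]
    fH = f(H[:k, :k])
    return beta * V[:, :k] @ fH[:, 0]          # only the first column enters
```

The nested loop is the long recurrence discussed in Sec. 4: its cost grows with the number of already-constructed basis vectors, which is what the two-sided Lanczos method avoids.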

This approximation reduces the problem to the computation of $f(H_k)$ with $k \ll n$, which makes it very useful in practice. The inner sign function can then be computed with one's method of choice. This could be an exact spectral decomposition if $k$ is small, or some suitable approximation method, e.g., Roberts' matrix-iterative method for the sign function,

$$ S_{m+1} = \tfrac{1}{2}\big(S_m + S_m^{-1}\big), \qquad S_0 = H_k, \qquad (11) $$

which converges quadratically to $\text{sgn}(H_k)$.
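Roberts' iteration is simple enough to state in a few lines (a sketch with our own naming and stopping rule; in practice it is applied to the small inner matrix):

```python
import numpy as np

def sign_newton(A, tol=1e-12, maxit=50):
    # Roberts' iteration S_{m+1} = (S_m + S_m^{-1}) / 2 with S_0 = A;
    # converges quadratically to sgn(A) provided A has no purely
    # imaginary eigenvalues.
    S = np.array(A, dtype=complex)
    for _ in range(maxit):
        S_next = 0.5 * (S + np.linalg.inv(S))
        if np.linalg.norm(S_next - S) <= tol * np.linalg.norm(S_next):
            return S_next
        S = S_next
    return S
```

Since each step requires a matrix inversion, the iteration is only attractive for the small projected matrix, not for the full Dirac operator.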

3 Sign function and deflation

When computing the sign function of the non-Hermitian matrix $H_\text{W}(\mu)$, an additional problem occurs when the matrix has eigenvalues of small magnitude, as large Krylov subspaces are then required to achieve good accuracy. The reason is that the sign function cannot be approximated well by a low-degree polynomial over the entire spectrum if it varies too rapidly in a small subregion. A solution to this problem is to use the exact value of the function for a number of critical eigenvalues. Over the remaining part of the spectrum the function should then behave well enough to allow for a low-degree polynomial approximation.

Implementing this so-called deflation is straightforward in the Hermitian case, where it is based on the fact that any number of eigenvectors span a subspace orthogonal to the remaining eigenvectors. In the non-Hermitian case the eigenvectors of are no longer orthogonal, and a more involved approach is needed. We have previously proposed two alternative deflation variants for this case [Bloch:2007aw]. Herein we only present the left-right (LR) deflation and refer to Ref. [Bloch:2007aw] for the details of the Schur deflation.

The method needs the left and right eigenvectors belonging to the $m$ critical eigenvalues,

$$ A R = R \Lambda, \qquad L^\dagger A = \Lambda L^\dagger, \qquad (12) $$

where $\Lambda$ is the diagonal eigenvalue matrix for the critical eigenvalues and $R$ and $L$ are the matrices of right and left eigenvectors, respectively. These matrices can be made bi-orthonormal, i.e., $L^\dagger R = \mathbb{1}$, such that $P = R L^\dagger$ is an oblique projector on the subspace $\Omega_R$ spanned by the right eigenvectors.

If the source vector $x$ is decomposed as

$$ x = R \hat{x} + \tilde{x}, \qquad \hat{x} = L^\dagger x, \qquad \tilde{x} = (1 - R L^\dagger)\, x, \qquad (13) $$

where $R\hat{x}$ is an oblique projection of $x$ on $\Omega_R$ and $L^\dagger \tilde{x} = 0$, then the matrix function can be written as

$$ f(A)\, x = R\, f(\Lambda)\, \hat{x} + f(A)\, \tilde{x}. \qquad (14) $$

The contribution of the first term can be computed exactly, while the second term can be approximated by the Arnoldi method described in Sec. 2 applied to the Krylov subspace $\mathcal{K}_k(A, \tilde{x})$. The final approximation is then given by

$$ f(A)\, x \approx R\, f(\Lambda)\, \hat{x} + \beta\, V_k\, f(H_k)\, e_1, \qquad \beta = \|\tilde{x}\|. \qquad (15) $$

As before, the function of the inner matrix should be computed with a suitable method. The critical eigenvalues with their left and right eigenvectors need to be computed only once, and can then be reused for all source vectors $x$.
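The LR-deflation of this section can be sketched as follows (for illustration the critical eigenpairs come from a full dense eigendecomposition rather than ARPACK, and `krylov_sign` stands for any approximation of $\text{sgn}(A)v$, e.g. the Arnoldi method of Sec. 2; all names are ours):

```python
import numpy as np

def sign_lr_deflated(A, x, m, krylov_sign):
    # LR-deflation: treat the m smallest-magnitude eigenvalues exactly
    # and approximate the sign function on the deflated remainder.
    lam, R_all = np.linalg.eig(A)
    idx = np.argsort(np.abs(lam))[:m]          # m critical eigenvalues
    lam_c = lam[idx]
    R = R_all[:, idx]                          # right eigenvectors
    # the rows of R_all^{-1} are left eigenvectors, bi-orthonormal to the
    # columns of R_all, so L^dagger R = 1 holds by construction
    Ldag = np.linalg.inv(R_all)[idx, :]
    x_hat = Ldag @ x                           # oblique projection coefficients
    x_tilde = x - R @ x_hat                    # deflated part: L^dagger x_tilde = 0
    s = np.where(lam_c.real >= 0, 1.0, -1.0)   # exact sign on critical eigenvalues
    return R @ (s * x_hat) + krylov_sign(A, x_tilde)
```

The exact term and the Krylov term are completely decoupled, so the same precomputed eigenpairs can be reused for every new source vector.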

4 Two-sided Lanczos approximation

The Arnoldi method described in Sec. 2 suffers from the long recurrences used to orthogonalize the Arnoldi basis. As an alternative we now consider the two-sided Lanczos method which uses short recurrences at the cost of giving up orthogonality for bi-orthogonality.

Consider the two Krylov subspaces $\mathcal{K}_k(A, x)$ and $\mathcal{K}_k(A^\dagger, x)$ and construct two bi-orthogonal bases $V_k = (v_1, \ldots, v_k)$ and $W_k = (w_1, \ldots, w_k)$ such that $W_k^\dagger V_k = \mathbb{1}$ and the matrix

$$ T_k = W_k^\dagger A\, V_k \qquad (16) $$

is tridiagonal. It can be shown that $V_k$ and $W_k$ can be built using the short recurrence relations

$$ \gamma_k\, v_{k+1} = A v_k - \alpha_k v_k - \beta_{k-1} v_{k-1}, \qquad \bar\beta_k\, w_{k+1} = A^\dagger w_k - \bar\alpha_k w_k - \bar\gamma_{k-1} w_{k-1}, \qquad (17) $$

where $\alpha_k = w_k^\dagger A v_k$, $\beta_{k-1} = w_{k-1}^\dagger A v_k$, $\gamma_{k-1} = w_k^\dagger A v_{k-1}$, and $v_1 = w_1 = x/\|x\|$; the normalizations $\gamma_k$ and $\beta_k$ are determined from the condition $w_{k+1}^\dagger v_{k+1} = 1$.

The matrix $V_k W_k^\dagger$ is an oblique projector on $\mathcal{K}_k(A,x)$, such that an oblique projection of $f(A)x$ on $\mathcal{K}_k(A,x)$ is obtained using

$$ f(A)x \big|_{\mathcal{K}_k} = V_k W_k^\dagger\, f(A)\, x. \qquad (18) $$

In analogy to the Arnoldi method we introduce the approximation $f(A)\, V_k \approx V_k\, f(T_k)$ such that

$$ f(A)\, x \approx V_k\, f(T_k)\, W_k^\dagger x = \beta\, V_k\, f(T_k)\, e_1, \qquad \beta = \|x\|. \qquad (19) $$

The approximation (19) is an element of $\mathcal{K}_k(A,x)$, and the problem is now reduced to the computation of $f(T_k)$ with $k \ll n$.
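The two-sided Lanczos approximation can be sketched in generic dense numpy (an illustration only: breakdown handling and re-biorthogonalization, which a production code may need, are omitted, and `f` stands for the inner method applied to the small tridiagonal matrix):

```python
import numpy as np

def two_sided_lanczos_funm(A, x, k, f):
    # Build bi-orthogonal bases V, W (W^dagger V = 1) of K_k(A, x) and
    # K_k(A^dagger, x) with short three-term recurrences; T = W^dagger A V
    # is tridiagonal and f(A) x ~ beta * V f(T) e_1, Eq. (19).
    n = len(x)
    V = np.zeros((n, k), dtype=complex)
    W = np.zeros((n, k), dtype=complex)
    T = np.zeros((k, k), dtype=complex)
    beta = np.linalg.norm(x)
    V[:, 0] = W[:, 0] = x / beta                 # w_1^dagger v_1 = 1
    for j in range(k):
        Av = A @ V[:, j]
        T[j, j] = np.vdot(W[:, j], Av)           # alpha_j
        r = Av - T[j, j] * V[:, j]               # three-term recurrence for v
        s = A.conj().T @ W[:, j] - np.conj(T[j, j]) * W[:, j]
        if j > 0:
            r -= T[j - 1, j] * V[:, j - 1]
            s -= np.conj(T[j, j - 1]) * W[:, j - 1]
        if j == k - 1:
            break
        delta = np.linalg.norm(r)                # serious breakdown if ~0 (ignored here)
        V[:, j + 1] = r / delta
        W[:, j + 1] = s / np.vdot(V[:, j + 1], s)  # enforce w^dagger v = 1
        T[j + 1, j] = delta                        # subdiagonal
        T[j, j + 1] = np.vdot(W[:, j], A @ V[:, j + 1])  # superdiagonal
    fT = f(T)
    return beta * V @ fT[:, 0]                   # only the first column enters
```

In contrast to the Arnoldi sketch of Sec. 2, each step touches only the two previous basis vectors, so the orthogonalization cost per step is constant instead of growing with the subspace size.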

If the matrix has small eigenvalues, deflation will again be necessary when computing the sign function. To implement the LR-deflation in this case one constructs two bi-orthogonal bases $V_k$ and $W_k$ in $\mathcal{K}_k(A, \tilde{x})$ and $\mathcal{K}_k(A^\dagger, \tilde{w})$, with $\tilde{x} = (1 - R L^\dagger)x$ and $\tilde{w} = (1 - L R^\dagger)x$, where the directions along $R$, respectively $L$, have been fully deflated from the Krylov subspaces, i.e., $L^\dagger V_k = 0$ and $R^\dagger W_k = 0$. With LR-deflation the function approximation becomes

$$ f(A)\, x \approx R\, f(\Lambda)\, \hat{x} + \beta\, V_k\, f(T_k)\, e_1, \qquad \hat{x} = L^\dagger x, \quad \beta = \|\tilde{x}\|. \qquad (20) $$
5 Results

In this section we compare the results obtained with both methods. In an initial deflation phase the right and left eigenvectors of $H_\text{W}(\mu)$ corresponding to the eigenvalues of smallest magnitude are determined using ARPACK.1 With these exact eigenvectors the final approximation to the overlap operator of Eq. (1) is computed as described in the previous sections.

In Fig. 1 we compare the convergence rate for both methods, by plotting the accuracy versus the Krylov subspace size, and observe that the Arnoldi method has a somewhat better accuracy than the two-sided Lanczos method. To achieve the same accuracy, the Krylov subspace of the two-sided Lanczos method has to be chosen about 20% larger than in the Arnoldi method.

Figure 1: Comparison of the convergence rate for the Arnoldi and the two-sided Lanczos method for a (left) and (right) lattice, for various deflation sizes.

However, the speed of the short recurrences more than makes up for this, as can be seen from Fig. 2, where we show the accuracy as a function of the required CPU time. The two-sided Lanczos method is clearly more efficient than the Arnoldi method, and the advantage gained by the short recurrences increases as the volume grows. The dotted lines show the time needed to build the basis in the Krylov subspace, while the full lines represent the total CPU time used by the iterative method (without the deflation time). The construction of the basis is much faster for the two-sided Lanczos method, whose orthogonalization cost grows only linearly with the Krylov subspace size, than in the Arnoldi case, where the long recurrences make this cost grow quadratically. For the Arnoldi method the time needed to construct the Arnoldi basis dominates, while for the two-sided Lanczos method this time is almost negligible compared to that needed to compute the sign function of the inner matrix. Methods to speed up the computation of the sign function of the inner matrix are the subject of a current study. An additional advantage of the short recurrences is their possible implementation with a small memory footprint if a two-pass procedure is used to compute Eq. (19).

Figure 2: Accuracy versus CPU-time (in seconds), on an Intel Core 2 Duo 2.33GHz workstation, for the Arnoldi (red) and the two-sided Lanczos method (blue) for a (left) and (right) lattice.

To make the method practical for large-scale lattice simulations it will be important to improve the deflation phase, even though first tests on larger lattices with realistic parameter values seem to indicate that the number of deflated eigenvalues can be taken much smaller than on the small lattices investigated herein.

The methods discussed above are also currently being tested to compute eigenvalues of the overlap operator. First results are encouraging and clearly show the superiority of the two-sided Lanczos method.


This work was supported in part by DFG grant FOR465-WE2332/4-2. We gratefully acknowledge helpful discussions with A. Frommer.



  1. Typical deflation times on an Intel Core 2 Duo 2.33GHz workstation were for on the lattice and for on the lattice.