# An iterative method to compute the overlap Dirac operator at nonzero chemical potential

###### Abstract

The overlap Dirac operator at nonzero quark chemical potential involves the computation of the sign function of a non-Hermitian matrix. In this talk we present an iterative method, first proposed by us in Ref. [Bloch:2007aw], which allows for an efficient computation of the operator, even on large lattices. The starting point is a Krylov subspace approximation, based on the Arnoldi algorithm, for the evaluation of a generic matrix function. The efficiency of this method is spoiled when the matrix has eigenvalues close to a function discontinuity. To cure this, a small number of critical eigenvectors are added to the Krylov subspace, and two different deflation schemes are proposed in this augmented subspace. The ensuing method is then applied to the sign function of the overlap Dirac operator, for two different lattice sizes. The sign function has a discontinuity along the imaginary axis, and the numerical results show how deflation dramatically improves the efficiency of the method.

Andreas Frommer and Bruno Lang

Department of Mathematics, University of Wuppertal, 42097 Wuppertal, Germany


## 1 The overlap operator and the sign function at nonzero quark chemical potential

The overlap Dirac operator [Narayanan:1994gw, Neuberger:1997fp] provides an exact solution of the Ginsparg-Wilson relation and hence implements chiral symmetry in lattice QCD even at finite lattice spacing. At zero quark chemical potential the overlap operator requires the computation of the sign function of the Hermitian Wilson-Dirac operator, for which efficient methods have been developed [Neuberger:1998my, vandenEshof:2002ms].

To describe QCD at nonzero baryon density (see Ref. [Stephanov:2007fk] for a review), a quark chemical potential $\mu$ is introduced in the QCD Lagrangian. The massless overlap Dirac operator at nonzero $\mu$ was defined in Ref. [Bloch:2006cd] as

$$D_{\text{ov}}(\mu) = 1 + \gamma_5 \operatorname{sgn}\big(H_w(\mu)\big)\,, \tag{1.1}$$

with $H_w(\mu) = \gamma_5 D_w(\mu)$, where $D_w(\mu)$ is the Wilson-Dirac operator at nonzero chemical potential [Hasenfratz:1983ba]

$$D_w(\mu) = 1 - \kappa \sum_{i=1}^{3} \big(T_i^+ + T_i^-\big) - \kappa \big(e^{\mu}\, T_4^+ + e^{-\mu}\, T_4^-\big)\,, \qquad \big(T_\nu^\pm\big)_{yx} = (1 \pm \gamma_\nu)\, U_{x,\pm\nu}\, \delta_{y,x\pm\hat\nu}\,, \tag{1.2}$$

where $\kappa = 1/(8 + 2 m_w)$ with negative Wilson mass $m_w \in (-2,0)$, the $\gamma_\nu$ with $\nu = 1, \ldots, 4$ are the Dirac gamma matrices in Euclidean space, $U_{x,-\nu} = U^\dagger_{x-\hat\nu,\nu}$, and the $U_{x,\nu} \in \mathrm{SU}(3)$ are the lattice gauge link matrices. The exponential factors $e^{\pm\mu}$ implement the quark chemical potential on the lattice. For $\mu \neq 0$ the argument of the sign function in Eq. (1.1) becomes non-Hermitian, and one is faced with the problem of defining and computing the sign function of a non-Hermitian matrix.

Consider a given matrix $A$ of dimension $n \times n$ and a generic function $f$. Let $\Gamma$ be a collection of closed contours in $\mathbb{C}$ such that $f$ is analytic inside and on $\Gamma$ and such that $\Gamma$ encloses the spectrum of $A$. Then the function $f(A)$ of the matrix can be defined by [Dunford]

$$f(A) = \frac{1}{2\pi i} \oint_\Gamma f(z)\, (z I - A)^{-1}\, dz\,, \tag{1.3}$$

where the integral is defined component-wise and $I$ denotes the $n \times n$ identity matrix. From this definition it is easy to derive a spectral function definition. If the matrix $A$ is diagonalizable, i.e., $A = U \Lambda U^{-1}$ with a diagonal eigenvalue matrix $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$ and $U \in \operatorname{GL}(n, \mathbb{C})$, then

$$f(A) = U\, f(\Lambda)\, U^{-1} \qquad \text{with} \qquad f(\Lambda) = \operatorname{diag}\big(f(\lambda_1), \ldots, f(\lambda_n)\big)\,. \tag{1.4}$$

If $A$ cannot be diagonalized, a more general spectral definition can be derived from Eq. (1.3) using the Jordan decomposition [Golub, Bloch:2007aw]. Non-Hermitian matrices typically have complex eigenvalues, and applying Eq. (1.4) to the sign function in Eq. (1.1) requires the evaluation of the sign of a complex number $z$. The sign function needs to satisfy $[\operatorname{sgn}(z)]^2 = 1$ and, for real $x$, $\operatorname{sgn}(x) = \pm 1$ if $x \gtrless 0$. To satisfy these properties, it has become standard to define

$$\operatorname{sgn}(z) = \operatorname{sgn}\big(\operatorname{Re}(z)\big) = \frac{z}{\sqrt{z^2}}\,, \tag{1.5}$$

where in the last equality the cut of the square root is chosen along the negative real axis. Using the definition (1.5) in the spectral definition (1.4) and reordering the eigenvalues according to the sign of their real part allows one to write the matrix sign function as

$$\operatorname{sgn}(A) = U \begin{pmatrix} +I_p & 0 \\ 0 & -I_q \end{pmatrix} U^{-1}\,, \tag{1.6}$$

where $p$ and $q = n - p$ are the numbers of eigenvalues with positive and negative real part, respectively.

The sign function satisfies $[\operatorname{sgn}(A)]^2 = 1$, and a short calculation [Bloch:2006cd] shows that for this reason the overlap operator as defined in Eq. (1.1) satisfies the Ginsparg-Wilson relation. Moreover, this definition agrees with the result obtained when deriving Eq. (1.1) from the domain-wall fermion formalism at nonzero chemical potential [Bloch:2007xi].
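As an illustration, the spectral definition of the matrix sign function can be sketched in a few lines of NumPy. The helper name `sign_matrix` is illustrative; a full diagonalization like this is, of course, only feasible for small matrices:

```python
import numpy as np

def sign_matrix(A):
    """Matrix sign function via the spectral definition:
    sgn(A) = U diag(sgn(Re lambda_i)) U^{-1} for a diagonalizable A."""
    lam, U = np.linalg.eig(A)
    sgn = np.where(lam.real > 0, 1.0, -1.0)   # sgn(z) = sgn(Re z)
    return U @ np.diag(sgn) @ np.linalg.inv(U)

# small non-Hermitian test matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 0.1j * rng.standard_normal((6, 6))
S = sign_matrix(A)

# the defining properties: (sgn A)^2 = 1 and sgn(A) commutes with A
assert np.allclose(S @ S, np.eye(6), atol=1e-8)
assert np.allclose(S @ A, A @ S, atol=1e-8)
```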

## 2 Arnoldi method and function approximation for a non-Hermitian matrix

A numerical implementation of the sign function using the spectral definition (1.4) is only possible for small matrices, as a full diagonalization becomes too expensive as the matrix grows. Alternatively, matrix-based iterative algorithms for the computation of the matrix sign function have been around for many years, see Ref. [Higham97] and references therein. These are efficient for medium-sized problems, but are still unaffordable for the very large matrices occurring in typical lattice QCD simulations. Therefore, another iterative method is required, which approximates the vector $\operatorname{sgn}(A)\, b$ for a given vector $b$, rather than the full sign matrix itself. Such iterative methods are already extensively used for Hermitian matrices [Vor88, Drus98]. Most of these methods are derived from the Lanczos method, which uses short recurrences to build an orthonormal basis in a Krylov subspace.

Krylov subspace methods have also been introduced for non-Hermitian matrices [gallopoulos89parallel, hochbruck]. The two most widely used methods to compute a basis for the Krylov subspace are the Arnoldi method and the two-sided Lanczos method. In contrast to the Hermitian case, the Arnoldi method requires long recurrences to construct an orthonormal basis for the Krylov subspace, while the two-sided Lanczos method uses two short recurrence relations at the cost of losing orthogonality. Here we describe a Krylov subspace approximation based on the Arnoldi method to evaluate $f(A)\, b$ for a generic function $f$ of a non-Hermitian matrix $A$ and a given vector $b$.

We aim to construct an approximation to $f(A)\, b$ using a polynomial of degree $k-1$ with $k \ll n$. For any $k$ there exists a best polynomial approximation of degree at most $k-1$, which is the orthogonal projection of $f(A)\, b$ on the Krylov subspace $\mathcal{K}_k(A,b) = \operatorname{span}(b, Ab, A^2 b, \ldots, A^{k-1} b)$. An orthonormal basis $V_k = (v_1, \ldots, v_k)$ for the Krylov subspace is constructed using the Arnoldi recurrence

$$A V_k = V_k H_k + \beta_k\, v_{k+1} e_k^T\,, \tag{2.1}$$

where $v_1 = b/|b|$, $H_k = V_k^\dagger A V_k$ is a $k \times k$ upper Hessenberg matrix, $\beta_k = h_{k+1,k}$, and $e_k$ is the $k$-th basis vector in $\mathbb{C}^k$. Then $V_k V_k^\dagger$ is a projector on the Krylov subspace, and the projection of $f(A)\, b$ on $\mathcal{K}_k(A,b)$ can formally be written as

$$V_k V_k^\dagger\, f(A)\, b\,. \tag{2.2}$$

However, to compute the projection (2.2) one would already have to know the exact result $f(A)\, b$. Therefore, a method is needed to approximate the projected vector. From Eq. (2.1) it follows that

$$A \approx V_k H_k V_k^\dagger\,, \tag{2.3}$$

which suggests the approximation [gallopoulos89parallel]

$$f(A) \approx V_k\, f(H_k)\, V_k^\dagger\,. \tag{2.4}$$

As $V_k^\dagger V_k = I$ and $V_k^\dagger b = |b|\, e_1$, Eq. (2.4) can be substituted in Eq. (2.2), finally yielding the approximation

$$f(A)\, b \approx |b|\, V_k\, f(H_k)\, e_1\,. \tag{2.5}$$

In this approximation the computation of $f(A)$ is replaced by that of $f(H_k)$, where $H_k$ is of much smaller size $k \ll n$. $f(H_k)$ should be evaluated by some suitable numerical method.
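For small test problems this Arnoldi-based function approximation can be sketched in NumPy. The helper names (`arnoldi`, `f_eig`, `funm_krylov`) are illustrative and not from the paper's C++ implementation; the smooth exponential is used as the test function $f$, for which no deflation is needed:

```python
import numpy as np

def arnoldi(A, b, k):
    """Arnoldi recurrence  A V_k = V_k H_k + beta_k v_{k+1} e_k^T,
    built with modified Gram-Schmidt (long recurrences, full orthogonalization)."""
    n = len(b)
    V = np.zeros((n, k + 1), dtype=complex)
    H = np.zeros((k + 1, k), dtype=complex)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):              # orthogonalize against all previous v_i
            H[i, j] = np.vdot(V[:, i], w)
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :k], H[:k, :k]

def f_eig(M, f):
    """f(M) via the spectral definition, for a small diagonalizable M."""
    lam, U = np.linalg.eig(M)
    return U @ np.diag(f(lam)) @ np.linalg.inv(U)

def funm_krylov(A, b, k, f):
    """Krylov approximation  f(A) b ~= |b| V_k f(H_k) e_1."""
    V, H = arnoldi(A, b, k)
    return np.linalg.norm(b) * (V @ f_eig(H, f)[:, 0])

rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n)) / np.sqrt(n)   # non-Hermitian test matrix
b = rng.standard_normal(n).astype(complex)

y = funm_krylov(A, b, 30, np.exp)              # subspace size k = 30 << n
y_exact = f_eig(A, np.exp) @ b                 # reference via full diagonalization
```

For a well-separated smooth function like the exponential the approximation converges rapidly in $k$; for the sign function with near-imaginary eigenvalues this is precisely where convergence stalls.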

The computation of the matrix sign function using Eq. (2.5) converges to the exact solution (see the undeflated curve in the left pane of Fig. 1). Unfortunately, in the case of the sign function, the convergence as a function of the size of the Krylov subspace is very slow if some of the eigenvalues are close to the function discontinuity along the imaginary axis. This problem can be resolved by deflation of these critical eigenvalues.

For Hermitian matrices, it is well known that the computation of the sign function can be improved by deflating the eigenvalues smallest in absolute value [vandenEshof:2002ms]. Assume that $m$ critical eigenvalues $\lambda_i$ of $A$ and their orthonormal eigenvectors $u_i$ have been computed. Then

$$\operatorname{sgn}(A)\, b = \sum_{i=1}^{m} \operatorname{sgn}(\lambda_i)\, (u_i^\dagger b)\, u_i + \operatorname{sgn}(A)\, b_\perp\,, \tag{2.6}$$

where $b_\perp = b - \sum_{i=1}^{m} (u_i^\dagger b)\, u_i$ is the component of $b$ orthogonal to the critical eigenvectors. The first term on the right-hand side of Eq. (2.6) can be computed exactly, while the second term can be approximated using a Krylov subspace method for $\operatorname{sgn}(A)\, b_\perp$. Deflation will allow for a much smaller-sized Krylov subspace.
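A minimal NumPy sketch of this Hermitian deflation, on a synthetic matrix with a few eigenvalues very close to zero (all names, sizes, and eigenvalue choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 300, 4
# Hermitian test matrix with m eigenvalues very close to zero
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
ev = np.concatenate([np.linspace(-2.0, -0.5, 148),
                     [-2e-3, -1e-3, 1e-3, 2e-3],   # the "critical" eigenvalues
                     np.linspace(0.5, 2.0, 148)])
A = (Q * ev) @ Q.T                                 # A = Q diag(ev) Q^T

b = rng.standard_normal(n)
idx = np.argsort(np.abs(ev))[:m]     # m eigenvalues smallest in magnitude
U = Q[:, idx]                        # their orthonormal eigenvectors u_i
c = U.T @ b                          # coefficients (u_i^+ b)

head = U @ (np.sign(ev[idx]) * c)    # sum_i sgn(lambda_i) (u_i^+ b) u_i, exact
b_perp = b - U @ c                   # remainder, orthogonal to the u_i

# consistency check against the full spectral sign function
S = (Q * np.sign(ev)) @ Q.T
assert np.allclose(head + S @ b_perp, S @ b, atol=1e-10)
```

Only the second term, $\operatorname{sgn}(A)\, b_\perp$, would then be handed to the Krylov iteration, which no longer sees the near-zero eigenvalues.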

For non-Hermitian matrices the eigenvectors are no longer orthogonal, and the simple decomposition into orthogonal subspaces, leading to Eq. (2.6), no longer holds. In the next two sections we will develop two alternative deflation schemes for the non-Hermitian case.

## 3 Schur deflation

We construct the composite subspace $\Omega_m + \mathcal{K}_k(A,b)$, which is the sum of the subspace $\Omega_m$ spanned by the right eigenvectors corresponding to $m$ critical eigenvalues of $A$ and the Krylov subspace $\mathcal{K}_k(A,b)$. Assume that the $m$ critical eigenvalues and right eigenvectors of $A$ have been computed. From these, one can construct $m$ Schur vectors, which form an orthonormal basis of $\Omega_m$, satisfying

$$A S = S T\,, \tag{3.1}$$

where $S = (s_1, \ldots, s_m)$ and $T$ is an $m \times m$ upper triangular matrix whose diagonal elements are the eigenvalues corresponding to the Schur vectors.

We propose a modified Arnoldi method to construct an orthonormal basis of the composite subspace. That is, each Arnoldi vector is orthogonalized not only against the previous ones, but also against the Schur vectors $s_1, \ldots, s_m$. In analogy to (2.1), this process can be summarized as

$$(1 - S S^\dagger)\, A\, V_k = V_k H_k + \beta_k\, v_{k+1} e_k^T\,, \tag{3.2}$$

with $v_1 = b_\perp / |b_\perp|$, where $b_\perp = (1 - S S^\dagger)\, b$ is the projection of $b$ onto the orthogonal complement of $\Omega_m$ and $H_k = V_k^\dagger A V_k$. The Hessenberg matrix

$$H = \begin{pmatrix} T & S^\dagger A V_k \\ 0 & H_k \end{pmatrix} \tag{3.3}$$

satisfies a relation similar to Eq. (2.1), namely $A Q = Q H + \beta_k\, v_{k+1} e_{m+k}^T$, where the columns of $Q = (S \;\; V_k)$ form an orthonormal basis of the composite subspace. In analogy to Sec. 2 we construct the approximation

$$f(A)\, b \approx Q\, f(H)\, Q^\dagger b\,. \tag{3.4}$$

Because of the block structure (3.3) of $H$, the matrix $f(H)$ can be written as

$$f(H) = \begin{pmatrix} f(T) & Y \\ 0 & f(H_k) \end{pmatrix}, \tag{3.5}$$

where the off-diagonal block $Y$ reflects the coupling between both subspaces and satisfies the Sylvester equation

$$T\, Y - Y H_k = f(T)\, X - X\, f(H_k)\,, \tag{3.6}$$

with $X = S^\dagger A V_k$. Combining (3.4) and (3.5), we obtain

$$f(A)\, b \approx S \big[ f(T)\, S^\dagger b + Y\, V_k^\dagger b \big] + V_k\, f(H_k)\, V_k^\dagger b\,. \tag{3.7}$$

$f(T)$ and $f(H_k)$ are computed with some suitable numerical method, and the mixed triangular/Hessenberg Sylvester equation (3.6) for $Y$ can be solved efficiently with the method of Ref. [Bloch:2007aw].
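The block relation and the Sylvester equation can be checked numerically with SciPy's generic Sylvester solver (the paper instead exploits the triangular/Hessenberg structure); the exponential stands in for the sign function, and all matrix sizes are illustrative. Note that the Sylvester equation has a unique solution only when the spectra of the two diagonal blocks are disjoint, which holds generically here:

```python
import numpy as np
from scipy.linalg import expm, solve_sylvester

rng = np.random.default_rng(3)
m, k = 4, 8
T  = np.triu(rng.standard_normal((m, m)))        # upper triangular (Schur) block
Hk = np.triu(rng.standard_normal((k, k)), -1)    # upper Hessenberg block
X  = rng.standard_normal((m, k))                 # coupling block S^+ A V_k

H = np.block([[T, X], [np.zeros((k, m)), Hk]])

# off-diagonal block from the Sylvester equation  T Y - Y H_k = f(T) X - X f(H_k);
# solve_sylvester solves a x + x b = q, so pass (T, -Hk, rhs)
fT, fHk = expm(T), expm(Hk)
Y = solve_sylvester(T, -Hk, fT @ X - X @ fHk)

# reassembled block function agrees with f applied to the full matrix H
fH = np.block([[fT, Y], [np.zeros((k, m)), fHk]])
assert np.allclose(fH, expm(H), atol=1e-8)
```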

## 4 LR-deflation

An alternative deflation in the same composite subspace can be constructed using both the left and right eigenvectors corresponding to the critical eigenvalues. Assume that $m$ critical eigenvalues of $A$ and their corresponding left and right eigenvectors have been computed,

$$A R = R \Lambda\,, \qquad L^\dagger A = \Lambda L^\dagger\,, \tag{4.1}$$

where $\Lambda$ is the diagonal matrix of critical eigenvalues, and $R$ and $L$ are the $n \times m$ matrices with the corresponding right and left eigenvectors as columns, respectively. The left and right eigenvectors corresponding to different eigenvalues are orthogonal. If the eigenvectors are normalized such that $L^\dagger R = I$, then $P = R L^\dagger$ satisfies $P^2 = P$, and $P$ is an oblique projector on the subspace spanned by the right eigenvectors. Let us now decompose $b$ as $b = \hat b + \tilde b$, where $\hat b = P b$ and $\tilde b = (1 - P)\, b$. Then

$$f(A)\, b = R\, f(\Lambda)\, L^\dagger b + f(A)\, \tilde b\,. \tag{4.2}$$

The first term on the right-hand side, which follows from Eq. (4.1), can be evaluated exactly, while the second term can be approximated by applying the Arnoldi method described in Sec. 2 to $f(A)\, \tilde b$. An orthonormal basis is constructed in the Krylov subspace $\mathcal{K}_k(A, \tilde b)$ using the Arnoldi recurrence (2.1), with $v_1 = \tilde b / |\tilde b|$ and $H_k = V_k^\dagger A V_k$. Successive operations of $A$ on $\tilde b$ will yield no contributions along the critical eigendirections, hence $\mathcal{K}_k(A, \tilde b)$ does not mix with $\Omega_m$. Applying the Arnoldi approximation (2.5) to Eq. (4.2) yields the final approximation

$$f(A)\, b \approx R\, f(\Lambda)\, L^\dagger b + |\tilde b|\, V_k\, f(H_k)\, e_1\,. \tag{4.3}$$

Again, the first column of $f(H_k)$ will be computed with some suitable numerical method.
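A minimal SciPy sketch of the LR-deflation, using `scipy.linalg.eig` to obtain left and right eigenvectors of a small dense test matrix (in the actual application a sparse eigensolver such as ARPACK computes only the few critical eigenpairs; all sizes here are illustrative):

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(4)
n, m = 40, 6
A = rng.standard_normal((n, n))
b = rng.standard_normal(n).astype(complex)

# left/right eigenpairs:  L^+ A = diag(lam) L^+  and  A R = R diag(lam)
lam, L, R = eig(A, left=True, right=True)
idx = np.argsort(np.abs(lam.real))[:m]    # "critical": closest to the imaginary axis
lam_c, Lc, Rc = lam[idx], L[:, idx], R[:, idx]

# rescale the left eigenvectors so that Lc^+ Rc = I (biorthonormalization)
d = np.sum(np.conj(Lc) * Rc, axis=0)
Lc = Lc / np.conj(d)

P = Rc @ Lc.conj().T                      # oblique projector, P^2 = P
b_tilde = b - P @ b                       # Krylov part: K_k(A, b_tilde) stays deflated

sgn_c = np.where(lam_c.real > 0, 1.0, -1.0)
head = Rc @ (sgn_c * (Lc.conj().T @ b))   # exact contribution R sgn(Lambda) L^+ b

# consistency check against the full spectral sign function
S = R @ np.diag(np.where(lam.real > 0, 1.0, -1.0)) @ np.linalg.inv(R)
assert np.allclose(P @ P, P, atol=1e-8)
assert np.allclose(head + S @ b_tilde, S @ b, atol=1e-8)
```

The same structure shows why LR-deflation is convenient when the adjoint is also needed: the roles of $L$ and $R$ simply interchange for $A^\dagger$.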

## 5 Results

We used the methods described above to compute the sign function occurring in the overlap Dirac operator (1.1) for a $4^4$ and a $6^4$ lattice gauge configuration. Deflation of critical eigenvalues is essential because $\gamma_5 D_w(\mu)$ has eigenvalues close to the imaginary axis. In practice, these eigenvectors need to be computed to high accuracy, as this will limit the overall accuracy of the function approximation. This was done with ARPACK [arpack]. The modified Arnoldi method was implemented in C++ using the optimized ATLAS BLAS library [atlas-hp]. The convergence of the method is illustrated in Fig. 1, where the accuracy of the approximation is shown as a function of the Krylov subspace size $k$. The various curves correspond to different numbers $m$ of deflated eigenvalues, using the LR-deflation scheme. Without deflation ($m = 0$) the need for large Krylov subspaces would make the method unusable. Clearly, deflation greatly improves the efficiency of the numerical method: as more eigenvalues are deflated, smaller Krylov subspaces are sufficient to achieve a given accuracy. Furthermore, the deflation efficiency seems to grow with increasing lattice volume. Indeed, although the matrix size for the $6^4$ lattice is more than 5 times larger than in the $4^4$ case, the Krylov subspace only has to be expanded by a factor of 1.2 to achieve a given accuracy at fixed $m$. It is also interesting to note that the approximation (4.3) for $\operatorname{sgn}(A)\, b$ is very close to the best approximation in the composite subspace, which is given by the orthogonal projection of $\operatorname{sgn}(A)\, b$ on that subspace, as was checked numerically.

The results for the Schur deflation are not shown here, but are very similar to those for the LR-deflation. The Schur deflation is slightly less accurate and requires more CPU time per evaluation, mainly because of the additional orthogonalization of the Arnoldi vectors with respect to the Schur vectors. However, the time taken by its initialization phase is halved, as it only requires the computation of the right eigenvectors. The best choice of deflation scheme will therefore depend on the number of vectors $b$ for which $\operatorname{sgn}(A)\, b$ needs to be computed. If one needs to apply both $\operatorname{sgn}(A)$ and its adjoint, then, obviously, the LR-deflation will be the better choice. A more detailed discussion of both deflation schemes can be found in Ref. [Bloch:2007aw].

## 6 Conclusion

In this talk we presented an algorithm to approximate the action of a function of a non-Hermitian matrix on an arbitrary vector, when some of the eigenvalues of the matrix lie in a region of the complex plane close to a discontinuity of the function. The method approximates the solution vector in a composite subspace consisting of a Krylov subspace augmented by the eigenvectors corresponding to a small number of critical eigenvalues. Two deflation variants were presented based on different subspace decompositions: the Schur deflation uses two coupled orthogonal subspaces, while the LR-deflation uses two decoupled but non-orthogonal subspaces. Deflation explicitly takes into account the contribution of the critical eigenvalues. This allows for smaller-sized Krylov subspaces, which is crucial for the efficiency of the method. The method was applied to the overlap Dirac operator of lattice QCD at nonzero chemical potential, where the importance of deflation was clearly demonstrated.

## Acknowledgments

This work was supported in part by DFG grants FOR465-WE2332/4-2 and Fr755/15-1.