The generalized fermion-bag approach
Abstract
We present a new approach to some four-fermion lattice field theories, which we call the generalized fermion bag approach. The basic idea is to identify unpaired fermionic degrees of freedom that cause sign problems and collect them in a bag. Paired fermions usually act like bosons and do not lead to sign problems. A resummation of all unpaired fermion degrees of freedom inside the bag is sufficient to solve the fermion sign problem in a variety of interesting cases. Using a concept of duality we then argue that the size of the fermion bags is small both at strong and weak couplings. This allows us to construct efficient algorithms in both these limits. Using the fermion bag approach, we study the quantum phase transition of the 3D massless lattice Thirring model, which is of interest in the context of graphene. Using our method we are able to solve the model on large lattices with moderate computational resources. We obtain the precise location of the quantum critical point and the values of the critical exponents through this study.
The generalized fermion-bag approach
Shailesh Chandrasekharan and Anyi Li (speaker; address since September 2011: Institute for Nuclear Theory, University of Washington, Seattle, WA 98195, USA)
Department of Physics, Box 90305, Duke University, Durham, North Carolina 27708, USA
Email: sch@phy.duke.edu and anyili@phy.duke.edu
1 Introduction
Monte Carlo methods for lattice field theories with massless fermions in three or more dimensions continue to pose a variety of challenges. For example, the popular Hybrid Monte Carlo (HMC) method encounters many difficulties: sometimes the method encounters sign problems, while in other cases small eigenvalues of the fermion matrix lead to severe singularities. For this reason most practical calculations have relied on extrapolations to the massless limit. Studies in Quantum Chromodynamics have shown that reliable extrapolations require calculations at many small fermion masses [1]. As far as we know, Monte Carlo calculations on large lattices with exactly massless fermions have never been performed with traditional Monte Carlo methods, including HMC.
Recently, an alternative Monte Carlo method called the fermion bag approach was proposed to solve some four-fermion lattice field theories with exactly massless fermions [2]. It was shown that the method is extremely efficient at strong couplings, where traditional Monte Carlo methods fail. Here we argue that the method is also quite general and can be applied to a variety of problems; in addition, it contains an interesting strong-weak coupling duality which makes the method efficient even at weak couplings. The efficiency is due to the fact that the effort required to perform a single local update scales like the square of the number of fermion degrees of freedom inside a bag instead of the space-time volume. Interestingly, the bag size is a small fraction of the thermodynamic volume at both strong and weak couplings. In the massless lattice Thirring model, which we study here as a first application, the fermion bags typically contain only an eighth of the total degrees of freedom even at the quantum critical point. Due to this feature, for the first time we are able to solve a three dimensional lattice field theory containing exactly massless fermions close to an interesting quantum critical point on large lattices with modest computing resources. Through a careful finite size scaling analysis we are able to extract the critical exponents accurately at the quantum critical point in this model.
2 The Fermion Bag Idea
The idea behind the fermion bag is to identify the fermion degrees of freedom that cause sign problems, collect them in a bag, and sum only over them. This is in contrast to traditional approaches, where all fermion degrees of freedom in the entire thermodynamic volume are summed over in order to solve the sign problem. In some four-fermion models the degrees of freedom inside a fermion bag involve only a small fraction of the total number of degrees of freedom in the entire thermodynamic volume and can be summed up quickly, yielding a positive weight. This makes the fermion bag approach very powerful. It is an extension of the meron cluster idea proposed some time ago [3].
In order to illustrate the idea, consider models formulated with several tastes of massless staggered fermions. These models contain a pair of Grassmann variables per site for each taste, denoted $\bar\psi_i(x)$ and $\psi_i(x)$, where $i$ is the taste index and $x$ denotes the space-time lattice point. Let $D$ be the free staggered Dirac matrix, whose matrix elements are denoted $D_{x,y}$. The properties of $D$ are such that any $k$-point correlation function of chiral condensates

$C(x_1,\dots,x_k) \;=\; \int [d\bar\psi_1\, d\psi_1]\; e^{\bar\psi_1 D \psi_1}\; \bar\psi_1\psi_1(x_1)\cdots\bar\psi_1\psi_1(x_k)$   (2.1)

involving a single taste (say taste 1) is always positive. Using Wick contractions it is easy to prove that

$C(x_1,\dots,x_k) \;=\; \mathrm{Det}(D)\,\mathrm{Det}\big(G([x])\big)$   (2.2)

where $G([x])$ is the $k\times k$ matrix of propagators between the sites $x_1,\dots,x_k$, whose matrix elements are $G(x_i,x_j) = \big(D^{-1}\big)_{x_i,x_j}$. It is also possible to argue that [2],

$C(x_1,\dots,x_k) \;=\; \mathrm{Det}\big(W([x])\big)$   (2.3)

where the matrix $W([x])$ is the same as the matrix $D$ except that the rows and columns associated with the sites $x_1,\dots,x_k$ are dropped. Thus, $W([x])$ is a $(V-k)\times(V-k)$ matrix, where $V$ is the number of lattice sites. The identity

$\mathrm{Det}\big(W([x])\big) \;=\; \mathrm{Det}(D)\,\mathrm{Det}\big(G([x])\big)$   (2.4)

is the basis of the weak coupling-strong coupling duality mentioned in the previous section.
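This duality is the standard linear-algebra fact that the determinant of the $k\times k$ block of $D^{-1}$ on the sites $[x]$ equals the determinant of the complementary submatrix of $D$ divided by $\mathrm{Det}(D)$. The following self-contained Python sketch checks it numerically, with a random matrix standing in for the staggered Dirac operator and toy helper routines in place of production linear algebra:

```python
import random

def det(m):
    """Determinant by Gaussian elimination with partial pivoting."""
    a = [row[:] for row in m]
    n = len(a)
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[p][i]) < 1e-12:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def inverse(m):
    """Matrix inverse by Gauss-Jordan elimination with partial pivoting."""
    n = len(m)
    a = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(m)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        piv = a[i][i]
        a[i] = [v / piv for v in a[i]]
        for r in range(n):
            if r != i:
                f = a[r][i]
                a[r] = [vr - f * vi for vr, vi in zip(a[r], a[i])]
    return [row[n:] for row in a]

rng = random.Random(1)
V = 6                                          # toy "lattice" size
D = [[rng.uniform(-1.0, 1.0) for _ in range(V)] for _ in range(V)]
xs = [1, 3]                                    # the sites x_1, ..., x_k
rest = [i for i in range(V) if i not in xs]
G = inverse(D)
G_x = [[G[i][j] for j in xs] for i in xs]      # k x k propagator matrix G([x])
W_x = [[D[i][j] for j in rest] for i in rest]  # D with the sites [x] dropped
lhs = det(W_x)
rhs = det(D) * det(G_x)
assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```

The same check works for any invertible matrix and any choice of dropped sites, which is why the duality holds configuration by configuration.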
Now consider a generic four-fermion lattice field theory action involving massless staggered fermions, of the form

$S \;=\; -\sum_i \sum_{x,y} \bar\psi_i(x)\, D_{x,y}\, \psi_i(y) \;-\; \sum_{\langle xy\rangle} U_{\langle xy\rangle} \sum_i \bar\psi_i\psi_i(x)\; \bar\psi_i\psi_i(y)$   (2.5)

where $\langle xy\rangle$ refers to some well-defined set of neighboring lattice sites. Further, we will assume that all the couplings $U_{\langle xy\rangle}$ are non-negative real constants. The partition function of the model,

$Z \;=\; \int [d\bar\psi\, d\psi]\; e^{-S}$   (2.6)

can be expanded in powers of the couplings, and each term in the expansion consists of a product of correlation functions of the type introduced above. Mathematically, the expansion looks like

$Z \;=\; \sum_{\{n_{\langle xy\rangle}\}}\; \Big(\prod_{\langle xy\rangle} U_{\langle xy\rangle}^{\,n_{\langle xy\rangle}}\Big)\; \prod_i C\big(x_1,\dots,x_{k_i}\big)$   (2.7)

where $n_{\langle xy\rangle}$ is the power of the coupling on the bond $\langle xy\rangle$ and $k_i$ is the number of interaction vertices involving taste $i$. Note that on a finite lattice the expansion is a very high order polynomial, so no convergence issues arise.
An intuitive physical picture emerges from the above expansion of the partition function. Since the interaction sites contain both $\bar\psi$ and $\psi$, fermions are already paired on these sites and do not cause sign problems (any remaining sign problem would also be present in a bosonic field theory). On the other hand, unpaired fermions that hop freely on the remaining sites can indeed cause sign problems. The free sites are collectively referred to as a fermion bag. The summation of all fermion world lines inside the bag is the determinant of the free Dirac operator restricted to the free sites. This determinant can be evaluated easily if the bag is small, which naturally occurs at strong couplings. Hence we refer to these bags as strong coupling fermion bags. It was shown in [2] that at strong couplings a fermion bag splits into many small disconnected pieces, making things even simpler. The left figure of Fig. 1 illustrates the disconnected pieces of a fermion bag at strong couplings. The solid bonds represent interaction sites, while the sites in the shaded regions are free sites and form the fermion bag. The arrows inside the shaded regions illustrate free fermion world lines inside the bag.
Clearly, at weak couplings the number of free sites grows enormously. Thus, the definition of a fermion bag as a set of free sites, which was natural at strong couplings, loses its charm at weak couplings. However, thanks to a duality, we can construct the fermion bag differently. At weak couplings we can view the interactions as the unpaired fermionic degrees of freedom, which cause fluctuations over the paired fermionic free vacuum as they hop from one interaction site to another. The effort to compute these fermionic fluctuations should only grow like the determinant of a matrix whose size is set by the number of interactions. The duality relation discussed above shows that the fluctuation matrix is indeed such a small matrix. The right figure of Fig. 1 illustrates a weak coupling fermion bag. The solid bonds again represent interaction sites, which now form the fermion bag. Fermions hop from one interaction site to another through the free fermion propagator. One set of hops is shown by arrows in the figure.
We must acknowledge that the weak coupling fermion bag idea is exactly equivalent to the idea of summing over all Feynman diagrams, which was introduced earlier in the framework of diagrammatic determinantal Monte Carlo methods; for a review see Ref. [4]. Indeed, the sum over all Wick contractions at a given order in perturbation theory is simply the determinant of the propagator matrix between the interaction sites. On the other hand, we think the fermion bag approach is a more natural interpretation, at least in the context of lattice field theories, since it uncovers the powerful concept of duality discussed above and shows that the approach is efficient both at weak and strong couplings.
The fermion bag approach is rather general and is equally applicable to nonrelativistic fermions and models with Wilson fermions. However, sometimes it is necessary to introduce unconventional interactions, like eight-fermion interactions. In some models the weight of a fermion bag is no longer a determinant but involves new mathematical structures called fermionants [5]. These are very similar to determinants, but may be exponentially difficult to compute [6]. For such models the fermion bag approach is not practical.
3 Massless Thirring Model
As a first application of the fermion bag approach we have studied the three dimensional massless lattice Thirring model. It is a lattice field theory containing massless fermions and an interesting quantum critical point. Variants of this model have been used recently to describe critical points associated with graphene [7]. The model is constructed with two Grassmann valued fields $\psi(x)$ and $\bar\psi(x)$ on each site of a cubic lattice. The lattice action is given by

$S \;=\; -\sum_{x,y} \bar\psi(x)\, D_{x,y}\, \psi(y) \;-\; U \sum_{\langle xy\rangle} \bar\psi\psi(x)\; \bar\psi\psi(y)$   (3.1)

Here $\langle xy\rangle$ refers to the set of nearest neighbor sites across a bond. We use antiperiodic boundary conditions in all three directions. The four-fermion coupling $U$ generates the current-current coupling of the Thirring model in the continuum. The lattice model is invariant under a $U_f(1)\times U_\chi(1)$ symmetry, where $U_f(1)$ is the fermion number symmetry and $U_\chi(1)$ is the chiral symmetry. When the coupling is small, the model contains two flavors of massless four-component Dirac fermions at long distances due to fermion doubling. At large $U$ the chiral symmetry breaks spontaneously and generates a single massless Goldstone boson, while the fermions become massive. A quantum critical point $U_c$ separates the phase with massless fermions from the phase with massless bosons.
The model has been studied earlier using mean field techniques [8] and conventional Monte Carlo methods [9, 10, 11, 12]. In particular, properties of the quantum critical point have been computed. However, all previous work was done in the presence of a fermion mass and on lattice sizes which are not very big compared to the correlation lengths introduced by the fermion mass. Thus, the analysis may contain uncontrolled systematic errors. A fresh Monte Carlo method which works directly in the massless limit should be useful. This is what we accomplish here using the fermion bag approach.
The fermion bag approach for this model was first developed in [2]. However, in that study only strong coupling bags were used, since the concept of duality was not yet appreciated. Here we repeat the study with weak coupling bags and are able to push it to larger lattice sizes. In the fermion bag approach the partition function of the model above can be rewritten as

$Z \;=\; \sum_{[b]}\; U^{N_b}\; \mathrm{Det}\big(W([b])\big)$   (3.2)

where $[b]$ refers to the configuration of interaction bonds, $N_b$ refers to the total number of bonds, and $W([b])$ is the free staggered Dirac matrix restricted to the free sites. For a given configuration $[b]$, the free sites form the strong coupling fermion bag (see the left figure in Fig. 1). However, this representation of the partition function is not useful when $N_b$ is a small fraction of the volume. For example, at the quantum critical point $N_b$ is about an eighth of the lattice volume. On the other hand, one can use the duality relation

$\mathrm{Det}\big(W([b])\big) \;=\; \mathrm{Det}(D)\,\mathrm{Det}\big(G([b])\big)$   (3.3)

to simplify the computation when $N_b$ is small. Here $D$ is the free staggered fermion matrix on the whole lattice and $G([b])$ is the $N_b\times N_b$ matrix of free fermion propagators from the even lattice sites of the bonds to the odd lattice sites of the bonds. This representation is equivalent to a redefinition of the fermion bag as the set of interaction sites (see the right figure of Fig. 1).
4 Observables
In order to uncover the properties of the quantum critical point we have measured four observables:

The simplest observable is the average number of bonds. Fluctuations in this observable allow us to monitor equilibration and autocorrelation times easily.

Since we work with exactly massless fermions, the chiral condensate will always be zero. However, the chiral condensate susceptibility $\chi$ is nonzero. It is defined as

$\chi \;=\; \frac{1}{V}\sum_{x,y}\; \big\langle\, \bar\psi\psi(x)\;\bar\psi\psi(y)\,\big\rangle$   (4.1)

We expect $\chi$ to scale as $L^{2-\eta}$ at the critical point. A related quantity is the bosonic two-point function

$C_b(x) \;=\; \big\langle\, \bar\psi\psi(0)\;\bar\psi\psi(x)\,\big\rangle$   (4.2)

The ratio of these two quantities provides a useful way to track autocorrelations.

Another useful observable that measures the onset of chiral symmetry breaking is the chiral winding susceptibility $\langle Q_\chi^2\rangle$. If we define the conserved chiral charge $Q_\chi$ passing through the surface $S_\alpha$ perpendicular to the direction $\hat\alpha$ as

$Q_\chi \;=\; \sum_{x \in S_\alpha} \epsilon(x)\, j_\alpha(x)$   (4.3)

where $\epsilon(x)$ is the parity of the site $x$ and $j_\alpha(x)$ is the conserved fermion current across the surface, then $\langle Q_\chi^2\rangle$ is expected to be independent of $L$ at the critical point. This allows us to determine the critical point accurately.

Since one of the novel features of the current quantum critical point is the presence of massless fermions at the critical point, the fermion two-point correlation function is expected to show nontrivial scaling. Hence we define $C_F(d)$ as the fermion two-point function at separation $d$, averaged over the three directions, and compute the ratio $R_f$ of the correlator at a large separation (of order $L$) to the correlator at unit separation. At the critical point this ratio is expected to show a power-law dependence on $L$ governed by the fermion anomalous dimension $\eta_\psi$.
5 The Algorithm
We now discuss the details of the Monte Carlo algorithm we have used to solve the model. An important technical problem is that some observables, like $\chi$, get contributions from configurations that do not contribute to the partition function. For example, the configuration illustrated in Fig. 2 contributes to $\chi$, but has zero weight in the partition function. This is because, in the massless limit, any strong coupling fermion bag with an odd number of free sites has zero weight. However, introducing a source in each of the two odd bags makes the number of free sites even, thereby making the configuration weight nonzero. Hence, instead of computing $\chi$ directly (which can be done in principle [2]) we redefine our partition function to be
$Z_2 \;=\; \sum_{[b]}\sum_{x_1,x_2}\; U^{N_b}\; \mathrm{Det}\big(W([b];x_1,x_2)\big)$   (5.1)

where $W([b];x_1,x_2)$ is the free-site matrix with the two source sites $x_1$ and $x_2$ also dropped, so that the partition function always contains two sources. We then redefine $\chi$ in terms of the fraction of configurations in which the two sources are one lattice unit apart. The redefined $\chi$ has the same finite size scaling properties as the original definition close to the quantum critical point. Below we describe the Monte Carlo algorithm for simulating the partition function $Z_2$. The algorithm for simulating the real partition function is the subset of updates in which the sources can be ignored.
The algorithm consists of four updates: (1) bond creation/destruction, (2) bond translation, (3) source translation, (4) worm update. Each update begins with a configuration in which the bond variables at every site are completely defined along the three positive and three negative directions; a bond connecting two neighboring sites is recorded at both of its ends. The configuration also defines source variables, which are nonzero on only two sites in the entire lattice. Finally, every site carries a definite parity.
5.1 Bond Creation/Destruction
Bond creation and destruction is accomplished through a simple Metropolis algorithm. The exact update is as follows:

With probability 1/2 we propose to destroy a bond. If there are no bonds, the update stops. Otherwise we pick one of the $N_b$ existing bonds at random with equal probability. We destroy that bond with probability

$P \;=\; \min\Big(1,\; \frac{W_{\rm new}}{W_{\rm old}}\,\frac{N_b}{N'}\Big)$   (5.2)

where $W_{\rm new}$ is the weight of the new configuration obtained by destroying the bond, $W_{\rm old}$ is the weight of the old configuration, and $N'$ denotes the number of locations where we would have been allowed to create a bond starting from the new configuration.

With probability 1/2 we propose to create a bond. Let $N'$ be the number of locations where we could create a bond. If $N' = 0$ the update stops. Otherwise we pick one of the $N'$ locations at random with equal probability. We then create a bond at that location with probability

$P \;=\; \min\Big(1,\; \frac{W_{\rm new}}{W_{\rm old}}\,\frac{N'}{N_b + 1}\Big)$   (5.3)

where $W_{\rm new}$ is the weight of the new configuration obtained by creating the new bond and $W_{\rm old}$ is the weight of the old configuration.
If we begin with a configuration with a small number of bonds, the acceptance probability is close to one and the algorithm creates bonds efficiently. Once the number of bonds approaches its equilibrium value, the system begins to thermalize and the average number of bonds fluctuates only a little.
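To make the bookkeeping of the proposal factors concrete, here is a toy Python sketch of the create/destroy update on a one-dimensional ring, where a bond occupies a pair of neighboring sites and the configuration weight is replaced by a bare factor of $U$ per bond (a hypothetical stand-in for the determinant weight used in the real algorithm):

```python
import random

def create_locations(bonds, L):
    """Sites x where a new bond (x, x+1) would not overlap an existing bond."""
    occupied = set()
    for x in bonds:
        occupied.add(x)
        occupied.add((x + 1) % L)
    return [x for x in range(L)
            if x not in occupied and (x + 1) % L not in occupied]

def bond_sweep(bonds, L, U, rng, steps):
    """Metropolis create/destroy proposals with the N_b / N' proposal factors."""
    for _ in range(steps):
        if rng.random() < 0.5:                       # propose destroying a bond
            if not bonds:
                continue
            x = rng.choice(sorted(bonds))
            trial = set(bonds)
            trial.discard(x)
            nprime = len(create_locations(trial, L)) # reverse-move multiplicity
            # weight ratio is 1/U since exactly one bond is removed
            if rng.random() < min(1.0, (1.0 / U) * len(bonds) / nprime):
                bonds = trial
        else:                                        # propose creating a bond
            locs = create_locations(bonds, L)
            if not locs:
                continue
            x = rng.choice(locs)
            trial = set(bonds)
            trial.add(x)
            if rng.random() < min(1.0, U * len(locs) / len(trial)):
                bonds = trial
    return bonds

rng = random.Random(0)
L = 16
bonds = bond_sweep(set(), L, U=4.0, rng=rng, steps=2000)
# bonds never overlap, and a sizable coupling keeps some bonds around
sites = [s for x in bonds for s in (x, (x + 1) % L)]
assert len(sites) == len(set(sites))
assert len(bonds) >= 1
```

Note how the reverse-move multiplicities enter the acceptance probabilities; this is what keeps the toy chain in detailed balance.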
5.2 Bond Translation
In order to reduce autocorrelation times it is useful to have an update which can move bonds from one location to another. This update is again a Metropolis update and is meaningful only when at least one bond exists. We pick an existing bond at random, destroy it, and create a bond at another allowed location picked at random, accepting the move with probability

$P \;=\; \min\Big(1,\; \frac{W_{\rm new}}{W_{\rm old}}\Big)$   (5.4)

where $W_{\rm new}$ and $W_{\rm old}$ are the weights of the new and old configurations.
5.3 Source Translation
We need to update the locations of the two sources in the configurations that contribute to the two-source partition function. We again accomplish this using a Metropolis algorithm. We pick one of the sources at random, destroy it, and recreate it at an allowed site with the same parity as the site where the source was destroyed, accepting the move with probability

$P \;=\; \min\Big(1,\; \frac{W_{\rm new}}{W_{\rm old}}\Big)$   (5.5)

where $W_{\rm new}$ and $W_{\rm old}$ are the weights of the new and old configurations.
5.4 Worm Updates
Configurations with two sources come in two varieties: in one set both sources lie in a single strong coupling fermion bag, while in the other the sources lie in two different strong coupling fermion bags. The above three updates cannot easily move between these two sets, and this can lead to long autocorrelation times. The reason is that it is impossible to remove a source from one bag and move it to another bag in a single step; only a series of correlated moves can accomplish this. The problem is most severe in the broken phase and perhaps close to the quantum critical point. Fortunately, worm updates of the type discussed in Ref. [13] completely eliminate this problem. Further, they are extremely fast, since they do not require the computation of any determinants. We perform two types of worm updates:

Bond Update

We pick a site at random and define it as the first site.

If there is no bond at the first site, the update ends. Otherwise, we break the bond attached to it by placing a source at each of its two ends, and move to the site at the other end of the broken bond.

We then pick a direction, at random from the available choices, such that the neighboring site in that direction either contains a bond or is the first site.

If the neighboring site contains a bond, we break that bond, create a new bond connecting the current site to that neighbor, and move the source to the site freed at the far end of the broken bond. We then redefine the current site to be that freed site and go back to the previous step.

If the neighboring site is the first site, we remove the two sources and create a bond connecting the two sites. Then the update ends. Note that the update begins and ends at the same site and hence is a loop update.


Source Update

We pick one of the two sources at random.

We look for all directions in which the neighboring site contains a bond. If no such direction exists, the update ends. Otherwise a direction is picked at random from the available choices.

We break the bond at the chosen neighboring site, create a new bond connecting the source site to that neighbor, and move the source to the site freed at the far end of the broken bond. The update then ends.

When the source update is repeated many times it can move the source from one bag to another bag. The source update is very cheap and can be repeated thousands of times within a second.
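The worm moves can be sketched in the same toy setting: a one-dimensional ring where bonds pair nearest neighbors and the configuration weight is ignored. The sketch below mirrors the loop update described above (break a bond, walk the defect by relinking, close the loop at the first site); the source bookkeeping of the real algorithm is kept implicit:

```python
import random

def worm_update(partner, L, rng, max_steps=10000):
    """One bond-worm move on a ring of L sites.

    partner[x] is the site that x is bonded to, or None.  Bonds join
    nearest neighbours only.  (Toy sketch: configuration weights play
    no role here.)
    """
    x0 = rng.randrange(L)
    if partner[x0] is None:
        return                                   # no bond at x0: the update ends
    cur = partner[x0]                            # break the bond; defect at x0, cur
    partner[x0] = partner[cur] = None
    for _ in range(max_steps):
        # valid moves: neighbours that carry a bond, or the first site
        dirs = [d for d in (1, L - 1)
                if (cur + d) % L == x0 or partner[(cur + d) % L] is not None]
        z = (cur + rng.choice(dirs)) % L
        if z == x0:                              # close the loop: rebond cur and x0
            partner[cur], partner[x0] = x0, cur
            return
        w = partner[z]                           # relink: bond cur-z, defect -> w
        partner[w] = None
        partner[cur], partner[z] = z, cur
        cur = w
    raise RuntimeError("worm failed to close")

rng = random.Random(3)
L = 8
partner = [None] * L
for a, b in [(0, 1), (4, 5)]:                    # two nearest-neighbour bonds
    partner[a], partner[b] = b, a
for _ in range(50):
    worm_update(partner, L, rng)
n_bonds = sum(1 for x in range(L) if partner[x] is not None) // 2
assert n_bonds == 2                              # the worm moves bonds, never adds them
for x in range(L):
    if partner[x] is not None:
        assert partner[partner[x]] == x            # pairing stays consistent
        assert (partner[x] - x) % L in (1, L - 1)  # bonds stay nearest-neighbour
```

Because every relink conserves the number of bonds, the worm only moves bonds around; this is why it is cheap and requires no determinants.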
5.5 Determinant Computation
Except for the worm updates, all other updates require the computation of the determinant of the propagator matrix between the interaction sites, which is an $N_b\times N_b$ matrix, where $N_b$ is the number of bonds. A change of a single bond or source location typically changes one row and one column of this matrix. The most time consuming part of the algorithm is calculating the determinants before and after the change. The changes can lead to singular matrices quite often; further, submatrices can also be singular. There are ways to compute the determinant of a matrix quickly if a single row and column are changed [14]. However, these tricks involve inverses and must be thought through carefully due to possible singularities.

In our work we use LU decomposition with complete pivoting to compute the determinant. However, this approach requires $O(N_b^3)$ operations per determinant. Further, a single sweep requires a number of bond updates of order $N_b$. Hence, naively, the computations necessary to accomplish a single sweep scale like $N_b^4$. However, as we explain below, by reusing a large part of the decomposition we can reduce the cost of a single sweep substantially.
The basic idea is that if one decides a priori that only a fixed set of rows and columns will be updated in the matrix, then while performing the LU decomposition we can organize the calculation so that a large part of it is reused. For example, consider the LU decomposition of the block matrix

$G \;=\; \begin{pmatrix} A & B \\ C & E \end{pmatrix}$   (5.6)

where $A$ contains the rows and columns that do not change and $B$, $C$ and $E$ contain the rows and columns being updated. If the matrix $A$ is not singular, we can compute the determinant of $G$ as

$\mathrm{Det}(G) \;=\; \mathrm{Det}(A)\; \mathrm{Det}\big(E - C\,A^{-1} B\big)$   (5.7)

Since $A$ is fixed during the update, recomputing the determinant of $G$ as the elements of $B$, $C$ and $E$ vary requires only the determinant of the small Schur complement $E - C A^{-1} B$. If $A$ is singular, then while computing its decomposition we can isolate the zero mode sector and merge it with the part that is being updated. Since the zero mode sector of $A$ is always tiny, the above approach works well.
In order to implement this idea, we divide the full volume into blocks such that every bond belongs to a unique block. We then perform the three time consuming updates on the bonds associated with a randomly chosen block. We choose a basis such that the matrix can be written as the block matrix shown above: the block $A$ contains propagators only between sites of bonds that do not belong to the chosen block, the block $E$ contains only propagators between sites of the bonds inside it, and the off-diagonal blocks contain propagators between the two sets. We then compute the LU decomposition with full pivoting of the matrix $A$. This allows us to isolate any singular part of $A$, if it exists, and merge it with the blocks being updated. Since the singular part does not change, it can easily be taken into account if necessary. The updates described above can be adapted to each block by restricting the bond counts to the chosen block. With a suitable choice of block size, the cost of one sweep is reduced substantially. We believe our method is similar to the "fast updates" of the usual determinantal algorithms.
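The block trick can be checked numerically. The Python sketch below (toy helper routines, not the production code) verifies $\mathrm{Det}(G) = \mathrm{Det}(A)\,\mathrm{Det}(E - CA^{-1}B)$ on a random matrix; in the algorithm, $A^{-1}$ and $\mathrm{Det}(A)$ would be computed once per block and reused while only the small Schur complement determinant is redone for each proposal:

```python
import random

def det(m):
    """Determinant by cofactor expansion (fine for the tiny matrices here)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def inverse(m):
    """Matrix inverse by Gauss-Jordan elimination with partial pivoting."""
    n = len(m)
    a = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(m)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        piv = a[i][i]
        a[i] = [v / piv for v in a[i]]
        for r in range(n):
            if r != i:
                f = a[r][i]
                a[r] = [vr - f * vi for vr, vi in zip(a[r], a[i])]
    return [row[n:] for row in a]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

rng = random.Random(7)
n, k = 5, 3                              # full size, size of the fixed block A
G = [[rng.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]
A = [row[:k] for row in G[:k]]           # fixed block (reused between updates)
B = [row[k:] for row in G[:k]]
C = [row[:k] for row in G[k:]]
E = [row[k:] for row in G[k:]]
# Schur-complement form of the determinant: Det(G) = Det(A) Det(E - C A^{-1} B)
CAinvB = matmul(matmul(C, inverse(A)), B)
S = [[E[i][j] - CAinvB[i][j] for j in range(n - k)] for i in range(n - k)]
assert abs(det(G) - det(A) * det(S)) < 1e-6
```

When only the rows and columns in $B$, $C$ and $E$ change, `inverse(A)` and `det(A)` are reused, so each proposal costs only the small determinant.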
5.6 Performance of the Algorithm
We have studied thermalization times and autocorrelation times for two observables, the number of bonds and the chiral condensate susceptibility, in units of a sweep. In our work a sweep is defined as a fixed number of updates of all the types discussed above (since worm updates are cheap, we perform several thousand of them per sweep). In Fig. 3 we plot the thermalization time, in units of a sweep, for different volumes, where we initialize configurations with randomly chosen bonds. The figure shows that the system at different volumes can be thermalized within roughly the same number of sweeps. In Fig. 4 we plot the autocorrelation time in units of a sweep. One typically expects the autocorrelation time to grow like $L^z$, where $z$ is the dynamical critical exponent of the algorithm. For most local algorithms $z \approx 2$, while many efficient cluster algorithms are known to have $z$ close to zero. Surprisingly, our data show that $z$ is consistent with zero even at the critical point. However, we must emphasize that the time spent on each sweep grows like the cube of the number of bonds, which is a fixed fraction of the volume at a given value of the coupling; at the critical point this fraction is about an eighth. Despite this scaling with the volume, since the fraction is small we have been able to obtain results on large lattices with modest computing resources.
6 Results
One of the main goals of our work is to compute the critical exponents at the quantum critical point. These have been calculated earlier in Refs. [9, 10, 11, 12] using the HMC method in the traditional approach, in which the four-fermion interaction is converted to a fermion bilinear with the help of an auxiliary field. However, those calculations were performed in the presence of a fermion mass and on lattice volumes that were not very big. The presence of two infrared scales, namely the fermion mass and the size of the box, can be difficult to take into account in the finite size scaling relations, which can lead to large systematic errors. In our work, since the fermions are exactly massless, the analysis is much simpler and cleaner. We focus on the chiral condensate susceptibility $\chi$, the chiral winding susceptibility $\langle Q_\chi^2\rangle$ and the fermion correlator ratio $R_f$ in the vicinity of $U_c$, where we expect finite size scaling relations of the following form to hold:

$\chi\; L^{\eta-2} \;=\; f_\chi\big((U-U_c)\,L^{1/\nu}\big)$   (6.1a)

$\langle Q_\chi^2\rangle \;=\; f_q\big((U-U_c)\,L^{1/\nu}\big)$   (6.1b)

$R_f\; L^{\,2+\eta_\psi} \;=\; f_r\big((U-U_c)\,L^{1/\nu}\big)$   (6.1c)

where each scaling function is expanded as a low order polynomial in $(U-U_c)L^{1/\nu}$ for the purposes of the fit.
For each observable, when the left hand side is plotted as a function of $U$, the curves for different values of $L$ must cross at $U = U_c$. A combined fit to all three scaling relations gives the critical coupling $U_c = 0.2608(2)$ and the critical exponents $\nu = 0.85(1)$, $\eta = 0.65(1)$ and $\eta_\psi = 0.37(1)$, with a good $\chi^2$ per degree of freedom. The complete list of fit parameters is given in Table 1. The results from the combined fit are plotted in Fig. 5. For comparison, one of the earlier works finds somewhat different values for these exponents [11].
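The crossing method itself can be illustrated on synthetic data. The sketch below builds a fake scaled-susceptibility curve from an assumed scaling form, using the fitted exponent values only as inputs, and recovers the critical coupling from the crossing of two volumes by bisection; it is an illustration of the analysis, not our actual data:

```python
# Synthetic illustration of the crossing method: exponents are inputs,
# the polynomial scaling function f(x) = 1 + 0.5 x + 0.1 x^2 is made up.
eta, nu, Uc = 0.65, 0.85, 0.2608

def chi_scaled(U, L):
    """chi * L^(eta - 2) for the toy scaling form."""
    x = (U - Uc) * L ** (1.0 / nu)
    return 1.0 + 0.5 * x + 0.1 * x * x

def crossing(L1, L2, lo, hi, iters=200):
    """Bisect on the difference of the scaled curves for two volumes."""
    g = lambda U: chi_scaled(U, L1) - chi_scaled(U, L2)
    assert g(lo) * g(hi) < 0.0            # the curves cross inside [lo, hi]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

U_cross = crossing(8, 16, 0.2, 0.3)
assert abs(U_cross - Uc) < 1e-10          # the crossing sits at U_c
```

In the real analysis the scaling function is not known, so its polynomial coefficients are fitted simultaneously with the exponents, but the crossing logic is the same.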
Table 1: Parameters obtained from the combined fit to the finite size scaling relations.
0.65(1)  0.37(1)  0.85(1)  0.2608(2)  0.369(3)  0.63(1)  0.52(2)  0.09(1) 
2.52(3)  2.53(5)  0.71(3)  0.10(1)  33.9(2)  5.0(1)  2.0(2)  2.5(5) 
7 Conclusions
In this work we have shown that the recently proposed fermion bag approach is a powerful technique for solving some four-fermion lattice field theories. Due to an interesting duality, the approach is efficient both at weak and strong couplings, and continues to perform well at intermediate couplings. As a first application of the method we studied the critical behavior of the massless Thirring model and found an algorithm that practically eliminates critical slowing down when times are measured in units of sweeps. The time to perform a sweep scales like the cube of the number of degrees of freedom inside a fermion bag, which is usually a small fraction of the volume; in the massless Thirring model we found that this fraction is about an eighth at the quantum critical point. The Hybrid Monte Carlo method has never been successfully applied to massless fermions on large volumes. We have accomplished this using the fermion bag approach on large lattices with moderate computing resources, and were able to compute the critical exponents at the quantum critical point.
Acknowledgments
We thank Philippe de Forcrand, Simon Hands, David Kaplan, Keh-Fei Liu, Costas Strouthos and Uwe-Jens Wiese for discussions. This work was supported in part by the Department of Energy grants DE-FG02-05ER41368 and DE-FG02-00ER41132.
References
 [1] S. R. Sharpe, arXiv:0706.0218.
 [2] S. Chandrasekharan, Phys. Rev. D82 (2010) 025007.
 [3] S. Chandrasekharan and U.-J. Wiese, Phys. Rev. Lett. 83 (1999) 3116.
 [4] E. Gull et al., Rev. Mod. Phys. 83 (2011) 349.
 [5] S. Chandrasekharan and U.-J. Wiese, arXiv:1108.2461.
 [6] S. Mertens and C. Moore, arXiv:1110.1821.
 [7] W. Armour, S. Hands and C. Strouthos, Phys. Rev. B84 (2011) 075123.
 [8] I.-H. Lee and R. E. Shrock, Phys. Rev. Lett. 59 (1987) 14.
 [9] L. Del Debbio and S. Hands, Phys. Lett. B373 (1996) 171.
 [10] L. Del Debbio, S. J. Hands and J. C. Mehegan, Nucl. Phys. B502 (1997) 269.
 [11] L. Del Debbio, in the proceedings of Lattice 1997, arXiv:hep-lat/9709034.
 [12] I. M. Barbour, N. Psycharis, E. Focht, W. Franzki and J. Jersak, Phys. Rev. D58 (1998) 074507.
 [13] D. H. Adams and S. Chandrasekharan, Nucl. Phys. B662 (2003) 220.
 [14] A. N. Rubtsov, V. V. Savkin and A. I. Lichtenstein, Phys. Rev. B72 (2005) 035122.