Uncertainty Quantification in Lattice QCD Calculations for Nuclear Physics

Silas R. Beane, Department of Physics, University of Washington, Box 351560, Seattle, WA 98195, USA (silas@uw.edu)
William Detmold, Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA (wdetmold@mit.edu)
Kostas Orginos, Department of Physics, College of William and Mary, Williamsburg, VA 23187-8795, USA, and Jefferson Laboratory, 12000 Jefferson Avenue, Newport News, VA 23606, USA (kostas@jlab.org)
Martin J. Savage (corresponding author), Institute for Nuclear Theory, Box 351550, Seattle, WA 98195-1550, USA (mjs5@uw.edu)

Abstract

The numerical technique of Lattice QCD holds the promise of connecting the nuclear forces, nuclei, the spectrum and structure of hadrons, and the properties of matter under extreme conditions with the underlying theory of the strong interactions, quantum chromodynamics. A distinguishing, and thus far unique, feature of this formulation is that all of the associated uncertainties, both statistical and systematic, can, in principle, be systematically reduced to any desired precision with sufficient computational and human resources. We review the sources of uncertainty inherent in Lattice QCD calculations for nuclear physics, and discuss how each is quantified in current efforts.

PACS: 12.38.Gc, 21.30.Cb

Keywords: Lattice QCD, Uncertainty Quantification, Nuclear Physics

1 Introduction

Quantum chromodynamics (QCD) and the electroweak interactions are responsible for the nuclear forces, and consequently for the structure and interactions of all nuclei. Historically, the complexity of QCD has prevented direct calculation of the properties of low-energy and medium-energy nuclear systems. However, after decades of development, Lattice QCD (LQCD), a technique to numerically evaluate the QCD path integral, promises to permit QCD calculations of low-energy nuclear processes with uncertainties that can be systematically reduced to any desired precision with sufficient human and computational resources. Before having complete confidence in LQCD predictions for nuclear physics, it is critical to verify it as a reliable technique. This will be accomplished by demonstrating agreement with a diverse array of experimental measurements, and showing that the uncertainties of the LQCD calculations behave as expected with, for instance, increasing lattice volumes, numbers of gauge-field configurations and decreasing lattice spacings.

As with any meaningful prediction, the uncertainties associated with a LQCD calculation define its utility. A complete quantification of all of the uncertainties associated with any given calculation is essential for it to be scientifically complete and provide more than a calculational benchmark. During the last few years, the LQCD community has self-organized and assembled a compendium of lattice results, mainly of importance for particle physics [1], including a comprehensive analysis of all associated uncertainties. This compendium represents the consensus of the entire community and is along the same lines as the Particle Data Group summary of experimental results in high-energy physics. It is a valuable resource, both within and outside of the LQCD community. Few quantities of importance to nuclear physics currently appear in this compendium as many calculations remain in exploratory stages. As these calculations mature, we expect they too will be added to the LQCD compendium.

In this article, we outline the array of techniques required to perform LQCD calculations for Nuclear Physics, and identify the uncertainties that are inherent in each of these techniques. We discuss the procedures and uncertainties associated with generating the configurations of gluon fields, with the construction of correlation functions of quarks and gluons, and finally with the extraction of physical information from hadronic and nuclear correlation functions. Our presentation is aimed at nuclear physicists who are not experts in lattice field theory methods.

2 Lattice QCD Technology

2.1 The QCD Path Integral

There is only one known way to rigorously define QCD non-perturbatively, and that is as the continuum limit of a lattice gauge theory. The spacetime lattice provides an ultraviolet regulator of the continuum field theory and admits numerical evaluation of the functional integrals required for calculating physical observables. The QCD partition function is

\mathcal{Z} = \int \mathcal{D}A_\mu\, \mathcal{D}\bar{q}\, \mathcal{D}q\ \exp\left\{ -\int d^4x \left[ \frac{1}{4}\, G^{(a)}_{\mu\nu} G^{(a)\mu\nu} + \sum_f \bar{q}_f \left( \gamma^\mu D_\mu + m_f \right) q_f \right] \right\}    (1)

where $A_\mu$ is the QCD gauge field (describing the gluons), $G^{(a)}_{\mu\nu}$ is the gauge field strength and $q_f$, $\bar{q}_f$ are the quark fields representing quarks of flavor $f$. $D_\mu$ is the QCD covariant derivative and $\gamma^\mu$ are the Dirac matrices. Physical observables are calculated from correlation functions of operators, $\mathcal{O}$, that are functions of the quantum fields (quarks and gluons), generically written as

\langle \mathcal{O} \rangle = \frac{1}{\mathcal{Z}} \int \mathcal{D}A_\mu\, \mathcal{D}\bar{q}\, \mathcal{D}q\ \ \mathcal{O}(q, \bar{q}, A)\ e^{-\mathcal{S}_{\rm QCD}}    (2)

The functional integrals above require regularization and can be straightforwardly defined on a discrete spacetime that we will take to be a regular hypercubic lattice. In order to preserve gauge invariance, the gauge fields are discretized as $SU(3)$ matrices, $U_\mu(x)$, associated with the links of the lattice (see Figure 1).

Figure 1: A two dimensional slice of the four dimensional spacetime lattice. $\hat{\mu}$ and $\hat{\nu}$ denote unit vectors in the indicated directions. $q(x)$ denotes a quark field at the lattice site $x$, $U_\mu(x)$ denotes the gauge link from the lattice-site $x$ to the site $x+\hat{\mu}$, and $U_{\mu\nu}(x)$ denotes the Wilson plaquette centered at $x$.

The simplest discretized form of the gauge action is the sum over all plaquettes, $U_{\mu\nu}(x)$, formed from the product of links, $U_\mu(x)$, around elementary plaquettes of the lattice,

S_g = \beta \sum_{x,\ \mu<\nu} \left[\, 1 - \frac{1}{3}\, {\rm Re}\,{\rm Tr}\ U_{\mu\nu}(x) \,\right]    (3)

with

U_{\mu\nu}(x) = U_\mu(x)\, U_\nu(x+\hat{\mu})\, U^\dagger_\mu(x+\hat{\nu})\, U^\dagger_\nu(x)    (4)

and $\beta = 6/g^2$, where $g$ is the lattice gauge coupling. Taking the naive continuum limit, this action reduces to the familiar continuum gauge action, $\frac{1}{4} G^{(a)}_{\mu\nu} G^{(a)\mu\nu}$. The action in Eq. (3) is the Wilson gauge action [2], and while this discretization is not unique, it is the simplest. It can be modified by adding larger loops of links with coefficients appropriately chosen to achieve a more rapid approach to the continuum [3], which is an essential goal of such calculations.

2.2 Including Quarks

Including the dynamics of quarks, which are defined on the vertices of the lattice, is a challenge. A naive discretization of the continuum action describing a single fermion introduces 16 lattice fermion flavors in four dimensions. The additional 15 light lattice fermion degrees of freedom, referred to as “doublers”, can be avoided through the use of several ingenious formulations of lattice fermions. Wilson fermions, which were the first to be introduced [2], eliminate the doublers by adding an irrelevant dimension-five operator to the action that lifts the masses of the doublers, leaving only one light fermion in the spectrum. However, this approach explicitly breaks chiral symmetry and introduces lattice artifacts that scale as ${\cal O}(a)$, where $a$ is the lattice spacing. Kogut-Susskind fermions [4] (staggered fermions) provide another way to remove some of the doublers and re-interpret the remaining four as four degenerate flavors. In this approach, a chiral symmetry remains unbroken and lattice artifacts scale as ${\cal O}(a^2)$. Kogut-Susskind fermions become problematic when the required number of flavors is not a multiple of four (as is the case for QCD in nature). One approach to deal with this is to take the fourth root of the corresponding determinant (see below). Although this “rooting” is not justified at non-zero lattice spacing, current numerical evidence suggests that the effects are negligible for many quantities. Finally, the domain-wall [5, 6, 7] and overlap [8, 9] discretizations preserve a lattice chiral symmetry at finite lattice spacing and are doubler free. Unfortunately, such formulations are significantly more computationally expensive. In all cases, the lattice fermion action is of the form $S_q = \bar{\psi}\, M[U]\, \psi$, where $\psi$ is the fermion “vector” and $M[U]$ is a sparse matrix acting on the fermion vector that depends on the gauge field, $U$. (In certain cases, such as with overlap fermions, the matrix is not sparse but has sparse-like properties.)

In the case of two quark flavors, the discretized partition function is

\mathcal{Z} = \int \mathcal{D}U\ \det\left( M^\dagger[U]\, M[U] \right)\ e^{-S_g[U]}    (5)

where the integrations over the quark fields (represented by Grassmann numbers) have been performed exactly, resulting in the determinant factor. Although the quark matrix $M[U]$ represents one flavor, the determinant represents two flavors as $\det\left(M^\dagger[U]\, M[U]\right) = \left|\det M[U]\right|^2$. In the case of correlation functions defining observables, integrating over the quarks gives

\langle \mathcal{O} \rangle = \frac{1}{\mathcal{Z}} \int \mathcal{D}U\ \ \mathcal{O}\left( M^{-1}[U];\, U \right)\ \det\left( M^\dagger[U]\, M[U] \right)\ e^{-S_g[U]}    (6)

where the operators now depend on the inverse of the quark matrix, $M^{-1}[U]$, and (possibly explicitly) on the gauge fields. The above expressions are only valid in the case of two flavors of quarks (the up and the down), assumed to have the same mass for these discussions, which is a good approximation to the low energy physics of QCD. The strange quark is easily accommodated by including a factor of $\det M_s[U]$ in the partition function, as are the charm quarks through a similar factor (although their effects in quantities of importance to low-energy nuclear physics are expected to be small).

2.3 Monte Carlo Evaluation of the Path Integral

The evaluation of $\langle \mathcal{O} \rangle$ in Eq. (6) is the main numerical task faced in LQCD calculations. The integrations over the gauge fields are of extremely large dimensionality, and are made practical by restricting spacetime to a compact region. Given that QCD has a fundamental length scale of ${\cal O}(1~{\rm fm})$ (set by $\Lambda_{\rm QCD}$), calculations must be performed in lattice volumes (volumes are denoted by $L^3 \times T$, where $L$ is the number of lattice sites in each spatial direction and $T$ the number in the temporal direction) that have a physical size of several fm in order to control finite volume effects, and with lattice spacings $a \lesssim 0.1~{\rm fm}$ in order to be close to the continuum limit. With moderate choices for the volume and lattice spacing, a lattice of ${\cal O}(10^7)$ sites is currently practical. Accounting for the color and spin degrees of freedom, such calculations involve ${\cal O}(10^8{-}10^9)$ degrees of freedom. The only practical way for this type of computation to be done is by Monte Carlo integration. Fortunately, the combination of the quark determinant and gauge action,

\mathcal{P}[U]\ \propto\ \det\left( M^\dagger[U]\, M[U] \right)\ e^{-S_g[U]}    (7)

is a positive-definite quantity that can be interpreted as a probability measure and hence importance sampling methods can be used in performing the integrations. As will be discussed in detail in the next section, the basic procedure is to generate gauge field configurations $U_i$ representative of the probability distribution $\mathcal{P}[U]$ and then evaluate

\langle \mathcal{O} \rangle\ \approx\ \frac{1}{N} \sum_{i=1}^{N} \mathcal{O}\left( M^{-1}[U_i];\, U_i \right)    (8)

At finite $N$, the estimate of $\langle \mathcal{O} \rangle$ is approximate, with an uncertainty that can be shown to scale as $1/\sqrt{N}$.
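The $1/\sqrt{N}$ behavior can be illustrated with a toy importance-sampling calculation that shares the structure of Eq. (8) but replaces QCD with a single anharmonic degree of freedom. This is a minimal sketch for illustration only; the action, parameters and sample sizes are invented for the example and bear no relation to any production LQCD code.

```python
import numpy as np

rng = np.random.default_rng(7)

def action(x):
    # Toy "action" for a single anharmonic degree of freedom.
    return 0.5 * x**2 + 0.25 * x**4

def metropolis_chain(n_samples, step=1.0):
    # Metropolis sampling of the distribution exp(-S(x))/Z.
    x, chain = 0.0, np.empty(n_samples)
    for i in range(n_samples):
        x_new = x + rng.uniform(-step, step)
        if rng.uniform() < np.exp(action(x) - action(x_new)):
            x = x_new
        chain[i] = x
    return chain

# Estimate <x^2> with increasing sample size: the naive uncertainty
# (which ignores autocorrelations, see Sec. 2.5) falls as 1/sqrt(N).
for n in [10**3, 10**4, 10**5]:
    x2 = metropolis_chain(n)**2
    print(n, x2.mean(), x2.std() / np.sqrt(n))
```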

Both for the gauge field configuration generation and the evaluation of Eq. (8), the linear system of equations,

M(m;\, U)\ x\ =\ b    (9)

must be solved, where $m$ is the quark mass and the vectors $x$ and $b$ will be discussed below. Since $M$ is sparse, iterative solvers can be used. The condition number of $M$ (the ratio of largest to smallest eigenvalue), and therefore the computational resources required for the solution of Eq. (9), is inversely proportional to the quark mass. Since the physical quark masses for the up and down quarks are small relative to the typical scale of QCD, $M$ has a large condition number, and it is only recently that calculations at the physical quark masses have become computationally feasible. The vast majority of the computational resources used in nuclear physics LQCD calculations are devoted to the solution of this linear system, both in the context of gauge-field generation and in the later stage of the calculation of physical observables through Eq. (8). Significant gains in the efficiency of these calculations have been achieved through applications of state-of-the-art numerical linear algebra algorithms such as deflation [10, 11, 12, 13], and multigrid [14, 15, 16], as will be further discussed below.
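As an illustration of the iterative approach, the following is a minimal conjugate gradient solver applied to the normal equations of a random non-Hermitian matrix standing in for $M$; the matrix, its size and its conditioning are invented for the example (a real Dirac operator is far larger, sparse, and much worse conditioned).

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=1000):
    # Conjugate gradient for a Hermitian positive-definite matrix A.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = np.vdot(r, r)
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rr / np.vdot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = np.vdot(r, r)
        if np.sqrt(abs(rr_new)) < tol * np.linalg.norm(b):
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# A non-Hermitian "Dirac-like" matrix M: solve via the normal equations,
# M^dag M x = M^dag b, as is done for Wilson-type fermions.
rng = np.random.default_rng(0)
n = 200
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = cg(M.T @ M, M.T @ b)
print("relative residual:", np.linalg.norm(M @ x - b) / np.linalg.norm(b))
```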

2.4 Gauge Field Generation: Hybrid Monte Carlo

The first stage of LQCD calculations is the generation of suitable ensembles of gauge configurations. At present, the most efficient algorithm for generating such ensembles is Hybrid Monte Carlo (HMC) [17]. Because of the quark determinant, methods that rely on local updates of the fields, such as the heatbath or the Metropolis algorithms, are of limited use as their computational requirements scale poorly with the volume, $V$. HMC involves a noisy representation of the determinant and introduces global updates of the gauge fields, achieving volume scaling of $V^\delta$ where $\delta$ is only slightly larger than unity. Other methods, such as the $R$-algorithm and the $\Phi$-algorithm [18], have also been used; however, such methods are not exact and have a small systematic error that must be carefully controlled. The global update of the gauge field using HMC is obtained through a Hamiltonian evolution from an initial gauge-field configuration and random initial momenta (drawn from a Gaussian distribution). In order to integrate the Hamiltonian dynamics, reversible discrete integrators are used so that detailed balance of the update procedure is maintained, as is required for Eq. (8) to be satisfied. The simplest forms of such integrators are of second order, however recently, higher order integrators have been employed following the work of Omelyan et al. [19]. The volume scaling of the algorithm [20, 21, 22, 23] depends on the integrator, with the resource requirements, $\mathcal{C}$, scaling as

\mathcal{C}\ \sim\ c\ \left( \frac{m_\rho}{m_\pi} \right)^{z}\ V^{\,1 + \frac{1}{2n}}    (10)

where $n$ is the order of the integrator, $m_\pi$ is the pion mass, $m_\rho$ is the $\rho$-meson mass (with $m_\pi < m_\rho$), $V$ is the volume of the system, $c$ is a constant of appropriate dimensions, and $z$ is an exponent that ranges between 4 and 6. Currently, several algorithmic improvements, such as preconditioned HMC [24, 25, 26], are often used to reduce $c$ and $z$, and further, higher order integrators result in better volume scaling.
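The structure of an HMC update — Gaussian momenta, reversible leapfrog integration of the molecular dynamics, and a Metropolis accept/reject step that renders the algorithm exact at finite step size — can be sketched for a toy one-dimensional scalar field theory standing in for the gauge field. The mass, coupling and trajectory parameters below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
m2, lam = 0.5, 0.1

def S(phi):
    # Euclidean action of a periodic 1D lattice scalar field.
    kin = 0.5 * np.sum((np.roll(phi, -1) - phi)**2)
    return kin + np.sum(0.5 * m2 * phi**2 + lam * phi**4)

def grad_S(phi):
    return (2.0 * phi - np.roll(phi, 1) - np.roll(phi, -1)
            + m2 * phi + 4.0 * lam * phi**3)

def hmc_trajectory(phi, n_md=20, dt=0.05):
    # One HMC update: Gaussian momenta, leapfrog, Metropolis accept/reject.
    pi = rng.standard_normal(phi.size)
    H_old = 0.5 * np.dot(pi, pi) + S(phi)
    phi_new = phi.copy()
    pi = pi - 0.5 * dt * grad_S(phi_new)        # initial half-step
    for _ in range(n_md - 1):
        phi_new += dt * pi
        pi -= dt * grad_S(phi_new)
    phi_new += dt * pi
    pi -= 0.5 * dt * grad_S(phi_new)            # final half-step
    H_new = 0.5 * np.dot(pi, pi) + S(phi_new)
    # The accept/reject step removes the finite-step-size error exactly.
    return phi_new if rng.uniform() < np.exp(H_old - H_new) else phi

phi, obs = np.zeros(64), []
for traj in range(2000):
    phi = hmc_trajectory(phi)
    if traj >= 500:                             # discard thermalization
        obs.append(np.mean(phi**2))
print("<phi^2> =", np.mean(obs), "+/-", np.std(obs) / np.sqrt(len(obs)))
```

The second-order leapfrog integrator used here is the simplest reversible scheme; replacing it with a higher-order (e.g., Omelyan-type) integrator allows larger step sizes at fixed acceptance, which is the origin of the improved volume scaling in Eq. (10).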

2.5 Continuum Limit and Autocorrelations

There is a large class of lattice gauge actions that have QCD as their continuum limit. For that reason, a variety of lattice actions are used by different collaborations worldwide. Comparing the continuum limit results obtained for a given quantity using different actions provides confidence that the calculations are performed correctly and that uncertainties are quantified appropriately.

Continuum extrapolated results in the isospin limit are functions of only three parameters (four including strong isospin breaking): the values of the quark masses (up/down and strange) and the characteristic QCD scale, $\Lambda_{\rm QCD}$, that emerges from quantum effects. The tuning of the quark masses can be performed so that chosen meson masses coincide with their experimental values. In recent years, several calculations have used the ratios

\frac{m_\pi^2}{m_\Omega^2} \qquad {\rm and} \qquad \frac{2 m_K^2 - m_\pi^2}{m_\Omega^2}    (11)

to tune the bare quark masses, where $m_\pi$ and $m_K$ are the (isospin-averaged) pion and kaon masses, respectively, and $m_\Omega$ is the mass of the $\Omega^-$ baryon [27]. By demanding that these ratios reproduce their experimental values, the bare light- and strange-quark masses in the calculation(s) are determined. The continuum limit can be taken keeping these ratios fixed. In addition, $\Lambda_{\rm QCD}$, or equivalently the inverse lattice spacing $a^{-1}$, is determined in physical units (MeV) using the experimental value of another hadron mass, or a derived quantity such as the Sommer parameter $r_0$ [28] or the $w_0$ parameter [29, 30]. The mass of the $\Omega^-$ baryon is currently a popular choice as it depends weakly on the up- and down-quark masses. Provided that the matching to experiment can be performed at the physical quark masses, the scale determination is robust, with different choices of quantities with which to match resulting in only small changes in the extracted scale which can be quantified. For a recent review on scale setting issues, the reader is referred to [31].

Uncertainties introduced by the choice of the hadronic observable for scale setting arise from discretization effects, which are removed once the continuum limit is taken. Because such uncertainties percolate through to all computed observables, careful thought is needed in choosing the hadronic observable that sets the scale in order to minimize uncertainties in other physical quantities. A good choice of observable is one that can be computed with the smallest systematic and statistical uncertainties. The robustness of the continuum extrapolation is improved if the scale setting observable is chosen to have only weak dependence on the discretization and the light-quark masses. The remaining systematic uncertainties from setting the scale arise from physics that is not included in the calculation, for instance, electromagnetic (EM) effects, isospin breaking effects, as well as the omission of vacuum polarization effects due to heavy flavors. All these effects are currently analyzed and steps are taken to minimize their impact. Calculations that include both isospin breaking and EM are under way [32, 33, 34, 35], as are calculations with dynamical charm quarks [35, 36, 37, 38].

For the continuum extrapolation, calculations at a sequence of fine lattice spacings are required. Unfortunately, as the continuum limit is approached, the autocorrelation time (corresponding to the number of updates in the Markov chain of gauge configurations after which an observable is statistically independent) becomes large for some observables. This was first observed in quenched calculations [39, 40], and recently studied in detail in dynamical fermion calculations [41, 42]. It was found that for the topological charge, the integrated autocorrelation time, $\tau_{\rm int}$, grows rapidly with decreasing lattice spacing, scaling approximately as $a^{-5}$. This observation further increases the computational resource requirements of calculations near the continuum limit. However, the use of open boundary conditions may help to resolve the problem [43, 44] (although other recent work [45] suggests this may be optimistic). Unreliable estimates of both the statistical and systematic uncertainties may arise from long autocorrelation times. The effects can be treated with statistical analysis methods on ensembles derived from Markov processes of length much longer than the autocorrelation time. Such methods are well understood and are part of the standard methodologies used in the LQCD community. For a careful discussion of autocorrelation effects the reader should consult Refs. [46, 47].
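In practice, the integrated autocorrelation time is estimated from the Monte Carlo time series itself by summing the normalized autocorrelation function up to a self-consistently chosen window, in the spirit of the analyses discussed in Refs. [46, 47]. The following sketch illustrates such an estimator on synthetic AR(1) data with a known autocorrelation time; the windowing constant and the data are illustrative choices.

```python
import numpy as np

def integrated_autocorr_time(series, c=4.0):
    # Estimate tau_int with a self-consistent window W: the smallest W
    # such that W >= c * tau_int(W) (Madras-Sokal style truncation).
    x = np.asarray(series) - np.mean(series)
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:]
    acf = acf / (np.arange(n, 0, -1) * x.var())
    tau = 0.5
    for W in range(1, n // 2):
        tau = 0.5 + np.sum(acf[1:W + 1])
        if W >= c * tau:
            break
    return tau

# Example: an AR(1) series with tau_int = (1 + rho) / (2 (1 - rho)).
rng = np.random.default_rng(3)
rho = 0.9
y = np.empty(5000)
y[0] = 0.0
for i in range(1, len(y)):
    y[i] = rho * y[i - 1] + rng.standard_normal()
print("tau_int ~", integrated_autocorr_time(y), " (exact: 9.5)")
```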

2.6 Quark Propagators

A second major ingredient in almost all LQCD calculations is the quark propagator, $S(x,y)$, which is given by the inverse of the Dirac operator. As seen in Eq. (6), after integrating out the quark degrees of freedom in the functional integrals defining the correlation functions that need to be studied to extract physical observables, an expression involving an integral over the remaining gauge degrees of freedom that depends on $M^{-1}[U]$ remains. The propagator is a matrix in spacetime, color and Dirac space, and each column is the solution of the equation

M[U]\ S\ =\ G    (12)

where G is a source that may be a Dirac delta function at a particular site, or a smeared version that has support in a particular region. In addition, momentum plane waves in fixed gauge, distillation and dilution vectors [48] and various other structures can be used as sources.

Given the dimensions of the objects involved in Eq. (12) (for current large-scale calculations, the dimension of $M$ may exceed $10^8$), and the sparsity pattern of the Dirac matrix, iterative methods provide the only practical approach to solving this linear system and determining the necessary components of the propagator. Most modern calculations use Krylov-space based solvers such as conjugate gradient (CG), stabilised bi-conjugate gradient (BiCGSTAB), or deflated versions such as EigCG [12] for this task. Some discretized quark actions (e.g., the Wilson action) are such that the Dirac operator is not Hermitian, in which case many of the simplest algorithms must be applied to the normal equations, $M^\dagger M\, x = M^\dagger b$, instead of the direct system, with a resulting increase in computational resource requirements. In recent years, preconditioners that reduce the condition number of the matrix have become quite common. One such preconditioner, known as algebraic multigrid, has been shown to be particularly efficient for the QCD problem [16, 49]. Solution of these linear systems forms a large part of the computational cost of LQCD calculations, and thus the calculations of these matrix inverses have been highly optimized for many computing architectures. In particular, optimised codes exist for IBM BlueGene and Cray supercomputers, clusters, nVidia GPUs [50, 51, 52] and Intel Xeon Phis.

For gauge-field configurations that are physically large compared to the QCD scale, it is beneficial to make use of translational invariance to exploit the full statistical power of the computationally expensive gauge-field configurations. To this end, it is advantageous to compute propagators from multiple, physically-separated source locations on the same configuration, and subsequently average measurements over these different locations. This amounts to solving Eq. (12) repeatedly with the same $M[U]$, but multiple different right-hand sides. This makes the application of more complex solvers computationally viable; for solvers such as EigCG and multigrid, there is a significant setup cost involved that must be performed once but can then be reused to accelerate subsequent solves. By amortizing this cost over a large number of solves, these algorithms lead to order-of-magnitude increases in computational speed compared to CG and even BiCGSTAB.

Since the methods used are iterative, applying a set of steps repeatedly until a convergence criterion is satisfied, the desired criterion and precision goal must be specified. Since there are significant fluctuations intrinsic to the importance sampling of the gauge field, it is only useful to solve the above linear systems to a precision that is marginally better than the gauge-field noise. However, for typical calculations at the present time, the desired precision of solves is typically a small relative error on the residual norm, $||b - Mx||/||b||$, approaching machine precision. While the final solution may be required with double precision accuracy, it is possible to obtain this accuracy in a computationally expedient way by first solving the system in single precision and then using this solution as a starting point for the more expensive double precision solve, which will then converge in relatively few iterations. On GPUs, half-precision numerical representations are also available and result in effectively twice the performance of single precision. The QUDA library for propagator inversions on GPUs takes advantage of this feature and includes mixed half-double precision solves.
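The mixed-precision strategy is a form of iterative refinement: the inner solve is performed cheaply in low precision, while the residual is recomputed in double precision so that the accumulated solution reaches the full-precision target. A schematic version follows, with a dense direct solve standing in for the optimized single-precision Krylov solver and an invented, well-conditioned test matrix.

```python
import numpy as np

def solve_single(A32, b32):
    # Stand-in for an optimized single-precision inner solver (in a real
    # LQCD code this would be CG/BiCGSTAB running on the accelerator).
    return np.linalg.solve(A32, b32)

def mixed_precision_solve(A, b, tol=1e-12, max_cycles=30):
    # Iterative refinement: cheap single-precision solves drive the
    # double-precision residual down to the requested tolerance.
    A32 = A.astype(np.float32)
    x = np.zeros_like(b)
    for _ in range(max_cycles):
        r = b - A @ x                      # residual in double precision
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        e = solve_single(A32, r.astype(np.float32))
        x += e.astype(np.float64)          # accumulate correction in double
    return x

rng = np.random.default_rng(5)
n = 300
B = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = B.T @ B                                # symmetric positive definite
b = rng.standard_normal(n)
x = mixed_precision_solve(A, b)
print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```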

An interesting development in recent years has been the construction of improved estimators for many physical observables. These techniques aim to perform a modified set of measurements in which statistical fluctuations are small. A number of variants of this approach exist, such as low-mode averaging [53, 54] or truncated solver methods [55], which are being tested in single hadron calculations. One promising technique is covariant approximation averaging [56, 57]. These methods attempt to speed up the solution of the linear systems by calculating low eigenmodes exactly or by performing “sloppy precision solves” and then correcting for the residual. They are currently being used for calculations of nucleon form factors and related observables.

Many observables, such as flavour singlet meson masses and iso-singlet matrix elements, involve quark-line-disconnected contributions in which a quark propagator must be evaluated from every site in the lattice to itself. Each solution of Eq. (12) provides a propagator from a single site to everywhere, so in order to have propagators from every site to itself, ${\cal O}(V)$ solutions of Eq. (12) are required. This is a prohibitive task to perform deterministically, but significant progress has been made in estimating such all-to-all propagators using stochastic volume sources. Recent progress has also been made with the introduction of probing methods [58, 59]. Nevertheless, quantities that require all-to-all propagators remain challenging.

2.7 Mistuning Input Parameters

The tuning of the quark masses is accomplished by performing a combination of extrapolations from heavier masses and of low-statistics calculations in the vicinity of the parameter set of interest, followed by an interpolation to the desired point. As this is accomplished with a relatively small number of measurements compared to those involved in the actual production, the tuning will always be imperfect. Only after the production is complete, involving calculations at multiple lattice spacings and multiple volumes, are the meson masses and scale setting known with high precision. These can be used with great effect in subsequent tunings; however, the mistunings require that small corrections are made to the results that have been generated in order to make predictions. Consequently, multiple calculations are needed in the vicinity of the quark masses of interest in order to be able to systematically eliminate the impact of the mistuning. Alternatively, reweighting methods [60] can be used to replace the determinant terms in Eq. (6) with ones with corrected mass parameters. As the quark masses will be close to the desired ones, and all physical results will be smooth functions (for sufficiently small deviations), simple polynomial forms describing the behavior of the quantity of interest will be sufficient to interpolate to the desired quark masses. For precision calculations, the uncertainty associated with this mistuning of input parameters must be quantified.

3 Spectroscopy : Two-Point Correlation Functions

A central task of LQCD is to perform hadron spectroscopy. To achieve this, quark propagators are contracted together with the appropriate flavor, Dirac and spacetime structure to generate correlation functions with the desired quantum numbers. These correlation functions are then analyzed with an array of statistical techniques to extract energies and energy differences and their corresponding uncertainties.

3.1 Euclidean Space Correlation Functions

For lattice actions with a positive-definite transfer matrix [61, 62], such as the Wilson gauge and quark actions, Euclidean space two-point correlation functions are sums of exponential functions. (For many improved actions, terms in the action extend over multiple time slices and the transfer matrix is not positive definite at the lattice scale. However, a positive definite effective transfer matrix emerges over physical length scales. In addition, domain-wall fermions do not have a single time slice 4D transfer matrix, and the correlation functions can exhibit additional sinusoidally modulated exponential behavior with a period set by the lattice spacing. Until the continuum limit is taken, this introduces a systematic error that is difficult to quantify. Consequently, calculations that seek to probe short distance details of QCD, such as the excited state spectrum, tend to use actions with no temporal improvement. If improvement is performed in this asymmetric way, this introduces a further systematic, in that the action is anisotropic – spatial and temporal directions are not interchangeable. This translates into an anisotropy between spatial and temporal lattice spacings that must be determined, and its impact on final uncertainties must be quantified.) The arguments of the exponentials are the product of Euclidean time with eigenvalues of the finite-volume Hamiltonian. For a lattice that has infinite extent in the time direction, the correlation functions become a single exponential at large times, dictated by the ground-state energy and the overlap of the source and sink with the ground state. As an example, consider the pion two-point function, $C_{\pi^+}(t)$, generated from interpolating operators of the form $\pi^+(x) = \bar{d}(x)\gamma_5 u(x)$,

C_{\pi^+}(t) = \sum_{\mathbf{x}} \left\langle\, \pi^-(\mathbf{x}, t)\ \pi^+(\mathbf{0}, 0)\, \right\rangle = \sum_{n} Z_n\ e^{-E_n t}    (13)

where the sum over all lattice sites at each time slice, $t$, projects onto zero-momentum states. The source is not an eigenstate of QCD and not only couples to single pion states, but also to all other states with the quantum numbers of the pion. More generally, the source and sink can be distributed (smeared) over a subset of lattice sites to increase the overlap onto the ground state. Eq. (13) shows that the lowest energy-eigenvalue extracted from the correlation function corresponds to the mass of the $\pi^+$ (and, more generally, the mass of the lightest hadronic state that couples to the source and sink) in the finite volume.

Once such a correlation function has been calculated on a set of gauge-field configurations, the simplest objective is to extract the argument of the exponential function that persists at large times. One way to do this is to simply fit the correlation function over a finite number of time-slices to a single exponential function. A second method, that is useful in visually assessing the quality of the calculation, is to construct the effective-mass (EM) function from Eq. (13) as

M_{\rm eff}(t) = \log\left( \frac{C(t)}{C(t+1)} \right)    (14)

where $t$ and $t+1$ are in lattice units. At large times, $M_{\rm eff}(t)$ becomes a constant equal to the mass of the lightest state contributing to the correlation function. (This is obviously the most simplistic approach to this problem. One well-known method to extract the ground state and excited state energies is the variational method [63, 64], which is discussed below.) The anti-periodic (periodic) boundary-conditions (BCs) in the time direction, imposed on the quark (gluon) fields in order to recover the correct partition function, result in the correlation functions being sums of forward and backward propagating exponentials in the time direction.
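A minimal implementation of Eq. (14), applied to a synthetic two-state correlator (with invented energies and overlaps), shows the approach of the effective mass to its ground-state plateau:

```python
import numpy as np

def effective_mass(corr):
    # M_eff(t) = log[ C(t) / C(t+1) ], with t in lattice units (Eq. (14)).
    corr = np.asarray(corr)
    return np.log(corr[:-1] / corr[1:])

# Synthetic two-state correlator: the excited state contaminates small t,
# and M_eff(t) plateaus at the ground-state mass (0.5 here) at large t.
t = np.arange(32)
C = 1.0 * np.exp(-0.5 * t) + 0.4 * np.exp(-0.9 * t)
print(np.round(effective_mass(C), 4))
```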

More sophisticated methods aim to extract multiple energy eigenvalues. Fitting correlation functions to a sum of exponentials (or hyperbolic functions) to extract the ground state energy requires fitting ranges that start at time separations from the source that are large enough so that the first and higher excited states make negligible contributions. The determination of the minimum time separation that can be included in the fit is sometimes subjective. Hence a systematic uncertainty from the choice of the minimum time separation in the fit is included, and is estimated by observing the variation of the extracted results as a function of the choice of fitting interval.

Multi-hadron correlation functions are somewhat more complicated, particularly with a finite temporal direction, due to the many allowed combinations of hadrons propagating backwards in time. These require significantly more involved analyses [65, 66].

3.2 Scaling with Source Density and Number of Configurations

It is interesting to explore the scaling of the uncertainties in the masses of the hadrons with the number of source locations and also with the number of gauge-field configurations. On any given configuration, it is possible to perform a number of measurements equal to the number of lattice sites (for a given source structure). However, the uncertainty in the extracted energies is expected to scale as $1/\sqrt{N}$ with the number of measurements, $N$, at low source density, but when the density approaches one source per hadronic volume, deviations from this scaling are expected. To demonstrate this behavior [67], the dependence of hadron masses on the number of sources obtained on an ensemble of gauge configurations is shown in Figure 2. The fractional uncertainties in the masses of the $\pi$ and nucleon are shown as a function of the number of sources used on each configuration. A simple fit of the form $c\, N_{\rm src}^{\alpha}$ returns, for the nucleon, an exponent of $\alpha = -0.41(3)$.

Figure 2: The fractional uncertainty in the extracted masses of the $\pi$ (left panel) and nucleon (right panel) as a function of the number of sources on each configuration [67]. Statistical and systematic uncertainties have been combined in quadrature.

The uncertainty in the energy of the pion is seen to saturate at relatively low source density compared to that of the heavier hadrons. This behavior is expected from the differing Compton wavelengths of the hadrons. In contrast, the scaling with the number of configurations is seen to be consistent with $1/\sqrt{N_{\rm cfg}}$ for each of the hadrons, as expected.

Figure 3: The fractional uncertainty in the extracted masses of the $\pi$ and nucleon as a function of the number of gauge-field configurations [67]. Statistical and systematic uncertainties have been combined in quadrature.

Figure 3 shows the fractional uncertainty in the mass of the $\pi$ (left panel) and nucleon (right panel) as a function of the number of configurations. An extrapolation can be performed with a fit to the uncertainties in Figure 3 of the form $c\, N_{\rm cfg}^{\alpha}$. The exponents extracted in these fits are $\alpha = -0.55(4)$ and $-0.38(4)$ for the $\pi$ and nucleon, respectively [67].
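Such power-law fits are straightforward with standard least-squares tools. The sketch below fits $\sigma(N) = c\, N^{\alpha}$ to illustrative (invented) fractional uncertainties; in a real analysis the data and their errors would come from the resampling methods of Sec. 3.3.1.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(N, c, alpha):
    # sigma(N) = c * N^alpha; alpha = -1/2 for independent measurements.
    return c * N**alpha

N = np.array([50, 100, 200, 400, 800])
sigma = np.array([0.040, 0.028, 0.020, 0.014, 0.010])   # invented data
popt, pcov = curve_fit(power_law, N, sigma, p0=[0.3, -0.5])
print("alpha = %.3f +/- %.3f" % (popt[1], np.sqrt(pcov[1, 1])))
```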

3.3 Analysis of Correlation Functions

3.3.1 Statistical Analysis Methods

Since Monte-Carlo integration is used to compute the correlation functions, they are subject to statistical uncertainty that must be carefully determined. The main observables extracted from the calculations presented in this review are energy eigenvalues and their differences, which contain information about phase shifts, scattering lengths and three-body interactions. The energy eigenvalues are extracted by fitting the relevant correlation functions to a sum of exponentials. The optimal values for the energy are extracted from correlated $\chi^2$-minimization fits that take into account the correlations in the lattice calculations, both between different gauge configurations and between different times in a given configuration. In the case of energy shifts, the correlations between the energies of different states are also accounted for. In particular, the relevant parameters, such as the energies and the amplitude of each state that contributes to the correlation function, are determined as those that minimize

\chi^2[\lambda] = \sum_{t, t'} \left[\, \bar{C}(t) - f(t; \lambda)\, \right]\ \mathcal{C}^{-1}(t, t')\ \left[\, \bar{C}(t') - f(t'; \lambda)\, \right]    (15)

where $f(t; \lambda)$ is the fitting function, $\lambda$ denotes the set of fitting parameters over which $\chi^2$ is minimized, and

\bar{C}(t) = \frac{1}{N} \sum_{i=1}^{N} C_i(t)\ \ ,\qquad \mathcal{C}(t, t') = \frac{1}{N(N-1)} \sum_{i=1}^{N} \left[\, C_i(t) - \bar{C}(t)\, \right] \left[\, C_i(t') - \bar{C}(t')\, \right]    (16)

are the average correlation function and correlation matrix, respectively. The uncertainties in the fitted parameters are determined by the boundaries of the ellipsoid defined by a given confidence level, typically 68% or 95%. (For a pedagogical presentation of fitting, see Ref. [68].) It is important to account for correlations in a manner that gives the best estimate of the statistical uncertainty. Given $N$ independent measurements of an energy level, it is straightforward to obtain the sample mean and variance and thereby give an unbiased measure of the uncertainty in the mean. However, in computing scattering parameters, the procedure for determining the statistical uncertainties is somewhat more involved because the relation between the scattering amplitude and the energy levels of the two hadron system is highly nonlinear. First, one is interested in the energy differences between the energy levels of the two hadron system and the sum of the masses of the two free hadrons (similarly for the case of three or more hadrons). These energy differences can be determined in various ways. The simplest is to perform fits to correlation functions of the multi-hadron system and the single hadron system(s) generated on the same gauge-field configurations and to form correlated differences of the extracted energies. The ratios of correlation functions can also be analyzed, where Jackknife and Bootstrap resampling methods are used to determine the covariance matrix and then a correlated $\chi^2$-fit is performed [68, 69].
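A schematic implementation of the correlated fit of Eqs. (15) and (16), for a single-exponential model on a synthetic ensemble of correlators, is given below. The fit range, model and data are illustrative; in practice the parameter uncertainties would be obtained by resampling as described next.

```python
import numpy as np
from scipy.optimize import minimize

def correlated_fit(corrs, tmin, tmax):
    # corrs: array of shape (N_samples, N_t); fit C(t) = A exp(-E t) on
    # [tmin, tmax] using the correlated chi^2 of Eqs. (15)-(16).
    N = corrs.shape[0]
    t = np.arange(tmin, tmax + 1)
    data = corrs[:, tmin:tmax + 1]
    cbar = data.mean(axis=0)
    cov = np.cov(data, rowvar=False) / N      # covariance of the mean
    cov_inv = np.linalg.inv(cov)

    def chi2(params):
        A, E = params
        d = cbar - A * np.exp(-E * t)
        return d @ cov_inv @ d

    res = minimize(chi2, x0=[cbar[0] * np.exp(0.5 * tmin), 0.5],
                   method="Nelder-Mead")
    return res.x, res.fun / (len(t) - 2)      # parameters, chi^2 per dof

# Synthetic ensemble of correlators with a correlated noise component.
rng = np.random.default_rng(11)
t_all = np.arange(24)
true = 3.0 * np.exp(-0.4 * t_all)
noise = rng.standard_normal((400, 24)) * 0.02 * true
corrs = true + noise + 0.5 * noise.mean(axis=1, keepdims=True) * true
params, chi2dof = correlated_fit(corrs, 5, 15)
print("A, E =", params, " chi2/dof =", chi2dof)
```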

Beginning with a sample of $N$ elements, single-elimination Jackknife removes the $i^{\rm th}$ element, leaving a sample of $N-1$ elements. Taking $R_i(t)$ to be the desired ratio of correlation functions computed with the $i^{\rm th}$ sample omitted from the full ensemble and $\bar{R}(t)$ its ensemble average, the covariance matrix of the ratio of correlation functions is given by

\mathcal{C}(t, t') = \frac{N-1}{N} \sum_{i=1}^{N} \left[\, R_i(t) - \bar{R}(t)\, \right] \left[\, R_i(t') - \bar{R}(t')\, \right]    (17)

The Bootstrap method is a generalization of Jackknife. Again, beginning with a sample of $N$ elements, in its simplest implementation Bootstrap forms a new sample of $N$ elements by randomly choosing $N$ values from the original sample, with repetitions allowed. This procedure is repeated $N_b$ times and a statistical analysis is carried out on each of the Bootstrap ensembles. Now denoting $R_b(t)$ as the ratio computed on the $b^{\rm th}$ Bootstrap ensemble, the covariance matrix is estimated by

\mathcal{C}(t, t') = \frac{1}{N_b} \sum_{b=1}^{N_b} \left[\, R_b(t) - \bar{R}(t)\, \right] \left[\, R_b(t') - \bar{R}(t')\, \right]    (18)

where now

\bar{R}(t) = \frac{1}{N_b} \sum_{b=1}^{N_b} R_b(t)    (19)

The value of $N_b$ should be large enough so that stable and accurate estimates are obtained. In computing the mean and uncertainty, both Jackknife and Bootstrap lead to comparable results, as the distributions of correlation functions are smooth.
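Both resampling estimates of the covariance matrix, Eqs. (17)-(19), amount to a few lines of code. The following sketch applies them to an invented ensemble of ratio measurements; for Gaussian-like data the two error estimates should agree.

```python
import numpy as np

def jackknife_covariance(R):
    # R: (N, N_t) array of per-sample ratios. Single-elimination
    # jackknife mean and covariance, following Eq. (17).
    N = R.shape[0]
    R_jack = (R.sum(axis=0) - R) / (N - 1)    # i-th sample omitted
    Rbar = R_jack.mean(axis=0)
    d = R_jack - Rbar
    return Rbar, (N - 1) / N * np.einsum("it,is->ts", d, d)

def bootstrap_covariance(R, n_boot=1000, seed=0):
    # Eqs. (18)-(19): resample N rows with replacement, n_boot times.
    rng = np.random.default_rng(seed)
    N = R.shape[0]
    boots = np.array([R[rng.integers(0, N, N)].mean(axis=0)
                      for _ in range(n_boot)])
    d = boots - boots.mean(axis=0)
    return boots.mean(axis=0), np.einsum("bt,bs->ts", d, d) / n_boot

rng = np.random.default_rng(2)
R = 1.0 + 0.05 * rng.standard_normal((200, 8))   # invented measurements
for name, f in [("jackknife", jackknife_covariance),
                ("bootstrap", bootstrap_covariance)]:
    mean, cov = f(R)
    print(name, "error on R(0):", np.sqrt(cov[0, 0]))
```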

3.3.2 Fitting Methodology

Fitting correlation functions to a sum of exponentials is a difficult problem. It is significantly simplified if only the lowest energy eigenvalue is required. However, a great deal of spectral information about QCD resides in the energy levels above the ground state. In order to reduce the systematic uncertainty from excited-state contamination at small time separations, it is important to have a signal at large Euclidean times. However, at large times, the statistical uncertainties in most correlation functions grow exponentially and therefore the extraction of the lowest energy eigenvalue at large times typically results in large statistical uncertainties. In principle one can trade statistical uncertainty growth for systematic uncertainty reduction by developing improved sampling methods to reduce statistical uncertainties in the correlation functions. However, in practical LQCD calculations, one extracts as much information as possible from the correlation functions at short times where the statistical noise does not overwhelm the signal, but multiple exponentials contribute to the correlation functions.

Although the general multi-exponential fit problem is difficult and not well behaved, systems of correlation functions can be designed in order to optimize these fits. Variational analyses on symmetric positive-definite matrices of correlation functions have been successfully used in the LQCD community to extract the energy eigenvalues contributing to the correlation functions. These methods were originally introduced in Refs. [63, 64], and have been subsequently developed [70, 71, 72, 73].

Generalizing the pion correlation function of Eq. (13) to a set of operators, $\mathcal{O}_i$, of commensurate quantum numbers, the correlation functions can be defined as

C_{ij}(t) = \sum_{\mathbf{x}} \left\langle\, \mathcal{O}_i(\mathbf{x}, t)\ \mathcal{O}_j^\dagger(\mathbf{0}, 0)\, \right\rangle    (20)

At large times, the correlation functions are dominated by the first few exponentials, and $E_0$, $E_1$, … can be extracted by considering multiple correlation functions. If sufficient resources exist to construct a large basis of interpolating operators, the orthonormality of state vectors can be used to extract multiple energies in a controlled manner [63, 64]. This is achieved by solving a generalized eigenvalue problem of the form

C(t)\ v_n(t, t_0) = \lambda_n(t, t_0)\ C(t_0)\ v_n(t, t_0)    (21)

where the $\lambda_n(t, t_0)$ and the $v_n(t, t_0)$ are the principal correlators and eigenvectors, respectively. The utility of this method stems from the observation that at large times, the principal correlators satisfy

\lambda_n(t, t_0)\ \rightarrow\ e^{-E_n (t - t_0)} \left[\, 1 + {\cal O}\left( e^{-\Delta E_n (t - t_0)} \right) \right]    (22)

where $\Delta E_n = E_N - E_n$ is the gap between the level of interest and the $N^{\rm th}$ level, where $N$ is the rank of $C_{ij}$. There are many implementations of this so-called variational method. In Figure 4, we show recent results for the nucleon excited spectrum from the Hadron Spectrum Collaboration [74] determined using these methods.
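A minimal GEVP analysis of Eqs. (21) and (22) can be performed with a generalized symmetric eigensolver. In the sketch below, an exact two-state correlator matrix is constructed from invented overlaps and energies, so the extracted effective energies are exact; with real data, each level would carry resampling uncertainties and excited-state corrections.

```python
import numpy as np
from scipy.linalg import eigh

def gevp_energies(C, t0, dt=1):
    # Solve C(t) v = lambda(t, t0) C(t0) v (Eq. (21)) at each t, and turn
    # the principal correlators into effective energies via Eq. (22).
    energies = []
    for t in range(t0 + 1, C.shape[0] - dt):
        lam_t = eigh(C[t], C[t0], eigvals_only=True)
        lam_tdt = eigh(C[t + dt], C[t0], eigvals_only=True)
        energies.append(np.log(lam_t / lam_tdt) / dt)
    # eigh returns ascending eigenvalues, so columns are ordered E1, E0.
    return np.array(energies)

# Exact 2x2 correlator matrix, C_ij(t) = sum_n Z_in Z_jn exp(-E_n t),
# with invented overlaps Z and energies E0 = 0.5, E1 = 0.9.
t = np.arange(16)
Z = np.array([[1.0, 0.6],
              [0.4, -0.8]])
E = np.array([0.5, 0.9])
C = np.einsum("an,bn,tn->tab", Z, Z, np.exp(-np.outer(t, E)))
print(np.round(gevp_energies(C, t0=2)[:4], 4))
```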

Figure 4: Excited nucleon spectra calculated at a heavier-than-physical pion mass [74]. The solid regions correspond to the confidence intervals for the energies of the states. [Figure reproduced with the permission of the Hadron Spectrum Collaboration.]

In many cases, computational cost precludes the construction of a basis of interpolating operators required for the variational method to be employed. In these cases, Matrix-Prony and other related methods [75] facilitate an extraction of the low-lying levels from a set of at least two correlation functions with distinct operator source and sink structure.

3.3.3 Non-Gaussian Fluctuations and Lepage’s Argument.

As QCD is a highly non-trivial interacting quantum field theory, the quantum fluctuations of the quark and gluon fields, and hence those of the derived correlation functions, are non-Gaussian. Following Parisi [76], Lepage [77] explored the relation between the variance of a correlation function and its mean, and in the process identified the exponentially degrading signal-to-noise ratio in baryonic systems. This argument can be generalized to arbitrary moments of any given correlation function. Denote the single nucleon correlation function (projected to zero momentum) as

C_N(t) = \sum_{\mathbf{x}}\ \Gamma^{\beta\alpha} \left\langle\, N_\alpha(\mathbf{x}, t)\ \bar{N}_\beta(\mathbf{0}, 0)\, \right\rangle    (23)

where $N_\alpha$ represents an interpolating operator for the nucleon, $\Gamma$ denotes a positive-energy projector, and the angle brackets denote the statistical average over calculations on an ensemble of gauge-field configurations. At short times, for operators that have a large overlap with the nucleon ground state, the moments of this correlation function are dictated by multi-nucleon–anti-nucleon states (neglecting the interactions between nucleons), and the distribution of correlation functions is asymmetric with a non-exponentially degrading signal-to-noise ratio. (The magnitude of the suppression of lighter hadronic states with the same quantum numbers as the multi-nucleon and anti-nucleon state, as well as the same number of quark and anti-quark propagators, such as multi-pion states, depends upon the structure of the sources and sinks. When the sink is momentum projected over the lattice volume, the overlap onto the multi-pion states is suppressed by the ratio of the volume of the nucleon compared to the lattice spatial volume, delaying the onset of the exponential degradation of the signal-to-noise ratio.) At large times, however, the lightest states with the appropriate quantum numbers dominate, and the moments scale as

\left\langle\, C_N(t)^{2n}\, \right\rangle\ \sim\ e^{-3 n\, m_\pi t}\ \ ,\qquad \left\langle\, C_N(t)^{2n+1}\, \right\rangle\ \sim\ e^{-M_N t}\ e^{-3 n\, m_\pi t}    (24)

The odd moments degrade exponentially compared with the even moments, leaving a non-Gaussian, but symmetric, distribution with a mean $\sim e^{-M_N t}$ and variance $\sim e^{-3 m_\pi t}$. In Figure 5, we show the effective mass of a baryon obtained at a heavier-than-physical pion mass on an ensemble of anisotropic gauge-field configurations [78]. The inset histograms in this figure show the distribution of correlation functions, from which the time evolution from an asymmetric distribution with a non-zero mean value, to a non-Gaussian symmetric distribution suffering from an exponentially degrading signal-to-noise ratio, is clearly evident.

Figure 5: An effective mass plot (blue points) obtained from baryon correlation functions on an ensemble of anisotropic clover gauge-field configurations. The time axis is in temporal lattice units (tlu), while the energy axis is in spatial lattice units (slu). The insets (from left to right) show the (normalized) distribution of the correlation function at increasing times from the source. This is derived from a large number of measurements on each of the gauge-field configurations in the ensemble. These measurements were blocked down to 93 independent representative correlation functions, and Jackknife was used to generate the covariance matrix.

At large times, exponentially large computational resources are required to precisely extract the mean value of a baryon correlation function, for which the noise-to-signal ratio grows as $e^{(M_N - \frac{3}{2} m_\pi)\, t}/\sqrt{N}$ for a large number of measurements, $N$ (where Gaussian statistics have been assumed to provide an estimate of the scaling of the uncertainty in the mean). A closer inspection of the time dependence of the moments of the correlation functions shows that there is an intermediate time range, dictated by the structure of the sources and sinks used to generate the correlation function [67], in which the signal-to-noise ratio is degrading much less severely than at asymptotically large times. This “Golden Window” is seen at intermediate times in the effective mass shown in Figure 5. Efforts have been made [79] to optimize the signal-to-noise ratio by combining optimizing the overlap of the source and sink onto the correlation function(s) of interest with minimizing the overlap onto the corresponding variance correlation function(s).
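The expected exponential degradation can be checked numerically. The toy ensemble below is built directly with the Parisi-Lepage scalings — a mean falling as $e^{-M_N t}$ and a standard deviation falling only as $e^{-\frac{3}{2} m_\pi t}$, with invented values of $M_N$, $m_\pi$ and the noise amplitude — and the measured growth rate of the noise-to-signal ratio is compared with $M_N - \frac{3}{2} m_\pi$.

```python
import numpy as np

rng = np.random.default_rng(13)
M_N, m_pi, N = 1.2, 0.3, 10000
t = np.arange(8)

# Toy ensemble with the Parisi-Lepage scalings: mean ~ exp(-M_N t),
# standard deviation ~ exp(-1.5 m_pi t).
mean = np.exp(-M_N * t)
sigma = 0.1 * np.exp(-1.5 * m_pi * t)
corrs = mean + sigma * rng.standard_normal((N, len(t)))

noise_to_signal = corrs.std(axis=0) / (np.sqrt(N) * corrs.mean(axis=0))
rate = np.polyfit(t, np.log(noise_to_signal), 1)[0]
print("measured N/S growth rate :", round(rate, 3))
print("expected M_N - 1.5 m_pi  :", M_N - 1.5 * m_pi)
```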

It is worth attempting to understand the mechanisms responsible for the statistical behavior of the nuclear correlation functions. At each point in spacetime, the relevant parts of a light-quark propagator can be encoded in a matrix in Dirac and color space. Loosely speaking, in color space, for each pair of Dirac indices, the pion correlation functions arise from the sum of the squared norms of the columns of this matrix while the nucleon correlation functions arise from its determinant. The elements of each column scale as $\sim e^{-m_\pi t/2}$, while the determinants scale as $\sim e^{-3 m_\pi t/2}$. As the norms and orientations of each of the columns of these matrices fluctuate because of interactions with the gauge field, with average values that are becoming linearly dependent, the signal-to-noise problem arises.

3.3.4 Blocking, the Central Limit Theorem and Robust Estimators.

Correlation functions can be determined from multiple source locations on a given gauge-field configuration. These measurements are correlated with each other as they result from the same sample of gluon fields, and, in general, cannot be treated as statistically independent. Because the correlation functions become translationally invariant after averaging, these measurements can be averaged together (blocked) to generate one representative correlation function for that gauge-field configuration. More generally, because of the correlation between nearby gauge-field configurations produced in a Markov chain, quantified by analysis of auto-correlation functions, the measurements performed over multiple gauge-field configurations are typically blocked together to produce one representative correlation function from a “patch” of the Markov chain. For a long Markov chain, or multiple independent Markov chains, there will be a large number of independent representative correlation functions that, by the central limit theorem, will have an approximately Gaussian-distributed mean. In general, this set of blocked correlation functions is analyzed using correlated $\chi^2$-minimization methods (see Sec. 3.3.1) assuming Gaussian statistics.
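A standard diagnostic is to block the measurements with increasing block sizes: the naive error estimate grows until the blocks are effectively independent and then plateaus at the true uncertainty. A sketch on synthetic correlated (AR(1)) data follows; the series length, correlation strength and block sizes are illustrative.

```python
import numpy as np
from scipy.signal import lfilter

def block_average(measurements, block_size):
    # Average consecutive measurements into blocks; each block becomes
    # one "representative" measurement, suppressing autocorrelations.
    m = np.asarray(measurements)
    n_blocks = len(m) // block_size
    return m[:n_blocks * block_size].reshape(n_blocks, block_size).mean(axis=1)

# Synthetic correlated Monte Carlo history: y[i] = 0.9 y[i-1] + noise.
rng = np.random.default_rng(4)
y = lfilter([1.0], [1.0, -0.9], rng.standard_normal(2**14))
for b in [1, 4, 16, 64, 256]:
    blocks = block_average(y, b)
    print(b, blocks.std(ddof=1) / np.sqrt(len(blocks)))
```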

As computational resources are limited, only a finite number of measurements of each correlation function can be performed. The underlying distributions for nuclear correlation functions are non-Gaussian with extended tails, as seen in Figure 5, and therefore outliers are typically present in any sample, which can lead to poor convergence of the mean (for a discussion of the “noise” associated with these and other such calculations, see Ref. [80]). Dealing with outliers of distributions is required in many areas beyond physics, and there is an extensive literature on robust estimators that are resilient to their presence, such as the median or the Hodges-Lehmann (HL) estimator. However, for the quantities we are interested in, it is the mean value (vacuum-expectation value) that is required, and not the median or mode. With sufficient sampling and blocking, the mean of the distribution of any given correlation function is expected to tend to a Gaussian distribution by the central limit theorem, for which the mean, median and mode coincide.
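The HL estimator itself is simple: the median of all pairwise averages of the sample. The sketch below contrasts it with the sample mean on invented heavy-tailed data; note that, as stressed above, for correlation functions it is ultimately the mean that is required.

```python
import numpy as np

def hodges_lehmann(samples):
    # HL estimator: the median of all pairwise (Walsh) averages,
    # robust to the heavy tails seen in baryon correlation functions.
    x = np.asarray(samples)
    i, j = np.triu_indices(len(x))
    return np.median((x[i] + x[j]) / 2.0)

# Heavy-tailed sample: the mean is dragged around by outliers,
# while the HL estimate stays close to the bulk of the distribution.
rng = np.random.default_rng(8)
x = rng.standard_normal(999)
x[:10] += 50.0                                  # inject outliers
print("mean:", x.mean(), " HL:", hodges_lehmann(x))
```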

Figure 6: Effective mass plots for the nucleon obtained from an ensemble of isotropic clover gauge-field configurations. Blocked correlation functions are analyzed with the mean using Jackknife (blue) and the Hodges-Lehmann estimator using Bootstrap (red).

As an example, the nucleon effective masses obtained from an ensemble of isotropic clover gauge-field configurations, analyzed with both the mean under Jackknife and the Hodges-Lehmann estimator under Bootstrap, from blocked representative correlators, are shown in Figure 6. The non-Gaussian distribution present in the blocked correlation functions leads to a large estimated variance at late times, while the HL-estimator is more robust. Investigations in this direction are ongoing.

3.4 Observables

The energies and energy differences extracted from correlation functions calculated on one ensemble of gauge-field configurations deviate from those of QCD, even in the infinite sampling limit, because of the finite lattice spacing and the finite lattice volume. Further, there are also deviations because of unphysical values of quark masses and/or imperfections in their tuning.

3.4.1 Finite Lattice Spacing and the Continuum Limit.

Because of the computational resource requirements, most calculations of quantities of importance for nuclear physics have been performed at one, in some cases two, and in very few instances three lattice spacings. In contrast, simpler quantities, such as the meson decay constants, have been computed with precision at multiple lattice spacings and extrapolated to the continuum.

Figure 7: The MILC collaboration’s recent continuum extrapolation of the ratio of decay constants, $f_{K^\pm}/f_{\pi^\pm}$ [81]. The solid (red) curve shows the result of fitting to a form polynomial in the lattice spacing, while the dashed (red) line corresponds to fitting a constant to the smallest two lattice spacings. The black point at $a = 0$ corresponds to the continuum extrapolation, with the inner uncertainty being statistical and the outer being the statistical and systematic uncertainties combined in quadrature. [Figure reproduced with the permission of the MILC collaboration.]

As an example, the MILC collaboration’s [81] recent continuum extrapolation of the ratio of decay constants, $f_{K^\pm}/f_{\pi^\pm}$, determined at (or very near) the physical pion mass with dynamical quarks, is shown in Figure 7. MILC has produced large ensembles of gauge-field configurations using a one-loop Symanzik-improved gauge action for the gluons and the HISQ (highly-improved staggered quark) action for the quarks at four lattice spacings, allowing for continuum extrapolations involving four independent points. (Due to the mistunings of quark masses, as discussed previously, such continuum extrapolations also require interpolations in the quark masses.) Two different extrapolations of their results, shown and described in Figure 7, provide consistent continuum limit values.

The lattice-spacing dependence of observables can be determined from the Symanzik action [82], dictated by the symmetries of the discretized action, that describes the dynamics of the quarks and gluons at momentum scales much less than the inverse lattice spacing. The operators in this EFT are formed from the quark and gluon fields with arbitrary numbers of derivatives and insertions of the quark mass matrices, with coefficients that scale with the appropriate power of the lattice spacing. It is the matrix elements of these operators between the hadronic states of interest that dictate the lattice-spacing dependent deviations from QCD. While the Symanzik action lacks Lorentz invariance and rotational symmetry, it is constrained by the residual hypercubic symmetry of the discretized action. The presently available computational resources have not permitted calculations of sufficient precision to isolate lattice-spacing artifacts beyond those polynomial in the lattice spacing; however, such terms are present, for instance, from the intrinsic logarithmic scale dependence of the coefficients in the Symanzik action.

In order to have confidence in the extraction of multibaryon binding energies and scattering phase shifts, and to be able to quantify one of the systematic uncertainties in these determinations, it is important to determine the single-hadron dispersion relations with precision. Example calculations of the energies of the pion and nucleon as a function of $|\mathbf{n}|^2$ are shown in Figure 8 [83, 84], where the triplet of integers $\mathbf{n}$ is related to the momentum of the state via $\mathbf{p} = \frac{2\pi}{L}\mathbf{n}$.

Figure 8: Examples of the squared energy (in lattice units) of the single pion and nucleon as a function of $|\mathbf{n}|^2$ at the flavor-SU(3) symmetric point [83, 84]. The figures show the results of LQCD calculations performed on isotropic clover gauge-field configurations, along with the best linear fits.

In these LQCD calculations, the energy of the hadron can be related to its lattice momentum through dispersion relations of the form

E_H^2\ =\ m_H^2\ +\ \frac{1}{\xi_H^2}\, |\mathbf{p}|^2    (25)

where $\xi_H$ is the anisotropy parameter for hadron $H$ (or equivalently its fractional speed of light, $c_H = 1/\xi_H$). In the continuum limit, this reduces to $E_H^2 = m_H^2 + |\mathbf{p}|^2$, as required. At any finite lattice spacing, the relation between energy and momentum of any given hadron involves an infinite number of terms that respect the underlying hypercubic symmetry, and this relation can only be determined by direct calculation. The measured dispersion relation, and associated uncertainties, are necessary for determining multihadron binding energies and scattering phase shifts, as will be discussed below. Even after accounting for the lattice dispersion relation, these quantities will have residual dependence upon the lattice spacing because of modifications to the hadronic interactions, and an extrapolation in lattice spacing is required to obtain their continuum limit values.
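Extracting $m_H$ and the fractional speed of light from data like those in Figure 8 amounts to a linear fit of $E_H^2$ against $|\mathbf{p}|^2$. A minimal sketch with invented energies follows; a full analysis would propagate the correlated uncertainties by resampling.

```python
import numpy as np

# Fit the dispersion relation of Eq. (25), E^2 = m^2 + |p|^2 / xi^2,
# as a straight line in |p|^2 with |p|^2 = (2 pi / L)^2 |n|^2.
L = 32
n_sq = np.array([0, 1, 2, 3, 4])
E_sq = np.array([0.0484, 0.0872, 0.1261, 0.1649, 0.2040])   # invented data
p_sq = (2 * np.pi / L)**2 * n_sq
slope, intercept = np.polyfit(p_sq, E_sq, 1)
print("m =", np.sqrt(intercept), " speed of light c =", np.sqrt(slope))
```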

A final comment regarding finite lattice spacing concerns the recovery of rotational symmetry from the underlying hypercubic symmetry and the mixing of operators with those with lower angular momentum (and hence lower dimension). It has recently been shown that in the limit of a lattice spacing that is small compared with any of the intrinsic length scales of the system, including the renormalization scale needed for certain operators, the breaking of rotational symmetry scales as a positive power of the lattice spacing, and its effects vanish in the continuum limit [85]. This results in a parametric suppression of the mixing of higher-dimension operators with ones of lower dimension.

3.4.2 Finite-Volume Effects and Boundary Conditions (I): Single Hadrons.

For localized hadronic systems, such as single mesons, baryons and nuclei, if the objective of a LQCD calculation is to determine its mass or binding energy, then it is desirable to work in the largest practical lattice volume, both in the spatial and temporal directions. Ground-state energies, in situations where the lattice volume and temporal extent are much larger than the hadronic size, have finite-volume (FV) effects that are exponentially small in the spatial extent of the lattice, a result that follows straightforwardly from considering the FV system in terms of its image systems [86, 87, 88].

For single mesons and baryons, calculations of the FV effects have been performed in the p-regime of chiral perturbation theory ($\chi$PT) (the regime in which the momentum and quark mass divided by the chiral symmetry breaking scale are the expansion parameters), and are found to agree well with the results obtained in LQCD calculations at unphysical light-quark masses, e.g. Refs. [78, 89, 90], but remain to be verified at the physical point. However, for the pseudo-scalar mesons, the FV modifications to the decay constants and masses have been calculated beyond the one-loop level [91], and it has been shown that removing these FV effects from the results of LQCD calculations improves the overall fit quality at and near the physical point [92]. These works suggest that loop-level $\chi$PT calculations describe the FV modifications to meson masses in large volumes as expected. In using the low-energy EFT to determine the FV effects, it is implicit that the FV effects can be described by modifications of loop diagrams involving the lowest-lying mesons. The reason this type of separation is possible is that the FV modifications to the coefficients of the local operators in the low-energy EFT, in this case $\chi$PT, are exponentially suppressed by the size of the hadron, scaling as $e^{-\Lambda L}$, where $\Lambda$ is set by the mass of the $\rho$ or the chiral symmetry breaking scale, $\Lambda_\chi$, as opposed to the parametrically larger $e^{-m_\pi L}$ behavior that results from the loop diagrams.

Figure 9: The mass of the nucleon as a function of the spatial extent of the lattice, calculated on anisotropic gauge-field configurations at a heavier-than-physical pion mass.

For baryons, the situation is more complex because of the poor convergence properties of baryon-$\chi$PT. Figure 9 shows the volume dependence of the nucleon mass obtained at a heavier-than-physical pion mass [78]. Its volume dependence is expected to be linear in $e^{-m_\pi L}/L$ at leading order in the chiral expansion, consistent with what is observed in Figure 9. As a function of the pion mass, the FV effects in the nucleon mass are expected to scale as $\sim m_\pi^2\, e^{-m_\pi L}/L$ for fixed $L$ (for p-regime calculations), and are thus expected to be significantly smaller at the physical light-quark masses [78] for a given $m_\pi L$. However, $\chi$PT does not appear to describe the quark-mass dependence of the nucleon mass over the range of pion masses explored in LQCD calculations [93], as discussed in Section 3.4.4, so it is an open question as to whether it continues to describe volume effects at lighter masses.

It is important to quantify the FV effects and to be able to remove them from LQCD calculations in order to compare the spectrum of the hadrons with those of nature. Further, it is vital to understand them in order to investigate two-hadron scattering amplitudes and nuclear binding energies. As nuclear binding energies are typically in the MeV range, the nucleon mass must be calculated with a precision and accuracy that is at the MeV level or better. The results shown in Figure 9 provide guidelines for future calculations to fulfill such constraints.

While the discussions we have presented so far have been based upon the use of periodic BCs in the spatial directions, one is free to choose different BCs. Periodic BCs constrain the quark momentum to satisfy $\mathbf{p} = \frac{2\pi}{L}\mathbf{n}$, with $\mathbf{n}$ being an integer triplet, and are a subset of a larger class of BCs called twisted BCs (TBCs). TBCs are those for which the quark fields acquire phases at the boundaries, $q(x + L\hat{e}_j) = e^{i\theta_j}\, q(x)$, where $\theta_j$ is the twist angle in the $j^{\rm th}$ Cartesian direction [94]. An arbitrary total spatial momentum can be selected for the hadronic system by a judicious choice of the twist angles of its valence quarks, $\mathbf{p} = \frac{2\pi}{L}\mathbf{n} + \frac{\boldsymbol{\theta}}{L}$, where $\boldsymbol{\theta}$ is the sum of the twists of the valence quarks, again with $\mathbf{n}$ an integer triplet. TBCs have been shown to be useful in LQCD calculations of the low-momentum transfer behavior of form factors required in determining hadron radii and moments, and for calculations of the vacuum polarization contributions to the muon $g-2$, alleviating the need for large-volume lattices, e.g. Ref. [95]. (We note that twisting is usually applied to the valence quarks only, leading to a violation of unitarity in such calculations [96]. Nevertheless, the low energy properties of the resulting partially quenched theory are assumed to be described by partially quenched $\chi$PT [97, 98], which can subsequently be used to correct for these effects in many cases.) It was recently noted that twisting can be used to reduce the FV modifications to the hadron masses [99]. This can be accomplished by averaging the results of periodic BC and anti-periodic BC calculations, by twist-averaging, or by working with i-PBCs corresponding to a single twist angle of $\pi/2$ [99].

Realistic calculations are performed in volumes with a finite time direction, with the quark fields satisfying anti-periodic BCs, corresponding to calculations of the system at a typically low, but non-zero, temperature. One effect of the finite temperature is to introduce contributions to the correlation function from subsets of hadronic degrees of freedom propagating backwards in time. These give rise to energies in the correlation function that are lower than that of the ground state. However, such effects are exponentially suppressed by the length of the time direction. For interpolating functions $\hat{\mathcal O}_1$ and $\hat{\mathcal O}_2$, a correlation function calculated with anti-periodic BCs on the quark fields becomes

$$C(t) \;=\; \frac{1}{Z}\,{\rm Tr}\!\left[\, e^{-\hat H T}\, \hat{\mathcal O}_2(t)\, \hat{\mathcal O}_1(0)\,\right] \;=\; \frac{1}{Z} \sum_{n,m} e^{-E_n (T - t)}\, e^{-E_m t}\, \langle n | \hat{\mathcal O}_2 | m \rangle \langle m | \hat{\mathcal O}_1 | n \rangle \,, \qquad\qquad (26)$$

where $T$ is the length of the time-direction and $Z = \sum_n e^{-E_n T}$ is the partition function. As an example, consider an interpolating operator that couples to the $\pi^+\pi^+$ state, which can be written in terms of hadronic field operators as $\hat{\mathcal O} = Z_{\pi\pi}\,\hat\pi\hat\pi + \ldots$, where the ellipses denote all other possible operators with the same quantum numbers and the $Z$'s are a priori unknown overlap factors. In Eq. (26), this operator thus gives non-zero values of $\langle 0|\hat{\mathcal O}|\pi\pi\rangle$, $\langle \pi^+|\hat{\mathcal O}|\pi^-\rangle$, and of matrix elements between all other states with the same quantum numbers as the source. Consequently, the correlation function contains exponentials with energies $E_{\pi\pi}$, $E_{\pi\pi\pi} - m_\pi$, and even $0$ (a time-independent term with amplitude $\propto e^{-m_\pi T}$, from one pion propagating forward and one propagating backward around the temporal boundary), and so on. In the zero-temperature limit, only those exponentials for which the backward-propagating state $|n\rangle$ is the vacuum survive. Terms with energies less than that of the ground state can be interpreted as thermal excitations, for instance arising from the process shown in Figure 10.

Figure 10: Thermal contributions to correlation functions. The vertical lines indicate anti-periodic temporal boundaries and the grey regions represent the source and sink. The solid lines correspond to valence quark propagators.

As they dominate correlation functions at times $t \sim T/2$, the thermal states are not simply a curiosity that can be safely ignored. The amplitudes of these states are exponentially suppressed by the temporal extent of the lattice times the energy of the backward-going hadronic state. Consequently, the most important thermal states involve backward-propagating pions, and the product $m_\pi T$ must be large in order to suppress these states. As the chiral limit is approached, and the pion becomes lighter, this requires increasingly large computational resources.
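A toy model makes this concrete: the sketch below builds a correlator for a single state of energy $E$ together with its thermal image in a time extent $T$, and shows the naive effective mass collapsing away from its plateau near $t \sim T/2$; all parameters are illustrative.

    # Sketch: a toy correlator with anti-periodic time extent T, showing how a
    # backward-propagating (thermal) image flattens the effective mass near t ~ T/2.
    import numpy as np

    T, E = 48, 0.5
    t = np.arange(1, T)
    C = np.exp(-E * t) + np.exp(-E * (T - t))   # forward state + thermal image
    meff = np.log(C[:-1] / C[1:])               # naive effective mass
    print(meff[2], meff[T // 2 - 1])            # plateau at E vs ~0 near T/2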

3.4.3 Finite-Volume Effects and Boundary Conditions (II): Multiple Hadrons.

Extracting hadronic interactions from LQCD calculations is significantly more complicated than determining the spectrum of stable particles. This is encapsulated in the Maiani-Testa theorem [100], which states that $S$-matrix elements cannot be extracted from infinite-volume Euclidean-space Green functions except at kinematic thresholds. This is clearly problematic from the nuclear physics perspective, as a main motivation for pursuing LQCD is to be able to compute reactions involving multiple nucleons. However, Euclidean-space correlation functions calculated in a finite volume can be used to extract $S$-matrix elements, as has been known for decades in the context of non-relativistic quantum mechanics [101] and extended to quantum field theory by Lüscher [86, 88]. The allowed energies of two particles in a finite volume depend in a calculable way upon their elastic scattering amplitude for energies below the inelastic threshold.

Since Lüscher’s original analysis [86, 88], there have been many works that have generalized the analysis to cases such as boosted systems and those performed with twisted BCs, and have made explicit the formalism for higher angular-momentum channels. For a bound system, a single hadron or a nucleus, that is compact compared with the lattice volume, the impact of the boundary is exponentially suppressed by the energy gap to the next lightest hadronic state. In the large-volume limit, with the impact of the lattice spacing parametrically diminished in the continuum limit, a bound system can be classified by its SO(3) (and other) quantum numbers. In contrast, the (low-lying) continuum (scattering) states are intrinsically linked to the BCs, and are classified by irreps of the cubic group rather than SO(3). (One understands the recovery of SO(3) in the large-volume limit by considering the systems at fixed energy, with finite energy resolution and high excitation in the lattice volume, where the large multiplicity of a given cubic irrep allows for better resolution of angular momentum [102].) In volumes that are large compared with the range of the interaction, the energies of scattering states have a power-law dependence on the spatial extent of the lattice, with exponential corrections suppressed by the inverse of the range of the interaction. For two scalar particles in the $A_1^+$ irrep of the cubic group, with energy below the inelastic threshold, a direct relation between the FV energy shift, $\Delta E$, and the s-wave phase shift, $\delta_0(p)$, only exists when the $\ell \geq 4$ phase shifts are neglected. In that case, the Lüscher relation becomes

$$p \cot\delta_0(p) \;=\; \frac{1}{\pi L}\, S\!\left( \left(\frac{p L}{2\pi}\right)^{\!2} \right) \,, \qquad S(x) \;=\; \lim_{\Lambda\to\infty}\left[\, \sum_{\mathbf{n},\,|\mathbf{n}|<\Lambda} \frac{1}{|\mathbf{n}|^2 - x} \;-\; 4\pi\Lambda \,\right] \,, \qquad\qquad (27)$$

where the sum is over integer triplets $\mathbf{n}$ and $p$ is the magnitude of the relative three-momentum, derived from the energy shift via $\Delta E = 2\sqrt{p^2 + m^2} - 2m$ (for identical hadrons of mass $m$). In general, for two-hadron systems, the FV energy shift receives contributions from all partial waves, and truncations must be made in order to extract phase-shift information. These truncations can be checked for self consistency by further calculations in different volumes, or with different BCs. However, the extraction of phase shifts using Lüscher’s method necessarily introduces systematic uncertainties that require quantification.
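As an illustration of how Eq. (27) is used, the following sketch evaluates the regulated sum and converts a measured two-particle energy shift into $p\cot\delta_0$. This is a naive implementation: the truncated sum converges slowly with the cutoff, and production analyses use accelerated forms; all parameters are illustrative.

    # Sketch: evaluate S(x) from Eq. (27) with a hard cutoff and convert a
    # two-particle energy shift dE into p cot(delta_0).  Lattice units assumed.
    import numpy as np

    def S(x, Lam=40):
        n = np.arange(-Lam, Lam + 1)
        n2 = n[:, None, None]**2 + n[None, :, None]**2 + n[None, None, :]**2
        mask = n2 < Lam**2                       # |n| < Lambda
        return np.sum(1.0 / (n2[mask] - x)) - 4 * np.pi * Lam

    def pcotdelta(dE, m, L):
        E = 2 * m + dE                           # two identical hadrons of mass m
        p2 = (E / 2)**2 - m**2                   # relative momentum squared
        x = p2 * L**2 / (2 * np.pi)**2
        return S(x) / (np.pi * L)

    print(pcotdelta(dE=0.01, m=0.5, L=24.0))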

The extension of Lüscher’s method to multiple, coupled two-hadron channels has recently been detailed, e.g. Refs. [103, 104, 105, 106, 107, 108]. This extension has impact beyond low-energy nuclear physics, as it is essential for LQCD calculations in support of the GlueX experimental program at the Thomas Jefferson National Accelerator Facility that is focused on identifying gluonic excitations of hadrons and other exotic states (for recent work, see Ref. [109]). In such coupled systems, the energy eigenvalues obtained from LQCD calculations depend upon all of the phase shifts and mixing parameters describing the $S$-matrix in non-trivial ways. This is true even after neglecting the channels that do not contribute in the infinite-volume limit. Each eigenvalue provides a combination of scattering parameters evaluated at that energy. Generally, assumptions have to be made as to the analytic structure of the amplitude near the energy eigenvalues in order to extract the scattering parameters. In some cases, this can be done in a constrained way by appealing to EFTs to provide an approximate form of the momentum dependence [75, 110]. The validity and robustness of the assumed form of the scattering amplitudes has to be systematically verified, and used to estimate the associated systematic uncertainty. Recently, Lüscher’s energy quantization conditions (QCs) have been extended to include the EM interactions between charged hadrons [111] in anticipation of future LQCD calculations.

Figure 11: The expected energy of two nucleons in the positive-parity isoscalar channel (which contains the deuteron) boosted with one unit of lattice momentum, as a function of the spatial extent $L$, extracted from the QCs of the one-dimensional (red) and two-dimensional (blue) cubic-group irreps given in Ref. [106].

As the NN phase shifts and mixing parameters have been measured experimentally to relatively high precision, studies have been performed [107] to explore the impact of truncating the QCs that follow from a generalization of Lüscher’s work, building upon earlier studies [75]. In the case of the deuteron, the volume dependence of the two lowest energy eigenvalues obtained from the experimentally determined phase shifts and mixing parameter for the system boosted with one unit of lattice momentum (corresponding to the one-dimensional and two-dimensional irreps of the cubic group) are shown in Figure 11. Energies with and without the contributions from the sd-mixing parameter, $\epsilon_1$, and the d-wave phase shifts (these can all contribute because of the absence of rotational symmetry) in the vicinity of the deuteron pole are shown; contributions from the higher partial waves have been neglected. The differences in energies provide an estimate of the impact of truncating Lüscher’s QC. Interestingly, the observed sensitivity to $\epsilon_1$ suggests that calculating the energy splitting between these two irreps will allow for its determination.

Beyond two-hadron systems, there are ongoing efforts to formulate relations between $S$-matrix elements describing three-body systems and the FV energies of three-body states [112, 113, 115, 116, 117]. However, at this point in time, only a few systems involving more than two unbound hadrons have been explored with LQCD [65, 66, 118, 119].

3.4.4 Chiral Extrapolations.

LQCD has revolutionized our understanding of the quark-mass dependence of hadronic observables. It has provided precise determinations of the low-energy constants defining $\chi$PT, and has made explicit its limitations. Many mesonic observables are now being calculated at, or near, the physical pion mass, although often still in the isospin limit. Currently, the spatial volumes and temporal extents employed in such calculations remain somewhat small, but this situation is improving. Only a few quantities of interest to nuclear physics have so far been calculated near the physical point, specifically the ground-state baryon masses, e.g. Ref. [34], and nucleon matrix elements of quark bilinear operators, e.g. Ref. [120]. Currently, higher precision is required in all of these calculations in order to impact the experimental program.

For the most part, chiral extrapolations are currently still necessary and introduce further systematic uncertainties into the predictions of LQCD calculations. One arises from the fact that a set of coefficients has to be fit to the lattice results to perform an extrapolation at any given order in the chiral expansion. This truncation of the chiral expansion means that the $n^{\rm th}$-order fit potentially deviates from QCD by an amount characterized by the small expansion parameter raised to the $(n+1)^{\rm th}$ power.

An observable that shows major deviations from the expectations of $\chi$PT is the mass of the nucleon.

Figure 12: A compilation of LQCD calculations of the nucleon mass, as a function of $m_\pi$ [93]. [Figure reproduced with the permission of Andre Walker-Loud.]

Naively, the nucleon mass has an expansion in powers of the light-quark masses along with non-analytic contributions from loop diagrams. Figure 12 shows a compilation of the nucleon mass from LQCD calculations [93], which is well reproduced by a dependence that is linear in the pion mass, $M_N = \alpha + \beta\, m_\pi$, in conflict with expectations from $\chi$PT, in which the leading light-quark mass dependence enters as $m_\pi^2$, with non-analytic corrections beginning at $m_\pi^3$.
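This tension can be seen in a simple fit experiment: the sketch below generates synthetic data linear in $m_\pi$ (mimicking the trend in Figure 12) and compares linear and chiral-form fits; all numbers are illustrative and are not taken from Ref. [93].

    # Sketch: compare a fit linear in m_pi with the chiral form
    # M_N = M_0 + c2 m_pi^2 + c3 m_pi^3 on synthetic, linear-in-m_pi data.
    import numpy as np
    from scipy.optimize import curve_fit

    mpi = np.linspace(0.15, 0.50, 8)  # GeV
    mn = 0.80 + 1.0 * mpi + np.random.default_rng(1).normal(0, 0.005, mpi.size)

    linear = lambda m, a, b: a + b * m
    chiral = lambda m, M0, c2, c3: M0 + c2 * m**2 + c3 * m**3

    for name, f, p0 in (("linear", linear, (0.9, 1.0)),
                        ("chiral", chiral, (0.9, 1.0, -1.0))):
        popt, _ = curve_fit(f, mpi, mn, p0=p0)
        chi2 = np.sum((f(mpi, *popt) - mn)**2 / 0.005**2)
        print(name, popt, "chi2 =", chi2)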

It is now generally understood that, while the chiral expansion in the pseudo-Goldstone boson sector appears to converge relatively well, there is little indication that it is reliable for making predictions for the light-quark mass dependence of nucleon observables, e.g. Refs. [121, 122], except in the close vicinity of the chiral limit. Therefore, there is an uncertainty associated with LQCD calculations requiring chiral extrapolation that is challenging to assess reliably. Because of this behavior, there have been a number of efforts to partially resum higher orders in the chiral expansion, such as finite-range regularization [122]. Typically, this involves identifying a function that has the correct behavior near the chiral limit and the heavy-quark limit, and that is smooth at intermediate pion masses, with parameter(s) associated with this behavior fit to LQCD results. While such functional forms are not dictated by the symmetries of QCD for all quark masses, they do, in many instances, provide fits that agree well with the LQCD results, and provide predictions at the physical point with small statistical uncertainties. The systematic uncertainties associated with such forms are difficult to assess but, given the current level of statistical precision of LQCD calculations, they are likely estimated sufficiently well at present.

For multi-nucleon systems, the situation is somewhat different and far less certain. At heavy pion masses, a pionless EFT, with only contact operators and derivatives thereof, can be used to describe the results of LQCD calculations and then used to make predictions for other systems in the periodic table [123]. However, while this allows for an extrapolation in atomic number, it cannot be used for chiral extrapolations as the light-quark mass dependence is “hidden” in the coefficients of the operators. At lighter pion masses, the chiral interactions can be used to extrapolate [124, 125, 126, 127], and, in fact, isolating the light-quark mass dependence of the nuclear forces is essential for their refinement at the physical point. Only one such calculation exists at present, for hyperon-nucleon scattering [128]. Lattice results obtained at a pion mass of $m_\pi \sim 390$ MeV were used to constrain the hyperon-nucleon interaction at leading order in the low-energy EFT, from which predictions were made at the physical pion mass that agreed, within uncertainties, with phenomenological parameterizations of experimental measurements. The uncertainties associated with this extrapolation were estimated from the size of contributions at next-to-leading order in the EFT expansion.

3.4.5 Electromagnetism.

In most of the observables of interest in particle physics, EM corrections are fine-structure effects, entering at the sub-percent level. However, in the structure of moderate and large nuclei, EM becomes a leading effect because of its long-range nature, in contrast to the short-range nature of the nuclear forces. In many instances, LQCD calculations of the properties of the lowest-lying mesons are becoming sufficiently precise that isospin violation, from the up- and down-quark mass difference and from the effects of EM, must be included. In a few cases, hadronic properties are now being calculated with non-degenerate quark flavors and with fully-dynamical QED, e.g. Ref. [34]. However, most of the LQCD calculations that include EM [129, 130, 131, 132, 33] do so for the valence quarks, but not in the generation of the gauge fields, leading to a systematic uncertainty in those results. Including EM, while straightforward in principle, complicates the quark-mass tuning and introduces power-law finite-volume effects. One advantageous feature of LQCD calculations is that the EM coupling constant can be chosen to take an arbitrary value, within reason, to magnify the EM effects for the purpose of extracting them with improved precision and refining estimates of uncertainties.
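Schematically, such a program might extract the EM contribution as follows; the data and the linear-in-$\alpha$ model below are illustrative assumptions only.

    # Sketch: if an observable is computed at several (unphysically large) values
    # of the EM coupling alpha, its EM contribution can be isolated by fitting in
    # alpha and evaluating at the physical value.  Synthetic data.
    import numpy as np

    alphas = np.array([0.0, 0.02, 0.05, 0.10])  # includes alpha = 0 (pure QCD)
    obs = 1.000 + 0.30 * alphas + np.random.default_rng(2).normal(0, 1e-4, 4)
    coeffs = np.polyfit(alphas, obs, deg=1)     # slope = EM effect per unit alpha
    print("EM shift at physical alpha:", coeffs[0] / 137.036)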

4 Hadronic Structure: Three-Point Correlation Functions

LQCD has a much broader scope than purely the spectroscopic information available from the two-point correlators discussed up to now. A second class of observables of importance in nuclear physics that has undergone extensive study comprises aspects of hadron structure such as electromagnetic form factors, moments of parton distributions, generalized parton distributions, and transverse-momentum dependent parton distributions (see Ref. [134] for a recent comprehensive review). These quantities correspond to matrix elements of local (and non-local) operators, and require the calculation of three-point correlation functions, as shown schematically in Figure 13.

Figure 13: Contributions to three-point correlation functions: connected (left) and disconnected (right). The shaded circles correspond to source and sink interpolating operators, while the crossed circle represents the current insertion.

Using the nucleon electromagnetic form factors as examples, consider

$$C^\mu_3(t', t; \mathbf{p}', \mathbf{p}) \;=\; \sum_{\mathbf{x},\mathbf{y}} e^{-i \mathbf{p}'\cdot\mathbf{x}}\, e^{\,i(\mathbf{p}' - \mathbf{p})\cdot\mathbf{y}}\ \langle 0 |\, \chi(\mathbf{x}, t')\, J^\mu(\mathbf{y}, t)\, \overline{\chi}(\mathbf{0}, 0) \,| 0 \rangle \,, \qquad\qquad (28)$$

where $\chi$ is an interpolating operator with the quantum numbers of the nucleon and $J^\mu$ is the electromagnetic current. In the limit of large Euclidean time separations between the source, current insertion and sink, the insertion of complete sets of states on either side of the current shows that this correlation function is given by the nucleon matrix element of the current, up to overlap and kinematic factors that can be extracted from simpler two-point correlators,

$$C^\mu_3(t', t; \mathbf{p}', \mathbf{p}) \;\longrightarrow\; Z(\mathbf{p}')\, \overline{Z}(\mathbf{p})\ e^{-E_{\mathbf{p}'} (t' - t)}\, e^{-E_{\mathbf{p}}\, t}\ \langle N(\mathbf{p}') |\, J^\mu\, | N(\mathbf{p}) \rangle \,,$$

where the last term is the desired matrix element of the electromagnetic current between ground-state nucleons of momenta $\mathbf{p}$ and $\mathbf{p}'$. An additional uncertainty that is introduced in these more complicated calculations is the dependence of the results on the source-operator and operator-sink time separations, rather than just on the source-sink separation. As with two-point functions, the source couples to all eigenstates with the specified quantum numbers, but the high-lying states are exponentially damped in Euclidean time. However, the inserted current can couple back to the excited states, introducing further contamination. It is only in the limit of both $t$ and $t' - t$ being large that the matrix elements can be extracted simply. Unfortunately, this requires temporal separations of the source and sink that are larger than those needed for two-point correlation functions, and so calculations of matrix elements are expected to be both noisier than, and subject to more excited-state contamination than, the corresponding two-point functions. A number of techniques have been explored to address this issue, for instance, through the use of matrices of correlators and multi-state fits (see, for example, Refs. [120, 135, 136]).
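A standard way to expose the plateau structure described here is the ratio of three- to two-point functions; the toy sketch below (spectra and overlaps set by hand, not from any cited calculation) illustrates how the ratio approaches the ground-state matrix element only in the window where both $t$ and $t'-t$ are large.

    # Sketch: toy ratio R(t) = C3(t, t') / C2(t') for a two-state system, which
    # plateaus at the ground-state matrix element g00 away from source and sink.
    import numpy as np

    T, E0, E1 = 32, 0.5, 0.9
    g00, g01, g11 = 1.0, 0.3, 0.2               # <0|J|0>, <0|J|1>, <1|J|1> (toy)
    tp = 20                                     # source-sink separation t'
    t = np.arange(1, tp)
    C2 = np.exp(-E0 * tp) + np.exp(-E1 * tp)
    C3 = (g00 * np.exp(-E0 * tp)
          + g01 * (np.exp(-E0 * (tp - t) - E1 * t) + np.exp(-E1 * (tp - t) - E0 * t))
          + g11 * np.exp(-E1 * tp))
    R = C3 / C2
    print(R[tp // 2 - 1])                       # near-plateau value approximates g00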

In many studies, uncertainties have been introduced into calculations of three-point correlation functions by the omission of quark-line disconnected diagrams (the right-hand diagram in Figure 13). In these terms, the quark-field creation and annihilation operators in the inserted current self-contract, resulting in a propagator from the insertion point to itself; such terms are present whenever the current under consideration has a flavor-singlet component. All-to-all propagator techniques, discussed previously, are used to calculate such contributions, and require substantial computational resources. In the single-nucleon sector, sophisticated all-to-all propagator techniques have recently allowed complete calculations of the proton electromagnetic form factors, e.g. Ref. [137], and of the spin decomposition of the nucleon, e.g. Ref. [138]. Work to apply these methods to nuclei is underway.

5 Error Budgets

The results of a series of LQCD calculations of a given observable are presented as a central value, a statistical uncertainty and a systematic uncertainty (for correlated quantities, this is generalized to a central point and associated region in a multi-dimensional space). However, a great deal of information is contained in the various contributions to both such uncertainties. In precision calculations of weak matrix elements, and other fundamental quantities in particle physics, it is now standard to provide an “error budget” by tabulating all of the sources of uncertainty, an example of which is shown in Figure 14 [139].

Figure 14: The error budget for the decay constant and b-quark mass determined in the calculations of Ref. [139]. [Table reproduced with the permission of HPQCD collaboration.]

Included in the contributions to the total uncertainties are those from the statistics of the calculation of the quantity, from lattice perturbation theory (matching continuum QCD to lattice QCD), from chiral, continuum and infinite-volume extrapolations, from determining the lattice spacing, from tuning the quark masses, and from the absence of EM. In calculations of simple quantities, sufficient computational resources are now available to numerically determine and control most of these uncertainties, through calculations with multiple lattice volumes, spacings and quark masses. Even in cases where complete calculations such as these are prohibitively computationally expensive, a clear and complete error budget presenting all of the uncertainties, even if they result from estimates based upon dimensional analysis or EFT arguments, is informative.
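For concreteness, a minimal sketch of assembling such a budget, combining independent fractional uncertainties in quadrature in the spirit of Figure 14 (all entries are illustrative placeholders, not the values of Ref. [139]):

    # Sketch: a minimal error budget; independent fractional uncertainties are
    # combined in quadrature.  Entries are illustrative only.
    import math

    budget = {  # source: fractional uncertainty (percent)
        "statistics": 0.20,
        "perturbative matching": 0.50,
        "chiral/continuum/volume extrapolation": 0.30,
        "scale setting": 0.15,
        "quark-mass tuning": 0.10,
        "missing QED": 0.10,
    }
    total = math.sqrt(sum(v**2 for v in budget.values()))
    for k, v in budget.items():
        print(f"{k:40s} {v:5.2f}%")
    print(f"{'total':40s} {total:5.2f}%")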

6 Summary

Lattice quantum chromodynamics is a numerical technique with which to calculate strong-interaction observables in the low-energy regime with uncertainties that can be fully quantified and systematically reduced. In this article, we have attempted to summarize all of the sources of uncertainty that arise in starting from the handful of parameters that define QCD (the quark masses and the strong-interaction length scale) and using the numerical machinery of LQCD to make predictions for low-energy observables, such as the meson and baryon spectra, the structure of the nucleon, and the masses and interactions of nuclei. At the time of writing, relatively simple quantities are being calculated at, and near, the physical light-quark masses, essentially eliminating one of the major uncertainties that has been present in the field for many years: the chiral extrapolation. While the lattice volumes and spacings that are computationally accessible at present are not ideal for nuclear physics, with increased computational resources and algorithmic improvements we expect that calculations of many quantities of importance to nuclear physics, with fully quantified statistical and systematic uncertainties, will become routine in the near future.

References


  • [1] S. Aoki, Y. Aoki, C. Bernard, T. Blum, G. Colangelo, M. Della Morte, S. Dürr and A. X. El Khadra et al., arXiv:1310.8555 [hep-lat].
  • [2] K. G. Wilson, Phys. Rev. D 10, 2445 (1974).
  • [3] K. Symanzik, Nucl. Phys. B 226, 187 (1983).
  • [4] J. B. Kogut and L. Susskind, Phys. Rev. D 11, 395 (1975).
  • [5] D. B. Kaplan, Phys. Lett. B 288, 342 (1992) [arXiv:hep-lat/9206013].
  • [6] Y. Shamir, Nucl. Phys. B 406, 90 (1993) [arXiv:hep-lat/9303005].
  • [7] V. Furman and Y. Shamir, Nucl. Phys. B 439, 54 (1995) [arXiv:hep-lat/9405004].
  • [8] R. Narayanan and H. Neuberger, Nucl. Phys. B 443, 305 (1995) [arXiv:hep-th/9411108].
  • [9] H. Neuberger, Phys. Lett. B 417, 141 (1998) [arXiv:hep-lat/9707022].
  • [10] R. B. Morgan and W. Wilcox, math-ph/0405053.
  • [11] D. Darnell, R. B. Morgan and W. Wilcox, Linear Algebra Appl. 429, 2415 (2008) [arXiv:0707.0502 [math-ph]].
  • [12] A. Stathopoulos and K. Orginos, SIAM J. Sci. Comput. 32, 439 (2010) [arXiv:0707.0131 [hep-lat]].
  • [13] M. Luscher, JHEP 0712, 011 (2007) [arXiv:0710.5417 [hep-lat]].
  • [14] J. Brannick, R. C. Brower, M. A. Clark, J. C. Osborn and C. Rebbi, Phys. Rev. Lett. 100, 041601 (2008) [arXiv:0707.4018 [hep-lat]].
  • [15] R. Babich, J. Brannick, R. C. Brower, M. A. Clark, T. A. Manteuffel, S. F. McCormick, J. C. Osborn and C. Rebbi, Phys. Rev. Lett. 105, 201602 (2010) [arXiv:1005.3043 [hep-lat]].
  • [16] A. Frommer, K. Kahl, S. Krieg, B. Leder and M. Rottmann, arXiv:1303.1377 [hep-lat].
  • [17] S. Duane, A. D. Kennedy, B. J. Pendleton and D. Roweth, Phys. Lett. B 195, 216 (1987).
  • [18] S. A. Gottlieb, W. Liu, D. Toussaint, R. L. Renken and R. L. Sugar, Phys. Rev. D 35, 2531 (1987).
  • [19] I. P. Omelyan, I. M. Mryglod and R. Folk, Comput. Phys. Commun. 151, 272 (2003).
  • [20] C. Bernard, N. Christ, S. A. Gottlieb, K. Jansen, R. Kenway, T. Lippert, M. Luscher and P. B. Mackenzie et al., Nucl. Phys. Proc. Suppl. 106, 199 (2002).
  • [21] A. Ukawa [CP-PACS and JLQCD Collaborations], Nucl. Phys. Proc. Suppl. 106, 195 (2002).
  • [22] C. Urbach, K. Jansen, A. Shindler and U. Wenger, Comput. Phys. Commun. 174, 87 (2006) [hep-lat/0506011].
  • [23] A. D. Kennedy, hep-lat/0607038.
  • [24] M. Hasenbusch, Phys. Lett. B 519 (2001) 177 [hep-lat/0107019].
  • [25] M. Hasenbusch and K. Jansen, Nucl. Phys. B 659, 299 (2003) [hep-lat/0211042].
  • [26] M. Luscher, Comput. Phys. Commun. 165, 199 (2005) [hep-lat/0409106].
  • [27] H. W. Lin et al. [Hadron Spectrum Collaboration], Phys. Rev. D 79, 034502 (2009) [arXiv:0810.3588 [hep-lat]].
  • [28] R. Sommer, Nucl. Phys. B 411, 839 (1994) [hep-lat/9310022].
  • [29] M. Luscher, JHEP 1008, 071 (2010) [arXiv:1006.4518 [hep-lat]].
  • [30] S. Borsanyi, S. Dürr, Z. Fodor, C. Hoelbling, S. D. Katz, S. Krieg, T. Kurth and L. Lellouch et al., JHEP 1209, 010 (2012) [arXiv:1203.4469 [hep-lat]].
  • [31] R. Sommer, PoS LATTICE 2013, 015 (2014) [arXiv:1401.3270 [hep-lat]].
  • [32] S. Drury, T. Blum, M. Hayakawa, T. Izubuchi, C. Sachrajda and R. Zhou, PoS LATTICE 2013, 268 (2013) [arXiv:1312.0477 [hep-lat]].
  • [33] G. M. de Divitiis et al. [RM123 Collaboration], Phys. Rev. D 87, no. 11, 114505 (2013) [arXiv:1303.4896 [hep-lat]].
  • [34] S. Borsanyi, S. Dürr, Z. Fodor, C. Hoelbling, S. D. Katz, S. Krieg, L. Lellouch and T. Lippert et al., arXiv:1406.4088 [hep-lat].
  • [35] A. Bazavov et al. [Fermilab Lattice and MILC Collaboration], arXiv:1407.3772 [hep-lat].
  • [36] A. Bazavov et al. [MILC Collaboration], Phys. Rev. D 87, no. 5, 054505 (2013) [arXiv:1212.4768 [hep-lat]].
  • [37] R. Sommer, F. Tekin and U. Wolff, PoS LATTICE 2010, 241 (2010) [arXiv:1011.2332 [hep-lat]].
  • [38] T. Chiarappa, F. Farchioni, K. Jansen, I. Montvay, E. E. Scholz, L. Scorzato, T. Sudmann and C. Urbach, Eur. Phys. J. C 50, 373 (2007) [hep-lat/0606011].
  • [39] Y. Aoki, T. Blum, N. Christ, C. Cristian, C. Dawson, T. Izubuchi, G. Liu and R. Mawhinney et al., Phys. Rev. D 69, 074504 (2004) [hep-lat/0211023].
  • [40] Y. Aoki, T. Blum, N. H. Christ, C. Dawson, T. Izubuchi, R. D. Mawhinney, J. Noaki and S. Ohta et al., Phys. Rev. D 73, 094507 (2006) [hep-lat/0508011].
  • [41] S. Schaefer et al. [ALPHA Collaboration], Nucl. Phys. B 845, 93 (2011) [arXiv:1009.5228 [hep-lat]].
  • [42] M. Bruno, S. Schaefer and R. Sommer, arXiv:1406.5363 [hep-lat].
  • [43] M. Lüscher and S. Schaefer, Comput. Phys. Commun. 184, 519 (2013) [arXiv:1206.2809 [hep-lat]].
  • [44] M. Lüscher, JHEP 1406, 105 (2014) [arXiv:1404.5930 [hep-lat]].
  • [45] G. McGlynn and R. D. Mawhinney, arXiv:1406.4551 [hep-lat].
  • [46] M. Luscher, arXiv:1002.4232 [hep-lat].
  • [47] U. Wolff [ALPHA Collaboration], Comput. Phys. Commun. 156, 143 (2004) [Erratum-ibid. 176, 383 (2007)] [hep-lat/0306017].
  • [48] M. Peardon et al. [Hadron Spectrum Collaboration], Phys. Rev. D 80, 054506 (2009) [arXiv:0905.2160 [hep-lat]].
  • [49] R. Babich, J. Brannick, R. C. Brower, M. A. Clark, S. D. Cohen, J. C. Osborn and C. Rebbi, PoS LAT 2009, 031 (2009) [arXiv:0912.2186 [hep-lat]].
  • [50] M. A. Clark, R. Babich, K. Barros, R. C. Brower and C. Rebbi, Comput. Phys. Commun. 181, 1517 (2010) [arXiv:0911.3191 [hep-lat]].
  • [51] R. Babich, M. A. Clark, B. Joo, G. Shi, R. C. Brower and S. Gottlieb, arXiv:1109.2935 [hep-lat].
  • [52] F. T. Winter, M. A. Clark, R. G. Edwards and B. Joó, arXiv:1408.5925 [hep-lat].
  • [53] L. Giusti, C. Hoelbling, M. Luscher and H. Wittig, Comput. Phys. Commun. 153, 31 (2003) [hep-lat/0212012].
  • [54] T. A. DeGrand and S. Schaefer, Comput. Phys. Commun. 159, 185 (2004) [hep-lat/0401011].
  • [55] G. S. Bali, S. Collins and A. Schafer, Comput. Phys. Commun. 181, 1570 (2010) [arXiv:0910.3970 [hep-lat]].
  • [56] T. Blum, T. Izubuchi and E. Shintani, Phys. Rev. D 88, no. 9, 094503 (2013) [arXiv:1208.4349 [hep-lat]].
  • [57] E. Shintani, R. Arthur, T. Blum, T. Izubuchi, C. Jung and C. Lehner, arXiv:1402.0244 [hep-lat].
  • [58] A. Stathopoulos, J. Laeuchli and K. Orginos, arXiv:1302.4018 [hep-lat].
  • [59] E. Endress, C. Pena and K. Sivalingam, arXiv:1402.0831 [hep-lat].
  • [60] Q. Liu, N. H. Christ and C. Jung, Phys. Rev. D 87, no. 5, 054503 (2013) [arXiv:1206.0080 [hep-lat]].
  • [61] M. Luscher, Commun. Math. Phys. 54, 283 (1977).
  • [62] M. Luscher and P. Weisz, Nucl. Phys. B 240, 349 (1984).
  • [63] M. Lüscher and U. Wolff, Nucl. Phys. B 339, 222 (1990).
  • [64] C. Michael, Nucl. Phys. B 259, 58 (1985).
  • [65] W. Detmold, K. Orginos, M. J. Savage and A. Walker-Loud, Phys. Rev. D 78, 054514 (2008) [arXiv:0807.1856 [hep-lat]].
  • [66] W. Detmold and B. Smigielski, Phys. Rev. D 84, 014508 (2011) [arXiv:1103.4362 [hep-lat]].
  • [67] S. R. Beane, W. Detmold, T. C. Luu, K. Orginos, A. Parreno, M. J. Savage, A. Torok and A. Walker-Loud, Phys. Rev. D 79, 114502 (2009) [arXiv:0903.2990 [hep-lat]].
  • [68] T. A. DeGrand and D. Toussaint, Singapore, Singapore: World Scientific (1990) 750 p
  • [69] B. Efron, CBMS-NSF Regional Conference Series in Applied Mathematics, 1987.
  • [70] J. J. Dudek, R. G. Edwards, M. J. Peardon, D. G. Richards and C. E. Thomas, Phys. Rev. Lett. 103, 262001 (2009) [arXiv:0909.0200 [hep-ph]].
  • [71] J. Foley, A. O’Cais, M. Peardon and S. M. Ryan, Phys. Rev. D 75 (2007) 094503 [arXiv:hep-lat/0702010].
  • [72] S. Basak et al., Phys. Rev. D 76 (2007) 074504 [arXiv:0709.0008 [hep-lat]].
  • [73] B. Blossier, M. Della Morte, G. von Hippel, T. Mendes and R. Sommer, JHEP 0904, 094 (2009) [arXiv:0902.1265 [hep-lat]].
  • [74] R. G. Edwards et al. [Hadron Spectrum Collaboration], Phys. Rev. D 87, no. 5, 054506 (2013) [arXiv:1212.5236 [hep-ph]].
  • [75] S. R. Beane, W. Detmold, K. Orginos and M. J. Savage, Prog. Part. Nucl. Phys. 66, 1 (2011) [arXiv:1004.2935 [hep-lat]].
  • [76] G. Parisi and Y. -s. Wu, Sci. Sin. 24, 483 (1981).
  • [77] G. P. Lepage, “The Analysis Of Algorithms For Lattice Field Theory,” Invited lectures given at TASI’89 Summer School, Boulder, CO, Jun 4-30, 1989. Published in Boulder ASI 1989:97-120 (QCD161:T45:1989).
  • [78] S. R. Beane, E. Chang, W. Detmold, H. W. Lin, T. C. Luu, K. Orginos, A. Parreno and M. J. Savage et al., Phys. Rev. D 84, 014507 (2011) [arXiv:1104.4101 [hep-lat]].
  • [79] W. Detmold and M. G. Endres, Phys. Rev. D 90, 034503 (2014) [arXiv:1404.6816 [hep-lat]].
  • [80] M. G. Endres, D. B. Kaplan, J. -W. Lee and A. N. Nicholson, PoS LATTICE 2011, 017 (2011) [arXiv:1112.4023 [hep-lat]].
  • [81] A. Bazavov et al. [MILC Collaboration], Phys. Rev. Lett. 110, 172003 (2013) [arXiv:1301.5855 [hep-ph]].
  • [82] K. Symanzik, Nucl. Phys. B 226, 205 (1983).
  • [83] S. R. Beane, E. Chang, S. D. Cohen, W. Detmold, H. W. Lin, T. C. Luu, K. Orginos and A. Parreno et al., Phys. Rev. D 87, no. 3, 034506 (2013) [arXiv:1206.5219 [hep-lat]].
  • [84] S. R. Beane et al. [NPLQCD Collaboration], Phys. Rev. C 88, no. 2, 024003 (2013) [arXiv:1301.5790 [hep-lat]].
  • [85] Z. Davoudi and M. J. Savage, Phys. Rev. D 86, 054505 (2012) [arXiv:1204.4146 [hep-lat]].
  • [86] M. Luscher, Commun. Math. Phys. 105, 153 (1986).
  • [87] P. Hasenfratz and H. Leutwyler, Nucl. Phys. B 343, 241 (1990).
  • [88] M. Luscher, Nucl. Phys. B 354, 531 (1991).
  • [89] G. Colangelo and C. Haefeli, Nucl. Phys. B 744, 14 (2006) [hep-lat/0602017].
  • [90] G. Colangelo, A. Fuhrer and S. Lanz, Phys. Rev. D 82, 034506 (2010) [arXiv:1005.1485 [hep-lat]].
  • [91] G. Colangelo, S. Durr and C. Haefeli, Nucl. Phys. B 721, 136 (2005) [hep-lat/0503014].
  • [92] S. Durr, Z. Fodor, C. Hoelbling, S. Krieg, T. Kurth, L. Lellouch, T. Lippert and R. Malak et al., arXiv:1310.3626 [hep-lat].
  • [93] A. Walker-Loud, PoS CD 12, 017 (2013) [arXiv:1304.6341 [hep-lat]].
  • [94] P. F. Bedaque, Phys. Lett. B 593, 82 (2004) [nucl-th/0402051].
  • [95] C. Aubin, T. Blum, M. Golterman and S. Peris, Phys. Rev. D 88, no. 7, 074505 (2013) [arXiv:1307.4701 [hep-lat]].
  • [96] J. Flynn, A. Juttner, C. Sachrajda and G. Villadoro, PoS LAT 2005, 352 (2006) [hep-lat/0509093].
  • [97] S. R. Sharpe and N. Shoresh, Phys. Rev. D 62, 094503 (2000) [hep-lat/0006017].
  • [98] S. R. Sharpe and N. Shoresh, Nucl. Phys. Proc. Suppl. 83, 968 (2000) [hep-lat/9909090].
  • [99] R. A. Briceno, Z. Davoudi, T. C. Luu and M. J. Savage, Phys. Rev. D 89, 074509 (2014) [arXiv:1311.7686 [hep-lat]].
  • [100] L. Maiani and M. Testa, Phys. Lett. B 245, 585 (1990).
  • [101] K. Huang and C. N. Yang, Phys. Rev. 105, 767 (1957).
  • [102] T. Luu and M. J. Savage, Phys. Rev. D 83, 114508 (2011) [arXiv:1101.3347 [hep-lat]].
  • [103] V. Bernard, M. Lage, U.-G. Meissner and A. Rusetsky, JHEP 1101, 019 (2011) [arXiv:1010.6018 [hep-lat]].
  • [104] M. Doring, U. G. Meissner, E. Oset and A. Rusetsky, Eur. Phys. J. A 48, 114 (2012) [arXiv:1205.4838 [hep-lat]].
  • [105] A. Martinez Torres, M. Bayar, D. Jido and E. Oset, Int. J. Mod. Phys. Conf. Ser. 26, 1460057 (2014) [arXiv:1309.6681 [nucl-th]].
  • [106] R. A. Briceno, Z. Davoudi, T. Luu and M. J. Savage, Phys. Rev. D 88, 114507 (2013) [arXiv:1309.3556 [hep-lat]].
  • [107] R. A. Briceno, Z. Davoudi and T. C. Luu, Phys. Rev. D 88, no. 3, 034502 (2013) [arXiv:1305.4903 [hep-lat]].
  • [108] R. A. Briceno, Phys. Rev. D 89, 074507 (2014) [arXiv:1401.3312 [hep-lat]].
  • [109] J. J. Dudek, R. G. Edwards, P. Guo and C. E. Thomas, Phys. Rev. D 88, 094505 (2013) [arXiv:1309.2608 [hep-lat]].
  • [110] C. B. Lang, L. Leskovec, D. Mohler, S. Prelovsek and R. M. Woloshyn, arXiv:1403.8103 [hep-lat].
  • [111] S. R. Beane and M. J. Savage, arXiv:1407.4846 [hep-lat].
  • [112] S. R. Beane, W. Detmold and M. J. Savage, Phys. Rev. D 76, 074507 (2007) [arXiv:0707.1670 [hep-lat]].
  • [113] R. A. Briceno and Z. Davoudi, Phys. Rev. D 87, no. 9, 094507 (2013) [arXiv:1212.3398 [hep-lat]].
  • [114] M. T. Hansen and S. R. Sharpe, arXiv:1311.4848 [hep-lat].
  • [115] M. T. Hansen and S. R. Sharpe, arXiv:1408.5933 [hep-lat].
  • [116] W. Detmold and M. J. Savage, Phys. Rev. D 77, 057502 (2008) [arXiv:0801.0763 [hep-lat]].
  • [117] M. T. Hansen and S. R. Sharpe, Phys. Rev. D 86, 016007 (2012) [arXiv:1204.0826 [hep-lat]].
  • [118] S. R. Beane, W. Detmold, T. C. Luu, K. Orginos, M. J. Savage and A. Torok, Phys. Rev. Lett. 100, 082004 (2008) [arXiv:0710.1827 [hep-lat]].
  • [119] Z. Shi, arXiv:1211.0481 [hep-lat].
  • [120] J. R. Green, J. W. Negele, A. V. Pochinsky, S. N. Syritsyn, M. Engelhardt and S. Krieg, arXiv:1404.4029 [hep-lat].
  • [121] S. R. Beane, Nucl. Phys. B 695, 192 (2004) [hep-lat/0403030].
  • [122] R. D. Young, D. B. Leinweber and A. W. Thomas, Nucl. Phys. Proc. Suppl. 141, 233 (2005).
  • [123] N. Barnea, L. Contessi, D. Gazit, F. Pederiva and U. van Kolck, arXiv:1311.4966 [nucl-th].
  • [124] S. R. Beane and M. J. Savage, Nucl. Phys. A 713, 148 (2003) [hep-ph/0206113].
  • [125] E. Epelbaum, U. G. Meissner and W. Gloeckle, Nucl. Phys. A 714, 535 (2003) [nucl-th/0207089].
  • [126] S. R. Beane and M. J. Savage, Nucl. Phys. A 717, 91 (2003) [nucl-th/0208021].
  • [127] E. Epelbaum, H. Krebs, T. A. Lähde, D. Lee and U. G. Meißner, Phys. Rev. Lett. 110, no. 11, 112502 (2013) [arXiv:1212.4181 [nucl-th]].
  • [128] S. R. Beane, E. Chang, S. D. Cohen, W. Detmold, H. -W. Lin, T. C. Luu, K. Orginos and A. Parreno et al., Phys. Rev. Lett. 109, 172001 (2012) [arXiv:1204.3606 [hep-lat]].
  • [129] T. Blum, T. Doi, M. Hayakawa, T. Izubuchi and N. Yamada, Phys. Rev. D 76, 114508 (2007) [arXiv:0708.0484 [hep-lat]].
  • [130] S. Basak et al. [MILC Collaboration], PoS LATTICE 2008, 127 (2008) [arXiv:0812.4486 [hep-lat]].
  • [131] T. Blum, R. Zhou, T. Doi, M. Hayakawa, T. Izubuchi, S. Uno and N. Yamada, Phys. Rev. D 82, 094508 (2010) [arXiv:1006.1311 [hep-lat]].
  • [132] S. Aoki, K. I. Ishikawa, N. Ishizuka, K. Kanaya, Y. Kuramashi, Y. Nakamura, Y. Namekawa and M. Okawa et al., Phys. Rev. D 86, 034507 (2012) [arXiv:1205.2961 [hep-lat]].
  • [133] S. Borsanyi, S. Dürr, Z. Fodor, J. Frison, C. Hoelbling, S. D. Katz, S. Krieg and T. Kurth et al., Phys. Rev. Lett. 111, 252001 (2013) [arXiv:1306.2287 [hep-lat]].
  • [134] P. Hagler, Phys. Rept. 490, 49 (2010) [arXiv:0912.5483 [hep-lat]].
  • [135] T. Bhattacharya, S. D. Cohen, R. Gupta, A. Joseph, H. W. Lin and B. Yoon, Phys. Rev. D 89, 094502 (2014) [arXiv:1306.5435 [hep-lat]].
  • [136] B. Jäger, T. D. Rae, S. Capitani, M. Della Morte, D. Djukanovic, G. von Hippel, B. Knippschild and H. B. Meyer et al., arXiv:1311.5804 [hep-lat].
  • [137] S. Meinel, Talk presented at the 32nd International Symposium on Lattice Field Theory, Columbia University, June 2014. https://indico.bnl.gov/contributionDisplay.py/pdf?contribId=198&sessionId=4&confId=736
  • [138] M. Deka, T. Doi, Y. B. Yang, B. Chakraborty, S. J. Dong, T. Draper, M. Glatzmaier and M. Gong et al., arXiv:1312.4816 [hep-lat].
  • [139] B. Colquhoun, R. J. Dowdall, C. T. H. Davies, K. Hornbostel and G. P. Lepage, arXiv:1408.5768 [hep-lat].