QCD thermodynamics from lattice calculations with non-equilibrium methods: The equation of state
Michele Caselle, Alessandro Nada, and Marco Panero
Department of Physics and Arnold-Regge Center, University of Turin, and INFN, Turin
Via Pietro Giuria 1, I-10125 Turin, Italy
Platanenallee 6, D-15738 Zeuthen, Germany
A precise lattice determination of the equation of state in Yang-Mills theory is carried out by means of a simulation algorithm, based on Jarzynski’s theorem, that allows one to compute physical quantities in thermodynamic equilibrium by driving the field configurations of the system out of equilibrium. The physical results and the computational efficiency of the algorithm are compared with other state-of-the-art lattice calculations, and the extension to full QCD with dynamical fermions and to other observables is discussed.
1 Introduction and motivation
The phenomenology of the strong interaction at high temperatures and/or densities remains one of the most interesting (yet somehow elusive) research areas in the physics of elementary particles. As nicely summarized by B. Müller in his lecture at the 2013 Nobel Symposium on LHC Physics [Muller:2013dea], the novel state of matter produced in nuclear collisions at LHC and RHIC reveals unique features: it is strongly coupled, but highly relativistic; at high temperature it displays the distinctive collective phenomena of a liquid, whereas at low temperatures it turns into a gas of weakly interacting hadrons; while its shear viscosity is orders of magnitude larger than the one measured for superfluid helium, and larger still than the one of ultracold atoms [Schafer:2009dj], the ratio of the shear viscosity over the entropy density is actually lower than for those substances, and close to the fundamental quantum-mechanical bound [Policastro:2001yc]; moreover, it thermalizes in a very short time, close to the limits imposed by causality. Finally, the quark-gluon plasma (QGP) is not simply a “rearrangement” of ordinary nuclear matter: rather, it “creates” its own ground state, in which two characterizing features of the hadronic world, color confinement and dynamical chiral-symmetry breaking, are lost.
At the temperatures reached in present heavy-ion-collision experiments—which, when expressed in natural units ($\hbar = c = k_{\rm B} = 1$), are of the order of hundreds of MeV [Braun-Munzinger:2015hba]—the QGP is strongly coupled: this demands a theoretical investigation by non-perturbative tools, and the regularization of quantum chromodynamics (QCD) on a Euclidean lattice [Wilson:1974sk] is the tool of choice for this purpose. Over the past few years, several physical observables relevant for finite-temperature QCD have been studied on the lattice (see refs. [Philipsen:2012nu, Ding:2015ona, Meyer:2015wax] for reviews): one of the most prominent among them is the QCD equation of state [Borsanyi:2013bia, Bazavov:2014pvz], which determines the evolution of the Universe shortly after the Big Bang, as well as the evolution of the matter produced in the “little bang” at ultrarelativistic nuclear colliders.
While state-of-the-art results for the QCD equation of state, obtained by different collaborations using slightly different types of lattice discretizations, are now consistent with each other, it is worth remarking that such computations still require large computational power, and the multiple extrapolations to the physical limit are far from trivial. For example, in the standard “integral” method [Engels:1990vr], the fact that quantum fluctuations at the lattice-cutoff scale induce a strong ultraviolet divergence in the free energy associated with the QCD partition function implies that bulk quantities at thermal equilibrium, such as the pressure $p$ at a finite temperature $T$, have to be extracted by subtracting the corresponding quantities evaluated in vacuum, and are encoded in numbers that scale like $(aT)^D$ ($a$ being the lattice spacing, and $D$ the Euclidean spacetime dimension, i.e. four): this constrains the values of the lattice spacing that can be probed in these simulations and, as a consequence, the control over systematic uncertainties affecting the extrapolation to the continuum. Similarly, in simulations with staggered fermions, residual taste-symmetry-breaking effects can have an impact on the extrapolation of the quark masses to the physical limit.
Due to these challenges, in the past few years there has been renewed interest in alternative methods to compute the equation of state. In particular, we would like to mention two recent studies, based upon the gradient flow [Kitazawa:2016dsl] and on the formulation of the theory in a moving reference frame [Giusti:2016iqr]: both of them have been successfully tested in Yang-Mills theory without quarks, and can be extended to full QCD without major obstructions [Kanaya:2016rkt, DallaBrida:2017sxr]. The thermal properties of a purely gluonic theory, albeit not relevant for a quantitative comparison with experiments, can reveal important universal features, shared by theories with different gauge symmetry [Boyd:1996bx, Lucini:2002ku, Lucini:2003zr, Lucini:2005vg, Bursa:2005yv, Bringoltz:2005rr, Bringoltz:2005xx, Pepe:2006er, Cossu:2007dk, Umeda:2008bd, Meyer:2009tq, Panero:2008mg, Panero:2009tv, Datta:2009jn, Wellegehausen:2009rq, Datta:2010sq, Borsanyi:2012ve, Mykkanen:2012ri, Lucini:2012wq, Lucini:2012gg, Asakawa:2013laa, Giusti:2014ila, Bruno:2014rxa, Bonati:2015uga, Francis:2015lha, Giusti:2016iqr, Hajizadeh:2017jsw, Kitazawa:2016dsl] and/or in different dimensions [Christensen:1991rx, Holland:2005nd, Holland:2007ar, Liddle:2008kk, Bialas:2008rk, Caselle:2011fy, Caselle:2011mn, Bialas:2012qz], and, by virtue of the limited computational power required for their numerical Monte Carlo simulation, provide a useful benchmark for new algorithms.
In this manuscript, we present yet another method to compute the equation of state, which is based on Jarzynski’s theorem [Jarzynski:1996oqb, Jarzynski:1997ef]: as will be discussed in detail in section 2, this theorem encodes an exact relation between the ratio of the partition functions associated with two different ensembles (which, in this case, are defined as those of the theory at two different temperatures) and an exponential average of the work done on the system during a non-equilibrium transformation driving it from one ensemble to the other. Jarzynski’s theorem is closely related to a set of powerful mathematical identities in non-equilibrium statistical physics, which have been developed since the 1990s [Evans:1993po, Evans:1993em, Gallavotti:1994de, Gallavotti:1995de, Crooks:1997ne, Crooks:1999ep, Crooks:1999pe, Ritort:2004wf, MariniBettoloMarconi:2008fd]. A first example of application of Jarzynski’s theorem in numerical simulations of lattice gauge theory was presented in ref. [Caselle:2016wsw], but the technique is quite general and versatile, and can be used for a variety of different lattice QCD problems (at zero or at finite temperature). In section 3, after laying out the setup of our numerical calculations, we report a set of high-precision results for the equation of state obtained using this method, along with a detailed discussion of the underlying physics, and with a comparison to studies based on different methods [Borsanyi:2012ve, Giusti:2016iqr, Kitazawa:2016dsl]. Section LABEL:sec:discussion is devoted to a discussion of the computational efficiency of our method and to some concluding remarks. A summary of this work has been reported in ref. [Nada:2017xmz].
2 Jarzynski’s equality
Jarzynski’s equality [Jarzynski:1996oqb, Jarzynski:1997ef] is a theorem in statistical mechanics that relates equilibrium and non-equilibrium quantities.
Consider a classical statistical system, which depends on a set of parameters $\lambda$ (defined in a space $\Lambda$), and let $H_\lambda$ denote its Hamiltonian, which is a function of the degrees of freedom $\phi$ of the system. When the system is in thermal equilibrium at temperature $T$, the partition function, defined as
$$ Z_\lambda = \sum_{\phi} \exp\left( -\frac{H_\lambda(\phi)}{T} \right) \qquad (1) $$
(where $\sum_\phi$ denotes the sum over all possible configurations, and, depending on the nature of $\phi$ and on the theory, may be a finite or an infinite sum, a multiple integral, or a suitably defined functional integral), is related to the Helmholtz free energy $F_\lambda$ via $F_\lambda = -T \ln Z_\lambda$. In eq. (1), both the partition function and the free energy, like $H$, are functions of $\lambda$. Let $\lambda_{\rm in}$ and $\lambda_{\rm fin}$ denote two distinct values of $\lambda$ in parameter space, and let $Z_{\lambda_{\rm in}}$ and $Z_{\lambda_{\rm fin}}$ denote the partition functions of the system in thermodynamic equilibrium, when its parameters take values $\lambda_{\rm in}$ and $\lambda_{\rm fin}$, respectively. For a given physical observable $A$, let $\langle A \rangle_\lambda$ denote the statistical average of $A$ in thermal equilibrium in the ensemble with parameters fixed to $\lambda$.
Consider now the situation in which the parameters of the system are varied as a function of time $t$ in a certain interval (which can be either finite or infinite) of extrema $t_{\rm in}$ and $t_{\rm fin}$, according to some arbitrary but well-specified function $\lambda(t)$ (or “protocol” for the parameter evolution), with $\lambda(t_{\rm in}) = \lambda_{\rm in}$ and $\lambda(t_{\rm fin}) = \lambda_{\rm fin}$. Assume that, starting from an initial equilibrium configuration at $t = t_{\rm in}$, the parameters are let evolve in time, according to the $\lambda(t)$ function; accordingly, the dynamical variables $\phi$ respond to the variation in the parameters, and themselves evolve in time, spanning a trajectory in the field-configuration space. In general, the configurations at all $t > t_{\rm in}$ are not thermalized, i.e. the parameter evolution drives the system out of equilibrium (except when $t_{\rm fin} - t_{\rm in}$ is infinite, so that the switching process is infinitely slow). Let $W$ denote the total work done on the system during its evolution from $t_{\rm in}$ to $t_{\rm fin}$; since the system is driven out of equilibrium, the mean value $\overline{W}$ of the work, obtained by averaging over an ensemble of such transformations, is in general larger than or equal to the difference $\Delta F = F_{\lambda_{\rm fin}} - F_{\lambda_{\rm in}}$ between the free energies of the equilibrium ensembles with parameters $\lambda_{\rm fin}$ and $\lambda_{\rm in}$:
$$ \overline{W} \ge \Delta F. \qquad (2) $$
Note that $\overline{W} - \Delta F$ is the amount of work dissipated during the parameter switch, which is directly related to the entropy variation, hence the inequality (2) is nothing but an expression of the second law of thermodynamics. Also, when the parameter switch is infinitely slow (i.e. for $t_{\rm fin} - t_{\rm in} \to \infty$) the system remains in thermodynamic equilibrium throughout the switching process, the transformation is reversible, and the equality sign holds.
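The content of inequality (2) can be made concrete in a toy model of our own choosing (not the lattice theory studied in this work): a single harmonic degree of freedom with Hamiltonian $H_\lambda(x) = \lambda x^2/2$ at $T = 1$, driven from $\lambda = 1$ to $\lambda = 2$, for which $\Delta F = \frac{1}{2}\ln 2$ exactly. The sketch below (all parameter values are illustrative) measures the average work for protocols of different durations:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy illustration of inequality (2) (our own sketch, not the lattice setup
# of this work): one degree of freedom with H_lam(x) = lam * x**2 / 2 at
# temperature T = 1, driven from lam = 1 to lam = 2.
lam_in, lam_fin, n_traj = 1.0, 2.0, 4000
dF_exact = 0.5 * np.log(lam_fin / lam_in)   # exact free-energy difference

def mean_work(n_steps, n_hits=5, delta=1.0):
    """Average work over n_traj realizations of a linear protocol."""
    lams = np.linspace(lam_in, lam_fin, n_steps + 1)
    x = rng.normal(0.0, 1.0 / np.sqrt(lam_in), size=n_traj)  # thermalized start
    w = np.zeros(n_traj)
    for n in range(n_steps):
        w += 0.5 * (lams[n + 1] - lams[n]) * x**2   # work done by the switch
        for _ in range(n_hits):                     # Metropolis relaxation
            xp = x + rng.uniform(-delta, delta, size=n_traj)
            acc = rng.random(n_traj) < np.exp(-0.5 * lams[n + 1] * (xp**2 - x**2))
            x = np.where(acc, xp, x)
    return w.mean()

means = {n: mean_work(n) for n in (1, 10, 100)}
print(means, dF_exact)
```

The instantaneous switch dissipates the most work; as the protocol becomes slower, the average work decreases toward $\Delta F$, but never falls below it, in accordance with the second law.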
However, if one considers the exponential average of the work, then it is possible to prove that it is directly related to $\Delta F$ through the following equality:
$$ \overline{\exp\left(-\frac{W}{T}\right)} = \exp\left(-\frac{\Delta F}{T}\right) = \frac{Z_{\lambda_{\rm fin}}}{Z_{\lambda_{\rm in}}}. \qquad (3) $$
Eq. (3) is the main statement of Jarzynski’s theorem [Jarzynski:1996oqb].
Before discussing the proof of eq. (3) for generic $t_{\rm fin} - t_{\rm in}$, we observe that, when $t_{\rm fin} - t_{\rm in} \to \infty$, the equality $\overline{W} = \Delta F$ holds: in this limit, the parameter switch from $\lambda_{\rm in}$ to $\lambda_{\rm fin}$ is infinitely slow, the transformation becomes quasi-static, the system remains in equilibrium for the whole duration of the process, so that the work done on the system is equal to
$$ W = F_{\lambda_{\rm fin}} - F_{\lambda_{\rm in}} = \Delta F \qquad (4) $$
for every trajectory interpolating between the initial and final ensembles. Hence, in this limit one has $\overline{W} = \Delta F$. Moreover, since in this limit $W$ does not fluctuate from one trajectory to another, one also has $\overline{\exp(-W/T)} = \exp(-\overline{W}/T)$, thus the left-hand side of eq. (3) can be written as
$$ \exp\left(-\frac{\overline{W}}{T}\right) = \exp\left(-\frac{\Delta F}{T}\right), \qquad (5) $$
and eq. (3) is trivially recovered.
To prove eq. (3) for finite $t_{\rm fin} - t_{\rm in}$, let us first consider the case in which the system is initially in thermal equilibrium with a heat reservoir at temperature $T$, but is isolated from it during the switching process from $\lambda_{\rm in}$ to $\lambda_{\rm fin}$. Then, one can express the average over the ensemble of trajectories appearing on the left-hand side of eq. (3) in terms of the time-dependent probability density in phase space $\rho(\phi, t)$. Given that at $t = t_{\rm in}$ the system is in thermal equilibrium at temperature $T$, $\rho$ satisfies the initial condition $\rho(\phi, t_{\rm in}) = \exp\left(-H_{\lambda_{\rm in}}(\phi)/T\right)/Z_{\lambda_{\rm in}}$; moreover, since the system is in isolation during the switching process, the time evolution of $\rho$ at $t > t_{\rm in}$ is given by Liouville’s equation $\partial \rho / \partial t = \left\{ H_{\lambda(t)}, \rho \right\}$, where the quantity appearing on the right-hand side is the Poisson bracket of $H_{\lambda(t)}$ and $\rho$. The evolution law expressed by Liouville’s equation is fully deterministic, and a one-to-one mapping exists between each configuration $\phi$ at a generic time $t$ and a configuration $\phi_0(\phi, t)$ at the initial time $t_{\rm in}$. As a consequence, the work accumulated along a trajectory going through a configuration $\phi$ at a generic time $t$ is well-defined and equal to
$$ w(\phi, t) = H_{\lambda(t)}(\phi) - H_{\lambda_{\rm in}}\left( \phi_0(\phi, t) \right). \qquad (6) $$
Thus, the work accumulated during the evolution starting from $t_{\rm in}$ and leading to a final configuration $\phi$ at $t_{\rm fin}$ is simply $W = w(\phi, t_{\rm fin})$, and the average appearing on the left-hand side of eq. (3) can be expressed as
$$ \overline{\exp\left(-\frac{W}{T}\right)} = \sum_\phi \rho(\phi, t_{\rm fin}) \exp\left( -\frac{w(\phi, t_{\rm fin})}{T} \right). \qquad (7) $$
Liouville’s theorem implies the conservation of the trajectory density in phase space: hence $\rho(\phi, t_{\rm fin}) = \exp\left(-H_{\lambda_{\rm in}}(\phi_0(\phi, t_{\rm fin}))/T\right)/Z_{\lambda_{\rm in}}$, so that eq. (7) can be rewritten as
$$ \overline{\exp\left(-\frac{W}{T}\right)} = \frac{1}{Z_{\lambda_{\rm in}}} \sum_\phi \exp\left( -\frac{H_{\lambda_{\rm fin}}(\phi)}{T} \right) = \frac{Z_{\lambda_{\rm fin}}}{Z_{\lambda_{\rm in}}}. \qquad (8) $$
If the system remains coupled to a heat reservoir during the parameter switch (and the coupling of the system to the reservoir is sufficiently weak), then this argument can be repeated for the union of the system and the reservoir, which can be thought of as a larger system, that remains isolated during the process. Then, the work performed on the system equals the difference between the total energies evaluated on the final and on the initial configuration. This difference does not depend on the switching time, therefore it can be evaluated in the $t_{\rm fin} - t_{\rm in} \to \infty$ limit, in which, as we discussed above, eq. (3) holds. Actually, one can prove that the assumption of weak coupling between the system and the reservoir can be relaxed, if the reservoir is mimicked by a Nosé-Hoover thermostat [Nose:1984au, Hoover:1985zz] or by a Metropolis algorithm, as is the case in Monte Carlo simulations.
Eq. (3) can also be derived using a master-equation approach, and assuming a completely stochastic (rather than deterministic) evolution for the trajectory [Jarzynski:1997ef]: let $P[\phi, t \to \phi', t']$ denote the conditional probability of finding a field configuration $\phi'$ at time $t'$, given that the system was in configuration $\phi$ at time $t$, and define the instantaneous transition rate from $\phi$ to $\phi'$ as
$$ R_t[\phi \to \phi'] = \left. \frac{\partial P[\phi, t \to \phi', t']}{\partial t'} \right|_{t' = t}. \qquad (9) $$
Note that this quantity depends on time only through the time-dependence of $\lambda$. Consider now an ensemble of stochastic, Markovian temporal evolutions (or trajectories) of the system, given a certain, fixed time-evolution of its parameters, $\lambda(t)$: the distribution density $\rho(\phi', t)$ of these trajectories in phase space obeys
$$ \frac{\partial \rho(\phi', t)}{\partial t} = \sum_\phi \rho(\phi, t)\, R_t[\phi \to \phi'] = \left( \rho\, \hat{R}_t \right)(\phi'), \qquad (10) $$
where the last equality is the definition of the $\hat{R}_t$ operator. The formal solution of eq. (10), with the boundary condition that at $t = t_{\rm in}$ the distribution density equals $\rho(\phi, t_{\rm in})$, can then be written as
$$ \rho(t) = \rho(t_{\rm in})\; \overleftarrow{\mathcal{T}} \exp\left( \int_{t_{\rm in}}^{t} {\rm d}s\; \hat{R}_s \right), \qquad (11) $$
where $\overleftarrow{\mathcal{T}}$ denotes ordering in time.
If $\lambda$ does not depend on time, then the evolution reduces to a standard, stationary Markov process: the distribution density becomes time-independent and the left-hand side of eq. (10) vanishes. In that case, the Markov process generates an ensemble of configurations distributed according to the canonical Boltzmann distribution for a system with Hamiltonian $H_\lambda$ at temperature $T$, i.e. $\rho(\phi) \propto \exp\left(-H_\lambda(\phi)/T\right)$, and
$$ \sum_\phi \exp\left( -\frac{H_\lambda(\phi)}{T} \right) R[\phi \to \phi'] = 0. \qquad (12) $$
Eq. (12) means that the Markov process satisfies detailed balance.111One can also assume the stronger condition that, when $\lambda$ is kept fixed, the Markov process always generates a canonical Boltzmann distribution, i.e. that for any, arbitrary, initial distribution $f$:
$$ \lim_{t \to \infty} \left( f\, e^{t \hat{R}} \right)(\phi) \propto \exp\left( -\frac{H_\lambda(\phi)}{T} \right), \qquad (13) $$
so that, for sufficiently long times, the Markov process always leads to thermalization of any distribution. Note that eq. (13) is stronger than and implies eq. (12). For our present purposes, however, only eq. (12) is needed.
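The stationarity condition expressed by eq. (12) is easy to verify explicitly in a small example of our own construction (not taken from the paper): a three-state system at $T = 1$, updated by a Metropolis rule.

```python
import numpy as np

# A self-contained illustration of the condition expressed by eq. (12):
# a three-state system with energies E at temperature T = 1, updated by a
# Metropolis rule that proposes each of the other two states with
# probability 1/2 (energies and probabilities are illustrative choices).
E = np.array([0.0, 0.7, 1.5])
pi = np.exp(-E)
pi /= pi.sum()                      # canonical Boltzmann distribution

P = np.zeros((3, 3))                # Metropolis transition matrix
for i in range(3):
    for j in range(3):
        if i != j:
            P[i, j] = 0.5 * min(1.0, np.exp(-(E[j] - E[i])))
    P[i, i] = 1.0 - P[i].sum()      # rejected proposals keep the state

# Detailed balance: pi_i P_ij = pi_j P_ji for every pair of states ...
flux = pi[:, None] * P
assert np.allclose(flux, flux.T)
# ... which implies that the Boltzmann distribution is stationary,
# pi P = pi, i.e. it is annihilated by the rate operator P - 1.
assert np.allclose(pi @ P, pi)
print("detailed balance and stationarity verified")
```

The same check applies, site by site, to the heat-bath and over-relaxation updates used in lattice simulations, whose transition kernels satisfy detailed balance with respect to the Boltzmann weight at fixed parameters.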
Let us assume that the initial distribution at time $t_{\rm in}$ is a canonical one, $\rho(\phi, t_{\rm in}) = \exp\left(-H_{\lambda_{\rm in}}(\phi)/T\right)/Z_{\lambda_{\rm in}}$, and let $\overline{\exp(-w/T)}_{\phi, t}$ denote the average value of $\exp(-w/T)$ over all trajectories going through a particular configuration $\phi$ at a generic time $t$. Introducing the distribution $g$ defined as
$$ g(\phi, t) = \rho(\phi, t)\; \overline{\exp\left(-\frac{w}{T}\right)}_{\phi, t}, \qquad (14) $$
the average of $\exp(-W/T)$ over all trajectories can be expressed as
$$ \overline{\exp\left(-\frac{W}{T}\right)} = \sum_\phi g(\phi, t_{\rm fin}). \qquad (15) $$
From its definition by eq. (14), it is easy to see that the time derivative of $g$ is given by
$$ \frac{\partial g(\phi, t)}{\partial t} = \left( g\, \hat{R}_t \right)(\phi) - \frac{1}{T} \frac{\partial H_{\lambda(t)}(\phi)}{\partial t}\, g(\phi, t), \qquad (16) $$
with the initial condition $g(\phi, t_{\rm in}) = \exp\left(-H_{\lambda_{\rm in}}(\phi)/T\right)/Z_{\lambda_{\rm in}}$, where we used the fact that the initial distribution is a canonical one.
According to eq. (12), $\hat{R}_t$ annihilates $c \exp\left(-H_{\lambda(t)}/T\right)$ (where $c$ is an arbitrary constant factor), hence the distribution $g(\phi, t) = \exp\left(-H_{\lambda(t)}(\phi)/T\right)/Z_{\lambda_{\rm in}}$ solves the evolution equation above with the correct initial condition, and one obtains
$$ \overline{\exp\left(-\frac{W}{T}\right)} = \sum_\phi g(\phi, t_{\rm fin}) = \frac{1}{Z_{\lambda_{\rm in}}} \sum_\phi \exp\left( -\frac{H_{\lambda_{\rm fin}}(\phi)}{T} \right) = \frac{Z_{\lambda_{\rm fin}}}{Z_{\lambda_{\rm in}}}, \qquad (20) $$
which proves Jarzynski’s theorem.
Note that, even though the distribution of configurations is a canonical one only at $t = t_{\rm in}$, in the last term of eq. (20) the canonical partition function of the system at the final value of $\lambda$ appears, and that this equation relates a genuinely out-of-equilibrium quantity (the average appearing in the first term) to a ratio of equilibrium quantities.
This proof of Jarzynski’s equality provides a natural way to implement a numerical evaluation of the free-energy difference appearing on the right-hand side of eq. (3) by Monte Carlo simulation:222A related idea underlies the annealed-importance-sampling technique [Neal:2001ai]. having defined a parameter evolution $\lambda(t)$, with $t \in [t_{\rm in}, t_{\rm fin}]$, that interpolates between the initial and final ensembles, and starting from a canonical distribution of configurations, one can drive the system out of equilibrium by varying $\lambda$ as a function of Monte Carlo time, letting the configurations evolve according to any Markov process that satisfies the detailed-balance condition expressed by eq. (12), and compute $\exp(-W/T)$ during this process. The average expressed by the bar notation on the left-hand side of eq. (3) is then obtained by averaging over a sufficiently large number of such trajectories. This is the numerical strategy that we use in this work, in which the Euclidean action plays the rôle of $H/T$.
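The strategy just described can be sketched in a minimal toy model of our own choosing (not the lattice theory): a single degree of freedom with $H_\lambda(x) = \lambda x^2/2$ at $T = 1$, for which the exact answer $\Delta F = \frac{1}{2}\ln(\lambda_{\rm fin}/\lambda_{\rm in})$ is known. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Minimal sketch of the Monte Carlo strategy described in the text, in a toy
# model (our own example): H_lam(x) = lam * x**2 / 2 at T = 1, with a linear
# protocol lam: 1 -> 2 and a detailed-balance (Metropolis) update.
lam_in, lam_fin, n_steps, n_traj = 1.0, 2.0, 100, 4000
lams = np.linspace(lam_in, lam_fin, n_steps + 1)

x = rng.normal(0.0, 1.0 / np.sqrt(lam_in), size=n_traj)  # canonical start
w = np.zeros(n_traj)
for n in range(n_steps):
    # work done on each copy of the system by the parameter switch:
    w += 0.5 * (lams[n + 1] - lams[n]) * x**2
    # a few Metropolis hits at the new lam; the configurations lag behind
    # equilibrium, i.e. the system is genuinely driven out of equilibrium:
    for _ in range(5):
        xp = x + rng.uniform(-1.0, 1.0, size=n_traj)
        acc = rng.random(n_traj) < np.exp(-0.5 * lams[n + 1] * (xp**2 - x**2))
        x = np.where(acc, xp, x)

dF_est = -np.log(np.mean(np.exp(-w)))      # Jarzynski estimate of Delta F
dF_exact = 0.5 * np.log(lam_fin / lam_in)
print(dF_est, dF_exact)
```

Within statistical errors the two numbers agree; in the lattice computation the role of $x$ is played by the gauge-field configuration and the role of $H/T$ by the Euclidean action.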
We close this section with a word of caution. The computational efficiency of this method may strongly depend on the properties of the system under consideration: in particular, physical systems with a very large number of degrees of freedom (such as quantum field theories regularized on a spacetime lattice) have sharply peaked statistical distributions, hindering an accurate sampling of the configuration-space regions that give the dominant contributions to $\overline{\exp(-W/T)}$. If the spread of the $W$ values obtained in different trajectories is much larger than the scale of typical thermal fluctuations (or of typical quantum fluctuations, for lattice simulations of quantum field theory), then $\overline{\exp(-W/T)}$ is dominated by configurations in which the value of $W$ is much smaller than $\overline{W}$, and an accurate determination of $\overline{\exp(-W/T)}$ may require a prohibitively large number of trajectories. Note, however, that, in the numerical calculation of free-energy differences by eq. (3), there exists a remarkable difference between the rôles of the initial and final ensembles: one assumes that the initial configurations are thermalized, while the field configurations at all $t > t_{\rm in}$ (including, in particular, at $t = t_{\rm fin}$) are out of equilibrium. This asymmetry between the initial and target ensembles implies that, if the Monte Carlo determination of $\Delta F$ is biased by effects due to limited statistics, then carrying out the same calculation in the opposite direction (from $\lambda_{\rm fin}$ to $\lambda_{\rm in}$) will, in general, give a result different from $-\Delta F$. Conversely, verifying that a “direct” and a “reverse” computation give consistent results provides a powerful test of the correctness of the calculation. This is a test that all results of our present work pass with success.
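The direct-versus-reverse consistency test can be sketched in the same kind of toy model used above for illustration (our own example, not the lattice code of this work): here a helper function estimates the free-energy difference along an arbitrary protocol, which is then run in both directions.

```python
import numpy as np

rng = np.random.default_rng(11)

# Sketch of the "direct vs. reverse" test, for the illustrative Hamiltonian
# H_lam(x) = lam * x**2 / 2 at T = 1 (our own toy example); the exact result
# for the path below is Delta F = 0.5 * log(2).
def jarzynski_dF(lams, n_traj=4000, n_hits=5):
    """Estimate F(lams[-1]) - F(lams[0]) from non-equilibrium trajectories."""
    x = rng.normal(0.0, 1.0 / np.sqrt(lams[0]), size=n_traj)  # equilibrated
    w = np.zeros(n_traj)
    for n in range(len(lams) - 1):
        w += 0.5 * (lams[n + 1] - lams[n]) * x**2   # accumulate the work
        for _ in range(n_hits):                     # Metropolis at the new lam
            xp = x + rng.uniform(-1.0, 1.0, size=n_traj)
            acc = rng.random(n_traj) < np.exp(-0.5 * lams[n + 1] * (xp**2 - x**2))
            x = np.where(acc, xp, x)
    return -np.log(np.mean(np.exp(-w)))

path = np.linspace(1.0, 2.0, 101)
dF_direct = jarzynski_dF(path)           # lam: 1 -> 2
dF_reverse = jarzynski_dF(path[::-1])    # lam: 2 -> 1
print(dF_direct, -dF_reverse)
```

With adequate statistics the two runs return opposite estimates of the same free-energy difference; a statistically significant mismatch would signal insufficient sampling of the dominant configurations.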
3 Lattice calculation of the equation of state
In this work, we investigate the behavior of QCD at finite temperature, and compute the equation of state via lattice simulations using an algorithm based on Jarzynski’s equality eq. (3).
In particular, we focus on the pure-glue sector, which captures the main features of thermal QCD at the qualitative level: the existence of a confining phase at low temperatures, in which the physical states are massive color singlets, and of a deconfined phase at high temperatures, in which chromoelectrically charged, light, elementary particles interact with each other through screened, long-range interactions.333We also remind the reader of some notable differences between pure-glue Yang-Mills theory and real-world QCD with dynamical quarks. In particular, in the pure-glue theory, the confining and deconfined phases are separated by a first-order phase transition taking place at a critical temperature $T_c$ which, when converted into physical units, is about 300 MeV. By contrast, in QCD with physical quarks, the change of state from the confining to the deconfined regime is rather a smooth crossover, taking place at a lower temperature, around 155 MeV. However, it has been recently argued that the pure Yang-Mills dynamics could nevertheless be relevant for certain aspects of the physics of heavy-ion-collision experiments [Stoecker:2015zea, Stocker:2015nka]. Thermal screening of both electric and magnetic field components is, indeed, a characterizing feature of the deconfined phase of non-Abelian gauge theories, which defines it as a “plasma”.
Asymptotic freedom implies that, when the temperature $T$ is very high, the physical coupling $g$ at the scale of thermal excitations becomes small; in this limit, chromoelectric fields are screened on distances inversely proportional to $gT$, while chromomagnetic fields are screened on lengths inversely proportional to $g^2 T$, so that the theory develops a well-defined hierarchy of scales, between “hard” (of the order of $T$), “soft” (of the order of $gT$), and “ultra-soft” (of the order of $g^2 T$) modes, and this separation of scales allows for a systematic treatment in terms of effective theories [Ginsparg:1980ef, Appelquist:1981vg, Nadkarni:1982kb, Braaten:1989mz, Braaten:1995jr, Braaten:1995cm, Kajantie:1995dw, Kajantie:2002wa]. The appearance of the soft and ultra-soft scales is due to the existence of infra-red divergences, which lead to a breakdown of the correspondence between the number of loops in Feynman diagrams and the order in $g$ in perturbative calculations [Linde:1980ts, Gross:1980br], and to the intrinsically non-perturbative nature of long-wavelength modes at all temperatures. Moreover, for plasma excitations on the energy scale of the deconfinement temperature, the physical coupling is not very small, so that the deconfined state of matter cannot be reliably modelled as a gas of free partons.
For these reasons, the study of the equation of state of QCD (or of its gluonic sector, on which we focus here) close to deconfinement requires non-perturbative techniques. We carry out this study by discretizing the Euclidean action of $\mathrm{SU}(N)$ Yang-Mills theory on a hypercubic lattice of spacing $a$, spatial volume $V = (a N_s)^3$ and extent $a N_t$ along the compactified Euclidean-time direction, using the Wilson action [Wilson:1974sk]
$$ S_{\rm W} = -\frac{\beta_g}{N} \sum_x \sum_{0 \le \mu < \nu \le 3} {\rm Re\,Tr\,} U_{\mu\nu}(x), \qquad (21) $$
where $\beta_g = 2N/g_0^2$, with $g_0$ the bare coupling, and
$$ U_{\mu\nu}(x) = U_\mu(x)\, U_\nu(x + a\hat{\mu})\, U_\mu^\dagger(x + a\hat{\nu})\, U_\nu^\dagger(x). \qquad (22) $$
The partition function of the lattice theory is given by
$$ Z = \int \prod_{x, \mu} {\rm d}U_\mu(x)\; e^{-S_{\rm W}} \qquad (23) $$
(where ${\rm d}U_\mu(x)$ is the Haar measure for the $U_\mu(x)$ matrix defined on the oriented link from site $x$ to site $x + a\hat{\mu}$) and expectation values are defined as
$$ \langle A \rangle = \frac{1}{Z} \int \prod_{x, \mu} {\rm d}U_\mu(x)\; A\, e^{-S_{\rm W}}. \qquad (24) $$
The integrals on the right-hand side of eq. (24) are estimated numerically, by Monte Carlo integration, from a sample of field configurations produced in a Markov chain; our update algorithm combines one heat-bath [Creutz:1980zw, Kennedy:1985nu] and five to ten over-relaxation steps [Adler:1981sn, Brown:1987rra] on the link variables of the whole lattice: this defines a “sweep”. The uncertainties in these simulation results are estimated with the jackknife method [bootstrap_jackknife_book].
The physical temperature of the system, $T = 1/(a N_t)$, is varied by varying the lattice spacing $a$, which, in turn, can be continuously tuned by varying $\beta_g$: to this purpose, we set the scale of our lattice simulations by means of the Sommer scale $r_0$ [Sommer:1993ce], as determined in ref. [Necco:2001xg]. The critical temperature $T_c$ is related to $r_0$ through the value of $r_0 T_c$ computed in ref. [Francis:2015lha].444Note that, if $r_0$ is assumed to be of the order of $0.5$ fm (a figure consistent with phenomenological potential models for QCD), then the critical deconfinement temperature in Yang-Mills theory is almost twice as large as in QCD. The fact that deconfinement takes place at lower temperatures for theories with a larger number of colored degrees of freedom in the deconfined phase [Karsch:2000ps, Lucini:2005vg, Lucini:2012wq] is consistent with a qualitative argument, based on the mismatch between the number of degrees of freedom at low and at high temperatures (see also ref. [Pepe:2005sz]).
Our lattice determination of the equation of state rests on the following thermodynamic identity, relating the pressure $p$ to the free energy per unit volume $f$,
$$ p = -f = \frac{T}{V} \ln Z, \qquad (25) $$
which holds in the thermodynamic limit, $V \to \infty$, and receives negligible corrections for the $N_s$ and $N_t$ values considered here [DeTar:1985kx, Elze:1988zs, Gliozzi:2007jh, Panero:2008mg, Meyer:2009kn]. Following the algorithmic strategy discussed in ref. [Caselle:2016wsw] for a benchmark study, we study how the dimensionless $p/T^4$ ratio varies as a function of the temperature, starting from an initial temperature $T_{\rm in}$:
$$ \frac{p(T)}{T^4} - \frac{p(T_{\rm in})}{T_{\rm in}^4} = \left( \frac{N_t}{N_s} \right)^3 \ln \frac{Z(T)}{Z(T_{\rm in})}. \qquad (26) $$
In our simulations, we compute the right-hand side of eq. (26) by means of Jarzynski’s equality, using $\beta_g$ (by tuning which, as stated above, the temperature can be varied continuously) as the $\lambda$ parameter: $\beta_g$ is let evolve linearly with the Monte Carlo time between the initial ($\beta_{g, \rm in}$) and final ($\beta_{g, \rm fin}$) values, corresponding to $T_{\rm in}$ and $T$, respectively. More precisely, the $[\beta_{g, \rm in}, \beta_{g, \rm fin}]$ interval is discretized in $N$ equal intervals of width $\Delta\beta_g = (\beta_{g, \rm fin} - \beta_{g, \rm in})/N$, so that $\beta_g(n) = \beta_{g, \rm in} + n\, \Delta\beta_g$, with $0 \le n \le N$. Finally, one should remember that the $p(T)/T^4$ and $p(T_{\rm in})/T_{\rm in}^4$ terms appearing on the left-hand side of eq. (26) also include contributions from quantum (non-thermal) fluctuations, that depend on the lattice cutoff and diverge in the continuum limit. These contributions can be removed from eq. (26) by evaluating the quantity appearing on the right-hand side of eq. (26) on a lattice of large hypervolume $(a N_0)^4$ at $T \simeq 0$ and at the same $\beta_g$ values. This leads us to define the physical, renormalized pressure as
$$ \frac{p(T)}{T^4} - \frac{p(T_{\rm in})}{T_{\rm in}^4} = \left( \frac{N_t}{N_s} \right)^3 \left[ \ln \overline{\exp\left( -\Delta S_T \right)} - \frac{N_s^3 N_t}{N_0^4} \ln \overline{\exp\left( -\Delta S_0 \right)} \right], \qquad (27) $$
where $\Delta S$ is the variation in Euclidean action during a non-equilibrium trajectory in configuration space:
$$ \Delta S = \sum_{n=0}^{N-1} \left[ S_{\beta_g(n+1)}\left( \phi_n \right) - S_{\beta_g(n)}\left( \phi_n \right) \right], \qquad (28) $$
with $\phi_n$ denoting the field configuration after the $n$-th step of the trajectory;
the $T$ and $0$ subscripts respectively indicate that this quantity is evaluated on a finite-temperature ($N_s^3 \times N_t$) or on a zero-temperature ($N_0^4$) lattice, and the bar denotes the average over a sample of non-equilibrium trajectories, which start from canonically distributed initial configurations $\phi_0$. Note that the summands on the right-hand side of eq. (28) are given by the action difference induced by a variation of $\beta_g$ on the same field configuration. In practice, in order to scan a wide temperature range, from the confining to the deconfined phase, it is more convenient to divide the temperature interval in a number (that we denote as $N_{\rm int}$) of smaller intervals. In particular, we choose these intervals in such a way that they do not stretch across different phases: this allows us to get rid of potential difficulties that might arise in the numerical sampling of configurations, when the algorithm tries to probe the physics at $T > T_c$, by driving configurations in the confining phase out of equilibrium, without letting them thermalize.555A different computational strategy, that would allow the algorithm to avoid the critical point, consists in deforming the action by adding operators that could turn the deconfinement transition into a crossover (e.g. traces of Wilson lines in the Euclidean-time direction), and varying their coefficients to turn them on only near the critical temperature. This numerical strategy, however, is more complex, and we did not explore it in the present work. Dividing the temperature range of interest in a different number of intervals that do not cross the phase transition should lead to the same physical results, but has some effect on the numerical efficiency of the simulation algorithm. In particular, smaller values of $N_{\rm int}$ (i.e. broader intervals in $\beta_g$) typically require larger values of $N$ and more statistics.
On the other hand, a larger $N_{\rm int}$ implies a larger overhead for the thermalization of the initial configurations at the start of each transformation (in this work, we performed full thermalization sweeps both at zero and at finite temperature).
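The structure of the protocol of eq. (28), including the splitting of the full parameter range into intervals with a fresh thermalization at the start of each one, can be illustrated in a solvable toy "lattice" model of our own construction (all names and parameter values are illustrative; the coupling $\beta$ plays the role that $\beta_g$ plays in the text):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "lattice" sketch of the protocol of eq. (28): V independent Gaussian
# sites with action S_beta(phi) = beta * sum(phi**2), for which
# ln Z(beta) = (V/2) * ln(pi/beta) exactly (our own solvable example).
V, n_steps, n_traj = 16, 50, 2000

def ln_Z_ratio(beta_in, beta_fin):
    """ln[Z(beta_fin)/Z(beta_in)] from the exponential average of -Delta S."""
    betas = np.linspace(beta_in, beta_fin, n_steps + 1)
    # thermalized initial configurations, phi_i ~ N(0, 1/(2*beta_in)):
    phi = rng.normal(0.0, np.sqrt(0.5 / beta_in), size=(n_traj, V))
    dS = np.zeros(n_traj)
    for n in range(n_steps):
        # action difference from varying beta on the *same* configurations:
        dS += (betas[n + 1] - betas[n]) * np.sum(phi**2, axis=1)
        # partial (out-of-equilibrium) update satisfying detailed balance
        # at the new beta: a reversible autoregressive Gaussian kernel
        s = np.sqrt(0.5 / betas[n + 1])
        phi = 0.5 * phi + np.sqrt(0.75) * s * rng.normal(size=phi.shape)
    return np.log(np.mean(np.exp(-dS)))

# Split the full beta range into two intervals, re-thermalizing at the start
# of each one, and sum the resulting ln Z ratios:
est = ln_Z_ratio(1.0, 1.2) + ln_Z_ratio(1.2, 1.4)
exact = 0.5 * V * np.log(1.0 / 1.4)
print(est, exact)
```

The sum of the logarithms over the intervals reproduces the exact result within statistical errors, mirroring how the $\ln Z$ differences over sub-intervals of temperature are combined in the actual calculation.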
We run our simulations on lattices with different values of $N_t$ and of the aspect ratio $N_s/N_t$, and for various choices of $N$ and of the number of trajectories, according to the parameters listed in table LABEL:tab:parameters, where the total number of configurations used for each combination of parameters is given by the sum, over all intervals, of the product of the number of trajectories and the number of $\beta_g$ steps.