
The well-established theories of the strong interaction (QCD) and the electroweak theory determine the evolution of the early universe. The Hubble rate and the relationship between the age of the universe and its temperature $T$ are determined by the equation of state (EoS). Since QCD is highly non-perturbative, the calculation of the EoS is a particularly difficult task. The only systematic way to carry out this calculation is based on lattice QCD. Here we present complete results of such a calculation. QCD, unlike the rest of the Standard Model, is surprisingly symmetric under time reversal, leading to a serious fine-tuning problem. The most attractive solution for this is a new particle, the axion, a promising dark matter candidate. Assuming that axions are the dominant component of dark matter we determine the axion mass. The key quantities of the calculation are the previously mentioned EoS and the temperature dependence of the topological susceptibility $\chi(T)$ of QCD, a quantity notoriously difficult to calculate. Determining $\chi(T)$ was believed to be impossible in the most relevant high temperature region; however, an understanding of the deeper structure of the vacuum by splitting it into different sectors and re-defining the fermionic determinants has led to its fully controlled calculation. Thus, our two-fold prediction helps most cosmological calculations to describe the evolution of the early universe by using the fully controlled EoS, and may be decisive for guiding experiments looking for dark matter axions. In the next couple of years, it should be possible to confirm or rule out post-inflation axions experimentally \citeSI, depending on whether or not the axion's mass is found to be as predicted here. Alternatively, in a pre-inflation scenario our calculation determines the universal axionic angle that corresponds to the initial condition of our universe.

The Euclidean Lagrangian for the theory of the strong interaction is $\mathcal{L} = \frac{1}{2g^2}\,\mathrm{Tr}\,F_{\mu\nu}F_{\mu\nu} + \sum_f \bar\psi_f\left[\gamma_\mu(\partial_\mu + iA_\mu) + m_f\right]\psi_f$, where $g$ is the QCD coupling constant, the $m_f$ represent the quark masses and $f$ runs over the four quark flavours. In QCD $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + i[A_\mu, A_\nu]$, where $A_\mu$ is a Hermitian $3\times 3$ matrix field denoting the gauge fields. The $\psi_f$ are Dirac-spinor fields representing the quarks and carry a “colour” index, which runs from 1 to 3. The form of the action is (almost) unambiguously defined by the condition of gauge invariance and renormalizability. For many quantities, the only systematic way to solve this theory is to discretize it on a Euclidean space-time lattice, determine the partition function stochastically, and use smaller and smaller lattice spacings to extrapolate to the continuum limit (the limit of vanishing lattice spacing). In this paper, we use this lattice formulation to determine the EoS of QCD and the topological susceptibility from low temperatures up to very high temperatures. Finally, as an application, we combine them to present our results on the dark matter axion mass.

Our most important qualitative knowledge about the QCD transition is that it is an analytic crossover \citeAoki:2006we, thus no cosmological relics are expected. Outside the narrow temperature range of the transition, we know that the Hubble rate and the relationship between temperature and the age of the early universe can be described by a radiation-dominated EoS. The calculation of the EoS is a challenging task \citeSI; the determination of the continuum limit at large temperatures is particularly difficult.

In our lattice QCD setup we used 2+1 or 2+1+1 flavours of staggered fermions with four steps of stout-smearing. For the gluonic sector tree-level improved gauge fields were used. The quark masses are set to their physical values; however, we use degenerate up and down quark masses, and the small effect of isospin breaking is included analytically. The continuum limit is taken using three, four or five lattice spacings with temporal lattice extensions of $N_t$=6, 8, 10, 12 and 16. In addition to dynamical staggered simulations we also used dynamical overlap simulations with 2+1 flavours down to physical masses. The inclusion of an odd number of flavours was a non-trivial task; however, this setup was required for the determination of $\chi$ at large temperatures in the several GeV region \citeSI.

Charm quarks start to contribute to the equation of state above 300 MeV. Therefore, up to 250 MeV we used 2+1 flavours of dynamical quarks. Connecting the 2+1 and the 2+1+1 flavour results at 250 MeV can be done smoothly. For large temperatures the step-scaling method for the equation of state of Ref. \citeBorsanyi:2012ve was applied. We determined the EoS with complete control over all sources of systematics all the way to the GeV scale.

Two different methods were used to set the overall scale in order to determine the equation of state. One of them used the pion decay constant, the other applied the $w_0$ scale \citeBorsanyi:2012zs. 32 different analyses (e.g. the two different scale setting procedures, different interpolations, keeping or omitting the coarsest lattice) entered our histogram method \citeDurr:2008zz,Borsanyi:2014jba to estimate systematic errors. We also calculated the goodness of fit $Q$ and weights based on the Akaike information criterion AICc \citeBorsanyi:2014jba, and we looked at both the unweighted and the weighted results. This provided the systematic errors on our findings. In the low temperature region we compared our results with the prediction of the Hadron Resonance Gas (HRG) approximation and found perfect agreement. This HRG approach is used to parametrize the equation of state at small temperatures. In addition, we used the hard thermal loop approach \citeAndersen:2010wu to extend the EoS to high temperatures.
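The AICc weighting of the analyses can be sketched in a few lines. This is a minimal illustration, assuming Gaussian errors so that $-2\log L \approx \chi^2$; the function names and the fit numbers below are hypothetical, not taken from our analyses:

```python
import math

def aicc(chi2, n_params, n_points):
    """Small-sample corrected Akaike information criterion:
    AICc = chi^2 + 2k + 2k(k+1)/(n - k - 1)."""
    k, n = n_params, n_points
    return chi2 + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def akaike_weights(fits):
    """Normalized weights exp(-AICc/2) for a list of (chi2, k, n) fits."""
    a = [aicc(*f) for f in fits]
    a_min = min(a)  # subtract the minimum for numerical stability
    w = [math.exp(-0.5 * (x - a_min)) for x in a]
    s = sum(w)
    return [x / s for x in w]

# Three hypothetical analyses (chi^2, number of parameters, number of points):
fits = [(10.2, 3, 12), (9.8, 4, 12), (15.0, 3, 12)]
weights = akaike_weights(fits)
```

The central value of a weighted analysis is then the weight-averaged result, and the spread of the weighted distribution contributes to the systematic error.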

In order to have a complete description of the thermal evolution of the early universe, we supplement our QCD calculation of the EoS by including the rest of the Standard Model particles (leptons, bottom and top quarks, $W$, $Z$ and Higgs bosons) and results on the electroweak transition \citeSI. As a consequence, the final result for the EoS covers four orders of magnitude in temperature, from the MeV scale to several hundred GeV.
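To illustrate how a tabulated $g_\rho(T)$ enters cosmology, the sketch below evaluates the radiation-dominated Hubble rate from $H^2 = \rho/(3M_{\rm Pl}^2)$ with $\rho = (\pi^2/30)\,g_\rho T^4$. The table entries are rough placeholder values, not our tabulated EoS:

```python
import math

M_PL = 2.435e18  # reduced Planck mass in GeV

def hubble_rate(T, g_rho):
    """H(T) in GeV for a radiation-dominated universe:
    H = pi * sqrt(g_rho / 90) * T^2 / M_Pl."""
    return math.pi * math.sqrt(g_rho / 90.0) * T**2 / M_PL

# Placeholder effective degrees of freedom at a few temperatures (in GeV);
# a real application would interpolate the tabulated g_rho(T) of [SI].
g_rho_table = {0.001: 10.75, 0.15: 20.0, 1.0: 75.0, 100.0: 106.75}

H_qcd = hubble_rate(0.15, g_rho_table[0.15])  # near the QCD transition
```

The age-temperature relationship follows by integrating $dt = -dT/(T H(T))$ up to entropy-conservation corrections from the running of $g_s(T)$.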



Figure \thefigure: \dbl The effective degrees of freedom for the energy density ($g_\rho$) and for the entropy density ($g_s$). The line width is chosen to be the same as our error bars in the vicinity of the QCD transition, where we have the largest uncertainties. At temperatures $T<1$ MeV the equilibrium equation of state becomes irrelevant for cosmology because of neutrino decoupling. The EoS comes from our calculation up to $T=100$ GeV. At higher temperatures the electroweak transition becomes relevant and we use the results of Ref. [Laine:2015kra]. Note that for temperatures around the QCD scale non-perturbative QCD effects reduce $g_\rho$ and $g_s$ by 10-15% compared to the ideal gas limit, an approximation which is often used in cosmology. For useful parametrizations for the QCD regime or for the whole temperature range see [SI].

Figure 1 shows the result for the effective numbers of degrees of freedom as a function of temperature. The widths of the lines represent the uncertainties. The tabulated data are also presented in [SI]. Both the figure and the data can be used (similarly to Figure 22.3 of Ref. [Agashe:2014kda]) to describe the Hubble rate and the relationship between temperature and the age of the universe over a very broad temperature range. We now turn to the determination of another cosmologically important quantity, the topological susceptibility $\chi(T)$. In general the Lagrangian of QCD should have a term proportional to $\epsilon_{\mu\nu\rho\sigma}\mathrm{Tr}\,F_{\mu\nu}F_{\rho\sigma}$, the four-dimensional integral of which is called the topological charge. This term violates the combined charge-conjugation and parity symmetry (CP). The surprising experimental observation is that the proportionality factor of this term is unnaturally small. This is known as the strong CP problem. A particularly attractive solution to this fundamental problem is the so-called Peccei-Quinn mechanism [Peccei:1977hh]. One introduces an additional (pseudo-)scalar U(1) symmetric field. The underlying Peccei-Quinn U(1) symmetry is spontaneously broken, which can happen pre-inflation or post-inflation, and an axion field acts as a (pseudo-)Goldstone boson of the broken symmetry [Weinberg:1977ma, Wilczek:1977pj]. Due to the chiral anomaly the axion also couples to $\epsilon_{\mu\nu\rho\sigma}\mathrm{Tr}\,F_{\mu\nu}F_{\rho\sigma}$. As a consequence, the original potential of the axion field with its U(1) symmetry breaking gets tilted and has its minimum where the CP-violating term vanishes. This sets the coefficient in front of $\epsilon_{\mu\nu\rho\sigma}\mathrm{Tr}\,F_{\mu\nu}F_{\rho\sigma}$ to zero and solves the strong CP problem. Furthermore, the free parameter $f_A$ determines the mass of the axion at vanishing or non-vanishing temperatures through $m_A^2 f_A^2 = \chi(T)$. Here $\chi(T)$ is the susceptibility of the topological charge. We determined its value at $T=0$ in the isospin symmetric case [SI], where the first error is statistical, the second is systematic. For the error budget see [SI].
Isospin breaking results in a small, 12% correction to this value [SI]. In an earlier study of ours [Borsanyi:2015cka] we looked at $\chi(T)$ in the quenched approximation. We provided a result within the quenched framework and reached a temperature about one half to one third of that necessary for axion cosmology (a similar study with somewhat less control over the systematics is [Berkowitz:2015aua]). To obtain a complete result one should use dynamical quarks with physical masses. Dynamical configuration production is, however, about three orders of magnitude more expensive, and the values of $\chi$ are several orders of magnitude smaller than in the quenched case. Due to cutoff effects [SI] the continuum limit is far more difficult to carry out in dynamical QCD than in the pure gauge theory [Borsanyi:2015cka]. All in all we estimate that the brute-force approach to providing a complete result on $\chi(T)$ in the relevant temperature region would be at least ten orders of magnitude more expensive than the result of [Borsanyi:2015cka]. Ref. [Bonati:2015vqz] used this approach; however, it turned out to be quite difficult. As we will show, our result for $\chi(T)$ is many orders of magnitude smaller than that of Ref. [Bonati:2015vqz] in the cosmologically relevant temperature region. Whilst writing up our results, a paper [Petreczky:2016vrs] appeared with findings similar to ours, for which two remarks are in order. A common set of criteria for assessing the reliability of lattice results was set by FLAG and published in Ref. [Aoki:2013ldr]. They introduced for some quantities a measure indicating how far the continuum extrapolated result is from the lattice data. Firstly, in order to control the continuum extrapolation they demand a maximum difference of a few percent between the finest lattice data and the continuum extrapolated result. (In other words, no extrapolations with large cutoff effects are allowed.) In Ref. 
[Petreczky:2016vrs] the cutoff effects on the direct topological susceptibility measurements are orders of magnitude larger than the FLAG requirement (even for secondary quantities, such as the exponent $b$ of the temperature dependence, they are large), whereas here we show how to keep these cutoff effects on $\chi$ at the $\mathcal{O}(10\%)$ level. Secondly, FLAG requires three or more lattice spacings for controlled continuum extrapolation results. When attempting to determine $\chi$ up to approximately $T=3$ GeV, Ref. [Petreczky:2016vrs] fulfills this condition only in a narrow temperature region between 250 and 330 MeV. However, no direct measurements are carried out at large temperatures, only an extrapolation using the behaviour in this narrow range, whereas we show how to determine $\chi$ all the way into the physically relevant 3 GeV region. Since cutoff effects are huge and the CPU demand is tremendous, one faces significant physics challenges and needs dramatic algorithmic developments in a large-scale simulation setup in order to provide a complete answer for $\chi(T)$, and thus resolve the tension between, e.g., the two References [Bonati:2015vqz] and [Petreczky:2016vrs] (or other analytic techniques). The huge computational demand and the physics issues behind the determination of $\chi(T)$ have two main sources: a.) the tiny topological susceptibility needs extremely long simulation threads to observe enough changes of the topological sectors, and b.) in high temperature lattice QCD the most widely used actions are based on staggered quarks, which have very large cutoff effects when dealing with topological observables. We solve both problems and determine the continuum result for $\chi(T)$ for the entire temperature range of interest. ad a. We summarize our solution to problem a, which we call “fixed integration” (for a detailed discussion see [SI]). As a first step one takes a starting-point in the quark mass-temperature space, at which it is possible to determine $\chi$ using conventional methods. 
One such point could be e.g. the quenched theory at some low temperature (below the phase transition); alternatively, relatively large quark masses at not too high temperatures can also be used in practice. Next we determine, at this point, the weights of the various topological sectors in a given volume. By taking derivatives with respect to the gauge coupling and the quark masses, and then integrating in these parameters, one can calculate the change in the weights of the various topological sectors all the way to a new point. This provides the weights of the topological sectors, and therefore $\chi$, at the new point. The technique is similar to the so-called integral method used to determine the equation of state [SI]. The CPU costs of the conventional technique grow by many orders of magnitude towards high temperatures, whereas those of the new “fixed integration” method grow only mildly; the gain in CPU time is tremendous. This efficient technique is used to obtain the final result for $\chi(T)$. Since we work with continuum extrapolated quantities both for the ratios at the starting-point as well as for their changes, one can in principle use any action in the procedure; here we use overlap [SI] and/or staggered actions. ad b. We summarize our solution to problem b, which we call “eigenvalue reweighting”. According to the index theorem the continuum Dirac operator has exact zero modes with definite chirality on gauge configurations with topological charge $Q\neq 0$. Staggered fermions lack these exact zero modes, which introduces large to very large lattice artefacts when determining $\chi$. We developed a reweighting technique which resolves this problem. The method is based on determining the lowest eigenvalues of the Dirac operator and using the ratios of the eigenvalues and the mass in a reweighting procedure. The basic idea is to substitute the smallest eigenvalue (the would-be zero mode) of a given configuration of a given topological charge with its continuum value and reweight accordingly. 
Note that for a configuration with a zero mode, the smallest eigenvalue of the massive Dirac operator in the continuum equals the quark mass, whereas for staggered quarks it can be significantly higher. We correct for this by reweighting the configurations with the ratio of the lowest continuum eigenvalue and the corresponding staggered eigenvalue. The topological charge is measured by the Wilson-flow method. The details and a discussion of the validity of the technique are presented in [SI]. Overlap fermions do not suffer from this problem and have exact zero modes. Even though they are computationally too expensive to be used for the entire project (e.g. the integration in the gauge coupling requires tens of millions of trajectories [SI]), the mass integration down to the physical point has been done with overlap quarks.
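The reweighting idea above can be illustrated with a toy snippet. This is a deliberately simplified sketch (one weight factor per would-be zero mode, a single flavour, no determinant details), and the ensemble numbers are invented for illustration, not lattice data:

```python
def reweighted_chi(configs, m, volume):
    """Toy topological susceptibility <Q^2>/V where each configuration
    with charge Q is reweighted by (m / lambda_min)^(2|Q|): the staggered
    near-zero eigenvalue lambda_min is replaced by its continuum value m."""
    num = den = 0.0
    for Q, lam_min in configs:
        w = (m / lam_min) ** (2 * abs(Q)) if Q != 0 else 1.0
        num += w * Q * Q
        den += w
    return num / den / volume

# Invented ensemble: (topological charge, smallest staggered eigenvalue)
ens = [(0, 0.05), (1, 0.02), (-1, 0.03), (0, 0.06), (1, 0.04)]
chi_rw = reweighted_chi(ens, m=0.01, volume=100.0)
naive = sum(Q * Q for Q, _ in ens) / len(ens) / 100.0  # unweighted estimate
```

Since the staggered near-zero eigenvalues exceed the quark mass, the weights suppress the contribution of the charged sectors, mimicking how the artificially large staggered susceptibility is reduced towards its continuum value.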



Figure \thefigure: \dbl Continuum limit of $\chi(T)$. The inset shows the behaviour around the transition temperature. The width of the line represents the combined statistical and systematic errors. The dilute instanton gas approximation (DIGA) predicts a power behaviour $\chi(T)\propto T^{-b}$ with $b$=8.16, which is confirmed by the lattice result for temperatures above a few GeV.


Figure \thefigure: \dbl Relation between the axion’s mass $m_A$ and the initial angle $\theta_0$ in the pre-inflation scenario. The post-inflation scenario corresponds to averaging over $\theta_0$, with a strict lower bound on the axion’s mass of $m_A$=28(2) $\mu$eV. The thick red line shows our result for the axion’s mass in the post-inflation case, e.g. $m_A$=50(4) $\mu$eV if one assumes that axions from the misalignment mechanism contribute 50% to dark matter. Our final estimate is $m_A$=50-1500 $\mu$eV (the upper bound assumes that the misalignment mechanism contributes only 1%, the rest coming from other sources, e.g. topological defects). For an experimental setup to detect post-inflationary axions see [SI]. The slight bend in the curve corresponds to an oscillation temperature at the QCD transition [Aoki:2009sc, Borsanyi:2010bp].

By combining these methods one can determine $\chi(T)$ with all systematic uncertainties controlled (see Figure 2). The several-thousand-percent cutoff effects of previous approaches are thereby removed, leaving a very mild continuum extrapolation to be performed. In addition, the direct determination of $\chi$ all the way up to 3 GeV means that one does not have to rely on the dilute instanton gas approximation (DIGA). Note that a posteriori the exponent predicted by DIGA turned out to be compatible with our finding, but its prefactor is off by an order of magnitude, similarly to the quenched case. As a possible application of these two cosmologically relevant lattice QCD results, we show how to calculate the amount of axionic dark matter and how it can be used to determine the mass of the axion. As we have seen, $\chi(T)$ is a rapidly decreasing function of the temperature. Thus, at high temperature the axion mass $m_A$ (which is proportional to $\sqrt{\chi(T)}$) is small; in fact, much smaller than the Hubble expansion rate $H$ of the universe at that time or temperature. The axion does not yet feel the tilt in the Peccei-Quinn Mexican-hat potential; it is effectively massless and frozen by the Hubble friction. As the universe expands the temperature decreases, $\chi(T)$ increases and the axion mass also increases. In the meantime, the Hubble expansion rate, given by our equation of state, decreases. As the temperature decreases to $T_{\rm osc}$ the axion mass becomes of the same order as the Hubble rate ($T_{\rm osc}$ is defined by $m_A(T_{\rm osc}) = 3H(T_{\rm osc})$). Around this time the axion field rolls down the potential, starts to oscillate around the tilted minimum and the axion number density increases to a nonzero value; thus axions are produced as dark matter. The details of this production mechanism, usually called misalignment, are quite well known (see e.g. [Wantz:2009it, SI]). 
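The defining condition $m_A(T_{\rm osc}) = 3H(T_{\rm osc})$ can be solved numerically once $\chi(T)$ and the EoS are given. The sketch below uses a DIGA-like power law and a fixed $g_\rho$ with invented normalizations ($\chi_0$, $T_0$, $f_A$), purely to illustrate the procedure, not our actual numbers:

```python
import math

M_PL = 2.435e18  # reduced Planck mass, GeV

def hubble(T, g_rho=60.0):
    # Radiation-dominated H(T); g_rho frozen to a placeholder value.
    return math.pi * math.sqrt(g_rho / 90.0) * T**2 / M_PL

def m_axion(T, f_a, chi0=1e-11, T0=0.15, b=8.16):
    """m_A = sqrt(chi(T)) / f_A, with chi flat below T0 and a power-law
    fall-off chi0 * (T/T0)^(-b) above it (illustrative values, in GeV)."""
    chi = chi0 * (T / T0) ** (-b) if T > T0 else chi0
    return math.sqrt(chi) / f_a

def t_osc(f_a, lo=0.15, hi=10.0):
    """Bisect for m_A(T) = 3 H(T); m_A falls and H rises with T,
    so the difference is monotonically decreasing in T."""
    diff = lambda T: m_axion(T, f_a) - 3.0 * hubble(T)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if diff(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

T_osc = t_osc(f_a=1e11)  # oscillation temperature in GeV
```

In a full calculation the frozen $g_\rho$ and the power-law $\chi$ would be replaced by the tabulated EoS and the lattice $\chi(T)$, and the axion yield would follow from the adiabatic invariant of the subsequent oscillations.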
In a post-inflationary scenario the initial value of the angle $\theta_0$ takes all values between $-\pi$ and $\pi$, whereas in the pre-inflationary scenario only one angle contributes (all other values are inflated away). One should also mention that during the U(1) symmetry breaking topological strings appear, which decay and also produce dark matter axions. In the pre-inflationary scenario they are inflated away. However, in the post-inflationary framework their role is more important. This sort of axion production mechanism is less well understood, and in our final results it is necessary to make some assumptions about it. The possible consequences of our results for the predicted amount of axion dark matter can be seen in Figure 3. Here we also study cases in which dark matter axions are produced from the decay of unstable axionic strings (see the discussion in the figure’s caption). For the pre-inflationary Peccei-Quinn symmetry breaking scenario the axion mass determines the initial condition of our universe.

Acknowledgments. We thank M. Dierigl, M. Giordano, S. Krieg, D. Nogradi and B. Toth for useful discussions. This project was funded by the DFG grant SFB/TR55, and by OTKA under grant OTKA-K-113034. The work of J.R. is supported by the Ramon y Cajal Fellowship 2012-10597 and FPA2015-65745-P (MINECO/FEDER). The computations were performed on JUQUEEN at Forschungszentrum Jülich (FZJ), on SuperMUC at Leibniz Supercomputing Centre in München, on Hazel Hen at the High Performance Computing Center in Stuttgart, on QPACE in Wuppertal and on GPU clusters in Wuppertal and Budapest.

Author contributions. SB and SM developed the fixed sector integral, TGK and KS the eigenvalue reweighting, FP the odd flavour overlap techniques, respectively. SB, SK, TGK, TK, SM, AP, FP and KS wrote the necessary codes, carried out the runs and determined the EoS and $\chi(T)$. JR, AP and AR calculated the DIGA prediction. K-HK, JR and AR worked out the experimental setup. 
ZF wrote the main paper and coordinated the project.
