A critical look at cosmological perturbation theory techniques
Recently a number of analytic prescriptions for computing the non-linear matter power spectrum have appeared in the literature. These typically involve resummation or closure prescriptions that lack rigorous error control, and so must be compared with numerical simulations to assess their range of validity. We present a direct side-by-side comparison of several of these analytic approaches, using a suite of high-resolution N-body simulations as a reference, and discuss some general trends. All of the analytic results correctly predict the behavior of the power spectrum at the onset of non-linearity, and all improve upon a pure linear-theory description at very large scales; all fail at sufficiently small scales. At low redshift the dynamic range in scale where perturbation theory is both relevant and reliable can be quite small. We also compute, for the first time, the 2-loop contribution to standard perturbation theory for CDM models, finding improved agreement with simulations at high redshift. At low redshift, however, the 2-loop term is larger than the 1-loop term on quasi-linear scales, indicating a breakdown of the perturbation expansion. Finally, we comment on possible implications of our results for future studies.
The character and evolution of the large-scale structure of the Universe has been the subject of much research in recent decades. As it is currently understood, large-scale structure grows through a process of gravitational instability starting from a nearly scale-invariant spectrum of Gaussian fluctuations at early times. On very large scales the matter distribution of our Universe today is well modeled by linear perturbation theory. On sufficiently small scales, on the other hand, the dynamics are highly non-linear and we must resort to direct numerical simulation of the N-body problem to understand the clustering of matter or its tracers.
On intermediate, or quasi-linear, scales there is the possibility that the matter distribution may be modeled analytically by extending perturbation theory beyond linear order. This possibility has received renewed attention recently due to the interest in using baryon acoustic oscillations as a probe of the expansion history of the Universe and of the nature of dark energy Eisenstein (2005). Since the baryonic features lie at large scales (of order a hundred Mpc), it is plausible that higher-order perturbation theory could model subtle corrections to the linear result with some accuracy. More generally, investigation of perturbation theory may allow some improvement in theoretical predictions for the next generation of very large surveys.
Consequently, a number of new ideas have been introduced in recent years for computing statistical properties of the matter distribution, most importantly the 2-point function or power spectrum. Regrettably these approaches involve uncontrolled approximations, providing no simple way of estimating the theoretical uncertainty. Since perturbation theory is expected to fail on sufficiently small scales, the domain of validity of any particular approach is therefore unclear, and the only known way to test their accuracy is to compare their predictions with the results of N-body simulations. In the past this has been done on a case-by-case basis, with one theory tested for one cosmology against one suite of N-body simulations, focusing primarily on the power spectrum. Recently there have been some attempts to compare multiple theories simultaneously Nishimichi et al. (2008); Taruya and Hiramatsu (2008), or to examine statistics other than the power spectrum Bernardeau et al. (2008); Valageas (2008); Guo and Jing (2009). However a comprehensive comparison has been lacking, and with the recent proliferation of analytic techniques it is natural to ask how well these theories actually perform. With near-future observations potentially depending on these techniques and with recent advances in N-body algorithms and computing power, it is timely to revisit this issue.
In this paper we present a direct comparison of several recent analytic predictions for the clustering of matter on quasi-linear scales. We restrict our attention to the matter fluctuations, because very few of the existing treatments can handle biased tracers such as dark matter halos and galaxies. We use modern, high-resolution N-body simulations as our reference points, which provide highly accurate (though computationally expensive) estimates for statistical observables of the matter distribution Evrard et al. (2008); Heitmann et al. (2008b, a). By comparing the analytic predictions for two cosmologies, one close to the current best-fit model and one more extreme, we are able to judge the relative merits of each approach.
The paper is organized as follows. In Section II we start by reviewing the dynamical equations that govern the evolution of the matter distribution and discuss the relevant statistical quantities that one may compute. We then continue by summarizing the different analytic approaches we consider in this paper. In Section III we describe the N-body simulations that are used as a reference point for the comparison. In Section IV we plot the various approaches together, discuss qualitatively how well they agree with simulations, and propose several ways to quantify this agreement. We discuss the results of this comparison in Section V, and make some closing remarks in Section VI.
II Analytic Methods
We start by reviewing the different analytic methods we consider. Our goal is not to provide a comprehensive description of each method, but to give an overview and highlight the relationships between them.
II.1 Dynamics and Linear Theory
By far the most popular approach to an analytic description of large-scale structure is to approximate the matter distribution as an irrotational fluid, characterized by a density contrast δ and a peculiar velocity divergence θ. The fluid equations, in Fourier space, are then (see Appendix A for a detailed derivation),
Here τ is conformal time, a(τ) is the scale factor, and 𝓗 ≡ d ln a/dτ is the conformal Hubble parameter. (Note that we adopt the Fourier transform convention that puts the factors of 2π in the wavevector integral. We also omit the tilde usually used to decorate Fourier-space quantities.) The non-linear nature of these equations is manifest in the mode-coupling integrals.
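The display equations did not survive transcription; in the standard notation of Bernardeau et al. (2002), and with conventions that may differ slightly from the original in signs of θ, the continuity and Euler equations take the schematic form:

```latex
\begin{align}
  \frac{\partial \delta(\mathbf{k},\tau)}{\partial\tau} + \theta(\mathbf{k},\tau)
    &= -\int\!\frac{d^3k_1\,d^3k_2}{(2\pi)^3}\,
       \delta_D(\mathbf{k}-\mathbf{k}_1-\mathbf{k}_2)\,
       \alpha(\mathbf{k}_1,\mathbf{k}_2)\,\theta(\mathbf{k}_1,\tau)\,\delta(\mathbf{k}_2,\tau),\\
  \frac{\partial \theta(\mathbf{k},\tau)}{\partial\tau}
    + \mathcal{H}\,\theta(\mathbf{k},\tau)
    + \frac{3}{2}\,\Omega_m \mathcal{H}^2\,\delta(\mathbf{k},\tau)
    &= -\int\!\frac{d^3k_1\,d^3k_2}{(2\pi)^3}\,
       \delta_D(\mathbf{k}-\mathbf{k}_1-\mathbf{k}_2)\,
       \beta(\mathbf{k}_1,\mathbf{k}_2)\,\theta(\mathbf{k}_1,\tau)\,\theta(\mathbf{k}_2,\tau),
\end{align}
\text{with the mode-coupling kernels}
\begin{equation}
  \alpha(\mathbf{k}_1,\mathbf{k}_2) = \frac{(\mathbf{k}_1+\mathbf{k}_2)\cdot\mathbf{k}_1}{k_1^2},
  \qquad
  \beta(\mathbf{k}_1,\mathbf{k}_2)
    = \frac{|\mathbf{k}_1+\mathbf{k}_2|^2\,(\mathbf{k}_1\cdot\mathbf{k}_2)}{2\,k_1^2\,k_2^2}.
\end{equation}
```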
Working to linear order in δ and θ, we obtain
where δ(k, τ_i) is the density contrast at some early time τ_i when linear theory is certainly valid, D(τ) is the linear growth function (normalized to 1 today), and f ≡ d ln D/d ln a. At early times D ∝ a and f ≈ 1. For convenience we define δ_0(k) to be the linear density contrast extrapolated to today. When convenient we also follow common practice and use the growth function D as a time variable; for brevity we often suppress the time dependence of quantities altogether. It is further convenient to group δ and θ into a 2-component vector, which is proportional to D in linear theory.
II.2 Statistical observables
Inflation predicts, and observations have confirmed, that the initial fluctuations are predominantly adiabatic Hinshaw et al. (2008), almost scale-invariant Hinshaw et al. (2008), and very close to Gaussian Slosar et al. (2008). Under the assumption that the initial field is Gaussian, all expectation values of moments of the evolved density and velocity fields can be expressed as integrals over the linear theory power spectrum P_L(k). For example, the evolved 2-point function
whose components are all equal to the linear power spectrum in linear theory, can be expressed as integrals over powers of P_L(k) at any given order in perturbation theory (e.g. Eq. (31)).
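Concretely, with δ and θ grouped into the 2-component vector introduced above, the 2-point function can be written (a reconstruction in a standard convention):

```latex
\left\langle \psi_a(\mathbf{k},\tau)\,\psi_b(\mathbf{k}',\tau) \right\rangle
  = (2\pi)^3\,\delta_D(\mathbf{k}+\mathbf{k}')\,P_{ab}(k,\tau),
  \qquad a,b \in \{\delta,\theta\},
```

so that with a suitable normalization of θ all components P_δδ, P_δθ, P_θθ reduce to D²(τ) P_L(k) in linear theory.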
In general, to give a complete statistical description of the matter distribution at a given time, one would need to specify the entire hierarchy of connected n-point correlators. For initially Gaussian fields which are close to linear, the higher-order connected functions are small; they have been compared to simulations in Bernardeau et al. (2002). We shall confine our attention to the 2-point function in this paper.
The non-linear propagator (Crocce and Scoccimarro (2006a, b); also known as the response function Valageas (2008)) measures the correlation between the evolved field δ(k, τ) and the initial conditions δ_0(k). It is formally defined as a functional derivative,
though its significance is easier to understand from the relation
which we shall take as a definition henceforth. At early times or on large scales there is near-perfect correlation (G ≈ 1), but on small scales G → 0 as non-linear evolution washes out the initial conditions.
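The relation in question can be written explicitly (a reconstruction; the normalization by the growth function is our convention, chosen so that G = 1 exactly in linear theory):

```latex
G(k,\tau) \;\equiv\;
  \frac{\left\langle \delta(\mathbf{k},\tau)\,\delta_0^{*}(\mathbf{k}) \right\rangle}
       {D(\tau)\,\left\langle |\delta_0(\mathbf{k})|^2 \right\rangle}.
```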
Because we will make reference to it later, we also introduce here the quantity σ_v, which characterizes the scale at which non-linearities become important. In the Lagrangian formalism (see below) σ_v² gives the variance of each component of the linear (or Zel’dovich) displacement.
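The defining integral, reconstructed in a standard form (the per-component displacement variance of the Zel’dovich approximation, evaluated with the linear spectrum today):

```latex
\sigma_v^2 \;\equiv\;
  \frac{1}{3}\int \frac{d^3q}{(2\pi)^3}\,\frac{P_L(q)}{q^2}
  \;=\; \frac{1}{6\pi^2}\int_0^{\infty} dq\, P_L(q).
```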
II.3 Beyond Linear Theory
The program is now to compute the statistics of the evolved density field in terms of the initial density field. This is simple in principle but difficult in practice, because the equations of motion are both non-linear and non-local (in both configuration space and Fourier space). Non-linearity forces one to seek a perturbative solution, since exact solutions to Eqs. (1) (even if they could be found) could not be combined to construct a realistic solution. A straightforward perturbative approach is hampered by computational costs, as non-locality implies that higher order terms involve mode-coupling integrals of ever higher dimension.
This situation has prompted a study of higher-order methods for statistical observables like the power spectrum. Many of these methods were borrowed from other areas of physics (notably particle physics and fluid mechanics L’vov and Procaccia (1995)) where they achieved mixed success. We review these below, highlighting the relationships between the different methods; the methods we consider are summarized in Table 2.
The most straightforward approach is to define a series solution to the fluid equations in powers of the initial density field (or equivalently, the linearly evolved density field δ_0). This is the basis behind standard perturbation theory (hereafter SPT; Peebles (1980); Juszkiewicz (1981); Vishniac (1983); Goroff et al. (1986); Makino et al. (1992); Jain and Bertschinger (1994)); a detailed description (including explicit expressions for δ up to third order in δ_0) is presented in the Appendix.
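Schematically, the SPT expansion and the resulting loop expansion of the power spectrum read (the D^n time dependence is exact in an Einstein-de Sitter background and a good approximation otherwise):

```latex
\delta(\mathbf{k},\tau) = \sum_{n\ge 1} D^n(\tau)\,\delta^{(n)}(\mathbf{k}),
\qquad
P(k,\tau) = D^2 P_L(k) + D^4 \left[\,P_{22}(k) + P_{13}(k)\,\right] + \cdots,
```

where P_22 ∝ ⟨δ^(2)δ^(2)⟩ and P_13 ∝ ⟨δ^(1)δ^(3)⟩ are the two 1-loop contributions, each quadratic in P_L, and the 2-loop terms are cubic in P_L.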
Comparisons with simulations (including those presented below) have shown that the domain of applicability of second-order perturbation theory is rather small at low redshift. Furthermore, as we show below, going to third order is not guaranteed to improve agreement, leading one to question the convergence properties of such a series expansion. If one could carry out the expansion to infinite order it would (trivially) give the correct answer; this, however, is usually not possible. This has led various authors to investigate ways of summing subsets of the terms to arbitrary order in some expansion coefficient.
Renormalized perturbation theory (hereafter RPT, see Crocce and Scoccimarro (2006a, b); Crocce and Scoccimarro (2008)) is a variant of Dyson-Wyld resummation (see L’vov and Procaccia (1995) for a discussion in the context of hydrodynamics) and attempts to reorganize the perturbation expansion in terms of the non-linear propagator and non-linear vertex to improve convergence. In particular, if the vertex is approximated by its tree-level form then the power spectrum can be written as an expansion in the non-linear propagator. The resulting series is therefore no longer an expansion in powers of the initial density contrast, but rather “an expansion in orders of the complexity of the interaction” Wyld (1961).
In Crocce and Scoccimarro (2006b) the dominant contributions to the non-linear propagator are identified and summed explicitly in the high-k limit, giving a Gaussian decay of G(k) at large k. Matching this behavior with the 1-loop propagator (valid at low k) gives a non-perturbative prediction for G(k). Substituting this propagator into the first few diagrams of the reorganized expansion then gives a non-perturbative prediction for the power spectrum Crocce and Scoccimarro (2008). We implemented the 1-loop and 2-loop mode-coupling contributions as described in Crocce and Scoccimarro (2008).
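The resummed high-k behavior takes the form (a reconstruction; with σ_v defined from today's linear spectrum as above, the growth factor appears explicitly):

```latex
G(k,\tau) \;\longrightarrow\;
  \exp\!\left[-\tfrac{1}{2}\,k^2\,\sigma_v^2\,D^2(\tau)\right]
  \qquad \text{for } k\,\sigma_v D \gg 1,
```

so the initial conditions are exponentially "forgotten" beyond the displacement scale.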
The above methods work at the level of the density and velocity fields; an alternative approach is to use the fluid equations to derive equations of motion for the power spectrum and higher-order correlators directly. Such an approach results in an infinite hierarchy of equations, which must somehow be truncated. The closure theory approach Taruya and Hiramatsu (2008) does so by approximating the 3-point correlator by its leading-order expression in SPT. As in Crocce and Scoccimarro (2006b), the propagator can be computed explicitly in the low-k and high-k limits, and matched naturally in the intermediate regime. The power spectrum is then obtained order-by-order via a Born-like series expansion.
A variant of this approach (hereafter Time-RG theory Pietroni (2008)) assumes a vanishing trispectrum to truncate the hierarchy. The resulting equations of motion for the power spectrum and bispectrum can then be numerically integrated forward in time, starting at some sufficiently early redshift (where the power spectrum is linear and the bispectrum is negligible). Since the time evolution is performed numerically, the method also allows the proper treatment of models where the linear growth factor is scale-dependent (e.g. models with quintessence or massive neutrinos Lesgourgues et al. (2009)). This approach may be seen as a generalization of the renormalization group perturbation theory (hereafter RGPT) of McDonald (2007), which is an attempt to regulate the relative divergence of 1-loop SPT using renormalization group methods.
In Valageas (2001, 2002, 2004) a path-integral formulation of the Vlasov equation is developed in terms of the phase-space distribution function. In Valageas (2007) a similar technique is applied to the fluid equations (Eq. (21)). The key insight here is that statistical observables like the power spectrum may be obtained by taking functional derivatives of an appropriately constructed path integral (the generating functional). Straightforward perturbative evaluation of the generating functional reproduces the results of SPT, whereas applying large-N expansion techniques and truncating at fixed order in 1/N leads to approximations for the power spectrum and propagator. These approximate solutions agree with SPT up to a fixed order in P_L, but also include non-perturbative contributions corresponding to infinite partial resummations of the standard expansion. We focus attention on the steepest-descent method of Valageas (2007) (hereafter Large-N), as it is considerably easier to implement than the 2PI effective action method.
Lagrangian resummation theory Matsubara (2008a, b) is an extension of the well-developed Lagrangian perturbation theory. Lagrangian perturbation theory (hereafter LPT; see Buchert (1992); Buchert and Ehlers (1993); Buchert (1994)) has received less attention recently than its Eulerian counterpart as a method for investigating non-linear structure growth, partly because the Lagrangian picture breaks down once shell-crossing occurs. However, recent work Matsubara (2008a) has demonstrated that Lagrangian perturbation theory not only reproduces the SPT power spectrum at the lowest non-trivial order, but with a slight modification also yields a non-perturbative prediction for the power spectrum that corresponds to resumming an infinite set of terms in the standard expansion. We review LPT and the cumulant expansion in Appendix B.
III Simulations

In order to assess how well the perturbative expansions are doing, we need a reference for any given cosmology. We use a new set of large-dynamic-range N-body simulations well suited to this purpose. These programs simulate the same basic physical system (a collisionless matter ‘fluid’ interacting only through gravity) that the perturbative methods attempt to describe; hence the results of the two approaches, though arrived at very differently, are directly comparable.
We have elected to investigate several different cosmologies, in an attempt to better identify where and why the various analytic techniques succeed and/or fail. For simplicity we consider only flat models in the CDM family. We will highlight two: one in which a cosmological constant dominates the late-time evolution, close to the current best-fit cosmology, and an extreme model with a critical density in matter and a larger present-day normalization, which emphasizes the effects of non-linearity and the erasure of baryon acoustic oscillations through mode coupling.
For each cosmology the transfer function, , was computed by evolving the coupled Boltzmann, fluid, and Einstein equations using the publicly available package CAMB (http://www.camb.info). The resulting power spectra were then used both as input to the perturbative methods and to generate initial conditions for the N-body simulations (Table 1 gives the amplitude of the dimensionless power at some fiducial wavenumbers).
A number of numerical issues need to be addressed to ensure that our simulations provide an adequate reference. Our workhorse simulations each employ equal-mass dark matter particles in a periodic, cubical box large enough to represent a fair sample of the Universe Heitmann et al. (2008a); by employing such large volumes we are highly insensitive to the periodicity of the box. There is very little power at the fundamental mode, even at late times. The lowest few modes obey linear growth to sub-percent accuracy, and we run enough different realizations to ensure that the spectrum at the scales of interest is well determined. The large number of particles ensures that the spectrum is well converged for the k-modes of interest, which we checked explicitly by comparing simulations of different box sizes. The simulations are evolved from a high starting redshift, with the particles perturbed from an initially uniform grid using the Zel’dovich approximation. The rms particle move was about 5% of the mean interparticle spacing. Comparison with second-order Lagrangian perturbation theory initial conditions showed that this starting redshift is sufficiently high that transients from the Zel’dovich start are irrelevant for the scales and redshifts of interest.
Most of the evolutions were performed with a parallel particle-mesh code. To cross-check our results we used two high-force-resolution N-body codes: the TreePM code White (2002) and Gadget-II Springel (2005). These have each been tested against a suite of other codes Evrard et al. (2008); Heitmann et al. (2008b, a), with very good agreement. We ran a subset of our simulations using all three codes to quantify the level of precision for the box size and particle loading of relevance here. With its default time stepping, the TreePM code produces dark matter power spectra in excellent agreement with those from Gadget-II over the scales of interest. However these runs prove to be quite time consuming. If we instead use a time step that evolves from 5% steps at early times to 1% steps at late times, we find a small shortfall of power at high k but very little difference on quasi-linear scales. We choose the same time stepping for the particle-mesh code, which results in very short run times, allowing an ensemble of simulations to be performed. With this step the particle-mesh power spectra show a significant deficit of power (compared to TreePM or Gadget-II) at high k, but in the region of interest here the agreement is very good.
To compute the power spectrum at different output times the particles were binned onto a regular, Cartesian grid using cloud-in-cell assignment Hockney and Eastwood (1988) and the resulting density field was Fourier transformed. The Fourier modes were squared, corrected for the gridding by dividing by the Fourier transform of the charge-assignment scheme, and binned into bins equally spaced in k. The average power in each bin was assigned to the average wavenumber of the bin, and shot noise was subtracted assuming it was Poisson. The binning introduces artifacts at low k, where the sampling on the grid is sparse and the dimensionless power spectrum is steep, but these are small for the scales of most relevance to us. Similarly, there is some evidence that the shot noise in simulations is not scale-invariant (Poisson), but the correction is negligibly small on the scales of interest here.
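The measurement pipeline just described can be sketched in a few dozen lines of NumPy. This is our own illustration, not the code used for the paper: the function names are ours, the binning ignores the double-counting of hermitian rfft modes, and no interlacing or higher-order assignment is attempted.

```python
import numpy as np

def cic_assign(pos, ngrid, boxsize):
    """Cloud-in-cell (CIC) assignment of particle positions onto a cubic
    grid.  Returns the overdensity field delta = rho/rho_bar - 1."""
    delta = np.zeros((ngrid, ngrid, ngrid))
    x = (pos / boxsize * ngrid) % ngrid        # positions in grid units
    i = np.floor(x).astype(int)
    f = x - i                                  # fractional offset in a cell
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[:, 0] if dx else 1 - f[:, 0]) *
                     (f[:, 1] if dy else 1 - f[:, 1]) *
                     (f[:, 2] if dz else 1 - f[:, 2]))
                np.add.at(delta, ((i[:, 0] + dx) % ngrid,
                                  (i[:, 1] + dy) % ngrid,
                                  (i[:, 2] + dz) % ngrid), w)
    return delta / delta.mean() - 1.0

def power_spectrum(delta, boxsize, npart=None, nbins=8):
    """Estimate P(k): FFT, deconvolve the CIC window, bin in |k|, and
    optionally subtract Poisson shot noise boxsize**3/npart."""
    ngrid = delta.shape[0]
    spacing = boxsize / ngrid
    dk = np.fft.rfftn(delta)
    kx = 2 * np.pi * np.fft.fftfreq(ngrid, d=spacing)
    kz = 2 * np.pi * np.fft.rfftfreq(ngrid, d=spacing)
    KX, KY, KZ = np.meshgrid(kx, kx, kz, indexing='ij')
    kmag = np.sqrt(KX**2 + KY**2 + KZ**2)
    # CIC window: prod_i sinc^2(k_i * spacing / 2); np.sinc(x) is
    # sin(pi x)/(pi x), hence the 2*pi in the argument.
    W = (np.sinc(KX * spacing / (2 * np.pi)) *
         np.sinc(KY * spacing / (2 * np.pi)) *
         np.sinc(KZ * spacing / (2 * np.pi)))**2
    pk3d = np.abs(dk / W)**2 * boxsize**3 / ngrid**6
    edges = np.linspace(2 * np.pi / boxsize, np.pi / spacing, nbins + 1)
    sel = (kmag >= edges[0]) & (kmag < edges[-1])
    idx = np.digitize(kmag[sel], edges) - 1
    counts = np.bincount(idx, minlength=nbins)
    safe = counts.clip(min=1)
    kmean = np.bincount(idx, weights=kmag[sel], minlength=nbins) / safe
    pmean = np.bincount(idx, weights=pk3d[sel], minlength=nbins) / safe
    if npart is not None:
        pmean = pmean - boxsize**3 / npart     # Poisson shot noise
    good = counts > 0
    return kmean[good], pmean[good]
```

Note that dividing the modes by W before squaring is equivalent to dividing the power by W², as described in the text.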
The non-linear propagator was computed by cross-correlating the initial density field with the final one Crocce and Scoccimarro (2006b). Similar to the power spectrum, this quantity is obtained by Fourier transforming both fields, multiplying their Fourier coefficients, correcting for gridding, and then binning the results.
The velocity statistics are more problematic: while the density and momentum fields must vanish where there are no tracer particles, the same is not true of the velocities. Thus estimates of the velocity field must employ a smoothing technique. Similarly, the velocity field is more sensitive to finite force resolution. On the other hand, comparison of the velocity fields with the density fields is less sensitive to finite-volume scatter. For this reason we use a different set of simulations for the velocity statistics, with more particles (up to 3 billion) in smaller boxes, evolved with the TreePM code. Comparison between different smoothing schemes, box sizes, and particle loadings shows that with these choices our results are well converged on the scales of interest White et al. (2008).
IV Comparison

IV.1 The Power Spectrum
We begin our analysis by comparing the predictions of SPT against our simulation results. Figure 1 shows the linear theory, 1-loop SPT, and 2-loop SPT power spectra for both cosmologies. While 2-loop SPT is a marked improvement over 1-loop SPT at high redshift, it is actually worse than 1-loop at z = 0. The effect is most apparent in the extreme model, which has a larger matter density and normalization. This break-down of standard perturbation theory is not entirely surprising: since each successive order in SPT grows with a higher power of the linear growth factor, at any given scale one expects higher-order terms to become comparable in magnitude to lower-order terms at sufficiently late times. Our results suggest that at BAO scales the break-down occurs at redshifts of order unity.
A common heuristic prescription dictates that 1-loop SPT can be trusted to 1% for wavenumbers where the dimensionless power is sufficiently small Jeong and Komatsu (2006). On the other hand, a strict application of perturbation theory implies that 1-loop SPT can be trusted to 1% only for wavenumbers where the 2-loop contribution is at most 1% of linear theory. In Figure 1 we indicate the predicted domain of validity of 1-loop SPT according to these two criteria; for comparison we also indicate where the 2-loop contribution is within 3% of linear theory. One sees that the agreement with simulations is slightly better than our more rigorous criterion suggests. For instance, for the ΛCDM-like model at z = 0, at the wavenumber where the stricter criterion fails, 1-loop SPT overshoots the reference spectrum by about 3%, whereas 2-loop SPT undershoots it by 5%. For the extreme model at z = 0 the situation is much worse, with 1-loop SPT overshooting by only 6%, but 2-loop SPT undershooting by almost 20%.
RPT and closure theory have also been developed to two loops. Given the above conclusions about SPT, it is natural to make the same comparison between the 1-loop and 2-loop predictions of RPT and closure theory. In Figure 2 we show the matter power spectrum for these theories at tree, 1-loop, and 2-loop order for both cosmologies. For closure theory it appears that going to 2-loop order extends the range of agreement with simulations, although the wiggles of the power spectrum are not matched in detail. For RPT, as with SPT, the 2-loop result is systematically high, whereas the 1-loop result performs fairly well on quasi-linear scales. Agreement with simulations can be improved by changing the damping scale in the propagator. In Crocce and Scoccimarro (2008) the damping scale was modified by calculating σ_v with the linear theory expression (Eq. 7), but using the non-linear power spectrum and integrating only up to the scale of interest. This leads to an additional suppression of the propagator, and hence of the power spectrum, on the relevant scales, bringing the theory into better agreement with simulations Crocce and Scoccimarro (2008). At present this correction has not been derived from first principles and we have not included it, but it appears that improvements in this direction could be important.
Figure 3 shows the predicted power spectrum for the remainder of the theories that we consider in this work. Together with Figures 1 and 2, these figures give an overview of the agreement between our N-body simulations and the perturbation theories for both cosmologies. Some of the trends are easily seen in these figures, and are generic across cosmologies and redshifts. For instance 1-loop SPT, which coincides with 1-loop LPT, always overpredicts the power at high k. Lagrangian resummation theory, on the other hand, is much too strongly damped beyond the first wiggle. Large-N theory more or less traces 1-loop SPT before turning over, while Time-RG theory and RGPT follow the general trends of the N-body data without fitting any particular feature precisely. (Note that the nearly perfect agreement between RGPT and simulations seen in one case is likely spurious, as this level of agreement is not seen for other cosmologies or at other redshifts.) RPT and closure theory give nearly identical tree-level predictions, and very similar 1-loop predictions. Closure theory appears to benefit greatly from going to 2-loop order, whereas for RPT the 2-loop result does worse than 1-loop even at high redshift.
While we have run many realizations of each cosmology to reduce run-to-run variance, one sees in Figures 1, 2 and 3 that the N-body data are still noisy at low k, which makes it difficult to make quantitative statements about the performance of the perturbation theories. To overcome this we define a ‘reference spectrum’ which interpolates the N-body results at high and intermediate k with the 1-loop SPT calculation at low k. This eliminates the large scatter from the finite number of modes in the simulations and any biases from the finite bin sizes at low k, while still retaining the information from the simulations at larger k. The result is a smooth function, defined for all k, which can be used as a reference for quantitative comparison. Given the large number of simulations we have run, the uncertainty in the N-body results is small before perturbation theory becomes invalid, and we can see a significant range of k for which theory and simulation agree well. This makes our final results insensitive to how the matching is done. Our recipe for producing a reference spectrum is to treat both the N-body results and 1-loop SPT as independent measurements of the true power spectrum, with errors given by the run-to-run variance within each wavenumber bin.
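Per wavenumber bin, this matching is a standard inverse-variance average. The sketch below is our own illustration, assuming uncorrelated errors; how the effective 'error' assigned to 1-loop SPT grows at high k, so that the combination transitions to the N-body data there, is a modeling choice not spelled out here.

```python
import numpy as np

def reference_spectrum(p_nbody, var_nbody, p_spt, var_spt):
    """Inverse-variance weighted average of two independent estimates of
    the true power spectrum, evaluated per wavenumber bin."""
    w_n = 1.0 / np.asarray(var_nbody)   # weight of the N-body estimate
    w_s = 1.0 / np.asarray(var_spt)     # weight of the 1-loop SPT estimate
    return (w_n * np.asarray(p_nbody) + w_s * np.asarray(p_spt)) / (w_n + w_s)
```

At low k the SPT variance is tiny, so the combination follows the smooth analytic curve; wherever the SPT variance is made large, the N-body measurement dominates.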
All of the theories beyond linear order correctly predict the ‘dip’ below linear theory which can be seen most clearly in Figure 4 on quasi-linear scales. This is sometimes referred to as pre-virialization, and arises because the non-linear growth of the density and velocity fields is slower than linear on scales where the effective spectral index is sufficiently positive (see Bernardeau et al. (2002) for further discussion). In this regime any of the methods provides a significant improvement over linear theory.
Method (references): smallest wavenumber at which the prediction departs from the reference spectrum by 1%, at a sequence of redshifts:

1-loop SPT (Peebles (1980); Juszkiewicz (1981); Vishniac (1983); Goroff et al. (1986); Makino et al. (1992); Jain and Bertschinger (1994)): 0.08, 0.10, 0.11, 0.13, 0.14
2-loop SPT (Fry (1994)): 0.04, 0.06, 0.09, 0.23, 0.20
1-loop RPT (Crocce and Scoccimarro (2006a, b, 2008)): 0.10, 0.13, 0.15, 0.16, 0.20
1-loop Closure (Taruya and Hiramatsu (2008)): 0.09, 0.10, 0.14, 0.15, 0.18
To gain an overview of the range of validity of the various methods, we list in Table 2 the smallest value of k at which each method departs from our reference spectrum by 1% (a comparison with validity criteria defined elsewhere in the literature is presented in Table 3). As expected, the methods perform better at smaller scales the higher the redshift. All of the methods out-perform linear theory, owing to the marked effects of pre-virialization; however, none of the methods remains accurate far into the non-linear regime at z = 0.
IV.2 Testing the dynamics
While comparison of the power spectrum is the most common test for perturbation theory, it is also useful to test if perturbation theory is correctly describing the underlying dynamics. To do so, we examine some of the constituent pieces from the simulations, and compare to the perturbation theory predictions.
Figure 6 compares the non-linear propagator measured from the simulations with the predictions of the analytic models. Only RPT and Lagrangian resummation give the expected behavior, G → 0, at large k.
Criterion (theory): resulting wavenumber limits:

Jeong and Komatsu (2006) (SPT): 0.12, 0.28, 0.07, 0.25
Sefusatti and Komatsu (2007) (SPT): 0.10, 0.19, 0.08, 0.19
Nishimichi et al. (2008) (SPT): 0.08, 0.09, 0.08, 0.09
Nishimichi et al. (2008) (RPT): 0.09, 0.09, 0.09, 0.10
Nishimichi et al. (2008) (Closure): 0.09, 0.09, 0.09, 0.10
Comparisons of perturbation theory with simulations typically focus on the density auto-correlation function or power spectrum. However, perturbation theory also makes predictions for the (irrotational) velocity field which can be checked against simulations. In Figure 7 we show the density-velocity cross-correlation coefficient, r(k) = P_δθ(k)/[P_δδ(k) P_θθ(k)]^{1/2},
for several theories, compared with the same quantity measured from simulations. In linear theory r(k) = 1 identically. On physical grounds one expects a decoherence of the density and velocity fields on small scales, and indeed the simulations show r < 1 at large k. None of the analytic theories correctly reproduces this behavior. SPT and Time-RG theory follow the downward turn of the simulation data initially, but then predict an unphysical r > 1 soon after non-linear corrections become important. RPT and closure theory perform somewhat better, in that r never exceeds unity, but the level of agreement with simulations is still not good on small scales. (Note that we have displayed here only the 1-loop predictions of these theories.) The deviation of r from unity seems to be driven mostly by the densities, with perturbation theory performing better at the same scale for the velocities than for the densities (see Figure 8).
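The statistic can be estimated per |k| bin as follows (our own sketch; `delta_k` and `theta_k` stand for the gridded Fourier modes of the density and velocity-divergence fields within one bin):

```python
import numpy as np

def cross_corr_coeff(delta_k, theta_k):
    """Cross-correlation coefficient r = P_dt / sqrt(P_dd * P_tt),
    estimated from the Fourier modes of delta and theta in one |k| bin."""
    p_dd = np.mean(np.abs(delta_k)**2)                  # density auto-power
    p_tt = np.mean(np.abs(theta_k)**2)                  # velocity auto-power
    p_dt = np.mean((delta_k * np.conj(theta_k)).real)   # cross-power
    return p_dt / np.sqrt(p_dd * p_tt)
```

For fields measured from the same realization the Cauchy-Schwarz inequality guarantees r ≤ 1 for this estimator, so a theory predicting r > 1 is genuinely unphysical: the analytic schemes compute the three spectra from independent integrals, which is how the bound can be violated.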
Standard perturbation theory has a simple and direct theoretical motivation, and results in explicit integral expressions at any order. If taken to infinite order, it provides an exact solution (though to an idealized problem). While standard perturbation theory works well at high redshift and large scales, our results indicate that the standard expansion is badly behaved at the redshifts and scales most accessible to observation, in that higher order terms are comparable in magnitude to lower order terms. Although one expects the expansion to converge if taken to sufficiently high order, this comes at a great computational cost. With advances in raw computing power it may one day become possible to perform the calculation to the requisite order, but in the near future this approach seems impracticable.
On the other hand, it should be emphasized that SPT performs rather well at high redshift. Figure 1 shows that 2-loop SPT at high redshift agrees with simulations to 1% out to the smallest scales we can reliably probe (where the simulations themselves become unreliable). At these redshifts SPT not only provides a reasonable theoretical prediction for the matter power spectrum on observationally relevant scales, but also an estimate of the theoretical uncertainty on this prediction.
RPT is essentially a rearrangement of the standard expansion, so like SPT it is an exact solution if carried out to all orders. While this rearrangement appears to improve the convergence properties of the perturbation series, it makes it unclear what small quantity (if any) we are actually expanding in. Furthermore, RPT does not provide closed-form expressions for the power spectrum, but rather integral relations in which P(k) is expressed in terms of mode-coupling integrals of itself. Thus, in addition to truncating the loop expansion at finite order, a fully consistent implementation of RPT requires solving for P(k) with an iterative scheme, of which the explicit expressions presented in Crocce and Scoccimarro (2008) represent only the first step. The error associated with this approximation has (to our knowledge) yet to be quantified.
Closure theory derives from a very different perturbative scheme than RPT, yet the results obtained are superficially quite similar. There is no obvious way to provide error estimates on the results of closure theory, however, as the closure equations are obtained from heuristic approximations rather than a systematic expansion. Furthermore, the propagator in closure theory shows unrealistic oscillations at large $k$. As mentioned previously, the closure equations are solved approximately in Taruya and Hiramatsu (2008) by means of a Born-like expansion. Recently, Hiramatsu and Taruya (2009) have attempted to solve the closure equations numerically without resorting to such a Born-like expansion. The resulting predictions for the power spectrum appear to agree better with simulations than the results presented here, although it is difficult to draw any firm conclusions from the information provided.
Time-RG theory is based on a single well-defined approximation: the vanishing of the trispectrum. The validity of this approximation can easily be checked in simulations, and in principle this could allow one to quantify the theoretical uncertainty in the method. As is most easily seen in Figure 5, although time-RG theory follows the general trends of our reference spectrum over a wider range than other methods, it falls short by 1-2% over the entire quasi-linear regime. It also gives an unphysical prediction for the density-velocity cross-correlation.
The large-$N$ expansion utilizes more sophisticated theoretical machinery than the other resummation techniques. While the path-integral formalism for computing clustering statistics is exact, the errors introduced by the large-$N$ expansion are difficult to quantify, as '$N$' is a fictitious parameter. Although the large-$N$ expansion corresponds to an infinite partial resummation of the standard perturbative expansion, from our results it seems that this resummation offers little improvement over 1-loop SPT in the quasi-linear regime. The grossly unphysical behavior of the propagator in this theory is likely responsible for this. As mentioned previously, we have focused attention on the steepest-descent method rather than the 2PI effective action method of Valageas (2007). The latter method produces a more reasonable propagator, and likely results in a better prediction for the power spectrum, although at an increased computational cost.
Like SPT, the Lagrangian resummation prescription of Matsubara (2008a) also results in easy-to-compute, explicit integral expressions. These are well behaved at large $k$, allowing e.g. the correlation function to be computed, and there are natural extensions to redshift space and to halo bias Matsubara (2008b). For the real-space mass power spectrum considered here it offers a marginal improvement over 1-loop SPT, although the damping prefactor strongly overcompensates as one moves further into the quasi-linear regime.
Our results have interesting implications for generating a suite of simulations aimed at constraining the matter power spectrum. If we can trust perturbative methods for $k$ below some maximum wavenumber $k_{\rm max}$, then we can focus the computational resources on higher $k$. Assuming Gaussian fluctuations, obtaining 1% accuracy in a bin requires $N_{\rm modes} = 2/(0.01)^2 = 2\times 10^4$ modes. There are $4\pi k^2\,\Delta k\,[L/(2\pi)]^3$ modes in a bin of width $\Delta k$ from a periodic box of side length $L$, so our 1% constraint translates into a minimum comoving volume
\begin{equation}
  L^3 \gtrsim \frac{2}{(0.01)^2}\,\frac{(2\pi)^3}{4\pi k^2\,\Delta k},
\end{equation}
or an equivalent volume of smaller simulations. This constraint is most difficult to meet at low redshift, since the scale of non-linearity is larger and the simulations must be evolved for longer. As an example, with the default parameters listed above we would require 27 Gpc-scale simulations to obtain percent-level constraints on the power spectrum of $\Lambda$CDM in a 10% band around a given quasi-linear scale at low redshift; at higher redshift we could trust perturbation theory at this scale and focus the simulations on smaller scales, where three times fewer simulations of the same size are needed.
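The mode-counting argument above is simple enough to sketch directly. In the snippet below the function names, the factor-of-two convention for the reality condition $\delta(-\vec{k}) = \delta^*(\vec{k})$, and the illustrative values of $k$ and $\Delta k$ are our own choices, not taken from the text:

```python
import numpy as np

def n_modes(k, dk, L):
    """Independent Fourier modes in the shell [k, k+dk) for a periodic box
    of side L; the reality condition delta(-k) = delta*(k) halves the count."""
    return 4.0 * np.pi * k**2 * dk * (L / (2.0 * np.pi))**3 / 2.0

def min_volume(k, dk, frac_err):
    """Box volume V = L^3 such that the Gaussian sample variance
    sqrt(2 / N_modes) equals the target fractional error on P(k)."""
    n_req = 2.0 / frac_err**2
    return n_req * (2.0 * np.pi)**3 / (2.0 * np.pi * k**2 * dk)

# Illustrative numbers: a 10% band (dk = 0.1*k) around k = 0.1 h/Mpc, 1% accuracy.
k, dk = 0.1, 0.01
V = min_volume(k, dk, 0.01)   # minimum volume in (Mpc/h)^3
L = V ** (1.0 / 3.0)          # ~2 Gpc/h for these choices
```

Because the required number of modes scales as $1/k^2\Delta k$ at fixed fractional accuracy, pushing the constraint to smaller $k$ (or narrower bins) rapidly inflates the required volume, which is the origin of the trade-off discussed above.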
Perturbative methods have a long history in cosmology, and are widely used in many fields of physics. Many of the techniques reviewed herein were first developed in other fields and applied to other problems, with varying levels of success, before being pressed into service for modeling cosmological perturbations. In this paper we have studied a variety of these methods as applied to predicting the large-scale clustering of cold, collisionless matter in an expanding Universe. Our results indicate that the analytic theories correctly model the approach to non-linearity and work well when the perturbations are small, but none of the available theories are up to the challenge of fully describing the behaviour of matter on quasi-linear scales at late times. We have emphasized the need to study a range of different cosmologies and to look at a variety of different statistical observables, as accidental agreement between theory and simulations is possible if one only considers the power spectrum. We have computed the 2-loop contribution to SPT and found that the standard perturbative expansion is badly behaved at low redshifts, even on scales where 1-loop SPT was previously believed to be valid. This provides further motivation for studying alternative analytic approaches based on non-perturbative methods, though at the same time it emphasizes the need for error control in analytic methods.
This work has made use of a large number of high dynamic range N-body simulations, against which we can compare the analytic models. We make these data public in Table 4 to aid future work in the field. In addition a flexible software package that implements the perturbation schemes described in this paper is available from the authors.
We thank Román Scoccimarro, Martín Crocce, Pat McDonald, Salman Habib, and Katrin Heitmann for helpful discussions. The simulations presented in this paper were carried out using computing resources of the National Energy Research Scientific Computing Center and the Laboratory Research Computing project at Lawrence Berkeley National Laboratory. NP is supported by NASA Hubble Fellowship NASA HST-HF-01200.01 and an LBNL Chamberlain Fellowship.
Appendix A Eulerian Perturbation Theory
Here we briefly recap the derivation of the fluid equations in the Eulerian picture, and the assumptions that are made in perturbative treatments (Peebles (1980); Juszkiewicz (1981); Vishniac (1983); Goroff et al. (1986); Makino et al. (1992); Jain and Bertschinger (1994); see Bernardeau et al. (2002) for a review). The matter content of the Universe is modeled as a large collection of identical particles of mass $m$, interacting only through mutual gravitational attraction. For low densities and sub-horizon scales, such forces are adequately described by Newtonian gravity in a uniformly expanding background, with the Newtonian potential sourced by inhomogeneities in the density field. The distribution function for such a set of particles obeys the Vlasov equation. N-body methods are essentially a Monte-Carlo evolution of the Vlasov equation, in which the Monte-Carlo tracer super-particles move along its characteristics.
Analytically one typically invokes the single-stream approximation, which assumes that all particles at a given point move together with a common velocity $\vec{v}(\vec{x},\tau)$. This amounts to demanding that the distribution function take the form $f(\vec{x},\vec{p},\tau) \propto \delta_D\!\left(\vec{p} - am\vec{v}(\vec{x},\tau)\right)$, where $\delta_D$ is the Dirac delta function. This assumption is explicitly violated once shell crossing occurs in gravitational collapse, but is thought to be a reasonable approximation for small density contrasts. The velocity moments of the Vlasov equation then give the familiar fluid equations (e.g. Peebles (1980))
\begin{align}
  \frac{\partial\delta}{\partial\tau} + \nabla\cdot\left[(1+\delta)\vec{v}\right] &= 0, \\
  \frac{\partial\vec{v}}{\partial\tau} + \mathcal{H}\vec{v} + (\vec{v}\cdot\nabla)\vec{v} &= -\nabla\phi,
\end{align}
where $\mathcal{H} \equiv aH$ is the conformal Hubble parameter and the peculiar potential $\phi$ satisfies the Poisson equation $\nabla^2\phi = \frac{3}{2}\Omega_m\mathcal{H}^2\delta$.
It is conventional to further assume that the vorticity of the velocity field vanishes, i.e. that the fluid is irrotational. This assumption is motivated by noting that any initial vorticity decays with the expansion at linear order, and it is well supported by simulations Percival and White (2008); Pueblas and Scoccimarro (2008). Under this approximation the velocity field is completely specified by its divergence $\theta \equiv \nabla\cdot\vec{v}$, and the fluid equations reduce to
\begin{align}
  \frac{\partial\delta}{\partial\tau} + \theta &= -\nabla\cdot(\delta\vec{v}), \\
  \frac{\partial\theta}{\partial\tau} + \mathcal{H}\theta + \frac{3}{2}\Omega_m\mathcal{H}^2\delta &= -\nabla\cdot\left[(\vec{v}\cdot\nabla)\vec{v}\right].
\end{align}
In Fourier space the non-linear terms become convolutions, giving
\begin{align}
  \frac{\partial\delta(\vec{k})}{\partial\tau} + \theta(\vec{k}) &= -\int d^3k_1\,d^3k_2\;\delta_D(\vec{k}-\vec{k}_1-\vec{k}_2)\,\alpha(\vec{k}_1,\vec{k}_2)\,\theta(\vec{k}_1)\,\delta(\vec{k}_2), \\
  \frac{\partial\theta(\vec{k})}{\partial\tau} + \mathcal{H}\theta(\vec{k}) + \frac{3}{2}\Omega_m\mathcal{H}^2\delta(\vec{k}) &= -\int d^3k_1\,d^3k_2\;\delta_D(\vec{k}-\vec{k}_1-\vec{k}_2)\,\beta(\vec{k}_1,\vec{k}_2)\,\theta(\vec{k}_1)\,\theta(\vec{k}_2),
\end{align}
where the mode-coupling functions are
\begin{equation}
  \alpha(\vec{k}_1,\vec{k}_2) = \frac{(\vec{k}_1+\vec{k}_2)\cdot\vec{k}_1}{k_1^2}, \qquad
  \beta(\vec{k}_1,\vec{k}_2) = \frac{|\vec{k}_1+\vec{k}_2|^2\,(\vec{k}_1\cdot\vec{k}_2)}{2\,k_1^2\,k_2^2}.
\end{equation}
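The two mode-coupling functions are simple enough to code directly for 3-vectors; the sketch below (function names ours) makes their symmetry properties easy to check:

```python
import numpy as np

def alpha(k1, k2):
    """alpha(k1, k2) = (k1 + k2) . k1 / |k1|^2  (not symmetric in k1 <-> k2)."""
    k1, k2 = np.asarray(k1, float), np.asarray(k2, float)
    return np.dot(k1 + k2, k1) / np.dot(k1, k1)

def beta(k1, k2):
    """beta(k1, k2) = |k1 + k2|^2 (k1 . k2) / (2 |k1|^2 |k2|^2)  (symmetric)."""
    k1, k2 = np.asarray(k1, float), np.asarray(k2, float)
    k12 = k1 + k2
    return np.dot(k12, k12) * np.dot(k1, k2) / (2.0 * np.dot(k1, k1) * np.dot(k2, k2))

# beta vanishes for orthogonal wavevectors (no theta-theta coupling there),
# while for equal wavevectors alpha = beta = 2.
```

Note that $\beta$ carries a factor $\vec{k}_1\cdot\vec{k}_2$, so velocity-velocity coupling switches off for orthogonal modes, while $\alpha$ remains of order unity.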
As long as $\delta$ and $\theta$ are small, the right-hand sides of the fluid equations are small and can be dropped; this approximation defines linear theory. The solution to the resulting linearized fluid equations may be written as
\begin{equation}
  \delta_{\rm lin}(\vec{k},\tau) = \frac{D(\tau)}{D(\tau_i)}\,\delta_i(\vec{k}), \qquad
  \theta_{\rm lin}(\vec{k},\tau) = -\mathcal{H}f\,\delta_{\rm lin}(\vec{k},\tau),
\end{equation}
where $\delta_i(\vec{k})$ is the density contrast at some early time $\tau_i$ when linear theory is certainly valid, $D(\tau)$ is the linear growth function (normalized to 1 today), and $f \equiv d\ln D/d\ln a$. At early times $D \propto a$ and $f \simeq 1$. Note that a possible decaying mode contribution, proportional to $a^{-3/2}$ at early times, is forced to zero in linear theory by the condition that the solution be well-behaved as $a \to 0$. Note also that the mode-coupling integrals vanish as $k \to 0$, so linear theory is always valid in some neighborhood of $k = 0$, even at late times. For convenience we define $\delta_0(\vec{k}) \equiv \delta_{\rm lin}(\vec{k},\tau_0)$ to be the linear density contrast today.
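The growth function and the rate $f = d\ln D/d\ln a$ referred to here follow from the standard second-order growth ODE; below is a minimal self-contained RK4 sketch for flat $\Lambda$CDM (the starting scale factor, step count, and function name are our arbitrary choices, and radiation is ignored):

```python
import numpy as np

def growth_rate_today(Om0=0.3, a_init=1e-3, n_steps=4000):
    """Integrate D'' + (2 - 1.5*Om(a)) D' - 1.5*Om(a) D = 0 in x = ln(a)
    for flat LCDM, starting on the matter-dominated growing mode D ~ a.
    Returns f = dlnD/dlna evaluated today (a = 1)."""
    Ol0 = 1.0 - Om0
    def Om(a):                      # Omega_m(a) for flat LCDM
        return Om0 / (Om0 + Ol0 * a**3)
    def rhs(x, y):
        D, Dp = y
        om = Om(np.exp(x))
        return np.array([Dp, 1.5 * om * D - (2.0 - 1.5 * om) * Dp])
    x = np.log(a_init)
    y = np.array([a_init, a_init])  # growing mode deep in matter domination
    h = -x / n_steps
    for _ in range(n_steps):        # classical fourth-order Runge-Kutta
        k1 = rhs(x, y)
        k2 = rhs(x + h / 2, y + h / 2 * k1)
        k3 = rhs(x + h / 2, y + h / 2 * k2)
        k4 = rhs(x + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y[1] / y[0]              # f = D'/D at a = 1
```

For $\Omega_m \approx 0.3$ this reproduces the familiar fitting form $f \approx \Omega_m^{5/9}$ to within about a percent, and in the Einstein-de Sitter limit ($\Omega_m = 1$) it returns $f = 1$ as expected.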
It often proves convenient to use $\eta \equiv \ln\left[D(\tau)/D(\tau_i)\right]$ as a time variable, and to combine $\delta$ and $\theta$ into a two-component field
\begin{equation}
  \psi_a(\vec{k},\eta) \equiv \left(\delta(\vec{k},\eta),\; -\frac{\theta(\vec{k},\eta)}{\mathcal{H}f}\right).
\end{equation}
If we introduce the constant matrix
\begin{equation}
  \Omega_{ab} = \begin{pmatrix} 0 & -1 \\ -\tfrac{3}{2} & \tfrac{1}{2} \end{pmatrix},
\end{equation}
valid in the approximation $\Omega_m/f^2 \approx 1$, the fluid equations may be recast as
\begin{equation}
  \partial_\eta\,\psi_a(\vec{k},\eta) + \Omega_{ab}\,\psi_b(\vec{k},\eta)
  = \int d^3k_1\,d^3k_2\;\gamma_{abc}(\vec{k},\vec{k}_1,\vec{k}_2)\,\psi_b(\vec{k}_1,\eta)\,\psi_c(\vec{k}_2,\eta),
\end{equation}
and the vertex $\gamma_{abc}$, symmetric under simultaneous exchange of its last two index-wavevector pairs, only has nonzero entries
\begin{equation}
  \gamma_{121}(\vec{k},\vec{k}_1,\vec{k}_2) = \delta_D(\vec{k}-\vec{k}_1-\vec{k}_2)\,\frac{\alpha(\vec{k}_1,\vec{k}_2)}{2}
  \qquad\text{and}\qquad
  \gamma_{222}(\vec{k},\vec{k}_1,\vec{k}_2) = \delta_D(\vec{k}-\vec{k}_1-\vec{k}_2)\,\beta(\vec{k}_1,\vec{k}_2).
\end{equation}
The initial fields at time $\eta = 0$ are denoted
\begin{equation}
  \phi_a(\vec{k}) \equiv \psi_a(\vec{k},\eta=0),
\end{equation}
and, for growing-mode initial conditions, the linear theory solution is simply $\psi_a(\vec{k},\eta) = e^{\eta}\,\phi_a(\vec{k})$.
A.1 Beyond linear order
Standard perturbation theory (hereafter SPT; Peebles (1980); Juszkiewicz (1981); Vishniac (1983); Goroff et al. (1986); Makino et al. (1992); Jain and Bertschinger (1994)) defines a systematic series solution to the fluid equations (1) in powers of the initial density contrast $\delta_i$ (or equivalently in powers of the current linearly evolved density contrast $\delta_0$). In an Einstein-de Sitter universe, where $D = a$ and $f = 1$, the expansion may be written as
\begin{equation}
  \delta(\vec{k},\tau) = \sum_{n=1}^{\infty} a^n(\tau)\,\delta_n(\vec{k}), \qquad
  \theta(\vec{k},\tau) = -\mathcal{H}\sum_{n=1}^{\infty} a^n(\tau)\,\theta_n(\vec{k}),
\end{equation}
where $\delta_n$ and $\theta_n$ are time-independent mode-coupling integrals over powers of the initial density field:
\begin{align}
  \delta_n(\vec{k}) &= \int d^3q_1\cdots d^3q_n\;\delta_D(\vec{k}-\vec{q}_1-\cdots-\vec{q}_n)\,F_n(\vec{q}_1,\ldots,\vec{q}_n)\,\delta_0(\vec{q}_1)\cdots\delta_0(\vec{q}_n), \\
  \theta_n(\vec{k}) &= \int d^3q_1\cdots d^3q_n\;\delta_D(\vec{k}-\vec{q}_1-\cdots-\vec{q}_n)\,G_n(\vec{q}_1,\ldots,\vec{q}_n)\,\delta_0(\vec{q}_1)\cdots\delta_0(\vec{q}_n),
\end{align}
where $F_1 = G_1 = 1$, and the higher-order kernels $F_n$ and $G_n$ are built from the mode-coupling functions $\alpha$ and $\beta$ via the standard recursion relations; for example, the symmetrized second-order kernels are
\begin{equation}
  F_2(\vec{q}_1,\vec{q}_2) = \frac{5}{7} + \frac{\hat{q}_1\cdot\hat{q}_2}{2}\left(\frac{q_1}{q_2}+\frac{q_2}{q_1}\right) + \frac{2}{7}(\hat{q}_1\cdot\hat{q}_2)^2, \qquad
  G_2(\vec{q}_1,\vec{q}_2) = \frac{3}{7} + \frac{\hat{q}_1\cdot\hat{q}_2}{2}\left(\frac{q_1}{q_2}+\frac{q_2}{q_1}\right) + \frac{4}{7}(\hat{q}_1\cdot\hat{q}_2)^2.
\end{equation}
While the Einstein-de Sitter approximation is convenient, it is not necessary Takahashi (2008). However, we have confirmed that an accurate approximation is to substitute the growth factor $D(\tau)$ for $a(\tau)$,
\begin{equation}
  \delta(\vec{k},\tau) = \sum_{n=1}^{\infty} D^n(\tau)\,\delta_n(\vec{k}), \qquad
  \theta(\vec{k},\tau) = -\mathcal{H}f\sum_{n=1}^{\infty} D^n(\tau)\,\theta_n(\vec{k}),
\end{equation}
with the same mode-coupling integrals as above for $\delta_n$ and $\theta_n$. The validity of this approximation is ultimately traced to the fact that the ratio $\Omega_m/f^2$ is very nearly unity over the entire lifetime of the universe for $\Lambda$CDM cosmologies, since $f \approx \Omega_m^{5/9}$ Bernardeau et al. (2002).
To compute statistical observables it is convenient to introduce diagrammatic rules for keeping track of the various terms in the perturbation series Goroff et al. (1986). The function $\delta_n(\vec{k})$ (or $\theta_n(\vec{k})$) may be represented as in Figure 9, where the open circles denote factors of the initial field $\delta_0$, and the vertex denotes a momentum-conserving integral of $F_n$ (or $G_n$) over the intermediate wavevectors $\vec{q}_i$. Algebraically, the contribution $P_m(k)$ at order $m$ in the initial power spectrum is obtained by isolating all terms of order $\delta_0^{2m}$ from the ensemble average $\langle\delta(\vec{k})\,\delta(\vec{k}')\rangle$, i.e.
\begin{equation}
  \delta_D(\vec{k}+\vec{k}')\,P_m(k) = \sum_{i+j=2m}\langle\delta_i(\vec{k})\,\delta_j(\vec{k}')\rangle.
\end{equation}
The quantity $\langle\delta_i(\vec{k})\,\delta_j(\vec{k}')\rangle$ may be represented diagrammatically by “multiplying” the diagrams for $\delta_i$ and $\delta_j$. Since the initial field $\delta_0$ is Gaussian, ensemble averages of powers of $\delta_0$ may be expanded in terms of the 2-point function according to Wick’s theorem. Then the product of the diagrams for $\delta_i$ and $\delta_j$ is given by summing over all possible pairings of their open circles, where open circles are paired according to the rule
\begin{equation}
  \langle\delta_0(\vec{q})\,\delta_0(\vec{q}')\rangle = \delta_D(\vec{q}+\vec{q}')\,P_0(q),
\end{equation}
with the additional understanding that any diagram containing a tadpole (a fragment connected to the rest of the diagram by a single edge) vanishes identically.
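The Wick combinatorics invoked here reduce to enumerating perfect pairings; the toy routine below (ours, purely illustrative) makes the contraction counting explicit, reproducing the $(2n-1)!!$ terms for $2n$ Gaussian fields:

```python
def pairings(fields):
    """Yield every perfect pairing (Wick contraction) of an even-length list."""
    if not fields:
        yield []
        return
    first, rest = fields[0], fields[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in pairings(remaining):
            yield [(first, partner)] + sub

# <delta_0^4> decomposes into 3 products of 2-point functions,
# <delta_0^6> into 15, <delta_0^8> into 105: (2n-1)!! pairings of 2n fields.
```

In the diagrammatic language each pairing is one way of joining open circles, and the tadpole rule above then prunes the pairings that leave a fragment attached by a single edge.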
As an example we show in Figure 10 how to obtain the 2nd-order contribution $P_2(k)$. Notice that after invoking momentum conservation at the vertices and translational invariance of the 2-point function, only a single wavevector remains to be integrated over. In general, all diagrams contributing to $P_m(k)$ contain $m-1$ loops, requiring integration over $m-1$ independent wavevectors. For this reason we often classify power spectrum terms by their number of loops rather than their “order,” which is a potentially ambiguous concept.
With this expansion, statistical observables may be computed straightforwardly in SPT to any fixed order. For example, the first correction to the matter power spectrum (second order in the initial power spectrum, fourth order in the initial density contrast, or 1-loop in the diagrammatic idiom) is given by
\begin{equation}
  P_{\rm 1\text{-}loop}(k) = P_{22}(k) + 2P_{13}(k),
\end{equation}
where $P_{\rm lin}(k) = D^2(\tau)\,P_0(k)$ is the linear power spectrum and Makino et al. (1992)
\begin{align}
  P_{22}(k) &= 2\int d^3q\,\left[F_2(\vec{q},\vec{k}-\vec{q})\right]^2 P_{\rm lin}(q)\,P_{\rm lin}(|\vec{k}-\vec{q}|), \\
  P_{13}(k) &= 3\,P_{\rm lin}(k)\int d^3q\;F_3(\vec{q},-\vec{q},\vec{k})\,P_{\rm lin}(q).
\end{align}
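To illustrate how such a mode-coupling integral is evaluated in practice, the sketch below computes a $P_{22}$-type term by brute-force 2D quadrature for a toy spectrum $P(q) = q\,e^{-q^2}$; the spectrum, the integration grids, and the normalization convention $\langle\delta\delta\rangle = \delta_D P$ are our illustrative choices, not the paper's:

```python
import numpy as np

def F2(q1, q2):
    """Symmetrized second-order SPT kernel for 3-vectors q1, q2."""
    r1, r2 = np.linalg.norm(q1), np.linalg.norm(q2)
    mu = np.dot(q1, q2) / (r1 * r2)
    return 5.0 / 7.0 + 0.5 * mu * (r1 / r2 + r2 / r1) + 2.0 / 7.0 * mu**2

def P_toy(q):
    """Toy linear spectrum with a UV cutoff so the loop integral converges."""
    return q * np.exp(-q * q)

def P22(k, nq=120, nmu=120):
    """P22(k) = 2 int d^3q F2(q, k-q)^2 P(q) P(|k-q|)
              = 4 pi int q^2 dq int dmu F2^2 P(q) P(|k-q|),
    via a midpoint rule (midpoints avoid the integrable |k-q| -> 0 point)."""
    kvec = np.array([0.0, 0.0, k])
    q_edges = np.geomspace(1e-3, 8.0, nq + 1)
    qs = np.sqrt(q_edges[:-1] * q_edges[1:])        # log-midpoints in q
    dqs = np.diff(q_edges)
    mu_edges = np.linspace(-1.0, 1.0, nmu + 1)
    mus = 0.5 * (mu_edges[:-1] + mu_edges[1:])      # midpoints in mu
    dmu = 2.0 / nmu
    total = 0.0
    for q, dq in zip(qs, dqs):
        for mu in mus:
            qvec = q * np.array([np.sqrt(1.0 - mu * mu), 0.0, mu])
            p = kvec - qvec
            total += (q * q * dq * dmu * F2(qvec, p)**2
                      * P_toy(q) * P_toy(np.linalg.norm(p)))
    return 4.0 * np.pi * total
```

Even this toy version shows the practical issues discussed in the text: the integrand is mildly singular where $\vec{q} \to \vec{k}$, the grids must extend far enough into the UV for the chosen spectrum, and each additional loop multiplies the dimensionality of the integral.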