Precision modelling of the matter power spectrum in a Planck-like Universe
We use a new suite of high-resolution $N$-body simulations to improve the calibration of the nonlinear matter power spectrum in the $\Lambda$CDM model, specifically centred on the Planck mission best-fit parameters. On large scales, our semi-analytic model evaluates a slightly modified version of the 2-loop calculation from the Multi-point Propagator Theory of Bernardeau et al. (2012) to predict the matter clustering. On smaller scales, we transition to a smoothing-spline-fit model that characterises the differences between the Takahashi et al. (2012) recalibration of halofit (hereafter halofit2012) and our simulations. We then use an additional suite of high-resolution simulations to explore the response of the power spectrum to variations in the cosmological parameters around the Planck best fit. In particular, we examine variations in: the time evolution of the dark energy equation of state ($w_0$, $w_a$); the matter density $\Omega_{\rm m}$; the physical densities of cold dark matter and baryons ($\omega_{\rm c}$, $\omega_{\rm b}$); and the primordial power spectrum amplitude $A_{\rm s}$, spectral index $n_{\rm s}$, and its running $\alpha_{\rm s}$. We use these simulations to construct a set of correction functions, which we use to further improve halofit's dependence on cosmological parameters. Our newly calibrated model reproduces all of our data to high precision. On including the possibility of various systematic errors, such as choice of $N$-body code, resolution, etc., and based on inspection of the scaled second-order derivatives, we estimate the accuracy of the model over a hyper-cube of parameter variations around the fiducial model, up to the maximum wavenumber and redshift probed by our simulations. Outside of this range the model smoothly reverts to halofit2012. We release all of our numerical power spectra data along with the C-code NGenHalofit at: https://CosmologyCode@bitbucket.org/ngenhalofitteam/ngenhalofitpublic.git.
Keywords: cosmology: large-scale structure of Universe.
The power spectrum of matter fluctuations contains a wealth of information about the cosmological model and the initial distribution of density perturbations in the early Universe. Its accurate measurement, and that of its evolution, is therefore one of the main goals of modern cosmology. In recent years a number of different methods have been devised to extract this information: galaxy clustering (e.g. Davis & Peebles, 1983; Feldman, Kaiser & Peacock, 1994), cluster counts (White et al., 1993; Lima & Hu, 2004), shear-shear correlation functions (e.g. Miralda-Escude, 1991; Kaiser, 1992), correlations of absorption features in the Lyman-alpha forest (Croft et al., 1998), correlations in the 21cm emission from neutral hydrogen (Loeb & Zaldarriaga, 2004), etc. Each of these observables has a number of problematic modelling issues; however, one commonality is the need to understand the matter density power spectrum in the nonlinear regime. Currently, following this to high accuracy over a wide range of scales can only be achieved using $N$-body methods. However, computation of the nonlinear power spectrum for all of the cosmological models of interest is currently prohibitively expensive.
On large scales, before shell-crossing, one can use Eulerian perturbative methods to follow the evolution of density and velocity divergence perturbations into the weakly nonlinear regime (Juszkiewicz, 1981; Vishniac, 1983; Goroff et al., 1986; Makino, Sasaki & Suto, 1992; Jain & Bertschinger, 1994). Until relatively recently such methods (known as Standard Perturbation Theory, hereafter SPT) were hindered by the fact that the so-called 'loop corrections' would result from the cancellation of large positive and negative higher-order terms to produce a small correction to the linear spectrum. Extending this to higher orders would result in even more fine cancellations. However, in the last decade significant progress was made on this problem through the development of renormalised perturbation theory (Crocce & Scoccimarro, 2006a,b, hereafter RPT) and the multi-point propagator approach (Bernardeau, Crocce & Scoccimarro, 2008, hereafter MPT). In this framework certain diagrams in the perturbative series could be summed to include an infinite number of terms, leaving a sequence that involved the summation of positive terms of decreasing importance on a given quasi-linear scale. In this approach the power spectrum can be modelled to subpercent accuracy on quasi-linear scales. Currently, work is ongoing to develop an effective field theory approach to modelling the nonlinear evolution of the cosmic fields (Carrasco, Hertzberg & Senatore, 2012). This treats the coarse-grained phase space perturbatively, with the small-scale smoothed components of the phase space generating an effective sound speed, and thus requiring the modelling of a stress tensor. It has been claimed by Carrasco et al. (2014) that this method can yield predictions accurate to better than 1% well into the quasi-linear regime. However, recent work by Baldauf, Mercolli & Zaldarriaga (2015) suggests that, owing to the scale-dependence of the effective sound-speed parameter, the gains are likely to be more limited in scale.
Ultimately, it is likely that the perturbative approaches will be limited to the scales associated with shell-crossing.
To probe deeper into the nonlinear regime, various semi-analytic methods have also been developed: Hamilton et al. (1991) developed a scaling ansatz for the integrated correlation function, which was extended to the power spectrum by Peacock & Dodds (1994, 1996). In this approach it was assumed that the power spectrum on a given scale was a function of the linear spectrum on a remapped scale, the mapping being based on continuity arguments. At the turn of the millennium the halo model was developed by various authors (Seljak, 2000; Peacock & Smith, 2000; Ma & Fry, 2000), in which the large-scale contribution arises from the correlations between different haloes, and the small-scale one from the correlations of dark matter particles within the same halo. Elements of these ideas and SPT were melded together in the halofit code developed in Smith et al. (2003). This method was further improved by Takahashi et al. (2012), who recalibrated the fitting function and introduced an explicit dependence on the dark energy equation of state parameter. The claimed precision of this recalibration is of the order of several percent on quasi-linear scales, degrading to 10% on smaller scales.
More recently, several empirical approaches have been developed, largely inspired by techniques borrowed from the field of machine learning. An example of such an approach is the neural network model of Agarwal et al. (2012, 2014). They mapped a six-dimensional cosmological parameter space using more than 6380 simulations and claim that their PkANN code can generate power spectra with better than 1% precision on quasi-linear scales over a range of redshifts. However, the simulations are, relatively speaking, of low resolution and of small volume by modern standards. The small boxes mean that they do not accurately capture the large-scale nonlinearities associated with the baryon acoustic oscillations (Smith, Scoccimarro & Sheth, 2007; Crocce & Scoccimarro, 2008). Furthermore, as was demonstrated in Heitmann et al. (2010) and more recently Schneider et al. (2016), such low-resolution runs are unlikely to be able to capture the small-scale structure with the accuracy required for lensing surveys. This unfortunately reduces the practical utility of PkANN.
Another impressive development is the Coyote Universe project, which has led to the construction of various emulators, in particular the CosmicEMU code for predicting nonlinear matter power spectra (Heitmann et al., 2009, 2010; Lawrence et al., 2010; Heitmann et al., 2014). In Heitmann et al. (2010) it was claimed that the CosmicEMU code captured the matter power spectrum to better than 1% precision over the scales and redshifts considered. This was upgraded in Heitmann et al. (2014) to include variations in the Hubble parameter and an extension to smaller scales.
Lastly, another important contribution is the work of Mead et al. (2015, 2016), who take yet another approach to the problem. In their work they assume that the halo model is broadly correct at generating nonlinear power spectra. However, they argue that it is wrong in detail, and they introduce several phenomenological modifications which they argue account for physics missing from the model: first, BAO suppression is introduced through a Gaussian damping à la RPT; second, a graceful vanishing of the 1-halo term on large scales guarantees the recovery of linear theory; third, halo bloating accounts for the fact that not all haloes are spherical with NFW profiles (Navarro, Frenk & White, 1997). These modifications introduce two new free parameters. Using the node points of the CosmicEMU code to calibrate these, they find that their HMCODE can recover power spectra at the level of a few percent. The main advantage of this approach is that it enables physically motivated extrapolation beyond the constrictive grids of models required by the machine learning codes. It also enables flexibility for the inclusion of new physics, such as baryonic effects and modifications to the dark matter model and gravity.
The aim of this paper is to examine a number of these tools and confront them with a new series of relatively high-resolution $N$-body runs that are centred around the Planck CMB mission's best-fit cosmological parameter set (Planck Collaboration et al., 2014). Furthermore, we aim to build a power spectrum tool that enables accurate and precise predictions of the power spectra in this region of parameter space. The requirements of the method are that: it must use an accurate Einstein-Boltzmann-solver linear input power spectrum, such as can be provided by CAMB (Lewis, Challinor & Lasenby, 2000); it must evaluate a state-of-the-art perturbation theory scheme to generate predictions on large scales that are suitable for modelling the evolution of the BAO; it must interpolate to state-of-the-art $N$-body simulations on small scales; it must gracefully return to one of the lower-precision methods outside of the Planck parameter region; and it must be fast to evaluate over the required ranges of scale and redshift. One further requirement is that it must be able to describe a time-evolving dark energy ($w$CDM) parameter space, with the inclusion of a potential running of the primordial power spectral index.
The paper breaks down as follows: In §2 we provide an overview of key theoretical concepts, define the cosmological framework and identify 8 cosmological parameters that we wish to constrain from data. In §3 we describe the suite of cosmological simulations and provide an overview of their generation. In §4 we describe how we estimate the power spectra and construct a composite fiducial spectrum from various runs. We also validate the initial conditions. In §5 we make a comparison between our spectra and parameter-free predictions from the 2-loop MPT. In §6 we compare our fiducial runs with the predictions from various semi-analytic and fitting function methods. In §7 we present our new semi-analytic model and show how well it can model results from our fiducial set of runs. In §8 we explore the dependence of the nonlinear power spectrum on the cosmological parameters and in §9 we build the cosmology dependent corrections for our new model and test them. Finally, in §10 we summarise our findings, conclude and discuss future improvements to the method.
2 Theoretical background
2.1 The power spectrum
The density field of matter at spatial position $\mathbf{x}$ and at time $t$ is written as $\rho(\mathbf{x},t)$. We denote the spatial mean of this field as $\bar{\rho}(t)$. We are mostly interested in the matter density contrast (or overdensity field, sometimes simply referred to as the density field):
$$\delta(\mathbf{x},t) \equiv \frac{\rho(\mathbf{x},t)-\bar{\rho}(t)}{\bar{\rho}(t)},$$
where the above field is by definition mean zero, i.e. $\langle\delta\rangle=0$, where angled brackets denote an ensemble average at fixed coordinate time. A complete statistical description of the $\delta$-field can be obtained by determining the $N$-point correlation functions (Scherrer & Bertschinger, 1991), where for example $\xi(\mathbf{x}_1,\mathbf{x}_2)\equiv\langle\delta(\mathbf{x}_1)\delta(\mathbf{x}_2)\rangle$ is the two-point correlation function. For a number of reasons we will prefer to work in Fourier space, with the transform convention:
$$\delta(\mathbf{k},t) = \int_V {\rm d}^3x\,\delta(\mathbf{x},t)\,{\rm e}^{-i\mathbf{k}\cdot\mathbf{x}},$$
where $V$ is a sufficiently large volume that the coherence length of the correlators is much smaller than $V^{1/3}$. On assuming that the two-point correlation function is statistically homogeneous (i.e. $\xi(\mathbf{x}_1,\mathbf{x}_2)=\xi(\mathbf{x}_1-\mathbf{x}_2)$), one can easily show that the Fourier-space dual of the correlation function is the power spectrum:
$$\langle\delta(\mathbf{k})\delta(\mathbf{k}')\rangle = (2\pi)^3\,\delta^{\rm D}(\mathbf{k}+\mathbf{k}')\,P(k),$$
where $\delta^{\rm D}$ denotes the Dirac delta function and the power spectrum is given by:
$$P(k) = \int {\rm d}^3r\,\xi(\mathbf{r})\,{\rm e}^{-i\mathbf{k}\cdot\mathbf{r}}.$$
For the case of statistically isotropic fields, the power spectrum depends only on the magnitude of $\mathbf{k}$. For the case of a Gaussian random field, all of the statistical information is fully captured by the power spectrum. This makes the power spectrum the lowest-order clustering statistic of interest for cosmology, and it is the main subject of this paper. In real surveys it is usually not measured directly, but it can be extracted modulo reasonable modelling assumptions.
2.2 Cosmological model and fiducial parameters:
The various combinations of large-scale structure data (Alam et al., 2017), weak lensing data (Köhlinger et al., 2017; DES Collaboration et al., 2017), CMB data (Planck Collaboration et al., 2014) and Type Ia supernova data (Betoule et al., 2014) have identified the flat, time-evolving dark energy dominated cold dark matter model (hereafter $w$CDM) as the cosmological model of interest. One of the major challenges for modern cosmology is to accurately determine the best-fit parameters of this model. The flat $w$CDM model can be characterised by 8 parameters:
$$\boldsymbol{\theta} = \{w_0,\,w_a,\,\Omega_{\rm de},\,\omega_{\rm c},\,\omega_{\rm b},\,A_{\rm s},\,n_{\rm s},\,\alpha_{\rm s}\},$$
where $w_0$ and $w_a$ are the parameters that govern the time evolution of the equation of state for dark energy, assuming that the equation of state can be parameterised in the form:
$$w(a) = w_0 + w_a\,(1-a).$$
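This linear-in-$a$ parameterisation (often called CPL) is simple to sketch directly; in the snippet below the function name is our own, and $(w_0, w_a) = (-1, 0)$ recovers a cosmological constant:

```python
def w_de(a, w0=-1.0, wa=0.0):
    """CPL equation of state: w(a) = w0 + wa * (1 - a).

    a is the scale factor (a = 1 today); the default (w0, wa) = (-1, 0)
    corresponds to a cosmological constant."""
    return w0 + wa * (1.0 - a)
```

At early times ($a \to 0$) the equation of state tends to $w_0 + w_a$, while today $w(1) = w_0$.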
$\Omega_{\rm de}$ denotes the present-day energy density of dark energy; $\omega_{\rm c}\equiv\Omega_{\rm c}h^2$ and $\omega_{\rm b}\equiv\Omega_{\rm b}h^2$ are the physical densities in cold dark matter and baryons today, $h$ being the dimensionless Hubble parameter. Note that owing to the assumed flatness, other parameters can be derived from the above set: the matter density parameter is $\Omega_{\rm m} = 1-\Omega_{\rm de}$, and the dimensionless Hubble parameter is $h = \sqrt{(\omega_{\rm c}+\omega_{\rm b})/\Omega_{\rm m}}$ (neglecting the small neutrino and radiation contributions).
The matter power spectrum is initialised by specifying the primordial power spectrum of curvature perturbations, and we make use of the following form (Komatsu et al., 2009; Planck Collaboration et al., 2014):
$$\mathcal{P}_{\mathcal{R}}(k) = A_{\rm s}\left(\frac{k}{k_0}\right)^{n_{\rm s}-1+\frac{1}{2}\alpha_{\rm s}\ln(k/k_0)},$$
where $A_{\rm s}$ is the primordial amplitude, and $n_{\rm s}$ and $\alpha_{\rm s}$ are the spectral index and the running of the spectral index, all of which are determined at the pivot scale $k_0$. (For a discussion of the relation between the primordial curvature power spectrum and the matter power spectrum see Smith & Simon 2018.) The running of the spectral index can also be equivalently written as:
$$\alpha_{\rm s} \equiv \frac{{\rm d}n_{\rm s}}{{\rm d}\ln k},$$
and is interesting to include, since placing constraints on this may help constrain inflationary models (Vieira, Byrnes & Lewis, 2017).
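As a concrete check of the form above, the following minimal sketch evaluates the primordial spectrum with a running index (the convention, with half the running multiplying the logarithm, matches the expression above; the pivot value in the default argument is only a common choice and is an assumption here):

```python
import numpy as np

def primordial_power(k, A_s, n_s, alpha_s=0.0, k0=0.05):
    """Primordial curvature power spectrum with a running spectral index:
    P_R(k) = A_s * (k/k0)^(n_s - 1 + 0.5*alpha_s*ln(k/k0)).
    k and the pivot k0 must carry the same units (e.g. 1/Mpc)."""
    x = np.log(k / k0)
    return A_s * np.exp((n_s - 1.0) * x + 0.5 * alpha_s * x * x)
```

With `alpha_s = 0` this reduces to a pure power law, and $\mathcal{P}_{\mathcal{R}}(k_0) = A_{\rm s}$ by construction.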
In this work we will examine the dependence of the nonlinear matter power spectrum on these parameters and also develop a semi-empirical approach that will improve the accuracy of current modelling. In what follows we will assume as our fiducial model parameters that are close to the best-fit set obtained from Planck Collaboration et al. (2014); the exact values are listed in Table 2. Before continuing, we note that we are not aware of any recent study that has explored the dependence of the nonlinear power spectrum on the running $\alpha_{\rm s}$ using $N$-body simulations; in what follows we shall present results for this parameter as well.
3 The Däemmerung Simulations
We have generated a series of $N$-body simulations to explore cosmic structure formation in the nonlinear regime.
3.1 Overview of fiducial runs
For our fiducial simulations we have generated two types of run: small-box runs that have high resolution and a large-volume run to capture large-scale nonlinearities. Each of the simulations was generated as follows: first, we adopted a fiducial cosmological model, and for this we chose cosmological parameters that were consistent with the Planck CMB analysis best fit (Planck Collaboration et al., 2014). The exact values that we used were presented in §2.2 and are also repeated in the first row of Table 2. We then ran the Einstein-Boltzmann solver code CAMB to generate the linear theory matter power spectrum for the fiducial model. This was used as the input linear power spectrum for our upgrade of the publicly available 2LPT C-code (http://cosmo.nyu.edu/roman/2LPT/) developed in Scoccimarro et al. (2012) – our upgrade makes various modifications to the original code, in particular the use of the FFTW3 MPI parallel Fourier transform libraries, and the code has been tested for large particle loads. The linear power spectrum was rescaled back to the start redshift using the appropriate linear growth factor (for further details as to how we calculate this for all our models see §3.3). The $N$-body particles were distributed onto a cubical lattice and were then displaced off their lattice points using the 2LPT recipe.
The $N$-body simulations were then evolved under gravity in an expanding-universe framework using the OpenMPI-parallel Tree-PM code Gadget-3, developed by Springel (2005) and Angulo et al. (2012) and used for the generation of the Millennium XXL Virgo run. The upgraded features of the code meant that halo and sub-halo catalogues, along with various statistical measures including the matter power spectra, were calculated 'on-the-fly'. Each of the small-box runs followed dark matter particles in a comoving box of the size listed in Table 2, yielding the quoted mass per particle. The large-box run also followed dark matter particles, but in a much larger comoving box, yielding a correspondingly larger mass per particle.
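The particle masses quoted in Table 2 follow from the box size, particle number and matter density alone; the one-liner below reproduces the bookkeeping (the numerical arguments in the usage note are illustrative placeholders, not the actual run parameters):

```python
# Critical density today in units of h^2 M_sun / Mpc^3
RHO_CRIT0 = 2.775e11

def particle_mass(omega_m, box_size, n_particles):
    """Mass per N-body particle in M_sun/h, for a periodic box of
    comoving side box_size [Mpc/h] containing n_particles particles:
    m_p = Omega_m * rho_crit,0 * L^3 / N."""
    return omega_m * RHO_CRIT0 * box_size**3 / n_particles
```

For example, doubling the box side at fixed particle number raises the particle mass by a factor of eight, which is why large-volume companion runs are necessarily of lower mass resolution.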
We output 60 snapshots between the initial and final epochs, with a hybrid linear-logarithmic output spacing that matched the Millennium Run I simulation (Springel et al., 2005). The simulations were run on the SuperMUC machine at the Leibniz-Rechenzentrum in Garching and also on the MPG Hydra cluster in Garching. The full particle data storage per run was of the order of 20 TB. Various properties of interest for the runs are presented in Table 2. For the small-box runs we adopted this particle mass resolution and box size because they would lead to converged, percent-level-precision results for the power spectrum on the scales of interest (Schneider et al., 2016), and because they would also enable accurate tracking of the substructures that are required for semi-analytic galaxy formation modelling.
For completeness we have also included in Table 2 a list of the Gadget-3 code parameter settings that we used. As was shown in Reed et al. (2013) and Smith et al. (2014), a careful choice of these parameters is required to keep numerical errors below the percent level. Here, in order to increase integration accuracy, we have chosen a relatively small timestep, set through the code's integration-accuracy parameter. We have used this for all runs, and a typical small-box run required several thousand timesteps to complete.
3.2 Overview of cosmology variations
In order to explore the dependence of the nonlinear structure formation on the cosmological parameters we have generated a further set of 16 small-box simulations. Rather than sample our 8-dimensional parameter space for maximum coverage, as has been done for example in the Coyote Universe Project, where a Latin hypercube approach was adopted, we instead focus on the idea of generating a Taylor expansion model around our fiducial point, but relative to some preexisting theoretical model. For our 8 parameters this can be done to good accuracy by running an extra two simulations for each parameter: one that represents a small positive increase in the parameter and another that gives the response for a negative change. The exact values of the cosmological parameters that we have used for each variation are listed in Table 2.
The procedure for generating each of the variational simulations was exactly as described for the case of the small-box fiducial runs. In order to minimise large-scale cosmic variance between runs we have matched the Fourier mode phase distribution of each run to those of the first fiducial small-box run.
As stated, all of the variation simulations were performed using the standard Gadget-3 code, with the exception of the runs that explore variations in the time evolution of the dark energy equation of state $w(a)$. In order to perform these runs we made the following modifications to the Gadget-3 code. First, we made a global replacement of the constant equation-of-state parameter throughout the code. Second, we introduced two new free parameters w0 and wa into the structure global_data_all_processors and the snapshot header structure io_header, ensuring that the total byte size remained the same. Third, we made the following global replacement for the Hubble parameter, contained in the darkenergy.c file and accessed through double INLINE_FUNC hubble_function(double a):
$$H(a) = H_0\sqrt{\Omega_{\rm m}a^{-3} + \Omega_{K}a^{-2} + \Omega_{\rm de}\,f_{\rm de}(a)},$$
and where we have defined
$$f_{\rm de}(a) \equiv \exp\left[3\int_a^1 {\rm d}a'\,\frac{1+w(a')}{a'}\right] = a^{-3(1+w_0+w_a)}\,{\rm e}^{-3w_a(1-a)},$$
and $\Omega_K = 1-\Omega_{\rm m}-\Omega_{\rm de}$ is the curvature density parameter. Finally, some additional small adjustments were also necessary in the following parts of the code: io.c, begrun.c and read_ic.c, mainly for I/O of the new parameters.
3.3 Linear growth factor
In generating the initial particle distributions using the 2LPT algorithm, we need to evolve the linear theory power spectrum (from CAMB) back to the start redshift, which was the same for all our runs. We do this by computing the linear growth factor, and for all of the models that we consider this can be done as follows.
At early times the matter density perturbation can be expressed as:
$$\delta(\mathbf{x},a) = \frac{D(a)}{D(a_{\rm i})}\,\delta(\mathbf{x},a_{\rm i}),$$
where $D(a)$ gives the time evolution of the density perturbation in the growing mode and $a_{\rm i}$ gives the normalised expansion factor at the initial time. Under the assumption that CDM and baryon fluctuations are unbiased with respect to each other (see Somogyi & Smith, 2010, for a discussion), this function can be obtained by solving the second-order ordinary differential equation (ODE) that results for a single collisionless fluid (Linder & Jenkins, 2003):
$$D'' + A(a)\,D' + B(a)\,D = 0,$$
where primes indicate derivatives with respect to the expansion factor and, for the case of our time-evolving dark energy model given by Eq. (7), the time-dependent coefficients are:
$$A(a) = \frac{3}{a} + \frac{{\rm d}\ln E(a)}{{\rm d}a}\ ;\qquad B(a) = -\frac{3\,\Omega_{\rm m}}{2\,a^5\,E^2(a)}\ ;\qquad E^2(a) = \Omega_{\rm m}a^{-3} + \Omega_K a^{-2} + \Omega_{\rm de}\,f_{\rm de}(a).$$
In Appendix A we show how one can use a 4th-order Runge-Kutta algorithm to solve the above differential equation to obtain the growing solution and, as a by-product, the logarithmic growth rate $f \equiv {\rm d}\ln D/{\rm d}\ln a$. The appendix shows that the relative error in the solution is negligible for all times of interest.
Figure 1 compares the time evolution of the linear growth factor for the four dark energy models listed in Table 2 with that of our fiducial model. The left panel shows the results for the two constant-$w_0$ models and the right panel the same but for the two evolving-$w_a$ models. All of the growth functions have been normalised to have the same amplitude at the initial time. The solid lines denote the results from the 4th-order Runge-Kutta method. We see that the variations in $w_0$ and $w_a$ that we consider lead to percent-level variations in the growth, and roughly double that for their impact on the power spectrum, since $P \propto D^2$.
The figure also shows the growth functions one would obtain on assuming that the solution of Heath (1977) holds for the case of pressured fluids (for more details and discussion see Appendix A). As the figure clearly shows, the growth functions from the Heath approach are inaccurate at the level of several percent for the dark energy models of interest, and so should not be used for model building and predictions where high accuracy is required.
Lastly, the figure also shows the result of evaluating the approximate expression (dashed lines in Fig. 1):
$$f(a) \approx \left[\Omega_{\rm m}(a)\right]^{\gamma}\ ;\qquad D(a) = a\,\exp\left[\int_0^a \frac{{\rm d}a'}{a'}\left(\left[\Omega_{\rm m}(a')\right]^{\gamma}-1\right)\right],$$
where $\gamma \approx 0.55 + 0.05\left[1+w(z=1)\right]$ (Linder, 2005). This provides an excellent description (of the order of 0.1% precision) of the variations in the growth for the dark energy models considered. However, since this also involves a numerical integral, we recommend the reader code up the Runge-Kutta solution, since it is more general and flexible.
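The Runge-Kutta solution described above can be sketched as follows. This is our own minimal implementation for a flat $w$CDM background with curvature and radiation neglected, integrating in $x = \ln a$, and is not the code used in the paper:

```python
import math

def growth_factor(a, om=0.3, w0=-1.0, wa=0.0, a_init=1e-4, nstep=2000):
    """Linear growth factor D(a), normalised so that D ~ a deep in
    matter domination, for flat wCDM with w(a) = w0 + wa*(1-a).
    Solves D'' = -(2 + dlnE/dlna) D' + (3/2) Omega_m(a) D in x = ln a
    with a classical 4th-order Runge-Kutta integrator."""
    ode = 1.0 - om  # flatness: Omega_de = 1 - Omega_m

    def e2(aa):  # E^2(a) = H^2(a)/H0^2
        fde = aa ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - aa))
        return om * aa ** -3 + ode * fde

    def derivs(x, y):
        aa = math.exp(x)
        om_a = om * aa ** -3 / e2(aa)          # Omega_m(a)
        w_a = w0 + wa * (1.0 - aa)             # w(a)
        dlne = -1.5 * (om_a + (1.0 + w_a) * (1.0 - om_a))  # dlnE/dlna
        d, g = y                               # g = dD/dlna
        return (g, -(2.0 + dlne) * g + 1.5 * om_a * d)

    x = math.log(a_init)
    h = (math.log(a) - x) / nstep
    y = (a_init, a_init)  # matter-dominated growing mode: D = a, dD/dlna = D
    for _ in range(nstep):
        k1 = derivs(x, y)
        k2 = derivs(x + 0.5 * h, (y[0] + 0.5 * h * k1[0], y[1] + 0.5 * h * k1[1]))
        k3 = derivs(x + 0.5 * h, (y[0] + 0.5 * h * k2[0], y[1] + 0.5 * h * k2[1]))
        k4 = derivs(x + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
        y = (y[0] + h / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             y[1] + h / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        x += h
    return y[0]
```

The final ratio of the two integrated variables would give the growth rate $f = {\rm d}\ln D/{\rm d}\ln a$, which can be compared directly against the $[\Omega_{\rm m}(a)]^{\gamma}$ approximation.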
4 The measured power spectra
4.1 Estimating the power spectrum
For a given realisation of the density field, an estimator for the power in a given Fourier mode is:
$$\hat{P}(\mathbf{k}) \equiv \frac{|\delta(\mathbf{k})|^2}{V},$$
where $V$ is the volume of the simulation. However, the noise in such an estimate is very large, and in order to overcome this one must sum over a set of Fourier wavemodes in a thin $k$-shell. Introducing the binning function $\Theta_i(\mathbf{k}) = 1$ for $|\,|\mathbf{k}|-k_i| < \Delta k/2$ and zero otherwise, the power spectrum estimate averaged over a bin of width $\Delta k$ is given by
$$\hat{P}(k_i) = \frac{1}{N_{k_i}}\sum_{\mathbf{k}}\Theta_i(\mathbf{k})\,\frac{|\delta(\mathbf{k})|^2}{V},$$
The computation of the power spectrum that we employ is embedded in the Gadget-3 code itself, and so makes efficient use of the built-in domain decomposition routines and also the fast MPI-parallel FFTs as implemented by FFTW (Johnson & Frigo, 2008). In brief, the simulation particles are assigned to a cubical FFT mesh using the 'cloud-in-cell' (CIC) mass assignment scheme (note that we reuse the PM force grid for this, so the grid size is specified by PMGRID). This gives the discrete density field convolved with the CIC window. This we Fourier transform using the FFT algorithm to obtain $\delta^{\rm d}(\mathbf{k})$. The CIC scheme is then deconvolved using (Jing, 2005):
$$\delta(\mathbf{k}) = \frac{\delta^{\rm d}(\mathbf{k})}{W_{\rm CIC}(\mathbf{k})}\ ;\qquad W_{\rm CIC}(\mathbf{k}) = \prod_{i=1}^{3}\left[{\rm sinc}\left(\frac{\pi k_i}{2k_{\rm Ny}}\right)\right]^2,$$
where $k_{\rm Ny}$ is the Nyquist frequency of the mesh. The power spectrum of the point-sampled field is then estimated using Eq. (19):
$$\hat{P}^{\rm d}(k) = \frac{1}{N_k}\sum_{\mathbf{k}\in k\text{-shell}}\frac{|\delta(\mathbf{k})|^2}{V},$$
where $N_k$ is the number of Fourier modes in a $k$-space shell.
Following Peebles (1980), the above estimate of the true power spectrum is biased by the addition of the variance from the point-sampling procedure. However, this can be corrected for using:
$$P(k) = \hat{P}^{\rm d}(k) - \frac{1}{\bar{n}},$$
where $P(k)$ is the power spectrum of the underlying continuous field and $\bar{n} = N/V$ is the mean particle density.
In what follows, we use FFT grids with $N_{\rm g}$ cells per dimension, and this sets the minimum and maximum spatial frequencies for a given power spectrum to the fundamental frequency $k_{\rm f} = 2\pi/L$ and the Nyquist frequency $k_{\rm Ny} = \pi N_{\rm g}/L$. In practice, power on length scales smaller than the mesh spacing gets 'aliased' to larger spatial scales (Jing, 2005), and so as a rule of thumb we only use modes well below the Nyquist frequency. The low- and high-$k$ cut-offs for the small- and big-box spectra then follow from their respective box sizes and grid dimensions.
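The full estimation pipeline above (CIC assignment, window deconvolution, shell averaging, shot-noise subtraction) can be sketched in a few dozen lines. The following is a slow, single-process illustration of the same steps, not the MPI estimator embedded in Gadget-3, and binning only up to half the Nyquist frequency is our own conservative placeholder choice:

```python
import numpy as np

def measure_pk(pos, box, ngrid=32, nbins=10):
    """Shell-averaged P(k) from particle positions pos (shape [N, 3], in
    the same units as box). Steps: CIC mass assignment, FFT, deconvolution
    of the CIC window (Jing 2005), shell average, shot-noise subtraction.
    Modes are binned only up to half the Nyquist frequency."""
    npart = len(pos)
    # --- cloud-in-cell assignment onto the mesh ---
    grid = np.zeros((ngrid, ngrid, ngrid))
    s = pos / box * ngrid
    i0 = np.floor(s).astype(int) % ngrid
    d = s - np.floor(s)
    for ox in (0, 1):
        wx = d[:, 0] if ox else 1.0 - d[:, 0]
        for oy in (0, 1):
            wy = d[:, 1] if oy else 1.0 - d[:, 1]
            for oz in (0, 1):
                wz = d[:, 2] if oz else 1.0 - d[:, 2]
                np.add.at(grid, ((i0[:, 0] + ox) % ngrid,
                                 (i0[:, 1] + oy) % ngrid,
                                 (i0[:, 2] + oz) % ngrid), wx * wy * wz)
    delta = grid / grid.mean() - 1.0
    # --- FFT and CIC window deconvolution: W(k) = prod_i sinc^2(k_i/2k_Ny) ---
    dk = np.fft.fftn(delta) / ngrid**3
    f = np.fft.fftfreq(ngrid)                  # = k_i / (2 k_Ny)
    w1d = np.sinc(f) ** 2                      # np.sinc(x) = sin(pi x)/(pi x)
    dk /= w1d[:, None, None] * w1d[None, :, None] * w1d[None, None, :]
    # --- shell-average V<|delta_k|^2> and subtract the shot noise V/N ---
    vol = box ** 3
    kmag = (2.0 * np.pi * ngrid / box) * np.sqrt(
        f[:, None, None] ** 2 + f[None, :, None] ** 2 + f[None, None, :] ** 2)
    edges = np.linspace(2.0 * np.pi / box, 0.5 * np.pi * ngrid / box, nbins + 1)
    idx = np.digitize(kmag.ravel(), edges) - 1
    ok = (idx >= 0) & (idx < nbins)
    counts = np.bincount(idx[ok], minlength=nbins)
    psum = np.bincount(idx[ok], weights=(vol * np.abs(dk) ** 2).ravel()[ok],
                       minlength=nbins)
    ksum = np.bincount(idx[ok], weights=kmag.ravel()[ok], minlength=nbins)
    keep = counts > 0
    return ksum[keep] / counts[keep], psum[keep] / counts[keep] - vol / npart
```

For an unclustered Poisson sample, the shot-subtracted spectrum should scatter around zero, which makes a convenient sanity check of the pipeline.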
4.2 Validation of the initial conditions
In Figure 2 we present the ratios of the initial power spectra for the variational runs with respect to the fiducial model, as generated by our 2LPT code and evolved using Gadget-3 to an early output redshift. Each of the eight panels shows the results for variations in one of the cosmological parameters, with all of the others frozen. The positive/negative variation in each of the cosmological parameters is denoted by the red/blue points. The solid red and blue lines denote the predictions from linear theory as obtained from the CAMB code. One can see that taking the ratio with matched-phase initial conditions means that the cosmic variance has been cancelled on large scales. We also note that the measured data points and linear predictions agree to very high accuracy, validating the initial density spectra for each simulation.
4.3 Evolution of the raw fiducial power spectra
In Figure 4 we show the evolution of the raw matter power spectra measured over several redshifts. In each panel the red points denote the mean of the small-box runs and the error bars are the standard deviation as determined from the 10 realisations. The blue points denote the results from the large-box run, and the shaded blue region shows the predicted 1-$\sigma$ error bar that results from assuming the density field is Gaussianly distributed (see for example Scoccimarro, Zaldarriaga & Hui, 1999; Smith, 2009):
$$\frac{\sigma_P(k)}{P(k)} = \sqrt{\frac{2}{N_k}}\ ;\qquad N_k \approx \frac{V\,4\pi k^2\,\Delta k}{(2\pi)^3},$$
where $N_k$ is the number of Fourier modes in a given $k$-space shell and $\Delta k$ is the width of the shell. The dashed blue lines denote linear theory predictions.
We see that there is very good agreement between the large-box and small-box runs on large and quasi-linear scales. There is an increase in the power of the big-box run relative to the small-box runs close to the Nyquist frequency of the big-box mesh. This can be attributed to the effects of aliasing, which can be mitigated by only considering scales that are a fixed fraction of the Nyquist frequency $k_{\rm Ny} = \pi N_{\rm g}/L$, with $N_{\rm g}$ being the size of the FFT grid.
4.4 Construction of the composite fiducial spectra
Rather than using the raw spectra, in what follows we construct a composite of the large- and small-box fiducial runs. We do this by selecting a partition scale $k_{\rm part}$: we take all of the modes with $k < k_{\rm part}$ from the big-box run and all of the data with $k \geq k_{\rm part}$ from the small-box runs. We expect a small discontinuity at the partition scale, owing to the fact that the large-box run is more than a factor of 200 lower in mass resolution than the small-box runs, which means that on small scales we expect the large-box run to be slightly lower in amplitude. We take the same value of $k_{\rm part}$ for all spectra.
In Figure 4 we show the result of the construction of the composite spectrum. We have colour coded the points taken from the big-box run in blue and those taken from the small-box runs in red. In this plot we have also removed some of the large-scale cosmic variance by rescaling the spectrum of each realisation by the ratio of the ensemble-mean power to that realisation's power, averaged over modes below a chosen large-scale wavenumber. Note that the shaded region gives the 1-$\sigma$ error region computed before any rescaling takes place. The end result is that the composite spectrum smoothly covers more than three orders of magnitude in wavenumber and cuts off at the adopted small-scale limit.
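The stitching itself is straightforward; a minimal sketch follows, with an illustrative partition scale ($k_{\rm part} = 0.1$ in the default argument is a placeholder, not the value used in the paper):

```python
import numpy as np

def composite_spectrum(k_big, p_big, k_small, p_small, k_part=0.1):
    """Stitch a composite P(k) from two measurements: big-box modes with
    k < k_part, small-box modes with k >= k_part. Assumes the inputs are
    numpy arrays; returns the merged arrays sorted in k."""
    k = np.concatenate([k_big[k_big < k_part], k_small[k_small >= k_part]])
    p = np.concatenate([p_big[k_big < k_part], p_small[k_small >= k_part]])
    order = np.argsort(k)
    return k[order], p[order]
```

Any residual offset at the partition scale directly exposes the resolution mismatch between the two sets of runs discussed above.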
5 Comparison with analytic perturbation theory
5.1 Perturbative methods
As the Universe expands, small primordial matter over-densities aggregate through gravitational instability. When averaged over sufficiently large scales, and on scales smaller than the horizon, the evolution of these fluctuations can be modelled using the Newtonian fluid equations expressed in expanding coordinates. As the system evolves, nonlinear mode coupling takes place and the presence of large-scale wave-modes modulates the growth of structure on all scales (Peebles, 1980; Bernardeau et al., 2002).
As mentioned earlier, in recent years there has been significant progress in developing methods to improve the range of applicability of nonlinear perturbation theory. Of particular note are the RPT and MPT approaches (Crocce & Scoccimarro, 2006a,b; Bernardeau, Crocce & Scoccimarro, 2008). Subsequent research has focused on various extensions of these schemes. We note that currently much interest surrounds the construction of an effective field theory for large-scale structure (Carrasco, Hertzberg & Senatore, 2012). However, we will not explore this here since, as has been shown in Baldauf, Mercolli & Zaldarriaga (2015), the additional complexity and the need for a possibly scale-dependent free parameter mean that this technique is not generally applicable 'right out of the box'. Moreover, even when fully calibrated, this approach would offer improvements over MPT only on scales where, in this work, we will look to $N$-body simulations for the correct answer. In what follows, we will therefore focus on implementing and testing the MPT method against our runs.
5.2 The power spectrum in the Multi-point Propagator Theory
In this theory the matter power spectrum can be expressed as an infinite sum over nonlinear $n$-point propagators:
$$P(k) = \sum_{n\geq 1} n!\int {\rm d}^3q_1\cdots{\rm d}^3q_n\,\delta^{\rm D}(\mathbf{k}-\mathbf{q}_{1\cdots n})\left[\Gamma^{(n)}(\mathbf{q}_1,\ldots,\mathbf{q}_n)\right]^2 P_0(q_1)\cdots P_0(q_n),$$
where $\mathbf{q}_{1\cdots n}\equiv\mathbf{q}_1+\cdots+\mathbf{q}_n$, $P_0$ denotes the initial matter power spectrum determined at some initial time, and constant factors of $2\pi$ have been suppressed. The $\Gamma^{(n)}$ are the 'multi-point propagators', which can loosely be understood as the memory that the final field at a given point retains from multiple connections to the initial field. More formally, they can be defined:
$$\left\langle\frac{\delta^n\Psi_a(\mathbf{k})}{\delta\phi_{b_1}(\mathbf{k}_1)\cdots\delta\phi_{b_n}(\mathbf{k}_n)}\right\rangle \propto \delta^{\rm D}(\mathbf{k}-\mathbf{k}_{1\cdots n})\,\Gamma^{(n)}_{a\,b_1\cdots b_n}(\mathbf{k}_1,\ldots,\mathbf{k}_n),$$
where $\Psi_a$ is a doublet which denotes the late-time density and velocity divergence fields, and $\phi_a$ denotes the doublet at some initial time; when the initial fields are set to start in the growing mode we have that $\phi_a(\mathbf{k}) = u_a\,\delta_0(\mathbf{k})$ with $u_a = (1,1)$. The operation on the left-hand side of Eq. (28) means take the $n$-th functional derivative of the final state with respect to the initial states. The angle brackets mean compute the expectation of the resultant expression. Lastly, the above definition for the $n$-point propagators can be connected to the $\Gamma^{(n)}$ appearing in Eq. (27) through the expression:
$$\Gamma^{(n)}(\mathbf{k}_1,\ldots,\mathbf{k}_n) = \Gamma^{(n)}_{1\,b_1\cdots b_n}(\mathbf{k}_1,\ldots,\mathbf{k}_n)\,u_{b_1}\cdots u_{b_n},$$
where repeated indices are summed over.
As can be understood from inspection of Eq. (27), one major advantage of the MPT expansion over the standard perturbation theory scheme is that all of the terms in the expansion are positive. Further, as the order is increased, the contributions to the power spectrum shift to increasingly small scales. Hence, on large scales, one can confidently truncate the sequence at a finite number of 'loops'.
5.3 MPT recipe à la MPTbreeze
In this work we will desire to use the MPT approach to describe the power spectrum in the weakly nonlinear regime, and in order to do this we follow Crocce, Scoccimarro & Bernardeau (2012) and truncate the propagator expansion after the first three terms in Eq. (27). In the diagrammatic language this corresponds to renormalised versions of tree level, one and two loop corrections, respectively. Explicitly, this is:
and where the 1-, 2- and 3-point propagators are given by:
where for brevity we have suppressed the arguments of the functions. The symmetrised gravitational mode coupling kernels up to third order and the function are given in Appendix B.
Considering Eq. (30), the first term on the right-hand side is directly proportional to the linear power spectrum and the square of the 1-point propagator, which is a direct indicator of the ‘memory’ that a particular scale in the initial conditions retains of that same scale at late times. The second term is the ‘one-loop’ correction, which can be simplified as follows: firstly, a quick inspection of the kernel indicates that it depends only on the magnitudes of the two vector arguments and the cosine of the angle between them. Hence, on choosing the vector to denote the -axis of a spherical polar coordinate system, one can immediately integrate out the azimuthal angle. Secondly, the exponential term depends on the sum of the two vector arguments, and by momentum conservation this sum always results in , hence it may be factored out of the integrals. This leaves us with:
with and . We are thus left with an integration in 2-D which can be rapidly evaluated with a standard Gaussian quadrature routine and we employ repeated use of the GSL standard library routine gsl_integration_qag.
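The reduction above leaves a 2-D integral that can be evaluated by nesting two 1-D quadrature calls, in the same spirit as our repeated use of gsl_integration_qag. The following sketch illustrates the pattern in Python with a composite Simpson rule; the kernel and spectrum passed in are placeholders for the actual MPT kernel and linear power spectrum, not the forms used in the paper.

```python
def simpson(f, a, b, n=200):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def one_loop_like(P, kernel, qmin, qmax):
    """Nested 1-D quadrature for a 2-D integral of the schematic form
    int dq int dmu  q^2 kernel(q, mu)^2 P(q),
    mimicking repeated calls to an adaptive 1-D routine such as
    gsl_integration_qag. P and kernel are toy placeholders here."""
    def inner(q):
        mu_int = simpson(lambda mu: kernel(q, mu) ** 2, -1.0, 1.0)
        return q * q * P(q) * mu_int
    return simpson(inner, qmin, qmax)
```

As a sanity check, choosing kernel(q, mu) = mu and P(q) = 1 on q in [0, 1] gives the separable integral (1/3)(2/3) = 2/9, which the nested rule reproduces to machine precision since both integrands are low-order polynomials.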
Consider the third term on the right-hand side of Eq. (30); this is termed the ‘two-loop’ correction, and in order to evaluate it we first substitute in. Again, we notice that the exponential term can be extracted. Next, we follow Crocce, Scoccimarro & Bernardeau (2012) and reduce the dimensionality of the integration by one through noting the following: without loss of generality we can fix the to lie along the -axis. Next we can restrict to lie in the plane. Finally, must remain unrestricted. With these choices, the relevant vectors can be written in spherical polar coordinates as:
On integrating out the redundant azimuthal angle of the vector we are left with the following 5-D integral:
where and . This integral can be efficiently computed using Monte Carlo integration techniques, and we employ the Vegas algorithm supplied by the CUBA-4.2 package (Hahn, 2016). Note that in the numerical implementation of the integrals Eqs (34) and (37) we perform the radial integrals over the restricted domain: , where the lower bound is fixed at and the upper limit varies according to .
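The essential idea behind the Monte Carlo evaluation can be sketched with plain uniform sampling over the rectangular integration domain; the adaptive importance sampling of Vegas refines this by concentrating samples where the integrand is large, but the estimator below captures the basic scheme. Everything here (function, bounds, sample count) is illustrative, not the actual two-loop integrand.

```python
import random

def mc_integrate(f, bounds, n_samples=200_000, seed=12345):
    """Plain Monte Carlo estimate of the integral of f over a rectangular
    domain. bounds is a list of (lo, hi) pairs, one per dimension. A
    Vegas-style algorithm would adapt the sampling density; here we
    sample uniformly for simplicity."""
    rng = random.Random(seed)
    volume = 1.0
    for lo, hi in bounds:
        volume *= hi - lo
    total = 0.0
    for _ in range(n_samples):
        x = [lo + (hi - lo) * rng.random() for lo, hi in bounds]
        total += f(x)
    return volume * total / n_samples
```

For instance, integrating the product of the five coordinates over the unit 5-cube returns an estimate close to the exact value 1/32, with the usual slow 1/sqrt(N) convergence that motivates the adaptive Vegas approach for the real two-loop integral.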
Figure 5 shows the 2-loop calculation of the MPT power spectrum of Eq. (30), evaluated using the above procedure for our Planck-like fiducial model. The left panel shows and the right . The upper part of each panel shows the absolute power, and one can see that the sum of the MPT propagators adds signal to the spectrum at increasingly high wavenumbers. It is interesting to note that at , relative to linear theory, there is a 2–3% suppression of power on very large scales (), followed by an amplification that starts around . In addition, while the 1-point propagator shows tens of percent difference from linear theory at , the sum of the 1- and 2-point propagators gives an amplitude that is coincidentally only different by a percent or so.
Figure 5 also compares the theoretical predictions with the composite power spectrum (blue points). The shaded blue region shows the 1– error region, obtained assuming that the density field is a Gaussian random field (c.f. Eq. (25)). The lower panels in the figure show the ratio of the spectra with respect to the linear theory. We see that the MPT calculation describes the results of the simulation up to with relatively good accuracy. There are, however, some small but notable differences: in particular, we see that for the data at the MPT predictions slightly over-predict the data by . For the case of the data, this discrepancy is pushed to slightly higher wavenumbers. We therefore explored whether we might correct for this.
5.4 An ad hoc correction to MPT
We found that the MPT predictions and the data could be brought into better agreement by introducing the following ad hoc correction. Considering again the propagator expansion of the power spectrum Eq. (27), we note that if we were to slightly increase the amount of decorrelation of the initial conditions with the final conditions, then this would reduce the predictions on the relevant scales. We find that this can most easily be achieved by recomputing that appears in Eq. (31) with a nonlinear matter power spectrum model such as halofit2012, which we might call i.e. in Eq. (92) we make the replacement (see Eq. (93)).
Unfortunately, since the resummed 1-point propagator multiplies all of the higher-order propagators (see Eqs (32) and (33)), this alteration has the effect of considerably damping all of the loop terms. We avoid this by only using in the computation of the 1-point propagator:
and leaving and unchanged. In implementing this approach we had to pay special attention to the limits of the integral for since we found that the amount of damping was sensitive to the upper limit. After some trial and error we found that adopting the upper limit produced acceptable results. A higher value for this cut-off would lead to too much damping.
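The sensitivity to the upper limit described above can be illustrated with a toy calculation: the damping exponent is an integral over the power spectrum up to the cut-off, so raising the cut-off monotonically increases the exponent and hence suppresses the propagator more strongly. The spectrum and limits below are invented for illustration only; they are not halofit2012 or the values adopted in the paper.

```python
import math

def damping_exponent(P, qmin, qcut, n=400):
    """Trapezoidal estimate of the integral of P(q) from qmin to qcut --
    the kind of integral whose upper limit controls the amount of
    propagator damping discussed in the text."""
    h = (qcut - qmin) / n
    s = 0.5 * (P(qmin) + P(qcut))
    for i in range(1, n):
        s += P(qmin + i * h)
    return s * h

toy_P = lambda q: q * math.exp(-q)  # toy spectrum, NOT halofit2012

# Propagator suppression factors for a low and a high cut-off:
g_low = math.exp(-damping_exponent(toy_P, 1e-3, 0.5))
g_high = math.exp(-damping_exponent(toy_P, 1e-3, 2.0))
```

Here g_high < g_low: pushing the cut-off upwards accumulates more power in the exponent and damps the propagator further, which is precisely why too high a cut-off leads to too much damping.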
In Figure 5 we indicate our correction to the MPT power spectrum implementation by the solid yellow/gold lines. Up to we see that for both of the redshifts considered this recipe leads to improved predictions. We shall therefore adopt this corrected MPT formulation as the means for generating the nonlinear matter power spectrum on scales . Before continuing, we also point out that our large-box simulation has a volume of , and since typical surveys cover a smaller volume we expect that the modelling errors on these large-scales would fall below sample variance errors.
6 Comparison with semi-analytic methods
We now compare the power spectra measured from our -body runs with various semi-analytic and emulator methods.
6.1 Comparison with Halofit2003 and Halofit2012
The perturbative methods such as SPT and MPT are unable to describe the evolution of structure once shell-crossing takes place. This we shall refer to as the deeply nonlinear regime. In order to understand how structures collapse and evolve on these scales we need to make use of fully non-perturbative schemes such as -body simulations and study the phenomenology of the structures formed.
The halofit prescription for modelling the nonlinear matter power spectrum, originally developed in Smith et al. (2003) has the following form for the power spectrum:
where represents the quasi-linear suppression and amplification of nonlinear structures that are loosely associated with the phenomenological effects arising from a 2-halo-like term, and is a 1-halo shot-noise-like term (borrowing loosely from the language of the halo model). Parameterised analytic forms for these two functions were devised in Smith et al. (2003), and the best-fit parameters were established by minimising the root-mean-square difference between the model and a suite of power spectra from -body simulations on small scales and 1-loop SPT on large scales. This approach was recently upgraded in the work of Takahashi et al. (2012), who recalibrated the fitting parameters against a series of improved simulations and extended the method to include a time-evolving dark energy equation of state.
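The two-term structure of the halofit decomposition can be sketched as follows. The component functions below are crude toy stand-ins chosen only to exhibit the expected behaviour (the quasi-linear 2-halo-like piece dominating on large scales and the 1-halo piece on small scales); they are not the published fitting formulae of Smith et al. (2003) or Takahashi et al. (2012).

```python
import math

# Toy stand-ins for the two fitted components -- NOT the published formulae.
P_lin = lambda k: 1.0e4 * k / (1.0 + (k / 0.02) ** 2) ** 1.5  # toy linear spectrum
quasi = lambda k: P_lin(k) * math.exp(-(k / 1.0) ** 2)        # 2-halo-like term
one_halo = lambda k: 3.0e2 / (1.0 + 1.0 / k ** 2)             # 1-halo-like term

def p_nonlinear(k):
    """halofit-style decomposition: total power is the sum of a
    quasi-linear term and a 1-halo shot-noise-like term."""
    return quasi(k) + one_halo(k)
```

With these toy forms the quasi-linear term carries essentially all of the power at small wavenumbers, while the 1-halo term takes over deep in the nonlinear regime, mirroring the qualitative division of labour in the real fitting function.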
Figure 6 shows the ratio of the composite power spectra derived from our fiducial runs, with the updated halofit model of Takahashi et al. (2012). In all of the panels the blue and red points with error bars denote the composite spectrum from the simulations. The blue dashed line shows the linear theory prediction, which is clearly a poor fit to the measured data for most of the scales of interest. The original halofit algorithm is given by the blue dot-dashed line. This appears to underestimate the true nonlinear power on scales at by roughly 10%. For this discrepancy rises sharply.
On the other hand, the recalibrated halofit model of Takahashi et al. (2012) provides a good description of the data at the 5% level on all scales. The discrepancy rises slightly at ; however, this is likely owing to the fact that our data approach the Nyquist frequency of the FFT mesh at , which leads to a small increase in the power. The model nevertheless fails to capture the BAO oscillations on large scales, and also seems to overpredict the amount of structure on quasi-linear scales by . This is entirely consistent with the claimed accuracy of the fitting function.
6.2 Comparison with CosmicEMU
We now examine how the predictions from the CosmicEMU model (Heitmann et al., 2014) compare with the estimates from our simulations. We obtained the latest version of the code (Version 3), which included updated constraints from the “Mira-Titan Universe” runs. Since the code returns