DESI and other Dark Energy experiments in the era of neutrino mass measurements
We present Fisher matrix projections for future cosmological parameter measurements, including neutrino masses, Dark Energy, curvature, modified gravity, the inflationary perturbation spectrum, non-Gaussianity, and dark radiation. We focus on DESI and, more generally, on redshift surveys (BOSS, HETDEX, eBOSS, Euclid, and WFIRST), but also include CMB (Planck) and weak gravitational lensing (DES and LSST) constraints. The goal is to present a consistent set of projections, for concrete experiments, which are otherwise scattered throughout many papers and proposals. We include neutrino mass as a free parameter in most projections, as it will inevitably be relevant – DESI and other experiments can measure the sum of neutrino masses to 0.02 eV or better, while the minimum possible sum is 0.06 eV. We note that constraints on Dark Energy are significantly degraded by the presence of neutrino mass uncertainty, especially when using galaxy clustering only as a probe of the BAO distance scale (because this introduces additional uncertainty in the background evolution after the CMB epoch). Using broadband galaxy power becomes relatively more powerful, and bigger gains are achieved by combining lensing survey constraints with redshift survey constraints. We do not try to be especially innovative, e.g., with complex treatments of potential systematic errors – these projections are intended as a straightforward baseline for comparison to more detailed analyses.
The Fisher matrix formalism is a standard tool for forecasting the statistical ability of future experiments to measure cosmological parameters of interest Albrecht et al. (2006); Bassett et al. (2011); Cooray (1999); Giannantonio et al. (2012); Huterer et al. (2006); Hamann et al. (2012); Joudaki and Kaplinghat (2012); Kitching and Amara (2009); More et al. (2013); Weinberg et al. (2012). The Fisher matrix is the expectation value of the second derivative matrix of the log likelihood with respect to the parameters of interest,
$F_{ij} = -\left\langle \frac{\partial^2 \ln \mathcal{L}}{\partial \theta_i\, \partial \theta_j} \right\rangle,$
where $\mathcal{L}$ is the likelihood, $\theta_i$ are the parameters, and the average is over all possible realizations of the data assuming a certain fiducial model. In the limit of a Gaussian likelihood, the Fisher information matrix can be thought of as the inverse of a typical covariance matrix. In particular, in the limit of a Gaussian likelihood surface, $F_{ii}^{-1/2}$ is the expected error on the parameter $\theta_i$ assuming the values of all other parameters are known, while $\left[(F^{-1})_{ii}\right]^{1/2}$ is the marginalized error on the parameter $\theta_i$. The Cramér-Rao bound also stipulates that no unbiased estimator of $\theta_i$ performs better than the Fisher matrix error, and therefore Fisher information is the lower limit on the error obtainable from a given data set using an optimal estimator.
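The distinction between the conditional error $F_{ii}^{-1/2}$ and the marginalized error $[(F^{-1})_{ii}]^{1/2}$ can be sketched numerically. The toy 2x2 Fisher matrix below is illustrative only, not tied to any experiment in this paper:

```python
import numpy as np

# Toy Fisher matrix for two correlated parameters (illustrative values only).
F = np.array([[4.0, 1.6],
              [1.6, 1.0]])

# Conditional error on parameter 0: all other parameters held fixed.
sigma_cond = 1.0 / np.sqrt(F[0, 0])

# Marginalized error on parameter 0: invert the full matrix first.
C = np.linalg.inv(F)
sigma_marg = np.sqrt(C[0, 0])

# Marginalization over correlated parameters never tightens an error.
assert sigma_marg >= sigma_cond
```

For this matrix the conditional error is 0.5 while the marginalized error is about 0.83; the gap grows as the off-diagonal correlation increases.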
Fisher matrix forecasts generally should not be taken literally for poorly constrained parameters where the likelihood will often be non-Gaussian Wolz et al. (2012); Khedekar and Majumdar (2013); Basse et al. (2013); Costanzi Alunno Cerbolini et al. (2013), but even then the sense that the constraint is poor is generally preserved. When used thoughtfully, Fisher matrices allow us to map out the progression of parameter measurements we can expect from future experiments. Unfortunately, results of these exercises are generally scattered throughout the literature (or worse, non-public proposals), often with slightly different assumptions and methodologies that make direct comparisons difficult. The main purpose of this paper is to make predictions for a suite of models and planned experiments with a consistent set of assumptions.
We do not attempt to be particularly innovative, but one thing we emphasize is that our basic cosmological model assumes that the neutrino mass is not known. Since the effect of massive neutrinos is important given the accuracy of the data we consider and since it is unlikely that it will be measured in terrestrial experiments in the next decade, it is a logical necessity to include it as a free parameter in all forecasts.
We also note that no projection method can ever give an unambiguously fair comparison of experiments, as it is impossible to anticipate and model all possible sources of systematic errors (see e.g. Pullen and Hirata (2012); Ross et al. (2012)). It is also difficult to forecast our advances in theoretical problems in the next decade (for example, our understanding of non-linear matter clustering, redshift-space distortions and biasing of cosmic tracers for the large scale structure probes; similar issues exist for all other probes). Therefore, we sometimes give pessimistic and optimistic forecasts (e.g., quoting BAO-only errors for a galaxy redshift survey is essentially a very pessimistic use of the survey).
The rest of this paper is structured as follows: In §II we define our cosmological parameters and fiducial model, and discuss general methodology. In §III we discuss our Cosmic Microwave Background, i.e., Planck, treatment. In §IV we discuss spectroscopic, i.e., redshift surveys like DESI. In §V we discuss photometric, i.e., gravitational lensing-oriented surveys. In §VI we give the main results on parameter constraints. Finally in §VII we give some discussion and conclusions. (In the Appendix we give some more traditional FoM numbers without free neutrino mass, and discuss the issue of overlapping lensing and redshift surveys.)
This is intended to be a technical reference paper – see, e.g., Weinberg et al. (2012) for a much higher ratio of pedagogy to tables.
II Parameters, fiducial model and general methodology
In §VI we give constraint projections for this baseline model first, and then several scenarios with added parameters.
II.1 Baseline model parameters
Our baseline model is flat $\Lambda$CDM with massive neutrinos. This model is specified by eight parameters, which are listed, together with their fiducial values, in Table 1.
|Parameter|Fiducial value|Description|
|$\omega_b$||Physical baryon density|
|$\omega_m$||Physical matter density (including neutrinos which are non-relativistic at $z=0$)|
|$\theta_s$|(deg)|Angular size of the sound horizon at the surface of last scattering (a substitute for, e.g., Hubble's constant)|
|$\sum m_\nu$|(eV)|Sum of neutrino masses (we assume they are degenerate)|
|$A_s$||Amplitude of the primordial power spectrum at the pivot scale (for the numerical Fisher matrix we actually use $\ln A_s$)|
|$n_s$||Spectral index of primordial matter fluctuations|
|$\tau$||Optical depth to the last scattering surface assuming instantaneous reionization|
|$T/S$||Ratio of tensor to scalar perturbations (we assume the inflationary tensor spectral index is fixed)|
|$w_0$|$-1$|Pressure/density ratio for Dark Energy at the present time|
|$w_a$|0|Rate of change of the Dark Energy equation of state in the formula $w(a)=w_0+w_a(1-a)$|
|$\Omega_k$|0|Curvature of the homogeneous model|
|$\Delta\gamma$|0|Modification of the growth factor|
|$G_9$|1|Arbitrary normalization multiplier applied to the linear perturbations at $z=9$, i.e., all our observables except the CMB|
|$\alpha_s$|0|Running of the spectral index (same pivot scale)|
|$f_{\rm NL}$|0|Normalization of local-model quadratic non-Gaussianity of the initial perturbations|
|$N_{\rm eff}$|3.046|Effective number of neutrino species (neutrinos $+$ dark radiation)|
Parameter symbols have their conventional meanings. Capital $\Omega$s are densities of various components expressed as a fraction of the critical density today. Lower-case $\omega$s correspond to physical densities, $\omega_i \equiv \Omega_i h^2$ (for the component $i$), where $h$ is the dimensionless reduced Hubble parameter today, $h \equiv H_0/(100\;{\rm km\,s^{-1}\,Mpc^{-1}})$. The matter density contains contributions from baryons, dark matter and massive neutrinos,
$\omega_m = \omega_b + \omega_c + \omega_\nu\,, \qquad \omega_\nu \simeq \frac{\sum m_\nu}{93.14\,{\rm eV}}$
(this is used only where massive neutrinos are non-relativistic). The parameter $\theta_s$ is the angular size of the sound horizon at the surface of last scattering, i.e.,
$\theta_s = \frac{r_s(z_\star)}{D_A(z_\star)}\,, \qquad r_s(z_\star) = \int_{z_\star}^{\infty} \frac{c_s(z)\, dz}{H(z)}\,,$
where $c_s = c/\sqrt{3(1+R)}$ is the sound speed of the cosmic plasma ($c$ is the speed of light and $R \equiv 3\rho_b/4\rho_\gamma$ is the ratio of baryon to photon energy density), $D_A$ is the comoving angular diameter distance, and $a_\star = 1/(1+z_\star)$ is the scale factor at the redshift of decoupling as given by a fitting formula in Hu (2005).
Our standard fiducial parameter values follow Planck results, specifically the P+WP+highL+BAO column of Table 5 of Planck Collaboration et al. (2013). As mentioned before, in addition to the conventional 6 parameters of the minimal cosmological model, we also always vary the neutrino mass and the amount of tensor modes. Varying T/S is largely irrelevant to anything else in the paper because the T/S measurement is completely dominated by Planck and essentially uncorrelated with anything else – the error is essentially the same in every scenario, so we do not print it in tables (note that this error is surely optimistic due to lack of consideration of foregrounds Armitage-Caplan et al. (2011), and note that it does depend on the fiducial value of $T/S$, e.g., we find a noticeably different error for a non-zero fiducial $T/S$).
II.2 Extended model parameters
Our first extension beyond the baseline model is to the Dark Energy Task Force (DETF) Figure of Merit (FoM) scenario Albrecht et al. (2006), except with the DETF definition modified to include marginalization over neutrino mass. As usual, we define the Dark Energy equation of state $w \equiv p_{\rm DE}/\rho_{\rm DE}$, where $p_{\rm DE}$ and $\rho_{\rm DE}$ are the Dark Energy pressure and density. A cosmological constant is equivalent to $w = -1$. Linear dependence of the equation of state on the expansion factor is allowed by introducing parameters $w_0$ and $w_a$ in $w(a) = w_0 + w_a(1-a)$. The DETF FoM was originally defined as the inverse of the area inside the 95% confidence constraint contour, but we follow the subsequently generally adopted modified normalization convention that the FoM is simply $\left[\sigma(w_p)\,\sigma(w_a)\right]^{-1}$, where $w_p$ is the value of $w$ at the pivot expansion factor $a_p$, chosen to make the errors on $w_p$ and $w_a$ independent. We follow the DETF standard of allowing curvature, parameterized by $\Omega_k$, to be free in this scenario (i.e., marginalized over when computing the FoM). While we believe that it is more useful to compute FoMs with free neutrino mass, in the Appendix we give results following the original DETF convention of fixing the neutrino mass, to show the difference and to allow comparison with past calculations.
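The pivot construction above can be sketched numerically: given a marginalized $(w_0, w_a)$ covariance, choose $a_p$ so that $w_p = w_0 + (1-a_p)\,w_a$ decorrelates from $w_a$. The covariance values below are illustrative only, not forecasts from this paper:

```python
import numpy as np

# Toy marginalized covariance of (w0, wa) -- illustrative numbers only.
C = np.array([[0.04, -0.12],
              [-0.12, 0.49]])

# Pivot: pick (1 - a_p) so that Cov(w_p, wa) = C01 + (1 - a_p) C11 = 0.
one_minus_ap = -C[0, 1] / C[1, 1]

# Error on w at the pivot, and on wa.
sigma_wp = np.sqrt(C[0, 0] - C[0, 1] ** 2 / C[1, 1])
sigma_wa = np.sqrt(C[1, 1])

fom = 1.0 / (sigma_wp * sigma_wa)

# Useful shortcut: the FoM equals 1/sqrt(det C), independent of the pivot.
assert np.isclose(fom, 1.0 / np.sqrt(np.linalg.det(C)))
```

The determinant identity makes clear that the pivot is purely a convenient decorrelated basis; it does not change the information content.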
Going beyond the DETF FoM model, our next extension is to modify gravity, following a model similar to, but not exactly that of, Albrecht et al. (2009). Rather than defining $f(z) = \Omega_m(z)^{\gamma}$ with $\gamma$ as a free parameter, we define $f(z) = f_{\rm GR}(z)\,\Omega_m(z)^{\Delta\gamma}$, where $\Delta\gamma$ is the free parameter, with a fiducial value of zero, and $f_{\rm GR}(z)$ is computed given the background evolution and assuming GR. This is of course exactly equivalent to the usual parameterization if $f_{\rm GR}(z)$ is exactly described by $\Omega_m(z)^{\gamma}$ with unvarying $\gamma$, but allows for any variation in $f(z)$ within GR to be properly propagated (we originally implemented this form because neutrinos certainly modify $f(z)$ in principle, but in practice it did not make a noticeable difference – note that with neutrinos, $\Omega_m$ here is defined to include only CDM and baryons). Similarly following Albrecht et al. (2009), we include a parameter $G_9$ representing a multiplicative offset of the amplitude of perturbations (Albrecht et al. (2009) used a slightly different name for it), relative to the GR-predicted amplitude at $z=9$ (applied to the power, to decouple the low redshift amplitude from CMB measurements), i.e., for every use of the power spectrum other than the CMB, we multiply it by $G_9^2$ (as pointed out by Linder (2009), equation (29) of Albrecht et al. (2009) and the text that follows it could be interpreted as defining this parameter in a way that deviated from 1 even within GR – clearly this would be a bad thing, although the definition of the parameters list at the start of §III of Albrecht et al. (2009) suggests that they really intended the definition to be the one we are using here, which does not have this problem). We include $\Delta\gamma$ and $G_9$, along with the Dark Energy and curvature parameters, as free parameters in the modified gravity scenario, as the main point is to see how well these things can be distinguished (generally a realistic modified gravity model would contain its own background evolution modifications, but these will be degenerate with changes in a Dark Energy equation of state).
We add a running of the inflationary perturbation spectral index, $\alpha_s \equiv dn_s/d\ln k$, to the baseline model as a single-parameter extension, i.e., in that case the primordial power spectrum is
$P(k) \propto k^{\,n_s + \frac{1}{2}\alpha_s \ln(k/k_{\rm pivot})}\,.$
Another single-parameter extension describes non-Gaussianity, $f_{\rm NL}$, parameterizing the usual local model:
$\Phi = \phi + f_{\rm NL}\left(\phi^2 - \langle\phi^2\rangle\right),$
where $\Phi$ is proportional to the initial potential fluctuations and $\phi$ is the underlying Gaussian initial field.
Finally, we consider a single parameter extension allowing for "dark radiation", i.e., a contribution to the relativistic energy density of the Universe which otherwise does nothing. As is traditional, we call the parameter for this $N_{\rm eff}$, measuring the amount of radiation in units of the amount contributed by a single massless standard model neutrino, but it should be kept in mind that a measurement of this parameter differing from the standard model value 3.046 would not necessarily literally imply extra neutrinos, only extra radiation of some kind.
II.3 Fisher matrix parameter errors
Throughout this work we assume Gaussian likelihoods and propagate experimental designs into the Fisher matrices for the intermediate products of individual experiments, such as Fisher matrices for power spectrum measurements, or BAO distance scale parameters. These intermediate results are in turn used to form Fisher matrices for cosmological parameters for individual experiments. Except as otherwise discussed, we assume experimental errors are independent and combine experiments by adding their Fisher matrices.
For a typical vector of measured quantities, $\mathbf{x}$, for which we can assume the likelihood function is Gaussian, with a vector of means, $\mathbf{m}(\boldsymbol{\theta})$, that is predictable given parameters $\boldsymbol{\theta}$, and a covariance matrix $\mathsf{C}$ which we can assume is independent of the parameters, the Fisher matrix is:
$F_{ij} = \frac{\partial \mathbf{m}^{T}}{\partial \theta_i}\, \mathsf{C}^{-1}\, \frac{\partial \mathbf{m}}{\partial \theta_j}\,.$
While in general the covariance matrix does depend on parameters, this dependence becomes a sub-dominant part of the likelihood function once the parameters are sufficiently precisely determined Eifler et al. (2009); Kilbinger and Munshi (2006), i.e., essentially the same limit in which the Fisher matrix is an accurate estimator of errors to begin with. Eifler et al. (2009) show that it is important to compute the covariance for a model sufficiently close to the best fit, and all Fisher matrix calculations implicitly assume that this is done. This equation is used repeatedly throughout the paper, e.g., the observable could be a measurement of the BAO distance scale, or the CMB $C_\ell$'s, or galaxy band powers, etc.
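A minimal numerical implementation of this Gaussian Fisher matrix, using central finite differences of the model mean, can be sketched as follows. The function name and the straight-line example are ours, for illustration only, not the code used for the forecasts in this paper:

```python
import numpy as np

def fisher_matrix(mean_fn, theta0, C, step=1e-4):
    """F_ij = (dm/dtheta_i)^T C^{-1} (dm/dtheta_j) for a Gaussian likelihood
    with parameter-independent covariance C, via central differences."""
    theta0 = np.asarray(theta0, dtype=float)
    Cinv = np.linalg.inv(C)
    derivs = []
    for i in range(len(theta0)):
        dt = np.zeros_like(theta0)
        dt[i] = step
        derivs.append((mean_fn(theta0 + dt) - mean_fn(theta0 - dt)) / (2 * step))
    D = np.array(derivs)  # shape (n_params, n_observables)
    return D @ Cinv @ D.T

# Toy example: straight-line model m(x) = a + b x with unit observational errors.
x = np.linspace(0.0, 1.0, 5)
F = fisher_matrix(lambda th: th[0] + th[1] * x, [1.0, 2.0], np.eye(5))
```

For this linear model the derivatives are exact and F reduces to the familiar least-squares normal matrix, so the routine is easy to validate before applying it to more complex observables.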
III Cosmic Microwave Background: Planck
We include the Planck CMB satellite as a baseline experiment in all projections. Without it, our interpretation of low redshift measurements would be dominated by constraints on strongly degenerate directions that are actually irrelevant in global constraints. Planck constraints are expected to improve with future releases including polarization, so we continue to project results using a Fisher matrix, following, e.g., Hu (2002). We assume a usable fraction of the sky $f_{\rm sky} = 0.7$. We assume 3 channels can be effectively used for cosmological measurements: 100, 143, and 217 GHz, with FWHM resolution down to 4.99 arcmin (at 217 GHz), and temperature and polarization noise per resolution element as given in The Planck Collaboration (2006) (i.e., noise is in units of the mean temperature). We use all $\ell$ up to 2000 for temperature and 2500 for polarization.
To compute the Fisher matrix we use equation (11), with the observables being the auto-correlations of temperature and of the $E$ and $B$ modes of polarization, and the cross-correlation between temperature and $E$-mode polarization, i.e., $C_\ell^{TT}$, $C_\ell^{EE}$, $C_\ell^{BB}$, and $C_\ell^{TE}$, at each multipole $\ell$.
Defining $\Delta_\ell^{XY} \equiv \hat{C}_\ell^{XY} - C_\ell^{XY}$, where $\hat{C}_\ell^{XY}$ is the estimated value and $C_\ell^{XY}$ the true value, with $X$ and $Y$ equal to $T$, $E$, or $B$, the covariance matrix for the observables is given by
$\left\langle \Delta_\ell^{XY}\, \Delta_\ell^{X'Y'} \right\rangle = \frac{1}{(2\ell+1)\, f_{\rm sky}} \left( C_\ell^{XX'} C_\ell^{YY'} + C_\ell^{XY'} C_\ell^{YX'} \right).$
Note that for the CMB we always sum over integer $\ell$, not aggregated bands as discussed below for photometric surveys. We assume that different $\ell$ are uncorrelated, which they will not be with $f_{\rm sky} < 1$, but this will not affect the results as long as models have no fine structure in $\ell$.
$C_\ell^{TT}$, $C_\ell^{EE}$, and $C_\ell^{BB}$ contain a noise contribution which we compute using
$N_\ell = \left(\sigma_T\, \theta_{\rm FWHM}\right)^2 \exp\!\left[\frac{\ell(\ell+1)\,\theta_{\rm FWHM}^2}{8 \ln 2}\right],$
where $\theta_{\rm FWHM}$ is in radians Hu (2002). $\sigma_E$ or $\sigma_B$ can be substituted for $\sigma_T$ to compute the $EE$ and $BB$ noise power.
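This noise formula, and the standard inverse-variance combination of channels, can be sketched as follows. The channel specifications in the example are hypothetical placeholders, not the Planck values used in the text:

```python
import numpy as np

ARCMIN = np.pi / (180.0 * 60.0)  # arcmin -> radians

def knox_noise(ell, sigma_T, theta_fwhm_arcmin):
    """N_ell = (sigma_T theta)^2 exp[ell(ell+1) theta^2 / (8 ln 2)],
    with theta the FWHM beam in radians and sigma_T the per-resolution-element
    noise in units of the mean temperature."""
    theta = theta_fwhm_arcmin * ARCMIN
    return (sigma_T * theta) ** 2 * np.exp(ell * (ell + 1) * theta ** 2 / (8.0 * np.log(2.0)))

def combined_noise(ell, channels):
    """Inverse-variance combination over channels: 1/N = sum_c 1/N_c."""
    return 1.0 / sum(1.0 / knox_noise(ell, s, t) for s, t in channels)

# Hypothetical (sigma_T, FWHM/arcmin) specs for three channels:
ells = np.arange(2, 2001)
N_ell = combined_noise(ells, [(2.0e-6, 9.6), (2.0e-6, 7.1), (4.0e-6, 5.0)])
```

The exponential beam factor makes the effective noise blow up beyond the beam scale, which is what enforces the practical $\ell$ cutoffs quoted above.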
We do not include CMB lensing Oyama et al. (2013); Hall and Challinor (2012); de Putter et al. (2009), which could provide additional constraints which will probably be qualitatively similar to the galaxy lensing surveys discussed below, but with much different systematics.
III.1 Planck projections vs. reality
Obviously our overall Planck projections are stronger than the published results, because they include polarization. This should change as new results are published. Our resolution numbers accurately reflect the achieved ones Planck collaboration et al. (2013), and we are consistent with the Planck power spectrum paper Planck collaboration et al. (2013) in using the 100, 143, and 217 GHz channels. At high $\ell$, Planck collaboration et al. (2013) use only 58% of the sky at 100 GHz, and 37% at 143 and 217 GHz, so it appears that we are optimistic in using 70%, although there is some suggestion in Planck collaboration et al. (2013) that they expect the fractions to improve in future releases. At low $\ell$, Planck collaboration et al. (2013) use 87% of the sky, so we are a bit pessimistic there. The achieved noise in the published 15-month data set is very similar to projections in the 143 and 217 GHz channels, but somewhat worse in the 100 GHz channel (http://www.sciops.esa.int/wikiSI/planckpla/index.php?title=HFI_performance_summary&instance=Planck_Public_PLA), which should nevertheless nearly achieve the goal performance that we use in projections by the end of the 30-month extended mission.
One of the most important remaining uncertainties is whether low-$\ell$ Planck polarization measurements will be sufficiently clean to achieve the optical depth ($\tau$) error we project. This determines the error on the CMB measurement of the power spectrum amplitude, which is in turn compared to lower redshift amplitude measurements from redshift-space distortions or lensing to determine things like the neutrino mass.
IV Spectroscopic surveys
In this section we first describe how we compute projections for redshift surveys, including galaxy clustering, quasar clustering, and correlations of Ly$\alpha$ forest absorption in quasar spectra. Then we describe the specific redshift surveys we include.
IV.1 Galaxy and quasar clustering
Galaxies and quasars are point tracers of the underlying cosmic structure. The physics of how they trace the dark matter fluctuations is well understood based on arguments about locality of galaxy formation McDonald and Roy (2009); Baldauf et al. (2012); McDonald (2006) and heuristic understanding that astrophysical objects form in the peaks (halos) of the primordial density fields Desjacques (2013). On very large scales bias is scale independent and redshift-space distortions are described by linear perturbation theory Kaiser (1987). Beyond-linear perturbative corrections can be used on intermediate scales before perturbation theory breaks down entirely on small scales Jeong and Komatsu (2009a); Seljak and McDonald (2011); Nishizawa et al. (2013); Wang and Szalay (2012); Chan et al. (2012); Matsubara (2011); Song et al. (2013); Vlah et al. (2012); Okumura et al. (2012a); Shaw and Lewis (2008); Baumann et al. (2012).
The basic model for galaxy clustering (or quasar clustering – when quasars are treated as clustering objects, as opposed to probes of the Ly$\alpha$ forest, there is no fundamental difference between them and galaxies) is simply linear bias plus a shot noise contribution, i.e.,
$P_g(k,\mu) = \left(b_g + f \mu^2\right)^2 P_{\rm lin}(k) + \frac{1}{\bar{n}_g}\,,$
where $b_g$ is the bias for tracer type $g$, $f$ is the growth rate, i.e., $f \equiv d\ln D/d\ln a$, $\mu$ is the cosine of the angle between the wavevector and our line of sight, $P_{\rm lin}(k)$ is the linear theory mass power spectrum, and $\bar{n}_g$ is the number density. We generally assume fiducial biases follow constant $b_g(z)\,D(z)$, where $D(z)$ is the linear growth factor normalized to unity at $z=0$.
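This Kaiser-plus-shot-noise model can be written as a one-line function. The toy smooth linear power spectrum below is a placeholder for illustration, not a Boltzmann-code output:

```python
import numpy as np

def galaxy_power(k, mu, P_lin, b, f, nbar):
    """Linear Kaiser redshift-space model plus shot noise:
    P_g(k, mu) = (b + f mu^2)^2 P_lin(k) + 1/nbar."""
    return (b + f * mu ** 2) ** 2 * P_lin(k) + 1.0 / nbar

# Toy smooth linear power spectrum (placeholder shape, arbitrary units):
P_lin = lambda k: 1.0e4 * k / (1.0 + (k / 0.02) ** 2) ** 0.9

# Illustrative tracer: b = 1.7, f = 0.8, nbar = 3e-4 (h/Mpc)^3.
P = galaxy_power(0.1, 0.5, P_lin, b=1.7, f=0.8, nbar=3e-4)
```

The ratio $\bar{n} P$ built from such a call is the quantity that controls both the shot-noise degradation of the band-power errors and the reconstruction degradation discussed below.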
Isolating the BAO feature gives the most robust, but pessimistic, view of the information that one can recover from galaxy clustering measurements, since BAO can be measured even in the presence of large unknown systematic effects (very generally these will not change the BAO scale Seo et al. (2010)). To compute isolated galaxy BAO errors we use a lightly modified version of the code that accompanies Seo and Eisenstein (2007), assuming 50% reconstruction, i.e., reduction of the BAO damping scales of Seo and Eisenstein (2007) by a factor of 0.5, except at very low number density, where we degrade the reconstruction based on White (2010). This method has held up well under close scrutiny Sherwin and Zaldarriaga (2012); Ngan et al. (2012); Taruya et al. (2009). We generally quote errors on the transverse and radial BAO scales as errors on $D_A/s$ and $1/(sH)$, respectively, where $s$ is the BAO length scale. For galaxy and quasar clustering these errors always have nearly the same correlation coefficient, 0.4.
To understand what we mean by "50%" reconstruction (and the details of our broadband calculations below, even though they don't use the same code), one has to understand how the computation in the code of Seo and Eisenstein (2007) works. Conceptually at least (i.e., before some approximations they make for purely technical reasons), they start with the idea that the observable in the Fisher matrix calculation is the BAO-only part of the power spectrum, as damped by non-linear evolution in the form of Lagrangian displacements, specifically:
$P_{\rm obs}(k,\mu) = R(k,\mu)\, P_{\rm BAO}(k)\, \exp\!\left[-\frac{k_\perp^2 \Sigma_\perp^2 + k_\parallel^2 \Sigma_\parallel^2}{2}\right],$
where $R(k,\mu)$ includes the usual bias and RSD factors. The Lagrangian displacement distances $\Sigma_\perp$ and $\Sigma_\parallel$ are estimated to scale with the linear growth factor, with $\Sigma_\parallel = (1+f)\,\Sigma_\perp$. These damping factors, along with the RSD factor, are taken outside the Fisher matrix derivatives to avoid using their structure to measure distance, which means they have the effect of modifying the covariance used in the Fisher matrix by a factor of their inverse. What we mean by "50% reconstruction" is that $\Sigma_\perp$ and $\Sigma_\parallel$ are both multiplied by a factor 0.5 relative to the above unreconstructed values. We very roughly estimate a degradation of this reconstruction due to shot noise following White (2010). The reconstruction multiplier used, $r$, is obtained by interpolating over a table of $r$ as a function of $\bar{n} P$. At low $\bar{n} P$, $r = 1$, while at high $\bar{n} P$, $r = 0.5$, i.e., at low number density there is no reconstruction, while at high density the factor is the traditional 0.5. Note that the covariance effectively used in the BAO Fisher calculation (before including the factors pulled outside the derivatives) is still computed using the full linear theory power spectrum with shot noise, i.e., equation (15).
Our primary modification of the code of Seo and Eisenstein (2007) is to allow multiple galaxy populations probing the same volume of space to be combined. We do this optimally by summing their contributions to the signal-to-noise mode-by-mode, i.e., by summing $\bar{n}_i P_i / \left(1 + \bar{n}_i P_i\right)$ over populations $i$.
Once we have estimated the covariance matrix of the transverse and radial BAO scale errors using the code of Seo and Eisenstein (2007), we propagate these constraints into more fundamental parameter constraints using the usual Fisher matrix equation (11). The observables are the transverse and radial BAO scales, with the dependence of the sound horizon on parameters included through equation (4).
We also quote errors on an isotropic dilation factor, $R$, defined as the error one would measure on a single parameter that rescales the radial and transverse directions by equal amounts. Note that this is used only when a simpler condensation of the information in the covariance matrix is desired, e.g., for plotting basic experimental power vs. redshift. The full transverse and radial constraints are used for all constraints on more fundamental parameters. To be more explicit, the fractional change in $R$ for which we project errors, $\delta_R$, is defined by
$D_A = (1+\delta_R)\, D_A^{\rm fid}\,, \qquad H = \frac{H^{\rm fid}}{1+\delta_R}\,,$
where $D_A^{\rm fid}$ and $H^{\rm fid}$ are the angular diameter distance and Hubble parameter in a fiducial Universe. The effective sensitivity of $R$ to $D_A$ and $H$ depends on the experimental scenario. For example, the simplest cases are easy to understand: for a purely transverse measurement (e.g., a photometric survey), $\sigma_{\ln R} = \sigma_{\ln D_A}$, while for a purely radial measurement (e.g., something closer to the Ly$\alpha$ forest, although it is not purely radial), $\sigma_{\ln R} = \sigma_{\ln H^{-1}}$ (or, if one is concerned about nonequivalent units, $\sigma_{\ln R} = \sigma_{\ln(c/H)}$). For intermediate cases like typical galaxy clustering, the appropriate combination of $D_A$ and $H$ can always be determined given the covariance matrix between them. Note that this error on $R$ is clearly not in general equivalent to an error on the specific combination of parameters that determines the volume element, $D_A^2/H$, as is easy to see by considering the purely radial or purely transverse examples. While those cases give a perfectly well-defined error on $R$, the error on $D_A^2/H$ is formally infinite, because one of the two parts is unconstrained. Of course, one can add the assumption that $D_A$ and $1/H$ vary proportionally, but then saying "$D_A^2/H$" just becomes an oblique way of saying $R^3$ – the powers applied to $D_A$ and $H$ have no real meaning. ($D_A^2/H$ would be directly measured by something that was really sensitive to volume, e.g., counts of a source with known physical number density.)
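Given the covariance between the fractional transverse and radial scale errors, the projected error on a single dilation parameter follows from the standard Fisher projection $\sigma_R^{-2} = \mathbf{v}^{T}\mathsf{C}^{-1}\mathbf{v}$ with $\mathbf{v}=(1,1)$, since $\delta_R$ moves both observables equally. A sketch, with illustrative error values (only the correlation coefficient 0.4 is taken from the text):

```python
import numpy as np

def dilation_error(sig_t, sig_r, rho):
    """Error on one isotropic dilation R that rescales the transverse (D_A)
    and radial (1/H) scales by the same fraction, given their fractional
    errors sig_t, sig_r and correlation coefficient rho."""
    C = np.array([[sig_t ** 2, rho * sig_t * sig_r],
                  [rho * sig_t * sig_r, sig_r ** 2]])
    v = np.ones(2)  # dR shifts both observables by the same fraction
    return 1.0 / np.sqrt(v @ np.linalg.inv(C) @ v)

# Illustrative galaxy-BAO-like numbers with the correlation ~0.4 quoted above:
sigma_R = dilation_error(0.01, 0.017, 0.4)

# Limiting case from the text: with uncorrelated errors, a purely transverse
# measurement (radial error -> infinity) gives sigma_R -> sig_t.
assert np.isclose(dilation_error(0.01, 1e6, 0.0), 0.01, rtol=1e-3)
```

The combined error is always at least as tight as the better of the two inputs, which is why $R$ is a convenient single-number summary of BAO power versus redshift.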
Going beyond BAO, we use "broadband" galaxy power, i.e., measurements of the power spectrum as a function of redshift, wavenumber, and angle with respect to the line of sight. This treatment automatically recovers all available information: not just the shape of the isotropic power spectrum, but also redshift-space distortions, the Alcock-Paczynski effect Alcock and Paczynski (1979), and of course the BAO information. Discussing isolated redshift-space distortions, as is often done, or the monopole power spectrum alone (which may sometimes be used for neutrino mass constraints), may be useful for pedagogical reasons, but generally once we go beyond BAO there is no clear systematic advantage to any subset of the broadband information, so it makes sense to just use all of it.
Bias uncertainty is modeled by a free parameter in each redshift bin, for each type of galaxy. Our results are, perhaps surprisingly, not sensitive to the redshift bin width. For example, we show an explicit comparison of some cases in Table 18, and have checked other cases (e.g., the neutrino mass constraints we show are identical to two significant digits between the finer and coarser binnings). We believe the reason for this is that the extra freedom allowed by, say, splitting an already fairly narrow bin in half – i.e., for the bias in one half to go up while the bias in the other goes down, still summing to what would have been the bias for the coarser bin – is not generally at all degenerate with cosmological parameters, because cosmological models generally do not predict this kind of rapid, anti-correlated change in relevant quantities like $f(z)\sigma_8(z)$.
We compute the broadband Fisher matrix using the usual generic equation (11), evaluated by taking numerical derivatives of the band powers with respect to all parameters. To include all geometric effects appropriately, the observable band power measurements are written in observable coordinates, i.e., radial distance is measured in redshift units and transverse distance in degrees, so that
$k_{\parallel}^{\rm obs} = k_{\parallel}^{\rm com}\, \frac{c}{H(z)}\,, \qquad k_{\perp}^{\rm obs} = k_{\perp}^{\rm com}\, D_A(z)\, \frac{\pi}{180}\,,$
where obs stands for "observed" and com stands for "comoving" (recall that $d\chi = c\,dz/H(z)$, and that $D_A$ here is comoving). Band power measurements are labeled by $k^{\rm obs}$ and $\mu^{\rm obs}$, which are held fixed under numerical derivatives with respect to parameters.
The covariance matrix of band power errors, $\left\langle \Delta P_{ab}\, \Delta P_{cd} \right\rangle$, where $\Delta P_{ab} \equiv \hat{P}_{ab} - P_{ab}$ with $\hat{P}_{ab}$ the estimate and $P_{ab}$ the true power, and with $a$, $b$, $c$, and $d$ labeling potentially multiple tracers of LSS in the same volume of space, is
$\left\langle \Delta P_{ab}\, \Delta P_{cd} \right\rangle = \frac{4\pi^2}{V\, k^2\, \Delta k\, \Delta\mu} \left( P_{ac}^{\rm tot} P_{bd}^{\rm tot} + P_{ad}^{\rm tot} P_{bc}^{\rm tot} \right),$
where $V$ is the volume of the survey and $\Delta k$ and $\Delta\mu$ are the bin widths (this formula is really only correct in the small-bin limit, and in practice we make the bins fine enough that the Fisher calculation is effectively an integral – note that as defined here that integral only covers $0 < \mu < 1$). Equation (20) is valid for all combinations of $a$, $b$, $c$, and $d$ (e.g., even if some are equal). Recall that $P_{ab}^{\rm tot} = P_{ab} + \delta_{ab}/\bar{n}_a$, so shot noise enters the errors through terms where $a$ or $b$ are equal to $c$ or $d$. The prefactor in equation (20) accounts for sample variance due to the finite volume. Different bands of $k$ and $\mu$ are assumed to be independent, which, as usual, is not strictly true for a finite volume survey, but will be irrelevant as long as the theoretical power spectrum does not have fine structure in $k$ (surveys with narrow strips may violate this condition with respect to the BAO wiggles, but not the large surveys we are most interested in).
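The single-tracer specialization of this covariance is just Gaussian mode counting: $\sigma_P = \sqrt{2/N_{\rm modes}}\, P^{\rm tot}$ with $N_{\rm modes} = V k^2 \Delta k\, \Delta\mu / (4\pi^2)$ for $\mu \in (0,1)$. A sketch, with illustrative (not survey-specific) numbers:

```python
import numpy as np

def bandpower_error(P_tot, k, dk, dmu, V):
    """Gaussian error on one band power for a single tracer:
    sigma(P) = sqrt(2 / N_modes) * P_tot, where
    N_modes = V k^2 dk dmu / (4 pi^2) counts modes with mu in (0, 1)."""
    n_modes = V * k ** 2 * dk * dmu / (4.0 * np.pi ** 2)
    return np.sqrt(2.0 / n_modes) * P_tot

# Illustrative band: k = 0.1 h/Mpc, dk = 0.01, dmu = 0.1, V = 1e9 (Mpc/h)^3,
# total (signal + shot) power 5e3 (Mpc/h)^3:
sigma = bandpower_error(P_tot=5e3, k=0.1, dk=0.01, dmu=0.1, V=1.0e9)
```

Since the error scales as $V^{-1/2}$, quadrupling the survey volume halves the band-power error, which is the basic trade-off behind the survey comparisons later in the paper.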
We use broadband power up to some quoted $k_{\rm max}^{\rm eff}$. At $k > k_{\rm max}^{\rm eff}$ we continue to use BAO information as usual (to be clear, we compute the usual BAO Fisher matrix, but cut modes with $k < k_{\rm max}^{\rm eff}$ out of the integration, since they are already included in the broadband calculation). We use two simple choices of $k_{\rm max}^{\rm eff}$: 0.1 and 0.2 $h\,{\rm Mpc}^{-1}$. These should not be taken literally as scales up to which we think linear theory will be sufficient for an analysis of future high precision data. Deviations will clearly be present even at $k = 0.1\,h\,{\rm Mpc}^{-1}$. These cutoffs are just intended to give an idea of the sensitivity of results to the effective scale out to which information is recovered after making corrections for non-linearity, presumably including some marginalization over beyond-linear bias parameters (information at higher $k$ might be used to constrain these parameters – this is why we write $k_{\rm max}^{\rm eff}$, where "eff" stands for "effective", instead of simply $k_{\rm max}$). It will be a major program of the next decade to figure out exactly how to do this fitting in practice for a high precision survey like DESI – how well we can do this will determine how well we can measure parameters. The range 0.1-0.2 $h\,{\rm Mpc}^{-1}$ is motivated by, e.g., the finding of Okumura et al. (2012b) that a Taylor series representation of redshift-space distortions could be summed to high precision up to roughly these wavenumbers, after which it appeared that the power spectrum had become deeply non-linear, i.e., the information is probably hopelessly scrambled, and by many other similar findings (e.g., Vlah et al. (2012); Gil-Marín et al. (2012); Taruya et al. (2013); Song et al. (2013); Okumura et al. (2012a); Vallinotto and Linder (2013); Linder and Samsing (2013)).
As an additional measure to be sure we are not making unreasonable predictions using the non-linear regime, we apply the same information damping factors from Seo and Eisenstein (2007) that we use for BAO to the broadband signal, i.e., the exponential factor in equation (16). This is well-motivated from a theoretical point of view, i.e., the damping is related to the propagator of Crocce and Scoccimarro (2008), which suppresses all linear theory information, not just BAO. We also include the same reconstruction factor, as there is no logical reason why similar methods could not be used to recover non-BAO information, although this has not been worked through yet. The logic for also using $k_{\rm max}^{\rm eff}$ for broadband power (i.e., not relying on these damping factors as our only cutoff), and only using BAO beyond that, is that, while for BAO we largely only need to worry about the statistical effect of damping of the signal relative to the effective noise power coming from higher order terms, to use the broadband power this effective noise must actually be predicted, which is generally harder. To avoid using the scale of this damping as a new standard ruler, we again pull it outside the Fisher matrix derivatives, effectively multiplying the power used to compute the covariance by its inverse (of course the RSD factor is not pulled out of the derivatives, as here it is part of the signal that we are interested in). To reiterate: the damping factors are applied as an additional limitation on the higher-$k$ power spectrum, on top of $k_{\rm max}^{\rm eff}$, and the reconstruction factor is included to make the broadband treatment consistent with the isolated BAO treatment (note that if we did not do this, the BAO information at $k > k_{\rm max}^{\rm eff}$ would not be the same between the isolated BAO and broadband cases, which clearly does not make sense, although we could in principle adjust it as an additional step).
IV.1.3 Isolated redshift-space distortions
Frequently, galaxy constraints beyond BAO are described as a measurement of a single redshift-space distortion (RSD) amplitude as a function of redshift, e.g., "$f\sigma_8(z)$". While it is always nice to have a one-dimensional scalar function to make simple plots, we do not use this method for our main results because it requires us to either ignore or marginalize away the uncertainty in the geometry and the primordial power spectrum shape. It is easy enough, although harder to visualize, to just include all of the broadband information. However, since it has come to be expected, we do quote isolated RSD errors as a function of redshift for BOSS and DESI, calculated using exactly the code described in McDonald and Seljak (2009). This calculation makes the assumption that both the geometry and the power spectrum shape are fixed by external constraints. This is probably the only scenario in which an isolated RSD measurement is a useful thing to quote, but we do not claim it applies in our cases. Specifically, if we assume the $k$-dependence of the power spectrum is known, the formula decomposes into a function of two parameters, $b\sigma_8(z)$ and $f\sigma_8(z)$, where $\sigma_8(z)$ is the rms normalization of the linear mass density fluctuations as a function of redshift. In tables, we identify the maximum wavenumber used to compute the error on $f\sigma_8$ by labeling it explicitly, e.g., a label of $0.2$ means the error calculation included information up to $k = 0.2\,h\,{\rm Mpc}^{-1}$. These fractional errors are equivalent to what one usually sees quoted as an error on "$f\sigma_8$", i.e., $\sigma_8$ here is intended only loosely as a parameter normalizing the power spectrum, not to mean that one necessarily has a direct measurement of fluctuations on the scale of $8\,h^{-1}\,{\rm Mpc}$ radius spheres. We need to make the scale of sensitivity more explicit because we have more than one. As in the broadband case, we always include the information damping factors of Seo and Eisenstein (2007), with reconstruction.
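For orientation, a fiducial $f\sigma_8(z)$ curve can be sketched with the common approximation $f(a) \approx \Omega_m(a)^{0.55}$ and growth obtained by integrating $d\ln D/d\ln a = f$. This is an illustrative approximation only; the paper's calculations use the full linear theory:

```python
import numpy as np

def f_sigma8(z, omega_m0=0.3, sigma8_0=0.8, gamma=0.55, n=2000):
    """fsigma8(z) for flat LCDM under the approximation f = Omega_m(a)^gamma,
    integrating dlnD/dlna = f backwards from a = 1 to a = 1/(1+z)."""
    a_end = 1.0 / (1.0 + z)
    lna = np.linspace(0.0, np.log(a_end), n)
    a = np.exp(lna)
    om_a = omega_m0 / (omega_m0 + (1.0 - omega_m0) * a ** 3)
    f = om_a ** gamma
    # Trapezoid-rule integral of f dlna gives ln[D(z)/D(0)] (negative for z > 0).
    lnD = np.concatenate([[0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(lna))])
    return f[-1] * sigma8_0 * np.exp(lnD[-1])
```

At $z = 0$ this reduces to $\Omega_{m,0}^{0.55}\,\sigma_8$; at high redshift $f \to 1$ but $D$ falls, so the curve rises, peaks, and declines, which is the qualitative shape of the quoted RSD errors' signal.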
Spectroscopic instruments designed for galaxy redshift surveys can often also probe large-scale structure using the Lyα forest McDonald et al. (2006a); Slosar et al. (2011), i.e., the Lyman-α absorption by neutral gas in the intergalactic medium in the spectra of high redshift quasars (or possibly, at faint enough magnitudes, Lyman-break galaxies McDonald and Eisenstein (2007)).
We model the three-dimensional power spectrum of Lyα forest flux fluctuations using the analytic formula of McDonald (2003),
P_F(k, μ) = b² (1 + β μ²)² P_L(k) D(k, μ),
where b is the linear bias parameter, β the redshift-space distortion parameter, and D(k, μ) is a non-linear correction calibrated from simulations, of the form
D(k, μ) = exp[ (k/k_NL)^α_NL − (k/k_P)^α_P − (k μ / k_V)^α_V ].
The first term in the exponential represents non-linear growth of real-space power (parameters k_NL and α_NL, with central values given by McDonald (2003)), the second term represents pressure smoothing of small-scale structure (k_P and α_P), and the third term represents Fingers-of-God-type suppression of radial power (k_V and α_V, with k_V itself mildly scale dependent). Table I of McDonald (2003) gives the parameter dependence of b, β, and the fitting parameters of D(k, μ).
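A minimal sketch of a flux power model of this form, P_F = b²(1 + βμ²)² P_L D, is below. The default parameter values are rough placeholders of plausible order of magnitude, not the calibrated central values of McDonald (2003).

```python
import numpy as np

# Sketch of P_F(k, mu) = b^2 (1 + beta*mu^2)^2 P_L(k) D(k, mu), with D an
# exponential non-linear correction of the McDonald (2003) form. The default
# parameter values are illustrative placeholders, not calibrated values.
def lyaf_power(k, mu, p_lin, b=0.15, beta=1.4,
               k_nl=6.4, a_nl=0.57,   # non-linear growth of real-space power
               k_p=15.3, a_p=2.0,     # pressure smoothing of small scales
               k_v=1.22, a_v=1.5):    # Fingers-of-God radial suppression
    d = np.exp((k / k_nl) ** a_nl - (k / k_p) ** a_p
               - (k * abs(mu) / k_v) ** a_v)
    return b ** 2 * (1.0 + beta * mu ** 2) ** 2 * p_lin * d
```

The competing signs in the exponent capture the qualitative behavior: mild enhancement from non-linear growth at intermediate k, then suppression at high k, strongest for radial (high-μ) modes.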
Similar to galaxies, the Lyα forest can be viewed pessimistically as a probe of just the BAO feature, or more optimistically through the broadband and smaller-scale power spectrum.
The BAO distance scale has recently been measured in the three-dimensional correlation of the Lyα forest in nearby quasar lines of sight from the BOSS survey Busca et al. (2013); Slosar et al. (2013). Using one third of the final BOSS area, the authors were able to measure the BAO distance scale at z ≈ 2.4 with an uncertainty of 2% Slosar et al. (2013).
The parameters of equation (21) given by McDonald (2003) are only valid near the central redshift of that calibration, z ≈ 2.25. For BAO error estimates, which we want to make well away from this redshift and which do not require detailed parameter dependence, we simply use the central model parameters (the Planck model happens to have nearly exactly the same amplitude and slope of the power spectrum relevant to the Lyα forest, although the WMAP model was lower), with the power spectrum additionally multiplied by a redshift-dependent factor to match the observed evolution of the 1D power with redshift McDonald et al. (2006a). Except as otherwise noted, we use the method of McDonald and Eisenstein (2007) to estimate the obtainable errors (a similar method was derived by McQuinn and White (2011)).
McDonald and Eisenstein (2007) derived the three-dimensional flux power measured from a hypothetical 3D Fourier transform of a Lyα forest data set to be
P^eff(k) = P_F(k) + P^1D(k_∥) P_w(k_⊥) + P_N
(we say “hypothetical” because we generally would not do the data analysis with a literal 3D Fourier transform, but any near-optimal analysis should obtain similar results). Here P_F(k) is the true 3D flux power spectrum that one would measure with infinite sampling, P^1D(k_∥) is the one-dimensional power spectrum along single lines of sight, P_w(k_⊥) is the power spectrum of the weighted quasar sampling function, and P_N is the weighted pixel noise power. McDonald and Eisenstein (2007) derived that
where L_F is the length of the forest in a quasar spectrum and Δx is the pixel width,
where dn/dm is the luminosity function of observed quasars as a function of magnitude m and w(m) is the weight as a function of magnitude,
where P_N,pix(m) is the pixel noise power, which will generally be a function of magnitude. The weight function is
where the signal power is, following Feldman et al. (1994), taken to be the 3D power at some typical wavenumber and angle (this must be determined iteratively, because it both determines and depends on the weights). All of this is discussed in more detail in McDonald and Eisenstein (2007). It was a guess in McDonald and Eisenstein (2007) that transverse modes must be restricted to wavenumbers less than the Nyquist frequency corresponding to the typical separation between quasars, but we subsequently realized that this must be correct, because it is the only way to recover the correct 1D Fisher matrix in the limit of infinitely sparse quasars.
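The structure of the effective noise described above, 1D power aliased through the sparse sightline sampling window plus pixel noise, can be sketched as follows (a simplified illustration assuming Poisson-distributed sightlines of equal weight, so the 2D window power is just the inverse sightline density; the 1D power shape is a toy placeholder):

```python
# Sketch of the effective noise for 3D Ly-alpha forest clustering: 1D power
# aliased through the sparse sampling window, plus pixel noise. Assumes
# Poisson-distributed sightlines of equal weight, so the 2D window power is
# 1/n_2d. The toy 1D power shape below is an invented placeholder.
def p1d_toy(k_par):
    """Stand-in for the measured 1D flux power spectrum along sightlines."""
    return 0.3 / (1.0 + (k_par / 0.01) ** 2)

def effective_noise(k_par, n_2d, p_noise):
    """Effective 3D noise power: aliasing term plus weighted pixel noise.

    n_2d: sightline density per unit transverse area; p_noise: pixel noise power.
    """
    p_w = 1.0 / n_2d              # sampling window power for Poisson sightlines
    return p1d_toy(k_par) * p_w + p_noise
```

The key qualitative point survives the simplification: the sparser the sightlines, the larger the aliasing term, which is why the Lyα forest measurement is effectively shot-noise limited.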
Given equation (23), the Lyα forest Fisher matrix calculations proceed similarly to the galaxy calculations, i.e., we evaluate the basic Fisher matrix equation (11) with the 3D flux power spectrum as the observable, and compute the covariance matrix using the effective noise power of equation (20).
In contrast to past projections, which often used a restricted rest-frame wavelength range (following McDonald et al. (2006a)), we expand the range to include the Lyβ forest and move slightly closer to the quasar, reflecting our increasing confidence that we understand the relevant issues well enough to measure BAO across this range Iršič et al. (2013). Gains from this enhancement of effective number density (and from cross-correlations with quasars, below) are substantial because the measurement is quite sparse, i.e., in what for galaxies we would call the shot-noise-limited regime.
We isolate the BAO signal by subtracting a smoothed version of the power spectrum from the wiggly one and then using the residual wiggles in our Fisher matrix derivatives. We follow the procedure of Seo and Eisenstein (2007) in dividing the noise contribution to the power errors by the RSD factor, rather than including this factor in the derivative term, which would lead to artificial (i.e., non-BAO-distance) breaking of the degeneracy between radial and transverse distance errors (this was not done in McDonald and Eisenstein (2007), leading to some underestimation of that degeneracy). We also include the Seo and Eisenstein (2007) damping factors, with no reconstruction. We have tested that our approach agrees with Seo and Eisenstein (2007) to percent level given matching assumed data sets. To be clear, the primary difference between the Lyα forest BAO Fisher matrix calculation and the galaxy version is the need to evaluate the integrals over the quasar luminosity function and spectrograph noise distribution to determine the signal-to-noise level as a function of redshift. Another difference is that we compute the error on the BAO distance through direct Fisher matrix derivatives of the wiggles-only power spectrum, rather than the Seo and Eisenstein (2007) procedure of averaging over a cosine-squared approximation for the derivatives (but again, we have checked that these methods agree remarkably precisely).
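The smooth/wiggly split can be illustrated with a schematic smooth-fit subtraction (here a low-order polynomial fit in log-log space stands in for the smooth component; the actual split follows Seo and Eisenstein (2007), and this is illustration only):

```python
import numpy as np

# Schematic isolation of BAO "wiggles": fit a smooth low-order polynomial to
# the power spectrum in log-log space and return the residual. A stand-in for
# the Seo & Eisenstein (2007) smooth/wiggly decomposition, not a reproduction.
def wiggles_only(k, pk, order=5):
    coeffs = np.polyfit(np.log(k), np.log(pk), order)
    smooth = np.exp(np.polyval(coeffs, np.log(k)))
    return pk - smooth   # residual wiggles used in the Fisher derivatives
```

For a featureless power law the residual is essentially zero, so only the oscillatory BAO component survives to contribute to the distance-scale derivatives.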
iv.2.2 Broadband and 1D power
The correlation of Lyα absorption in quasar spectra can provide other cosmological information beyond BAO. Several studies have already constrained cosmological parameters using the line-of-sight power spectrum McDonald et al. (2006a); Palanque-Delabrouille et al. (2013a); Bird et al. (2011); Afshordi et al. (2009); Seljak et al. (2006a); Viel et al. (2006); Goobar et al. (2006); Seljak et al. (2005); Viel et al. (2005); Gratton et al. (2008), and one can also obtain valuable information from the full shape of the three-dimensional clustering McDonald (2003). In the projections below we distinguish between Lyα forest BAO measurements and broadband measurements that include the one-dimensional power spectrum.
For interpreting broadband measurements, we need the parameter dependence given in Table I of McDonald (2003), i.e., how b and β, along with the fitting parameters of D(k, μ), depend on the amplitude and slope of the linear power spectrum, the temperature-density relation McDonald et al. (2001); Rudie et al. (2012), and the mean level of absorption Faucher-Giguère et al. (2008), all of which are varied in our Fisher matrix calculations. To help constrain these parameters, we include the 1D power spectrum that could be measured from (existing) high-resolution spectra McDonald et al. (2000); Kim et al. (2004).
The constraints from the Lyα forest are difficult to predict accurately, because they require careful simulation work to achieve McDonald et al. (2005a, b); Viel and Haehnelt (2006); Regan et al. (2007) – more careful than the community has been able to muster so far. The numbers we give are intended as a good central-value guess, i.e., while there is uncertainty, it is at least as likely that we will do better as worse, basically because we intentionally leave a lot of information as “contingency.” For these projections we continue to use the traditional rest-frame wavelength range of McDonald et al. (2006a), although the Lyβ forest region should provide valuable complementary information. We do not include the bispectrum or any other statistics besides the power spectrum, although they are known to be powerful for breaking IGM model degeneracies (e.g., Mandelbaum et al. (2003); Viel et al. (2004)). We do not use cross-correlations with quasar density. Finally, we only use the redshift range z = 2–2.7. The original reason for this was the limited range of applicability of the parameters of McDonald (2003), but it has the effect of reserving the large amount of higher redshift information to allow for expansion of the modeling uncertainty.
iv.3 Lyα forest-quasar cross-correlation
The cross-correlation of quasars with the Lyα forest Font-Ribera et al. (2013) provides a complementary measurement of BAO at high redshift. We use a high-noise approximation to combine separately computed constraints from the Lyα forest and quasars into one. In general, suppose we have multiple observable tracers δ_i of the mass density field δ, of the form δ_i = b_i δ + n_i, where b_i is the generalized bias (including the RSD factor) and N_i is the noise power for tracer i. If we assume the noise is uncorrelated between tracers (a generally good but not always perfect assumption Hamaus et al. (2010, 2012); Seljak et al. (2009)), it is easy to show that the optimally weighted estimate of δ has noise variance N_eff = (Σ_i b_i²/N_i)^(-1), where this applies mode-by-mode in Fourier space. This is equivalent, for galaxies where N_i = 1/n̄_i, to the statement that we can simply add the individual n̄_i P_i to find the signal-to-noise ratio for the optimal combined tracer (as mentioned above, this is how we do multiple-tracer BAO calculations, including full k and μ dependence). The rms fractional error on a combined BAO measurement can then be approximated by σ ≃ σ_0 (1 + N_eff/P), where σ_0 is the fractional error we would find from the given volume in the zero-noise limit and the ratio is evaluated at the typical k and μ of the BAO feature (remember that N_eff for the Lyα forest does depend on k). If we have BAO measurements from the individual tracers, which obey σ_i ≃ σ_0 (1 + N_i/b_i² P), we can re-write N_eff in terms of the σ_i and, in the high-noise limit, obtain σ^(-1) = Σ_i σ_i^(-1), i.e., the combined fractional BAO error is given by the inverse sum of the individual fractional BAO errors, not by the inverse quadrature sum as it would be for measurements in different volumes (the same approach could be used to derive the error without the high-noise approximation; it would just produce a more complicated-looking equation). This inverse-sum property makes the addition of a subdominant tracer like the quasars surprisingly valuable, compared to our usual intuition based on inverse quadrature sums.
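The practical difference between the two combination rules is easy to see numerically (a minimal sketch; the example errors are arbitrary):

```python
# Combining fractional BAO errors from two tracers of the SAME volume in the
# high-noise limit (inverse sum), versus tracers of DISJOINT volumes
# (standard inverse-quadrature sum).
def combine_same_volume(errors):
    """High-noise, overlapping volume: inverse of the sum of inverse errors."""
    return 1.0 / sum(1.0 / e for e in errors)

def combine_independent(errors):
    """Independent volumes: usual inverse-quadrature combination."""
    return sum(e ** -2 for e in errors) ** -0.5
```

For example, adding a subdominant 6% tracer to a 2% tracer of the same volume gives 1.5% by the inverse sum, versus only 1.9% by inverse quadrature, which is the sense in which the quasars are surprisingly valuable.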
We will justify the high-noise approximation for DESI below. It is certainly possible to do a full multiple-tracer Fisher matrix calculation for the Lyα forest and quasars, as we do for the ELG, LRG, and QSO tracers at lower redshift, but this approximation should be sufficient and is easier given the available code. We use cross-correlations with quasars only for BAO measurements, not for broadband constraints, although generally they should add information there too.
BOSS Dawson et al. (2013) is a 10000 sq. deg. survey that is almost complete. The analysis teams have chosen a certain redshift binning for the data Reid et al. (2012), but we give the continuous numbers we have been using for Fisher matrix projections in Table 2.
We assume a galaxy clustering amplitude corresponding to bD(z) = 1.7, where D(z) is the linear growth factor. For the Lyα forest calculations we use the luminosity function of Jiang et al. (2006), down to the survey magnitude limit, multiplied by 0.73 in order to match the observed number density of quasars Slosar et al. (2013).
The published analyses of the first BOSS data release (DR9) Anderson et al. (2012); Reid et al. (2012) give us the opportunity to evaluate the Fisher matrix projections against achieved reality, which we do in the next two subsections, for BAO and RSD. DR9 covered an effective area of 3275 square degrees, and these analyses focused only on the high redshift sample, CMASS, which dominates the redshift distribution above z ≈ 0.43; the formal redshift cuts were 0.43 < z < 0.7 and did not include any LOWZ targets. The number densities were very similar to our assumption (by design). Reid et al. (2012) found a clustering amplitude corresponding to a somewhat higher bias than the bD(z) = 1.7 that we assume in the Fisher matrix calculations.
iv.4.1 BOSS BAO projections vs. reality
To match the CMASS sample presented in Anderson et al. (2012), we combine the three redshift bins in Table 2 at z = 0.45, 0.55, and 0.65, halving the volume of the lowest bin to account for the observed redshift range 0.43 < z < 0.7 (with much-suppressed number density at the low end). We combine the errors by simply taking the inverse of the square root of the sum of inverse squares of fractional errors. The combined errors are rescaled to account for the effective area of 3275 square degrees. For the dilation factor, this gives projections of 1.85% error without reconstruction and 1.08% with reconstruction for the DR9 CMASS sample. Using the bias measured from the data as mentioned above, instead of our assumed amplitude, and a slightly more exact match to the number density distribution, we derive 1.94% and 1.15% (this 6% change in distance error is so small that we continue to use the traditional value for projections). This is to be compared with the average of the mocks in Fig. 13 of Anderson et al. (2012), which is approximately 2.6% and 1.8% before and after reconstruction, respectively; the mock results in Anderson et al. (2012) are therefore 1.34 (before reconstruction) and 1.57 (after reconstruction) times the Fisher matrix projections. The actual measured results from the data are better than expected, 1.7% both before and after reconstruction, but if we believe the mocks accurately represent the statistics of the measurement, these differences must be just statistical fluctuations, and in any case the post-reconstruction ratio of the error measured from data to the Fisher estimate is still 1.48.
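The bin combination and area rescaling just described amount to the following (a minimal sketch; the bin errors in the test are arbitrary illustrative values):

```python
# Combine per-bin fractional BAO errors in inverse quadrature, then rescale
# from the nominal survey area to a smaller effective area (fractional
# errors scale as area^-1/2 at fixed number density).
def project_survey_error(bin_errors, nominal_area, effective_area):
    combined = sum(e ** -2 for e in bin_errors) ** -0.5
    return combined * (nominal_area / effective_area) ** 0.5
```

This is the standard treatment for statistically independent redshift bins; correlations induced by survey edges are neglected.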
We can only speculate about the reasons for this discrepancy, i.e., factors of 1.34 before reconstruction and 1.57 after. First, note that Anderson et al. (2012) find practically identical errors using the power spectrum or the correlation function, so a difference between these two approaches cannot be the explanation. Anderson et al. (2012) used a spherically averaged power spectrum, while our Fisher matrix assumes an anisotropic power spectrum when deriving an isotropic error. According to Takahashi et al. (2009), using a spherically averaged power spectrum should return a slightly worse error than using an anisotropic power spectrum and then projecting the two-dimensional distance errors onto the dilation factor; for our fiducial parameters, however, we expect only a percent-level difference in error. The likelihood curve for the dilation factor in the real survey could be non-Gaussian, but Anderson et al. (2012) estimate the likelihood to be fairly Gaussian. Meanwhile, the sample variance on the errors of the dilation scale from the finite number of mocks is not big enough to explain the discrepancy; for 600 mocks we expect a fractional uncertainty of order 1/√(2 × 600) ≈ 3%. The level of Fingers-of-God suppression in the CMASS sample introduces only a small effect in the Fisher errors and therefore cannot explain the discrepancy. It is possible that small-scale power due to nonlinear structure growth and bias could have increased the effective shot noise level relative to the underlying BAO signal, although an amount of power large enough to increase the errors this much should be obvious in the band-power measurement. Finally, the power spectrum and correlation function estimators used in Anderson et al. (2012) are not precisely optimal, although we would not expect the effect to be this large.
The discrepancy between CMASS DR9 and the Fisher formalism increases after reconstruction. We note that the Fisher matrix errors have not been as rigorously tested for the reconstructed field as for the original field (although see Ngan et al. (2012), which did careful tests with a reconstructed field, albeit only for the mass density). The dependence of the noise properties of the reconstructed field on the details of the reconstruction needs detailed tests in the future. For example, the reconstruction in Anderson et al. (2012); Reid et al. (2012) adopted conventions to restore isotropy on large scales, which were not used in many of the tests of the method, and may not have well-understood noise properties.
We note that the geometry of DR9 was significantly stripey, i.e., not a nice compact (e.g., square) 3275 sq. deg. Anderson et al. (2012), which could lead to an unavoidable degradation in the measurement relative to the Fisher matrix (basically because the window-induced correlation length in Fourier space could become too large to resolve the BAO wiggles), could degrade reconstruction, and could exacerbate any sub-optimality in the analysis. The next generation of analyses will use quite compact areas, so it will be interesting to see if the discrepancy is reduced. In fact, while this paper was under review, BOSS published an analysis of the 8500 sq. deg. DR11 data set Anderson et al. (2013). Their mock-based mean error estimates, from their Table 4, are roughly 10% above our projections for this volume, with this factor essentially identical pre- and post-reconstruction. This does suggest that a substantial part of the problem with DR9 reconstruction was the pathological geometry. By the same token, the relatively modest improvement in the pre-reconstruction results suggests that the remaining discrepancy is not likely to be related to geometry; however, it is small enough to be plausibly explained by combinations of the other reasons discussed above.
The bottom line is: the errors predicted by BOSS collaboration mocks are somewhat (roughly 10% in DR11) larger than standard Fisher matrix projections for them, and we are not entirely sure why, considering the many tests that have been done. However, we have already identified 10% worth of fixable sub-optimality above, so overall this does not seem like a big problem for our projections for future surveys. There are three logical possibilities that should be explored for the remainder: the Fisher projections are overly optimistic somehow, the analysis of the mocks and data is noticeably sub-optimal, or some imperfection in the mocks leads even an optimal analysis to produce incorrectly large errors.
iv.4.2 BOSS redshift-space distortion projections vs. reality
Predictions for RSD are generally less certain than for BAO (really we only project a range of possibilities, for different k_max), but it is still useful to see how they compare to the real analysis of Reid et al. (2012). From Table 2, we estimate the expected fractional error on fσ8, for k_max = 0.1 (0.2) h/Mpc, to be 4.0 (2.0)% for 10000 sq. deg., or 7.0 (3.4)% for DR9.
Model 2 in Reid et al. (2012), which treats fσ8 as a free parameter and marginalizes over uncertainty in the linear matter power spectrum and the flat ΛCDM distance-redshift relation, finds an 8.0% fractional error relative to the measured value, or 6.9% of our fiducial value (there is some statistical error in this kind of percent error, of order the error itself, i.e., if the measured value fluctuates low, a percent error based on it will be larger, and vice versa). Table 2 of Reid et al. (2012) shows that the uncertainty remaining in the linear matter power spectrum and ΛCDM distance-redshift relation after including CMB constraints does not contribute to the DR9 error budget for measuring fσ8, so we can directly compare with the Fisher projections above. Table 2 of Reid et al. (2012) also shows that the uncertainty would be reduced to 7.0% (6.0% of our fiducial value) if a Finger-of-God nuisance parameter were held fixed. Since the DR9 analysis was performed in configuration space and the Fisher analysis in Fourier space, an attempt to compare at equal scales can only be approximate. Reid and White (2011) found an approximate mapping between a minimum configuration-space scale and an equivalent k_max, which suggests k_max ≈ 0.2 h/Mpc for the DR9 analysis. The bottom line seems to be that the DR9 results are consistent with the projections assuming k_max ≈ 0.1 h/Mpc. This is consistent with the idea that fits will go to somewhat larger wavenumbers, but lose some information to marginalization over nuisance parameters describing nonlinear effects such as Fingers-of-God, and some to non-Gaussian errors that enhance the data covariance matrix above the naive Fisher prediction. We conclude that the scheme adopted in this work gives a reasonable estimate of constraints that are achievable today with k_max ≈ 0.1 h/Mpc; we are optimistic that future theoretical improvements will further enhance the constraining power of future surveys.
eBOSS is a proposed extension of BOSS that would cover 7500 sq. deg.: 6000 sq. deg. focused entirely on quasars and LRGs at slightly higher redshift than the BOSS LRGs, and another 1500 sq. deg. that adds ELGs similar to those discussed below for DESI (in addition to LRGs and quasars as in the 6000 sq. deg.). Table 3 shows basic numbers for eBOSS Zhao et al. (2014). eBOSS will also target Lyα quasars over the full 7500 sq. deg., and re-observe some BOSS quasars to obtain better signal-to-noise in the spectra, in order to improve the Lyα BAO measurement from BOSS. For simplicity, we do not consider the Lyα part of eBOSS in our analysis.
We assume that we can add the constraints from the two areas as if they are independent (in the usual way of this kind of Fisher matrix, this will be correct up to survey edge effects).
For HETDEX (http://hetdex.org), we do not have a complete redshift distribution, only the total number of galaxies (0.8 million), the area (420 sq. deg.), and the redshift range 1.9 < z < 3.5 Chiang et al. (2013), so we use a fixed number density across this range. We use the bias of Chiang et al. (2013). Table 4 shows the basic HETDEX numbers.
In the interest of limiting the length of our main results tables, we only include a limited set of cases using HETDEX.
DESI (short for Dark Energy Spectroscopic Instrument Levi et al. (2013)) is a galaxy and quasar redshift survey likely to run on the Mayall 4 meter telescope at Kitt Peak National Observatory near Tucson, AZ, over an approximately five year period from 2018 to 2022. The baseline area is 14000 sq. deg. We also consider the possibility that the spectrograph could then be moved to the twin Blanco telescope in Chile to cover additional area in the South. We take the BigBOSS numbers to represent DESI, although this is not set in stone (see Schlegel et al. (2011); Mostek et al. (2012a), but the numbers we actually use are the revised ones presented in Mostek et al. (2012b); Levi et al. (2013)). DESI will target three types of objects. Luminous Red Galaxies (LRGs) are bright, highly biased red objects that are easy to target from photometric data (since the galaxy target selection for BOSS is not exactly the same as for the SDSS I and II LRGs, they are distinguished as CMASS and LOWZ galaxies in BOSS analysis papers; at the level of this paper, however, they are essentially the same class of objects, which we will call LRGs). A second class of objects are Emission Line Galaxies (ELGs) Mostek et al. (2013), which require a higher resolution spectrograph to type and redshift, since this is only possible if the [OII] doublet is resolved. ELGs are considerably less biased than LRGs; for ELGs we use the bias of Mostek et al. (2013). Finally, we use quasars as tracers of cosmic structure. Quasars are difficult to target photometrically, especially in the redshift range relevant here, but can be very efficiently targeted using variability. They are very highly biased, but are limited by their low number density, which is considerably smaller than that of LRGs and ELGs. For quasars we use a bias loosely based on Ross et al. (2009).
Numbers we use for Fisher matrix projections are given in Table 5.
The quasar luminosity function used for the Lyα forest calculation follows Palanque-Delabrouille et al. (2013b), down to the assumed magnitude limit, with a 0.8 reduction in numbers to allow for targeting inefficiency. The spectral signal-to-noise ratio that we use, computed using the BBspecsim code Mostek et al. (2012a), is shown in Figure 1.
Note that we always absorb BOSS and eBOSS into DESI, as they will be physically overlapping (it was not necessary to combine BOSS and eBOSS because they cover distinct redshift ranges).
At the heart of the Lyα forest data, the effective nP is small for DESI quasar clustering, and scaling from the Lyα forest BAO errors gives a similarly small value for the forest itself, i.e., any corrections to the high-noise limit when combining the two will be modest, although we are on the borderline of applicability for this approximation. Note that we have not included reconstruction of the non-linear damping of the BAO feature, which might produce a small improvement.
We only include the redshift survey from Euclid, assuming 15000 sq. deg. and a total of 50 million galaxies, based on Laureijs et al. (2011) (note that there is some uncertainty in the expected number of galaxies). Adding Euclid lensing should be qualitatively similar to LSST. Numbers we use for Euclid Fisher matrix projections are given in Table 6.
We assume a fiducial Euclid galaxy bias (compared to the formula of Spergel et al. (2013) in the WFIRST section below). For simplicity, we assume Euclid does not overlap with DESI. To the extent that there is some overlap, there will be some degradation of the combined constraints relative to what we quote (we do not see any case where this should critically change one’s basic picture of how well parameters can be measured).
Note that the WFIRST-AFTA report Spergel et al. (2013) appears to suggest that these Euclid numbers are very optimistic. They forecast that Euclid will find number densities lower by factors of 8, 16, and 30, at z = 1.1, 1.5, and 1.9, respectively, than they forecast for WFIRST, which corresponds to factors of 0.38, 0.49, and 0.49 times the number densities we use in this paper – this would obviously lead to some degradation of our projections for Euclid. To be clear: we continue to use the relatively optimistic “official” Euclid numbers, with 50 million total galaxies as shown in Table 6. The WFIRST report suggests that the numbers we use are too high by a factor of roughly 2 to 2.6 (the factors of 8, 16, and 30 are relative to the much higher WFIRST densities – we quote these numbers to be precise about what the WFIRST report says).
We implement WFIRST-2.4 following Spergel et al. (2013). Number densities come from their Table 2.2. The bias formula that we use for Euclid happens to match the formula of Spergel et al. (2013) exactly at one redshift within the surveyed range, and agrees within 10% over the full redshift range, so we use it also for WFIRST. The numbers we use for Fisher projections are given in Table 7.
In the interest of limiting the length of our main results tables, we only include a limited set of cases using WFIRST.
iv.10 Summary of S/N and BAO distance errors vs. redshift for redshift surveys
Coincidentally, the BAO scale and the non-linear scale are quite similar, so BAO errors summarize well the general relative constraining power of redshift surveys and its redshift dependence. The signal-to-noise ratio for typical BAO-scale modes in redshift space is shown in Fig. 2.
We evaluate nP at k = 0.14 h/Mpc, μ = 0.6, an approximate center-of-weight point for BAO measurements. We chose the numbers 0.14 and 0.6 by looking for the point where nP = 1 corresponded to the optimum in a trade-off between area and number density at fixed total number of objects (specifically, for the full range of parameters covered by DESI LRGs and ELGs). We think this definition reflects the origin of the idea that nP = 1 is a special point, but it should be kept in mind that achieving nP = 1 by this definition does leave a survey significantly farther from the sample variance limit than the traditional definition, evaluated at a smaller scale without the RSD boost.
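The quantity being evaluated is simply the following (a sketch; the numbers in the example are arbitrary, and p_mass stands for the matter power spectrum at the chosen scale):

```python
# nP evaluated with the Kaiser RSD factor at a chosen (k, mu). With mu = 0.6
# the factor (b + f*mu^2)^2 boosts the effective power relative to a
# real-space (mu = 0) definition, so nP = 1 here corresponds to a noisier
# survey than nP = 1 under the traditional definition.
def n_p(nbar, bias, f_growth, p_mass, mu=0.6):
    """nbar in (h/Mpc)^3, p_mass in (Mpc/h)^3; returns the dimensionless nP."""
    return nbar * (bias + f_growth * mu ** 2) ** 2 * p_mass
```

Surveys with nP well above 1 at this point are sample-variance limited; well below 1, shot-noise limited, which is the regime relevant for the quasar and Lyα forest tracers.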
Projected BAO distance errors are shown in Fig. 3.