Observational bounds on the cosmic radiation density
Abstract
We consider the inference of the cosmic radiation density, traditionally parameterised as the effective number of neutrino species $N_{\rm eff}$, from precision cosmological data. Paying particular attention to systematic effects, notably scale-dependent biasing in the galaxy power spectrum, we find no evidence for a significant deviation of $N_{\rm eff}$ from the standard value $N_{\rm eff} = 3.046$ in any combination of cosmological data sets, in contrast to some recent conclusions of other authors. The combination of all available data in the linear regime prefers, in the context of a "vanilla+$N_{\rm eff}$" cosmological model, a best-fit value of $N_{\rm eff} = 2.6$, with a 95% C.L. interval that comfortably includes the standard value. Adding data at smaller scales, notably the Lyman-α forest, we find a best fit of $N_{\rm eff} = 3.8$. Inclusion of the Lyman-α data shifts the preferred $N_{\rm eff}$ upwards because the fluctuation amplitude derived from the SDSS Lyman-α data is inconsistent with that inferred from the CMB. In an extended cosmological model that includes a non-zero mass for the ordinary neutrinos, a running scalar spectral index and an equation of state parameter $w$ for the dark energy, we find a best fit of $N_{\rm eff} = 3.0$.
MPP-2007-50
TUM-HEP-668/07
1 Introduction
The observed global properties of the universe can be remarkably well described by the ΛCDM model in conjunction with simple initial conditions for the primordial density fluctuation spectrum. In its simplest form the model is geometrically flat and represented by non-trivial values for six key parameters: the baryon density, the dark matter density, the Hubble parameter, the amplitude and spectral index of primordial adiabatic scalar fluctuations, and the optical depth to reionisation. No single additional parameter provides a substantially better fit to currently available data, a situation summarised by Max Tegmark's dictum, "vanilla rules ok" [1].
There are however many ways to extend this vanilla model, some of which are physically well-motivated, such as a non-trivial equation of state for the dark energy, or a running spectral index for the spectrum of primordial density fluctuations. An extension with a non-vanishing hot dark matter component is actually unavoidable, because neutrinos are known to have mass and the current direct laboratory limits are so loose that neutrino hot dark matter could easily play an important role. Many authors have sought to constrain neutrino masses in the context of ΛCDM cosmology by inference from cosmological data, and found no evidence for a non-vanishing value at the level of precision that can be achieved with existing data.
Another extension invokes a non-standard radiation density, traditionally parameterised by the effective number of neutrino species $N_{\rm eff}$, with $N_{\rm eff} = 3.046$ being the standard value [2]. This tradition dates back to the time before LEP at CERN measured the number of ordinary neutrino species to be three, when big bang nucleosynthesis (BBN) provided the only significant upper limit on the number of particle families. Today, constraining $N_{\rm eff}$ with cosmological data is primarily a consistency test of standard particle physics with concordance cosmology, and of concordance cosmology with itself, because one can compare the radiation density allowed by BBN with that implied by precision cosmological data which probe physics at different epochs.
This exercise has been performed by several groups before [3, 4, 5, 6, 7, 8] and after [9, 10, 11, 12, 13, 14, 15] the release of the WMAP 3-year data [15, 16, 17]. Some of these recent results suggest surprisingly large values for $N_{\rm eff}$, with 95% C.L. intervals that do not always include the standard value [9, 13, 15]. The apparent conflict of these results and the exciting possibility of a deviation from the minimal cosmology have motivated us to re-examine the cosmological determination of $N_{\rm eff}$. Our goals are twofold: first, to identify the source of discrepancy in previous analyses, and second, to provide an up-to-date estimation of $N_{\rm eff}$ within more general model frameworks.
One possible source of the overestimation of $N_{\rm eff}$ is an incorrect statistical methodology. The popular software GetDist, an analysis package frequently used in conjunction with the Monte Carlo Markov Chain generator CosmoMC [18, 19] for cosmological parameter estimation, provides by default 1D error estimates based on the central rather than the minimal credible interval, although the latter is more meaningful for inference problems. These constructions differ significantly for skewed distributions, but become identical in the Gaussian limit. We find that this effect can indeed be significant if one uses a small number of data sets that are not very constraining, since in these cases the 1D marginal posterior distribution for $N_{\rm eff}$ often has a long tail towards large values as a result of strong degeneracies with other parameters. However, when many data sets are combined and conspire to remove these degeneracies, the 1D posterior for $N_{\rm eff}$ usually becomes narrow enough to approach the Gaussian limit. Therefore the different error construction methods are probably not the main source of discrepancy.
The two main problems we have identified that affect the determination of $N_{\rm eff}$ are (i) an unusually large fluctuation amplitude reconstructed from the Lyman-α forest data [20] relative to that inferred from WMAP, and (ii) the treatment of scale-dependent biasing in the galaxy power spectrum inferred from the main galaxy sample of the Sloan Digital Sky Survey data release 2 (SDSS-DR2) [21, 22]. The first issue is well known, and its complete investigation (involving elaborate astrophysical modelling) is beyond the scope of the present work. The second issue is more subtle. In previous analyses, scale-dependent biasing in SDSS-DR2 has either been ignored [15], or treated with empirical correction formulae under overly restrictive conditions [9, 13]. We will explain this issue in more detail in section 4 below. Here we anticipate that no exotic values for $N_{\rm eff}$ will be found if one either avoids small-scale data altogether, or avoids artificially constraining assumptions about the extent of the scale dependence.
To derive our estimate for $N_{\rm eff}$, we begin in section 2 with a description of our cosmological parameter framework, and in section 3 of the cosmological data to be used. In section 4 we discuss the problem of galaxy bias and its scale dependence. In section 5 we compare different statistical inference methods frequently encountered in the context of cosmological parameter estimation, and the way they provide "best-fit parameters" and associated error estimates. In section 6 we study $N_{\rm eff}$ in a minimal cosmological model which has a non-standard radiation density as its only extension to vanilla cosmology. We use this simple scenario as a benchmark to compare results from different combinations of data and from different statistical methods. In section 7 we consider an extended model that also includes as free parameters a constant dark energy equation of state parameter, a running spectral index, and neutrino masses. In the framework of standard Bayesian statistics we provide credible intervals for $N_{\rm eff}$. In section 8 we summarise our findings.
2 Cosmological models
We perform our inference in the framework of a cosmological model with vanishing spatial curvature, described by eleven free parameters,
\[
\theta = \left\{ \omega_{\rm dm},\, \omega_{\rm b},\, h,\, \tau,\, \ln(10^{10}A_s),\, n_s,\, f_\nu,\, N_\nu,\, N_{\rm eff},\, w,\, \alpha_s \right\}.
\tag{2.1}
\]
Here, the physical dark matter density $\omega_{\rm dm} = \Omega_{\rm dm} h^2$, the baryon density $\omega_{\rm b} = \Omega_{\rm b} h^2$, the Hubble parameter $h$, the optical depth to reionisation $\tau$, and the amplitude $A_s$ and spectral index $n_s$ of the primordial scalar power spectrum are collectively labelled the "vanilla" parameters. They represent the simplest parameter set necessary for a consistent interpretation of currently available data.
The next three parameters denote a non-zero neutrino fraction $f_\nu$ of the present-day dark matter content, the number $N_\nu$ of massive neutrino species (assuming a common mass value $m_\nu$ for all of them), and the total effective number $N_{\rm eff}$ of massless plus massive neutrino species. Of course, $N_{\rm eff}$ can also include other forms of radiation. With these definitions, $f_\nu$ enters the present-day energy density as
\[
\omega_\nu \equiv \Omega_\nu h^2 = f_\nu\, \omega_{\rm dm} = \frac{N_\nu\, m_\nu}{93.14\ {\rm eV}}.
\tag{2.2}
\]
During the radiationdomination epoch the total energy density is
\[
\rho_{\rm r} = \left[ 1 + \frac{7}{8} \left( \frac{T_\nu}{T_\gamma} \right)^{\!4} N_{\rm eff} \right] \rho_\gamma,
\tag{2.3}
\]
where $T_\gamma$ and $T_\nu = (4/11)^{1/3}\, T_\gamma$ are the photon and neutrino temperatures, respectively.
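As a numerical illustration, equations (2.2) and (2.3) can be sketched as follows; the sketch assumes the standard instantaneous-decoupling temperature ratio $T_\nu/T_\gamma = (4/11)^{1/3}$ and the standard conversion $\omega_\nu = N_\nu m_\nu / 93.14\,{\rm eV}$, with purely illustrative input values:

```python
def radiation_density_factor(n_eff):
    """Eq. (2.3): rho_r / rho_gamma = 1 + (7/8) (T_nu/T_gamma)^4 * N_eff,
    with T_nu/T_gamma = (4/11)^(1/3) (instantaneous decoupling)."""
    return 1.0 + 0.875 * (4.0 / 11.0) ** (4.0 / 3.0) * n_eff

def omega_nu(n_nu, m_nu_ev):
    """Eq. (2.2): omega_nu = N_nu * m_nu / 93.14 eV (standard relation)."""
    return n_nu * m_nu_ev / 93.14

# Each massless species adds the familiar 0.2271 to rho_r / rho_gamma:
print(round(radiation_density_factor(1.0) - 1.0, 4))  # 0.2271
# Three species of 0.3 eV each (illustrative mass value):
print(round(omega_nu(3, 0.3), 4))                     # 0.0097
```

The coefficient $0.875 \times (4/11)^{4/3} \simeq 0.2271$ is why a radiation excess is often quoted as $\Delta N_{\rm eff}$ rather than as an absolute density.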
The last two parameters in equation (2.1) represent a constant equation of state parameter $w$ for the dark energy, and a running parameter $\alpha_s = {\rm d}n_s/{\rm d}\ln k$ for the scalar power spectrum, defined at the pivot scale $k_0$.
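For concreteness, the primordial scalar spectrum with running can be sketched as below; the default parameter values and the pivot scale $k_0 = 0.05\,{\rm Mpc}^{-1}$ are illustrative assumptions, not the values used in our fits:

```python
import math

def primordial_spectrum(k, a_s=2.3e-9, n_s=0.95, alpha_s=0.0, k_pivot=0.05):
    """Primordial scalar spectrum with a running index:
    P(k) = A_s * (k/k0)^(n_s - 1 + (alpha_s/2) * ln(k/k0))."""
    x = math.log(k / k_pivot)
    return a_s * math.exp((n_s - 1.0 + 0.5 * alpha_s * x) * x)

# With alpha_s = 0 the spectrum is a pure power law; at the pivot scale
# it equals A_s regardless of n_s and alpha_s:
print(primordial_spectrum(0.05))  # equals a_s exactly
```

A negative $\alpha_s$ suppresses power on scales far from the pivot in both directions, which is why the running is degenerate with other parameters that tilt or damp the small-scale spectrum.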
The vanilla cosmological model is defined by holding all non-vanilla parameters fixed at their standard values given in table 1. In the same table we also show the priors assumed for all cosmological fit parameters. We shall consider several scenarios, each including $N_{\rm eff}$ as a free parameter.
Parameter            Standard   Prior 1   Prior 2
$\omega_{\rm dm}$    —          –         –
$\omega_{\rm b}$     —          –         –
$h$                  —          –         –
$\tau$               —          –         –
$\ln(10^{10}A_s)$    —          –         –
$n_s$                —          –         –
$f_\nu$              0          –         –
$N_\nu$              0          –         –
$N_{\rm eff}$        3.046      –         –
$w$                  −1         –         –
$\alpha_s$           0          –         –
Minimal model
Our minimal model (section 6) has seven free parameters, namely, vanilla+$N_{\rm eff}$, while the other parameters are fixed at their standard values. In particular, all neutrinos are assumed to be massless. Most constraints on $N_{\rm eff}$ in the recent literature were derived within this framework [15, 9, 11, 12, 13]. Therefore, the minimal model lends itself as a benchmark case for studying differences and similarities between our results and those of previous authors, as well as between different analysis methods.
Extended models
As in the minimal model, our extended models (section 7) always include the vanilla parameters and $N_{\rm eff}$. In addition, we include neutrino masses and hence the parameter $f_\nu$. Extended models with $N_{\rm eff}$ as a free parameter were also considered in Refs. [3, 7, 8, 10]. However, there are many different ways to incorporate neutrino masses into the analysis. We shall consider two scenarios. In the first, we assume that all degrees of freedom represented by $N_{\rm eff}$ have a common mass $m_\nu$, i.e., $N_\nu = N_{\rm eff}$. An increased effective number density of ordinary neutrinos could be due to, for example, a chemical potential in the neutrino phase space. (Technically, even though a chemical potential does increase the neutrino number density, our treatment does not fully cover this case, because a chemical potential entails a neutrino velocity dispersion different from that of the standard non-degenerate Fermi–Dirac distribution.)
A second way to include neutrino masses is to fix $N_\nu = 3$, i.e., the standard density of ordinary neutrinos, each with a mass $m_\nu$, is guaranteed. The remaining species are massless degrees of freedom that truly represent radiation; we do not assume anything about their physical nature. A prior restricting $N_{\rm eff}$ to at least the standard value will be used in this case.
In both cases we also consider more elaborate scenarios in which $w$ and $\alpha_s$ are treated as free parameters, motivated by the well-known degeneracies of these parameters with $N_{\rm eff}$ [3] and with the neutrino mass [23]. Studying these larger models and comparing them with simpler ones illustrates how well combinations of different data sets can break the degeneracies.
3 Data
3.1 Cosmic microwave background (CMB)
3.2 Large scale structure (LSS)
The large-scale matter power spectrum has been inferred from the galaxy clustering data of the Sloan Digital Sky Survey (SDSS) [1, 25, 21, 22] and the Two-degree Field Galaxy Redshift Survey (2dF) [26]. In particular, the luminous red galaxy (LRG) sample from the recent SDSS data release 5 (DR5) supersedes all previous power spectrum measurements in terms of statistical significance [1, 25]. However, the "old" spectrum retrieved from the SDSS main galaxy sample of data release 2 (SDSS-DR2) [21, 22] is still drawing attention, primarily because the parameter estimates inferred therefrom appear to be in conflict with those derived from other probes. We shall therefore analyse this data set as well. As it turns out, the apparent discrepancy can be explained in terms of scale-dependent bias (section 4).
3.3 Baryon acoustic oscillations (BAO)
The baryon acoustic oscillation peak has been measured in the SDSS luminous red galaxy sample [27]. We use all 20 points in the two-point correlation data set supplied in Ref. [27] and the analysis procedure described therein, including power spectrum dewiggling, nonlinear corrections with the Halofit package [28], corrections for redshift-space distortion, and analytic marginalisation over the normalisation of the correlation function. Except for the last marginalisation, these corrections are applied largely for cosmetic reasons; we obtain essentially the same results even without them.
3.4 Type Ia supernovae (SNIa)
We use the luminosity distance measurements of distant type Ia supernovae provided by Davis et al. [29]. This sample is a compilation of supernovae measured by the Supernova Legacy Survey (SNLS) [30], the ESSENCE project [31], and the Hubble Space Telescope [32], as well as a set of 45 nearby supernovae. In total the sample contains 192 supernovae.
3.5 Hubble space telescope key project (HST)
In some cases we use the direct measurement of the Hubble parameter from the HST key project, $H_0 = 72 \pm 8\ {\rm km\,s^{-1}\,Mpc^{-1}}$ [33].
3.6 Lyman-α forest (Lyα)
Measurements of the flux power spectrum of the Lyman-α forest have been used to reconstruct the matter power spectrum on small scales at large redshifts. By far the largest sample of spectra comes from the SDSS survey. This data set was carefully analysed in McDonald et al. [20] and used to constrain the linear matter power spectrum, yielding a linear fluctuation amplitude and an effective spectral index at the characteristic scale and redshift of the data. These results were derived using a very elaborate model of the local intergalactic medium in conjunction with hydrodynamic simulations.
While the Lyα data provide in principle a very powerful probe of the fluctuation amplitude on small scales, the question remains as to the level of systematic uncertainty in the result. The same data have been reanalysed by Seljak et al. [9] and Viel et al. [34, 35, 36], with somewhat different results. Specifically, the normalisation found in Refs. [34, 35, 36] is lower than that reported in Ref. [20].
We shall use the default Lyα module provided in the CosmoMC package in some parts of our analysis. This module uses the SDSS-Lyα data based on McDonald et al. [20], and does not support the parameters $f_\nu$, $w$ and $\alpha_s$ of our extended models (it does support $N_{\rm eff}$, however). Therefore, the Lyα data will be analysed only in the context of the minimal model.
We stress that our Lyα results would likely be somewhat different if the Viel et al. analysis of SDSS-Lyα had been used. However, when all available cosmological data sets are used in combination, the Lyα data carry relatively little weight in the combined fit for $N_{\rm eff}$ and are not crucial for our conclusions.
4 Scale-dependent bias
The conventional wisdom behind using galaxy survey data to infer the underlying matter distribution is that, on sufficiently large scales, the galaxy power spectrum traces that of the total matter content calculated from linear theory, up to a constant, scale-independent bias factor $b$,
\[
P_{\rm g}(k) = b^2\, P_{\rm m}(k).
\tag{4.1}
\]
This relation is of course not exact, and its region of applicability is limited. On sufficiently small scales we expect nonlinear evolution to cause its breakdown.
One obvious source of correction is the nonlinear growth of the underlying matter density field on small scales. Another is the violation of scale independence for the galaxy bias. The latter arises from the fact that galaxy formation takes place preferentially in dark matter halos with certain optimal masses, which are themselves biased tracers of the matter distribution [37, 38]. Indeed, depending on the galaxy morphology, theoretical modelling and numerical simulations suggest that the galaxy bias can deviate markedly from scale independence already at nominally linear scales [39, 40]. The problem this presents to cosmological parameter estimation is immediate: power spectrum measurements on scales in the vicinity of the nonlinear scale carry substantial weight in statistical inferences because of their small formal error bars. Improper handling of the galaxy bias will therefore likely yield misleading results, a point we discuss in more detail below.
Unfortunately, neither theoretical modelling nor simulations are as yet able to accurately predict the galaxy bias and its scale dependence. In the meantime, we have the option to either (i) cut the data at a suitably small $k_{\max}$, or, if we want to use more data points, (ii) introduce some fitting formula that crudely models the effect of a scale-dependent bias and then marginalise over the associated nuisance parameters. For the latter approach and in the framework of ΛCDM cosmologies, Ref. [26] suggests the formula
\[
P_{\rm g}(k) = b^2\, \frac{1 + Qk^2}{1 + Ak}\, P_{\rm m}(k),
\tag{4.2}
\]
where $A$ is fixed, and $b$ and $Q$ are free parameters to be marginalised. While the issue of bias correction was not explored in the parameter estimation analysis of SDSS-DR2 [22], both options (i) and (ii) were considered in the context of the vanilla model by the 2dF [26] and the SDSS-DR5 [1] teams in their respective analyses. Both analyses found that, after marginalisation over $Q$, additional data beyond $k_{\max}$ in option (ii) lead to no significant deviation in the cosmological parameter estimates or improvement in the errors compared to those obtained with the simpler option (i).
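A minimal sketch of how the correction formula (4.2) would be applied in practice; the default values of $b$, $Q$ and $A$ below are purely illustrative, not the values used in any of the fits discussed here:

```python
def biased_power(p_matter, k, b=2.0, q=4.6, a=1.4):
    """Eq. (4.2): P_gal(k) = b^2 * (1 + Q k^2) / (1 + A k) * P_matter(k).
    A is held fixed; b and Q are nuisance parameters to be marginalised.
    Default values are illustrative only."""
    return b ** 2 * (1.0 + q * k ** 2) / (1.0 + a * k) * p_matter

# In the limit k -> 0 the correction reduces to the constant-bias
# relation (4.1):
print(biased_power(1.0, 0.0, b=1.0))  # 1.0
```

In an MCMC analysis one would evaluate this correction at every chain step and integrate (or maximise) over $b$ and $Q$, which is what "marginalising over the nuisance parameters" amounts to operationally.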
Conversely, if we ignore the issue of scale-dependent bias and adhere strictly to the relation (4.1), then it has been shown that the 2dF-inferred matter density tends towards higher values with increasing $k_{\max}$ [26]. More strikingly, analyses of the SDSS-DR5 data show that the best-fit values inferred on large and on small scales differ by 2σ or more under the constant-bias assumption (4.1) [25]. Significant scale dependence in the galaxy bias has been put forward to explain the apparent tension between the galaxy power spectra measured by 2dF and SDSS, the latter of which tends to select the more strongly-biased red galaxies [25, 41]. For the purpose of constraining a possible non-standard radiation density, we note that the well-known degeneracy between $N_{\rm eff}$ and the matter density means that any inference of $N_{\rm eff}$ will be highly sensitive to how we handle the bias issue, a point also raised in Ref. [12]. We consider both a conservative and a more speculative approach.
Conservative approach: LSS-lin
In the conservative approach, we use power spectrum data only on scales that are safely linear:

- 2dF-lin (17 bands),

- SDSS-DR2-lin (11 bands), and

- SDSS-LRG-lin from DR5 (11 bands).
The combined set of these data is denoted LSSlin. We adopt the constant bias assumption (4.1) for each data set, and marginalise over each of the three bias parameters with a flat prior.
Speculative approach: LSS-Q
In the speculative approach, we use data sets collectively denoted as LSS-Q that include

- 2dF-Q (32 bands),

- SDSS-DR2-Q (14 bands), and

- SDSS-LRG-Q from DR5 (20 bands),
with $k_{\max}$ values chosen to conform with the analyses of Refs. [1] and [15]. Here, we use the bias correction formula (4.2) and marginalise over each set of $b$ and $Q$ with flat priors. (Some recent analyses use a Gaussian prior on $Q$ when fitting the 2dF data. We point out that these numbers are in fact derived from the 2dF data itself [26]; we feel it is inconsistent to feed them back into a fit as a prior.) Our motivation for caution in this case owes itself to the fact that the formula (4.2) was originally developed and calibrated for ΛCDM cosmologies; there is a priori no guarantee that it would apply also to non-standard models.
We note that Seljak et al. [9] and Mangano et al. [13] also used the bias correction formula (4.2) on the SDSS-DR2 data. However, they adopted a Gaussian prior on $Q$ that is predetermined from numerical simulations. As we shall see, this choice tends to bias their results towards large values of $N_{\rm eff}$. We believe this is the main origin of the discrepant $N_{\rm eff}$ values reported by different groups.
5 Statistical inference
5.1 Bayesian inference
We use standard Bayesian inference techniques, and explore the model parameter space with Monte Carlo Markov Chains (MCMC) generated using the publicly available CosmoMC package [18, 19].
Given a set of data $D$, a direct probabilistic interpretation of the degree of belief in the parameters $\theta$ of an assumed underlying model is given by the posterior probability distribution
\[
p(\theta|D) = \frac{\mathcal{L}(D|\theta)\, \pi(\theta)}{\int \mathcal{L}(D|\theta)\, \pi(\theta)\, {\rm d}\theta}.
\tag{5.1}
\]
Here, the likelihood function $\mathcal{L}(D|\theta)$ quantifies the agreement of the data with an assumed set of parameter values, while the prior probability $\pi(\theta)$ represents our belief in what the true parameter values should be before any data are taken. This inherent subjectivity of Bayesian inference is a point of much criticism. A pragmatic approach is to employ uniform priors and "let the data decide". However, this approach is not entirely free of subjectivity, particularly when it comes to credible interval construction and marginalisation (section 5.4).
5.2 Point estimates
The posterior probability serves as the starting point for any further inference. A natural point of reference is the posterior mode,
\[
\theta_{\rm mode} = \arg\max_{\theta}\; p(\theta|D),
\tag{5.2}
\]
representing the most probable parameter values given the data and priors. Note that we sometimes refer to the posterior mode as the "best fit", although strictly speaking the term refers to those parameter values $\theta_{\rm bf}$ that maximise the likelihood, and is equivalent to $\theta_{\rm mode}$ only for uniform priors. Another commonly used point estimate is the posterior mean or "expectation value",
\[
\langle \theta \rangle = \int \theta\; p(\theta|D)\; {\rm d}\theta.
\tag{5.3}
\]
For one-dimensional distributions, one may also define the median $\theta_{\rm med}$, with 50% of the posterior's volume lying on either side.
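All three point estimates are easily extracted from MCMC samples. The following sketch uses a synthetic skewed "posterior" chain (illustrative only) and a crude histogram-based mode estimate:

```python
import numpy as np

# A synthetic right-skewed "posterior" chain (illustrative only):
samples = np.random.default_rng(1).lognormal(mean=1.0, sigma=0.5, size=200_000)

post_mean = samples.mean()
post_median = float(np.median(samples))
# Crude 1D mode estimate: centre of the most populated histogram bin.
counts, edges = np.histogram(samples, bins=200)
post_mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])

# For a right-skewed distribution the three estimates separate:
print(post_mode < post_median < post_mean)  # True
```

For a symmetric posterior the three estimates coincide; the separation seen here is exactly the situation in which the choice of point estimate (and of interval construction, section 5.3) starts to matter.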
5.3 Credible intervals
In addition to point estimates one needs credible regions in parameter space that express the degree of uncertainty in the inference. A closed but not necessarily connected hypersurface $C$, called an $x$% credible region, can be constructed such that the enclosed hypervolume $V_C$ contains a fraction $x/100$ of the total volume beneath the posterior,
\[
\int_{V_C} p(\theta|D)\; {\rm d}\theta = \frac{x}{100} \int p(\theta|D)\; {\rm d}\theta.
\tag{5.4}
\]
This definition is not unique. In the 1D case, two popular choices are:

- Central credible interval (CCI): the interval $[\theta_-, \theta_+]$ chosen such that equal fractions of the posterior's volume lie below $\theta_-$ and above $\theta_+$. The CCI is always connected and contains the median $\theta_{\rm med}$.

- Minimum credible interval (MCI): for a unimodal distribution, $\theta_-$ and $\theta_+$ are chosen to minimise the interval length $\theta_+ - \theta_-$. This amounts to placing the interval around the peak of the posterior. In general the posterior may be multimodal, and the MCI is then constructed such that the posterior at any point inside is larger than that at any point outside. The MCI need not be connected, but always includes the mode $\theta_{\rm mode}$.
These constructions coincide only under special circumstances, e.g., if the posterior probability is Gaussian in $\theta$. The top two panels of figure 1 show realistic examples of a CCI and an MCI that are very different.
Which of these constructions should we adopt? Since our goal is to find the most probable set of parameter values, we believe that the MCI is the more appropriate choice because it singles out regions of parameter space with the highest probability densities. In particular, the MCI always includes the "best-fit" parameter (more precisely, the mode). Finally, for multi-dimensional posteriors, only the MCI is uniquely defined.
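Both interval constructions can be computed directly from MCMC samples; the following is a sketch of the two definitions, not the GetDist implementation:

```python
import numpy as np

def central_credible_interval(samples, level=0.95):
    """CCI: equal fractions of the samples lie below and above the interval."""
    tail = 0.5 * (1.0 - level)
    return np.quantile(samples, tail), np.quantile(samples, 1.0 - tail)

def minimum_credible_interval(samples, level=0.95):
    """MCI for a unimodal posterior: the shortest interval containing a
    fraction `level` of the (sorted) samples."""
    s = np.sort(samples)
    n_in = int(np.ceil(level * len(s)))
    # Width of every window of n_in consecutive sorted samples:
    widths = s[n_in - 1:] - s[: len(s) - n_in + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + n_in - 1]

# For a skewed chain the two constructions differ noticeably:
rng = np.random.default_rng(0)
chain = rng.gamma(shape=2.0, scale=1.0, size=100_000)
print(central_credible_interval(chain))
print(minimum_credible_interval(chain))
```

For the skewed gamma chain the MCI hugs the peak near the mode, while the CCI is pushed to the right by the long tail, which is precisely the behaviour discussed above.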
We discuss these matters in such detail because CosmoMC's popular companion package GetDist outputs for 1D intervals a CCI, not an MCI, a property that does not always seem to be recognised. Moreover, under the default settings, GetDist does not output the median $\theta_{\rm med}$, the point estimate naturally associated with the CCI, but rather the expectation value $\langle \theta \rangle$.
5.4 Marginalisation of the posterior
For multi-parameter models typically encountered in cosmology, the information carried by the full multi-dimensional posterior is often not useful in practice and must be "compressed". It is common to map the posterior probability onto a lower-dimensional subspace by the process of marginalisation,
\[
p(\theta_{\rm sub}|D) = \int p(\theta|D)\; {\rm d}\theta_{\rm marg},
\tag{5.5}
\]
where $\theta_{\rm sub}$ represents the parameters spanning the $n$-dimensional subspace and the integral runs over the remaining, marginalised directions $\theta_{\rm marg}$. Point estimates and credible regions for $\theta_{\rm sub}$ may then be constructed from the marginal posterior probability in analogy to section 5.3 above.
Marginalisation favours regions of parameter space that contain a large volume of the probability density in the marginalised directions. This “volume effect” can sometimes lead to counterintuitive results, such as suppression of the probability density for the global best fit parameters if they appear within sharp peaks or ridges that contain little volume. Moreover, the concept of volume itself depends on the choice of parameters. For example, a flat prior on a parameter or one on its logarithm have completely different effects on the volume in that parameter direction. Therefore, other methods of mapping the multidimensional posterior onto a lowerdimensional space can be useful.
5.5 Maximisation of the posterior
A complementary approach to marginalisation is to project the posterior onto the $n$-dimensional subspace by maximising along the remaining directions,
\[
\mathcal{P}(\theta_{\rm sub}) = \max_{\theta_{\rm marg}}\, p(\theta|D).
\tag{5.6}
\]
The resulting $n$-dimensional profile posterior has the advantage of preserving the true peak of the original full-dimensional posterior probability and hence the global best fit $\theta_{\rm bf}$. Figure 1 shows a realistic example of a 1D marginal and a 1D profile posterior in juxtaposition.
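The volume effect described in section 5.4, and the way maximisation evades it, can be demonstrated with a toy two-dimensional posterior containing a sharp, high peak (little volume) and a broad, low bump (large volume); all numbers below are illustrative:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 501)
y = np.linspace(0.0, 10.0, 501)
X, Y = np.meshgrid(x, y, indexing="ij")

# Sharp, high peak at (2, 2) with little volume; broad, low bump at (7, 7):
sharp = 10.0 * np.exp(-0.5 * (((X - 2.0) / 0.05) ** 2 + ((Y - 2.0) / 0.05) ** 2))
broad = 1.0 * np.exp(-0.5 * (((X - 7.0) / 1.5) ** 2 + ((Y - 7.0) / 1.5) ** 2))
post = sharp + broad

marginal = post.sum(axis=1)  # integrate out y: volume-weighted
profile = post.max(axis=1)   # maximise over y: no volume weighting

print(x[np.argmax(marginal)])  # near 7: the broad bump wins after marginalisation
print(x[np.argmax(profile)])   # near 2: the sharp global peak survives profiling
```

The marginal posterior prefers the broad bump even though the global maximum of the full posterior sits inside the sharp peak, which is exactly the suppression of sharp peaks and ridges described above.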
In addition, we introduce an effective chi-square measure for the goodness of fit relative to the global best fit,
\[
\Delta\chi^2_{\rm eff} = -2 \ln \frac{\mathcal{P}(\theta_{\rm sub})}{p(\theta_{\rm bf}|D)}.
\tag{5.7}
\]
For $n = 1$, we define loosely the "1σ" and "2σ" intervals as the 1D regions satisfying $\Delta\chi^2_{\rm eff} \leq 1$ and $\Delta\chi^2_{\rm eff} \leq 4$ respectively. We emphasise that these intervals have no formal probabilistic interpretation. However, they do provide a raw assessment, unplagued by volume effects, of how well a given parameter value agrees with the data relative to the global best fit, and have the virtue of being invariant under reparameterisation of the model. Of course, if the posterior is Gaussian, then the 1σ and 2σ intervals thus derived coincide with the 1D marginal 68% and 95% minimum and central credible regions [42]. Maximisation was used in some recent studies of cosmological inference [7, 8, 10, 11, 12].
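A rough sketch of how a 1D profile posterior and the associated $\Delta\chi^2_{\rm eff}$ of equation (5.7) might be extracted from chain samples (the binning choice is illustrative, and, as noted below, MCMC chains are not optimal for this task):

```python
import numpy as np

def profile_delta_chi2(param, log_post, bins=40):
    """Profile the posterior along one parameter: in each bin of `param`
    keep the *maximum* log-posterior found in the chain (no volume
    weighting), then form Delta chi^2_eff = -2 ln(P/P_max), eq. (5.7)."""
    edges = np.linspace(param.min(), param.max(), bins + 1)
    idx = np.clip(np.digitize(param, edges) - 1, 0, bins - 1)
    profile = np.full(bins, -np.inf)
    np.maximum.at(profile, idx, log_post)  # running per-bin maximum
    delta_chi2 = -2.0 * (profile - log_post.max())
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, delta_chi2

# The "1 sigma" and "2 sigma" regions are then the bins satisfying
# delta_chi2 <= 1 and delta_chi2 <= 4, respectively.
```

Empty bins receive $\Delta\chi^2_{\rm eff} = \infty$ and simply fall outside both regions; the global best fit always sits at $\Delta\chi^2_{\rm eff} = 0$.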
For simplicity our maximisation intervals are extracted from the same MCMC chains used to construct the Bayesian credible intervals. However, we caution that MCMC techniques are strictly speaking not designed for this purpose; there exist sophisticated optimisation methods such as simulated annealing that are much better suited to the task.
The bottom panel of figure 1 shows a realistic example of a one-dimensional interval constructed according to equation (5.7). In a very non-Gaussian situation such as that depicted in the figure, the point estimates and corresponding credible intervals derived by the various methods discussed here can be very different.
6 Constraints in the minimal model
6.1 Numerical results
To study the impact of different statistical methodologies and of different combinations of data sets, we use the minimal model (i.e., vanilla+$N_{\rm eff}$) as a benchmark case. Each entry in table 2 gives a point estimate and the lower and upper ends of the appropriate 68% and 95% credible intervals for $N_{\rm eff}$. The first column indicates the combination of cosmological data sets. To illustrate the strong degeneracy between $N_{\rm eff}$ and the Hubble parameter $h$ in some data sets and its consequences, we have used two different top-hat priors on $h$: the loose prior 1 and the more constraining prior 2 of table 1.
In the columns showing the Bayesian central credible interval, we use the posterior mean $\langle N_{\rm eff} \rangle$ as a point estimate, which is the default output of GetDist. The Bayesian minimum credible interval is derived from the 1D marginal posterior probability distribution for $N_{\rm eff}$, and the corresponding point estimate is the 1D marginal posterior mode. In the case of maximisation, the point estimate is the global best fit. Here, the associated intervals are the effective 1σ and 2σ regions defined by equation (5.7).
6.2 Interpretation of statistics
To compare estimates from different inference schemes, consider first the top half of table 2. The posterior mean and the CCI, i.e., the default output of GetDist, show a preference for large $N_{\rm eff}$ for almost all combinations of probes. The combinations WMAP, WMAP+SDSS-DR2, and WMAP+SNIa, in particular, appear to disfavour the standard value $N_{\rm eff} = 3.046$ at more than 68% (prior 1). However, any evidence for a non-standard $N_{\rm eff}$ disappears as soon as we impose the tighter prior 2 on $h$. This trend stems from the $N_{\rm eff}$–$h$ degeneracy, which leads to a long tail towards high $N_{\rm eff}$ in the 1D marginal posterior (figure 1). The tail in turn pushes the posterior mean and the CCI to larger values. Imposing a tighter prior on $h$ suppresses the tail and reduces this effect.
In contrast, the 1D marginal posterior mode and the global best fit pick out the parameters with the highest probability densities, and turn out to be insensitive to the choice of prior. The tail region still has a strong impact on the upper MCI limits, but the lower limits are relatively unaffected. The constraints from WMAP in table 2 provide an excellent illustration of this point.
The 1σ and 2σ intervals from maximisation depend even less on the prior, since this construction makes no reference to the volume of the posterior and is therefore insensitive to tail regions once the 1D profile posterior drops below $e^{-2}$ of its peak value (i.e., $\Delta\chi^2_{\rm eff} > 4$). As argued earlier (section 5.3), in Bayesian inference only the MCI provides a meaningful answer to the question of which values of $N_{\rm eff}$ are most probable given the data. Our explicit examples show that inference based on the CCI, the default output of GetDist, can lead to incorrect conclusions.
6.3 Scale-dependent bias
Turning to the issue of bias in the galaxy power spectrum, we see in table 2 that the two different measures introduced in section 4 to bypass or account for the scale dependence, namely, using only linear data or adopting the bias correction formula (4.2), generally produce consistent results. The agreement between WMAP+SDSS-DR2-lin and WMAP+SDSS-DR2-Q, and between WMAP+SDSS-LRG-lin and WMAP+SDSS-LRG-Q, is excellent, suggesting that the effects of scale-dependent biasing have been successfully ameliorated. The WMAP+2dF-lin and WMAP+2dF-Q results do show a slight discrepancy at roughly the 68% level. This can most likely be put down to statistical fluctuations, but recall that the bias correction formula (4.2) has not been tested for non-standard cosmologies, so that its application here is, strictly speaking, experimental.
The analyses of Seljak et al. [9] and Mangano et al. [13] found a very high $N_{\rm eff}$ for WMAP+SDSS-DR2+SNIa, a value that can only be accommodated within our corresponding MCI estimates, for WMAP+SDSS-DR2-Q and for WMAP+SDSS-DR2-lin, at more than the 68% level. Both groups used the bias correction formula (4.2), but adopted a Gaussian prior on $Q$, a range supposedly determined from numerical simulations, although no source is cited. As a test, we have performed a fit of WMAP+SDSS-DR2-Q+SNIa using the same Gaussian prior on $Q$. We find MCI and CCI estimates that include the high $N_{\rm eff}$ values of Refs. [9, 13] in the 68% region. Excluding SNIa from the fit yields essentially the same constraints.
These test results clearly indicate that the choice of prior on $Q$ plays an important role in the inference of $N_{\rm eff}$: a restrictive prior on $Q$ tends to push the preferred $N_{\rm eff}$ to higher values. We are not able to reproduce the very tight error bars for $N_{\rm eff}$ reported in Refs. [9, 13], which may be due to different priors assumed for the marginalised parameters, or because of a slightly larger $k_{\max}$ adopted in these analyses. However, we also observe a peculiar feature in their credible intervals: the 68% interval is some three times smaller than the 95% interval. This suggests some highly non-Gaussian behaviour in their marginal posterior for $N_{\rm eff}$, because for a Gaussian distribution the ratio of the 95% to the 68% interval is only about two.
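The Gaussian expectation for this ratio is easily checked; the two-sided 68% and 95% intervals correspond to roughly ±1.0σ and ±1.96σ:

```python
from statistics import NormalDist

# Ratio of the two-sided 95% interval to the two-sided 68% interval
# of a Gaussian distribution:
ratio = NormalDist().inv_cdf(0.975) / NormalDist().inv_cdf(0.84)
print(round(ratio, 2))  # 1.97
```

A 95% interval three times wider than the 68% one therefore cannot arise from a Gaussian marginal posterior.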
The dependence on the prior traces its origin to a degeneracy between $N_{\rm eff}$ and $Q$. Figure 2 shows the 2D marginal 68% and 95% allowed regions in $N_{\rm eff}$–$Q$ space for the data set WMAP+SDSS-DR2-Q+SNIa. Evidently, imposing the restrictive prior on $Q$ cuts off much of the parameter space that favours low values of $N_{\rm eff}$. To our knowledge no simulation of mock galaxy catalogues involving a non-standard $N_{\rm eff}$ value has ever been reported in the literature. Without the backing of simulations (or other independent input) there is no justification for imposing a restrictive prior on $Q$ when performing a fit with $N_{\rm eff}$ as a free parameter. The best strategy in such circumstances is to use a broad and uniform prior on $Q$, as adopted in our analysis and also advocated in Ref. [1].
To summarise, we find that imposing a restrictive Gaussian prior on $Q$ in the WMAP+SDSS-DR2-Q+SNIa fit biases the preferred $N_{\rm eff}$ to higher values. This may account for the difference between our result and those reported in Refs. [9, 13]. (For completeness, we note that the constraint on $Q$ derived from WMAP+SDSS-DR2 using 19 data bands in the vanilla model is considerably tighter: five additional data points at large $k$ allow one to place much tighter constraints on $Q$ than is possible with the 14 data bands used in, e.g., figure 2. This result should be compared with the corresponding constraints for WMAP+SDSS-LRG (20 bands) [1] and for WMAP+2dF (36 bands) [26] in the same model.)
6.4 Combining all data sets
Having identified and corrected these problematic issues, we now turn to our own estimates. An inspection of table 2 reveals that, except for the sets including Ly$\alpha$, none of the combinations of probes shows any significant evidence for a deviation of $N_{\rm eff}$ from its standard value, which always sits comfortably within the 68% MCI. The combination of all linear data together with HST (All-lin+HST) gives . Discarding HST leaves the best fit unchanged but slightly loosens the credible intervals.
Including nonlinear data in the galaxy power spectrum tends to reduce the numbers a little to (All-Q), essentially because 2dF-Q prefers a low $N_{\rm eff}$. Adding HST shifts it up again to . We stress again that the bias correction formula (4.2) may not be applicable in nonstandard cosmologies, so the numbers from the Q sets must be interpreted with caution.
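Assuming the correction in question is the standard Cole et al. (2005) fitting form $P_{\rm gal}(k) = b^2\,(1+Qk^2)/(1+Ak)\,P_{\rm lin}(k)$, the caveat can be made concrete in a short sketch (the default $Q$ and $A$ values are the 2dF calibration, used here purely for illustration):

```python
import numpy as np

def galaxy_power(p_lin, k, b=1.0, Q=4.6, A=1.4):
    """Scale-dependent galaxy bias correction of the Cole et al. (2005) form:
    P_gal(k) = b^2 (1 + Q k^2) / (1 + A k) P_lin(k).
    Q and A were calibrated on mock catalogues that assumed a standard
    radiation density, hence the caution when N_eff is a free parameter."""
    k = np.asarray(k, dtype=float)
    return b**2 * (1.0 + Q * k**2) / (1.0 + A * k) * p_lin

# In the linear regime (k -> 0) the correction reduces to a constant bias b^2.
print(float(galaxy_power(1.0, 0.0)))  # -> 1.0
print(float(galaxy_power(1.0, 0.2)))
```

Because the $k$-dependent part of the correction was tuned to simulations with the standard $N_{\rm eff}$, applying it unchanged in a fit with free $N_{\rm eff}$ implicitly extrapolates the calibration.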
Another interesting feature is that, with the exception of WMAP+2dF-Q, all combinations of data sets prefer a nonzero $N_{\rm eff}$ at the 95% level or better. This is in contrast to the results of Ref. [12], which finds no lower 95% limit from the WMAP data alone. We have not investigated the origin of this difference. As mentioned before, the WMAP+2dF-Q data set tends to prefer lower values of $N_{\rm eff}$ and as such produces no lower 95% limit on it.
The Ly$\alpha$ data appear to be the only data set that prefers a much larger value of $N_{\rm eff}$, with WMAP+Ly$\alpha$ disfavouring the standard value at 95%. When combined with other data sets, however, the evidence against the standard value is weakened to the 68% level for All-Q+Ly$\alpha$+HST, because 2dF-Q's preference for small values of $N_{\rm eff}$ pulls in the opposite direction.
The origin of the Ly$\alpha$ preference for large values of $N_{\rm eff}$ can be gleaned from figure 3. The Ly$\alpha$ data prefer a much higher amplitude of density fluctuations at small scales, quantified by $\sigma_8$, than the other data sets. This is particularly evident in the bottom panels of figure 3. The higher $\sigma_8$ required by Ly$\alpha$ forces $N_{\rm eff}$ upwards and cuts away the allowed region at low values. As can be seen in the same figure, with the inclusion of Ly$\alpha$, the upper bound on $N_{\rm eff}$ comes mainly from the HST prior on $h$. Since $N_{\rm eff}$ and $h$ both control the epoch of matter–radiation equality and are thus strongly degenerate, a large $N_{\rm eff}$ can only be accommodated by a high value of $h$. However, such high values are strongly disfavoured by the HST data.
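The equality degeneracy follows directly from $1 + z_{\rm eq} = \omega_{\rm m}/\omega_{\rm r}$ with $\omega_{\rm r} = \omega_\gamma\,(1 + 0.2271\,N_{\rm eff})$; a short sketch with standard textbook numbers (illustrative only, not the paper's fit values) shows how keeping $z_{\rm eq}$ fixed while raising $N_{\rm eff}$ drives up $\omega_{\rm m} = \Omega_{\rm m} h^2$:

```python
# Matter-radiation equality: 1 + z_eq = omega_m / omega_r, where
# omega_r = omega_gamma * (1 + 0.2271 * N_eff).  omega_gamma is the photon
# density for T_cmb = 2.725 K; 0.2271 converts one relativistic neutrino
# species to photon units.  Illustrative numbers only.
OMEGA_GAMMA = 2.47e-5  # Omega_gamma h^2

def z_eq(omega_m, n_eff):
    return omega_m / (OMEGA_GAMMA * (1.0 + 0.2271 * n_eff)) - 1.0

def omega_m_for_fixed_zeq(zeq, n_eff):
    """omega_m needed to hold z_eq fixed for a given N_eff."""
    return (1.0 + zeq) * OMEGA_GAMMA * (1.0 + 0.2271 * n_eff)

zeq_ref = z_eq(0.13, 3.0)  # hypothetical reference point
for n in (3.0, 5.0, 7.0):
    print(n, round(omega_m_for_fixed_zeq(zeq_ref, n), 3))
```

With $\Omega_{\rm m}$ pinned down by other data, the growth of $\omega_{\rm m}$ with $N_{\rm eff}$ translates into a larger $h$, which is what brings the HST prior into play.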
6.5 Towards Gaussianity
A striking feature in table 2 is that when all data sets are combined, the three different statistical methods give almost identical results. The reason is that the combination of CMB, LSS, and SNIa data effectively breaks all parameter degeneracies and yields a posterior distribution that is very close to Gaussian, a limit in which all three methods must give the same result. The lower half of table 2 nicely confirms this expectation.
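The convergence of interval constructions in the Gaussian limit is easy to demonstrate on toy samples: for a Gaussian, the central interval and the minimal (shortest) credible interval coincide, while for a skewed distribution they visibly differ. A sketch, with the minimal interval implemented as the shortest sample window (helper names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def central(samples, p=0.68):
    """Central credible interval: equal probability mass in each tail."""
    lo, hi = np.quantile(samples, [(1.0 - p) / 2.0, (1.0 + p) / 2.0])
    return lo, hi

def minimal(samples, p=0.68):
    """Shortest interval containing a fraction p of the samples."""
    s = np.sort(samples)
    k = int(np.ceil(p * len(s)))
    widths = s[k - 1:] - s[:len(s) - k + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

gauss = rng.normal(3.0, 1.0, 200_000)   # symmetric: intervals agree
skewed = rng.gamma(2.0, 1.0, 200_000)   # skewed: minimal shifts to the peak

print(central(gauss), minimal(gauss))
print(central(skewed), minimal(skewed))
```

Once the combined posterior is close to Gaussian in every parameter, all such constructions, and likewise mean, median and mode as point estimates, necessarily return the same answer.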
7 Extended models
We now consider constraints on $N_{\rm eff}$ in the context of extended models that also allow for nonvanishing neutrino masses. As in the case of the minimal model, we calculate the bounds both within a conservative approach using only linear data (All-lin), and within a more speculative one that utilises the stronger but more model-dependent All-Q data set. Since, as we saw in section 6, $N_{\rm eff}$ exhibits a strong degeneracy with the Hubble parameter in some data sets, we consider both including and excluding the HST data in our analysis. We do not use the Ly$\alpha$ data for the extended models. Table 3 shows our constraints on $N_{\rm eff}$ for four choices of extended models, obtained by adding to the vanilla+$N_{\rm eff}$ model a neutrino mass $m_\nu$, a dark energy equation of state parameter $w$, and a running scalar spectral index in various combinations.