Accurate weak lensing of standard candles. II. Measuring \sigma_{8} with Supernovae

Miguel Quartin (Instituto de Física, Universidade Federal do Rio de Janeiro, CEP 21941-972, Rio de Janeiro, RJ, Brazil), Valerio Marra (Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany), and Luca Amendola (Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany)
Abstract

Soon the number of type Ia supernova (SN) measurements should exceed 100,000. Understanding the effect of weak lensing by matter structures on the supernova brightness will then be more important than ever. Although SN lensing is usually seen as a source of systematic noise, we will show that it can in fact be turned into signal. More precisely, the non-Gaussianity introduced by lensing in the SN Hubble diagram dispersion depends rather sensitively on the amplitude $\sigma_8$ of the matter power spectrum. By exploiting this relation, we are able to predict constraints on $\sigma_8$ of 7% (3%) for a catalog of 100,000 (500,000) SNe of average magnitude error 0.12, without having to assume that such intrinsic dispersion and its redshift evolution are known a priori. The intrinsic dispersion has been assumed to be Gaussian; possible intrinsic non-Gaussianities in the dataset (due to the SNe themselves and/or to other transients) could potentially be dealt with by means of additional nuisance parameters describing higher moments of the intrinsic dispersion distribution function. This method is independent of and complementary to the standard methods based on CMB, cosmic shear or cluster abundance observables.

Gravitational lenses, Observational cosmology, Supernovae, Large Scale Structure of the Universe
pacs:
98.62.Sb, 98.80.Es, 97.60.Bw, 98.65.-r

I Introduction

Standard candles, in particular supernovae Ia (SNe), are one of the most important and reliable estimators of distance in cosmology Riess et al. (1998); Perlmutter et al. (1999). As is well known, the evidence for cosmological acceleration rests principally on their properties and on their calibration. Since the discovery of acceleration, a large effort has been devoted to testing and improving the calibration of the SNe and to correcting their light curves in order to achieve data samples as free of systematics as possible March et al. (2011a); Amendola et al. (2013).

Since their light comes from relatively high redshifts, SNe are expected to be lensed to some extent by intervening matter along the line of sight. The correction induced by this effect is normally subdominant, but it will become one of the major sources of uncertainty as richer and deeper SN catalogs are collected in the coming years. The Large Synoptic Survey Telescope (LSST) project, for instance, plans to collect up to half a million SNe in ten years Abell et al. (2009), a huge increase from the roughly 1000 SNe known so far.

The effect of gravitational lensing will in general change the intrinsic distribution function of the SN magnitudes, increasing the scatter and introducing some non-Gaussianity, if originally absent. In part I of our present investigation Marra et al. (2013), we obtained the lensing variance, skewness and kurtosis of the SN distribution via sGL, a fast simulation method developed in Kainulainen and Marra (2009, 2011a, 2011b). The results were directly confronted with $N$-body simulations and shown to fit them very well up to a redshift of order 1.5, with the advantage of being given as a function of the relevant cosmological parameters. These fits can be employed to take into account the lensing extra scatter for any value of the cosmological parameters and also to model the lensing non-Gaussianity.

In this paper we propose instead to use the accurate determination of the lensing moments of Ref. Marra et al. (2013) to measure the cosmological parameters. As is often the case in cosmology, what was once a noise to be eliminated can become a signal when either data or modeling improve. Such an idea was first discussed in the present context in Bernardeau et al. (1997); Hamana and Futamase (2000); Valageas (2000) and later further developed in Dodelson and Vallinotto (2006). We improve upon Dodelson and Vallinotto (2006) in two ways. First, we use not just the variance of the lensing signal but the 3rd and 4th order moments as well. Second, we do not assume that the intrinsic SN variance is fixed, but we marginalize over it in every redshift bin independently. The first step boosts the sensitivity of the method (this was first proposed in Bernardeau et al. (1997)), while the second boosts its robustness, and allows us to show that a fundamental cosmological parameter, $\sigma_8$, can indeed be measured by the LSST survey using SN lensing alone to within 3%-7%, a value that is competitive with usual methods based on cosmic shear, cosmic microwave background (CMB) or cluster abundance, and completely independent of these. In particular, it does not rely on measuring galaxy shapes (as cosmic shear does) and is therefore immune to the systematics associated with the cross-correlation of intrinsic galaxy ellipticities. Also, it does not require extrapolating the amplitude of the power spectrum from the recombination epoch to today, as with the CMB technique, nor making assumptions on the threshold of formation of structures, as needed when employing galaxy clusters. It is therefore a relatively direct measurement of $\sigma_8$ that can cross-check the results obtained via these other methods.

It is interesting to note that our proposal is essentially to carry out one-point statistics on the supernova distribution on the Hubble diagram. This contrasts with other proposed methods which rely on two- or higher-point statistics, such as that of Cooray et al. (2006), where SN lensing and the inferred magnification were used as a tracer of dark matter clustering. In fact, by not relying on two-point statistics we avoid issues related to spatial correlations such as those arising from finite survey areas.

The main assumption that is needed for the SN lensing method is that the supernovae have an intrinsic magnitude distribution that is Gaussian, so that the entire non-Gaussianity can be attributed to lensing. In the future, this assumption can be directly tested by building a large calibrated sample of local supernovae. In principle, however, one could also include in the analysis an intrinsic non-Gaussianity and marginalize over it.

In order to make our method more directly applicable to future datasets, we will base our estimation of $\sigma_8$ on the moments of the lensing distribution. One could, however, also use the full likelihood, again obtained via the sGL simulations. We will show that a simplified likelihood based only on the first few moments is a very good approximation to the full likelihood.

The paper is organized as follows. In Section II we will describe the universe and lensing model adopted, and discuss the statistical properties of the lensing PDF as far as the central moments are concerned. In Section III we will examine the impact of lensing on the SN analysis, and in Section IV we will quantify how tightly SNe can constrain $\sigma_8$. Finally, we will conclude in Section V. We will discuss some more technical details in Appendices A and B, and provide in Appendix C redshift-dependent fits for the second-to-fourth central moments of the lensing PDF which are simplified versions of the original fits in Marra et al. (2013).

II Lensing moments

In this Section we will first describe the model we will use to compute the moments of the lensing PDF. Then we will discuss the properties of the cumulants as far as the convolution of the lensing and supernova distributions is concerned.

II.1 Universe and lensing model

We will calculate the second-to-fourth central moments of the lensing PDF using the results of Ref. Marra et al. (2013). There, accurate analytical fits as a function of $\sigma_8$ and $\Omega_{m0}$ were given for broad ranges of these parameters. The dependence of the lensing moments on the other cosmological parameters was shown to be almost negligible. The results of Ref. Marra et al. (2013) were obtained using the stochastic gravitational lensing (sGL) method introduced in Refs. Kainulainen and Marra (2009, 2011a, 2011b). The sGL method is based on (i) the weak lensing approximation and (ii) generating stochastic configurations of inhomogeneities along the line of sight.

Regarding (ii), the matter contrast is modeled according to the so-called "halo model" (see, for example, Neyman and Scott (1952); Peebles (1974); Scherrer and Bertschinger (1991); Seljak (2000); Ma and Fry (2000); Peacock and Smith (2000); Scoccimarro et al. (2001); Cooray and Sheth (2002)), where the inhomogeneous universe is approximated as a collection of different types of halos whose positions obey the linear power spectrum. The halo model assumes that on small scales the statistics of matter correlations is dominated by the internal halo density profiles, while on large scales the halos are assumed to cluster according to linear theory. (In Marra et al. (2013) correlations in the halo positions were, however, neglected; as shown in Kainulainen and Marra (2011a, b), this should indeed be a good approximation for the redshift range in which we are mainly interested in this paper.) The halos were modeled using the halo mass function given in Ref. Jenkins et al. (2001), which has a good degree of universality White (2002). The use of other mass functions, such as the one given in Ref. Courtin et al. (2011), does not substantially change the results of our analysis. The halo profiles are modeled according to the Navarro-Frenk-White (NFW) profile Navarro et al. (1996), which is able to model both galaxy-sized halos and superclusters with an appropriately chosen concentration parameter. The concentration parameter depends on the cosmology, and we use the universal and accurate model proposed in Ref. Zhao et al. (2009).

Regarding (i), the lens convergence in the weak-lensing approximation is given by the following integral evaluated along the unperturbed light path Bartelmann and Schneider (2001):

$\kappa(z_s) = \int_0^{r_s} {\rm d}r \, \rho_{MC} \, G(r, r_s) \, \delta_M(r, t(r)), \qquad G(r, r_s) = \frac{4\pi G_N}{c^2} \, \frac{f_k(r) \, f_k(r_s - r)}{f_k(r_s) \, a(t(r))}$   (1)

where the quantity $\delta_M(r, t)$ is the local matter density contrast (which is modeled as described above), the density $\rho_{MC}$ is the constant matter density in a comoving volume, and the function $G(r, r_s)$ gives the optical weight of a matter structure at the comoving radius $r$. The functions $a(t)$ and $t(r)$ are the scale factor and geodesic time for the background FLRW model, and $r_s = r(z_s)$ is the comoving position of the source at redshift $z_s$. Also, $f_k(r) = \sin(\sqrt{k}\,r)/\sqrt{k},\ r,\ \sinh(\sqrt{-k}\,r)/\sqrt{-k}$ depending on the curvature $k >, =, < 0$, respectively. At the linear level, the shift in the distance modulus caused by lensing is expressed in terms of the convergence only:

$\Delta m(z_s) \simeq 5 \log_{10}\!\left[1 - \kappa(z_s)\right] \approx -\frac{5}{\ln 10}\, \kappa(z_s)$   (2)

Eq. (1) connects the statistical distribution of matter to the statistical distribution of convergences. The sGL method for computing the lens convergence is based on generating random configurations of halos along the line of sight and computing the associated integral in Eq. (1) by binning into a number of independent lens planes. A detailed explanation of the sGL method can be found in Kainulainen and Marra (2011b, a, 2009) and a publicly available numerical implementation, the turboGL package, at turbogl.org.
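As an illustration of the lens-plane discretization just described, the following minimal Python sketch draws toy line-of-sight configurations and sums Eq. (1) over independent planes. It is not the sGL halo model: the lognormal density contrast, the fiducial values $H_0 = 70$, $\Omega_{m0} = 0.279$ and all function names are illustrative assumptions.

    import numpy as np

    # Toy sketch of Eq. (1): the convergence as a weighted sum of density
    # contrasts over independent lens planes, for a flat universe. The
    # lognormal density contrast is an illustrative stand-in for the
    # halo-model configurations that sGL actually generates.
    c, GN = 299792.458, 4.302e-9       # km/s; Newton's constant in Mpc (km/s)^2 / Msun
    H0, Om = 70.0, 0.279               # assumed fiducial values

    def H(z):
        return H0 * np.sqrt(Om * (1 + z)**3 + 1 - Om)

    def com_dist(z, n=512):            # comoving distance in Mpc (trapezoid rule)
        zz = np.linspace(0.0, z, n)
        f = c / H(zz)
        return (0.5 * (f[:-1] + f[1:]) * np.diff(zz)).sum()

    def kappa_samples(zs=1.0, n_planes=50, n_los=100_000, seed=1):
        rs = com_dist(zs)
        r = np.linspace(0.0, rs, n_planes + 2)[1:-1]               # lens-plane radii
        dr = rs / (n_planes + 1)
        zgrid = np.linspace(0.0, zs, 256)
        rgrid = np.array([com_dist(z) for z in zgrid])
        z_r = np.interp(r, rgrid, zgrid)                           # redshift of each plane
        rho_MC = 2.775e11 * Om * (H0 / 100)**2                     # comoving matter density, Msun/Mpc^3
        G = 4 * np.pi * GN / c**2 * r * (rs - r) / rs * (1 + z_r)  # optical weight over a
        rng = np.random.default_rng(seed)
        delta = rng.lognormal(-0.5, 1.0, (n_los, n_planes)) - 1.0  # skewed, <delta> = 0
        return (rho_MC * G * dr * delta).sum(axis=1)               # one kappa per line of sight

    kappa = kappa_samples()
    print(kappa.mean(), kappa.std())   # mean ~ 0; the scatter feeds Eq. (2)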

Because of the theoretical approximations (weak lensing and halo model approximation) and modeling uncertainties (halo mass function and concentration parameter model) intrinsic in the sGL modeling, the results of Ref. Marra et al. (2013) can be relied upon at the level of 10%.

II.2 Cumulants cumulate

Observationally, the lensing PDF is convolved with the intrinsic standard-candle distribution. Now, there are fundamental statistical quantities which are additive over convolutions: the cumulants. In other words, if $X_1$ and $X_2$ are two independent random variables, then the cumulants of the convolution are just given by $\kappa_n = \kappa_n^{X_1} + \kappa_n^{X_2}$.
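This additivity is easy to verify numerically; the short Python check below uses two illustrative distributions (a Gaussian and a mean-subtracted lognormal) and compares the cumulants of the sum with the sum of the cumulants.

    import numpy as np
    from scipy.stats import moment

    # Numerical check of cumulant additivity: the cumulants of the sum of
    # two independent variables equal the sums of the individual cumulants.
    # The two distributions below are purely illustrative.
    rng = np.random.default_rng(0)
    x1 = rng.normal(0.0, 0.12, 1_000_000)      # Gaussian "intrinsic" scatter
    x2 = rng.lognormal(0.0, 0.4, 1_000_000)    # skewed "lensing-like" term
    x2 -= x2.mean()                            # lensing PDF has ~zero mean

    def k234(x):                               # kappa_2,3,4 from central moments
        m2, m3, m4 = (moment(x, n) for n in (2, 3, 4))
        return np.array([m2, m3, m4 - 3 * m2**2])

    print(k234(x1) + k234(x2))                 # sum of cumulants ...
    print(k234(x1 + x2))                       # ... equals cumulants of the sum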

Here and in the following, we will use $\mu'_n$, $\mu_n$ and $\kappa_n$ to denote respectively the $n$-th raw moment, central moment and cumulant. We will abuse this notation and often refer to the first raw moment (the mean) using the notation $\mu_1$ for the first central moment, which is identically zero. If $f(x)$ is a PDF, then by defining the generating function

$g(t) \equiv \int {\rm d}x \, e^{t x} f(x)$   (3)

one has that the cumulants and moments are defined as

$\kappa_n = \frac{\partial^n}{\partial t^n} \ln g(t) \Big|_{t=0}$   (4)
$\mu'_n = \frac{\partial^n}{\partial t^n} g(t) \Big|_{t=0}$   (5)

We will initially assume that all standard candles have a distribution which is intrinsically Gaussian. By intrinsically we mean neglecting any systematic effect (such as lensing itself) that might distort this distribution. A Gaussian has only two non-zero cumulants, namely $\kappa_1$ and $\kappa_2$. The weak lensing PDF, in turn, has by definition an (almost) negligible mean, $\mu'_{1,\rm lens} \simeq 0$, so that for the lensing PDF the raw moments are all central moments.

Using the relation between cumulants and central moments, to wit

$\kappa_2 = \mu_2$   (6)
$\kappa_3 = \mu_3$   (7)
$\kappa_4 = \mu_4 - 3 \mu_2^2$   (8)

(where we used the fact that $\mu_1 = 0$) and the additivity of the cumulants discussed above, we can write, after straightforward manipulation, the second-to-fourth central moments of the convolved standard-candle PDF as

$\mu_2 = \sigma^2 + \mu_{2,\rm lens}$   (9)
$\mu_3 = \mu_{3,\rm lens}$   (10)
$\mu_4 = \mu_{4,\rm lens} + 3 \sigma^4 + 6 \sigma^2 \mu_{2,\rm lens}$   (11)

Here $\sigma^2$ is the variance of the unlensed standard candles, which is sourced by the intrinsic dispersion $\sigma_{\rm int}$ and by the observational error $\sigma_{\rm obs}$:

$\sigma^2 = \sigma_{\rm int}^2 + \sigma_{\rm obs}^2$   (12)

We will assume later on that $\sigma_{\rm obs}$ is negligible as compared to $\sigma_{\rm int}$, such that $\sigma \simeq \sigma_{\rm int}$. This assumption has no influence on our method and results.
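Eqs. (9)-(12) translate directly into code. The sketch below is a plain transcription, with placeholder input values; in practice the lensing moments would come from the fits of Marra et al. (2013).

    # Central moments of the lensed standard-candle distribution via
    # Eqs. (9)-(12). The numbers in the example call are placeholders.
    def convolved_moments(sigma_int, sigma_obs, mu2_lens, mu3_lens, mu4_lens):
        sigma2 = sigma_int**2 + sigma_obs**2                      # Eq. (12)
        mu2 = sigma2 + mu2_lens                                   # Eq. (9)
        mu3 = mu3_lens                                            # Eq. (10)
        mu4 = mu4_lens + 3 * sigma2**2 + 6 * sigma2 * mu2_lens    # Eq. (11)
        return mu2, mu3, mu4

    print(convolved_moments(0.12, 0.0, 1.6e-3, 1.0e-4, 5.0e-5))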

There are some immediate conclusions from the above relations. First, one can directly compare a measure of $\mu_3$, which is the unnormalized skewness (not to be confused with the normalized skewness, defined by $\mu_3/\mu_2^{3/2}$), with a theoretical prediction such as the fitting function for $\mu_{3,\rm lens}$ provided in Ref. Marra et al. (2013). Second, using other moments besides just $\mu_2$ one can break the degeneracy between the cosmological parameters (basically $\Omega_{m0}$ and $\sigma_8$) and the nuisance parameter $\sigma_{\rm int}$. Such a degeneracy was arguably the most important limitation of a previous work in the literature Dodelson and Vallinotto (2006). We will come back to this possibility in Section IV.

Incidentally, since (for many SNe) the mean gives a very precise measurement of the background cosmology (roughly independent of lensing), if one has precise independent measurements of $\sigma_8$ one can use (9) to estimate the intrinsic dispersion by subtracting from the observed variance the contributions due to lensing and the experimental error. This can be an interesting result on its own, as it can help understand the physics of SNe. We nevertheless do not explore this possibility further in this work.
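A one-line worked example of this subtraction, with purely illustrative numbers:

    import numpy as np

    # Worked example of the subtraction above, inverting Eq. (9); all
    # numbers are illustrative placeholders.
    mu2_obs   = 0.0165    # measured Hubble-diagram variance in a bin [mag^2]
    mu2_lens  = 0.0016    # lensing variance predicted at that redshift [mag^2]
    sigma_obs = 0.02      # observational error [mag]
    sigma_int = np.sqrt(mu2_obs - mu2_lens - sigma_obs**2)
    print(f"sigma_int = {sigma_int:.3f} mag")   # about 0.120 mag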

III Impact of lensing on SN analysis

The standard use of standard candles such as supernovae is to map the luminosity distance-redshift relation of the background FLRW model so as to constrain the content of the universe. From this point of view any fluctuation of the SN distance modulus, such as instrumental error and lensing, is seen as noise (the opposite approach will be followed in Section IV). Therefore, as far as the standard SN analysis is concerned, lensing has to be dealt with appropriately so as not to bias the extraction of the background parameters. It is important to account for both the skewness and the cosmology dependence of the lensing PDF Amendola et al. (2010). In particular, the redshift dependence of lensing can distort and rotate the confidence-level contours.

The inclusion of lensing complicates the usual analysis, with the effect that a standard analysis is no longer possible or consistent. (Even without lensing the usual supernova analysis, which is iterative and fixes the intrinsic dispersion by imposing a unit reduced chi-squared, is not ideal, as it can lead to biases and does not allow goodness-of-fit determination or model-selection analysis; see Kim (2011); March et al. (2011b); Lago et al. (2011) for more details.) The skewness of the lensing PDF indeed violates the Gaussian assumption, while the cosmology dependence of the normalization of the likelihood imposes that the latter is kept, contrary to what is done in the usual analysis. For example, if one is using a Gaussian likelihood with error depending on the theoretical parameters, then one cannot at the same time keep the parameter-dependent normalization factor and use a mock catalog with data points placed exactly on the fiducial model. Indeed, such a catalog is supposed to have the likelihood peaked at the chosen fiducial model, but this will not be the case, as the parameter-dependent normalization factor will tilt the likelihood surface in a way which depends on the error.

Figure 1: Comparison of the original SN Gaussian PDF (dashed brown) with the lensed supernova PDF of Eq. (13) (green), obtained by the convolution of the lensing PDF (dot-dashed blue) with the Gaussian. We assume $\sigma = 0.12$ mag, a representative source redshift and the WMAP9-only best-fit values as fiducial model. As can be seen, the distortion from Gaussianity is small around the peak, but it gets large in the high-magnification tail. For this plot we employ a cut in the high-magnification tail of the lensing PDF, as discussed in Ref. Marra et al. (2013). See Section III for more details.

A full likelihood analysis uses the lensing PDF, in our case either directly obtained from turboGL or reconstructed using the log-normal approximation proposed in Ref. Marra et al. (2013). (The lensing PDF may also be obtained using the "universal" lensing PDF of Ref. Wang et al. (2002).) We denote the lensing PDF as $P_{\rm lens}(\Delta m)$; within our approximations it has negligible mean. The SN likelihood is then obtained by convolving the lensing PDF with the SN uncertainty distribution, which we assume to be Gaussian in the distance moduli and denote with $G_\sigma$. The latter uncertainty is the sum of the intrinsic source brightness scatter and of the observational errors, as discussed in Eq. (12). The likelihood function for a single SN observation is then

$\mathcal{L}_i = \int {\rm d}(\Delta m) \, P_{\rm lens}(\Delta m; z_i) \, G_{\sigma_i}\big(m_i - m_{\rm th}(z_i) - \Delta m\big)$   (13)

where $z_i$, $m_i$ and $\sigma_i$ are the $i$-th SN redshift, distance modulus and uncertainty, respectively. The quantity $m_{\rm th}(z)$ is the predicted distance modulus for a source at redshift $z$ in a flat $\Lambda$CDM model of matter density parameter $\Omega_{m0}$:

$m_{\rm th}(z) = 5 \log_{10} \frac{d_L(z)}{10\,{\rm pc}} + M$   (14)

where the luminosity distance and the Hubble rate are

$d_L(z) = (1 + z) \int_0^z \frac{c \, {\rm d}\bar{z}}{H(\bar{z})}$   (15)
$H(z) = H_0 \sqrt{\Omega_{m0} (1 + z)^3 + \Omega_\Lambda}$   (16)

where $\Omega_\Lambda = 1 - \Omega_{m0}$.
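For concreteness, a minimal sketch of Eqs. (14)-(16) in Python; the fiducial values are assumptions for illustration, and $M$ is set to zero here (it is marginalized over in the actual analysis):

    import numpy as np
    from scipy.integrate import quad

    # Minimal implementation of Eqs. (14)-(16) for a flat LCDM model.
    c = 299792.458   # km/s

    def m_th(z, Om=0.279, H0=70.0, M=0.0):
        E = lambda zz: np.sqrt(Om * (1 + zz)**3 + 1 - Om)     # H(z)/H0, Eq. (16)
        d_C = quad(lambda zz: c / (H0 * E(zz)), 0.0, z)[0]    # comoving distance, Mpc
        d_L = (1 + z) * d_C                                   # Eq. (15), flat case
        return 5 * np.log10(d_L) + 25 + M                     # Eq. (14), d_L in Mpc

    print(m_th(1.0))   # roughly 44.1 for these assumed values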

The quantity $M$ is an unknown offset, sum of the supernova absolute magnitudes, of $K$-corrections and of other possible systematics. Figure 1 depicts the three distributions $G_\sigma$, $P_{\rm lens}$ and their convolution, where the lensing PDF was modeled using the log-normal template proposed in Ref. Marra et al. (2013). As can be seen, the distortion from Gaussianity is small near the peak, but it gets large in the high-magnification tail. Finally, we define the total likelihood function as the product of all independent likelihood functions in the data sample, further marginalized over the unknown $M$:

$\mathcal{L}_{\rm tot}\big(\Omega_{m0}, \sigma_8, \{\sigma_{{\rm int},j}\}\big) = \int {\rm d}M \prod_i \mathcal{L}_i$   (17)

where we have explicitly stressed the dependence of $\mathcal{L}_{\rm tot}$ on the intrinsic dispersion, which we will leave as a set of free parameters $\{\sigma_{{\rm int},j}\}$, to be marginalized over in each redshift bin $j$. Since $M$ is degenerate with $H_0$, we are effectively marginalizing also over the expansion rate of the universe.
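Numerically, the convolution of Eq. (13) can be carried out on a grid; the sketch below assumes a tabulated lensing PDF (for instance from turboGL) and is only meant to show the structure of Eqs. (13) and (17).

    import numpy as np

    # Structure of Eq. (13): the single-SN likelihood is the convolution of
    # a tabulated lensing PDF P_lens (given on a uniform grid dm of
    # distance-modulus shifts) with the Gaussian scatter. Names and inputs
    # here are illustrative.
    def sn_likelihood(m_i, sigma_i, z_i, m_th, dm, P_lens):
        gauss = np.exp(-(m_i - m_th(z_i) - dm)**2 / (2 * sigma_i**2)) \
                / np.sqrt(2 * np.pi * sigma_i**2)
        return (P_lens * gauss).sum() * (dm[1] - dm[0])   # uniform-grid quadrature

    # The total likelihood of Eq. (17) is the product of sn_likelihood over
    # all SNe, further integrated over the unknown offset M on a grid.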

As said earlier, SN observations are usually used to determine background parameters, in our case $\Omega_{m0}$, but generally also the curvature, the dark-energy equation of state, etc. Therefore, the likelihood of Eq. (17) has to be further marginalized over $\sigma_8$ and $\{\sigma_{{\rm int},j}\}$, and the question is how lensing affects the resulting constraints. What we found is that while the effect of lensing can distort, increase and rotate the confidence-level contours of $\Omega_{m0}$, it does not substantially bias the position of the maximum. The reason is twofold. First, the skewness induced by lensing is not sizable as far as the mean magnitudes are concerned, and the variance added by lensing is subdominant with respect to the intrinsic dispersion of the supernovae, as shown by Figure 1. Second, the cosmology dependence of lensing will become less and less important as more precise measurements restrict the available range of cosmological parameters within which the effect of lensing can vary. Although sizable biases were found in Amendola et al. (2010), we now trace them to the toy model employed there for the matter field (a universe populated with NFW halos). The much more realistic current modeling of turboGL produces weaker lensing effects.

The main focus of this paper is, however, not on the background parameters but on the signal hidden in the scatter of the SN distance moduli, in particular the information relative to $\sigma_8$. Following this point of view, we will then use directly the likelihood of Eq. (17).

IV Constraining $\sigma_8$ with SN data

IV.1 The Method-of-the-Moments

In this section we come back to the question of whether one can use measurements of the observed distributions of standard candles at different redshifts to constrain the statistics of matter inhomogeneities, in particular $\sigma_8$. This idea was first proposed in Ref. Dodelson and Vallinotto (2006) for supernovae, but in that paper the authors focused mainly on using the additional variance due to lensing. This is problematic, as in principle one does not know the value of the intrinsic dispersion, or even whether it is constant in redshift or not. Thus, even in the limit of perfect modeling of instrumental errors and no extra systematics, a measurement of a dispersion growing with redshift could be attributed to some sort of evolution effect of the supernovae rather than to lensing. The only way out would be to reach a very good level of physical understanding of the explosion process (and of other systematic effects), so as to be able to accurately predict what $\sigma_{\rm int}(z)$ should look like. In this work we break this degeneracy between the cosmological parameters and the nuisance parameter $\sigma_{\rm int}$ using other moments besides just $\mu_2$. In particular, we propose to measure the cosmological parameters and the distribution of $\sigma_{\rm int}$ at the same time by using the information contained in the mean and in the first three central moments (which we will collectively refer to simply as $\mu_{1\text{-}4}$).

At this point we could just use the full likelihood of Eq. (17), which automatically contains all the available lensing information. However, we develop here an alternative approach which focuses, as outlined above, on the information carried by the lensing moments. There are mainly three reasons to do so: (i) it is computationally faster, as data can be binned in redshift (the full likelihood of (17) cannot be binned in redshift without losing information about moments above the second one) and easily implemented in numerical codes; (ii) it does not require knowledge of the full lensing PDF and instead simply needs the theoretical prediction of the second-to-fourth lensing moments (available as analytical fits in Marra et al. (2013)), which can directly be confronted with observations; and (iii) it will be essentially a $\chi^2$ approach (without need of convolutions), with all its advantages, such as the ability to easily include nuisance parameters (for instance, describing some intrinsic non-Gaussianity of supernovae). The disadvantage is that the likelihood gets somewhat more complicated as a result of the correlations between the moments, and in principle requires some information on higher moments.

Figure 2: Comparison of the moment analysis using different combinations of moments for the case of $10^5$ supernovae in LSST and assuming as usual a fiducial $\sigma_{\rm int} = 0.12$ mag. Left: using $\mu_{1\text{-}2}$ only; middle: using $\mu_{1\text{-}3}$; right: using $\mu_{1\text{-}4}$. We see that after marginalizing over $\sigma_{{\rm int},j}$ in each redshift bin, very little information is gained from $\mu_2$ only, and one has to resort to at least the third moment. Note that although $\mu_4$ adds some extra information, most of it comes from $\mu_3$.

The idea is very simple: we build a likelihood at each redshift bin directly for the first four moments $\mu_{1\text{-}4}$, to be called the method-of-the-moments (MeMo) likelihood:

$\mathcal{L}_{\rm MeMo} \propto \exp\Big(\!-\frac{1}{2} \sum_j \chi_j^2\Big)$   (18)
$\chi_j^2 = \big(\boldsymbol{\mu} - \boldsymbol{\mu}^{\rm data}\big)^T \, \Sigma_j^{-1} \, \big(\boldsymbol{\mu} - \boldsymbol{\mu}^{\rm data}\big)$   (19)
$\boldsymbol{\mu} = \{\mu_1, \mu_2, \mu_3, \mu_4\}$   (20)

where the vector $\boldsymbol{\mu}$ is the theoretical prediction for the moments, and its second-to-fourth components are defined in (9)-(11). The mean $\mu_1$ is the theoretical distance modulus of (14). The quantity $\boldsymbol{\mu}^{\rm data}$ is the vector of fiducial or measured (sample) moments. In the former case it is evaluated at the fiducial model, while in the latter case its components are:

$\mu_1^{\rm data} = \frac{1}{N_j} \sum_{i=1}^{N_j} m_i$   (21)
$\mu_n^{\rm data} = \frac{1}{N_j} \sum_{i=1}^{N_j} \big(m_i - \mu_1^{\rm data}\big)^n, \quad n = 2, 3, 4$   (22)

where $m_i$ are the SN distance moduli observed in the redshift bin centered at $z_j$. The covariance matrix $\Sigma_j$ is built using the fiducial (or observed) moments and therefore does not depend on cosmology (but it does depend on $\sigma_{{\rm int},j}$). This can be understood intuitively from (19), as in this case the $\chi_j^2$ function is minimized, as expected, by $\boldsymbol{\mu} = \boldsymbol{\mu}^{\rm data}$. Consequently, the normalization factor (basically the determinant of $\Sigma_j$) is irrelevant and we have neglected it in (18). The number of moments to be used in this analysis is in principle arbitrary, as each new moment adds information. However, as will be shown below, for supernovae almost all of the information is already included when using $\mu_{1\text{-}4}$ (and a very good fraction of it already in $\mu_{1\text{-}3}$).
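The sample moments of Eqs. (21)-(22) for one redshift bin are straightforward to compute; a minimal sketch:

    import numpy as np

    # Sample moments of Eqs. (21)-(22) for one redshift bin, given the
    # array m of the N_j distance moduli observed in that bin.
    def mu_data(m):
        mean = m.mean()                                         # Eq. (21)
        central = [((m - mean)**n).mean() for n in (2, 3, 4)]   # Eq. (22)
        return np.array([mean, *central])                       # (mu_1, ..., mu_4)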

In the Gaussian limit, i.e. for negligible lensing non-Gaussianity, the covariance matrix relative to the $j$-th redshift bin is simply

$\Sigma_j = \frac{1}{N_j} \begin{pmatrix} \sigma_{{\rm tot},j}^2 & 0 & 0 & 0 \\ 0 & 2 \sigma_{{\rm tot},j}^4 & 0 & 12 \sigma_{{\rm tot},j}^6 \\ 0 & 0 & 6 \sigma_{{\rm tot},j}^6 & 0 \\ 0 & 12 \sigma_{{\rm tot},j}^6 & 0 & 96 \sigma_{{\rm tot},j}^8 \end{pmatrix}$   (23)

where $\sigma_{{\rm tot},j}^2$ is the variance of the dataset, fiducial or observed, at the redshift bin centered on $z_j$. In the former case it is $\sigma_{{\rm tot},j}^2 = \sigma_{{\rm int},j}^2 + \mu_{2,\rm lens}(z_j)$, with the latter evaluated at the fiducial flat $\Lambda$CDM model given by the 9-year WMAP-only best-fit values Hinshaw et al. (2013). As discussed in Section II.1, we will consider only the dependence of lensing on $\Omega_{m0}$ and $\sigma_8$; the fiducial values of the latter two are 0.279 and 0.821, respectively. The quantity $N_j$ is the number of SNe in the $j$-th redshift bin. Eq. (23) is a good approximation in the limit in which the deviation from Gaussianity induced by lensing is small; in this case the covariance will be dominated by the Gaussian sampling variance. As we discuss in Appendix B, this is the case for standard candles at sufficiently low redshift.

In the case of a general distribution the covariance matrix is more complicated. It can be written in a more compact way in terms of the cumulants:

$\Sigma_j = \frac{1}{N_j} \begin{pmatrix} \kappa_2 & \kappa_3 & \kappa_4 & \kappa_5 + 6 \kappa_2 \kappa_3 \\ \text{–} & \kappa_4 + 2 \kappa_2^2 & \kappa_5 + 6 \kappa_2 \kappa_3 & \kappa_6 + 14 \kappa_2 \kappa_4 + 6 \kappa_3^2 + 12 \kappa_2^3 \\ \text{–} & \text{–} & \kappa_6 + 9 \kappa_2 \kappa_4 + 9 \kappa_3^2 + 6 \kappa_2^3 & \kappa_7 + 18 \kappa_2 \kappa_5 + 30 \kappa_3 \kappa_4 + 72 \kappa_2^2 \kappa_3 \\ \text{–} & \text{–} & \text{–} & \kappa_8 + 28 \kappa_2 \kappa_6 + 48 \kappa_3 \kappa_5 + 34 \kappa_4^2 + 204 \kappa_2^2 \kappa_4 + 216 \kappa_2 \kappa_3^2 + 96 \kappa_2^4 \end{pmatrix}$   (24)

where we denote by "–" the symmetric terms. As can be seen, an accurate estimation of $\Sigma_j$ requires knowledge of all central moments up to $\mu_8$ (see Appendix A for the relation between cumulants and moments). Nevertheless, as we will show, most of the information is contained in $\mu_{1\text{-}3}$, so in practice one only needs to go up to $\mu_6$. Moreover, as explained above, $\Sigma_j$ is to be evaluated at the fiducial values, so it is only necessary to know these moments at these fiducial values, and not as a function of the cosmological parameters (as carried out in Marra et al. (2013)). As discussed before, the only nonzero cumulant of the Gaussian supernova PDF (beyond the mean) is $\kappa_2 = \sigma^2$. Therefore $\Sigma_j$ can be evaluated, for each redshift bin, at the fiducial model simply by using $\kappa_2 = \sigma^2 + \mu_{2,\rm lens}$ [see (6) and (9)] and computing all the higher cumulants using (25)-(31) with $\mu_n = \mu_{n,\rm lens}$, i.e., the moments relative to the lensing PDF. Both Eqs. (23) and (24) were obtained using the Mathematica package mathStatica in the limit of a large number of observations in each bin.
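As a cross-check of the structure of Eqs. (23) and (24), the following sketch builds the matrix from given cumulants; setting $\kappa_3$ through $\kappa_8$ to zero recovers the Gaussian limit. Function and variable names are illustrative.

    import numpy as np

    # Covariance matrix of Eq. (24) for (mu_1, mu_2, mu_3, mu_4) in the
    # large-N_j limit, given the cumulants kappa_2..kappa_8 (here k2..k8)
    # of the convolved PDF at the fiducial model. With k3..k8 = 0 this
    # reduces to the Gaussian matrix of Eq. (23).
    def memo_cov(N, k2, k3, k4, k5, k6, k7, k8):
        S = np.zeros((4, 4))
        S[0, 0] = k2
        S[0, 1] = k3
        S[0, 2] = k4
        S[0, 3] = S[1, 2] = k5 + 6*k2*k3
        S[1, 1] = k4 + 2*k2**2
        S[1, 3] = k6 + 14*k2*k4 + 6*k3**2 + 12*k2**3
        S[2, 2] = k6 + 9*k2*k4 + 9*k3**2 + 6*k2**3
        S[2, 3] = k7 + 18*k2*k5 + 30*k3*k4 + 72*k2**2*k3
        S[3, 3] = (k8 + 28*k2*k6 + 48*k3*k5 + 34*k4**2
                   + 204*k2**2*k4 + 216*k2*k3**2 + 96*k2**4)
        S = S + S.T - np.diag(S.diagonal())   # fill the symmetric ("–") terms
        return S / N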

The MeMo likelihood of Eq. (18) depends on the intrinsic dispersions $\sigma_{{\rm int},j}$, which are left free and independent in each redshift bin. In principle one has two choices in order to make forecasts of future constraints on $\sigma_8$: either fixing $\sigma_{{\rm int},j}$ to a fiducial value (say 0.12 mag, constant in redshift Bernstein et al. (2012); Abell et al. (2009)) or leaving it as a set of free parameters, to be marginalized over in each redshift bin. The former is computationally convenient, as it effectively reduces the dimensionality of the full likelihood. The latter is the more appropriate approach, and is the one recommended for analyzing real data. One has to choose appropriate marginalization intervals for the flat priors to be used in (18). Although in principle these could be arbitrarily wide, for all future SN catalogs here considered a finite interval around the fiducial value proved sufficient.

Figure 3: Comparison of the full likelihood analysis (colored contours) with the MeMo-likelihood one (solid green contours), using the full 4-dimensional covariance matrix of $\mu_{1\text{-}4}$ as given in Eq. (24), for the case of $10^5$ supernovae in LSST and assuming a fixed $\sigma_{\rm int} = 0.12$ mag. Also shown are the MeMo results using only the diagonal part of the covariance matrix (black dotted contours). We see that the MeMo likelihood, using the mean and the second-to-fourth central moments, correctly reproduces the results relative to the full likelihood. Note that the fiducial of the MeMo contours was slightly changed in this plot to coincide with the one of the full likelihood and allow better comparison.
Figure 4: Comparison of the constraints from the full covariance matrix (24) (green contours) with 3 possible approximations: the Gaussian matrix (23) (black dashed contours), only the diagonal part of (24) (black dotted contours), and only $\mu_{1\text{-}3}$ (brown long-dashed contours), for the case of $10^5$ supernovae in LSST and marginalizing over all $\sigma_{{\rm int},j}$ (note that Figure 3 in contrast assumes a fixed value of $\sigma_{\rm int}$). As can be seen, most of the information is already contained in the first 3 moments. Note also that the non-Gaussian diagonal matrix is a very good approximation after marginalization over $\sigma_{{\rm int},j}$, whereas the full Gaussian matrix underestimates the error on $\sigma_8$ to less than 50% of the real value.

Figure 2 shows the comparison of the constraints using different combinations of moments as forecasted for the LSST 100k catalog, explained below in Section IV.2. We see that after marginalizing over $\sigma_{{\rm int},j}$ in each redshift bin, very little information is gained from $\mu_2$ only, and one has to resort to the third and (with marginal improvement) the fourth moment. This was arguably the biggest caveat of the original work Dodelson and Vallinotto (2006): in order to extract information from the variance only, the authors had to assume the intrinsic dispersion to be well known. However, when one allows complete freedom for the different $\sigma_{{\rm int},j}$, almost no information can be collected from just the mean and the variance.

Figure 3 confronts the full likelihood analysis with the MeMo one, assuming for computational convenience a fixed value of $\sigma_{\rm int} = 0.12$ mag in all bins. We see that by employing the full covariance matrix one gets a very good agreement between the two methods. In what follows we will therefore stick to the MeMo method (using $\mu_{1\text{-}4}$) when deriving our forecasts.

The fact that, as seen from Figure 2, the fourth moment adds little constraining power to the analysis has the important consequence that the MeMo likelihood can be limited to mean, variance and skewness. This is made more evident in Figure 4, where we plot both sets of contours together. This clearly makes the MeMo method more robust against the presence of SN outliers that could possibly bias the value of the higher moments. That being said, in what follows we will forecast results assuming all four moments are used.

Figure 5: In green, forecast of cosmological constraints using the mean and the central moments $\mu_2$, $\mu_3$ and $\mu_4$, and supernovae from: (left) DES, if the scatter could be reduced to 0.12 mag (the predicted scatter given in Bernstein et al. (2012) is somewhat larger); (middle) LSST, with $10^5$ SNe, also assuming a fiducial intrinsic scatter of 0.12 mag (as predicted in Abell et al. (2009)); and (right) same for LSST but with $5 \times 10^5$ SNe. The red contours are the "lensing-only" likelihood, see text.

Since, at least at first glance, the total convolved PDF is not largely dissimilar from a Gaussian (see Figure 1), one naturally wonders whether the full covariance matrix (24) can be approximated by the Gaussian one (23). We have carried out this test, adding also a second approximation for (18), namely using only the (full, non-Gaussian) diagonal terms. Figure 4 compares the three covariances for the case of LSST 100k (see Section IV.2). As can be seen, the non-Gaussian corrections to the errors are very relevant, and broaden the constraints on $\sigma_8$ by a factor greater than two. These lensing corrections become negligible, however, at smaller redshifts, as discussed in Appendix B. We also conclude that the diagonal part of (24) carries almost the same information as the full matrix. Due to this finding, we provide in Appendix B fitting functions for the lensing corrections to the diagonal components of the covariance matrix.

It is worth pointing out at this point that the formalism developed in this section could be generalized in a straightforward way so as to include fiducial values for the higher moments of the intrinsic supernova PDF. In other words, it would be possible to add some non-Gaussianity (possibly due to the SNe themselves and/or to other transients) also for the intrinsic supernova PDF and marginalize over these additional nuisance parameters.

IV.2 Constraints from future supernova surveys

In this Section we forecast the precision with which one can measure $\sigma_8$ with future supernova data. For this we study three different catalogs, based on 2 surveys: the Dark Energy Survey (DES) Bernstein et al. (2012) and LSST Abell et al. (2009). DES is already operational, and is expected to observe around 3000 SNe during its observational cycle. LSST is expected to be operational by the end of the decade, and should observe a tantalizing amount of roughly 50,000 SNe per year. It is assumed in Abell et al. (2009) that these can be measured with an average scatter of 0.12 mag. We therefore assume in what follows a fiducial scatter of $\sigma_{\rm int} = 0.12$ mag, constant in all redshifts. (Note that only the fiducial is assumed constant; i.e., we still marginalize over $\sigma_{{\rm int},j}$ in each redshift bin, as described in Section IV.) The 3 cases we will consider here are (i) DES, with the redshift distribution given in Bernstein et al. (2012) (note that the same work predicts a higher intrinsic scatter, varying between 0.14 and 0.25 mag); (ii) LSST, with the redshift distribution given in Abell et al. (2009) and a total of $10^5$ supernovae (which we dub "LSST 100k" and which should correspond to 2 years of observation); and (iii) same as (ii) but for a total of $5 \times 10^5$ SNe ("LSST 500k", corresponding to the full 10-year survey).

Figure 5 depicts the constraints on $\Omega_{m0}$ and $\sigma_8$ that can be obtained with the future supernova surveys described above. From left to right, we use DES, LSST 100k and LSST 500k. The green contours are the final constraints, using (18) and (24). Note that LSST is thus forecasted to measure $\sigma_8$ with error bars of around 0.057 with $10^5$ SNe (0.023 with $5 \times 10^5$), which is around the interesting 7% (3%) level. For DES, however, the statistical information is underwhelming, and one will not gain good knowledge on $\sigma_8$, even though we assumed an intrinsic scatter smaller than the one estimated by the DES collaboration Bernstein et al. (2012). The red contours, mostly of bookkeeping interest, depict the constraints provided by the lensing effect only (without using information about the mean), in other words, from the non-Gaussianity introduced by lensing on the final PDF. More precisely, they are the constraints obtained substituting the first row and column of (24) by zeros. Note that the red and green contours are almost perpendicular, which shows that the final constraints can be approximated by taking independently the mean (used in standard SN analyses) and the lensing effects.

Figure 6: The posterior distribution of $\sigma_{{\rm int},j}$ (in magnitudes) in different redshift bins using LSST 100k. Although this catalog has 18 such bins, we depict only 6 for clarity. For low $z$ lensing is small and the posterior is wide. For intermediate redshifts the constraints are the tightest, because that is where there are more SNe. For the highest redshift bins the number of forecasted supernovae decreases and the peaks broaden again.
Figure 7: Constant central-moment contours, both for the total convolved PDF (left) and for the lensing-only PDF (right). For each moment all contour levels are separated by a constant amount. Note that there is a near but not exact degeneracy, which explains why the higher-moment-only contours (i.e., without the mean) have a finite area (as in the case of LSST in the range of parameters considered; see Figure 5).

Besides constraints on cosmology, the central moments (especially $\mu_2$) also give information on the intrinsic scatter of SNe in each redshift bin. Figure 6 depicts the achievable precision in measuring $\sigma_{{\rm int},j}$ in different redshift bins using LSST 100k. Although this catalog has 18 such bins, we only depict 6 of them for clarity. The posteriors are all normalized to have unit area. The precision with which one can measure the different $\sigma_{{\rm int},j}$ varies roughly from 0.01 to 0.03 mag, depending on the redshift.

To illustrate the physics behind the red ("lensing-only") contours of Figure 5, it is useful to depict the contours of constant value for the different central moments. This is done in Figure 7, both for the total convolved PDF (left) and for the lensing-only PDF (right). Note that there is a near but not exact degeneracy, which explains the fact that, although very broad, the lensing-only contours are in fact closed.

V Conclusions

In this paper we developed a method to measure $\sigma_8$ by exploiting the lensing effects of matter clustering along the line of sight of SNe, extending the results of Dodelson and Vallinotto (2006). We have shown that one can obtain interesting constraints in a survey with one or a few hundred thousand supernovae, as in the LSST survey. In particular, we find that $\sigma_8$ can be estimated to within 3% if 500,000 supernovae with average magnitude error 0.12 are collected. This method is independent of and complementary to the standard methods using CMB, cosmic shear or cluster abundance, and bypasses the need to assume a constant intrinsic variance as in Dodelson and Vallinotto (2006). We still assume that the SN magnitude distribution does not have an intrinsic non-Gaussianity, although in principle one can marginalize over this extra freedom.

Instead of employing the full likelihood, we have shown that the method of moments (MeMo) provides a very good approximation and is much faster to implement. Even the simplest case, the diagonal approximation, works with sufficient accuracy (we provide fitting functions in Appendix B). We have shown that it is enough to use the first three moments (loosely speaking: mean, variance and skewness) to capture the non-Gaussian information available in the full likelihood. This should make the MeMo method robust against the presence of SN outliers that could possibly bias the value of higher moments. The needed moments as a function of the cosmological parameters have already been derived in Marra et al. (2013).

One could at first be suspicious about the feasibility of measuring the full covariance matrix (24), as it assumes we are able to measure all moments up to the 8th. Nevertheless, we have two reasons to believe this is not as hard as it seems. First, similarly to what was discussed in part I Marra et al. (2013), using the results of Takahashi et al. (2011) we evaluated the dependence of the first 8 moments on the point at which we cut the high-magnification tail of the PDF. The dependence is not very strong and saturates somewhere between cut values of 0.2 and 0.7, depending on the moment. This range reflects the fact that higher moments indeed require a higher cut for more precise results. However: (i) even for the 8th moment a lower cut would result in only a small error in its value; (ii) as we have shown in the text, a very good estimate of $\sigma_8$ can be achieved neglecting the 4th moment entirely, thus dropping the need for the 7th and 8th moments in the computation of the covariance matrix. At higher redshifts this issue could become important, not so much because one needs a slightly larger cut, but mainly because turboGL in its current form starts to deviate from the numerical simulations due to medium and strong-lensing corrections.

Finally, we note that this method can be extended to other parameters, for instance to the growth function of matter perturbations. This will be explored in future work.

Figure 8: Value of the correction factors $f_n$ for the variances of the moments, using the WMAP9 fiducial values of $\{\Omega_{m0}, \sigma_8\}$. Bottom to top: $f_2$, $f_3$ and $f_4$. For supernova forecasts in the usual redshift domain these corrections can be large, especially for the higher moments. For higher redshifts and for higher values of $\sigma_8$ they become even larger.
Acknowledgements.
It is a pleasure to thank Stephan Hilbert, Martin Makler, Bruno Moraes, Ribamar Reis, Peter Schneider, Brian Schmidt and Ryuichi Takahashi for fruitful discussions. LA and VM acknowledge support from DFG through the TRR33 program “The Dark Universe”. MQ is grateful to Brazilian research agencies CNPq and FAPERJ for support and to ITP, Universität Heidelberg for hospitality during part of the development of this project.

Appendix A Higher lensing moments

As discussed in Section IV, to compute the covariance matrix for the MeMo method one needs an estimation of all the central moments up to $\mu_8$ at the given fiducial cosmology. We thus extend here the relations between cumulants and central moments all the way to $\kappa_8$:

$\kappa_2 = \mu_2$   (25)
$\kappa_3 = \mu_3$   (26)
$\kappa_4 = \mu_4 - 3 \mu_2^2$   (27)
$\kappa_5 = \mu_5 - 10 \mu_2 \mu_3$   (28)
$\kappa_6 = \mu_6 - 15 \mu_2 \mu_4 - 10 \mu_3^2 + 30 \mu_2^3$   (29)
$\kappa_7 = \mu_7 - 21 \mu_2 \mu_5 - 35 \mu_3 \mu_4 + 210 \mu_2^2 \mu_3$   (30)
$\kappa_8 = \mu_8 - 28 \mu_2 \mu_6 - 56 \mu_3 \mu_5 - 35 \mu_4^2 + 420 \mu_2^2 \mu_4 + 560 \mu_2 \mu_3^2 - 630 \mu_2^4$   (31)
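In code form, a direct transcription of Eqs. (25)-(31), valid for a distribution with negligible mean (such as the lensing PDF):

    # Cumulants from central moments, Eqs. (25)-(31); m2..m8 are the
    # central moments of a zero-mean distribution.
    def cumulants(m2, m3, m4, m5, m6, m7, m8):
        k2 = m2
        k3 = m3
        k4 = m4 - 3*m2**2
        k5 = m5 - 10*m2*m3
        k6 = m6 - 15*m2*m4 - 10*m3**2 + 30*m2**3
        k7 = m7 - 21*m2*m5 - 35*m3*m4 + 210*m2**2*m3
        k8 = (m8 - 28*m2*m6 - 56*m3*m5 - 35*m4**2
              + 420*m2**2*m4 + 560*m2*m3**2 - 630*m2**4)
        return k2, k3, k4, k5, k6, k7, k8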

In part I of this work Marra et al. (2013) we provided fits for $\mu_{2,\rm lens}$, $\mu_{3,\rm lens}$ and $\mu_{4,\rm lens}$. For the present work we have also computed the values of the fifth-to-eighth lensing moments at the fiducial cosmology. Surprisingly, it turns out that even these very high moments, when computed with turboGL, are still in very good agreement with $N$-body simulations such as Takahashi et al. (2011).

Appendix B Lensing corrections for the covariance matrix

Although we have shown in Figure 4 that the Gaussian covariance matrix (23) is not a very good approximation for the supernova catalogs here considered, it can actually be used up to intermediate redshifts, for which lensing, and thus also the non-Gaussianity, is small. We compute the corrected values of the variances of the moments in terms of normalized correction factors $f_n$, defined by

$f_n(z) \equiv \frac{(\Sigma_j)_{nn}}{(\Sigma_j^{\rm gauss})_{nn}}$   (32)

so that $f_n = 2$ represents a corrected variance which is twice the Gaussian one in (23). Figure 8 shows this for the diagonal terms of (24). Note that the corrections start to be relevant already at intermediate redshifts, especially for the higher moments.
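In practice, the correction of Eq. (32) amounts to a simple rescaling of the Gaussian variances; a short sketch, where the $f_n$ values are placeholders standing in for numbers read off Figure 8:

    import numpy as np

    # Applying Eq. (32): rescale the Gaussian variances of Eq. (23) by f_n.
    sigma_tot, N = 0.12, 5000
    var_gauss = np.array([2*sigma_tot**4, 6*sigma_tot**6, 96*sigma_tot**8]) / N  # mu_2..mu_4
    f = np.array([1.5, 2.0, 3.0])          # placeholder correction factors
    var_corrected = f * var_gauss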

Figure 9: Comparison between the simple fits (37)–(39) of the lensing moments (black dashed line) and the turboGL output (red connected dots). The fiducial cosmology used is the best fit obtained by the Planck Collaboration. All values are in magnitudes.

As shown in Fig. 4, the diagonal of the covariance matrix of Eq. (24) is sufficient to obtain an accurate forecast of the constraints on $\sigma_8$. Consequently, we provide fitting functions with respect to redshift (average error below 5%) for the diagonal components of the covariance matrix (note that this assumes $\sigma_{\rm int} = 0.12$ mag):

(33)
(34)
(35)
(36)

which can be directly used to calculate the constraints on $\sigma_8$ (but possibly also on other parameters).

Appendix C Simplified Redshift-dependent fits

In this Section we provide simple redshift-dependent fits for the second-to-fourth central moments of the lensing PDF. They have been computed using the mass function of Ref. Courtin et al. (2011) and assuming as fiducial cosmology the best fit obtained by the Planck Collaboration when assuming a spatially flat $\Lambda$CDM model and fitting to observations of the cosmic microwave background and baryon acoustic oscillations (Ade et al., 2013, Table 5, last column). These fits are complementary to the flexible cosmology-dependent fits presented in part I of our present investigation Marra et al. (2013), and are meant to be used when the dependence on the cosmological parameters is not important and when the smaller redshift range is enough:

(37)
(38)
(39)

The comparison of these fits with the full numerical calculations is shown in Fig. 9.

The first of the above fits can also be more directly compared to other independent estimates of the additional lensing variance. For instance, early Supernova Legacy Survey data was used to estimate the lensing scatter (again, in magnitudes) in Jonsson et al. (2010) and in Kronborg et al. (2010), while recent independent theoretical computations provided comparable estimates Ben-Dayan et al. (2013).

References
