The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: double-probe measurements from BOSS galaxy clustering & Planck data – towards an analysis without informative priors
We develop a new methodology called double-probe analysis with the aim of minimizing informative priors in the estimation of cosmological parameters. Using our new methodology, we extract dark-energy-model-independent cosmological constraints from the joint data set of the Baryon Oscillation Spectroscopic Survey (BOSS) galaxy sample and the Planck cosmic microwave background (CMB) measurement. We measure the mean values and covariance matrix of {R, l_a, Ω_b h², n_s, log(A_s), Ω_k, H(z), D_A(z), f(z)σ_8(z)}, which give an efficient summary of the Planck data and the 2-point statistics from the BOSS galaxy sample. The CMB shift parameters are R ≡ √(Ω_m H_0²) r(z_*)/c and l_a ≡ π r(z_*)/r_s(z_*), where z_* is the redshift at the last scattering surface, and r(z_*) and r_s(z_*) denote our comoving distance to and the sound horizon at z_*, respectively; Ω_b h² is the physical baryon density. The advantage of this method is that we do not need to put informative priors on the cosmological parameters that galaxy clustering is not able to constrain well, i.e. Ω_b h² and n_s.
Using our double-probe results, we obtain constraints on Ω_m, H_0, and σ_8 assuming ΛCDM; on Ω_k assuming oΛCDM; on w assuming wCDM; on Ω_k and w assuming owCDM; and on w_0 and w_a assuming w_0w_aCDM. The results show no tension with the flat ΛCDM cosmological paradigm. By comparing with full-likelihood analyses with fixed dark energy models, we demonstrate that the double-probe method provides robust cosmological parameter constraints which can be conveniently used to study dark energy models.
We extend our study to measure the sum of the neutrino masses using different methodologies, including the double-probe analysis (introduced in this study), the full-likelihood analysis, and the single-probe analysis. From the double-probe analysis, we obtain upper limits on Σm_ν (at the 68% and 95% confidence levels) assuming ΛCDM and wCDM. This paper is part of a set that analyses the final galaxy clustering dataset from BOSS.
keywords: cosmology: observations - distance scale - large-scale structure of Universe - cosmological parameters
We have entered the era of precision cosmology along with the dramatically increasing number of sky surveys, including the cosmic microwave background (CMB; e.g., Bennett et al. 2013; Ade et al. 2014a), supernovae (SNe; Riess et al. 1998; Perlmutter et al. 1999), weak lensing (e.g., see Van Waerbeke & Mellier 2003 for a review), and large-scale structure from galaxy redshift surveys, e.g. the 2dF Galaxy Redshift Survey (2dFGRS; Colless et al. 2001, 2003), the Sloan Digital Sky Survey (SDSS; York et al. 2000; Abazajian et al. 2009), WiggleZ (Drinkwater et al. 2010; Parkinson et al. 2012), and the Baryon Oscillation Spectroscopic Survey (BOSS; Dawson et al. 2013; Alam et al. 2015) of the SDSS-III (Eisenstein et al. 2011).
Future galaxy redshift surveys, e.g. the Euclid satellite mission, will further tighten these constraints.
The galaxy redshift samples have been analysed in a cosmological context (see, e.g., Tegmark et al. 2004; Hutsi 2005; Padmanabhan et al. 2007; Blake et al. 2007; Percival et al. 2007; Percival et al. 2010; Reid et al. 2010; Montesano et al. 2012; Eisenstein et al. 2005; Okumura et al. 2008; Cabre & Gaztanaga 2009; Martinez et al. 2009; Sanchez et al. 2009; Kazin et al. 2010; Chuang et al. 2012; Samushia et al. 2012; Padmanabhan et al. 2012; Xu et al. 2013; Anderson et al. 2013; Manera et al. 2012; Nuza et al. 2013; Reid et al. 2012; Samushia et al. 2013; Tojeiro et al. 2012; Anderson et al. 2014b; Chuang et al. 2013a; Sanchez et al. 2013; Kazin et al. 2013; Wang 2014; Anderson et al. 2014a; Beutler et al. 2014b; Samushia et al. 2014; Chuang et al. 2013b; Sanchez et al. 2014; Ross et al. 2014; Tojeiro et al. 2014; Reid et al. 2014; Alam et al. 2015; Gil-Marín et al. 2015a, b; Cuesta et al. 2015).
Eisenstein et al. (2005) demonstrated the feasibility of measuring Ω_m h² and an effective distance, D_V(z), from the SDSS DR3 (Abazajian et al., 2005) LRGs, where D_V(z) corresponds to a combination of the Hubble expansion rate H(z) and the angular-diameter distance D_A(z). Chuang & Wang (2012) demonstrated the feasibility of measuring H(z) and D_A(z) simultaneously using the galaxy clustering data from the two-dimensional two-point correlation function of the SDSS DR7 (Abazajian et al., 2009) LRGs; this was later improved in Chuang & Wang (2013b, a) by upgrading the methodology and modelling to measure H(z), D_A(z), the normalized growth rate f(z)σ_8(z), and the physical matter density Ω_m h² from the same data. Analyses have been performed to measure H(z), D_A(z), and f(z)σ_8(z) from earlier data releases of the SDSS BOSS galaxy sample (Reid et al. 2012; Chuang et al. 2013a; Wang 2014; Anderson et al. 2014a; Beutler et al. 2014b; Chuang et al. 2013b; Samushia et al. 2014).
There are some cosmological parameters, e.g. Ω_b h² (the physical baryon density) and n_s (the scalar index of the power-law primordial fluctuations), that are not well constrained by galaxy clustering analysis. One usually adopts priors from CMB measurements, or fixes these parameters to the best-fit CMB values, when performing a Markov Chain Monte Carlo (MCMC) analysis. A concern is that weak degeneracies between these parameters and the measured ones are then missed. This could lead to incorrect constraints if models with very different predictions are tested, or to double-counting when combining with CMB measurements. One might add some systematic error budget to be safe from the potential bias (e.g., see Anderson et al. 2014a). An alternative approach is to use very wide priors, e.g. 5σ or 10σ flat priors from CMB, to minimize the potential systematic bias from the priors (e.g., see Chuang et al. 2012; Chuang & Wang 2012). However, this approach yields weaker constraints because of the wide priors. In this study, we test the ways in which LSS constraints are combined with CMB data, focusing on the information content and on the priors used when analysing LSS data. Since CMB data can be summarized with a few parameters (e.g., see Wang & Mukherjee 2007), we use the joint data set from Planck and BOSS to extract the cosmological constraints without fixing a dark energy model. Because our method combines the CMB data and the BOSS data upstream in the data analysis, we call it the "double-probe analysis". Our companion paper, Chuang et al. (2016), constrains geometric and growth information from the BOSS data alone, independent of the CMB data, and is thereby dubbed "single-probe"; it combines with the CMB data downstream in the analysis. Note that we assume there is no early-time dark energy or dark energy clustering in this study.
The parameters Ω_b h² and n_s will be well constrained by the CMB, so that we obtain the cosmological constraints without worrying about the problem of priors. The only input parameter which is not well constrained by our analysis is the galaxy bias, to which we apply a wide flat prior. In principle, our methodology extracts the cosmological constraints from the joint data set in an optimal way, since we do not need to include the uncertainty introduced by the priors.
In addition to constraining dark energy model parameters, we extend our study to constrain neutrino masses. Neutrino oscillation experiments provide the squared mass differences between the neutrino species, for both the normal hierarchy (m_1 < m_2 < m_3) and the inverted mass hierarchy (m_3 < m_1 < m_2) (Olive & Particle Data Group 2014). Cosmology provides a unique tool for measuring the sum of the neutrino masses, Σm_ν, since this quantity affects the expansion rate and the way structures form and evolve. Σm_ν estimation from galaxy clustering has been widely studied theoretically (see Hu et al. 1998; Lesgourgues & Pastor 2006 for a review) and with different samples such as WiggleZ (see Riemer-Sørensen et al. 2014; Cuesta et al. 2015) or SDSS data (see Aubourg et al. 2015; Beutler et al. 2014a; Reid et al. 2010; Thomas et al. 2010; Zhao et al. 2013). At late times, massive neutrinos damp the formation of cosmic structure on small scales due to the free-streaming effect (Dolgov, 2002). Existing in the form of radiation in the early Universe, neutrinos shift the epoch of matter-radiation equality, thus changing the shape of the cosmic microwave background (CMB) angular power spectrum. They affect the CMB via the so-called early integrated Sachs-Wolfe effect, and they influence gravitational lensing measurements (e.g., see Lesgourgues et al. 2006). Recent publications have attempted to constrain Σm_ν, imposing upper limits (Seljak et al., 2006; Hinshaw et al., 2009; Dunkley et al., 2009; Reid et al., 2010; Komatsu et al., 2011; Saito et al., 2011; Tereno et al., 2009; Gong et al., 2008; Ichiki et al., 2009; Li et al., 2009; de Putter et al., 2012; Xia et al., 2012; Sanchez et al., 2012; Giusarma et al., 2013), while some hints of lower limits come from cluster abundance results (Ade et al., 2014b; Battye & Moss, 2014; Wyman et al., 2014; Burenin, 2013; Rozo et al., 2013).
We measure the sum of the neutrino masses using different methodologies, including the double-probe analysis (introduced in this study), the full-likelihood analysis, and the single-probe analysis (Chuang et al. 2016; companion paper).
This paper is organized as follows. In Section 2, we introduce the Planck data, the SDSS-III/BOSS DR12 galaxy sample, and the mock catalogues used in our study. In Section 3, we describe the details of the methodology that constrains cosmological parameters from our joint CMB and galaxy clustering analysis. In Section 4, we present our double-probe cosmological measurements. In Section 5, we demonstrate how to derive cosmological constraints from our measurements for a given dark energy model. In Section 6, in contrast to the dark-energy-model-independent approach, we present the results of the full-likelihood analysis with fixed dark energy models. In Section 7, we measure the sum of the neutrino masses with the different methodologies. We summarize and conclude in Section 8.
2 Data sets & mocks
2.1 The SDSS-III/BOSS Galaxy Catalogues
The Sloan Digital Sky Survey (SDSS; Fukugita et al. 1996; Gunn et al. 1998; York et al. 2000; Smee et al. 2013) mapped over one quarter of the sky using the dedicated 2.5 m Sloan Telescope (Gunn et al., 2006). The Baryon Oscillation Spectroscopic Survey (BOSS; Eisenstein et al. 2011; Bolton et al. 2012; Dawson et al. 2013) is part of the SDSS-III survey. It has collected the spectra and redshifts of 1.5 million galaxies, 160,000 quasars, and 100,000 ancillary targets. The Data Release 12 (Alam et al., 2015) has been made publicly available.
2.2 The Planck Data
Planck (Tauber et al., 2010; Planck Collaboration I, 2011) is the third-generation space mission, following COBE and WMAP, to measure the anisotropy of the CMB. It observed the sky in nine frequency bands covering the range 30-857 GHz with high sensitivity and angular resolutions from 31' to 5'. The Low Frequency Instrument (LFI; Bersanelli et al., 2010; Mennella et al., 2011) covers the bands centred at 30, 44, and 70 GHz with pseudo-correlation radiometer detectors, while the High Frequency Instrument (HFI; Planck HFI Core Team, 2011) covers the 100, 143, 217, 353, 545, and 857 GHz bands with bolometers. Polarisation is measured in all but the highest two bands (Leahy et al., 2010; Rosset et al., 2010). In this paper, we use the 2015 Planck release (Planck Collaboration I, 2015), which includes the full mission maps and associated data products.
2.3 The Mock Galaxy Catalogues
We use 2000 BOSS DR12 MultiDark-PATCHY (MD-PATCHY) mock galaxy catalogues (Kitaura et al., 2015b) for validating our methodology and estimating the covariance matrices in this study. These mock catalogues were constructed with a procedure similar to that described in Rodríguez-Torres et al. (2015), who constructed BOSS DR12 lightcone mock catalogues using the MultiDark N-body simulations. However, instead of using N-body simulations, the 2000 MD-PATCHY mock catalogues were constructed using the PATCHY approximate simulations. These mocks are produced using ten boxes at different redshifts created with the PATCHY code (Kitaura et al., 2014). The PATCHY code is composed of two parts: 1) computing the approximate dark matter density field; and 2) populating galaxies from the dark matter density field with a biasing model. The dark matter density field is estimated using Augmented Lagrangian Perturbation Theory (ALPT; Kitaura & Hess 2013), which combines second-order Lagrangian perturbation theory (2LPT; e.g. see Buchert 1994; Bouchet et al. 1995; Catelan 1995) and the spherical collapse approximation (see Bernardeau 1994; Mohayaee et al. 2006; Neyrinck 2013). The biasing model includes deterministic bias and stochastic bias (see Kitaura et al. 2014; Kitaura et al. 2015 for details). The velocity field is constructed based on the displacement field of the dark matter particles, and the finger-of-god effect is modelled statistically. The mocks match the clustering of the galaxy catalogues for each redshift bin (see Kitaura et al. 2015b for details) and have been used in recent galaxy clustering studies (Cuesta et al., 2015; Gil-Marín et al., 2015a, b; Rodríguez-Torres et al., 2015; Slepian et al., 2015) and void clustering studies (Kitaura et al., 2015a; Liang et al., 2015). They are also used in Alam et al. (2016) (the BOSS collaboration paper for the final data release) and its companion papers (this paper and Ross et al. 2016; Vargas-Magana et al. 2016; Beutler et al. 2016a; Satpathy et al. 2016; Beutler et al. 2016b; Sanchez et al. 2016a; Grieb et al. 2016; Sanchez et al. 2016b; Chuang et al. 2016; Slepian et al. 2016a, b; Salazar-Albornoz et al. 2016; Zhao et al. 2016; Wang et al. 2016).
3 Methodology
We develop a new methodology to extract cosmological constraints from the joint data set of the Planck CMB data and the BOSS galaxy clustering measurements, fitting the LSS data with parameter combinations that define the key cosmological dependencies while including CMB constraints to simultaneously constrain the other parameters. This means that we can define constraints that can subsequently be used to constrain a wide range of dark energy models. Similar approaches have been applied to these data separately; our work is the first to investigate in detail how this joint analysis should be performed.
3.1 Likelihood from BOSS galaxy clustering
In this section, we describe the steps to compute the likelihood from the BOSS galaxy clustering.
Measure Multipoles of the Two-Point Correlation Function
We convert the measured redshifts of the BOSS CMASS and LOWZ galaxies to comoving distances by assuming a fiducial model, i.e., flat ΛCDM with the same Ω_m and h adopted for constructing the mock catalogues (see Kitaura et al. 2015b). To compute the two-dimensional two-point correlation function, we use the estimator given by Landy & Szalay (1993):
ξ(s, μ) = [DD(s, μ) − 2 DR(s, μ) + RR(s, μ)] / RR(s, μ),
where s is the separation of a pair of objects and μ is the cosine of the angle between the line of sight (LOS) and the line connecting the pair of objects. DD, DR, and RR represent the normalized data-data, data-random, and random-random pair counts, respectively, for a given distance range. The LOS is defined as the direction from the observer to the centre of a galaxy pair. Pairs are counted in bins of s and μ. The Landy-Szalay estimator has minimal variance for a Poisson process. Random data are generated with the same radial and angular selection functions as the real data. One can reduce the shot noise due to the random data by increasing their number; the number of random points we use is about 50 times that of the real data. While calculating the pair counts, we assign to each data point a radial weight of 1/[1 + n(z)·P_w], where n(z) is the radial number density and P_w is a constant (see Feldman et al. 1994).
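As an illustrative sketch of the estimator above (function and array names are ours, not the pipeline's; a real analysis counts pairs over the survey geometry with the weights described in the text), the Landy-Szalay combination of normalized pair counts can be written as:

```python
import numpy as np

def landy_szalay(dd, dr, rr):
    """Landy & Szalay (1993) estimator xi = (DD - 2DR + RR) / RR.

    dd, dr, rr: normalized data-data, data-random, and random-random
    pair counts in matching (s, mu) bins (each histogram divided by
    its total number of pairs).  Bins with no random pairs are
    returned as NaN, since the estimator is undefined there.
    """
    dd, dr, rr = (np.asarray(a, dtype=float) for a in (dd, dr, rr))
    xi = np.full_like(dd, np.nan)
    ok = rr > 0
    xi[ok] = (dd[ok] - 2.0 * dr[ok] + rr[ok]) / rr[ok]
    return xi
```

For example, a bin with normalized counts dd = 0.4, dr = 0.2, rr = 0.2 yields ξ = 1.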
The traditional multipoles of the two-point correlation function, in redshift space, are defined by
ξ_l(s) ≡ (2l + 1)/2 ∫_{-1}^{1} ξ(s, μ) P_l(μ) dμ,
where P_l(μ) is the Legendre polynomial (l = 0 and 2 here). We integrate over a spherical shell with radius s, while actual measurements of ξ(s, μ) are done in discrete bins. To compare the measured ξ(s, μ) and our theoretical model, the integral above should be converted into a sum,
ξ_l(s) = (2l + 1)/2 Σ_i ξ(s, μ_i) P_l(μ_i) Δμ,
where Δμ is the width of the μ bins and the bin width in s is 5 Mpc in this work.
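The binned sum above can be sketched as follows (a minimal illustration with our own function name; the μ binning convention over [-1, 1] is an assumption here):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def multipole(xi_smu, mu_edges, ell):
    """xi_ell(s) ~ (2*ell + 1)/2 * sum_i xi(s, mu_i) P_ell(mu_i) dmu_i.

    xi_smu:   array of shape (n_s_bins, n_mu_bins) with xi in (s, mu) bins.
    mu_edges: bin edges spanning [-1, 1]; bin midpoints are used for P_ell.
    """
    mu = 0.5 * (mu_edges[:-1] + mu_edges[1:])   # bin midpoints
    dmu = np.diff(mu_edges)
    coeffs = np.zeros(ell + 1)
    coeffs[ell] = 1.0
    p_ell = legval(mu, coeffs)                  # Legendre polynomial P_ell(mu)
    return 0.5 * (2 * ell + 1) * (xi_smu * p_ell * dmu).sum(axis=1)
```

A quick sanity check: an isotropic ξ(s, μ) = const has a monopole equal to that constant and a vanishing quadrupole.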
Fig. 1 shows the monopole (ξ_0) and quadrupole (ξ_2) measured from the BOSS CMASS and LOWZ galaxy samples compared with the best-fitting theoretical models.
We use a bin size of 5 Mpc in the scale range considered. The data points from the multipoles in this scale range are combined to form a vector, X, i.e.,
X ≡ (ξ_0(s_1), ..., ξ_0(s_N); ξ_2(s_1), ..., ξ_2(s_N)),
where N is the number of data points in each measured multipole. The length of the data vector X depends on the number of multipoles used.
Theoretical Two-Point Correlation Function
Following Chuang et al. (2016), companion paper, we use two models to compute the likelihood of the galaxy clustering measurements. One is a fast model which is used to narrow down the parameter space scanned; the other is a slower model which is used to calibrate the results from the fast model.
Fast model: The fast model we use is the two-dimensional dewiggle model explained in Chuang et al. (2016), companion paper. The theoretical model can be constructed from first- and higher-order perturbation theory. We first adopt the cold dark matter model and the simplest inflation model (adiabatic initial conditions). Computing the linear matter power spectrum, P_lin(k), using CAMB (Code for Anisotropies in the Microwave Background, Lewis et al. 2000), we decompose it into two parts:
P_lin(k) = P_nw(k) + P_w(k),
where P_nw(k) is the "no-wiggle" power spectrum calculated using Eq. (29) from Eisenstein & Hu (1998) and P_w(k) is the "wiggled" part defined by the decomposition above. The nonlinear damping of the "wiggled" part, in redshift space, is approximated following Eisenstein et al. (2007) by
P_w,damped(k, μ) = P_w(k) exp{−k²[μ²Σ_∥² + (1 − μ²)Σ_⊥²]/2}.
Thus, the dewiggled power spectrum is
P_dw(k, μ) = P_nw(k) + P_w,damped(k, μ).
We include the linear redshift-space distortion as follows (Kaiser, 1987):
P_g(k, μ) = b²(1 + βμ²)² P_dw(k, μ),
where b is the linear galaxy bias and β is the linear redshift distortion parameter.
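The steps above can be combined into one function. The following sketch (with our own argument names; sigma_par and sigma_perp stand for the LOS and transverse damping scales, and β = f/b is written out) assumes precomputed linear and no-wiggle spectra on a k grid:

```python
import numpy as np

def dewiggled_pk(k, mu, p_lin, p_nw, b, f, sigma_par, sigma_perp):
    """2D dewiggle template with the Kaiser factor (a sketch).

    P_g(k, mu) = b^2 (1 + (f/b) mu^2)^2
                 * [P_nw + (P_lin - P_nw) * exp(-k^2 Sigma^2(mu) / 2)],
    with Sigma^2(mu) = mu^2 sigma_par^2 + (1 - mu^2) sigma_perp^2.
    """
    p_w = p_lin - p_nw                                  # "wiggled" part
    sigma2 = mu**2 * sigma_par**2 + (1.0 - mu**2) * sigma_perp**2
    damped = p_nw + p_w * np.exp(-0.5 * k**2 * sigma2)  # dewiggled P(k, mu)
    beta = f / b                                        # linear RSD parameter
    return b**2 * (1.0 + beta * mu**2) ** 2 * damped
```

With zero damping scales the template reduces to the linear Kaiser result, b²(1 + βμ²)² P_lin(k).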
To compute the theoretical two-point correlation function, , we Fourier transform the non-linear power spectrum by using Legendre polynomial expansions and one-dimensional integral convolutions as introduced in Chuang & Wang (2013a).
We apply calibration functions to the fast model so that it mimics the slow model presented below. We find the calibration parameters by comparing the fast and slow models by visual inspection. It is not critical to find the best form of the calibration function and its parameters, as the model will be calibrated later when performing importance sampling with the slow model.
Slow model: The slower but more accurate model we use is the Gaussian streaming model described in Reid & White (2011). The model assumes that the pairwise velocity probability distribution function is Gaussian, and it can be used to relate the real-space clustering and pairwise velocity statistics of haloes to their clustering in redshift space by
1 + ξ^s(r_σ, r_π) = ∫ [1 + ξ(r)] [2π σ_12²(r, μ)]^{-1/2} exp{−[r_π − y − μ v_12(r)]² / (2σ_12²(r, μ))} dy,
where r_σ and r_π are the redshift-space transverse and LOS distances between two objects with respect to the observer, y is the real-space LOS pair separation, μ = y/r, ξ(r) is the real-space galaxy correlation function, v_12(r) is the average infall velocity of galaxies separated by real-space distance r, and σ_12(r, μ) is the rms dispersion of the pairwise velocity between two galaxies separated with transverse (LOS) real-space separation r_σ (y). ξ(r), v_12(r), and σ_12(r, μ) are computed in the framework of Lagrangian (ξ) and standard (v_12, σ_12) perturbation theories.
On large scales, only one nuisance parameter is necessary to describe the clustering of a sample of haloes or galaxies in this model: the first-order Lagrangian host halo bias in real space. In this study, we consider relatively large scales, so we do not include an additional nuisance parameter modelling the velocity dispersion from small-scale motions of haloes and galaxies. Further details of the model, its numerical implementation, and its accuracy can be found in Reid & White (2011).
We use the 2000 mock catalogues created by Kitaura et al. (2015b) for the BOSS DR12 CMASS and LOWZ galaxy samples to estimate the covariance matrix of the observed correlation function. We calculate the multipoles of the correlation functions of the mock catalogues and construct the covariance matrix as
C_ij = 1/(N − 1) Σ_{k=1}^{N} (X_i^k − X̄_i)(X_j^k − X̄_j),
where N is the number of mock catalogues, X̄_i is the mean of the i-th element of the vector over the mock catalogue multipoles, and X_i^k is the i-th element of the vector from the k-th mock catalogue. The data vector X is defined by Eq. (3). When inverting C, we also include the correction factor introduced by Hartlap et al. (2007).
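A compact sketch of the covariance estimate and the Hartlap-corrected inverse (function and variable names are ours):

```python
import numpy as np

def mock_covariance(mock_vectors):
    """Covariance from mock realizations and its debiased inverse.

    mock_vectors: array of shape (n_mock, n_bins), one data vector
    per mock.  C_ij = 1/(N-1) sum_k (X^k_i - mean_i)(X^k_j - mean_j);
    the inverse is multiplied by the Hartlap et al. (2007) factor
    (N - n_bins - 2) / (N - 1).  Returns (C, C_inv).
    """
    x = np.asarray(mock_vectors, dtype=float)
    n_mock, n_bins = x.shape
    c = np.cov(x, rowvar=False, ddof=1)          # unbiased sample covariance
    hartlap = (n_mock - n_bins - 2.0) / (n_mock - 1.0)
    c_inv = hartlap * np.linalg.inv(c)
    return c, c_inv
```

Note that the Hartlap factor requires n_mock to be comfortably larger than n_bins; with 2000 mocks and a data vector of a few tens of bins, the correction is at the percent level.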
Compute Likelihood from Galaxy Clustering
The likelihood is taken to be proportional to exp(−χ²/2), with χ² given by
χ² ≡ (X^th − X^obs)^T C^{-1} (X^th − X^obs),
where X^th is the vector from the theoretical model and X^obs is the vector from the observed data.
As explained in Chuang & Wang (2012), instead of recalculating the observed correlation function while computing χ² for different models, we rescale the theoretical correlation function to avoid rendering the χ² values arbitrary. This approach can be considered an application of the Alcock-Paczynski effect (Alcock & Paczynski, 1979). The rescaled theoretical correlation function is computed by
T(σ, π) = ξ_th(σ · D_A(z)/D_A,fid(z), π · H_fid(z)/H(z)),
where ξ_th is the theoretical model computed in Sec. 3.1.2. Here, H(z) and D_A(z) are the input parameters, and H_fid(z) and D_A,fid(z) are the fiducial values at z = 0.32 (LOWZ) and z = 0.59 (CMASS). Then, χ² can be rewritten as
χ² ≡ (X^th,rescaled − X^obs,fid)^T C_fid^{-1} (X^th,rescaled − X^obs,fid),
where X^th,rescaled is the vector computed by Eq. (2) from the rescaled theoretical correlation function, Eq. (15), and X^obs,fid is the vector from the observed data measured with the fiducial model (see Chuang & Wang 2012 for more details regarding the rescaling method).
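The χ² evaluation and the rescaling of the theory curve can be sketched as follows (the theory correlation function is passed as a callable; names and signatures are our own):

```python
import numpy as np

def chi_squared(x_theory, x_data, c_inv):
    """chi^2 = (X_th - X_obs)^T C^{-1} (X_th - X_obs)."""
    d = np.asarray(x_theory, dtype=float) - np.asarray(x_data, dtype=float)
    return float(d @ c_inv @ d)

def rescaled_theory(xi_th, sigma, pi, da, hz, da_fid, hz_fid):
    """Alcock-Paczynski rescaling of the theoretical correlation function:
    T(sigma, pi) = xi_th(sigma * D_A/D_A_fid, pi * H_fid/H),
    so the observed correlation function (measured once, with the
    fiducial cosmology) never needs to be recomputed."""
    return xi_th(sigma * da / da_fid, pi * hz_fid / hz)
```

When the trial cosmology equals the fiducial one, both rescaling factors are unity and T reduces to ξ_th evaluated at the unrescaled separations.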
3.2 Likelihood from Planck CMB data
Our CMB data set consists of the Planck 2015 measurements (Planck Collaboration I, 2015; Planck Collaboration XIII, 2015). The reference likelihood code (Planck Collaboration XI, 2015) was downloaded from the Planck Legacy Archive.
3.3 Markov Chain Monte-Carlo Likelihood Analysis
We perform Markov Chain Monte Carlo (MCMC) likelihood analyses using CosmoMC (Lewis & Bridle, 2002; Lewis, 2013). The fiducial parameter space that we explore spans the set of cosmological parameters {ω_c, ω_b, n_s, log(A_s), Ω_k, w, h} and the galaxy bias b of each sample. The quantities ω_c ≡ Ω_c h² and ω_b ≡ Ω_b h² are the cold dark matter and baryon density fractions, n_s is the power-law index of the primordial matter power spectrum, Ω_k is the curvature density fraction, w is the equation of state of dark energy, h is the dimensionless Hubble constant (H_0 = 100h km s^-1 Mpc^-1), and A_s is the normalization of the power spectrum. Note that, with the joint data set (Planck + BOSS), the only parameter which is not well constrained is the galaxy bias b, on which we apply a wide flat prior. The linear redshift distortion parameter can be expressed as β = f/b. Thus, one can derive f(z)σ_8(z) from the measured β and bσ_8(z).
Generate Markov chains with the fast model
We first use the fast model (the 2D dewiggle model) to compute the likelihood and generate the Markov chains. The Monte Carlo analysis goes through many random steps, keeping or discarding the sampled points in parameter space according to the Metropolis-Hastings algorithm. Eventually, it provides chains of parameter points with high likelihood describing the constraints on our model.
Calibrate the likelihood using the slow model
Once we have the chains generated with the fast model, we modify the weight of each point by
w_new = w_old × L_slow / L_fast,
where L_fast and L_slow are the likelihoods of the fast and slow models for a given point of input parameters in the chains. We save time by computing the slow-model likelihood only for the "important" points, without computing the likelihood of points which would not be included in the first place. This methodology is known as importance sampling. However, typical importance sampling adds the likelihood of some additional data set to the given chains; in this study, we instead replace the likelihood of a data set.
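In log-likelihood form, the reweighting above amounts to one line (a sketch; chain formats differ between samplers):

```python
import numpy as np

def reweight(weights, lnlike_fast, lnlike_slow):
    """Importance-sampling calibration of a chain: replace the fast-model
    likelihood with the slow-model one by multiplying each point's weight
    by L_slow / L_fast = exp(lnL_slow - lnL_fast)."""
    w = np.asarray(weights, dtype=float)
    return w * np.exp(np.asarray(lnlike_slow) - np.asarray(lnlike_fast))
```

A point where the slow model assigns twice the fast-model likelihood simply doubles its weight, leaving the rest of the chain untouched.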
4 Double Probe results
The 2-point statistics of galaxy clustering can be summarized by {H(z), D_A(z), f(z)σ_8(z), Ω_m h²} (e.g. Chuang & Wang 2013a). In some studies, Ω_m h² was not included since a strong prior on it had been applied. Instead of using H(z) and D_A(z), one can use the derived parameters H(z)·r_s(z_d)/r_s,fid(z_d) and D_A(z)·r_s,fid(z_d)/r_s(z_d) to summarize the cosmological information, since these two quantities are basically uncorrelated with Ω_m h², where r_s(z_d) is the sound horizon at the redshift of the drag epoch and r_s,fid(z_d) is that of the fiducial cosmology. In this study, Ω_m h² is well constrained by the joint data set, but we still use H(z)·r_s(z_d)/r_s,fid(z_d) and D_A(z)·r_s,fid(z_d)/r_s(z_d) because they have tighter constraints.
Wang & Mukherjee (2007) showed that the CMB shift parameters R and l_a, together with ω_b ≡ Ω_b h², provide an efficient and intuitive summary of CMB data as far as dark energy constraints are concerned. It is equivalent to use z_*, the redshift to the photon-decoupling surface (Wang, 2009). The CMB shift parameters are defined as (Wang & Mukherjee, 2007):
R ≡ √(Ω_m H_0²) r(z_*)/c,
l_a ≡ π r(z_*)/r_s(z_*),
where z_* is the redshift to the photon-decoupling surface given by CAMB (Lewis et al., 2000). The comoving distance to an object at redshift z is given by
r(z) = c H_0^{-1} |Ω_k|^{-1/2} sinn[|Ω_k|^{1/2} ∫_0^z dz'/E(z')],
where sinn(x) = sinh(x), x, and sin(x) for Ω_k > 0, Ω_k = 0, and Ω_k < 0, respectively; it has the simple relation D_A(z) = r(z)/(1 + z) with the angular diameter distance.
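A rough numerical sketch of R and l_a for a flat ΛCDM background (function names are ours; radiation is ignored here, which shifts r(z_*) at the percent level, so a Boltzmann code such as CAMB should be used in practice):

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def comoving_distance(z, omega_m, h0):
    """Flat LCDM line-of-sight comoving distance in Mpc (no radiation)."""
    inv_e = lambda zp: 1.0 / np.sqrt(omega_m * (1.0 + zp) ** 3
                                     + (1.0 - omega_m))
    chi, _ = quad(inv_e, 0.0, z, limit=200)
    return C_KM_S / h0 * chi

def shift_parameters(z_star, r_s_star, omega_m, h0):
    """CMB shift parameters (Wang & Mukherjee 2007):
    R = sqrt(Omega_m H0^2) r(z_*) / c,  l_a = pi r(z_*) / r_s(z_*)."""
    r = comoving_distance(z_star, omega_m, h0)
    shift_r = np.sqrt(omega_m) * h0 / C_KM_S * r
    l_a = np.pi * r / r_s_star
    return shift_r, l_a
```

For Planck-like inputs (Ω_m ≈ 0.31, H_0 ≈ 67.7, z_* ≈ 1090, r_s(z_*) ≈ 145 Mpc) this returns R near 1.7-1.8 and l_a near 300, the right ballpark despite the neglected radiation.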
In addition to the shift parameters, we also include the scalar index and amplitude of the power-law primordial fluctuations, n_s and A_s, to summarize the CMB information.
From the measured parameters {ω_c, ω_b, n_s, log(A_s), Ω_k, w, h, b}, we derive the set {R, l_a, Ω_b h², n_s, log(A_s), Ω_k, H(z), D_A(z), f(z)σ_8(z)} to summarize the joint data set of Planck and the BOSS galaxy sample. Tables 1 and 2 show the measured values and their normalized covariance. A normalized covariance matrix is defined by
N_ij ≡ C_ij / √(C_ii C_jj),
where C is the covariance matrix.
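The normalized covariance (i.e. correlation) matrix above is a one-liner in practice (a sketch with our own function name):

```python
import numpy as np

def normalized_covariance(c):
    """N_ij = C_ij / sqrt(C_ii * C_jj): unit diagonal, off-diagonal
    elements are the correlation coefficients in [-1, 1]."""
    c = np.asarray(c, dtype=float)
    d = np.sqrt(np.diag(c))
    return c / np.outer(d, d)
```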
To conveniently compare with other measurements using the CMASS sample, we extrapolated our measurements to z = 0.57; the extrapolated H(z) and D_A(z) values are given in Table 9 of Alam et al. (2016).
5 Constraining parameters of given dark energy models with double-probe results
In this section, we describe the steps to combine our results with other data sets assuming a given dark energy model. For a given model and cosmological parameters, one can compute {R, l_a, Ω_b h², n_s, log(A_s), Ω_k, H(z), D_A(z), f(z)σ_8(z)}. One can then take the covariance matrix, C, of these 12 parameters (the galaxy sample is divided into two redshift bins) and compute χ² as
χ² = Δp^T C^{-1} Δp,
where Δp is the difference between the predicted and measured parameter vectors,
where the angular diameter distance is given by
D_A(z) = c (1 + z)^{-1} H_0^{-1} |Ω_k|^{-1/2} sinn[|Ω_k|^{1/2} Γ(z)],   Γ(z) = ∫_0^z dz'/E(z'),
and sinn(x) = sinh(x), x, and sin(x) for Ω_k > 0, Ω_k = 0, and Ω_k < 0, respectively; the expansion rate of the universe is given by
H(z) = H_0 √(Ω_m(1 + z)³ + Ω_k(1 + z)² + Ω_de X(z)),
where Ω_de = 1 − Ω_m − Ω_k, and the dark energy density function is defined as
X(z) ≡ ρ_de(z)/ρ_de(0) = exp[3 ∫_0^z (1 + w(z'))/(1 + z') dz'].
σ_8(z) is defined in relation to the linear growth factor in the usual way,
σ_8(z) = σ_8(0) D(z)/D(0),
where D(z) is the growing solution of the second-order differential equation written in comoving coordinates,
D̈ + 2H(z)Ḋ − (3/2) Ω_m H_0² (1 + z)³ D = 0.
We write σ_8(z) as
σ_8²(z) = ∫_0^∞ (dk/k) Δ²(k, z) W²(kR_8),
W being the top-hat window function and R_8 = 8 h^-1 Mpc.
In this way, one just needs to compute linear theory to get σ_8(z), in order to reproduce and combine the CMB plus galaxy information. These equations assume no impact from massive neutrinos, and they hold for the cases of massless or approximately massless neutrinos. When including neutrino species with a given mass, one needs to solve the full Boltzmann hierarchy as shown in Ma & Bertschinger (1995); Lewis & Challinor (2002).
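The background quantities entering the χ² above can be sketched as follows (function names are ours; the CPL parametrization w(z) = w_0 + w_a z/(1+z) is assumed, for which the integral defining X(z) has a closed form):

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def e_of_z(z, om, ok, w0=-1.0, wa=0.0):
    """E(z) = H(z)/H0 with a CPL equation of state w(z) = w0 + wa*z/(1+z),
    for which X(z) = (1+z)^(3(1+w0+wa)) * exp(-3 wa z / (1+z))."""
    ode = 1.0 - om - ok
    x = (1.0 + z) ** (3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * z / (1.0 + z))
    return np.sqrt(om * (1.0 + z) ** 3 + ok * (1.0 + z) ** 2 + ode * x)

def angular_diameter_distance(z, om, ok, h0, w0=-1.0, wa=0.0):
    """D_A(z) = r(z)/(1+z), with sinh/sin handling open/closed curvature."""
    dh = C_KM_S / h0                 # Hubble distance [Mpc]
    chi, _ = quad(lambda zp: 1.0 / e_of_z(zp, om, ok, w0, wa), 0.0, z)
    if ok > 0.0:
        r = dh / np.sqrt(ok) * np.sinh(np.sqrt(ok) * chi)
    elif ok < 0.0:
        r = dh / np.sqrt(-ok) * np.sin(np.sqrt(-ok) * chi)
    else:
        r = dh * chi
    return r / (1.0 + z)
```

Setting w0 = -1 and wa = 0 gives X(z) = 1 and recovers the ΛCDM expansion history.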
Table 3 lists the constraints on the parameters of different dark energy models obtained using our double-probe measurements. The results show no tension with the flat ΛCDM cosmological paradigm.
6 Full-likelihood analysis fixing dark energy models
To validate our double-probe methodology, we perform full-likelihood MCMC analyses with fixed dark energy models. The main difference of this approach compared with our double-probe analysis is that a dark energy model is specified from the start. In contrast to the double-probe approach, one cannot use the results of the full-likelihood analysis to derive the constraints on the parameters of other dark energy models. Since the dark energy model is fixed, the quantities R, l_a, H(z), D_A(z), and f(z)σ_8(z) are determined by the input cosmological parameters, as shown in Eqs. 24, 25, 27, and 31. We show the results in Table 4. In Figs. 3, 4, and 5, we compare these results with our double-probe approach and the single-probe approach (Chuang et al. 2016; companion paper). We find very good agreement among the three approaches. Note that deriving the dark energy model constraints from our double-probe measurements is much faster than the full run. For example, using the same machine, it takes a few hours to obtain the constraints for ΛCDM using the double-probe measurements, but 6 days to reach similar convergence with the full-likelihood MCMC analysis (slower by a factor of 60).
Up to this point, we have introduced two methodologies for extracting cosmological information: the double-probe method and the full-likelihood analysis. Moreover, we compare these results with a third methodology, already introduced in Chuang et al. (2016), called the single-probe analysis combined with CMB. We summarize here the motivations for using each of them:
Double-probe: A joint fit to LSS data and CMB constraining the full set of cosmological parameters without the need of extra knowledge of the priors. This methodology allows us to test the prior information content assumed by other probes and gives us a tool to obtain dark-energy-model-independent measurements from LSS and CMB combined.
Full fit: A fit of the cosmological parameter set to LSS and CMB data, requiring the assumption of a dark energy model from the beginning (i.e. not going through the summary parameters as an intermediate step). This methodology provides a tool to check the information content of the data, and we take it to be the answer that the other methodologies should recover, as it makes no extra assumptions apart from the dark energy model.
Single-probe+CMB: Likelihoods are determined from the BOSS single-probe measurements of H(z), D_A(z), f(z)σ_8(z), and Ω_m h², together with Planck data. This methodology provides, in its first step, measurements of large-scale structure independent of CMB data, and is thus a good tool to test possible tensions between the data sets.
7 Measurements of neutrino mass
In this section, we focus on measuring the sum of the neutrino masses, Σm_ν, using the different methodologies described in the previous sections. First, we repeat the double-probe analysis described in Sec. 3.3 with an additional free parameter, Σm_ν, and present the constraints on the cosmological parameters. Second, we repeat the MCMC analysis with the full likelihood of the joint data set described in Sec. 6 and find that the full-shape measurement of the monopole of the galaxy 2-point correlation function introduces some detection of neutrino mass. However, since the monopole measurement is sensitive to observational systematics, we provide another set of cosmological constraints obtained by removing the full-shape information. Third, we also obtain the constraint on Σm_ν using the single-probe measurements provided by Chuang et al. (2016) (companion paper).
7.1 Measuring neutrino mass using double probe
Note first that for the study of Σm_ν, we replace Ω_m h² with Ω_cb h², the baryon plus cold dark matter density (e.g. see Aubourg et al. 2015), since Ω_m h² depends directly on Σm_ν. Thus, we use the following set of parameters from the double-probe analysis while measuring the neutrino mass: {R, l_a, Ω_b h², Ω_cb h², n_s, log(A_s), Ω_k, H(z), D_A(z), f(z)σ_8(z)}.
As described in Sec. 5, one can constrain the parameters of given dark energy models using Tables 5 and 6. Table 7 presents the cosmological parameter constraints assuming some simple dark energy models. Figure 6 shows the probability density of Σm_ν for the different dark energy models. Our measurements of Σm_ν using the double-probe approach are consistent with zero. The upper limit (68% confidence level) varies from 0.1 to 0.35 eV depending on the dark energy model.
In addition, we also derive the cosmological constraints using the results obtained with fixed Σm_ν, i.e. Tables 1 and 2 with Ω_m h² replaced by Ω_cb h². Differently from Table 3 (see Sec. 5), we include Σm_ν as one of the parameters to be constrained. The results are shown in Table 8. We find that the results are very similar to those of Table 7, showing that our double-probe measurements are insensitive to this assumption. Fig. 7 makes this point clear by comparing the 2D contours obtained when using the covariance matrix with Σm_ν varied (Tables 5 and 6) and with Σm_ν fixed (Tables 1 and 2): they lie on top of each other. Moreover, Fig. 7 also exhibits the constraining power added by f(z)σ_8(z). We find that the constraint on w becomes tighter while that on Σm_ν stays the same when including the f(z)σ_8(z) constraint. This is good news for future experiments, as their power on the neutrino constraint would not rely heavily on the growth rate measurements, which are more sensitive to observational systematics.
Furthermore, we have also checked the impact of adding Type Ia supernova (SNIa) data from the Joint Light-curve Analysis (JLA; Betoule et al., 2014) and find that the upper limit on Σm_ν decreases, because the SNIa data break a parameter degeneracy in the constraint from Planck+BOSS (see Fig. 8). In this way, we can obtain tighter upper limits on Σm_ν by including SNIa data.