Single-Probe Measurements from BOSS DR12

The Clustering of Galaxies in the Completed SDSS-III Baryon Oscillation Spectroscopic Survey: single-probe measurements from DR12 galaxy clustering – towards an accurate model

Abstract

We analyse the broad-range shape of the monopole and quadrupole correlation functions of the BOSS Data Release 12 (DR12) CMASS and LOWZ galaxy samples to obtain constraints on the Hubble expansion rate $H(z)$, the angular-diameter distance $D_A(z)$, the normalised growth rate $f(z)\sigma_8(z)$, and the physical matter density $\Omega_m h^2$. We adopt wide, flat priors on all model parameters in order to ensure that the results are those of a 'single-probe' galaxy clustering analysis. We also marginalise over three nuisance terms that account for potential observational systematics affecting the measured monopole. A Markov Chain Monte Carlo analysis with such wide priors and additional polynomial functions is computationally expensive for advanced theoretical models. We therefore develop a new methodology to speed it up: we scan the parameter space using a fast model first and then apply importance sampling using a slower but more accurate model.

Our measurements for the DR12 galaxy sample, using the range $40\,h^{-1}{\rm Mpc} < s < 180\,h^{-1}{\rm Mpc}$, are of the parameter set $\{D_A(z)\,r_{s,{\rm fid}}/r_s,\ H(z)\,r_s/r_{s,{\rm fid}},\ f(z)\sigma_8(z),\ \Omega_m h^2\}$ at $z = 0.32$ and at $z = 0.59$, where $r_s$ is the comoving sound horizon at the drag epoch and $r_{s,{\rm fid}} = 147.66$ Mpc is the sound scale of the fiducial cosmology used in this study.

In addition, we divide each sample (CMASS and LOWZ) into two redshift bins (four bins in total) to increase the sensitivity to redshift evolution. However, we do not find improvements in the constraints on dark energy model parameters. Combining our measurements with Planck data, we obtain constraints on $\Omega_m$, $H_0$, and $\sigma_8$ assuming $\Lambda$CDM; on $\Omega_k$ assuming o$\Lambda$CDM; on $w$ assuming $w$CDM; and on $w_0$ and $w_a$ assuming $w_0w_a$CDM. Our results show no tension with the flat $\Lambda$CDM cosmological paradigm. This paper is part of a set that analyses the final galaxy clustering data set from BOSS.

keywords:
cosmology: observations - distance scale - large-scale structure of Universe - cosmological parameters

1 Introduction

The cosmic large-scale structure measured by galaxy redshift surveys provides a powerful probe of the properties of dark energy and of the underlying cosmological model, in a manner highly complementary to measurements of the cosmic microwave background (CMB) (e.g., Bennett et al. 2013; Ade et al. 2014a), supernovae (SNe) (Riess et al., 1998; Perlmutter et al., 1999), and weak lensing (see e.g. Van Waerbeke & Mellier 2003 for a review).

The number of galaxy redshift surveys has increased dramatically over the last two decades. The 2dF Galaxy Redshift Survey (2dFGRS) obtained 221,414 galaxy redshifts (Colless et al., 2001, 2003), and the Sloan Digital Sky Survey (SDSS, York et al. 2000) collected 930,000 galaxy spectra in its Seventh Data Release (DR7) (Abazajian et al., 2009). WiggleZ collected spectra of 240,000 emission-line galaxies over 1000 square degrees (Drinkwater et al., 2010; Parkinson et al., 2012), and the Baryon Oscillation Spectroscopic Survey (BOSS, Dawson et al. 2013) of SDSS-III (Eisenstein et al., 2011) has observed 1.5 million luminous red galaxies (LRGs) over 10,000 square degrees. The newest BOSS data set has been made publicly available in SDSS Data Release 12 (DR12, Alam et al. 2015). The planned space mission Euclid will survey over 30 million emission-line galaxies over 15,000 deg$^2$ (e.g. Laureijs et al. 2011), and the upcoming ground-based experiment DESI (Dark Energy Spectroscopic Instrument) will survey 20 million galaxy redshifts and 600,000 quasars over 14,000 deg$^2$ (Schlegel et al., 2011). The proposed WFIRST satellite would map 17 million galaxies over 3400 deg$^2$, with a larger area possible with an extended mission (Green et al., 2012).

The methodologies of the data analyses of galaxy clustering have also developed along with the growing survey volumes. The observed galaxy data have been analysed, and the cosmological results delivered, using both the power spectrum (see, e.g., Tegmark et al. 2004; Hutsi 2005; Padmanabhan et al. 2007; Blake et al. 2007; Percival et al. 2007; Percival et al. 2010; Reid et al. 2010; Montesano et al. 2012) and the correlation function (see, e.g., Eisenstein et al. 2005; Okumura et al. 2008; Cabre & Gaztanaga 2009; Martinez et al. 2009; Sanchez et al. 2009; Kazin et al. 2010; Chuang et al. 2012; Samushia et al. 2012; Padmanabhan et al. 2012; Xu et al. 2013; Oka et al. 2014; Hemantha et al. 2014). Similar analyses have also been applied to the SDSS-III BOSS (Ahn et al., 2012) galaxy sample (Anderson et al., 2013; Manera et al., 2012; Nuza et al., 2013; Reid et al., 2012; Samushia et al., 2013; Tojeiro et al., 2012; Anderson et al., 2014b; Chuang et al., 2013a; Sanchez et al., 2013; Kazin et al., 2013; Wang, 2014; Anderson et al., 2014a; Beutler et al., 2014; Samushia et al., 2014; Chuang et al., 2013b; Sanchez et al., 2014; Ross et al., 2014; Tojeiro et al., 2014; Reid et al., 2014; Alam et al., 2015; Gil-Marín et al., 2015a, b; Cuesta et al., 2015).

In principle, the Hubble expansion rate $H(z)$, the angular-diameter distance $D_A(z)$, the normalized growth rate $f(z)\sigma_8(z)$, and the physical matter density $\Omega_m h^2$ can be well constrained by analysing the galaxy clustering data alone. Eisenstein et al. (2005) demonstrated the feasibility of measuring $\Omega_m h^2$ and an effective distance, $D_V(z)$, from the SDSS DR3 (Abazajian et al., 2005) LRGs, where $D_V(z)$ corresponds to a combination of $H(z)$ and $D_A(z)$. Chuang & Wang (2012) measured $H(z)$ and $D_A(z)$ simultaneously from the two-dimensional two-point correlation function of the SDSS DR7 (Abazajian et al., 2009) LRGs. This methodology is commonly known as the application of the Alcock-Paczynski effect (Alcock & Paczynski, 1979) to large-scale structure. The methodology has been improved and applied to other galaxy samples as well, e.g., see Chuang & Wang (2013a, b); Reid et al. (2012); Blake et al. (2012); Xu et al. (2013).

Galaxy clustering allows us to differentiate between smooth dark energy and modified gravity as the cause of cosmic acceleration, through simultaneous measurements of the cosmic expansion history $H(z)$ and the growth rate of cosmic large-scale structure, $f(z)$ (Guzzo et al., 2008; Wang, 2008; Blake et al., 2012). However, to measure $f(z)$ one must determine the galaxy bias $b$, which requires measuring higher-order statistics of the galaxy clustering (see Verde et al. 2002). Song & Percival (2009) proposed using the normalized growth rate, $f(z)\sigma_8(z)$, which avoids the uncertainties from the galaxy bias. Percival & White (2009) developed a method to measure $f(z)\sigma_8(z)$ and applied it to simulations. Wang (2012) estimated the expected statistical constraints on dark energy and modified gravity, including redshift-space distortions and other constraints from galaxy clustering, using a Fisher matrix formalism. $f(z)\sigma_8(z)$ has been measured from observed data in addition to $H(z)$ and $D_A(z)$ (e.g., see Samushia et al. 2012; Blake et al. 2012; Reid et al. 2012; Chuang et al. 2013a; Wang 2014; Anderson et al. 2014a; Beutler et al. 2014; Chuang et al. 2013b; Samushia et al. 2014), including determinations from the SDSS DR7 LRGs. Blake et al. (2012) measured $H(z)$, $D_A(z)$, and $f(z)\sigma_8(z)$ from the WiggleZ Dark Energy Survey galaxy sample. Analyses have also been performed to measure $H(z)$, $D_A(z)$, and $f(z)\sigma_8(z)$ from the SDSS BOSS galaxy sample (Reid et al., 2012; Chuang et al., 2013a; Wang, 2014; Anderson et al., 2014a; Beutler et al., 2014; Chuang et al., 2013b; Samushia et al., 2014; Alam et al., 2015; Gil-Marín et al., 2015b).

In Chuang et al. (2013b), we minimized the potential bias introduced by priors and observational systematics, so that one can safely combine our single-probe measurements with other data sets (i.e. CMB, SNe, etc.) to constrain the cosmological parameters of a given dark energy model. However, due to the large parameter space, the Markov Chain Monte Carlo analysis becomes expensive, which makes it difficult to use more advanced (and slower) models. In this study, we develop a new methodology that speeds up the analysis in two steps: 1) generate Markov chains with a fast (less accurate) model; 2) replace/calibrate the likelihoods with a more accurate (slower) model. For convenience, we use the "Gaussian streaming model" described in Reid & White (2011), although there have been further developments, e.g. Carlson et al. (2013); Wang et al. (2014); Taruya et al. (2013); Vlah et al. (2013); White (2014); Taruya et al. (2014); Bianchi et al. (2015); Vlah et al. (2015); Okumura et al. (2015). Although the model we use might not be the most accurate to date, it is sufficient for our purpose and for the scale range used in this study, as we will demonstrate.

This paper is organized as follows. In Section 2, we introduce the SDSS-III/BOSS DR12 galaxy sample and the mock catalogues used in our study. In Section 3, we describe the details of the methodology that constrains cosmological parameters from our galaxy clustering analysis. In Section 4, we present our single-probe cosmological measurements. In Section 5, given some simple dark energy models, we present the cosmological constraints from our measurements and from their combination with other data sets. We compare our results with other studies in Section 6. We summarize and conclude in Section 7.

2 Data sets

2.1 The CMASS and LOWZ Galaxy Catalogues

The Sloan Digital Sky Survey (Fukugita et al. 1996; Gunn et al. 1998; York et al. 2000; Smee et al. 2013) mapped over one quarter of the sky using the dedicated 2.5 m Sloan Telescope (Gunn et al., 2006). The Baryon Oscillation Spectroscopic Survey (BOSS, Eisenstein et al. 2011; Bolton et al. 2012; Dawson et al. 2013) is part of the SDSS-III survey. It has collected spectra and redshifts for 1.5 million galaxies, 160,000 quasars, and 100,000 ancillary targets. Data Release 12 (Alam et al., 2015) has been made publicly available. We use galaxies from the SDSS-III BOSS DR12 CMASS catalogue in the redshift range $0.43 < z < 0.75$ and from the LOWZ catalogue in the range $0.15 < z < 0.43$. The CMASS sample is selected with an approximately constant stellar mass threshold (Eisenstein et al., 2011); the LOWZ sample consists of low-redshift red galaxies selected from the SDSS DR8 (Aihara et al., 2011) image data. We use 800,853 CMASS galaxies and 361,775 LOWZ galaxies. Note that the number of galaxies used in this study is slightly smaller than that used by Alam et al. (2016) (the BOSS collaboration paper for the final data release); the difference is in the LOWZ sample used (see Alam et al. 2016 for details). The effective redshifts of these samples are 0.32 and 0.59, respectively. The details of how these samples were generated are described in Reid et al. (2016). In addition, we split both the CMASS and LOWZ samples into two redshift bins each (four bins in total). The effective redshifts are 0.24, 0.37, 0.49, and 0.64, and the numbers of galaxies are 154,367, 207,408, 425,612, and 375,241, respectively.

2.2 Mock Catalogues

In this study we rely on a set of 2,000 mock galaxy catalogues explicitly produced to resemble the clustering of the BOSS DR12 data. In particular, we make use of the MD-Patchy BOSS DR12 mock galaxy catalogues (Kitaura et al., 2015b). These mocks are generated with the PATCHY code (Kitaura et al., 2014), which uses augmented Lagrangian perturbation theory (ALPT; Kitaura & Hess 2013), combining second-order Lagrangian perturbation theory (2LPT; e.g. see Buchert 1994; Bouchet et al. 1995; Catelan 1995) with a spherical collapse model on small scales (see Bernardeau 1994; Mohayaee et al. 2006; Neyrinck 2013), to obtain a dark matter field on a mesh. This mesh is then populated with galaxies following a deterministic and a stochastic bias, whose parameters have been constrained to match the 2- and 3-point statistics accurately (Kitaura et al., 2015). The calibration was performed against accurate N-body-based reference catalogues using halo abundance matching, reproducing the number density, clustering bias, selection function, and survey geometry of the BOSS data in 10 redshift bins (Rodríguez-Torres et al., 2015). Peculiar motions in these mocks are divided into two categories, coherent and quasi-virialised: the former are obtained using ALPT, while the latter have a density-dependent stochastic component that was calibrated for the 10 redshift bins. The mock catalogues were constructed assuming a $\Lambda$CDM Planck cosmology (see Kitaura et al. 2015b for the parameter values). As shown in a mock catalogue comparison study (Chuang et al. 2015), PATCHY mocks are accurate to within 5% for the monopole on scales larger than $5\,h^{-1}$Mpc (or smaller than $0.5\,h\,{\rm Mpc}^{-1}$ in Fourier space) and to within 10-15% for the quadrupole. Kitaura et al. (2015b) also demonstrated the accuracy of the BOSS PATCHY mock catalogues, which are in very good agreement with the observed data in terms of 2- and 3-point statistics. These mocks have been used in recent galaxy clustering studies (Cuesta et al., 2015; Gil-Marín et al., 2015a, b; Rodríguez-Torres et al., 2015; Slepian et al., 2015) and void clustering studies (Kitaura et al., 2015a; Liang et al., 2015). They are also used in Alam et al. (2016) (the BOSS collaboration paper for the final data release) and its companion papers, including this paper and Ross et al. (2016); Vargas-Magana et al. (2016); Beutler et al. (2016a); Satpathy et al. (2016); Beutler et al. (2016b); Sanchez et al. (2016a); Grieb et al. (2016); Sanchez et al. (2016b); Pellejero-Ibanez et al. (2016); Slepian et al. (2016a, b); Salazar-Albornoz et al. (2016); Zhao et al. (2016); Wang et al. (2016).

3 Methodology

In this section, we describe the measurement of the multipoles of the correlation function from the observational data, the construction of the theoretical prediction, and the likelihood analysis used to constrain cosmological parameters and dark energy.

3.1 Two-Dimensional Two-Point Correlation Function

We convert the measured redshifts of the BOSS CMASS and LOWZ galaxies to comoving distances by assuming a fiducial model, i.e., flat $\Lambda$CDM with the same parameters adopted for constructing the mock catalogues (see Kitaura et al. 2015b). We use the two-point correlation function estimator given by Landy & Szalay (1993):

$$\xi(s,\mu) = \frac{DD(s,\mu) - 2\,DR(s,\mu) + RR(s,\mu)}{RR(s,\mu)}, \qquad (1)$$

where $s$ is the separation of a pair of objects and $\mu$ is the cosine of the angle between the line of sight (LOS) and the line connecting the pair of objects. $DD$, $DR$, and $RR$ represent the normalized data-data, data-random, and random-random pair counts, respectively, for a given distance range. The LOS is defined as the direction from the observer to the centre of a galaxy pair. We count pairs in bins of $s$ and $\mu$. The Landy & Szalay estimator has minimal variance for a Poisson process. The random catalogue is generated with the radial and angular selection function of the observed galaxies. One can reduce the shot noise due to the random points by increasing their number; the number of random points we use is about 50 times the number of observed galaxies. While calculating the pair counts, we assign to each data point a radial weight $w_{\rm FKP}(z) = 1/[1 + n(z)\,P_0]$, where $n(z)$ is the radial number density and $P_0 = 10^4\,h^{-3}\,{\rm Mpc}^3$ (see Feldman et al. 1994). We include the combination of the observational weights assigned to each galaxy by

$$w_{{\rm tot},i} = w_{{\rm sys},i}\,\big(w_{{\rm rf},i} + w_{{\rm fc},i} - 1\big), \qquad (2)$$

where $w_{{\rm tot},i}$ is the final weight assigned to galaxy $i$; $w_{{\rm sys},i}$ removes the correlation between CMASS galaxies and both stellar density and seeing; and $w_{{\rm rf},i}$ and $w_{{\rm fc},i}$ correct for missing objects due to redshift failures and fibre collisions. The details are described in Reid et al. (2016) (see also Ross et al. 2012). Later, we will also test the impact of systematics by removing $w_{{\rm sys},i}$ from the analysis.
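As a concrete illustration, the following sketch (in Python with NumPy; the array names, normalization conventions, and the value of $P_0$ shown are our assumptions, not code from this analysis) evaluates Eq. (1) from weighted pair-count histograms and combines the per-galaxy weights of Eq. (2) with the FKP weight:

```python
import numpy as np

def total_weight(w_sys, w_rf, w_fc, n_z, P0=1.0e4):
    """Per-galaxy weight: the combination of Eq. (2) multiplied by the
    FKP radial weight 1/[1 + n(z) P0] (Feldman et al. 1994)."""
    w_fkp = 1.0 / (1.0 + n_z * P0)
    return w_sys * (w_rf + w_fc - 1.0) * w_fkp

def landy_szalay(DD, DR, RR, w_data, w_rand):
    """Landy & Szalay (1993) estimator, Eq. (1), from weighted
    pair-count histograms of shape (n_s_bins, n_mu_bins).

    w_data and w_rand are the summed weights of the data and random
    catalogues, used to normalize the raw pair counts."""
    dd = DD / (0.5 * w_data * (w_data - 1.0))   # normalized data-data pairs
    dr = DR / (w_data * w_rand)                 # normalized data-random pairs
    rr = RR / (0.5 * w_rand * (w_rand - 1.0))   # normalized random-random pairs
    return (dd - 2.0 * dr + rr) / rr
```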

3.2 Multipoles of the Two-Point Correlation Function

The traditional multipoles of the two-point correlation function, in redshift space, are defined by

$$\xi_\ell(s) \equiv \frac{2\ell+1}{2}\int_{-1}^{1} \xi(s,\mu)\,P_\ell(\mu)\,{\rm d}\mu, \qquad (3)$$

where $P_\ell(\mu)$ is the Legendre polynomial ($\ell = 0$ and 2 here). In Eq. (3) we integrate over a spherical shell of radius $s$, while actual measurements of $\xi(s,\mu)$ are done in discrete bins. To compare the measured $\xi(s,\mu)$ with our theoretical model, the integral in Eq. (3) is converted into a sum,

$$\hat{\xi}_\ell(s) = \frac{2\ell+1}{2}\sum_{\mu} \xi(s,\mu)\,P_\ell(\mu)\,\Delta\mu, \qquad (4)$$

where the bin width is $\Delta s = 5\,h^{-1}$Mpc in this work.
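The discrete sum of Eq. (4) is straightforward to implement; here is a minimal sketch, assuming $\xi(s,\mu)$ has already been measured on a rectangular grid of $s$ and $\mu$ bins spanning $-1 \leq \mu \leq 1$ (the grid layout is our assumption):

```python
import numpy as np
from scipy.special import eval_legendre

def multipoles(xi_smu, mu_edges, ells=(0, 2)):
    """Multipoles of Eq. (4): xi_l(s) = (2l+1)/2 * sum_mu xi(s,mu) P_l(mu) dmu.

    xi_smu:   array of shape (n_s_bins, n_mu_bins);
    mu_edges: bin edges in mu, length n_mu_bins + 1."""
    mu_mid = 0.5 * (mu_edges[:-1] + mu_edges[1:])  # mu bin centres
    dmu = np.diff(mu_edges)
    return {ell: 0.5 * (2 * ell + 1) *
                 np.sum(xi_smu * eval_legendre(ell, mu_mid) * dmu, axis=1)
            for ell in ells}
```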

Fig. 1 shows the monopole ($\xi_0$) and quadrupole ($\xi_2$) measured from the BOSS CMASS and LOWZ galaxy samples compared with the best-fitting theoretical models. We split both the CMASS and LOWZ samples into two redshift bins and show the multipoles from these four bins in Fig. 2.

We use the scale range $40 < s < 180\,h^{-1}$Mpc with a bin size of $5\,h^{-1}$Mpc. Figs 1 and 2 show the measured multipoles in the various redshift ranges together with their best fits. The theoretical models are described in the next section.

Figure 1: Measurements of the monopole and quadrupole of the correlation function from two redshift bins. Left panel: measurements from the BOSS DR12 LOWZ galaxy sample compared to the best-fitting theoretical models (solid lines); the $\chi^2$ per degree of freedom (d.o.f.) is 0.91. Right panel: measurements from the BOSS DR12 CMASS galaxy sample compared to the best-fitting theoretical models (solid lines); the $\chi^2$/d.o.f. is 1.07. The error bars are the square roots of the diagonal elements of the covariance matrix. In this study, the fitting scale range is $40 < s < 180\,h^{-1}$Mpc and the bin size is $5\,h^{-1}$Mpc.
Figure 2: Measurements of the monopole and quadrupole of the correlation function from four redshift bins, compared to the best-fitting theoretical models (solid lines). Top panels: the two LOWZ redshift bins, with $\chi^2$/d.o.f. of 0.71 (left) and 1.20 (right). Bottom panels: the two CMASS redshift bins, with $\chi^2$/d.o.f. of 1.15 (left) and 0.85 (right). The error bars are the square roots of the diagonal elements of the covariance matrix. The fitting scale range is $40 < s < 180\,h^{-1}$Mpc and the bin size is $5\,h^{-1}$Mpc.

3.3 Theoretical Two-Point Correlation Function

We use two theoretical models in this study: the two-dimensional dewiggle model (Eisenstein et al., 2007) and the Gaussian streaming model (Reid & White, 2011). The former is very fast but less accurate for highly biased tracers; the latter is more accurate but much slower computationally. We develop a new methodology that takes advantage of both.

Fast model – two-dimensional dewiggle model

The fast model (two-dimensional dewiggle model) includes linear bias, nonlinear evolution at BAO scales, linear redshift-space distortions, and nonlinear redshift-space distortions at BAO scales on top of the linear theoretical model; it can be constructed from first- and higher-order perturbation theory. The procedure for constructing the fast model in redshift space is the following. First, we adopt the cold dark matter model and the simplest inflation model (adiabatic initial conditions), so that we can compute the linear matter power spectrum, $P_{\rm lin}(k)$, using CAMB (Code for Anisotropies in the Microwave Background, Lewis et al. 2000). The linear power spectrum can be decomposed into two parts:

$$P_{\rm lin}(k) = P_{\rm nw}(k) + P_{\rm BAO}^{\rm lin}(k), \qquad (5)$$

where $P_{\rm nw}(k)$ is the "no-wiggle" or pure CDM power spectrum calculated using Eq. (29) of Eisenstein & Hu (1998), and $P_{\rm BAO}^{\rm lin}(k)$ is the "wiggled" part defined by Eq. (5). The nonlinear damping of the "wiggled" part in redshift space can be well approximated, following Eisenstein et al. (2007), by

$$P_{\rm BAO}(k,\mu_k) = P_{\rm BAO}^{\rm lin}(k)\,\exp\left\{-\frac{k^2\sigma_v^2}{2}\left[(1-\mu_k^2) + \mu_k^2(1+f)^2\right]\right\}, \qquad (6)$$

where $\mu_k$ is the cosine of the angle between ${\bf k}$ and the LOS, $f$ is the growth rate, and $\sigma_v$ is computed following Crocce & Scoccimarro (2006) and Matsubara (2008) as

$$\sigma_v^2 = \frac{1}{6\pi^2}\int P_{\rm lin}(k)\,{\rm d}k. \qquad (7)$$

The dewiggled power spectrum is

$$P_{\rm dw}(k,\mu_k) = P_{\rm nw}(k) + P_{\rm BAO}(k,\mu_k). \qquad (8)$$

Besides the nonlinear redshift-space distortion introduced above, we include the linear redshift-space distortion as follows, in order to obtain the galaxy power spectrum in redshift space on large scales (Kaiser, 1987):

$$P_g^s(k,\mu_k) = b^2\left(1 + \beta\,\mu_k^2\right)^2 P_{\rm dw}(k,\mu_k), \qquad (9)$$

where $b$ is the linear galaxy bias and $\beta$ is the linear redshift-space distortion parameter.

We compute the theoretical two-point correlation function, $\xi(s,\mu)$, by Fourier transforming the nonlinear power spectrum $P_g^s(k,\mu_k)$. This task is performed efficiently using Legendre polynomial expansions and one-dimensional integral convolutions, as introduced in Chuang & Wang (2013a).
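To make the construction of Eqs (5)-(9) concrete, here is a minimal sketch of the dewiggled redshift-space power spectrum, assuming the damping takes the form written in Eq. (6) above and that $P_{\rm lin}$ and the no-wiggle spectrum are supplied as tabulated arrays (e.g. from CAMB and the Eisenstein & Hu 1998 fit):

```python
import numpy as np

def sigma_v2(k, P_lin):
    """Eq. (7): sigma_v^2 = (1/6 pi^2) * integral of P_lin(k) dk."""
    return np.trapz(P_lin, k) / (6.0 * np.pi**2)

def pk_redshift_space(k, mu_k, P_lin, P_nw, f, b, sig_v2):
    """Dewiggled galaxy power spectrum in redshift space, Eqs (5)-(9)."""
    P_bao_lin = P_lin - P_nw                                      # Eq. (5)
    damping = np.exp(-0.5 * k**2 * sig_v2 *
                     ((1.0 - mu_k**2) + (1.0 + f)**2 * mu_k**2))  # Eq. (6)
    P_dw = P_nw + P_bao_lin * damping                             # Eq. (8)
    beta = f / b
    return b**2 * (1.0 + beta * mu_k**2)**2 * P_dw                # Eq. (9)
```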

The purpose of the fast model is to mimic the slow model in a very efficient way. We thus apply the following calibration functions to the fast model:

(10)
(11)

where we find the calibration parameters by comparing the fast and slow models through visual inspection. Later, we will explain that the calibration parameters speed up the convergence but do not bias the results of the MCMC analysis; therefore, it is not critical to find the optimal form of the calibration function and its parameters.

Slow model – Gaussian streaming model

We use a more advanced model, the "Gaussian streaming model" described in Reid & White (2011). The model assumes that the pairwise velocity probability distribution function is Gaussian, and relates the real-space clustering and pairwise velocity statistics of haloes to their clustering in redshift space through

$$1 + \xi^s(s_\perp, s_\parallel) = \int \frac{{\rm d}y}{\sqrt{2\pi}\,\sigma_{12}(r,\mu)}\,\left[1 + \xi(r)\right]\exp\left\{-\frac{\left[s_\parallel - y - \mu\,v_{12}(r)\right]^2}{2\,\sigma_{12}^2(r,\mu)}\right\}, \qquad (12)$$

where $s_\perp$ and $s_\parallel$ are the redshift-space transverse and LOS distances between two objects with respect to the observer, $y$ is the real-space LOS pair separation, $r^2 = y^2 + s_\perp^2$ and $\mu = y/r$, $\xi(r)$ and $\xi^s(s_\perp,s_\parallel)$ are the real- and redshift-space galaxy correlation functions respectively, $v_{12}(r)$ is the average infall velocity of galaxies separated by real-space distance $r$, and $\sigma_{12}(r,\mu)$ is the rms dispersion of the pairwise velocity between two galaxies with transverse (LOS) real-space separation $s_\perp$ ($y$). $\xi(r)$, $v_{12}(r)$, and $\sigma_{12}(r,\mu)$ are computed in the framework of Lagrangian ($\xi$) and standard ($v_{12}$, $\sigma_{12}$) perturbation theory.

On large scales, only one nuisance parameter is necessary to describe the clustering of a sample of haloes or galaxies in this model: the first-order Lagrangian host halo bias in real space, $b_1^L$. One would need another parameter to model an additive, isotropic velocity dispersion accounting for the small-scale motions of haloes and galaxies (the one-halo term). However, since this study considers relatively large scales (i.e. $s > 40\,h^{-1}$Mpc), we do not include this parameter. Further details of the model, its numerical implementation, and its accuracy can be found in Reid & White (2011).
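A direct numerical implementation of the streaming integral in Eq. (12) is simple once $\xi(r)$, $v_{12}(r)$, and $\sigma_{12}(r,\mu)$ are available; in this sketch they are passed in as callables, velocities are expressed in comoving length units, and the integration range and sampling are our own choices:

```python
import numpy as np

def xi_streaming(s_perp, s_par, xi_r, v12, sigma12, y_max=200.0, n_y=4001):
    """Gaussian streaming model of Eq. (12) (Reid & White 2011):
    convolve the real-space clustering with a Gaussian pairwise
    velocity distribution along the line of sight."""
    y = np.linspace(-y_max, y_max, n_y)      # real-space LOS separation
    r = np.sqrt(s_perp**2 + y**2)
    mu = y / r
    sig = sigma12(r, mu)                     # pairwise velocity dispersion
    gauss = np.exp(-0.5 * (s_par - y - mu * v12(r))**2 / sig**2) / \
            (np.sqrt(2.0 * np.pi) * sig)
    return np.trapz((1.0 + xi_r(r)) * gauss, y) - 1.0
```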

Model for observational systematic errors

It is well known that the observations can be contaminated by systematic effects, e.g. see Ross et al. (2012) and Ross et al. (2016; companion paper). To obtain robust and conservative measurements, we include a model for systematics in the form of a simple polynomial,

$$\xi_0^{\rm sys}(s) = \frac{a_1}{s^2} + \frac{a_2}{s} + a_3, \qquad (13)$$

where $a_1$, $a_2$, and $a_3$ are nuisance parameters. Following Chuang et al. (2013b), we include the systematics model only for the monopole of the correlation function, since the quadrupole is insensitive to the systematic effects of which we are aware. If, on the other hand, we added another polynomial to the quadrupole, as is usually done in papers measuring BAO only (e.g. Anderson et al. 2014a; Cuesta et al. 2015), we would not be able to extract any information from redshift-space distortions.

3.4 Covariance Matrix

We use the 2000 mock catalogues created by Kitaura et al. (2015b) for the BOSS DR12 CMASS and LOWZ galaxy samples to estimate the covariance matrix of the observed correlation function. We calculate the multipoles of the correlation functions of the mock catalogues and construct the covariance matrix as

$$C_{ij} = \frac{1}{N-1}\sum_{k=1}^{N}\left(X_i^k - \bar{X}_i\right)\left(X_j^k - \bar{X}_j\right), \qquad (14)$$

where

$$\bar{X}_i = \frac{1}{N}\sum_{k=1}^{N} X_i^k, \qquad (15)$$

$N$ is the number of mock catalogues, $N_b$ is the number of data bins, $\bar{X}_i$ is the mean of the $i$-th element of the vector $X$ over the mock catalogue multipoles, and $X_i^k$ is the value of the $i$-th element of the vector $X$ from the $k$-th mock catalogue. We use the scale range $40 < s < 180\,h^{-1}$Mpc with a bin size of $5\,h^{-1}$Mpc. The data points of the multipoles in this scale range are combined to form a vector, $X$, i.e.,

$$X = \left\{\xi_0(s_1), \ldots, \xi_0(s_{N_s});\ \xi_2(s_1), \ldots, \xi_2(s_{N_s})\right\}, \qquad (16)$$

where $N_s$ is the number of data points in each measured multipole; here $N_s$ is the same for all the redshift bins. The length of the data vector $X$ depends on the number of multipoles used. We also include the correction factor $(N - N_b - 2)/(N - 1)$, applied to the inverse covariance matrix, introduced by Hartlap et al. (2006).
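A sketch of Eqs (14)-(16) together with the Hartlap correction applied to the inverse covariance follows; the input is a hypothetical array of concatenated $\xi_0$/$\xi_2$ vectors, one row per mock:

```python
import numpy as np

def covariance_from_mocks(X):
    """Eqs (14)-(15): covariance of the data vector estimated from
    N mock catalogues, X of shape (N, N_b), plus the inverse with the
    Hartlap et al. factor (N - N_b - 2)/(N - 1) applied."""
    N, Nb = X.shape
    D = X - X.mean(axis=0)                  # subtract the mean, Eq. (15)
    C = D.T @ D / (N - 1.0)                 # Eq. (14)
    C_inv = (N - Nb - 2.0) / (N - 1.0) * np.linalg.inv(C)
    return C, C_inv
```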

3.5 Likelihood

The likelihood is taken to be proportional to $\exp(-\chi^2/2)$ (Press et al., 1992), with $\chi^2$ given by

$$\chi^2 \equiv \sum_{i,j=1}^{N_X}\left[X_{{\rm th},i} - X_{{\rm obs},i}\right] C_{ij}^{-1}\left[X_{{\rm th},j} - X_{{\rm obs},j}\right], \qquad (17)$$

where $N_X$ is the length of the vector used, $X_{\rm th}$ is the vector from the theoretical model, and $X_{\rm obs}$ is the vector from the observed data.

As explained in Chuang & Wang (2012), instead of recalculating the observed correlation function while computing $\chi^2$ for different models, we rescale the theoretical correlation function; this avoids rendering the $\chi^2$ values arbitrary (the amount of information from the data sample must be fixed when computing $\chi^2$). This approach can be considered an application of the Alcock-Paczynski effect (Alcock & Paczynski, 1979). The rescaled theoretical correlation function is computed as

$$T\!\left(\xi(\sigma,\pi)\right) = \xi\!\left(\frac{D_A(z)}{D_A^{\rm fid}(z)}\,\sigma,\ \frac{H^{\rm fid}(z)}{H(z)}\,\pi\right), \qquad (18)$$

where $\xi$ is the theoretical model described in Sec. 3.3, and $\chi^2$ can be rewritten as

$$\chi^2 = \sum_{i,j=1}^{N_X}\left[T(X_{\rm th})_i - X_{{\rm obs},i}^{\rm fid}\right]\left(C^{\rm fid}\right)^{-1}_{ij}\left[T(X_{\rm th})_j - X_{{\rm obs},j}^{\rm fid}\right], \qquad (19)$$

where $T(X_{\rm th})$ is the vector computed via Eq. (4) from the rescaled theoretical correlation function, Eq. (18), and $X_{\rm obs}^{\rm fid}$ is the vector from the observed data measured with the fiducial model (see Chuang & Wang 2012 for more details regarding the rescaling method).
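The rescaling of Eq. (18) amounts to evaluating the theoretical $\xi$ at scaled coordinates; a sketch using a 2D spline (the grids and function names are our assumptions):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def rescale_xi(xi_th, sigma_grid, pi_grid, DA, DA_fid, H, H_fid):
    """Eq. (18): evaluate the theoretical 2D correlation function at
    (DA/DA_fid * sigma, H_fid/H * pi), so that it can be compared with
    data measured assuming the fiducial cosmology."""
    spline = RectBivariateSpline(sigma_grid, pi_grid, xi_th)
    S, P = np.meshgrid(sigma_grid, pi_grid, indexing="ij")
    return spline.ev((DA / DA_fid) * S, (H_fid / H) * P)
```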

3.6 Markov Chain Monte-Carlo Likelihood Analysis

Basic procedure

We perform Markov Chain Monte Carlo likelihood analyses using CosmoMC (Lewis & Bridle, 2002). The parameter space that we explore includes the matter and baryon density fractions, $\Omega_m h^2$ and $\Omega_b h^2$, the power-law index $n_s$ of the primordial matter power spectrum, the dimensionless Hubble constant $h$ ($H_0 = 100\,h$ km s$^{-1}$Mpc$^{-1}$), the normalization of the power spectrum, $H(z)$, $D_A(z)$, the linear redshift-space distortion parameter $\beta$, the galaxy bias, and the nuisance parameters $a_1$, $a_2$, and $a_3$. The linear redshift-space distortion parameter can be expressed as $\beta = f/b$; thus, one can derive $f(z)\sigma_8(z)$ from the measured $\beta(z)$ and $b\sigma_8(z)$. Among these parameters, only $H(z)$, $D_A(z)$, $\Omega_m h^2$, $f\sigma_8(z)$, and $b\sigma_8(z)$ are well constrained using the BOSS galaxy sample alone in the scale range of interest. We marginalize over the remaining six parameters, $\Omega_b h^2$, $n_s$, $h$, $a_1$, $a_2$, and $a_3$, assuming flat priors, where the flat priors on $\Omega_b h^2$ and $n_s$ are centred on the Planck measurements with widths many times the Planck uncertainties ($\sigma$ is taken from Ade et al. 2014b). These priors are sufficiently wide to ensure that CMB constraints are not double counted when our results are combined with CMB data (Chuang et al., 2012).

Generate/calibrate Markov chains with fast/slow model

We first use the fast model (2D dewiggle model) to compute the likelihood and generate the Markov chains. This step makes many trials (keeping or discarding points based on the MCMC algorithm) and eventually provides chains of parameter points that describe the parameter constraints and exclude the low-likelihood regions of the parameter space.

Once we have the chains generated using the fast model, we modify the weight of each point in the chains according to

$$w_{\rm new} = w_{\rm ori}\,\frac{\mathcal{L}_{\rm slow}}{\mathcal{L}_{\rm fast}}, \qquad (20)$$

where $\mathcal{L}_{\rm fast}$ and $\mathcal{L}_{\rm slow}$ are the likelihoods of the fast and slow models for a given point of input parameters in the chains, and $w_{\rm ori}$ is the original weight of the given point. We save time by computing the slow-model likelihood only at the "important" points, rather than at points that will eventually be discarded. This methodology is known as "importance sampling". Typically, importance sampling is used to add a likelihood from some additional data set to a given set of chains; in this study, we use it instead to replace the likelihood of a data set with a more accurate version.
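In code, the reweighting of Eq. (20) is a one-line operation on the stored chains (a sketch; the chain storage details are our assumptions):

```python
import numpy as np

def reweight_chain(w_ori, loglike_fast, loglike_slow):
    """Importance sampling, Eq. (20): multiply each chain point's weight
    by the likelihood ratio of the slow (accurate) to the fast model."""
    return w_ori * np.exp(loglike_slow - loglike_fast)
```

The saving comes from evaluating the slow model only once per accepted chain point, instead of at every MCMC proposal.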

It takes about 9 hours to find the best-fitting values using CosmoMC (i.e. action=2) with the slow model, and 30 minutes with the fast model (roughly 20 times faster).

On the scales we use for comparison with the BOSS galaxy data, the theoretical correlation function depends on cosmic curvature and dark energy only through the parameters $H(z)$, $D_A(z)$, $\Omega_m h^2$, and $f\sigma_8(z)$, assuming that dark energy perturbations are unimportant (valid in the simplest dark energy models). Thus we are able to extract constraints from the clustering data that are independent of the dark energy model.

4 Results

4.1 Validate the Methodology using Mock Catalogues

In this section, we test our methodology by applying it to the mock catalogues. We first demonstrate that using the mean of the correlation functions is equivalent to using the individual correlation functions from the mocks. We obtain measurements from the first 100 CMASS mock catalogues. We use the fast model only and do not include the polynomial modelling of the systematics in these tests. The left panel of Fig. 3 shows the distribution of the measurements of $D_A(z)\,r_{s,{\rm fid}}/r_s$ and $H(z)\,r_s/r_{s,{\rm fid}}$, where $r_s$ is the comoving sound horizon at the drag epoch and $r_{s,{\rm fid}} = 147.66$ Mpc is the sound scale of the fiducial cosmology used in this study. We also show the measurement from the weighted mean (using inverse-variance weighting) of the 100 correlation functions from these mocks. One can see that the weighted mean of the 100 individual measurements (blue square) is very close to the measurement from the mean correlation function of the 100 mocks (black circle). We conclude that one can use the mean correlation function to represent tests over multiple correlation functions. The right panel of Fig. 3 shows the scatter of the measurements of $\Omega_m h^2$ and $f\sigma_8$ from the same analysis.

Note that the computing time is still significant even after speeding up the analysis with the fast-slow model method described in the previous sections. Therefore, instead of applying the test to the correlation function of each individual mock catalogue, we use the mean of the correlation functions from all the mocks. From these tests, we can see whether our methodology introduces any bias; a small bias is better detected using the mean of 2000 rather than 100 mock correlation functions. We therefore validate our methodology by applying it to the mean correlation functions from the 2000 mocks for the different redshift bins and present the results in Table 1. One can see that, for all the parameters in all the redshift bins, we recover the input parameters well within the statistical uncertainties. We show the results using the calibrated dewiggle model in Appendix A, which also recovers the input parameters with reasonable precision. However, since the Gaussian streaming model is more realistic, we use its results as our fiducial results.

Figure 3: Left panel: the small red crosses indicate the measurements of $D_A(z)\,r_{s,{\rm fid}}/r_s$ and $H(z)\,r_s/r_{s,{\rm fid}}$ from 100 individual CMASS mock catalogues. We show their weighted mean (with inverse-variance weighting; blue square) and the measurement from the mean correlation function of the 100 mock catalogues (black circle). Right panel: the small red crosses indicate the measurements of $\Omega_m h^2$ and $f\sigma_8$ from the 100 individual CMASS mock catalogues.
Table 1: Measurements of $H(z)$, $D_A(z)$, $\Omega_m h^2$, $f\sigma_8(z)$, and $b\sigma_8(z)$ from the mean of the 2000 mock correlation functions, where $H(z)$ is in units of km s$^{-1}$Mpc$^{-1}$ and distances are in Mpc. The effective redshifts are 0.32, 0.59, 0.24, 0.37, 0.49, and 0.64. We show the means and standard deviations, the input values, the differences between the means and input values (in per cent), and the standard deviations in per cent.

4.2 Measurements of Cosmological Parameters from BOSS galaxy clustering

We now present the dark-energy-model-independent measurements of the parameters $H(z)$, $D_A(z)$, $\Omega_m h^2$, $f\sigma_8(z)$, and $b\sigma_8(z)$, obtained using the method described in the previous sections. We also present derived parameters, including $H(z)\,r_s/r_{s,{\rm fid}}$, $D_A(z)\,r_{s,{\rm fid}}/r_s$, $D_V(z)\,r_{s,{\rm fid}}/r_s$, and $D_V(z)$, with

$$D_V(z) \equiv \left[(1+z)^2 D_A^2(z)\,\frac{cz}{H(z)}\right]^{1/3}, \qquad (21)$$

where $r_s$ is the comoving sound horizon at the drag epoch calculated by CAMB, and $r_{s,{\rm fid}} = 147.66$ Mpc is the $r_s$ of the fiducial cosmology used in this study (the same as that used by the mock catalogues). We use $r_s/r_{s,{\rm fid}}$ rather than $r_s$ itself, since the ratio is insensitive to the approximate formula used for computing $r_s$. $D_V(z)$ is the effective distance, which can be measured from the spherically averaged correlation function or power spectrum (e.g. see Eisenstein et al. 2005).
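For reference, Eq. (21) in code (with $D_A$ in Mpc and $H$ in km s$^{-1}$Mpc$^{-1}$):

```python
C_LIGHT = 299792.458  # speed of light in km/s

def D_V(z, D_A, H):
    """Effective volume-averaged distance of Eq. (21) (Eisenstein et al. 2005)."""
    return ((1.0 + z)**2 * D_A**2 * C_LIGHT * z / H) ** (1.0 / 3.0)
```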

Table 2 lists the means and standard deviations obtained from the MCMC likelihood analysis of the DR12 galaxy correlation function. We measure $H(z)$, $D_A(z)$, $\Omega_m h^2$, $f\sigma_8(z)$, $b\sigma_8(z)$, and the derived (not independent) quantities $H(z)\,r_s/r_{s,{\rm fid}}$ and $D_A(z)\,r_{s,{\rm fid}}/r_s$, using the range $40 < s < 180\,h^{-1}$Mpc, in the six redshift bins described above; the effective redshifts are 0.32, 0.59, 0.24, 0.37, 0.49, and 0.64. The covariance matrices for these measurements can be found in the Appendix.

To compare conveniently with other measurements that use the CMASS sample within $0.43 < z < 0.7$ (we use $0.43 < z < 0.75$), we extrapolate our measurements of $f\sigma_8(z)$, $H(z)$, and $D_A(z)$ to $z = 0.57$ (see Table 9 of Alam et al. 2016).

In the next section, we describe how to combine our single-probe measurements with other data sets to constrain the parameters of given dark energy models.

Table 2: Our measurements of $H(z)$, $D_A(z)$, $\Omega_m h^2$, $f\sigma_8(z)$, $b\sigma_8(z)$, $H(z)\,r_s/r_{s,{\rm fid}}$, $D_A(z)\,r_{s,{\rm fid}}/r_s$, and $D_V(z)\,r_{s,{\rm fid}}/r_s$ from the BOSS DR12 data in the redshift bins stated, using the range $40 < s < 180\,h^{-1}$Mpc; $r_{s,{\rm fid}}$ is 147.66 Mpc in this study; $H(z)$ is in units of km s$^{-1}$Mpc$^{-1}$ and distances are in Mpc. The effective redshifts of these redshift bins are 0.32, 0.59, 0.24, 0.37, 0.49, and 0.64.

5 Constraining cosmological parameters of given dark energy models

5.1 Likelihood derivation

In this section, we describe the steps to combine our results with other data sets assuming given dark energy models. Here, we use the results from the two redshift bins $0.15 < z < 0.43$ (LOWZ) and $0.43 < z < 0.75$ (CMASS) as an example. For a given model and cosmological parameters, one can compute $H(z)\,r_s/r_{s,{\rm fid}}$, $D_A(z)\,r_{s,{\rm fid}}/r_s$, $\Omega_m h^2$, and $f\sigma_8(z)$. From Tables 5 and 6 in Appendix B and the standard deviations in Table 2, one can compute the covariance matrices, $C_{\rm LOWZ}$ and $C_{\rm CMASS}$, of these four parameters. Then, $\chi^2_{\rm LOWZ}$ and $\chi^2_{\rm CMASS}$ can be computed as

$$\chi^2_{\rm LOWZ} = \Delta X_{\rm LOWZ}^{\,T}\, C_{\rm LOWZ}^{-1}\, \Delta X_{\rm LOWZ} \qquad (22)$$

and

$$\chi^2_{\rm CMASS} = \Delta X_{\rm CMASS}^{\,T}\, C_{\rm CMASS}^{-1}\, \Delta X_{\rm CMASS}, \qquad (23)$$

where

$$\Delta X_{\rm LOWZ} = X^{\rm th}(z_{\rm LOWZ}) - X^{\rm data}(z_{\rm LOWZ}) \qquad (24)$$
$$\Delta X_{\rm CMASS} = X^{\rm th}(z_{\rm CMASS}) - X^{\rm data}(z_{\rm CMASS}), \qquad (25)$$

with $X(z) \equiv \{H(z)\,r_s/r_{s,{\rm fid}},\ D_A(z)\,r_{s,{\rm fid}}/r_s,\ \Omega_m h^2,\ f\sigma_8(z)\}$.

One can then include the cosmological constraints from the SDSS/BOSS galaxy clustering by adding $\chi^2_{\rm LOWZ} + \chi^2_{\rm CMASS}$ to the total $\chi^2$ in the MCMC analysis, since the correlation between these samples is negligible.
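A sketch of Eqs (22)-(25) as one would add them to an MCMC analysis; the measured mean vectors and covariance matrices are placeholders to be filled from Table 2 and Appendix B:

```python
import numpy as np

def chi2_bin(x_th, x_data, C):
    """Eqs (22)-(25) for one redshift bin, where x is the vector
    {H(z) r_s/r_s,fid, D_A(z) r_s,fid/r_s, Omega_m h^2, f sigma_8(z)}."""
    d = np.asarray(x_th) - np.asarray(x_data)
    return d @ np.linalg.solve(C, d)

# Total contribution from both (independent) bins:
# chi2_tot = chi2_bin(x_th_lowz, x_data_lowz, C_lowz) \
#          + chi2_bin(x_th_cmass, x_data_cmass, C_cmass)
```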

5.2 Constraining Dark Energy Parameters combining with external data sets

In this section, we present examples of combining our galaxy clustering results with the Planck CMB data assuming specific dark energy models. The Planck data set we use is the Planck 2015 measurements (Adam et al., 2015; Ade et al., 2015a). The reference likelihood code (Aghanim et al., 2015) was downloaded from the Planck Legacy Archive. Here we combine the Plik baseline likelihood for high multipoles ($\ell \geq 30$) using the TT, TE, and EE power spectra with the Planck low-$\ell$ multipole likelihood in the range $2 \leq \ell \leq 29$ (hereafter lowTEB). We also include the Planck 2015 lensing likelihood (Ade et al., 2015b), constructed from the measurements of the power spectrum of the lensing potential (hereafter referred to as "lensing"). When using the Planck lensing likelihood, the lensing amplitude parameter $A_L$ is always set to 1 (Ade et al., 2015a).

Table 3 shows the cosmological constraints assuming flat $\Lambda$CDM, o$\Lambda$CDM (non-flat $\Lambda$CDM), $w$CDM (constant dark energy equation of state), o$w$CDM (non-flat $w$CDM), $w_0w_a$CDM (time-dependent equation of state), and o$w_0w_a$CDM (non-flat $w_0w_a$CDM). In addition to the two redshift bins, we also use the four redshift bins, but we do not find any improvement in the cosmological parameter constraints. This indicates that the models we are testing are still simple and do not benefit from the higher redshift sensitivity; in addition, some information (pair counts) is lost when we slice the sample into more bins. In Figs 4, 5, and 6, we show the 2D marginalized contours at the 68% and 95% confidence levels for selected parameter pairs in each of these models ($\Lambda$CDM, o$\Lambda$CDM, $w$CDM, o$w$CDM, $w_0w_a$CDM, and o$w_0w_a$CDM). One can see that all the constraints are consistent with flat $\Lambda$CDM.

Planck+2bins ($\Lambda$CDM)
Planck+4bins ($\Lambda$CDM)
Planck+2bins (o$\Lambda$CDM)
Planck+4bins (o$\Lambda$CDM)
Planck+2bins ($w$CDM)
Planck+4bins ($w$CDM)
Planck+2bins (o$w$CDM)
Planck+4bins (o$w$CDM)
Planck+2bins ($w_0w_a$CDM)
Planck+4bins ($w_0w_a$CDM)
Planck+2bins (o$w_0w_a$CDM)
Planck+4bins (o$w_0w_a$CDM)
Table 3: The cosmological constraints from 2 redshift bins and from 4 redshift bins combined with Planck data, assuming $\Lambda$CDM, non-flat $\Lambda$CDM (o$\Lambda$CDM), $w$CDM, non-flat $w$CDM (o$w$CDM), $w_0w_a$CDM, and non-flat $w_0w_a$CDM (o$w_0w_a$CDM).