A new perspective on Dark Energy modeling via Genetic Algorithms
We use Genetic Algorithms to extract information from several cosmological probes: the type Ia supernovae (SnIa), the Baryon Acoustic Oscillations (BAO) and the growth rate of matter perturbations. This is done by implementing a model-independent and bias-free reconstruction of the various scales and distances that characterize the data, like the luminosity and the angular diameter distance in the SnIa and BAO data, respectively, or the redshift dependence of the matter perturbations in the growth-rate data. These quantities can then be used to reconstruct the expansion history of the Universe and the resulting Dark Energy (DE) equation of state in the context of FRW models, or the mass radial function in LTB models. In this way, the reconstruction is completely independent of any prior bias. Furthermore, we use this method to test the Etherington relation, i.e. the well-known relation between the luminosity and the angular diameter distance, η ≡ d_L(z)/[(1+z)² d_A(z)], which is equal to unity in metric theories of gravity. We find that the present data seem to suggest a ~3σ deviation from unity at intermediate redshifts. Finally, we present a novel way, within the Genetic Algorithm paradigm, to analytically estimate the errors on the reconstructed quantities by calculating a path integral over all possible functions that may contribute to the likelihood. We show that this can be done whether the data points are correlated or uncorrelated, and we explicitly demonstrate that our approach is in good agreement with other error-estimation techniques, such as the Fisher Matrix approach and the bootstrap Monte Carlo.
A decade after the fundamental discovery of the acceleration of the Universe Riess:2004nr (), cosmologists are still puzzled as to the nature of the dark energy supposed to be responsible for this acceleration. Many alternatives have been proposed: a cosmological constant (or, more generally, an approximately constant vacuum energy); an effective scalar field (as in quintessence quin (); kes (), etc.); various relativistic ether fluids (e.g. the Chaplygin gas Gorini:2004by ()); several ad hoc modifications of general relativity on cosmological scales (for instance scalar-tensor theories Boisseau:2000pr (), f(R) gravity fR (), Gauss-Bonnet terms fRG (), torsion cosmology Shie:2008ms (), etc.); extra symmetries like conformal Weyl gravity Mannheim:2011ds (); Mannheim:2011is () or massive gravitons Vainshtein (); the introduction of extra dimensions (as in DGP models DGP (), or Kaluza-Klein compactifications); new effective interactions (like the Galileon field Galileon (), etc.); finally, there is the possibility that the observed dimming of supernovae is not due to acceleration in a FRW universe but to large inhomogeneities from gravitational collapse, or to large LTB voids GarciaBellido:2008nz (); GarciaBellido:2008gd (); GarciaBellido:2008yq (); Alonso:2010zv (). For reviews of many of these interesting alternatives we refer the interested reader to Ref. review ().
All these alternatives could in principle be distinguished if we had detailed knowledge of the redshift evolution of a handful of parameters associated with concrete physical quantities: the matter content Ω_m(z), the rate of expansion H(z), the luminosity distance d_L(z), the angular diameter distance d_A(z), the galaxy and cluster number counts, the deceleration parameter q(z), the cosmic shear, the density contrast growth function, the Jeans length of perturbations, the anisotropic stresses of matter, the scalar torsion parameter, the bulk viscosity, etc.
Fortunately, we now live in an era in which all these parameters could in principle be measured in the not too distant future, thanks to the proposed probes of LSS and the CMB, which will determine the following observables with increasing accuracy: the matter power spectrum P(k), the SnIa distance modulus μ(z), the Baryon Acoustic Oscillation scale, the cluster number counts, the galaxy and halo mass functions, the lensing magnification and convergence, the Redshift Space Distortions and bias of LSS, the local rate of expansion H0, the age of the Universe, the CMB temperature and polarization anisotropies, the Integrated Sachs-Wolfe and Sunyaev-Zeldovich effects, and possibly also the fractal dimension of space-time.
There is, however, a limit to what we can say about the physics responsible for acceleration from observations. How many parameters can we constrain? What is the optimal parametrization of the linear perturbation equations? How much can we extract from the nonlinear regime? Can we interpolate between super-horizon, sub-horizon, mildly nonlinear and fully nonlinear scales? Can we parameterize wide classes of models? What is the role of systematics in the uncertainties? All these are crucial questions that will have to be addressed in the near future, when we have a significant increase in data from LSS and the CMB. For the time being, we should be as open as possible to any of the above alternatives. In that respect, we would like to pose the appropriate questions to the right number of cosmological observables without having to assume a model, i.e. without any prejudices as to the underlying space-time structure and dynamics.
In particular, we do not want to determine the likelihood contours for the N-dimensional parameter space of a fully specified ΛCDM model, but rather approach the observational data sets without prejudices, in a formal way, and then use the outcome to constrain whatever model or realization we may envision, be it ΛCDM, DGP, LTB, a modified gravity theory, etc. This is the reason we develop the Genetic Algorithm (GA) approach to DE modeling. Genetic algorithms have been used in the past in cosmology, see Bogdanos:2009ib (); Nesseris:2010ep (), to address supernovae distance-modulus fitting in a model-independent way, but this is the first time the approach is applied at full scale to the multiprobe space of Dark Energy modeling.
In Section II we describe the Genetic Algorithms as applied to the cosmological problem at hand, and we also describe a novel prescription for the computation of errors in the context of GAs, done by performing a path integral over all possible functions that may contribute to the likelihood. In Section III we give an account of the data sets used in this paper: the type Ia supernovae (SnIa) data, the Baryon Acoustic Oscillations (BAO) data and the growth-rate data, used for the computation of the redshift dependence of a handful of important parameters, like the luminosity and angular diameter distances, d_L(z) and d_A(z) respectively, and the growth of the matter density perturbations. In Section IV we describe the results of the GA analysis in the context of various models: FRW/DE, LTB, and modified gravity theories. In the context of FRW/DE models, we compare several different diagnostics, like the equation of state w(z), the deceleration parameter q(z) and the Om statistic, and we find that the best one for constraining the evolution and properties of DE is the deceleration parameter, as it requires no prior assumptions or a priori set parameters, unlike w(z), and has the smallest errors, especially at small redshifts. In the case of the LTB models, we reconstruct the mass radial function and find that it is in some tension with the profiles commonly used in the literature. For the case of the modified gravity theories we reconstruct the ratio G_eff/G_N, which is a smoking-gun signature of deviations from GR if it differs from unity at any redshift, and we find a bump at intermediate redshifts. We also include a test of the Etherington relation in Section V, and find a similar deviation in the same redshift range. However, since the statistical significance is not high enough, we cannot draw any strong conclusions. Finally, in Section VI we give our conclusions.
II Genetic Algorithms
In this section we briefly introduce the Genetic Algorithms (GA). For a more detailed description, and for the application of GAs to cosmology, we refer the interested reader to Refs. Bogdanos:2009ib () and Nesseris:2010ep (). The GAs are algorithms loosely modeled on the principles of evolution via natural selection, where a population of individuals evolves over time under the influence of operators such as mutation (a random change in an individual) and crossover (the combination of two or more different individuals); see Fig. 1 for a schematic description of the operations. The probability, or in other terms the “reproductive success”, that an individual will produce offspring (the next generation) is proportional to the fitness of this individual, where the fitness function measures how accurately it describes the data; in our case it is taken to be a χ².
The algorithm begins with an initial population of candidate individuals (in our case functions), which is randomly generated based on a predefined grammar of allowed basis functions, e.g. polynomials, trigonometric functions, exponentials and logarithms, and operations such as addition, subtraction and multiplication. In each successive generation, the fitness functions of the individuals in the population are evaluated, and the genetic operators of mutation and crossover are applied in order to produce the next generation. This process is iterated until some termination criterion is reached, e.g. a maximum number of generations. To make the whole process clearer we can summarize the various steps of the algorithm as follows:
1. Generate an initial random population of functions based on a predefined grammar.
2. Calculate the fitness of each individual in the current population.
3. Create the next generation by probabilistically choosing individuals from the current population to produce offspring via crossover and mutation, possibly also keeping a proportion of the previous generation.
4. Repeat step 2 until a termination goal has been achieved, e.g. a maximum number of generations.
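As an illustration, the loop above can be sketched in a few lines of Python. Here, purely schematically, the individuals are coefficient pairs (a, b) of a linear model rather than the symbolic expressions used in the paper, and the mock data, rates and population size are arbitrary toy choices:

```python
import random

# Toy GA: individuals are coefficient pairs (a, b) of the model a*x + b.
xs = [0.1 * i for i in range(10)]
ys = [2 * x + 1 for x in xs]          # noiseless mock data from y = 2x + 1

def chi2(ind):
    a, b = ind
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))

def mutate(ind, rate=0.1):
    # each gene mutates with probability `rate` by a small Gaussian kick
    return [g + random.gauss(0, 0.1) if random.random() < rate else g for g in ind]

def crossover(p1, p2):
    k = random.randrange(len(p1))     # combine parts of the two parents
    return p1[:k] + p2[k:]

random.seed(0)
pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(100)]
for gen in range(200):
    pop.sort(key=chi2)                # step 2: fitness evaluation
    parents = pop[:20]                # selection: fittest 20% reproduce
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(80)]   # step 3: crossover + mutation
best = min(pop, key=chi2)
print(best, chi2(best))               # approaches (a, b) = (2, 1)
```

Keeping the sorted parents in the population (elitism) guarantees that the best χ² never increases from one generation to the next.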
We should point out that the initial population depends solely on the choice of the grammar and the available operations, and therefore it only affects how fast the GA converges to the best-fit. Using a suboptimal grammar may result in the algorithm converging too slowly or being trapped in a local minimum. Also, two important parameters that affect the convergence speed of the GA are the mutation rate and the selection rate. The selection rate determines the fraction of individuals that will be allowed to produce offspring, while the mutation rate expresses the probability that an arbitrary part of an individual will be changed. If either of the two rates is too large then the GA may not converge at all, while if both are too small then the GA will converge very slowly toward some local minimum.
The difference between the GAs and the standard analysis of observational data, i.e. having an a priori defined model with a number of free parameters, is that the latter method introduces model-choice bias, and in general models with more parameters tend to give better fits to the data. Also, GAs have a clear advantage over the usual techniques when the parameter space is large, too complex, or not well understood, as is the case with dark energy. Finally, our goal is to minimize a function, in our case the χ², not by using an a priori defined model, but through a stochastic process based on GA evolution. In this way, no prior knowledge of a dark energy or modified gravity model is needed, and our result will be completely parameter-free.
In other words, the GA does not require us to arbitrarily choose a DE model, but uses the data themselves to find this model. Also, it is parameter free as the end result does not have any free parameters that can be changed in order to fit the data. So, in this sense this method has far less bias than any of the standard methods for the reconstruction of the expansion history of the Universe. This is the main reason for the use of the GAs in this paper. The Genetic Algorithms were first implemented in the analysis of cosmological data in Refs. Bogdanos:2009ib () and Nesseris:2010ep ().
II.2 An example
In this section we briefly describe a simple example of how the GA determines the best-fit given a dataset (this is only a schematic description of how the GA works, given solely for the sake of explaining its basic mechanisms). We will not describe any of the technicalities, but will instead concentrate on how the generations evolve over time, how many and which individuals are chosen for the next generation, etc. Also, in order to keep the example as simple as possible, we assume that our grammar includes only basic functions like polynomials, the trigonometric functions sin(x) and cos(x), the exponential exp(x) and the logarithm ln(x). As mentioned in the previous section, the first step of the GA is to set up a random initial population, which can be any simple combination of the grammar and the allowed operations, e.g. a few low-order polynomial or trigonometric expressions. The number of functions in the genetic population is usually of the order of a few hundred.
Next, given some data, the algorithm measures the fitness of each solution by calculating its χ², defined as
where the sum runs over the data points; in our hypothetical example each of the three candidate solutions acquires a definite χ² value. The selection per se is done by implementing the “tournament selection” method, which involves running several “tournaments”: the population is sorted with respect to the fitness of each individual, and then a fixed percentage of the population, adjusted by the selection rate (see the previous subsection), is chosen. For the sake of simplicity, let us assume that two out of the three candidate solutions are chosen.
The reproduction of these two solutions is done by the crossover and mutation operations. The crossover randomly combines parts of the “parent” solutions, for example by exchanging terms between them; schematically this may be shown as
In the next step the GA implements the mutation operation. The probability of mutation is adjusted by the mutation rate which, as mentioned, is kept small. In this example a mutation may be a change in the exponent of some term or in the value of a coefficient; for a candidate solution, this can be schematically shown as the power of one of its terms being changed from one integer to another.
Finally, at the end of the first round we have the three candidate solutions
At the beginning of the next round the fitness of each individual will again be determined, and the selection and the other operations will proceed as before. It is clear that even after just one generation the combined χ² of the candidate solutions has decreased dramatically; as we will see in the next section, it usually takes a bit more than 100 generations to reach an acceptable χ² and around 500 generations to converge on the minimum. After a predetermined number of generations, usually of the order of a few hundred, has been reached, or some other goal has been met, the GA finishes, with the best-fitting function of the last generation being considered the best-fit to the data.
II.3 Error analysis
II.3.1 Uncorrelated data
One possible shortcoming of the Genetic Algorithms is that they do not directly provide a method to estimate the error on the derived best-fit function. One way around this issue was used in Ref. Nesseris:2010ep (), where a bootstrap Monte Carlo was performed in order to estimate the error of the derived best-fit. We refer the reader to Refs. Nesseris:2010ep () and press92 () for more details on the bootstrap approach to error estimation. In this paper we will follow a more direct approach by deriving some analytic results. First, we remind the reader of some basic facts about the normal distribution, which for zero mean and variance σ² is given by

p(x) = (2πσ²)^(−1/2) exp(−x²/(2σ²)).

In the case of one variable, the 1σ confidence interval (CI), or equivalently the region that contains approximately 68.3% of the values, is found by calculating the area within [−σ, +σ], or

CI(1σ) = ∫_{−σ}^{+σ} p(x) dx = erf(1/√2) ≃ 0.6827,

where erf(x) is the error function. This can easily be generalized to n σ's as erf(n/√2). In our case, our likelihood functional is

L = N exp(−χ²(f)/2),
where f(x) is the function to be determined by the Genetic Algorithm and χ²(f) is the corresponding chi-squared for the data points, defined in Eq. (1). Determining the normalization constant N is then more complicated than for a simple normal distribution, as we have to integrate over all possible functions, or in other words perform a “path integral”,
where Df indicates integration over all possible values of the function f. The reason for this is that, as the GA is running, it may consider any possible function, no matter how bad a fit it represents, due to the mutation and crossover operators. Of course, even though these “bad” fits will in the end be discarded, they definitely contribute to the total likelihood and have to be included in the calculation of the error estimates of the best-fit.
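The Gaussian coverage fractions used throughout this construction can be verified in a couple of lines:

```python
import math

# Coverage of the n-sigma region of a zero-mean normal distribution:
# P(|x| < n*sigma) = erf(n / sqrt(2)), independent of sigma.
def coverage(n_sigma):
    return math.erf(n_sigma / math.sqrt(2))

print(coverage(1))  # ~0.6827
print(coverage(2))  # ~0.9545
print(coverage(3))  # ~0.9973
```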
The infinitesimal quantity Df can be written as Df = ∏_i df_i, where f_i and y_i are shorthand for f(x_i) and y(x_i) respectively, and we will for the time being assume that the function evaluated at a point x_i is uncorrelated (independent) from that at a point x_j. Therefore, Eq. (5) can be recast as
which means that N = ∏_i (2πσ_i²)^(−1/2). Therefore, the likelihood becomes
or if we take into account our assumption that the function evaluated at each point is independent
Now, we can calculate the error around the best-fit at a point as
If we demand that the errors correspond to the 1σ error of a normal distribution, then from Eq. (9) we can solve the following equation for δf_i numerically
and therefore determine the error of the best-fit function at each point x_i. However, this will lead to knowledge of the error only at the specific points x_i, which is not ideal for our purpose, i.e. to have a smooth, continuous and differentiable error function. Therefore, we will create a new chi-squared, defined as
and we will also parameterize the error δf(x) with a second-order polynomial, δf(x) = a + b x + c x². Finally, we minimize the combined chi-squared for the parameters (a, b, c), where the χ² is given by Eq. (1) and the CI term by Eq. (11). Then, the 1σ region of the best-fit function f_bf(x) will be contained within [f_bf(x) − δf(x), f_bf(x) + δf(x)].
The reason why we perform a combined fit of both chi-squares is that we need to ensure that the derived error region corresponds to small perturbations around the best-fit, and not to the whole infinite-dimensional functional space usually scanned by the genetic algorithm. This is in line with what is usually done in traditional error estimates, either with the Fisher matrix approach, where the errors are estimated from the inverse Fisher matrix, i.e. the covariance matrix, evaluated at the minimum, or in Monte Carlo simulations, where the errors are determined by sampling the parameter space around the best-fit press92 ().
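The pointwise condition of the previous paragraphs can be sketched as follows. Here we assume, as an illustration, that the 1σ requirement equates the likelihood of a data point integrated within f_bf ± δf to erf(1/√2), and solve for δf by bisection:

```python
import math

# Integrated normal likelihood of a data point (y, sigma) between
# f_bf - delta and f_bf + delta, around a best-fit value f_bf.
def ci(delta, f_bf, y, sigma):
    a = (f_bf + delta - y) / (math.sqrt(2) * sigma)
    b = (f_bf - delta - y) / (math.sqrt(2) * sigma)
    return 0.5 * (math.erf(a) - math.erf(b))

def solve_delta(f_bf, y, sigma, target=math.erf(1 / math.sqrt(2))):
    lo, hi = 0.0, 100.0 * sigma       # bisection bracket; ci grows with delta
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if ci(mid, f_bf, y, sigma) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check: when the best fit passes through the data point, delta f = sigma
print(solve_delta(f_bf=1.0, y=1.0, sigma=0.2))  # ~0.2
```

When the best-fit misses the data point, the recovered δf is correspondingly larger, as expected.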
II.3.2 Correlated data
Our results can also be generalized to the case in which the data are correlated with each other. In this case the chi-squared can be written as
where C_ij is the covariance matrix. The derivation proceeds as before, but now we first have to rotate the data to a basis in which they are uncorrelated. To do so we define a new variable u ≡ Lᵀ·(y − f), where L can be found by decomposing the inverse covariance matrix via a Cholesky decomposition, C⁻¹ = L Lᵀ; the chi-squared can then be written as χ² = uᵀu. The functional integration can then proceed as usual and the normalization can be found. Unsurprisingly, the resulting normalized likelihood can be written as
With this in mind we can also calculate the confidence interval (CI) for the best-fit function. The procedure is similar to that of the uncorrelated case, but now we must rotate to the new basis u. However, instead of integrating over the original variables, we integrate over the region
with the χ² given by Eq. (12). Rotating to the new basis we have
and Eq. (14) becomes
where, after doing a Gaussian integral, the CI is given by
For uncorrelated measurements we have C_ij = σ_i² δ_ij and therefore L_ij = δ_ij/σ_i. Then, by choosing the minus sign for the corresponding root, Eq. (20) reduces to Eq. (9) as expected. In order to actually calculate the error of the best-fit function, we can now follow the same procedure as before, minimizing the combined chi-squared, but now with the χ² given by Eq. (12) and with the CI in Eq. (11) given instead by Eq. (20).
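The whitening rotation used in this subsection can be checked numerically on a toy 2×2 example (the covariance matrix and residual vector below are arbitrary choices):

```python
import math

# Whitening correlated data: decompose the inverse covariance as C^{-1} = L L^T
# (Cholesky); the rotated residuals u = L^T r are uncorrelated and chi^2 = u.u.
C = [[0.04, 0.01],
     [0.01, 0.09]]                      # toy 2x2 covariance matrix

det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
Cinv = [[ C[1][1] / det, -C[0][1] / det],
        [-C[1][0] / det,  C[0][0] / det]]

# Cholesky factor of Cinv (lower-triangular L with Cinv = L L^T)
L11 = math.sqrt(Cinv[0][0])
L21 = Cinv[1][0] / L11
L22 = math.sqrt(Cinv[1][1] - L21 ** 2)

r = [0.3, -0.2]                         # toy residual vector (data - model)
u = [L11 * r[0] + L21 * r[1], L22 * r[1]]   # u = L^T r

chi2_direct = sum(r[i] * Cinv[i][j] * r[j] for i in range(2) for j in range(2))
chi2_rotated = u[0] ** 2 + u[1] ** 2
print(chi2_direct, chi2_rotated)        # the two agree
```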
II.3.3 Comparison with other methods
In order to assess how rigorous the path integral method described above is, we will now consider an explicit example and compare it numerically with a Bootstrap Monte Carlo and the Fisher Matrix approach.
First, we created a set of 31 mock data points with noise, in a fixed range of the independent variable, based on the model
where the parameters were assigned fixed fiducial values. The choice of the model was completely ad hoc, except for the requirement that it be well behaved (smooth) and that it exhibit some interesting features, such as a maximum at some point. This was done in order to test whether our methodology is capable of detecting such features and whether it can estimate the correct values of the errors at the same time. Then we fitted the model of Eq. (21) back to the mock data by minimizing the χ²; the best-fit parameters and the corresponding χ² were then recorded. The original or “real” model, along with the best fit for the same model, are shown in the top two plots of Fig. 2. The error region in the top right plot of Fig. 2 was estimated by following a Fisher Matrix approach, where the error of the best-fit function is obtained by propagating the parameter covariance through the derivatives of the model with respect to its parameters, evaluated at the best-fit, as described in press92 (), with the dummy variables corresponding to the three model parameters. The covariance matrix is simply calculated as the inverse of the Fisher matrix evaluated at the best-fit.
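The Fisher-matrix error band just described can be sketched for a toy linear model f(x) = a·x + b (the model, data range and error values are illustrative choices, not the paper's mock set):

```python
import math

# Fisher matrix for f(x) = a*x + b with uncorrelated errors sigma:
# F_ij = sum_k (df/dp_i)(df/dp_j)/sigma^2, Cov = F^{-1},
# and sigma_f(x)^2 = sum_ij (df/dp_i) Cov_ij (df/dp_j).
xs = [0.05 * k for k in range(21)]      # 21 points in [0, 1]
sigma = 0.1

# gradient of f wrt (a, b) is (x, 1), independent of the best-fit values
Faa = sum(x * x for x in xs) / sigma ** 2
Fab = sum(xs) / sigma ** 2
Fbb = len(xs) / sigma ** 2
det = Faa * Fbb - Fab ** 2
Cov = [[ Fbb / det, -Fab / det],
       [-Fab / det,  Faa / det]]        # inverse of the 2x2 Fisher matrix

def sigma_f(x):
    g = (x, 1.0)
    var = sum(g[i] * Cov[i][j] * g[j] for i in range(2) for j in range(2))
    return math.sqrt(var)

print(sigma_f(0.0), sigma_f(0.5), sigma_f(1.0))  # band narrowest near mean x
```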
For the bootstrap Monte Carlo, we created 100 mock data sets by resampling the original or “real” data with replacement, and then applied the GA to each of them to determine the corresponding best-fit press92 (). Following the prescription described in Ref. Bogdanos:2009ib (), we calculated the error region, which we present in the bottom left plot of Fig. 2. There seems to be a sweet spot in the error region, i.e. a point with very small errors, but this is due to the stochastic nature of the bootstrap and the behavior (a sharp transition) of the data in this region, which lie “below” the best-fit on one side of the transition and “above” it on the other.
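The resampling step can be sketched as follows; here, purely for illustration, the "fit" on each mock set is the sample mean rather than a GA best-fit function, and the data are a stand-in Gaussian sample:

```python
import random
import statistics

# Bootstrap error on a fitted quantity: resample the data with replacement,
# refit on each mock set, and take the scatter of the fits as the error.
random.seed(2)
data = [random.gauss(5.0, 1.0) for _ in range(50)]

fits = []
for _ in range(100):                             # 100 mock data sets, as in the text
    mock = [random.choice(data) for _ in data]   # resample with replacement
    fits.append(statistics.fmean(mock))          # the "fit" is the mean here

err = statistics.stdev(fits)
print(err)   # close to stdev(data)/sqrt(50) for this toy case
```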
Finally, we also calculated the error region based on the path integral approach described in the previous sections. In this case, the best-fit by the GA was found to be
with a χ² clearly better than that of the real model itself! This obviously reinforces our belief that the GAs can provide very good descriptions of the underlying data. However, it should be noted that extrapolating the best-fit GA function outside the range covered by the data is not advised, since the lack of data in that region will lead to dramatic deviations between the GA prediction and the real model.
The best-fit function and its errors are shown in the bottom right plot of Fig. 2. As can be seen, the agreement between all three methods is remarkably good, which lends support to our newly proposed method for error estimation in the GA paradigm. Furthermore, we should note that of the standard techniques only the bootstrap Monte Carlo is applicable to the case of the GAs (as the best-fit has no parameters over which one may differentiate, as one would do in the Fisher Matrix approach), so our method has the obvious advantage of being much faster and less computationally expensive than the bootstrap, while remaining equally reliable.
III The data
III.1 Supernovae of type Ia
The analysis of SN Ia standard candles is based on the method described in Ref. Lazkoz:2007cc (). We will mainly use the recently released update to the Union 2 set, i.e. the Union v2.1 dataset Suzuki:2011hu ().
The SN Ia observations use light-curve fitters to provide the apparent magnitude m of the supernovae at peak brightness. This is related to the luminosity distance d_L through m = M + 5 log10[d_L/(10 pc)], where M is the absolute magnitude. In flat FRW models, the luminosity distance can be written as d_L(z) = (1+z) ∫_0^z c dz'/H(z'), where H = ȧ/a is the Hubble parameter and a(t) is the scale factor of the FRW metric. In LTB models, the luminosity distance is written in terms of the transverse scale factor, which now also depends on the radial coordinate; in the FRW limit the two expressions coincide.
Table 1. Summary of the datasets used: each row lists a dataset (BAO, growth rate), the information it provides, the corresponding quantity in FRW/DE models and in LTB models, and the function reconstructed by the GA. In the BAO row the same fiducial value is used for both classes of models, with a matter-dominated background for the LTB; in the growth-rate row the LTB entry involves the mass radial function.
Defining the dimensionless luminosity distance as D_L(z) ≡ (H0/c) d_L(z), the theoretical value of the apparent magnitude is

m(z) = M̄ + 5 log10 D_L(z),

where the constant offset M̄ is marginalized over, as in Lazkoz:2007cc ().
The theoretical model parameters are determined by minimizing the quantity

χ²(a) = Σ_{i=1}^{N} [μ_obs(z_i) − μ_th(z_i; a)]² / σ_i²,

where N is the number of points in the SN Ia dataset, a is the set of parameters to be fitted, and the σ_i are the errors due to flux uncertainties, intrinsic dispersion of the SN Ia absolute magnitude and peculiar velocity dispersion. The theoretical distance modulus is defined as

μ_th(z) ≡ m(z) − M = 5 log10 D_L(z) + μ0,

where μ0 = 42.38 − 5 log10 h. More details on the usual minimization of (24) are described in Refs. Nesseris:2004wj (); Nesseris:2005ur (); Nesseris:2006er (). Therefore, by fitting the SnIa data with the Genetic Algorithm we directly reconstruct the luminosity distance d_L(z), which from now on we will refer to as the GA best-fit. At this point it should be noted that in determining the best-fit we make no a priori assumption about the curvature of the universe or any dark energy model; instead we allow the GA to determine the luminosity distance directly from the data, and the result is shown in Fig. 3. As can be seen, the prediction of the GA is very close to the best-fit flat ΛCDM model.
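For concreteness, the chain from H(z) to d_L(z) to the distance modulus can be sketched as follows, for a flat ΛCDM background with illustrative parameter values (H0 = 70 km/s/Mpc and Ω_m = 0.27 are assumptions, not fits to the data):

```python
import math

# Distance modulus for a flat LCDM background:
# d_L(z) = (1+z) * c * Integral_0^z dz'/H(z'),  mu = 5 log10(d_L/10pc).
c = 299792.458          # km/s
H0 = 70.0               # km/s/Mpc
Om = 0.27

def H(z):
    return H0 * math.sqrt(Om * (1 + z) ** 3 + 1 - Om)

def d_L(z, steps=1000):
    # simple trapezoidal integration of the comoving distance
    dz = z / steps
    integral = sum(0.5 * (1 / H(i * dz) + 1 / H((i + 1) * dz)) * dz
                   for i in range(steps))
    return (1 + z) * c * integral   # Mpc

def mu(z):
    # mu = 5 log10(d_L / 10 pc) = 5 log10(d_L / Mpc) + 25
    return 5 * math.log10(d_L(z)) + 25

print(mu(0.5))  # ~42.3 for these parameters
```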
III.2 Baryon Acoustic Oscillations
The BAO data are usually given, in FRW models, in terms of the parameter d_z ≡ r_s(z_d)/D_V(z), where r_s(z_d) is the BAO scale at the drag redshift, assumed known from CMB measurements, and Percival:2009xn ()

D_V(z) = [(1+z)² d_A(z)² c z / H(z)]^{1/3}

is the usual volume distance. From this it is easy to see that at small redshifts D_V(z) scales as c z/H0.
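For a concrete flat ΛCDM background (illustrative parameters, not fits to the data), the volume distance and its low-redshift limit can be sketched as:

```python
import math

# BAO volume distance D_V(z) = [(1+z)^2 d_A(z)^2 * c z / H(z)]^(1/3)
# for a flat LCDM background with illustrative parameters.
c, H0, Om = 299792.458, 70.0, 0.27

def H(z):
    return H0 * math.sqrt(Om * (1 + z) ** 3 + 1 - Om)

def d_C(z, steps=1000):
    # comoving distance via trapezoidal integration
    dz = z / steps
    return c * sum(0.5 * (1 / H(i * dz) + 1 / H((i + 1) * dz)) * dz
                   for i in range(steps))

def D_V(z):
    d_A = d_C(z) / (1 + z)                       # flat-space angular diameter distance
    return ((1 + z) ** 2 * d_A ** 2 * c * z / H(z)) ** (1 / 3)

print(D_V(0.35))                                 # Mpc
print(D_V(0.01), c * 0.01 / H0)                  # low-z limit approaches c z / H0
```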
In LTB models one has to take into account the effect of the inhomogeneous metric, which induces an extra factor. Therefore, for these models Zumalacarregui:2012pq ()
using a suitable early-time BAO scale.
In this analysis we use the 6dF, the SDSS and WiggleZ BAO data shown in Table 2. The WiggleZ collaboration Blake:2011en () has measured the baryon acoustic scale at three different redshifts, complementing previous data at lower redshift obtained by SDSS and 6DFGS Percival:2009xn ().
The chi-square is given by
In order to simplify the analysis we will actually use the function
where the function is to be determined by the Genetic Algorithm, and an extra factor was introduced in order to remove the singularity at low redshift. The reason we use a constant fiducial value of the matter density is twofold: first, one should note that it just shifts the reconstructed function vertically, and the difference from the “real” value can easily be accommodated by the GA; second, we also want to use the best-fit to reconstruct the LTB model as well, which is actually a matter-dominated model. In Fig. 4 we show the GA best-fit compared to ΛCDM.
III.3 The growth rate of density perturbations
The growth-rate data are given in terms of the combination of parameters fσ8(z) ≡ f(z)·σ8(z), where f ≡ d ln δ/d ln a is the growth rate of matter perturbations and δ(a) is the linear growth factor which, for models where dark energy has no anisotropic stress, satisfies the following ODE (primes denoting derivatives with respect to the scale factor a):

δ''(a) + [3/a + E'(a)/E(a)] δ'(a) − (3 Ω_m)/(2 a⁵ E²(a)) δ(a) = 0,

with E(a) ≡ H(a)/H0. The solution can be expressed in terms of a hypergeometric function, defined by the series

₂F₁(a, b; c; z) = Σ_{n=0}^{∞} [(a)_n (b)_n / (c)_n] zⁿ/n!

on the disk |z| < 1 and by analytic continuation elsewhere; see Ref. handbook () for more details. Also, σ8(z) corresponds to the rms mass fluctuation in spheres of radius 8 h⁻¹ Mpc and is given by

σ8(z) = σ8 δ(z)/δ(0),

where σ8 is its present value. From the definitions of f and σ8(z) it is easy to prove that

fσ8(a) = (σ8/δ(a=1)) · a · δ'(a).
The actual data used in the present analysis and their corresponding references are shown in Table 3. We should note that some of the data in Table 3 come from surveys that overlap in the same volumes of space, and this would in principle induce correlations between the data points. More specifically, the proper way to analyze these data would be to take into account their covariance matrix and calculate a total χ², as was the case for the BAO data mentioned above. However, since this covariance matrix is not currently available (the full covariance matrix between growth-rate data points at different redshifts will in principle be available in future surveys like DES), we perform the analysis by using the data “as is”, i.e. without a covariance matrix.
Finally, by applying the GA to the data we directly determine fσ8(z). In Fig. 5 we show the difference between the GA best-fit and the best-fit ΛCDM model. The gray shaded region represents the error of the GA best-fit. In Fig. 6 we show the evolution of fσ8 in terms of the scale factor together with its error region (gray shaded area). We also show the predictions for several values of the model parameters (indicated by the arrows). The two vertical dotted lines indicate the region where we trust our reconstruction the most. Notice that there is a sweet spot (indicated by an arrow) due to degeneracies in fσ8 for the constant equation of state model: even if we had data at these high redshifts, this would limit their ability to exclude parts of the parameter space. This justifies and enhances the importance of our bias-free reconstruction using GAs. Finally, the best region in which to look for deviations from these models, and perhaps to detect modifications of gravity, is clearly the one where, as can be seen in Fig. 6, the difference between the various models and the extrapolation of the GA best-fit becomes maximal.
In Fig. 7 we show the normalized growth factor δ(a)/δ(a=1). The dashed line corresponds to a flat matter-dominated model, the solid black line to a ΛCDM model, the dotted line to an open OCDM model and the black dot-dashed line to the best-fit GA model. The gray shaded region corresponds to the error region; the reason it looks rather squashed is that, as seen in Eq. (37), we had to normalize the growth factor to unity at the present time.
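The growth computation described in this subsection can be sketched numerically; the parameter values below are illustrative, and the RK4 integrator and matter-era initial conditions are our own choices:

```python
# Linear growth factor delta(a) in flat LCDM, integrating the ODE quoted above
# with classical RK4 from deep in matter domination (where delta ~ a).
Om, sigma8 = 0.27, 0.8

def E2(a):                        # E^2 = H^2/H0^2 for flat LCDM
    return Om / a ** 3 + (1 - Om)

def rhs(a, y):
    d, dp = y
    dE2 = -3 * Om / a ** 4        # d(E^2)/da, so E'/E = dE2 / (2 E^2)
    dpp = -(3 / a + dE2 / (2 * E2(a))) * dp + 1.5 * Om / (a ** 5 * E2(a)) * d
    return (dp, dpp)

a, y = 1e-3, [1e-3, 1.0]          # matter-era initial conditions: delta = a
n = 20000
h = (1.0 - a) / n
for _ in range(n):                # fourth-order Runge-Kutta steps
    k1 = rhs(a, y)
    k2 = rhs(a + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
    k3 = rhs(a + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
    k4 = rhs(a + h, [y[i] + h * k3[i] for i in range(2)])
    y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]
    a += h

d1, dp1 = y
f0 = dp1 / d1                     # f = dln(delta)/dln(a) at a = 1
fs8_0 = sigma8 * f0               # f*sigma8 today
print(f0, fs8_0)                  # f0 close to the Om(z=0)^0.55 approximation
```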
IV Method and results
In the next subsections we will apply our GA method to various models, from generic FRW models of Dark Energy to Large Voids of Gigaparsec size with an approximate LTB metric, and to modified gravity theories. We will write the various observables as specific functions of redshift, and will then constrain their functional form with the GA fit that best describes the observational data.
IV.1 Generic FRW/DE models
Taking into account all the ways we can extract information from the data, as discussed in the previous section, we now perform the exercise of reconstructing an FRW Dark Energy model. In particular, we are interested in the reconstruction of the Dark Energy equation of state w(z) and the deceleration parameter q(z). The equation of state is given by

w(z) = [(2/3)(1+z) H'(z)/H(z) − 1] / [1 − Ω_m (1+z)³ H0²/H²(z)],
which can also be written as
while the deceleration parameter is given by

q(z) = (1+z) H'(z)/H(z) − 1.
In order to accommodate other models like the LTB, we will also follow a more geometrical approach and define the deceleration parameter as
where θ is the trace of the congruence of geodesics and σ_{μν} (ω_{μν}) is the shear (vorticity) tensor of the same congruence, and we have used the Raychaudhuri equation in the second equality. In a FRW space-time both shear and vorticity vanish, but in an LTB model the background shear is non-zero, and the first term on the RHS of (4.4) is modified accordingly.
We will also consider the Om statistic, which is defined as Sahni:2008xx ()

Om(z) ≡ [H²(z)/H0² − 1] / [(1+z)³ − 1],

which is constant and equal to the matter density Ω_m only when H(z) corresponds to a ΛCDM model, but evolves with redshift in a characteristic manner for different DE models; see Ref. Sahni:2008xx () for more details. Therefore, it can be used as a diagnostic if H(z) is determined independently, as was shown in Ref. Nesseris:2010ep ().
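As a sketch, the defining property of the Om statistic is easy to verify numerically (the parameter values, including the constant equation of state w = −0.9, are illustrative choices):

```python
# The Om statistic: Om(z) = (H^2(z)/H0^2 - 1)/((1+z)^3 - 1). For flat LCDM it
# is constant and equal to Omega_m; for other DE models it evolves with z.
Om0, w = 0.27, -0.9   # illustrative parameters

def E2_lcdm(z):
    return Om0 * (1 + z) ** 3 + (1 - Om0)

def E2_wcdm(z):
    # constant equation of state w: rho_DE ~ (1+z)^(3(1+w))
    return Om0 * (1 + z) ** 3 + (1 - Om0) * (1 + z) ** (3 * (1 + w))

def Om_stat(z, E2):
    return (E2(z) - 1) / ((1 + z) ** 3 - 1)

for z in (0.1, 0.5, 1.0):
    print(z, Om_stat(z, E2_lcdm), Om_stat(z, E2_wcdm))
```

The first column of outputs stays exactly at Ω_m = 0.27, while the w ≠ −1 model drifts away from it, which is what makes Om a useful null test.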
Our reconstruction method requires an a priori known value of Ω_m for w(z), but fortunately this is not the case for q(z), which can be estimated by direct differentiation of the best-fit GA function. We show plots of all these important functions in Figs. 8, 9, 10 and 11. As is easily seen in Fig. 8, the results are consistent with a ΛCDM model; however, the reconstruction is much more precise in the case of q(z) than in that of w(z). Also, the Om statistic, while consistent with a ΛCDM model, suffers from large errors, especially at small redshifts, which is in agreement with the results found in Nesseris:2010ep ().
Therefore, out of all the different diagnostics, the deceleration parameter q(z) seems to provide the best constraints on the cosmology in our model-independent approach.