Confronting the relaxation mechanism for a large cosmological constant with observations
In order to deal with a large cosmological constant, a relaxation mechanism based on modified gravity has been proposed recently. By virtue of this mechanism the effect of the vacuum energy density of a given quantum field/string theory (no matter how large its initial value in the early universe) can be neutralized dynamically, i.e. without fine tuning, and hence a Big Bang-like evolution of the cosmos becomes possible. Remarkably, a large class of models of this kind, namely models capable of dynamically adjusting the vacuum energy irrespective of its value and size, has been identified. In this paper, we carefully put them to the experimental test. By performing a joint likelihood analysis we confront these models with the most recent observational data on type Ia supernovae (SNIa), the Cosmic Microwave Background (CMB), the Baryonic Acoustic Oscillations (BAO) and the high redshift data on the expansion rate, so as to determine which ones are most favored by observations. We compare the optimal relaxation models found by this method with the standard or concordance ΛCDM model, and find that some of these models may appear almost indistinguishable from it. Interestingly enough, this shows that it is possible to construct viable solutions to the tough cosmological fine tuning problem with models that display the same basic phenomenological features as the concordance model.
Current observations [1, 2] indicate that the cosmological constant (CC) Λ, or equivalently the vacuum energy density ρ_Λ, is non-vanishing and positive, and of order ρ_Λ^0 ∼ 10^{-47} GeV^4. This value is close to 70% of the present critical energy density in our universe, and is therefore very small in Particle Physics units (being equivalent to a mass density of a few protons per cubic meter). In itself, the inclusion of this tiny value as a cosmological term in the gravity action would not be a problem. However, modern fundamental physical theories suggest that large contributions to the classical CC parameter (in particular those sourced by quantum fluctuations of the matter fields) are induced, resulting in a huge initial CC value of order M_X^4 which is fed into the cosmos already in the phase transitions of the early universe, with M_X in the range between the electro-weak (EW) scale and the Planck scale. Even if "only" the EW scale of the Standard Model (SM) of Particle Physics were involved in the induced CC value, the discrepancy with respect to the observed value would entail some 55 orders of magnitude [footnote 1: Let us note that the QCD scale of the strong interactions could also trigger an induced vacuum energy density, which would still be dozens of orders of magnitude larger than the observed one. We have nevertheless taken the EW value because it is larger and is also considered a robust fundamental scale in the structure of the SM of Particle Physics]. This is of course preposterous. Although none of these vacuum contributions is directly supported by experimental evidence yet, there is hardly a theory beyond the SM that does not come with a large value of the CC induced by the huge vacuum energy density predicted by the theory, whose origin may lie both in the zero point vacuum fluctuations of the matter fields and in the spontaneous symmetry breaking of the gauge theories.
In principle, all models with scalar fields (in particular the SM with its Higgs boson sector) end up with a large vacuum energy density associated to their potentials. For example, from the SM Higgs potential we expect a contribution of order 10^8 GeV^4, which explains the aforementioned discrepancy of some 55 orders of magnitude and the corresponding 55th-digit fine tuning CC problem. The huge hierarchy between the predicted and the observed CC, the so-called "old CC problem", is one of the biggest mysteries of theoretical physics of all times. To avoid removing the huge initial value of the vacuum energy density by hand with an extremely fine tuned counterterm [footnote 2: See e.g. a detailed account of the old fine tuning CC problem in Sect. 2, and Appendix B, of Ref. ] (giving the theory a very unpleasant appearance), alternative dark energy models have been suggested. Typically, these models replace the large CC by a dynamical energy source [see  and references therein]. This seems reasonable for describing the observed late-time acceleration, but it generally still requires that the large CC has been removed somehow. In the end, it means that a fine tuned counterterm has been tacitly assumed.
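As an order-of-magnitude illustration (the inputs here are standard textbook values assumed for this estimate, not results of this paper's fits: a Higgs-sector vacuum energy |⟨V⟩| ∼ M_H² v²/8 with v ≃ 246 GeV, and ρ_Λ^0 ∼ 10^{-47} GeV^4), the mismatch reads

```latex
\frac{|\langle V \rangle|}{\rho_{\Lambda}^{0}}
  \sim \frac{M_H^2\, v^2 / 8}{10^{-47}\ \mathrm{GeV}^4}
  \sim \frac{10^{8}\ \mathrm{GeV}^4}{10^{-47}\ \mathrm{GeV}^4}
  \sim 10^{55}\,,
```

so canceling the induced term by a counterterm would require tuning its value to roughly 55 decimal places.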
It is encouraging that time-varying vacuum energy models inspired by the principles of QFT can be suggested and may hopefully provide an alternative and more efficient explanation of the dynamical nature of the vacuum energy. Recently, the analysis of some well motivated models of this kind versus the observations has shown that their phenomenological status is perfectly reasonable and promising [8, 9] – see also the recent works . However, in order to solve the old CC problem in this context, we need to focus on models that dynamically counteract the large initial vacuum energy density. The idea, in a nutshell, is: 1) to obtain an effective vacuum energy density (the measured one) which remains much smaller in absolute value than the initial one at all times and without fine tuning; 2) to ensure that it preserves the standard cosmic evolution (i.e. the correct sequence of radiation, matter and dark energy dominated epochs); and finally 3) to make sure that it is able to reproduce the measured value of the vacuum energy at present, again without fine tuning. Only in this way can we guarantee a low-curvature universe similar to ours, with current Hubble rate H_0 ∼ 10^{-42} GeV. The question of course is: is that program really possible?
Obviously, in order to follow this road successfully it is necessary to go beyond the common dark energy models and face a class of scenarios where the large CC is not hidden under the rug, so to speak, but is permanently considered a fundamental ingredient of the overall theory of the cosmos. In this work, we analyze a class (probably not unique) of models operating along this line, namely the CC relaxation mechanism based on modified gravity [4, 11, 12]. These models are intimately related to the family of so-called ΛXCDM models of the cosmic evolution (in which the dark energy component X is generally not a scalar field, but an effective quantity in the equations of motion which derives from the complete structure of the effective action). They might also be connected with mechanisms based on matter with an inhomogeneous equation of state . A first modified gravity approach implementing the relaxation mechanism was presented in . Furthermore, a related model in the alternative Palatini formalism has also been studied .
As stated, the class of models considered here is probably just a subset of a larger class of theories that can produce dynamical relaxation of the vacuum energy. Our work should ultimately be viewed as serving illustrative purposes, i.e. as providing a proof of existence that one can explicitly construct a relaxation mechanism, if only in a moderately realistic form [footnote 3: Remember that in the past the dynamical adjustment of the CC was attempted through scalar field models , but later on a general no-go theorem was formulated against them . Fortunately, this theorem can be circumvented by the relaxation mechanism, see  for details]. We cannot exclude that more sophisticated and efficient mechanisms will eventually be discovered which are much more realistic, but at the moment these models serve our illustrative purposes quite well. They constitute a potentially important step in the long fight of Theoretical Physics against the tough CC problem, especially in regard to the appalling fine tuning conundrum inherent to it. Additional work recently addressing the CC problem from various perspectives can be found e.g. in Refs. .
In contrast to late-time gravity modifications, in the relaxation mechanism an effective energy density is permanently induced by the modification of the standard gravity action, which dynamically counteracts (at any time of the post-inflationary cosmic history) the effect of the large vacuum energy density and provides a small net value (presumably close to the observationally measured one). As a result the universe has an expansion history similar, but not identical, to the concordance ΛCDM model. Therefore, it is important to investigate how the relaxation models perform with respect to observational constraints coming from the Cosmic Microwave Background (CMB) , type Ia supernovae (SNIa) , the Baryonic Acoustic Oscillations (BAO)  and the high redshift data on the expansion rate H(z) .
The paper is organized as follows: in Sect. 2 we discuss different aspects of the working principle of the CC relaxation mechanism in modified gravity. In Sect. 3 we present the numerical tools that we use to perform the statistical analysis, and in Sect. 4 we apply them to determine the optimal relaxation model candidates, with further insight into the details of the relaxation mechanism. In Sect. 5 we present predictions for the redshift evolution of the deceleration parameter and the effective equation of state, and compare them with the ΛCDM model. In the last section we deliver our conclusions. Finally, in an appendix we provide some useful formulae discussed in the text.
2 Vacuum relaxation in modified gravity
The general form of the complete effective action of the cosmological model, in terms of the Ricci scalar R and the Gauss-Bonnet invariant G, is given by
where L_m denotes the Lagrangian of the matter fields. This action represents the Einstein-Hilbert action extended by the term F(R,G) defining the modification of gravity. Moreover, we include all vacuum energy contributions from the matter sector in the large CC term.
From the above action the modified Einstein equations are derived by the variational principle. They read
with the Einstein tensor, the cosmological term, and the energy-momentum tensor of matter emerging from L_m.
We describe the cosmological background by the spatially flat FLRW line element, with scale factor a(t) and cosmological time t. Accordingly, the curvature invariants R and G can be expressed in terms of the Hubble expansion rate H = ȧ/a and the deceleration parameter q = -ä a/ȧ². The tensor components in (2) are given by
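For reference, on the spatially flat FLRW background the two invariants take the standard form (sign conventions may differ between references):

```latex
R = 6\left(\dot H + 2H^2\right) = 6\,H^2\,(1-q)\,, \qquad
G = 24\,H^2\left(\dot H + H^2\right) = -24\,q\,H^4\,,
```

where we used \dot H = -(1+q)H^2. Note that R vanishes in the radiation era (q = 1) and H itself becomes tiny in the late universe, which is what allows suitable combinations of these invariants in the denominator of (9) to become small in every epoch.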
where the subscripts stand for the partial derivatives of F with respect to R and G. The F-term induces an effective energy density ρ_F and pressure p_F, implying that the whole effective dark energy density and pressure each consist of two parts:
As announced, they follow the structure of the ΛXCDM model , as both are the sum of a cosmological term energy density (resp. pressure) and an extra contribution whose structure derives from the presence of new terms in the effective action. The local energy density ρ_F constitutes the "induced DE density" for this model. It is indeed induced by the new F-term of the modified gravity action (1). However, the only measurable DE density in our model is the effective vacuum energy in (5), i.e. the sum of the initial (arbitrarily large) vacuum energy density and the induced DE density; neither of the two is individually measurable. Let us note that the induced DE density is covariantly self-conserved, i.e. independently of matter (which is also conserved). Indeed, the Bianchi identity on the FLRW background leads to the local covariant conservation laws
which are valid for all the individual components (radiation, matter and vacuum) with energy density ρ_i and pressure p_i. This leads immediately to the usual expressions ρ_m ∝ a^{-3} and ρ_r ∝ a^{-4} for cold matter and radiation, respectively. As for the effective vacuum energy density, since ρ_Λ is a true CC term and is constant in time, it follows that the F-term density is self-conserved:
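Explicitly, for each self-conserved component the covariant conservation law on the FLRW background reads

```latex
\dot{\rho}_i + 3H\,(\rho_i + p_i) = 0
\quad\Longrightarrow\quad
\rho_m = \rho_m^0\, a^{-3} \ \ (p_m = 0)\,, \qquad
\rho_r = \rho_r^0\, a^{-4} \ \ (p_r = \rho_r/3)\,,
```

so only the F-term carries a non-trivial equation of state.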
where ω_F is the corresponding (non-trivial) effective equation of state (EoS) of the F-term. We shall come back to it in more detail in Sect. 5.
The cosmological evolution is therefore completely determined by the generalized Friedmann equation of our model,
where ρ_m denotes the energy density of the (self-conserved) matter, which is mostly dust-like for the purpose of this paper (as we focus mainly on the matter dominated and DE epochs).
Here two parameters of the model appear, together with the two numbers n and m (usually taken to be integers) characterizing a large class of functions F that can realize the relaxation mechanism. To understand how the mechanism works in modified gravity, let us express the denominator B of (9) in terms of H and q,
Taking into account that each derivative of F in Eqs. (3) and (4) increases the power of the denominator function B, it is easy to see that the induced energy density and pressure have at least one positive power of B in their denominators. Schematically, one can show that the induced density takes on the form
with time-dependent coefficient functions not containing the factor B. Notice that we need B in the denominator in order that ρ_F can increase with the expansion, i.e. when B is decreasing. In this case, the gravity modification in the action (1) will be able to compensate the large initial vacuum energy density in the field equations (2), whatever its value and size, by becoming large in a dynamical manner. It means that ρ_F can take the necessary value to compensate for the big initial vacuum energy, this being achieved automatically because B becomes small during the radiation dominated regime (q → 1), then also in the matter dominated era (q → 1/2), and finally again in the current universe and in the asymptotic future, where H becomes very tiny.
A few additional observations are in order. The fact that we search for our relaxation functionals among those built from R and G is because they are better behaved. Indeed, let us recall that general modified gravity models built from arbitrary curvature invariants are generally problematic as far as ghosts and other instabilities are concerned. This is because these theories introduce new degrees of freedom, which potentially lead to instabilities not present in general relativity (GR). They may e.g. suffer from the Ostrogradski instability, i.e. they may involve vacuum states of negative energy. In general they contain gravitational ghosts and can lead to various types of singularities. These issues are discussed e.g. in  and are reviewed in . Fortunately, some of these problems can be avoided by specializing to functionals involving only the Ricci scalar R and the Gauss-Bonnet invariant G. The reason is that all functionals of this form are ghost free , a property which is shared by our functionals.
Finally, these theories are thought to have a more reasonable behavior in the solar system limit [24, 25]. A detailed study of the astrophysical consequences of our model was presented in . It shows that a huge cosmological constant can also be relaxed in astrophysical systems such as the solar system and the galactic domain. These studies were performed by solving the field equations in the Schwarzschild-de Sitter metric in the presence of an additional relaxation term. The latter operates the relaxation of the huge CC at astrophysical scales in a similar way as the F-term in equation (9) does in the cosmological domain. One can show that neither of these terms has any significant influence in the region of dominance of the other. Therefore a huge CC can be reduced both at local astrophysical scales and at the cosmological one at large. The solution in the Schwarzschild-de Sitter metric merges asymptotically with the cosmological solution, where we recover the original framework presented in . Furthermore, it was shown in  that there are no additional (long range) macroscopic forces that could measurably correct the standard Newton law. This is a consequence of the dynamical relaxation mechanism and is in contrast to ordinary modified gravity models. The upshot is that the implementation of the relaxation mechanism should not be in conflict with local gravitational experiments. At the same time, we expect small deviations from GR only at scales of a few hundred kpc at least, which is in accordance with other authors . As this scale is much larger than the scale at which star systems test GR, the physics of these local astrophysical objects should not be affected.
2.1 Physical scales for the relaxation mechanism
Remarkably, in order to ensure that the relaxation condition holds at all times, we need not fine tune the value of any parameter of the model. It is only necessary that the relevant parameter in the class of invariants (9) has the right order of magnitude and sign. This requirement has nothing to do with fine tuning, as fine tuning means adjusting by hand the value of the parameter to some absurd precision such that both (big) terms almost cancel each other in (5). This is not necessary at all for our mechanism to work because it is dynamical: it is the evolution itself of the universe through the generalized Friedmann equation (8) that drives the F-term density towards almost canceling the huge initial vacuum energy in each relevant stage of the cosmic evolution (i.e. in the radiation dominated, matter dominated and dark energy dominated epochs). In this way we can achieve the natural relation
for some value of H sufficiently close to H_0. The smallness of the observed vacuum energy density as compared to the huge initial one thus follows from the right-hand side of (12) being suppressed by the ratio of the present critical density to its initial value in the early universe, which was of course of order M_X^4. By working out the explicit structure of (11) from (3), we can easily convince ourselves that all terms end up roughly in the same form. Although we are omitting other terms here, we adopt provisionally that expression for the sake of simplicity (see the next section for more details). Within this simplified setup, it follows that the value of H that solves equation (12) – which, as we said, should be close enough to the current value H_0 – is approximately given by
We see that this mechanism also provides a natural explanation for the current value of H being so small: the reason is simply that ρ_Λ is very large! Moreover, we can make H_0 of the order of the measured value provided the mass scale M has the right order of magnitude (without any fine tuning). Notice that the power mass dimension of the relevant coefficient for these models is
and hence it follows that M is related to the initial vacuum energy through
For the simplest model, if ρ_Λ is near M_X^4 we obtain M around the characteristic meV scale associated to the current CC density, (ρ_Λ^0)^{1/4} ∼ 10^{-3} eV. On the other hand, for the next models in the list we find in both cases that M lies in a typical Particle Physics range, with M_X in the ballpark of the GUT scale (∼ 10^{16} GeV) up to the Planck mass M_P ≃ 1.2 × 10^{19} GeV. This is certainly a possible mass scale in the SM of Particle Physics.
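The "meV scale" quoted above can be checked numerically. The following sketch uses standard approximate input values (H_0 ∼ 70 km/s/Mpc, Ω_Λ ≈ 0.7, both assumed here for illustration, not fitted results of this paper):

```python
# Order-of-magnitude check of the meV scale: M ~ (rho_Lambda^0)^(1/4).
import math

H0_GeV   = 1.5e-42        # H0 ~ 70 km/s/Mpc expressed in GeV (natural units)
M_Planck = 1.22e19        # Planck mass in GeV
Omega_L  = 0.7            # assumed dark-energy fraction of the critical density

rho_crit   = 3.0 * H0_GeV**2 * M_Planck**2 / (8.0 * math.pi)  # GeV^4
rho_Lambda = Omega_L * rho_crit                               # GeV^4

M_meV = rho_Lambda**0.25 * 1.0e12   # fourth root in GeV, converted to meV
print(f"rho_Lambda^0 ~ {rho_Lambda:.1e} GeV^4, M ~ {M_meV:.1f} meV")
```

The output confirms ρ_Λ^0 ∼ 10^{-47} GeV^4 and a characteristic scale of a few meV.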
Notice that equations (13) and (15) have been derived under a simplifying assumption, so the conclusion that the current value of H is small because ρ_Λ is large cannot be inferred from equation (13) when that assumption fails. Still, we shall see in the next section that a more accurate treatment of the models leads once more to the conclusion that the current H is small. What else can be learnt from equation (15)? Notice that it can be rewritten as
with the various scales expressed in GeV (for some integers n and m). Now, with the GUT scale near 10^{16} GeV, the admissible exponents are constrained, although we should keep the less standard possibilities in mind as well. Since, however, the relaxation scale should not be too tiny [footnote 4: If it could be very small, we would stumble upon the same tiny mass problem that afflicts e.g. all quintessence-like models , where extremely small mass scales are commonplace. This is one, though certainly not the only one, of the most serious drawbacks of these models, another being of course the need of extreme fine-tuning. In our case, however, we aim precisely at avoiding both of these severe problems], it is natural to assume that it is not much smaller than the typical meV scale associated to the CC density, which bounds the exponents from above. The upshot of this (order of magnitude) consideration is that, with the exponents bounded in an approximate finite interval, we should expect in general that the following combined "natural relaxation condition" is fulfilled:
in which both naturalness of the mass scales and relaxation of the vacuum energy are ensured. In other words, the above argument suggests that if n and m are taken as integers, we cannot assume large values for them (except in the degenerate equal-exponent case discussed below) because the relation (17) could not be fulfilled. This is of course a welcome feature, because it suggests that only the canonical cases are natural candidates – the last case of this series being the one which comes closest to the upper bound imposed by the relation (17), though still below it. The next model in the list perhaps stretches that bound a bit too much, so for definiteness we limit the number of candidate models to the first five in the list. Of course the models with equal exponents would do for arbitrary order, but again it is natural to focus only on the canonical representative of this class. In general, the equal-exponent class is special because, as we can see from equation (16), it requires the relaxation scale to coincide with the initial vacuum energy scale. Therefore, if M_X ∼ 10^{16} GeV is a typical large GUT scale, then the relaxation scale must coincide with it. We may think of these models as the class of "no-scale" relaxation models inasmuch as they do not introduce any new scale beyond the one associated to the initial vacuum energy itself. Also interesting about the "no-scale" models is that if there is no GUT scale above the SM of Particle Physics, i.e. if M_X turns out to be just the electroweak scale ∼ 10^2 GeV, then the relaxation mass parameter will naturally take on an order of magnitude characteristic of the electroweak gauge boson masses, for all these relaxation models. Remarkably, this is the only situation where we can accommodate the electroweak scale in the relaxation mechanism.
To summarize, even though we have in principle five canonical candidate models satisfying the natural relaxation condition (17), on the whole they involve only two new physical scales for the dynamical adjustment of the cosmological constant: two of the models are linked to the mass scale inherent to the current CC density, of order 10^{-3} eV, whereas two others are associated to a scale of order MeV, which is a typical Particle Physics scale within the SM of electroweak and strong interactions. Thus both scales are quite natural ones. The fifth model (and, for that matter, the entire equal-exponent class) is a "no-scale" model which is able to operate the relaxation mechanism by using the very same scale as the one associated to the initial vacuum energy, irrespective of whether it is a huge GUT scale or just the electroweak scale of the SM.
As for the remaining dimensional parameter in (10), let us note that it is basically irrelevant for the present discussion. In Ref. , this parameter played a role only in the radiation dominated epoch and did not require any fine tuning either; it just had to take a value within an order of magnitude. In practice, the whole structure of the last term in equation (10) was motivated by considerations related to obtaining a smooth transition from the radiation to the matter dominated epoch. Since, however, we are now comparing the class of models (9) with the observations, all the relevant data to which we have some access belong to the matter dominated epoch until the present time (apart from a correction from the radiation term in the case of the CMB data, which has been duly taken into account, see Sect. 3). We have indeed confirmed numerically that the last term on the r.h.s. of equation (10) is unimportant for the present analysis.
2.2 A closer look at the general structure of the induced DE density
Having presented the simplest version of the relaxation mechanism, it seems appropriate to discuss further the general behavior of the F-density in order to better understand the relaxation mechanism in the different cases. Following the considerations exposed at the end of the last section, hereafter we drop the last term of equation (10). We start by considering the general structure of the F-density for an arbitrary model which, as we know, can be cast as in equation (11). However, by dimensional analysis and the explicit structure of (3) and (10), it is not difficult to convince oneself that it can eventually be brought into the more specific form:
Here the coefficients are functions which differ from model to model, but in all cases they are rational functions of the deceleration parameter with a (multiple) pole at its matter-era value (q = 1/2), where for convenience we have defined the following expression that appears in the calculation:
For specific realizations of equation (18), see Appendix A. Let us note that because of this pole all these models have an unstable fixed point in the matter dominated epoch, which is responsible for the universe approaching it for a long while during that epoch and which at the same time dynamically enhances the value of ρ_F. As a result the huge value of the vacuum energy can be relaxed from its initial value to the effective tiny one, which can be identified with the measured value at the present time. When that pole is left behind during the cosmic evolution, the relaxation mechanism can still work effectively provided the relevant combination tends to a very small value in the present universe. Notice that there are two terms in equation (18) that cooperate to this end: one of them is the overall factor, which contributes to the relaxation mechanism provided the exponent inequality required by the relation (17) holds; the other is the term in the parentheses. For all the models with unequal exponents, the first term suffices, and in this way we recover the result (13) which we had sketched in the simplified exposition of Sect. 2.1.
However, for the equal-exponent models the simplified argument of Sect. 2.1 cannot be applied. The relevant parameter of this class of "no-scale" functionals is the same one as that associated to the initial vacuum energy. As previously noticed, this is the only case where the scale defined in (14) could be identical to the initial vacuum energy scale of the GUT in the early universe, a very interesting possibility in that the mechanism for canceling the large vacuum energy density of the early universe would then naturally be operated by the very same scale pertaining to the GUT transition at that time. For these models, the factor inside the parentheses in equation (18) takes its turn in the relaxation mechanism. Noting that equation (18) now simplifies to
it follows that the relaxation condition (12) for the present universe enforces the current Hubble rate to take the value
In this equation we have used the fact that, for these models, the relevant mass scale is of the same order of magnitude as the initial vacuum energy scale, so that the two factors approximately cancel out. Finally, the remaining dimensionless function is basically a number which can be taken of order one in this kind of consideration. Therefore, we conclude that the predicted value of the current expansion rate for these models can naturally be of the observed order. So, again, the relaxation mechanism works and requires a very small value for the present Hubble rate.
3 Likelihood analysis from CMB, SNIa, BAO and H(z) data
In the following we briefly present some details of the statistical method and of the observational samples that will be adopted to constrain the relaxation models presented in the previous section. First of all, we use the Constitution set of 397 type Ia supernovae of Hicken et al. . In order to avoid possible problems related to the local bulk flow, we use a subsample of 366 SNIa, excluding the lowest-redshift objects. The corresponding χ² function, to be minimized, is:
where the observed scale factor of the Universe for each data point is related to the corresponding (measured) redshift by a = 1/(1+z). The fitted quantity is the distance modulus, defined as the difference between the apparent and absolute magnitudes, μ = m - M = 5 log₁₀(d_L/10 pc), where d_L is the luminosity distance:
with c the speed of light and a vector containing the cosmological parameters of our model that we wish to fit. In equation (22), the theoretically calculated distance modulus for each point follows from using (23), in which the Hubble function is given by the generalized Friedmann equation (8). Finally, the measured distance modulus and the corresponding uncertainty for each SNIa data point enter the χ² sum. The previous formula (23) for the luminosity distance applies only to spatially flat universes, which we assume throughout.
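As an illustrative sketch (not the actual fitting code of the paper), the luminosity distance (23) and the SNIa χ² (22) can be evaluated numerically once a Hubble function is supplied. Here a flat ΛCDM-like E(z) with assumed values Ω_m = 0.27 and H_0 = 70 km/s/Mpc stands in for the relaxation model's H(z), which in the real analysis comes from the generalized Friedmann equation (8):

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def E(z, Om=0.27):
    """Normalized Hubble rate H(z)/H0; LambdaCDM-like stand-in."""
    return np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

def lum_dist(z, H0=70.0, Om=0.27, n=2000):
    """Luminosity distance in Mpc for a spatially flat universe, Eq. (23)."""
    zp = np.linspace(0.0, z, n)
    y = 1.0 / E(zp, Om)
    integral = np.sum((y[1:] + y[:-1]) * np.diff(zp)) / 2.0  # trapezoid rule
    return (1.0 + z) * (C_KMS / H0) * integral

def dist_modulus(z, **kw):
    """Distance modulus mu = 5 log10(d_L / 10 pc), with d_L in Mpc."""
    return 5.0 * np.log10(lum_dist(z, **kw)) + 25.0

def chi2_snia(z_obs, mu_obs, sigma_mu, **kw):
    """SNIa chi^2 of Eq. (22)."""
    mu_th = np.array([dist_modulus(z, **kw) for z in z_obs])
    return float(np.sum(((np.asarray(mu_obs) - mu_th)
                         / np.asarray(sigma_mu)) ** 2))
```

For these assumed parameters, dist_modulus(0.5) comes out around 42.3 mag, in the usual range for SNIa Hubble diagrams.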
On the other hand, a very interesting geometrical probe of dark energy is provided by measures of H(z) from the differential ages of passively evolving galaxies [hereafter H(z) data; footnote 5: A fixed fiducial value of the current Hubble constant is used here]. This sample of galaxies contains 11 entries spanning a wide redshift range, and the corresponding χ² function can be written as:
Note that the latter set of data has recently become interesting and competitive for constraining dark energy, see e.g. .
In addition to the SNIa and data, we also consider the BAO scale produced in the last scattering surface by the competition between the pressure of the coupled baryon-photon fluid and gravity. The resulting acoustic waves leave (in the course of the evolution) an overdensity signature at certain length scales of the matter distribution. Evidence of this excess has been found in the clustering properties of the SDSS galaxies (see , ) and it provides a “standard ruler” that we can employ to constrain dark energy models. In this work we use the measurement derived by Eisenstein et al. . In particular, we utilize the following estimator
with E(z) = H(z)/H_0 the normalized Hubble rate. The previous estimator is measured from the SDSS data to be A = 0.469 ± 0.017. Therefore, the corresponding χ² function can be written as:
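A hedged numerical sketch of the estimator (25): the acoustic parameter of Eisenstein et al. for a spatially flat universe, again with a ΛCDM-like E(z) as a stand-in for the relaxation model's Hubble rate, and with the commonly quoted SDSS values A = 0.469 ± 0.017 (assumptions here, not outputs of this paper's fit):

```python
import numpy as np

def E(z, Om=0.27):
    # normalized Hubble rate (LambdaCDM-like stand-in for illustration)
    return np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

def bao_A(Om=0.27, z1=0.35, n=2000):
    """Acoustic parameter A(z1) for a flat universe (Eisenstein et al. form)."""
    zp = np.linspace(0.0, z1, n)
    y = 1.0 / E(zp, Om)
    integral = np.sum((y[1:] + y[:-1]) * np.diff(zp)) / 2.0  # trapezoid rule
    return np.sqrt(Om) * E(z1, Om) ** (-1.0 / 3.0) * (integral / z1) ** (2.0 / 3.0)

A_OBS, SIGMA_A = 0.469, 0.017      # assumed SDSS measurement
chi2_bao = ((bao_A() - A_OBS) / SIGMA_A) ** 2
print(f"A = {bao_A():.3f}, chi2_BAO = {chi2_bao:.2f}")
```

For Ω_m = 0.27 the predicted A lies within one sigma of the assumed measurement, which is why BAO mainly constrains the matter density rather than the relaxation parameters.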
Finally, a very accurate and deep geometrical probe of dark energy is the angular scale of the sound horizon at the last scattering surface, as encoded in the location of the first peak of the Cosmic Microwave Background (CMB) temperature perturbation spectrum. This probe is described by the CMB shift parameter [30, 31], defined as:
The measured shift parameter according to the WMAP 7-year data  is R = 1.725 ± 0.018, at the redshift of the last scattering surface, z_ls = 1091.3. In this case, the χ² function is given by:
Let us note that when dealing with the CMB shift parameter we have to include both the matter and the radiation terms in the total normalized energy density entering the E(z) function in (27):
Indeed, the radiation density includes the photon and relativistic neutrino contributions, with the standard number of light neutrino species. Therefore, at the last scattering redshift and for three light neutrino species, the radiation contribution amounts to a sizable fraction of the total energy density associated to matter. For a detailed discussion of the shift parameter as a cosmological probe, see e.g. .
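To gauge the size of this correction, the following back-of-the-envelope estimate (standard input values assumed for illustration, not taken from the paper's fit) evaluates the radiation-to-matter density ratio at last scattering:

```python
# Radiation correction at last scattering: rho_r/rho_m scales as (1+z).
h      = 0.70                      # assumed reduced Hubble constant
N_nu   = 3.04                      # effective number of neutrino species
Og_h2  = 2.469e-5                  # photon density Omega_gamma h^2 (standard)
Or_h2  = Og_h2 * (1.0 + 0.2271 * N_nu)   # photons + relativistic neutrinos
Om_h2  = 0.27 * h**2               # assumed matter density Omega_m h^2

z_ls   = 1091.3                    # last scattering redshift (WMAP7)
ratio  = (Or_h2 / Om_h2) * (1.0 + z_ls)
print(f"rho_r/rho_m at z_ls ~ {ratio:.2f}")  # a few tens of percent
```

The ratio comes out at a few tens of percent, so neglecting radiation in E(z) would visibly bias the shift parameter.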
In order to place tighter constraints on the corresponding parameter space of our model, the probes described above must be combined through a joint likelihood analysis [footnote 6: Likelihoods are normalized to their maximum values. In the present analysis we always report the uncertainties on the fitted parameters. Note also that the total number of data points used here is 366 + 11 + 1 + 1 = 379, while the associated number of degrees of freedom is this total minus the model-dependent number of fitted parameters], given by the product of the individual likelihoods according to:
which translates into a sum for the joint χ² function:
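A minimal sketch of the joint minimization: the four χ² terms are simply added and the sum is scanned over a grid of the two free parameters (step 0.001, as quoted below). The quadratic χ² surfaces and the parameter names p1, p2 are toy placeholders; in the real analysis each term comes from its respective data set:

```python
import numpy as np

# Toy quadratic chi^2 surfaces standing in for the four estimators.
def chi2_snia(p): return ((p[0] - 0.28) / 0.05) ** 2
def chi2_hz(p):   return ((p[0] - 0.25) / 0.10) ** 2
def chi2_bao(p):  return ((p[1] - 0.50) / 0.08) ** 2
def chi2_cmb(p):  return ((p[0] - 0.27) / 0.02) ** 2 + ((p[1] - 0.52) / 0.05) ** 2

def chi2_total(p):
    return chi2_snia(p) + chi2_hz(p) + chi2_bao(p) + chi2_cmb(p)

# Brute-force grid scan in steps of 0.001.
grid = np.arange(0.0, 1.0, 0.001)
P1, P2 = np.meshgrid(grid, grid, indexing="ij")
chi2 = chi2_total((P1, P2))
i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
print(f"best fit: p1 = {grid[i]:.3f}, p2 = {grid[j]:.3f}, "
      f"chi2_min = {chi2[i, j]:.2f}")
```

The best fit lands at the inverse-variance-weighted combination of the individual minima, which is exactly the degeneracy-breaking effect described for the SNIa/CMB combination in Sect. 4.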
In order to proceed with our minimization procedure, we would like to reduce the free parameter space as much as possible by imposing a prior; specifically, we fix the value of the current mass density parameter Ω_m^0. In principle, Ω_m^0 is constrained by the maximum likelihood fit to the WMAP and SNIa data in the context of the concordance cosmology, but in the spirit of the current work we want to use measures which are completely independent of the dark energy component. An estimate of Ω_m^0 without conventional priors is not an easy task in observational cosmology. However, various authors, using mainly large scale structure studies, have attempted to put constraints on this parameter. In a rather old paper, Plionis et al.  used the motion of the Local Group with respect to the cosmic microwave background. From the analysis of the power spectrum, Sanchez et al.  obtained a compatible value. Moreover, the analyses of the peculiar velocity field in the local Universe in  and  yield consistent results. In addition, the authors of Ref. , based on the cluster mass-to-light ratio, constrain the allowed interval (see also  for a review). Therefore, there are strong independent indications for a value of Ω_m^0 close to the concordance one, and in order to compare our results with those of the flat ΛCDM model we restrict our present analysis to that fixed choice. With Ω_m^0 fixed, the corresponding parameter vector contains only two free parameters. Note that we sample them in steps of 0.001.
4 Identifying the best relaxation models from observation
In order to select the optimal relaxation models , defined in Eq. (9), from the phenomenological point of view, we are going to perform a likelihood analysis along the lines described in the previous section. Specifically, we will use a two-parameter fit of our models. Namely, for any given model with fixed and , we have to look for the best fit values for and .
4.1 Numerical solution of the models
Remember that the parameters and are necessary in order to establish the initial conditions for solving the generalized Friedmann equation (8), in which is a complicated function of the form . Using one can see that and hence the generalized Friedmann equation, despite its innocent appearance, becomes a third-order differential equation in the scale factor (although equation (4) indicates that, in principle, the field equations are of fourth order, the self-conservation of , see (7), and hence also of the effective vacuum energy , enables us to reduce the order of the field equations by one unit). Therefore, since the current is known, we need to input and for any given relaxation model . Notice also that the initial values of and must be consistent with the current value of , which in our case is identified with
for flat space cosmology. For the reasons indicated in Sect. 2.1, we will limit ourselves to analyzing the five canonical models , , and using the combined likelihood method described in the previous section. We present a sample of the likelihood contours in the plane for the individual sets of data on SNIa, the CMB shift parameter, BAO and $H(z)$ in Figs. 1-3, whereas in Fig. 4 we display the combined likelihood contours for all five models. A numerical summary of the statistical analysis for these models is shown in Table 1. In the next section, we provide more details of this analysis and discuss the obtained results.
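To make the numerical procedure concrete, here is a minimal sketch of how a third-order equation for the scale factor can be integrated once it is reduced to a first-order system; the right-hand side `f` is a placeholder for the model-dependent expression that follows from the generalized Friedmann equation (8), not the actual one used in this paper:

```python
def integrate_scale_factor(f, a0, da0, dda0, t0, t1, n=1000):
    """Integrate a'''(t) = f(t, a, a', a'') with classical RK4.

    The third-order equation is reduced to the first-order system
    y = (a, a', a''),  y' = (a', a'', f(t, a, a', a'')),
    with initial conditions fixed at t0 (here playing the role of
    the consistency conditions on H and its derivatives today).
    """
    h = (t1 - t0) / n
    y = [a0, da0, dda0]
    t = t0

    def deriv(t, y):
        a, da, dda = y
        return [da, dda, f(t, a, da, dda)]

    for _ in range(n):
        k1 = deriv(t, y)
        k2 = deriv(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(3)])
        k3 = deriv(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(3)])
        k4 = deriv(t + h, [y[i] + h * k3[i] for i in range(3)])
        y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(3)]
        t += h
    return y[0]  # a(t1)

# Toy check: a''' = 0 with a(0)=1, a'(0)=2, a''(0)=6
# has the exact solution a(t) = 1 + 2t + 3t^2, so a(1) = 6.
a1 = integrate_scale_factor(lambda t, a, da, dda: 0.0,
                            1.0, 2.0, 6.0, 0.0, 1.0)
```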
4.2 The Statistical Results
In the upper left panel of Fig. 1 we present the results of our analysis for the model in the plane. The individual contours for each observable are plotted only for the $1\sigma$ and $2\sigma$ confidence levels in order to avoid confusion. In particular, the SNIa-based results are indicated by thin solid lines, the $H(z)$ results by thick dot-dashed lines, the BAO results by dotted lines and those based on the CMB shift parameter by thin dashed lines. The remaining panels show the statistical results for SNIa/BAO, SNIa/$H(z)$ and SNIa/CMB. Using the SNIa/BAO data alone it is evident that the parameter is unconstrained within errors. However, within errors we can put some constraints (the best fit values are and ). On the other hand, utilizing the SNIa/$H(z)$ data the best fit parameters are partially constrained within : and . As can be seen in the lower right panel of Fig. 1, the above degeneracy is broken when using the SNIa/CMB data, and the best fit parameters practically coincide with those of the joint likelihood analysis involving all the cosmological data. Indeed, for the model we find that the overall likelihood function peaks at and , with for degrees of freedom.
In the case of the “no-scale” model we find that only the combined SNIa/CMB data can put constraints on the free parameters (see Fig. 2), while the overall likelihood analysis provides and , with for degrees of freedom. Concerning the model (see Fig. 3), we find that the corresponding statistical results are in very good agreement with those of the model. Indeed, the SNIa/BAO data put some constraints within errors, and , while using the SNIa/$H(z)$ data we find and . Again we observe that the joint likelihood function (mainly due to SNIa/CMB) peaks at and , with for degrees of freedom. As for the model, we find again that the SNIa/BAO comparison does not place constraints on the free parameters, while the SNIa/$H(z)$ data put some constraints (even within ): and . As before, the free parameters of the model are tightly constrained by the SNIa/CMB data. The joint likelihood function peaks at and , with for degrees of freedom. Finally, for the model we find that the SNIa/$H(z)$ comparison implies and , while using the joint likelihood analysis we find and , with for degrees of freedom. Although we do not present individual likelihood contours for the and models, their overall likelihood contours can be seen in Fig. 4, together with those of the other models.
The summarized analysis of all five canonical relaxation models is presented in Table 1 and in Fig. 4. In this combined figure we plot the overall likelihood contours for the five models under consideration. It provides a compact presentation of all our statistical results, including their comparison with the corresponding values of for the concordance (traditional) $\Lambda$CDM model. Let us note that the pair for the concordance model can be computed from the formulae:
Therefore, for we obtain , as quoted in the first row of Table 1. Let us mention that we have checked that using the earlier SNIa results (UNION) of Kowalski et al.  does not change the previously presented constraints significantly. Finally, let us clarify that the chosen initial value for the vacuum energy is completely arbitrary, and the results do not depend on it, because the relaxation mechanism is dynamical and hence works for any numerical choice of . However, in order to avoid instabilities in the lengthy numerical analysis involved in the solution of these models (in which the large quantity almost cancels against the dynamically generated numerical value of ), it is convenient not to choose an exceedingly large value; apart from this proviso, any other arbitrarily selected value would do, as we have checked.
5 The expansion history/future for the models
Since the main cosmological quantities (scale factor, Hubble function, etc.) exhibit a complicated scaling with the redshift, the extra effects from the relaxation model can be absorbed into an “effective dark energy” contribution with a non-trivial EoS of the form , with the effective pressure and energy density taken from Eqs. (5). The corresponding EoS for the matter component is not affected by the presence of the extra term, so that the behavior of the matter epoch is the expected one. Indeed, during the background evolution in e.g. the non-relativistic matter dominated era, the deceleration parameter changes only very slightly to compensate the decreasing Hubble rate, but always stays around the value $q\simeq 1/2$. Therefore, the universe expands like a matter dominated cosmos despite . Analogously, the deceleration parameter varies minimally around the value $q\simeq 1$ during the radiation regime. Eventually, in the asymptotic future the relaxation of the CC is guaranteed by the smallness of in the function (10) at late times. The correct temporal sequence of the three cosmic epochs follows from the different powers of in the function , where higher powers are more relevant at earlier times: for the radiation dominated epoch, and for the matter and vacuum dominated epochs, although for the latter becomes smaller than in the former because is no longer close to . As we can see, all these dynamical features are encoded in the structure of (10). The precise behavior of the relevant quantities in the various epochs becomes transparent in the numerical examples considered in this section (cf. Figs. 5 and 6), on which we will elaborate further below.
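For reference, the reconstruction of the effective EoS from the expansion rate can be sketched numerically. The expression below is the standard identity for a flat FRW universe with matter plus an effective DE component, not a formula specific to the relaxation models; the names `H0` and `Om0` stand for the present Hubble rate and matter density parameter:

```python
import math

def w_eff(z, H, dHdz, H0, Om0):
    """Effective dark-energy EoS for a flat FRW universe:
       w(z) = [(2/3)(1+z) H'(z)/H(z) - 1] / [1 - (H0/H)^2 Om0 (1+z)^3]."""
    num = (2.0 / 3.0) * (1.0 + z) * dHdz / H - 1.0
    den = 1.0 - (H0 / H) ** 2 * Om0 * (1.0 + z) ** 3
    return num / den

# Sanity check: for LambdaCDM, H^2 = H0^2 [Om0 (1+z)^3 + 1 - Om0],
# the reconstruction must return w = -1 at any redshift.
H0, Om0 = 70.0, 0.27

def H_lcdm(z):
    return H0 * math.sqrt(Om0 * (1.0 + z) ** 3 + 1.0 - Om0)

def dH_lcdm(z):
    # dH/dz, obtained by differentiating H^2 above.
    return 1.5 * H0 ** 2 * Om0 * (1.0 + z) ** 2 / H_lcdm(z)
```

In a relaxation model, `H` and `dHdz` would instead be supplied (numerically) by the solution of the generalized Friedmann equation, and `w_eff` would then deviate from $-1$ in a redshift-dependent way.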
In general, it is well known that one can express the effective dark energy EoS parameter in terms of the Hubble rate. This function of the cosmological redshift becomes known (numerically) once we explicitly solve the model as indicated in Sect. 4. In the present case the structure of the effective EoS of the DE is quite cumbersome. First of all, from the formulae of Sect. 2 one can show that