The ν generation: present and future constraints on neutrino masses from global analysis of cosmology and laboratory experiments
We perform a joint analysis of current data from cosmology and laboratory experiments to constrain the neutrino mass parameters in the framework of Bayesian statistics, also accounting for uncertainties in nuclear modeling, relevant for neutrinoless double β decay (0νββ) searches. We find that a combination of current oscillation, cosmological and 0νββ data constrains () at 95% C.L. for the normal (inverted) hierarchy. This result is in practice dominated by the cosmological and oscillation data, so it is not affected by uncertainties related to the interpretation of 0νββ data, such as nuclear modeling, or by the exact particle physics mechanism underlying the process. We then perform forecasts for forthcoming and next-generation experiments, and find that in the case of normal hierarchy, given a total mass of eV, and assuming a factor-of-two uncertainty in the modeling of the relevant nuclear matrix elements, it will be possible to measure the total mass Σ itself, the effective Majorana mass m_ββ and the effective electron neutrino mass m_β with an accuracy (at 95% C.L.) of , , respectively, as well as to be sensitive to one of the Majorana phases. This assumes that neutrinos are Majorana particles and that the mass mechanism gives the dominant contribution to 0νββ decay. We argue that more precise nuclear modeling will be crucial to improve these sensitivities.
PACS numbers: 14.60.Pq, 23.40.-s, 98.80.-k
It is by now firmly established by oscillation experiments that neutrinos have mass. However, oscillation experiments are only sensitive to neutrino mass differences and mixing angles, and thus provide no information on the absolute scale of the masses, on the mass hierarchy, or on their Dirac or Majorana nature. The nature of neutrino masses and their smallness with respect to those of the charged leptons is a puzzling fact, possibly related to the mechanism of neutrino mass generation. Three main avenues are currently being pursued to experimentally probe the absolute scale of neutrino masses, namely i) direct measurements, studying the kinematics of β decay [Drexlin:2013lha], ii) searches for neutrinoless double β decay (0νββ) [Cremonesi:2013vla], and iii) cosmological observations [Lesgourgues:2006nd]. Approaches based on kinematic arguments have the advantage of being very direct and model-independent. An alternative is to study 0νββ decay, i.e., the double β decay of nuclei in which no neutrinos are present in the final state. If observed, it would guarantee that neutrinos have a non-vanishing Majorana mass [Schechter:1981bd]; if not, upper limits on the mass scale can still be placed, under the assumption that neutrinos are Majorana particles. Relating the (potentially) observed rate of this process to neutrino masses also requires assuming that the mass mechanism is the dominant one leading to 0νββ decay. It is worth noting that, even if this is the most natural scenario, other possibilities exist, involving additional physics beyond the standard model; see e.g. Refs. [Deppisch:2012nb, Rodejohann:2011mu] for a discussion. Moreover, our imprecise knowledge of the appropriate nuclear matrix elements is a relevant source of uncertainty in the interpretation of the results of these experiments [Cremonesi:2013vla].
Finally, neutrino masses can be measured through cosmological observations, such as measurements of the temperature and polarization anisotropies of the cosmic microwave background, or of the distribution of large-scale structure, since massive neutrinos affect the background evolution of the Universe as well as the growth of cosmological perturbations. Cosmology presently provides the most stringent limits on the absolute scale of neutrino masses [Planck:2015xua], with the shortcoming that these limits depend on assumptions about the underlying cosmological model.
The three approaches outlined above should be seen as complementary, as each of them presents its own advantages and disadvantages, and also because they probe slightly different quantities related to the neutrino masses. For this reason, it appears natural to combine data from direct measurements, 0νββ searches and cosmology, as well as from oscillation experiments, in order to constrain the neutrino mass parameters [joint]. In this paper, we derive joint constraints on the neutrino mass parameters from the most recent observations from both laboratory and cosmological experiments, combining them in the framework of Bayesian statistics. In particular, for 0νββ experiments we take into account the uncertainty related to nuclear matrix elements by treating it as a nuisance parameter to be marginalized over, in order to account for its impact on the neutrino mass estimates. We also perform forecasts, considering both forthcoming and next-generation experiments.
We use m_i (i = 1, 2, 3) to denote the masses of the neutrino mass eigenstates ν_i. We denote with 1 and 2 the eigenstates that are closest in mass; moreover, we define Δm²_ij ≡ m²_i − m²_j and take m_2 > m_1, so that Δm²_21 is always positive, while the sign of Δm²_31 discriminates between the normal (NH) and inverted (IH) hierarchies, for Δm²_31 > 0 or Δm²_31 < 0, respectively. The neutrino mass eigenstates are related to the flavour eigenstates ν_α (α = e, μ, τ) through ν_α = Σ_i U_αi ν_i, where the U_αi are the elements of the neutrino mixing matrix U, parameterized by the three mixing angles θ_12, θ_13, θ_23, one Dirac (δ) and two Majorana (α_21, α_31) CP-violating phases. Oscillation phenomena are insensitive to the two Majorana phases, which however affect lepton number-violating processes like 0νββ decay. The different probes of the absolute scale of neutrino masses are sensitive to different combinations of the mass eigenvalues and of the elements of the mixing matrix. β-decay experiments measure the squared effective electron neutrino mass m²_β = Σ_i |U_ei|² m²_i, while 0νββ searches are sensitive to the effective Majorana mass m_ββ = |Σ_i U²_ei m_i|, in which the Majorana phases enter through the complex factors U²_e2 ∝ e^{iα_21} and U²_e3 ∝ e^{iα_31}. Finally, cosmological observations probe, at least in a first approximation, the sum of neutrino masses Σ ≡ Σ_i m_i.
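As a concrete illustration, the three observables just defined (the sum Σ, the effective electron neutrino mass m_β, and the effective Majorana mass m_ββ) can be evaluated from the oscillation parameters. The sketch below assumes the normal hierarchy and uses illustrative (not best-fit) values for the splittings and mixing angles; it shows that the Majorana phases affect m_ββ but not m_β or Σ.

```python
import numpy as np

# Illustrative oscillation parameters (normal hierarchy assumed);
# dm*_sq in eV^2, s*_sq are squared sines of the mixing angles.
def mass_observables(m_lightest, dm21_sq=7.5e-5, dm31_sq=2.5e-3,
                     s12_sq=0.32, s13_sq=0.022, alpha21=0.0, alpha31=0.0):
    """Return (Sigma, m_beta, m_bb) in eV for the normal hierarchy."""
    m1 = m_lightest
    m2 = np.sqrt(m1**2 + dm21_sq)
    m3 = np.sqrt(m1**2 + dm31_sq)
    # Squared moduli of the first row of the mixing matrix
    Ue1_sq = (1.0 - s12_sq) * (1.0 - s13_sq)
    Ue2_sq = s12_sq * (1.0 - s13_sq)
    Ue3_sq = s13_sq
    sigma = m1 + m2 + m3
    m_beta = np.sqrt(Ue1_sq * m1**2 + Ue2_sq * m2**2 + Ue3_sq * m3**2)
    # The Majorana phases enter only m_bb, as relative phases in the sum
    m_bb = abs(Ue1_sq * m1
               + Ue2_sq * m2 * np.exp(1j * alpha21)
               + Ue3_sq * m3 * np.exp(1j * alpha31))
    return sigma, m_beta, m_bb
```

For m_lightest = 0 and vanishing phases this gives Σ ≈ 0.06 eV, while setting α_21 ≈ π suppresses m_ββ through partial cancellation, leaving m_β unchanged.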
We perform a Bayesian analysis based on a Markov chain Monte Carlo (MCMC) method, using CosmoMC [Lewis:2002ah] as a generic sampler in order to explore the posterior distribution of the parameters given the data. We consider a vector of base parameters that includes, in addition to the mass and mixing parameters, a “nuisance” parameter ξ related to the uncertainty in nuclear modeling (see below). We assume uniform prior distributions for all parameters. We do not consider the mixing angle θ_23, since none of the mass parameters depend on it.
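The sampling step can be sketched with a minimal random-walk Metropolis algorithm (a toy stand-in for the actual CosmoMC machinery), here run on an illustrative Gaussian posterior combined with a flat box prior, mimicking the uniform priors adopted for the base parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis(log_post, x0, step, n_steps=20000):
    """Minimal Metropolis sampler: Gaussian random-walk proposals,
    accepted with probability min(1, exp(delta log-posterior))."""
    chain = [np.asarray(x0, dtype=float)]
    lp = log_post(chain[0])
    for _ in range(n_steps):
        prop = chain[-1] + step * rng.standard_normal(len(x0))
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            chain.append(prop)
            lp = lp_prop
        else:
            chain.append(chain[-1].copy())
    return np.array(chain)

# Toy posterior: Gaussian likelihood times a uniform (box) prior.
def log_post(x):
    if np.any(x < 0) or np.any(x > 1):
        return -np.inf  # outside the flat prior range
    return -0.5 * np.sum(((x - 0.3) / 0.05) ** 2)

chain = metropolis(log_post, x0=[0.5, 0.5], step=0.05)
```

After discarding an initial burn-in, the chain mean recovers the posterior mean of the toy problem; in the real analysis the same machinery explores the full parameter vector.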
We consider data from oscillation experiments, direct measurements of the electron neutrino mass, 0νββ searches and cosmological observations, all folded into the analysis through the corresponding likelihood function. Our baseline dataset is the most recent global fit of the neutrino oscillation parameters [Forero:2014bxa], updated after the Neutrino 2014 conference. We model the likelihood as the product of individual Gaussians in each of the oscillation parameters, since correlations can be neglected for our purposes [Forero:2014bxa, GonzalezGarcia:2012sz, Bergstrom:2015rba]. For the means and standard deviations, we take respectively the best-fit value and the uncertainty quoted in Tab. II of Ref. [Forero:2014bxa]. When the error is asymmetric, we conservatively take the standard deviation equal to the larger of the left and right uncertainties. For direct measurements, we consider KATRIN [Osipowicz:2001sq] and HOLMES [Alpert:2014lfa] as our forthcoming and next-generation datasets, respectively. KATRIN is expected to reach sub-eV sensitivity in m_β, while HOLMES could go down to . Kinematic measurements are directly sensitive to the square of the effective electron neutrino mass, so in both cases we take the likelihood to be a Gaussian in m²_β (with the additional condition m²_β ≥ 0), with a width given by the expected sensitivity of the experiment, i.e. for KATRIN and HOLMES, respectively. For 0νββ searches, we consider the current data from the GERDA experiment [Agostini:2013mzu] as the present dataset, its upgrade to the so-called “phase II” for the near future, and the nEXO experiment [nEXO] as a next-generation dataset. 0νββ experiments are sensitive to the half-life T_1/2 of the decay. Assuming the Majorana nature of neutrinos, and that 0νββ decay is induced by the exchange of light Majorana neutrinos (in the following we shall always assume that this is the case, unless otherwise stated), T_1/2 is related to the effective Majorana mass m_ββ through:

1/T_1/2 = G^(0ν) |M^(0ν)|² (m_ββ / m_e)² ,
where m_e is the electron mass, G^(0ν) is a phase space factor and M^(0ν) is the nuclear matrix element. Phase I of the GERDA project provides the tightest bounds on the half-life of 0νββ decay of ⁷⁶Ge, reporting a limit at 90% C.L. () [Agostini:2013mzu]. (Other isotopes currently yield () for ¹³⁰Te [Alfonso:2015wka] and () for ¹³⁶Xe at 90% C.L. [Asakura:2014lma], corresponding to and , respectively.) The upgrade to phase II of the experimental program is expected to increase the 90% C.L. sensitivity to , () for 40 kg of detector mass and 3 years of observations [NOWcatta]. nEXO is a next-generation ton-scale experiment for the detection of 0νββ decay of ¹³⁶Xe, conceived as a scaled-up version of the currently ongoing EXO project, with an estimated sensitivity at 90% C.L. () for 5 tons of material and 5 years of data [NOWpocar]. We model the likelihood of 0νββ experiments as a Poisson distribution in the number of observed events in the “region of interest” (the energy window around the Q-value of the decay), with an expected value given by the sum of the signal (N_s) and background (N_b) contributions. For a given value of T_1/2, the expected number of signal events observed in a time t for a detector mass M is

N_s = ln 2 · N_A · (M t / m_enr) · ε / T_1/2 ,
where N_A is Avogadro’s number, Mt is the exposure, ε is the detector efficiency, and m_enr is the molar mass of the enriched element involved in the decay. The level of background is usually expressed in terms of the “background index”, i.e. the number of expected background events per unit mass and time within an energy bin of unit width. For GERDA-I, we use the parameters reported in Tab. I of [Agostini:2013mzu] for the case with pulse-shape discrimination. For GERDA-II, we consider a reduction of the background index down to , a total exposure of 120 kg yr, and the same efficiency as GERDA-I [priv_catta]. For nEXO, we assume a background index corresponding to 3.7 events in the region of interest and an exposure of 25 ton yr [NOWpocar], and the same efficiency as EXO [Albert:2014awa]. We also consider an upgrade of nEXO in which the background in the inner 3 tons of the detector can be reduced by a factor of 4 through tagging; for this updated version we assume 10 years of observations [NOWpocar].
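The counting likelihood described above can be sketched as follows. The experimental numbers in the example are purely illustrative, not the actual GERDA or nEXO parameters, and enrichment/active fractions are absorbed into the efficiency for simplicity:

```python
import numpy as np
from math import lgamma, log

N_A = 6.022e23  # Avogadro's number [mol^-1]

def expected_signal(T_half_yr, exposure_kg_yr, efficiency, molar_mass_g_mol):
    """Expected signal counts: N_s = ln2 * N_A * (M t / m_molar) * eff / T_1/2."""
    nuclei_per_kg = N_A * 1e3 / molar_mass_g_mol
    return np.log(2) * nuclei_per_kg * exposure_kg_yr * efficiency / T_half_yr

def log_likelihood(n_obs, T_half_yr, exposure_kg_yr, efficiency,
                   molar_mass_g_mol, bkg_counts):
    """Poisson log-likelihood for n_obs events in the region of interest,
    with expectation = signal + background."""
    mu = expected_signal(T_half_yr, exposure_kg_yr, efficiency,
                         molar_mass_g_mol) + bkg_counts
    return n_obs * log(mu) - mu - lgamma(n_obs + 1)
```

For instance, a half-life of 2×10²⁵ yr with 20 kg yr of exposure, 60% efficiency and molar mass 76 g/mol yields a few expected signal events, and longer half-lives (smaller m_ββ) yield proportionally fewer.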
In order to account for the uncertainty related to nuclear modeling [nuclear], including both that on the nuclear matrix elements (NME) and that on the axial coupling constant, we compute T_1/2 for a given m_ββ using fiducial values of these quantities, and then rescale it by a factor ξ. A similar approach was used in Ref. [Minakata:2014jba] in a frequentist framework, while we refer to Ref. [Bergstrom:2012nx] for a different Bayesian approach. The fiducial values are for the axial coupling, and () and () for ⁷⁶Ge and ¹³⁶Xe, respectively. The value of ξ is extracted at every step of the MCMC from a uniform distribution in the range , and marginalized over. This is equivalent, for example, to assuming that, given exact knowledge of the axial coupling, the numerical estimates of the NME can be wrong by up to a factor of two in either direction. Finally, for the cosmological dataset, we use results obtained by combining full-mission Planck temperature and polarization data with data on the baryon acoustic oscillations [Planck:2015xua], as both our current and forthcoming reference dataset. For simplicity, we shall refer to this dataset simply as “Planck 2015”. In particular, we use the chains publicly available through the Planck Legacy Archive [PLA] to derive the posterior distribution of Σ given these data, corresponding to a 95% upper limit . As a next-generation experiment, we consider the Euclid mission. The combination of all Euclid probes (weak lensing tomography, galaxy clustering and ISW) with data from Planck is expected to constrain the sum of neutrino masses with a sensitivity of for , as reported in Tab. 2 and the main text of [Laureijs:2011gra]. We shall refer to this dataset simply as “Euclid”. We model the corresponding likelihood as a Gaussian in Σ, with , and the addition of the physical prior Σ > 0.
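The marginalization over the nuclear rescaling can be sketched by Monte Carlo integration: sample the rescaling factor from its flat prior and average the likelihood. The factor-of-two range [0.5, 2] and the ξ² scaling of the decay rate (from 1/T_1/2 ∝ |M|²) used below are our reading of the description above, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def marginal_likelihood(like_of_rate, rate_fiducial, n_samples=100000):
    """Average the likelihood over a flat prior xi ~ U(0.5, 2) on the NME
    rescaling; the decay rate scales as xi^2 since 1/T_1/2 ~ |M|^2."""
    xi = rng.uniform(0.5, 2.0, n_samples)
    return np.mean(like_of_rate(rate_fiducial * xi**2))
```

With, e.g., like_of_rate = lambda r: np.exp(-r), the marginal likelihood is broader than the fixed-NME one, reflecting the extra nuisance uncertainty.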
To summarize, we consider four combinations of datasets. All of them include the most up-to-date information from oscillation experiments. The “present” dataset includes Planck 2015 for cosmology, and GERDA-I for 0νββ searches. We do not include information from available direct measurements (e.g. those from the Troitsk and Mainz experiments), since they do not add information on m_β with respect to the data already considered. The “forthcoming” dataset consists of the same cosmological data as the previous dataset, GERDA-II, and KATRIN for kinematic measurements. The “next generation I (II)” dataset includes Euclid, nEXO without (with) tagging, and HOLMES. For future data, we have to assume fiducial values of the parameters: in the case of the forthcoming dataset, we take them equal to their best estimates from the combination of oscillations and Planck 2015. For the futuristic case, we assume a fiducial value of the total mass Σ, and estimate m_β and m_ββ from the combination of Euclid and oscillation parameters.
We present our results for Σ, m_β and m_ββ in Tab. 1 for the datasets described above. We report limits both in the case where ξ is fixed to 1 and in the case where ξ is marginalized over, in order to show the impact of uncertainties in nuclear modeling. We quote our results, both in the text and in the table, in terms of the Bayesian 95% minimum credible interval [Hamann:2007pi]. When this interval includes the minimal value of the parameter allowed by oscillation measurements, we only quote the extremes of the range; otherwise, we report the mean ± the 95% uncertainty. We do this in order to distinguish a “detection” scenario – i.e., one in which the observations point to a value of the parameter under consideration that differs, with a given statistical significance, from the lowest value allowed by oscillations alone – from a “non-detection” scenario, in which this minimal oscillation value is still allowed. We choose to identify the minimal value allowed by oscillations as the Bayesian 95% C.L. lower limit of the neutrino mass parameters when the mass of the lightest eigenstate is set to zero. With this definition, we get (), (), () for NH (IH). We would like to point out that the exact definition of the minimal value is somewhat arbitrary, in the sense that it is not formally well defined due to the finite precision of the oscillation measurements. For example, we could have chosen the lowest value allowed by fixing the oscillation parameters to their best-fit values, rather than computing the Bayesian 95% lower limit, and this would be equally sensible. This choice only affects the way in which limits are reported in Tab. 1 (and we verified that it has only a minor impact in that respect), so it does not alter our conclusions in any way. In any case, we recall that the confidence intervals represent a compression of the information contained in the one-dimensional posteriors, which fully represent the probability distribution associated with a given parameter.
In Fig. 1 we show the marginalized one-dimensional posterior distributions for the mass parameters. In most cases, the low-mass region is excluded by the oscillation data, with the only exception of m_ββ in the case of NH; the reason is that in this case the phases can arrange themselves so as to yield m_ββ = 0 even for finite values of the mass differences. Present data provide similar limits independently of whether nuclear uncertainties are marginalized over. This happens because the present constraints are dominated by the cosmological limit on Σ, which translates directly into bounds on m_β and m_ββ once oscillation data are taken into account (this can be understood by noticing that the direct limits on these parameters are much weaker). We have verified explicitly that this is the case by performing parameter estimation using only Planck 2015 and oscillation data, as shown in Fig. 2. In particular, we find that the present data constrain () at 95% C.L. for NH (IH), regardless of the inclusion of 0νββ information. Forthcoming datasets yield similar constraints on the mass parameters; this means that the improved sensitivity of GERDA-II and the inclusion of KATRIN add only marginally to the combination of Planck 2015 and oscillation data. The fact that present and forthcoming limits are dominated by the latter combination has the consequence that they do not depend on the modeling of 0νββ-decaying nuclei, nor on assumptions about the mechanism that induces the decay (while, on the other hand, they are affected by the model dependence of the cosmological analysis). This picture changes substantially for next-generation experiments. In that case, cosmological observations and 0νββ searches have comparable power in constraining the mass parameters, and the nuclear uncertainties – as well as theoretical assumptions about the particle physics of 0νββ decay – play a role in deriving parameter constraints.
We find that, if neutrinos are Majorana particles and 0νββ decay is dominantly induced by the mass mechanism, marginal 95% evidence for non-minimal mass parameters can be obtained in the case of normal hierarchy, even when nuclear uncertainties are taken into account. This detection is further strengthened in the next generation-II dataset, for which we get , , . In the case of inverted hierarchy, we obtain upper limits for and , and a more than 95% evidence for a non-minimal . In particular, for the next generation-II dataset, marginalizing over nuclear uncertainties, we find , and . Finally, we note that while present and forthcoming experiments have little, if any, sensitivity to the neutrino mixing phases, the combination of next-generation experiments may allow a determination of one of the Majorana phases, as shown in Fig. 3.
The combination of current and forthcoming data from oscillation, kinematic, and cosmological experiments allows us to put upper bounds , and on the mass parameters for NH (IH). These limits are dominated by the combination of oscillation and cosmological data, and as such they are neither affected by uncertainties in nuclear modeling, nor do they rely on knowledge of the particle physics mechanism leading to 0νββ decay. If neutrinos are Majorana particles and 0νββ decay is induced by the exchange of light Majorana neutrinos, and further assuming a total mass of and a factor-of-two uncertainty in nuclear modeling, next-generation experiments will ideally allow a measurement of non-minimal mass parameters with a 95% accuracy better than , , for Σ, m_ββ, m_β, respectively, for NH. In the case of IH, the allowed parameter range is reduced by roughly 25% with respect to the present for and , while can be measured with a accuracy. The uncertainty on m_ββ can be reduced by up to a factor of 4 by better modeling of the nuclear factors. Next-generation experiments will also be sensitive to one of the Majorana phases.
Acknowledgements. We would like to thank C. Cattadori, S. Dodelson, M. Hirsch, M. Tórtola and F. Vissani for useful discussions. MG and AM acknowledge support by the research grant Theoretical Astroparticle Physics number 2012CPPYP7 under the program PRIN 2012 funded by MIUR, and by TASP, iniziativa specifica INFN.
- (1) G. Drexlin et al., Adv. High Energy Phys. 2013, 293986 (2013).
- (2) O. Cremonesi and M. Pavan, Adv. High Energy Phys. 2014, 951432 (2014).
- (3) J. Lesgourgues and S. Pastor, Phys. Rept. 429, 307 (2006).
- (4) J. Schechter and J. W. F. Valle, Phys. Rev. D 25, 2951 (1982).
- (5) F. F. Deppisch, M. Hirsch and H. Pas, J. Phys. G 39, 124007 (2012) [arXiv:1208.0727 [hep-ph]].
- (6) W. Rodejohann, Int. J. Mod. Phys. E 20, 1833 (2011) [arXiv:1106.1334 [hep-ph]].
- (7) P. A. R. Ade et al. [Planck Collaboration], arXiv:1502.01589 [astro-ph.CO].
- (8) G. L. Fogli et al., Phys. Rev. D 70, 113003 (2004); S. Dell’Oro et al., arXiv:1505.02722 [hep-ph].
- (9) A. Lewis and S. Bridle, Phys. Rev. D 66, 103511 (2002).
- (10) D. V. Forero, M. Tortola and J. W. F. Valle, Phys. Rev. D 90, 093006 (2014).
- (11) M. C. Gonzalez-Garcia et al., JHEP 1212, 123 (2012). Updated results after the Neutrino 2014 conference can be found here: http://www.nu-fit.org/
- (12) J. Bergstrom, M. C. Gonzalez-Garcia, M. Maltoni and T. Schwetz, arXiv:1507.04366 [hep-ph].
- (13) A. Osipowicz et al. [KATRIN Collaboration], hep-ex/0109033.
- (14) B. Alpert et al. [HOLMES Collaboration], arXiv:1412.5060 [physics.ins-det].
- (15) M. Agostini et al. [GERDA Collaboration], Phys. Rev. Lett. 111, 122503 (2013).
- (16) K. Alfonso et al. [CUORE Collaboration], arXiv:1504.02454 [nucl-ex].
- (17) K. Asakura et al. [KamLAND-Zen Collaboration], arXiv:1409.0077 [physics.ins-det].
- (18) https://www-project.slac.stanford.edu/exo/
- (19) C. Cattadori (on behalf of the GERDA Collaboration), talk given at the Neutrino Oscillation Workshop 2014.
- (20) A. Pocar (on behalf of the EXO-200 Collaboration), talk given at the Neutrino Oscillation Workshop 2014.
- (21) C. Cattadori, private communication.
- (22) J. B. Albert et al. [EXO-200 Collaboration], Nature 510, 229–234 (2014).
- (23) A. Smolnikov and P. Grabmayr, Phys. Rev. C 81, 028502 (2010); J. Kotila and F. Iachello, Phys. Rev. C 85, 034316 (2012); J. Barea, J. Kotila and F. Iachello, Phys. Rev. C 87, 014315 (2013).
- (24) H. Minakata, H. Nunokawa and A. A. Quiroga, PTEP 2015, no. 3, 033B03 (2015).
- (25) J. Bergstrom, JHEP 1302, 093 (2013).
- (26) http://pla.esac.esa.int/pla/
- (27) R. Laureijs et al. (Euclid collaboration), arXiv:1110.3193 [astro-ph.CO].
- (28) J. Hamann et al., JCAP 0708, 021 (2007).