
Constraining Models of Neutrino Mass and Neutrino Interactions with the Planck Satellite

Abstract

In several classes of particle physics models – ranging from the classical Majoron models, to the more recent scenarios of late neutrino masses or Mass-Varying Neutrinos – one or more of the neutrinos are postulated to couple to a new light scalar field. As a result of this coupling, neutrinos in the early universe, instead of streaming freely, could form a self-coupled fluid, with potentially observable signatures in the Cosmic Microwave Background and the large scale structure of the universe. We re-examine the constraints on this scenario from the presently available cosmological data and investigate the sensitivity expected from the Planck satellite. With the present data, we find that the sensitivity strongly depends on which dataset is used. The SDSS Main sample data, combined with WMAP and other data, disfavors the scenario of three coupled neutrinos at about the 3.5σ confidence level, but also favors a high number of freely streaming neutrinos, with the best fit at about 5.2. If the matter power spectrum is instead taken from the SDSS Large Red Galaxy sample, the best fit point has 2.5 freely streaming neutrinos, but the scenario with three coupled neutrinos becomes allowed at the 2σ level. In contrast, Planck alone will exclude even a single self-coupled neutrino at more than the 4σ confidence level, and will tightly determine the total amount of radiation at the CMB epoch. We investigate the robustness of this result with respect to the details of Planck’s detector. This sensitivity to neutrino free-streaming implies that Planck will be capable of probing a large region of the Mass-Varying Neutrino parameter space. Planck may also be sensitive to a scale of neutrino mass generation as high as 1 TeV.


I Introduction

Cosmology and particle physics are becoming increasingly linked, as is particularly evident in the dark sector, comprising dark matter, dark energy and dark radiation (neutrinos or other thermalized relativistic particles). Considerable work has been done to discover the nature of the new particle physics underlying the cosmological dark sector. While much remains unknown about it, the current abundance of data, combined with more precise measurements in the future, promises to give a clearer window into the dark. In this paper we consider the neutrino fraction of the dark sector. Cosmology already tells us a number of interesting facts about the neutrino sector when the universe was in its infancy. Big Bang Nucleosynthesis (BBN), for example, places a significant constraint on the amount of radiation present at that epoch, translating into an effective constraint on the total number of thermalized neutrino species Cyburt et al. (2005). The strongest upper bound on the neutrino mass is also currently derived from the matter power spectrum on small scales; the current limit at 95% C.L. is given in Seljak et al. (2006).

From a particle physics perspective, there are many well-motivated models which generate beyond-the-standard-model neutrino physics, potentially giving rise to signals in the early universe. Most of these proposals could be classified into one of the following four broad categories: suggestions that neutrinos have (i) small but nonzero masses Pontecorvo (1968); Gribov and Pontecorvo (1969); (ii) interactions with electromagnetic fields (via magnetic/transition moments) Cisneros (1971); Okun et al. (1986); (iii) interactions with new heavy states Wolfenstein (1978); (iv) interactions with new light states Chikashige et al. (1981); Gelmini and Roncadelli (1981). As is well known, the first of these ideas has actually proven true, giving us the first direct discovery of physics beyond the standard model. While it is certainly possible that there are no other observable effects of new physics, it is intriguing to think that the neutrino sector could hold more surprises. The pragmatic approach is to set aside any theoretical prejudice (which has proven rather unhelpful in the past in predicting neutrino properties) and settle the issue experimentally. Should new neutrino interactions be discovered, the theoretical implications would be profound.

Testing neutrino properties in terrestrial experiments has been very challenging. It took three decades from the first indication of the “solar neutrino problem” Davis et al. (1968) to conclusively demonstrate that neutrinos are massive and oscillate. As for the other proposals on the list, progress has been even slower. In the case of non-standard neutrino interactions mediated by novel heavy particles, bounds from accelerator-based experiments Berezhiani and Rossi (2002); Davidson et al. (2003), solar Friedland et al. (2004a); Guzzo et al. (2004); Miranda et al. (2006), atmospheric Friedland et al. (2004b); Friedland and Lunardini (2005) and beam neutrino experiments Kitazawa et al. (2006); Friedland and Lunardini (2006) remain quite weak to this day. Bounds on the electromagnetic moments have changed little over the last 15-20 years, with the best constraints coming from the cooling of astrophysical systems such as red giants before helium flash Raffelt (1990). Finally, while certain models of neutrino coupling to light scalar fields (those involving weak doublet Aulakh and Mohapatra (1982); Santamaria and Valle (1987) or triplet Gelmini and Roncadelli (1981) fields) have been ruled out by the LEP data on the Z boson width, a large class of models (involving coupling to singlet fields) remains viable (see, e.g., Berezhiani et al. (1992)).

In this paper, we show that the experimental sensitivity to nonstandard interactions mediated by a new light particle is about to be significantly improved. In this case the experimental tools are cosmological observations rather than neutrino oscillation experiments. The new interaction can manifest itself through large scattering rates of the neutrinos with themselves (and the new light fields), or through extra radiation in the early universe (through thermally populating the new light states). As we will show, while the present cosmological data yields limited constraints on the scenarios in question, the data on the Cosmic Microwave Background (CMB) anisotropies expected in the next several years from the Planck satellite mission (scheduled to launch in 2008) will lead to qualitative improvements.

There are many models which generate strong neutrino interactions. A sterile neutrino added to the standard model may have its mass generated by a scalar field in analogy with the Higgs mechanism (though the mass-generating scalar is not, by necessity, the Higgs boson, since the sterile neutrino carries no standard model interactions). That scalar will mediate neutrino interactions, and the size of the coupling is often large enough to make them measurable in the CMB. Cases in point are Mass-Varying Neutrinos (MaVaNs) Fardon et al. (2004, 2006), a theory of neutrino dark energy, or a model of “late” neutrino masses Chacko et al. (2004, 2005). These theories feature neutrino-scalar interactions of a size which, as we will see, may be large enough to be measurable by the Planck satellite. We will see that Planck alone will be able to observe or rule out the scenario with a single interacting and two standard neutrinos at the 4.2σ level.

In many models, additional scalars may also become thermally populated. It is possible to choose the parameters such that the scalars recouple after the BBN era Beacom et al. (2004); Chacko et al. (2004). Hence an independent CMB constraint becomes of great interest. Though the present CMB constraints on extra radiation are not very strong in comparison to the BBN constraints, we will also find that Planck will be able to observe a single extra neutrino at approximately the 2.5σ level.

It has been shown that in the case of interactions mediated by heavy particles cosmology does not yield effective new bounds Mangano et al. (2006). Why is the situation different when the new particles are light? The basic idea can be easily stated. Coupling neutrinos to a light scalar field can populate additional degrees of freedom in the early universe, changing the expansion rate as well as the evolution of inhomogeneities. Moreover, it can drastically increase the neutrino-neutrino interaction cross section, much more than what is possible with heavy new particles, turning the neutrinos in the early universe into a self-coupled fluid. The neutrino density perturbations would then evolve differently from the standard case – coupled neutrinos would undergo acoustic oscillations rather than stream freely in the CMB epoch (temperatures of order an eV) – and affect the baryon-photon fluid and dark matter through their gravity during the epoch of radiation domination.

Our cosmological analysis is done in a rather model-independent way. For most of the parameter space, one can simply assume that in the CMB epoch there is a certain number of freely streaming relativistic species (the standard neutrinos fall into this category) and a certain number of relativistic species forming a self-coupled fluid. This approach is certainly not new Chacko et al. (2004); Bell et al. (2006). We will refer to the total number of effective neutrinos as including any thermalized radiation, whether free-streaming or coupled. The results then have a broader applicability. For example, if one sets the number of coupled species to zero, one obtains a bound on the total number of standard neutrinos. The constraints on the latter from the current data, as well as forecasts for Planck, have been investigated in numerous recent studies Bell et al. (2006); Seljak et al. (2006); Cirelli and Strumia (2006); Hannestad and Raffelt (2006), Bowen et al. (2002); Bashinsky and Seljak (2004); Perotto et al. (2006), with somewhat differing results. We will weigh in on this controversy.

The cosmology of neutrinos coupled to a light scalar is a very old topic. Over the years, different aspects of it were discussed in Georgi et al. (1981); Kolb and Turner (1985); Raffelt and Silk (1987); Atrio-Barandela and Davidson (1997), Chacko et al. (2004); Beacom et al. (2004). Four modern data analyses are those by Hannestad Hannestad (2005), by Trotta and Melchiorri Trotta and Melchiorri (2005), by Bell, Pierpaoli, and Sigurdson Bell et al. (2006), and by Cirelli and Strumia Cirelli and Strumia (2006). These papers only analyze the existing data and do not forecast the reach of Planck. Moreover, once again, the results these papers find with the available data differ. We will reexamine this situation using up-to-date cosmological data.

In the next Section we examine in more detail the types of neutrino models which generate strong neutrino interactions or extra relativistic degrees of freedom. We then turn to the CMB phenomenology of constraining these models. We carry out an analysis using all the current cosmological data, and determine the precision with which Planck will be able to constrain neutrino interactions. We then apply these constraints to Majorana neutrino interactions and consider the implications for theories of neutrino dark energy (MaVaNs). The details of our analysis are contained in Sects. III-V; the Reader with interests in cosmology may wish to focus on these Sections. Conversely, the Reader who is interested only in the implications for neutrino models can proceed directly from Sect. II to Sect. VI.

II Models of Neutrino-Scalar Interactions

As mentioned in the Introduction, the possibility of coupling the neutrinos to a novel light scalar field has been entertained in the literature since the early 1980’s. In this Section, we will briefly review some of the cosmologically relevant features of the many models that were constructed. For more details, the Reader should consult, e.g., Chikashige et al. (1980, 1981); Gelmini and Roncadelli (1981); Georgi et al. (1981); Aulakh and Mohapatra (1982); Santamaria and Valle (1987); Berezhiani et al. (1992) and many other papers dedicated to the subject.

From the point of view of cosmological constraints based on the CMB, we are interested in the physics at 1 eV energies. At these low energies the relevant properties of the full models are captured by effective low-energy Lagrangian terms. We may thus try to learn about some of the common features of the models by building this interaction “bottom-up”.

The starting step may be to seek an interaction of the Yukawa type, coupling the new scalar directly to two neutrino fields. To have the correct gauge structure, the neutrino field must be promoted to the lepton doublet. At the level of dimension-4 operators one can then only couple the scalar to two lepton doublets. This means the new scalar field is a triplet of SU(2) and couples to the Z boson. If one further writes a symmetry breaking potential for the triplet (to obtain a massless Goldstone boson – a Majoron), one obtains the classical model of Gelmini and Roncadelli Gelmini and Roncadelli (1981). This model has been ruled out by the LEP measurements of the Z boson width: the triplet Majoron would contribute an equivalent of two extra neutrino species to the width. This argument extends to other models in which the new light scalar is not an electroweak singlet; for instance, the Higgs doublets of Aulakh and Mohapatra (1982); Santamaria and Valle (1987) would contribute half of an extra neutrino species to the width. As a consequence, we must assume that the light scalar field is a Standard Model singlet.

The simplest renormalizable model that generates an effective low-energy neutrino-scalar coupling involves adding a right-handed sterile neutrino $N$ to the theory and coupling a scalar to it. To this end, consider the standard see-saw mechanism. The mixing of $N$ with the Standard Model neutrinos enters only through a Dirac neutrino mass term, so that the neutrino Lagrangian contains

$\mathcal{L}_\nu \supset \lambda\, \bar{l}\,\tilde{H}\, N + \tfrac{1}{2} M\, N N + \mathrm{h.c.}$   (1)

The first term is of the same form as the lepton Dirac masses in the Standard Model, generating a Dirac mass $m_D = \lambda \langle H \rangle$, while the second one is a Majorana mass term for the right-handed (sterile) neutrino. When the sterile neutrino is integrated out, assuming $M \gg m_D$, only a light Majorana neutrino remains, with mass $m_\nu \simeq m_D^2/M$.
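For orientation, a rough numerical illustration of the see-saw relation (the reference values here are illustrative choices, not numbers taken from the fits below):

$m_\nu \simeq \frac{m_D^2}{M} \approx 0.1\ \mathrm{eV}\,\left(\frac{m_D}{100\ \mathrm{GeV}}\right)^2 \left(\frac{10^{14}\ \mathrm{GeV}}{M}\right).$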

We may now promote the sterile neutrino mass, $M$, to a dynamical (complex) field $\Phi$, in analogy to the Standard Model Higgs mechanism, where the Dirac mass is generated by the dynamical Higgs field:

$\mathcal{L}_\nu \supset \lambda\, \bar{l}\,\tilde{H}\, N + \tfrac{1}{2} \kappa\, \Phi\, N N + \mathrm{h.c.}$   (2)

When $\Phi$ develops a VEV, $\langle\Phi\rangle = f$, the sterile neutrino gets a mass $M = \kappa f$ and can be integrated out of the theory. The light Goldstone mode ($\phi$) is coupled to the light neutrinos via an effective interaction of the form

$\mathcal{L}_{\rm eff} \simeq \frac{\left(\lambda\, l\,\tilde{H}\right)^2}{\kappa\, \Phi} + \mathrm{h.c.}$   (3)

Clearly, at energies of order an eV the Higgs field in Eq. (3) has no excitations, only its vacuum expectation value (VEV), $\langle H \rangle = v$.

An alternative form of the interaction can be obtained by absorbing $\phi$ into the phase of the neutrino field. The interaction then appears from the kinetic term for $\nu$,

$\mathcal{L}_{\rm int} \sim \frac{\partial_\mu \phi}{f}\, \nu^\dagger \bar{\sigma}^\mu \nu.$   (4)

This form makes manifest the derivative nature of the coupling of $\phi$.

We note that while the construction reviewed here is the simplest, it is certainly not unique. In general, the scalar field need not couple to only the sterile neutrino and could be coupled via both the Dirac and Majorana terms in Eq. (1), e.g., by promoting the Dirac mass to a dynamical field as well, Chacko et al. (2005).

Expanding Eq. (3) to first order in $\phi$, we find a Majorana mass for the light neutrino, $m_\nu = m_D^2/M$ (where $m_D = \lambda v$ and $M = \kappa f$), and an effective Yukawa (pseudoscalar) interaction of $\phi$ with $\nu$, with the coupling strength

$g \;=\; \frac{m_\nu}{f} \;=\; \kappa\left(\frac{m_D}{M}\right)^2.$   (5)

The second equality has a very simple physical interpretation: $(m_D/M)^2$ gives the amount of admixture of the sterile neutrino into the light mass eigenstate, and $\kappa$ is the coupling of $\phi$ to the sterile component.

In order to generate a sizable coupling between the neutrino and the scalar, there must not be too large a hierarchy between the neutrino mass and the scale of symmetry breaking $f$. If the sterile neutrino mass is at the GUT scale, the coupling $g$ is exceedingly small for reasonable values of $m_D$ and $\kappa$, far too small to give rise to significant effects during the CMB epoch. On the other hand, if the scale of new physics, $M$, is near the TeV scale, much larger couplings may be generated. Though the coupling is still very small, it is large enough to result in observable effects through decay and inverse decay of the neutrino to scalars, which tightly couples the scalars and neutrinos, removing neutrino free-streaming. We return in detail to the constraints on the coupling in Sect. VI.
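To make the scaling explicit (the reference values below are illustrative, chosen only to match the observed neutrino mass scale and a TeV-scale $M$; they are not fits from this paper), Eq. (5) with $f = M/\kappa$ gives

$g \;=\; \kappa\,\frac{m_\nu}{M} \;\approx\; 5\times 10^{-14}\,\kappa\,\left(\frac{m_\nu}{0.05\ \mathrm{eV}}\right)\left(\frac{1\ \mathrm{TeV}}{M}\right),$

while the same estimate with $M$ at the GUT scale, $M \sim 10^{15}$ GeV, gives $g \sim 10^{-26}\,\kappa$, illustrating why only comparatively low scales of neutrino mass generation can be probed through the CMB.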

There are many theories where such a coupling is generated. Theories of neutrino dark energy introduce interactions in the effective theory of the form of Eq. (2), often with a sizable coupling. In the models of dark energy originally discussed in Frieman et al. (1992) and revived in Hill et al. (2006) as neutrino dark energy, the scalar generating the dark energy is a pseudo-Nambu-Goldstone boson (pNGB). If the scale of the breaking of the scalar U(1) symmetry is sufficiently low (TeV scale or lower), a cosmologically interesting coupling between the scalar and the neutrino may be generated. Since the scalar is a pNGB in these models, it is protected from large radiative corrections, and hence remains light, an advantage in any theory of dark energy. In a similar model of Barbieri et al. (2005), the neutrino masses arise through extremely small Yukawa couplings to the mass-generating fields. Since these Yukawa couplings are very small, the effective neutrino-scalar coupling is also very small and there are no observable consequences in the CMB through the neutrino-scalar coupling. Because the sterile neutrino is light, however, and the mixing between the active and sterile neutrino is substantial, the sterile neutrino may become thermalized, and there may be signals in the CMB through an increased number of relativistic species.

Another theory of dark energy may also generate interesting signals in the CMB. In contrast to the models above, where the scalar field is typically associated with a symmetry broken at a comparatively high scale, MaVaNs, as introduced in Fardon et al. (2004), place the entire sector around a meV; this scale is chosen according to the measured neutrino mass splittings and the dark energy scale. As a consequence, all mass scales in the model, including the Dirac mass and the scalar mass, lie in the sub-eV range, and the coupling is typically not too small; the small hierarchy between the Dirac and sterile neutrino masses implies a sizable coupling in many cases.

There are many other instances where the hierarchy between the Dirac and sterile neutrino masses is much smaller due to the introduction of lighter sterile neutrinos. Much lighter sterile neutrinos have been considered in a wide variety of contexts, most notably perhaps in connection with the LSND measurement, where the presence of a sterile neutrino with mass around an eV has been invoked to explain the observed appearance signal Gonzalez-Garcia and Nir (2003). Other models feature a keV mass sterile neutrino as the dark matter Dodelson and Widrow (1994); Abazajian et al. (2001); Shi and Fuller (1999), sometimes with accompanying keV mass scalars Kusenko (2006), and weak scale neutrinos associated with SO(10) GUTs, where the addition of the neutrino is necessary for anomaly cancellation Raby (2002). Various low energy see-saws have also been considered, as in Chacko et al. (2004), where the scale is in the 50 MeV to 500 GeV range.

We have seen that there is a broad class of models which generate exotic neutrino-scalar interactions. These interactions, depending on the choices of the couplings and mass scales, may be good candidates for observation in the cosmic microwave background. First, additional scalars may become thermalized and increase the effective number of neutrino species. Second, these scalars mediate additional neutrino interactions, through scalar-mediated neutrino scattering, which could remove neutrino free-streaming at CMB temperatures. We now turn our attention to studying the impact of non-standard behavior in the dark radiation sector on CMB anisotropies.

III Tightly coupled neutrinos: modified evolution equations.

We summarize the relevant physical effects of dark radiation (i.e. neutrinos) on the CMB. The energy density in relativistic neutrinos is a sizable fraction of the total radiation (freely-streaming and coupled neutrinos plus photons). Thus during the radiation epoch the neutrino gravity is important. One needs to consider both the effects of the neutrino background density and those of the neutrino density perturbations. The background radiation density dictates the expansion rate of the universe and, together with the matter density, controls the redshift of matter-radiation equality, $z_{\rm eq}$. The effect of the perturbations is more subtle. Assuming adiabatic initial conditions, the neutrino and photon inhomogeneities are of comparable size to begin with, so the presence of neutrinos modifies the evolution of the photon perturbations. When a perturbation of a given size enters the horizon, the gravity of the neutrino perturbation is comparable to the gravity of the photon perturbation. The subsequent evolution of the two, however, is different. The photon-baryon plasma oscillates like a compressible fluid; the standard neutrinos, on the other hand, stream freely, quickly erasing their density fluctuations. Gravitational coupling between the two means that the evolution of fluctuations in the photons could be affected by neutrino free-streaming. In fact, Stewart Stewart (1972) noted back in 1972 that if this effect were large, it would jeopardize structure formation.
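To quantify how large this fraction is, one can use the standard relation between the neutrino and photon energy densities after electron-positron annihilation (in the usual instantaneous-decoupling approximation):

$\frac{\rho_\nu}{\rho_\gamma} \;=\; \frac{7}{8}\left(\frac{4}{11}\right)^{4/3} N_\nu \;\approx\; 0.227\, N_\nu, \qquad R_\nu \;\equiv\; \frac{\rho_\nu}{\rho_\gamma + \rho_\nu} \;\approx\; 0.41 \quad \text{for } N_\nu = 3,$

so roughly 40% of the radiation density during the relevant epoch is carried by the neutrinos.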

Shortly thereafter, Peebles Peebles (1973) numerically solved the problem of the coupled evolution of the neutrino and photon perturbations. He showed that the neutrino inhomogeneities in the standard case do indeed decay shortly after entering the horizon; in the process, they damp photon inhomogeneities, although at a significantly smaller level (12%) than anticipated by Stewart. Much more recently, the problem was addressed by more accurate numerical computations and analytically Hu and Sugiyama (1996); Bashinsky and Seljak (2004) and for the CMB power spectrum the amount of suppression (relative to the case where all the radiation is strongly coupled) was found to be

(6)

where the suppression is controlled by the ratio of the energy density in freely streaming radiation to the total radiation density (free-streaming plus strongly coupled, including photons).

Coupling the oscillating photons to the damped neutrinos through the gravitational potential also changes the phase of the photon oscillations: free-streaming neutrinos propagate at the speed of light, faster than the sound speed of the photon-baryon fluid, so information travels ahead of the acoustic wave compared to the hypothetical case of no free-streaming neutrinos. This effect was clearly established in Bashinsky and Seljak (2004). The resulting shift of the CMB peaks is

(7)

Remarkably, both the amplitude suppression and the phase shift are clearly present in Peebles’ solution. These effects are at the core of the physics behind the sensitivity of the CMB to neutrino free-streaming. The amount of damping and the phase shift change if either additional freely streaming relativistic (“neutrino-like”) species are added or the neutrinos become self-coupled. In the latter case, both effects are removed: the neutrino fluid oscillates similarly to the photon fluid (without baryon loading).

Notice that the effect is not uniform for all CMB multipoles. The above argument was made for modes entering the horizon in the radiation era. For modes entering the horizon in the matter era, the neutrino perturbations have no effect. Thus, the damping is operational only on small scales, i.e., for modes that enter the horizon before matter-radiation equality.

In our analysis, we consider interacting neutrinos in the tightly coupled limit. By this we mean that the neutrinos can be approximated by a fluid for the entire range of relevant scales, including the scales corresponding to the CMB multipoles probed by the experiments considered here and those measured by the large scale structure (LSS) surveys SDSS and 2dF. The neutrino analogue of the Silk damping scale is thus assumed to lie well below the smallest comoving scales of interest. The implications of this assumption are further discussed in Sect. VI.

With the above assumptions, the Boltzmann equations for the coupled neutrinos are very simple, as discussed in Hu et al. (1995): the standard multipole expansion for the neutrino perturbations (see Ma and Bertschinger (1995)) is truncated at the level of density and velocity perturbations. The quadrupole (shear) and higher order moments of the perturbations are set to zero. The analogues of Eqs. (49) or (50) in Ma and Bertschinger (1995) are:

  • Synchronous gauge

$\dot{\delta}_\nu = -\frac{4}{3}\theta_\nu - \frac{2}{3}\dot{h}, \qquad \dot{\theta}_\nu = k^2\left(\frac{1}{4}\delta_\nu - \sigma_\nu\right), \qquad \sigma_\nu = 0$   (8)
  • Conformal Newtonian gauge

$\dot{\delta}_\nu = -\frac{4}{3}\theta_\nu + 4\dot{\phi}, \qquad \dot{\theta}_\nu = k^2\left(\frac{1}{4}\delta_\nu - \sigma_\nu\right) + k^2\psi, \qquad \sigma_\nu = 0$   (9)

Here all the conventions are those of Ma and Bertschinger (1995). The quantity $\delta_\nu$ is the neutrino density perturbation; $\theta_\nu$ (with the “$\nu$” index assumed on the right hand side) is the neutrino velocity perturbation; $\sigma_\nu$ is the shear (see Eq. (22) of Ma and Bertschinger (1995)). The quantity $h$ is one of the two scalar perturbations in the synchronous gauge (the one corresponding to the trace of the scalar metric perturbation). $\phi$ and $\psi$ are the scalar metric perturbations in the Conformal Newtonian gauge. They coincide, up to a sign, with the gauge invariant Bardeen variables Bardeen (1980).
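As a rough illustration of how the truncated system behaves, the toy script below integrates the conformal Newtonian form, Eq. (9), for a single Fourier mode, with the metric potentials put in by hand using the standard analytic radiation-era solution and $\phi = \psi$ (no anisotropic stress). This is only a sketch under those assumptions: a Boltzmann code such as CAMB evolves the potentials self-consistently, and the wavenumber, time range, and normalization chosen here are arbitrary.

import numpy as np
from scipy.integrate import solve_ivp

# Toy version of Eq. (9): a tightly coupled neutrino fluid (sigma_nu = 0)
# for a single Fourier mode k in the conformal Newtonian gauge, deep in
# the radiation era.  The potentials phi = psi are put in by hand using
# the standard analytic radiation-era solution; a Boltzmann code computes
# them self-consistently.  k, the time range, and phi(0) = 1 are
# illustrative choices only.

k = 0.2  # mode that enters the horizon well before matter-radiation equality

def phi(tau):
    # radiation-era potential of an adiabatic mode, normalized to 1 as tau -> 0
    x = np.maximum(k * tau / np.sqrt(3.0), 1e-8)
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def dphi_dtau(tau, eps=1e-4):
    # simple numerical derivative of the external potential
    return (phi(tau + eps) - phi(tau - eps)) / (2.0 * eps)

def rhs(tau, y):
    delta, theta = y                # fluid density and velocity perturbations
    psi = phi(tau)                  # phi = psi when anisotropic stress vanishes
    ddelta = -4.0 / 3.0 * theta + 4.0 * dphi_dtau(tau)
    dtheta = k**2 * (delta / 4.0) + k**2 * psi  # sigma_nu = 0: no damping term
    return [ddelta, dtheta]

# adiabatic super-horizon initial conditions (Ma & Bertschinger conventions)
tau0 = 1.0e-3
y0 = [-2.0 * phi(tau0), 0.5 * k**2 * tau0 * phi(tau0)]

sol = solve_ivp(rhs, (tau0, 400.0), y0, rtol=1e-8, atol=1e-10, dense_output=True)
for t in np.linspace(50.0, 400.0, 8):
    print(f"tau = {t:6.1f}   delta = {sol.sol(t)[0]: .3f}")
# Unlike free-streaming neutrinos, whose density contrast decays after horizon
# entry, the coupled fluid keeps oscillating acoustically, like the photons.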

Figure 1: Effect of neutrino free-streaming on the CMB multipole spectrum. The thickest curve is the spectrum with the best fit parameters from WMAP3; the other curves (from bottom to top) correspond to 1, 2, and 3 strongly coupled neutrinos (keeping the total number of neutrinos fixed at 3).

The same method of truncating the multipole expansion is utilized in the earlier analyses. Our equations agree with those of Hannestad (2005); Bell et al. (2006) (which are stated in the synchronous gauge) and those of Cirelli and Strumia (2006) (which are stated in the Conformal Newtonian gauge). We note that the CAMB Lewis et al. (2000) code we use (as does CMBFast Seljak and Zaldarriaga (1996)) employs the synchronous gauge. A slightly more general parameterization, in terms of the “viscosity parameter” $c_{\rm vis}^2$ of Hu (1998), is followed in Trotta and Melchiorri (2005).

The effect of making the neutrinos coupled, for fixed cosmological parameters, is illustrated in Fig. 1. The thick curve refers to the standard case of three freely streaming neutrinos, while the other curves illustrate the effect of coupling 1, 2, and 3 neutrinos (in order of deviation from the thick curve). The changes in the temperature (TT), polarization (EE), and cross temperature-polarization (TE) spectra are shown. The Figure clearly exhibits both the amplitude suppression and the phase shift effects at high multipoles.

The difference in the anisotropies on small scales is quite large (on average about 25-30% for three coupled neutrinos), well outside the errors of the current WMAP data. However, this does not mean that the coupled neutrino scenario is already excluded. Indeed, it must be kept in mind that there are many cosmological (“nuisance”) parameters that can be adjusted, such as the baryon, dark matter and dark energy densities, the spectrum of primordial fluctuations, and others. By adjusting these parameters, it may be possible to undo most of the effect of the neutrino self-coupling.

Figure 2: Effect of extra neutrinos on the CMB multipole spectrum. The central black curve is the spectrum with the best fit parameters from WMAP3, the top green curve is the spectrum with 7 freely streaming neutrinos, and the lowermost magenta curve results when the total number of freely streaming neutrinos is 7, but the redshift of matter-radiation equality is kept fixed by varying the dark matter density.

This issue of “degeneracies” between different parameters is of course well known in cosmology. A simple illustration of it is given in Fig. 2, where we show how the CMB power spectra change as one changes the total number of freely streaming neutrinos. While simply changing the number of neutrinos to 7 produces a large shift in the position of the peaks (because of the faster expansion in the radiation era, as discussed above), the effect can be compensated by changing other parameters such that the redshift of equality is preserved Bowen et al. (2002). In the figure, the redshift of equality is kept fixed while varying the number of neutrinos by scaling the dark matter density to compensate for the increase in radiation, while the physical baryon density is also held fixed. Indeed, the physical quantities that are measured in the CMB are dimensionless (angles on the sky); hence they depend on ratios of the physical densities, etc. See, e.g., Bashinsky (2004) for further discussion.
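As a concrete numerical illustration of this degeneracy (using the standard instantaneous-decoupling relation between the radiation density and the number of neutrino species quoted above; the step from 3 to 7 species simply mirrors Fig. 2):

$1 + z_{\rm eq} = \frac{\omega_m}{\omega_\gamma\left(1 + 0.227\, N_\nu\right)}, \qquad \frac{\omega_m(N_\nu = 7)}{\omega_m(N_\nu = 3)} = \frac{1 + 0.227\times 7}{1 + 0.227\times 3} \approx 1.5,$

so keeping $z_{\rm eq}$ fixed while going from 3 to 7 freely streaming species requires raising the physical matter density by roughly 50%.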

Not all effects follow this simple argument. For example, the Silk damping does not, as it involves a physical dimensionful constant, the Thomson cross section. The faster expansion of the universe with more neutrinos implies more Silk damping in the high multipoles of the CMB. This can be partially compensated by adjusting the Helium fraction, as discussed in Bashinsky and Seljak (2004). It should be kept in mind that this mechanism is limited by a variety of astrophysical considerations. We will return to this topic in Sect. V.2.

The real challenge is to establish the size of the residual differences of the CMB predictions in the two scenarios, after appropriately adjusting the “nuisance” parameters, in comparison with the resolution of the experiments. These residual differences turn out to be much smaller than the differences seen in Fig. 1. This fact makes it difficult to write down a simple order-of-magnitude estimate of the sensitivity of present and future experiments and necessitates a detailed scan of the multidimensional parameter space.

In Sect. V.2 we show how well the effects of amplitude suppression (6) and phase shift (7) can be compensated by adjusting the cosmological parameters and how big the residual differences are. We will also see which CMB multipoles are essential for testing the neutrino sector and how robust our predictions for Planck are. A complete analysis of this type has not been done before.

We now present the results of our numerical studies.

IV Numerical results

IV.1 Sensitivity of the current data

IV.1.1 Literature overview

We begin by summarizing the bounds on the numbers of freely streaming and self-coupled relativistic species derived in the literature. In the case of extra relativistic species, a number of studies have been undertaken in the last several years. Crotty, Lesgourgues, and Pastor constrained the effective number of neutrinos to a limited interval (at a given confidence level, C.L.) by combining WMAP year one data Bennett et al. (2003) with the LSS data from 2dF Hawkins et al. (2003) and a prior on the Hubble constant. Similar results were found by Pierpaoli Pierpaoli (2003) and Hannestad Hannestad (2003). Bell, Pierpaoli, and Sigurdson Bell et al. (2006) have shown that WMAP year one data Bennett et al. (2003), Lyman-α forest data Viel et al. (2004), and Sloan Digital Sky Survey (SDSS) data Tegmark et al. (2004) taken together constrain the effective number of neutrinos. Seljak, Slosar and McDonald Seljak et al. (2006) have also computed this number with more recent data – including the WMAP three year data (WMAP3) Hinshaw et al. (2006); Page et al. (2006), Lyman-α forest data McDonald et al. (2005), and SDSS measurements of the baryon acoustic oscillations Eisenstein et al. (2005) (see Seljak et al. (2006) for a complete list) – and find a similar result at 95% C.L. The WMAP team reaches a comparable conclusion using the year-3 data combined with large-scale structure and supernovae Spergel et al. (2006), as do Cirelli and Strumia Cirelli and Strumia (2006). In contrast, Hannestad and Raffelt Hannestad and Raffelt (2006), constraining the neutrino mass and relativistic energy density together, determine the allowed range at 95% C.L. using WMAP3 and large-scale structure data from 2dF and SDSS.

In the case of self-interacting neutrinos, all fits are consistent with no interactions, but with (very) different confidence levels. Hannestad Hannestad (2005) concludes that in the model with three interacting neutrinos “it is impossible to simultaneously fit CMB and LSS data”. From his Fig. 8 we read off a 3σ exclusion using only WMAP1 and a 6σ exclusion when this data is combined with the SDSS LSS data and the HST Hubble constant measurement (in the limit of zero neutrino mass). In contrast, Trotta and Melchiorri Trotta and Melchiorri (2005) find that the interacting neutrino scenario is disfavored at only 2.4σ. Similarly, Bell, Pierpaoli, and Sigurdson find only a mild constraint at 95% C.L. Cirelli and Strumia constrain the number of additional interacting neutrinos with more recent data (including WMAP3).

In short, there is quite a bit of variation between the published results. Some of this variation could be attributed to the different data used in the calculations (for example, WMAP 1-year vs. WMAP 3-year data releases, whether the Lyman-α forest was included in the analysis or not, etc.), but some clearly must be due to the differences in the analyses themselves. We therefore consider it well-motivated to repeat the calculations, using the most recent data available to us. In Sect. IV.1.2 we consider the sensitivity of WMAP3 alone, while in Sects. IV.1.3 and IV.1.4 we add other cosmological data.

As far as forecasting for Planck is concerned, published results on the number of freely streaming species likewise differ. In addition, no analysis of the number of coupled species has been performed. Ref. Bowen et al. (2002) forecasts the precision (1σ error) with which Planck will be able to constrain the total number of freely streaming neutrino species using temperature information only and a given sky coverage. In contrast, Ref. Bashinsky and Seljak (2004) finds a noticeably different precision using Planck temperature information only, even with a more optimistic sky coverage. Both studies employ the same technique (the Fisher matrix analysis). Finally, Ref. Perotto et al. (2006) investigated the bounds using both the Fisher analysis and Markov Chain Monte Carlo (MCMC). The results are quite intriguing: the MCMC method yields very different results depending upon whether lensing is assumed – with lensing the result is consistent with the Fisher analysis, while without lensing it differs substantially. The forecast for Planck’s sensitivity to the number of self-coupled neutrinos, to the best of our knowledge, has not been made. We perform a combined analysis of Planck’s sensitivity to freely streaming and self-coupled neutrino species in Sect. IV.2.1. We also investigate, in Sect. IV.2.2, the potential impact of combining Planck with other cosmological data.

IV.1.2 WMAP3 alone

C.L. Age/GYr
13.75 0.24 11.3 72.8
0.2 13.66 0.23 10.2 73.8
0.4 13.43 0.22 10.1 76.5
1.4 13.06 0.18 11.0 83.0
0.6 15.82 0.23 10.0 64.1
0.6 12.40 0.26 12.2 79.3
Table 1: The change in the best-fit χ² (second column) and the corresponding shift of the best-fit parameters as a function of the number of coupled neutrinos. The last five columns list derived parameters. The initial spectrum is defined with a fixed pivot scale. The fit is to the WMAP3 dataset only.

As a first calculation, we investigate how sensitive WMAP is by itself to neutrino self-coupling. We do this by fitting WMAP3 Hinshaw et al. (2006); Page et al. (2006), varying a set of cosmological parameters (to be specified shortly) under four different assumptions: (i) three standard free-streaming neutrinos; (ii) one neutrino coupled, two free-streaming; (iii) two neutrinos coupled, one free-streaming; (iv) all three neutrinos coupled. We then compare the goodness of fit at the corresponding best-fit points. We also explore the sensitivity of WMAP to the total number of (standard) neutrinos, in a similar way. For this, we consider two more scenarios: one with the total neutrino number equal to five and another with a single neutrino flavor.

The fitting is done using the MCMC code COSMOMC Lewis and Bridle (2002), with the CAMB code Lewis et al. (2000) modified by us to include both freely streaming and tightly self-coupled neutrinos. The MCMC method is by now a standard tool in cosmology, used for both data analysis and forecasting. It is employed by the WMAP Spergel et al. (2003, 2006) and Planck Planck Collaboration () teams, as well as many other groups (in particular, among the papers reviewed in Sect. IV.1.1, Trotta and Melchiorri (2005); Bell et al. (2006); Seljak et al. (2006); Perotto et al. (2006) use MCMC). With the MCMC method, the likelihood function is mapped out in a multidimensional region of parameters around its maximum. As a result, one gets the location of the best-fit point, with the corresponding likelihood characterizing the goodness of fit, as well as the allowed ranges of the parameters. One need not make a priori assumptions about the functional form of the likelihood function, although choosing the parameterization in such a way that the posterior distributions are approximately Gaussian and there are no strong correlations saves computer time Lewis and Bridle (2002).
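For readers unfamiliar with the method, the toy script below sketches the basic Metropolis-Hastings step underlying MCMC sampling. It is not the COSMOMC implementation (which adds adaptive proposal covariances, convergence diagnostics, fast/slow parameter splitting, and more); the two-parameter Gaussian “likelihood” and the parameter values are purely illustrative placeholders.

import numpy as np

# Toy Metropolis-Hastings sampler illustrating the MCMC idea; the Gaussian
# "likelihood" below stands in for the full CMB likelihood.

rng = np.random.default_rng(0)

def log_like(p):
    # placeholder likelihood: a correlated Gaussian in two parameters
    mean = np.array([0.0223, 0.96])
    cov = np.array([[2.5e-7, 1.0e-5], [1.0e-5, 1.6e-3]])
    d = p - mean
    return -0.5 * d @ np.linalg.solve(cov, d)

def metropolis(n_steps, start, step):
    chain, logl = [start], log_like(start)
    for _ in range(n_steps):
        prop = chain[-1] + rng.normal(scale=step)   # symmetric proposal
        logl_prop = log_like(prop)
        # accept with probability min(1, L_prop / L_current)
        if np.log(rng.uniform()) < logl_prop - logl:
            chain.append(prop); logl = logl_prop
        else:
            chain.append(chain[-1].copy())
    return np.array(chain)

chain = metropolis(20000, np.array([0.022, 0.95]), np.array([5e-4, 0.04]))
burn = chain[2000:]                                  # discard burn-in
print("posterior mean:", burn.mean(axis=0))
print("posterior std :", burn.std(axis=0))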

In this Subsection, we choose the following six cosmological parameters: the physical densities of baryons, $\omega_b$, and dark matter, $\omega_{dm}$, the ratio of the approximate sound horizon to the angular diameter distance, $\theta$, the optical depth to the last scattering surface, $\tau$, and the primordial spectrum of the scalar curvature perturbations, characterized by the spectral index $n_s$ and the power $A_s$ at a preset (“pivot”) scale. We assume that the Universe is flat, there is no running of the spectral index, the Helium fraction is fixed, and the dark energy is a cosmological constant ($w = -1$). As we will see, in the case of WMAP3, the six parameters we vary contain the necessary degeneracies with the neutrino coupling.

The results are tabulated in Table 1. The first row corresponds to the standard scenario of three freely streaming neutrinos. The next three rows correspond to coupling one, two, or three of the neutrino species, while keeping the total number of neutrinos at three. Finally, in the last two rows, we vary the total number of neutrino species, assuming they are all freely streaming.

The second column shows the difference between the χ² of the best fit in a given scenario and the corresponding quantity in the standard scenario (first row). The third column shows the corresponding confidence level (C.L.). The next six columns in the Table show the values of the best-fit parameters for each scenario. The last five columns show the values of derived parameters: the dark energy fraction, the age of the Universe, the matter fraction, the redshift of reionization, and finally the Hubble constant. To compensate for the effect of neutrino coupling, the Hubble constant increases, while the matter fraction and the spectral index decrease. To compensate for additional neutrino species, all three quantities increase. In the latter case, it is easy to check that the best-fit parameters shift in such a way that the redshift of matter-radiation equality, $z_{\rm eq}$, is preserved, as discussed in Sect. III. The primordial power also increases with the number of neutrino species to compensate for the reduction of perturbations on all scales.

From the second column, we see that the effects of the neutrino coupling, as well as of the variation of the total number of neutrinos, are nearly perfectly compensated. The quality of the fits is virtually the same in each of the six cases. Therefore, WMAP by itself cannot distinguish between these scenarios. This conclusion is consistent with the findings of the analysis by the WMAP collaboration, which considers the sensitivity of the experiment to the number of freely streaming neutrino species Spergel et al. (2006). It differs from the findings of Hannestad (2005), where a 3σ exclusion of the three-coupled-neutrino scenario was claimed. Clearly, our conclusion relies on the ability of the code to find the set of parameters that gives the most complete compensation of the effects of modifying the neutrino sector. We see that the MCMC method accomplishes this task well.

IV.1.3 WMAP3 plus large scale structure (2dF, SDSS), HST, SN Ia

C.L. Age/GYr
0.244 0.018 13.70 0.26 9.9 71.4 0.791
4.0 0.236 0.062 13.47 0.27 10.3 71.8 0.793
8.8 0.242 0.008 13.36 0.28 9.8 72.0 0.770
15.6 0.242 -0.017 13.18 0.29 9.9 72.5 0.776
12.3 0.242 0.015 15.59 0.29 9.8 61.2 0.735
-0.3 0.241 0.006 12.37 0.26 12.4 79.4 0.850
Table 2: The same as Table 1, but including WMAP3, SDSS Main and LRG data samples, 2dF, HST, and SN Ia datasets. The last column is the derived best fit value of $\sigma_8$.
Figure 3: The shift of the best-fit parameters as a result of neutrino coupling. The dashed-dotted curves correspond to the scenario of three coupled neutrinos, while the solid ones refer to the standard (free streaming) neutrinos. In both cases, the fits are to the datasets from WMAP3, SDSS, 2dF, HST, and SN Ia.
Figure 4: The combined reach of all available data (excluding Lyman-α) on the number of free-streaming neutrinos and the number of coupled neutrinos: WMAP3 + SDSS (LRG) + SDSS (Main) + 2dF + HST + SNIa. The solid contours indicate the 1σ and 2σ C.L.

We now explore whether the addition of the other presently available cosmological data can lift the degeneracy of WMAP3. In this analysis, we include the data on the large scale structure (LSS) from the 2dF Cole et al. (2005) and the SDSS survey – both the Main Tegmark et al. (2004) and the recently released Large Red Galaxy (LRG) Tegmark et al. (2006) data samples. We also use the Type Ia supernova data from the SNLS collaboration Astier et al. (2006), as well as the Hubble constant measurement from the Hubble Space Telescope (HST) Freedman et al. (2001), in the form of a Gaussian prior.

We vary eight parameters, six as described in the previous Subsection, plus two more: the helium fraction and the running of the spectral index. These two parameters are added in anticipation of the analysis for Planck, where their roles are important. With the present day data, as we will see, they do not make much of a difference. For the helium fraction, we use a Gaussian prior motivated by observational bounds Olive and Skillman (2004). We also impose a rather generous hard prior that the helium fraction not exceed 0.3. Values greater than 0.3 would be problematic for the solar model (since the input value into the solar model cannot be less than the primordial value) Ciacio et al. (1997); Bahcall et al. (2001).

We begin by performing a combined fit to all the data. Results for several specific scenarios are shown in Table 2. We find that the degeneracies of the WMAP3-only analysis are broken by the additional data. In particular, relative to the standard case of three freely streaming neutrinos, the case with all three neutrinos coupled is disfavored at 3.5σ, while two and one coupled neutrinos are disfavored at 2.5σ and 1.5σ, respectively. Assuming no self-coupled neutrinos, the scenario with a single freely streaming neutrino is disfavored at 3.1σ, while the scenario with five freely streaming neutrinos gives a fit which is just as good as the standard one.

The corresponding best-fit parameters are tabulated in Table 2. One can see that the general trends are similar to what was observed with WMAP3 only: as more neutrinos are coupled, the fit prefers larger values of the Hubble constant and smaller values of the spectral index. The shifts are graphically illustrated in Fig. 3. The standard free streaming neutrinos are shown with solid curves, while the scenario of three self-coupled neutrinos is shown with the dashed-dotted curves.

From the point of view of particle physics, one may be interested to know how the number of allowed self-coupled neutrino species varies with the total number of neutrinos. As mentioned earlier, one can easily imagine models in which additional neutrino-like degrees of freedom are populated after the time of BBN but before the CMB epoch (see, e.g., Chacko et al. (2005)). In Fig. 4 we present the allowed region in the plane of the numbers of freely streaming and coupled neutrinos. The plot was obtained by running MCMC in the ten-dimensional space consisting of these two numbers plus the eight parameters above, assuming a ΛCDM universe, and marginalizing over all but the first two parameters. From the Figure, we see that scenarios with no freely streaming neutrinos are clearly disfavored, whether the number of self-coupled neutrinos is zero or three. At the same time, the constraint on the number of coupled species is relaxed if in addition to the coupled neutrinos there are also freely streaming ones. In fact, even with no coupled neutrinos the fit actually prefers the total number of freely streaming species to be greater than three. This curious result warrants further investigation.

Figure 5: Sensitivity of WMAP3 + SDSS (LRG) + 2dF + HST + SNIa. The removal of the SDSS Main sample lowers the best fit point for the number of freely streaming neutrinos and significantly weakens the constraints on the scenario with all three neutrinos coupled and none freely streaming.

It turns out that the piece of data responsible for favoring a large number of freely streaming neutrinos is the Main data sample from SDSS. With it removed (i.e., with the matter power spectrum given by the SDSS LRG and 2dF datasets), the fit changes dramatically, as shown in Fig. 5. The new fit has a slight preference for fewer than three freely streaming neutrinos: the best fit lies at about 2.5 freely streaming species.

Conversely, if we remove the LRG data from the fit, the best fit point moves all the way up to about 5.2 freely streaming neutrinos, as can be seen in Fig. 6. Marginalizing over all other parameters, including the number of coupled species, likewise yields a high preferred value. Thus, it may be too early to conclude that the SDSS LRG and Main samples are consistent with each other. At least as implemented in COSMOMC, they pull the best-fit number of freely streaming neutrinos in different directions, with the LRG sample favoring the standard value.

Our results show that if one chooses to rely on the LRG sample, one finds much less sensitivity to neutrino self-coupling: the scenario with all three neutrinos coupled and none freely streaming lies inside the 2σ contour, while less extreme combinations lie within 1σ. Thus, at present the bounds on coupled neutrinos from the global fit should perhaps be taken with caution.

Figure 6: Sensitivity of WMAP3 + LSS from SDSS and 2dF + HST + SNIa. Compared to the previous figure, the LRG data have been replaced with the SDSS Main sample. The SDSS Main sample prefers a larger number of freely streaming neutrinos than the LRG data sample.

IV.1.4 Adding the Lyman-α data

We also repeat the analysis including the Lyman-α dataset in the fit. This dataset has been somewhat controversial. Our main reason for doing this calculation is to get an idea of the additional sensitivity that can be gained from this dataset, and also to further check consistency with the published analyses where this data is used.

We find that the overall sensitivity to coupled neutrinos is somewhat increased with the addition of this piece of data. Relative to the standard case of three free-streaming neutrinos, the case with a single coupled neutrino is disfavored at 1.8σ, the case with two coupled neutrinos is disfavored at 3.1σ and, finally, the case with three coupled neutrinos is disfavored at 4.1σ. In other words, the addition of the Lyman-α data does bring further improvements in sensitivity, though not very large ones. The Lyman-α dataset does favor large values of the number of freely streaming neutrinos, just like the SDSS Main sample, as also observed in Bell et al. (2006), Seljak et al. (2006). The comments at the end of the last Subsection apply here as well.

IV.2 Sensitivity of Planck

IV.2.1 Planck only

As already mentioned, the situation will improve markedly with the expected data from the Planck satellite. In this subsection, we describe the result of our MCMC analysis of Planck’s sensitivity.

We generate mock data for Planck in the all_l_exact data format of COSMOMC, using the best-fit point of WMAP3 as the “true” (seed) value and assuming a flat Universe, a cosmological constant for the dark energy, and the standard scenario of three freely streaming and no coupled neutrinos. The characteristics of the Planck detector (the beam size and the noise levels for temperature and polarization measurements) are given in Planck Collaboration (). The relevant measurements will be done in three frequency channels of the High Frequency Instrument (HFI), 100, 143, and 217 GHz. In the literature Bowen et al. (2002); Bashinsky and Seljak (2004), the analyses have been done assuming an effective single channel for Planck. Ref. Bashinsky and Seljak (2004) in particular specifies this effective channel by a sky coverage, detector noise levels for temperature and polarization, and a beam size (see Sect. V.1 for the definitions of these quantities). We have checked that these effective numbers are in reasonably good agreement with the three-channel parameters of Planck Collaboration (). The only substantial difference is that we believe it is more appropriate to take a smaller sky coverage.
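For reference, the standard way to fold several frequency channels into a single effective channel (assuming white, uniform noise in each channel) is to add the inverse noise weights,

$w_X^{\rm eff} = \sum_c \left(\theta_{\rm fwhm}^{(c)}\, \sigma_X^{(c)}\right)^{-2}, \qquad X = T, P;$

strictly speaking the channels should be combined multipole by multipole, each with its own beam window, so the single effective channel used here and in the cited forecasts is an approximation.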

Our analysis here parallels that of the last Subsection. We first fit the mock data in the several specific scenarios considered earlier, namely, varying the number of self-coupled neutrinos while keeping the total neutrino number at three, and then varying the number of standard neutrinos assuming no self-coupled neutrinos. We compare the quality of the best fits in each case and observe how the cosmological parameters change to compensate for the effects of neutrino coupling. For our second analysis, we perform a global fit in the full ten-dimensional parameter space. We then marginalize over the eight standard parameters to obtain the allowed region in the plane of the numbers of freely streaming and coupled neutrinos.

C.L. Age/GYr
0.243 0.001 13.68 0.24 11.3 72.9
21.1 0.262 0.016 13.49 0.22 11.3 75.5
85.5 0.277 0.030 13.31 0.21 11.1 78.3
197.5 0.281 0.039 13.11 0.19 11.0 81.5
91.7 0.298 -0.017 16.06 0.30 11.2 58.9
32.9 0.162 0.006 12.16 0.22 11.4 84.2
Table 3: The same as in Table 1, but now fitting to the mock Planck dataset.
Figure 7: The improvement in sensitivity of Planck (solid) over the current data (dashed), assuming the standard scenario of three freely streaming neutrinos. The dotted curve illustrates how the parameters measured by Planck would shift assuming a single neutrino species is coupled. The current dataset is comprised of the data from WMAP3, SDSS Main and LRG, 2dF, HST, and SN Ia, plus the observational prior on the helium fraction (see text for details).

The results of the first set of calculations are tabulated in Table 3. The trends in the parameter shifts are similar to the case of WMAP3 – the spectral index and the matter fraction decrease while the physical baryon density increases as more neutrinos are coupled – but with Planck the shifts are much more constrained. Moreover, the degeneracies are very efficiently broken and the quality of the best fits is significantly poorer in the coupled cases relative to the standard case. The scenario of a single coupled neutrino (with two freely streaming) is disfavored at 4.2σ relative to the standard one. The scenario with two coupled neutrinos (one freely streaming) is disfavored at 5.4σ, while the other scenarios in the Table are ruled out at greater than 8σ. Clearly, Planck’s sensitivity will be dramatically better than that of the current data.

The situation is graphically illustrated in Fig. 7, where the solid curves show the expected measurements at Planck and the dashed curves are those obtained with the combined current data (see Sect. IV.1.3) – both under the assumption of three standard freely streaming neutrinos. The dotted curves show how the best-fit parameters shift if one instead fits the data under the assumption of a single coupled neutrino (and two freely streaming). Clearly, Planck by itself will have errors that are significantly smaller than those of today’s experiments combined.

Figure 8: Expected sensitivity of Planck alone to the numbers of freely streaming and coupled neutrinos.

The results of the second calculation show that this level of sensitivity persists for any direction in the plane of the two neutrino numbers. The sensitivity contours in this plane are depicted in Fig. 8. We wish to stress two main results of this analysis: (i) the sensitivities of Planck to the numbers of freely streaming and coupled neutrinos are similar, and (ii) in no direction in the plane is there a degeneracy between the two parameters. We will return again to the last point in Section V.1.

Our prediction for Planck’s sensitivity to the number of freely streaming neutrinos is asymmetric about the best-fit value. The lower error roughly agrees with that of Bashinsky and Seljak (2004), once the higher sky coverage assumed there is taken into account, and with what Perotto et al. (2006) obtained with the Fisher matrix method. It is also in reasonable agreement with what Perotto et al. (2006) find with the MCMC method assuming no lensing of the CMB. We have investigated the effect of lensing on the sensitivity and do not find the large effect of Perotto et al. (2006). Instead we find that the bounds remain similar, slightly weaker for one of the two bounds and slightly stronger for the other. Notice that Bashinsky and Seljak (2004), considering Planck’s sensitivity to the number of freely streaming species with the Fisher analysis, also find the effect of lensing to be small.

IV.2.2 Planck plus other cosmological data

Figure 9: Sensitivity of Planck plus other cosmological data. The constraints on the number of freely streaming neutrinos are not much changed as compared with Planck alone, but the figure shows a potential factor-of-two improvement in the constraint on the number of coupled neutrinos.
Figure 10: Sensitivity of Planck plus other cosmological data (dashed-dotted) vs. Planck only (solid). 1-d marginalized probabilities.

Lastly, we briefly consider what might be gained by combining Planck with the other cosmological data. For that, we run a combined fit to Planck, LRG and Main samples of SDSS, 2dF, the Type Ia supernova data and the Hubble constant measurements from the HST.

The result of the fit is shown in Fig. 9. The sensitivity to the number of freely streaming neutrinos is unaffected by the addition of the other data, while the sensitivity to coupled neutrinos improves by a factor of two. This can also be seen from the one-dimensional marginalized plots in Fig. 10.

One should, of course, not overinterpret this result. It should be kept in mind that the mock data for Planck was generated from the best-fit point of WMAP3 and we did not explore the dependence of the fit on this choice. Also, by the time Planck’s data is available, the other experiments could be updated. Nevertheless, the general lesson is that while most of the precision will come from Planck, the addition of the other data may lead to further tightening of the bounds.

V Discussion of the results and further numerical investigations.

The results of the previous Section indicate that Planck should constrain the numbers of coupled (and freely streaming) neutrino species much more accurately than what is possible today. In this Section, we make this statement more robust by investigating the following points:

  • The sensitivity estimate is an indication that not all of the effects of neutrino coupling can be compensated and that the residuals should be within the reach of Planck’s instruments. We recall that in the last Section we varied ten parameters: the physical baryon and dark matter densities, the acoustic angular scale, the optical depth, the spectral index and its running, the primordial amplitude, the helium fraction, and the numbers of freely streaming and coupled neutrinos. We have omitted other cosmological parameters, such as the curvature or the dark energy equation of state. If varying these additional parameters introduced additional ways of compensating the effects of neutrino coupling, the bounds of the previous Section could be weakened. We need to show that this is not the case.

  • We also made assumptions about the performance of Planck’s detector. We need to check the robustness of our results with respect to changing the characteristics of the detectors, such as the angular resolution and the detector noise levels. Put another way, we need to establish which multipoles are crucial for this measurement.

We will investigate these points using the Fisher matrix technique.

In addition to these practical issues, we will also investigate the issue of principle: what is the nature of Planck’s sensitivity to neutrino coupling? We do that by examining which effects of coupling cannot be compensated by adjusting the cosmological parameters.

V.1 The role of other cosmological parameters

As mentioned before, we will investigate the role of other cosmological parameters using the Fisher matrix technique. The idea is very simple: suppose that the likelihood function around the best-fit point is approximately Gaussian. In this case, one can fix the parameters of the Gaussian (in the case of an $N$-dimensional parameter space, an $N \times N$ symmetric matrix) by evaluating the likelihood function at $\mathcal{O}(N^2)$ points, rather than at the much larger number of points needed to map out the parameter space with MCMC.

The main reason for using this approximate method is speed Bowen et al. (2002). Obviously, the Fisher matrix method has its limitations, as for example, was recently discussed in Perotto et al. (2006). Still we believe that our usage of the Fisher matrix approximation – not to obtain the exact bounds but to investigate qualitative issues outlined above – is appropriate.

By expanding the log of the likelihood function to the second order in cosmological parameters around the maximum, one finds the standard expression:

$F_{ij} \;=\; \sum_{l}\sum_{X,Y} \frac{\partial C_l^{X}}{\partial p_i}\,\left(\Xi_l^{-1}\right)_{XY}\,\frac{\partial C_l^{Y}}{\partial p_j}, \qquad X, Y \in \{TT, EE, TE\}.$   (10)

Here $\Xi_l^{-1}$ is the inverse of the covariance matrix (to be defined shortly), and the $C_l^{X}$ are the power spectra in the temperature and polarization channels. We restrict ourselves to the temperature ($TT$), polarization ($EE$), and cross temperature-polarization ($TE$) power spectra. They are the only ones relevant for scalar perturbations.

The elements of the covariance matrix Seljak (1996); Zaldarriaga and Seljak (1997); Eisenstein et al. (1999), which give the errors of the corresponding measurements, are:

$\Xi_{TT,TT} = \frac{2}{(2l+1)f_{\rm sky}}\left(C_l^{TT} + w_T^{-1}B_l^{-2}\right)^2$   (11)
$\Xi_{EE,EE} = \frac{2}{(2l+1)f_{\rm sky}}\left(C_l^{EE} + w_P^{-1}B_l^{-2}\right)^2$   (12)
$\Xi_{TE,TE} = \frac{1}{(2l+1)f_{\rm sky}}\left[\left(C_l^{TE}\right)^2 + \left(C_l^{TT} + w_T^{-1}B_l^{-2}\right)\left(C_l^{EE} + w_P^{-1}B_l^{-2}\right)\right]$   (13)
$\Xi_{TT,EE} = \frac{2}{(2l+1)f_{\rm sky}}\left(C_l^{TE}\right)^2$   (14)
$\Xi_{TT,TE} = \frac{2}{(2l+1)f_{\rm sky}}\,C_l^{TE}\left(C_l^{TT} + w_T^{-1}B_l^{-2}\right)$   (15)
$\Xi_{EE,TE} = \frac{2}{(2l+1)f_{\rm sky}}\,C_l^{TE}\left(C_l^{EE} + w_P^{-1}B_l^{-2}\right)$   (16)

In these equations, $f_{\rm sky}$ is the sky coverage, $w_T^{-1} = (\theta_{\rm fwhm}\,\sigma_T)^2$ and $w_P^{-1} = (\theta_{\rm fwhm}\,\sigma_P)^2$ specify the detector noise for temperature and polarization respectively, and $B_l^2 = \exp[-l(l+1)\theta_{\rm fwhm}^2/(8\ln 2)]$ is the beam smearing window function. Here $\theta_{\rm fwhm}$ is the full-width, half-maximum of the beam in radians, and $\sigma_T$ and $\sigma_P$ are the quantities characterizing the detector noise level for temperature and polarization, respectively. For Planck, we take the effective single-channel values discussed in Sect. IV.2.1.

More precisely, the Fisher information matrix is defined through derivatives of the likelihood function $\mathcal{L}$ for data $\mathbf{x}$ and model parameters $p_i$ as

$F_{ij} \;=\; -\left\langle \frac{\partial^2 \ln \mathcal{L}(\mathbf{x};\,\mathbf{p})}{\partial p_i\,\partial p_j} \right\rangle.$   (17)

The right-hand side of Eq. (17) is averaged over the data $\mathbf{x}$, weighted by the probability of their observation in the fiducial model. Given this definition, the Cramér-Rao inequality states that the r.m.s. error of the best unbiased estimator of a parameter $p_i$ cannot be less than $\sqrt{(F^{-1})_{ii}}$, as discussed, e.g., in Eisenstein et al. (1999).

Of course, performing the average in general requires mapping out the likelihood function over the parameter space, and we are back to the problem of computing the likelihood at a prohibitively large number of points. We do not perform such a mapping here. Instead, we simply assume the likelihood is close to Gaussian, compute the matrix in Eq. (10), and use $\sqrt{(F^{-1})_{ii}}$ as an estimate of the 1$\sigma$ error on the parameter $\theta_i$.

We consider a thirteen-dimensional set of cosmological parameters. The first 10 parameters span the parameter space of our MCMC analysis of the last Section, while the last three – the neutrino mass, the curvature, and the dark energy equation of state – are new additions. We calculate the derivatives in Eq. (10) by symmetric finite differences about the best-fit cosmological parameters from the WMAP year-three data. We compute the resulting 1$\sigma$ marginalized errors with all 13 parameters varied and then drop the last three parameters in turn. We also consider the effect of fixing the helium abundance $Y_{\rm He}$.
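As an illustration of this procedure, here is a hedged Python sketch that assembles the Fisher matrix from symmetric finite differences of the spectra and extracts marginalized errors. The helper get_spectra stands in for a call to a Boltzmann code and, like the other names, is our own placeholder rather than the actual pipeline used here.

import numpy as np

def fisher_matrix(theta0, get_spectra, cov_inv, steps, ell_max):
    """Fisher matrix F_ij = sum_ell dC_ell/dtheta_i . Cov_ell^{-1} . dC_ell/dtheta_j (Eq. 10).

    theta0      : fiducial parameter vector (the WMAP3 best fit in the text)
    get_spectra : callable returning an (n_ell, 3) array of (C_TT, C_EE, C_TE)
    cov_inv     : (n_ell, 3, 3) array of inverse covariance blocks, Eqs. (11)-(16)
    steps       : finite-difference step sizes, one per parameter
    """
    n_par = len(theta0)
    derivs = []
    for i in range(n_par):
        dp = np.zeros(n_par)
        dp[i] = steps[i]
        # symmetric finite difference about the fiducial point
        derivs.append((get_spectra(theta0 + dp) - get_spectra(theta0 - dp)) / (2.0 * steps[i]))
    F = np.zeros((n_par, n_par))
    for i in range(n_par):
        for j in range(i, n_par):
            F[i, j] = np.einsum('lx,lxy,ly->',
                                derivs[i][:ell_max], cov_inv[:ell_max], derivs[j][:ell_max])
            F[j, i] = F[i, j]
    return F

def marginalized_error(F, i):
    """1-sigma marginalized error on parameter i: sqrt of the (i,i) element of F^{-1}."""
    return np.sqrt(np.linalg.inv(F)[i, i])

Fixing a parameter, as done column by column in Table 4, simply amounts to deleting the corresponding row and column of F before inverting.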

                 Vary all       Fix             Fix mass,      Fix mass,        Fix mass, curvature,
                 13 params.     neutrino mass   curvature      curvature, w     w, helium
Error on         0.31           0.31            0.31           0.29             0.28
Error on         0.38           0.38            0.38           0.35             0.32
Table 4: The Fisher matrix estimate of the 1$\sigma$ errors. The effects of fixing the neutrino mass, curvature, dark energy equation of state, and helium fraction are considered.

The results are shown in Table 4. We see that keeping the curvature and the neutrino mass fixed to zero is completely justified. Moreover, the effect of varying the dark energy equation of state is also quite small, as is the effect of varying the helium abundance (in both cases the shift in the errors is below the 10% level, cf. Table 4). Thus, practically speaking, we are justified in our choice of the 10 cosmological parameters used in our MCMC calculations, as the other three parameters do not change our qualitative conclusions about Planck’s sensitivity to the number of coupled neutrinos. In addition, though the Fisher analysis shows that the effect of $Y_{\rm He}$ on these errors is fairly minimal, we include $Y_{\rm He}$ as a parameter in our MCMC with priors consistent with astrophysical observations, as described in Sect. IV.1.3.

V.2 The compensation mechanism and the nature of Planck’s sensitivity

We now wish to establish the nature of Planck’s sensitivity to neutrino self-coupling. By this we mean showing which effects of self-coupling cannot be entirely compensated by adjusting the cosmological parameters and how the residual differences compare to the corresponding errors. The latter, as we saw in the previous subsection, are set by a combination of the cosmic variance and the resolution/sensitivity of the apparatus. Equipped with our numerical results, we are now able to explicitly see the compensation mechanism in action.

For this, we turn to Table 3. The best-fit parameters listed in the Table are precisely those which compensate the effects of neutrino coupling most efficiently within each scenario. Indeed, we recall that while the mock data is always generated under the assumptions of three freely streaming neutrinos, for each of the scenarios listed in the Table, the MCMC code attempts to find the best fit possible within the framework of that particular scenario. The code varies the values of the cosmological parameters until the closest agreement is found.

Figure 11: The relative residual differences in temperature and polarization power spectra for 1, 2, and 3 coupled neutrinos (in order of deviation from zero) that remain after the cosmological parameters are adjusted in each of the scenarios (as shown in Table 3).

The compensation turns out to be very efficient. One way to illustrate it is to plot the relative differences between the temperature and polarization power spectra in the scenarios of coupled vs. freely streaming neutrinos that remain after the cosmological parameters are adjusted. This is done in Fig. 11, where we show the situations for 1, 2, and 3 coupled neutrinos. We can see that, unlike the differences shown in Fig. 1, which reached tens of percent, these residual differences are only at the level of 1-3%.

We further observe that below a certain multipole number the residuals have no clear structure, while above it they develop a pattern of periodic oscillations, a consequence of the phase shift, Eq. (7).

In terms of the sensitivity of Planck, the physically relevant quantities are not the relative differences in the $C_\ell$’s just shown, but the ratios of these differences to the corresponding errors. The errors are given by the elements of the covariance matrix, Eqs. (11-16). We can then consider at each $\ell$ the quantities

R_\ell^X \equiv \frac{\Delta C_\ell^X}{\sqrt{\mathrm{Cov}(C_\ell^X, C_\ell^X)}}    (18)

where $X$ runs over $TT$, $EE$, and $TE$, and the $\Delta C_\ell^X$ are again the differences between the power spectra of the reference model (all three neutrinos freely streaming) and the best-fit power spectra in a scenario with self-coupled neutrinos.
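A short sketch of how the diagnostics of Eqs. (18) and (19) can be computed from the reference and best-fit spectra; the array names and the (TT, EE, TE) channel ordering are our own assumptions.

import numpy as np

def residual_diagnostics(cl_ref, cl_fit, cov):
    """Per-multipole residual-to-error ratios (Eq. 18) and chi^2 contributions (Eq. 19).

    cl_ref, cl_fit : (n_ell, 3) arrays of (TT, EE, TE) spectra for the reference
                     (freely streaming) model and a best-fit coupled scenario
    cov            : (n_ell, 3, 3) array of covariance blocks, Eqs. (11)-(16)
    """
    delta = cl_fit - cl_ref
    # ratio of the residual in each channel to its error (Eq. 18)
    sigma = np.sqrt(np.einsum('lxx->lx', cov))
    ratios = delta / sigma
    # contribution of each multipole to the total chi^2 (Eq. 19)
    chi2_per_ell = np.einsum('lx,lxy,ly->l', delta, np.linalg.inv(cov), delta)
    return ratios, chi2_per_ell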

Figure 12: The quantities $R_\ell^X$ and $\Delta\chi^2_\ell$ defined in Eqs. (18) and (19). They illustrate the ratios of the residual discrepancies at each $\ell$ to the corresponding errors, as explained in the text. The residual discrepancies at low multipoles play a key role.

The quantities $R_\ell^X$ are plotted in Fig. 12 (the top three panels). In the last panel of the figure, we plot the quantity

\Delta\chi^2_\ell = \sum_{X,Y} \Delta C_\ell^X \left[\mathrm{Cov}_\ell^{-1}\right]_{XY} \Delta C_\ell^Y    (19)

which is nothing but the contribution of a given multipole to the total $\chi^2$. These figures are very instructive, for they show that the statistically significant deviations occur at relatively low multipoles and have no clear structure. The higher multipoles, which exhibit clear effects of the phase shift in Fig. 11, turn out to be of relatively minor importance.

This is further seen when we consider the effects of the expected detector resolution and noise on the errors. In Figs. 13 and 14, we show the change in sensitivity as a function of the detector characteristics, obtained from a Fisher analysis. The rate at which increased resolution and lower noise improve the sensitivity is rather modest: decreasing the noise by an order of magnitude, for example, improves the sensitivity by less than a factor of two. In addition, we consider the sensitivity of a hypothetical experiment which measures all multipoles up to a maximum multipole $\ell_{\rm max}$ to within cosmic variance; we see in Fig. 15 that sensitivity to the lower multipoles is already sufficient, implying that Planck’s high sensitivity to neutrino free-streaming does not depend on the high multipoles. It also implies that our qualitative predictions for Planck are robust, as the sensitivity does not depend strongly on the high-$\ell$ properties of the detector.
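The $\ell_{\rm max}$ study behind Fig. 15 can be mimicked within the same sketch by truncating the Fisher sum; the snippet below reuses the hypothetical fisher_matrix and marginalized_error helpers introduced above.

import numpy as np

def error_vs_lmax(theta0, get_spectra, cov_inv, steps, i_param, lmax_grid):
    """Marginalized error on parameter i_param as a function of the maximum multipole kept."""
    errors = []
    for lmax in lmax_grid:
        F = fisher_matrix(theta0, get_spectra, cov_inv, steps, ell_max=lmax)
        errors.append(marginalized_error(F, i_param))
    return np.array(errors)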

Figure 13: The sensitivity of Planck (calculated with a Fisher matrix) as a function of the detector noise. A noise factor of one corresponds to the sensitivity chosen in the main analysis.
Figure 14: The sensitivity of Planck (calculated with a Fisher matrix) as a function of the detector resolution. The resolution chosen in the main analysis serves as the reference point.
Figure 15: The sensitivity of a hypothetical CMB experiment as a function of the maximum multipole, $\ell_{\rm max}$, measured to within cosmic variance. Multipoles with $\ell > \ell_{\rm max}$ are discarded.

One characteristic of Planck that proves crucial is the simultaneous measurement of both temperature and polarization. This makes it possible for Planck to constrain the number of coupled neutrinos down to 0.5 at the 1$\sigma$ level even if $\ell_{\rm max}$ is only 800. In comparison, we saw that WMAP3 places virtually no constraint on the number of self-coupled neutrinos. The difference lies in the relatively poor quality of WMAP’s polarization measurements, as well as of its temperature measurements at higher multipoles, where Planck is cosmic variance limited.
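Within the Fisher sketch above, the role of polarization can be isolated by keeping only the temperature block of the covariance; the snippet below is again a hypothetical illustration, assuming the (TT, EE, TE) ordering used earlier.

import numpy as np

def fisher_tt_only(theta0, get_spectra, cov, steps, ell_max):
    """Fisher matrix from the temperature power spectrum alone, for comparison
    with the full TT+EE+TE forecast."""
    cov_tt_inv = 1.0 / cov[:, :1, :1]                 # 1x1 TT block, inverted
    get_tt = lambda theta: get_spectra(theta)[:, :1]  # keep only C_TT
    return fisher_matrix(theta0, get_tt, cov_tt_inv, steps, ell_max)

Comparing marginalized_error for the two cases shows how much of the constraint is driven by the simultaneous TE and EE measurements.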

VI Constraints on neutrino interactions

Since we have shown that even a single interacting neutrino or a single extra neutrino will be observed or excluded to high precision with Planck, we now consider the implications for models of neutrino-scalar interactions, including neutrino dark energy. Before turning to specific models, we first review the general constraints on Majorana neutrino-scalar couplings that will become possible with Planck data.

The loss of neutrino free-streaming and production of additional relativistic degrees of freedom as a result of neutrino-scalar coupling have been considered in Ref. Chacko et al. (2004). We summarize and reformulate the constraints here; a more complete treatment is left to the appendix.

For a Majorana mass term, the relevant interactions are neutrino-scalar scattering, neutrino annihilation into scalar pairs, and, when kinematically allowed, the decay and inverse decay of the scalar. The last of these often places the strongest constraint on the coupling, and its rate is

(20)

for . This rate can also be derived from resonant s-channel scattering; see the appendix for details. The factor accounts for the boost from the rest frame of the scalar. The rate increases as the temperature decreases, and will cause the scalar to decay to neutrinos, increasing the effective number of relativistic species, once the temperature drops below the scalar mass, provided the coupling is large enough.

If, on the other hand, the scalar remains relativistic through recombination, this same process may tightly couple the neutrinos to the scalar, removing neutrino free-streaming. The process can occur only in a small region of phase space, which depends on the angle between the incoming neutrinos. In order to isotropize the neutrino momenta, therefore, this process must occur many times, and we require the corresponding rate to exceed the expansion rate by the appropriate factor Chacko et al. (2004). The equilibration occurs before recombination if the coupling exceeds a threshold that scales with the scalar mass.

The exception is if scattering of the scalars through a quartic self-interaction is fast enough to isotropize the scalar momenta in between decays and inverse decays. In that case the constraint is modified: if the scalar self-coupling is sufficiently large and the neutrino-scalar coupling satisfies a correspondingly weaker bound, the neutrinos will be strongly coupled to the scalars. The equilibration then occurs before recombination if

(21)

and

(22)

Neutrino annihilation to scalar pairs offers complementary constraints to the decay-inverse decay processes. If both the neutrinos and the scalars are relativistic at decoupling, the rate for this process is given by

(23)

and is competitive with the decay-inverse decay rate, so that for a sufficiently large coupling the neutrinos and scalars will be tightly coupled. If the scalar is heavy compared to the relevant temperature, however, the constraints on the coupling are relaxed, due to propagator suppression from the scalar.

We have assumed everywhere that the scalar is heavier than the neutrinos. In the opposite case, with a sufficiently strong coupling, the neutrinos will annihilate into the scalars, producing a “neutrinoless” universe Beacom et al. (2004); for this process, the effective number of relativistic neutrino species is an evolving function of redshift as the neutrinos annihilate. We do not consider this scenario further in this paper.

We briefly discuss the constraints from BBN. The precise upper bound on the effective number of neutrinos is somewhat controversial and depends on the particular set of data constraints utilized, but it allows roughly one extra effective species Cyburt et al. (2005). Any additional scalar degree of freedom brought into equilibrium before BBN will contribute $\Delta N_{\rm eff} = 4/7$. One scalar alone brought into thermal equilibrium before BBN, then, will not upset the current bounds. If the theory contains multiple scalars, however, there will already be constraints from BBN on the couplings, coming from demanding that these processes not recouple until after the BBN epoch.
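As a quick check of the statement that a single scalar is harmless, the standard counting of relativistic degrees of freedom (not taken from the appendix of this work) gives, for one real scalar sharing the neutrino temperature,

\Delta N_{\rm eff} \;=\; \frac{g_\phi}{\tfrac{7}{8}\, g_\nu} \;=\; \frac{1}{\tfrac{7}{8}\times 2} \;=\; \frac{4}{7} \;\simeq\; 0.57\,,

which is indeed below an order-unity BBN bound, while two or more such scalars would begin to strain it.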

If these processes are brought into equilibrium after the neutrinos decouple from the heat bath, they may still affect the value of the effective number of neutrinos measured by the CMB Chacko et al. (2004, 2005). For example, it may be possible that new particle states (e.g., scalars, sterile neutrinos) are populated by recoupling after BBN and then annihilate when the temperature drops below their masses (but still before CMB decoupling). In this scenario, while the annihilation step occurs at constant entropy, the recoupling does not. In the approximation that the latter occurs at constant energy, the value measured by the CMB is given by Eq. (8) of Chacko et al. (2005). In general, for a given model, an accurate analysis of the recoupling may be required.

We summarize the future constraints from Planck given no deviation from standard relativistic energy density and neutrino free-streaming in Table 5.

process                      constraint                   Effect on CMB
…                            …                            Remove free-streaming
…                            …                            Increase $N_{\rm eff}$
…                            …                            Remove free-streaming
…                            …                            Remove free-streaming
…                            …                            Increase $N_{\rm eff}$
Table 5: Constraints on the effective coupling of a Majorana neutrino with a scalar, if the Planck satellite observes no extra, non-standard relativistic degrees of freedom or strong neutrino-scalar interactions.

VII Some implications for models

VII.1 Models of Neutrino Dark Energy

Models of neutrino dark energy are typically characterized by a coupling between a singlet neutrino and a light scalar field of the form given in Eq. (2). An important difference between the standard see-saw and the MaVaN scenario is that the singlet neutrino mass is a continuously evolving parameter, since the vev of the singlet scalar varies as the universe cools.

The evolution results from finite-temperature effects in the scalar potential; the background neutrino energy density acts as a source for the scalar field, and may displace its vev from the value dictated by the zero-temperature potential:

(24)

where the light neutrino mass corresponds to the mass eigenstate resulting when the sterile neutrino is integrated out:

(25)

This formalism is self-consistent provided that the sterile neutrino remains thermally unpopulated (and hence effectively integrated out) in the early universe; it was shown in Weiner and Zurek (2006) how this can be achieved with Planck-suppressed operators between the scalar and electrons. In addition, in a theory with the full flavor structure of three generations, the coupled state must correspond to the lightest mass eigenstate, which is still relativistic in the universe today, so that the neutrino dark energy does not clump and become unstable Afshordi et al. (2005).

As was shown in Sect. II, the light mass eigenstate couples to the scalar through the Majorana interaction, with a coupling set by the scalar vev. Minimizing Eq. (24),

(26)

Hence, we have a temperature-dependent coupling.

How can MaVaNs be detected with Planck? Through the processes discussed above, Planck will be sensitive to couplings $g$ above a certain size; these threshold couplings may be small enough to allow a detection of, or interesting constraints on, MaVaNs. Using the temperature-dependent coupling from Eq. (26), we conclude that

(27)

implying that the neutrinos will form a tightly coupled fluid down to recombination temperatures unless

(28)

For scalars in these mass ranges, Planck will be able to detect MaVaNs.

What are natural parameters and typical scalar masses for the theory? We consider a generic class of MaVaN models laid out in Fardon et al. (2004, 2006). All energy scales in MaVaN models are characterized by the dark energy scale,