
Neutrino mass hierarchy determination with IceCube–PINGU

Walter Winter (Email: winter@physik.uni-wuerzburg.de)

Institut für Theoretische Physik und Astrophysik,

Universität Würzburg, 97074 Würzburg, Germany

July 30, 2019

Abstract

We discuss the neutrino mass hierarchy determination with atmospheric neutrinos in PINGU (“Precision IceCube Next Generation Upgrade”), based on a simulation with the GLoBES software including the full three-flavor framework and parameter degeneracies, and we compare it to long-baseline experiment options. We demonstrate that the atmospheric mass hierarchy sensitivity depends on the achievable experiment properties and we identify the main targets for optimization, whereas the impact of a large number of tested systematical errors turns out to be small. Depending on the values of θ23, δCP, and the true hierarchy, a 90% CL to 3σ discovery after three years of operation seems conceivable. We also emphasize the synergy with existing beam and reactor experiments, driven by NOvA, such as the complementary coverage of the parameter space. Finally, we point out that a low intensity neutrino beam with a relatively short decay pipe could be used to determine the mass hierarchy with a sensitivity comparable to the LBNE experiment irrespective of the directional resolution of the detector.

1 Introduction

With the discovery of a non-zero value of θ13 by electron antineutrino disappearance reactor experiments [1, 2, 3], the mixing angles and mass squared differences of the neutrinos are known at the percent level [4, 5, 6]; the flavor physics of the lepton sector has therefore become a precision discipline. The major outstanding issues in oscillation physics are the determination of the neutrino mass hierarchy (i.e., the sign of Δm²31) and the value of the leptonic Dirac CP phase δCP. If, in addition, θ23 is confirmed to be non-maximal, its octant needs to be determined. From Ref. [7] it is clear that existing beam experiments, such as T2K [8] and NOvA [9], will most likely not allow for a high confidence level determination of any of these parameters – even with the help of the reactor experiments and some mutual optimization of the neutrino-antineutrino running. Therefore, the next generation of experiments will be required. For the measurement of δCP, a new long-baseline neutrino oscillation experiment seems the straightforward choice because of excellent control over systematics; for a recent analysis and comparison, see e.g. Ref. [10].

The neutrino mass hierarchy is an excellent discriminator of flavor models [11], since (in the non-degenerate case) the structure of the effective mass matrix of the neutrinos strongly depends on it. The conventional technique to determine the hierarchy is using matter effects over long enough baselines [12, 13, 14]. This approach can be used in a long-baseline neutrino oscillation experiment with a sufficiently long baseline, such as a superbeam [15, 16], or using atmospheric neutrinos in an iron calorimeter [17, 18, 19] or a liquid argon detector [20]. While a long-baseline neutrino oscillation experiment requires a new dedicated beam source, the atmospheric neutrino flux comes for free. The challenges are, however, completely different, as we will see in this paper. Alternatives include the study of electron antineutrino disappearance over several oscillation peaks in a medium-baseline reactor experiment named Daya Bay-II; see, e.g., Ref. [21]. This technique is different from using matter effects, and requires excellent energy resolution. Furthermore, cosmological tests of the mass hierarchy can be performed; see, e.g., Ref. [22].

A new way to use atmospheric neutrinos to determine the neutrino mass hierarchy has recently been drawing some attention: increasing the Digital Optical Module (DOM) density in a neutrino telescope, operated in sea water or ice, lowers the threshold and allows for a Megaton-size detector already at 10 GeV. Such an approach has been identified for IceCube-DeepCore at the South Pole, called PINGU (“Precision IceCube Next Generation Upgrade”) [23, 24], and ANTARES/KM3NeT in the Mediterranean, called ORCA (“Oscillation Research with Cosmics in the Abyss”) [25]. The respective physics potential of these ideas has been studied in Refs. [26, 27, 28, 29, 30, 31]. We focus on the neutrino mass hierarchy determination in PINGU in this paper. The main selling points of this idea are the relatively short timescale compared to any other option which requires the excavation of a large underground cavern, the very predictable and moderate cost, and the small risk due to the experience with the existing technology.

We first of all discuss the physics potential using atmospheric neutrinos, including the full three-flavor framework, oscillation parameter correlations and degeneracies, and an unprecedentedly large number of systematical errors. We discuss the implementation of PINGU in Sec. 2, the main impact factors including systematics in Sec. 3, and the performance in Sec. 4. Then in Sec. 5, we compare the potential of PINGU to that of the existing beam and reactor experiments. Finally, we point out in Sec. 6 that one may also target a low intensity neutrino beam from one of the major high-energy physics laboratories in the Northern hemisphere towards the South Pole. Earlier studies discussing this possibility include, for instance, Refs. [32, 33, 34, 35].

2 PINGU implementation and systematics

For the simulation of PINGU, a modified version of the GLoBES (“General Long Baseline Experiment Simulator”) software [36, 37] is used; this version is based on a (yet) unpublished prototype for GLoBES 4.0, which was originally written for Ref. [10]. The advantage of this prototype is that the different directional bins can be defined as different experiments, and systematics can be easily and self-consistently defined using the pull method. A more detailed description can be found in Ref. [10]. The directional smearing is performed after the event rate computation, whereas the energy resolution is a built-in feature of GLoBES. Note that the error introduced by performing the directional smearing after the event rate computation can be shown to be of second order if the directional resolution depends on the incident energy instead of the reconstructed neutrino energy. The energy binning is chosen in steps of 1 GeV, and the directional binning in equidistant zenith-angle steps, following Ref. [26]. We take into account the background of down-going neutrinos which may be reconstructed as up-going events (effective coverage) by defining two “overflow” bins. Note that the down-going neutrinos create a non-oscillating background for the mass hierarchy determination, while the overflow bins can be used to constrain systematical errors. In addition, note that the binning has been checked to be sufficiently fine-grained not to affect the sensitivity significantly, whereas in principle larger bin widths are allowed for quasi-horizontal events. At low energies, where the oscillation probability changes very quickly, the sampling density has been increased to 50 MeV intervals to avoid aliasing effects.

The directional smearing (redistribution of events) between the incident zenith angle θ and the reconstructed zenith angle θ′ is computed by a (normalized) Gaussian

R(θ, θ′) = 1/(√(2π) σ_θ) · exp(−(θ′ − θ)² / (2σ_θ²))   (1)

with the resolution σ_θ. Since the directional resolution depends on energy, the fraction f_i of events with incident direction θ mapped into bin i with boundaries θ_i and θ_{i+1} has been pre-computed by

f_i(θ; E) = ∫_{θ_i}^{θ_{i+1}} R(θ, θ′; E) dθ′   (2)

which is also often called a “migration matrix”, since θ and E can only take fixed values in the simulation. As an additional complication, it has to be ensured that up- and down-going events are properly mapped into the bins (in Eq. (1), θ′ may take values smaller than zero or larger than π, which just corresponds to shifting the azimuth by π and correcting the zenith angle). The migration matrices have been directly compiled into the GLoBES code.
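The precomputation of Eq. (2) can be sketched as follows. This is a minimal illustration, not the GLoBES implementation: the function name is ours, and the row renormalization over the physical range stands in, as an approximation, for the azimuth-shift mapping described above.

```python
import numpy as np
from math import erf, sqrt

def migration_matrix(theta_centers, bin_edges, sigma_theta):
    """Fraction of events with incident zenith angle theta mapped into each
    reconstructed bin (Eq. (2)): the normalized Gaussian of Eq. (1) integrated
    over the bin. `sigma_theta` may be a constant or a function of theta
    (mimicking an energy/angle-dependent resolution)."""
    n_in, n_out = len(theta_centers), len(bin_edges) - 1
    M = np.zeros((n_in, n_out))
    for i, th in enumerate(theta_centers):
        s = sigma_theta(th) if callable(sigma_theta) else sigma_theta
        # Gaussian CDF evaluated at every bin edge
        cdf = np.array([0.5 * (1.0 + erf((e - th) / (sqrt(2.0) * s)))
                        for e in bin_edges])
        M[i] = np.diff(cdf)
        # Approximation: events leaking past the edges are folded back by
        # renormalizing the row (the text instead shifts the azimuth by pi).
        M[i] /= M[i].sum()
    return M
```

Each row of the resulting matrix sums to one, so the smearing redistributes but does not create or destroy events.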

For the atmospheric neutrino flux, we use the most up-to-date South Pole flux [38], where we use the azimuth-averaged solar-min version. In order to take into account site- and model-dependent uncertainties, we include a substantial normalization error of the order of 20%, see Ref. [28], a slope error (“zenith bias”) of the order of 4% [39] as implemented in Ref. [30] on the up-going (signal) events, an (uncorrelated) error of the same order on the down-going event sample, and errors on flavor ratios and polarities estimated from Ref. [39]. Our PINGU (default) fiducial mass is based on the official version 12/2012 using a minimum of 20 hits/event, also known as V6; the fiducial mass is linearly interpolated among the simulated values from Ref. [40]. It includes 20 strings, with 60 DOMs per string. It should be noted that the actual PINGU configuration is still subject to optimization, and that smaller or larger fiducial masses and threshold functions are conceivable depending on the number of strings and DOM density. For instance, a 40 string setup may perform closer to the “optimistic” setup we define below.

We use three years of exposure, unless noted otherwise. The event rates have been cross-checked with Refs. [26, 27, 29]. The oscillation parameters and their uncertainties are taken from Ref. [4], where it is noteworthy that two degenerate solutions for θ23, one in each octant, exist. For the true θ23, we choose the normal hierarchy best-fit value. We do not take into account the external errors on θ23 and Δm²31, though, to avoid a bias from the external fit or a preference of a particular octant. In some cases, we combine the PINGU data with the equivalent of 10 years of νμ disappearance data from T2K, based on the simulation in Ref. [7], which is expected to provide the best available knowledge at least on Δm²31 at the analysis time – a method which has proven to be successful for beta beam simulations [41, 42]. We have checked that the T2K disappearance data alone do not have any significant mass hierarchy sensitivity.

Systematics                    Opt.    Def.      Cons.    Comments                                                        Ref.

Experiment properties:
Fiducial mass and
  energy threshold             (*)     12/2012   12/2012  (*) Improved threshold from denser instrumentation,             [40]
                                                          linearly increasing with energy
Energy resolution              0.15    0.25      0.35
Directional resolution                                    “Default” corresponds to the energy-dependent                   [26]
                                                          resolution of Ref. [26]
Cascade mis-ID fraction        0.01    0.05      0.1

Systematical uncertainties:
Normalization                  0.10    0.25      0.35     Includes fiducial mass, atmospheric flux models
Mis-ID cascades                0.05    0.075     0.10     Uncorrelated among electromagnetic, hadronic, and NC cascades
Cross sections (DIS)           0.05    0.075     0.10     Uncorrelated between neutrinos and antineutrinos                [10]
Matter density                 0.005   0.01      0.05     Uncorrelated among all directional bins                         [43]

Uncertainties of the atmospheric neutrino flux:
Normalization                                             Included in “Normalization” above                               [28]
Slope error (zenith bias)      0.01    0.04      0.10                                                                     [30, 39]
Flavor ratio                   0.005   0.01      0.02                                                                     [39]
Polarity ratio                 0.01    0.02      0.03                                                                     [39]
Polarity ratio                 0.01    0.025     0.03                                                                     [39]
BG down-going                  0.05    0.075     0.10                                                                     [39]

Table 1: Systematical errors and experiment properties considered in this analysis, where the assumptions for three different scenarios (optimistic, default, conservative) are listed.

The systematical errors and experiment properties considered in this analysis are listed in Table 1. In order to test the impact of systematics, we have defined three values in each case (optimistic, default, conservative), which are supposed to represent the performance of the experiment in the best, expected, and worst case. Apart from the systematics already discussed above, an improved threshold has been considered for the optimistic setup, as it may be achieved by a denser instrumentation, such as a 40 string configuration. The energy and directional resolutions correspond to similar assumptions as used in the literature [26], where the default values correspond to the resolutions at the energies where the main significance comes from (as we will show below). The fraction of cascades mis-identified as muon tracks is, at this point, an educated guess, which may actually be a function of energy. For the cross section and matter density uncertainties, the assumptions from Ref. [10] have been adopted. We will, however, demonstrate below that only very few of these systematics are important for the analysis at all. We have also tested an energy calibration error (for the definition, see the GLoBES manual [36]), and we have not found any significant impact, not even on the Δm²31 measurement. The reason is that the solar parameters are well constrained externally, which means that the bins with long baselines and low energies can be used for the energy calibration (the energy calibration error would affect both the solar and atmospheric oscillations). Since the inclusion of the energy calibration is computationally expensive, we have not included it in the main line of this work.

3 Performance indicator, main impacts, and potential for optimization

Figure 1: Statistical significance per bin for the mass hierarchy determination in PINGU (default setup). The left panel includes statistics only (oscillation parameters fixed), the right panel uses the IH rates at the fit minimum, i.e., it includes the oscillation parameter correlations; the position of this minimum is obtained by marginalizing over all oscillation parameters. The total statistical significance, as defined here using the Gaussian χ², is correspondingly smaller in the right panel than in the left. This calculation has been performed without systematical errors.

We define the sensitivity to the hierarchy as the minimal χ² of all solutions with the flipped hierarchy, compared to the simulated hierarchy, for which the minimal χ² vanishes. Therefore, oscillation parameter correlations (connected degenerate solutions) and degeneracies (such as in the octant) are automatically included if all possible degenerate solutions are properly located. Note that this way to define the hierarchy sensitivity is consistent with the literature on long-baseline neutrino oscillation experiments, and can therefore be used for a direct comparison. Occasionally we will show the number of sigmas for the discovery, assuming Nσ = √χ² for 1 d.o.f. Note that this frequently used assumption may not exactly apply to the hierarchy sensitivity; see Ref. [44] and references therein for a more detailed discussion. However, we use it equally for all experiments.
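The Gaussian translation between χ² and significance used here can be made explicit in a few lines (a small helper sketch; the function names are ours):

```python
from math import erf, sqrt

def n_sigma(delta_chi2):
    """Number of sigmas under the 1 d.o.f. Gaussian assumption, N_sigma = sqrt(chi^2)."""
    return sqrt(delta_chi2)

def confidence_level(delta_chi2):
    """Two-sided Gaussian confidence level corresponding to that significance."""
    return erf(sqrt(delta_chi2 / 2.0))
```

For example, the "90% CL" quoted in the text corresponds to Δχ² ≈ 2.71 for 1 d.o.f., and Δχ² = 9 corresponds to 3σ.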

We show the impact of the correlations in Fig. 1. First of all, consider the statistical significance per bin in the left panel, which is to be compared to Fig. 13 in Ref. [26]. (In this figure, we use a Gaussian χ², whereas for the main analysis the Poissonian χ² of GLoBES is used.) Apart from the different fiducial mass as a function of energy and slightly different energy and directional resolutions, the result is very similar. If, however, the oscillation parameters of the fit solution are allowed to take any value in the parameter space region (are minimized over), the significance decreases, see right panel. In fact, after the marginalization, the main significance comes from mantle-crossing baselines at about 9 GeV (dark red region). This figure has to be interpreted with care, though, since it only shows where the resulting tension between the two hierarchies remains – whereas other regions help to reduce systematical uncertainties and improve the measurements of the oscillation parameters. For instance, we have not shown the energy range between 25 and 50 GeV in the figures at all, where the contribution to the total statistical significance for the mass hierarchy is small in the right panel. However, without this region, the minimal χ² decreases, because that range helps to measure some of the oscillation parameters and systematical errors.

The most important correlation comes from the definition of Δm²31. For the normal hierarchy, it is the difference between the heaviest and lightest (squared) masses; for the inverted hierarchy, it is the difference between the lightest and second-heaviest (squared) masses. Therefore, the fit values for the normal and inverted hierarchies are different as an artefact of an asymmetric definition of Δm²31, and the flipped hierarchy fit is not exactly found at the sign-flipped value. A possible way out is to define a symmetrized effective Δm², which reads for the νμ disappearance channels [45, 46, 47, 18, 27, 19]

Δm²μμ = Δm²31 − Δm²21 (cos²θ12 − cos δCP sin θ13 sin 2θ12 tan θ23)   (3)

However, note that it contains other oscillation parameters, such as δCP, which themselves can vary. Therefore, we choose complete minimization over all fundamental parameters as the method for the hierarchy determination within the full three-flavor oscillation framework, as opposed to using Eq. (3). As this is a built-in feature of GLoBES, it is automatically included as for any other long-baseline experiment.
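The role of the effective splitting can be illustrated numerically. This is a sketch based on our reconstruction of Eq. (3); function names and parameter values are illustrative only, not the values used in the analysis.

```python
from math import sin, cos, tan, radians

def dm2_mumu_eff(dm31, dm21, th12, th13, th23, delta):
    """Effective atmospheric splitting driving nu_mu disappearance
    (combination in the spirit of Eq. (3); angles in radians)."""
    return dm31 - dm21 * (cos(th12)**2
                          - cos(delta) * sin(th13) * sin(2.0 * th12) * tan(th23))

def ih_guess(dm31_nh, dm21, th12, th13, th23, delta):
    """Estimate of the inverted-hierarchy fit value of dm31: flip the sign
    of the effective splitting and solve back for dm31."""
    corr = dm21 * (cos(th12)**2
                   - cos(delta) * sin(th13) * sin(2.0 * th12) * tan(th23))
    return -(dm31_nh - corr) + corr
```

By construction, the effective splitting evaluated at the guessed inverted-hierarchy value is the exact negative of the normal-hierarchy one, which is the sense in which Eq. (3) "symmetrizes" the fit.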

Figure 2: The χ² as a function of the fit value of Δm²31 for a simulated normal hierarchy. The left panel shows the inverted hierarchy fit, the right panel the best-fit region. The different curves correspond to PINGU alone, and PINGU combined with T2K disappearance data. Vertical lines correspond to ±Δm²31, where Δm²31 is the simulated best-fit value. Default setup from Table 1 used.

We illustrate the correlation in Fig. 2, where the χ² as a function of the fit Δm²31 is shown for the best-fit (right panel) and degenerate solution (left panel); see red curves. The minimal χ² in the left panel corresponds to the hierarchy sensitivity, whereas the minimal χ² in the right panel is zero by definition (statistical fluctuations are not simulated here, we only simulate the “average” experiment). If the fit Δm²31 is fixed to the sign-flipped simulated value, as illustrated by the vertical line in the left panel, a large artificial χ² is introduced because PINGU itself has a good Δm²31 precision. Similarly, such a contribution can arise from a tension of different data, such as external knowledge on Δm²31 added at some position, which is not to be interpreted as actual mass hierarchy sensitivity.

It is also an interesting question if such external knowledge on Δm²31 can actually help; we therefore include the T2K disappearance data in the gray curves. The precision PINGU can achieve alone for Δm²31 is, for our simulation, about 0.9%, compared to the current precision of about 3% [4] and the precision of the T2K data of about 1.3%, which means that the inclusion of external data may not be necessary. The combined precision of PINGU and T2K, which can be read off from the figure, is about 0.7%. However, the addition of the T2K data only marginally affects the position of the minimum or the minimal χ² – and therefore only marginally affects the hierarchy sensitivity. We therefore do not include it in the following sections, unless noted otherwise. It is also noteworthy that the positions of the minima in Fig. 2, left panel, are not exactly matching, which means that even Eq. (3) can only provide a rough estimate if the precision on Δm²31 is as high as in these experiments. For instance, computing the hierarchy sensitivity for a fixed fit Δm²31 obtained from Eq. (3) yields a χ² which is larger than the actual mass hierarchy sensitivity at the minimum. We therefore rely on full minimization, and do not discuss the effect of Eq. (3) separately.

Figure 3: Dominant impact factors (as relative effect on the total χ²) for two different parameter sets (see plot labels). Impact factors under experiment control are shown in blue/dark.

In order to address the potential for optimization, we need to identify the main impact factors for the sensitivity. For that, we have tested all the properties listed in Table 1, including the total exposure (running time × detector mass), on the default systematics setup, one by one, for two different sets of parameters. In addition, we have tested the impact of parameter correlations. As method, systematics are switched off to test their impact, and the resulting (relative) change of the total χ² is computed. Correlations are checked by evaluating the full parameter degeneracy and then fixing only one oscillation parameter at a time. Threshold, energy resolution, and directional resolution are switched to the optimistic values in Table 1, and mis-identified cascades are switched off to test their effect. In addition, the impact of a factor of two higher exposure (running time or fiducial mass) is tested. Although these choices are to some degree arbitrary, they help to identify the relevant systematics and impacts. This method dates back to Ref. [48] and has been motivated by the idea to identify the impacts with the greatest potential for optimization. It typically leads to the same qualitative results as switching all systematics off and then each of them on, one by one; different methods have been explicitly tested in Ref. [10]. However, it should be noted that it is not suitable to quantitatively infer the combined effect of different impact factors, as we will show later for the optimistic setup, since, for instance, systematics can be correlated with each other or with oscillation parameters. It rather illustrates what happens if a single one of the main impacts can be improved.
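The one-by-one toggling procedure can be sketched generically. This is a toy illustration, not the actual analysis code: the configuration keys and the toy χ² model in the usage below are invented for the example.

```python
def impact_factors(chi2, default, optimistic):
    """Toy version of the one-by-one procedure behind Fig. 3: switch a single
    property from its default to its optimistic value (or a systematic off)
    and record the relative change of the total chi^2. `chi2` is a
    user-supplied model of the sensitivity as a function of the settings."""
    base = chi2(default)
    impacts = {}
    for key, opt_value in optimistic.items():
        cfg = dict(default)
        cfg[key] = opt_value          # improve only this one property
        impacts[key] = (chi2(cfg) - base) / base
    # sort by absolute impact, largest first
    return dict(sorted(impacts.items(), key=lambda kv: -abs(kv[1])))
```

As the text cautions, the ranking identifies individually promising improvements but cannot be summed to predict the combined effect, since the underlying χ² is not additive in the settings.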

The impact on the total χ² is shown in Fig. 3 for the ten most dominant impact factors for two different sets of parameters, i.e., all of the shown impacts are important. It is first of all interesting to observe that the mass hierarchy determination is rather insensitive to systematical errors, such as on cross sections or on the atmospheric neutrino flux, which can be determined by a large number of bins without mass hierarchy sensitivity; cf. Fig. 1. The main impacts, which may be optimized for, are represented by the blue (dark) bars:

Exposure/fiducial mass

May be improved by adding an event sample with un-contained muon tracks, by allowing for fewer than 20 hits for the threshold, by adding cascade data, or by increasing the instrumented volume.

Directional resolution

May be improved by a denser instrumentation.

Energy resolution

May be improved by better analysis techniques or better DOM hardware.

Energy threshold

Can be improved by a denser instrumentation.

Mis-ID cascades

Biggest unknown. Has to be addressed, since it is important.

Normalization

Requires both control over fiducial mass of the experiment and the atmospheric neutrino flux prediction.

In addition, note that several of these properties may be improved by using inelasticity information, which might also allow for separate neutrino and antineutrino samples [31]. While some improvements may rely on more advanced analysis techniques to be developed and improved within the next years, some require a modification of the experiment design beforehand. Note that the order of the impacts in Fig. 3 in many cases depends on the difference between the values for the default and optimistic setups in Table 1, which means that all of the listed factors are important. In addition, our qualitative conclusions are rather independent of the parameter space region, as can be inferred from the comparison between the left and right panels of Fig. 3.

As far as correlations are concerned (red/light bars), their individual impact depends on how well the location of the degenerate solution is estimated. Here we use Eq. (3) to “guess” the degenerate Δm²31, and use the simulated values for the other parameters. For instance, in the left panel, Eq. (3) does not seem to work well enough, and the correlation with Δm²31 is the first one to be taken into account. In the right panel, Eq. (3) works better, but the degeneracy is located at different values of the other oscillation parameters than the simulated ones. This exercise illustrates that the full parameter degeneracy has to be taken into account, since whatever estimator for the degenerate solution is chosen (here Eq. (3)), it does not work equally well everywhere in parameter space. In the following, we therefore take into account the full parameter degeneracy everywhere, i.e., we take into account the actual parameter space topology and do not rely on such estimators.

4 Performance using atmospheric neutrinos

Figure 4: The number of sigmas (Nσ) for the mass hierarchy discovery as a function of the true δCP for the different (true) hierarchies (rows) and octants (columns), as given in the plot captions, for three years of data taking. The solid curves correspond to the default setup in Table 1, the shaded region shows the impact of systematics between optimistic (upper end) and conservative (lower end).

The performance of PINGU is shown in Fig. 4 as a function of δCP for both true hierarchies and both octant solutions. The solid curves correspond to the default setup in Table 1, the shaded region shows the impact of systematics between optimistic (upper end) and conservative (lower end). From the figure it is clear that PINGU performs significantly better if θ23 lies in the second octant rather than the first, and if the true hierarchy is normal rather than inverted. (The sensitivities are somewhat worse for the inverted hierarchy, since the event rates in the appearance channels will be somewhat lower because of the lower antineutrino cross sections.)

Figure 5: Left panel: Number of sigmas for the hierarchy discovery as a function of time for two extreme cases of the true parameter values (see legend). Right panel: Number of sigmas for the hierarchy sensitivity as a function of the (true) θ23 for the different hierarchies. The plot range corresponds to the current allowed range for θ23, and the best-fit values and ranges are marked by vertical lines and shadings, respectively [4]. The default setup has been used in both panels (three years of data taking in the right panel).

For favorable true parameter values, a 90% CL hint can be reached after three years even under conservative assumptions, and higher significances are conceivable for a somewhat improved setup (compared to the default setup) and a normal hierarchy. For unfavorable values, however, considerably larger exposures will be needed for a discovery. For the default setup, a 90% CL hint after three years seems conceivable in almost all cases (solid curves). It is however clear that the final performance will depend on the true hierarchy, θ23, and δCP, apart from the detector performance. In turn, one has to adjust the running time to accumulate the necessary statistics. This is, for the most extreme cases, illustrated in Fig. 5 (left panel) for the default systematics. In one case, a hint can be reached after one year and almost evidence after three years; in the other case, the 90% CL hint can only be reached after five years, and higher confidence levels are out of reach. Note that also θ23 within the currently allowed range can impact the performance [18, 19]. However, we do not treat θ23 at the same level as the other unknown parameters, since its precision is continuously being improved; on the other hand, it cannot be anticipated that the other discussed parameters (hierarchy, δCP, octant) will be known before the construction or even analysis time [7].
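The need to adjust the running time can be quantified in the statistics-dominated limit, where the χ² grows roughly linearly with exposure. This is a back-of-the-envelope sketch, not a substitute for the full simulation; with systematics and parameter correlations the growth flattens, so it yields a lower bound on the true running time.

```python
def years_for_significance(chi2_after_three_years, target_sigma):
    """Running time needed to reach a target significance, assuming the
    statistics-dominated limit chi^2(t) = (chi2 after 3 yr) * t / 3,
    together with N_sigma = sqrt(chi^2) for 1 d.o.f."""
    chi2_per_year = chi2_after_three_years / 3.0
    return target_sigma**2 / chi2_per_year
```

For example, a setup reaching χ² = 9 (3σ) after three years would, under this scaling, deliver the 90% CL hint (1.645σ) in under a year.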

While the two best-fit solutions for θ23 in Ref. [4] favor non-maximal atmospheric mixing, the allowed range is much larger and includes the possibility of maximal atmospheric mixing. We therefore show in Fig. 5, right panel, the θ23 dependence in the currently allowed range. Obviously, the sensitivity for either hierarchy can vary considerably for three years of data taking within the currently allowed range, depending on the actual value of θ23. The kinks in this plot come from the octant degeneracy, i.e., the minimal χ² is found in the inverted hierarchy, wrong octant solution, and may be eliminated if the octant was known at the analysis time. A definitive conclusion on the octant from existing equipment is, however, unlikely [7] (see Fig. 5 therein).

5 Comparison with existing beam and reactor experiments

Compared to PINGU, other experiments will be sensitive to the mass hierarchy on the same timescale, where we especially focus on existing equipment with possible upgrades. The most sensitive experiment is the long-baseline neutrino oscillation experiment NOvA in the US, with a baseline of 810 km. In addition, data from the reactor experiments (Double Chooz, Daya Bay, and RENO) will be available, as well as data from T2K. In Ref. [7], the time evolution of the sensitivities has been discussed as a function of the exposure of the individual experiments. We adopt these assumptions for the time evolution, and re-evaluate the sensitivities for the current best-fit parameters.

We consider two scenarios, called “2020” and “2025”, which are identical to those in Ref. [7]:

2020

Beam and reactor experiments: The main sensitivity to the mass hierarchy comes from NOvA, which is assumed to start 08/2012 with the full beam (0.7 MW) but only 2.5 kt detector mass, linearly increasing to 15 kt until 01/2014. This running plan is, in fact, not too far away from the current status; the final luminosity is expected to be reached 05/2014, see Ref. [49]. There are also assumptions for T2K and the reactor experiments, to which the final sensitivity is however not very sensitive. PINGU: Since PINGU is expected to be completed in Feb. 2017 or 2018, we compare this sensitivity to three years of PINGU data.

2025

Beams and reactor experiments: Here it is assumed that the beams are upgraded. In particular, Project X for NOvA is assumed to increase the beam power from 0.7 to 2.3 MW between March 2018 and March 2019. PINGU: We compare this sensitivity to eight years of PINGU data.

Figure 6: Number of sigmas for the hierarchy discovery as a function of δCP for two different scenarios for PINGU (three years, left panel, and eight years, right panel, respectively), beams and reactor experiments (scenario 2020, left panel, and 2025, right panel, respectively), and their combination. In the right panel, the hypothetical pure combination with the χ² added after minimization (“Pure statistics”) is shown as well as a dotted curve, to illustrate the synergy.

Let us first illustrate the complementarity and synergy between long-baseline experiments and PINGU with two examples as a function of δCP in Fig. 6. In the left panel, it is clear that PINGU has a relatively flat performance in δCP, whereas the sensitivity of NOvA strongly depends on δCP. Therefore, the two experiment classes complement each other in terms of the parameter space. A similar example is shown in Fig. 6, right panel, where the two experiment classes clearly cover different parts of the parameter space. However, a discovery for all values of δCP can only come from the combination of the different experiments. It is noteworthy that in this case the two experiment classes are synergistic beyond a simple addition of statistics; see Ref. [50] for a more detailed discussion. In order to illustrate that, we show the curve “Pure statistics”, for which the χ² of the two results was simply added after the individual parameter space minimization. Since, however, the positions of the individual minima in parameter space are different, these minima are not simultaneously allowed in a combined fit, and the sensitivity improves beyond a simple addition of statistics (compare curves “Pure statistics” and “Combined”).
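The synergy argument can be illustrated with a toy model. The χ² profiles below are hypothetical, chosen only to show that incompatible individual minima make the combined fit stronger than the sum of the individual minima.

```python
import numpy as np

# Toy chi^2 profiles of the wrong-hierarchy fit versus a shared parameter
# (delta_CP): each experiment alone disfavors the wrong hierarchy least at a
# different value of delta_CP (hypothetical numbers for illustration only).
delta = np.linspace(0.0, 2.0 * np.pi, 1001)
chi2_pingu = 4.0 + 3.0 * np.cos(delta)            # minimum 1.0 near delta = pi
chi2_beam = 4.0 + 3.0 * np.cos(delta - np.pi)     # minimum 1.0 near delta = 0

# "Pure statistics": add the chi^2 values after individual minimization
pure_statistics = chi2_pingu.min() + chi2_beam.min()
# Combined fit: minimize the summed chi^2 over the shared parameter
combined = (chi2_pingu + chi2_beam).min()
```

Here the individual minima sit at incompatible values of the shared parameter, so the combined minimum (8.0 in this toy) is much larger than the added individual minima (2.0), mirroring the gap between the “Pure statistics” and “Combined” curves.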

Figure 7: The fraction of δCP values for which the mass hierarchy can be discovered in 2020 (left panel, 90% CL) or 2025 (right panel). The different bar groups correspond to different (true) hierarchies and octants, the individual bars to PINGU (three years, left panel, and eight years, right panel, respectively), beams and reactor experiments (scenarios 2020 and 2025, respectively), and their combination.

A more complete perspective on the parameter space is given in Fig. 7, where the fraction of δCP values for which the hierarchy can be discovered is shown in the scenarios 2020 (left panel, 90% CL) and 2025 (right panel). For θ23 in the first octant, the parameter space coverage in δCP depends on the scenario and experiment class; for the normal hierarchy, PINGU seems to have a slight advantage, for the inverted hierarchy, beam and reactor experiments. For θ23 in the second octant, PINGU alone can reach 100% coverage in δCP in all cases. It is noteworthy that the combination of all experiments allows for a 90% CL hint for the hierarchy in 2020 in all cases, and for a discovery in 2025, in spite of considerably poorer individual performances, especially in the less favorable octant. Note however that the scenario 2025 relies on Project X.

6 Neutrino beam to PINGU?

Figure 8: The νμ → νe oscillation probability for the CERN–South Pole baseline for the normal and inverted hierarchies as a function of energy (for the other parameters, our standard oscillation parameter values have been used). We also show the NuMI-like beam spectrum times cross section for a decay pipe length of 250 m for comparison (dashed curve).

In Ref. [35], the requirements for a detector at the South Pole to receive a neutrino beam were discussed, and it was demonstrated that the mass hierarchy can be easily measured in such a setup. After the discovery of a non-zero θ_13, it was furthermore proposed that a low intensity beam from the Fermilab Main Injector would be sufficient for a high confidence mass hierarchy determination in spite of large irreducible backgrounds [51]. Here we adapt this idea to the detector properties used in this study.

We study the potential of a setup with a neutrino beam from one of the major accelerator laboratories in the Northern hemisphere, such as CERN, FNAL, JHF, DESY, or RAL. Since it turns out that the baselines from these laboratories to the South Pole are very similar and all cross the Earth’s outer core, we use the CERN–PINGU baseline as representative. The main sensitivity is expected to come from the ν_μ→ν_e appearance probability, since the beam produces mostly muon neutrinos. We therefore show this probability in Fig. 8 for the normal and inverted mass hierarchies as a function of energy. It is clear that the hierarchy can be measured even by a total rate measurement of this channel. Using a beam spectrum from protons from the Fermilab Main Injector (the NuMI beam spectrum from the MINOS simulation in Ref. [52], based on Refs. [53, 54]), we observe that it peaks between about 1 and 6 GeV. This is illustrated by the dashed curve in Fig. 8, where the product of flux and cross section is shown. In that energy range, the MSW resonance in the Earth’s core and the parametric enhancement from the mantle-core-mantle profile [55, 56, 57] lead to a strong enhancement of the oscillation probability, which becomes significantly larger than the vacuum oscillation amplitude sin²2θ_13. This combination of a parametric enhancement of the oscillation effect, a beam spectrum peaking in that energy range, and a Megaton-size detector can be used for the mass hierarchy determination with, in fact, a very low intensity beam, such as from a short decay pipe.
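That the resonance falls into the quoted 1–6 GeV window can be checked with a rough constant-density estimate of the 1–3 resonance energy. The densities, electron fractions, and oscillation values below are generic illustrative numbers (roughly PREM-like), not the ones used in the simulation.

```python
import math

# Rough estimate of the 1-3 MSW resonance energy in constant-density matter:
# resonance when A = Dm31^2 * cos(2*theta_13), with the matter term
# A ~ 1.53e-4 eV^2 * (rho * Y_e / (g/cm^3)) * (E / GeV).
# All input numbers are generic illustrative values.

DM31 = 2.4e-3          # |Delta m^2_31| in eV^2 (approximate)
S2TH13 = 0.09          # sin^2(2*theta_13), roughly the measured value
COEFF = 1.53e-4        # matter-term coefficient in eV^2 per (g/cm^3 * GeV)

def resonance_energy(rho, ye=0.5):
    """Resonance energy in GeV for matter density rho in g/cm^3."""
    cos2th = math.sqrt(1.0 - S2TH13)   # cos(2*theta_13)
    return DM31 * cos2th / (COEFF * rho * ye)

print(f"mantle (rho ~ 4.5 g/cm^3): E_res ~ {resonance_energy(4.5):.1f} GeV")
print(f"core   (rho ~ 11 g/cm^3):  E_res ~ {resonance_energy(11.0, ye=0.47):.1f} GeV")
```

With these inputs the mantle resonance comes out near 7 GeV and the core resonance near 3 GeV, i.e., exactly in the energy range where the NuMI-like spectrum peaks.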

Let us fix the boundary conditions similar to those of the NuMI beam: for the target power, we use 700 kW, as it is expected to be achieved by 2014 for the NOνA experiment, and for the total running time we use five years. Since the beam has to be tilted at an angle of about 68° to point towards the South Pole, a decay pipe as long as that of the NuMI beam (675 m) is difficult to build. Therefore, we assume a shorter decay pipe of only about 50 to 250 m, at the expense of losing about one order of magnitude in flux (the modification of the beam spectrum is estimated by assuming that the pion decays are exponentially distributed over the decay pipe length and that the neutrino takes about 25% of the pion energy on average). We use the neutrino running mode of the beam only, which will lead to a somewhat worse sensitivity for the inverted than for the normal hierarchy.
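The flux-loss estimate from the shortened decay pipe can be sketched numerically. The exponential decay distribution and the 25% energy-transfer assumption follow the text above; the example neutrino energy of 2 GeV is an arbitrary choice near the spectral peak.

```python
import math

# Sketch of the decay-pipe flux estimate: pions decay exponentially along
# the pipe, so the fraction decaying within length L is 1 - exp(-L / L_dec),
# with lab-frame decay length L_dec = gamma * c * tau = (E_pi / m_pi) * 7.8 m.
# Assuming E_nu ~ 0.25 * E_pi, a pion producing a 2 GeV neutrino has
# E_pi ~ 8 GeV.

CTAU_PI = 7.8          # charged pion c*tau in meters
M_PI = 0.1396          # charged pion mass in GeV

def decay_fraction(pipe_length_m, e_nu_gev):
    """Fraction of pions decaying inside the pipe (E_nu ~ E_pi / 4 assumed)."""
    e_pi = 4.0 * e_nu_gev
    l_dec = (e_pi / M_PI) * CTAU_PI
    return 1.0 - math.exp(-pipe_length_m / l_dec)

e_nu = 2.0  # GeV, illustrative energy near the spectral peak
for length in (50, 250, 675):
    print(f"{length:4d} m pipe: decay fraction {decay_fraction(length, e_nu):.3f}")

# Relative flux loss of a 50 m pipe compared to the NuMI-like 675 m pipe:
ratio = decay_fraction(675, e_nu) / decay_fraction(50, e_nu)
print(f"50 m vs 675 m flux suppression: ~{ratio:.1f}x")
```

At this energy the 50 m pipe retains roughly a factor 7 less flux than the full 675 m pipe, consistent with the statement of losing about one order of magnitude.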

For PINGU, we do not require directional resolution in this case, since the duty cycle of the beam can be used to suppress the backgrounds. We discuss the ν_e appearance channel as the main source of information about the hierarchy, which means that electromagnetic cascade identification is, in principle, required. Since it is, however, difficult to distinguish electromagnetic cascades from hadronic or neutral-current cascades, we make the most conservative assumption that all these cascades are treated as indistinguishable within one data sample, and that 30% of the muon tracks are mis-identified as cascades (this assumption corresponds to preliminary results for the ORCA detector at the spectral peak energy used here [58]). Note that without flavor identification, the mass hierarchy can, of course, not be discriminated. However, if one can at least discriminate part of the muon tracks from electromagnetic cascades on a statistical basis, and make use of the fact that the other main background, the hadronic cascades, will be suppressed below the production threshold (where the beam spectrum peaks), the mass hierarchy determination becomes possible; see also the discussions in Refs. [58, 59] for ORCA. Irreducible intrinsic beam ν_e and anti-ν_e backgrounds are included as well. For the energy resolution, we use the same value as for the default setup above. Note, however, that a better resolution may be expected for the signal event sample, since the energy deposition in cascades is in fact easier to measure than in muon tracks. For this analysis, we include the T2K ν_μ disappearance data, since the experiment itself cannot measure the atmospheric oscillation parameters as precisely as in the atmospheric neutrino case.
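The composition of the resulting cascade-like sample under these assumptions can be illustrated with a toy event count. All rates below are hypothetical placeholders, not simulation output; only the 30% track mis-identification fraction is taken from the text.

```python
# Toy composition of the cascade-like data sample under the conservative
# assumptions above: all electromagnetic, hadronic, and NC cascades are
# indistinguishable, and 30% of muon tracks leak into the sample.
# Event numbers are purely hypothetical.

MU_MISID = 0.30   # fraction of muon tracks mis-identified as cascades

def cascade_sample(n_nue_cc, n_nc, n_numu_cc):
    """Cascade-like sample size and its nu_e CC signal purity."""
    total = n_nue_cc + n_nc + MU_MISID * n_numu_cc
    return total, n_nue_cc / total

total, purity = cascade_sample(n_nue_cc=500, n_nc=200, n_numu_cc=1000)
print(f"sample: {total:.0f} events, nu_e CC purity {purity:.0%}")
```

Even with such a diluted sample, a hierarchy-dependent rate difference in the ν_e CC component survives in the total, which is why the statistical flavor separation described above is sufficient.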

One of the critical factors for this measurement is how well the beam spectrum is known. We therefore test two different assumptions:

PINGU, ND

A very short decay pipe of 50 m is assumed, at the expense of flux. However, it is assumed that one or more near detectors (ND) can be used to understand the beam spectrum and backgrounds at the level of 5%. The analysis range spans the full considered energy window.

PINGU, no ND

A considerably longer decay pipe of 250 m is assumed. Since it is harder to deploy near detectors in this case, we assume that the total flux close to the peak can be controlled at the level of 10% without near detectors. In addition, we assume that the analysis range covers only the region around the peak flux, where the flux is relatively large. The reason for this window is that, without a ND, it can hardly be assumed that the high-energy tail, where the flux is more than an order of magnitude lower than at the peak, is representative of the peak. (At high energies there is hierarchy sensitivity which is more or less flat in energy, i.e., the systematics, which are here correlated among all bins, are important. This requires that one understands the spectral shape very well, since, for instance, a spectral tilt error would strongly affect the sensitivity. We therefore only discuss the possibility of a large analysis range in the context of near detector(s).)
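How a fully correlated normalization error of 5% versus 10% enters the fit can be sketched with a minimal pull-term χ². The bin contents below are invented purely for illustration; only the structure of the pull term reflects the analysis described above.

```python
import numpy as np

# Minimal sketch of a fully correlated normalization systematic in a
# pull-term chi^2:
#   chi2(a) = sum_i (O_i - (1+a) P_i)^2 / P_i + (a / sigma)^2,
# minimized over the pull a. All event numbers are invented for illustration.

def chi2_with_norm(observed, predicted, sigma_norm):
    """Minimize the pull chi^2 over the normalization nuisance a on a grid."""
    a = np.linspace(-0.5, 0.5, 2001)[:, None]
    stat = ((observed - (1.0 + a) * predicted) ** 2 / predicted).sum(axis=1)
    return (stat + (a[:, 0] / sigma_norm) ** 2).min()

true_rates = np.array([80.0, 120.0, 60.0])    # hypothetical NH expectation
wrong_rates = np.array([95.0, 105.0, 70.0])   # hypothetical IH expectation
observed = true_rates                          # Asimov data set

# A larger normalization error lets the wrong hypothesis absorb more of the
# rate difference, reducing the hierarchy discrimination:
for sigma in (0.05, 0.10):
    print(f"sigma = {sigma:.2f}: Delta chi^2 = "
          f"{chi2_with_norm(observed, wrong_rates, sigma):.2f}")
```

In this toy the spectral shape difference dominates, so loosening the normalization from 5% to 10% costs only a modest amount of Δχ²; a correlated shape (tilt) error, as discussed above, would be far more damaging.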

Figure 9: The Δχ² for the mass hierarchy discovery as a function of the true δ_CP for the different octants, as given in the plot captions. The dashed curves correspond to the default setup in Table 1 for eight years of PINGU operation using atmospheric neutrinos, the thick solid curves to the neutrino beam to PINGU with and without near detector(s) (ND) after five years of operation, and the thin solid curves to the LBNE experiment with a 10 kt liquid argon detector and five years of operation (2.5 years neutrinos and 2.5 years antineutrinos); see Ref. [10]. The beam to PINGU version with near detector uses only a short 50 m decay pipe and 5% systematics, the one without near detector a 250 m decay pipe and 10% systematics. In addition, the external information from ten years of T2K ν_μ disappearance data is used for the beams.

We show in Fig. 9 the sensitivity to the mass hierarchy, comparing PINGU with atmospheric neutrinos and PINGU with a neutrino beam for the two different setups. The data taking times are different, since it is anticipated that such a beam experiment cannot be realized before 2020; therefore, eight years of operation are assumed for the atmospheric neutrino setup. For comparison, we also show the LBNE experiment with five years of operation, for which, however, an even later starting time of 2022 is already officially anticipated. One can clearly see that the PINGU beam experiment can determine the mass hierarchy at high significance almost everywhere in the parameter space, irrespective of the conservative assumptions for the detector and beam. The versions with and without the near detector perform similarly: one has better control of systematics, the other higher statistics. In fact, the sensitivities are similar to those of LBNE after the same running time, which shows a somewhat stronger dependence on δ_CP. In that case the detector is significantly smaller, but the shorter baseline, higher beam flux, and controlled systematics and flavor identification compensate for that. The results for the inverted hierarchy are qualitatively similar, irrespective of the octant. It is, however, noteworthy that in this case systematics control is somewhat more important than statistics, i.e., the option with near detector performs slightly better.

Finally, note that optimizing the target geometry and decay tunnel length may lead to significantly improved sensitivities of the beam experiment; it is therefore not yet clear how effective this option for the hierarchy sensitivity could ultimately be. Furthermore, it may serve as a backup if the true θ_23 is indeed small or the assumed experiment properties, which are partially not needed for the beam (such as directional resolution), turn out to be overestimated (cf. the conservative setup curves in Fig. 4).

7 Summary and conclusions

We have discussed the mass hierarchy sensitivity of PINGU using cosmic ray interactions in the Earth’s atmosphere and a neutrino beam as neutrino sources, and compared it to that of existing long-baseline and reactor experiments on the same timescale, driven by the physics of NOνA. We have in all cases included the full three flavor oscillation framework with the full parameter correlations and degeneracies, as is state-of-the-art for any beam analysis, by directly simulating PINGU with GLoBES. In addition, we have considered an unprecedentedly large number of systematical errors for the atmospheric analysis.

For the mass hierarchy determination with atmospheric neutrinos, we have shown that conventional estimators for the degenerate solution do not work everywhere in parameter space, which means that correlations other than the one with Δm²₃₁ can be important. In order not to mis-interpret any tension with external data as hierarchy sensitivity, we have tested PINGU in combination with ten years of T2K ν_μ disappearance data in a self-consistent way, with a marginal impact on the hierarchy sensitivity. In fact, we have shown that PINGU alone can measure Δm²₃₁ with an error of about 0.9% (1σ), which means that PINGU may provide the best measurement of this parameter at that time; we have therefore avoided the combination with external data on the atmospheric parameters.

The sensitivity also strongly depends on the true values of the oscillation parameters, such as δ_CP, the θ_23 octant, and the hierarchy. As far as the experiment properties are concerned, exposure, directional resolution, energy resolution, energy threshold, and cascade mis-identification have been found to be the main factors to be optimized. On the other hand, taking into account the full sky coverage and energies up to 50 GeV, a number of systematical errors related to the atmospheric neutrino fluxes, cross sections, and matter density uncertainties can be well controlled. The measurement will therefore be very robust with respect to systematical errors.

Taking into account all factors, a 90% CL to 3σ discovery after three years of operation seems conceivable, including the full parameter degeneracy for the atmospheric neutrino analysis, depending on the true value of δ_CP and the exploitation of the potential for experimental optimization. The sensitivity is complementary in parameter space to that of existing beam and reactor experiments, driven by the NOνA experiment, and in some cases synergistic effects are present which go beyond the simple addition of statistics when the experiments are combined, because of the different parameter space topologies. For instance, a 3σ discovery in 2025 everywhere in the parameter space is possible if the NuMI beam is upgraded by Project X, even with the default setup for PINGU used in this study.

Finally, we have discussed the option to send a beam from any of the major accelerator laboratories in the Northern hemisphere to the South Pole, where the combination of the parametric enhancement from the mantle-core-mantle profile of the Earth, a beam spectrum peaking in the right energy range, and a Megaton-size detector offers a unique physics potential. Using the example of a NuMI-like beam with a significantly shortened decay pipe (50 m including a near detector, or 250 m without near detector), we have demonstrated that a high-significance discovery after five years of operation could be feasible, with a sensitivity comparable to that of the LBNE experiment. This option may be an interesting backup if the true θ_23 is indeed small or the experiment properties, such as directional resolution, turn out to be overestimated. One advantage of this option is that the source location does not yet need to be specified: such a beam could come from FNAL, CERN, JHF, DESY, or RAL. The actual technical implementation requires, however, further study. Perhaps even CP violation studies become possible if the detector can be further upgraded, a concept known as MICA (“Megaton Ice Cherenkov Array”), for which a fiducial mass of order 10 Mt may be reachable already below 1 GeV [60].

In conclusion, we have demonstrated that PINGU can measure the mass hierarchy using atmospheric neutrinos, with the same assumptions and machinery that have been used for beam experiments in the past. However, the final confidence will depend on the experiment parameters and the true oscillation parameter values, and a high confidence level determination may require a significant running time of the order of a few years if the full parameter space degeneracy is included.

Acknowledgments

I would like to thank M. Day, J. Leute, U. Katz, M. Kowalski, X. Qian, E. Resconi, and the members of the PINGU collaboration for useful discussions related to the subjects in this study.

This work has been supported by DFG grants WI 2639/3-1 and WI 2639/4-1, the FP7 Invisibles network (Marie Curie Actions, PITN-GA-2011-289442), and the “Helmholtz Alliance for Astroparticle Physics HAP”, funded by the Initiative and Networking fund of the Helmholtz association.

References

  • [1] DOUBLE-CHOOZ Collaboration, Y. Abe et al., Phys.Rev.Lett. 108, 131801 (2012), arXiv:1112.6353.
  • [2] DAYA-BAY Collaboration, F. An et al., Phys.Rev.Lett. 108, 171803 (2012), arXiv:1203.1669.
  • [3] RENO collaboration, J. Ahn et al., Phys.Rev.Lett. 108, 191802 (2012), arXiv:1204.0626.
  • [4] M. Gonzalez-Garcia, M. Maltoni, J. Salvado, and T. Schwetz, JHEP 1212, 123 (2012), arXiv:1209.3023.
  • [5] G. Fogli et al., Phys.Rev. D86, 013012 (2012), arXiv:1205.5254.
  • [6] D. Forero, M. Tortola, and J. Valle, Phys.Rev. D86, 073012 (2012), arXiv:1205.4018.
  • [7] P. Huber, M. Lindner, T. Schwetz, and W. Winter, JHEP 0911, 044 (2009), arXiv:0907.1896.
  • [8] Y. Itow et al., Nucl. Phys. Proc. Suppl. 111, 146 (2001), hep-ex/0106019.
  • [9] NOvA Collaboration, D. Ayres et al., (2004), arXiv:hep-ex/0503053.
  • [10] P. Coloma, P. Huber, J. Kopp, and W. Winter, Phys. Rev. D87, 033004 (2013), arXiv:1209.5973.
  • [11] C. H. Albright and M.-C. Chen, Phys. Rev. D74, 113006 (2006), arXiv:hep-ph/0608137.
  • [12] L. Wolfenstein, Phys. Rev. D17, 2369 (1978).
  • [13] S. P. Mikheev and A. Y. Smirnov, Sov. J. Nucl. Phys. 42, 913 (1985).
  • [14] S. P. Mikheev and A. Y. Smirnov, Nuovo Cim. C9, 17 (1986).
  • [15] V. Barger et al., Phys.Rev. D74, 073004 (2006), arXiv:hep-ph/0607177.
  • [16] V. Barger, P. Huber, D. Marfatia, and W. Winter, Phys. Rev. D76, 053005 (2007), arXiv:hep-ph/0703029.
  • [17] A. Samanta, Phys.Lett. B673, 37 (2009), arXiv:hep-ph/0610196.
  • [18] A. Ghosh, T. Thakore, and S. Choubey, JHEP 1304, 009 (2013), arXiv:1212.1305.
  • [19] M. Blennow and T. Schwetz, JHEP 1208, 058 (2012), arXiv:1203.3388.
  • [20] V. Barger et al., Phys.Rev.Lett. 109, 091801 (2012), arXiv:1203.6012.
  • [21] Y.-F. Li, J. Cao, Y. Wang, and L. Zhan, (2013), arXiv:1303.6733.
  • [22] Y. Oyama, A. Shimizu, and K. Kohri, Phys.Lett. B718, 1186 (2013), arXiv:1205.5223.
  • [23] D. J. Koskinen, Mod.Phys.Lett. A26, 2899 (2011).
  • [24] IceCube/PINGU Collaboration, K. Clark and D. Cowen, Nucl.Phys.Proc.Suppl. 233, 223 (2012).
  • [25] U. Katz, The ORCA Option for KM3NeT, Talk given at the XV International Workshop on Neutrino Telescopes, Venice, Italy.
  • [26] E. K. Akhmedov, S. Razzaque, and A. Y. Smirnov, JHEP 02, 082 (2013), arXiv:1205.7071.
  • [27] S. K. Agarwalla, T. Li, O. Mena, and S. Palomares-Ruiz, (2012), arXiv:1212.2238.
  • [28] D. Franco et al., JHEP 1304, 008 (2013), arXiv:1301.4332.
  • [29] T. Ohlsson, H. Zhang, and S. Zhou, (2013), arXiv:1303.6130.
  • [30] A. Esmaili and A. Y. Smirnov, (2013), arXiv:1304.1042.
  • [31] M. Ribordy and A. Y. Smirnov, (2013), arXiv:1303.0758.
  • [32] K. Dick, M. Freund, P. Huber, and M. Lindner, Nucl. Phys. B588, 101 (2000), hep-ph/0006090.
  • [33] W. Winter, Phys. Rev. D72, 037302 (2005), hep-ph/0502097.
  • [34] D. Fargion, D. D’Armiento, P. Desiati, and P. Paggi, Astrophys.J. 758, 3 (2012), arXiv:1012.3245.
  • [35] J. Tang and W. Winter, JHEP 1202, 028 (2012), arXiv:1110.5908.
  • [36] P. Huber, M. Lindner, and W. Winter, Comput. Phys. Commun. 167, 195 (2005), hep-ph/0407333, http://www.mpi-hd.mpg.de/lin/globes/.
  • [37] P. Huber, J. Kopp, M. Lindner, M. Rolinec, and W. Winter, Comput. Phys. Commun. 177, 432 (2007), hep-ph/0701187.
  • [38] M. Sajjad Athar, M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys.Lett. B718, 1375 (2013), arXiv:1210.5154.
  • [39] M. Honda, T. Kajita, K. Kasahara, S. Midorikawa, and T. Sanuki, Phys.Rev. D75, 043006 (2007), arXiv:astro-ph/0611418.
  • [40] PINGU collaboration, official plots.
  • [41] P. Huber, M. Lindner, M. Rolinec, and W. Winter, Phys. Rev. D73, 053002 (2006), hep-ph/0506237.
  • [42] W. Winter, Phys. Rev. D78, 037101 (2008), arXiv:0804.4000.
  • [43] R. J. Geller and T. Hara, Phys. Rev. Lett. 49, 98 (2001), hep-ph/0111342.
  • [44] E. Ciuffoli, J. Evslin, and X. Zhang, (2013), arXiv:1305.5150.
  • [45] A. de Gouvea, J. Jenkins, and B. Kayser, Phys. Rev. D71, 113009 (2005), hep-ph/0503079.
  • [46] H. Nunokawa, S. J. Parke, and R. Zukanovich Funchal, Phys.Rev. D72, 013009 (2005), arXiv:hep-ph/0503283.
  • [47] A. de Gouvea and W. Winter, Phys. Rev. D73, 033003 (2006), arXiv:hep-ph/0509359.
  • [48] P. Huber, M. Lindner, and W. Winter, Nucl. Phys. B645, 3 (2002), hep-ph/0204352.
  • [49] C. Backhouse, Status of NOνA, Talk given at NNN2012.
  • [50] P. Huber, M. Lindner, and W. Winter, Nucl. Phys. B654, 3 (2003), hep-ph/0211300.
  • [51] W. Winter, Superbeam FNAL-PINGU?, Analysis and slides presented within the LBNE reconfiguration effort, May 2012. See also talk at TeVPA 2012.
  • [52] P. Huber, M. Lindner, M. Rolinec, T. Schwetz, and W. Winter, Phys. Rev. D70, 073014 (2004), arXiv:hep-ph/0403068.
  • [53] MINOS, E. Ables et al., FERMILAB-PROPOSAL-0875.
  • [54] M. Diwan, M. Messier, B. Viren, and L. Wai, A study of sensitivity in minos, NUMI-L-714 (2001).
  • [55] E. K. Akhmedov, Nucl.Phys. B538, 25 (1999), arXiv:hep-ph/9805272.
  • [56] E. K. Akhmedov, A. Dighe, P. Lipari, and A. Smirnov, Nucl.Phys. B542, 3 (1999), arXiv:hep-ph/9808270.
  • [57] S. Petcov, Phys.Lett. B434, 321 (1998), arXiv:hep-ph/9805262.
  • [58] J. Brunner, (2013), arXiv:1304.6230.
  • [59] C. Lujan-Peschard, G. Pagliaroli, and F. Vissani, (2013), arXiv:1301.4577.
  • [60] S. Böser, M. Kowalski, L. Schulte, N. L. Strotjohann, and M. Voge, (2013), arXiv:1304.2553.