IDS-NF-034

EURONU-WP6-12-48

Optimization of a Very Low Energy Neutrino Factory for the Disappearance Into Sterile Neutrinos

Walter Winter1

Institut für Theoretische Physik und Astrophysik,

Universität Würzburg, 97074 Würzburg, Germany

Abstract

We discuss short-baseline electron and muon neutrino disappearance searches into sterile neutrinos at a Very Low Energy Neutrino Factory (VLENF) with a muon energy between about two and four GeV. A lesson learned from reactor experiments, such as Double Chooz and Daya Bay, is to use near and far detectors with identical technologies to reduce the systematical errors. We therefore derive the physics results from a combined near-far detector fit and illustrate that uncertainties on cross sections × efficiencies can be eliminated in a self-consistent way. We also include the geometry of the setup, i.e., the extension of the decay straight and the muon decay kinematics relevant at the near detector, and we demonstrate that these affect the sensitivities for large Δm², where oscillations already take place in the near detector. Compared to appearance searches, we find that the sensitivity depends on the locations of both detectors and on the muon energy: the near detector should be as close as possible to the source, and the far detector at about 500 to 800 m. In order to exclude the currently preferred parameter region, a minimum number of useful muon decays per polarity is needed at the lower muon energies, or, alternatively, a higher muon energy can be used.

1 Introduction

The test of three-flavor neutrino oscillations in solar, atmospheric, long-baseline, and reactor experiments has so far been very successful, and a non-zero mixing angle θ13 has recently been established by the Daya Bay and RENO reactor experiments at high confidence level [1, 2]. On the other hand, neutrino oscillations at short baselines, i.e., where the atmospheric oscillations have not yet developed, face a tension between the strong constraints from many short-baseline experiments and several observed anomalies which may be described by eV-scale sterile neutrinos. In particular, short-baseline electron neutrino disappearance may cause the anomaly identified in Gallium experiments [3], electron antineutrino disappearance may lead to the lower than predicted reactor antineutrino fluxes [4, 5], and electron antineutrino appearance may be driven by sterile neutrinos in the LSND [6] and MiniBooNE [7] experiments. In the simplest models, one would add one extra sterile generation in a 3+1 scheme to fit these data, with the sterile state separated by a much larger mass squared difference; see, e.g., Ref. [8]. However, it turns out that the tension between MiniBooNE neutrino and antineutrino data (CP violation cannot be described by this model), and the tension between appearance and disappearance data, basically rule out this model; see, e.g., Ref. [9]. Therefore, 3+2 models have been used more recently [10], which allow for CP violation and include additional degrees of freedom. In any of the above models, crucial information comes from the disappearance channels, which may be the “cleanest” channels to measure the oscillation parameters (see Sec. 2 for details). In addition, electron neutrino and antineutrino disappearance searches are needed to directly test the Gallium and reactor anomalies, respectively.

Various new experiments have been proposed to test short-baseline neutrino oscillations; see Appendix A of Ref. [11] for a recent summary of alternatives. In this study, we focus on a very low energy neutrino factory (VLENF), which is a neutrino factory with a low muon energy of about two to four GeV that does not require muon cooling or muon acceleration [12], and could be the first phase of a staged neutrino factory program. Compared to muon neutrino appearance, discussed in Ref. [12], we discuss the electron and muon neutrino (antineutrino) disappearance channels. These are qualitatively different from the appearance channels because of different systematics: while the appearance channels are limited by (charge mis-identification and neutral current) backgrounds, the disappearance channels suffer from unknown cross sections × efficiencies. Therefore, similar to reactor experiments such as Double Chooz or Daya Bay, near and far detectors have been proposed for the high energy neutrino factory to control the systematical errors [13]. In addition, near detectors very close to the muon storage ring experience geometry effects from the extension of the decay straight and the beam divergence [14]. In Ref. [13], the geometry effects from the decay straight have been taken into account, which lead to a smearing of the oscillation probabilities, whereas the detectors were assumed to be far enough away from the source (or small enough) not to experience any detector geometry effects (far distance limit). However, for the substantially lower muon energy of the VLENF, the beam will be wider from the muon decay kinematics alone, which means that the detector geometry has to be taken into account. In addition, the near and far detectors are supposed to be as similar as possible, which is difficult to reconcile with different detector diameters. Therefore, we simultaneously integrate over the straight geometry and the detector surface area in this study, and show the impact of these effects. We discuss electron neutrino disappearance for most of this study, since it is directly relevant to test some of the anomalies, and we point out the differences for muon neutrino disappearance at the end.

This study is organized as follows: we recapitulate the phenomenology of sterile neutrino disappearance searches in Sec. 2. Then in Sec. 3, we introduce our setup, systematics treatment, and geometry treatment. In Sec. 4, we illustrate the impact of the beam and detector geometry, and of the systematics assumptions. Then in Sec. 5, we perform a two-baseline optimization of near and far detectors. We show our main results for electron and muon neutrino disappearance in Sec. 6, and we conclude in Sec. 7.

2 Sterile neutrino phenomenology

In most experiments, the disappearance or appearance of neutrinos at short distances is described in the two-flavor limit

P_{\alpha\alpha} = 1 - \sin^2 2\theta \, \sin^2\!\left(\frac{\Delta m^2 L}{4E}\right)    (1)
P_{\alpha\beta} = \sin^2 2\theta \, \sin^2\!\left(\frac{\Delta m^2 L}{4E}\right)    (2)

where Δm² is the relevant mass squared difference and θ is an effective mixing angle. However, this description cannot be used for a self-consistent description of a multi-channel experiment. In order to demonstrate that, consider the simplest possible case, a 3+1 framework with one extra sterile neutrino. Although this case is basically excluded, it is useful to illustrate some of the considerations to be taken into account here, which also apply to more sophisticated models. The parameterization-independent probabilities in the short-baseline limit, where the oscillations driven by the standard mass squared differences have not yet developed, can be written as

P_{ee} \simeq 1 - 4\,|U_{e4}|^2 \left(1 - |U_{e4}|^2\right) \sin^2\!\left(\frac{\Delta m_{41}^2 L}{4E}\right)    (3)
P_{\mu\mu} \simeq 1 - 4\,|U_{\mu 4}|^2 \left(1 - |U_{\mu 4}|^2\right) \sin^2\!\left(\frac{\Delta m_{41}^2 L}{4E}\right)    (4)
P_{e\mu} \simeq 4\,|U_{e4}|^2 |U_{\mu 4}|^2 \sin^2\!\left(\frac{\Delta m_{41}^2 L}{4E}\right)    (5)

where we show the most interesting channels for the Neutrino Factory. From this simple model, it is immediately clear that LSND-motivated electron or muon neutrino appearance, which requires both U_{e4} ≠ 0 and U_{μ4} ≠ 0, must be accompanied by electron and muon neutrino disappearance, and that the disappearance searches provide important and strict constraints on theoretical models. In particular, the disappearance probabilities are proportional to |U_{α4}|², the appearance probabilities to |U_{e4}|² |U_{μ4}|², i.e., the appearance probabilities are suppressed by two more powers of the new mixing angles. In a specific parameterization, electron and muon disappearance can be described by the two-flavor limit Eq. (1) with different mixing angles (related to U_{e4} and U_{μ4}, respectively), whereas the appearance probability in Eq. (2) is proportional to the product of these mixing angles; see, e.g., Ref. [15] for a direct comparison. This means that Eq. (1) or Eq. (2) can be used independently to effectively describe an individual disappearance or appearance channel. If, however, the information from different channels is to be combined, there will be a model-dependent (but well defined) interplay among the channels.
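To make these scalings concrete, the following minimal numerical sketch (not part of the original analysis; all parameter values are purely illustrative) evaluates the short-baseline 3+1 probabilities of Eqs. (3)-(5) and shows the two-extra-powers suppression of appearance relative to disappearance:

import numpy as np

def sbl_probabilities(Ue4, Umu4, dm41_eV2, L_km, E_GeV):
    """Short-baseline 3+1 probabilities, Eqs. (3)-(5)."""
    # 1.267 converts dm^2 [eV^2] * L [km] / E [GeV] into the phase dm^2 L / (4E)
    s2 = np.sin(1.267 * dm41_eV2 * L_km / E_GeV) ** 2
    P_ee   = 1.0 - 4.0 * Ue4**2 * (1.0 - Ue4**2) * s2      # Eq. (3)
    P_mumu = 1.0 - 4.0 * Umu4**2 * (1.0 - Umu4**2) * s2    # Eq. (4)
    P_emu  = 4.0 * Ue4**2 * Umu4**2 * s2                   # Eq. (5)
    return P_ee, P_mumu, P_emu

# Illustrative values: small mixings, eV-scale mass splitting
print(sbl_probabilities(Ue4=0.15, Umu4=0.15, dm41_eV2=1.0, L_km=0.5, E_GeV=2.0))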

Now what are the consequences for the Neutrino Factory? For electron or muon neutrino appearance, the main limitations are charge mis-identification3 and neutral current backgrounds. At least for small mixing angles, other systematics, such as the flux and cross section uncertainties, are less relevant. That is quite fortunate, since it would probably be hard to quantify these systematics. Consider, for instance, a near detector to measure the cross sections for muon neutrinos. For large enough Δm², oscillations may already take place in the near detector. Unless it is clearly defined how the theoretical interplay between muon neutrino disappearance at the near detector and muon neutrino appearance at the far detector works, one cannot disentangle the cross sections from oscillation physics in that case. Thus, an effective two-flavor description is not sufficient to obtain reliable information on these systematics.

For the disappearance channels, the situation is very different. Here electron and muon neutrino disappearance can be described by Eq. (1) in the effective two-flavor limit with different effective mixing angles. Because small deviations from unity are to be measured, a lesson learned from reactor experiments [16, 17], such as Double Chooz or Daya Bay, has been to use (more or less) identical near and far detectors to cancel systematical errors. Compared to reactor experiments, where the flux is the unknown, the cross section × efficiency uncertainties are the dominant systematics to be canceled by the near detector. From the oscillation framework point of view, since both near and far detectors measure the same flavors, the oscillations in both detectors can be described by the same probabilities. In the 3+1 model, one can simply use Eq. (1) for electron or muon neutrino disappearance. In a 3+2 model, the translation into the model parameters is more complicated. Nevertheless, it is clear that any detected difference between near and far rates (or neutrino and antineutrino rates) must be due to oscillations, and cannot come from unknown cross sections or efficiencies. Very interestingly, the near and far detectors may change their roles as a function of Δm². While for small Δm² the far detector measures the oscillations and the near detector the normalization, for large Δm² the near detector measures the oscillations and the far detector, where the oscillations average out, the normalization [13]. We choose an effective two-flavor framework in the following to quantify the performance, and we assume CPT invariance for the sake of simplicity (the disappearance channels are always CP invariant). Note that CPT invariance tests in the disappearance channels are nevertheless well motivated; see Ref. [13] for the Neutrino Factory. We do not perform a combined fit of appearance and disappearance channels, or of electron and muon disappearance channels, since such a fit can only be performed within a specific model. Note, however, that the final experiment can of course be used to test more complicated scenarios.

3 Setup and simulation techniques

Figure 1: Possible geometry of the VLENF near-far setup (not to scale).

For the simulation, we choose a near-far detector setup with a near detector and a larger far detector of given fiducial mass × efficiency; see Fig. 1 for a possible geometry. Since the detector geometry will be important for close distances, we assume cylindrical shapes with given diameters (perpendicular to the beam axis). We test two different muon energies, with a correspondingly chosen length s of the decay straight. For very short baselines and line-like neutrino sources, the baseline is ill-defined [14]; therefore, the detector locations are specified by the distance d to the end of the decay straight. For the integrated luminosity, we use a fixed number of useful muon decays per polarity, and a different number in some cases where explicitly specified. The detection threshold and the energy resolution of the detectors are chosen separately for electron and muon neutrino (antineutrino) disappearance, motivated by a totally active scintillator or a liquid argon detector. Neutral current backgrounds are included, which have, however, only a very small effect. Note that a magnetization of the detector for the disappearance searches may or may not be necessary, depending on the underlying physics model, i.e., on what happens to the other flavor. For instance, in the 3+1 model discussed above, one may have large muon neutrino disappearance driven by non-zero U_{μ4}, see Eq. (4), whereas appearance vanishes at the same time for U_{e4} = 0, see Eq. (5). Since we only use the disappearance channels, we assume that charge mis-identification is either under control by a magnetic field, or suppressed by the underlying physics. Note that the chosen parameters are consistent with the ones currently discussed by the VLENF study group [18]; see also Ref. [12].

For the geometric treatment of the neutrino line source and detector geometry, we follow Refs. [14, 13], and for the simulation, we use the GLoBES (General Long Baseline Experiment Simulator) software [19, 20]. GLoBES (up to version 3.1) assumes that the detectors are far enough away from the source to treat the source as a point source, and that the detectors are small compared to the beam divergence. If we start from the differential event rate from a point source without oscillations, as used in GLoBES, we can take into account the extension of the straight and the detector, and the effect of oscillations, by the averaged event rate

\frac{dN}{dE}(E) = \frac{1}{s}\int_{d}^{d+s} dL \; \hat\epsilon(E,L)\, P_{\alpha\beta}(E,L)\, \frac{dN^{\mathrm{point}}}{dE}(E,L)    (6)

Here ε̂(E, L) ≡ A_eff(E, L)/A_det parameterizes the integration over the detector geometry for a fixed baseline L, energy E, and flavor, where A_eff is the (energy and flavor dependent) effective area of the detector, and A_det is the actual surface area. Note that the oscillation probability appears inside the integral, since different parts of the decay straight contribute differently. In addition, note that it is assumed that the differential muon decay rate per (straight) length is equal along the decay straight. Since dN^point/dE ∝ 1/L², we can re-write this as

\frac{dN}{dE}(E) = \langle \hat\epsilon \times P \rangle(E)\, \frac{dN^{\mathrm{point}}}{dE}(E, L_{\mathrm{eff}})    (7)

with the average efficiency ratio times probability

\langle \hat\epsilon \times P \rangle(E) = \frac{L_{\mathrm{eff}}^2}{s}\int_{d}^{d+s} \frac{dL}{L^2}\; \hat\epsilon(E,L)\, P_{\alpha\beta}(E,L)    (8)

and the effective baseline

L_{\mathrm{eff}} = \sqrt{d\,(d+s)}    (9)

which is defined such that ⟨ε̂ × P⟩ ≡ 1 for ε̂ ≡ 1 and P ≡ 1. As a consequence, one would use the usual detector definitions in GLoBES with the effective baseline L_eff, with the event rates corrected by Eq. (8). We compute Eq. (8) directly in a user-defined probability engine in GLoBES, including both the neutrino source and detector geometry. For the sake of simplicity, we assume that the (machine-dependent) beam divergence is smaller than the beam spread given by the muon decay kinematics. In this ideal case, one can easily compute ⟨ε̂ × P⟩ more or less analytically, independent of machine-dependent effects [14]. We will demonstrate that there is a substantial effect coming from the extension of the beam compared to the detector, whereas machine-dependent effects may lead to additional smearing if the divergence is not under control. Note that the intrinsic effect of the muon decay kinematics cannot be removed, even in an ideal machine.
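As a cross-check of this prescription, the averaging of Eq. (8) and the effective baseline of Eq. (9) can also be evaluated numerically. The sketch below (not from the original analysis) does this for a two-flavor disappearance probability; all numbers are illustrative, and ε̂ ≡ 1 corresponds to the "straight averaged" case without the detector-geometry factor:

import numpy as np
from scipy.integrate import quad

def P_dis(E_GeV, L_km, sin2_2theta=0.1, dm2_eV2=1.0):
    """Two-flavor disappearance probability, Eq. (1)."""
    return 1.0 - sin2_2theta * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

def averaged_rate_factor(E_GeV, d_km, s_km, eps_hat=lambda E, L: 1.0, **osc):
    """Return <eps_hat x P>(E) of Eq. (8) and L_eff of Eq. (9).

    eps_hat(E, L) = A_eff/A_det; eps_hat = 1 neglects the detector geometry.
    """
    L_eff = np.sqrt(d_km * (d_km + s_km))                                  # Eq. (9)
    integrand = lambda L: eps_hat(E_GeV, L) * P_dis(E_GeV, L, **osc) / L**2
    integral, _ = quad(integrand, d_km, d_km + s_km)
    return L_eff**2 / s_km * integral, L_eff                               # Eq. (8)

# Illustrative near detector: 50 m behind a 100 m straight, large dm^2
print(averaged_rate_factor(E_GeV=1.0, d_km=0.05, s_km=0.1, dm2_eV2=10.0))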

For the systematics treatment, we follow the reactor experiments with two (or more) detectors, rather than the usual neutrino factory description; see Refs. [21, 13] for details. From the reactor experiments, we know that short-baseline electron neutrino disappearance is mostly affected by the signal normalization uncertainty (see, e.g., Refs. [17, 21] for reactor experiments). Here, compared to the reactor experiments, our signal normalization error is not dominated by the flux, which may be known at the level of 0.1% using various beam monitoring devices [22], but by the knowledge of the cross sections × efficiencies. Because our neutrino energies span the cross section regimes from quasi-elastic scattering over resonant pion production to deep inelastic scattering, it is difficult to estimate to which degree the cross sections will be known at the time of the measurement. The efficiencies depend on the detection processes and detector properties, which means that their uncertainties are also difficult to pin down. For reactor experiments, on the other hand, the inverse beta decay cross sections are well known, but the fluxes are relatively uncertain. Both classes of experiments have, however, in common that these uncertainties can be controlled by using detectors as identical as possible. In our case, where the near and far detector masses are different, the far detector may consist of five modules identical to the near detector, as illustrated in Fig. 1. We neglect the extension of the detectors along the beam axis, since it is expected to be much smaller than the extension of the decay straight. However, in practice, the location of the interaction vertex can be measured to some degree.

We adopt the most conservative case for systematics, which is that the cross sections × efficiencies (× flux) are fully uncorrelated among the bins, but fully correlated between the near and far detectors.4 This assumption is conservative because it corresponds to cross sections × efficiencies with an unknown shape error, where the shape is to be measured by the near detector. Unless noted otherwise, we assume that the cross sections × efficiencies are only known to the level of 10% (within each bin), where even larger errors do not matter in the oscillation region. In addition, we use a normalization error uncorrelated between near and far detectors, but fully correlated among the bins, which may come from a fiducial mass uncertainty. This error is, for reactor experiments, believed to be controlled below the percent level. We use the value from Ref. [21], whereas Daya Bay claims that it can do significantly better [1]. Since it depends on the detector properties and detection interactions, a more conservative choice seems reasonable. Finally, the backgrounds are assumed to be known within 35% and the energy calibration to 0.5%, uncorrelated between the detectors. In the next section, we will discuss where these uncertainties are important.
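To illustrate how this correlation structure enters the fit, a strongly simplified Gaussian pull-term χ² is sketched below (this is not the actual GLoBES implementation, and the error values and rates are placeholders): the per-bin shape nuisances a_i are shared by the near and far detectors, whereas the normalization nuisances b_N and b_F are independent.

import numpy as np
from scipy.optimize import minimize

def chi2_near_far(data_N, data_F, pred_N, pred_F, sig_shape=0.10, sig_norm=0.01):
    """Simplified pull chi^2 for a near-far fit (illustrative only).

    a[i]     : per-bin shape nuisances (cross section x efficiency),
               fully correlated between near and far detectors.
    b_N, b_F : normalization nuisances, uncorrelated between the detectors.
    The default error values are placeholders, not the paper's exact inputs.
    """
    nbins = len(data_N)

    def cost(x):
        a, b_N, b_F = x[:nbins], x[nbins], x[nbins + 1]
        t_N = pred_N * (1.0 + a) * (1.0 + b_N)
        t_F = pred_F * (1.0 + a) * (1.0 + b_F)
        stat = np.sum((data_N - t_N) ** 2 / data_N) + np.sum((data_F - t_F) ** 2 / data_F)
        pulls = np.sum((a / sig_shape) ** 2) + (b_N / sig_norm) ** 2 + (b_F / sig_norm) ** 2
        return stat + pulls

    return minimize(cost, np.zeros(nbins + 2), method="BFGS").fun

# Toy rates (placeholders); far detector rates reduced by the solid angle
rates_N = np.array([800.0, 1200.0, 900.0, 500.0, 200.0])
print(chi2_near_far(rates_N, 0.02 * rates_N, 0.98 * rates_N, 0.02 * 0.95 * rates_N))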

4 Impact of geometry and systematics

Figure 2: Exclusion region in sin²2θ and Δm² (right hand sides of curves) for electron neutrino disappearance for different geometry assumptions (left panel) and systematics assumptions (right panel); see main text for details (90% CL, 2 d.o.f.). The curve “no systematics” represents a single detector using statistics only, whereas the other curves correspond to near-far detector setups, where the red thick curves include (conservative) full systematics and geometry effects. The standard muon energy, luminosity, and detector distances are used.

Let us now study the impact of geometry and systematics, using our standard muon energy and near and far detector distances. For most of the following, we focus on electron neutrino (antineutrino) disappearance, and discuss the differences to muon neutrino disappearance at the end. In Fig. 2, left panel, the exclusion region in sin²2θ and Δm² (right hand sides of curves) is shown for different geometry assumptions. As we will demonstrate below, this setup is optimized for not too small values of Δm². The curve “no systematics” represents the statistics limit of the far detector only, without systematics or backgrounds. It is simulated in the point source and far distance approximations, which means that the oscillation probability is evaluated at the effective baseline computed with Eq. (9), and that ε̂ ≡ 1 in Eq. (8), respectively, as is usually done for far detectors. The curve “point source” is obtained for the same geometry assumptions, but for a near-far detector simulation including full systematics. The first peak of the sensitivity, at smaller Δm², corresponds to the far detector, as can be easily seen. In this case, the near detector measures the normalization, the far detector the oscillation. The near detector sensitivity peaks at larger Δm², where, however, oscillations are still present in the far detector (see “no systematics” curve). Therefore, the optimal sensitivity is reached at somewhat larger values of Δm², where the oscillations in the far detector average out and the cross sections × efficiencies are safely measured in the far detector. In this case, near and far detectors swap their roles. This swapping is also the reason why it is difficult to obtain reliable sensitivity predictions from effective far detector simulations only, especially for large Δm², where oscillations take place in the near detector. As the next step in Fig. 2, left panel, we take into account the extension of the decay straight in the curve “straight averaged”, which leads to some averaging in the region the near detector is most sensitive to. This case corresponds to using Eq. (8) with ε̂ ≡ 1. The final result, curve “straight and detector averaged”, shows the additional impact of the detector geometry, i.e., the full Eq. (8). In this case, the sensitivity for large Δm², coming from the near detector, almost vanishes. The reason is that the detected spectrum effectively peaks at lower energies due to the muon decay kinematics, to which the near detector (but not the far detector) is sensitive. For very large Δm², the oscillations average out in both near and far detectors, and the sensitivity is given by the externally imposed error of 10%.

We have also tested whether one can reduce the effect of the straight averaging by the use of two beam current transformers (BCTs), one before and one after the straight. One may assume that the number of muon decays along the straight, which is proportional to the difference between the two beam currents, is exponentially distributed along the straight. However, already at the lowest relevant muon energies, the mean decay length of the boosted muon is about 6 km, much longer than the decay straight, which means that the effect is small and not visible in the sensitivities.
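A back-of-the-envelope check of this statement is sketched below (the straight length used here is a placeholder, not the value adopted in the setup, and the energies are illustrative):

import numpy as np

M_MU_GEV = 0.1056583745   # muon mass [GeV]
CTAU_MU_M = 658.638       # c * tau_mu [m]

def muon_decay_length_m(E_mu_GeV):
    """Mean decay length of a relativistic muon in the lab frame."""
    gamma = E_mu_GeV / M_MU_GEV
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return gamma * beta * CTAU_MU_M

for E_mu in (1.0, 2.0, 4.0):                   # illustrative muon energies [GeV]
    lam = muon_decay_length_m(E_mu)
    s = 100.0                                   # placeholder straight length [m]
    # fractional variation of the decay density over the straight
    print(E_mu, round(lam), 1.0 - np.exp(-s / lam))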

In the right panel of Fig. 2, we show the impact of different systematics assumptions, where again the “no systematics” and full systematics (“10% shape error”) curves are shown for reference. First of all, we reduce the systematical errors one by one. As expected, a smaller shape error (curve “2% shape error”) improves the sensitivity for large Δm², where the oscillations in near and far detectors are averaged out and this systematics dominates. None of the other systematics has an effect on the sensitivity if improved separately. Only if all of the systematical errors can be controlled better is the curve “low systematics” obtained.5 Since it is difficult to say how realistic this case is, we rely on our standard (more conservative) values in the following, unless explicitly stated otherwise. In Fig. 2, we also show the result for the far detector only (gray dashed-dotted curve), from which one can see that the near detector is necessary to maintain the sensitivity for low Δm² in the presence of the systematical errors. We include the decay straight and detector geometries in the following calculations, as well as the near detector, where the full systematics curve (“10% shape error”) represents our standard values.

5 Two-baseline optimization

Figure 3: Left panel: Optimal detector distances for electron neutrino disappearance at the lower muon energy and the standard number of useful muon decays per polarity. The contours correspond to sensitivities in log₁₀ sin²2θ down to -1.9, -1.95, -1.5, and -1.2 for the depicted values of Δm² (90% CL, 2 d.o.f.). Right panel: Sensitivities in sin²2θ and Δm² for the “optimal” setups marked in the left panel. For comparison, setup A is also shown with a reduced number of useful muon decays per polarity, and the best-fit region from Ref. [4] (Fig. 8) is shown as a gray-shaded region (99% CL).

Whereas the muon energy is limited by other constraints, such as the accelerator and storage ring, the locations of the two detectors can be almost freely chosen. We therefore show in Fig. 3, left panel, the two-baseline optimization for electron neutrino disappearance. The different regions correspond to optimal detector locations for the depicted values of Δm², where the respective reaches in sin²2θ are given in the figure caption. In some cases, two optimal regions are found, where two central choices are marked by the points A and B. Point A lies within a region where the near detector is as close as possible to the source, and the far detector at a distance between about 500 and 800 m. Point B corresponds to a longer far detector baseline, which helps for small values of Δm². However, a somewhat larger near detector distance is then preferred, where the near detector also adds to the sensitivity directly. The larger Δm² is, the smaller the preferred far detector distances, where in all cases the near detector may be as close as possible to the source. Point C is representative for a region with better sensitivity at large Δm². The asymmetry in Fig. 3 (left panel) with respect to the symmetry axis comes from the different detector masses. For equal detector masses, we do not find any qualitative differences apart from the plot becoming symmetric with respect to this axis.

The results in the sin²2θ-Δm² plane are shown for the marked setups in Fig. 3, right panel. Setup B has the best sensitivity for small values of Δm², but for larger Δm² it is not optimal. Setup C is best for large Δm², as expected, whereas setup A is a good compromise between the small and large Δm² sensitivities. Therefore, we have chosen it for reference. In order to check whether the chosen setups match the needs for electron neutrino disappearance, we show the best-fit region from Ref. [4] (Fig. 8) as a gray-shaded region for reference (99% CL). One can easily read off the figure that all setups can exclude this best-fit region very well, with setup C actually covering the largest part. Note again that the coverage at very large Δm² is limited by the external knowledge on cross sections × efficiencies for the VLENF, whereas the flatness of the best-fit region at large Δm² simply means that the reactor experiments cannot resolve the oscillations (which would be necessary to exclude this part). Therefore, one should probably not overemphasize this part. For reference, we also show in Fig. 3, right panel, the curve for point A with 10 useful muon decays (per polarity) only, as different luminosities are currently being discussed. One can clearly read off the figure that these statistics are not sufficient to fully exclude the interesting part of the best-fit region at lower values of Δm².

Figure 4: Left panel: Optimal detector distances for electron neutrino disappearance at the higher muon energy and the standard number of useful muon decays per polarity. The contours correspond to sensitivities in log₁₀ sin²2θ down to -2.15, -2.5, -1.85, and -1.45 for the depicted values of Δm² (90% CL, 2 d.o.f.). Right panel: Sensitivities in sin²2θ and Δm² for the “optimal” setups marked in the left panel. For comparison, setup D is also shown with a reduced number of useful muon decays per polarity, and the best-fit region from Ref. [4] (Fig. 8) is shown as a gray-shaded region (99% CL).

In Fig. 4, we perform a similar analysis for the higher muon energy. In the left panel (two-baseline optimization), only point A is chosen at the same position as for the lower muon energy. The qualitative results of the two-baseline optimization are, however, similar to the above case, with the exception that the optimal region with a long baseline has disappeared from the chosen baseline window. In addition, for very large Δm², somewhat longer baselines are preferred to avoid the geometry effects. In the right panel, the sensitivities for the chosen points are shown, with rather similar results. Again, point A appears to be a good compromise between the small and large Δm² sensitivities, but point D performs better for small Δm². The absolute sensitivities are significantly better than for the lower muon energy, a result qualitatively different from the appearance optimization in Ref. [12], because statistics are important for the disappearance channels. In this case, even the low luminosity curve with 10 useful muon decays covers the relevant parameter space. In summary, for electron neutrino disappearance, either the higher luminosity or the higher muon energy is required to exclude the discussed best-fit parameter space region.

6 Results and comparison

Figure 5: Left panel: Exclusion region in sin²2θ and Δm² (right hand sides of curves) for electron neutrino disappearance with the standard number of useful muon decays per polarity for the two muon energies (point A and point D, respectively) at the 90% CL, 2 d.o.f. The orange-dashed curve shows the result for an improved (2%) shape error. For comparison, two reference setups are shown: a radioactive ion facility [23] (dashed-dotted red curve in Fig. 6 therein; Li ions per second) and a low gamma beta beam [24] (red curve in Figs. 4 and 5 therein). Right panel: Comparison between muon neutrino disappearance (solid) and electron neutrino disappearance (dotted) for the three test points (standard number of useful muon decays per polarity; 90% CL, 2 d.o.f.). The combined SciBooNE and MiniBooNE muon neutrino disappearance result from Ref. [25] is shown for comparison.

We summarize our results in Fig. 5 for electron neutrino disappearance (left panel) and muon neutrino disappearance (right panel). For electron neutrino disappearance (left panel), optimized setups for the two muon energies are compared. As discussed above, the sensitivity for the higher muon energy is significantly better than for the lower one, but both setups can in principle cover the relevant parameter region. The coverage of the large Δm² region, marked “systematics limit”, depends on the assumed external knowledge of cross sections × efficiencies. The modification for an improved 2% shape error is shown as a dashed curve in the same color. In addition, we show two curves for alternative approaches to electron neutrino (antineutrino) disappearance measurements. One example is a low gamma beta beam [24], shown as a dashed curve. This setup is in a way very similar to ours, both in terms of the oscillation phase probed and the beam geometry. For the detection reaction, however, inverse beta decay is used, and thus the systematical error is assumed to be controllable at the level of 1% (ours: 10%). Furthermore, this detection process is only sensitive to electron antineutrinos. Another example, shown as a dashed-dotted curve, is the radioactive ion facility in Ref. [23]. Here the ions are injected into a detector. Again, the systematical error is assumed to be 1%. The final result depends on the ion intensity, where we have shown the most aggressive scenario in Fig. 5 (the highest discussed number of ions per second). Here the main issue seems to be the coverage of the low Δm² region, which requires relatively long baselines (which cannot be realized within such a detector).

For muon neutrino disappearance (right panel, solid curves), we have used the same fiducial masses × efficiencies for the sake of simplicity. Therefore, the main differences to electron neutrino disappearance are the beam spectrum peaking at higher energies (for the same muon energy) and the better energy resolution. For the optimization, we have not found any qualitative differences compared to electron neutrino disappearance, apart from the fact that slightly longer baselines are preferred, especially for point B in Fig. 3. For the sake of consistency, we therefore show the same optimization points in Fig. 5 as in Fig. 3, and we show electron neutrino disappearance (dotted curves) for comparison. One can easily see that muon neutrino disappearance has a slightly better absolute performance, which mainly comes from the higher beam energies. In order to compare the absolute performance to existing experiments, we show the combined SciBooNE and MiniBooNE muon neutrino disappearance result from Ref. [25], which was obtained in a similar spirit, as a thin solid curve. The VLENF could improve this by about an order of magnitude.

Figure 6: Contribution of the neutrino and antineutrino modes compared to the total result assuming CPT invariance (solid curves) for electron neutrino disappearance (left panel) and muon neutrino disappearance (right panel) at the 90% CL, 2 d.o.f. In both cases, the standard number of useful muon decays per polarity and optimization point A have been used.

So far we have used the neutrino and antineutrino channels simultaneously, assuming CPT invariance. However, since the earlier mentioned anomalies appear for specific polarities, the separate sensitivities for neutrino and antineutrino disappearance may be important for some models. We therefore show in Fig. 6 the contributions of the neutrino and antineutrino modes, together with the result assuming CPT invariance. As can be clearly seen, similar limits can be obtained for the separate neutrino and antineutrino channels. The antineutrino sensitivities are somewhat weaker than the neutrino sensitivities due to the smaller cross sections. Nevertheless, CPT invariance (or other consistency) tests can be easily performed; see Ref. [13] for details. This is a significant advantage over many other experiments, in which typically either the neutrino channels (beam experiments) or the antineutrino channels (reactor experiments) dominate.

The discussed alternative setups represent only a limited selection of the ideas for sterile neutrino searches; see Appendix A of Ref. [11] for a more complete list. However, none of the proposed options seems to be able to compete with the proposed disappearance searches at the VLENF.

7 Summary and conclusions

We have studied short-baseline electron and muon neutrino disappearance at a VLENF with muon energies of about two to four GeV. Compared to the appearance channels, where backgrounds limit the sensitivities, the disappearance channels suffer from the limited knowledge of cross sections × efficiencies. We have therefore chosen a setup similar to reactor experiments, such as Double Chooz and Daya Bay, using a near and a far detector, in which the unknowns can be measured in a self-consistent way. For the systematics, we have adopted the most conservative case of a shape error fully uncorrelated among the bins, but fully correlated between the near and far detectors. In addition, we have included the extension of the decay straight and the beam divergence from the muon decay kinematics, which affect the sensitivity at very large Δm², coming from the near detector. Note that any additional machine-dependent divergence will add to the muon decay systematics and lead to some additional averaging. However, the muon decay kinematics cannot be eliminated, even if the other systematics might be improved.

We have performed a two-baseline optimization of the setup, where we have identified optimal points depending on the value of Δm². From the different options, we have chosen a setup with the near detector as close as possible to the source and the far detector at a distance between about 500 and 800 meters, which is consistent with the optimization for appearance [12] and a good compromise between the small and large Δm² sensitivities. As far as the minimal luminosity and muon energy are concerned, we have found that a minimum number of useful muon decays per polarity is needed at the lower muon energy, or, alternatively, a higher muon energy, in order to outperform practically any other proposed alternative setup and to test the relevant parameter space. That is different from the appearance channel optimization, where lower luminosities may be sufficient and the muon energy hardly matters above a certain minimum [12]. Note that the VLENF setup can measure both electron and muon neutrino disappearance, for both neutrinos and antineutrinos. We have also demonstrated that the proposed setup is practically insensitive to the external knowledge on cross sections and efficiencies for not too large Δm², whereas the sensitivity for larger Δm² depends on the systematics assumptions, since the oscillations average out in both near and far detectors.

In conclusion, it is well known from models of sterile neutrinos that the disappearance channels provide complementary and important information to constrain these models. In the 3+1 case, the tension between the appearance and disappearance data of various experiments, and between neutrino and antineutrino appearance in MiniBooNE, has basically ruled out this model. Apart from the direct tests of the disappearance anomalies, better information from both electron and muon neutrino disappearance will be needed for more complicated scenarios. These channels may be uncorrelated in the underlying physics model, as is evident already in the 3+1 case. The VLENF can provide this information if it is designed similar to the reactor experiments, with near and far detectors as similar as possible.

Acknowledgments

I would like to thank Alan Bross and Christopher Tunnell for useful discussions, and I would like to acknowledge support from DFG contracts WI 2639/3-1 and WI 2639/4-1.

Footnotes

  1. Email: winter@physik.uni-wuerzburg.de
  3. For instance, for neutrino production by , one has to distinguish a few charged current events from the dominant event rate by charge-identification of the produced muons (anti-muons) by means of a magnetic field.
  4. We use 17 bins with bin widths following the energy resolution of the detector. To avoid aliasing effects, we use the built-in filter of GLoBES on the oscillation probability.
  5. Low syst.: Fiducial volume/normalization error 0.1%, shape error 2%, calibration error 0.1%, background error 10%.

References

  1. Daya Bay collaboration, F. An et al., arXiv:1203.1669.
  2. RENO collaboration, J. K. Ahn et al., arXiv:1204.0626.
  3. M. A. Acero, C. Giunti, and M. Laveder, Phys. Rev. D78, 073009 (2008), arXiv:0711.4222.
  4. G. Mention et al., Phys. Rev. D83, 073006 (2011), arXiv:1101.2755.
  5. P. Huber, Phys. Rev. C84, 024617 (2011), arXiv:1106.0687.
  6. LSND, A. Aguilar et al., Phys. Rev. D64, 112007 (2001), hep-ex/0104049.
  7. The MiniBooNE Collaboration, A. Aguilar-Arevalo et al., Phys. Rev. Lett. 105, 181801 (2010), arXiv:1007.1150.
  8. C. Giunti and M. Laveder, Phys. Rev. D84, 093006 (2011), arXiv:1109.4033.
  9. M. Maltoni and T. Schwetz, Phys. Rev. D76, 093005 (2007), arXiv:0705.0107.
  10. J. Kopp, M. Maltoni, and T. Schwetz, Phys. Rev. Lett. 107, 091801 (2011), arXiv:1103.4570.
  11. K. Abazajian et al., Light sterile neutrinos: A white paper, http://cnp.phys.vt.edu/white_paper/, to appear.
  12. C. D. Tunnell, J. H. Cobb, and A. D. Bross, arXiv:1111.6550.
  13. C. Giunti, M. Laveder, and W. Winter, Phys.Rev. D80, 073005 (2009), arXiv:0907.5487.
  14. J. Tang and W. Winter, Phys.Rev. D80, 053001 (2009), arXiv:0903.3039.
  15. D. Meloni, J. Tang, and W. Winter, Phys.Rev. D82, 093008 (2010), arXiv:1007.2419.
  16. H. Minakata, H. Sugiyama, O. Yasuda, K. Inoue, and F. Suekane, Phys. Rev. D68, 033017 (2003), hep-ph/0211111.
  17. P. Huber, M. Lindner, T. Schwetz, and W. Winter, Nucl. Phys. B665, 487 (2003), hep-ph/0303232.
  18. A. Bross, private communication.
  19. P. Huber, M. Lindner, and W. Winter, Comput. Phys. Commun. 167, 195 (2005), hep-ph/0407333, http://www.mpi-hd.mpg.de/lin/globes/.
  20. P. Huber, J. Kopp, M. Lindner, M. Rolinec, and W. Winter, Comput. Phys. Commun. 177, 432 (2007), hep-ph/0701187.
  21. P. Huber, J. Kopp, M. Lindner, M. Rolinec, and W. Winter, JHEP 05, 072 (2006), arXiv:hep-ph/0601266.
  22. ISS Detector Working Group, T. Abe et al., JINST 4, T05001 (2009), arXiv:0712.4129.
  23. C. Espinoza, R. Lazauskas, and C. Volpe, arXiv:1203.0790.
  24. S. K. Agarwalla, P. Huber, and J. M. Link, JHEP 1001, 071 (2010), arXiv:0907.3145.
  25. SciBooNE and MiniBooNE Collaborations, K. Mahn et al., Phys. Rev. D85, 032007 (2012), arXiv:1106.5685.