A limit for the μ⁺ → e⁺γ decay from the MEG experiment
A search for the decay μ⁺ → e⁺γ, performed at PSI and based on data from the initial three months of operation of the MEG experiment, yields an upper limit on the branching ratio BR(μ⁺ → e⁺γ) at 90% C.L. The result is based on the measurement of positrons and photons from stopped-μ⁺ decays by means of a superconducting positron spectrometer and a 900 litre liquid xenon photon detector.
Keywords: Muon decay, lepton flavour violation.
We report here on the results of a search for the lepton flavour violating decay μ⁺ → e⁺γ, based on data collected during the first three months of operation of the MEG experiment, which runs at the 590 MeV proton ring cyclotron facility of the Paul Scherrer Institut (PSI) in Switzerland.
Lepton flavour conservation in the Standard Model (SM) is associated with neutrinos being massless. Recent observations of neutrino oscillations nuosc () imply non-zero masses and hence the mixing of lepton flavours. However, in minimal extensions of the SM with finite but tiny neutrino masses, charged-lepton flavour violating processes are strongly suppressed and beyond experimental reach.
Additional sources of lepton flavour violation (LFV) Barbieri (); hisano (); LFreview () appear in theories of supersymmetry, grand unification or in extra dimensions, giving predictions that have now become accessible experimentally. Hence, the present lack of observation of a signature of charged LFV may change with improved searches and reveal new physics beyond the SM or significantly constrain the parameter space of such extensions.
The strongest bounds on charged LFV come from the muon sector, with the current limit on the branching ratio BR(μ⁺ → e⁺γ) (90% C.L.) set by the MEGA experiment MEGA ().
2 Experimental Principle
The process μ⁺ → e⁺γ is characterized by a simple two-body final state: the positron and photon are coincident in time and emitted back-to-back in the rest frame of the muon, each with an energy equal to half the muon rest energy.
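The two-body kinematics above can be made concrete with a few lines of arithmetic, using the PDG values of the muon and electron masses (a sketch, not taken from the paper):

```python
# Two-body kinematics of mu+ -> e+ gamma at rest: each daughter carries
# (almost exactly) half the muon rest energy, since m_e << m_mu.
M_MU = 105.658  # muon mass, MeV/c^2 (PDG value)
M_E = 0.511     # electron mass, MeV/c^2 (PDG value)

# Exact two-body kinematics:
#   E_e     = (m_mu^2 + m_e^2) / (2 m_mu)
#   E_gamma = (m_mu^2 - m_e^2) / (2 m_mu)
e_positron = (M_MU**2 + M_E**2) / (2 * M_MU)
e_photon = (M_MU**2 - M_E**2) / (2 * M_MU)

print(f"E_e     = {e_positron:.2f} MeV")   # ~52.83 MeV
print(f"E_gamma = {e_photon:.2f} MeV")     # ~52.83 MeV
```

Both energies come out at about 52.8 MeV, the kinematic endpoint that appears repeatedly in the analysis below.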
There are two major sources of background, one from radiative muon decay (RMD) and the other from accidental coincidences between a high energy positron from the normal muon decay (Michel decay) and a high energy photon from sources such as RMD, positron annihilation-in-flight or bremsstrahlung. Both types of background can mimic a event by having an almost back-to-back photon and positron. It can be shown kuno-okada (), taking into account the muon rate as well as the acceptances and resolutions, that the accidental case dominates.
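The rate argument behind the preference for a continuous beam can be sketched numerically. The scaling laws are standard (RMD background grows linearly with the instantaneous muon rate, accidental coincidences grow quadratically); the stop rate used below is an illustrative placeholder, not a number from the paper:

```python
# Sketch of the background scaling argument (after Kuno & Okada):
# per unit time, RMD background scales linearly with the instantaneous
# muon stop rate R, while accidental e-gamma coincidences scale as R^2.
# For a fixed time-averaged rate, a continuous beam therefore minimizes
# the accidental level.
def rmd_rate(r_mu):        # arbitrary units
    return r_mu

def accidental_rate(r_mu): # arbitrary units
    return r_mu**2

stops_per_second = 3e7  # hypothetical time-averaged stop rate
duty_cycles = {"continuous beam": 1.0, "pulsed beam (1% duty)": 0.01}

for name, duty in duty_cycles.items():
    r_inst = stops_per_second / duty          # rate during the spill
    acc = accidental_rate(r_inst) * duty      # time-averaged accidental level
    print(f"{name}: relative accidental level = {acc:.3e}")
```

For the same average rate, a 1% duty-cycle beam raises the time-averaged accidental level by a factor of 100, which is why a continuous (DC) muon beam is listed first among the requirements below.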
Hence the key to suppressing such backgrounds lies in having a continuous muon beam, a good quality beam transport system and precision detectors with excellent spatial, temporal and energy resolutions. This is the basis for the novel design of MEG.
3 Experimental Layout and the MEG Detector
A schematic of the experiment is shown in Figure 1. Surface muons of 28 MeV/c from one of the world's most intense sources, the πE5 channel at PSI, are stopped in a thin, partially depolarizing polyethylene target placed at the centre of the positron spectrometer. To achieve the desired stopping rate in the 18 mg/cm² thick target with minimum beam-related background, a Wien filter and a superconducting beam transport solenoid (BTS) with a central degrader system are employed. The MEG beam transport system thus cleanly separates (7.5σ) the eight-times-higher positron contamination to provide a pure muon beam. The use of a helium environment in the spectrometer, together with a slanted target, ensures minimal multiple scattering for both the muons and the outgoing positrons. This is essential also for limiting background production, such as annihilation-in-flight and bremsstrahlung, in the acceptance region, centred around 90° to the incoming beam.
Positrons originating from muon decay are analyzed in the COBRA (COnstant-Bending-RAdius) spectrometer consisting of a thin-walled superconducting magnet with a gradient magnetic field, a tracking system of low-mass drift chambers and two fast scintillator timing-counter arrays.
The gradient magnetic field in the spectrometer, ranging from 1.27 Tesla at the centre to 0.49 Tesla at either end, is designed such that positrons emitted from the target with the same momentum follow trajectories with an almost constant projected bending radius, independent of their emission angle. This allows a preferential acceptance of higher momentum particles in the drift chambers as well as sweeping away particles more efficiently, compared to a uniform field.
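The problem the gradient field solves can be illustrated with the uniform-field case. In a uniform field the projected bending radius of a signal positron depends on its polar emission angle, r = p sin(θ) / (0.3 B) (p in GeV/c, B in tesla, r in metres); COBRA's field is shaped so that this radius stays nearly constant. The numbers below only illustrate the uniform-field variation, not MEG's actual field map:

```python
import math

# Projected bending radius of a 52.8 MeV/c positron in a UNIFORM field,
# as a function of polar emission angle theta. COBRA's gradient field
# (1.27 T at the centre, 0.49 T at the ends) is designed to remove this
# angular dependence; here we only show the dependence it removes.
P_SIGNAL = 0.0528  # signal positron momentum, GeV/c
B_CENTRE = 1.27    # central field value, tesla

def projected_radius_uniform(theta_deg, b_tesla=B_CENTRE):
    """r = p_T / (0.3 B), with p_T = p sin(theta)."""
    return P_SIGNAL * math.sin(math.radians(theta_deg)) / (0.3 * b_tesla)

for theta in (90, 75, 60):
    print(f"theta = {theta:3d} deg -> r = {100 * projected_radius_uniform(theta):.1f} cm")
```

In a uniform field the radius shrinks with emission angle, so low-momentum Michel positrons emitted near 90° would spiral repeatedly through the chambers; the gradient field sweeps them away instead.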
The drift chamber system (DCH) consists of 16 radially aligned modules, spaced at equal intervals, forming a half-circle around the target. Each module contains two staggered layers of anode-wire planes, each of nine drift cells. The two layers are separated, and also enclosed, by 12.5 μm thick cathode foils with a Vernier pattern structure. The chambers are operated with a helium:ethane (50:50) gas mixture, keeping the total material along the positron trajectory very low.
Positron timing information originates from fast scintillator timing-counter arrays (TC), placed at each end of the spectrometer. Each array consists of 15 BC404 plastic scintillator bars, with 128 orthogonally placed BCF-20 scintillating fibres. Each bar is read out at both ends by a fine-mesh photomultiplier tube, while the fibres are viewed by avalanche photo-diodes. The precise timing and charge signals provide both the impact point on the TC and directional information on the positron.
The photon detector is a 900 litre homogeneous volume of liquid xenon (LXe) subtending the solid-angle acceptance for signal photons. It uses scintillation light to measure the total energy released by the γ-ray, as well as the position and time of its first interaction. In total, 846 photomultiplier tubes (PMTs), internally mounted on all surfaces and submerged in the xenon, are used. The advantages of liquid xenon are its fast response, large light yield and short radiation length. Stringent control of contaminants is necessary, since the vacuum ultra-violet (VUV) scintillation light is easily absorbed by water and oxygen even at sub-ppm levels. The xenon is therefore circulated in liquid phase through a series of purification cartridges, and in gas phase through a heated getter. Both the optical properties of the xenon and the PMT gains and quantum efficiencies are constantly monitored by means of LEDs and point-like Am α-sources deposited on thin wires stretched inside the active volume. The detector is maintained at 165 K by means of a pulse-tube refrigerator with a cooling power of 200 W.
To select matched photon and positron candidates in a high rate, continuous beam environment and store sufficient information for offline analysis requires a well matched system of front-end electronics, trigger processors and data acquisition (DAQ) software.
The 2748 front-end electronics signals are actively split and go to both the trigger and the in-house designed waveform digitizer boards. The latter are based on the multi-GHz domino ring sampler chip (DRS), which can sample ten analogue input channels into 1024 sampling cells each, at speeds of up to 4.5 GHz. The sampling speed for the drift chamber anode and cathode signals is 500 MHz, while that for the PMT signals from the photon detector and timing counters is 1.6 GHz. This strategy gives maximum flexibility, allowing various read-out schemes, such as zero suppression, on an event-by-event basis for various trigger types. The system achieves excellent pile-up recognition, together with superior timing and amplitude resolutions, compared to conventional schemes.
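The sampling parameters quoted above fix the length of each recorded waveform window, a simple division worth making explicit:

```python
# Recording window implied by the DRS parameters quoted above:
# 1024 sampling cells read out at the chosen sampling speed.
N_CELLS = 1024

def window_ns(sampling_ghz):
    """Length of the circular waveform buffer in nanoseconds."""
    return N_CELLS / sampling_ghz

print(f"LXe/TC PMTs @ 1.6 GHz : {window_ns(1.6):.0f} ns")   # 640 ns
print(f"DCH signals @ 0.5 GHz : {window_ns(0.5):.0f} ns")   # 2048 ns
```

The slower drift-chamber sampling trades time granularity for a window long enough to contain the full drift-time spectrum.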
The trigger is based on fast information from the two detectors using PMTs: the liquid xenon photon detector and the positron timing counters. It makes use of a subset of the kinematic observables of the μ⁺ → e⁺γ decay at rest, requiring an energy deposit in the photon detector close to the signal energy, a time-coincident positron hit on the timing counters and a rough collinearity of the two particles, based on their hit topology. The decay kinematics are reconstructed by electronics boards arranged in a triple-layer tree structure. The signal digitization is performed by means of a flash analogue-to-digital converter. A pre-scaled, multi-trigger event scheme is used for data-taking, allowing calibration, background and signal events to be read out together. The typical signal event rate was Hz, and the total DAQ rate was Hz, with an average livetime of .
In total, nine front-end computers are used for the DAQ, each sending an event fragment to a central event building computer over a Gigabit Ethernet link. An integrated slow-control system enables both equipment control and monitoring.
A detailed GEANT 3.21 based Monte Carlo simulation of the full apparatus (transport system and detector) was developed and used throughout the experiment, from the design and optimization of all sub-systems to the calculation of acceptances and efficiencies.
4 Monitoring and Calibrations
The long-term stability of the MEG experiment is an essential ingredient in obtaining high-quality data over extended measurement periods. Continuous monitoring and frequent calibrations are a prerequisite. Apart from such items as the liquid xenon temperature and pressure, the drift-chamber gas composition and pressure, and the electronics temperature, a number of additional measurements must be performed to keep the subdetectors calibrated and synchronized. The three most important are nuclear reactions from a Cockcroft-Walton (CW) accelerator, radiative muon decay (RMD) runs and pion charge-exchange (CEX) reaction runs.
Three times a week during normal data-taking, γ-rays of moderate energy coming from nuclear reactions of protons on a LiBO target are used. Protons of variable energy are produced by a dedicated CW accelerator placed downstream of the experiment. The muon stopping target is automatically replaced by a remotely extendable beam pipe, which places the nuclear target at the centre of the detector. Photons from the ⁷Li(p,γ)⁸Be reaction allow the monitoring of the LXe detector energy scale, while coincident γ's from ¹¹B(p,γ)¹²C, detected simultaneously by the timing counter and the xenon detector, allow the determination of the time offsets of the TC bars.
Once a week an entire day of RMD data-taking at reduced beam intensity was performed, with the trigger requirements relaxed to include non back-to-back positron-photon events in a wider energy range.
Two CEX runs (π⁻p → π⁰n) were also conducted, one at the beginning and one at the end of the data-taking period. Pion capture at rest in a liquid hydrogen target produces photons from π⁰ → γγ decay. By detecting one of these photons with the LXe detector and the other at 180° by means of a set of NaI crystals, preceded by a lead/scintillator sandwich, two mono-energetic calibration lines at the extremes of the energy spectrum are obtained. These enable measurement of the energy scale and uniformity. Dalitz decays (π⁰ → e⁺e⁻γ) were also collected, using a photon-positron coincidence trigger, and used to study the detector time synchronization and resolution.
The combined use of all these methods enables the investigation of possible systematic variations of the apparatus.
5 Event selection and resolutions
The data sample analyzed here was collected between September and December 2008 and corresponds to muons stopped in the target. At the first stage of the data processing, a data reduction is performed by selecting events with conservative criteria that require the time of the photon detector signal to be close to that of a timing counter hit, and at least one track to be detected by the drift chamber system. This reduces the data size to 16% of the recorded events. The pre-selected data are then processed again, and those events falling into a pre-defined window (blinding-box) on the γ-ray energy and the time difference between the γ-ray and the positron, containing the signal region, are "hidden", i.e. written to a separate data-stream, in order to prevent any bias in the analysis procedure. Only the events outside the blinding-box are used for optimizing the analysis parameters and for studying the background.
During the course of the data-taking period the light yield of the photon detector continuously increased, due to the purification of the liquid xenon, which was performed in parallel. Furthermore, an increasing number of drift chambers suffered from frequent high-voltage trips, resulting in a reduction of the positron detection efficiency by a factor of three over the period. The increase of the xenon light yield was carefully monitored with the various calibration tools and is taken into account in the determination of the energy scale. The trigger thresholds were also adjusted accordingly, to guarantee a uniform efficiency. For the drift chamber system we adopted a normalization scheme that depends only on the ratio of the signal positron reconstruction efficiency to that of the Michel positron, in order to be insensitive to absolute efficiencies.
A candidate event is characterized by the measurement of five kinematic parameters: the positron energy (E_e), the photon energy (E_γ), the relative time between the positron and the photon (t_eγ) and the opening angles between the two particles (θ_eγ and φ_eγ).
5.1 Positron energy, E_e
The positron track is reconstructed with the Kalman filter technique Kalman (), in order to take into account the effects of multiple scattering and energy loss in the detector materials in the non-uniform magnetic field.
The positron energy scale and resolution are evaluated by fitting the kinematic edge of the measured Michel positron energy spectrum at 52.8 MeV, as shown in Figure 2. The fit function is formed by folding the theoretical Michel spectrum with the energy-dependent detector efficiency and the response function for mono-energetic positrons. The latter is extracted from the Monte Carlo simulation of decays, and is well described by a triple-Gaussian function (a sum of a core and two tail components).
The resolutions extracted from the data are keV, MeV and MeV in sigma for the core component and the two tails, with corresponding fractions of %, % and %, respectively. The uncertainty on these numbers is dominated by systematic effects and was determined by varying both the event selection and fitting criteria.
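The edge-folding idea can be sketched numerically. Below, the theoretical Michel shape dN/dx ∝ x²(3−2x), with x = E/52.8 MeV, is folded with a single Gaussian response (a simplification of the triple-Gaussian core-plus-tails model used in the analysis); the resolution value is an illustrative placeholder:

```python
import numpy as np

# Sketch of the Michel-edge method: fold the theoretical spectrum
# dN/dx ~ x^2 (3 - 2x), x = E / 52.8 MeV, with a Gaussian detector
# response. The smearing of the sharp kinematic edge carries the
# resolution information extracted by the fit described in the text.
E_MAX = 52.8   # kinematic endpoint, MeV
SIGMA = 0.4    # hypothetical core resolution, MeV (illustrative only)

e = np.linspace(40.0, 56.0, 1601)
x = np.clip(e / E_MAX, 0.0, 1.0)
michel = np.where(e <= E_MAX, x**2 * (3.0 - 2.0 * x), 0.0)

# Numerical folding with the Gaussian response function
de = e[1] - e[0]
kernel_e = np.arange(-5 * SIGMA, 5 * SIGMA + de, de)
kernel = np.exp(-0.5 * (kernel_e / SIGMA) ** 2)
kernel /= kernel.sum()
smeared = np.convolve(michel, kernel, mode="same")

# The smeared spectrum leaks beyond the true endpoint; the shape of this
# leakage is what constrains the response function in the real fit.
frac_above = smeared[e > E_MAX].sum() / smeared.sum()
print(f"fraction of smeared spectrum above the endpoint: {frac_above:.3f}")
```

In the real fit the energy-dependent efficiency is folded in as well, and the response is the triple Gaussian quoted above rather than a single one.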
5.2 Photon energy, E_γ
The energy calibration and resolution for γ-rays at the signal energy are extracted from the two CEX periods. Figure 3 shows an example of the energy spectrum measured with photons from the CEX reaction. A small correction is made to take into account the different background present in the LXe volume during operation with the pion beam.
The line shape is asymmetric, with a low-energy tail due to γ-rays converting in front of the LXe sensitive volume. A 3D mapping of the line-shape parameters is also made, since they depend to some extent on the position of the γ-ray conversion, mainly on the conversion depth (w) inside the detector. The quoted resolution uncertainties include the variation over the acceptance. The energy scale is constantly monitored via the reconstructed energy peak from CW protons on Li, and confirmed by a fit of the photon energy spectrum to the expected spectra from muon decay, positron annihilation-in-flight and γ-ray pile-up, folded with the line shape determined during the CEX runs. The systematic uncertainty on the energy scale is estimated by a comparison of these measurements.
5.3 Relative time, t_eγ
The positron time measured by the scintillation counters is corrected for the time-of-flight of the positron from the target to the TC, as determined from the track length in the spectrometer. The photon time is determined from the waveforms of the LXe PMTs and corrected for the line-of-flight from the positron vertex on the target to the reconstructed conversion point in the LXe detector.
In Figure 4, the relative time distribution between the positron and the photon in a normal physics run is shown: the RMD peak (outside of the blinding-box) is clearly visible above the accidental background. The t_eγ peak is fitted in a restricted E_γ region and, by taking into account a small E_γ-dependence observed in the CEX runs, the timing resolution for the signal is estimated. The relative time between the LXe detector and the TC was monitored over the whole data-taking period by observing the RMD time peak in runs at normal intensity, and was found to be stable throughout.
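The structure of this measurement, a Gaussian RMD peak sitting on a flat accidental background, can be reproduced with a small toy fit. All yields and widths below are illustrative, not MEG values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy version of the t_e-gamma fit: a Gaussian RMD-like peak on top of a
# flat accidental background, binned and fitted. Numbers are illustrative.
rng = np.random.default_rng(1)
t_peak = rng.normal(0.0, 0.15, 2000)        # peak events, sigma = 150 ps (ns units)
t_flat = rng.uniform(-2.0, 2.0, 20000)      # accidental background
counts, edges = np.histogram(np.concatenate([t_peak, t_flat]),
                             bins=80, range=(-2.0, 2.0))
centres = 0.5 * (edges[:-1] + edges[1:])

def model(t, n_peak, sigma, n_flat):
    bin_w = 4.0 / 80
    gauss = n_peak * bin_w * np.exp(-0.5 * (t / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    flat = n_flat * bin_w / 4.0
    return gauss + flat

popt, _ = curve_fit(model, centres, counts, p0=(1000.0, 0.1, 10000.0))
print(f"fitted peak width: {1e3 * abs(popt[1]):.0f} ps")
```

The fitted Gaussian width plays the role of the timing resolution; in the analysis itself the peak position and width feed directly into the signal t_eγ PDF.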
5.4 Relative angles, θ_eγ and φ_eγ
The positron direction and decay vertex position are determined by projecting the positron back to the target. The γ-ray direction is defined by the line linking its reconstructed conversion point in the LXe detector with the vertex of the candidate companion positron. The resolution of the angles between the two particles is evaluated by combining the angular and vertex position resolutions in the positron detector with the position resolution in the photon detector. The positron angular resolution is evaluated by exploiting tracks that make two turns in the spectrometer, where each turn is treated as an independent track. The θ- and φ-resolutions (taking the z-axis along the beam-axis, θ is defined as the polar angle and φ as the azimuthal angle) are extracted separately from the difference of the two track segments at the point of closest approach to the beam-axis. Because the two differ, θ_eγ and φ_eγ are treated separately in the analysis. The vertex position resolutions are measured, using the two-turn technique, in the vertical and horizontal directions on the target plane. These values were confirmed independently by a method which reconstructs the edges of several holes placed in the target.
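The statistical trick behind the two-turn technique is that the turn-to-turn difference of two independent measurements has a spread √2 larger than a single measurement, so the single-track resolution is recovered by dividing by √2. A toy demonstration, with an illustrative (not MEG) resolution value:

```python
import numpy as np

# Two-turn technique sketch: each turn of the same track yields an
# independent angle measurement with resolution sigma; the spread of the
# difference is sqrt(2)*sigma, so sigma = sigma_diff / sqrt(2).
rng = np.random.default_rng(7)
SIGMA_TRUE = 10.0  # hypothetical single-turn angular resolution, mrad

true_angles = rng.uniform(-100, 100, 50000)
turn1 = true_angles + rng.normal(0, SIGMA_TRUE, true_angles.size)
turn2 = true_angles + rng.normal(0, SIGMA_TRUE, true_angles.size)

sigma_diff = np.std(turn1 - turn2)
sigma_single = sigma_diff / np.sqrt(2)
print(f"recovered single-turn resolution: {sigma_single:.2f} mrad")
```

The same construction applies to the vertex coordinates, which is how the target-plane position resolutions quoted above are obtained.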
The position of the photon conversion point is reconstructed by using the distribution of the light seen by the PMTs near the incident position. The performance of the position reconstruction is evaluated by a Monte Carlo simulation and validated in a dedicated CEX measurement by placing a lead collimator in front of the photon detector. The average position resolutions are estimated along the two orthogonal front-face sides of the LXe detector and in the depth direction (w).
On combining the individual resolutions, averaged opening-angle resolutions for θ_eγ and φ_eγ are obtained.
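Since the contributing resolutions (positron angle, vertex position, photon conversion point) are independent, they combine in quadrature. A minimal sketch with illustrative inputs:

```python
import math

# Independent resolution contributions add in quadrature:
# sigma_total = sqrt(sigma_1^2 + sigma_2^2 + ...).
def combine_in_quadrature(*sigmas):
    return math.sqrt(sum(s * s for s in sigmas))

# Illustrative (not MEG) numbers, in mrad-equivalent units:
print(combine_in_quadrature(3.0, 4.0))        # 5.0
print(combine_in_quadrature(9.0, 5.0, 3.0))   # ~10.7
```

Note that the largest single contribution dominates the combined value, which is why the positron angular resolution drives the opening-angle resolution.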
6 Data analysis
The analysis algorithms are calibrated and optimized using a large data sample in the side-bands outside the blinding-box. The background level in the signal region can also be studied from the event distributions in the side-bands, since the primary source of background in this experiment is accidental.
The blinding-box is opened after completing the optimization of the analysis algorithms and the background study. The number of μ⁺ → e⁺γ events is determined by means of a maximum likelihood fit in the analysis window, defined by selection ranges on E_e, E_γ, t_eγ, θ_eγ and φ_eγ.
An extended likelihood function is constructed as

L(N_sig, N_RMD, N_BG) = (e^(−N) / N_obs!) × Π_{i=1}^{N_obs} [ N_sig S(x_i) + N_RMD R(x_i) + N_BG B(x_i) ],

where N_sig, N_RMD and N_BG are the numbers of μ⁺ → e⁺γ, RMD and accidental background (BG) events, respectively, while S, R and B are their respective probability density functions (PDFs). N_obs is the total number of events observed in the analysis window and N = N_sig + N_RMD + N_BG. The signal PDF S is the product of the statistically independent PDFs for the five observables (E_e, E_γ, t_eγ, θ_eγ and φ_eγ), each defined by its corresponding detector response function with the measured resolutions, as previously described. The RMD PDF R is the product of the PDF for t_eγ, which is the same as that for the signal, and the PDF for the other correlated observables (E_e, E_γ, θ_eγ and φ_eγ); the latter is formed by folding the theoretical RMD spectrum kuno-okada () with the detector response functions. The BG PDF B is the product of the background spectra for the five observables, which are precisely measured in the data sample in the side-bands outside the blinding-box. The position dependence of the resolutions in the case of the γ-ray is taken into account in the PDFs, together with their proper normalizations. The event distributions of the five observables for all events in the analysis window are shown in Figure 5, together with the projections of the fitted likelihood function.
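The structure of such an extended fit can be sketched in one observable. The toy below uses a single t-like variable with three illustrative components (a narrow signal-like Gaussian, a wider RMD-like Gaussian, a flat background); in the real fit the components are products over all five observables and are separated far more powerfully:

```python
import numpy as np
from scipy.optimize import minimize

# Toy extended maximum-likelihood fit with three components, mirroring the
# structure N_sig*S + N_RMD*R + N_BG*B of the fit described in the text.
# Shapes and yields are illustrative; no signal is injected.
rng = np.random.default_rng(42)
t = np.concatenate([rng.normal(0, 0.15, 30),    # RMD-like peak events
                    rng.uniform(-2, 2, 970)])   # accidental background

def gauss(x, s):
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2 * np.pi))

def pdf_sig(x): return gauss(x, 0.05)           # narrow signal-like shape
def pdf_rmd(x): return gauss(x, 0.15)           # wider RMD-like shape
def pdf_bg(x):  return np.full_like(x, 0.25)    # uniform on [-2, 2]

def nll(params):
    n_sig, n_rmd, n_bg = params
    dens = n_sig * pdf_sig(t) + n_rmd * pdf_rmd(t) + n_bg * pdf_bg(t)
    if np.any(dens <= 0):
        return np.inf
    # extended likelihood: exp(-N) * prod(dens), up to constant terms
    return (n_sig + n_rmd + n_bg) - np.sum(np.log(dens))

res = minimize(nll, x0=(5.0, 20.0, 900.0), method="Nelder-Mead")
n_sig, n_rmd, n_bg = res.x
print(f"N_sig={n_sig:.1f}  N_RMD={n_rmd:.1f}  N_BG={n_bg:.1f}")
```

A characteristic property of the extended likelihood, visible in the toy, is that the fitted yields sum approximately to the number of observed events.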
The 90% confidence intervals on N_sig and N_RMD are determined with the Feldman-Cousins approach Feldman-Counsins (). A contour of 90% C.L. on the (N_sig, N_RMD)-plane is constructed by means of a toy Monte Carlo simulation: at each point on the contour, 90% of the simulated experiments give a likelihood ratio larger than that calculated for the data. The limit on N_sig is obtained from the projection of the contour onto the N_sig-axis, and the quoted upper limit at 90% C.L. includes the systematic error. The largest contributions to the systematic error come from the uncertainty in the selection of photon pile-up events, the photon energy scale, the response function of the positron energy and the positron angular resolution. The confidence intervals were calculated by three independent likelihood fitting tools, each with different schemes and algorithms; the results are all consistent. The expected number of RMD events in the analysis window is obtained by scaling the number of events in the RMD peak of the t_eγ-distribution, obtained with lower energy cuts, using the probability ratio in the PDFs. This expectation is consistent with the best estimate from the likelihood fit.
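The Feldman-Cousins construction can be illustrated in its simplest, one-dimensional form: a Poisson signal μ with known background b, where outcomes n are ranked by the likelihood ratio P(n|μ+b)/P(n|μ_best+b) and accepted until 90% probability is covered. This is a 1D analogue of the (N_sig, N_RMD) contour construction described above, not the analysis code itself:

```python
import math

# One-dimensional Feldman-Cousins 90% C.L. upper limit for a Poisson
# signal mu with known background b. For each mu on a grid, outcomes n
# are ranked by R(n) = P(n|mu+b) / P(n|mu_best+b), mu_best = max(0, n-b),
# and accepted until >= 90% probability is covered; the upper limit for an
# observed n_obs is the largest mu whose acceptance region contains n_obs.
def pois(n, lam):
    return math.exp(-lam) * lam ** n / math.factorial(n)

def fc_upper_limit(n_obs, b, cl=0.90, mu_max=15.0, step=0.005):
    n_range = range(60)
    ul, mu = 0.0, 0.0
    while mu <= mu_max:
        ranked = sorted(n_range,
                        key=lambda n: pois(n, mu + b) / pois(n, max(0.0, n - b) + b),
                        reverse=True)
        prob, accepted = 0.0, set()
        for n in ranked:
            accepted.add(n)
            prob += pois(n, mu + b)
            if prob >= cl:
                break
        if n_obs in accepted:
            ul = mu
        mu += step
    return ul

# Classic benchmark: n_obs = 0, b = 0 gives an upper limit near 2.44.
print(f"90% C.L. upper limit for n=0, b=0: mu < {fc_upper_limit(0, 0.0):.2f}")
```

The unified ordering is what guarantees proper coverage near the physical boundary N_sig ≥ 0, which is exactly the regime of this search.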
The upper limit on BR(μ⁺ → e⁺γ) is calculated by normalizing the upper limit on N_sig to the number of Michel positrons counted simultaneously with the signal, using the same analysis cuts and assuming BR(μ⁺ → e⁺νν̄) ≈ 1. This technique has the advantage of being independent of the instantaneous beam rate and is nearly insensitive to the positron acceptance and efficiency factors associated with the DCH and TC detectors, as these differ only slightly between the signal and normalization samples, due to small momentum-dependent effects. The branching ratio can in fact be written as

BR(μ⁺ → e⁺γ) = (N_sig / (N_eνν̄ × P)) × f^E / (ε^TRG × ε^DCH-TC × ε^DCH × A^γ × ε^γ),

where N_eνν̄ is the number of detected Michel positrons above the energy cut; P is the prescale factor in the trigger used to select Michel positrons; f^E is the fraction of the Michel positron spectrum above the cut; ε^TRG is the ratio of signal-to-Michel trigger efficiencies; ε^DCH-TC is the ratio of signal-to-Michel DCH-TC matching efficiencies; ε^DCH is the ratio of signal-to-Michel DCH reconstruction efficiency and acceptance; A^γ is the geometrical acceptance for signal photons given an accepted signal positron; and ε^γ is the efficiency of the photon reconstruction and selection criteria. The trigger efficiency ratio differs from one due to the stringent angle-matching criteria imposed at the trigger level. The main contributions to the photon inefficiency are conversions before the LXe active volume and the selection criteria imposed to reject pile-up and cosmic-ray events.
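The normalization arithmetic implied by the factors listed above can be sketched as follows. Every input value here is a hypothetical placeholder (the paper's own numbers are not reproduced); the point is the structure: the prescale-corrected Michel sample, divided by the spectrum fraction above the cut, gives an effective number of muon decays, against which the signal upper limit is normalized:

```python
# Sketch of the branching-ratio normalization. All inputs are hypothetical
# placeholders, used only to show how the listed factors combine.
def br_upper_limit(n_sig_ul, n_michel, prescale, f_e,
                   eps_trg, eps_dchtc, eps_dch, a_gamma, eps_gamma):
    # prescale-corrected Michel count / spectrum fraction above the cut
    effective_muon_decays = n_michel * prescale / f_e
    # signal-to-Michel efficiency ratios times photon acceptance/efficiency
    signal_efficiency = eps_trg * eps_dchtc * eps_dch * a_gamma * eps_gamma
    return n_sig_ul / (effective_muon_decays * signal_efficiency)

# Hypothetical inputs (NOT the paper's values):
br = br_upper_limit(n_sig_ul=5.0, n_michel=1.0e4, prescale=1.0e7,
                    f_e=0.10, eps_trg=0.8, eps_dchtc=0.9, eps_dch=1.0,
                    a_gamma=0.1, eps_gamma=0.6)
print(f"BR upper limit ~ {br:.1e}")
```

The insensitivity claimed in the text follows directly from this form: any efficiency common to signal and Michel positrons cancels in the ratios.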
The limit on the branching ratio of the μ⁺ → e⁺γ decay is obtained accordingly, with the systematic uncertainty on the normalization taken into account.
The obtained upper limit can be compared with the branching ratio sensitivity of the experiment for this data sample. The sensitivity is defined as the upper limit on the branching ratio, averaged over an ensemble of experiments simulated by means of a toy Monte Carlo, assuming a null signal and the same numbers of accidental background and RMD events as in the data. The branching ratio sensitivity estimated in this way is comparable with the current branching ratio limit set by the MEGA experiment MEGA (). Given this sensitivity, the probability of obtaining an upper limit greater than the observed one was evaluated with systematic uncertainties in the analysis taken into account.
7 Conclusion and Prospects
A search for the lepton flavour violating decay μ⁺ → e⁺γ was performed using data taken during the first three months of operation of the MEG experiment in 2008. With a sensitivity comparable to the current branching ratio limit set by the MEGA experiment, a blind likelihood analysis yields an upper limit on the branching ratio BR(μ⁺ → e⁺γ) at 90% C.L.
The problem of the reduced performance of the drift chambers, due to high-voltage trips, has been solved, and the chambers functioned successfully during our 2009 run period. Additional maintenance to the LXe detector has also resulted in a near-optimal light yield. Further improvements to the timing counter fibre detectors and the digitization electronics are in progress.
We are grateful for the support and co-operation provided by PSI as the host laboratory and to the technical and engineering staff of our institutes. This work is supported by DOE DEFG02-91ER40679 (USA), INFN (Italy) and MEXT KAKENHI 16081205 (Japan).
- (1) T. Schwetz, M. A. Tortola and J. W. F. Valle, New J. Phys. 10 (2008) 113011.
- (2) R. Barbieri, L. Hall and A. Strumia, Nucl. Phys. B 455 (1995) 219.
- (3) J. Hisano, D. Nomura and T. Yanagida, Phys. Lett. B 437 (1998) 351.
- (4) W. J. Marciano, T. Mori and J. M. Roney, Ann. Rev. Nucl. Part. Sci. 58 (2008) 315.
- (5) M. L. Brooks et al. [MEGA Collaboration], Phys. Rev. Lett. 83 (1999) 1521.
- (6) Y. Kuno and Y. Okada, Rev. Mod. Phys. 73 (2001) 151.
- (7) R. Frühwirth et al., "Data Analysis Techniques for High Energy Physics", second ed. (Cambridge University Press, Cambridge, 2000).
- (8) G. J. Feldman and R. D. Cousins, Phys. Rev. D 57 (1998) 3873.