The Physics Case for the New Muon (g-2) Experiment


This White Paper briefly reviews the present status of the muon (g-2) experiment and the physics motivation for a new effort. The present comparison between experiment and theory indicates a tantalizing deviation. An improvement in precision on this comparison by a factor of 2, with the central value remaining unchanged, would exceed the "discovery" threshold, with a sensitivity above 5 standard deviations. The 2.5-fold error-reduction goal of the new Brookhaven E969 experiment, along with continued steady reduction of the standard-model theory uncertainty, will achieve this more definitive test.

Already, the a_μ result is arguably the most compelling indicator of physics beyond the standard model and, at the very least, it represents a major constraint for speculative new theories such as supersymmetry or extra dimensions. In this report, we summarize the present experimental status and provide an up-to-date accounting of the standard-model theory, including the expectations for improvement in the hadronic contributions, which dominate the overall uncertainty. Our primary focus is on the physics case that motivates improved experimental and theoretical efforts. Accordingly, we give examples of specific new-physics implications in the context of direct searches at the LHC, as well as general arguments about the role of an improved measurement. A brief summary of the plans for an upgraded effort completes the report.

1 Introduction

The anomalous magnetic moment of the muon, a_μ, has been measured in Experiment E821 at the Brookhaven AGS, which completed data collection in 2001. All results are published [1, 2, 3, 4, 5], and the measurements are shown in Fig. 1. A comprehensive summary of the experiment, containing many of the details of the methods used for data collection and analysis, was published [6] in 2006, and general reviews of the experiment and theory are also available [7, 8, 9, 10].

The nearly equally precise experimental determinations of a_μ from the positive and negative muon data samples can be combined under the assumption of CPT invariance to give

a_μ(exp) = 11 659 208.0(6.3) × 10^-10  (0.54 ppm).   (1)

The final error of 0.54 ppm consists of a 0.46 ppm statistical component and a 0.28 ppm systematic component, combined in quadrature. An upgraded version of E821, E969 [11], has been proposed and has received scientific approval at Brookhaven, with the goal of reducing the experimental uncertainty on a_μ by a factor of 2.5, down to 0.2 ppm.
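The quoted 0.54 ppm total follows directly from combining the two components in quadrature; a quick check:

```python
import math

# E821's final 0.54 ppm error: the 0.46 ppm statistical and 0.28 ppm systematic
# components are independent, so they combine in quadrature.
stat_ppm = 0.46
syst_ppm = 0.28
total_ppm = math.hypot(stat_ppm, syst_ppm)  # sqrt(0.46**2 + 0.28**2)
print(f"{total_ppm:.2f} ppm")  # 0.54 ppm
```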

In 2007 the standard-model prediction for a_μ was updated based on new results from the CMD-2 and SND e+e− annihilation experiments, and from radiative-return measurements at BaBar. Additional theory work on hadronic light-by-light scattering was also completed. In the review of Ref. [10], the most recent theory value is determined to be

a_μ(SM) = 11 659 178.5(6.1) × 10^-10.   (2)

The uncertainty of 0.52 ppm is close to the error on the experimental value, with a difference between the standard model and experiment of

Δa_μ = a_μ(exp) − a_μ(SM) = 29.5(8.8) × 10^-10,   (3)

a 3.4 standard-deviation difference.

Year   Polarity   a_μ × 10^10        Precision [ppm]
1997   μ+         11 659 251(150)    13
1998   μ+         11 659 191(59)     5
1999   μ+         11 659 202(15)     1.3
2000   μ+         11 659 204(9)      0.7
2001   μ−         11 659 214(9)      0.7
Avg.              11 659 208.0(6.3)  0.54
Figure 1: Measurements of the muon anomaly, indicating the a_μ value as well as the sign of the muon charge for each running period. To obtain the value of a_μ(exp), as well as the world average, CPT invariance is assumed. The theory value is taken from Ref. [10], which uses e+e− annihilation data to determine the hadronic contribution.

2 The Standard-Model Value of the Anomaly

The standard-model value of a lepton's anomaly, a_ℓ = (g_ℓ − 2)/2, has contributions from three different sets of radiative processes:

  • quantum electrodynamics (QED) – with loops containing leptons (e, μ, τ) and photons;

  • hadronic – with hadrons in vacuum polarization loops;

  • weak – with loops involving the W, Z, and Higgs bosons.

Examples are shown in Fig. 2. Thus

a(SM) = a(QED) + a(hadronic) + a(weak),

with the uncertainty dominated by the hadronic term. The standard-model value of the muon anomaly has recently been reviewed [10], and the latest values of the contributions are given in Table 1.

Figure 2: The Feynman graphs for: (a) lowest-order QED (Schwinger term); (b) lowest-order hadronic contribution; (c) the hadronic light-by-light contribution; (d)-(e) the lowest-order electroweak W and Z contributions. With the present limits on the Higgs mass, the contribution from the single Higgs loop is negligible.
Effect                       Contribution   Future
Hadronic (lowest order)
Hadronic (higher order)
Hadronic (light-by-light)
Table 1: Standard-model contributions to the muon anomalous magnetic dipole moment, a_μ(SM). All values are taken from Ref. [10]. Possible improvements in the errors on the hadronic corrections are also listed. As additional data become available, the error listed for radiative corrections will decrease.

The dominant contribution, from quantum electrodynamics (QED), is the Schwinger term [12], a = α/2π, which is shown diagrammatically in Fig. 2(a). The QED contributions have been calculated through four loops (eighth order, or O((α/π)^4)), with the leading five-loop (tenth-order) contributions estimated [13]. It is the uncertainty on the tenth-order contribution that dominates the error on a_μ(QED) given in Table 1.
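The Schwinger term can be evaluated directly; a quick numeric check (using the standard value of α, which is not quoted in this document):

```python
import math

# The Schwinger term alpha/(2*pi), the lowest-order QED contribution to the
# lepton anomaly; it accounts for about 99.6% of the full muon anomaly.
alpha = 1 / 137.035999   # fine-structure constant (standard value, truncated)
schwinger = alpha / (2 * math.pi)
a_mu_total = 1.16592e-3  # approximate full muon anomaly, for comparison

print(f"{schwinger:.9f}")               # 0.001161410
print(f"{schwinger / a_mu_total:.3f}")  # 0.996
```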

The one-loop electroweak contributions were calculated shortly after the electroweak theory was shown to be renormalizable. The electroweak contribution has now been calculated through two loops. The leading-log three-loop effects have been estimated and found to be negligible. The electroweak contribution through two loops is given in Table 1.

The hadronic contribution cannot be calculated from perturbative QCD alone because of the low-energy scales involved, and its uncertainty dominates the total uncertainty on the standard-model value. There are three distinct components to the hadronic contribution:

  • The lowest-order hadronic vacuum polarization, as shown in Fig. 2(b).

  • Higher-order hadronic vacuum polarization loops (excluding the hadronic light-by-light term).

  • The hadronic light-by-light term, shown in Fig. 2(c).

Dispersion theory relates the (bare) cross section for e+e− → hadrons to the lowest-order hadronic contribution to a_μ,

a_μ(Had; LO) = (1/4π³) ∫ K(s) σ⁰(s; e+e− → hadrons) ds,

where K(s) is a known kernel function [10]. Experimental data are used as input for σ⁰ [23, 25]. The only assumptions here are analyticity and the optical theorem. The approximate s^-2 weighting of the dispersion integrand means that the ρ-resonance region dominates the dispersion integral, with the region around and below the ρ being most important. As indicated in Table 1, the uncertainty on a_μ(Had; LO) consists of three parts; the first two are associated with the numerical integration of the cross-section data. Some of these data are quite old, and a separate error accounts for their missing radiative corrections [22, 23]. As more data become available, both of these errors will improve, and as new data that include all radiative corrections replace the old data, the radiative-correction error will become much less important.
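The low-energy dominance can be made concrete with a toy weight comparison; the pure 1/s² fall-off used here is a rough stand-in for the exact kernel, not the kernel itself:

```python
# Toy illustration of the low-energy dominance of the dispersion integral:
# approximate the integrand weight as 1/s^2 (the exact kernel differs in detail,
# but has the same qualitative fall-off with s).
def weight(sqrt_s_gev):
    s = sqrt_s_gev ** 2
    return 1.0 / s ** 2

rho_peak = 0.77  # GeV, near the rho resonance
high = 2.0       # GeV, a typical higher-energy point
print(round(weight(rho_peak) / weight(high), 1))  # 45.5
```

Under this weighting, cross-section data at the ρ peak count roughly 45 times more than data at 2 GeV, which is why the ρ region controls both the value and the error of a_μ(Had; LO).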

It has been proposed that the hadronic contribution could also be determined from hadronic τ-decay data, using the conserved-vector-current (CVC) hypothesis. Such an approach can only give the isovector part of the amplitude, e.g. the ρ but not the ω intermediate states. In contrast, the e+e− annihilation cross section contains both isovector and isoscalar contributions, with the cusp from ρ-ω interference as a dominant feature. Since hadronic τ decay goes through the charged ρ, and e+e− annihilation goes through the neutral ρ, understanding the isospin corrections is essential to this approach. This use of CVC can be checked by comparing the hadronic contribution to a_μ obtained from each method. Alternately, one can take the measured branching ratio for τ → ν_τ V, where V is any vector final state, and compare it to that predicted using CVC and the e+e− data, applying all the appropriate isospin corrections. At present, neither comparison gives a satisfactory result. For example, the differences between the measured branching ratios and those obtained from CVC are 4.5 and 3.6 standard deviations for two of the channels, with good agreement obtained for a third [23]. At present the prescription of CVC with the appropriate isospin corrections seems to have aspects that are not understood. Given the inconsistency between the two data sets and the uncertainties inherent in the required isospin corrections to the τ data, the most recent standard-model evaluations do not use the τ data to determine a_μ [23, 25, 10]. We return to this point in the next section.

The sum of the QED, hadronic and electroweak contributions in Table 1, adding the errors in quadrature, gives the standard model value in Eq. 2. When compared with the experimental world average [5, 6] in Eq. 1, one finds the difference given in Eq. 3. This difference is at the “interesting” level, and makes it clear that further work should be done to clarify whether there is a true discrepancy. It is estimated that the theory could improve by a factor of two [10, 24], as could the experiment.

2.1 Expected Improvements in the Standard-Model Value

Over the past fifteen years, significant progress has been made in improving the standard-model value of a_μ. The QED and weak contributions are now very well known, and substantial progress has been made on the hadronic contribution because of the large quantity of new high-quality data, both on the π+π− final state from Novosibirsk [14, 15, 16], and on multi-hadron final states from BaBar [18, 19, 20, 21]. The BaBar detector operates at a fixed beam energy, and the final-state hadrons are detected in coincidence with an initial-state photon, which lowers the center-of-mass energy of the collision (often called radiative return or initial-state radiation). The KLOE experiment at Frascati has published data on the π+π− channel using radiative return [17], with the initial-state radiation not detected. The KLOE collaboration is now analyzing their data at large angles, where the soft photon is detected, and is also determining the ratio of the π+π− to μ+μ− cross sections, in which many systematic effects cancel. In the next year, the BaBar collaboration will release their π+π− data [22] using radiative return. If these BaBar data confirm the Novosibirsk data, it will significantly increase our confidence in our knowledge of the lowest-order hadronic contribution to a_μ. In the longer term, an upgraded collider, VEPP-2000, will come on line at Novosibirsk, with the upgraded detectors CMD-3 and SND. This new facility will permit improved measurement of the e+e− annihilation cross section from threshold up to 2.0 GeV, and will complement the data from BaBar that are expected to be available in the next year.

The other hadronic issue is the hadronic light-by-light contribution, shown in Fig. 2(c), which has a 36% relative uncertainty [10, 29]. So far, the only rigorous QCD result in this domain comes from the observation [26] that in the QCD large-number-of-colors (N_c) limit, and to leading order in the chiral expansion, the dominant contribution can be calculated analytically and gives a positive contribution. Unfortunately, to go beyond that, one has to resort to hadronic models. The dynamics of the models, however, are very much constrained by the behavior that QCD imposes in various limits: the chiral limit, but also the operator product expansion for certain kinematic configurations. In other kinematic regimes, the models can also be related to observed processes. A combined effort from theorists along the lines started in Refs. [26, 27, 28, 29] could significantly reduce the present uncertainty. Following this line, a goal of 15% accuracy in the hadronic light-by-light determination seems possible.

We consider several potential improvements to the hadronic contribution, and calculate the projected future significance assuming the reduced uncertainty on a_μ(SM), along with the present central value of Δa_μ. Four different scenarios are considered, with the improvements obtained given in Table 2.

  1. Lowest-order Hadronic (L-O-Hadronic): The error on the lowest-order hadronic contribution contains three pieces (see Table 1). The errors associated with the integration of the data are taken to be improved, and the radiative-correction error is reduced as older data are replaced.

  2. Hadronic Light-by-Light (H-L-b-L): The hadronic light-by-light contribution is improved to a 15% relative error, and the lowest-order hadronic error retains its present value.

  3. L-O-Hadronic and H-L-b-L: Both the lowest-order and the light-by-light errors are improved by the amount mentioned above.

  4. Most Optimistic: For the various new-physics calculations given below, a combined theory-plus-experiment error of 39 × 10^-11 was assumed (see Table 2).

Assumption                       SM Error   Combined Error   Significance
Present Errors                   61         88               3.4 σ
L-O-Hadronic                     54         59               5.0 σ
H-L-b-L                          50         55               5.4 σ
Both L-O-Hadronic and H-L-b-L    40         47               6.3 σ
Most Optimistic                  30         39               7.6 σ
Table 2: Potential improvements to the standard-model error on a_μ (errors in units of 10^-11), and the statistical significance of Δa_μ if the present central values remain unchanged. For each of the future scenarios, the experimental error is assumed to be 25 × 10^-11 (about 0.2 ppm).
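The combined errors in Table 2 are quadrature sums of the theory and experimental errors; a quick consistency check (the future experimental error of 25 × 10^-11, about 0.2 ppm, is an assumption inferred from the table's combined-error column, not a quoted number):

```python
import math

# Combined error = quadrature sum of SM-theory and experimental errors,
# in units of 1e-11. Present experimental error: 63 (0.54 ppm); the future
# rows assume 25 (about 0.2 ppm), inferred from the table itself.
rows = [
    ("Present Errors", 61, 63),
    ("L-O-Hadronic", 54, 25),
    ("H-L-b-L", 50, 25),
    ("Both", 40, 25),
    ("Most Optimistic", 30, 25),
]
combined = {name: math.hypot(sm, exp) for name, sm, exp in rows}
for name, value in combined.items():
    print(f"{name:16s} {value:5.1f}")
```

The results reproduce the combined-error column of Table 2 to within rounding.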

3 a_μ and New Physics in the Era of the LHC

The next decade will constitute a very promising and exciting era in particle physics. Experiments at the LHC will explore physics at the TeV-scale, an energy scale that has not been probed directly by any previous experiment and which appears to be a crucial energy scale in particle physics. It is linked to electroweak symmetry breaking, and e.g. naturalness arguments indicate that radically new theoretical concepts such as supersymmetry or extra space dimensions might be realized at the TeV-scale. Furthermore, the properties of cold dark matter are compatible with weakly interacting particles with weak-scale/TeV-scale masses, and Grand Unification prefers the existence of supersymmetry at the TeV-scale. Hence, there is a plethora of possibilities, and it is likely that a rich spectrum of discoveries will be made at the TeV-scale. In any case, experimental data at the TeV-scale should give answers to many fundamental questions and lead to tremendous progress in our understanding of particle physics.

Clearly, due to the expected importance and complexity of physics at the TeV-scale, we need to combine and cross-check information from the LHC with information from as many complementary experiments as possible. The measurement of the muon magnetic moment is indispensable in this respect.

The muon magnetic moment is one of the most precisely measured and calculated quantities in elementary particle physics. Moreover, the current experimental value of a_μ shows one of the largest deviations of any observable from the corresponding standard-model prediction (see Eq. 3). Owing to this precision, a_μ is a sensitive test not only of all standard-model interactions, but also of possible new physics at and above the electroweak scale. If the precision of the measurement is improved to 0.2 ppm, a_μ will be a highly sensitive probe of physics beyond the standard model up to the TeV scale. Results from a model-independent MSSM parameter scan, shown in Fig. 3, exemplify this sensitivity for the case of the minimal supersymmetric standard model (MSSM) by comparing the current value and the future precision with the values of a_μ compatible with the MSSM.

Figure 3: Possible values of the supersymmetric contribution to a_μ as a function of the mass of the lightest observable supersymmetric particle. The future constraint assumes the E969 precision with today's central value. The black dashed lines give the present (E821) band; the dark-red lines give the future (E969) band. The yellow region corresponds to all scan points compatible with the additional experimental constraints considered in Ref. [30]. The red region is for smuons and sneutrinos heavier than 1 TeV. The figure shows that the future measurement significantly constrains the MSSM parameter space and leads to upper and lower mass bounds on supersymmetric particles. This would be the case even if the future central value were smaller than today's. For more details on the plot see Ref. [30].

In the following, the importance of the a_μ measurement in the era of TeV-scale physics, and particularly its usefulness as a complement to the LHC, is discussed. The discussion is centered around the following aspects:

  • The measured value of a_μ constitutes a definite benchmark that any model of new physics has to satisfy.

  • a_μ is particularly sensitive to quantities that are difficult to measure at the LHC.

  • a_μ is an inclusive measure of quantum effects from all particles.

  • a_μ is a very clean observable.

  • a_μ is a simple and beautiful quantity.

3.1 a_μ as a benchmark for models of new physics

It has been established that the LHC is sensitive to virtually all proposed weak-scale extensions of the standard model, ranging from supersymmetry to extra dimensions, little Higgs models, and others. However, even if the existence of physics beyond the standard model is established, it will be far from easy for the LHC alone to identify which of the possible alternatives is realized. The measurement of a_μ to 0.2 ppm will be highly valuable in this respect, since it will provide a benchmark and stringent selection criterion that can be imposed on any model that is tested at the LHC. For example, if Δa_μ persists in being as large as it is today, many non-supersymmetric models will be ruled out. If Δa_μ turns out to be rather small, supersymmetric models will be seriously constrained.

One example, where the power of a_μ as a selection criterion becomes particularly apparent, is the distinction between the minimal supersymmetric standard model (MSSM) and a Universal Extra Dimension (UED) model. Both models predict the existence of heavy partners (superpartners or Kaluza-Klein modes) of the standard-model particles. The quantum numbers and, for suitable model parameters, also the mass spectra of these heavy partners are the same in both models. Ref. [31] analyzed whether the two models can be distinguished at the LHC by considering spin-sensitive observables. The answer turned out to be affirmative; however, even in the collider-friendly case that the mass spectrum of the MSSM reference point SPS1a is realized, the separation of the two models is not an easy task. A precise measurement of a_μ would be of great help in this respect. The values of a_μ predicted by the MSSM [30] and by the UED [32] model (in the parameter point of Ref. [31]) are very different:


Hence, the future measurement would separate the two models by more than 7 standard deviations and thus allow for a clear decision in favor of one of the two models.

A second example is the distinction between two different, well-motivated scenarios of supersymmetry breaking, such as anomaly-mediated and gravity-mediated supersymmetry breaking. The two scenarios lead to different values of the superpartner masses, but a major qualitative difference is the different sign of the MSSM μ-parameter preferred in the two scenarios (if the current experimental constraint on BR(b → sγ) is imposed). The LHC is not particularly sensitive to the sign of μ, and thus cannot test this fundamental difference. However, sign(μ) determines the sign of the supersymmetry contributions to a_μ. The current a_μ value already favors μ > 0, but the magnitude of the uncertainty does not allow a definite conclusion. An improved measurement with 0.2 ppm uncertainty has the potential to determine sign(μ) unambiguously, and thus one of the central supersymmetry parameters. Depending on the future central value of Δa_μ, either anomaly- or gravity-mediated supersymmetry breaking could be ruled out or at least seriously constrained by a_μ.

Figure 4: The (m_0, m_{1/2}) plane of the CMSSM parameter space for fixed tan β and A_0, with μ > 0. (a) The Δa_μ between experiment and standard-model theory is taken from Ref. [10]; see text. The brown wedge on the lower right is excluded by the requirement that the dark matter be neutral. Direct limits on the Higgs and chargino masses are indicated by vertical lines. Restrictions from the WMAP satellite data are shown as a light-blue line. The 1- and 2-standard-deviation Δa_μ boundaries are shown in purple. The region "allowed" by WMAP and Δa_μ is indicated by the ellipse, which is further restricted by the remaining limits. (b) The same plot, assuming that both the theory and experimental errors decrease as projected. (c) The same errors as (b), but with a different assumed central difference. (Figures courtesy of K. Olive)
Figure 5: The CMSSM plots as above, but for a different value of tan β. (a) As in Fig. 4(a); (b) the same plot, assuming that both the theory and experimental errors decrease as projected; (c) the same errors as (b), but with a different assumed central difference. (Figures courtesy of K. Olive)

A third example concerns the restriction of special, highly constrained models of new physics, such as the constrained MSSM (CMSSM) [33]. The CMSSM has only four free continuous parameters. One precise measurement, such as the future determination of a_μ, effectively fixes one parameter as a function of the others and thus reduces the number of free parameters by one. In fact, the CMSSM is very sensitive not only to a_μ but also to the relic density of dark matter (assumed to consist of neutralinos). As shown in Figs. 4 and 5, the two observables lead to orthogonal constraints in CMSSM parameter space; imposing both constraints therefore leaves only two free parameters and thus allows for very stringent tests of the CMSSM at the LHC.

3.2 a_μ is sensitive to quantities that are difficult to measure at the LHC

The LHC, as a hadron collider, is particularly sensitive to colored particles, whereas the a_μ measurement is particularly sensitive to weakly interacting particles that couple to the muon. The sensitivities are therefore complementary. As an example, if the MSSM is realized, it is possible that the LHC finds some, but not all, of the superpartners of the gauge and Higgs bosons, the charginos and neutralinos.

Furthermore, for unraveling the mysteries of TeV-scale physics it is not sufficient to determine which kind of new physics, i.e. extra dimensions, supersymmetry, or something else, is realized; it is necessary to determine the model parameters as precisely as possible. In this respect the complementarity between the LHC and a_μ becomes particularly important. A difficulty at the LHC is the very indirect relation between LHC observables (cross sections, mass spectra, edges, etc.) and model parameters such as masses and couplings, let alone more fundamental parameters such as supersymmetry-breaking parameters or the μ-parameter of the MSSM. It has been shown that a promising strategy is to determine the model parameters by performing a global fit of a model such as the MSSM to all available LHC data. However, recent investigations have revealed that this typically produces a multitude of almost degenerate local minima of χ² as a function of the model parameters [34]. Independent observables such as a_μ will be highly valuable for breaking such degeneracies and thereby unambiguously determining the model parameters.

In the following we discuss the complementarity of the LHC and a_μ for the well-studied case of the MSSM, where it has turned out that the LHC has weak sensitivity to two central parameters: the LHC has virtually no sensitivity at all to the sign of the μ-parameter, and only moderate sensitivity to tan β, the ratio of the two Higgs vacuum expectation values.

The MSSM contributions to a_μ, on the other hand, are highly sensitive to both of these parameters,

a_μ(SUSY) ≈ sign(μ) × 13 × 10^-10 tan β (100 GeV / M_SUSY)²,

where M_SUSY denotes the average superpartner mass scale. Therefore, a future improved measurement has the potential to establish a definite positive or negative sign of the μ-parameter in the MSSM, which would be a crucial additional piece of information.
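A frequently quoted one-loop approximation, a_μ(SUSY) ≈ sign(μ) × 13 × 10^-10 tan β (100 GeV / M_SUSY)², allows a rough numerical illustration; the mass scales and tan β values below are illustrative choices, not fitted values:

```python
def a_mu_susy_1e10(m_susy_gev, tan_beta, sign_mu=+1):
    """Approximate one-loop MSSM contribution to a_mu, in units of 1e-10,
    for roughly degenerate superpartner masses. A rough scaling guide only."""
    return 13.0 * sign_mu * tan_beta * (100.0 / m_susy_gev) ** 2

# Illustrative points: weak-scale masses with moderate-to-large tan(beta)
# naturally give contributions of the same order as the observed deviation.
print(a_mu_susy_1e10(300.0, 10.0))  # ~14.4 (x 1e-10)
print(a_mu_susy_1e10(600.0, 40.0))  # ~14.4 (x 1e-10)
```

Note the degeneracy: quadrupling tan β while doubling M_SUSY leaves the contribution unchanged, which is exactly why a_μ constrains a combination of the two and, unambiguously, the sign of μ.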

In order to discuss the relative sensitivity of the LHC and a_μ to tan β, we reconsider the situation discussed in Ref. [35]. There it was assumed that the MSSM reference point SPS1a is realized, and the potential of the LHC to determine the MSSM parameters was worked out. By performing a global fit of the MSSM to all available LHC data, a large set of MSSM parameters can be determined to a precision of a few percent. Apart from sign(μ), which was assumed to be positive, tan β could be determined only poorly.

In such a situation, an improved a_μ measurement is the perfect complement to the LHC. One can simply study the MSSM prediction for a_μ as a function of tan β (all other parameters being known from the global fit to the LHC data) and compare it to the measured value. One can display the result in a "blue band" plot, similar to the one for the LEP precision data, in which the data are compared to the standard-model prediction as a function of the standard-model Higgs boson mass. The resulting possible future "blue band" plot for tan β, determined by the a_μ measurement, is shown in Fig. 6. As can be seen from the plot, the improvement in the determination of tan β from the a_μ measurement is excellent.

One should note that even if better ways to determine tan β at the LHC alone are found in the future, an independent determination using a_μ will still be highly valuable. tan β is one of the central MSSM parameters; it appears in all sectors and in almost all observables. Therefore, measuring tan β in two different ways, e.g. using certain decays at the LHC and using a_μ, would constitute a non-trivial and indispensable test of the universality of tan β, and thus of the structure of the MSSM.

Figure 6: A possible future "blue band" plot, where tan β is determined from the measurement of a_μ. The white region between the yellow vertical bars indicates the range allowed by the LHC determination of tan β. The darker blue band corresponds to the present E821 precision; the lighter blue band corresponds to the E969 goal. It is assumed that the MSSM reference point SPS1a is realized, that the MSSM parameters have been determined by a global fit to the values given in Ref. [35], Table 5, and that the measured a_μ value coincides with the actual value of the SPS1a point. The plot shows the fit quality as a function of tan β, where all parameters except tan β have been set to the values determined at the LHC. The width of the blue curves results from the uncertainty in these parameters. The plot shows that the precision for tan β that can be obtained using a_μ is limited by the precision of the other input parameters, but is still much better than the determination using LHC data alone.

3.3 a_μ is an inclusive measure of quantum effects

At the LHC, it is not trivial to discover all new particles that are in principle kinematically accessible. Some, in particular unexpected particles, might be difficult to detect due to background problems, unoptimized search strategies, or the triggers. The quantity a_μ, on the other hand, as a pure quantum effect, is sensitive to all particles that couple to the muon (and, from the two-loop level on, even to particles that do not couple directly to the muon). Therefore, a precise measurement of a_μ constitutes an inclusive test of all kinds of new physics, expected or unexpected.

If a large deviation from the standard-model prediction is found, it can help establish the existence of new particles that have not been seen at the LHC. The projected precision of the a_μ measurement will even permit the derivation of mass bounds on such new particles. Feeding this information back to the LHC will help optimize the searches, and might thus make direct detection of the new particles possible.

Likewise, if a small Δa_μ is found, this will help exclude the existence of certain particles in a particular mass range. In this way, regions of parameter space that are difficult to test at the LHC might be covered. Ten years ago, the LEP experiments could not exclude the existence of particularly light charginos, since they could have escaped detection. This hole in parameter space was then closed by considering a_μ [36]: such light charginos would have given a large contribution to a_μ, which was not observed.

3.4 a_μ is a clean observable

The LHC is an extremely complex machine, and it will be a huge task to understand the LHC detectors sufficiently to make reliable measurements possible. At the LHC many sources of systematic error have to be brought under control, and the overwhelming background makes it difficult to extract meaningful signals. The a_μ measurement suffers from none of these problems.

Therefore, the errors associated with the LHC and with the a_μ measurement are completely complementary, and the a_μ measurement will constitute a non-trivial cross-check of the LHC. The importance of successful LHC performance can hardly be overstated, and independent measurements that can cross-check and guide the LHC with clean data are correspondingly important.

3.5 a_μ is a simple and beautiful observable

Finally, it is worth mentioning that the role the observable a_μ has played in the past, and should play in the future, goes far beyond that of a useful tool. a_μ, and anomalous magnetic moments in general, are among the simplest and most beautiful observables in fundamental physics. They appear in all quantum field theory textbooks and have inspired generations of quantum field theory students and researchers.

Anomalous magnetic moments are the simplest observables for which quantum effects in quantum field theory are important. The first measurement of the anomalous magnetic moment of the electron sparked the first successful loop calculation in QED, by Schwinger, in the course of which the basic ideas of renormalization theory were developed. Since then, many more milestones in the understanding of quantum field theory have been related to, and inspired by, research on anomalous magnetic moments.

Furthermore, a_μ has great appeal to the general public. All of the E821 results were covered by the New York Times and the rest of the popular press, as well as other journals such as Science News, The New Scientist, Physics Today, Science, and Nature. Measuring a quantity with such high precision that one can resolve effects from almost all elementary particles, ranging from the photon, electron, and muon, through hadrons, to the W and Z bosons, is striking and catches the imagination. The projected precision of the a_μ measurement will permit the resolution of effects from new particles such as supersymmetric particles. This opportunity should not be missed.

4 Improvements to the Experiment

The final error of 0.54 ppm obtained in E821 was statistics limited, with a 0.46 ppm statistical error and a 0.28 ppm systematic error. The errors from each running period are given in Table 3. Any upgraded experiment must further improve the systematic errors and significantly increase the volume of data collected. The principal focus of this document is to present the physics case for an improved experiment, rather than the technical details of the upgraded experiment. We give no specifics for the experimental improvements in this white paper, but rather briefly describe the experiment and the possible future goals.

                                       1998   1999   2000   2001   E969 Goal
Magnetic Field Systematic (ppm)        0.5    0.4    0.24   0.17   0.1
Anomalous Precession Systematic (ppm)  0.8    0.3    0.31   0.21   0.1
Statistical Uncertainty (ppm)          4.9    1.3    0.62   0.66   0.14
Total Uncertainty (ppm)                5.0    1.3    0.73   0.72   0.20
Table 3: Systematic and statistical errors in ppm for each of the E821 running periods, together with the E969 goal.

A proposal to Brookhaven, E969, which received enthusiastic scientific approval, plans to reduce the combined error to 0.2 ppm, a factor of 2.5 improvement beyond E821. When combined with expected improvements in the strong-interaction contribution to the standard-model value, the improved sensitivity could increase the significance of any difference between theory and experiment to above the 5σ level, assuming the central values remain the same. Some members of the community have encouraged a close look at whether a factor of 5 improvement is possible, down to a precision of about 0.1 ppm.

4.1 How a_μ is measured

The muon anomalous moment is determined from the difference frequency ω_a between the spin precession frequency, ω_S, and the cyclotron frequency, ω_C, of an ensemble of polarized muons that circulate in a storage ring having a highly uniform magnetic field. Apart from very small corrections, ω_a is proportional to a_μ. Vertical containment in the storage ring is achieved with an electric quadrupole field. In the presence of magnetic and electric fields with β · B = β · E = 0, ω_a is described by

ω_a = −(q/m) [ a_μ B − ( a_μ − 1/(γ² − 1) ) β × E ],

where q is the muon charge and β is the muon velocity in units of c. The term in parentheses multiplying β × E vanishes at the "magic" value γ = √(1 + 1/a_μ) ≈ 29.3, so that the electrostatic focusing does not affect the spin motion (except for a small correction necessary to account for the finite momentum range around the magic momentum).
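The magic condition can be checked numerically; a short sketch using standard values of the muon mass and lifetime (not quoted in this document):

```python
import math

# Magic gamma: the electric-field coefficient (a - 1/(gamma^2 - 1)) vanishes,
# so electrostatic focusing leaves the spin precession unaffected.
a_mu = 1.16592e-3      # muon anomaly
m_mu = 0.1056584       # muon mass, GeV/c^2 (standard value)
tau_mu = 2.19703       # muon lifetime at rest, microseconds (standard value)

gamma = math.sqrt(1.0 + 1.0 / a_mu)
p_magic = m_mu * math.sqrt(gamma ** 2 - 1.0)  # = m * beta * gamma, in GeV/c
dilated_lifetime = gamma * tau_mu             # microseconds

print(round(gamma, 1))             # 29.3
print(round(p_magic, 3))           # 3.094
print(round(dilated_lifetime, 1))  # 64.4
```

This reproduces the experimental parameters quoted below: a 3.094 GeV/c beam and a 64.4 μs dilated lifetime.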

The magic momentum of 3.094 GeV/c sets the scale of the experiment, and the BNL storage ring [37] is 7.1 m in radius and has a 1.45 T magnetic field. At 3.094 GeV/c the time-dilated muon lifetime is 64.4 μs, and the decay electrons have a maximum lab-frame energy of approximately 3.1 GeV. A short (≈50 ns) bunch of muons is injected into the storage ring, and the arrival time and energy of the decay electrons are measured. The time spectrum of muon decay electrons above a single energy threshold produces the time distribution

    N(t) = N₀ e^(−t/γτ_μ) [ 1 + A cos(ω_a t + φ) ],

as shown in Fig. 7. The value of ω_a is obtained from a least-squares fit to these data.
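The dilated lifetime and the five-parameter form of the time distribution can be sketched as follows; the function name and the example parameter values are illustrative, not the fitted E821 values:

```python
import math

TAU_MU = 2.19698   # muon lifetime at rest, in microseconds
GAMMA  = 29.3      # Lorentz factor at the magic momentum

def wiggle(t, n0, a, omega_a, phi):
    """Decay-electron time spectrum (t in microseconds):
    N(t) = N0 * exp(-t / (gamma * tau)) * (1 + A * cos(omega_a * t + phi))."""
    return n0 * math.exp(-t / (GAMMA * TAU_MU)) * (1.0 + a * math.cos(omega_a * t + phi))

dilated_lifetime = GAMMA * TAU_MU   # ~64.4 microseconds
print(f"dilated lifetime: {dilated_lifetime:.1f} us")
```

In practice the fit extracts N₀, the exponential lifetime, the asymmetry A, ω_a, and the phase φ from the binned decay-electron times.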

Figure 7: The time spectrum of electrons with energy greater than 1.8 GeV from the 2001 data set. The diagonal “wiggles” are displayed modulo 100 μs. Data are from Ref. [6].

In the experiment, the muon spin precession frequency ω_a is determined to high precision, and the average magnetic field is measured to equal or better precision. The field is determined from a suite of NMR measurements [6, 38]: to reference the field against an absolute standard; to monitor the field continuously; and to map the field in the storage ring aperture.

An upgraded experiment at any level requires a significant increase in the muon beam intensity, as well as improvements in the detectors and front-end electronics. A credible case was made in the E969 proposal that the factor of 2.5 improvement could be realized. We believe that with further research and development, and adequate running time on the accelerator, a significant increase in precision beyond the factor of 2.5 could be achieved. We continue to study potential improvements to the beamline, and to the electron detectors and electronics.

5 Summary

In this White Paper, we concentrate on the physics case for the new muon (g-2) experiment, E969. The standard model theory situation now—Spring, 2007—gives a precision commensurate with experiment at roughly 0.5 ppm. Improvements are expected from ongoing work, both in the experiment-driven lowest-order hadronic vacuum polarization and in the hadronic light-by-light contributions. The new experiment will reduce the experimental error by a factor of 2.5 (or more), sharpening the combined sensitivity of the comparison with theory accordingly. The current discrepancy, at roughly 3.4σ, would rise above 5σ if the magnitude of the difference remains unchanged. New-physics signals are expected to emerge in the LHC era, and the improved (g-2) measurement can play a significant role in sorting out the nature of the discoveries made at the LHC. The precision physics from low-energy observables is complementary to the discovery potential of the collider. Many authors have understood the importance of the independent constraint placed on new physics by a_μ, and there are over 1300 citations to the E821 papers.
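The arithmetic behind these significance statements can be illustrated with round numbers. The inputs below, in units of 10^-10, are approximate stand-ins for the 2007-era experimental and theoretical uncertainties, not official figures:

```python
import math

def significance(delta, sigma_exp, sigma_th):
    """Discrepancy in standard deviations, adding errors in quadrature."""
    return delta / math.sqrt(sigma_exp**2 + sigma_th**2)

# Illustrative inputs in units of 10^-10 (approximate, for the arithmetic only):
delta, s_exp, s_th = 27.5, 6.3, 5.1

now = significance(delta, s_exp, s_th)              # ~3.4 sigma
# Halve both uncertainties with the central value unchanged:
future = significance(delta, s_exp / 2, s_th / 2)   # ~6.8 sigma

print(f"{now:.1f} sigma -> {future:.1f} sigma")
```

Halving both uncertainties at a fixed central value doubles the significance, which is why a 2.5-fold experimental improvement plus theory progress pushes a ~3.4σ discrepancy well past 5σ.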

We conclude with a list of items related to “Why now?”

  • The experimental precision can be improved by a factor of 2.5 or more, but the effort must be started “now” to be ready when the results from the LHC will demand additional constraints. Several years of R&D and construction are required before running and analysis can begin; in total, we estimate several years from project start to achievement of the goal.

  • The standard model theory uncertainty is slightly smaller than the experimental uncertainty, and it should be halved again over the next few years. Over the past twenty years, E821 stimulated an enormous amount of theoretical work, which required breaking new ground in the higher-order QED and electroweak contributions, as well as significant work on the hadronic contribution. These improvements have been driven by the fact that real measurements of a_μ were also being made. The momentum should be sustained, and new efforts, especially those related to the difficult hadronic light-by-light contribution, must be encouraged.

  • We are already at a compelling moment. The present e+e−-based standard model theory is 3.4 standard deviations from the experiment, providing a strong hint of new physics. If the current discrepancy persists, with halved experimental and theoretical uncertainties, the significance will rise above 5σ.

  • For specific models, such as SUSY, a_μ is particularly effective at constraining tan β—the ratio of Higgs vacuum expectation values—for a given superparticle mass, and it gives the sign of the μ parameter, something that cannot be obtained at the LHC. This information is complementary to the anticipated LHC new-particle spectra, and it will be crucial in the effort to pin down the parameters of the theory behind them.

  • Independently of SUSY—we do not suggest or depend on this or any other specific model being correct—measuring a_μ to very high precision will register an important constraint that any new-physics theory must respect. Some models predict a large a_μ effect, while others do not. This information will likely help diagnose new physics.

  • On the practical side, the project is based on a proven track record by an existing team of experts and new enthusiastic collaborators. It is efficient to mobilize the Collaboration now while the storage ring and beamline facilities can be dependably re-commissioned, and while the diverse expertise exists.

Acknowledgments: We thank Michel Davier and Simon Eidelman for their helpful comments and discussion on the hadronic contribution. We thank Keith Olive for providing the CMSSM figures.


  1. The UED result is obtained from Eq. (3.8) of reference [32] for the case of one extra dimension.


  1. R.M. Carey et al., Phys. Rev. Lett. 82, 1632 (1999).
  2. H.N. Brown et al. (Muon (g-2) Collaboration), Phys. Rev. D 62, 091101 (2000).
  3. H.N. Brown et al. (Muon (g-2) Collaboration), Phys. Rev. Lett. 86, 2227 (2001).
  4. G.W. Bennett et al. (Muon (g-2) Collaboration), Phys. Rev. Lett. 89, 101804 (2002).
  5. G.W. Bennett et al. (Muon (g-2) Collaboration), Phys. Rev. Lett. 92, 161802 (2004).
  6. G.W. Bennett et al. (Muon (g-2) Collaboration), Phys. Rev. D 73, 072003 (2006).
  7. D.W. Hertzog and W.M. Morse, Annu. Rev. Nucl. Part. Sci. 54 141 (2004).
  8. F.J.M. Farley and Y.K. Semertzidis, Prog. Part. Nucl. Phys. 52, 1 (2004).
  9. M. Davier and W. Marciano, Annu. Rev. Nucl. Part. Sci. 54 115 (2004).
  10. J.P. Miller, E. de Rafael and B.L. Roberts, hep-ph/0703049, and Rep. Prog. Phys. 70, 795-881 (2007).
  11. See: roberts/Proposal969.pdf
  12. J. Schwinger, Phys. Rev. 73, 416L (1948), and Phys. Rev. 76 790 (1949). The former paper contains a misprint in the expression for that is corrected in the longer paper.
  13. T. Kinoshita and M. Nio, Phys. Rev. D73, 053007 (2006), and references therein.
  14. R.R. Akhmetshin et al., JETP Lett. 84, 413 (2006) [Pis’ma v Zh. Eksp. Teor. Fiz. 84, 491 (2006)]; hep-ex/0610016.
  15. R.R. Akhmetshin, et al., hep-ex/0610021, Phys. Lett. B 648, 28 (2007).
  16. M.N. Achasov et al. hep-ex/0605013, published in JETP 103, 380-384 (2006), Zh. Eksp. Teor. Fiz. 130, 437-441 (2006).
  17. A. Aloisio et al. (KLOE Collaboration), Phys. Lett. B 606, 12 (2005).
  18. B. Aubert, et al., BABAR Collaboration, Phys. Rev. D 70 (2004) 072004.
  19. B. Aubert, et al., BABAR Collaboration, Phys. Rev. D 71 (2005) 052001.
  20. B. Aubert, et al., BABAR Collaboration, Phys. Rev. D 73 (2006) 052003.
  21. B. Aubert, et al., BABAR Collaboration, e-Print: arXiv:0704.0630 [hep-ex], and submitted to Phys. Rev. D.
  22. M. Davier, private communication.
  23. M. Davier, hep-ph/0701163v2, Jan. 2007.
  24. S. Eidelman, private communication.
  25. K. Hagiwara, A.D. Martin, D. Nomura, T. Teubner . KEK-TH-1112, Nov 2006. e-Print Archive: hep-ph/0611102, and Phys. Lett. B649, 173 (2007).
  26. M. Knecht, A. Nyffeler, M. Perrottet and E. de Rafael, Phys. Rev. Lett. 88 071802 (2002).
  27. M. Knecht and A. Nyffeler, Phys. Rev. D57 (2002) 465.
  28. K. Melnikov and A. Vainshtein, Phys. Rev. D70 (2004) 113006.
  29. J. Bijnens and J. Prades, arXiv:hep-ph/0701240, 2007, and Mod. Phys. Lett. A 22, 767 (2007), ( hep-ph/0702170).
  30. D. Stöckinger, “The muon magnetic moment and supersymmetry,” J. Phys. G 34, R45 (2007) [arXiv:hep-ph/0609168].
  31. J. M. Smillie and B. R. Webber, “Distinguishing spins in supersymmetric and universal extra dimension models at the Large Hadron Collider,” JHEP 0510 (2005) 069 [arXiv:hep-ph/0507170].
  32. T. Appelquist and B. A. Dobrescu, “Universal extra dimensions and the muon magnetic moment,” Phys. Lett. B 516 (2001) 85 [arXiv:hep-ph/0106140].
  33. J. R. Ellis, K. A. Olive, Y. Santoso and V. C. Spanos, Phys. Lett. B 565 176 (2003); John Ellis, Keith A. Olive, Yudi Santoso, and Vassilis C. Spanos, Phys. Rev. D71 095007 (2005), and references therein.
  34. T. Plehn, M. Rauch, to be published.
  35. R. Lafaye, T. Plehn and D. Zerwas, “SFITTER: SUSY parameter analysis at LHC and LC,” Contribution to LHC-LC Study Group, G. Weiglein, et al. [hep-ph/0404282].
  36. M. Carena, G. F. Giudice and C. E. M. Wagner, “Constraints on supersymmetric models from the muon anomalous magnetic moment,” Phys. Lett. B 390 (1997) 234 [arXiv:hep-ph/9610233].
  37. G.T. Danby, et al., Nucl. Instr. and Methods Phys. Res. A 457, 151-174 (2001).
  38. R. Prigl, et al., Nucl. Inst. Methods Phys. Res. A374 118 (1996); X. Fei, V. Hughes and R. Prigl, Nucl. Inst. Methods Phys. Res. A394, 349 (1997); W. Liu et al., Phys. Rev. Lett. 82, 711 (1999).