The Physics Case for the New Muon Experiment
Abstract
This White Paper briefly reviews the present status of the muon $(g-2)$ experiment and the physics motivation for a new effort. The present comparison between experiment and theory indicates a tantalizing deviation. An improvement in precision on this comparison by a factor of 2, with the central value remaining unchanged, would exceed the “discovery” threshold, with a sensitivity above $5\sigma$. The 2.5-fold improvement goal of the new Brookhaven E969 experiment, along with continued steady reduction of the standard-model theory uncertainty, will achieve this more definitive test.
Already, the $a_\mu$ result is arguably the most compelling indicator of physics beyond the standard model and, at the very least, it represents a major constraint for speculative new theories such as supersymmetry or extra dimensions. In this report, we summarize the present experimental status and provide an up-to-date accounting of the standard-model theory, including the expectations for improvement in the hadronic contributions, which dominate the overall uncertainty. Our primary focus is on the physics case that motivates improved experimental and theoretical efforts. Accordingly, we give examples of specific new-physics implications in the context of direct searches at the LHC, as well as general arguments about the role of an improved measurement. A brief summary of the plans for an upgraded effort completes the report.
1 Introduction
The anomalous magnetic moment of the muon, $a_\mu \equiv (g_\mu - 2)/2$, has been measured in Experiment E821 at the Brookhaven AGS, which completed data collection in 2001. All results are published [1, 2, 3, 4, 5], and the measurements are shown in Fig. 1. A comprehensive summary of the experiment, containing many of the details of the methods used for data collection and analysis, was published [6] in 2006, and general reviews of the experiment and theory are also available [7, 8, 9, 10].
The nearly equally precise experimental determinations of $a_\mu$ from positive and negative muon data samples can be combined under the assumption of CPT invariance to give

$a_\mu^{\rm exp} = 116\,592\,080(63) \times 10^{-11} \quad (0.54\ {\rm ppm}).$  (1)

The final error of 0.54 ppm consists of a 0.46 ppm statistical component and a 0.28 ppm systematic component, combined in quadrature. An upgraded version of E821, E969 [11], has been proposed and received scientific approval at Brookhaven, with the goal of reducing the experimental uncertainty on $a_\mu$ by a factor of 2.5, down to 0.2 ppm.
In 2007 the standard-model prediction for $a_\mu$ was updated based on new results from the CMD-2 and SND $e^+e^-$ annihilation experiments, and from radiative-return measurements from BaBar. Additional theory work on hadronic light-by-light scattering was also completed. In the review of Ref. [10], the most recent theory value is determined to be

$a_\mu^{\rm SM} = 116\,591\,785(61) \times 10^{-11} \quad (0.52\ {\rm ppm}).$  (2)

The uncertainty of 0.52 ppm is close to the error on the experimental value, with a difference between the standard model and experiment of

$\Delta a_\mu = a_\mu^{\rm exp} - a_\mu^{\rm SM} = 295(88) \times 10^{-11},$  (3)

a 3.4 standard-deviation difference.
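The error combinations quoted above are simple quadrature sums and can be checked directly; a minimal sketch (the $295 \times 10^{-11}$ central difference used here is an illustrative value chosen to be consistent with the quoted $3.4\sigma$ and the errors in Table 2):

```python
import math

# Combine the E821 statistical and systematic errors (ppm) in quadrature.
stat_ppm, syst_ppm = 0.46, 0.28
total_ppm = math.hypot(stat_ppm, syst_ppm)   # close to the quoted 0.54 ppm

# Combine the experimental (63e-11) and theory (61e-11) errors, and check
# the significance of a difference of ~295e-11 (illustrative value).
err_combined = math.hypot(63.0, 61.0)        # ~88, in units of 1e-11
significance = 295.0 / err_combined          # ~3.4 standard deviations
```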
2 The StandardModel Value of the Anomaly
The standard-model value of a lepton’s anomaly, $a_\ell$, has contributions from three different sets of radiative processes:

quantum electrodynamics (QED) – with loops containing leptons ($e$, $\mu$, $\tau$) and photons;

hadronic – with hadrons in vacuum-polarization loops;

weak – with loops involving the $W$ and $Z$ bosons and the Higgs.

Examples are shown in Fig. 2. Thus

$a_\mu^{\rm SM} = a_\mu^{\rm QED} + a_\mu^{\rm had} + a_\mu^{\rm weak},$  (4)
with the uncertainty dominated by the hadronic term. The standardmodel value of the muon anomaly has recently been reviewed [10], and the latest values of the contributions are given in Table 1.
Effect  Contribution  Future 

QED  
Hadronic (lowest order)  
Hadronic (higher order)  
Hadronic (lightbylight)  
Electroweak 
The dominant contribution from quantum electrodynamics (QED), called the Schwinger term [12], is $a = \alpha/2\pi$, which is shown diagrammatically in Fig. 2(a). The QED contributions have been calculated through four loops (eighth order, or $(\alpha/\pi)^4$), with the leading five-loop (tenth-order) contributions estimated [13]. It is the uncertainty on the tenth-order contribution that dominates the error on $a_\mu^{\rm QED}$ given in Table 1.
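Since $\alpha/2\pi$ is the whole story at leading order, it is instructive to see how much of the anomaly this single diagram carries; a quick numerical sketch (constants rounded):

```python
import math

ALPHA = 1 / 137.035999   # fine-structure constant
A_MU = 1.16592e-3        # measured muon anomaly, rounded

# Schwinger's one-loop QED contribution, Fig. 2(a).
a_schwinger = ALPHA / (2 * math.pi)

# This single diagram accounts for more than 99.5% of the full anomaly;
# everything else in Table 1 lives in the remaining few per mille.
fraction = a_schwinger / A_MU
```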
The one-loop electroweak contributions were calculated shortly after the electroweak theory was shown to be renormalizable. It has now been calculated through two loops. The leading-log three-loop effects have been estimated and found to be negligible. The electroweak contribution through two loops is given in Table 1.
The hadronic contribution cannot be calculated from perturbative QCD alone because of the low-energy scales involved, and the uncertainty on $a_\mu^{\rm had}$ dominates the total uncertainty on the standard-model value. There are three distinct components in the hadronic contribution: the lowest-order vacuum polarization, the higher-order vacuum polarization, and hadronic light-by-light scattering (see Table 1).
Dispersion theory relates the (bare) cross section for $e^+e^- \to {\rm hadrons}$ to the lowest-order hadronic contribution to $a_\mu$,

$a_\mu^{\rm had,LO} = \left(\dfrac{\alpha}{\pi}\right)^2 \dfrac{1}{3} \displaystyle\int_{4m_\pi^2}^{\infty} \dfrac{ds}{s}\, K(s)\, R(s),$  (5)

where $R(s)$ is the hadronic cross section normalized to the pointlike muon-pair cross section and $K(s)$ is a known function [10]. Experimental data are used as input for $R(s)$ [23, 25]. The only assumptions here are analyticity and the optical theorem. Because $K(s) \sim m_\mu^2/3s$ at large $s$, the integrand falls roughly as $s^{-2}$, so the $\rho$-resonance region dominates the dispersion integral, with the region up to about 1 GeV being most important. As indicated in Table 1, the uncertainty on $a_\mu^{\rm had,LO}$ consists of three parts; the first two are associated with the numerical integration of the data. Some of these data are quite old, and the uncertainty due to missing radiative corrections is quoted as a separate error [22, 23]. As more data become available, both of the integration errors will improve, and as new data which include all radiative corrections replace the old data, the radiative-correction error will become much less important.
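The $s^{-2}$ weighting described above can be made explicit numerically. The sketch below implements the standard kernel $K(s)$ by direct integration and checks its large-$s$ behavior, $K(s) \to m_\mu^2/3s$; the constant $R(s)$ used at the end is a toy input for illustration, not real data:

```python
import math

ALPHA = 1 / 137.035999
M_MU = 0.105658                  # muon mass, GeV
S_MIN = 4 * 0.13957 ** 2         # two-pion threshold 4*m_pi^2, GeV^2

def kernel_K(s, n=4000):
    """K(s) = int_0^1 dx x^2(1-x) / (x^2 + (1-x) s/m_mu^2), midpoint rule."""
    r = s / M_MU ** 2
    acc = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        acc += x * x * (1 - x) / (x * x + (1 - x) * r)
    return acc / n

def a_mu_had_lo(R, s_max, n=400):
    """Toy version of Eq. 5: (alpha/pi)^2 / 3 * int ds R(s) K(s) / s."""
    ds = (s_max - S_MIN) / n
    acc = sum(R(S_MIN + (i + 0.5) * ds) * kernel_K(S_MIN + (i + 0.5) * ds)
              / (S_MIN + (i + 0.5) * ds) for i in range(n))
    return (ALPHA / math.pi) ** 2 / 3.0 * acc * ds

# With a flat toy R(s), the low-s (rho-region) part of the integral still
# dominates, because K(s)/s falls off roughly as 1/s^2.
toy = a_mu_had_lo(lambda s: 2.0, s_max=2.0)
```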
It has been proposed that the hadronic contributions could also be determined from hadronic $\tau$-decay data, using the conserved-vector-current (CVC) hypothesis. Such an approach can only give the isovector part of the amplitude, e.g. the $\pi\pi$ but not the isoscalar intermediate states. In contrast, the $e^+e^-$ annihilation cross section contains both isovector and isoscalar contributions, with the cusp from $\rho$–$\omega$ interference as a dominant feature. Since hadronic $\tau$ decay goes through the charged $\rho$ resonance, and $e^+e^-$ annihilation goes through the neutral $\rho$, understanding the isospin corrections is essential to this approach. This use of CVC can be checked by comparing the hadronic contribution to $a_\mu$ obtained from each method. Alternately, one can take the measured branching ratio for $\tau \to \nu_\tau V$, where $V$ is any vector final state, and compare it to that predicted using CVC and the $e^+e^-$ data, applying all the appropriate isospin corrections. At present, neither comparison gives a satisfactory result. For example, the differences between the measured branching ratios and those obtained from CVC are 4.5 and 3.6 standard deviations for two of the vector channels, with good agreement obtained for a third [23]. At present the prescription of CVC with the appropriate isospin corrections seems to have aspects that are not understood. Given two consistent $e^+e^-$ data sets and the uncertainties inherent in the required isospin corrections to the $\tau$ data, the most recent standard-model evaluations do not use the $\tau$ data to determine $a_\mu^{\rm had,LO}$ [23, 25, 10]. We return to this point in the next section.
The sum of the QED, hadronic, and electroweak contributions in Table 1, adding the errors in quadrature, gives the standard-model value in Eq. 2. When compared with the experimental world average [5, 6] in Eq. 1, one finds the difference given in Eq. 3. This difference is at an “interesting” level of significance, and makes it clear that further work should be done to clarify whether there is a true discrepancy. It is estimated that the theory uncertainty could improve by a factor of two [10, 24], as could the experimental one.
2.1 Expected Improvements in the StandardModel Value
Over the past fifteen years, significant progress has been made in improving the standard-model value of $a_\mu$. The QED and weak contributions are now very well known, and substantial progress has been made on the hadronic contribution because of the large quantity of new high-quality data, both on the $\pi\pi$ final state from Novosibirsk [14, 15, 16], and on multi-hadron final states from BaBar [18, 19, 20, 21]. The BaBar detector operates at a fixed beam energy, and final-state hadrons are detected in coincidence with an initial-state photon, which lowers the center-of-mass energy of the collision (a technique often called radiative return or initial-state radiation). The KLOE experiment at Frascati has published data on the $\pi\pi$ channel using radiative return [17], with the initial-state radiation not detected. The KLOE collaboration is now analyzing their data at large angles, where the soft photon is detected, and is also determining the ratio of the $\pi\pi$ to $\mu\mu$ yields, in which many systematic effects cancel. In the next year, the BaBar collaboration will release their $\pi\pi$ data [22] using radiative return. If these BaBar data confirm the Novosibirsk data, it will significantly increase our confidence in our knowledge of the lowest-order hadronic contribution to $a_\mu$. In the longer term, an upgraded $e^+e^-$ collider, VEPP-2000, will come on line at Novosibirsk, with the upgraded detectors CMD-3 and SND. This new facility will permit improved measurement of the annihilation cross section from threshold up to 2.0 GeV, and will complement the data from BaBar that are expected to be available in the next year.
The other hadronic issue is the hadronic light-by-light contribution, shown in Fig. 2(c), which has a 36% relative uncertainty [10, 29]. So far, the only rigorous QCD result in this domain comes from the observation [26] that in the QCD large-number-of-colors ($N_c$) limit, and to leading order in the chiral expansion, the dominant contribution can be calculated analytically and gives a positive contribution. Unfortunately, to go beyond that, one has to resort to hadronic models. The dynamics of the models, however, is very much constrained by the behavior that QCD imposes in various limits: the chiral limit, but also the operator product expansion for certain kinematic configurations. In other kinematic regimes, the models can also be related to observed processes. A combined effort from theorists along the lines started in Refs. [26, 27, 28, 29] could significantly reduce the present uncertainty. Following this line, a goal of 15% accuracy in the hadronic light-by-light determination seems possible.
We consider several potential improvements to the hadronic contribution, and calculate the projected future significance assuming the reduced experimental uncertainty on $a_\mu$, along with the present central value of the difference. Four different scenarios are considered, with the improvements obtained given in Table 2.

Lowest-order Hadronic (LO-Hadronic): The error on the lowest-order hadronic contribution contains three pieces (see Table 1). We take the errors associated with the numerical integration of the data to be improved by the anticipated new data, and the radiative-correction uncertainty to be reduced as the old data are replaced.

Hadronic Light-by-Light (HLbL): The hadronic light-by-light contribution is improved to a 15% relative error, and the lowest-order hadronic error retains its present value.

LO-Hadronic and HLbL: Both the lowest-order and the light-by-light errors are improved by the amounts mentioned above.

Most Optimistic: For the various new-physics calculations given below, a combined theory-plus-experiment error of $39 \times 10^{-11}$ was assumed.
Assumption  SM Error ($10^{-11}$)  Combined Error ($10^{-11}$)  Significance ($\sigma$)

Present Errors  61  88  3.4
LO-Hadronic  54  59  5.0
HLbL  50  55  5.4
Both LO-Hadronic and HLbL  40  47  6.3
Most Optimistic  30  39  7.6
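The combined errors in Table 2 are again quadrature sums; a sketch that reproduces them (the future experimental error of $24 \times 10^{-11}$, roughly 0.2 ppm, and the $295 \times 10^{-11}$ central difference are assumptions consistent with the numbers quoted in this report):

```python
import math

DELTA = 295.0                             # assumed central difference, 1e-11 units
ERR_EXP_NOW, ERR_EXP_FUTURE = 63.0, 24.0  # ~0.54 ppm (E821) and ~0.2 ppm (E969)

scenarios = {                             # name -> (SM error, experimental error)
    "Present Errors": (61.0, ERR_EXP_NOW),
    "LO-Hadronic": (54.0, ERR_EXP_FUTURE),
    "HLbL": (50.0, ERR_EXP_FUTURE),
    "Both LO-Hadronic and HLbL": (40.0, ERR_EXP_FUTURE),
    "Most Optimistic": (30.0, ERR_EXP_FUTURE),
}

# Combined error (quadrature) and significance of DELTA for each scenario.
table = {name: (math.hypot(sm, exp), DELTA / math.hypot(sm, exp))
         for name, (sm, exp) in scenarios.items()}
```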
3 $a_\mu$ and New Physics in the Era of the LHC
The next decade will constitute a very promising and exciting era in particle physics. Experiments at the LHC will explore physics at the TeV scale, an energy scale that has not been probed directly by any previous experiment and which appears to be a crucial energy scale in particle physics. It is linked to electroweak symmetry breaking, and, e.g., naturalness arguments indicate that radically new theoretical concepts such as supersymmetry or extra space dimensions might be realized at the TeV scale. Furthermore, the properties of cold dark matter are compatible with weakly interacting particles with weak-scale/TeV-scale masses, and Grand Unification prefers the existence of supersymmetry at the TeV scale. Hence, there is a plethora of possibilities, and it is likely that a rich spectrum of discoveries will be made at the TeV scale. In any case, experimental data at the TeV scale should give answers to many fundamental questions and lead to tremendous progress in our understanding of particle physics.
Clearly, due to the expected importance and complexity of physics at the TeV scale, we need to combine and cross-check information from the LHC with information from as many complementary experiments as possible. The measurement of the muon magnetic moment is indispensable in this respect.
The muon magnetic moment is one of the most precisely measured and calculated quantities in elementary particle physics. Moreover, the current experimental value of $a_\mu$ shows one of the largest deviations of any observable from the corresponding standard-model prediction (see Eq. 3). Owing to this precision, $a_\mu$ is not only a sensitive test of all standard-model interactions, but also of possible new physics at and above the electroweak scale. If the precision of $a_\mu$ is improved to 0.2 ppm, $a_\mu$ will be a highly sensitive probe of physics beyond the standard model up to the TeV scale. The results from a model-independent parameter scan shown in Fig. 3 exemplify this sensitivity for the case of the minimal supersymmetric standard model (MSSM) by comparing the current value and the future precision with the values of $a_\mu$ compatible with the MSSM.
In the following, the importance of the measurement in the era of TeVscale physics, and particularly its usefulness as a complement to LHC, is discussed. The discussion is centered around the following aspects:

The measured value of $a_\mu$ constitutes a definite benchmark that any model of new physics has to satisfy.

$a_\mu$ is particularly sensitive to quantities that are difficult to measure at the LHC.

$a_\mu$ is an inclusive measure of quantum effects from all particles.

$a_\mu$ is a very clean observable.

$a_\mu$ is a simple and beautiful quantity.
3.1 $a_\mu$ as a benchmark for models of new physics
It has been established that the LHC is sensitive to virtually all proposed weak-scale extensions of the standard model, ranging from supersymmetry to extra dimensions, little-Higgs models, and others. However, even if the existence of physics beyond the standard model is established, it will be far from easy for the LHC alone to identify which of the possible alternatives is realized. The measurement of $a_\mu$ to 0.2 ppm will be highly valuable in this respect, since it will provide a benchmark and stringent selection criterion that can be imposed on any model that is tested at the LHC. For example, if $\Delta a_\mu$ persists in being as large as it is today, many non-supersymmetric models will be ruled out. If $\Delta a_\mu$ turns out to be rather small, supersymmetric models will be seriously constrained.
One example, where the power of $a_\mu$ as a selection criterion becomes particularly apparent, is the distinction between the minimal supersymmetric standard model (MSSM) and a Universal Extra Dimension (UED) model. Both models predict the existence of heavy partners (superpartners or Kaluza–Klein modes) of the standard-model particles. The quantum numbers and, for suitable model parameters, also the mass spectra of these heavy partners are the same in both models. Ref. [31] analyzed whether the two models can be distinguished at the LHC by considering spin-sensitive observables. The answer turned out to be affirmative; however, even in the collider-friendly case that the mass spectrum of the MSSM reference point SPS1a is realized, the separation of the two models is not an easy task. A precise measurement of $a_\mu$ would be of great help in this respect. The values of $a_\mu$ predicted by the MSSM [30] and by the UED model [32] (in the parameter point of Ref. [31]) are very different:
(6) 
Hence, the future $a_\mu$ measurement would separate the two models by more than 7 standard deviations and thus allow a clear decision in favor of one of the two models.
A second example is the distinction between two different, well-motivated scenarios of supersymmetry breaking, such as anomaly-mediated and gravity-mediated supersymmetry breaking. The two scenarios lead to different values of the superpartner masses, but a major qualitative difference is the different sign of the MSSM $\mu$-parameter preferred in the two scenarios (if the current experimental constraint on BR($b \to s\gamma$) is imposed). The LHC is not particularly sensitive to the sign of $\mu$, and thus cannot test this fundamental difference. However, the sign of $\mu$ determines the sign of the supersymmetry contributions to $a_\mu$. The current $a_\mu$ value already favors positive $\mu$, but the magnitude of the uncertainty does not allow a definite conclusion. An improved measurement with reduced uncertainty has the potential to unambiguously determine the sign of $\mu$, and thus one of the central supersymmetry parameters. Depending on the future central value of $\Delta a_\mu$, either anomaly- or gravity-mediated supersymmetry breaking could be ruled out, or at least seriously constrained, by $a_\mu$.
A third example concerns the restriction of special, highly constrained models of new physics, such as the constrained MSSM (CMSSM) [33]. The CMSSM has only four free continuous parameters. One precise measurement, such as the future determination of $a_\mu$, effectively fixes one parameter as a function of the others and thus reduces the number of free parameters by one. In fact, the CMSSM is very sensitive not only to $a_\mu$ but also to the dark-matter (assumed to consist of neutralinos) relic density. As shown in Figs. 4 and 5, the two observables lead to orthogonal constraints in CMSSM parameter space; therefore, imposing both constraints leaves only two free parameters and thus allows for very stringent tests of the CMSSM at the LHC.
3.2 $a_\mu$ is sensitive to quantities that are difficult to measure at the LHC
The LHC, as a hadron collider, is particularly sensitive to colored particles, whereas the $a_\mu$ measurement is particularly sensitive to weakly interacting particles that couple to the muon. Therefore the sensitivities are complementary. As an example, if the MSSM is realized, it is possible that the LHC finds some, but not all, of the superpartners of the gauge and Higgs bosons, the charginos and neutralinos.
Furthermore, for unraveling the mysteries of TeV-scale physics it is not sufficient to determine which kind of new physics, i.e. extra dimensions, supersymmetry, or something else, is realized; it is also necessary to determine model parameters as precisely as possible. In this respect the complementarity between the LHC and $a_\mu$ becomes particularly important. A difficulty at the LHC is the very indirect relation between LHC observables (cross sections, mass spectra, edges, etc.) and model parameters such as masses and couplings, let alone more underlying parameters such as supersymmetry-breaking parameters or the $\mu$ parameter in the MSSM. It has been shown that a promising strategy is to determine the model parameters by performing a global fit of a model such as the MSSM to all available LHC data. However, recent investigations have revealed that this typically results in a multitude of almost degenerate local minima of $\chi^2$ as a function of the model parameters [34]. Independent observables such as $a_\mu$ will be highly valuable in breaking such degeneracies and thereby unambiguously determining the model parameters.
In the following we discuss the complementarity of the LHC and $a_\mu$ for the well-studied case of the MSSM, where it has turned out that the LHC has a weak sensitivity to two central parameters: the LHC has virtually no sensitivity at all to the sign of the $\mu$ parameter, and only a moderate sensitivity to $\tan\beta$, the ratio of the two Higgs vacuum expectation values.
The MSSM contributions to $a_\mu$, on the other hand, are highly sensitive to both of these parameters,

$a_\mu^{\rm SUSY} \simeq {\rm sgn}(\mu)\, 13 \times 10^{-10} \left(\dfrac{100\ {\rm GeV}}{M_{\rm SUSY}}\right)^2 \tan\beta,$  (7)

where $M_{\rm SUSY}$ denotes the average superpartner mass scale. Therefore, a future improved $a_\mu$ measurement has the potential to establish a definite positive or negative sign of the $\mu$ parameter in the MSSM, which would be a crucial additional piece of information.
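The scaling law above is easy to explore numerically; a minimal sketch (the $13 \times 10^{-10}$ coefficient is the standard leading-order estimate, and the parameter values chosen here are purely illustrative):

```python
def a_mu_susy(tan_beta, m_susy_gev, sign_mu=+1):
    """Leading tan(beta)-enhanced MSSM contribution to a_mu (Eq. 7)."""
    return sign_mu * 13e-10 * (100.0 / m_susy_gev) ** 2 * tan_beta

# A few-hundred-GeV mass scale with moderate tan(beta) naturally gives a
# contribution of the size of the observed deviation (a few times 1e-9),
# while sign(mu) = -1 gives the wrong sign entirely.
example = a_mu_susy(tan_beta=10, m_susy_gev=200)
```

Note the degeneracy this exposes: doubling $M_{\rm SUSY}$ can be compensated by quadrupling $\tan\beta$, which is exactly why $a_\mu$ constrains a combination of the two parameters rather than each separately.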
In order to discuss the relative sensitivity of the LHC and $a_\mu$ to $\tan\beta$, we reconsider the situation discussed in Ref. [35]. In this reference it has been assumed that the MSSM reference point SPS1a is realized, and the potential of the LHC to determine the MSSM parameters has been worked out. By performing a global fit of the MSSM to all available LHC data, a large set of MSSM parameters can be determined to a precision of a few percent. Apart from the sign of $\mu$, which has been assumed to be positive, $\tan\beta$ could be determined only poorly.
In such a situation, an improved $a_\mu$ measurement will be the perfect complement to the LHC. One can simply study the MSSM prediction for $a_\mu$ as a function of $\tan\beta$ (all other parameters are known from the global fit to LHC data) and compare it to the measured value. One can display the result in a “blue band” plot, similar to the case of the LEP precision data, which can be compared to the standard-model predictions as a function of the standard-model Higgs boson mass. The resulting possible future “blue band” plot for $\tan\beta$, determined by the $a_\mu$ measurement, is shown in Fig. 6. As can be seen from the plot, the improvement in the determination of $\tan\beta$ from the $a_\mu$ measurement is excellent.
One should note that even if better ways to determine $\tan\beta$ at the LHC alone might be found in the future, an independent determination using $a_\mu$ will still be highly valuable. $\tan\beta$ is one of the central MSSM parameters, and it appears in all sectors and in almost all observables. Therefore, measuring $\tan\beta$ in two different ways, e.g. using certain decays at the LHC and using $a_\mu$, would constitute a non-trivial and indispensable test of the universality of $\tan\beta$ and thus of the structure of the MSSM.
3.3 $a_\mu$ is an inclusive measure of quantum effects
At the LHC, it is not trivial to discover all new particles that are in principle kinematically accessible. Some, in particular unexpected particles, might be difficult to detect due to background problems, non-optimized search strategies, or the triggers. The quantity $a_\mu$, on the other hand, as a pure quantum effect, is sensitive to all particles that couple to the muon (and, from the two-loop level on, even to particles that don’t couple to the muon). Therefore, a precise measurement of $a_\mu$ constitutes an inclusive test of all kinds of new physics, expected or unexpected.
If a large deviation of $a_\mu$ from the standard-model prediction is found, this can help to establish the existence of new particles that have not been seen at the LHC. The projected precision of the $a_\mu$ measurement will even permit the derivation of mass bounds on such new particles. Feeding this information back to the LHC will help optimize searches and might thus make a direct detection of the new particles possible.
Likewise, if a small $\Delta a_\mu$ is found, this will help exclude the existence of certain particles in a particular mass range. In this way, regions of parameter space that are difficult to test at the LHC might be covered. Ten years ago, the LEP experiments could not exclude the existence of particularly light charginos, since they could have escaped detection. This hole in parameter space was then closed by considering $a_\mu$ [36]: such light charginos would have given a large contribution to $a_\mu$, which was not observed.
3.4 $a_\mu$ is a clean observable
The LHC is an extremely complex machine, and it will be a huge task to understand the LHC detectors sufficiently well to make reliable measurements possible. At the LHC, many sources of systematic error have to be brought under control, and the overwhelming background makes it difficult to extract meaningful signals. The $a_\mu$ measurement suffers from none of these problems.
Therefore, the errors associated with the LHC and the $a_\mu$ measurement are totally complementary, and the $a_\mu$ measurement will constitute a non-trivial cross-check of the LHC. The importance of the LHC performing successfully cannot be overstated, and this implies that independent measurements that can cross-check and guide the LHC with clean data are equally important.
3.5 $a_\mu$ is a simple and beautiful observable
Finally, it is worth mentioning that the role the observable $a_\mu$ has played in the past, and should play in the future, goes far beyond that of a useful tool. $a_\mu$, and anomalous magnetic moments in general, are among the simplest and most beautiful observables in fundamental physics. They have found their way into all quantum-field-theory textbooks and have inspired generations of quantum field theory students and researchers.
Anomalous magnetic moments are the simplest observables for which quantum effects in quantum field theory are important. The first measurement of the anomalous magnetic moment of the electron sparked the first successful loop calculation in QED by Schwinger, in the course of which the basic ideas of renormalization theory were developed. Since then, many more milestones in the understanding of quantum field theory have been related to, and were inspired by, research on anomalous magnetic moments.
Furthermore, $a_\mu$ has great appeal to the general public. All of the E821 results were covered by the New York Times and the rest of the popular press, as well as journals such as Science News, New Scientist, Physics Today, Science, and Nature. Measuring a quantity with such high precision that one can resolve effects from almost all elementary particles, ranging from the photon, electron, and muon, through hadrons, to the $W$ and $Z$ bosons, is striking and catches the imagination. The projected precision of the $a_\mu$ measurement will permit the resolution of effects from new particles such as supersymmetric particles. This opportunity should not be missed.
4 Improvements to the Experiment
The final error of 0.54 ppm obtained in E821 was statistics limited, with a 0.46 ppm statistical error and a 0.28 ppm systematic error. The errors from each running period are given in Table 3. Any upgraded experiment must further improve the systematic errors and significantly increase the volume of data collected. The principal focus of this document is to present the physics case for an improved experiment, rather than its technical details. We give no specifics for experimental improvements in this white paper, but rather briefly describe the experiment and the possible future goals.
  1998  1999  2000  2001  E969 Goal

Magnetic Field Systematic (ppm)  0.5  0.4  0.24  0.17  0.1
Anomalous Precession Systematic (ppm)  0.8  0.3  0.31  0.21  0.1
Statistical Uncertainty (ppm)  4.9  1.3  0.62  0.66  0.14
Total Uncertainty (ppm)  5.0  1.3  0.73  0.72  0.20
A proposal to Brookhaven, E969, which received enthusiastic scientific approval, plans to reduce the combined error to 0.2 ppm, a factor of 2.5 improvement beyond E821. When combined with expected improvements in the strong-interaction contribution to the standard-model value, the improved sensitivity could increase the significance of any difference between theory and experiment to above the $5\sigma$ level, assuming the central values remain the same. Some members of the community have encouraged a close look to see if a factor of 5 improvement is possible, down to a precision of about 0.1 ppm.
4.1 How is $a_\mu$ measured?
The muon anomalous moment is determined from the difference frequency, $\omega_a$, between the spin precession frequency, $\omega_S$, and the cyclotron frequency, $\omega_C$, of an ensemble of polarized muons that circulate in a storage ring having a highly uniform magnetic field. Apart from very small corrections, $\omega_a$ is proportional to $a_\mu$. Vertical containment in the storage ring is achieved with an electric quadrupole field. In the presence of magnetic and electric fields with $\vec\beta \cdot \vec B = \vec\beta \cdot \vec E = 0$, $\vec\omega_a$ is described by

$\vec\omega_a = -\dfrac{q}{m}\left[ a_\mu \vec B - \left( a_\mu - \dfrac{1}{\gamma^2 - 1} \right) \vec\beta \times \vec E \right],$  (8)

where $q$ is the muon charge and $\vec\beta$ is the muon velocity in units of $c$. The term in parentheses multiplying $\vec\beta \times \vec E$ vanishes at the “magic” value $\gamma_{\rm magic} = \sqrt{1 + 1/a_\mu} \approx 29.3$, where the electrostatic focusing does not affect the spin motion (except for a small correction necessary to account for the finite momentum range around the magic momentum).
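The “magic” kinematics follow directly from Eq. 8: the coefficient of $\vec\beta \times \vec E$ vanishes when $a_\mu = 1/(\gamma^2 - 1)$. A quick numerical check (constants rounded):

```python
import math

A_MU = 1.16592e-3     # muon anomaly
M_MU_GEV = 0.105658   # muon mass, GeV
TAU_MU_US = 2.19703   # muon lifetime at rest, microseconds

# Magic gamma: a_mu = 1/(gamma^2 - 1)  =>  gamma = sqrt(1 + 1/a_mu)
gamma_magic = math.sqrt(1.0 + 1.0 / A_MU)

# Corresponding momentum p = m*sqrt(gamma^2 - 1) = m/sqrt(a_mu), in GeV/c.
p_magic_gev = M_MU_GEV / math.sqrt(A_MU)

# Time-dilated lifetime seen in the ring.
tau_dilated_us = gamma_magic * TAU_MU_US
```

This reproduces the numbers quoted in the next paragraph: $\gamma \approx 29.3$, $p \approx 3.094$ GeV/c, and a dilated lifetime of about 64.4 μs.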
The magic momentum of 3.094 GeV/c sets the scale of the experiment, and the BNL storage ring [37] is 7.1 m in radius and has a 1.45 T magnetic field. At 3.094 GeV/c the time-dilated muon lifetime is 64.4 μs, and the decay electrons have a maximum lab-frame energy of approximately 3.1 GeV. A short bunch of muons is injected into the storage ring, and the arrival time and energy of the decay electrons are measured. The time spectrum of muon decay electrons above a single energy threshold follows the distribution

$N(t) = N_0\, e^{-t/\gamma\tau_\mu} \left[ 1 + A \cos(\omega_a t + \phi) \right],$  (9)

as shown in Fig. 7. The value of $\omega_a$ is obtained from a least-squares fit to these data.
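The least-squares extraction of $\omega_a$ can be illustrated with a toy version of the Eq. 9 “wiggle” fit; all numbers below (rates, asymmetry, frequency) are illustrative choices, not E821 data:

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA_TAU_US = 64.4            # time-dilated muon lifetime, microseconds
OMEGA_TRUE = 2 * np.pi / 4.37  # toy (g-2) angular frequency, rad/us

def wiggle(t, n0, asym, omega, phi):
    """Eq. 9: N(t) = N0 exp(-t/(gamma tau)) [1 + A cos(omega t + phi)]."""
    return n0 * np.exp(-t / GAMMA_TAU_US) * (1.0 + asym * np.cos(omega * t + phi))

# Generate Poisson-fluctuated toy counts, then fit them back.
rng = np.random.default_rng(seed=1)
t = np.arange(0.0, 300.0, 0.15)   # microseconds
counts = rng.poisson(wiggle(t, 1.0e4, 0.4, OMEGA_TRUE, 0.5)).astype(float)

popt, pcov = curve_fit(wiggle, t, counts,
                       p0=[9.0e3, 0.3, OMEGA_TRUE * 1.001, 0.3],
                       sigma=np.sqrt(np.clip(counts, 1.0, None)))
omega_fit = popt[2]
```

In the real experiment the fit must also contend with slow detector gain changes, muon losses, and coherent betatron oscillations, which is where the precession systematic errors of Table 3 originate.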
In the experiment the muon frequency is determined to high precision, and the average magnetic field is measured to equal or better precision. The field is determined from a suite of NMR measurements [6, 38]: to reference the field against an absolute standard; to monitor the field continuously; and to map the field in the storage ring aperture [6].
An upgraded experiment at any level requires a significant increase in the muon beam intensity, as well as improvements in the detectors and frontend electronics. A credible case was made in the E969 proposal that the factor of 2.5 improvement could be realized. We believe that with further research and development, and adequate running time on the accelerator, a significant increase in precision beyond the factor of 2.5 could be achieved. We continue to study potential improvements to the beamline, and to the electron detectors and electronics.
5 Summary
In this White Paper, we concentrate on the physics case for the new experiment, E969. The standard-model theory situation now (Spring 2007) gives a precision commensurate with experiment, at roughly 0.5 ppm. Improvements are expected from ongoing work, both on the experiment-driven lowest-order hadronic vacuum polarization, and on the hadronic light-by-light contribution. The new experiment will reduce the experimental error by a factor of 2.5 (or more), so that the comparison with theory will have a significantly better combined sensitivity. The current discrepancy, at $3.4\sigma$, would rise above $5\sigma$ if the magnitude of the difference remains unchanged. New-physics signals are expected to emerge in the LHC era, and the improved measurement can play a significant role in helping to sort out the nature of the discoveries made at the LHC. The precision physics from low-energy observables is complementary to the discovery potential of the collider. Many authors have understood the importance of the independent constraint placed on new physics by $a_\mu$, and there are over 1300 citations to the E821 papers.
We conclude with a list of items related to “Why now?”

The experimental precision can be improved by a factor of 2.5 or more, but the effort must be started “now” to be ready when the results from the LHC demand additional constraints. Several years of R&D and construction are required before running and analysis can begin; we estimate that achieving the goal will take several years from project start.

The standard-model theory uncertainty is slightly smaller than the experimental one, and it should be halved again over the next few years. Over the past twenty years, E821 stimulated an enormous amount of theoretical work, which necessitated breaking new ground in the higher-order QED and electroweak contributions, as well as significant work on the hadronic contribution. These improvements have been driven by the fact that real measurements of $a_\mu$ were also being made. The momentum should be sustained, and new efforts, especially those related to the difficult hadronic light-by-light contribution, must be encouraged.

We are already at a compelling moment. The present $e^+e^-$-based standard-model theory is 3.4 standard deviations from the experiment, providing a strong hint of new physics. If the current discrepancy persists with halved experimental and theoretical uncertainties, the significance will rise above $5\sigma$.

For specific models, such as SUSY, $a_\mu$ is particularly effective at constraining $\tan\beta$, the ratio of Higgs vacuum expectation values, for a given superparticle mass, and it gives the sign of the $\mu$ parameter, something that cannot be obtained at the LHC. This information is complementary to the anticipated LHC new-particle spectra, and it will be crucial in the effort to pin down the parameters of the theory behind them.

Independently of SUSY (we do not suggest or depend on this or any other specific model being correct), measuring a_μ to very high precision will register an important constraint for any new-physics theory to respect. Some models will predict a large effect, while others will not. It is information that can help diagnose the nature of any new physics.

On the practical side, the project builds on the proven track record of an existing team of experts, joined by enthusiastic new collaborators. It is efficient to mobilize the Collaboration now, while the storage ring and beamline facilities can be dependably recommissioned and while the diverse expertise exists.
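To illustrate the SUSY point above: in many models the leading superpartner contribution follows a well-known approximate scaling law, a_μ(SUSY) ≈ sgn(μ) × 13×10⁻¹⁰ × (100 GeV / M_SUSY)² × tan β (see, e.g., the review by Stöckinger). A minimal sketch of this scaling, with purely illustrative parameter values:

```python
def a_mu_susy(m_susy_gev, tan_beta, sign_mu=+1):
    """Approximate leading SUSY contribution to a_mu for a common
    superpartner mass scale m_susy_gev (in GeV). The 13e-10 coefficient
    is the standard rough normalization of the scaling law; real model
    studies use the full one-loop expressions."""
    return sign_mu * 13e-10 * (100.0 / m_susy_gev) ** 2 * tan_beta

# A positive discrepancy of order 27.5e-10 could arise from, e.g.,
# a heavier scale with large tan(beta) or a lighter scale with a
# moderate one (hypothetical parameter points):
print(a_mu_susy(400.0, tan_beta=30))
print(a_mu_susy(150.0, tan_beta=5))
```

The observed positive sign of the difference favors μ > 0, which is exactly the piece of information that LHC mass spectra alone cannot supply.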
Acknowledgments: We thank Michel Davier and Simon Eidelman for their helpful comments and discussion on the hadronic contribution. We thank Keith Olive for providing the CMSSM figures.
Footnotes
 The UED result is obtained from Eq. (3.8) of reference [32] for the case of one extra dimension.
References
 [1] R.M. Carey et al., Phys. Rev. Lett. 82, 1632 (1999).
 [2] H.N. Brown et al. (Muon (g−2) Collaboration), Phys. Rev. D 62, 091101 (2000).
 [3] H.N. Brown et al. (Muon (g−2) Collaboration), Phys. Rev. Lett. 86, 2227 (2001).
 [4] G.W. Bennett et al. (Muon (g−2) Collaboration), Phys. Rev. Lett. 89, 101804 (2002).
 [5] G.W. Bennett et al. (Muon (g−2) Collaboration), Phys. Rev. Lett. 92, 161802 (2004).
 [6] G.W. Bennett et al. (Muon (g−2) Collaboration), Phys. Rev. D 73, 072003 (2006).
 [7] D.W. Hertzog and W.M. Morse, Annu. Rev. Nucl. Part. Sci. 54, 141 (2004).
 [8] F.J.M. Farley and Y.K. Semertzidis, Prog. Part. Nucl. Phys. 52, 1 (2004).
 [9] M. Davier and W. Marciano, Annu. Rev. Nucl. Part. Sci. 54, 115 (2004).
 [10] J.P. Miller, E. de Rafael and B.L. Roberts, Rep. Prog. Phys. 70, 795 (2007) [hep-ph/0703049].
 [11] See http://g2pc1.bu.edu/~roberts/Proposal969.pdf
 [12] J. Schwinger, Phys. Rev. 73, 416L (1948); Phys. Rev. 76, 790 (1949). The former paper contains a misprint that is corrected in the longer paper.
 [13] T. Kinoshita and M. Nio, Phys. Rev. D 73, 053007 (2006), and references therein.
 [14] R.R. Akhmetshin et al., JETP Lett. 84, 413 (2006) [hep-ex/0610016]; Pis'ma Zh. Eksp. Teor. Fiz. 84, 491 (2006).
 [15] R.R. Akhmetshin et al., Phys. Lett. B 648, 28 (2007) [hep-ex/0610021].
 [16] M.N. Achasov et al., JETP 103, 380 (2006); Zh. Eksp. Teor. Fiz. 130, 437 (2006) [hep-ex/0605013].
 [17] A. Aloisio et al. (KLOE Collaboration), Phys. Lett. B 606, 12 (2005).
 [18] B. Aubert et al. (BABAR Collaboration), Phys. Rev. D 70, 072004 (2004).
 [19] B. Aubert et al. (BABAR Collaboration), Phys. Rev. D 71, 052001 (2005).
 [20] B. Aubert et al. (BABAR Collaboration), Phys. Rev. D 73, 052003 (2006).
 [21] B. Aubert et al. (BABAR Collaboration), arXiv:0704.0630 [hep-ex], submitted to Phys. Rev. D.
 [22] M. Davier, private communication.
 [23] M. Davier, hep-ph/0701163v2, Jan. 2007.
 [24] S. Eidelman, private communication.
 [25] K. Hagiwara, A.D. Martin, D. Nomura and T. Teubner, Phys. Lett. B 649, 173 (2007) [hep-ph/0611102].
 [26] M. Knecht, A. Nyffeler, M. Perrottet and E. de Rafael, Phys. Rev. Lett. 88, 071802 (2002).
 [27] M. Knecht and A. Nyffeler, Phys. Rev. D 65, 073034 (2002).
 [28] K. Melnikov and A. Vainshtein, Phys. Rev. D 70, 113006 (2004).
 [29] J. Bijnens and J. Prades, Mod. Phys. Lett. A 22, 767 (2007) [hep-ph/0702170]; arXiv:hep-ph/0701240.
 [30] D. Stöckinger, “The muon magnetic moment and supersymmetry,” J. Phys. G 34, R45 (2007) [hep-ph/0609168].
 [31] J.M. Smillie and B.R. Webber, “Distinguishing spins in supersymmetric and universal extra dimension models at the Large Hadron Collider,” JHEP 0510, 069 (2005) [hep-ph/0507170].
 [32] T. Appelquist and B.A. Dobrescu, “Universal extra dimensions and the muon magnetic moment,” Phys. Lett. B 516, 85 (2001) [hep-ph/0106140].
 [33] J.R. Ellis, K.A. Olive, Y. Santoso and V.C. Spanos, Phys. Lett. B 565, 176 (2003); Phys. Rev. D 71, 095007 (2005), and references therein.
 [34] T. Plehn and M. Rauch, to be published.
 [35] R. Lafaye, T. Plehn and D. Zerwas, “SFITTER: SUSY parameter analysis at LHC and LC,” contribution to the LHC/LC Study Group, G. Weiglein et al. [hep-ph/0404282].
 [36] M. Carena, G.F. Giudice and C.E.M. Wagner, “Constraints on supersymmetric models from the muon anomalous magnetic moment,” Phys. Lett. B 390, 234 (1997) [hep-ph/9610233].
 [37] G.T. Danby et al., Nucl. Instrum. Methods Phys. Res. A 457, 151 (2001).
 [38] R. Prigl et al., Nucl. Instrum. Methods Phys. Res. A 374, 118 (1996); X. Fei, V. Hughes and R. Prigl, Nucl. Instrum. Methods Phys. Res. A 394, 349 (1997); W. Liu et al., Phys. Rev. Lett. 82, 711 (1999).