Bayesian Selection of sign($\mu$) within mSUGRA in Global Fits Including WMAP5 Results

Farhan Feroz
Cavendish Laboratory, JJ Thomson Avenue, Cambridge CB3 0HE, UK
E-mail:
f.feroz@mrao.cam.ac.uk
   Benjamin C Allanach
DAMTP, CMS, Wilberforce Road, Cambridge CB3 0WA, UK
E-mail:
b.c.allanach@damtp.cam.ac.uk
   Mike Hobson
Cavendish Laboratory, JJ Thomson Avenue, Cambridge CB3 0HE, UK
E-mail:
mph@mrao.cam.ac.uk
   Shehu S AbdusSalam
DAMTP, CMS, Wilberforce Road, Cambridge CB3 0WA, UK
E-mail:
s.s.abdussalam@damtp.cam.ac.uk
   Roberto Trotta
Astrophysics Department, Oxford University, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK
E-mail:
rxt@astro.ox.ac.uk
   Arne M Weber
Max Planck Inst. für Phys., Föhringer Ring 6, D-80805 Munich, Germany
E-mail:
arne.weber@mppmu.mpg.de
Abstract:

We study the properties of the constrained minimal supersymmetric standard model (mSUGRA) by performing fits to updated indirect data, including the relic density of dark matter inferred from WMAP5. In order to find the extent to which $\mu < 0$ is disfavoured compared to $\mu > 0$, we compare the Bayesian evidence values for these models, which we obtain straightforwardly and with good precision from the recently developed multi-modal nested sampling ('MultiNest') technique. We find weak to moderate evidence for the $\mu > 0$ branch of mSUGRA over $\mu < 0$ and estimate the ratio of probabilities $P(\mu>0)/P(\mu<0)$, finding values that depend on the prior measure and range used. There is thus positive (but not overwhelming) evidence that $\mu > 0$ in mSUGRA. The MultiNest technique also delivers probability distributions of parameters and other relevant quantities such as superpartner masses. We explore the dependence of our results on the choice of the prior measure used. We also use the Bayesian evidence to quantify the consistency between the mSUGRA parameter inferences coming from the constraints that have the largest effects: $(g-2)_\mu$, $BR(b \to s\gamma)$ and the cold dark matter (DM) relic density $\Omega_{DM}h^2$.

Supersymmetry phenomenology, Supersymmetric Standard Model
preprint: DAMTP-2008-31

1 Introduction

The impending start of operation of the Large Hadron Collider (LHC) makes this a very exciting time for supersymmetric (SUSY) phenomenology. Numerous groups have been pursuing a programme to fit simple SUSY models and identify the regions in parameter space that might be probed by the forthcoming LHC data [1, 2, 3, 4, 5, 6]. The Minimal Supersymmetric Standard Model (MSSM) with one particular choice of universal boundary conditions at the grand unification scale, called either the Constrained Minimal Supersymmetric Standard Model (CMSSM) or mSUGRA [7], has been studied quite extensively in multi-parameter scans. mSUGRA has proved to be a popular choice for SUSY phenomenology because of its small number of free parameters. In mSUGRA, the scalar mass $m_0$, the gaugino mass $m_{1/2}$ and the tri-linear coupling $A_0$ are assumed to be universal at the gauge unification scale $M_{GUT} \approx 2 \times 10^{16}$ GeV. In addition, at the electroweak scale one selects $\tan\beta$, the ratio of the Higgs vacuum expectation values, and sign($\mu$), where $\mu$ is the Higgs/higgsino mass parameter whose square is computed from the potential minimisation conditions of electroweak symmetry breaking (EWSB) and the empirical value of the mass of the $Z^0$ boson, $M_Z$. The family universality assumption is well motivated since flavour changing neutral currents are observed to be rare. Indeed, several string models (see, for example, Ref. [8]) predict approximate MSSM universality in the soft terms. Nevertheless, mSUGRA is just one (albeit popular) choice among a multitude of possibilities.

Recently, Bayesian parameter estimation techniques using Markov Chain Monte Carlo (MCMC) sampling have been applied to the study of mSUGRA, performing multi-dimensional Bayesian fits to indirect constraints [9, 10, 11, 12, 13, 14, 15, 16]. This approach has also been extended to large volume string compactified models [17]. A particularly important constraint comes from the cold dark matter (DM) relic density $\Omega_{DM}h^2$ determined by the Wilkinson Microwave Anisotropy Probe (WMAP). The DM is assumed to consist solely of the lightest supersymmetric particle (LSP). As pointed out in [12], the accuracy of the DM constraint results in very narrow, steep regions of degenerate minima, since the system is rather under-constrained. This makes the global fit to all the relevant mSUGRA parameters potentially difficult. If the MSSM is confirmed in forthcoming collider data, it will hopefully be possible to break many of these degeneracies using collider observables such as edges in kinematical distributions. However, it is expected that one degeneracy will remain in LHC data, in the form of the overall mass scale of the sparticles. We apply the newly developed MultiNest technique [18, 19] to explore this highly degenerate parameter space efficiently. With this technique, one can also calculate the 'Bayesian evidence', which plays the central role in Bayesian model selection and hence allows one to distinguish between different models.

Ref. [20] performed a random scan of points in the parameter spaces of mSUGRA, minimal anomaly mediated SUSY breaking (mAMSB) and minimal gauge mediated SUSY breaking (mGMSB). $B$-physics and electroweak physics observables (but not the dark matter relic density) were used to assign a $\chi^2$ value to each of the points. The resulting minimum $\chi^2$ values for each scenario were then compared in order to select which model is preferred by the data. Unfortunately, the conclusions drawn (that mAMSB is preferred by data) may have been reversed had the dark matter relic density been included in the fit. It is also not clear how accurate the resulting value of the minimum $\chi^2$ is in each scenario, since the scans are necessarily sparse due to the high dimensionality of the parameter space (this point could easily be checked by the authors of Ref. [20] by separating the points randomly into two equally sized samples and examining the difference of the minimum $\chi^2$ point in each). Recently, several studies of the mSUGRA parameter space have used Markov Chain Monte Carlo in order to focus on the joint analysis of indirect constraints from experiment with the $\Omega_{DM}h^2$ constraint as determined by WMAP and other data. We extend this approach by using MultiNest to calculate the Bayesian evidence, which, when compared between fits to different models, can be used for hypothesis testing. As an example, we consider mSUGRA with $\mu > 0$ versus mSUGRA with $\mu < 0$ as alternative hypotheses. In Ref. [12], the evidence ratio for these two hypotheses was calculated using the method of bridge sampling [21] in MCMCs. However, it is not clear how accurate the estimation of the evidence ratio was, and no uncertainties were quoted. The present approach yields robustly small uncertainties on the ratio, for a given hypothesis and prior probability distribution. Since Ref. [12], a tension has developed between the constraints coming from the anomalous magnetic moment of the muon $(g-2)_\mu$ and the branching ratio of the decay of $b$ quarks into $s$ quarks, $BR(b \to s\gamma)$, which favour opposite signs of $\mu$ [14]. Ref. [14] investigated the constraints on the continuous parameters for either sign of $\mu$ and used the Bayesian calibrated p-value method [22] to obtain a rough estimate of the upper limit for the evidence ratio between mSUGRA with $\mu > 0$ and with $\mu < 0$. We also use the evidence to examine quantitatively any incompatibilities between mSUGRA parameter inferences coming from the three main constraints: $(g-2)_\mu$, $BR(b \to s\gamma)$ and $\Omega_{DM}h^2$. Thus we determine to what extent the three measurements are compatible with each other in an mSUGRA context. We also update the fits to WMAP5 data for the first time and include additional $B$-physics constraints. Recent data point to an increased statistical significance of the discrepancy between the Standard Model prediction and the experimental value of $(g-2)_\mu$, and this leads to an additional statistical pull towards a larger contribution to $(g-2)_\mu$ coming from supersymmetry.

Our purpose in this paper is two–fold: as well as producing interesting physical insights, we also aim to gain experience in developing and applying tools for efficient Bayesian inference, which will prove useful in the analysis of future collider data.

This paper is organised as follows. In Section 2 we motivate the case for Bayesian model selection. In Section 3 we outline our theoretical setup, and we present our results in Section 4. Finally, in Section 5 we summarise and present our conclusions. We motivate the case for the use of Bayesian evidence in quantifying consistency between different data-sets in Appendix A.

2 Bayesian Inference

A common problem in data analysis is to use the data to make inferences about the parameters of a given model. A higher level of inference is to decide between two or more competing models. For instance, in the case of mSUGRA, one would like to know whether there is sufficient evidence in the data to rule out the $\mu < 0$ branch. Bayesian inference provides a consistent approach to model selection as well as to the estimation of a set of parameters $\Theta$ in a model (or hypothesis) $H$ for the data $D$. It can also be shown that Bayesian inference is the unique consistent generalisation of Boolean algebra [23].

Bayes' theorem states that

$$P(\Theta | D, H) = \frac{P(D | \Theta, H)\, P(\Theta | H)}{P(D | H)}, \qquad (1)$$

where $P(\Theta | D, H)$ is the posterior probability distribution of the parameters, $P(D | \Theta, H) \equiv L(\Theta)$ is the likelihood, $P(\Theta | H) \equiv \pi(\Theta)$ is the prior distribution, and $P(D | H) \equiv \mathcal{Z}$ is the Bayesian evidence.

The Bayesian evidence is simply the factor required to normalise the posterior over $\Theta$ and is given by:

$$\mathcal{Z} = \int L(\Theta)\, \pi(\Theta)\, \mathrm{d}^D\Theta, \qquad (2)$$

where $D$ is the dimensionality of the parameter space. Since the Bayesian evidence does not depend on the parameter values $\Theta$, it is usually ignored in parameter estimation problems, and the posterior inferences are obtained by exploring the un-normalised posterior using standard MCMC sampling methods.

A useful feature of Bayesian parameter estimation is that one can easily obtain the posterior distribution of any function $f$ of the model parameters $\Theta$, since

$$P(f | D) = \int P(f | \Theta)\, P(\Theta | D)\, \mathrm{d}\Theta = \int \delta\big(f - f(\Theta)\big)\, P(\Theta | D)\, \mathrm{d}\Theta, \qquad (3)$$

where $\delta$ is the Dirac delta function. Thus one simply needs to compute $f(\Theta)$ for every Monte Carlo sample and the resulting values will be drawn from $P(f | D)$. We make use of this feature in Section 4.2, where we present the posterior probability distributions of various observables used in the analysis of the mSUGRA model.
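As a concrete illustration of Eq. (3), the short Python sketch below pushes posterior samples through a derived function $f$; the samples and the function are synthetic stand-ins, not output from our fits.

```python
import numpy as np

# Minimal sketch: given (equally weighted) posterior samples of the model
# parameters, the posterior of any derived quantity f(Theta) is obtained by
# evaluating f on each sample. The samples below are synthetic stand-ins.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean=[1.0, 2.0],
                                  cov=[[0.10, 0.02],
                                       [0.02, 0.20]],
                                  size=10_000)        # columns: theta_1, theta_2

def f(theta):
    """A hypothetical derived observable, e.g. a ratio of two parameters."""
    return theta[:, 0] / theta[:, 1]

f_samples = f(samples)                                # drawn from P(f|D)
print("posterior mean and std of f:", f_samples.mean(), f_samples.std())
```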

In order to select between two models $H_0$ and $H_1$, one needs to compare their respective posterior probabilities given the observed data set $D$, as follows:

$$\frac{P(H_1 | D)}{P(H_0 | D)} = \frac{P(D | H_1)}{P(D | H_0)} \frac{P(H_1)}{P(H_0)} = \frac{\mathcal{Z}_1}{\mathcal{Z}_0} \frac{P(H_1)}{P(H_0)}, \qquad (4)$$

where $P(H_1)/P(H_0)$ is the prior probability ratio for the two models, which can often be set to unity but occasionally requires further consideration. It can be seen from Eq. 4 that the Bayesian evidence takes centre stage in Bayesian model selection. As the average of the likelihood over the prior, the Bayesian evidence is larger for a model if more of its parameter space is likely, and smaller for a model with a highly peaked likelihood but many regions of parameter space with low likelihood values. Hence, Bayesian model selection automatically implements Occam's razor: a simpler theory which agrees well enough with the empirical evidence is preferred. A more complicated theory will only have a higher evidence if it is significantly better at explaining the data than a simpler theory.

Unfortunately, evaluation of the multidimensional integral (2) is a challenging numerical task. Standard techniques like thermodynamic integration [24] are extremely computationally expensive, which makes evidence evaluation typically at least an order of magnitude more costly than parameter estimation. Some fast approximate methods have been used for evidence evaluation, such as treating the posterior as a multivariate Gaussian centred at its peak (see e.g. Ref. [25]), but this approximation is clearly a poor one for multi-modal posteriors (except perhaps if one performs a separate Gaussian approximation at each mode). The Savage-Dickey density ratio has also been proposed [26] as an exact, and potentially faster, means of evaluating evidences, but it is restricted to the special case of nested hypotheses and a separable prior on the model parameters. Bridge sampling [21] allows the evaluation of the ratio of the Bayesian evidences of two models and is implemented in the 'bank sampling' method of Ref. [27], but it is not yet clear how accurately bank sampling can calculate these evidence ratios. Various alternative information criteria for model selection are discussed in Ref. [28], but the evidence remains the preferred method.

The nested sampling approach, introduced by Skilling [29], is a Monte Carlo method targeted at the efficient calculation of the evidence, but also produces posterior inferences as a by–product. Feroz & Hobson [18, 19] built on this nested sampling framework and have recently introduced the MultiNest algorithm which is efficient in sampling from multi–modal posteriors exhibiting curving degeneracies, producing posterior samples and calculating the evidence value and its uncertainty. This technique has greatly reduced the computational cost of model selection and the exploration of highly degenerate multi–modal posterior distributions. We employ this technique in this paper.
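To make the nested sampling idea concrete, the following toy sketch implements the basic algorithm of Ref. [29] for a one-dimensional problem with a known analytic evidence. The replacement step uses naive rejection sampling, which is viable only for toys; MultiNest's contribution is precisely to replace this step with an efficient multi-modal ellipsoidal sampler, so the sketch below should not be read as the MultiNest implementation itself.

```python
import numpy as np

rng = np.random.default_rng(1)
SIGMA = 0.5                                    # width of the toy likelihood

def loglike(x):
    """Gaussian likelihood centred at zero."""
    return -0.5 * (x / SIGMA) ** 2 - 0.5 * np.log(2 * np.pi * SIGMA ** 2)

def logsumexp(a):
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

nlive = 100
live = rng.uniform(-5.0, 5.0, nlive)           # live points drawn from the prior
live_logL = loglike(live)
logZ, logX = -np.inf, 0.0                      # running evidence, prior volume

for i in range(800):                           # enough iterations for this toy
    worst = int(np.argmin(live_logL))          # lowest-likelihood live point
    logX_new = -(i + 1.0) / nlive              # expected shrinkage per step
    logw = logX + np.log1p(-np.exp(logX_new - logX))   # log(X_old - X_new)
    logZ = np.logaddexp(logZ, logw + live_logL[worst])
    logX = logX_new
    while True:                                # replace the dead point by a new
        x = rng.uniform(-5.0, 5.0)             # prior draw at higher likelihood
        if loglike(x) > live_logL[worst]:      # (rejection sampling: toy only)
            live[worst], live_logL[worst] = x, loglike(x)
            break

# add the contribution of the remaining live points over the leftover volume
logZ = np.logaddexp(logZ, logX - np.log(nlive) + logsumexp(live_logL))
print(f"ln Z = {logZ:.3f}   (analytic: {np.log(0.1):.3f})")
```

With a uniform prior of density $1/10$ on $[-5, 5]$, the analytic evidence is $\mathcal{Z} \approx 0.1$, which the sketch recovers to within the expected $\sqrt{H/N_{live}}$ uncertainty.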

The natural logarithm of the ratio of posterior model probabilities provides a useful guide to what constitutes a significant difference between two models:

$$\Delta\ln R = \ln\frac{P(H_1 | D)}{P(H_0 | D)} = \ln\frac{\mathcal{Z}_1}{\mathcal{Z}_0} + \ln\frac{P(H_1)}{P(H_0)}. \qquad (5)$$

We summarise the convention we use in this paper in Table 1.

$|\Delta\ln R|$ | Odds | Probability | Remark
$< 1.0$ | $\lesssim 3:1$ | $< 0.750$ | Inconclusive
$1.0$ | $\simeq 3:1$ | $0.750$ | Weak evidence
$2.5$ | $\simeq 12:1$ | $0.923$ | Moderate evidence
$5.0$ | $\simeq 150:1$ | $0.993$ | Strong evidence
Table 1: The scale we use for the interpretation of model probabilities. Here $\ln$ represents the natural logarithm.
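A small helper (our own illustration, assuming equal prior odds) that maps $|\Delta\ln R|$ onto the scale of Table 1:

```python
import math

def interpret(delta_lnR):
    """Map |Delta ln R| onto the scale of Table 1, assuming equal prior odds."""
    odds = math.exp(abs(delta_lnR))
    prob = odds / (1.0 + odds)       # posterior probability of favoured model
    if abs(delta_lnR) < 1.0:
        remark = "inconclusive"
    elif abs(delta_lnR) < 2.5:
        remark = "weak evidence"
    elif abs(delta_lnR) < 5.0:
        remark = "moderate evidence"
    else:
        remark = "strong evidence"
    return odds, prob, remark

print(interpret(2.3))   # e.g. the ln-evidence shift discussed in Section 4.1
```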

While for parameter estimation, the priors become irrelevant once the data are powerful enough, for model selection the dependence on priors always remains (although with more informative data the degree of dependence on the priors is expected to decrease, see e.g. Ref. [30]); indeed this explicit dependence on priors is one of the most attractive features of Bayesian model selection. Priors should ideally represent one’s state of knowledge before obtaining the data. Rather than seeking a unique ‘right’ prior, one should check the robustness of conclusions under reasonable variation of the priors. Such a sensitivity analysis is required to ensure that the resulting model comparison is not overly dependent on a particular choice of prior and the associated metric in parameter space, which controls the value of the integral involved in the computation of the Bayesian evidence (for some relevant cautionary notes on the subject see Ref. [31]).

One of the most important applications of model selection is to decide whether the introduction of new parameters is necessary. Frequentist approaches revolve around significance tests and goodness-of-fit statistics, where one accepts the additional parameter based on an improvement in $\chi^2$ by some chosen threshold. It has been shown that such tests can be misleading (see e.g. Refs. [26, 22]), not least because they depend only on the value of the likelihood at the best-fit point, rather than over the entire allowed range of the parameters.

Another application of Bayesian model selection is in quantifying the consistency between two or more data sets or constraints [25, 32]. Different experimental observables may "pull" the model parameters in different directions and consequently favour different regions of the parameter space. Any obvious conflicts between the observables are likely to be noticed by the "chi by eye" method employed to date, but it is imperative for forthcoming high-quality constraints to have a method that can quantify these discrepancies. The simplest scenario for analysing different constraints on a particular model is to assume that all the constraints provide information on the same set of parameter values. We represent this hypothesis by $H^0$. This is the assumption which underlies the joint analysis of the constraints. However, if we are interested in accuracy as well as precision, then any systematic differences between constraints should also be taken into account. In the most extreme case, which we represent by $H^1$, the constraints would be in conflict to such an extent that each constraint requires its own set of parameter values, since they prefer different regions of parameter space. Bayesian evidence provides a very easy method of distinguishing between the scenarios $H^0$ and $H^1$. To see this, we again make use of Eq. 4. If we have no reason to favour either of $H^0$ or $H^1$ over the other, then we can distinguish between these two scenarios using the following ratio,

$$R = \frac{P(D | H^0)}{P(D | H^1)} = \frac{\mathcal{Z}(D)}{\prod_i \mathcal{Z}(D_i)}. \qquad (6)$$

Here the numerator represents the joint analysis of all the constraints, while in the denominator the individual constraints $D_i$ are assumed to be independent and are each fit individually to mSUGRA, with a different set of mSUGRA parameters for each $D_i$. The interpretation of the $\ln R$ value can be made in a similar manner to model selection, as discussed in the preceding paragraph. A positive value of $\ln R$ gives evidence in favour of the hypothesis that all the constraints are consistent with each other, while a negative value points towards tension between constraints which prefer different regions of mSUGRA parameter space. We follow this recipe to carry out consistency checks for the mSUGRA model between $(g-2)_\mu$, $BR(b \to s\gamma)$ and $\Omega_{DM}h^2$ as determined by WMAP and other cosmological measurements. The hypothesis $H^0$ thus states that mSUGRA jointly fits these three observables, whereas $H^1$ states that they all prefer different regions of parameter space and so we require an (mSUGRA)$^3$ model to fit them. Given the fact that the Bayesian evidence naturally embodies a quantification of Occam's razor, the resulting complexity in the model coming from the additional 2 sets of mSUGRA parameters must be matched by a better fit to the data for $H^1$ to be preferred.
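In practice this test needs only the log-evidences returned by the separate fits. A minimal sketch, with placeholder numbers rather than our fit results:

```python
# Sketch: Eq. (6) in log space, using the ln-evidences of separate runs.
# The numbers below are placeholders, not results from our fits.
def ln_consistency_ratio(lnZ_joint, lnZ_individual):
    """ln R = ln Z(joint) - sum_i ln Z(D_i); positive => consistency."""
    return lnZ_joint - sum(lnZ_individual)

lnR = ln_consistency_ratio(-40.2, [-12.1, -13.0, -15.5])
print("ln R =", lnR, "->", "consistent" if lnR > 0 else "in tension")
```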

3 The Analysis

Our parameter space contains 8 parameters, 4 of them being the mSUGRA parameters $m_0$, $m_{1/2}$, $A_0$ and $\tan\beta$, and the rest taken from the Standard Model (SM): the reciprocal of the QED coupling constant in the $\overline{MS}$ scheme $1/\alpha_{em}^{\overline{MS}}(M_Z)$, the strong coupling constant $\alpha_s^{\overline{MS}}(M_Z)$, the running mass of the bottom quark $m_b(m_b)^{\overline{MS}}$ and the pole top mass $m_t$. We refer to these SM parameters as nuisance parameters. Experimental errors on the mass of the $Z^0$ boson and the muon decay constant are so small that we fix these parameters to their central values of $M_Z = 91.1876$ GeV and $G_\mu = 1.16637 \times 10^{-5}$ GeV$^{-2}$ respectively.

For all the models analysed in this paper, we used 4,000 live points (see Refs. [18, 19]) with the MultiNest technique. This corresponds to around 400,000 likelihood evaluations, taking several hours of wall-clock time on 4 3.0 GHz Intel Woodcrest processors.

3.1 The Choice of Prior Probability Distribution

In all cases, we assume the prior is separable, such that

$$\pi(\Theta) = \prod_i \pi(\theta_i), \qquad (7)$$

where $\pi(\theta_i)$ represents the prior probability distribution of the parameter $\theta_i$. We consider two initial ranges for the mSUGRA parameters, which are listed in Table 2.

mSUGRA parameter | '2 TeV' range | '4 TeV' range
$m_0$ | 60 GeV to 2 TeV | 60 GeV to 4 TeV
$m_{1/2}$ | 60 GeV to 2 TeV | 60 GeV to 4 TeV
$A_0$ | $-4$ TeV to 4 TeV | $-7$ TeV to 7 TeV
$\tan\beta$ | 2 to 62 | 2 to 62
Table 2: mSUGRA uniform prior parameter ranges

The "2 TeV" range is motivated by a general "naturalness" argument that SUSY mass parameters should lie within $O(1\ \mathrm{TeV})$, since otherwise a fine-tuning in the electroweak symmetry breaking sector results. Deciding which region of parameter space is natural is obviously subjective. For this reason, we include the "4 TeV" range results to check the dependence on the prior ranges. We consider the branches $\mu > 0$ and $\mu < 0$ separately.

SM parameter | Mean value | Uncertainty (exp) | Reference
$1/\alpha_{em}^{\overline{MS}}(M_Z)$ | 127.918 | 0.018 | [33]
$\alpha_s^{\overline{MS}}(M_Z)$ | 0.1176 | 0.002 | [33]
$m_b(m_b)^{\overline{MS}}$ | 4.20 GeV | 0.07 GeV | [33]
$m_t$ | 170.9 GeV | 1.8 GeV | [34]
Table 3: Constraints on the Standard Model (nuisance) parameters

We impose flat priors on all 4 mSUGRA parameters (i.e. $m_0$, $m_{1/2}$, $A_0$ and $\tan\beta$) for the "2 TeV" and "4 TeV" ranges and both signs of $\mu$. Current constraints on the SM (nuisance) parameters are listed in Table 3 (we note that the experimental constraint on $m_t$ is changing quite rapidly as new results are issued from the Tevatron experiments; the latest combined constraint, released after this paper was first written, is $m_t = 172.4 \pm 1.2$ GeV [35]; any fit differences caused by the movement of the central value will be smeared out by its uncertainty, but we shall mention at the relevant point below where the new value could change the fits). With the means and uncertainties from Table 3, we impose Gaussian priors on the SM (nuisance) parameters, truncated at $4\sigma$ from their central values. We also perform the analysis for flat priors in $\log m_0$ and $\log m_{1/2}$, for both ranges and both signs of $\mu$. Since

$$\pi_{\log}(m_0, m_{1/2})\, \mathrm{d}m_0\, \mathrm{d}m_{1/2} \propto \frac{\mathrm{d}m_0}{m_0} \frac{\mathrm{d}m_{1/2}}{m_{1/2}}, \qquad (8)$$

it is clear that the logarithmic prior measure has an extra factor of $1/(m_0\, m_{1/2})$ compared to the linear prior measure, and so it could potentially favour lighter sparticles. If the data constrain the model strongly enough, lighter sparticles would only be favoured negligibly. Our main motive in examining the variation of the fit with the prior measure is to check the dependence of our results on the choice of prior. For robust fits, which occur when there are enough precisely constraining data, the posterior probability density should depend only weakly upon the precise form of the prior measure.
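The two measures can be expressed as transformations of a unit-cube variable, which is the form in which MultiNest-style samplers consume priors. A sketch for $m_0$ over the "4 TeV" range of Table 2; the median shift illustrates why the logarithmic measure favours lighter sparticles a priori:

```python
import numpy as np

def flat_prior(u, lo=60.0, hi=4000.0):
    """Linear measure: uniform in m (GeV) over the '4 TeV' range."""
    return lo + u * (hi - lo)

def log_prior(u, lo=60.0, hi=4000.0):
    """Logarithmic measure: uniform in log m over the same range."""
    return lo * (hi / lo) ** u

u = np.random.default_rng(2).uniform(size=100_000)   # unit-cube samples
print("flat measure: median m0 =", np.median(flat_prior(u)), "GeV")  # ~2030
print("log  measure: median m0 =", np.median(log_prior(u)), "GeV")   # ~490
```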

3.2 The Likelihood

Observable | Mean value | Uncertainty | Reference
$\delta a_\mu \times 10^{10}$ | 29.5 | 8.8 | [36]
$M_W$ | 80.398 GeV | 27 MeV | [37, 38]
$\sin^2\theta^{lep}_{eff}$ | 0.23149 | $1.73 \times 10^{-4}$ | [39, 37]
$BR(b \to s\gamma) \times 10^{4}$ | 3.55 | 0.72 | [40, 41]
$\Delta_{0-}$ | 0.0375 | 0.0289 | [40, 41]
$R_{BR(B_u \to \tau\nu)}$ | 1.259 | 0.378 | [42]
$R_{\Delta M_{B_s}}$ | 0.85 | 0.12 | [42, 43]
Table 4: Summary of the Gaussian distributed observables used in the analysis. For each quantity we use a likelihood function with central mean $c$ and standard deviation $s = \sqrt{\sigma^2 + \tau^2}$, where $\sigma$ is the experimental uncertainty and $\tau$ is the theoretical uncertainty. $\Delta_{0-}$ represents the isospin asymmetry of $B \to K^*\gamma$. $R_{BR(B_u \to \tau\nu)}$ represents the ratio of the experimental and SM predictions of the branching ratio of $B_u$ mesons decaying into a tau and a tau neutrino. $R_{\Delta M_{B_s}}$ is the ratio of the experimental and the SM neutral $B_s$ meson mixing amplitudes. The non-Gaussian likelihoods for the LEP constraint on the Higgs mass, $BR(B_s \to \mu^+\mu^-)$ and $\Omega_{DM}h^2$ are described later.

Our calculation of the likelihood closely follows Ref. [15], with updated data and additional variables included; it is summarised in Table 4 and discussed further below. We assume that the measurements of the observables (the 'data') used in our likelihood calculation are independent and have Gaussian errors (the LEP constraint on the Higgs mass, the $BR(B_s \to \mu^+\mu^-)$ constraint and the $\Omega_{DM}h^2$ likelihood, each described later, are not Gaussian), so that the likelihood distribution for a given model $H$ is

$$p(D | \Theta, H) = \prod_i p(D_i | \Theta, H), \qquad (9)$$

where

$$p(D_i | \Theta, H) = \frac{1}{\sqrt{2\pi s_i^2}} \exp\left(-\chi_i^2 / 2\right) \qquad (10)$$

and

$$\chi_i^2 = \frac{(c_i - p_i)^2}{s_i^2}. \qquad (11)$$

Here $p_i$ is the "predicted" value of observable $i$ given the knowledge of the model $H$, $c_i$ is its measured central value and $s_i$ is the standard error of the measurement.
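A sketch of this Gaussian part of the likelihood, combining experimental and theoretical uncertainties in quadrature as in Table 4 (the predicted value in the example call is a hypothetical number, not a SOFTSUSY output):

```python
import numpy as np

def lnlike_gaussian(predicted, measured, sigma_exp, sigma_theory):
    """Eqs. (9)-(11): independent Gaussians with experimental and theory
    errors combined in quadrature, s_i^2 = sigma_i^2 + tau_i^2 (cf. Table 4)."""
    p, c = np.asarray(predicted), np.asarray(measured)
    s2 = np.asarray(sigma_exp) ** 2 + np.asarray(sigma_theory) ** 2
    return float(np.sum(-0.5 * (c - p) ** 2 / s2
                        - 0.5 * np.log(2 * np.pi * s2)))

# e.g. M_W alone (GeV), with a hypothetical predicted value of 80.370:
print(lnlike_gaussian([80.370], [80.398], [0.025], [0.010]))
```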

In order to calculate predictions for the observables from the input parameters $\Theta$, SOFTSUSY2.0.17 [44] is first employed to calculate the MSSM spectrum. Bounds upon the sparticle spectrum have been updated and are based upon the bounds collected in Ref. [11]. Any spectrum violating a 95% limit from negative sparticle searches is assigned a zero likelihood density. We also set a zero likelihood for any inconsistent point, e.g. one which does not break electroweak symmetry correctly, has a charged LSP, or has tachyonic sparticles. For points that are not ruled out, we then link the mSUGRA spectrum via the SUSY Les Houches Accord [45] (SLHA) to various other computer codes that calculate various observables. For instance, micrOMEGAs1.3.6 [46] calculates $\Omega_{DM}h^2$, the branching ratio $BR(B_s \to \mu^+\mu^-)$ and the anomalous magnetic moment of the muon $(g-2)_\mu$.

The anomalous magnetic moment of the muon was measured to be $a_\mu^{exp} = (11\,659\,208.0 \pm 6.3) \times 10^{-10}$ [47]. Its experimental value is in conflict with the SM predicted value from [36], which includes the latest QED [48], electroweak [49], and hadronic [36] contributions to $(g-2)_\mu$. This SM prediction does not however account for $\tau$ data, which is known to lead to significantly different results for $(g-2)_\mu$, implying underlying theoretical difficulties which have not been resolved so far. Restricting to $e^+e^-$ data, hence using the numbers given above, we find

$$\delta a_\mu \equiv a_\mu^{exp} - a_\mu^{SM} = (29.5 \pm 8.8) \times 10^{-10}. \qquad (12)$$

This excess may be explained by a supersymmetric contribution, the sign of which is identical in mSUGRA to the sign of the superpotential parameter $\mu$ [50]. After obtaining the one-loop MSSM value of $(g-2)_\mu$ from micrOMEGAs1.3.6, we add the dominant 2-loop corrections detailed in Refs. [51, 52].

The $W$ boson pole mass $M_W$ and the effective leptonic mixing angle $\sin^2\theta^{lep}_{eff}$ are also used in the likelihood. We take the measurements to be [37, 39]

$$M_W = 80.398 \pm 0.027\ \mathrm{GeV}, \qquad \sin^2\theta^{lep}_{eff} = 0.23149 \pm 0.000173, \qquad (13)$$

where the experimental errors and the theoretical uncertainties due to missing higher-order corrections in the SM [53] and the MSSM [38, 54] have been added in quadrature. The most up-to-date MSSM predictions for $M_W$ and $\sin^2\theta^{lep}_{eff}$ [38] are then used to compute the corresponding likelihoods.

A parameterisation of the LEP2 Higgs search likelihood for various Standard Model Higgs masses is utilised, since the lightest Higgs boson of mSUGRA is very SM-like once the direct search constraints are taken into account. It is smeared with a 2 GeV assumed theoretical uncertainty in the SOFTSUSY2.0.17 prediction of $m_h$, as described in [13].

The experimental value of the rare bottom quark branching ratio to a strange quark and a photon is constrained to be [55]

$$BR(b \to s\gamma) = (3.55 \pm 0.26) \times 10^{-4}. \qquad (14)$$

The SM prediction has recently moved down quite substantially, from $\simeq 3.6 \times 10^{-4}$ to $(3.15 \pm 0.23) \times 10^{-4}$ [56, 57]. This shift was caused by including most of the next-to-next-to-leading order (NNLO) perturbative QCD contributions as well as the leading non-perturbative and electroweak effects. We use the publicly available code SuperIso2.0 [40] (linked via the SLHA to the predicted mSUGRA spectrum), which computes $BR(b \to s\gamma)$ in the MSSM with Minimal Flavour Violation. We note that mSUGRA is of such a minimal flavour violating form, and so the assumptions present in SuperIso2.0 are the appropriate ones. The computation takes into account one-loop SUSY contributions, as well as $\tan\beta$-enhanced two-loop contributions in the effective lagrangian approach. The recent partial NNLO SM QCD corrections are also included by the program. Ref. [41] derives a 95% interval for the bounds, including the experimental and theory SM/MSSM errors, of

$$2.07 \times 10^{-4} < BR(b \to s\gamma) < 4.84 \times 10^{-4}. \qquad (15)$$

For the constraint on $BR(b \to s\gamma)$, we use the mean value of $3.55 \times 10^{-4}$ and derive the 1-$\sigma$ uncertainty from the above bound to be equal to $0.72 \times 10^{-4}$. We note that this is twice as large as the uncertainty used in another recent global fit [14], where an enhancement in the posterior density of the large $\tan\beta$ region was observed to result from the new $BR(b \to s\gamma)$ constraint.

The new upper 95% C.L. bound on $BR(B_s \to \mu^+\mu^-)$ coming from the CDF collaboration is $5.8 \times 10^{-8}$. We are in possession [58] of the empirical penalty for this observable as a function of the predicted value of $BR(B_s \to \mu^+\mu^-)$ from old CDF data, from the time when the 95% C.L. upper bound was $0.98 \times 10^{-7}$. Here, we assume that the shape of the likelihood penalty coming from the data is the same as presented in Ref. [12], but that the normalisation of the branching ratio shifts by the ratio of the 95% C.L. upper bounds, $5.8/9.8$.

For $\Delta_{0-}$, the isospin asymmetry of $B \to K^*\gamma$, the 95% confidence level range for the experimental results from the combined BABAR and Belle data, combined with the theoretical errors, is [41]:

$$-0.019 < \Delta_{0-} < 0.094, \qquad (16)$$

with a central value of 0.0375. We derive the 1-$\sigma$ uncertainty from the above bound to be equal to 0.0289. We use the publicly available code SuperIso2.0 [40] to calculate $\Delta_{0-}$. We neglect experimental correlations between the measurements of $\Delta_{0-}$ and $BR(b \to s\gamma)$. In practice, the $\Delta_{0-}$ constraint makes a much smaller difference to our fits than $BR(b \to s\gamma)$, and so we expect the inclusion of a correlation to also have a small effect. The parametric correlations caused by variations of the SM nuisance parameters are included in our analysis, since these are varied as input parameters.

The average experimental value of $BR(B_u \to \tau\nu)$ from HFAG [42] (combining the purely leptonic modes) is:

$$BR(B_u \to \tau\nu)_{exp} = (1.41 \pm 0.43) \times 10^{-4}. \qquad (17)$$

The SM prediction is rather uncertain because of two incompatible empirically derived values of $|V_{ub}|$: one comes from exclusive semi-leptonic decays, the other comes from inclusive semi-leptonic decays. These lead to two different SM predictions for the branching ratio. We statistically average these two by averaging the central values, and then adding the errors in quadrature and dividing by 2. This gives:

$$BR(B_u \to \tau\nu)_{SM} \simeq 1.12 \times 10^{-4}. \qquad (18)$$

Taking the ratio of the experimental and SM values of $BR(B_u \to \tau\nu)$ gives:

$$R_{BR(B_u \to \tau\nu)} = 1.259 \pm 0.378. \qquad (19)$$

For the MSSM prediction, we use the formulae in Ref. [59], which include the large $\tan\beta$ limit of the one-loop corrections coming from loops involving a charged Higgs boson.
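The averaging procedure just described can be written compactly as follows; the error propagation shown for the experiment-to-SM ratio (relative errors in quadrature) is one natural choice and an assumption of this sketch, not a statement of the exact treatment used in the fit, and the inputs are placeholders:

```python
import numpy as np

def average_two(c1, s1, c2, s2):
    """Average two central values; combine errors in quadrature, divided by 2."""
    return 0.5 * (c1 + c2), 0.5 * np.hypot(s1, s2)

def exp_over_sm(c_exp, s_exp, c_sm, s_sm):
    """Experiment/SM ratio with relative errors combined in quadrature."""
    r = c_exp / c_sm
    return r, r * np.hypot(s_exp / c_exp, s_sm / c_sm)

# hypothetical inputs, in units of 1e-4, purely to exercise the functions:
print(average_two(0.9, 0.15, 1.3, 0.3))
print(exp_over_sm(1.41, 0.43, 1.12, 0.2))
```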

The experimental and SM-predicted values of the neutral $B_s$ meson mixing amplitude are [42, 43]:

$$\Delta M_{B_s}^{exp} = 17.77 \pm 0.12\ \mathrm{ps}^{-1}, \qquad \Delta M_{B_s}^{SM} = 20.9 \pm 2.6\ \mathrm{ps}^{-1}. \qquad (20)$$

Taking the ratio of these two values, we get:

$$R_{\Delta M_{B_s}} = 0.85 \pm 0.12. \qquad (21)$$

We use the formulae of Ref. [60] for the MSSM prediction of $\Delta M_{B_s}$, calculating it in the large $\tan\beta$ approximation. The dominant correction comes from one-loop diagrams involving a neutral Higgs boson.

Figure 1: Depiction of our likelihood constraint on the predicted value of $\Omega_{DM}h^2$ due to lightest neutralinos, compared to a simple Gaussian with the WMAP5 central value of 0.1143 and a 1-$\sigma$ uncertainty of 0.02.

The WMAP 5-year data combined with the distance measurements from Type Ia supernovae (SN) and the Baryon Acoustic Oscillations (BAO) in the distribution of galaxies give the $\Lambda$-cold dark matter fitted value of the dark matter relic density [61]:

$$\Omega_{DM}h^2 = 0.1143 \pm 0.0034. \qquad (22)$$

In the present paper, we assume that the dark matter consists of the lightest neutralino, the LSP. Recently, it has been shown that the LSP relic density is highly sensitive to the pre-Big Bang Nucleosynthesis (BBN) expansion rate, and that even a modest modification can greatly enhance the calculated relic density without contradicting the cosmological observations [62]. It is also possible that a non-neutralino component of dark matter is concurrently present, and indeed the inclusion of neutrino masses via right-handed neutrinos can change the relic density prediction somewhat [63]. We therefore penalise only a predicted $\Omega_{DM}h^2$ greater than the WMAP5 + BAO + SN central value. We define $p$ to be the predicted value of $\Omega_{DM}h^2$, $c = 0.1143$ to be the central value from the WMAP5 + BAO + SN observations and $s$ to be the error on the predicted value, which includes theoretical as well as experimental components. We take $s = 0.02$ in order to incorporate an estimate of the higher order uncertainties in its prediction [64], and we define the likelihood as:

$$p(D_{\Omega}\,|\,p) = \frac{1}{c + s\sqrt{\pi/2}} \times \begin{cases} 1 & \mathrm{if}\ p < c \\ \exp\left[-\frac{(p - c)^2}{2 s^2}\right] & \mathrm{if}\ p \geq c \end{cases} \qquad (23)$$

A diagram of the resulting likelihood penalty is displayed in Fig. 1. This differs slightly from the formulation suggested previously by one of the authors for the case when a non-neutralino component of dark matter is concurrently present, which drops more quickly than our flat likelihood up until the peak of the WMAP Gaussian likelihood distribution.
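A sketch of this one-sided likelihood as reconstructed in Eq. (23), where the flat and Gaussian pieces share the normalisation $1/(c + s\sqrt{\pi/2})$ so that the density integrates to unity:

```python
import numpy as np

C, S = 0.1143, 0.02          # WMAP5+BAO+SN central value and our error s

def relic_likelihood(p, c=C, s=S):
    """Flat below c (allowing non-neutralino dark matter), Gaussian above."""
    norm = 1.0 / (c + s * np.sqrt(np.pi / 2.0))
    return norm * np.where(p <= c, 1.0, np.exp(-0.5 * ((p - c) / s) ** 2))

for p in (0.05, 0.1143, 0.15):
    print(p, float(relic_likelihood(p)))
```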

4 Results

In this section, we first show our main results on the quantification of the preference of the fits for $\mu > 0$. Next, we show some highlights of the updated parameter constraints coming from the fit, finishing with a study of the level of compatibility of the various observables.

4.1 Model Comparison

We summarise our main results in Table 5, in which we list the posterior model probability odds for the mSUGRA models with $\mu > 0$ and $\mu < 0$, for the two prior ranges used, with flat and logarithmic prior measures as discussed in Section 3. The calculation of the ratio of posterior model probabilities requires the prior probability ratio for the two signs of $\mu$ (see Section 2), which we have set to unity. One could easily calculate the odds for a different prior probability ratio by multiplying the values in Table 5 by it. From the probability odds listed in Table 5, although there is positive evidence in favour of the mSUGRA model with $\mu > 0$, the extent of the preference depends quite strongly on the priors used, the evidence ranging from relatively strong in the case of the logarithmic prior to weak for the flat priors. This dependence on the prior is a clear sign that the data are not yet of sufficiently high quality to be able to distinguish between these models unambiguously. Hopefully, the forthcoming high-quality data from the LHC will be able to cast more light on this.

Prior | "2 TeV" flat | "2 TeV" log | "4 TeV" flat | "4 TeV" log
$\Delta\ln\mathcal{Z}$ (our determination)
$P(\mu>0|D)/P(\mu<0|D)$ (our determination)
$\Delta\ln\mathcal{Z}$ (from Ref. [12])
$P(\mu>0|D)/P(\mu<0|D)$ (from Ref. [12])
Table 5: The posterior probability ratios for the mSUGRA model with different signs of $\mu$. Here we have assumed the prior probabilities of the different signs of $\mu$ to be the same. The uncertainties on $\ln\mathcal{Z}$ for the mSUGRA model with different signs of $\mu$ are the same for the different priors, since with the MultiNest technique the uncertainty on the evidence value is set by the number of live points and the stopping criterion (see Refs. [18, 19]), which were the same for all the priors used in this study. The second pair of rows shows, for comparison, a previous determination with earlier data using the much less precise bridge sampling method. Some aspects of that fit were somewhat different to the present work's approach and are discussed in the text.

We also show in Table 5, for comparison, the probability ratio determined in an earlier MCMC fit using different data [12]. We can see that our determination of the probability ratio favours $\mu > 0$ more strongly than Ref. [12]. The main factors affecting this are that Ref. [12] used a determination of the anomalous magnetic moment of the muon that was less in conflict with experiment than Eq. 12 of the present analysis, which also includes the additional $B$-observables $\Delta_{0-}$, $R_{BR(B_u \to \tau\nu)}$ and $R_{\Delta M_{B_s}}$. Some other details of the fit were also different in Ref. [12]: for instance, the prior ranges of the mSUGRA parameters differed from those used here. These ranges will affect the evidence obtained, at least to some degree. Unfortunately, Ref. [12] did not present statistical errors on the determination of the ratios of evidence values, a situation which is rectified in Table 5. It is clear from Table 5 that the uncertainty in the result of the model comparison is presently dominated by the prior choice, rather than by the small statistical uncertainty in the determination of the evidence ratio with MultiNest. It can however be concluded that present data favour the $\mu > 0$ branch of mSUGRA, with Bayesian evidence ranging from weak to moderate depending on the choice of prior.

To quantify the extent to which these results depend on the $(g-2)_\mu$ constraint, we calculate the Bayesian evidence ratio for the mSUGRA models with $\mu > 0$ and $\mu < 0$, for the flat "4 TeV" range priors, with all the observables discussed in Section 3.2 apart from $(g-2)_\mu$. We find an evidence ratio close to unity, translating into approximately even posterior probability odds. This shows that in the absence of the $(g-2)_\mu$ constraint, the mSUGRA models with $\mu > 0$ and $\mu < 0$ are equally favoured by the data. Inclusion of the $(g-2)_\mu$ constraint causes a shift of 2.3 units of $\ln\mathcal{Z}$ in favour of $\mu > 0$ for the linear "4 TeV" range prior measure, and hence it can be concluded that $(g-2)_\mu$ does indeed dominate our model selection results in favour of $\mu > 0$.

4.2 Updated Parameter Constraints

Figure 2: The 2-dimensional posterior probability densities in planes spanned by the mSUGRA parameters $m_0$, $m_{1/2}$ and $\tan\beta$, for the linear prior measure "4 TeV" range analysis and $\mu > 0$. The inner and outer contours enclose 68% and 95% of the total probability respectively. All of the other parameters in each plane have been marginalised over.

We display the results of the MultiNest fits as posterior probability densities in the $(m_0, m_{1/2})$ and $(m_0, \tan\beta)$ planes in Fig. 2 (the uneven "bobbly" appearance of the 2d marginalised posteriors is due to the small pixel size used in the marginalisation grid; this was required in order to resolve the finest features in the posterior distributions). Previous global fits of mSUGRA have found that the dark matter relic density has the largest effect on the parameter space [9]. In particular, regions where the LSP annihilates efficiently through some particular mechanism are preferred by the fits. In the left-hand panel, we see that the highest posterior region is where the stau co-annihilation channel is active, at the lowest values of $m_0$, where the lightest stau co-annihilates very efficiently with the lightest neutralino due to their near mass degeneracy. Next, in the approximate region 0.5 TeV $\lesssim m_0 \lesssim$ 1.5 TeV, there is another reasonably high posterior region. In this region, $\tan\beta$ is large and the LSP is approximately half the mass of the pseudo-scalar Higgs boson $A^0$. The process $\chi\chi \to A^0 \to b\bar{b}$ becomes an efficient annihilation channel in this region. For higher values of $m_0$, the hyperbolic branch [65, 66] régime reigns, where the LSP contains a significant higgsino component and annihilation into weak gauge boson pairs becomes quite efficient. This region dominantly has large $\tan\beta$, as can be seen in the right-hand panel of Fig. 2. All of the qualitative features of previous MCMC global fits [9, 11, 12, 13, 15] are reproduced in the figure, providing a useful validation of the MultiNest technique in a particle physics context, where the shape of the multi-dimensional posterior exhibits multi-modality and curving degeneracies. 2-dimensional marginalisations in other mSUGRA parameter combinations also agree to a large extent with previous MCMC fits, for both $\mu > 0$ and $\mu < 0$. However, compared to the MCMC fits in Refs. [9, 11, 14], there has been a slight migration for $\mu > 0$: the stau co-annihilation region has become relatively more favoured than previously and the hyperbolic branch has become less favoured. This is primarily due to $M_W$ and $\sin^2\theta^{lep}_{eff}$: our calculation includes 2-loop MSSM effects, and so we are able to place smaller errors on the theoretical predictions than Refs. [9, 11, 14]. Both of these variables show a mild preference for a sizable SUSY contribution once the 2-loop effects are included [67]. The pure SOFTSUSY2.0.17 calculation is at 1-loop order and, without the additional two-loop effects, displays a preference for larger SUSY scalar masses [12], thus favouring the hyperbolic branch region more. An effect in the opposite direction, coming from including the NNLO corrections to $BR(b \to s\gamma)$, is possible [14]. Large values of $m_0$ in the hyperbolic branch region lead to fairly light charged Higgs bosons in mSUGRA, which may then, via charged Higgs-top loops, push the $b \to s\gamma$ branching ratio toward its experimentally preferred range by adding constructively to the Standard Model contribution. However, our estimate of the combined statistical error of $BR(b \to s\gamma)$ in Table 4 means that this effect only has a small statistical pull on the fits, being out-weighed by the oppositely directed effects mentioned above. We note here that, as the $m_t$ determined from experiment increases, the focus point region moves to higher values of $m_0$ [68]. However, very similar fits to the ones presented here were performed for $m_t = 172.6$ GeV, see Fig. 2a of Ref. [16], and the posterior density on the $(m_0, m_{1/2})$ plane did not change much compared to the present paper (which uses $m_t = 170.9$ GeV).

Figure 3: The 2-dimensional mSUGRA posterior probability densities in the $(m_0, m_{1/2})$ plane for $\mu < 0$, for (left) the '4 TeV range' linear measure prior analysis and (right) the '4 TeV range' logarithmic measure prior analysis. The inner and outer contours enclose 68% and 95% of the total probability respectively. All of the other parameters in each plane have been marginalised over.

For $\mu < 0$, the fit prefers a higher posterior probability for the focus point region compared to Ref. [12]. We show the marginalisation of mSUGRA to the $(m_0, m_{1/2})$ plane in Fig. 3. The left-hand panel shows the linear measure prior analysis and may be compared directly with Fig. 5a of Ref. [12], in which the stau co-annihilation region has the highest posterior density. The increased discrepancy of $(g-2)_\mu$ with the Standard Model in the present fit favours heavier sparticles, the SUSY contribution to $(g-2)_\mu$ being of the wrong sign for $\mu < 0$ mSUGRA. On the right-hand side, we see how the fit changes due to a logarithmic measure on the prior. Indeed, the foreseen shift toward lower values of $m_0$ and $m_{1/2}$ is significant, the stau co-annihilation channel being favoured once more. Although there are some similarities with the left-hand panel, it is clear that the choice of prior measure still has a non-negligible effect on the fit, despite the inclusion of the new $B$-physics observables.

Figure 4: Comparison of the $\mu > 0$ and $\mu < 0$ 1-dimensional relative posterior probability densities of the mSUGRA parameters $m_0$, $m_{1/2}$, $A_0$ and $\tan\beta$, for the linear measure prior '4 TeV' range analysis. All of the other input parameters have been marginalised over.

With this fact still in mind, we compare the posterior probability density functions for $\mu > 0$ and $\mu < 0$ in Fig. 4 for the linear measure priors. In Fig. 4, we see the preference for heavier sparticles in the $\mu < 0$ case reflected in the larger values for the universal scalar and gaugino masses $m_0$ and $m_{1/2}$. It is clear from the top left-hand panel that any inference made about scalar masses will be quite sensitive to the exact range taken, since the distribution is near its maximum at large values close to 4 TeV. On the other hand, the data constrain $m_{1/2}$ robustly, to well below the upper end of its prior range. $\mu < 0$ favours large $\tan\beta$ less than $\mu > 0$ does, since for large $\tan\beta$ the SUSY contribution to $(g-2)_\mu$ becomes more negative, i.e. of the wrong sign compared to the data.

Figure 5: An illustration of tensions between the different observables ($\Omega_{DM}h^2$, $\delta a_\mu$, $BR(b \to s\gamma)$, $M_W$, $\sin^2\theta^{lep}_{eff}$, $R_{BR(B_u \to \tau\nu)}$, $\Delta_{0-}$ and $R_{\Delta M_{B_s}}$) for the mSUGRA model. The black (dash-dotted), red (thin solid) and the blue (thick solid) lines show the relative posterior probability for $\mu > 0$, $\mu < 0$ and the likelihood, respectively, for each observable.
Parameter | 68% region | 95% region | 68% region | 95% region
(GeV)
(TeV)
(TeV)
(TeV)
(TeV)
(TeV)
(TeV)
Table 6: Sparticle mass ranges for the linear '4 TeV' analysis corresponding to 68% and 95% of the posterior probability.

As discussed in Section 2, one can easily obtain the posteriors for the observables, which are derived from the model parameters, from the posterior of the model parameters. Fig. 5 displays the statistical pulls of the various observables. In the absence of any tension between the constraints or volume effects, one would expect the posterior curves to lie on top of the likelihood curves representing the experimental data used in the analysis (see also [10]). In order to separate the volume effects from pulls originating from the data, the likelihood profile could be used [15]. Here, though, we just comment on the combined effect of the two mechanisms. We see that $\Omega_{DM}h^2$ has a preference for being rather small, but non-zero, for either sign of $\mu$. Since any value below 0.1143 is not penalised by the likelihood penalty we have used, this may be ascribed to a combination of volume effects (there is simply more volume of parameter space with a small relic density) and a pull toward those regions from the other observables. The biggest disparity between the experimental data and the posterior probability distribution is observed for the $(g-2)_\mu$ constraint, which can only be near its central measured value for light sparticles and large $\tan\beta$. Many of the other constraints are pulling toward large values of the masses, where the volume of parameter space is larger, and so small values of $\delta a_\mu$ are preferred. We see a slight preference for $\mu < 0$ from the $BR(b \to s\gamma)$ constraint, as expected from the discussion in Section 4.2 and Ref. [14], but this is too small to outweigh the effects of $(g-2)_\mu$, as shown previously by our estimate of the shift in $\ln\mathcal{Z}$ induced by $(g-2)_\mu$. The figure shows that the ratio $R_{\Delta M_{B_s}}$ of the MSSM prediction of the $B_s$ mass splitting to the SM prediction is really not active, i.e. it does not vary across the allowed mSUGRA parameter space, and so does not have an effect on the posterior density.

We list the sparticle mass ranges for the linear '4 TeV' analysis corresponding to 68% and 95% of the posterior probability in Table 6.

4.3 Consistency Check between Different Constraints

It is clear from Fig. 5 that $(g-2)_\mu$ and $BR(b \to s\gamma)$, both important observables, are pulling in opposite directions. We choose the preferred sign $\mu > 0$ for our analysis. In order to check whether the observables $BR(b \to s\gamma)$ and $(g-2)_\mu$ provide consistent information on the $\mu > 0$ branch of mSUGRA parameter space, calculation of the $R$ parameter as given in Eq. 6 is required. In order to carry out this calculation, we impose linear '4 TeV' priors. In Fig. 6, we plot the posterior probability distributions for the analyses with $(g-2)_\mu$, $BR(b \to s\gamma)$ and $\Omega_{DM}h^2$ individually. From the figure, we see that the 95% probability regions preferred by the $(g-2)_\mu$ and $BR(b \to s\gamma)$ data are a little different, as expected for $\mu > 0$, since $(g-2)_\mu$ prefers light SUSY particles whereas the $BR(b \to s\gamma)$ datum prefers heavy ones in the hyperbolic branch region. Nevertheless, there is some overlap in the 95% probability regions favoured by these two data-sets. One would then expect the inconsistency between $(g-2)_\mu$ and $BR(b \to s\gamma)$ not to be highly significant. We evaluate

$$\ln R_{(g-2)_\mu,\, b\to s\gamma} = \ln\frac{\mathcal{Z}\big(D_{(g-2)_\mu}, D_{b\to s\gamma}\big)}{\mathcal{Z}\big(D_{(g-2)_\mu}\big)\, \mathcal{Z}\big(D_{b\to s\gamma}\big)}, \qquad (24)$$

which turns out to be slightly negative, showing very small evidence for inconsistency between $(g-2)_\mu$ and $BR(b \to s\gamma)$.

Since $\Omega_{DM}h^2$ plays such a dominant role in shaping the posterior, we next check the consistency between all three constraints in mSUGRA. We perform the analysis in the same manner as described above and evaluate

$$\ln R_{(g-2)_\mu,\, b\to s\gamma,\, \Omega h^2} = \ln\frac{\mathcal{Z}\big(D_{(g-2)_\mu}, D_{b\to s\gamma}, D_{\Omega}\big)}{\mathcal{Z}\big(D_{(g-2)_\mu}\big)\, \mathcal{Z}\big(D_{b\to s\gamma}\big)\, \mathcal{Z}\big(D_{\Omega}\big)}, \qquad (25)$$

finding no evidence for inconsistency between $(g-2)_\mu$, $BR(b \to s\gamma)$ and $\Omega_{DM}h^2$.

These results can be seen qualitatively in the 2-D posteriors for the joint analysis of $(g-2)_\mu$, $BR(b \to s\gamma)$ and $\Omega_{DM}h^2$ in Fig. 6. It can be seen that the joint posterior lies precisely in the region of overlap between the posteriors for the analyses of these three data-sets separately. As shown in Appendix A, in the presence of any inconsistency between different data-sets, the joint posterior can exclude the high posterior probability regions of the analyses with the data-sets taken separately; this is not the case here, and consequently we do not find strong evidence for inconsistency between the $(g-2)_\mu$, $BR(b \to s\gamma)$ and $\Omega_{DM}h^2$ data-sets.

Figure 6: The 2-dimensional posterior probability distributions of the $\mu > 0$ branch of mSUGRA with, from top to bottom: $(g-2)_\mu$, $BR(b \to s\gamma)$, $\Omega_{DM}h^2$, and the joint analysis of all three. The inner and outer contours enclose 68% and 95% of the total probability respectively. All of the other input parameters in each plane have been marginalised over.

We now treat all the observables apart from $(g-2)_\mu$, $BR(b \to s\gamma)$ and $\Omega_{DM}h^2$ (collectively denoted $D_{rest}$) as additional priors on the mSUGRA parameter space, in order to see whether these have any effect on the consistency between $(g-2)_\mu$ and $BR(b \to s\gamma)$. Eq. 6 then becomes:

$$R = \frac{P\big(D_{(g-2)_\mu}, D_{b\to s\gamma}\,\big|\,D_{rest}, H^0\big)}{P\big(D_{(g-2)_\mu}\,\big|\,D_{rest}, H^1\big)\, P\big(D_{b\to s\gamma}\,\big|\,D_{rest}, H^1\big)}, \qquad (26)$$

where the hypothesis $H^0$ states that mSUGRA jointly fits the two observables, whereas $H^1$ states that the two observables prefer different regions of parameter space.

Since the measurements of the observables used in the likelihood are independent,

$$P\big(D_{(g-2)_\mu}, D_{b\to s\gamma}\,\big|\,D_{rest}, H\big) = \frac{\mathcal{Z}\big(D_{(g-2)_\mu}, D_{b\to s\gamma}, D_{rest}\big)}{\mathcal{Z}_0}, \qquad (27)$$
$$P\big(D_{(g-2)_\mu}\,\big|\,D_{rest}, H\big) = \frac{\mathcal{Z}\big(D_{(g-2)_\mu}, D_{rest}\big)}{\mathcal{Z}_0}, \qquad (28)$$
$$P\big(D_{b\to s\gamma}\,\big|\,D_{rest}, H\big) = \frac{\mathcal{Z}\big(D_{b\to s\gamma}, D_{rest}\big)}{\mathcal{Z}_0}, \qquad (29)$$

where $\mathcal{Z}_0 \equiv \mathcal{Z}(D_{rest})$ is the Bayesian evidence for the analysis of the $\mu > 0$ branch of the mSUGRA model with all the observables apart from $(g-2)_\mu$, $BR(b \to s\gamma)$ and $\Omega_{DM}h^2$. Hence, to evaluate $R$, we calculate the Bayesian evidence for the joint as well as the individual analyses with $(g-2)_\mu$ and $BR(b \to s\gamma)$ added to the remaining observables. We evaluate

$$\ln R = \ln\mathcal{Z}\big(D_{(g-2)_\mu}, D_{b\to s\gamma}, D_{rest}\big) + \ln\mathcal{Z}_0 - \ln\mathcal{Z}\big(D_{(g-2)_\mu}, D_{rest}\big) - \ln\mathcal{Z}\big(D_{b\to s\gamma}, D_{rest}\big) > 0, \qquad (30)$$

showing that even the slight inconsistency found between $(g-2)_\mu$ and $BR(b \to s\gamma)$ without treating the remaining observables as additional priors has now vanished, which means that these data-sets have cut off the discrepant regions of the two constraints.

5 Summary and Conclusions

Bayesian analysis methods have been used successfully in astronomical applications [69, 70, 71, 72, 73, 74, 75, 76, 77, 30]. However, the application of Bayesian methods to problems in particle physics is less established, due perhaps to the highly degenerate and multi-modal parameter spaces which present a great difficulty for standard MCMC based techniques. Bank sampling [27] provides a practical means of MCMC parameter estimation and evidence ratio estimation under such circumstances, but it cannot calculate the evidence itself. We have shown that the MultiNest technique not only handles these complex distributions in a highly efficient manner but also allows the calculation of the Bayesian evidence, enabling one to perform model comparison. This could be of great importance in distinguishing between different beyond-the-Standard-Model theories, once high quality data from the LHC become available.

Our central results are summarised in Table 5. It is clear that, in global mSUGRA fits to indirect data, $\mu > 0$ is somewhat preferred to $\mu < 0$, mainly due to the data on the anomalous magnetic moment of the muon, which outweigh the preference for $\mu < 0$ coming from the measured branching ratio of a $b$ quark into an $s$ quark and a photon and its SM prediction when some of the NNLO QCD contributions are included. For a given measure and range of the prior, the evidence ratio between the different signs of $\mu$ is accurately determined by the MultiNest technique. Despite the additional data from the $B$ sector, and the anomalous magnetic moment of the muon having a higher discrepancy with the Standard Model prediction, there is still not enough power in the data to make the fits robust. We see a signal of this in the fact that the evidence ratio is highly dependent upon the measure and range of the prior distribution of the mSUGRA parameters. We obtain positive evidence for $\mu > 0$ for every prior range and measure chosen, with odds that depend upon which range and which measure is used. On the scale summarised in Table 1, 'weak' evidence is characterised as $|\Delta\ln R|$ bigger than 1 and 'moderate' as bigger than 2.5. Thus we cannot unambiguously conclude that the evidence is strongly in favour of $\mu > 0$: it ranges from weak to moderate. A further test also suggested that, within one prior measure and range, and for $\mu > 0$, the tension between the observables $BR(b \to s\gamma)$ and $(g-2)_\mu$ is not statistically significant.

Appendix A Consistency Check with Bayesian Evidence

In order to motivate the use of the Bayesian evidence to quantify the consistency between different data-sets, as discussed in Section 2, we apply the method to the classic problem of fitting a straight line through a set of data points.

A.1 Toy Problem

We consider that the true underlying model for some process is a straight line described by:

$$y = mx + c, \qquad (31)$$

where $m$ is the slope and $c$ is the intercept. We take two independent sets of measurements, $d_1$ and $d_2$, each containing $N$ data points. The $x$ values for all these measurements are drawn from a uniform distribution and are assumed to be known exactly.

A.1.1 Case I: Consistent Data-Sets

Figure 7: Upper left: data-sets $d_1$ and $d_2$ drawn from a straight line model (solid line) with slope $m$ and intercept $c$, subject to independent Gaussian noise. Upper right: posterior assuming that data-sets $d_1$ and $d_2$ are consistent. Lower left: posterior for data-set $d_1$. Lower right: posterior for data-set $d_2$. The inner and outer contours enclose 68% and 95% of the total probability respectively. The true parameter values are indicated by red crosses.

In the first case we add Gaussian noise with standard deviations $\sigma_1$ and $\sigma_2$ to the data-sets $d_1$ and $d_2$ respectively, drawing both from the same underlying line. Hence both data-sets provide consistent information on the underlying process.

We assume that the errors $\sigma_1$ and $\sigma_2$ on the data-sets $d_1$ and $d_2$ are known exactly. The likelihood function can then be written as:

$$P(d\,|\,m, c) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\left(-\chi_i^2/2\right), \qquad (32)$$

where

$$\chi_i^2 = \frac{\big(d_i - y_p(x_i)\big)^2}{\sigma_i^2} \qquad (33)$$

and

$$y_p(x) = mx + c \qquad (34)$$

is the predicted value of $y$ at a given $x$.

We impose uniform priors on both $m$ and $c$. In Fig. 7 we show the data points and the posterior for the analysis assuming the data-sets $d_1$ and $d_2$ are consistent. The true parameter values clearly lie inside the contour enclosing 95% of the posterior probability.

In order to quantify the consistency between the data-sets $d_1$ and $d_2$, we evaluate $R$ as given in Eq. 6, which for this case becomes:

$$R = \frac{P(d_1, d_2\,|\,H^0)}{P(d_1\,|\,H^1)\, P(d_2\,|\,H^1)} = \frac{\mathcal{Z}(d_1, d_2)}{\mathcal{Z}(d_1)\, \mathcal{Z}(d_2)}, \qquad (35)$$

where the hypothesis $H^0$ states that the model jointly fits the data-sets $d_1$ and $d_2$, whereas $H^1$ states that $d_1$ and $d_2$ prefer different regions of parameter space. We evaluate $\ln R$ to be

(36)

showing strong evidence in favour of $H^0$.
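For a problem this small, Eq. (35) can also be cross-checked by brute force, integrating likelihood times prior over a grid in $(m, c)$. The sketch below uses illustrative values for the slope, intercept, noise level and prior ranges (the values used to make the figures are not reproduced here) and evaluates $\ln R$ for a consistent pair of data-sets:

```python
import numpy as np

rng = np.random.default_rng(3)
m_true, c_true, sigma = 1.0, 1.0, 0.1      # illustrative true model and noise
x1, x2 = rng.uniform(0, 1, 30), rng.uniform(0, 1, 30)
d1 = m_true * x1 + c_true + rng.normal(0, sigma, 30)
d2 = m_true * x2 + c_true + rng.normal(0, sigma, 30)   # consistent data-sets

m_grid = np.linspace(-5, 5, 801)           # uniform prior on [-5, 5]^2
c_grid = np.linspace(-5, 5, 801)
M, C = np.meshgrid(m_grid, c_grid)

def lnZ(x, d):
    """Evidence by direct summation: ln of integral of L(m,c) * prior."""
    lnL = sum(-0.5 * ((di - (M * xi + C)) / sigma) ** 2
              - 0.5 * np.log(2 * np.pi * sigma ** 2)
              for xi, di in zip(x, d))
    ln_prior = -2.0 * np.log(10.0)         # prior density 1/100 on the square
    dA = (m_grid[1] - m_grid[0]) * (c_grid[1] - c_grid[0])
    mx = lnL.max()
    return mx + np.log(np.sum(np.exp(lnL - mx)) * dA) + ln_prior

lnR = (lnZ(np.concatenate([x1, x2]), np.concatenate([d1, d2]))
       - lnZ(x1, d1) - lnZ(x2, d2))
print("ln R =", lnR)   # positive, as expected for consistent data-sets
```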

A.1.2 Case II: Inconsistent Data-Sets

Figure 8: Upper left: data-sets $d_1$ and $d_2$ drawn from straight line models (solid lines) with slopes and intercepts $(m_1, c_1)$ and $(m_2, c_2)$ respectively, subject to independent Gaussian noise. Upper right: posterior assuming that data-sets $d_1$ and $d_2$ are consistent. Lower left: posterior for data-set $d_1$. Lower right: posterior for data-set $d_2$. The inner and outer contours enclose 68% and 95% of the total probability respectively. The true parameter values are indicated by red and black crosses for data-sets $d_1$ and $d_2$ respectively.

We now introduce a systematic error into the data-set $d_2$ by drawing it from an incorrect straight line model with slope $m_2$ and intercept $c_2$. Measurements for $d_1$ are still drawn from the original straight line. We assume that the errors $\sigma_1$ and $\sigma_2$, for $d_1$ and $d_2$ respectively, are both quoted correctly.

We impose uniform priors on $m$ and $c$. In Fig. 8 we show the data points and the posterior for the analysis assuming the data-sets $d_1$ and $d_2$ are consistent, as well as for the analyses with the data-sets $d_1$ and $d_2$ taken separately. In spite of the fact that the two sets of true parameter values define a direction along the natural degeneracy line in the $(m, c)$ plane, neither of the true parameter values lies inside the contour enclosing 95% of the joint posterior probability. Also, it can be seen that there is no overlap between the posteriors for data-sets $d_1$ and $d_2$, and so both models can be excluded at a high significance level. We again compute $R$ as given in Eq. 35 and evaluate it to be

(37)

showing evidence in favour of $H^1$, i.e. that the data-sets $d_1$ and $d_2$ provide inconsistent information on the underlying model.

Acknowledgements

We thank Nazila Mahmoudi for her help with SuperIso2.0 and Pietro Slavich for advice about the $b \to s\gamma$ calculation. This work has been partially supported by STFC and the EU FP6 Marie Curie Research & Training Network "UniverseNet" (MRTN-CT-2006-035863). The computation was carried out largely on the Cosmos UK National Cosmology Supercomputer at DAMTP, Cambridge and the Cambridge High Performance Computing Cluster Darwin, and we thank Victor Travieso, Andrey Kaliazin and Stuart Rankin for their computational assistance. FF is supported by the Cambridge Commonwealth Trust, the Isaac Newton Trust and the Pakistan Higher Education Commission Fellowships. SSA is supported by the Gates Cambridge Trust. RT is supported by the Lockyer Fellowship of the Royal Astronomical Society, St Anne's College, Oxford and STFC. The authors would like to thank the European Network of Theoretical Astroparticle Physics ENTApP ILIAS/N6, under contract number RII3-CT-2004-506222, for financial support.

References

  • [1] H. Baer and C. Balazs, $\chi^2$ analysis of the minimal supergravity model including WMAP, $g_\mu - 2$ and $b \to s\gamma$ constraints, JCAP 0305 (2003) 006 [arXiv:hep-ph/0303114].
  • [2] J. R. Ellis, K. A. Olive, Y. Santoso and V. C. Spanos, Likelihood analysis of the CMSSM parameter space, Phys. Rev. D 69 (2004) 095004 [arXiv:hep-ph/0310356].
  • [3] S. Profumo and C. E. Yaguna, A statistical analysis of supersymmetric dark matter in the MSSM after WMAP, Phys. Rev. D 70 (2004) 095004 [arXiv:hep-ph/0407036].
  • [4] E. A. Baltz and P. Gondolo, Markov chain Monte Carlo exploration of minimal supergravity with implications for dark matter, JHEP 0410 (2004) 052 [arXiv:hep-ph/0407039].
  • [5] J. R. Ellis, S. Heinemeyer, K. A. Olive and G. Weiglein, Indirect sensitivities to the scale of supersymmetry, JHEP 0502 (2005) 013 [arXiv:hep-ph/0411216].
  • [6] L. S. Stark, P. Hafliger, A. Biland and F. Pauss, New allowed mSUGRA parameter space from variations of the trilinear scalar coupling A0, JHEP 0508 (2005) 059 [arXiv:hep-ph/0502197].
  • [7] L. Alvarez-Gaume, J. Polchinski and M. B. Wise, Minimal Low-Energy Supergravity, Nucl. Phys. B 221 (1983) 495; R. Arnowitt and P. Nath, SUSY mass spectrum in SU(5) supergravity grand unification, Phys. Rev. Lett. 69, 725 (1992).
  • [8] J. P. Conlon and F. Quevedo, Gaugino and scalar masses in the landscape, JHEP 0606 (2006) 029 [arXiv:hep-th/0605141]; L. E. Ibanez, The fluxed MSSM, Phys. Rev. D 71 (2005) 055005 [arXiv:hep-ph/0408064]; A. Brignole, L. E. Ibanez and C. Munoz, Towards a theory of soft terms for the supersymmetric Standard Model, Nucl. Phys. B 422 (1994) 125, Erratum-ibid. B 436 (1995) 747 [arXiv:hep-ph/9308271].
  • [9] B. C. Allanach and C. G. Lester, Multi-dimensional mSUGRA likelihood maps, Phys. Rev. D 73 (2006) 015013 [arXiv:hep-ph/0507283].
  • [10] L. Roszkowski, R. R. de Austri, J. Silk and R. Trotta, On prospects for dark matter indirect detection in the Constrained MSSM, [arXiv:0707.0622].
  • [11] R. R. de Austri, R. Trotta and L. Roszkowski, A Markov chain Monte Carlo analysis of the CMSSM, JHEP 0605 (2006) 002 [arXiv:hep-ph/0602028].
  • [12] B. C. Allanach, C. G. Lester and A. M. Weber, The dark side of mSUGRA, JHEP 12 (2006) 065 [arXiv:hep-ph/0609295]
  • [13] B. C. Allanach, Naturalness priors and fits to the constrained minimal supersymmetric standard model, Phys. Lett. B 635 (2006) 123 [arXiv:hep-ph/0601089].
  • [14] L. Roszkowski, R. R. de Austri and R. Trotta, Implications for the Constrained MSSM from a new prediction for $b \to s\gamma$, JHEP 0707 (2007) 075 [arXiv:0705.2012].
  • [15] B. C. Allanach, C. G. Lester and A. M. Weber, Natural Priors, CMSSM Fits, and LHC Weather Forecasts, JHEP 08 (2007) 023 [arXiv:0705.0487].
  • [16] B. C. Allanach and D. Hooper, Panglossian Prospects for Detecting Neutralino Dark Matter in Light of Natural Priors, arXiv:0806.1923 [hep-ph].
  • [17] B. C. Allanach, M. J. Dolan and A. M. Weber, Global Fits of the Large Volume String Scenario to WMAP5 and Other Indirect Constraints Using Markov Chain Monte Carlo, arXiv:0806.1184 [hep-ph].
  • [18] F. Feroz and M. P. Hobson, Multimodal nested sampling: an efficient and robust alternative to MCMC methods for astronomical data analysis, Mon. Not. Roy. Astron. Soc. 384 (2008) 449 [arXiv:0704.3704].
  • [19] F. Feroz, M. P. Hobson and M. Bridges, MultiNest: an efficient and robust Bayesian inference tool for cosmology and particle physics, arXiv:0809.3437.
  • [20] S. Heinemeyer, X. Miao, S. Su and G. Weiglein, B-Physics Observables and Electroweak Precision Data in the CMSSM, mGMSB and mAMSB, [arXiv:0805.2359 [hep-ph]].
  • [21] C.H. Bennett, Efficient estimation of free energy differences from Monte Carlo data, Jnl. of Comp. Phys. 22 (1976) 245; A. Gelman and X.-Li Meng, Simulating normalizing constants: from importance sampling to bridge sampling to path sampling, Stat. Sci. 13 (1998) 163; R. M. Neal Estimating ratios of normalizing constants using Linked Importance Sampling, Technical Report No. 0511 (2005), Dept. of Statistics, University of Toronto
  • [22] C. Gordon and R. Trotta, Bayesian calibrated significance levels applied to the spectral tilt and hemispherical asymmetry, Mon. Not. Roy. Astron. Soc. 382 (2007) 4 [arXiv:0706.3014].
  • [23] R. T. Cox, Probability, frequency and reasonable expectation, Am. J. Phys. vol. 14, pp. 1-13, 1946.
  • [24] J. J. K. Ó Ruanaidh and W. J. Fitzgerald, Numerical Bayesian Methods Applied to Signal Processing, Springer-Verlag, New York (1996).
  • [25] M. P. Hobson, S. L. Bridle and O. Lahav, Combining cosmological data sets: hyperparameters and Bayesian evidence, Mon. Not. Roy. Astron. Soc. 335, 377 (2002) [arXiv:astro-ph/0203259].
  • [26] R. Trotta, Applications of Bayesian model selection to cosmological parameters, Mon. Not. Roy. Astron. Soc. 378 (2007) 72 [arXiv:astro-ph/0504022].
  • [27] B. C. Allanach and C. G. Lester, Sampling using a ‘bank’ of clues, arXiv:0705.0486 [hep-ph].
  • [28] A. R. Liddle, Information criteria for astrophysical model selection, Mon. Not. Roy. Astron. Soc. 377 (2007) L74 [arXiv:astro-ph/0701113].
  • [29] J. Skilling, Nested Sampling, in AIP Conference Proceedings of the 24th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Vol. 735, pp. 395-405 (2004), available from http://www.inference.phy.cam.ac.uk/bayesys/.
  • [30] R. Trotta, Bayes in the sky: Bayesian inference and model selection in cosmology, Invited review, Contemporary Physics, Vol. 49, No. 2, March-April (2008) [arXiv:0803.4089].
  • [31] R. D. Cousins, Comment on “Bayesian Analysis of Pentaquark Signals from CLAS Data”, with Response to the Reply by Ireland and Protopopescu, arXiv:0807.1330 [hep-ph].
  • [32] P. Marshall, N. Rajguru and A. Slosar, Bayesian evidence as a tool for comparing datasets, Phys. Rev. D 73 (2006) 067302.
  • [33] W.-M. Yao et al. [Particle Data Group], J. Phys. G 33 (2006) 1, and 2007 partial update for 2008.
  • [34] The Tevatron Electroweak Working Group, Combination of CDF and D0 results on the mass of the top quark, hep-ex/0703034
  • [35] The Tevatron Electroweak Working Group, Combination of CDF and D0 Results on the Mass of the Top Quark, arXiv:0808.1089 [hep-ex].
  • [36] J. P. Miller, E. de Rafael and B. L. Roberts, Muon (g-2): experiment and theory, Rept. Prog. Phys. 70 (2007) 795.
  • [37] The LEP Collaboration, Precision Electroweak Measurements and Constraints on the Standard Model, [arXiv:0712.0929].
  • [38] The code is forthcoming in a publication by A. M. Weber et al.; S. Heinemeyer, W. Hollik, D. Stöckinger, A. M. Weber and G. Weiglein, Precise prediction for M(W) in the MSSM, JHEP 08 (2006) 052, [arXiv:hep-ph/0604147].
  • [39] [The ALEPH, DELPHI, L3 and OPAL Collaborations, the LEP Electroweak Working Group], Precision Electroweak Measurements and Constraints on the Standard Model, [arXiv:0712.0929].
  • [40] F. Mahmoudi, SuperIso: A program for calculating the isospin asymmetry of B → K* γ in the MSSM, Comput. Phys. Commun. 178 (2008) 745, [arXiv:0710.2067].
  • [41] F. Mahmoudi, New constraints on supersymmetric models from b → s γ, JHEP 12 (2007) 026, [arXiv:0710.3791].
  • [42] available at http://www.slac.stanford.edu/xorg/hfag/rare/leppho07/radll/index.html
  • [43] UTfit Collaboration, The Unitarity Triangle Fit in the Standard Model and Hadronic Parameters from Lattice QCD: A Reappraisal after the Measurements of Δm_s and BR(B → τν_τ), JHEP 0610 (2006) 081, [arXiv:hep-ph/0606167].
  • [44] B. C. Allanach, SOFTSUSY: A program for calculating supersymmetric spectra, Comput. Phys. Commun. 143 (2002) 305, [arXiv:hep-ph/0104145].
  • [45] P. Skands et al., SUSY Les Houches accord: Interfacing SUSY spectrum calculators, decay packages, and event generators, JHEP 0407 (2004) 036, [arXiv:hep-ph/0311123].
  • [46] G. Bélanger, F. Boudjema, A. Pukhov and A. Semenov, micrOMEGAs: Version 1.3, Comput. Phys. Commun. 174 (2006) 577, [arXiv:hep-ph/0405253]; G. Bélanger, F. Boudjema, A. Pukhov and A. Semenov, micrOMEGAs: A program for calculating the relic density in the MSSM, Comput. Phys. Commun. 149 (2002) 103, [arXiv:hep-ph/0112278].
  • [47] G.W. Bennett et al. [Muon g-2 collaboration], Final report of the muon E821 anomalous magnetic moment measurement at BNL, Phys. Rev. D 73 (2006) 072003 [arXiv:hep-ex/0602035].
  • [48] M. Passera, Precise mass-dependent QED contributions to leptonic g-2 at order alpha**2 and alpha**3, Phys. Rev. D 75 (2007) 013002, [arXiv:hep-ph/0606174].
  • [49] A. Czarnecki, W.J. Marciano, and A. Vainshtein, Refinements in electroweak contributions to the muon anomalous magnetic moment, Phys. Rev. D 67 (2003) 073006, [arXiv:hep-ph/0212229]
  • [50] U. Chattopadhyay and P. Nath, Probing supergravity grand unification in the Brookhaven g-2 experiment, Phys. Rev. D 53 (1996) 1648, [arXiv:hep-ph/9507386].
  • [51] S. Heinemeyer, D. Stöckinger and G. Weiglein, Electroweak and supersymmetric two-loop corrections to (g-2)(mu), Nucl. Phys. B690 (2004) 103, [arXiv:hep-ph/0405255]; S. Heinemeyer, D. Stöckinger and G. Weiglein, Two-loop SUSY corrections to the anomalous magnetic moment of the muon, Nucl. Phys. B 690 (2004) 62, [arXiv:hep-ph/0312264].
  • [52] D. Stöckinger, The muon magnetic moment and supersymmetry, J.Phys. G34 (2007) R45-R92 [arXiv:hep-ph/0609168].
  • [53] M. Awramik, M. Czakon, A. Freitas, and G. Weiglein, Precise prediction for the W-boson mass in the standard model, Phys. Rev. D 69 (2004) 053006 [arXiv:hep-ph/0311148].
  • [54] J. Haestier, S. Heinemeyer, D. Stöckinger and G. Weiglein, Electroweak precision observables: Two-loop Yukawa corrections of supersymmetric particles, JHEP 0512 (2005) 027, [arXiv:hep-ph/0508139].
  • [55] E. Barberio et al. [Heavy Flavour Averaging Group], Averages of b-hadron Properties at the End of 2005, arXiv:hep-ex/0603003.
  • [56] M. Misiak and M. Steinhauser, NNLO QCD corrections to the B → X_s γ matrix elements using interpolation in m_c, Nucl. Phys. B 764 (2007) 62, [arXiv:hep-ph/0609241].
  • [57] M. Misiak et al., Estimate of B(B → X_s γ) at O(α_s²), Phys. Rev. Lett. 98 (2007) 022002 [arXiv:hep-ph/0609232].
  • [58] C.S. Lin, private communication.
  • [59] G. Isidori and P. Paradisi, Hints of large tan(beta) in flavour physics, Phys. Lett. B 639, 499 (2006) [arXiv:hep-ph/0605012].
  • [60] A. J. Buras, P. H. Chankowski, J. Rosiek and L. Slawianowska, ΔM_{d,s}, B⁰_{d,s} → μ⁺ μ⁻ and B → X_s γ in supersymmetry at large tan β, Nucl. Phys. B 659 (2003) 3 [arXiv:hep-ph/0210145].
  • [61] E. Komatsu et al., Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation, [arXiv:0803.0547].
  • [62] A. Arbey and F. Mahmoudi, SUSY constraints from relic density: high sensitivity to pre-BBN expansion rate, [arXiv:0803.0741].
  • [63] V. Barger, D. Marfatia and A. Mustafayev, Neutrino sector impacts SUSY dark matter, arXiv:0804.3601 [hep-ph].
  • [64] N. Baro, F. Boudjema and A. Semenov, Full one-loop corrections to the relic density in the MSSM: A few examples, Phys. Lett. B 660 (2008) 550, arXiv:0710.1821 [hep-ph].
  • [65] K. L. Chan, U. Chattopadhyay and P. Nath, Naturalness, weak scale supersymmetry and the prospect for the observation of supersymmetry at the Tevatron and at the LHC, Phys. Rev. D 58, 096004 (1998) [arXiv:hep-ph/9710473].
  • [66] D. Feldman, Z. Liu and P. Nath, Light Higgses at the Tevatron and at the LHC and Observable Dark Matter in SUGRA and D Branes, Phys. Lett. B 662, 190 (2008) [arXiv:0711.4591 [hep-ph]].
  • [67] J. R. Ellis, S. Heinemeyer, K. A. Olive and G. Weiglein, Phenomenological indications of the scale of supersymmetry, JHEP 0605 (2006) 005, [arXiv:hep-ph/0602220].
  • [68] B. C. Allanach, J. P. J. Hetherington, M. A. Parker and B. R. Webber, Naturalness reach of the Large Hadron Collider in minimal supergravity, JHEP 0008 (2000) 017 [arXiv:hep-ph/0005186].
  • [69] M. P. Hobson and C. McLachlan, A Bayesian approach to discrete object detection in astronomical datasets, Mon. Not. Roy. Astron. Soc. 338 (2003) 765 [astro-ph/0204457].
  • [70] P. J. Marshall, M. P. Hobson and A. Slosar, Bayesian joint analysis of cluster weak lensing and Sunyaev-Zel’dovich effect data, Mon. Not. Roy. Astron. Soc. 346 (2003) 489 [astro-ph/0307098].
  • [71] A. Slosar et al., Cosmological parameter estimation and Bayesian model comparison using VSA data, Mon. Not. Roy. Astron. Soc. 341 (2003) L29 [astro-ph/0212497].
  • [72] P. Mukherjee, D. Parkinson, A. R. Liddle, A nested sampling algorithm for Bayesian model selection, Astrophys. J., 638 (2006) L51 [astro-ph/0508461].
  • [73] B. A. Bassett, P. S. Corasaniti and M. Kunz, The essence of quintessence and the cost of compression, Astrophys. J. 617 (2004) L1 [astro-ph/0407364].
  • [74] R. Trotta, Applications of Bayesian model selection to cosmological parameters, Mon. Not. Roy. Astron. Soc. 378 (2007) 72 [astro-ph/0504022].
  • [75] M. Beltran, J. Garcia-Bellido, J. Lesgourgues, A. Liddle, A. Slosar, Bayesian model selection and isocurvature perturbations, Phys. Rev. D, 71 (2005) 063532 [astro-ph/0501477].
  • [76] M. Bridges, A. N. Lasenby, M. P. Hobson, A Bayesian analysis of the primordial power spectrum, Mon. Not. Roy. Astron. Soc. 369 (2006) 1123 [astro-ph/0511573].
  • [77] R. Trotta, The isocurvature fraction after WMAP 3-years data, Mon. Not. Roy. Astron. Soc. 375, L26-L30 (2007) [arXiv:astro-ph/0608116].