A Profile Likelihood Analysis of the Constrained MSSM with Genetic Algorithms
Abstract:
The Constrained Minimal Supersymmetric Standard Model (CMSSM) is one of the simplest and most widely-studied supersymmetric extensions to the Standard Model of particle physics. Nevertheless, current data do not sufficiently constrain the model parameters in a way completely independent of priors, statistical measures and scanning techniques. We present a new technique for scanning supersymmetric parameter spaces, optimised for frequentist profile likelihood analyses and based on Genetic Algorithms. We apply this technique to the CMSSM, taking into account existing collider and cosmological data in our global fit. We compare our method to the MultiNest algorithm, an efficient Bayesian technique, paying particular attention to the best-fit points and implications for particle masses at the LHC and dark matter searches. Our global best-fit point lies in the focus point region. We find many high-likelihood points in both the stau coannihilation and focus point regions, including a previously neglected section of the coannihilation region at large . We show that there are many high-likelihood points in the CMSSM parameter space commonly missed by existing scanning techniques, especially at high masses. This has a significant influence on the derived confidence regions for parameters and observables, and can dramatically change the entire statistical inference of such scans.
1 Introduction
New physics beyond the Standard Model (SM) is broadly conjectured to appear at TeV energy scales. Particular attention has been paid to supersymmetric (SUSY) extensions of the SM, widely hoped to show up at the Large Hadron Collider (LHC). One of the strongest motivations for physics at the new scale is the absence of any SM mechanism for protecting the Higgs mass against radiative corrections; this is known as the hierarchy or fine-tuning problem [1]. Softly-broken weak-scale supersymmetry (for an introduction, see Ref. 2) provides a natural solution to this problem via the cancellation of quadratic divergences in radiative corrections to the Higgs mass. The natural connection between SUSY and grand unified theories (GUTs) also offers extensive scope for achieving gauge-coupling unification in this framework [3]. Supersymmetry has even turned out to be a natural component of many string theories, so it may be worth incorporating into extensions of the SM anyway (though in these models it is not at all necessary for SUSY to be detectable at low energies).
Another major theoretical motivation for supersymmetry is that most weak-scale versions contain a viable dark matter (DM) candidate [4]. Its stability is typically achieved via a conserved discrete symmetry (R-parity) which arises naturally in some GUTs, and makes the lightest supersymmetric particle (LSP) stable. Its ‘darkness’ is achieved by having the LSP be a neutral particle, such as the lightest neutralino or the sneutrino. These are both weakly-interacting massive particles (WIMPs), making them prime dark matter candidates [4]. The sneutrino is strongly constrained by its large nuclear-scattering cross-section, but the neutralino remains arguably the leading candidate for DM.
Describing the low-energy behaviour of a supersymmetric model typically requires adding many new parameters to the SM, which makes phenomenological analyses highly complicated. Even upgrading the SM to its most minimal SUSY form, the Minimal Supersymmetric Standard Model (MSSM; for a recent review, see Ref. 5), introduces more than a hundred free parameters, all but one of which come from the soft terms in the SUSY-breaking sector. Fortunately, extensive regions of the full MSSM parameter space are ruled out phenomenologically, as generic values of many of the new parameters allow flavour-changing neutral currents (FCNCs) or CP violation at levels excluded by experiment.
One might be able to relate many of these seemingly-free parameters theoretically, dramatically reducing their number. This requires specifying either the underlying SUSY-breaking mechanism itself, or a mediation mechanism by which SUSY breaking would be communicated from some undiscovered particle sector to the known particle spectrum and its SUSY counterparts. Several mediation mechanisms (for recent reviews, see e.g. Refs. 5 and 6) have been proposed which relate the MSSM parameters in very different ways, but so far no clear preference has been established for one mechanism over another. For a comparison of some mediation models using current data, see Ref. 7. Gravity-mediated SUSY breaking, based on supergravity unification, naturally leads to the suppression of many of the dangerous FCNC and CP-violating terms. Its simplest version is known as minimal supergravity (mSUGRA) [8, 9].
An alternative approach is to directly specify a phenomenological MSSM reduction at low energy. Here one sets troublesome CP-violating and FCNC-generating terms to zero by hand, and further reduces the number of parameters by assuming high degrees of symmetry in e.g. the mass and mixing matrices.
A hybrid approach is to construct a phenomenological GUT-scale model, broadly motivated by the connection between SUSY and GUTs. Here one imposes boundary conditions at the GUT scale (∼10^16 GeV) and then explores the low-energy phenomenology by means of the Renormalisation Group Equations (RGEs). One of the most popular schemes is the Constrained MSSM (CMSSM) [10], which incorporates the phenomenologically-interesting parts of mSUGRA. The CMSSM includes four continuous parameters: the ratio of the two Higgs vacuum expectation values (tan β), and the GUT-scale values of the SUSY-breaking scalar, gaugino and trilinear mass parameters (m0, m1/2 and A0). The sign of μ (the MSSM Higgs/higgsino mass parameter) makes for one additional discrete parameter; its magnitude is determined by requiring that SUSY breaking induces electroweak symmetry breaking. Although it greatly curbs the range of possible phenomenological consequences, the small number of parameters has made the CMSSM a tractable way to explore basic low-energy SUSY phenomenology.
Before drawing conclusions about model selection or parameter values from experimental data, one must choose a statistical framework to work in. There are two very different fundamental interpretations of probability, resulting in two approaches to statistics (for a detailed discussion, see e.g. Ref. 11). Frequentism deals with relative frequencies, interpreting probability as the fraction of times an outcome would occur if a measurement were repeated an infinite number of times. Bayesianism deals with subjective probabilities assigned to different hypotheses, interpreting probability as a measure of the degree of belief that a hypothesis is true. The former is used for assigning statistical errors to measurements, whilst the latter can be used to quantify both statistical and systematic uncertainties. In the Bayesian approach one is interested in the probability of a set of model parameters given some data, whereas in the frequentist approach the only quantity one can discuss is the probability of some dataset given a specific set of model parameters, i.e. a likelihood function.
In a frequentist framework, one simply maps a model’s likelihood as a function of the model parameters. The point with the highest likelihood is the best fit, and uncertainties on the parameter values can be given by e.g. iso-likelihood contours in the model parameter space. To obtain joint confidence intervals on a subset of parameters, the full likelihood is reduced to a lower-dimensional function by maximising it along the unwanted directions in the parameter space. This is the profile likelihood procedure [12, and references therein]. In the Bayesian picture [13], probabilities are directly assigned to different volumes in the parameter space. One must therefore also consider the state of subjective knowledge about the relative probabilities of different parameter values, independent of the actual data; this is the prior. In this case, the statistical measure is not the likelihood itself, but a prior-weighted likelihood known as the posterior probability density function (PDF). Because this posterior is nothing but the joint PDF of all the parameters, constraints on a subset of model parameters are obtained by marginalising (i.e. integrating) it over the unwanted parameters. This marginalised posterior is the Bayesian counterpart of the profile likelihood. Bayesians report the posterior mean as the most-favoured point (given by the expectation values of the parameters according to the marginalised posterior), with uncertainties defined by surfaces containing set percentages of the total marginalised posterior, or ‘probability mass’.
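The practical difference between profiling and marginalising can be seen in a toy example (the two-component Gaussian likelihood below is purely illustrative and unrelated to the CMSSM fits discussed here): a narrow, fine-tuned spike dominates the profile likelihood, while a broad bump carrying most of the probability mass dominates the marginal posterior.

```python
import numpy as np

# Toy 2D likelihood L(x, y): a very narrow spike plus a broad bump,
# evaluated on a grid. x is the parameter of interest; y is 'unwanted'.
x = np.linspace(-5.0, 5.0, 201)
y = np.linspace(-5.0, 5.0, 201)
X, Y = np.meshgrid(x, y, indexing="ij")
L = (np.exp(-((X - 2.0)**2 + (Y - 2.0)**2) / 0.02)        # fine-tuned spike at (2, 2)
     + 0.5 * np.exp(-(X**2 + Y**2) / 2.0))                # broad bump at (0, 0)

# Frequentist: profile likelihood -- maximise over the unwanted direction y.
profile = L.max(axis=1)

# Bayesian (flat prior in y): marginal posterior -- integrate over y.
marginal = L.sum(axis=1) * (y[1] - y[0])

# The spike wins the profile (best fit near x = 2) but carries little
# probability mass, so the marginal instead peaks near x = 0.
print(x[profile.argmax()], x[marginal.argmax()])
```

This is exactly the situation described in the text: an isolated, fine-tuned likelihood peak can dominate the frequentist inference while being almost invisible in the Bayesian one.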
One practically interesting consequence of including priors is that they are a powerful tool for estimating how robust a fit is. If the posterior is strongly dependent on the prior, the data are not sufficient to constrain the model parameters. It has been shown that the prior still plays a large role in Bayesian inferences in the CMSSM [14]. If an actual detection occurs at the LHC, this dependency should disappear [44].
Clearly the results of frequentist and Bayesian inferences will not coincide in general. This is especially true if the model likelihood has a complex dependence on the parameters (i.e. not just a simple Gaussian form), or if insufficient experimental data are available. Note that this holds even if the prior is taken to be flat; a flat prior in one parameter basis is certainly not flat in every such basis. In the large-sample limit both approaches give similar results, because the likelihood and the posterior PDF both become almost Gaussian; this is one reason both are commonly used in scientific data analysis.
The first CMSSM parameter scans were performed on fixed grids in parameter space [15, 16]. Predictions of e.g. the relic density of the neutralino as a cold dark matter candidate, or the Higgs and superpartner masses, were computed for each point on the grid and compared with experimental data. In these earliest papers, points for which the predicted quantities lay within an arbitrary confidence level (e.g. 1σ, 2σ) of the measurements were deemed “good”. Because all accepted points are considered equivalently good, this method provides no way to determine points’ relative goodnesses-of-fit, and precludes any deeper statistical interpretation of the results.
The first attempts at statistical interpretation were to perform frequentist χ² analyses with grid scans [17]. Limited random scans were also performed in some of these cases. Despite some advantages of grid scans, their main drawback is that the number of points sampled in an n-dimensional space, with k points for each parameter, grows as k^n, making the method extremely inefficient. This is true even for spaces of moderate dimension like the CMSSM. The lack of efficiency is mainly due to the complexity and highly nonlinear nature of the mapping from the CMSSM parameters to physical observables; many important features of the parameter space can be missed if the grid resolution is not high enough.
Another class of techniques that has become popular in SUSY analyses is based on more sophisticated scanning algorithms. These are designed around the Bayesian requirement that a probability surface be mapped in such a way that the density of the resultant points is proportional to the actual probability; however, the points they return can also be used in frequentist analyses. Foremost amongst these techniques is the Markov Chain Monte Carlo (MCMC) method [18, 19, 20, 21, 23, 24, 26, 25, 22, 27, 29, 30, 31, 33, 32, 34, 35, 37, 36, 28], which has also been widely used in other branches of science, in particular cosmological data analysis [38]. The MCMC method provides greatly improved scanning efficiency in comparison to traditional grid searches, scaling roughly linearly with the dimensionality n of the parameter space instead of as k^n. More recently, the framework of nested sampling [39] has come to prominence, particularly via the publicly-available implementation MultiNest [40]. A handful of recent papers have explored the CMSSM parameter space or its observables using this technique [14, 32, 41, 42, 43, 44, 45, 46], as well as the higher-dimensional spaces of the Constrained Next-to-MSSM (CNMSSM) [47] and phenomenological MSSM (pMSSM) [48]. MultiNest was also the technique of choice in the supersymmetry-breaking study of Ref. 7. A CMSSM scan with MultiNest takes roughly a factor of 200 less computational effort than a full MCMC scan, whilst results obtained with the two algorithms are identical (up to numerical noise) [14].
Besides improved computational efficiency, MCMCs and nested sampling offer other convenient features for both frequentist and Bayesian analyses. In a fully-defined statistical framework, all significant sources of uncertainty can be included, including theoretical uncertainties and our imperfect knowledge of the relevant SM parameters. These can be introduced as additional ‘nuisance’ parameters in the scans, and the resultant profile likelihoods and posterior PDFs profiled or marginalised over them. In a similar way, one can profile or marginalise over all parameters at once to make straightforward statistical inferences about any arbitrary function of the model parameters, such as neutralino annihilation fluxes [27, 32, 46] or cross-sections [45]. The profiling/marginalisation takes almost no additional computational effort: profiling simply requires finding the sample with the highest likelihood in a list, whereas marginalisation, given the design of MCMCs and nested sampling, merely requires tallying the number of samples in the list. Finally, it is straightforward to take all priors into account when using these techniques for Bayesian analyses.
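As a sketch of how little extra work this takes, consider a hypothetical list of samples from a chain whose density already traces the posterior (all names and numbers below are invented for illustration, not actual scan output):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical chain: sample density traces the posterior (as MCMC and
# nested sampling provide), plus each sample's log-likelihood value.
m0 = rng.normal(1500.0, 300.0, 100_000)         # e.g. a scalar mass in GeV
m0 = m0[(m0 >= 0.0) & (m0 < 4000.0)]
loglike = -0.5 * ((m0 - 1500.0) / 300.0) ** 2   # toy log-likelihood

bins = np.linspace(0.0, 4000.0, 41)
idx = np.digitize(m0, bins) - 1                  # bin index per sample

# Profiling: keep only the single best-fit sample in each bin...
profile = np.array([loglike[idx == b].max() if (idx == b).any() else -np.inf
                    for b in range(40)])
# ...while marginalising merely tallies the number of samples per bin.
marginal = np.array([(idx == b).sum() for b in range(40)])
```

Both one-dimensional summaries come from the same list of samples; only the per-bin reduction (max versus count) differs.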
Although the prior-dependence of Bayesian inference can be useful for determining the robustness of a fit, it may be considered undesirable when trying to draw concrete conclusions from the fitting procedure. This is because the prior is a subjective quantity, and most researchers intuitively prefer their conclusions not to depend on subjective assessments. In this case, the natural preference is to rely on a profile likelihood analysis rather than one based on the posterior PDF. The question then becomes: how does one effectively and efficiently explore a parameter space like that of the CMSSM, with its many finely-tuned regions, in the context of the profile likelihood?
As a first attempt to answer this question, the profile likelihood of the CMSSM was recently mapped with MCMCs [25] and MultiNest [14]. Despite improving on the efficiency of grid searches by several orders of magnitude, these Bayesian methods are not optimised to look for isolated points with large likelihoods. They are thus very likely to entirely miss high-likelihood regions occupying very tiny volumes in the parameter space. Such regions can have a strong impact on the final results of a profile likelihood scan (missing fine-tuned regions could even modify the posterior PDF, if they are numerous or large enough), since the profile likelihood is normalised to the best-fit point and places all regions, no matter how small, on an equal footing. It appears that in the case of the CMSSM there are many such fine-tuned regions. This is seen in e.g. CMSSM profile likelihood maps obtained with different MultiNest scanning priors [14]. Given that the profile likelihood is independent of the prior by definition, these results demonstrate that many high-likelihood regions are missed when using a scanning algorithm optimised for Bayesian statistics. In order to make valid statistical inferences in the context of the profile likelihood, the first (and perhaps most crucial) step is to correctly locate the best-fit points. Setting confidence limits and describing other statistical characteristics of the parameter space makes sense only if this first step is performed correctly.
If one wishes to work confidently in a frequentist framework, some alternative scanning method is clearly needed. The method should be optimised for calculating the profile likelihood, rather than the Bayesian evidence or posterior. Even if the results obtained with such a method turn out to be consistent with those of MCMCs and nested sampling, the exercise would greatly increase the utility and trustworthiness of those techniques.
In this paper, we employ a particular class of optimisation techniques known as Genetic Algorithms (GAs) to scan the CMSSM parameter space, performing a global frequentist fit to current collider and cosmological data. GAs and other evolutionary algorithms have not yet been widely used in high energy physics or cosmology; to our knowledge, their only prior use in SUSY phenomenology [49] has been for purposes completely different to ours (see e.g. Refs. 50, 51, 52 for their use in high energy physics, Ref. 53 for uses in cosmology and general relativity, and Ref. 54 for applications to nuclear physics). There are two main reasons GAs should perform well in profile likelihood scans. Firstly, the sole design purpose of GAs is to maximise or minimise a function. This is exactly what is required by the profile likelihood; in the absence of any need to marginalise (i.e. integrate) over dimensions in the parameter space, it is more important that a search algorithm finds the best-fit peaks than that it accurately maps the likelihood surface at lower elevations. Secondly, the ability of GAs to probe global extrema excels most clearly over other techniques when the parameter space is very large, complex or poorly understood; this is precisely the situation for SUSY models. We focus exclusively on the CMSSM as our testbed model, but the algorithms could easily be employed in higher-dimensional SUSY parameter spaces without considerable change in the scanning efficiency (as they scale roughly linearly with the dimensionality of the parameter space). We compare our profile likelihood results directly with those of the MultiNest scanning algorithm. This means that we also compare indirectly with MCMC scans, because MultiNest and MCMCs give essentially identical results [14]. We find that GAs uncover many better-fit points than previous MultiNest scans.
This paper proceeds as follows: in Sec. 2 we briefly review the parameters of the CMSSM, the predicted observables, and the constraints and bounds on them from collider and cosmological observations. We then introduce GAs as our specific scanning technique of choice. We present and discuss results in Sec. 3, comparing those obtained with GAs to those produced by MultiNest; these include the best-fit points and the highest-likelihood regions of the parameter space, as well as the implications for particle discovery at the LHC and in direct and indirect dark matter searches. We draw conclusions and comment on future prospects in Sec. 4.
2 Model and analysis
2.1 CMSSM likelihood
Our goal is to compare the results of a profile likelihood analysis of the CMSSM performed with GAs to those obtained using Bayesian scanning techniques, in particular MultiNest. For this reason, we work with the same set of free parameters and ranges, the same observables (measurable physical quantities predicted by the model) and the same constraints (the observed values of observables, as well as physicality requirements) as in Ref. 14. We also perform the theoretical calculations and construct the full likelihood function of the model based on these variables and data in the same way as in Ref. 14. We limit ourselves to just a brief review of these quantities and constraints here. For a detailed discussion, the reader is referred to previous papers [21, 14, 24].
2.1.1 Parameters and ranges
As pointed out in Sec. 1, the four new continuous parameters m0, m1/2, A0 and tan β are the main free parameters of the model to be fit to the data. There is also a new discrete parameter, the sign of μ, which we fix to be positive. (The choice of positive μ is motivated largely by constraints on the CMSSM from the anomalous magnetic moment of the muon; the branching fraction BR(b → sγ) actually prefers negative μ, see, for example, Ref. 41 and the references therein.) Positive μ was apparently chosen in earlier work [14] to stress the seeming inconsistency between the SM predictions for the muon anomalous magnetic moment and experimental data. We employ the same fixed sign in the present work for consistency, although it is of course possible to leave this as a free discrete parameter in any global fit.
We add four additional nuisance parameters to the set of free parameters in our scans. These are the SM parameters with the largest uncertainties and strongest impacts upon CMSSM predictions: M_t, the pole top quark mass; m_b(m_b), the bottom quark mass evaluated at m_b; α_em(M_Z), the electromagnetic coupling constant evaluated at the Z boson pole mass M_Z; and α_s(M_Z), the strong coupling constant, also evaluated at M_Z. The last three quantities are computed in the modified minimal subtraction (MS-bar) renormalisation scheme.
The set of CMSSM and SM parameters together constitutes an 8-dimensional parameter space

θ = (m0, m1/2, A0, tan β; M_t, m_b(m_b), α_em(M_Z), α_s(M_Z))    (1)
to be scanned and constrained according to the available experimental data. The ranges over which we scan the parameters are , , , , , and .
2.1.2 Constraints: physicality, observables, and uncertainties
In order to compare the predictions of each point in the parameter space with data, one must first derive quantities which are experimentally measurable. For a global fit, one should take into account all existing (and upcoming) data, such as collider and cosmological observations, and direct and indirect dark matter detection experiments; this is indeed the ultimate goal in any attempt to constrain a specific theoretical model like the CMSSM. Because our main goal in this paper is to assess how powerful GAs are compared to conventional methods (in particular the MultiNest algorithm), we restrict our analysis to the same set of observables and constraints as in the comparison paper [14]. These include the collider limits on Higgs and superpartner masses, electroweak precision measurements, B-physics quantities and the cosmologically-measured abundance of dark matter. These quantities and constraints are given in Table 1.
The observables are of three types:

- The SM nuisance parameters. Although these are considered free parameters of the model along with the CMSSM parameters, they are rather well-constrained by the data, and are therefore also used in constructing the full likelihood function.

- Observables for which positive measurements have been made. These are the W boson pole mass (M_W), the effective leptonic weak mixing angle (sin²θ_eff), the anomalous magnetic moment of the muon (δa_μ), the branching fraction BR(b → sγ), the B_s mass difference (ΔM_Bs), the branching fraction BR(B_u → τν), and the dark matter relic density (Ω_χ h²), assuming the neutralino is the only constituent of dark matter.

- Observables for which at the moment only (upper or lower) limits exist, i.e. the branching fraction BR(B_s → μ⁺μ⁻), the lightest MSSM Higgs boson mass (assuming its coupling to the Z boson is SM-like), and the superpartner masses.
Our sources for these experimental data are indicated in Table 1. Note that there are no theoretical uncertainties associated with the SM parameters, because they are simultaneously observables and free input parameters. For details of these quantities, their experimental values and errors, and particularly the reasoning behind the theoretical uncertainties, see Refs. 14, 21 and 24.
Observable  Mean value  Experimental std. dev.  Theoretical std. dev.  Reference  
SM nuisance parameters  
  [55]  
  [56]  
  [56]  
  [57]  
Measured  
M_W  [58]  
sin²θ_eff  [58]  
δa_μ × 10^10  29.5  8.8  1.0  [59]  
BR(b → sγ) × 10^4  3.55  0.26  0.21  [60]  
ΔM_Bs  [61]  
BR(B_u → τν)  [60]  
Ω_χ h²  0.1099  0.0062  [62]  
Limits only (95% CL)  
BR(B_s → μ⁺μ⁻)  14%  [63]  
m_h (SM-like Higgs)  [64]  
(see Ref. 21)  negligible  [64]  
5%  [65]  
()  5%  [66] ([67, 68])  
()  5%  [66] ([67, 68])  
()  5%  [66] ([67, 68])  
()  5%  [66] ([67, 68])  
()  5%  [69] ([67, 68])  
()  5%  [66] ([67, 68])  
()  5%  [66] ([67, 68])  
5%  [56]  
5%  [56] 
In order to calculate all observables and likelihoods for different points in the CMSSM parameter space, we have used SuperBayeS 1.35 [70], a publicly available package which combines SoftSusy [71], DarkSusy [72], FeynHiggs [73], Bdecay and MicrOMEGAs [74] in a statistically consistent way. The public version of the package offers three different scanning algorithms: MCMCs, MultiNest and fixed-grid scanning; we have modified the code to also include GAs. Other global-fit packages are also available: Fittino [36], for MCMC scans of the CMSSM; SFitter [28], for MCMC scans of the CMSSM and the weak-scale MSSM; and Gfitter [75], for Standard Model fits to electroweak precision data (SUSY fits will soon be included as well). Amongst other search strategies, Gfitter can make use of GAs.
In SuperBayeS, the likelihoods of observables for which positive measurements exist (the upper and central panels of Table 1) are modelled by a multi-dimensional Gaussian function. The variance of this Gaussian is given by the sum of the experimental and theoretical variances associated with each observable; the corresponding standard deviations are shown in Table 1. For observables where only upper or lower limits exist (the lower panel of Table 1), a smeared step-function likelihood is used, constructed to take into account the estimated theoretical errors in calculating the predicted values of the observables. For details of the exact mathematical forms of these likelihood functions, see Ref. 21.
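The two likelihood forms can be sketched as follows. This is an illustrative reconstruction, not the SuperBayeS source; in particular, the error-function smearing of the limit is one common choice, and the exact form used in the package is given in Ref. 21.

```python
import math

def gaussian_loglike(pred, obs, sigma_exp, sigma_th):
    """Log-likelihood for a positively measured observable: a Gaussian whose
    variance is the sum of the experimental and theoretical variances."""
    var = sigma_exp**2 + sigma_th**2
    return -0.5 * (pred - obs)**2 / var - 0.5 * math.log(2.0 * math.pi * var)

def smeared_lower_limit_loglike(pred, limit, sigma_th):
    """Log-likelihood for a lower limit (e.g. a sparticle mass bound): a step
    function smeared by the theoretical uncertainty on the prediction.
    Illustrative error-function smearing; see Ref. 21 for the exact form."""
    z = (pred - limit) / sigma_th
    # Gaussian CDF; the tiny offset avoids log(0) far below the limit.
    return math.log(0.5 * (1.0 + math.erf(z / math.sqrt(2.0))) + 1e-300)
```

Well above the limit the smeared step contributes no penalty; well below it the point is effectively excluded, with the transition width set by the theoretical uncertainty.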
In addition to experimental constraints from collider and cosmological observations, one must also ensure that each point is physically self-consistent; points that are not should be discarded or assigned a very low likelihood value. Unphysical points are those where no self-consistent solutions to the RGEs exist, the conditions of electroweak symmetry breaking are not satisfied, one or more masses become tachyonic, or the theoretical assumption that the neutralino is the LSP is violated. SuperBayeS handles this by assigning an extremely small (essentially zero) likelihood to points that do not fulfil the physicality conditions.
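Schematically, this amounts to a simple wrapper around the likelihood evaluation (the function names below are hypothetical placeholders, not SuperBayeS routines):

```python
LOG_ZERO = -1e30  # effectively zero likelihood, assigned to unphysical points

def total_loglike(theta, is_physical, loglike_fn):
    """Return the log-likelihood of a parameter point, or an extremely small
    value if the point is unphysical (no RGE solution, no electroweak symmetry
    breaking, tachyonic masses, or a non-neutralino LSP). `is_physical` and
    `loglike_fn` stand in for the spectrum calculator and the full likelihood."""
    if not is_physical(theta):
        return LOG_ZERO
    return loglike_fn(theta)
```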
2.2 Genetic Algorithms and profile likelihoods of the CMSSM
GAs [76, 77, 78] are a class of adaptive heuristic search techniques that draw on the evolutionary ideas of natural selection and survival of the fittest to solve optimisation problems. According to these principles, individuals in a breeding population which are better adapted to their environment generate more offspring than others.
GAs were invented in the early 1970s, primarily by John Holland and colleagues, for solving optimisation problems [79], although the idea of evolutionary computing had been introduced as early as the 1950s. Holland also introduced a formal mathematical framework, known as Holland’s Schema Theorem, which is commonly considered to be the theoretical explanation for the success of GAs. Now, decades after their invention, GAs have amply demonstrated their practical usefulness (and robustness) in a variety of complex optimisation problems in computational science, economics, medicine and engineering [77].
The idea is very simple: in general, solving a problem means nothing more than finding the one solution most compatible with the conditions of the problem amongst many candidate solutions, in an efficient way. In most cases, the quality of different candidate solutions can be formulated in terms of a mathematical ‘fitness function’, to be maximised by some algorithm employed to solve the problem. With a GA, one repeatedly modifies a population of individual candidate solutions in such a way that after several iterations, the fittest point in the population evolves towards an optimal solution to the problem.
These iterative modifications are designed to imitate the reproductive behaviour of living organisms. At each stage, some individuals are selected randomly or semirandomly from the current population to be parents. These parents produce children, which become the next generation of candidate solutions. If parents with larger fitness values have more chance to recombine and produce children, over successive generations the best individual in the population should approach an optimal solution.
Because GAs rely only on fitness values at each point, and are insensitive to the function’s gradients, they can also be applied to problems for which the fitness function has many discontinuities, or is stochastic, highly nonlinear, non-differentiable, or possesses any other special features which make the optimisation process extremely difficult. GAs are generally prescribed when traditional optimisation algorithms either fail entirely or give substandard results. These properties make GAs ideal for our particular problem, as the CMSSM likelihood has exactly these unruly properties.
In what follows, we describe the general algorithmic strategy employed in a GA, followed by our particular implementation for a profile likelihood analysis of the CMSSM.
2.2.1 Main strategy
Denote the full model likelihood by L(θ), where θ is the set of free parameters introduced in Eq. 1. This function, as a natural proxy for the goodness-of-fit given a fixed number of degrees of freedom, indicates how fit each particular θ, or individual, is. It is thus a good choice for the genetic fitness function. We now want to find a specific individual, say θ_best, for which the fitness function is globally maximised.
Consider a population of N candidate individuals θ_i (i = 1, …, N). Denote this entire population by P. The population is operated on G times by a semi-random genetic operator O to produce a series of new populations P_1, …, P_G, where P_g = O(P_{g-1}) (g = 1, …, G) and P_0 = P. The ith individual in the gth generation is denoted θ_{g,i}. For the general fitness of individuals to improve from one generation to the next, O must clearly depend upon L.
The search must be initialised with some starting population P_0, which is then evolved under the action of O until some convergence criterion is met. At this stage, the algorithm returns its best estimate of θ_best as the individual at which L is maximised. If O and G have been chosen appropriately, this should occur in the last generation, i.e. at g = G. The algorithm can be summarised as a simple loop: initialise P_0; for g = 1, …, G apply O to obtain P_g; finally, return the fittest individual encountered.
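This loop can be sketched in a few lines of code (a generic skeleton with illustrative names, not the PIKAIA implementation used later):

```python
import random

def genetic_algorithm(fitness, initial_population, n_generations, operator):
    """Skeleton GA: repeatedly apply the genetic operator to evolve the
    population, tracking the fittest individual ever encountered."""
    population = initial_population
    best = max(population, key=fitness)
    for _ in range(n_generations):
        population = operator(population, fitness)
        champion = max(population, key=fitness)
        if fitness(champion) > fitness(best):
            best = champion
    return best

# Toy usage: maximise f(x) = -(x - 3)^2 with a crude keep-the-fittest-half,
# mutate-the-survivors operator (purely illustrative).
def toy_operator(pop, fitness):
    parents = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
    children = [p + random.gauss(0.0, 0.1) for p in parents]
    return parents + children

random.seed(42)
best = genetic_algorithm(lambda x: -(x - 3.0) ** 2,
                         [random.uniform(-10.0, 10.0) for _ in range(20)],
                         200, toy_operator)
```

After 200 generations the best individual sits close to the true maximum at x = 3.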
Three main properties define a GA. Firstly, the genetic operator acts on a population of points rather than on a single individual. This makes GAs rather different from conventional Monte Carlo search techniques such as MCMCs, though the nested sampling algorithm does also act on a population of points. The parallelism of a GA means that if appropriate measures of population diversity are incorporated into the algorithm, local maxima in the likelihood surface can be dealt with quite effectively: if the population is required to maintain a certain level of diversity, concentrations of individuals clustering too strongly around local maxima will be avoided by the remaining members of the population. This parallelism also increases the convergence rate of the algorithm remarkably.
Secondly, the operator does not act directly on the real values of the parameters (the phenotype), but on their encoded versions (the chromosomes, or genotype). Depending on the problem, individuals can be encoded as strings of binary or decimal digits, or even as more complex data structures. The operator then acts on the chromosomes in the current generation based only on their fitnesses, i.e. the likelihood function in our case. No further information is required for the GA to work; this means discontinuities in the likelihood function or its derivatives have virtually no effect on the performance of the algorithm.
Finally, the transition rules used in O are probabilistic, not deterministic. The constituent genetic operators contained within O are the key elements of the algorithm, and how they act on different populations defines different types of GAs. In our case the overall operator is the composition

O = R ∘ M ∘ X ∘ S,    (2)

where S is the selection procedure (how parents are selected for breeding from a source population), X is the crossover (how offspring inherit properties from their parents), M is the mutation process (random changes made to the properties of newly-created offspring), and R is the reproduction scheme used to place offspring into a broader population. X and M are stochastic processes defined at the genotype level, whereas S and R are phenotype-level processes, semi-random and essentially deterministic in nature, respectively.
The randomised operators of GAs are quite distinct from simple random walks, because every new generation of individuals inherits some desirable characteristics from the present generation. Crossover (or recombination) rules play a crucial role in determining how the parents create the children of the next generation. The children are not copied directly to the next population; mutation rules specify that a certain degree of random modification should be applied to the newly-produced offspring's chromosomes. Mutation is very important at this stage, since it is often the only mechanism preventing the algorithm from getting stuck in local maxima; its strength is typically linked dynamically to some measure of population diversity.
The reproduction loop is terminated whenever the chosen termination criterion is fulfilled; in our case, whenever a predefined number of generations has been produced. The termination parameter (i.e. the number of generations) and the chosen termination criterion itself depend upon the particular problem at hand and how accurate a solution is required. The fittest individual in the final population is then accepted as the best-fit solution to the problem. If one is also interested in mapping the likelihood function in the vicinity of the best-fit points, so as to be able to plot e.g. 1σ and 2σ confidence regions for different CMSSM parameters, then it is useful to also retain all the individuals produced during the iterations of the algorithm.
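The generic loop just described can be sketched in a few lines. This is a minimal illustration only, not the PIKAIA implementation used in the paper; the selection scheme, mutation rate, population size and generation count below are placeholder choices:

```python
import random

def evolve(fitness, n_params, pop_size=100, n_generations=50):
    """Minimal genetic-algorithm loop: selection, crossover, mutation,
    full generational replacement, and fixed-generation termination."""
    # Random initial population of normalised parameter vectors in [0, 1]
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    history = []  # retain every evaluated individual to map the likelihood
    for _ in range(n_generations):
        scored = sorted(pop, key=fitness, reverse=True)
        history.extend(scored)
        new_pop = []
        while len(new_pop) < pop_size:
            # Rank-biased parent selection (placeholder for a roulette wheel)
            p1, p2 = random.sample(scored[:pop_size // 2], 2)
            # One-point crossover at a random position
            cut = random.randrange(1, n_params)
            child = p1[:cut] + p2[cut:]
            # Uniform mutation with a fixed small rate (placeholder value)
            child = [random.random() if random.random() < 0.01 else g
                     for g in child]
            new_pop.append(child)
        pop = new_pop
    best = max(history, key=fitness)
    return best, history
```

Retaining `history` is what later allows confidence regions to be drawn from all evaluated points, rather than from the final population alone.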
2.2.2 Our specific implementation
Although any algorithm with the basic features described above ensures progressive improvement over successive generations, to guarantee absolute maximisation one usually needs to implement additional strategies and techniques, depending on the particular problem at hand. To implement a GA for analysing the CMSSM parameter space, we have taken advantage of the public GA package PIKAIA [80, 81]. Here we briefly describe the options and ingredients from PIKAIA 1.2 that we used in our GA implementation; the majority of these were the default choices.
Fitness function: A natural fitness function to choose is the log-likelihood of the model, ln L. This function is however always negative (or zero). Since PIKAIA internally seeks to maximise a function, and this function must be positive definite, we chose the inverse chi-square (i.e. 1/χ²) as the simplest appropriate fitness function. Except for the way we adjust the mutation rate for different generational iterations (see below), all the other genetic operators implemented in our algorithm are functions only of the ranking of individuals in the population; the actual values of the fitness function at these points do not matter as long as the ranking is preserved.
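A sketch of this choice (illustrative, not the paper's actual code; the example chi-square values are arbitrary):

```python
def fitness_from_chisq(chisq):
    """Map a chi-square value to a positive-definite fitness.

    PIKAIA maximises its objective, and since chi-square is monotonically
    related to -ln(likelihood), maximising 1/chi-square is equivalent to
    maximising the likelihood as long as rankings are preserved.
    (A small floor on chisq would guard against division by zero.)
    """
    return 1.0 / chisq

# Ranking, not absolute fitness, is what most of the operators use:
points = [9.35, 11.34, 13.51]  # example chi-square values
ranked = sorted(points, key=fitness_from_chisq, reverse=True)
# the lowest chi-square ranks first
```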
Encoding: We encode individuals in the population (i.e. points in the CMSSM parameter space) using a decimal alphabet: that is, as a string of base-10 integers, such that every normalised parameter is encoded into a string of digits between 0 and 9. This is different from many other public-domain GAs, which usually make use of binary encoding. We use 5 digits to encode each parameter, which means that every individual chromosome is a decimal string of length 40.
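The decimal encoding can be sketched as follows, assuming (as an illustration) that each parameter has already been normalised to the unit interval and stays strictly below 1:

```python
def encode(params, n_digits=5):
    """Encode normalised parameters (each in [0, 1)) as one decimal string."""
    chrom = ""
    for x in params:
        chrom += str(round(x * 10 ** n_digits)).zfill(n_digits)
    return chrom

def decode(chrom, n_params, n_digits=5):
    """Recover the normalised parameters from a decimal chromosome."""
    return [int(chrom[i * n_digits:(i + 1) * n_digits]) / 10 ** n_digits
            for i in range(n_params)]

chrom = encode([0.12345, 0.5, 0.99999, 0.0])
# a 20-digit decimal string for 4 illustrative parameters
```

With 5 digits per parameter, the genetic operators act on digit strings while the phenotype (the real parameter values) is recovered only when the fitness is evaluated.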
Initialisation and population size: We use completely random points in the parameter space for the initial population. We choose a population size typical of GA applications, and keep it fixed throughout the entire evolutionary process.
Selection: In order to select parents in any given iteration, a stochastic mechanism is used. The probability of an individual being selected for breeding is determined by its fitness in the following way: first, we assign to each individual i a rank r_i based on its fitness, such that r = 1 corresponds to the fittest individual and r = n (the population size) to the most unfit. A ranking fitness f_i is then defined in terms of this rank, decreasing from the fittest individual to the most unfit. The sum of all ranking fitness values in the population is computed as F = Σ_{i=1}^{n} f_i, and running sums are defined as F_k = Σ_{i=1}^{k} f_i. Obviously F_1 < F_2 < … < F_n (since the f_i are all positive), and F_n = F. As the next step, a random number R ∈ (0, F] is generated and the element k is located for which F_{k−1} < R ≤ F_k. The corresponding individual is one of the parents selected for breeding; the other one is also chosen in the same manner. This selection procedure is called the Roulette Wheel Algorithm (see Ref. 80 and references therein for more on this procedure and the motivation for using the ranking fitness in place of the true fitness).
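The Roulette Wheel step can be sketched as below. The linear ranking fitness f_i = n − r_i + 1 used here is one common choice, adopted purely as an assumption for illustration; it is not necessarily the exact form used in PIKAIA:

```python
import random

def roulette_select(population, chisq):
    """Select one parent by rank-based roulette wheel.

    population : list of individuals
    chisq      : parallel list of chi-square values (lower is fitter)
    """
    n = len(population)
    # Rank 1 = fittest (lowest chi-square), rank n = most unfit
    order = sorted(range(n), key=lambda i: chisq[i])
    rank = {idx: r + 1 for r, idx in enumerate(order)}
    # Linear ranking fitness (assumed form): f_i = n - r_i + 1, all positive
    f = [n - rank[i] + 1 for i in range(n)]
    total = sum(f)                  # F = sum_i f_i
    r = random.uniform(0.0, total)  # R in (0, F]
    running = 0.0
    for i, fi in enumerate(f):      # locate k with F_{k-1} < R <= F_k
        running += fi
        if r <= running:
            return population[i]
    return population[-1]           # numerical safety net
```

Using ranks rather than raw fitness values keeps the selection pressure stable even when the absolute chi-square values span many orders of magnitude.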
Crossover: When a pair of parent chromosomes have been chosen, a pair of offspring are produced via the crossover operator. We use two different types of operators for this purpose, namely uniform one-point and two-point crossovers (see Ref. 82 for a comprehensive review of existing crossover operators). Table 2 illustrates how these two processes work. In our case, parents are encoded as 40-digit strings. The one-point crossover begins by randomly selecting a cutting point along the chromosomes' length and dividing each parent string into two substrings; this is done by generating a random integer between 1 and the string length. According to the crossover scheme, the substrings located in identical positions in the two strings are then swapped to give the two child chromosomes. It is clear that even though the information encoded in the parent chromosomes is transferred to the offspring chromosomes, the latter are in general different from the former, corresponding to a different set of model parameters in the parameter space. In the two-point crossover scheme, two slicing points are selected randomly by generating two random integers, and the string segments between these two cutting points are exchanged.
uniform one-point crossover
  initial parent chromosomes           6739…8451     4394…0570
  selecting a random cutting point     6739…84|51    4394…05|70
  swapping the substrings              6739…8470     4394…0551
  final offspring                      6739…8470     4394…0551

uniform two-point crossover
  initial parent chromosomes           6739…8451     4394…0570
  selecting two random cutting points  67|39…845|1   43|94…057|0
  swapping the substrings              6794…0571     4339…8450
  final offspring                      6794…0571     4339…8450
In our algorithm, for each pair of parent chromosomes, either one-point or two-point crossover is chosen with equal probability. This combination of the one-point and two-point crossovers is designed to avoid the so-called "end-point bias" problem produced by using only the one-point crossover. This occurs when, for example, a combination of two substrings situated at opposite ends of a parent string is advantageous when decoded back to the phenotypic level (in the sense that it gives a high fitness), but only when both are expressed simultaneously. Such a feature is impossible to pass on to offspring when using a uniform one-point crossover, as cutting the string at any single point always destroys this combination. This is much less of a problem for substrings located more centrally in the parent string (see Ref. 80 again for more details on this issue).
Once two parents have been selected for breeding, the crossover operation is applied only with a preset probability. This probability is usually taken to be large, but not equal to 1. The value we use in our analysis is slightly below unity, meaning that there is a small chance that any given breeding pair will be passed on intact to the next stage, where they will be affected by mutation.
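On the decimal chromosomes, the two crossover schemes amount to substring swaps; a sketch follows (the crossover probability of 0.85 is an illustrative placeholder, since the value used in the analysis is a tuning choice):

```python
import random

def one_point_crossover(p1, p2):
    """Swap everything after a single random cut point."""
    cut = random.randrange(1, len(p1))  # 1 <= cut <= L-1
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def two_point_crossover(p1, p2):
    """Swap the segment between two random cut points."""
    a, b = sorted(random.sample(range(1, len(p1)), 2))
    return (p1[:a] + p2[a:b] + p1[b:],
            p2[:a] + p1[a:b] + p2[b:])

def breed(p1, p2, p_cross=0.85):
    """Apply one- or two-point crossover with equal probability, or pass
    the parents through intact with probability 1 - p_cross.
    (p_cross = 0.85 is an illustrative placeholder.)"""
    if random.random() > p_cross:
        return p1, p2
    op = random.choice([one_point_crossover, two_point_crossover])
    return op(p1, p2)
```

Note that both operators only rearrange existing digits between the two parents; new genetic material enters the population solely through mutation.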
Mutation: We employ the so-called uniform one-point mutation operator. Different genes in the offspring's chromosomes (i.e. decimal digits in the 40-digit strings) are replaced, with a predefined probability (the 'mutation rate'), by a random integer in the interval [0, 9]. Like the crossover operator, mutation preserves the parameter bounds. The choice of mutation rate is highly important, and very problem-dependent; in general it cannot be chosen a priori. If too large, it can destroy a potentially excellent offspring and, in the extreme case, make the whole algorithm behave effectively as an entirely random search. If too small, it can endanger the variability in the population and cause the whole population to become trapped in local maxima; a large enough mutation rate is often the only mechanism for escaping premature convergence at local maxima. Therefore, instead of using a fixed mutation rate, we allow it to vary dynamically throughout the run, such that the degree of 'biodiversity' is monitored and the mutation rate is adjusted accordingly. When the majority of individuals in a population (as estimated by the median individual) are very similar to the best individual, the population is probably clustered around a local maximum and the mutation rate should increase. The converse is also true: a high degree of diversity indicates that the mutation rate should be kept low. The degree of clustering, and the subsequent amount of adjustment in our algorithm, are assessed based on the difference between the actual fitness values of the best and median points. This is done by defining a quantity Δ in terms of the fitnesses of the best and median individuals. The mutation rate is then increased (decreased) by a fixed multiplicative factor whenever Δ is smaller (larger) than a preset lower (upper) bound.
We bound the mutation rate to lie between typical minimum and maximum values, and begin the run with a moderate initial mutation rate.
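The mutation step and the diversity-based rate adjustment can be sketched as follows; all numerical values below (critical values of the diversity measure, adjustment factor, rate bounds) are illustrative placeholders rather than the values used in the analysis:

```python
import random

def mutate(chrom, rate):
    """Uniform one-point mutation: each decimal gene is replaced by a
    random digit in [0, 9] with probability `rate`."""
    return "".join(str(random.randrange(10)) if random.random() < rate else g
                   for g in chrom)

def adjust_rate(rate, f_best, f_median,
                delta_lo=0.05, delta_hi=0.25, factor=1.5,
                rate_min=0.0005, rate_max=0.25):
    """Adapt the mutation rate to the population's diversity.

    A small best-median fitness difference signals clustering around a
    (possibly local) maximum, so the rate is increased; a large difference
    signals healthy diversity, so the rate is decreased. All numerical
    values here are placeholders; f_best is assumed non-zero.
    """
    delta = abs(f_best - f_median) / abs(f_best)  # normalised difference
    if delta < delta_lo:
        rate *= factor   # population clustered: mutate harder
    elif delta > delta_hi:
        rate /= factor   # population diverse: mutate less
    return min(max(rate, rate_min), rate_max)
```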
Reproduction plans: After selecting parents and producing offspring by acting on them with the crossover and mutation operators, the newly-bred individuals must somehow be incorporated into the population. The strategy that controls this process is called a reproduction plan. Although there are several advanced reproduction plans on the market, we utilise the simplest off-the-shelf option: full generational replacement. This means that the whole parent population is replaced by the newly-created children in each iteration, all in a single step.
Elitism: There is always a possibility that the fittest individual is not passed on to the next generation, since it may be destroyed under the action of the crossover and mutation operations on the parents. To guarantee survival of this individual, we use an elitism feature in our reproduction plan. Under full generational replacement, elitism simply means that the fittest individual in the offspring population is replaced by the fittest parent, if the latter has a higher fitness value.
Termination and number of generations: There are several termination criteria one could use for a GA. Rather than evolving the population generation after generation until some tolerance criterion is met, we perform the evolution over a fixed and predetermined number of generations. This is because the former strategy is claimed to be potentially dangerous when approaching a new problem, in view of the usual convergence trends exhibited by GA-based optimisers (see Ref. 80 for more details). In our analysis, we used 10 separate runs of the algorithm with different initial random seeds, and a fixed number of likelihood evaluations for each. We then compiled all points into a single list, and used it to map the profile likelihood of the CMSSM.
3 Results and discussion
We now present and analyse the results of our global profile likelihood fit of the CMSSM using GAs. We compare these results with a similar global fit using the state-of-the-art Bayesian algorithm MultiNest [40]. In Sec. 3.1 we give the best-fit points and high-likelihood regions in the CMSSM parameter space. In Sec. 3.2 we discuss and compare the implications of both methods for the detection of supersymmetric and Higgs particles at the LHC. Sec. 3.3 is devoted to an investigation of dark matter in the CMSSM and the prospects for its direct and indirect detection. Throughout this section we compare our GA profile likelihood results mainly with those of the MultiNest algorithm implemented with linear (flat) priors. The reasons for this are outlined in Sec. 3.4, along with a comparison of the two scanning techniques in terms of computational speed and convergence.
3.1 Bestfit points and highlikelihood regions
The χ² at our best-fit point is 9.35. This is surprisingly better than the values of 13.51 and 11.90 found by MultiNest with linear and logarithmic priors, respectively, for the same problem. This improvement in the best fit can in principle have a drastic impact on the statistical inference drawn about the model parameters.
To demonstrate these effects, let us start with two-dimensional (2D) plots for the principal parameters of the CMSSM, i.e. m0, m1/2, A0 and tanβ, shown in Figs. 1 and 2. These figures show 2D profile likelihood maps. In the first figure, the full likelihood is maximised over all free (CMSSM plus SM nuisance) parameters except m0 and m1/2. Similar diagrams are shown in Fig. 2, but now for the 2D profile likelihoods in terms of the parameters A0 and tanβ.
Figs. 1a and 2a show the 2D profile likelihood maps obtained by taking into account all the sample points in the parameter space resulting from the GA scan. The inner and outer contours indicate the 1σ (68.3%) and 2σ (95.4%) confidence regions based on the GA best-fit point of χ² = 9.35: points with Δχ² ≤ 2.30 fall into the 1σ region, and points with Δχ² ≤ 6.18 fall into the 2σ region. Similar plots are presented for the MultiNest results in Figs. 1d and 2d, where the 1σ and 2σ contours are drawn based on the MultiNest best-fit of χ² = 13.51 (the 1σ and 2σ regions are again given by Δχ² ≤ 2.30 and Δχ² ≤ 6.18, respectively, relative to this value). These panels reflect the outcomes of each scanning algorithm in the absence of any information from the other. The sample points have been divided into bins in all plots and no smoothing is applied.
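The construction of such maps from a compiled list of samples can be sketched as follows (a minimal illustration; the Δχ² thresholds 2.30 and 6.18 are the standard 68.3% and 95.4% values for two degrees of freedom):

```python
import math

def profile_2d(samples, xi, yi, x_range, y_range, n_bins=50):
    """Bin samples in two chosen parameters, keeping the minimum
    chi-square (maximum likelihood) over all other parameters per bin.

    samples : list of (params, chisq) pairs, with params a tuple
    xi, yi  : indices of the two parameters to keep
    """
    grid = [[math.inf] * n_bins for _ in range(n_bins)]
    (x0, x1), (y0, y1) = x_range, y_range
    for params, chisq in samples:
        bx = min(int((params[xi] - x0) / (x1 - x0) * n_bins), n_bins - 1)
        by = min(int((params[yi] - y0) / (y1 - y0) * n_bins), n_bins - 1)
        if 0 <= bx < n_bins and 0 <= by < n_bins:
            grid[by][bx] = min(grid[by][bx], chisq)  # profile out the rest
    return grid

def confidence_label(chisq, chisq_best):
    """Classify a bin by delta chi-square relative to the best fit
    (2-dof thresholds: 2.30 for 1 sigma, 6.18 for 2 sigma)."""
    d = chisq - chisq_best
    return "1sigma" if d <= 2.30 else ("2sigma" if d <= 6.18 else "outside")
```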
It is important to realise that although both the 1σ and 2σ confidence regions in the GA results seem to be rather smaller in size than the corresponding regions in the MultiNest scan (especially the 2σ region), this is by no means an indication that MultiNest has found more high-likelihood points in the parameter space. The situation is in fact the exact opposite. This is clear when we recall that the best-fit likelihood values are very different in the two cases, giving rise to two completely different sets of iso-likelihood contours. To make this point clear, suppose for the sake of argument that the GA best-fit likelihood value is indeed the absolute global maximum we were looking for. If so, we can now check how well the MultiNest algorithm has sampled the parameter space by looking at Figs. 1b and 2b. These show how many of the MultiNest samples are located in the correct confidence regions set by the absolute maximum; the contours are drawn based on this best-fit value rather than the one found by MultiNest itself. The plots show that MultiNest has discovered no points in the 1σ region and only a small fraction of the 2σ region. In particular, it is interesting to notice that the MultiNest best-fit point, i.e. the one with χ² = 13.51 (marked as dotted triangles in Figs. 1b,d and 2b,d), now sits in the 2σ region. This all comes from the fact that only points with χ² ≤ 11.65 and χ² ≤ 15.53 fall in the 1σ and 2σ regions, respectively, and there are not many points found by MultiNest with such low χ² values. The same statement holds for the log-prior MultiNest best-fit point with χ² = 11.90.
It is obvious from these plots that the GA has found many points in the parameter space with rather high likelihood (or, equivalently, low χ²) which were missed (or skipped) by MultiNest. This indicates that the use of MultiNest scans in the context of the frequentist profile likelihood is rather questionable. This is not really surprising, given that MultiNest is designed to sample the Bayesian posterior PDF, not to map the profile likelihood.
We can also use the resultant GA samples in a different way to clarify this result. In Figs. 1c and 2c, the GA samples are plotted with the same contours as in Figs. 1d and 2d, i.e. based on the MultiNest best-fit likelihood value. Compared to 1d and 2d, we see that there are many high-likelihood points in the MultiNest 2σ region found by the GA and missed by MultiNest. In the sense of the profile likelihood, it appears that MultiNest has converged prematurely; we see a much larger and more uniform pseudo-confidence region with the GA data. Here we see that most of the region labelled as being within the 2σ confidence level in the MultiNest scan is actually part of its 1σ confidence region.
Our results confirm the complexity of the CMSSM parameter space, showing that much care should be taken in making any statistical statement about it. This is especially true when using a frequentist approach, as this complication plays a crucial role in the final conclusions. It is of course true that the convergence criterion for MultiNest is defined on the basis of the Bayesian evidence, and the algorithm may have (indeed, probably has) converged properly in this context. The point we want to emphasise is that even if MultiNest has converged for a Bayesian posterior PDF analysis of the model, this convergence is far from acceptable for a profile likelihood analysis. The same is also very likely to be true of other, less sophisticated Bayesian methods such as the MCMC; this is the case at least for MCMC scans performed with the same physics and likelihood calculations as in our analysis (since MCMCs and MultiNest give almost identical results in this case [14]).
The aforementioned comparison also suggests, however, that even the Bayesian posterior PDF obtained from MultiNest and MCMC scans might not yet be properly mapped, because in principle a significant amount of probability mass could be contained in the regions found by the GA but missed or skipped by MultiNest. Given the absence of any definition of a measure on the parameter space in profile likelihood analyses such as the one we perform, we are unfortunately not in a position to make any conclusive statement about the actual contribution of these regions to the Bayesian probability mass. Nevertheless, the difference in size between the blue regions in Figs. 1c and 1d is intriguing. We discuss these convergence questions further in Sec. 3.4.
Our GA scan has found high-likelihood points in many of the CMSSM regions known to be consistent with data, in particular with the relic abundance of dark matter [83]. These include the stau coannihilation (COA) region [84], usually at small m0, where the lightest stau is close in mass to the neutralino; the focus point (FP) region [85] at large m0, where a large Higgsino component causes neutralino annihilation into gauge boson pairs; and the light Higgs boson funnel region [15, 86] at small m1/2. We have not found any high-likelihood points in the stop coannihilation region [87, 88, 89] at large negative A0, where the lightest stop is close in mass to the neutralino. This could be interpreted as confirmation of the claim that this region, although compatible with the WMAP constraint on the relic density of dark matter, is highly disfavoured when other observables are also taken into account [19].
It is important to make the point that although our method does find some points in the funnel region, it does not spread out very well around those points to map the whole region. Finding this very fine-tuned region is a known challenge for any scanning strategy, including nested sampling. The failure of the GA to map other points in the funnel region can be understood: we believe that this behaviour is caused by the specific crossover scheme employed in our analysis (i.e. one/two-point crossover), and could probably be cured by using a more advanced algorithm. Alternatively, a different parameterisation of the model, such as a logarithmic scaling of the mass parameters (or equivalently, genetic encoding in terms of the logarithms of these parameters), would probably find the funnel region much more effectively (in the same way as it does when MultiNest is implemented with logarithmic priors). In any case, it is important to realise that these types of regions are findable by our method (although not mapped very well), even without any ad hoc change in the model parameterisation (or the choice of a non-linear prior, such as the logarithmic one, in the Bayesian language). See Sec. 3.4 for further discussion of priors and parameterisation.
Returning to the best-fit points, it is visible from the plots that the global best-fit point is located in the FP region (dotted circles in Figs. 1a,c and 2a,c). This has a very interesting phenomenological implication, to which we return later in this section when we discuss contributions to the total likelihood of the best-fit model from different observables. For comparison, we have also marked the best-fit point located in the COA region, which has χ² = 11.34 (dotted squares in Figs. 1a,c and 2a,c). This point is situated inside the 1σ confidence level contour (Figs. 1a and 2a) and is well-favoured by our analysis. It is interesting to notice that the χ² for this point, although worse than the global best-fit value of 9.35, is still better than the best value found by the MultiNest scan, even when implemented with a log prior (χ² = 11.90), which also corresponds to a point in the COA region. This result is important, as MultiNest scans with logarithmic priors are usually considered a good way to probe low-mass regions such as the COA. Our algorithm, even working with effectively linear priors (because the genome featured a linear encoding of the parameters), appears to have found a better point in this region as well.
As another exhibition of the consequences of our results compared to the Bayesian nested sampling technique, it is interesting to look at the 1D profile likelihoods for the CMSSM parameters (Fig. 3). The horizontal axes in the plots indicate the CMSSM parameters and the vertical axes show the corresponding profile likelihoods, normalised to the best-fit GA value. Green and grey bars show the GA and MultiNest 1D profile likelihoods, respectively; points were sorted into bins for these plots. We have also included the global (FP) and COA best-fit points in the plots, indicated by solid and dashed red lines, respectively.
The very different results in Fig. 3 from the two algorithms are yet another confirmation that existing sampling techniques are probably still not sufficiently reliable for the exploration and mapping of SUSY likelihoods, at least in a frequentist framework. These plots show that by employing a different scanning algorithm, it is quite possible to find many important new points in the parameter space. This can in principle affect the whole inference about the model, especially when we are interested not only in drawing a general statement about the high-likelihood regions, but also in performing a more detailed exploration of the model likelihood around the best-fit points. It also shows that the GA technology we have made use of in this paper seems a better choice for frequentist analyses than conventional tools, which are typically optimised for Bayesian searches; we have found higher likelihood values in almost all regions within the interesting range of model parameters. Whilst this certainly favours this technique over others, it should also serve as a warning: we can by no means be sure that the GA has actually found the true global best-fit point. Clearly this concern should be taken even more seriously when dealing with models more complicated than the CMSSM, with more parameters and more complex theoretical structures.
Listed in Tables 3 and 4 are all properties of the two GA best-fit points. The upper part of the first table gives the values of the CMSSM principal parameters and SM nuisance parameters, whereas the lower part shows all physical observables employed in our calculation of the model likelihood. These are the same quantities as given in Table 1, except for the Higgs and sparticle masses, which are presented in the upcoming section on implications for the LHC. To make the differences between the properties of these two "good" points clearer, in the second table we have indicated the individual contributions from different observables to the total χ² at each point. These quantities are also given for the MultiNest best-fit points found using flat and logarithmic priors.
model (+nuisance) parameters
              GA global BFP (located in FP region)  GA COA BFP

observables
              GA global BFP (located in FP region)  GA COA BFP
              5.9        14.5
              3.58       2.97
              0.10949    0.10985
One interesting fact seen in Table 4 is the apparent tension between a_μ (the anomalous magnetic moment of the muon) and the other observables, in particular BR(B → X_s γ). This has been widely discussed in the past [31, 37, 14, 42]. While most of the discrepancy between the model and the experimental data at our global best-fit point (living in the FP region) comes from a_μ, and BR(B → X_s γ) contributes only a small amount to the total χ², these two observables partially switch roles at the COA point. This confirms that in general BR(B → X_s γ) favours large masses (the FP region), while a_μ favours smaller masses (the COA region). A similar feature is also visible in the two MultiNest best-fit points for flat and log priors, as they reside in the FP and COA regions, respectively.
partial χ² (fractional contribution to the total χ², in %)

observable           GA global BFP (located in FP region)  GA COA BFP    MN global BFP (flat priors)  MN global BFP (log priors)
nuisance parameters
sparticles
all                  9.35 (100 %)   11.34 (100 %)   13.51 (100 %)   11.90 (100 %)
In the case where our best-fit point is placed in the FP region, the total χ² from all observables except a_μ is a remarkably smaller fraction of the total than in the COA region (this is also the case if we compare the two MultiNest points in the table). This can be qualitatively interpreted as yet another reflection of the fact that in the absence of any constraint from a_μ, the data are largely consistent with the global best-fit point being in the FP region. That is, if one ignores a_μ, it is much easier to fulfil all the experimental constraints on the CMSSM by moving towards larger masses. If one wants to also satisfy the extra constraint coming from a_μ, this might be possible by moving back towards lower masses (the COA region), but at the price of reducing the total likelihood.
It is important to stress that our global best-fit point is in fact part of the FP region, at high mass (i.e. large m0). This means that even taking into account the constraint from a_μ, the FP is still favoured over the COA region in our analysis, in clear contradiction with some recent claims [31, 37] that the latter is favoured by existing data. These analyses were performed in a frequentist framework, and based on MCMC scans. However, the large discrepancy with our findings probably comes more from differences in the likelihood functions themselves than from the scanning algorithms, i.e. in the calculations of physical observables and their contributions to the likelihood. A direct comparison with these works would only be possible if we were also to work with exactly the same routines for the calculation of the likelihood as in Refs. 31 and 37, changing only the scanning algorithm (as we have done here in comparing with Ref. 14). The difference we see in this case mostly reflects the discrepancy between the results of Ref. 14 and those of Refs. 31 and 37.
Nonetheless, it is important to note that some differences could be due to the scanning technique. We have shown in this paper that, at least for the specific physics setup implemented in SuperBayeS, GAs find better-fit points than nested sampling, which in turn is known to find essentially the same points as MCMCs. It is therefore quite reasonable to expect that GAs could find many points missed by MCMCs. There are even other FP points found by the GA at still larger masses, located within the confidence contours (see Fig. 1a), supporting the conclusion that although low masses were favoured over high masses in the previous MultiNest and MCMC scans using SuperBayeS, the opposite holds in our GA scans. This means that there exist many high-likelihood points in the FP region entirely missed by the MultiNest and MCMC scans performed in the SuperBayeS analyses. It seems that those algorithms do not sample this region of the parameter space very well, at least when SuperBayeS routines are used for the physics and likelihood calculations. We see no reason why a similar situation could not also occur when different codes are used to evaluate the likelihood function.
Since we have not used exactly the same physics and likelihood setup, nor the same numerical routines for calculating different quantities as employed in Refs. 31 and 37 (and we cannot do that in a consistent way as the code employed in those studies is not publicly available), we cannot make a definitive statement as to the overall impact of the scanning algorithm in the discrepancy we see with their results. One should however be very cautious in general when attempting to draw strong conclusions about e.g. the FP being excluded by existing data. The complex structure of the CMSSM parameter space makes the corresponding likelihood surface very sensitive to small changes in the codes and experimental data used to construct the full likelihood, which in turn can introduce a significant dependence upon the scanning algorithm.
3.2 Implications for the LHC
We showed in the previous section that, compared to the state-of-the-art Bayesian algorithm MultiNest, GAs are a very powerful tool for finding high-likelihood points in the CMSSM parameter space. It is therefore interesting to examine how strongly these results impact predictions for future experimental tests at, e.g., the LHC. We have calculated the 1D profile likelihoods corresponding to the gluino mass (as a popular representative of the sparticles) and the lightest Higgs boson mass, both of which will be searched for at the LHC. The resultant plots are given in Fig. 4. These plots are generated in the same way as those in Fig. 3, indicating the differences between the two scanning strategies. Here we once again see that the GA has found much better fits in the mass ranges covered by MultiNest.
Looking first at the gluino mass prediction (left-hand plot of Fig. 4), we confirm earlier findings [14, 37] that the LHC will probe all likely CMSSM gluino masses, provided its reach in the gluino mass extends far enough. The FP and COA best-fit points are both located at relatively low masses, with quite similar values. These values are well within the reach of even the early operations of the LHC. The detailed CMSSM mass spectra computed at both of these points are presented in Table 5. It can be seen from these spectra that our global best-fit point favours rather high masses for the sfermions, while the COA best-fit point favours low masses.
(GeV)  GA global BFP (located in FP region)  GA COA BFP    (GeV)  GA global BFP (located in FP region)  GA COA BFP
1908  294.1  1994  798.1  
1903  202  2000  832.4  
1907  294.1  1994  798.1  
1901  201.9  1354  765  
1100  160.1  1492  793.4  
1560  289.2  140.4  152.6  
1906  283.3  269.9  285.4  
1905  283.3  519.7  451.1  
1560  272.5  529.7  469.6  
1998  826.1  270.4  286.9  
1996  805.4  530.3  468.5  
1998  826.1  115.55  111.11  
1996  805.4  179.93  504.24  
1194  672.8  179.83  504.04  
1364  803  201.14  510.67  
2001  832.4  877.1  898.8 
Looking at the right-hand plot of Fig. 4, corresponding to the likelihood of different values of the Higgs mass, we notice that although a large number of good points have Higgs masses higher than the SM limit from the Large Electron-Positron Collider (LEP; i.e. 114.4 GeV), including the global best-fit point (with a Higgs mass of 115.55 GeV), there are also many other important ones which violate this limit, including the best-fit COA point (with 111.11 GeV). These points with low-mass Higgs bosons have been allowed by the smoothed likelihood function that we employed for the LEP limit (cf. Sec. 2.1). Instead of this treatment of the Higgs sector, one could use a more sophisticated method, such as that implemented by HiggsBounds [90]. This would apply the collider bounds on the Higgs mass in a SUSY-appropriate manner, and give more accurate likelihoods at low masses around the bound.
Looking again at Table 4, the contribution from the Higgs mass as a percentage of the total χ² is considerably larger in the COA case than in the FP case. This becomes clear when we compare the corresponding values of the Higgs mass (111.11 and 115.55 GeV, respectively) with the LEP limit. The lower Higgs mass in the COA region is a reflection of the correlation between the Higgs mass and the soft SUSY-breaking masses in the CMSSM, confirming once more that moving to low masses (i.e. approaching the COA region) causes models to become less compatible with all experimental data except a_μ.
3.3 Implications for dark matter searches
As a natural continuation of our discussion of the consequences of our results for present and upcoming experiments, we turn now to dark matter, beginning with direct detection (DD) experiments. One interesting quantity for these experiments is the spin-independent scattering cross-section of the neutralino and a proton. This cross-section is often plotted against the neutralino mass when comparing limits from different direct detection experiments. Predictions are given in this plane from both the MultiNest and GA scans in Fig. 5, drawn similarly to Figs. 1 and 2. is shown in units of pb (i.e. ). Contours shown in the upper (lower) panels are generated according to the GA (MultiNest) best-fit point, and the points with highest likelihoods are marked as before. Although no constraints from direct detection measurements have been used in forming the model likelihood in this paper (mainly in order to work with the same set of quantities and constraints as employed in Ref. 14), we have also included the current best DD limits for comparison. These are limits at the confidence level from CDMS-II [91] and XENON10 [92].
Looking at Fig. 5, we first notice that all the conclusions we drew earlier are reconfirmed here: the GAs have uncovered many important points in the parameter space, with a strong impact on the statistical conclusions. For example, instead of the rather spread-out and sparse confidence region produced by MultiNest (Fig. 5d), the GAs (Fig. 5a) reveal a more compact region, sharply peaked around the best-fit points. It is interesting to see that in the latter case, most of the FP region around the dotted circle, including the point itself, is already excluded by CDMS-II and XENON10 under standard halo assumptions. The global best-fit point has quite a large cross-section () compared to the MultiNest global best-fit point (), making it much more easily probed by direct detection (Table 6). By contrast, the best-fit COA point has a much lower (), and is still well below these experimental limits. With future experiments planned to reach cross-sections as low as , this point will eventually be tested as well. Even if we do not consider the highest-likelihood point found by the GA, and just compare the two lower panels in Fig. 5, MultiNest has obviously explored only a small fraction of its high-likelihood region in the parameter space. It has also missed most of its and points in the region . This is a particularly interesting area, being within the reach of current dark matter DD experiments.
In Table 6, we also give the calculated values for the spin-dependent scattering cross-sections of the neutralino with a proton () and a neutron (), for both the FP and COA best-fit points.
direct detection:  GA global BFP  GA BFP in COA region  
indirect detection:  GA global BFP  GA BFP in COA region  
Considering implications for indirect detection (ID) of dark matter, one is often interested in a plot very similar to the one presented for the DD case, but now for the velocity-averaged neutralino self-annihilation cross-section (instead of the scattering cross-section of the previous case), again versus the neutralino mass . Such plots are shown in Fig. 6 for all the different cases, in the same style as Fig. 5. Their general characteristics closely resemble those we enumerated for the DD case.
We first notice the strong correlation between the DD and ID plots, such as the generic similarities between the corresponding high-likelihood regions and the best-fit points. One interesting feature visible in these plots is the existence of a new and rather large high-likelihood region, spanned by the approximate ranges of and , and almost completely missed by MultiNest. To our knowledge, this region has not been reported by any other Bayesian or frequentist analysis. Further investigation shows that these points are located in the stau coannihilation region, but with very high (up to ). The very existence of such a high-mass COA region compatible with all data indicates again that one should be very careful in concluding that low masses in the parameter space are favoured over high masses by existing data. This point was emphasised earlier, in Sec. 3.1, with regard to the high-mass, high-likelihood points in the FP region.
Looking again at the high-likelihood regions and the best-fit points in Fig. 6, it is interesting to ask how likely these different points and regions are to be tested by current and upcoming ID instruments. For example, depending on how much its sensitivity improves in future years, it might be possible for the Large Area Telescope (LAT) [93] aboard the Fermi gamma-ray space telescope to cover part of the high-likelihood FP region, including the global best-fit point with (Table 6). It is in fact expected from pre-launch estimates [94] that the instrument will cover some fraction of the parameter space below , depending on the neutralino mass. A detailed MultiNest global fit of the CMSSM parameter space using 9 months of Fermi data has already been performed [45], using the dwarf spheroidal galaxy Segue 1 as a target; the resulting constraints are already quite close to becoming interesting. A similar analysis could also be performed using GAs. The other best-fit (COA) point, however, has , which is well below what is realistically detectable by Fermi.
Finally, we have shown in Fig. 7 the 1D profile likelihood for the neutralino mass . The plot is produced in the same way as Figs. 3 and 4, and compares the results of both GA and MultiNest scans. It is again important to notice the higher-likelihood points found by the GA almost everywhere in the interesting mass range. We also observe that both the FP and COA best-fit points have quite similar and low neutralino masses (), similar to what was seen for in Fig. 4.
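A 1D profile likelihood of this kind is built directly from the combined set of samples: bin the quantity of interest and keep, in each bin, the sample with the smallest chi-squared. The sketch below is our own illustration of the procedure; the function and parameter names are not from the paper.

```python
import numpy as np

def profile_1d(x, chi2, bins=50, x_range=None):
    """1D profile likelihood from a set of samples: in each bin of x,
    keep the sample with the smallest chi^2 (highest likelihood)."""
    x, chi2 = np.asarray(x, float), np.asarray(chi2, float)
    lo, hi = x_range if x_range is not None else (x.min(), x.max())
    edges = np.linspace(lo, hi, bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    prof = np.full(bins, np.inf)
    np.minimum.at(prof, idx, chi2)               # unbuffered per-bin minimum
    # Normalise to the best-fit point: L/L_max = exp(-(chi2 - chi2_min)/2)
    like = np.exp(-(prof - prof.min()) / 2.0)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, like
```

Note that this operation depends only on the best point found in each bin, which is why a scanner that misses isolated high-likelihood points distorts the whole profile, not just the best-fit value.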
3.4 Technical comparison with nested sampling
3.4.1 Dependence on priors and parameterisation
Throughout the previous sections, we compared our GA results mostly with those of the linear-prior MultiNest scan of the CMSSM, especially when discussing the resultant 1D and 2D profile likelihoods. We mentioned several times that MultiNest implemented with log priors gives a better value of for the best-fit , compared to the best fit when implemented with linear priors (13.51). It is true that one can achieve better fits in certain regions of the parameter space by utilising e.g. a logarithmic prior in the search algorithm (see e.g. Ref. 14 and references therein for a discussion of the effects of priors on best-fit points and high-likelihood regions, as well as Bayesian posterior means and high-probability regions). However, there are good reasons not to use the log-prior MultiNest results for the main performance comparisons in this paper.
Firstly, in order to make any comparison of the two algorithms reasonable, one should put both on the same footing. On the one hand, the likelihood function, as defined in terms of the original model parameters, is the only statistical measure employed in any frequentist study. Our genetic analysis is no exception. Our GA scans the parameter space according to the likelihood, as a function of the original model parameters. On the other hand, MultiNest, similarly to every other sampling technique optimised for Bayesian scans, performs the scan based on the posterior PDF (i.e. likelihood times prior) rather than the likelihood alone. Consequently, a very natural way of comparing the two is to make the latter sampling algorithm also proceed according to the likelihood function only. This can be achieved by choosing a flat prior in this case.
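Written out explicitly, this choice makes the two scanning targets coincide: with a flat prior, the posterior density sampled by MultiNest is proportional to the likelihood profiled by the GA,

```latex
P(\theta \,|\, d) \;=\; \frac{\mathcal{L}(d \,|\, \theta)\,\pi(\theta)}{\mathcal{Z}}
\;\xrightarrow{\;\pi(\theta)\,=\,\mathrm{const}\;}\;
P(\theta \,|\, d) \;\propto\; \mathcal{L}(d \,|\, \theta),
```

so that any difference between the two sets of results reflects the sampling strategies themselves rather than the statistical measure being sampled.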
Imposing any other non-trivial prior (or equivalently, changing the scanning metric), although entirely justified in the Bayesian framework, is a very ad hoc approach in a frequentist framework. In the Bayesian case, this simply means that the algorithm better samples the regions containing larger prior volumes, producing more sample points in those regions. This is exactly what one requires for a Bayesian scan, in which the density of samples should reflect the posterior density at different points in the parameter space. In a profile likelihood analysis, however, we are interested in obtaining reasonable maps of the likelihood function in terms of the given model parameters. Imposing any prior in this case means nothing but assigning different scanning weights to different parts of the parameter space, i.e. forcing the algorithm to scan some regions at higher resolution than others; this can make the algorithm miss important points in some regions.
In the frequentist language, the effect of imposing a non-flat prior is the same as reparameterising the model. This means, for example, that in the case of the log-prior scan, the likelihood function is redefined in terms of the logarithmically-scaled parameters rather than the original model parameters. Results of a profile likelihood analysis should in principle be independent of the specific parameterisation of the model; it should not matter if one works with e.g. one set of coordinates or another. This statement is however correct only if one has perfect knowledge of the likelihood function. No numerical scanning algorithm provides this perfect knowledge, as its resolution is always limited. This means that different parameterisations of the model do give different results until the limit of 'perfect sampling' has been reached. Any specific choice should then be justified 'a priori'. One can for example argue that a specific scaling is theoretically better justified than others (e.g. that a log prior is geometrically preferred to a flat one). In principle this is a Bayesian statement, as it places an implicit measure on the parameter space, but it does have a practical impact upon frequentist profile likelihood scans. If one wanted to explore the effects of such reparameterisations, it would be entirely possible to do this by way of a GA, implemented in terms of genomes encoding the rescaled parameters. We suspect that by using a logarithmically-encoded genome, we would for example find the funnel region properly, and probably even some better-fitting points than our current best fit. Since our primary intention in the current work has been to look at the CMSSM model as it is, we have therefore adhered to the likelihood function defined in terms of the original parameters (and thus employed a linearly-encoded genome).
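The distinction between linearly- and logarithmically-encoded genomes amounts to nothing more than the map from a normalised gene to a physical parameter. A hedged sketch (our own helper, not code from the paper):

```python
import math

def decode(gene, lo, hi, log_scale=False):
    """Map a normalised gene in [0, 1] to a physical parameter in [lo, hi].

    Linear encoding samples [lo, hi] uniformly; logarithmic encoding
    samples uniformly in log10, giving equal resolution per decade
    (the reparameterisation discussed in the text)."""
    if log_scale:
        lg_lo, lg_hi = math.log10(lo), math.log10(hi)
        return 10.0 ** (lg_lo + gene * (lg_hi - lg_lo))
    return lo + gene * (hi - lo)
```

Under the logarithmic encoding, low-mass regions such as the funnel would be sampled more finely than under the linear encoding adopted here, at the cost of coarser coverage at high masses.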
We leave the investigation of logarithmically-encoded genomes for future work, where we intend to compare results with those of log-prior MultiNest scans.
3.4.2 Speed and convergence
The results presented in this work are based on 3 million sample points in total, corresponding to the same number of likelihood evaluations. The samples were generated through 10 separate runs, with different initial populations and generations each. The resultant samples were then combined to obtain the final set of points. Compared to the typical number of likelihood evaluations required in a MultiNest scan (around ), the computational effort here is larger by a factor of 6. We have however chosen the number of runs and generations entirely by hand, not by any advanced convergence criteria; it may be possible to achieve similar (or better) results with fewer likelihood evaluations, using a more carefully tuned GA and/or a suitable stopping criterion.
There is in fact no way of checking whether or not our present implementation of the algorithm, although giving better results than MultiNest, has globally converged; indeed, there are several reasons to believe that it probably has not.
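For concreteness, the kind of simple generational GA described above can be sketched as follows. This is our own illustrative implementation: the operators (binary tournament selection, single-point crossover, per-gene Gaussian mutation) and all hyperparameter values are assumptions made for the sketch, not the exact choices used in our scans.

```python
import random

def run_ga(loglike, bounds, pop_size=100, generations=200,
           p_cross=0.9, p_mut=0.05, seed=None):
    """Minimal generational GA with full population replacement.

    `bounds` is a list of (lo, hi) ranges, one per (linearly-encoded)
    parameter; `loglike` is the fitness to maximise. Hyperparameter
    defaults are illustrative only."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        fits = [loglike(ind) for ind in pop]
        i_top = max(range(pop_size), key=fits.__getitem__)
        if fits[i_top] > best_fit:                 # record best point ever seen
            best, best_fit = list(pop[i_top]), fits[i_top]
        def tournament():                          # binary tournament selection
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            return pop[i] if fits[i] >= fits[j] else pop[j]
        newpop = []
        while len(newpop) < pop_size:
            a, b = list(tournament()), list(tournament())
            if dim > 1 and rng.random() < p_cross:
                cut = rng.randrange(1, dim)        # single-point crossover
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):
                mutated = [min(hi, max(lo, g + rng.gauss(0.0, 0.1 * (hi - lo))))
                           if rng.random() < p_mut else g
                           for g, (lo, hi) in zip(child, bounds)]
                newpop.append(mutated)
        pop = newpop[:pop_size]                    # full generational replacement
    return best, best_fit
```

Because the whole population is replaced each generation and only the best point is recorded, a single such run can easily stall in a local optimum, which is the behaviour seen in the individual runs discussed in this section.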
One way of seeing this is to look at the results of each of the 10 runs separately. To illustrate this, we show in Fig. 8 the corresponding two-dimensional profile likelihoods in the  plane. (Other two-dimensional, and also one-dimensional, profile likelihood plots for the parameters and observables exhibit similar features, so add little to the discussion.) The iso-likelihood contours in each panel show the statistically-consistent and confidence regions, based on the best-fit point found in that specific scan. Panels are sorted according to their best-fit values. The values of all model and nuisance parameters at the best-fit points of all 10 runs are given in Table 7, together with the best-fit values.
Run  best-fit value  
Run 1  9.35  
Run 2  11.34  
Run 3  11.45  
Run 4  11.55  
Run 5  11.61  
Run 6  11.63  
Run 7  11.65  
Run 8  11.76  
Run 9  11.78  
Run 10  11.86 
Our global best-fit point (9.35) is found in only one run. The other 9 runs have not found best-fit points better than 11.34. However, all these other best-fit values (11.34, 11.45, 11.55, 11.61, 11.63, 11.65, 11.76, 11.78 and 11.86) are also significantly better than the one found by MultiNest with linear priors (13.51). Although some of these best fits in individual runs are very similar to the value found by MultiNest with logarithmic priors (), they are still at least slightly better in every single case.
The plots in Fig. 8 indicate that none of our individual GA runs has been able to cover all the interesting high-likelihood regions on its own. It seems, for example, that in runs 2 and 3 the algorithm has become trapped in local minima in the COA region. In runs 4–10, the same has happened at high masses, including the focus point region. However, had the individual GAs continued to run, they might have escaped these minima. In runs 6 and 7, for example, although the region does not include the global best-fit region of run 1, it would very likely extend to and eventually uncover that region if the run continued. The other difficulty is that if the current version of the algorithm finds the global best-fit point (or some high-likelihood points in its vicinity) relatively quickly, the chance that it will then scan other points with lower likelihoods within the interesting confidence regions is dramatically reduced. This indicates a drawback of the algorithm in correctly mapping confidence intervals around the global best-fit point. This problem is not unexpected, as the primary purpose of GAs is to find global maxima or minima of a specific function as quickly as possible; if they succeed in this, there is no reason for them to start mapping the other, less important, surrounding points. This issue obviously becomes more serious when spike-like best-fit regions exist, which appears to be the case for the CMSSM. For example, the existence of relatively large high-likelihood regions in panels 4, 5, 6, 7, 8 and 10 is probably due to the fact that there are many points with likelihood values very close to the highest one, distributed over a large area; this is not the case for runs 1, 2 or 3. This means that if the GAs continued to run in those cases, and the global best-fit point of run 1 happened to be found at some stage, their mapping of the interesting regions would be much better than that presently achieved in run 1.
This is again an indication that there is a trade-off between how quickly we want the algorithm to find the actual global best-fit point and how accurately it is supposed to map the confidence regions; employing more advanced operators and strategies in the algorithm might improve the situation.
As far as our current implementation of GAs is concerned, all the above implies that the algorithms have not converged properly in every individual run. Firstly, they have not been able to find the actual global best-fit point in all (or even necessarily any) of the scans, and secondly, in the run with the highest-likelihood best-fit point (run 1), the mapping of the confidence regions around that point is rather unsatisfactory. On the other hand, most of the unwanted features discussed here are alleviated by combining the results of all 10 runs. This demonstrates the important role that parallelisation could play in improving the efficiency of GAs. Although our full set of sample points seems to provide better results than MultiNest, we are still not sure that this parallel version of the algorithm has converged either. This is again because the number of separate runs employed in our analysis was chosen rather arbitrarily. The arbitrariness in both the number of runs and the termination criterion of each run means that no result-driven convergence condition exists in our GAs. One could possibly utilise more sophisticated criteria in the termination condition, giving rise to better estimates of the convergence of each run, but to our knowledge no such problem-independent criteria exist; we leave the investigation of such possibilities for future work.
As discussed in Sec. 3.1, even though such a convergence condition does exist for the MultiNest scans, making the algorithm terminate after fewer total likelihood evaluations than our GAs, it still misses many points that are important in the frequentist framework. The convergence criterion for MultiNest is defined in terms of the Bayesian evidence; even if a run is properly converged in terms of the evidence, this convergence makes sense only in the context of the Bayesian posterior PDF, not a frequentist profile likelihood analysis. That is, many isolated spikes might have been missed, and consequently the likelihood might not have been mapped effectively. Even tuning the convergence parameters (such as the tolerance) may not help. This is because if the MultiNest algorithm does not find a high-likelihood point on its first approach to a region, it is given no chance to go back and find it at a later stage. This means that even if the convergence parameter for MultiNest were tuned in such a way that the algorithm ran for the same number of likelihood evaluations as a GA (i.e. 3 million here), this would not help in finding better-fit points. One should thus be very careful in introducing convergence criteria into any scanning technique (including GAs and MultiNest) dealing with a model as complex as the CMSSM; the criteria should be carefully defined according to the statistical measure employed.
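A result-driven termination rule of the sort alluded to here could be as simple as stopping when the best chi-squared stops improving. The sketch below is only an illustrative (and certainly problem-dependent) choice of our own, not a criterion used in our analysis:

```python
def make_stall_criterion(patience=50, tol=1e-3):
    """Stopping rule: terminate when the best chi^2 has not improved by
    more than `tol` for `patience` consecutive generations.

    Returns a callable taking the current best chi^2 and returning True
    when the run should stop."""
    state = {"best": float("inf"), "stall": 0}
    def should_stop(chi2_best):
        if chi2_best < state["best"] - tol:
            state["best"] = chi2_best   # genuine improvement: reset the counter
            state["stall"] = 0
        else:
            state["stall"] += 1
        return state["stall"] >= patience
    return should_stop
```

Such a rule would still say nothing about whether the surrounding confidence regions have been mapped, which is precisely the limitation of any criterion keyed only to the best-fit value.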
We also point out that the isolated likelihood spikes missed by Bayesian algorithms will only fail to affect the posterior mass if a limited number of them exist. If there are a significant number of spikes, they could add up to a large portion of the posterior mass, and affect even the Bayesian inference. Our GA results indicate that many such missed points actually exist, hinting that MultiNest might not have even mapped the entire posterior PDF correctly. Given the frequentist framework of this paper, this must unfortunately remain mere speculation, as our results do not allow any good estimate to be made of the posterior mass contained in the extra points.
We conclude this section by emphasising the role of parallelisation in terms of the required computational power. Not only does parallelisation enhance the scanning efficiency of GAs by reducing the probability of premature convergence and trapping in local maxima, it also increases their speed significantly. This can be achieved in different ways. Firstly, GAs work with a population of points instead of a single individual, providing extensive opportunity for treating different individuals in parallel. As an immediate consequence, the time required for a typical simple GA run with full generational replacement (the scheme we have used) can in principle decrease by a factor of , the number of individuals in each population ( in our case). The other way to parallelise GAs is to have separate populations evolve in parallel. By employing more advanced genetic operators and strategies such as 'immigration', individuals in different populations can even interact with each other. Although we have not employed such advanced parallel versions of the algorithm in our analysis, our samples were generated in a parallel manner, i.e. through 10 separate runs with generations each.
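The first kind of parallelism is trivial to express, since the fitness evaluations within one generation are independent of each other. A minimal sketch (our own helper; in a real scan each likelihood evaluation would call the external spectrum and observable calculators, so the workers would mostly wait on those processes):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_population(loglike, population, workers=4):
    """Evaluate all genomes of one generation concurrently.

    Returns fitnesses in the same order as `population`, so the result
    is a drop-in replacement for a serial `[loglike(g) for g in population]`.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(loglike, population))
```

With full generational replacement, the whole population must be evaluated before selection acts, so the wall-clock time per generation can in principle shrink by up to the population size, as stated above.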
4 Summary and conclusions
Constraining the parameter space of the MSSM using existing data is by no means an easy or straightforward task. Even in the case of the CMSSM, a highly simplified and economical version of the model, present data are not sufficient to constrain the parameters in a way completely independent of computational and statistical techniques.
There have been several efforts to study properties and predictions of different versions of the MSSM. Many recent activities in this field have used scanning methods optimised for calculating the Bayesian evidence and posterior PDF. Those analyses have been highly successful in revealing the complex structure of SUSY models, demonstrating that some patience will be required before we can place any strong constraints on their parameters. The same Bayesian scanning methods have also been employed for frequentist analyses of the problem, particularly in the framework of the profile likelihood. These methods are not optimised for such frequentist analyses, so care should be taken in applying them to such tasks.
We have employed a completely new scanning algorithm in this paper, based on Genetic Algorithms (GAs). We have shown GAs to be a powerful tool for frequentist approaches to the problem of scanning the CMSSM parameter space. We compared the outcomes of GA scans directly with those of the state-of-the-art Bayesian algorithm MultiNest, in the framework of the CMSSM. For this comparison, we mostly considered MultiNest scans with flat priors, but kept in mind that e.g. logarithmic priors give rise to higher-likelihood points at low masses in the CMSSM parameter space; we justified this choice of priors.
Our results are very promising and quite surprising. We found many new high-likelihood CMSSM points, which have a strong impact on the final statistical conclusions of the study. These not only considerably influence the inferred high-likelihood regions and confidence levels on the parameter values, but also indicate that the applicability of conventional Bayesian scanning techniques in a frequentist context is highly questionable. Although our initial motivation for using GAs was to obtain a correct estimate of the likelihood at the global best-fit point, which is crucial in a profile likelihood analysis, we also found that they can uncover many new and interesting points in almost all the relevant regions of parameter space. These points strongly affect the inferred confidence regions around the best-fit point. Even though we cannot be confident of exactly how completely our algorithm maps these high-likelihood regions, it has certainly covered large parts of them better than any previous algorithm.
We think that by improving the various ingredients of GAs, such as the crossover and mutation schemes, this ability might be enhanced even further. We largely employed the standard, simplest versions of the genetic operators in our analysis, as well as very typical genetic parameters. These turned out to work sufficiently well for our purposes. Although we believe that tuning the algorithm might produce even more interesting results, it is good news that satisfactory results can be obtained even with a very generic version. This likely means that one can apply the method to more complicated SUSY models without extensive fine-tuning.
One interesting outcome of our scan is that the global best-fit point is found to be located in the focus point region, with a likelihood significantly larger than that of the best-fit point in the stau coannihilation region (which in turn still has a higher likelihood than the global best-fit value obtained with MultiNest, even with logarithmic priors). The focus point region is favoured in our analysis over the coannihilation region, in contrast to findings from some recent MCMC studies [31, 37], where the opposite was strongly claimed. We also found a rather large part of the stau coannihilation region, consistent with all experimental data, located at high . This part of the coannihilation region seems to have been missed in other recent scans. All these results show that, at least in our particular setup, high masses, corresponding either to the FP or the COA region, are by no means disfavoured by current data (except perhaps by direct detection of dark matter). The discrepancy between this finding and the claims of some other authors that the FP is disfavoured might originate in the different scanning algorithms employed, or in the different physics and likelihood calculations performed in each analysis. We have however shown, by comparing our results with others produced using exactly the same setup except for the scanning algorithm, that one should not be at all confident that all the points relevant for a frequentist analysis can be found by scanning techniques optimised for Bayesian statistics, such as nested sampling and MCMCs.
We have also calculated some of the quantities most interesting for searches for SUSY at the LHC, and for direct and indirect searches for dark matter. We showed that GAs find much better points than MultiNest almost everywhere in the interesting mass ranges of the lightest Higgs boson, gluino and neutralino. We confirmed previous conclusions that the LHC is in principle able to investigate a large fraction of the high-likelihood points in the CMSSM parameter space if it explores sparticle masses up to around . As far as the Higgs mass is concerned, there are many points with rather low masses that, although sitting just below the lower mass limit given by LEP, provide globally very good fits to the experimental data. In the context of dark matter searches, we noticed that the global best-fit point and much of the surrounding confidence level region at high cross-sections are already mostly ruled out by direct detection limits, if one assumes the standard halo model to be accurate. We also argued that some of these points may be tested by upcoming indirect detection experiments, in particular the Fermi gamma-ray space telescope. Finally, we found that the high-likelihood stau coannihilation region at large introduces a new allowed region in the plane of the neutralino mass and self-annihilation cross-section, which (to our knowledge) has not been observed previously.
We also compared our algorithm with MultiNest in terms of speed and convergence, and argued that GAs are no worse than MultiNest in this respect. GAs have a large potential for parallelisation, considerably reducing the time required for a typical run. This property, as well as the fact that the computational effort scales linearly with the dimensionality of the parameter space, also makes GAs an excellent method for the frequentist exploration of higher-dimensional SUSY parameter spaces.
Finally, perhaps the bottom line of the present work is that we once again see that even the CMSSM, despite its simplicity, possesses a highly complex and poorly-understood structure, with many small, fine-tuned regions. This makes investigation of the model parameter space very difficult, and still very challenging for modern statistical scanning techniques. Although the method proposed in this paper seems to outperform the usual Bayesian techniques in a frequentist analysis, it is important to remember that it is by no means the final word in this direction. The dependence of the results on the chosen statistical framework, measure and method calls for caution in drawing strong conclusions based on such scans. The situation will of course improve significantly with the additional constraints provided by forthcoming data.
Acknowledgments.
The authors are grateful to the Swedish Research Council (VR) for financial support. YA was also supported by the Helge Axelsson Johnson foundation. JC is a Royal Swedish Academy of Sciences Research Fellow supported by a grant from the Knut and Alice Wallenberg Foundation. We thank Roberto Trotta for helpful discussions. We are also thankful to the authors of Ref. 31 for useful comments on a previous version of the manuscript.

References
 [1] See e.g. S. P. Martin, A Supersymmetry Primer, arXiv:hep-ph/9709356.
 [2] I. Aitchison, Supersymmetry in Particle Physics: An Elementary Introduction, Cambridge University Press (2007); H. Baer and X. Tata, Weak Scale Supersymmetry: From Superfields to Scattering Events, Cambridge University Press (2006).
 [3] J. R. Ellis, S. Kelley and D. V. Nanopoulos, Probing the desert using gauge coupling unification, Phys. Lett. B 260 (1991) 131.
 [4] See e.g. G. Jungman, M. Kamionkowski and K. Griest, Supersymmetric dark matter, Phys. Rep. 267 (1996) 195 [arXiv:hep-ph/9506380]; L. Bergström, Non-baryonic dark matter: Observational evidence and detection methods, Rept. Prog. Phys. 63 (2000) 793 [arXiv:hep-ph/0002126]; G. Bertone, D. Hooper and J. Silk, Particle dark matter: Evidence, candidates and constraints, Phys. Rep. 405 (2005) 279 [arXiv:hep-ph/0404175]; L. Bergström, Dark Matter Candidates, arXiv:0903.4849.
 [5] D. J. H. Chung, L. L. Everett, G. L. Kane, S. F. King, J. D. Lykken and L. T. Wang, The soft supersymmetry-breaking Lagrangian: Theory and applications, Phys. Rep. 407 (2005) 1 [arXiv:hep-ph/0312378].
 [6] M. A. Luty, 2004 TASI lectures on supersymmetry breaking, arXiv:hep-th/0509029.
 [7] S. S. AbdusSalam, B. C. Allanach, M. J. Dolan, F. Feroz and M. P. Hobson, Selecting a Model of Supersymmetry Breaking Mediation, Phys. Rev. D 80 (2009) 035017 [arXiv:0906.0957].
 [8] A. Chamseddine, R. Arnowitt and P. Nath, Locally Supersymmetric Grand Unification, Phys. Rev. Lett. 49 (1982) 970.
 [9] R. Barbieri, S. Ferrara and C. Savoy, Gauge Models With Spontaneously Broken Local Supersymmetry, Phys. Lett. B 119 (1982) 343; N. Ohta, Grand Unified Theories Based On Local Supersymmetry, Prog. Theor. Phys. 70 (1983) 542; L. J. Hall, J. Lykken and S. Weinberg, Supergravity As The Messenger Of Supersymmetry Breaking, Phys. Rev. D 27 (1983) 2359; P. Nath, R. L. Arnowitt and A. H. Chamseddine, Gauge Hierarchy In Supergravity Guts, Nucl. Phys. B 227 (1983) 121; and for some reviews, see e.g. H. P. Nilles, Supersymmetry, Supergravity and Particle Physics, Phys. Rep. 110 (1984) 1; A. Brignole, L. E. Ibañez and C. Muñoz, Soft supersymmetry breaking terms from supergravity and superstring models, published in Perspectives on Supersymmetry, ed. G. L. Kane, 125 [arXiv:hep-ph/9707209].
 [10] L. Alvarez-Gaume, J. Polchinski and M. B. Wise, Minimal Low-Energy Supergravity, Nucl. Phys. B 221 (1983) 495; R. Arnowitt and P. Nath, Supersymmetric mass spectrum in SU(5) supergravity grand unification, Phys. Rev. Lett. 69 (1992) 725; P. Nath and R. Arnowitt, Radiative breaking, proton stability and the viability of no-scale supergravity models, Phys. Lett. B 287 (1992) 89; P. Nath and R. Arnowitt, Top quark mass and Higgs mass limits in the standard SU(5) supergravity unification, Phys. Lett. B 289 (1992) 368; G. G. Ross and R. G. Roberts, Minimal supersymmetric unification predictions, Nucl. Phys. B 377 (1992) 571; P. Nath and R. Arnowitt, Predictions in SU(5) supergravity grand unification with proton stability and relic density constraints, Phys. Rev. Lett. 70 (1993) 3696 [arXiv:hep-ph/9302318]; R. L. Arnowitt and P. Nath, Cosmological constraints and SU(5) supergravity grand unification, Phys. Lett. B 299 (1993) 58 [Erratum-ibid. 307 (1993) 403] [arXiv:hep-ph/9302317]; V. Barger, M. S. Berger and P. Ohmann, Supersymmetric particle spectrum, Phys. Rev. D 49 (1994) 4908 [arXiv:hep-ph/9311269]; G. L. Kane, C. F. Kolda, L. Roszkowski and J. D. Wells, Study of constrained minimal supersymmetry, Phys. Rev. D 49 (1994) 6173 [arXiv:hep-ph/9312272].
 [11] G. Cowan, Statistical data analysis, Oxford University Press (1998).
 [12] W. A. Rolke, A. M. Lopez and J. Conrad, Confidence Intervals with Frequentist Treatment of Statistical and Systematic Uncertainties, Nucl. Instrum. Methods A 551 (2005) 493 [arXiv:physics/0403059].
 [13] For an introduction to general applications of Bayesian statistics in physics, see e.g. G. D'Agostini, Probability and Measurement Uncertainty in Physics - a Bayesian Primer, arXiv:hep-ph/9512295; for reviews of its applications in cosmology, see R. Trotta, Applications of Bayesian model selection to cosmological parameters, Mon. Not. Roy. Astron. Soc. 378 (2007) 72 [arXiv:astro-ph/0504022]; R. Trotta, Bayes in the sky: Bayesian inference and model selection in cosmology, Contemp. Phys. 49 (2008) 71 [arXiv:0803.4089]; A. R. Liddle, Statistical methods for cosmological parameter selection and estimation, arXiv:0903.4210; M. Hobson, A. Jaffe, A. Liddle, P. Mukherjee, Bayesian Methods in Cosmology, Cambridge University Press (2008).
 [14] R. Trotta, F. Feroz, M. P. Hobson, L. Roszkowski and R. Ruiz de Austri, The impact of priors and observables on parameter inferences in the Constrained MSSM, JHEP 12 (2008) 024 [arXiv:0809.3792].
 [15] M. Drees and M. Nojiri, The neutralino relic density in minimal N=1 supergravity, Phys. Rev. D 47 (1993) 376 [arXiv:hep-ph/9207234].
 [16] H. Baer and M. Brhlik, Cosmological relic density from minimal supergravity with implications for collider physics, Phys. Rev. D 53 (1996) 597 [arXiv:hep-ph/9508321]; J. R. Ellis, T. Falk, K. A. Olive and M. Srednicki, Calculations of neutralino–stau coannihilation channels and the cosmologically relevant region of MSSM parameter space, Astropart. Phys. 13 (2000) 181 [Erratum-ibid. 15 (2001) 413] [arXiv:hep-ph/9905481]; J. R. Ellis, T. Falk, G. Ganis, K. A. Olive and M. Srednicki, The CMSSM parameter space at large tan beta, Phys. Lett. B 510 (2001) 236 [arXiv:hep-ph/0102098]; T. Nihei, L. Roszkowski and R. Ruiz de Austri, New Cosmological and Experimental Constraints on the CMSSM, JHEP 08 (2001) 024 [arXiv:hep-ph/0106334]; A. Lahanas and V. Spanos, Implications of the pseudo-scalar Higgs boson in determining the neutralino dark matter, Euro. Phys. Journ. C23 (2002) 185 [arXiv:hep-ph/0106345].
 [17] J. R. Ellis, T. Falk, G. Ganis, K. A. Olive and M. Srednicki, The CMSSM parameter space at large tan beta, Phys. Lett. B 510 (2001) 236 [arXiv:hep-ph/0102098]; J. R. Ellis, K. A. Olive, Y. Santoso and V. C. Spanos, Likelihood analysis of the CMSSM parameter space, Phys. Rev. D 69 (2004) 095004 [arXiv:hep-ph/0310356]; J. R. Ellis, S. Heinemeyer, K. A. Olive and G. Weiglein, Indirect sensitivities to the scale of supersymmetry, JHEP 02 (2005) 013 [arXiv:hep-ph/0411216]; J. R. Ellis, S. Heinemeyer, K. A. Olive and G. Weiglein, Phenomenological indications of the scale of supersymmetry, JHEP 05 (2006) 005 [arXiv:hep-ph/0602220]; O. Buchmueller et al., Prediction for the Lightest Higgs Boson Mass in the CMSSM using Indirect Experimental Constraints, Phys. Lett. B 657 (2007) 87 [arXiv:0707.3447].
 [18] E. A. Baltz and P. Gondolo, Markov chain Monte Carlo exploration of minimal supergravity with implications for dark matter, JHEP 10 (2004) 052 [arXiv:hep-ph/0407039].
 [19] B. C. Allanach and C. G. Lester, Multi-dimensional mSUGRA likelihood maps, Phys. Rev. D 73 (2006) 015013 [arXiv:hep-ph/0507283].
 [20] B. C. Allanach, Naturalness priors and fits to the constrained minimal supersymmetric standard model, Phys. Lett. B 635 (2006) 123 [arXiv:hep-ph/0601089].
 [21] R. Ruiz de Austri, R. Trotta and L. Roszkowski, A Markov Chain Monte Carlo analysis of the CMSSM, JHEP 05 (2006) 002 [arXiv:hep-ph/0602028].
 [22] R. Trotta, R. Ruiz de Austri and L. Roszkowski, Prospects for direct dark matter detection in the Constrained MSSM, New Astron. Rev. 51 (2007) 316 [arXiv:astro-ph/0609126].
 [23] B. C. Allanach, C. G. Lester and A. M. Weber, The dark side of mSUGRA, JHEP 12 (2006) 065 [arXiv:hep-ph/0609295].
 [24] L. Roszkowski, R. Ruiz de Austri and R. Trotta, On the detectability of the CMSSM light Higgs boson at the Tevatron, JHEP 04 (2007) 084 [arXiv:hep-ph/0611173].
 [25] B. C. Allanach, K. Cranmer, C. G. Lester and A. M. Weber, Natural Priors, CMSSM Fits and LHC Weather Forecasts, JHEP 08 (2007) 023 [arXiv:0705.0487].
 [26] L. Roszkowski, R. Ruiz de Austri and R. Trotta, Implications for the Constrained MSSM from a new prediction for b → sγ, JHEP 07 (2007) 075 [arXiv:0705.2012].
 [27] L. Roszkowski, R. Ruiz de Austri, J. Silk and R. Trotta, On prospects for dark matter indirect detection in the Constrained MSSM, Phys. Lett. B 671 (2009) 10 [arXiv:0707.0622].
 [28] R. Lafaye, T. Plehn, M. Rauch and D. Zerwas, Measuring Supersymmetry, Euro. Phys. Journ. C54 (2008) 617 [arXiv:0709.3985].
 [29] B. C. Allanach, M. J. Dolan and A. M. Weber, Global Fits of the Large Volume String Scenario to WMAP5 and Other Indirect Constraints Using Markov Chain Monte Carlo, JHEP 08 (2008) 105 [arXiv:0806.1184].
 [30] B. C. Allanach and D. Hooper, Panglossian Prospects for Detecting Neutralino Dark Matter in Light of Natural Priors, JHEP 10 (2008) 071 [arXiv:0806.1923].
 [31] O. Buchmueller et al., Predictions for Supersymmetric Particle Masses in the CMSSM using Indirect Experimental and Cosmological Constraints, JHEP 09 (2008) 117 [arXiv:0808.4128].
 [32] G. D. Martinez, J. S. Bullock, M. Kaplinghat, L. E. Strigari and R. Trotta, Indirect Dark Matter Detection from Dwarf Satellites: Joint Expectations from Astrophysics and Supersymmetry, JCAP 06 (2009) 014 [arXiv:0902.4715].
 [33] L. Roszkowski, R. R. de Austri, R. Trotta, Y. L. Tsai and T. A. Varley, Some novel features of the Non-Universal Higgs Model, arXiv:0903.1279.
 [34] C. Balazs and D. Carter, Likelihood analysis of the next-to-minimal supergravity motivated model, arXiv:0906.5012.
 [35] G. Belanger, F. Boudjema, A. Pukhov and R. K. Singh, Constraining the MSSM with universal gaugino masses and implication for searches at the LHC, arXiv:0906.5048.
 [36] P. Bechtle, K. Desch, M. Uhlenbrock and P. Wienemann, Constraining SUSY models with Fittino using measurements before, with and beyond the LHC, arXiv:0907.2589.
 [37] O. Buchmueller et al., Likelihood Functions for Supersymmetric Observables in Frequentist Analyses of the CMSSM and NUHM1, arXiv:0907.5568.
 [38] See e.g. A. Lewis and S. Bridle, Cosmological parameters from CMB and other data: a Monte Carlo approach, Phys. Rev. D 66 (2002) 103511 [arXiv:astro-ph/0205436].
 [39] J. Skilling, Nested sampling, in Bayesian inference and maximum entropy methods in science and engineering, R. Fischer, R. Preuss and U. von Toussaint eds., Springer Verlag, U.S.A., AIP Conf. Proc. 735 (2004) 395; J. Skilling, Nested sampling for general Bayesian computation, Bayesian Anal. 1 (2006) 833.
 [40] F. Feroz and M. P. Hobson, Multimodal nested sampling: an efficient and robust alternative to MCMC methods for astronomical data analysis, Mon. Not. Roy. Astron. Soc. 384 (2008) 449 [arXiv:0704.3704]; F. Feroz, M. P. Hobson and M. Bridges, MultiNest: an efficient and robust Bayesian inference tool for cosmology and particle physics, arXiv:0809.3437.
 [41] F. Feroz, B. C. Allanach, M. Hobson, S. S. AbdusSalam, R. Trotta and A. M. Weber, Bayesian Selection of sign(mu) within mSUGRA in Global Fits Including WMAP5 Results, JHEP 10 (2008) 064 [arXiv:0807.4512].
 [42] F. Feroz, M. P. Hobson, L. Roszkowski, R. R. de Austri and R. Trotta, Are BR(b → sγ) and (g − 2)μ consistent within the Constrained MSSM?, arXiv:0903.2487.
 [43] R. Trotta, R. R. de Austri and C. P. d. Heros, Prospects for dark matter detection with IceCube in the context of the CMSSM, arXiv:0906.0366.
 [44] L. Roszkowski, R. R. de Austri and R. Trotta, Efficient reconstruction of CMSSM parameters from LHC data - A case study, arXiv:0907.0594.
 [45] P. Scott, J. Conrad, J. Edsjö, L. Bergström, C. Farnier and Y. Akrami, Direct Constraints on Minimal Supersymmetry from Fermi-LAT Observations of the Dwarf Galaxy Segue 1, arXiv:0909.3300.
 [46] P. Scott and S. Sivertsson, Gamma-Rays from Ultracompact Primordial Dark Matter Minihalos, arXiv:0908.4082.
 [47] D. E. LopezFogliani, L. Roszkowski, R. R. de Austri and T. A. Varley, A Bayesian Analysis of the Constrained NMSSM, arXiv:0906.4911.
 [48] S. S. AbdusSalam, B. C. Allanach, F. Quevedo, F. Feroz and M. Hobson, Fitting the Phenomenological MSSM, arXiv:0904.2548.
 [49] B. C. Allanach, D. Grellscheid and F. Quevedo, Genetic algorithms and experimental discrimination of SUSY models, JHEP 07 (2004) 069 [arXiv:hep-ph/0406277].
 [50] K. H. Becks, S. Hahn and A. Hemker, Genetic algorithms in elementary particle physics (in German), Phys. Bl. 50 (1994) 238; G. Organtini, A genetic algorithm for optimization of selection algorithms in high energy physics, talk given at Computing in High-energy Physics (CHEP 97), Berlin, Germany, 7-11 Apr 1997; S. Abdullin et al., GARCON: Genetic Algorithm for Rectangular Cuts OptimizatioN. User’s manual for version 2.0, arXiv:hep-ph/0605143; S. Abdullin, Genetic algorithm for SUSY trigger optimization in CMS detector at LHC, Nucl. Instrum. Methods A 502 (2003) 693; J. M. Link et al. [FOCUS Collaboration], Application of Genetic Programming to High Energy Physics Event Selection, Nucl. Instrum. Methods A 551 (2005) 504 [arXiv:hep-ex/0503007]; J. M. Link et al. [FOCUS Collaboration], Search for and Using Genetic Programming Event Selection, Phys. Lett. B 624 (2005) 166 [arXiv:hep-ex/0507103]; L. Teodorescu, Gene expression programming approach to event selection in high energy physics, IEEE Trans. Nucl. Sci. 53 (2006) 2221; M. Mjahed, Search for the Higgs boson at LHC by using Genetic Algorithms, Nucl. Instrum. Methods A 559 (2006) 199; R. Berlich and M. Kunze, Parametric optimization with evolutionary strategies in particle physics, Nucl. Instrum. Methods A 534 (2004) 147.
 [51] J. F. Markham and T. D. Kieu, Evolutionary algorithms applied to Landau-gauge fixing, Nucl. Phys. Proc. Suppl. 73 (1999) 868 [arXiv:hep-lat/9809143]; O. Oliveira and P. J. Silva, Gribov copies and gauge fixing in lattice gauge theories, Nucl. Phys. Proc. Suppl. 106 (2002) 1088 [arXiv:hep-lat/0110035]; O. Oliveira and P. J. Silva, An algorithm for Landau gauge fixing in lattice QCD, Comput. Phys. Commun. 158 (2004) 73 [arXiv:hep-lat/0309184]; A. Yamaguchi and H. Nakajima, Landau Gauge Fixing Supported by Genetic Algorithm, Nucl. Phys. Proc. Suppl. 83 (2000) 840 [arXiv:hep-lat/9909064]; A. Yamaguchi and A. Sugamoto, Genetic Algorithm for Lattice Gauge Theory: On SU(2) and U(1) on 4 dimensional lattice, how to hitchhike to thermal equilibrium state, Nucl. Phys. Proc. Suppl. 83 (2000) 837 [arXiv:hep-lat/9909063]; A. Yamaguchi, Genetic Algorithm for SU(2) Gauge Theory on a 2-dimensional Lattice, Nucl. Phys. Proc. Suppl. 73 (1999) 847 [arXiv:hep-lat/9809068]; A. Yamaguchi, Genetic Algorithm for SU(N) gauge theory on a lattice, arXiv:hep-lat/9808001; G. M. von Hippel, R. Lewis and R. G. Petry, Evolutionary Fitting Methods for the Extraction of Mass Spectra in Lattice Field Theory, Comput. Phys. Commun. 178 (2008) 713 [arXiv:0707.2788]; G. M. von Hippel, R. Lewis and R. G. Petry, Using evolutionary algorithms to extract field theory mass spectra, PoS LAT2007 (2007) 043 [arXiv:0710.0014]; GAs have also been used to train neural networks, see e.g. J. Rojo and J. I. Latorre, Neural network parametrization of spectral functions from hadronic tau decays and determination of QCD vacuum condensates, JHEP 01 (2004) 055 [arXiv:hep-ph/0401047]; A. Hoecker et al., TMVA - Toolkit for Multivariate Data Analysis, arXiv:physics/0703039.
 [52] L. Teodorescu, Evolutionary Computation in High Energy Physics, arXiv:0804.0369.
 [53] C. Bogdanos and S. Nesseris, Genetic Algorithms and Supernovae Type Ia Analysis, JCAP 05 (2009) 006 [arXiv:0903.2805]; J. Liesenborgs, S. De Rijcke, H. Dejonghe and P. Bekaert, Non-parametric inversion of gravitational lensing systems with few images using a multi-objective genetic algorithm, arXiv:0707.2538; J. Liesenborgs, S. De Rijcke and H. Dejonghe, A genetic algorithm for the non-parametric inversion of strong lensing systems, Mon. Not. Roy. Astron. Soc. 367 (2006) 1209 [arXiv:astro-ph/0601124]; B. J. Brewer and G. F. Lewis, When Darwin Met Einstein: Gravitational Lens Inversion with Genetic Algorithms, arXiv:astro-ph/0501202; A. Petiteau, S. Yu and S. Babak, The search for spinning black hole binaries using a genetic algorithm, arXiv:0905.1785; J. Crowder, N. J. Cornish and L. Reddinger, Darwin meets Einstein: LISA data analysis using genetic algorithms, Phys. Rev. D 73 (2006) 063011 [arXiv:gr-qc/0601036]; D. J. Marshall, G. Joncas and A. P. Jones, Distribution and characteristics of Infrared Dark Clouds using genetic forward modelling, arXiv:0908.3851.
 [54] C. Fernandez-Ramirez, E. Moya de Guerra, A. Udias and J. M. Udias, Properties of Nucleon Resonances by means of a Genetic Algorithm, Phys. Rev. C 77 (2008) 065212 [arXiv:0805.4178]; C. Winkler and H. M. Hofmann, Determination of bound state wave functions by a genetic algorithm, Phys. Rev. C 55 (1997) 684 [arXiv:nucl-th/9412032]; D. G. Ireland, S. Janssen and J. Ryckebusch, A genetic algorithm analysis of N* resonances in reactions, Nucl. Phys. A 740 (2004) 147 [arXiv:nucl-th/0312103]; S. Janssen, D. G. Ireland and J. Ryckebusch, Extraction of N* information from the limited data set, Phys. Lett. B 562 (2003) 51 [arXiv:nucl-th/0302047]; C. Fernandez-Ramirez, E. Moya de Guerra and J. M. Udias, Effective Lagrangian approach to pion photoproduction from the nucleon, Ann. Phys. 321 (2006) 1408 [arXiv:nucl-th/0509020]; C. Fernandez-Ramirez, E. Moya de Guerra and J. M. Udias, Eta Photoproduction as a Test of the Extended Chiral Symmetry, Phys. Lett. B 651 (2007) 369 [arXiv:0706.0616].
 [55] CDF and D0 Collaborations, A Combination of CDF and D0 Results on the Mass of the Top Quark, arXiv:0803.1683.
 [56] W. M. Yao et al. [Particle Data Group], Review of Particle Physics, J. Phys. G 33 (2006) 1 and 2007 partial update for the 2008 edition.
 [57] K. Hagiwara, A. D. Martin, D. Nomura and T. Teubner, Improved predictions for g-2 of the muon and α_QED(M_Z^2), Phys. Lett. B 649 (2007) 173 [arXiv:hep-ph/0611102].
 [58] See http://lepewwg.web.cern.ch/LEPEWWG.
 [59] J. P. Miller, E. de Rafael and B. L. Roberts, Muon (g-2): Review of Theory and Experiment, Rept. Prog. Phys. 70 (2007) 795 [arXiv:hep-ph/0703049].
 [60] E. Barberio et al. [Heavy Flavor Averaging Group (HFAG) Collaboration], Averages of b-hadron properties at the end of 2006, arXiv:0704.3575.
 [61] A. Abulencia et al. [CDF - Run II Collaboration], Measurement of the B_s oscillation frequency, Phys. Rev. Lett. 97 (2006) 062003 [arXiv:hep-ex/0606027]; A. Abulencia et al. [CDF Collaboration], Observation of B_s oscillations, Phys. Rev. Lett. 97 (2006) 242003 [arXiv:hep-ex/0609040].
 [62] J. Dunkley et al. [The WMAP Collaboration], Fiveyear Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Likelihoods and parameters from the WMAP data, Astrophys. J. Suppl. 180 (2009) 306 [arXiv:0803.0586].
 [63] The CDF Collaboration, Search for B_s → μ+μ- and B_d → μ+μ- decays in p̄p collisions with CDF II, CDF note 8956 (August 2007).
 [64] The LEP Higgs Working Group, http://lephiggs.web.cern.ch/LEPHIGGS; G. Abbiendi et al. [the ALEPH Collaboration, the DELPHI Collaboration, the L3 Collaboration and the OPAL Collaboration, The LEP Working Group for Higgs Boson Searches], Search for the standard model Higgs boson at LEP, Phys. Lett. B 565 (2003) 61 [arXiv:hep-ex/0306033].
 [65] A. Heister et al. [ALEPH Collaboration], Absolute mass lower limit for the lightest neutralino of the MSSM from e+e- data at centre-of-mass energies up to 209 GeV, Phys. Lett. B 583 (2004) 247.
 [66] LEP SUSY Working Group for the ALEPH, DELPHI, L3 and OPAL collaborations, http://lepsusy.web.cern.ch/lepsusy.
 [67] A. Heister et al. [ALEPH Collaboration], Search for scalar leptons in e+e- collisions at centre-of-mass energies up to 209 GeV, Phys. Lett. B 526 (2002) 206 [arXiv:hep-ex/0112011].
 [68] P. Achard et al. [L3 Collaboration], Search for scalar leptons and scalar quarks at LEP, Phys. Lett. B 580 (2004) 37 [arXiv:hep-ex/0310007].
 [69] J. Abdallah et al. [DELPHI Collaboration], Searches for supersymmetric particles in e+e- collisions up to 208 GeV and interpretation of the results within the MSSM, Euro. Phys. Journ. C31 (2004) 421 [arXiv:hep-ex/0311019].
 [70] Available from: http://superbayes.org.
 [71] B. C. Allanach, SOFTSUSY: a program for calculating supersymmetric spectra, Comput. Phys. Commun. 143 (2002) 305 [arXiv:hep-ph/0104145]; Available from: http://projects.hepforge.org/softsusy/.
 [72] P. Gondolo, J. Edsjö, P. Ullio, L. Bergström, M. Schelke and E. A. Baltz, DarkSUSY: Computing supersymmetric dark matter properties numerically, JCAP 07 (2004) 008 [arXiv:astro-ph/0406204]; Available from: http://www.physto.se/edsjo/darksusy/.
 [73] S. Heinemeyer, W. Hollik and G. Weiglein, The Masses of the Neutral CP-even Higgs Bosons in the MSSM: Accurate Analysis at the Two-Loop Level, Euro. Phys. Journ. C9 (1999) 343 [arXiv:hep-ph/9812472]; S. Heinemeyer, W. Hollik and G. Weiglein, FeynHiggs: a program for the calculation of the masses of the neutral CP-even Higgs bosons in the MSSM, Comput. Phys. Commun. 124 (2000) 76 [arXiv:hep-ph/9812320]; G. Degrassi, S. Heinemeyer, W. Hollik, P. Slavich and G. Weiglein, Towards High-Precision Predictions for the MSSM Higgs Sector, Euro. Phys. Journ. C28 (2003) 133 [arXiv:hep-ph/0212020]; M. Frank, T. Hahn, S. Heinemeyer, W. Hollik, H. Rzehak and G. Weiglein, The Higgs Boson Masses and Mixings of the Complex MSSM in the Feynman-Diagrammatic Approach, JHEP 02 (2007) 047 [arXiv:hep-ph/0611326]; Available from: http://www.feynhiggs.de/.
 [74] Available from: http://wwwlapp.in2p3.fr/lapth/micromegas/.
 [75] H. Flacher, M. Goebel, J. Haller, A. Hocker, K. Moenig and J. Stelzer, Gfitter - Revisiting the Global Electroweak Fit of the Standard Model and Beyond, Euro. Phys. Journ. C60 (2009) 543 [arXiv:0811.0009].
 [76] For a classic introduction to GAs, see D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley (1989).
 [77] For recent introductions to GAs, see e.g. S. N. Sivanandam and S. N. Deepa, Introduction to Genetic Algorithms, Springer (2007); A. E. Eiben and J. E. Smith, Introduction to Evolutionary Computing, Springer (2008); M. Affenzeller, S. Winkler, S. Wagner and A. Beham, Genetic Algorithms and Genetic Programming: Modern Concepts and Practical Applications, Chapman & Hall/CRC (2009).
 [78] For a modern treatment of GAs, see D. E. Goldberg, Genetic Algorithms: The Design of Innovation, Springer (2010), to appear.
 [79] J. H. Holland, Adaptation in Natural and Artificial Systems, Ann Arbor: The University of Michigan Press (1975), Second Edition: Cambridge, MIT Press (1992).
 [80] P. Charbonneau, Astrophys. J. Suppl. 101 (1995) 309; P. Charbonneau and B. Knapp, 1996, A User’s Guide to PIKAIA 1.0, NCAR Technical Note 418+IA (Boulder: National Center for Atmospheric Research); P. Charbonneau, 2002, Release Notes for PIKAIA 1.2, NCAR Technical Note 451+STR (Boulder: National Center for Atmospheric Research).

 [81] PIKAIA can be downloaded from: http://www.hao.ucar.edu/modeling/pikaia/pikaia.php.
 [82] S. N. Sivanandam and S. N. Deepa, Genetic Algorithms Reference, Volume I: Crossover for single-objective numerical optimization problems, Published by Tomasz Dominik Gwiazda (2006).
 [83] B. C. Allanach, G. Belanger, F. Boudjema and A. Pukhov, Requirements on collider data to match the precision of WMAP on supersymmetric dark matter, JHEP 12 (2004) 020 [arXiv:hep-ph/0410091].
 [84] K. Griest and D. Seckel, Three exceptions in the calculation of relic abundances, Phys. Rev. D 43 (1991) 3191.
 [85] K. L. Chan, U. Chattopadhyay and P. Nath, Naturalness, weak scale supersymmetry and the prospect for the observation of supersymmetry at the Tevatron and at the LHC, Phys. Rev. D 58 (1998) 096004 [arXiv:hep-ph/9710473]; J. L. Feng, K. T. Matchev and T. Moroi, Multi-TeV scalars are natural in minimal supergravity, Phys. Rev. Lett. 84 (2000) 2322 [arXiv:hep-ph/9908309]; J. L. Feng, K. T. Matchev and T. Moroi, Focus points and naturalness in supersymmetry, Phys. Rev. D 61 (2000) 075005 [arXiv:hep-ph/9909334]; J. L. Feng, K. T. Matchev and F. Wilczek, Neutralino Dark Matter in Focus Point Supersymmetry, Phys. Lett. B 482 (2000) 388 [arXiv:hep-ph/0004043].
 [86] A. Djouadi, M. Drees and J. L. Kneur, Neutralino dark matter in mSUGRA: Reopening the light Higgs pole window, Phys. Lett. B 624 (2005) 60 [arXiv:hep-ph/0504090].
 [87] C. Boehm, A. Djouadi and M. Drees, Light scalar top quarks and supersymmetric dark matter, Phys. Rev. D 62 (2000) 035012 [arXiv:hep-ph/9911496].
 [88] R. L. Arnowitt, B. Dutta and Y. Santoso, Coannihilation effects in supergravity and D-brane models, Nucl. Phys. B 606 (2001) 59 [arXiv:hep-ph/0102181].
 [89] J. R. Ellis, K. A. Olive and Y. Santoso, Calculations of Neutralino-Stop Coannihilation in the CMSSM, Astropart. Phys. 18 (2003) 395 [arXiv:hep-ph/0112113].
 [90] P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein and K. E. Williams, HiggsBounds: Confronting Arbitrary Higgs Sectors with Exclusion Bounds from LEP and the Tevatron, arXiv:0811.4169.
 [91] Z. Ahmed et al. [CDMS Collaboration], Search for Weakly Interacting Massive Particles with the First FiveTower Data from the Cryogenic Dark Matter Search at the Soudan Underground Laboratory, Phys. Rev. Lett. 102 (2009) 011301 [arXiv:0802.3530].
 [92] J. Angle et al. [XENON Collaboration], First Results from the XENON10 Dark Matter Experiment at the Gran Sasso National Laboratory, Phys. Rev. Lett. 100 (2008) 021303 [arXiv:0706.0039].
 [93] W. B. Atwood et al. [LAT Collaboration], The Large Area Telescope on the Fermi Gamma-ray Space Telescope Mission, Astrophys. J. 697 (2009) 1071 [arXiv:0902.1089].
 [94] E. A. Baltz et al., Pre-launch estimates for GLAST sensitivity to Dark Matter annihilation signals, JCAP 07 (2008) 013 [arXiv:0806.2911].