Full one-loop corrections to the relic density in the MSSM: A few examples
N. Baro, F. Boudjema and A. Semenov
1) LAPTH, B.P.110, Annecy-le-Vieux F-74941, France
2) Joint Institute for Nuclear Research, JINR, 141980 Dubna, Russia
Abstract
We show the impact of the electroweak, and in one instance the QCD, one-loop corrections on the relic density of dark matter in the MSSM, which is provided by the lightest neutralino. We cover here some of the most important scenarii: annihilation into fermions for a bino-like neutralino, annihilation involving gauge bosons in the case of a mixed neutralino, the neutralino-stau coannihilation region and annihilation into a bottom quark pair. The corrections can be large and should be taken into account in view of the present and forthcoming increasing precision on the relic density measurements. Our calculations are made possible thanks to a newly developed automatic tool for the calculation at one-loop of any process in the MSSM. We have implemented a complete on-shell gauge invariant renormalisation scheme, with the possibility of switching to other schemes. In particular we will report on the impact of different renormalisation schemes for tan β.
LAPTH-1211/07
UMR 5108 du CNRS, associée à l’Université de Savoie.
1 Introduction
The last few years have witnessed spectacular advances in
cosmology and astrophysics confirming with an unprecedented level
of accuracy that ordinary matter is a minute part of what
constitutes the Universe at large. At the same time as the LHC
will be gathering data, a host of non-collider experiments will be
carried out in search of Dark Matter (DM), either direct or
indirect, as well as ever more precise determinations of
the cosmological parameters. In this new paradigm, the search for
DM at the LHC is high on the agenda, as is of course the search
for the Higgs. In fact these may be two facets of the New Physics
that provides a resolution to the hierarchy problem posed by the
Higgs in the Standard Model. The epitome of this New Physics
is supersymmetry, which among many advantages furnishes a good DM
candidate through the lightest neutralino. If future
colliders discover supersymmetric particles and probe their
properties, one could predict the dark matter density of the
Universe and constrain cosmology with the help of precision
data[1, 2] provided
by WMAP[3] and PLANCK[4]. It would be highly
exciting if the precision reconstruction of the relic
density from observables at the colliders did not match PLANCK's
determination; this would mean that the post-inflation era is
most probably not entirely radiation
dominated[5]. Already now the accuracy on
the relic dark matter density is already well constrained by WMAP and will
soon be improved by PLANCK. Such a level of
accuracy must be matched by precise theoretical calculations. From
the particle physics point of view this means precision
calculations of the annihilation and coannihilation cross
sections at least at one-loop. Quite sophisticated codes now
exist[6, 7] for the calculation of the relic
density; however, they are essentially based on tree-level cross
sections with the inclusion of some higher order effects,
essentially through some running couplings, masses or some
effective couplings. Some of these corrections[6]
have already been shown to be essential, like the corrections to
the Higgs couplings that can completely change the picture in the
so-called Higgs funnel (annihilation mainly through the
pseudoscalar Higgs). The use of other approximations needs
to be justified by complete higher order calculations which
contain more than just the effect of effective couplings. In a
word, the level of accuracy that will soon be reached requires
that one be ready to tackle in a general way a full calculation
at one-loop for any annihilation (or coannihilation) of the
neutralinos in supersymmetry, just as has been done for
the cross sections at the colliders.
The aim of this letter is to report on the progress towards
automatisation of these calculations and to show and discuss some
results on the one-loop corrected annihilation and coannihilation
cross sections of the LSP neutralino in the MSSM. In particular,
we study here some of the most important scenarii: i) annihilation
in the so-called bulk region into fermions for a bino-like
neutralino, ii) coannihilation involving the neutralino and the
lightest stau, iii) annihilation into a pair of massive
gauge bosons in the case of a mixed neutralino and iv)
annihilation into a bottom quark pair, where the pseudoscalar Higgs pole can play a role.
We concentrate on the electroweak corrections, although iv) is an
excuse to show how we handle some classes of QCD corrections.
The couple of very recent calculations of loop corrections to the
relic density tackled QCD corrections in extreme, though
highly interesting, scenarii, such as annihilation into top quarks
at threshold[8] and the nice study of
stop-neutralino coannihilation[9] (we do not list
here loop-induced annihilation processes[10, 11, 12, 13]; a very
recent paper discusses the QCD correction to annihilation into
a bottom quark pair in the funnel[14], however the bulk
of all contributions has been known for some time and implemented
in micrOMEGAs already). Some important non-perturbative
electroweak effects of the Coulomb-Sommerfeld type that occur for
TeV winos or higgsinos with a small relative mass splitting between the
lightest supersymmetric particle (LSP) and the next-to-lightest
supersymmetric particle (NLSP) have been
reported in[15, 16]. Let us
also note that, though not to be seen as radiative corrections to
the annihilation cross sections, the temperature corrections to
the relic density have been considered and found to be totally
negligible[17]. A
better simulation of the cosmological equation of state to derive
the effective number of relativistic degrees of freedom has been
carried out, giving corrections[18] compared to the usual treatment as
done in DarkSUSY[7] or micrOMEGAs[6].
2 General setup and details of the calculation
2.1 Setup of the automatic calculation of the cross sections
Even in the Standard Model, one-loop calculations of processes
involve hundreds of diagrams and a hand calculation is practically
impossible. Efficient automatic codes for any generic
process, which have now been exploited for many[19, 20] and even some[21, 22] processes, are almost unavoidable
for such calculations. For the electroweak theory these are the
GRACE-loop[23] code and the bundle of packages
based on FeynArts[24], FormCalc[25]
and LoopTools[26], which we will refer to as FFL for short.
With its much larger particle content, far greater
number of parameters and more complex structure, the need for an
automatic code at one-loop for the minimal supersymmetric standard
model is even more of a must. A few parts that are needed for such
a code have been developed based on an extension of[27] but, as far as we know, no complete code exists
or is, at least publicly, available. GRACE-susy[28] is now also being developed at
one-loop and many results exist[29]. One of the
main difficulties that has to be tackled is the implementation of
the model file, since this requires that one enter the thousands
of vertices that define the Feynman rules. On the theory side a
proper renormalisation scheme needs to be set up, which then means
extending many of these rules to include counterterms. Once this
is done one can just use, or hope to use, the machinery developed
for the Standard Model, in particular the symbolic manipulation part and
most importantly the loop integral routines, including tensor
reduction algorithms or any other efficient set of basis integrals.
The results we will report are based on the development of a new automatic
tool that uses and adapts modules, many of which, but not all, are
part of other codes like FFL. This is the package SloopS, whose main components and architecture we briefly sketch.
In this application we combine LANHEP[30] (originally part of the package CompHEP[31]) with the FFL bundle, but with an extended and adapted LoopTools[11]. LANHEP is a very powerful routine that automatically generates all the sets of Feynman rules of a given model, the latter being defined in a simple and compact format very similar to the canonical coordinate representation. Use of multiplets and the superpotential is built-in to minimize human error. The ghost Lagrangian is derived directly from the BRST transformations. The LANHEP module also allows one to shift fields and parameters and thus generates counterterms most efficiently. Understandably, the LANHEP output file must be in the format of the model file of the code it is interfaced with. In the case of FeynArts both the generic (Lorentz structure) and classes (particle content) files had to be given. Moreover, because we use a non-linear gauge-fixing condition[23], see below, the FeynArts default generic file had to be extended.
2.2 Renormalisation and renormalisation schemes
In the last half decade there has been an
upsurge and flurry of activity constraining models of
supersymmetry and other New Physics, with the limit on the relic
density delimiting most of the parameter space of these models.
All these investigations are based on tree-level estimates of the relic density, sometimes with
improved effective couplings. Only
in the last few months have some
investigations[32], within mSUGRA, added a
theoretical error estimate of the same
order as the current experimental error. In these analyses, based
on renormalisation group running, a substantial uncertainty is due
to the impact of the running
itself[1, 33]. Even if the
weak scale spectrum is known, loop corrections to the cross
sections are needed. In fact the precision one-loop calculations
we are carrying out will be most useful when confronting a measurement
of the relic density once the microscopic properties of dark
matter have been pinned down at the colliders and in
direct/indirect detection. Henceforth we rely on the physical masses of the SUSY particles, with the addition of some
physical observables, to fully reconstruct the model. We
therefore work, as far as possible, within an on-shell scheme
generalising what is done for the electroweak standard model.
i) The Standard Model parameters: the fermion masses as well
as the masses of the W and the Z are taken as input physical
masses. The electric charge is defined in the Thomson limit, see
for example[23]. Because we are calculating
corrections to processes at a scale of the order of the neutralino masses, the
effect of the running electromagnetic coupling due to the light
fermion masses will, alone, rescale the tree-level cross section.
The light quark (effective) masses are chosen such as to reproduce
the value of the running coupling including the light
fermion contributions. For the input masses see
the last papers of Ref. [19], with the exception of
the top quark mass. We will keep this rescaling in mind.
This effect can be reabsorbed by using a scheme where the
effective coupling is used as
input.
ii) The Higgs sector: The Higgs sector is conceptually the
trickiest. First we take the pseudoscalar Higgs mass as an
input parameter and require vanishing tadpoles. The extraction and
definition of the ubiquitous tan β, which at tree-level is
identified as the ratio of the vacuum expectation values of the two Higgs doublets, is
the tricky part. Most schemes define the tan β counterterm at
one-loop from a non-physical quantity, such as a
transition two-point function. It has become
customary to take a definition by only taking
into account the “universal” ultraviolet part of such
quantities, leaving out all finite parts. These prescriptions are
however not gauge invariant, see for
example[34]. Moreover the “universal”
part is only universal in the usual linear gauge. With the
non-linear gauge fixing we implement, see
section 2.3, our results would not be gauge invariant
and one has to be very careful with the Ward identities. We leave
this important issue to a forthcoming
paper[35]. Nonetheless, to conform with this
widespread general usage, we also implement such a
scheme, defined from a physical quantity, to be discussed shortly,
which reproduces the usual counterterms defined from other
quantities in the linear gauge. As is known, the other Higgs masses
(the lightest CP-even, the
heaviest CP-even and the charged one) receive
corrections that can be very important. To be able to stick with
an on-shell definition, and in order to weigh the effect of the
scheme dependence, we also define two other schemes. One is
based on the use of the partial width of the pseudoscalar Higgs into τ pairs, from
which the QED corrections have been extracted. For the third one, we
take the mass of the heaviest CP-even Higgs as an input parameter and trade it for tan β,
hence losing one prediction. This
scheme is also used in[29]. With
tan β fixed, we can turn to the other sectors.
iii) Neutralinos and charginos: For the neutralino and
chargino sector, we implement an on-shell scheme taking as input
parameters the masses of the two charginos (these define the
counterterms to the SU(2) gaugino (wino) mass and
to the higgsino parameter) and the mass of the
LSP (which completes the extraction of the U(1)
gaugino (bino) mass). The other neutralino masses
receive corrections to their tree-level value. Obtaining finite
corrections for the masses and decays is a non-trivial test of the
procedure. Here our implementation is quite similar to the one
adopted in[36].
iv) Sfermions: For the slepton sector we use as input parameters the physical
masses of the two charged sleptons, which in the case of no mixing
define the slepton soft-breaking masses, giving a
correction to the sneutrino mass at one-loop. In the case of mixing one needs
to fix the counterterm to the trilinear coupling. The best option
would have been to define this from a decay. In the present letter we take a much simpler
prescription: we solve the system by taking as input all three
slepton masses.
For the squark sector, for each generation three physical masses
are needed as input to constrain the soft-breaking parameters. In the case of mixing, the simplest prescription for the
counterterms to the trilinear couplings derives
from two conditions on the renormalised mixed two-point functions,
as is done in[37]. Our plan is to replace
these conditions by an on-shell input, such as a decay of the heavy
squark to the lighter one and a Z, to conform with a fully
on-shell scheme and study further the scheme dependence.
Wave function renormalisation is introduced so that the residue at
the pole of all physical particles is 1 and no mixing is left
between the different particles when on-shell. This applies to
all sectors. Dimensional reduction is used, as implemented in the
FFL bundle at one-loop through the equivalent constrained
differential renormalisation[38]. Renormalisation of the
strong coupling constant and the gluino is not an issue for the
examples we study here.
2.3 Non-linear gauge-fixing
We use a generalised non-linear gauge[39, 11] adapted to the minimal supersymmetric model. The gauge-fixing Lagrangian reads
(1)  
Unlike the other parts of the model, the gauge-fixing Lagrangian is written in terms of renormalised fields and parameters, and involves the Goldstone fields. We always work with Feynman-type gauge parameters, ξ = 1, so as to deal with the minimal set of loop tensor integrals. More details will be given elsewhere[35].
2.4 The different parts of the cross section
The one-loop amplitude consists of the virtual corrections and the counterterm contributions; their sum must be ultraviolet finite. On the other hand it can contain infrared divergences due to photon and gluon virtual exchange. These are regulated by a small photon or gluon mass. For the QCD corrections we study here, this implementation does not pose a problem. The photon and gluon mass regulator contribution contained in the virtual correction should cancel exactly against the one present in the photon and gluon final state radiation. The photonic (gluonic) contribution is in fact split into a soft part, where the photon (gluon) energy is less than some small cut-off, and a hard part above it. The former requires the photon/gluon mass regulator. We use the usual universal factorised form of the soft correction, with a simple coupling rescaling for the case of the gluon. The cut-off is taken small.
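The regulator bookkeeping described above can be sketched with a toy model. The coefficients and finite parts below are placeholders rather than actual matrix elements; the sketch only illustrates how the photon-mass logarithm cancels between the virtual and soft parts, and the cut-off logarithm between the soft and hard parts.

```python
import math

# Schematic illustration of infrared-regulator cancellations.
# A is a placeholder for the universal soft coefficient, 0.5 for a finite part.
ALPHA = 1.0 / 137.036
A = 2.0 * ALPHA / math.pi

def virtual_part(lam):
    """Virtual correction: diverges logarithmically as the photon mass lam -> 0."""
    return A * math.log(lam) + 0.5

def soft_part(lam, dE):
    """Soft emission (photon energy < dE): cancels log(lam), introduces log(dE)."""
    return -A * math.log(lam) + A * math.log(2.0 * dE)

def hard_part(dE, sqrt_s):
    """Hard emission (photon energy > dE): cancels log(dE)."""
    return -A * math.log(2.0 * dE) + A * math.log(sqrt_s)

def total(lam, dE, sqrt_s=100.0):
    """The sum is independent of both the mass regulator and the cut-off."""
    return virtual_part(lam) + soft_part(lam, dE) + hard_part(dE, sqrt_s)

print(total(1e-10, 1e-3), total(1e-25, 1e-6))
```

Varying the fictitious mass and the soft/hard cut-off over many orders of magnitude leaves the total unchanged, which is exactly the structure of the checks described in section 2.5.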
2.5 Checks on the calculation
i) For each process and for each set of parameters, we first check the ultraviolet finiteness of the results. This test applies to the whole set of virtual one-loop diagrams. It is performed by varying the ultraviolet regularisation parameter by seven orders of magnitude, with no change in the result. We content ourselves with double precision.
ii) The test of infrared finiteness is performed by including both the loop and the soft bremsstrahlung contributions and checking that there is no dependence on the fictitious photon or gluon mass.
iii) Gauge-parameter independence of the results is essential. It is checked through the set of seven gauge-fixing parameters defined in Eq. (1). The use of seven parameters is not redundant, as these parameters often check complementary sets of diagrams. It is important to note that in order to successfully achieve this test one should not include any width in the propagators. In fact our tree-level results do not include any width. Because of the parameters and the energies we consider, no width is required to regularise the cross sections.
iv) For the bremsstrahlung part we use the VEGAS adaptive Monte Carlo integration package provided in the FFL bundle and verify the result for the cross section against CompHEP[31]. We choose the soft/hard cut-off small enough and check the stability and independence of the result with respect to it.
2.6 Boltzmann equation, the small-velocity expansion
Having the collection of cross sections and the masses of the annihilating (and coannihilating) DM particles, we could have passed them to micrOMEGAs for a very precise determination of the relic density based on a careful treatment of the Boltzmann equation. However, to weigh the impact of the corrections on the relic density it is worth gaining insight through an approximation in going from the cross sections to the relic density, especially since we have found these approximations to be, after all, rather excellent for the cases we study, including coannihilations. Moreover, corrections to the cross sections could be incorporated in the case of non-thermal production. All cross sections σ_ij, where i, j label the annihilating and coannihilating DM particles, are expanded in terms of the relative velocity v. Away from poles and thresholds, it is a very good approximation to write σ_ij v = a_ij + b_ij v², keeping only the s-wave, a_ij, and p-wave, b_ij, coefficients. With T the temperature and x = m_χ/T, the thermal average gives
(2)   ⟨σ_ij v_ij⟩ = a_ij + 6 b_ij/x
with g_χ the neutralino spin degrees of freedom (sdof), a coannihilating particle of sdof g_i and mass m_i contributes an effective relative weight of
(3)   w_i = (g_i/g_eff) (1 + Δ_i)^(3/2) e^(−x Δ_i) ,   Δ_i = (m_i − m_χ)/m_χ
The total number of sdof is g_eff = Σ_i g_i (1 + Δ_i)^(3/2) e^(−x Δ_i). A good approximation for the relic density is obtained by carrying out a simple one-dimensional integration
(4)   Ω h² ≃ (1.07×10⁹ GeV⁻¹ / (√g* M_Pl)) (1/J) ,   J = ∫_{x_F}^∞ ⟨σ_eff v⟩/x² dx
The quantities needed to compute Eq. (4) are given above. x_F = m_χ/T_F corresponds to the freeze-out temperature and g* is the effective number of degrees of freedom at freeze-out; the latter is tabulated in micrOMEGAs and we read it from there. In the freeze-out approximation, x_F can be solved iteratively from
(5)   x_F = ln( 0.038 g_χ M_Pl m_χ ⟨σ_eff v⟩ / √(g* x_F) )
where the neutralino mass is expressed in GeV. The numerical solutions of the density equation, and hence of the freeze-out, suggest[40, 6] that x_F ≃ 25 is a very good choice in most, but not all, cases. Though we have verified that Eq. (5) converges quickly and agrees well with the result of micrOMEGAs, the results we give use x_F as extracted from micrOMEGAs. The loop-corrected cross sections should also impact the value of x_F, which is not exactly the same as the value extracted from the tree-level cross sections. However, the shift is marginal, though ultimately in a full computation it should be taken into account. Our results therefore use the same value of x_F at both tree and one-loop level. On the other hand, to derive the relic density we rely, in this letter, on Eq. (4). For the case of annihilation only, the latter simplifies to
(6)   Ω h² ≃ 1.07×10⁹ GeV⁻¹ x_F / (√g* M_Pl (a + 3b/x_F))
The weight of a channel (see the percentages we will refer to later) corresponds to its relative contribution to the thermally averaged effective cross section.
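The freeze-out machinery of this subsection can be sketched numerically. The sketch below uses the standard textbook constants (0.038 and 1.07×10⁹ GeV⁻¹) and illustrative inputs for the cross-section coefficients, the degrees of freedom and the mass; it is not the authors' micrOMEGAs-based treatment.

```python
import math

M_PL = 1.22e19      # Planck mass in GeV
G_STAR = 90.0       # effective relativistic dof at freeze-out (illustrative)
G_CHI = 2.0         # neutralino spin dof

def sigma_v(x, a, b):
    """Thermally averaged cross section: <sigma v> = a + 6 b / x, x = m/T."""
    return a + 6.0 * b / x

def freeze_out_x(m_chi, a, b, x0=25.0, iterations=50):
    """Iterate x_F = ln(0.038 g M_Pl m <sigma v> / sqrt(g* x_F))."""
    x = x0
    for _ in range(iterations):
        x = math.log(0.038 * G_CHI * M_PL * m_chi * sigma_v(x, a, b)
                     / math.sqrt(G_STAR * x))
    return x

def relic_density(m_chi, a, b):
    """Omega h^2 = 1.07e9 GeV^-1 x_F / (sqrt(g*) M_Pl (a + 3 b / x_F))."""
    x_f = freeze_out_x(m_chi, a, b)
    return 1.07e9 * x_f / (math.sqrt(G_STAR) * M_PL * (a + 3.0 * b / x_f))

# Illustrative pure s-wave cross section of ~1 pb (2.6e-9 GeV^-2), m = 100 GeV:
x_f = freeze_out_x(100.0, 2.6e-9, 0.0)
print(x_f, relic_density(100.0, 2.6e-9, 0.0))
```

With these inputs one obtains x_F in the low twenties and a relic density of order 0.1, illustrating both the weak (logarithmic) sensitivity of x_F and the inverse scaling of Ω h² with the effective cross section.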
2.7 Choosing points in the MSSM parameter space
Current limits on the relic density from WMAP and SDSS[3] give the range
(7) 
In this first exploratory study we thought it best to consider different scenarii without worrying too much about the absolute value of the derived relic density, in order to grasp the origin of the large corrections, if any. Our choice of scenarii was motivated by the physics issues, although it is biased towards the popular scenarii that emerge in mSUGRA with thermal production. Nonetheless our choice covers annihilation into fermions, gauge bosons and coannihilation. This said, apart from the annihilation into gauge bosons, the derived relic density is either within this range or not overly outside it. For the gauge bosons the motivation was to take a model that singles out the W⁺W⁻ and ZZ final-state channels. Moreover, since the impact of the radiative corrections can be large, there is not much sense in picking a model based on its agreement with the current value of the relic density on the basis of a tree-level calculation. This said, we have used micrOMEGAs as a guide, being careful about translations of effective couplings and input parameters. micrOMEGAs was also quite useful in justifying the approximations we use for deriving the relic density from the cross sections. We should also add that in this letter we do not apply the radiative corrections to all the subprocesses that can contribute to the relic density, but only to those channels and subprocesses that contribute significantly. When calculating the correction to the relic density we include the remaining channels at tree-level.
3 Annihilation of a bino LSP into charged leptons
The first example we take corresponds to the so-called bulk region, with a neutralino LSP which is mostly bino. The latter couples to the particles of largest hypercharge, the R-sleptons. Therefore annihilation is into charged leptons. Because of the Majorana nature of the LSP, there is no s-wave contribution in the case of massless fermions. In our case the contribution to the s-wave (at tree-level) is from the τ's. In the radiation-dominated standard scenario, to be consistent with the present value of the relic density, we take right sleptons as light as possible (but within the LEP limits) while all other particles (squarks, charginos, other neutralinos) are heavy. The masses of the charginos and of the LSP reconstruct the tree-level values of the gaugino and higgsino parameters, leading to a neutralino which is almost purely bino.
The other physical input masses reconstruct the tree-level slepton-sector parameters and the trilinear coupling. The contribution to the relic density is then, as expected, into leptons, with the proportions shown in Table 1. The difference between the three lepton channels is accounted for by the contribution of the s-wave of the τ final states, and very little by the fact that the lightest stau is slightly lighter than the lightest smuon and selectron.
( )   Tree   (one-loop corrections, three renormalisation schemes)
a  0.081  +38%  +35%  +15% 
b  3.858  +18%  +18%  +18% 
Ωh²   0.166   0.138   0.138   0.141
             −17%    −17%    −15%
Let us first comment on the p-wave contribution, which gives the bulk of the contribution to the relic density. It is tempting to parameterise the corrections. In fact, had we used the value of the gauge coupling not at low energy but at a scale of the order of the neutralino mass, the bulk of the correction would be absorbed. In the few other examples we have looked at concerning annihilation into leptons, we arrive at the same order of correction, see for example the corrections to the p-wave in the case of coannihilation, and even where there is some higgsino component. The other common trend is that the correction does not show any dependence on the scheme we choose when there is a large bino component. This is not the case for the s-wave, for which the schemes differ. The scheme dependence comes essentially from the Yukawa contribution, see Eq. (4.8) of Ref.[1]. The latter is also sensitive to the higgsino component of the neutralino, which is also affected by the change of scheme. The effect is more obvious in the case of scenario iii), see Table 4. Note however that even in the case of massless fermions there is a (though small) contribution to the s-wave due to hard photon radiation. Hard photon radiation in association with a light charged fermion pair is not subject, for the s-wave amplitude, to the known helicity suppression that operates when no photon is emitted[12]. Taking both the s- and p-wave contributions leads to the corrections on the relic density shown in Table 1. As discussed a few lines above, using the coupling at the higher scale reduces the correction to the level of a few percent.
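The absorption of the bulk of the correction into a running coupling can be checked with a two-line estimate: a tree-level annihilation cross section scales as α², so replacing the Thomson-limit value α(0) = 1/137.036 by a typical electroweak-scale value α(M_Z) ≈ 1/127.9 (a standard reference value; the exact scale and value used in the text are an assumption here) rescales it by roughly 15%, the same order as the corrections in Table 1.

```python
# A tree-level annihilation cross section scales as alpha^2, so switching the
# coupling from the Thomson limit to an electroweak-scale value rescales it by
# (alpha(Q)/alpha(0))^2 - 1.  Values below are standard reference numbers.
ALPHA_0 = 1.0 / 137.036    # Thomson-limit coupling
ALPHA_MZ = 1.0 / 127.9     # running coupling at the Z mass (typical value)

rescaling = (ALPHA_MZ / ALPHA_0) ** 2 - 1.0
print(f"relative change of a tree-level cross section: {rescaling:+.1%}")  # roughly +15%
```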
4 Neutralino-stau coannihilation
In this scenario the LSP is still the lightest neutralino, and we take it to be essentially bino, though with a small higgsino component. We consider a scenario where the NLSP is the lightest stau, its mass coming mainly from its right-handed component. The lightest smuon and selectron are given masses such that they are thermodynamically irrelevant. Apart from the squark and Higgs sectors, which are set in the TeV range, the parameters are the same as in the example in the bulk region. We therefore want to concentrate on coannihilation involving only the lightest stau. With a mass difference between the LSP and the NLSP which is only a small fraction of the LSP mass, stau-stau annihilation is quite efficient, with these channels accounting for as much as half of the total contribution, see Table 2. Neutralino-stau coannihilation takes up a quarter of the total. Neutralino annihilation makes up most of the rest. Our estimate of the relic density is based on Eq. (4).
()  Tree  

a  0.002  +200%  +200%  +200% 
b  1.717  +18%  +19%  +19% 
()  Tree  
a  4.342  +9%  +9%  +9% 
b  1.116  +9%  +8%  +9% 
()  Tree  
a  1.093  +21%  +21%  +21% 
b  0.214  +19%  +19%  +19% 
()  Tree  
a  43.345  +17%  +17%  +17% 
b  14.445  +13%  +13%  +14% 
c  0  0.994  0.994  0.994 
Ωh²   0.128   0.117   0.117   0.117
             −9%     −9%     −9%
Compared to the bino case in the bulk region, where this
channel accounted for the totality of the relic density, here it
only makes up a small fraction. Nonetheless the effects of radiative
corrections on this channel are very similar to what we found in
the scenario of section 3. One may be misled into
interpreting the 200% relative correction to the s-wave as a large correction
to the relic density. It is a relatively large correction to the
s-wave contribution, but in absolute terms it is
totally negligible compared to the correction brought about by the
p-wave contribution, at the cross section level as well as after
taking the thermal average, notwithstanding that the whole channel
in the coannihilation region has little weight, which further
dilutes the effect of such a large relative correction. As pointed
out in the previous section, this large relative correction is
due to the smallness of the s-wave,
which is offset by the hard photon emission that now allows for
an s-wave contribution[12]. As discussed in the
first reference in[12], the relative importance of
hard radiation falls quickly as the mass of the intermediate
slepton increases. This explains why the relative effect is more
prominent in the coannihilation region.
Before getting into the details of each of the main contributing channels that involve coannihilation, let us point out some of their common features. The bulk of the contribution now comes from the s-wave, especially after taking into account thermal averaging. Another common feature is that the scheme dependence is hardly noticeable. Moreover, the corrections to the s- and p-wave are, within a small margin, the same.
For the neutralino-stau coannihilation into a τ and a photon, the
correction to the s-wave is within only a small margin of the
correction to the p-wave. Here the electroweak correction
is about half the correction we find
in all other channels. The reason is the following: the effective
coupling for the emitted photon should still be taken at low
momentum, and therefore, since there is only one neutralino
coupling, only the latter should effectively be taken at the high scale. Proper
use of the effective couplings here absorbs much of the
correction. For the other
final states the use of the effective coupling would likewise leave only a moderate
residual correction to the s-wave after absorbing the running of
the coupling.
The radiative correction to same-sign stau annihilation reveals a quite interesting feature, as can be
seen from Fig. 1, which shows the
cross section as a function of the relative velocity (or rather
its square). It is from this velocity dependence that we
usually extract the values of the coefficients of the s- and
p-wave contributions, at both tree-level and one-loop, that
we need to calculate the relic density. The figure extends to
rather large velocities, while we could have contented
ourselves with smaller values, considering the
typical value that one obtains for the freeze-out
temperature. Even so, the figure
shows that in this case a fit to the tree-level cross section of
the form a + b v² works quite well. For the one-loop correction a
polynomial fit does not do for low enough velocities. There is a
large negative correction as v → 0. This correction is in
fact very easy to understand. It is the perturbative one-loop
manifestation of the non-relativistic Coulomb-Sommerfeld
effect[41].
With the tree-level cross section denoted by σ₀ and the relative velocity by v, the one-loop perturbative cross section σ₁ for the same-sign stau annihilation is such that
(8)   σ₁ = σ₀ (1 − πα/v)
We thus expect for the one-loop cross section,
(9)   σ₁ v = a (1 − πα/v) + b v²
Fig. 1 reflects this repulsive behaviour perfectly. In fact we made a fit to the one-loop result with a function of the form a(1 − c/v) + b v², first with c fixed as given in Eq. (9) and then with c not constrained. The two fits are practically indistinguishable in Fig. 1. Our automatic calculation code captures this effect perfectly.
One may ask how to deal with the 1/v singularity. In fact, when calculating the relic density, the one-loop singularity at the level of the cross section is tamed after the thermal average, see also Eq. (2). In the end its contribution to the relic density is small; in other words, the non-zero temperature of the problem provides a cut-off. One can also ask whether the one-loop result for the Coulomb-Sommerfeld effect is sufficient. As seen, the QED correction is of order πα/v with v typical of the freeze-out temperature. Therefore in our case a one-loop treatment seems to be sufficient, especially since the same-sign stau channel is not the most dominant one. This said, these non-relativistic QED threshold corrections can be resummed to all orders. This resummation, as originally performed by Sommerfeld[41], has been known for quite a long time in quantum mechanics, see[42] for a textbook treatment, and amounts to solving the Schrödinger equation in the Coulomb potential. With X = 2πα/v for the same-sign stau annihilation, the s-wave factor resums to
(10)   S = X / (e^X − 1)
One might question the validity of Eq. (10) in our case, where the stau is not stable. A finite decay width can of course act as a cut-off for the corrections, see the case of W pair production[43] or slepton pair production at threshold[44]. In our case width effects are of no importance, since the characteristic time of the Coulomb interaction, typical of velocities at freeze-out, is much smaller than the decay time. For smaller mass splittings the width effect is even more negligible, whereas for larger splittings the channel would be thermodynamically irrelevant. Therefore in our particular case the resummation can be taken from the old Sommerfeld result. Nonetheless, especially after thermal averaging, in our case this type of QED correction is well approximated by the one-loop approximation. (In a situation, which is not the case here, where the same-sign and opposite-sign stau channels contributed equally at tree-level, the Coulomb-Sommerfeld correction would cancel after adding the two channels; the correction in the opposite-sign channel would be attractive and given by changing the sign of X in Eq. (10). The non-perturbative effects of Coulomb-Sommerfeld-like corrections that might occur when coloured states are involved[45, 9] need more care: because of the strong QCD coupling, bound-state effects might be relevant. The effect of the latter is even more important in models with TeV and multi-TeV dark matter almost degenerate with a charged component[16, 46]. The non-perturbative effects on indirect detection in these models are even more dramatic[15, 46].)
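The repulsive Sommerfeld factor and its one-loop expansion can be compared numerically. The convention X = 2πα/v (with v the relative velocity), whose first-order expansion gives the 1 − πα/v behaviour of the perturbative result, is the standard quantum-mechanics convention and is assumed here.

```python
import math

ALPHA = 1.0 / 137.036  # QED coupling (Thomson limit, as used for the emitted photon)

def sommerfeld_repulsive(v):
    """All-order Coulomb-Sommerfeld factor for a repulsive same-sign pair:
    S = X / (exp(X) - 1), with X = 2 pi alpha / v and v the relative velocity."""
    x = 2.0 * math.pi * ALPHA / v
    return x / math.expm1(x)

def one_loop_factor(v):
    """First-order expansion of S: 1 - pi alpha / v."""
    return 1.0 - math.pi * ALPHA / v

# At moderate velocities the one-loop term reproduces the resummed factor well...
for v in (0.5, 0.3, 0.1):
    print(v, sommerfeld_repulsive(v), one_loop_factor(v))

# ...while as v -> 0 the one-loop result turns (unphysically) negative and the
# resummation is needed: S stays positive and is exponentially suppressed.
print(sommerfeld_repulsive(0.005), one_loop_factor(0.005))
```

This makes explicit why a polynomial fit fails at small v and why, for velocities typical of freeze-out, the one-loop treatment is nevertheless adequate.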
Taking now all the effects and contributions in our specific example, we find an overall correction of to the relic density, with a corrected value of . As we can see, this value would not have been obtained with a naive overall rescaling of the effective couplings. Nonetheless, apart from some correction, most of the effect can be explained in terms of a proper usage of effective couplings and the Coulomb effect. In fact in the total contribution the Coulomb effect is diluted and changes the result for the relic density by only about .
5 Annihilation of a mixed gaugino-higgsino LSP into vector bosons
Having studied annihilation into fermions, annihilation into the weak vector bosons is quite interesting. In the context of mSUGRA this occurs for example in the so-called focus-point region. In order not to mix issues, we do not consider in this letter a scenario where the LSP neutralino is dominantly higgsino or wino, thereby avoiding a situation where co-annihilation becomes relevant. We seek a scenario with a neutralino whose largest component is still bino, but with a substantial higgsino and wino component. In our example one has with GeV. All other masses (outside the chargino-neutralino sector) are taken to be very heavy, at TeV. This is also to avoid contamination from annihilation into fermions. It would however be worthwhile to study the impact of the sfermions on the radiative corrections. In this case we have not made the extra effort of searching for a parameter set with a relic density within the WMAP range. Here annihilation into W+W- and ZZ accounts for , see Table 3, with a few other channels below each. These involve into light quarks or , which we take into account only at tree-level.
Table 3:

                     Tree     corrections in the 3 schemes
 W+W-        a       3.099    -27%    -2%    +44%
             b       5.961    -32%    -7%    +38%
 ZZ          a       0.159    -22%    +3%    +50%
             b       0.787    -30%    -6%    +39%
 Omega h^2           0.053    0.068   0.054  0.039
 change                       +28%    +2%    -26%
First of all, we see that the corrections affecting annihilation into W+W- and ZZ are about the same in the 3 schemes. Moreover the corrections to the s-wave and p-wave are of the same order, see also Fig. 2 for the dependence of the cross section and the extraction of the a- and b-coefficients. However this is not the most important conclusion. The most important lesson is that there is a very large scheme dependence. In another investigation concerning Higgs decays, we had noticed, as was also pointed out in [34] with a similar scheme, that the scheme can lead to very large corrections. However in many instances, like what we saw in the case of bino annihilation or co-annihilation, the is screened. Unfortunately in this mixed neutralino scenario, the dependence can be potentially enhanced by , from the renormalisation of for example. This needs further investigation. In this model, already at tree-level, we had noticed that the cross sections were very sensitive to small changes in the underlying parameters. Apart from the scheme dependence, the corrections in the scheme look modest, especially for the dominant s-wave contribution. However one should not forget that these small corrections are obtained with the use of in the Thomson limit. Switching to a scale of order , the corrections are large, of order . Therefore our conclusion is that in such a scenario there are genuine large corrections in all three schemes we have considered. This also confirms the study of the chargino-neutralino system at one-loop in [47], which showed that though the corrections to the masses are modest, there can be a large (of order ) change in the gaugino-higgsino composition and hence a large impact on cross sections.
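The a and b numbers quoted in Table 3 correspond to the small-velocity expansion of the annihilation rate, sigma*v = a + b*v^2 (s-wave and p-wave). A minimal extraction sketch; the sampled cross section below is a toy stand-in, reusing the tree-level W+W- values of Table 3 purely as placeholders:

```python
def extract_ab(sigma_v, v1=0.05, v2=0.15):
    # Fit sigma*v = a + b*v^2 through two small-velocity samples:
    # a is the s-wave coefficient, b the p-wave one.
    s1, s2 = sigma_v(v1), sigma_v(v2)
    b = (s2 - s1) / (v2 ** 2 - v1 ** 2)
    a = s1 - b * v1 ** 2
    return a, b

# Illustrative toy cross section with a = 3.099 and b = 5.961
# (numbers borrowed from Table 3 as placeholders, arbitrary units).
a, b = extract_ab(lambda v: 3.099 + 5.961 * v * v)
print(a, b)
```

In practice the coefficients are extracted from the velocity dependence of the full cross section, as shown in Fig. 2; the two-point fit above is the simplest version of that procedure.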
6 Annihilation of neutralinos to a bottom quark pair and a not too heavy pseudoscalar Higgs
We expose this last example for illustrative purposes. Indeed, the one-loop perturbative treatment of the Higgs coupling to a bottom pair using the bottom pole mass, which we have taken throughout to be GeV, is far from describing the bulk of the radiative QCD corrections, which as we know need to be resummed, both for the running effective mass purely from QCD and for the so-called effects, the latter being more important for high tan β. These effects are already taken into account in micrOMEGAs [6] for example. The purpose here is therefore to see whether there are other possible effects, though smaller, that are captured in a complete one-loop calculation. Ideally one would like to subtract the known universal one-loop QCD corrections from the full one-loop QCD corrections that can occur, for example from box diagrams, for , outside the Higgs resonance. Note that because of the Majorana nature of the LSP, at the smallest velocities the most important Higgs resonance is the pseudoscalar Higgs, . In any case, for a precise calculation of the relic density, non-resonant contributions should be taken into account, as the thermal average is to be applied and would bring some smearing. Therefore we concentrate here on , where the scalar resonance is not negligible. Again, as we argued for the previous example, we have to rely here also on some higgsino component. The composition of the LSP is with GeV. At tree-level the system is (re)constructed from a set GeV and . Compared to the previous case, where all other masses were around TeV, to bring out the effect of the 's in the final state we first lower such that GeV. The masses of all sfermions are around GeV for the dominant and GeV for the dominant . The mass of the gluino is also lowered to GeV.
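The size of the running-mass effect invoked above can be gauged with a one-loop sketch of the QCD running of the bottom mass up to a scale of order the pseudoscalar mass. The inputs alpha_s(MZ) = 0.118 and m_b(m_b) = 4.23 GeV are illustrative assumptions, not the parameters used in the text:

```python
import math

def alpha_s(Q, alpha_mz=0.118, mz=91.19, nf=5):
    # One-loop running strong coupling, b0 = 11 - 2*nf/3.
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha_mz / (1.0 + b0 * alpha_mz / (2.0 * math.pi) * math.log(Q / mz))

def mb_run(Q, mb_mb=4.23, nf=5):
    # One-loop QCD running bottom mass:
    # m_b(Q) = m_b(m_b) * [alpha_s(Q)/alpha_s(m_b)]^(12/(33 - 2*nf)).
    c = 12.0 / (33.0 - 2.0 * nf)
    return mb_mb * (alpha_s(Q) / alpha_s(mb_mb)) ** c

# At a scale of a few hundred GeV the running mass is substantially
# below the pole mass, which is why the pole-mass one-loop treatment
# misses the bulk of the QCD corrections.
print(mb_run(300.0))
```

This order-30% reduction of the effective bottom coupling at the annihilation scale is precisely the large-log effect that must be resummed rather than treated at strict one-loop.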
At tree-level with GeV, the dominant modes are annihilation into , followed by (about smaller). and are at about the same level, but a factor smaller than the channel. We show (see Table 4) the and channels in order to make a comparison with the previous case of a mixed bino with a substantial higgsino component. Here the scheme dependence is considerably reduced, especially between the and schemes. Note also that in the case of the channel there is a discrepancy between the scheme and the other two schemes compared to the almost pure bino case. This again is due to the larger contribution of the higgsino-dependent part, naturally in the exchange, which is not present in the pure bino case, but also in the exchange.
Let us look at what we obtain for the channel. The electroweak corrections do show some scheme dependence. Compared to the electroweak corrections in the scheme of , the QCD corrections are larger by an order of magnitude; they amount to about in both the s-wave and the p-wave. If one assumes that most of these corrections arise from the vertex, then we know that there are large logarithms resulting from the anomalous dimension of the pseudoscalar (and, for that matter, the scalar) coupling. Its one-loop part can be found in [48].
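For orientation, the large logarithm contained in this one-loop term, when the bottom Yukawa coupling is expressed through the pole mass, has the standard chiral-limit form (quoted here only as a sketch; the non-logarithmic constant depends on conventions and on the scalar versus pseudoscalar case at subleading order in the quark mass):

```latex
\Delta_{\rm QCD}\;\simeq\;\frac{\alpha_s}{\pi}\left(3-2\ln\frac{M_A^2}{m_b^2}\right),
```

and it is this logarithm that is absorbed into the running mass evaluated at the scale of the resonance.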
There is also an important SUSY QCD correction, termed [49], see section B.4.2 in the second paper of [6]. In this scenario, adding these two corrections amounts to about . Subtracting these from the correction we calculate for the s-wave leaves us with about of QCD corrections. The QCD corrections from the anomalous dimension and the effect were extracted for the known effect to . However, since at , for the s-wave cross section, the neutralino pair builds up a pseudoscalar state because of its Majorana nature, the same corrections should affect the coefficient even if the contribution from the pseudoscalar Higgs is negligible.
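The tan β-enhanced SUSY QCD correction referred to here is usually encoded in the Δ_b factor of [49], which resums into an effective bottom Yukawa coupling. A sketch of the leading gluino-sbottom contribution, with illustrative inputs that are not the spectrum of this scenario:

```python
import math

def I3(a, b, c):
    # Standard three-mass loop function (arguments are masses squared):
    # I(a,b,c) = [a b ln(a/b) + b c ln(b/c) + c a ln(c/a)] / ((a-b)(b-c)(a-c)).
    # Degenerate limit: I(m^2, m^2, m^2) = 1/(2 m^2).
    return (a * b * math.log(a / b) + b * c * math.log(b / c)
            + c * a * math.log(c / a)) / ((a - b) * (b - c) * (a - c))

def delta_b(alpha_s, mu, m_gluino, tan_beta, m_sb1, m_sb2):
    # Leading gluino-sbottom piece:
    # Delta_b = (2 alpha_s / 3 pi) * mu * m_gluino * tan(beta)
    #           * I(m_sb1^2, m_sb2^2, m_gluino^2).
    return (2.0 * alpha_s / (3.0 * math.pi) * mu * m_gluino * tan_beta
            * I3(m_sb1 ** 2, m_sb2 ** 2, m_gluino ** 2))

# Illustrative inputs in GeV (hypothetical, not the scenario of the text).
db = delta_b(alpha_s=0.1, mu=500.0, m_gluino=700.0, tan_beta=50.0,
             m_sb1=600.0, m_sb2=650.0)
# The resummed b-quark coupling is rescaled by roughly 1/(1 + Delta_b).
print(db, 1.0 / (1.0 + db))
```

For large tan β this yields Δ_b of order a few tens of percent, so the resummed 1/(1 + Δ_b) rescaling of the coupling is a large effect, consistent with the need to go beyond a strict one-loop treatment for these final states.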
7 Conclusions
We have performed the first electroweak corrections to some important processes relevant for the relic density of neutralinos in supersymmetry. This has become possible thanks to an automated code for the calculation of loop corrections in the MSSM that will allow us to perform, with the same tools and within the same framework (scheme dependence, ...), analyses at one-loop both at the colliders and for dark matter. Our findings suggest that in the case of a dominantly bino neutralino, a large part of the correction can be accounted for through an effective electromagnetic coupling at the scale of the neutralino mass. Even so, complete one-loop corrections would be needed to match the foreseen precision of PLANCK. The corrections to the relic density are not very sensitive to the way is renormalised. In the case of co-annihilation of a bino and a stau, the conclusion is similar, but one has to be wary of possible Coulomb-Sommerfeld corrections. For a neutralino LSP which is strongly mixed, the corrections are large and the scheme dependence is not negligible at all. More investigations of such scenarii should be conducted. Some QCD (and SUSY QCD) corrections affecting final-state quarks in the case of neutralino annihilation require that one goes beyond one-loop. Some of these corrections have been identified and already implemented in a code such as micrOMEGAs. Apart from these corrections, there remain some additional one-loop corrections that should be taken into account. Before generalising these conclusions, more work is needed. However, the tools exist. The next step is to interface our code for the loop calculations with a dedicated relic density calculator, avoiding double counting of some of the one-loop corrections implemented as effective operators in the relic density calculator. Work in this direction has already begun, based on micrOMEGAs as the relic density calculator.
Acknowledgments We would like to thank Sacha Pukhov for many enlightening discussions, especially concerning the future implementation of the loop corrections into micrOMEGAs. David Temes was invaluable in the first stages of the project and gets all our thanks. We also owe much to our friends of the Minami-Tateya group and the GRACE-SUSY code; in particular we learned much from Masaaki Kuroda and were able to cross-check some one-loop results pertaining to decays of supersymmetric particles. As usual, we thank Geneviève Bélanger for advice and fruitful discussions. This work is supported in part by GDRI-ACPP of the CNRS (France). The work of A.S. is supported by grants of the Russian Federal Agency of Science NS-1685.2003.2 and RFBR 04-02-17448. This work is also part of the French ANR project ToolsDMColl.
References
 [1] B. C. Allanach, G. Bélanger, F. Boudjema and A. Pukhov, JHEP 0412 (2004) 020 [arXiv:hep-ph/0410091].
 [2] E. A. Baltz, M. Battaglia, M. E. Peskin and T. Wizansky, Phys. Rev. D 74 (2006) 103521 [arXiv:hep-ph/0602187].
 [3] The WMAP Collaboration, C. L. Bennett et al., "First-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Preliminary Maps and Basic Results," Astrophys. J. Suppl. 148 (2003) 1 [arXiv:astro-ph/0302207]; D. N. Spergel et al., "First-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Determination of Cosmological Parameters," Astrophys. J. Suppl. 148 (2003) 175 [arXiv:astro-ph/0302209].
 [4] The Planck Homepage, http://www.rssd.esa.int/index.php?project=PLANCK.
 [5] P. Salati, Phys. Lett. B 571 (2003) 121 [arXiv:astro-ph/0207396]; S. Profumo and P. Ullio, JCAP 0311 (2003) 006 [arXiv:hep-ph/0309220]; F. Rosati, Phys. Lett. B 570 (2003) 5 [arXiv:hep-ph/0302159]; C. Pallis, JCAP 0510 (2005) 015 [arXiv:hep-ph/0503080]; G. B. Gelmini and P. Gondolo, Phys. Rev. D 74 (2006) 023510 [arXiv:hep-ph/0602230]; D. J. H. Chung, L. L. Everett, K. Kong and K. T. Matchev, arXiv:0706.2375 [hep-ph]; M. Drees, H. Iminniyaz and M. Kakizaki, arXiv:0704.1590 [hep-ph].
 [6] G. Bélanger, F. Boudjema, A. Pukhov and A. Semenov, Comput. Phys. Commun. 149 (2002) 103 [arXiv:hep-ph/0112278]; ibid. Comput. Phys. Commun. 176 (2007) 367 [arXiv:hep-ph/0607059] and Comput. Phys. Commun. 174 (2006) 577 [arXiv:hep-ph/0405253]; http://wwwlapp.in2p3.fr/lapth/micromegas.
 [7] DarkSUSY: P. Gondolo et al., JCAP 0407 (2004) 008 [arXiv:astro-ph/0406204]; http://www.physto.se/edsjo/darksusy/.
 [8] T. Moroi, Y. Sumino and A. Yotsuyanagi, Phys. Rev. D 74 (2006) 015016 [arXiv:hep-ph/0605181].
 [9] A. Freitas, Phys. Lett. B 652 (2007) 280 [arXiv:0705.4027 [hep-ph]].
 [10] L. Bergström and P. Ullio, Nucl. Phys. B 504 (1997) 27 [arXiv:hep-ph/9706232]; Z. Bern, P. Gondolo and M. Perelstein, Phys. Lett. B 411 (1997) 86 [arXiv:hep-ph/9706538].
 [11] F. Boudjema, A. Semenov and D. Temes, Phys. Rev. D 72 (2005) 055024 [arXiv:hep-ph/0507127].
 [12] L. Bergström, Phys. Lett. B 225 (1989) 372; R. Flores, K. A. Olive and S. Rudaz, Phys. Lett. B 232 (1989) 377; M. Drees, G. Jungman, M. Kamionkowski and M. M. Nojiri, Phys. Rev. D 49 (1994) 636 [arXiv:hep-ph/9306325].
 [13] V. Barger, W. Y. Keung, H. E. Logan, G. Shaughnessy and A. Tregre, Phys. Lett. B 633 (2006) 98 [arXiv:hep-ph/0510257].
 [14] B. Herrmann and M. Klasen, arXiv:0709.0043 [hep-ph]; B. Herrmann, arXiv:0709.2232 [hep-ph].
 [15] J. Hisano, S. Matsumoto, M. M. Nojiri and O. Saito, Phys. Rev. D 71 (2005) 063528 [arXiv:hep-ph/0412403].
 [16] J. Hisano, S. Matsumoto, M. Nagai, O. Saito and M. Senami, Phys. Lett. B 646 (2007) 34 [arXiv:hep-ph/0610249].
 [17] T. Wizansky, Phys. Rev. D 74 (2006) 065007 [arXiv:hep-ph/0605179].
 [18] M. Hindmarsh and O. Philipsen, Phys. Rev. D 71 (2005) 087302 [arXiv:hep-ph/0501232].
 [19] G. Bélanger, F. Boudjema, J. Fujimoto, T. Ishikawa, T. Kaneko, K. Kato and Y. Shimizu, Phys. Lett. B 559 (2003) 252 [arXiv:hep-ph/0212261]; G. Bélanger, F. Boudjema, J. Fujimoto, T. Ishikawa, T. Kaneko, K. Kato, Y. Shimizu and Y. Yasui, Phys. Lett. B 571 (2003) 163 [arXiv:hep-ph/0307029]; G. Bélanger, F. Boudjema, J. Fujimoto, T. Ishikawa, T. Kaneko, Y. Kurihara, K. Kato and Y. Shimizu, Phys. Lett. B 576 (2003) 152 [arXiv:hep-ph/0309010]; F. Boudjema, J. Fujimoto, T. Ishikawa, T. Kaneko, K. Kato, Y. Kurihara, Y. Shimizu and Y. Yasui, Phys. Lett. B 600 (2004) 65 [arXiv:hep-ph/0407065]; F. Boudjema, J. Fujimoto, T. Ishikawa, T. Kaneko, K. Kato, Y. Kurihara, Y. Shimizu, S. Yamashita and Y. Yasui, Nucl. Instrum. Meth. A 534 (2004) 334 [arXiv:hep-ph/0404098].
 [20] A. Denner, S. Dittmaier, M. Roth and M. M. Weber, Phys. Lett. B 575 (2003) 290 [arXiv:hep-ph/0307193]; ibid. Phys. Lett. B 560 (2003) 196 [arXiv:hep-ph/0301189]; You Yu, Ma Wen-Gan, Chen Hui, Zhang Ren-You, Sun Yan-Bin and Hou Hong-Sheng, Phys. Lett. B 571 (2003) 85 [arXiv:hep-ph/0306036]; Zhang Ren-You, Ma Wen-Gan, Chen Hui, Sun Yan-Bin and Hou Hong-Sheng, Phys. Lett. B 578 (2004) 349 [arXiv:hep-ph/0308203].
 [21] F. Boudjema, J. Fujimoto, T. Ishikawa, T. Kaneko, K. Kato, Y. Kurihara and Y. Shimizu, Nucl. Phys. Proc. Suppl. 135 (2004) 323 [arXiv:hep-ph/0407079]; F. Boudjema, J. Fujimoto, T. Ishikawa, T. Kaneko, K. Kato, Y. Kurihara, Y. Shimizu and Y. Yasui, arXiv:hep-ph/0510184.
 [22] A. Denner, S. Dittmaier, M. Roth and L. H. Wieders, Phys. Lett. B 612 (2005) 223 [arXiv:hep-ph/0502063]; ibid. arXiv:hep-ph/0505042.
 [23] G. Bélanger, F. Boudjema, J. Fujimoto, T. Ishikawa, T. Kaneko, K. Kato and Y. Shimizu, Phys. Rept. 430 (2006) 117 [arXiv:hep-ph/0308080].
 [24] J. Küblbeck, M. Böhm and A. Denner, Comput. Phys. Commun. 60 (1990) 165; H. Eck and J. Küblbeck, Guide to FeynArts 1.0 (Würzburg, 1992); H. Eck, Guide to FeynArts 2.0 (Würzburg, 1995); T. Hahn, Comput. Phys. Commun. 140 (2001) 418 [arXiv:hep-ph/0012260].
 [25] T. Hahn and M. Pérez-Victoria, Comput. Phys. Commun. 118 (1999) 153 [arXiv:hep-ph/9807565]; T. Hahn, arXiv:hep-ph/0406288 and arXiv:hep-ph/0506201.
 [26] T. Hahn, LoopTools, http://www.feynarts.de/looptools/.
 [27] T. Hahn and C. Schappacher, Comput. Phys. Commun. 143 (2002) 54 [arXiv:hep-ph/0105349].
 [28] J. Fujimoto et al., Comput. Phys. Commun. 153 (2003) 106 [arXiv:hep-ph/0208036].
 [29] J. Fujimoto, T. Ishikawa, Y. Kurihara, M. Jimbo, T. Kon and M. Kuroda, Phys. Rev. D 75 (2007) 113002.
 [30] A. Semenov, "LanHEP — a package for automatic generation of Feynman rules. User's manual," arXiv:hep-ph/9608488; A. Semenov, Nucl. Instrum. Meth. A 393 (1997) 293; A. Semenov, Comput. Phys. Commun. 115 (1998) 124; A. Semenov, arXiv:hep-ph/0208011.
 [31] CompHEP Collaboration, E. Boos et al., Nucl. Instrum. Meth. A 534 (2004) 250 [arXiv:hep-ph/0403113]; A. Pukhov et al., "CompHEP user's manual, v3.3," Preprint INP MSU 98-41/542, 1998 [arXiv:hep-ph/9908288]; http://theory.sinp.msu.ru/comphep/.
 [32] L. Roszkowski, R. Ruiz de Austri and R. Trotta, JHEP 0707 (2007) 075 [arXiv:0705.2012 [hep-ph]]; B. C. Allanach, K. Cranmer, C. G. Lester and A. M. Weber, arXiv:0705.0487 [hep-ph].
 [33] G. Bélanger, S. Kraml and A. Pukhov, Phys. Rev. D 72 (2005) 015003 [arXiv:hep-ph/0502079].
 [34] A. Freitas and D. Stockinger, Phys. Rev. D 66 (2002) 095014 [arXiv:hep-ph/0205281].
 [35] N. Baro, F. Boudjema and A. Semenov, in progress, to appear soon.
 [36] T. Fritzsche and W. Hollik, Eur. Phys. J. C 24 (2002) 619 [arXiv:hep-ph/0203159].
 [37] W. Hollik and H. Rzehak, Eur. Phys. J. C 32 (2003) 127 [arXiv:hep-ph/0305328].
 [38] D. Z. Freedman, K. Johnson and J. I. Latorre, Nucl. Phys. B 371 (1992) 353; P. E. Haagensen, Mod. Phys. Lett. A 7 (1992) 893 [arXiv:hep-th/9111015]; F. del Aguila, A. Culatti, R. Muñoz Tapia and M. Pérez-Victoria, Phys. Lett. B 419 (1998) 263 [arXiv:hep-th/9709067]; F. del Aguila, A. Culatti, R. Muñoz Tapia and M. Pérez-Victoria, Nucl. Phys. B 537 (1999) 561 [arXiv:hep-ph/9806451]; F. del Aguila, A. Culatti, R. Muñoz Tapia and M. Pérez-Victoria, Nucl. Phys. B 504 (1997) 532 [arXiv:hep-ph/9702342].
 [39] F. Boudjema and E. Chopin, Z. Phys. C 73 (1996) 85 [arXiv:hep-ph/9507396].
 [40] P. Gondolo and G. Gelmini, Nucl. Phys. B 360 (1991) 145.
 [41] A. Sommerfeld, Atomic Structure and Spectral Lines, Methuen, London, 1934.
 [42] L. I. Schiff, Quantum Mechanics, third edition, McGraw-Hill, 1981, p. 138; L. Landau and E. Lifshitz, Quantum Mechanics (Vol. III), Mir, Moscow, 1967, p. 606.
 [43] D. Y. Bardin, W. Beenakker and A. Denner, Phys. Lett. B 317 (1993) 213.
 [44] A. Freitas, D. J. Miller and P. M. Zerwas, Eur. Phys. J. C 21 (2001) 361 [arXiv:hep-ph/0106198].
 [45] H. Baer, K. M. Cheung and J. F. Gunion, Phys. Rev. D 59 (1999) 075002 [arXiv:hep-ph/9806361].
 [46] M. Cirelli, A. Strumia and M. Tamburini, Nucl. Phys. B 787 (2007) 152 [arXiv:0706.4071 [hep-ph]].
 [47] W. Oller, H. Eberl, W. Majerotto and C. Weber, Eur. Phys. J. C 29 (2003) 563 [arXiv:hep-ph/0304006].
 [48] M. Drees and K. I. Hikasa, Phys. Lett. B 240 (1990) 455 [Erratum-ibid. B 262 (1991) 497].
 [49] L. J. Hall, R. Rattazzi and U. Sarid, Phys. Rev. D 50 (1994) 7048 [arXiv:hep-ph/9306309]; M. S. Carena, M. Olechowski, S. Pokorski and C. E. M. Wagner, Nucl. Phys. B 426 (1994) 269 [arXiv:hep-ph/9402253]; M. S. Carena, D. Garcia, U. Nierste and C. E. M. Wagner, Nucl. Phys. B 577 (2000) 88 [arXiv:hep-ph/9912516].