Dark Matter and Fundamental Physics with the Cherenkov Telescope Array
The Cherenkov Telescope Array (CTA) is a project for a next-generation observatory for very high energy (GeV–TeV) ground-based gamma-ray astronomy, currently in its design phase, and foreseen to become operational a few years from now. Several tens of telescopes of 2–3 different sizes, distributed over a large area, will allow for a sensitivity about a factor 10 better than current instruments such as H.E.S.S., MAGIC and VERITAS, an energy coverage from a few tens of GeV to several tens of TeV, and a field of view of up to 10 deg. In the following study, we investigate the prospects for CTA to study several science questions that can profoundly influence our current knowledge of fundamental physics. Based on conservative assumptions for the performance of the different CTA telescope configurations currently under discussion, we employ a Monte Carlo based approach to evaluate the prospects for detection and characterisation of new physics with the array.
First, we discuss CTA prospects for cold dark matter searches, following different observational strategies: in dwarf satellite galaxies of the Milky Way, which are virtually free of astrophysical background and have a relatively well known dark matter density; in the region close to the Galactic Centre, where the dark matter density is expected to be large and the astrophysical background due to the Galactic Centre itself can be excluded; and in clusters of galaxies, where the intrinsic flux may be boosted significantly by the large number of halo substructures. The possible search for spatial signatures, facilitated by the larger field of view of CTA, is also discussed. Next, we consider searches for axion-like particles which, besides being possible candidates for dark matter, may also explain the unexpectedly low absorption by the extragalactic background light of gamma-rays from very distant blazars. We establish the axion mass range CTA could probe through observation of long-lasting flares in distant sources. Simulated light-curves of flaring sources are also used to determine the sensitivity to violations of Lorentz Invariance through detection of a possible delay between the arrival times of photons at different energies. Finally, we mention searches for other exotic physics with CTA.
keywords:CTA, Dark Matter, Dwarf satellite galaxies, Galactic centre, Galactic halo, Galaxy clusters, Axion-like Particles, Lorentz Invariance Violations, Neutrino, Magnetic monopoles, Gravitational Waves
[corr]Sent off-print requests to Michele Doro (firstname.lastname@example.org) and Jan Conrad (email@example.com)
The Cherenkov Telescope Array (CTA) Actis et al. (2011) will be an advanced facility for ground-based gamma-ray astronomy. Compared to the current generation of Imaging Atmospheric Cherenkov Telescopes (IACTs), e.g. H.E.S.S., MAGIC and VERITAS, it will offer substantially improved sensitivity over a wider energy range.
The search for new physics beyond the Standard Model (SM) of particle physics is among the key science drivers of CTA, along with the understanding of the origin of high-energy gamma-rays and of the physics of cosmic ray acceleration in galactic and extragalactic objects. Several such fundamental physics issues are examined here — the nature of cold dark matter, the possible existence of axion-like particles, and possible violations of Lorentz Invariance induced by quantum gravity effects. Search strategies for cosmic tau neutrinos, magnetic monopoles and follow-up observations of gravitational waves are also discussed.
The CTA array performance files and analysis algorithms are extensively described in Bernloher et al. (2012). Eleven array configurations were tested for the Southern hemisphere and two for the Northern hemisphere (Bernloher et al., 2012, Table 2). The simulations were made at an altitude of 2000 m and at 70 deg elevation. Some arrays are balanced layouts in terms of performance across the energy range, while others are focused more on low energies or on high energies. Their point-source sensitivities are compared in (Bernloher et al., 2012, Fig. 7). The arrays comprise different numbers of telescopes of three different sizes: the Large Size Telescope (LST, 23 m diameter), the Medium Size Telescope (MST, 12 m diameter) and the Small Size Telescope (SST, 6 m diameter) (Bernloher et al., 2012, Table 1).
One of the goals of this study was to compare different array configurations for each specific science case. While in some cases all CTA configurations are compared against each other, in others only benchmark arrays are considered, chosen as representative layouts that maximize the performance at low energies, at high energies, or across the full energy range. Except for the galaxy cluster and Galactic halo studies, where extended or diffuse MC simulations are used, point-like MC simulations are used in all other cases. This is the first time that realistic estimates of the prospects of detection for CTA are presented for such searches. An optimised event selection procedure and a dedicated analysis ought to improve on our conservative expectations.
Previous studies often relied on overly optimistic sensitivities, especially at low energies; the publicly available effective areas for a subset of configurations (Actis et al., 2011; Bernloher et al., 2012) are now accurate and can be used to infer CTA sensitivities for point-like sources.
This contribution is structured as follows:
In Section 1, we explore different possible scenarios for detection of cold dark matter particle signatures in observations of: dwarf satellite galaxies of the Milky Way (Section 1.1), clusters of galaxies (Section 1.2) and the Galactic halo (Section 1.3). We also study anisotropies in the diffuse gamma-ray background as a signature of dark matter (Section 1.4).
In Section 2, we discuss the scientific case for axion-like particles, and make predictions for detection from observation of blazars at different distances and with different flare durations.
In Section 3, we compare the capacity of all planned CTA arrays to constrain high energy violations of Lorentz Invariance, relative to current limits.
In Section 4 we discuss qualitatively three more cases: the observation of air showers from τ leptons emerging from the Earth’s crust (Section 4.1), the capability to identify magnetic monopoles as bright emitters of Cherenkov light in the atmosphere (Section 4.2), and some considerations on multi-wavelength gravitational wave campaigns (Section 4.3).
Given the wide variety of physics issues considered in this contribution, an introduction to each individual physics case is presented in the corresponding section for easier readability. The reader can find an overall summary and closing remarks in Section 5.
1 Cold Dark Matter Particle searches
A major open question for modern physics is the nature of the dark matter (DM). There is a large body of evidence for the presence of an unknown form of gravitational mass, at scales from kiloparsecs to megaparsecs, that cannot be accounted for by SM particles. The observation by the WMAP satellite Komatsu et al. (2011) of the acoustic oscillations imprinted in the cosmic microwave background quantifies the DM component as contributing about 25% of the total energy budget of the Universe. Being dominant with respect to the baryonic component, which accounts for only about 4% of the total energy density, DM shaped the formation of cosmic structures. By comparing the galaxy distributions in large redshift galaxy surveys Reid (2009), and through N-body simulations of structure formation Springel et al. (2008); Anderson et al. (2010), it is inferred that the particles constituting the cosmological DM had to be moving non-relativistically at decoupling from thermal equilibrium in the early universe (‘freeze-out’) in order to reproduce the observed large-scale structure in the Universe, hence the term “cold DM” (CDM). This observational evidence has led to the establishment of a concordance cosmological model, dubbed ΛCDM Press and Schechter (1974); Sheth et al. (2001); Springel et al. (2005), although this paradigm is troubled by some experimental controversies Klypin et al. (1999); Kravtsov (2010); de Naray and Spekkens (2011); Walker and Penarrubia (2011); Boylan-Kolchin et al. (2011, 2012).
One of the most popular scenarios for CDM is that of weakly interacting massive particles (WIMPs), which includes a large class of non-baryonic candidates with mass typically between a few tens of GeV and a few TeV and an annihilation cross-section set by weak interactions (see, e.g., Refs. Bertone et al., 2005; Feng, 2010). Natural WIMP candidates are found in proposed extensions of the SM, e.g. in Super-Symmetry (SUSY) (Jungman et al., 1996; Martin, 1998), but also Little Higgs (Schmaltz and Tucker-Smith, 2005), Universal Extra Dimensions (Servant and Tait, 2003), and Technicolor models (Nussinov, 1985; Chivukula and Walker, 1990), among others. Their present velocities are set by the gravitational potential in the Galactic halo at about a thousandth of the speed of light. WIMPs which were in thermal equilibrium in the early Universe would have a relic abundance varying inversely as their velocity-weighted annihilation cross-section (for pure s-wave annihilation), Ω_χ h² ≈ 3 × 10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩ Jungman et al. (1996). Hence for a weak-scale cross-section ⟨σv⟩ ≈ 3 × 10⁻²⁶ cm³ s⁻¹, they naturally have the required relic density Ω_χ h² ≈ 0.1, where h is the Hubble parameter in units of 100 km s⁻¹ Mpc⁻¹ Komatsu et al. (2011). The ability of WIMPs to naturally yield the DM density from readily computed thermal processes in the early Universe without much fine tuning is sometimes termed the “WIMP miracle”.
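As a back-of-the-envelope illustration of the relic-abundance scaling above, the following sketch (using the standard numerical constant of the thermal-relic approximation, not a value quoted in this text) shows that a weak-scale cross-section indeed yields roughly the observed DM density:

```python
# Thermal-relic scaling for pure s-wave annihilation:
# Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>.

def relic_density(sigma_v_cm3_s: float) -> float:
    """Approximate WIMP relic abundance Omega_chi h^2."""
    return 3e-27 / sigma_v_cm3_s

# A weak-scale cross-section of ~3e-26 cm^3 s^-1 gives Omega h^2 ~ 0.1,
# close to the measured cosmological DM density — the "WIMP miracle".
omega_h2 = relic_density(3e-26)
print(f"Omega h^2 ~ {omega_h2:.2f}")  # prints "Omega h^2 ~ 0.10"
```

The inverse proportionality means that doubling the annihilation cross-section halves the predicted relic abundance.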
In some SUSY theories, a symmetry called ‘R-parity’ prevents too rapid proton decay and, as a side-effect, also guarantees the stability of the lightest SUSY particle (LSP), which is thus a prime candidate for a WIMP. WIMPs can annihilate to SM particles, with hadrons or leptons among the final products of annihilation. Thus from cosmic DM annihilations one can expect emission of neutrinos, charged cosmic rays, multi-frequency electromagnetic radiation from charged products, and prompt gamma-rays Colafrancesco et al. (2006). The detection of these final state particles can help to identify DM — this is termed “indirect DM detection”. Gamma-rays are not deflected by cosmic magnetic fields, and thus trace back to their origin. Therefore, observation of a gamma-ray signal from cosmic targets where DM is expected could prove conclusive about its nature.
In the context of gamma-ray astronomy, the differential flux of gamma-rays from within a solid angle ΔΩ around a given astronomical target where DM is expected can be written as:

dΦ_γ/dE_γ (E_γ, ΔΩ) = B × ⟨σ_ann v⟩/(8π m²_DM) × Σ_i BR_i dN^i_γ/dE_γ × J(ΔΩ),   (1.1)

where ⟨σ_ann v⟩ is the annihilation cross-section (times the relative velocity of the two WIMPs), Σ_i BR_i dN^i_γ/dE_γ is the photon flux per annihilation summed over all the possible annihilation channels i with branching ratios BR_i, and m_DM is the mass of the DM particle. The ‘astrophysical factor’ J is the integral over the line of sight (los) of the squared DM density ρ and over the integration solid angle ΔΩ:

J(ΔΩ) = ∫_ΔΩ dΩ ∫_los ρ²(l) dl.   (1.2)

The remaining term B in Eq. (1.1) is the so-called ‘boost factor’, which is a measure of our ignorance of intrinsic flux contributions that are not accounted for directly in the formula.
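The astrophysical factor can be evaluated numerically. The sketch below does so for an NFW density profile with dimensionless placeholder parameters — purely to illustrate the structure of the double integral over solid angle and line of sight, not to reproduce any value quoted later in the text:

```python
import numpy as np

def rho_nfw(r, rho_s=1.0, r_s=1.0):
    """NFW density profile: rho(r) = rho_s / [(r/r_s) (1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def j_factor(dist, theta_max, rho_s=1.0, r_s=1.0, n=400):
    """Integrate rho^2 over the line of sight and over a cone of
    half-angle theta_max (radians) towards a halo at distance `dist`.
    All quantities are in arbitrary (placeholder) units."""
    thetas = np.linspace(1e-4, theta_max, n)
    ls = np.linspace(1e-3, 2.0 * dist, n)  # line-of-sight distances
    dtheta = thetas[1] - thetas[0]
    dl = ls[1] - ls[0]
    J = 0.0
    for th in thetas:
        # radius from the halo centre at each point along the line of sight
        r = np.sqrt(dist**2 + ls**2 - 2.0 * dist * ls * np.cos(th))
        los = np.sum(rho_nfw(r, rho_s, r_s) ** 2) * dl
        J += 2.0 * np.pi * np.sin(th) * los * dtheta  # solid-angle weight
    return J
```

Because the integrand is positive, J grows monotonically with the integration angle, which is why the choice of ΔΩ matters for the sensitivity estimates discussed below.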
There are various known mechanisms for boosting the intrinsic flux, among which we mention the inclusion of subhalos and the existence of a ‘Sommerfeld enhancement’ of the cross-section in low velocity regimes, in models where the DM particles interact via a new long-range force. All numerical N-body simulations of galactic halos have shown the presence of subhalos populating the host halo (see, e.g., Refs. Springel et al., 2008; Diemand et al., 2008). Such density enhancements, if not spatially resolved, can contribute substantially to the expected gamma-ray flux from a given object. This effect is strongly dependent on the target: in dwarf spheroidal galaxies (dSphs), for example, the boost factor is only of order unity (Pieri et al., 2009; Abramowski et al., 2011), whereas in galaxy clusters the boost can be spectacular, up to a factor of several hundreds Sanchez-Conde et al. (2011); Pinzke et al. (2011); Gao et al. (2011). On the other hand, the Sommerfeld enhancement effect can significantly boost the DM annihilation cross-section (Sommerfeld, 1931; Lattanzi and Silk, 2009). This non-relativistic effect arises when two DM particles interact in a long-range attractive potential, and results in a boost in gamma-ray flux which increases with decreasing relative velocity down to a saturation point which depends on the DM and mediator particle masses. This effect can enhance the annihilation cross-section by a few orders of magnitude (Pieri et al., 2009; Abramowski et al., 2011).
The current generation of IACTs is actively searching for WIMP
annihilation signals. dSphs are promising targets
for DM annihilation detection being among the most DM dominated
objects known and free from astrophysical background.
Constraints on WIMP annihilation signals from
dSphs have been reported towards Sagittarius, Canis Major, Sculptor
and Carina by H.E.S.S. (Aharonian, 2008, 2009; Abramowski et al., 2011), towards Draco, Willman 1 and Segue 1 by
MAGIC (Albert et al., 2008; Aliu et al., 2009; Aleksic et al., 2011), towards
Draco, Ursa Minor, Boötes 1, Willman 1 and Segue 1 by
VERITAS (Acciari et al., 2010; Aliu et al., 2012), and again towards Draco and Ursa
Minor by Whipple (Wood et al., 2008). Nevertheless, the present instruments do not have the required sensitivity to reach the “thermal” value of the annihilation cross-section, ⟨σv⟩ ≈ 3 × 10⁻²⁶ cm³ s⁻¹.
A search for a WIMP annihilation signal from the halo at angular distances between 0.3° and 1.0° from the Galactic Centre has also recently been performed using 112 h of H.E.S.S. data Abramowski et al. (2011). For WIMP masses well above the H.E.S.S. energy threshold of 100 GeV, this analysis provides the currently most constraining limits on ⟨σv⟩.
H.E.S.S., MAGIC and VERITAS have also observed some galaxy clusters,
reporting detection of individual galaxies in the cluster, but only
upper limits on any CR and DM associated
emission Aharonian (2008); Aharonian et al. (2009); Aleksic et al. (2010); Acciari et al. (2009); Aleksic et al. (2011); Abramowski et al. (2012). Even though IACT
limits are weaker than those obtained from the Fermi-LAT satellite
measurements in the GeV mass range (Abdo et al., 2010; Abazajian et al., 2010; Hutsi et al., 2010; Ackermann et al., 2011),
they complement the latter in the TeV mass range. Gamma-ray line
signatures can also be expected in the annihilation or decay of DM
particles in space, e.g. into γγ or Zγ. Such a
signal would be readily distinguishable from astrophysical gamma-ray
sources which typically produce continuous
spectra Bringmann et al. (2011). A measurement
carried out by H.E.S.S. Spengler et al. (2011) using over 100 h of
Galactic Centre observations and over 1000 h of extragalactic
observations complements recent results obtained by
Fermi-LAT Abdo et al. (2010), and together cover about 3 orders of
magnitude in energy, from 10 GeV to 10 TeV.
In this contribution, we focus on the prospects for DM searches with CTA, which are expected to improve on the current generation of IACTs on the following basis:
the energy range will be extended, from a few tens of GeV to several tens of TeV. At low energies, this will allow overlap with the Fermi-LAT instrument and will provide sensitivity to WIMPs with low masses. For WIMPs with mass larger than about 100 GeV, CTA will have the higher sensitivity, as our studies indicate (Funk and Hinton, 2012).
the improved sensitivity in the entire energy range, compared to current instruments, will obviously improve the probability of detection, or even identification of DM, through the observation of spectral features,
finally, the improved energy resolution will allow much better sensitivity to possible spectral features in the DM-generated photon spectrum. While astrophysical sources typically show power-law spectra with steepening at high energies, DM spectra are universal and generically exhibit a rapid cut-off at the DM mass. For specific models, “smoking gun” spectral features can appear (Bringmann et al., 2011). The observation of a few such identical spectra from different sources would allow precision determination of both the mass of the WIMP and its annihilation cross-section.
For the following studies, in order to claim a detection we require: the number of excess events over the background to be larger than 10 in the signal region; the ratio between the number of excess events and the number of background events to be larger than 3%; and the significance of the detection, computed following Eq. (17) of Li & Ma (Li and Ma, 1983), to be larger than 5σ. Unless explicitly mentioned otherwise, we used 5 background-control regions (α = 0.2 in the Li & Ma notation), which is a conservative choice, given that the large FOV of CTA may allow for a smaller α. In case of non-detection within a certain observation time, we calculate integral upper limits following the method described in Rolke et al. (2005) (bounded profile likelihood ratio statistic with Gaussian background, at 95% confidence level) in all cases except the Galactic halo case, where we use the method of Feldman and Cousins (1998).
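The detection criteria just listed can be sketched as follows; the formula is the standard Eq. (17) of Li & Ma (1983), while the event counts in the usage note are purely illustrative, not simulated CTA rates:

```python
import math

def li_ma_significance(n_on: float, n_off: float, alpha: float) -> float:
    """Significance of an on/off counting measurement, Li & Ma (1983), Eq. (17).
    alpha is the ratio of on-source to off-source exposure."""
    term_on = n_on * math.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * math.log((1 + alpha) * n_off / (n_on + n_off))
    return math.sqrt(2.0 * (term_on + term_off))

def is_detection(n_on: float, n_off: float, alpha: float = 0.2) -> bool:
    """Apply the three detection criteria described in the text."""
    excess = n_on - alpha * n_off  # events above the scaled background
    return (excess > 10
            and excess / (alpha * n_off) > 0.03
            and li_ma_significance(n_on, n_off, alpha) > 5.0)
```

For example, with α = 0.2 (five background-control regions), N_on = 200 and N_off = 500 give an excess of 100 events at a significance above 5σ, so all three criteria are met; a 5-event excess fails the first cut regardless of significance.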
We study the effect of various annihilation spectra, assuming in turn a 100% BR into a specific channel. The spectral shapes are obtained from different parameterisations in the literature (Tasitsiomi and Olinto, 2002; Cirelli et al., 2011; Cembranos et al., 2011). For the channel used for the comparison of different targets (see Fig. 5.1), the difference between parameterisations accounts for a few percent (depending on the DM mass), which is substantially smaller than the uncertainties in, e.g., the astrophysical factor, and does not significantly alter the conclusions.
1.1 Observations of dwarf satellite galaxies
In the CDM paradigm, galaxies such as ours are the result of a complex merger history and are expected to have extended halos of DM in accordance with observations. dSphs are satellites orbiting the Milky Way under its gravitational influence and are considered as privileged targets for DM searches for the following reasons:
many of the dSphs lie within ∼100 kpc of the Earth,
they have favourable low gamma-ray backgrounds, due to the lack of recent star formation and to the little or no gas available to serve as target material for cosmic rays (Mateo, 1998).
The family of dSphs is divided into “classical” dSphs, which are well-established sources with relatively high surface brightness and hundreds of member stars identified Simon and Geha (2007); Charbonnier et al. (2011), and “ultra-faint” dSphs, which have mainly been discovered recently through photometric observations in the Sloan Digital Sky Survey (SDSS) York et al. (2000) and have very low surface brightness and only a few tens or hundreds of member stars. Some of the ultra-faint dSphs are not well-established as such because of similarity of their properties with globular clusters, hence their nature is often under debate. However, they are of particular interest due to their potentially very large, albeit uncertain, mass-to-light ratios.
|dSph||Profile||Reference|
|Ursa Minor||NFW||Charbonnier et al. (2011)|
|Draco||NFW||Charbonnier et al. (2011)|
|Sculptor||NFW||Charbonnier et al. (2011)|
|Sculptor||ISO||Battaglia et al. (2008)|
|Carina||NFW||Charbonnier et al. (2011)|
|Segue 1||Einasto||Aleksic et al. (2011)|
|Willman 1||NFW||Acciari et al. (2010)|
|Coma Berenices||NFW||Strigari et al. (2008)|
Table 1.1 lists a few selected dSphs for comparison, with the assumed DM density profile and corresponding reference. For the classical dSphs, we selected the two most promising Northern (Ursa Minor and Draco) and Southern (Sculptor and Carina) ones according to Charbonnier et al. (2011, Table 2). The statistical uncertainties on the astrophysical factor are roughly one order of magnitude at 68% CL, depending slightly on the dSph, and can be found in (Charbonnier et al., 2011, Table 2). For the ultra-faint dSphs, we include Segue 1, Willman 1 and Coma Berenices, which have the highest astrophysical factors (although their nature is still under debate, especially for Segue 1 Belokurov et al. (2007); Niederste-Ostholt et al. (2009); Geha et al. (2009); Xiang-Gruess et al. (2009); Simon et al. (2010); Essig et al. (2009); Martinez et al. (2009), which makes the determination of the astrophysical factor less accurate than for classical dSphs). We remark that the estimation of the astrophysical factor is subject to uncertainties, both of statistical origin and due to the different assumptions entering its calculation. A systematic study has been done for Sculptor to estimate the effect of the assumptions on the profile shape and velocity anisotropy Battaglia et al. (2008). Another compilation of astrophysical factors for several dSphs can be found in Ackermann et al. (2011).
For the subsequent discussion, we consider only three sources: Ursa Minor and Sculptor, representative of classical dSphs and located in the Northern and Southern hemisphere respectively, and Segue 1, which has the largest astrophysical factor.
Bounds on the annihilation cross-section
Two kinds of radial profiles are generally used to model the DM distribution in dSphs: cusped and cored profiles Walker et al. (2009). While the former is motivated by numerical N-body simulations, the latter seems to be more consistent with observations Salucci et al. (2012), but the issue is still under debate (see, e.g., Valenzuela et al., 2007). The standard cusped profile is the Navarro, Frenk & White (NFW) form Navarro et al. (1997), while more recently it has been shown that the Einasto profile Navarro et al. (2010) also provides a good fit to the subhalos in N-body simulations (Springel et al., 2008). On the other hand, for systems of the size of dSphs, the possibility of centrally cored profiles has also been suggested (Moore, 1994; Flores and Primack, 1994; Walker and Penarrubia, 2011). Indeed, observations of low surface brightness galaxies and dSphs (de Blok et al., 2001; van den Bosch and Swaters, 2001; de Blok, 2010) show that both cusped and cored profiles can accommodate their stellar dynamics.
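The qualitative difference between the profile families discussed above can be made concrete; the following sketch uses standard functional forms with placeholder (dimensionless) parameters, not fitted values for any particular dSph:

```python
import numpy as np

def nfw(r, rho_s=1.0, r_s=1.0):
    """Cusped NFW profile: diverges as 1/r toward the centre."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def einasto(r, rho_s=1.0, r_s=1.0, alpha=0.17):
    """Einasto profile: a milder, exponentially rolling inner slope."""
    return rho_s * np.exp(-(2.0 / alpha) * ((r / r_s) ** alpha - 1.0))

def cored_isothermal(r, rho_0=1.0, r_c=1.0):
    """Cored isothermal profile: flattens to rho_0 at small radii."""
    return rho_0 / (1.0 + (r / r_c) ** 2)
```

Since the astrophysical factor weights the density squared, the cusped NFW form concentrates the predicted signal toward the centre, while a cored profile spreads it out — which is why the choice of integration angle affects cusp/core discrimination.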
Fig. 1.1 shows the integral upper limits towards Sculptor, the best Southern candidate from Table 1.1, for which we consider both a cusped NFW Charbonnier et al. (2011) and a cored isothermal Abramowski et al. (2011) profile. The sensitivity is calculated for two benchmark arrays, assuming that the DM particle annihilates purely into one reference channel. The observation time is set to 100 hours. The best sensitivity is reached for the NFW profile for both arrays, while the isothermal profile is less constraining. Weaker constraints in the low mass range are obtained for the array lacking large-size telescopes in the centre of its layout. The capability of CTA to discriminate between the two profiles is therefore restricted.
The integration solid angle plays a central role in the estimation of the sensitivity and in the discrimination between cusped and cored profiles. The former point was already addressed in (Charbonnier et al., 2011, Fig. 7), where it was shown that small integration angles guarantee the strongest constraints. In the case of CTA, depending on the array layout (and the energy range), the angular resolution could be as low as 0.02 deg, and thus our results can be considered conservative, with an expected improvement from a smaller integration angle. Concerning the second point, Walker et al. (2011) showed that the most robust constraints, regardless of whether the profile is cored or cusped, are reached for an integration angle α ≈ 2 r_h/d, where r_h is the so-called half-light radius and d is the distance to the dSph. For Sculptor, this is over 5 times the integration angle adopted here. In our calculation this would imply a weakening of the upper limits by a factor of a few.
In Fig. 1.2 we show the integral upper limits for two classical dSphs, Ursa Minor and Sculptor, in the Northern and Southern hemispheres respectively, as well as for the ultra-faint dSph Segue 1. In order to span the variety of DM particle models, we study the effect of various annihilation spectra (computed using Ref. Cirelli et al. (2011)), assuming in turn a 100% BR into different channels, for an observation time of 100 h. The best sensitivity is obtained for 100 h of observation of Segue 1. Comparing the different dSphs in the reference annihilation channel, we see that even the most promising classical dSphs are less constraining than Segue 1 by over a factor of 10. However, the uncertainties in the estimation of the astrophysical factors for ultra-faint dSphs mean that this conclusion may not be reliable. Note that in the above calculations we did not assume any intrinsic flux boost, i.e. B = 1 in Eq. (1.1).
Bounds on Astrophysical factors and Boost factors
Another approach to estimate the capabilities of CTA for DM detection in dSphs consists in evaluating the statistical significance of the DM signal as a function of the DM particle mass and the astrophysical factor, for different possible annihilation channels. Hereafter, we calculate the minimum astrophysical factor required to reach a statistical significance of 5σ, assuming an effective observation time of 100 h and the thermal cross-section ⟨σv⟩ ≈ 3 × 10⁻²⁶ cm³ s⁻¹. This is shown in Fig. 1.3 for two annihilation channels (upper and lower curves), using analytical fits from Ref. Cembranos et al. (2011). Again, three proposed CTA configurations are studied: B, C, and E. In order to put these values into context, we note that the largest astrophysical factor among known dSphs is that of Segue 1 Essig et al. (2010). From the figure it is clear that for a detection, the astrophysical factor of the dSph needs to exceed a value only 1–2 orders of magnitude smaller than that of the Galactic Centre (see Section 1.3). While we may expect a few such objects in the Milky Way halo Pieri et al. (2008), they ought to have already been detected and identified by Fermi-LAT. Although this has not happened, one can envisage DM subhalos with no associated dSph (or one not bright enough optically to be detected), and therefore such gamma-ray emitters may be hidden among the unidentified Fermi sources Nieto et al. (2011).
Another way to evaluate the prospects of DM detection is by means of the intrinsic flux boost factor B in Eq. (1.1). The minimum boost factor B_min is computed as the ratio of the minimum astrophysical factor which provides a 5σ detection in 100 h of observation time with CTA to the astrophysical factor obtained from the DM modeling of the dSphs. Again, the thermal cross-section is assumed. Fig. 1.4 shows the minimum B_min for a 1 TeV DM particle. B_min is calculated for an NFW profile in all cases except Segue 1, where an Einasto profile is considered. Considering that the boost factor from subhalos in dSphs is only of order unity, CTA observations of dSphs will be more sensitive to scenarios where the Sommerfeld enhancement is at play, which may instead boost the signal by a few orders of magnitude.
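The ratio just defined can be written compactly; the J values in the usage note are placeholders chosen only to illustrate the arithmetic, not the modeled values of any particular dSph:

```python
# Minimum boost factor: B_min = J_min / J_obs, i.e. the factor by which
# the intrinsic flux must be boosted so that B * J_obs reaches the
# astrophysical factor required for a 5-sigma detection in 100 h.

def minimum_boost(j_min_detection: float, j_observed: float) -> float:
    """Boost needed for the modeled target to become detectable."""
    return j_min_detection / j_observed

# Hypothetical example: if detection requires J_min = 1e21 (in GeV^2 cm^-5)
# and the modeled J of a dSph is 1e19, a boost of 100 would be needed.
b_min = minimum_boost(1e21, 1e19)
```

A target is then favoured when plausible boost mechanisms (subhalos, Sommerfeld enhancement) can supply a factor at least as large as B_min.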
1.2 Observations of Galaxy Clusters
Within the standard ΛCDM scenario, galaxy clusters, with masses around 10¹⁴–10¹⁵ M_⊙, are the largest gravitationally bound objects and the most recently formed structures Voit (2005). They are complex objects, relevant for both cosmological and astrophysical studies, as well as for DM searches Colafrancesco et al. (2006); Sanchez-Conde et al. (2011); Pinzke et al. (2011); Blasi et al. (2007); Jeltema et al. (2009); Pinzke and Pfrommer (2010); Ackermann et al. (2010); Cuesta et al. (2011); Zimmer et al. (2011); Huang et al. (2011); Han et al. (2012). DM is in fact thought to be the dominant component of the cluster mass budget, accounting for up to 80% of the mass (the other components being the galaxies and the gas of the intra-cluster medium, ICM). This is why clusters have been considered as targets for the indirect detection of DM, with the possibility of detecting the gamma-rays produced in the annihilation (or decay) of DM particles in the halo of the cluster.
N-body simulations of halo formation and evolution have also shown that, while the majority of early-formed small structures merge together giving shape to more massive objects, some of the subhalos survive and are still present in the “host” halo of larger objects. Theoretical models foresee a huge number of these substructures at all scales, down to about 10⁻⁶ M_⊙ Bringmann (2009). These subhalos contribute to the total gamma-ray emission from DM annihilations, and they may have important consequences for DM indirect detection. This is especially true for galaxy clusters, where the intrinsic flux “boost” from subhalos can reach a factor of several hundreds, in contrast to the case of dSphs, explored previously, where the subhalo boost should contribute only marginally. Despite the fact that, due to their vicinity, dSphs are usually considered as the best sources for DM indirect detection, thanks to the subhalo boost some authors claim that galaxy clusters have prospects of DM detection better than, or at least as good as, those of dSphs Sanchez-Conde et al. (2011); Pinzke et al. (2011); Gao et al. (2011).
On the other hand, in galaxy clusters, gamma-ray emission is not only expected from DM annihilation. Clusters may host Active Galactic Nuclei (AGN), which appear as point-like sources at very high energies, and radio galaxies. The case of the Perseus galaxy cluster, observed by MAGIC during several campaigns in recent years, is emblematic: MAGIC detected both the central AGN NGC 1275 Aleksíc et al. (2011) and the off-centre head-tail radio galaxy IC 310 Aleksic et al. (2010). Moreover, gamma-rays are also expected from the interaction of cosmic rays (CRs) with the ICM Pinzke and Pfrommer (2010); Blasi and Colafrancesco (1999); Miniati (2003); Pfrommer et al. (2007); Ensslin et al. (2010). The physics of the acceleration of CRs (electrons and protons) is not completely understood, but plausible mechanisms are shock acceleration during structure formation, or galactic winds driven by supernovae. CRs can also be injected into the ICM from radio galaxy jets/lobes. At the energies of interest here (above 10 GeV), CRs emit gamma-rays through the processes associated with the decay of the neutral and charged pions produced in the interaction of the CRs with the ambient protons of the ICM Colafrancesco and Blasi (1998); Colafrancesco et al. (2011). Most importantly, this contribution is usually found to be larger than the one predicted from DM annihilation. It thus represents an unavoidable source of background for DM searches in galaxy clusters. To date, the deep exposure of the Perseus cluster performed with the MAGIC stereoscopic system Aleksic et al. (2011) has placed the most stringent constraints from VHE gamma-ray observations on the maximum CR-to-thermal pressure, at the level of a few percent.
The purpose of this section is to estimate the CTA potential to detect gamma-rays from DM annihilation in the halos of galaxy clusters. First, the CR-induced emission alone will be considered. This component represents, by itself, an extremely interesting science case, while at the same time being a background complicating the prospects of DM detection. Afterwards, the ideal case of a cluster whose emission is dominated by DM annihilation alone will be treated. Finally, the combination of the two components, distributed co-spatially, will be discussed.
It should be noted here that gamma-ray emission from both DM annihilation and CRs is spatially extended, even though not always co-spatial. In particular, Sanchez-Conde et al. (2011) showed that, for the case of DM, the contribution of subhalos is particularly relevant away from the halo centre, so that annihilations can still produce a significant amount of photons up to several degrees from the centre. This represents a problem for current Cherenkov Telescopes, whose FOV is limited to a few degrees. CTA will overcome this limitation, having a FOV of up to 10 deg (at least above 1 TeV) and an almost flat sensitivity up to several degrees from the centre. It is therefore reasonable to expect that CTA will allow a step-change in capability in this important area.
In this study, we selected two benchmark galaxy clusters: Perseus and Fornax. Perseus has been chosen because it is considered the cluster with the highest CR-induced photon yield but a low DM content, and Fornax for the opposite reason: it is considered the most promising galaxy cluster for DM searches Sanchez-Conde et al. (2011); Pinzke et al. (2011). We recall that Perseus is located in the Northern hemisphere, while Fornax is in the Southern hemisphere. To study the prospects for CTA we use Monte Carlo simulations of the instrument response functions and of the background rates for extended sources, for the two candidate arrays which, we recall, are representative of a well-performing array at low energies and of one performing well over the full energy range, respectively. The MC simulations were developed explicitly for the analysis of extended sources, so that all the relevant observables are computed throughout the entire FOV.
Gamma-ray emission from cosmic rays
Gamma-ray emission due to the injection of CRs into the ICM of a galaxy cluster is proportional to both the density of the ICM and the density of CRs. For the present work, we refer to the hadronic CR model of Pinzke and collaborators Pinzke et al. (2011); Pinzke and Pfrommer (2010), based on detailed high-resolution hydrodynamic simulations of the evolution of galaxy clusters, since these works provide the detailed morphological information essential to compute the CTA response. The CR surface brightness decreases rapidly with distance from the centre of the halo, so that, in most cases, the total emission is contained within a fraction of the projected virial radius $R_{200}$ of the cluster, the radius within which the mean density equals 200 times the critical density (see, e.g., Fig. 14 of Pinzke et al. (2011), from which we derive the surface brightness of the clusters we analyze). The values of $R_{200}$, and the corresponding angular sizes, for Perseus and Fornax are taken from Sanchez-Conde et al. (2011). The energy spectrum of the model, at the energies of interest here (above 10 GeV), is a power law of fixed slope.
Since the emission region is extended in the sky, we first divide the FOV into a grid of pixels, each 0.2 degrees wide, and then define the region of interest (ROI) as all the pixels within a given angle of the centre of the camera. We consider 15 values of the energy threshold, in logarithmic steps from 50 GeV to 50 TeV. With the theoretical gamma-ray emission and the instrument response, we compute the predicted number of background and signal events above each energy threshold in each bin of the ROI separately, and then integrate over the entire ROI. The model of Pinzke et al. (2011) predicts a rather large gamma-ray flux for Perseus, the largest among the galaxy clusters, and a smaller one for Fornax. Above each energy threshold, we determine how many hours CTA will need to detect the sources. We perform the calculation for the two CTA candidate arrays and for different ROIs. We repeat the procedure 10 times for each energy threshold and average the results, in order to quantify the statistical fluctuations occurring when the numbers of signal and background events are generated. The results are shown in Fig. 1.5.
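The detection-time estimate sketched above can be illustrated with the standard Li & Ma (1983) significance for on/off counting, scanning the observation time until a 5σ threshold is reached. This is a minimal sketch: the event rates and the on/off normalisation `alpha` below are hypothetical placeholders, not the values used in this study.

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Detection significance for on/off counting (Eq. 17 of Li & Ma 1983)."""
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1.0 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

def hours_to_detection(signal_rate_hz, bkg_rate_hz, alpha=0.2, target=5.0):
    """Smallest observation time (hours) at which the significance reaches
    the target, for constant signal and background rates in the ROI."""
    for t_h in np.logspace(-1, 4, 600):          # scan 0.1 h .. 10^4 h
        t_s = 3600.0 * t_h
        n_on = (signal_rate_hz + bkg_rate_hz) * t_s
        n_off = bkg_rate_hz * t_s / alpha        # background control region
        if li_ma_significance(n_on, n_off, alpha) >= target:
            return t_h
    return np.inf
```

Since the significance grows roughly as the square root of time, halving the signal rate quadruples the required exposure; the same scan can be repeated per energy threshold and per ROI, as done in the text.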
If one assumes the CR-induced gamma-ray model of Pinzke et al. (2011), CTA will detect such radiation from Perseus within a reasonable observation time, a fact which would by itself constitute an extraordinary scientific result.
We see that the exact value of the integration time depends on the energy threshold chosen for the analysis. The reason for this is the trade-off between the gamma-ray efficiency at different energies (the effective area), the intrinsic source spectrum and the chosen ROI. Roughly 90% of the CR-induced emission from Perseus is expected within a small angular distance of the centre. We checked that when integrating over larger ROIs, more background than signal is included in the analysis, thus degrading the significance of the detection. This suggests that in realistic cases the ROI should be optimized. Finally, we also see that the prospects for detection are similar for both considered arrays.
Gamma-ray emission from Dark Matter annihilation
The gamma-ray brightness due to DM annihilation from a particular viewing angle in the sky is proportional to the DM density squared integrated along the line of sight, as shown in Eq. (1.1). In the case of galaxy clusters, the contribution of the smooth DM halo is boosted by the presence of DM subhalos. Recent $N$-body simulations of Milky Way-like halos Springel et al. (2008); Anderson et al. (2010) found that the contribution of subhalos is small in the centre of the halo, due to dynamical friction and tidal effects that disrupt the subhalos. However, already at moderate distances from the centre, subhalos become the dominant component. The true value of the boost factor from subhalos is unknown, and the theoretical estimates depend on the assumptions and methods used in the calculations. Pinzke et al. (2011) estimated boost factors for Fornax and Perseus (for a given minimal halo mass), while other authors obtained values from a few tens Sanchez-Conde et al. (2011) up to several thousands Gao et al. (2011).
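The line-of-sight integral of Eq. (1.1) can be illustrated with a short numerical sketch for a smooth NFW halo, with the subhalo boost folded in as a crude constant factor (in reality, as discussed below, the boost is strongly radius-dependent). The profile parameters and cluster distance are hypothetical placeholders.

```python
import numpy as np

# Hypothetical NFW parameters for illustration: rho_s in GeV cm^-3, r_s in kpc.
RHO_S, R_S = 0.05, 400.0
KPC_CM = 3.086e21  # kpc in cm

def nfw_rho(r_kpc):
    """Smooth NFW density profile."""
    x = r_kpc / R_S
    return RHO_S / (x * (1.0 + x) ** 2)

def j_los(psi_deg, d_kpc=2.0e4, boost=1.0, n=8000):
    """Line-of-sight integral of rho^2 at angle psi from the cluster centre
    (units GeV^2 cm^-5), scaled by a constant substructure boost."""
    psi = np.radians(psi_deg)
    l = np.linspace(1.0, 3.0 * d_kpc, n)                       # path length, kpc
    r = np.sqrt(d_kpc**2 + l**2 - 2.0 * d_kpc * l * np.cos(psi))
    dl = l[1] - l[0]
    return boost * np.sum(nfw_rho(r) ** 2) * dl * KPC_CM
```

The integral falls quickly with the offset angle, which is why the smooth component alone would make the cluster essentially point-like; it is the radius-dependent subhalo term that flattens the profile.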
We refer again to the results of Pinzke et al. (2011), where the authors assumed a double power law to describe the luminosity of subhalos as a function of the projected distance from the centre of the halo, a behaviour derived by analyzing the subhalos in the Aquarius $N$-body simulation. They also found the projected surface brightness to be largely independent of the initial profile of the smooth DM halo. As a result, the DM surface brightness profile is very flat, decreasing only mildly out to a few degrees from the centre, depending on the cluster (Fig. 1.7 and Ref. Sanchez-Conde et al., 2011). For the cases of Perseus and Fornax, we used the results of Fig. 10 of Ref. Pinzke et al. (2011), assuming a fixed telescope angular resolution, which is a good approximation for CTA despite the fact that the exact value depends on the array, the energy and the position in the FOV. We underline that in the case of galaxy clusters the contribution from substructures strongly shapes the emission region, basically turning a point-like source (when no substructures are considered) into an extended one. Given that the analyses differ in the two cases, the contribution from substructures cannot be treated as a simple multiplicative factor on the intrinsic expected flux with respect to the point-like case.
Hereafter we consider the Fornax cluster, which has the largest expected DM-induced photon yield. The intrinsic flux is taken from (Pinzke et al., 2011, Table 2) and includes an intrinsic boost factor from subhalos. An additional intrinsic boost may come from other subhalo contributions not accounted for in this model, from mechanisms like the Sommerfeld enhancement discussed above, or from the effect of contraction processes due to baryonic condensation Gnedin et al. (2004); Ando and Nagai (2012).
To compute the CTA detection prospects, we consider a single representative DM annihilation channel (spectral shape obtained from Ref. (Cembranos et al., 2011)), while other channels may be more constraining, depending on the energy (see Fig. 1.2). We take the reference thermal cross-section and scan the DM particle mass between 50 GeV and 4 TeV. We optimized the upper-limit calculation as described in Ref. Aleksic et al. (2011), by optimizing the energy threshold above which the upper limit is estimated. In addition, we consider the possibility of extending the size of the ROI up to 2 degrees, to encompass the full radial extension of the source. Fig. 1.6 shows the results: in the absence of a detection, 100 h of observation would place exclusion limits at the level shown there.
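For orientation, the scaling of such limits with the DM mass can be sketched by inverting the standard annihilation-flux relation, Φ = ⟨σv⟩ N_γ J / (8π m²). Every input number below (event upper limit, spectrum-averaged effective area, J-factor) is a hypothetical placeholder, not a value from the study.

```python
import numpy as np

def sigmav_ul(n_ul, m_dm_gev, j_gev2_cm5, t_obs_s, aeff_cm2, n_gamma):
    """<sigma v> (cm^3 s^-1) excluded when at most n_ul signal events are
    allowed; aeff_cm2 is assumed already averaged over the photon spectrum."""
    flux_ul = n_ul / (t_obs_s * aeff_cm2)           # photon flux upper limit
    return flux_ul * 8.0 * np.pi * m_dm_gev**2 / (n_gamma * j_gev2_cm5)
```

All else being equal, the limit degrades as the DM mass squared, which is one reason exclusion curves generally rise towards large masses.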
We also studied the effect of integrating over larger and larger regions: despite the increased number of background events, the signal yield is also larger and, in the case of Fornax, there is a net gain up to an intermediate integration angle, while integrating over even larger regions leads to worse sensitivity.
Distinguishing the dark matter signal from other gamma-ray contributions
In the previous sections we considered separately the contributions of CRs and DM to the total gamma-ray photon yield. This is an unrealistic situation: galaxy clusters are, in fact, complex objects in which gamma-rays may arise from different contributions, possibly of different spatial origin: collisions of accelerated CRs, DM annihilation, and foreground or embedded astrophysical sources.
Fortunately, gamma-rays of different origin typically have different spectral shapes, with the DM-induced emission characterized by a distinctive cut-off at the DM particle mass and other remarkable spectral features Bringmann et al. (2008, 2009), in contrast to the featureless spectral shapes (typically power laws within the energy range of interest here) of the emission due to CRs, the central galaxy, or other astrophysical objects in the cluster. Should VHE emission be detected from a cluster, this fact may be used to discriminate between the components. However, we remark that in order to significantly discriminate the two sources, one would need a detection well above the CR signal, which is not supported by theoretical predictions for most galaxy clusters.
A distinct approach could be based on the different spatial extensions of the various contributions to the VHE gamma-ray emission of galaxy clusters. Individual galaxies emitting within the cluster are typically seen as point-like sources, and one may thus exclude them from the FOV for CR and DM searches. Moreover, since the CR-induced radiation is more concentrated than that induced by DM, one can optimize the ROI to select only the regions where the emission is DM-dominated. In Fig. 1.7, we show the expected brightness profiles of CR and DM photons for the Fornax cluster. One can see that close to the centre the emission is dominated by CR-induced photons, although the exact crossover radius is cluster- and model-dependent; in particular, a possible intrinsic boost factor of the DM signal can affect it. In this example, at larger radii the CR signal fades more rapidly than the DM one. In principle, then, by considering an annular ROI that excludes the central region, one could isolate the DM signal. The maximum integration angle should be optimized according to the specific cluster and emission profile to maximize the sensitivity, as discussed above. Unfortunately, at the moment of writing this report, we did not have sufficient coverage in the MC simulations of extended sources to perform such a study, and we are limited to a qualitative discussion. We mention that this “geometrical” discrimination makes sense only if the DM signal is sufficiently large; otherwise different observational strategies could be more constraining. Finally, we stress again that a large FOV (at least above 1 TeV) with near-constant sensitivity over several degrees will allow CTA to study extended high-energy gamma-ray sources in detail for the first time, with possibly revolutionary consequences for the IACT technique.
1.3 Observations of the Galactic Halo and Centre
The Galactic Centre (GC) is a long-discussed target for indirect DM searches with Cherenkov telescopes (Gondolo and Silk, 1999). The density of the DM halo should be highest in the very centre of the Milky Way, giving rise to a gamma-ray flux from annihilation of DM particles. On the one hand, this view is strengthened by the results of recent $N$-body simulations of CDM halos (Springel et al., 2008) suggesting that, for an observer within the Milky Way, the annihilation signal is not primarily due to small subhalos, but is dominated by the radiation produced by diffuse DM in the main halo. On the other hand, searches close to the GC are made difficult by the presence of the Galactic Centre source HESS J1745-290 (Acero et al., 2010; Aharonian et al., 2009) and of diffuse emission from the Galactic plane (Aharonian et al., 2006). Both emissions can plausibly be explained by astrophysical processes: HESS J1745-290 is thought to be related to the supermassive black hole Sgr A* or the pulsar wind nebula G 359.95-0.04 (Wang et al., 2006), and the diffuse emission is well described as arising from hadronic cosmic rays interacting in giant molecular clouds. In both cases, the measured energy spectra do not fit DM model spectra (Aharonian et al., 2006), making a dominant contribution from DM annihilation or decay unlikely.
In this situation, DM searches should preferentially target regions which are outside the Galactic plane, and hence not polluted by astrophysical gamma-ray emission, but which are still close enough to the GC to exhibit a sizable gamma-ray flux from DM annihilation in the Milky Way halo (Schwanke, 2009). Given the angular resolution of Cherenkov telescopes and the scale height of the diffuse emission from the Galactic plane, these criteria are fulfilled at an angular distance of about $0.3^\circ$ from the GC. This angular scale translates into a distance of 45 pc from the GC when using 8.5 kpc as the galactocentric distance. The radial DM density profiles obtained in $N$-body simulations of Milky Way-sized galaxies, like Aquarius (Springel et al., 2008) and Via Lactea II (Diemand et al., 2008), can be described by Einasto and NFW parameterizations, respectively. These parameterizations differ substantially when extrapolated to the very centre of the Milky Way halo, since the NFW profile is much more strongly peaked. At distances greater than about 10 pc, however, the difference is just a factor of 2, which implies that a search at these angular scales will not be hampered by the imprecise knowledge of the DM density profile at small scales.
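As a consistency check of the quoted numbers, the small-angle relation recovers the angular scale from the 45 pc distance and the 8.5 kpc galactocentric distance:

```latex
\[
\theta \simeq \frac{r}{d_\odot}
       = \frac{45\,\mathrm{pc}}{8.5\,\mathrm{kpc}}
       \approx 5.3\times10^{-3}\,\mathrm{rad}
       \approx 0.3^\circ .
\]
```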
A search for a DM annihilation signal from the halo at angular distances between $0.3^\circ$ and $1.0^\circ$ from the GC has recently been performed using 112 h of H.E.S.S. data (Abramowski et al., 2011). For WIMP masses well above the H.E.S.S. energy threshold of 100 GeV, this analysis provides the currently most constraining IACT limits on the velocity-averaged annihilation cross-section of WIMPs. Towards lower WIMP masses, observations of dwarf galaxies with the Fermi-LAT satellite yield even better limits (Abdo et al., 2010), demonstrating how observations of dwarf galaxies and of the extended GC region jointly constrain the parameter space.
Simulations and Assumptions
The prospects of a search for DM annihilation photons from the Milky Way halo with CTA depend on the performance of the southern CTA array, the applied analysis and background rejection techniques, and the details of the DM distribution and WIMP annihilation. At low energies, the sensitivity of IACTs is limited by the presence of hadron and electron showers, which arrive isotropically and can only be distinguished from photons on a statistical basis. The basic strategy for the halo analysis is therefore to compare the fluxes of gamma-like events from a signal region (with solid angle $\Omega_s$) and a background region (solid angle $\Omega_b$) and to search for DM features in the background-subtracted energy spectra. The signal region can be chosen such that it has the same instrumental acceptance as the background region, but is located closer to the GC and therefore features a higher DM annihilation flux. For the purpose of this section, we rewrite Eq. (1.1) in terms of the differential DM photon rate expected from the signal and background regions:
where $dN_\gamma/dE$ is the photon spectrum generated in the annihilation of a WIMP of mass $m_\chi$, and $A_{\rm eff}$ is the CTA effective area for photons, which depends on the position of the region within the FOV, the energy and further parameters (like the zenith angle of the observations). $\tilde{J}$ is the line-of-sight integral over the squared DM density (cf. Eq. 1.2). Since the DM density depends only on the distance to the GC, the line-of-sight integral, and hence the astrophysical factor, is a function only of the angular distance from the GC. Assuming that the signal and background regions differ only with respect to their DM annihilation flux and their relative size $\kappa = \Omega_s/\Omega_b$, the rate of excess photon events is given by
Clearly, this rate vanishes when the astrophysical factors of the signal and background regions are identical, which implies that in the case of an isothermal DM density profile, a halo analysis with signal and background regions chosen too close to the GC will not allow limits to be placed on the annihilation cross-section.
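The cancellation described here can be made explicit in a few lines; `phi_pp` is a placeholder bundling the particle-physics term and the effective area of Eq. 1.4, and the region sizes are illustrative:

```python
def excess_rate(j_sig, omega_sig, j_bkg, omega_bkg, phi_pp):
    """Background-subtracted excess rate (sketch of Eq. 1.4): the rate in
    the signal region minus the background-region rate scaled by the
    solid-angle ratio kappa = Omega_s / Omega_b."""
    kappa = omega_sig / omega_bkg
    rate_sig = phi_pp * j_sig * omega_sig    # signal-region photon rate
    rate_bkg = phi_pp * j_bkg * omega_bkg    # background-region photon rate
    return rate_sig - kappa * rate_bkg
```

Since the result is proportional to the difference of the two astrophysical factors, a profile that is flat across both regions, such as an isothermal sphere viewed close to the GC, yields no measurable excess.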
Given an observation time $T$, Eq. 1.4 can be used to estimate the number of excess photons for a particular realization of CTA and a DM model defining $m_\chi$, $\langle\sigma v\rangle$ and $dN_\gamma/dE$. Equivalently, one can place a limit on $\langle\sigma v\rangle$ given an upper limit on the number of excess photon events. Simulations of the two candidate arrays at a fixed zenith angle were used to infer the effective area for diffuse photons and the residual rate of protons anywhere in the FOV. Both arrays feature large-size telescopes and are therefore suitable for studies in the low-energy domain. The available observation time was set to 100 h, about 10% of the total observation time per year.
Two different ways of defining signal and background regions were employed and compared, namely the so-called Ring Method and the On-Off Method. For the Ring Method, the candidate arrays were assumed to observe the GC region at a pointing position offset from the GC in Galactic longitude and latitude, and signal and background regions were placed in the same FOV, as illustrated in Fig. 1.8. An annulus with inner radius $r_1$ and outer radius $r_2$ around the observation position was constructed and divided into signal and background regions, such that the signal region is closer to the GC and therefore has a larger astrophysical factor. The separation of the signal and background regions is achieved by a circle of radius $r_{\rm cut}$ around the GC, whose intersection with the annulus defines the signal region; all other parts of the ring were considered as background region. The values of the four geometrical parameters were optimized such that the attained significance of a DM signal per square root of observation time was maximized. The maximization was carried out for a wide range of WIMP masses, but the dependence on the actual WIMP mass was found to be fairly weak. The derived values for both candidate arrays are listed in Tab. 1.2. Judging from present IACT observations, we do not expect strong diffuse gamma-ray emission to extend outside the box used to mask the Galactic disc. New point-like or slightly extended sources will be excluded, making the On and Off regions smaller. In addition, the approach is only sensitive to gradients in the diffuse gamma-ray emission, whereas the charged-particle background is isotropic. In the optimization process an Einasto profile was assumed for the DM signal, but the optimal values depend only weakly on the assumed profile in the region away from the Galactic plane.
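The Ring Method geometry can be sketched on a pixel grid: an annulus around the pointing position, cut by a circle around the GC, with a band masking the Galactic plane. All radii and grid settings below are illustrative, not the optimized values of Tab. 1.2:

```python
import numpy as np

def ring_regions(gc_offset_deg, r1, r2, r_cut, mask_b_deg=0.3,
                 npix=400, half_fov=2.5):
    """Boolean masks of signal/background pixels for the Ring Method.
    The pointing sits at the origin and the GC at (gc_offset_deg, 0)."""
    grid = np.linspace(-half_fov, half_fov, npix)
    x, y = np.meshgrid(grid, grid)
    r_point = np.hypot(x, y)                   # distance to pointing position
    r_gc = np.hypot(x - gc_offset_deg, y)      # distance to the GC
    in_ring = (r_point >= r1) & (r_point <= r2)
    off_plane = np.abs(y) > mask_b_deg         # crude Galactic-plane mask
    signal = in_ring & off_plane & (r_gc <= r_cut)
    background = in_ring & off_plane & (r_gc > r_cut)
    return signal, background
```

Because signal and background pixels lie at the same radii from the pointing position, they share the same radial acceptance, which is the key property of the method.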
The use of the annulus implies the same acceptance for the signal and background regions, since the acceptance is, to a good approximation, a function only of the distance to the observation position. Placing both signal and background regions in the same FOV also implies that both regions are affected by time-dependent effects in a similar way. A disadvantage, however, is that the angular distance between the signal and background regions is only of the order of the FOV diameter, reducing the contrast in Eq. 1.4 considerably. This contrast was increased in the On-Off Method, where data-taking with an offset of typically 30 min in Right Ascension was assumed. In this mode, the telescopes first track for half an hour the same observation position as in the Ring Method, which defines the signal region. The telescopes then slew back and follow the same path on the sky for another 30 min. The second pointing has the same acceptance as the first one, since the same azimuth and zenith angles are covered, but generates a background region at a much larger angular distance from the GC. In the On-Off Method, the observation time was 50 h for the signal and 50 h for the background region, giving again a total observation time of 100 h. Regardless of whether the Ring Method or the On-Off Method was used, all areas close to the Galactic plane were excluded from the signal and background regions to avoid pollution from astrophysical gamma-rays.
The astrophysical factor (Eq. 1.2) was taken from the Aquarius simulation (Springel et al., 2008), corrected for the presence of subhalos below the resolution limit of the simulation. Table 1.3 lists the astrophysical factors of the signal regions defined in the Ring and On-Off Methods, respectively. In the case of the On-Off Method, the signal region was defined as the total effective FOV of the On-pointing, which introduces a dependence on the WIMP mass, since the effective FOV grows with photon energy. For the WIMP annihilation spectrum, several different choices were considered. The generic Tasitsiomi spectrum (Tasitsiomi and Olinto, 2002) is appropriate for a dominant annihilation into quark-antiquark pairs with subsequent hadronization and pion decay, and was used in the optimization of the parameters of the Ring Method. Other spectra were explored by considering other final states (Cembranos et al., 2011).
The two panels of Fig. 1.9 show the upper limits for WIMP masses between 0.1 TeV and 10 TeV, translated from the sensitivity using the method of Feldman and Cousins (1998). Each curve corresponds to one set of assumptions. It is evident that the most constraining limits are obtained for masses of about 0.5 TeV, a factor of 2 lower than for current IACT arrays like H.E.S.S., which reach their best sensitivity around 1 TeV. This is a direct consequence of the lower threshold and superior stereoscopic background rejection of the CTA candidate arrays. Typical limits represent a factor of 10 improvement compared to current IACTs. The comparison of the two candidate arrays shows that the limits for one of them are always better, which can be understood from the fact that it comprises more large-size telescopes; the magnitude of this effect is, however, comparatively small. Overall, CTA should be able to probe the parameter space below the velocity-averaged annihilation cross-section expected for thermally produced DM, for WIMP masses between several tens of GeV and several TeV.
The upper panel of Fig. 1.9 illustrates the impact of data-taking with the Ring Method and the On-Off Method for the case of a dominant annihilation into quark-antiquark pairs (Tasitsiomi and Olinto, 2002). The On-Off Method is more sensitive than the Ring Method. One must keep in mind, however, that the On-Off Method spends 50% of the observation time far away from the GC, which implies that this data set will be of limited use for studies of astrophysical sources. Another drawback of the On-Off Method is its susceptibility to systematic effects arising from variations of the data-taking conditions (electronics, atmosphere). In view of this, the increased sensitivity of the DM halo analysis in parts of the parameter space will probably not suffice to motivate the acquisition of a larger data set in this mode.
Compared with the choice of CTA candidate array and analysis method (Ring or On-Off), the WIMP annihilation spectrum has the strongest impact on the CTA sensitivity. The lower panel of Fig. 1.9 shows, for both candidate arrays and the Ring Method, the limits obtained for different dominant annihilation channels. The small photon yield of some final states implies limits that are a factor of about 5 worse than in the most favorable channel. It is clear that the full potential of the halo analysis will be exploited by confronting individual DM models, with their predicted WIMP annihilation spectra, with the data.
1.4 Anisotropies in the diffuse gamma-ray background
Besides the gamma-rays from individually resolved sources and the Galactic foreground, another component of diffuse gamma-ray background radiation has been detected and shown to be nearly isotropic. This radiation dominantly originates from conventional unresolved point sources below the detection threshold, while another fraction might be generated by self-annihilating (or decaying) DM particles, which could then produce specific signatures in the anisotropy power spectrum of the diffuse gamma-ray background Ando et al. (2007); Ando (2009); Cuoco et al. (2008); Siegal-Gaskins (2008); Siegal-Gaskins et al. (2010). The different hypotheses about the origin of the gamma-ray background may be distinguishable by accurately measuring its anisotropy power spectrum.
Compared to the current generation of IACTs, CTA will have improved capabilities to measure anisotropies in the diffuse gamma-ray background, thanks to a better angular resolution (determined by the point-spread function, PSF), a larger FOV, and a higher background rejection efficiency. In the following, we discuss the effects of different assumptions on the background level and the anisotropy spectrum on the reconstruction of the power spectrum for the current generation of IACTs, and address the improvement obtainable with CTA. Finally, we make predictions for the discrimination between astrophysical and dark-matter-induced anisotropy power spectra with CTA.
In order to investigate the measured power spectrum and the impact of instrumental characteristics, a sample of event lists containing anisotropies generated with Monte Carlo simulations was analyzed. The event lists were simulated by generating skymaps with a given anisotropy power spectrum. In total, a set of skymaps covering the size of the FOV and centred at different celestial positions was created, with a power spectrum for a given multipole moment $\ell$ defined as $C_\ell = \langle |a_{\ell m}|^2 \rangle$, where the $a_{\ell m}$ denote the coefficients of a (real-valued) spherical function decomposed into spherical harmonics. With $\langle a_{\ell m}\rangle = 0$, $C_\ell$ reflects the width of the distribution of the coefficients, which was assumed to be Gaussian. The simulations were made for power-law spectra $C_\ell \propto \ell^{-\gamma}$ with different slopes $\gamma$. The pixel size of these skymaps sets the maximum multipole that can be resolved. The skymaps were normalized such that the pixel with the smallest signal was assigned the value 0 and the pixel with the largest signal the value 1. Anisotropy power spectra were then derived from the resulting fluctuation maps, such that for a full signal the maximum allowed difference in each map equals unity. Note that this difference can be smaller when an additional isotropic noise component is present.
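A toy fluctuation map with a power-law spectrum can be generated with plain numpy in the flat-sky approximation (the study itself uses full-sky spherical harmonics on HEALPix maps; the FFT shortcut below is only a stand-in, with illustrative grid settings):

```python
import numpy as np

def make_fluctuation_map(npix=128, fov_deg=5.0, slope=-2.0, seed=1):
    """Flat-sky toy map with C_l ~ l^slope, rescaled to [0, 1] so that the
    dimmest pixel is 0 and the brightest is 1, as in the text."""
    rng = np.random.default_rng(seed)
    freq = np.fft.fftfreq(npix, d=np.radians(fov_deg) / npix) * 2.0 * np.pi
    lx, ly = np.meshgrid(freq, freq)
    ell = np.hypot(lx, ly)                     # radial multipole of each mode
    ell[0, 0] = 1.0                            # avoid dividing the mean mode
    amplitude = ell ** (slope / 2.0)           # sqrt(C_l)
    modes = rng.standard_normal((npix, npix)) \
        + 1j * rng.standard_normal((npix, npix))
    m = np.fft.ifft2(amplitude * modes).real
    return (m - m.min()) / (m.max() - m.min())
```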
An event was simulated in three subsequent steps. First, a celestial position was randomly chosen within the FOV, and the event was classified as either a signal or an isotropic-noise event. The decision for a signal event was based upon a uniform random number between 0 and 1: if the number was smaller than the skymap value at the corresponding position, the event was accepted as a signal event; otherwise, another event position was selected and the procedure reapplied. Subsequently, the event map was convolved with a PSF similar in width to the resolution of current IACTs; the effect of a better angular resolution is discussed below. The event maps were simulated to contain a fixed number of entries. Note that this number, as selected for the toy model, does not in general reflect the actual number of expected physical signal events. Therefore, the following is focused more on a qualitative discussion of the criticalities of the calculation than on quantitative predictions.
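The event-generation steps above reduce to a classification draw plus rejection sampling; a sketch follows (the PSF-convolution step is omitted, and the map and event counts are illustrative):

```python
import numpy as np

def draw_events(skymap, n_events, signal_fraction, seed=0):
    """Draw event pixel indices: each event is first classified as signal or
    isotropic noise; signal positions are rejection-sampled against the
    normalized skymap (values in [0, 1])."""
    rng = np.random.default_rng(seed)
    npix = skymap.shape[0]
    events = np.empty((n_events, 2), dtype=np.int64)
    for k in range(n_events):
        if rng.random() < signal_fraction:          # signal event
            while True:                             # rejection sampling
                i, j = rng.integers(0, npix, size=2)
                if rng.random() < skymap[i, j]:
                    break
            events[k] = i, j
        else:                                       # isotropic noise event
            events[k] = rng.integers(0, npix, size=2)
    return events
```

With a pure signal (signal_fraction of 1), the accepted positions trace the skymap brightness; lowering the fraction dilutes the anisotropy with isotropic noise, which is the effect studied in Fig. 1.11.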
To analyze an event list, a HEALPix skymap was filled accordingly and analyzed using the HEALPix software,
where $n_i$ denotes the number of events in pixel $i$, and the pixel indicator function equals 1 inside pixel $i$ and 0 outside. $W$ describes the windowing function — in this case the FOV with Gaussian acceptance — and $f$ denotes the original signal function over the full sky. The windowing function was normalized such that its integral over the full sky equals unity:
Note that this differs from other analyses of this type, where the windowing function is defined such that its maximum value is unity. This choice of normalization was made to keep the simulation code simple, and the results should be equivalent. Final results were averaged over the corresponding skymaps.
The effect of the anisotropy spectrum and the residual background level on the spectral reconstruction
In Fig. 1.10, we show the mean value and the RMS of the power spectra. The value $C_\ell$ represents the strength of anisotropies at the angular scale corresponding to the multipole $\ell$. Anisotropies smaller than the angular resolution (defined by the PSF) are smeared out. This effect is clearly visible at large $\ell$, where the power spectra converge to the Poissonian noise of the isotropic background spectrum. Furthermore, anisotropies larger than the FOV are truncated at low $\ell$ by the windowing function. The simulated FOV in Fig. 1.10 has a width comparable with the FOV of current IACT experiments. For the toy model, Fig. 1.10 demonstrates that, at intermediate multipoles, power spectra of different slopes are separable within the statistical errors and distinguishable from isotropic noise. CTA will have a smaller PSF as well as a larger FOV. This will make the signal vanish only at larger $\ell$ than in the example, while the windowing function will affect the spectrum only at smaller $\ell$ than in the figure. Therefore, we conclude that the FOV and the PSF, while important, will not be crucial for the investigation of anisotropies with CTA in the desired multipole range.
In general, the measured flux will be composed of both signal and background events. The background is produced mainly by two separate processes:
Events caused by cosmic rays (protons and electrons) which are misinterpreted as photon events.
An isotropic component of the photon background radiation, which does not count as signal according to our definition.
The influence of an isotropic background is demonstrated in Fig. 1.11, where the power spectrum is shown for different background levels. Here, the signal fraction is defined as the ratio of the number of signal events to the total number of events. The overall power is clearly reduced in the case of a fully isotropic background. From the figure, we see that when the signal fraction improves by a factor of 5, the power spectrum is boosted by about two orders of magnitude. For this reason, we expect the ten-fold improved CTA sensitivity to mark the major difference with respect to the current generation of IACTs for such studies.
Prospects for discriminating astrophysical and dark matter anisotropies
The theoretical expectations for the power spectra of the diffuse gamma-ray flux of both the astrophysical and the DM components are highly model-dependent. Since the astrophysical component is dominated by the gamma-ray flux from unresolved point sources, expected to have a constant (Poissonian) power spectrum, we conservatively assume the slope of the DM component(s) to be similar. In this scenario, the difference between the power spectra manifests itself in the normalization: the values expected for unresolved point sources and for DM-induced anisotropies (assuming the thermal annihilation cross-section) differ (see, e.g., Siegal-Gaskins et al., 2010). In our simulation, this was realized by distributing point sources over the full sky. While not a physical model, this is a convenient way of producing a Poissonian anisotropy power spectrum, which is a reasonable assumption for generic astrophysical and DM emitters. The normalization of the signal was set by extrapolating the spectrum of the extragalactic gamma-ray background (EGB) Abdo et al. (2010a) to the energies of interest. Note that the strength of the DM annihilation signal is strongly affected by the formation histories of DM halos and the distribution of DM subhalos. For example, Fig. 3 in Abdo et al. (2010b) shows that the gamma-ray spectrum of DM annihilation could reach the measured gamma-ray background spectrum and therefore deliver a significant fraction of the measured flux. Here, we investigate the cases in which the total EGB originates from astrophysical sources and in which 20% of the EGB (optimistically) originates from DM annihilation. The isotropic hadronic component depends on the analysis cuts and the quality of the gamma-hadron separation; in the following, three different background rates, from conservative to optimistic, are assumed. We assume a CTA-like FOV and a CTA-like PSF. The results are shown in Fig. 1.12, where each band represents a sample of 20 realizations.
One can see in the figure that, depending on the achieved background rate, the two above-mentioned scenarios will in principle be well distinguishable with CTA.
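The scaling behind this distinction can be illustrated with a toy Monte Carlo (Python; the mean source counts and pixel numbers are arbitrary illustrative values, not those of the study): a Poissonian field of unresolved emitters has a flat angular power spectrum whose normalization scales inversely with the mean number of emitters per sky element, so many faint DM sub-emitters yield a lower relative fluctuation power than fewer, brighter astrophysical sources.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_fluctuation_power(mean_per_pixel, n_pix=100_000):
    """Relative fluctuation power of a Poisson field of unresolved sources.

    For point sources with no intrinsic clustering, the angular power
    spectrum is flat ("Poissonian"), and the relative pixel-to-pixel
    variance scales as 1 / (mean counts per pixel).
    """
    counts = rng.poisson(mean_per_pixel, size=n_pix)
    delta = counts / counts.mean() - 1.0
    return delta.var()

# A few bright astrophysical sources per pixel -> large fluctuation power;
# many faint DM "sub-emitters" per pixel -> smaller fluctuation power.
p_astro = poisson_fluctuation_power(mean_per_pixel=2.0)
p_dm = poisson_fluctuation_power(mean_per_pixel=200.0)
```

The two scenarios differ only in this normalization, which is exactly the handle the anisotropy analysis exploits.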
2 Search for axion-like particles with CTA
Axions were proposed in the 1970s as a by-product of the Peccei-Quinn solution of the strong-CP problem in QCD Peccei and Quinn (1977). In addition, they are valid candidates to constitute a portion of, or perhaps the totality of, the non-baryonic CDM content predicted to exist in the Universe. Another extremely interesting property of axions, or more generically, Axion-Like Particles (ALPs, for which — unlike axions — the mass and the coupling constant are not related to each other), is that they are expected to convert into photons (and vice versa) in the presence of magnetic fields Dicus et al. (1978); Sikivie (1983). The photon/ALP mixing is indeed the main signature used at present in ALP searches, such as those carried out by CAST Andriamonje et al. (2007); Arik et al. (2009); Aune et al. (2011); Iguaz, for the CAST Collaboration (2011) or ADMX Duffy et al. (2006), but it could also have important implications for astronomical observations. For example, photon/ALP mixing could distort the spectra of gamma-ray sources, such as Active Galactic Nuclei (AGN) Hooper and Serpico (2007); de Angelis et al. (2007); Hochmuth and Sigl (2007); Simet et al. (2008) or galactic sources, in the TeV range Mirizzi et al. (2007).
The photon/ALP mixing effect for distant AGN was also evaluated by Sánchez-Conde et al. (2009) under a consistent framework, where mixing takes place inside or near the gamma-ray emitter as well as in the intergalactic magnetic field (IGMF). A diagram that outlines this scenario is shown in Fig. 2.1: the artistic sketch shows the travel of a photon from the source to the Earth and the main physical cases that one could identify.
The probability of a photon of energy E being converted into an ALP (and vice versa) can be written as Hooper and Serpico (2007):

P_0 = \frac{1}{1 + (E_c/E)^2}\, \sin^2\!\left[\frac{B\,s}{2M}\sqrt{1 + \left(\frac{E_c}{E}\right)^2}\,\right],

where s is the length of the domain where there is a roughly constant magnetic field B, and M the inverse of the coupling constant. Here we also defined a characteristic energy, E_c:

E_c \equiv \frac{m^2\, M}{2\, B},

or in more convenient units:

E_c(\mathrm{GeV}) \simeq 2.5\; m^2_{\mu\mathrm{eV}}\; M_{11}\; B^{-1}_{\mathrm{G}},

where the subindices refer to dimensionless quantities: m_{\mu\mathrm{eV}} = m/\mu\mathrm{eV}, M_{11} = M/10^{11}\,\mathrm{GeV} and B_{\mathrm{G}} = B/Gauss; m is the effective ALP mass, m^2 \equiv |m_a^2 - \omega_{pl}^2|, with \omega_{pl} the plasma frequency, which is set by the electron density. The most recent results from the CAST experiment Iguaz, for the CAST Collaboration (2011) give a lower bound on M (equivalently, an upper bound on the coupling, g_{a\gamma} \lesssim 8.8 \times 10^{-11}\,\mathrm{GeV}^{-1}) for ALP masses m_a \lesssim 0.02 eV. At present, the CAST bound is the most general and stringent limit over a wide range of ALP masses.
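The mixing formulas above can be sketched numerically. The following Python snippet implements the standard single-domain conversion probability and the convenient-units scaling of the critical energy (as given, e.g., in Hooper and Serpico (2007) and Sánchez-Conde et al. (2009)); the function and variable names are ours.

```python
import numpy as np

def critical_energy_GeV(m_mueV, M11, B_G):
    """Critical energy E_c ~ 2.5 GeV * m^2[µeV] * M[1e11 GeV] / B[G].

    Standard photon/ALP mixing scaling; all inputs are the
    dimensionless quantities defined in the text.
    """
    return 2.5 * m_mueV**2 * M11 / B_G

def conversion_probability(E, E_c, delta):
    """Photon -> ALP conversion probability in one coherent B-field domain.

    P_0 = sin^2(delta * sqrt(1 + (E_c/E)^2)) / (1 + (E_c/E)^2), with
    delta = B*s/(2M) the dimensionless mixing strength. Below E_c the
    conversion is strongly suppressed; well above E_c it saturates.
    """
    x = 1.0 + (E_c / E) ** 2
    return np.sin(delta * np.sqrt(x)) ** 2 / x

p_high = conversion_probability(1e6, 1.0, 0.3)   # E >> E_c: saturated
p_low = conversion_probability(0.1, 1.0, 0.3)    # E < E_c: suppressed
```

The suppression below E_c and the saturation above it are the two regimes discussed in the remainder of this section.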
The main effect produced by photon/ALP mixing in the source is an attenuation in the total expected intensity of the source just above a critical energy E_c (see Fig. 2.1). As for the mixing in the IGMFs, despite the low magnetic field B, the photon/ALP conversion can take place due to the large distances involved. In the model of Sánchez-Conde et al. (2009), it is assumed that the photon beam propagates over N domains of a given length. The modulus of the IGMF is the same in all of them, whereas its orientation changes randomly from one domain to the next, which in practice is also equivalent to a variation in the strength of the component of the magnetic field relevant to the photon/ALP mixing.
In discussing photon/ALP conversion in IGMFs, it is also necessary to consider the important role of the Extragalactic Background Light (EBL), its main effect being an additional attenuation of the photon flux (especially at energies above about 100 GeV). Recent gamma-ray observations already pose substantial challenges to the conventional models that explain the observed source spectra in terms of EBL attenuation Aleksić et al. (2011); Aliu et al. (2009); Neshpor et al. (1998); Stepanyan et al. (2002); Krennrich et al. (2008).
Taken together, photon/ALP conversions in the IGMF can lead to an attenuation or an enhancement of the photon flux at Earth, depending on distance, magnetic fields and the EBL model considered. A flux enhancement is possible because ALPs travel unimpeded through the EBL, and a fraction of them can convert back into photons before reaching the observer. Note that the strength of the IGMFs is expected to be many orders of magnitude weaker (of order nG) than that of the source and its surroundings (of order G). Consequently, as described by Eq. (2.3), the energy at which photon/ALP conversion occurs in this case is many orders of magnitude larger than that at which conversion can occur in the source and its vicinity. Assuming a mid-value of B \approx 0.1 nG, and M at the CAST lower limit, the effect could be observationally detectable by IACTs only if the ALP mass is of the order of 10^{-10} eV, i.e. we need ultra-light ALPs.
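The interplay just described can be caricatured with a toy incoherent transport model (Python): per-domain mixing with a random efficiency mimicking the random field orientation, and EBL absorption acting on the photon component only. This is only a sketch of the mechanism (the actual computation is a coherent three-state mixing over N domains); all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate(n_domains, p_mix, tau_total, with_alps=True):
    """Toy incoherent photon/ALP transport over N IGMF domains.

    Per domain: a random mixing efficiency (mimicking the random B
    orientation) swaps a fraction of the beam between the photon and
    ALP states, then EBL absorption removes part of the photon flux.
    ALPs are unaffected by the EBL, which is the origin of the boost.
    """
    photon, alp = 1.0, 0.0
    absorb = np.exp(-tau_total / n_domains)  # EBL survival per domain
    for _ in range(n_domains):
        if with_alps:
            p = p_mix * rng.random()  # random orientation -> random efficiency
            photon, alp = (photon * (1 - p) + alp * p,
                           alp * (1 - p) + photon * p)
        photon *= absorb
    return photon

flux_no_alp = propagate(200, 0.0, tau_total=4.0, with_alps=False)
flux_alp = propagate(200, 0.1, tau_total=4.0, with_alps=True)
boost = flux_alp / flux_no_alp
```

For a strongly absorbed beam, part of the flux "hides" in the ALP state and reconverts, so the final photon flux exceeds the pure-EBL expectation, which is the axion boost discussed next.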
In order to quantitatively study the effect of photon-axion conversion over cosmological distances from AGN, we consider the total photon intensity. It then becomes useful to define the axion boost factor as the difference between the predicted photon intensity arriving at Earth without including ALPs and that obtained when including the photon/ALP conversions. Qualitatively speaking, it is found that the more attenuating the EBL model considered, the more relevant the effect of photon/ALP conversions in the IGMF (since any ALP to photon reconversion might substantially enhance the intensity arriving at Earth). Furthermore, higher B values do not necessarily translate into higher photon flux enhancements. There is always a B value that maximizes the axion boost factors; this value is sensitive to the source distance, the considered energy and the adopted EBL model (see Ref. Sánchez-Conde et al. (2009) for a more detailed discussion).
There could indeed be different approaches from the observational point of view, although all of them would probably be based on the search for, and analysis of, a systematic residual after applying the best-fit (conventional) model to the AGN data. For example, Ref. Sánchez-Conde et al. (2009) predicts the existence of a universal feature in the spectra of the sources due to the intergalactic mixing, which is completely independent of the sources themselves and depends only on the ALP and IGMF properties. This feature should be present at the same critical energy E_c for all sources, and would show up in the spectra as a drop in the flux (whenever E_c is in the range where the EBL effect is negligible), or even as a sudden flux increase, if the EBL absorption is strong at E_c.
Test case for CTA: PKS 1222+21
We have taken as a test source the flat spectrum radio quasar 4C +21.35 (PKS 1222+21), at redshift z = 0.432, which was detected by MAGIC above 70 GeV Aleksić et al. (2011) in June 2010, during a target of opportunity observation triggered by the high state of the source in the Fermi-LAT energy band. This source is the second most distant object detected by ground-based gamma-ray telescopes, and hence an ideal candidate for the study of propagation effects. The observed energy spectrum of 4C +21.35 during the 0.5 hour flare recorded by MAGIC was well described by a steep power law. The intrinsic spectrum, assuming the EBL model of Domínguez et al. (2011), was estimated to be a harder power law which, extrapolated down to lower energies, connects smoothly with the harder spectrum measured by Fermi-LAT during a period encompassing the MAGIC observation. It must be noted that longer-term Fermi-LAT observations of the source in various states of activity show a break in the spectrum between 1 and 3 GeV, with a spectral index after the break ranging between 2.4 and 2.8 Tanaka et al. (2011).
We have simulated CTA observations of 4C +21.35 assuming an intrinsic unbroken power-law spectrum in the relevant energy range, like the one determined by MAGIC for 4C +21.35 during the flare. Keeping the spectral shape unchanged, we have tried different absolute flux normalizations, taking as a reference the flux observed by MAGIC. We have also tested different observation times: the actual duration of the VHE flare observed by MAGIC is unknown, since the observation was interrupted while the flare was still going on, but the flares observed by Fermi-LAT above 100 MeV show rise and decay time scales of the order of a day Tanaka et al. (2011), so it is reasonable to expect that the source may stay several hours in flux states as high as that observed by MAGIC. For the detector simulation we have used one of the CTA candidate arrays. The EBL model of Ref. Domínguez et al. (2011) has been used to account for the effect of the EBL, and the conversion of photons into ALPs and vice versa has been simulated following the formalism detailed in Ref. Sánchez-Conde et al. (2009), as outlined above. Only conversions in the IGMF have been considered (in this case, mixing in the source typically leads to only a few percent flux attenuation, so we neglected it in order to avoid extra uncertainties).
We have assumed the same parameters for the IGMF as those of the fiducial model in Sánchez-Conde et al. (2009): B = 0.1 nG for the (constant) modulus of the IGMF, with an orientation that changes randomly from one coherence domain to the next.
Using the performance parameters of the array, we obtain the expected gamma-ray and cosmic-ray background rates in bins of estimated energy, and from them the reconstructed differential energy spectrum. After this, we correct the observed spectrum by the energy-dependent attenuation factors expected from the EBL in order to get an estimate of the intrinsic source spectrum. Each simulated spectrum is fitted to a power law with a variable (energy-dependent) photon index, in which we constrain the curvature parameter so that the spectrum cannot become harder with increasing energy (such behavior is not expected from emission models in this energy range). Only energy bins with a signal exceeding three times the RMS of the background, and a minimum of 10 excess events, are considered in the fit.
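A minimal sketch of such a constrained fit is given below (Python; `fit_curved_powerlaw` and all numerical values are our illustrative choices, not the analysis code of the study). The curvature term is clipped so the fitted spectrum can only soften with energy, as described above.

```python
import numpy as np

def fit_curved_powerlaw(E, F, E0=0.1):
    """Fit dN/dE = K (E/E0)^-(Gamma + beta*log10(E/E0)) in log-log space.

    beta >= 0 is enforced so the fitted spectrum can only soften with
    energy; a negative best-fit beta (spectral hardening, not expected
    from standard emission models) is clipped to zero and the plain
    power law is refitted. Sketch only: no error weighting.
    """
    x = np.log10(E / E0)
    y = np.log10(F)
    b2, b1, b0 = np.polyfit(x, y, 2)   # y = b2 x^2 + b1 x + b0
    beta, gamma = -b2, -b1
    if beta < 0:                        # would harden with energy: clip
        beta = 0.0
        b1, b0 = np.polyfit(x, y, 1)
        gamma = -b1
    return 10**b0, gamma, beta          # K, photon index, curvature

E = np.logspace(-1.5, 0.5, 20)          # energies in TeV
F = 1e-7 * (E / 0.1) ** -2.7            # a pure power law, index 2.7
K, gamma, beta = fit_curved_powerlaw(E, F)
```

For an input pure power law the fit recovers the index with zero curvature; an ALP-induced high-energy boost would instead drive the fit away from this family and degrade the fit probability.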
In the absence of any significant photon/ALP mixing, the resulting fits will all match the spectral points within the experimental uncertainties, resulting in good chi-square values. But, as shown in Ref. Sánchez-Conde et al. (2009), certain combinations of ALP parameters and values of the IGMF may result in significant modifications of the observed VHE spectra. The most striking feature is a boost of the expected flux at high energies, which is particularly prominent in the estimated intrinsic (i.e. EBL-de-absorbed) spectrum. Such a feature may result in a low value of the chi-square probability of the spectral fit. In Figs. 2.2 and 2.3 we show two such cases, in which the observed spectra, after de-absorption of the EBL effect, show a clear hardening of the spectral index. The effect is particularly striking in the cases in which the EBL absorption at E = E_c is already strong (e.g. Fig. 2.3), because then the boost sets in very fast, resulting in dN/dE rising with energy at around E_c. The rise is actually very sharp, but it is smoothed by the energy resolution of the instrument. An improvement in the energy resolution would increase the significance of the feature and improve the determination of E_c. In contrast, if E_c is in the range in which the EBL absorption is small or negligible (Fig. 2.2), the feature at E_c would just be a modest flux drop Sánchez-Conde et al. (2009), also washed out by the instrumental energy resolution. In those cases, though a high-energy boost may still be clearly detected, it would be hard to determine the exact value of E_c. This is because, in the formalism described in Ref. Sánchez-Conde et al. (2009), similar ALP boost factors are always achieved at energies well above E_c, independently of the particular value of E_c in each case.
For each of the E_c values scanned, we have performed simulations of a CTA observation, all with the same source flux and observation time. We consider that a given value of E_c is within the reach of CTA whenever the median of the chi-square probability distribution is below 2.87 x 10^{-7}, which corresponds to 5 standard deviations. In Fig. 2.4 we show the median of the probability versus E_c, for two different assumptions on the source flux and two different observation times. The range of E_c which can be probed with CTA for the different scenarios is the one for which the curves in Fig. 2.4 are below the dashed horizontal line. As expected, the range becomes larger as we increase the observation time and/or the flux of the source. A 0.5 h duration flare like the one reported in Aleksić et al. (2011) would not be enough for CTA to detect a significant effect in any of the tested ALP scenarios, i.e. the solid black line never goes below the dashed line for any value of E_c. A flare of similar intensity, but lasting 5 hours (green line), would already be enough to see the boost due to ALPs for part of the tested E_c range. In Fig. 2.4 we can also see that for a hypothetical flare with an intensity 5 times larger, lasting 5 hours, the accessible range of E_c would extend up to 1.3 TeV.
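For reference, the 5-sigma threshold on the median fit probability corresponds to the one-sided Gaussian tail integral, which can be computed with the Python standard library alone:

```python
import math

def sigma_to_pvalue(n_sigma):
    """One-sided Gaussian tail probability for an n-sigma fluctuation."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

# Detection criterion used above: median fit probability below the
# one-sided 5-sigma tail, ~2.87e-7.
p5 = sigma_to_pvalue(5.0)
```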
3 High Energy Violation of Lorentz Invariance
Lorentz Invariance (LI) lies at the heart of all of modern physics, in particular the unification of space and time through the principle of Special Relativity. Space-time was elegantly promoted to be a dynamic entity in the covariant classical theory of gravity, namely General Relativity (GR), which has been rigorously tested on astronomical scales and underlies the mathematical description of cosmology. Similarly, quantum mechanics has been successfully married with Special Relativity to yield the quantum theory of fields, which underlies the very successful Standard Model of leptons and quarks and the gauged electromagnetic, weak and strong forces. However, it has proved considerably more difficult to unify gravity with the other forces, since GR is fundamentally non-renormalisable. A fully quantum theory of gravity (QG) is still beyond our grasp, although there has been significant progress towards this goal in various approaches such as superstring theory and loop quantum gravity Rovelli (2000). QG should describe dynamics at the Planck energy E_{Pl} \approx 1.2 \times 10^{19} GeV, or equivalently the Planck length l_{Pl} \approx 1.6 \times 10^{-33} cm, where gravitational effects should become as strong as the other forces and the notion of space-time is likely to need revision. This has opened up the possibility that LI may be violated by QG effects although, lacking a fully dynamical theory, the expectation is generic rather than definite. For example, quantum fluctuations may produce 'space-time foam' at the Planck scale, resulting in a non-trivial refractive index and anomalous dispersion of light in vacuo, i.e. an energy dependence of the speed of light. Hence over the past decade there has been tremendous interest in testing LI at high energies as part of what has come to be called 'quantum gravity phenomenology' Sarkar (2002); Mattingly (2005); Liberati and Maccione (2009).
A possible energy dependence of the speed of light in vacuum has been predicted in the framework of several quantum gravity models and effective field theory models Myers and Pospelov (2003). The seminal paper by Amelino-Camelia et al. (1998) proposed that this can be parameterised by a Taylor expansion of the usual dispersion relation:

E^2 \simeq p^2 c^2 \left[ 1 - \sum_n \xi_n \left( \frac{E}{E_{QG,n}} \right)^n \right],

where the values of the coefficients \xi_n would be specified by the theory of quantum gravity (and may well turn out to be zero). For example, there are specific predictions in some toy models Alfaro et al. (2002); Ellis et al. (2008), and a general parameterisation can be provided in the framework of effective field theory Myers and Pospelov (2003). For more details see the introduction by Ellis and Mavromatos (2011) in Part A of this Special Issue. Typically, two scenarios are envisaged according to whether the linear (n = 1) or the quadratic (n = 2) term is dominant, parametrised by the scale parameters E_{QG1} (linear case) and E_{QG2} (quadratic case) respectively. The point is that, while QG effects would be prominent only at the Planck scale, there would be residual Lorentz Invariance Violation (LIV) effects at lower energies (GeV–TeV) in the form of anomalous photon velocity dispersion.
Amelino-Camelia et al. (1998) also noted that over a cosmological distance D, the magnitude of the time–delay induced by LIV between two photons with an energy difference \Delta E is detectable:

\Delta t \simeq \frac{n+1}{2} \left( \frac{\Delta E}{E_{QG,n}} \right)^n \frac{D}{c},

where n = 1 or n = 2 according to whether the linear or the quadratic term dominates in Eq. (3.1). The energy scale of QG is commonly expected to lie within a few orders of magnitude of the Planck energy. The best limit on the linear term has recently been placed by Fermi-LAT observations of GeV photons from GRB 090510, which require E_{QG1} to lie above the Planck energy Abdo et al. (2009). The most constraining limit on the quadratic term comes from observations of an exceptional flare of the active galactic nucleus (AGN) PKS 2155-304 with the H.E.S.S. telescope Abramowski et al. (2011).
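The magnitude of this delay is easy to evaluate numerically. A small sketch (Python; constants rounded, no cosmological corrections, so this is an order-of-magnitude estimate only):

```python
def liv_delay_seconds(delta_E_GeV, E_QG_GeV, distance_Gpc, n=1):
    """Approximate LIV-induced time delay between two photons.

    Delta_t ~ ((n+1)/2) * (Delta_E / E_QG)^n * D / c, neglecting
    cosmological corrections to the distance.
    """
    GPC_M = 3.086e25        # metres per gigaparsec
    C = 2.998e8             # speed of light, m/s
    return (0.5 * (n + 1) * (delta_E_GeV / E_QG_GeV) ** n
            * distance_Gpc * GPC_M / C)

# 1 TeV energy difference over 1 Gpc, linear term at the Planck scale:
dt = liv_delay_seconds(1e3, 1.22e19, 1.0, n=1)
```

For a 1 TeV energy difference over 1 Gpc with E_QG1 at the Planck scale, this gives a delay of order 10 s, i.e. comfortably within the timing capabilities of IACTs if suitably fast flares are observed.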
It is important to keep in mind that although the QG-induced time–delay is proportional to energy (as opposed to conventional dispersion effects, which vary as an inverse power of energy), similar time–delay effects may be intrinsic to the source Mastichiadis and Moraitis (2008). Therefore, in order to distinguish between source-intrinsic and propagation time–delays, different types of sources with different physical properties and situated at different cosmological distances should be considered. For such studies, AGN and Gamma-Ray Bursts (GRBs) are the best candidates to test Eq. (3.2): AGN cover the higher energies (up to a few TeV) and the lower redshift regime, and GRBs the lower energies (a few tens of GeV) but higher redshifts. Other promising candidates could be pulsars, which until now have yielded constraints one order of magnitude weaker than the ones derived from AGN Otte (2011).
The consequences of the improved sensitivity and larger energy coverage of CTA for time–delay recovery
Using the Maximum Likelihood Estimation (MLE) method of Martínez and Errando (2009), we investigate the effect that the improved CTA performance, in terms of increased statistics and a broader energy lever arm, has on time–delay recovery.
Five hundred realisations of Gaussian-shaped "pulsed" light curves were generated for several values of the time–delay, in steps of 10 s TeV^{-1}. This allowed an estimate of the error on the measured time–delay. The error decreases as 1/\sqrt{N}, where N is the number of photons included in the likelihood fit, and saturates at a value of about 3 s TeV^{-1}, about a factor of 3 less than for the current generation of IACTs, due to the increased statistics of CTA.
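The 1/\sqrt{N} behaviour can be reproduced with a toy delay estimator (Python): photons from a Gaussian pulse receive an energy-proportional shift, and a simple regression of arrival time on energy recovers the delay. The pulse width, energy range and trial counts are arbitrary illustrative values, not the CTA simulation described above.

```python
import numpy as np

rng = np.random.default_rng(2)

def recovered_delay_error(n_photons, true_tau=0.0, n_trials=300):
    """Scatter of a simple delay estimator vs photon statistics.

    Photons from a Gaussian pulse (sigma = 100 s) get an extra shift
    tau * E; a linear fit of arrival time vs energy recovers tau
    (in s/TeV). The error on tau shrinks roughly as 1/sqrt(N), as for
    the MLE used in the study.
    """
    taus = []
    for _ in range(n_trials):
        E = rng.uniform(0.1, 10.0, n_photons)            # TeV
        t = rng.normal(0.0, 100.0, n_photons) + true_tau * E
        slope = np.polyfit(E, t, 1)[0]                    # s / TeV
        taus.append(slope)
    return np.std(taus)

err_small = recovered_delay_error(100)
err_big = recovered_delay_error(10_000)   # 100x photons -> ~10x smaller error
```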
The effect of the increase in the energy lever-arm, provided by the wide coverage of CTA from a few tens of GeV to several tens of TeV, has also been addressed for the different array configurations, taking into account the absorption by the Extragalactic Background Light (EBL) using the model of Kneiske et al. (2002).
The intrinsic variability of the photon emission by astrophysical sources such as GRBs and AGN is the main systematic uncertainty in LIV searches. Until now, the detected variability of AGN was limited to about 100 s, partially due to the limited statistics of the data. The possibility of improved separation of initially unresolved double-peak structures was investigated with light curve simulations and time–delay reconstruction, again using the MLE method. Fig. 3.2 shows the minimal peak separation which would allow distinguishing between two Gaussian spikes of the same standard deviation for different photon statistics: a H.E.S.S.-like measurement, a CTA-like measurement with photon collection improved by a factor of 100, and a more optimistic scenario with a factor of 1000 more photons.
Sensitivity to TeV photons for selected benchmark AGN
Following the suggestion of Amelino-Camelia et al. (1998), we define a sensitivity factor \eta = \Delta t / t_{flare}, where \Delta t is the magnitude of the time–delay introduced into the flare and t_{flare} is the duration of the burst/flare feature that is being examined. In Refs. Barres de Almeida and Daniel (2012); Barres de Almeida (2010) it is shown that a minimum value of \eta is required to determine whether the observed flare time sequence has been skewed in comparison to its original form. To improve upon current limits will require observations of photons at energies larger than 10 TeV (for a given redshift), or observations of similar flares from much more distant AGN. In the following, we calculate the integral numbers of photons from representative AGN needed to test LIV signatures in AGN flares.
We have taken three VHE AGN representative of several known situations: an AGN flare with high brightness (Mrk 421), an AGN flare that shows short variability timescales (PKS 2155-304), and the AGN with the largest known redshift (3C 279) observed at VHE. The spectra in their highest recorded flux state have been taken from current IACT observations, extrapolated to higher energies, convolved with the performance curves of the various CTA array layouts, and integrated assuming a flare lasting for the appropriate time such that the required value of \eta is reached. The results are shown in Fig. 3.3. Since this falls into the category of unbinned methodologies, the precision in the time resolution is the same as the time-stamping precision of each array. The uncertainty in the time–delay for a single photon is given by the time precision modulo the energy resolution (less than 10%, as specified) and the distance (negligible), and saturates at about 10 s TeV^{-1} Gpc^{-1}. This translates into sensitivity to a Planck-scale effect (i.e. as good as a binned method would achieve with more than 100 photons).
Mrk 421 is known to show spectral hardening with increasing flux, and can have very hard spectra indeed on short timescales, as evidenced by Flare C in Ref. Acciari et al. (2011); see also the discussion by Gaidos et al. (1996). For the redshift of Mrk 421, Planck scale effects could be expected to induce a delay of order a second per TeV; for multi-TeV photons, this means we would need to be able to time resolve flare features of unprecedentedly short duration. Whilst features this fast have yet to be identified, this could be because they are below the sensitivity of current instruments, and the top panel of Fig. 3.3 demonstrates that, if present, such features can indeed be probed. For PKS 2155-304, the redshift implies that CTA would need features on the timescale of 120 s to test Planck scale effects, which is still a factor of a few faster than the rise and decay timescales of the 7 Crab flares observed to date Aharonian et al. (2007); but, as shown in the middle panel of Fig. 3.3, we would easily have sufficient photons to resolve such features. For 3C 279 (z = 0.536, Aleksić et al. (2011)), even though the flare timescale of 610 s required for such a distant AGN is well within the variability timescales we currently observe for blazars, the photon flux we expect from such a distant source is too low to resolve such features at the highest energies, because of the attenuation of the photon flux through interactions with the EBL.
Concerning the different CTA arrays, Fig. 3.3 shows which arrays perform best at high energies. While it may be of more interest to find out whether they would detect sufficient photons on which to perform tests for time–delay, we note that there are a number of unbinned methods that can cope with sparse datasets (see, e.g., Barres de Almeida and Daniel, 2012; Abramowski et al., 2011; Scargle et al., 2008), so that photons of E \gtrsim 10 TeV are what is required in order to be able to begin to test for LIV.
Time–delay recovery with realistic source lightcurve and spectra for the linear case
For this study, we produced 2500 pairs of light curves (with and without time–delays), following the method of Timmer and Koenig (1995), for each of the 13 CTA arrays in total (11 Southern and 2 Northern arrays), as shown in Fig. 3.4. We scan the space of possibilities by selecting random values in the 5-dimensional space characterised by the following parameters:
time–delays (linear case)
AGN redshift, sampled linearly
energy spectrum power-law slope between 20 GeV and 20 TeV, with a spectral cutoff at 120 GeV
flux level, in ph cm^{-2} s^{-1}
different observational periods: a) single day observations consisting of 3 pointings of 30 and 15 min, b) weekly observations consisting of 3 nightly pointings of 30 and 15 min, and c) monthly observations consisting of 2 nightly pointings of 30 and 15 min.
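The red-noise light curves referred to above can be generated with the Timmer and Koenig (1995) prescription, sketched here in Python (binning and normalisation are arbitrary): Fourier amplitudes are drawn as Gaussians with variance following a power-law power spectral density, then inverse-transformed to the time domain.

```python
import numpy as np

rng = np.random.default_rng(3)

def timmer_koenig_lightcurve(n_bins, slope=2.0):
    """Generate a red-noise light curve (Timmer & Koenig 1995).

    Real and imaginary Fourier components are drawn as independent
    Gaussians with standard deviation proportional to f^(-slope/2),
    giving a power spectral density P(f) ~ f^-slope.
    """
    freqs = np.fft.rfftfreq(n_bins, d=1.0)[1:]          # skip f = 0
    amp = freqs ** (-slope / 2.0)
    re = rng.normal(size=freqs.size) * amp
    im = rng.normal(size=freqs.size) * amp
    if n_bins % 2 == 0:
        im[-1] = 0.0                                    # Nyquist bin is real
    spectrum = np.concatenate(([0.0], re + 1j * im))    # zero-mean curve
    return np.fft.irfft(spectrum, n=n_bins)

lc = timmer_koenig_lightcurve(1024)
```

A delayed partner curve is then obtained by applying the energy-dependent shift before convolving both with the array response.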
The photons thus generated are then distributed as a function of time, based on the variability type that we initially assumed (e.g. red noise). These light curve pairs incorporate delay effects accumulated over a given distance depending on the redshift of each pair. The light curve pairs are subsequently convolved with the CTA array performance, using the effective area, the background count rate and the differential sensitivity. Finally, for each CTA array, we recover the observed time–delays using the cross-power spectral analysis method of Nowak et al. (1999). For each pair of light curves and for each array we consider the quality factor Q between the simulated time–delay \tau_{sim} and the recovered time–delay \tau_{rec}, defined as:

Q = \frac{|\tau_{rec} - \tau_{sim}|}{\sigma_{rec}},

where \sigma_{rec} is the width of the Gaussian distribution of the recovered time–delays coming from the simulations.
When considering all 13 arrays together, we find on average that 6% of the time–delays are recovered within the strictest quality limit, which is a very demanding criterion. If one relaxes this limit to two progressively looser values, respectively 77% and 99% of the time–delays in our sample of events are recovered, thus making the prospects of detecting (or constraining) LIV signatures with CTA rather optimistic.
To understand which arrays have the best prospects, the time–delay recovery results for each array individually are shown in Fig. 3.4, adopting the intermediate quality limit. From the plot, the best CTA configurations for time–delay recovery can be identified for the Southern and Northern hemispheres respectively.
4 Other physics searches with CTA
In this section, we highlight topics of fundamental physics searches that have been discussed in recent years and whose scenarios could be studied and hopefully constrained with CTA.
There are several caveats when presenting such varied and complex topics "in a nutshell". First of all, the list of topics is not exhaustive; only a subset is reported here. Second, some of the studies presented in this section may not be up-to-date by the time this article is published: theories in this area evolve and are updated exceedingly rapidly. In addition, most of these studies were formulated only within the context of the current generation of IACTs, and not for CTA; whenever possible, considerations about the prospects for CTA will be addressed. Third, the discussion will mostly be of a qualitative nature. The goal of this contribution is to provide an introductory discussion of the area, with the aim of encouraging others to explore in more detail these and other interesting new-physics possibilities. Let us add that pursuing exotic physics with IACTs (and hence CTA) should be done because a) it is possible (this may seem a naive argument but, given the terra incognita offered by a new observatory such as CTA, it is a strong one), and b) VHE gamma rays have been identified as likely drivers of truly fundamental discovery. VHE gamma rays are a tool to explore new physics and new astrophysical scenarios, the nature of which may contain yet unknown, and unexpected, features. The potential for revolutionary discovery is enormous.
4.1 High energy tau-neutrino searches
Although optimized to detect electromagnetic air showers produced by cosmic gamma-rays, IACTs are also sensitive to hadronic showers. Inspired by calculations made by the AUGER collaboration and D. Fargion (see, e.g., Fargion, 1999, 2002; Letessier-Selvon, 2001; Feng et al., 2002; Abraham et al., 2008; Bertou et al., 2002), the possible response of IACTs to showers initiated by very high energy tau leptons, produced by neutrino interactions in the sea or in the rock underneath, is described.
It is well known that neutrinos with energies above the TeV range can form part of the cosmic rays hitting the Earth. Such neutrinos could originate from point-like sources like galactic microquasars Torres et al. (2005); Bednarek (2005), extragalactic blazars Mannheim et al. (2001); Muecke et al. (2003) or gamma-ray bursts Guetta et al. (2004). There are also diffuse fluxes of high energy neutrinos predicted to come from unresolved sources, including interactions of EHE cosmic rays during their propagation Berezinsky and Zatsepin (1969). Finally, one could think of a more exotic origin of high energy neutrinos, like those coming from DM particle annihilation, topological defects or cosmic strings Witten (1985); Hill et al. (1987); Berezinsky et al. (2011). Neutrinos are produced in astrophysical sources or during transport, mainly in pion and subsequent muon decays:

\pi^+ \to \mu^+ \nu_\mu, \qquad \mu^+ \to e^+ \nu_e \bar{\nu}_\mu \quad (\text{and charge conjugates}),

such that the typical neutrino flavor mixture at the source is (\nu_e : \nu_\mu : \nu_\tau) = (1 : 2 : 0). Tau-neutrinos are found either at the source, if charmed mesons are formed instead of pions, or are created during propagation by flavor mixing, such that at Earth the flavor mixture becomes approximately (1 : 1 : 1) Learned and Pakvasa (1995).
The \nu_\tau channel has several advantages with respect to the electron or muon channels. First, the majority of the possible \tau decay modes lead to an (observable) air shower or a combination of showers; only 17.4% of the decays lead to a muon and neutrinos, considered to be unobservable for the effective areas of interest here. Moreover, the boosted \tau decay length ranges from some 50 m at 1 PeV to several tens of kilometers at EeV energies, almost unaffected by energy losses in matter and thus surpassing the muon range by a factor of 20. Finally, a \tau originating from a \nu_\tau interaction decays instead of being absorbed by matter, giving origin to another \nu_\tau of lower energy, which in turn can produce a \tau. At the highest energies, the Earth becomes completely opaque to all types of neutrinos, giving rise to a pile-up of \nu_\tau.
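The decay-length figures quoted above follow directly from the boosted lifetime, \gamma c\tau. A short Python sketch with PDG values for the tau mass and lifetime:

```python
def tau_decay_length_m(E_GeV):
    """Boosted tau decay length, gamma * c * tau.

    m_tau ~ 1.777 GeV, c*tau ~ 87.0 micrometres (PDG values).
    """
    M_TAU = 1.777       # tau mass, GeV
    C_TAU = 87.03e-6    # proper decay length c*tau, metres
    return (E_GeV / M_TAU) * C_TAU

l_pev = tau_decay_length_m(1e6)   # ~50 m at 1 PeV, as quoted in the text
l_eev = tau_decay_length_m(1e9)   # tens of kilometres at EeV energies
```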
To be able to observe atmospheric showers from \tau leptons, the telescopes should be pointed in the direction where the \tau escapes from the Earth's crust after having crossed an optimized distance inside the Earth. Of course, this distance is strongly dependent on the telescope location, and no general conclusions can be drawn before the CTA site is defined. In the past, two pointing directions were considered: downwards towards the sea, and towards the shielding rock of a mountain.
In Ref. Gaug et al. (2008), the effective area for \nu_\tau observations with the MAGIC telescope was calculated analytically, with the following results: the maximum sensitivity would be in the range 100 TeV–1 EeV. For observation downwards towards the sea, the sensitivity for diffuse neutrinos is very low because of the limited FOV (3 events/year/sr), and CTA cannot be competitive with other experiments like IceCube Abbasi et al. (2012), Baikal Balkanov et al. (2000), Auger Abraham et al. (2008), Antares Adrian-Martinez et al. (2011) or KM3NeT. On the other hand, if flaring or disrupting point sources are observed, as is the case for GRBs, one can even expect an observable number of events from a single GRB at reasonable distances, if the event occurs just inside a small observability band of about 1 degree width in zenith and at an azimuth angle which allows pointing the telescopes downhill.
For CTA, the situation could be different: taking an extension of the FOV of several times that of MAGIC in extended observation mode, together with the higher effective area and lower energy threshold (meaning higher observable fluxes), a naïve rescaling of the MAGIC calculations leads to relatively optimistic results, depending very much on the local geography. For point-like sources, the situation would not change much with respect to the MAGIC case, unless the CTA telescopes are located close to a shielding mountain chain. The required observation times are still large, but one may argue that these observations can be performed whenever high clouds preclude the observation of gamma-ray sources.
4.2 Ultrarelativistic Magnetic Monopoles
The existence of magnetic monopoles is predicted by a wide class of extensions of the standard model of particle physics Groom (1986). Considerable experimental effort has been undertaken during the last eight decades to detect magnetic monopoles, but no confirmed detection has been reported to date. Current flux limits on cosmogenic magnetic monopoles depend on the monopole velocity. As outlined below, the CTA observatory is sensitive to a magnetic monopole flux.
According to Tompkins (1965), magnetic monopoles moving in air faster than the speed of light in air emit about 4700 times more Cherenkov photons than an electric charge under the same circumstances. Sufficiently fast and heavy magnetic monopoles propagating through the Earth's atmosphere are neither significantly deflected by the Earth's magnetic field nor lose a significant amount of energy through ionization Spengler and Schwanke (2011). Under these conditions, a magnetic monopole moving through the Earth's atmosphere propagates on a straight line, emitting a large amount of Cherenkov light along its path. This uniform emission of intense Cherenkov light differs from the Cherenkov light emitted by the secondary particles of a shower initiated by a high-energy cosmic ray or gamma ray. As shown by Spengler and Schwanke (2011), the number of triggered pixels in a telescope array is typically smaller, and the intensity of the triggered pixels typically higher, for magnetic monopoles than for events originating from cosmic rays or gamma rays. Cuts in the parameter space spanned by the number of triggered pixels in the CTA array and the number of pixels with high intensity therefore allow an excellent discrimination between magnetic monopole events and the background from cosmic rays and gamma rays. The effective detection area of H.E.S.S. Col. (2006) for magnetic monopoles has been studied in detail Spengler and Schwanke (2011). Extrapolating the results of this study to CTA, with its order-of-magnitude larger design collection area, leads to a correspondingly larger magnetic monopole effective area. In Fig. 4.2, we show that, assuming a large number of hours of CTA data from different observations accumulated over several years of array operation, the sensitivity of CTA to magnetic monopoles with velocities close to the speed of light can reach the Parker limit Groom (1986) of about 10^-15 cm^-2 s^-1 sr^-1.
Although this is still two orders of magnitude above current monopole flux limits from neutrino experiments Achterberg et al. (2010), such a sensitivity would allow a technically independent, new test of the existence of magnetic monopoles.
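These figures can be cross-checked with a back-of-the-envelope estimate: the Cherenkov yield enhancement follows from the Dirac quantization condition, and a zero-background flux sensitivity scales inversely with the exposure A·Ω·T. The effective area, solid angle and observation time below are illustrative assumptions, not CTA design values.

```python
import math

ALPHA = 1 / 137.035999  # fine-structure constant

# Cherenkov yield of a Dirac monopole relative to a unit electric charge.
# Frank-Tamm: the yield scales as (charge)^2, and Dirac quantization
# gives g/e = 1/(2*alpha) ~ 68.5, so the ratio is ~4700.
yield_ratio = (1 / (2 * ALPHA)) ** 2
print(f"monopole/electron Cherenkov yield: {yield_ratio:.0f}")

def flux_sensitivity(a_eff_cm2, omega_sr, t_hours):
    """90% C.L. upper limit on an isotropic flux for zero observed events
    and zero expected background: Phi < 2.44 / (A * Omega * T)."""
    t_sec = t_hours * 3600.0
    return 2.44 / (a_eff_cm2 * omega_sr * t_sec)

# Illustrative exposure: 1e9 cm^2 effective area, ~10 msr acceptance,
# and a few thousand hours of accumulated data.
phi = flux_sensitivity(1e9, 0.01, 3000)
print(f"flux sensitivity ~ {phi:.1e} cm^-2 s^-1 sr^-1")
```

With these placeholder inputs the estimate lands within roughly an order of magnitude of the Parker bound, showing how sensitively the result depends on the accumulated exposure.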
4.3 Gravitational waves
The period of operation of CTA should hopefully see the detection of the first gravitational wave (GW) by ground-based interferometers, now in their "advanced sensitivity" design phase. The km-scale Michelson interferometers Advanced LIGO and Advanced Virgo Accadia et al. (2011) are increasing their sensitivity and extending the horizon distance of detectable sources up to hundreds of Mpc, depending on the frequency. Two smaller interferometric detectors complete the network of GW observatories: the Japanese TAMA (300 m arms) and the German-British 600 m interferometer GEO600. The LIGO and Virgo observatories should start full operation in their advanced configuration in 2014/2015 and may operate together to fully reconstruct the arrival direction of a signal. They may localize strong GW bursts with an angular uncertainty down to one degree, while weaker signals have larger uncertainties, up to tens of degrees Sylvestre (2003). At the conceptual design stage, the Einstein Telescope aims at increasing the arm length to 10 km, with three arms in a triangular pattern, using consolidated technology. It is foreseen to be located underground to reduce the seismic motion, allowing a sensitivity improvement of up to a factor of 10 Sathyaprakash et al. (2011).
The astrophysical mechanisms most likely to produce observable GWs are the in-spiral and coalescence of binary compact objects (neutron stars and black holes), supernova core collapse and neutron-star collapse, and pulsar glitches. Signals from these systems may last from milliseconds to a few tens of seconds, but their expected rate and strength are uncertain (for a review see, e.g., Buonanno, 2007). Moreover, unexpected or unknown classes of sources and transient phenomena may be responsible for GW emission and may actually provide the first detection. Combined GW and electromagnetic observations would therefore be critical in establishing the nature of the first GW detection. While electromagnetic counterparts cannot be guaranteed for all GW transients, they may be expected for some of them Bloom et al. (2009); Chassande-Mottin et al. (2011), from radio waves to gamma rays, as in gamma-ray bursts Stamatikos et al. (2009), ultra-luminous X-ray transients and soft gamma repeaters Abbott et al. (2008). Electromagnetic identification of a GW source would confirm the GW detection and improve the reconstruction and modelling of the physical mechanism producing the event. Moreover, significant flaring episodes identified in the electromagnetic band could serve as an external trigger for GW signal identification, and could even be used to reconstruct the source position and time independently, allowing the signal-to-noise ratio required for a confident detection to be lowered. The feasibility of this approach has been corroborated through dedicated simulations by the LIGO and Virgo collaborations Abadie et al. (2011).
CTA has the capability to pursue such a programme of immediate follow-up of target-of-opportunity alerts from GW observatories, and to interact with the GW collaborations in offline analyses of promising candidates. The ability of CTA to observe either in pointed mode with a small FOV or in extended mode covering many square degrees of sky makes it uniquely suited to following both strong and weak GW alerts. The observation mode should resemble the GRB procedure, which allows fast repositioning on the order of tens of seconds.
In the era in which ground-based gravitational wave detectors are approaching their advanced configuration, the simultaneous operation of facilities like CTA and Virgo/LIGO may open, in the forthcoming years, a unique opportunity for this kind of multi-messenger search.
5 Summary and Conclusion
In this study we have investigated the prospects for detection and
characterization of several flavors of physics beyond the standard
model with CTA.
Particle Dark Matter searches
We have investigated dark matter (DM) searches with CTA for different observational strategies: from dwarf satellite galaxies (dSphs) in Section 1.1, from clusters of galaxies in Section 1.2 and from the vicinity of the Galactic Centre in Section 1.3. In Section 1.4, we discussed spatial signatures of DM in the diffuse extragalactic gamma-ray background.
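All of these strategies share the same figure of merit: the astrophysical factor J, the line-of-sight integral of the squared DM density over the solid angle of the observation. The sketch below evaluates this integral numerically for an NFW profile; the normalization, scale radius, distance and integration cone are placeholder values for illustration, not fits to any of the targets discussed.

```python
import math

def nfw_density(r_kpc, rho_s=0.1, r_s=1.0):
    """NFW profile rho(r) = rho_s / ((r/r_s) * (1 + r/r_s)^2).

    rho_s in GeV/cm^3, r and r_s in kpc (placeholder parameters).
    """
    x = r_kpc / r_s
    return rho_s / (x * (1 + x) ** 2)

def j_factor(dist_kpc=30.0, theta_max_deg=0.1, n_los=2000, n_psi=50):
    """J = integral of rho^2 over the line of sight and the solid angle,
    in GeV^2 cm^-5.

    dist_kpc:      distance to the halo centre (placeholder)
    theta_max_deg: integration cone half-angle (~instrument resolution)
    """
    KPC_CM = 3.0857e21  # cm per kpc
    theta_max = math.radians(theta_max_deg)
    d_theta = theta_max / n_psi
    j = 0.0
    for i in range(n_psi):
        theta = (i + 0.5) * d_theta
        d_omega = 2 * math.pi * math.sin(theta) * d_theta
        # Integrate rho^2 along the line of sight at offset angle theta.
        l_max = 2 * dist_kpc
        dl = l_max / n_los
        los = 0.0
        for k in range(n_los):
            l = (k + 0.5) * dl
            r = math.sqrt(dist_kpc**2 + l**2
                          - 2 * dist_kpc * l * math.cos(theta))
            r = max(r, 1e-3)  # regularize the central cusp
            los += nfw_density(r) ** 2 * dl * KPC_CM
        j += los * d_omega
    return j

print(f"J ~ {j_factor():.1e} GeV^2 cm^-5")
```

Because J multiplies the particle-physics term in the predicted flux, the uncertainties on the density profiles discussed below propagate linearly into the cross-section limits.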
Concerning searches in dSphs of the Milky Way, we have investigated
the prospects for detection of well-known “classical” dSph like Ursa
Minor and Sculptor, and one of the most promising “ultra-faint”
dSph, Segue 1 (Table 1.1). We have first shown that the
predictions for core or cusp DM density profiles are quite similar for
the baseline CTA angular resolution (Fig. 1.1). We have
then simulated a 100 h observation for several CTA arrays, and found
that for Segue 1 we can exclude velocity-averaged annihilation cross-sections down to values that depend on the annihilation channel (Fig. 1.2). We also presented
the same results in terms of the minimum astrophysical factor, in GeV^2 cm^-5, required for a dSph to be detected (Fig. 1.3). We finally
showed the minimum intrinsic boost factor to achieve detection
(Fig. 1.4), which for Segue 1 is about 25 for a hard
annihilation spectrum. Among the configurations considered, the best candidate arrays for dSph studies were identified. Nevertheless, the robustness of our results is limited by the still imprecise determination of the astrophysical factor in some cases. Forthcoming detailed astronomical measurements will
provide clues for deep exposure observations on the most promising
dSphs, with, e.g., the planned SkyMapper Southern Sky
Survey Keller et al. (2007), which will very likely provide the
community with a new dSph population, complementing the Northern
hemisphere population discovered by the SDSS. Also, the uncertainties
on dark matter density will be
significantly reduced by new measurements of individual stellar
velocities available after the launch of the GAIA