Depth of Ultra High Energy Cosmic Ray Induced Air Shower Maxima Measured by the Telescope Array Black Rock and Long Ridge FADC Fluorescence Detectors and Surface Array in Hybrid Mode
The Telescope Array observatory utilizes fluorescence detectors and surface detectors to observe air showers produced by ultra high energy cosmic rays in the Earth’s atmosphere. Cosmic ray events observed in this way are termed hybrid data. The depth of air shower maximum is related to the mass of the primary particle that generates the shower. This paper reports on shower maxima data collected over 8.5 years using the Black Rock Mesa and Long Ridge fluorescence detectors in conjunction with the array of surface detectors. We compare the means and standard deviations of the observed distributions with Monte Carlo distributions of unmixed protons, helium, nitrogen, and iron, all generated using the QGSJet II-04 hadronic model. We also perform an unbinned maximum likelihood test comparing the full observed distributions to the Monte Carlo, allowing variable systematic shifts of the data distributions, to determine which elements are not compatible with the observed data. For all energy bins, QGSJet II-04 protons are found to be compatible with Telescope Array hybrid data at the 95% confidence level after some systematic shifting of the data. Three other QGSJet II-04 elements are found to be compatible using the same test procedure in an energy range limited to the highest energies, where data statistics are sparse.
(Telescope Array Collaboration)
Ultra high energy cosmic ray (UHECR) sources remain a mystery over a century since they were first observed by Hess in 1912 (Hess & Anderson, 2013). Outstanding questions regarding the sources, acceleration mechanisms, propagation, and chemical composition of UHECRs have been studied for over five decades, with the first of the large air shower arrays exceeding an area of 1 km^2 reporting results in 1961 (Linsley et al., 1961). The UHECR spectrum, as shown in Figure 1, exhibits structure that hints at correlated changes in chemical composition and energy that can help us resolve these long standing questions. Of particular interest are the energy regions of UHECR flux dubbed the “knee” (~3×10^15 eV), the “ankle” (~5×10^18 eV), and the “GZK cutoff” (or suppression) (~6×10^19 eV). Prior to the knee, the cosmic ray flux is remarkably stable for six decades of energy, decreasing with energy as a power law, J(E) ∝ E^−γ, with γ ≈ 2.7. The flux then steepens above the knee (γ ≈ 3.1), falling more rapidly. Around the energy of the ankle the flux begins to flatten (γ ≈ 2.6), until very rapidly dropping off nearly completely above the GZK cutoff. Models that wish to describe these changes in flux need to account for the maximum injection energy of astrophysical sources, acceleration either at the source or through other means such as shock waves, interactions with the interstellar medium, and the chemical composition of cosmic rays observed in the Earth’s atmosphere.
Composition provides a strong constraint on models describing UHECR sources and is therefore a fundamental parameter in modelling them. For instance, models that theorize about the origin of the knee come in two flavors: astrophysical and interaction models. Astrophysical models explain the knee as an intrinsic feature of the energy spectra of individual chemical species, resulting from magnetic rigidity dependence (E_max ∝ Z). Different proposed acceleration mechanisms in magnetic field regions of galactic supernova remnants are theorized to boost the energy of particles to PeV and higher energies, limited by some maximum energy. The greater acceleration efficiency for heavier nuclei allows those elements to be boosted to higher energies than light elements. The spectra of individual nuclei then show a series of cascading cutoffs as energy increases. Under this model, the all-particle cosmic ray spectrum around the knee implies a mean particle mass that increases with energy. Other versions of astrophysical models attempting to explain the origin of the knee use a leaky box model, in which light galactic nuclei can not be contained in the galaxy as energy increases due to their large gyroradii. Heavier nuclei can not escape the galaxy as easily and contribute to the observed flux as a larger proportion of elements at higher energies. Interaction models of the knee propose that new physics is in play as the air shower interacts in the atmosphere, producing, for example, undetected supersymmetric particles or other exotic particles not yet observed in nature. See Hoerandel (2004) for a review of many different models used to explain the UHECR knee feature.
In the energy region of the ankle, the cosmic ray flux flattens, indicating a slight rise in the flux compared to energies below it. The ankle is traditionally thought of as the energy region where cosmic rays of extragalactic origin begin to dominate the spectrum. This is because there are few, if any, known sources in the galaxy able to accelerate nuclei to such energies while allowing the nuclei to remain contained in the galactic disk. A signature of galactic sources of UHECRs at the energy of the ankle would be anisotropy of arrival directions in the galactic plane, something which is not observed. Historically, the flattening of the spectrum at the ankle was described as the intersection of a steeply falling galactic spectrum and a flatter extragalactic spectrum at the energy of the ankle. This model is known simply as the ankle model. More recent models fitted to data from large cosmic ray experiments use the signatures of propagation through the photon field of cosmic microwave background (CMB) radiation in intergalactic space. These propagation effects result in the suppression of flux above ~6×10^19 eV due to photopion production with CMB photons (the GZK mechanism (Greisen, 1966; Zatsepin & Kuzmin, 1966)), a bump due to pile up in the flux for primaries with energies below the photopion production energy, and a dip due to pair production during interaction with CMB photons. This is called the dip model. The dip model predicts that the galactic component of UHECRs disappears at a lower energy compared to predictions of the ankle model. The dip model is relatively insensitive to model parameters such as size and inhomogeneity of the source distributions, cosmological evolution of sources, and maximum acceleration energy. However, the dip model is sensitive to the composition of heavy nuclei in the spectrum.
Heavier nuclei interact and photospallate readily due to their larger cross sections compared to protons, resulting in several primaries of lower total energy and a change in the shape of the dip. See, for example, Aloisio et al. (2007) for further discussion of the dip model and the transition from galactic to extragalactic cosmic rays. This model therefore makes testable predictions based upon composition as well.
UHECR composition can be measured directly up to about 10^14 eV, where the event rate is about one particle/m^2/hour, sufficient for balloon-borne or satellite experiments and their associated equipment to measure particle mass. Above this energy the particle cascades created by the cosmic ray primary inelastically colliding with an air molecule must be observed if one wishes to collect sufficient data. To do this, large ground based experiments are required, which either observe the parts of the shower that survive to ground level or use fluorescence detectors to observe the light produced by the air shower. Neither method directly measures the mass of the primary, and a single observation of muons on the ground or of light generated by an air shower can not reveal the mass of an individual primary. Therefore, to measure the mass composition of ultra high energy cosmic rays we must resort to understanding the physics of extensive air showers, identifying those observables that can be related to the primary mass, and collecting large data sets to build statistical samples of sufficient size to reliably measure the average mass in some energy range. This method requires good hadronic models of high energy interactions in matter at energy ranges not yet measured in the lab. Measurements from the LHC reach up to about 10^17 eV in the lab frame, whereas UHECR primary particle energies above 10^20 eV have been measured. Hadronic models which predict particle elasticity, multiplicity, and interaction cross section currently require extrapolation over a few decades of energy to cover the region from below the ankle to the highest energies.
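The lab-frame equivalence quoted above follows from fixed-target kinematics, E_lab ≈ s / (2 m_p c^2). A short sketch (the 13 TeV value is used only as an illustration):

```python
# Fixed-target equivalent energy: a collider with center-of-mass energy
# sqrt(s) probes the same sqrt(s) as a cosmic ray of lab-frame energy
# E_lab ~ s / (2 m_p c^2) striking a proton at rest.
M_PROTON_EV = 0.938e9  # proton rest energy in eV

def lab_frame_equivalent(sqrt_s_eV):
    """Return the fixed-target (lab-frame) energy matching sqrt(s)."""
    return sqrt_s_eV ** 2 / (2.0 * M_PROTON_EV)

e_lab = lab_frame_equivalent(13e12)  # LHC at sqrt(s) = 13 TeV -> ~1e17 eV
```

Because E_lab grows with the square of the collider energy, even an order-of-magnitude increase in collider reach closes only two of the several decades separating accelerator data from the highest energy cosmic rays.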
UHECR composition measurements are performed best by fluorescence detectors which observe the depth in the atmosphere where the electromagnetic component of a cosmic ray induced shower reaches a maximum. This atmospheric depth is called X_max and is measured in g/cm^2. A toy model first developed by Heitler (Carlson & Oppenheimer, 1937) demonstrates how X_max is related to the primary particle energy and mass using a simple branching model of electromagnetic (EM) showers in which a high energy electron primary of energy E_0 collides inelastically with a target particle. The EM shower is created and grows in size through the repeated processes of pair production and bremsstrahlung. In this model, making the simplistic assumption of a fixed interaction length, λ, between interactions, two new particles are generated and added to the shower for each existing particle. After n interactions, the total depth the shower has traveled is X = nλ, and the size of the shower is N(X) = 2^(X/λ). The average energy of each particle at depth X is E(X) = E_0 / 2^(X/λ). This process of particle generation at each interaction length continues until the average energy per particle decreases below some critical energy, E_c, defined as the energy at which particle energy lost due to collisions exceeds radiative energy losses. When the average particle energy is equal to E_c, the shower reaches its maximum size, called N_max, and the depth is X_max. Using the definition of E_c, we find N_max = E_0/E_c and X_max = λ ln(E_0/E_c)/ln 2; the number of particles generated at shower maximum is proportional to the primary particle energy, and X_max is proportional to the log of that energy.
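The Heitler relations above can be checked numerically. This is a toy sketch; the splitting length and critical energy below are illustrative round numbers for air, not inputs to the analysis:

```python
import math

def heitler_shower_max(E0_eV, lam_gcm2=37.0, Ec_eV=8.5e7):
    """Toy Heitler EM shower: returns (N_max, X_max in g/cm^2).
    lam_gcm2 and Ec_eV are illustrative round numbers (a radiation-length
    scale splitting depth and ~85 MeV critical energy), not analysis inputs."""
    n_max = E0_eV / Ec_eV                                    # N_max = E0/Ec
    x_max = lam_gcm2 * math.log(E0_eV / Ec_eV) / math.log(2.0)
    return n_max, x_max

# N_max scales linearly with E0, while X_max grows by a fixed
# "elongation" of lam * ln(10) / ln(2) g/cm^2 per decade of energy.
n18, x18 = heitler_shower_max(1.0e18)
n19, x19 = heitler_shower_max(1.0e19)
elongation_per_decade = x19 - x18
```

The logarithmic growth of X_max with energy is why experiments quote an elongation rate (g/cm^2 per decade) when comparing data to models.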
Showers initiated by a very high energy hadronic primary particle exhibit similar relationships. If the primary particle has mass number A and energy E_0, we use the superposition principle to treat the particle as A independent nucleons each with an average initial energy of E_0/A. Using the Heitler model under the assumption of the superposition principle we find X_max(A) = λ ln(E_0/(A E_c))/ln 2 = X_max(p) − λ ln A/ln 2. In reality hadronic showers are more complicated, since for each hadronic interaction, on average 2/3 of the particles produced are charged particles such as π± and 1/3 are π0. The π0 rapidly decay into two photons which contribute to the electromagnetic part of the shower. The relation between particle mass and X_max predicted by the Heitler model is still valid. The property of shower universality tells us that for showers created by a hadronic primary particle of any mass the electromagnetic component evolves in the same way, parameterized by the shower age, s (Giller et al., 2003). Using this property we can use the same method of observing X_max of a shower to determine the mass of the primary particle, whether that particle is very light, such as a proton, or much heavier, such as an iron nucleus. For details about the treatment of hadronic showers as related to cosmic ray composition refer to Engel et al. (2011) and Kampert & Unger (2012).
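Under the superposition principle, the Heitler prediction for a nucleus of mass number A is simply shifted shallower by λ ln A / ln 2 relative to a proton of the same total energy. A minimal sketch, again with an illustrative λ:

```python
import math

def xmax_shift(A, lam_gcm2=37.0):
    """Heitler + superposition: a nucleus of mass number A acts as A
    nucleons of energy E0/A, so its X_max is shallower than a proton's
    at the same E0 by lam * ln(A) / ln(2). lam is illustrative."""
    return -lam_gcm2 * math.log(A) / math.log(2.0)

shift_helium = xmax_shift(4)     # = -2 * lam
shift_nitrogen = xmax_shift(14)
shift_iron = xmax_shift(56)      # largest shift: iron peaks highest up
```

The ordering of the shifts (iron shallowest, helium closest to proton) is the separation in mean X_max that the comparisons in this paper exploit.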
Heavier primary particles are therefore expected to reach shower maximum at shallower depths in the atmosphere than light primaries. To experimentally measure UHECR composition, one can use fluorescence detectors to record the energy and X_max of many showers. For a given energy range, the mean X_max for lighter primaries will be larger than for heavy primaries. In addition, because of the superposition principle, the fluctuations in X_max are expected to be smaller for heavy primaries. This data can be compared to models of individual primary species or mixtures of elements to determine the composition of the UHECRs observed. Figure 2 shows the mean X_max measured by many experiments over the past 30 years.
Recent measurements of X_max over the past decade have vastly improved statistics in the important region of the ankle and above, with the three largest fluorescence based measurements coming from HiRes, Telescope Array, and Auger. HiRes reported on composition measured by events observed in stereo using simultaneous observation of two fluorescence detectors (Abbasi et al., 2010) and Telescope Array has presented results using hybrid reconstruction (Abbasi et al., 2014). Both results found indications of composition consisting of light primaries resembling mostly protons, by comparing the means and fluctuations of X_max in data to models. These results are consistent with the view (up until that time) that UHECRs with energies at the ankle and higher are most likely extragalactic protons. The existence of a suppression in the flux at the energy predicted by the GZK mechanism, first observed by HiRes (Abbasi et al., 2008b), and later confirmed by Auger (Abraham et al., 2008), fits in well with this scenario. However, Auger’s recent measurement of composition challenges this view. Auger has shown an energy evolution in the mean and fluctuations of X_max toward heavier primaries above ~2×10^18 eV (Aab et al., 2014). A UHECR flux with increasing mass above the ankle leads to unexpected models that have been deemed “disappointing” for the field (Aloisio et al., 2012). Implications of such models are a lack of photopion production on CMB photons (and therefore very few ultra high energy cosmogenic neutrinos), no anisotropy of nearby sources due to strong deflection in magnetic fields, and no cutoff in the spectrum due to the GZK mechanism since the maximum energy of astrophysical accelerators is too low (Allard et al., 2008; Aloisio et al., 2012). The tension between these experimental results, and the implications for particle astrophysics, provide the impetus for further, high precision studies of UHECR composition such as this one.
This work presents an analysis of data collected by the Telescope Array experiment over an 8.5 year period. The X_max distributions are collected in energy bins, and the mean and standard deviation of X_max are computed for each bin. Four sets of Monte Carlo, each representing a single chemical element, are generated and then reconstructed in the same manner as the data. The distributions, means, and standard deviations of X_max of the Monte Carlo are compared to the observed data. Statistical tests are used to compare the compatibility of the Monte Carlo to the data. Section 2 describes the design and operation of the Telescope Array experiment. Section 3 describes the hybrid method of observation and how data is reconstructed to measure X_max for air showers. Section 4 examines the data collected and the analysis cuts applied, presents the resolution and bias of observables important to good reconstruction, compares data to Monte Carlo, and discusses the importance of understanding the different types of biases in measuring X_max. Section 5 details the statistical tests used to measure compatibility of the different Monte Carlo sets to the data and the results of these tests. Conclusions of this analysis are presented in Section 6.
Telescope Array (TA) is one of the few detectors in the world able to shed light on the composition of UHECRs. TA is the successor experiment of the AGASA (Ohoka et al., 1997) and HiRes (Abu-Zayyad et al., 2000; Boyer et al., 2002) experiments. Expertise using surface arrays from the AGASA experiment and fluorescence detectors from the HiRes experiment is combined into a single cosmic ray observatory able to observe ultra high energy cosmic ray flux over four decades of energy.
TA is located in Millard County, Utah (39.3° N, 112.9° W; 1400 m above sea level), consisting of 507 scintillation surface detectors (SDs) sensitive to muons and electrons, and 48 fluorescence telescopes located in three fluorescence detector (FD) stations overlooking the counters. The spacing of the counters in the SD array is 1.2 km and they are placed over an area of approximately 700 km^2. Figure 3 shows the physical locations of the SDs and FDs.
Each surface detector is made up of two layers of plastic scintillator, each 3 m^2 in area and 1.2 cm thick. Grooves running parallel along the length of each layer are etched into each scintillator layer and 104 wavelength shifting fiber optic cables are embedded in them, for a total length of 5 m of fiber in each layer. When a charged particle passes through the scintillator and light is produced, the light is transmitted to a photomultiplier tube (PMT) via the fiber. Each scintillator layer has a dedicated PMT that is optically coupled to the fiber bundle to detect the passage of charged particles. The analog PMT signal is digitized via 12 bit FADC electronics operating at a 50 MHz sampling rate with signals stored in a local buffer. Each FADC bin is 20 ns wide and a waveform consists of 128 FADC bins, providing a waveform buffer 2.56 µs wide. Each SD electronics suite also has an FPGA which continuously monitors the FADC waveforms to track pedestals and to determine if the event trigger condition is met. When an SD measures a signal above threshold, it announces it to a remote DAQ via radio communications. An SD can record two types of low level triggers: a level 0 trigger, in which an integrated signal exceeding 15 FADC counts above pedestal is measured, and a level 1 trigger, in which an integrated signal exceeding 150 FADC counts above pedestal (equivalent to 3 MIPs, or minimum ionizing particles) is measured. These remote DAQ locations are referred to as communication towers (CTs), as they monitor and receive data from many SDs and make the decision about high level triggers based upon the low level trigger logic of all SDs that each communicates with. If three or more adjacent SDs announce level 1 triggers within an 8 µs window, this constitutes a level 2 event trigger and the CT directs all SDs that observed a level 0 trigger within a fixed time window of the event to send their waveform data to the CT for storage.
Each SD has an onboard GPS unit to timestamp event triggers, so the time of particle passage is also recorded by each SD and included as part of the event information.
SD event reconstruction is done by examining all level 2 triggers and finding SDs that have a sufficient signal to noise ratio and are connected in a small space-time window. Using the positions of the SDs and their relative trigger times, the shower core and track direction can be determined. Shower energy is measured by relating the SD signal size 800 m from the shower axis (called S800) to a function which maps S800 and the shower zenith angle to primary particle energy. This mapping is determined by Monte Carlo simulation using CORSIKA and is therefore model dependent. The final shower energy is found by scaling this energy mapping via a scale factor determined from real events observed by both FD and SD, correcting the energy to that determined by the FD. Detailed information about the operation of the SD array can be found in Abu-Zayyad et al. (2013c); Ivanov (2012).
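The S800-based energy assignment can be pictured as a lookup followed by an overall FD-anchored normalization. The sketch below is purely illustrative: the attenuation form, the table stand-in, and the scale factor are placeholder assumptions, not the experiment's calibration.

```python
import math

# Placeholder FD/SD normalization; the real factor comes from events
# observed by both detectors.
ENERGY_SCALE = 1.0 / 1.27

def sd_energy_eV(s800_vem, zenith_deg):
    """Toy stand-in for the CORSIKA-derived (S800, zenith) -> energy map,
    followed by scaling the result onto the FD energy scale."""
    # Toy zenith attenuation: inclined showers traverse more atmosphere,
    # so the same S800 implies a higher primary energy.
    attenuation = math.cos(math.radians(zenith_deg)) ** 1.2
    e_table = 1.0e18 * (s800_vem / 30.0) / attenuation
    return ENERGY_SCALE * e_table

e_vertical = sd_energy_eV(30.0, 0.0)
e_inclined = sd_energy_eV(30.0, 45.0)  # same S800, larger inferred energy
```

The design point is that all model dependence lives in the lookup, while the final scale is tied to the calorimetric FD measurement by a single multiplicative factor.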
The three FD stations are placed on the periphery of the SD array and look towards its center. Each is located the same distance, about 21 km, from a central laser facility (CLF), which contains a calibration laser that is fired throughout the night to monitor each FD’s response and the atmospheric quality. The Middle Drum (MD) FD station is located on the northern border of the SD array. It has 14 fluorescence telescopes, together viewing approximately 112° in azimuth, arranged in two rings of elevation angle coverage. Ring 1 telescopes observe from 3° to 17° in elevation and ring 2 telescopes observe from 17° to 31°. Seven telescopes are in each ring. Each telescope consists of a 5.1 m^2 mirror which reflects light from the sky onto a cluster of 256 PMTs arranged in a 16 × 16 array, with each PMT monitoring approximately 1 millisteradian of solid angle. Middle Drum is built using the same sample and hold electronics and hardware that were used at the HiRes1 FD in the HiRes experiment (Abu-Zayyad et al., 2000). The Middle Drum site is also the location of the TALE FD station, which has ten telescopes and is designed to observe low energy cosmic rays near the energy of the ankle.
On the southeast and southwest borders of the SD array are the Black Rock Mesa (BR) and Long Ridge (LR) fluorescence detector stations. Each of these is made up of 12 telescopes in a two ring configuration similar to Middle Drum. The electronics and hardware of these stations were newly built for the Telescope Array experiment and utilize FADC electronics. Each telescope has 256 PMTs focused onto a 6.8 m^2 mirror for light collection. Each PMT’s analog signal is digitized by FADC electronics which employ 12 bit digitizers operating at 40 MHz. Before storage to the DAQ, four digital samples are summed to provide an equivalent 14 bit, 10 MHz sampling rate with a time resolution of 100 ns. Each telescope employs a track finder module which applies the trigger logic to incoming waveforms, searching for spatial patterns which indicate a track caused by an extensive air shower. When an event trigger occurs a central computer orders the readout of telescope electronics to store the data for later offline analysis. More detailed information describing the construction and operation of the BR and LR fluorescence detectors can be found in Tameda et al. (2009); Tokuno et al. (2012).
When a UHECR primary particle interacts in the atmosphere, an extensive air shower results, producing copious amounts of electrons and positrons among many other particle types. This electromagnetic component of the shower interacts with atmospheric N2, producing fluorescence light, which is emitted isotropically and observed by FD telescopes on the ground. Because of the typically large distances to the showers and the relatively large size of each PMT’s field of view, the shower appears as a downward traveling line source. Each tube observes the shower at a different altitude and therefore a different atmospheric depth. Tube signals will vary depending upon the amount of light produced in the sky at that depth, the atmospheric clarity, and the distance to the shower. The goal of FD shower reconstruction is to convert the signals and times measured during the passage of the shower, along with the fixed, known geometry of the PMTs, into the location and direction of a distant shower track, as well as the energy of the primary particle.
To accurately measure X_max, fluorescence detectors must be used. For those events that also have their arrival time simultaneously measured by the surface detector (SD) array, though, the geometry of an individual shower can be very well measured. These hybrid events are very valuable for high quality measurements of X_max. Telescope Array began hybrid data collection in May 2008 and has now analyzed over eight and a half years of data using this method. This paper is the first to report on X_max measurements using the BR and LR stations.
The results of X_max measurement using the Middle Drum FD and the SD array are reported in Abbasi et al. (2014). This analysis uses only the combined data of Black Rock and Long Ridge hybrid events. Black Rock and Long Ridge employ identical electronics and hardware designs, which are different from those found at Middle Drum. For example, Black Rock and Long Ridge use larger mirrors than Middle Drum, which affects triggering and acceptance of tracks. Because hybrid reconstruction uses both the FDs and SDs, the locations of the FD stations relative to the SD array border are also an important consideration. Black Rock and Long Ridge, located 3 and 4 km away from the SD border respectively, are closer to the SD array than Middle Drum, which is 8 km away. This affects the acceptance of low energy hybrid events: Black Rock and Long Ridge are more efficient at low energies than Middle Drum is. Because of these differences Middle Drum reconstructed events are not considered for this analysis.
3.1 Hybrid Analysis Methodology
Hybrid reconstruction combines the separate SD and FD data streams by searching for time coincident events. The kinematic properties of the shower, such as the charged particle depth profile, primary energy, and , are determined using the standard profile fitting procedure applied to FD data. The time coincident SD data is used only to improve the shower track geometry, because it can very accurately measure the time of arrival and position of the shower core. Hybrid reconstruction strives to use the same routines as those used in the standard SD-only and FD-only analyses.
Hybrid reconstruction of data consists of the following steps:
Tagging events which trigger both the SD and FD.
FD and SD geometry measurement.
FD Profile fitting.
3.1.1 Tagging Hybrid Events
Tagging hybrid events is done by searching for hybrid event candidates that independently trigger the SD array and either of the BR or LR FD stations. For real data the FD data stream is searched for downward-going events, and the SD data stream is searched for all level 2 triggers. Level 2 triggers are events in which three or more adjacent SD counters, individually within 2400 m of each other, detect a three minimum ionizing particle (MIP) signal within 8 µs (Abu-Zayyad et al., 2013c). FD downward-going events satisfy the requirements of the FD event trigger and are traveling from the atmosphere towards the ground (Tameda et al., 2009). Hybrid events are found by time matching SD and FD events using a 500 µs window.
For Monte Carlo data a shower library of pregenerated CORSIKA events is used. The hybrid MC CORSIKA shower library is a collection of air showers generated at predetermined energies and zenith angles, with 250 showers generated per 0.1 decade in energy. We generate showers of arbitrary energy by measuring the elongation rates of the Gaisser-Hillas profile parameters, and correcting them from their generated values to the values expected for the energy being thrown for the MC. In this way we can throw a continuous distribution of shower profiles for any randomly chosen energy. Each CORSIKA shower library element is thrown and set at a fixed zenith angle. Shower zenith angles are chosen according to a sin θ cos θ distribution for 0° ≤ θ ≤ 60°. To generate a random shower the following procedure is performed:
A shower energy is randomly chosen according to the energy distribution of the combined HiRes1 and HiRes2 observed spectrum (Abbasi et al., 2008b).
A pregenerated shower element contained in the energy bin chosen from the previous step is randomly selected.
Shower azimuthal angle is chosen from a uniform distribution over 0 ≤ φ < 2π.
Shower core is randomly selected within a circle of 25 km radius centered on the CLF.
Monte Carlo generation also assumes a given primary particle type and other choices that must be made when running CORSIKA (Heck et al., 1998), such as the high and low energy hadronic models.
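The event-throwing procedure above can be sketched as follows. All constants here (the power-law index standing in for the HiRes spectrum shape, the energy range, and the zenith limit) are illustrative assumptions:

```python
import math
import random

def throw_shower(rng, e_min=1.0e18, e_max=1.0e20, gamma=3.0,
                 theta_max_deg=60.0):
    """Sketch of throwing one MC shower; all parameters are illustrative."""
    # 1. Energy from a falling power law dN/dE ~ E^-gamma (inverse CDF),
    #    standing in for the measured HiRes spectrum.
    u = rng.random()
    a, b = e_min ** (1.0 - gamma), e_max ** (1.0 - gamma)
    energy = (a + u * (b - a)) ** (1.0 / (1.0 - gamma))
    # 2. The library shower would be drawn from this 0.1-decade bin.
    energy_bin = math.floor(math.log10(energy) * 10.0) / 10.0
    # 3. Azimuth uniform in [0, 2*pi).
    phi = rng.random() * 2.0 * math.pi
    # 4. Zenith from a sin(theta)*cos(theta) distribution (inverse CDF).
    tmax = math.radians(theta_max_deg)
    theta = 0.5 * math.acos(1.0 - rng.random() * (1.0 - math.cos(2.0 * tmax)))
    # 5. Core uniform on a 25 km radius disc centered on the CLF.
    r_km = 25.0 * math.sqrt(rng.random())
    beta = rng.random() * 2.0 * math.pi
    core_km = (r_km * math.cos(beta), r_km * math.sin(beta))
    return energy, energy_bin, phi, theta, core_km

rng = random.Random(2024)
showers = [throw_shower(rng) for _ in range(1000)]
```

The sqrt in step 5 is what makes the core density uniform in area rather than clustered at the disc center.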
At the conclusion of this stage of the reconstruction each event will contain data from the SD counters and at least one of the FD stations. A hybrid event candidate then has SD and BR data, SD and LR data, or SD, BR, and LR data for events energetic enough to have been observed simultaneously by both FD stations.
This step of hybrid analysis is the only step which differs between reconstructing real data and Monte Carlo data. The real data and simulated data are packed into the same data format so that all analysis which follows uses the same software.
3.1.2 FD and SD Geometry Measurement
This step of reconstruction determines the geometry of the shower as independently observed by the FD and by the SD. FD plane fitting is performed using the standard routines described in Abu-Zayyad et al. (2013b); Stratton (2012). This procedure determines the geometry of the air shower track relative to the observing FD. This includes measuring the shower-detector plane (SDP) normal vector (n̂), shower zenith angle (θ), shower azimuth angle (φ), SDP angle (ψ), shower impact parameter (R_p), core location, and arrival time.
FD geometry reconstruction begins by selecting tubes that correlate closely in time and space on the PMT cluster face. For any event, randomly triggered noise tubes are present as well. A filtering step is performed to reject tubes that are not part of the shower track. The SDP normal vector n̂ is found by finding those components of the vector which minimize the following sum,

χ² = Σ_i w_i (n̂ · r̂_i)² ,    (1)

where the sum runs over the N tubes along the shower track, r̂_i is the pointing direction of tube i in the SDP, and w_i is the signal of tube i measured as a number of photoelectrons. FDs can accurately measure the SDP normal because many individual tubes are used to perform the fit. A typical shower contains 50 tubes along the shower track. Figure 4 illustrates the relationship among the different parts of the SDP.
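An equation-(1)-style SDP fit can be illustrated with a brute-force search over candidate normal directions (the real analysis uses a proper minimizer; this is only a sketch):

```python
import math

def sdp_normal(tube_dirs, weights, steps=120):
    """Brute-force sketch of an SDP fit: find the unit vector n that
    minimizes sum_i w_i * (n . r_i)^2 over a grid of directions.
    tube_dirs are the tube unit pointing vectors; weights are tube
    signals in photoelectrons."""
    best, best_cost = None, float("inf")
    for i in range(steps + 1):
        th = math.pi * i / steps
        for j in range(2 * steps):
            ph = math.pi * j / steps
            n = (math.sin(th) * math.cos(ph),
                 math.sin(th) * math.sin(ph),
                 math.cos(th))
            cost = sum(w * (n[0] * r[0] + n[1] * r[1] + n[2] * r[2]) ** 2
                       for r, w in zip(tube_dirs, weights))
            if cost < best_cost:
                best, best_cost = n, cost
    return best

# Tubes viewing a track that lies in the x-y plane: the fitted SDP
# normal should point along +/- z.
tube_dirs = [(math.cos(a), math.sin(a), 0.0) for a in (0.1, 0.3, 0.5, 0.7)]
n_hat = sdp_normal(tube_dirs, [40.0, 60.0, 55.0, 35.0])
```

Weighting by signal lets the brightest tubes, which have the best-determined pointing, dominate the plane determination.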
Once the SDP is known, the shower impact parameter and the SDP angle are calculated by the time vs. angle fit. The expected trigger time of PMT i is

t_i = t_0 + (R_p / c) tan[(π − ψ − χ_i)/2] .    (2)

R_p, the shower impact parameter, also called the pseudodistance, is the distance of closest approach of the shower track to the origin of the observing FD. ψ is the SDP angle, the angle within the SDP between the shower track and the core vector, which points from the FD origin to the point of impact on the ground. χ_i is the viewing angle of the PMT within the SDP. t_0 is the time when the shower front passes the point of closest approach. R_p, ψ, and t_0 are parameters to be determined by fitting. A minimization is performed on the function that measures the difference between the observed tube trigger times and the expected trigger times described by Equation 2,

χ² = Σ_i (t_i^obs − t_i)² / σ_i² ,    (3)

where t_i^obs is the observed trigger time and σ_i is the timing uncertainty for tube i.
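The time vs. angle relation and its chi-square can be sketched as follows, using synthetic tube times generated from an assumed track geometry:

```python
import math

C_KM_PER_US = 0.299792  # speed of light in km per microsecond

def expected_time(chi_deg, t0_us, rp_km, psi_deg):
    """Equation (2): expected trigger time of a tube viewing the track
    at angle chi within the shower-detector plane."""
    ang = (math.pi - math.radians(psi_deg) - math.radians(chi_deg)) / 2.0
    return t0_us + (rp_km / C_KM_PER_US) * math.tan(ang)

def timing_chi2(observations, t0_us, rp_km, psi_deg, sigma_us=0.1):
    """Equation (3): chi-square of observed vs. expected trigger times.
    observations is a list of (chi_deg, observed_time_us) pairs."""
    return sum((t_obs - expected_time(chi, t0_us, rp_km, psi_deg)) ** 2
               / sigma_us ** 2
               for chi, t_obs in observations)

# Synthetic track with assumed truth (t0, Rp, psi) = (10 us, 12 km, 50 deg).
truth = (10.0, 12.0, 50.0)
obs = [(chi, expected_time(chi, *truth)) for chi in range(5, 40, 5)]
chi2_truth = timing_chi2(obs, *truth)               # zero by construction
chi2_wrong_rp = timing_chi2(obs, 10.0, 13.0, 50.0)  # mis-fit Rp penalized
```

Because the tangent term couples R_p and ψ, an FD-only fit over a short track can trade one against the other; the SD timing added in the hybrid fit is what breaks this degeneracy.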
SD reconstruction is performed as well to determine the core arrival time and core location. A similar procedure for separating counters which trigger during an event and those that are noise triggers is employed by using time and space pattern recognition. Details about SD reconstruction can be found in Ivanov (2012).
At the end of this processing step, we have two independent determinations of the shower’s point of impact on the ground and its time of arrival.
3.1.3 Hybrid Fitting
Hybrid fitting is done by casting the individual “pixels” of the FD and SD that observe the passage of the shower into a common frame of reference, then performing a four component minimization to redetermine some parameters of the shower geometry. By combining the independent FD and SD observations, the geometry fit is improved compared to FD-only or SD-only observations. The hybrid function minimized for this analysis is

χ²_hybrid = χ²_FD + χ²_SD + χ²_SDP + χ²_core ,    (4)

and the parameters being minimized are shower core position, x and y, zenith angle, θ, azimuth angle, φ, and shower core arrival time. Figure 5 shows how the common geometry of the FD and SDs is constructed. The FD pixels are treated as they typically are for normal FD reconstruction, each assigned a pointing direction and trigger time to determine the position and time of the shower point observed on the shower axis. The geometry and timing of the SDs that observe a common event are used to determine the pointing directions and times of those points on the shower axis in the reference frame of the FD.
χ²_FD is the same as equation 3 using the FD data described in Section 3.1.2. It is included in χ²_hybrid because in this fitting procedure the position of the shower core and the SDP are allowed to vary, potentially changing the previously calculated parameters R_p, ψ, and t_0. χ²_SDP (equation 1) is the same as described in Section 3.1.2 as well, allowing for varying shower track geometry. Because we cast the SD positions and times into the same reference frame as the FD tube data, χ²_SD applies the time vs. angle fit of equations 2 and 3 to the SD counters to measure the best values for R_p, ψ, and t_0 of the shower track. χ²_core is a term used to directly relate the determination of the shower core as observed by the SDs (the center of charge) to the new shower core being fitted in equation 4. This term is calculated by

χ²_core = [(x − x_SD)² + (y − y_SD)²] / σ_SD² ,    (5)

where x_SD and y_SD are the (fixed) components of the shower core as observed by the SD array, and σ_SD is the SD uncertainty on the core location.
3.1.4 Profile Fitting
Once the shower geometry is improved by folding in the SD data, FD profile fitting is done as it is normally done and described in Abu-Zayyad et al. (2013b). This means properties of the shower, such as the primary particle energy and the depth of shower maximum, are determined using only the light profile observed by the FD.
An inverse Monte Carlo method is used to determine the four parameters of the Gaisser-Hillas equation (Gaisser & Hillas, 1977) as shown in equation 6, namely N_max, X_max, X₀, and λ, which, after simulating the measured shower geometry, atmosphere, and detector acceptance, best mimic the tube-by-tube response measured in the FD.
This analysis imposes constraints on the values of λ and X₀, fixing them to 70 g/cm² and −60 g/cm², respectively. It has been observed that X₀ and λ are correlated, so λ is strictly not an independent parameter. Also, ground-based fluorescence detectors cannot observe the depth of first interaction, which X₀ is nominally supposed to represent; as a fit parameter it is found to be physically meaningless, since it often takes negative values (Song, 2004). We find, for this experiment, that fixing these two parameters reduces the bias in energy and X_max for simulated proton-induced showers over our energy range of interest. Both data and Monte Carlo utilize databases of detector PMT pedestals, PMT gains, and atmospheric conditions. These time-dependent databases are generated from the actual data and accurately describe the detector response recorded over each night's observations.
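The Gaisser-Hillas profile with the two parameters fixed as above can be sketched as follows (the shower parameters in the demonstration, n_max = 10⁹ and x_max = 750 g/cm², are illustrative values, not results from the analysis):

```python
import numpy as np

def gaisser_hillas(X, n_max, x_max, x0=-60.0, lam=70.0):
    """Gaisser-Hillas longitudinal shower size profile N(X).

    x0 and lam are fixed to the values quoted in the text
    (-60 g/cm^2 and 70 g/cm^2); n_max and x_max remain as the
    fitted parameters.
    """
    X = np.asarray(X, dtype=float)
    with np.errstate(invalid="ignore"):
        base = (X - x0) / (x_max - x0)
        N = n_max * base ** ((x_max - x0) / lam) * np.exp((x_max - X) / lam)
    # The profile is defined only above the formal starting depth x0.
    return np.where(X > x0, N, 0.0)

# The profile peaks at X = x_max by construction.
depths = np.linspace(0.0, 1500.0, 1501)   # slant depth grid, g/cm^2
profile = gaisser_hillas(depths, n_max=1.0e9, x_max=750.0)
```

Setting the derivative of log N(X) to zero confirms the maximum falls exactly at X = x_max, independent of the fixed x0 and λ.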
Once the best shower profile is determined, the energy of the primary particle is found by integrating the shower profile and applying corrections for energy transported by components of the shower not observed by direct fluorescence detection, i.e., neutrinos and muons. This correction is about 8% to 5% for proton-induced showers at eV and eV, respectively. For iron-induced showers the missing energy correction is about 4% larger (Barcikowski, 2011). This analysis applies the missing energy correction calculated from simulated proton-induced showers to all data and all Monte Carlo. We therefore expect a relative energy bias of around 4% between reconstructed proton and iron Monte Carlo.
Figure 6 is an example of a typical hybrid event observed by Telescope Array. The surface detectors triggered in the event are shown in Figure (a)a. Marker size indicates the signal measured by the SD and color indicates trigger time. The arrow shows the azimuthal angle of the shower track and the solid line is the projection of the shower detector plane along the surface of the earth, along the line between the observing FD and the shower core. Figure (b)b shows the hybrid time vs. angle fit. Red triangles show the trigger time and viewing angle of the FD tubes that observed the passage of the shower. The blue triangles show the same information for surface detectors. The solid line is the fit to the combined SD and FD pixels using equation 2. Figure (c)c shows the FD tubes triggered in this event. Marker size indicates signal size and color indicates trigger time. The solid line is the shower detector plane found by fitting equation 1. The flux and uncertainty in photons/degree/m² as a function of shower depth for each tube along the shower detector plane are shown in Figure (d)d. The solid lines show the simulated flux generated for the best profile fit. Red is fluorescence flux, blue is Rayleigh-scattered flux, green is direct Cherenkov flux, and black is the sum of all these flux components.
4 Analysis of the Data
The period of data collection covered in this work is 27 May 2008 to 29 November 2016, about 8.5 years. Because this analysis is done using hybrid data, the collection period is limited by the approximately 10% duty cycle of the fluorescence detectors. This analysis examines 1500 nights of data collected over these 8.5 years.
By performing time matching of SD and FD events as described in Section 3.1.1, 17834 hybrid candidate events (hybrid events that have not gone through full reconstruction and analysis cuts) are found. The distribution of hybrid candidate events by the FD that observed them is
  17834   hybrid candidate events
  10381   BR events (mono & stereo)
   8942   LR events (mono & stereo)
   8892   BR events (mono)
   7453   LR events (mono)
   1489   BR + LR stereo events
Monocular hybrid candidate events are hybrid events observed by only one FD (either BR or LR for this analysis) and the SD array. Stereo hybrid candidate events are events observed by both FDs and the SD array. Even though an event is tagged as a stereo candidate event, there is no guarantee that the event is able to be fully reconstructed independently by each FD. For example, a medium energy event located much closer to one FD than the other may not be of sufficient quality to pass all cuts for the FD located farther away.
Reconstruction of the data proceeds as described in Section 3.1. Once profile fitting is done, cuts are applied to the events to reject those that are poorly reconstructed which may introduce bias and degrade resolution. These cuts are chosen to maximize the number of events collected without introducing large reconstruction biases.
Boundary cut. This cut ensures the core of the event falls within the bounds of the SD array. Additionally, the core location must not be within 100 m of the boundary. This is to ensure the charged particle distribution on the ground is fully contained within the array, allowing accurate reconstruction by the SDs.
Track length cut. This cut ensures the shower track as observed by the FDs is sufficiently long to allow enough PMTs to observe it. Short shower tracks may be caused by distant showers or showers moving towards the detector, both of which indicate unfavorable geometry for accurate reconstruction. We require a shower track of or greater to accept the track.
Good tube cut. Good tubes are tubes that have sufficient signal-to-noise and are spatially and temporally part of the shower track as determined by the FD plane fitting routines. We require 11 or more good tubes to accept a track.
SDP angle cut. This cut rejects events with an SDP angle greater than . Showers exceeding this angle have some component of the track vector pointing towards the observing FD. As this angle grows, the shower is seen more and more head-on, increasing the contribution of direct Cherenkov light received. Showers with large direct Cherenkov signals are difficult to reconstruct accurately and are rejected.
Time extent cut. This cut rejects tracks with time profiles less than 7 μs. This is another cut to ensure short tracks, potentially approaching the observing FD directly, are removed from the data.
Zenith angle cut. Tracks with large zenith angles are difficult to reconstruct by both the SD and FD. FD-only reconstruction can accurately reconstruct events with relatively large zenith angles (Abu-Zayyad et al., 2013b). The upper zenith angle limit for SD-only reconstruction is (Abu-Zayyad et al., 2013a; Ivanov, 2012). This cut removes tracks with zenith angles greater than .
Hybrid geometry cut. This cut requires that the reduced χ² of the hybrid fitting described in Section 3.1.3 must be less than 5 to accept the track. This ensures good geometry reconstruction when combining the SD and FD geometry information.
Profile cut. This cut requires that the reduced χ² of the profile fit described in Section 3.1.4 be less than 10. This ensures the light profile of the track is well observed and that the inverse Monte Carlo process simulated the shower as observed by the FD sufficiently well.
X_max bracketing cut. Once the shower profile is reconstructed, the atmospheric depth vs. angle for observed parts of the shower is known. Using this depth profile, the minimum and maximum depths observed along the track, as well as X_max, are calculated. The bracketing cut requires that the fitted value of X_max be greater than the minimum observed depth and less than the maximum observed depth for the track to be accepted. This cut ensures that the rising shower size before X_max and the falling size after it are in the field of view of the observing FD. This is required to get a good profile fit. If X_max is not bracketed, the Gaisser-Hillas profile fit will often fail to accurately measure X_max and N_max, which also causes large uncertainty on the primary particle energy.
Energy cut. This cut ensures the energy of the shower is not less than eV. Showers with energies below this cut are difficult for both the SD and FDs to reconstruct independently due to low signal-to-noise. The hybrid aperture falls steeply below eV. This cut represents the design limit of our detectors operating in hybrid mode.
Weather cut. Events that occur during bad weather nights are rejected. Weather is monitored by operators in the field and logged hourly. Operators record the state of cloud coverage in the four cardinal directions as well as the amount of overhead coverage and cloud thickness as judged by eye. These weather codes are used to categorize nights into "excellent", "good", or "bad" weather nights. This analysis uses nights deemed "excellent" or "good". Nights that are recorded as bad are rejected from the data. To ensure consistency among similar analyses, the same weather cut criteria as used by the FD monocular spectrum analysis (Abu-Zayyad et al., 2013b) are also used here.
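The cut list above can be expressed as a single predicate over a per-event record; a minimal sketch. The field names are illustrative, not from the TA software; thresholds marked [text] are quoted in the section, while those marked [assumed] are elided there and the values used are hypothetical placeholders.

```python
def passes_quality_cuts(ev):
    """Return True if an event record survives all analysis cuts."""
    return (
        ev["border_distance_m"] >= 100.0       # boundary cut [text]
        and ev["track_length_deg"] >= 8.0      # track length cut [assumed]
        and ev["n_good_tubes"] >= 11           # good tube cut [text]
        and ev["sdp_angle_deg"] <= 125.0       # SDP angle cut [assumed]
        and ev["time_extent_us"] >= 7.0        # time extent cut [text]
        and ev["zenith_deg"] <= 55.0           # zenith angle cut [assumed]
        and ev["hybrid_chi2_dof"] < 5.0        # hybrid geometry cut [text]
        and ev["profile_chi2_dof"] < 10.0      # profile cut [text]
        and ev["x_low"] < ev["x_max"] < ev["x_high"]  # Xmax bracketing [text]
        and ev["energy_eV"] >= 10 ** 18.2      # energy cut [assumed]
        and ev["good_weather"]                 # weather cut [text]
    )
```

In practice the same predicate would be applied identically to data and Monte Carlo, as the text requires.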
Hybrid monocular events which pass all of the analysis cuts are accepted as part of the final event set. Stereo hybrid events must go through one more selection step before final acceptance. A stereo event is independently reconstructed using the data as observed by the BR FD and as observed by the LR FD. The cuts above are applied separately to the BR and LR reconstructed hybrid event information. If only one of the two sites' data passes cuts, then the hybrid event is accepted using that site's reconstruction parameters. If both sites' data pass the cuts, then the site with the profile reduced χ² closest to one is accepted. The same set of cuts and the same selection procedure are used on data and Monte Carlo.
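The stereo tie-break rule can be sketched as follows; the per-site result objects and their field name are illustrative assumptions, not the actual TA data structures.

```python
def select_stereo_reconstruction(br, lr):
    """Choose which FD site's reconstruction to keep for a stereo
    hybrid event. Each argument is that site's reconstruction result,
    or None if it failed the analysis cuts."""
    if br is None:
        return lr            # may also be None: event rejected entirely
    if lr is None:
        return br
    # Both sites pass: keep the one whose profile chi^2/dof is
    # closest to one, per the selection rule in the text.
    if abs(br["profile_chi2_dof"] - 1.0) <= abs(lr["profile_chi2_dof"] - 1.0):
        return br
    return lr
```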
After the cuts have been applied to the data 3330 events remain. The distribution of fully reconstructed and accepted data events is
  3330   hybrid accepted events
  1743   BR events (mono & stereo)
  1587   LR events (mono & stereo)
  1676   BR events (mono)
  1504   LR events (mono)
   150   BR + LR stereo events
4.1 Data/Monte Carlo Comparison
Monte Carlo is stored in the same format and analyzed in the same manner as data. This allows us to understand the acceptance of our detector by detailed simulations and to perform data/Monte Carlo comparisons to test the agreement of data with different composition models and mixtures. Figures 7 and 8 compare data and Monte Carlo of the parameters used by the analysis cuts for four different primary species: protons, helium, nitrogen, and iron. All of the Monte Carlo data used in this analysis is generated in CORSIKA using the QGSJet II-04 hadronic model. For all plots, the accepted energy range is eV and all Monte Carlo simulated data histograms are normalized by area to the area of the data histogram. Each Monte Carlo data set represents six years of TA operations with about 10 times the statistics collected in the data over that same period. The TA Monte Carlo mimics the real operating conditions of the FDs and SDs by using time-dependent databases of the real operating conditions in the field, such as pedestals, atmosphere, and tube gains. This information is used in reconstruction as well as event simulation. As the figures show, the simulation mimics the observed data well for each of the four chosen models.
To measure the bias and resolution of our detector for a given observable parameter, for all reconstructed Monte Carlo events we histogram the difference x_rec − x_true, where x_rec is the reconstructed value of the parameter and x_true is the true value. The parameter bias is the sample mean of this distribution and the resolution is the sample standard deviation. Table 1 shows the measured bias and resolution of this analysis for four primary species for all reconstructed Monte Carlo events with eV. The table shows the reconstruction bias for X_max is very small, about −1 g/cm² for protons and −4 g/cm² for iron, both of which are much smaller than the resolutions of 17 g/cm² and 13 g/cm², respectively. Energy bias is less than 2% for protons and −6.5% for iron. We expect a larger energy bias for iron because when the shower energy is computed the missing energy correction assumes a proton primary (see Section 3.1.4). In all cases the energy resolution is less than 6% for the four primary species shown. Angular resolution and bias for the geometric parameters are acceptably small in all cases, less than a degree, which is expected for hybrid reconstruction. Bias and resolution of the shower impact parameter are of order 0.1% of the average observed distance of . The reconstruction accuracy of the x and y components of the shower core location on the ground is also very good.
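The bias and resolution definitions above reduce to the first two sample moments of the reconstruction residuals; a minimal sketch:

```python
import numpy as np

def bias_and_resolution(reconstructed, true):
    """Bias is the sample mean of (reconstructed - true); resolution is
    the sample standard deviation of the same differences."""
    diff = np.asarray(reconstructed, dtype=float) - np.asarray(true, dtype=float)
    return diff.mean(), diff.std(ddof=1)
```

Applied to reconstructed-versus-thrown X_max values for a given species, this yields the entries of Table 1.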
X_max bias in our simulation comes in two parts: bias due to detector acceptance and bias due to reconstruction. Reconstruction bias is affected by the operating conditions of the detector, the selection of cuts, composition and hadronic model dependence, and proper modeling of the detector and air shower physics in the Monte Carlo. Acceptance bias is affected by physical detector design and detector response, such as the choice of triggering algorithm.
Acceptance bias predominantly affects the deeply penetrating tail of the X_max distribution. This is because there is an upper bound to the maximum depth at which an air shower can be observed due to the limited atmospheric mass overburden, which is part of the detector design (placement on the Earth's surface). For very deeply penetrating primaries, the ability to reconstruct X_max via fluorescence observation is limited by the following scenarios: 1) the air shower track has a small zenith angle and X_max occurs at ground level or below, or 2) the air shower track achieves shower maximum in air, but therefore has a very large zenith angle. The result is that, as a function of energy, acceptance bias in TA is seen as a systematic shift of the mean of the X_max distribution to smaller depths and a narrowing of the width (RMS) of the distribution in each energy bin. This effect is shown explicitly in the Monte Carlo distributions in Figure 9.
This effect is dependent upon the mass and energy of the primary particle; light particles penetrate more deeply, and shower maximum occurs at deeper depths on average with increasing energy and decreasing primary mass. It is also dependent upon the physics of UHECR hadronic interactions, which is not known at our energy range of interest. This appears as model dependence through our choice of hadronic generator in CORSIKA simulations. More recent hadronic models tuned to LHC results, such as QGSJet II-04 (Ostapchenko, 2011) and EPOS LHC (Pierog et al., 2015), generate events that penetrate more deeply on average than older models, such as QGSJet II-03 (Ostapchenko, 2007).
The sum of acceptance bias and reconstruction bias is called total bias, and it is important to understand because it appears as a systematic shift of the final reconstructed X_max distribution relative to the true generated one. Given that there is some combination of hadronic model and mixture of elements that represents the true distribution of X_max impinging upon the Earth's atmosphere, detectors with acceptance bias will never be able to fully reproduce the true distribution in nature simply by plotting the distribution of reconstructed events. Acceptance bias will distort the observed distribution, because information about the distribution is simply lost, and it will typically appear as if it is the result of a heavier mixture of elements. To correct for this type of bias, one can attempt unfolding of the data, or resort to even more restrictive sets of cuts, as done by the Auger experiment (Aab et al., 2014).
An alternate method to understand measured composition is to simply use Monte Carlo to simulate biases incurred due to detector acceptance and compare the measured distribution to the biased, simulated one. This is the method chosen by TA. It is important to understand how a measurement deals with this issue of acceptance bias before attempting to compare composition results between different experiments.
4.3 TA Data
TA hybrid data is binned by energy into eleven energy bins. Below eV, there are sufficient statistics to use 0.1-decade-wide energy bins, providing events per bin. Above eV the bins are widened to capture more events. Figures 10 and 11 show the X_max distributions measured in this analysis. The X_max distributions for reconstructed QGSJet II-04 Monte Carlo are shown as well. For each energy bin, the histogram of each individual species is normalized by area to the area of the data histogram. In a given energy bin, lighter elements have larger X_max, as expected from the relationship discussed in Section 1.
Each figure shows that the means and standard deviations of the X_max distributions of the simulated elements decrease with increasing mass, as we expect. We can use them to compare to the data to determine which pure element drawn from the QGSJet II-04 model most resembles the data and which elements may be excluded. Such a comparison does not imply that we believe cosmic rays in nature to be composed of a single chemical element in any given energy bin we have observed. In a future paper, we will investigate the compatibility of TA data with mixtures of elements. In this current work, we only compare TA data to pure CORSIKA elements.
UHECR composition measurements typically utilize the first and second moments of X_max distributions of data and Monte Carlo to compare observed results to those expected for the models under investigation. These individual quantities are too limited to fully capture the details of X_max distributions, particularly for light elements, which exhibit prominent non-Gaussian tails. While examining ⟨X_max⟩ and σ(X_max) as functions of energy is still useful, especially to place an experiment in historical context with older measurements, utilizing more powerful statistical techniques is more appropriate, given that computers now exist to make these calculations practical. For this reason we will make our primary visual comparisons of data and Monte Carlo by simultaneously examining ⟨X_max⟩ and σ(X_max), which is a more powerful way to understand the relationship of the data and Monte Carlo.
Recalling the discussion of the relationship of mass and energy to the mean and width of the X_max distribution from Section 1, we can examine the signature of a given element as observed over 8.5 years of exposure in TA hybrid mode by simultaneously measuring the distributions of ⟨X_max⟩ and σ(X_max). Light elements will have larger ⟨X_max⟩ and σ(X_max), because of the larger fluctuations in the first interaction in the atmosphere and the subsequent shower development. We will also be able to see the effect of TA's acceptance on the reconstructed distributions and compare them to the observed data. To do this, the reconstructed X_max distribution for a single element, such as QGSJet II-04 protons shown in Figures 10 and 11, is sampled according to the same number of events recorded in the data for a given energy bin. The ⟨X_max⟩ and σ(X_max) of this sample are calculated and recorded. This procedure is then repeated 5000 times. The distribution of ⟨X_max⟩ and σ(X_max) is used to calculate the 68.3%, 90%, and 95% confidence intervals. The entire set of ⟨X_max⟩ and σ(X_max) values calculated by this method is then plotted as a 2-dimensional distribution along with the computed confidence intervals. This procedure is repeated for the other three chemical elements used in the analysis. Figures 12 and 13 show this measurement for all observed energy bins. The ⟨X_max⟩ and σ(X_max) of the data observed in each energy bin is also marked as a single red star. Additionally, the statistical, systematic, and combined statistical and systematic error bounds are marked around the data.
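The resampling procedure just described can be sketched as follows; the confidence intervals then follow from the quantiles of the recorded trial moments.

```python
import numpy as np

def sample_moments(mc_xmax, n_data, n_trials=5000, seed=0):
    """Resample a reconstructed Monte Carlo Xmax distribution: draw
    n_data events (matching the data statistics in an energy bin)
    n_trials times, recording (<Xmax>, sigma(Xmax)) for each trial."""
    rng = np.random.default_rng(seed)
    mc = np.asarray(mc_xmax, dtype=float)
    means = np.empty(n_trials)
    sigmas = np.empty(n_trials)
    for i in range(n_trials):
        trial = rng.choice(mc, size=n_data, replace=True)
        means[i] = trial.mean()
        sigmas[i] = trial.std(ddof=1)
    return means, sigmas

# e.g. a 95% interval on <Xmax> for a bin with 80 data events:
#   lo, hi = np.quantile(sample_moments(mc_xmax, 80)[0], [0.025, 0.975])
```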
Figures (a)a and (b)b, corresponding to the energy range eV, show that each of the four modeled chemical elements has clear separation and is individually resolvable by TA in those energy bins, given our acceptance and statistics in the data (801 and 758 events, respectively). The ⟨X_max⟩ of the data resembles QGSJet II-04 protons, but the σ(X_max) of the data is lower by about our systematic uncertainty in those energy bins. Note that we do not account for systematic uncertainties in the QGSJet II-04 model, which will be discussed in Section 5. Figures (c)c - (f)f, corresponding to the energy range eV, show that the ⟨X_max⟩ of the data continues to resemble QGSJet II-04 protons, and the σ(X_max) of the data falls within the 68.3% confidence interval of the proton distributions within the data's systematic uncertainty. We also notice the effect of decreasing statistics in the data, observed as an increase in the size of the confidence intervals of the individual Monte Carlo elements. Figure (a)a, corresponding to the energy range eV, shows a relatively large downward fluctuation in the ⟨X_max⟩ of the data. In this energy bin, the 68.3% confidence intervals of QGSJet II-04 protons and helium both fall within the bounds of the systematic uncertainty of the data. In Figure (b)b, corresponding to the energy range eV, the ⟨X_max⟩ of the data fluctuates up from the previous energy bin and the systematic error bounds of the data fall within the 68.3% confidence interval of protons. In this energy bin, because of the small statistics in the data (80 events), the largest confidence intervals of protons and helium begin to overlap. This indicates that, given TA's current exposure in this energy bin, we are losing our ability to make precise statements about the signature of pure light chemical elements. The ability to distinguish nitrogen from iron, or nitrogen from helium or protons, remains.
In Figure (c)c, corresponding to the energy range eV, the 95% confidence intervals of protons and helium are once again separated, but here we have doubled the size of the energy bin in an attempt to still make a reasonably good measurement of ⟨X_max⟩ and σ(X_max) in the data. The 68.3% confidence intervals of both protons and helium fall within the bounds of the systematic uncertainty of the data. Figures (d)d and (e)e, corresponding to the energy range eV, show that TA's ability to resolve individual QGSJet II-04 elements is degraded due to the overlap of the confidence intervals. According to these figures, when considering only the joint distributions of ⟨X_max⟩ and σ(X_max), within the data's systematic uncertainty the data may resemble QGSJet II-04 protons, helium, or nitrogen.
Figure 14 shows only the means of the X_max distributions presented in Figures 10, 11, 12, and 13, also called the elongation rate, of the observed data (⟨X_max⟩), as well as of reconstructed Monte Carlo for four primary species. The gray band around the data points indicates the systematic uncertainty in ⟨X_max⟩ of 17.4 g/cm² estimated for this analysis.
4.4 Systematic Uncertainties
Systematic uncertainties in X_max and σ(X_max) are evaluated for four sources: detector modeling, atmosphere, fluorescence yield, and the reconstruction algorithm.
The pointing accuracies of the phototubes ( degrees) and the relative timing between the FD and SD (240 ns) dominate the detector effects. These effects give g/cm² and g/cm² uncertainty in X_max, and g/cm² and g/cm² in σ(X_max), respectively. Some events are detected by both FD stations (BR and LR), and the X_max differences of such stereo events can also be used to estimate the detector effect. We found that the BR-LR difference is smaller than 10 g/cm² in X_max for events with energies greater than eV.
The atmospheric effect is dominated by the amount of aerosols. We have an uncertainty of aerosols in terms of vertical aerosol optical depth (VAOD), which gives a shift in X_max of 3.4 g/cm². Variations in atmospheric aerosols potentially have a large effect on σ(X_max). Aerosols are measured every 30 minutes by the central laser facility (CLF) (Tomida et al., 2013). If we compare how σ(X_max) varies using the VAOD data measured by the CLF, the effect on σ(X_max) is found to be 18.9 g/cm². Another effect comes from the atmospheric profile, i.e., the pressure and density of the atmosphere as functions of height. When we reconstruct data using an atmospheric database that uses NOAA National Weather Service radiosonde data instead of the Global Data Assimilation System (GDAS) (Laboratory, 2004), the effects on X_max and σ(X_max) are found to be 5.9 g/cm² and 7.4 g/cm², respectively.
We use a fluorescence yield model which uses the absolute yield measurement by Kakimoto et al. (Kakimoto et al., 1996) and the fluorescence spectral measurement by the FLASH experiment (Abbasi et al., 2008a). A g/cm² effect is expected if we use different fluorescence modeling. For example, we found a g/cm² shift in X_max and a 3.7 g/cm² effect in σ(X_max) when we use the model based on the measurements by the AirFly experiment (Ave et al., 2007, 2013) of the absolute yield, the spectrum, and the atmospheric parameter dependencies.
A systematic effect in X_max also comes from the reconstruction program used in the analysis. We have two reconstruction programs, independently developed within TA, for the same data. The reconstruction bias can be estimated by an event-by-event comparison of the X_max values calculated by these two separate reconstruction procedures, and it is smaller than 4.1 g/cm² for events with eV.
Some of these contributions are not fully independent. For example, the uncertainties evaluated from the BR-LR difference and from the comparison of different analysis programs could be correlated. In the calculation of the total systematic uncertainty, we use a linear sum of these two sources of uncertainty (14.1 g/cm²) as a conservative estimate. Other sources are added in quadrature, and we find the total systematic uncertainty in X_max to be 17.4 g/cm². The results are summarized in Table 2.
The systematic uncertainties of σ(X_max) from the sources discussed above are also evaluated, and are given in Table 3. Adding them in quadrature, we obtain 21.1 g/cm².
  Detector             5.1 g/cm²   Relative timing between FD and SD (3.8 g/cm²), pointing direction of the telescope (3.3 g/cm²)
  Atmosphere           6.8 g/cm²   Aerosol (3.4 g/cm²), atmospheric depth (5.9 g/cm²)
  Fluorescence yield   5.6 g/cm²   Difference in yield models
  Quadratic sum       10.2 g/cm²

  Not fully independent sources:
  Detector            10.0 g/cm²   Difference in two FD stations
  Reconstruction       4.1 g/cm²   Difference in reconstructions
  Linear sum          14.1 g/cm²
  Detector             4.3 g/cm²   Relative timing between FD and SD (1.7 g/cm²), pointing direction of the telescope (4.0 g/cm²)
  Atmosphere          20.3 g/cm²   Aerosol (18.9 g/cm²), atmospheric depth (7.4 g/cm²)
  Fluorescence yield   3.7 g/cm²   Difference in yield models
  Quadratic sum       21.1 g/cm²
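The combination rule described in the text, quadrature for independent sources and a conservative linear sum for the two possibly correlated ones, can be sketched as follows, using the values quoted in the tables above:

```python
import math

def combine_systematics(independent, correlated):
    """Independent sources add in quadrature; possibly correlated
    sources (here the BR-LR difference and the reconstruction-program
    difference) are combined with a conservative linear sum, which is
    then added in quadrature with the rest."""
    quad = math.sqrt(sum(s * s for s in independent))
    lin = sum(correlated)
    return math.sqrt(quad * quad + lin * lin)

# Xmax (Table 2): detector 5.1, atmosphere 6.8, yield 5.6 g/cm^2,
# plus correlated sources 10.0 and 4.1 g/cm^2 (linear sum 14.1).
total_xmax = combine_systematics([5.1, 6.8, 5.6], [10.0, 4.1])

# sigma(Xmax) (Table 3): all sources added in quadrature.
total_sigma = combine_systematics([4.3, 20.3, 3.7], [])
```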
As seen in Figure 14, within systematic uncertainties, the ⟨X_max⟩ of the data is in agreement with QGSJet II-04 protons and helium for nearly all energy bins. There is clear separation between the region of systematic uncertainty and heavier elements such as nitrogen and iron. In the last two energy bins there is some overlap between the systematic uncertainty region of the data and nitrogen, but the statistics in the data there are very poor. Care must be taken in interpreting Figure 14, since ⟨X_max⟩ by itself is not a robust enough measure to fully draw conclusions about UHECR composition. When comparing the ⟨X_max⟩ of data to Monte Carlo, in addition to detector resolution and systematic uncertainties in the data, which may hinder resolving the differences between elements with relatively similar masses, the issue of systematic uncertainties in the hadronic model used to generate the Monte Carlo must also be recognized. This will be discussed in Section 5. Referring back to Figures 12 and 13, we can see that though the ⟨X_max⟩ of the data in Figure 14 lies close to QGSJet II-04 helium, the σ(X_max) of the data is larger than the helium model allows for energy bins with good data statistics. For this reason, we will test the agreement of data and Monte Carlo by comparing not just ⟨X_max⟩ and σ(X_max), but the entire X_max distributions. The elongation rate of the data shown in Figure 14, found by performing a fit to the data, is g/cm²/decade. The χ²/DOF of this fit is 10.67/9. Table 4 summarizes the observed first and second moments of TA's observed X_max for all energy bins.
5 Statistical Hypothesis Tests
To draw conclusions about agreement between the data and the models, we should employ a test that measures the agreement of the entire distributions instead of relying upon their first and second moments. Comparisons of the means and standard deviations of X_max distributions cannot fully characterize agreement or disagreement, because these distributions are naturally skewed. The deep X_max tail is problematic for energy bins with low exposure, which can lead to misinterpretation of the results if care is not taken and only the first and second moments of the distributions are considered. To test the compatibility of the data and the Monte Carlo, we use an unbinned maximum likelihood test.
To perform these tests, we fit the Monte Carlo X_max distributions to a continuous function described by a convolution of a Gaussian with an exponential function, then uniformly shift the X_max distribution of the data within g/cm² in 1 g/cm² steps, calculate the log likelihood at each shift, and record which shift gives the best likelihood between data and Monte Carlo. We allow for shifting of the data to account for possible systematic uncertainties in our reconstruction and for uncertainties in the models that we are testing against. An additional benefit of this method is that if the required shift is significantly larger than the combined experimental and theoretical uncertainties, then that pure elemental composition is strongly disfavored. However, in the current paper, we focus on the shape comparisons exclusively.
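A minimal sketch of the shift scan, assuming an exponentially modified Gaussian (the stated Gaussian-convolved-with-exponential form) has been fitted to the Monte Carlo; the parameters mu, sigma, and tau used in the toy example are illustrative, not fitted TA values:

```python
import math
import random

def exgauss_logpdf(x, mu, sigma, tau):
    """Log density of a Gaussian (mu, sigma) convolved with a
    right-tailed exponential of mean tau, i.e. the functional form
    fitted to the Monte Carlo Xmax distributions."""
    u = (mu - x) / tau + sigma * sigma / (2.0 * tau * tau)
    a = (mu - x) / (math.sqrt(2.0) * sigma) + sigma / (math.sqrt(2.0) * tau)
    return math.log(0.5 / tau) + u + math.log(math.erfc(a))

def best_shift(data, mu, sigma, tau, max_shift=50, step=1):
    """Scan uniform shifts of the data in 1 g/cm^2 steps and return the
    shift maximizing the unbinned log likelihood under the fitted
    Monte Carlo density."""
    shifts = range(-max_shift, max_shift + 1, step)
    return max(shifts,
               key=lambda d: sum(exgauss_logpdf(x + d, mu, sigma, tau)
                                 for x in data))

# Toy check: data drawn from the model but shifted down by 20 g/cm^2
# should prefer a positive shift near +20.
random.seed(1)
toy = [random.gauss(700.0, 30.0) + random.expovariate(1.0 / 40.0) - 20.0
       for _ in range(500)]
recovered = best_shift(toy, mu=700.0, sigma=30.0, tau=40.0)
```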
For example, Figure 15 shows the log likelihood values measured for the chemical elements tested against the data in the energy bin. Figure 16 shows the data after shifting by the best-fit offset, together with the Monte Carlo for each chemical element in the same energy bin. Data and protons appear to agree well over their entire distributions. Data and helium match well up until about 850 g/cm², where the helium tail begins to fall off faster than the data. Nitrogen and iron show less agreement in the tails as well. Note that the same data is used for each subfigure shown in Figure 16, but it is shifted by a different amount in each one. The shifts applied to the data are +29, +7, -19, and -41 g/cm² in the proton, helium, nitrogen, and iron subfigures, respectively. The slight variation of the shape of the data histogram in each subfigure is due to the effect of systematically shifting all data points and then binning in the plot. However, the maximum likelihood calculated for our tests uses an unbinned method.
We then calculate the probability (p-value) of measuring a log likelihood in the Monte Carlo equal to or more extreme than the one measured in the data shifted by the best-fit offset. Figure 17 shows the distributions of log likelihood calculated for the energy bin, as well as the likelihood measured for the data when shifted by the best-fit offset. The null hypothesis being tested is that the data after shifting and the Monte Carlo are drawn from the same continuous distribution. If the p-value we measure from this test statistic is 0.05 or less, we reject the null hypothesis at the 95% confidence level, and we say the two distributions are not compatible. If the p-value is greater than 0.05, we fail to reject the null hypothesis, and the systematically shifted data and Monte Carlo are deemed compatible.
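The p-value computation amounts to an empirical tail fraction over Monte Carlo trial likelihoods; a minimal sketch:

```python
def p_value(data_loglike, mc_loglikes):
    """Empirical p-value: the fraction of Monte Carlo trial
    log-likelihoods as extreme as (i.e. no larger than) the value
    observed for the shifted data."""
    mc = list(mc_loglikes)
    return sum(1 for ll in mc if ll <= data_loglike) / len(mc)

# Reject the null hypothesis at the 95% confidence level when p < 0.05.
p = p_value(-100.0, [-90.0, -95.0, -101.0, -99.0])
```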
Hadronic models in the UHECR energy regime are based upon measurements made in accelerators. The cross section, multiplicity, and elasticity of the primary particle interaction are fundamental parameters used by these models to which UHECR X_max is particularly sensitive. These parameters are measured at relatively low energies ( TeV corresponds to about eV in the lab frame), and need to be extrapolated up to eV to fully describe the physics of the highest energy cosmic rays observed. Abbasi and Thomson have examined the uncertainty in X_max in several different popular hadronic models introduced by extrapolating these parameters. The estimated lower limit on the uncertainty in X_max from the extrapolation was found to be g/cm² at eV and g/cm² at eV (Abbasi & Thomson, 2016). This uncertainty in X_max at eV is about the same as the difference in predicted ⟨X_max⟩ among the deepest model (EPOS LHC) and the shallowest model (QGSJet01c). The shapes of the X_max distributions have a much smaller dependence on hadronic model assumptions. Because of these large uncertainties in the models that we compare our observed X_max to, we simultaneously systematically shift the data and test the shapes of the distributions to measure compatibility between the data and model.
Table 5 shows the results of these tests. For each QGSJet II-04 model tested against the data, the shift which gave the best log likelihood is shown, as well as the p-value for that shift. For QGSJet II-04 protons, in most energy bins the shifts are about the size of, or slightly larger than, the systematic uncertainty of this analysis. The p-values, which measure the agreement of the shapes of the entire distributions after shifting, are all greater than 0.05; we therefore fail to reject protons as being compatible with the data in all energy bins using this test. QGSJet II-04 helium requires shifts smaller than protons and within our quoted systematic uncertainties, but the p-values indicate that once shifting is performed the shapes of the data and Monte Carlo do not agree in the lower energy bins. For those energy bins, the test rejects QGSJet II-04 helium as being compatible with the data after systematic shifting. In the highest energy bins the p-values exceed 0.05 and we fail to reject helium as being compatible with the data. QGSJet II-04 nitrogen requires shifts slightly larger than our systematic uncertainty, and the p-values of the tests reject nitrogen except in the two highest energy bins. Iron requires shifts larger than our systematic uncertainty, and is rejected as being compatible with the data in all but the highest energy bin. Figure 18 visually summarizes the results of the tests and the data in Table 5.
We can understand why the likelihood test finds our data simultaneously compatible with elements of very different masses, such as protons and nitrogen, in the last two energy bins if we consider subfigures (d) and (e). Because low detector exposure leads to very few events collected in these energy bins, the confidence intervals of $\langle X_{\mathrm{max}} \rangle$ and $\sigma(X_{\mathrm{max}})$ of the Monte Carlo overlap among the different elements. Given our current exposure, we should not expect to be able to distinguish between protons, helium, and nitrogen in these energy bins. Iron, too, is found compatible with the data in the last energy bin, but a shift larger than our systematic uncertainty is required. The agreement of the maximum likelihood test then comes from the very few events in the data (19 events) and the lack of a deep tail in the data $X_{\mathrm{max}}$ distribution, as seen in subfigure (e). This lack of a deep tail allows the shapes of the data and Monte Carlo distributions to resemble each other after a sufficient amount of shifting.
A previous $X_{\mathrm{max}}$ analysis, using five years of data analyzed by the Middle Drum FD, also found compatibility of the data with protons and incompatibility with iron. In that analysis, the test was applied to QGSJet II-03 protons and iron in three energy ranges; for each, the p-values of the tests rejected iron, but not protons (Abbasi et al., 2014).
Telescope Array has completed analyzing 8.5 years of data collected in hybrid mode using events observed simultaneously by the surface detector array and the Black Rock and Long Ridge fluorescence detectors. These data comprise 3330 events after reconstruction and quality cuts, and were used to analyze the depth of shower maximum ($X_{\mathrm{max}}$). Good operation of the detector was verified using an extensive Monte Carlo suite with showers pre-generated by CORSIKA. This Monte Carlo allows us to verify that we understand the detector with a high degree of confidence, and also to compare the observed $X_{\mathrm{max}}$ distributions with CORSIKA models of four different single-element primaries: protons, helium, nitrogen, and iron, all generated using the QGSJet II-04 hadronic model. The data can be compared to the Monte Carlo by the traditional method of comparing the first and second moments ($\langle X_{\mathrm{max}} \rangle$ and $\sigma(X_{\mathrm{max}})$) of the observed distributions to those of the Monte Carlo. This method may be overly simplistic and misleading, especially for energy bins with low exposure, which can change the shapes of the observed distributions. We have presented a new way to visualize $\langle X_{\mathrm{max}} \rangle$ and $\sigma(X_{\mathrm{max}})$ by plotting their joint distributions in the data, together with the confidence intervals expected from the Monte Carlo. We have extended the analysis of $X_{\mathrm{max}}$ by using an unbinned maximum likelihood method, which allows us to measure the compatibility of the data and Monte Carlo using the entire distributions. This is especially important for distributions that potentially exhibit a high degree of skew, such as those of light elements with deeply penetrating tails. Using this test we can empirically reject certain chemical elements, at a given confidence level, as being compatible with our data.
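The point about skewed distributions can be made concrete with a toy example, unrelated to the actual data: two distributions tuned to the same first and second moments can still differ sharply in their deep tails, which is exactly the region where a light-element $X_{\mathrm{max}}$ distribution carries information that the moments alone miss.

```python
# Toy illustration: a symmetric distribution and a skewed one (standing in
# for a light element's deeply penetrating tail) are standardized to the same
# mean and sigma, yet their tail fractions beyond +3 sigma differ by roughly
# an order of magnitude.
import numpy as np

rng = np.random.default_rng(2)
sym = rng.normal(0.0, 1.0, 200_000)               # symmetric toy sample
skewed = rng.exponential(1.0, 200_000)            # skewed toy sample
skewed = (skewed - skewed.mean()) / skewed.std()  # force mean=0, sigma=1

# First and second moments now agree...
print(np.allclose([sym.mean(), sym.std()],
                  [skewed.mean(), skewed.std()], atol=0.01))
# ...but the tail fractions beyond +3 sigma do not.
print((sym > 3).mean(), (skewed > 3).mean())
```

A moments-only comparison would call these two samples identical; an unbinned test over the full distributions separates them easily.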
After allowing for systematic shifting of the data distributions and performing the likelihood test on the data and Monte Carlo distributions of four pure chemical species, we fail to reject QGSJet II-04 protons as being compatible with the data in all energy bins at the 95% confidence level. QGSJet II-04 helium, nitrogen, and iron are each rejected as being compatible with the data in all but the highest energy bins. We have demonstrated that at the highest energies TA has insufficient exposure to accurately distinguish between different individual elements: energy bins in this range have poor statistics due to low exposure, and agreement of several of the models with the data is found. However, this agreement is physically unrealistic in the case of iron because of the large shifts required, in excess of our systematic uncertainty.
The Telescope Array experiment is supported by the Japan Society for the Promotion of Science through Grants-in-Aid for Scientific Research on Specially Promoted Research (21000002) “Extreme Phenomena in the Universe Explored by Highest Energy Cosmic Rays” and for Scientific Research (19104006), and the Inter-University Research Program of the Institute for Cosmic Ray Research; by the U.S. National Science Foundation awards PHY-0601915, PHY-1404495, PHY-1404502, and PHY-1607727; by the National Research Foundation of Korea (2015R1A2A1A01006870, 2015R1A2A1A15055344, 2016R1A5A1013277, 2007-0093860, 2016R1A2B4014967); by the Russian Academy of Sciences, RFBR grant 16-02-00962a (INR), IISN project No. 4.4502.13, and Belgian Science Policy under IUAP VII/37 (ULB). The foundations of Dr. Ezekiel R. and Edna Wattis Dumke, Willard L. Eccles, and George S. and Dolores Doré Eccles all helped with generous donations. The State of Utah supported the project through its Economic Development Board, and the University of Utah through the Office of the Vice President for Research. The experimental site became available through the cooperation of the Utah School and Institutional Trust Lands Administration (SITLA), U.S. Bureau of Land Management (BLM), and the U.S. Air Force. We appreciate the assistance of the State of Utah and Fillmore offices of the BLM in crafting the Plan of Development for the site. Patrick Shea assisted the collaboration with valuable advice on a variety of topics. The people and the officials of Millard County, Utah have been a source of steadfast and warm support for our work which we greatly appreciate. We are indebted to the Millard County Road Department for their efforts to maintain and clear the roads which get us to our sites. We gratefully acknowledge the contribution from the technical staffs of our home institutions. An allocation of computer time from the Center for High Performance Computing at the University of Utah is gratefully acknowledged.
- Aab et al. (2014) Aab, A., et al. 2014, Phys. Rev., D90, 122005
- Aartsen et al. (2013) Aartsen, M. G., et al. 2013, Phys. Rev., D88, 042004
- Abbasi et al. (2008a) Abbasi, R., et al. 2008a, Astropart. Phys., 29, 77
- Abbasi & Thomson (2016) Abbasi, R. U., & Thomson, G. B. 2016, arXiv:1605.05241
- Abbasi et al. (2008b) Abbasi, R. U., et al. 2008b, Phys. Rev. Lett., 100, 101101
- Abbasi et al. (2010) —. 2010, Phys. Rev. Lett., 104, 161101
- Abbasi et al. (2014) —. 2014, Astropart. Phys., 64, 49
- Abraham et al. (2008) Abraham, J., et al. 2008, Phys. Rev. Lett., 101, 061101
- Abu-Zayyad et al. (2000) Abu-Zayyad, T., et al. 2000, Nucl. Instrum. Meth., A450, 253
- Abu-Zayyad et al. (2001) —. 2001, Astrophys. J., 557, 686
- Abu-Zayyad et al. (2013a) —. 2013a, Astrophys. J., 768, L1
- Abu-Zayyad et al. (2013b) —. 2013b, Astropart. Phys., 48, 16
- Abu-Zayyad et al. (2013c) —. 2013c, Nucl. Instrum. Meth., A689, 87
- Allard et al. (2008) Allard, D., Busca, N. G., Decerprit, G., Olinto, A. V., & Parizot, E. 2008, JCAP, 0810, 033
- Aloisio et al. (2007) Aloisio, R., Berezinsky, V., Blasi, P., et al. 2007, Astropart. Phys., 27, 76
- Aloisio et al. (2012) Aloisio, R., Berezinsky, V., & Gazizov, A. 2012, J. Phys. Conf. Ser., 337, 012042
- Amenomori (2008) Amenomori, M. 2008, Adv. Space Res., 42, 467
- Ave et al. (2007) Ave, M., et al. 2007, Astropart. Phys., 28, 41
- Ave et al. (2013) —. 2013, Astropart. Phys., 42, 90
- Barcikowski (2011) Barcikowski, E. L. 2011, PhD thesis, Utah U.
- Bird et al. (1994) Bird, D. J., et al. 1994, Astrophys. J., 424, 491
- Boyer et al. (2002) Boyer, J. H., Knapp, B. C., Mannel, E. J., & Seman, M. 2002, Nucl. Instrum. Meth., A482, 457
- Carlson & Oppenheimer (1937) Carlson, J. F., & Oppenheimer, J. R. 1937, Phys. Rev., 51, 220
- Engel et al. (2011) Engel, R., Heck, D., & Pierog, T. 2011, Ann. Rev. Nucl. Part. Sci., 61, 467
- Fenu (2017) Fenu, F. 2017, in The Pierre Auger Observatory: Contributions to the 35th International Cosmic Ray Conference (ICRC 2017), 9–16
- Fowler et al. (2001) Fowler, J. W., Fortson, L. F., Jui, C. C. H., et al. 2001, Astropart. Phys., 15, 49
- Gaisser & Hillas (1977) Gaisser, T. K., & Hillas, A. M. 1977, International Cosmic Ray Conference, 8, 353
- Giller et al. (2003) Giller, M., Kacperczyk, A., Malinowski, J., et al. 2003, in Proceedings, 28th International Cosmic Ray Conference (ICRC 2003): Tsukuba, Japan, July 31-August 7, 2003, 619–622
- Greisen (1966) Greisen, K. 1966, Phys. Rev. Lett., 16, 748
- Heck et al. (1998) Heck, D., Knapp, J., Capdevielle, J. N., Schatz, G., & Thouw, T. 1998, CORSIKA: a Monte Carlo code to simulate extensive air showers.
- Hess & Anderson (2013) Hess, V. F., & Anderson, C. D. 2013, in Physics 1922–1941 (Elsevier), 351–377
- Hoerandel (2004) Hoerandel, J. R. 2004, Astropart. Phys., 21, 241
- Ivanov (2012) Ivanov, D. 2012, PhD thesis, Rutgers U., Piscataway
- Ivanov (2016) —. 2016, PoS, ICRC2015, 349
- Kakimoto et al. (1996) Kakimoto, F., Loh, E. C., Nagano, M., et al. 1996, Nucl. Instrum. Meth., A372, 527
- Kampert & Unger (2012) Kampert, K.-H., & Unger, M. 2012, Astropart. Phys., 35, 660
- Knurenko & Petrov (2015) Knurenko, S., & Petrov, I. 2015, J. Phys. Conf. Ser., 632, 012098
- Knurenko et al. (2015) Knurenko, S., Petrov, I., Petrov, Z., & Sleptsov, I. 2015, EPJ Web Conf., 99, 04001
- Laboratory (2004) Laboratory, N.-A. R. 2004, Global Data Assimilation System (GDAS1) Archive Information
- Linsley et al. (1961) Linsley, J., Scarsi, L., & Rossi, B. 1961, Phys. Rev. Lett., 6, 485
- Nagano et al. (1992) Nagano, M., Teshima, M., Matsubara, Y., et al. 1992, J. Phys., G18, 423
- Ohoka et al. (1997) Ohoka, H., Yoshida, S., & Takeda, M. 1997, Nucl. Instrum. Meth., A385, 268
- Ostapchenko (2007) Ostapchenko, S. 2007, AIP Conf. Proc., 928, 118
- Ostapchenko (2011) —. 2011, Phys. Rev., D83, 014018
- Pierog et al. (2015) Pierog, T., Karpenko, I., Katzy, J. M., Yatsenko, E., & Werner, K. 2015, Phys. Rev., C92, 034906
- Prosin et al. (2015) Prosin, V. V., et al. 2015, EPJ Web Conf., 99, 04002
- Song (2004) Song, C. 2004, Astropart. Phys., 22, 151
- Stratton (2012) Stratton, S. R. 2012, PhD thesis, Rutgers U., Piscataway, doi:10.7282/T3154FSB
- Tameda et al. (2009) Tameda, Y., et al. 2009, Nucl. Instrum. Meth., A609, 227
- Tokuno et al. (2012) Tokuno, H., et al. 2012, Nucl. Instrum. Meth., A676, 54
- Tomida et al. (2013) Tomida, T., et al. 2013, EPJ Web Conf., 53, 10003
- Zatsepin & Kuzmin (1966) Zatsepin, G. T., & Kuzmin, V. A. 1966, JETP Lett., 4, 78, [Pisma Zh. Eksp. Teor. Fiz.4,114(1966)]