Search for low-mass dark matter with CDMSlite using a profile likelihood fit
The Cryogenic Dark Matter Search low ionization threshold experiment (CDMSlite) searches for interactions between dark matter particles and germanium nuclei in cryogenic detectors. The experiment has achieved a low energy threshold with improved sensitivity to low-mass ($\lesssim 10$ GeV/$c^2$) dark matter particles. We present an analysis of the final CDMSlite data set, taken with a different detector than was used for the two previous CDMSlite data sets. This analysis includes a data “salting” method to protect against bias, improved noise discrimination, background modeling, and the use of profile likelihood methods to search for a dark matter signal in the presence of backgrounds. We achieve an energy threshold of 70 eV and significantly improve the sensitivity for dark matter particles with masses between 2.5 and 10 GeV/$c^2$ compared to previous analyses. We set an upper limit on the dark matter–nucleon scattering cross section in germanium of $5.4\times10^{-42}$ cm$^2$ at 5 GeV/$c^2$, a factor of 2.5 improvement over the previous CDMSlite result.
Multiple astronomical and cosmological observations point to the existence of dark matter (DM), indicating that approximately 25% of the universe consists of a non-luminous, non-baryonic form of matter of unknown composition Tanabashi et al. (2018); Ade et al. (2016).
A class of hypothetical particles called Weakly Interacting Massive Particles (WIMPs) Steigman and Turner (1985) is consistent with the observational evidence and would be a cold (non-relativistic) relic from the early universe that may be directly detectable by terrestrial detectors Jungman et al. (1996).
Supersymmetric theories naturally predict the existence of WIMPs with masses at the electroweak scale, but with no evidence of such particles at the LHC Aad et al. (2015); Khachatryan et al. (2015), direct-detection DM experiments have begun to consider low-mass alternatives Aguilar-Arevalo et al. (2016); Petricca et al. (2017); Agnese et al. (2018a); Agnes et al. (2018). Theories that predict DM particles with masses below 10 GeV/$c^2$ include, but are not limited to, asymmetric DM, which relates the DM problem to the baryon asymmetry of the universe Petraki and Volkas (2013); Zurek (2014), and hidden sector scenarios in which DM couples to Standard Model particles through new force mediators like the dark photon Hooper et al. (2012); Foot (2013).
In CDMSlite, cryogenic germanium detectors developed by the SuperCDMS Collaboration were operated at high voltage to amplify the signal from ionization by particle interactions via the Neganov-Trofimov-Luke (NTL) effect Neganov and Trofimov (1985); Luke (1988). This amplification provides sub-keV detection thresholds for nuclear recoils, enabling searches for low-mass DM particles Agnese et al. (2014a, 2016, 2018a). This paper presents results from the third and final run of CDMSlite, and represents the first blind analysis of data taken in this mode. We employ new rejection techniques to effectively remove instrumental backgrounds that limited previous analyses, while the remaining dominant background contributions are modeled within a profile likelihood fit.
Section II describes the operation and calibration of CDMSlite detectors. Section III presents a method of data blinding based upon the addition of artificial events to the data, while Sec. IV describes how instrumental backgrounds are effectively removed. Section V describes the definition of a fiducial volume (using the radial parameter discussed in Sec. II) to eliminate the contribution of events with misreconstructed energies at high detector radii. Sections VI and VII discuss models for the energy spectra of DM-signal and background events, which are used as inputs to a profile likelihood fit to search for a DM signal in Sec. VIII. We find no evidence for such a signal and present improved upper limits on the spin-independent DM-nucleon cross section in Sec. IX.
II. Description of the Experiment
The SuperCDMS Soudan experiment was located at the Soudan Underground Laboratory in northern Minnesota. The experiment operated 15 germanium interleaved Z-sensitive Ionization and Phonon (iZIP) detectors, arranged in 5 stacks (“towers”) and read out with CDMS II electronics Akerib et al. (2004, 2005); Ahmed et al. (2009, 2009). The iZIPs—cylindrical Ge single crystals with a diameter of 76 mm, a height of 25 mm, and a resulting mass of 600 g—were equipped with phonon sensors composed of tungsten transition edge sensors (TESs) and aluminum fins for phonon collection, patterned on their top and bottom faces. The operational temperature was 50 mK. Interleaved with the phonon sensors were charge-collecting electrodes with a bias voltage applied between them (+2 V on one face and -2 V on the other) to separate and collect the electrons and holes liberated in particle interactions. Nuclear recoils (NRs) produce fewer electron-hole pairs for a given recoil energy than electron recoils (ERs), allowing for an event-by-event discrimination between these two types of interactions Agnese et al. (2013).
In 2012 we explored the operation of an iZIP detector in an alternative configuration Agnese et al. (2014a) in which a higher bias across the detector amplifies the ionization signal by producing NTL phonons. As charge carriers drift across the crystal due to the electric field, they quickly reach a terminal velocity and the additional work done on the carriers is transferred to the crystal lattice in the form of NTL phonons. The energy contribution from NTL phonons is

$E_{\mathrm{NTL}} = N_{eh}\, e\, \Delta V\,,$  (1)

where $e$ is the absolute value of the electric charge, $\Delta V$ the voltage drop experienced by a charge pair, and $N_{eh}$ is the number of electron-hole pairs produced. The total phonon energy $E_t$ generated by a recoiling particle is the sum of the initial recoil energy $E_r$ and the energy of NTL phonons:

$E_t = E_r + E_{\mathrm{NTL}}\,.$  (2)
In germanium, the average energy required to produce one electron-hole pair for an electron recoil is $\varepsilon_\gamma \simeq 3$ eV Antman et al. (1966), giving $N_{eh} = E_r/\varepsilon_\gamma$. Therefore, a 75 V potential difference across the detector amplifies the ionization signal by a factor of $1 + 75\,\mathrm{V}/3\,\mathrm{V} = 26$ for an electron recoil.
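The NTL amplification arithmetic of Eqs. 1 and 2 can be sketched as follows (a minimal illustration, not analysis code; the 3 eV pair-creation energy and 75 V bias are taken from the text):

```python
# NTL amplification sketch: E_t = E_r + N_eh * e * dV (Eqs. 1-2).
EPS_GAMMA_EV = 3.0   # average energy per electron-hole pair for ERs in Ge (eV)
BIAS_V = 75.0        # potential difference across the detector (V)

def total_phonon_energy_ev(recoil_ev, ionization_yield=1.0):
    # N_eh = Y * E_r / eps_gamma; each pair contributes e * dV = 75 eV of NTL phonons
    n_eh = ionization_yield * recoil_ev / EPS_GAMMA_EV
    return recoil_ev + n_eh * BIAS_V

# For an electron recoil (Y = 1) the total phonon signal is amplified ~26x:
print(total_phonon_energy_ev(1000.0) / 1000.0)  # ~26 for an electron recoil
```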
The hardware trigger, based on the total phonon signal, was tuned on the CDMSlite detector to achieve as low a threshold as possible while also maintaining a manageable trigger rate. When a trigger occurs, the data acquisition electronics record the phonon signals as waveforms, digitized at 625 kHz and lasting 6.6 ms, from all active detectors in the array. The signals from the charge-collecting electrodes were also read out as waveforms for each trigger; however, this information was used minimally in this analysis.
II.1 Energy Scale
“Electron equivalent” energy units (keV$_{ee}$) are the most convenient for analysis of data from the CDMSlite runs, because the observed backgrounds consist primarily of ER events. The electron equivalent energy is the electron recoil energy that would produce the same amount of phonon energy as is observed in the detector.
We calibrate the energy scale using a $^{252}$Cf neutron source. Activation of $^{70}$Ge by neutron capture produces $^{71}$Ge, which decays by electron capture with a 11.43 day half-life Hampel and Remsberg (1985). These decays produce peaks at the K-, L-, and M-shell binding energies of $^{71}$Ga of 10.37, 1.30, and 0.16 keV, respectively Bearden and Burr (1967). The prominent K-shell peak is used to calibrate the energy scale to keV$_{ee}$ and correct for any time variation in the detector response, and the L- and M-shell peaks are used to check the resulting energy scale for linearity in the energy region of interest.
NRs produce fewer charge pairs and therefore a smaller ionization signal than ERs of the same recoil energy, and we parametrize the smaller ionization signal by the energy-dependent ionization yield $Y(E_r)$. The number of electron-hole pairs is then given by $N_{eh} = Y(E_r)\,E_r/\varepsilon_\gamma$. The total measured energy in terms of the event recoil energy and ionization yield is:

$E_t = E_r\left(1 + Y(E_r)\,\dfrac{e\Delta V}{\varepsilon_\gamma}\right).$  (3)
For ERs, $Y \equiv 1$ by definition. To convert from an electron equivalent energy $E_{ee}$ to a nuclear-recoil equivalent energy $E_{nr}$ (denoted with units of keV$_{nr}$), we correct Eq. 3 for the difference in yield between nuclear and electron recoils, while assuming that each electron-hole pair experiences the full applied bias $V_b$:

$E_{nr} = E_{ee}\,\dfrac{1 + eV_b/\varepsilon_\gamma}{1 + Y(E_{nr})\,eV_b/\varepsilon_\gamma}\,.$  (4)
We use the Lindhard model Lindhard et al. (1963a); *Lindhard1963; *Lindhard1968 for the yield as a function of nuclear-recoil energy:

$Y(E_{nr}) = \dfrac{k\,g(\varepsilon)}{1 + k\,g(\varepsilon)}\,,$  (5)

where $g(\varepsilon) = 3\varepsilon^{0.15} + 0.7\varepsilon^{0.6} + \varepsilon$, $\varepsilon = 11.5\,E_{nr}[\mathrm{keV}]\,Z^{-7/3}$, and $Z$ is the atomic number of the detector material. Measurements of $k$ in germanium are generally consistent with a small range of values approximately centered on the Lindhard model prediction of $k = 0.157$ Jones and Kraner (1971); *Jones1975; Sattler et al. (1966); Messous et al. (1995); Barbeau et al. (2007). We account for the spread in experimental measurements as a systematic uncertainty on $k$, as discussed in Sec. VIII.2.
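A sketch of the Lindhard yield and the resulting keV$_{ee}$-to-keV$_{nr}$ conversion of Eq. 4, using $k = 0.157$ and the 75 V bias from the text (the fixed-point inversion is our illustrative choice, since $E_{nr}$ appears on both sides of Eq. 4):

```python
import numpy as np

Z_GE = 32                      # atomic number of germanium
K = 0.157                      # Lindhard prediction for Ge
EPS_GAMMA_KEV = 3.0e-3         # eps_gamma = 3 eV per pair
E_NTL_PER_PAIR_KEV = 75.0e-3   # e * V_b at a 75 V bias

def lindhard_yield(e_nr_kev):
    """Ionization yield Y(E_nr) from the Lindhard model (Eq. 5)."""
    eps = 11.5 * e_nr_kev * Z_GE ** (-7.0 / 3.0)
    g = 3.0 * eps ** 0.15 + 0.7 * eps ** 0.6 + eps
    return K * g / (1.0 + K * g)

def kevee_to_kevnr(e_ee_kev, n_iter=50):
    """Invert Eq. 4 (E_ee -> E_nr) by fixed-point iteration, since Y depends on E_nr."""
    ratio = E_NTL_PER_PAIR_KEV / EPS_GAMMA_KEV   # e*V_b / eps_gamma = 25
    e_nr = e_ee_kev
    for _ in range(n_iter):
        e_nr = e_ee_kev * (1.0 + ratio) / (1.0 + lindhard_yield(e_nr) * ratio)
    return e_nr
```

At 1 keV$_{ee}$ this gives a yield of roughly 0.2 and a nuclear-recoil energy of a few keV$_{nr}$, the expected order of magnitude for Ge at this bias.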
II.2 Operating Conditions
For CDMSlite Run 3, we operated a single detector in CDMSlite mode from February to May 2015 for a total livetime of 60.9 days. The top detector in the second tower was selected based on the two qualities that contributed most to lower analysis thresholds. First, this detector exhibited stable operation for a range of applied bias voltage up to nearly 75 V. Second, because of its reduced susceptibility to vibrational noise, this detector’s phonon energy resolution was among the best in the detector array. While a different detector was selected for CDMSlite Run 1 and Run 2 based on the same two metrics, the decision to switch detectors for Run 3 was also intended to demonstrate reproducibility of the CDMSlite operating technique across multiple detectors.
We applied a 75 V bias to one side of the detector with the other side grounded, following the same biasing scheme used in the previous CDMSlite runs. We also adopted the Run 2 “pre-biasing” procedure in which the detector bias was temporarily increased (to 85 V for Run 3) prior to the start of each data series Agnese et al. (2018a).
The voltage at the detector differed from the applied voltage because of a parasitic resistance that caused a significant voltage drop across a bias resistor ($R_b$, in the M$\Omega$ range) upstream of the detector. The parasitic resistance caused a current draw from the power supply, $I_p$, which we continuously measured in order to monitor the detector voltage:

$V_{\mathrm{det}} = V_{\mathrm{applied}} - I_p R_b\,.$  (6)
We found that the parasitic resistance was correlated with the temperature of the room that housed the electronics. In April 2015 we adjusted the environmental conditions of this room to increase the parasitic resistance, thus lowering the leakage current and stabilizing the detector voltage at 75 V. Prior to April 2015, the detector voltage drifted between 50 V and 75 V. This resulted in 30% variations in the total phonon energy scale, shown in Fig. 1. We correct for this variation in the analysis, accounting for the small difference in the correction factor for nuclear versus electron recoils.
Following the stabilization of the detector voltage at 75 V, the phonon noise performance worsened, indicating that the optimal operating voltage was slightly less than 75 V. Based on these two distinct operating conditions—bias voltage stability and noise performance—we divided the Run 3 data set into two periods: Period 1 and Period 2 (before and after April 1, respectively). This division facilitates optimization of certain stages of the analysis, which were performed separately for the two periods.
Additionally, the base temperature varied from 45 to 57 mK over the course of the run. We applied a temperature-dependent empirical linear correction of up to 5% to the energy scale. This correction was based on the positive correlation observed between the reconstructed energy of the K-shell events and the recorded base temperature, and is shown in Fig. 2.
II.3 Optimal Filter Energy and Position Reconstruction
Because CDMSlite detectors have non-uniform electric fields, the NTL amplification and the reconstructed recoil energy vary with the location at which an event takes place inside the detector. For most events, $\Delta V$ in Eq. 1 is equal to the full potential difference between the detector faces, resulting in maximal NTL amplification. However, as shown in Fig. 3, the effective potential near the detector sidewall can be smaller; the voltage drop experienced by an electron-hole pair (and thus the NTL amplification) can be reduced such that the reconstructed energy of some high-radius events is significantly lower.
While we cannot reconstruct the exact position of an event and thus correct for the specific reduced NTL amplification, we can calculate a parameter that correlates with the radial position of an event and use it to identify events at large radii. We employ optimal filter algorithms Anderson and Moore (1979) to reconstruct the energy and position of events. Optimal filters weight frequency components of the raw pulses to maximize the signal-to-noise ratio when fitting for the amplitude of a pulse, and the standard optimal filter algorithm assumes constant pulse shapes.
CDMSlite phonon pulses are slightly variable in shape, with differing proportions of “slow” and “fast” components from event to event. The former provides a measure of the total event energy, while the latter is sensitive to the event position—events occurring directly underneath a phonon channel cause a faster pulse rise time in that channel than in other channels. We capture both types of information with a two-template optimal filter algorithm (2TOF) Agnese et al. (2018a). The first template is constructed from the average of many pulses, and then a second template is constructed from the average shape of the residual pulses (relative to the first template). These correspond to the “slow” and “fast” templates, respectively.
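The amplitude fit at the heart of the 2TOF can be sketched as a noise-weighted least-squares problem in the frequency domain. This is a simplified illustration that fits both template amplitudes simultaneously and omits the time-shift search of a full optimal filter:

```python
import numpy as np

def two_template_of(data, slow_template, fast_template, noise_psd):
    """Fit data = a_slow*slow + a_fast*fast by noise-weighted least squares
    in the frequency domain (sketch of a two-template optimal filter)."""
    d, s, f = (np.fft.rfft(x) for x in (data, slow_template, fast_template))
    w = 1.0 / noise_psd                      # inverse noise weighting
    # 2x2 normal equations for the two amplitudes
    m = np.array([[np.sum(w * np.abs(s) ** 2), np.sum(w * np.conj(s) * f)],
                  [np.sum(w * np.conj(f) * s), np.sum(w * np.abs(f) ** 2)]])
    b = np.array([np.sum(w * np.conj(s) * d), np.sum(w * np.conj(f) * d)])
    return np.real(np.linalg.solve(m, b))    # (a_slow, a_fast)
```

With a flat noise PSD and noiseless input built from the two templates, the fit recovers the input amplitudes exactly; in practice the weighting suppresses the noisy frequency bins.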
The definition of the radial parameter, which we denote by $\xi$, remains the same as was used in Run 2 Agnese et al. (2018a, 2016). It takes advantage of the phonon channel layout with an outer annulus and three inner wedge-shaped channels, comparing the amplitude of the fast template and the pulse start time between the outer and the inner channels. The $\xi$ parameter identifies higher radius events that can experience reduced NTL gain and is used in Sec. V for fiducialization.
In addition to defining a radial parameter, the 2TOF is used to improve the event energy reconstruction. For each event, the best-fit amplitude from the fast template is used to apply a correction of up to 5% to the leading order energy estimation, which is derived from the best-fit amplitude of the slow template using a separate optimal filter algorithm that specifically deweights the high-frequency components of the phonon pulses. We use the same correction procedure as that described in Sec. II.C of Ref. Agnese et al. (2018a).
II.4 Energy Resolution Model
We require a good model of the energy resolution in order to calculate the expected energy spectra for signal and backgrounds. We model the total CDMSlite energy resolution as in Ref. Agnese et al. (2018a):

$\sigma_T(E) = \sqrt{\sigma_E^2 + \sigma_F^2(E) + \sigma_{PD}^2(E)}\,.$  (7)

The energy-independent term $\sigma_E$ describes the baseline resolution and accounts for electronics noise and any drift in the operating conditions. The Fano term $\sigma_F$ accounts for fluctuations in the number of generated charges Fano (1947) and is proportional to $\sqrt{E}$. The $\sigma_{PD}$ term reflects the position dependence of the event within the detector due to the electric field, TES response, etc., and is proportional to $E$. Separating out the energy dependence, we end up with

$\sigma_T(E) = \sqrt{\sigma_E^2 + BE + (AE)^2}\,,$  (8)

with the three model parameters $\sigma_E$, $B$, and $A$.
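The resolution model of Eq. 8 is straightforward to evaluate; the parameter values below are placeholders for illustration, not the fitted Run 3 results:

```python
import numpy as np

def sigma_total(e_kev, sigma_e, b, a):
    """Energy resolution model of Eq. 8:
    sigma_T(E) = sqrt(sigma_E^2 + B*E + (A*E)^2)."""
    return np.sqrt(sigma_e ** 2 + b * e_kev + (a * e_kev) ** 2)

# Made-up parameters: 10 eV baseline plus Fano-like and
# position-dependence terms that grow with energy.
res = sigma_total(np.array([0.0, 1.0, 10.0]), 0.010, 1e-4, 5e-3)
```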
We use several measurements to determine the resolution model for Run 3. We use randomly triggered events to determine the zero-energy noise distribution. Additionally we use the widths of the K-, L-, and M-shell Ge activation peaks (see Sec. II.1) to determine the energy dependence of the resolution. We fit these peaks with a combination of a Gaussian and linear background model in order to determine the width of the peaks.
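The Gaussian-plus-linear peak fits can be illustrated on toy data. The profiled solve below (a linear fit for amplitude and background at each trial width) is our simplification; the real analysis fit all parameters simultaneously:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy K-shell peak at 10.37 keV on a flat background; all counts simulated.
energies = np.concatenate([rng.normal(10.37, 0.05, 2000),
                           rng.uniform(9.8, 10.9, 500)])
counts, edges = np.histogram(energies, bins=60, range=(9.8, 10.9))
centers = 0.5 * (edges[:-1] + edges[1:])

def fit_width(centers, counts, mean=10.37):
    """Scan the Gaussian width; at each value solve linearly for
    (amplitude, slope, offset) and keep the width with the smallest residual."""
    best = (np.inf, None)
    for sigma in np.linspace(0.02, 0.12, 101):
        gauss = np.exp(-0.5 * ((centers - mean) / sigma) ** 2)
        design = np.column_stack([gauss, centers, np.ones_like(centers)])
        coef, *_ = np.linalg.lstsq(design, counts, rcond=None)
        resid = np.sum((counts - design @ coef) ** 2)
        if resid < best[0]:
            best = (resid, sigma)
    return best[1]
```

On this toy sample the recovered width lands near the 0.05 keV used to generate it.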
Because the zero-energy baseline resolution varies with the applied bias voltage and with environmental conditions, all of which changed between Period 1 and Period 2, we calculate separate livetime-weighted average resolutions for each period. These are given in Table 1. The measured widths of the K-, L-, and M-shell peaks are consistent between Period 1 and Period 2, and so common values are used for both periods.
Table 2 gives the best-fit parameter values for the model of Eq. 8. The coefficient $B$ is consistent between the periods, but is larger than the value predicted from measurements of the Fano factor Antman et al. (1966); Bilger (1967). The values of $A$ also agree within uncertainties between the two periods.
We apply this energy-dependent resolution model when calculating the expected energy distribution for the background and DM signal components. We propagate uncertainties in the model parameters as systematic uncertainties in the profile likelihood fit of Sec. VIII.
III. Blinding Strategy
To avoid bias during the analysis, we adopted a blinding scheme to prevent analyzers from tuning the analysis to reach a desired result. Because instrumental noise is a significant and time-varying source of events, it is desirable to be able to see all events at each stage of the analysis. Therefore, rather than hiding events in the signal region, we implemented data “salting” in which a fraction of the DM-search events are replaced with artificial signal-like events. This procedure effectively masks the true amount of DM signal in the data. The number and energy distribution of the artificial events were hidden from the analyzers. All analysis was done on the salted data until the last step, when we removed the added events, replaced the originals, and performed the final fits. We opted to replace events with salt, rather than solely adding salt, to avoid the need to work around the sequential event IDs that are a feature of our data format. This had the added benefit of protecting against a possible tendency to overly tune cuts to the particular events in the salted data, since analyzers knew that some unknown number of events would be added back in after unblinding.
The salting procedure itself was openly developed and known to analyzers in advance, with a number of input parameters randomized and hidden until unblinding. Table 3 lists these parameters, their allowed ranges (known in advance), and their randomly selected values that were hidden until unblinding. The goal of this process was to produce a set of artificial events with an energy spectrum approximating a DM-induced nuclear-recoil distribution, with other event parameters (e.g. goodness-of-fit, radial parameter, etc.) consistent with detector-bulk events uniformly distributed in time and location. We generated artificial events using a pulse simulation similar to that described in Sec. VI.B of Ref. Agnese et al. (2018a) in which fast and slow pulse templates were added to in-run noise samples. The relative amplitudes of the two templates were determined by fitting each channel to calibration data. The salting procedure is described step-by-step below.
Select data events to replace:
First, the number of events to replace with salt was selected randomly, and this number was kept hidden from the analyzers. The goal was to choose a number of events such that, after application of cuts, the remaining salt “signal” is between one and three times the predicted 90% confidence level limit for the analysis. Based on the size of the CDMSlite Run 3 data set and the passage fraction of trial salt data sets generated with Run 2 data and cuts, the range was set to 280–840 events.
The events to be replaced by salt were chosen randomly from the data set with a uniform time distribution. When events were replaced with salt, only the waveform data was changed, without changing any of the metadata such as trigger masks, timestamps, etc. Therefore, only events that generated a trigger on the CDMSlite detector were considered. An additional preselection cut requiring the reconstructed energy to be greater than 3.5 keV removed the majority of cryocooler-induced low-frequency noise events (discussed in Sec. IV.2), which represent the largest source of non-uniformity in the event time distribution. To select an event to replace, a random time was chosen within the CDMSlite Run 3 duration, weighted by the experiment livetime in one-day bins, and the nearest event passing preselection cuts was selected. If the chosen time was between data series, it was discarded. This process was repeated until the chosen number of events was selected.
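The time-weighted selection of events to replace can be sketched as follows (a simplified illustration; the real procedure also discarded draws falling between data series and applied the full preselection cuts):

```python
import numpy as np

rng = np.random.default_rng(1)

def pick_salt_events(event_times_days, livetime_weight_days, n_salt):
    """Draw random times weighted by per-day livetime and mark the nearest
    preselected event for replacement (sketch of the salting selection)."""
    days = np.arange(len(livetime_weight_days))
    p = np.asarray(livetime_weight_days, float)
    p /= p.sum()
    chosen = set()
    while len(chosen) < n_salt:
        day = rng.choice(days, p=p)        # day weighted by livetime
        t = day + rng.random()             # uniform time within that day
        idx = int(np.argmin(np.abs(event_times_days - t)))
        chosen.add(idx)                    # nearest event; repeats are skipped
    return sorted(chosen)
```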
Choose an energy for each salt event:
The event energies were chosen from an exponential distribution with a constant offset:

$f(E) \propto C + e^{-E/E_s}\,,$  (9)

where the exponential component was chosen to roughly approximate a WIMP spectrum and the constant offset was chosen so that salt existed over the analysis energy region of interest. $C$ and $E_s$ are randomized hidden parameters, sampled logarithmically up to 3 for $C$, and from 0.5 to 2 keV for $E_s$. The chosen energy was also restricted from 0.05 to 5 keV to match the expected signal region of interest. The randomly selected parameter values were $C = 0.6967$ and $E_s = 1.299$ keV, resulting in a nearly uniform distribution of salt events over the CDMSlite region of interest.
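Sampling the salt energies from Eq. 9, restricted to the region of interest, can be done by simple rejection sampling (a sketch using the hidden-parameter values listed in Table 3):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_salt_energies(n, c=0.6967, e_s=1.299, lo=0.05, hi=5.0):
    """Rejection-sample E (keV) from f(E) ~ C + exp(-E/E_s) on [lo, hi]."""
    out = []
    fmax = c + np.exp(-lo / e_s)   # f is decreasing, so its maximum is at lo
    while len(out) < n:
        e = rng.uniform(lo, hi)
        if rng.uniform(0.0, fmax) < c + np.exp(-e / e_s):
            out.append(e)
    return np.array(out)
```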
Construct the artificial pulses:
For each salt event, we constructed six artificial pulses (one for each phonon and charge channel). Each pulse, in turn, was constructed from the sum of a baseline noise waveform sampled during data acquisition, and the fast and slow templates used for 2TOF reconstruction.
The fast and slow pulse templates were scaled based on the reconstructed amplitudes of calibration events. For each salt event, we randomly chose a calibration event near the target energy from the set of all calibration events passing basic preselection cuts. These cuts included selection for good values for the bias voltage, current, base temperature, and the phonon pulse shape and noise event cuts described in Sec. IV.1. Initially, we chose only from calibration events within 10 eV of the target energy, after rescaling for corrections from varying parasitic resistance and temperature. If no events were found in this window (excluding events that were already used for salt), the search was repeated with the range extended to 50 and then 100 eV. All reconstructed amplitudes were scaled by the ratio of the target salt energy to the calibration event energy, maintaining the relative amplitudes of the fast and slow templates. In this way we produced salt events mimicking uniform bulk event distributions (e.g. in the radial parameter) without specifically modeling any of those variables.
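Constructing one salt channel trace then amounts to the following (schematic; the real pulses used the 2TOF templates and in-run noise samples):

```python
import numpy as np

def make_salt_pulse(noise, slow_template, fast_template,
                    a_slow, a_fast, e_target, e_calib):
    """Scale a calibration event's best-fit slow/fast amplitudes by the
    target-to-calibration energy ratio and add them to a noise trace,
    preserving the relative fast/slow amplitudes (salting sketch)."""
    scale = e_target / e_calib
    return noise + scale * (a_slow * slow_template + a_fast * fast_template)
```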
Prior to beginning the salting procedure, a volunteer with substantial analysis experience was chosen to inspect the resulting salt. Several distributions were inspected with and without salt highlighted, to ensure that the salt did not significantly deviate from the data. If problems were identified, a fix was implemented, and the entire salting process was restarted. After validation, the pre-release inspector was excluded from any further analysis of the salted data set.
|Parameter|Allowed range|Sampling|Selected value|
|Number of salt events|280–840|linear|393|
|Spectrum constant weight ($C$ in Eq. 9)|–3|log|0.6967|
|Spectrum exponential slope ($E_s$ in Eq. 9)|0.5–2 keV|log|1.299 keV|
IV. Quality Cuts
A set of data quality cuts removes instrumental noise triggers, poorly reconstructed events, and periods when the detector was behaving anomalously. Because this analysis employs profile likelihood methods to search for DM—fitting background and signal models to events that pass all cuts—it is imperative to identify and remove all instrumental noise events whose distributions cannot be modeled with a probability distribution in the fit. We use multivariate techniques in the lowest energy range of the analysis, where the experiment is most sensitive to DM particles with mass $\lesssim 10$ GeV/$c^2$, to reduce instrumental noise leakage to less than 1 event while maintaining as low an energy threshold as possible.
IV.1 Overview of Data Quality Cuts
We accept only events for which the power supply bias voltage was set to 75 V. We also remove any events in time coincidence with the NuMI neutrino beam Michael et al. (2008), including events whose time relative to the NuMI beam cannot be determined due to missing GPS information.
Cuts remove time intervals with anomalously high trigger rates. The “prepulse,” a 1 ms length of waveform data preceding the trigger and read out with each event pulse, is used to monitor noise and reject events with elevated noise. Specifically we remove events in which the variance of the prepulse samples exceeds the average variance for events in the same three-hour data series by more than a fixed threshold. We also designed cuts to remove electronics glitch events, which arise from instrumental noise and are characterized by pulse shapes with faster rise and fall times than signal pulses. (Throughout this section “signal” refers to good events caused by energy deposition in the detector, and “background” refers to instrumental noise events.) These cuts identify glitches that caused multiple detectors to trigger, glitches in the outer charge channel of the detector that could be coincident with phonon triggers, and glitches that are similar to signal events in all but pulse shape. Events that did not cause a trigger on the CDMSlite detector were also removed. These cuts (excluding the pulse-shape glitch cut whose efficiency is considered separately) reduced the Run 3 livetime from 66.9 to 60.9 days.
Due to their low interaction probability, DM particles are expected to interact at most once in our detector array. Therefore events that deposit energy above threshold in multiple detectors are removed. Events coincident with the muon veto surrounding the experiment are also removed. These two cuts have a combined signal efficiency of 98.94 ± 0.01%.
Information from pulse-shape fits can discriminate signal events from instrumental noise events having a characteristic pulse shape. Six different templates are fit to each event using the optimal filter method: a signal template, a square pulse template, an electronics glitch template with fast rise and fall times, and three low-frequency noise (LF noise) templates. The instrumental noise templates were created by identifying instrumental noise events in test processings of the data set, and averaging a collection of the raw pulses from the different instrumental noise sources. Three different LF noise templates were created because the LF noise assumes different pulse shapes, discussed in Sec. IV.2.
The $\chi^2$ values for each fit are used to classify and remove instrumental noise events. First, events with a high $\chi^2$ value for the signal-template fit are irregularly shaped (e.g. from event pileup or pulse saturation) and are removed. To remove particular classes of instrumental noise events, we use the difference of $\chi^2$ values between the different template fits:

$\Delta\chi^2_X = \chi^2_{\mathrm{OF}} - \chi^2_X\,,\quad X \in \{\mathrm{LF},\ \mathrm{glitch},\ \mathrm{square}\}\,,$  (10)

where OF corresponds to the standard signal-template fit, and LF, ‘glitch’ and ‘square’ correspond to the fits using the LF noise, glitch and square pulse templates respectively. Lower values of $\Delta\chi^2_X$ indicate events that have a more signal-like shape.
Glitch and square events have relatively uniform pulse shapes and do not resemble the signal pulse shape. Therefore, a single template for each is sufficient to efficiently discriminate against these event types.
The $\Delta\chi^2_{\mathrm{glitch}}$ and $\Delta\chi^2_{\mathrm{square}}$ distributions for good signal events are parabolic as a function of event energy, and so we use simple two-dimensional cuts defined in the $\Delta\chi^2_{\mathrm{glitch}}$ vs. energy and $\Delta\chi^2_{\mathrm{square}}$ vs. energy planes. The signal efficiency of these cuts is energy dependent and remains above 80% down to the analysis threshold.
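The discriminator of Eq. 10 and an energy-dependent parabolic cut boundary can be sketched as follows (the boundary coefficients here are made up for illustration; the real boundaries were tuned on Run 3 data):

```python
def delta_chi2(chi2_of, chi2_alt):
    """Eq. 10: dchi2_X = chi2_OF - chi2_X; lower values are more
    signal-like, since the signal template then fits relatively better."""
    return chi2_of - chi2_alt

def passes_cut(energy_kev, dchi2, a=40.0, b=5.0):
    """Keep events below a parabolic boundary in the (energy, dchi2) plane
    (hypothetical coefficients a, b)."""
    return dchi2 < -(a * energy_kev ** 2 + b)
```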
IV.2 Low-Frequency Noise Discrimination
Broadband low-frequency ($\lesssim 1$ kHz) noise due to vibrational sources, shown to be primarily generated by the cryocooler that provides supplemental cooling power for the experiment, dominates the trigger rate for the CDMSlite detector Agnese et al. (2018a).
In contrast to the other classes of instrumental noise events, LF noise events have variable pulse shapes and overlap substantially with the bandwidth of signal pulses. LF noise is therefore significantly more challenging to remove while maintaining high signal efficiency. Three LF noise templates are used to help identify a wide variety of LF noise shapes with the $\Delta\chi^2$ parameters. Above reconstructed energies of 250 eV, where the signal-to-noise in the waveforms is sufficiently high that LF noise events can be identified relatively simply, we use cuts on $\Delta\chi^2$ values to remove LF noise events. Because discriminating against LF noise while maintaining signal efficiency is increasingly challenging at low energy, below 250 eV we use boosted decision trees (BDTs) to improve the discrimination power of the LF noise cuts. In particular, we tune two BDT-based cuts using the bifurcated analysis technique Rusu (2003); Nix et al. (2010) to ensure that less than one LF noise event leaks past the cuts.
IV.2.1 Bifurcated Analysis
The bifurcated analysis method uses side band information (i.e. information outside of the signal region) to estimate a certain background’s leakage past a set of quality cuts when no model exists for the background. We use this method for LF noise triggers because models for this background were found to be prone to significant systematic uncertainties.
The number of LF noise events leaking past a set of cuts is given by:

$N_{\mathrm{leak}} = f \cdot N_{\mathrm{LF}}\,,$  (11)

where $f$ is the passage fraction of the cuts and $N_{\mathrm{LF}}$ is the number of LF noise events. While both $f$ and $N_{\mathrm{LF}}$ are unknown, they can be estimated if there exist two uncorrelated sets of cuts that are both sensitive to LF noise events. Labeling the uncorrelated cuts A and B, denoting their known signal efficiencies as $\varepsilon_A$ and $\varepsilon_B$, denoting the unknown leakage fractions of LF noise events past the cuts as $f_A$ and $f_B$, and denoting the number of good (not LF noise) events as $N_S$, the numbers of events passing the individual and combined cuts are:

$N_{A\bar{B}} = \varepsilon_A(1-\varepsilon_B)\,N_S + f_A(1-f_B)\,N_{\mathrm{LF}}$
$N_{\bar{A}B} = (1-\varepsilon_A)\,\varepsilon_B\,N_S + (1-f_A)\,f_B\,N_{\mathrm{LF}}$
$N_{\bar{A}\bar{B}} = (1-\varepsilon_A)(1-\varepsilon_B)\,N_S + (1-f_A)(1-f_B)\,N_{\mathrm{LF}}$
$N_{AB} = \varepsilon_A\varepsilon_B\,N_S + f_A f_B\,N_{\mathrm{LF}}$  (12)
where, for example, $N_{A\bar{B}}$ is the number of events that pass cut A but not cut B.
For uncorrelated cuts, the above system of equations can be solved to derive the number of LF noise events leaking past both cuts. For the case of cuts with 100% signal efficiency,

$N_{\mathrm{leak}} = \dfrac{N_{A\bar{B}}\,N_{\bar{A}B}}{N_{\bar{A}\bar{B}}}\,,$  (13)
where side band information is used to estimate leakage into the signal region. We include a small correction to Eq. 13 that accounts for the less-than-100% signal efficiencies of cuts A and B, where the signal efficiencies of the bifurcated cuts are measured using the method discussed in Sec. VI.1.
Two different LF noise cuts were designed that are uncorrelated so that the bifurcated analysis can be applied. These two cuts use three sets of parameters that are sensitive to LF noise in separate ways:
the three $\Delta\chi^2_{\mathrm{LF}}$ parameters from pulse shape fits to the three different LF noise templates.
the cryocooler timing variable, which represents the time since the last cryocooler cycle began. The cycle period is 0.8 seconds, and LF noise causes triggers more frequently in the 0.2 seconds after the start of the cycle. This behavior is similar, though not identical, to that observed for the CDMSlite Run 2 detector Agnese et al. (2018a).
the correlation of the phonon waveforms between the CDMSlite detector and the other detectors in the tower, because the vibrational sources producing LF noise triggers couple to all detectors in a tower.
The first bifurcated cut (cut A) used primarily the $\Delta\chi^2$ parameters to discriminate against LF noise; the second bifurcated cut (cut B) used primarily cryocooler time and cross-detector correlations. A BDT was used to reduce the bifurcated cuts to a single dimension (BDT A and BDT B). Both BDTs were trained using a ‘background’ sample of LF noise (selected by removing events that fail the other quality cuts and removing events that are clearly good phonon pulses) and a ‘signal’ sample of simulated good phonon pulses with noise. Details of the phonon pulse simulation are discussed in Sec. VI.1. For every event a BDT score is generated between $-1$ and $1$, with a larger BDT score corresponding to a more signal-like event.
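The BDT training step can be sketched with a generic gradient-boosting classifier on toy features (the features, sample sizes, and the probability-to-score mapping below are our illustrative choices, not the Run 3 configuration, which used dedicated simulated pulses):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)

# Toy stand-ins for three discriminating features (e.g. dchi2_LF values,
# cryocooler phase, cross-detector correlation); all values simulated.
x_sig = rng.normal(0.0, 1.0, size=(500, 3))   # signal-like sample
x_bkg = rng.normal(2.0, 1.0, size=(500, 3))   # LF-noise-like sample
x = np.vstack([x_sig, x_bkg])
y = np.r_[np.ones(500), np.zeros(500)]

bdt = GradientBoostingClassifier(n_estimators=50, max_depth=3).fit(x, y)

# Map the signal probability onto a score in [-1, 1], larger = more
# signal-like, mimicking the score convention described in the text.
score = 2.0 * bdt.predict_proba(x)[:, 1] - 1.0
```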
The bifurcated analysis was then performed by placing cuts on the BDT A and BDT B scores and calculating the number of LF noise events leaking past the cuts, with cut values chosen such that the total LF noise event leakage is less than one event. Figure 5 shows the signal box (upper right) defined by the bifurcated cuts for Period 1; the estimated LF noise leakage is below one event for this period, as it is for the similar analysis of the Period 2 data.
The choice of the cut location also assured minimal correlation between cuts. This was verified by the method of “box relaxation.” As a bifurcated cut is loosened, new events will enter into the signal box and the bifurcated leakage estimate will change. If the cuts are uncorrelated, the bifurcated analysis estimate will grow by the number of new events in the box (to within uncertainties). Because the BDT cut efficiency for signal events is not 100% (see Sec. VI.1), we must correct for the contribution of signal events being added to the box as the cut is relaxed. We verified that the number of events entering the box matched the bifurcated analysis’s prediction to within uncertainties, which is consistent with the cuts being uncorrelated and therefore supporting the validity of the leakage estimates. Figure 6 illustrates this agreement.
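The leakage estimate for two uncorrelated bifurcated cuts can be sketched as below. The estimator is the standard side-band product; the multiplicative treatment of the signal efficiencies of cuts A and B is one simple way to fold in their less-than-100% efficiency, and may differ from the exact correction of Eq. 13. All names are illustrative.

```python
def bifurcated_leakage(n_fail_a_pass_b, n_pass_a_fail_b, n_fail_both,
                       eff_a=1.0, eff_b=1.0):
    """Estimate background leakage into the signal box from side bands.

    For two uncorrelated cuts A and B, the expected number of background
    events passing both cuts is approximately
        N(!A, B) * N(A, !B) / N(!A, !B).
    eff_a and eff_b are the signal efficiencies of the two cuts; here
    they are applied as a simple multiplicative correction (a stand-in
    for the paper's Eq. 13 correction).
    """
    if n_fail_both == 0:
        raise ValueError("empty double-fail side band")
    raw = n_fail_a_pass_b * n_pass_a_fail_b / n_fail_both
    return raw / (eff_a * eff_b)
```

Loosening one cut (box relaxation) adds events to the signal box; if the cuts are uncorrelated, the change in this estimate tracks the number of new events, which is the consistency check described above.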
V Fiducial Volume Selection
A cut on the radial parameter (Sec. II.3) defines a fiducial volume in order to remove events with reduced NTL gain (RNTLs) near the detector side wall. The definition of this cut is improved compared to the CDMSlite Run 2 analysis Agnese et al. (2016, 2018a).
We characterize RNTLs by modeling their distribution as a function of reconstructed energy and the radial parameter. The modeling is done in several steps:
Determine the energy response of the detector, using the NTL gain as a function of position inside the detector (Sec. V.1);
Determine the rates of events that contribute RNTLs into the signal region below 2 keV (Sec. V.2);
Model the distribution of RNTLs in the radial parameter (Sec. V.3);
Model the resolution of the radial parameter as a function of energy and radial position (Sec. V.4);
Construct a Monte Carlo simulation based on these models to determine the expected distribution of RNTLs in the energy–radial-parameter plane, and define a cut in this plane to remove these events (Sec. V.5); and
Extend the cut above 2 keV, where the radial parameter begins to change due to phonon-sensor saturation (Sec. V.6).
v.1 Energy Distribution of RNTLs
We use a smoothed histogram of the effective potential distribution shown in Fig. 3 to determine the energy response of the detector to a homogeneously distributed mono-energetic source of events.
We define the RNTLs to include any event whose recoil location results in a reconstructed recoil energy that differs from the true recoil energy by more than the detector energy resolution. This corresponds to events that see less than 93.3% of the full bias voltage. For electron recoils, the measured event energy is reduced from the nominal expectation according to the ratio of the NTL gain at the potential difference experienced by charge carriers produced at the recoil location to the gain at the nominal potential difference.
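As an illustration, the reduced-gain reconstruction can be written as a ratio of NTL gains. The average pair-creation energy εγ ≈ 3 eV for electron recoils in germanium and the function below are assumptions for this sketch, not code from the analysis:

```python
EPS_GAMMA = 3.0  # approximate average energy (eV) per electron-hole pair in Ge

def measured_energy_kevee(e_recoil_kev, delta_v, delta_v_nominal):
    """Reconstructed electron-recoil energy for an event whose charge
    carriers experienced a potential difference delta_v (volts) rather
    than the nominal delta_v_nominal.

    Each pair contributes e*delta_v of NTL phonon energy; calibration
    assumes full gain, so events with delta_v < delta_v_nominal are
    reconstructed at lower energy (RNTLs).
    """
    true_gain = 1.0 + delta_v / EPS_GAMMA
    nominal_gain = 1.0 + delta_v_nominal / EPS_GAMMA
    return e_recoil_kev * true_gain / nominal_gain
```

An event seeing less than 93.3% of the full bias shifts down in energy by more than the detector resolution, matching the RNTL definition above.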
The shape of the voltage distribution is a source of systematic uncertainty for the distribution of RNTLs. To account for this, we repeat the analysis with an alternate simulated voltage distribution that contains more features in the voltage spectrum. The alternate distribution predicts a slightly higher RNTL leakage rate for the same radial cut, providing a handle on the systematic uncertainty on the rate of RNTLs expected to pass the cut.
v.2 Identification of RNTLs
Following the analysis done in Run 2 Agnese et al. (2018a), we use the 11.43 day half-life of the ⁷¹Ge produced during neutron calibrations to statistically differentiate K-shell capture events from other backgrounds in different regions of the energy–radial-parameter plane. This study of ⁷¹Ge decay events finds that of events receive full Luke gain and thus are reconstructed at the correct energy, making the remaining RNTLs.
We then use this fraction to calculate the number of RNTLs in our data set. We first determine the number of K-shell events from ⁷¹Ge decays by fitting the time distribution of events in the K-shell line at 10.37 keV with a component that decays with the 11.43 day half-life of ⁷¹Ge plus a flat component due to other backgrounds.
Using the known number and energy of the K-shell events and the shape of the tail in the distribution from the voltage map, we can then determine the expected number of K-shell RNTLs in our signal region below 2 keV. The contributions of L-shell and M-shell events are estimated by scaling the number of RNTL events from K-shell decays by the theoretical branching ratios between these shells (see Table 4). All RNTL events from L-shell or M-shell decays are in the energy region of interest for this analysis.
The rates of other backgrounds below the K shell are determined by assuming that they are distributed uniformly in volume, energy, and rate, measuring their rate in a region free from RNTLs and ⁷¹Ge events, and extrapolating to the full volume and energy range. Similarly, the rate of events above the K shell is extrapolated from a region higher in energy than the K shell that is free from RNTLs and ⁷¹Ge events.
The final step for calculating the rate of RNTLs is to scale the rates by the energy-dependent efficiencies of all the other cuts. We estimate that there are RNTLs in the signal region before applying a fiducial volume cut.
v.3 Distribution of RNTLs in
The majority of RNTLs are measured only slightly lower in energy than their true energy, because the distribution of the experienced potential inside the detector peaks strongly at the nominal voltage. Thus the energy regions just below the strong K- and L-shell ⁷¹Ge-decay peaks provide good samples of RNTLs whose properties can be studied to determine their distribution in the radial parameter.
We model the radial distribution of RNTLs by defining a region in the radial parameter outside of which we observe no RNTLs, and selecting events in this region within a small energy range below the L-shell capture peak (0.7–1.2 keV). Creating a cumulative distribution function in the radial parameter for these events provides an estimate of the radial distribution of RNTLs. A systematic uncertainty on this distribution is estimated by removing the upper bound in the radial parameter while narrowing the energy window, which creates a distribution that predicts slightly more RNTLs passing the same cut.
v.4 Resolution of
To model the resolution of the radial parameter, we create sets of simulated events based on the 2TOF templates and fits of ⁷¹Ge K-shell capture events (a large sample that well represents the true distribution through the full range of the radial parameter) in the manner done in Ref. Agnese et al. (2018a). Differently from the Run 2 analysis, we simulate each event with 100 different noise traces and use the resulting output to find the spread in the radial parameter due to the noise, as a function of position and energy. By fitting a Gaussian distribution to these sets of simulations, we build a model of the resolution as a function of “true” radial parameter and energy (Fig. 7).
v.5 RNTL Monte Carlo Simulation
Combining the expected energy distributions of RNTL events, the voltage map model, and the resolution model for the radial parameter as a function of energy, we can model the RNTL distribution throughout the full energy–radial-parameter plane. We use a Monte Carlo method to sample these distributions and thus produce a prediction for the 2D probability distribution of the data in these variables. We set a cut on the radial parameter as a function of energy based on this distribution, such that we expect RNTLs passing the cut. The systematic error is estimated from Monte Carlo simulations with the alternate radial and voltage models (with the radial distribution of RNTLs being the larger contributor). The cut boundary was chosen such that the expected distribution of RNTLs passing the cut is uniform in energy between 0.07 and 2 keV. The radial parameter cut imposes an analysis threshold of 70 eV, which is determined by the lowest well-determined bound of the radial resolution model.
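The Monte Carlo sampling described above can be sketched as follows. The sampling functions, the εγ value, and the Gaussian radial smearing are illustrative assumptions, not the analysis code:

```python
import random

EPS_GAMMA = 3.0  # approximate average energy (eV) per electron-hole pair in Ge

def simulate_rntls(n, e_true_kev, dv_nominal, sample_dv, sample_radius, sigma_r):
    """Toy Monte Carlo of the reduced-NTL-gain population.

    sample_dv():      draws an experienced potential difference (V) from
                      the voltage-map distribution (user supplied).
    sample_radius():  draws a 'true' radial parameter from the RNTL
                      radial model (user supplied).
    sigma_r(r, e):    radial-parameter resolution model.

    Returns (measured energy, measured radial parameter) pairs, from
    which a cut boundary in the energy-radius plane can be derived.
    """
    events = []
    for _ in range(n):
        dv = sample_dv()
        e_meas = (e_true_kev * (1.0 + dv / EPS_GAMMA)
                  / (1.0 + dv_nominal / EPS_GAMMA))
        r_true = sample_radius()
        r_meas = random.gauss(r_true, sigma_r(r_true, e_meas))
        events.append((e_meas, r_meas))
    return events
```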
v.6 Radial Cut above 2 keVee
In Sec. IX.1 we will estimate the sensitivity of the experiment to DM interactions based on background expectations derived from higher energies (5–25 keV) that are then extrapolated down into the signal region (0.07–2.0 keV). Therefore, fiducialization at higher energies is needed. Above 2 keV, the threshold of the radial cut is set differently because we cannot model the radial parameter as well, due to saturation effects in the phonon pulse shape affecting the measured value. Instead, above 2 keV we set a restrictive cut on the radial parameter such that we expect zero RNTLs in this region. The full range of data with the radial cut applied is shown in Fig. 8.
VI Signal Efficiency
We calculate the DM signal efficiency of the Sec. IV quality cuts and Sec. V fiducial volume cut by simulating raw pulses, processing them through the analysis pipeline to calculate the different cut variables, applying the cuts to them, and calculating their passage fraction as a function of energy.
The trigger efficiency is calculated with an alternate technique, using information derived from events that triggered elsewhere in the detector array. The trigger efficiency is a minor contributor to the total efficiency because the data quality and fiducial volume cuts are less efficient than the trigger at low energy.
vi.1 Data Quality Cuts
The efficiency of the signal-template cut, the goodness-of-fit cuts, and the two BDT-based LF noise cuts is calculated using simulations of the total phonon pulse (i.e. the sum of the pulses for all phonon channels read out from the CDMSlite detector). These simulations depend on accurately representing the phonon readout noise in the pulses as well as the shapes of the true phonon pulses. We accomplish this by combining noise traces from randomly triggered events with noiseless phonon pulse templates. The true phonon pulses contain pulse-shape variations; to recreate these variations we use a linear combination of the fast and slow templates (see Sec. II.3). For each simulated pulse, we select values for the simulated pulse amplitude and the fast template component from a two-dimensional distribution of these parameters drawn from the full DM-search data set. These simulated pulses span the energy range of interest for the analysis.
The cryocooler timing variable and the waveform correlations between detectors, which are inputs to the BDT-based LF noise cuts, are also recreated for the simulated pulses. The noise traces from which the simulated pulses are formed are uniformly distributed in the cryocooler timing variable. Because DM signal events should also be uniformly distributed in this variable, each simulated pulse inherits the value from its noise trace. The noise traces also provide the detector–detector correlation variables: when a noise trace is acquired, the waveforms on the other detectors in the tower are also recorded. After adding the simulated phonon pulse to the noise trace on the CDMSlite detector, we calculate the waveform correlations between detectors. Finally, we calculate the BDT scores for the simulated data and apply the cuts. The combined efficiency of all data quality cuts, including the energy-independent multiples and muon-veto cuts, is shown in Fig. 9.
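The core of the pulse simulation, a noiseless template added to a measured noise trace, can be sketched as below. The function name and the two-template parametrization of pulse-shape variation follow the description above; the details are illustrative:

```python
import numpy as np

def simulate_pulse(noise_trace, fast_template, slow_template, amplitude, f_fast):
    """Build a simulated total-phonon trace.

    A linear combination of the noiseless fast and slow pulse templates
    (weighted by the fast fraction f_fast) is scaled to the chosen
    amplitude and added to a randomly triggered noise trace.  In the
    analysis, (amplitude, f_fast) pairs are drawn from the 2D
    distribution measured in the DM-search data.
    """
    template = f_fast * fast_template + (1.0 - f_fast) * slow_template
    return noise_trace + amplitude * template
```

Each simulated trace is then processed through the same analysis pipeline as the data; the fraction passing the cuts as a function of energy is the efficiency.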
vi.2 Fiducial Volume
The efficiency of the fiducial volume cut can be measured with techniques similar to those used to construct the RNTL model in Sec. V. We use a Monte Carlo simulation based upon the radial-parameter resolution model to simulate the radial parameter distribution for events having the full NTL amplification. We model the distribution for these events after that of events with reconstructed energies in the L-shell line. We statistically subtract the small contribution of non-⁷¹Ge backgrounds from this distribution and deconvolve the radial-parameter resolution at 1.3 keV.
The result is the expected underlying “true” distribution of the radial parameter for events at the L-shell energy. We then use the resolution model to scale this distribution with energy, thereby creating energy-dependent probability distributions for the radial parameter. Finally, we apply the radial cut to these simulated distributions, and by doing so obtain the efficiency of the fiducial volume cut for events with full NTL amplification.
To obtain the full efficiency of the radial cut, this number must be multiplied by the percentage of events reconstructed at the correct energy (i.e. having the full NTL amplification), as the resolution model for the radial parameter is valid only for those events. We specifically set the cut to remove all RNTLs; we therefore estimate the full efficiency of the radial cut by multiplying by the percentage of non-RNTLs (86%).
vi.3 Trigger Efficiency
The data acquisition system for CDMSlite issues a trigger and reads out events only when an energy deposition is large enough to create a significant increase of the signal above the baseline noise and thus exceed the trigger threshold. To measure the trigger efficiency we select events that triggered one of the other active detectors, because they are an unbiased sample of events with respect to the CDMSlite detector’s trigger. The trigger efficiency is then given by the fraction of events at any given energy (measured in the CDMSlite detector) that also generate a trigger in the CDMSlite detector. We use ²⁵²Cf calibration data, which have a significantly higher event rate than the DM-search data, to decrease the statistical uncertainty of the trigger efficiency measurement. To model the trigger efficiency as a function of energy, we fit an error function to the data using the same method as in the Run 2 analysis Agnese et al. (2018a). The final trigger efficiency curve is shown in Fig. 9. Above 0.09 keV the trigger efficiency is equal to 100% with negligible statistical uncertainty.
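The unbiased per-bin efficiency estimate described above can be sketched as follows; the binning and names are illustrative:

```python
def trigger_efficiency(energies_kev, fired, bin_edges):
    """Per-energy-bin fraction of events that fired the CDMSlite
    detector's trigger.

    The events are selected because they triggered *another* detector
    in the array, so the sample is unbiased with respect to the
    CDMSlite trigger.  'fired' is True where the CDMSlite detector also
    triggered.  Bins with no events return NaN.
    """
    n_bins = len(bin_edges) - 1
    counts = [0] * n_bins
    passed = [0] * n_bins
    for e, t in zip(energies_kev, fired):
        for i in range(n_bins):
            if bin_edges[i] <= e < bin_edges[i + 1]:
                counts[i] += 1
                passed[i] += int(t)
                break
    return [p / c if c else float("nan") for p, c in zip(passed, counts)]
```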
The efficiencies for the trigger, the data quality cuts, and the fiducial volume cut are combined by multiplying their mean values and propagating their respective uncertainties.
Incorporating the signal efficiency into the likelihood, described in Sec. VIII, is most easily accomplished by parameterizing the final efficiency using a functional form with a limited number of model parameters. We find that a three-parameter error function,
is a good parametrization of the total efficiency curve. This simple efficiency parametrization deviates slightly from the data in the 0.15–0.4 keV range. We verified that this deviation results in a negligible change in the expected DM sensitivity. We determine the best-fit values of the three parameters as well as the covariance between them, denoted by a matrix E. This matrix is used to propagate uncertainties in the efficiency parameters into the profile likelihood fit of Sec. VIII.
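A common three-parameter error-function efficiency model has the form sketched below. The parameter names (plateau, midpoint, width) are hypothetical, and the convention of the paper's Eq. 15 may differ:

```python
import math

def efficiency_model(e_kev, plateau, midpoint, width):
    """Three-parameter error-function efficiency curve.

    Rises from 0 to 'plateau', crossing plateau/2 at 'midpoint', with
    a transition scale set by 'width'.
    """
    return 0.5 * plateau * (1.0 + math.erf((e_kev - midpoint)
                                           / (math.sqrt(2.0) * width)))
```

The covariance matrix E of the fitted (plateau, midpoint, width) values is what propagates the efficiency uncertainty into the likelihood.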
VII Background Models
The SuperCDMS cryostat was surrounded by layers of shielding that blocked almost all external radiation, such as γ-rays and neutrons from the cavern walls. Thus, the radioactivity of the shielding and the other apparatus materials was the dominant source of background. We use Monte Carlo simulations, as well as data-driven fits, to model these backgrounds.
The primary backgrounds modeled for this analysis are cosmogenic activation of the crystal, neutron activation from ²⁵²Cf calibration, Compton scattering from primordial isotopes in the apparatus materials, and ²¹⁰Pb contamination on the surfaces of the detector and its copper housing.
vii.1 Cosmogenic Activation
Cosmic rays can cause spallation resulting in cosmogenic activation of the crystals and apparatus materials during fabrication, storage, and transportation above ground. In germanium detectors, tritium contamination is a significant background, with contributions from other isotopes that decay primarily either by β-decay or electron capture (EC). The additional cosmogenically produced isotopes that undergo β-decay have endpoints of O(MeV) and relatively small production rates, and can generally be ignored. The isotopes that undergo EC give discrete lines in the detectors below 10 keV and were observed in the CDMSlite Run 2 spectrum Agnese et al. (2018b). We describe analytic models for the tritium beta-decay spectrum and the EC lines.
Non-relativistic β-decay theory suffices to model tritium’s decay spectrum because its endpoint, or Q-value, satisfies Q ≪ m_e c², where m_e is the electron mass. The distribution of the electron’s kinetic energy is described by
where C is a normalization constant and F(Z, E) is the Fermi function Krane (1988). The non-relativistic approximation for the Fermi function is given by
Here Z is the atomic number of the daughter nucleus, α is the fine-structure constant, and p is the electron’s momentum Povh et al. (2008). The analytical description given by Eqs. 16 and 17 describes the tritium background used for the likelihood analysis.
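A standard form of the allowed β spectrum and the non-relativistic Fermi function, consistent with this description, is reproduced below; this is a textbook sketch (with Q the endpoint and β the electron velocity in units of c), and the exact notation of Eqs. 16 and 17 may differ:

```latex
\frac{dN}{dE_k} \;=\; C\,F(Z,E_k)\,\sqrt{E_k^{2} + 2E_k m_e c^{2}}\,
\left(Q - E_k\right)^{2}\left(E_k + m_e c^{2}\right),
\qquad
F(Z,E_k) \;\approx\; \frac{2\pi\eta}{1 - e^{-2\pi\eta}},
\quad
\eta = \frac{Z\alpha}{\beta},
\quad
\beta = \frac{p\,c}{E_k + m_e c^{2}} .
```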
vii.1.2 Electron Capture Peaks
The cosmogenic isotopes that decay via EC and are present in the measured CDMSlite spectrum are listed in Table 4 with their shell energies and relative amplitudes, normalized to the K shell. The observed energy distribution is a Gaussian peak at the energy of the respective shell, with a width given by the detector’s energy resolution.
In our background model, the amplitude ratios between the K-, L-, and M-shell peaks are assumed to be as given in Table 4. The contribution of each EC isotope to the spectrum is given by an equation of the type
where A_i are the amplitudes of the respective shells, E_i are the shell energies, and σ_i are the energy resolutions at the respective energies.
By modeling the EC peaks with Eq. 18, the number of events in the K shell is the only free parameter in the likelihood fit for each isotope, with the other peak amplitudes determined from the branching ratios.
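The Eq. 18 form, a sum of Gaussian peaks with amplitudes tied to the K-shell count, can be sketched as follows. The shell energies and relative amplitudes listed here are approximate, illustrative stand-ins for the Table 4 values:

```python
import math

# (shell energy in keV, amplitude relative to the K shell); illustrative
# values loosely modeled on 71Ge -- the analysis uses Table 4.
SHELLS = [(10.37, 1.0), (1.30, 0.12), (0.16, 0.02)]  # K, L, M

def ec_spectrum(e_kev, n_k, shells, resolution):
    """Sum of Gaussian EC peaks (Eq. 18 form).

    n_k is the fitted number of K-shell events; the L- and M-shell
    amplitudes follow from the relative branching ratios in 'shells'.
    resolution(E) is the detector energy-resolution model.
    """
    total = 0.0
    for e0, rel in shells:
        sigma = resolution(e0)
        total += (n_k * rel / (sigma * math.sqrt(2.0 * math.pi))
                  * math.exp(-0.5 * ((e_kev - e0) / sigma) ** 2))
    return total
```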
vii.2 Electron Capture of ⁷¹Ge
Neutrons from the ²⁵²Cf calibration source can be captured by ⁷⁰Ge, creating ⁷¹Ge, which undergoes EC. Although we use these peaks for calibration (see Sec. II.1), and although they decay with a half-life of 11.43 days, they are still a source of background. They are modeled using the same functional form as the cosmogenic EC peaks (Eq. 18), with the one exception that, due to the large overall number of events, the L2 peak is not negligible and is thus included in the fit. This component, omitted from Table 4, has an energy of 1.14 keV and a relative amplitude of 0.0011.
vii.3 Compton Scattering
The Monash University Compton Model Brown et al. (2014) calculates properties of the scattered incident photon and the detector’s recoiling electron by accounting for the atomic binding energy. This treatment is necessary to replicate the phenomenon of “Compton steps”—step-like features created in the energy spectrum because the detector collects at least the binding energy of any freed electron. For example, the electrons in the K shell of germanium have a binding energy of 11.1 keV. This energy is deposited in the detector due to the reorganization of the electron shells, along with any additional energy that is given to the freed electron by the incident gamma. Thus, an electron from the K shell can never deposit less than 11.1 keV in the detector, and likewise for electrons in the other atomic shells. Naïvely we would expect the number of electrons in each shell to determine the relative size of the steps; however, details of the electron wave functions can also affect the step size. The Compton steps have been directly observed in silicon detectors Ramanathan et al. (2017). In germanium, only the K-shell step has been measured directly, and so other methods must be used to estimate the lower-energy steps Barker (2017).
The dominant contributors to the Compton background are radiogenic photons from trace amounts of contamination in the experimental materials. These originate from the shield materials (polyethylene and lead) as well as the cryostat and towers (copper). To estimate the shape of this background, we carried out a Geant4 simulation Agostinelli et al. (2003); *Allison2006; *Allison2016 of U decays originating from the cryostat cans. The spectrum of energy deposited in the CDMSlite detector by these decays was determined to be characteristic of all bulk contamination. We fit a model consisting of a sum of error functions,
to the simulated events that scatter once in the CDMSlite detector. The location of each step is given by the corresponding shell binding energy, and its width by the energy resolution model of Sec. II at that energy. The amplitudes of the error functions are the relative step sizes, chosen so that Eq. 19 is normalized to one over the energy range 0–20 keV. Due to the binned nature of the fit, only the amplitudes of the first four steps could be accurately determined. The constant term in Eq. 19 has a value of 0.005 keV⁻¹ and accounts for a flat background required to fit the simulated spectrum.
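The Eq. 19 form, a sum of resolution-smeared steps plus a flat term, can be sketched as below. The step energies and amplitudes here are illustrative (only the 11.1 keV K-shell binding energy is quoted in the text):

```python
import math

# (binding energy in keV, relative step amplitude); illustrative values.
# Only the K-shell binding energy of Ge (11.1 keV) is taken from the text.
GE_STEPS = [(11.1, 0.1), (1.4, 0.3), (0.16, 0.6)]

def compton_model(e_kev, steps, resolution, flat=0.005):
    """Sum-of-error-functions Compton-step model (Eq. 19 form).

    Each step turns on at a shell binding energy, smeared by the
    detector resolution; 'flat' is the constant term (per keV).
    """
    total = flat
    for e0, amp in steps:
        sigma = resolution(e0)
        total += 0.5 * amp * (1.0 + math.erf((e_kev - e0)
                                             / (math.sqrt(2.0) * sigma)))
    return total
```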
vii.4 Surface Backgrounds
Surface events are primarily due to the decay of ²¹⁰Pb, which is a long-lived daughter of ²²²Rn. Radon exposure can cause ²¹⁰Pb to become implanted into the surfaces of the detectors and their surrounding copper housings. Radiation from the ²¹⁰Pb decay chain consists primarily of betas, Auger electrons, ²⁰⁶Pb ions, and alphas, which have a small mean free path in Ge and will deposit the majority of their energy within a few millimeters of the detector’s surface. To understand this background and build a model of its expected distribution in energy, we use a Geant4 simulation and a detector response function. We normalize the predicted rate of surface backgrounds using a study of alphas in SuperCDMS iZIP data.
vii.4.1 Detector Response of CDMSlite
Surface events will deposit all their energy within a few millimeters of the detector surface, depending on the particle type. Due to the asymmetric electric field shown in Fig. 3, many surface events at large radii will experience reduced NTL gain and be removed by the fiducial volume cut. To properly model this background in CDMSlite, an approximation of the detector response is needed such that reduced NTL events can be removed.
The detector response model uses the voltage map of Fig. 3 and the resolution model of Eq. 8 to approximate the total phonon energy measured in the detector. Each component of the energy resolution model is implemented independently. For example, the energy deposited in a Geant4 simulation is used to determine the average number of electron-hole pairs produced; an integer number of actual pairs is then drawn from a distribution of the corresponding width. A yield correction is applied to NRs based on the Lindhard model (see Eq. 5). The location of the Geant4 event in the detector is used to determine the experienced voltage for the event and thus the total phonon energy using Eq. 3. The recoil energy is given by the energy deposited in the simulation.
We do not attempt to simulate the radial parameter for surface events. Instead, because the radial cut removes events at large radii that have reduced NTL amplification due to the reduced electric potential, we use a cut on the experienced voltage of the events as a proxy for the fiducial volume cut, with the cut value set relative to the bias voltage used in the simulation.
vii.4.2 Simulation of Pb Contamination
In Geant4, we use the Screened Nuclear Recoil physics list Mendenhall and Weller (2005) to model the implantation of ²¹⁰Pb into the material surfaces along with any recoil of nuclei by subsequent decays to the stable isotope ²⁰⁶Pb. We consider three locations from which surface events may originate: the copper directly above the detector (“top lid”, TL), the cylindrical housing (H), and the surface of the germanium crystal itself (Ge).
We simulated energy depositions from the decays of ²¹⁰Pb, ²¹⁰Bi, and ²¹⁰Po for the three locations. Applying the detector response function to each simulated decay yields the expected spectrum for this analysis. Additionally, we consider only events with energy deposition in the top detector of the tower (single-scatter events), since that is the location of the CDMSlite detector. The spectra from all three decays can be added under the assumption of secular equilibrium between the two daughters and the ²¹⁰Pb parent. This is a valid assumption because the longest daughter half-life in this chain is 138 days, which is short compared to the time between the last exposure to radon and the beginning of the measurement. The spectra from the three locations are included in the likelihood fit of Sec. VIII.1 to account for all possible surface background events.
We normalize the surface background rate with an independent measurement of the alpha decay events in the CDMSlite detector (similar to the surface-event normalization in Ref. Agnese et al. (2014b)), using a data set with a livetime of 380 days taken with the detector operated in iZIP mode. Because this iZIP-mode data set provides more detailed information on event positions, the observed rates could be attributed to surface event sources originating from parents on the top lid, housing, and detector surface. The detector surface rate is deduced from the surface facing the neighboring detector. This rate is then subtracted (with the appropriate surface area scaling) from the event rate measured on the side wall and the surface facing the top lid to determine the rate from the other two locations (H and TL). Because the determination of an individual source’s contribution depends on subtracting the contribution of the other sources, this normalization procedure introduces a negative correlation between the various components.
We compare the observed alphas from the detector surface, top lid, and housing to the simulated number to determine a scaling factor for the simulation. The single-scatter events that pass the voltage cut in the simulation are then scaled to the Run 3 livetime to obtain the expected number of surface events. The germanium surface, housing, and top lid are estimated to contribute 3.4, 6.5, and 17 events, respectively, in 0–2 keV after signal-efficiency cuts have been applied.
vii.4.4 Discussion of Uncertainties
There are two main sources of systematic uncertainty on the energy spectra for surface events: uncertainties in the voltage map that determines the voltage for each event, and the location of the fiducial volume cut. The map in Fig. 3 assumes no additional detectors in the tower. Including the detector beneath the CDMSlite detector results in a difference of 0.5 V and 1 V for the top and bottom faces respectively, which we incorporate as a systematic uncertainty. Additionally, we model uncertainties in the fiducial volume cut (using the voltage cut as a proxy for the radial parameter cut) by varying the voltage cut from roughly V to V. These two sources of error are independent and can be added in quadrature.
The surface backgrounds are included in the likelihood of Sec. VIII not as PDFs but as event densities that are functions of morphing parameters, with one morphing parameter for each of the three surface background sources. The morphing parameters are used to incorporate both the uncertainty on the spectral shape and the uncertainty on the normalization from the alpha study. They allow each event density to vary smoothly within its ±1σ uncertainty band as:
where the upper (lower) expression applies for positive (negative) values of the morphing parameter. A value of 0 results in the nominal event density, +1 results in the upper +1σ event density, and −1 results in the lower −1σ event density. The event densities, shown in Fig. 11, are normalized such that the integral of each gives the expected number of surface events as indicated by the alpha study.
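The piecewise-linear vertical morphing described above can be sketched as follows, for event densities tabulated on a common energy grid; the function name and array representation are illustrative:

```python
def morphed_density(rho_nominal, rho_up, rho_down, nu):
    """Piecewise-linear morphing between an event density and its
    +/-1 sigma variants:

        nu =  0  -> nominal density,
        nu = +1  -> upper +1 sigma density (rho_up),
        nu = -1  -> lower -1 sigma density (rho_down).

    All densities are lists of values on the same energy grid.
    """
    if nu >= 0.0:
        return [r0 + nu * (ru - r0) for r0, ru in zip(rho_nominal, rho_up)]
    return [r0 + nu * (r0 - rd) for r0, rd in zip(rho_nominal, rho_down)]
```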
Because the systematic uncertainties from the voltage cut are positively correlated between the different surface backgrounds, the morphing parameters of the three surface backgrounds are positively correlated. We encode correlations from these common systematics, as well as correlations resulting from the alpha-decay normalization study, in a covariance matrix M between morphing parameters. Information from the alpha study prefers constraints on the morphing parameters centered at zero. Fits to the CDMSlite energy spectra above the region of interest for this analysis, done as part of a sensitivity study described in Sec. IX.1, favor slightly negative values. We use the fitted values and covariances from that study as constraints in the likelihood fit of Sec. VIII.
VIII Profile Likelihood Analysis
To incorporate information about backgrounds when searching for a DM signal, we use the profile likelihood ratio (PLR) method, which improves upon previous CDMSlite DM searches in multiple ways. First, it provides improved sensitivity over the optimum interval limit-setting method Yellin (2002, 2007) as implemented in the Run 2 analysis because the known backgrounds are taken into account. Second, the PLR approach can in principle be used in a discovery framework, potentially allowing for discovery of a signal. Third, the PLR approach naturally incorporates systematic uncertainties into signal and background models and reflects those systematic uncertainties in the sensitivity.
The PLR method fits the probability distribution functions (PDFs) for a DM signal and all background sources accounted for in our background model to the energy spectrum of events that pass all cuts. Separate PDFs are used for Period 1 and Period 2. CDMSlite has greatest sensitivity to DM masses between 1 and 10 GeV/c². Because the corresponding expected energy spectrum from a DM signal is concentrated below 2 keV, we restrict our final likelihood fit (and thus our DM search) to the 0.07–2 keV energy range, where 0.07 keV is the analysis threshold. Tests of the likelihood fit done prior to unsalting on simulated data sets validated the fitting method.
viii.1 Likelihood Function
We use an unbinned extended likelihood to fit for the number of DM and background events in the final data set. One-dimensional PDFs, normalized to unity over the energy range of the fit, describe the signal and non-surface background distributions as a function of energy. We calculate the signal PDF, as a function of the DM mass, using standard DM halo assumptions and the Helm nuclear form factor Lewin and Smith (1996); Helm (1956). The number of fitted DM events is related to the DM cross section. The non-surface background model comprises six PDFs from the sources discussed in Sec. VII: Compton scattering events, tritium, and four different EC isotopes (⁶⁸Ge/⁷¹Ge, ⁶⁸Ga, ⁶⁵Zn, ⁵⁵Fe). The number of background events from each of these six sources is a free parameter in the fit.
We include the surface background distributions in the likelihood not as PDFs but as event densities, which account for both spectral shape and normalization. This is done because the energy spectra of these backgrounds vary with the systematic uncertainties considered and correlate with their normalizations, both parameterized by the morphing parameters discussed in Sec. VII.4. The number of events from each of the three surface background sources is given by the integral of the corresponding event density.
While the normalizations of the surface background event density distributions are constrained by the alpha measurements discussed in Sec. VII.4, we place no constraints on the number of events contributing from the other background sources. Spectral information alone is used to fit these backgrounds and differentiate them from the DM signal distribution.
The full extended likelihood function is
where N is the number of events in the data set, μ is the total number of fitted signal and background events, θ is a set of nuisance parameters that vary the shapes of the PDFs as a function of systematic uncertainties, and the final factor is a constraint term that encodes prior constraints on these nuisance parameters as well as on the morphing parameters.
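A standard way to write such an extended unbinned likelihood, consistent with the description above, is sketched below. The symbols are illustrative (n_χ and f_χ for the signal, n_j and f_j for the six non-surface backgrounds, ρ_k for the three surface event densities, and C for the constraint term); the notation of Eq. 21 may differ:

```latex
\mathcal{L} \;=\; \frac{e^{-\mu}\,\mu^{N}}{N!}\;
\prod_{i=1}^{N} \frac{1}{\mu}
\Big[\, n_{\chi}\, f_{\chi}(E_i;\boldsymbol{\theta})
\;+\; \sum_{j=1}^{6} n_j\, f_j(E_i;\boldsymbol{\theta})
\;+\; \sum_{k=1}^{3} \rho_k(E_i;\nu_k) \Big]\,
\times\, C(\boldsymbol{\theta},\boldsymbol{\nu}),
\qquad
\mu \;=\; n_{\chi} + \sum_{j=1}^{6} n_j + \sum_{k=1}^{3}\!\int\! \rho_k(E;\nu_k)\, dE .
```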
viii.2 Systematic Uncertainties & Constraints
The nuisance parameters in Eq. 21 incorporate systematic uncertainties from the NR ionization yield (described in Sec. II.1), the signal efficiency, and the detector resolution into the likelihood. These sources are parametrized respectively by the Lindhard k parameter, three efficiency parameters, and six resolution parameters. The NR ionization yield parameter shifts the signal distribution as described in Sec. II.1. The signal efficiency parameters scale the distributions by the shape given by Eq. 15, and the resolution parameters smear distributions with a resolution given by Eq. 8 and parameters from Table 1.
The term in Eq. 21 is given by
which constrains the nuisance and morphing parameters through their central values, uncertainties, and correlations as determined from prior information. These constraints dictate the extent to which the systematic uncertainty parameters can alter the shape (and, in the case of the morphing parameters, the normalization) of the signal and background distributions.
The 1D Gaussian constraint on $k$ allows this parameter's fitted value to differ from the theoretical value for germanium, $k_0 = 0.157$. The systematic uncertainty is the Gaussian's width, $\sigma_k$, as estimated from auxiliary measurements of the ionization yield in germanium Barker and Mei (2012). Because these measurements do not provide precise information about the NR ionization yield, particularly at low energy, we use a weak constraint on $k$ by choosing a correspondingly large value of $\sigma_k$.
We constrain the three parameters describing the signal efficiency, $\vec{\epsilon}$, with a 3D Gaussian prior using the results of Sec. VI.4. The center of the 3D Gaussian is given by the best-fit values of the parameters, $\vec{\epsilon}_0$, and its shape is determined by the covariance matrix between the best-fit values, $\mathbf{E}$. We similarly constrain the resolution parameters, $\vec{\sigma}_E$, using the 6D Gaussian prior from the resolution model of Sec. II.4, with best-fit resolution model values $\vec{\sigma}_0$ and covariance matrix $\mathbf{R}$. Because the Period 1 and Period 2 detector resolutions were modeled independently, $\mathbf{R}$ contains zeros in the elements linking the two periods. The morphing parameters, $\vec{m}$, which incorporate systematics of the surface backgrounds, are constrained in the final term of Eq. 22. The expected values of the morphing parameters, $\vec{m}_0$, and the covariance matrix between them, $\mathbf{M}$, determine the constraint. We take the constraints for the morphing parameters from the sensitivity study described in Sec. IX.1.
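Each Gaussian prior above enters a minimizer as a quadratic penalty on $-2\ln\mathcal{L}$. A minimal sketch of that penalty (illustrative only; parameter names are placeholders):

```python
import numpy as np

def constraint_penalty(theta, theta0, cov):
    """-2 ln of a multivariate Gaussian prior, up to an additive constant:
    the penalty the constraint term C contributes for parameters theta
    with central values theta0 and covariance matrix cov."""
    d = np.atleast_1d(theta) - np.atleast_1d(theta0)
    return float(d @ np.linalg.inv(np.atleast_2d(cov)) @ d)
```

A 1D prior reduces to the familiar $((k - k_0)/\sigma_k)^2$ term; the 3D, 6D, and 3D priors use the covariance matrices E, R, and M in the same way, so correlated parameters are penalized jointly rather than independently.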
VIII.3 Upper Limit Calculation
We test the hypothesis that a DM signal with spin-independent cross section $\sigma$, for a given mass, exists in the data. Because the best-fit value of $\sigma$ for the DM masses considered in this analysis is found to be well below the experiment's sensitivity (calculated in Sec. IX.1), we choose to set an upper limit. Following the likelihood ratio statistic described in Ref. Aprile et al. (2011), all parameters in the likelihood other than $\sigma$ (i.e., the systematic uncertainty parameters and the numbers of background events) are profiled out as nuisance parameters by maximizing the likelihood as a function of these parameters with $\sigma$ held constant. Explicitly, the test statistic is defined as
$$q_\sigma = -2 \ln \lambda(\sigma),$$
where $\lambda(\sigma)$ is defined as
$$\lambda(\sigma) = \frac{\mathcal{L}\big(\sigma, \hat{\hat{\vec{\theta}}}, \hat{\hat{\vec{m}}}\big)}{\mathcal{L}\big(\hat{\sigma}, \hat{\vec{\theta}}, \hat{\vec{m}}\big)}.$$
The numerator of $\lambda(\sigma)$ is the likelihood of a fit in which the signal component is constrained to the test hypothesis value $\sigma$; $\hat{\hat{\vec{\theta}}}$ and $\hat{\hat{\vec{m}}}$ are the values of the nuisance parameters that maximize the likelihood given the constraint on $\sigma$. The denominator is the likelihood with no constraints: the cross section is permitted to float along with the nuisance parameters, and the values that maximize the likelihood are labeled $\hat{\sigma}$, $\hat{\vec{\theta}}$, and $\hat{\vec{m}}$. Signal hypotheses for which $\hat{\sigma} > \sigma$ are compatible with the data when calculating upper limits. Therefore $q_\sigma$ is set to 0 in these cases, the value that indicates the highest degree of compatibility between the signal hypothesis and the data.
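The profiling construction can be sketched on a toy two-component model (illustrative only: a signal yield stands in for the cross section, and one flat background replaces the nine components and systematic parameters of the real fit):

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

rng = np.random.default_rng(1)
data = rng.uniform(0.0, 2.0, size=100)                 # toy: flat background only
f_sig = lambda E: np.exp(-E) / (1.0 - np.exp(-2.0))    # toy signal PDF on [0, 2]
f_bkg = lambda E: np.full_like(E, 0.5)                 # flat background PDF

def nll(n_sig, n_bkg):
    dens = n_sig * f_sig(data) + n_bkg * f_bkg(data)
    return (n_sig + n_bkg) - np.sum(np.log(dens))

def q(n_sig_test):
    # numerator: profile the background yield at the tested signal strength
    prof = minimize_scalar(lambda nb: nll(n_sig_test, nb),
                           bounds=(1e-3, 500.0), method="bounded")
    # denominator: global fit with both yields free
    glob = minimize(lambda p: nll(p[0], p[1]), x0=[1.0, 100.0],
                    bounds=[(0.0, 500.0), (1e-3, 500.0)])
    if glob.x[0] > n_sig_test:        # best fit above the tested value:
        return 0.0                    # hypothesis compatible, q set to 0
    return max(0.0, 2.0 * (prof.fun - glob.fun))
```

Large tested yields give large $q$ and are excluded; yields below the best fit return 0, mirroring the one-sided construction above.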
This profiling method yields a likelihood ratio that is solely a function of $\sigma$. We calculate the value of $\sigma$ for which the signal hypothesis is rejected at the 90% confidence level (CL) by comparing the $q_\sigma$ obtained from the data to the expected distribution of $q_\sigma$ when the signal hypothesis is true, $f(q_\sigma|\sigma)$. While significant computation would be required to calculate $f(q_\sigma|\sigma)$ for every tested signal hypothesis $\sigma$, Wilks' theorem Wilks (1938) indicates that $f(q_\sigma|\sigma)$ asymptotically approaches a distribution with equal contributions from a Dirac delta function centered at zero and a $\chi^2$ distribution with one degree of freedom. Monte Carlo calculations have verified that $f(q_\sigma|\sigma)$ converges to the distribution predicted by Wilks' theorem for a variety of tested signal hypotheses within the sensitivity of the Run 3 analysis, and therefore the theoretical distribution is used to set the upper limit.
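Under this asymptotic form, the 90% CL critical value follows directly from the survival function of the mixture (a short SciPy check, for illustration):

```python
from scipy.stats import chi2

def p_value_asymptotic(q_obs):
    """Survival function of the 1/2 delta(0) + 1/2 chi2(1 dof) mixture
    predicted by Wilks' theorem for this one-sided test."""
    return 0.5 * chi2.sf(q_obs, df=1) if q_obs > 0 else 1.0

# 90% CL critical value: solve 0.5 * sf(q) = 0.10, i.e. sf(q) = 0.20
q_crit = chi2.isf(0.20, df=1)
```

The factor of 1/2 reflects the cases in which $q_\sigma$ is set to 0, so the critical value (about 1.64) is lower than the 2.71 of a pure one-degree-of-freedom $\chi^2$ test.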
Additionally, the CL$_s$ technique Read (2002) is used to protect against the possibility of the PLR method excluding a DM-nucleon cross section lower than the sensitivity of the experiment, which can occur if the background statistically fluctuates to a low number of events. A consequence of this protection, which we have verified with Monte Carlo simulations, is that the CL$_s$ technique gives a slightly higher 90% excluded signal cross section than would otherwise be obtained (i.e., it provides a limit with over-coverage) and is therefore conservative.
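The CL$_s$ construction can be written as a one-line ratio (a schematic helper, not the collaboration's implementation; the p-values passed in are placeholders):

```python
def cl_s(p_sb, cl_b):
    """CLs = CL_{s+b} / CL_b. p_sb is the p-value of the signal-plus-
    background hypothesis; cl_b is the probability, under the
    background-only hypothesis, of a result at least as signal-like as
    observed. Exclusion at 90% CL requires CLs < 0.10."""
    return p_sb / cl_b
```

When a downward background fluctuation makes the observed data background-like, `cl_b` shrinks, CL$_s$ is inflated relative to `p_sb`, and the exclusion weakens; this is the source of the over-coverage noted above.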
IX.1 Sensitivity Calculation
Prior to unsalting the data, we calculated the 90% CL sensitivity of the Run 3 analysis to a DM signal based on projected background rates in this analysis's energy region of interest (ROI), 0.07–2.0 keV. The sensitivity calculation also uses the likelihood framework presented in Sec. VIII. To estimate the background rates in the ROI, we measure them in the 5–25 keV range and extrapolate the rates to lower energy. We choose 5 keV because salt was not inserted above this energy and because the DM signal contribution above this energy is expected to be negligible for DM masses $\lesssim 10$ GeV/$c^2$. Also, because 5 keV is below the lowest K-shell energy of the EC isotopes considered, all background components are constrained in this range. We perform a maximum likelihood fit using the likelihood defined in Eq. 21 but without the DM signal. We also omit the resolution and efficiency systematic uncertainties because those extra terms are unnecessary when fitting the 5–25 keV background spectrum. This fit provides best-fit values of, as well as covariances between, the background rates of the nine background components in the 5–25 keV range. The expected background in the ROI is then calculated directly from the best fit in the 5–25 keV range, with its uncertainty determined from the covariance matrix of the fit.
Background-only pseudo-experiments are then generated by sampling from the nine different background distributions. The number of events thrown for each background component is randomized, first by sampling from the 9D Gaussian distribution provided by the 5–25 keV maximum likelihood fit and second by adding a Poissonian fluctuation to the sampled value. The 90% CL PLR limit, using the CL$_s$ technique, is calculated for 500 of these pseudo-experiments, and the resulting 1$\sigma$ and 2$\sigma$ sensitivity bands are shown by the green and yellow bands, respectively, in Fig. 13.
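The two-stage randomization of the background expectations might be sketched as follows (a toy with an assumed nine-component mean vector and covariance; not the analysis code):

```python
import numpy as np

rng = np.random.default_rng(7)

def draw_background_counts(mu, cov):
    """Randomize the expected counts of the nine background components:
    first a draw from the 9D Gaussian of the 5-25 keV fit (clipped at
    zero, since expectations cannot be negative), then a Poisson
    fluctuation around the sampled expectation."""
    expected = np.clip(rng.multivariate_normal(mu, cov), 0.0, None)
    return rng.poisson(expected)
```

Each pseudo-experiment then draws event energies from the nine background PDFs according to these counts, and the full PLR limit machinery is run on the resulting spectrum.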
In addition to determining parameters for generating the pseudo-experiments, the 5–25 keV fit provides constraints on the surface background morphing parameters (the $\vec{m}$ of Eq. 22). While this fit used a prior constraint centered at 0 for all morphing parameters, the respective posteriors peaked at 0.19, 0.2, and 0.25 for the germanium, top-lid, and housing surface background locations. This indicates a slightly lower surface background rate than predicted by the alpha decay study. These updated central values for the constraint were used in the likelihood for both the sensitivity estimate and the final limit, along with an updated covariance matrix for the morphing parameters.
IX.2 Evaluating the Goodness of Fit
Because the likelihood fitting procedure described in Sec. VIII.1 provides no information about the goodness of fit (GOF) of the model to the data, we define a procedure to evaluate the GOF that estimates a probability (i.e., a $p$-value) for the data given the model. We use the Cramér-von Mises GOF statistic Birnbaum (1953) because it does not require binning of the data and overcomes some deficiencies of the more common Kolmogorov-Smirnov test, while remaining relatively simple compared to alternative GOF metrics.
The particular GOF procedure that we use incorporates the systematic uncertainties described in Sec. VIII.2. We fit the data using the likelihood of Eq. 21 without a DM component and calculate the Cramér-von Mises statistic using the best-fit total background model (i.e., the red dashed line in Fig. 12). One output of the fit to the data is the covariance between all systematic uncertainty parameters. We then generate 1000 pseudo-experiments that are representative of the model's fit to the data. Using this covariance matrix, we randomize the systematic uncertainty parameters for each pseudo-experiment, which slightly changes the shapes of the individual background components. We then sample the individual background components using Poisson fluctuations around the best-fit values from the fit to the final spectrum. Finally, we fit these pseudo-experiments and calculate a Cramér-von Mises statistic for each of them. The $p$-value is then the fraction of pseudo-experiments with a Cramér-von Mises statistic greater than that of the data fit.
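The statistic and the pseudo-experiment $p$-value can be sketched as follows (illustrative; the model CDF here stands in for the cumulative form of the best-fit background model):

```python
import numpy as np

def cramer_von_mises(sample, model_cdf):
    """W^2 statistic of a sample against a fully specified model CDF."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    # standard unbinned form: 1/(12n) + sum_i (F(x_(i)) - (2i-1)/(2n))^2
    return 1.0 / (12.0 * n) + np.sum((model_cdf(x) - (2 * i - 1) / (2.0 * n)) ** 2)

def gof_p_value(w2_data, w2_pseudo):
    """Fraction of pseudo-experiments with a larger statistic than the data."""
    return float(np.mean(np.asarray(w2_pseudo) > w2_data))
```

Because the pseudo-experiments are refit before their statistics are computed, the resulting $p$-value accounts for the fitting procedure itself, not just the statistic's nominal distribution.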
Prior to unsalting, we agreed on a $p$-value threshold of 0.05, below which we would investigate inaccuracies in the background model, abandon the limit obtained with the profile likelihood method, and resort to the more conservative optimum interval Yellin (2002, 2007) limit-setting technique. Upon unsalting we found a $p$-value of 0.988, indicating a particularly good fit. We checked the GOF evaluation for biases and found none. We therefore accept the 90% CL limit provided by the profile likelihood method.
IX.3 DM Limit and Background Rates
The final Run 3 spectrum after application of all selection criteria is shown in Fig. 12. The main features are the $^{71}$Ge electron-capture L- and M-shell peaks at 1.30 and 0.16 keV, respectively. Events contributed by backgrounds other than $^{71}$Ge exist between the peaks and are well modeled. We do not observe a population of events below the M shell, which is consistent with the steep decrease of the signal efficiency in this range and with the expectations from the background model.
While the best-fit individual background components are shown in Fig. 12, this figure does not provide a visualization of the covariances between background components. As expected, a strong covariance is observed between the Compton and $^3$H background components, which in this energy range do not have sufficiently distinct spectral features to remove their degeneracy in the fit. The surface background components are strongly correlated through the prior-constraint covariance matrix, $\mathbf{M}$, described in Sec. VII.4.4. We find that the surface background component covariances from the likelihood fit match the prior-constraint covariances, indicating that these 0.07–2.0 keV data do not provide additional information on the surface backgrounds.
| Range | Run 2 Rate | Run 3 Rate |
We calculate the average background rates of single-scatter events between the $^{71}$Ge peaks, corrected for efficiency, as shown in Table 6. The higher background rates relative to Run 2 are consistent with the expected rates based on the position of the detector in the tower: the Run 2 detector had neighboring detectors on both of its faces, whereas the Run 3 detector was the top detector in the tower and therefore had one face exposed to the top copper lid. Additionally, identification of multiple scatters in the Run 3 detector is expected to be diminished because of its position in the tower; a higher fraction of multiple-scatter events could therefore pass the multiples cut and contribute to the Run 3 background rates shown in Table 6.
Figure 13 shows the final CDMSlite Run 3 limit calculated with the spectrum in Fig. 12. From 2.5 to 10 GeV/$c^2$ we find a factor of 2–3 improvement in the excluded DM-nucleon cross section over the CDMSlite Run 2 optimum interval analysis Agnese et al. (2016). This improvement is achieved despite the smaller exposure (36 vs. 70 kg-days) and the higher background rate in Run 3, demonstrating the discrimination power of the PLR method. Below 2.5 GeV/$c^2$ we exclude little to no additional parameter space because the effective energy threshold of this analysis is slightly higher than that of CDMSlite Run 2.
These results demonstrate successful modeling of radioactive backgrounds in CDMSlite detectors down to very low energies, as well as the power of a profile likelihood fit to set strong limits on a potential DM signal even in the presence of irreducible backgrounds. Unlike previous CDMSlite analyses, the profile likelihood method used here potentially permits the detection of a signal. Key analysis developments enabling this approach include improved rejection of instrumental backgrounds using detector-detector correlations in a boosted decision tree, removal of events at high radii with misreconstructed energies by an improved fiducial volume cut, and Monte Carlo modeling of surface backgrounds in the detector. The SuperCDMS collaboration is currently constructing a new experiment, SuperCDMS SNOLAB, which will use the CDMSlite technique in detectors designed specifically for high-voltage operation Kurinsky et al. (2016); Agnese et al. (2017). The results obtained here provide a proof of principle that backgrounds for these detectors can be understood at a level that would permit not merely the setting of upper limits in the presence of backgrounds, but potentially the discovery of a low-mass DM signal.
The SuperCDMS collaboration gratefully acknowledges technical assistance from the staff of the Soudan Underground Laboratory and the Minnesota Department of Natural Resources. The iZIP detectors were fabricated in the Stanford Nanofabrication Facility, which is a member of the National Nanofabrication Infrastructure Network, sponsored and supported by the NSF. Funding and support were received from the National Science Foundation, the U.S. Department of Energy, Fermilab URA Visiting Scholar Grant No. 15-S-33, NSERC Canada, the Canada Excellence Research Chair Fund, and MultiDark (Spanish MINECO). The SuperCDMS collaboration prepared this document using the resources of the Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. Pacific Northwest National Laboratory is operated by Battelle Memorial Institute under Contract No. DE-AC05-76RL01830 for the U.S. Department of Energy. SLAC is operated under Contract No. DEAC02-76SF00515 with the U.S. Department of Energy.
- Tanabashi et al. (2018) M. Tanabashi et al., “The Review of Particle Physics,” Phys. Rev. D 98, 030001 (2018).
- Ade et al. (2016) P. A. R. Ade et al., “Planck 2015 results: XIII. Cosmological parameters,” Astron. Astrophys. 594, A13 (2016).
- Steigman and Turner (1985) G. Steigman and M. S. Turner, ‘‘Cosmological constraints on the properties of weakly interacting massive particles,” Nuclear Physics B 253, 375 – 386 (1985).
- Jungman et al. (1996) G. Jungman, M. Kamionkowski, and K. Griest, “Supersymmetric dark matter,” Phys. Rep. 267, 195–373 (1996).
- Aad et al. (2015) G. Aad et al., “Search for new phenomena in final states with an energetic jet and large missing transverse momentum in $pp$ collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector,” Eur. Phys. J. C 75, 299 (2015).
- Khachatryan et al. (2015) V. Khachatryan et al., “Search for dark matter, extra dimensions, and unparticles in monojet events in proton-proton collisions at $\sqrt{s} = 8$ TeV,” Eur. Phys. J. C 75, 235 (2015).
- Aguilar-Arevalo et al. (2016) A. Aguilar-Arevalo et al., “Search for low-mass WIMPs in a 0.6 kg day exposure of the DAMIC experiment at SNOLAB,” Phys. Rev. D 94, 082006 (2016).
- Petricca et al. (2017) F. Petricca et al., “First results on low-mass dark matter from the CRESST-III experiment,” arXiv:1711.07692 .
- Agnese et al. (2018a) R. Agnese et al. (SuperCDMS Collaboration), “Low-mass dark matter search with CDMSlite,” Phys. Rev. D 97, 022002 (2018a).
- Agnes et al. (2018) P. Agnes et al., “Low-mass Dark Matter Search with the DarkSide-50 Experiment,” arXiv:1802.06994 .
- Petraki and Volkas (2013) K. Petraki and R. R. Volkas, “Review of Asymmetric Dark Matter,” Int. J. Mod. Phys. A 28, 1330028 (2013).
- Zurek (2014) K. M. Zurek, “Asymmetric Dark Matter: Theories, signatures, and constraints,” Phys. Rep. 537, 91–121 (2014).
- Hooper et al. (2012) D. Hooper, N. Weiner, and W. Xue, “Dark forces and light dark matter,” Phys. Rev. D 86, 056009 (2012).
- Foot (2013) R. Foot, “Hidden sector dark matter explains the DAMA, CoGeNT, CRESST-II and CDMS/Si experiments,” Phys. Rev. D 88, 025032 (2013).
- Neganov and Trofimov (1985) B. Neganov and V. Trofimov, “Calorimetric method measuring ionizing radiation,” Otkrytia i Izobret. 146, 215 (1985).
- Luke (1988) P. N. Luke, “Voltage-assisted calorimetric ionization detector,” J. Appl. Phys. 64, 6858 (1988).
- Agnese et al. (2014a) R. Agnese et al. (SuperCDMS Collaboration), “Search for Low-Mass Weakly Interacting Massive Particles Using Voltage-Assisted Calorimetric Ionization Detection in the SuperCDMS Experiment,” Phys. Rev. Lett. 112, 041302 (2014a).
- Agnese et al. (2016) R. Agnese et al. (SuperCDMS Collaboration), “New Results from the Search for Low-Mass Weakly Interacting Massive Particles with the CDMS Low Ionization Threshold Experiment,” Phys. Rev. Lett. 116, 071301 (2016).
- Akerib et al. (2004) D. S. Akerib et al. (CDMS Collaboration), “First results from the Cryogenic Dark Matter Search in the Soudan Underground Laboratory,” Phys. Rev. Lett. 93, 211301 (2004).
- Akerib et al. (2005) D. S. Akerib et al. (CDMS Collaboration), “Exclusion limits on the WIMP-nucleon cross section from the first run of the Cryogenic Dark Matter Search in the Soudan Underground Laboratory,” Phys. Rev. D 72, 052009 (2005).
- Ahmed et al. (2009) Z. Ahmed et al. (CDMS Collaboration), “Search for weakly interacting massive particles with the first five-tower data from the Cryogenic Dark Matter Search at the Soudan Underground Laboratory,” Phys. Rev. Lett. 102, 011301 (2009).
- Agnese et al. (2013) R. Agnese et al. (SuperCDMS Collaboration), “Demonstration of surface electron rejection with interleaved germanium detectors for dark matter searches,” Appl. Phys. Lett. 103, 164105 (2013).
- Antman et al. (1966) S. Antman, D. Landis, and R. Pehl, “Measurements of the Fano factor and the energy per hole-electron pair in germanium,” Nucl. Instrum. Methods 40, 272–276 (1966).
- Hampel and Remsberg (1985) W. Hampel and L. P. Remsberg, “Half-life of $^{71}$Ge,” Phys. Rev. C 31, 666–667 (1985).
- Bearden and Burr (1967) J. A. Bearden and A. F. Burr, “Reevaluation of X-Ray Atomic Energy Levels,” Rev. Mod. Phys. 39, 125–142 (1967).
- Lindhard et al. (1963a) J. Lindhard, V. Nielsen, M. Scharff, and P. V. Thomsen, “Integral Equations Governing Radiation Effects (Notes on Atomic Collisions, III),” Mat. Fys. Medd. K. Dan. Vid. Selsk. 33 (1963a).
- Lindhard et al. (1963b) J. Lindhard, M. Scharff, and H. E. Schiott, “Range Concepts and Heavy Ion Ranges (Notes on Atomic Collisions, II),” Mat. Fys. Medd. K. Dan. Vid. Selsk. 33 (1963b).
- Lindhard et al. (1968) J. Lindhard, V. Nielsen, and M. Scharff, “Approximation Method in Classical Scattering by Screened Coulomb Fields (Notes on Atomic Collisions, I),” Mat. Fys. Medd. K. Dan. Vid. Selsk. 36 (1968).
- Jones and Kraner (1971) K. W. Jones and H. W. Kraner, “Stopping of 1- to 1.8-keV Ge Atoms in Germanium,” Phys. Rev. C 4, 125–129 (1971).
- Jones and Kraner (1975) K. W. Jones and H. W. Kraner, “Energy lost to ionization by 254-eV Ge atoms stopping in Ge,” Phys. Rev. A 11, 1347–1353 (1975).
- Sattler et al. (1966) A. R. Sattler, F. L. Vook, and J. M. Palms, “Ionization Produced by Energetic Germanium Atoms within a Germanium Lattice,” Phys. Rev. 143, 588–594 (1966).
- Messous et al. (1995) Y. Messous et al., “Calibration of a Ge crystal with nuclear recoils for the development of a dark matter detector,” Astropart. Phys. 3, 361–366 (1995).
- Barbeau et al. (2007) P. S. Barbeau, J. I. Collar, and O. Tench, “Large-mass ultralow noise germanium detectors: performance and applications in neutrino and astroparticle physics,” JCAP 2007, 009–009 (2007).
- Anderson and Moore (1979) B. D. O. Anderson and J. B. Moore, Optimal Filtering (Prentice Hall, 1979).
- Fano (1947) U. Fano, “Ionization Yield of Radiations. II. The Fluctuations of the Number of Ions,” Phys. Rev. 72, 26–29 (1947).
- Bilger (1967) H. R. Bilger, “Fano Factor in Germanium at 77°K,” Phys. Rev. 163, 238–253 (1967).
- Michael et al. (2008) D. Michael et al. (MINOS Collaboration), “The magnetized steel and scintillator calorimeters of the MINOS experiment,” Nucl. Instrum. Methods Phys. Res. A 596, 190–228 (2008).
- Rusu (2003) V. L. Rusu, Measurement of the Total $^8$B Solar Neutrino Flux at the Sudbury Neutrino Observatory, Ph.D. thesis, University of Pennsylvania (2003).
- Nix et al. (2010) J. Nix, J. Ma, G. Perdue, Y. Zheng, and Y. Wah, “Blind background prediction using a bifurcated analysis scheme,” Nucl. Instrum. Methods Phys. Res. A 615, 223–229 (2010).
- Agnese et al. (2018b) R. Agnese et al. (SuperCDMS Collaboration), “Production Rate Measurement of Tritium and Other Cosmogenic Isotopes in Germanium with CDMSlite,” Accepted for publication in Astropart. Phys., arXiv:1806.07043 .
- Krane (1988) K. S. Krane, Introductory Nuclear Physics, 2nd ed. (John Wiley & Sons, Inc, 1988) pp. 272 – 282.
- Povh et al. (2008) B. Povh, K. Rith, C. Scholz, and F. Zetsche, Particles and Nuclei: An Introduction to the Physical Concepts (Springer, 2008) pp. 271 – 279.
- Thompson et al. (2009) A. Thompson, D. Attwood, E. Gullikson, M. Howells, K.-J. Kim, J. Kirz, J. Kortright, I. Lindau, Y. Liu, P. Pianetta, A. Robinson, J. Scofield, J. Underwood, G. Williams, and H. Winick, X-Ray Data Booklet, Tech. Rep. (Lawrence Berkeley National Laboratory, Berkeley, 2009).
- Schönfeld (1998) E. Schönfeld, “Calculation of fractional electron capture probabilities,” Appl. Radiat. Isot. 49, 1353–1357 (1998).
- Brown et al. (2014) J. Brown, M. Dimmock, J. Gillam, and D. Paganin, “A low energy bound atomic electron Compton scattering model for Geant4,” Nucl. Instrum. Methods Phys. Res. B 338, 77–88 (2014).
- Ramanathan et al. (2017) K. Ramanathan, A. Kavner, A. E. Chavarria, P. Privitera, D. Amidei, T.-L. Chou, A. Matalon, R. Thomas, J. Estrada, J. Tiffenberg, and J. Molina, “Measurement of low energy ionization signals from Compton scattering in a charge-coupled device dark matter detector,” Phys. Rev. D 96, 042002 (2017).
- Barker (2017) D. Barker, “Low Energy Background Spectrum in CDMSlite,” in Proc. 38th Int. Conf. High Energy Phys., Vol. 282 (Proceedings of Science, Chicago, IL, 2017) p. 874.
- Agostinelli et al. (2003) S. Agostinelli et al. (Geant4 Collaboration), “Geant4 – a simulation toolkit,” Nucl. Instrum. Methods Phys. Res. A 506, 250–303 (2003).
- Allison et al. (2006) J. Allison et al. (Geant4 Collaboration), “Geant4 developments and applications,” IEEE Trans. Nucl. Sci. 53, 270–278 (2006).
- Allison et al. (2016) J. Allison et al. (Geant4 Collaboration), “Recent developments in Geant4,” Nucl. Instrum. Methods Phys. Res. A 835, 186–225 (2016).
- Mendenhall and Weller (2005) M. H. Mendenhall and R. A. Weller, ‘‘An algorithm for computing screened Coulomb scattering in Geant4,” Nucl. Instrum. Methods Phys. Res. B 227, 420–430 (2005).
- Agnese et al. (2014b) R. Agnese et al. (SuperCDMS Collaboration), “Search for Low-Mass Weakly Interacting Massive Particles with SuperCDMS,” Phys. Rev. Lett. 112, 241302 (2014b).
- Yellin (2002) S. Yellin, “Finding an upper limit in the presence of an unknown background,” Phys. Rev. D 66, 032005 (2002).
- Yellin (2007) S. Yellin, “Extending the optimum interval method,” arXiv:0709.2701 .
- Lewin and Smith (1996) J. Lewin and P. Smith, “Review of mathematics, numerical factors, and corrections for dark matter experiments based on elastic nuclear recoil,” Astropart. Phys. 6, 87–112 (1996).
- Helm (1956) R. H. Helm, “Inelastic and Elastic Scattering of 187-Mev Electrons from Selected Even-Even Nuclei,” Phys. Rev. 104, 1466–1475 (1956).
- Barker and Mei (2012) D. Barker and D.-M. Mei, “Germanium detector response to nuclear recoils in searching for dark matter,” Astropart. Phys. 38, 1–6 (2012).
- Aprile et al. (2011) E. Aprile et al. (XENON100 Collaboration), “Likelihood approach to the first dark matter results from XENON100,” Phys. Rev. D 84, 052003 (2011).
- Wilks (1938) S. S. Wilks, “The large-sample distribution of the likelihood ratio for testing composite hypotheses,” Ann. Math. Statist. 9, 60–62 (1938).
- Read (2002) A. L. Read, “Presentation of search results: the CL$_s$ technique,” J. Phys. G: Nucl. Part. Phys. 28, 2693 (2002).
- Birnbaum (1953) Z. W. Birnbaum, “Distribution-free tests of fit for continuous distribution functions,” Ann. Math. Statist. 24, 1–8 (1953).
- Tan et al. (2016) A. Tan et al. (PandaX-II Collaboration), “Dark matter results from first 98.7 days of data from the PandaX-II experiment,” Phys. Rev. Lett. 117, 121303 (2016).
- Amole et al. (2017) C. Amole et al. (PICO Collaboration), “Dark matter search results from the PICO-60 C$_3$F$_8$ bubble chamber,” Phys. Rev. Lett. 118, 251301 (2017).
- Angloher et al. (2016) G. Angloher et al., “Results on light dark matter particles with a low-threshold CRESST-II detector,” Eur. Phys. J. C 76, 25 (2016).
- Jiang et al. (2018) H. Jiang et al. (CDEX Collaboration), “Limits on light weakly interacting massive particles from the first data of the CDEX-10 experiment,” Phys. Rev. Lett. 120, 241301 (2018).
- Kurinsky et al. (2016) N. Kurinsky, P. Brink, R. Partridge, B. Cabrera, and M. Pyle, “SuperCDMS SNOLAB Low-Mass Detectors: Ultra-Sensitive Phonon Calorimeters for a Sub-GeV Dark Matter Search,” in Proceedings of the 38th Int. Conf. High Energy Phys. (Chicago, IL, 2016) PoS(ICHEP2016)1116, arXiv:1611.04083 .
- Agnese et al. (2017) R. Agnese et al. (SuperCDMS Collaboration), “Projected sensitivity of the SuperCDMS SNOLAB experiment,” Phys. Rev. D 95, 082002 (2017).