Optimized Blind Gamma-ray Pulsar Searches at Fixed Computing Budget
The sensitivity of blind gamma-ray pulsar searches in multiple years' worth of photon data, such as from the Fermi LAT, is primarily limited by the finite computational resources available. Addressing this “needle in a haystack” problem, we here present methods for optimizing blind searches to achieve the highest sensitivity at fixed computing cost. For both coherent and semicoherent methods, we consider their statistical properties and study their search sensitivity under computational constraints. The results validate a multistage strategy, in which the first stage scans the entire parameter space using an efficient semicoherent method and promising candidates are then refined through a fully coherent analysis. We also find that for the first stage of a blind search, incoherent harmonic summing of powers is not worthwhile at fixed computing cost for typical gamma-ray pulsars. Further enhancing sensitivity, we present efficiency-improved interpolation techniques for the semicoherent search stage. Via realistic simulations we demonstrate that, overall, these optimizations can significantly lower the minimum detectable pulsed fraction at almost the same computational expense.
Subject headings: gamma rays: general – methods: data analysis – methods: statistical – pulsars: general
The Fermi Large Area Telescope (LAT; Atwood et al., 2009) has an unprecedented sensitivity to detect the periodic gamma-ray emission from spinning neutron stars. Owing to the LAT, the number of detected gamma-ray pulsars has vastly increased from a handful to about 150 (for a recent review see e.g., Caraveo, 2013), making these objects a dominant Galactic source class at GeV energies.
So far, the largest fraction of LAT-detected gamma-ray pulsars has been uncovered indirectly (Abdo et al., 2013). In this approach, pulsar ephemerides known from previous radio observations are used to assign rotational phases to the gamma-ray photons, which are then tested for pulsations. Dedicated radio searches at positions of unidentified gamma-ray sources in the Fermi-LAT Second Source Catalog (2FGL; Nolan et al., 2012) have been particularly successful in discovering many new radio pulsars, and have provided ephemerides for subsequent gamma-ray phase-folding (e.g., Ransom et al., 2011; Guillemot et al., 2012; Abdo et al., 2013).
The direct detection of new gamma-ray pulsars, which are not known beforehand from other wavelengths, requires blind searches for periodicity in the sparse gamma-ray photon data (e.g., Chandler et al., 2001). With the Fermi-LAT, for the first time such blind searches have been successful (Abdo et al., 2009). Notably, many of the gamma-ray pulsars found this way have so far remained undetected at radio wavelengths (Abdo et al., 2013), implying that blind searches are the only way to access this pulsar population. Currently, hundreds of Fermi-LAT sources still remain unidentified, but feature pulsar-like properties (Ackermann et al., 2012; Lee et al., 2012) and thus likely harbor undiscovered pulsars.
The key problem in blind searches for gamma-ray pulsars is the enormous computational demand involved, which is what limits the search sensitivity. Since the relevant pulsar parameters are unknown in advance, one has to search a dense grid covering a multidimensional parameter space. The number of search grid points increases rapidly with longer observation times. For observations spanning multiple years, “brute-force” (most sensitive but most expensive) methods, which involve fully coherently tracking the pulsar rotational phase over the entire observational data time span, are unfeasible. Therefore, the efficiency of blind-search methods is crucial, because optimal strategies are those that provide the best search sensitivity at fixed computing cost. This is the main theme of this work.
The problem is generally best addressed by a multistage search scheme (e.g., Meinshausen et al., 2009). This also applies to blind searches for gravitational-wave pulsars, i.e. spinning neutron stars emitting periodic gravitational waves (Brady et al., 1998; Brady & Creighton, 2000; Cutler et al., 2005; Prix & Shaltev, 2012). The basic idea is that in a first stage the entire search parameter space is scanned at much lower resolution, and therefore at much lower computing cost, which most efficiently discards unpromising regions. This reduction in parameter resolution is accomplished by semicoherent methods, in which only time intervals of data much shorter than one year are coherently analyzed, whose results are then incoherently summed over multiple years. In subsequent stages, only small promising regions (i.e. pulsar candidates) are followed up with higher resolution at higher computational expense, by using longer coherent integration times.
One semicoherent method appropriate for the first search stage in gamma-ray pulsar searches is the seminal “time differencing technique” by Atwood et al. (2006, hereafter A06). It can basically be seen as the application of the classic Blackman–Tukey method (Blackman & Tukey, 1958) to gamma-ray data: to search along the frequency dimension (estimating the power spectrum), A06 calculated the discrete Fourier transform (DFT) of the autocorrelation function between photon arrival times up to a maximum lag. This significantly improved the efficiency over earlier methods (e.g., Brady & Creighton, 2000; Chandler et al., 2001, summing power of many DFTs from subintervals), because the autocorrelation function can be computed at negligible cost thanks to the sparsity of the photon arrival times. The success of the A06 method was spectacularly demonstrated by the blind-search discovery of gamma-ray pulsars (Abdo et al., 2009; Saz Parkinson et al., 2010) within the first Fermi mission year.
Using further improved methods, in part originally developed for blind searches for gravitational-wave pulsars (Pletsch & Allen, 2009; Pletsch, 2010), analyzing about three years of LAT data revealed new gamma-ray pulsars (Pletsch et al., 2012b, c). Crucial methodological improvements included the use of an analytic parameter-space metric to construct the grid over both sky position and frequency derivative. This allowed pulsars to be found that are much farther from the LAT catalog sky position than was possible previously. In addition, a photon weighting scheme (first studied by Kerr, 2011) was used both for photon selection and for the search computations to ensure near optimal detection significance. For enlarged computational resources we have recently moved this ongoing search effort onto the volunteer computing system Einstein@Home.
The paper is organized as follows. In Section 2, we describe the statistical detection of pulsations in general. In Section 3, we discuss the statistical properties of coherent blind searches and study their computational cost scalings using the parameter-space metric. We also investigate the efficiency of harmonic summing for different pulse profiles. In Section 4, we describe the statistical properties of a semicoherent blind-search method and compare the respective computing demand using the semicoherent metric. Section 5 presents a collection of technical improvements for the implementation of the semicoherent search stage, including efficient interpolation methods and automated candidate follow-up procedures. We demonstrate the improvement gained from combining these advances through realistic simulations in Section 6. Finally, conclusions follow in Section 7.
2. Statistical detection of pulsations
In blind pulsar searches the pulse profile (the periodic light curve) and the exact parameters describing the rotational evolution of the neutron star are unknown in advance. As Bickel et al. (2008) have pointed out, unless the pulse profile shape is precisely known, there is no universally optimal statistical test, because any most powerful test for one template profile will not be most powerful against another. Any test can only be most sensitive to a finite-dimensional class of targets. Thus, for computational feasibility of a blind search, an efficient (potentially suboptimal) template pulse profile to test against should incur only a modest reduction in detection sensitivity compared to an optimal template. The construction of such a test can be guided by the profiles of known gamma-ray pulsars, which we will consider below.
For isolated pulsars the search parameter space describing the rotational phase of the neutron star is at least four-dimensional, consisting of the spin frequency, the spin-down rate, and the sky position given by right ascension and declination. To the LAT-registered arrival times, sky-position-dependent corrections (“barycentric corrections”) are applied in order to obtain the photon arrival times at the solar system barycenter (SSB). The rotational phase is then described by
where the frequency and spin-down rate are defined at a reference time, at which the phase equals a constant reference phase.
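The display equation itself is missing from this extract; for a standard isolated-pulsar timing model truncated at the first frequency derivative, it takes the form (a reconstruction consistent with the surrounding definitions, not necessarily the paper's verbatim equation):

```latex
\Phi(t) = \Phi_0 + 2\pi \left[ f\,(t - t_0) + \frac{\dot{f}}{2}\,(t - t_0)^2 \right],
```

where $t$ is the barycentered photon arrival time, $f$ and $\dot{f}$ are the frequency and spin-down rate at the reference time $t_0$, and $\Phi_0$ is the constant reference phase.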
Apart from the arrival time, the LAT also records each detected gamma-ray photon’s reconstructed energy and direction. From these a weight can be computed for each photon, measuring the probability that it originated from the target source (Bickel et al., 2008; Kerr, 2011). Using these probability weights efficiently avoids testing different hard selection cuts on energy and direction (which correspond to binary weights), providing near optimal pulsation detection sensitivity (Kerr, 2011; Pletsch et al., 2012b).
The observed gamma-ray pulse profile, i.e. the flux as a function of rotational phase, can be written as
where the pulsed fraction is estimated by the number of pulsed gamma-ray photons divided by the total number of photons. The remaining factor represents the pulse profile (undisturbed by background) and is a probability density function over one rotation, which can be expressed as a Fourier series
with complex Fourier coefficients, defined at each harmonic order as
Hence the total flux can be rewritten as
If the pulse profile is an exact sinusoid, then from Equation (4) it follows that only the first Fourier coefficient is nonzero and all higher-order coefficients vanish. As another example, if the pulse profile is a Dirac delta function, i.e. the narrowest possible profile, then all coefficients have equal magnitude, implying equal Fourier power at all harmonic orders.
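These two limiting cases can be checked numerically. The sketch below (function names are our own, not from the paper) estimates the Fourier-coefficient magnitudes of a pulse-profile density on [0, 1): a sinusoidal profile concentrates all power in the first harmonic, while a very narrow Gaussian (approximating a Dirac delta) has nearly equal power at all low harmonic orders.

```python
import numpy as np

def fourier_coeffs(profile_pdf, n_max, n_samples=4096):
    """Complex Fourier coefficients of a pulse-profile density on [0, 1)."""
    phi = np.arange(n_samples) / n_samples
    g = profile_pdf(phi)
    g = g / np.mean(g)  # normalize to a unit-mean density
    return np.array([np.mean(g * np.exp(-2j * np.pi * n * phi))
                     for n in range(1, n_max + 1)])

# Sinusoidal profile: all power in the first harmonic.
sine = fourier_coeffs(lambda phi: 1.0 + np.cos(2 * np.pi * phi), 5)

# Narrow (delta-like) Gaussian profile: nearly equal power at all
# low harmonic orders.
narrow = fourier_coeffs(
    lambda phi: np.exp(-0.5 * ((phi - 0.5) / 0.01) ** 2), 5)
```

For the sinusoid the first-harmonic coefficient has magnitude 1/2 and all others vanish, while the narrow Gaussian retains nearly unit magnitude at every harmonic shown.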
In general, the null hypothesis is that the pulsed fraction vanishes, meaning that all photon phases are uniformly distributed (i.e. no pulsations). From the likelihood for the photon arrival times, Bickel et al. (2008) derived a score test statistic for this null hypothesis,
where we defined the normalization constant (different from Bickel et al., 2008) as
and is given by
with the time-dependent part of the phase and the normalization constant defined as
Thus, we define the coherent Fourier power at the nth harmonic order,
Appealing to the central limit theorem (since the number of photons is large in all practical cases), the normalization choice of Equation (9) has the convenient property that the real and imaginary parts of the Fourier-coefficient estimates become independent Gaussian random variables with zero mean and unit variance under the null hypothesis. Therefore, to good approximation each harmonic power is χ²-distributed with 2 degrees of freedom, as will be discussed below. Thus, the test statistic is a weighted sum of coherent Fourier powers,
Therefore, as noted by Bickel et al. (2008), the test statistic is invariant under phase shifts (i.e. independent of the reference phase) and depends only on the amplitudes of the Fourier coefficients, not on their phases. Moreover, Beran (1969) showed earlier that if the pulse profile is known a priori, the corresponding test statistic with binary weights is locally most powerful for testing uniformity of a circular distribution, assuming an unknown and weak (small pulsed fraction) signal strength.
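As an illustration of these properties, the following sketch (our own minimal implementation, with the normalization chosen so that each power is approximately χ²-distributed with 2 degrees of freedom under the null hypothesis) computes the weighted coherent Fourier power at a given harmonic from photon phases and weights:

```python
import numpy as np

def coherent_power(phases, weights, n):
    """Weighted coherent Fourier power at harmonic n; the normalization
    kappa makes the power ~chi^2 with 2 dof under the null hypothesis."""
    kappa = np.sum(weights ** 2) / 2.0
    z = np.sum(weights * np.exp(-2j * np.pi * n * phases))
    return np.abs(z) ** 2 / kappa

rng = np.random.default_rng(0)

# Null case: uniform phases -> the mean power approaches 2 (chi^2, 2 dof).
null_mean = np.mean([coherent_power(rng.random(200), rng.random(200), 1)
                     for _ in range(500)])

# Pulsed case: phases concentrated near zero -> large first-harmonic power.
pulsed = coherent_power((0.05 * rng.standard_normal(200)) % 1.0,
                        np.ones(200), 1)
```

Note the phase invariance: shifting all phases by a constant leaves the returned power unchanged, consistent with the statement above.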
3. Coherent Test Statistics
In what follows, we examine the sensitivity of coherent blind searches at fixed computational cost, taking into account the statistical properties and sensitivity scalings in terms of relevant quantities. For simplicity, during the remainder of this section we assume hard photon selection cuts, i.e., binary weights only, such that the test statistic reduces to
However, the main conclusions obtained will also apply when arbitrary (i.e., non-binary) weights are used.
3.1. Statistical Properties
Suppose the photon data contain a pulsed signal whose pulse profile can be expressed in terms of complex Fourier coefficients, as in Equation (4). In this case, we show in Appendix A that for moderately strong pulsed signals the distribution of the test statistic can be well approximated by a noncentral χ² distribution (Groth, 1975; Guidorzi, 2011) with 2n degrees of freedom for n summed harmonics. Thus, in the perfect-match case (the pulsar frequency, spin-down rate, and sky position are precisely known), the first two moments are approximately given by
where a fraction of the photons, given by the pulsed fraction, is assumed to be “pulsed” and accordingly the remaining photons are “non-pulsed” (i.e., background). Thus, the second summand in Equation (14a) represents the noncentrality. Therefore, by comparison to Equation (14a), the noncentrality parameter can be read off directly.
A similar calculation for the variance, based on the above relations, shows that in the absence of a signal,
and in the presence of a signal, one obtains
Thus, the amplitude S/N for the test statistic can be expressed as
A similar expression has been derived by Bickel et al. (2008), who used this parameter as an approximate measure of the sensitivity of the test statistic, since the larger the S/N the higher the probability of detection. However, it is only an approximate sensitivity measure, because any meaningful sensitivity comparison must be done at fixed probability of false alarm, as will be described below. Equation (18) also shows that the S/N is maximized when the template pulse profile perfectly matches the signal pulse profile. However, as Bickel et al. (2008) correctly note, practical blind searches can only test a finite-dimensional class of template pulse profiles.
A particularly simple template profile for a given number of harmonics is
With this choice, the test statistic measures the coherent Fourier power summed over the first n harmonics, which we therefore refer to as incoherent harmonic summing. The resulting statistic is also known as Z²_n (Buccheri et al., 1983),
Maximizing over different values of n also recovers the widely used H-test by de Jager et al. (1989).
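For binary weights these two statistics are straightforward to implement; the sketch below is a minimal version (our own code, not the paper's), with the H-test maximizing over up to 20 harmonics as in de Jager et al. (1989):

```python
import numpy as np

def z2n(phases, n_harm):
    """Z^2_n statistic: coherent Fourier power summed over the first
    n_harm harmonics (Buccheri et al. 1983), for unit photon weights."""
    n_phot = len(phases)
    total = 0.0
    for n in range(1, n_harm + 1):
        z = np.sum(np.exp(-2j * np.pi * n * phases))
        total += 2.0 * np.abs(z) ** 2 / n_phot
    return total

def h_test(phases, n_max=20):
    """H-test of de Jager et al. (1989): maximize Z^2_n - 4n + 4 over n."""
    return max(z2n(phases, n) - 4.0 * n + 4.0 for n in range(1, n_max + 1))
```

For perfectly uniform phases both statistics vanish, while for a strongly pulsed (delta-like) phase set the H-test picks the highest number of harmonics, reflecting the equal Fourier power at all orders discussed above.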
The template of Equation (19) has the additional benefit that the statistical distribution of the resulting statistic is known analytically. Therefore, we use this to obtain realistic sensitivity scalings for such coherent test statistics. Since the noncentrality parameter takes a simple form for this template, the S/N is correspondingly written as
In the Neyman–Pearson sense, we define search sensitivity through the lowest threshold pulsed fraction required to achieve a certain detection probability for a given number of photons at a given false alarm probability. The false alarm probability is computed as
where the integrand denotes the probability density function of the χ²-distributed test statistic with zero noncentrality. The probability of detection for a given noncentrality parameter is
The minimum detectable pulsed-fraction threshold for summing coherent power from n harmonics is obtained by first inverting Equation (24) to get the threshold test-statistic value, which in a second step is substituted into Equation (25) to numerically find the required threshold S/N:
Finally, Equation (23) can be used to convert the threshold S/N into the minimum detectable pulsed fraction, which defines the coherent search sensitivity as
Assuming the overall photon count rate is constant throughout the entire coherent integration time, the search sensitivity increases with the well-known square-root scaling in integration time,
Thus, we have obtained an expression for the search sensitivity, separating the two effects of photon count rate (or integration time) and pulse profile shape. Regarding the latter effect, Equation (28) reveals that the sensitivity only improves when including higher harmonics (i.e. increasing n) if the pulse profile shape is such that the accumulated harmonic power increases more quickly than the “statistical penalty” factor. While this is true for the narrowest possible pulse profile (a Dirac delta function), we show below that the same does not hold in general for typical gamma-ray pulsar profiles.
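The two-step inversion described above can be sketched numerically without special functions: for 2n degrees of freedom the central χ² survival function has a closed form, and the detection probability under the noncentral χ² distribution can be estimated by Monte Carlo (our own illustrative code; the parameter values are arbitrary examples, not the paper's choices):

```python
import numpy as np
from math import exp, factorial

def chi2_sf_2n(x, n):
    """Survival function of a central chi^2 with 2n degrees of freedom
    (closed form for even dof): P(chi^2_{2n} > x)."""
    return exp(-x / 2.0) * sum((x / 2.0) ** k / factorial(k)
                               for k in range(n))

def threshold_statistic(p_fa, n):
    """Invert the false-alarm probability for the threshold by bisection."""
    lo, hi = 0.0, 1e4
    while hi - lo > 1e-9:
        mid = 0.5 * (lo + hi)
        if chi2_sf_2n(mid, n) > p_fa:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def detection_prob(noncentrality, threshold, n, trials=200000, seed=1):
    """Monte Carlo estimate of P(noncentral chi^2_{2n} > threshold)."""
    rng = np.random.default_rng(seed)
    # Noncentral chi^2: sum of 2n squared Gaussians, one with shifted mean.
    g = rng.standard_normal((trials, 2 * n))
    g[:, 0] += np.sqrt(noncentrality)
    return np.mean(np.sum(g ** 2, axis=1) > threshold)
```

The remaining step, converting the threshold S/N into a pulsed-fraction threshold via Equation (23), is a simple algebraic inversion once the observation parameters are fixed.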
3.2. Effects of Pulse Profile on Sensitivity
From Equation (28) in the previous section, we have seen how the sensitivity for pulsation detection depends on the shape of the pulse profile, represented by the Fourier coefficients . Therefore, it is instructive to examine the change in sensitivity as a function of the number of harmonics for some exemplary profiles. Thus, we consider the following ratio,
which compares, in the statistical sense, the search sensitivity of including n harmonics with that of using the fundamental harmonic only (in the absence of any computational constraints).
In the ideal case, where all harmonics have equal power, the pulse profile is a Dirac delta function as described above. In this case the sensitivity is a monotonically increasing function of the number of summed harmonics at fixed detection probability and fixed false alarm probability. To illustrate this, consider an example with representative values for these two probabilities. Then, to good approximation, the corresponding S/N threshold can be described by
Hence, with increasing n, the threshold S/N decreases and approaches a constant in the limit of large n, in which case the statistical penalty factor grows only slowly with n. Since this growth is slower than that of the pulse-profile factor in this case, the sensitivity increases monotonically with n. This is also shown in Figure 1, using the exact threshold values that we calculated numerically.
To obtain a more realistic signal pulse-profile model, we considered those of the known gamma-ray pulsars. We carried out a harmonic analysis of the pulse profile shapes of the known gamma-ray pulsars listed in the second Fermi LAT pulsar catalog (Abdo et al., 2013) and computed their Fourier coefficients. These are shown in Figure 2 (top panel) and illustrate that for most of the known gamma-ray pulsars the largest fraction of Fourier power is typically in a single harmonic, either the first (mostly single-peaked profiles) or the second (mostly two-peaked profiles). Therefore, before computing an average profile (by averaging the coefficients), it makes sense to divide the pulsars into these two groups (based on which of the two harmonics dominates). These results, separately for each group, are displayed in the two bottom panels of Figure 2.
We use the resulting two sets of coefficients to calculate the sensitivity scaling with the number of harmonics from Equation (28), as also shown in Figure 1. Notice that for the typical pulse profiles, in contrast to the Dirac delta pulse profile, the sensitivity starts to decrease when summing more than a certain number of harmonics (at fixed detection and false alarm probabilities). This is because the Fourier powers at the higher harmonics become vanishingly small and thus effectively only contribute “noise” when summed (i.e. the statistical penalty factor cannot be overcome anymore).
These results also illustrate the success of the H-test for targeted pulsation searches in gamma-ray data with known pulsar ephemerides, because this test maximizes the summed Fourier power over the number of harmonics included. Maximizing over only fewer harmonics could likely already be sufficient (or even be more sensitive due to the reduced trials factor) in most cases, as suggested by Figure 1. Besides, further improvements over the H-test could also be achieved by employing one or more template profiles that are more representative of the typical gamma-ray profile (than the delta function) to compute the test statistic. Using the average profile of the known pulsars from above seems the simplest first step. While also conducting a principal component analysis appears worthwhile, we defer a detailed study of this to future work.
So far, we have not considered the computational costs involved, which is only justifiable for computationally inexpensive targeted searches. In contrast, blind searches are limited by computational power. Therefore, in the following section, we will revisit the efficiency of harmonic summing under the constraint of a fixed computational cost.
3.3. Grid-point Counting for Coherent Search
In blind searches, the pulsar’s rotational and positional parameters are unknown a priori. Therefore, one has to construct a grid in the multidimensional search parameter space that is explicitly searched, i.e., the test statistic is to be computed at each grid point. Thus the question arises: What is the most efficient scheme for constructing the search grid? If grid points are placed too far apart potential pulsar signals might be missed. On the other hand, it is highly inefficient to place grid points too closely together, because of redundancy resulting from strongly correlated nearby grid points. The problem of constructing efficient search grids has been intensively studied in the context of gravitational-wave searches (see, e.g., Brady et al., 1998; Brady & Creighton, 2000; Prix, 2007; Pletsch & Allen, 2009; Pletsch, 2010) and we employ some of these concepts here.
The key element is a distance metric on the search space (Balasubramanian et al., 1996; Owen, 1996). The metric provides an analytic geometric tool measuring the expected fractional loss in squared S/N for any given pulsar-signal location at a nearby grid point.
Let a vector collect the actual pulsar signal parameters. In a blind search for isolated pulsars, this vector is at least four-dimensional. For simplicity, we begin by considering the metric at the fundamental harmonic (n = 1). As will be shown below, it is subsequently straightforward to generalize the results to higher harmonic orders. Following Equation (15), consider the S/N for the perfect-match case, i.e., at the signal parameter-space location. In a blind search the signal parameters generally will not coincide with a grid point, but will typically have some offset,
These offsets lead to a (time-dependent) residual phase, and therefore to a fractional loss in squared S/N, which is commonly referred to as the mismatch,
The metric is obtained from a Taylor expansion of the mismatch to second order in the offsets at the signal location ,
This equation defines a positive definite metric tensor whose components are indexed by the search parameters. In Appendix B, we derive explicit expressions for the coherent metric for a simplified phase model that is appropriate for the purpose of grid construction. We also find that the resulting metric tensor is diagonal, which greatly simplifies the grid construction. The results of this derivation will therefore be used in what follows.
As noted by Prix & Shaltev (2012), the probability distribution of signal mismatches in a given search grid constructed with a certain maximal mismatch depends on the structure and dimensionality of the search parameter space. The corresponding average mismatch in each dimension will generally be smaller by a characteristic geometric factor, depending on the actual search-grid construction. For example, for hyper-cubical lattices, this factor is known to be 1/3. In order to construct a hyper-cubical grid in which the maximum mismatch due to an offset in each parameter is fixed, the grid point spacing in each parameter should be,
The four-dimensional parameter space to be searched is spanned by the frequency, the spin-down rate, and the two sky coordinates. Thus, when searching for pulsars with spin frequencies and spin-down rates in given ranges, and whose sky location is confined by the LAT to a region of given area, the proper volume can be written as
In principle, the metric coefficients (and hence also the grid point spacings) can vary throughout the parameter space. Indeed, for the metrics considered in this work, the grid point spacing in the sky dimensions depends on the spin frequency of the pulsar. In order to avoid having to construct a separate sky grid for each search frequency value, we adopt the conservative approach of using the highest frequency searched for the sky grid construction. The metric (and hence also the grid point spacing) then becomes uniform throughout the parameter space. The total number of search-grid points for a coherent blind search is therefore simply the product of the number of grid points in each dimension, as the metric is found to be diagonal. In Appendix B we derive that
where we defined,
and where we have denoted the Earth’s orbital angular frequency and the light travel time from the Earth to the SSB, which is approximately 500 s.
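For a diagonal metric the grid-spacing rule above is simple to apply. The sketch below uses the textbook coherent frequency metric component, (πT)²/3 for coherent integration time T, as a stand-in (this particular form is an assumption for illustration; the paper's Appendix B derives the full expressions including the sky components):

```python
import numpy as np

def grid_spacing(metric_diag, m_star):
    """Per-dimension spacing of a hyper-cubic grid with maximum
    per-dimension mismatch m_star, for a diagonal metric tensor."""
    return 2.0 * np.sqrt(m_star / np.asarray(metric_diag))

# Illustrative frequency metric component g_ff = (pi * T)^2 / 3; this
# specific form is an assumption here, not taken from the paper.
T = 30 * 86400.0                      # 30-day coherent integration (s)
g_ff = (np.pi * T) ** 2 / 3.0
df = grid_spacing([g_ff], 0.1)[0]     # frequency spacing for mismatch 0.1
```

Doubling the coherent integration time quadruples this metric component and thus halves the frequency spacing, i.e. doubles the number of frequency grid points, consistent with the cost scalings discussed in the text.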
To analytically study the scaling of the number of grid points as a function of the coherent integration time, the function above can be well approximated by
The validity of this approximation is illustrated in Figure 3.
Hence, the total number of grid points required in a coherent search is
Equation (40) tells us that for coherent integration times much shorter than half a year the sky metric components also still scale with the coherent integration time, such that the total number of grid points increases steeply with it. After half a year of coherent integration the sky metric components quickly approach saturation as the maximum baseline (1 AU) is reached, and thereafter become approximately independent of the integration time. The total number of grid points therefore scales only as the third power of the integration time in this regime (one power from the frequency dimension and two from the spin-down dimension).
3.4. Coherent Search Sensitivity at Fixed Computing Cost
For computational efficiency, we use the fast Fourier transform (FFT) algorithm (Frigo & Johnson, 2005) to scan the frequency dimension. There are two steps involved in calculating an FFT, each with an associated computational cost. Firstly, it is necessary to construct a discrete time series by interpolating (e.g. by binning) the photon arrival times into equidistant samples. The cost of this step is proportional to the number of photon arrival times which must be interpolated. Secondly, the discrete time series must be transformed into a discretely sampled frequency spectrum, using the FFT algorithm. The number of frequency samples is set by the maximum frequency and the coherent integration time, and the computational cost of calculating the FFT is proportional to this number times its logarithm. We assume that the cost of calculating the FFT is much larger than the cost of creating the discrete time series. Compared to the cost of computing the test statistic explicitly for all photon times at all frequency grid points, which is proportional to the product of their numbers, it is clear that the FFT method is more efficient provided the number of photons is large compared to the logarithm of the number of frequency samples.
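A minimal sketch of this FFT-based frequency scan (our own illustrative implementation, using nearest-bin interpolation of the weighted photon arrivals; all names and parameter values are our own):

```python
import numpy as np

def fft_power_spectrum(arrival_times, weights, t_span, f_max):
    """Scan the frequency dimension by binning weighted photon arrival
    times into an equidistant series and applying a real FFT."""
    n_bins = int(2 * f_max * t_span)          # Nyquist-rate sampling
    series = np.zeros(n_bins)
    idx = (arrival_times / t_span * n_bins).astype(int) % n_bins
    np.add.at(series, idx, weights)           # nearest-bin interpolation
    spectrum = np.fft.rfft(series)
    freqs = np.fft.rfftfreq(n_bins, d=t_span / n_bins)
    kappa = np.sum(weights ** 2) / 2.0
    return freqs, np.abs(spectrum) ** 2 / kappa

# A sparse pulsed photon set at 2 Hz over a 1000 s span: arrivals cluster
# near integer multiples of the 0.5 s period, with small timing jitter.
rng = np.random.default_rng(2)
t = (rng.integers(0, 2000, size=500) * 0.5
     + 0.02 * rng.standard_normal(500)) % 1000.0
freqs, power = fft_power_spectrum(t, np.ones_like(t), 1000.0, 8.0)
f_peak = freqs[1:][np.argmax(power[1:])]      # ignore the DC bin
```

The nearest-bin interpolation used here illustrates the mismatch issue discussed next: binning quantizes the arrival times and attenuates power increasingly toward the Nyquist frequency, which motivates the improved interpolation techniques of Section 5.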
The spacing of frequency samples output by the FFT is the inverse of the coherent integration time. According to the metric [see Equation (B11a)] this implies a large worst-case mismatch due to frequency offsets, which obviously also leads to a high average mismatch. However, as we will discuss in Section 5.2, it is possible to reduce this mismatch at almost no extra computational cost by interpolating the frequency spectrum. In the following derivations, we therefore separate the total mismatch into two components: a constant mismatch due to the frequency spacing, determined by the interpolation method used, which has a negligible effect on the overall computing cost; and the mismatch due to offsets in the remaining parameters, which can be freely varied to construct an optimal grid.
For every grid point in the remaining search dimensions an FFT must be computed, and hence the overall computation time for the search is simply the cost of calculating one FFT multiplied by the number of FFTs that must be computed. The total cost (measured in units of time) is
where the prefactor is an implementation- and computing-hardware-dependent constant, and where the number of frequency samples is that which would be calculated using a grid with an arbitrary maximum mismatch per dimension,
The total computational cost is therefore
where the constant prefactor depends on the maximum mismatch,
For a search grid constructed with maximum mismatch , the search sensitivity will scale with the average mismatch as (Prix & Shaltev, 2012). Thus, from Equation (28) it follows that the search sensitivity without harmonic summing scales as
For a given computing cost, Equation (44) can be used to obtain (numerically) the maximum coherent integration time. Substituting this value into Equation (46) finally yields the search sensitivity at the given computational cost.
3.5. Efficiency of Harmonic Summing at Fixed Computing Cost
Based on the results of the previous sections, we now investigate the efficiency of incoherent harmonic summing under computational cost constraints. More precisely, we address the question of whether it is more efficient in blind searches to sum harmonics, or to instead use a longer coherent integration time without harmonic summing at the same computing cost.
Thus, we consider the test statistic that incoherently sums Fourier powers from higher harmonics. In Appendix C we derive the parameter-space metric for this statistic and find that it is related to the metric of Equation (37) by a refinement factor due to harmonic summing. Therefore, to ensure equal sensitivity throughout the original parameter space, the search grid must be refined by this factor.
In practice, to utilize the efficiency of the FFT, it would be necessary to construct a suboptimal grid in which the ranges in frequency and spin-down rate are extended by the factor of the highest harmonic summed, and the coherent powers summed appropriately over harmonics. The sky grid in this case may still be constructed using the refinement factor, leading to a correspondingly higher computing cost at the same coherent integration time. While this method may quickly become infeasible due to the amount of memory required, we use it only as a theoretically efficient method to compare against an equally costly search using only the fundamental harmonic power. We here assume that the small extra cost of actually summing the harmonic powers is negligible.
From Equation (27) above, we found that the search sensitivity of incoherent harmonic summing is given by
Hence, to compare the two search sensitivities at fixed computing cost, in principle the following steps are required. First, for a given computing cost, Equations (44) and (46) provide the corresponding coherence time and sensitivity, respectively. Second, by equating the computing costs, Equation (47) can then be solved (numerically) for the coherence time with harmonic summing, which finally is used to obtain the sensitivity from Equation (48). It should be noted that the comparison must assume the same detection and false alarm probabilities in both cases. We here also assume the same mismatch in either case because, as shown in Appendix E, the optimal mismatch at fixed computing cost is independent of coherent integration time, number of harmonics summed, and computing power available. Notably, a similar result has been found previously by Prix & Shaltev (2012) in the context of gravitational-wave pulsar searches.
In the following, we describe an analytical approximation to the numerical approach above which we show to be sufficiently accurate for typical search setups. This approximation is based on ignoring the slowly varying factors in Equations (44) and (47), such that
Then, from equating the computing costs, it immediately follows that the coherent integration time with harmonic summing must be shorter by a corresponding factor,
We show in Appendix D that the coherence time obtained from this approximation slightly overestimates the sensitivity of harmonic summing, while being sufficiently accurate for typical search setups. Using Equation (50) to substitute into Equation (48), one obtains for the ratio of search sensitivities,
which remarkably is independent of the available computing power. This sensitivity ratio of Equation (51) is shown in Figure 4 and is found to be greater than unity for typical gamma-ray pulsars. Only for unrealistically narrow pulse profiles (i.e. approaching a Dirac delta function) can the sensitivity ratio remain close to or slightly below unity. It should also be pointed out that we obtained these results despite generous assumptions in favor of the harmonic summing approach. First, we ignored the extra cost of summing the power values. Second, we neglected the possible extra trials when maximizing the test statistic over different numbers of harmonics. Third, the analytical approximation of Equation (50) overestimates the true coherent integration time (and hence the sensitivity), as we show by numerical evaluation in Figure 11.
Hence the basic moral is clear: For blind searches for isolated gamma-ray pulsars, whose sensitivity is limited by computing power rather than the amount of available data, a more sensitive search strategy is to employ a longer coherence time instead of using incoherent harmonic summing at the same computational cost.
4. Semicoherent test statistics
The key property of the semicoherent test statistics is that only pairs of photon arrival times whose separation, also called the lag, is at most the coherence window size (much shorter than the total observational data time span) are combined coherently; all other pairs are combined incoherently. We denote the ratio of the total observational data time span of the semicoherent search to the coherence window size,
Compared to fully coherent methods, this semicoherent approach drastically reduces the computing cost, since fewer search grid points are required (owing to the lower parameter-space resolution, as described in Section 4.2), at the expense of reduced search sensitivity. In Section 4.3 we argue that this tradeoff is profitable: at fixed computing cost, the overall sensitivity of semicoherent searches exceeds that of fully coherent searches that the computational constraints restrict to data spans shorter than .
To derive a semicoherent test statistic, notice that the (unnormalized) coherent Fourier power from Equation (10) for the fundamental frequency (first harmonic) can also be written in the following form,
Thus, the semicoherent statistic is formed by multiplying the terms in the above double sum with a real lag window , such that
where the lag window has an effective size ,
and thus must fall off rapidly outside the interval . Blackman & Tukey (1958) were the first to consider power spectral estimators of the form of , which can be seen as the Fourier transform of the lag-windowed covariance sequence (Stoica & Moses, 2005). The semicoherent statistic is a more general version of the classic Blackman-Tukey method (Blackman & Tukey, 1958) of spectral analysis, to which it reduces for a phase model containing a frequency term only. Hence, can also be seen as a local spectral average of values over neighboring frequencies, weighted according to the frequency response of (Stoica & Moses, 2005).
As outlined by Pletsch et al. (2012b), for special forms of the lag window, can also be obtained by summing time-windowed coherent power from overlapping subsets of the data. In this case the lag window must always be positive semidefinite, because it is formed by the convolution of the time window with itself (Stoica & Moses, 2005), whereas the more general form of Equation (54) in principle permits arbitrary lag windows.
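To make the structure of the lag-windowed double sum concrete, the following sketch evaluates such a statistic at a single trial frequency by direct O(N²) summation; the function name, the frequency-only phase model, and unit photon weights are illustrative assumptions rather than the implementation used in this work.

```python
import numpy as np

def semicoherent_power(times, freq, lag_window):
    """Direct O(N^2) evaluation of a lag-windowed power at one trial
    frequency, for a frequency-only phase model (illustrative sketch).

    times      : photon arrival times (1-D array)
    freq       : trial frequency
    lag_window : callable mapping lags to real weights; it should fall
                 off rapidly outside the coherence window
    """
    # All pairwise lags t_j - t_k via broadcasting
    lags = times[:, None] - times[None, :]
    # Each coherent phase factor exp(2*pi*i*f*lag) is weighted by the window
    return np.sum(lag_window(lags) * np.exp(2j * np.pi * freq * lags)).real
```

Setting the lag window to unity everywhere recovers the fully coherent Fourier power, providing a simple consistency check.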
In general, the choice of the lag-window function has an impact on the sensitivity of the statistic . In tests with simulated LAT data, we found that for the purpose of pulsation detection the best sensitivity is provided by the simple rectangular lag window,
which also allows for an efficient implementation, as described in more detail in Section 5. The use of the rectangular lag window can also be motivated from the following viewpoint. Given the significant sparseness of the LAT data, typically all pairs of photon times fall at different lags (for any practical sampling time; see Section 5.1). One could therefore argue that, optimally (for minimum variance), all lags (i.e., all photon pairs) should be weighted equally when forming , which is exactly what implements. Thus, in the remainder of this manuscript we use the rectangular lag window to calculate .
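Since the rectangular lag window simply retains every photon pair whose separation lies within the coherence window, time-sorting the photons allows a sliding window to enumerate exactly the contributing pairs. The following sketch assumes binary photon weights and a frequency-only phase model; the function name and interface are illustrative.

```python
import numpy as np

def rect_semicoherent_power(times, freq, window_size):
    """Semicoherent power with a rectangular lag window (sketch).

    Only pairs of photons separated by at most window_size contribute;
    sorting the arrival times lets a sliding window visit exactly those
    pairs once each. Assumes binary photon weights.
    """
    t = np.sort(np.asarray(times, dtype=float))
    n = len(t)
    power = float(n)                 # zero-lag (diagonal) terms
    left = 0
    for j in range(n):
        while t[j] - t[left] > window_size:
            left += 1                # drop photons outside the window
        lags = t[j] - t[left:j]      # positive lags within the window
        power += 2.0 * np.cos(2.0 * np.pi * freq * lags).sum()
    return power
```

Each unordered pair inside the window is visited once and counted with weight two, reproducing the symmetric double sum while skipping all pairs beyond the coherence window.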
4.1. Statistical Properties
To examine the statistical properties of the semicoherent statistic, , it is useful to rewrite Equation (54) as
Under the null hypothesis, and assuming , we show in Appendix F that follows a normal distribution; the first two moments of this noise distribution are:
Now consider that the photon data contains a pulsed signal (i.e. ) with a pulse profile defined by Fourier coefficients . Then the expectation value of is obtained as
Thus, for we can identify the amplitude S/N as
To extract the scalings of the semicoherent S/N in terms of the relevant search parameters, we assume hard photon-selection cuts, i.e., binary photon weights, for the remainder of this section. Then Equation (57) reduces to
In this case, as derived in Appendix F, the first two moments of the noise distribution are
We show in Appendix F, that for moderately strong signals the first two moments of the distribution of are approximately given by
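The null-hypothesis behavior can be illustrated with a small Monte Carlo experiment: for uniformly distributed (pulsation-free) photon arrival times, the off-diagonal pair terms of the rectangular-window statistic average toward zero, so its expectation is dominated by the zero-lag terms, i.e., the number of photons. All parameter values below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)

def rect_power(t, freq, window_size):
    """Rectangular-lag-window statistic via a brute-force double sum."""
    lags = t[:, None] - t[None, :]
    return np.sum((np.abs(lags) <= window_size)
                  * np.cos(2.0 * np.pi * freq * lags))

# Null hypothesis: photons arrive uniformly over the observation span,
# with no pulsed signal present (illustrative parameter values).
n_photons, span, window_size, freq = 200, 100.0, 5.0, 3.0
samples = np.array([
    rect_power(np.sort(rng.uniform(0.0, span, n_photons)), freq, window_size)
    for _ in range(200)
])

# The sample mean scatters around the zero-lag contribution, n_photons.
print(samples.mean())
```

The scatter of the individual samples about this mean reflects the noise variance of the statistic, which Appendix F characterizes analytically.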