
All-sky search for periodic gravitational waves in LIGO S4 data

Abstract

We report on an all-sky search with the LIGO detectors for periodic gravitational waves in the frequency range 50-1000 Hz and with the frequency's time derivative in the range $-1\times10^{-8}$ Hz s$^{-1}$ to zero. Data from the fourth LIGO science run (S4) have been used in this search. Three different semi-coherent methods of transforming and summing strain power from Short Fourier Transforms (SFTs) of the calibrated data have been used. The first, known as "StackSlide", averages normalized power from each SFT. A "weighted Hough" scheme has also been developed and used, which allows for a multi-interferometer search. The third method, known as "PowerFlux", is a variant of the StackSlide method in which the power is weighted before summing. In both the weighted Hough and PowerFlux methods, the weights are chosen according to the noise and detector antenna pattern to maximize the signal-to-noise ratio. The respective advantages and disadvantages of these methods are discussed. Observing no evidence of periodic gravitational radiation, we report upper limits; we interpret these as limits on this radiation from isolated rotating neutron stars. The best population-based upper limit with 95% confidence on the gravitational-wave strain amplitude, found for simulated sources distributed isotropically across the sky and with isotropically distributed spin axes, is $4.28\times10^{-24}$ (near 140 Hz). Strict upper limits are also obtained for small patches on the sky for best-case and worst-case inclinations of the spin axes.

pacs:
04.80.Nn, 95.55.Ym, 97.60.Gb, 07.05.Kf

The LIGO Scientific Collaboration, http://www.ligo.org

I Introduction

We report on a search with the LIGO (Laser Interferometer Gravitational-wave Observatory) detectors (2); (3) for periodic gravitational waves in the frequency range 50-1000 Hz and with the frequency's time derivative in the range $-1\times10^{-8}$ Hz s$^{-1}$ to zero. The search is carried out over the entire sky using data from the fourth LIGO science run (S4). Isolated rotating neutron stars in our galaxy are the prime target.

Using data from earlier science runs, the LIGO Scientific Collaboration (LSC) has previously reported on searches for periodic gravitational radiation, using a long-period coherent method to target known pulsars (4); (5); (6), using a short-period coherent method to target Scorpius X-1 in selected bands and search the entire sky in the 160.0-728.8 Hz band (7), and using a long-period semi-coherent method to search the entire sky in the 200-400 Hz band (8). Einstein@Home, a distributed home computing effort running under the BOINC architecture (9), has also been searching the entire sky using a coherent first stage, followed by a simple coincidence stage (10). In comparison, this paper: 1) examines more sensitive data; 2) searches over a larger range in frequency and its derivative; and 3) uses three alternative semi-coherent methods for summing measured strain powers to detect excess power from a continuous gravitational-wave signal.

The first purpose of this paper is to present results from our search for periodic gravitational waves in the S4 data. Over the LIGO band of best sensitivity, the S4 all-sky upper limits presented here are approximately an order of magnitude better than those published previously from earlier science runs (7); (8). After following up on outliers in the data, we find that no candidates survive, and thus report upper limits. These are interpreted as limits on radiation from rotating neutron stars, which can be expressed as functions of the star's ellipticity and distance, allowing for an astrophysical interpretation. The best population-based upper limit with 95% confidence on the gravitational-wave strain amplitude, found for simulated sources distributed isotropically across the sky and with isotropically distributed spin axes, is $4.28\times10^{-24}$ (near 140 Hz). Strict upper limits are also obtained for small patches on the sky for best-case and worst-case inclinations of the spin axes.

The second purpose of this paper, along with the previous coherent (7) and semi-coherent (8) papers, is to lay the foundation for the methods that will be used in future searches. It is well known that the search for periodic gravitational waves is computationally bound; to obtain optimal results will require a hierarchical approach that uses coherent and semi-coherent stages (11); (12); (13); (14). A fifth science run (S5), which started in November 2005, is generating data at initial LIGO's design sensitivity. We plan to search these data using the best methods possible, based on what is learned from this and previous analyses.

In the three methods considered here, one searches for cumulative excess power from a hypothetical periodic gravitational wave signal by examining successive spectral estimates based on Short Fourier Transforms (SFTs) of the calibrated detector strain data channel, taking into account the Doppler modulations of detected frequency due to the Earth’s rotational and orbital motion with respect to the Solar System Barycenter (SSB), and the time derivative of the frequency intrinsic to the source. The simplest method presented, known as “StackSlide” (15); (13); (14); (16), averages normalized power from each SFT. In the Hough method reported previously (8); (11), referred to here as “standard Hough”, the sum is of binary zeroes or ones, where an SFT contributes unity if the power exceeds a normalized power threshold. In this paper a “weighted Hough” scheme, henceforth also referred to as “Hough”, has been developed and is similar to that described in Ref. (17). This scheme also allows for a multi-interferometer search. The third method, known as “PowerFlux” (18), is a variant of the StackSlide method in which the power is weighted before summing. In both the weighted Hough and PowerFlux methods, the weights are chosen according to the noise and detector antenna pattern to maximize the signal-to-noise ratio.

The Hough method is computationally faster and more robust against large transient power artifacts, but is slightly less sensitive than StackSlide for stationary data (8); (16). The PowerFlux method is found in most frequency ranges to have better detection efficiency than the StackSlide and Hough methods, the exceptions occurring in bands with large non-stationary artifacts, for which the Hough method proves more robust. However, the StackSlide and Hough methods can be made more sensitive by starting with the maximum likelihood statistic (known as the $\mathcal{F}$-statistic (19); (11); (7)) rather than SFT power as the input data, though this improvement comes with increased computational cost. The trade-offs among the methods mean that each could play a role in our future searches.

In brief, this paper makes several important contributions. It sets the best all-sky upper limits on periodic gravitational waves to date, and shows that these limits are becoming astrophysically interesting. It also introduces methods that are crucial to the development of our future searches.

This paper is organized as follows: Section II briefly describes the LIGO interferometers, focusing on improvements made for the S4 data run, and discusses the sensitivity and relevant detector artifacts. Section III precisely defines the waveforms we seek and the associated assumptions we have made. Section IV gives a detailed description of the three analysis methods used and summarizes their similarities and differences, while Section V gives the details of their implementations and the pipelines used. Section VI discusses the validation of the software and, as an end-to-end test, shows the detection of simulated pulsar signals injected into the data stream at the hardware level. Section VII describes the search results, and Section VIII compares the results from the three respective methods. Section IX concludes with a summary of the results, their astrophysical implications, and future plans.

II The LIGO Detector Network and the S4 Science Run

The LIGO detector network consists of a 4-km interferometer in Livingston, Louisiana (called L1) and two interferometers in Hanford, Washington, one 4-km and another 2-km (H1 and H2, respectively).

The data analyzed in this paper were produced during LIGO's 29.5-day fourth science run (S4) (20). This run started at noon Central Standard Time (CST) on February 22 and ended at midnight CST on March 23, 2005. During the run, all three LIGO detectors had displacement spectral amplitudes near $10^{-19}$ m Hz$^{-1/2}$ in their most sensitive frequency band near 150 Hz. In units of gravitational-wave strain amplitude, the sensitivity of H2 is roughly a factor of two worse than that of H1 and L1 over much of the search band. The typical strain sensitivities in this run were within a factor of two of the design goals. Figure 1 shows representative strain spectral noise densities for the three interferometers during the run. As discussed in Section V below, however, non-stationarity of the noise was significant.

Changes to the interferometers before the S4 run included the following improvements (20):

  • Installation of active seismic isolation of support structures at Livingston to cope with high anthropogenic ground motion in the 1-3 Hz band.

  • Thermal compensation with a CO$_2$ laser of mirrors subject to thermal lensing from the primary laser beam to a greater or lesser degree than expected.

  • Replacement of a synthesized radio frequency oscillator for phase modulation with a crystal oscillator before S4 began (H1) and mid-way through the S4 run (L1), reducing noise substantially above 1000 Hz and eliminating a comb of $\sim$37 Hz lines. (The crystal oscillator replacement for H2 occurred after the S4 run.)

  • Lower-noise mirror-actuation electronics (H1, H2, & L1).

  • Higher-bandwidth laser frequency stabilization (H1, H2, & L1) and intensity stabilization (H1 & L1).

  • Installation of radiation pressure actuation of mirrors for calibration validation (H1).

  • Commissioning of the complete alignment control system for the L1 interferometer (already implemented for H1 & H2 in the S3 run).

  • Refurbishment of lasers and installation of photodiodes and electronics to permit interferometer operation with increased laser power (H1, H2, & L1).

  • Mitigation of electromagnetic interference (H1, H2, & L1) and acoustic interference (L1).

The data were acquired and digitized at a rate of 16384 Hz. Data acquisition was periodically interrupted by disturbances such as seismic transients, reducing the net running time of the interferometers. The resulting duty factors for the interferometers were 81% for H1 and H2, and 74% for L1. While the H1 and H2 duty factors were somewhat higher than those in previous science runs, the L1 duty factor was dramatically higher than the 40% typical of the past, thanks to the increased stability from the installation of the active seismic isolation system at Livingston.

Figure 1: Median amplitude strain noise spectral densities from the three LIGO interferometers during the S4 run, along with the Initial LIGO design sensitivity goal.

III Signal Waveforms

The general form of a gravitational-wave signal is described in terms of two orthogonal transverse polarizations defined as "plus" with waveform $h_+(t)$ and "cross" with waveform $h_\times(t)$. The calibrated response seen by an interferometric gravitational-wave detector is then (19)

$h(t) = F_+(t,\alpha,\delta,\psi)\,h_+(t) + F_\times(t,\alpha,\delta,\psi)\,h_\times(t)$, (1)

where $t$ is time in the detector frame, $\alpha$ is the source right ascension, $\delta$ is the source declination, $\psi$ is the polarization angle of the wave, and $F_+$ and $F_\times$ are the detector antenna pattern functions for the two orthogonal polarizations. For periodic (nearly pure sinusoidal) gravitational waves, which in general are elliptically polarized, the individual components have the form

$h_+(t) = A_+ \cos\Phi(t)$, (2)
$h_\times(t) = A_\times \sin\Phi(t)$, (3)

where $A_+$ and $A_\times$ are the amplitudes of the two polarizations, and $\Phi(t)$ is the phase of the signal at the detector. (One can also define the initial phase of the signal, $\Phi_0$, but in this paper it can be taken to be an unknown and irrelevant constant.)

For an isolated quadrupolar gravitational-wave emitter, characterized by a rotating triaxial-ellipsoid mass distribution, the amplitudes $A_+$ and $A_\times$ are related to the inclination angle of the source, $\iota$, and the wave amplitude, $h_0$, by:

$A_+ = \frac{1}{2} h_0 (1 + \cos^2\iota)$, (4)
$A_\times = h_0 \cos\iota$, (5)

where $\iota$ is the angle of its spin axis with respect to the line of sight between source and detector. For such a star, the gravitational-wave frequency, $f$, is twice the rotation frequency, $\nu$, and the amplitude $h_0$ is given by

$h_0 = \frac{4\pi^2 G I_{zz} \epsilon f^2}{c^4 d}$. (6)

Here $d$ is the distance to the star, $I_{zz}$ is the principal moment of inertia with respect to its spin axis, and $\epsilon$ is the equatorial ellipticity of the star (19). Assuming that all of the frequency's derivative, $\dot f$, is due to emission of gravitational radiation and that $I_{zz}$ takes the canonical value $10^{38}$ kg m$^2$, we can relate $\epsilon$ to $f$ and $\dot f$ and use Eq. (6) to obtain

$h_{\rm spindown} = \frac{1}{d}\left(\frac{5 G I_{zz} |\dot f|}{2 c^3 f}\right)^{1/2}$ (7)

by eliminating $\epsilon$, or

$\epsilon_{\rm spindown} = \left(\frac{5 c^5 |\dot f|}{32 \pi^4 G I_{zz} f^5}\right)^{1/2}$ (8)

by eliminating $d$. These are referred to, respectively, as the spin-down limits on strain and ellipticity. (See Eqs. (8), (9), and (19) of (7) for more details of the derivation.)
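To make Eqs. (6)-(8) concrete, the following minimal sketch (ours, not part of the analysis pipelines; the fiducial source parameters are assumed purely for illustration) evaluates the spin-down limits for a hypothetical source:

```python
import math

G = 6.674e-11    # Newton's constant (m^3 kg^-1 s^-2)
C = 2.998e8      # speed of light (m/s)
I_ZZ = 1e38      # canonical moment of inertia (kg m^2)
KPC = 3.086e19   # one kiloparsec (m)

def h0(eps, f, d):
    """Strain amplitude of a triaxial rotator, Eq. (6)."""
    return 4.0 * math.pi**2 * G * I_ZZ * eps * f**2 / (C**4 * d)

def h_spindown(f, fdot, d):
    """Spin-down strain limit, Eq. (7)."""
    return math.sqrt(5.0 * G * I_ZZ * abs(fdot) / (2.0 * C**3 * f)) / d

def eps_spindown(f, fdot):
    """Spin-down ellipticity limit, Eq. (8)."""
    return math.sqrt(5.0 * C**5 * abs(fdot) / (32.0 * math.pi**4 * G * I_ZZ * f**5))

# Hypothetical source: 140 Hz, the largest spin-down magnitude searched,
# at a distance of 1 kpc.
f, fdot, d = 140.0, -1e-8, 1.0 * KPC
print(h_spindown(f, fdot, d))             # spin-down limit on strain
print(eps_spindown(f, fdot))              # spin-down limit on ellipticity
# Consistency check: Eq. (6) with the ellipticity limit reproduces Eq. (7),
# since both follow from the same energy-balance assumption.
print(h0(eps_spindown(f, fdot), f, d))
```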

Note that the methods used in this paper are sensitive to periodic signals from any type of isolated gravitational-wave source (e.g., freely precessing or oscillating neutron stars as well as triaxial ones), though we present upper limits in terms of $h_0$ and $\epsilon$. Because we use semi-coherent methods, only the instantaneous signal frequency in the detector reference frame, $f(t)$, needs to be calculated. In the detector reference frame this can, to a very good approximation, be related to the instantaneous SSB-frame frequency $\hat f(t)$ by (8)

$f(t) = \hat f(t)\left(1 + \frac{\vec v(t)\cdot\hat n}{c}\right)$, (9)

where $\vec v(t)$ is the detector's velocity with respect to the SSB frame, and $\hat n$ is the unit vector corresponding to the sky location of the source. In this analysis, we search for signals well described by a nominal frequency $f_0$ at the start of the S4 run, $t_0$, and a constant first time derivative $\dot f$, such that

$\hat f(t) = f_0 + \dot f\,(t - t_0)$. (10)

These equations ignore corrections to the time interval at the detector compared with that at the SSB and relativistic corrections. These corrections are negligible for the one-month semi-coherent searches described here, though the LSC Algorithm Library (LAL) code (21) used by our searches does provide routines that make all the corrections needed to provide a timing accuracy of 3 $\mu$s. (The LAL code can also calculate $f(t)$ for signals arriving from periodic sources in binary systems. Including unknown orbital parameters in the search, however, would greatly increase the computational cost or require new methods beyond the scope of this article.)
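As an illustration of Eqs. (9) and (10), the sketch below is a simplified model (the production searches use the LAL barycentering routines; the constant velocity vector here is assumed only for the example):

```python
import numpy as np

C = 2.998e8  # speed of light (m/s)

def detected_frequency(t, f0, fdot, v, n_hat, t0=0.0):
    """Detector-frame frequency of Eq. (9) with the SSB model of Eq. (10).

    t     : detector time (s)
    f0    : SSB frequency at reference time t0 (Hz)
    fdot  : constant first frequency derivative (Hz/s)
    v     : detector velocity w.r.t. the SSB at time t (m/s), shape (3,)
    n_hat : unit vector toward the source, shape (3,)
    """
    f_ssb = f0 + fdot * (t - t0)                  # Eq. (10)
    return f_ssb * (1.0 + np.dot(v, n_hat) / C)   # Eq. (9)

# Earth's orbital speed (~3e4 m/s) along the line of sight gives a
# fractional shift of ~1e-4, i.e. ~0.014 Hz at 140 Hz -- about 25 SFT bins.
v = np.array([3.0e4, 0.0, 0.0])
n_hat = np.array([1.0, 0.0, 0.0])
print(detected_frequency(0.0, 140.0, -1e-8, v, n_hat))
```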

IV Overview of the Methods

IV.1 Similarities and Differences

The three different analysis methods presented here have many features in common, but also have important differences, both major and minor. In this Section we give a brief overview of the methods.

The parameter space

All three methods are based on summing measures of strain power from many SFTs that have been created from 30-minute intervals of calibrated strain data. Each method also corrects explicitly for sky-position dependent Doppler modulations of the apparent source frequency due to the Earth's rotation and its orbital motion around the SSB, and for the frequency's time derivative, intrinsic to the source (see Fig. 2). This requires a search in a four-dimensional parameter space; a template in the space refers to a set of values $\lambda = \{f_0, \dot f, \alpha, \delta\}$. The third method, PowerFlux, also searches explicitly over polarization angle, so that $\lambda = \{f_0, \dot f, \alpha, \delta, \psi\}$.

Figure 2: An illustration of the search method. The discrete frequency bins of the Short Fourier Transforms (SFTs) of the data are shown vertically, with the discrete start times of the SFTs shown horizontally. The dark pixels represent a signal in the data. Its frequency changes with time due to Doppler shifts and intrinsic evolution of the source. By sliding the frequency bins, the power from a source can be lined up and summed after appropriate weighting or transformation. This is, in essence, the starting point for all of the semi-coherent search methods presented here, though the actual implementations differ significantly.

All three methods search for initial frequency $f_0$ in the range 50-1000 Hz with a uniform grid spacing equal to the size of an SFT frequency bin,

$\delta f = \frac{1}{T_{\rm coh}} \approx 5.556\times10^{-4}~{\rm Hz}$, (11)

where $T_{\rm coh} = 1800$ s is the time-baseline of each SFT. The range of $f_0$ is determined by the noise curves of the interferometers, likely detectable source frequencies (22), and limitations due to the increasing computational cost at high frequencies.

The range of $\dot f$ values searched is $-1\times10^{-8}$ Hz s$^{-1} \le \dot f \le 0$ for the StackSlide and PowerFlux methods and $-2.2\times10^{-9}$ Hz s$^{-1} \le \dot f \le 0$ for the Hough method. The ranges of $\dot f$ are determined by the computational cost, as well as by the low probability of finding an object with $|\dot f|$ higher than the values searched; in other words, the ranges of $\dot f$ are narrow enough to complete the search in a reasonable amount of time, yet wide enough to include likely signals. All known isolated pulsars spin down more slowly than the two maximum $|\dot f|$ values used here, and as seen in the results section, the ellipticity required for higher $|\dot f|$ is improbably high for a source losing rotational energy primarily via gravitational radiation at low frequencies. A small number of isolated pulsars in globular clusters exhibit slight apparent spin-up, believed to arise from acceleration in the Earth's direction; such spin-up values have magnitudes small enough to be detectable with the zero-spin-down templates used in these searches, given a strong enough signal. The parameter ranges correspond to a minimum spin-down timescale (the gravitational-wave spin-down age, $f/(4|\dot f|)$) of 40 years for a source emitting at 50 Hz and 800 years for a source at 1000 Hz. Since for known pulsars (23) this characteristic timescale is at least hundreds of years for frequencies on the low end of our range and tens of millions of years for frequencies on the high end, we see again that the ranges of $\dot f$ are wide enough to include sources from this population.

As discussed in our previous reports (8); (7), the number of sky points that must be searched grows quadratically with the frequency $f_0$, ranging here from about five thousand at 50 Hz to about two million at 1000 Hz. All three methods use nearly isotropic grids which cover the entire sky. The PowerFlux search also divides the sky into regions according to their susceptibility to stationary instrumental line artifacts. Sky-grid and spin-down spacings and other details are provided below.

Upper limits

While the parameter space searched is similar for the three methods, there are important differences in the way upper limits are set. StackSlide and Hough both set population-based frequentist limits on $h_0$ by carrying out Monte Carlo simulations of a random population of pulsar sources distributed uniformly over the sky and with isotropically distributed spin axes. PowerFlux sets strict frequentist limits on circular and linear polarization amplitudes, which correspond to limits on most and least favorable pulsar inclinations, respectively. The limits are placed separately on tiny patches of the sky, and the highest strain upper limits are presented here. In this context "strict" means that, regardless of its polarization angle $\psi$ or inclination angle $\iota$, regardless of its sky location (within fiducial regions discussed below), and regardless of its frequency value and spin-down within the frequency and spin-down step sizes of the search template, an isolated pulsar of true strain amplitude $h_0$ would have yielded a higher measured amplitude than what we measure, in at least 95% of independent observations. The circular polarization limits apply only to the most favorable inclinations ($\iota = 0$ or $\pi$), regardless of sky location and regardless of frequency and spin-down, as above.

Due to these different upper limit setting methods, sharp instrumental lines are also handled differently. StackSlide and Hough carry out removal of known instrumental lines of varying widths in individual SFTs. The measured powers in those bins are replaced with random noise generated to mimic the noise observed in neighboring bins. This line cleaning technique can lead to a true signal being missed because its apparent frequency may coincide with an instrumental line for a large number of SFTs. However, population-averaged upper limits are determined self-consistently to include loss of detection efficiency due to line removal, by using Monte Carlo simulations.

Since its limits are intended to be strict, that is, valid for any source inclination and for any source location within its fiducial area, PowerFlux must handle instrumental lines differently. Single-bin lines are flagged during data preparation so that when searching for a particular source an individual SFT bin power is ignored when it coincides with the source’s apparent frequency. If more than 80% of otherwise eligible bins are excluded for this reason, no attempt is made to set a limit on strain power from that source. In practice, however, the 80% cutoff is not used because we have found that all such sources lie in certain unfavorable regions of the sky, which we call “skybands” and which we exclude when setting upper limits. These skybands depend on source frequency and its derivative, as described in Sec. V.4.4.

Data Preparation

Other differences among the methods concern the data windowing and filtering used in computing Fourier transforms, and the noise estimation. StackSlide and Hough apply high-pass filters to the data, in addition to the filter used to produce the calibrated data stream, and use Tukey windowing. PowerFlux applies no additional filtering and uses Hann windowing with 50% overlap between adjacent SFTs. StackSlide and Hough use median-based noise floor tracking (24); (25); (26). In contrast, PowerFlux uses a time-frequency decomposition. Both of these noise estimation methods are described in Sec. V.

The raw, uncalibrated data channels containing the strain measurements from the three interferometers are converted to a calibrated "$h(t)$" data stream, following the procedure described in (27), using calibration reference functions described in (28). SFTs are generated directly from the calibrated data stream, using 30-minute intervals of data for which the interferometer is operating in what is known as science mode. The choice of 30 minutes is a tradeoff between intrinsic sensitivity, which increases with SFT length, and robustness against frequency drift during the SFT interval due to the Earth's motion, source spin-down, and non-stationarity of the data (8). The requirement that each SFT contain contiguous data at nominal sensitivity introduces duty factor loss from edge effects, especially for the Livingston interferometer (a 20% loss), which typically had shorter stretches of contiguous data. In the end, the StackSlide and Hough searches used 1004 SFTs from H1 and 899 from L1, the two interferometers with the best broadband sensitivity. For PowerFlux, the corresponding numbers of overlapped SFTs were 1925 and 1628. The Hough search also used 1063 H2 SFTs. In each case, modest requirements were placed on data quality to avoid short periods with known electronic saturations, unmonitored calibration strengths, and the periods immediately preceding loss of optical cavity resonance.

IV.2 Definitions And Notation

Let $N$ be the number of SFTs, $T_{\rm coh}$ the time-baseline of each SFT, and $M$ the number of uniformly spaced data points in the time domain from which the SFT is constructed. If the time series is denoted by $x_j$ ($j = 0, \dots, M-1$), then our convention for the discrete Fourier transform is

$\tilde x_k = \Delta t \sum_{j=0}^{M-1} x_j\, e^{-2\pi i jk/M}$, (12)

where $k = 0, \dots, M-1$, and $\Delta t = T_{\rm coh}/M$. For $0 \le k \le M/2$, the frequency index $k$ corresponds to a physical frequency of $f = k/T_{\rm coh}$.

In each method, the "power" (in units of spectral density) associated with frequency bin $k$ and SFT $i$ is taken to be

$P_{ki} = \frac{2 |\tilde x_{ki}|^2}{T_{\rm coh}}$. (13)

It proves convenient to define a normalized power by

$\rho_{ki} = \frac{P_{ki}}{S_{ki}}$. (14)

The quantity $S_{ki}$ is the single-sided power spectral density of the detector noise at frequency $k/T_{\rm coh}$, the estimation of which is described below. Furthermore, a threshold, $\rho_{\rm th}$, can be used to define a binary count by (11):

$n_{ki} = 1$ if $\rho_{ki} \ge \rho_{\rm th}$, and $n_{ki} = 0$ otherwise. (15)
Quantity | Description
$P_i$ | Power for SFT $i$ and template $\lambda$
$\rho_i$ | Normalized power for SFT $i$ and template $\lambda$
$n_i$ | Binary count for SFT $i$ and template $\lambda$
$S_i$ | Power spectral noise density for SFT $i$ and template $\lambda$
$F_{+i}$ | $F_+$ at midpoint of SFT $i$ for template $\lambda$
$F_{\times i}$ | $F_\times$ at midpoint of SFT $i$ for template $\lambda$
Table 1: Summary of notation used.

When searching for a signal using template $\lambda$, the detector antenna pattern and frequency of the signal are found at the midpoint time of the data used to generate each SFT. Frequency-dependent quantities are then evaluated at a frequency index $k$ corresponding to the bin nearest this frequency. To simplify the equations in the rest of this paper we drop the frequency index $k$ and use the notation given in Table 1 to define various quantities for SFT $i$ and template $\lambda$.
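The conventions of Eqs. (12)-(15) can be checked numerically. The sketch below (with a reduced sample rate, assumed only to keep the example light) verifies that the mean of $P$ estimates the one-sided noise PSD, and that the fraction of selected bins matches $e^{-\rho_{\rm th}}$ for Gaussian noise (with the threshold $\rho_{\rm th} = 1.6$ used later in the Hough search):

```python
import numpy as np

T_COH = 1800.0                 # SFT time-baseline (s)
FS = 2048.0                    # sample rate, reduced here for speed (Hz)
M = int(T_COH * FS)            # number of time-domain samples
DT = 1.0 / FS

rng = np.random.default_rng(0)
S0 = 1e-46                     # one-sided PSD of simulated white noise
x = rng.normal(0.0, np.sqrt(S0 * FS / 2.0), M)

x_tilde = DT * np.fft.rfft(x)            # DFT convention of Eq. (12)
P = 2.0 * np.abs(x_tilde)**2 / T_COH     # power in PSD units, Eq. (13)
rho = P / S0                             # normalized power, Eq. (14)
n = (rho >= 1.6).astype(int)             # binary counts, Eq. (15)

print(P[1:-1].mean() / S0)    # ~1.0: <P> estimates the one-sided PSD
print(n.mean())               # ~exp(-1.6) ~ 0.20 of bins selected
```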

IV.3 Basic StackSlide, Hough, and PowerFlux Formalism

We call the detection statistics used in this search the "StackSlide Power", $P$, the "Hough Number Count", $n_c$, and the "PowerFlux Signal Estimator", $R$. The basic definitions of these quantities are given below.

Here the simple StackSlide method described in (16) is used; the "StackSlide Power" for a given template is defined as

$P = \frac{1}{N} \sum_{i=1}^{N} \rho_i$. (16)

This normalization results in values of $P$ with a mean value of unity and, for Gaussian noise, a standard deviation of $1/\sqrt{N}$. Details about the value and statistics of $P$ in the presence and absence of a signal are given in Appendix B and (16).

In the Hough search, instead of summing the normalized power, the final statistic used in this paper is a weighted sum of the binary counts, giving the "Hough Number Count":

$n_c = \sum_{i=1}^{N} w_i\, n_i$, (17)

where the Hough weights are defined as

$w_i \propto \frac{1}{S_i}\left(F_{+i}^2 + F_{\times i}^2\right)$, (18)

and the weight normalization is chosen according to

$\sum_{i=1}^{N} w_i = N$. (19)

With this choice of normalization the Hough Number Count $n_c$ lies within the range $[0, N]$. Thus, we take a binary count to have greater weight if the SFT has a lower noise floor and if, in the time interval corresponding to this SFT, the beam pattern functions are larger for a particular point in the sky. Note that the sensitivity of the search is governed by the ratios of the different weights, not by the choice of overall scale. In the next section we show that these weights maximize the sensitivity, averaged over the orientation of the source. This choice of weights was originally derived in (17) using a different argument and is similar to that used in the PowerFlux circular polarization projection described next. More about the Hough method is given in (8); (11).
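A minimal sketch of Eqs. (17)-(19), assuming per-SFT arrays of binary counts, PSD estimates, and antenna-pattern values have already been computed for the template:

```python
import numpy as np

def hough_number_count(n, S, Fp, Fx):
    """Weighted Hough number count of Eqs. (17)-(19).

    n      : binary counts n_i (0 or 1), one per SFT
    S      : noise PSD estimates S_i at the template frequency
    Fp, Fx : antenna-pattern values F_{+i}, F_{xi} at each SFT midpoint
    """
    w = (Fp**2 + Fx**2) / S     # Eq. (18), up to an overall scale
    w *= len(w) / w.sum()       # Eq. (19): normalize so sum(w_i) = N
    return np.dot(w, n)         # Eq. (17); lies in the range [0, N]
```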

The PowerFlux method takes advantage of the fact that less weight should be given to times of greater noise variance or smaller detector antenna response to a signal. Noting that power estimated from the data divided by the antenna pattern increases the variance of the data at times of small detector response, the problem reduces to finding weights that minimize the variance, or in other words that maximize the signal-to-noise ratio. The resulting PowerFlux detection statistic is (18)

$R = \frac{2}{T_{\rm coh}}\,\frac{\sum_{i=1}^{N} W_i\, P_i / F_i^2}{\sum_{i=1}^{N} W_i}$, (20)

where the PowerFlux weights are defined as

$W_i = \frac{F_i^4}{S_i^2}$, (21)

and where, for the linear polarization projections,

$F_i = F_{+i} \cos 2\psi + F_{\times i} \sin 2\psi$. (22)

As noted previously, the PowerFlux method searches using four linear polarization projections and one circular polarization projection. For the linear polarization projections, note that $F_i^2$ is evaluated at the angle $\psi$, which is the same as $F_i^2$ evaluated at the angle $\psi + \pi/2$; for circular polarization, the value of $F_i^2$ is independent of $\psi$. Finally note that the factor of $2/T_{\rm coh}$ in Eq. (20) makes $R$ dimensionless and is chosen to make it directly related to an estimate of the squared amplitude of the signal for the given polarization. Thus $R$ is also called in this paper the "PowerFlux Signal Estimator". (See (18) and Appendix A for further discussion.)
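The following sketch implements Eqs. (20)-(22). It assumes, by analogy with the Hough weights, that the circular projection uses $F_{+}^2 + F_{\times}^2$; in practice SFTs with very small antenna response would be dropped by the cutoff described in Sec. V.4:

```python
import numpy as np

T_COH = 1800.0  # SFT time-baseline (s)

def powerflux_estimator(P, S, Fp, Fx, psi=None):
    """PowerFlux signal estimator R of Eqs. (20)-(22).

    P      : SFT powers P_i at the template frequency (PSD units)
    S      : noise PSD estimates S_i
    Fp, Fx : antenna-pattern values at each SFT midpoint
    psi    : linear polarization angle; None selects circular polarization
    """
    if psi is None:
        F2 = Fp**2 + Fx**2                                          # circular
    else:
        F2 = (Fp * np.cos(2.0 * psi) + Fx * np.sin(2.0 * psi))**2   # Eq. (22)
    W = F2**2 / S**2                                                # Eq. (21)
    # Eq. (20): weighted average of antenna-corrected powers, scaled to
    # be directly related to the squared signal amplitude.
    return (2.0 / T_COH) * np.sum(W * P / F2) / np.sum(W)
```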

We have shown in Eqs. (16)-(22) how to compute the detection statistic (or signal estimator) for a given template. The next section gives the details of the implementation and pipelines used, where these quantities are calculated for a set of templates and analyzed.

V Implementations and Pipelines

V.1 Running Median Noise Estimation

The implementations of the StackSlide and Hough methods described below use a “running median” to estimate the mean power and, from this estimate, the power spectral density of the noise, for every frequency bin of every SFT. PowerFlux uses a different noise decomposition method described in its implementation section below.

Note that for Gaussian noise, the single-sided power spectral density can be estimated using

$S_k = \frac{2\,\langle |\tilde x_k|^2 \rangle}{T_{\rm coh}}$, (23)

where the angle brackets represent an ensemble average. The estimation of $S_k$ must guard against any biases introduced by the presence of a possible signal and also against narrow spectral disturbances. For this reason the mean, $\langle |\tilde x_k|^2 \rangle$, is estimated via the median. We assume that the noise is stationary within a single SFT, but allow for non-stationarities across different SFTs. In every SFT we calculate the "running median" of $|\tilde x_k|^2$ over 101 frequency bins centered on the $k$-th bin, and then estimate $S_k$ (24); (25); (26) by dividing by the expected ratio of the median to the mean.

Note, however, that in the StackSlide search, after the estimated mean power is used to compute the denominator of Eq. (14), these terms are summed in Eq. (16), while the Hough search applies a cutoff to obtain binary counts in Eq. (15) before summing. This results in the use of a different correction to get the mean in the StackSlide search from that used in the Hough search. For a running median using 101 frequency bins, the StackSlide search used a slightly adjusted effective ratio of the median to the mean (chosen to normalize the data so that the mean value of the StackSlide Power equals one), compared with the expected ratio for an exponential distribution, $\ln 2 \simeq 0.6931$, used in the Hough search (which is explained in Appendix A of (8)). It is important to realize that the results reported here are valid independent of the factor used, since any overall constant scaling of the data does not affect the selection of outliers or the reported upper limits, which are based on Monte Carlo injections subjected to the same normalization.
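A sketch of the running-median noise estimate follows. The 101-bin width is from the text; the use of the `scipy` median filter and the $\ln 2$ correction (the Hough convention) are our assumptions for the illustration:

```python
import numpy as np
from scipy.ndimage import median_filter

def running_median_psd(P, width=101):
    """Estimate the noise PSD S_k for one SFT from the powers P_k of Eq. (13).

    The median over `width` neighboring bins is robust against narrow
    lines and a possible signal; dividing by the median-to-mean ratio of
    an exponential distribution (ln 2) converts the median into an
    estimate of the mean, per Eq. (23).
    """
    med = median_filter(P, size=width, mode="nearest")
    return med / np.log(2.0)
```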

V.2 The StackSlide Implementation

Algorithm and parameter space

The StackSlide method uses power averaging to gain sensitivity by decreasing the variance of the noise (15); (13); (14); (16). Brady and Creighton (13) first described this approach in the context of gravitational-wave detection as part of a hierarchical search for periodic sources. Their method consists of averaging the power from a demodulated time series, but as an approximation did not include the beam pattern response of the detector. In Ref. (16), a simple implementation is described that averages the normalized power given in Eq. (14). Its extension to averaging the maximum likelihood statistic (known as the $\mathcal{F}$-statistic), which does include the beam pattern response, is mentioned in Ref. (16) (see also (19); (11); (7)), and further extensions of the StackSlide method are given in (14).

As noted above, the simple StackSlide method given in (16) is used here and the detection statistic, called the "StackSlide Power", is defined by Eq. (16). The normalization is chosen so that the mean value of $P$ is equal to unity and its standard deviation is $1/\sqrt{N}$ for Gaussian noise alone. For simplicity, the StackSlide Power signal-to-noise ratio (in general the value of $P$ minus its mean value and then divided by the standard deviation of $P$) will be defined in this paper as $(P - 1)\sqrt{N}$, even for non-Gaussian noise.

Figure 3: Flow chart for the pipeline used to find the upper limits presented in this paper using the StackSlide method.

The StackSlide code, which implements the method described above, is part of the C-based LSC Algorithms Library Applications (LALapps) stored in the lscsoft CVS repository (21). The code is run in a pipeline with options set to produce the results from a search and from Monte Carlo simulations. Parallel jobs are run on computer clusters within the LSC, in the Condor environment (30), and the final post processing steps are performed using Matlab (31). The specific StackSlide pipeline used to find the upper limits presented in this paper is shown in Fig. 3. The first three boxes on the left side of the pipeline can also be used to output candidates for follow-up searches.

A separate search was run for each successive 0.25 Hz band within the range 50-1000 Hz. The spacing in frequency used is given by Eq. (11). The spacing in $\dot f$ was chosen as that which changes the frequency by one SFT frequency bin during the observation time $T_{\rm obs}$, i.e., so that $\delta \dot f = \delta f / T_{\rm obs}$. For simplicity a fixed value of $T_{\rm obs}$ of about 30 days was chosen, greater than or equal to the time actually spanned by the data from each interferometer. Thus, the $\dot f$ part of the parameter space was over-covered by choosing

$\delta \dot f = \frac{\delta f}{T_{\rm obs}} = \frac{1}{T_{\rm coh} T_{\rm obs}} \approx 2\times10^{-10}~{\rm Hz\,s^{-1}}$. (24)

Values of $\dot f$ in the range $-1\times10^{-8}$ Hz s$^{-1} \le \dot f \le 0$ were searched. This range, covered with the spacing of Eq. (24), is the same as PowerFlux used in its low-frequency search (discussed in Section V.4).

The sky grid used is similar to that used for the all-sky search in (7), but with a spacing between sky-grid points appropriate for the StackSlide search. This grid is isotropic on the celestial sphere, with an angular spacing between points chosen such that, for the lower frequencies searched, the maximum change in Doppler shift from one sky-grid point to the next would shift the frequency by half a bin. This is given by

$\delta \theta = \frac{c\,\delta f}{2 f v \sin\theta}$, (25)

where $v$ is the magnitude of the velocity $\vec v$ of the detector in the SSB frame, and $\theta$ is the angle between $\vec v$ and the unit vector giving the sky position of the source. Equations (24) and (25) are the same as Eqs. (19) and (22) in (8), which represent conservative choices that over-cover the parameter space. Thus, the parameter space used here corresponds to that in Ref. (8), adjusted to the S4 observation time, with the exception that a stereographic projection of the sky is not used. Rather, an isotropic sky grid is used like the one in (7).
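For concreteness, the grid spacings of Eqs. (11), (24), and (25) can be evaluated numerically. This is a sketch; the fiducial $T_{\rm obs}$ of 30 days and the rounded Earth orbital speed are assumptions made for the example:

```python
import math

T_COH = 1800.0              # SFT time-baseline (s)
T_OBS = 30.0 * 86400.0      # fiducial observation time (s), assumed ~30 d
V_OVER_C = 1.0e-4           # Earth's orbital speed / c, approximate

DF = 1.0 / T_COH            # frequency spacing, Eq. (11): ~5.6e-4 Hz
DFDOT = DF / T_OBS          # spin-down spacing, Eq. (24): ~2e-10 Hz/s

def dtheta(f, sin_theta=1.0):
    """Sky-grid spacing of Eq. (25); sin_theta = 1 is the worst case."""
    return DF / (2.0 * f * V_OVER_C * sin_theta)

print(DF, DFDOT)
print(dtheta(140.0))         # ~0.02 rad at 140 Hz
print(5.0 * dtheta(1000.0))  # coarsened spacing used at high frequencies
```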

One difficulty is that the computational cost of the search increases quadratically with frequency, due to the increasing number of points on the sky grid. To reduce the computational time, the sky grid spacing given in Eq. (25) was increased by a factor of 5 above 225 Hz. This represents a savings of a factor of 25 in computational cost. It was shown through a series of simulations, comparing the upper limits in various frequency bands with and without the factor of 5 increase in grid spacing, that this changes the upper limits by an amount that is negligible on average and small compared with the other uncertainties in the search. Thus, this factor of 5 increase was used to allow the searches in the higher-frequency bands to complete in a reasonable amount of time.

It is not surprising that the sky grid spacing can be increased, for at least three reasons. First, the value for $\delta\theta$ given in Eq. (25) applies to only a small annular region on the sky, and is smaller than the average change. Second, only the net change in Doppler shift during the observation time is important, which is less than the maximum Doppler shift due to the Earth's orbital motion during a one-month run. (If the Doppler shift were constant during the entire observation time, one would not need to search over sky positions even if the Doppler shift varied across the sky. A source frequency would be shifted by a constant amount during the observation, and would be detected, albeit in a frequency bin different from that at the SSB.) Third, because of correlations on the sky, one can detect a signal with negligible loss of SNR much farther from its true sky location than the spacing above suggests.

Line cleaning

Figure 4: The StackSlide Power for an example frequency band with no sliding. Harmonics of 1 Hz instrumental lines are clearly seen in H1 (top) and L1 (bottom). These lines are removed from the data by the StackSlide and Hough searches using the method described in the text, while the PowerFlux search tracks these lines and avoids them when setting upper limits.
Figure 5: The L1 amplitude spectral density in a narrow frequency band, estimated from 10 SFTs before and after the line cleaning used by the StackSlide pipeline. In the band shown, the bin containing a 1 Hz harmonic line, and one bin on either side of it, have been replaced with estimates of the noise based on neighboring bins.
IFO | Start (Hz) | Step (Hz) | Num. | Left (Hz) | Right (Hz) | Description
H1 | 46.7 | n/a | 1 | 0.0 | 0.0 | Cal. line
H1 | 393.1 | n/a | 1 | 0.0 | 0.0 | Cal. line
H1 | 973.3 | n/a | 1 | 0.0 | 0.0 | Cal. line
H1 | 1144.3 | n/a | 1 | 0.0 | 0.0 | Cal. line
H1 | 0.0 | 1.0 | 1500 | 0.0006 | 0.0006 | 1 Hz comb
L1 | 54.7 | n/a | 1 | 0.0 | 0.0 | Cal. line
L1 | 396.7 | n/a | 1 | 0.0 | 0.0 | Cal. line
L1 | 1151.5 | n/a | 1 | 0.0 | 0.0 | Cal. line
L1 | 0.0 | 1.0 | 1500 | 0.0006 | 0.0006 | 1 Hz comb
Table 2: Instrumental lines cleaned during the StackSlide search. The frequencies cleaned are found by starting with the frequency given in the second column, and then taking steps of the size given in the third column, repeating the number of times shown in the fourth column; the fifth and sixth columns show how many additional Hz are cleaned to the immediate left and right of each line.

Coherent instrumental lines exist in the data which can mimic a continuous gravitational-wave signal for parameter-space points that correspond to little Doppler modulation. Very narrow instrumental lines are removed ("cleaned") from the data. In the StackSlide search, a line is considered "narrow" if its full width is less than 5% of the 0.25 Hz band, i.e., less than 0.0125 Hz. The line must also have been identified a priori as a known instrument artifact. Known lines with less than this width were cleaned by replacing the contents of bins corresponding to lines with random values generated by using the running median to find the mean power, using 101 bins from either side of the lines. This method is also used to estimate the noise, as described in Section V.1.

It was found when characterizing the data that a comb of narrow 1 Hz harmonics existed in the H1 and L1 data, as shown in Fig. 4. Table 2 shows the lines cleaned during the StackSlide search. As the table shows, only this comb of narrow 1 Hz harmonics and injected lines used for calibration were removed. As an example of the cleaning process, Fig. 5 shows the amplitude spectral density estimated from 10 SFTs before and after line cleaning, for a band containing one of the 1 Hz harmonic lines.

Excluded bands (Hz) | Description
60 Hz band | Power lines
Bands at harmonics of 60 Hz | Power line harmonics
Violin mode band | Violin modes
First violin mode harmonic band | Violin mode harmonics
Second violin mode harmonic band | Violin mode harmonics
Table 3: Frequency bands excluded from the StackSlide search.

The cleaning of very narrow lines has a negligible effect on the efficiency to detect signals. Very broad lines, on the other hand, cannot be handled in this way. Bands with very broad lines were searched without any line cleaning. There were also a number of highly disturbed bands, dominated either by the harmonics of the 60 Hz power lines or by the violin modes of the suspended optics, that were excluded from the StackSlide results. (Violin modes refer to resonant excitations of the steel wires that support the interferometer mirrors.) These bands are shown in Table 3. While they could be covered by adjusting the parameters used to find outliers and set upper limits, we leave this for future runs.

Upper limits method

After the lines are cleaned, the powers in the SFTs are normalized and the parameter space searched, with each template producing a value of the StackSlide Power, defined in Eq. (16). For this paper, only the "loudest" StackSlide Power is kept, resulting in a single value $P_{\rm max}$ for each 0.25 Hz band, and these are used to set upper limits on the gravitational-wave amplitude, $h_0$. (The loudest coincident outliers are also identified, but none survive as candidates after the follow-up studies described in Sec. VII.1.1.) The upper limits are found by a series of Monte Carlo simulations, in which signals are injected in software with a fixed value for $h_0$, but with otherwise randomly chosen parameters, and the parameter-space points that surround the injection are searched. The number of times the loudest StackSlide Power found during the Monte Carlo simulations is greater than or equal to $P_{\rm max}$ is recorded, and this is repeated for a series of $h_0$ values. The 95% confidence upper limit is defined to be the value of $h_0$ that results in a detected StackSlide Power greater than or equal to $P_{\rm max}$ 95% of the time. As shown in Fig. 3, the line cleaning described above is done after each injection is added to the input data, which folds any loss of detection efficiency due to line cleaning into the upper limits self-consistently.

Figure 6: Measured confidence vs. $h_0$ for an example band in H1. A best-fit straight line is used to find the value of $h_0$ corresponding to 95% confidence and to estimate the uncertainties in the results (see text).

Figure 6 shows the measured confidence versus $h_0$ for an example frequency band. The upper limit finding process involves first making an initial guess of its value, then refining this guess using a single set of injections to find an estimate of the upper limit, and finally using this estimate to run several sets of injections to find the final value of the upper limit. These steps are now described in detail.

To start the upper-limit finding process, first an initial guess, $h_{\rm guess}$, is used as the gravitational-wave amplitude. The initial guess need not be near the sought-after upper limit, just sufficiently large, as explained below. A single set of injections is done with random sky positions and isotropically distributed spin axes, but all with amplitude $h_{\rm guess}$. The output list of StackSlide Powers from this set of injections is sorted in ascending order, and the value at the 5th percentile, which we call $P_{0.05}$, is found. Note that the goal is to find the value of $h_0$ that makes $P_{0.05} = P_{\rm max}$, so that 95% of the output powers are greater than the maximum power found during the search. This is what we call the 95% confidence upper limit. Of course, in general $P_{0.05}$ will not equal $P_{\rm max}$ unless our first guess was very lucky. However, as per the discussion concerning Eq. (61), $P - 1$ is proportional to $h_0^2$ (i.e., removing the mean value due to noise leaves on average the power due to the presence of a signal). Thus, an estimate of the 95% confidence upper limit is given by the following rescaling of $h_{\rm guess}$:

$h_{\rm est} = h_{\rm guess} \sqrt{\frac{P_{\rm max} - 1}{P_{0.05} - 1}}$. (26)

Thus an estimated upper limit, $h_{\rm est}$, is found from a single set of injections with amplitude $h_{\rm guess}$; the only requirement is that $h_{\rm guess}$ is chosen loud enough to make $P_{0.05} > 1$.

It is found that using Eq. (26) results in an estimate of the upper limit that is typically close to the final value. For example, the estimated upper limit found in this way is indicated by the circled point in Fig. 6. The value of $h_{\rm est}$ then becomes the first value for $h_0$ in a series of Monte Carlo simulations, which use this value and neighboring values, measuring the confidence each time. The Matlab (31) polyfit and polyval functions are then used to find the best-fit straight line to determine the value of $h_0$ corresponding to 95% confidence and to estimate the uncertainties in the results. This is the final step of the pipeline shown in Fig. 3.
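A sketch of the first-pass rescaling of Eq. (26) follows; the helper and its interface are illustrative, and `P_injected` stands for the loudest powers recovered from the single set of injections:

```python
import numpy as np

def estimate_upper_limit(h_guess, P_injected, P_max):
    """First-pass 95% confidence upper limit via the rescaling of Eq. (26).

    h_guess    : amplitude used for the single set of injections
    P_injected : StackSlide Powers recovered from those injections
    P_max      : loudest StackSlide Power found in the actual search
    """
    P_05 = np.quantile(P_injected, 0.05)  # 5% of injections fall below this
    assert P_05 > 1.0, "h_guess must be loud enough that P_0.05 exceeds 1"
    # P - 1 is proportional to h0^2 for weak signals, hence the rescaling:
    return h_guess * np.sqrt((P_max - 1.0) / (P_05 - 1.0))
```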

V.3 The Hough Transform Implementation

Description of Algorithm

The Hough transform is a general method for pattern recognition, invented originally to analyze bubble chamber pictures from CERN (32); (33); it has found many applications in the analysis of digital images (34). This method has already been used to analyze data from the second science run (S2) of the LIGO detectors (8) and a detailed description can be found in (11). Here we present only a brief description, emphasizing the differences between the previous S2 search and the S4 search described here.

The Hough search uses a weighted sum of the binary counts as its final statistic, as given by Eqs. (15) and (19). In the standard Hough search as presented in (11); (8), the weights are all set to unity. The weighted Hough transform was originally discussed in (17). The software for performing the Hough transform has been adapted to use arbitrary weights without any significant loss in computational efficiency. Furthermore, the robustness of the Hough transform method in the presence of strong transient disturbances is not compromised by using weights, because each SFT contributes at most $w_i$ (which is of order unity) to the final number count.

The following statements can be proven using the methods of (11). The mean number count in the absence of a signal is $\bar n = Np$, where $N$ is the number of SFTs and $p$ is the probability that the normalized power, $\rho$, of a given frequency bin and SFT, defined by Eq. (14), exceeds a threshold $\rho_{\rm th}$, i.e., $p$ is the probability that a frequency bin is selected in the absence of a signal. For unity weighting, the standard deviation is simply $\sqrt{Np(1-p)}$. However, with more general weighting, it can be shown that the standard deviation $\sigma$ is given by

$\sigma = \sqrt{p(1-p)\,\|w\|^2}$, (27)

where $\|w\|^2 = \sum_{i=1}^{N} w_i^2$. A threshold $n_{\rm th}$ on the number count corresponding to a false alarm rate $\alpha$ is given by

$n_{\rm th} = Np + \sqrt{2\,p(1-p)\,\|w\|^2}\;{\rm erfc}^{-1}(2\alpha)$. (28)

Therefore $n_{\rm th}$ depends on the weights of the corresponding template $\lambda$. In this case, the natural detection statistic is not the "Hough Number Count" $n_c$, but the significance of a number count, defined by

$s = \frac{n_c - \bar n}{\sigma}$, (29)

where $\bar n$ and $\sigma$ are the expected mean and standard deviation for pure noise. Values of $s$ can be compared directly across different templates characterized by differing weight distributions.

The threshold $\rho_{\rm th}$ (cf. Eq. (15)) is selected to give the minimum false dismissal probability for a given false alarm rate. In (8) it was shown that the optimal choice is $\rho_{\rm th} = 1.6$, which corresponds to a peak selection probability $p = e^{-\rho_{\rm th}} \approx 0.2$. It can be shown that the optimal choice is unchanged by the weights, and hence $\rho_{\rm th} = 1.6$ is used once more (35).
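The noise statistics of Eqs. (27)-(29) translate directly into code. This sketch assumes the weights have already been normalized per Eq. (19):

```python
import numpy as np
from scipy.special import erfcinv

RHO_TH = 1.6               # optimal peak-selection threshold (see text)
P_SEL = np.exp(-RHO_TH)    # peak selection probability, ~0.2

def noise_stats(w):
    """Mean and standard deviation of n_c for pure noise, Eq. (27)."""
    mean = len(w) * P_SEL                 # sum(w_i) = N by Eq. (19)
    sigma = np.sqrt(P_SEL * (1.0 - P_SEL) * np.sum(w**2))
    return mean, sigma

def count_threshold(w, alpha):
    """Number-count threshold for false alarm rate alpha, Eq. (28)."""
    mean, sigma = noise_stats(w)
    return mean + np.sqrt(2.0) * sigma * erfcinv(2.0 * alpha)

def significance(n_c, w):
    """Significance s of an observed number count, Eq. (29)."""
    mean, sigma = noise_stats(w)
    return (n_c - mean) / sigma
```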

Consider a population of sources located at a given point in the sky, but having uniformly distributed spin-axis directions. For a template that is perfectly matched in frequency, spin-down, and sky position, and given the optimal peak selection threshold, it can be shown (35) that the weakest signal that can cross the threshold with a given false dismissal probability has an amplitude

$h_0 \propto \left(\frac{\|w\|}{\sum_{i=1}^{N} w_i\,\xi_i}\right)^{1/2}$, (30)

where

$\|w\|^2 = \sum_{i=1}^{N} w_i^2$ (31)

and

$\xi_i = \frac{F_{+i}^2 + F_{\times i}^2}{S_i}$, (32)

and where the proportionality constant depends on $T_{\rm coh}$, the peak selection probability $p$, the threshold $\rho_{\rm th}$, and the false alarm and false dismissal rates. As before, $F_{+i}$ and $F_{\times i}$ are the values of the beam pattern functions at the midpoint of the $i$-th SFT. To derive (30) we have assumed that the number of SFTs is sufficiently large and that the signal is weak (11).

From (30) it is clear that the scaling of the weights does not matter; $w_i \rightarrow C w_i$ leaves $h_0$ unchanged for any constant $C$. More importantly, it is also clear that the sensitivity is best, i.e. $h_0$ is minimum, when $\sum_i w_i \xi_i / \|w\|$ is maximum. By the Cauchy-Schwarz inequality this occurs when

$w_i \propto \xi_i = \frac{F_{+i}^2 + F_{\times i}^2}{S_i}$. (33)

This result is equivalent to Eq. (18).

In addition to improving sensitivity in single-interferometer analysis, the weighted Hough method allows automatic optimal combination of Hough counts from multiple interferometers of differing sensitivities.

Figure 7: The improvement in the significance as a function of the mismatch in the sky position. A signal is injected in simulated noise at a sky location $(\alpha_0, \delta_0)$ and the weights are calculated at a mismatched sky position. The curve is the observed significance as a function of the mismatch, while the horizontal line is the observed significance when no weights are used. See the main text for more details.

Ideally, to obtain the maximum increase in sensitivity, we should calculate the weights for each sky location separately. In practice, we break up the sky into smaller patches and calculate one weight for each sky-patch center. The gain from using the weights will be reduced if the sky patches are too large. From equation (32), it is clear that the dependence of the weights on the sky position is only through the beam pattern functions. Therefore, the sky patch size is determined by the typical angular scale over which $F_+$ and $F_\times$ vary; for a hypothetical spherical detector the beam-pattern weights would not gain us any sensitivity. For the LIGO interferometers, we have investigated this issue with Monte Carlo simulations using random Gaussian noise. Signals are injected in this noise corresponding to the H1 interferometer at a sky location $(\alpha_0, \delta_0)$, while the weights are calculated at a mismatched sky position. The significance values are compared with the significance when no weights are used. An example of such a study is shown in Fig. 7. Here, we have injected a signal with zero spin-down at a fixed sky location, with a signal-to-noise ratio corresponding approximately to the detection threshold without weights. The figure shows the largest gain in significance at zero mismatch, decreasing to zero as the mismatch grows to a substantial fraction of a radian. We get qualitatively similar results for other sky locations, independent of frequency and other parameters. There is an additional gain due to the non-stationarity of the noise itself, which depends, however, on the quality of the data. In practice, we have chosen to break the sky up into 92 rectangular patches, whose typical width corresponds to a maximum sky-position mismatch small enough that most of the gain seen in Fig. 7 is retained.

The Hough Pipeline

The Hough analysis pipeline for the search and for setting upper limits follows roughly the same scheme as in (8). In this section we present a short description of the pipeline, mostly emphasizing the differences from (8) and from the StackSlide and PowerFlux searches. As discussed in the previous subsection, the key differences from the S2 analysis (8) are (i) using the beam-pattern and noise weights, and (ii) using SFTs from multiple interferometers.

The total frequency range analyzed is 50-1000 Hz, with a resolution $\delta f$ as in (11). The resolution in $\dot f$ is given by (24), and the reference time for defining the spin-down is the start time of the observation. However, unlike StackSlide and PowerFlux, the Hough search is carried out over only 11 values of $\dot f$, including zero, in the range $[-2.2\times10^{-9}$ Hz s$^{-1}, 0]$. This choice is driven by the technical design of the current implementation, which uses look-up tables and partial Hough maps as in (8). This implementation of the Hough algorithm is efficient when analyzing all resolvable points in $\dot f$, as given in (24), but this approach is incompatible with the larger step sizes used in the other search methods, which permit those searches to cover a larger $\dot f$ range for comparable computational cost.

The sky resolution is similar to that used by the StackSlide method for frequencies below 225 Hz, as given by (25). At frequencies higher than this, the StackSlide sky resolution is 5 times coarser, so there the Hough search analyzes about 25 times more sky templates at a given frequency and spin-down value. In each of the 92 sky patches, by means of the stereographic projection, the sky patch is mapped to a two-dimensional plane with a uniform grid of that resolution $\delta\theta$. Sky patches slightly overlap to avoid gaps among them (see (8) for further details).

Figure 8: Two example histograms of the normalized Hough number count compared to a Gaussian distribution for the H1 detector in the frequency band 150-151 Hz. The upper plot corresponds to a patch located at the north pole for the case in which the weights are used; the number of SFTs is 1004, and the corresponding mean and standard deviation are obtained from the weights. The lower plot corresponds to a patch at the equator using the same data.

Figure 8 shows examples of histograms of the number counts in two particular sky patches for the H1 detector in the 150-151 Hz band. In all the bands free of instrumental disturbances, the Hough number count distributions follow the expected theoretical distribution, which can be approximated by a Gaussian distribution. Since the number of SFTs for H1 is 1004, the corresponding mean is $\bar n = Np \approx 200$, and the standard deviation is given by Eq. (27). The standard deviation is computed from the weights and varies among different sky patches because of varying antenna pattern functions.

IFO | Start (Hz) | Step (Hz) | Num. | Left (Hz) | Right (Hz) | Description
H1 | 392.365 | n/a | 1 | 0.01 | 0.01 | Cal. sideband
H1 | 393.835 | n/a | 1 | 0.01 | 0.01 | Cal. sideband
H2 | 54.1 | n/a | 1 | 0.0 | 0.0 | Cal. line
H2 | 407.3 | n/a | 1 | 0.0 | 0.0 | Cal. line
H2 | 1159.7 | n/a | 1 | 0.0 | 0.0 | Cal. line
H2 | 110.934 | 36.9787 | 4 | 0.02 | 0.02 | 37 Hz oscillator
L1 | 154.6328 | 8.1386 | 110 | 0.01 | 0.01 | 8.14 Hz comb
L1 | 0.0 | 36.8725 | 50 | 0.02 | 0.02 | 37 Hz oscillator (*)
Table 4: Instrumental lines cleaned during the Hough search that were not listed in Table 2 (see text); the columns have the same meanings as in Table 2. (*) These lines were removed only in the multi-interferometer search.

The upper limits on $h_0$ are derived from the loudest event, registered over the entire sky and spin-down range in each 0.25 Hz band, not from the highest number count. As for the StackSlide method, we use a frequentist method, where upper limits refer to a hypothetical population of isolated spinning neutron stars which are uniformly distributed in the sky and have a spin-down rate $\dot f$ uniformly distributed in the range $[-2.2\times10^{-9}$ Hz s$^{-1}, 0]$. We also assume uniform distributions for the parameters $\cos\iota$, $\psi$, and $\Phi_0$. The strategy for calculating the 95% upper limits is roughly the same scheme as in (8), except for the treatment of narrow instrumental lines.

Known spectral disturbances are removed from the SFTs in the same way as for the StackSlide search. The known spectral lines are, of course, also consistently removed after each signal injection when performing the Monte-Carlo simulations to obtain the upper limits.

The narrow instrumental lines “cleaned” from the SFT data are the same ones cleaned during the StackSlide search shown in Table 2, together with ones listed in Table 4. The additional lines listed in Table 4 are cleaned to prevent large artifacts in one instrument from increasing the false alarm rate of the Hough multi-interferometer search. Note that the L1 36.8725 Hz comb was eliminated mid-way through the S4 run by replacing a synthesized radio frequency oscillator for phase modulation with a crystal oscillator, and these lines were not removed in the Hough L1 single-interferometer analysis.

No frequency bands have been excluded from the Hough search, although the upper limits reported on the bands shown in Table 3, which are dominated by 60 Hz power line harmonics or violin modes of the suspended optics, did not always give satisfactory convergence to an upper limit. In a few of these very noisy bands, upper limits were set by extrapolation, instead of interpolation, of the Monte Carlo simulations. Therefore the results reported on those bands have larger error bars. No parameter tuning was performed on these disturbed bands to improve the upper limits.

V.4 The PowerFlux Implementation

The PowerFlux method is a variant on the StackSlide method in which the contributions from each SFT are weighted by the inverse square of the average spectral power density in each band and weighted according to the antenna pattern sensitivity of the interferometer for each point searched on the sky. This weighting scheme has two advantages: 1) the variance of the signal-strength estimator is minimized, improving the signal-to-noise ratio; and 2) the estimator is itself a direct measure of source strain power, allowing direct parameter estimation and dramatically reducing dependence on Monte Carlo simulations. Details of software usage and algorithms can be found in a technical document (18). Figure 9 shows a flow chart of the algorithm, discussed in detail below.

Figure 9: Flow chart for the pipeline used to find the upper limits presented in this paper using the PowerFlux method.

Noise decomposition

Noise estimation is carried out through a time-frequency noise decomposition procedure in which the dominant variations are factorized within each nominal 0.25 Hz band as a product of a spectral variation and a time variation across the data run. Specifically, for each 0.25 Hz band, a matrix of logarithms of power measurements across the $\sim$0.56 mHz SFT bins and across the SFTs of the run is created. Two vectors, denoted TMedians and FMedians, are initially set to zero and then iteratively updated according to the following algorithm:

  1. For each SFT (row in matrix), the median value (logarithm of power) is computed and then added to the corresponding element of TMedians while subtracted from each matrix element in that row.

  2. For each frequency bin (column in matrix), the median value is computed and then added to the corresponding FMedians element, while subtracted from each matrix element in that column.

  3. The procedure repeats from step 1 until all medians computed in steps 1 and 2 are zero (or negligible).

The above algorithm typically converges quickly. The size of the frequency band treated increases with central frequency, as neighboring bins are included to allow for maximum and minimum Doppler shifts to be searched in the next step.
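The decomposition can be sketched as follows (ours, in Python for illustration; the production implementation is the C-based PowerFlux code (18)):

```python
import numpy as np

def noise_decomposition(log_power, tol=1e-9, max_iter=100):
    """Iterative median factorization of a (time x frequency) matrix of
    log-power values into TMedians, FMedians, and residuals.
    """
    R = log_power.copy()
    TMedians = np.zeros(R.shape[0])   # one entry per SFT
    FMedians = np.zeros(R.shape[1])   # one entry per frequency bin
    for _ in range(max_iter):
        t_med = np.median(R, axis=1)  # step 1: per-SFT (row) medians
        TMedians += t_med
        R -= t_med[:, np.newaxis]
        f_med = np.median(R, axis=0)  # step 2: per-bin (column) medians
        FMedians += f_med
        R -= f_med[np.newaxis, :]
        # step 3: stop once all medians computed in steps 1-2 are negligible
        if max(np.abs(t_med).max(), np.abs(f_med).max()) < tol:
            break
    return TMedians, FMedians, R      # R holds the residuals
```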

For stationary, Gaussian noise and for noise that follows the above assumptions of underlying factorized frequency and time dependence, the expected distribution of residual matrix values can be found from simulation. Figure 10 shows a sample expected residual power distribution following noise decomposition for simulated stationary, Gaussian data, along with a sample residual power distribution from the S4 data (0.25-Hz band of H1 near 575 Hz, in this case) following noise decomposition. The agreement in shape between these two distributions is very good and is typical of the S4 data, despite sometimes large variations in the corresponding TMedians and FMedians vectors, and despite, in this case, the presence of a moderately strong simulated pulsar signal (Pulsar2 in Table 5).

The residuals are examined for outliers. If the largest residual value is found to lie above a threshold of 1.5, that corresponding 0.25 Hz band is flagged as containing a “wandering line” because a strong but drifting instrumental line can lead to such outliers. The value 1.5 is determined empirically from Gaussian simulations. An extremely strong pulsar could also be flagged in this way, and indeed the strongest injected pulsars are labelled as wandering lines. Hence in the search, the wandering lines are followed up, but no upper limits are quoted here for the affected bands.

Figure 10: Typical residual logarithmic power following noise decomposition for a sample 0.25-Hz band of H1 data (crosses) near 575 Hz in a band containing an injected pulsar. The residual is defined as the difference between a measured power for a given frequency bin in a given 30-minute period and the value predicted by the FMedians and TMedians vectors. The smooth curve is for a simulation in Gaussian noise.

Line flagging

Sharp instrumental lines can prevent accurate noise estimation for pulsars that have detected frequencies in the same 0.56 mHz bin as the line. In addition, strong lines tend to degrade achievable sensitivity by adding excess apparent power to an affected search. In early LIGO science runs, including the S4 run, there were sharp instrumental lines at multiples of 1 Hz or 0.25 Hz, arising from artifacts in the data acquisition electronics.

To mitigate the most severe of these effects, the PowerFlux algorithm applies a simple line detection and flagging procedure. For each 0.25 Hz band, the detected summed powers are ranked and an estimated Gaussian sigma is computed from the difference between the 50% and 94% quantiles. Any bins with power more than 5.0 estimated standard deviations above the median are marked for ignoring in subsequent processing. Specifically, when carrying out a search for a pulsar of a nominal true frequency, its contribution to the signal estimator is ignored when the detected frequency would lie in the same 0.56 mHz bin as a detected line. As discussed below, for certain frequencies, spin-downs, and points in the sky, the fraction of time a putative pulsar has a detected frequency in a bin containing an instrumental line can be quite large, requiring care. The deliberate ignoring of contributing bins affected by sharp instrumental lines does not bias the resulting limits, but it does degrade sensitivity through loss of data. In any 0.25 Hz band, no more than five bins may be flagged as lines; any band with more than five line candidates is examined manually.
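A minimal sketch of this flagging step, assuming the robust sigma is derived from the 50% and 94% quantiles as described (the constant 1.5548 is the inverse normal quantile at 0.94; the function name and thresholds are illustrative):

```python
import numpy as np

def flag_line_bins(summed_power, n_sigma=5.0, max_lines=5):
    """Flag candidate instrumental-line bins in one 0.25 Hz band.
    Illustrative sketch of the procedure described in the text."""
    q50, q94 = np.quantile(summed_power, [0.50, 0.94])
    sigma = (q94 - q50) / 1.5548        # robust Gaussian sigma estimate
    flagged = np.flatnonzero(summed_power > q50 + n_sigma * sigma)
    # Bands with more than `max_lines` candidates are set aside for
    # manual examination rather than automatic flagging.
    return flagged if len(flagged) <= max_lines else None
```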

Signal estimator

Once the noise decomposition is complete, with estimates of the spectral noise density for each SFT, the PowerFlux algorithm computes a weighted sum of the strain powers, where the weighting takes into account the underlying time and spectral variation contained in TMedians and FMedians and the antenna pattern sensitivity for an assumed sky location and incident wave polarization. Specifically, for an assumed polarization angle $\psi$ and sky location, the following quantity is defined for each bin $i$ of each SFT $t$:

\tilde{p}_{it} = \frac{p_{it}}{F_{\psi t}^{2}} \,, \qquad (34)

where $F_{\psi t}$ is the $\psi$-dependent antenna pattern for the sky location, defined in Eq. (22). (See also Appendix A.)

As in Sec. IV.2, to simplify the notation we define $\tilde{p}_t$ as the value of $\tilde{p}_{it}$ for SFT $t$ in the frequency bin $i$ selected by a given template (an assumed frequency, spin-down, and sky position).

For each individual SFT bin power measurement $p_{it}$, one expects an underlying exponential distribution with a standard deviation equal to the mean, a statement that holds too for $\tilde{p}_{it}$. To minimize the variance of a signal estimator based on a sum of these powers, each contribution is weighted by the inverse of the expected variance of that contribution. Specifically, we compute the following signal estimator:

R = \frac{\sum_t \tilde{p}_t / \tilde{\bar{p}}_t^{\,2}}{\sum_t 1 / \tilde{\bar{p}}_t^{\,2}} \qquad (35)
\;=\; \frac{\sum_t F_{\psi t}^{2}\, p_t / \bar{p}_t^{\,2}}{\sum_t F_{\psi t}^{4} / \bar{p}_t^{\,2}} \,, \qquad (36)

where $\bar{p}_t$ and $\tilde{\bar{p}}_t$ are the expected uncorrected and antenna-corrected powers of SFT $t$, averaged over frequency. Since the antenna factor is constant in this average, $\tilde{\bar{p}}_t = \bar{p}_t / F_{\psi t}^{2}$. Furthermore, $\bar{p}_t$ is an estimate of the power spectral density of the noise. Setting the antenna factor $F_{\psi t}$ to unity recovers Eq. (20).
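As a concrete illustration, the estimator of Eqs. (35)-(36) is an inverse-variance weighted mean of the antenna-corrected powers. A minimal sketch, with `p`, `pbar`, and `F2` standing in for $p_t$, $\bar{p}_t$, and $F_{\psi t}^{2}$ along a template's track (names and interfaces are illustrative assumptions):

```python
import numpy as np

def signal_estimator(p, pbar, F2):
    """Inverse-variance weighted power estimator, sketching Eqs. (34)-(36).
    p[t]: template-selected bin power in SFT t; pbar[t]: frequency-averaged
    power (noise PSD estimate); F2[t]: squared antenna factor F_{psi t}^2."""
    p_tilde = p / F2                 # antenna-corrected powers, Eq. (34)
    pbar_tilde = pbar / F2           # antenna-corrected expected powers
    weights = 1.0 / pbar_tilde**2    # exponential statistics: var = mean^2
    return np.sum(weights * p_tilde) / np.sum(weights)
```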

Note that an SFT with low antenna pattern sensitivity $F_{\psi t}^{2}$ contributes little to the signal estimator. Similarly, SFT's for which the ambient noise is high receive small weights. Because computational time in the search grows linearly with the number of SFT's, and because the noise varies greatly over time, it proves efficient to ignore SFT's whose sky-dependent and polarization-dependent effective noise exceeds a cutoff value. The cutoff procedure saves significant computing time, with negligible effect on search performance.

Specifically, the cutoff is computed as follows. Let $\{\sigma_j\}$ be the ordered estimated standard deviations in noise, taken to be the ordered means over the search template's frequency bins of $\tilde{\bar{p}}_t$. Define $j_{\rm opt}$ to be the index for which the quantity $\sqrt{\sum_{j' \le j} \sigma_{j'}^{2}}\,/\,j$ is minimized. Only SFT's for which $\sigma_j \le 2\,\sigma_{j_{\rm opt}}$ are used for signal estimation. In words, $j_{\rm opt}$ defines the last SFT that improves rather than degrades the signal estimator variance in an unweighted mean. For the weighted mean used here, the effective noise contributions are allowed to be as high as twice the value found for $j_{\rm opt}$; the choice of this factor of two is determined empirically.
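A sketch of this cutoff selection, under the assumptions just stated (the figure of merit and the factor of two follow the text; the function name is hypothetical):

```python
import numpy as np

def select_sfts(noise_sigmas, factor=2.0):
    """Choose which SFTs to keep, per the cutoff procedure in the text:
    sort the effective noise sigmas, find j_opt minimizing the
    unweighted-mean error sqrt(sum_{j'<=j} sigma^2)/j, and keep all SFTs
    whose sigma is at most factor * sigma[j_opt]."""
    sigma = np.sort(np.asarray(noise_sigmas, dtype=float))
    j = np.arange(1, sigma.size + 1)
    mean_error = np.sqrt(np.cumsum(sigma**2)) / j
    j_opt = int(np.argmin(mean_error))
    threshold = factor * sigma[j_opt]
    return np.asarray(noise_sigmas) <= threshold   # boolean keep-mask
```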

The PowerFlux search sets strict, frequentist, all-sky 95% confidence-level upper limits on the flux of gravitational radiation bathing the Earth. To keep the strict limits conservative, numerical corrections to the signal estimator are applied: 1) a factor for maximum linear polarization mismatch, based on twice the maximum half-angle of mismatch (see Appendix A); 2) a factor for bin-centered signal power loss due to Hann windowing (applied during SFT generation); and 3) a factor for drift of the detected signal frequency across the width of the 0.56 mHz bins used in the SFT's. Note that the use of rectangular windowing would eliminate the need for correction 2), but would require a larger correction for 3).

Antenna-pattern and noise weighting in the PowerFlux method allows weaker sources to be detected in certain regions of the sky, where run-averaged antenna patterns discriminate in declination and diurnal noise variations discriminate in right ascension. Figure 11 illustrates the resulting variation in effective noise across the sky for a 0.25-Hz H1 band near 575 Hz for the circular polarization projection. By separately examining the SNR, one may hope to detect a signal in a sensitive region of the sky with a strain significantly lower than suggested by the strict worst-case all-sky frequentist limits presented here, as discussed below in Sec. VI.4. Searches are carried out for four linear polarizations, ranging over polarization angle from $\psi = 0$ to $\psi = 3\pi/8$ in steps of $\pi/8$, and for (unique) circular polarization.

Figure 11: Sky map of run-summed PowerFlux weights for a 0.25-Hz band near 575 Hz for one choice of linear polarization in the S4 H1 data. The normalization corresponds roughly to the effective number of median-noise SFT’s contributing to the sum.

A useful computational savings comes from defining two different sky resolutions. A "coarse" sky gridding is used for setting the cutoff value defined above, while fine grid points are used for both frequency and amplitude demodulation. A typical ratio of the number of fine grid points to the number of coarse grid points used for Doppler corrections is 25.

Sky banding

Stationary and near-stationary instrumental spectral lines can be mistaken for a periodic source of gravitational radiation if the nominal source parameters are consistent with small variation in detected frequency during the time of observation. The variation in the frequency at the detector can be found by taking the time derivative of Eq. (9), which gives,

\frac{df}{dt} \;\simeq\; \dot{\hat{f}} \;+\; \hat{f}\,\frac{\hat{n} \cdot \vec{a}}{c} \,, \qquad (37)

where $\hat{f}$ is the source frequency and $\hat{n}$ the unit vector toward the source. The detector's acceleration $\vec{a}$ in this equation is dominated by the Earth's orbital acceleration, since the diurnal part of the detector's acceleration is small and approximately averages to zero over the observation. Thus, it should be emphasized that a single instrumental line can mimic sources with a range of slightly different frequencies and assumed positions in the sky that lie in an annular band. For a source spin-down assumed to be zero, the center of the band is defined by the circle 90 degrees away from the direction of the Earth's average acceleration during the run, where $\hat{n} \cdot \vec{a} = 0$, i.e., toward the average direction of the Sun during the run. For source spin-downs different from zero, the assumed spin-down (or spin-up) can be largely cancelled by the frequency drift due to the Earth's average acceleration, leading to a shift of the annular region of apparent Doppler stationarity toward (away from) the Sun.

A figure of merit found to be useful for discriminating regions of "good" sky from "bad" sky (where the apparent detected frequency is highly stationary) is the "$S$ parameter":

S \;=\; \hat{n} \cdot \left( \vec{\Omega} \times \vec{v}_{\rm ave} \right) \frac{f}{c} \;+\; \dot{f} \,, \qquad (38)

where $\vec{\Omega}$ is the Earth's angular velocity vector about the solar system barycenter. The term $\vec{\Omega} \times \vec{v}_{\rm ave}$ is a measure of the Earth's average acceleration during the run, where $\vec{v}_{\rm ave}$ is taken to be the noise-weighted velocity of the H1 detector during the run. Regions of sky with small $|S|$ for a given frequency $f$ and spin-down $\dot{f}$ have stationary detected frequency. As discussed below in Sec. VI.4, such regions are not only prone to high false-alarm rates, but the line flagging procedure described in Sec. V.4.2 leads there to systematically underestimated signal strength and invalid upper limits. Hence limits are presented here only for sources with $|S|$ greater than a threshold value denoted $S_{\rm min}$. The minimum acceptable value of $S_{\rm min}$ for the 1-month S4 run, found from software signal injections, can be understood to be

S_{\rm min} \;\sim\; N_{\rm min}\, \frac{\Delta f_{\rm bin}}{T_{\rm obs}} \,, \qquad (39)

where $N_{\rm min}$ is the minimum total number of 0.56 mHz detection bins (of width $\Delta f_{\rm bin}$) that the source must occupy during the data run (of duration $T_{\rm obs}$) for reliable detection. In practice, we use still larger values of $S_{\rm min}$ for the H1 and L1 interferometers during the S4 run for the limits presented here, because of a pervasive and strong comb of precise 1-Hz lines in both interferometers. These lines, caused by a GPS-second synchronized electronic disturbance and worse in L1 than in H1, lead to high false-alarm rates from that data for lower values of $S_{\rm min}$. For the frequency and spin-down ranges searched in this analysis, the average fractions of sky lost to the sky-band veto are 15% for H1 and 26% for L1.
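For illustration, the sky-band veto can be summarized in a few lines, assuming the form of Eq. (38) given above. The ephemeris inputs `omega` and `v_ave` are placeholders for quantities a real search would obtain from its barycentering code:

```python
import numpy as np

C_LIGHT = 299792458.0   # speed of light, m/s

def s_parameter(n_hat, f, fdot, omega, v_ave):
    """Doppler-stationarity figure of merit, sketching Eq. (38).
    n_hat: unit vector to the sky location; f, fdot: template frequency
    (Hz) and spin-down (Hz/s); omega: Earth's orbital angular velocity
    vector (rad/s); v_ave: run-averaged detector velocity (m/s)."""
    a_ave = np.cross(omega, v_ave)        # average orbital acceleration
    return np.dot(n_hat, a_ave) * f / C_LIGHT + fdot

def passes_sky_band_veto(n_hat, f, fdot, omega, v_ave, s_min):
    """Keep only sky points whose |S| exceeds the threshold S_min."""
    return abs(s_parameter(n_hat, f, fdot, omega, v_ave)) > s_min
```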

Figures 12-14 illustrate the variation in the fraction of sky marked as "bad" as the assumed source frequency and spin-down are varied. Generally, at low frequencies large sky regions are affected, but only for low spin-down magnitudes, while at high frequencies small sky regions are affected, but the effects extend to larger spin-down magnitudes. It should be noted that the annular sky regions affected depend upon the start time and duration of a data run: the longer the run, the smaller the region of sky over which the detected frequency remains stationary. Future LIGO data runs of longer duration should have only small regions near the ecliptic poles for which stationary instrumental lines prove troublesome.

Figure 12: S4 sky band regions (good - light gray, bad for L1 - medium or dark gray, bad for H1 & L1 - dark gray) for a source frequency of 100 Hz and three different assumed spin-down choices: a) zero, and b), c) two non-zero values (in Hz s$^{-1}$). The black circle indicates the average position of the Sun during the data run.
Figure 13: S4 sky band regions (good - light gray, bad for L1 - medium or dark gray, bad for H1 & L1 - dark gray) for a source frequency of 300 Hz and three different assumed spin-down choices: a) zero, and b), c) two non-zero values (in Hz s$^{-1}$). The black circle indicates the average position of the Sun during the data run.
Figure 14: S4 sky band regions (good - light gray, bad for L1 - medium or dark gray, bad for H1 & L1 - dark gray) for a source frequency of 1000 Hz and three different assumed spin-down choices: a) zero, and b), c) two non-zero values (in Hz s$^{-1}$). The black circle indicates the average position of the Sun during the data run.

Grid-point upper limit determination

An intermediate step in the PowerFlux analysis is the setting of upper limits on signal strength for each sky point in each 0.56 mHz bin. The limits presented here for each interferometer are the highest of these intermediate limits for each 0.25-Hz band over the entire "good" sky. The intermediate limits are set under the assumption of Gaussian residuals in the noise. In brief, for each 0.56 mHz bin and sky point, a Feldman-Cousins (36) 95% confidence-level interval is set for an assumed normal distribution with a standard deviation determined robustly from quantiles of the entire 0.25 Hz band. The Feldman-Cousins approach provides a well-behaved upper limit even when the background noise fluctuates well below its expectation value, together with a smooth transition between 1-sided and 2-sided limits; in practice, however, the highest upper limit for any 0.25 Hz band is invariably the highest measured power plus 1.96 times the estimated standard deviation of the background power for that bin, corresponding to a conventional a priori 1-sided 97.5% upper confidence limit. A Kolmogorov-Smirnov (KS) statistic is computed to check the actual power distribution against a Gaussian for each 0.25 Hz band. Bands that fail the KS test value of 0.07 (about a 5$\sigma$ deviation for the S4 data sample) are flagged as "non-Gaussian", and no upper limits on pulsars are quoted here for those bands, although a full search is carried out. Bands subject to violin modes and harmonics of the 60 Hz power mains tend to fail the KS test because of sharp spectral slope (and sometimes because non-stationarity of sharp features leads to poor noise factorization).
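In practice, then, the per-band limit reduces to a simple computation. A minimal sketch, assuming the robust sigma is again estimated from the 50% and 94% quantiles of the band (an assumption; the text does not specify which quantiles are used at this step):

```python
import numpy as np

def band_upper_limit(bin_powers):
    """Highest per-bin upper limit in a 0.25 Hz band: the largest measured
    power plus 1.96 robust standard deviations, i.e. a one-sided 97.5% CL,
    which in practice reproduces the Feldman-Cousins result quoted above."""
    q50, q94 = np.quantile(bin_powers, [0.50, 0.94])
    sigma = (q94 - q50) / 1.5548     # robust Gaussian sigma estimate
    return np.max(bin_powers) + 1.96 * sigma
```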

Figure 15 provides an example of derived upper limits for one narrow band. The figure shows the distribution of PowerFlux strain upper limits on linear polarization amplitude for a sample 0.25 Hz band of S4 H1 data near 149 Hz. The highest upper limit found in the band determines the worst-case pulsar upper limit on $h_0$ quoted for it. The bimodal distribution arises from regions of the sky with intrinsically different antenna pattern sensitivities: the peak at higher strain values corresponds to points near the celestial equator, where the run-averaged antenna pattern sensitivity is worst.

Figure 15: Histogram of Feldman-Cousins 95% confidence-level upper limits in a 0.25-Hz band near 149 Hz in S4 H1 data. Each entry corresponds to the highest upper limit in the band for a single sky location.

VI Hardware Injections and Validation

Table 5: Nominal parameters of the ten hardware-injected pulsars, Pulsar0 through Pulsar9: name, frequency (Hz), spin-down, and sky-position and orientation angles (radians).