Statistical and systematic errors for gravitational-wave inspiral signals: A principal component analysis


Frank Ohme (School of Physics and Astronomy, Cardiff University, The Parade, Cardiff, CF24 3AA, United Kingdom; Max-Planck-Institut für Gravitationsphysik, Callinstrasse 38, D-30167 Hannover, Germany)
Alex B. Nielsen, Drew Keppel, Andrew Lundgren (Max-Planck-Institut für Gravitationsphysik, Callinstrasse 38, D-30167 Hannover, Germany)
Abstract

Identifying the source parameters from a gravitational-wave measurement alone is limited by our ability to discriminate signals from different sources and the accuracy of the waveform family employed in the search. Here we address both issues in the framework of an adapted coordinate system that allows for linear Fisher-matrix type calculations of waveform differences that are both accurate and computationally very efficient. We investigate statistical errors by using principal component analysis of the post-Newtonian (PN) expansion coefficients, which is well conditioned despite the Fisher matrix becoming ill conditioned for larger numbers of parameters. We identify which combinations of physical parameters are most effectively measured by gravitational-wave detectors for systems of neutron stars and black holes with aligned spin. We confirm the expectation that the dominant parameter of the inspiral waveform is the chirp mass. The next dominant parameter depends on a combination of the spin and the symmetric mass ratio. In addition, we can study the systematic effect of various spin contributions to the PN phasing within the same parametrization, showing that the inclusion of spin-orbit corrections up to next-to-leading order, but not necessarily of spin-spin contributions, is crucial for an accurate inspiral waveform model. This understanding of the waveform structure throughout the parameter space is important to set up an efficient search strategy and correctly interpret future gravitational-wave observations.

pacs:
04.80.Nn, 04.25.Nx, 04.30.Db,
preprint: AEI-2013-049, LIGO-P1300017

I Introduction

Ground-based gravitational-wave (GW) detectors of the Laser Interferometer Gravitational-wave Observatory (LIGO) Abbott et al. (2009); Sigg (2008); Smith (2009); Harry (2010) and Virgo Acernese et al. (2008); Accadia et al. (2011) collaborations are currently being upgraded to provide sensitivities capable of directly detecting GWs from compact binary coalescences of binary black hole (BH) and neutron star (NS) systems Abadie et al. (2010). Such detections would constitute the first direct observations of NS-BH and binary BH systems. The gravitational waveforms from these systems will provide unprecedented information about their physical nature, and extracting this information relies on overlapping the noisy detector data with accurate theoretical signal predictions.

The waveform from a quasicircular inspiraling compact binary system can be obtained from knowledge of the energy and energy flux of the system. In general relativity these can be calculated perturbatively in a (v/c) expansion (where v is the relative velocity of the bodies and c is the speed of light), known as a post-Newtonian (PN) expansion (see, e.g., Blanchet (2006) and references therein). These calculations provide the coefficients in such an expansion in terms of the fundamental physical parameters. Various different expansion schemes exist that lead to different approximants Buonanno et al. (2009). For quasicircular, adiabatic orbits, the tangential velocity can be related to the orbital frequency of the compact bodies, which, for the dominant GW mode, is equivalent to half the GW frequency. In this way an expansion in the GW frequency, f, can be obtained, with each successive term corresponding to a higher order in the PN expansion.

These waveforms depend on a number of physical parameters, such as the masses and the magnitudes and orientations of the objects’ spins. An important task will be extracting as much of this information as possible given the observational constraints and detector sensitivities. Although the masses and spins of the constituent objects are typically the parameters of greatest astrophysical interest, in practice the detectors are actually sensitive to combinations of these parameters. This is because the detector does not respond to variations in the individual source parameters directly; only waveform variations that are louder than the noise background can be distinguished. An example of this may be seen already in the Newtonian regime, where the waveform of the binary inspiral depends only on what is commonly called the chirp mass, ℳ, a combination of the two individual masses m_1 and m_2 given by ℳ = (m_1 m_2)^{3/5} / (m_1 + m_2)^{1/5}. Systems with the same chirp mass emit GWs with the same phase in this simple approximation, and they cannot be distinguished. Although this degeneracy is broken by higher-order terms, it remains true that GW detections can put much stronger constraints on a combination of the masses characteristic of the binary system, in this case the chirp mass, than they can on the physical parameters of either individual object.
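As a concrete numerical illustration of this Newtonian degeneracy (our own sketch, not taken from the paper, with illustrative mass values), the snippet below computes the chirp mass and then finds, by bisection, a rather different mass pair that shares the chirp mass of an equal-mass 1.4+1.4 binary and is therefore indistinguishable at Newtonian order:

```python
def chirp_mass(m1, m2):
    """Chirp mass (m1*m2)^(3/5) / (m1+m2)^(1/5); masses in solar units."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def companion_for_chirp(m1, mc_target, lo=0.1, hi=100.0, tol=1e-10):
    """Bisect for the companion mass m2 giving the target chirp mass.
    For fixed m1, the chirp mass grows monotonically with m2."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if chirp_mass(m1, mid) < mc_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mc = chirp_mass(1.4, 1.4)           # ~1.22 solar masses
m2 = companion_for_chirp(1.0, mc)   # a (1.0, ~2.0) binary with the same chirp mass
```

To Newtonian order, the (1.4, 1.4) and (1.0, m2) systems produce identical phase evolutions despite their quite different mass ratios.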

One major difference between Newtonian dynamics and general relativity is that in the relativistic domain the spin angular momenta of the inspiraling objects affect the orbit and thus the gravitational waveform. In general relativity these spin effects first show up in the PN expansion as spin-orbit interactions at 1.5PN order, corresponding to corrections of order (v/c)^3 beyond the Newtonian term. At higher orders one also encounters spin-spin interactions. With PN corrections beyond the Newtonian term, and in particular spin effects, it is less obvious which combinations of the physical parameters are most accurately measured, but empirical overlap studies Baird et al. (2013); Hannam et al. (2013) have recently pointed out that, apart from the chirp mass, there is a close degeneracy between mass ratio and spin for BHs with spins aligned with the orbital angular momentum. (The waveforms of spinning NSs exhibit additional degeneracies, e.g., between the NS spin and the equation-of-state dependent quadrupole moment, but in this paper we neglect the spin of the NS as it is expected to be small in compact binary systems Bildsten and Cutler (1992); Kochanek (1992).)

Here we formalize the search for well-measurable parameters and degeneracies in the PN inspiral waveform. We employ a linearization of waveform differences equivalent to the Fisher-matrix approximation Vallisneri (2008), but we demonstrate that a convenient higher-dimensional coordinate choice in terms of the PN expansion coefficients allows for very accurate, yet computationally cheap calculations of the waveform (dis)agreement.

The method we employ is similar to the one used by Tagoshi and Tanaka Tanaka and Tagoshi (2000), Sathyaprakash and Schutz Sathyaprakash and Schutz (2003), Pai and Arun Pai and Arun (2013) and Brown et al. Brown et al. (2012a). We write the waveform as a series expansion in frequency space and use the expansion coefficients as model parameters to construct a Fisher matrix. Using the eigenvalues and eigenvectors of this Fisher matrix, we then determine which combinations of the expansion coefficients the detector is most sensitive to, which amounts to finding the principal components of the Fisher matrix. In contrast to Pai and Arun, our focus is on determining the best-measured combinations of parameters given aligned-spin general relativity waveforms and an Advanced LIGO noise curve Shoemaker (). We also discuss implications of a parameter-dependent frequency cutoff.

An example of principal component analysis (PCA) of spinning signals for the proposed LISA (Laser Interferometer Space Antenna) space-based mission was given in Sathyaprakash and Schutz (2003). We, however, consider ground-based detectors, specifically Advanced LIGO, where the expected signal-to-noise ratio (SNR) of most detections is rather low. Then it will be especially important to know which parameters can be measured since it is unlikely that all physical parameters will be measurable with reasonable accuracy. The number of principal components with a prescribed accuracy determined by the PCA will define an effective dimension of the space of solutions to be searched Brown et al. (2012a). A solution space with a small effective dimension will need relatively few templates to be searched, which speeds up search times Tanaka and Tagoshi (2000); Brown et al. (2012a).

Our main aim here is to determine what the principal components represent physically. This cannot reduce any uncertainty in the measurement of physical parameters, which is typically large because of the correlation between the parameters. However, an uncorrelated set of parameters will give more tightly constrained directions in the likelihood space and also provide a convenient coordinate system in which to evaluate the overlap of differing waveforms.

In calculating the principal components we use as much information about the PN coefficients of aligned-spin systems as is currently available. The functional dependence of the PN coefficients on the physical parameters dictates how our principal components vary across parameter space. Furthermore, we investigate what the contributions of various terms in these coefficients are and how excluding them might bias the recovered parameters. This helps to show which terms are important in the parameter-estimation problem and gives some indication of how yet-to-be-calculated terms may affect our results.

This paper is organized as follows. After a general introduction of the Fisher-matrix approximation and PCA in Sec. II.1, we specify the waveform model in Sec. II.2 and argue by virtue of the Bauer-Fike theorem that even though the higher-dimensional Fisher matrix may be ill conditioned, the corresponding principal components with large eigenvalues can be calculated accurately. In Sec. III we demonstrate explicitly the superiority of our approach over standard Fisher-matrix estimates in terms of physical parameters, and we extensively analyze the physical dependence of the first principal components. Section IV extends our algorithm to the case of different waveform models, which enables us to identify crucial contributions to the PN phasing. We conclude with Sec. V.

II Waveform and methodology

II.1 Fisher matrix and principal component analysis

The fundamental question underlying matched-filter searches for GWs is how different a waveform h_1 (the detected signal) is from another waveform h_2 (the template). Assuming a GW detector with noise spectral density S_n(f), the appropriate difference is commonly defined by

||h_1 − h_2||^2 = ⟨h_1 − h_2, h_1 − h_2⟩,   (1)

where the inner product reads

⟨h_1, h_2⟩ = 4 Re ∫_{f_min}^{f_max} [h̃_1(f) h̃_2*(f) / S_n(f)] df.   (2)

Here, h̃ denotes the Fourier transform of h, and * denotes complex conjugation. Throughout this paper, we shall assume the noise spectral density S_n(f) of Advanced LIGO in the zero-detuned, high-power configuration detailed in Shoemaker (), with a lower cutoff frequency f_min. The upper frequency cutoff, f_max, is determined by the signal templates we assume. Since we are dealing exclusively with inspiral signals and ignoring the merger and ringdown phases, it is common practice to cut the inspiral frequencies at the equivalent innermost stable circular orbit (ISCO) of the Schwarzschild spacetime, at

f_ISCO = c^3 / (6^{3/2} π G M)   (3)

(where M = m_1 + m_2 is the total mass of the system). We could use a more general, possibly spin-dependent, description of the upper cutoff frequency, but that is not the focus of this paper, and our algorithm is readily applicable to more complicated forms of f_max.
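For concreteness, the inner product (2) and the cutoff (3) can be discretized in a few lines of Python. This is a sketch with toy inputs: in practice h1, h2 and the PSD would be sampled from a waveform model and a tabulated detector noise curve.

```python
import numpy as np

def inner_product(h1, h2, psd, df):
    """Discretized noise-weighted inner product:
    <h1, h2> = 4 Re sum_k h1[k] * conj(h2[k]) / psd[k] * df."""
    return 4.0 * df * np.real(np.sum(h1 * np.conj(h2) / psd))

def f_isco(total_mass_solar):
    """Schwarzschild ISCO GW frequency, f_ISCO = c^3 / (6^{3/2} pi G M)."""
    G = 6.674e-11      # m^3 kg^-1 s^-2
    c = 2.998e8        # m/s
    msun = 1.989e30    # kg
    M = total_mass_solar * msun
    return c ** 3 / (6.0 ** 1.5 * np.pi * G * M)
```

For a binary with total mass 11.4 solar masses, f_isco returns roughly 386 Hz, which would set the upper limit of the integration grid.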

By evaluating the distance (1) or the overlap (2) one can, with some confidence, draw conclusions about the origin of the detected signal, if the template agrees very well with the data. Two main issues arise, however. Various templates with different source parameters may all agree well in terms of the predicted GW signals, and if the remaining small differences are buried below the detector noise level it becomes impossible to definitively identify the true source parameters from just the GW observation. In addition, the template family may not be an exact representation of the real waveform, which again limits our ability to unambiguously identify the source of the detected signal.

We shall analyze both effects here, statistical uncertainties due to similar waveforms for different parameters and systematic biases due to inaccuracies in the waveform model. As such analyses are crucial for the correct interpretation of GW observations, there are already a number of publications addressing these issues under various assumptions. In particular, PN inspiral waveforms have been analyzed by several authors, who calculated the measurement accuracy of various source parameters assuming a similar frequency-domain model to the one we employ here, but to lower PN expansion order Finn and Chernoff (1993); Jaranowski and Królak (1994); Cutler and Flanagan (1994); Poisson and Will (1995); Nielsen (2013) or only for nonspinning systems Arun et al. (2005). Systematic errors between different PN approximants describing nonspinning systems have been studied with extensive overlap calculations in Buonanno et al. (2009).

We elaborate on the existing insights here by improving the linearization of (1) through a suitable coordinate choice in combination with a PCA, which allows us to understand in a systematic way which combinations of physical parameters are best constrained and which analytical contributions to the inspiral waveform are crucial to correctly recover the source parameters.

The details of our strategy are as follows. We assume that h_1 and h_2 can be parametrized by a common waveform manifold such that h_1 = h(λ_1) and h_2 = h(λ_2), where λ is the vector of waveform parameters with components λ^i (we shall put these in concrete terms in Sec. II.2). With a minimization of the distance (1) in mind, we next apply the well-known linearization

||h_1 − h_2||^2 ≈ Γ_ij Δλ^i Δλ^j,   (4)

where Δλ^i = λ_1^i − λ_2^i and Γ_ij is the Fisher information matrix,

Γ_ij = ⟨∂h/∂λ^i, ∂h/∂λ^j⟩.   (5)

For more details about this approach and the validity of the Fisher-matrix approximation, see for instance Vallisneri (2008). We simply use it as a convenient linearization here.

The inverse of the Fisher matrix is the covariance matrix of the waveform parameters, C = Γ^{−1}, and instead of quoting the variances of the parameters used (as done, e.g., in Finn and Chernoff (1993); Jaranowski and Królak (1994); Cutler and Flanagan (1994); Poisson and Will (1995); Arun et al. (2005); Nielsen (2013)), we proceed by diagonalizing the Fisher matrix. The result can be written as

Γ_ij = Σ_k v_i^{(k)} σ_k v_j^{(k)},   (6)

where v_i^{(k)} denotes the ith component of the kth eigenvector of the Fisher matrix, and the σ_k are the positive eigenvalues, which can be collected in a diagonal matrix Λ with Λ_{kk} = σ_k. Since the eigenvectors are also eigenvectors of the covariance matrix, C = Γ^{−1}, we have thus performed a PCA, and the vector μ with components

μ_k = Σ_i v_i^{(k)} λ^i   (7)

describes the principal components of the system.

Working with these coordinates rather than the original parametrization has the great advantage that the waveform difference (4) becomes simply

||h_1 − h_2||^2 ≈ Σ_k σ_k (Δμ_k)^2.   (8)

We can now easily conclude from the size of the eigenvalues σ_k which principal components (or which principal directions in parameter space) affect the waveform strongly. This will be important to understand how well constrained and therefore measurable certain parameter combinations are, given that waveforms that differ below the noise floor cannot be distinguished from each other.
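Numerically, the diagonalization (6)–(8) is a single symmetric eigendecomposition. The toy example below (an assumed 3×3 Fisher matrix, for illustration only) verifies that the quadratic form (4) equals the sum over principal components (8):

```python
import numpy as np

# Toy symmetric Fisher matrix (assumed values for illustration only).
gamma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.2],
                  [0.5, 0.2, 1.0]])

# eigh returns eigenvalues in ascending order for a symmetric matrix.
eigvals, eigvecs = np.linalg.eigh(gamma)

# Principal-component variations: dmu_k = sum_i v_i^(k) * dlambda^i.
dlam = np.array([0.01, -0.02, 0.03])
dmu = eigvecs.T @ dlam

# ||delta h||^2 in the two equivalent forms, Eq. (4) and Eq. (8):
d2_fisher = dlam @ gamma @ dlam
d2_pca = np.sum(eigvals * dmu ** 2)
assert np.isclose(d2_fisher, d2_pca)
```

The eigenvectors belonging to the largest eigenvalues are the directions in parameter space along which the waveform changes fastest, i.e., the best-measured combinations.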

Typically, the smallest difference that is measurable is quoted to be

||h_1 − h_2||^2 = 1;   (9)

see Lindblom et al. (2008) and further discussions in Lindblom (2009); Damour et al. (2011). Here, however, we follow the recent discussion by Baird et al. Baird et al. (2013), who detail that the 90% confidence interval in the posterior probability distribution is given to linear order by

||h_1 − h_2||^2 ≤ χ_k^2(0.9),   (10)

where χ_k^2(0.9) is the value for which the probability of obtaining that value or less in a χ^2 distribution with k degrees of freedom is 90%. We shall later restrict ourselves to three physical parameters (the two masses of the compact objects and one spin magnitude), where k = 3, and waveforms with distance

||h_1 − h_2||^2 ≤ 6.25   (11)

cannot be distinguished at 90% confidence. Note that for SNR ρ = 10, this is approximately equivalent to the region of waveforms with overlap greater than 97% Baird et al. (2013).
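The threshold χ_k^2(0.9) can be obtained from any statistics library; to keep the sketch dependency-free, we invert the closed-form CDF of the χ^2 distribution with three degrees of freedom by bisection, recovering the 6.25 quoted in (11):

```python
import math

def chi2_cdf_3dof(x):
    """CDF of the chi-square distribution with 3 degrees of freedom:
    F(x) = erf(sqrt(x/2)) - sqrt(2x/pi) * exp(-x/2)."""
    return math.erf(math.sqrt(x / 2)) - math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

def chi2_quantile_3dof(p, lo=0.0, hi=50.0, tol=1e-10):
    """Invert the CDF by bisection (the CDF is monotonically increasing)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if chi2_cdf_3dof(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(chi2_quantile_3dof(0.90))   # ~6.25, the threshold in Eq. (11)
```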

II.2 Waveform model

Let us specify in this section what waveform manifold we are using, which set of parameters we are considering, and how we can take advantage of the methods presented in the previous section.

In GW data analysis, the inspiral waveform of a coalescing compact binary is most conveniently expressed in a closed form in the Fourier domain, which allows for a very fast evaluation of thousands of templates, if needed. We shall use the same signal description here, which is commonly referred to as the “TaylorF2” approximant. Derived from the stationary-phase approximation of the PN energy balance law Damour et al. (2001, 2002), it reads

h̃(f) = A (f/f_0)^{−7/6} e^{iΨ(f)},   (12)

where A is an amplitude term, which we set by requiring a particular SNR ρ, and f_0 is an arbitrary reference frequency as detailed below. We do not consider contributions from higher harmonics, which can be found in Blanchet et al. (2008); Arun et al. (2009); hence A is simply a constant determined by the binary’s total mass, distance, orientation and sky location. With this assumption, (12) is often called the restricted PN waveform Buonanno et al. (2009).

The phase, Ψ(f), is given as a series in the gravitational-wave frequency f,

Ψ(f) = Σ_i [ψ_i + ψ_i^L ln(f/f_0)] (f/f_0)^{(i−5)/3},   (13)

where we introduce f_0 to make all coefficients dimensionless. The expansion coefficients ψ_i and ψ_i^L have been determined within the PN formalism in various publications (see Blanchet (2006) and references therein). Currently, the highest PN order to which they are known is 3.5 (i = 7) for nonspinning contributions. Spin-dependent contributions enter as leading-order and next-to-leading-order spin-orbit effects at 1.5 and 2.5PN order Kidder (1995); Apostolatos et al. (1994); Blanchet et al. (2006). We also include the tail-induced spin-orbit contribution to the flux at 3PN order Blanchet et al. (2011). In addition, spin-spin effects are included at relative 2PN order Kidder (1995); Damour (2001); Poisson (1998); Mikoczi et al. (2005), and a 2.5PN contribution to the flux is associated with energy flux into the BH Alvi (2001). The explicit set of coefficients we employ can be read off the phase expansion in Eq. (2.91) of Ohme (2012a), or equivalently Eq. (A21) of Ajith et al. (2007), with the exception of the 3PN term, where we also add the recently calculated tail-induced spin-orbit term [see Eq. (6.6) of Blanchet et al. (2011)]. We note that in the course of finalizing this paper, PN spin contributions at even higher expansion order have been computed in Bohé et al. (2013); these are not included in this study.
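The phase series (13) is straightforward to evaluate once the coefficients are given. The helper below is a sketch: the coefficient dictionaries (keyed by PN index i; any numerical values passed in an example call are hypothetical) would be filled from the PN expressions cited above.

```python
import numpy as np

def taylorf2_phase(f, psi, psi_log, f0=100.0):
    """Evaluate Psi(f) = sum_i [psi_i + psi_i^L * ln(f/f0)] * (f/f0)^((i-5)/3).

    `psi` and `psi_log` map the PN index i to the dimensionless
    coefficients psi_i and psi_i^L, respectively."""
    x = np.asarray(f, dtype=float) / f0
    total = np.zeros_like(x)
    for i, c in psi.items():
        total += c * x ** ((i - 5) / 3.0)
    for i, c in psi_log.items():
        total += c * np.log(x) * x ** ((i - 5) / 3.0)
    return total
```

Note that the i = 5 term carries no frequency dependence and the i = 8 term is linear in f, which is why these two map onto phase and time shifts.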

Three physical source parameters define the binaries we consider: one object is a BH of mass m_1; the other is a NS of mass m_2. In addition, the BH is allowed to have a spin S_1 aligned with the direction of the orbital angular momentum of the binary, which we parametrize through the dimensionless quantity

χ = c S_1 / (G m_1^2).   (14)

The NS spin is assumed to be negligible and set to zero. We could also include the spin of the second compact object without any modification to our algorithm, as long as the spins are aligned with the orbital angular momentum. However, astrophysical expectations are that NSs in compact binaries do not spin rapidly Bildsten and Cutler (1992); Kochanek (1992), which is confirmed by the fact that the highest NS spin parameter in a binary observed to date Burgay et al. (2003) has a value of only χ ≈ 0.05. In addition, it was argued from a purely GW data analysis point of view that two spin parameters can efficiently be mapped onto one effective spin parameter, essentially without changing the waveform manifold Ajith et al. (2011); Santamaría et al. (2010); Ajith (2011). In that sense, one spinning BH can simply be seen as a representative of the class of aligned-spin systems with the same effective spin.

We thus consider the PN coefficients in (13) generally as functions of (m_1, m_2, χ). Note, however, that we work under the assumption of general relativity, in which ψ_1 = 0 and many ψ_i^L terms are exactly zero.[1] In order to capture waveforms of any gravitational theory, as performed in for example Li et al. (2012), who include a possible nonzero ψ_1 term, these terms would need to be incorporated. In this work we focus on the general relativity results and set these terms explicitly to zero.

[1] In non-Einstein theories some of these terms may not be zero; for example, in Brans-Dicke gravity a dipole-radiation term arises already at i = −2 (i.e., at −1PN order) Yunes and Pretorius (2009).

Two coefficients deserve further attention. ψ_5 is a constant phase that comes with no frequency-dependent factor [the exponent (i − 5)/3 vanishes for i = 5], and it represents what is usually referred to as an additional parameter: the “phase at coalescence” or the initial phase. The corresponding initial time or “time at coalescence” is included in ψ_8, which is a linear contribution to the phase. When we estimate waveform differences later, we are not interested in any discrepancy caused by time or phase shifts, so we shall project these parameters out of the Fisher matrix.

Now that we have defined our parameters and waveform model, we could proceed by calculating the Fisher matrix (5) for the parameters (m_1, m_2, χ, t_c, φ_c), where t_c and φ_c represent the free time and phase shifts, or equivalently for any other five-dimensional parametrization that can uniquely be mapped to the above parameters. It was found, however, in various publications that assuming a moderate SNR (between 10 and 20) and the noise spectral density of the (initial or advanced) LIGO detectors leads to rather large statistical uncertainties in some parameters of interest. The linearization of the waveform difference implies that waveforms can definitively be told apart only if parameters like the individual masses or the mass ratio are considerably different Cutler and Flanagan (1994); Poisson and Will (1995); Nielsen (2013) (to the order of tens of percent and more). The validity of the linearization is rightly questioned in these cases, and the true confidence interval (10) likely has more structure than the ellipsoid predicted in the Fisher-matrix approach.

We partly circumvent these problems here by using the PN coefficients themselves as free parameters rather than the original physical parameters, i.e.,

λ = (ψ_0, ψ_2, ψ_3, ψ_4, ψ_5, ψ_5^L, ψ_6, ψ_6^L, ψ_7, ψ_8).   (15)

All nonzero PN coefficients up to i = 8 are now considered to be free parameters, which yields a ten-dimensional parameter space that includes eight ψ_i (i = 0, 2, 3, 4, 5, 6, 7, 8) and two ψ_i^L (i = 5, 6). As noted above, the same idea, combined with an eigenvector analysis, has already been presented by Tagoshi and Tanaka Tanaka and Tagoshi (2000) and Brown et al. Brown et al. (2012a) in the context of template placing, and by Pai and Arun Pai and Arun (2013) to probe PN theory with GW observations. We shall show below how this trick is also useful to understand statistical and systematic errors of inspiral waveforms. One important feature we will exploit is that the Fisher matrix (5) becomes almost parameter independent for the choice (15), which makes extrapolating the waveform difference (4) to large parameter variations much more accurate Tanaka and Tagoshi (2000); Babak et al. (2006). Specifically, the only parameter dependence in the resulting matrix

Γ_ij = 4A^2 ∫_{f_min}^{f_ISCO} (f/f_0)^{n_ij} [ln(f/f_0)]^{l_ij} S_n(f)^{−1} df   (16)

is inherited from the upper cutoff frequency (3). The exponents n_ij and l_ij are solely functions of i and j. (l_ij is simply 0, 1 or 2, depending on the number of logarithmic coefficients among λ_i and λ_j; n_ij depends on the PN order that each component corresponds to. Given the mapping λ_i ↔ ψ_{k_i}, we can express n_ij = (k_i + k_j − 17)/3.)
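In this parametrization every Fisher element reduces to a single noise-moment integral of the form ∫ (f/f_0)^n [ln(f/f_0)]^l / S_n(f) df. The sketch below builds the full 10×10 matrix on a frequency grid. The PSD here is a toy analytic shape (an assumption for illustration; a real analysis would tabulate the Advanced LIGO curve), and the exponent rule n = (k_i + k_j − 17)/3 follows from differentiating the restricted PN waveform with respect to the ψ coefficients:

```python
import numpy as np

# PN index k and log-flag l for each of the ten coefficients in Eq. (15):
# eight psi_k (k = 0,2,3,4,5,6,7,8) and two psi_k^L (k = 5,6).
coeffs = [(0, 0), (2, 0), (3, 0), (4, 0), (5, 0),
          (5, 1), (6, 0), (6, 1), (7, 0), (8, 0)]

f = np.linspace(10.0, 386.0, 4000)                            # Hz
psd = 1e-46 * ((f / 100.0) ** -4 + 2.0 + (f / 100.0) ** 2)    # toy PSD shape

def gamma_entry(ki, li, kj, lj, f0=100.0):
    """Unnormalized moment: ∫ (f/f0)^((ki+kj-17)/3) [ln(f/f0)]^(li+lj) / Sn df."""
    x = f / f0
    integrand = x ** ((ki + kj - 17) / 3.0) * np.log(x) ** (li + lj) / psd
    return integrand.sum() * (f[1] - f[0])    # simple Riemann sum

n = len(coeffs)
gamma = np.array([[gamma_entry(*coeffs[i], *coeffs[j]) for j in range(n)]
                  for i in range(n)])
print("condition number: %.1e" % np.linalg.cond(gamma))   # large: ill conditioned
```

The printed condition number is huge, illustrating why inverting this matrix directly is delicate while its leading eigenvalues remain trustworthy (Sec. II.3).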

We shall later show how we can exploit the fact that in this convenient parametrization, finding the confidence interval around a given target signal transforms to a simple geometric exercise in a flat space, assuming we can fix the cutoff frequency f_ISCO parameter independently (which we shall do in Secs. III.2 and IV). An illustration of the advantages and caveats of this geometric interpretation is provided by Fig. 1. In addition, we shall show in Sec. III.1 how a weak nonflatness, introduced by a parameter-dependent f_ISCO, can be incorporated properly in accurate overlap calculations.

Figure 1: Geometric interpretation of calculating waveform differences in a higher-dimensional flat space (8). The actual manifold of inspiral waveforms is illustrated as a two-dimensional curved surface that can be embedded in a higher-dimensional flat space [see coordinate choice (15)], here depicted in three dimensions. The confidence interval has a trivial geometry in the higher-dimensional space (here illustrated as a ball around the target waveform, which is shown as a white dot), which has to be projected back onto the physical waveform manifold.

As mentioned earlier, the waveform variations we are interested in are obtained by projecting the time and phase shifts out of the Fisher matrix, and we can do so following the simple procedure Owen (1996)

Γ̃_ij = Γ_ij − Γ_ia (γ^{−1})^{ab} Γ_bj.   (17)

Here, (γ^{−1})^{ab} denotes the inverse of the two-dimensional submatrix of Γ corresponding to the parameters ψ_5 (phase shift) and ψ_8 (time shift), over which the indices a, b run. The projected matrix Γ̃ is effectively eight dimensional; thus we shall find eight eigenvectors and eigenvalues that govern waveform changes according to (8).
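The projection (17) is a Schur complement of the full Fisher matrix with respect to the rows and columns of the projected parameters. A generic sketch (with a toy random positive-definite matrix standing in for the Fisher matrix):

```python
import numpy as np

def project_out(gamma, idx):
    """Project the parameters with indices `idx` out of the Fisher matrix:
    Gamma~_ij = Gamma_ij - Gamma_ia (gamma^-1)^ab Gamma_bj  (Owen 1996),
    then drop the projected rows and columns."""
    idx = list(idx)
    keep = [i for i in range(gamma.shape[0]) if i not in idx]
    g_small = gamma[np.ix_(idx, idx)]      # submatrix of projected parameters
    cross = gamma[np.ix_(keep, idx)]       # cross terms
    return gamma[np.ix_(keep, keep)] - cross @ np.linalg.inv(g_small) @ cross.T

# Toy example: project the last two parameters out of a 4x4 Fisher matrix.
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 6))
gamma = a @ a.T                            # symmetric positive definite
proj = project_out(gamma, [2, 3])
```

The result is the effective Fisher matrix for the remaining parameters, i.e., the waveform distance minimized over the projected-out time and phase shifts.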

Let us mention two caveats before we continue with presenting our results. First, in calculating the Fisher matrix with more than just our five physical parameters, we implicitly treat all PN coefficients as independent, which is clearly not true for the underlying lower-dimensional waveform manifold. So we have to take care to “project” our results back onto physically meaningful quantities. We shall do so by reexpressing the principal components not just as functions of PN coefficients, but eventually as functions of the physical parameters.

Doing so will render all the principal components as functions of the physical parameters and in the underlying lower-dimensional waveform manifold they will not all be independent. When the number of PN coefficients is larger than the number of physical parameters (which it typically will be in our applications) the excess principal components will not give additional information about the physical parameters. In practice we will find that the errors on these excess principal components will be large anyway and they can be ignored. Note that a different strategy was followed in Brown et al. (2012a) where the authors perform an additional PCA to rotate the original principal directions onto the physical manifold. This additional step, however, depends on a freely chosen sample of source parameters, and we instead focus on the identification of, in absolute terms, best measurable combinations of PN coefficients, independent of how the particular model is oriented in that space.

II.3 Numerical stability

The second issue to be aware of is that the condition number of the Fisher matrix, which we can view as the ratio of the greatest and smallest eigenvalues of Γ, can become quite large for the matrices we are considering. This means that inverting the matrix can become susceptible to numerical errors. Calculating the eigenvalues of these matrices, however, is still a well-conditioned problem by virtue of the Bauer-Fike theorem, which states Datta (2010)

min_σ |σ̃ − σ| ≤ κ(V) ||E||_2   (18)

for a diagonalizable matrix Γ = V Λ V^{−1} with eigenvalues σ and a perturbed matrix Γ + E with eigenvalue σ̃. The factor κ(V) = ||V||_2 ||V^{−1}||_2 is the condition number of the diagonalizing matrix V, and ||E||_2 denotes the matrix 2-norm of the perturbation matrix E. Since, in the case of a real symmetric matrix Γ, the diagonalizing matrix is orthogonal with condition number κ(V) = 1, we are guaranteed that the eigenvalue problem is well conditioned. A small change in Γ caused by numerical truncation error will only cause a small change in the eigenvalues. This well-conditioned property is in fact true for any normal matrix (satisfying Γ Γ^† = Γ^† Γ) with well-separated eigenvalues. This means that the large eigenvalues (and this is what we are interested in) of the Fisher matrix can be reliably computed even when the original matrix contains many PN coefficients and is itself ill conditioned.
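This statement is easy to check numerically. Below we build a symmetric matrix with eigenvalues spanning seven orders of magnitude (a stand-in for the ill-conditioned Fisher matrix), perturb it slightly, and confirm that every eigenvalue moves by less than ||E||_2; for a symmetric matrix κ(V) = 1, and pairing the sorted eigenvalues is justified by Weyl's inequality:

```python
import numpy as np

rng = np.random.default_rng(1)
q, _ = np.linalg.qr(rng.standard_normal((8, 8)))   # random orthogonal matrix
eigs = 10.0 ** np.arange(8, 0, -1)                 # eigenvalues 1e8 ... 1e1
gamma = q @ np.diag(eigs) @ q.T                    # symmetric, cond ~ 1e7

e = 1e-6 * rng.standard_normal((8, 8))
e = 0.5 * (e + e.T)                                # small symmetric perturbation

lam0 = np.linalg.eigvalsh(gamma)                   # ascending order
lam1 = np.linalg.eigvalsh(gamma + e)

# Bauer-Fike with kappa(V) = 1: each eigenvalue moves by at most ||E||_2.
bound = np.linalg.norm(e, 2)
assert np.all(np.abs(lam1 - lam0) <= bound * (1 + 1e-8))
```

Despite the condition number of about 1e7, the large eigenvalues are reproduced to the accuracy of the perturbation, exactly as the theorem guarantees.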

There are, of course, more sources of error in addition to the numerical inversion of a badly conditioned matrix. In our case of the eight-dimensional matrix Γ̃ (as introduced in Sec. II.2), we find the largest source of error to be the numerical integration of (16). We compared various methods – simple uniform discretization with subsequent extrapolation to infinite resolution, and local adaptive methods – and the difference we find is small for the target signal we shall analyze in the next section (see Fig. 2 and Table 1). We have to compare this number to the size of the eigenvalues that we try to calculate, and by doing so we shall find in Table 1 that the first (i.e., greatest) three eigenvalues are guaranteed to be well determined; subsequent eigenvalues are of the same order as, or smaller than, the error bound (18). Note, however, that by comparing the eigenvalues and eigenvectors obtained from different integration techniques directly, we find that the uncertainty in the fourth eigenvector is still negligible. The numerical values of higher eigenvalues cannot be trusted, but we shall show that this does not harm our calculations.

There is a similar estimate for the eigenvectors v^{(k)}, which are perturbed by δv^{(k)} under a perturbation of the Fisher matrix by E. In this case we have Datta (2010)

||δv^{(k)}|| ≲ ||E||_2 / min_{j≠k} |σ_k − σ_j|.   (19)

For the eigenvectors with large, well-separated eigenvalues σ_k, this ensures that their components are also well conditioned despite the matrix being ill conditioned. This is sufficient for our purposes since we focus on waveform differences calculated via (8), which are dominated by the eigenvectors with large eigenvalues. In numerical testing it was observed that the large eigenvalues are always well separated and that the eigenvalues and eigenvector components are stable under small changes of the Fisher matrix Γ. This stability of the diagonalization process applies both to the numerical errors and to small deviations in the noise spectral density, which also give rise to small changes in the Fisher matrix.

III Measurable parameter combinations – Statistical error

Let us now present the results we obtain with the method and waveform model introduced in Sec. II. We first estimate statistical errors by asking the question: Which waveforms in a neighborhood of a given signal are so similar to it that they cannot be distinguished at a 90% confidence level? We shall show that

  1. our method of estimating waveform differences is superior to standard Fisher-matrix estimates (that are carried out in terms of the physical parameters), and we find no considerable differences between our computationally cheap method and full overlap calculations;

  2. because of an approximate degeneracy between mass ratio and spin, the individual masses cannot be determined accurately in GW observations alone, but

  3. through a PCA we are able to identify the parameter combinations that are accurately measurable.

Our results complement previous publications that estimated the measurability of source parameters of spinning systems either by Fisher-matrix calculations Cutler and Flanagan (1994); Poisson and Will (1995); Nielsen (2013) or recently by direct overlap calculations Baird et al. (2013); Hannam et al. (2013).

III.1 Advantage of different parametrizations

The target system we consider for illustration is a binary containing a nonspinning NS of mass m_2 = 1.4 M_⊙ and a BH with m_1 = 10 M_⊙ and a spin of χ = 0.5. We further assume a moderately high SNR of 20. We can now easily demonstrate the efficacy of our approach by estimating the 90% confidence interval [defined by (10)] around the target signal with various strategies and comparing the results with proper overlap calculations.

The standard Fisher-matrix estimate in terms of the physical parameters is the simplest way of obtaining a multidimensional ellipse around the target parameters. Here we adopt the commonly used parametrization in terms of the symmetric mass ratio and the total mass,

eta = m1 m2 / (m1 + m2)^2 ,    M = m1 + m2 .    (20)

Using the logarithms of the mass-dependent parameters improves the condition number of the Fisher matrix, and the square root of the diagonal elements of the inverse matrix (properly scaled) immediately yield the dimensions of the confidence interval,

(21)

Evidently, these ranges are extremely large, and we would have to incorporate prior restrictions on the parameters to obtain a slightly more realistic estimate of the parameter uncertainties Cutler and Flanagan (1994); Poisson and Will (1995); Vallisneri (2008); Nielsen (2013). However, we merely use this as an illustration here.

Of course, it is well known that particular parameter combinations can potentially be measured much more accurately. The canonical example is the chirp mass

M_c = M eta^{3/5} = (m1 m2)^{3/5} / (m1 + m2)^{1/5} ,    (22)

which governs the Newtonian-order PN phase coefficient. In the above example, Fisher-matrix calculations in terms of the chirp mass instead of the total mass (the other parameters remain unchanged) reveal

(23)

and we shall later formalize the search for such well-determined parameters.

Figure 2: The estimated 90% confidence interval around a NS-BH signal with SNR 20 and a BH spin of 0.3, as indicated by the (red online) dot. Dashed and solid lines are Fisher-matrix estimates in terms of the physical parameters (with parameter differences expressed through the logarithm of the parameters for the dashed lines, or with linearized parameters for the solid lines). The gray region is obtained from actual waveform overlaps, or equivalently from the PCA introduced in Sec. II. Both methods yield indistinguishable results.

For now, we focus on the other physical parameters and show in Fig. 2 the boundaries of the confidence interval, projected onto a two-dimensional parameter plane according to (17). As we use the logarithm of the total mass as one parameter, the corresponding variation reads Delta(log M) = Delta M / M, which gives rise to the dashed curve in Fig. 2. Since we work only to linear order in the parameters, we may as well express the variation directly in terms of Delta M, which in turn leads to the solid ellipse in Fig. 2. Already these two linear approximations differ considerably over the range where we apply them, indicating that we should not trust the linearization for such large parameter variations.

Instead, the gray shaded region in Fig. 2 can be obtained by actual overlap calculations according to (1) and (2), simply recording the area where the distance between target and model waveform satisfies (11). Note that in order to compare with the projected Fisher matrix, we optimized the inner product over all parameters not shown in Fig. 2. This approach, however, entails nontrivial computational effort, and a much more efficient method is to use the parametrization (15) discussed in Sec. II.2 that embeds the waveform manifold in a flat space where the Fisher matrix is parameter independent. In that case, calculating the waveform difference (4) becomes a simple, yet very accurate, matrix multiplication. Indeed, when we calculate the data for Fig. 2, there is no distinguishable difference between actual overlap results and Fisher-matrix estimates in terms of PN coefficients (15), but the computation times differ by several orders of magnitude, the latter calculation being much faster. (The actual improvement in computational cost depends on the details of the implementation: we tested speed-optimized single-core implementations of both the Fourier-transform and the Fisher-matrix based algorithms, and the latter performed between 140 and several hundred times faster than the standard fast-Fourier-transform method, depending on the accuracy of the time optimization.)

The details of this fast algorithm are as follows. We calculate and diagonalize the Fisher matrix in terms of the PN coefficients (15). Inverting this matrix to estimate the accuracy of these parameters is likely to introduce large numerical errors because, as noted in both Pai and Arun (2013) and Berti et al. (2005), large Fisher matrices of this form are ill conditioned and numerical inversion routines cannot be trusted. However, diagonalizing the Fisher matrix is numerically more stable due to (18) and (19), and we only need to calculate accurately the eigenvectors with large eigenvalues.

Table 1: The principal components of the Fisher matrix that treats the PN phase coefficients as free parameters (see Sec. II.2). The target signal is the same as in Fig. 2. We report the eigenvalues and the theoretical spread (Fisher) in the 90% confidence interval, assuming all PN coefficients to be free and independent parameters. The latter should be contrasted with the actual variation (actual) on the lower dimensional waveform manifold, which in turn affects the waveform difference (8) by the amount stated in the last column. Note that the numbers shown here depend on the reference frequency in (13), and we employed .

Table 1 reports the eigenvalues that enter the waveform difference through (8). Assuming the maximally allowed square difference of Eq. (11), we can then simply scale the inverse square root of each eigenvalue to obtain the theoretical range of the principal components in the confidence interval [denoted by (Fisher) in Table 1]. However, this alone is of little value, since we cannot easily transform this range back to physical parameters, and the actual waveform manifold is only a subset of the eight-dimensional ellipse we have just calculated (see Fig. 1).

Instead, we densely populate the (physical) parameter space around the target signal, transform these coordinates to the space of PN coefficients (by a matrix multiplication) and select all points that fulfill (11). This is a computationally very cheap procedure which allows us to find the actual spread in both physical parameters and principal components. Of course, the spread restricted to the physical waveform manifold (labeled “actual” in Table 1) has to be less than or equal to the theoretical prediction that assumes all PN coefficients to be independent, and indeed this is what we find in Table 1. Moreover, we conclude from these numbers that only the first four principal components contribute significantly to the waveform difference, and we can neglect the others for all practical purposes, as their actual variation is suppressed by the small associated eigenvalues.
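The scan just described can be sketched in a few lines. Everything below (the two-parameter map psi, the eigenvalues and the threshold) is a hypothetical stand-in for the paper's eight-coefficient model, but the logic is the same: sample physical parameters, map them to principal components by a matrix multiplication, and keep the points below the threshold.

```python
import numpy as np

# Hedged sketch of the confidence-region scan. The map psi(), the eigen-
# values and the threshold are synthetic placeholders; the actual
# calculation uses the 8 PN phase coefficients of Eq. (15).
rng = np.random.default_rng(1)

def psi(theta):
    # hypothetical nonlinear map from physical parameters to coefficients
    x, y = theta
    return np.array([x, x * y, y ** 2])

eigval = np.array([1e6, 1e3, 1.0])   # synthetic PCA eigenvalues
eigvec = np.eye(3)                   # synthetic eigenvectors

theta0 = np.array([1.0, 0.5])        # target parameters
mu0 = eigvec.T @ psi(theta0)         # target principal components

# Densely populate the physical parameter space around the target ...
samples = theta0 + rng.uniform(-0.1, 0.1, size=(20000, 2))

# ... transform to principal components and keep the points whose squared
# waveform difference stays below the threshold, playing the role of (11).
threshold = 1.0
dmu = np.array([eigvec.T @ psi(t) - mu0 for t in samples])
dsq = (eigval * dmu ** 2).sum(axis=1)
inside = samples[dsq <= threshold]

# the spread of `inside` along each axis is the actual confidence range
print(len(inside), np.ptp(inside, axis=0))
```

The per-sample cost is one function evaluation and one matrix product, which is why the scan is cheap compared to overlap integrals.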

The superiority of our PCA over standard Fisher-matrix estimates rests on two key modifications. First, we increase the dimensionality of our coordinate space such that the physically allowed waveform manifold is embedded in a space with only weakly varying matrix coefficients. As stated before, this makes the extrapolation to large parameter deviations much more accurate. Alternatively, one might consider using the first principal components as new coordinates (instead of the physical coordinates) without increasing the dimensionality; although locally this is just a linear transformation, the predicted confidence interval would still be more accurate. The reason is that different coordinate choices yield the same result locally (i.e., to linear order, as one can also see in Fig. 2), but for larger distances it becomes increasingly important which specific set of parameters is bounded by the Fisher-matrix estimate. Thus, inverting the Fisher matrix whose functional form is adapted to the PN waveform structure is more accurate than the simple linear approximation in physical parameters.

Let us make a final remark on the power of our approximation. Previous uses of PN coefficients as free parameters in the Fisher matrix have largely neglected a parameter-dependent cutoff frequency in the inner product (2), mainly because the considered systems had low enough total mass to place the cutoff above the detector’s sensitivity band. For mixed NS-BH binaries we may still want to neglect the merger and ringdown of the signal, but the waveform then has to be terminated in the detector band with a consistent cutoff frequency that is at least total-mass dependent, as in (3). Such additional complications do not spoil the accuracy of the estimate, as the following simple calculation shows. Assume the waveforms h1, h2 of two systems whose total masses imply cutoff frequencies f_cut,1 > f_cut,2. Their distance is based on an integral in Fourier space, which we can split as

||h1 - h2||^2 = <h1 - h2, h1 - h2>_[f_low, f_cut,2] + <h1, h1>_[f_cut,2, f_cut,1] ,    (24)

where the subscripts indicate the integration range, and the second part only contains h1 because h2 has already been terminated in this frequency range. The first part can be estimated accurately with a parameter-independent Fisher matrix that incorporates the smaller upper cutoff frequency. The second part is proportional to a simple integral over the respective frequency range. Both contributions have been included in the data shown in Fig. 2.

As evident from Fig. 2, the true confidence interval is considerably smaller than predicted by “standard” Fisher-matrix estimates. From efficiently calculating waveform differences for many neighboring points, we can now simply read off the range of physical parameters that fulfills (11), and we find

(25)

Expressed in individual masses, we find

(26)

Hence, at 90% confidence, we would not be able to tell purely from a GW observation whether the observed system is composed of two rather heavy and hardly spinning NSs, or a light NS and a significantly spinning BH. The same conclusion was recently drawn in a detailed study by Hannam et al. Hannam et al. (2013).

III.2 Accurately measurable principal components

Apart from its computational benefits, the method of diagonalizing the Fisher matrix, and thereby finding uncorrelated parameters, enables us to systematically understand which combinations of physical parameters are well measurable and, in turn, along which paths GW signals are almost degenerate. We consider again our example BH/NS system with a BH spin of 0.3.

We start with the standard Fisher matrix in terms of the physical parameters. Note again that the Fisher matrix in these parameters is strongly parameter dependent, and the results we obtain below are only valid for the considered target system. Nevertheless, we present them as an instructive illustration of the basic method before moving on to a more general interpretation.

After projecting out the time and phase shift [see Eq. (17)], the eigenvalues we find are each separated by roughly 4 orders of magnitude, and the first principal component (with the highest eigenvalue) reads

(27)

The spin dependence is rather small, so we neglect it for simplicity and find that the first principal component is remarkably similar to the chirp mass (22). It is not surprising, but reassuring, that the PCA indeed identifies as its first principal component the parameter that has long been considered the best-measured quantity. Note, however, that the spread of this component in the 90% confidence interval is much smaller than the variation in the chirp mass, cf. (23). We can easily understand this by imagining the elongated ellipsoidal shape of the estimated interval that extends to very different lengths along the principal components. A minimal rotation (such as from the first principal component to the chirp mass) can dramatically increase the extent of the ellipse along the given direction.

Nevertheless, it is important to keep in mind that the results of the above analysis will change slightly with different values of the source parameters, different variants of the detector noise curve and other cutoff frequencies. Thus, it is likely to be more practical to consider the chirp mass as the approximately best-measured parameter. It is still determined much more accurately than the second principal component, so we proceed by calculating the Fisher matrix with the chirp mass as one of the parameters. After projection and diagonalization, the chirp mass is indeed the dominating contribution to the first principal component, and the second principal component reads

(28)

We have neglected the small chirp-mass contribution (which enters with a small estimated coefficient) in (28), as we regard the chirp mass as essentially measured by the first principal component and we are now interested in determining the remaining parameters. As empirically observed and discussed in Sec. III.1, we cannot simply determine the individual masses by measuring the two “best” parameters very accurately. Even though the chirp mass depends only on the masses, the next principal component is a combination of (symmetric) mass ratio and spin. Thus, as long as the spin is not determined by other means, it cannot be neglected; doing so would bias the measured mass ratio.

The second principal component is uncertain within the 90% confidence interval, which by itself is not a large uncertainty. However, to extract the physically more interesting mass ratio and spin, and with them the individual masses, we need to consider the third principal component as well, which reads

(29)

We again neglect the small chirp-mass contribution here. This third principal component, however, is measurable only poorly, which severely limits our ability to identify the physical parameters. A visualization of the range of parameters restricted by the spread in the second and third principal components is shown in Fig. 3. Note that this is entirely consistent with the standard Fisher-matrix ellipse in Fig. 2 and the numbers presented in Sec. III.1. In particular, we see that due to the tilt of the third principal component in the mass-ratio/spin plane and the fact that the symmetric mass ratio can only take values between 0 and 1/4, the allowed spin is actually somewhat restricted, whereas we cannot restrict the mass ratio of the observed system at all at 90% confidence (given the assumptions underlying the detector and waveform model).

Figure 3: Illustration of the mass-ratio/spin degeneracy through principal components for the same target signal as shown in Fig. 2. Even though the second principal component is accurately measurable, it only restricts the parameter space to a line in the mass-ratio/spin plane. The third principal component is less well determined and restricts the parameter space only weakly. The best-measured parameter determines the chirp mass and does not affect this plot.

The arguments we have just provided are simple and instructive but, just as in Sec. III.1, the results are not very accurate, and the specific functional forms of the principal components vary throughout the parameter space. Again, the better suited parametrization in terms of the PN coefficients can potentially cure both problems to some extent, because we have already demonstrated that the Fisher-matrix estimates are much more accurate in this higher-dimensional space. In addition, by using the PN coefficients we automatically impose a waveform-adapted functional dependence on the physical parameters, which leads to principal components that are more than simple linear combinations of the source parameters.

        0PN     1PN    1.5PN    2PN    2.5PN log    3PN    3PN log    3.5PN
1       0.98    0.17    0.06    0.02     -0.03      0.00     0.00      0.00
2      -0.18    0.77    0.45    0.17     -0.36     -0.07    -0.10     -0.07
3       0.05   -0.47    0.07    0.17     -0.65     -0.20    -0.46     -0.25
4       0.02   -0.32    0.45    0.25     -0.27      0.09     0.68      0.30
5       0.01   -0.23    0.71    0.07      0.53      0.11    -0.30     -0.24
6       0.00   -0.05    0.22   -0.45     -0.03     -0.32    -0.30      0.74
7       0.00    0.02   -0.19    0.77      0.15      0.15    -0.32      0.47
8       0.00    0.00   -0.04    0.27      0.25     -0.90     0.20     -0.13
Table 2: Contributions of PN phase coefficients to principal components, obtained with a normalization frequency . The cutoff frequency employed here is (cutoff of the reference signal in Fig. 2), although this affects the highest values only weakly.

Let us stress again that using the PN coefficients (15) as free parameters makes the Fisher matrix only weakly varying throughout the parameter space; thus we can analyze principal components for an entire range of source parameters by diagonalizing just one matrix. The transformation matrix encodes which PN coefficients contribute most significantly to each principal component, as illustrated in Table 2. We have chosen the cutoff frequency to be that of our previous target signal, i.e., (3) evaluated for its masses. However, we also tested other cutoff frequencies (including values beyond the tidal disruption frequency of our example system, which would nevertheless be a reasonable cutoff for lower-mass systems), and the important contributions in the first two principal components vary by less than 10%.

It is worth pointing out that the numbers in Table 2 depend on the normalization frequency chosen in (13); it is a well-known ambiguity of any PCA that it changes under scale variations of the variables used. We have chosen the normalization such that Table 2 gives a good indication of which PN coefficients are important in the Advanced LIGO detector band, but the principal components alone are not invariant quantities. The invariant quantity is the waveform difference in the form of Eq. (8), and our aim is to illustrate individual contributions to this difference from the various principal components. Therefore, by visualizing the parameter-dependent values of the principal components, we can immediately gauge how strongly the GW signal changes throughout the parameter space. To do that, we interpret each principal component as a function of the physical parameters by first expressing it as a linear combination of PN coefficients according to (7) and then replacing each PN coefficient by the appropriate phase expansion term that in turn depends on the masses and spin.
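Schematically, this substitution amounts to composing the constant PCA matrix with parameter-dependent phase coefficients. In the sketch below, S is the truncated upper-left corner of Table 2, while psi_pn is a hypothetical placeholder for the actual TaylorF2 coefficients:

```python
import numpy as np

# The first three rows/columns of Table 2 (truncated; illustration only).
S = np.array([[ 0.98,  0.17,  0.06],
              [-0.18,  0.77,  0.45],
              [ 0.05, -0.47,  0.07]])

def psi_pn(mc, eta, chi):
    # Hypothetical stand-in for the leading PN phase coefficients evaluated
    # at the normalization frequency; NOT the actual TaylorF2 expressions.
    v = (np.pi * mc * 0.01) ** (1.0 / 3.0)
    return np.array([v ** -5, eta * v ** -3, chi * v ** -2])

def mu(mc, eta, chi):
    # principal components as closed-form functions of physical parameters
    return S @ psi_pn(mc, eta, chi)

# Contours of mu(...)[0] at fixed chirp mass can now be traced by evaluating
# this function on a grid in the mass-ratio/spin plane.
print(mu(1.2, 0.25, 0.3))
```

Since the transformation matrix is fixed once and for all, mapping any physical point into principal components costs only one matrix-vector product.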

Figure 4 shows contours of constant first principal component.

Figure 4: Contours of the first principal component (solid lines), decreasing from bottom left to top right in steps of 1000 times the predicted accuracy within a 90% confidence interval [(Fisher), cf. Table 1]. Here we assume nonspinning binaries, although the picture is largely independent of the spin. Dashed (red online) lines are contours of constant chirp mass (22), increasing from left to right.

We plot contours in steps of 1000 times the predicted accuracy of the first principal component in a 90% confidence interval (see Table 1); hence we see again that it is exceptionally well measurable. In addition, by simple comparison with lines of constant chirp mass, or by the fact that the Newtonian phase contribution clearly dominates, we confirm once more, in a systematic way, that the chirp mass is to a good approximation the best measurable parameter in GW inspiral waveforms. Furthermore, we see in Fig. 4 that smaller chirp masses can be measured more accurately (in absolute terms), because the spacing between contours decreases towards the bottom left, which allows for a fine waveform distinction perpendicular to lines of constant chirp mass.

However, we should keep in mind that the actual best-measurable parameter is a PN-like expansion series with not only a chirp-mass-dependent dominant term, but also higher-order corrections depending on mass ratio and spin. Indeed, Fig. 4 does not change very sensitively with varying spin, but we find noticeable deviations of the contours from lines of constant chirp mass when the spin parameter approaches its extremal values.

Figure 5: Contours of principal components obtained from a PCA in terms of PN phase coefficients, see Table 2. We show slices of constant first principal component (essentially constant chirp mass). Contours of the second and third principal components are drawn in steps of the (Fisher) accuracy given in Table 1, i.e., two neighboring lines visualize the 90% confidence interval around any point located on the imaginary line centered within this interval. For better readability, contours of the fourth principal component are drawn in steps 5 times as large. Values of the contours increase from bottom to top in all cases.

In any case, the first principal component can be constrained very well by GW measurements, and we use this fact to analyze the other principal components in the mass-ratio/spin plane only. The other physical parameter, the total mass, is then determined for each point by inverting the first principal component, where we use the value corresponding to our target signal (see Fig. 2) as the constant on the right-hand side. Figure 5 illustrates the resulting contours of the next principal components. We find that both the second and the third are reasonably well measurable; i.e., after detecting a signal, we can not only be confident about the value of the chirp mass (under the simplifying assumptions made here), the associated values of the next two principal components also restrict the range of plausible source parameters to rather narrow bands in the mass-ratio/spin space. However, these two bands have very similar structure, and accurately identifying the mass ratio and spin individually remains hard. This issue is illustrated in Fig. 6, where we overlay the predicted confidence intervals of the second and third principal components around our fiducial target signal; the result is entirely consistent with the confidence interval depicted in Fig. 2. Note that adding information from higher principal components does not yield any more constraints, as their uncertainty is too large, which can be seen for the fourth component in the right panel of Fig. 5. In fact, as mentioned earlier, in the specific case we consider the waveforms only depend on three physical parameters, hence a fourth principal component cannot add any information for determining physical parameters.

Figure 6: Combination of two principal component ranges from Fig. 5 around the target signal indicated by a (red online) circle. The intersection of both ranges is a good approximation of the 90% confidence interval around the target parameters (cf. the actual interval in Fig. 2), and it is more accurate than the estimate in Fig. 3. It is still only an upper bound because the waveform difference is actually governed by the square sum of deviations in (8) and we neglect the effect of a mass-dependent cutoff frequency here.

However, it is important to realize that the conclusion that three parameters are measurable is largely independent of the functional form of the PN phase coefficients. One might be tempted to relate this fact to the three physical parameters we started with, but we just showed that three well-constrained principal components do not automatically imply that the physical parameters can be determined to the same accuracy. Even more important is the inverse conclusion: if we had a waveform model in the form (13), but with phase coefficients that are functions of more parameters (e.g., a second spin or tidal parameters of the NS), we would still only be able to constrain three parameter combinations, except when the variation of the PN coefficients through the parameter space of interest is dramatically increased. Of course, the functional form of the principal components might be different, so Figs. 4-6 may change, but the PCA is performed before the phase coefficients have to be specified, so Table 2 and the eigenvalues in Table 1 remain generally valid.

For convenience and future reference, it is useful to explicitly write down a simplified version of the second principal component that includes all spin contributions known for the relevant terms. According to Table 2, all 8 PN phase coefficients enter with nonvanishing contributions, some of which, however, are much smaller than others. We compared the full expansion with simplified versions that included only the one (two, three) most dominant contribution(s). The values of such simplified expressions differ from the full result, but since we are interested in variations of the component rather than its values themselves, we only have to make sure that a simplified version preserves the structure shown in the left panel of Fig. 5. (The same argument underlies our identification of the chirp mass as the approximately best-measurable parameter.) We find that including only the 1PN and 1.5PN phase terms is not sufficient, but by also adding the 2.5PN log term, we find reasonable agreement between the full and simplified versions.

We therefore conclude

(30)

where all PN coefficients are understood as the phase contribution at the normalization frequency. Note again that results of PCAs generally depend on the initial choice of parameters, and there is no fundamental principle guaranteeing that (30) is similar to the second principal component found earlier in terms of physical parameters, (28). However, here we find that a linear expansion of (30) in mass ratio and spin at constant chirp mass indeed yields (28). Put differently, the linear tangent to the constant-value line of (30) at the point of our target signal in Fig. 5 is accurately described by (28). While this consistency is reassuring for assigning some physical meaning to the principal components, we point out that it does not hold for higher components, which only exist in the unphysical eight-dimensional space.

For the convenience of the reader, we explicitly detail below the phase coefficients appearing in (30) for the more general case of two spinning bodies with spins aligned with the orbital angular momentum (recall that the PCA remains unaffected if the NS is spinning, too). In the form used in Ajith et al. (2007); Ohme (2012a), the PN phase coefficients read

(31)
(32)
(33)

where we used and the spin combinations

(34)

It is interesting to note that while the 1PN coefficient is spin independent, the 1.5PN and 2.5PN coefficients contain the leading-order and next-to-leading-order spin-orbit terms, respectively Blanchet et al. (2006). The terms cubic in the spins are due to the energy flow into the BHs Alvi (2001); these are less important and not valid for NSs. However, quadratic self-spin terms Mikoczi et al. (2005) and quadrupole-monopole contributions Poisson (1998) that appear at relative 2PN order affect the overall signal less strongly, as they have no significant contribution to the second principal component. The same applies to even higher spin-orbit terms at 3PN order Blanchet et al. (2011). We shall properly verify the importance of particular PN spin contributions in the next section, but the results are already indicated by the form of the principal components.

We refrain from analyzing the third principal component in the same detail. It is mainly determined by higher-order phase coefficients and is thereby sensitive to even higher spin corrections. Also the highest order we consider, 3.5PN, influences the third principal component to a considerably larger extent than the first or second. We thus expect that, of the first three principal components, the third will be the most sensitive to higher, as yet uncalculated, PN coefficients. This may imply that the detectors are indeed sensitive to changes in the values of higher-order corrections to the PN expansion, even if their absolute value is small relative to lower-order terms. However, once more PN terms have been calculated, they can easily be included in our algorithm, and the waveform model can be analyzed for degeneracies with the method presented here.

IV. Model dependence and parameter biases – Systematic error

While the previous section focused on the confusion caused by very similar waveforms within one (perfectly known) waveform manifold, we shall now turn to systematic errors in GW measurements caused by the imperfect knowledge of the waveform family itself. Put differently, we shall estimate in this section how the recovered source parameters and signal strengths are affected when a given signal (the target signal) is not necessarily part of the waveform manifold that is employed in the search.

Fortunately, as long as both the target and search waveforms can be expressed in the form of (12) and (13), we can still use the linear Fisher-matrix approximation in terms of the PN expansion coefficients, Eq. (15), to estimate the effect of different waveform families. The only difference from Sec. III is that the PN phase coefficients now change not only with the physical source parameters, but also with the functional form of the model. This means that the following study is restricted to TaylorF2-like waveform representations; however, we are free to modify or even drop individual PN contributions to quantify their importance in a way meaningful for data-analysis applications.

Our strategy is as follows. Just as outlined in Sec. II.1, we use the Fisher matrix (5) in terms of the PN phase coefficients (15), and the transformation to principal components detailed in Table 2 is valid independently of the functional form of the parameters. Thus, we pick a target signal by fixing the source parameters and reference model and transform to the principal components as usual,

(35)

We then consider a different search model and transform from the associated PN parameters to the same space of principal components, such that

(36)

Again, there is no difference from what we did to analyze statistical errors, except that the principal components of the search model will now differ from those of the target for the same set of physical parameters. We can, however, vary the parameters of the search model to minimize the difference

(37)

Note that we effectively also minimize over time and phase shifts, but this is implicit in our method through the projection of the associated parameters. The differences between the target parameters and those that minimize (37) are referred to as systematic biases, and they indicate to what extent a model-based GW search would be confused by the use of an incomplete waveform model.

As an illustration, let us consider the following scenario. The target signal we assume is that for aligned-spin binaries including all known spin corrections, as detailed in Sec. II.2; this is the waveform model we analyzed in Sec. III. Fixing the masses again to those of our reference system, we now ask: how well can the mass parameters be recovered if the BH is possibly spinning and the employed search waveform model is that for nonspinning objects? We easily address this question by using standard minimization techniques (a grid-based minimization followed by a local minimization along the gradient) to find the values that minimize (37). Recall that the target parameters define the target principal components, while the search-model coefficients are closed-form functions of the source parameters (or equivalent parametrizations). As discussed above, we need neither sophisticated template-bank generation algorithms nor direct overlaps between any waveforms to answer this question.
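The grid-plus-local minimization can be sketched as follows. The two-coefficient "target" and "search" models below are hypothetical toys, not the PN phasing; they only exhibit the mechanics of minimizing (37) when the search family lacks the spin degree of freedom.

```python
import numpy as np

# Hedged sketch of the bias search: a target model with a spin parameter is
# matched by a search model without one. All numbers are synthetic.
eigval = np.array([1e6, 1e4])       # synthetic principal-component eigenvalues
S = np.eye(2)                       # stand-in for the PCA transformation

def psi_target(m, chi):
    return np.array([1.0 / m, m + 0.5 * chi])   # toy 2-coefficient model

def psi_search(m):
    return np.array([1.0 / m, m])   # same toy model with the spin dropped

mu_t = S @ psi_target(1.0, 0.3)     # target principal components, cf. (35)

def dmu_sq(m):
    # squared waveform difference (37) between search model and target
    d = S @ psi_search(m) - mu_t
    return float(np.sum(eigval * d ** 2))

# grid-based minimization followed by a local (golden-section) refinement
grid = np.linspace(0.5, 2.0, 200)
i = int(np.argmin([dmu_sq(m) for m in grid]))
lo, hi = grid[max(i - 1, 0)], grid[min(i + 1, len(grid) - 1)]
for _ in range(60):                 # golden-section search on [lo, hi]
    a = hi - 0.618 * (hi - lo)
    b = lo + 0.618 * (hi - lo)
    if dmu_sq(a) < dmu_sq(b):
        hi = b
    else:
        lo = a
m_best = 0.5 * (lo + hi)

print(m_best - 1.0)                 # systematic bias in the "mass" parameter
```

The best-fit "mass" is pulled away from the true value because the spinless model must absorb the spin contribution, which is the qualitative effect reported in Fig. 7.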

Figure 7: Systematic bias introduced by a nonspinning model searching for the waveform of a binary with a nonspinning NS and a BH with aligned spin as indicated by the horizontal axis. The solid (blue online) line shows the bias in the recovered total mass, the dashed (red online) line indicates the bias in the symmetric mass ratio (20). The gray dotted line shows the bias in the chirp mass (22), for which the scale on the right-hand side is valid.

The result of this exercise is shown in Fig. 7. Not surprisingly, we find that the bias in the chirp mass is rather small, which is due to the fact that the first principal component is dominated by this quantity. The second mass parameter, either the total mass or the symmetric mass ratio (20), can however be massively biased if the search does not include spin while the source is characterized by a considerable spin. We did not restrict the best-matching parameters to physically meaningful ranges, and for large target spins we find that the recovered symmetric mass ratio exceeds its physical bound. We could have anticipated this already from the confidence interval shown in Fig. 2 (and further interpreted in Figs. 3 and 6), because there the 90% confidence interval, extrapolated by eye, would intersect the zero-spin line at larger, unphysical values of the symmetric mass ratio.

Apart from the bias in the recovered parameters, the actual agreement between the best-match waveform and the target signal is of interest, as it provides an estimate of the detection efficiency (i.e., how many signals would be lost in the search due to imperfect agreement between signal and search family). Following many authors in the GW literature, we express the effectualness of the waveform family by the fitting factor

(38)

where we optimize over the entire waveform family, i.e., over all physical parameters (the subscript distinguishes the actual freedom in the waveform manifold from the higher-dimensional parametrization we are using in this paper). The right-hand side of (38) can be calculated efficiently from the outcome of (37); this equals the fitting factor under standard assumptions Flanagan and Hughes (1998); McWilliams et al. (2010); Ohme (2012b), which hold for our simplified model when we neglect the variable cutoff frequency.

The fitting factor that corresponds to Fig. 7 deviates from unity by as much as 15% for highly negative BH spins and by less than 2% for positive spin values if we allow for unphysical symmetric mass ratios. If we restrict the symmetric mass ratio to its physical range, the fitting factor keeps dropping with increasing spin, to an extent where we can no longer trust our approximation of the inner product.

Comparing spinning against nonspinning models was a rather extreme case chosen for illustration, and we now turn to smaller differences between the target model (which we keep fixed as the best model that includes all known spin terms) and the search family. We are particularly interested in the effect of the various spin contributions to the PN phasing, which we successively drop from the search model to analyze how well these “reduced” models can identify the original signal.

We restrict our study to the NS-BH case considered before, but now we simulate BH spins across the physical range in discrete steps. For each of these target signals we minimize the difference (37) with respect to the various search models and record the fitting factor as well as the parameter biases. Each entry of Table 3 corresponds to the simulated spin value that was recovered with the maximal disagreement, in terms of fitting factor and bias, respectively.
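The scan underlying Table 3 can be sketched as follows (toy matrices and a linearized model, purely illustrative of the procedure, not the paper's PN coefficients): for each simulated spin, the best-fitting parameters of a reduced search family are found by minimizing the Fisher-metric distance, and the worst case over all spins is recorded.

```python
import numpy as np

# Toy sketch of the Table 3 procedure (all numbers illustrative): a "reduced"
# search family is modeled by dropping the spin direction from the Jacobian,
# and the worst fitting factor over the simulated spins is recorded.

rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5))
Gamma = B @ B.T + 1e-2 * np.eye(5)        # toy Fisher metric
L = np.linalg.cholesky(Gamma)

J_full = rng.normal(size=(5, 3))          # full model: three parameter directions
J_red = J_full[:, :2]                     # reduced model: spin column dropped

worst = {"one_minus_ff": 0.0, "spin": None}
for chi in np.linspace(-1.0, 1.0, 21):    # simulated BH spins in steps of 0.1
    dlam = J_full @ np.array([0.0, 0.0, chi])     # target offset generated by spin
    x, *_ = np.linalg.lstsq(L.T @ J_red, L.T @ dlam, rcond=None)
    resid = dlam - J_red @ x
    one_minus_ff = 0.5 * resid @ Gamma @ resid
    if one_minus_ff > worst["one_minus_ff"]:
        worst = {"one_minus_ff": one_minus_ff, "spin": chi}
```

Because the residual scales linearly with the spin offset in this linearized toy, the worst case sits at the boundary of the simulated spin range; in the full calculation the location of the maximum is found empirically, as reported in Table 3.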

Model        1−FF [%]      Δℳ/ℳ [%]     Δη [%]   Δχ
no SO tail   0.44          0.20          46       0.24
3.5/2.5PN    0.25          0.06          28       0.51 (0.23)
3.5/2.0PN    24 (810)      0.23 (0.48)   49       1.31 (0.63)
no χ²        0.13          0.08          35       0.32 (0.20)
3.5/1.5PN    13 (450)      0.18 (0.22)   49       1.04 (0.63)
2.5/2.5PN    0.61 (1.02)   0.50          74       1.54 (0.60)
Table 3: Systematic errors of the various search models (see text) recovering waveforms that include all known PN terms. The simulated target signals are NS-BH binaries with the heavier object carrying varying spin χ. We report the maximal disagreement between the best-fit model waveforms and the target signals in terms of the fitting factor (38) and the biases in the chirp mass ℳ, the symmetric mass ratio η and the spin χ. Searches were performed with both unrestricted and restricted spin range; italic numbers in parentheses indicate cases where we obtain significantly different values when the search space is restricted to physically meaningful spins |χ| ≤ 1.

The search models we consider are as follows:

no SO tail

Up to 3.5PN nonspinning and spinning contributions included (incomplete 3PN and 3.5PN spin corrections inherited from re-expanded lower-order terms, see discussion in Hannam et al. (2010)), but without the 3PN spin-orbit tail contribution derived in Blanchet et al. (2011);

3.5/2.5PN

Up to 3.5PN nonspinning and up to 2.5PN spinning contributions included, i.e., no incomplete spin terms considered;

3.5/2.0PN

Up to 3.5PN nonspinning and up to 2.0PN spinning contributions included, i.e., next-to-leading-order spin-orbit coupling dropped;

no χ²

Same as 3.5/2.5PN, but without the quadratic spin terms at 2PN order;

3.5/1.5PN

Up to 3.5PN nonspinning contributions included, plus only the leading-order spin-orbit coupling at 1.5PN;

2.5/2.5PN

Up to 2.5PN spinning and nonspinning contributions included.

Interestingly, we find from the results in Table 3 that the reduced search models achieve reasonably high fitting factors with the full target waveform as long as at least the dominant and next-to-leading-order spin-orbit couplings are included in the model. Higher spin-orbit contributions, quadratic self-spin terms, quadrupole-monopole interactions and even higher-order nonspinning corrections are less important for detecting the signal.

The systematic biases are not as easily interpreted, because every search model exhibits almost degenerate regions of parameter space with indistinguishable waveforms, similar to what we analyzed in Sec. III. Thus, a template waveform with a much lower or higher parameter bias might agree with the target signal almost equally well, but the single point we report as the result of a numerical optimization procedure does not carry this information. We can, however, compare the results in Table 3 with the statistical uncertainty obtained under the assumption of a perfectly known waveform family, Eq. (III.1). This leads us to conclude that the systematic errors reported in Table 3 are acceptable for the models with small fitting-factor deviation, except in the 2.5/2.5PN case where nonspinning contributions are truncated.
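The comparison between systematic bias and statistical uncertainty can be sketched as follows (toy matrices and hypothetical bias values, not the paper's numbers): the Fisher matrix is projected onto the physical parameters, and the resulting 1σ errors are compared with the recovered biases.

```python
import numpy as np

# Toy comparison of systematic bias against statistical uncertainty
# (illustrative matrices and bias values): project the Fisher matrix onto
# the physical parameters and compare each bias with the 1-sigma statistical
# error for a given signal-to-noise ratio.

rng = np.random.default_rng(3)
C = rng.normal(size=(6, 6))
Gamma = C @ C.T + 1e-2 * np.eye(6)     # toy Fisher metric in coefficient space
J = rng.normal(size=(6, 3))            # Jacobian onto three physical parameters

snr = 10.0
cov = np.linalg.inv(J.T @ Gamma @ J) / snr**2   # physical-parameter covariance
sigma = np.sqrt(np.diag(cov))                   # 1-sigma statistical errors

bias = np.array([0.002, 0.05, 0.2])             # hypothetical systematic biases
tolerable = np.abs(bias) < sigma                # "acceptable" if below 1 sigma
```

A systematic bias below the statistical 1σ error in this sense would not dominate the overall error budget, which is the criterion invoked in the text.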

It is important to point out, however, that neglecting the 2.5PN spin-orbit coupling leads to a severe loss in fitting factor, and searches employing only the leading-order spin corrections are prone to miss signals from binaries with considerable spin. The numbers in Table 3 were obtained by allowing unbounded values for the spin of the waveform model, and indeed, particularly the cases with low fitting factor achieve their best agreement at unphysical values of χ. If we instead restrict the search parameter space to the physically meaningful range |χ| ≤ 1, we obtain different numbers in some cases, given in parentheses in Table 3. Note that the already badly performing models then become completely disconnected from the target waveform space, which results in an absurdly high deviation of the fitting factor from unity. These numbers are an artifact of using the Fisher matrix to estimate large waveform mismatches and cannot be trusted. However, these results only emphasize that we would be unable to detect some spinning systems with such models.

It is interesting to note that we already observed a similar effect with nonspinning searches and unphysical values of η, as discussed in connection with Fig. 7. Here we cannot allow for unphysical η > 1/4, as some spin contributions contain the factor √(1 − 4η) [see Eqs. (31)-(III.2)], which has no real solution for η > 1/4. However, we have just illustrated that unphysical values of the spin(s) may potentially inflate the waveform manifold enough to increase the detection efficiency, such that signals that are not part of the search family have a higher chance of being detected.

V Conclusions

In this paper, we have considered nonprecessing inspiral waveforms of GWs emitted by coalescing NS-BH binaries. Such models are essential ingredients for the ongoing efforts to directly detect GWs for the first time, and the success and astrophysical output of such detections will depend sensitively on our understanding of the waveform family employed in the search.

By combining the well-known Fisher-matrix approach with a suitable higher-dimensional coordinate choice, we have demonstrated that the analysis of degeneracies in the waveform space can be made considerably more accurate than in previous Fisher-matrix studies of parameter measurability, while remaining much faster than full overlap calculations between individual waveforms. Even though the high-dimensional Fisher matrix may be ill conditioned, we argued that the relevant information about the waveforms can be extracted through diagonalization of the Fisher matrix instead of its inversion, because only the eigenvectors with large eigenvalues affect the waveform considerably. Thus, this procedure (which we identify as a PCA) is still well conditioned, and we explicitly showed how to efficiently find confidence intervals around a given signal, including a parameter-dependent cutoff frequency.
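The conditioning argument can be illustrated with a few lines of numpy (a toy matrix with a prescribed spectrum, not the paper's actual Fisher matrix): an ill-conditioned Fisher matrix cannot be inverted reliably, but its eigendecomposition still cleanly separates the few well-measured directions from the rest.

```python
import numpy as np

# Toy illustration: build a symmetric matrix with eigenvalues spanning many
# decades, so that inversion is fragile while diagonalization remains stable.

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))   # random orthogonal eigenbasis
eigvals = 10.0 ** np.arange(8, 0, -1)          # eigenvalues from 1e8 down to 1e1
Gamma = (Q * eigvals) @ Q.T                    # symmetric, condition number ~1e7

# Diagonalization is well conditioned: eigh recovers the spectrum and the
# principal directions accurately regardless of the condition number.
w, V = np.linalg.eigh(Gamma)
dominant = V[:, np.argsort(w)[::-1][:3]]       # directions with largest eigenvalues
# Only these dominant eigenvectors change the waveform appreciably, so
# confidence regions follow from the top of the spectrum alone.
```

Here the three largest eigenvalues are recovered to near machine precision even though the matrix spans seven decades in conditioning, which is why the PCA remains trustworthy where matrix inversion is not.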

The coordinate choice we employed treats the PN phase expansion coefficients as free parameters Tanaka and Tagoshi (2000); Sathyaprakash and Schutz (2003); Pai and Arun (2013); Brown et al. (2012a). This approach relies on the waveform model being written as a simple amplitude and a complex phase that is a sum of purely frequency-dependent functions, each multiplied by at most a single parameter. Extending this strategy to accommodate more complicated functions, such as full inspiral-merger-ringdown models or precessing systems, is difficult, as those models do not obey this simple analytic form. However, we restricted ourselves to a regime where the merger and ringdown parts of the signal do not contribute significantly, and recent investigations show that modeling precessing systems may be based on a modulation of nonprecessing signals Schmidt et al. (2011); Boyle et al. (2011); Schmidt et al. (2012). In addition, waveform families that are used for detection purposes are unlikely to model full precession Ajith (2011); Brown et al. (2012b); Ajith et al. (2012). Thus, even though realistic signals are expected to contain some amount of precession, it is worth analyzing nonprecessing signals first.

In agreement with previous publications Cutler and Flanagan (1994); Poisson and Will (1995); Baird et al. (2013); Hannam et al. (2013), we found that the individual masses of the binary’s constituents cannot be well constrained by GW observations alone. Even though the chirp mass is measured very accurately, the second mass parameter can be confused by the presence of spin. Disentangling spin and mass ratio would require yet another accurate measurement, which is not available to place useful bounds on the individual masses.

With the analysis carried out in this paper, we can now rephrase these results more formally, following the results of our PCA. The first, very accurately determined principal component is dominated by the chirp mass (with higher-order spin-dependent corrections). The second principal component can be seen as a combination of symmetric mass ratio and spin that may somewhat restrict the spin magnitude, but it does not allow for an unambiguous measurement of either parameter. A third principal component is also measurable to reasonable accuracy, but it adds little further restriction to the range of mass ratio and spin in our case. Higher principal components cannot be well constrained by GW measurements, and they do not vary enough across the parameter space of interest to add much information.

It is important to point out that the explicit form of the principal components is model and gauge dependent (in our case, the scale freedom is expressed by the normalization frequency f₀), so the interpretation of the waveform structure in terms of principal components reveals no fundamental property of the waveform manifold. It is nevertheless a useful concept for understanding the prospects and limitations of modeled GW searches. For instance, we have demonstrated that three parameters can be measured accurately, but whether or not these lead to astrophysically meaningful statements is determined by the explicit dependence of these well-measured parameters on physically interesting quantities.

This explicit form of the principal directions in parameter space is, in turn, derived in terms of PN expansion coefficients. In our analysis, we found that higher-order terms also have a noticeable influence on the third principal component, suggesting that as yet undetermined nonspinning and especially spinning corrections may influence our ability to measure parameters in the future. Among the currently determined PN contributions, we identified the leading and next-to-leading-order spin-orbit terms as crucial spin corrections that need to be included in the waveform model so as not to change the manifold drastically. Again, our fitting-factor study of systematic errors was limited to Fourier-domain models of the form detailed above. It would be interesting to contrast our results with comparisons between various approximants in the time and frequency domain. Note, however, that time-domain models (such as the TaylorTn approximants) can, in principle, be transformed to an analytic form in Fourier space as well, where the difference between those models and the TaylorF2 model employed here would lie entirely in undetermined higher-order phase corrections. Their effect can be studied in our framework by allowing the parameter space to extend beyond 4PN order, which we leave for future work.

As long as such higher-order terms have not been fully calculated, we need to ensure that the waveform model chosen for GW searches is capable of detecting signals from other, equally plausible models as well. One way of doing this is by allowing unphysical source parameters. We have, somewhat artificially, compared the full waveform model with reduced search families lacking certain contributions, which we take as a guideline to the situation we actually face: we search for signals in the universe (which may or may not be well described by the full theory of general relativity) with a restricted, incomplete PN model. We have demonstrated that allowing an unphysical spin parameter beyond unit magnitude can indeed reduce the systematic difference between waveform families. It remains to be tested more thoroughly whether, for the detection problem, this freedom reduces ambiguities to a negligible extent. Parameter-estimation pipelines, on the other hand, obviously cannot use this freedom. However, our algorithm also provides an easy way to estimate parameter biases between different waveform models with physically meaningful bounds.

In summary, our results provide a formal answer to the question of what can be measured by GW observations of inspiraling NS-BH binaries. This is important astrophysically, but also has some immediate applications for standard GW data-analysis techniques. For instance, constructing a discrete template bank for spinning signals with predefined maximal mismatch between templates becomes much simpler in our adapted coordinates (as detailed in Brown et al. (2012a); Harry et al. (2013)), and our results promote a physical understanding of the resulting parameter-space coverage. A related question is important for the relatively small number of numerical-relativity waveforms that have to be calculated in order to calibrate complete inspiral-merger-ringdown models (see Ohme (2012b) for an overview). Their parameter-space coverage should take advantage of the dominant directions in waveform space that can be estimated with our method. Equally, testing inspiral degeneracies in the merger regime is an important task Pürrer et al. (2013) that, however, requires identifying these degeneracies first.
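The template-bank application mentioned above can be sketched in principal coordinates (a toy metric, following only the general idea of uniform placement, not the actual implementation of Brown et al. (2012a)): in coordinates that diagonalize the parameter-space metric, a predefined maximal mismatch translates into an independent grid spacing along each principal direction.

```python
import numpy as np

# Toy sketch of template spacing in principal coordinates: diagonalize a
# parameter-space metric and convert a maximal mismatch into a per-direction
# grid spacing (illustrative metric, not the paper's).

rng = np.random.default_rng(4)
D = rng.normal(size=(4, 4))
metric = D @ D.T + 0.1 * np.eye(4)   # toy positive-definite parameter-space metric

w, V = np.linalg.eigh(metric)        # principal directions and eigenvalues
max_mismatch = 0.03                  # e.g. 3% maximal mismatch between templates

# Along principal direction i, the mismatch at a cell edge is w[i]*(dx/2)^2,
# so the allowed grid spacing is:
spacing = 2.0 * np.sqrt(max_mismatch / w)
```

Well-measured directions (large eigenvalues) thus require dense template placement, while poorly measured directions can be covered coarsely, which is the physical picture behind the parameter-space coverage discussed above.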

In addition, our principal coordinates should allow for a geometric, very efficient parameter estimation beyond the template with highest overlap, and even more advanced and accurate parameter-estimation routines (such as Markov chain Monte-Carlo methods) may benefit from knowing the preferred directions in parameter space that we identify.

Finally, calculating inner products between different waveforms is at the core of all matched-filter algorithms, and the computationally very efficient approximation we suggest should allow for a tremendous speed-up of all analyses that operate in a regime of high overlaps. We have presented an example study of systematic errors that would otherwise have been computationally very challenging. Now, however, such studies can quickly be repeated and extended for other setups (e.g., different PN coefficients, detector noise curves, etc.), which facilitates easy sanity checks and on-line comparisons of results obtained with different waveform families.

Acknowledgements

It is a pleasure to thank Stephen Fairhurst, Mark Hannam, Ian Harry, Badri Krishnan, Chris Messenger and Bangalore Sathyaprakash for many useful discussions. We are also grateful to Francesco Pannarale for sharing his insights into NS spin expectations and observations, and for carefully reading the initial manuscript. FO would also like to thank Lukas Ohme, simply for being there when this paper became public. FO is supported by STFC Grant No. ST/I001085/1.

References
