Beyond Fisher: exact sampling distributions of the maximum-likelihood estimator in gravitational-wave parameter estimation

Michele Vallisneri Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109
Abstract

Gravitational-wave astronomers often wish to characterize the expected parameter-estimation accuracy of future observations. The Fisher matrix provides a lower bound on the spread of the maximum-likelihood estimator across noise realizations, as well as the leading-order width of the posterior probability, but it is limited to high signal strengths often not realized in practice. By contrast, Monte Carlo Bayesian inference provides the full posterior for any signal strength, but it is too expensive to repeat for a representative set of noises. Here I describe an efficient semianalytical technique to map the exact sampling distribution of the maximum-likelihood estimator across noise realizations, for any signal strength. This technique can be applied to any estimation problem for signals in additive Gaussian noise.

pacs:
04.30.Db, 04.25.Nx, 04.80.Nn, 95.55.Ym

The first direct detections of gravitational-wave (GW) signals are likely to be achieved in the second half of this decade with second-generation ground-based interferometric detectors such as Advanced LIGO and Virgo Harry and the LIGO Scientific Collaboration (2010); Virgo Collaboration (2009), which are sensitive to high-frequency GWs (10–1000 Hz). Third-generation instruments such as the Einstein Telescope Special issue (2011) promise much greater reach and yield, while space-based observatories similar to LISA Prince et al. (2009) will be sensitive to low-frequency GWs (~10⁻⁴–10⁻¹ Hz) in a band populated by thousands of detectable sources in the Galaxy and far beyond. A survey of the scientific literature on GW data analysis in this predetection era reveals a few dominant genres: articles on data-analysis methods and implementations (e.g., Allen et al., 2005); nondetection and upper-limit analyses of actual collected data (e.g., Abbott et al., 2008); and “prospects” papers that examine combinations of GW sources and detectors to characterize the expected rates of detection and accuracies of source-parameter estimation (e.g., Cutler and Flanagan, 1994).

Papers in this last class, especially when they are concerned with Bayesian inference for modeled GW signals, follow a well-rehearsed structure: a) derive the GW signal h(θ) as a function of the source parameters θ; b) fix fiducial true values θ_tr for them; c) simulate a realization of detector noise n, usually assumed to be additive and Gaussian; d) derive the noise-dependent probability distribution p(θ|s) for the source parameters given the data s; e) characterize the error and uncertainty of the resulting parameter estimates; f) finally, repeat steps c, d, and e for a sufficiently broad sample of noise realizations, since each of these will result in different parameter estimates and uncertainties. Optimally, repeat for different fiducial θ_tr.

Step d [calculating p(θ|s)] is usually very computationally expensive: for more than two or three source parameters, in Bayesian inference it must be performed with a stochastic technique such as Markov chain Monte Carlo integration Liu (2001), which may involve ~10⁵–10⁶ evaluations of the likelihood for different θs. Each evaluation requires obtaining the GW signal h(θ) and performing an FFT (say, for N ~ 10⁶ points) to compute the likelihood of the noise residual s − h(θ) [see the discussion around Eq. (1)]. Step f is therefore a Monte Carlo integration of Monte Carlo integrations (perhaps again ~10⁵–10⁶ of them), with a total computational cost, for a single θ_tr, of more floating-point operations than can usually be procured easily. Emphatically, I call this scheme the Holy Grail of parameter-estimation prospects: even the papers that try hardest (e.g., Nissanke et al., 2011) perform meta-Monte Carlo studies of only a limited number of combinations of noise realizations and fiducial sources.

Most “prospects” papers, instead, take advantage of the fact that steps d to f can be short-circuited when the signal-to-noise ratio (SNR) of detection is sufficiently high that p(θ|s) collapses to a normal distribution centered on the maximum-likelihood parameters θ_ML, with covariance given by the inverse Fisher matrix F⁻¹, which is not a function of n. At high SNR, the distribution of θ_ML for different noise realizations is itself normal and centered at θ_tr, with covariance again given by the inverse Fisher matrix. The computational cost of this approximation (for a single θ_tr and N-point likelihoods) is O(K² N log N) floating-point operations, where K is the number of source parameters, affording virtually instantaneous results.

Unfortunately, as I discuss at length in Ref. Vallisneri (2008) and as highlighted elsewhere Cutler and Flanagan (1994); Balasubramanian et al. (1996); Balasubramanian and Dhurandhar (1998); Nicholson and Vecchio (1998); Vitale and Zanolin (2010), for many practical parameter-estimation problems SNRs will not be sufficiently high to justify the approximation. The crux of the problem is that the Fisher matrix, built from the partial derivatives h_,j of the signal, can represent p(θ|s) correctly only if h(θ) is linear in all the θ^j across ranges comparable to the expected parameter errors. These decrease as the SNR grows, making the condition less stringent; in Ref. Vallisneri (2008), I provide a criterion to determine when the SNR is high enough.

Thus, in general a Monte Carlo integration is needed for step d; furthermore, the meta-Monte Carlo integration of step f is also necessary, because the particular noise realization n does affect parameter uncertainties, as shown in Fig. 2 for the toy model discussed later in this Letter. It is true Nissanke et al. (2010) that for additive Gaussian noise the average of p(θ|s) across all n’s equals the p(θ|s) computed for the zero-noise realization n = 0; but this averaged likelihood describes an unrealistic limit (infinite observations of the same source with a finite total SNR) that is not representative of average or typical errors, except trivially in the high-SNR limit. Thus, the shape and width of generic p(θ|s) cannot generally be obtained from the n = 0 likelihood, as suggested in Ref. Cornish (2010). In Ref. Vallisneri (2008) I derive an expansion of p(θ|s) in powers of 1/SNR, where n-dependent terms are seen at next-to-leading order for both the centering and shape of the posterior.

In this Letter I describe and test a technique to perform a less thorough, but more affordable survey than a nested Monte Carlo: mapping the sampling distribution p(θ_ML) of the maximum-likelihood (henceforth, ML) estimator [the maximum of each p(θ|s)] over all noise realizations. From a classical-statistics (a.k.a. “frequentist”) viewpoint, the spread of this distribution is formally the uncertainty of the ML point estimator. From a Bayesian viewpoint Jaynes (2003), if priors are unimportant, the spread of this distribution describes one major component of the expected error (the other is the intrinsic spread of the posterior for each s).

Mapping the sampling distribution of the ML estimator.

We consider a large set of experiments in which we observe the signal h(θ_tr), where θ_tr is the K-dimensional vector of true parameter values and h is an N-dimensional vector (e.g., a time series) with N ≫ K. In each experiment, the detector output is s = h(θ_tr) + n, where n is additive Gaussian noise distributed with probability p(n) ∝ exp[−(n|n)/2]. Here (·|·) is the standard signal inner product, given in one convention Cutler and Flanagan (1994) by (g|h) = 2 ∫₀^∞ [g̃(f) h̃*(f) + g̃*(f) h̃(f)] / S_n(f) df, with ˜ denoting the Fourier transform and * the complex conjugate. In this Letter, without loss of generality, we treat (·|·) as the inner product of an abstract linear space, and let SNR = (h(θ_tr)|h(θ_tr))^{1/2}.
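As a concrete illustration, here is a minimal discrete implementation of this noise-weighted inner product, assuming real time series and a one-sided PSD tabulated on the FFT frequency grid (the function name and discretization conventions are this sketch's, not the Letter's):

```python
import numpy as np

def inner_product(a, b, psd, dt):
    """Discrete version of the inner product
    (a|b) = 2 * integral [ a~(f) b~*(f) + a~*(f) b~(f) ] / Sn(f) df
    for real time series a, b sampled at interval dt; `psd` is the
    one-sided spectral density Sn(f) on the rfft frequency grid."""
    n = len(a)
    df = 1.0 / (n * dt)
    at = np.fft.rfft(a) * dt   # approximate continuum Fourier transforms
    bt = np.fft.rfft(b) * dt
    return 4.0 * df * np.sum((at * np.conj(bt)).real / psd)

# The optimal SNR of a signal h is then inner_product(h, h, psd, dt) ** 0.5.
```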

For a given θ_tr and noise realization n, the ML estimator θ_ML maximizes the probability of the residual noise obtained by subtracting the postulated signal h(θ) from the data s; that is, it maximizes

p( s − h(θ) ) ∝ exp[ −( s − h(θ) | s − h(θ) ) / 2 ].   (1)

Thus, θ_ML must satisfy the vector equation

( h_,j(θ_ML) | s − h(θ_ML) ) = 0,   j = 1, …, K,   (2)

where the h_,j ≡ ∂h/∂θ^j are the partial derivatives of h(θ).

Our purpose is to map the distribution of θ_ML(n) across all noise realizations. We can do this by enumerating all possible n’s [weighted by p(n)], figuring out the θ_ML corresponding to each, and accumulating the resulting distribution of θ_ML. Formally,

p(θ_ML) = ∫ δ^K( θ_ML − θ_ML(n) ) p(n) dn.   (3)

Unfortunately, because θ_ML(n) is generally a complicated function of the noise, it is difficult to solve for n given θ_ML, so we can only integrate Eq. (3) by using a Monte Carlo approach where we generate full, high-dimensional realizations of n (e.g., as time series), search parameter space for θ_ML by repeatedly evaluating Eq. (1), and iterate for many different n.
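This brute-force procedure can be sketched for a hypothetical one-parameter model (all names, values, and the grid search below are illustrative, not from the Letter): draw a noise realization, locate θ_ML by maximizing the likelihood over a template grid, and repeat.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical one-parameter signal family in an abstract space where
# (a|b) is the Euclidean dot product and the noise is unit-variance white,
# p(n) proportional to exp(-n.n/2).
t = np.linspace(0.0, 1.0, 64)
theta_tr = 1.0

def h(theta):
    return 0.625 * np.sin(2.0 * np.pi * theta * t)   # moderate-SNR toy signal

grid = np.linspace(0.5, 1.5, 201)
templates = np.array([h(th) for th in grid])

def theta_ml(s):
    # maximizing exp(-(s-h|s-h)/2) over theta <=> minimizing the residual norm
    resid = np.sum((templates - s) ** 2, axis=1)
    return grid[np.argmin(resid)]

# Eq. (3) as a Monte Carlo: draw n, find theta_ML(n), histogram the results
samples = np.array([theta_ml(h(theta_tr) + rng.standard_normal(t.size))
                    for _ in range(2000)])
```

The histogram of `samples` approximates the sampling distribution p(θ_ML); the cost scales with the number of noise realizations times the cost of each likelihood search, which is what the method below avoids.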

The main result of this paper is that there is a more effective way to map p(θ_ML): we enumerate the θ_ML, and compute the total probability weight of the n that are compatible with each. (Put slightly differently, for each θ_ML we count how many experiments would yield it as the maximum of the likelihood.) Because the quantities x_j = (h_,j(θ_ML)|s − h(θ_ML)) are linear functions of the n, this weight is the probability mass of the (N − K)-dimensional subspace of the noise vectors that solve Eq. (2) for all j. Using the δ-of-function relation for the vector x, δ^K(θ_ML − θ_ML(n)) = δ^K(x(n)) |det ∂x_j/∂θ_ML^k|, we can then rewrite p(θ_ML) as

p(θ_ML) = ∫ ∏_j δ( ( h_,j(θ_ML) | h(θ_tr) + n − h(θ_ML) ) ) | det M_jk | p(n) dn,
with M_jk = ( h_,jk(θ_ML) | s − h(θ_ML) ) − ( h_,j(θ_ML) | h_,k(θ_ML) ).   (4)
Figure 1: To compute p(θ_ML), we integrate p(n) over all noise realizations n that satisfy Eq. (2)—see the discussion below Eq. (4). The resulting p(θ_ML) is shown top right.

Figure 1 exemplifies the process of integrating over the n’s compatible with a chosen θ_ML. In this low-dimensional model (suggested by C. Cutler in a personal communication, 2010), detector data are described by a point in the plane, and the signal family h(θ) by the unit circle. For each n, the true signal h(θ_tr) is displaced to a different s = h(θ_tr) + n, with probability indicated by the shading. Projecting s back to the circle identifies the ML waveform h(θ_ML) and parameters θ_ML. All noise realizations that produce an s within the gray sector project to the same θ_ML, so integrating p(n) over the sector yields p(θ_ML).

Equation (4) is especially powerful because, contrary to appearance, it does not require integration over the full N dimensions of the noise, but only over the few coordinate directions corresponding to the (h_,j|n) and the (h_,jk|n). Since both these sets of functions are linear in n, they can be seen as jointly normal random variables that are fully characterized by their inner products; all the other noise degrees of freedom have no effect on the integral other than its normalization. (We spell this out in detail in the next section.) With Eq. (4) in hand, we can then sample p(θ_ML) directly (for low K), or by Markov chain Monte Carlo techniques.

Evaluating the master integral.

Equation (4) can be evaluated elegantly as the expectation value of a function (the determinant) of correlated random variables. For clarity, we follow a more pedestrian approach: we begin by transforming the N-dimensional integral over the noise components to new coordinates where the first few basis vectors span the random variables of interest. Namely, we write n in terms of a new basis {ê_i} where the first K vectors are obtained by orthonormalizing the h_,j(θ_ML),

ê_1 = h_,1 / (h_,1|h_,1)^{1/2},   ê_j ∝ h_,j − Σ_{i<j} (ê_i|h_,j) ê_i   (normalized to unity);   (5)

furthermore, we obtain the (K+1)-th to [K(K+3)/2]-th basis vectors by orthonormalizing the h_,jk with j ≤ k; the remaining noise combinations complete the basis.
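The orthonormalization can be carried out by Gram–Schmidt under the signal inner product; a minimal sketch, assuming the vectors are NumPy arrays and `ip` is any symmetric positive-definite inner product:

```python
import numpy as np

def orthonormalize(vectors, ip):
    """Gram-Schmidt orthonormalization of `vectors` under the inner
    product `ip`; numerically dependent vectors are dropped."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for e in basis:
            w = w - ip(e, w) * e   # remove projections on earlier basis vectors
        norm = np.sqrt(ip(w, w))
        if norm > 1e-10 * max(np.sqrt(ip(v, v)), 1e-300):
            basis.append(w / norm)
    return basis

# To build the first K(K+3)/2 basis vectors of the text: orthonormalize the
# K first derivatives h_,j, then continue with the second derivatives h_,jk
# (j <= k), feeding them to orthonormalize() in that order.
```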

Figure 2: Likelihood maps p(θ|s) obtained for the toy model (11) with different noise realizations n and with the same true signal, with θ_tr located at the origin of the plots. Contours are plotted at levels that would correspond to 1-, 2-, and 3-σ ellipses for a normal distribution. The data s = h(θ_tr) + n are shown above each map.

We can now rewrite all the variables that appear in Eq. (4) in terms of the new basis:

n = n_i ê_i,   h_,j = Λ_{j,i} ê_i,   h_,jk = Λ_{jk,i} ê_i;   (6)

here the n_i, Λ_{j,i}, and Λ_{jk,i} are the coefficients that expand the n, h_,j, and h_,jk, respectively, in terms of the ê_i. We use Einstein summations, and treat jk both as a double index and as a single index flattened to the range K+1, …, K(K+3)/2. By virtue of orthonormalization, Λ_{j,i} = 0 for i > j and Λ_{jk,i} = 0 for i > jk, with Λ_{j,j} > 0 and Λ_{jk,jk} > 0. Furthermore,

s − h(θ_ML) = h(θ_tr) + n − h(θ_ML) = (Δ_i + n_i) ê_i,   (7)

with Δ_i = ( ê_i | h(θ_tr) − h(θ_ML) ), and with the sums over i in the δ arguments limited to i ≤ j and the sums over i inside the determinant limited to i ≤ K(K+3)/2, by virtue of Eq. (6).

To perform the integration in Eq. (4), we use the first δ to fix n_1 = −Δ_1, yielding a δ-normalization factor of 1/Λ_{1,1}; we use the second δ to fix n_2 = −Δ_2 (after the terms proportional to Δ_1 + n_1 in the second δ’s argument cancel out), yielding a factor of 1/Λ_{2,2}; and so on. Next, we perform the trivial integral over the “signal-orthogonal” noise degrees of freedom that correspond to the ê_i with i > K(K+3)/2, leaving

p(θ_ML) ∝ [ exp( −Σ_{i=1}^{K} Δ_i² / 2 ) / ∏_{j=1}^{K} Λ_{j,j} ] × ∫ ∏_{i=K+1}^{K(K+3)/2} [ dn_i e^{−n_i²/2} / (2π)^{1/2} ] × | det( Λ_{jk,i} (Δ_i + n_i) − F_jk ) |;   (8)

here F_jk ≡ (h_,j|h_,k); the sums over i inside the determinant span K+1, …, K(K+3)/2—the first K terms of the form Λ_{jk,i}(Δ_i + n_i) vanish because of the δs applied previously—and the sum over i in the exponential spans 1, …, K.

For a given θ_ML, the main computational cost of evaluating Eq. (8) resides in the orthonormalization and in the computation of the Λ coefficients, which together require O(K⁴) inner products (and therefore N-dimensional signal FFTs); by contrast, the remaining low-dimensional Gaussian integral can be evaluated much more cheaply (e.g., by a 10,000-point Monte Carlo integration over n_i drawn from a normal distribution), since the integrand is a function of small matrices and not long FFTs.
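That low-dimensional integral is just the expectation of a |det| over a handful of standard-normal variables, which simple Monte Carlo handles cheaply. A generic sketch, with illustrative names and shapes (`const` plays the role of the deterministic part of the matrix inside the determinant, `coeff` the couplings to the remaining noise coefficients):

```python
import numpy as np

def expected_abs_det(const, coeff, n_samples=10_000, seed=1):
    """Estimate E[ |det( const + sum_i x_i * coeff[i] )| ] with x_i ~ N(0,1).
    const: (K, K) deterministic part; coeff: (m, K, K) noise couplings."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, coeff.shape[0]))
    # build one K x K matrix per Monte Carlo sample, then average |det|
    mats = const[None, :, :] + np.einsum("si,ijk->sjk", x, coeff)
    return np.mean(np.abs(np.linalg.det(mats)))
```

Because the matrices are only K × K, the whole average costs far less than a single long FFT.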

Fisher-matrix limit.

The well-known high-SNR, Fisher-matrix limit, in which the waveform can be approximated as a linear function of the parameters (and in which the θ_ML have a simple jointly-normal distribution), follows easily by specializing Eq. (8). Without loss of generality, we set θ_tr = 0; by linearity, h(θ_tr) − h(θ_ML) = −h_,j θ_ML^j, and the h_,jk vanish. The integration over the n_i is then trivial, and yields det F, where F_jk ≡ (h_,j|h_,k) is the Fisher matrix. Furthermore, Σ_{i=1}^{K} Δ_i² = F_jk Δθ^j Δθ^k, where Δθ = θ_ML − θ_tr is the error of the ML estimator, so the first exponential of Eq. (8) can be rewritten as exp( −F_jk Δθ^j Δθ^k / 2 ), since the ê_i, for i ≤ K, span a complete basis for the h_,j. Last, because of the structure of the Λ_{j,i}, ∏_j Λ_{j,j} = (det F)^{1/2}. Taking everything together, we reproduce the Fisher-matrix result for the distribution of θ_ML (formally, Eq. (8) tends to Eq. (9) because the terms other than F_jk in the determinant are suppressed by a factor 1/SNR: the Λ_{jk,i} scale as SNR and the F_jk as SNR², while the Δ_i + n_i are of order unity),

p(θ_ML) = [ det F / (2π)^K ]^{1/2} exp( −F_jk Δθ^j Δθ^k / 2 ).   (9)
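This limit is easy to verify numerically. A sketch with an arbitrary linear model, Euclidean inner product, and unit-variance white noise (our construction, not the Letter's code): for a linear model the ML errors come out normal with covariance F⁻¹.

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear signal model h(theta) = theta^j h_j in R^N, with (a|b) = a.b and
# white unit-variance noise; theta_tr = 0, so theta_ML is the error itself.
N, K = 50, 2
H = rng.standard_normal((K, N))   # rows are the basis signals h_j
F = H @ H.T                       # Fisher matrix F_jk = (h_j|h_k)
Finv = np.linalg.inv(F)

errs = []
for _ in range(4000):
    n = rng.standard_normal(N)
    # ML solution of the linear least-squares problem: theta_ML = F^-1 (h_j|n)
    errs.append(Finv @ (H @ n))
errs = np.array(errs)
cov = errs.T @ errs / len(errs)   # sample covariance, to compare with F^-1
```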

The general result [Eq. (8)] can be restated in terms of inner products among the signal derivatives, in a form more suitable to computation:

p(θ_ML) ∝ [ exp( −(Δh_∥|Δh_∥)/2 ) / (det F)^{1/2} ] × ⟨ | det( (h′_,jk|Δh) + ν_jk − F_jk ) | ⟩_{ν ~ N(0,C)};   (10)

here C is the [K(K+1)/2]-dimensional square matrix given by the products (h′_,jk|h′_,lm) with j ≤ k, l ≤ m; the primes denote projection orthogonal to the h_,j (i.e., h′_,jk = h_,jk − Σ_{i≤K} (ê_i|h_,jk) ê_i); Δh = h(θ_tr) − h(θ_ML), with Δh_∥ its projection onto the span of the h_,j; and ν_jk is the matrix obtained from the [K(K+1)/2]-dimensional vector ν of integration variables by remapping indices.

Toy model.

To exemplify the use of Eq. (4) to map p(θ_ML) for low-SNR parameter estimation, let us consider a family of sine–Gaussian signals given by

h(t; A, f₀, t₀) = A exp[ −(t − t₀)²/2 ] sin(2π f₀ t).   (11)

We consider the problem of jointly estimating f₀ and t₀, but not A, which we fix to yield SNR = 5, and we select fiducial values for f₀ and t₀. As shown in Fig. 2, at this low SNR different noise realizations yield strikingly different likelihood maps for the same true signal—all quite different from the ellipsoidal Fisher prediction. In each map, the pale blue dot marks the location of θ_ML—indeed, the distribution of blue dots is just the desired p(θ_ML).
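For concreteness, the toy signal and its SNR normalization can be set up as follows; this is a sketch that assumes a unit-width Gaussian envelope, white noise with a Euclidean-like inner product, and illustrative fiducial values (the Letter's own choices are not reproduced here):

```python
import numpy as np

t = np.linspace(-8.0, 8.0, 256)

def sine_gaussian(A, f0, t0):
    """Sine-Gaussian of Eq. (11): unit-width Gaussian envelope centered
    at t0 times a sinusoid of frequency f0, with amplitude A."""
    return A * np.exp(-0.5 * (t - t0) ** 2) * np.sin(2.0 * np.pi * f0 * t)

def snr(h, dt=t[1] - t[0]):
    # With white noise of unit spectral density the inner product reduces to
    # (a|b) = sum(a*b)*dt in this discretization (an assumption of this sketch).
    return np.sqrt(np.sum(h * h) * dt)

# Fix the amplitude so that the fiducial signal has SNR = 5, as in the Letter;
# the fiducial f0, t0 below are illustrative placeholders.
f0_tr, t0_tr = 1.0, 0.0
A = 5.0 / snr(sine_gaussian(1.0, f0_tr, t0_tr))
h_tr = sine_gaussian(A, f0_tr, t0_tr)
```

Because the SNR is linear in A, rescaling the unit-amplitude signal pins the fiducial SNR exactly.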

In Fig. 3, I show maps of p(θ_ML) obtained by using both the brute-force approach [Eq. (3)] and the new method [Eq. (4)]. For the former (the noisy white curves and shading), I produced 100,000 likelihood maps with different n’s drawn from p(n), and 2D-histogrammed the resulting θ_ML. For the latter (dark curves), I used Eq. (4), as implemented by Eq. (8). As expected, the maps agree, but the new method is considerably faster. For comparison, the top–right plot shows the Fisher-matrix prediction (9).

Conclusions.

I have described a novel approach to create exact maps, for any SNR, of the distribution of the ML estimator for the source parameters of a signal embedded in additive Gaussian noise. This distribution would be obtained in a large set of observations of the same true signal with different noise realizations, each appearing with probability p(n). Given a single observation, such a map embodies the frequentist notion of uncertainty for the ML estimator. From a Bayesian viewpoint, if priors are unimportant, the map characterizes the distribution of possible maxima of posterior probabilities.

Figure 3: Exact (left) and Fisher-matrix (right) sampling distributions of θ_ML. Contours are plotted as in Fig. 2, with the noisy white curves and shading derived from the numerical maxima of 100,000 likelihood maps for different n’s, and the dark curves from Eq. (4). The dot marks the most probable θ_ML, slightly displaced from θ_tr.

In comparison to the computational cost of the “Holy Grail” nested Monte Carlo, we estimate the cost of Eq. (4) as follows: the number of Monte Carlo samples of candidate θ_ML’s, times the O(K⁴) inner products needed for each, times the O(N log N) floating-point operations for each inner-product FFT; even for K ~ 10, this scheme comes out roughly a thousand times cheaper.

These maps can be used directly, in both frequentist and Bayesian frameworks, to study parameter-estimation prospects, but also to perform stringent tests of Fisher-matrix predictions at low SNRs, and to provide proposal distributions for Monte Carlo searches of unknown sources. An interesting feature in this regard is that Eq. (2), a local condition, does not distinguish between primary and secondary maxima of the likelihood, and it will include the latter (if they are sufficiently probable) in the maps; thus Eq. (4) could be exploited to enable jumps between separated peaks in complex likelihoods, which are generally very difficult to locate. While this result was derived in the context and with the motivation of GW science, it is applicable to statistical inference for any problem where noise can be regarded as Gaussian and additive, such as several that arise in high-energy physics and observational cosmology.

Acknowledgments.

I am grateful to G. Cicuta, N. Cornish, C. Cutler, F. Feroz, M. Hobson, J. Jewell, I. Mandel, S. Nissanke, E. Onofri, R. O’Shaughnessy, and T. Prince, as well as two anonymous referees, for useful suggestions and for reviewing this manuscript. This work was supported by the RTD program at the Jet Propulsion Laboratory, California Institute of Technology, where it was performed under contract with the National Aeronautics and Space Administration. Copyright 2011 California Institute of Technology. Government sponsorship acknowledged.

References

  • Harry and the LIGO Scientific Collaboration (2010) G. M. Harry and the LIGO Scientific Collaboration, Class. Quant. Grav. 27, 084006 (2010).
  • Virgo Collaboration (2009) Virgo Collaboration, “Advanced Virgo baseline design,” tech. report VIR-0027A-09, tds.ego-gw.it/ql/?c=6589 (2009).
  • Special issue (2011) Special issue, Gen. Rel. Grav. 43, 361 (2011).
  • Prince et al. (2009) T. A. Prince et al., “LISA: Probing the Universe with Gravitational Waves,” list.caltech.edu/mission_documents (2009).
  • Allen et al. (2005) B. Allen et al., arXiv:gr-qc/0509116 (2005).
  • Abbott et al. (2008) B. Abbott et al., Astrophys. J. Lett. 683, L45 (2008).
  • Cutler and Flanagan (1994) C. Cutler and É. E. Flanagan, Phys. Rev. D 49, 2658 (1994).
  • Liu (2001) J. S. Liu, Monte Carlo strategies in scientific computing (Springer, New York, 2001).
  • Nissanke et al. (2011) S. Nissanke et al., Astrophys. J. 739, 99 (2011).
  • Vallisneri (2008) M. Vallisneri, Phys. Rev. D 77, 042001 (2008).
  • Balasubramanian et al. (1996) R. Balasubramanian, B. S. Sathyaprakash,  and S. V. Dhurandhar, Phys. Rev. D 53, 3033 (1996).
  • Balasubramanian and Dhurandhar (1998) R. Balasubramanian and S. V. Dhurandhar, Phys. Rev. D 57, 3408 (1998).
  • Nicholson and Vecchio (1998) D. Nicholson and A. Vecchio, Phys. Rev. D 57, 4588 (1998).
  • Vitale and Zanolin (2010) S. Vitale and M. Zanolin, Phys. Rev. D 82, 124065 (2010).
  • Nissanke et al. (2010) S. Nissanke et al., Astrophys. J. 725, 496 (2010).
  • Cornish (2010) N. J. Cornish, arXiv:1007.4820 [gr-qc] (2010).
  • Jaynes (2003) E. T. Jaynes, Probability Theory: The Logic of Science (Cambridge University Press, 2003).