Matrix-free Large Scale Bayesian inference in cosmology
In this work we propose a new matrix-free implementation of the Wiener sampler, which is traditionally applied to high dimensional inference problems when signal covariances are unknown. Specifically, the proposed method addresses the problem of jointly inferring a high dimensional signal and its corresponding covariance matrix from a set of observations. Our method implements a Gibbs sampling adaptation of the previously presented messenger approach, permitting us to cast the complex multivariate inference problem into a sequence of uni-variate random processes. In this fashion, the traditional requirement of inverting high dimensional matrices is completely eliminated from the inference process, resulting in an efficient algorithm that is trivial to implement. Using cosmic large scale structure data as a showcase, we demonstrate the capabilities of our Gibbs sampling approach by performing a joint analysis of three dimensional density fields and corresponding power-spectra from Gaussian mock catalogues. These tests demonstrate the ability of the algorithm to accurately infer the three dimensional density field and its power-spectrum, together with the corresponding uncertainty quantification. Moreover, they reveal excellent numerical and statistical efficiency, which will render the proposed algorithm a valuable addition to the toolbox of large scale Bayesian inference in cosmology and astrophysics.
keywords: large scale – reconstruction – Bayesian inference – cosmology – observations – methods: numerical
Ever increasing amounts and precision of modern cosmological and astrophysical data demand fast and robust methods to address the corresponding large scale inference problems of analysing these observations and extracting new knowledge about our Universe. In particular, the Wiener filter has become a standard tool for the analysis of large data sets, often involving many millions of parameters, with widespread applications in cosmology and astrophysics. Although it relies on a linear data model and Gaussian statistics, the Wiener filter approach is still a standard and well valued method when a robust approach for the inference of high dimensional signals is required, as occurring in the analysis of large scale structure (LSS) or cosmic microwave background (CMB) data. For this reason, Wiener filtering has been frequently applied to a variety of large scale structure analysis problems, specifically the inference of the three dimensional density field from galaxy observations (see e.g. Lahav et al., 1994; Fisher et al., 1994; Fisher et al., 1995; Ganon & Hoffman, 1993; Zaroubi et al., 1995; Fisher et al., 1995; Hoffman, 1994; Sheth, 1995; Zaroubi et al., 1999; Zaroubi, 2002; Erdoğdu et al., 2004; Kitaura & Enßlin, 2008; Erdoğdu et al., 2006; Kitaura et al., 2009; Jasche et al., 2010; Jasche & Wandelt, 2013).
In the field of CMB analysis, the Wiener filter is frequently employed as a map making algorithm or for the joint inference of temperature fluctuations and corresponding power-spectra (see e.g. Eriksen et al., 2004; Jewell et al., 2004; O’Dwyer et al., 2004; Wandelt et al., 2004; Eriksen et al., 2004; Smith et al., 2007; Larson et al., 2007; Eriksen et al., 2007; Elsner & Wandelt, 2013). Similar approaches have also been used for the optimal reconstruction of images from radio interferometry (Sutton & Wandelt, 2006; Sutter et al., 2013) or to generate improved maps of the galactic Faraday emission (Oppermann et al., 2012). Furthermore, the Wiener filter also constitutes an integral part of the recently presented general purpose statistical analysis framework NIFTY (Numerical Information Field Theory) (Selig et al., 2013).
Traditionally, numerical implementations of the Wiener filter rely on Krylov space methods, such as conjugate gradients, to invert matrices and solve high dimensional systems of linear equations (see e.g. Kitaura & Enßlin, 2008, and references therein). However, implementation and testing of these numerically demanding methods also constitutes a certain hurdle and generally requires some investment into code development before such methods can be used for specific scientific applications.
Recently, a particularly elegant and simple way of implementing the Wiener filter via a messenger method has been proposed, which remedies the hurdle of implementing numerical matrix inversion techniques for very high dimensional systems (Elsner & Wandelt, 2013). In particular, Elsner & Wandelt (2013) propose to introduce a messenger field to mediate between different preferred orthogonal bases, in which signal and noise covariance matrices can be expressed conveniently. In this approach, information from the data is transmitted to the signal via a messenger field that can generally be transformed efficiently from one basis representation to another. In this fashion the algorithm avoids the requirement of applying the inverse Wiener covariance matrix to the data.
In this work, we will pick up these ideas and propose an efficient and easy to implement Gibbs sampling approach to address Bayesian large scale inference problems in cosmology and astrophysics. Our primary intention is to propose a simple, yet powerful, algorithm for the joint inference of a signal and its corresponding covariance matrix from observations that can be implemented and operated by everyone, even inexperienced users. Specifically, we will exemplify the performance of this algorithm in the case of an LSS analysis aiming at the joint inference of the three dimensional density field and the cosmological power-spectrum from galaxy surveys.
The introduction of a messenger field yields an augmented Wiener posterior distribution, whose structure lends itself to an ideal multiple block sampling approach. In a first step the messenger field is realised from a normal distribution conditional on the observations in real space, followed by a second step of sampling the signal conditional on the previously sampled messenger field in Fourier space. Here, the augmented Wiener posterior distribution is chosen such that sampling these two random fields can be trivially achieved by generating a sequence of uni-variate normal random variates in their respective basis representations. Iterating these processes will then provide samples from the Wiener posterior without the need to perform matrix inversions or any other multi-parameter operation, except for basis transformations. Further, we will complement this algorithm with a power-spectrum sampling method to jointly infer the signal and its covariance matrix. Likewise, as described in the following, this algorithm will also only require the ability to generate uni-variate normal and inverse gamma variates.
Consequently, we arrive at an efficient algorithm which, at every stage, reduces the full joint problem to a sequence of independent uni-variate sub-problems. The advantage of this algorithm lies in its ease of implementation, thus greatly reducing the required investment in code development for large scale Bayesian inference projects. In the following we describe the implementation of this algorithm and exemplify it in the case of a mock LSS analysis. The paper is structured as follows. In section 2 we describe the messenger approach and the resulting augmented Wiener posterior distribution. Section 3 describes the numerical implementation of the proposed method. To estimate the performance of the algorithm in a realistic scenario, we apply it, as an example, to an artificial galaxy survey, which is described in section 4. In section 5 we discuss the results of these tests, and we conclude the paper with a summary and conclusions in section 6.
2 The augmented Wiener posterior
As described in the introduction, the aim of this work is to present an easy to implement algorithm for the large scale Bayesian problem of jointly inferring a signal and its covariance matrix in a high dimensional setting. Specifically, we aim at exploring the joint posterior distribution P(s, S \,|\, d) of the signal s and its covariance matrix S conditional on observations d. Using Bayes' rule this posterior distribution can be rewritten as:

P(s, S \,|\, d) = \frac{P(S)\, P(s \,|\, S)\, P(d \,|\, s)}{P(d)} ,
where P(S) is the signal covariance prior, P(s \,|\, S) is the signal prior and P(d \,|\, s) is the likelihood, normalised by the evidence P(d). Note that the observations are assumed to be conditionally independent of the signal covariance matrix once the signal is given, specifically P(d \,|\, s, S) = P(d \,|\, s). In the following we will assume linear data models of the form:

d = R\, s + n ,
where R is a linear measurement response operator and n is a normally distributed noise vector with zero mean and noise covariance matrix N. Further, assuming a Gaussian prior for the signal yields the famous Wiener posterior for the inference of the signal, given as:

P(s \,|\, d, S) \propto \exp\!\left(-\frac{1}{2}\, s^\dagger S^{-1} s\right) \exp\!\left(-\frac{1}{2}\, \left(d - R\, s\right)^\dagger N^{-1} \left(d - R\, s\right)\right) .
Complications in inferring signals from the Wiener posterior arise in the analysis of large and complex data sets, since the sizes of the signal and noise covariance matrices scale quadratically with the number of signal parameters and data points (Elsner & Wandelt, 2013). This fact generally renders storage and processing of dense systems impractical. Although it is often possible to find a set of bases in which the respective noise and signal covariances can be represented by sparse matrices, it is generally not possible to jointly represent them as sparse systems in a single basis. As a consequence, traditional MCMC methods rely on the implementation of complex numerical algorithms, such as Krylov space methods or gradient based hybrid Monte Carlo approaches, to solve the corresponding sets of linear equations (for examples in cosmological applications see e.g. Eriksen et al., 2004; Jewell et al., 2004; O’Dwyer et al., 2004; Wandelt et al., 2004; Eriksen et al., 2004; Larson et al., 2007; Eriksen et al., 2007; Elsner & Wandelt, 2013; Lahav et al., 1994; Fisher et al., 1994; Fisher et al., 1995; Ganon & Hoffman, 1993; Zaroubi et al., 1995; Fisher et al., 1995; Hoffman, 1994; Sheth, 1995; Zaroubi et al., 1999; Zaroubi, 2002; Erdoğdu et al., 2004; Kitaura & Enßlin, 2008; Erdoğdu et al., 2006; Kitaura et al., 2009; Jasche et al., 2010; Jasche & Wandelt, 2013).
In this situation Elsner & Wandelt (2013) proposed to introduce a normally distributed messenger field t with covariance matrix T, to mediate between the respective bases in which S and N can be represented as sparse systems. In particular, the covariance matrix is chosen to be proportional to the identity matrix, T = \tau\, \mathbb{1}, a property which is conserved under orthogonal basis transforms. The introduction of this additional random field to the inference process yields an augmented Wiener posterior for the joint inference of the signal s and the messenger field t (absorbing, for notational simplicity, the response operator into the data and noise covariance), given as:

P(s, t \,|\, d, S) \propto \exp\!\left(-\frac{1}{2}\, (d - t)^\dagger \bar{N}^{-1} (d - t)\right) \exp\!\left(-\frac{1}{2}\, (t - s)^\dagger T^{-1} (t - s)\right) \exp\!\left(-\frac{1}{2}\, s^\dagger S^{-1} s\right) .
Note that, if the messenger covariance matrix is proportional to a diagonal matrix, T = \tau\, \mathbb{1}, marginalising over the messenger field will yield the target distribution given in equation (3), provided the augmented noise covariance is chosen as:

\bar{N} = N - T .
Furthermore, requiring the augmented noise covariance matrix \bar{N} to be positive definite yields:

\tau \le \min_i \left(N_{ii}\right) ;
specifically, we choose \tau to be the minimum of all diagonal entries of N in the observed domain.
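As a minimal sketch (in Python, with hypothetical names of our choosing), the construction of the messenger covariance amplitude \tau and the augmented noise covariance \bar{N} from a diagonal noise covariance could look as follows; `noise_var` holds the per-pixel noise variances and `observed` flags pixels inside the observed domain:

```python
import numpy as np

def split_noise_covariance(noise_var, observed):
    """Split the diagonal noise covariance N into T = tau*I and Nbar = N - T.

    tau is the smallest noise variance among observed pixels, which keeps
    Nbar positive semi-definite.
    """
    tau = noise_var[observed].min()
    nbar = noise_var - tau          # remaining (diagonal) noise covariance
    return tau, nbar

# toy example with four observed pixels
noise_var = np.array([2.0, 3.5, 2.5, 4.0])
observed = np.array([True, True, True, True])
tau, nbar = split_noise_covariance(noise_var, observed)
```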
3 A large scale Gibbs sampler
This section presents the derivation of our algorithm and describes its numerical implementation.
3.1 Generating signal realisations
The augmented Wiener posterior distribution given in equation (4) lends itself to a multiple block sampling approach. In particular, the problem of jointly exploring the augmented Wiener posterior can be reduced to the task of sequentially sampling the signal field s and the messenger field t. Specifically, we propose to generate random variates of the respective fields via the following two step sampling approach:

t \curvearrowleft P(t \,|\, s, d) , \quad s \curvearrowleft P(s \,|\, t, S) .
Iterating these processes will yield samples from the joint augmented Wiener posterior distribution. Marginalisation is then trivially achieved by simply discarding the respective realisations of the messenger field t, yielding signal realisations correctly drawn from the target Wiener posterior given in equation (3). The important point to remark, as demonstrated by the augmented Wiener posterior distribution given in equation (4) and as manifested by the proposed sampling procedure given in equation (8), is that information between data and signal is not transmitted directly between these two fields but is mediated via a third, messenger field. As the messenger covariance matrix T is diagonal in the respective bases in which the signal and noise covariances can be described as sparse diagonal systems, random variates for the signal and messenger fields can be generated by independently drawing uni-variate normal realisations for the individual elements of the respective fields in their respective bases. Specifically, signal realisations are generated via the process:

\hat{s}_k \curvearrowleft \mathcal{N}\!\left(\mu_k, \sigma_k^2\right) ,
with \mu_k = \hat{T}_k^{-1} \hat{t}_k \,/\, (\hat{S}_k^{-1} + \hat{T}_k^{-1}) and \sigma_k^2 = (\hat{S}_k^{-1} + \hat{T}_k^{-1})^{-1}. The index k labels the different elements of the respective vectors, and the hat operator indicates that all quantities have been transformed to the basis in which the signal covariance matrix assumes its diagonal shape, \hat{S}_{kk'} = \hat{S}_k\, \delta_{kk'}. In an analogous fashion, uni-variate normal variates can be generated for the individual elements of the messenger field as:

t_i \curvearrowleft \mathcal{N}\!\left(\mu_i, \sigma_i^2\right) , \quad \mathrm{with} \quad \mu_i = \frac{\bar{N}_i^{-1} d_i + T_i^{-1} s_i}{\bar{N}_i^{-1} + T_i^{-1}} \quad \mathrm{and} \quad \sigma_i^2 = \left(\bar{N}_i^{-1} + T_i^{-1}\right)^{-1} .
Note that for sampling the messenger field all quantities are given in the basis in which the noise covariance N becomes a diagonal matrix. It should also be remarked that the messenger covariances T and \hat{T} are the same only for normalised orthogonal transforms; otherwise they differ by a multiplicative normalisation constant.
Consequently, generating random signal variates from the Wiener posterior given in equation (3) only relies on the ability to draw uni-variate Gaussian random numbers and to perform orthonormal basis transformations to switch between different basis representations. In typical cosmological applications these orthonormal basis transformations switch between real and Fourier space, which is achieved via fast and efficient implementations of the fast Fourier transform or spherical harmonic transform algorithms (see e.g. O’Dwyer et al., 2004; Jewell et al., 2009; Kitaura & Enßlin, 2008; Jasche & Wandelt, 2013).
As can be seen from the derivation presented above, at no point does our approach rely on the storage and inversion of covariance matrices. A pseudo code for the proposed signal sampling algorithm is given in Algorithm 1.
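To make the two sampling steps concrete, the following sketch implements one Gibbs iteration in the spirit of Algorithm 1 for a one-dimensional periodic toy problem with unit response and inhomogeneous noise. All variable names are ours, and the orthonormal FFT (`norm="ortho"`) plays the role of the basis transform that keeps the messenger covariance equal to \tau in both bases:

```python
import numpy as np

def messenger_signal_sample(d, s, S_hat, noise_var, tau, rng):
    """One Gibbs iteration of a messenger signal sampler (1-D toy, R = 1).

    d         : data vector in real space
    s         : current signal sample in real space
    S_hat     : diagonal of the signal covariance in Fourier space
    noise_var : diagonal of the noise covariance N in real space
    tau       : messenger covariance amplitude, tau = min(noise_var)
    """
    nbar = np.maximum(noise_var - tau, 1e-30)   # Nbar = N - T (avoid division by zero)

    # --- sample t | s, d in real space: independent uni-variate normals ---
    var_t = 1.0 / (1.0 / tau + 1.0 / nbar)
    mean_t = var_t * (d / nbar + s / tau)
    t = mean_t + np.sqrt(var_t) * rng.standard_normal(d.size)

    # --- sample s | t in Fourier space, where S is diagonal ---
    t_hat = np.fft.fft(t, norm="ortho")
    var_s = 1.0 / (1.0 / S_hat + 1.0 / tau)
    mean_s = var_s * t_hat / tau
    # the FFT of real white noise has the Hermitian symmetry that keeps s real
    w_hat = np.fft.fft(rng.standard_normal(d.size), norm="ortho")
    s_hat = mean_s + np.sqrt(var_s) * w_hat
    return np.fft.ifft(s_hat, norm="ortho").real, t

rng = np.random.default_rng(42)
n = 64
k = np.fft.fftfreq(n)
S_hat = 1.0 / (0.01 + (2.0 * np.pi * np.abs(k)) ** 2)     # toy power spectrum
noise_var = 0.5 + 0.5 * rng.random(n)                     # inhomogeneous noise
s_true = np.fft.ifft(np.sqrt(S_hat) * np.fft.fft(rng.standard_normal(n), norm="ortho"),
                     norm="ortho").real
d = s_true + np.sqrt(noise_var) * rng.standard_normal(n)

s = np.zeros(n)
for _ in range(500):
    s, t = messenger_signal_sample(d, s, S_hat, noise_var, np.min(noise_var), rng)
```

After the loop, `s` is a posterior sample of the toy signal; its low signal to noise modes fluctuate around the prior while high signal to noise modes track the data.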
3.2 Sampling the signal covariance
Once a realisation of the signal s, as generated by Algorithm 1, is available, sampling the signal covariance matrix S becomes a particularly trivial task in the basis in which it assumes its diagonal form. As can be seen from the full joint posterior distribution given in equation (1), the conditional signal covariance posterior solely depends on the covariance prior and the signal prior once a signal realisation has been specified:

P(S \,|\, s, d) = P(S \,|\, s) \propto P(S)\, P(s \,|\, S) \propto P(\hat{S})\, P(\hat{s} \,|\, \hat{S}) ,
where for the last proportionality we used the fact that the determinant of the Jacobian of the coordinate transform induced by the orthonormal transformation is one. For the individual elements of the diagonal signal covariance matrix we propose to use Jeffreys' prior, given by P(\hat{S}) \propto \prod_k \hat{S}_k^{-1}, which factorises in the individual matrix elements. Jeffreys' prior is a solution to a measure invariant scale transformation, and hence is a scale independent prior, as different scales have the same probability (Jeffreys, 1946). For this reason, Jeffreys' prior constitutes an optimal choice for many applications, such as the inference of cosmological power-spectra, which constitute scale measurements, since it does not introduce any bias on a logarithmic scale (also see discussions in Jasche et al., 2010; Jasche & Wandelt, 2013).
Since, due to the diagonal shape of \hat{S} in the corresponding basis representation, the second factor in equation (13) also factorises in the matrix elements \hat{S}_k, all these elements can be sampled independently. In particular, simple algebraic manipulation of equation (13) reveals that the individual matrix elements have to be drawn from an inverse gamma distribution:

P(\hat{S}_k \,|\, \hat{s}_k) \propto \hat{S}_k^{-3/2} \exp\!\left(-\frac{|\hat{s}_k|^2}{2\, \hat{S}_k}\right) .
Again the complex joint sampling process of all signal covariance matrix elements can be reduced to the trivial task of independently generating inverse gamma variates. In particular, introducing the coordinate transformation x_k = |\hat{s}_k|^2 / \hat{S}_k yields a chi-square distribution, which gives rise to the sampling algorithm outlined in Algorithm 2. Sequential iteration of algorithms 1 and 2 then yields samples of the joint posterior distribution of the signal and its covariance matrix conditional on data. It should be remarked that in some cases additional symmetries can be exploited to further reduce the number of parameters required to describe the signal covariance matrix. In particular, for cosmological applications one can exploit the homogeneity and isotropy of the Universe to average the covariance matrix over spherical shells in Fourier space. For a discussion of the inverse gamma sampler in a cosmological setting and the required minor modifications to Algorithm 2 the reader is referred to Jasche et al. (2010) and Jasche & Wandelt (2013).
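For a single covariance element shared by n_k modes, the chi-square trick described above can be sketched as follows (names are ours; for simplicity we treat the modes as real-valued amplitudes, so that under Jeffreys' prior the transformation x = \rho / \hat{S}_k, with \rho the summed squared amplitudes, yields a chi-square variate with n_k degrees of freedom):

```python
import numpy as np

def sample_covariance_element(s_hat_shell, rng):
    """Draw one diagonal signal-covariance element S_k from its conditional
    inverse gamma posterior under Jeffreys' prior.

    s_hat_shell : the n_k signal amplitudes sharing this covariance element.
    Chi-square trick: x ~ chi^2(n_k), then S_k = sum(|s|^2) / x.
    """
    rho = np.sum(np.abs(s_hat_shell) ** 2)
    x = rng.chisquare(df=len(s_hat_shell))
    return rho / x

rng = np.random.default_rng(0)
S_k = sample_covariance_element(np.ones(50), rng)   # one inverse gamma variate
```

The inverse gamma draw is strictly positive by construction, so no explicit positivity constraint is needed in this step.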
3.3 Improving statistical efficiency
The algorithms, as outlined above, already provide a correct Markov Chain Monte Carlo approach to explore the joint distribution of a signal and the corresponding signal covariance. While this approach provably converges to the target posterior in all regimes probed by the data, it may take prohibitively long computational times to generate a sufficient number of independent samples in the low signal to noise regime (for a discussion of this issue see e.g. Jewell et al., 2009; Jasche & Wandelt, 2013). In particular, the variations in subsequent samples of the signal covariance are solely determined by the signal variance, whereas the full joint posterior distribution is governed by signal variance and noise. As a consequence, the algorithms described above permit rapid exploration of parameters in the high signal to noise regime, but yield poor statistical efficiency in low signal to noise regimes, where the signal variance is typically less than the noise. Typically this results in a prohibitively long correlation length of the sequence of sampled signal covariances in the low signal to noise regime, requiring unfeasibly long Markov chains to generate sufficient numbers of independent samples (Jewell et al., 2009; Jasche & Wandelt, 2013). Fortunately, the messenger approach permits us to devise a particularly simple way to overcome these limitations via a simple change of coordinates. Rather than separating the steps of sampling the signal and the covariance matrix, as described above, here we propose to combine the sampling steps of the signal and its covariance matrix conditional on a realisation of the messenger field by exploring the conditional posterior distribution:

P(s, S \,|\, t) \propto P(S)\, P(s \,|\, S)\, P(t \,|\, s) .
Introducing the change of coordinates x_k = \hat{s}_k / \sqrt{\hat{S}_k} then yields the transformed distribution:

P(x, \hat{S} \,|\, \hat{t}) \propto \prod_k P(\hat{S}_k)\, \exp\!\left(-\frac{x_k^2}{2}\right) \exp\!\left(-\frac{\left(\hat{t}_k - \sqrt{\hat{S}_k}\, x_k\right)^2}{2\, \hat{T}_k}\right) ,
which again factorises into the individual elements. Exploring the joint distribution of x and \hat{S} can then again be achieved by sampling individual elements via a block sampling algorithm. In a first step, realisations for the components x_k are drawn via the following process:

x_k \curvearrowleft \mathcal{N}\!\left(\frac{\sqrt{\hat{S}_k}\, \hat{t}_k}{\hat{S}_k + \hat{T}_k},\; \frac{\hat{T}_k}{\hat{S}_k + \hat{T}_k}\right) .
Conditional on these realisations of x_k, samples for the elements of the signal covariance \hat{S}_k can be generated by drawing random variates from the conditional distribution:

P(\hat{S}_k \,|\, x_k, \hat{t}_k) \propto \hat{S}_k^{-1} \exp\!\left(-\frac{\left(\hat{t}_k - \sqrt{\hat{S}_k}\, x_k\right)^2}{2\, \hat{T}_k}\right) .
Unfortunately, sampling this distribution directly is not possible. Consequently, the proposed Markov Chain Monte Carlo approach has to rely on a Metropolis-Hastings acceptance step. For this purpose we introduce the change of coordinates u_k = \sqrt{\hat{S}_k}\, x_k, which yields the distribution:

P(u_k \,|\, x_k, \hat{t}_k) \propto \frac{\Theta(u_k)}{u_k} \exp\!\left(-\frac{\left(\hat{t}_k - u_k\right)^2}{2\, \hat{T}_k}\right) .
It can be seen that the resultant distribution for u_k is essentially a normal distribution multiplied by a factor which ensures that samples of the covariance matrix will be strictly positive. Although direct sampling from this distribution is not possible, our tests have shown that generating proposals from a truncated normal distribution yields nearly ideal acceptance rates in a Metropolis-Hastings step. In particular, we propose to use an independence sampler, generating proposals via the process:

q(u_k') \propto \Theta(u_k')\, \exp\!\left(-\frac{\left(u_k' - \hat{t}_k\right)^2}{2\, \hat{T}_k}\right) ,
where \Theta denotes the Heaviside function. Using this particular proposal distribution then yields the standard Metropolis-Hastings acceptance probability for each individual element:

\alpha_k = \min\!\left(1,\; \frac{P(u_k' \,|\, x_k, \hat{t}_k)\, q(u_k)}{P(u_k \,|\, x_k, \hat{t}_k)\, q(u_k')}\right) .
As in the previous sections, the proposed algorithm relies only on independent uni-variate updates in the Markov chain and hence is trivial to implement. The pseudo code for this algorithm is given in Algorithm 3. The proposed algorithm is optimal for sampling the low signal to noise regime. In particular, introducing the initial change of coordinates moved the covariance matrix from the signal prior to the messenger posterior distribution in equation (16). As a consequence, the step size between subsequent covariance matrix samples is not determined by the prior variance but by the larger noise variance represented by \hat{T}.
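A generic sketch of the Metropolis-Hastings building block used here, an independence sampler with a truncated normal proposal, might look as follows (all names are ours; `log_target` stands for the logarithm of the desired conditional distribution, which we leave abstract, and the truncation constant of the proposal cancels in the acceptance ratio because both points lie on the positive half-line):

```python
import numpy as np

def truncated_normal_proposal(mu, var, rng):
    """Draw from a normal(mu, var) truncated to u > 0 by simple rejection;
    fine when mu sits well inside the positive half-line."""
    while True:
        u = mu + np.sqrt(var) * rng.standard_normal()
        if u > 0.0:
            return u

def independence_mh_step(u_cur, log_target, mu, var, rng):
    """One independence Metropolis-Hastings update with acceptance probability
    min(1, pi(u') q(u) / (pi(u) q(u')))."""
    u_prop = truncated_normal_proposal(mu, var, rng)

    def log_q(u):
        return -0.5 * (u - mu) ** 2 / var

    log_alpha = (log_target(u_prop) - log_target(u_cur)
                 + log_q(u_cur) - log_q(u_prop))
    if np.log(rng.random()) < log_alpha:
        return u_prop, True
    return u_cur, False

# demo: sample a half-normal target exp(-u^2/2), u > 0, with an offset proposal
rng = np.random.default_rng(3)
u, samples = 1.0, []
for _ in range(20000):
    u, accepted = independence_mh_step(u, lambda v: -0.5 * v * v, 0.8, 1.0, rng)
    samples.append(u)
```

Because the proposal does not depend on the current state, accepted moves can be arbitrarily large, which is exactly what restores mixing in the low signal to noise regime.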
3.4 Comments on parallelization
Generally, Markov Chain Monte Carlo methods can be parallelized either by running several Markov chains in parallel or by distributing intra-chain operations over multiple cores to speed up single serial chains. In particular, execution times for serial chain operation of our algorithm can be reduced by parallelizing loops over independent uni-variate sampling processes. In this case, care has to be taken when generating pseudo random numbers, an operation which is usually not thread-safe. A possible way to prevent collisions between random number sequences is to assign individual pseudo random number generators with different seeds to each core. In this fashion, each process will independently generate a unique sequence of pseudo random numbers. Intra-chain parallelization nevertheless still requires communication between different cores when performing orthonormal transforms, introducing waiting times for the fastest CPUs in the system. This parallelization scheme is therefore only reasonable when Markov chains exhibit long burn-in times and running parallel chains would be too wasteful. On the other hand, as will be demonstrated below, our algorithm shows short burn-in times and correlation lengths. For this reason, running several completely independent Markov chains from over-dispersed initial conditions and with different respective seeds is favourable over intra-chain parallelization. As communication between independent Markov chains is not required, this approach will also show ideal scaling with the number of cores and the number of independent samples generated by the sampling process.
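In Python, for instance, collision-free per-chain random number streams can be obtained by spawning child seed sequences from a common root (a sketch; chain counts and seeds are arbitrary choices of ours):

```python
import numpy as np

# One independent, collision-free stream per chain: SeedSequence.spawn
# guarantees statistically independent child sequences.
root = np.random.SeedSequence(12345)
child_seeds = root.spawn(4)                      # one per chain / core
rngs = [np.random.default_rng(s) for s in child_seeds]

# Each chain draws from its own generator: no shared state, no locking.
draws = [rng.standard_normal(3) for rng in rngs]
```

Each generator could then be handed to a separate process running one full Markov chain, realising the embarrassingly parallel scheme described above.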
4 Generation of a mock galaxy catalogue
To demonstrate the numerical performance of the proposed method in a realistic setting, we will generate artificial galaxy observations, following the approach previously described in Jasche et al. (2010) and Jasche & Wandelt (2013). In particular, we will perform a mock analysis on a cubic equidistant grid of side length consisting of grid nodes. The underlying cosmic density contrast field, being the signal to infer, is generated from a zero-mean normal distribution with the covariance matrix corresponding to a cosmological power-spectrum, including baryon acoustic oscillations, generated via the prescription of Eisenstein & Hu (1998) and Eisenstein & Hu (1999). Specifically, we assume a standard ΛCDM cosmology with the set of parameters given as (, , , , , ).
As a next step, according to the likelihood described in equation (3), this density field will be masked with the survey geometry and selection function, and normally distributed noise will be added. Following the description in Jasche & Wandelt (2013), we aim to emulate characteristic features of the Sloan Digital Sky Survey data release 7 (SDSS DR7) (Abazajian et al., 2009). In particular, we employ the redshift completeness of the SDSS DR7, which was computed with the MANGLE code provided by Swanson et al. (2008) and has been stored on a HEALPIX map with (Górski et al., 2005). Further, we assume a radial selection function following from a standard Schechter luminosity function with standard r-band parameters ( , ), and we limit the survey to only include galaxies within an apparent Petrosian r-band magnitude range and within the absolute magnitude ranges to . As usual, the radial selection function is then given by the integral of the Schechter luminosity function over the range in absolute magnitude. The product of the two dimensional survey completeness C(\mathrm{RA}, \mathrm{Dec}) and the radial selection function f(z) at each point in the three dimensional volume yields the survey response operator:

R_i = C(\mathrm{RA}_i, \mathrm{Dec}_i)\, f(z_i) ,
where \mathrm{RA}_i and \mathrm{Dec}_i are the right ascension and declination coordinates corresponding to the i-th volume element, and z_i is the corresponding redshift. Given these definitions and a realisation of the three dimensional density field \delta, a realisation of the artificial galaxy number counts is obtained by:

N_i^{\mathrm{gal}} = R_i\, \bar{N}^{\mathrm{gal}} \left(1 + \delta_i\right) + \sqrt{R_i\, \bar{N}^{\mathrm{gal}}}\; \epsilon_i ,
where \epsilon_i is a white-noise field drawn from a zero-mean, unit-variance normal distribution, and the expected average number of galaxies \bar{N}^{\mathrm{gal}} is obtained via integration of the Schechter luminosity function over the surveyed range in absolute magnitude.
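A sketch of this noise-adding step, under our reading of the described count model (the data model written in the comment is an assumption, as are all names), could look as follows:

```python
import numpy as np

def mock_galaxy_counts(delta, response, nbar_gal, rng):
    """Generate noisy mock galaxy counts on the grid.

    Assumed data model (a Gaussian approximation to Poissonian shot noise):
        N_i = R_i * nbar * (1 + delta_i) + sqrt(R_i * nbar) * eps_i
    with eps_i ~ N(0, 1).
    """
    eps = rng.standard_normal(delta.shape)
    signal_part = response * nbar_gal * (1.0 + delta)
    noise_part = np.sqrt(response * nbar_gal) * eps
    return signal_part + noise_part

rng = np.random.default_rng(7)
counts = mock_galaxy_counts(np.zeros(10000), np.ones(10000), 100.0, rng)
```

Masked regions (response zero) automatically receive zero counts and zero noise, so the survey geometry propagates consistently into the mock data.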
5 Results

In this section we discuss the results of the application of our method to an artificial galaxy catalogue, with particular emphasis on the statistical efficiency of the algorithm.
5.1 Statistical efficiency
Generally, in a Bayesian context the ill-posed inverse problem of inferring signals from observations subject to statistical uncertainty is addressed by providing numerical representations of the corresponding posterior distribution. Here we achieve this goal via the proposed Gibbs sampling process, providing random realisations of the large scale structure and corresponding power-spectra conditioned on observations. This sampled representation of the posterior distribution then permits us to address the inverse problem by providing summary statistics, accurately accounting for all uncertainties involved in the inference process. Nevertheless, Markov Chain Monte Carlo methods generally draw random variates from the posterior distribution by generating a sequence of solutions satisfying ergodicity.
This approach generally yields a highly correlated sequence of solutions, which will almost surely converge to the target posterior distribution in the large sample limit. As we seek to provide summary statistics for the parameters of interest, the performance of the proposed Gibbs sampler is determined by its ability to generate independent samples. This statistical efficiency is consequently of crucial importance for any MCMC method, and is usually quantified by the number of iterations required to generate a statistically independent sample. In order to determine the statistical efficiency of the proposed method, we follow the standard procedure of estimating the correlation length of power-spectrum amplitudes within the chain (see e.g. Eriksen et al., 2004; Jasche et al., 2010; Jasche & Wandelt, 2013). Assuming all parameters in the Markov chain to be independent of each other, the correlation between subsequent power-spectrum amplitudes can be quantified in terms of the auto-correlation function:

C(n) = \frac{\left\langle \left(P^{(i)} - \langle P \rangle\right) \left(P^{(i+n)} - \langle P \rangle\right) \right\rangle_i}{\langle P^2 \rangle - \langle P \rangle^2} ,
where n is the distance in the chain measured in iterations. The results of this analysis are presented in the right panel of Figure 1, where we estimated the correlation length from 80,000 recorded samples obtained by the application of our method to mock data. In our tests we recorded every tenth sample generated by the Markov chain. Typically we determine the correlation length by the lag in samples required for the auto-correlation function to drop below ten percent (Eriksen et al., 2004). Given this definition of the correlation length, the test clearly indicates short correlation lengths for all recorded power-spectrum modes. Consequently, besides the ease of implementation, the proposed method also exhibits excellent statistical efficiency for large scale statistical applications in modern cosmology and astrophysics.
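The correlation length estimate used here can be sketched as follows (names are ours; the auto-correlation is normalised so that C(0) = 1, and the correlation length is the first lag below the ten percent threshold):

```python
import numpy as np

def autocorrelation(chain, max_lag):
    """Empirical auto-correlation C(n) of a scalar chain quantity,
    normalised so that C(0) = 1."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    var = np.mean(x * x)
    return np.array([np.mean(x[: len(x) - n] * x[n:]) / var
                     for n in range(max_lag + 1)])

def correlation_length(chain, threshold=0.1, max_lag=200):
    """First lag at which the auto-correlation drops below `threshold`."""
    c = autocorrelation(chain, max_lag)
    below = np.nonzero(c < threshold)[0]
    return int(below[0]) if below.size else max_lag

# demo: an AR(1) chain with known auto-correlation 0.5**n
rng = np.random.default_rng(1)
eps = rng.standard_normal(200000)
chain = np.empty(200000)
chain[0] = 0.0
for i in range(1, 200000):
    chain[i] = 0.5 * chain[i - 1] + eps[i]

length = correlation_length(chain)
```

For the AR(1) demo, C(n) is approximately 0.5**n, so the estimated correlation length should sit near the lag where 0.5**n first falls below 0.1.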
5.2 Inferring density maps and power-spectra
The messenger sampler, as proposed in this work, aims at the joint inference of a signal and its corresponding unknown covariance matrix. Specifically, here, we propose to use this method for the joint inference of the cosmological three dimensional density field and its power-spectrum from observations. As discussed above, inference of signals from noisy data is generally an ill-posed task, as there exists no unique solution. The proposed method addresses this issue by exploring the joint posterior of the cosmic density field and the power-spectrum via an efficient Gibbs sampling approach, providing a set of solutions compatible with the observations. As a scientific result, this method therefore provides a numerical representation of the target posterior in terms of random realisations of density fields and power-spectra conditioned on the observations. In this fashion, the ensemble of generated density fields and power-spectra permits us to generate any desired statistical summary of the parameters under consideration and to account for all joint and correlated uncertainties (see also Eriksen et al., 2004; Jasche et al., 2010; Jasche & Wandelt, 2013). As an example, we inferred the ensemble mean of the cosmic power-spectrum and corresponding uncertainties from the set of 80,000 generated power-spectrum samples. The results for the ensemble mean power-spectrum, together with the one and two sigma confidence regions, are presented in the left panel of Figure 1. It can be seen that the inferred power-spectrum nicely follows the underlying fiducial power-spectrum from which the mock realisation was drawn. Also, we do not observe any particular bias throughout the entire range of recovered Fourier modes. Note that these results are generally compatible with previous approaches relying on different sampling strategies (see e.g. Jasche et al., 2010; Jasche & Wandelt, 2013).
Besides the cosmological power-spectrum, the method also provides maps of the three dimensional matter distribution. In particular, in Figure 2 we present the ensemble mean density field estimated from 80,000 density samples, along with the corresponding standard deviations quantifying the uncertainty. As anticipated from standard Wiener filtering approaches, the inferred density field recovers the underlying signal best in regions of high signal to noise and approaches the mean density in regions of low signal to noise (see e.g. Kitaura & Enßlin, 2008; Kitaura et al., 2009; Jasche et al., 2010; Jasche & Wandelt, 2013). The corresponding standard deviations for each volume element of the inferred density field are presented in the lower panels of Figure 2, indicating the corresponding uncertainties at all spatial points in the observed domain. In this fashion the proposed method not only provides single estimates of the parameters under consideration but also provides thorough uncertainty quantification and means for error propagation. These results therefore permit us to derive any desired statistical summary and corresponding uncertainties, which are generally of crucial importance in order not to misinterpret the data.
6 Summary and Conclusion
Modern cosmology has an ever increasing demand for fast and accurate statistical inference methods to counter present and upcoming avalanches of cosmological and astrophysical data. As pointed out in the introduction, inference of signals from observations subject to noise is an ill-posed problem, requiring sophisticated statistical methods to quantify the corresponding statistical uncertainties. Specifically, large scale Bayesian inference, such as the joint inference of three dimensional matter density fields and corresponding cosmological power-spectra from observations, relies on complex and numerically expensive MCMC methods, often involving implementations of Krylov space methods or gradient based hybrid Monte Carlo approaches (see e.g. Kitaura & Enßlin, 2008; Jasche & Wandelt, 2013). Not only are these methods numerically expensive, but they are also hard to test and debug, especially in large scale applications (Cook et al., 2006). Besides these issues, the requirement to implement Krylov space or HMC methods constitutes a significant hurdle for rapid prototyping and development of large scale Bayesian inference methods in cosmology and astrophysics. To address these issues, in this work we present a new, efficient and trivial to implement Gibbs sampling approach for the joint inference of cosmological density fields and power-spectra for linear data models. This approach picks up basic ideas of the recently proposed messenger method for Wiener filtering, described by Elsner & Wandelt (2013), and does not require any matrix inversions to explore high dimensional parameter spaces. As described in section 2, introducing a messenger field to mediate between different preferred bases, in which signal and noise covariance matrices can be expressed conveniently, yields an augmented Wiener posterior distribution which can be efficiently explored via a multiple block Gibbs sampling approach (Elsner & Wandelt, 2013).
In particular, the proposed method turns the cumbersome task of inverting multi-million dimensional matrices into that of sequentially drawing random numbers from uni-variate normal distributions. While each step is trivial to implement, iterating this process yields full multi-variate random fields drawn from the desired target Wiener posterior distribution, hence correctly addressing the large scale inference problem. To also address problems in which the covariance matrix of the signal to be inferred is unknown, we add a power-spectrum block sampler to jointly infer the signal and its power-spectrum.
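To make the structure of these uni-variate draws concrete, the following one-dimensional toy sketch (our own minimal illustration under assumed conventions — a unitary FFT, a toy power-spectrum, and invented variable names — not the authors' implementation) alternates between the two conditional Gaussians of the messenger scheme: the messenger field is drawn pixel by pixel where the noise covariance is diagonal, and the signal is drawn mode by mode where the signal covariance is diagonal.

```python
import numpy as np

rng = np.random.default_rng(42)
npix = 64

# Toy setup: signal covariance S diagonal in (unitary) Fourier space,
# noise covariance N diagonal in pixel space.
k = np.fft.fftfreq(npix)
S = 1.0 / (0.05 + np.abs(k) ** 2)            # assumed toy power-spectrum
N = rng.uniform(1.0, 4.0, npix)              # heteroscedastic pixel noise
tau = N.min()                                 # messenger covariance T = tau * I
Nbar = np.maximum(N - tau, 1e-12)             # remaining noise, still diagonal

# Mock data d = s_true + noise.
s_true = np.fft.ifft(np.sqrt(S) * np.fft.fft(rng.standard_normal(npix),
                                             norm="ortho"), norm="ortho").real
d = s_true + np.sqrt(N) * rng.standard_normal(npix)

def gibbs_step(s):
    # 1) Messenger field t | s, d: independent uni-variate Gaussians per pixel.
    var_t = 1.0 / (1.0 / Nbar + 1.0 / tau)
    mean_t = var_t * (d / Nbar + s / tau)
    t = mean_t + np.sqrt(var_t) * rng.standard_normal(npix)
    # 2) Signal s | t: independent uni-variate Gaussians per Fourier mode.
    var_s = 1.0 / (1.0 / S + 1.0 / tau)
    mean_s_hat = var_s * np.fft.fft(t, norm="ortho") / tau
    # Colour a real white-noise field to preserve Hermitian symmetry.
    w_hat = np.fft.fft(rng.standard_normal(npix), norm="ortho")
    return np.fft.ifft(mean_s_hat + np.sqrt(var_s) * w_hat, norm="ortho").real

s = np.zeros(npix)
samples = []
for i in range(400):
    s = gibbs_step(s)
    if i >= 100:                              # discard burn-in
        samples.append(s)
post_mean = np.mean(samples, axis=0)          # approximates the Wiener mean
```

No matrix is ever inverted: both conditionals factorise into products of uni-variate normals because each covariance is diagonal in its own preferred basis.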
As described in section 3, this power-spectrum sampler permits efficient exploration of the high as well as the low signal-to-noise regime. While the low signal-to-noise regime is dominated by stochastic noise, high signal-to-noise regions are dominated solely by the much smaller sample variance. Using this sample variance to determine step sizes in Markov transitions results in correct sampling of the target posterior, but may also yield unfeasibly long Markov chains, as parameters in the low signal-to-noise regime remain correlated over many steps. To counter this problem, in section 3 we introduced a coordinate transformation which permits larger steps in low signal-to-noise regimes via a Metropolis-Hastings transition step. The combination of both approaches yields a covariance matrix sampler that is efficient in all regimes, while only requiring the ability to generate uni-variate random numbers.
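The conditional power-spectrum draw underlying the sample-variance-limited regime is an inverse-gamma distribution per k-bin, conveniently realised by dividing the observed binned power by a chi-squared variate. A minimal sketch follows (our own illustration; the binning layout, the degrees-of-freedom convention, and all names are assumptions, and the variable-transformation Metropolis-Hastings step for the low signal-to-noise regime is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_power_spectrum(s_hat_bins):
    """Draw P_k | s for each k-bin from its inverse-gamma conditional.

    s_hat_bins: list of arrays, each holding the (unitary-FFT) Fourier
    amplitudes of the signal belonging to one k-bin.  With a Jeffreys-type
    prior the conditional is inverse-gamma, sampled as summed power
    divided by a chi-squared variate.
    """
    P = []
    for modes in s_hat_bins:
        n_k = modes.size                      # number of modes in the bin
        sigma_k = np.sum(np.abs(modes) ** 2)  # total observed power in the bin
        # dof = n_k is an assumed convention; the exact counting depends
        # on how real and imaginary parts of the modes are stored.
        x = rng.chisquare(n_k)
        P.append(sigma_k / x)
    return np.array(P)
```

In densely populated bins (large n_k) the chi-squared variate concentrates around n_k, so the draws fluctuate only at the cosmic-variance level, consistent with the high signal-to-noise behaviour described above.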
In this fashion the task of jointly sampling a signal and its covariance matrix can be addressed purely by a sequence of uni-variate sampling processes.
In section 5 we exemplify the performance of our method in a cosmological setting by applying it to an artificial galaxy mock catalogue, described previously in section 4, aiming at the joint inference of the three dimensional density distribution and its cosmological power-spectrum from observations. This artificial data set emulates dominant features of the Sloan Digital Sky Survey data release 7, in particular the survey geometry, selection effects and noise, and thus constitutes a realistic test scenario.
A particularly important aspect when dealing with Markov Chain Monte Carlo algorithms is the determination of their statistical efficiency. As any Markov Chain Monte Carlo method generates a sequence of correlated samples, the number of independent samples actually produced is limited by the total length of the chain. In section 5.1 we therefore analysed the intra-chain correlation length between subsequently generated samples of the cosmological power-spectrum. These tests demonstrate formidable statistical efficiency for the proposed method over the entire range of Fourier-modes present in the analysis. Specifically, they indicate that the proposed Markov algorithm generates an independent sample at every 50th iteration of the Markov chain, where we chose one cycle to consist of ten Markov transitions.
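The intra-chain correlation length can be estimated from the normalised autocorrelation of each sampled power-spectrum amplitude along the chain. A generic sketch of such an estimator (our own, not the authors' code; the 0.1 threshold is an illustrative choice):

```python
import numpy as np

def autocorrelation(chain, max_lag):
    """Normalised autocorrelation C(n) of a one-dimensional MCMC chain.

    C(0) = 1 by construction; values near zero indicate that samples
    separated by that lag are effectively uncorrelated.
    """
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    var = x.var()
    return np.array([1.0 if lag == 0 else
                     np.mean(x[:-lag] * x[lag:]) / var
                     for lag in range(max_lag + 1)])

def correlation_length(chain, threshold=0.1, max_lag=200):
    """Smallest lag at which the autocorrelation drops below threshold."""
    acf = autocorrelation(chain, max_lag)
    below = np.flatnonzero(acf < threshold)
    return int(below[0]) if below.size else max_lag
```

Applying such an estimator mode by mode yields the correlation length as a function of Fourier-mode, from which the effective number of independent samples in a chain of given total length follows directly.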
Section 5.2 discusses the results obtained by the proposed Markov method. In particular, the method provides estimates of the ensemble mean cosmological power-spectrum and corresponding uncertainty quantification. The inferred power-spectrum recovers the underlying true signal and shows no sign of bias throughout the entire range of Fourier-modes under consideration. These results are also consistent with previous findings (see Jasche et al., 2010; Jasche & Wandelt, 2013).
Furthermore, in our example case, the method also provides inferred three dimensional maps of the cosmic matter distribution. In Figure 2 we show ensemble mean estimates of the density field and ensemble covariance maps quantifying the corresponding uncertainty. The proposed method therefore not only provides single estimates of signals, but also means to quantify and propagate statistical uncertainties for any finally inferred quantity, as required for modern precision cosmology.
The ease of implementation and the numerical and statistical efficiency render this method an ideal tool for large scale Bayesian applications involving million dimensional problems and linear data models. A particularly important feature of the method is that it only requires the ability to sample from uni-variate distributions; it can therefore be trivially implemented and tested even by inexperienced users, or be used for rapid prototyping and development of more complex inference frameworks.
In summary, we propose a statistically and numerically efficient Gibbs sampling approach for the inference of an unknown signal and its covariance matrix from observations subject to statistical uncertainties and systematics. Particularly owing to its ease of implementation, we anticipate that this method will greatly add to the dissemination of high precision large scale data analysis methods in cosmology and astrophysics, eventually leading to a more complete understanding of our Universe.
We thank Benjamin Wandelt and Franz Elsner for very useful discussions and comments. Special thanks also go to Stéphane Rouberol for his support during the course of this work, in particular for guaranteeing flawless use of all required computational resources. JJ is partially supported by a Feodor Lynen Fellowship by the Alexander von Humboldt foundation and Benjamin Wandelt’s Chaire d’Excellence from the Agence Nationale de la Recherche. This work, carried out within the ILP LABEX (under reference ANR-10-LABX-63), was supported by French state funds managed by the ANR within the Investissements d'Avenir programme under reference ANR-11-IDEX-0004-02.
- Abazajian et al. (2009) Abazajian K. N., et al., 2009, ApJS, 182, 543
- Cook et al. (2006) Cook S. R., Gelman A., Rubin D. B., 2006, J. Comput. Graph. Statist., 15, 675
- Eisenstein & Hu (1998) Eisenstein D. J., Hu W., 1998, ApJ, 496, 605
- Eisenstein & Hu (1999) Eisenstein D. J., Hu W., 1999, ApJ, 511, 5
- Elsner & Wandelt (2013) Elsner F., Wandelt B. D., 2013, A&A, 549, A111
- Erdoğdu et al. (2006) Erdoğdu P., Lahav O., Huchra J. P., Colless M., Cutri R. M., Falco E., George T., Jarrett T., Jones D. H., Macri L. M., Mader J., Martimbeau N., Pahre M. A., Parker Q. A., Rassat A., Saunders W., 2006, MNRAS, 373, 45
- Erdoğdu et al. (2004) Erdoğdu P., Lahav O., Zaroubi S., et al. 2004, MNRAS, 352, 939
- Eriksen et al. (2007) Eriksen H. K., Huey G., Banday A. J., Górski K. M., Jewell J. B., O’Dwyer I. J., Wandelt B. D., 2007, ApJL, 665, L1
- Eriksen et al. (2004) Eriksen H. K., O’Dwyer I. J., Jewell J. B., Wandelt B. D., Larson D. L., Górski K. M., Levin S., Banday A. J., Lilje P. B., 2004, ApJS, 155, 227
- Fisher et al. (1995) Fisher K. B., Lahav O., Hoffman Y., Lynden-Bell D., Zaroubi S., 1995, MNRAS, 272, 885
- Fisher et al. (1994) Fisher K. B., Scharf C. A., Lahav O., 1994, MNRAS, 266, 219
- Ganon & Hoffman (1993) Ganon G., Hoffman Y., 1993, ApJL, 415, L5
- Górski et al. (2005) Górski K. M., Hivon E., Banday A. J., Wandelt B. D., Hansen F. K., Reinecke M., Bartelmann M., 2005, ApJ, 622, 759
- Hoffman (1994) Hoffman Y., 1994, in Balkowski C., Kraan-Korteweg R. C., eds, Unveiling Large-Scale Structures Behind the Milky Way Vol. 67 of Astronomical Society of the Pacific Conference Series, Wiener Reconstruction of the Large-Scale Structure in the Zone of Avoidance. pp 185
- Jasche et al. (2010) Jasche J., Kitaura F. S., Wandelt B. D., Enßlin T. A., 2010, MNRAS, 406, 60
- Jasche & Wandelt (2013) Jasche J., Wandelt B. D., 2013, ApJ, 779, 15
- Jeffreys (1946) Jeffreys H., 1946, Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 186, 453
- Jewell et al. (2004) Jewell J., Levin S., Anderson C. H., 2004, ApJ, 609, 1
- Jewell et al. (2009) Jewell J. B., Eriksen H. K., Wandelt B. D., O’Dwyer I. J., Huey G., Górski K. M., 2009, ApJ, 697, 258
- Kitaura & Enßlin (2008) Kitaura F. S., Enßlin T. A., 2008, MNRAS, 389, 497
- Kitaura et al. (2009) Kitaura F. S., Jasche J., Li C., Enßlin T. A., Metcalf R. B., Wandelt B. D., Lemson G., White S. D. M., 2009, MNRAS, 400, 183
- Lahav et al. (1994) Lahav O., Fisher K. B., Hoffman Y., Scharf C. A., Zaroubi S., 1994, ApJL, 423, L93
- Larson et al. (2007) Larson D. L., Eriksen H. K., Wandelt B. D., Górski K. M., Huey G., Jewell J. B., O’Dwyer I. J., 2007, ApJ, 656, 653
- O’Dwyer et al. (2004) O’Dwyer I. J., Eriksen H. K., Wandelt B. D., Jewell J. B., Larson D. L., Górski K. M., Banday A. J., Levin S., Lilje P. B., 2004, ApJL, 617, L99
- Oppermann et al. (2012) Oppermann N., et al., 2012, A&A, 542, A93
- Selig et al. (2013) Selig M., Bell M. R., Junklewitz H., Oppermann N., Reinecke M., Greiner M., Pachajoa C., Enßlin T. A., 2013, A&A, 554, A26
- Sheth (1995) Sheth R. K., 1995, MNRAS, 277, 933
- Smith et al. (2007) Smith K. M., Zahn O., Doré O., 2007, Phys. Rev. D, 76, 043510
- Sutter et al. (2013) Sutter P. M., Wandelt B. D., McEwen J. D., Bunn E. F., Karakci A., Korotkov A., Timbie P., Tucker G. S., Zhang L., 2013, MNRAS
- Sutton & Wandelt (2006) Sutton E. C., Wandelt B. D., 2006, ApJS, 162, 401
- Swanson et al. (2008) Swanson M. E. C., Tegmark M., Hamilton A. J. S., Hill J. C., 2008, MNRAS, 387, 1391
- Wandelt et al. (2004) Wandelt B. D., Larson D. L., Lakshminarayanan A., 2004, Phys. Rev. D, 70, 083511
- Zaroubi (2002) Zaroubi S., 2002, MNRAS, 331, 901
- Zaroubi et al. (1999) Zaroubi S., Hoffman Y., Dekel A., 1999, ApJ, 520, 413
- Zaroubi et al. (1995) Zaroubi S., Hoffman Y., Fisher K. B., Lahav O., 1995, ApJ, 449, 446