Gaussianisation for fast and accurate inference from cosmological data


Abstract

We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box–Cox transformations and generalisations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in MCMC samples. Further, the model evidence integral (i.e. the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianising transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianise the joint distribution of data from weak lensing and baryon acoustic oscillations (BAO), for different cosmological models, and find a preference for flat ΛCDM. Comparing to values computed with the Savage–Dickey density ratio and Population Monte Carlo, we find good agreement of our method within the spread of the other two.

1 Introduction

According to the Bayesian paradigm, inference on any data set will yield a posterior probability distribution on the space of model parameters. This density function represents, in its entirety, the full knowledge gained in the attempt to infer the underlying parameters. Such distributions often depart significantly from a Gaussian form. This has led to the widespread use of Monte Carlo sampling methods to report the typically non-Gaussian posterior constraints obtained from experiments such as Planck (see footnote 1). Reconstructing the posterior density from such a Markov Chain Monte Carlo (MCMC) sample, e.g. to visualise the multivariate parameter constraints, or to combine the constraints from multiple data sets, can be nontrivial due to the large sample size necessary to appropriately map the distribution; in addition, the contours often need further smoothing for stylistic reasons.
Instead, we propose to redefine the underlying model parameters, so that the new posterior density approximately takes a Gaussian shape after the transformation from old to new parameters; this presupposes that we begin with a uni-modal posterior density. Such a transformation would allow for enormous data compression: instead of a full MCMC sample from the posterior distribution, we only need to report the Gaussianising transformation, and the first and second moments of the resulting Gaussian distribution. From these alone, we can reconstruct an analytic expression for the full non-Gaussian posterior density, and subsequently combine it with other data sets.
Further, it becomes possible to display and compare non-ellipsoidally-shaped contours of non-Gaussian parameter constraints – whether joint or marginalised – without any smoothing. Thus, this method allows for summarising posterior densities in a versatile and efficient way, which faithfully reproduces the information contained in the full probability density.
The idea of transforming a function to a Gaussian shape is, in principle, not limited to reproducing probability densities. As the integral over a Gaussian can be performed analytically, this opens up a strategy to feasibly compute high-dimensional integrals, such as the model evidence (i.e. the marginal likelihood).
The transformed model parameters are analogous to the normal parameters of the cosmic microwave background (CMB): it has been highly advantageous for rapid likelihood calculation (such as CMBfit, CMBwarp, and PICO, see [19]), to redefine the cosmological model parameters such that the model is approximately linear in these newly defined normal parameters. Thus the likelihood approximately takes the form of a multivariate Gaussian density. For most observables, we would be at a loss to search for a linearising redefinition of the model parameter space directly motivated by the structure of the model itself. Instead, is it possible to computationally find suitable parameters, i.e. a suitable bijective transformation which approximately Gaussianises the posterior in question?
Extending the work of [13], we present an algorithm to find and test such a non-linear Gaussianising transformation from a Markov chain sampling the posterior distribution of the original parameters. In principle, this distribution could stem from any experiment or data type. In Sect. 2, we describe the details of the algorithm, the verification of the reconstructed posterior distribution, and the specific transformations employed. Following an illustration of these on a toy example in Sect. 3, Sect. 4 demonstrates the performance of our implementation, using Markov chains from the Planck satellite constraints on cosmological models ([26]). In Sect. 5, we present an efficient way to calculate the model evidence, a quantity needed to judge the predictivity of different model parameter spaces, via Gaussianising transformations. Sect. 6 offers conclusions and suggests future directions.

2 Gaussianisation

To find the right multivariate transformation, we will at first adopt the strategy of redefining each model parameter separately, i.e. the first new model parameter will only depend on the first old model parameter, etc. In Sect. 2.4, we will drop this assumption and consider transformations which can correlate the model parameters.
The set of all multivariate Gaussianisation transformations, from which we are to pick the optimal one, will be constructed in the following way: assume a family of bijective real-valued functions t(·; λ), indexed by an n-tuple of real transformation parameters λ. Given the d-dimensional vector of model parameters x = (x_1, ..., x_d), we transform to the new (Gaussian-distributed) parameters y via

y_i = t\!\left(x_i;\, \lambda^{(i)}\right), \qquad i = 1, \dots, d , \qquad (1)

where the full multivariate transformation is now specified by d transformation parameter n-tuples λ^{(1)}, ..., λ^{(d)}, i.e. one for each model parameter. To avoid confusion, we shall from now on distinguish between model parameters (MP), on which the posterior probability density depends, and transformation parameters (TP), which specify one Gaussianising transformation. The algorithm can be applied to arbitrary parametrised transformation families, suitable for various forms of non-Gaussianity – in principle, we could even choose a different transformation family for each model parameter, instead of using the same functional form for all of them.
Assuming such a bijective transformation x ↦ y(x), we immediately have an analytic form for the posterior density

p(x) = \mathcal{N}\!\left(y(x);\, \mu, \Sigma\right)\,\prod_{i=1}^{d}\left|\frac{\partial y_i}{\partial x_i}\right| , \qquad (2)

where \mathcal{N}(\,\cdot\,; \mu, \Sigma) denotes the multivariate normal density. One still needs to find the mean vector μ and the covariance matrix Σ of the transformed posterior density. These are estimated from the transformed sample (see Sect. 2.1).
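As an illustration of Eq. (2), the following sketch (not the authors' code; the names analytic_posterior, t, dt_dx, mu and cov are our own assumptions) evaluates the reconstructed posterior from a stored component-wise transformation and the Gaussian moments:

```python
# Hedged sketch: evaluate the reconstructed posterior of Eq. (2).
import numpy as np
from scipy.stats import multivariate_normal

def analytic_posterior(x, t, dt_dx, mu, cov):
    """x: model-parameter vector; t, dt_dx: lists of per-MP maps y_i(x_i)
    and their derivatives; mu, cov: moments of the Gaussianised posterior."""
    x = np.atleast_1d(x)
    y = np.array([t_i(x_i) for t_i, x_i in zip(t, x)])         # transformed parameters
    jac = np.prod([d_i(x_i) for d_i, x_i in zip(dt_dx, x)])    # diagonal Jacobian determinant
    return multivariate_normal.pdf(y, mean=mu, cov=cov) * abs(jac)
```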

2.1 Finding the optimal transformation

Given a weighted point sample X = {(x_k, w_k)}, containing points x_k in R^d and probability weights w_k, which has been sampled from the posterior distribution in question, we wish to quantify the Gaussianisation properties of different transformations applied to this sample. To this end, we follow [3] (see also [31] and [13]) in maximising the profile likelihood over TP space, i.e. a likelihood depending only on the real transformation parameters. This likelihood quantifies how well each transformation Gaussianises the distribution of the sample X; it should not be confused with the posterior density in Eq. (2), which is a function of the model parameters. For the Gaussian parameters μ and Σ in Eq. (2), we insert their standard debiased weighted maximum-likelihood estimators, which depend on the transformed sample y_k = y(x_k),

\hat{\mu} = \frac{1}{W_1}\sum_k w_k\, y_k , \qquad \hat{\Sigma} = \frac{W_1}{W_1^{2} - W_2}\sum_k w_k\,(y_k - \hat{\mu})(y_k - \hat{\mu})^{\rm T} ,

with W_1 = \sum_k w_k and W_2 = \sum_k w_k^{2}. These estimators depend on the TPs indirectly, as they are computed after X has been transformed. We arrive at the profile weighted log-likelihood

\ln L_{\rm p}(\lambda) = -\frac{W_1}{2}\,\ln\det\hat{\Sigma}(\lambda) + \sum_k w_k \ln\left|\det\frac{\partial y}{\partial x}(x_k; \lambda)\right| , \qquad (3)

where several terms independent of the TPs have been discarded. In general, both the covariance matrix of the transformed sample and the Jacobian term will depend on the transformation parameters in a non-linear way; hence, finding the maximum-likelihood values for the TPs requires numerical optimisation. For this purpose, we have employed the GSL implementation of the well-known Nelder–Mead simplex algorithm ([23]).
As already noted by [13], log-likelihood degeneracies in TP space are common. These may jeopardise the numerical stability of the calculation of ln L_p. There are generic cases where a moderately large value for one transformation parameter already results in unmanageably large numerical values for the transformed sample, e.g. a power transformation with a large exponent. Generically, the optimisation algorithm tends to slide into these TP-space regions quite easily. Hence, we include a penalty term that pulls the TPs towards λ^0, the parameter values corresponding to the identity transformation, and minimise the penalised negative log-likelihood over the real-valued transformation parameters. Moderate values of the penalty constants have proven to be highly stabilising, and at the same time do not distort the shape of the resulting analytic posterior distribution.
In this work, we employ the Nelder–Mead algorithm only to illustrate the method – faster and more reliable algorithms for finding the global optimum of the penalised likelihood exist (such as Bound Optimisation BY Quadratic Approximation – BOBYQA, see [25]) and can readily be applied here.
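A minimal sketch of this optimisation step follows (an assumed implementation, not the authors' code): `transform(x, tp)` and `log_jacobian(x, tp)` stand for the chosen transformation family of Sect. 2.2 applied to one MP, and the quadratic pull towards the identity TPs only mimics the stabilising penalty described above.

```python
# Hedged sketch of the TP search of Sect. 2.1 via scipy's Nelder-Mead.
import numpy as np
from scipy.optimize import minimize

def neg_profile_loglike(tp_flat, sample, weights, transform, log_jacobian,
                        tp_identity, alpha=1e-3):
    d = sample.shape[1]
    tp = tp_flat.reshape(d, -1)                          # one TP tuple per model parameter
    y = np.column_stack([transform(sample[:, i], tp[i]) for i in range(d)])
    cov = np.atleast_2d(np.cov(y.T, aweights=weights))   # debiased weighted covariance
    sign, logdet = np.linalg.slogdet(cov)
    log_jac = sum(weights @ log_jacobian(sample[:, i], tp[i]) for i in range(d))
    penalty = alpha * np.sum((tp - tp_identity) ** 2)    # schematic stabilising term
    return 0.5 * weights.sum() * logdet - log_jac + penalty

# result = minimize(neg_profile_loglike, tp_identity.ravel(),
#                   args=(sample, weights, transform, log_jacobian, tp_identity),
#                   method='Nelder-Mead')
```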

2.2 Box–Cox transformations and their kin

The Box–Cox transformation ([3]) is a generalisation of the power map. This transformation family is widely used in statistics and econometrics, e.g. to make data approximately homoscedastic and normally distributed. Our usage differs in that we alter the distribution of model parameters, rather than the distribution of data. Including a shift parameter a, the one-dimensional version is defined as

t(x; \lambda, a) =
\begin{cases}
\left[(x+a)^{\lambda} - 1\right]/\lambda & \lambda \neq 0 ,\\
\ln(x+a) & \lambda = 0 ,
\end{cases}

for a single MP x, i.e. d = 1. Note that the family is continuous at λ = 0 and that the mapping requires x + a > 0. Typically, an MP with a skewed distribution can be transformed to an MP with a symmetric, Gaussian distribution upon the appropriate choice of the power TP λ; e.g. a log-normal distribution is analytically transformed to a Gaussian by λ = 0. The identity transformation corresponds to λ = 1 and a = 1. Inserting this transformation family into Eq. (2), we recover the formula given in [13].
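In code, the shifted Box–Cox map and the derivative entering the Jacobian terms of Eqs. (2) and (3) read as follows (a direct transcription of the definition above; the function names are ours):

```python
import numpy as np

def box_cox(x, lam, a):
    """Shifted Box-Cox map; requires x + a > 0."""
    x = np.asarray(x, dtype=float)
    if np.any(x + a <= 0):
        raise ValueError("Box-Cox requires x + a > 0")
    return np.log(x + a) if np.isclose(lam, 0.0) else ((x + a) ** lam - 1.0) / lam

def box_cox_log_deriv(x, lam, a):
    """ln|dy/dx| = (lam - 1) * ln(x + a); enters the Jacobian term of Eq. (3)."""
    return (lam - 1.0) * np.log(np.asarray(x, dtype=float) + a)
```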
As an extension of the Box–Cox family, we propose the Arcsinh–Box–Cox transformation (‘ABC transformation’ hereafter):

The inclusion of the additional TP will prove particularly useful for removing residual kurtosis from an MP distribution. As for the Box–Cox family, a particular choice of the TPs recovers the identity transformation.
The Box–Cox family does not form a group, because two subsequent transformations cannot be expressed as another Box–Cox transformation; the same holds for the ABC family. This will be of importance for Sect. 2.4.
Box–Cox transformations demonstrate that the domain of the transformation – in particular its dependence on the shift parameter – requires special attention: for a given shift a, the map is only defined for x > -a; the same holds for ABC transformations. Thus, the optimisation procedure requires that the shift parameter for each model parameter is bounded from below, such that the entire sample lies within the domain. Conversely, this means that, once the optimal transformation parameters are found and inserted into the analytic expression for the original posterior density, Eq. (2), it is not defined for every possible value of the MP, but only where the transformation exists. This also means that the normalisation needs to be adjusted, which can be done analytically. However, if the sample is large enough that the tails of the distributions are properly represented, this truncation of the domain is not problematic.

2.3 Verifying the optimal transformation

Once the optimal transformation within its family is found, how do we judge the effectiveness of the resulting Gaussianisation? We adopt the following pragmatic standpoint: if the analytic posterior manages to reproduce the one-dimensional and two-dimensional marginalised contours of the sample, it is deemed acceptable. To this end, we propose a test via a cross-contour (CC) plot. The idea is to characterise a probability density by the location of its contours, i.e. the surfaces of constant density, and the probability mass stored inside, i.e. the integral of the density over the interior of a contour. If two densities f and g are identical, then they will store the same mass in any region of the parameter space; if they are different, we expect to find different probabilities for the same regions (e.g. the regions bounded by contours of f). Thus, looking at the family of contour-bounded regions of f, we can ask: does the probability for these, assigned via g, agree with the probability for them assigned via f?
To formalise this, consider the following: given a probability density f in d dimensions, which takes function values between 0 and f_max, we define the contour-bounded region assigned to the density value v as

C_f(v) = \left\{\, x \in \mathbb{R}^{d} : f(x) \geq v \,\right\} .

The probability mass enclosed in any of these regions is

M_f(v) = \int_{C_f(v)} f(x)\,{\rm d}^{d}x .

Now, assuming we have two probability densities f and g in d dimensions, do the contours of f reproduce those of g? They do in the relevant sense if, for every v between 0 and f_max, the g-mass enclosed in the v-contour of f equals the f-mass in this contour, i.e.

\int_{C_f(v)} g(x)\,{\rm d}^{d}x = M_f(v) . \qquad (5)
It should be noted that this alone is not a sufficient condition for f = g, but the counterexamples, which can be constructed mathematically, are non-generic and can be neglected for our purposes.
To detect deviations between the contours of f and g, we could simply plot the left and the right side of Eq. (5) for a grid of v-values between 0 and f_max, and compare the points to the line of equality. For concrete problems, it is often more instructive to subtract the right side from the left side, and plot the excess (or deficit) probability mass of g inside the contours of f. If, in this plot, the excess for every contour is consistent with zero, we have succeeded.
In our situation, we compare a point sample X with a probability density function f – the analytic posterior density as reconstructed via Gaussianisation. The right side of Eq. (5) is the probability mass in the region where the density f is greater than or equal to v; the left side is the fraction of the point sample which lies in the same region. So, for every value v in the range of f, we find the probability mass enclosed in C_f(v) by gridding over a region containing the sample. Similarly, we count the number of points of X at which the value of f exceeds v, to compute the fraction of points that lie within C_f(v). This fraction is an estimator of the actual probability mass enclosed, because X is a discrete sample from the actual posterior distribution. To find the variance of this estimator, we calculate the fraction on 2,000 bootstrap realisations of X, and determine the 95%-confidence intervals from these. If, for every v, the analytic posterior probability mass inside C_f(v) is within this confidence interval for the sample point fraction within C_f(v), we judge our reconstruction attempt to be successful.
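A simplified sketch of such a CC check (an assumed implementation; `pdf` is a vectorised analytic posterior and `grid_axes` a list of uniformly spaced coordinate arrays covering the sample) could look as follows:

```python
import numpy as np

def cc_curve(sample, weights, pdf, grid_axes, levels=20, n_boot=2000, seed=0):
    # probability mass of the analytic posterior inside its own contours (gridded)
    mesh = np.meshgrid(*grid_axes, indexing='ij')
    dens = pdf(np.stack([m.ravel() for m in mesh], axis=-1)).reshape(mesh[0].shape)
    cell = np.prod([ax[1] - ax[0] for ax in grid_axes])        # uniform spacing assumed
    vals = np.linspace(0.0, dens.max(), levels, endpoint=False)[1:]
    mass_pdf = np.array([dens[dens >= v].sum() * cell for v in vals])

    # weighted fraction of sample points inside the same contours, with bootstrap band
    dens_pts = pdf(sample)
    rng = np.random.default_rng(seed)
    p = weights / weights.sum()
    frac, lo, hi = [], [], []
    for v in vals:
        inside = (dens_pts >= v).astype(float)
        frac.append(np.sum(p * inside))
        boot = [inside[rng.choice(len(sample), len(sample), p=p)].mean()
                for _ in range(n_boot)]
        lo.append(np.percentile(boot, 2.5))
        hi.append(np.percentile(boot, 97.5))
    return vals, mass_pdf, np.array(frac), np.array(lo), np.array(hi)
```

Plotting `frac - mass_pdf` against the contour level, together with the band spanned by `lo` and `hi`, yields the excess-mass version of the CC plot used in the figures below.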
It should be noted that poor MCMC sampling of the original target density will yield a poor representation of this density by our reconstructed density, Eq. (2). Any information about the distribution lost by undersampling cannot be regained. However, as demonstrated in Sect. 3, our method reproduces less biased contours than other standard methods of density estimation even in regions of low point density, i.e. where any density estimate must be an extrapolation. Hence, it can be used when the length of the input Markov chains is restricted by computational cost or file size.

2.4 Multi-pass transformations

If even the optimal Gaussianising transformation amongst a given family does not bring the posterior density sufficiently close to a Gaussian shape (e.g., as determined via a CC plot), we have two options. We can provide a different family of transformations and redo the optimisation; or we can repeat the process on the sample after the first transformation. As already mentioned in Sect. 2.2, the transformation families employed in this work do not form groups. Hence, two subsequent transformations do not result in another transformation from that family, and transforming twice potentially provides a better Gaussianisation than transforming once. In principle, it is possible to apply multiple subsequent transformations, should the quality of the result necessitate it.
In this spirit, we have implemented the following two-pass transformation protocol:

  1. Optimise the TPs of the first transformation.

  2. Linear reshaping: centring, rescaling, rotating.

  3. Optimise the TPs of the second transformation.

Strictly speaking, this transformation, whilst being bijective, no longer falls into the class as set up in Eq. (1), as different model parameters are mixed. Nonetheless, Eq. (2) for the analytic posterior density generalises in a straightforward way.
In the second step, the sample after the first Gaussianising transformation is subjected to the following maps (in this order): subtract the sample mean from every parameter, so that the sample is centred on the origin. Then, rescale every parameter such that the standard deviation is unity. Finally, rotate into the eigenbasis of the covariance matrix – this procedure is generally known as principal component analysis (PCA). These reshaping operations not only help to avoid numerical instabilities (centring, rescaling), but also open up new directions for Gaussianisation by presenting uncorrelated parameters to the second Gaussianising transformation, since the transformations defined in Eq. (1) cannot mix parameters. If two parameters have substantial covariance after step 1, it can be crucial to decorrelate them.
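A plausible sketch of this linear reshaping step (our own illustration; the returned tuple is what would be needed to chain the map into the full bijection and its Jacobian) is:

```python
import numpy as np

def linear_reshape(y, weights):
    """Centre, rescale to unit standard deviation, rotate into the PCA eigenbasis."""
    mean = np.average(y, axis=0, weights=weights)
    z = y - mean
    std = np.sqrt(np.average(z ** 2, axis=0, weights=weights))
    z = z / std
    cov = np.cov(z.T, aweights=weights)
    eigval, eigvec = np.linalg.eigh(cov)        # orthonormal eigenbasis of the covariance
    z_rot = z @ eigvec                          # decorrelated input for the second pass
    # the Jacobian of this map is the constant 1/prod(std); the rotation has |det| = 1
    return z_rot, (mean, std, eigvec)
```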
Nevertheless, a price is to be paid for the Gaussianising power added with Step 2: it sacrifices a decisive property of the simple one-step transformation routine, namely that every transformed MP depends on only a single untransformed MP. This property allows for easy marginalisation of the analytic posterior: we can marginalise the Gaussianised sample by dropping all coordinates we wish to marginalise out and determining the mean vector and covariance matrix of the remaining ones. Transforming this marginalised Gaussian density back will then yield the marginalised posterior density on the untransformed MPs. However, with linear reshaping included, this is no longer possible.
This may be problematic for some applications (such as visualisation of 1D or 2D marginal distributions, or creating a CC plot), but not for others – as long as we need only the marginal distribution of a single combination of parameters, we can marginalise by discarding all MP columns of the sample except the ones in question, prior to Gaussianising.
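For the one-pass case, the marginalisation described above reduces to a few lines (a sketch under the component-wise assumption; names are ours):

```python
import numpy as np

def marginal_moments(y, weights, keep):
    """y: Gaussianised sample of shape (N, d); keep: indices of the MPs to retain.
    The returned moments, combined with the kept components' transforms,
    define the marginal analytic posterior via Eq. (2)."""
    yk = y[:, keep]
    mu = np.average(yk, axis=0, weights=weights)
    cov = np.atleast_2d(np.cov(yk.T, aweights=weights))
    return mu, cov
```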

Figure 2: Bivariate sample before (top) and after Gaussianisation (bottom). We show the two-dimensional sample and its 1D marginals (red), and compare to the reconstructed analytic posterior density (black): the full 2D contours, and its 1D marginal distributions.

3 A toy example

We illustrate these ideas on a two-dimensional example. We draw a sample of 10,000 points from a bivariate Gaussian distribution, and map it through an inverse Box–Cox transformation with known input TP values (see Table 1). All weights are set to unity.
This mock data sample has the advantage that there is at least one Box–Cox transformation which precisely Gaussianises the underlying probability distribution. Figure 2 shows the original sample, and the one transformed with the one-pass Box–Cox transformation which was found to be optimally Gaussianising, i.e. maximising the log-likelihood in Eq. (3). As this is a comparably simple problem, we have set the penalty term of Sect. 2.1 to zero. The Nelder–Mead algorithm was started sixteen times independently with randomised initial conditions. The values of the recovered optimal TPs are shown in Table 1; the standard deviation amongst these sixteen values is negligible, so multiple Nelder–Mead runs are not necessary in this low-dimensional example: all of them find the same maximum of the log-likelihood. In high-dimensional cases, however, this strategy can increase the robustness of the procedure.
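The construction of such a mock sample is straightforward (an illustrative sketch only; the Gaussian core and the TP values below are placeholders, not the entries of Table 1):

```python
import numpy as np

def inverse_box_cox(y, lam, a):
    """Inverse of the shifted Box-Cox map: x = (lam*y + 1)^(1/lam) - a."""
    return np.exp(y) - a if np.isclose(lam, 0.0) else (lam * y + 1.0) ** (1.0 / lam) - a

rng = np.random.default_rng(42)
mean = np.array([5.0, 5.0])                                   # assumed Gaussian core
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
gauss = rng.multivariate_normal(mean, cov, size=10_000)
tps = [(2.0, 3.0), (0.4, 4.0)]                                # placeholder (lam, a) per MP
sample = np.column_stack([inverse_box_cox(gauss[:, i], *tps[i]) for i in range(2)])
weights = np.ones(len(sample))
```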

Figure 4: Profile log-likelihood for the transformation parameters relating to X_1 (top), and to X_2 (bottom). The red star shows the input values, the black cross the recovered values, as projected onto the plane. The degeneracies between different transformation parameters are apparent.

The apparent difference between the parameters of the single inverse Box–Cox transformation and the values found by the Box–Cox optimisation is due to degeneracies in parameter space. To illustrate these, we show in Figure 4 the profile likelihood for the TPs of one model parameter while those of the other are held fixed at their input values, and vice versa. The TPs found by the optimisation algorithm (black crosses – note that they are projected onto the plane for which the profile likelihood is shown) are degenerate with the input ones (red star). Both Box–Cox transformations map the distribution to sufficiently Gaussian form.

Figure 6: Comparison of the analytic posterior density, as found via Gaussianisation (dark gray), to kernel density estimation (light gray). The top panel shows the 2D contours of each density estimation method in relation to the original sample (red dots). The bottom panel (CC plots; top: Box–Cox, bottom: KDE) compares the respective contours of each function to the original sample: we determine the fraction of the point sample located inside one probability contour, and plot the excess of this fraction over the probability density mass for that same contour. The band shows the 95%-variance in the point fraction due to sampling.
Table 1: Optimally Gaussianising parameters for the distribution in Fig. 2, as found with a one-pass Box–Cox transformation.
Parameter input value recovered value
2
0.4
3
4

We compare our method of reconstructing an analytic posterior density from an MCMC sample with the standard nonparametric method, kernel density estimation (KDE), which also aims to find a functional form for the probability density. The contours of the posterior density from Gaussianisation are shown jointly with those from KDE: the latter employ a Gaussian kernel, whose covariance matrix is estimated from the sample, and Silverman's rule ([29]) has been used to determine the bandwidth parameter. No additional smoothing has been applied in Figure 6, top panel. The bottom panel shows the excess cross-contour probability masses between analytic posterior and sample, and between KDE and sample respectively, as detailed in Sect. 2.3. Whereas the Box–Cox posterior is consistent with the sample distribution for every single contour, the KDE contours show a strong bias – the contours are wider than they should be. Given that the precision of the contour reconstruction is, for the Box–Cox method, limited only by the finite size of the sample, it has the potential to perform better than the (biased) kernel density method. Additionally, for applications in which frequent calls of the posterior density are a bottleneck for computation speed, our method of density reconstruction can be advantageous: the additional initial cost of finding the transformation parameters can be outweighed by the subsequent evaluation speedup.
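The KDE baseline can be reproduced with scipy (a sketch of the comparison, not the exact setup behind the figure):

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_posterior(sample, weights=None):
    """Gaussian-kernel KDE with Silverman's rule for the bandwidth;
    gaussian_kde expects the data with shape (d, N)."""
    kde = gaussian_kde(sample.T, bw_method='silverman', weights=weights)
    return lambda x: kde(np.atleast_2d(x).T)

# kde_pdf = kde_posterior(sample, weights)  # evaluate on the same grid as the analytic posterior
```

Evaluating this estimate on the same grid as the analytic posterior yields the contours and the CC plot of Sect. 2.3 for the KDE case.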

4 Performance results: data

Figure 8: One-pass Gaussianisation of a triangular-shaped non-Gaussian feature in a 2D marginal Planck posterior via ABC transformation. Left: original sample (red dots) and reconstructed analytic posterior (black contours). Right: the CC plot shows that for every contour of the analytic posterior, the probability mass inside (white line) equals the fraction of the point sample inside, within its 95%-confidence interval (green band).
Figure 9: Two-pass Gaussianisation of a non-Gaussian model parameter degeneracy in a 2D marginal Planck posterior via ABC transformation, explicitly showing the protocol described in Sect. 2.4: (\theta_1, \theta_2) are the parameters after the first transformation; (\pi_1, \pi_2) are the coordinates after rotation into the PCA eigenbasis of the centred and rescaled (\theta_1, \theta_2)-sample, which are finally transformed again. (\Delta_1, \Delta_2) designate the transformation parameters of the first, (\Lambda_1, \Lambda_2) those of the second ABC transformation. Note how crucial the intermediate PCA step is to achieve Gaussianity.

To demonstrate how the algorithm works on real data, we have employed MCMC samples from the first data release of the Planck mission (see [26]). This satellite has measured the temperature and polarisation anisotropies in the CMB, whose power spectra are sensitive measures of the underlying cosmology. The Planck Collaboration has published several data products (see footnote 2), including MCMC samples from the posterior probability densities of various cosmological models, generated with CosmoMC (see [20], also: http://cosmologist.info/cosmomc).
The baseline cosmology is the standard model of a flat Universe with cold dark matter and a cosmological constant, commonly known as ΛCDM. It contains six parameters: Ω_b h² (today's baryon density), Ω_c h² (today's cold dark matter density), θ_MC (scaled sound horizon), τ (reionisation optical depth), n_s (spectral index of primordial scalar perturbations), and ln(10^10 A_s) (log power amplitude of primordial scalar perturbations). Several extensions of this baseline model are also listed, including those obtained by adding one of the following parameters: Ω_K (curvature parameter), w (dark energy equation of state), r (primordial tensor-to-scalar amplitude ratio), and N_eff (effective number of relativistic degrees of freedom). Further, these chains list derived quantities, e.g. today's Hubble parameter H_0 and the age of the Universe, and a variety of foreground modelling parameters, such as A^PS and A^CIB, modelling the amplitudes of Poisson point sources and of the cosmic infrared background in the 100, 143, and 217 GHz frequency bands. These are of particular interest to us, as the most prominent non-Gaussian features of the posterior densities can be seen in them.

Figure 10: Two-pass Gaussianisation of a wall-like non-Gaussian feature in a 2D marginal Planck posterior via ABC transformation; in the same format as Fig. 9, but with the transformation parameters (\Delta_1, \Delta_2) and (\Lambda_1, \Lambda_2) that have been found for this sample. It is apparent that after the first transformation, the parameter \theta_1 still exhibits residual kurtosis.
Figure 12: One-pass Gaussianisation of a 2D marginal Planck posterior via ABC transformation. Left: original sample and contours of the analytic posterior. Top right: CC plot for the two-pass Gaussianisation in Fig. 10. Bottom right: CC plot for the one-pass Gaussianisation (see left panel), showing deviations of the cross-contour masses.

The chains, as presented, are not decorrelated, so we thin them by using every 20th sample. We employ the `..._planck_lowl_...' chains, which use only the temperature–temperature correlations. The plots in this section are created using a seven-parameter extension of the baseline model; the sample contains 11,546 points after thinning.
All these Markov chains assume uniform proper prior densities (i.e. supported on compact rectangular boxes) and list the log-likelihood for every point (for further details, see [26]).
We show several 2D marginalised posterior samples exhibiting different non-Gaussian features, and how well they are reproduced by the analytic posterior of Eq. (2), such as triangular shapes (see Figure 8), pronounced non-linear degeneracies (see Figure 9), and sharp boundaries (‘walls’) arising from MP space boundaries (see Figure 10).
Figure 9 demonstrates the usefulness of the intermediate PCA in between the ABC transformations: the first transformation has straightened out the curved shape of the maximum, but the distribution still appears skewed towards the upper left direction (see top right panel). This is remedied by reshaping, PCA and another ABC transformation (bottom right panel) – the second Gaussianising transformation has only a small effect compared to the PCA.
The Gaussianisation of the distribution in Figure 10 (top left) shows how two concatenated transformations can be more powerful than a single one. The once-transformed sample still exhibits negative excess kurtosis, which is removed by the second transformation (bottom left to bottom right panel).
Further, we compare the CC plots of this two-pass transformation and the one-pass transformation in Figure 12, which also shows the resulting contours (left panel). The associated one-pass CC plot (bottom right) shows a significant deficit of point-sample mass compared to the analytic posterior mass over several ranges of contour levels; part of this deficit is visible between the 2- and 3-sigma contours close to the wall-like constraint. By contrast, the CC plot for the two-pass transformation (Fig. 12, top right) demonstrates good agreement between the contours of the analytic posterior and the point sample.
To demonstrate the algorithm working on a high-dimensional example, we Gaussianise a seven-dimensional Planck MCMC sample with an ABC transformation. In order to visualise the result, we show all one-dimensional and two-dimensional marginal distributions of the point sample and of the full analytic posterior density (see Figure 13). We employ one-pass transformations because, as discussed in Sect. 2.4, the marginalisation of the analytic posterior from 7D down to 2D or 1D would not be possible without explicit integration or sampling, had we chosen to use the two-pass protocol.

Figure 13: Reconstruction of a seven-dimensional Planck posterior density via a one-pass ABC transformation: 1D and 2D marginals. Black: marginal analytic posterior density (1D) or 1, 2, 3 \sigma contours. Red: marginal point sample distributions. For the 1D cases, the histograms have renormalised bar heights, to demonstrate the agreement with the value of the probability density.

5 Application: Fast evidence computation

To decide which of two models M_1 and M_2, each with their associated MP space, is more predictive on a common set of data D, the essential quantity to compute is the model evidence P(D|M_i), see [11]. The ratio of the evidences (called the Bayes factor) is then used for updating the prior model odds to posterior model odds in the Bayesian sense:

\frac{P(M_1|D)}{P(M_2|D)} = \frac{P(D|M_1)}{P(D|M_2)}\,\frac{P(M_1)}{P(M_2)} .

The evidence itself, for each model, can be computed via

E \equiv P(D|M) = \int \mathcal{L}(D|\theta, M)\,\pi(\theta|M)\;{\rm d}^{d}\theta ,

i.e. via integration of the (unnormalised) posterior density over the respective parameter space of model M – hence the term ‘marginal likelihood’ for E. If the unnormalised posterior takes a form with non-Gaussian features, and if the model parameter space is high-dimensional, this integral is often difficult to calculate. However, with a bijective transformation that Gaussianises it, we can compute the evidence integral analytically. If the transformed posterior has the shape of an (unnormalised) multivariate Gaussian

\tilde{P}(y) = P^{*}\exp\!\left[-\tfrac{1}{2}(y-\mu)^{\rm T}\Sigma^{-1}(y-\mu)\right] ,

with mean μ, covariance matrix Σ and maximum P^{*}, the log-evidence reads

\ln E = \ln P^{*} + \frac{d}{2}\ln(2\pi) + \frac{1}{2}\ln\det\Sigma . \qquad (6)
Similar expressions for Gaussian posterior densities can be found in [18]. To estimate P^{*}, we need the absolute normalisation of the posterior; hence this method can only be applied to samples which provide the values of the unnormalised log-posterior (possibly in the form of log-likelihood and log-prior). From these, we compute the values of ln P̃ on the optimally-Gaussianised sample by adding the logarithm of the transformation Jacobian, and then fit the parameters P^{*}, μ and Σ of the Gaussian via least-squares regression. This can be performed analytically, and even be used to compute an error bar on the value of ln E – see Appendix A for details.

If the prior distribution for one MP, and hence the posterior, is supported only on a finite interval, the same will hold true for the transformed MP if we restrict ourselves to one-pass transformations. If the sample size is large enough to properly represent the cutoff, the Gaussianising transformation will alleviate this feature, but may not fully remove it. Assuming the marginal distribution to be Gaussian, when in reality we may be dealing with a truncated Gaussian, will lead to a systematic error in the evidence, so it is advantageous to remove these features before starting the search for optimally Gaussianising TPs. Appendix B details ‘unboxing transformations’, which redefine the MPs, mapping a finite open interval to the entire real line. In fact, it is also possible to use them for posterior density reconstruction, before Step 1 of the protocol in Sect. 2.4.

To demonstrate this idea, we compute the evidence integral first on a mock data set, and subsequently on real data from cosmology. For the former, we draw a random sample of length 10,000 from a ten-dimensional log-normal probability distribution, and assign to each point the value of the probability density function, multiplied by a constant factor chosen such that the true log-evidence equals 5. All weights are set to unity. This mock sample is subjected first to the Gaussianisation procedure with one-pass ABC transformations (no unboxing), and then to the regression outlined in Appendix A to retrieve the best-fit estimate for ln E and its error bar. To verify these, we determine the distribution of the estimator in Eq. (6) by producing 1,000 bootstrap samples from the transformed sample, computing ln E on each of them, and extracting one-sigma confidence intervals from the resulting distribution. To increase the reliability of the optimally Gaussianising transformation, 24 independent Gaussianisation runs are started with randomised initial conditions; Figure 14 shows the results for these. Relative to the true value, the recovered log-evidences agree to sub-percent accuracy. It is also noteworthy that a lower local maximum of the likelihood, i.e. one further to the right in Figure 14, departs more from the true value; hence, a location in TP space that is merely close to, but not at, the exact optimum will already yield a biased value for the log-evidence. This indicates that the evidence is a sensitive indicator of departures from Gaussianity. Note that a log-normal distribution can be precisely Gaussianised with Box–Cox transformations – and thus also with ABC transformations, which are a superset of these. Further, it is noteworthy that our analytic procedure yields error bars of the right magnitude, yet somewhat more conservative, compared to their bootstrapped counterparts. This discrepancy arises because the bootstrapped distributions of the fitted Gaussian parameters deviate slightly from Gaussianity, whereas the analytic error bars assume Gaussian error propagation (see Appendix A for details).
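A compact sketch of these two steps (an assumed implementation; the regression producing ln P^{*}, μ and Σ is detailed in Appendix A) is:

```python
# Hedged sketch of the evidence step in Sect. 5; names are ours, not the authors'.
import numpy as np

def transformed_log_posterior(log_post_x, log_jac_terms):
    """log_post_x: ln(likelihood * prior) at the original sample points;
    log_jac_terms: per-MP arrays ln|dy_i/dx_i| evaluated at the same points.
    The density in the Gaussianised parameters picks up the inverse Jacobian."""
    return log_post_x - sum(log_jac_terms)

def log_evidence_gaussian(ln_pstar, cov):
    """Eq. (6): log-evidence of an unnormalised Gaussian with peak value P*."""
    d = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    return ln_pstar + 0.5 * d * np.log(2.0 * np.pi) + 0.5 * logdet
```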

Figure 14: Ratio of log-evidence values for the mock data set drawn from a 10-dimensional log-normal distribution, computed after 24 independent Gaussianisation runs, to the true value, versus the negative log-likelihood of each run (including the penalty term). Values and error bars shown are from the analytic regression, as detailed in Appendix A (red), and the means and one-sigma confidence intervals from the bootstrapped distributions (black). The black line indicates the true value. The inset is a zoomed-in version of the lower left corner.


For a demonstration on real-world data, we Gaussianise the joint posterior distribution of MPs of data from weak lensing and baryon acoustic oscillations. The weak lensing data set is the 2D cosmic shear data taken by the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS; see [9]). The CFHTLenS survey analysis combined weak lensing data processing with THELI ([7]), shear measurement with lensfit ([22]), and photometric redshift measurement with PSF-matched photometry ([10]). A full systematic error analysis of the shear measurements in combination with the photometric redshifts is presented in [9], with additional error analyses of the photometric redshift measurements presented in [2].
The BAO data set is the Data Release 9 (DR9) CMASS sample from the Baryon Oscillation Spectroscopic Survey (BOSS), which is part of the Sloan Digital Sky Survey III (SDSS-III) – see [1]. This contains 264,283 massive galaxies in the redshift range 0.43 < z < 0.7, whose correlation function and power spectrum both exhibit the features of baryon acoustic oscillations. The quantity r_s(z_d)/D_V(z), i.e. the ratio of the comoving sound horizon at the baryon drag epoch to the spherically volume-averaged distance D_V(z), is a probe of the underlying cosmological parameters – see [24] for details.
To draw samples from the posterior distribution, we use the CosmoPMC software package (see footnote 3), which uses Population Monte Carlo (PMC), an algorithm to approximate the target distribution by a Gaussian mixture model. We compare three cosmological models: standard flat ΛCDM, curved ΛCDM, and flat wCDM. The first has a four-dimensional parameter space spanned by the matter density Ω_m, the power spectrum normalisation σ_8, the baryon density Ω_b, and the normalised Hubble parameter h – all other parameters are set to their best-fit values for flat ΛCDM, see [27]. The latter two contain a fifth model parameter each – the curvature parameter Ω_K and the constant dark energy equation-of-state parameter w, respectively. For all of these parameters, flat proper priors were chosen. The baseline model, flat ΛCDM, is always referred to as model 1, whereas model 2 is one of the two extensions. As a by-product of the sampling process, PMC provides the model evidence for the data set used – see [16] for further details.

Table 2: Values for evidence and Bayes factor for the CFHTLenS+BOSS data set in three cosmological models, as computed with Box–Cox Gaussianisation (G) of weighted samples with 10,000 points each. For comparison: evidence value and Bayes factor from Population Monte Carlo (PMC), and Bayes factor from the Savage–Dickey density ratio (SDDR).

                    flat ΛCDM        curved ΛCDM      flat wCDM
dimension           4                5                5
ln E (G)            486.96 ± 0.01    485.79 ± 0.03    486.09 ± 0.05
ln E (PMC)          487.02 ± 0.03    485.84 ± 0.01    486.00 ± 0.04
ln B_12 (G)         n.a.             1.17 ± 0.04      0.87 ± 0.05
ln B_12 (SDDR)      n.a.             1.23 ± 0.04      0.93 ± 0.06
ln B_12 (PMC)       n.a.             1.19 ± 0.04      1.03 ± 0.05

In the special situation where one model is nested inside the other, the evidence ratio can be computed via the Savage–Dickey density ratio (SDDR) – see [6], and citations therein. Under mild conditions on the prior and posterior densities of the full model M_2 and the submodel M_1, the ratio can be derived to be

B_{12} = \frac{p(\psi = \psi_{*}\,|\,D, M_2)}{\pi(\psi = \psi_{*}\,|\,M_2)} ,

where ψ denotes the extra parameter (or parameters) contained in M_2 but not in M_1, ψ_{*} is the value of ψ that specifies the submodel M_1, and p and π are the posterior and prior densities of the full model, marginalised over all model parameters but ψ (see [32] for details).
We find that the log-evidence values computed by CosmoPMC need to be offset by a multiple of the log-prior density, which depends on the number of data sets used. This is due to a non-standard interpretation of the prior density within CosmoPMC. Throughout this work, we apply this correction to the log-evidence values produced by CosmoPMC as well as to the log-posterior values extracted from the CosmoPMC output. We follow the practice of [16] of accepting a CosmoPMC run as soon as the built-in convergence diagnostic, called perplexity, exceeds a preset threshold. Sampling to even higher values of the perplexity, up to values close to unity, still changes the CosmoPMC value for the log-evidence by as much as 0.1 – this indicates a residual bias in the statistic. However, since the exact same offset has to appear in the CosmoPMC output values for the log-evidence and for the non-normalised log-posterior, it is not of relevance for demonstrating our method, so investigating its origin is beyond the scope of this work.
Table 2 shows the log-evidences for the three models, and the Bayes factors of ΛCDM compared to either of the two extended models. The numbers in the rows labelled (G) were computed via one-pass Gaussianisation with ABC transformations, preceded by an unboxing transformation. To estimate the scatter of the CosmoPMC and SDDR values for the log-evidence and the Bayes factor in the remaining rows, we rerun CosmoPMC ten times for each model, and determine the mean and standard deviation of the CosmoPMC and SDDR estimators. As for the log-normal sample, 24 independent Gaussianisation runs were started for each sample, and the one with the highest log-likelihood value was chosen to transform the sample, which is then subjected to the analytic evidence computation procedure. The (G) values in Table 2 are the weighted averages and standard deviations over all ten samples, where the weights are taken as the inverse variances from the analytic error bars (cf. Appendix A). The values show that the combined data favour ΛCDM over either of the extended models, although the evidence is not strong against either of the two. Our values agree with the numbers from SDDR and PMC within the spread between those two estimators, but small deviations remain which are larger than the quoted error bars. These may be due to residual non-Gaussianity in the transformed samples, to which the evidence is a sensitive measure.

6 Conclusions and outlook

There are several possible extensions of our method, and directions in which to advance its scope.

Acknowledgements

RLS is grateful to Martin Kilbinger, Karim Benabed, Catherine Heymans and Stephen Feeney for helpful discussions, and has been supported by the Perren Fund in Astronomy, and by IMPACT (UCL MAPS faculty); BJ acknowledges support by an STFC Ernest Rutherford Fellowship, grant reference ST/J004421/1; HVP was supported by the European Research Council under the European Community’s Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no.306478-CosmicDawn. This work is based on observations obtained with Planck (http://www.esa.int/Planck), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada. In part, this work was supported by National Science Foundation Grant No. PHYS-1066293 and the hospitality of the Aspen Center for Physics.


A Analytic evidence computation

We outline how the computation of the log-evidence (see Eq. (6)) can be performed analytically, i.e. without numerical optimisation. Our data consist of a Gaussianised weighted sample of points y_k in R^d, and the values of the transformed log-posterior at each of these points, ln P̃(y_k). To fit a multivariate unnormalised Gaussian

\tilde{P}(y) = P^{*}\exp\!\left[-\tfrac{1}{2}(y-\mu)^{\rm T}\Sigma^{-1}(y-\mu)\right] \qquad (7)

through the values of ln P̃(y_k), we use the regression model

\ln\tilde{P}(y) = -\tfrac{1}{2}\,y^{\rm T}A\,y + b^{\rm T}y + c ,

which is linear in each of the regression parameters: the upper-diagonal components of the symmetric matrix A, the components of the vector b, and the scalar c. Assuming independence and homoscedasticity, we arrive at our quantity to minimise,

\chi^{2}(A,b,c) = \sum_k w_k\left[\ln\tilde{P}(y_k) + \tfrac{1}{2}\,y_k^{\rm T}A\,y_k - b^{\rm T}y_k - c\right]^{2} ,

which is quadratic in every regression parameter. Thus, we can write all normal equations of the regression problem,

\frac{\partial\chi^{2}}{\partial A_{ij}} = 0, \qquad \frac{\partial\chi^{2}}{\partial b_i} = 0, \qquad \frac{\partial\chi^{2}}{\partial c} = 0 ,

as a [d(d+1)/2 + d + 1]-dimensional linear inhomogeneous vector equation, and solve it via singular value decomposition. From the resulting values of (A, b, c), the parameters of the multivariate Gaussian (Eq. 7) can readily be computed as

\Sigma = A^{-1}, \qquad \mu = A^{-1}b, \qquad \ln P^{*} = c + \tfrac{1}{2}\,b^{\rm T}A^{-1}b .
Furthermore, we can use the analytic regression procedure to find error bars on these estimators, and thus on ln E. To this end, we can analytically find the covariance matrix of all regression parameters from the form of χ² and the transformed data set, and then approximate the variances of ln P^{*} and of ln det Σ, and hence of the log-evidence in Eq. (6), by standard Gaussian error propagation.
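A possible implementation of this regression (a sketch under the stated assumptions, using numpy's SVD-based least-squares solver; names are ours) is:

```python
import numpy as np

def fit_unnormalised_gaussian(y, log_post, weights=None):
    """Fit ln P = -y^T A y / 2 + b^T y + c by (weighted) linear least squares
    and return (ln P*, mu, Sigma) as in Appendix A."""
    N, d = y.shape
    iu, ju = np.triu_indices(d)
    X = np.hstack([y[:, iu] * y[:, ju], y, np.ones((N, 1))])   # quadratic, linear, constant terms
    target = np.asarray(log_post, dtype=float)
    if weights is not None:
        sw = np.sqrt(weights)                                  # weighted least squares
        X, target = X * sw[:, None], target * sw
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)          # SVD-based solution

    q, b, c = coef[:len(iu)], coef[len(iu):len(iu) + d], coef[-1]
    A = np.zeros((d, d))
    A[iu, ju] = -q
    A = A + A.T                          # symmetrise; the diagonal is doubled, as required
    cov = np.linalg.inv(A)               # Sigma = A^{-1}
    mu = cov @ b                         # mu = A^{-1} b
    ln_pstar = c + 0.5 * b @ cov @ b     # ln P* = c + b^T A^{-1} b / 2
    return ln_pstar, mu, cov             # insert into Eq. (6) for the log-evidence
```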

B Unboxing transformations

A single MP x, which is assumed to be constrained to an open interval (a, b), is redefined via the unboxing transformation

u(x) = \frac{a+b}{2} + \frac{b-a}{\sqrt{2\pi}}\,\Phi^{-1}\!\left(\frac{x-a}{b-a}\right) ,

where Φ^{-1} denotes the inverse of the cumulative distribution function of the standard normal distribution,

\Phi(z) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z} e^{-t^{2}/2}\,{\rm d}t .

The map u, thus designed, has the following properties: it is bijective and smooth; the limits are u(x) → −∞ for x → a and u(x) → +∞ for x → b; and the midpoint of the interval is a fixed point, u((a+b)/2) = (a+b)/2. Sending the interval boundaries to infinity simultaneously results in the identity transformation,

\lim_{a\to-\infty,\;b\to+\infty} u(x) = x .

This is a generalisation of the widely used probit transformation, which maps the unit interval onto the real numbers as p ↦ Φ^{-1}(p). Our modified probit has one huge advantage for the subsequent search for a Gaussianising transformation: if x, as a random variable, is uniformly distributed on (a, b), then u(x) is normally distributed, with mean equal to the interval midpoint and spread proportional to the interval width. In statistics, a frequently used alternative to the probit is the logit map p ↦ ln[p/(1−p)]. For our purposes, however, the probit is preferable, since a similarly rescaled version of the logit transformation would yield a distribution with excess kurtosis, instead of a Gaussian. For a d-dimensional vector of MPs, each constrained to an interval (a_i, b_i), we unbox each dimension separately, with the appropriate boundaries:

u_i(x_i) = \frac{a_i+b_i}{2} + \frac{b_i-a_i}{\sqrt{2\pi}}\,\Phi^{-1}\!\left(\frac{x_i-a_i}{b_i-a_i}\right) , \qquad i = 1,\dots,d .
Before starting the search for the Gaussianisation parameters, every point in the original sample is mapped through this transformation.
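In code, the unboxing map and its inverse could read as follows (a sketch implementing the construction above, whose normalisation is our reconstruction from the stated properties rather than the authors' exact definition):

```python
import numpy as np
from scipy.stats import norm

def unbox(x, a, b):
    """Map the open interval (a, b) onto the real line via a rescaled probit."""
    u = (np.asarray(x, dtype=float) - a) / (b - a)
    return 0.5 * (a + b) + (b - a) / np.sqrt(2.0 * np.pi) * norm.ppf(u)

def box(z, a, b):
    """Inverse map, needed when transforming the reconstructed posterior back."""
    return a + (b - a) * norm.cdf((z - 0.5 * (a + b)) * np.sqrt(2.0 * np.pi) / (b - a))
```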

Footnotes

  1. See [26]. For the Markov chains see http://www.cosmos.esa.int/web/planck/pla; consult http://lambda.gsfc.nasa.gov for an eclectic list of data combinations in various cosmological models.
  2. See http://www.cosmos.esa.int/web/planck/pla.
  3. See http://www2.iap.fr/users/kilbinge/CosmoPMC/.

References

  1. Anderson L. et al., 2012, MNRAS, 427, 4, 3435-3467 (arXiv:1203.6594)
  2. Benjamin J. et al., 2013, MNRAS, 431, 2, 1547-1564 (arXiv:1212.3327)
  3. Box G.E.P., Cox D.R., 1964, J. R. Statist. Soc. B 26/2, 211-252
  4. Černý V., 1985, J. Optimization Theory and Applications 45/1, 41-51
  5. Chu M., Kaplinghat M., Knox L., 2003, ApJ, 596, 725 (arXiv:astro-ph/0212466)
  6. Dickey J., 1971, Ann. Math. Statist., 42, 204-223
  7. Erben T. et al., 2013, MNRAS, 433, 3, 2545-2563 (arXiv:1210.8156)
  8. Fendt W.A., Wandelt B.D., 2007, ApJ, 654, 2 (arXiv:astro-ph/0606709)
  9. Heymans C. et al., 2012, MNRAS, 427, 146-166 (arXiv:1210.0032)
  10. Hildebrandt H. et al., 2012, MNRAS, 421, 3, 2355-2367 (arXiv:1111.4434)
  11. Jaynes E.T., 2003, Probability Theory: The Logic of Science, Cambridge University Press, Cambridge
  12. Jimenez R., Verde L., Peiris H.V., 2004, Phys. Rev. D, 70, 023005 (arXiv:astro-ph/0404237)
  13. Joachimi B., Taylor A., 2011, MNRAS, 416, 2, 1010-1022 (arXiv:1103.3370)
  14. Kahn H., Marshall A.W., 1953, J. Operations Res. Soc. Am., 1, 5, 263-278
  15. Kass R., Raftery A.E., 1995, JASA , 90, 430, 779-795
  16. Kilbinger M. et al., 2010, MNRAS, 405, 4, 2381-2390 (arXiv:0912.1614)
  17. Kilbinger M. et al., 2013, MNRAS, 430, 3, 2200-2220 (arXiv:1212.3338)
  18. Kitching T., Taylor A., 2010, MNRAS, 408, 2, 865-875 (arXiv:1003.1136)
  19. Kosowsky A., Milosavljevic M., Jimenez R., 2002, Phys. Rev. D, 66, 063007 (arXiv:astro-ph/0206014v1)
  20. Lewis A., Bridle S., 2002, Phys. Rev. D, 66, 103511 (arXiv:astro-ph/0205436v3)
  21. MacKay D.J.C., 2003, Information Theory, Inference, and Learning Algorithms, Cambridge University Press, Cambridge
  22. Miller L. et al., 2013, MNRAS, 429, 4, 2858-2880 (arXiv:1210.8201)
  23. Nelder J.A., Mead R., 1965, Comput. J., 7, 308–313
  24. Percival W.J. et al., 2007, MNRAS, 381, 3, 1053-1066 (arXiv:0705.3323)
  25. Powell M.J.D., 2007, IMA J. Numer. Analysis, 28, 4, 649-664.
  26. Planck Collaboration (Ade P.A.R. et al.), 2014, A&A, 571, A16 (arXiv:1311.1657)
  27. Planck Collaboration (Ade P.A.R. et al.), 2015, A&A, in press (arXiv:1502.01589)
  28. Sandvik H., Tegmark M., Wang X., Zaldarriaga M., 2004, Phys. Rev. D, 69, 063005 (arXiv:astro-ph/0311544v2)
  29. Silverman B.W., 1998, Density Estimation for Statistics and Data Analysis, London, Chapman & Hall/CRC
  30. Skilling J., 2006, Bayesian Analysis 1 no.4, 833-860
  31. Velilla S., 1993, Statistics & Probability Letters 17, 259
  32. Verde L., Feeney S.M., Mortlock D.J., Peiris H.V., 2013, JCAP09(2013)013 (arXiv:1307.2904)