Gaussianisation for fast and accurate inference from cosmological data
Abstract
We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box–Cox transformations and generalisations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in MCMC samples. Further, the model evidence integral (i.e. the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianising transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianise the joint distribution of data from weak lensing and baryon acoustic oscillations (BAO), for different cosmological models, and find a preference for flat ΛCDM. Comparing to values computed with the Savage–Dickey density ratio, and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
keywords:
methods: data analysis, methods: numerical, methods: statistical, cosmology: observations

1 Introduction
According to the Bayesian paradigm, inference from any data set yields a posterior probability distribution on the space of model parameters. This density function represents, in its entirety, the full knowledge gained in the attempt to infer the underlying parameters. Such distributions often depart significantly from a Gaussian form. This has led to the widespread use of Monte Carlo sampling methods to report the typically non-Gaussian posterior constraints obtained from experiments such as Planck (see Planck XVI 2014; Planck XIII 2015; for the Markov chains see http://www.cosmos.esa.int/web/planck/pla, and consult http://lambda.gsfc.nasa.gov for an eclectic list of data combinations in various cosmological models). Reconstructing the posterior density from such a Markov Chain Monte Carlo (MCMC) sample, e.g. to visualise the multivariate parameter constraints or to combine the constraints from multiple data sets, can be non-trivial due to the large sample size necessary to map the distribution appropriately; in addition, the contours often need further smoothing for stylistic reasons.
Instead, we propose to redefine the underlying model parameters, so that the new posterior density approximately takes a Gaussian shape after the transformation from old to new parameters; this presupposes that we begin with a unimodal posterior density. Such a transformation would allow for enormous data compression: instead of a full MCMC sample from the posterior distribution, we only need to report the Gaussianising transformation, and the first and second moments of the resulting Gaussian distribution. From these alone, we can reconstruct an analytic expression for the full nonGaussian posterior density, and subsequently combine it with other data sets.
Further, it becomes possible to display and compare non-ellipsoidally shaped contours of non-Gaussian parameter constraints – whether joint or marginalised – without any smoothing. Thus, this method allows for summarising posterior densities in a versatile and efficient way, which faithfully reproduces the information contained in the full probability density.
The idea of transforming a function to a Gaussian shape is, in principle, not limited to reproducing probability densities. As the integral over a Gaussian can be performed analytically, this opens up a strategy to feasibly compute high-dimensional integrals, such as the model evidence (i.e. the marginal likelihood).
The transformed model parameters are analogous to the normal parameters of the cosmic microwave background (CMB): it has been highly advantageous for rapid likelihood calculation (such as CMBfit, CMBwarp, and PICO, see Kosowsky, Milosavljevic & Jimenez 2002; Chu, Kaplinghat & Knox 2003; Sandvik et al. 2004; Jimenez, Verde & Peiris 2004; Fendt & Wandelt 2007), to redefine the cosmological model parameters such that the model is approximately linear in these newly defined normal parameters. Thus the likelihood approximately takes the form of a multivariate Gaussian density. For most observables, we would be at a loss to search for a linearising redefinition of the model parameter space directly motivated by the structure of the model itself. Instead, is it possible to computationally find suitable parameters, i.e. a suitable bijective transformation which approximately Gaussianises the posterior in question?
Extending the work of Joachimi & Taylor (2011), we present an algorithm to find and test such a nonlinear Gaussianising transformation from a Markov chain sampling the posterior distribution of the original parameters. In principle, this distribution could stem from any experiment or data type. In Sect. 2, we describe the details of the algorithm, verification of the reconstructed posterior distribution, and the specific transformations employed. Following an illustration of these on a toy example in Sect. 3, Sect. 4 demonstrates the performance of our implementation, using Markov chains from the Planck satellite constraints on cosmological models (Planck XVI, 2014; Planck XIII, 2015). In Sect. 5, we present an efficient way to calculate the model evidence, a quantity needed to judge the predictivity of different model parameter spaces, via Gaussianising transformations. Section 6 offers conclusions and suggests future directions.
2 Gaussianisation
To find the right multivariate transformation, we will at first adopt the strategy of redefining each model parameter separately, i.e. the first new model parameter will only depend on the first old model parameter, etc. In Sect. 2.4, we will drop this assumption and consider transformations which can correlate the model parameters.
The set of all multivariate Gaussianisation transformations, from which we are to pick the optimal one, is constructed in the following way: assume a family of bijective real-valued functions $f(\,\cdot\,;\mathbf{a})$, indexed by real transformation parameters $\mathbf{a}$. Given the $d$-dimensional vector of model parameters $\boldsymbol{\theta} = (\theta_1, \ldots, \theta_d)$, we transform to the new (Gaussian-distributed) parameters $\mathbf{x}$ via

$x_i = f(\theta_i; \mathbf{a}_i), \qquad i = 1, \ldots, d,$   (1)

where the full multivariate transformation is now specified by all transformation parameter tuples $\mathbf{a}_1, \ldots, \mathbf{a}_d$, i.e. one for each model parameter. To avoid confusion, we shall from now on distinguish between model parameters (MP), which the posterior probability density depends on, and transformation parameters (TP), which specify one Gaussianising transformation. The algorithm can be applied to arbitrary parametrised transformation families, suitable for various forms of non-Gaussianity – in principle, we could even choose a different transformation family for each model parameter, instead of using the same shape for all of them.

Assuming such a bijective transformation, we immediately have an analytic form for the posterior density

$p(\boldsymbol{\theta}) = \mathcal{N}\!\left( f(\boldsymbol{\theta});\, \boldsymbol{\mu}, \Sigma \right) \prod_{i=1}^{d} \left| \frac{\partial f(\theta_i; \mathbf{a}_i)}{\partial \theta_i} \right| .$   (2)

One still needs to find the mean vector $\boldsymbol{\mu}$ and the covariance matrix $\Sigma$ of the transformed posterior density $\mathcal{N}$. These are estimated from the transformed sample (see Sect. 2.1).
2.1 Finding the optimal transformation
Given a weighted point sample $\mathcal{S} = \{(\boldsymbol{\theta}_n, w_n)\}_{n=1}^{N}$, containing $N$ points in $\mathbb{R}^d$ and probability weights $w_n$, which has been sampled from the posterior distribution in question, we wish to quantify the Gaussianisation properties of different transformations applied to this sample. To this end, we follow Box & Cox (1964) (see also Velilla, 1993, and Joachimi & Taylor, 2011) in maximising a profile likelihood over TP space, i.e. one depending only on the real transformation parameters contained in $\mathbf{a}$. This likelihood is a function of the transformation parameters, quantifying how well each transformation Gaussianises the distribution of the sample $\mathcal{S}$; however, it does not pertain to the posterior density in Eq. (2), which is a function of the model parameters.

For the Gaussian parameters $\boldsymbol{\mu}$, $\Sigma$ in Eq. (2), we insert their standard debiased weighted maximum-likelihood estimators, which depend on the transformed sample $\mathbf{x}_n = f(\boldsymbol{\theta}_n; \mathbf{a})$:

$\hat{\boldsymbol{\mu}} = \sum_{n=1}^{N} \tilde{w}_n \mathbf{x}_n ,$   (3)

$\hat{\Sigma} = \frac{1}{1 - \sum_n \tilde{w}_n^2} \sum_{n=1}^{N} \tilde{w}_n (\mathbf{x}_n - \hat{\boldsymbol{\mu}})(\mathbf{x}_n - \hat{\boldsymbol{\mu}})^{\mathrm{T}} ,$   (4)

with normalised weights $\tilde{w}_n = w_n / \sum_m w_m$. These estimators depend on $\mathbf{a}$ indirectly, as they are computed after $\mathcal{S}$ has been transformed with $f(\,\cdot\,;\mathbf{a})$. We arrive at the profile weighted log-likelihood
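For illustration, the weighted estimators of Eqs (3) and (4) can be written compactly in NumPy (a sketch, not the implementation used in this work):

```python
import numpy as np

def weighted_mean_cov(x, w):
    """Debiased weighted estimators of mean and covariance.

    x : (N, d) array of (transformed) sample points
    w : (N,) array of probability weights
    """
    w = w / w.sum()                      # normalised weights
    mu = w @ x                           # weighted mean, shape (d,)
    diff = x - mu
    # Debiasing factor; reduces to (N-1)/N for equal weights,
    # recovering the usual unbiased sample covariance.
    denom = 1.0 - np.sum(w**2)
    cov = (diff.T * w) @ diff / denom
    return mu, cov
```

For equal weights this reproduces the familiar $1/(N-1)$ normalisation of the sample covariance.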
$\ln L(\mathbf{a}) = -\frac{1}{2} \ln \det \hat{\Sigma}(\mathbf{a}) + \sum_{n=1}^{N} \tilde{w}_n \ln \left| \det \frac{\partial \mathbf{x}}{\partial \boldsymbol{\theta}} (\boldsymbol{\theta}_n; \mathbf{a}) \right| ,$   (5)

where several terms independent of $\mathbf{a}$ have been discarded. In general, both the covariance matrix of the transformed sample and the Jacobian term will depend on the transformation parameters in a non-linear way, hence finding the maximum-likelihood values for the TPs will require numerical optimisation. For this purpose, we have employed the GSL implementation of the well-known Nelder–Mead simplex algorithm (Nelder & Mead 1965).
As already noted by Joachimi & Taylor (2011), log-likelihood degeneracies in TP space are common. These may jeopardise the numerical stability of the calculation of $\ln L$. There are generic cases where a moderately large value for one transformation parameter may already result in unmanageably large numerical values for the transformed sample, such as the power transformation with large $\lambda$. Generically, the optimisation algorithm tends to slide into these TP space regions quite easily. Hence, we include a penalty term of the form

$P(\mathbf{a}) = \epsilon \sum_{i=1}^{d} \left\| \mathbf{a}_i - \mathbf{a}_i^{(0)} \right\|^{p} ,$   (6)

where the $\mathbf{a}_i^{(0)}$ are the parameter values corresponding to the identity transformation. We minimise the function $-\ln L(\mathbf{a}) + P(\mathbf{a})$ over TP space. Moderate values of the penalty strength $\epsilon$ and exponent $p$ have proven to be highly stabilising, and at the same time do not distort the shape of the resulting analytic posterior distribution.
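As an illustrative sketch (not the code used in this work), the penalised objective and its optimisation can be written with SciPy's Nelder–Mead in place of the GSL implementation; the quadratic penalty form used here is an assumption for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def boxcox(theta, lam, a):
    """Shifted Box-Cox transform; defined for theta + a > 0."""
    s = theta + a
    return np.log(s) if lam == 0 else (s**lam - 1.0) / lam

def neg_profile_loglike(tp, sample, w, eps=0.0, tp0=None):
    """Negative profile log-likelihood (Eq. 5) for per-parameter Box-Cox
    maps, plus an optional penalty anchored at the identity TPs (Eq. 6).
    tp = (lambda_1..lambda_d, a_1..a_d); sample has shape (N, d)."""
    n, d = sample.shape
    lams, shifts = tp[:d], tp[d:]
    if np.any(sample + shifts <= 0.0):
        return np.inf                     # outside the Box-Cox domain
    x = np.column_stack([boxcox(sample[:, i], lams[i], shifts[i])
                         for i in range(d)])
    wn = w / w.sum()
    mu = wn @ x
    diff = x - mu
    cov = (diff.T * wn) @ diff / (1.0 - np.sum(wn**2))
    # weighted log|Jacobian| of the separable Box-Cox map
    logjac = np.sum(wn @ ((lams - 1.0) * np.log(sample + shifts)))
    nll = 0.5 * np.linalg.slogdet(cov)[1] - logjac
    if tp0 is not None:
        nll += eps * np.sum((tp - tp0)**2)   # stabilising penalty
    return nll

# Usage (hypothetical 1-D sample `theta_1d` with weights `w`):
# tp0 = np.array([1.0, 0.0])               # identity: lambda = 1, a = 0
# res = minimize(neg_profile_loglike, tp0, args=(theta_1d[:, None], w),
#                method='Nelder-Mead')
```

For a log-normal sample, the minimum lies near $\lambda = 0$, as expected for Box–Cox.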
In this work, we employ the Nelder–Mead algorithm just for illustrating the method – faster and more reliable algorithms to find the global minimum of the likelihood function exist (such as Bound Optimisation BY Quadratic Approximation – BOBYQA, see Powell, 2007) and can readily be applied here.
2.2 Box–Cox transformations and their kin
The Box–Cox transformation (Box & Cox, 1964) is a generalisation of the power map. This transformation family is widely used in statistics and econometrics, e.g. to make data approximately homoscedastic and normal. Our usage is different in that we use it to alter the distribution of model parameters, rather than the distribution of data. Including a shift parameter $a$, the one-dimensional version is defined as

$x(\theta; \lambda, a) = \begin{cases} \left[ (\theta + a)^{\lambda} - 1 \right] / \lambda & \lambda \neq 0 , \\ \ln(\theta + a) & \lambda = 0 , \end{cases}$   (7)

for a single MP $\theta$. Note that the family is continuous at $\lambda = 0$ and that the mapping requires $\theta + a > 0$. Typically, an MP with a skewed distribution can be transformed to an MP with a symmetric, Gaussian distribution upon the appropriate choice of the power TP $\lambda$; e.g., a log-normal distribution can be analytically transformed to a Gaussian with $\lambda = 0$. The identity transformation corresponds to $\lambda = 1$ and $a = 0$. Inserting this transformation family into Eq. (2), we recover the formula given in Joachimi & Taylor (2011).
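A NumPy sketch of the shifted Box–Cox map and its inverse (illustrative helper names), verifying the log-normal case quoted above:

```python
import numpy as np

def boxcox(theta, lam, a=0.0):
    """Shifted Box-Cox transform of Eq. (7); defined for theta + a > 0."""
    s = theta + a
    return np.log(s) if lam == 0 else (s**lam - 1.0) / lam

def boxcox_inv(x, lam, a=0.0):
    """Inverse map, used to transform the Gaussian back to the MPs."""
    return (np.exp(x) if lam == 0 else (lam * x + 1.0)**(1.0 / lam)) - a

# A log-normal variable is exactly Gaussianised by lam = 0:
rng = np.random.default_rng(1)
theta = np.exp(rng.normal(size=50000))
x = boxcox(theta, lam=0.0)
# x is then (up to sampling noise) a standard normal sample
```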
As an extension of the Box–Cox family, we propose the Arcsinh–Box–Cox transformation (‘ABC transformation’ hereafter):
$x(\theta; \lambda, a, t) = \frac{1}{t} \operatorname{arcsinh}\!\left( t \, x_{\mathrm{BC}}(\theta; \lambda, a) \right) ,$   (8)

where $x_{\mathrm{BC}}$ denotes the Box–Cox map of Eq. (7). The inclusion of the TP $t$ will prove particularly useful to remove residual kurtosis from an MP distribution. The identity transformation is recovered for $\lambda = 1$, $a = 0$, in the limit $t \to 0$.
The Box–Cox family does not form a group, because two subsequent transformations cannot be expressed as another Box–Cox transformation; the same holds for the ABC family. This will be of importance for Sect. 2.4.
Box–Cox transformations demonstrate that the domain of the transformation – in particular its dependence on the shift parameter – requires special attention: for given $a$, it is defined only for $\theta > -a$; the same holds for ABC transformations. Thus, the optimisation procedure for the sample requires that $a_i$, the shift parameter for the $i$th model parameter, is bounded from below, i.e. $a_i > -\min_n \theta_{n,i}$. Conversely, this means that, once the optimal transformation parameters are found and inserted into the analytic expression for the original posterior density, Eq. (2), it is not defined for every possible value of the MP $\theta_i$, but only for $\theta_i > -a_i$. The normalisation therefore needs to be adjusted, which can be done analytically. However, if the sample is large enough that the tails of the distributions are properly represented, this truncation of the domain is not problematic.
2.3 Verifying the optimal transformation
Once the optimal transformation within its family is found, how do we judge the effectiveness of the resulting Gaussianisation? We adopt the following pragmatic standpoint: if the analytic posterior manages to reproduce the one-dimensional and two-dimensional marginalised contours of the sample, it is deemed acceptable. To this end, we propose a test via a cross-contour (CC) plot. The idea is to characterise a probability density by the location of its contours – the surfaces of constant density – and the probability mass stored inside, i.e. the integral of the density over the interior of a contour. If two densities $f$ and $g$ are identical, then they will store the same mass in any region of the parameter space; if they are different, we expect to find different probabilities for the same regions (e.g. the regions bounded by contours of $f$). Thus, looking at the family of contour-bounded regions of $f$, we can ask: does the probability for these, assigned via $f$, agree with the probability for them assigned via $g$?

To formalise this, consider the following: given a probability density $f$ in $d$ dimensions, which takes function values between 0 and $f_{\max}$, we define the contour-bounded region assigned to the density value $v$ as

$\Omega_f(v) = \left\{ \boldsymbol{\theta} \in \mathbb{R}^d : f(\boldsymbol{\theta}) \geq v \right\} .$

The probability mass enclosed in any of these is

$M_f(v) = \int_{\Omega_f(v)} f(\boldsymbol{\theta})\, \mathrm{d}^d\theta .$

Now, assuming we have two probability densities $f$ and $g$ in $d$ dimensions, do the contours of $g$ reproduce those of $f$? They do in the relevant sense if, for every $v$, the $g$-mass enclosed in the $v$-contour of $f$ equals the $f$-mass in this contour, i.e.

$\int_{\Omega_f(v)} g(\boldsymbol{\theta})\, \mathrm{d}^d\theta = M_f(v) \qquad \text{for all } v \in [0, f_{\max}] .$   (9)

It should be noted that this alone is not a sufficient condition for $f = g$, but the counterexamples, which can be constructed mathematically, are non-generic and can be neglected for our purposes.

To detect deviations between the contours of $f$ and $g$, we could simply plot the left and the right side of Eq. (9) for a grid of $v$ values between 0 and $f_{\max}$, and plot the points with respect to the line of equality. For concrete problems, it is often more instructive to subtract the right side from the left side, and plot the excess (or deficit) probability mass of $g$ inside the contours of $f$. If, in this plot, the excess for every contour is consistent with zero, we have succeeded.

In our situation, we compare a point sample $\mathcal{S}$ with a probability density function $f$ – the analytic posterior density as reconstructed via Gaussianisation. The right side of Eq. (9) is the probability mass in the region where the density $f$ is greater than or equal to $v$; the left side is the fraction of the point sample which lies in the same region. So, for every value $v$ in the range of $f$, we find the probability mass enclosed in $\Omega_f(v)$ by gridding over a region containing the sample. Similarly, we count the number of points of $\mathcal{S}$ where the value of $f$ is above $v$, to compute the fraction of points that lie within $\Omega_f(v)$. This fraction is an estimator of the actual probability mass enclosed, because $\mathcal{S}$ is a discrete sample from the actual posterior distribution. To find the variance of this estimator, we calculate the fraction on 2,000 bootstrap realisations of $\mathcal{S}$, and determine the 95% confidence intervals from these. If, for every $v$, the analytic posterior probability mass inside $\Omega_f(v)$ is within this confidence interval for the sample point fraction within $\Omega_f(v)$, we judge our reconstruction attempt to be successful.
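A sketch of how both sides of Eq. (9) can be estimated in practice (hypothetical helper names; the gridding and bootstrap follow the description above):

```python
import numpy as np

def analytic_mass_inside(pdf_grid, cell_volume, v):
    """Analytic probability mass above density level v (right side of
    Eq. 9), estimated by gridding over a region containing the sample."""
    return np.sum(pdf_grid[pdf_grid >= v]) * cell_volume

def sample_fraction_inside(pdf_at_points, weights, v):
    """Weighted fraction of sample points above level v (left side of Eq. 9)."""
    w = weights / weights.sum()
    return np.sum(w[pdf_at_points >= v])

def bootstrap_interval(pdf_at_points, weights, v, n_boot=2000, seed=0):
    """95% bootstrap confidence interval on the sample fraction."""
    rng = np.random.default_rng(seed)
    n = len(pdf_at_points)
    fracs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        fracs[b] = sample_fraction_inside(pdf_at_points[idx], weights[idx], v)
    return np.percentile(fracs, [2.5, 97.5])
```

On a one-dimensional standard normal, for instance, the level corresponding to the 1σ contour encloses a mass of about 0.68 on both sides of Eq. (9).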
It should be noted that poor MCMC sampling of the original target density will yield a poor representation of this density by our reconstructed density (2). Any information about the distribution lost by undersampling cannot be regained. However, as demonstrated in Sect. 3, our method reproduces less biased contours than other standard methods of density estimation even in regions of low point density, i.e. where any density estimate must be an extrapolation. Hence, it can be used when the length of the input Markov chains is restricted by computational cost or file size.
2.4 Multipass transformations
If even the optimal Gaussianising transformation amongst a given family does not bring the posterior density sufficiently close to a Gaussian shape (e.g., as determined via a CC plot), we have two options. We can provide a different family of transformations and redo the optimisation; or we can repeat the process on the sample after the first transformation. As already mentioned in Sect. 2.2, the transformation families employed in this work do not form groups. Hence, two subsequent transformations do not result in another transformation from that family, and transforming twice potentially provides a better Gaussianisation than transforming once. In principle, it is possible to apply multiple subsequent transformations, should the quality of the result necessitate it.
In this spirit, we have implemented the following two-pass transformation protocol:

1. Optimise the TPs of the first transformation.

2. Linear reshaping: centring, rescaling, rotating.

3. Optimise the TPs of the second transformation.
Strictly speaking, this transformation, whilst being bijective, no longer falls into the class as set up in Eq. (1), as different model parameters are mixed. Nonetheless, Eq. (2) for the analytic posterior density generalises in a straightforward way.
In the second step, the sample after the first Gaussianising transformation is subjected to the following maps (in this order): subtract the sample mean from every parameter, so that the sample is centred on the origin. Then, rescale every parameter such that the standard deviation is unity. Finally, rotate into the eigenbasis of the covariance matrix – this procedure is generally known as principal component analysis (PCA). These reshaping operations not only help to avoid numerical instabilities (centring, rescaling), but also open up new directions for Gaussianisation by presenting uncorrelated parameters to the second Gaussianising transformation, since the transformations defined in Eq. (1) cannot mix parameters. If two parameters have substantial covariance after step 1, it can be crucial to decorrelate them.
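The linear reshaping of Step 2 can be sketched as follows (a NumPy illustration; the weighting convention is an assumption):

```python
import numpy as np

def linear_reshape(x, w):
    """Step 2 of the two-pass protocol: centre, rescale to unit
    standard deviation, then rotate into the covariance eigenbasis (PCA)."""
    wn = w / w.sum()
    mu = wn @ x
    y = x - mu                                     # centring
    sig = np.sqrt(wn @ y**2)
    y = y / sig                                    # rescaling
    cov = (y.T * wn) @ y / (1.0 - np.sum(wn**2))   # debiased weighted covariance
    eigval, eigvec = np.linalg.eigh(cov)
    z = y @ eigvec                                 # rotation: decorrelated MPs
    return z, (mu, sig, eigvec)
```

The returned sample has zero mean and (numerically) vanishing cross-covariance, so the second Gaussianising transformation sees uncorrelated parameters.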
Nevertheless, a price is to be paid for the Gaussianising power added by Step 2: it sacrifices a decisive property of the simple one-step transformation routine, namely that every transformed MP $x_i$ depends only on a single untransformed MP $\theta_i$. This property allows for easy marginalisation of the analytic posterior: we can marginalise the Gaussianised sample by dropping all coordinates we wish to marginalise out, and determining the mean vector and covariance matrix of the remaining ones. Transforming this marginalised Gaussian density back will then yield the marginalised posterior density on the untransformed MPs. However, with linear reshaping included, this is no longer possible.
This may be problematic for some applications (such as visualisation of 1D or 2D marginal distributions, or creating a CC plot), but not for others – as long as we need only the marginal distribution of a single combination of parameters, we can marginalise by discarding all MP columns of the sample except the ones in question, prior to Gaussianising.
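Since a multivariate Gaussian marginalises by simply dropping coordinates, the one-pass marginalisation described above reduces to sub-selecting the transformed moments; a minimal sketch with hypothetical numbers:

```python
import numpy as np

# Suppose the Gaussianised 3-parameter sample yielded these moments
# (hypothetical values for illustration):
mu = np.array([0.0, 1.0, -0.5])
cov = np.array([[1.0, 0.2, 0.1],
                [0.2, 2.0, 0.3],
                [0.1, 0.3, 1.5]])

# Marginalising a Gaussian = keeping the relevant rows/columns;
# transforming the reduced Gaussian back through the per-parameter
# inverse map then gives the marginal posterior on the original MPs.
keep = [0, 2]
mu_marg, cov_marg = mu[keep], cov[np.ix_(keep, keep)]
```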
3 A toy example
We illustrate these ideas with a two-dimensional example. We draw a sample of 10,000 points from a bivariate Gaussian distribution, and map it through an inverse Box–Cox transformation with known input TP values (see Table 1). All weights are set to unity.
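Such a mock sample can be generated along the following lines (the TP values here are illustrative placeholders, not the Table 1 inputs):

```python
import numpy as np

def boxcox_inv(x, lam, a):
    """Inverse of the shifted Box-Cox map."""
    return (np.exp(x) if lam == 0 else (lam * x + 1.0)**(1.0 / lam)) - a

# Draw from a bivariate Gaussian, then push each coordinate through an
# inverse Box-Cox map with known TPs:
rng = np.random.default_rng(42)
g = rng.multivariate_normal(mean=[3.0, 3.0],
                            cov=[[0.2, 0.1], [0.1, 0.3]], size=10000)
theta = np.column_stack([boxcox_inv(g[:, 0], lam=0.5, a=1.0),
                         boxcox_inv(g[:, 1], lam=2.0, a=0.0)])
```

By construction, at least one Box–Cox transformation (the one with the input TPs) exactly Gaussianises the distribution underlying `theta`.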
This mock data sample has the advantage that there exists at least one Box–Cox transformation which precisely Gaussianises the underlying probability distribution. Figure 1 shows the original sample, and the one transformed with the one-pass Box–Cox transformation which was found to be optimally Gaussianising, i.e. maximising the log-likelihood in Eq. (5). As this is a comparatively simple problem, we have set the penalty term in Eq. (6) to zero. The Nelder–Mead algorithm was started sixteen times independently with randomised initial conditions. The values of the recovered optimal TPs are shown in Table 1; the scatter amongst these sixteen values is negligible, so multiple Nelder–Mead runs are not necessary in this low-dimensional example: all of them find the same maximum of the log-likelihood. In high-dimensional cases, however, this strategy can increase the robustness of the procedure.

The apparent difference between the parameters of the single inverse Box–Cox transformation and the values found by the Box–Cox optimisation is due to degeneracies in parameter space. To illustrate these, we show the profile likelihood for the TPs of one dimension, with those of the other dimension held fixed at their input values, and vice versa, in Fig. 2. The TPs found by the optimisation algorithm (black crosses – note that they are projected onto the plane for which the profile likelihood is shown) are degenerate with the input ones (red star). Both Box–Cox transformations map the distribution to sufficiently Gaussian form.
Parameter | Input value | Recovered value
 | 2 |
 | 0.4 |
 | 3 |
 | 4 |
We compare our method of reconstructing an analytic posterior density from an MCMC sample with the standard non-parametric method, kernel density estimation (KDE), which also aims to find a functional form for the probability density. The contours of the posterior density from Gaussianisation are shown jointly with those from KDE: these employ a Gaussian kernel, whose covariance matrix is estimated from the sample, and Silverman's rule (Silverman 1998) has been used to determine the bandwidth parameter. No additional smoothing has been applied in Fig. 3, top panel. The bottom panel shows the excess cross-contour probability masses between the analytic posterior and the sample, and between the KDE and the sample, respectively, as detailed in Sect. 2.3. Whereas the Box–Cox posterior is consistent with the sample distribution for every single contour, the KDE contours show a strong bias – the contours are wider than they should be. Given that the precision of the contour reconstruction is, for the Box–Cox method, limited only by the finite size of the sample, it has the potential to perform better than the (biased) kernel density method. Additionally, for applications in which frequent calls of the posterior density are a bottleneck for computation speed, our method of density reconstruction can be advantageous: the additional initial cost of finding the transformation parameters can be outweighed by the subsequent evaluation speed-up.
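For reference, the KDE baseline can be reproduced with SciPy's `gaussian_kde`, which implements a Gaussian kernel with the sample covariance and supports Silverman's rule directly (a sketch on synthetic data):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic stand-in for an MCMC sample:
rng = np.random.default_rng(0)
sample = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], 10000)

# gaussian_kde expects data of shape (d, N)
kde = gaussian_kde(sample.T, bw_method='silverman')
density = kde(sample.T)          # KDE estimate at the sample points
```

The `density` values can be fed directly into the cross-contour machinery of Sect. 2.3 to expose the contour-widening bias discussed above.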
4 Performance results: Planck data
To demonstrate how the algorithm works on real data, we have employed MCMC samples from the first data release of the Planck mission (see Planck XVI, 2014). This satellite has measured the temperature and polarisation anisotropies in the CMB, whose power spectra are sensitive measures of the underlying cosmology. The Planck Collaboration has published several data products (see http://www.cosmos.esa.int/web/planck/pla), including MCMC samples from the posterior probability densities of various cosmological models, generated with CosmoMC (see Lewis & Bridle, 2002; also http://cosmologist.info/cosmomc).
The baseline cosmology is the standard model of a flat Universe with cold dark matter and a cosmological constant, commonly known as ΛCDM. It contains six parameters: $\Omega_{\rm b}h^2$ (today's baryon density), $\Omega_{\rm c}h^2$ (today's cold dark matter density), $\theta_{\rm MC}$ (scaled sound horizon), $\tau$ (reionisation optical depth), $n_{\rm s}$ (spectral index of primordial scalar perturbations), and $\ln(10^{10}A_{\rm s})$ (log power amplitude of primordial scalar perturbations). Several extensions of this baseline model are also listed, including those obtained by adding one of the following parameters: $\Omega_K$ (curvature parameter), $w$ (dark energy equation of state), $r$ (primordial tensor-to-scalar amplitude ratio), and $N_{\rm eff}$ (effective number of relativistic degrees of freedom). Further, these chains list derived quantities, e.g. today's Hubble parameter $H_0$ and the age of the Universe, and a variety of foreground modelling parameters, such as $A^{\rm PS}$ and $A^{\rm CIB}$, modelling the amplitudes of Poisson point sources and the cosmic infrared background in the Planck frequency bands. These are of particular interest to us, as the most prominent non-Gaussian features of the posterior densities can be seen in them.
The chains, as presented, are not decorrelated, so we thin them by using every 20th sample. We employ the '..._planck_lowl_...' chains, which use only the temperature–temperature correlations. The plots in this section are created using a seven-parameter extended model; the sample contains 11,546 points after thinning.
All these Markov chains assume uniform proper prior densities (i.e. supported on compact rectangular boxes) and list the log-likelihood for every point (for further details, see Planck XVI, 2014).
We show several 2D marginalised posterior samples exhibiting different non-Gaussian features, and how well they are reproduced by the analytic posterior (Eq. 2): triangular shapes (see Fig. 4), pronounced non-linear degeneracies (see Fig. 5), and sharp boundaries ('walls') arising from MP space boundaries (see Fig. 6).
Figure 5 demonstrates the usefulness of the intermediate PCA in between the ABC transformations: the first transformation has straightened out the curved shape of the maximum, but the distribution still appears skewed towards the upper left direction (see top right panel). This is remedied by reshaping, PCA and another ABC transformation (bottom right panel) – the second Gaussianising transformation having only little effect compared to the PCA.
The Gaussianisation of the distribution in Fig. 6 (top left) shows how two concatenated transformations can be more powerful than a single one. The once-transformed sample still exhibits negative excess kurtosis, which is removed by the second transformation (bottom left to bottom right panel).
Further, we compare the CC plots of this two-pass transformation and the one-pass transformation in Fig. 7, which also shows the resulting contours (left panel). The associated one-pass CC plot (bottom right) shows a significant deficit of point-sample mass compared to the analytic posterior mass over several ranges of contour levels; in particular, the deficit is visible between the 2σ and 3σ contours close to the wall-like constraint. By contrast, the CC plot for the two-pass transformation (Fig. 7, top right) demonstrates good agreement between the contours of the analytic posterior and the point sample.
To demonstrate the algorithm working on a high-dimensional example, we Gaussianise a seven-dimensional Planck MCMC sample with an ABC transformation. In order to visualise the result, we show all one-dimensional and two-dimensional marginal distributions of the point sample and the full analytic posterior density (see Fig. 8). We employ one-pass transformations because, as discussed in Sect. 2.4, the marginalisation of the analytic posterior from 7D down to 2D or 1D would not be possible without explicit integration or sampling, had we chosen the two-pass protocol.
5 Application: Fast evidence computation
To decide which of two models $M_1$ and $M_2$, each with their associated MP space, is more predictive on a common set of data $D$, the essential quantity to compute is the model evidence $P(D|M_i)$ – see Jaynes (2003); MacKay (2003); Kass & Raftery (1995); Skilling (2006). The ratio of the evidences (called the Bayes factor) is then used for updating the prior model odds to posterior model odds in the Bayesian sense:

$\frac{P(M_1|D)}{P(M_2|D)} = \frac{P(D|M_1)}{P(D|M_2)} \cdot \frac{P(M_1)}{P(M_2)} .$   (10)
The evidence itself, for each model, can be computed via
$P(D|M_i) = \int \mathrm{d}^d\theta\; P(D|\boldsymbol{\theta}, M_i)\, P(\boldsymbol{\theta}|M_i) ,$   (11)

i.e. via integration of the (unnormalised) posterior density over the respective parameter space of model $M_i$ – hence the term 'marginal likelihood' for $P(D|M_i)$. If the integrand takes a form with non-Gaussian features, and if the model parameter space is high-dimensional, this integral itself is often difficult to calculate.
However, with a bijective transformation that Gaussianises the posterior, we can compute the evidence integral analytically. If the transformed (unnormalised) posterior $q$ has the shape of an (unnormalised) multivariate Gaussian

$q(\mathbf{x}) = q_{\max} \exp\!\left( -\tfrac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^{\mathrm{T}} \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right) ,$   (12)

with mean $\boldsymbol{\mu}$, covariance matrix $\Sigma$ and maximum $q_{\max}$, the log-evidence reads

$\ln E = \ln q_{\max} + \frac{d}{2} \ln(2\pi) + \frac{1}{2} \ln \det \Sigma .$   (13)
Similar expressions for Gaussian posterior densities can be found in Taylor & Kitching (2010). To estimate $\ln E$, we need the absolute normalisation of the posterior; hence this method can only be applied to samples which provide the values of the unnormalised posterior density (possibly in the form of log-likelihood and log-prior). From these, we compute the values of $\ln q$ on the optimally Gaussianised sample by adding the logarithm of the transformation Jacobian, and then fit the parameters $q_{\max}$, $\boldsymbol{\mu}$, and $\Sigma$ of the Gaussian via least-squares regression. This can be performed analytically, and even be used to compute an error bar on the value of $\ln E$ – see Appendix A for details.
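A simplified sketch of this evidence estimate: here mu and Sigma are taken from the weighted sample moments, and only the amplitude is fitted, rather than the full least-squares regression of Appendix A:

```python
import numpy as np

def log_evidence_gaussianised(x, logpost, w):
    """Eq. (13): ln E = ln q_max + (d/2) ln 2pi + (1/2) ln det Sigma.

    Simplified sketch: mu and Sigma come from the weighted sample
    moments of the Gaussianised sample x (N, d); logpost holds the
    unnormalised log-posterior values (Jacobian included); only the
    amplitude ln q_max is fitted, as a weighted mean of the residual."""
    n, d = x.shape
    wn = w / w.sum()
    mu = wn @ x
    diff = x - mu
    cov = (diff.T * wn) @ diff / (1.0 - np.sum(wn**2))
    icov = np.linalg.inv(cov)
    chi2 = np.einsum('ni,ij,nj->n', diff, icov, diff)
    ln_qmax = np.sum(wn * (logpost + 0.5 * chi2))   # fitted amplitude
    sign, logdet = np.linalg.slogdet(cov)
    return ln_qmax + 0.5 * d * np.log(2.0 * np.pi) + 0.5 * logdet
```

On a sample that is already Gaussian, this recovers the known normalisation constant of the input density.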
If the prior distribution for one MP, and hence the posterior, is supported only on a finite interval, the same will hold true for the transformed MP if we restrict ourselves to one-pass transformations. If the sample size is large enough to properly represent the cut-off, the Gaussianising transformation will alleviate this feature, but may not fully remove it. Assuming the marginal distribution to be Gaussian, when in reality we may be dealing with a truncated Gaussian, will lead to a systematic error in the evidence, so it is advantageous to remove these features before starting the search for optimally Gaussianising TPs. Appendix B details 'unboxing transformations', which redefine the MPs by mapping a finite open interval to the entire real line. In fact, it is also possible to use them for posterior density reconstruction, before Step 1 in Sect. 2.4.
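As an illustration, a logit-style map is one natural choice of such an unboxing transformation (the specific form adopted in Appendix B may differ):

```python
import numpy as np

def unbox(theta, lo, hi):
    """Map the finite open interval (lo, hi) to the entire real line."""
    u = (theta - lo) / (hi - lo)
    return np.log(u / (1.0 - u))

def unbox_inv(x, lo, hi):
    """Inverse map, needed to transform the density back to the MPs."""
    return lo + (hi - lo) / (1.0 + np.exp(-x))
```

Applied before the Gaussianisation, such a map removes the hard prior boundaries, so the subsequent transformation does not have to fight a truncated distribution.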
To demonstrate this idea, we compute the evidence integral first on a mock data set, and subsequently on real data from cosmology. For the former, we draw a random sample of length 10,000 from a ten-dimensional log-normal probability distribution, and assign to each point the value of the probability density function, multiplied by a constant factor. All weights are set to unity. This mock sample is subjected first to the Gaussianisation procedure with one-pass ABC transformations (no unboxing), and then to the regression outlined in App. A to retrieve the best-fit estimate for $\ln E$ and its error bar. To verify these, we determine the distribution of the estimator in Eq. (13) by producing 1,000 bootstrap samples from the transformed sample, and computing $\ln E$ on each of them, together with its one-sigma confidence intervals. To increase the reliability of the optimally Gaussianising transformation, 24 independent Gaussianisation runs are started with randomised initial conditions; Fig. 9 shows the results for these. Relative to the 'true' value of 5, these agree to sub-percent accuracy. It is also noteworthy that a lower local maximum of the likelihood, i.e. one further to the right in Fig. 9, will depart more from the true value; hence, a location in TP space that is merely close to, but not at, the exact optimum will already yield a biased value for the log-evidence. This indicates that the evidence is a sensitive indicator for departures from Gaussianity. Note that a log-normal distribution can be precisely Gaussianised with Box–Cox transformations – and thus also by ABC transformations, which are a superset of these. Further, it is noteworthy that our analytic procedure yields error bars of the right magnitude, yet somewhat more conservative, compared to their bootstrapped counterparts. This discrepancy arises because the bootstrapped distributions of the fitted quantities deviate slightly from Gaussianity, whereas the analytic error bars assume Gaussian error propagation (see App. A for details).
For a demonstration on real-world data, we Gaussianise the joint posterior distribution of MPs given data from weak lensing and baryon acoustic oscillations. The weak lensing data set is the 2D cosmic shear data taken by the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS; see Heymans et al. 2012; Kilbinger et al. 2013). The CFHTLenS survey analysis combined weak lensing data processing with THELI (Erben et al. 2013), shear measurement with lensfit (Miller et al. 2013), and photometric redshift measurement with PSF-matched photometry (Hildebrandt et al. 2012). A full systematic error analysis of the shear measurements in combination with the photometric redshifts is presented in Heymans et al. (2012), with additional error analyses of the photometric redshift measurements presented in Benjamin et al. (2013).
The BAO data set is the Data Release 9 (DR9) CMASS sample from the Baryon Oscillation Spectroscopic Survey (BOSS), which is part of the Sloan Digital Sky Survey III (SDSS-III) – see Anderson et al. (2012). This contains 264,283 massive galaxies, whose correlation function and power spectrum both exhibit the features of baryon acoustic oscillations. The quantity $r_{\rm s}(z_{\rm d})/D_V(z)$, i.e. the ratio of the comoving sound horizon at the baryon drag epoch and the spherically volume-averaged distance $D_V(z)$, is a probe of the underlying cosmological parameters – see Percival et al. (2007) for details.
To draw samples from the posterior distribution, we use the CosmoPMC software package^{3}^{3}3See http://www2.iap.fr/users/kilbinge/CosmoPMC/., which implements Population Monte Carlo (PMC), an algorithm that approximates the target distribution by a Gaussian mixture model. We compare three cosmological models: standard flat $\Lambda$CDM, curved $\Lambda$CDM, and flat $w$CDM. The first has a four-dimensional parameter space spanned by the matter density $\Omega_{\rm m}$, the power spectrum normalisation $\sigma_8$, the baryon density $\Omega_{\rm b}$, and the normalised Hubble parameter $h$ – all other parameters are set to their best-fit values for flat $\Lambda$CDM, see Planck XIII (2015). The latter two contain a fifth model parameter each – the curvature parameter $\Omega_K$ and the constant dark energy equation-of-state parameter $w$, respectively. For all of these parameters, flat proper priors were chosen. The baseline model – flat $\Lambda$CDM – is always referred to as model 1, whereas model 2 is one of the two extensions. As a by-product of the sampling process, PMC provides the model evidence for the data set used – see Kilbinger et al. (2010) for further details.
Table 2: Log-evidences $\ln E$ and Bayes factors $\ln B_{12}$ for the three models, computed via Gaussianisation (G), Population Monte Carlo (PMC), and the Savage–Dickey density ratio (SDDR).

                      flat $\Lambda$CDM   curved $\Lambda$CDM   flat $w$CDM
dimension             4                   5                     5
$\ln E$ (G)           486.96 ± 0.01       485.79 ± 0.03         486.09 ± 0.05
$\ln E$ (PMC)         487.02 ± 0.03       485.84 ± 0.01         486.00 ± 0.04
$\ln B_{12}$ (G)      n.a.                1.17 ± 0.04           0.87 ± 0.05
$\ln B_{12}$ (SDDR)   n.a.                1.23 ± 0.04           0.93 ± 0.06
$\ln B_{12}$ (PMC)    n.a.                1.19 ± 0.04           1.03 ± 0.05
In the special situation where one model is nested inside the other, the evidence ratio can be computed via the Savage–Dickey density ratio (SDDR) – see Dickey (1971), Verde et al. (2013), and citations therein. Under mild conditions on the prior and posterior densities for the full model $M_2$ and the submodel $M_1$, the evidence ratio can be derived to be

$\frac{E_1}{E_2} = \frac{P(\omega = \omega_* \,|\, d, M_2)}{\pi(\omega = \omega_* \,|\, M_2)}\,,$  (14)

where $\omega$ denotes the extra parameter (or parameters) contained in $M_2$ but not in $M_1$, $\omega_*$ is the value of $\omega$ that specifies the submodel $M_1$, and $P$ and $\pi$ are the posterior and prior densities of the full model, marginalised over all model parameters but $\omega$ (see Verde et al. 2013 for details).
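The SDDR can be evaluated directly from a chain: estimate the marginal posterior density at $\omega_*$ and divide by the prior density there. The following hedged sketch uses a synthetic Gaussian posterior in place of a real chain, so the estimate can be checked against the analytic answer; all values are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy nested-model setup (all numbers hypothetical): the extra parameter
# omega has a flat prior on [-2, 2] and a Gaussian marginal posterior;
# the submodel is specified by omega_star = 0.
omega_star = 0.0
prior_density = 1.0 / 4.0                  # flat prior on [-2, 2]
posterior_samples = rng.normal(0.5, 0.2, size=50_000)

# Estimate the marginal posterior density at omega_star from the chain
# with a kernel density estimate, then form the Savage-Dickey ratio.
kde = stats.gaussian_kde(posterior_samples)
sddr = kde(omega_star)[0] / prior_density  # = E_1 / E_2

# Analytic reference for this toy case
exact = stats.norm.pdf(omega_star, 0.5, 0.2) / prior_density
print(sddr, exact)
```

In practice the density estimate at $\omega_*$ is the weak point of the SDDR, especially when $\omega_*$ lies in the tail of the posterior, which is one motivation for the analytic Gaussianised evidence of this paper.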
We find that the log-evidence values computed by CosmoPMC need to be offset by a multiple of the log-prior density that depends on the number of data sets used. This is due to a non-standard interpretation of the prior density within CosmoPMC. Throughout this work, we apply this correction to the log-evidence values produced by CosmoPMC, as well as to the log-posterior values extracted from the CosmoPMC output. We follow the practice of Kilbinger et al. (2010, 2013) of accepting a CosmoPMC run as soon as the built-in convergence diagnostic, called perplexity, exceeds its threshold value. Sampling to even higher values of the perplexity still changes the CosmoPMC value for $\ln E$ by as much as 0.1 – this indicates a residual bias in the statistic. However, since the exact same offset appears in the CosmoPMC output values for the log-evidence and for the non-normalised log-posterior, it is not relevant to demonstrating our method, and investigating its origin is beyond the scope of this work.
Table 2 shows the log-evidences for the three models, and the Bayes factors of $\Lambda$CDM compared to either of the two extended models. The numbers in the first line were computed via one-pass Gaussianisation with ABC transformations, preceded by an unboxing transformation. To estimate the scatter of the CosmoPMC and SDDR values for $\ln E$ and $\ln B_{12}$ in the second, fourth, and fifth lines, we rerun CosmoPMC ten times for each model, and determine the mean and standard deviation of the CosmoPMC and SDDR estimators. As for the lognormal sample, 24 independent Gaussianisation runs were started for each sample, and the one with the highest log-likelihood value was chosen to transform the sample, which is then subjected to the analytic evidence computation procedure. The values in the first row of Table 2 are the weighted averages and standard deviations of all ten values, with inverse-variance weights $w_i = 1/\sigma_i^2$ determined from the analytic error bars (cf. App. A). The values show that the combined data favour $\Lambda$CDM over either of the extended models, although the evidence against either of the two is not strong.
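The inverse-variance averaging of the ten runs amounts to the following (the numbers here are made up for illustration, not the actual run values):

```python
import numpy as np

# Combine ten independent ln E estimates with inverse-variance weights
# w_i = 1/sigma_i^2 (illustrative values only).
lnE = np.array([486.95, 486.98, 486.96, 486.94, 486.97,
                486.96, 486.99, 486.95, 486.96, 486.97])
sigma = np.full(10, 0.03)   # analytic error bars from App. A

w = 1.0 / sigma**2
mean = np.sum(w * lnE) / np.sum(w)   # weighted average
err = 1.0 / np.sqrt(np.sum(w))       # error of the weighted mean
print(mean, err)
```

With equal error bars this reduces to the plain mean with error $\sigma/\sqrt{10}$; unequal analytic error bars down-weight the less reliable Gaussianisation runs.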
Our values agree with those from SDDR and PMC within the spread between the latter two estimators, but small deviations remain that are larger than the quoted error bars. These may be due to residual non-Gaussianity in the transformed samples, to which the evidence is a sensitive measure.
6 Conclusions and outlook

We have discussed how to transform a posterior probability density approximately into a multivariate Gaussian, and various applications thereof:

From the parameters of the Gaussianised distribution and those of the transformation, we can reconstruct an analytic expression for the original posterior probability density, given a point sample drawn from it. This facilitates the combination of different data sets to obtain the joint posterior density.

Further, this analytic posterior can be used to display contours of the density in question or of its marginals, without the need for density estimates or smoothing procedures. In reproducing the contours of the probability density reliably, it also outperforms kernel density estimates.

We suggest using a Gaussianising transformation to disseminate a posterior density, instead of distributing lengthy point samples in the form of a Markov chain. Only the transformation parameters and the first and second moments of the resulting Gaussian are needed to reproduce the posterior density in its functional form; hence we can achieve substantial data compression.


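The compression idea of the last item can be illustrated in one dimension: a 20,000-point chain is reduced to three numbers (the Box–Cox parameter and the Gaussian moments), from which the posterior density is reconstructed analytically via the change-of-variables formula. This is a minimal sketch, not the paper's implementation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
chain = rng.lognormal(0.0, 0.4, size=20_000)   # stand-in MCMC sample

# Compress: Box-Cox parameter plus mean/std of the transformed sample
transformed, lam = stats.boxcox(chain)
mu, sigma = transformed.mean(), transformed.std()

def posterior_pdf(theta):
    """Analytic reconstruction: Gaussian in transformed space times the
    Jacobian |dt/dtheta| = theta**(lam - 1) of the Box-Cox map."""
    t = (theta**lam - 1.0) / lam if lam != 0 else np.log(theta)
    return stats.norm.pdf(t, mu, sigma) * theta**(lam - 1.0)

# Sanity check: the reconstructed density integrates to ~1
grid = np.linspace(1e-3, 10.0, 20_000)
norm_check = np.sum(posterior_pdf(grid)) * (grid[1] - grid[0])
print(lam, norm_check)
```

Here three floats replace the full chain, and the reconstructed density can be evaluated, plotted, or multiplied with another experiment's posterior at will.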
We have demonstrated this algorithm with our implementation in C (code available on request), using Markov Chain Monte Carlo (MCMC) samples from Planck data. We used Box–Cox and Arcsinh–Box–Cox (ABC) transformations to Gaussianise various marginal distributions with distinctive non-Gaussian features, and showed the resulting contours.

One distinctive application of Gaussianising transformations, which we discuss and demonstrate here, is a novel method to compute the model evidence of a posterior distribution, given a point sample from it. We have tested this method on cosmological data from lensing and baryon acoustic oscillations, for different cosmological models, and find a slight preference for $\Lambda$CDM. Compared to the numerical results from Population Monte Carlo (PMC) and the Savage–Dickey density ratio (SDDR), our new method of computing the evidence agrees well within the spread of the other two.

We have introduced the cross-contour (CC) plot as a tool to decide whether one probability density reproduces the contours of another or, if not, to detect where they deviate.
There are several possible extensions of our method, and directions to advance its scope:

It is possible to employ new families of transformations, designed to cure a wider spectrum of non-Gaussian features that a multivariate probability density may possess – in our implementation, new families can easily be included.

Gaussianisation may be employed for fast sampling from a non-Gaussian probability density, provided the Gaussianising parameters are known either exactly or to sufficient accuracy. One can then quickly draw a point sample from a multivariate Gaussian distribution, and transform it with the inverse map.

To improve the accuracy of the evidence computation, the log-likelihood of Eq. (5) could be replaced with another loss function that penalises deviations from Gaussianity more sharply.

So far, we have been working with unimodal probability densities. Since we require the transformations to be bijective, we cannot map a multimodal distribution onto a unimodal Gaussian. However, we may be able to transform such a density into a mixture of (possibly overlapping) Gaussians, where we then have to estimate the weight factor of each constituent from the transformed sample, in addition to each component's mean and covariance. The requisite number of components could be determined with standard clustering algorithms.
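The fast-sampling extension mentioned above can be sketched in one dimension, with hypothetical transformation parameters:

```python
import numpy as np

# Sketch of fast sampling via the inverse map (illustrative values, not
# a fit to any data): once the Box-Cox parameter lam and the Gaussian
# moments (mu, sigma) are known, draw Gaussian deviates and push them
# through the inverse Box-Cox transformation.
rng = np.random.default_rng(2)
lam, mu, sigma = 0.0, 0.0, 0.5   # lam = 0 corresponds to the log map

def inverse_boxcox(t, lam):
    # Inverse of t = (theta**lam - 1)/lam; the lam -> 0 limit is exp(t).
    return np.exp(t) if lam == 0 else (lam * t + 1.0)**(1.0 / lam)

gauss = rng.normal(mu, sigma, size=100_000)
sample = inverse_boxcox(gauss, lam)

# For lam = 0 this reproduces a lognormal sample with median exp(mu)
print(np.median(sample))
```

Drawing from a multivariate Gaussian and mapping each dimension back is far cheaper than running a fresh MCMC chain against the original likelihood.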
Acknowledgements
RLS is grateful to Martin Kilbinger, Karim Benabed, Catherine Heymans and Stephen Feeney for helpful discussions, and has been supported by the Perren Fund in Astronomy, and by IMPACT (UCL MAPS faculty); BJ acknowledges support by an STFC Ernest Rutherford Fellowship, grant reference ST/J004421/1; HVP was supported by the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no. 306478-CosmicDawn. This work is based on observations obtained with Planck (http://www.esa.int/Planck), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada. In part, this work was supported by National Science Foundation Grant No. PHY-1066293 and the hospitality of the Aspen Center for Physics.
References
 Anderson et al. (2012) Anderson L. et al., 2012, MNRAS, 427, 4, 3435–3467 (arXiv:1203.6594)
 Benjamin et al. (2013) Benjamin J. et al., 2013, MNRAS, 431, 2, 1547–1564 (arXiv:1212.3327)
 Box & Cox (1964) Box G.E.P., Cox D.R., 1964, J. R. Statist. Soc. B, 26/2, 211–252
 Černý (1985) Černý V., 1985, J. Optimization Theory and Applications, 45/1, 41–51
 Chu, Kaplinghat & Knox (2003) Chu M., Kaplinghat M., Knox L., 2003, ApJ, 596, 725 (arXiv:astro-ph/021466v2)
 Dickey (1971) Dickey J., 1971, Ann. Math. Statist., 42, 204–223
 Erben et al. (2013) Erben T. et al., 2013, MNRAS, 433, 3, 2545–2563 (arXiv:1210.8156)
 Fendt & Wandelt (2007) Fendt W.A., Wandelt B.D., 2007, ApJ, 654, 2 (arXiv:astro-ph/0606709)
 Heymans et al. (2012) Heymans C. et al., 2012, MNRAS, 427, 146–166 (arXiv:1210.0032)
 Hildebrandt et al. (2012) Hildebrandt H. et al., 2012, MNRAS, 421, 3, 2355–2367 (arXiv:1111.4434)
 Jaynes (2003) Jaynes E.T., 2003, Probability Theory: The Logic of Science, Cambridge University Press, Cambridge
 Jimenez, Verde & Peiris (2004) Jimenez R., Verde L., Peiris H.V., 2004, Phys. Rev. D, 70, 023005 (arXiv:astro-ph/0404237)
 Joachimi & Taylor (2011) Joachimi B., Taylor A., 2011, MNRAS, 416, 2, 1010–1022 (arXiv:1103.3370)
 Kahn & Marshall (1953) Kahn H., Marshall A.W., 1953, J. Operations Res. Soc. Am., 1, 5, 263–278
 Kass & Raftery (1995) Kass R., Raftery A.E., 1995, JASA, 90, 430, 779–795
 Kilbinger et al. (2010) Kilbinger M. et al., 2010, MNRAS, 405, 4, 2381–2390 (arXiv:0912.1614)
 Kilbinger et al. (2013) Kilbinger M. et al., 2013, MNRAS, 430, 3, 2200–2220 (arXiv:1212.3338)
 Taylor & Kitching (2010) Kitching T., Taylor A., 2010, MNRAS, 408, 2, 865–875 (arXiv:1003.1136)
 Kosowsky, Milosavljevic & Jimenez (2002) Kosowsky A., Milosavljevic M., Jimenez R., 2002, Phys. Rev. D, 66, 063007 (arXiv:astro-ph/0206014)
 Lewis & Bridle (2002) Lewis A., Bridle S., 2002, Phys. Rev. D, 66, 103511 (arXiv:astro-ph/0205436)
 MacKay (2003) MacKay D.J.C., 2003, Information Theory, Inference, and Learning Algorithms, Cambridge University Press, Cambridge
 Miller et al. (2013) Miller L. et al., 2013, MNRAS, 429, 4, 2858–2880 (arXiv:1210.8201)
 Nelder & Mead (1965) Nelder J.A., Mead R., 1965, Comput. J., 7, 308–313
 Percival et al. (2007) Percival W.J. et al., 2007, MNRAS, 381, 3, 1053–1066 (arXiv:0705.3323)
 Powell (2007) Powell M.J.D., 2007, IMA J. Numer. Analysis, 28, 4, 649–664
 Planck XVI (2014) Planck Collaboration (Ade P.A.R. et al.), 2014, A&A, 571, A16 (arXiv:1311.1657)
 Planck XIII (2015) Planck Collaboration (Ade P.A.R. et al.), 2015, A&A, in press (arXiv:1502.01589)
 Sandvik et al. (2004) Sandvik H., Tegmark M., Wang X., Zaldarriaga M., 2004, Phys. Rev. D, 69, 063005 (arXiv:astro-ph/0311544)
 Silverman (1998) Silverman B.W., 1998, Density Estimation for Statistics and Data Analysis, Chapman & Hall/CRC, London
 Skilling (2006) Skilling J., 2006, Bayesian Analysis, 1, 4, 833–860
 Velilla (1993) Velilla S., 1993, Statistics & Probability Letters, 17, 259
 Verde et al. (2013) Verde L., Feeney S.M., Mortlock D.J., Peiris H.V., 2013, JCAP09(2013)013 (arXiv:1307.2904)
Appendix A Analytic evidence computation
We outline how the computation of the log-evidence (see Eq. 13) can be performed analytically, i.e. without numerical optimisation. Our data consist of a Gaussianised weighted sample of points $\vec x_i \in \mathbb{R}^D$, with weights $w_i$, and the values of the transformed log-posterior on each of these points, $\ln\mathcal{P}_i$. To fit a multivariate unnormalised Gaussian

$G(\vec x) = f\,\exp\!\left[-\tfrac{1}{2}\,(\vec x - \vec\mu)^{\rm T}\,\mathsf\Sigma^{-1}\,(\vec x - \vec\mu)\right]$  (15)

through the values of $\ln\mathcal{P}_i$, we use the regression model

$\ln\mathcal{P}(\vec x_i) = -\tfrac{1}{2}\,\vec x_i^{\rm T}\,\mathsf A\,\vec x_i + \vec b^{\rm T}\vec x_i + c\,,$  (16)

which is linear in each of the regression parameters: the upper-diagonal components of the symmetric matrix $\mathsf A$, the components of the vector $\vec b$, and the scalar $c$. Assuming independence and homoscedasticity, we arrive at our quantity to minimise,

$\chi^2 = \sum_i w_i \left[\ln\mathcal{P}_i + \tfrac{1}{2}\,\vec x_i^{\rm T}\,\mathsf A\,\vec x_i - \vec b^{\rm T}\vec x_i - c\right]^2,$  (17)

which is quadratic in every regression parameter. Thus, we can write all normal equations of the regression problem,

$\frac{\partial \chi^2}{\partial \lambda_\alpha} = 0 \quad \text{for every regression parameter } \lambda_\alpha\,,$  (18)

as a $[D(D+1)/2 + D + 1]$-dimensional linear inhomogeneous vector equation, and solve it via singular value decomposition. From the resulting values of $\mathsf A$, $\vec b$, and $c$, the parameters of the multivariate Gaussian (Eq. 15) can readily be computed as

$\mathsf\Sigma = \mathsf A^{-1}\,,$  (19)
$\vec\mu = \mathsf A^{-1}\,\vec b\,,$  (20)
$\ln f = c + \tfrac{1}{2}\,\vec b^{\rm T}\mathsf A^{-1}\vec b\,.$  (21)
Furthermore, we can use the analytic regression procedure to find error bars on these estimators, and thus on $\ln E$. To this end, we can analytically find the covariance matrix $\mathsf C$ of all regression parameters $\vec\lambda = (\mathsf A, \vec b, c)$ from the form of $\chi^2$ and the transformed data set $\{\vec x_i, w_i\}$, and then approximate the variances of $\ln f$ and of $\ln E$ by standard Gaussian error propagation. In particular:

$\sigma^2(\ln f) \approx \vec g_{\ln f}^{\rm T}\,\mathsf C\,\vec g_{\ln f}\,,$  (22)
$\sigma^2(\ln E) \approx \vec g_{\ln E}^{\rm T}\,\mathsf C\,\vec g_{\ln E}\,,$  (23)

with the gradients

$\vec g_q = \partial q / \partial\vec\lambda\,,$  (24)

and

$\mathsf C = \mathsf H^{-1}\,,$  (25)

where $H_{\alpha\beta} = \tfrac{1}{2}\,\partial^2\chi^2 / \partial\lambda_\alpha\,\partial\lambda_\beta$.
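A minimal numerical sketch of this regression (in Python with synthetic, noise-free log-posterior values; the paper's implementation is in C): the normal equations are solved by least squares, and the log-evidence of the fitted unnormalised Gaussian then follows from Eqs. (19)–(21) as $\ln E = c + \frac{1}{2}\vec b^{\rm T}\mathsf A^{-1}\vec b + \frac{D}{2}\ln 2\pi - \frac{1}{2}\ln\det\mathsf A$:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 2

# Ground truth (hypothetical): an unnormalised Gaussian log-posterior
A_true = np.array([[2.0, 0.3], [0.3, 1.0]])
b_true = np.array([0.5, -0.2])
c_true = 4.0

x = rng.normal(size=(5000, D))
logP = -0.5 * np.einsum('ni,ij,nj->n', x, A_true, x) + x @ b_true + c_true

# Design matrix: upper-triangle quadratic terms, linear terms, constant
iu = np.triu_indices(D)
quad = np.stack([x[:, i] * x[:, j] for i, j in zip(*iu)], axis=1)
M = np.hstack([quad, x, np.ones((len(x), 1))])
coef, *_ = np.linalg.lstsq(M, logP, rcond=None)

# Unpack A, b, c (off-diagonal x_i*x_j terms carry the factor 2 of A_ij)
A = np.zeros((D, D))
for k, (i, j) in enumerate(zip(*iu)):
    if i == j:
        A[i, i] = -2.0 * coef[k]
    else:
        A[i, j] = A[j, i] = -coef[k]
b = coef[len(iu[0]):-1]
c = coef[-1]

# Log-evidence of the fitted unnormalised Gaussian
lnE = c + 0.5 * b @ np.linalg.solve(A, b) + 0.5 * D * np.log(2 * np.pi) \
      - 0.5 * np.linalg.slogdet(A)[1]
print(lnE)
```

Since the synthetic log-posterior is exactly quadratic, the fit recovers $\mathsf A$, $\vec b$, and $c$ to machine precision; on a real Gaussianised sample the residuals of the fit feed the error propagation of Eqs. (22)–(25).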
Appendix B Unboxing Transformations
A single MP $\theta$, which is assumed to be constrained to an open interval $(a, b)$, is redefined via the unboxing transformation

$u(\theta) = \frac{a+b}{2} + \frac{b-a}{\sqrt{2\pi}}\;\Phi^{-1}\!\left(\frac{\theta - a}{b - a}\right),$  (26)

where $\Phi^{-1}$ denotes the inverse of the cumulative distribution function of the standard normal distribution,

$\Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} {\rm e}^{-t^2/2}\,{\rm d}t\,.$  (27)
The map $u$, thus designed, has the following properties: it is bijective and smooth; the limits are $\lim_{\theta\to a^+} u(\theta) = -\infty$ and $\lim_{\theta\to b^-} u(\theta) = +\infty$. Further, the midpoint of the interval is fixed, with unit slope: $u\big(\tfrac{a+b}{2}\big) = \tfrac{a+b}{2}$, $u'\big(\tfrac{a+b}{2}\big) = 1$. Sending the interval boundaries to infinity simultaneously results in the identity transformation,

$\lim_{a\to -\infty,\; b\to +\infty} u(\theta) = \theta\,.$  (28)
This is a generalisation of the widely used probit transformation, which maps the unit interval onto the real numbers as $\theta \mapsto \Phi^{-1}(\theta)$. Our modified probit has one major advantage for the subsequent search for a Gaussianising transformation: if $\theta$, as a random variable, is uniformly distributed on $(a, b)$, then $u(\theta)$ is normally distributed with mean $(a+b)/2$ and spread $(b-a)/\sqrt{2\pi}$.
In statistics, a frequently used alternative to the probit is the logit map $\theta \mapsto \ln[\theta/(1-\theta)]$. For our purposes, however, the probit is preferable, since a similarly rescaled version of the logit transformation would yield a distribution with excess kurtosis, instead of a Gaussian.
For a $D$-dimensional vector of MPs $\vec\theta$, constrained to the intervals $(a_i, b_i)$, we unbox each dimension separately, with the appropriate boundaries:

$U(\vec\theta) = \big(u_1(\theta_1), \ldots, u_D(\theta_D)\big)\,.$  (29)
Before starting the search for the Gaussianisation parameters, every point in the original sample is mapped through this transformation.
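The unboxing map can be sketched as follows (the exact rescaling shown, fixing the midpoint with unit slope, follows the construction above):

```python
import numpy as np
from scipy import stats

def unbox(theta, a, b):
    """Modified probit of Eq. (26): maps the open interval (a, b) onto
    the real line, fixing the midpoint and having unit slope there."""
    mid = 0.5 * (a + b)
    scale = (b - a) / np.sqrt(2.0 * np.pi)
    return mid + scale * stats.norm.ppf((theta - a) / (b - a))

a, b = -1.0, 3.0
rng = np.random.default_rng(4)

# A uniform sample on (a, b) becomes Gaussian with mean (a+b)/2
# and spread (b-a)/sqrt(2*pi) after unboxing.
theta = rng.uniform(a, b, size=200_000)
u = unbox(theta, a, b)

print(unbox(0.5 * (a + b), a, b))   # midpoint is fixed
print(u.mean(), u.std())
```

Starting the Gaussianisation from such an unboxed sample means a flat prior direction is already exactly Gaussian, which is the motivation stated above.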