Gaussianisation for fast and accurate inference from cosmological data
We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box–Cox transformations and generalisations thereof. This permits an analytical reconstruction of the posterior from a point sample, such as a Markov chain, and simplifies the subsequent joint analysis with other experiments. In this way, a multivariate posterior density can be reported efficiently, by compressing the information contained in MCMC samples. Further, the model evidence integral (i.e. the marginal likelihood) can be computed analytically. The method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianising transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this respect. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianise the joint distribution of data from weak lensing and baryon acoustic oscillations (BAO), for different cosmological models, and find a preference for flat ΛCDM. Comparing to values computed with the Savage–Dickey density ratio, and with Population Monte Carlo, we find good agreement of our method within the spread of the other two.
Keywords: methods: data analysis, methods: numerical, methods: statistical, cosmology: observations
According to the Bayesian paradigm, inference on any data set will yield a posterior probability distribution on the space of model parameters. This density function represents, in its entirety, the full knowledge gained in the attempt to infer the underlying parameters. Such distributions often depart significantly from a Gaussian form. This led to the widespread use of Monte Carlo sampling methods to report the typically non-Gaussian posterior constraints obtained from experiments, such as Planck (see Planck XVI 2014; Planck XIII 2015; the Markov chains are available at http://www.cosmos.esa.int/web/planck/pla, and http://lambda.gsfc.nasa.gov provides an eclectic list of data combinations in various cosmological models). Reconstructing the posterior density from such a Markov Chain Monte Carlo (MCMC) sample, e.g. to visualise the multivariate parameter constraints, or to combine the constraints from multiple data sets, can be nontrivial due to the large sample size necessary to appropriately map the distribution; in addition, the contours often need further smoothing for stylistic reasons.
Instead, we propose to redefine the underlying model parameters, so that the new posterior density approximately takes a Gaussian shape after the transformation from old to new parameters; this presupposes that we begin with a uni-modal posterior density. Such a transformation would allow for enormous data compression: instead of a full MCMC sample from the posterior distribution, we only need to report the Gaussianising transformation, and the first and second moments of the resulting Gaussian distribution. From these alone, we can reconstruct an analytic expression for the full non-Gaussian posterior density, and subsequently combine it with other data sets.
Further, it becomes possible to display and compare non-ellipsoidally-shaped contours of non-Gaussian parameter constraints – whether joint or marginalised – without any smoothing. Thus, this method allows for summarising posterior densities in a versatile and efficient way, which faithfully reproduces the information contained in the full probability density.
The idea of transforming a function to a Gaussian shape is, in principle, not limited to reproducing probability densities. As the integral over a Gaussian can be performed analytically, this opens up a strategy to feasibly compute high-dimensional integrals, such as the model evidence (i.e. the marginal likelihood).
The transformed model parameters are analogous to the normal parameters of the cosmic microwave background (CMB): it has been highly advantageous for rapid likelihood calculation (such as CMBfit, CMBwarp, and PICO, see Kosowsky, Milosavljevic & Jimenez 2002; Chu, Kaplinghat & Knox 2003; Sandvik et al. 2004; Jimenez, Verde & Peiris 2004; Fendt & Wandelt 2007), to redefine the cosmological model parameters such that the model is approximately linear in these newly defined normal parameters. Thus the likelihood approximately takes the form of a multivariate Gaussian density. For most observables, we would be at a loss to search for a linearising redefinition of the model parameter space directly motivated by the structure of the model itself. Instead, is it possible to computationally find suitable parameters, i.e. a suitable bijective transformation which approximately Gaussianises the posterior in question?
Extending the work of Joachimi & Taylor (2011), we present an algorithm to find and test such a non-linear Gaussianising transformation from a Markov chain sampling the posterior distribution of the original parameters. In principle, this distribution could stem from any experiment or data type. In Sect. 2, we describe the details of the algorithm, verification of the reconstructed posterior distribution, and the specific transformations employed. Following an illustration of these on a toy example in Sect. 3, Sect. 4 demonstrates the performance of our implementation, using Markov chains from the Planck satellite constraints on cosmological models (Planck XVI, 2014; Planck XIII, 2015). In Sect. 5, we present an efficient way to calculate the model evidence, a quantity needed to judge the predictivity of different model parameter spaces, via Gaussianising transformations. Section 6 offers conclusions and suggests future directions.
To find the right multivariate transformation, we will at first adopt the strategy of redefining each model parameter separately, i.e. the first new model parameter will only depend on the first old model parameter, etc. In Sect. 2.4, we will drop this assumption and consider transformations which can correlate the model parameters.
The set of all multivariate Gaussianisation transformations, from which we are to pick the optimal one, will be constructed in the following way: assume a family of bijective real-valued functions $f(\,\cdot\,;\boldsymbol{\lambda})$, indexed by $n$ real transformation parameters $\boldsymbol{\lambda} = (\lambda_1, \dots, \lambda_n)$. Given the $d$-dimensional vector of model parameters $\boldsymbol{x} = (x_1, \dots, x_d)$, we transform to the new (Gaussian-distributed) parameters $\boldsymbol{y}$ via
\[ y_i = f\bigl(x_i; \boldsymbol{\lambda}^{(i)}\bigr), \qquad i = 1, \dots, d, \quad (1) \]
where the full multivariate transformation is now specified by all transformation parameter $n$-tuples $\boldsymbol{\lambda}^{(1)}, \dots, \boldsymbol{\lambda}^{(d)}$, i.e. one for each model parameter. To avoid confusion, we shall from now on distinguish between model parameters (MP), which the posterior probability density depends on, and transformation parameters (TP), which specify one Gaussianising transformation. The algorithm can be applied to arbitrary parametrised transformation families, suitable for various forms of non-Gaussianity – in principle, we could even choose different transformations for each model parameter, instead of using the same shape for all of them.
Assuming such a bijective transformation $f$, we immediately have an analytic form for the posterior density
\[ p(\boldsymbol{x}) = \mathcal{G}\bigl(f(\boldsymbol{x}); \boldsymbol{\mu}, \mathsf{\Sigma}\bigr) \prod_{i=1}^{d} \left| \frac{\partial f}{\partial x_i}\bigl(x_i; \boldsymbol{\lambda}^{(i)}\bigr) \right|, \quad (2) \]
where $\mathcal{G}(\,\cdot\,; \boldsymbol{\mu}, \mathsf{\Sigma})$ denotes the multivariate Gaussian density with mean $\boldsymbol{\mu}$ and covariance matrix $\mathsf{\Sigma}$.
One still needs to find the mean vector $\boldsymbol{\mu}$ and the covariance matrix $\mathsf{\Sigma}$ of the transformed posterior density. These are estimated from the transformed sample (see Sect. 2.1).
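In one dimension, the reconstruction in Eq. (2) is just the standard change-of-variables formula for densities. The following minimal sketch (not the paper's code; the function name and signature are ours) evaluates such a reconstructed 1D posterior given the Gaussianising map and its derivative:

```python
import numpy as np

def analytic_posterior_1d(x, f, dfdx, mu, var):
    """Evaluate the reconstructed posterior p(x) = G(f(x); mu, var) * |f'(x)|,
    the one-dimensional version of Eq. (2), on an array of MP values x."""
    y = f(x)
    gauss = np.exp(-0.5 * (y - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
    return gauss * np.abs(dfdx(x))
```

With `f = np.log` (the logarithmic limit of the Box–Cox map, up to an irrelevant shift) and a standard-normal Gaussian, this reproduces the log-normal density exactly.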
2.1 Finding the optimal transformation
Given a weighted point sample $X = \{\boldsymbol{x}^{(k)}\}_{k=1,\dots,N}$, containing $N$ points in $\mathbb{R}^d$ and probability weights $w_k$, which has been sampled from the posterior distribution in question, we wish to quantify the Gaussianisation properties of different transformations applied to this sample. To this end, we follow Box & Cox (1964) (see also Velilla, 1993 and Joachimi & Taylor, 2011) in maximising the profile likelihood over TP space, i.e. depending only on the real transformation parameters contained in the tuples $\boldsymbol{\lambda}^{(i)}$. This likelihood is a function of the transformation parameters, quantifying how well each transformation Gaussianises the distribution of the data set $X$; however, it does not pertain to the posterior density in Eq. (2), which is a function of the model parameters.
For the Gaussian parameters $\boldsymbol{\mu}$, $\mathsf{\Sigma}$ in Eq. (2), we insert their standard debiased weighted maximum-likelihood estimators, which depend on the transformed sample:
\[ \hat{\boldsymbol{\mu}} = \sum_k \tilde{w}_k\, \boldsymbol{y}^{(k)}, \qquad \hat{\mathsf{\Sigma}} = \frac{1}{1 - \sum_k \tilde{w}_k^2} \sum_k \tilde{w}_k \bigl(\boldsymbol{y}^{(k)} - \hat{\boldsymbol{\mu}}\bigr) \bigl(\boldsymbol{y}^{(k)} - \hat{\boldsymbol{\mu}}\bigr)^{\rm T}, \quad (3,\,4) \]
with $\boldsymbol{y}^{(k)} = f(\boldsymbol{x}^{(k)})$ and normalised weights $\tilde{w}_k = w_k / \sum_l w_l$. These estimators depend on the TPs indirectly, as they are computed after $X$ has been transformed with $f$. We arrive at the profile weighted log-likelihood
\[ \log L(\boldsymbol{\lambda}) = -\tfrac{1}{2} \log \det \hat{\mathsf{\Sigma}} + \sum_k \tilde{w}_k \sum_{i=1}^{d} \log \left| \frac{\partial f}{\partial x_i}\bigl(x_i^{(k)}; \boldsymbol{\lambda}^{(i)}\bigr) \right|, \quad (5) \]
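The weighted estimators and the Gaussian part of the profile log-likelihood can be sketched as follows. This is an illustrative implementation under the conventions above, not the paper's code; the debiasing factor is the standard one for probability weights:

```python
import numpy as np

def weighted_gauss_estimators(y, w):
    """Debiased weighted estimators for mean and covariance of a
    weighted sample y (N x d) with probability weights w (N,)."""
    w = w / w.sum()                          # normalised weights
    mu = np.einsum('k,kd->d', w, y)          # weighted mean
    r = y - mu
    cov = np.einsum('k,ki,kj->ij', w, r, r)
    cov /= (1.0 - np.sum(w ** 2))            # debiasing for probability weights
    return mu, cov

def profile_loglike(y, w, log_jacobians):
    """Profile log-likelihood of a transformed sample: Gaussian term plus
    the weighted per-point log-Jacobians of the transformation
    (TP-independent constants dropped)."""
    _, cov = weighted_gauss_estimators(y, w)
    wn = w / w.sum()
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * logdet + np.dot(wn, log_jacobians)
```

For unit weights the debiasing factor reduces to the familiar $N/(N-1)$, so the estimators agree with the usual unbiased sample moments.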
where several terms independent of the TPs have been discarded. In general, both the covariance matrix of the transformed sample and the Jacobian term will depend on the transformation parameters in a non-linear way; hence finding the maximum-likelihood values for the TPs requires numerical optimisation. For this purpose, we have employed the GSL implementation of the well-known Nelder–Mead simplex algorithm (Nelder & Mead 1965).
As already noted by Joachimi & Taylor (2011), log-likelihood degeneracies in TP space are common. These may jeopardise the numerical stability of the optimisation. There are generic cases where a moderately large value for one transformation parameter may already result in unmanageably large numerical values for the transformed sample, such as the power transformation with a large power TP. Generically, the optimisation algorithm tends to slide into these TP space regions quite easily. Hence, we include a penalty term of the form
\[ P(\boldsymbol{\lambda}) = a \sum_i \bigl| \lambda_i - \lambda_i^{\rm id} \bigr|^{b}, \quad (6) \]
where $\lambda_i^{\rm id}$ are the parameter values corresponding to the identity transformation. We minimise the penalised function $-\log L(\boldsymbol{\lambda}) + P(\boldsymbol{\lambda})$ over TP space. Moderately chosen values of the penalty constants have proven to be highly stabilising, and at the same time do not distort the shape of the resulting analytic posterior distribution.
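A minimal sketch of the penalised optimisation loop, using SciPy's Nelder–Mead in place of the GSL implementation mentioned above; the penalty constants, the function names, and the objective's signature are illustrative placeholders:

```python
import numpy as np
from scipy.optimize import minimize

def penalty(tp, tp_identity, strength=1.0, power=2.0):
    # Penalty around the identity transformation; 'strength' and 'power'
    # stand in for the stabilising constants quoted in the text.
    return strength * np.sum(np.abs(tp - tp_identity) ** power)

def gaussianise(sample, weights, neg_profile_loglike, tp0, tp_identity):
    """Minimise -log L(tp) + penalty(tp) over TP space with Nelder-Mead.
    neg_profile_loglike(tp, sample, weights) must return a scalar."""
    obj = lambda tp: neg_profile_loglike(tp, sample, weights) \
                     + penalty(tp, tp_identity)
    res = minimize(obj, tp0, method='Nelder-Mead',
                   options={'xatol': 1e-8, 'fatol': 1e-8, 'maxiter': 20000})
    return res.x
```

In practice, as described in Sect. 3, one would start several such runs from randomised initial conditions and keep the best optimum.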
In this work, we employ the Nelder–Mead algorithm just for illustrating the method – faster and more reliable algorithms to find the global minimum of the likelihood function exist (such as Bound Optimisation BY Quadratic Approximation – BOBYQA, see Powell, 2007) and can readily be applied here.
2.2 Box–Cox transformations and their kin
The Box–Cox transformation (Box & Cox, 1964) is a generalisation of the power map. This transformation family is widely used in statistics and econometrics, e.g. to make data approximately homoscedastic and normal. Our usage is different in that we use it to alter the distribution of model parameters, rather than the distribution of data. Including a shift parameter $a$, the one-dimensional version is defined as
\[ f(x; \lambda, a) = \begin{cases} \dfrac{(x + a)^{\lambda} - 1}{\lambda}, & \lambda \neq 0, \\[1ex] \log(x + a), & \lambda = 0, \end{cases} \quad (7) \]
for a single MP $x$, i.e. $d = 1$. Note that the family is continuous at $\lambda = 0$ and that the mapping requires $x + a > 0$. Typically, an MP with a skewed distribution can be transformed to an MP with a symmetric, Gaussian distribution upon the appropriate choice of the power TP $\lambda$; e.g., a log-normal distribution can be analytically transformed to a Gaussian with $\lambda = 0$. The identity transformation corresponds to $\lambda = 1$ and $a = 0$, up to an irrelevant constant shift. Inserting this transformation family into Eq. (2), we recover the formula given in Joachimi & Taylor (2011).
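As a quick numerical check of the log-normal case, one can run SciPy's maximum-likelihood Box–Cox routine (which fits the power TP with zero shift) on a strongly skewed log-normal sample; this sketch is not part of the paper's pipeline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=0.5, size=20000)  # skewed MP sample, x > 0

# Maximum-likelihood Box-Cox fit (shift a = 0, since x > 0 already);
# for log-normal input the optimal power TP should be close to 0,
# i.e. the logarithmic branch of the transformation.
y, lam = stats.boxcox(x)

skew_before = stats.skew(x)   # strongly positive for log-normal data
skew_after = stats.skew(y)    # approximately zero after Gaussianisation
```

The recovered `lam` near zero confirms that the logarithm is the exact Gaussianising member of the family for log-normal input.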
As an extension of the Box–Cox family, we propose the Arcsinh–Box–Cox transformation (‘ABC transformation’ hereafter), obtained by applying the Box–Cox map to an arcsinh-compressed variable:
\[ f(x; \lambda, a, t) = \frac{\bigl[ \tfrac{1}{t}\, {\rm arcsinh}\bigl( t\, (x + a) \bigr) \bigr]^{\lambda} - 1}{\lambda}, \quad (8) \]
which reduces to the Box–Cox transformation in the limit $t \to 0$.
The inclusion of the TP $t$ will prove particularly useful to remove residual kurtosis from an MP distribution. The identity transformation reads $\lambda = 1$, $a = 0$, $t \to 0$.
The Box–Cox family does not form a group, because two subsequent transformations cannot be expressed as another Box–Cox transformation; the same holds for the ABC family. This will be of importance for Sect. 2.4.
Box–Cox transformations demonstrate that the domain of the function – in particular its dependence on the shift parameter – requires special attention: for given $a$, the map is defined only for $x > -a$; the same holds for ABC transformations. Thus, the optimisation procedure for the sample requires that $a_i$, the shift parameter for the $i$-th model parameter, is bounded from below, i.e. $a_i > -\min_k x_i^{(k)}$. Conversely, this means that, once the optimal transformation parameters are found and inserted into the analytic expression for the original posterior density, Eq. (2), it is not defined for every possible value of the MP $x_i$, but only for $x_i > -a_i$. This also means that the normalisation needs to be adjusted, which can be done analytically. However, if the sample is large enough that the tails of the distributions are properly represented, this truncation of the domain is not problematic.
2.3 Verifying the optimal transformation
Once the optimal transformation within its family is found, how do we judge the effectiveness of the resulting Gaussianisation? We adopt the following pragmatic standpoint: if the analytic posterior manages to reproduce the one-dimensional and two-dimensional marginalised contours of the sample, it is deemed acceptable. To this end, we propose a test via a cross-contour (CC) plot. The idea is to characterise a probability density by the location of its contours – the surfaces of constant density – and the probability mass stored inside, i.e. the integral of the density over the interior of a contour. If two densities $p_1$ and $p_2$ are identical, then they will store the same mass in any region of the parameter space; if they are different, we expect to find different probabilities for the same regions (e.g. the regions bounded by contours of $p_1$). Thus, looking at the family of contour-bounded regions of $p_1$, we can ask: does the probability for these, assigned via $p_1$, agree with the probability for them assigned via $p_2$?
To formalise this, consider the following: given a probability density $p$ in $d$ dimensions, which takes function values between 0 and $p_{\max}$, we define the contour-bounded region assigned to the density value $h$ as
\[ C_p(h) = \bigl\{ \boldsymbol{x} \in \mathbb{R}^d : p(\boldsymbol{x}) \geq h \bigr\}. \]
The probability mass enclosed in any of these is
\[ M_p(h) = \int_{C_p(h)} p(\boldsymbol{x})\, {\rm d}^d x. \]
Now, assuming we have two probability densities $p_1$ and $p_2$ in $d$ dimensions, do the contours of $p_1$ reproduce those of $p_2$? They do in the relevant sense if, for every $h$ between 0 and the maximum of $p_1$, the $p_2$-mass enclosed in the $h$-contour of $p_1$ equals the $p_1$-mass in this contour, i.e.
\[ \int_{C_{p_1}(h)} p_2(\boldsymbol{x})\, {\rm d}^d x = \int_{C_{p_1}(h)} p_1(\boldsymbol{x})\, {\rm d}^d x \qquad \text{for all } h. \quad (9) \]
It should be noted that this alone is not a sufficient condition for $p_1 = p_2$, but the counterexamples, which can be constructed mathematically, are non-generic and can be neglected for our purposes.
To detect deviations of the contours of $p_1$ and $p_2$, we could simply plot the left and the right side of Eq. (9) for a grid of $h$-values between 0 and the maximum of $p_1$, and plot the points with respect to the line of equality. For concrete problems, it is often more instructive to subtract the right side from the left side, and plot the excess (or deficit) probability mass of $p_2$ inside the contours of $p_1$. If, in this plot, the excess for every contour is consistent with zero, we have succeeded.
In our situation, we compare a point sample $X$ with a probability density function $p$ – the analytic posterior density as reconstructed via Gaussianisation. The right side of Eq. (9) is the probability mass in the region where the density is greater than or equal to $h$; the left side is the fraction of the point sample which lies in the same region. So, for every value $h$ in the range of $p$, we find the probability mass enclosed in $C_p(h)$ by gridding over a region containing the sample. Similarly, we count the number of points in $X$ where the value of $p$ is above $h$, to compute the fraction of points that lie within $C_p(h)$. This fraction is an estimator of the actual probability mass enclosed, because $X$ is a discrete sample from the actual posterior distribution. To find the variance of this estimator, we calculate the fraction on 2,000 bootstrap realisations of $X$, and determine the 95 per cent confidence intervals from these. If, for every $h$, the analytic posterior probability mass inside $C_p(h)$ is within this confidence interval for the sample point fraction within $C_p(h)$, we judge our reconstruction attempt to be successful.
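The sample-side half of this comparison – weighted point fractions inside contour-bounded regions, with bootstrap confidence intervals – might look as follows. This is an illustrative sketch with names of our own choosing; the analytic masses on the other side of Eq. (9) would come from gridding the reconstructed posterior, as described above:

```python
import numpy as np

def contour_fractions(p_at_points, weights, levels):
    """Weighted fraction of sample points falling inside each
    contour-bounded region C(h) = {x : p(x) >= h}."""
    w = weights / weights.sum()
    return np.array([w[p_at_points >= h].sum() for h in levels])

def bootstrap_ci(p_at_points, weights, levels, n_boot=2000, alpha=0.05, seed=1):
    """Bootstrap confidence intervals (default 95%) for the fractions."""
    rng = np.random.default_rng(seed)
    n = len(p_at_points)
    fracs = np.empty((n_boot, len(levels)))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample points with replacement
        fracs[b] = contour_fractions(p_at_points[idx], weights[idx], levels)
    return (np.quantile(fracs, alpha / 2, axis=0),
            np.quantile(fracs, 1 - alpha / 2, axis=0))
```

For a standard normal sample, the fraction inside the contour through $x = \pm 1$ should approach the familiar 68.3 per cent.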
It should be noted that poor MCMC sampling of the original target density will yield a poor representation of this density by our reconstructed density (2). Any information about the distribution lost by undersampling cannot be regained. However, as demonstrated in Sect. 3, our method reproduces less biased contours than other standard methods of density estimation even in regions of low point density, i.e. where any density estimate must be an extrapolation. Hence, it can be used when the length of the input Markov chains is restricted by computational cost or file size.
2.4 Multi-pass transformations
If even the optimal Gaussianising transformation amongst a given family does not bring the posterior density sufficiently close to a Gaussian shape (e.g., as determined via a CC plot), we have two options. We can provide a different family of transformations and redo the optimisation; or we can repeat the process on the sample after the first transformation. As already mentioned in Sect. 2.2, the transformation families employed in this work do not form groups. Hence, two subsequent transformations do not result in another transformation from that family, and transforming twice potentially provides a better Gaussianisation than transforming once. In principle, it is possible to apply multiple subsequent transformations, should the quality of the result necessitate it.
In this spirit, we have implemented the following two-pass transformation protocol:
1. Optimise the TPs of the first transformation.
2. Linear reshaping: centring, rescaling, rotating.
3. Optimise the TPs of the second transformation.
Strictly speaking, this transformation, whilst being bijective, no longer falls into the class as set up in Eq. (1), as different model parameters are mixed. Nonetheless, Eq. (2) for the analytic posterior density generalises in a straightforward way.
In the second step, the sample after the first Gaussianising transformation is subjected to the following maps (in this order): subtract the sample mean from every parameter, so that the sample is centred on the origin. Then, rescale every parameter such that the standard deviation is unity. Finally, rotate into the eigenbasis of the covariance matrix – this procedure is generally known as principal component analysis (PCA). These reshaping operations not only help to avoid numerical instabilities (centring, rescaling), but also open up new directions for Gaussianisation by presenting uncorrelated parameters to the second Gaussianising transformation, since the transformations defined in Eq. (1) cannot mix parameters. If two parameters have substantial covariance after step 1, it can be crucial to decorrelate them.
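The linear reshaping step can be sketched as follows; this is our own minimal implementation of centring, rescaling, and PCA rotation for a weighted sample, under the weight conventions used above:

```python
import numpy as np

def linear_reshape(y, w):
    """Centre a weighted sample (N x d), rescale each MP to unit standard
    deviation, then rotate into the eigenbasis of the covariance matrix
    (PCA), as in Step 2 of the two-pass protocol."""
    wn = w / w.sum()
    mu = wn @ y                              # weighted mean
    z = y - mu                               # centring
    z = z / np.sqrt(wn @ z ** 2)             # unit standard deviation per MP
    cov = (wn[:, None] * z).T @ z            # weighted covariance
    eigval, eigvec = np.linalg.eigh(cov)
    return z @ eigvec                        # decorrelated parameters
```

After the rotation, the sample covariance is diagonal, so the second Gaussianising pass sees uncorrelated parameters.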
Nevertheless, a price is to be paid for the Gaussianising power added with Step 2: it sacrifices a decisive property of the simple one-step transformation routine, namely that every transformed MP $y_i$ only depends on a single untransformed MP $x_i$. This property allows for easy marginalisation of the analytic posterior: we can marginalise the Gaussianised sample by dropping all coordinates we wish to marginalise out and determining the mean vector and covariance matrix of the remaining ones. Transforming this marginalised Gaussian density back will then yield the marginalised posterior density on the untransformed MPs. However, with linear reshaping included, this is no longer possible.
This may be problematic for some applications (such as visualisation of 1D or 2D marginal distributions, or creating a CC plot), but not for others – as long as we need only the marginal distribution of a single combination of parameters, we can marginalise by discarding all MP columns of the sample except the ones in question, prior to Gaussianising.
3 A toy example
We illustrate these ideas on a two-dimensional example. We draw a sample of 10,000 points from a bivariate Gaussian distribution, and map it through an inverse Box–Cox transformation with known input TP values (see Table 1). All weights are set to unity.
This mock data sample has the advantage that there is at least one Box–Cox transformation which precisely Gaussianises the underlying probability distribution. Figure 1 shows the original sample, and the one transformed with the one-pass Box–Cox transformation which was found to be optimally Gaussianising, i.e. maximising the log-likelihood in Eq. (5). As this is a comparatively simple problem, we have set the penalty term in Eq. (6) to zero. The Nelder–Mead algorithm was started sixteen times independently with randomised initial conditions. The values of the recovered optimal TPs are shown in Table 1; the standard deviation amongst these sixteen values is negligibly small, so multiple Nelder–Mead runs are not necessary in this low-dimensional example: all of them find the same maximum of the log-likelihood. In high-dimensional cases, however, this strategy can increase the robustness of the procedure.
The apparent difference between the parameters of the single inverse Box–Cox transformation and the values found for Box–Cox optimisation is due to degeneracies in parameter space. To illustrate these, we show in Fig. 2 the profile likelihood for one pair of TPs, where the remaining TPs are held fixed at their input values, and vice versa. The TPs found by the optimisation algorithm (black crosses – note that they are projected onto the plane for which the profile likelihood is shown) are degenerate with the input ones (red star). Both Box–Cox transformations map the distribution to sufficiently Gaussian form.
| Parameter | input value | recovered value |
We compare our method of reconstructing an analytic posterior density from an MCMC sample with the standard nonparametric method, kernel density estimation (KDE), which also aims to find a functional form for the probability density. The credible contours of the posterior density from Gaussianisation are shown jointly with those from KDE: the latter employ a Gaussian kernel, whose covariance matrix is estimated from the sample, with Silverman's rule (Silverman 1998) used to determine the bandwidth parameter. No additional smoothing has been applied in Fig. 3, top panel. The bottom panel shows the excess cross-contour probability masses between the analytic posterior and the sample, and between KDE and the sample, as detailed in Sect. 2.3. Whereas the Box–Cox posterior is consistent with the sample distribution for every single contour, the KDE contours show a strong bias – the contours are wider than they should be. Given that the precision of the contour reconstruction is, for the Box–Cox method, limited only by the finite size of the sample, it has the potential to perform better than the (biased) kernel density method. Additionally, for applications in which frequent calls of the posterior density are a bottleneck for computation speed, our method of density reconstruction can be advantageous: the additional initial cost of finding the transformation parameters can be outweighed by the subsequent evaluation speedup.
4 Performance results: Planck data
To demonstrate how the algorithm works on real data, we have employed MCMC samples from the first data release of the Planck mission (see Planck XVI, 2014). This satellite has measured the temperature and polarisation anisotropies in the CMB, whose power spectra are sensitive measures of the underlying cosmology. The Planck Collaboration has published several data products (see http://www.cosmos.esa.int/web/planck/pla), including MCMC samples from the posterior probability densities of various cosmological models, generated with CosmoMC (see Lewis & Bridle, 2002; also http://cosmologist.info/cosmomc).
The baseline cosmology is the standard model of a flat Universe with cold dark matter and a cosmological constant, commonly known as ΛCDM. It contains six parameters: $\Omega_{\rm b} h^2$ (today's baryon density), $\Omega_{\rm c} h^2$ (today's cold dark matter density), $\theta_{\rm MC}$ (scaled sound horizon), $\tau$ (reionisation optical depth), $n_{\rm s}$ (spectral index of primordial scalar perturbations), and $\ln(10^{10} A_{\rm s})$ (log power amplitude of primordial scalar perturbations). Several extensions of this baseline model are also listed, including those obtained by adding one of the following parameters: $\Omega_K$ (curvature parameter), $w$ (dark energy equation of state), $r$ (primordial tensor-to-scalar amplitude ratio), and $N_{\rm eff}$ (effective number of relativistic degrees of freedom). Further, these chains list derived quantities, e.g. today's Hubble parameter $H_0$ and the age of the Universe, as well as a variety of foreground modelling parameters, such as the amplitudes of Poisson point sources and of the cosmic infrared background in the 100, 143, and 217 GHz frequency bands. These are of particular interest to us, as the most prominent non-Gaussian features of the posterior densities can be seen in them.
The chains, as presented, are not decorrelated, so we thin them by using every 20th sample. We employ the ‘..._planck_lowl_...’ chains, which use only the temperature–temperature correlations. The plots in this section are created using the seven-parameter extended model; the sample contains 11,546 points after thinning.
All these Markov chains assume uniform proper prior densities (i.e. being supported on compact rectangular boxes) and list the log-likelihood for every point (for further details, see Planck XVI, 2014).
We show several 2D marginalised posterior samples exhibiting different non-Gaussian features, and how well they are reproduced by the analytic posterior (Eq. 2), such as triangular shapes (see Fig. 4), pronounced non-linear degeneracies (see Fig. 5), and sharp boundaries (‘walls’) arising from MP space boundaries (see Fig. 6).
Figure 5 demonstrates the usefulness of the intermediate PCA in between the ABC transformations: the first transformation has straightened out the curved shape of the maximum, but the distribution still appears skewed towards the upper left direction (see top right panel). This is remedied by reshaping, PCA and another ABC transformation (bottom right panel) – the second Gaussianising transformation having only little effect compared to the PCA.
The Gaussianisation of the distribution in Fig. 6 (top left) shows how two concatenated transformations can be more powerful than a single one. The once-transformed sample still exhibits negative excess kurtosis, which is removed by the second transformation (bottom left to bottom right panel).
Further, we compare the CC plots of this two-pass transformation and the one-pass transformation in Fig. 7, which also shows the resulting contours (left panel). The associated one-pass CC plot (bottom right) shows a significant deficit of point-sample mass compared to the analytic posterior mass over several ranges of posterior contour levels; part of this deficit is visible between the 2σ- and 3σ-contours, close to the wall-like constraint. By contrast, the CC plot for the two-pass transformation (Fig. 7, top right) demonstrates good agreement between the contours of the analytic posterior and the point sample.
To demonstrate the algorithm working on a high-dimensional example, we Gaussianise a seven-dimensional Planck MCMC sample with an ABC transformation. In order to visualise the result, we show all one-dimensional and two-dimensional marginal distributions of the point sample and the full analytic posterior density (see Fig. 8). We employ one-pass transformations, because, as discussed in Sect. 2.4, the marginalisation of the analytic posterior from 7D down to 2D or 1D would not be possible without explicit integration or sampling, had we chosen to use the two-pass protocol.
5 Application: Fast evidence computation
To decide which of two models $M_1$ and $M_2$, each with their associated MP space, is more predictive on a common set of data $D$, the essential quantity to compute is the model evidence $p(D|M_i)$; see Jaynes (2003); MacKay (2003); Kass & Raftery (1995); Skilling (2006). The ratio of the evidences (called the Bayes factor) is then used for updating the prior model odds to posterior model odds in the Bayesian sense:
\[ \frac{p(M_1|D)}{p(M_2|D)} = \frac{p(D|M_1)}{p(D|M_2)} \cdot \frac{p(M_1)}{p(M_2)}. \quad (10) \]
The evidence itself, for each model, can be computed via
\[ p(D|M_i) = \int {\rm d}^d\theta\; p(D|\boldsymbol{\theta}, M_i)\, p(\boldsymbol{\theta}|M_i), \quad (11) \]
i.e. via integration of the (unnormalised) posterior density over the respective parameter space of model $M_i$ – hence the term ‘marginal likelihood’ for the evidence. If the integrand takes a form with non-Gaussian features, and if the model parameter space is high-dimensional, this integral is often difficult to calculate.
However, with a bijective transformation that Gaussianises the posterior, we can compute the evidence integral analytically. If the transformed unnormalised posterior $q(\boldsymbol{y})$ has the shape of an (unnormalised) multivariate Gaussian
\[ q(\boldsymbol{y}) = E \exp\left[ -\tfrac{1}{2} (\boldsymbol{y} - \boldsymbol{\mu})^{\rm T} \mathsf{\Sigma}^{-1} (\boldsymbol{y} - \boldsymbol{\mu}) \right], \quad (12) \]
with mean $\boldsymbol{\mu}$, covariance matrix $\mathsf{\Sigma}$ and maximum $E$, the log-evidence reads
\[ \log p(D|M) = \log E + \tfrac{d}{2} \log(2\pi) + \tfrac{1}{2} \log \det \mathsf{\Sigma}. \quad (13) \]
Similar expressions for Gaussian posterior densities can be found in Taylor & Kitching (2010). To estimate the evidence, we need the absolute normalisation of the posterior; hence this method can only be applied to samples which provide the values of the unnormalised posterior (possibly in the form of log-likelihood and log-prior). From these, we compute the values of $\log q$ on the optimally-Gaussianised sample by adding the logarithm of the transformation Jacobian, and then fit the parameters $\boldsymbol{\mu}$, $\mathsf{\Sigma}$, and $E$ of the Gaussian via least-squares regression. This can be performed analytically, and even be used to compute an error bar on the value of the log-evidence – see Appendix A for details.
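Under a Gaussian (quadratic-in-log) model for the transformed posterior, the least-squares fit and the analytic integration of Eq. (13) can be sketched as follows; this is an illustrative implementation of our own, and the error-bar computation of Appendix A is omitted:

```python
import numpy as np
from itertools import combinations_with_replacement

def log_evidence_from_fit(y, logq):
    """Fit log q(y) = b0 + b.y + y^T M y by least squares on quadratic
    features of the Gaussianised sample y (n x d), identify the Gaussian
    parameters, and return the analytic log-evidence."""
    n, d = y.shape
    pairs = list(combinations_with_replacement(range(d), 2))
    X = np.column_stack([np.ones(n)] + [y[:, i] for i in range(d)] +
                        [y[:, i] * y[:, j] for i, j in pairs])
    beta, *_ = np.linalg.lstsq(X, logq, rcond=None)
    b0, b, g = beta[0], beta[1:1 + d], beta[1 + d:]
    M = np.zeros((d, d))                     # symmetric quadratic-form matrix
    for coeff, (i, j) in zip(g, pairs):
        if i == j:
            M[i, i] = coeff
        else:
            M[i, j] = M[j, i] = 0.5 * coeff
    prec = -2.0 * M                          # inverse covariance Sigma^{-1}
    cov = np.linalg.inv(prec)
    mu = cov @ b                             # location of the maximum
    log_E = b0 + 0.5 * b @ mu                # peak value log q(mu)
    sign, logdet = np.linalg.slogdet(cov)
    return log_E + 0.5 * d * np.log(2.0 * np.pi) + 0.5 * logdet
```

If the supplied log-posterior values are exactly quadratic, the fit recovers the Gaussian parameters, and hence the log-evidence, to machine precision.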
If the prior distribution for one MP, and hence the posterior, is supported only on a finite interval, the same will hold true for the transformed MP if we restrict ourselves to one-pass transformations. If the sample size is large enough to properly represent the cutoff, the Gaussianisation transformation will alleviate this feature, but may not fully remove it. Assuming the marginal distribution to be Gaussian, when in reality we may deal with a truncated Gaussian, will lead to a systematic error in the evidence, so it is advantageous to remove these features before starting the search for optimally Gaussianising TPs. Appendix B details ‘unboxing transformations’, which redefine the MPs, mapping a finite open interval to the entire real line. In fact, it is also possible to use them for posterior density reconstruction, before Step 1 in Sect. 2.4.
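One possible unboxing map is of the logit type; whether this coincides with the specific transformation of Appendix B is an assumption on our part, so treat this as a generic sketch:

```python
import numpy as np

def unbox(x, lo, hi):
    """Map a sample supported on the open interval (lo, hi) to the whole
    real line via a logit-type transformation (a generic choice, not
    necessarily the paper's exact unboxing map). Returns the transformed
    values and the per-point log-Jacobian needed to keep the density
    normalisation consistent."""
    u = np.log((x - lo) / (hi - x))
    log_jac = np.log(hi - lo) - np.log(x - lo) - np.log(hi - x)
    return u, log_jac
```

The inverse map is the correspondingly rescaled logistic function, so MP values near the prior boundaries are pushed out towards $\pm\infty$ and the hard cutoff disappears before the TP optimisation starts.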
To demonstrate this idea, we compute the evidence integral first on a mock data set, and subsequently on real data from cosmology. For the former, we draw a random sample of length 10,000 from a ten-dimensional log-normal probability distribution, and assign to each point the value of the probability density function, multiplied by a constant factor so that the true value of the log-evidence is known by construction. All weights are set to unity. This mock sample is subjected first to the Gaussianisation procedure with one-pass ABC transformations (no unboxing), and then to the regression outlined in App. A to retrieve the best-fit estimate of the log-evidence and its error bar. To verify these, we determine the distribution of the estimator in Eq. (13) by producing 1,000 bootstrap samples from the transformed sample, and computing the estimate on each of them, together with its one-sigma confidence intervals. To increase the reliability of the optimally Gaussianising transformation, 24 independent Gaussianisation runs are started with randomised initial conditions; Fig. 9 shows the results for these. Relative to the ‘true’ value of 5, these agree to sub-percent accuracy. It is also noteworthy that a lower local maximum of the likelihood, i.e. one further to the right, departs more from the true value. Hence, a location in TP space that is merely close to, but not at, the exact optimum will already yield a biased value for the log-evidence. This indicates that the evidence is a sensitive indicator for departures from Gaussianity. Note that a log-normal distribution can be precisely Gaussianised with Box–Cox transformations – and thus also by ABC transformations, which are a superset of these. Further, it is noteworthy that our analytic procedure yields error bars of the right magnitude, yet somewhat more conservative, compared to their bootstrapped counterparts. This discrepancy arises because the bootstrapped distributions deviate slightly from Gaussianity, whereas the analytic error bars assume Gaussian error propagation (see App. A for details).
For a demonstration on real-world data, we Gaussianise the joint posterior distribution of MPs of data from weak lensing and baryon acoustic oscillations. The weak lensing data set is the 2D cosmic shear data taken by the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS; see Heymans et al. 2012; Kilbinger et al. 2013). The CFHTLenS survey analysis combined weak lensing data processing with THELI (Erben et al. 2013), shear measurement with lensfit (Miller et al. 2013), and photometric redshift measurement with PSF-matched photometry (Hildebrandt et al. 2012). A full systematic error analysis of the shear measurements in combination with the photometric redshifts is presented in Heymans et al. (2012), with additional error analyses of the photometric redshift measurements presented in Benjamin et al. (2013).
The BAO data set is the Data Release 9 (DR9) CMASS sample from the Baryon Oscillation Spectroscopic Survey (BOSS), which is part of the Sloan Digital Sky Survey III (SDSS-III) – see Anderson et al. (2012). This contains 264,283 massive galaxies in the redshift range 0.43 < z < 0.7, whose correlation function and power spectrum both exhibit the features of baryon acoustic oscillations. The quantity rs/DV(z), i.e. the ratio of the comoving sound horizon rs at the baryon drag epoch and the spherically volume-averaged distance DV(z), is a probe of the underlying cosmological parameters – see Percival et al. (2007) for details.
To draw samples from the posterior distribution, we use the CosmoPMC software package (see http://www2.iap.fr/users/kilbinge/CosmoPMC/), which uses Population Monte Carlo (PMC), an algorithm to approximate the target distribution by a Gaussian mixture model. We compare three cosmological models: standard flat ΛCDM, curved ΛCDM, and flat wCDM. The first has a four-dimensional parameter space spanned by the matter density Ωm, the power spectrum normalisation σ8, the baryon density Ωb, and the normalised Hubble parameter h – all other parameters are set to their best-fit values for flat ΛCDM, see Planck XIII (2015). The latter two contain a fifth model variable each – the curvature parameter ΩK and the constant dark energy equation-of-state parameter w, respectively. For all of these parameters, flat proper priors were chosen. The baseline model, flat ΛCDM, is always referred to as model 1, whereas model 2 is one of the two extensions. As a byproduct of the sampling process, PMC provides the model evidence for the data set used – see Kilbinger et al. (2010) for further details.
Table 2: Log-evidences for the three models, and Bayes factors of flat ΛCDM versus the two extended models, as computed by Gaussianisation (G), PMC, and the SDDR.
| | flat ΛCDM | curved ΛCDM | flat wCDM |
| ln Z (G) | 486.96 ± 0.01 | 485.79 ± 0.03 | 486.09 ± 0.05 |
| ln Z (PMC) | 487.02 ± 0.03 | 485.84 ± 0.01 | 486.00 ± 0.04 |
| ln B12 (G) | n.a. | 1.17 ± 0.04 | 0.87 ± 0.05 |
| ln B12 (SDDR) | n.a. | 1.23 ± 0.04 | 0.93 ± 0.06 |
| ln B12 (PMC) | n.a. | 1.19 ± 0.04 | 1.03 ± 0.05 |
In the special situation where one model is nested inside the other, the evidence ratio can be computed via the Savage–Dickey density ratio (SDDR) – see Dickey (1971); Verde et al. (2013), and citations therein. Under mild conditions on the prior and posterior densities for the full model M2 and the submodel M1, the evidence ratio can be derived in closed form, in terms of ω, the extra parameter (or parameters) contained in M2 but not in M1; ω*, the value of ω that specifies the submodel M1; and the posterior and prior densities of the full model, marginalised over all model parameters but ω (see Verde et al., 2013 for details).
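Writing ω for the extra parameter(s) of the full model M2, ω* for the value of ω that singles out the submodel M1, and p and π for the posterior and prior densities of M2 marginalised over all parameters but ω, the SDDR takes its standard form (see Verde et al. 2013):

```latex
\frac{Z_1}{Z_2} \;=\; \frac{p(\omega = \omega_\ast \,|\, d, M_2)}{\pi(\omega = \omega_\ast \,|\, M_2)} \, .
```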
We find that the log-evidence values computed by CosmoPMC need to be offset by n times the log-prior density, where n is the number of data sets used. This is due to a non-standard interpretation of the prior density within CosmoPMC. Throughout this work, we apply this correction to the log-evidence values produced by CosmoPMC as well as the log-posterior values extracted from the CosmoPMC output. We follow the practice of Kilbinger et al. (2010, 2013) of accepting a CosmoPMC run as soon as the built-in convergence diagnostic, called perplexity, exceeds a given threshold. Sampling to even higher perplexity values still changes the CosmoPMC value for the log-evidence by as much as 0.1 – this indicates a residual bias in the statistic. However, since the exact same offset has to appear in the CosmoPMC output values for the log-evidence and for the non-normalised log-posterior, it is not of relevance to demonstrating our method, so investigating its origin is beyond the scope of this work.
Table 2 shows the log-evidences for the three models, and the Bayes factors of flat ΛCDM compared to either of the two extended models. The numbers in the first line were computed via one-pass Gaussianisation with ABC transformations, preceded by an unboxing transformation. To estimate the scatter of the CosmoPMC and SDDR values for the log-evidence and the Bayes factor in the second, fourth, and fifth lines, we reran CosmoPMC ten times for each model, and determined the mean and standard deviation of the CosmoPMC and SDDR estimators. As for the log-normal sample, 24 independent Gaussianisation runs were started for each sample, and the one with the highest log-likelihood value chosen to transform the sample, which is then subjected to the analytic evidence computation procedure. The values in the first row of Table 2 are the weighted averages and standard deviations of all ten values, where the weights are taken as the inverse variances implied by the analytic error bars (cf. App. A). The values show that the combined data favour flat ΛCDM over any of the extended models, although the evidence is not strong against either of the two. Our values agree with the numbers of SDDR and PMC within the spread between the latter two estimators, but small deviations remain, which are larger than the error bars quoted. These may be due to residual non-Gaussianity in the transformed samples, to which the evidence is a sensitive measure.
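The weighted combination of repeated log-evidence estimates can be sketched as follows. The inverse-variance weighting w = 1/σ² is our assumption, consistent with the analytic error bars of App. A; the numerical values below are illustrative, not the paper's actual run outputs.

```python
def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its standard error."""
    weights = [1.0 / s**2 for s in sigmas]       # assumed weighting w_i = 1/sigma_i^2
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, wsum ** -0.5                    # standard error of the weighted mean

# Illustrative log-evidence estimates from repeated runs, with error bars.
vals = [486.95, 486.98, 486.96]
errs = [0.02, 0.02, 0.01]
mean, err = weighted_mean(vals, errs)
```

The run with the smallest error bar dominates the combination, which is the desired behaviour when some Gaussianisation runs converge better than others.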
6 Conclusions and outlook
We have discussed how to transform a posterior probability density approximately into a multivariate Gaussian, and various applications thereof:
- From the parameters of the Gaussianised distribution and those of the transformation, we can reconstruct an analytic expression for the original posterior probability density, given a point sample drawn from it. This facilitates the combination of different data sets to obtain the joint posterior density.
- Further, this analytic posterior can be used to display contours of the density in question or its marginals, without the need for density estimates or smoothing procedures. Also, in reproducing the contours of the probability density reliably, it outperforms kernel density estimates.
- Instead of distributing lengthy point samples in the form of a Markov chain, we suggest using a Gaussianising transformation to disseminate a posterior density. Only the transformation parameters and the first and second moments of the resulting Gaussian are needed to reproduce the posterior density in its functional form; hence substantial data compression can be achieved.
- We have demonstrated this algorithm with our implementation in C (code available on request), applying it to Markov Chain Monte Carlo (MCMC) samples from Planck data. We used Box–Cox and Arcsinh–Box–Cox (ABC) transformations to Gaussianise various marginal distributions with distinctive non-Gaussian features, and showed the resulting contours.
- One distinctive application of Gaussianising transformations, which we discuss and demonstrate here, is a novel method to compute the model evidence of a posterior distribution, given a point sample from it. We have tested this method on cosmological data from lensing and baryon acoustic oscillations, for different cosmological models, and find a slight preference for flat ΛCDM. Compared to the numerical results from Population Monte Carlo (PMC) and the Savage–Dickey density ratio (SDDR), our new method of computing the evidence agrees well within the spread of the other two.
- We have introduced the cross-contour (CC) plot as a tool to decide whether one probability density reproduces the contours of another, or, if they do not, to detect where they deviate.
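The data-compression idea above can be sketched in one dimension: assuming a Box–Cox map with parameter lam (our illustrative notation and values, not fitted to any data in the paper), three numbers – lam and the mean and spread of the Gaussianised variable – replace the full chain, and the posterior density is recovered by the change-of-variables formula.

```python
import math

# "Compressed posterior": the only numbers that need to be reported.
lam, mu, sigma = 0.5, 1.2, 0.3

def boxcox(x, lam):
    """Box-Cox transform; the lam -> 0 limit is the logarithm."""
    return (x**lam - 1.0) / lam if lam != 0 else math.log(x)

def posterior_pdf(x):
    """Reconstructed density: Gaussian in y = boxcox(x), times |dy/dx|."""
    y = boxcox(x, lam)
    gauss = math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    jacobian = x ** (lam - 1.0)          # derivative of the Box-Cox map
    return gauss * jacobian

# Sanity check: the reconstructed pdf should integrate to ~1 over its support.
step = 0.005
grid = [0.001 + i * step for i in range(4000)]   # grid on (0, 20]
integral = sum(posterior_pdf(x) for x in grid) * step
```

The same construction carries over to several dimensions, with a multivariate Gaussian and the product of the per-dimension Jacobians.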
There are several possible extensions of our method, and directions to advance its scope:
- It is possible to introduce new families of transformations, designed to cure a wider spectrum of non-Gaussian features that a multivariate probability density may possess – in our implementation, new families can easily be included.
- Gaussianisation may be employed for fast sampling from a non-Gaussian probability density, provided that the Gaussianising parameters are known either exactly or to sufficient accuracy. One can then quickly draw a point sample from a multivariate Gaussian distribution, and transform this sample with the inverse map.
- To improve the accuracy of the evidence computation, it is possible to replace the log-likelihood of Eq. (5) with another loss function, which penalises deviations from Gaussianity in a sharper manner.
- So far, we have been working with unimodal probability densities. We require the transformations to be bijective, hence we cannot map a multimodal distribution into a unimodal Gaussian. However, we may be able to transform such a density into a mixture of (possibly overlapping) Gaussians, where we now have to estimate the weight factor of each constituent from the transformed sample, in addition to each component's mean and covariance. The requisite number of components can possibly be determined with standard clustering algorithms.
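As a sketch of the fast-sampling route, assume a one-dimensional posterior whose Gaussianising map is a Box–Cox transform with known parameters (the values below are illustrative assumptions): one draws Gaussian deviates and pushes them through the inverse map.

```python
import math
import random

random.seed(42)
lam, mu, sigma = 0.5, 1.2, 0.3   # assumed known Gaussianising parameters

def inv_boxcox(y, lam):
    """Inverse Box-Cox map; requires lam*y + 1 > 0 for lam != 0."""
    return (lam * y + 1.0) ** (1.0 / lam) if lam != 0 else math.exp(y)

# Draw in the Gaussian space, then map back to the original parameter space.
sample = [inv_boxcox(random.gauss(mu, sigma), lam) for _ in range(10000)]
```

Each draw costs only one Gaussian deviate and one function evaluation, so large samples from the non-Gaussian density are cheap once the transformation is known.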
RLS is grateful to Martin Kilbinger, Karim Benabed, Catherine Heymans and Stephen Feeney for helpful discussions, and has been supported by the Perren Fund in Astronomy, and by IMPACT (UCL MAPS faculty); BJ acknowledges support by an STFC Ernest Rutherford Fellowship, grant reference ST/J004421/1; HVP was supported by the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no. 306478-CosmicDawn. This work is based on observations obtained with Planck (http://www.esa.int/Planck), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada. In part, this work was supported by National Science Foundation Grant No. PHY-1066293 and the hospitality of the Aspen Center for Physics.
- Anderson et al. (2012) Anderson L. et al., 2012, MNRAS, 427, 4, 3435-3467 (arXiv: 1203.6594)
- Benjamin et al. (2013) Benjamin J. et al., 2013, MNRAS, 431, 2, 1547-1564 (arXiv:1212.3327)
- Box & Cox (1964) Box G.E.P., Cox D.R., 1964, J. R. Statist. Soc. B 26/2, 211-252
- Černý (1985) Černý V., 1985, J. Optimization Theory and Applications 45/1, 41-51
- Chu, Kaplinghat & Knox (2003) Chu M., Kaplinghat M., Knox L., 2003, ApJ, 596, 725 (arXiv:astro-ph/0212466)
- Dickey (1971) Dickey J., 1971, Ann. Math. Statist., 42, 204-223
- Erben et al. (2013) Erben T. et al., 2013, MNRAS, 433, 3, 2545-2563 (arXiv:1210.8156)
- Fendt & Wandelt (2007) Fendt W.A., Wandelt B.D., 2007, ApJ, 654, 2 (arXiv:astro-ph/0606709)
- Heymans et al. (2012) Heymans C. et al., 2012, MNRAS, 427, 146-166 (arXiv:1210.0032)
- Hildebrandt et al. (2012) Hildebrandt H. et al., 2012, MNRAS, 421, 3, 2355-2367 (arXiv:1111.4434)
- Jaynes (2003) Jaynes E.T., 2003, Probability Theory: The Logic of Science, Cambridge University Press, Cambridge
- Jimenez, Verde & Peiris (2004) Jimenez R., Verde L., Peiris H.V., 2004, Phys. Rev. D, 70, 023005 (arXiv:astro-ph/0404237)
- Joachimi & Taylor (2011) Joachimi B., Taylor A., 2011, MNRAS, 416, 2, 1010-1022 (arXiv:1103.3370)
- Kahn & Marshall (1953) Kahn H., Marshall A.W., 1953, J. Operations Res. Soc. Am., 1, 5, 263-278
- Kass & Raftery (1995) Kass R., Raftery A.E., 1995, JASA, 90, 430, 779-795
- Kilbinger et al. (2010) Kilbinger M. et al., 2010, MNRAS, 405, 4, 2381-2390 (arXiv:0912.1614)
- Kilbinger et al. (2013) Kilbinger M. et al., 2013, MNRAS, 430, 3, 2200-2220 (arXiv:1212.3338)
- Taylor & Kitching (2010) Kitching T., Taylor A., 2010, MNRAS, 408, 2, 865-875 (arXiv:1003.1136)
- Kosowsky, Milosavljevic & Jimenez (2002) Kosowsky A., Milosavljevic M., Jimenez R., 2002, Phys. Rev. D, 66, 063007 (arXiv:astro-ph/0206014v1)
- Lewis & Bridle (2002) Lewis A., Bridle S., 2002, Phys. Rev. D, 66, 103511 (arXiv:astro-ph/0205436v3)
- MacKay (2003) MacKay D.J.C., 2003, Information Theory, Inference, and Learning Algorithms, Cambridge University Press, Cambridge
- Miller et al. (2013) Miller L. et al., 2013, MNRAS, 429, 4, 2858-2880 (arXiv:1210.8201)
- Nelder & Mead (1965) Nelder J.A., Mead R., 1965, Comput. J., 7, 308-313
- Percival et al. (2007) Percival W.J. et al., 2007, MNRAS, 381, 3, 1053-1066 (arXiv:0705.3323)
- Powell (2007) Powell M.J.D., 2007, IMA J. Numer. Analysis, 28, 4, 649-664.
- Planck XVI (2014) Planck Collaboration (Ade P.A.R. et al.), 2014, A&A, 571, A16 (arXiv:1311.1657)
- Planck XIII (2015) Planck Collaboration (Ade P.A.R. et al.), 2015, A&A, in press (arXiv:1502.01589)
- Sandvik et al. (2004) Sandvik H., Tegmark M., Wang X., Zaldarriaga M., 2004, Phys. Rev. D, 69, 063005 (arXiv:astro-ph/0311544v2)
- Silverman (1998) Silverman B.W., 1998, Density Estimation for Statistics and Data Analysis, London, Chapman & Hall/CRC
- Skilling (2006) Skilling J., 2006, Bayesian Analysis 1 no.4, 833-860
- Velilla (1993) Velilla S., 1993, Statistics & Probability Letters 17, 259
- Verde et al. (2013) Verde L., Feeney S.M., Mortlock D.J., Peiris H.V., 2013, JCAP09(2013)013 (arXiv:1307.2904)
Appendix A Analytic evidence computation
We outline how the computation of the log-evidence (see Eq. 13) can be performed analytically, i.e. without numerical optimisation. Our data consist of a Gaussianised weighted sample of points, together with the values of the transformed log-posterior at each of these points. To fit a multivariate unnormalised Gaussian
through these log-posterior values, we use the regression model
which is linear in each of the regression parameters: the upper-diagonal components of a symmetric matrix, the components of a vector, and a scalar offset. Assuming independence and homoscedasticity, we arrive at our quantity to minimise,
which is quadratic in every regression parameter. Thus, we can write all normal equations of the regression problem,
as a linear inhomogeneous vector equation in the regression parameters, and solve it via singular value decomposition. From the resulting solution, the parameters of the multivariate Gaussian (Eq. 15) can readily be computed.
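For concreteness, write the symmetric matrix as A, the vector as b, and the scalar as c (the symbol names are our own choice). The quadratic regression model, the resulting Gaussian parameters, and the log-evidence then follow as:

```latex
\ln G(\mathbf{y}) = -\tfrac{1}{2}\,\mathbf{y}^{\mathsf T} A\,\mathbf{y} + \mathbf{b}^{\mathsf T}\mathbf{y} + c \,, \qquad
\Sigma = A^{-1} \,, \quad \boldsymbol{\mu} = A^{-1}\mathbf{b} \,,
```

```latex
\ln Z = c + \tfrac{1}{2}\,\mathbf{b}^{\mathsf T} A^{-1}\mathbf{b} + \tfrac{d}{2}\ln(2\pi) - \tfrac{1}{2}\ln\det A \,,
```

where the last expression is the standard Gaussian integral of the fitted, unnormalised density in d dimensions.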
Furthermore, we can use the analytic regression procedure to find error bars on these estimators, and thus on the log-evidence. To this end, we analytically compute the covariance matrix of all regression parameters from the form of the normal equations and the transformed data set, and then approximate the variances of the Gaussian parameters and of the log-evidence by standard Gaussian error propagation.
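The regression can be sketched numerically. Writing the quadratic model as ln G(y) = −½ yᵀAy + bᵀy + c (our notation), the fit is linear least squares over the upper-triangle entries of A, the entries of b, and c; the sketch below uses a synthetic test density whose log-evidence is known in closed form, and is our own illustration rather than the paper's C implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

# Ground truth: a known quadratic log-density with known log-evidence.
A_true = np.array([[2.0, 0.3, 0.0],
                   [0.3, 1.5, 0.2],
                   [0.0, 0.2, 1.0]])
b_true = np.array([0.5, -0.2, 0.1])
c_true = -1.0

def log_g(y, A, b, c):
    return -0.5 * y @ A @ y + b @ y + c

# Sample points and evaluate the exact log-posterior on them.
ys = rng.normal(size=(500, d))
logp = np.array([log_g(y, A_true, b_true, c_true) for y in ys])

# Design matrix: quadratic monomials (upper triangle of A), linear terms, constant.
iu = np.triu_indices(d)
rows = []
for y in ys:
    quad = np.outer(y, y)
    quad = quad + quad.T - np.diag(np.diag(quad))   # off-diagonals count twice
    rows.append(np.concatenate([-0.5 * quad[iu], y, [1.0]]))
X = np.array(rows)

# Solve the normal equations via SVD-based least squares.
theta, *_ = np.linalg.lstsq(X, logp, rcond=None)

# Unpack the regression parameters back into A, b, c.
A_fit = np.zeros((d, d))
A_fit[iu] = theta[: len(iu[0])]
A_fit = A_fit + A_fit.T - np.diag(np.diag(A_fit))
b_fit = theta[len(iu[0]):-1]
c_fit = theta[-1]

# Gaussian parameters and the analytic log-evidence.
Sigma = np.linalg.inv(A_fit)
mu = Sigma @ b_fit
logZ = c_fit + 0.5 * b_fit @ Sigma @ b_fit \
       + 0.5 * d * np.log(2 * np.pi) - 0.5 * np.linalg.slogdet(A_fit)[1]
```

Because the model is linear in the regression parameters, the fit is exact when the input log-posterior is exactly quadratic; residual non-Gaussianity of the transformed sample shows up as regression residuals.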
Appendix B Unboxing Transformations
A single MP x, which is assumed to be constrained to an open interval (a, b), is redefined via the unboxing transformation
where Φ⁻¹ denotes the inverse of the cumulative distribution function of the Normal distribution.
The transformation, thus designed, has the following properties: it is bijective and smooth; it diverges to ±∞ towards the interval boundaries; and the midpoint of the interval is fixed, i.e. mapped onto itself. Sending the interval boundaries to infinity simultaneously will result in the identity transformation.
This is a generalisation of the widely used probit transformation, which maps the unit interval onto the real numbers via Φ⁻¹. Our modified probit has one decisive advantage for the subsequent search for a Gaussianising transformation: if x, as a random variable, is uniformly distributed on (a, b), then the unboxed variable is normally distributed, with its mean at the interval midpoint.
In statistics, a frequently used alternative to the probit is the logit map x ↦ ln[x/(1−x)]. For our purposes, however, the probit is preferable, since a similarly rescaled version of the logit transformation would yield a distribution with excess kurtosis, instead of a Gaussian.
For a multi-dimensional vector of MPs, each constrained to its own interval, we unbox each dimension separately, with the appropriate boundaries.
Before starting the search for the Gaussianisation parameters, every point in the original sample is mapped through this transformation.
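The unboxing map can be sketched as follows. The rescaling chosen here – an offset to the interval midpoint and a scale of (b − a)/√(2π), which gives the map unit slope at the midpoint – is our assumption for illustration; the paper's exact normalisation may differ.

```python
import math
from statistics import NormalDist

_PROBIT = NormalDist().inv_cdf   # inverse CDF of the standard Normal

def unbox(x, a, b):
    """Map x in the open interval (a, b) to the real line; fixes the midpoint."""
    mid = 0.5 * (a + b)
    scale = (b - a) / math.sqrt(2.0 * math.pi)   # assumed rescaling (unit slope at mid)
    return mid + scale * _PROBIT((x - a) / (b - a))

def unbox_vector(xs, bounds):
    """Unbox each dimension of a parameter vector with its own interval."""
    return [unbox(x, a, b) for x, (a, b) in zip(xs, bounds)]
```

By construction, the map is monotonic, diverges at the interval boundaries, and leaves the midpoint in place; a uniform variable on (a, b) becomes exactly Gaussian after unboxing.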