Observables for large scale structure

Sufficient observables for large scale structure in galaxy surveys

J. Carron and I. Szapudi
Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI, 96822
E-mail: carron@ifa.hawaii.edu
August 2, 2019

Beyond the linear regime, the power spectrum and higher order moments of the matter field no longer capture all cosmological information encoded in density fluctuations. While non-linear transforms have been proposed to extract this information lost to traditional methods, up to now, the way to generalize these techniques to discrete processes was unclear; ad hoc extensions had some success. We pointed out in Carron & Szapudi (2013) that the logarithmic transform approximates extremely well the optimal “sufficient statistics”, observables that extract all information from the (continuous) matter field. Building on these results, we generalize optimal transforms to discrete galaxy fields. We focus our calculations on the Poisson sampling of an underlying lognormal density field. We solve and test the one-point case in detail, and sketch out the sufficient observables for the multi-point case. Moreover, we present an accurate approximation to the sufficient observables in terms of the mean and spectrum of a non-linearly transformed field. We find that the corresponding optimal non-linear transformation is directly related to the maximum a posteriori Bayesian reconstruction of the underlying continuous field with a lognormal prior as put forward in Kitaura et al. (2010). Thus simple recipes for realizing the sufficient observables can be built on previously proposed algorithms that have been successfully implemented and tested in simulations.

Key words: large-scale structure of Universe, cosmology: theory, methods: statistical

pubyear: 2013

1 Introduction

In the current inflationary paradigm, the small primordial density fluctuations are believed to be very close to a Gaussian field. The natural descriptors of such fields are two-point correlation functions, or (power) spectra in Fourier space. However, it was established first in N-body simulations (Rimes & Hamilton, 2005, 2006), and subsequently with analytical calculations (Neyrinck et al., 2006), that the spectrum of the matter field loses its effectiveness as the fluctuations grow. Fourier modes of the density become strongly coupled (Meiksin & White, 1999; Scoccimarro et al., 1999), resulting in large covariances effectively diminishing the available information. In this context, a variety of non-linear transformations of the field, such as Gaussianization (Weinberg, 1992; Neyrinck et al., 2011; Zhang et al., 2011; Yu et al., 2011), the logarithmic mapping (Neyrinck et al., 2009; Seo et al., 2011; Seo et al., 2012; Carron, 2012), Box-Cox transformations (Joachimi et al., 2011) or clipping (Simpson et al., 2011; Simpson et al., 2013), have been shown to increase the fidelity to linear theory and/or recapture information in the noise-free fields. These results can be understood within the simple yet qualitatively, and to some extent quantitatively, accurate lognormal model (Coles & Jones, 1991; Kayo et al., 2001) of the statistics of the matter field. It has been shown that the full set of N-point moments of fields of this type carry very little information in the high-variance regime (see Carron & Neyrinck, 2012, and references therein for an extensive discussion of these statistical issues).
Before these ideas can be applied to extract more information from galaxy surveys, a key issue to be dealt with is discreteness. Galaxy fields correspond to a set of points rather than a continuous random field, and it is not entirely clear how and to what extent the methods and conclusions of these works apply. While studies such as Neyrinck et al. (2011) suggest that analogous methods should still bring some improvement, up to now the estimators have relied on ad hoc generalizations of the logarithmic and similar transforms to discrete data sets.
In Carron & Szapudi (2013) we have shown rigorously that such non-linear transforms of the matter field typically work because they i) undo (some of) the non-linear evolution, thus ii) Gaussianize the distribution, and last but not least iii) they correspond to a good approximation to sufficient statistics, observables that extract all available information from the matter field on a given cosmological parameter. The logarithmic transformation would correspond to exact sufficient statistics if the underlying continuous field were lognormal. While the evolved dark matter field is only approximately lognormal, apparently it is close enough that the amount of information extracted by the simple logarithmic transformation is virtually indistinguishable from that of the exact (and vastly more complicated) sufficient statistics. Thus a simple way presents itself to generalize these findings to the discrete galaxy fields in surveys: assume a lognormal field sampled in a Poisson fashion, and construct the corresponding sufficient statistics. Both of these assumptions are approximate, nevertheless, we expect these results to capture the essential properties of dark matter fields encoded by the distribution of galaxies. Also, it will be clear how our results generalize to more complex statistical models.

2 Methods

We build on our previous work (Carron & Szapudi, 2013), to which we refer for details; it can be summarized as follows. Let p(δ) be the one-point PDF of the fluctuation δ. It is well known that the statistic

    \hat{o}(\delta) = \delta^2    (1)
(i.e. the variance) contains the entire Fisher information of δ whenever the probability density p is Gaussian. This is a special case of a more general identity, valid for any PDF p_α depending on a parameter α: the observable

    \hat{o}(\delta) = a\,\frac{\partial \ln p_\alpha(\delta)}{\partial \alpha} + b    (2)
carries the entire Fisher information content of p_α on the parameter α (in this equation a and b are arbitrary constants). These statistics can therefore be considered 'sufficient', or optimal, with respect to their Fisher information content. They can be read directly from the shape of the PDF.
In this paper we construct the corresponding sufficient observables for the PDF of a discrete galaxy field, given schematically as

    P(N) = \int d\delta\; P(N|\delta)\, p(\delta),    (3)
where P(N) is the probability of observing N galaxies in a given cell. Initially, we restrict our analysis to one-point probabilities and one-point optimal observables, to obtain local transformations exactly analogous to previous successful methods. Our focus on the one-point PDF makes the variance σ² of the fluctuations the only model parameter of relevance. Nevertheless, generalizations to multipoint probabilities will be obvious after the detailed calculation.
We will use the local Poisson model as a natural way of discretely sampling an underlying continuous distribution. Less obvious is the choice of the underlying PDF p(δ). In Carron & Szapudi (2013) we used results from perturbation theory on the moments of the dark matter field to show that

    \hat{o}(\delta) \propto \ln^2\left(1+\delta\right)    (4)
was a very good approximation to the optimal observable when the index of the power spectrum is reasonably close to -1. This result was very successfully tested against the Millennium Simulation (Springel et al., 2005) density field. Since equation (4) is the optimal statistic (2) for the lognormal PDF, this model presents itself as a plausible choice for the underlying continuous PDF in the construction of the sufficient observable.
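As a sanity check (ours, not part of the original analysis), one can verify numerically that the score ∂ln p/∂σ² of the lognormal PDF is an exact quadratic polynomial in ln(1+δ), so that, up to the affine freedom of equation (2) and lower-order terms, the sufficient statistic is indeed ln²(1+δ). A minimal Python sketch, with all variable names ours:

```python
import numpy as np

def ln_p_lognormal(delta, sig2):
    """ln of the lognormal PDF of delta, parametrised by sig2 = <delta^2>."""
    sA2 = np.log(1.0 + sig2)                 # variance of A = ln(1 + delta)
    A = np.log(1.0 + delta)
    # Gaussian in A with mean -sA2/2 (so that <delta> = 0), plus the Jacobian dA/ddelta
    return (-(A + sA2 / 2.0) ** 2 / (2.0 * sA2)
            - 0.5 * np.log(2.0 * np.pi * sA2) - A)

delta = np.linspace(-0.9, 20.0, 256)
sig2, eps = 1.0, 1e-5
# score = d ln p / d sig2, the sufficient statistic up to an affine transform
score = (ln_p_lognormal(delta, sig2 + eps)
         - ln_p_lognormal(delta, sig2 - eps)) / (2.0 * eps)
# fit a quadratic polynomial in A = ln(1 + delta): residuals should vanish
A = np.log(1.0 + delta)
resid = score - np.polyval(np.polyfit(A, score, 2), A)
print(np.max(np.abs(resid)))
```

The residuals of the quadratic fit sit at the level of finite-difference noise, consistent with the statement above.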

2.1 Local Poisson model with lognormal density

Using the local Poisson model with lognormal underlying matter density, the probability of observing N galaxies in a given cell is given by

    P(N) = \int_{-1}^{\infty} d\delta\; \frac{\lambda^N e^{-\lambda}}{N!}\, p(\delta), \qquad \lambda = \bar{N}\left(1+\delta\right),    (5)
and N̄ is the mean number of galaxies in the cell. More realistic local models for discreteness, including deviations from Poissonity or biasing schemes, can be implemented analogously (e.g. Kitaura et al., 2013). The lognormal PDF of δ reads

    p(\delta) = \frac{1}{\sqrt{2\pi\sigma_A^2}}\,\frac{1}{1+\delta}\,\exp\left(-\frac{\left[\ln(1+\delta)+\sigma_A^2/2\right]^2}{2\sigma_A^2}\right).    (6)
Writing A = ln(1+δ), the parameters of the above equation are set by

    \bar{A} = -\frac{\sigma_A^2}{2},    (7)

    \sigma_A^2 = \ln\left(1+\sigma^2\right),    (8)
where σ² = ⟨δ²⟩. The first relation ensures that δ has zero mean. The log-density A has a Gaussian PDF.
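As a quick numerical illustration (our own sketch, with an arbitrarily chosen σ² = 2), a Gaussian log-density A with mean -σ_A²/2 and variance σ_A² = ln(1+σ²) indeed produces a zero-mean lognormal fluctuation δ = e^A - 1 of variance σ²:

```python
import numpy as np

rng = np.random.default_rng(0)
sig2 = 2.0                                   # target variance <delta^2>
sA2 = np.log(1.0 + sig2)                     # sigma_A^2 = ln(1 + sigma^2)
# Gaussian log-density A with mean -sA2/2 and variance sA2
A = rng.normal(-sA2 / 2.0, np.sqrt(sA2), size=2_000_000)
delta = np.exp(A) - 1.0                      # lognormal fluctuation
print(delta.mean(), delta.var())             # close to 0 and sig2
```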

Saddle-point approximation

We found that P(N) can be obtained with great accuracy via a saddle-point approximation. Specifically, we first write Eq. (5) using A = ln(1+δ) as integration variable, as

    P(N) = \int dA\; e^{-S(A)},    (9)
with S(A) collecting the Poisson and lognormal factors of Eq. (5) expressed in the variable A. We then approximate the integrand as

    e^{-S(A)} \approx e^{-S(A_N)}\, e^{-\frac{1}{2} S''(A_N)\left(A-A_N\right)^2},    (10)
where A_N is the point where the integrand is maximal, i.e. where S is minimal. Eq. (9) becomes a Gaussian integral, resulting in

    P(N) \approx e^{-S(A_N)}\,\sqrt{\frac{2\pi}{S''(A_N)}}.    (11)
Collecting the terms from the lognormal and Poisson PDFs, we have in our case

    S(A) = \lambda(A) - N\ln\lambda(A) + \ln N! + \frac{\left(A+\sigma_A^2/2\right)^2}{2\sigma_A^2} + \frac{1}{2}\ln\left(2\pi\sigma_A^2\right), \qquad \lambda(A) = \bar{N}\, e^{A}.    (12)
The saddle point is obtained by setting dS/dA = 0, giving the following non-linear equation:

    \frac{dS}{dA}\bigg|_{A_N} = \lambda_N - N + \frac{A_N + \sigma_A^2/2}{\sigma_A^2} = 0,    (13)

or, equivalently,

    \sigma_A^2\left(\lambda_N - N\right) + A_N + \frac{\sigma_A^2}{2} = 0,    (14)
where λ_N ≡ λ(A_N) = N̄ e^{A_N}. The curvature term is

    S''(A_N) = \lambda_N + \frac{1}{\sigma_A^2}.    (15)
Equation (13) is always well behaved with a unique solution.
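The scheme above is straightforward to implement numerically. The sketch below (our own illustrative code, with arbitrarily chosen parameter values) solves the saddle-point equation with Newton-Raphson, evaluates the saddle-point approximation, Eq. (11), and compares it against direct quadrature of the Poisson-lognormal integral, Eq. (5); the two agree closely:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def saddle_pN(N, Nbar, sA2):
    """Saddle-point approximation to the Poisson-lognormal P(N), Eq. (11)."""
    A = 0.0
    for _ in range(100):                     # Newton-Raphson on Eq. (13)
        lam = Nbar * np.exp(A)
        step = (sA2 * (lam - N) + A + sA2 / 2.0) / (sA2 * lam + 1.0)
        A -= step
        if abs(step) < 1e-12:
            break
    lam = Nbar * np.exp(A)
    S = (lam - N * np.log(lam) + gammaln(N + 1.0)
         + (A + sA2 / 2.0) ** 2 / (2.0 * sA2)
         + 0.5 * np.log(2.0 * np.pi * sA2))
    return np.exp(-S) * np.sqrt(2.0 * np.pi / (lam + 1.0 / sA2))

def exact_pN(N, Nbar, sA2):
    """Direct quadrature of Eq. (5), written in the variable A = ln(1 + delta)."""
    def integrand(A):
        lam = Nbar * np.exp(A)
        return np.exp(N * np.log(lam) - lam - gammaln(N + 1.0)
                      - (A + sA2 / 2.0) ** 2 / (2.0 * sA2)) \
            / np.sqrt(2.0 * np.pi * sA2)
    return quad(integrand, -6.0, 6.0)[0]

Nbar, sA2 = 5.0, np.log(1.0 + 1.0)           # sigma^2 = 1 (arbitrary test point)
rows = [(N, saddle_pN(N, Nbar, sA2), exact_pN(N, Nbar, sA2)) for N in (0, 1, 5, 20)]
for N, approx, exact in rows:
    print(N, approx, exact)
```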

2.2 The sufficient observable

With the above results we proceed to construct the sufficient observable. Let us start with some general observations. To get the optimal observable we need ∂ln P(N)/∂σ_A². From Eq. (11),

    \frac{\partial \ln P(N)}{\partial \sigma_A^2} = -\frac{d S(A_N)}{d\sigma_A^2} - \frac{1}{2}\,\frac{d \ln S''(A_N)}{d\sigma_A^2}.    (16)
When performing the derivatives, some care must be taken, as σ_A² enters in two different ways: explicitly in the density PDF, and implicitly through the solution A_N of the saddle-point equation. The first term in (16) can be dealt with simply. We note that in general dA_N/dσ_A² ≠ 0. Thus, we can write

    \frac{d S(A_N)}{d\sigma_A^2} = \frac{\partial S}{\partial A}\bigg|_{A_N}\frac{dA_N}{d\sigma_A^2} + \frac{\partial S}{\partial \sigma_A^2}\bigg|_{A_N}
                                 = \frac{\partial S}{\partial \sigma_A^2}\bigg|_{A_N}.    (17)
The first term in the upper equation vanishes by definition of the saddle point. The second term reduces to the right-hand side of the lower equation, since the Poisson part of S carries no explicit dependence on σ_A². Thus (17) is simply the optimal statistic of the underlying continuous density field evaluated at the point A_N. To obtain the optimal statistic of the galaxy field we only need to add a correction from the curvature term. To write the curvature term explicitly, we need dA_N/dσ_A². This is obtained by taking the derivative of the saddle-point equation with respect to σ_A², with the result

    \frac{dA_N}{d\sigma_A^2} = \frac{N - \lambda_N - 1/2}{1 + \sigma_A^2\,\lambda_N}.    (18)
We now have all the ingredients to write down the optimal observable for our Poisson lognormal model. Collecting the relevant terms, we get after some algebra

    \hat{o}(N) = \left[\frac{\left(A_N+\sigma_A^2/2\right)^2}{2\sigma_A^4} - \frac{A_N+\sigma_A^2/2+1}{2\sigma_A^2}\right] - \frac{1}{2}\,\frac{\lambda_N\, dA_N/d\sigma_A^2 - \sigma_A^{-4}}{\lambda_N + \sigma_A^{-2}}.    (19)
We present the interpretation of A_N and of the second term in the above observable later on.
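The algebra above can be checked numerically: in the saddle-point approximation, ln P(N) is an explicit function of σ_A², so its derivative (the sufficient observable) can be computed by finite differences and compared with the analytic decomposition into the continuous-field term evaluated at A_N and the curvature correction. The sketch below is our own re-derivation and implementation, with arbitrary parameter values:

```python
import numpy as np
from scipy.special import gammaln

def saddle_A(N, Nbar, sA2):
    """Newton-Raphson solution A_N of the saddle-point equation."""
    A = 0.0
    for _ in range(100):
        lam = Nbar * np.exp(A)
        step = (sA2 * (lam - N) + A + sA2 / 2.0) / (sA2 * lam + 1.0)
        A -= step
        if abs(step) < 1e-13:
            break
    return A

def ln_pN(N, Nbar, sA2):
    """ln P(N) in the saddle-point approximation, Eq. (11)."""
    A = saddle_A(N, Nbar, sA2)
    lam = Nbar * np.exp(A)
    S = (lam - N * np.log(lam) + gammaln(N + 1.0)
         + (A + sA2 / 2.0) ** 2 / (2.0 * sA2)
         + 0.5 * np.log(2.0 * np.pi * sA2))
    return -S - 0.5 * np.log(lam + 1.0 / sA2) + 0.5 * np.log(2.0 * np.pi)

Nbar, sA2, eps = 5.0, np.log(2.0), 1e-6      # arbitrary test point
pairs = []
for N in (0, 2, 10):
    # the sufficient observable: d ln P(N) / d sigma_A^2, by finite differences
    fd = (ln_pN(N, Nbar, sA2 + eps) - ln_pN(N, Nbar, sA2 - eps)) / (2.0 * eps)
    # analytic decomposition: continuous-field score at A_N plus curvature term
    A = saddle_A(N, Nbar, sA2)
    lam = Nbar * np.exp(A)
    u = A + sA2 / 2.0
    dA = (N - lam - 0.5) / (1.0 + sA2 * lam)         # dA_N / d sigma_A^2
    term1 = u ** 2 / (2.0 * sA2 ** 2) - (u + 1.0) / (2.0 * sA2)
    term2 = -0.5 * (lam * dA - 1.0 / sA2 ** 2) / (lam + 1.0 / sA2)
    pairs.append((fd, term1 + term2))
    print(N, fd, term1 + term2)
```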

3 Tests and results

To illustrate the behavior of P(N) and to test our statistics in the parameter space spanned by σ² and N̄, we proceed as follows. Letting R be the radius of a (spherical) cell, we set the expected power-law behaviors

    \sigma^2(R) \propto R^{-(n+3)}, \qquad \bar{N}(R) = \bar{n}\,\frac{4\pi}{3}R^3,    (20)

where we use standard values for the normalization and the spectral index n. We will use for the purposes of this paper two different values of the number density n̄: the approximate density of the SDSS LRGs (Percival et al., 2007), and a larger sampling rate for comparison.

Saddle-point approximation

Figure 1: Upper panel: the crosses show the exact count-in-cells PDF P(N), as a function of N, for increasing values of the matter fluctuation variance σ², as indicated from bottom to top. The corresponding number densities are given by the scaling (20). The solid lines correspond to the saddle-point approximation to the PDF, Eq. (11). On each curve the first cross on the left indicates N = 0, the second N = 1, etc. Lower panel: the same for the optimal observables. For our purposes, the saddle-point approximation is essentially exact.

The exact evaluation of P(N) with numerical methods poses no particular difficulties. We used a basic Newton-Raphson algorithm to solve Eq. (13) for the saddle point. We show P(N) versus N as the crosses on the upper panel of Figure 1, for different values of σ² as indicated. The solid lines show the saddle-point approximation. For convenience, these lines ignore the discreteness of the PDF: we continued it to non-integer values of N by replacing the factorial N! with the Gamma function Γ(N+1). We found the relative deviation to be always subpercent, even for large values of σ², except at the void probability P(0), where the accuracy worsens as σ² increases. The accuracy is similar or better for the larger sampling density. It turns out that for all the purposes of this paper, the saddle-point approximation is as good as the exact result. The lower panel shows the exact sufficient observable and its saddle-point approximation, Eq. (19). It is clear from the upper panel that the transformation from N to A_N no longer Gaussianizes the PDF on small scales. Nevertheless, the shape of the observable remains close to a parabola on all scales. This suggests that the entire information is contained in the first two moments of A_N, regardless of the non-Gaussianity of its PDF. This observation will be fully exploited next.


Figure 2: Upper panel: the efficiency of various statistics at capturing the information within the count-in-cells PDF, as a function of σ². N̄ scales according to Eq. (20). The dotted line and the dashed line show the two suboptimal statistics defined in the text. The dash-dotted line shows the second moment of the galaxy density fluctuations. Two other statistics are shown as solid lines, both almost indistinguishable from unity: the optimal statistic according to the saddle-point approximation, and the combined information content of the first two moments of A_N, accounting for their covariance. The former shows a very slight deviation from optimality for the largest variances. The lower panel is the same for the larger sampling density. In both cases, the optimal statistic is equivalent to the extraction of the first two moments of A_N.

We define the efficiency of a statistic as the ratio of its Fisher information to the total information. This is shown in Figure 2, for two different values of n̄. Each panel shows several curves. The lower solid line is the optimal statistic according to our calculations, Eq. (19). For the most part, it is indistinguishable from unity, confirming our expectations. A slight deviation is observed on the smallest scales, due to the saddle-point approximation. The dotted curves show the efficiency of the second moment of A_N alone, neglecting the curvature term in the optimal statistic; the latter still accounts for up to 20% of the total information in the intermediate regime. The dash-dotted lines display the efficiency of the naive second moment of the galaxy fluctuations. This performs poorly for large variances, as expected. For comparison, we also show with dashes the statistic


where the mapping was introduced in Neyrinck et al. (2011) as a local-transform alternative to the logarithmic mapping that remains well defined when N = 0. While useful on large scales, where N̄ is still sufficiently large, it performs poorly on smaller scales. In fact, its sensitivity to σ² vanishes entirely at a finite variance.
Finally, motivated by the parabolic shape seen on the lower panel of Figure 1, the upper solid lines present the efficiency of the first two moments (jointly) of A_N. Strikingly, these curves are indistinguishable from unity, showing perfect extraction of the information. This suggests that the curvature term in the optimal observable gleans the information content (perhaps in a complex way) of the first moment, which is negligible for small variances but carries information for larger ones. Note that this simple procedure appears to perform slightly better than the optimal statistic for the largest variances only because of the slight inaccuracy of the saddle-point integration used to realise it.
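The efficiency computation itself is easy to reproduce schematically: tabulate P(N) and its derivative with respect to σ_A², form the score, and compare the Fisher information (d⟨s⟩/dσ_A²)²/Var(s) of any single statistic s(N) with the total. The following is our own sketch (arbitrary parameters, finite-difference derivatives), not the code used for Figure 2:

```python
import numpy as np
from scipy.special import gammaln

def pN_table(Nmax, Nbar, sA2, ngrid=4001):
    """P(N), N = 0..Nmax, by direct integration over A = ln(1 + delta)."""
    A = np.linspace(-8.0, 8.0, ngrid)
    prior = np.exp(-(A + sA2 / 2.0) ** 2 / (2.0 * sA2)) / np.sqrt(2.0 * np.pi * sA2)
    lam = Nbar * np.exp(A)
    Ns = np.arange(Nmax + 1)
    log_pois = (Ns[:, None] * np.log(lam)[None, :] - lam[None, :]
                - gammaln(Ns + 1.0)[:, None])
    return (np.exp(log_pois) * prior[None, :]).sum(axis=1) * (A[1] - A[0])

Nbar, sA2, eps, Nmax = 5.0, np.log(2.0), 1e-4, 200
p0 = pN_table(Nmax, Nbar, sA2)
dp = (pN_table(Nmax, Nbar, sA2 + eps) - pN_table(Nmax, Nbar, sA2 - eps)) / (2.0 * eps)
score = dp / p0                                   # the sufficient observable
I_tot = np.sum(score ** 2 * p0)                   # total Fisher information

def efficiency(s):
    """Fisher information of the single statistic s(N), relative to the total."""
    mean, dmean = np.sum(s * p0), np.sum(s * dp)
    var = np.sum((s - mean) ** 2 * p0)
    return dmean ** 2 / var / I_tot

N = np.arange(Nmax + 1, dtype=float)
print(efficiency(N ** 2))     # naive second moment: suboptimal
print(efficiency(score))      # the sufficient observable: unity by construction
```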

4 Discussion

We have shown that, to first order, the optimal statistics of a discrete field are given by those of the underlying continuous field evaluated at the corresponding point A_N. Since the latter statistic is close to ln²(1+δ), this suggests that the spectrum of A_N is the key observable of the galaxy density field. Furthermore, our results suggest that the additional extraction of the mean of A_N captures the entire information. Our results for sufficient statistics are thus recast in terms of an optimal non-linear transformation N → A_N. As we show next, A_N has a meaningful physical interpretation. For the derivation of the optimal statistic, A_N, or equivalently δ_N = e^{A_N} - 1, was introduced as a convenient mathematical construct: the sufficient statistic of the matter field needs to be evaluated at this point. We now reinterpret these results in terms of reconstructing the underlying field from the observation of N. In a Bayesian setting, the posterior for δ is

    p\left(\delta\,|\,N\right) = \frac{P\left(N\,|\,\delta\right)\, p(\delta)}{P(N)}.    (21)
By definition, A_N maximizes e^{-S(A)}, and thus the right-hand side of the above equation with respect to δ. Therefore we can now interpret A_N as the maximum a posteriori (MAP) solution in a Bayesian reconstruction of the field. The generalization of these one-dimensional considerations to the multipoint case, using the full joint probability, is clear. Statistics optimal with respect to a parameter α are now given by

    \hat{o} = -\frac{\partial S}{\partial \alpha}\left(\vec{A}_N\right) - \frac{1}{2}\,\frac{\partial}{\partial \alpha}\ln\det S''\left(\vec{A}_N\right),    (22)
where S'' is the curvature matrix evaluated at the saddle-point solution for the field. These coupled equations result, in general, in a more complicated, non-local transformation of the galaxy field. The Bayesian reconstruction of the matter field with a Poisson lognormal assumption is exactly the approach used in Kitaura et al. (2010). According to the above, their Bayesian MAP equations are identical to our saddle-point equations. They implemented and solved these equations in three dimensions, which proves the feasibility of our approach even in the more involved multipoint case. On the other hand, their good overall success in reconstructing the density field, and the direct connection we revealed between the MAP solution and sufficient statistics, suggest that the logarithmic transformation of this reconstructed field does capture most of the information in the data.
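In the one-point case, the identification of the saddle point with the MAP solution can be illustrated directly: maximizing the posterior p(δ|N) ∝ P(N|δ) p(δ) over A = ln(1+δ) with a generic optimizer lands on the same point as Newton-Raphson on the saddle-point equation. A hypothetical sketch, with scipy standing in for the full three-dimensional machinery of Kitaura et al. (2010):

```python
import numpy as np
from scipy.optimize import minimize_scalar

Nbar, sA2, N = 5.0, np.log(2.0), 12          # arbitrary illustrative values

def neg_ln_posterior(A):
    """-ln posterior up to constants: Poisson likelihood times Gaussian prior in A."""
    lam = Nbar * np.exp(A)
    return lam - N * A + (A + sA2 / 2.0) ** 2 / (2.0 * sA2)

A_map = minimize_scalar(neg_ln_posterior).x  # generic MAP maximisation

A = 0.0                                      # Newton-Raphson on the saddle equation
for _ in range(50):
    lam = Nbar * np.exp(A)
    A -= (sA2 * (lam - N) + A + sA2 / 2.0) / (sA2 * lam + 1.0)

print(A_map, A)                              # the two coincide
```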
Our initial theoretical results suggest recipes for efficient observables from galaxy fields. The simplest is the following: map N in each cell to A_N and extract the spectrum and mean of the new field. The mapping requires only an estimate of N̄ and σ_A², which can be obtained from counts-in-cells. There are some obvious generalizations and refinements that are left for future work. The multipoint case, including the curvature term, can be implemented with straightforward extensions of the algorithms of Kitaura et al. (2010). Further, these methods should be tested in simulations to see to what extent the conclusions from the lognormal model hold for real dark matter distributions. Refinements of the statistical model can include the use of the exact sufficient observable of the matter field and/or more accurate PDFs. Deviations from Poisson sampling and diverse biasing schemes can be modeled as well.
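The simplest recipe can be sketched end to end: Poisson-sample a lognormal field, map the counts cell by cell to A_N, and measure the mean and power spectrum of the transformed field. The toy sketch below is ours; the mock log-density is uncorrelated from cell to cell, so its "spectrum" is flat and purely illustrative:

```python
import numpy as np

def A_of_N(N, Nbar, sA2, iters=60):
    """Vectorized cell-wise map N -> A_N via Newton-Raphson on the saddle equation."""
    A = np.zeros_like(N, dtype=float)
    for _ in range(iters):
        lam = Nbar * np.exp(A)
        A -= (sA2 * (lam - N) + A + sA2 / 2.0) / (sA2 * lam + 1.0)
    return A

rng = np.random.default_rng(1)
n, Nbar, sig2 = 32, 5.0, 1.0
sA2 = np.log(1.0 + sig2)
# mock log-density: uncorrelated Gaussian cells (a stand-in for a real survey grid)
A_true = rng.normal(-sA2 / 2.0, np.sqrt(sA2), size=(n, n, n))
counts = rng.poisson(Nbar * np.exp(A_true))      # Poisson-sampled galaxy counts

A_N = A_of_N(counts, Nbar, sA2)                  # the optimal local transform
mean_AN = A_N.mean()                             # first observable: the mean
spec = np.abs(np.fft.fftn(A_N - mean_AN)) ** 2 / A_N.size  # second: the spectrum
print(mean_AN, spec.mean())
```

In a realistic application the mock field would carry spatial correlations, and the spectrum of A_N would be binned in wavenumber; the transform itself is unchanged.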


We thank Mark C. Neyrinck for his careful reading of the manuscript, triggering improvements to the paper. We acknowledge NASA grants NNX12AF83G and NNX10AD53G for support.


  • Carron (2012) Carron J., 2012, Physical Review Letters, 108, 071301
  • Carron & Neyrinck (2012) Carron J., Neyrinck M. C., 2012, ApJ, 750, 28
  • Carron & Szapudi (2013) Carron J., Szapudi I., 2013, MNRAS, 434, 2961
  • Coles & Jones (1991) Coles P., Jones B., 1991, MNRAS, 248, 1
  • Joachimi et al. (2011) Joachimi B., Taylor A. N., Kiessling A., 2011, MNRAS, 418, 145
  • Kayo et al. (2001) Kayo I., Taruya A., Suto Y., 2001, ApJ, 561, 22
  • Kitaura et al. (2010) Kitaura F.-S., Jasche J., Metcalf R. B., 2010, MNRAS, 403, 589
  • Kitaura et al. (2013) Kitaura F.-S., Yepes G., Prada F., 2013, ArXiv e-prints
  • Meiksin & White (1999) Meiksin A., White M., 1999, MNRAS, 308, 1179
  • Neyrinck et al. (2006) Neyrinck M. C., Szapudi I., Rimes C. D., 2006, MNRAS, 370, L66
  • Neyrinck et al. (2009) Neyrinck M. C., Szapudi I., Szalay A. S., 2009, ApJ, 698, L90
  • Neyrinck et al. (2011) Neyrinck M. C., Szapudi I., Szalay A. S., 2011, ApJ, 731, 116
  • Percival et al. (2007) Percival W. J., Nichol R. C., Eisenstein D. J., Frieman J. A., Fukugita M., Loveday J., Pope A. C., Schneider D. P., Szalay A. S., Tegmark M., Vogeley M. S., Weinberg D. H., Zehavi I., Bahcall N. A., Brinkmann J., Connolly A. J., Meiksin A., 2007, ApJ, 657, 645
  • Rimes & Hamilton (2005) Rimes C. D., Hamilton A. J. S., 2005, MNRAS, 360, L82
  • Rimes & Hamilton (2006) Rimes C. D., Hamilton A. J. S., 2006, MNRAS, 371, 1205
  • Scoccimarro et al. (1999) Scoccimarro R., Zaldarriaga M., Hui L., 1999, ApJ, 527, 1
  • Seo et al. (2011) Seo H.-J., Sato M., Dodelson S., Jain B., Takada M., 2011, ApJ, 729, L11
  • Seo et al. (2012) Seo H.-J., Sato M., Takada M., Dodelson S., 2012, ApJ, 748, 57
  • Simpson et al. (2013) Simpson F., Heavens A. F., Heymans C., 2013, ArXiv e-prints
  • Simpson et al. (2011) Simpson F., James J. B., Heavens A. F., Heymans C., 2011, Physical Review Letters, 107, 271301
  • Springel et al. (2005) Springel V., White S. D. M., Jenkins A., Frenk C. S., Yoshida N., Gao L., Navarro J., Thacker R., Croton D., Helly J., Peacock J. A., Cole S., Thomas P., Couchman H., Evrard A., Colberg J., Pearce F., 2005, Nature, 435, 629
  • Weinberg (1992) Weinberg D. H., 1992, MNRAS, 254, 315
  • Yu et al. (2011) Yu Y., Zhang P., Lin W., Cui W., Fry J. N., 2011, Phys. Rev. D, 84, 023523
  • Zhang et al. (2011) Zhang T.-J., Yu H.-R., Harnois-Déraps J., MacDonald I., Pen U.-L., 2011, ApJ, 728, 35