Comparing cosmic web classifiers using information theory


Florent Leclercq Institute of Cosmology and Gravitation (ICG), University of Portsmouth,
Dennis Sciama Building, Burnaby Road, Portsmouth PO1 3FX, United Kingdom
   Guilhem Lavaux Institut d’Astrophysique de Paris (IAP), UMR 7095, CNRS – UPMC Université Paris 6, Sorbonne Universités, 98bis boulevard Arago, F-75014 Paris, France Institut Lagrange de Paris (ILP), Sorbonne Universités,
98bis boulevard Arago, F-75014 Paris, France
   Jens Jasche Excellence Cluster Universe, Technische Universität München,
Boltzmannstrasse 2, D-85748 Garching, Germany
   Benjamin Wandelt Institut d’Astrophysique de Paris (IAP), UMR 7095, CNRS – UPMC Université Paris 6, Sorbonne Universités, 98bis boulevard Arago, F-75014 Paris, France Institut Lagrange de Paris (ILP), Sorbonne Universités,
98bis boulevard Arago, F-75014 Paris, France
Department of Physics, University of Illinois at Urbana-Champaign,
1110 West Green Street, Urbana, IL 61801, USA
Department of Astronomy, University of Illinois at Urbana-Champaign,
1002 West Green Street, Urbana, IL 61801, USA
July 20, 2019

We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-web, diva and origami for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the way towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

I Introduction

The large-scale distribution of matter in the Universe is not uniform, but forms a complex pattern known as the cosmic web (Bond, Kofman & Pogosyan, 1996; van de Weygaert & Bond, 2008). As it retains memory about structure formation processes, it contains a rich variety of astrophysical and cosmological information. Applications of mapping the cosmic web include in particular: correlating with galaxy properties (e.g. Blanton et al., 2005); studying the effect of the large-scale structure (LSS) on light propagation through cosmic expansion, dust extinction, and absorption by the intergalactic medium (e.g. Planck Collaboration, 2015); testing general relativity (Falck, Koyama & Zhao, 2015); probing dark matter annihilation along caustics (Vogelsberger & White, 2011); and looking for “bullet cluster” objects (Harvey et al., 2015). Properties of cosmic web elements can also be viewed as statistical summaries of the large-scale structure, and serve as alternatives to correlation functions in order to learn about cosmological parameters (for recent results, see e.g. de Haan et al., 2016, using clusters; Hamaus et al., 2016, using voids).

Important tools for cosmic web analysis are classifiers, i.e. algorithms that dissect the entire large-scale structure into its structural elements. In contrast to structure finders that focus on one component at a time (typically clusters, filaments, or voids), they allow an analysis of the connection between cosmic web components, identified in the same framework. The richly-structured morphology of the cosmic web is simultaneously sensitive to the original phases of the field, the local density and velocity, and the growth history. Classifiers reduce this complex information to the common concepts of voids, sheets, filaments, and clusters. Many such algorithms have been proposed over the last decade, exploiting different physical information to perform the classification: the eigenvalues of the tidal tensor (the T-web, Hahn et al., 2007, and its extensions, Forero-Romero et al., 2009); the eigenvalues of the velocity shear tensor (the V-web, Hoffman et al., 2012, and its particle-based formulation, Fisher, Faltenbacher & Johnson, 2016); the eigenvalues of the shear of the Lagrangian displacement field (diva, Lavaux & Wandelt, 2010); the number of orthogonal axes along which stream-crossing occurs (origami, Falck, Neyrinck & Szalay, 2012). Different classifiers provide different insights into cosmic web morphology. The aim of this paper is to offer a principled way of choosing among possible classifiers, depending on the application of interest.

As outlined by Leclercq, Jasche & Wandelt (2015b) and further demonstrated in this work, the need for information theory in cosmic web analysis uniquely emerges from the uncertainties inherent to actual observations, as opposed to the unique answer provided by any one simulation. Indeed, building a complete cosmographic description of the real Universe from galaxy positions requires high-dimensional, non-linear probabilistic methods. As a response, Bayesian large-scale structure inference (Lahav et al., 1994; Zaroubi, 2002; Erdoǧdu et al., 2004; Kitaura & Enßlin, 2008; Jasche & Kitaura, 2010; Jasche et al., 2010a; Jasche & Wandelt, 2013; Kitaura, 2013; Wang et al., 2013, 2014) offers a methodical approach. It has been shown recently that resulting reconstructions can be used for the detection of cosmic web elements (halos, Merson et al., 2016; voids, Leclercq et al., 2015) and for the application of cosmic web classifiers (Nuza et al., 2014; Leclercq, Jasche & Wandelt, 2015b; Leclercq et al., 2016).

In previous work, we studied the dynamic cosmic web of the nearby Universe, relying on the analysis of the Sloan Digital Sky Survey (SDSS) main galaxy sample with the borg algorithm (Jasche, Leclercq & Wandelt, 2015) and using different classifiers. Specifically, in Leclercq, Jasche & Wandelt (2015b), we used the T-web definition; and in Leclercq et al. (2016) we used diva and origami. A legitimate question, as yet unanswered, regards the relative merits of our different cosmic web maps, which are linked to the relative performance of the different classifiers, depending on the desired use. These possible applications fall into three broad classes: (i) optimal parameter inference, (ii) model comparison, and (iii) prediction of future observations. In each case, the optimal choice of a classifier is naturally expressed as a Bayesian experimental design problem. Beyond the Gaussian assumption and Fisher matrix forecasts (which are known to suffer from severe shortcomings, see Wolz et al., 2012), information-theoretic approaches to the design of experiments and analysis procedures, especially for the second and third kind of problems, remain largely unused in cosmology (see however Bassett, 2005, concerning the optimal design of cosmological surveys for parameter estimation). Nevertheless, interestingly, problems that share strong mathematical similarity have been studied in the bioinformatics literature (e.g. Vanlier et al., 2012, 2014).

One of the most commonly used and versatile Bayesian design criteria is to maximize the mutual information between the data and some quantity of interest. Mutual information is an information-theoretic notion based on entropy that reflects how much of the uncertainty in one random variable is reduced by knowing about the other. In Leclercq, Jasche & Wandelt (2015a), we discussed an optimal decision-making criterion for segmenting the cosmic web into different structure types on the basis of their respective probabilities and the strength of data constraints. In the present paper, we use classifier utilities and the concept of mutual information to extend the decision problem to the space of classifiers. We illustrate this methodological discussion with three cosmological problems of the types mentioned above: (i) optimization of the information content of cosmic web maps, (ii) discrimination of dark energy models, and (iii) prediction of galaxy colors. In doing so, we quantify the relative performance of the T-web, diva and origami for each of these applications.

After discussing information mapping in the cosmic web in section II, we introduce utilities for cosmic web classifiers in section III. We discuss our results and give our conclusions in section IV. The relevant notions of information theory and of Bayesian experimental design are respectively reviewed in appendices A and B.

II Mapping information in the cosmic web

The goal of this section is to introduce probabilistic maps of the cosmic web and assess their information content. We briefly review Bayesian large-scale structure analysis in section II.1. We then discuss probabilistic classifications of the cosmic web in section II.2 and introduce the relevant information-theoretic notions in section II.3.

ii.1 Bayesian large-scale structure analysis

The cosmic web maps used in this work have been built upon results previously obtained by the application of borg (Bayesian Origin Reconstruction from Galaxies, Jasche & Wandelt, 2013) to the SDSS main galaxy sample (Jasche, Leclercq & Wandelt, 2015). borg is a Bayesian large-scale structure inference code that reconstructs the primordial density fluctuations and produces physical reconstructions of the dark matter distribution that underlies observed galaxies, by assimilating the survey data into a cosmological structure formation model. To do so, it samples a complex posterior distribution in a multi-million dimensional parameter space (corresponding to the voxels of the discretized domain) by means of the Hamiltonian Monte Carlo algorithm (Duane et al., 1987).

For each move in parameter space, the code does several evaluations of the data model, which involves second-order Lagrangian perturbation theory (see e.g. Bernardeau et al., 2002) to describe large-scale structure formation between the initial density field and the present day. In this fashion, the code jointly accounts for the shape of the three-dimensional matter field and its formation history, in the linear and mildly non-linear regimes. Besides large-scale structure formation, borg accounts for uncertainties coming from luminosity-dependent galaxy biases and observational effects such as selection functions, the survey mask, and shot noise. The distribution of galaxies is modeled as an inhomogeneous Poisson process on top of evolved, biased density fields. For a more extensive discussion of the borg data model, the reader is referred to chapter 4 in Leclercq (2015).
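The inhomogeneous Poisson model can be illustrated with a toy sketch. The power-law bias form, grid size, and mean density below are illustrative stand-ins, not borg's actual data model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy evolved density contrast field on a small grid
# (a stand-in for one borg density sample)
delta = rng.lognormal(mean=0.0, sigma=0.5, size=(16, 16, 16)) - 1.0

def poisson_galaxy_mock(delta, nbar=2.0, bias=1.5, response=1.0, rng=rng):
    """Draw galaxy counts from an inhomogeneous Poisson process.

    Intensity: lambda = response * nbar * (1 + delta)**bias
    (an illustrative power-law bias, not borg's actual bias model).
    """
    rho = np.clip(1.0 + delta, 0.0, None)   # non-negative density
    lam = response * nbar * rho**bias       # Poisson intensity per voxel
    return rng.poisson(lam)

counts = poisson_galaxy_mock(delta)
```

Masked or incomplete survey regions would enter through a spatially varying `response` field, which multiplies the intensity voxel by voxel.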

Starting from samples of inferred initial conditions, which contain the data constraints, we perform a non-linear filtering step (see chapter 7 in Leclercq, 2015). This is achieved by evolving samples forward in time with second-order Lagrangian perturbation theory (2LPT) to an intermediate redshift, then running a constrained simulation with the cola method (Tassev, Zaldarriaga & Eisenstein, 2013) from that redshift down to the present.

When producing the maps used in this work (Leclercq, Jasche & Wandelt, 2015b; Leclercq et al., 2016), we used a set of 1,097 non-linear borg-cola samples, whose initial conditions are defined on a comoving cubic grid of voxels. The evolved realizations have been obtained by running cola from these initial conditions. Whenever necessary, particles are binned to the grid using the cloud-in-cell scheme.

ii.2 Classifications

Classifier: T-web | diva | origami
Type: Eulerian | Lagrangian | Lagrangian
Void: no λ_a > 0 | no μ_a > 0 | no stream-crossing
Sheet: one λ_a > 0 | one μ_a > 0 | stream-crossing along one axis
Filament: two λ_a > 0 | two μ_a > 0 | stream-crossing along two orthogonal axes
Cluster: three λ_a > 0 | three μ_a > 0 | stream-crossing along three orthogonal axes
Table 1: Rules for classification of structure types according to the T-web, diva, and origami procedures (λ_a: eigenvalues of the tidal tensor; μ_a: eigenvalues of the displacement shear).

This paper focuses on the possibility to classify the cosmic web into four different structure types: voids, sheets, filaments, and clusters. Any of the algorithms cited in the introduction can be used on our set of constrained realizations. However, for the purpose of this paper, we will compare the results of three classifiers:

  • the T-web (Hahn et al., 2007),

  • diva (Lavaux & Wandelt, 2010),

  • and origami (Falck, Neyrinck & Szalay, 2012).

With the T-web, structures are classified according to the sign of the eigenvalues λ_1, λ_2, λ_3 of the tidal field tensor T, the Hessian of the rescaled gravitational potential Φ:

T_ij ≡ ∂²Φ / (∂x_i ∂x_j),

where Φ obeys the reduced Poisson equation

∇²Φ = δ,

δ being the local density contrast. A voxel belongs to a cluster, a filament, a sheet or a void, if, respectively, three, two, one or zero of the λ_a are positive. The T-web is a Eulerian procedure, in the sense that it operates at the level of voxels of the discretized domain. It can be applied at any time, but does not use the time-evolution of structures to classify them.
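The T-web rule can be sketched numerically: solve the reduced Poisson equation in Fourier space, build the Hessian of the potential, and count positive eigenvalues per voxel. This is a minimal illustration assuming a periodic box and a sharp threshold at zero, not the paper's implementation:

```python
import numpy as np

def tweb_classify(delta, boxsize=1.0):
    """Count positive eigenvalues of the tidal tensor per voxel.

    Returns an integer field: 0=void, 1=sheet, 2=filament, 3=cluster.
    """
    n = delta.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                 # avoid division by zero for the mean mode
    delta_k = np.fft.fftn(delta)
    phi_k = -delta_k / k2             # reduced Poisson equation in Fourier space
    phi_k[0, 0, 0] = 0.0
    kvecs = (kx, ky, kz)
    T = np.empty((3, 3) + delta.shape)
    for i in range(3):
        for j in range(3):
            # Hessian component T_ij = d^2 phi / dx_i dx_j  ->  -k_i k_j phi(k)
            T[i, j] = np.real(np.fft.ifftn(-kvecs[i] * kvecs[j] * phi_k))
    # Stack (3,3) matrices into the trailing axes and diagonalize per voxel
    eigvals = np.linalg.eigvalsh(np.moveaxis(T, (0, 1), (-2, -1)))
    return (eigvals > 0).sum(axis=-1)

rng = np.random.default_rng(0)
delta = rng.normal(size=(8, 8, 8))
delta -= delta.mean()
web = tweb_classify(delta)
```

The same counting logic applies to any symmetric tensor field, which is why the diva rule below differs only in which tensor is diagonalized.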

In contrast, Lagrangian classifiers rely on the displacement field Ψ, which maps the initial position q of particles to their final position x (see e.g. Bernardeau et al., 2002):

x(q) = q + Ψ(q).
Such classifiers provide a description of the cosmic web at the level of the initial grid of particles.

Instead of the tidal field tensor T, diva uses the shear of the displacement field Ψ, defined by

ε_ij ≡ −(1/2) (∂Ψ_i/∂q_j + ∂Ψ_j/∂q_i).

Denoting by μ_1, μ_2, μ_3 the eigenvalues of ε, a particle’s structure type is defined as before by counting the number of positive μ_a (instead of λ_a). Note that at first order in Lagrangian perturbation theory (the Zel’dovich approximation), ε and T are proportional, so the T-web and diva yield the same classification of the cosmic web. Differences only arise at higher order.

An alternative way to classify particles is to consider the evolution of the matter streams they belong to. During gravitational collapse, “shell-crossing” happens when different streams pass through a single location. origami defines structure types according to the number of orthogonal axes along which a Lagrangian patch undergoes shell-crossing. Specifically, void, sheet, filament, and cluster particles are defined as particles that have been crossed along zero, one, two, or three orthogonal axes, respectively. The T-web, diva and origami rules for cosmic web classification are summarized in table 1.

In Bayesian large-scale structure inference, uncertainties are quantified by the variation of density fields among constrained samples. As shown in previous work (Jasche et al., 2010b; Leclercq, Jasche & Wandelt, 2015b; Lavaux & Jasche, 2016; Leclercq et al., 2016), uncertainties can be self-consistently propagated to structure type classification as follows. Let us denote by c one of the classifiers. By applying c to a specific large-scale structure realization, we obtain a unique answer in the form of four scalar fields T_i(x_k) that obey the following conditions for any x_k:

T_i(x_k) ∈ {0, 1} and Σ_{i∈S} T_i(x_k) = 1,

where i ∈ S ≡ {void, sheet, filament, cluster}, and where x_k is the location of a voxel if c is a Eulerian classifier, or the location of a particle on the initial grid if c is a Lagrangian classifier. By applying c to the complete set of constrained realizations and counting the relative frequencies of structure types at each spatial coordinate x_k, we obtain a posterior probability mass function (pmf) in the form of four scalar fields P(T_i(x_k)|d) that take their values in the range [0, 1] and sum up to one at each x_k:

P(T_i(x_k)|d) ∈ [0, 1] and Σ_{i∈S} P(T_i(x_k)|d) = 1.
The corresponding prior probabilities can be estimated by applying the same procedure to a set of unconstrained realizations produced using the same setup as for constrained samples. We found that these probabilities are well approximated by Gaussians, whose means and standard deviations are given in table 2 for the primordial large-scale structure and in table 3 for the late-time large-scale structure.
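The frequency-counting step that produces the posterior pmf can be sketched as follows (random labels stand in for actual classifier outputs over the posterior samples):

```python
import numpy as np

def webtype_pmf(samples):
    """Posterior pmf of the four structure types from classification samples.

    samples: integer array of shape (n_samples, ...) with labels 0..3
             (0=void, 1=sheet, 2=filament, 3=cluster).
    Returns an array of shape (4, ...) that sums to one at each location.
    """
    n_samples = samples.shape[0]
    # Relative frequency of each structure type across samples
    return np.stack([(samples == t).sum(axis=0) / n_samples for t in range(4)])

rng = np.random.default_rng(1)
samples = rng.integers(0, 4, size=(100, 8, 8, 8))  # toy stand-in classifications
pmf = webtype_pmf(samples)
```

Applying the same function to classifications of unconstrained realizations yields the prior pmf mentioned above.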

With Eulerian classifiers, a classification of the primordial large-scale structure is obtained when the x_k are the voxels of the grid on which the initial density field is defined (see section IV in Leclercq, Jasche & Wandelt, 2015b). With Lagrangian classifiers, it is directly obtained by looking at the initial grid of particles (see section II in Leclercq et al., 2016). The web-type posterior maps for the primordial large-scale structure in the SDSS volume are shown in figure 1.

In Leclercq et al. (2016), we also showed how to translate the result of Lagrangian classifiers from particles’ positions to Eulerian voxels, so as to obtain a description of the late-time large-scale structure: particles transport their Lagrangian structure type along their trajectories, and are binned to the grid at their final Eulerian position. In figure 2, we show the web-type posterior for evolved structures in the SDSS. We focus on these maps in the rest of this paper.
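The transport of Lagrangian labels to the final grid can be sketched with nearest-grid-point binning (the paper uses the cloud-in-cell scheme; NGP is used here for brevity):

```python
import numpy as np

def lagrangian_to_eulerian(labels, positions, ngrid, boxsize):
    """Bin particle web-type labels (0..3) to a grid at final positions.

    Returns per-voxel frequencies of each type, shape (4, ngrid, ngrid, ngrid).
    """
    # Nearest-grid-point index of each particle, periodic wrap
    idx = np.floor(positions / boxsize * ngrid).astype(int) % ngrid
    flat = np.ravel_multi_index((idx[:, 0], idx[:, 1], idx[:, 2]), (ngrid,) * 3)
    counts = np.stack([
        np.bincount(flat[labels == t], minlength=ngrid**3).reshape((ngrid,) * 3)
        for t in range(4)
    ]).astype(float)
    total = counts.sum(axis=0)
    # Normalize to frequencies; empty voxels stay at zero
    return np.divide(counts, total, out=np.zeros_like(counts), where=total > 0)

rng = np.random.default_rng(2)
positions = rng.uniform(0.0, 100.0, size=(5000, 3))   # toy final positions
labels = rng.integers(0, 4, size=5000)                # toy Lagrangian labels
freqs = lagrangian_to_eulerian(labels, positions, ngrid=8, boxsize=100.0)
```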


Figure 1: Slices through the posterior probabilities for different structure types (from left to right: void, sheet, filament, and cluster), in the primordial large-scale structure in the Sloan volume. These four three-dimensional probabilities sum up to one at each location. From top to bottom, structure types are defined using the T-web, diva and origami. The first row (T-web, reproduced from Leclercq, Jasche & Wandelt, 2015b) shows voxel grids; the second and third rows (diva and origami, reproduced from Leclercq et al., 2016) show Lagrangian grids of particles.


Figure 2: Slices through the posterior probabilities for different structure types (from left to right: void, sheet, filament, and cluster), in the late-time large-scale structure in the Sloan volume. These four three-dimensional probabilities sum up to one on a voxel basis. From top to bottom, structure types are defined using the T-web, diva and origami. The first row is reproduced from Leclercq, Jasche & Wandelt (2015b), the second and third rows from Leclercq et al. (2016).
Table 2: Prior probabilities assigned by the T-web, diva, and origami to the different structure types in the primordial large-scale structure, i.e. in the initial density field for the T-web, and on the Lagrangian grid of particles for diva and origami.
Table 3: Prior probabilities assigned by the T-web, diva, and origami to the different structure types in the late-time large-scale structure.

ii.3 Information-theoretic comparison of classifiers

The posterior probability maps for each classifier show complex and distinct features, coming both from the quantification of observational uncertainty and from the various physical criteria used to define structures. It is therefore important to use appropriate tools to characterize their information content and agreement. As discussed in Leclercq, Jasche & Wandelt (2015b), information theory offers a natural language to address these questions. In this framework, the uncertainty content of a pmf is its Shannon entropy (Shannon, 1948), H (in shannons, Sh); the information gain due to the data is the relative entropy or Kullback-Leibler divergence (Kullback & Leibler, 1951) of the posterior from the prior, D_KL; finally, the similarity between two pmfs is measured by their Jensen-Shannon divergence (Lin, 1991), D_JS (see appendix A). For our analysis, these quantities read generically

H[P(T(x_k)|d)] = −Σ_{i∈S} P(T_i(x_k)|d) log₂ P(T_i(x_k)|d),

D_KL[P(T(x_k)|d) ‖ P(T(x_k))] = Σ_{i∈S} P(T_i(x_k)|d) log₂ [P(T_i(x_k)|d) / P(T_i(x_k))],

D_JS(P_1, P_2) = H[(P_1 + P_2)/2] − [H(P_1) + H(P_2)]/2,

where the space of structure types is S ≡ {void, sheet, filament, cluster} and the space of classifiers is {T-web, diva, origami}.
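As an illustration, these quantities can be computed for pmfs over the four web types as follows (base-2 logarithms give results in shannons; this is a sketch, not the paper's actual analysis scripts):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in shannons (bits); p sums to one along axis 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p), 0.0)
    return -terms.sum(axis=0)

def kl_divergence(p, q):
    """Relative entropy D_KL(p || q), assuming q > 0 wherever p > 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p / q), 0.0)
    return terms.sum(axis=0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetric, bounded by 1 Sh for two pmfs."""
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

posterior = np.array([0.7, 0.2, 0.05, 0.05])   # e.g. P(T_i|d) at one location
prior = np.array([0.25, 0.25, 0.25, 0.25])     # illustrative flat prior
gain = kl_divergence(posterior, prior)          # information gained from the data
```

Stacking the four probabilities along axis 0 of a four-dimensional array applies the same functions voxel-wise to entire maps.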

Figure 3: Slices through the Kullback-Leibler divergence of the web-type posterior from the prior. This quantity, defined by equation (8), represents the information gained on structure type classification by looking at SDSS galaxies. It corresponds to the joint utility for parameter inference of the SDSS data set and the considered classifier (see section III.1). From left to right, structures are defined using the T-web, diva and origami.
Figure 4: Slices through the Jensen-Shannon divergence between pairs of web-type posteriors, as indicated above the panels. The Jensen-Shannon divergence, defined by equation (9), is a symmetric measure of the disagreement between the different classifiers, bounded between 0 and 1 Sh.

Slices through the voxel-wise Kullback-Leibler divergence of web-type posteriors from their respective priors, for different classifiers, are shown in figure 3. As expected, the information gain is close to zero outside of the survey boundaries. There, the information gain fluctuates around small residual values for each classifier. These values are small, but positive. This artifact is due to the limited number of samples used in our analysis: because of the finite length of the Markov Chain, the sampled representation of the posterior has not yet fully converged to the true posterior, and it can therefore show artificial information gain with respect to the prior (see also the discussion in Leclercq, Jasche & Wandelt, 2015a). In observed regions, SDSS galaxies are informative about the underlying cosmic web at the level of several shannons in most of the volume for the T-web and diva, this information being more evenly distributed in the diva map. With origami, the information gain can be large in shell-crossed structures, but in most of the volume, filled with voids, it cannot exceed the value corresponding to the certain inference of a void, i.e. −log₂ of origami’s prior void probability, which is limited by origami’s strong prior preference for voids.

In figure 4, we show slices through the Jensen-Shannon divergence of pairs of web-type posteriors. These maps confirm and precisely quantify the visual impression, obtained with figure 2, that the T-web and diva classifications do not differ much and are far from the origami result.

III Classifier utilities

This section describes how to set up utility functions in the space of classifiers. This space can contain any of the algorithms mentioned in the introduction. Formally, all the implementation details (e.g. threshold, smoothing scale, method-internal parameters) should also be considered as yielding different classifiers. For simplicity, we limit the space of classifiers to {T-web, diva, origami}, using the detailed setups described in Leclercq, Jasche & Wandelt (2015b) and Leclercq et al. (2016). In particular, we adopted natural choices for the method-internal parameters suggested by their authors and by our borg analysis,¹ but did not further explore these choices in this study. (¹ In particular, T-web classifications are defined at the comoving Eulerian scale of the density grid, and diva and origami classifications are defined at the comoving Lagrangian scale of the initial particle lattice in the same cube.)

In analogy with the formalism of Bayesian experimental design (see appendix B), we introduce the utility of a classifier c as U(c) ≡ ⟨U(d, c)⟩_d, where U(d, c) is the joint utility of a data set d and a classifier c, and the expectation is taken over possible data sets. The decision problem consists in maximizing the utility U(c).

As noted in the introduction, the choice of a classifier should be specific to the application of interest. For example, a classifier which efficiently estimates the shape of the cosmic web may not extract the relevant information for discriminating among cosmological models, or may not be useful for predicting future observations. In this section, we introduce some Bayesian utility functions for various experimental goals. We illustrate each situation with a physical question and in each case, we give an estimate of the relative performance of the different classifiers considered in this paper.

iii.1 Utility for parameter inference: cosmic web analysis

In Bayesian experimental design, common utility functions that aim at optimal parameter inference are information-based (see section B.1). Following this idea, we propose that the optimal classifier for cosmic web analysis should simply maximize the expected information gain, i.e. the utility U_1(c) ≡ ⟨U_1(d, c)⟩_d, with

U_1(d, c) ≡ D_KL[P(T|d, c) ‖ P(T|c)].

Note that in this case, U_1(d, c) depends on the location (for each data set, it is a three-dimensional map of the large-scale structure), but U_1(c) should not depend on the location once the expectation over all possible data realizations is taken.

Using property (62), we obtain

U_1(c) = I(T; d | c),

the mutual information between the inferred parameters (the web-types) and the data. This utility will therefore maximize the information content of the inferred cosmic web map.

Figure 3 shows the joint utility U_1(d, c) for different classifiers and for one particular data set d, namely the SDSS galaxies used in our borg analysis (see section 2 in Jasche, Leclercq & Wandelt, 2015). In order to estimate U_1(c), one should in principle consider the expectation of such maps over all possible data sets. This task involves building many synthetic galaxy catalogs mimicking the SDSS and performing on them a borg analysis followed by different cosmic web classifications. Considering computational time requirements, such an endeavor is unattainable. Instead, we propose to estimate U_1(c) by considering U_1(d, c) at different locations. This idea is analogous to the hypothesis of ergodicity: if the SDSS is a fair sample of the Universe, then the ensemble average and the sample average of any quantity coincide. For cosmic web analysis, this means supposing that the SDSS contains a large enough variety of voids, sheets, filaments, and clusters so that all possible configurations of such structures are represented fairly.

Utility | T-web | diva | origami
U_1 (parameter inference) [Sh] | 0.4573 | 0.2664 | 0.1347
U_2 (insensitivity to artifacts, inference) [Sh⁻¹] | 36.28 | 55.09 | 20.92
U_3 (model comparison) [ Sh] | 5.53 | 2.22 | 3.24
U_4 (insensitivity to artifacts, model comparison) [Sh⁻¹] | 1454.2 | 1782.9 | 861.06
U_5 (prediction of new observations) [Sh] | 0.0152 | 0.0101 | 0.0143
Table 4: Estimation of the utility of different classifiers (the T-web, diva and origami) for different optimization problems: parameter inference (cosmic web analysis, U_1), insensitivity to artifacts for parameter inference (cosmic web analysis, U_2), model comparison (dark energy equation of state, U_3), insensitivity to artifacts for model comparison (dark energy equation of state, U_4), prediction of additional observations (galaxy colors, U_5).

Formally, we introduce the following estimator:

Û_1(c) ≡ (1/N_obs) Σ_{x_k observed} D_KL[P(T(x_k)|d, c) ‖ P(T(x_k)|c)],

where the summation runs over the N_obs voxels of the observed regions, characterized by the three-dimensional survey response operator being positive (see Jasche, Leclercq & Wandelt, 2015). The results, given in table 4, indicate that for cosmic web inference, preference should be given, in this order, to the T-web, diva, then origami. This ordering is mostly due to the very high information gain in T-web clusters, and to the strong prior preference of origami for voids, which limits its information gain – as noted in section II.3.

A disadvantage of using information gain as the utility function is its sensitivity to artifacts. This is a general feature of all information-theoretic quantities that are maximized in case of maximal randomness (such as entropy): they are not only sensitive to “interesting” patterns, but also to “incidental” information. In our case, classifiers have different sensitivities to artifacts in our cosmic web reconstructions, of various origins: noise in the data, approximate physical modeling, the limited number of samples, etc. In order to assess the “risk” taken by different classifiers when producing the final cosmic web map, one needs to quantify the average number of “false positives”. To do so, we propose to use the information gain in unobserved regions as a proxy for the sensitivity to artifacts, and to minimize its expectation value. Therefore, we introduce the utility U_2(c) ≡ 1/⟨u_2(d, c)⟩_d, where

u_2(d, c) ≡ D_KL[P(T(x_k)|d, c) ‖ P(T(x_k)|c)]

and x_k has not been observed when the data set d has been taken.

For the SDSS, with a similar argument as before, U_2(c) is estimated by the inverse of the average artificial information gain in unconstrained regions, i.e.

Û_2(c) ≡ [ (1/N_unobs) Σ_{x_k unobserved} D_KL[P(T(x_k)|d, c) ‖ P(T(x_k)|c)] ]⁻¹,

where the summation now runs over the N_unobs unobserved voxels (i.e. where the survey response operator is zero). Numerical values, given in table 4, show that, from this point of view, diva outperforms the T-web and origami.

Considering U_1 and U_2 simultaneously, a user can make a decision based on a quantitative criterion that weights the utility of different classifiers, accounting for the user’s preferred trade-off between information gain and sensitivity to artifacts.
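The two estimators can be sketched as masked averages of a voxel-wise information-gain map, with the survey response operator acting as the mask (the function name and the toy mask below are illustrative):

```python
import numpy as np

def inference_utilities(info_gain, observed):
    """Estimate the parameter-inference utility and its artifact counterpart.

    info_gain : voxel-wise Kullback-Leibler divergence map [Sh]
    observed  : boolean mask, True where the survey response is positive
    Returns (u1_hat, u2_hat): the mean gain over observed voxels, and the
    inverse of the mean artificial gain over unobserved voxels.
    """
    u1_hat = info_gain[observed].mean()
    u2_hat = 1.0 / info_gain[~observed].mean()
    return u1_hat, u2_hat

rng = np.random.default_rng(3)
# Toy gain map: large gain in "observed" voxels, small residual gain elsewhere
info_gain = np.where(rng.random((16, 16, 16)) > 0.5, 1.0, 0.01)
observed = info_gain > 0.5
u1, u2 = inference_utilities(info_gain, observed)
```

A larger `u1` rewards informative maps, while a larger `u2` rewards classifiers whose unconstrained regions stay close to the prior.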

iii.2 Utility for model selection: dark energy equation of state

Figure 5: Slices through the Jensen-Shannon divergence between the three web-type posteriors corresponding to the three dark energy models, for the classifier indicated above the panels (from left to right: the T-web, diva, and origami). This quantity corresponds to the joint utility for model selection of the SDSS data set and of the considered classifier (see equation (16)). The color scale has been stretched around zero.

Model selection is an important experimental design problem which has generated some research interest (see section B.2). In a Bayesian context, model selection is typically based on the Bayes factor, which measures the amount of evidence that the data provide for one model over another. In cosmology, competing models can be, for example, the standard ΛCDM paradigm and one of its extensions. Our aim in this section is to choose the cosmic web classifier that selects the best features to discriminate between such models.

It has recently been shown that the Jensen-Shannon divergence between posterior predictive distributions can be used as an approximate predictor for the change in the Bayes factor (Vanlier et al., 2014). Following this idea, we propose a model selection utility for classifiers, U_3(c) ≡ ⟨U_3(d, c)⟩_d, where

U_3(d, c) ≡ D_JS[P(T|d, c, M_1), P(T|d, c, M_2)],

M_1 and M_2 being two competing cosmological models.

In the following, we exemplify for three cosmological models in which the dark energy component has different (constant) equations of state w: the standard ΛCDM model (w = −1) and two wCDM models with w ≠ −1. Different values for the equation of state of dark energy mean different expansion histories and growth of structure in the Universe, which affects the late-time morphology of the cosmic web. We aim here at finding the classifier which best separates the predictions of different models. Following equation (15), the joint utility of a data set and a classifier is the Jensen-Shannon divergence between the three probabilities:

U_3(d, c) = D_JS[P(T|d, c, M_1), P(T|d, c, M_2), P(T|d, c, M_3)],

where we need the generalized definition of D_JS (equation (54)). Given property (69), we have

U_3(d, c) = I(M; T),

the mutual information between M and T, respectively the model indicator and the web-type drawn from the mixture of distributions:

P(T|d, c) = (1/3) Σ_a P(T|d, c, M_a).
For the SDSS data set d, the probabilities have already been inferred and discussed within the standard ΛCDM cosmology (see section II.2 and figure 2). To evaluate the two wCDM cases, we ran a set of constrained simulations within wCDM cosmology, corresponding to our existing set.² More precisely, we started from the set of borg-inferred initial phases (obtained by dividing the initial density realizations by the square root of the fiducial power spectrum, in Fourier space) and rescaled the Fourier modes so as to reproduce the linear matter power spectrum for our set of cosmological parameters and for the correct value of w. These power spectra have been obtained with the cosmological Boltzmann code class (Blas, Lesgourgues & Tram, 2011). The resulting initial conditions have been evolved with 2LPT to the starting redshift of the constrained simulations and with 30 cola timesteps from there to the present. During the evolution, we fixed the dark energy equation of state to the corresponding value of w. Finally, we performed cosmic web analysis as before to get the web-type posteriors of the two wCDM models for each classifier. (² This treatment is approximate, since the calculation should in principle involve inference of the initial conditions with a modified version of borg, accounting for w ≠ −1. Considering computational requirements, we leave this exact study for future work.)
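The mode-rescaling step can be sketched in a one-dimensional toy analogue (the power-law spectra below stand in for the class outputs; the actual analysis rescales three-dimensional fields):

```python
import numpy as np

def rescale_modes(delta, pk_fid, pk_new, boxsize=1.0):
    """Rescale Fourier modes of a field so its power spectrum matches pk_new.

    Phases are preserved; amplitudes are multiplied by sqrt(pk_new/pk_fid).
    """
    n = delta.shape[0]
    k = np.abs(2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n))
    delta_k = np.fft.fft(delta)
    ratio = np.ones_like(k)
    nonzero = k > 0
    ratio[nonzero] = np.sqrt(pk_new(k[nonzero]) / pk_fid(k[nonzero]))
    return np.real(np.fft.ifft(delta_k * ratio))

# Illustrative power-law spectra standing in for class outputs
pk_fid = lambda k: k**-1.0
pk_new = lambda k: 0.8 * k**-1.0   # e.g. a lower amplitude for a different w

rng = np.random.default_rng(4)
delta = rng.normal(size=256)
delta_new = rescale_modes(delta, pk_fid, pk_new)
```

Because only the amplitudes change, the rescaled field retains exactly the data-constrained phases of the original sample.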

Figure 5 shows the Jensen-Shannon divergence between the three model posteriors, i.e. the joint utility of the SDSS data set and each of our three classifiers (see equation (16)). There, one can clearly notice that Lagrangian classifiers (diva and origami) pick out more structure than the T-web. In particular, we find that the surroundings of voids are especially sensitive regions for separating the predictions of different dark energy models. This can be easily interpreted: as the cosmic web is affected by dark energy throughout its growth, the Lagrangian displacement field (used by diva and origami) keeps a better memory of the expansion history of the Universe than the final Eulerian position of particles (used by the T-web).
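The generalized Jensen-Shannon divergence between several posteriors, equal to the mutual information between the model indicator and a draw from the mixture, can be sketched as follows (equal model weights are assumed, corresponding to equiprobable models; the pmf values are illustrative):

```python
import numpy as np

def entropy_bits(p, axis=0):
    """Shannon entropy in shannons along the given axis."""
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p), 0.0)
    return -terms.sum(axis=axis)

def js_divergence_n(pmfs, weights=None):
    """Generalized Jensen-Shannon divergence of several pmfs.

    pmfs: array of shape (n_models, n_types, ...); each pmf sums to one
    along the n_types axis. Returns the divergence in shannons:
    H(mixture) - sum_a w_a H(p_a).
    """
    n = pmfs.shape[0]
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights)
    mixture = np.tensordot(w, pmfs, axes=(0, 0))
    return entropy_bits(mixture) - np.tensordot(w, entropy_bits(pmfs, axis=1), axes=(0, 0))

# Toy web-type posteriors under three dark energy models at one voxel
p_m1 = np.array([0.6, 0.2, 0.1, 0.1])
p_m2 = np.array([0.5, 0.3, 0.1, 0.1])
p_m3 = np.array([0.4, 0.3, 0.2, 0.1])
jsd = js_divergence_n(np.stack([p_m1, p_m2, p_m3]))
```

The divergence vanishes when all models predict the same pmf and is bounded above by log₂ of the number of models.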

Since we have only one data set at hand, it is possible, as in section III.1, to use as an estimator of the expected utility the average of the voxel-wise Jensen-Shannon divergence over the survey volume:


and for the corresponding “risk” taken by classifiers when separating different models:


Numerical results are given in table 4. Noticeably, this crude estimator favors the T-web over diva and origami: though the Jensen-Shannon divergence between the different pmfs is more evenly distributed with the T-web (see figure 5), its average value within the entire volume is the highest.
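The joint utility used here can be sketched numerically as follows. This is a minimal numpy illustration of the weighted Jensen-Shannon divergence between per-voxel web-type pmfs; the function names and array layout are assumptions, not the released analysis scripts.

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy in shannons (bits), with the convention 0 log2 0 = 0."""
    p = np.asarray(p, dtype=float)
    terms = np.where(p > 0, -p * np.log2(np.where(p > 0, p, 1.0)), 0.0)
    return terms.sum(axis=axis)

def jensen_shannon(pmfs, weights):
    """Jensen-Shannon divergence of a set of pmfs, in shannons:
    JS = H(sum_m w_m p_m) - sum_m w_m H(p_m).

    pmfs    : array of shape (n_models, ..., n_types), e.g. one web-type pmf
              per model and per voxel
    weights : (n_models,) prior model probabilities, summing to 1
    """
    pmfs = np.asarray(pmfs, dtype=float)
    w = np.asarray(weights, dtype=float)
    mixture = (w.reshape((-1,) + (1,) * (pmfs.ndim - 1)) * pmfs).sum(axis=0)
    # mutual information between the model indicator and the mixture
    return entropy(mixture) - np.tensordot(w, entropy(pmfs), axes=(0, 0))
```

For per-voxel pmfs of shape `(n_models, n_voxels, n_types)`, the crude volume-averaged estimator discussed above is then simply `jensen_shannon(pmfs, weights).mean()`.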

III.3 Utility for predictions: galaxy colors

In the context of optimizing the predictive power of experiments, the expected information gain from the prior to the posterior predictive distributions is a useful utility function. As discussed in section B.3, it is also the mutual information between predicted and upcoming observations, conditional on the experimental design.

Let us denote by $D'$ future observations, or observations already available but not used so far. We introduce the utility of a classifier $C$ to perform predictions via the joint utility of a data set $D$, a classification $T$ and the classifier $C$, which is the information gain on $D'$, i.e.

Figure 6: Slices through maps of structure types in the late-time large-scale structure as observed by the SDSS. The color coding is blue for voids, green for sheets, yellow for filaments, and red for clusters. From left to right, structures are defined using the T-web, diva and origami. These maps are based on the posterior probabilities shown in figure 2 and the Bayesian decision rule of Leclercq, Jasche & Wandelt (2015a) in the fair game situation, in which a decision is made everywhere. Blue galaxies are overplotted as blue squares and red galaxies as red diamonds.

In the following, we exemplify the approach with the prediction of a property of galaxies that has been used neither in our borg inference nor in our cosmic web analyses: their color. More specifically, we started from the objects used in the borg SDSS DR7 run (see Jasche, Leclercq & Wandelt, 2015, section 2). We queried the SDSS database to keep only the objects identified as galaxies after spectral analysis, and to retrieve for each of them its apparent magnitude and color. From the apparent magnitude and the redshift $z$, we computed the absolute magnitude. Absolute magnitudes receive an appropriate $K$-correction using the code of Blanton et al. (2003a); Blanton & Roweis (2007), and an evolution correction using the luminosity evolution model of Blanton et al. (2003b). Each galaxy is then given a color label following the criterion of Li et al. 2006 (formula 7 and table 4): it is “red” if its color satisfies


and “blue” otherwise. Therefore, in the following, the color is seen as a two-valued random variable in the set {blue, red}. The next step is to determine in which web-type environment each galaxy lives, given the different classifiers. To do so, we adopted the criterion of Leclercq, Jasche & Wandelt (2015a) for optimal decision-making, combined with the probabilities presented in section II.2. Since we want to commit to a structure type for each galaxy, we adopted the fair game situation in the notations of Leclercq, Jasche & Wandelt (2015a). This choice ensures that a decision is made for each voxel of the cosmic web map and results in the “speculative maps” of the large-scale structure shown in figure 6. We then assigned to each galaxy the structure type of its voxel, using the Nearest-Grid-Point scheme.
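The Nearest-Grid-Point assignment can be sketched as follows; a minimal illustration assuming a cubic, regularly gridded map, with hypothetical argument names.

```python
import numpy as np

def ngp_web_type(positions, web_type_map, box_min, box_size):
    """Look up the structure type of each galaxy with the Nearest-Grid-Point
    scheme: a galaxy receives the web-type of the voxel that contains it.

    positions    : (n_gal, 3) comoving galaxy positions
    web_type_map : (N, N, N) integer map of structure types
                   (e.g. 0=void, 1=sheet, 2=filament, 3=cluster)
    box_min, box_size : origin and side length of the cubic map volume
    """
    N = web_type_map.shape[0]
    # fractional grid coordinates; floor gives the index of the host voxel
    frac = (np.asarray(positions, dtype=float) - box_min) / box_size * N
    idx = np.clip(np.floor(frac).astype(int), 0, N - 1)
    return web_type_map[idx[:, 0], idx[:, 1], idx[:, 2]]
```

The `np.clip` guards against galaxies lying exactly on the far box boundary.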

ra dec z T-web diva origami color
Table 5: Some rows of the galaxy catalog used in section III.3. The columns are: right ascension and declination (in degrees, J2000.0 equatorial coordinates); redshift; web-type environment as defined by the T-web, diva and origami (void, sheet, filament, or cluster); and galaxy color label (blue or red). The optimal choice of a classifier can be seen as a machine learning problem: in this training set, which classification is the most relevant for predicting galaxy color?
Figure 7: Schematic illustration of the procedure to compute the utility of classifiers for predicting galaxy colors (equations (III.3)–(27)), using the subcatalog given in table 5. The parent entropy and the child entropy in each structure type are computed as described in the text. For each classifier, the utility is the parent entropy minus the weighted average of the children entropies.

Summing up the discussion above, we built a catalog containing, for each galaxy, a color and three structure types, one for each of the classifiers. Some rows of this catalog are given in table 5 as examples. In total, the catalog contains 367,157 galaxies.

In many respects, the question of choosing the best classifier for predicting galaxy colors can now be viewed as a supervised machine learning problem. Each of the input galaxies is assigned a set of attributes (the web-type in which it lives according to each classifier) and a class label (its color; see table 5). The design problem of choosing the most efficient classifier for predicting galaxy properties is analogous to the machine learning problem of determining the most relevant attribute for discriminating among the classes to be learned (for a cosmological example, see e.g. Hoyle et al., 2015).

Following equation (21), the utility of a classifier $C$ for predicting galaxy colors, which we seek to maximize, is


where we have used that $P(D'|C) = P(D')$ (before looking at the data, galaxy colors do not depend on the chosen classifier), and the simplifying assumption that $P(D'|T, D, C) = P(D'|T, C)$ (galaxy colors do not further depend on the data once their web-type environment is specified). It follows that the utility is the mutual information $I(T; D' \,|\, C)$ between the classification and the new observations (see section B.3):

The weighting coefficients to be used in the last line represent the probability that a galaxy lives in web-type $T_i$, given classifier $C$, irrespective of its color. They are approximated as (see also equation (B.3))

$P(T_i \,|\, C) \approx \frac{N(T_i)}{N_\mathrm{gal}} ,$

i.e. the fraction of galaxies (blue or red) that live in web-type $T_i$. Note that this is different from the prior probability for a given voxel to belong to a structure of type $T_i$. This difference accounts in particular for the fact that galaxies live preferentially in the most complex structures of the cosmic web.

The first term in equation (III.3) is the “parent” entropy

$H(D') = -\sum_{c \,\in\, \{\mathrm{blue},\,\mathrm{red}\}} P(c) \log_2 P(c) .$

Similarly, for each classifier and each structure type, the “child” entropy is estimated as

$H(D' \,|\, T_i, C) = -\sum_{c \,\in\, \{\mathrm{blue},\,\mathrm{red}\}} P(c \,|\, T_i, C) \log_2 P(c \,|\, T_i, C) .$
Eventually, the equations above yield an estimator of the utility for each of the classifiers. A schematic illustration of the entire procedure is given in figure 7.
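The whole procedure (parent entropy minus the weighted average of children entropies) can be sketched as follows; a minimal illustration with hypothetical toy counts, not the actual catalog.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in shannons (bits), with the convention 0 log2 0 = 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def color_utility(counts):
    """Estimated utility of a classifier for predicting galaxy colors:
    parent entropy minus the weighted average of the children entropies.

    counts : (n_types, 2) array; counts[i] = (n_blue, n_red) galaxies
             living in web-type i according to the classifier.
    """
    counts = np.asarray(counts, dtype=float)
    n_gal = counts.sum()
    parent = entropy(counts.sum(axis=0) / n_gal)        # H(D')
    weights = counts.sum(axis=1) / n_gal                # P(T_i | C)
    children = np.array([entropy(row / row.sum()) for row in counts])
    return parent - float((weights * children).sum())   # estimated utility
```

With a perfectly predictive classification the utility equals the parent entropy; with an uninformative one it vanishes.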

            blue               red
all         194,503 (53.0%)    172,654 (47.0%)

                       blue                                        red
            void      sheet     filament   cluster      void      sheet     filament   cluster
T-web       19,150    19,290    63,318     92,745       18,456    8,370     42,678     103,150
            (50.9%)   (69.7%)   (59.7%)    (47.3%)      (49.1%)   (30.3%)   (40.3%)    (52.7%)
diva        26,358    27,051    67,515     73,579       22,436    15,289    51,402     83,527
            (54.0%)   (63.9%)   (56.8%)    (46.8%)      (46.0%)   (36.1%)   (43.2%)    (53.2%)
origami     82,805    69,583    29,748     12,367       55,362    60,775    36,589     19,928
            (59.9%)   (53.4%)   (44.8%)    (38.3%)      (40.1%)   (46.6%)   (55.2%)    (61.7%)
Table 6: Number and percentage of blue and red galaxies as a function of their web-type environment. In the first row, the numbers of blue and red galaxies in the entire catalog are reported. For the other rows, each number is the number of blue/red galaxies in the web-type indicated by the column, given the classifier indicated by the row; the percentage is the fraction of galaxies living in this web-type that are blue/red.
Figure 8: Pie diagrams illustrating the data of table 6, with the area of each sector proportional to the corresponding number of galaxies. The color of the sectors represents the structure type (blue for voids, green for sheets, yellow for filaments, and red for clusters), and the borders represent the galaxy color (blue or red).

The result of our analysis is presented in table 6 and illustrated by the pie charts of figure 8. In the table, the first row gives the numbers of blue and red galaxies in our catalog, irrespective of their web-type environment; it permits estimation of the parent entropy. We also quote the numbers of blue and red galaxies that live in voids, sheets, filaments, and clusters, as defined by the T-web (second row), diva (third row) and origami (fourth row). The resulting estimated utilities are given in the last row of table 4.

Several physical comments can be made at this point. All classifiers agree on a general trend, which can be observed in table 6: red galaxies live preferentially in clusters, while blue galaxies live preferentially in sheets and voids. This is in agreement with earlier results (e.g. Hogg et al., 2003; Patiri et al., 2006; Alpaslan et al., 2015). It is interesting to note that, though almost half of the galaxies are found in a void according to origami, its utility stays comparable to that of the other classifiers. In our setup, the T-web and origami have similar performance at predicting galaxy colors, and both outperform diva. This could be due to the weaker sensitivity of diva classifications to the local density, which is known to correlate with galaxy colors (Hogg et al., 2003). It is also notable that for all classifiers, the information gained on galaxy colors once their web-type is known is rather small – of the order of a few hundredths of a shannon, to be compared, for example, to the 1 Sh gained on cosmological models from cosmic microwave background experiments (Seehars et al., 2014; Martin, Ringeval & Vennin, 2016). From a machine learning perspective, this result means that none of the attributes that we considered is really relevant to learn the class label. This suggests that galaxy colors are only loosely related to the physical information exploited by the T-web, diva and origami (the tidal field, the shear of the Lagrangian displacement field, and the number of particle crossings, respectively). It further highlights the necessity of developing targeted cosmic web classifiers for cross-correlating with galaxy properties. Unsupervised machine learning from an extended space of attributes, including galaxy properties and their cosmic environment at different scales (e.g. Frank, Jasche & Enßlin, 2016), could allow progress in the design of such classifiers.
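The relative ordering of the classifiers can be cross-checked directly from the counts of table 6. The following self-contained sketch recomputes the utilities (parent entropy minus weighted children entropies, in shannons); the function names are illustrative, not the released scripts.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in shannons, with the convention 0 log2 0 = 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def utility(counts):
    """counts[i] = (n_blue, n_red) galaxies in web-type i; see section III.3."""
    counts = np.asarray(counts, dtype=float)
    parent = entropy(counts.sum(axis=0) / counts.sum())
    weights = counts.sum(axis=1) / counts.sum()
    children = np.array([entropy(row / row.sum()) for row in counts])
    return parent - float((weights * children).sum())

# (blue, red) counts per web-type (void, sheet, filament, cluster), from table 6
tweb    = [[19150, 18456], [19290,  8370], [63318, 42678], [ 92745, 103150]]
diva    = [[26358, 22436], [27051, 15289], [67515, 51402], [ 73579,  83527]]
origami = [[82805, 55362], [69583, 60775], [29748, 36589], [ 12367,  19928]]

utils = {name: utility(c) for name, c in
         [("T-web", tweb), ("diva", diva), ("origami", origami)]}
# Information gains are small (of order 0.01 Sh), with the ordering
# T-web > origami > diva, consistent with the discussion in the text.
```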

IV Summary and Conclusions

Following Leclercq, Jasche & Wandelt (2015b), this study discusses the data-supported connection between cosmic web analysis and information theory. It is a project exploiting the cosmic web maps of Leclercq, Jasche & Wandelt (2015b) and Leclercq et al. (2016), which are part of the rich variety of chrono-cosmographic results produced by the application of the Bayesian inference engine borg (Jasche & Wandelt, 2013) to the Sloan Digital Sky Survey main sample galaxies (Jasche, Leclercq & Wandelt, 2015). Using information-theoretic concepts, in section II, we measure and characterize the extent to which the SDSS is informative about the morphological features of the underlying cosmic web, as defined by the T-web (Hahn et al., 2007), diva (Lavaux & Wandelt, 2010), and origami (Falck, Neyrinck & Szalay, 2012).

In section III, this paper quantitatively addresses the question of choosing a cosmic web classifier, depending on the desired use. To do so, we extend the decision-theory framework of Leclercq, Jasche & Wandelt (2015a) by introducing utility functions on the space of classifiers. We consider three classes of problem: parameter inference (section III.1), model selection (section III.2), and prediction of new observations (section III.3). In each of these general situations, we propose a utility function based on the concept of mutual information and motivated in the general framework of Bayesian design. We summarize them below:

  • for parameter inference: $I(T; D \,|\, C)$ (equation (11)), the mutual information between the classification $T$ and the data $D$, given the classifier $C$,

  • for model selection: $I(M; T \,|\, D, C)$ (equation (17)), the mutual information between the model indicator $M$ and the mixture of posterior distributions, conditional on the different competing models and on the classifier $C$,

  • for predictions: $I(T; D' \,|\, C)$ (equation (III.3)), the mutual information between the classification $T$ and the new observations $D'$, given the classifier $C$.

In practice, due to the difficulty of combining competing goals, the decision maker may be unwilling or unable to specify a unique utility function. Given the set of possible utility functions for different situations, a target function for several design objectives can be written down, for example, as a weighted average of different utilities.

As an illustration of our methodological framework, we assessed the relative performance of the T-web, diva, and origami for three different goals, one of each type mentioned above: optimization of the information content of cosmic web maps, comparison of dark energy models, and prediction of galaxy colors. Our physical findings can be summarized as follows. We found that the T-web maximizes the information content of web-type maps (especially in the densest regions), but that diva may be preferable due to its lower sensitivity to artifacts. Unsurprisingly, Lagrangian classifiers (diva and origami), which exploit the displacement field, excel at finding the regions of the cosmic web, such as the boundaries of voids, that are the most sensitive to the equation of state of dark energy. Finally, all classifiers agree on the general trend of red galaxies living in clusters and blue galaxies in sheets and voids. The information gained on galaxy colors is the highest with the T-web, slightly less with origami, and lowest with diva; but the absolute numbers stay rather low. Though investigation of this question should be made much more comprehensive, this result is indicative of the as-of-now limited understanding of the connection between galaxy properties and the cosmic web, which is essential to the development of a consistent cosmological theory of galaxy formation and evolution.

Numerical results given in table 4 depend on method-internal parameters, in particular a smoothing scale on the Eulerian or Lagrangian grid. Though we did not investigate this question here, the same formalism can be used not only to compare different classifiers, but also, within each classification method, to assign a utility to each parameter choice and then decide on which values to use. This allows one to probe the hierarchical nature of the cosmic web quantitatively and to focus on the optimal filtering scale for the considered problem. At the largest filtering scales (for example, when studying the dark energy equation of state), we expect the results of the T-web and diva to converge, since the methods are equivalent at first order in Lagrangian perturbation theory, whereas origami will miss all the phase-space foldings that happen below the considered scale. At the smallest filtering scales (for example, when studying galaxies), we expect an increase of the information gained on all features that are intrinsically local, such as many galaxy properties.

In order to facilitate the use of our methods, and as a complement to our earlier release of density fields and cosmic web maps, we have made publicly available the maps, analysis scripts and galaxy catalog used in this paper, which can be used to reproduce our results. These products are available from the first author’s website.

Though we used the common terms of voids, sheets, filaments, and clusters, this paper can be considered as a generic way to optimally define four summary statistics A, B, C, D of the large-scale structure, depending on the desired use. Therefore, beyond the specific application of cosmic web analysis, our methodology opens the way to the automatic design of summary statistics of the LSS that capture targeted parts of its information content. In the coming era of accurate cosmology with deep galaxy surveys, we expect the optimal design of analysis procedures to play an ever-increasing role.

Appendix A Information theory

We review here some useful information-theoretic notions. For simplicity, we consider only discrete random variables, but the generalization to continuous variables is possible by replacing discrete sums with integrals. Throughout this appendix, $X$ and $Y$ are two discrete random variables with respective possible values in $\Omega_X = \{x_i\}$ and $\Omega_Y = \{y_j\}$. We note their respective pmfs $p(x)$ and $p(y)$. We denote by $q$ the pmf of another discrete random variable with possible values in the same set $\Omega_X$.

A.1 Jensen’s inequality

An important result used in the following is Jensen’s inequality (Jensen, 1906) in a probabilistic setting. Let us consider a probability space, a random variable $X$ with probability distribution $p$, and a convex function $\varphi$. Then

$\varphi\!\left( \langle X \rangle_p \right) \leq \langle \varphi(X) \rangle_p ,$

where the brackets indicate the expectation value of the quantity inside, under the probability $p$.

A.2 Information content and Shannon entropy

The information content (or self-information) of $X$ is defined by

$I(x_i) \equiv -\log_2 p(x_i) ,$

where the $p(x_i)$ are the probabilities of possible events. The entropy (Shannon, 1948) is the expectation of the information content under the probability $p$ itself, i.e.

$H(X) \equiv \langle I \rangle_p .$

It can be written explicitly as

$H(X) = -\sum_{x_i \in \Omega_X} p(x_i) \log_2 p(x_i) .$

Information content and Shannon entropy are non-negative quantities. Furthermore, Jensen’s inequality (section A.1) implies that

$H(X) \leq \log_2 |\Omega_X| ,$

since $x \mapsto -\log_2 x$ is a convex function. This maximal entropy is effectively attained in the case of a uniform pmf: uncertainty is maximal when all possible events are equiprobable.
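These properties are easy to verify numerically; a minimal sketch with base-2 logarithms, so that entropies are in shannons:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(X) in shannons, with the convention 0 log2 0 = 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # drop zero-probability events (0 log2 0 = 0)
    return float(-(p * np.log2(p)).sum())

# a uniform pmf over 4 events saturates the bound H <= log2(4) = 2 Sh;
# a deterministic pmf has zero entropy; any other pmf lies strictly in between.
```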

The joint entropy of two random variables $X$ and $Y$ is defined as

$H(X, Y) \equiv -\sum_{x_i \in \Omega_X} \sum_{y_j \in \Omega_Y} p(x_i, y_j) \log_2 p(x_i, y_j) .$

One may also define the conditional entropy of $Y$ given $X$ as

$H(Y|X) \equiv \sum_{x_i \in \Omega_X} p(x_i)\, H(Y | X = x_i) = -\sum_{x_i \in \Omega_X} \sum_{y_j \in \Omega_Y} p(x_i, y_j) \log_2 p(y_j | x_i) .$

Using the definition of conditional probabilities, $p(x_i, y_j) = p(x_i)\, p(y_j | x_i)$, it is easy to show that the conditional entropy verifies

$H(Y|X) = H(X, Y) - H(X) .$


From equations (31), (33) and (36), one can derive the chain rule of conditional entropy:

$H(X, Y) = H(X) + H(Y|X) = H(Y) + H(X|Y) ,$

from which follows straightforwardly an equivalent of Bayes’ theorem for entropies,

$H(Y|X) = H(X|Y) + H(Y) - H(X) .$
Finally, the cross entropy between two random variables $X$ and $Y$ with possible values in the same set $\Omega$ and respective pmfs $p$ and $q$ is

$H_\times(X, Y) \equiv -\sum_{x_i \in \Omega} p(x_i) \log_2 q(x_i) .$
A.3 Mutual information

The mutual information of two variables $X$ and $Y$ is defined as

$I(X; Y) \equiv \sum_{x_i \in \Omega_X} \sum_{y_j \in \Omega_Y} p(x_i, y_j) \log_2 \frac{p(x_i, y_j)}{p(x_i)\, p(y_j)} .$

It is a symmetric measure of the inherent dependence of $X$ and $Y$. Jensen’s inequality (section A.1) implies that it is non-negative:

$I(X; Y) \geq 0 .$

A remarkable property is that the entropy satisfies $H(X) = I(X; X)$, the mutual information of $X$ and itself.

Using the definition of conditional probabilities, one can also show that mutual information can be equivalently expressed as:

$I(X; Y) = H(X) - H(X|Y) = H(Y) - H(Y|X) = H(X) + H(Y) - H(X, Y) .$

It follows that for any $X$ and $Y$, $H(X|Y) \leq H(X)$. Therefore, conditional entropy should be understood as the amount of randomness remaining in $X$ once $Y$ is known. $H(X|Y) = 0$ if and only if the value of $X$ is completely determined by the value of $Y$. Conversely, $H(X|Y) = H(X)$ if and only if $X$ and $Y$ are independent random variables.
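These equivalent expressions can be checked numerically; a minimal sketch computing $I(X;Y)$ from a joint pmf via $I(X;Y) = H(X) + H(Y) - H(X,Y)$:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in shannons, with the convention 0 log2 0 = 0."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(pxy):
    """I(X;Y) from a joint pmf pxy[i, j], via I = H(X) + H(Y) - H(X,Y)."""
    pxy = np.asarray(pxy, dtype=float)
    return entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0)) - entropy(pxy)
```

For an independent joint pmf (an outer product of marginals) the mutual information vanishes, while for $X = Y$ it equals the entropy $H(X)$.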

A.4 Kullback-Leibler divergence

In this section and in the following, we consider two discrete random variables $X$ and $Y$ with possible values in the same set $\Omega$ and respective pmfs $p$ and $q$. When there is no ambiguity, we simplify the formalism and note $p_i \equiv p(x_i)$, $q_i \equiv q(x_i)$, etc.

The Kullback-Leibler divergence (Kullback & Leibler, 1951) is a non-symmetric measure of the difference between two probability distributions. It is defined as

$D_\mathrm{KL}(X \| Y) \equiv \sum_{x_i \in \Omega} p_i \log_2 \frac{p_i}{q_i} .$

It can also be expressed in terms of the entropy of $X$ and the cross entropy between $X$ and $Y$ (see equations (31) and (39)):

$D_\mathrm{KL}(X \| Y) = H_\times(X, Y) - H(X) .$
An important result, known as Gibbs’ inequality, states that the Kullback-Leibler divergence is non-negative, reaching zero if and only if $p = q$. Equivalently, for two pmfs $p$ and $q$, $H_\times(X, Y) \geq H(X)$, i.e. the (self) entropy of $X$ is always smaller than the cross entropy of $X$ with any other pmf $q$. The proof uses the inequality $\ln x \leq x - 1$ for all $x > 0$, with equality if and only if $x = 1$. Denoting by $\Omega'$ the subset of $\Omega$ for which $p_i$ is non-zero, we have: