The fully connected N-dimensional skeleton:
probing the evolution of the cosmic web
Abstract
A method to compute the full hierarchy of the critical subsets of a density field is presented. It is based on a watershed technique and uses a probability propagation scheme to improve the quality of the segmentation by circumventing the discreteness of the sampling. It can be applied within spaces of arbitrary dimension and geometry. This recursive segmentation of space yields, for a d-dimensional space, a succession of subspaces of decreasing dimension that fully characterize the topology of the density field. The final one-dimensional manifold of the hierarchy is the fully connected network of the primary critical lines of the field: the skeleton. It corresponds to the subset of lines linking maxima to saddle points, and provides a definition of the filaments that compose the cosmic web as precise physical objects, which makes it possible to compute any of their properties such as length, curvature, connectivity, etc.
When the skeleton extraction is applied to the initial conditions of
cosmological N-body simulations and their present-day non-linear
counterparts, it is shown that the time evolution of the cosmic web,
as traced by the skeleton, is well accounted for by the Zel'dovich
approximation. Comparing this skeleton to the initial
skeleton undergoing the Zel’dovich mapping shows that two effects
are competing during the formation of the cosmic web: a general
dilation of the larger filaments that is captured by a simple
deformation of the skeleton of the initial conditions on the one
hand, and the shrinking, fusion and disappearance of the more
numerous smaller filaments on the other hand. Other applications
of the N-dimensional skeleton and its peak patch hierarchy are discussed.
keywords:
Cosmology: simulations, statistics, observations, Galaxies: formation, dynamics.
1 Introduction
The web-like pattern is certainly the most striking feature of the matter distribution on megaparsec scales in the Universe. The existence of the "cosmic web" (Zel'dovich 1970; Bond & Myers 1996) was confirmed more than twenty years ago by the first CfA catalog (de Lapparent et al., 1986) and by more recent catalogs such as the SDSS (Adelman-McCarthy et al., 2008) or the 2dFGRS (Colless et al., 2003). These observations, together with the dramatic improvement of computer simulations (e.g. Teyssier et al. (2008), Ocvirk et al. (2008)), have largely refined the picture of a Universe formed by an intricate network of voids (i.e. globular underdense regions) embedded in a complex filamentary web whose nodes are the locations of the denser halos. The traditional way of understanding large scale structure (LSS) formation and evolution relies on the Friedmann equations and assumes that LSS are the outcome of the growth of very small primordial quantum fluctuations by gravitational instability (see e.g. Peebles (1980) or Peebles (1993) and references therein). In this theory, structure formation is described in terms of a mass distribution whose most important features one needs to grasp and compare to observations. Comprehending the mass distribution as a whole, especially at non-linear stages, is a very difficult task. A possible solution therefore consists in extracting and studying simple characteristic features of the matter distribution, such as voids, halos and filaments, as individual physical objects. So far, mainly because of the relatively higher complexity of the filaments, most theoretical and computational research has focused on voids and halos.
The dark matter halos have arguably been the most studied component of the cosmic web. Their density profiles, for instance, are very well described by the so-called NFW profile (Navarro et al., 1997), and non-parametric models are still under investigation (Merritt et al., 2006). The dependence of these density profiles on the halo mass (e.g. Bond & Myers (1996), Lacey & Cole (1993)) has also been investigated thoroughly, and its relationship with redshift and environmental properties is a very active topic (e.g. Harker et al. (2006), Aubert & Pichon (2006), Wang et al. (2007), Aragón-Calvo et al. (2007), Sousbie et al. (2008a) or Hahn et al. (2007)). From a computational point of view, much effort has been put into the development of various algorithms to identify halos in simulations and galaxies in spectroscopic redshift galaxy surveys. The friend-of-friend algorithm (Huchra & Geller, 1982) is now widespread, as are more complex hierarchical substructure identifiers such as HFOF (Gottloeber, 1998), SUBFIND (Springel et al., 2001), VOBOZ (Neyrinck et al., 2005) or ADAPTAHOP (Aubert et al., 2004).
Voids are another feature of the cosmological matter distribution with a long history of theoretical and computational modelling. The first voids were observed by Kirshner et al. (1981) and are in some sense the counterpart of halos: the initial perturbations collapsing into halos at non-linear stages leave room for voids in the underdense regions. The first theoretical void models were developed by Hoffman & Shaham (1982), Icke (1984) or Bertschinger (1985), among others, while numerical void finders exist, such as the one described in El-Ad & Piran (1997), ZOBOV (Neyrinck, 2008), based on the Voronoi tessellation, or the recent Watershed Void Finder by Platen et al. (2007), based on the watershed transform (e.g. Beucher & Lantuejoul (1979), Beucher & Meyer (1993)); see its introduction and references therein for a more complete review of the subject. The improvements in our understanding of void and halo properties led to the formulation of powerful theories such as the peak patch theory (Bond & Myers, 1996), the extended Press-Schechter theory (e.g. Bond et al. (1991) and Sheth (1998)) or the skeleton-tree formalism (Hanami, 2001).
Our investigation of the filaments as individual objects is, however, not yet as thorough as for the halos and voids: the definition of a well-established mathematical framework for their study could therefore lead to significant improvements in our understanding of the matter distribution in the Universe. The first attempts date back to Barrow et al. (1985), who used a graph-theoretical construction: the minimal spanning tree (MST). This method defines the cosmic web as the network linking galaxies (or particles from a numerical simulation) that is loop-free and of minimal total length. This technique was later developed in order to quantify the properties of the cosmic web in an objective way (see e.g. Graham et al. (1995), Colberg (2007); a review on the subject can be found in Martinez & Saar (2002)). The so-called shape finders (Sathyaprakash et al. (1996), Sahni et al. (1998) or Bharadwaj et al. (2000)) allow a statistical study of the filaments, and another method, based on the CANDY model commonly used to detect road networks, uses a marked point process and a simulated annealing algorithm to trace the filaments (Stoica et al., 2005). More recently, the skeleton formalism and its local approximation, which describe the filaments as particular field lines of the density field, were introduced by Novikov et al. (2006) and Sousbie et al. (2008a), with the advantage of providing both a well-defined mathematical ground for theoretical predictions of the filament properties and an efficient numerical identification algorithm. Finally, an interesting first attempt to unify halo, void and filament identification using the Multiscale Morphology Filter (MMF) technique was proposed by Aragón-Calvo et al. (2007).
In this paper, we introduce a framework and an algorithm to
identify the full hierarchy of critical lines, surfaces, volumes, etc. of a density distribution in the general case of d-dimensional
spaces. In 3D space, these critical
subspaces can be identified with the void and peak patches, as well as the
filaments and the other primary critical lines of the distribution. The algorithm
extracts the filaments as a differentiable
and, by definition, fully connected network that traces the backbone
of the cosmic web. This method is closely related to the skeleton
formalism presented in Novikov et al. (2006) and Sousbie et al. (2008a) and is based on both
Morse theory (see e.g. Colombi et al. (2000), Milnor (1963) or Jost (1995)) and
an improved watershed segmentation algorithm that uses a
probability propagation scheme.
This paper is organized as follows. In section 2, we present a general definition of the critical subspaces that we use, as well as a method to extract them from a sampled density field with sub-pixel precision (focusing more specifically on the filaments in the 2D and 3D cases). In section 3, we use this formalism to study the time evolution of the cosmic web and to understand the change of its properties as a specific object via the truncated Zel'dovich approximation (Zel'dovich, 1970). Finally, in section 4, we summarize our findings and discuss a few possible applications to N-body simulations and observational spectroscopic galaxy surveys. The details of a general simplex minimization algorithm used in section 2 are presented in Appendix A, while the general behaviour of the inter-skeleton pseudo-distance defined in section 3 is given in Appendix B.
2 Method
The main goal of the algorithm presented here is to allow a robust extraction of the non-local primary critical lines (among which the skeleton) as introduced in Novikov et al. (2006) and Sousbie et al. (2008a). In these papers, the skeleton was defined as the set of points that can be reached by following the gradient of the field, starting from the filament type saddle points (i.e. those where only one eigenvalue of the Hessian is positive). Let rho be the density field and grad rho(r) its gradient at position r; the skeleton can be retrieved by solving the following differential equation:

\frac{\mathrm{d}\mathbf{r}(t)}{\mathrm{d}t} = \nabla\rho\left(\mathbf{r}(t)\right),   (1)

using the "filament" type saddle points as initial boundary conditions.
Because of the difficulty of designing a robust algorithm to solve
this equation, it was achieved only in 2D in Novikov et al. (2006),
and a solution to a local approximation in 3D was proposed
in Sousbie et al. (2008a). This local approximation allowed the extraction of a
more general set of critical lines linking critical points together,
the subset of these lines linking saddle points and maxima together
corresponding to the skeleton (i.e. the "filaments" in the large
scale distribution of matter in the universe). See Pogosyan et al. (in prep.) for
a discussion of these various sets.
This method works in a very general framework and allows the extraction of a fully connected, non-local skeleton, as well as an extension of the primary critical lines introduced in Novikov et al. (2006) and Sousbie et al. (2008a) to a hierarchy of critical surfaces. Following the idea, already present in Equation (1), that the topology of a field can be expressed in terms of the properties of its field lines, it is grounded in Morse theory (Jost, 1995) and is roughly based on an extension of the peak patch theory (Bond & Myers, 1996). For a sufficiently smooth and non-degenerate field (it would be beyond the scope of this paper to define such a field from the mathematical point of view, but it certainly has to be a Morse function that obeys the Morse-Smale-Floer condition, see e.g. the discussion in Novikov et al. (2006)) of dimension d, the peak patches – PP hereafter – (resp. void patches – VP hereafter) are defined as the sets of points from which the field lines, solutions of Equation (1), all converge to the same maximum (resp. minimum) of the field. Within this framework, we shall qualitatively show that in a d-dimensional space, the skeleton can be thought of as the result of successive segmentations of VPs or, equivalently, as the one-dimensional interface between at least d VPs (an actual rigorous demonstration can be found in e.g. Jost (1995)).
Using this definition, extracting the skeleton of a distribution thus simply
amounts to finding a way of robustly and consistently identifying the
patches.
Whether considering a particle distribution obtained from a numerical simulation or a density field sampled on a grid, the major difficulty arises from the discrete nature of the data. In fact, even if the underlying density field is supposedly smooth and continuous, the discreteness of the sampling implies a relatively large uncertainty on the precise location of the patch boundaries, as sampling is limited by computational power; this is even more true when considering higher-dimensional spaces. The algorithm we use is an improved version of the watershed transform method (Beucher & Lantuejoul, 1979), based on a probability propagation scheme, and aims at attributing to every sampled point of the density field a probability of belonging to a given patch. This scheme is very general and efficient as it allows dealing with discrete data sets in a naturally continuous fashion and on manifolds of arbitrary dimension.
2.1 Probabilistic patches extraction
The initial idea behind our patch identification algorithm is that a
patch can be defined as the set of field lines (i.e. curves that
follow the gradient of a field) that originate from a given minimum
(VP) or maximum (PP) of a field. Considering a sampled field, being
able to identify the patches thus amounts to being able to decide, for
any given pixel, from which extremum the field lines that cross
that pixel originate. It is therefore easy to understand that the discrete nature
of the sampling rapidly plagues such a task: for each pixel,
considering the measured gradient, one has to decide from which of
the fixed number of neighbouring pixels the field line comes.
Within a d-dimensional space, having to select among only a small,
fixed number of possible directions for the field lines is a crude
approximation that leads, because of accumulation, to a largely wrong
answer for pixels located far away from the extrema.
Although we present the algorithm in the general case here, the reader can refer to figure 1 and its legend for a simpler and more visual explanation of the algorithm in the 2D case; a minimal code sketch of the propagation scheme is also given after the two cases below. More generally, our algorithm involves considering each pixel of a sampled field in the order of their increasing (resp. decreasing) value, depending on whether we want to compute the VPs or PPs and, for each of them, computing the probability that it belongs to a given VP (resp. PP). This probability map is simply computed by scanning the probability distribution of its neighbours and deducing the current pixel's patch probability distribution from it. Two cases are possible:

none of the neighbours has already been considered (i.e. their respective densities are all higher, resp. lower, than that of the current pixel). This means that the pixel is a local minimum (resp. maximum) of the field: a new VP (resp. PP) index is created and the probability that the current pixel belongs to it is set to 1.

At least one neighbour has already been considered (i.e. its density is lower, resp. higher, than that of the current pixel). The current pixel's probability distribution is computed as a gradient-weighted average of its lower (resp. higher) density neighbours' probability distributions.
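To make these two cases concrete, the following minimal sketch (in Python) propagates void patch probabilities by visiting pixels in order of increasing density on a 2D periodic Cartesian grid. The variable names, the periodic boundaries and the exact gradient weighting are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch of the probability-propagation watershed: visit pixels by increasing density;
# local minima seed new void patches, other pixels inherit a gradient-weighted average
# of the probability distributions of their already-visited (lower density) neighbours.
import numpy as np

def void_patch_probabilities(rho):
    ny, nx = rho.shape
    probs = [dict() for _ in range(rho.size)]    # probs[i]: patch index -> probability
    order = np.argsort(rho, axis=None)           # flat pixel indices, increasing density
    n_patches = 0
    for flat in order:
        y, x = divmod(flat, nx)
        weights, neigh = [], []
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            yn, xn = (y + dy) % ny, (x + dx) % nx
            drho = rho[y, x] - rho[yn, xn]
            if drho > 0:                          # neighbour already visited (lower density)
                neigh.append(yn * nx + xn)
                weights.append(drho)              # weight by the discrete gradient toward it
        if not neigh:                             # local minimum: seed a new void patch
            probs[flat][n_patches] = 1.0
            n_patches += 1
            continue
        wtot = sum(weights)
        for w, j in zip(weights, neigh):          # gradient-weighted average of the
            for k, p in probs[j].items():         # neighbours' probability distributions
                probs[flat][k] = probs[flat].get(k, 0.0) + w * p / wtot
    return probs, n_patches
```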
Once all pixels have been visited, a number of patches have thus
been created and a list of probabilities, one per possible patch index, has been computed for each pixel. These
probabilities quantify the odds that a given pixel belongs to a
given patch. Figure 2 illustrates the
advantages of our probability list scheme compared to the naive
approach: without it, the patch borders have a strong tendency to be
aligned with the sampling grid, and the problem tends to get much worse
at lower sampling and, of course, in higher dimensions.
Figure 3(c) presents the results obtained by applying this
algorithm to the Gaussian random field of Figure 3(a). On this picture, each patch is assigned a different
shade, and the colour of each pixel is the probability-weighted average
colour of its possible patches. As expected, most pixels seem
to belong to a definite void patch with high probability (close to
1). In fact, considering two neighbouring void patches A and B,
all the pixels that belong to one of these patches and have a value
lower than that of the first kind saddle point(s) on their border
(i.e. where the Hessian has only one positive eigenvalue) have a
probability of 1 of belonging to either A or B. Hence, the
probabilities of belonging to different patches only start mixing
above first kind saddle points. This can be seen on the top right
zoomed panel of Figure 3(c), where probabilities only start
blending mildly for densities above this threshold (the saddle points
are represented by the probability "nodes" on the picture). This
results in a complex distribution of patch index probabilities in the
vicinity of higher density borders (see the upper left panel of Figure 3(c)), and thus in a higher uncertainty on the location of
the void patch borders. This uncertainty on the precise patch index
is directly linked to the location of the skeleton. In fact, as
explained in the next section, the skeleton can also be defined as the
set of field lines that do not belong to any patch, or in other terms,
where sampled pixels have an equal probability of belonging to several
distinct patches.
2.2 The d-dimensional skeleton
As one can easily see, the major strengths of this simple patch
extraction algorithm are that it is robust and can be trivially
extended to spaces of any dimension and topology, the only
requirement being that one needs to be able to define neighbouring
relationships between pixels and measure distances between them. We
thus now have a robust algorithm for extracting the VPs and PPs of, in practice, nearly arbitrary
scalar fields. In this subsection, we show that it is possible to
generalize the definition of the skeleton (Novikov et al., 2006) to
spaces of arbitrary dimension and present a simple method to compute
the skeleton, as well as the critical lines and surfaces, based on our
patch extraction algorithm.
2.2.1 Definition
Let us first present important results of Morse theory without
demonstrating them. The more mathematically inclined reader can refer to Jost (1995) for
rigorous demonstrations.
Let us consider the general case of a sufficiently smooth and non-degenerate d-dimensional scalar
field rho, defined over a manifold M (e.g. R^d, the sphere S^d, ...); it will be assumed
throughout this paper that the field satisfies the Morse-Smale-Floer condition
(Jost, 1995). Following Jost (1995), the field lines of rho
fill M, and a VP can be defined as the set of points
that can be reached by following the field lines originating from a
given minimum of rho. The VPs of rho thus segment M into
a set of d-dimensional volumes that completely fill M, each of
them encompassing exactly one minimum of rho. The interface
of the VPs, S_1, defines a (d-1)-surface (i.e. a surface of
dimension d-1 embedded in M). It is therefore possible to apply
our probabilistic algorithm to rho_1, the restriction of rho
to S_1, in order to extract the VPs on this
interface. For clarity, we will call the VPs of rho_1 the
first order VPs of rho, noted 1-VPs
hereafter. Recursively, the 1-VPs define (d-1)-dimensional volumes
that pave S_1, each of them encompassing, by definition of a VP,
exactly one minimum of rho_1, and the reasoning can be applied to the whole
hierarchy of k-VPs, k = 1, ..., d-1.
Starting from a d-dimensional scalar field rho, it
is thus possible to define a complete hierarchy of sets of
k-VPs, k = 0, ..., d-1. These k-VPs
are (d-k)-dimensional volumes that partition S_k,
where S_k is defined as the (d-k)-dimensional
interface of the (k-1)-VPs. Each set of k-VPs is
defined as the set of void patches of rho_k, the
restriction of rho to S_k. Let us call a critical
point of kind k a critical point
where the Hessian has exactly k positive eigenvalues. Then
S_k encompasses the whole set of saddle points of kind
d-k or lower of rho, the minima of rho_k
associated to each k-VP being the
saddle points of rho of kind d-k. The interface
S_{d-1} is thus a curve embedded in M that links the maxima of
rho to its saddle points of kind 1: the skeleton of
rho. It is interesting to note that this approach also
allows a rigorous definition of the whole set of critical lines,
similar to the one introduced with the local approximation of the
skeleton in Novikov et al. (2006) (see also Sousbie et al. (2008a)), as well as their extension to critical
hypersurfaces of any number of dimensions.
Although we have only addressed the VP case so far, the exact same argumentation holds for the whole hierarchy of PPs, which leads to the skeleton of the voids: the set of curves that links minima to saddle points of kind d-1. Moreover, alternating selections of VPs and PPs in the hierarchy leads to curves that link saddle points of consecutive kinds: a peculiar set of critical lines of the field. One can note that, as rigorously demonstrated in Morse theory (Jost, 1995), critical lines defined in such a way can only link critical points whose Morse indices differ by unity.
2.2.2 Implementation
The representation of the critical lines of a given scalar field as a
peculiar limit of a peak or void patch hierarchy certainly has some
mathematical appeal. From a practical point of view, although
apparently straightforward, its direct numerical implementation can
nevertheless be somewhat problematic. Let G be an
initial sampling grid and G' its reciprocal (i.e. shifted by
half the size of the pixels in every direction). Using our patch
computation algorithm on a scalar field sampled over
G, we obtain for every pixel of G the probability that
it belongs to a given patch. These sets of probability
distributions could be used to define a border between the patches
and thus to compute the 1-PPs and 1-VPs. Nevertheless, this is in
general not an easy task: one in fact first needs a very precise localization of
the 1-PPs and 1-VPs (those living on the (hyper)surface of the initial
VPs or PPs) to be able to compute the following segmentation of the
hierarchy (as opposed to a density probability). In order to overcome this issue, we chose first to base our
implementation on a subset only of the different patch
probabilities and to keep, for every pixel, only the index of its most
probable patch. This way, we are able to simply define the borders
between patches as the set of pixels of G' that overlap
pixels of G displaying at least two different most probable patch indices. The
patch extraction algorithm can then be applied again over that
border, restricting pixel examination to the ones that lie on its
surface. Identifying the pixels of G that overlap pixels of
G' displaying at least two different most probable 1-patch indices, one can thus
identify the 2-PPs or 2-VPs and, repeating this procedure, all orders of
the patch hierarchy.
For 2D Gaussian random fields, as pictured on Figures 3(d) and 3(c), the skeleton
(resp. anti-skeleton) is identical to the VP (resp. PP) borders, and
the direct implementation of this algorithm leads to a very precise and
smooth skeleton. But the implementation in spaces of higher dimension
raises a critical issue with this simplified method, due to the fact
that the borders of the PPs and VPs are only defined
by the index of the pixels they cross: they are thus jagged and
considered locally flat (on the scale of one pixel and its direct neighbourhood). Figure 4(a) presents the 1-VPs
obtained by applying this algorithm to a 3D Gaussian random field,
each colour corresponding to a different 1-VP index. The 1-VPs live on
the 2D surface which is the border between the cells formed by the
void patches of the field, each of these cells encompassing exactly one
minimum of the field. This surface is complex: it can be multiply
connected at the interface of more than two different void patches, and
its curvature is locally significant. Although neighbouring relationships
between pixels are easily obtained even where the surface is multiply
connected, only a rough approximation of the actual distances along
the surface can be computed, as the local curvature is not taken into account. Figure 4(b) shows the
corresponding skeleton, computed as the border of the 1-VPs of
Figure 4(a). This skeleton is clearly not very well defined,
the uncertainty in the distance computation leading to errors in the
probability propagation algorithm. This bias results in multiple
skeleton branches that seem to oscillate and cross each other along
the true skeleton location.
In the end, it appears that dropping the full probability distribution and approximating the borders between patches is too coarse an approximation. One solution would involve trying to compute the precise location of the 1-VPs and 1-PPs using the full set of probabilities but, as will be discussed in section 2.3, this raises complex issues. As it is the computation of the patch interfaces that seems to be difficult, the alternative we chose to implement involves computing the skeleton directly from the VPs and PPs of the field, without having to consider the full hierarchy of k-VPs and k-PPs. A close examination of Figure 4(a) led us to formulate the conjecture that the 1-VP (or 1-PP) interface corresponds in fact to the subspace where the interface manifold is sufficiently multiply connected (i.e. where the surface folds onto itself). Equivalently, this locus can be defined in 3D as the interface of at least 3 different PPs or VPs (see Figure 4(a)). This is formally demonstrated in Jost (1995). In the general case of d dimensions, the skeleton should thus be the 1D interface between at least d VPs or PPs of the field. Figure 4(c) presents the skeleton obtained using this method on the same Gaussian random field as the one used for Figures 4(a) and 4(b). As expected, since there is no need to recursively compute the full hierarchy of k-VPs, the resulting skeleton is much more precise and well defined. Moreover, a quick comparison to Figure 4(b) confirms that it is indeed the approximation of the patch interfaces by individual pixels that plagues the algorithm, each recursive step exponentially increasing the error.
2.2.3 The skeleton as a set of individual filaments
The concepts introduced above allow the definition and extraction
of the skeleton as a fully connected network that continuously
links maxima and saddle points of a scalar field together. It is
certainly of interest to try understanding the topological and
geometrical properties of this scalar field through the connectivity
and hierarchy relationships that it introduces between the critical
points. Applied to cosmology, it also allows a formal definition of
the concept of individual filaments. Considering the matter distribution
on large scales in the Universe, a natural definition of a single filament would be a subset of the cosmic web that directly
links two halos together. The transposition of such a definition to
the skeleton would allow the introduction of useful concepts such as
neighbouring relationships between halos in the cosmic web sense. It
would also make possible the study of filaments as individual physical
objects, similarly to what has been done for years in the literature
with the halos and voids.
On Figure 3(a), the skeleton (coloured thick network, where the colour corresponds to the underlying PP index) and anti-skeleton (black network) are superimposed on the density field from which they were extracted. Let us define a filament as a subset of the skeleton continuously linking two maxima together while going through one (and only one) first kind saddle point. These saddle points can be easily extracted as they are located on the skeleton, at the border between the peak (resp. void) patches (i.e. where the patch index along the skeleton changes, this definition being valid for any number of dimensions). This way, all the filaments of an N-dimensional distribution can be extracted individually by starting from each maximum of the field, following all the branches of the skeleton, and storing only the paths that cross one saddle point before reaching another maximum, as illustrated by the sketch below. This algorithm thus allows the individual extraction of filaments as well as a continuous wander along the filamentary structure of a distribution, which should be very useful in a wide range of applications in cosmology.
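The following hedged sketch illustrates this walk; the adjacency-list representation of the skeleton and the per-vertex patch indices are assumptions made for the example, not the data structures of our actual code.

```python
# Walk the skeleton graph from each maximum and keep only the paths that change
# peak patch index exactly once (i.e. cross one saddle) before reaching another maximum.
def extract_filaments(adjacency, maxima, patch_of):
    """adjacency: dict vertex -> list of neighbouring skeleton vertices;
    maxima: set of vertex ids; patch_of: dict vertex -> peak patch index."""
    filaments = []
    for m in maxima:
        stack = [(m, [m], 0)]                     # (current vertex, path so far, saddles crossed)
        while stack:
            v, path, crossed = stack.pop()
            for w in adjacency[v]:
                if w in path:                     # avoid walking back or looping
                    continue
                c = crossed + (patch_of[w] != patch_of[v])   # patch change marks a saddle
                if c > 1:
                    continue                      # a single filament crosses one saddle only
                if w in maxima:
                    if c == 1:
                        filaments.append(path + [w])
                else:
                    stack.append((w, path + [w], c))
    return filaments
```

Note that this naive walk finds each filament twice, once from each of its end maxima; duplicates can be removed in a trivial post-processing step.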
2.3 Subpixel resolution and skeleton smoothing
Let us first consider for simplicity a Cartesian sampling grid (even though this sub-pixel smoothing does not critically depend on this geometry, see below). The implementation of the procedure of Section 2.2 naturally leads to a skeleton that lives along pixel edges and is thus jagged on the pixel scale. The differentiability of the skeleton is nonetheless a feature which may be critical for a number of its characteristics: its length, curvature, general connectivity, etc. In order to enforce this differentiability, we developed two smoothing methods which, in practice, we use in turn. The first one is based on a multilinear interpolation of the patch probability distributions, which flows naturally from the original algorithm used to create the skeleton. It provides sub-pixel resolution consistently with the probabilistic framework, thus allowing a precise extraction of the skeleton even when the sampling is low. The other is used to control the level of smoothness away from fixed points (the maxima or the bifurcation points) and can be used to enforce sufficient differentiability.
2.3.1 Multilinear subpixel skeleton
Let us first find a way to obtain sub-pixel resolution on the skeleton position, based on the patch probability distribution of each pixel. The raw skeleton is made of individual segments located on the edges of the pixels of the Cartesian sampling grid G. Each segment is defined by its two end points, and each of them is surrounded by pixels with a full list of possible patch indices, together with their respective probabilities. Recall that the probabilistic algorithm we use works on individual pixels, so the resulting skeleton position, defined as the position of the border between several patches, is computed with a precision of one pixel. This implies that the smoothing procedures may not move the skeleton by more than half the size of a pixel. In other words, if we consider the dual sampling grid, G', of G, the skeleton can be freely moved within the pixels of G' that its jagged approximation crosses. It is therefore sufficient to consider each of these pixels individually. Let Q be one of these pixels. We then know, for each of its 2^d vertices v_i (the centres of the surrounding pixels of G), the probability distribution of the different VPs, p_k(v_i), where k is the index of a VP. In order to obtain sub-pixel resolution, these probabilities can be interpolated within Q.
For simplicity, we will only use a multilinear interpolation and define \tilde{p}_k(\mathbf{x}), the probability distribution of patch k, interpolated at point \mathbf{x} = (x_1, ..., x_d) within Q, as:

\tilde{p}_k(\mathbf{x}) = \sum_{i=1}^{2^d} p_k(v_i) \prod_{j=1}^{d} w_{i,j}(x_j),   (2)

where w_{i,j}(x_j) = x_j if the j-th coordinate of v_i within Q is 1 and w_{i,j}(x_j) = 1 - x_j if it is 0. Ideally, the skeleton should not belong to any VP, so it should be located where all the non-null values of \tilde{p}_k are equal. Let us define the arithmetic mean of the probability (over the VPs with index k) over the pixel,

\bar{p}(\mathbf{x}) = \frac{1}{N_Q} \sum_{k} \tilde{p}_k(\mathbf{x}),   (3)

and its root mean square,

\sigma_p(\mathbf{x}) = \left[ \frac{1}{N_Q} \sum_{k} \left( \tilde{p}_k(\mathbf{x}) - \bar{p}(\mathbf{x}) \right)^2 \right]^{1/2},   (4)

where the sums run over all the subscripts k such that there exists a vertex of Q where p_k does not vanish, and N_Q is the number of such subscripts. Clearly, all the patches with dominating probabilities in Q are equal when

\sigma_p(\mathbf{x}) = 0.   (5)

Equation (5) is of fourth order and is thus difficult to solve in general.
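As an illustration, the sketch below evaluates Equations (2)-(4) numerically in the 2D case and locates the minimum of sigma_p by brute force on a sub-grid. The vertex ordering and the example probabilities are arbitrary assumptions; our actual implementation relies on the analytic and 1D minimization approach described in the following subsections.

```python
# Bilinear interpolation of the per-vertex patch probabilities inside a pixel Q and
# evaluation of the spread sigma_p (Eqs. 2-4), whose minimum locates the skeleton.
import numpy as np

def sigma_p(x, y, p_vertices):
    """p_vertices[k] = (p_k(v00), p_k(v10), p_k(v01), p_k(v11)) for each patch index k."""
    w = np.array([(1 - x) * (1 - y), x * (1 - y), (1 - x) * y, x * y])   # Eq. (2) weights
    p_tilde = np.array([np.dot(w, pv) for pv in p_vertices.values()])    # Eq. (2)
    return p_tilde.std()                      # Eqs. (3)-(4): rms spread around the mean

# Example: three patches dominating different corners of Q (per-vertex probabilities sum to 1).
p_vertices = {0: [1.0, 0.2, 0.3, 0.0],
              1: [0.0, 0.8, 0.0, 0.1],
              2: [0.0, 0.0, 0.7, 0.9]}
grid = np.linspace(0.0, 1.0, 51)
s = np.array([[sigma_p(x, y, p_vertices) for x in grid] for y in grid])
iy, ix = np.unravel_index(np.argmin(s), s.shape)
print("approximate skeleton point in Q:", grid[ix], grid[iy])
```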
2.3.2 Approximate quadrics subpixel smoothing
Insight into the solution of Equation (5) can be gained by considering the sets of points where pairs of probabilities are equal, instead of equating them all at the same time. These sets are solutions of the set of equations

\tilde{p}_k(\mathbf{x}) = \tilde{p}_l(\mathbf{x}),   (6)

where k and l are subscripts of patches that dominate on at least one vertex of Q.
For clarity, let us consider the 2D case first. With a proper indexing of the four vertices of Q, v_1 = (0,0), v_2 = (1,0), v_3 = (0,1) and v_4 = (1,1), Equation (2) reads

\tilde{p}_k(x, y) = p_k(v_1)(1-x)(1-y) + p_k(v_2)\,x(1-y) + p_k(v_3)(1-x)\,y + p_k(v_4)\,x\,y,   (7)

and Equation (6) writes in this case:

a\,x\,y + b\,x + c\,y + e = 0,   (8)

where a, b, c and e only depend on the values of p_k(v_i) - p_l(v_i). Equation (8) is quadratic and its solutions are well known curves of dimension 1 called quadrics. Figure 5 illustrates solutions of Equation (8) when Q is located at the intersection of 2, 3 or 4 different VPs. In the most frequent configuration where Q is at the intersection of 2 VPs, Equation (8) directly gives the first order approximation of the intersection of the skeleton and Q, and we may approximate it by a straight segment. Finding the end points of this segment is easily achieved by computing the location of equal probability along the two sides of Q that link vertices with different patches (Figure 5(a)). The 3-patch configuration is rarer, and concerns only the maxima of the field as well as all bifurcation points of the skeleton. In this case, we know that three different branches of the skeleton merge within the pixel, at a point where all probabilities are equal. So, we may set the bifurcation point as the locus where all the quadrics of Equation (6) intersect (note that the three of them always intersect in a single point, as equality of the first and second probabilities together with equality of the second and third implies equality of the first and third). The three branches of the skeleton in Q are thus obtained by linking the bifurcation point to the iso-probability points along the three sides of Q that link vertices associated to different patches (Figure 5(b)). Finally, the 4-patch configuration is very rare (note that the scarcity of these points is directly related to resolution, i.e. whether or not the skeleton is featureless at the sub-pixel scale; hence these points may occur more often in higher dimensions, which for computational reasons may be relatively undersampled) and also more problematic. As previously, we know that there exists a bifurcation point within Q, but this time with 4 different skeleton branches. Since there are now several different Equations (6), and given that the solution of each of them is a 1D quadric, this system is clearly over-constrained for finding the precise location of the bifurcation point. A solution may well be to use a higher order interpolation, allowing more complex curves than quadrics for the equal probability regions, or to try solving Equation (5) directly. As this case is clearly rare, it is also possible to approximate the bifurcation point as the barycentre of the points of intersection of the subsets of Equations (6) taken in pairs. The smoothed skeleton is then derived by linking the bifurcation point to the four iso-probability points along the four sides of Q (Figure 5(c)).
2.3.3 Actual recursive implementation
Having discussed the underlying geometry of the sub-pixel multilinear
interpolation, let us now turn to our actual sub-pixel smoothing
algorithm. Indeed, in d dimensions, Equation (6)
is linear in each of the space coordinates
x_j. Its solutions are thus d-dimensional quadrics whose
intersections, as in 2D, can be used to recover the skeleton
position down to sub-pixel precision. Finding intersections of
quadrics in general remains nonetheless a highly difficult (or even
untractable!) problem, and even state-of-the-art solvers can only
achieve such a performance for a small number of dimensions. To circumvent this
difficulty, we thus opted in practice for a different solution that
consists in a recursive numerical minimization of the value of sigma_p over the hierarchy of n-cubes (i.e. hypercubes of dimension
n <= d) that are the faces of each cell of
the sampling grid. The trick is to always reduce the problem to a 1D
minimization of a low order polynomial (see Appendix A). Figure 6 illustrates the full process in 3D. Let us consider
the grid cell of Figure 6, located at the interface of
several different patches. The skeleton extraction algorithm produces the
jagged skeleton represented in red. In order to improve its
resolution, we first consider each of the edges individually (see
Figure 6) and determine for each of them the point of
equal probability for the two patches that dominate at the end points
of the segment. Of course this point only exists if different patches
dominate at the end points of a segment, and we thus obtain at most
12 new points (represented by the red
crosses in this instance). The edges of a cube can be considered as its "one
dimensional faces" or 1-faces. The following step consists in
examining the configuration of its 2-faces, usually called faces for
a 3D cube. Figure 6 illustrates the configuration of
these 2-faces together with the iso-probability points computed
over their edges. We know that at least 3 different patches have to
dominate on the vertices of a 2-face for a
skeleton branch to enter the cell through this face. Using the
minimization algorithm presented in Appendix A and the iso-probability
points on the edges, it is thus possible to compute, over these 2-faces,
the location of the minimum of sigma_p (represented as blue
crosses on Figure 6). Finally, considering the 3-face
of the cell (i.e. the cube itself), one can determine the point of
minimal value of sigma_p over the cube, which is the point
where the skeleton branches connect (see figure 6).
The generalization of this algorithm to d dimensions is relatively straightforward. Let us again consider a cell that is a hypercube of dimension d. We know that the skeleton intersects this cell if at least d different patches dominate among its vertices. In that case, the sub-pixel resolved skeleton can be recovered by considering all the n-faces of the hypercube in ascending order of their dimension n. When considering an n-face, we minimize the value of sigma_p in order to obtain the point where its vertices' respective patches have equal probability, using the points obtained from the (n-1)-faces. This point only exists for an n-face if enough of its vertices most probably belong to different patches (two for an edge, three for a 2-face, and so on). In the end, one thus obtains a number of points from the (d-1)-faces, which are the points where the skeleton enters the cell, and one point for the d-face (i.e. the cell itself), which gives the location where the different branches of the skeleton connect. Figure 7 illustrates the result of applying this algorithm.
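The sketch below illustrates, for a single 3D cell, the spirit of this face-hierarchy minimization: sigma_p is interpolated trilinearly from the vertex probabilities and minimized over the six 2-faces and then over the whole cube. A brute-force scan stands in for the 1D polynomial minimization of Appendix A, and the vertex ordering and random test probabilities are illustrative assumptions.

```python
# Minimize sigma_p (Eq. 4, with trilinear interpolation of vertex probabilities)
# over the faces of a 3D cell, in ascending order of their dimension.
import itertools
import numpy as np

def sigma_p(x, p_vert):
    """p_vert[k][i]: probability of patch k at vertex i, with vertex i at
    coordinates (i & 1, (i >> 1) & 1, (i >> 2) & 1)."""
    coords = np.array([[(i >> a) & 1 for a in range(3)] for i in range(8)], float)
    w = np.prod(np.where(coords == 1, np.asarray(x), 1.0 - np.asarray(x)), axis=1)
    vals = np.array([np.dot(w, pv) for pv in p_vert.values()])
    return vals.std()

def minimize_on_face(fixed, p_vert, n_grid=21):
    """Brute-force minimization of sigma_p on the sub-face obtained by freezing
    the coordinates listed in `fixed` (dict axis -> 0.0 or 1.0)."""
    free = [a for a in range(3) if a not in fixed]
    best, best_x = np.inf, None
    for vals in itertools.product(np.linspace(0.0, 1.0, n_grid), repeat=len(free)):
        x = [0.0, 0.0, 0.0]
        for a, v in fixed.items():
            x[a] = v
        for a, v in zip(free, vals):
            x[a] = v
        s = sigma_p(x, p_vert)
        if s < best:
            best, best_x = s, x
    return best_x, best

# Example: three patches with random per-vertex probabilities (normalized per vertex).
rng = np.random.default_rng(0)
raw = rng.random((3, 8))
raw /= raw.sum(axis=0)
p_vert = {k: raw[k] for k in range(3)}
faces = [{ax: v} for ax in range(3) for v in (0.0, 1.0)]        # the six 2-faces of the cube
entries = [minimize_on_face(f, p_vert) for f in faces]          # where skeleton branches enter
junction, _ = minimize_on_face({}, p_vert)                      # connection point inside the cube
```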
2.3.4 Artifacts correction and differentiability
Though the method presented above to obtain sub-pixel resolution works most of
the time, there nonetheless exist situations where it can fail due to sampling
effects. Figure 8 illustrates such a situation, which can
sometimes occur when the sampling grid pixel size is not totally negligible
compared to the average extension of the patches. When the thickness of a peak
or void patch is smaller than a pixel size, it can in fact lead to mistakenly
isolated sub-regions of size one pixel, implying the creation of spurious loops
in the skeleton (in red). This phenomenon, although rare, occurs in spaces of
arbitrary dimension and triggers artifacts when applying our sub-pixel
resolution algorithm. The green skeleton on Figure 8
presents such an example of a spurious skeleton loop.
In order to fix these anomalous segments, we chose to post-treat the skeletons by opening up all one-pixel sized loops and smoothing the resulting skeleton to enforce a desired level of differentiability in the skeleton trajectory (see the blue skeleton of Figure 8). The smoothing method that we use presents the advantage of being quite robust: it involves fixing some specific points of the skeleton and averaging, a number of times N_s, the position of each non-fixed segment end point with the position of its closest neighbouring end points. Let x_i be the coordinates of the sampled skeleton locations between two fixed points. Before smoothing, all x_i are located on pixel edges, and we can define their smoothed counterparts, computed as:

\tilde{\mathbf{x}}_i^{(n+1)} = \sum_{j=i-1}^{i+1} \omega_j\, \tilde{\mathbf{x}}_j^{(n)}, \qquad \tilde{\mathbf{x}}_i^{(0)} = \mathbf{x}_i,   (9)

with

\omega_{i-1} = \omega_i = \omega_{i+1} = \frac{1}{3},   (10)

where Equation (9) is applied N_s times in order to smooth over N_s segments. Basically, Equation (9) is used to compute smoothed coordinates as a weighted average of the original coordinates x_i together with the coordinates of their direct neighbours, x_{i-1} and x_{i+1}. Applying this scheme N_s times thus produces final smoothed coordinates that are a weighted average of x_i and its 2 N_s closest neighbours along the skeleton.
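A minimal sketch of this smoothing step, assuming the equal weights of Equation (10) and treating the first and last vertices of the chain as the fixed points, could read:

```python
# Repeatedly replace each non-fixed skeleton vertex by the average of itself and its two
# neighbours along the chain (Eqs. 9-10); the chain end points are held fixed.
import numpy as np

def smooth_chain(points, n_smooth):
    """points: (n, d) array of vertex coordinates between two fixed points (inclusive)."""
    x = np.asarray(points, dtype=float).copy()
    for _ in range(n_smooth):
        interior = (x[:-2] + x[1:-1] + x[2:]) / 3.0   # Eq. (9) with the weights of Eq. (10)
        x[1:-1] = interior                            # first and last vertices stay fixed
    return x

# Example: a jagged 2D chain pinned at both ends, smoothed over 3 iterations.
chain = np.array([[0, 0], [1, 1], [2, 0], [3, 1], [4, 0]], dtype=float)
print(smooth_chain(chain, 3))
```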
This smoothing technique introduces two parameters of importance: the skeleton smoothing length N_s (in units of segments), and the type of fixed points. In order to determine the optimal value of N_s, it is possible to minimize the reduced chi^2 corresponding to the discrepancy between the original and smoothed skeletons, supplemented by a penalty corresponding to the total length of the skeleton (over-smoothing increases the discrepancy, while under-smoothing increases the total length). In practice, though, as a post-treatment applied to an already smooth skeleton (obtained using the sub-pixel probabilities), this choice is not critical.
The choice of the skeleton points that should be fixed before smoothing depends on the planned application; in practice, we implemented two possibilities: (i) fixing the field extrema, or (ii) fixing the bifurcation points of the skeleton (i.e. the points of the skeleton where two filaments merge into one). Figure 9 illustrates the influence of this choice on the shape of the smoothed skeleton. By fixing the extrema of the field, one ensures that the skeleton subsets that link these extrema are treated independently: this is the solution used to study the properties of individual filaments in the dark matter distribution on cosmological scales. One should note that, in this case, the parts of the skeleton that belong to several individual filaments are duplicated (see the red skeleton on Figure 9), affecting global properties of the skeleton such as its total length. In contrast, fixing the bifurcation points enforces the smoothing of the skeleton while conserving its global properties.
2.4 Summary
Let us finally recap the main steps involved in the extraction of the fully connected skeleton in a d-dimensional space.

The density field is sampled and smoothed in order to ensure sufficient differentiability. A smoothing scale of at least a few pixels is recommended when using a Cartesian grid.

All pixels are considered in the order of their ascending (or descending) density. Depending on their neighbours, they are labelled as minima (or maxima), or assigned a list of probabilities of belonging to a given VP (or PP), following the algorithm of section 2.1.

Considering only the patch index with highest probability for each pixel, skeleton segments are created on pixel edges whenever the surrounding pixels display at least d different most probable patch indices.

Calling a vertex connected to more than two segments a node of the skeleton and considering each node, the sets of connected segments that link them to other nodes are recorded in order to later recover the information on the skeleton connectivity (and allow a continuous wander along the fully connected skeleton).

The sub-pixel smoothing procedure of Sec. 2.3.3 is applied. All the vertices of the skeleton segments are considered one by one, together with the value of the probability distribution at the centre of the surrounding pixels. According to the sub-pixel algorithm, the extremities are moved in order to obtain a differentiable skeleton.

Configurations that are identified as problematic are corrected following the method described in section 2.3.4, and the resulting skeleton is smoothed over a few pixels while fixing either the bifurcation points or the maxima/minima.

Eventually, individual filaments can be extracted (and tagged) following the method of section 2.2.3.
Figures 11 and 12 show a 3D skeleton computed from a simulated density field at z = 0, sampled at relatively modest resolution.
Note that in this paper, we did not address the issue of shot noise,
which has long been known to be a problem for most segmentation
algorithms, and in particular for watershed techniques (see e.g. Roerdink (1995) for a review of the subject). In fact, shot noise
often leads to over-segmentation, and numerous complex techniques have
been developed to try and compensate for it. Instead, we chose here
to follow the approach used in Novikov et al. (2006), Sousbie et al. (2008a) and
Sousbie et al. (2008b), which involves simply filtering the sampled fields using
a Gaussian kernel on large enough scales (in terms of number of
sampled pixels) so that the sampled
field can be considered a smooth enough representation of the underlying field. A
clear disadvantage of this method is that it introduces a particular
smoothing scale, and thus adds one parameter (the smoothing scale) to
take into account when considering sets of critical lines and surfaces
computed on a field. A short study of the robustness of the skeleton extraction in the case
of a smoothed scalar field is presented in Appendix C. Improvements over this shortcoming are postponed
to further investigations.
Regarding performance, the computing time and memory consumption for the extraction of the skeleton mainly depend on three parameters: the number of pixels N_pix, the smoothing length L_s in units of the pixel size, and the number of dimensions d. Most of the computational power is spent during the first step of the process: the propagation of the probabilities to compute the patches. For constant values of L_s and d, both the speed of the algorithm and its memory consumption are linear in N_pix. A smaller value of L_s implies more, smaller patches, which therefore have proportionally more borders with each other, thus increasing the number of different probabilities to propagate. Indeed, for very small values of L_s, memory consumption is largely increased, as well as the computational time; it seems reasonable to keep L_s above a minimal threshold of a few pixels (which in any case is also necessary to ensure sufficient differentiability of the sampled field). Finally, the value of d is most critical to memory consumption and speed, not only because N_pix should increase with d to keep a constant sampling resolution, but also because the number of neighbours of each pixel grows quickly with d for a Cartesian grid. The computational time and the memory consumption follow, as the number of different probabilities to keep track of is also much increased (each pixel having many more neighbours, the ratio of the patch interface surface to the patch volume increases and so does the number of different probabilities to propagate, over larger distances). To give an order of magnitude, on a single modern CPU, skeletons computed on small grids and smoothed over a few pixels are obtained in a matter of seconds with a memory footprint of the order of megabytes; on an intermediate grid the computation takes approximately one minute and about a hundred megabytes of memory, while for the largest grids used here it takes about an hour and several gigabytes of memory. While such skeletons are still tractable at a decent resolution, higher dimensionality seems difficult to reach with present facilities without implementing a fully parallel version of the code.
3 An application: validating the Zel’dovich mapping
The scope of application of the algorithm presented in Sec. 2 is vast (see Sec. 4 for a discussion). Here we shall focus on a simple example which makes use of one of the clear virtues of the above implementation: it allows us to identify the filaments present in the matter distribution on cosmological scales as physical objects, and to see how these objects evolve with time.
Specifically, we intend to show, using the skeleton as a diagnostic tool, that a relatively simple but powerful model, namely the truncated Zel’dovich approximation mapping (Zel’Dovich, 1970), can capture the main features of the cosmic evolution of the web. Indeed predicting the evolution of matter distribution from the point of view of the topology and the geometry of the cosmic web has been a recurrent issue in cosmology (e.g. Bond & Myers (1996)) and is becoming critical as the geometry of the cosmic environment is now believed to play a key role in shaping galaxies (see, e.g. Ocvirk et al. (2008)).
Being able to carry out such an extrapolation from the initial conditions to the present day distribution of filaments should lead to a simplified and broader understanding of large scale structures in the Universe, in the same way the concept of clusters as important physical objects gave birth to the hierarchical model of structure formation. The fully connected skeleton encompasses both the geometry and the topology of the cosmic web: it is therefore the ideal tool to validate this mapping between the initial conditions and the present day distribution of filaments. Understanding and partially correcting for the distortion induced by the proper motions of the structures is also of prime importance when dealing with observational data sets (see e.g. Pichon et al. (2001)).
The principle of the Zel'dovich approximation (ZA hereafter) is to make a first order approximation, in Lagrangian coordinates, of the motion of the collisionless dark matter (DM) particles. The motion of these particles from their positions q in the initial mass distribution, in Lagrangian coordinates, to their Eulerian coordinates x can therefore be described as:

\mathbf{x}(\mathbf{q}, z) = \mathbf{q} + D_{+}(z)\, \boldsymbol{\Psi}(\mathbf{q}),   (11)

where z is the redshift, D_+(z) the growth factor, and Psi(q) the displacement field, computed in the initial matter distribution as:

\boldsymbol{\Psi}(\mathbf{q}) = -\frac{2}{3\, \Omega_{\rm in}\, H_{\rm in}^{2}}\, \nabla_{\mathbf{q}}\, \Phi_{\rm in}(\mathbf{q}),   (12)

where H is the Hubble constant, Omega the density parameter, Phi the gravitational potential and the subscript "in" stands for initial conditions.
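For illustration, the displacement field of Equation (12) can be obtained from the initial density contrast by solving the Poisson equation in Fourier space. The sketch below omits the cosmological prefactor (absorbing it into the normalization of Psi) and uses an arbitrary grid size, box length and unit growth factor, all of which are assumptions made for the example.

```python
# Zel'dovich displacement from the initial density contrast: Psi_k = i k delta_k / k^2,
# so that div(Psi) = -delta at linear order (i.e. Psi is minus the gradient of the
# rescaled potential). Particles on the grid are then moved according to Eq. (11).
import numpy as np

def zeldovich_displacement(delta, box_size):
    """delta: (n, n, n) initial density contrast; returns Psi with shape (3, n, n, n)."""
    n = delta.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                              # avoid division by zero at k = 0
    delta_k = np.fft.fftn(delta)
    psi = np.empty((3,) + delta.shape)
    for i, ki in enumerate((kx, ky, kz)):
        psi_k = 1j * ki * delta_k / k2             # Psi_k = i k delta_k / k^2
        psi_k[0, 0, 0] = 0.0
        psi[i] = np.fft.ifftn(psi_k).real
    return psi

# Apply Eq. (11) to particles initially at the grid points, with D_+ = 1 for illustration.
n, box = 32, 100.0
delta_in = np.random.default_rng(1).normal(0.0, 0.05, (n, n, n))
psi = zeldovich_displacement(delta_in, box)
q = np.indices((n, n, n)) * (box / n)
x = (q + 1.0 * psi) % box                          # Eulerian positions, periodic wrapping
```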
The truncated Zel'dovich
approximation simply consists
in filtering the short scale modes of the initial power spectrum before
computing the displacement field, in order to prevent shell crossing
effects. It has been shown to improve the precision of the
approximation (Coles et al., 1993). As we are mainly interested in the
large scale behaviour of the cosmic web, the smoothing scale,
L_t (in Mpc), that we use hereafter to compute Psi
roughly corresponds to the scale of non-linearity at z = 0, as the so-called truncated Zel'dovich
approximation has been shown
to work best above this scale (Kofman et al., 1992). It was
computed as the scale at which, in the simulation, the rms fluctuation of the smoothed
density field reaches unity at z = 0.
3.1 Simulation and skeletons
The numerical simulation that we use in this section was computed with the publicly available N-body code GADGET2 (Springel, 2005). It corresponds to a dark matter only cosmological simulation run in a periodic cubic box, assuming a concordant LambdaCDM cosmology. In order to study the evolution of the cosmic web, a set of reference skeletons, noted S_sim(z) hereafter, was computed from different snapshots, at redshifts ranging from that of the initial conditions of the simulation, z_in, down to z = 0. These skeletons were computed on density fields generated by sampling the particle distribution of the respective snapshots on a regular grid and smoothing with a Gaussian kernel of size L_s. In order to understand whether the truncated Zel'dovich approximation is able to capture the essential features of the cosmic web, these skeletons are compared to different sets of skeletons, generated using the truncated Zel'dovich approximation in different ways:

S_ZA(z): this set of skeletons is generated by applying the Zel'dovich approximation to the DM particles of the simulation in the initial conditions. The displacement field is computed after smoothing over the scale L_t, and the resulting distribution is sampled and smoothed over the scale L_s used for the reference skeletons.

S_SZA(z): these skeletons are generated by applying the Zel'dovich approximation directly to the skeleton of the initial conditions. The initial conditions (z = z_in) are sampled and smoothed over the scale L_s to compute their skeleton. The displacement field is computed on the same field, but smoothed over a larger scale L'_t (in Mpc, chosen such that the smoothed density fluctuations remain well within the linear regime at z = 0), and the Zel'dovich approximation is applied to each segment of the initial conditions skeleton. We use a larger truncation scale for the Zel'dovich approximation here in order to prevent shell crossing, which can be tolerated when applied to particles but would result in a very fuzzy displaced skeleton.

S'_SZA(z): same as S_SZA(z), but with a displacement field smoothed over the scale L_t, in order to check the influence of this choice on the resulting skeleton.

S'_ZA(z): same as S_ZA(z), but the sampled field is smoothed on a scale L'_s instead of L_s, to take into account the fact that the Zel'dovich approximation introduces an artificial additional smoothing scale (see below).
3.2 Skeleton length
There exist many different ways to compare one dimensional sets of lines within a given space, but one of the simplest certainly involves comparing their lengths. Figure 13 presents the measured length per unit volume of the different sets of skeletons (described above) as a function of redshift. Let us first consider the length of S_sim(z) (purple curve with disc symbols). It was shown in Sousbie et al. (2008a) and Sousbie et al. (2008b) that, whereas for scale invariant fields such as the initial conditions of the simulation the length per unit volume of the skeleton is expected to scale as L_s^{-2} (L_s being the smoothing length), it in fact grows with a somewhat different exponent around z = 0 for the LambdaCDM simulation. Note that the fact that the length of S_sim(z) decreases with time seems consistent with the expected evolution of the matter distribution in the case of a cosmological constant, where the expansion accelerates at late times. In that case, matter tends to form separate, distant, heavy halos: the more numerous small filaments on smaller scales shrink and melt into each other as dark matter halos merge, while larger scale filaments tend to stretch; the net result is a total length decrease.
This process seems to be well captured by the Zel'dovich
approximation, as the
length of S_ZA(z) (red curve, square
markers) exhibits the same time evolution as the length of
S_sim(z). The discrepancy between the length measured
in the simulation and with the Zel'dovich
approximation nonetheless remains noticeable
at z = 0. This disagreement should be explained in part
by the fact that the Zel'dovich
approximation uses a displacement field computed from a smoothed
version of the initial condition density field, thus introducing an
additional smoothing that one should take into account when computing
S_ZA(z).
The ratio, R(z), of the length of S_sim(z) to the length of S_ZA(z), measured as a function of time (parametrized by the growth factor D_+(z)), is displayed on Figure 14. It appears that R is approximately a linear function of D_+(z) and can thus be fitted as

R(z) \simeq 1 + \alpha \left[ D_{+}(z) - D_{+}(z_{\rm in}) \right],   (13)

where z_in is the redshift of the initial conditions from which the Zel'dovich approximation was computed and alpha is the fitted slope. Moreover, the fact that the value of R remains relatively close to unity confirms that the artificial smoothing introduced by the Zel'dovich approximation is small; we chose to model it as a convolution with a Gaussian kernel of size L_ZA. The effective Gaussian smoothing applied to the Zel'dovich approximation has scale L_eff and is thus the result of the composition of two Gaussian smoothings of scales L_s and L_ZA:

L_{\rm eff}^{2} = L_{s}^{2} + L_{\rm ZA}^{2}.   (14)

Using equations (13) and (14), and the fact that the skeleton length per unit volume scales with the smoothing scale as L_s^{-2} (Sousbie et al., 2008b), the value of L_ZA that yields the best match with the LambdaCDM simulation is thus

L_{\rm ZA} = L_{s} \sqrt{R - 1}.   (15)

In order to compute a skeleton that is comparable to S_sim(z), one should therefore smooth the distribution obtained using the Zel'dovich approximation on a scale L'_s such that

L_{s}'^{2} = L_{s}^{2} - L_{\rm ZA}^{2}.   (16)
On Figure 13, the dotted black curve represents the measured length of S'_ZA(z), for which the effective smoothing introduced by the Zel'dovich approximation is taken into account. The agreement with the measurements in the simulation is significantly improved compared to the naive approach; this suggests that the Zel'dovich approximation can be used to predict the shape of the evolved cosmic web from the initial conditions distribution only.
Of course, the length is only a global
characteristic of the skeleton and it certainly does not fully
constrain its shape. Higher order estimators that can compare the
relative positions and shapes of skeletons are needed to
quantify how good an approximation the skeleton obtained with the Zel'dovich
approximation is.
Before doing so, let us consider an alternative form of the Zel'dovich approximation where, instead of displacing the particles from the initial conditions of the simulation to derive the evolved density field, we directly use the displacement field to evolve the skeleton of the initial conditions. This method will be called here the skeleton Zel'dovich approximation (SZA hereafter), and the resulting skeleton S_SZA(z). Studying the properties of S_SZA(z) is interesting as it should make it possible to distinguish between two different processes that affect the properties of the cosmic web: the simple deformation of the initial cosmic web on the one hand, and the creation or annihilation of filaments on the other hand. Indeed, S_SZA(z) reflects only the modification of the skeleton due to its deformation, while S_ZA(z) also takes into account the merging and annihilation of filaments. Note nonetheless that, by definition, the locus of the skeleton for the SZA is biased toward higher density regions; in these regions, non-linear effects inducing shell-crossings in the Zel'dovich approximation are more likely. To be conservative, we thus use a larger smoothing length L'_t than L_t to compute the displacement field. This smoothing length (in Mpc) was chosen such that the smoothed density fluctuations remain safely linear at z = 0; the green curve (cross markers) of Figure 13 shows that using L_t or L'_t does not make any difference regarding the length of the skeleton. On this figure, the blue curve (triangle markers) depicts the evolution of the length of S_SZA(z): its behaviour is clearly opposite to the S_ZA(z) case, as the length rises with time. Although surprising at first sight, this result only confirms our previous interpretation of the evolution of the cosmic web. In fact, while the SZA can nicely capture the large scale evolution of long filaments, the smaller ones cannot melt into each other, which leads several small scale filaments to be located at the same loci, where only one piece of filament should have been measured. The disappearance of the smaller scale filaments does not compensate anymore for the expansion of the large scale filaments: the net result is thus an increase of the total measured length of S_SZA(z) with time.
3.3 Inter-skeleton pseudo-distance
Let us now define a way to compute a pseudo-distance between two different skeletons (see also Caucci et al. (2008)). In practice, a skeleton is always computed from a sampled density and thus has a maximal resolution . It can therefore be described, without loss of information, as the union of a set of straight segments of size . We define a pseudo-distance from a skeleton  to a skeleton , , as the probability distribution function (PDF) of the minimal distance from the segments  to any of the segments . In practice, our algorithm applied to a density field sampled on a Cartesian grid naturally leads to a skeleton described as a set of segments of size of the order of the sampling resolution. Hence we directly use these segments to compute inter-skeleton distances.
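A minimal sketch of such a pseudo-distance measurement is given below; it approximates each resolution-sized segment by its midpoint and uses a k-d tree for the nearest-neighbour search. The function name and the midpoint approximation are ours, not the paper's exact implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def pseudo_distance_pdf(segments_a, segments_b, bins=50):
    """PDF of the minimal distance from the segments of skeleton A to skeleton B.

    segments_a, segments_b : (N, 2, 3) arrays of segment end points.  Since the
    segments are of the order of the sampling resolution, each one is represented
    here by its midpoint (an approximation to the exact segment-to-segment distance).
    Note that the measure is directional: pdf(A, B) generally differs from pdf(B, A).
    """
    mid_a = segments_a.mean(axis=1)
    mid_b = segments_b.mean(axis=1)
    d_min, _ = cKDTree(mid_b).query(mid_a)        # nearest B segment for every A segment
    pdf, edges = np.histogram(d_min, bins=bins, density=True)
    return 0.5 * (edges[:-1] + edges[1:]), pdf    # bin centres, PDF values
```

By construction the measure is not symmetric, which is why the two directions are compared separately below.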
Note that there is no reason, in general, for  to be identical to ; this discrepancy, together with the values of the different modes of the PDFs, in fact quantifies the differences between  and  (see appendix B for details on how to interpret pseudo-distance PDFs). The upper and lower panels of Figure 15 present the pseudo-distance measurements obtained by comparing  to  and  respectively. A close examination of Figure 15(a) confirms the hypothesis we made in the previous subsection. First, the high correlation of  and  (bold curves) at any redshift is demonstrated by the localization of the mode around  kpc, well below the smoothing length  Mpc. Second, the asymmetry between the PDFs of  and  follows from the fact that  has smaller scale filaments that have no counterpart in  (the mode intensity is higher for  than for ). This is exactly what should happen if  were effectively smoothed on a scale larger than . The thin curves, for which the effective Zel’dovich approximation smoothing was taken into account, confirm this, as the asymmetry is completely removed in that case.
It is also interesting to look at the distance PDFs between
and  (see Figure 15(b)). Except for high redshifts (), the general intensity of the modes is lower for  than for
, suggesting that the Zel’dovich
approximation is a better description
of the evolution of the filaments on large scales, and that filament mergers and creation are important processes. The general position of
the modes is still comparable, which means that SZA is nonetheless
successful in describing the evolution of the general shape of the
cosmic web. Also, the asymmetry between and
suggests that has
more small scale filaments than . These
observations confirm our previous assumption that although
the cosmic web evolves in a simple inertial way on larger scales (a
process captured by SZA), the shrinking and fusion of the more
numerous smaller scale filaments is the cause of the general
length decrease of the cosmic web (as suggested by a simple visual
examination of a subregion of ,  and  in Figure 16).
The above investigation opens the prospect of correcting for the peculiar velocities of galaxies induced by gravitational clustering, and of carrying out an Alcock-Paczynski (AP) test (Alcock & Paczynski, 1979) on the skeleton of the large scale structures of the universe. In short, the AP test compares observed transverse and longitudinal distances to constrain the global geometry of the universe. Galaxy positions are usually observed in redshift space, which induces an important distortion between the distances measured along and orthogonal to the line of sight and plagues the regular AP test. Our analysis suggests that it is in fact possible to correct, through the Zel’dovich approximation, for the distortions induced on the cosmic web. Having carried out such a correction, we expect that the measure of the anisotropy of the observed skeleton length ratio could yield a good constraint on the value of the cosmological parameters. Note also that the Zel’dovich mapping smoothed with  (see Equation (16)) could be used to synthetically generate sets of extremely large cosmic skeletons probing exotic cosmologies, using codes such as mpgrafic (Prunet et al., 2008) to generate the initial conditions and their Zel’dovich displacement. This construction could then be populated with halos and substructure using semi-analytical models. Note finally that the total length and the skeleton pseudo-distance are only two probes amongst many possible ways to characterize the difference between two skeletons. Moreover, there are other means to quantify the evolution of the cosmic web. For instance, an interesting statistic would be to find out how often reconnections of the skeleton occur as a function of redshift.
4 Discussion and prospects
We have presented a method, based on an improved watershed
technique, to efficiently compute the full hierarchy of critical subsets from a
density field within spaces of arbitrary dimensions. Our algorithm
uses a fast one pass probability propagation scheme that is able to improve significantly
the quality of the segmentation by circumventing the discreteness of
the sampling. We showed that, following Morse theory, a recursive
segmentation of space yields, for a dimensional
space, a succession of dimensional subspaces that
characterize the topology of the density field. In 3D, for the cosmological matter density distribution, we particularly focused on the 3D subspaces, which are the peak and void patches of the field (i.e. the attraction/repulsion pools), and the 1D critical lines, which trace the filaments as well as the whole primary cosmic web structure (i.e. a fully connected, non-local skeleton as defined in Novikov et al. (2006)). For the primary critical lines, we also demonstrated that it is possible to use the probability distribution from our algorithm to derive a smooth and differentiable skeleton with sub-pixel resolution. This method thus allows us to consider the cosmic web as a precise physical object and makes it possible to compute any of its properties such as length, curvature, halo connectivity, etc.
As an application, we used our algorithm to study the evolution of the
cosmic web, while comparing the time evolution of the skeleton (a proxy to the cosmic web) of a
simulation, to those corresponding to different versions of its Zel’dovich
approximation. We first compared the
evolution of the respective lengths of the different skeletons and then introduced a method to
compute pseudo-distances between different skeletons. This pseudo-distance makes it possible to compare different features of the skeletons, such as the size of their filaments and the similarity of their locations. Using these measurements, we showed that two effects were competing, the net result being a decrease of the length of the cosmic web with time: a general dilation of the larger scale filaments that could be captured by a simple deformation of the skeleton of the initial conditions on the one hand, and the shrinking, fusion and disappearance of the more numerous smaller scale filaments on the other hand.
We also showed that a simple Zel’dovich
approximation could accurately capture most features of the evolution of the cosmic
web for all scales larger than a few megaparsecs (provided an effective
smoothing introduced by the approximation is taken into account).
Hence in this context,
the skeleton has proven to be a useful tool both for insight and as a quantitative
probe and diagnostic. Conversely, the match between the simulated and the mapped skeleton
confirms and extends geometrically the (point process elliptical) peak patch theory (Bond & Myers, 1996) since both the peaks and their frontiers
(the skeleton in 2D and the peak patch volumes in 3D) are well mapped by the Zel’dovich
approximation.
The domain of interest of the skeleton is quite vast and offers the prospect of a number of applications.
From a theoretical point of view, using the results presented in this paper and in Sousbie et al. (2008a), we are presently developing a general theory of the skeleton and its statistical properties (Pogosyan et al., in prep.) that aims to understand the properties of the critical lines of scale invariant Gaussian random fields as mathematical objects. In particular, this companion paper provides quantitative analytic predictions for the length per unit volume (resp. curvature) of the critical lines and its scaling with the shape parameter of the field, and successfully checks the current algorithm against these predictions. In this paper we focussed on the skeleton. One could clearly investigate the rest of the peak patch hierarchy and measure, say, the surface or volume of the (hyper)surfaces of the recursion (whose last intersection is given by the primary critical lines). Another interesting issue would be to estimate the fraction of special (degenerate) points which do not satisfy the Morse condition, where the field behaves pathologically (Pogosyan et al., in prep.).
For instance, one of the shortcomings of the present algorithm concerns special fields where critical lines disappear, a situation which occurs, say, in the context of tracing dendrites in a neural network, or blood vessels within a liver. Note also that there exist other sets of (geometrical) critical lines that are not topological invariants, such as the lines of steepest ascent directly connecting minima to maxima, which are not accounted for by the present formalism.
In contrast, the algorithm is well suited to identify bifurcation points, and the connectivity
of the network. In particular, in an astrophysical context, it would be worthwhile to make use of this feature
and
study statistically how the skeleton connects onto dark matter halos as a function of, say,
their mass or spin, and investigate the details of local spin accretion in the context of the
cosmic web superhighways, hence completing the spin alignment
measurements of Sousbie et al. (2008a) on smaller scales.
More generally, via the connectivity it recovers, the algorithm provides a neat bridge between the theory of continuous fields on the one hand, and
graph theory for discrete networks on the other.
This could prove to be of importance in the context of percolation theory. For instance,
the
percolation threshold can be explained in terms of the properties of the connectivity of the relevant nodes (see e.g. Colombi et al. (2000)).
Here, as argued in section 2.4, we deliberately chose not to consider the issue of shot noise and its consequences on the segmentation, for which no definitive solution yet exists, though many improvements have been proposed in the literature (see e.g. Roerdink (1995)). Instead, we followed the approach of Sousbie et al. (2008a), which simply involves convolving the sampled density field with a large enough (in terms of sampling scale) Gaussian kernel so that the field can be considered smooth and differentiable; the probabilistic algorithm allows for the removal of sampling effects and small-intensity residual shot noise. In appendix C we show that the corresponding fully connected skeleton is nonetheless quite robust (the core of the network remains quasi-unchanged), so long as the SNR is above one.
A possible
drawback of this method is that it introduces a smoothing scale
attached to the skeleton. This is not necessarily a
problem in cosmology as the scaling of the skeleton properties with
scale yields information on the distribution over these scales.
Moreover, one is usually interested in the properties of the skeleton on a given scale (typically larger than the halo scale, a few megaparsecs). Nonetheless, there exist more complex multiscale sampling and smoothing techniques, such as those presented in Platen et al. (2007) or Colberg (2007), that could straightforwardly be adapted to our implementation. All the algorithm requires is a structured sampling grid with a one-to-one pixel neighbourhood (i.e. one needs to be able to find the neighbouring pixels of any pixel, and these pixels must have the former as a neighbour as well). For instance, we already implemented the algorithm for a HEALPix (Górski et al., 2005) pixelisation of the sphere (see Figure 17), while a direct implementation on a Delaunay tessellation network is clearly an option (for instance, to segment regions on the surface of a skull).
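For instance, the symmetry of the pixel neighbourhood, which is the only property required here, can be checked directly on a HEALPix grid with the healpy package; the sketch below is illustrative only (the function name and the test resolution are ours).

```python
import healpy as hp   # assumes the healpy package providing the HEALPix routines

def neighbourhood_is_symmetric(nside, nest=True):
    """Check that, on a HEALPix grid, 'j is a neighbour of i' implies
    'i is a neighbour of j' -- the property the segmentation algorithm requires."""
    npix = hp.nside2npix(nside)
    for i in range(npix):
        for j in hp.get_all_neighbours(nside, i, nest=nest):
            if j < 0:          # -1 flags a missing neighbour for some pixels
                continue
            if i not in hp.get_all_neighbours(nside, j, nest=nest):
                return False
    return True

print(neighbourhood_is_symmetric(nside=16))   # small test resolution
```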
A natural extension of the theoretical component of this work would be to investigate numerically the properties of the bifurcation points in abstract space or anisotropic settings (see Pogosyan et al. (in prep.) for a theoretical discussion for isotropic Gaussian random fields). For instance, in the context of cosmic structure formation,
Hanami (2001) relied on the parallel between the skeleton of the density field in position-time 4D space and in position-scale 4D space to relate the two. In the former, the skeleton is a natural way of computing what is known as a halo merger tree, commonly used in semi-analytical galaxy formation models (see Hatton et al. (2003) for instance): the skeleton traces the evolution of the critical points of the density field in time. The peak theory (Bond & Myers, 1996) tells us that the smoothing scale can be linked to time evolution on scales where gravitational effects remain weakly nonlinear. A worthwhile goal is to establish the parallel between the properties of the 4D skeleton in this position-smoothing-scale space (which can be computed from the Gaussian initial conditions only) and the halo merger tree. Finally, note that classical bifurcation theory is concerned with the
evolution of a critical point as a function of a control parameter.
In the language of the skeleton, this evolution may correspond to the skeleton in the extended “phase space”.
From the physical and observational point of view, another interesting avenue would be to apply the skeleton to actual galaxy catalogs such as the SDSS (Adelman-McCarthy et al., 2008) to characterize the (universal) statistics of filaments as physical objects, like halos or voids, and describe them in terms of their thickness, length, curvature and environmental properties (galaxy types, halo proximity, color and morphology gradients…), both in virtual and observed catalogs. The skeleton could also be used as a diagnostic tool for inverse methods which aim at reconstructing the three dimensional distribution of the IGM from, say, QSO bundles (Caucci et al., 2008) or upcoming radio surveys (LOFAR, SKA, etc.).
Clearly the peak patch segmentation
developed in this paper will also be useful in the context of the upcoming surveys such as
the LSST or the SDSS3 BAO surveys, for instance to identify rare events
such as large walls or voids and study their shape.
Its application to CMB-related full-sky data, such as WMAP or Planck, should provide insight into, e.g., the level of non-Gaussianity in these maps.
Similarly, upcoming large scale weak lensing surveys (Dune, SNAP…)
could be analysed in terms
of these tools (see Pichon et al. (in prep.) for the validation of a reconstruction method in this context).
Using the skeleton, the geometry of the cold gas accretion that fuels star formation in the cores of galaxies could be probed. The properties of the distribution of metals on smaller scales could also be investigated using peak patches, to see how they influence galactic properties; one could compare these statistical results to those obtained through WHIM detection via oxygen emission lines (Aracil et al. (2004); Caucci et al. (2008)). Indeed it has been claimed (see e.g. Ocvirk et al. (2008); Dekel et al. (2008)) that the geometry of the cosmic inflow onto a galaxy (its mass, temperature and entropy distribution, the connectivity of the local filament network, etc.) is strongly correlated with its history and nature. In closing, let us emphasize again that the scope of application of the algorithm presented in this paper extends well beyond the context of the large scale structure of the universe: it could be used in any scientific or engineering context (medical tomography, geophysics, drilling…) where the geometrical structure of a given field needs to be characterized.
Acknowledgements
We thank D. Pogosyan, D. Aubert, J. Devriendt, J. Blaizot, S. Peirani and S. Prunet for fruitful comments during the course of this work, and D. Munro for freely distributing his Yorick programming language and OpenGL interface (available at http://yorick.sourceforge.net/). This work was carried out within the framework of the Horizon project, www.projethorizon.fr.
References
 Adelman-McCarthy et al. (2008) Adelman-McCarthy, J. K., et al. 2008, ApJ Sup., 175, 297
 Alcock & Paczynski (1979) Alcock, C., & Paczynski, B. 1979, Nature, 281, 358
 Aracil et al. (2004) Aracil B., Petitjean P., Pichon C., Bergeron J., 2004, A&A, 419, 811
 Aragón-Calvo et al. (2007) Aragón-Calvo, M. A., Jones, B. J. T., van de Weygaert, R., & van der Hulst, J. M. 2007, A&A, 474, 315
 Aragón-Calvo et al. (2007) Aragón-Calvo, M. A., van de Weygaert, R., Jones, B. J. T., & van der Hulst, J. M. 2007, ApJ Let., 655, L5
 Aubert & Pichon (2006) Aubert, D., & Pichon, C. 2006, EAS Publications Series, 20, 37
 Aubert et al. (2004) Aubert, D., Pichon, C., & Colombi, S. 2004, MNRAS, 352, 376
 Barrow et al. (1985) Barrow, J. D., Bhavsar, S. P., & Sonoda, D. H. 1985, MNRAS, 216, 17
 Bertschinger (1985) Bertschinger, E. 1985, ApJ Sup., 58, 1
 Beucher & Lantuejoul (1979) Beucher S., Lantuejoul C., 1979, in Proceedings International Workshop on Image Processing, CCETT/IRISA, Rennes, France
 Beucher & Meyer (1993) Beucher S., Meyer F., 1993, Mathematical Morphology in Image Processing , ed. M. Dekker, New York, Ch. 12, 433
 Bharadwaj et al. (2000) Bharadwaj, S., Sahni, V., Sathyaprakash, B. S., Shandarin, S. F., & Yess, C. 2000, ApJ, 528, 21
 Bond & Myers (1996) Bond, J. R., & Myers, S. T. 1996, ApJ Sup., 103, 1
 Bond et al. (1991) Bond, J. R., Cole, S., Efstathiou, G., & Kaiser, N. 1991, ApJ, 379, 440
 Colberg (2007) Colberg, J. M. 2007, MNRAS, 375, 337
 Coles et al. (1993) Coles, P., Melott, A. L., & Shandarin, S. F. 1993, MNRAS, 260, 765
 Colless et al. (2003) Colless, M., et al. 2003, ArXiv Astrophysics eprints, arXiv:astroph/0306581
 Colombi et al. (2000) Colombi, S., Pogosyan, D., & Souradeep, T. 2000, Physical Review Letters, 85, 5515
 Caucci et al. (2008) Caucci S., Colombi S., Pichon C., Rollinde E., Petitjean P., Sousbie T., 2008, MNRAS in press, pp 000–000
 Dekel et al. (2008) Dekel A., Birnboim Y., Engel G., Freundlich J., Goerdt T., Mumcuoglu M., Neistein E., Pichon C., Teyssier R., Zinger E., 2008, ArXiv eprints, 808
 de Lapparent et al. (1986) de Lapparent, V., Geller, M. J., & Huchra, J. P. 1986, ApJ Let., 302, L1
 El-Ad & Piran (1997) El-Ad, H., & Piran, T. 1997, ApJ, 491, 421
 Forero-Romero et al. (2008) Forero-Romero, J. E., Hoffman, Y., Gottloeber, S., Klypin, A., & Yepes, G. 2008, arXiv:0809.4135
 Gottloeber (1998) Gottloeber, S. 1998, Large Scale Structure: Tracks and Traces, 43
 Górski et al. (2005) Górski K. M., Hivon E., Banday A. J., Wandelt B. D., Hansen F. K., Reinecke M., Bartelmann M., 2005, ApJ, 622, 759
 Graham et al. (1995) Graham, M. J., Clowes, R. G., & Campusano, L. E. 1995, MNRAS, 275, 790
 Hahn et al. (2007) Hahn, O., Carollo, C. M., Porciani, C., & Dekel, A. 2007, MNRAS, 381, 41
 Hanami (2001) Hanami, H. 2001, MNRAS, 327, 721
 Harker et al. (2006) Harker, G., Cole, S., Helly, J., Frenk, C., & Jenkins, A. 2006, MNRAS, 367, 1039
 Hatton et al. (2003) Hatton S., Devriendt J. E. G., Ninin S., Bouchet F. R., Guiderdoni B., Vibert D., 2003, MNRAS, 343, 75
 Hoffman & Shaham (1982) Hoffman, Y., & Shaham, J. 1982, ApJ Let., 262, L23
 Huchra & Geller (1982) Huchra, J. P., & Geller, M. J. 1982, ApJ, 257, 423
 Icke (1984) Icke, V. 1984, MNRAS, 206, 1P
 Jost (1995) Jost, J. 1995, Riemannian Geometry and Geometric Analysis, Fourth Edition, Springer
 Kirshner et al. (1981) Kirshner, R. P., Oemler, A., Jr., Schechter, P. L., & Shectman, S. A. 1981, ApJ Let., 248, L57
 Kofman et al. (1992) Kofman, L., Pogosyan, D., Shandarin, S. F., & Melott, A. L. 1992, ApJ, 393, 437
 Lacey & Cole (1993) Lacey, C., & Cole, S. 1993, MNRAS, 262, 627
 Martinez & Saar (2002) Martinez, V., & Saar, E. 2002, Proc. SPIE, 4847, 86
 Merritt et al. (2006) Merritt, D., Graham, A. W., Moore, B., Diemand, J., & Terzić, B. 2006, AJ, 132, 2685
 Milnor (1963) Milnor, J., 1963, Morse Theory (Princeton University, Princeton, NJ)
 Navarro et al. (1997) Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, ApJ, 490, 493
 Neyrinck (2008) Neyrinck, M. C. 2008, MNRAS, 386, 2101
 Neyrinck et al. (2005) Neyrinck, M. C., Gnedin, N. Y., & Hamilton, A. J. S. 2005, MNRAS, 356, 1222
 Novikov et al. (2006) Novikov, D., Colombi, S., & Doré, O. 2006, MNRAS, 366, 1201
 Ocvirk et al. (2008) Ocvirk P., Pichon C., Teyssier R., 2008, ArXiv eprints, 803
 Peebles (1980) Peebles, P. J. E. 1980, Research supported by the National Science Foundation. Princeton, N.J., Princeton University Press, 1980. 435 p.,
 Peebles (1993) Peebles, P. J. E. 1993, Princeton Series in Physics, Princeton, NJ: Princeton University Press, —c1993,
 Pichon et al. (2001) Pichon C., Vergely J. L., Rollinde E., Colombi S., Petitjean P., 2001, MNRAS, 326, 597
 Pichon et al. (in prep.) Pichon C., Thiébaut E., Prunet S., Benabed K., Sousbie,T., Teyssier R., Colombi S. in prep.
 Pogosyan et al. (in prep.) Pogosyan D., Pichon C., Prunet S., Gay C., Sousbie,T., Colombi S. in prep.
 Platen et al. (2007) Platen, E., van de Weygaert, R., & Jones, B. J. T. 2007, MNRAS, 380, 551
 Prunet et al. (2008) Prunet S., Pichon C., Aubert D., Pogosyan D., Teyssier R., Gottloeber S., 2008, ArXiv eprints, 804
 Roerdink (1995) Roerdink, J. B. T. M., & Meijster, A. 2001, Fundamenta Informaticae, 41, 187-228
 Sahni et al. (1998) Sahni, V., Sathyaprakash, B. S., & Shandarin, S. F. 1998, ApJ Let., 495, L5
 Sathyaprakash et al. (1996) Sathyaprakash, B. S., Sahni, V., & Shandarin, S. F. 1996, ApJ Let., 462, L5
 Sheth (1998) Sheth, R. K. 1998, MNRAS, 300, 1057
 Sousbie et al. (2008a) Sousbie, T., Pichon, C., Colombi, S., Novikov, D., & Pogosyan, D. 2008a, MNRAS, 383, 1655
 Sousbie et al. (2008b) Sousbie, T., Pichon, C., Courtois, H., Colombi, S., & Novikov, D. 2008b, ApJ Let., 672, L1
 Springel (2005) Springel, V. 2005, MNRAS, 364, 1105
 Springel et al. (2001) Springel, V., White, S. D. M., Tormen, G., & Kauffmann, G. 2001, MNRAS, 328, 726
 Stoica et al. (2005) Stoica, R. S., Martínez, V. J., Mateu, J., & Saar, E. 2005, A&A, 434, 423
 Teyssier et al. (2008) Teyssier, R., Pires, S., Prunet, S., Aubert, D., Pichon, C., Amara, A., Benabed, K., Colombi, S., Refregier, A., & Starck, J.-L. 2008, submitted
 van de Weygaert & Schaap (2007) van de Weygaert, R., & Schaap, W. 2007, ArXiv eprints, 708, arXiv:0708.1441
 Wang et al. (2007) Wang, H. Y., Mo, H. J., & Jing, Y. P. 2007, MNRAS, 375, 633
 Zel’Dovich (1970) Zel’Dovich, Y. B. 1970, A&A, 5, 84
Appendix A A generic minimization algorithm
In this appendix, we present a generic algorithm that aims at minimizing a multilinear scalar function of variables within a polygonal volume, in a dimensional space, by reducing the problem to finding the respective minima of a set of polynomials of order . It takes as input the location of the minima, , of on the edges of the square and simply consists in recursively minimizing the value of along the lines joining them.
Let us first consider the 2D case illustrated by Figure 18, where the cell is a square. In this case, three minima, , and (represented by red crosses) can be easily found on the edges of the square from the linearly interpolated value of along them. One can then compute the location of the minima along the three lines linking them (the red triangle), noting that because of the multilinearity of , its value along a line can be expressed as a second order polynomial. One thus obtains new points, , and , and the process can be repeated, as represented by the blue and black sets of lines, until convergence to the solution, represented by the blue cross (i.e. when the three points are close enough to each other).
This algorithm can be generalized to the case of the face of an cubic cell, , thus providing the solution over the face from the solutions, , over the sets of faces that are its edges. As explained in section 2.3.3, this algorithm is thus recursively applied to the edges of the cell, starting from the faces, in the order of their increasing dimensionality. The step of the algorithm thus goes as follows:

1. Compute the equations of the lines joining pairs of .
2. Evaluate the value of  at points along these lines using multilinear interpolation, and fit a polynomial of order .
3. Find the minima of these polynomials that belong to the cell and keep the lowest among them, with coordinates .
4. If these points are all contained in a sphere of radius a given fraction of the cell size, stop; else start over.
Note that although only the case of a Cartesian sampling grid was presented here, the algorithm is easily transposable to any type of grid, such as those produced by a Voronoi tessellation on a manifold, which are composed of simplex-shaped cells.
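To make the 2D case of Figure 18 concrete, the sketch below iterates the line minimization described above for a bilinear field specified by its four corner values: the field restricted to each line is fitted by a second order polynomial from sampled values, the lowest line minima define the next triangle, and the iteration stops when the points are close enough to each other. This is an illustrative sketch only (function names, tolerances and the example values are ours), not the production implementation.

```python
import numpy as np

def bilinear(f_corners, p):
    """Bilinear interpolation of corner values f_corners = [[f00, f01], [f10, f11]]
    at a point p = (x, y) of the unit square."""
    x, y = p
    (f00, f01), (f10, f11) = f_corners
    return (f00 * (1 - x) * (1 - y) + f10 * x * (1 - y)
            + f01 * (1 - x) * y + f11 * x * y)

def minimize_in_square(f_corners, edge_minima, tol=1e-4, n_sample=5, max_iter=50):
    """Iterate the line minimization of Appendix A (2D case).

    edge_minima : three (x, y) points, the minima of the field on the square edges.
    The bilinear field restricted to a line is a quadratic polynomial of the line
    parameter; it is fitted from sampled values and minimized on the segment, and
    the three lowest line minima define the next, smaller triangle."""
    pts = np.asarray(edge_minima, dtype=float)
    for _ in range(max_iter):
        candidates = []
        for a in range(3):
            for b in range(a + 1, 3):
                t = np.linspace(0.0, 1.0, n_sample)
                line = np.outer(1 - t, pts[a]) + np.outer(t, pts[b])
                vals = np.array([bilinear(f_corners, p) for p in line])
                c2, c1, _ = np.polyfit(t, vals, 2)        # quadratic restriction
                if c2 > 0:
                    t_min = np.clip(-c1 / (2 * c2), 0.0, 1.0)
                else:                                     # concave or flat: minimum at an end
                    t_min = 0.0 if vals[0] <= vals[-1] else 1.0
                p_min = (1 - t_min) * pts[a] + t_min * pts[b]
                candidates.append((bilinear(f_corners, p_min), p_min))
        candidates.sort(key=lambda c: c[0])
        pts = np.array([c[1] for c in candidates[:3]])
        if np.linalg.norm(pts - pts.mean(axis=0), axis=1).max() < tol:
            break
    return pts.mean(axis=0)

# Purely illustrative corner values and three starting points on the edges:
f_corners = [[1.0, 0.5], [0.8, 2.0]]
print(minimize_in_square(f_corners, [(0.0, 1.0), (1.0, 0.0), (0.5, 0.0)]))
```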
Appendix B Inter-skeleton pseudo-distance
The inter-skeleton pseudo-distance from one skeleton  to another skeleton  was defined in the main text as the probability distribution function (PDF) of the minimum of the distance from each segment of  to any segment of . In this appendix, we show how this measure can be interpreted using realizations of scale invariant Gaussian random fields (GRFs) with different power spectrum indices  (such that ) and different smoothing lengths . All the skeletons that we use were computed from  pixel realizations of GRFs, smoothed over a scale of  pixels or  pixels. These scales are defined as the width of the Gaussian kernel that we used to smooth the fields, and the value of  roughly corresponds, in number of pixels, to the smoothing scale we used in the main text, . A total of six different skeletons were computed (a minimal recipe for generating such smoothed scale-invariant GRFs is sketched after the list below):

-  and : skeletons computed from two realizations (GRF0 and ) of GRFs with spectral index , smoothed over a scale of  pixels.
-  and : skeletons computed from two realizations (GRF3 and ) of GRFs with spectral index , smoothed over a scale of  pixels.
- : this skeleton was computed from the field GRF0, smoothed on scale . The resulting skeleton was then translated by .
- : this skeleton was computed from the field GRF0, smoothed on scale  pixels.
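The smoothed scale-invariant GRFs used in this appendix can be generated with a standard FFT recipe, sketched below in 2D; the grid size, spectral indices and smoothing scale in the example are illustrative choices and not necessarily those used here.

```python
import numpy as np

def gaussian_random_field(ngrid, spectral_index, smoothing_pixels, seed=0):
    """2D scale-invariant GRF with P(k) ~ k**spectral_index, smoothed with a
    Gaussian kernel of width smoothing_pixels (all quantities in pixel units)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(ngrid)
    k = np.sqrt(kx[:, None]**2 + kx[None, :]**2)
    k[0, 0] = 1.0                                   # placeholder; the DC mode is zeroed below
    amplitude = k**(spectral_index / 2.0) * np.exp(-0.5 * (2 * np.pi * k * smoothing_pixels)**2)
    amplitude[0, 0] = 0.0
    white = np.fft.fft2(rng.standard_normal((ngrid, ngrid)))
    field = np.fft.ifft2(white * amplitude).real
    return field / field.std()

# Illustrative parameters only (grid size, indices and smoothing are not this appendix's values):
grf_n0 = gaussian_random_field(256, spectral_index=0.0, smoothing_pixels=6.0, seed=1)
grf_n3 = gaussian_random_field(256, spectral_index=-3.0, smoothing_pixels=6.0, seed=2)
```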
Figure 19 presents the different pseudo-distances between these skeletons,
. Figures 19(a) and 19(b) present the results obtained when comparing
uncorrelated fields (i.e. different realizations of GRFs). As expected
in that case, and
and the position of the mode is
about the smoothing length. One should also note that the mode
intensity differs between and , which can be explained by the
fact that in the latter case, small scale fluctuations are suppressed
together with smaller scale filaments, thus making it less probable
for a segment of one realization to be very close to a segment of the other realization. Figure 19(c) shows that these pseudo-distance
measurements make it possible to distinguish the different nature of
two skeletons. In fact, whereas  has filaments on all scales, only the larger scales are present in , which translates into an asymmetry between  and . Whereas in the first case, there is no
reason why every segment of should be close to a
segment of , the reciprocal is not true:  spreads over all scales and every segment of  should be as close as any other to a segment of
(hence the higher intensity of the mode for ). When comparing a skeleton
with fewer filaments to a skeleton with more
filaments, the intensity of the mode is thus expected to be higher for
than for .
This observation is confirmed by Figure 19(e), where  is compared to , which has small scale filaments that  does not have. But in that case, the two skeletons are correlated, as only the smoothing length changes. This results in a higher intensity of the mode of : the larger scale filaments are present in both skeletons. It also results in a shift in the position of the mode, which is located at a distance smaller than the smoothing length. Figure 19(d) illustrates the case of a simple translation of length half the smoothing length: in that case, both PDFs are identical and a very asymmetric, high-intensity mode is present at distance . Finally, it is also interesting to note that the comparison of  to  gives almost exactly the same result as the one for  to , and it is difficult to distinguish one from the other.
Appendix C Robustness of the fully connected skeleton
In order to investigate the robustness of the fully connected skeleton with respect to small changes in the underlying field, the following experiment is carried out. A given 2D white random field of size  is generated. It is then smoothed over 10 pixels, and its reference skeleton  is computed. A white random field whose amplitude is set by the chosen SNR is added to the reference field, and the corresponding skeleton, , is computed after smoothing over 10 pixels. The PDF of the pseudo-distances  and  is then calculated (see Appendix B). The distance at the maximum (the mode) of both PDFs remains unchanged for all the SNRs considered (), which demonstrates that the core of the skeleton is quite robust: the reference skeleton is always shadowed by its noisy counterpart. The amplitude of the PDF at its maximum is plotted in Figure 20. This amplitude is sensitive to the large-distance tail of the mismatch between the two skeletons, since the PDF is normalized. In short, within the network there is a small subset of filaments which are sensitive to any small variation of the field. For the vast majority of the network, the skeleton is globally only weakly affected by changes of the underlying field, so long as the amplitude of the change has an SNR above one. When the SNR drops below one, spurious filaments occur more and more. The discrepancy between the two plateaux at larger SNR reflects the fact that weaker filaments will occur somewhat randomly from one realisation to another, depending on very small details in the field.