Cosmic velocity–gravity relation in redshift space
Abstract
We propose a simple way to estimate the parameter β ≡ Ω_m^0.6 / b from three-dimensional galaxy surveys, where Ω_m is the nonrelativistic matter density parameter of the Universe and b is the bias between the galaxy distribution and the total matter distribution. Our method consists in measuring the relation between the cosmological velocity and gravity fields, and thus requires peculiar velocity measurements. The relation is measured directly in redshift space, so there is no need to reconstruct the density field in real space. In linear theory, the radial components of the gravity and velocity fields in redshift space are expected to be tightly correlated, with a slope given, in the distant observer approximation, by a simple analytic function of β.
We test this relation extensively using controlled numerical experiments based on a cosmological N-body simulation. To perform the measurements, we propose a new and rather simple adaptive interpolation scheme to estimate the velocity and the gravity fields on a grid.
One of the most striking results is that nonlinear effects, including ‘fingers of God’, mainly affect the tails of the joint probability distribution function (PDF) of the velocity and gravity fields: the region around the maximum of the PDF is dominated by the linear theory regime, both in real and redshift space. This is understood explicitly by using the spherical collapse model as a proxy for nonlinear dynamics.
Applications of the method to real galaxy catalogs are discussed, including a preliminary investigation of homogeneous (volume-limited) “galaxy” samples extracted from the simulation with simple prescriptions based on halo and substructure identification, in order to quantify the effects of the bias between the galaxy distribution and the total matter distribution, as well as the effects of shot noise.
keywords:
methods: analytical – methods: numerical – cosmology: theory – dark matter – large-scale structure of Universe.

1 Introduction
Analyses of the large-scale structure of the Universe provide estimates of cosmological parameters that are complementary to those from cosmic microwave background measurements. In particular, comparing the large-scale distribution of galaxies to their peculiar velocities enables one to constrain the quantity β ≡ Ω_m^0.6 / b. Here, Ω_m is the cosmological nonrelativistic matter density parameter and b is the linear bias of the galaxies that are used to trace the underlying mass distribution. This is so because the peculiar velocity field, v, is induced gravitationally and is therefore tightly coupled to the matter distribution. In the linear regime, this relationship takes the form (Peebles 1980)
(1)   v(r) = (Ω_m^0.6 / 4π) ∫ d³r′ δ(r′) (r′ − r) / |r′ − r|³
where δ denotes the mass density contrast and distances have been expressed in km s⁻¹. Under the assumption of linear bias, δ_g = b δ, where δ_g denotes the density contrast of galaxies, the amplitude of peculiar velocities predicted from the galaxy distribution depends linearly on β.
Density–velocity comparisons are done by extracting the density field from full-sky redshift surveys (such as the PSCz, Saunders et al. 2000; or the 2MRS, Erdoğdu et al. 2006), and comparing it to the observed velocity field from peculiar velocity surveys (such as the Mark III catalog, Willick et al. 1997; ENEAR, da Costa et al. 2000; and more recently SFI++, Masters et al. 2005). The methods for doing this fall into two broad categories. One can use equation (1) to calculate the predicted velocity field from a redshift survey, and compare the result with the measured peculiar velocity field; this is referred to as a velocity–velocity comparison (e.g., Kaiser et al. 1991; Willick & Strauss 1998). Alternatively, one can use the differential form of this equation, and calculate the divergence of the observed velocity field to compare directly with the density field from a redshift survey; this is called a density–density comparison (e.g., Dekel et al. 1993; Sigad et al. 1998). The advantage of density–density comparisons is that they are purely local, but they are significantly sensitive to shot noise because the divergence of the observed velocity field is estimated from the sparse velocity sample. The integral form of velocity–velocity comparisons makes them much less sensitive to such noise, but their nonlocal nature makes them sensitive to tides induced by fluctuations outside the survey volume (see, e.g., Kaiser & Stebbins 1991).
Still, the problem common to both types of comparison is that while equation (1) involves the density field in real space, the observed density field is given solely in redshift space. One approach to tackle this problem is to reconstruct the real-space density field from the redshift-space one. The transformation from the real-space coordinate, r, to the redshift-space coordinate, s, is
(2)   s = r + [v(r) · r̂] r̂
where r̂ = r/r; v(r) is the real-space velocity field, and velocities are measured relative to the rest frame of the CMB. Therefore, one has to correct galaxy positions for their peculiar velocities. To do so, equation (1) is used and a self-consistent solution for the density field in real space is usually obtained iteratively (Yahil et al. 1991). In the first iteration, to predict peculiar velocities according to equation (1), the real-space density field appearing on the r.h.s. of this equation is approximated by the redshift-space density field, and so on, until convergence. However, from equation (1) it is obvious that the amplitude of peculiar velocities depends on β, the parameter to be subsequently estimated. Therefore, to perform a density–velocity comparison self-consistently, one has to reconstruct the real-space density field for a range of different values of β. For example, Branchini et al. (1999) performed such reconstructions for 10 different values of β. Note that instead of this traditional algorithm, more sophisticated methods now rely on Euler–Lagrange action minimization (e.g., Peebles 1989; Shaya, Peebles & Tully 1995; Nusser & Branchini 2000; Phelps 2002; Phelps et al. 2006) or on the resolution of optimal assignment problems (e.g., Croft & Gaztañaga 1997; Frisch et al. 2002; Mohayaee et al. 2003; Mohayaee & Tully 2005).
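The core of such an iterative correction can be illustrated with a toy one-dimensional, plane-parallel sketch: given the redshift-space coordinate s = x + v(x) (distances in km s⁻¹) and a model velocity field, the real-space position is recovered by fixed-point iteration starting from x = s. The sinusoidal velocity model and all numerical values below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Toy 1D version of the iterative redshift-space correction in the
# spirit of Yahil et al. (1991): solve s = x + v(x) for x, starting
# from the guess x = s and updating with model velocities.

def real_space_position(s, v, n_iter=30):
    x = np.copy(s)          # first guess: real position = redshift position
    for _ in range(n_iter):
        x = s - v(x)        # correct positions with the model velocities
    return x

# illustrative velocity model (km/s); |dv/dx| < 1 guarantees convergence
v = lambda x: 80.0 * np.sin(2.0 * np.pi * x / 10000.0)

s = np.linspace(0.0, 10000.0, 11)     # redshift-space positions (km/s)
x = real_space_position(s, v)
# mapping the recovered positions back to redshift space recovers s
assert np.allclose(x + v(x), s, atol=1e-6)
```

In the real three-dimensional problem the velocity model itself is recomputed from the current density field via equation (1) at each iteration, which is why the whole procedure must be repeated for each trial value of β.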
Another approach, proposed by Nusser & Davis (1994; hereafter ND), is to perform the comparison directly in redshift space. ND derived the density–velocity relation in redshift space in the linear regime. Because velocity–velocity comparisons seem in practice more robust than density–density ones, they aimed at transforming this relation to an integral form, i.e., at solving for the velocity as a functional of the redshift-space density. Due to the radial character of redshift-space distortions, this turned out to be possible only via a modal expansion of the density and velocity fields in spherical harmonics. Using this expansion, ND were able to put constraints on β (ND; Davis, Nusser & Willick 1996). However, the approach based on the reconstruction of the real-space density field remained popular. Apparently, equation (1) is appealing by its simplicity, both in terms of reconstruction of the velocity field and of estimation of the parameter β.
This paper is devoted to finding an equivalent of equation (1) that would hold for redshift-space quantities, but would share its simplicity. Specifically, let us define the scaled gravity:
(3)   g(r) ≡ (1/4π) ∫ d³r′ δ_g(r′) (r′ − r) / |r′ − r|³
Under the assumption of linear bias, adopted here, g is proportional to the gravitational acceleration, and it can be directly measured from a 3D galaxy survey. Equation (1) then implies
(4)   v = β g
Now, let us assume that we measure the gravitational acceleration directly in redshift space:
(5)   g_s(s) = (1/4π) ∫ d³s′ δ_g^(s)(s′) (s′ − s) / |s′ − s|³
where δ_g^(s) denotes the density contrast of galaxies in redshift space. We will follow ND in adopting a natural definition of the redshift-space velocity field:
(6)   v_s(s) = v[r(s)]
What is the relation between v_s and g_s?
Equation (4) holds strictly in the linear regime. Nevertheless, numerical simulations (Cieciela̧g et al. 2003) have shown that it remains accurate to a few percent for fully nonlinear gravity and velocity fields. These results will be fully confirmed by the present work and can be explained by the fact that the velocity and gravity fields are dominated by long-wavelength, linear modes. Therefore, in deriving the redshift-space counterpart of equation (4), we will apply linear theory. Unfortunately, there is no local deterministic relation between v_s and g_s. However, as shown below, v_s and g_s are strongly correlated, so the mean relation will be a useful quantity. Only radial components of velocities are directly measurable, so we will be interested in the relation between the radial components v_r^s and g_r^s.
This paper is thus organized as follows. In § 2, we compute the properties of the joint probability distribution function (PDF) of the fields v_r^s and g_r^s in the framework of linear theory and in the distant observer limit. In the linear regime, this PDF is expected to be Gaussian, entirely determined by its second-order moments. The quantities of interest are ⟨(v_r^s)²⟩, ⟨(g_r^s)²⟩ and ⟨v_r^s g_r^s⟩: they all give an estimate of the expected ratio between g_r^s and v_r^s, and the differences between them can be used to compute the scatter in the relation, which will be shown to be small. The validity of our assumptions, in particular the distant observer approximation, is examined a posteriori in § 3. We also justify our choice of CMB rest frame redshifts, needed to avoid the so-called rocket effect (Kaiser 1987; Kaiser & Lahav 1988). In § 4, linear theory is tested against numerical experiments, both in real and redshift space. To do that we use a dark matter cosmological N-body simulation with high resolution, allowing us to probe the highly nonlinear regime. We propose a new and simple algorithm to interpolate the velocity and gravity fields on a grid from a distribution of particles. We address extensively a number of issues, such as the validity of the distant observer limit, edge effects, cosmic variance effects, effects arising from nonlinear dynamics, in particular so-called ‘fingers of God’ (FOG), and effects of dilution (the number of tracers used to probe the velocity and gravity fields). We use the spherical top-hat model to support our interpretations of the measurements (technical details are given in Appendix A). The effect of the bias is also examined briefly by extracting from the dark matter distribution two kinds of subsamples, one where each “galaxy” is identified with a dark matter halo, and the other where each “galaxy” is identified with a dark matter substructure. Finally, the main results of this work are summarized in § 5.
In this last section, we discuss observational issues such as discreteness effects, appropriate treatment of biasing, incompleteness, errors on peculiar velocity estimates, etc. These issues will be addressed in detail in a forthcoming work, where the method will be applied to real data.
2 Linear theory predictions
Linear regression of g_r^s on v_r^s yields
(7)   ⟨g_r^s | v_r^s⟩ = [⟨g_r^s v_r^s⟩ / ⟨(v_r^s)²⟩] v_r^s
The symbols ⟨···⟩ denote ensemble averaging. We can thus characterize the linear relation between gravity and velocity (gravity in terms of velocity) by its slope,
(8)   a₁ ≡ ⟨g_r^s v_r^s⟩ / ⟨(v_r^s)²⟩
Alternatively, one can study the inverse relation (velocity in terms of gravity). Linear regression then yields the slope ⟨v_r^s g_r^s⟩ / ⟨(g_r^s)²⟩, so the inverse slope is its reciprocal. To describe the linear velocity–gravity relation, we have thus at our disposal two estimators of its slope: the forward slope, a₁, and the reciprocal of the slope of the inverse relation,
(9)   a₂ ≡ ⟨(g_r^s)²⟩ / ⟨g_r^s v_r^s⟩
Due to the scatter, these two estimators are not equal:
(10)   a₂ = a₁ / r²
where
(11)   r ≡ ⟨v_r^s g_r^s⟩ / (σ_v σ_g)
is the cross-correlation coefficient of the velocity and gravity fields; here σ_v² ≡ ⟨(v_r^s)²⟩ and σ_g² ≡ ⟨(g_r^s)²⟩. The linear regression, although the best among all linear fits to a cloud of points, visually looks biased. For two correlated Gaussian variables, an unbiased slope of the isocontours (ellipsoids) of their joint PDF is given by the square root of the ratio of the variances of the two variables. Thus we have a third estimator of the slope,
(12)   a₃ ≡ σ_g / σ_v
It is easy to show that this estimator predicts values intermediate between the two previous ones: the forward slope equals r times it, while the reciprocal of the inverse slope equals it divided by r, and r ≤ 1.
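The three estimators and the identities relating them can be checked on a synthetic cloud of points; the correlated Gaussian variables below are generated with arbitrary illustrative parameters, not values from the paper.

```python
import numpy as np

# Numerical illustration of the three slope estimators of eqs. (8),
# (9), (12) and of the identity a2 = a1 / r^2 (eq. 10), on mock
# zero-mean correlated Gaussian variables v and g.

rng = np.random.default_rng(0)
n = 200_000
v = rng.standard_normal(n)
g = 1.3 * v + 0.15 * rng.standard_normal(n)   # tight linear relation + scatter

a1 = np.mean(g * v) / np.mean(v * v)            # forward slope (eq. 8)
a2 = np.mean(g * g) / np.mean(g * v)            # reciprocal of inverse slope (eq. 9)
a3 = np.sqrt(np.mean(g * g) / np.mean(v * v))   # PDF isocontour slope (eq. 12)
r = np.mean(g * v) / np.sqrt(np.mean(g * g) * np.mean(v * v))

assert a1 <= a3 <= a2                 # a3 lies between the two regressions
assert np.isclose(a2, a1 / r**2)      # eq. (10)
assert np.isclose(a3, np.sqrt(a1 * a2))
```

The last assertion shows that the isocontour slope is the geometric mean of the other two, which is why it lies between them.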
The second-order moments of the joint PDF of gravity and velocity, which appear in the above three estimators of its slope, are much easier to compute in Fourier space. We will adopt the distant observer limit (DOL), in which the line of sight is a fixed direction, say ẑ, and the Jacobian of the transformation from real to redshift space, equation (2), simplifies to
(13)   J = 1 + ∂v_z/∂z
The Fourier transform of the redshift-space velocity field, using the mapping (2), is, to linear order,
(14)   ṽ_s(k) = ṽ(k)
where ṽ(k) is the Fourier transform of the real-space velocity field. The linearized continuity equation yields ṽ(k) = i Ω_m^0.6 δ̃(k) k / k², where k ≡ |k| and δ̃ is the Fourier transform of the real-space density field, hence
(15)   ṽ_s(k) = i β δ̃_g(k) k / k²
From equation (5) we have
(16)   g̃_s(k) = i δ̃_g^(s)(k) k / k²
This equation looks similar to the preceding one, but here the Fourier transform of the redshift-space galaxy density field appears. Moreover, unlike the preceding equation, the above equation is exact.
Radial components of the redshift-space velocity and gravity fields are v_r^s = v_s · ẑ and g_r^s = g_s · ẑ, where ẑ is the (fixed) line of sight of the distant observer. From the conservation of the number of galaxies in real and redshift space, it is straightforward to write down an equation for the Fourier transform of the redshift-space galaxy density contrast. In the linear regime it reduces to the formula of Kaiser (1987),
(17)   δ̃_g^(s)(k) = (1 + β μ²) δ̃_g(k)
where μ ≡ k_z / k.
Therefore, we obtain
(18)   ṽ_r^s(k) = i β δ̃_g(k) μ / k
and
(19)   g̃_r^s(k) = i (1 + β μ²) δ̃_g(k) μ / k
The above pair of equations enables us to calculate the averages appearing in equations (8), (9), and (12). Specifically,
(20)   ⟨v_r^s g_r^s⟩ = β ∫ [d³k/(2π)³] [d³k′/(2π)³] (μ μ′ / k k′) (1 + β μ′²) ⟨δ̃_g(k) δ̃_g*(k′)⟩ e^{i(k − k′)·s}
For a homogeneous and isotropic random process, ⟨δ̃_g(k) δ̃_g*(k′)⟩ = (2π)³ δ_D(k − k′) P_g(k), where δ_D is Dirac’s delta and P_g is the power spectrum of the real-space galaxy density field. Performing the integral over k′ and over the angles yields
(21)   ⟨v_r^s g_r^s⟩ = β (1/3 + β/5) (2π²)⁻¹ ∫₀^∞ P_g(k) dk
Similarly,
(22)   ⟨(v_r^s)²⟩ = (β²/3) (2π²)⁻¹ ∫₀^∞ P_g(k) dk
Finally,
(23)   ⟨(g_r^s)²⟩ = (1/3 + 2β/5 + β²/7) (2π²)⁻¹ ∫₀^∞ P_g(k) dk
This yields
(24)   a₁ = (1 + 3β/5) / β
(25)   a₂ = (1 + 6β/5 + 3β²/7) / [β (1 + 3β/5)]
(26)   a₃ = (1 + 6β/5 + 3β²/7)^{1/2} / β
Equations (24)–(26) provide a way to estimate β by comparing gravitational accelerations to peculiar velocities of galaxies directly in redshift space. All three estimators predict a slope that is greater than the corresponding one for real-space quantities (1/β), the more so the bigger β. This is expected, since redshift distortions enhance linear density contrasts (eq. 17), and the amplitude of the distortions scales with β. We can therefore consider the factor 1/β as the one coming from real-space dynamics, and the additional factors involving β as redshift-space corrections.
As stated above, in redshift space, unlike in real space, the velocity–gravity relation is not deterministic even in linear theory, so the three estimators are not identical. However, the linear gravity and velocity fields are tightly correlated, so equations (24)–(26) yield values of the ratio between g_r^s and v_r^s that are very close to each other: e.g., for Ω_m = 0.3 (the value used in our numerical experiments, § 4) and no bias, the three estimators agree with each other to about one percent.
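This near-equality is easily verified numerically from the linear-theory expressions of the three slope estimators in terms of β (our reading of eqs. 24–26); the value β = 0.5 below is an illustrative choice close to the relevant one for Ω_m = 0.3 and b = 1.

```python
import numpy as np

# Evaluation of the three linear-theory slope estimators as a
# function of beta, following eqs. (24)-(26).

def slopes(beta):
    d = 1.0 + 6.0 * beta / 5.0 + 3.0 * beta**2 / 7.0
    a1 = (1.0 + 3.0 * beta / 5.0) / beta               # eq. (24)
    a2 = d / (beta * (1.0 + 3.0 * beta / 5.0))         # eq. (25)
    a3 = np.sqrt(d) / beta                             # eq. (26)
    return a1, a2, a3

a1, a2, a3 = slopes(0.5)
assert a1 < a3 < a2                    # ordering of the estimators
assert (a2 - a1) / a3 < 0.02           # they agree to about one percent
```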
To illustrate this point further, one can examine the scatter in the conditional average ⟨g|v⟩ (the mean g given v), where we have dropped the subscripts ‘r’ and superscripts ‘s’ for simplicity of notation. For v and g being Gaussian-distributed, ⟨g|v⟩ = (⟨gv⟩/⟨v²⟩) v, in agreement with linear regression, equation (7). The scatter in this relation, σ_{g|v}, is the square root of the conditional variance, ⟨g²|v⟩ − ⟨g|v⟩². The conditional variance is then equal to (1 − r²) σ_g², where σ_g² ≡ ⟨g²⟩ (see, for example, Appendix B of Cieciela̧g et al. 2003). Thus, for Gaussian velocity and gravity fields in redshift space we have
(27)   σ_{g|v} = (1 − r²)^{1/2} σ_g
where r is the cross-correlation coefficient defined in eq. (11). Note that the scatter is just one number, i.e., it is independent of the value of v. Equations (21)–(23) yield
(28)   r = (1 + 3β/5) / (1 + 6β/5 + 3β²/7)^{1/2}
In particular, for β = 1, r ≈ 0.987, and for β = 0.5, r ≈ 0.995: again, redshift-space velocity and gravity in the linear regime are (though not simply mutually proportional, as in real space) very tightly correlated. Inserting equation (28) into (27) yields
(29)   σ_{g|v} = [2β / (5√7)] [(2π²)⁻¹ ∫₀^∞ P_g(k) dk]^{1/2}
Hence, for β = 1, σ_{g|v} ≈ 0.16 σ_g, and for β = 0.5, σ_{g|v} ≈ 0.10 σ_g. This implies that the ‘signal to noise ratio’, ⟨g|v⟩/σ_{g|v}, of the estimate of a single galaxy’s peculiar velocity from its gravitational acceleration can be as high as ∼10. This is to be contrasted with the signal to noise ratio of the estimate of a galaxy’s peculiar velocity from its distance and redshift, which is typically below unity: the expected typical relative error in distance¹ translates, for a distant galaxy,² to a velocity error greater than typical peculiar velocities of galaxies. As a corollary, the intrinsic scatter in the redshift-space linear velocity–gravity relation is negligible compared to that introduced by the errors of measurements of peculiar velocities.
¹ See, for instance, Strauss & Willick (1995) for a review of issues related to peculiar velocity estimates.
² Distances being expressed in km s⁻¹, i.e., multiplied by the Hubble constant expressed in km/s/Mpc.
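The key Gaussian property used here, namely that the scatter of g at fixed v equals (1 − r²)^{1/2} σ_g independently of v, can be verified with a quick Monte Carlo experiment; the values of r and σ_g below are illustrative, not taken from the paper.

```python
import numpy as np

# Monte Carlo check of eq. (27): for zero-mean Gaussian v and g with
# cross-correlation r, the conditional scatter of g given v is
# sqrt(1 - r^2) * sigma_g, the same in every bin of v.

rng = np.random.default_rng(1)
n = 400_000
r, sigma_g = 0.99, 1.7          # illustrative values
v = rng.standard_normal(n)
g = sigma_g * (r * v + np.sqrt(1.0 - r**2) * rng.standard_normal(n))

expected = sigma_g * np.sqrt(1.0 - r**2)
for lo, hi in [(-2.0, -1.0), (-0.5, 0.5), (1.0, 2.0)]:
    sel = (v > lo) & (v < hi)
    resid = g[sel] - sigma_g * r * v[sel]       # g minus <g|v>
    assert abs(resid.std() - expected) < 0.01   # same scatter in each bin
```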
3 Validity of the distant observer limit in the rest frame of CMB
While equation (2) was written in the CMB rest frame, the choice of the reference frame for computing redshifts of galaxies can be more general,
(30)   s = r + [v(r) · r̂ − V(r̂)] r̂
where V(r̂) is the (angle-dependent) radial velocity of the origin of the system of coordinates (in the CMB rest frame).
The Jacobian of eq. (30) yields (Kaiser 1987)
(31)   1 + δ_s(s) = [1 + δ(r)] (r/s)² (1 + ∂u/∂r)⁻¹, with u ≡ v(r) · r̂ − V(r̂)
This equation is valid as long as the mapping (30) does not induce any shell crossing. In general, nontrivial singularities can appear in δ_s, even if δ is finite. However, in practice, since one always applies some additional smoothing to the data, δ_s remains finite. The small-r limit can still be problematic, as it corresponds to a singularity in the system of coordinates. If the velocity field is smooth enough, then in the neighborhood of the origin one can write
(32)   v(r) ≃ v(0) + O(r)
Choosing a spherical coordinate system such that the z axis is parallel to v(0) − V, the (r/s)² term in eq. (31) will create a singular surface of equation
(33)   r = −|v(0) − V| cos θ
that will concentrate at the origin, s = 0, in redshift space. Similarly, the ∂u/∂r term might become singular, but we see here that the situation near the origin is not different from what happens far away from it.
The singular behavior of the form (33) is expected to occur only in the neighborhood of the observer, but it might be problematic when estimating the gravitational acceleration. Inserting this singularity into eq. (5), we notice that it should be an issue only at small s, where it coincides with the Green function singularity (this is due to the finite mass of the singular surface). In other words, even though its small-s behavior is difficult to predict, the redshift-space gravity should not be significantly affected by such a singular behavior at distances sufficiently large from the observer. In the CMB rest frame, V = 0, and v(0) would correspond to the Local Group velocity, say V_LG ≈ 600 km/s (see e.g., Erdoğdu et al. 2006), so we would need s large enough compared to 6 Mpc. More specifically, let us consider the real-space sphere of radius V_LG. Its content is embedded in a sphere of radius ∼ 2V_LG in redshift space. The mass inside such a volume remains finite, but its internal distribution affects the gravity field at larger s in a nontrivial way. What matters is that the multipole contributions of order higher than the monopole (the substructures within this volume) have a negligible contribution to the gravity field. Using the wisdom from treecode simulation techniques (e.g., Barnes & Hut 1986; Barnes & Hut 1989), this amounts to requiring
(34)   2 V_LG / s ≲ θ, with θ ∼ 0.5 a typical treecode opening angle
According to that criterion, the effect of the singular behavior near the origin should be of little consequence beyond a few tens of Mpc.
If it is now supposed that s is indeed large enough, one can linearly expand eq. (31) to obtain (Kaiser 1987)
(35)   δ_s(s) = δ(r) − ∂u/∂r − 2u/r
(Note that, at linear order, one can assume s ≃ r.) The distant observer limit would consist in dropping the 2u/r term from this equation. However, as extensively discussed in Kaiser (1987) and Kaiser & Lahav (1988), this term is in fact non-negligible in the redshift-space gravitational acceleration, as it induces the so-called rocket effect, resulting in a large logarithmic divergence if V(r̂) ≠ 0. This justifies the choice of the CMB rest frame coordinate system, V = 0. Still, the remaining contribution of 2v_r/r, although zero on average, might introduce significant fluctuations in the large-scale redshift-space gravity field. In the linear perturbation theory framework, v_r/r does not correlate with either δ or ∂v_r/∂r. As a result, when computing the sum of the fluctuations of δ_s in a sphere of radius s in eq. (5), what matters is whether the fluctuations added by 2v_r/r are small, or not, compared to the fluctuations added by ∂v_r/∂r.
So let us estimate the ratio
(36)   R ≡ ⟨(2v_r/r)²⟩ / ⟨(∂v_r/∂r)²⟩
Calculations are similar to those of § 2, and one simply finds, in the linear regime,
(37)   R = (20 / 3r²) ∫₀^∞ P(k) dk / ∫₀^∞ k² P(k) dk
One can furthermore assume that the linear fields are smoothed, e.g., with a Gaussian window of size R_s. In that case one has to replace P(k) with P(k) exp(−k² R_s²) in eq. (37).³ For scale-free power spectra, P(k) ∝ kⁿ, one finds, using eq. (4.10b) of Bardeen et al. (1986),
(38)   ∫₀^∞ P(k) e^{−k²R_s²} dk / ∫₀^∞ k² P(k) e^{−k²R_s²} dk = 2 R_s² / (n + 1)
which gives
(39)   R = [40 / 3(n + 1)] (R_s / r)²
³ We suppose here that smoothing v_r/r is approximately equivalent to smoothing v_r prior to dividing by r, which should be reasonable.
This means that smoothing increases the relative contribution of the 2v_r/r term! In other words, smoothing makes the distant observer approximation worse.
The standard cold dark matter (ΛCDM) cosmology considered in the forthcoming numerical analyses assumes the nonrelativistic matter density parameter Ω_m = 0.3, the cosmological constant Ω_Λ = 0.7 and the Hubble constant h = 0.7. Using, for instance, the package of Eisenstein & Hu (1998) to compute the power spectrum with these cosmological parameters, one obtains numerically, in the absence of bias,
(40)  
(41)  
(42) 
Clearly, for a catalog depth of the order of a hundred Mpc, as considered below, we expect significant deviations from our theoretical predictions if the smoothing scale R_s is too large.
4 Numerical experiments
In this section we perform controlled numerical experiments to test the velocity–gravity relation, both in real and redshift space, on the dark matter distribution. These analyses extend the work of Cieciela̧g et al. (2003), who performed similar measurements but only in real space and on simulations using a pure hydrodynamic code approximating the dynamics of dark matter, namely the Cosmological Pressureless Parabolic Advection code of Kudlicki, Plewa & Różyczka (1996, see also Kudlicki et al. 2000).
This section is organized as follows. In § 4.1, we present a new algorithm to interpolate the velocity and the gravity fields on a grid, using an adaptive interpolating procedure inspired closely by smooth particle hydrodynamics (SPH). In § 4.2, we describe the N-body simulation set used in this work. In § 4.3, we discuss measurements on our dark matter samples and their diluted counterparts, while mock galaxy catalogs are addressed in § 4.4.
Since we assume a flat cosmology, Ω_m + Ω_Λ = 1, we adopt the approximation
(43)   f(Ω_m) ≃ Ω_m^{5/9}
for the linear growth rate f, as it is known to be slightly more accurate in that case than the traditional Ω_m^{0.6} fit (Bouchet et al. 1995). Finally, let us recall that ‘DOL’ means “distant observer limit”.
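The two fits are numerically close over the range of matter densities of interest; the sketch below compares them, assuming (our reading of the Bouchet et al. 1995 reference) that eq. (43) is the 5/9-exponent fit for flat Λ cosmologies.

```python
# Comparison of two common fits to the linear growth rate f(Omega_m):
# the traditional Omega_m^0.6 (Peebles 1980) and Omega_m^(5/9),
# slightly more accurate for flat Lambda cosmologies (assumed here
# to be the fit of eq. 43).

def f_peebles(omega_m):
    return omega_m ** 0.6

def f_flat(omega_m):
    return omega_m ** (5.0 / 9.0)

for om in (0.2, 0.3, 1.0):
    # the two fits agree to better than ~8 per cent over this range
    assert abs(f_flat(om) - f_peebles(om)) / f_peebles(om) < 0.08
```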
4.1 Algorithm used to estimate velocity and gravity fields
In real observations, the catalogs used to estimate cosmic velocity and gravity are in general different. In particular, the velocity field tracers are not necessarily representative of the underlying density field. Here we assume, for simplicity, that the same catalog is used to probe the velocity and gravity fields, and that all the objects in the catalog have the same weight or, from the dynamical point of view, the same mass. Also, since the gravitational force is of long range, it is necessary to estimate it in a domain large enough compared to the effective volume where the velocity–gravity comparison is actually performed. We assume here that this is indeed the case. In this paper, the estimation domain will simply be the simulation cube, while the comparison volume will be a subvolume included in this cube.
The basic idea of our method is to perform adaptive smoothing of the particle distribution to obtain a smooth velocity field and a smooth density field on a regular grid encompassing the estimation domain, while preserving as much information as possible. Additional smoothing with a fixed window, preferably Gaussian, can then be performed a posteriori in order, e.g., to reach the linear regime, since our analytic predictions in principle apply to that regime. We shall see below, however, that this additional step is not necessarily needed and can in fact complicate the analyses (see also § 3).
Whether we perform the measurements in real or redshift space does not change our approach: we assume that we have at our disposal a set of points with three-dimensional coordinates and a scalar velocity. The latter is either the radial velocity, when we consider redshift-space measurements, or one Cartesian coordinate of the velocity vector, when we consider real-space measurements or redshift-space measurements in the DOL approximation.
The main difficulty is to reconstruct a smooth velocity field on the sampling grid. Let us recall as well that the velocity field we aim to estimate is a purely Eulerian quantity, in other words, a mean flow velocity. In particular, what we aim to measure, in terms of dynamics, is something as close as possible to a moment of the phase-space distribution function f(x, u):
(44)   v(x) = ∫ f(x, u) u d³u / ∫ f(x, u) d³u
where the denominator is proportional to the density ρ, the source term of the Poisson equation used to estimate the gravitational potential φ, given by
(45)   Δφ = 4πG ρ
Note that equation (44) applies only to real space, but we shall come back to that below.
Since we aim to estimate velocity and density on a finite-resolution grid, a more sensible way of performing the calculations is to integrate equations (44) and (45) over a small cubic patch corresponding to a grid element, to reduce noise as much as possible and to make the calculation conservative (i.e., total mass and momentum are conserved). In practice, we do not perform this calculation exactly, but only approximately, using an approach inspired by smooth particle hydrodynamics (SPH; see, e.g., Monaghan 1992), as we now describe in detail.
In the SPH approach, each particle is represented as a smooth cloud of finite, varying size depending on the local density, i.e., on the typical distance between the particle and its N_SPH closest neighbors (which can be found quickly with, e.g., a standard KD-tree algorithm), where N_SPH is usually of the order of a few tens. As a result, a smooth representation of the density and velocity fields can be obtained at any point of space by summing up locally the contributions of all the clouds associated with each particle. With an appropriate choice of the SPH kernel, these functions can be easily integrated over each cell of the grid.
The problem with this approach is that it does not guarantee that the locally reconstructed density is strictly positive everywhere, which can leave regions where the velocity field is undefined. To solve that problem, we start from the grid points, which we treat as virtual particles for which we find the closest neighbors to define the SPH kernel associated with each grid site. However, one has to take into account the fact that the sought estimates should roughly correspond to an integral over a small cubic patch. In particular, all particles belonging to a grid site should participate in such an integral. This issue can be addressed in an approximate way as follows:

– Count and store, for each grid site, the number N of particles it contains;
– If N > N_SPH, then perform the SPH interpolation at the grid site as explained below, using the N particles contributing to it instead of the closest neighbors;
– If N ≤ N_SPH, then find for the grid site the N_SPH closest particles and perform the SPH interpolation at the grid site as explained below.
Given a choice of the SPH kernel, W(x) [which should be a monotonic function verifying W(0) = 1 and W(1) = 0], and a number of neighbors N_k, with N_k = N or N_SPH according to the procedure described above, the interpolation of a quantity A on a grid site is given by
(46)   Ã = Σ_{i=1}^{N_k} w_i A_i W[d_i / (2h)]
In this equation, A_i is the value of A associated with each particle i, h is half the distance to the furthest neighbor of the grid site among the N_k, d_i is the distance of the ith particle to the grid site, and w_i is a weight given to each particle such that the total contribution of every particle to all the grid sites is exactly unity:
(47)   w_i Σ_{grid sites} W[d_i / (2h)] = 1
In practice, the interpolated density thus reads
(48)   ρ̃ = Δ⁻³ Σ_{i=1}^{N_k} w_i m_i W[d_i / (2h)]
by taking A_i = m_i in equation (46), where m_i is the mass of each particle and Δ³ is the volume of a grid cell. An interpolated velocity reads
(49)   ṽ = p̃ / ρ̃
where p̃ is the interpolated momentum in the cell. It is obtained by taking A_i = m_i u_i in equation (46), where u_i is the velocity (the radial component, or one Cartesian coordinate) of particle i.
Note that there can be some particles that contribute to no grid site in equation (47) (in that case w_i is by definition not used), i.e., which do not contribute at all to the interpolation. In the practical measurements described later, this happens for at most a very small fraction of the particles, typically of the order of 0.1 percent, and affects the results insignificantly. There is, however, another noticeable defect in our method, due to the unsmooth transition between the two interpolation schemes when the number of particles per grid site becomes larger than N_SPH, which affects the interpolated density in a nontrivial way; but again, it does not have any significant consequences for the present work.
In practice, we use the same value of N_SPH for all the measurements described below, together with the following spline for the SPH kernel (Monaghan 1992):
(50)   W(x) = 1 − 6x² + 6x³ for 0 ≤ x ≤ 1/2;   W(x) = 2(1 − x)³ for 1/2 ≤ x ≤ 1;   W(x) = 0 otherwise.
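A minimal sketch of this interpolation scheme is given below, under stated simplifications: the N > N_SPH branch for crowded cells is omitted (every grid site always uses its n_sph nearest particles), the value n_sph = 16 and all data are arbitrary, and the function names are ours. It relies on scipy's periodic KD-tree for the neighbor searches.

```python
import numpy as np
from scipy.spatial import cKDTree

def spline_kernel(q):
    """Cubic spline of eq. (50), normalized so that W(0) = 1, W(1) = 0."""
    w = np.zeros_like(q)
    a = q < 0.5
    w[a] = 1.0 - 6.0 * q[a]**2 + 6.0 * q[a]**3
    b = (q >= 0.5) & (q < 1.0)
    w[b] = 2.0 * (1.0 - q[b])**3
    return w

def interpolate(pos, mass, vel, box, ngrid, n_sph=16):
    tree = cKDTree(pos, boxsize=box)              # periodic neighbor search
    x = (np.arange(ngrid) + 0.5) * box / ngrid
    sites = np.array(np.meshgrid(x, x, x, indexing="ij")).reshape(3, -1).T
    d, idx = tree.query(sites, k=n_sph)           # distances to n_sph neighbors
    h = 0.5 * d[:, -1:]                           # half distance to furthest one
    w = spline_kernel(d / (2.0 * h))              # kernel values W[d_i/(2h)]
    # per-particle normalization (eq. 47): each particle contributes unity
    norm = np.zeros(len(pos))
    np.add.at(norm, idx, w)
    wn = w / np.where(norm[idx] > 0.0, norm[idx], 1.0)
    cell = (box / ngrid) ** 3
    rho = (wn * mass[idx]).sum(axis=1) / cell             # eq. (48)
    mom = (wn * mass[idx] * vel[idx]).sum(axis=1) / cell  # momentum
    v = np.where(rho > 0.0, mom / np.maximum(rho, 1e-300), 0.0)  # eq. (49)
    return rho.reshape(ngrid, ngrid, ngrid), v.reshape(ngrid, ngrid, ngrid), norm

rng = np.random.default_rng(2)
npart, box, ngrid = 4000, 1.0, 8
pos = rng.random((npart, 3)) * box
mass = np.ones(npart)
vel = rng.standard_normal(npart)     # one scalar velocity per particle
rho, v, norm = interpolate(pos, mass, vel, box, ngrid)
# by construction, the mass of every contributing particle is conserved
assert np.isclose(rho.sum() * (box / ngrid) ** 3, mass[norm > 0.0].sum())
```

The final assertion illustrates the conservative character of the weighting (47)–(48): summing the interpolated density over the grid recovers exactly the total mass of the particles that contribute to at least one grid site.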
Once the density field is interpolated on the grid, it is easy to estimate the gravitational potential from it by solving the Poisson equation in Fourier space, keeping in mind that the edge of the volume where the velocity–gravity relation is tested should be sufficiently far away from the edges of the estimation domain, which is itself included in the grid. It would be beyond the scope of this paper to discuss other problems related to incompleteness or edge effects. The main one is related to the uncertainties in the gravitational potential induced by the obscuration due to our own Galaxy. While this can certainly be an issue, other incompleteness problems, such as segregation in luminosity, can be addressed by giving the proper weight (or mass) to the galaxies in the catalog. This of course requires strong assumptions about the bias, and can work only for populations of not too bright galaxies.
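The Fourier-space step can be sketched as follows for a periodic grid, writing the scaled gravity of eq. (5) directly as g(k) = i δ(k) k / k² (any overall normalization being absorbed into the definition of g); the grid size and the plane-wave test are illustrative choices of ours.

```python
import numpy as np

# Minimal periodic FFT solver for the (scaled) gravity field from a
# density contrast sampled on a cubic grid: g(k) = i * delta(k) * k / k^2.

def gravity_from_delta(delta, box):
    n = delta.shape[0]
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0              # avoid division by zero; the k = 0
    dk = np.fft.fftn(delta)        # mode exerts no force anyway
    dk[0, 0, 0] = 0.0
    return [np.real(np.fft.ifftn(1j * dk * k / k2)) for k in (kx, ky, kz)]

# check against a single plane wave: delta = sin(k0 x) -> g_x = cos(k0 x)/k0
n, box = 32, 1.0
x = np.arange(n) * box / n
k0 = 2.0 * np.pi / box
delta = np.sin(k0 * x)[:, None, None] * np.ones((1, n, n))
gx = gravity_from_delta(delta, box)[0]
expected = (np.cos(k0 * x) / k0)[:, None, None] * np.ones((1, n, n))
assert np.allclose(gx, expected, atol=1e-10)
```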
Note finally that, as explained before, we perform our SPH interpolation “naively”, whether we are working in real space or in redshift space. The fact that we use SPH interpolation in redshift space is sufficient for our purpose as long as we are in the DOL limit, although it no longer corresponds exactly to a simple moment of the Vlasov equation. However, the interpolation becomes somewhat questionable when the assumption of the DOL is dropped: the nature of the interpolation changes with distance from the observer. In particular, even if the smoothing kernel were fixed, projections (e.g., passing from the velocity vector to a one-dimensional quantity such as the radial velocity) and smoothing would no longer commute. The same problem arises for the commutation between the calculation of the radial part of the gravitational force and smoothing. It is therefore necessary to carefully check that the simplistic nature of our interpolation does not introduce any systematic bias in the measurements when performing them in redshift space while relaxing the approximation of a distant observer.
4.2 The N-body simulation set
We performed a high-resolution simulation using the adaptive mesh refinement (AMR) code RAMSES (Teyssier 2002). As already mentioned, the cosmology considered here assumes Ω_m = 0.3, Ω_Λ = 0.7 and h = 0.7. Initial conditions were set up using the Zel’dovich approximation (Zel’dovich 1970) to perturb a set of particles placed on a regular grid pattern, so as to generate initial Gaussian fluctuations with a standard ΛCDM power spectrum. To do that, we used the COSMICS package of Bertschinger (1995). The simulation follows the dark matter particles on the AMR grid, initially regular, in a periodic cube. Additional refinement is allowed at runtime: if cells contain more than 40 particles, they are divided using the standard AMR technique (with a maximum of 7 levels of refinement). Note finally, for completeness, that the normalization of the amplitude of initial fluctuations was chosen such that the variance of the density fluctuations in a sphere of radius 8 h⁻¹ Mpc, extrapolated linearly to the present time, was given by the adopted value of σ₈.
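The Zel’dovich step can be illustrated with a one-dimensional toy sketch, in which particles on a regular grid q are displaced by a field ψ(q), with the linear velocity proportional to the displacement; everything below (mode content, amplitudes, toy units) is invented for illustration, the actual initial conditions having been produced with COSMICS.

```python
import numpy as np

# 1D sketch of Zel'dovich initial conditions: x = q + psi(q) (the
# growth factor is absorbed into psi), v proportional to psi.

n, box = 256, 100.0
q = (np.arange(n) + 0.5) * box / n            # unperturbed grid positions
rng = np.random.default_rng(3)

# small random displacement field built from a few Fourier modes
psi = np.zeros(n)
for m in (1, 2, 3, 5):
    k = 2.0 * np.pi * m / box
    psi += rng.normal(0.0, 0.3) / m * np.sin(k * q + rng.uniform(0.0, 2.0 * np.pi))

f, a, H = 0.5, 1.0, 1.0                       # growth rate, expansion factor, Hubble (toy units)
x = (q + psi) % box                           # Zel'dovich mapping
v = a * H * f * psi                           # linear peculiar velocity

assert x.shape == v.shape == (n,)
assert np.all((x >= 0.0) & (x < box))
```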
4.3 Measurements on the dark matter distribution
From our RAMSES simulation, we extracted a number of dark-matter samples, as described in detail in Table 1, namely:
 (i)

A high resolution grid sample of size for testing the velocity–gravity relation in real space;
 (ii)

The analog of (i) for testing the velocity–gravity relation in redshift space, using the DOL approximation;
 (iii) The analog of (ii), but without using the DOL, for testing the validity of this approximation. In that case the radial coordinate of the velocity and gravity fields was estimated using an observer at rest at the center of the simulation box. The comparison of velocity to gravity was performed in a sphere of radius 70 Mpc to avoid edge effects due to the loss of periodicity while projecting in redshift space;
 (iv) A set of 125 low resolution grids () in redshift space, for estimating (at least partly) cosmic variance effects and the effects of structures near the observer. These samples were generated by locating the observer in the simulation box on a regular grid of size . Again, in these samples, the velocity–gravity relation was tested in a volume of radius 70 Mpc centered on the observer (exploiting the periodic nature of the simulation box).
Since the catalogs we consider in point (iv) represent a significant fraction of the simulation volume, the effects of cosmic variance are likely to be underestimated.
Samp.  Content  Size  Measured from:  Comment  

(i)  real space  1  512  0.103  0.157  0.240  all, no smoothing  
0.251  0.257  0.264  all, smoothed  
0.299  0.303  0.307  1.5 isocontour, no smoothing  
0.295  0.297  0.299  1.5 isocontour, smoothed  
(ii)  DOL redshift space  1  512  0.143  0.280  0.578  all, no smoothing  
0.206  0.223  0.242  all, smoothed  
0.296  0.303  0.311  1.5 isocontour, no smoothing  
0.274  0.279  0.284  1.5 isocontour, smoothed  
(iii)  redshift space  1  512  0.098  0.287  0.987  all, no smoothing  
0.167  0.197  0.231  all, smoothed  
0.316  0.332  0.349  1.5 isocontour, no smoothing  
0.269  0.284  0.300  1.5 isocontour, smoothed  
(iv)  redshift space  125  128  0.115  0.280  0.817  all , no smoothing  
0.023  0.030  0.269  error from dispersion  
0.177  0.200  0.226  all , smoothed  
0.022  0.021  0.023  error from dispersion  
0.283  0.294  0.304  1.5 isocontours , no smoothing  
0.039  0.040  0.041  error from dispersion  
0.255  0.262  0.270  1.5 isocontours , smoothed  
0.031  0.032  0.034  error from dispersion 
We now perform a visual inspection of the fields, followed by measurements of the joint PDF of velocity and gravity, first in real space, then in redshift space.
4.3.1 Visual inspection
Figure 1 shows the density, the gravity field and the velocity field in a thin slice extracted from the simulation, both in real space and in the DOL redshift space. Despite our interpolation procedure, some minor discreteness artifacts remain visible on the density field. One can notice, for example, a few underdense regions where the initial grid pattern, distorted by large scale dynamics, is still present. Such artifacts do not show up on the smoother coordinate of the gravity field. On the other hand, the latter seems to suffer from a few aliasing defects in real space, appearing as spurious vertical lines in the vicinity of deep potential wells. We did not try to understand these aliasing effects in detail, because they have negligible impact on the measurements. (Footnote 4: These aliasing effects might be related to some minor defect in the particular Fourier transform algorithm we are using (Teuler 1999), but this hypothesis seems to be contradicted by accuracy tests performed on this algorithm (Chergui 2000). A more sensible explanation is that these effects are induced by the way we interpolate the density field, in particular by the transition between the and the regimes, combined with the fact that our Fourier Green function for computing the gravitational acceleration is simply proportional to , without additional filtering.) It is interesting to notice the nice agreement between the coordinate of the velocity field and the coordinate of the gravity field in underdense regions. These regions dominate the velocity–gravity statistics, since our approach is volume weighted. Note the particular features in the velocity field associated with the filaments in the density distribution, as well as the 'fingers of God' (FOG) in redshift space. These FOG, which are mainly associated with dark matter halos, are expected to induce particular properties in redshift space statistics:

First, and this is a straightforward consequence of passing from real space to redshift space, there is an 'inversion' effect of the velocity field inside FOG. In other words, inside a finger of God, the variations of the velocity field are opposite to what happens in its nearby environment. This can be explained in the following way. Inside a halo, which can, in real space, be considered as a point-like structure in first approximation, there are particles with positive velocities and particles with negative velocities. Assume, to simplify, that this halo has zero center-of-mass velocity. In redshift space, it will look like an elongated structure. Particles in this structure with positive velocity will be above the center of mass, and particles with negative velocity will be below the center of mass. As a result, one expects, for the coordinate of the velocity, the finger of God corresponding to this halo to have positive velocities (towards light color on the bottom right panel of Fig. 1) above the center of the halo and negative velocities (towards dark color) below the center of the halo. This is indeed what we observe. However, in the nearby environment of the halo, the situation is somewhat opposite. Indeed, the halo is expected to lie in a filament, which itself represents a local potential well. If this well is not strong enough to induce shell crossing while passing from real to redshift space, the general trend of the velocity field is not modified compared to real space: negative sign (towards dark color) above the filament, positive sign below it. If the filament corresponds to a potential well deep enough for shell crossing to occur, then the same effect as for FOG is expected, as can be noticed on the bottom right panel of Fig. 1 for the largest filaments.

Second, due to the FOG stretching, halos are more elongated in redshift space, and a natural consequence is that the corresponding potential well is shallower. That is why the coordinate of the gravitational force is less contrasted in the middle right panel of Fig. 1 than in the middle left one.
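The inversion effect described in the first point can be made explicit with a one-line toy model. Here the halo rest frame, the 300 km/s internal dispersion and H = 100 km/s/Mpc are illustrative assumptions, not values taken from the simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
H = 100.0  # Hubble constant in km/s/Mpc (illustrative, h = 1 units)

# Toy halo at rest: all particles share the same real-space line-of-sight
# position but carry internal (virial) velocities of both signs.
z_real = np.zeros(1000)               # halo centre taken as LOS origin
v_los = rng.normal(0.0, 300.0, 1000)  # internal velocity dispersion ~ 300 km/s

# Redshift-space coordinate in the distant observer approximation: s = z + v/H
s = z_real + v_los / H

# The point-like halo becomes an elongated finger of God, and the velocity
# field along it is 'inverted': particles above the centre (s > 0) are
# exactly those with positive line-of-sight velocity.
print(np.all(v_los[s > 0] > 0) and np.all(v_los[s < 0] < 0))
```

The extent of the finger is of order the velocity dispersion divided by H, i.e. a few Mpc in this toy setup.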
Still, we can see that, both in real and in redshift space, the velocity–gravity relation is in quite good agreement with the expectation from linear theory. Indeed, again, our measurements are volume weighted, hence dominated by underdense regions. The color scale of the four bottom panels of Fig. 1 has been chosen such that, if linear theory applies, the same color pattern should be found for the gravity and velocity fields; this is the case at first glance, except in the densest regions, corresponding to halos or rich filaments.
Finally, similarly to the right column of panels of Fig. 1, Fig. 2 displays the density, the gravity field and the velocity field in redshift space, but without using the DOL. Once the radial nature of the projection is taken into account, it is clear that the conclusions of the previous discussion remain unchanged, at least at the qualitative level.
4.3.2 Quantitative comparison in real space
Figure 3 shows in grey scale the measured joint probability distribution function (PDF) of the coordinate of the gravity and velocity fields extracted from the real space sample (i). The striking result is that the regions of best likelihood (larger values of the PDF, darker places) match very well the prediction given by linear theory (thick solid line), even in the highly nonlinear regime (upper panel). This remarkable property is mainly related to our volume-weighted approach: results are mainly influenced by underdense regions, which are weakly evolved from the dynamical point of view and are expected to match linear theory predictions well. This, and the 'propeller' shape of the bivariate PDF, can be understood in more detail by using the spherical top-hat model as a proxy of nonlinear dynamics. In that case, up to shell crossing, the velocity–gravity relation reads approximately
(51) 
(this can easily be derived from Bernardeau 1992; 1994), where is the distance from the center of a top-hat fluctuation (in ). This equation is valid inside the fluctuation, which can be overdense (negative ) or underdense (positive ). One can then imagine, to simplify, the density field as a patchwork of spherical fluctuations, corresponding to a set of curves given by Equation (51), as shown in the top panel of Fig. 3. Note that, for the picture to be correct, we should take into account the fact that we are using only the coordinate of the fields, and , where is the angle between the axis and the radial vector . In the top panel of Fig. 3, we consider the cases km/s. Each curve has a stopping point corresponding to the maximum possible value of , , which reflects the fact that there is an expected upper bound on the expansion speed of voids. This property prevents the upper left and lower right quadrants of the velocity–gravity diagram from being populated too far from the linear theory prediction. On the other hand, overdense fluctuations tend to populate the upper right and lower left quadrants, above and below the linear theory prediction, respectively. Furthermore, since the low regime converges to linear theory, all the curves corresponding to Equation (51) superpose in that regime, creating a 'caustic' of best likelihood near the maximum of the joint PDF, which explains the very good agreement with the linear expectation in that region. Thanks to the spherical top-hat model, we thus understand both the 'propeller' shape of the bivariate PDF and the remarkable agreement with the linear theory prediction near its maximum, even in the highly nonlinear regime (see also Cieciela̧g et al., 2003). The arguments developed here are oversimplified, but they capture the main features of the dynamics of large scale structures prior to shell crossing in real space.
Beyond shell crossing, a mixing effect tends to decorrelate velocity from gravity, implying a widening of the bivariate PDF and even larger tails in the upper right and lower left quadrants. However, such an effect does not significantly affect the region of best likelihood.
A straightforward consequence of the above discussion is that the joint PDF of gravity and velocity is substantially non-Gaussian due to nonlinear contributions to the dynamics, as supported by the examination, in the middle panel of Fig. 3, of the 1.5 elliptic isocontour of the Gaussian distribution with the same second order moments as the measured PDF. As a result, the direct measurement of the parameter from the moments of the joint PDF using linear theory predictions is biased to lower values, due to the propeller shape of the PDF. This is illustrated in the middle panel of Fig. 3 by the dotted line, which gives the velocity–gravity relation obtained from the second order moments of the PDF, , while the two dashed lines correspond to the conditional averages and . In the linear regime, recall from § 2 that these three quantities should be equal. Due to nonlinear effects, the relation between gravity and velocity is no longer deterministic, , and is underestimated, leading to an effective bias, i.e. an underestimated value of , as shown in Table 1. Note that, as expected, additional smoothing of the fields with a Gaussian window of radius 10 Mpc helps to reduce non-Gaussian features as well as this effective bias, and makes the overall relation between gravity and velocity tighter.
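The discrepancy between the three moment-based estimators when the relation is not deterministic can be checked on a toy bivariate Gaussian; the slope and correlation coefficient below are arbitrary illustrative values, not measurements from the simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta_true, rho = 0.51, 0.9  # illustrative slope and g-v correlation coefficient

# Correlated (g, v) pairs: decorrelating noise makes the g-v relation stochastic
g = rng.standard_normal(n)
v = beta_true * (rho * g + np.sqrt(1.0 - rho**2) * rng.standard_normal(n))

slope_sym = np.sqrt(np.mean(v**2) / np.mean(g**2))  # sqrt(<v^2>/<g^2>)
slope_v_g = np.mean(g * v) / np.mean(g**2)          # slope of <v|g>
slope_g_v = np.mean(v**2) / np.mean(g * v)          # slope implied by <g|v>

print(slope_v_g, slope_sym, slope_g_v)
# For rho < 1 the three estimators split:
# slope_v_g = rho * slope_sym  <=  slope_sym  <=  slope_g_v = slope_sym / rho
```

Only a deterministic relation (correlation coefficient equal to one) makes the three estimators coincide, which is the situation linear theory predicts.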
However, Fig. 3 calls for a more sophisticated way of measuring the velocity–gravity relation than simply using the moments of the joint PDF directly. Since the region around the maximum agrees rather well with the linear prediction, this suggests estimating the moments from the PDF only within that region. To perform this exercise, we selected the 68-percent PDF isocontour, , such that
\iint_{P(g,v) \geq P_c} P(g,v)\, \mathrm{d}g\, \mathrm{d}v = 0.68 \qquad (52)
This corresponds roughly to a contour in the Gaussian case. The PDF is then set to zero outside the contour of best likelihood. From this truncated PDF, the moments are estimated again, leading to a much better estimate of , agreeing at the few percent level with the true value, as shown in Table 1, regardless of whether additional Gaussian smoothing with a Mpc size window is performed on the fields or not. Furthermore, the estimators , and now differ only slightly from each other, reflecting the narrowness of the region of best likelihood around the linear expectation.
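A minimal sketch of this truncation procedure on synthetic data follows; the linear slope, the outlier fraction, and the sign-flip model of the nonlinear tails are illustrative assumptions, not the paper's measured PDF:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
slope = 0.51  # illustrative linear-theory slope

# Mock joint (g, v) sample: a tight linear core plus a small fraction of
# points with flipped velocity sign, mimicking nonlinear tails that bias
# the global moments low.
g = rng.standard_normal(n)
v = slope * g + 0.05 * rng.standard_normal(n)
out = rng.random(n) < 0.05
v[out] *= -1.0

# 2D histogram as an estimate of the joint PDF
P, ge, ve = np.histogram2d(g, v, bins=100)

# Threshold P_c such that bins with P >= P_c enclose ~68% of the probability
order = np.sort(P.ravel())[::-1]
cum = np.cumsum(order) / order.sum()
P_c = order[np.searchsorted(cum, 0.68)]

# Keep only points falling in best-likelihood bins, then re-estimate the slope
ig = np.clip(np.digitize(g, ge) - 1, 0, 99)
iv = np.clip(np.digitize(v, ve) - 1, 0, 99)
keep = P[ig, iv] >= P_c

slope_all = np.mean(g * v) / np.mean(g**2)
slope_core = np.mean(g[keep] * v[keep]) / np.mean(g[keep] ** 2)
print(slope_all, slope_core)  # the truncated estimate lies much closer to 0.51
```

The tails fall in low-density bins and are excluded by the 68-percent contour, so the truncated moments recover the slope of the linear core.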
4.3.3 Quantitative comparison in redshift space
Figure 4 shows the same measurements for sample (ii), in the DOL redshift space. The same striking result as in real space holds: the region of maximum likelihood agrees very well with the linear theory prediction, which gives a velocity–gravity slope larger than in real space because of the enhancement of large scale density contrasts due to projection in redshift space (symbol LSRD – Large Scale Redshift Distortion – on the left panel).
However, a noticeable new feature of the PDF is visible in the left panel of Fig. 4, in addition to the propeller shape: the joint PDF now presents tails in the directions orthogonal to the maximum likelihood domain. Note that these tails tend to disappear with smoothing, which then makes the bivariate PDF look very much like the real space one.
This new feature is due to the finger-of-God effects already discussed at length in § 4.3.1. These effects have two main consequences. First, the gravitational potential is less contrasted at small scales due to the elongated nature of fingers of God, which reduces the amplitude of the gravitational acceleration inside them and in their neighborhood. Second, there is an inversion effect which changes the sign of the velocity. As a consequence, some points that were, in real space, in the upper right quadrant are moved to the upper left quadrant, as indicated by the arrow in the left panel of Fig. 4. These points create a tail in the upper left quadrant; similarly, some points that lay in the lower left quadrant contribute to the tail in the lower right quadrant.
In fact, just as in real space, the results obtained in redshift space can be interpreted qualitatively using the top-hat spherical collapse model, as detailed in Appendix A. Interestingly, in the linear regime, the top-hat model gives
(53) 
a value significantly smaller than what is obtained from Eqs. (24), (25) and (26): equation (53) gives instead of . This disagreement with the statistical expectation from linear theory is explained in Appendix A. However, despite the limitations of the spherical top hat model, the arguments developed previously in the real space case to explain why the linear regime dominates the most likely part of the joint PDF still hold.
Thanks to finger-of-God effects, if no additional smoothing is applied to the interpolated fields, the measured joint PDF is now much more symmetric about the linear prediction than in real space. As a consequence, the slope obtained from the direct measurement of (dotted line in the left panel of Fig. 4) now agrees well with linear theory, while we still have . However, when additional Gaussian smoothing is performed, the fingers of God tend to be diluted. This implies, in Fig. 4, a clockwise rotation of the upper left tail towards the upper right and of the lower right tail towards the lower left. As a consequence, with our Mpc scale Gaussian smoothing, one converges to a behavior similar to that obtained in real space, with the propeller effect significantly biasing the overall slope, implying again that is biased low. This bias is more pronounced in redshift space than in real space, because some remnant of the antidiagonal effect persists, depending on the level of smoothing. Of course, smoothing at larger scales would restore better agreement with the linear theory prediction.
To reduce the bias, one can repeat the exercise of measuring the slope of the velocity–gravity relation by selecting the region of best likelihood, as in Equation (52). The agreement with the linear theory prediction is improved as expected, and the corresponding measured value of matches the real value very well, as illustrated by Table 1. However, additional smoothing tends to mix the nonlinear finger-of-God effects with linear features, contaminating the region of best likelihood. As a result, the measured value of is slightly biased to lower values ( instead of ), and changing the likelihood contour selection does not improve the results significantly. This mixing effect brought by smoothing suggests that it might in fact be unwise to perform additional smoothing of the interpolated fields. Even though smoothing at sufficiently large scales brings better overall agreement with linear theory, it makes the measurements much more sensitive to finite volume and edge effects, and it is furthermore not needed here, since the linear theory regime always dominates the region of best likelihood. Besides, we showed in § 3 that smoothing makes the DOL approximation worse (though the approximation is, by construction, valid for the sample currently under consideration, (ii)).
The same measurements, shown in Fig. 5, are now performed on sample (iii), without assuming that the observer is (infinitely) remote. Qualitatively, all the conclusions derived from the analysis of Fig. 4 still hold. The main difference is that the measurements are noisier, due to the size of the sampled volume, now 5 times smaller. Therefore, the value of recovered from the measurements of the moments in the region of best likelihood of the joint PDF of the unsmoothed fields is good only at the ten percent level ( instead of , see Table 1). At this level of accuracy, we find that the linear prediction, which was derived in the DOL approximation, agrees well with the measurements. To improve the quality of this comparison, Figure 6 gives the same scatter plots as in Fig. 5,
but for sample (iv), i.e. after coadding the contributions of 125 different observers. The value of derived from the best likelihood region of the left panel of Fig. 6 now agrees at the few percent level with the true value of (the estimated value is , see Table 1), showing that the distant observer limit is a good enough approximation for deriving the linear prediction. Note again the bias to lower values () in the measured value of brought by additional Gaussian smoothing with a 10 Mpc Gaussian window. This bias is more pronounced than in the DOL sample (ii), certainly at least partly because, as discussed in § 3, deviations from the DOL limit are no longer negligible for such a smoothing scale, given the sample depth, and they add to the mixing between linear and nonlinear features discussed above. It could also come partly from the edge effect discussed in the caption of Fig. 6.
The dispersion among the 125 different observers leads to a typical error on of the order of , suggesting that the errors related to the choice of the position of the observer – which probes the space of configurations for the singularity discussed in § 3 – are of the order of 10 percent for our radius catalog. These errors include cosmic variance effects, but the latter are probably underestimated, because our spherical redshift samples represent a rather significant fraction of the total simulation volume.
Nevertheless, these measurements illustrate the relative robustness, in that respect, of our velocity–gravity estimator for deriving from a large-scale galaxy survey. However, we reiterate that, as already mentioned at the beginning of § 4.1, the volume used to compute the gravity field should be significantly larger than the actual volume used to perform the velocity–gravity comparison. The present analyses assume this is the case, as is indeed achieved by most recent three dimensional galaxy catalogs such as the 2MRS, for which the median redshift corresponds to a half-depth of Mpc (Erdoğdu et al. 2006).
4.3.4 Effects of dilution
Up to now, we have measured the velocity–gravity relation in very rich catalogs, where the number of objects was so large that discreteness effects could be considered negligible. In real galaxy catalogs, the number of objects is much smaller, particularly for tracers of the velocity field. Before addressing the issue of biasing between the galaxy distribution and the dark matter distribution, we therefore examine pure discreteness effects by diluting our dark matter samples. This dilution not only brings a shot noise contribution, it also increases the overall level of smoothing performed by our interpolation procedure.
To quantify as accurately as possible the biases induced by discreteness in the mock "galaxy" catalogs considered in the next section, we randomly dilute our N-body sample into two kinds of subsamples (see Table 2):
Samp.  Content  Size  Measured from:  Comment  

(D1)  50000, real space  125  128  50000  0.163  0.213  0.278  all , no smoothing  
0.004  0.006  0.008  error from dispersion  
0.237  0.252  0.268  all , smoothed  
0.007  0.008  0.009  error from dispersion  
0.283  0.300  0.319  1.5 isocontour , no smoothing  
0.011  0.012  0.013  error from dispersion  
0.277  0.288  0.299  1.5 isocontour , smoothed  
0.012  0.013  0.013  error from dispersion  
50000, redshift space  125  128  0.062  0.248  3.841  all , no smoothing  
0.025  0.027  16.11  error from dispersion  
0.100  0.184  0.378  all , smoothed  
0.027  0.022  0.118  error from dispersion  
0.227  0.264  0.309  1.5 isocontours , no smoothing  
0.030  0.034  0.050  error from dispersion  
0.209  0.237  0.269  1.5 isocontours , smoothed  
0.026  0.028  0.039  error from dispersion  
(D2)  11000, real space  125  128  11000  0.192  0.250  0.326  all , no smoothing  
0.011  0.017  0.026  error from dispersion  
0.216  0.255  0.300  all , smoothed  
0.015  0.020  0.026  error from dispersion  
0.251  0.292  0.340  1.5 isocontour , no smoothing  
0.023  0.027  0.033  error from dispersion  
0.249  0.281  0.318  1.5 isocontour , smoothed  
0.024  0.028  0.033  error from dispersion  
11000, redshift space  125  128  0.074  0.226  1.323  all , no smoothing  
0.028  0.035  1.687  error from dispersion  
0.096  0.197  0.492  all , smoothed  
0.031  0.034  0.238  error from dispersion  
0.190  0.253  0.347  1.5 isocontours , no smoothing  
0.040  0.048  0.099  error from dispersion  
0.182  0.233  0.308  1.5 isocontours , smoothed  
0.040  0.044  0.089  error from dispersion 
 (D1) 125 realizations of dark matter catalogs involving 50000 points, similarly to the mock catalogs with mass thresholding used in § 4.4;
 (D2) 125 realizations of dark matter catalogs involving 11000 points, similarly to the mock catalogs with mass thresholding used in § 4.4.
We perform exactly the same measurements in these diluted catalogs as in sample (i) (in real space, except that we use grids for interpolating the fields and we have 125 different realizations) and in samples (iv) (in redshift space). In the latter case, note that the observer position in the simulation cube is different for each realization, chosen to lie on a regular pattern covering the full simulation volume, as in samples (iv).
The results are summarized in Table 2 and illustrated by Figs. 7 and 8. We first discuss real space measurements using the second order moments of the full PDF and compare the values obtained for to those from the undiluted sample (i), given in Table 1. Note that this sample uses a grid for the interpolation instead of the one used for samples D1 and D2. Using a resolution grid increases the measured value of in the first line of Table 1 from to . Taking this into account, we notice that, due to the diluted nature of the samples, our interpolation procedure uses an adaptive kernel of larger size: as shown by Fig. 7, this makes the fields more linear, closer to the Gaussian limit, and decreases the level of effective bias brought by the "propeller" shape of the bivariate PDF. As a result, prior to additional smoothing with a Gaussian window of size Mpc, the measured is larger for the diluted samples than for the full one, and the convergence to the linear theory prediction improves with the level of dilution. The mean interparticle distance in the sparser sample is of the order of Mpc, slightly smaller than the size of the post-processing Gaussian smoothing kernel. One thus expects rough agreement between the full sample and samples D1 and D2 after smoothing with such a window, which is approximately the case. Furthermore, the measured value of in D2 is not very sensitive to whether additional smoothing is performed or not, since .
Still examining real space measurements, we now consider the values of measured from the second order moments of the PDF, but using only the region of best likelihood, Eq. (52), as in the previous paragraph. As expected, the effective bias due to nonlinear contributions (or propeller effects) is tremendously reduced, and one recovers a value of compatible with the true value, given the error bars. The latter, which estimate pure shot noise, are of the order of 4 and 10% for D1 and D2, respectively. As for the full sample, this result also stands when additional smoothing is applied to the data, although one can notice a slight bias towards lower values of .
We now turn to redshift space measurements and first consider the value of measured using the second order moments of the full PDF. The results obtained in the previous section still hold (compare Table 1 to Table 2), except that the measurements performed before additional Gaussian smoothing give a lower value of , for D1 and for D2: the low bias on gets worse with dilution, since the larger size of the adaptive kernel tends to reduce the effect of fingers of God. As noticed previously, finger-of-God effects help to reduce the asymmetry brought by the "propeller", which is the main source of the low bias on . Additional Gaussian smoothing with a window of size Mpc improves the convergence between D1, D2 and the full sample, as expected, but induces a highly underestimated value of , because the propeller effects are then prominent. These arguments are supported by examination of Fig. 8. Note that the asymmetry of the tails of the bivariate PDF around the major axis of the maximum likelihood region, already observed and explained in Fig. 6, is now more pronounced, at least visually.
Measurements are improved by selecting the region of best likelihood, but not as much as in the real space case or in the fully sampled redshift space case: at best, is underestimated by about 12 percent. Additional smoothing, or passing from D1 to D2, increases the bias, as expected. This underestimate comes again from the fact that the adaptive interpolating kernel is now of much larger extent than for the full particle sample, which induces biases comparable to those observed for the full samples with additional smoothing (§ 4.3.3).
Note that it is possible to reduce the effective bias observed on by narrowing the region of best likelihood, at the price of increased errors. For instance, taking a 38 percent confidence region enclosed by a isocontour gives and for D1 and D2, respectively. Our procedure for measuring can thus certainly be improved with a more sophisticated treatment of the region of best likelihood. For instance, a way to extract the parameter from the data in an unbiased fashion could consist in measuring the local slope of the skeleton of the surface representing the bivariate PDF (see Novikov, Colombi & Doré, 2006) after appropriate (adaptive) smoothing of the velocity–gravity scatter plot. This is left for future work; in what follows, we shall, for the sake of simplicity, still use our likelihood contour technique, while remaining aware of the bias brought by dilution.
4.4 Measurements on simple mock “galaxy” catalogs
To estimate in a sufficiently realistic way how biasing affects the results on the velocity–gravity relation, we extracted from the simulation four "galaxy" catalogs, corresponding to two methods of treating dark matter halos. In the first method, we consider each dark matter halo as a galaxy. In the second method, we consider each substructure present in dark matter halos as a galaxy. The details are given in § 4.4.1. For each kind of "galaxy" catalog, two levels of biasing are considered, corresponding to small and large thresholds on the dark matter (sub)halo masses, and respectively. In all cases, each galaxy is given the same weight while density and velocity are interpolated as explained in § 4.1. This rather extreme way of treating the bias is far from optimal: appropriate weighting could be given to each object in order to correct, at least partly, for the effects of the bias, as further discussed in the conclusions of this paper.
The quantitative analysis of the velocity–gravity relation is performed in § 4.4.2. The measurements are interpreted in the light of the analyses of the previous paragraphs, using as a guideline the large scale bias obtained from the measurement of the power spectrum of the "galaxy" distribution.
4.4.1 The mock catalogs
To extract halos and substructures from the simulation, we use the publicly available software adaptaHOP (Aubert, Pichon & Colombi, 2004). (Footnote 6: The parameters used in adaptaHOP are the same as in Aubert et al. (2004, Appendix B), namely , , and .) AdaptaHOP builds an ensemble of trees. Each tree corresponds to a halo, which is a connected ensemble of particles with SPH density . The branches of the trees are composite structures whose connectivity is controlled by the saddle points in the particle distribution. The leaves of the trees correspond to the smallest substructures one can find in the simulation. From this ensemble of trees, we extract two kinds of catalogs: one where each galaxy is identified with a tree, and one where each galaxy is identified with a leaf of a tree (if a tree has only one leaf, the halo is its own single substructure). Note that the velocities of these galaxies are computed as the average velocity of all the particles belonging to the corresponding (sub)structure.
Additional mass thresholding is used to control the number of "galaxies" in the catalogs. In the end, 4 mock catalogs are obtained (see also Table 3):
Samp.  Content  Size  Measured from:  Comment  

(v)  HAL. , real space  1  128  43482  0.304  0.375  0.464  all, no smoothing  
,  0.330  0.379  0.435  all, smoothed  
0.318  0.368  0.426  1.5 isocontour, no smoothing  
0.329  0.370  0.415  1.5 isocontour, smoothed  
HAL. 