Bayesian cosmic density field inference from redshift space dark matter maps

E. G. Patrick Bos,^{1,2}\thanks{E-mail: p.bos@esciencecenter.nl} Francisco-Shu Kitaura^{3,4} and Rien van de Weygaert^{2}
{}^{1}Netherlands eScience Center, Science Park 140, 1098XG, Amsterdam
{}^{2}Kapteyn Astronomical Institute, University of Groningen, PO box 800, 9700AV, Groningen
{}^{3}Instituto de Astrofisica de Canarias (IAC), Calle Via Lactea s/n, 38200, La Laguna, Tenerife, Spain
{}^{4}Departamento de Astrofísica, Universidad de La Laguna (ULL), E-38206, La Laguna, Tenerife, Spain

Accepted XXX. Received YYY; in original form ZZZ

Abstract

We present a self-consistent Bayesian formalism to sample the primordial density fields compatible with a set of dark matter density tracers, after cosmic evolution, observed in redshift space. Previous works on density reconstruction considered redshift space distortions as noise or included an additional iterative distortion correction step. We present here the analytic solution of coherent flows within a Hamiltonian Monte Carlo posterior sampling of the primordial density field. We test our method within the Zel’dovich approximation, presenting also an analytic solution including tidal fields and spherical collapse on small scales using augmented Lagrangian perturbation theory. Our resulting reconstructed fields are isotropic and their power spectra are unbiased compared to the true one defined by our mock observations. Novel algorithmic implementations are introduced regarding the mass assignment kernels when defining the dark matter density field and optimization of the time step in the Hamiltonian equations of motion. Our algorithm, dubbed barcode, promises to be especially suited for analysis of the dark matter cosmic web implied by the observed spatial distribution of galaxy clusters — such as obtained from X-ray, SZ or weak lensing surveys — as well as that of the intergalactic medium sampled by the Lyman alpha forest or perhaps even by deep hydrogen intensity mapping. In these cases, virialized motions are negligible, and the tracers cannot be modeled as point-like objects. It could be used in all of these contexts as a baryon acoustic oscillation reconstruction algorithm.
Key words: galaxies: distances and redshifts – large-scale structure of Universe – methods: statistical – methods: analytical – cosmology: observations

\section{Introduction}

The Cosmic Web of the Universe arises from the gravitational instability caused by tiny primordial density perturbations, which presumably have their origin in quantum fluctuations. At initial cosmic times, perturbations are linear and the statistics describing them are extremely close to Gaussian, though some deviations from Gaussianity could be expected depending on the inflationary phase the Universe has probably experienced after its birth \citep1980PhLB...91...99S,2018arXiv180706211P. Therefore, the Cosmic Web encodes the information to understand nonlinear structure formation \citepbond96,weygaert96,cautun14,2014MNRAS.440L.106A and disentangle the interplay between dark matter and dark energy \citepparklee07,bos12,vogelsberger14. One of the great challenges of the coming years is to rigorously test these theories using observations \citepwerner08,2016ApJ...817..160L,2018arXiv180504555T.

However, observations are based on some biased tracers, which are affected by their proper motions, as they are measured in so-called redshift space \citepkaiser87,hamilton98. We are facing three problems here: the first is the action of gravity linking the linear primordial fluctuations to the final nonlinear density field, the second is modeling the bias of the dark matter tracers and the third is solving for redshift space distortions.

There have been dramatic developments, ever since the pioneering works by \citetpeebles89 — proposing a least action method linking the trajectories of particles from initial to final cosmic times — \citetweinberg92 — proposing a rank-ordering scheme, realizing the different statistical nature between the initial and final density fluctuations — and \citetnusser92 — proposing a time reversal machine based on the Zel’dovich approximation \citepzeldovich70. The minimization of an action led to two approaches: the FAM method \citepnusser00,2002MNRAS.335...53B and the MAK method \citep2003MNRAS.346..501B,2006MNRAS.365..939M,lavaux10. Time reversal machines based on higher order corrections have been proposed \citep1993ApJ...405..449G,1999MNRAS.308..763M,kitauraangulo12. The same concept has been proposed to enhance signatures encoded in the primordial density fluctuations, such as the Baryon Acoustic Oscillations (BAOs) \citep[see][]2007ApJ...664..675E, as this linearizes the density field, transferring information from the higher order statistics to the two point correlation function \citep[see][]2015PhRvD..92l3522S. Constrained simulations have been applied to study the Local Universe \citepclues10, but it is hard to find a configuration of constrained peaks that gives rise to a desired Cosmic Web configuration after non-linear gravitational evolution. The reverse Zel’dovich approximation can provide a first order correction to this limitation of constrained simulations \citepdoumler13,2018MNRAS.476.4362S. The problem of these methods is that they are limited by the approximation of reversing gravity, which breaks down with so-called shell crossing \citep2016IAUS..308...69H.

Bayesian approaches have permitted the incorporation of forward modeling, which solves this problem, at a considerably higher computational cost \citepkitaura13,jaschewandelt13,wang13. We will follow this approach here, and more specifically the setting described in \citetwang13, based on the combination of a Gaussian prior describing the statistical nature of the primordial fluctuations and a Gaussian likelihood minimizing the squared distance between the reconstructed density and the true one.
Nonetheless, we would like to remark that the algorithmic approach of connecting the primordial density field with the final one followed by \citetwang13 is based upon the one described in \citetjaschewandelt13, as we also do here. Our novel contribution to this formalism consists of including coherent redshift space distortions (RSD) in an analytical way, within the Hamiltonian Monte Carlo sampling of the posterior distribution function (resulting from the product of the prior and the likelihood). While \citetjaschewandelt13 do not give a treatment of RSD, \citetwang13 rely on an iterative RSD correction applied prior to the Bayesian reconstruction method \citepwang09,wang12. In a recent study in the context of Bayesian reconstructions of the primordial fluctuations, \citet2018arXiv180611117J apply a redshift space transformation to the dark matter field, in which the bias is defined with respect to redshift space. Our formalism entails a more natural approach, and includes a biasing scheme with respect to Eulerian space \citep[see also][]thesis. This implies additional terms in the Hamiltonian equations, and has the advantage that it admits natural real space bias models that couple the large scale dark matter distribution to the galaxy population \citep[see e.g.][and references therein]wang13,kitaura14,2014MNRAS.441..646N,ata15,2016MNRAS.456.4156K,2017MNRAS.472.4144V. The only restriction of our approach is the lack of a treatment for virialized motions, as we are not able at this moment to include a random term within the analytical posterior sampling expression. Virial motions can, in general, be corrected for with fingers-of-god (FoGs) compression \citeptegmark04,2015AJ....149..171T,2016A&A...588A..14T,2017MNRAS.469.2859S. Within a Bayesian context, it is possible to include a virialized cluster term following the approach of \citetkitaura13, which includes a likelihood comparison step object by object and not cell by cell as in \citetjaschewandelt13 or \citetwang13. In the object based approach, the likelihood comparison can be done in redshift space, including virialized motions, as was demonstrated in \citethess13. This method requires two steps within an iterative Gibbs-sampler to handle the reconstruction problem: one to transform dark matter tracers from redshift space at final cosmic times to real space at initial cosmic times, and a second one to make the link between the discrete distribution and the continuous Gaussian field. It is, in fact, also possible to include an additional Gibbs-sampling step in the Bayesian formalism to self-consistently sample peculiar motions and solve for RSD \citepkitaura16rsd,ata17.

Figure 1: A high level overview of the barcode workflow and pipeline.
Left-most panel: illustration of the challenge posed when applying a reconstruction algorithm on redshift space distorted input data. The two-dimensional correlation function of a (statistically) isotropic density field is a nearly perfectly circular pattern. This is what we see in the solid black lines that represent the expected (“true”) field that we want to reconstruct. When comparing the two-dimensional correlation function of the real — non-redshift — space dark matter density fields without correcting for the RSDs, the result is a strong anisotropic deviation (red dashed lines) from this expected pattern. barcode has been specifically designed to deal with this circumstance, and samples primordial density fluctuation field realizations that are constrained to evolve into a present-day mass distribution in accordance with the input set of observations.
barcode seeks to reconstruct the underlying real-space dark matter density field from a set of observational input data. The procedure statistically samples the realizations allowed under these observational constraints. Unique for barcode is its ability to directly process observations as they are obtained in redshift space, without the need for an intermediate redshift distortion correction step. Also implicit in the formalism is that it takes into account the noisy nature of the observational input data. The four left-hand side panels illustrate this sequence of complications. Top panel: the primordial density fluctuations that we sample. Second panel: the corresponding dark matter density field in real space, gravitationally evolved from the primordial field. Third panel: the field in redshift space. Bottom panel: the redshift space field with “observational” noise added.
The barcode procedure described in this work produces likely reconstructions of the density field, on the basis of the physical and statistical assumptions and prescriptions of the Bayesian model relating the observed mass distribution to the primordial density perturbations. It takes into account the gravitational evolution of the mass distribution, stochastic processes and observational effects.
The right-hand side panels show zoom-ins on three realizations of the barcode procedure: primordial fields on the left, corresponding gravitationally evolved real space fields on the right. Note that we sample primordial fields, which can in turn be used as initial conditions for evolution studies of the large scale structure. The images reveal the substantial level of variation between the resulting reconstructions of the underlying dark matter distribution. The differences between the reconstructions reflect the uncertainties emanating from the biased and noisy observational input data, as well as those induced by the approximate nature inherent to the physical modeling of gravitational evolution and biasing. Despite the intrinsic variations between permissible distributions of the dark matter, the success of the barcode procedure in correcting for the redshift distortions in the input observational data is evidenced by the three far right-hand side panels. For three different input fields, the algorithm has produced near perfectly isotropic correlation functions in real space, clear evidence for the removal of anisotropies due to the presence of redshift distortions.

In this work, we propose a self-consistent transformation of the data model into redshift space, allowing us to sample directly from the posterior distribution function in one step within a Hamiltonian Monte Carlo sampler. Thus, this represents the first self-consistent (coherent) redshift space based primordial density fluctuation reconstruction code built on a single iterative sampling step, which can also naturally deal with masks. Our implementation of this approach is called barcode. A high level overview of the method and the problem it solves is presented in figure 1. In this code, we transform from Lagrangian coordinates {\bm{q}} to Eulerian coordinates {\bm{x}} using a structure formation model, and within the same equation also from {\bm{x}} to the corresponding redshift space {\bm{s}}. To implement the redshift space transformation into the Hamiltonian Monte Carlo formalism, we need to derive the corresponding analytical derivatives. For our structure formation model, we will restrict our studies to the Zel’dovich approximation, as we seek the most efficient, simple formalism, which has been shown to be accurate when it is fully treated \citep[see e.g.][]white15.

barcode has already been referred to in various works giving details of its implementation \citepbos16tallinn,thesis. One of our main motivations for developing barcode is the analysis of galaxy clusters whose masses have been measured in X-rays \citepsarazin86,mulchaey00,ikebe96,nulsen10, Sunyaev-Zel’dovich cluster surveys \citep[e.g.][]1999PhR...310...97B,2015ApJS..216...27B,2016A&A...594A..24P,2018ApJS..235...20H and weak lensing studies of clusters \citep[e.g.][]1993ApJ...404..441K,1997ApJ...489..522L,2010arXiv1002.3952U,laureijs11,dejong12,2013SSRv..177...75H,2016MNRAS.459.1764S,2017A&A...598A.107R,2018arXiv180901669T. This frees us from having to consider galaxy bias, contrary to many similar studies in the literature. Nonetheless, we have implemented a power-law bias in barcode for more general cases, as was done in \citetwang13. More details on the context and motivation of our focus on clusters are given in \citetthesis. We note that in such a context, the approach proposed here is potentially more appropriate than the ones cited above that model tracers as point-like objects, because the shapes and orientations of clusters play a decisive role in their tidal effects on their environments \citepbond96,vdwbert96. We leave a more detailed investigation of this to later work.

Broader cosmological applications are also envisioned. Extending this work to sample the growth rate is trivial, and can be done in a similar fashion to \citetgranett15, but including the more complex and accurate algorithm presented here. In fact, redshift space distortions can also be leveraged as a proxy for constraining the nature of gravity and cosmological parameters \citep[e.g.][]berlind01,zhang07,jainzhang08,guzzo08,nesseris08,songkoyama09,songpercival09,percivalwhite09,mcdonaldseljak09,white09,song10,zhao10,song11. To this end, many recent studies focus on the measurement of redshift space distortions \citepcole95,peacock01b,percival04,daangela08,okumura08,guzzo08,blake11,jennings11,2012ApJ...748...78K,2012MNRAS.420.2102S,2012MNRAS.426.2719R,2012JCAP...11..014O,2013MNRAS.429.1514S,2013PhRvD..88j3510Z,2013MNRAS.436.3089B,2013A&A...557A..54D,2014MNRAS.439.3504S,2014MNRAS.440.2692S,2014A&A...563A..37B,2014MNRAS.440.2222T,2014JCAP...05..003O,2014MNRAS.443.1065B.

\subsection{Outline}

This paper is structured as follows. First we present the methodology, giving details of the analytical implementations (see Section [). In particular we focus on the novel redshift space formalism implemented in barcode. We briefly recap the most important barcode concepts and equations in Appendix [. In Section [, we set the stage for our analyses by describing and illustrating our mock observations and the reconstructions that barcode yields from them.

The core results of our investigation are described in Section [. The reconstructions including the redshift space model demonstrate the major benefits of our novel approach. For contrast, we show in Appendix B what would happen if one used barcode without our redshift space extension on observational data in redshift space coordinates.

These results are followed by a more technical discussion on the performance aspects of barcode with this new model in Section [. We evaluate whether the added stochasticity of the redshift space transformation causes the MCMC chain to evolve differently than it does without our redshift space model. We close with a discussion of the results in Section [.

Additional material is available in the appendices. The two dimensional correlation function, which is used extensively in the results sections, is defined in Appendix [. An important technical aspect in barcode is the use of kernel density estimation, the details of which we discuss in Appendix [. Finally, in Appendix [, we give an extensive derivation of the equations necessary for configuring barcode to use a second-order Lagrangian Perturbation Theory (2LPT) structure formation model, optionally including the spherical collapse terms of the “ALPT” model.

\section{Methodology}

The Bayesian reconstruction algorithm connecting the evolved dark matter density to the primordial fluctuation field has already been presented in several works \citepkitaura12,kitaura13,jaschewandelt13,wang13,bos16tallinn. We summarize the central equations used in barcode — our C++ implementation of this algorithm — in Appendix [.

Let us now consider the Hamiltonian Monte Carlo calculations in redshift space {\bm{s}} (equation [) instead of Eulerian coordinates {\bm{x}}. Densities are then functions of {\bm{s}}, e.g. \rho^{\mathrm{obs}}({\bm{s}}). For convenience, in this section we write \breve{q}_{i} instead of \delta({\bm{q}}_{i}) for the initial density field (the sample in the MCMC chain), where i is a grid location (i.e. a component of the sample vector).

In the first part of this section, we define what we mean by redshift space coordinates. Next, in Section [, we give a brief overview of the different kinds of non-linearities involved in the redshift space transformation and in our Lagrangian Perturbation Theory models of structure formation. In particular, we discuss what this means for the representation of non-linear cosmic structure. We then dive into the derivation of the main quantity necessary for the HMC algorithm, the Hamiltonian likelihood force, in Section [. In Section [ we simplify the fully general form we derived to arrive at the plane parallel approximation that we employed to produce the results in this paper.

\subsection{Redshift space coordinates}

From a practical perspective we will describe redshift space as follows. Redshift space coordinates {\bm{s}} can be defined as

{\bm{s}}\equiv{\bm{x}}+\frac{1}{Ha}\left(({\bm{v}}-{\bm{v}}_{\mathrm{obs}})\cdot\hat{{\bm{x}}}\right)\hat{{\bm{x}}}\equiv{\bm{x}}+{{\bm{\Psi}}_{s}}\,,

where {\bm{s}} is a particle’s location in redshift space, {\bm{x}} is the Eulerian comoving coordinate, {\bm{v}} is the particle’s (peculiar) velocity, {\bm{v}}_{\mathrm{obs}} is the velocity of the observer and \hat{{\bm{x}}} is the comoving unit vector. a is the cosmological expansion factor, which converts from physical to comoving coordinates, and H is the Hubble parameter at that expansion factor, which converts the velocity terms to distances with units of h^{-1}\mathrm{Mpc}. We call {{\bm{\Psi}}_{s}} the redshift space displacement field, analogously to the displacement field {\bm{\Psi}} that transforms coordinates from Lagrangian to Eulerian space in LPT (equation [).

One important point to stress is that in equation [ (and all equations henceforth), the comoving coordinates are defined such that the observer is always at the origin. The unit vector \hat{{\bm{x}}}, and thus the entire redshift space, changes completely when the origin changes.
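To make this mapping concrete, the following minimal sketch applies the redshift space transformation above to a set of particles, with the observer at the origin. It is an illustration in Python rather than barcode's actual C++ implementation, and the function and argument names are our own:

\begin{verbatim}
import numpy as np

def to_redshift_space(x, v, v_obs, H, a):
    """Map comoving positions x to redshift space, s = x + Psi_s.

    x, v  : (N, 3) comoving positions [Mpc/h] and peculiar
            velocities [km/s]; the observer sits at the origin.
    v_obs : (3,) observer velocity [km/s].
    H, a  : Hubble parameter [h km/s/Mpc] and expansion factor.
    """
    x_norm = np.linalg.norm(x, axis=1, keepdims=True)
    x_hat = x / x_norm                       # radial unit vectors
    v_rad = ((v - v_obs) * x_hat).sum(axis=1, keepdims=True)
    return x + v_rad * x_hat / (H * a)       # s = x + Psi_s
\end{verbatim}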

From redshift measurements z, we can derive the radial component of the total velocity {\bm{u}} of galaxies and other objects:

{\bm{u}}={\bm{v}}_{H}+{\bm{v}}\,,

where {\bm{v}}_{H} is the Hubble flow velocity and {\bm{v}} is the peculiar velocity. Redshift space (comoving) coordinates {\bm{s}} can be written in terms of the total velocity as

{\bm{s}}\equiv\frac{1}{Ha}\left({\bm{u}}\cdot\hat{{\bm{x}}}\right)\hat{{\bm{x}}}\,.

We can compare such observational redshift space positions directly with theoretical models by transforming our models to redshift space as given in equation [.

\subsection{Non-linearities in the redshift space transformation}

Lagrangian Perturbation Theory (LPT) is accurate in its position and velocity estimates up to (quasi-)linear order. Since our objective is to use it in a model based on data at low redshift, where we are dealing with strong non-linearities, LPT is not a fully accurate model.

To clarify, there are two types of non-linearity involved in redshift space distortion theory \citepwhite12. \citetkaiser87 and others — when talking about linear redshift space distortions\footnote{Linear redshift space theory is about finding analytical solutions to \rho({\bm{s}}) \citepkaiser87. One of the great results of this theory is the so-called Kaiser effect, which predicts a boost on large scales and a deficiency at small scales in the power spectrum. Note that this is a linear effect; one assumes |\rho({\bm{x}})/\rho_{c}-1|\ll 1 to derive it.} — are dealing with non-linearities in the mapping from Eulerian to redshift space. One marked example of this non-linear mapping is the creation of caustics in the triple-value regions. In the centers of clusters, we also have to take into account that the velocity field itself is non-linear, which, for instance, leads to FoGs.
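For reference, in the plane parallel limit this linear theory leads to the well-known Kaiser formula for the redshift space power spectrum,

P^{s}(k,\mu)=\left(1+f\mu^{2}\right)^{2}P(k)\,,

where \mu is the cosine of the angle between the wave vector and the line of sight and f is the linear growth rate. We quote this standard result only to make the large scale boost explicit; our formalism does not use it, since we apply the full non-linear mapping of equation [.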

Since we make no approximations in the redshift space mapping, we are dealing with its full non-linear form. The (quasi-)linear velocity field of LPT, on the other hand, will become increasingly inaccurate as we venture into the non-linear density and velocity-field regime, leading to increasingly inaccurate redshift space representations.

Large scale (linear regime) coherent flows are still well modeled in our LPT-based framework. Bulk velocities {\bm{v}} in LPT scale linearly with the displacement field {\bm{\Psi}}:

{\bm{v}}=aH\sum_{i=1}^{o}f^{(i)}{\bm{\Psi}}^{(i)}\,,

where i is summed up to the used perturbation order o (o=1 for the Zel’dovich approximation and o=2 for 2LPT) and f is the linear velocity growth factor:

f^{(i)}\equiv\frac{\partial\ln D^{(i)}}{\partial\ln a}\,.

In linear theory, the displacement is directly proportional to the gravitational acceleration, which follows directly from the initial density field. On large scales, structure still resides mostly in the linear regime, so that LPT manages to describe the velocity field fairly accurately. The squashing effect on large scales will therefore be well represented in our model.
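As a concrete illustration of the velocity scaling above, the following sketch computes the Zel’dovich velocity field from a displacement field. The \Omega_{m}(a)^{0.55} fitting form for f^{(1)} is an assumption made here for brevity; barcode computes the growth factors exactly:

\begin{verbatim}
import numpy as np

def growth_rate(omega_m0, a, gamma=0.55):
    """Approximate linear growth rate f = dlnD/dlna.

    Uses the standard Omega_m(a)**gamma fitting form for flat
    LCDM (w = -1); a stand-in for the exact derivative.
    """
    E2 = omega_m0 / a**3 + (1.0 - omega_m0)  # H(a)^2 / H0^2
    omega_m_a = (omega_m0 / a**3) / E2
    return omega_m_a**gamma

def zeldovich_velocity(psi, a, H, f1):
    """Peculiar velocity v = a H f^(1) Psi^(1) (first order LPT)."""
    return a * H * f1 * psi
\end{verbatim}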

Cluster infall is modeled poorly in Lagrangian Perturbation Theory, and virialized and multistream motion are not included in LPT at all. This means that in this work we model neither FoGs nor the triple-value regime within the turnaround radius \citep[see e.g.][]haarlemvdw93. Even though there might be FoG-like features around clusters in LPT as well, LPT breaks down there, so any incidental triple-value regions cannot be trusted in our approach and cannot be considered true features. We briefly come back to this in the discussion in Section [.

\subsection{The Hamiltonian likelihood force in redshift space}

The Hamiltonian likelihood force\footnote{The Hamiltonian prior force is defined in Lagrangian space only. We need not take redshift space into account there.} — defined for observations in comoving coordinates in equation [ — is redefined for observations in redshift space as

F^{\mathscr{L}}_{i}=-\sum_{j}\frac{\partial\ln\mathscr{L}}{\partial\rho^{s}({\bm{s}}_{j})}\frac{\partial\rho^{s}({\bm{s}}_{j})}{\partial\breve{q}_{i}}\,,

where \rho^{s}({\bm{s}}_{j}) is the Eulerian density field in redshift space, at grid location j, which is found by evolving the Lagrangian density field \breve{q}=\delta({\bm{q}}) forward (see Appendix [), and \breve{q}_{i} is the Lagrangian density field (the signal) at grid location i. The first right-hand side multiplicative term in equation [ is given analogously to equation [ by

\frac{\partial\ln\mathscr{L}}{\partial\rho^{s}({\bm{s}}_{j})}=\frac{w_{j}}{\sigma_{j}^{2}}\left[T(\rho^{\mathrm{obs}}({\bm{s}}_{j}))-\rho^{s}({\bm{s}}_{j})\right]\,,

where T(\rho) is a transfer function that performs a simple linear transformation from density values in an N-body simulation (which should correspond to the observed quantities) to the corresponding values in the structure formation model used. For the non-RSD models we could use the functions from \citetleclercq13. However, the fitted functions from that work were not calibrated for redshift space densities, so we would need to rederive such functions. We highlight this possibility for future applications (see Section [), but, without any loss of generality, we define T(\rho)=\rho here. Since the T term in this equation does not depend on the primordial density field, its exact form is not important in our derivation.

Note that indices i and j run over completely different regular grids. The i index here refers to the initial Lagrangian space grid, while the j index is for the present-time Eulerian grid defined by the observations.
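For concreteness: on the grid, the likelihood gradient above amounts to a single cell-by-cell array operation. A minimal sketch (with hypothetical argument names) could look like this:

\begin{verbatim}
import numpy as np

def dlnL_drho(rho_s, rho_obs, w, sigma, T=lambda rho: rho):
    """Cell-wise gradient of the Gaussian log-likelihood with
    respect to the redshift space density:
    (w_j / sigma_j^2) * (T(rho_obs_j) - rho_s_j).

    All arguments are arrays over the Eulerian grid j; T is the
    (here trivial) transfer function.
    """
    return w / sigma**2 * (T(rho_obs) - rho_s)
\end{verbatim}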

The density estimation of \rho^{s}({\bm{s}}_{j}) is done with SPH splines (Appendix [). In this case, we replace {\bm{x}} by {\bm{s}} in equation [:

\rho^{s}({\bm{s}})=\sum_{i}m_{i}W({\bm{s}}-{\bm{s}}_{i};h_{s})\,.

With that we can write the second right-hand side term of equation [ as

\begin{split}
\frac{\partial\rho^{s}({\bm{s}}_{j})}{\partial\breve{q}_{i}}
&=\sum_{k}m_{k}\frac{\partial W(\|{\bm{s}}_{j}-{\bm{s}}_{k}\|)}{\partial\breve{q}_{i}}\\
&=\sum_{k}m_{k}\frac{\partial W(\|{\bm{s}}_{j}-{\bm{s}}_{k}\|)}{\partial{\bm{s}}_{k}}\cdot\frac{\partial{\bm{s}}_{k}}{\partial\breve{q}_{i}}\,,
\end{split}

where the gradient of W is given in equation [ (replacing {\bm{x}} by {\bm{s}}). The second term is where the real difference with the non-RSD method comes in, because

\frac{\partial{\bm{s}}_{k}}{\partial\breve{q}_{i}}=\frac{\partial{\bm{\Psi}}({\bm{q}}_{k})}{\partial\breve{q}_{i}}+\frac{\partial{{\bm{\Psi}}_{s}}({\bm{q}}_{k})}{\partial\breve{q}_{i}}\,,

where, besides the derivative of {\bm{\Psi}}, we now have the extra derivative of the redshift space displacement {{\bm{\Psi}}_{s}} to deal with. It is this term that we will derive further in the following subsections. Putting this back into equation [, we can rewrite it as

\begin{split}
F^{\mathscr{L}}_{i}
&=-\sum_{k}{\bm{V}}_{k}\cdot\left[\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}+\frac{\partial{{\bm{\Psi}}_{s}}_{k}}{\partial\breve{q}_{i}}\right]\\
&=\sum_{m}h_{m}\frac{\partial\vartheta_{m}}{\partial\breve{q}_{i}}-\sum_{k}{\bm{V}}_{k}\cdot\frac{\partial{{\bm{\Psi}}_{s}}_{k}}{\partial\breve{q}_{i}}\\
&\equiv F^{\mathscr{L},\mathrm{reg}}_{i}+F^{\mathscr{L},\mathrm{rss}}_{i}\,,
\end{split}

where the first part is the same as the “regular” F^{\mathscr{L}} without redshift space from equation [ and the second part represents the redshift space contribution.

We carry on to derive in detail the multiplicand in this added term, which is the derivative or gradient of the redshift space displacement field {{\bm{\Psi}}_{s}} with respect to the signal (the primordial density field \delta({\bm{q}})). For convenience, we multiply by Ha, yielding:

\begin{split}
(Ha)\frac{\partial{{\bm{\Psi}}_{s}}({\bm{q}}_{k})}{\partial\breve{q}_{i}}
&=\frac{\partial\left(({\bm{v}}_{k}-{\bm{v}}_{\mathrm{obs}})\cdot\hat{{\bm{x}}}_{k}\right)\hat{{\bm{x}}}_{k}}{\partial\breve{q}_{i}}\\
&=\left(\frac{\partial{\bm{v}}_{k}}{\partial\breve{q}_{i}}\cdot\hat{{\bm{x}}}_{k}\right)\hat{{\bm{x}}}_{k}+\left(({\bm{v}}_{k}-{\bm{v}}_{\mathrm{obs}})\cdot\frac{\partial\hat{{\bm{x}}}_{k}}{\partial\breve{q}_{i}}\right)\hat{{\bm{x}}}_{k}\\
&\quad+\left(({\bm{v}}_{k}-{\bm{v}}_{\mathrm{obs}})\cdot\hat{{\bm{x}}}_{k}\right)\frac{\partial\hat{{\bm{x}}}_{k}}{\partial\breve{q}_{i}}\,.
\end{split}

We can further expand the gradient of \hat{{\bm{x}}}_{k}:

\begin{split}
\frac{\partial\hat{{\bm{x}}}_{k}}{\partial\breve{q}_{i}}
&=\frac{1}{\|{\bm{x}}_{k}\|}\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}+{\bm{x}}_{k}\left[-\frac{1}{2}\frac{2x_{k}\frac{\partial x_{k}}{\partial\breve{q}_{i}}+2y_{k}\frac{\partial y_{k}}{\partial\breve{q}_{i}}+2z_{k}\frac{\partial z_{k}}{\partial\breve{q}_{i}}}{(x_{k}^{2}+y_{k}^{2}+z_{k}^{2})^{\frac{3}{2}}}\right]\\
&=\frac{1}{\|{\bm{x}}_{k}\|}\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}-\frac{{\bm{x}}_{k}}{\|{\bm{x}}_{k}\|^{3}}\left({\bm{x}}_{k}\cdot\frac{\partial{\bm{x}}_{k}}{\partial\breve{q}_{i}}\right)\\
&=\frac{1}{\|{\bm{x}}_{k}\|}\left[\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}-\left(\hat{{\bm{x}}}_{k}\cdot\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}\right)\hat{{\bm{x}}}_{k}\right]\,.
\end{split}

This is the most explicit form we can give in the general case. To find a specific solution for our models, we must specify {\bm{v}}. In LPT, given {\bm{\Psi}}, we know {\bm{v}}:

{\bm{v}}=aHf^{(1)}{\bm{\Psi}}^{(1)}\qquad\mathrm{LPT~(Zel^{\prime}dovich)}\,,
{\bm{v}}=aH\left(f^{(1)}{\bm{\Psi}}^{(1)}+f^{(2)}{\bm{\Psi}}^{(2)}\right)\qquad\mathrm{2LPT}\,.

For convenience, in what follows we split {{\bm{\Psi}}_{s}} into two parts:

\begin{split}
{{\bm{\Psi}}_{s}}
&=\frac{1}{Ha}\left(\sum_{o}{\bm{v}}^{(o)}-{\bm{v}}_{\mathrm{obs}}\right)\cdot\hat{{\bm{x}}}\,\hat{{\bm{x}}}\\
&=\left(\sum_{o}f^{(o)}{\bm{\Psi}}^{(o)}-\frac{{\bm{v}}_{\mathrm{obs}}}{Ha}\right)\cdot\hat{{\bm{x}}}\,\hat{{\bm{x}}}\equiv\sum_{o}{{\bm{\Psi}}_{s}}^{(o)}-{{\bm{\Psi}}_{s}}^{\mathrm{obs}}\,,
\end{split}

where o stands for the LPT order of the {\bm{v}} and corresponding {{\bm{\Psi}}_{s}} terms. This way we can treat the observational velocity contribution separately from the LPT part(s). In what follows we work out the observational part and the Zel’dovich first order LPT term.

\subsubsection{Zel’dovich term}

When we insert equation [ into equation [ and combine with equation [, we find for the derivative of {{\bm{\Psi}}_{s}}^{(1)} (note that by {\bm{\Psi}} we really mean {\bm{\Psi}}^{(1)} here):

\begin{split}
\frac{\partial{{\bm{\Psi}}_{s}}_{k}^{(1)}}{\partial\breve{q}_{i}}
&=f^{(1)}\left[\left(\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}\cdot\hat{{\bm{x}}}_{k}\right)\hat{{\bm{x}}}_{k}+\frac{1}{\|{\bm{x}}_{k}\|}\left({\bm{\Psi}}_{k}\cdot\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}\right)\hat{{\bm{x}}}_{k}\right.\\
&\quad-\frac{2}{\|{\bm{x}}_{k}\|}\left({\bm{\Psi}}_{k}\cdot\hat{{\bm{x}}}_{k}\right)\left(\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}\cdot\hat{{\bm{x}}}_{k}\right)\hat{{\bm{x}}}_{k}\\
&\quad\left.+\frac{1}{\|{\bm{x}}_{k}\|}\left({\bm{\Psi}}_{k}\cdot\hat{{\bm{x}}}_{k}\right)\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}\right]\,.
\end{split}

To facilitate insight into this expression we introduce a few auxiliary variables:

\frac{\partial{{\bm{\Psi}}_{s}}_{k}^{(1)}}{\partial\breve{q}_{i}}\equiv f^{(1)}\left[\left(A^{(1)}_{ki}+B^{(1)}_{ki}-2C^{(1)}_{k}A^{(1)}_{ki}\right)\hat{{\bm{x}}}_{k}+C^{(1)}_{k}\frac{\partial{\bm{\Psi}}^{(1)}_{k}}{\partial\breve{q}_{i}}\right]\,,

following the definitions:

A^{(1)}_{ki}\equiv\frac{\partial{\bm{\Psi}}^{(1)}_{k}}{\partial\breve{q}_{i}}\cdot\hat{{\bm{x}}}_{k}\,,\qquad
B^{(1)}_{ki}\equiv\frac{1}{\|{\bm{x}}_{k}\|}{\bm{\Psi}}^{(1)}_{k}\cdot\frac{\partial{\bm{\Psi}}^{(1)}_{k}}{\partial\breve{q}_{i}}\,,\qquad
C^{(1)}_{k}\equiv\frac{1}{\|{\bm{x}}_{k}\|}{\bm{\Psi}}^{(1)}_{k}\cdot\hat{{\bm{x}}}_{k}\,.

One may verify that in the Zel’dovich case the {\bm{x}} terms are also first order. The A and B terms have units of h^{-1}\mathrm{Mpc} and C is unitless. It is interesting that, while the first three terms represent a contribution in the radial direction, the last term corresponds to a contribution in the direction of the displacement field. Depending on the density field, this may be an arbitrary direction. The C term in front of it contains a division by the distance to the particle, meaning that the closer you look, the stronger this component will be. The same is true for the B term. The derivative of the redshift space displacement field is thus quite different from what one might naively expect from regular (radial) RSD effects, especially near the observer. The A term is the most straightforward, simply representing the component of the derivative of the displacement field in the radial direction.
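To make the structure of these terms explicit, here is a small numerical sketch evaluating A, B, C and the resulting gradient term for a single particle k and signal cell i. It is illustrative only; in barcode these contractions are of course performed over full grids:

\begin{verbatim}
import numpy as np

def abc_terms(psi, dpsi_dq, x):
    """Auxiliary terms for one particle k and one signal cell i.

    psi     : (3,) displacement Psi_k [Mpc/h]
    dpsi_dq : (3,) derivative of Psi_k w.r.t. the signal in cell i
    x       : (3,) Eulerian position (observer at the origin)
    """
    x_norm = np.linalg.norm(x)
    x_hat = x / x_norm
    A = dpsi_dq @ x_hat          # radial part of the derivative
    B = psi @ dpsi_dq / x_norm   # grows as the observer is approached
    C = psi @ x_hat / x_norm     # unitless, same near-observer growth
    return A, B, C

def dpsi_s_dq_zeldovich(psi, dpsi_dq, x, f1):
    """f1 * [(A + B - 2 C A) x_hat + C dPsi/dq], as derived above."""
    A, B, C = abc_terms(psi, dpsi_dq, x)
    x_hat = x / np.linalg.norm(x)
    return f1 * ((A + B - 2 * C * A) * x_hat + C * dpsi_dq)
\end{verbatim}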

\subsubsection{Observational velocity term}

Elaborating on the remaining part of equation [, in combination with equation [, yields the observational term. This term is valid for any structure formation model encoded in a displacement field {\bm{\Psi}}.

\begin{split}
(Ha)\frac{\partial{{\bm{\Psi}}_{s}}_{k}^{\mathrm{obs}}}{\partial\breve{q}_{i}}
&=\left({\bm{v}}_{k}^{\mathrm{obs}}\cdot\frac{\partial\hat{{\bm{x}}}_{k}}{\partial\breve{q}_{i}}\right)\hat{{\bm{x}}}_{k}+\left({\bm{v}}_{k}^{\mathrm{obs}}\cdot\hat{{\bm{x}}}_{k}\right)\frac{\partial\hat{{\bm{x}}}_{k}}{\partial\breve{q}_{i}}\\
&=\frac{1}{\|{\bm{x}}_{k}\|}\left[{\bm{v}}_{k}^{\mathrm{obs}}\cdot\left(\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}-\left(\hat{{\bm{x}}}_{k}\cdot\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}\right)\hat{{\bm{x}}}_{k}\right)\right]\hat{{\bm{x}}}_{k}\\
&\quad+\frac{1}{\|{\bm{x}}_{k}\|}\left({\bm{v}}_{k}^{\mathrm{obs}}\cdot\hat{{\bm{x}}}_{k}\right)\left[\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}-\left(\hat{{\bm{x}}}_{k}\cdot\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}\right)\hat{{\bm{x}}}_{k}\right]\\
&=\frac{1}{\|{\bm{x}}_{k}\|}\left[\left({\bm{v}}_{k}^{\mathrm{obs}}\cdot\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}\right)\hat{{\bm{x}}}_{k}-2\left({\bm{v}}_{k}^{\mathrm{obs}}\cdot\hat{{\bm{x}}}_{k}\right)\left(\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}\cdot\hat{{\bm{x}}}_{k}\right)\hat{{\bm{x}}}_{k}+\left({\bm{v}}_{k}^{\mathrm{obs}}\cdot\hat{{\bm{x}}}_{k}\right)\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}\right]\,.
\end{split}

This expression has the same structure as the Zel’dovich term in equation [, except for the first (“A”) term. The latter disappears because {\bm{v}}^{\mathrm{obs}} is constant, yielding:

\frac{\partial{{\bm{\Psi}}_{s}}_{k}^{\mathrm{obs}}}{\partial\breve{q}_{i}}\equiv\frac{1}{Ha}\left[\left(B^{\mathrm{obs}}_{ki}-2C^{\mathrm{obs}}_{k}A^{\mathrm{obs}}_{ki}\right)\hat{{\bm{x}}}_{k}+C^{\mathrm{obs}}_{k}\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}\right]\,,

where we define

A^{\mathrm{obs}}_{ki}\equiv\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}\cdot\hat{{\bm{x}}}_{k}\,,\qquad
B^{\mathrm{obs}}_{ki}\equiv\frac{1}{\|{\bm{x}}_{k}\|}{\bm{v}}^{\mathrm{obs}}_{k}\cdot\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}\,,\qquad
C^{\mathrm{obs}}_{k}\equiv\frac{1}{\|{\bm{x}}_{k}\|}{\bm{v}}^{\mathrm{obs}}_{k}\cdot\hat{{\bm{x}}}_{k}\,.

The similarity between the expressions reveals that the observational term has the same behavior of non-radial contributions near the observer as the Zel’dovich term.

\subsection{Plane parallel approximation}

The above two terms can be greatly simplified by taking a plane parallel approximation of redshift space, effectively putting the observer at an infinite distance. The \frac{1}{\|{\bm{x}}\|} factor then makes the B and C terms go to zero, as well as the full observational term of the derivative of {{\bm{\Psi}}_{s}} in equation [.

For the Zel’dovich model in plane parallel redshift space, we are left with

\left.\frac{\partial{{\bm{\Psi}}_{s}}_{k}}{\partial\breve{q}_{i}}\right|_{\mathrm{plane~parallel}}=\frac{\partial{{\bm{\Psi}}_{s}}^{(1)}_{k}}{\partial\breve{q}_{i}}=f^{(1)}\left(\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}\cdot\hat{{\bm{x}}}_{k}\right)\hat{{\bm{x}}}_{k}\,.

It is good to point out that {{\bm{\Psi}}_{s}} and {{\bm{\Psi}}_{s}}^{(1)} are intrinsically different. Only in this specific case are their derivatives equal, as the observational part is equal to zero.

Since the general derivation is given above, we can restrict our studies in the remainder of this work to the plane parallel approximation for simplicity, without loss of generality of the method.

Finally, we insert this back into the F^{\mathscr{L},\mathrm{rss}} term of equation [:

F^{\mathscr{L},\mathrm{rss}}_{i}=-\sum_{k}{\bm{V}}_{k}\cdot f^{(1)}\left(\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}\cdot\hat{{\bm{x}}}_{k}\right)\hat{{\bm{x}}}_{k}=-f^{(1)}\sum_{k}{\bm{V}}^{r}_{k}\cdot\frac{\partial{\bm{\Psi}}_{k}}{\partial\breve{q}_{i}}\,,

where we define {\bm{V}}^{r}_{i} as

{\bm{V}}^{r}_{i}\equiv({\bm{V}}_{i}\cdot\hat{{\bm{x}}}_{i})\hat{{\bm{x}}}_{i}\,.

We are left with almost the same result as in equation 3.27 of \citetthesis, the only difference being the replacement of {\bm{V}} with f^{(1)}{\bm{V}}^{r}. We can then conclude that in order to reuse equation [, we merely need to replace h({\bm{q}}_{m}) by h^{\mathrm{rss}}({\bm{q}}_{m}), defined as

h^{\mathrm{rss}}({\bm{q}}_{m})\equiv\nabla^{-2}{\bm{\nabla}}\cdot({\bm{V}}_{m}+f^{(1)}{\bm{V}}^{r}_{m})\,.

This results in:

F^{\mathscr{L}}_{i}=\sum_{m}h^{\mathrm{rss}}_{m}\frac{\partial\vartheta_{m}}{\partial\breve{q}_{i}}\,.

In the implementation of this algorithm it is convenient to choose the direction of the observer parallel to one of the coordinate axes. In this way, \hat{{\bm{x}}} only changes one component of {\bm{V}}. This modification can easily be implemented in a pre-existing HMC sampling code.
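As an illustration of this final plane parallel form: on a periodic grid, h^{\mathrm{rss}} can be obtained with FFTs by boosting the line-of-sight component of {\bm{V}} by a factor (1+f^{(1)}), taking the divergence in Fourier space and applying the inverse Laplacian. The sketch below is an illustrative re-implementation in Python, not barcode's C++ code:

\begin{verbatim}
import numpy as np

def h_rss(V, f1, L, los=2):
    """h^rss = inverse Laplacian of div(V + f1 * V^r), with V^r
    the line-of-sight component of V (axis `los`, plane parallel).

    V : array of shape (3, N, N, N) on a periodic box of size L.
    """
    N = V.shape[1]
    k1 = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k = np.stack([kx, ky, kz])
    k2 = (k**2).sum(axis=0)

    W = V.copy()
    W[los] *= 1.0 + f1   # V + f1 * V^r boosts the LOS component

    # divergence in Fourier space: i k . W_k
    div_W = (1j * k * np.fft.fftn(W, axes=(1, 2, 3))).sum(axis=0)

    # inverse Laplacian: divide by -k^2, dropping the zero mode
    with np.errstate(divide="ignore", invalid="ignore"):
        h_k = np.where(k2 > 0, -div_W / k2, 0.0)
    return np.fft.ifftn(h_k).real
\end{verbatim}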

\section{Mock observations and reconstructions}

Our novel approach towards integrating redshift space in the Hamiltonian solver code needs to be validated. The question of whether the code converges correctly towards the true Lagrangian density and in what way it diverges from it (if differently than in the regular Eulerian space treatment of previous works) will be answered in Sections [ and [. To this end, we sampled a set of reconstructions based on mock observed density distributions. We describe and illustrate the mocks and reconstructions in the rest of this section.

Figure 2: From true Lagrangian field (top left) to true Eulerian field (center left). From there we can go two ways: either directly to mock observational input field in Eulerian space as we did in \citetthesis (center right) or via true redshift space field (bottom left) to mock observational input field in redshift space (bottom right).

In our numerical experiments we use four categories of parameters, detailed in the following subsections:

  1. cosmological (§ [),

  2. statistical (§ [),

  3. astronomical (§ [),

  4. numerical (§ [).

\subsection{Cosmological parameters}

For the cosmological parameters we use the maximum likelihood results from the WMAP 7-year observations \citepkomatsu11.\footnote{For more up to date parameters, see \citet2018arXiv180706209P.} The relevant parameters for barcode are given in table 1.

Table 1: Cosmological parameters used in this work. These are results from the WMAP 7-year observations \citepkomatsu11, except for the box size.

\begin{tabular}{l|cc}
 & value & description \\
\hline
\Omega_{m} & 0.272 & Matter density parameter \\
\Omega_{b} & 0.046 & Baryonic matter density parameter \\
\Omega_{\Lambda} & 0.728 & Dark energy (DE) density parameter \\
w & -1 & DE equation of state parameter \\
n_{s} & 0.961 & Primordial power spectrum power-law index \\
w_{a} & 0 & DE linear time dependency parameter \\
\sigma_{8} & 0.807 & Density field fluctuation rms at 8\,h^{-1}\mathrm{Mpc} \\
h & 0.702 & Hubble parameter (units of 100\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}) \\
L & 1250 & Comoving box size in h^{-1}\mathrm{Mpc} \\
\end{tabular}

The cosmological parameters also determine the power spectrum. We use CAMB \citepcamb to generate the power spectrum which is used for the prior computation in barcode. We consider cubical volumes of 1.95\,h^{-3}\mathrm{Gpc}^{3}, i.e. a box side of 1250\,h^{-1}\mathrm{Mpc}. The Zel’dovich structure formation model is applied to transform from Lagrangian to Eulerian coordinates at z=0.

\subsection{Statistical parameters}

barcode has a number of parameters to tune HMC and some choices of statistical models. The results in this paper use the following statistical parameters:

Leap-frog step size \epsilon: We use an adaptive step size (Appendix [). It is bounded to be between 0 and 2, but within those limits it is adapted to give an acceptance rate of 0.65\pm 0.05 (calculated over the 50 previous steps); a sketch of such an adaptation rule follows after this list.

Number of leap-frog steps per iteration N_{\epsilon}: The number of leap-frog steps is randomly drawn from a uniform distribution of integers between 1 and 256. For Section [ the maximum was increased to 4096 to assess performance as a function of this parameter.

Hamiltonian mass: We use the inverse correlation function type mass (see \citetthesis for more details):

\hat{\mathscr{M}}=1/P(k)\,.

Likelihood: We use a Gaussian likelihood as described in equation [. For this work, we set \sigma_{i} and w_{i} to 1 for all cells, effectively removing these parameters from the algorithm.
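The following sketch illustrates an acceptance-rate driven adaptation of \epsilon of the kind described in the list above. The multiplicative update rule is our own illustrative choice; barcode's exact update scheme may differ:

\begin{verbatim}
import numpy as np

def adapt_epsilon(eps, accepted, target=0.65, band=0.05,
                  window=50, eps_max=2.0, factor=1.02):
    """Nudge the leap-frog step size towards the target
    acceptance rate, keeping it within (0, eps_max].

    `accepted` is a sequence of booleans for past iterations.
    """
    rate = np.mean(accepted[-window:])
    if rate > target + band:    # accepting too often: larger steps
        eps = min(eps * factor, eps_max)
    elif rate < target - band:  # rejecting too often: smaller steps
        eps = eps / factor
    return eps
\end{verbatim}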

\subsection{Astronomical parameters}

Mock observed density field: The algorithm is based on the comparison of an observed density field with sampled fields from the posterior PDF. For the runs in this study, we generated the mock observed fields in redshift space (instead of in Eulerian space, as we did in \citetthesis) as follows. First, we generate a Gaussian random field given the cosmological power spectrum. This Lagrangian space field is mapped to an Eulerian space density field at z=0, using the assumed structure formation model. The Eulerian space field is, in turn, transformed to redshift space on the basis of the ballistic velocities according to the Zel’dovich approximation. Finally, to simulate actual data, we add random noise to the redshift space density field. The noise is drawn from a Gaussian with zero mean and \sigma_{i} standard deviation.

Note that since our likelihood is Gaussian, we must also allow negative densities to be produced by the noise. In \citetthesis, we truncated negative values so as not to generate unphysical negative masses. This turned out to be the cause of the power deficiency in the reconstructions we discussed at length in that work. In this work, we show the potential of the formalism, including its ability to converge on its target, which is why we allow unphysical negative densities in order to satisfy our model choice of a Gaussian likelihood. In a more realistic use case, using real observed data, one would have to choose an error model that does not allow negative values, like a gamma or log-normal distribution.
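The mock generation pipeline can be summarized in a short sketch. It is illustrative only: it uses nearest-grid-point mass assignment instead of barcode's SPH kernel, a plane parallel observer along the z-axis, and glosses over Fourier normalization conventions:

\begin{verbatim}
import numpy as np

def make_mock(P, N, L, f1, sigma_noise, seed=0):
    """Gaussian field -> Zel'dovich -> redshift space -> noise."""
    rng = np.random.default_rng(seed)
    k1 = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2

    # 1. Gaussian random field delta(q) with prior spectrum P(k)
    white = np.fft.fftn(rng.standard_normal((N, N, N)))
    kk = np.sqrt(k2)
    kk[0, 0, 0] = 1.0                 # avoid P(0); removed below
    delta_k = white * np.sqrt(P(kk) / L**3)
    delta_k[0, 0, 0] = 0.0

    # 2. Zel'dovich displacement Psi = -grad(Phi), Lap(Phi) = delta
    with np.errstate(divide="ignore", invalid="ignore"):
        phi_k = np.where(k2 > 0, -delta_k / k2, 0.0)
    psi = [np.fft.ifftn(-1j * ki * phi_k).real
           for ki in (kx, ky, kz)]

    # 3. displace grid particles; plane parallel RSD along z
    q = np.indices((N, N, N)) * (L / N)
    x = [qi + pi for qi, pi in zip(q, psi)]
    x[2] = x[2] + f1 * psi[2]         # s_z = x_z + v_z / (Ha)

    # 4. NGP density estimate (barcode uses SPH splines instead)
    cells = tuple(np.rint(xi / (L / N)).astype(int) % N for xi in x)
    rho = np.zeros((N, N, N))
    np.add.at(rho, cells, 1.0)
    rho = rho / rho.mean() - 1.0      # density contrast

    # 5. Gaussian observational noise
    return rho + rng.normal(0.0, sigma_noise, rho.shape)
\end{verbatim}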

The lower right panel of figure 2 displays the resulting noisy mock observed field (the panel above it shows the corresponding mock field used in a model where density is defined in regular comoving coordinates instead of redshift space).

For our comparison study, two types of runs were set up, in addition to the runs from \citetthesis (in this work referred to as “regular” runs, i.e. runs with data and the model in regular comoving space):

  1. In the first, we used the same algorithm as the runs in \citetthesis, not taking redshift space into account; we refer to these as “obs_rsd” runs.

  2. For the second set of runs, the redshift space formalism from Section [ was used; we call these the “rsd” runs.

In figure 3 we show the power spectra of the density fields from figure 2. The redshift space spectrum clearly reflects the Kaiser effect: a boost of power at low k and a deficiency at higher k.

Figure 3: Power spectra of the density field in Lagrangian space, Eulerian space and redshift space, and of the mock observational density fields that are used as input (in Eulerian or redshift space). The redshift space spectrum peaks above the Eulerian space one, by a factor called the Kaiser factor \citepkaiser87. The mock observational density fields have a high-k tail that represents the Gaussian noise added on top of them.

Window / mask: The window parameter w_{i} in the likelihood was not used in this work, i.e. it was set to 1 for all cells i.

Starting configuration: To start the MCMC random walk, we need to provide a starting point, or “initial guess”, in the N_{x}^{3}-dimensional sampling space. We supply the code with a non-informative guess about the cosmic web structure: a field with value zero in every grid cell. This is also the way to run the code using real observations.

\subsection{Numerical parameters}

We run with a resolution of L/N_{x}=9.766\,h^{-1}\mathrm{Mpc}, corresponding to a grid of N_{x}=128 cells in each direction and a box side of L=1250\,h^{-1}\mathrm{Mpc}. The total number of cells is N=2097152. In \citetthesis we studied computations with L/N_{x}=3.125\,h^{-1}\mathrm{Mpc}. The Zel’dovich approximation as a structure formation model correlates well with a full N-body simulation above scales of \sim 10\,h^{-1}\mathrm{Mpc}, which is why for this work we chose a larger grid cell size.

We used three seeds to initialize the random number generator in the code. Each of these seeds was used for a separate code run, giving separate, differently evolving chains.

We output the field realizations every 50 steps of the chain. This ensures that the saved samples are sufficiently far apart; in fact, it is a bit overcautious, because when the chain performs well, the HMC algorithm should already make sure that subsequent samples are uncorrelated (indeed, the likelihood panel in figure 14 shows that this is the case). In the statistics of the following sections (means and standard deviations of reconstructions) we use 101 samples: the range from sample 500 to 5500.

We used the SPH density estimator (Section [). We set the SPH scale parameter equal to the cell size L/N_{x}=9.766~{}h^{-1}\mathrm{Mpc}.
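For reference, a commonly used spline of this type is the cubic M4 kernel with compact support r < h_{s} (see Appendix [ for the exact kernel definition used in barcode):

\begin{verbatim}
import numpy as np

def sph_cubic_spline(r, h):
    """Cubic spline kernel in 3D, normalized over its support
    r < h (Monaghan-style; one common convention of several)."""
    q = np.asarray(r) / h
    w = np.where(q < 0.5,
                 1 - 6 * q**2 + 6 * q**3,
                 np.where(q < 1.0, 2 * (1 - q)**3, 0.0))
    return 8.0 / (np.pi * h**3) * w
\end{verbatim}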

\subsection{Reconstructions}
Figure 4: Redshift space density fields. Top left: the true field. Top right: the mean of the sampled fields. Bottom: the standard deviation of the samples.

First, in figure 4 we compare the mean of the sampled fields to the true field, now in redshift space coordinates {\bm{s}}. The mean reconstruction in redshift space shows a high degree of similarity to the true field’s configuration, both in the density values and in the shapes of the web structures. The variance around the mean (bottom panel) reflects the propagation of data, model and systematic errors embodied by the posterior model of the cosmic densities that we sample. As is the case when sampling densities in regular comoving coordinates (see e.g. \citetthesis, figures 3.14 and 3.15), the highest variance is found in the high density regions, reflecting the higher number statistics of the particles that cluster to form these regions. Conversely, the low number statistics in underdense regions cause the voids in the reconstructions to be dominated by random fluctuations from the prior. These fluctuations average out in the mean field. The underdense voids, hence, contain less structure in the reconstructions’ mean than in the true field.

Figure 5: Eulerian versus redshift space density fields. Left: true fields. Right: mean of samples. Top: Eulerian space. Bottom: redshift space.

In figure 5 we compare the redshift space results to the regular Eulerian space density (we also show the true densities, for convenience). The coherent inflow part of the redshift space distortions causes expected features along the line of sight in redshift space: squashing and enhancement of density contrast for overdense large scale structures and a stretching of underdense voids.\footnote{These features may be hard to appreciate from static figures. We provide an animated GIF at http://egpbos.nl/rsd-eul__rsd-rss/ for more convenient comparison of the right-hand side panels of figure 5.} These effects are quantified in the power spectrum, which in redshift space shows the Kaiser effect (figure 3; see also figure B3 of Appendix B, which illustrates the imprint of the Kaiser effect on reconstructions that were not corrected for redshift space distortions), and the two-dimensional correlation function, further discussed in Section [.

\section{Results}

In this section we will show the benefits of using our self-consistent redshift space (RSS) Zel’dovich model over a regular Zel’dovich model. When properly dealt with, the anisotropies caused by RSDs can be modeled in the data and hence eliminated from the reconstruction. This enables reconstruction of the mean density field based on redshift space data. By directly incorporating RSDs in the formalism, ad-hoc corrections for RSDs are no longer necessary.

From an algorithmic point of view, the most pressing question is whether the code does converge correctly towards the true Lagrangian density. Also, we want to characterize in what way the samples diverge from it.

One might wonder what would happen if redshift space data are used without a redshift space model. In Appendix B we illustrate the non-negligible impact of the redshift space effects in such a naive approach. This clearly motivates the need for the model used in our approach.

\subsection{Density fields}
Figure 6: Mean sample Lagrangian densities (top right) compared to true (top left), together with std (bottom left) and the difference of the true and mean fields (bottom right); with RSD model.

We first inspect the match of the Lagrangian density obtained by the chain samples, after burn-in, to the true underlying density. Comparing the true field to the ensemble average in the top panels of figure 6, we find a good match in terms of the large scale structures. Many high peaks and low troughs have their counterpart in the samples, even though their exact shapes and heights do consistently differ. In some cases peaks or troughs are combined, in some they are split and in some cases random peaks or troughs are added that should not exist at all.

We should note that these kinds of fluctuations are to be expected. In the individual samples, such fluctuations will be even more pronounced (as we discuss in Section [). In the mean of the samples, the fluctuations around the true density field are expected to average out to a great extent. However, it is highly unlikely that the mean field will ever exactly match the true field. This is prevented by the uncertainties that are inherent to all the processes involved in the posterior.

The algorithm not only manages to reconstruct the structures, it also works well quantitatively: the difference between true and reconstructed density is typically on the same order of magnitude as the values themselves. It leads to a good reconstruction of the density distribution function, but still allows for significant variation. The standard deviation of sampled densities is typically around 1 (bottom left panel of figure 6). Using the RSS model gives a very comparable amount of variation to the case where one does not need it, as can be seen by comparing to figure B2. In principle, given the added uncertainty in spatial mass distribution configurations along the line of sight introduced by redshift space distortions, one might expect the variation around the mean to be smaller when using the RSS model. Stated differently, the degeneracy of the z-axis could lead to the MCMC chain “leaking” some of its movement in the statistical coordinate space \breve{q} to velocity space. We see no such effect here, which may indicate that our chosen HMC parameters, in particular N_{\epsilon}, cause a sufficiently uncorrelated sampling process for both cases. We mention this here explicitly, because in our previous work on higher resolution runs \citepthesis we did see a difference in sample variability between runs with and without the RSS model. The sample variation is further discussed in Section [.

Figure 7: Mean sample Eulerian densities (top right) compared to true (top left), together with std (bottom left) and the difference of the true and mean fields (bottom right); with RSD model.

Following gravitational evolution, the corresponding Eulerian density fields reveal an evolved web. Our analysis therefore also allows a similarity inspection of the prominent web-like mass distribution features in the Cosmic Web. On the basis of figure 7, it becomes clear that this model performs very well in reconstructing the large scale features that we are interested in.

As expected, there is still some difference, as seen in the bottom right panel. With the redshift space distortions included in the reconstructions, the outstanding z-directionality or bi-polarity that can be noted in the uncorrected process (see figure B2 of Appendix B) has fully disappeared. Moreover, the variation seems evenly spread out over the entire field. If the variation were caused by redshift space distortions, one would expect a concentration of differences around clusters and “great walls”. As expected, compared to the Lagrangian case, the standard deviation of the samples is somewhat lower, but of the same order of magnitude.

Figure 8: True Lagrangian power spectrum compared to the mean and variance of the sampled density fields’ power spectra with RSD model. Note that the difference plot (bottom panel) uses samples from three different runs with different true fields, while the power spectrum plot itself (top panel) is only from one true field.

Looking further at the sample power spectra in figure 8, we find an excellent match to the expected true spectrum. The difference of the mean to the true spectrum in the bottom panel shows that the Kaiser effect has been completely accounted for. The deficiency in the reconstructed power, peaking at k=0.2h\mathrm{Mpc}^{-1}, that we discussed in \citetthesis is no longer present here, due to allowing the mock observed density distribution to have values \rho<0, so as to properly match the Gaussian likelihood (see Section [).

\subsection{The two-dimensional correlation function}

Because of the anisotropy of redshift space, RSD effects show up more clearly in directional statistics like the 2D correlation function \xi(\sigma,\pi) (see Appendix [ for more detail). In \xi(\sigma,\pi), the three spatial directions are collapsed into two. One uses the directions along and perpendicular to the line of sight. Anisotropies in the data will show as a deviation from a perfectly circular \xi(\sigma,\pi). RSDs will thus have a marked signature, see e.g. \citethawkins03,peacock01b,guzzo08.
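For completeness, the sketch below shows one way to estimate \xi(r_{\perp},r_{\parallel}) from a periodic density contrast grid via the FFT autocorrelation theorem. It is an illustrative estimator, not necessarily the implementation behind our figures:

\begin{verbatim}
import numpy as np

def xi_2d(delta, L, n_bins=50, los=2):
    """Bin the FFT autocorrelation of a periodic density
    contrast grid by separation across (r_perp) and along
    (r_par) the line of sight (axis `los`)."""
    N = delta.shape[0]
    dk = np.fft.fftn(delta)
    xi_grid = np.fft.ifftn(dk * np.conj(dk)).real / delta.size

    # periodic separation of each cell from the origin
    d1 = np.minimum(np.arange(N), N - np.arange(N)) * (L / N)
    seps = np.meshgrid(d1, d1, d1, indexing="ij")
    r_par = seps[los]
    r_perp = np.sqrt(sum(s**2 for i, s in enumerate(seps)
                         if i != los))

    # bin averages; separations beyond L/2 fall in the last bin
    edges = np.linspace(0.0, L / 2, n_bins + 1)
    ip = np.clip(np.digitize(r_perp, edges) - 1, 0, n_bins - 1)
    il = np.clip(np.digitize(r_par, edges) - 1, 0, n_bins - 1)
    xi = np.zeros((n_bins, n_bins))
    counts = np.zeros((n_bins, n_bins))
    np.add.at(xi, (ip, il), xi_grid)
    np.add.at(counts, (ip, il), 1.0)
    return xi / np.maximum(counts, 1.0)
\end{verbatim}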

Figure 9: Eulerian space 2D correlation functions: true (left) vs ensemble mean (center). The right-hand panel shows the two functions in contours: solid black for the true function and dashed green for the ensemble mean.

The two-dimensional correlation function \xi(r_{\perp},r_{\parallel}) of the reconstructions matches the true one when using the redshift space model. This indicates that the large scale redshift space anisotropies have been eliminated.\footnote{We use the notation \xi(r_{\perp},r_{\parallel}) instead of \xi(\sigma,\pi), replacing \sigma=r_{\perp} and \pi=r_{\parallel}.} In figure 9 we compare the correlation functions of the ensemble mean of chain samples and the true field. The difference between them, \xi(r_{\perp},r_{\parallel})_{\mathrm{true}}-\xi(r_{\perp},r_{\parallel})_{\mathrm{mean}}, is at most \sim 0.001, with the mean difference being \sim 0.000013.

Figure 10: Five sample Eulerian 2D correlation functions minus the mean of all reconstructions, compared to the true field in the top left panel.
Figure 11: Sample Eulerian 2D correlation functions compared to true: the first three rows are based on three different true Lagrangian density fields, while the last row contains the average of the three runs. The left-hand column shows the true fields, the middle column the mean of the ensemble and the right-hand column compares the two. The ensemble mean is represented in dashed green and the true field in solid black lines. These images clearly illustrate the role of cosmic variance in the anisotropy of the field.

The nature of the MCMC sampling process itself introduces additional anisotropies in the samples. No single realization will perfectly match the 2D correlation function of the true field. This sampling effect is illustrated in figure 10, where we show the difference of the correlation functions of 5 individual realizations with that of the mean of all realizations. The differences are very small, on the order of \delta\xi/\xi\sim 10^{-3}. To account for this case-by-case variation, we only look at sample ensembles instead of just at single samples.

Note that owing to cosmic variance, some amount of anisotropy is always expected: a finite-sized box is never perfectly isotropic. This fact is illustrated in figure 11. The first three rows are each based on a different true Lagrangian density field, generated with a different random seed. It is immediately apparent from their deviation from perfect circularity on larger scales that the true fields (left column) contain a considerable amount of anisotropy. To account for this effect, one should always compare to the true field and take its anisotropies into consideration; indeed, it can be seen that the true and reconstructed anisotropies match well. Alternatively, one could run a large number of chains with different random input fields and average out their statistics: for the entire Universe we expect perfect isotropy, reflecting the cosmological principle. Even averaging over only three different chains does not suffice, as may be appreciated from the bottom row of figure 11.

\@xsect

Some questions remain regarding the performance of the algorithm with our new model, in terms of both speed and accuracy. Here we address how the chain evolves compared to the case without RSDs presented in \citetthesis, and whether any imprints of the RSDs remain.

\@xsect

As demonstrated above, the variation of samples around the mean seems similar to that obtained with a regular Eulerian space model. One could have expected the algorithm to have somewhat more trouble evolving the MCMC chain in the redshift space case. However, as argued above, it seems that in our situation, the leap-frog step size was sufficiently large for the chain to orbit similar volumes of posterior parameter space in both cases.

(a) Real space, with RSD model
(b) Real space, without RSD model
(c) Redshift space, with RSD model
Figure 12: Zoom-in on Eulerian real and redshift space density fields of iterations 500–3000 (top six panels) and the true field (bottom panel): with and without RSD model.

In figure 12 we take another close-up look at the evolution of the chain. The same region of the reconstructed Eulerian density field is shown for six different iterations of the chain in figure 12(a). These can be compared to the true density field in the bottom panel. We see here the stochastic sampler at work. The main large scale features can be identified in all samples, like the very massive peak in the bottom right corner and the filamentary structure in the top left. The massive peak is rather stable, with some additional protrusions sometimes being formed at random. The filaments, however, undergo major variations in shape, orientation and connectedness.

We compare this behavior to that of a regular space run in figure 12(b) (see Appendix B for details on this run). While there are clear differences with the redshift space corrected runs of figure 12(a), the results do not seem qualitatively different by eye in terms of the amount of structural variation, in line with what we discussed above. At the same time, we see deeper reds in clusters and deeper blues in voids in figure 12(b) than in the corrected runs. This is because the uncorrected runs reproduce features that look like redshift space distortions, such as an enhanced density contrast, but in real space. The enhanced contrast can also be seen in the redshift space zoom-ins of figure 12(c), which corroborates this interpretation. The uncorrected runs are discussed in more detail in Appendix B.

\@xsect\@xsect
Figure 13: Likelihood during the burn-in phase of six chains. Results are shown for the “regular” Eulerian space model in solid blue lines and the redshift space (“rss”) model in dashed red lines. For each model, the three lines correspond to runs with three different true Lagrangian initial fields, generated with three different random number generator seeds.

When trying to analyze chain mixing speed, one obvious place to begin is the burn-in phase. In figure 13 we compare the likelihood of three different chains for each of the two models: the Zel’dovich model in Eulerian space and in redshift space. From this we conclude that burn-in is not noticeably slower: the runs converge equally fast in Eulerian and in redshift space. Defining step 150 as the burn-in cut-off is therefore acceptable for the redshift space model as well.

\@xsect
Figure 14: Chain parameter evolution vs iteration number (horizontal axis). The iteration number includes both accepted and rejected steps. The top panels show the likelihood (right panel with the median of the likelihood subtracted for easier comparison), bottom left panel shows \epsilon, bottom right shows N_{\epsilon}. Regular runs in solid blue shades, redshift model runs in dashed red shades.

In figure 14 we show chain parameters for the whole chain (after burn-in). Note that the iteration number on the horizontal axis includes both accepted and rejected steps. The variation in the likelihood can be used as a measure of the variability of the chain, as it compares the sample to the input field. Two Eulerian space and two redshift space chains are shown in these figures.

The variation in likelihood is nearly identical for the regular and redshift space chains, with a standard deviation of slightly over 400 for both chains. This is more directly illustrated in the top right panel, where we subtract the median value of the chain, so that the actual variation around the median can be compared for the different chains. The random patterns are statistically indistinguishable.

In the bottom panels we additionally show \epsilon (bottom left) and N_{\epsilon} (bottom right). Here, finally, the expected difference in chain performance between the regular and rss runs shows up: the leap-frog step size adapts to smaller values in the redshift space model runs. This means that the orbit through the posterior’s parameter space accrues larger integration errors, i.e. the Hamiltonian is more sensitive to the exact path taken. The redshift space model constrains the sampler less tightly, since it allows for more possible configurations along the line of sight than a regular space model. This makes it easier for the sampler to end up in a location that fits the true field less well than in the regular model case, which explains why a shorter leap-frog step may be necessary in the redshift space model.
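The adaptation itself can be as simple as nudging \epsilon towards a target acceptance rate. The following sketch shows one generic acceptance-driven rule (illustrative only; this is not necessarily the adaptive scheme implemented in barcode, and all names and values are ours):

    def adapt_epsilon(epsilon, accepted, target=0.65, gamma=0.05):
        # Generic acceptance-driven step size adaptation: grow epsilon after
        # an accepted step, shrink it after a rejection, with asymmetric
        # factors so that the acceptance rate settles near `target`.
        if accepted:
            return epsilon * (1.0 + gamma * (1.0 - target))
        else:
            return epsilon * (1.0 - gamma * target)

At equilibrium the expected logarithmic change per step vanishes exactly when the acceptance rate equals the target; a rule of this kind therefore drives \epsilon down automatically when, as here, the redshift space posterior is harder to integrate accurately.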

\@xsect

In \citetthesis, we found that for runs with L=200~{}h^{-1}\mathrm{Mpc} and N_{x}=64, increasing the number of leap-frog steps N_{\epsilon} beyond 256 improved the performance of the chain, in the sense that its samples covered a larger part of the posterior’s parameter space. We tried a higher value in this work as well, to test whether we had reached optimal chain performance.

Figure 15: Chain parameter evolution vs iteration number (horizontal axis) for different N_{\epsilon} values. The iteration number includes both accepted and rejected steps. The top panels show the likelihood (right panel with the median of the likelihood subtracted for easier comparison), bottom left panel shows \epsilon, bottom right shows N_{\epsilon}. Regular run in solid blue, redshift model runs (including the two with a higher N_{\epsilon}) in dashed red shades.

In figure 15, we show the results of an additional redshift space model run with a higher N_{\epsilon} value of 4096. This significantly higher value does not make any noticeable difference in the chain evolution parameters. It does make the code slower, since the computational cost scales linearly with N_{\epsilon}. We conclude that 256 leap-frog steps per iteration are plenty for proper chain performance.

\@xsect
Figure 16: Histogram of the density of the true Eulerian density field (horizontal axis) versus the corresponding density in the ensemble average of the chain (vertical axis), compared in each grid cell. The redshift space model is shown in red, the regular model in blue.

Figure 16 shows, for each grid cell, the density of the true Eulerian density field versus the corresponding density in the ensemble average of the chain. The right panel shows the results for the redshift space model, the left panel for the Eulerian space model. These plots corroborate some of our previous findings. The variation (width of the distribution around the x=y line) is more or less the same, corresponding to the similar variation in the samples we saw before. Generally, the correspondence is good for both models.

\@xsect

We developed and tested a self-consistent method for the reconstruction of an ensemble of likely primordial density fields based on mock observed data in redshift space. We showed that this significantly improves the quality of the reconstructions: they are more isotropic and redshift space artifacts are removed. This contrasts with a naive application of an Eulerian space algorithm to redshift space data, which retains these features.

The novelty of our method relates to the explicit treatment of redshift space. We extend existing Hamiltonian MC methods \citepjaschewandelt13,wang13 to a slightly more realistic data model. The rationale is that pre-processing of the data should be kept to a minimum; properly modeling the data is always preferable. This way, all the possible interacting complexities in the data will be properly handled, whereas pre-processing might introduce additional noise and/or errors. For instance, to artificially remove Fingers of God one must make many simplifying assumptions, like sphericity of the clusters \citep[e.g.][]tegmark04,tempel12; simple, automated FoG removers will inevitably generate false positives and miss true positives. In our integrated redshift space approach, no such assumptions are needed. This is not only considerably easier, it also seems like the only way to consistently model all the complexities in the data.

Note that redshift distortions have been treated in a different way by \citetwang13. Their treatment of redshift distortions \citepwang09 is based on a linear approximation of the velocity field, which they use to correct for the positions of dark matter haloes. This is a pre-processing step that is done only once and in a unique way, before running their Hamiltonian sampler. In so doing, the intrinsic uncertainty from using redshift as a proxy for distance is circumvented rather than accounted for.

In the rest of this section, we discuss some of the (astro)physical and astronomical applications of our formalism. The method presented in this work does not yet model all complexities of the cosmic structure formation process; we touch upon this as well. In addition, a range of technical aspects and open questions will be addressed, and we discuss some further points pertaining to the novel redshift treatment.

\@xsect

Using the barcode framework, we aim to use cluster data as input and statistically study the Cosmic Web that forms in between. All the stochastic effects involved in the evolution of large scale structure and in the observations thereof can be self-consistently included. The analysis of the resulting ensemble of reconstructions should then be straightforward. This matter was explored in Chapter 5 of \citetthesis.

One point of attention when using this code is that it is biased: it is better tuned towards constraining high density regions than low density ones. Because many Lagrangian {\bm{q}} locations will end up in clusters on the real-space {\bm{x}} or {\bm{s}} grid, the real-space constraints in barcode are biased towards high density regions. This means that even though one might put constraints on \rho^{\mathrm{obs}}({\bm{s}}) in void-like regions, these will be poorly probed. In a future paper, we will explore this effect in more detail by trying to constrain voids.

Given our aim of using clusters as constraints, this circumstance is actually quite fortunate. One might, however, want to modify the algorithm to better suit void-region constraints. One possibility is to sample directly in Eulerian (volume) space {\bm{x}}, as opposed to Lagrangian (mass) space; this poses problems with the non-uniqueness of the translation from Eulerian to Lagrangian coordinates. Another option might be to use \sigma({\bm{x}}) to compensate for the voids: one could lower the variance in those regions, thus making the constraints tighter and more likely to be held in a random sample. However, this still leaves the possibility of intermediate fluctuations, given that the number of particles ending up in voids will be low. The effects of this will also be tested in a future study. This approach is consistent with that of \citetwang13, who define the variance as

\sigma({\bm{x}})=\mu\,\delta^{\mathrm{obs}}({\bm{x}})\,,

with \mu a constant; i.e. the dispersion is simply a linear function of the observed density.
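A minimal sketch of this variance model follows (the value of \mu and the variance floor are illustrative additions of ours; the floor prevents zero or negative dispersions in empty and underdense cells, where \delta^{\mathrm{obs}}\leq 0):

    import numpy as np

    def sigma_cells(delta_obs, mu=0.5, sigma_min=0.1):
        # Cell-wise likelihood dispersion proportional to the observed
        # density, sigma(x) = mu * delta_obs(x), floored at sigma_min so
        # that empty or underdense cells do not become infinitely tight
        # (or ill-defined) constraints.
        return np.maximum(mu * delta_obs, sigma_min)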

\@xsect

A number of improvements may be identified that would increase the accuracy, reliability and level of realism of our reconstructions. Looking towards future work, especially the application of the formalism to real observational data, we list here the most significant ideas for improvement.

\@xsect

When one wants to compare true density fields to fields derived with LPT, one of the first aspects encountered is the difference in one-point density distributions (PDFs). Perturbation theory approaches do not prevent particles from pursuing their initial tracks. Beyond shell crossing, this leads particles to overshoot the positions they would have taken had mutual gravitational interaction been fully accounted for. True gravitational clustering gives much higher peak density values than LPT, where this artificial shell crossing creates “fuzzy” clusters. A simple first-order correction would be to apply a rank-ordered substitution of density values: a transfer function from an “LPT Eulerian space” density to, for instance, an “N-body Eulerian space” density can be constructed. \citetleclercq13 did this by comparing the rank order of density values in LPT and N-body densities at z=0 (given the same initial conditions) and reassigning densities in one or the other such that the density value PDFs match after the procedure. This way, at least one is comparing apples to apples as far as first-order statistics are concerned.
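A sketch of such a rank-ordered remapping (assuming two density grids of equal size; names are ours, purely for illustration):

    import numpy as np

    def rank_order_remap(rho_lpt, rho_nbody):
        # Give each cell of the LPT field the density of the equally ranked
        # cell of the N-body field, so that the one-point PDFs match exactly
        # afterwards (cf. Leclercq et al. 2013).
        order = np.argsort(rho_lpt, axis=None)
        remapped = np.empty(rho_lpt.size)
        remapped[order] = np.sort(rho_nbody, axis=None)
        return remapped.reshape(rho_lpt.shape)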

\@xsect

We have shown that in the perfect situation of a model that exactly describes our (mock) reality, our algorithm almost perfectly reconstructs the desired mean field and its statistics. Although the models used describe cosmic structure well on large scales, they are far from perfect at describing structures in the non-linear regime. A few models exist that would give a more complete picture of gravitational evolution.

The current LPT (Zel’dovich) based framework is easily adapted to using second order LPT (2LPT) or ALPT \citepkitaurahess13. 2LPT has the advantage of matching the largest scales even better. At cluster scales, however, 2LPT is more affected by artificial shell crossing than the Zel’dovich approximation, leading to “puffy” structures. This can be fixed by combining 2LPT with a spherical collapse model on small scales, which is what ALPT accomplishes. Both models are fully analytical, so they can be implemented in the same manner as the Zel’dovich approximation described in this work. In Appendix [ we work out the equations necessary to extend barcode with 2LPT and ALPT structure formation models. A similar option would be to apply the Multiscale Spherical Collapse Evolution (MUSCLE) model \citep2016MNRAS.455L..11N; this analytical model was shown to perform slightly better than ALPT, especially when combined with 2LPT on large scales.
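Schematically, ALPT combines the two regimes by splitting the displacement field with a Gaussian kernel \mathcal{K} of smoothing scale r_{s} (our paraphrase of \citetkitaurahess13, with \circ denoting convolution):

\Psi_{\mathrm{ALPT}}({\bm{q}})=\mathcal{K}({\bm{q}},r_{s})\circ\Psi_{\mathrm{2LPT}}({\bm{q}})+\left[1-\mathcal{K}({\bm{q}},r_{s})\right]\circ\Psi_{\mathrm{SC}}({\bm{q}})\,,

so that the long-range part of the displacement follows 2LPT while the short-range part follows spherical collapse.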

It would be even better if we could use an N-body simulation as our structure formation model. \citetwang14 indeed for the first time successfully invoked a particle mesh N-body algorithm in this context. The particle mesh equations are analytical, so every single particle mesh evolution step can be differentiated with respect to the signal. By writing the derivative in the form of a matrix operator and combining subsequent particle mesh time steps by means of matrix multiplications, the full likelihood force can be derived analytically. By means of adjoint differentiation, the large matrices can be multiplied efficiently and the computational cost of this method stays within reasonable limits. The resulting model accuracy, using only 10 particle mesh steps, is remarkably high. When one needs high accuracy on small scales, this seems like the way forward. More recently, this approach was also adopted by \citet2018arXiv180611117J.

Possibly, the method of \citetwang14 could be augmented by using an N-body solver that also includes baryonic particles. Whether this is analytically tractable within an HMC framework remains to be investigated. Another interesting extension might be to employ the COLA method \citeptassev13,tassev15 as an alternative gravitational solver. COLA combines the LPT and particle mesh methods, trading accuracy at small scales for computational efficiency. It yields accurate halo statistics down to masses of 10^{11}\mbox{${\mathrm{M}}_{\odot}$}, which would be more than sufficient for cluster studies. In fact, the COLA method has already found uses in the context of Bayesian reconstruction \citepleclercq15,leclercq15thesis, but in these cases COLA was applied after the Bayesian reconstruction step, not self-consistently within the model as in the work of \citetwang14.

\@xsect

As, in practice, observations concern galaxies, stars and gas, instead of dark matter, it is of key importance to address to what extent the galaxy distribution reflects the underlying dark matter distribution \citep[see][for a recent review]2018PhR…733….1D. For the application to clusters observed in X-ray, SZ or via weak lensing, this is not directly necessary, since for clusters the biasing problem is far less severe. \citetschaller14 showed that the ratio of halo mass in N-body simulations with and without baryons is \simeq 1 from mass \simeq 10^{13.5}\mbox{${\mathrm{M}}_{\odot}$} upwards, dropping towards \simeq 0.8 for galaxies. Similarly, the abundance of cluster-sized haloes was shown to be similar in simulations with and without baryons. However, we might want to extend the algorithm towards a formalism capable of processing the galaxy distribution \citep[like e.g.][]wang13,leclercq15b.

A natural way to solve this would be to incorporate gas particles in the structure formation model. However, this would still be a mere approximation, due to the complexities of baryonic astrophysical processes. A more statistical approach would be to explicitly incorporate biasing prescriptions \citep[see e.g.][and references therein]ata15,kitaura15. We have implemented such models in barcode and will explore their use further in an upcoming work.

\@xsect

One of the more observationally oriented issues is that of masking and selection functions. In our general Gaussian likelihood (equation [), we included a weight factor w_{i}. This can be set to zero if nothing is known about cell i in the data, and to one if there is data. Another option is to set it to zero if the area was observed but found to be empty (at least down to the depth of the observation). Either of these constitutes a very basic form of masking.

Given the nature of observations, one could think of more advanced selection mask treatments. For instance, we should take into account the depth of a null-observation and the fact that this implies only an upper limit on the density, not an absolute absence of mass. One could implement this by setting w_{i}=0 if \rho^{\mathrm{obs}}<\rho^{\mathrm{cut}} for some cut-off density (per cell). A theoretically more sound model may be to use a Heaviside step function convolved with a Gaussian kernel (i.e. an error function, plus one, divided by two) with a step at the cut-off density \rho^{\mathrm{cut}}. This reflects the fact that there is some uncertainty about whether the density is above the cut-off value derived from the depth of our observation: the density is most likely below the cut-off, and quite unlikely to be much higher.

Such a selection function would contribute an additional \delta({\bm{q}})-dependent term to the Hamiltonian force. Its derivative is simply a 1D Gaussian, so it should be straightforward to derive and should add only minimal overhead to the algorithm.
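A sketch of such a weight and its analytic derivative (Python with scipy; the cut-off and width would be per-cell observational inputs, and all names are ours):

    import numpy as np
    from scipy.special import erf

    def mask_weight(rho, rho_cut, width):
        # Smooth selection weight: a Heaviside step at rho_cut convolved
        # with a Gaussian of dispersion `width`, i.e. (1 + erf(...)) / 2.
        return 0.5 * (1.0 + erf((rho - rho_cut) / (np.sqrt(2.0) * width)))

    def mask_weight_derivative(rho, rho_cut, width):
        # The derivative of the weight is the Gaussian kernel itself.
        u = (rho - rho_cut) / width
        return np.exp(-0.5 * u**2) / (np.sqrt(2.0 * np.pi) * width)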

These masking treatments will be explored further in an upcoming work.

\@xsect

Another parameter in our Gaussian model to which we have hardly given any attention is \sigma_{i}, the cell-wise dispersion of the data. It is highly likely that a more suitable value can be found than the one we used (1 for all cells). More generally, the statistical model, in the form of the prior and likelihood used in our formalism, might be improved upon. The prior seems well defined and encapsulates what it must in a proper way; the likelihood is a different story.

Other authors have mainly employed Poissonian data models \citepkitaura08,wang13,jaschewandelt13 instead of the Gaussian likelihood we used. The way to find out which model should be used is to meticulously map out the errors and systematics involved in the observations. One particular example is that of defining a likelihood model for an observational dataset of X-ray clusters. In this case, we have to take into account the following aspects (a sketch of the Poissonian photon-count term follows the list):

  • X-ray photon counts, which have Poissonian errors;

  • X-ray photon energies, which are also Poissonian due to CCD photon counts;

  • the propagation of photon errors into a \chi^{2} fit for the temperature and density profile parameters, e.g. using a \beta-model \citepking72,sarazin86,mulchaey00; if the confidence ranges of such parameters are estimated with bootstrapping, this could lead to complicated likelihood shapes for the density profile parameters;

  • many systematics arising from the use of certain fitting models for density estimation, like cool cluster cores which are not included in simple \beta-models \citep[e.g.][]1996A&A…305..756S;

  • the combination of the density/mass profile errors with errors in the distance estimators, e.g. from redshifts of the brightest cluster galaxies or some weighted average over a cluster’s galaxies.
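For the photon-count terms in particular, the appropriate contribution would be Poissonian rather than Gaussian. A minimal sketch of that negative log-likelihood (illustrative only, not a barcode component):

    import numpy as np
    from scipy.special import gammaln

    def neg_log_poisson(counts, expected):
        # -ln L for observed photon counts N_i given model expectations
        # lambda_i: sum_i [lambda_i - N_i ln(lambda_i) + ln(N_i!)].
        return np.sum(expected - counts * np.log(expected)
                      + gammaln(counts + 1.0))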

Our Gaussian likelihood was chosen as a general error model that is likely to be close to (but not equal to) many true data likelihood functions. This approach is, in fact, not uncommon in the literature on X-ray cluster density error estimation. \citeteckert11 estimate the error in the density profile of a cluster using a Monte Carlo approach, simulating 10^{8} realizations (assuming Poisson statistics) and taking the variation in values as errors. In \citetsamsing12, an MCMC sampling of the parameter space is done to characterize the probability function around the best fitting model; subsequently, the width of the projected PDF of every parameter is characterized by its root mean square, meaning that the errors on the parameters are essentially approximated as Gaussian, leaving out higher order PDF characteristics. For the NORAS survey, \citetboeringer00 assume statistically independent, Gaussian distributed errors, even though they state this is not 100% correct. Finally, many authors use the \chi^{2} statistic for determining the goodness-of-fit in their analyses; this statistic is also based on a Gaussian noise distribution.

The HMC algorithm is quite sensitive to deviations of the data model from the actual data errors. Using a wrong model can lead to poor reconstructions, as we found in \citetthesis: there, the power spectrum was not properly reconstructed, deviating from expected values by as much as 10% in some ranges. For this work, to show the potential of the redshift space correction model, we accommodated the Gaussian likelihood by allowing negative densities in the mock density fields, produced by adding Gaussian noise to the Eulerian redshift space density field derived from the “true” Lagrangian initial density field. In other use-cases, when reconstructions deviate significantly from expectations, this could be used as an indicator of model deficiencies.
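A sketch of this mock construction step (numpy; the seed is arbitrary, and the unit dispersion matches the value we used for all cells):

    import numpy as np

    def make_mock(rho_redshift_space, sigma=1.0, seed=42):
        # Mock observed field: the Eulerian redshift space density derived
        # from the true Lagrangian field, plus Gaussian noise. Negative
        # values are deliberately kept (not clipped), so that the mock is
        # consistent with the Gaussian likelihood.
        rng = np.random.default_rng(seed)
        return rho_redshift_space + rng.normal(0.0, sigma,
                                               rho_redshift_space.shape)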

\@xsect

The use of Lagrangian perturbation theory imposes two major limitations on a redshift space distortion treatment. Because LPT does not accurately capture the non-linearities of the velocity field, Fingers of God and triple-value regions are not accurately represented. However, just as the density clustering was augmented on small scales in the ALPT model, one could look for ways to augment LPT to better represent these non-linear features.

The virialized motion of FoGs is hard to model as a direct function of the signal (which is necessary for the derivative of P(s)). It is possible to fix this to some extent by adding a dispersion term to the velocity field model \citephess13,kitaura14. However, this implementation needs an extra stochastic step to sample from a Gaussian velocity distribution (the mean and dispersion of which depend on the local density). This breaks the philosophy of barcode, in which we sample from one posterior that integrates all the available models. It might be possible to find a way to add a similar model to our posterior. This is a topic of ongoing work.

Another possibility would be to split up particles in very high density regions into a distribution of smaller particles with a virialized velocity distribution. An easy way to identify such particles would be the spherical collapse criterion, i.e. the \vartheta=-3 lower limit. To simplify matters, one could collect clustered particles and collectively split them into one big virialized kernel. However, this would destroy any remaining substructure and shape.

The triple-value regions may require a similar augmentation to that of the virialized motion, though perhaps a simpler form suffices. It is possible that these regions are already partly modeled in ALPT; however, whether the ALPT velocities are also correct should be further investigated. If so, this would indeed solve the triple-value region problem.

\@xsect

In the relativistic limit, the correspondence between our theoretical redshift space definition and observational redshifts breaks down. It might be worth looking into whether a relativistic redshift space mapping could be fashioned. One would ideally also include in this the proper cosmological distance instead of the Hubble approximation that we usually make. Such a mapping may lead to rather small corrections on small scales. However, on the largest possible scales, quadratic and higher order terms must be taken into account, as they become dominant. Furthermore, one should start thinking about relativistic structure formation equations.

Within the limits of the framework described in this work, the above cosmological distance arguments are moot. A far more important consideration is that at some point the data used can no longer be assumed to be all in the same evolutionary state. Conceivably, LPT could be adapted to this: the calculation of a field’s evolved state at one point in time is just as easily done as at another. One could then imagine that each particle in the field is simply evolved to a different point in time, depending on where it ends up. The MCMC nature of the model should be able to iteratively find optimal values for this.

In Section [, we derived the redshift space equations necessary for an application in a fully non-plane-parallel redshift space. Subsequently, we neglected the additional terms, owing to our plane-parallel approximation. We have neither implemented nor tested the terms that the full treatment entails, but this can easily be done; the algorithm would then become fractionally, but not significantly, slower. For real galaxy redshift catalogs the plane-parallel approximation cannot be used: to reconstruct real Universe densities from such data, one must implement the full redshift space equations.

\@xsect

Galaxy surveys provide information on the luminous matter distribution in the form of galaxy or cluster catalogs. These are affected by selection effects, yielding an incomplete, biased and noisy sample of the underlying nonlinear matter distribution. In addition, the tracers are affected by their peculiar motions, causing redshift-space distortions (RSDs). A proper analysis of the cosmological large scale structure should account for all of these effects. To this end, Bayesian methods have been developed in past years connecting the observed data to the primordial density fluctuations, which summarize all the cosmic information for a given set of cosmological parameters and structure formation model.

In this work, we have presented barcode, an algorithm for the analysis of the large scale structure, which for the first time self-consistently solves for coherent RSDs within an analytical Bayesian framework. The contribution of this work in the context of Bayesian techniques lies in the analytical derivation of the Hamiltonian equations taking into account the transformation of the biased tracer to redshift space. We have presented its numerical implementation and a number of tests.

This method could be extended to deal with virialized motions by including a random term accounting for it in the equations, or by a prior fingers-of-god collapse, as suggested in a number of works.

Here we have chosen the grid-based Bayesian approach, which can easily account for the survey mask and geometry of a typical galaxy survey. We have restricted the numerical study to the Zel’dovich approximation, but have made derivations including the tidal field tensor within second order Lagrangian perturbation theory and small scale spherical-collapse-based corrections. This approach could also be extended, at the expense of a higher computational cost, to particle mesh solvers.

From the detailed analytical derivations and numerical tests here presented, we draw the following main conclusions:

  1. Using our self-contained redshift space model we can overcome large scale redshift space distortion effects in observations and reconstruct the true densities to a great degree of accuracy.

  2. When our model is not applied, but rather a naive model based purely on Eulerian real space is used, the redshift space distortions create an anisotropic imprint on the real space reconstructions (Appendix B). We propose to call this effect the Kaiser effect echo, or Kaiser echo for short.

  3. We have demonstrated that barcode yields unbiased initial density fluctuations from a biased tracer of the dark matter density field in redshift space in terms of the cell-to-cell correlation of the true and reconstructed density field, the respective power spectra and the 2D correlation functions.

  4. In particular, we find that the features of the power spectrum characterized by the cosmic variance of the considered volume are recovered in detail, solving for the Kaiser factor and beyond (as coherent flows also have an impact on non-linear RSDs), and that the 2D correlation functions are isotropized after applying our method.

  5. Component-wise scrutiny of the Hamiltonian likelihood force in redshift space (Section [) shows that it contains an unexpected non-radial term that points in the direction of the displacement field and has an amplitude that grows inversely with distance to the observer. This component may play a strong role when the plane parallel approximation is abandoned. This remains to be investigated.

  6. Our adaptive leap-frog time step method was shown to give good MCMC chain performance, yielding uncorrelated samples with a high, stable degree of structural variability, showing that the chain successfully explores the posterior parameter space of initial conditions that match the input (mock) data.

  7. The number of leap-frog steps per iteration of 256 is sufficient for good chain performance and may even be reduced slightly for this set of parameters (box size, resolution, etc.).

  8. The redshift space model constrains the sampler less strongly, which means the chain progresses through the posterior space slightly more slowly than with a regular space model. Our adaptive \epsilon scheme automatically takes care of this, given a large enough N_{\epsilon} value. When N_{\epsilon} is too small (as in \citetthesis), subsequent samples will be more strongly correlated and the sampler will cover a smaller part of the parameter space.

This code can find applications in a large variety of cosmological problems, such as baryon acoustic oscillation reconstruction or cosmic web reconstruction from the Lyman alpha forest, galaxy or cluster distributions. The code presented here, which is made publicly available (our source code under MIT license can be found at http://ascl.net and https://github.com/egpbos/barcode), thus has the flexibility to tackle realistic large-scale structure analysis from galaxy cluster survey data.

\@xsect

We are grateful for the opportunity offered by IAU Symposium 308 “The Zeldovich Universe” (June 2014) to present the details of the code and formalism described in this paper (the talk can be found at https://doi.org/10.5281/zenodo.1451723). PB and RvdW thank the Leibniz-Institut für Astrophysik Potsdam for the great hospitality during the various work visits that defined the basis for this publication. FSK thanks the Karl-Schwarzschild-Fellowship funding for allowing him to regularly invite PB to Potsdam in this period. PB further thanks his eScience Center managers Jisk Attema and Rob van Nieuwpoort for supporting the final editing of this manuscript during the past four years. We thank Jelle Kaastra for the encouragement and motivation given during (mainly the early, defining stages of) this project, and Johan Hidding, Maarten Breddels and Bernard Jones for many interesting and encouraging discussions. PB acknowledges support by the NOVA project 10.1.3.07. We would like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster. For the analysis and the figures created for this paper we used a number of scientific software libraries, most importantly: Matplotlib \citepHunter:2007 and Seaborn \citepmichael_waskom_2018_1313201 for visualization, NumPy and SciPy \citepscipy for numerical computation and IPython \citepPER-GRA:2007 and Jupyter \citepKluyver:2016aa for interactive analysis. barcode itself was written in C++, making use of the FFTW3 \citepFrigo05thedesign and GSL \citepgsl libraries.

  • Alpaslan M., et al., 2014, MNRAS, 440, L106
  • Ata M., Kitaura F.-S., Müller V., 2015, MNRAS, 446, 4250
  • Ata M., et al., 2017, MNRAS, 467, 3993
  • Bel J., et al., 2014, A&A, 563, A37
  • Berlind A. A., Narayanan V. K., Weinberg D. H., 2001, ApJ, 549, 688
  • Bernardeau F., Colombi S., Gaztañaga E., Scoccimarro R., 2002, Phys. Rep., 367, 1
  • Beutler F., et al., 2014, MNRAS, 443, 1065
  • Birkinshaw M., 1999, Phys. Rep., 310, 97
  • Blake C., et al., 2011, MNRAS, 415, 2876
  • Blake C., et al., 2013, MNRAS, 436, 3089
  • Bleem L. E., et al., 2015, ApJS, 216, 27
  • Böhringer H., et al., 2000, ApJS, 129, 435
  • Bond J. R., Kofman L., Pogosyan D., 1996, Nature, 380, 603
  • Bos E. G. P., 2016, PhD thesis, Rijksuniversiteit Groningen
  • Bos E. G. P., van de Weygaert R., Dolag K., Pettorino V., 2012, MNRAS, 426, 440
  • Bos E. G. P., van de Weygaert R., Kitaura F., Cautun M., 2016, in van de Weygaert R., Shandarin S., Saar E., Einasto J., eds, IAU Symposium Vol. 308, The Zeldovich Universe: Genesis and Growth of the Cosmic Web. pp 271–288 (arXiv:1611.01220), doi:10.1017/S1743921316009996
  • Branchini E., Eldar A., Nusser A., 2002, MNRAS, 335, 53
  • Brenier Y., Frisch U., Hénon M., Loeper G., Matarrese S., Mohayaee R., Sobolevskiĭ A., 2003, MNRAS, 346, 501
  • Cautun M. C., van de Weygaert R., 2011, The DTFE public software: The Delaunay Tessellation Field Estimator code, Astrophysics Source Code Library (arXiv:1105.0370)
  • Cautun M., van de Weygaert R., Jones B. J. T., Frenk C. S., 2014, MNRAS, 441, 2923
  • Challinor A., Lewis A., 2011, Phys. Rev. D, 84, 043516
  • Cole S., Fisher K. B., Weinberg D. H., 1995, MNRAS, 275, 515
  • Desjacques V., Jeong D., Schmidt F., 2018, Phys. Rep., 733, 1
  • Doumler T., Hoffman Y., Courtois H., Gottlöber S., 2013, MNRAS, 430, 888
  • Eckert D., Molendi S., Gastaldello F., Rossetti M., 2011, A&A, 529, A133
  • Eisenstein D. J., Seo H.-J., Sirko E., Spergel D. N., 2007, ApJ, 664, 675
  • Frigo M., Johnson S. G., 2005, in Proceedings of the IEEE. pp 216–231
  • Galassi M., et al., 2009, GNU Scientific Library Reference Manual (3rd ed.), ISBN 0954612078
  • Gottloeber S., Hoffman Y., Yepes G., 2010, in Wagner S., Steinmetz M., Bode A., Müller M. M., eds, High Performance Computing in Science and Engineering, Garching/Munich 2009. Springer Berlin Heidelberg, pp 309–322 (arXiv:1005.2687), doi:10.1007/978-3-642-13872-0_26
  • Gramann M., 1993, ApJ, 405, 449
  • Granett B. R., et al., 2015, A&A, 583, A61
  • Guzzo L., et al., 2008, Nature, 451, 541
  • Hamilton A. J. S., 1998, in Hamilton D., ed., Astrophysics and Space Science Library Vol. 231, The Evolving Universe. p. 185 (arXiv:astro-ph/9708102)
  • Hawkins E., et al., 2003, MNRAS, 346, 78
  • Heß S., Kitaura F.-S., Gottlöber S., 2013, MNRAS, 435, 2065
  • Hidding J., van de Weygaert R., Shandarin S., 2016, in van de Weygaert R., Shandarin S., Saar E., Einasto J., eds, IAU Symposium Vol. 308, The Zeldovich Universe: Genesis and Growth of the Cosmic Web. pp 69–76 (arXiv:1611.01221), doi:10.1017/S1743921316009650
  • Hilton M., et al., 2018, ApJS, 235, 20
  • Hoekstra H., Bartelmann M., Dahle H., Israel H., Limousin M., Meneghetti M., 2013, Space Sci. Rev., 177, 75
  • Hoffman M. D., Gelman A., 2012, Journal of Machine Learning Research, pp 1–30
  • Hunter J. D., 2007, Computing in Science & Engineering, 9, 90
  • Icke V., 1973, A&A, 27, 1
  • Ikebe Y., et al., 1996, Nature, 379, 427
  • Jain B., Zhang P., 2008, Phys. Rev. D, 78, 063503
  • Jasche J., Kitaura F. S., 2010, MNRAS, 407, 29
  • Jasche J., Lavaux G., 2018, preprint (arXiv:1806.11117)
  • Jasche J., Wandelt B. D., 2013, MNRAS, 432, 894
  • Jaynes E., Bretthorst G., 2003, Probability Theory: The Logic of Science. Cambridge University Press
  • Jennings E., Baugh C. M., Pascoli S., 2011, MNRAS, 410, 2081
  • Jones E., Oliphant T., Peterson P., et al., 2001–2018, SciPy: Open source scientific tools for Python, http://www.scipy.org/
  • Kaiser N., 1987, MNRAS, 227, 1
  • Kaiser N., Squires G., 1993, ApJ, 404, 441
  • King I. R., 1972, ApJ, 174, L123
  • Kitaura F.-S., 2013, MNRAS, 429, L84
  • Kitaura F.-S., Angulo R. E., 2012, MNRAS, 425, 2443
  • Kitaura F. S., Enßlin T. A., 2008, MNRAS, 389, 497
  • Kitaura F.-S., Heß S., 2013, MNRAS, 435, L78
  • Kitaura F.-S., Erdoǧdu P., Nuza S. E., Khalatyan A., Angulo R. E., Hoffman Y., Gottlöber S., 2012, MNRAS, 427, L35
  • Kitaura F.-S., Yepes G., Prada F., 2014, MNRAS, 439, L21
  • Kitaura F.-S., Gil-Marín H., Scóccola C. G., Chuang C.-H., Müller V., Yepes G., Prada F., 2015, MNRAS, 450, 1836
  • Kitaura F.-S., et al., 2016a, MNRAS, 456, 4156
  • Kitaura F.-S., Ata M., Angulo R. E., Chuang C.-H., Rodríguez-Torres S., Monteagudo C. H., Prada F., Yepes G., 2016b, MNRAS, 457, L113
  • Kluyver T., et al., 2016, in Loizides F., Schmidt B., eds, Positioning and Power in Academic Publishing: Players, Agents and Agendas. pp 87–90
  • Komatsu E., et al., 2011, ApJS, 192, 18
  • Kwan J., Lewis G. F., Linder E. V., 2012, ApJ, 748, 78
  • Laureijs R., et al., 2011, preprint (arXiv:1110.3193)
  • Lavaux G., Wandelt B. D., 2010, MNRAS, 403, 1392
  • Leclercq F., 2015, PhD thesis, Institut d’Astrophysique de Paris (arXiv:1512.04985)
  • Leclercq F., Jasche J., Gil-Marín H., Wandelt B., 2013, J. Cosmology Astropart. Phys., 11, 48
  • Leclercq F., Jasche J., Wandelt B., 2015a, J. Cosmology Astropart. Phys., 6, 15
  • Leclercq F., Jasche J., Wandelt B., 2015b, A&A, 576, L17
  • Lee M. H., Babul A., Kofman L., Kaiser N., 1997, ApJ, 489, 522
  • Lee K.-G., et al., 2016, ApJ, 817, 160
  • McDonald P., Seljak U., 2009, J. Cosmology Astropart. Phys., 10, 007
  • Mohayaee R., Mathis H., Colombi S., Silk J., 2006, MNRAS, 365, 939
  • Monaco P., Efstathiou G., 1999, MNRAS, 308, 763
  • Monaghan J. J., 1992, ARA&A, 30, 543
  • Mulchaey J. S., 2000, ARA&A, 38, 289
  • Neal R. M., 1993, Probabilistic Inference Using Markov Chain Monte Carlo Methods, Technical Report CRG-TR-93-1, University of Toronto
  • Neal R. M., 2012, preprint (arXiv:1206.1901)
  • Nesseris S., Perivolaropoulos L., 2008, Phys. Rev. D, 77, 023504
  • Neyrinck M. C., 2013, MNRAS, 428, 141
  • Neyrinck M. C., 2016, MNRAS, 455, L11
  • Neyrinck M. C., Aragón-Calvo M. A., Jeong D., Wang X., 2014, MNRAS, 441, 646
  • Nulsen P. E. J., Powell S. L., Vikhlinin A., 2010, ApJ, 722, 55
  • Nusser A., Branchini E., 2000, MNRAS, 313, 587
  • Nusser A., Dekel A., 1992, ApJ, 391, 443
  • Okumura T., Matsubara T., Eisenstein D. J., Kayo I., Hikage C., Szalay A. S., Schneider D. P., 2008, ApJ, 676, 889
  • Okumura T., Seljak U., Desjacques V., 2012, J. Cosmology Astropart. Phys., 11, 014
  • Okumura T., Seljak U., Vlah Z., Desjacques V., 2014, J. Cosmology Astropart. Phys., 5, 003
  • Park D., Lee J., 2007, Phys. Rev. Lett., 98, 081301
  • Peacock J. A., et al., 2001a, Nature, 410, 169
  • Peacock J. A., et al., 2001b, Nature, 410, 169
  • Peebles P. J. E., 1989, ApJ, 344, L53
  • Percival W. J., White M., 2009, MNRAS, 393, 297
  • Percival W. J., et al., 2004, MNRAS, 353, 1201
  • Pérez F., Granger B. E., 2007, Computing in Science and Engineering, 9, 21
  • Planck Collaboration et al., 2016, A&A, 594, A24
  • Planck Collaboration et al., 2018b, preprint (arXiv:1807.06209)
  • Planck Collaboration et al., 2018a, preprint (arXiv:1807.06211)
  • Radovich M., et al., 2017, A&A, 598, A107
  • Reid B. A., et al., 2012, MNRAS, 426, 2719
  • Ryden B. S., Melott A. L., 1996, ApJ, 470, 160
  • Samsing J., Skielboe A., Hansen S. H., 2012, ApJ, 748, 21
  • Samushia L., Percival W. J., Raccanelli A., 2012, MNRAS, 420, 2102
  • Samushia L., et al., 2013, MNRAS, 429, 1514
  • Samushia L., et al., 2014, MNRAS, 439, 3504
  • Sánchez A. G., et al., 2014, MNRAS, 440, 2692
  • Sarazin C. L., 1986, Reviews of Modern Physics, 58, 1
  • Sargent W. L. W., Turner E. L., 1977, ApJ, 212, L3
  • Sartoris B., et al., 2016, MNRAS, 459, 1764
  • Schaap W. E., van de Weygaert R., 2000, A&A, 363, L29
  • Schaller M., et al., 2015, MNRAS, 451, 1247
  • Schindler S., 1996, A&A, 305, 756
  • Schmittfull M., Feng Y., Beutler F., Sherwin B., Chu M. Y., 2015, Phys. Rev. D, 92, 123522
  • Song Y.-S., Koyama K., 2009, J. Cosmology Astropart. Phys., 1, 048
  • Song Y.-S., Percival W. J., 2009, J. Cosmology Astropart. Phys., 10, 004
  • Song Y.-S., Sabiu C. G., Nichol R. C., Miller C. J., 2010, J. Cosmology Astropart. Phys., 1, 025
  • Song Y.-S., Sabiu C. G., Kayo I., Nichol R. C., 2011, J. Cosmology Astropart. Phys., 5, 020
  • Sorce J. G., Tempel E., 2017, MNRAS, 469, 2859
  • Sorce J. G., Tempel E., 2018, MNRAS, 476, 4362
  • Springel V., 2010, ARA&A, 48, 391
  • Starobinsky A. A., 1980, Physics Letters B, 91, 99
  • Tanimura H., Aghanim N., Douspis M., Beelen A., Bonjean V., 2018, preprint (arXiv:1805.04555)
  • Tassev S., Zaldarriaga M., Eisenstein D. J., 2013, J. Cosmology Astropart. Phys., 6, 36
  • Tassev S., Eisenstein D. J., Wandelt B. D., Zaldarriaga M., 2015, preprint (arXiv:1502.07751)
  • Taylor J. F., Ashdown M. A. J., Hobson M. P., 2008, MNRAS, 389, 1284
  • Tegmark M., et al., 2004, ApJ, 606, 702
  • Tempel E., Tago E., Liivamägi L. J., 2012, A&A, 540, A106
  • Tempel E., Kipper R., Tamm A., Gramann M., Einasto M., Sepp T., Tuvikene T., 2016, A&A, 588, A14
  • The LSST Dark Energy Science Collaboration et al., 2018, preprint (arXiv:1809.01669)
  • Thomas B. C., Melott A. L., Feldman H. A., Shandarin S. F., 2004, ApJ, 601, 28
  • Tojeiro R., et al., 2014, MNRAS, 440, 2222
  • Tully R. B., 2015, AJ, 149, 171
  • Umetsu K., 2010, preprint (arXiv:1002.3952)
  • Vakili M., Kitaura F.-S., Feng Y., Yepes G., Zhao C., Chuang C.-H., Hahn C., 2017, MNRAS, 472, 4144
  • Vogelsberger M., et al., 2014, MNRAS, 444, 1518
  • Wang H., Mo H. J., Jing Y. P., Guo Y., van den Bosch F. C., Yang X., 2009, MNRAS, 394, 398
  • Wang H., Mo H. J., Yang X., van den Bosch F. C., 2012, MNRAS, 420, 1809
  • Wang H., Mo H. J., Yang X., van den Bosch F. C., 2013, ApJ, 772, 63
  • Wang H., Mo H. J., Yang X., Jing Y. P., Lin W. P., 2014, ApJ, 794, 94
  • Waskom M., et al., 2018, mwaskom/seaborn: v0.9.0 (July 2018), doi:10.5281/zenodo.1313201
  • Weinberg D. H., 1992, MNRAS, 254, 315
  • Werner N., Finoguenov A., Kaastra J. S., Simionescu A., Dietrich J. P., Vink J., Böhringer H., 2008, A&A, 482, L29
  • White M., 2015, MNRAS, 450, 3822
  • White M., Song Y.-S., Percival W. J., 2009, MNRAS, 397, 1348
  • White M., Carlson J., Reid B., 2012, Redshift space distortions and the growth of cosmic structure, http://mwhite.berkeley.edu/Talks/SantaFe12_BOSS.pdf
  • Zel’dovich Y. B., 1970, A&A, 5, 84
  • Zhang P., Liguori M., Bean R., Dodelson S., 2007, Phys. Rev. Lett., 99, 141302
  • Zhao G.-B., Giannantonio T., Pogosian L., Silvestri A., Bacon D. J., Koyama K., Nichol R. C., Song Y.-S., 2010, Phys. Rev. D, 81, 103510
  • Zheng Y., Zhang P., Jing Y., Lin W., Pan J., 2013, Phys. Rev. D, 88, 103510
  • da Ângela J., et al., 2008, MNRAS, 383, 565
  • de Jong J. T. A., Verdoes Kleijn G., Kuijken K., Valentijn E. A., for the KiDS Collaboration & Astro-WISE Consortiums, 2012, Experimental Astronomy, 34
  • de la Torre S., et al., 2013, A&A, 557, A54
  • van Haarlem M., van de Weygaert R., 1993, ApJ, 418, 544
  • van de Weygaert R., 1996, in Coles P., Martinez V., Pons-Borderia M.-J., eds, Astronomical Society of the Pacific Conference Series Vol. 94, Mapping, Measuring, and Modelling the Universe. p. 49 (arXiv:astro-ph/9601026)
  • van de Weygaert R., Bertschinger E., 1996, MNRAS, 281, 84
  • van de Weygaert R., Platen E., 2011, International Journal of Modern Physics Conference Series, 1, 41

\@xsect

In this section we briefly recap the basics behind barcode that are necessary for this paper. barcode is a code that aims to reconstruct the full Lagrangian primordial density perturbation field corresponding to a configuration of observed objects. It has been designed specifically for clusters of galaxies as principal constraining objects. We want this reconstruction to reflect the inherently stochastic nature of both observations and the non-linear structure formation process. This stochasticity can be encoded in a probability distribution function, or posterior. The posterior describes the complete physical process leading to an observed density field: starting from the initial density perturbations, going through gravitational collapse and finally including observational effects. barcode uses an MCMC method called Hamiltonian Monte Carlo \citepneal93,taylor08,jaschekitaura10,neal12 to sample this posterior, comparing the results to the actual observed density field and in this way finding the initial density distributions that match the data. This yields not just one reconstruction, but an ensemble of possible reconstructed Lagrangian density fields. The variation in these realizations reflects the full array of stochastic effects included in the model. In the HMC method, the PDF is represented by a Hamiltonian (Section [), where the PDF’s stochastic variable (the signal we sample) is treated as a position analogue. The HMC algorithm then samples the posterior PDF by repeating the following procedure:
    1. Initialize the signal: define the “starting position” in the parameter space (either a manually supplied initial guess or the position from the previous sample). Our “signal” \delta({\bm{q}}) is defined as

      \delta({\bm{q}})\equiv D_{1}\Delta^{(1)}({\bm{q}})\,, (A1)

      where D_{1} is the first order growing mode linear growth factor and \Delta^{(1)}({\bm{q}}) is the spatial part of the first order term of the LPT primordial density field expansion.

    2. Stochastic step: draw a random “momentum” vector (we use a Gaussian field momentum distribution with the Hamiltonian mass as the covariance matrix).

    3. Dynamical step: solve the Hamiltonian equations of motion using a leap-frog solver (Section A2) to find the next “position”, i.e. the next sample candidate.

    4. Accept or reject the candidate based on the acceptance criterion (Section A4); see the sketch after this list.

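To make the loop concrete, the following minimal Python sketch mirrors the four steps above. The callables hamiltonian, draw_momentum and leapfrog are placeholders standing in for the corresponding barcode routines; they are not its actual interface.

```python
import numpy as np

def hmc_sample(delta0, n_samples, hamiltonian, draw_momentum, leapfrog, rng=None):
    """Generic HMC loop: each iteration draws a momentum, integrates the
    Hamiltonian equations of motion, and accepts or rejects the candidate.
    hamiltonian(q, p), draw_momentum() and leapfrog(q, p) are placeholders."""
    rng = rng or np.random.default_rng()
    samples, delta = [], delta0
    for _ in range(n_samples):
        p = draw_momentum()                          # stochastic step
        delta_star, p_star = leapfrog(delta, p)      # dynamical step
        dH = hamiltonian(delta_star, p_star) - hamiltonian(delta, p)
        if rng.uniform() < min(1.0, np.exp(-dH)):    # acceptance criterion (A16)
            delta = delta_star
        samples.append(np.copy(delta))
    return samples
```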
    This process can continue indefinitely; usually, the user supplies some maximum number of desired samples.

A1 The Hamiltonian

The Hamiltonian \mathscr{H} is the sum of the potential energy \mathscr{E} and the kinetic energy \mathscr{K},
    \mathscr{H}=\mathscr{E}+\mathscr{K}\,. (A2)
    The kinetic term is the momentum (as a row vector) times the inverse of the mass matrix \mathscr{M} times the momentum (as a column vector),
    \mathscr{K}=\frac{1}{2}\breve{p}^{T}\mathscr{M}^{-1}\breve{p}\,. (A3)
    The potential \mathscr{E} is coupled to the probability distribution function (PDF) of our signal \delta({\bm{q}}_{i}). We define this PDF in a Bayesian way as the posterior P, which is the product of a prior \mathscr{P} with a likelihood \mathscr{L}. (Note that the notation we use is not necessarily the most common one in the Bayesian literature. For instance, \citet[page 89]jaynes03 uses a notation that would translate in this work to something like L(\delta({\bm{q}}))=P(\delta^{\mathrm{obs}}({\bm{x}})|\delta({\bm{q}})), where L is the “likelihood” of the hypothesis and P is a probability function that describes it in terms of the random variable \delta^{\mathrm{obs}}({\bm{x}}) and the conditional variable \delta({\bm{q}}), the hypothesis.) The connection between the potential and the PDF is given by a canonical ensemble distribution. In logarithmic form, this gives us an equation for \mathscr{E} in terms of the posterior P:
    \mathscr{E}=\mathrm{constant}-\ln P=\mathrm{constant}-\ln\mathscr{P}-\ln\mathscr{L}\,, (A4)
    so -\ln\mathscr{P} and -\ln\mathscr{L} are the terms we need our (statistical) model to specify so that we can calculate \mathscr{E}, up to a constant. However, as we will see below, we only need to know \delta\mathscr{H} or the derivative of the potential. We can therefore forget about the constants. In this work we use a Gaussian prior and likelihood:
    -\ln\mathscr{P}=\mathrm{constant}+\frac{1}{2}\sum_{i,j}\delta({\bm{q}}_{i}){(S^{-1})}_{ij}\delta({\bm{q}}_{j})\,, (A5)
    -\ln\mathscr{L}=\mathrm{constant}+\frac{1}{2}\sum_{i}\frac{{\left[T(\rho^{\mathrm{obs}}({\bm{x}}_{i}))-\rho({\bm{x}}_{i})\right]}^{2}w_{i}}{\sigma_{i}^{2}}\,. (A6)
    These are two different types of Gaussian exponents. The likelihood exponent is a simple sum of one-dimensional Gaussians, one for each grid cell. The prior exponent is a little more computationally intensive, due to the matrix inversion. However, we can approximate {(S^{-1})}_{ij}\delta({\bm{q}}_{j}) by treating it as a convolution in real space and thus as a simple multiplication in Fourier space, where the (inverse) power spectrum (the Fourier transform of S^{-1}) is known and diagonal (see Appendix 3.B.3 of \citetthesis for computational details). The same considerations apply to the kinetic term \mathscr{K}, though the mass matrix \mathscr{M} is a different quantity; often it is taken to be \mathscr{M}=S^{-1}, in which case a similar procedure can be applied, only now with the non-inverted power spectrum.

A2 The equations of motion

After drawing a random momentum vector, we “move” the MCMC chain forward through the posterior’s parameter space by solving the Hamiltonian equations of motion with a leap-frog estimation scheme. Numerically, we solve the following discretized equations for each time step \tau+\epsilon (based on the previous step at \tau):
    \begin{split}\breve{p}_{i}(\tau+\tfrac{\epsilon}{2})&\simeq\breve{p}_{i}(\tau)-\frac{\epsilon}{2}\frac{\partial\mathscr{E}}{\partial\breve{q}_{i}}(\breve{q}(\tau))\\ \breve{q}_{i}(\tau+\epsilon)&\simeq\breve{q}_{i}(\tau)+\epsilon\sum_{j}{(\mathscr{M}^{-1})}_{ij}\breve{p}_{j}(\tau+\tfrac{\epsilon}{2})\\ \breve{p}_{i}(\tau+\epsilon)&\simeq\breve{p}_{i}(\tau+\tfrac{\epsilon}{2})-\frac{\epsilon}{2}\frac{\partial\mathscr{E}}{\partial\breve{q}_{i}}(\breve{q}(\tau+\epsilon))\,.\end{split} (A7)
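A minimal sketch of one such leap-frog trajectory, assuming a diagonal mass matrix (the array inv_mass holds the diagonal of \mathscr{M}^{-1}) and a user-supplied gradient function grad_potential returning \partial\mathscr{E}/\partial\breve{q}; both names are ours, not barcode's:

```python
import numpy as np

def leapfrog(q, p, grad_potential, inv_mass, eps, n_steps=1):
    """One leap-frog trajectory for the HMC dynamical step (equation A7)."""
    q, p = q.copy(), p.copy()
    for _ in range(n_steps):
        p -= 0.5 * eps * grad_potential(q)   # half kick
        q += eps * inv_mass * p              # drift with M^{-1} p
        p -= 0.5 * eps * grad_potential(q)   # half kick
    return q, p
```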
    We need the derivative of \mathscr{E} with respect to each component of the signal, i.e. every \delta({\bm{q}}_{i}) sampled on a regular grid \{{\bm{q}}_{i}\}:
    F_{i}\equiv\frac{\partial\mathscr{E}}{\partial\delta({\bm{q}}_{i})}=F^{\mathscr{P}}_{i}+F^{\mathscr{L}}_{i}\,. (A8)
    This F_{i} is called the Hamiltonian force; its components are called the prior force and the likelihood force, respectively. For our Gaussian prior, the prior force is
    F^{\mathscr{P}}_{i}=\sum_{j}{(S^{-1})}_{ij}\delta({\bm{q}}_{j})\,, (A9)
    where S_{ij} is the correlation matrix corresponding to the cosmological power spectrum, given an inverse power spectrum mass (Section A1). In general, we can write the likelihood force as
    F^{\mathscr{L}}_{k}=\sum_{m}h({\bm{q}}_{m})\frac{\partial\vartheta({\bm{q}}_{m})}{\partial\delta({\bm{q}}_{k})}\,, (A10)
    where the derivative of the divergence of the displacement field \vartheta({\bm{q}}) with respect to the signal \delta({\bm{q}}_{k}) encodes the structure formation model, and the h term primarily represents the Gaussian model that we use for our likelihood (see Chapter 3 of \citetthesis for more details). For the Zel’dovich model of structure formation, the likelihood force is given by
    F^{\mathscr{L},ZA}_{i}=-h({\bm{q}}_{i})\,. (A11)
    The function h({\bm{q}}) is subsequently defined as:
    \hat{h}({\bm{k}}_{l})=-{\left(-\frac{i{\bm{k}}_{l}}{k_{l}^{2}}\cdot{\left(\hat{{\bm{V}}}^{\prime}({\bm{k}}_{l})\right)}^{*}\right)}^{*}=-\frac{i{\bm{k}}_{l}}{k_{l}^{2}}\cdot\hat{{\bm{V}}}_{l} (A12)
    or
    h({\bm{q}}_{m})=\nabla^{-2}{\bm{\nabla}}\cdot{\bm{V}}_{m}\,, (A13)
    where
    \begin{split}{\bm{V}}_{i}&\equiv{\bm{V}}({\bm{x}}({\bm{q}}_{i}))\equiv\frac{\partial\ln\mathscr{L}}{\partial{\bm{x}}({\bm{q}}_{i})}\\ &=m_{i}\sum_{j}\frac{\partial\ln\mathscr{L}}{\partial\rho({\bm{x}}_{j})}\frac{\partial W({\bm{x}}_{j}-{\bm{x}}({\bm{q}}_{i});h_{s})}{\partial{\bm{x}}({\bm{q}}_{i})}\,.\end{split} (A14)
    Here, for a Gaussian likelihood, the derivative of the likelihood with respect to the density in Eulerian space \rho({\bm{x}}) is given by:
    \frac{\partial\ln\mathscr{L}}{\partial\rho({\bm{x}}_{j})}=\frac{\left(T_{j}(\rho^{\mathrm{obs}}({\bm{x}}_{j}))-\rho({\bm{x}}_{j})\right)w_{j}}{\sigma_{j}^{2}}\,. (A15)
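Since h({\bm{q}}) follows from {\bm{V}} through an inverse Laplacian of a divergence (equations A12 and A13), it can be computed with a handful of FFTs. The following numpy sketch assumes a cubic periodic grid and discards the zero mode; the function and its signature are illustrative, not barcode's actual implementation:

```python
import numpy as np

def compute_h(V, box_size):
    """h(q) = inverse Laplacian of div V (equation A13), on a periodic grid.
    V: array of shape (3, n, n, n) holding the vector field V(q)."""
    n = V.shape[1]
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid division by zero; zero mode set below
    V_hat = np.fft.fftn(V, axes=(1, 2, 3))
    div_hat = 1j * (kx * V_hat[0] + ky * V_hat[1] + kz * V_hat[2])
    h_hat = -div_hat / k2                  # inverse Laplacian: multiply by -1/k^2
    h_hat[0, 0, 0] = 0.0
    return np.fft.ifftn(h_hat).real
```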
A3 The leap-frog time step

The maximum pseudo time step \epsilon_{m} is probably the single most important parameter determining overall HMC performance. The pseudo time step \epsilon is drawn randomly from a uniform distribution between 0 and \epsilon_{m}. If \epsilon_{m} is too large, the leap-frog errors will be too large, leading to an artificial increase in energy. The acceptance criterion does not allow this, meaning that many steps will be rejected, which wastes computational resources. If, on the other hand, it is too small, many steps will be accepted, but the samples will lie very close together, so that the probability distribution is sampled densely but slowly. This is an inefficient use of computational power as well: we do not need an overly dense sampling, but rather a representative sampling of the overall high probability regions. Striking a balance between the two is essential. A further complication is that the energy also scales with the dimensionality N of the problem. To keep the acceptance rate at an acceptable value, \epsilon must be smaller for larger problems, meaning that the chain will evolve correspondingly more slowly. It is therefore impossible to determine a universally optimal \epsilon for every HMC problem. \citetneal12 discusses the choice of \epsilon at great length. One possible solution is to adapt \epsilon_{m} automatically so as to reach an optimal acceptance rate \citepneal12,hoffman12. We implemented our own adaptive scheme in barcode. In accordance with \citetneal12, we set the target acceptance rate to 0.65. After a set number of attempted iterations (50 in our case), the code checks whether the acceptance rate is still within a reasonable range around the optimal rate (\pm 0.05 is allowed). If it is not, \epsilon_{m} is adjusted by an appropriate amount in the appropriate direction. Furthermore, to accelerate the initial adaptive stage (the first few HMC iterations), the code checks for acceptance after every step instead of checking the acceptance rate every 50 steps.

A4 The acceptance criterion

Like Metropolis-Hastings and Gibbs sampling, Hamiltonian sampling includes a sample rejection step after the candidate sampling step. Given the difference \delta\mathscr{H} between the Hamiltonian of the state (\breve{q},\breve{p}) (where \breve{q}=\delta({\bm{q}})) before the dynamical step, \mathscr{H}(\breve{q},\breve{p}), and that of the candidate state (\breve{q}^{*},\breve{p}^{*}) after the dynamics, \mathscr{H}(\breve{q}^{*},\breve{p}^{*}), the probability of accepting the candidate state conditional on the previous state P((\breve{q}^{*},\breve{p}^{*})|(\breve{q},\breve{p})) is given by
    P((\breve{q}^{*},\breve{p}^{*})|(\breve{q},\breve{p}))=\min\left(1,e^{-(\mathscr{H}(\breve{q}^{*},\breve{p}^{*})-\mathscr{H}(\breve{q},\breve{p}))}\right)\,. (A16)
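The acceptance rule above and the adaptive \epsilon_{m} scheme of Section A3 can be summarized in a few lines. The target rate (0.65), tolerance (\pm 0.05) and window (50 attempts) follow the text; the multiplicative adjustment factor is an assumption for illustration, not barcode's actual value:

```python
import numpy as np

def accept(dH, rng):
    """Acceptance rule of equation A16."""
    return rng.uniform() < min(1.0, np.exp(-dH))

def adapt_epsilon(eps_m, n_accepted, window=50, target=0.65,
                  tolerance=0.05, factor=1.1):
    """Adjust the maximum pseudo time step after each window of attempts.
    The factor 1.1 is an illustrative assumption."""
    rate = n_accepted / window
    if rate > target + tolerance:    # accepting too often: take larger steps
        eps_m *= factor
    elif rate < target - tolerance:  # rejecting too often: take smaller steps
        eps_m /= factor
    return eps_m
```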
A5 From Lagrangian to Eulerian coordinates

A final important aspect of barcode, unrelated to HMC but crucial to our cosmological application, concerns the transformation between the Lagrangian space of the sampled primordial density fields and the Eulerian space of comoving coordinates at some a>0. Lagrangian coordinates {\bm{q}} can, in general, be transformed to their Eulerian counterparts {\bm{x}} by adding a displacement field {\bm{\Psi}} (in our case, defined by the Zel’dovich approximation):
    {\bm{x}}({\bm{q}})={\bm{q}}+{\bm{\Psi}}({\bm{q}})\,. (A17)
    Note that this relation cannot be inverted. Mathematically, this is because the displacement field depends on {\bm{q}}. Physically, it is because multiple fluid elements originally at \{{\bm{q}}_{i}\} may end up at the same Eulerian location {\bm{x}}; through gravitational clustering, this in fact happens everywhere. This means that we must take the {\bm{q}}_{i} grid as the basis of our analysis, since taking an {\bm{x}} grid as the basis would make it impossible to find unique corresponding {\bm{q}} values, which is a computational problem. To compute {\bm{\Psi}} we need \vartheta, because in Fourier space we have:
    i{\bm{k}}\tilde{\vartheta}=-k^{2}\tilde{{\bm{\Psi}}} (A18)
    \Rightarrow\tilde{{\bm{\Psi}}}=-\frac{i{\bm{k}}}{k^{2}}\tilde{\vartheta}\,. (A19)
    For the Zel’dovich model of structure formation that we use in this work, this quantity is given by:
    \vartheta=-D_{1}\Delta^{(1)}({\bm{q}})\,. (A20)
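In practice, equations A18 to A20 amount to three inverse FFTs. A minimal numpy sketch on a cubic periodic grid, with the zero mode discarded; the function is our own illustration, not barcode's implementation:

```python
import numpy as np

def displacement_from_theta(theta, box_size):
    """Psi(q) from the divergence theta (equations A18-A19), on a periodic
    grid. For the Zel'dovich model, theta = -D1 * Delta1 (equation A20)."""
    n = theta.shape[0]
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # protect the zero mode
    theta_hat = np.fft.fftn(theta)
    psi = [np.fft.ifftn(-1j * ka / k2 * theta_hat).real for ka in (kx, ky, kz)]
    return np.stack(psi)                   # shape (3, n, n, n); x = q + Psi(q)
```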
Appendix B: Reconstructions from unmodeled redshift space data

In this appendix we illustrate the problems one encounters when redshift space distortions are not modeled. In summary: redshift space distortions introduce an apparent anisotropic clustering in the data, which must be dealt with to reconstruct the real-space (initial and final) density field. When one does not correct for these distortions, the resulting reconstructions will be similarly distorted. Note that we explicitly differentiate between the Kaiser effect and the effects we illustrate in this appendix. In both the Kaiser effect and the effects in our “uncorrected redshift space” runs, the main issue is the deformation of structures along the line of sight. However, the term Kaiser effect pertains specifically to the structure generated in redshift space. In our case, the Kaiser effect is only present in the mock observations that we put into barcode. It is not present in the resulting reconstructed realizations, because these are either in Lagrangian space or in Eulerian (real) space, not in redshift space. The imprint of the Kaiser effect in these runs is visible because the wrong assumption was made that the observations are in real space rather than in redshift space. The algorithm then tries to find a set of initial conditions that reproduce the anisotropies purely through gravitational evolution, without any redshift space assumptions. To clearly distinguish this imprint of the Kaiser effect on the reconstructions from the Kaiser effect itself, we propose to call this effect the Kaiser effect echo.

B1 Density fields

To show the problem with RSDs in observations, we obtained a set of regular Zel’dovich model samples with barcode. As with real data, the mock observational input is in redshift space, but we do not account for this in the model. These are the “obs_rsd” type runs that we mentioned in Section [.
Figure B1: Mean sample Lagrangian densities (top right) compared to the true field (top left), together with the standard deviation of the samples (bottom left) and the difference between the true and mean fields (bottom right).

In figure B1 we first look at a slice of the Lagrangian density field. By eye, the results are strikingly similar to those obtained with the redshift space model: structures are again well reconstructed in the mean of the reconstructions (top right) and the amount of variation around the mean (bottom panels) is at a similar level.

Figure B2: Mean sample Eulerian densities (top right) compared to the true field (top left), together with the standard deviation of the samples (bottom left) and the difference between the true and mean fields (bottom right).

We next look at the match of the corresponding ensemble average Eulerian density field to the true field in figure B2. The differences between the true (top left) and mean-reconstructed (top right) fields are again subtle and hard to discern by eye in print. The best way to see the difference is to alternate between the mean reconstruction image of this figure and that of the RSS-model results in figure 7, laid on top of each other in an animation (an animated GIF showing this is provided at http://egpbos.nl/rsd-eul__obs-eul/). In this way, it becomes immediately apparent that a strong anisotropic imprint is imparted on the reconstructions: along the line of sight direction (horizontal in the image), the voids are stretched and the overdense structures are compressed. In fact, when one compares this figure to the RSS density field of figure 2, the problem becomes apparent (animated GIF at http://egpbos.nl/rsd-rss__obs-eul/): when one does not model RSS, the algorithm converges on the true RSS field instead of the true Eulerian space field.

Our intention is to remove redshift space distortions, not to reproduce them; in the latter case, the corresponding Lagrangian density is quite incorrect as well. When the model does not include redshift space distortions but the data are affected by them, the model is simply wrong: it assumes the redshift space density distribution to equal the Eulerian space density. This begets a Lagrangian density that reproduces the observed redshift space distortions as if they were true physical features of the large scale structure. Instead, they are optical illusions and must be treated accordingly.

The difference plot (bottom right) shows the shift of many features along the z-axis. A bi-polar pattern along this line-of-sight axis is apparent for most clusters.

B2 Power spectra and correlation functions
Figure B3: Sample Lagrangian power spectra compared to the true one: the true spectrum versus the mean and standard deviation of the sampled densities. Note that the difference plot (bottom panel) uses samples from three different runs with different true fields, while the power spectrum plot itself (top panel) is from one true field only.

The reconstruction problems are visible in the power spectrum as well, as can be seen in figure B3. This shows the boost in power at low k that is known as the Kaiser effect. In the difference plot (bottom panel of figure B3) this reveals itself as a downwards turn. We want to get rid of this effect as well, since we are trying to reconstruct RSD-less density fields.

In figure B4 we show 2D correlation functions (see Appendix C) that further illustrate the problem caused by having RSDs in the observations. What one would like to see is shown in the left panel: the isotropic \xi(r_{\perp},r_{\parallel}) of the true density field. The central panel shows the ensemble average of the samples that the algorithm obtains when RSDs are present in the mock observations but are not modeled.

The reconstructions show a large anisotropy (a non-circular \xi(r_{\perp},r_{\parallel})). Note again that we only see here the effect of the coherent flow RSDs, which cause “great walls”. This is seen in the horizontal (perpendicular to the line of sight) stretching of the otherwise circular function.

Figure B4: Eulerian space 2D correlation functions: true (left) vs ensemble mean (center). The right-hand panel shows the two functions in contours: solid black for the true function and dashed red for the ensemble mean.
Appendix C: The two-dimensional correlation function

The stronger clustering implied by the Kaiser effect results in a boost of the contrast of walls and filaments perpendicular to the line of sight. This boost is reflected in the higher amplitude of the power spectrum. The power spectrum, however, does not differentiate between the line of sight and the two other, unaffected directions. A common statistic that does take this into account is the two-dimensional (sky and redshift) correlation function, of which we give a short recap.

In general, the correlation function \xi({\bm{x}}_{A},{\bm{x}}_{B}) of point A with point B in a homogeneous field depends only on the distance vector {\bm{r}}_{AB} between the two points, i.e.

\xi({\bm{x}}_{A},{\bm{x}}_{B})=\xi({\bm{x}}_{A}-{\bm{x}}_{B})\equiv\xi({\bm{r}}_{AB})\,. (C1)

Often, one also assumes the field to be isotropic, which further reduces the dependence of the correlation function to only the (absolute) distance scalar between two points:

\xi({\bm{r}}_{AB})=\xi(|{\bm{r}}_{AB}|)\equiv\xi(r_{AB})\,. (C2)

The assumptions of homogeneity and isotropy form the basis of the FLWR (Friedmann, Lemaître, Walker and Robertson) metric of space-time.

When dealing with correlation functions of fields with RSDs, the latter assumption no longer holds. The RSDs create a marked anisotropy between the line of sight and the sky, while the isotropy in the plane of the sky remains preserved. This means that for correlation functions of density fields affected by RSDs, we can reduce the correlation function to a function of two scalar variables instead of one: one along the line of sight, and one in the remaining two (transverse) directions. In our simplified case, where the observer is far away, the line of sight direction can be (and is) chosen along the z-axis. This gives us:

\xi({\bm{r}})=\xi(|{\bm{r}}-r_{z}\hat{z}|,r_{z})\equiv\xi(r_{\perp},r_{\parallel})\,, (C3)

where r_{z}={\bm{r}}\cdot\hat{z} and for simplicity we drop the AB subscripts.
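For a density field sampled on a periodic grid, \xi(r_{\perp},r_{\parallel}) can be estimated with a single FFT pair via the Wiener-Khinchin theorem. The sketch below is our own illustrative estimator with simple linear binning, not the estimator used for the figures in this paper:

```python
import numpy as np

def xi_2d(delta, box_size, n_bins=20):
    """Estimate xi(r_perp, r_par) of a periodic gridded overdensity field,
    with the line of sight along the z-axis (equation C3)."""
    n = delta.shape[0]
    delta_hat = np.fft.fftn(delta)
    # autocorrelation via the Wiener-Khinchin theorem
    xi_grid = np.fft.ifftn(delta_hat * np.conj(delta_hat)).real / delta.size
    # pair separations on the periodic grid
    r1d = np.minimum(np.arange(n), n - np.arange(n)) * (box_size / n)
    rx, ry, rz = np.meshgrid(r1d, r1d, r1d, indexing="ij")
    r_perp, r_par = np.sqrt(rx**2 + ry**2), rz
    # average xi in (r_perp, r_par) bins; overflow lands in the last bin
    edges = np.linspace(0, box_size / 2, n_bins + 1)
    i_perp = np.clip(np.digitize(r_perp, edges) - 1, 0, n_bins - 1)
    i_par = np.clip(np.digitize(r_par, edges) - 1, 0, n_bins - 1)
    sums, counts = np.zeros((n_bins, n_bins)), np.zeros((n_bins, n_bins))
    np.add.at(sums, (i_perp, i_par), xi_grid)
    np.add.at(counts, (i_perp, i_par), 1)
    return np.where(counts > 0, sums / counts, 0.0), edges
```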

In the literature, often r_{\parallel} is called the radial pair separation \pi and r_{\perp} is called the transverse pair separation \sigma \citeppeacock01. Note that the parallel (to the line-of-sight) axis, which in our case is the z-axis, is vertical in the \xi(r_{\perp},r_{\parallel}) plots, but horizontal in the density slice plots. The former was chosen to correspond to the way \xi(r_{\perp},r_{\parallel}) plots are usually shown in the literature \citep[see e.g.]hawkins03.

The Kaiser effect is also reflected in the 2D correlation functions, in this case only in the transverse direction. The increased correlation on large scales corresponds to the “great walls” that are enhanced in the observations \citepsargentturner77,ryden96,thomas04. While the difference appears to be in the transverse direction, the cause of this flattening is the compression along the line of sight direction. This compression brings about both a modest flattening in the radial direction and the more apparent widening in the transverse direction; it also produces a generally higher level of correlation.

Each quadrant contains exactly the same information, since \xi(r_{\perp},r_{\parallel}) is symmetric in its two axes. In this paper we plot only the upper left quadrant, which offers greater detail at the expense of making the overall shape slightly harder to see.

Appendix D: Mass assignment kernels

The derivative of the density field \frac{\partial\rho({\bm{x}}_{i})}{\partial{\bm{x}}({\bm{q}})} appears in the derivative of the log-likelihood of equation A14. It must be treated very carefully, because a mixing of coordinate systems takes place here: we take the derivative of the Eulerian density field on a regular grid defined by the observations, but we differentiate with respect to an irregular coordinate {\bm{x}}({\bm{q}}). Even though {\bm{q}} is on a regular grid (that of the initial Lagrangian density field), {\bm{x}}({\bm{q}}) is certainly not. This means we cannot simply put the density on a grid and then take a derivative (easily done numerically in Fourier space by multiplying by i{\bm{k}}). One might think that interpolating the regularized field \rho({\bm{x}}_{i}) at the locations {\bm{x}}({\bm{q}}) could overcome this, but this is not the case; in fact, it introduces severe instabilities in the algorithm.

As always for HMC algorithms, the only solution is to keep every calculation as explicit as possible. We do this by formulating the density field in terms of particles with a kernel that distributes their mass over space. This allows us to analytically express the derivative of the particle kernel that is needed in the likelihood force. The following general expression for the density \rho({\bm{x}}) then applies:

\rho({\bm{x}})=\sum_{\mathrm{particles}~i}m_{i}W({\bm{x}}-{\bm{x}}_{i})\,, (D1)

given a kernel W that distributes the mass m_{i} of particle i at {\bm{x}}_{i} over the continuous density field \rho at {\bm{x}}. (To transform from the {\bm{q}} to the {\bm{x}} grid, we start by evolving particles forward in time to the desired redshift using the chosen structure formation model (Zel’dovich in this work), given the input field \delta({\bm{q}}). The resulting particle distribution in {\bm{x}} comoving coordinate space is then converted to a new regularly gridded density field \rho({\bm{x}}) by means of an SPH field estimation scheme.) Our mass is defined by the critical density \rho_{c}, the box volume V and the number of grid cells N, as we are only concerned with equal mass Lagrangian particles:

m_{i}=\rho_{c}\frac{V}{N}\,. (D2)

This kernel formulation conveniently integrates the particle nature of the transformation from Lagrangian to Eulerian space; it is, after all, a hydrodynamical representation of matter fields, based on particles.

One could in principle convolve fields with these kernels to smooth them, or deconvolve fields by dividing by the kernels in Fourier space. \citetwang13 suggest that the latter is necessary when using kernel-based density estimation and/or interpolation methods. We do not follow them in this, as we discuss in \citetthesis.

For this work, we chose to use an SPH spline kernel \citepmonaghan92,springel10 for each particle at {\bm{x}}({\bm{q}}). The advantages of this kernel are:

  • smoothness (it is a second order Taylor expansion of a Gaussian), which also gives us nicely behaving derivatives (first and second), and

  • boundedness within a few neighboring cells, which makes it far more computationally efficient than e.g. a Gaussian which has infinite extent.

Other options, like the Gaussian, NGP, CIC and TSC kernels, have only one of the above properties, not both. We also cannot use the more precise DTFE scheme \citepschaap00,weygaert11,cautun11a, because it is not kernel-based, which makes analytical derivatives much harder to calculate.

The SPH spline kernel is defined in 3D following \citetmonaghan92:

W(q;h_{s})=\frac{1}{\pi h_{s}^{3}}\begin{cases}1-\frac{3}{2}q^{2}+\frac{3}{4}q^{3}&\mbox{if }0\leq q<1\\ \frac{1}{4}{(2-q)}^{3}&\mbox{if }1\leq q<2\\ 0&\mbox{otherwise}\,,\end{cases} (D3)

where h_{s} is the scale parameter of the kernel which defines its size and q is given by:

q\equiv\frac{|{\bm{x}}-{\bm{x}}_{i}|}{h_{s}}\,. (D4)

An important property of the kernel thus normalized is that, independent of the choice of h_{s}, we have

\int_{V}W(q;h_{s})d{\bm{x}}=1\,, (D5)

which is necessary for equation D1 to be correct. Note that the integral is unitless, as W’s units are \mathrm{Mpc}^{-3}; W is effectively a number density that spreads the mass of the Lagrangian tracer particle i at {\bm{x}}_{i} over its surroundings.

We need the gradient of this kernel with respect to {\bm{x}}_{i}={\bm{x}}({\bm{q}}_{i}) to calculate the likelihood force term. This gradient is given by:

\frac{\partial W({\bm{x}}-{\bm{x}}_{i};h_{s})}{\partial{\bm{x}}_{i}}=\frac{1}{\pi h_{s}^{5}}\frac{{\bm{x}}_{i}-{\bm{x}}}{q}\begin{cases}\frac{9}{4}q^{2}-3q&\mbox{if }0\leq q\leq 1\\ -\frac{3}{4}{(2-q)}^{2}&\mbox{if }1\leq q\leq 2\\ 0&\mbox{otherwise}\end{cases}\,, (D6)

where in the F^{\mathscr{L}} equations {\bm{x}}_{i} is replaced with {\bm{x}}({\bm{q}}_{i}) and {\bm{x}} with {\bm{x}}_{j}, which are the terms we’ll actually use in the formulas (though it works for any combination of {\bm{x}} positions).
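A direct transcription of equations D1, D3 and D6 for a single particle reads as follows; the function names and the per-particle (non-vectorized) form are ours, chosen for clarity rather than speed:

```python
import numpy as np

def sph_kernel(r, h_s):
    """SPH spline kernel W(q; h_s) of equation D3, with r = x - x_i."""
    q = np.linalg.norm(r) / h_s
    norm = 1.0 / (np.pi * h_s**3)
    if q < 1.0:
        return norm * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return norm * 0.25 * (2.0 - q)**3
    return 0.0

def sph_kernel_gradient(r, h_s):
    """Gradient of W with respect to the particle position x_i (equation D6),
    again with r = x - x_i; the q -> 0 limit vanishes."""
    q = np.linalg.norm(r) / h_s
    if q == 0.0 or q >= 2.0:
        return np.zeros(3)
    radial = 2.25 * q**2 - 3.0 * q if q <= 1.0 else -0.75 * (2.0 - q)**2
    return (-r) / (np.pi * h_s**5 * q) * radial   # (x_i - x) = -r

def density(x, particle_positions, m, h_s):
    """Density estimate of equation D1 at position x."""
    return sum(m * sph_kernel(x - xi, h_s) for xi in particle_positions)
```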

Appendix E: 2LPT and 2LPT+SC structure formation models

In the implementation used in this paper, barcode uses the Zel’dovich, or first order Lagrangian perturbation theory, model as its structure formation model. This model has the advantage of giving a decent match to reality over a range of scales, from large down to semi-linear scales. On large scales, however, the second order Lagrangian perturbation theory (2LPT) model of structure formation is more accurate, reproducing higher order structural statistics better than, for instance, the Zel’dovich approximation \citepneyrinck13. It also takes the second order tidal terms into account, which is important for overall large scale structure formation. In this appendix (Section E2), we derive the equations necessary to include the 2LPT model in barcode.

While 2LPT works well on large scales, it fails badly on smaller scales. Its main downside is that it produces even stronger overshoot in clustered regions than the Zel’dovich model does. Figure 6 of \citetneyrinck13 illustrates this fact succinctly.

To correct 2LPT on small scales, we adopt the Augmented Lagrangian Perturbation Theory model of \citetkitaurahess13. This model adds a spherical collapse (“SC”) component to the 2LPT model. Through this combination, the massive overshoot is removed completely, and small scales are corrected in a way that is still analytically tractable and thus usable in our Hamiltonian sampler. We refer to this model by the more explicit acronym “2LPT+SC” instead of the previously used ALPT. In Section E3, we derive the equations necessary to implement 2LPT+SC in barcode.

This model, though a great improvement over bare 2LPT, is not without shortcomings. One of particular interest is the fact that it is, by nature, spherical, whereas structure formation as a process is driven by anisotropic collapse \citepzeldovich70,icke73. Since this is what we study, we need to assess the impact of adding a spherical collapse model on anisotropic measures of large scale structure. The validity of these models with respect to important aspects of the large scale structure will be tested in a future study.

E1 Notation

Most of this appendix uses the same definitions and notation as the rest of this paper. However, we must make some additional remarks on index notation. Normal indices usually start from i in the alphabet and indicate grid cells; we use these indices when talking about Fourier transforms, for instance. Underlined indices start from a, i.e. \underline{a}; these are used for the components of regular vectors like coordinate vectors {\bm{x}}, velocities {\bm{v}} or a displacement vector {\bm{\Psi}}. Some quantities carry a combination of the two types of indices, e.g. when talking about (Fourier transforms of) discretized vector fields, like the i-th cell of the \underline{a}-th component of a displacement field {\bm{\Psi}}: \Psi_{i\underline{a}}.

When we use matrices, two indices are necessary. The same goes for the Kronecker delta \delta^{\mathrm{K}}_{ij}. When, instead of indices, the full coordinates are written in the subscript of the Kronecker delta, e.g. \delta^{\mathrm{K}}_{{\bm{q}}_{i},{\bm{q}}_{j}}, a comma is put in between for clarity. We clarify this, since, in other cases, a comma in the subscript means a derivative with respect to the coordinates corresponding to the indices (for instance in equation [).

E2 The 2LPT likelihood force

2LPT adds a second order term to the equation for \vartheta \citep[equation 94]bernardeau02:

\vartheta^{(2)}({\bm{q}})=D_{2}\Delta^{(2)}\,, (E1)

so that the full equation for \vartheta in 2LPT reads

\vartheta=-D_{1}\Delta^{(1)}+D_{2}\Delta^{(2)}\,. (E2)

The second order spatial term depends on the first order term from equation [. This allows us to take the derivative with respect to \delta({\bm{q}}_{i})=\delta^{(1)}({\bm{q}}_{i}), obtaining

\frac{\partial\vartheta({\bm{q}})}{\partial\delta({\bm{q}}_{i})}=-\delta^{\mathrm{K}}_{{\bm{q}},{\bm{q}}_{i}}+D_{2}\frac{\partial\Delta^{(2)}}{\partial\delta({\bm{q}}_{i})}\,. (E3)

The second order term of the density in LPT, \Delta^{(2)}, is given by

\Delta^{(2)}\equiv\sum_{\underline{a}>\underline{b}}\left(\phi^{(1)}_{,\underline{a}\underline{a}}({\bm{q}})\phi^{(1)}_{,\underline{b}\underline{b}}({\bm{q}})-{\left(\phi^{(1)}_{,\underline{a}\underline{b}}({\bm{q}})\right)}^{2}\right)\,, (E4)

where the comma-\underline{a}/\underline{b} subscripts denote the derivative with respect to the \underline{a}th/\underline{b}th component of the coordinates (x, y, z). \phi^{(1)} is the first order term in the LPT expansion of the gravitational potential, which is found by solving the Poisson equation for the first order terms:

\nabla_{\bm{q}}^{2}\phi^{(1)}({\bm{q}})=\Delta^{(1)}\,. (E5)

For the derivative of \Delta^{(2)} in equation E3, one then obtains:

\frac{\partial\Delta^{(2)}}{\partial\delta({\bm{q}}_{i})}=\sum_{\underline{a}>\underline{b}}\left(\phi^{(1)}_{,\underline{a}\underline{a}}({\bm{q}})\frac{\partial\phi^{(1)}_{,\underline{b}\underline{b}}({\bm{q}})}{\partial\delta({\bm{q}}_{i})}+\phi^{(1)}_{,\underline{b}\underline{b}}({\bm{q}})\frac{\partial\phi^{(1)}_{,\underline{a}\underline{a}}({\bm{q}})}{\partial\delta({\bm{q}}_{i})}-2\phi^{(1)}_{,\underline{a}\underline{b}}({\bm{q}})\frac{\partial\phi^{(1)}_{,\underline{a}\underline{b}}({\bm{q}})}{\partial\delta({\bm{q}}_{i})}\right)\,. (E6)

The derivative of \phi^{(1)}_{,\underline{a}\underline{b}} with respect to \delta({\bm{q}}_{i})=D_{1}\Delta^{(1)}({\bm{q}}_{i}) can be computed in Fourier space. The Poisson equation in Fourier space can be written as

\hat{\phi}^{(1)}({\bm{k}})=-\frac{1}{k^{2}}\hat{\Delta}^{(1)}({\bm{k}})\,, (E7)

which gives us this relation for \phi^{(1)}:

\phi^{(1)}({\bm{q}})=-\frac{1}{N}\sum_{j}e^{i{\bm{k}}_{j}\cdot{\bm{q}}}\frac{1}{k_{j}^{2}}\hat{\Delta}^{(1)}({\bm{k}}_{j})\,. (E8)

Taking the derivatives with respect to the \underline{a}th and \underline{b}th coordinate components:

\begin{split}\phi^{(1)}_{,\underline{a}\underline{b}}({\bm{q}})&=\partial_{\underline{a}}\partial_{\underline{b}}\phi^{(1)}({\bm{q}})\\ &=\frac{1}{N}\sum_{j}e^{i{\bm{k}}_{j}\cdot{\bm{q}}}\frac{k_{j\underline{a}}k_{j\underline{b}}}{k_{j}^{2}}\left[\sum_{{\bm{q}}^{\prime}}e^{-i{\bm{k}}_{j}\cdot{\bm{q}}^{\prime}}\Delta^{(1)}({\bm{q}}^{\prime})\right]\,,\end{split} (E9)

and taking the derivative of this with respect to \delta({\bm{q}}_{i}), we end up with:

\begin{split}\frac{\partial\phi^{(1)}_{,\underline{a}\underline{b}}({\bm{q}})}{\partial\delta({\bm{q}}_{i})}&=\frac{1}{N}\sum_{j}e^{i{\bm{k}}_{j}\cdot{\bm{q}}}\frac{k_{j\underline{a}}k_{j\underline{b}}}{k_{j}^{2}}\left[\sum_{{\bm{q}}^{\prime}}e^{-i{\bm{k}}_{j}\cdot{\bm{q}}^{\prime}}\frac{\partial\Delta^{(1)}({\bm{q}}^{\prime})}{\partial D_{1}\Delta^{(1)}({\bm{q}}_{i})}\right]\\ &=\frac{1}{D_{1}N}\sum_{j}e^{i{\bm{k}}_{j}\cdot{\bm{q}}}\frac{k_{j\underline{a}}k_{j\underline{b}}}{k_{j}^{2}}\left[\sum_{{\bm{q}}^{\prime}}e^{-i{\bm{k}}_{j}\cdot{\bm{q}}^{\prime}}\delta^{\mathrm{K}}_{{\bm{q}}^{\prime},{\bm{q}}_{i}}\right]\\ &=\frac{1}{D_{1}N}\sum_{j}e^{i{\bm{k}}_{j}\cdot{\bm{q}}}\frac{k_{j\underline{a}}k_{j\underline{b}}}{k_{j}^{2}}e^{-i{\bm{k}}_{j}\cdot{\bm{q}}_{i}}\,.\end{split} (E10)

In the end, we can use the first exponential to get rid of the sum in equation [ by turning it into a DFT. The second exponential is used with the sum over j in this equation. Note that for this equation to be non-zero, the {\bm{q}}_{i} and {\bm{q}} grids need to be the same; if they are not, the Kronecker delta will always be zero. This final step thus necessarily couples the {\bm{q}}_{i} grid of the signal (the \delta({\bm{q}}) field sample) to the {\bm{x}}({\bm{q}}) grid used in the likelihood \mathscr{L}.
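Equations E4 and E9 translate into a handful of FFTs per component pair. The sketch below is an illustrative numpy implementation on a cubic periodic grid, with the zero mode discarded; the names and signatures are ours, not barcode's:

```python
import numpy as np

def phi1_hessian(Delta1, box_size):
    """All second derivatives phi^(1)_{,ab}(q) of equation E9, computed
    from the first order field via the Poisson equation E7."""
    n = Delta1.shape[0]
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kvec = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kvec[0]**2 + kvec[1]**2 + kvec[2]**2
    k2[0, 0, 0] = 1.0                          # discard the zero mode
    Delta1_hat = np.fft.fftn(Delta1)
    hess = np.empty((3, 3) + Delta1.shape)
    for a in range(3):
        for b in range(3):
            hess[a, b] = np.fft.ifftn(kvec[a] * kvec[b] / k2 * Delta1_hat).real
    return hess

def Delta2(Delta1, box_size):
    """Second order LPT term Delta^(2) of equation E4."""
    h = phi1_hessian(Delta1, box_size)
    out = np.zeros_like(Delta1)
    for a in range(3):
        for b in range(a):                     # sum over a > b
            out += h[a, a] * h[b, b] - h[a, b]**2
    return out
```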

Now, to simplify equation E6, we can write each of the terms in the \underline{a}>\underline{b} sum in a general way, as specific instances of a general four-index quantity. Including the multiplication by h({\bm{q}}_{m}) and the sum over the index m from equation A10, and using the result from equation E10, we then get for each generalized part (note the indices \underline{a}, \underline{b}, \underline{c} and \underline{d}) of the full term:

\begin{split}\sum_{m}&h({\bm{q}}_{m})D_{2}\phi^{(1)}_{,\underline{c}\underline{d}}({\bm{q}}_{m})\frac{\partial\phi^{(1)}_{,\underline{a}\underline{b}}({\bm{q}}_{m})}{\partial\delta({\bm{q}}_{i})}\\ &=\sum_{m}h({\bm{q}}_{m})\frac{D_{2}}{D_{1}}\frac{1}{N}\phi^{(1)}_{,\underline{c}\underline{d}}({\bm{q}}_{m})\sum_{j}e^{i{\bm{k}}_{j}\cdot{\bm{q}}_{m}}\frac{k_{j\underline{a}}k_{j\underline{b}}}{k_{j}^{2}}e^{-i{\bm{k}}_{j}\cdot{\bm{q}}_{i}}\\ &\equiv\frac{D_{2}}{D_{1}}\frac{1}{N}\sum_{j}\hat{\alpha}^{*}_{\underline{c}\underline{d}}({\bm{k}}_{j})\frac{k_{j\underline{a}}k_{j\underline{b}}}{k_{j}^{2}}e^{-i{\bm{k}}_{j}\cdot{\bm{q}}_{i}}\\ &\equiv\frac{D_{2}}{D_{1}}{\xi^{*}}^{\underline{a}\underline{b}}_{\underline{c}\underline{d}}({\bm{q}}_{i})\\ &=\frac{D_{2}}{D_{1}}\xi_{\underline{c}\underline{d}}^{\underline{a}\underline{b}}({\bm{q}}_{i})\,,\end{split} (E11)

where we define the convenience functions \alpha({\bm{q}}) and \xi({\bm{q}}) as

\alpha_{\underline{c}\underline{d}}({\bm{q}})\equiv{\left(h({\bm{q}})\phi^{(1)}_{,\underline{c}\underline{d}}({\bm{q}})\right)}^{*}=h({\bm{q}})\phi^{(1)}_{,\underline{c}\underline{d}}({\bm{q}})\,, (E12)
\begin{split}\hat{\xi}_{\underline{c}\underline{d}}^{\underline{a}\underline{b}}({\bm{k}})&\equiv{\left(\frac{k_{\underline{a}}k_{\underline{b}}}{k^{2}}\hat{\alpha}^{*}_{\underline{c}\underline{d}}({\bm{k}})\right)}^{*}=\frac{k_{\underline{a}}k_{\underline{b}}}{k^{2}}\hat{\alpha}_{\underline{c}\underline{d}}({\bm{k}})\\ &\Rightarrow\xi^{\underline{a}\underline{b}}_{\underline{c}\underline{d}}({\bm{q}})=\partial_{\underline{a}}\partial_{\underline{b}}\nabla^{-2}\left(h({\bm{q}})\phi^{(1)}_{,\underline{c}\underline{d}}({\bm{q}})\right)\,,\end{split} (E13)

where we get rid of the conjugations because \phi^{(1)}_{,\underline{c}\underline{d}} is real, as it is a physical term, h({\bm{q}}) is also real, as previously noted, and the {\bm{k}}’s are real scale vectors. The double conjugation on \hat{\alpha}_{\underline{c}\underline{d}} cancels out as well.

We will encounter \xi^{\underline{a}\underline{b}}_{\underline{c}\underline{d}} again in Section E3, but in a slightly different form. This prompts us to define a functional instead, which we can reuse later on and which simplifies our code, since the general form of the function needs to be written only once:

\chi^{\underline{a}\underline{b}}_{\underline{c}\underline{d}}(f({\bm{q}});{\bm{q}})\equiv\partial_{\underline{a}}\partial_{\underline{b}}\nabla^{-2}\left(f({\bm{q}})\phi^{(1)}_{,\underline{c}\underline{d}}({\bm{q}})\right)\,. (E14)
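The functional \chi is itself just two FFTs per evaluation. A minimal numpy sketch, returning all six independent \underline{a}\geq\underline{b} components at once; the dictionary layout and names are our choice:

```python
import numpy as np

def chi(f, phi1_cd, box_size):
    """The functional chi^{ab}_{cd}(f; q) of equation E14: second derivatives
    of the inverse Laplacian of f(q) * phi^(1)_{,cd}(q), where phi1_cd is one
    precomputed component phi^(1)_{,cd}. Returns a dict keyed on (a, b)."""
    n = f.shape[0]
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kvec = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kvec[0]**2 + kvec[1]**2 + kvec[2]**2
    k2[0, 0, 0] = 1.0                          # discard the zero mode
    src_hat = np.fft.fftn(f * phi1_cd)
    # d_a d_b of the inverse Laplacian in Fourier space: multiply by k_a k_b / k^2
    return {(a, b): np.fft.ifftn(kvec[a] * kvec[b] / k2 * src_hat).real
            for a in range(3) for b in range(a + 1)}
```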

Then, finally, we can write down a worked-out 2LPT version of the likelihood force (equation A10), combining the first and second order results:

F^{\mathscr{L},\mathrm{2LPT}}_{i}=-h({\bm{q}}_{i})+\frac{D_{2}}{D_{1}}\sum_{\underline{a}>\underline{b}}\left(\chi_{\underline{b}\underline{b}}^{\underline{a}\underline{a}}+\chi_{\underline{a}\underline{a}}^{\underline{b}\underline{b}}-2\chi_{\underline{a}\underline{b}}^{\underline{a}\underline{b}}\right)(h({\bm{q}}_{i});{\bm{q}}_{i})\,. (E15)

The argument (h({\bm{q}}_{i});{\bm{q}}_{i}) applies to all three \chis.

E3 The 2LPT+SC likelihood force

For the 2LPT+SC model, the divergence of the displacement field \vartheta(\delta^{(1)}({\bm{q}})) is approximated by the weighted sum of a 2LPT part on large scales and a spherical collapse part on small scales. The scales are split using a Gaussian convolution, meaning that there is no absolute splitting of scales; there will be overlap:

\begin{split}\vartheta&\equiv\left[K_{G}\ast\left({\bm{\nabla}}\cdot{\bm{\Psi}}^{\mathrm{2LPT}}\right)\right]+({\bm{\nabla}}\cdot{\bm{\Psi}}^{\mathrm{SC}})-\left[K_{G}\ast({\bm{\nabla}}\cdot{\bm{\Psi}}^{\mathrm{SC}})\right]\\ &=\left[K_{G}\ast\left(-D_{1}\Delta^{(1)}+D_{2}\Delta^{(2)}\right)\right]+3\left(\sqrt{1-\frac{2}{3}\delta}-1\right)\\ &\qquad{}-\left[K_{G}\ast\left(3\left(\sqrt{1-\frac{2}{3}\delta}-1\right)\right)\right]\,,\end{split} (E16)

where K_{G}({\bm{q}}) is a Gaussian kernel at some characteristic scale R_{G}. The convolution (denoted by \ast) with the 2LPT part mainly preserves the scales above R_{G}, while the “1-K_{G}” combination preserves the scales below it for the spherical collapse part. All terms are defined on the Lagrangian {\bm{q}} grid (we leave out the function arguments for brevity).
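Equation E16 amounts to two Gaussian smoothings in Fourier space (equation E19) plus the pointwise spherical collapse formula. A minimal numpy sketch, where flooring the argument of the square root at zero corresponds to the lower bound \vartheta^{\mathrm{SC}}=-3 discussed at the end of this appendix; the function names and signatures are ours:

```python
import numpy as np

def gaussian_smooth(f, box_size, r_g):
    """Convolve f with a Gaussian kernel K_G of scale R_G in Fourier space
    (equation E19)."""
    n = f.shape[0]
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    return np.fft.ifftn(np.exp(-0.5 * k2 * r_g**2) * np.fft.fftn(f)).real

def theta_2lpt_sc(Delta1, Delta2, D1, D2, box_size, r_g):
    """Scale-split divergence of the displacement field (equation E16):
    Gaussian-smoothed 2LPT on large scales plus spherical collapse on small
    scales. Delta1 and Delta2 are the first and second order source fields;
    delta = D1 * Delta1 as in equation A1."""
    delta = D1 * Delta1
    theta_sc = 3.0 * (np.sqrt(np.maximum(1.0 - 2.0 / 3.0 * delta, 0.0)) - 1.0)
    theta_2lpt = -D1 * Delta1 + D2 * Delta2
    return (gaussian_smooth(theta_2lpt, box_size, r_g)
            + theta_sc - gaussian_smooth(theta_sc, box_size, r_g))
```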

The derivative we need can be identified as

\begin{split}\frac{\partial\vartheta({\bm{q}})}{\partial\delta({\bm{q}}_{i})}&=\left[K_{G}\ast\left(-K^{K}_{i}+D_{2}\frac{\partial\Delta^{(2)}}{\partial\delta({\bm{q}}_{i})}\right)\right]({\bm{q}})\\ &\quad+{\left(1-\frac{2}{3}\delta({\bm{q}})\right)}^{-\frac{1}{2}}K^{K}_{i}({\bm{q}})\\ &\quad-\left[K_{G}\ast\left({\left(1-\frac{2}{3}\delta({\bm{q}})\right)}^{-\frac{1}{2}}K^{K}_{i}\right)\right]({\bm{q}})\,.\end{split} (E17)

Here we use an unusual notation for Kronecker delta:

K^{K}_{i}({\bm{q}})\equiv\delta^{\mathrm{K}}_{{\bm{q}},{\bm{q}}_{i}}\,. (E18)

This helps us to write the Kronecker delta neatly in convolution notation. The 2LPT part of this equation has been worked out in Section E2, so we recycle those results below. The convolutions with the Gaussian are also most efficiently done in (discrete) Fourier space. Note that the conversion between continuous and discrete FTs causes the following relation, a convolution of a Gaussian with a general function f({\bm{q}}), to contain a small error:

\begin{split}[K_{G}\ast f]({\bm{q}})&=\frac{1}{N}\sum_{j}e^{i{\bm{k}}_{j}\cdot{\bm{q}}}\hat{K}_{G}({\bm{k}}_{j})\hat{f}({\bm{k}}_{j})\\ &=\frac{1}{N}\sum_{j}e^{i{\bm{k}}_{j}\cdot{\bm{q}}}e^{-\frac{k_{j}^{2}R_{G}^{2}}{2}}\hat{f}({\bm{k}}_{j})\,.\end{split} (E19)

In what follows, we will, term by term, reduce equation A10, combined with the 2LPT+SC prescription of this section, to a series of Fourier transforms of several intermediate convenience functions. The numbered terms that form

F^{\mathscr{L},\mathrm{2LPT+SC}}_{i}=①+②+③+④\,, (E20)

are

\begin{split}①&=\sum_{m}h({\bm{q}}_{m})\left[K_{G}\ast\left(-\delta^{\mathrm{K}}_{{\bm{q}}_{m},{\bm{q}}_{i}}\right)\right]\\ ②&=\sum_{m}h({\bm{q}}_{m})\left[K_{G}\ast\left(D_{2}\frac{\partial\Delta^{(2)}({\bm{q}}_{m})}{\partial\delta({\bm{q}}_{i})}\right)\right]\equiv\sum_{m}h({\bm{q}}_{m})\left[⊛\right]\\ ③&=\sum_{m}h({\bm{q}}_{m}){\left(1-\frac{2}{3}\delta({\bm{q}}_{m})\right)}^{-\frac{1}{2}}\delta^{\mathrm{K}}_{{\bm{q}}_{m},{\bm{q}}_{i}}\\ ④&=-\sum_{m}h({\bm{q}}_{m})\left[K_{G}\ast\left({\left(1-\frac{2}{3}\delta({\bm{q}}_{m})\right)}^{-\frac{1}{2}}\delta^{\mathrm{K}}_{{\bm{q}}_{m},{\bm{q}}_{i}}\right)\right]\,.\end{split}
E3.1 Term ①

At first sight, this term contains a somewhat unintuitive convolution with a Kronecker delta function. After some manipulation, however, it reduces to a more familiar form.

Using the DFT of the Kronecker delta:

\mathcal{F}_{D}\left[\delta^{\mathrm{K}}_{{\bm{x}},{\bm{x}}^{\prime}}\right]=\sum_{{\bm{x}}}e^{-i{\bm{k}}\cdot{\bm{x}}}\delta^{\mathrm{K}}_{{\bm{x}},{\bm{x}}^{\prime}}=e^{-i{\bm{k}}\cdot{\bm{x}}^{\prime}}\,, (E21)

we reshape term ① into a simple convolution:

\begin{split}①&=\sum_{m}h({\bm{q}}_{m})\left[K_{G}\ast\left(-K^{K}_{i}\right)\right]({\bm{q}}_{m})\\ &=-\frac{1}{N}\sum_{m}h({\bm{q}}_{m})\sum_{j}e^{i{\bm{k}}_{j}\cdot{\bm{q}}_{m}}\hat{K}_{G}({\bm{k}}_{j})e^{-i{\bm{k}}_{j}\cdot{\bm{q}}_{i}}\\ &=-\frac{1}{N}\sum_{j}\sum_{m}h({\bm{q}}_{m})e^{i{\bm{k}}_{j}\cdot{\bm{q}}_{m}}\hat{K}_{G}({\bm{k}}_{j})e^{-i{\bm{k}}_{j}\cdot{\bm{q}}_{i}}\\ &\equiv-\frac{1}{N}\sum_{j}\hat{c}^{*}({\bm{k}}_{j})\hat{K}_{G}({\bm{k}}_{j})e^{-i{\bm{k}}_{j}\cdot{\bm{q}}_{i}}\\ &=-{\left[K_{G}\ast c\right]}^{*}({\bm{q}}_{i})\\ &=-\left[K_{G}\ast c\right]({\bm{q}}_{i})\,,\end{split} (E22)

where the convenience function c({\bm{q}}) is defined as

c({\bm{q}})\equiv h^{*}({\bm{q}})=h({\bm{q}})\,, (E23)

and the conjugation has no effect because h({\bm{q}}) is real (and K_{G}\ast c is as well).

E3.2 Term ②

This term itself consists of several parts, coming from the terms in the sum of equation E6. In particular, the convolution part \left[⊛\right], in full, reads

\begin{split}\left[⊛\right]&=\frac{D_{2}}{N}\sum_{j}e^{i{\bm{k}}_{j}\cdot{\bm{q}}_{m}}\hat{K}_{G}({\bm{k}}_{j})\sum_{k}e^{-i{\bm{k}}_{j}\cdot{\bm{q}}_{k}}\\ &\qquad\sum_{\underline{a}>\underline{b}}\left(\phi^{(1)}_{,\underline{a}\underline{a}}({\bm{q}}_{k})\frac{\partial\phi^{(1)}_{,\underline{b}\underline{b}}({\bm{q}}_{k})}{\partial\delta({\bm{q}}_{i})}+\phi^{(1)}_{,\underline{b}\underline{b}}({\bm{q}}_{k})\frac{\partial\phi^{(1)}_{,\underline{a}\underline{a}}({\bm{q}}_{k})}{\partial\delta({\bm{q}}_{i})}-2\phi^{(1)}_{,\underline{a}\underline{b}}({\bm{q}}_{k})\frac{\partial\phi^{(1)}_{,\underline{a}\underline{b}}({\bm{q}}_{k})}{\partial\delta({\bm{q}}_{i})}\right)\,.\end{split} (E24)

As for equation E11, we can write the three different parts (as defined by the three summands between parentheses) of this equation as four-index quantities:

\begin{split}\frac{D_{2}}{D_{1}}{\eta}^{\underline{a}\underline{b}}_{\underline{c}\underline{d}}({\bm{q}}_{i})&\equiv\sum_{m}h({\bm{q}}_{m})\frac{D_{2}}{N}\sum_{j}e^{i{\bm{k}}_{j}\cdot{\bm{q}}_{m}}\hat{K}_{G}({\bm{k}}_{j})\sum_{k}e^{-i{\bm{k}}_{j}\cdot{\bm{q}}_{k}}\phi^{(1)}_{,\underline{c}\underline{d}}({\bm{q}}_{k})\frac{\partial\phi^{(1)}_{,\underline{a}\underline{b}}({\bm{q}}_{k})}{\partial\delta({\bm{q}}_{i})}\\ &=\sum_{m}h({\bm{q}}_{m})\frac{D_{2}}{D_{1}}\frac{1}{N^{2}}\sum_{j}e^{i{\bm{k}}_{j}\cdot{\bm{q}}_{m}}\hat{K}_{G}({\bm{k}}_{j})\sum_{k}e^{-i{\bm{k}}_{j}\cdot{\bm{q}}_{k}}\phi^{(1)}_{,\underline{c}\underline{d}}({\bm{q}}_{k})\sum_{l}e^{i{\bm{k}}_{l}\cdot{\bm{q}}_{k}}\frac{k_{l\underline{a}}k_{l\underline{b}}}{k_{l}^{2}}e^{-i{\bm{k}}_{l}\cdot{\bm{q}}_{i}}\\ &\equiv\frac{D_{2}}{D_{1}}\frac{1}{N^{2}}\sum_{j}\hat{c}^{*}({\bm{k}}_{j})\hat{K}_{G}({\bm{k}}_{j})\sum_{k}e^{-i{\bm{k}}_{j}\cdot{\bm{q}}_{k}}\phi^{(1)}_{,\underline{c}\underline{d}}({\bm{q}}_{k})\sum_{l}e^{i{\bm{k}}_{l}\cdot{\bm{q}}_{k}}\frac{k_{l\underline{a}}k_{l\underline{b}}}{k_{l}^{2}}e^{-i{\bm{k}}_{l}\cdot{\bm{q}}_{i}}\\ &\equiv\frac{D_{2}}{D_{1}}\frac{1}{N}\sum_{k}b^{*}({\bm{q}}_{k})\phi^{(1)}_{,\underline{c}\underline{d}}({\bm{q}}_{k})\sum_{l}e^{i{\bm{k}}_{l}\cdot{\bm{q}}_{k}}\frac{k_{l\underline{a}}k_{l\underline{b}}}{k_{l}^{2}}e^{-i{\bm{k}}_{l}\cdot{\bm{q}}_{i}}\\ &\equiv\frac{D_{2}}{D_{1}}\frac{1}{N}\sum_{l}\hat{a}^{*}_{\underline{c}\underline{d}}({\bm{k}}_{l})\frac{k_{l\underline{a}}k_{l\underline{b}}}{k_{l}^{2}}e^{-i{\bm{k}}_{l}\cdot{\bm{q}}_{i}}\\ &\equiv\frac{D_{2}}{D_{1}}{\eta^{*}}^{\underline{a}\underline{b}}_{\underline{c}\underline{d}}({\bm{q}}_{i})\\ &=\frac{D_{2}}{D_{1}}{\eta}^{\underline{a}\underline{b}}_{\underline{c}\underline{d}}({\bm{q}}_{i})\,,\end{split} (E25)

where the conjugation trivially drops out and we further define convenience functions b({\bm{q}}), a_{\underline{c}\underline{d}}({\bm{q}}) and \eta_{\underline{c}\underline{d}}^{\underline{a}\underline{b}}({\bm{k}}) as

\hat{b}({\bm{k}})\equiv{\left(\hat{c}^{*}({\bm{k}})\hat{K}_{G}({\bm{k}})\right)}^{*}=\hat{h}({\bm{k}})\hat{K}_{G}({\bm{k}})\,,
\Rightarrow b({\bm{q}})=\left[K_{G}\ast h\right]({\bm{q}}) (E26)
a_{\underline{c}\underline{d}}({\bm{q}})\equiv{\left(b^{*}({\bm{q}})\phi^{(1)}_{,\underline{c}\underline{d}}({\bm{q}})\right)}^{*}=b({\bm{q}})\phi^{(1)}_{,\underline{c}\underline{d}}({\bm{q}})\,, (E27)
\hat{\eta}^{\underline{a}\underline{b}}_{\underline{c}\underline{d}}({\bm{k}})\equiv{\left(\frac{k_{\underline{a}}k_{\underline{b}}}{k^{2}}\hat{a}^{*}_{\underline{c}\underline{d}}({\bm{k}})\right)}^{*}=\frac{k_{\underline{a}}k_{\underline{b}}}{k^{2}}\hat{a}_{\underline{c}\underline{d}}({\bm{k}})
\Rightarrow\eta^{\underline{a}\underline{b}}_{\underline{c}\underline{d}}({\bm{q}})=\partial_{\underline{a}}\partial_{\underline{b}}\nabla^{-2}\left(\left[K_{G}\ast h\right]({\bm{q}})\,\phi^{(1)}_{,\underline{c}\underline{d}}({\bm{q}})\right)\,. (E28)

Here we can again easily get rid of the conjugations. Apart from the obvious term-wise double conjugations: the first term contains the Gaussian kernel, which is real; \phi^{(1)}_{,\underline{c}\underline{d}} is also real, as it is a physical term; and the {\bm{k}}'s are real scale vectors. Finally, we replace \eta by \chi:

\eta^{\underline{a}\underline{b}}_{\underline{c}\underline{d}}({\bm{q}})=\chi^{\underline{a}\underline{b}}_{\underline{c}\underline{d}}(b({\bm{q}});{\bm{q}})\,. (E29)

Putting it all together, term ②, in full, reads

②=\frac{D_{2}}{D_{1}}\sum_{\underline{a}>\underline{b}}\left({\chi}^{\underline{a}\underline{a}}_{\underline{b}\underline{b}}+{\chi}^{\underline{b}\underline{b}}_{\underline{a}\underline{a}}-2{\chi}^{\underline{a}\underline{b}}_{\underline{a}\underline{b}}\right)(b({\bm{q}}_{i});{\bm{q}}_{i})\,, (E30)

which is almost the same as the 2LPT part in equation E15, but with a Gaussian-smoothed version of h, namely b.

E3.3 Term ③

This is the simplest term: the sum over the expression with the Kronecker delta simply changes the {\bm{q}}_{m}'s into {\bm{q}}_{i}'s:

③=\frac{h({\bm{q}}_{i})}{\sqrt{1-\frac{2}{3}\delta({\bm{q}}_{i})}}\,. (E31)
E3.4 Term ④
\begin{split}④&=-\sum_{m}h({\bm{q}}_{m})\left[K_{G}\ast\left({\left(1-\frac{2}{3}\delta\right)}^{-\frac{1}{2}}K^{K}_{i}\right)\right]({\bm{q}}_{m})\\ &\equiv-\frac{1}{N}\sum_{m}h({\bm{q}}_{m})\sum_{j}e^{i{\bm{k}}_{j}\cdot{\bm{q}}_{m}}\hat{K}_{G}({\bm{k}}_{j})\sum_{k}e^{-i{\bm{k}}_{j}\cdot{\bm{q}}_{k}}\delta^{\mathrm{K}}_{{\bm{q}}_{k},{\bm{q}}_{i}}\Delta({\bm{q}}_{k})\\ &=-\frac{1}{N}\sum_{j}\hat{c}^{*}({\bm{k}}_{j})\hat{K}_{G}({\bm{k}}_{j})\sum_{k}e^{-i{\bm{k}}_{j}\cdot{\bm{q}}_{k}}\delta^{\mathrm{K}}_{{\bm{q}}_{k},{\bm{q}}_{i}}\Delta({\bm{q}}_{k})\\ &=-\sum_{k}{\left[K_{G}\ast c\right]}^{*}({\bm{q}}_{k})\Delta({\bm{q}}_{k})\delta^{\mathrm{K}}_{{\bm{q}}_{k},{\bm{q}}_{i}}\\ &=-\Delta({\bm{q}}_{i}){\left[K_{G}\ast c\right]}^{*}({\bm{q}}_{i})\\ &=-\Delta({\bm{q}}_{i})\left[K_{G}\ast c\right]({\bm{q}}_{i})\,,\end{split} (E32)

where we used the convenience function \Delta({\bm{q}})

\Delta({\bm{q}})\equiv{\left(1-\frac{2}{3}\delta({\bm{q}})\right)}^{-1/2}\,. (E33)

At first sight it may seem counter-intuitive that, while we started with a convolution of the Gaussian with one function (\Delta({\bm{q}})\delta^{\mathrm{K}}_{{\bm{q}},{\bm{q}}_{i}}), we end up convolving another function (c({\bm{q}})) and do not convolve with the original one at all. This becomes clear once one realizes that what we are calculating is a double convolution. In fact, we can derive the term in another way, without going to Fourier space, using only the definition of a discrete convolution:

\begin{split}④&=-\sum_{m}h({\bm{q}}_{m})\left[K_{G}\ast\left({\left(1-\frac{2}{3}\delta({\bm{q}}_{m})\right)}^{-\frac{1}{2}}\delta^{\mathrm{K}}_{{\bm{q}}_{m},{\bm{q}}_{i}}\right)\right]\\ &=-\sum_{m}h({\bm{q}}_{m})\sum_{j}K_{G}({\bm{q}}_{m}-{\bm{q}}_{j})\Delta({\bm{q}}_{j})\delta^{\mathrm{K}}_{{\bm{q}}_{j},{\bm{q}}_{i}}\\ &=-\sum_{j}\left[K_{G}\ast h\right]({\bm{q}}_{j})\Delta({\bm{q}}_{j})\delta^{\mathrm{K}}_{{\bm{q}}_{j},{\bm{q}}_{i}}\\ &=-\left[K_{G}\ast h\right]({\bm{q}}_{i})\Delta({\bm{q}}_{i})\,,\end{split} (E34)

where in the third step we used that K_{G}({\bm{q}})=K_{G}(-{\bm{q}}). Term ④ is a convolution of h({\bm{q}}) with the convolution of K_{G} with the other term, and convolutions are associative. However, because the latter term contains a Kronecker delta, matters become less transparent, as they always do when delta functions turn up. With this derivation, we also immediately see that the c({\bm{q}}) function is superfluous, as we already concluded when we first encountered it in term ① (equation E22). In fact, term ① could also easily have been derived with this double convolution method, circumventing the complication of the Fourier transform of the Kronecker delta function.

E3.5 The full 2LPT+SC likelihood force

Putting it all together, we obtain for minus the derivative of the likelihood in a 2LPT+SC model:

\begin{split}F^{\mathscr{L},\mathrm{2LPT+SC}}_{i}&=-\left[K_{G}\ast h\right]({\bm{q}}_{i})\\ &\quad+\frac{D_{2}}{D_{1}}\sum_{\underline{a}>\underline{b}}\left({\chi}^{\underline{a}\underline{a}}_{\underline{b}\underline{b}}+{\chi}^{\underline{b}\underline{b}}_{\underline{a}\underline{a}}-2{\chi}^{\underline{a}\underline{b}}_{\underline{a}\underline{b}}\right)(b({\bm{q}}_{i});{\bm{q}}_{i})\\ &\quad+\frac{h({\bm{q}}_{i})-\left[K_{G}\ast h\right]({\bm{q}}_{i})}{\sqrt{1-\frac{2}{3}\delta({\bm{q}}_{i})}}\,,\end{split} (E35)

with \chi^{\underline{a}\underline{b}}_{\underline{c}\underline{d}}(f({\bm{q}});{\bm{q}}) defined in equation E14 and b({\bm{q}}) in equation E26. There are no conjugations left, and the remaining c functions have been replaced by h.

Note that in the last term the square root imposes an upper bound on the density: \delta({\bm{q}}_{i})\leq\frac{3}{2}. For larger values, the square root becomes imaginary.

In fact, this is supposed to happen: the lower bound of the stretching parameter \vartheta^{\mathrm{SC}} is -3 \citepneyrinck13. This means that the given formula is only valid up to the given upper bound. Beyond it, the entire term (i.e. the sum of terms ③ and ④) becomes zero, the derivative of the constant -3, i.e.:

\begin{split}F^{\mathscr{L},\mathrm{2LPT+SC}}_{i}&=-\left[K_{G}\ast h\right]({\bm{q}}_{i})\\ &\quad+\frac{D_{2}}{D_{1}}\sum_{\underline{a}>\underline{b}}\left({\chi}^{\underline{a}\underline{a}}_{\underline{b}\underline{b}}+{\chi}^{\underline{b}\underline{b}}_{\underline{a}\underline{a}}-2{\chi}^{\underline{a}\underline{b}}_{\underline{a}\underline{b}}\right)(b({\bm{q}}_{i});{\bm{q}}_{i})\\ &\quad+\begin{cases}\frac{h({\bm{q}}_{i})-\left[K_{G}\ast h\right]({\bm{q}}_{i})}{\sqrt{1-\frac{2}{3}\delta({\bm{q}}_{i})}}\,,&\text{if }\delta({\bm{q}}_{i})\leq\frac{3}{2}\,,\\ 0\,,&\text{otherwise.}\end{cases}\end{split} (E36)

This paper has been typeset from a TeX/LaTeX file prepared by the author.
