Modelling non-linear evolution using Lagrangian Perturbation Theory (LPT) re-expansions

Sharvari Nadkarni-Ghosh and David F. Chernoff
Department of Theoretical Sciences, S.N. Bose N.C.B.S, Sector III, Block JD, Salt Lake, Kolkata, W.B. 700098 India
Department of Physics, I.I.T. Kanpur, Kanpur, U.P. 208016 India
Department of Astronomy, Cornell University, Ithaca, NY 14853 USA

We present a new method to calculate formation of cosmological structure in the Newtonian limit. The method is based on Lagrangian perturbation theory plus two key theoretical extensions. One advance involves identifying and fixing a previously ignored gauge-like degree of freedom relating quantities calculated in LPT to those measured by a preferred Friedmann-Robertson-Walker (FRW) observer. Handling this connection between calculational and observer frames is physically essential and ensures a momentum conserving description. The second extension is to systematically re-expand the equations of motion to increase LPT’s radius of convergence to the maximum future time prior to orbit crossing. The paper implements a complete algorithm and performs extensive “proof of principle” tests of the new method, including direct comparison to known solutions, evaluation of conserved quantities and formal convergence studies. All are satisfactory. We show convergence is exponential in grid size and Lagrangian order and polynomial in step size. There are three powerful advantages afforded by the new technique: (1) it employs a smooth representation of all fields and the results are not limited by particle induced shot-noise errors, (2) it permits the numerical error to be controlled by changing Lagrangian order and/or number of steps allowing, in principle, arbitrarily small errors to be achieved prior to orbit crossing and (3) it handles generic cold initial data (any periodic density and velocity fields, including those with initial rotational components). Together, these properties make the new technique well-suited to handle quasi-linear scales where analytic methods and/or numerical simulations fail to provide suitably accurate answers.

cosmology: theory – large-scale structure of Universe.
pubyear: 2013

1 Introduction

An important means of determining and/or constraining cosmological parameters is to compare the growth of structure as predicted by theoretical models with that observed in the actual universe. The complexity and sophistication of theoretical approaches span an enormous range of possibilities. Linear perturbation theory provides the simplest analytic description of growth, generally suitable when the perturbation expansion parameter is small. Quasi-linear theory extends the accuracy and applicability of the description at the cost of increasingly complex formulations. Finally, an N-body simulation is capable of handling fully non-linear situations and delivering, in principle, exact answers albeit at substantial computational cost and without the benefit of analytic insight. The choice of a particular method is best determined by the sort of problem one intends to solve. For example, if one is interested in calculating how an initial Gaussian perturbation spectrum generates non-Gaussianity as structure forms then one seeks an accurate method for treating perturbations once mode-mode coupling becomes important. As in Goldilocks’s fable the extreme choices may prove unpalatable: on the one hand, the coupling of interest is absent in a purely linear analysis; on the other, particle-based methodologies introduce spurious shot noise in representations of the underlying smooth density distribution so that high accuracy may demand infeasible numbers of particles.

This paper’s focus is the development of a middle approach, a method for the numerical treatment of structure formation based on intrinsically smooth descriptions of the density and velocity fields and capable of tracing quasi-linear evolution to arbitrary accuracy. We anticipate it will find application in situations where small non-linear effects and/or highly accurate results prior to collapse are of primary interest. The physical context is Newtonian cosmology i.e. sub-horizon scales, non-relativistic velocities and weak gravitational fields. The methods are applicable until the formation of the first caustic. Future studies will extend the methodology to reach beyond particle crossing and to handle physical scales approaching that of the horizon.

Lagrangian perturbation theory (LPT), the heart of the approach employed here, has been well-studied. Lagrange variables in the context of gravity were introduced more than four decades ago (Novikov 1969). Their use in cosmology was initiated by Zel'dovich, who gave an approximate analytic solution for special initial conditions with peculiar velocity proportional to peculiar acceleration (Zel'dovich 1970). This is the famous "Zel'dovich approximation" for the evolution of growing modes where "approximate" means accurate to first order in particle displacements. Buchert's studies (Buchert 1989; Buchert 1992, henceforth B92) enlarged the class of initial conditions that could be solved to first-order. Generalizations of the equations of motion, the initial conditions, the perturbation order and the method of solution have appeared regularly (Moutarde et al. 1991; Bouchet et al. 1992; Buchert & Ehlers 1993, henceforth BE93; Buchert 1994, henceforth B94; Catelan 1995; Munshi, Sahni, & Starobinsky 1994; Bouchet et al. 1995; Ehlers & Buchert 1997, henceforth EB97; Rampf & Buchert 2012; Rampf & Rigopoulos 2012; Rampf & Wong 2012; Rampf 2012; Tatekawa 2012). Physical extensions include general relativistic equations of motion (Kasai 1995) and the treatment of relativistic fluids (Matarrese & Terranova 1996; Matarrese, Pantano, & Saez 1993, 1994).

LPT has proven to be a practical tool in many different applications including modelling the non-linear halo mass functions (Monaco 1997; Monaco, Theuns, & Taffoni 2002; Scoccimarro & Sheth 2002), reconstructing baryon acoustic oscillations (Eisenstein et al. 2007), modelling the non-linear density velocity relation (Kitaura et al. 2012) and setting up the initial conditions for fully numerical simulations (Scoccimarro 1998; Crocce, Pueblas, & Scoccimarro 2006).

An assumption common to all these treatments is that the matter velocity at a point in space is single valued (the distribution is “cold”). LPT breaks down once particle orbits begin to cross, caustics appear, densities diverge and physical singularities first form. Nothing in this paper ameliorates the onset of these effects. There have been interesting suggestions on how one might modify LPT to handle this fundamental limitation (e.g. Adler & Buchert 1999) but we do not pursue the issue here. For our purposes the immediate concern is a new limitation of LPT that has recently come to light. The convergence of the formal series expansion which is the basis of LPT turns out to be limited in time (Nadkarni-Ghosh & Chernoff 2011, hereafter NC11). No one would be surprised that the onset of particle crossings inhibits the utility of the formal perturbation expansion for subsequent times. What was unexpected and striking to us was the existence of limits in a variety of other situations, even cosmologies lacking future particle crossings and devoid of future physical singularities. To the best of our knowledge Sahni & Shandarin (1996) were the first to report the failure of LPT to describe one of the simplest cosmological problems, the evolution of spherical homogeneous voids. We confirmed their result, investigated the issue for all top-hat cosmologies and traced the problem to the occurrence of poles in a suitable set of analytically continued equations of motion. We complemented this mathematical explanation with direct numerical evaluation of the LPT series in all qualitatively distinct regimes for top-hat like cosmologies. The upshot is the existence of a non-trivial limit to the future times for which the LPT description converges. 
We dubbed the interval over which the perturbation expansion converges the "time of validity." Even if one could work to infinite order in the perturbation amplitude the LPT series would fail to give the correct answer at times outside this interval. The end of the time of validity precedes any limitation due to particle crossing. Figure 1 shows the first 15 orders of the LPT description for a simple underdense top-hat; the time of validity extends only up to a finite future time.

Like us many readers may find it odd that this behaviour had not been previously reported in a model as well-studied as the top-hat but we can point to two reasons. First, convergence of LPT had never been addressed in a systematic fashion. Second, for collapsing Zel’dovich initial conditions (initial velocity and acceleration proportional) the situation is degenerate in the following sense: the convergence limitations become identical to the moment of caustic formation. Generically, however, the time of validity for top-hat evolution does not extend up to the first particle crossings and is the more stringent restriction. To calculate the “answer” beyond the time of validity we proposed to evaluate the solution at an intermediate time when the series is still valid and use the results as initial conditions for a new expansion about the intermediate point. We worked out the details for this solution technique which we dubbed “LPT re-expansion”. It is roughly analogous to analytic continuation of a power series in the complex plane. Re-expansion will not circumvent the ultimate limitation set by particle crossing but it will allow the solution to be extended to this maximum possible future time after which the flow is no longer cold. We provided detailed numerical examples of how LPT re-expansion handles the previous difficulties encountered in the top-hat problem.
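The analogy with analytic continuation of a power series can be made concrete with a toy example. The sketch below (purely illustrative; not the paper's algorithm) evaluates $f(t)=1/(1+t)$ at $t=2.5$, which a single Taylor expansion about $t=0$ cannot reach because its radius of convergence is 1; re-expanding about an intermediate point, using only the value computed at that point, extends the reach of the series in exactly the spirit of LPT re-expansion:

```python
def f_series(f_t0, dt, n_terms):
    """Evaluate f(t0+dt) for f(t) = 1/(1+t), given only the value f(t0),
    via the Taylor series about t0: sum_n (-1)^n dt^n f(t0)^(n+1).
    Converges only for |dt| < 1/f(t0) = 1 + t0."""
    term, total = f_t0, 0.0
    for _ in range(n_terms):
        total += term
        term *= -dt * f_t0        # recursive update avoids overflow/underflow
    return total

f0 = 1.0                          # f(0) = 1/(1+0)
# One expansion about t = 0 cannot reach t = 2.5 (radius 1), but two steps can:
f_08 = f_series(f0, 0.8, 200)         # step 1: 0 -> 0.8, inside radius 1
f_25 = f_series(f_08, 2.5 - 0.8, 2000)  # step 2: radius is now 1.8 > 1.7
print(f_25, 1.0 / 3.5)                # both ~0.2857142857
```

Just as in the text, the second expansion uses only data produced by the first step, and the reachable domain grows with each re-expansion.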

Although our analysis was specific to one of the simplest of all cosmological problems we believe these ideas are valid in more general circumstances. Consequently, one should shift from thinking of LPT as a one step, stand-alone method of calculation to viewing it as the fundamental operation in a numerical, multi-step method of solution. Our development of LPT re-expansion resembles a traditional finite-difference method in that the system is updated on a step-by-step basis. Just as is true for particle-based calculations, stability limits how big a step can be taken and accuracy varies with step size and expansion order.

This paper works out LPT with re-expansion for general inhomogeneous initial conditions in a flat Friedmann-Robertson-Walker (FRW) background cosmology. We present fundamental mathematical results, practical computational methods and stringent numerical tests. This paper serves as a ‘proof of principle’ demonstration of the method. We will provide practical cosmological applications in subsequent papers.

In §2 we describe the Lagrangian framework and the relationship of the full solution of the geodesic equation of motion for particles to that provided by the LPT expansion. The formal expression for the latter is derived by first taking spatial derivatives of the geodesic equation (B92; BE93; B94; EB97). This system is invariant under spatially uniform, time-dependent translations. As we show in some detail, it is necessary to fix this degree of freedom to recover the physical solution. This involves supplementing the order-by-order LPT solution with purely temporal functions, which we refer to as ‘frame shifts’. This issue has never been addressed for single-step applications of LPT. For some initial conditions the omission turns out to be inconsequential but for most others it breaks momentum conservation. Moreover, when multiple steps are taken it becomes crucial to employ the full solution at each stage of the calculation. We will show that the numerical method converges if and only if that is done.

This paper notably differs from most previous approaches by considering initial conditions of a completely general nature. LPT is often used to propagate the small fluctuations present at recombination to a modest redshift for the purpose of initialising an N-body treatment. In studies of cosmological structure formation it is common to begin from Zel’dovich initial conditions since vorticity modes decay away in an expanding universe. The damping is normally sufficient to obviate the influence of the initial vorticity so that an LPT treatment of the growing modes alone is adequate for setting up an N-body treatment of structure formation in pure CDM. Vorticity is intrinsically generated once non-linearities form and particle crossing begins; these complications are accurately handled by traditional N-body methods. In this context, there is little need for treating arbitrary initial data.

Of course having a capability to study both longitudinal and transverse motions opens up interesting possibilities involving additional participants in structure formation such as magnetic fields and cosmic strings since these may act to introduce vorticity into the cosmic flows at late times. But the main motivation to work in full generality is to facilitate the use of LPT as part of a future calculational approach meant to treat non-linear structure formation. In any realistic multi-step numerical approach (analogous to that developed herein) vorticity will inevitably appear just as it does in traditional N-body treatments. After it arises each step of a multi-step calculation must be able to handle essentially arbitrary density and velocity configurations as initial data. The ability to treat these initial conditions with LPT is a pre-requisite for utilizing LPT itself in the future approach. That is the main reason we work in full generality.

§2 includes an outline of the design and implementation of the re-expansion algorithm. §3 summarizes tests for both special and general initial conditions. We test the algorithm’s ability to reproduce known solutions. We analyse the dependence of errors on Lagrangian order, step size and size of the numerical grid. Taken together these tests validate the concept of Lagrangian re-expansion, its analytic form and our computational implementation. All tests are for small grid sizes but we expect the methods demonstrated to apply without essential modification to larger scale calculations.

§4 presents conclusions and possible future developments.

Figure 1: Convergence of LPT. The scale factors of the background and of the underdense spherical perturbation are compared. The black dotted line is the exact open-model solution for the specified initial underdensity and fractional Hubble velocity. The blue lines show successively higher-order LPT approximations, from first to fifteenth order (as labeled). The LPT series converges only up to a finite time even though there is no future singularity in this cosmology. LPT re-expansion provides a method to extend the solution to arbitrarily late times in this example.

2 Gravitational field equations in the Lagrangian framework

2.1 General Setup

Assume a flat FRW background cosmology with scale factor $a(t)$ that contains pressureless dark matter with average density $\bar\rho_m$ and time-dependent dark energy with average density $\rho_{de}$ having fixed equation of state parameter $w$. We do not distinguish between cold baryons and cold collisionless matter and we work before the formation of caustics. The homogeneous background evolution is determined by the initial scale factor $a_0$, Hubble constant $H_0$ and matter and dark energy densities, $\bar\rho_{m,0}$ and $\rho_{de,0}$ respectively. The subscript "0" indicates the initial time $t_0$, which may differ from the current epoch.

We intend that the initial conditions allow for arbitrary density and velocity perturbations of cold dark matter subject to minimal restrictions. We constrain the dark energy component to be spatially uniform at all times. (Although it has been suggested that quintessence models with $w \neq -1$ should have spatial fluctuations (for example Caldwell, Dave, & Steinhardt 1997), these fluctuations are estimated to be small (for example Mota, Shaw, & Silk 2008; Cooray, Holz, & Caldwell 2010).) We work in an expanding, periodic volume, a 3-torus. For physical, cosmological applications this volume is intended to be (1) a fair representation of the actual universe and (2) one that can be accurately treated in the Newtonian limit. Our perturbative approach does not allow any back-reaction of the small-scale physics on the large-scale expansion of the homogeneous and isotropic background. Consistency requires that the initial mean density of the inhomogeneous matter distribution agree exactly with that of the background cosmology. Mathematically, the evolution of the inhomogeneous Newtonian problem is completely determined by the otherwise arbitrary, initial density and velocity perturbations. We must check, in principle, that the Newtonian limit is valid at all times.

2.2 Lagrangian Picture

We will start by assuming that the coordinate system of the periodic volume is at rest with respect to the family of preferred world lines in FRW cosmology, i.e. those for which the CMB dipole vanishes. We will refer to this starting point as the "inertial frame" (denoted OBS) and the observers as "inertial observers".

In the Lagrangian picture the particle position $\mathbf{x}(\mathbf{X},t)$ is a function of time and a set of Lagrangian labels $\mathbf{X}$. The labels are assigned at the initial time and, for our purposes, are always taken to be initial comoving Eulerian positions. We write $\mathbf{x}(\mathbf{X},t)$ for the instantaneous comoving coordinate of a fluid particle. By contrast, in the Eulerian picture $\mathbf{x}$ is an independent variable that identifies a fixed grid position. For notational simplicity we will often use the descriptive shorthand that $\mathbf{x}$ is either Lagrangian (equivalently, a particle) or Eulerian.

In many applications one starts with a finite, discrete set of particles (e.g. an N-body code) but here we assume space is completely filled with particles. In effect, we promote the discrete set of Lagrangian labels to a continuous field throughout space so that at any time it is possible to relate an Eulerian grid coordinate to a Lagrangian label in a one-to-one fashion. Mathematically, the existence of a well-defined velocity field at every point in space establishes this result (EB97). The Lagrangian particle velocity is identical to the Eulerian velocity at the grid position that coincides with the particle position.

Throughout this paper we will use the following convention for density: $\rho(\mathbf{r},t)$ is the physical density expressed as a function of $\mathbf{r}$, the physical position, and $\varrho(\mathbf{x},t)$ is the same numerical quantity expressed as a function of $\mathbf{x}$, the comoving coordinate. Explicitly, we have

$$\varrho(\mathbf{x},t) = \rho(\mathbf{r},t), \qquad \mathbf{r} = a(t)\,\mathbf{x} .$$

Note that $\varrho(\mathbf{x},t)$ is not the comoving density.

Likewise, we employ a similar convention for density written in terms of the Lagrangian variable

$$\varrho(\mathbf{X},t) \equiv \varrho(\mathbf{x}(\mathbf{X},t),t) ,$$

where $\mathbf{r}(\mathbf{X},t) = a(t)\,\mathbf{x}(\mathbf{X},t)$ is the physical position of the particle labeled by Lagrangian coordinate $\mathbf{X}$ and $J(\mathbf{X},t) = \det\left(\partial x_i/\partial X_j\right)$ is the determinant of the coordinate transformation. The distinguishing feature of the Lagrangian labels is that they track the mass so, in addition to the above general properties, we also have mass conservation in the form

$$\varrho(\mathbf{X},t_1)\, a^3(t_1)\, J(\mathbf{X},t_1) = \varrho(\mathbf{X},t_2)\, a^3(t_2)\, J(\mathbf{X},t_2)$$

for any times $t_1$ and $t_2$.
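As a numerical illustration of this Jacobian bookkeeping (a minimal sketch with a made-up separable displacement field, not the paper's implementation), one can evaluate $J = \det(\partial x_i/\partial X_j)$ on a periodic grid using spectral derivatives and check that the periodic map preserves volume on average, $\langle J\rangle = 1$:

```python
import numpy as np

N, L, A = 32, 1.0, 0.02          # grid size, box length, displacement amplitude (toy values)
k = 2 * np.pi / L
X = np.arange(N) * (L / N)
X1, X2, X3 = np.meshgrid(X, X, X, indexing="ij")

# toy periodic displacement: x_i = X_i + A sin(k X_i)
p = np.stack([A * np.sin(k * X1), A * np.sin(k * X2), A * np.sin(k * X3)])

kfreq = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
def ddX(f, axis):
    """Spectral derivative of a periodic field along one axis."""
    kk = np.moveaxis(kfreq.reshape(-1, 1, 1), 0, axis)
    return np.real(np.fft.ifftn(1j * kk * np.fft.fftn(f)))

# deformation tensor dp_i/dX_j and Jacobian J = det(I + dp/dX) at each grid point
grad = np.array([[ddX(p[i], j) for j in range(3)] for i in range(3)])
J = np.linalg.det(np.eye(3) + np.moveaxis(grad, (0, 1), (-2, -1)))

print(J.mean())                  # ~1.0: a periodic displacement preserves volume on average
```

Since the comoving density scales as $1/J$, a positive $J$ everywhere also signals that no orbit crossing has occurred for this displacement amplitude.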

In comoving coordinates the Newtonian limit of the geodesic equation and of Poisson's equation for the gravitational potential are

$$\frac{d^2\mathbf{x}}{dt^2} + 2H\,\frac{d\mathbf{x}}{dt} = -\frac{1}{a^2}\,\nabla_x \phi , \qquad \nabla_x^2 \phi = 4\pi G a^2\, \delta\rho_m ,$$

where $a(t)$ is the scale factor ($H = \dot a/a$) and $\delta\rho_m$ is the physical matter density perturbation. There is also an equation for mass conservation. The $\mathbf{x}$-related terms are Eulerian except for those on the left-hand side of the first equation which refer to particles ($\mathbf{x}(\mathbf{X},t)$ and its time derivatives). The time-derivative $d/dt$ is the 'Lagrangian' or 'convective' derivative,

$$\frac{d}{dt} = \frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla_x ,$$

where $\mathbf{u}$ is the Eulerian velocity. The operator $d/dt$ commutes with the Lagrangian spatial derivative but not the Eulerian one.

To make mathematical sense of these equations as a Lagrangian system we should, schematically, specify Lagrangian labels at the initial time and transform the Eulerian to Lagrangian forms while imposing the condition that the Eulerian coordinate match the particle position.

We regard the system as posing an initial value problem for the particle positions and velocities. If the initial conditions are specified as Eulerian density and velocity perturbations then they must be transformed to particle positions and velocities. The density perturbation is a functional of the initial density and the initial and final particle positions. The potential can be solved at any time the density is known.

In physical coordinates we have

$$\ddot{\mathbf{r}} = -\nabla_r \Phi , \qquad \nabla_r^2 \Phi = 4\pi G\left[\rho_m + (1+3w)\,\rho_{de}\right] ,$$

where the dot denotes the total derivative with respect to time. As above, $\mathbf{r}$ and $\ddot{\mathbf{r}}$ (left-hand side of the first equation) refer to particles while the structure of the remaining terms is Eulerian.

Buchert and Ehlers' derivation of the Lagrangian equations of motion (B92, BE93, B94, EB97) involves taking the Eulerian divergence and curl of the first equation, combining with the second and using the homogeneous equations for the scale factor to give

$$\nabla_r \cdot \ddot{\mathbf{r}} = -4\pi G\left[\rho_m + (1+3w)\,\rho_{de}\right] , \qquad \nabla_r \times \ddot{\mathbf{r}} = 0 ,$$

where $\rho_m$ and $\rho_{de}$ are the matter and dark energy densities respectively. These steps can eliminate the potential and operator occurrences of $\nabla_x$ or $\nabla_r$ acting on it from the right-hand side. The essential complication, however, is that the derivative form of the geodesic equations is insensitive to a spatially homogeneous, arbitrary, time-dependent vector shift.

The solution of the resultant Lagrangian system by Ehlers and Buchert (EB97) is achieved in a specific frame, one in which the initial, spatially averaged peculiar velocity vanishes. We must address the relationship between the solutions of the geodesic equation and of this specific solution of the derivative form of the geodesic equation. To handle this situation carefully, we start in the observer frame (denoted by OBS) and introduce the Ehlers-Buchert frame (denoted by EB). The solution to the geodesic equation lies in the former and the specific solution to the derivative form in the latter. The connection is called the frame shift.

Schematically, the initial data is specified in the OBS frame, transformed to the EB frame, solved in the EB frame using EB’s perturbative expansion and then transformed back to the OBS frame. The EB frame is not an inertial frame and the transformation between the two frames is not a Galilean transformation. We regard the EB frame as a computational frame; it coincides with the physical observer frame only at the initial time.

2.3 Notation for Observer and EB Frames

Begin by assuming that each frame possesses its own comoving coordinate system. Imagine a single particle variously described according to multiple coordinate systems. In each frame the Lagrangian label is assigned to be the comoving Eulerian coordinate at the initial time

$$\mathbf{X}_{\rm OBS} = \mathbf{x}_{\rm OBS}(t_0), \qquad \mathbf{X}_{\rm EB} = \mathbf{x}_{\rm EB}(t_0) .$$

At future times the particles' positions are written $\mathbf{x}_{\rm OBS}(\mathbf{X}_{\rm OBS},t)$ and $\mathbf{x}_{\rm EB}(\mathbf{X}_{\rm EB},t)$, respectively. Physical coordinates, Jacobians relating Eulerian to Lagrangian coordinates and volume elements are defined in the usual manner in the OBS frame

$$\mathbf{r}_{\rm OBS} = a\,\mathbf{x}_{\rm OBS}, \qquad J_{\rm OBS} = \det\left(\frac{\partial x_{{\rm OBS},i}}{\partial X_{{\rm OBS},j}}\right), \qquad d^3x_{\rm OBS} = J_{\rm OBS}\, d^3X_{\rm OBS} ,$$

and it follows that $d^3r_{\rm OBS} = a^3 J_{\rm OBS}\, d^3X_{\rm OBS}$. Similar definitions are given in the EB frame.

Without loss of generality choose the two frames to coincide at the initial time, $\mathbf{x}_{\rm OBS}(t_0) = \mathbf{x}_{\rm EB}(t_0)$ and $\mathbf{X}_{\rm OBS} = \mathbf{X}_{\rm EB} \equiv \mathbf{X}$, and write the relationship between the comoving coordinates as a pure function of time

$$\mathbf{x}_{\rm OBS}(\mathbf{X},t) = \mathbf{x}_{\rm EB}(\mathbf{X},t) + \mathbf{s}(t) .$$

Here, $\mathbf{s}(t)$ is the comoving frame shift. Next, define the physical frame shift $\mathbf{S}(t)$ in terms of $\mathbf{s}(t)$

$$\mathbf{S}(t) = a(t)\,\mathbf{s}(t) .$$

We now have

$$\mathbf{r}_{\rm OBS}(\mathbf{X},t) = \mathbf{r}_{\rm EB}(\mathbf{X},t) + \mathbf{S}(t) .$$

Initially, $\mathbf{s}(t_0) = 0$, and at all times the Eulerian volume elements satisfy $d^3x_{\rm OBS} = d^3x_{\rm EB}$ and the Lagrangian volume elements $d^3X_{\rm OBS} = d^3X_{\rm EB}$. We will choose $\dot{\mathbf{s}}(t_0)$ in a moment and then later make use of the remaining freedom in $\mathbf{s}(t)$ so that the geodesic equation is solved.

The physical mass density is $\varrho(\mathbf{X},t)$. Since it is a scalar it remains unchanged by the EB to OBS transformation. Mass conservation and equations (6) and (18) give

$$\varrho_{\rm OBS}(\mathbf{X},t) = \varrho_{\rm EB}(\mathbf{X},t) .$$

The peculiar velocity is defined

$$\mathbf{v}_{\rm OBS} = a\,\dot{\mathbf{x}}_{\rm OBS} = \dot{\mathbf{r}}_{\rm OBS} - H\,\mathbf{r}_{\rm OBS} ,$$

and likewise for $\mathbf{v}_{\rm EB}$. The peculiar velocity shift is defined

$$\mathbf{v}_s(t) = \mathbf{v}_{\rm OBS} - \mathbf{v}_{\rm EB} = a(t)\,\dot{\mathbf{s}}(t) .$$
2.4 Transformation from Observer to EB Frame

The perturbation is characterized by two quantities specified in the observer frame: the initial fractional overdensity

$$\delta_0(\mathbf{X}) = \frac{\varrho(\mathbf{X},t_0) - \bar\rho_{m,0}}{\bar\rho_{m,0}}$$

and the initial peculiar velocity

$$\mathbf{v}_0(\mathbf{X}) = \mathbf{v}_{\rm OBS}(\mathbf{X},t_0) .$$

The density perturbation is a scalar quantity so transferring to the EB frame is immediate

$$\delta_{0,{\rm EB}} = \delta_{0,{\rm OBS}} \equiv \delta_0 ,$$

and we will henceforth drop the subscript that distinguishes these two.

The initial peculiar velocity is a vector quantity which transforms like

$$\mathbf{v}_{\rm EB}(\mathbf{X},t_0) = \mathbf{v}_{\rm OBS}(\mathbf{X},t_0) - \mathbf{c} ,$$

where $\mathbf{c}$ is a constant we will now choose. Let $\langle \cdot \rangle_{\mathbf{X}}$ denote the average of a quantity over the variable $\mathbf{X}$. In this notation, the average peculiar velocity in the OBS frame at any time is

$$\langle \mathbf{v}_{\rm OBS} \rangle_{\mathbf{X}} = \frac{1}{V}\int \mathbf{v}_{\rm OBS}(\mathbf{X},t)\, d^3X .$$

We want to ensure that the average peculiar velocity in the EB frame vanishes at the beginning of the step, $\langle \mathbf{v}_{\rm EB}(\mathbf{X},t_0) \rangle_{\mathbf{X}} = 0$, as assumed in the formal perturbative scheme given in EB97 (see §2.6).

From equation (32), this requirement sets $\mathbf{c} = \langle \mathbf{v}_0 \rangle_{\mathbf{X}}$. At the beginning of the step the Eulerian and Lagrangian coordinate systems coincide, $\mathbf{x}(\mathbf{X},t_0) = \mathbf{X}$, and the initial time derivative of the frame shift is

$$\dot{\mathbf{s}}(t_0) = \frac{\langle \mathbf{v}_0 \rangle_{\mathbf{X}}}{a_0} .$$

Equation (32) refers to the initial condition whereas equation (26) applies at all times during a step. In §2.7 we give an explicit expression for the shift accumulated during a time step. If one takes a single step then $\dot{\mathbf{s}}(t_0)$ is computed once at the start of the evolution. At the end of the step accumulated position and velocity shifts are added to the EB frame results to obtain the solution in the OBS frame. If one re-expands the solution, one takes multiple steps and repeats the process over and over.
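The transformation of the initial velocity data into the EB frame amounts to subtracting the volume average. A minimal numpy sketch (toy random velocity field and unit conventions assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
a0 = 1.0
N = 16

# toy initial peculiar velocity field on an N^3 grid (OBS frame), arbitrary units
v_obs = rng.normal(size=(3, N, N, N))

# EB frame: subtract the volume-averaged velocity so that <v_EB>(t0) = 0
c = v_obs.mean(axis=(1, 2, 3), keepdims=True)
v_eb = v_obs - c

# the subtracted mean seeds the initial frame-shift velocity: sdot(t0) = <v0>/a0
sdot0 = c.ravel() / a0
print(np.abs(v_eb.mean(axis=(1, 2, 3))).max())   # ~0
```

At the end of the step the accumulated shift would be added back, so no momentum is lost by working in the EB frame.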

2.5 Equations in EB Frame

Conservation of mass and equations (15), (29) and (31) imply

$$\varrho(\mathbf{X},t) = \bar\rho_{m,0}\,\frac{1+\delta_0(\mathbf{X})}{J(\mathbf{X},t)}\left(\frac{a_0}{a}\right)^3 .$$

The density evolution of dark energy (denoted by a subscript 'de', not to be confused with the Lagrangian coordinate) for a constant equation of state $w$ is

$$\rho_{de}(t) = \rho_{de,0}\left(\frac{a}{a_0}\right)^{-3(1+w)} .$$
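The constant-$w$ dark energy scaling $\rho_{de} \propto a^{-3(1+w)}$ is simple to encode; a one-function sketch (toy parameter values, not the paper's code):

```python
def rho_de(a, rho_de0=1.0, a0=1.0, w=-0.9):
    """Dark energy density for a constant equation of state w:
    rho_de(a) = rho_de0 * (a/a0)**(-3*(1+w)).  w = -1 gives a constant."""
    return rho_de0 * (a / a0) ** (-3.0 * (1.0 + w))

print(rho_de(2.0, w=-1.0))   # 1.0 — the cosmological-constant limit
```

The $w = -1$ limit reproduces a cosmological constant, while $w = 0$ dilutes like matter, $\rho \propto a^{-3}$.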
These allow one to express the right-hand side of equation (12) in terms of Lagrangian coordinates. We transform all the derivatives with respect to the Eulerian coordinates in equations (12) and (13) to derivatives with respect to the Lagrangian coordinates (Appendix A furnishes additional details). The complete set of equations is

$$\mathcal{D}\left(\ddot{\mathbf{r}}\right) = -4\pi G\left[\bar\rho_{m,0}\, a_0^3\left(1+\delta_0\right) + (1+3w)\,\rho_{de}\, J_r\right], \qquad \mathcal{C}\left(\ddot{\mathbf{r}}\right) = 0 ,$$

where $J_r = \det(\partial r_i/\partial X_j)$, and the action of the $\mathcal{D}$ and $\mathcal{C}$ operators on a general vector $\mathbf{A}$ is defined as

$$\mathcal{D}(\mathbf{A}) = \frac{1}{2}\,\epsilon_{ijk}\,\epsilon_{abc}\,\frac{\partial A_a}{\partial X_i}\,\frac{\partial r_b}{\partial X_j}\,\frac{\partial r_c}{\partial X_k} , \qquad \mathcal{C}_i(\mathbf{A}) = \epsilon_{ijk}\,\frac{\partial A_a}{\partial X_j}\,\frac{\partial r_a}{\partial X_k} .$$

Here $\epsilon_{ijk}$ is the usual Levi-Civita symbol and Einstein's summation convention is used. The scalar operator $\mathcal{D}$ and vector operator $\mathcal{C}$ provide a compact representation.

The specification of $a_0$, $H_0$, $\bar\rho_{m,0}$, $\rho_{de,0}$ and $w$, the initial fractional overdensity and the initial peculiar velocity determines the solution. The fractional overdensity $\delta_0$ appears explicitly in the equations above; the peculiar velocity enters as initial conditions for the displacements. The system defined by equations (2.5) and (38) can be found in EB97 (see references therein for prior versions) for the case of a cosmological constant. The generalization here allows for a time-dependent dark energy term.

2.6 EB Perturbation scheme

In the formalism outlined by B92, BE93, B94 and EB97, the physical coordinate is split

$$\mathbf{r}(\mathbf{X},t) = a(t)\left[\mathbf{X} + \mathbf{p}(\mathbf{X},t)\right] ,$$

where the first term is the position in a homogeneous universe (zero density perturbations and zero peculiar velocities) and the second represents displacements that occur in the general case. The choice of the initial comoving coordinate as the Lagrangian label (equation 15) implies $\mathbf{p}(\mathbf{X},t_0) = 0$.

The displacement vector is expanded

$$\mathbf{p}(\mathbf{X},t) = \sum_{n=1}^{\infty} \varepsilon^n\, \mathbf{p}^{(n)}(\mathbf{X},t) ,$$

where $\varepsilon$ is the formal, bookkeeping device that tracks the order of the displacement. Substitute the ansatz equation (44) into equations (2.5) and (38) and equate the terms of the same order of $\varepsilon$ to generate a hierarchy of equations to be solved. Since the Lagrangian labelling is independent of time the derivatives of interest commute: $\partial/\partial X_i\,(d/dt) = (d/dt)\,\partial/\partial X_i$. This makes it straightforward to evaluate the hierarchy order-by-order.

This formal treatment at zeroth order of equation (2.5) reduces to

$$\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left[\bar\rho_m + (1+3w)\,\rho_{de}\right] ,$$

while equation (38) is identically zero. This is simply the equation governing the background scale factor. Given $a_0$, $H_0$, $\bar\rho_{m,0}$, $\rho_{de,0}$ and $w$ the background evolution is completely determined.
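The background equation can be integrated with any standard ODE solver. The sketch below (toy units with $8\pi G = 1$; illustrative, not the paper's code) checks the Einstein-de Sitter limit ($\rho_{de} = 0$, $\Omega_m = 1$), where $a \propto (1 + 3H_0 t/2)^{2/3}$:

```python
from scipy.integrate import solve_ivp

# addot/a = -(4 pi G / 3)(rho_m + (1+3w) rho_de); here rho_de = 0 (EdS check)
H0, a0 = 1.0, 1.0
rho_m0 = 3.0 * H0**2                    # 3 H0^2 / (8 pi G) in units with 8 pi G = 1

def rhs(t, y):
    a, adot = y
    rho_m = rho_m0 * (a0 / a) ** 3      # matter dilutes as a^-3
    addot = -(1.0 / 6.0) * rho_m * a    # (4 pi G / 3) = 1/6 in these units
    return [adot, addot]

sol = solve_ivp(rhs, [0.0, 3.0], [a0, H0 * a0], rtol=1e-10, atol=1e-12)
a_exact = (1.0 + 1.5 * H0 * 3.0) ** (2.0 / 3.0)   # EdS: a(t) = (1 + 3 H0 t / 2)^(2/3)
print(sol.y[0, -1], a_exact)                      # agree to ~1e-10
```

The same integrator then supplies the background coefficients needed by the higher-order temporal equations.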

At first order equations (2.5) and (38) reduce to

$$\hat{\mathcal{L}}_L\left(\nabla_X\cdot\mathbf{p}^{(1)}\right) = -4\pi G\,\bar\rho_m\,\delta_0 , \qquad \hat{\mathcal{L}}_T\left(\mathbf{p}^{(1)}_T\right) = 0 ,$$

and at higher orders to

$$\hat{\mathcal{L}}_L\left(\nabla_X\cdot\mathbf{p}^{(n)}\right) = S^{(n)} , \qquad \hat{\mathcal{L}}_T\left(\mathbf{p}^{(n)}_T\right) = \mathbf{T}^{(n)} ,$$

where the operators are

$$\hat{\mathcal{L}}_L = \frac{d^2}{dt^2} + 2H\,\frac{d}{dt} - 4\pi G\,\bar\rho_m , \qquad \hat{\mathcal{L}}_T = \frac{d^2}{dt^2} + 2H\,\frac{d}{dt} ,$$

and $S^{(n)}$ and $\mathbf{T}^{(n)}$ are scalar and vector source terms comprised of combinations of displacements with order $m < n$. An explicit form is given in Appendix B. Here L and T refer to longitudinal and transverse operators.

Next, the displacement is split into longitudinal and transverse parts, order-by-order:

$$\mathbf{p}^{(n)} = \mathbf{p}^{(n)}_L + \mathbf{p}^{(n)}_T ,$$

where $\nabla_X \times \mathbf{p}^{(n)}_L = 0$ and $\nabla_X \cdot \mathbf{p}^{(n)}_T = 0$. On the 3-torus, a unique decomposition for order $n$ requires $\langle \mathbf{p}^{(n)} \rangle_{\mathbf{X}} = 0$ (see Appendix C of EB97 for the mathematical proof). The choice of Lagrangian labels guarantees the condition is satisfied at $t_0$ for all $n$. If the volume-averaged source terms vanish at all times then the decomposition is also possible at future times. Appendix C shows that $\langle \mathbf{p}^{(1)} \rangle_{\mathbf{X}} = 0$ at future times if both the overdensity and the peculiar velocity average to zero initially, which is the case (the latter by the choice of the EB frame). Using the first-order results one next shows that the average scalar and vector source terms for second-order displacements vanish, so $\langle \mathbf{p}^{(2)} \rangle_{\mathbf{X}} = 0$, and this argument is extended order-by-order. The result is that the requirements for decomposition are satisfied at all orders.
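On a periodic grid the longitudinal/transverse split is conveniently performed in Fourier space, where the longitudinal part of each mode is its projection along the wavevector. A minimal FFT sketch (illustrative, not the paper's implementation; note the mean mode must be removed, mirroring the uniqueness condition above):

```python
import numpy as np

N = 16
rng = np.random.default_rng(1)
v = rng.normal(size=(3, N, N, N))
v -= v.mean(axis=(1, 2, 3), keepdims=True)   # remove the k = 0 mode (uniqueness)

k = np.fft.fftfreq(N)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
K = np.stack([KX, KY, KZ])
K2 = (K**2).sum(axis=0)
K2[0, 0, 0] = 1.0                            # avoid 0/0; the k = 0 mode is zero anyway

# longitudinal part of each Fourier mode: v_L(k) = k (k.v)/k^2; transverse: remainder
vk = np.fft.fftn(v, axes=(1, 2, 3))
kdotv = (K * vk).sum(axis=0)
vT_k = vk - K * kdotv / K2
vL = np.real(np.fft.ifftn(vk - vT_k, axes=(1, 2, 3)))
vT = np.real(np.fft.ifftn(vT_k, axes=(1, 2, 3)))

print(np.abs(v - vL - vT).max())             # ~0: exact reconstruction
```

By construction $\mathbf{k}\cdot \tilde{\mathbf{v}}_T = 0$ mode-by-mode, i.e. the transverse part is divergence-free, and the longitudinal part is curl-free.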

Separating longitudinal and transverse displacements order-by-order uncouples the left-hand sides of equations (48) and (49) and rationalizes the "L" and "T" labelling a posteriori. There are separate longitudinal and transverse source terms but note that each depends upon both types of lower-order solutions (longitudinal and transverse displacements at all orders $m < n$). The entire solution at lower orders is needed to compute either the longitudinal or transverse source for a given order.

The specification of the initial conditions for the hierarchy of equations determines the physical meaning of the formal perturbative expansion in powers of $\varepsilon$. We make specific choices to ensure that the initial density and velocity perturbations are first order in $\varepsilon$. The first choice is $\mathbf{p}^{(n)}_{L/T}(\mathbf{X},t_0) = 0$ for all $n$ ("L/T" means the equation applies to both longitudinal and transverse displacements), which is sufficient but not necessary to ensure zero initial displacement. Homogeneous initial data makes it easy to use the formal structure of the hierarchy to check how powers of density and velocity enter. The density appears explicitly only at first order, i.e. in the equation for $\mathbf{p}^{(1)}_L$. The second choice is that the peculiar velocity is assigned as initial data only to the derivative of the first order term

$$\dot{\mathbf{p}}^{(1)}_L(\mathbf{X},t_0) = \frac{\mathbf{v}_{0,L}}{a_0} , \qquad \dot{\mathbf{p}}^{(1)}_T(\mathbf{X},t_0) = \frac{\mathbf{v}_{0,T}}{a_0} ,$$

where $\mathbf{v}_{0,L}$ and $\mathbf{v}_{0,T}$ are the curl-free and divergence-less parts of the initial velocity. For all $n > 1$

$$\dot{\mathbf{p}}^{(n)}_{L/T}(\mathbf{X},t_0) = 0 .$$

One might contemplate other choices (i.e., one could deviate from homogeneous initial data for the displacements and/or distribute the peculiar velocity to different orders) but these would not have the simple physical interpretation that the natural choices provide and could vitiate the split into longitudinal and transverse components.

In equations (46), (47), (48) and (49) the spatial and temporal operators commute and the spatial and temporal parts of the solution decouple at each order (see Appendix B). The solution is a sum of spatial parts times temporal parts. Each spatial part involves solving linear, elliptic partial differential equations with known sources (Poisson-like problems). Each temporal part involves solving a second order ordinary differential equation in time (initial value problems). Both sets of equations include coefficients that depend upon the background cosmology.

As a practical matter all the Poisson-like equations are solved using Fourier transforms on a grid with equally spaced grid points which represent the Lagrangian coordinates. Initial value equations are solved numerically using a standard differential equation solver. The individual displacement terms are summed to reconstruct the full displacement using equation (44).
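A minimal sketch of the FFT-based solution of a Poisson-like problem on a periodic grid (toy source with a known solution; illustrative, not the paper's implementation):

```python
import numpy as np

# Spectral solve of nabla^2 phi = S on a periodic grid: phi(k) = -S(k)/k^2,
# with the k = 0 mode set to zero (the source must average to zero).
N, L = 32, 2 * np.pi
x = np.arange(N) * (L / N)
X1, X2, X3 = np.meshgrid(x, x, x, indexing="ij")
S = -3.0 * np.sin(X1) * np.sin(X2) * np.sin(X3)    # source with a known solution

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
K2 = KX**2 + KY**2 + KZ**2
K2[0, 0, 0] = 1.0                                  # protect the zero mode

phi_k = -np.fft.fftn(S) / K2
phi_k[0, 0, 0] = 0.0
phi = np.real(np.fft.ifftn(phi_k))

phi_exact = np.sin(X1) * np.sin(X2) * np.sin(X3)   # nabla^2 phi_exact = -3 phi_exact
print(np.abs(phi - phi_exact).max())               # ~machine precision
```

Because the fields are smooth and periodic, the spectral solve converges exponentially with grid size, consistent with the convergence behaviour reported in the tests of §3.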

2.7 Frame shifts

The LPT expansion described in the previous section yields the position, velocity and density in the EB frame. To get the corresponding answers in the observer frame one must add the time-dependent frame shifts (equations 19 and 26).

Start with the Newtonian limit of the geodesic equation in the observer frame

$$\frac{d^2\mathbf{x}_{\rm OBS}}{dt^2} + 2H\,\frac{d\mathbf{x}_{\rm OBS}}{dt} = -\frac{1}{a^2}\,\nabla_x \phi$$

and take the average with respect to the Eulerian volume (on the left we would formally replace $d\mathbf{x}_{\rm OBS}/dt$ with the local Eulerian velocity and $d/dt$ with the full convective derivative). The right-hand side is periodic and averages to zero. Now formally we return to the Lagrangian description on the left: substitute for $\mathbf{x}_{\rm OBS}$ from equation (19) and use $d^3x = J\,d^3X$ to give

$$\ddot{\mathbf{s}} + 2H\,\dot{\mathbf{s}} = -\frac{1}{V}\int \left(\ddot{\mathbf{x}}_{\rm EB} + 2H\,\dot{\mathbf{x}}_{\rm EB}\right) J\, d^3X ,$$

where $V$ is the comoving volume of the box (physical volume $a^3 V$).

The geodesic equation is satisfied if the shift satisfies this second order ODE with initial conditions and . The solution is




Note that the shift is completely determined from the EB solution.
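The structure of such a solution, a damped second order ODE solved by two nested quadratures, can be sketched generically (a hypothetical illustration; the actual source term and coefficients come from the equations above, which are not reproduced here):

```python
import numpy as np

def damped_shift(t, a, g, v0):
    """Integrate T'' + 2(a'/a) T' = g with T(t0) = 0, T'(t0) = v0.
    Using the integrating factor a^2: a^2 T' = a0^2 v0 + int a^2 g dt,
    and one more quadrature gives T.  The two integrals mirror the two
    contributions described in the text: the source term and the
    Hubble-drag decay of the initial mean velocity."""
    def cumtrapz(y, x):
        # cumulative trapezoidal quadrature, zero at the first point
        out = np.zeros_like(y)
        out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
        return out
    Tdot = (a[0] ** 2 * v0 + cumtrapz(a ** 2 * g, t)) / a ** 2
    return cumtrapz(Tdot, t)
```

For a constant scale factor and no source this reduces to a uniform drift; for an Einstein-de Sitter scale factor it reproduces the familiar Hubble-drag decay of an initial velocity.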

In the observer’s frame the physical position and velocity are


and hence the peculiar velocity (defined by equation (24)) is


The density in the observer frame is identical to that in the EB frame


where the right-hand side is given by equation (35).

Note that the frame shift is time-dependent. The right hand side of equation (57) has two integrals: the first arises from requiring that the geodesic equation is satisfied whereas the second is the decay (due to Hubble drag) of the non-zero average initial velocity defined in §2.4. The shift is not a fixed Galilean transformation. It is related to the fact that the system given by equations (12) and (13) is ‘translation covariant’ (Heckmann-Schücking transformation; see Buchert & Götz 1987 and references therein). In particular, the position, velocity and acceleration are different in the two frames. The density in the two frames, as inferred from the divergence of the acceleration, is the same. That operator is insensitive to any time dependent spatially uniform acceleration. Such accelerations are absent in the inertial frame but present in the computational one. Existing treatments of LPT set the frame shift to zero and satisfy equations (12) and (13). However, this does not yield the solution in the OBS frame. We will demonstrate in later sections that ignoring the frame shift destroys momentum conservation and convergence.

2.8 Frame shifts and momentum conservation

Frame shifts ensure that the geodesic equation is properly satisfied. This is not just a mathematical requirement but a physical one. In this section we consider the implications for momentum conservation and drop the explicit OBS subscript for clarity.

The total proper momentum in comoving coordinates written as a sum over mass elements or an integral over Lagrangian labels (i.e. the density at the initial time) is


where . Multiply by and take the time derivative


Now replace the variable of integration with at the time of interest. This recasts the integration from a Lagrangian to an Eulerian form. Using equation (23) the densities are and in the last equality we have introduced the background and the fluctuating densities. The integral splits


When the geodesic equation (7) is satisfied the integrand of the first term is proportional to the gradient of the potential. Since that quantity is periodic the first volume integral vanishes. When Poisson's equation (8) is satisfied one may rewrite the second integral using the formal Green's function and invoke its symmetries to show that it vanishes. Hence, the total momentum is conserved.
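The Green's-function symmetry argument is the continuum analogue of Newton's third law: summing an antisymmetric pairwise kernel over all ordered pairs gives zero net force. A toy numerical check (our own illustration, not from the paper):

```python
import numpy as np

def pair_force(ri, rj):
    """Inverse-square attraction between two points; exactly
    antisymmetric under exchange of the two arguments."""
    d = ri - rj
    return -d / np.linalg.norm(d) ** 3

def net_self_force(pos):
    """Sum the pairwise forces over all ordered pairs.  Antisymmetry of
    the kernel forces the total to vanish (up to roundoff)."""
    total = np.zeros(3)
    npts = len(pos)
    for i in range(npts):
        for j in range(npts):
            if i != j:
                total += pair_force(pos[i], pos[j])
    return total
```

The same cancellation, expressed through the symmetry of the Green's function, is what makes the second integral vanish.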

If Poisson’s equation is solved exactly in the EB frame the second integral will vanish. However, if frame shifts are ignored then the geodesic equation will not be satisfied and the first integral will generally fail to vanish.

A qualitative understanding of why a non-zero mean velocity must develop is revealed by looking at a problem with Zel’dovich initial conditions. We characterize this system as starting with first order terms in displacement, peculiar velocity and peculiar acceleration strictly proportional to one another and negligible higher order terms (see Appendix D for a detailed discussion). Using these relations and the definition of the momentum, the total momentum is


where V denotes the comoving volume of the box. The initial state with velocity proportional to acceleration implies that the first term vanishes because the system is periodic and the mean acceleration vanishes. The second term vanishes on account of the symmetries of the Green’s function for Poisson’s equation. Hence, Zel’dovich initial conditions in a periodic system imply vanishing total momentum. If/when the system evolves so that the velocity is no longer proportional to the acceleration then the second term will no longer vanish. The first term must be present and non-zero to ensure that the total momentum remains zero. In summary, the EB frame’s non-zero velocity is intimately tied to the momentum conservation law which is maintained by proper inclusion of the frame shifts.

Whenever a single step, first order scheme is used to initialize an N-body calculation starting with Zel’dovich initial conditions the frame shift vanishes to first order. To see this, insert the definition of the shift from equation (87) and the proportionality of displacement, velocity and acceleration into equation (57). In general, a second order scheme that omits the frame shift makes a second order mistake, i.e. it is accurate at first order but not second.

This example also helps one understand some special situations when frame shifts are exactly zero. Assume the initial density and velocity fields are purely one dimensional, i.e. parallel planes of matter in 3D. As long as particle trajectories do not cross, the acceleration on a particle is fixed by the initial density field. A single step first order LPT scheme yields an exact description and it is possible to write out explicit expressions for the frame shifts. For this one dimensional limit the mean velocity (in the OBS comoving coordinate system) is conserved: if it is initially zero, it is always zero. This does not imply that frame shifts vanish because there are two contributions in equation (57), one coming from the initial mean velocity and the other from the internal dynamics. In one dimension both terms will vanish if the initial velocity is proportional to the initial acceleration. In this special situation the observer and EB frames coincide at all times and the frame shift terms are exactly zero for both single and multi-step schemes. Proofs are provided in Appendix E.

Frame shifts are needed in other contexts and omitting them will generally produce inconsistencies that manifest as lack of momentum conservation. To the best of our knowledge these frame shifts have been ignored in most applications of LPT so far.

Let us briefly consider the physical nature of the mean velocity. If the box contains a fair sample of the universe and if the comoving coordinate system coincides with the preferred FRW frame (one with an isotropic cosmic microwave background radiation) then one anticipates that it vanishes. On the other hand, if super-horizon motions are present (so the notion of a fair representation is not satisfied) and/or if the observer’s frame is not equivalent to the preferred FRW frame then, generally speaking, it is non-zero. In such cases the EB frame coincides with the OBS frame only at the beginning of the calculation.

In an inhomogeneous cosmology the mean velocity is not proportional to the mean momentum in the box. Of course, the box’s self-generated gravitational forces cannot impart net momentum to the box as a whole. If the initial net momentum is zero it will always be zero; however, this does not imply that the mean velocity will vanish on future Eulerian comoving grids if it vanishes initially. In particular, if one chooses it to vanish at the initial time then at later times one generally finds it to be non-zero.

As a practical matter, if one employs the EB method of solution one cannot circumvent the need to start each step by finding the frame where the mean velocity vanishes. There is no single, adroit selection to be made at the calculation’s start because the EB frame is not tied to a conserved quantity.

2.9 LPT re-expansions

The previous two sections describe the complete formalism for single step LPT: start with initial densities and velocities in the OBS frame, transform to the EB frame, solve for the displacements, move back to the OBS frame and compute the densities and velocities at the end of the step. Two questions remain: how big a step can be made, and what should be done if that step does not encompass the total evolutionary time of interest?

In NC11 we answered those questions in the context of spherically symmetric perturbations in a fixed background cosmology. We demonstrated that all convergence problems associated with the LPT series could be overcome by re-expanding the series in overlapping time domains with each domain subject to a time of validity criterion. The time of validity was determined rigorously as a function of the initial density and velocity perturbations.

Here, we assume that the time of validity for the inhomogeneous evolution can be estimated by treating each point in the box as if it were an isolated top-hat. That is, we use the local density perturbation and local velocity perturbation to calculate the time interval that would be allowed for a top-hat. We find the maximum time that lies within all the individual intervals throughout the volume and set the time step accordingly.

The fractional overdensity and the fractional Hubble parameter are the specific parameters used to characterize the time of validity for the spherical perturbation. For generic inhomogeneous initial conditions we write the natural generalizations


The values of the fractional overdensity and fractional Hubble parameter are identical in the OBS and EB frames. The time of validity was determined in NC11; we use those results to find the minimum of the time of validity over the Lagrangian grid. (The spherical top-hat system has no transverse component so our treatment ignores that additional complication present in the inhomogeneous system. We see no evidence that this omission leads to any practical difficulty in stability or in convergence.)

A positive fractional overdensity at a point implies an overdense region and a positive fractional Hubble parameter implies an expanding region. From our experience with top-hat evolution we know that if the time of validity is set by an expanding region then the LPT re-expansion scheme has the ability to extend the solution arbitrarily far into the future. If it is set by a collapsing region then the LPT re-expansion can extend the solution up to caustic formation. We remind the reader that the re-expansion scheme does not include any physics for treating hot (multi-stream) fluid.
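The step-selection logic described above reduces to a pointwise minimum over the grid. A schematic sketch (our own; the function `tov_tophat` is a placeholder for the NC11 time-of-validity result, which we do not reproduce):

```python
import numpy as np

def next_expansion_time(t_now, delta, dH, tov_tophat):
    """Treat each grid point as an isolated top-hat: evaluate its
    allowed time interval from the local density and velocity
    perturbations, then take the minimum over the whole grid.
    `tov_tophat(delta, dH, t)` is a placeholder for the NC11 result."""
    intervals = tov_tophat(delta, dH, t_now)   # allowed interval per point
    return t_now + float(np.min(intervals))
```

Taking the minimum is the conservative choice: every point in the volume then lies within its own time of validity for the chosen step.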

In LPT re-expansion the quantities calculated at the end of one step form the initial conditions for the next step. Since each step begins with the Lagrangian coordinates equal to the comoving coordinates, one additional computation is necessary. A new Lagrangian coordinate must be defined at the start of the next step; it is related to the Lagrangian coordinate of the previous step by


where the starting time for the step and the displacement given by equation (60) enter. The final quantities from the previous step are known on a uniform grid in the old coordinate; they are transformed so that the initial conditions are specified on a uniform grid in the new coordinate. We solve equation (73) for the old coordinate given the uniform new coordinate and interpolate the final quantities at the solution points. We refer to this step as regridding.
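In one dimension the regridding step amounts to inverting the map from old to new Lagrangian coordinates. A sketch of the Newton iteration (our own minimal example, with an analytic displacement standing in for grid interpolation):

```python
import numpy as np

def invert_map(q_new, psi, dpsi, tol=1e-12, max_iter=50):
    """Solve q + psi(q) = q_new for q by Newton's method, starting from
    the identity map.  `psi` is the displacement and `dpsi` its
    derivative (here analytic callables; in practice both would be
    interpolated from grid data)."""
    q = np.array(q_new, dtype=float)
    for _ in range(max_iter):
        residual = q + psi(q) - q_new
        if np.max(np.abs(residual)) < tol:
            break
        q = q - residual / (1.0 + dpsi(q))
    return q
```

Once the old coordinates are found, the final-step fields are interpolated there to give initial conditions on the uniform new grid.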

As explained in §2.4, the frame shift differs for each step. Utilizing a sequence of frame shifts does not introduce any intrinsic errors. If the calculation were done exactly using infinite order LPT then the frame shifts at the end of a step would be computed exactly and, in turn, would yield the exact solution in the OBS frame. The re-expansion step involves interpolating the final density and velocity in the OBS frame onto a uniform grid. If interpolation were done without error then the new initial data at the start of the step would also be exact. In finite accuracy calculations the initial data is contaminated by errors that are completely analogous to those in density or velocity. The entire solution converges when the tolerances are tightened.

3 Numerical tests of the code

This section checks and verifies the theoretical scheme outlined in §2 by carrying out selective numerical calculations. The truncation error in the calculations depends upon the Lagrangian order, the number of time steps, and the size of the spatial grid. We will also refer to the number of time steps as the re-expansion frequency.

The basic elements of the calculation are:

  1. Use FFT (Fast Fourier Transform) methods to solve all spatial equations,

  2. Use Runge-Kutta integration to solve the ordinary differential equations for the time-dependent coefficients (including the background cosmology),

  3. Use Simpson’s rule for the quadratures needed to evaluate the frame shifts (eq. (57)),

  4. Use Newton’s method for root finding and exact periodic interpolation during the regridding step (the most time-consuming task).

Generally, we complete each individual task to machine precision. In the following sections “interpolation error” refers to the net total of these four sources of numerical error. The total is typically dominated by the roundoff error in the solution of the spatial equations, which manipulate arrays whose size is set by the spatial grid. We observe residual errors at roughly the machine precision level. The interpolation error is small compared to the truncation error in all results we will discuss. The truncation error is the main focus of investigation.

3.1 Convergence and truncation error

Consider evolution over a finite length of time prior to particle crossing in a periodic box of comoving size . There are two qualitatively distinct LPT schemes that, in principle, are capable of converging to the exact answer:

  1. Fix the number of steps and increase the Lagrangian order and the grid size without bounds.

  2. Fix the Lagrangian order and increase the number of steps and the grid size without bounds.

In both schemes it is necessary to refine the representation of all spatial functions. In both it is necessary to respect the time of validity for evolution. The first scheme is satisfactory only if the time of validity is sufficiently large; the second scheme, which decreases the size of the time steps as the number of steps grows, will eventually respect any non-zero time of validity condition. Hybrid forms in which Lagrangian order and number of steps increase together are also possible.

Achieving a theoretical understanding of the scaling of the errors is important for two reasons. The numerical methods should demonstrate the expected scaling. Agreement between expected and actual scaling is valuable information when one is validating a computer code. In addition, a theoretical understanding helps make the most efficient choices of control parameters for carrying out practical calculations.

In the past, LPT with Lagrangian order and number of steps fixed has been applied to practical cosmological problems while the formal convergence issues have generally been ignored. Occasionally, a solution has been studied as a function of the Lagrangian order for a single step as in the first scheme (Buchert et al. 1997). There are numerous comparisons with N-body calculations (Buchert, Melott, & Weiss 1994, Melott, Buchert, & Weiss 1995, Karakatsanis, Buchert, & Melott 1997) but these are not sufficiently accurate to address the issue of LPT convergence. We will provide a detailed look at convergence with respect to Lagrangian order and number of steps.

We treat the grid size in a somewhat different manner. The spatial representation within the box should be refined as order and/or number of steps increase but we are unaware of any study of LPT that did so. Here, we will study one example with smooth initial conditions to establish that for sufficiently large grids the dominant error is set by Lagrangian order and is insensitive to grid size. For the remaining applications we argue (on a case by case basis) that grid-related errors are so small that convergence can be studied while holding the grid size large and fixed.

This strategy is a reasonable but not rigorous approach to testing a method’s convergence in a comoving, periodic volume. It is also important to note that making the comoving box larger may be crucially important for achieving accurate physical results. This is distinct from solving a given problem in a given box. For example, if the box holds a finite subset of a larger physical system (e.g. the universe) and one is interested in mean quantities of the larger system then one might need to consider ever bigger boxes. Or, if the physical system in the box obeys different boundary conditions (e.g. isolated not periodic) then one might need to increase the physical size. In these situations convergence to the physical answer of interest will require that the grid density and box size increase without bounds.

3.2 Initial conditions

At the initial time, the system is completely specified by the choice of cosmology, the initial density field and the initial peculiar velocity field. Table 1 gives a list of the parameters for the test runs presented here.

  1. All tests are done in an Einstein-de Sitter cosmology over time intervals short enough that perturbations do not collapse to form multi-stream flows.

  2. The initial conditions satisfy . This choice of convenience allows us to characterize the peculiar velocity initial conditions completely and directly in terms of transverse and longitudinal content. It has no essential impact on the general conclusions derived from the testing.

  3. The tests cover both special and generic initial conditions. Special initial conditions possess non-trivial symmetries. For example, it is well-known that transverse modes decay with time in expanding cosmologies. Practical cosmological simulations typically begin by setting the transverse velocity to zero; such initial conditions are special by this criterion. We also consider generic initial conditions to exercise the full dynamics.

Section  Type        Order  Steps  Grid   Time  Density rms  Accel. rms  v (long.) rms  v (trans.) rms
3.3      1D          1      1-4    16     0.75  0.007        0.112       0.007          0
3.4      Exact       3      1      16     0.5   1.22         0.11        0.11           0
3.5      Top-hat     2      1      24-64  0.75  0.72         0.05        0.03           0.042
3.6.1    Zel’dovich  1-3    1-5    16     0.75  0.077        0.687       0.687          0
3.6.2    Generic     1-3    1-6    16     0.75  0.077        0.687       0.270          0.523
Table 1: Table of parameters for the numerical tests presented in §3. ‘Order’ is the Lagrangian order, ‘Steps’ is the number of geometrically spaced steps, ‘Grid’ is the spatial grid size and ‘Time’ is the total time measured from the initial time. The rms columns give the root mean square of the initial density contrast, the initial first order acceleration and the longitudinal and transverse parts of the initial velocity. Each test is discussed in the Section noted. ‘1D’ means a one dimensional density and velocity profile. ‘Top-hat’ means a compensated top-hat smoothed by a fixed Gaussian and including a non-radial, smooth velocity perturbation. ‘Exact’ refers to the analytic solution given for special initial conditions (Model I in Buchert et al. 1997).

3.3 Evolution of 1D perturbations

First order LPT is well-known to be an exact method for treating one dimensional problems prior to particle crossing (Peacock 1999). An “exact” solution is directly found by quadrature irrespective of the identification of the EB frame. Here we compare exact results of this sort to answers generated by our general 3D numerical implementation of LPT re-expansion.

This comparison serves as a minimal check on the full code at first Lagrangian order because it involves all the basic elements of the most general calculation: the Fourier representation of spatial functions, solution of the ordinary differential equations for the time-dependent coefficients in the LPT expansion, evaluation of frame shifts and the interpolation from one Lagrangian grid to another. The solution of this simple problem exercises the entire re-expansion LPT methodology. No special modifications were made to the 3D code for carrying out this test.

The initial density and velocity perturbations were taken to be one dimensional functions consisting of a single Fourier component along one axis of the periodic box. With this choice the initial velocity and acceleration are not proportional and we expect non-zero frame shift terms. Calculations were made at Lagrangian orders up to third order, for runs covering a fixed total time with varying numbers of steps. Regridding was done by solving equation (73) after each step.

Exact and numerical results are compared in the observer’s frame at the final time. As expected, the contributions to the solution at second and third order were zero to machine precision. The left panel of figure 2 shows the errors (maximum absolute velocity difference on the grid between exact and numerical results) as a function of the numbers of steps taken to reach the final time. The lower dotted line at roughly the round off level of computation (marked “with frame shifts”) shows that essentially perfect agreement with the exact answer is achieved for both single and multiple steps. This result validates (at first order) the implementation of LPT re-expansion. Note that a sequence of exact first order calculations is exact.

Frame shifts play an essential part in the numerical calculation. We repeated the calculation except that we imposed a vanishing shift in equation (60) and equation (61), which omits the frame shifts. In the left panel of figure 2 the upper dashed line shows the results obtained. There is an easily detected error with respect to the exact answer, an error present even for single step LPT.

Another useful comparison may be made between numerical single and multi-step calculations even if one is ignorant of the “exact” solutions. As we have already shown above when frame shifts are included single and multi-step first order LPT calculations are equivalent. However, if one omits the frame shifts this equivalency is broken. The solid line shows the difference between the numerical results for runs of a single step and those of multiple steps when all frame shifts are omitted. The explanation is simple: none of the answers is exactly right and the discrepancies are tied to different numbers of missing frame shifts.

The right hand panel of figure 2 shows a test of momentum conservation for calculations with and without frame shifts and for varying numbers of steps. With frame shifts the numerical calculation achieves round off levels of accuracy; without them the deficiency in the conservation law is easily detected and, moreover, does not improve as the interval in time is refined.

Taken together these simple tests help validate the method and underline the crucial importance of including the frame shifts.

Figure 2: Left hand panel: the dotted line shows the maximum difference of numerical and quadrature-derived, exact answers when frame shifts are included in the numerical answer; the dashed line shows the same information when frame shifts are omitted from the numerical answer. Only when shifts are included do the numerical answers agree with the exact ones to machine precision. The solid line shows differences in single and multi-step numerical answers in the absence of frame shifts (none of these are “exact”). Right hand panel: test of momentum conservation for the numerical runs with (dotted) and without (dashed) frame shifts.

3.4 Exact solutions

An excellent check is Model I in Buchert et al. (1997). It is an exact analytic result for one step in the EB frame at the first three orders for special initial conditions. Our numerical results agree with the analytic results to machine precision for all three orders. The special symmetry of the initial data implies that many of the spatial functions of the full general scheme vanish. The first, second and third orders comprise 2, 3 and 16 non-zero terms, respectively, corresponding to 2/3, 1/3 and 1/4 of the total number of possible terms at each order. The agreement using the full set of possible terms confirms that both the time and space pieces of the non-zero terms are encoded correctly. The agreement provides a consistency check for the spatial terms that vanish but says nothing conclusive about the temporal parts of those particular terms.

This check is essentially independent of the question of frame shifts. The analytic expressions for the frame shifts are exactly zero at all three orders. The numerically derived frame shift gave the same result to machine precision. It is not clear if the frame shifts continue to be zero at all higher orders, but for the purposes of this test we are concerned only with the first three orders.

Ideally one would construct more general test cases to check all possible terms in the code. Unfortunately, the method of constructing ‘local forms’ (B94) is cumbersome and involves guessing the correct solution for the Lagrangian displacement at each order so that the divergence and curl match the source terms order-by-order. We have utilized the checks against all the published solutions of which we are aware.

3.5 Top-hat with fixed smoothing

The idealized configuration for this test is an overdense sphere surrounded by a compensating region and an infinite homogeneous background. If the initial velocity perturbation is zero the solution is simple and analytic. There are two practical issues that potentially interfere with running tests starting from this sort of initial data. The computational volume is intrinsically periodic and the computational method is designed to handle continuous not discontinuous spatial distributions.

A periodic configuration necessarily includes interactions between neighbouring boxes because exact spherical symmetry is not maintained in an approximate numerical solution. A suitably compact configuration limits these errors and the difference between the isolated and periodic versions can be made negligibly small. (The Fourier representation of the density guarantees that the mass within each box is exactly equal to the background value. The identical symmetries of the Cartesian grid within the box and of the arrangement of the surrounding periodic copies guarantee that the dipole interactions between copies vanish. Therefore, the leading interaction that distinguishes the isolated from the periodic problem is a quadrupole term. For the configuration studied it is roughly equivalent to machine precision and is very small compared to the truncation errors studied here.)

In principle, the top-hat is an excellent problem for validating the LPT expansion since the (isolated) solutions for the displacement and density can be computed analytically order by order. Numerical results can be compared to analytic ones at a given order (NC11). The significant complication is that the exact top-hat profile is discontinuous along the boundaries between the overdense sphere, the vacuum compensating region and the outer region of average density. Such discontinuities induce the Gibbs phenomenon in the spatial Fourier representation and mask differences between expected and observed solutions. In principle, one can make the grid sufficiently large to overcome this interference but we do not attempt to do so here because the convergence is very slow. That means we must rely on indirect checks that the higher order terms in the LPT expansion are correct.

Here, we smooth the top-hat density field by a Gaussian with fixed width and also impose a smooth velocity field. The resulting velocity profile has both longitudinal and transverse components. The details of the configuration can be found in Appendix F. The exact solution is unknown so we discuss convergence using Cauchy differences for a quantity of interest


where the labels denote one or more of the control parameters varied between the two runs (with the other parameters held fixed) and the coordinate system used. We take the density and the velocity at the final time as the compared quantities; for the velocity all three components are included.

We calculate a single step of small size so that the final time is well before collapse. The assessment of convergence involves Cauchy differences formed with runs having different grid resolutions and Lagrangian orders. The spectral representation of spatial functions suggests that if grid errors dominate then the Cauchy differences will decrease exponentially with grid size. Conversely, if errors in the LPT series expansion dominate then the scaling will be exponential in Lagrangian order. This implies a sharp transition from errors dominated by grid resolution to errors dominated by Lagrangian order.

Figure 3: Convergence of the solution as the size of the spatial grid or the Lagrangian order is increased. Cauchy differences of density on the OBS Lagrangian grid are plotted. The dotted lines trace the differences for calculations having increasing numbers of grid points at fixed Lagrangian orders; the calculated differences for different orders all overlap. The straight dotted line indicates the exponential convergence expected for a spectral method. The solid, horizontal lines are Cauchy differences with respect to Lagrangian order at fixed grid size. Both types of errors are generally present. For small grids the grid-related spatial errors dominate and for large grids the Lagrangian-related finite order errors dominate. The crossover from grid-dominated to order-dominated errors occurs at a particular grid size. Additional calculations (not shown) demonstrate that the crossover grid size is smaller for smoother initial data. All the errors shown are much larger than the interpolation error shown by the dashed-dotted line.

Runs spanning a range of grid sizes and Lagrangian orders were carried out for fixed smoothing. We evaluated the Cauchy differences as the grid was refined (at fixed Lagrangian order) and then as the Lagrangian order was increased (at fixed grid size) for the density on the OBS Lagrangian grid. Figure 3 shows the results. The Cauchy differences for varying grids are virtually identical for all orders (this is not obvious from the figure, because the lines for different orders overlap). The Cauchy differences for varying Lagrangian order do not depend upon the grid size (this is obvious from the fact that the Cauchy differences are basically horizontal lines). Define the grid crossover to be the grid size at which the two types of differences are equal for a given order. The behaviour of the formal measure of convergence is


The grid-related refinement dominates in the first case and order-related refinement in the second. The figure makes clear that the spatial differences decline exponentially with grid size (as they should for a spectral method) and the analytic structure of the LPT expansion suggests that order errors decline exponentially with Lagrangian order (this will be tested more directly in a succeeding section). One surmises that for two sources of exponential errors, the crossover grid size should be a linear function of the Lagrangian order.

The specifics of the transition from grid to order-dominated convergence are problem dependent. We have investigated initial conditions which differ from those above only in the degree of smoothing used. We find the crossover grid size is larger when the initial data is less smooth and vice-versa.

The practical upshot is that if the solution is smooth and the grid is larger than the crossover size then the spatial errors are much smaller than changes associated with incrementing the Lagrangian order or decreasing the step size over finite ranges of order and step number, respectively. We attempt to conduct all the rest of our tests in the limit where the grid errors are negligible.

3.6 Convergence with Lagrangian order and number of steps

This section presents tests of convergence with Lagrangian order and number of steps for initial conditions arising from a random Gaussian field at fixed spatial grid size. Two types of initial data are considered: Zel’dovich initial conditions (velocity proportional to the acceleration field; irrotational flow) and generic initial conditions (initial velocity chosen arbitrarily from random Gaussian fields; transverse and longitudinal components present and of comparable size). We focus on the truncation error as a function of Lagrangian order and re-expansion frequency. We choose grids, initial conditions and time intervals for evolution for which we can be reasonably sure that grid-related errors at fixed grid size are so small that they do not interfere with the Cauchy differences for varying order and number of steps.

We proceed as follows. The power in the initial data is limited to frequencies less than half the Nyquist frequency. In this discussion we refer to wavenumbers as ‘frequency’, not to be confused with the re-expansion frequency. For a given box length and grid size the Nyquist frequency is fixed. Since gravitational dynamics is intrinsically non-linear we know that the power in the initially zeroed modes will grow as collapse proceeds. Ultimately, any power that reaches or exceeds the Nyquist frequency will manifest as error because the grid, and hence the Nyquist frequency, is fixed. The size of the power in the vicinity of the Nyquist frequency is taken to be a surrogate for the error incurred by using a finite grid. We monitor the power that builds up to make sure that the spatial errors remain negligible.

The power in the Nyquist frequencies is


where the sum runs over the set of wave numbers having a Cartesian component equal to the Nyquist frequency (alternatively, we can monitor the power within a given range of the Nyquist value). Consider, for example, the Cauchy difference with respect to Lagrangian order on the OBS Lagrangian grid. This may be rewritten using Parseval’s theorem (Press et al. 2002)


where is the Fourier transform of defined as . The maximum contribution to the total that comes from the Nyquist modes is