TESS: A Relativistic Hydrodynamics Code on a Moving Voronoi Mesh
Abstract
We have generalized a method for the numerical solution of hyperbolic systems of equations using a dynamic Voronoi tessellation of the computational domain. The Voronoi tessellation is used to generate moving computational meshes for the solution of multidimensional systems of conservation laws in finite-volume form. The mesh-generating points are free to move with arbitrary velocity, with the choice of zero velocity resulting in an Eulerian formulation. Moving the points at the local fluid velocity makes the formulation effectively Lagrangian. We have written the TESS code to solve the equations of compressible hydrodynamics and magnetohydrodynamics for both relativistic and non-relativistic fluids on a dynamic Voronoi mesh. When run in Lagrangian mode, TESS is significantly less diffusive than fixed-mesh codes and thus preserves contact discontinuities to high precision while also accurately capturing strong shock waves. TESS is written for Cartesian, spherical and cylindrical coordinates and is modular so that auxiliary physics solvers are readily integrated into the TESS framework and so that the TESS framework can be readily adapted to solve general systems of equations. We present results from a series of test problems to demonstrate the performance of TESS and to highlight some of the advantages of the dynamic tessellation method for solving challenging problems in astrophysical fluid dynamics.
Subject headings: hydrodynamics – methods: numerical – relativity

1. Introduction
Many astrophysical gas dynamical systems involve the motion of compressible fluid with relativistic velocity or energy density. Techniques for the numerical solution of the equations governing the multidimensional dynamics of relativistic fluids have been developed greatly in recent years. Significant recent progress has been made in grid-based methods where the fluid is described using Eulerian meshes for hydrodynamics on both uniform meshes (see review by Martí & Müller, 2003, and references therein) and with adaptive mesh refinement (AMR) (Duncan & Hughes, 1994; Zhang & MacFadyen, 2006; van der Holst et al., 2008), as well as using smoothed particle hydrodynamics (Rosswog, 2010). Extensions of these methods have been implemented including for general relativistic magnetohydrodynamics (Font, 2003; Gammie et al., 2003; De Villiers & Hawley, 2003; Del Zanna et al., 2007; Anninos et al., 2005; Mignone et al., 2007; Tchekhovskoy et al., 2007; Nagataki, 2009) and for dynamic spacetimes (Anderson et al., 2006; Cerdá-Durán et al., 2008; Etienne et al., 2010).
In this paper we present a new method for relativistic gas dynamics based on a dynamic Voronoi tessellation of space. The Voronoi tessellation generates a numerical mesh which moves and distorts with an arbitrary velocity field depending on how the mesh-generating points in the tessellation are moved. While holding the mesh points fixed results in an Eulerian method, allowing them to move with the local fluid velocity results in an effectively Lagrangian method. In this paper we present the TESS code which we have written on a dynamic mesh generated by successive Voronoi tessellations of the spatial domain. TESS is a numerical code and general framework for both relativistic and non-relativistic implementations of hydrodynamics and magnetohydrodynamics (MHD). The strength of the method is to retain the advantages of conservation-law formulation essential for accurate computation of relativistic gas dynamics, while gaining the advantages of a Lagrangian scheme. Of particular importance, the mesh motion allows for contact discontinuities to propagate without the excessive numerical diffusion often found in fixed-mesh computations. The preservation of contact discontinuities is of great importance for problems involving the development of fluid instabilities and for reactive hydrodynamics where artificial diffusion of reactant species can lead to unphysical solutions. Using mesh motion, TESS accurately solves the numerically challenging case of relativistic shear flows (see Section 3), a problem which under-resolved Eulerian simulations calculate incorrectly (Zhang & MacFadyen, 2006).
Lagrangian codes have had great success when employed in one dimension, usually to treat problems with spherical symmetry. Multidimensional problems, however, are more challenging for Lagrangian codes, due to distortion of the computational mesh when complexities in the flow develop. Noh (1964) formulated a simple strategy for dealing with mesh distortion in multiple dimensions, which was to continuously remap the computational domain as the system evolved, to prevent the mesh from becoming overly distorted. Codes employing this strategy are referred to as “Arbitrary Lagrangian-Eulerian” (ALE) codes. ALE codes solve the problem of mesh distortion, but at the cost of diffusive remaps.
Börgers & Peskin (1987) addressed the problem of mesh distortion by using a tessellated computational domain. The improvement was to employ the Voronoi tessellation: a unique decomposition of space into polyhedra based on a set of mesh-generating points (originally called “fluid markers”). The advantage of such an approach is that the mesh-generating points can move freely through the computational domain and can be added or removed to enable adaptive refinement. The Voronoi tessellation adjusts its shape and topology so that the computational mesh does not become overly distorted.
Whitehurst (1995) also made use of domain tessellation for mesh generation. His “signal method” was a finite-volume method, partially inspired by finite-element methods employed for solving problems with irregular boundaries. The method was first-order and conservative, and employed a Delaunay triangulation of the computational domain. The technique employed high-resolution shock-capturing techniques developed over the last decades for grid-based Eulerian codes, while also having the advantages that come from solving the fluid equations on a moving mesh. The “grid cells” in this case were Delaunay triangles, and the main source of diffusion came about during changes in the mesh topology. During such changes, the conserved quantities were redistributed evenly among the affected triangles. This process introduced diffusion similar to that encountered in ALE codes during remapping of the mesh, but because the diffusion was localized and only occurred during topology changes, the accumulated error was reduced.
More recently, Springel (2010) developed a second-order finite-volume approach to solving Euler’s equations using a Voronoi tessellation of the computational domain. This method was implemented in the AREPO code, which has recently been applied to simulate star formation in a cosmological context (Greif et al., 2011). The advantage of using the Voronoi tessellation instead of the Delaunay triangulation is that the Voronoi cells do not suffer abrupt transformations during changes in mesh topology, so that remapping and fluid redistribution are unnecessary. This was the first time a second-order finite-volume Lagrangian method of this nature was proposed. Another important property of this method is Galilean invariance; the code’s performance is independent of the velocity of the reference frame in which the calculation is performed.
In this paper we generalize these developments to the case of relativistic hydrodynamics. We should not expect to retain all of the advantages found in the Newtonian case; in particular, the prospect of having a Lorentz-invariant formulation is not to be expected, since in standard formulations mesh points are assumed to be simultaneous at each timestep.
The paper is structured as follows: In §2, we describe the formulation of the fluid equations, and the specific implementation in the code. In §3, we provide a series of test problems to determine the convergence rate and to illustrate what sort of problems naturally benefit from a Lagrangian approach. In §4, we demonstrate the code’s usefulness in an astrophysical context, by looking at the Rayleigh-Taylor instability in a decelerating relativistic shock. In §5, we summarize our results.
2. Numerical Method
We first give a brief description of the Voronoi tessellation, before explaining how it is used in solving the fluid equations. A tessellation is a decomposition of space into separate pieces, in our case polygons or polyhedra. For the moment, we restrict our attention to two-dimensional tessellations, mainly because they are easier to describe and to visualize.
A Voronoi tessellation can be generated from some collection of points (see Fig. 1). Each mesh-generating point (i.e. “mesh generator”) will correspond to a polygon in the tessellation. The polygon associated with point P is defined to be the set of all points which are closer to P than to any other mesh-generating point. Thus, if an edge is adjacent to the two Voronoi polygons of points P and Q, this edge lies in the line of points which is equidistant to P and Q. In general, we will refer to the polygons as “cells” and the edges as “faces”. This terminology makes it easier to generalize to arbitrary numbers of dimensions. Additionally, we will speak of the “volume” of a cell, which in two dimensions will mean area and in one dimension will mean length. We will also refer to the “area” of a face, which in two dimensions will mean length, and in one dimension will just be a constant number which we set equal to unity.
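The defining property above (a point belongs to the cell of the nearest mesh generator) can be stated directly in code. The following Python sketch is purely illustrative (the function name is ours); it classifies a query point by brute-force comparison of squared distances:

```python
def voronoi_cell_of(point, generators):
    """Index of the Voronoi cell containing `point`: by definition,
    the cell of the nearest mesh-generating point (2D)."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(range(len(generators)), key=lambda i: dist2(point, generators[i]))
```

Points equidistant from two generators lie on their shared face; `min` breaks such ties arbitrarily, mirroring the zero-measure degenerate cases discussed below.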
An important related tessellation is the Delaunay triangulation. Given a set of mesh generating points, one can generally form a triangulation with the mesh generators as vertices. In a sense, the Delaunay tessellation is the “nicest possible” triangulation of the space, given by the following definition: for every triangle in the tessellation, the three vertices of the triangle all lie on some circle, C. In order for this to be a proper Delaunay triangle, no other mesh generators may lie within the circle C. This property almost uniquely defines the triangulation. In degenerate cases, where four or more mesh generators lie on a common circle, the triangulation is ambiguous for these mesh generators. Degenerate cases like this will not concern us greatly in this paper.
For a given set of mesh generators, the Delaunay tessellation is the topological dual to the Voronoi tessellation. Every Delaunay edge corresponds to a Voronoi face, and every Voronoi vertex corresponds to a Delaunay triangle (in fact, the Voronoi vertex lies at the center of the circle generated by the three vertices of the Delaunay triangle). This fact will help us in constructing the Voronoi tessellation, because the problem can be reduced to finding the equivalent Delaunay triangulation. Since there is a straightforward test to check that a tessellation is Delaunay, this is typically the easiest way to find out which mesh generators are neighbors.
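The straightforward test mentioned here (checking whether a fourth point lies inside the circumcircle of three others) reduces to the sign of a 3×3 determinant. A Python sketch of that predicate (ours), valid when a, b, c are given in counterclockwise order:

```python
def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circle through a, b, c
    (a, b, c in counterclockwise order): the Delaunay in-circle test."""
    m = []
    for p in (a, b, c):
        px, py = p[0] - d[0], p[1] - d[1]
        m.append((px, py, px * px + py * py))
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det > 0.0
```

A triangle is Delaunay precisely when this test fails for every other mesh generator; a determinant of exactly zero is the degenerate, four-points-on-a-circle case.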
Before describing the details of how one might generate a tessellation from a given set of points, it will be useful to write down the numerical formulation so that we know what information needs to be extracted from the tessellation procedure. We shall find that our formulation applies generally to any hyperbolic system of conservation laws, regardless of whether the underlying equations are relativistic.
2.1. Finite Volume Formulation
The equations being solved will always have the form:

$\partial_t U + \vec\nabla \cdot \vec F = S$  (1)

or, in integral form,

$\frac{d}{dt} \int_V U\, dV + \oint_{\partial V} \vec F \cdot \hat n\, dA = \int_V S\, dV$  (2)
In particular, for the case of Euler’s equations:
$U = \begin{pmatrix} \rho \\ \rho \vec v \\ E \end{pmatrix}, \qquad \vec F = \begin{pmatrix} \rho \vec v \\ \rho \vec v \vec v + P\,\mathbb{I} \\ (E + P)\,\vec v \end{pmatrix}$  (3)

where $\rho$ is the density of the fluid, $\rho \vec v$ is the momentum density, and $E$ is the energy density. $P$ is the pressure, and $\vec v$ is the flow velocity. For simplicity we consider the case where there are no source terms, $S = 0$. For the relativistic version:
$U = \begin{pmatrix} \rho u^0 \\ \rho h u^0 \vec u \\ \rho h (u^0)^2 - P \end{pmatrix}, \qquad \vec F = \begin{pmatrix} \rho \vec u \\ \rho h \vec u \vec u + P\,\mathbb{I} \\ \rho h u^0 \vec u \end{pmatrix}$  (4)

where now $u^\mu = (u^0, \vec u)$ is the four-velocity, and $h$ is the specific enthalpy,

$h = 1 + \frac{e + P}{\rho}$  (5)

and $e$ is the internal energy density, which can be found from the equation of state:

$e = e(\rho, P)$  (6)
For the case of an adiabatic equation of state, we have

$e = \frac{P}{\Gamma - 1}$  (7)
where $\Gamma$ is the adiabatic index of the fluid. We consider general physical equations of state in the Appendix. To derive the finite volume form of these equations, we shall set the source term to zero for brevity, but the generalization to a nonzero source term is straightforward. For concreteness, we shall work in 2+1 dimensions, but everything said here easily generalizes to arbitrary numbers of dimensions. Equation (1) does not depend on the spacetime metric a priori, and so we can write down a formulation independent of the metric. For the following derivation we shall assume a Euclidean metric, rather than a Minkowski metric.
We now look at the evolution of a Voronoi cell over one timestep. It is assumed that the cell changes its shape and size by the linear motion of its faces, and that over a timestep it traces out a solid polyhedron in 2+1 dimensions, whose top and bottom are surfaces of constant time (see Fig. 2). The shape is actually not quite a polyhedron, but this is an approximation we are making, which is valid in the limit of short timesteps. In practice, we resolve this issue and others by taking a high-order Runge-Kutta timestep. We integrate (1) over this polyhedral 2+1-dimensional volume P:
$\int_P \left( \partial_t U + \vec\nabla \cdot \vec F \right) dV\, dt = 0$  (8)

We can easily convert this to an integral over the twodimensional boundary of this solid:

$\oint_{\partial P} (U, \vec F) \cdot \hat N\, dA_{2+1} = 0$  (9)
where $\hat N$ is the (Euclidean) unit normal to the boundary. For the top and bottom of the polyhedron:

$\hat N = (\pm 1, \vec 0), \qquad (U, \vec F) \cdot \hat N = \pm U$  (10)

The 2+1-dimensional unit normal to the other faces will be related to the two-dimensional unit normal on a given timeslice, $\hat n$, but it will also have a component in the time dimension because the face is moving with some velocity, $\vec w$. If we assume that the face is not changing its size or orientation as it moves (another assumption which is resolved by taking a high-order Runge-Kutta timestep), it is straightforward to check that the 2+1-dimensional unit normal will be

$\hat N = \frac{1}{\sqrt{1 + (\vec w \cdot \hat n)^2}} \left( -\vec w \cdot \hat n,\ \hat n \right)$  (11)
Now we evaluate the integrals in (9) by averaging the integrated quantities over spacetime. In doing so, we need to know the 2+1 dimensional spacetime volume being integrated over. For the top and bottom, this is straightforward:
$\Delta V_{2+1} = dV$  (12)

where dV is the cell volume at the beginning or end of the timestep. For the other faces, it is easy to check that:

$\Delta V_{2+1} = \sqrt{1 + (\vec w \cdot \hat n)^2}\ dA\ \Delta t$  (13)
Recall that “dA” is the face “area”, so it refers to the length of a Voronoi edge. Note that our factors of $\sqrt{1 + (\vec w \cdot \hat n)^2}$ will end up cancelling, which is to be expected if our formulation is independent of the spacetime metric. If we interpret $U_i^n$ as the cell-averaged value of $U$ in cell $i$ at timestep $n$, and let $F_{ij} = \vec F_{ij} \cdot \hat n_{ij}$ denote the time-averaged flux through the face adjacent to cells $i$ and $j$, and likewise use $U_{ij}$ to denote the time- and area-averaged value of $U$ on this same face, we get the following result:

$U_i^{n+1} V_i^{n+1} - U_i^n V_i^n = -\Delta t \sum_j \left( F_{ij} - (\vec w_{ij} \cdot \hat n_{ij})\, U_{ij} \right) A_{ij}$  (14)
This gives us a simple prescription for how to evolve the conserved variables from one timestep to the next, assuming we know the timeaveraged fluxes and conserved quantities on the faces, and the velocity of each face:
$U_i^{n+1} = \frac{1}{V_i^{n+1}} \left[ U_i^n V_i^n - \Delta t \sum_j \left( F_{ij} - (\vec w_{ij} \cdot \hat n_{ij})\, U_{ij} \right) A_{ij} \right]$  (15)
Of course, this result is not surprising at all. The prescription merely tells us to add an advective term ($-(\vec w_{ij} \cdot \hat n_{ij})\, U_{ij}$) to the flux, and evolve things in a way analogous to the fixed-grid approach. It is worth noting that our formulation does not depend on the physical content of the equations expressed by (1). In particular, it does not matter whether the velocity $\vec w$ exceeds the speed of light. In order for this to be possible, we must be careful about our physical interpretation of the mesh itself. For example, it is not necessarily meaningful to speak of the “rest frame” of a cell or face.
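As an illustration of the update prescription in equation (15), the advance of a single cell can be written in a few lines. This Python sketch is ours, not the TESS implementation; each face record carries its time-averaged area, outward normal flux, normal face velocity, and face state:

```python
def update_cell(U_old, V_old, V_new, faces, dt):
    """Advance one component of the conserved state of one cell by dt,
    following eq. (15). Each face is a dict with area "A", outward
    normal flux "F", normal face velocity "w", and face state "U",
    all time-averaged (illustrative data layout)."""
    total = U_old * V_old
    for f in faces:
        total -= dt * f["A"] * (f["F"] - f["w"] * f["U"])
    return total / V_new
```

A uniform state advected on a Lagrangian mesh is preserved exactly: every face has F = wU, so the sum vanishes and the cell average is unchanged.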
2.2. Riemann Solver
Equation (15) approaches an exact result, in the limit of short timesteps. This means that all of the numerical error due to the spatial discretization must stem from our estimates of the timeaveraged values of the fluxes and conserved quantities on the faces of the Voronoi cells. In TESS, we estimate these fluxes using an approximate Riemann solver. This means we assume that, at the beginning of the timestep, there is a uniform state on either side of the face. Generally speaking, a Riemann solver estimates the timeaveraged flux through the face by evolving this constant-state initial condition, either exactly or approximately.
It turns out that getting the most out of the Lagrangian nature of our scheme is highly dependent on how we solve the Riemann problem on each cell face. The Riemann solver in AREPO (Springel, 2010) is effective because it is Galilean-invariant; the Riemann problem is solved in the rest frame of the face. In our case we cannot assume it makes sense to even speak of a “rest frame” for a face, so we cannot have a method which is Galilean-invariant (this is to be expected anyway for relativistic problems). Nonetheless, we can still have a code which preserves contact discontinuities to very high accuracy. What we require is a Riemann solver which is exact for a pure advection problem. The HLLC Riemann solver, appropriately employed, meets this requirement. In the TESS code, we employ the HLLC solver for relativistic problems as described by Mignone & Bodo (2005).
A pure advection problem has constant pressure and velocity across both states, and two different densities. The HLLC solver divides spacetime into four pieces: the original left and right states, and two interior states known as *L and *R, separated by a contact discontinuity. For advection, we have the following:
$P_{*L} = P_{*R} = P, \qquad v_{*L} = v_{*R} = v, \qquad \rho_{*L} = \rho_L, \qquad \rho_{*R} = \rho_R$  (16)
This is the exact solution to the advection problem, and hence when using the HLLC solver we can solve the advective Riemann problem to machine precision, as long as the correct starred state is chosen, i.e. the solver should choose the spacetime region which houses the path traced out by the face as it moves with velocity $\vec w$ (see Fig. 3). The values $U_{ij}$ and $F_{ij}$ found in this spacetime region are the values used in the update step given by equation (15). The reason that the HLLC solver is so crucial is that it accurately calculates advective fluxes. The numerical error caused by the spatial discretization is entirely housed in the estimate of the time-averaged fluxes $F_{ij}$, so if the velocity of each face is very close to the fluid velocity, the advective contributions to the flux nearly cancel and the numerical error for advective fluxes will be small. In the particular case of pure advection, the advective fluxes cancel completely, meaning that the advection problem is solved exactly, to machine precision. This property is extremely important for preserving contact discontinuities.
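To see concretely why the advective fluxes cancel, consider the mass flux through a single face in a pure advection problem. In this illustrative Python sketch (ours, not the TESS implementation), the face samples the starred state on whichever side of the contact it lies, as an HLLC solver does:

```python
def advective_face_flux(rho_L, rho_R, v, w):
    """Exact mass flux through a face moving at normal speed w for a pure
    advection problem: common velocity v and pressure, two densities.
    The contact moves at v, so the face sees the state on its side of
    the contact, and the flux is the upwind density times (v - w)."""
    rho_star = rho_L if v >= w else rho_R
    return rho_star * (v - w)
```

For a Lagrangian face, w = v and the flux vanishes identically, which is the machine-precision advection property described above.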
2.3. Primitive Variable Solver
The Riemann solver takes in two states corresponding to two adjacent Voronoi cells. To use the information in a cell to solve the Riemann problem for a given face, we need the primitive variables (density, pressure, and four-velocity) on either side of the face. In order to find these primitive variables, we need to invert formulas (3) or (4) for the conserved variables on either side. This can be done using a Newton-Raphson root-finding scheme. For relativistic hydrodynamics with an adiabatic equation of state, this solver is not difficult to write. For an arbitrary equation of state, we incorporate the temperature as an additional variable which must be solved for. Beyond an arbitrary equation of state, we want TESS to become a very general code capable of solving wide classes of problems, for example the equations of general relativistic magnetohydrodynamics (GRMHD). This makes the Newton-Raphson step somewhat more complicated, but such a solver has already been implemented in TESS, even though we have not yet employed hydrodynamic variables which would necessitate it. The GRMHD primitive variable solver is based on the method described by Noble et al. (2006) and uses a three-dimensional Newton-Raphson step, solving for the variables $W = \rho h \gamma^2$, $v^2$, and the temperature, $T$ (see the Appendix for more details).
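As an illustration of the Newton-Raphson inversion for the simplest case (special relativity, adiabatic EOS, one velocity component), the following Python sketch recovers the pressure from the conserved variables. The conventions and the use of a numerical derivative are choices of this sketch, which is ours; the full TESS solver for a general EOS and GRMHD is more elaborate:

```python
import math

def recover_pressure(D, S, E, gamma_law=4.0 / 3.0, p_guess=1.0, tol=1e-12):
    """Recover the pressure from the conserved variables of 1D special
    relativistic hydrodynamics with an adiabatic EOS (eq. 7), via
    Newton-Raphson with a numerical derivative.

    Conventions (a sketch, ours): D = rho*gam, S = rho*h*gam^2*v,
    E = rho*h*gam^2 - p, with specific enthalpy h = 1 + (e + p)/rho."""
    def residual(p):
        v = S / (E + p)                   # velocity implied by this pressure
        gam2 = 1.0 / (1.0 - v * v)        # squared Lorentz factor
        rho = D / math.sqrt(gam2)
        e_int = (E + p) / gam2 - rho - p  # internal energy density
        return (gamma_law - 1.0) * e_int - p

    p = p_guess
    for _ in range(100):
        f = residual(p)
        dp = 1e-8 * max(abs(p), 1.0)
        slope = (residual(p + dp) - f) / dp
        step = f / slope
        p -= step
        if abs(step) < tol * max(abs(p), 1.0):
            break
    return p
```

Once the pressure is known, the remaining primitives follow algebraically: v = S/(E + p), then the Lorentz factor and rest-mass density.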
2.4. Reconstruction of Face-Centered Primitive Variables
The root-finding method described above determines the primitive variables (density, pressure, and four-velocity) at the cell centers. To solve the Riemann problem on each face, we must extrapolate the primitive variables from the cell centers to the face centers. If we assume all our variables to be piecewise constant, then we can assume they have the same value on the face as they do at the center of mass of a cell. However, if we want accuracy higher than first order in space, we need to extrapolate the variable values based on the values in neighboring cells. We use piecewise linear reconstruction, using the method derived by Springel (2010) for calculating the variable gradients in each cell. We repeat the results here. Assume we have some variable, $\phi$, for which we would like to calculate the gradient at cell $i$ based on the values at adjacent cells. The formula we use is
$\langle \vec\nabla \phi \rangle_i = \frac{1}{V_i} \sum_{j} A_{ij} \left( [\phi_j - \phi_i]\, \frac{\vec c_{ij}}{r_{ij}} - \frac{\phi_i + \phi_j}{2}\, \frac{\vec r_{ij}}{r_{ij}} \right)$  (17)

where $\vec r_{ij} = \vec r_i - \vec r_j$, $r_{ij} = |\vec r_{ij}|$, and $\vec c_{ij}$ is the vector from the midpoint of $\vec r_i$ and $\vec r_j$ to the center of mass of the face between cells $i$ and $j$.
We use this gradient to extrapolate primitive variable values via:
$\phi(\vec f_{ij}) = \phi(\vec s_i) + \langle \vec\nabla \phi \rangle_i \cdot (\vec f_{ij} - \vec s_i)$  (18)
Here, $\vec r_i$ represents the location of a mesh generator, $\vec s_i$ represents the location of the center of mass of a cell, and $\vec f_{ij}$ is the center of mass of the face adjacent to cells $i$ and $j$. Note that primitive variables are defined at $\vec s_i$, not at the mesh generators. This prescription would lead to a code which is second order in space, but it is well known that piecewise linear reconstruction can cause numerical oscillations in the neighborhood of shocks. To deal with this, we need to constrain the estimated gradients in the neighborhood of sharp discontinuities. In other words, we need to construct a generalization of the slope limiters used in grid-based Eulerian codes.
AREPO uses a slope limiter which could be considered a generalization of minmod (Kurganov & Tadmor, 2000), but one which does not have the total variation diminishing (TVD) property. As a result, this slope limiter caused mild oscillations in some calculations. TVD is an especially important property for relativistic hydrodynamics, since oscillatory behavior in the conserved variables can potentially cause wild variation of the primitive variables, particularly in situations with large Lorentz factors. To address this problem, we optionally employ an alternative slope limiter which is much more conservative. AREPO uses the slope-limited gradient,
$\langle \vec\nabla \phi \rangle_i' = \alpha_i\, \langle \vec\nabla \phi \rangle_i$  (19)

$\alpha_i = \min(1, \min_j \psi_{ij})$  (20)

$\psi_{ij} = \begin{cases} (\phi_i^{\max} - \phi_i)/\Delta\phi_{ij} & \Delta\phi_{ij} > 0 \\ (\phi_i^{\min} - \phi_i)/\Delta\phi_{ij} & \Delta\phi_{ij} < 0 \\ 1 & \Delta\phi_{ij} = 0 \end{cases}$  (21)

where $\Delta\phi_{ij} = \langle \vec\nabla \phi \rangle_i \cdot (\vec f_{ij} - \vec s_i)$ is the estimated change in $\phi$ between the cell's center of mass and the face, and $\phi_i^{\max}$ and $\phi_i^{\min}$ are the maximum and minimum values of $\phi$ over all neighbors of cell $i$.
Our method is similar, but replaces (21) with
$\psi_{ij} = \begin{cases} \theta\, (\phi_j - \phi_i)/\Delta\phi_{ij} & (\phi_j - \phi_i)\,\Delta\phi_{ij} > 0 \\ 0 & \text{otherwise} \end{cases}$  (22)
where $\theta$ is generally set to unity, but reduced if a still more diffusive scheme is desired. The slope limiter in (22) enforces monotonicity, though it is more diffusive than (21). In practice, we find it to be much more robust, so it has typically been employed in problems involving strong shocks.
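In code, the limiter amounts to computing one scaling factor per cell and multiplying the gradient by it. The following Python sketch (ours, based on our reading of eqs. 19-22) returns that factor:

```python
def limited_slope_factor(phi_i, phi_neighbors, dphi_faces, theta=1.0):
    """Limiter factor alpha_i of eqs. (19)-(22): the gradient is scaled
    so that the extrapolated change to each face, dphi_ij, never exceeds
    theta times the actual difference to that neighbor, and the slope is
    zeroed when the extrapolation disagrees with the neighbor in sign."""
    alpha = 1.0
    for phi_j, dphi in zip(phi_neighbors, dphi_faces):
        if dphi == 0.0:
            continue  # eq. (21) case: no extrapolated change, psi = 1
        ratio = theta * (phi_j - phi_i) / dphi
        alpha = min(alpha, max(ratio, 0.0))
    return alpha
```

At a local extremum the factor is zero (the reconstruction falls back to piecewise constant), which is the monotonicity-enforcing behavior described above.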
2.5. Time Integration
Our time integration is based on the method of lines, performing a Runge-Kutta timestep for the time evolution of the conserved variables, and for the motion of the mesh points. For most problems, we use a third-order Runge-Kutta timestep which is TVD (total variation diminishing), and which updates both the values of the conserved variables and the positions of the mesh-generating points. The timestep is Courant-limited:
$\Delta t = C\, \min_i \left( R_i / |\lambda_i| \right)$  (23)

$C$ is the Courant factor, typically chosen between 0.2 and 0.5. $R_i$ is the effective radius of a cell, $R_i = \sqrt{A_i/\pi}$ in 2D. $\lambda_i$ is the eigenvalue in cell $i$ with the largest magnitude. Currently the code is set up to operate with a single global timestep. To take a timestep from $t^n$ to $t^{n+1}$ which is third-order in time, for example, we use the following prescription:
$U^{(1)} = U^n + \Delta t\, L(U^n)$
$U^{(2)} = \tfrac{3}{4} U^n + \tfrac{1}{4} U^{(1)} + \tfrac{1}{4} \Delta t\, L(U^{(1)})$  (24)
$U^{n+1} = \tfrac{1}{3} U^n + \tfrac{2}{3} U^{(2)} + \tfrac{2}{3} \Delta t\, L(U^{(2)})$

Here, $L$ is an operator representing the numerically integrated time derivative of $U$, and $U^{(1)}$ and $U^{(2)}$ represent intermediate states in the time integration.
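The prescription (24) is the standard TVD third-order Runge-Kutta method and is straightforward to state in code. An illustrative Python sketch (ours), applicable to any right-hand-side operator L:

```python
def ssp_rk3_step(U, dt, L):
    """One step of the TVD third-order Runge-Kutta method of eq. (24).
    L(U) is the discrete time derivative (e.g. the flux divergence);
    U may be a scalar or an array type supporting arithmetic."""
    U1 = U + dt * L(U)
    U2 = 0.75 * U + 0.25 * U1 + 0.25 * dt * L(U1)
    return U / 3.0 + (2.0 / 3.0) * U2 + (2.0 / 3.0) * dt * L(U2)
```

For the linear test problem dU/dt = -U, one step reproduces exp(-dt) with the expected third-order accuracy.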
2.6. The Voronoi Mesh
Equation (15) tells us exactly what geometric information we need in order to evolve the conserved quantities. We need to know the following about the Voronoi cells:

- which cells are neighbors
- the volume of each cell
- the area of each face
- the velocity of each face
- the center of mass of each cell
- the center of mass of each face
The last two elements of this list are necessary for the piecewise linear reconstruction of primitive variables. We must determine how to extract all of this information given the positions and velocities of all the mesh generating points. The velocities of mesh generators are freely specifiable, and we shall typically choose to set them equal to the local fluid velocity.
Before we determine this completely, we can show that all of the above can be calculated easily if we know which cells are neighbors, and if we also know the center of mass and area of each face. In other words, when performing the Voronoi tessellation, this will be the only information we need to store.
Given a single mesh generator and its neighbors, it is straightforward to calculate its cell’s volume, if the area of each face is known. This is done by dividing the cell into pyramids (in 2D, triangles), each of which has the mesh-generating point as its tip and a Voronoi face as its base. Then the volume of the cell is the sum of the volumes of all the pyramids:
$V_i = \sum_j V^{\rm pyr}_{ij}$  (25)
The volume of a pyramid can be expressed generally in $D$ dimensions:

$V^{\rm pyr} = \frac{1}{D}\, h\, dA$  (26)
Because the face is in the plane halfway between the two mesh generating points, the height should be half the distance between these points.
$h_{ij} = \tfrac{1}{2}\, |\vec r_i - \vec r_j|$  (27)

$V_i = \frac{1}{2D} \sum_j A_{ij}\, |\vec r_i - \vec r_j|$  (28)
Similarly, the center of mass of a cell can be directly calculated from the area and center of mass of the faces. We can use the weighted average over pyramids:
$\vec s_i = \frac{1}{V_i} \sum_j V^{\rm pyr}_{ij}\, \vec\sigma_{ij}$  (29)

$\vec s_i = \frac{1}{2 D V_i} \sum_j A_{ij}\, |\vec r_i - \vec r_j|\, \vec\sigma_{ij}$  (30)
The center of mass of a pyramid also depends on the number of dimensions:
$\vec\sigma_{ij} = \frac{\vec r_i + D\, \vec f_{ij}}{D + 1}$  (31)
Here, $\vec f_{ij}$ denotes the center of mass of the face adjacent to cells $i$ and $j$.
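Equations (25)-(31) combine into a single loop over a cell's faces. The following Python sketch (ours; the face-list layout is illustrative) accumulates pyramid volumes and centroids:

```python
import math

def cell_volume_and_centroid(r_i, faces, D=2):
    """Volume and center of mass of a Voronoi cell from its faces,
    following eqs. (25)-(31). Each face is a tuple (A_ij, f_ij, r_j):
    area, face center of mass, neighbor generator position. Each face
    closes a pyramid with tip r_i and height h = |r_i - r_j|/2."""
    V = 0.0
    cm = [0.0] * len(r_i)
    for A, f, r_j in faces:
        h = 0.5 * math.sqrt(sum((a - b) ** 2 for a, b in zip(r_i, r_j)))
        Vp = A * h / D  # pyramid volume, eq. (26)
        V += Vp
        for k in range(len(r_i)):
            # pyramid centroid (r_i + D*f)/(D+1), eq. (31), volume-weighted
            cm[k] += Vp * (r_i[k] + D * f[k]) / (D + 1)
    return V, [c / V for c in cm]
```

As a sanity check, a unit-square cell whose four neighbors are mirror images of its generator recovers volume 1 and a centroid at the generator.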
Next, we need to determine the velocity of the faces. It is assumed here that the mesh-generating points themselves have been given some velocity $\vec w_i$, typically the local fluid velocity (though corrections can be added to steer the cells in ways that make the mesh better-behaved). Springel (2010) showed that the velocity of a face can be calculated from the position and velocity of the mesh-generating points and the center of mass of the face. The result is the average of the velocity of the two adjacent mesh generators, plus a correction due to the fact that the center of mass of the face is not generally at the midpoint between the two mesh-generating points, and so acquires a velocity due to rotation about this midpoint:
$\vec w_{ij} = \frac{\vec w_i + \vec w_j}{2} + \vec w_{ij}'$  (32)

$\vec w_{ij}' = \frac{(\vec w_i - \vec w_j) \cdot \left[ \vec f_{ij} - (\vec r_i + \vec r_j)/2 \right]}{|\vec r_j - \vec r_i|^2}\, (\vec r_j - \vec r_i)$  (33)
Again, $\vec f_{ij}$ is the center of mass of a face. We can use equations (25)–(33) to pare down the list of information that we need to extract directly from the tessellation itself. The tessellation procedure now only consists in determining the following:

- which cells are neighbors
- the area of each face
- the center of mass of each face
All relevant geometrical information can be easily extracted from this data. This is advantageous because it means the tessellation can take up a relatively small amount of memory. The tessellation procedure consists of generating a new set of faces each timestep, based on the locations of the mesh generators.
In one dimension, the tessellation procedure is trivial; neighboring cells do not change, the face area is always set to unity, and the face center of mass is simply the midpoint between the two mesh generators. The two-dimensional version turns out to be surprisingly simple, because we use the tessellation from the previous timestep to generate the new faces. Since we know the neighbors of each cell on the previous timestep, we can use the neighbors of neighbors (“friends of friends”) of a cell as a list of candidates for the neighbors on the next timestep. Because the length of each step in time is Courant-limited, the tessellation will not change significantly in one timestep, and hence this list of candidates is big enough for our purposes. Optionally, we can choose to use “neighbors of neighbors of neighbors” but we have not found this to make a difference in any scenario we’ve encountered. Already having this list of candidates simplifies the algorithm immensely, as in principle it can reduce to a brute-force guess-and-check procedure using this small list of candidates. In practice, the 2D algorithm is not totally brute force, but very simple nonetheless.
We consider a single mesh-generating point X and all of its “friends of friends”. First we find the nearest neighbor to X (which is guaranteed to share a face with X). Call this neighbor Y. Next we take the rest of the potential neighbors and order the list by clockwise angle with respect to the segment XY. What follows is an elimination process; at the end of this process, the elements remaining in the list will be exactly those which share a face with X. At each step in the process, we consider three consecutive elements of the list; call them A, B, C (see Fig. 4). We denote the element before A as “P”, and the element after C as “Q”. We determine whether or not to keep point B in the list by checking whether C lies within the circle generated by X, A, and B. If it does, then we remove B from the list and take one step backward (checking whether C lies within the circle generated by X, P, and A). If C does not lie within the original circle, we keep B and move forward (checking the triple B, C, Q). More concisely: if the circle through X, A, and B contains C, remove B from the list and take one step back, {A, B, C} → {P, A, C}; otherwise take one step forward, {A, B, C} → {B, C, Q}. Fig. 5 demonstrates an example of this full procedure.
Note that this algorithm is not sensitive to the presence of degenerate sets of points (that is, sets of four points which all lie on the same circle). For practical purposes, it will not matter whether our code chooses to accept or reject a point in this configuration, because a degeneracy in the Delaunay tessellation corresponds to a face of zero area in the Voronoi tessellation. If the face has zero area, then there will be zero flux through it, and hence it will have no influence on the resulting physics.
Once this operation has been performed for all members of the list (i.e., point B is now point Y, the nearest neighbor) all remaining list members are neighbors of point X. All that remains is to calculate the areas and centers of mass of the faces, which is straightforward given the vertices of the polygon generated by this list. These vertices are the centers of the circles generated by consecutive triples in the list.
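The circumcenters mentioned here have a standard closed form. An illustrative Python sketch (ours) computing the Voronoi vertex dual to a Delaunay triple:

```python
def circumcenter(a, b, c):
    """Center of the circle through three 2D points: the Voronoi vertex
    dual to the Delaunay triangle (a, b, c)."""
    ax, ay = a
    bx, by = b
    cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy
```

The denominator d is proportional to the triangle's signed area, so it vanishes only for collinear points, where no circumcircle exists.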
The details of the tessellation algorithm do not completely extend to three dimensions, but the idea of using friends-of-friends as a list of candidates still works, so in the worst-case scenario we could use a brute-force guess-and-check algorithm when D=3. One might ask whether there is a major efficiency advantage to this tessellation procedure over more conventional ones such as direct insertion. While it is not clear which method should be the fastest (given an optimized implementation), we expect them to be comparably efficient. Moreover, while the tessellation procedure takes up a non-negligible percentage of the code’s overall runtime, it does not take up a majority of the runtime, and as such there is no major incentive to optimize its efficiency. The main advantage of the method described here is that the algorithm is very simple and does not require handling many special cases. Additionally, this algorithm is expected to be very easy to parallelize, because the tessellation is performed locally. In the simplest prescription, we could make the code parallel via a simple domain decomposition, where different processes only share boundary data. The main disadvantage of the tessellation procedure is that we must have an approximate tessellation to begin with, so that we can use “friends of friends” for our initial pool of neighbors. In principle, this simply amounts to saving the tessellation from the previous timestep, but in practice this makes the implementation of boundary conditions a bit more complicated. Not only do we need to create an appropriate set of “ghost cells” outside the boundary, but we need to generate “ghost faces” as well (the tessellation procedure won’t automatically do this for us, because it relies on having an approximate tessellation from a previous timestep). This also adds some complication to the parallelization of the code, for the same reasons.
One might wonder why we worry so much about the positions of ghost cells. For periodic boundary conditions, we don’t really need ghost cells at all, since we could in principle associate leftmost neighbors with rightmost neighbors and so on, so that no boundary need be created. For reflecting boundaries, one might hope we could have a fixed set of ghost cells lining our reflecting wall, so that we don’t need to generate a new set each timestep. Unfortunately, we only have control over the positions of mesh generators; we don’t directly control the shape of the cells. Thus, if we want a flat wall, we need to have the mesh generators reflected across the wall. The only reason we include ghost cells for periodic boundary conditions is for the sake of overall simplicity in the code (fewer parts of the code depend on the choice of boundary conditions). In this case, ghost cells are translated to the opposite end of the domain with respect to their “real” counterparts.
The boundary conditions are set in each dimension sequentially. The first step involves flagging cells according to whether they are inside or outside the boundary. If we are using periodic boundary conditions and a cell moves off of the computational domain, it is set to be a ghost cell and its corresponding ghost cell is set to be a real cell. All cells are flagged to be in one of three categories: Inside the domain, outside the domain, or a “border cell”, meaning it is inside the domain and neighbors a cell outside the domain.
The next step involves generating a new list of ghost cells, first by making copies of all border cells. Additional ghost cells are then added to this list by taking all neighbors of ghost cells which are inside the computational domain. This procedure is repeated until the desired number of ghost cells is achieved (see Fig. 6). For our second-order methods, we need two layers of ghost cells.
Next these copies are moved according to the boundary conditions, e.g. if the boundaries are reflecting, their positions are reflected across the reflecting wall.
Finally, neighbors are assigned to the copies by using the neighbor data from the original tessellation. These associations are of two kinds: associations between two copied cells, and associations between a copied cell and a border cell. Both kinds of associations can be extracted from the original tessellation.
After this step, we discard all the old ghost cells and replace them with the new ghost cells. When this is done, the tessellation algorithm is performed as previously described. This implementation of boundary conditions is essentially the same as the method used in AREPO, though we require a bit more care because we need to retain the tessellation information in the boundary cells. As a final note on this formulation, very little of the code depends on the number of dimensions; only the tessellation algorithm itself is significantly affected by D.
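The flagging and layered ghost-cell construction described above can be sketched as follows. This is a minimal sketch: the integer cell ids, the `neighbors` adjacency map, and the `inside` flags are hypothetical data structures for illustration, not TESS’s actual ones.

```python
def flag_cells(neighbors, inside):
    """Classify each cell as 'inside', 'outside', or 'border'.

    A border cell is inside the domain but touches at least one
    outside cell.  `neighbors` maps cell id -> list of neighbor ids;
    `inside` maps cell id -> bool.
    """
    flags = {}
    for c, nbrs in neighbors.items():
        if not inside[c]:
            flags[c] = "outside"
        elif any(not inside[n] for n in nbrs):
            flags[c] = "border"
        else:
            flags[c] = "inside"
    return flags

def ghost_layers(neighbors, flags, n_layers=2):
    """Collect n_layers of cells to copy as ghosts, starting from the
    border cells and repeatedly adding in-domain neighbors of the
    current set, mirroring the repeated expansion of Fig. 6.
    """
    ghosts = {c for c, f in flags.items() if f == "border"}
    for _ in range(n_layers - 1):
        new = set()
        for g in ghosts:
            for n in neighbors[g]:
                if flags[n] != "outside":
                    new.add(n)
        ghosts |= new
    return ghosts
```

For a second-order method, `n_layers=2` reproduces the two layers of ghost cells quoted above.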
2.7. Mesh Regularity
For most problems, evolving the mesh according to the above prescription can generically lead to cells which are long and skinny, and whose mesh-generating points are very close to their edges. These cells will evolve in an unstable manner, because their faces can move very quickly even while their mesh generators are moving slowly, and they can also have a very short sound-crossing time. It is therefore desirable to steer cells in such a way that they tend to become more regular. Springel (2010) found an effective prescription for this, which is to give the mesh generators an additional velocity component, pointed in the direction of the center of mass. We repeat this prescription here:
(34)  \vec{v}_i^{\,reg} = \chi c_i \frac{\vec{s}_i - \vec{r}_i}{d_i} \times \begin{cases} 0 & d_i < 0.9\,\eta R_i \\ \dfrac{d_i - 0.9\,\eta R_i}{0.2\,\eta R_i} & 0.9\,\eta R_i \le d_i < 1.1\,\eta R_i \\ 1 & d_i \ge 1.1\,\eta R_i \end{cases}
Here R_i is the effective radius of cell i, d_i is the distance between the cell’s mesh-generating point \vec{r}_i and its center of mass \vec{s}_i, and c_i is the local sound speed. \eta and \chi are arbitrary parameters for this prescription, typically set to \eta = 0.25 and \chi = 1.0 (the same values typically used by AREPO). Note that we do not implement a relativistic velocity-addition formula, as \vec{v}_i^{\,reg} need not be interpreted as a physical velocity.
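As a concrete sketch of this steering prescription, assuming the piecewise form and the typical parameter values of Springel (2010); the function and its 2D tuple interface are illustrative, not TESS’s implementation:

```python
import math

def regularization_velocity(r, s, R_eff, cs, eta=0.25, chi=1.0):
    """Steering velocity toward the cell's center of mass.

    r: mesh-generating point (x, y), s: center of mass (x, y),
    R_eff: effective cell radius, cs: local sound speed.
    eta and chi are the tunable parameters of the prescription.
    """
    d = math.hypot(s[0] - r[0], s[1] - r[1])
    if d < 0.9 * eta * R_eff:
        return (0.0, 0.0)               # cell is already regular enough
    unit = ((s[0] - r[0]) / d, (s[1] - r[1]) / d)
    if d < 1.1 * eta * R_eff:           # smooth ramp between the regimes
        f = (d - 0.9 * eta * R_eff) / (0.2 * eta * R_eff)
    else:
        f = 1.0
    return (chi * cs * f * unit[0], chi * cs * f * unit[1])
```

The returned vector is simply added to the mesh-generator velocity, which is why no relativistic velocity addition is needed.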
3. Test Problems
As this is a new method for solving relativistic hydrodynamics, we present a large number of test problems in one and two dimensions. In addition to the relativistic tests, we have included some nonrelativistic ones, to compare our code with AREPO.
3.1. OneDimensional Tests
Table 1: Initial states for the one-dimensional tests.

Test                          Variable   Left    Middle   Right
Nonrelativistic Shock Tube    rho        1                0.25
(N = 100, t = 3.0)            v          0                0
                              P          1                0.1795
Nonrelativistic               rho        1       1        1
Interacting Shocks            v          0       0        0
(N = 400, t = 0.038)          P          1000    0.01     100
Easy Relativistic             rho        1                1
Shock Tube                    vx         0                0
(N = 400, t = 0.4)            vy         0                0.99
                              P          1000             0.01
Hard Relativistic             rho        1                1
Shock Tube                    vx         0                0
(N = 100, t = 0.6)            vy         0.9              0.9
                              P          1000             0.01
All 1D tests involving piecewise constant states are summarized in table 1. Our first two tests are identical to tests performed by Springel (2010). The first is a simple shock tube, a test which has also been performed by Hernquist & Katz (1989). To demonstrate the importance of the HLLC Riemann solver in our scheme, we perform four tests, varying whether the mesh is static or moving, and varying whether we use HLL or HLLC. Results are plotted in Fig. 7. When the cells were moved and HLLC was employed, the contact discontinuity was very well approximated. In the other three cases, there was far more diffusion of the contact discontinuity. This demonstrates that the accuracy in this scheme comes from the combination of using a moving mesh and employing a multistate Riemann solver.
The second nonrelativistic test we perform involves the interaction of multiple shocks (Woodward & Colella, 1984). Fig. 9 shows the multiple-shock problem at t = 0.038. We compare results with a fixed and moving mesh, using both the HLL and HLLC Riemann solvers. Again, we see that the combination of using the HLLC solver and allowing the cells to move leads to very high accuracy in the solution.
Table 2: L1 error and convergence rate for the easy relativistic shock tube.

Code   N      L1 Error   Convergence Rate
TESS   100    4.23e-1
       200    2.57e-1    0.82
       400    1.36e-1    0.82
       800    7.45e-2    0.98
       1600   3.54e-2    0.91
       3200   2.48e-2    0.95
RAM    100    8.48e-1
       200    4.25e-1    1.0
       400    2.41e-1    0.82
       800    1.27e-1    0.92
       1600   6.43e-2    0.99
       3200   3.34e-2    0.95
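Assuming the quoted rates are the base-2 logarithm of the error ratio between successive resolution doublings, they can be recomputed directly from the L1 errors (a sketch; the paper’s quoted values may use a different convention or rounding):

```python
import math

def convergence_rates(errors):
    """Observed order of accuracy between successive rows of a
    convergence table in which N doubles each row:
    rate = log2(E_coarse / E_fine)."""
    return [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]

# L1 errors for the TESS entries of the easy shock tube table.
tess_errors = [4.23e-1, 2.57e-1, 1.36e-1, 7.45e-2, 3.54e-2, 2.48e-2]
rates = convergence_rates(tess_errors)
```

A rate near 1 indicates the first-order convergence expected for a flow with discontinuities.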
For relativistic one-dimensional tests, we compare our code to the relativistic adaptive mesh refinement code RAM (Zhang & MacFadyen, 2006), which was tested for convergence against a variety of test problems. The first two are Riemann problems with transverse velocity, the so-called “Easy” and “Hard” shock tube tests. The easy shock tube test is plotted in Fig. 8 at time t = 0.4. For the easy shock tube, we find nearly first-order convergence, and smaller errors than RAM (see table 2 for convergence rates). For the hard shock tube, we find that, like RAM, we need very high resolution to capture the solution accurately. However, because we can initially place the cells anywhere we want, we can resolve the initial discontinuity very well and capture the solution with as few as 100 computational zones (see Fig. 11). 50 of the zones were concentrated uniformly within a narrow region around the discontinuity, and the remaining 50 cells were distributed uniformly through the rest of the domain. Using a uniform initial mesh, TESS showed first-order convergence for this problem (table 3).
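The nonuniform initial cell placement described above can be sketched as follows. This is a hypothetical helper with illustrative parameters; the paper does not specify its exact placement routine.

```python
def concentrated_mesh(n_total, n_fine, x_disc, half_width,
                      x_min=0.0, x_max=1.0):
    """Place n_fine mesh points uniformly in a window of +/- half_width
    around the initial discontinuity at x_disc, and spread the remaining
    points uniformly over the rest of [x_min, x_max]."""
    fine = [x_disc - half_width + (i + 0.5) * (2 * half_width) / n_fine
            for i in range(n_fine)]
    n_coarse = n_total - n_fine
    left_len = (x_disc - half_width) - x_min
    right_len = x_max - (x_disc + half_width)
    # Split the coarse points in proportion to the lengths of the
    # two remaining segments.
    n_left = round(n_coarse * left_len / (left_len + right_len))
    n_right = n_coarse - n_left
    left = [x_min + (i + 0.5) * left_len / n_left for i in range(n_left)]
    right = [x_disc + half_width + (i + 0.5) * right_len / n_right
             for i in range(n_right)]
    return sorted(left + fine + right)
```

Because the mesh subsequently moves with the flow, this initial concentration of resolution follows the waves it was placed on.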
Table 3: L1 error and convergence rate for the hard relativistic shock tube.

Code   N      L1 Error   Convergence Rate
TESS   400    7.12e-1
       800    4.54e-1    0.64
       1600   2.26e-1    1.0
       3200   1.10e-1    1.0
RAM    400    5.21e-1
       800    3.63e-1    0.52
       1600   2.33e-1    0.64
       3200   1.26e-1    0.89
The last test we perform in one dimension is to demonstrate the convergence rate on a smooth problem. We set up an isentropic pulse identical to the one used by Zhang & MacFadyen (2006):
(35)  \rho(x) = \rho_{ref} \left( 1 + \alpha f(x) \right)
(36)  f(x) = \begin{cases} \left( (x/L)^2 - 1 \right)^4 & |x| < L \\ 0 & \text{otherwise} \end{cases}
(37)  P(x) = K \rho^{\Gamma}
\rho_{ref}, L, \alpha, and K are constants. In our case, \rho_{ref} = 1, L = 0.3, \alpha = 1, and K = 100. \Gamma is the adiabatic index, chosen to be 5/3. To determine the velocity, we use
(38)  \frac{1+v}{1-v} = \left( \frac{(\sqrt{\Gamma-1}+c_s)(\sqrt{\Gamma-1}-c_{s,ref})}{(\sqrt{\Gamma-1}-c_s)(\sqrt{\Gamma-1}+c_{s,ref})} \right)^{2/\sqrt{\Gamma-1}}
where c_s is the sound speed.
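A minimal sketch of the pulse setup, assuming the profile of Zhang & MacFadyen (2006) with the constants quoted above (the function name and return convention are illustrative):

```python
def isentropic_pulse(x, rho_ref=1.0, alpha=1.0, L=0.3, K=100.0,
                     gamma=5.0 / 3.0):
    """Density, pressure, and squared sound speed of the isentropic
    pulse: rho = rho_ref * (1 + alpha * f(x)) with
    f = ((x/L)**2 - 1)**4 inside |x| < L, and P = K * rho**gamma.
    """
    f = ((x / L) ** 2 - 1.0) ** 4 if abs(x) < L else 0.0
    rho = rho_ref * (1.0 + alpha * f)
    P = K * rho ** gamma
    # Relativistic sound speed squared (c = 1):
    # cs^2 = Gamma * P / (rho * h), with rho*h = rho + Gamma*P/(Gamma-1).
    cs2 = gamma * P / (rho + gamma * P / (gamma - 1.0))
    return rho, P, cs2
```

The sound speed returned here is what enters the Riemann-invariant relation that fixes the velocity profile.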
Table 4: L1 error and convergence rate for the isentropic pulse.

Code          N      L1 Error   Convergence Rate
TESS          80     4.88e-3
              160    1.78e-3    1.8
              320    4.84e-4    1.9
              640    1.20e-4    2.0
              1280   2.93e-5    2.1
              2560   6.97e-6    2.1
              5120   1.47e-6    2.1
RAM           80     1.12e-2
(U-PLM-RK4)   160    3.56e-3    1.7
              320    1.02e-3    1.8
              640    2.60e-4    2.0
              1280   6.49e-5    2.0
              2560   1.62e-5    2.0
              5120   4.04e-6    2.0
RAM           80     1.10e-2
(U-PPM-RK4)   160    2.56e-3    2.1
              320    5.74e-4    2.2
              640    1.34e-4    2.1
              1280   3.10e-5    2.1
              2560   7.33e-6    2.1
              5120   1.82e-6    2.1
Fig. 10 shows the pulse at the final time on a mesh with 400 cells. The L1 error and convergence rates are shown in table 4. In order to make a reasonable comparison between the codes, we chose two different methods employed by RAM which are second order. Of course, if we had chosen to compare with the WENO solvers employed by RAM, it would be no contest, as RAM can attain up to fifth-order convergence for smooth flows. For this problem, we not only find smaller errors, but also slightly faster convergence than RAM (RAM employed piecewise-linear reconstruction of the primitive variables and fourth-order Runge-Kutta time integration, leading to an overall second-order scheme). In fact, convergence for TESS was slightly better than second order. This could be due to the fact that the method becomes “more Lagrangian” as the resolution increases; face velocities approach the velocities of contact waves in this limit.
3.2. Convergence in Multiple Dimensions
On the surface, it is not obvious that this second-order convergence extends to multiple dimensions; it would be quite difficult to prove such a thing mathematically. A convergence test in 2D is therefore essential if we want to establish confidence in our numerical scheme. A straightforward test is to propagate an isentropic wave diagonally across the mesh. The mesh is initially a square grid, but because the flow is nonuniform, the mesh moves in a nontrivial way, and we can check whether second-order convergence continues to hold during such distortions. The result we present employs a nonrelativistic wave in a periodic box, with density and pressure given by the same formulas as in the 1D case, but now with
(39) 
For this test we use a different value of \alpha, but all other constants are the same as in the previous example. The nonrelativistic limit of equation (38) is simply
(40)  v = \frac{2}{\Gamma - 1} \left( c_s - c_{s,ref} \right)
The direction of this velocity is of course diagonal to the mesh,
(41)  v_x = v_y = v / \sqrt{2}
Condition (40) continues to hold as the wave evolves, so we can use it to measure the L1 error for this problem. In Fig. 12 we see the pulse on a mesh whose cells have clearly changed their shape and size in a nontrivial way. The L1 error for this problem is plotted in Fig. 13. The convergence rate was calculated to be 2.3, again slightly better than second order.
3.3. Tests in Spherical and Cylindrical Coordinates
TESS is written in a modular way so that it is straightforward to change the set of conserved quantities and to add source terms. As an example, we have implemented the equations in alternative coordinate systems for the two special cases of spherical symmetry and axisymmetry. Our extension to these coordinate systems takes the most naïve approach: we simply express the equations in an alternate coordinate system and solve them like any other hyperbolic system in conservation-law form. We do not wish to endure the complication of curvilinear Voronoi cells, which is why we specialize to the cases of spherical symmetry (1D) and axisymmetry (2D). For example, in cylindrical coordinates in axisymmetry, the nonrelativistic conservation laws take the following form:
(42)  \partial_t U + \frac{1}{r} \partial_r \left( r\, F^r \right) + \partial_z F^z = S
(43)  U = \left( \rho,\; \rho v_r,\; \rho v_z,\; E \right), \qquad F^i = \left( \rho v^i,\; \rho v_r v^i + P \delta^{ir},\; \rho v_z v^i + P \delta^{iz},\; (E + P) v^i \right)
(44)  S = \left( 0,\; P/r,\; 0,\; 0 \right)
where r is the distance from the z-axis (note that the mesh lives in the r-z plane). The two new developments here are the presence of the radial coordinate in the equations and the nonzero source term. We evaluate both of these at the center of mass of a cell or face, depending on the context. The description in spherical coordinates and the extensions to relativistic hydrodynamics are completely analogous.
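The geometric source term can be sketched as follows; the four-component state layout and the per-cell interface are assumptions for illustration, not TESS’s data structures.

```python
def geometric_source(P, r_com, volume):
    """Geometric source contribution dU/dt for one axisymmetric cell,
    with state layout U = (density, radial momentum, vertical momentum,
    energy).  Only the radial momentum picks up a source, +P/r,
    evaluated at the radial coordinate r_com of the cell's center of
    mass, as described in the text.
    """
    return (0.0, P / r_com * volume, 0.0, 0.0)
```

Because the source is evaluated at the center of mass rather than the mesh-generating point, it remains accurate even for irregular cells.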
We use this opportunity to test the method’s extension to these coordinate systems, as well as the ability of TESS to preserve symmetries using an unstructured mesh. We set up very simple shock-tube-like initial conditions:
(45) 
(46) 
(47) 
We perform a relativistic cylindrical and a spherical explosion. For the cylindrical explosion, we perform the calculation at high resolution in 1D using cylindrical coordinates, and at low resolution in 2D using Cartesian coordinates. The resolution is as low as 50 × 50. In Fig. 14 we show the results for a moving and a fixed mesh. We see that moving the cells continues to improve resolution of the contact discontinuity, and in this case we also see some improvement in symmetry: the values of the density profile are less scattered in the moving-mesh case, and tend to lie along a single curve. For the spherical explosion (Fig. 15), we see a similar story; the contact discontinuity is well preserved, and the spherical symmetry is captured somewhat better with the moving code.
3.4. Test Problems in Magnetohydrodynamics
To further illustrate that TESS in principle extends to an arbitrary set of conserved quantities, we demonstrate a few test problems in magnetohydrodynamics. From one point of view, MHD (whether relativistic or nonrelativistic) is just another hyperbolic set of equations in conservation-law form, where electromagnetic energy, momentum, stress, and pressure are added to the fluid equations, and the magnetic induction also satisfies a conservation law (Faraday’s law), which in the case of a perfect conductor (\vec{E} = -\vec{v} \times \vec{B}) has the form
(48)  \partial_t \vec{B} - \nabla \times \left( \vec{v} \times \vec{B} \right) = 0
The main ingredient which distinguishes MHD from other conservation laws is that the equations also satisfy an elliptic constraint, \nabla \cdot \vec{B} = 0, which must be maintained at every timestep. This constraint is analytically guaranteed to be preserved, but depending on the numerical scheme, the constraint violation can generically grow exponentially when the equations are solved in discrete form. Over the years, many techniques have been suggested to address this issue, the most popular being variants of constrained transport (Tóth, 2000), where steps are taken to ensure that all fluxes added to the B fields have zero divergence, so that the time derivative of div B is guaranteed to remain zero through the entire evolution (in other words, only a solenoidal B field is ever added during the time evolution). This method would be quite difficult to extend to an unstructured mesh, so it seems that we must choose some other way of preventing these constraint-violating solutions from growing.
We employ a hyperbolic divergence-cleaning scheme (Dedner et al., 2002) which adds an extra conserved quantity, \psi, and modifies the equations so that the evolution equations for the magnetic field become
(49)  \partial_t \vec{B} + \nabla \cdot \left( \vec{v}\vec{B} - \vec{B}\vec{v} \right) + \nabla \psi = 0
(50)  \partial_t \psi + c_h^2 \, \nabla \cdot \vec{B} = -\frac{c_h^2}{c_p^2} \psi
where c_h and c_p are freely specifiable constants which alter the behavior of the divergence-cleaning terms. If \psi = 0 and \nabla \cdot \vec{B} = 0, the equations revert to the usual MHD equations, but the introduction of \psi has the effect of altering the time evolution of the constraint so that it satisfies a wave equation, specifically the telegraph equation. This technique is generally not as desirable as constrained transport, partly because it does not guarantee zero divergence, but also because its success depends strongly on the two freely specifiable constants, for which there is no obvious choice of values. Because we do not have a constrained transport scheme (though we speculate that one is possible), and because this method is easy to implement, we currently use this divergence-cleaning form of the equations in our MHD code.
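One way to discretize the cleaning-field update is an operator split in which the damping term is integrated exactly. This is a hedged sketch of the Dedner et al. (2002) scheme, not TESS’s actual discretization:

```python
import math

def dedner_psi_update(psi, divB, dt, c_h, c_p):
    """One operator-split update of the cleaning scalar psi.

    The hyperbolic part couples psi to the local div(B); the damping
    term d(psi)/dt = -(c_h**2 / c_p**2) * psi is linear and is
    integrated exactly as an exponential decay.
    """
    psi = psi - dt * c_h ** 2 * divB            # source from the constraint
    psi *= math.exp(-dt * c_h ** 2 / c_p ** 2)  # exact exponential damping
    return psi
```

With divergence errors absent, psi simply decays away; with them present, it propagates and damps them, which is the telegraph-equation behavior described above.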
We now look at several MHD tests. The first is a one-dimensional shock tube test in relativistic MHD, the relativistic version of the Brio-Wu shock tube (Brio & Wu, 1988; van Putten, 1993). The left and right states are as follows:
(51)  \rho_L = 1.0, \qquad P_L = 1.0
(52)  \rho_R = 0.125, \qquad P_R = 0.1
The initial velocity is zero everywhere, and the magnetic field is given by the following:
(53)  \vec{B}_L = (0.5,\; 1.0,\; 0)
(54)  \vec{B}_R = (0.5,\; -1.0,\; 0)
The adiabatic index is \Gamma = 2. The calculation was done with 400 zones. The final state is shown in Fig. 16. It is clear that an advantage is gained in moving the cells, both in the sharpness of the contact discontinuity and in the shock front. Additionally, the value of the density in the shock wave appears to be more accurate when using a moving mesh. So, it would seem that there is great potential in using this Lagrangian approach for problems in magnetohydrodynamics. However, we wish to avoid reading too much from this result, as this is a one-dimensional problem, and most of the challenges in magnetohydrodynamics only appear when solving multidimensional problems.
We consider now a test problem in nonrelativistic magnetohydrodynamics: a cylindrical explosion in a uniform magnetic field. The calculation is done in a box of dimensions [0, 1] × [0, 1]. The initial conditions are as follows:
(55) 
(56) 
(57) 
The magnetic pressure at t=0.1 is plotted in Fig. 17. Also plotted is the numerically calculated divergence of the magnetic field. This divergence is normalized in the following manner:
(58)  \left( \nabla \cdot \vec{B} \right)_{norm} = \frac{ \left| \nabla \cdot \vec{B} \right| \, R }{ \left| \vec{B} \right| }
where R is the characteristic size of the Voronoi cell. We see the code does a reasonable job of resolving the explosion at low resolution; however, the magnetic divergence is unacceptably large (the normalized magnetic divergence grows to be as large as 0.14). This large error in div B appears to be particularly problematic for this method. Any time the Voronoi mesh changes topology, there can be extremely rapid growth in magnetic divergence due to the appearance of new faces in the tessellation. This growth is so rapid that it shows up even in simple test problems like the one displayed. In fact, the growth rate is such that conventional divergence-cleaning techniques for suppressing the growth of div B seem insufficient to resolve the problem. We believe that this problem is not insurmountable, but the eventual solution might require a novel approach for treating div B. As such, we have much work to do in improving the magnetohydrodynamics part of our code. Another possible idea is to use the same basic numerical scheme, but impose the geometry of the cells by hand rather than determining them based on the motion of mesh points. If such a scheme were developed, constrained transport could potentially be possible, since the geometry of the cells would be known at runtime. This idea could have many other advantages, so it is a thought worth pursuing, but for TESS as it is currently written, we would like a better means of damping the growth of magnetic monopoles.
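A finite-volume estimate of div B on a single Voronoi cell, together with the normalization of Eq. (58), can be sketched as follows. The face-tuple layout is hypothetical, and the use of |B| as the normalizing field strength is an assumption.

```python
def cell_divB(faces, volume):
    """Finite-volume div(B) for one cell via Gauss's theorem:
    (1/V) * sum over faces of (B at face) . (outward normal) * area.
    `faces` is a list of (Bx, By, nx, ny, area) tuples.
    """
    total = sum((bx * nx + by * ny) * area
                for bx, by, nx, ny, area in faces)
    return total / volume

def normalized_divB(faces, volume, R, Bmag):
    """Dimensionless monopole measure: |div B| * R / |B|."""
    return abs(cell_divB(faces, volume)) * R / Bmag
```

The appearance of a new face in the tessellation adds a new term to this sum, which is why topology changes can inject divergence so abruptly.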
On the other hand, for some problems, div B may not be the most important concern. If the magnetic divergence does not have time to grow, we may get to the end of a calculation before it starts to have an effect on the solution. As an example, we show the relativistic MHD rotor test (Del Zanna et al., 2003), a challenging test of the robustness of our numerical scheme. The calculation is done in a square box. The initial conditions are given by the following:
(59)  \rho = \begin{cases} 10 & r < 0.1 \\ 1 & \text{otherwise} \end{cases}, \qquad P = 1
(60)  \vec{B} = (1.0,\; 0,\; 0)
The disc is initially rigidly rotating with angular frequency \omega = 9.95, so that the edge of the disc has Lorentz factor \gamma \approx 10. The velocity field is zero outside the disc. The calculation was done on an initially uniform mesh, and the results are shown in Fig. 18. This is ordinarily a very challenging RMHD test problem, but we required no special failsafe procedures to solve it. Actually, for this particular problem, we are at a slight disadvantage, since the mesh does not necessarily give us adaptive resolution exactly where we want it; the rarefaction is underresolved (the central cells expand by roughly a factor of five in length). However, it seems that, at least for our method and for this problem, div B does not interfere with our results.
3.5. Kelvin-Helmholtz Instability
We find that TESS is very good at solving problems involving fluid instabilities, particularly in cases where a contact discontinuity is being perturbed. The Kelvin-Helmholtz instability is one such example, which occurs at the interface between shearing fluids. We look at the nonrelativistic case first, to compare with AREPO.
We use a periodic box with a stripe occupying the middle half of the domain. The pressure is uniform throughout the domain, and the density and x-velocity are uniform in each region:
(61)  \rho = 2, \quad v_x = 0.5 \qquad \text{(inside the stripe)}
(62)  \rho = 1, \quad v_x = -0.5 \qquad \text{(outside the stripe)}
(63)  P = 2.5
This sets up the initial conditions for the instability, and to excite a particular mode we add a small perturbation to the ycomponent of the velocity:
(64)  v_y(x, y) = w_0 \sin(4 \pi x)
(65)  \qquad \times \left[ \exp\left( -\frac{(y - 0.25)^2}{2\sigma^2} \right) + \exp\left( -\frac{(y - 0.75)^2}{2\sigma^2} \right) \right]
We choose w_0 = 0.1 and \sigma = 0.05/\sqrt{2}. We use an adiabatic index of \Gamma = 5/3. These initial conditions are identical to those used by Springel (2010). Our visualization is different, so it is difficult to directly compare results, but it appears that the growth rate of the instability is the same for a 50 × 50 lattice (see Fig. 19).
Tests of Galilean invariance, on the other hand, indicate that our numerics do depend on the moving reference frame in which the calculation is performed. When adding an overall velocity to the initial conditions, we find that the results are significantly changed if the boost velocity is large compared to the flow velocity. This can be traced back to the fact that the HLLC solver is not invariant under Galilean boosts. It would be interesting to see how differently the code would perform using an exact Riemann solver, though this is not a very high priority, because TESS is ultimately designed to solve relativistic problems. A more optimistic way to interpret this result is to say that Galilean invariance is not a necessary condition for accurately capturing contact discontinuities.
As a relativistic test, we use TESS to calculate the growth rate of the Kelvin-Helmholtz instability in the relativistic case. For this purpose, we change the initial conditions slightly: the initial density is now uniform everywhere, the pressure and adiabatic index are changed, the initial perturbation is given to the y component of the four-velocity, and the x component of the four-velocity is varied. To measure the development of the instability, we add an additional passive scalar quantity, X, which is advected with the fluid velocity, chosen to be X = 0 outside the stripe and X = 1 inside the stripe. We measure the growth of the instability by tracking the position of the contact discontinuity in X. Note that this is much more straightforward to do with TESS than it would be for an Eulerian code. In fact, we can track the development of the instability even while the perturbation to the contact is smaller than the size of a Voronoi cell.
We compare the linear growth rate against the analytic solution. The growth rate has been calculated analytically by Bodo et al. (2004). It is proportional to the imaginary part of \phi, where k is the wavevector corresponding to the wavelength of the perturbation, c_s is the sound speed, and \phi is given by the following formula:
(66) 
Here, v is the flow speed, and \mathcal{M} is the relativistic Mach number, defined as:
(67)  \mathcal{M} = \frac{\gamma v}{\gamma_s c_s}
where c_s is the relativistic sound speed, and \gamma_s = (1 - c_s^2)^{-1/2} is the corresponding Lorentz factor.
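The relativistic Mach number of Eq. (67) is straightforward to evaluate (a sketch, in units with c = 1):

```python
import math

def relativistic_mach(v, cs):
    """Relativistic Mach number M = (gamma * v) / (gamma_s * cs),
    where gamma and gamma_s are the Lorentz factors of the flow speed
    v and of the sound speed cs, respectively (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    gamma_s = 1.0 / math.sqrt(1.0 - cs * cs)
    return (gamma * v) / (gamma_s * cs)
```

In the nonrelativistic limit both Lorentz factors approach unity and the ordinary Mach number v/c_s is recovered; for relativistic flow speeds the ratio is boosted above v/c_s.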
In Fig. 20 we plot the numerically calculated growth rates against this analytical formula. At the lowest resolution, the growth rate was not very accurately calculated, but for all other tests we obtain very accurate results, considering the low resolution.
Notice that in contrast with the nonrelativistic example, it does not make sense even to ask about the Lorentz invariance of the code for this problem, since periodic boundary conditions are only periodic in one reference frame, due to the relativity of simultaneity. In general, even if we do not employ periodic boundaries, testing Lorentz invariance can be an extremely complicated task, because one would have to take into account that different mesh points can sample the fluid motion at different points in time. In any case, we know that our code is not manifestly Lorentz invariant, so we would not expect a striking improvement in Lorentz invariance over an Eulerian code.
3.6. Rayleigh-Taylor Instability
A related fluid instability is the Rayleigh-Taylor instability, in which a fluid of high density is balanced on top of a fluid of low density, and the contact discontinuity between the two is perturbed in the presence of gravity. In the relativistic case, we can get a gravitational field by transforming to an accelerated reference frame. This is not difficult to do, but we can also simply solve the relativistic equations in the weak-field limit. For the purposes of calculating the linear growth rate, it does not matter which we do, since for a small enough perturbation, the potential jump between the highest and lowest points of the perturbation will be small, and hence the weak-field limit applies. We set up the following initial conditions in the computational domain:
(68) 
(69) 
For the initial velocity perturbation:
(70) 
We choose constants , , , and the gravitational field is . is varied to get different growth rates.
In Fig. 21, we show the growth of the instability using a moving or a fixed mesh, at fairly low resolution, 128 × 128. The initial mesh was not perfectly square; the cells were randomly offset slightly to improve the regularity of the mesh throughout the calculation. This small (~0.1%) offset is enough to cause asymmetry, even in the fixed-mesh case, but we employ the offset in both cases, so that the only difference between the two cases is whether the mesh points are allowed to move.
The linear growth rate of the instability is easy to derive even in the relativistic case, because the only relativistic effect for small perturbations is that gravity couples to energy instead of mass. Therefore, the growth rate for the relativistic case is still
(71)  \omega = \sqrt{ \mathcal{A}\, g\, k }
where g is the gravitational field strength, k is the wavevector corresponding to the perturbation, and \mathcal{A} is the relativistic Atwood number, defined as
(72)  \mathcal{A} = \frac{ \rho_1 h_1 - \rho_2 h_2 }{ \rho_1 h_1 + \rho_2 h_2 }
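The growth rate above can be evaluated directly. This sketch uses the classical Atwood number built from the two densities; as noted in the text, the relativistic case simply replaces the densities with enthalpy densities.

```python
import math

def rt_growth_rate(rho_heavy, rho_light, g, k):
    """Linear Rayleigh-Taylor growth rate sqrt(A * g * k), with the
    Atwood number A = (rho_h - rho_l) / (rho_h + rho_l).  For the
    relativistic case, pass enthalpy densities in place of densities.
    """
    A = (rho_heavy - rho_light) / (rho_heavy + rho_light)
    return math.sqrt(A * g * k)
```

For equal densities A = 0, and the interface is marginally stable, as expected.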
4. Rayleigh-Taylor Instability in a Decelerating Shock
The Rayleigh-Taylor instability provides us with a useful astrophysical application, in the context of a decelerating relativistic shock. The reason this is relevant is that an external gravitational field is equivalent to an accelerating reference frame. Hence a low-density gas accelerating into a high-density gas, or a high-density gas decelerating into a low-density gas, will exhibit this same instability. The latter could potentially occur in a shock wave, as matter pushes its way through some ambient medium (Levinson, 2010). If the explosion is spherical and the density profile has a power-law dependence, \rho \propto r^{-k}, then the shock will decelerate for k < 3. If we assume a simple point explosion in this density profile, we recover the Blandford-McKee solution (Blandford & McKee, 1976), or at late enough times, the nonrelativistic Sedov-Taylor solution. In both cases, the shock front and contact discontinuity coincide. However, if there is excess mass in the initial explosion, so that the total mass is greater than the integral of the power-law density profile, then there will be a coasting period until the shock overtakes an amount of mass comparable to this excess mass. This is followed by deceleration to a two-shock solution (Nakamura & Shigeyama, 2006), because the information that the shock is now decelerating needs to be transported back upstream. In this case, there will be a contact discontinuity between the two shocks, and the density discontinuity across this contact can be quite significant. For this situation, the Rayleigh-Taylor instability can play an important role in the solution, because the contact is quite unstable. A calculation in spherical symmetry would be incorrect, even though the initial conditions are spherically symmetric.
We capture the Rayleigh-Taylor instability from a spherically symmetric explosion in two dimensions using axisymmetric coordinates, for density profiles corresponding to k = 0, 1, and 2. We use a domain in the r-z plane and set up the initial conditions as follows:
(73) 
The parameters are chosen to start with an accelerating shock which coasts for a time and then decelerates, and to ensure that the system has a finite amount of mass. The initial pressure profile is given by
(74) 
We use a gamma-law equation of state. The parameters are chosen to produce a shock which reaches Lorentz factors of order ten before decelerating to mildly relativistic flow by the end of the calculation. The relevant values are given in Table 5, along with a rough estimate of the bulk Lorentz factor for the fluid at its maximum speed, and the Lorentz factor at the end of the calculation. Note that these initial conditions are fundamentally different from the conditions chosen by Levinson (2010); his results came from perturbative analysis of the self-similar solution of Nakamura & Shigeyama (2006), whereas ours begin with an explosion. In spherical symmetry, our explosion approaches this self-similar solution, but due to the instability, in 2D our calculation should never find this solution (at late enough times, however, it should eventually find the Sedov solution).
Table 5: Parameters and Lorentz factors for the decelerating-shock calculations.

Power Law (k)   (see text)   Max Lorentz Factor   Final Lorentz Factor
0               6            11                   4
1               4            9                    4
2               6            13                   11
In Figures 23-25 we show the growth of the instability for three power-law density profiles. It appears that the instability grows rapidly enough to catch up to the forward shock. In fact, relativity appears to help us in this case, because for large Lorentz factors the shock front must remain a short distance from the contact discontinuity, keeping it within reach until the Rayleigh-Taylor fingers can catch up to the shock. The fact that the instability overtakes the forward shock appears to be generic and does not require special initial conditions, although it would of course not occur for a simple point explosion, because in this case the shock and contact discontinuity would already coincide. Using steep power-laws with little deceleration does not seem to change this picture; the instability can still catch up to the shock even when k = 2 and the Lorentz factor is 11, as in our third test case. This result is in qualitative agreement with the results of Levinson (2010), whose work was restricted to the linear growth rate. To our knowledge, ours is the first direct numerical calculation of the instability for a relativistic shock.
It is worth noting another advantage of TESS which is shown most plainly in Fig. 25. If we know in advance where to initially place the cells, we can get high resolution exactly where we need it. In this case, we initially concentrated most of the cells near the origin, and the region of high resolution followed the motion of the shock. Nearly all of the zones have been compressed into the shock front, giving us very high resolution there. We generated this initial mesh in the following way: we started with an initially uniform mesh (though we staggered the mesh points so that our initial mesh was not exactly square). We then changed the location of the mesh points via the following prescription:
(75)  
(76)  
(77) 
where the free parameter controls the degree of concentration. Thus our initial resolution was several times higher near the origin than it would be for a uniform mesh. It should be noted that this effective resolution changes throughout the time evolution, partially due to the shock overtaking more cells, and partially due to the increasing amount of volume which is being well resolved.
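A power-law radial map of this general kind can be sketched as follows. The exponent alpha = 2 is a hypothetical choice for illustration; the actual prescription and constants are those of Eqs. (75)-(77).

```python
def concentrate_toward_origin(x, y, alpha=2.0, r_max=1.0):
    """Map a uniform mesh point radially inward,
    r -> r_max * (r / r_max)**alpha with alpha > 1, keeping the
    angular position fixed.  Points at r = r_max stay put, while
    interior points crowd toward the origin.
    """
    r = (x * x + y * y) ** 0.5
    if r == 0.0:
        return (0.0, 0.0)
    scale = r_max * (r / r_max) ** alpha / r
    return (x * scale, y * scale)
```

Because the map is monotonic in radius, it preserves the ordering of mesh points and simply redistributes the resolution.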
5. Summary
TESS is a new relativistic hydrodynamics code based on the Voronoi tessellation. It performs particularly well on problems with sharp discontinuities, especially contact discontinuities. TESS has been extensively tested. For smooth problems, convergence is slightly better than second order. For problems involving a moving contact discontinuity, TESS demonstrated a clear advantage over the Eulerian codes in that smearing of contact discontinuities is greatly reduced. It was also demonstrated that employing the HLLC solver was necessary to get full advantage out of the Lagrangian nature of this method. For many nonrelativistic problems, TESS has success comparable to that of AREPO, but unfortunately lacks the Galilean invariance property.
We have applied TESS to an astrophysical example, testing the stability of a decelerating spherical shock wave. We found that the contact discontinuity behind the shock was unstable to the RayleighTaylor instability, so much so that the instability was able to reach the forward shock. A detailed study will be presented in a future publication.
The structure of TESS is relatively simple: there is no need to perform any explicit rotations or boosts, and we therefore have the luxury of using an approximate Riemann solver, so long as the solver does not inherently diffuse contact discontinuities. This is a definite advantage, because for some hyperbolic systems, calculating the exact solution to the Riemann problem is computationally intensive, if not impossible. Our method can be used to solve a wide range of hyperbolic problems, such as general relativistic hydrodynamics or magnetohydrodynamics. The tessellation algorithm is also quite simple and very robust; it takes little effort to implement, and is not bug-prone.
In speed tests, we find the tessellation algorithm consumes roughly half of the code’s runtime. We urge care in interpreting this figure, however, as very little effort has gone into optimization, and the figure is highly problem-dependent, as is the code’s overall runtime. On a 2.67 GHz Intel Xeon X5550 processor, TESS spends roughly 40 microseconds per zone per timestep, which is comparable to the efficiency of RAM. We hope to improve the efficiency in future development.
The code is structured so that an extension to three dimensions merely requires writing a three-dimensional tessellation algorithm, a natural extension of the two-dimensional algorithm already outlined. Parallelizing the code poses no major conceptual hurdles (apart from load balancing). Both of these extensions are currently being pursued. We are also interested in implementing more complicated boundary conditions, mesh refinement, and a local time step; all of these have been implemented in AREPO and are thus solved problems.
Appendix A. Primitive Variable Solver for Relativistic Magnetohydrodynamics
The conversion from conserved variables back to their primitive counterparts is nontrivial for relativistic magnetohydrodynamics, and is particularly challenging if we wish to use an arbitrary equation of state. Our code employs a three-dimensional Newton-Raphson solver to recover the primitive variables from the conserved variables. The equations being solved are nearly the same as those given by Noble et al. (2006); we derive them again here for completeness.
We begin with the eight conserved variables, which we know:
(A1) 
We also make the following convenient definitions:
(A2) 
(A3) 
The electromagnetic energy and momentum are straightforward to calculate assuming :
(A4)  
(A5) 
So, the four-momentum Q can be written out explicitly:
(A6)  
(A7) 
Following Noble et al. (2006), we can also take the dot product to evaluate :
(A8) 
If we substitute this back into the equations for the four-momentum, we arrive at the following two relations:
(A9) 
(A10) 
Equations (A9) and (A10), along with the definition of W (A3), constitute three equations in the three unknowns W, K, and the temperature T. We can thus set up a Newton-Raphson root-finding scheme that searches the three-dimensional parameter space of W, K, and T for a solution to the three equations. All of this assumes an arbitrary equation of state,
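The iteration itself is a standard multidimensional Newton-Raphson loop. A minimal sketch, assuming a generic residual function in place of Eqs. (A9), (A10), and (A3) (the names `newton3d` and `toy_residual` are ours, and the toy system below is purely illustrative, not the RMHD equations):

```python
import numpy as np

def newton3d(residual, guess, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson in three unknowns (W, K, T).

    `residual` maps a length-3 state vector to a length-3 residual
    vector; the Jacobian is estimated by finite differences.
    """
    x = np.asarray(guess, dtype=float)
    for _ in range(max_iter):
        f = residual(x)
        if np.linalg.norm(f) < tol:
            return x
        # Finite-difference Jacobian, one column per unknown.
        J = np.empty((3, 3))
        for j in range(3):
            h = 1e-7 * max(abs(x[j]), 1.0)
            xp = x.copy()
            xp[j] += h
            J[:, j] = (residual(xp) - f) / h
        x = x - np.linalg.solve(J, f)
        # As in the text: if K (second unknown) goes negative on an
        # iteration, reset it to zero so no square root of a negative
        # number is ever taken; W and T are left unconstrained.
        if x[1] < 0.0:
            x[1] = 0.0
    raise RuntimeError("Newton-Raphson failed to converge")

# Toy residual with known root (W, K, T) = (2, 1, 3), for illustration only.
def toy_residual(x):
    W, K, T = x
    return np.array([W * T - 6.0, W + K - 3.0, K * T - 3.0])

root = newton3d(toy_residual, [1.5, 0.5, 2.5])
```

In the actual solver, the residual function would evaluate Eqs. (A9), (A10), and the definition of W at the current (W, K, T), and the Jacobian could of course be computed analytically instead.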
(A11)  
(A12) 
Note that , so that we do not have to worry about one of our variables becoming too large (as would be the case if, for example, we used the velocity as one of our unknowns). If K becomes negative on a given iteration, it is reset to zero to prevent taking the square root of a negative number, but W and T are generally not constrained. For an adiabatic equation of state, we use the same framework but make the definition
(A13) 
so that pressure effectively takes the place of temperature as the third variable being solved for. For our initial guesses, we use the values of W, K, and T from the previous timestep. If the solver fails to find a root, however, we try again using new guess values. One idea is to use an estimate which is exact in the limit that rest mass or magnetic energy dominates. In doing so, we obtain the following estimates for K:
(A14) 
To take into account both of these possibilities, we use a guess value which is a weighted average of these two estimates:
(A15) 
If this estimate fails, we can use a method found by Cerdá-Durán et al. (2008) and try a safer set of guess values, based on the maximum possible values (also assuming the Lorentz factor is never larger than ):
(A16) 
(A17) 
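The guess strategy described above is an ordered fallback cascade: previous-timestep values first, then the weighted-average estimate, then the "safe" maximum-based values of Cerdá-Durán et al. (2008). A sketch of that control flow (the names `recover_primitives` and `solve` are ours, not part of TESS):

```python
def recover_primitives(solve, guesses):
    """Try each initial guess in turn until the root-finder succeeds.

    `solve` is any routine that raises RuntimeError on failure (e.g.
    a Newton-Raphson solver closed over the conserved variables);
    `guesses` is the ordered fallback cascade of (W, K, T) guesses.
    """
    last_err = None
    for g in guesses:
        try:
            return solve(g)
        except RuntimeError as err:
            last_err = err
    raise RuntimeError("primitive recovery failed for all guesses") from last_err

# Illustration: a solver that rejects the first guess but accepts the second.
def picky_solver(g):
    if g[0] < 1.0:
        raise RuntimeError("bad guess")
    return g

state = recover_primitives(picky_solver, [[0.5, 0.0, 0.0], [2.0, 1.0, 3.0]])
```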
To determine when we have converged on the inverted state, we need an error function that measures how close we are to the true solution. One natural choice is
(A18) 
where measures how much W has changed since the last iteration. The strategy is then to iterate until for some specified tolerance, typically . Unfortunately, it is possible for W to change very slowly while K and T are still far from the solution. To prevent this from being a problem, we modify the error function as follows:
(A19) 
This agrees with the previous definition when we are close to the true solution, but away from the solution it helps to prevent "false positives", in which becomes small before the true solution has been found.
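One illustrative way to guard against such false positives (this is our own sketch of the idea, not the paper's Eq. A19) is to require all three unknowns, not just W, to have stopped moving before declaring convergence:

```python
def converged(prev, curr, tol=1e-10):
    """True only if every unknown (W, K, T) has stopped moving.

    Checking the relative change of all three variables prevents a
    slowly-varying W from masquerading as convergence while K and T
    are still far from the solution. The tolerance value is arbitrary.
    """
    return all(
        abs(c - p) <= tol * max(abs(c), 1.0)
        for p, c in zip(prev, curr)
    )
```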
References
 Aloy et al. (1999) Aloy, M. A., Ibáñez, J. M., Martí, J. M., & Müller, E. 1999, ApJS, 122, 151
 Anderson et al. (2006) Anderson, M., Hirschmann, E. W., Liebling, S. L., & Neilsen, D. 2006, Classical and Quantum Gravity, 23, 6503
 Anninos et al. (2005) Anninos, P., Fragile, P. C., & Salmonson, J. D. 2005, ApJ, 635, 723
 Antón et al. (2006) Antón, L., Zanotti, O., Miralles, J. A., Martí, J. M., Ibáñez, J. M., Font, J. A., & Pons, J. A. 2006, ApJ, 637, 296
 Beckwith & Stone (2011) Beckwith, K., & Stone, J. M. 2011, ApJS, 193, 6
 Blandford & McKee (1976) Blandford, R. D., & McKee, C. F. 1976, PhFl, 19, 1130
 Bodo et al. (2004) Bodo, G., Mignone, A., & Rosner, R. 2004, Phys. Rev. E, 70, 036304
 Börgers & Peskin (1987) Börgers, C., & Peskin, C. S. 1987, J. Comput. Phys., 70, 397
 Brio & Wu (1988) Brio, M., & Wu, C. C. 1988, J. Comput. Phys., 75, 400
 Cerdá-Durán et al. (2008) Cerdá-Durán, P., Font, J. A., Antón, L., & Müller, E. 2008, A&A, 492, 937
 Dedner et al. (2002) Dedner, A., Kemm, F., Kröner, D., Munz, C.-D., Schnitzer, T., & Wesenberg, M. 2002, J. Comput. Phys., 175, 645
 Del Zanna et al. (2003) Del Zanna, L., Bucciantini, N., & Londrillo, P. 2003, A&A, 400, 397
 Del Zanna et al. (2007) Del Zanna, L., Zanotti, O., Bucciantini, N., & Londrillo, P. 2007, A&A, 473, 11
 De Villiers & Hawley (2003) De Villiers, J.-P., & Hawley, J. F. 2003, ApJ, 589, 458
 Duncan & Hughes (1994) Duncan, G. C., & Hughes, P. A. 1994, ApJ, 436, L119
 Etienne et al. (2010) Etienne, Z. B., Liu, Y. T., & Shapiro, S. L. 2010, Phys. Rev. D, 82, 084031
 Falle & Komissarov (1996) Falle, S. A. E. G., & Komissarov, S. S. 1996, MNRAS, 278, 586
 Font (2003) Font, J. A. 2003, Living Reviews in Relativity, 6, 4
 Gammie et al. (2003) Gammie, C. F., McKinney, J. C., & Tóth, G. 2003, ApJ, 589, 444
 Giacomazzo & Rezzolla (2006) Giacomazzo, B., & Rezzolla, L. 2006, Journal of Fluid Mechanics, 562, 223
 Giacomazzo & Rezzolla (2007) Giacomazzo, B., & Rezzolla, L. 2007, Classical and Quantum Gravity, 24, 235
 Greif et al. (2011) Greif, T. et al. 2011, ApJ, in press, arXiv:1101.5491v2
 Hernquist & Katz (1989) Hernquist, L., & Katz, N. 1989, ApJS, 70, 419
 Kurganov & Tadmor (2000) Kurganov, A., & Tadmor, E. 2000, J. Comput. Phys., 160, 241
 Levinson (2010) Levinson, A. 2010, GAFD, 104, 85
 Marti & Muller (2003) Martí, J. M., & Müller, E. 2003, Living Reviews in Relativity, 6, 7, http://www.livingreviews.org/lrr-2003-7
 Meliani et al. (2007) Meliani, Z., Keppens, R., Casse, F., & Giannios, D. 2007, MNRAS, 376, 1189
 Mignone & Bodo (2005) Mignone, A., & Bodo, G. 2005, MNRAS, 364, 126
 Mignone et al. (2007) Mignone, A., Bodo, G., Massaglia, S., Matsakos, T., Tesileanu, O., Zanni, C., & Ferrari, A. 2007, ApJS, 170, 228
 Nagataki (2009) Nagataki, S. 2009, ApJ, 704, 937
 Nakamura & Shigeyama (2006) Nakamura, K., & Shigeyama, T. 2006, ApJ, 645, 431
 Noble et al. (2006) Noble, S. C., Gammie, C. F., McKinney, J. C., & Del Zanna, L. 2006, ApJ, 641, 626
 Noh (1964) Noh, W. F. 1964, Methods Comput. Phys., 3, 117
 Pons et al. (2000) Pons, J. A., Martí, J. M., & Müller, E. 2000, Journal of Fluid Mechanics, 422, 125
 Rosswog (2010) Rosswog, S. 2010, Journal of Computational Physics, 229, 8591
 Springel (2010) Springel, V. 2010, MNRAS, 401, 791
 Tchekhovskoy et al. (2007) Tchekhovskoy, A., McKinney, J. C., & Narayan, R. 2007, MNRAS, 379, 469
 Tóth (2000) Tóth, G. 2000, J. Comput. Phys., 161, 605
 van der Holst et al. (2008) van der Holst, B., Keppens, R., & Meliani, Z. 2008, Computer Physics Communications, 179, 617
 van Putten (1993) van Putten, M. H. P. M. 1993, J. Comput. Phys., 105, 339
 Whitehurst (1995) Whitehurst, R. 1995, MNRAS, 277, 655
 Woodward & Colella (1984) Woodward, P. & Colella, P. 1984, J. Comput. Phys., 54, 115
 Zhang & MacFadyen (2006) Zhang, W., & MacFadyen, A. I. 2006, ApJS, 164, 255