SPHYNX: an accurate density-based SPH method
for astrophysical applications
Key Words: methods: numerical – hydrodynamics – instabilities
Aims: Hydrodynamical instabilities and shocks are ubiquitous in astrophysical scenarios. Therefore, an accurate numerical simulation of these phenomena is mandatory to correctly model and understand many astrophysical events, such as supernovae, stellar collisions, or planetary formation. In this work, we address many of the problems that a commonly used technique, smoothed particle hydrodynamics (SPH), has when dealing with subsonic hydrodynamical instabilities or shocks. To that aim, we built a new SPH code named SPHYNX, which includes many of the recent advances in the SPH technique along with some new ones, which we present here.
Methods: SPHYNX is of Newtonian type and grounded in the Euler-Lagrange formulation of the smoothed-particle hydrodynamics technique. Its distinctive features are: the use of an integral approach to estimating gradients; the use of a flexible family of interpolators, called sinc kernels, which suppress the pairing instability; and the incorporation of a new type of volume element which provides a better partition of unity. Unlike other modern formulations, which consider volume elements linked to pressure, our choice of volume element relies on density. SPHYNX is, therefore, a density-based SPH code.
Results: A novel computational hydrodynamic code oriented to astrophysical applications is described, discussed, and validated in the following pages. The resulting code conserves mass, linear and angular momentum, energy, and entropy, and preserves kernel normalization even in strong shocks. In our proposal, the estimation of gradients is enhanced using an integral approach. Additionally, we introduce a new family of volume elements which reduces the so-called tensile instability. Both features help to suppress the damping that often prevents the growth of hydrodynamic instabilities in regular SPH codes.
Conclusions: On the whole, SPHYNX has passed the verification tests described below. For identical particle settings and initial conditions, the results were similar to (or, in some particular cases, better than) those obtained with other SPH schemes, such as GADGET-2 and PSPH, or with the recent density-independent (DISPH) and conservative reproducing kernel (CRKSPH) formulations.
Many interesting problems in astrophysics involve the evolution of fluids and plasmas coupled with complex physics. For example, in core collapse supernovae, magnetohydrodynamics meets general relativity, nuclear processes, and radiation transport. Other scenarios, such as neutron star mergers, Type Ia supernovae, and planet or star formation, face similar challenges in terms of complexity. Besides that, these phenomena often have a strong dependence on the dimensionality and must be studied in three dimensions. This requires accurate numerical tools, which translates to rather sophisticated hydrodynamic codes. Because of its adaptability to complex geometries and good conservation properties, the Smoothed Particle Hydrodynamics (SPH) method is a popular alternative to grid-based codes in the astrophysics community. SPH is a fully Lagrangian method, born forty years ago (Lucy 1977; Gingold & Monaghan 1977), that has since undergone sustained development (Monaghan 1992, 2005; Rosswog 2015a; Springel 2010b; Price 2012). Recent years have witnessed a large range of improvements specially aimed at reducing the numerical errors inherent to the technique. These errors are known as E0 errors (Read et al. 2010) and mainly appear due to the conversion of the integrals representing local-averaged magnitudes of the fluid into finite summations. The simplest way to get rid of them would be to work closer to the continuum limit, which implies working with a number of particles and a number of neighbors as large as possible (ideally, both tending to infinity; see Zhu et al. 2015). Unfortunately, this is not feasible in common applications of the technique because the total number of particles is limited by the available computing power (both speed and storage). Moreover, the number of neighbors of a given particle cannot be arbitrarily increased without suffering the pairing instability (Schuessler & Schmitt 1981).
This is a numerical instability that acts as an attractive force appearing at scales slightly shorter than the smoothing length h, provoking artificial particle clumping and effectively decreasing the quality of the discretization, which eventually leads to unrealistic results.
In order to reduce the E0 errors, another more practical possibility has been studied in recent years: finding interpolating functions that are less prone to particle clustering than the widely used M4, or cubic-spline, kernel (Monaghan & Lattanzio 1985). Among the various candidates, the most used (pairing-resistant) kernels come either from an extension of the B-spline family to higher-order polynomials (Schoenberg 1946) or from the Wendland functions (Wendland 1995). In particular, the Wendland family is specially suited to cope with the pairing instability (Dehnen & Aly 2012). Another possibility is the family of sinc kernels (Cabezón et al. 2008), which are functions of the type sinc^n(x) and add the capability of dynamically modifying their shape simply by changing the exponent n. That adaptability makes the SPH technique even more flexible and can be used, in particular, to prevent particle clustering, as shown in Sect. 2.1.
Historically, the growth of subsonic hydrodynamical instabilities has been problematic for SPH simulations, which damp them significantly. The Rayleigh-Taylor (RT) instability is a ubiquitous phenomenon which serves as a paradigmatic example. It appears wherever cold and dense fluid lies on top of a hot and diluted one in the presence of gravity (or of any inertial force, by virtue of the principle of equivalence). The entropy inversion leads to the rapid overturn of the fluid layers. In the real world, the overturn is triggered by small perturbations at the separation layer between the light and dense fluids. The RT instability is one of the most important agents driving the thermonuclear explosion of a white dwarf, which gives rise to Type Ia supernova (SNIa) explosions. Its correct numerical description is also crucial to understanding the structure of supernova remnants (SNR) and to modeling core collapse supernova (CCSN) explosions. Additionally, the RT instability is also the source of other interesting phenomena, such as the Kelvin-Helmholtz (KH) instability or turbulence. The numerical simulation of the Rayleigh-Taylor instability using SPH has traditionally been a drawback for the technique, especially for low-amplitude initial perturbations in the presence of a weak gravitational force. At present, for a similar level of resolution, the best SPH codes cannot yet compete with state-of-the-art grid-based methods. For example, the finite-volume/difference Godunov methods such as ATHENA and PLUTO, AMR codes such as FLASH, the Meshless Finite Mass (MFM) and Meshless Finite Volume (MFV) methods, and especially the moving-mesh methods based on Voronoi tessellations, as in the code AREPO, provide a good approach to the RT instability. In SPH, this problem is partially overcome by using a large number of neighbors and by adding an artificial heat diffusion term to the energy equation, as in the PSPH proposal by Saitoh & Makino (2013) and Hopkins (2013).
However, these problems still persist when either the size of the initial perturbation or the value of gravity is reduced (Valdarnini 2012), meaning that they are a symptom of another source of numerical error in SPH, named tensile instability. This is an artificial surface tension that appears at contact discontinuities because of an insufficient smoothness of the pressure between both sides of the discontinuity (Monaghan 2000). As a consequence, the integration of the momentum equation gives incorrect results. An excess of that tension provokes the damping of fluid instabilities, especially those with short wavelengths. Several techniques have been proposed to treat this problem: averaging the pressure by means of the interpolating kernel itself, as in the PSPH scheme (Hopkins 2015); linking the volume element estimation to pressure, as in the density-independent scheme (DISPH) (Hopkins 2013; Saitoh & Makino 2013, 2016); or adding an artificial diffusion of heat to the energy equation, which helps to balance the pressure across the discontinuity (Price & Monaghan 2007). These techniques have paved the road that led the SPH technique to a new standard within the last few years, and have helped to overcome this long-lasting inconvenience. In particular, it has been proven fundamental to increase the accuracy of the gradient estimation across contact discontinuities by reducing its numerical noise. To achieve that, García-Senz et al. (2012) used an integral scheme to calculate spatial derivatives, which proved to be especially efficient at handling fluid instabilities. The validity of that approach was assessed in subsequent works by Cabezón et al. (2012) and Rosswog (2015a). In this last case, the integral approach to the derivatives (IAD) was used to extend the SPH scheme to the special-relativistic regime (see also Rosswog 2015b). See also the recent work of Valdarnini (2016), where the efficiency of the IAD scheme at reducing the E0 errors is studied in detail.
Finally, a recent breakthrough in SPH was the emergence of the concept of generalized volume elements (Ritchie & Thomas 2001; Saitoh & Makino 2013; Hopkins 2013). In these works, it was shown that a clever choice of the volume element (VE) can reduce the tensile instability, leading to a better description of hydrodynamic instabilities. In this paper, we present a novel estimator for the VE that preserves the normalization of the kernel. We show in this work that having a good partition of unity is also connected to the tensile-instability problem. The calculations of the growth of the Kelvin-Helmholtz and Rayleigh-Taylor instabilities using these new VE are encouraging.
In this work, we also introduce the hydrodynamics code SPHYNX, which gathers the latest advances in the SPH technique, including the new ones presented here. SPHYNX has already been used in production runs simulating Type Ia and core collapse supernovae and is publicly accessible at astro.physik.unibas.ch/sphynx.
The organization of this paper is as follows. In Section 2 we review the main properties of the sinc kernels as well as the integral approach to the derivatives, which are at the heart of SPHYNX. Section 3 is devoted to the choice of the optimal volume element and to the update of the smoothing length h and of the kernel index n. Section 4 describes the structure of the hydrodynamics code SPHYNX: momentum and energy equations and the included physics. Sections 5 and 6 are devoted to describing and analyzing a variety of tests carried out in two and three dimensions, respectively. Finally, we present our main conclusions and prospects for the future in Section 7.
2 Interpolating kernels and gradients evaluation
2.1 The Sinc kernels
Interpolating kernels in SPH must fulfill several basic requirements in order to be considered suitable interpolators. Some of these features are automatically satisfied if the functional form of the kernel is spherically symmetric and approaches the Dirac delta function in the continuum limit. A functional representation of the Dirac-δ is (Couch 1997)
Writing it in terms of the sinc function, the expression above becomes
where sinc(x) = sin(πx)/(πx) is a widely known function used in signal analysis. Nevertheless, the expression given by Eq. (2) does not have compact support, limiting its practical applicability as an SPH interpolation kernel. Moreover, for |x| > 1, the sinc function oscillates and eventually produces negative values. An obvious solution to this problem is to restrict the domain to |x| ≤ 1, but in that case the first derivative of the function does not go to zero at the limits of the interval. To fix that, we define the kernel set as:
where q = r/h with 0 ≤ q ≤ 2, and d is the spatial dimension. B_n is a normalization constant, whereas n is a real exponent, large enough to guarantee the nullity of the first derivative at the edge of the kernel support. We generically refer to this set as sinc kernels; see Cabezón et al. (2008) (where they were called harmonic kernels) and García-Senz et al. (2014). These interpolators fulfill the regular desirable characteristics of SPH interpolators, namely tending to a δ-function in the continuum limit, having compact support, and being spherically symmetric. But they have a number of interesting additional features: a) they are connected by definition to the δ-function; b) they add flexibility to the SPH calculations, as the kernel profile can be changed by varying the kernel index n; c) working with a high index n not only ensures that high-order derivatives are well behaved, but also overcomes particle clustering (see Sect. 2.1.1); and d) they tend to fulfill separability when the index n increases (see App. A).
Unfortunately, there is no general analytic expression for the normalization constant B_n, but this small problem can be circumvented by fitting B_n, numerically calculated for a series of values of n. A fitting formula for B_n, valid over the range of n of practical interest, was provided in García-Senz et al. (2014). We reproduce it here for the sake of completeness,
where the values of the fit coefficients as a function of the dimensionality are provided in Table 1.
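As an alternative to the fitting formula, the normalization constant can always be obtained by direct quadrature. The following sketch assumes the conventions used above (support q = r/h ∈ [0, 2) and a 2D kernel of the form W = B_n h⁻² sinc^n(πq/2)); the quadrature resolution is an illustrative choice:

```python
import math

def sinc(x):
    # normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

_norm_cache = {}

def sinc_kernel_norm_2d(n, steps=4000):
    # 2D normalization constant B_n, obtained by radial quadrature so that
    # 2*pi * B_n * Int_0^2 sinc(q/2)^n q dq = 1   (q = r/h, support q < 2)
    if n not in _norm_cache:
        dq = 2.0 / steps
        s = 0.0
        for i in range(steps + 1):
            q = i * dq
            w = 0.5 if i in (0, steps) else 1.0   # trapezoidal weights
            s += w * sinc(q / 2.0) ** n * q
        _norm_cache[n] = 1.0 / (2.0 * math.pi * s * dq)
    return _norm_cache[n]

def sinc_kernel_2d(r, h, n):
    # W(r, h) = (B_n / h^2) * sinc(q/2)^n on the compact support q in [0, 2)
    q = r / h
    if q >= 2.0:
        return 0.0
    return sinc_kernel_norm_2d(n) / h ** 2 * sinc(q / 2.0) ** n
```

The constant is cached so that repeated kernel evaluations do not redo the integral; in 3D the same scheme applies with a q² weight and a 4π prefactor.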
In the limit of large n, the sinc kernels also display an interesting feature, separability, which has not been sufficiently emphasized in the extant SPH literature. Standard kernels used in SPH are spherically symmetric functions which naturally lead to a second-order accurate scheme. Interestingly, the majority of these kernels do not fulfill the identity:
This property guarantees the consistency of simulations involving planar symmetry, which should render identical results when calculated in 1D, 2D, or 3D, if the resolution and the interpolating kernel are the same in all three cases. Exploring in detail the applications of this property is beyond the scope of this paper. Nevertheless, we added a short discussion in Appendix A.
Additionally, some particular values of the kernel index n can mimic the profile of the most notable kernels used in SPH. In Table 2 we show the correspondence between the sinc kernels and both the B-spline and Wendland families of interpolators. The values of n shown in Table 2 were obtained by minimizing the metric distance between each pair of interpolators being compared (this is not the only way to compare kernels: one could, for instance, compare the logarithms of the functions, or minimize the metric distance between the first derivatives of the kernels; all these procedures lead to only slight variations of the kernel exponents with respect to those given in Table 2). It is remarkable that a single kernel index is able to approach both the quintic B-spline and a Wendland kernel; that choice seems a suitable default value when the number of neighbors is moderate in 3D (see, e.g., Figs. 4 and 5 of Rosswog 2015a). For larger numbers of neighbors it may be advisable to raise the index of the kernel to avoid the pairing instability. Interestingly, a higher index provides a profile very similar to that of a higher-order Wendland kernel. Nevertheless, a similarity in real space is not the only characteristic to take into account. As proven by Dehnen & Aly (2012), having a positive Fourier transform over a large range of modes is of utmost importance when dealing with the pairing instability (see Sect. 2.1.1).
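The matching procedure can be sketched as follows. This is a minimal illustration, not the paper's exact prescription: both profiles are peak-normalized (an assumption; other normalizations are possible), the M4 cubic spline is written in its standard Monaghan & Lattanzio (1985) form, and the exponent is found by a simple grid search over an L2 distance:

```python
import math

def sinc_shape(q, n):
    # sinc(pi*q/2)^n profile; equal to 1 at q = 0, zero outside q < 2
    if q >= 2.0:
        return 0.0
    if q == 0.0:
        return 1.0
    x = math.pi * q / 2.0
    return (math.sin(x) / x) ** n

def cubic_spline_shape(q):
    # M4 cubic spline (Monaghan & Lattanzio 1985); its peak value at
    # q = 0 is 0.25*8 - 1 = 1, so it is already peak-normalized
    if q < 1.0:
        return 0.25 * (2.0 - q) ** 3 - (1.0 - q) ** 3
    if q < 2.0:
        return 0.25 * (2.0 - q) ** 3
    return 0.0

def l2_distance(n, steps=400):
    # discrete L2 distance between the two peak-normalized profiles
    dq = 2.0 / steps
    s = 0.0
    for i in range(steps + 1):
        q = i * dq
        d = sinc_shape(q, n) - cubic_spline_shape(q)
        s += d * d * dq
    return math.sqrt(s)

# grid search for the sinc exponent that best mimics the cubic spline
best_n = min((0.1 * k for k in range(20, 81)), key=l2_distance)
```

With these (assumed) conventions the search lands near n ≈ 3, consistent with the sinc family being able to emulate the low-order B-splines.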
It is also worth noting that the sinc family forms a continuous set, making it possible to locally change the index during the runtime of the simulation so that one can, for example, raise the value of n in the presence of sharp density gradients (see Sect. 3.2).
Finally, the evaluation of a sinc function is indeed computationally more costly than that of the regular spline kernels. Nevertheless, it is very easy to circumvent this problem by interpolating the value of the sinc function from a relatively small pre-calculated table. In practical applications, though, the choice of the kernel has a negligible impact on the computational burden, being completely masked by efficient cache usage and by other sections of the code, such as the neighbor search or the gravity calculation.
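A pre-calculated table with linear interpolation can look like the following sketch (the table size and kernel index are illustrative choices, not SPHYNX defaults):

```python
import math

def sinc_pow(q, n):
    # direct evaluation of the sinc^n profile on its compact support [0, 2)
    if q >= 2.0:
        return 0.0
    if q == 0.0:
        return 1.0
    x = math.pi * q / 2.0
    return (math.sin(x) / x) ** n

class KernelTable:
    # pre-computed table of the sinc^n profile with linear interpolation
    def __init__(self, n, size=1000):
        self.dq = 2.0 / size
        self.tab = [sinc_pow(i * self.dq, n) for i in range(size + 1)]

    def __call__(self, q):
        if q >= 2.0:
            return 0.0
        i = int(q / self.dq)          # bracketing table entries
        frac = q / self.dq - i
        return self.tab[i] * (1.0 - frac) + self.tab[i + 1] * frac
```

For a table of ~10³ entries the linear-interpolation error is far below the other discretization errors of the method, which is why the kernel choice is cost-neutral in practice.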
Although SPHYNX uses the sinc kernels by default, it also incorporates the B-spline and Wendland kernels, which can easily be selected by the user whenever necessary. On another note, the impact of these kernels has recently been studied by other authors (Dehnen & Aly 2012; Rosswog 2015a), so we focus here almost exclusively on the sinc set. Nevertheless, a comparison between the performance of the sinc and Wendland interpolators is provided in Section 5.5, where the evolution of the Gresho-Chan vortex is simulated and discussed.
2.1.1 Suppressing pairing instability
Convergence to the continuum limit is only achieved when the fluid is sampled with an infinite number of particles. According to Zhu et al. (2015), it can only be approached using both a large number of particles, N, and a large number of neighbors, n_b. The value of N chiefly depends on the available computer resources and on algorithmic details. However, working with a large number of neighbors is not only expensive, but may be totally impractical owing to the tendency of the mass particles to gather in clusters when n_b is high, a phenomenon known as pairing instability. A leap forward in alleviating this problem was the proposal by Dehnen & Aly (2012) of using Wendland functions (Wendland 1995) as interpolators, which are much less sensitive to this instability than the low-order members of the B-spline family. Another possible option to avoid particle clustering is to choose a sinc kernel with a large enough exponent (García-Senz et al. 2014).
As mentioned above, having a Fourier transform with positive modes is necessary for a kernel to be stable. In Fig. (1) we show the Fourier transform of several B-spline, Wendland, and sinc kernels. As expected, the Wendland kernels show a positive Fourier transform up to very large modes. The sinc family systematically becomes negative at high modes, but its negative regions decrease in size very quickly as the exponent increases. Therefore, particle clustering can be totally suppressed, in practice, simply by raising n. It is also worth noting that, in order to improve the performance of the Wendland kernels, it is advisable to considerably increase the number of neighbors in 3D (Valdarnini 2016), which substantially increases the computational burden of actual simulations (in fact, by much more than computing high-order kernels such as the sinc family with large exponents).
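The trend of the negative Fourier modes with the kernel index can be checked numerically. The sketch below is a simplified 1D proxy (a cosine transform of the truncated sinc^n profile, rather than the radial transforms of the figure); the quadrature and scan resolutions are illustrative:

```python
import math

def sinc_pow(q, n):
    # sinc(pi*q/2)^n profile, truncated to the compact support |q| < 2
    if abs(q) >= 2.0:
        return 0.0
    if q == 0.0:
        return 1.0
    x = math.pi * q / 2.0
    return (math.sin(x) / x) ** n

def fourier_1d(k, n, steps=2000):
    # cosine transform of the even, compactly supported profile (trapezoid)
    dq = 2.0 / steps
    s = 0.0
    for i in range(steps + 1):
        w = 0.5 if i in (0, steps) else 1.0
        s += w * sinc_pow(i * dq, n) * math.cos(k * i * dq)
    return 2.0 * s * dq

def first_negative_mode(n, kmax=40.0, dk=0.1):
    # smallest k at which the transform turns negative (None if not found)
    k = 0.0
    while k <= kmax:
        if fourier_1d(k, n) < 0.0:
            return k
        k += dk
    return None
```

Since the untruncated transform of sinc^n is an n-fold convolution of boxes (nonnegative, with support growing with n), the first negative mode of the truncated profile moves to higher k as the exponent increases, which is the stabilizing mechanism exploited here.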
We tested the impact of the kernel and the number of neighbors on the emergence of pairing instability in a dynamical simulation. Results can be found in Section 5.1.
2.2 The integral approach to derivatives
In SPH, gradients are usually calculated simply by applying the nabla operator to the kernel function (Gingold & Monaghan 1977). Nevertheless, the same procedure is not applied to calculating higher-order derivatives, such as the Laplace operator, needed, for example, to evaluate the heat transport contribution to the energy equation. For these cases, an integral approach is preferred (Brookshaw 1985) because it is less noisy and provides excellent results. Working in that direction, an integral approach to calculating first derivatives, called IAD, was recently introduced by García-Senz et al. (2012). The IAD approach provides a more accurate estimation of the first derivative, thus improving the overall quality of the simulations. This was demonstrated through the study of several test cases in 1D, 2D, and 3D by Cabezón et al. (2012), Rosswog (2015a), and Valdarnini (2016), hence we refer the reader to these papers for the technical details of the method. In García-Senz et al. (2012) it was shown that a restriction of the full integral approach (called IAD0) leads to a conservative formulation of the SPH equations, in the same way as the standard scheme does (for this work, we consider "standard codes" to be those SPH codes fulfilling that: a) they estimate gradients using the analytical derivative of the kernel; and b) the VE is linked to the local density value). This, however, is not without cost. Some accuracy is lost in the gradient calculation, which is exact for linear functions only when the full IAD is used. In his work, Rosswog (2015a) evaluated this accuracy loss in several tests, and the outcome was that, even when IAD0 was used, the accuracy loss with respect to the full IAD was much smaller than the accuracy gain with respect to standard gradient estimators. Furthermore, Valdarnini (2016) presents a series of tests that show the effectiveness of the IAD0 method at removing sampling errors in subsonic flows.
Additionally, the SPH equations linked to the IAD0 scheme are formally similar to those of the standard method, provided that the kernel gradient is computed as follows (for the sake of clarity, from now on we drop the explicit dependence of the kernel on the inter-particle distance),
where the coefficients are those of the inverse of the matrix associated with each particle, whose elements are,
In García-Senz et al. (2012), it is shown that for the Gaussian kernel the IAD0 scheme is equivalent to the gradient estimation with the analytical derivative of the kernel function, the latter being a particular case of the integral approach. It was also proven that IAD0, in fact, exactly reproduces the gradient of linear functions provided that,
Equation (9) is, in general, fulfilled only approximately, and this is the main difference between the IAD and IAD0 accuracies. The best gradient estimation is therefore attained by IAD0 only when Eq. (9) holds. In this respect, the initial model should be built so that Eq. (9) is approached as closely as possible. As we see in the following section, it turns out that keeping the residual of Eq. (9) small is closely related to the choice of an adequate volume element for the integration. An analytical proof showing that an improvement in the partition of unity translates into a better estimation of gradients within the IAD0 paradigm is provided in Appendix B.
A collateral, albeit positive, feature of IAD is that it hinders the emergence of the pairing instability. This is shown in Section 5.1.
3 Preconditioning the calculation
3.1 The choice of the volume element
A recent breakthrough in SPH has been the development of a general method to assign the volume element of a particle (Saitoh & Makino 2013; Hopkins 2013, 2015; Rosswog 2015a). In SPH, the volume element of a particle a has traditionally been V_a = m_a/ρ_a but, as noted in the seminal proposal by Ritchie & Thomas (2001), other options could be more convenient for handling specific problems. SPHYNX makes use of the scheme developed by Hopkins (2013), where an estimator X is introduced so that the particle volume is
The density of the particle is then calculated as ρ_a = m_a/V_a. Current choices for the estimator are the particle mass, a constant, and a power of the pressure. Taking the mass leads to the common choice V_a = m_a/ρ_a. Actually, the same volume element is obtained with a constant estimator when the mass of all particles is the same. On the other hand, the pressure-based choice helps to suppress the surface tension that appears at contact discontinuities.
Here we want to discuss another option for the estimator X, not considered in the aforementioned papers, which could also be of value for SPH users. We choose
and put it into Eq. (10):
which, when the exponent of the estimator is set to unity, becomes,
where the summation in the denominator is simply the kernel normalization condition for the standard volume choice V_a = m_a/ρ_a. Therefore, the volume element given by Eq. (13) comes after re-normalizing the standard volume elements. We note that if the kernel normalization is exact, then Eq. (13) leads to the identity V_a = m_a/ρ_a, as expected. Furthermore, one can take Eq. (12) as a recursive equation which, jointly with ρ_a = m_a/V_a, allows one to find the optimal volume elements, leading to an almost perfect kernel normalization after several iterations. That is, starting from an initial guess of the density, for example the standard SPH summation, the value of V_a is computed in an explicit way using Eq. (12). This gives a new density to be used in the next integration step, and so on. The consistency and robustness of this procedure has been checked with the tests described in Sections 5 and 6, and in Appendix C.
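The recursive renormalization described above can be sketched in 1D. This is an illustrative toy setup, not SPHYNX's implementation: a hypothetical periodic ring of equal-mass particles with smoothly perturbed spacing, a sinc kernel with an assumed exponent, and an Eq. (12)-like fixed-point update with the estimator exponent set to unity:

```python
import math

def sinc_kernel_1d(r, h, n=5, _cache={}):
    # normalized 1D sinc kernel with compact support q = |r|/h < 2;
    # the normalization constant is obtained once by trapezoidal quadrature
    q = abs(r) / h
    if q >= 2.0:
        return 0.0
    if n not in _cache:
        steps = 4000
        dq = 2.0 / steps
        s = 0.5  # half-weight of the q = 0 endpoint, where the profile is 1
        for i in range(1, steps):
            x = math.pi * i * dq / 2.0
            s += (math.sin(x) / x) ** n
        _cache[n] = 1.0 / (2.0 * s * dq)
    if q == 0.0:
        return _cache[n] / h
    x = math.pi * q / 2.0
    return _cache[n] / h * (math.sin(x) / x) ** n

# hypothetical periodic ring of particles with smoothly perturbed spacing
N, L, h = 60, 60.0, 2.0
pos = [i + 0.3 * math.sin(2.0 * math.pi * i / N) for i in range(N)]

def norm_sums(V):
    # k_a = sum_b V_b W_ab : the kernel normalization (partition of unity)
    ks = []
    for a in range(N):
        s = 0.0
        for b in range(N):
            d = pos[a] - pos[b]
            d -= L * round(d / L)        # minimum-image distance on the ring
            s += V[b] * sinc_kernel_1d(d, h)
        ks.append(s)
    return ks

V = [L / N] * N                          # initial guess: uniform volumes
err0 = max(abs(k - 1.0) for k in norm_sums(V))
for _ in range(5):                       # Eq. (12)-like fixed-point update
    V = [v / k for v, k in zip(V, norm_sums(V))]
err1 = max(abs(k - 1.0) for k in norm_sums(V))
```

After a few iterations the normalization error shrinks by a large factor, illustrating how the re-normalized volume elements approach a partition of unity even on an imperfect particle distribution.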
3.2 Choosing smoothing length and kernel exponent: equalization
The use of the sinc kernels allows us to work with a continuously adaptive index n. Interestingly, the smoothing length and the kernel index can be calculated jointly with the density estimation. Because of the large number of variables involved, this point deserves some discussion.
Firstly, the volume estimator X is updated only when the global iteration has finished. Therefore, X is handled explicitly, so that the coupling with other variables during the current iteration is avoided. Secondly, self-consistent values of the smoothing length, kernel index, density, and grad-h/n terms are simultaneously calculated with a Newton-Raphson iterative scheme that we named equalization (because it equalizes the resolution at the post-shock tail of shock waves) and which is described in García-Senz et al. (2014). We summarize it here for the sake of completeness:
Choose a trial value of the smoothing length, as well as the baseline kernel index, at the beginning of the integration step.
Calculate the density of each particle and its logarithmic average over the neighbors.
Evaluate the ratio between the density and its logarithmic average, which is taken as a local indicator of linearity and is quantified with the following:
Use this indicator to assign a new kernel index according to:
where the two free parameters are the maximum allowed jump from the baseline kernel index and a scaling factor:
Solve Eq. (15) jointly with mass conservation:
where the constant is set at the beginning of the simulation, jointly with the baseline kernel index, the maximum allowed index jump, and the scaling parameter. After this process, we obtain self-consistent values of the smoothing length, kernel index, density, and grad-h/n terms. Obviously, setting the maximum allowed index jump to zero turns the equalization off, and the evolution is then computed with the baseline kernel index.
4 The hydrodynamics code SPHYNX
SPHYNX gathers all the advances in the SPH technique that have been presented in Sections 2 and 3 and represents our flagship code for astrophysical applications. In the following, we present the most relevant details of its implementation and itemize the mathematical expressions of the SPH equations implemented in the Cartesian version of the code. The precise form of the equations, as well as the notation, follows the discussion in the preceding sections. They are similar to those presented in Rosswog (2015b).
4.1 Work-flow and formulation
First of all, we specify the initial number of neighbors and the kernel index variables: the baseline value, the maximum allowed index jump, and the scaling parameter. This is followed by the choice of the volume estimator X. Then, the preconditioning stage starts with the self-consistent calculation of the volume element, smoothing length, kernel index, density, and grad-h/n terms (see Sect. 3.2).
For the volume elements we use the estimator of Eq. (12). From the experiments presented below, we saw that a high exponent in the estimator leads to the best results in most cases, but it has the disadvantage of overshooting density interpolations at contact discontinuities. We overcome this problem by using an SPH-averaged value of the VE estimator, making it robust and allowing us to safely raise the exponent (see Sect. 5.2 for more details),
which we calculate jointly with the density, including both the grad-h and grad-n contributions (García-Senz et al. 2014),
Once the preconditioning is completed, the hydrodynamic equations are evaluated.
The terms with superscript AV refer to the artificial viscosity acceleration and energy contributions. As detailed in the following subsection, we have slightly changed the implementation of these terms in order to make them fully compatible with the generalized volume elements.
4.2 The artificial viscosity
Regarding the included physics, SPHYNX incorporates, by default, an artificial viscosity (AV) algorithm to handle shocks, as well as routines for the calculation of the equation of state (EOS) and of the gravitational force. Heat transport by conductive and diffusive means is also included as a basic code unit. More specific routines dealing with nuclear or chemical reactions, ionization, or more complex transport schemes can be modularly added to the code (see, e.g., the application of SPHYNX to simulating core collapse supernovae, including a spectral neutrino treatment, in Perego et al. (2016)). In the following, we explain the AV algorithm in some detail.
First of all, we take the viscosity as in Monaghan (1997):
where the signal velocity is an estimate of the speed of information propagation between particles a and b. It is common to divide by the average density of the pair, but in that case the volume element is implicitly assumed to be the standard one. Thus, the VE of particle a is not an unequivocally defined magnitude, because it also depends on the density of the neighbor particle b. A more consistent option between AV and VE, which can be generalized to other VEs, is obtained using the particle volume elements explicitly, which leads to the following viscous acceleration,
where the pair of coefficients are the Balsara limiters (Balsara 1995):
Expression (26) can be written as
with the pairwise coefficients defined accordingly. Unlike the standard AV term, this magnitude is not divided by the density and, as a consequence, the viscous force is symmetric with respect to any pair of particles. We note that the volume elements are now unequivocally defined when the mass of the particles is the same. The Balsara limiters work more efficiently if they are averaged as,
because it gives a lower AV than the arithmetic average in regions with strong shear flows. By default, SPHYNX sets these limiters to:
which retains a residual viscosity to damp the smallest numerical fluctuations. Equation (31) works remarkably well in regions where strong shocks and instabilities cohabit, such as in the triple-point shock test described below. In the case of strong shocks with no shear, Eq. (30) could be replaced by the standard arithmetic mean if necessary.
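A limiter of this family can be sketched as follows. The per-particle switch is the commonly quoted Balsara (1995) form; the pairwise combination and the floor value are illustrative stand-ins (a min() rule, which never exceeds the arithmetic mean, plus a small residual floor), not SPHYNX's actual Eqs. (30)-(31):

```python
def balsara_limiter(div_v, curl_v, c, h, eps=1.0e-4):
    # Balsara (1995) switch: close to 1 in pure compression, close to 0 in
    # pure shear; the eps*c/h term prevents a vanishing denominator
    return abs(div_v) / (abs(div_v) + abs(curl_v) + eps * c / h)

def pair_limiter(fa, fb, floor=0.05):
    # illustrative pairwise combination (hypothetical choices): min() never
    # exceeds the arithmetic mean, so it yields a lower AV in shear regions,
    # and the small floor keeps a residual viscosity that damps
    # particle-scale noise
    return max(min(fa, fb), floor)
```

With this kind of rule, a pair straddling a shear layer inherits the more restrictive of the two limiters, which is the behavior motivated in the text.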
Following Springel (2010b), we used a constant AV parameter, so that the scheme remains close to the classical SPH artificial viscosity introduced by Monaghan & Gingold (1983). The contribution of the AV to the energy equation (Eq. 23) is then,
Additional information on the physics included in SPHYNX, such as gravity, heat transport or alternative energy equations is deferred to Appendix D.
5 Two-dimensional tests
In this section we present the outcome of applying SPHYNX to several two-dimensional (2D) tests that have traditionally been problematic for the SPH technique. We show that the sinc kernels help against the appearance of the pairing instability; that the new VE provide a better treatment of discontinuities than the standard choice; and that, when applied jointly with IAD, there is an overall improvement in the treatment of shocks, in the Sedov and wall-shock tests, and also in the development of subsonic instabilities, in particular the challenging cases of the Kelvin-Helmholtz instability with a density jump of 8 and the Rayleigh-Taylor instability with a very weak gravitational field. We also found convergence in the Gresho-Chan test closer to that of Eulerian codes. Finally, we prove that our implementation suppresses the tensile instability, enabling mixing in tests like the wind-cloud interaction and the triple-point shock test.
Our results below suggest that improving volume conservation produces better results than using volume elements that do not ensure volume equipartition. At first glance, it also renders equalization useless, because the advantages of using variable adaptive kernel indices are apparently obscured by the new VE. Nevertheless, equalization and volume elements work on different bases. As demonstrated in García-Senz et al. (2014), the use of a variable kernel index improves the interpolation of non-linear functions, even when they are estimated using integrals instead of finite summations. It is therefore expected that the equalization will be useful when both the number of particles and the number of neighbors are high enough that the error in the interpolations is dominated by non-linear effects rather than by the evaluation of integrals as finite summations (see also Sect. 6.2). For a moderate number of neighbors, the use of the new VE given by Eq. (12) makes equalization unnecessary. Thus, unless explicitly stated, the hydrodynamic tests presented in this work have been carried out with the equalization turned off.
5.1 Pairing instability
Here we show a numerical experiment, similar to that described in Zhu et al. (2015), used to study the relationship between the kernel index and the pairing instability. We set a sample of particles in a regular 2D lattice, leading to a homogeneous density distribution. Then, we seed a small random fluctuation of the internal energy, taking the system out of mechanical equilibrium. Afterwards, we add a frictional dissipative force proportional to the velocity in the momentum equation, so that the system returns to equilibrium after a few sound-crossing times. The final equilibrium distribution is sensitive to the number of neighbors and to the exponent of the kernel function. This is summarized in Fig. (2), which depicts the distribution of the minimum inter-particle distance (), normalized to its initial value (), as a function of particle pairs at several elapsed times. As we can see, a careful choice of the exponent keeps the distribution of normalized particle distances closer to 1, hindering particle clustering and thus avoiding the pairing instability even at large times. For example, a low kernel index keeps the distances close to 1 when the number of neighbors is small (top-left panel of Fig. 2), even at large times, whereas it completely fails when the number of neighbors increases (top-right panel). Nevertheless, increasing the exponent of the kernel again suppresses the pairing instability. This numerical experiment points to , , and as indicative values for handling , , , and neighbors, respectively (in 2D). For equivalent numbers in 3D, for example , currently used in SPH calculations, a kernel with will be sufficient to suppress clustering in many applications (see also Fig. (3) in García-Senz et al. (2014)). We provide a comparison between the sinc kernel and the Wendland kernel in the bottom-right panel of Fig. (2). As we can see, one kernel works slightly better than the other, but the difference is small.
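As an illustration of why a larger kernel index hinders clustering, the sketch below evaluates an un-normalized member of the sinc kernel family, W_n(q) ∝ [sin(πq/2)/(πq/2)]^n, and shows that its half-width shrinks as n grows. The normalization constant is omitted and the chosen exponents are illustrative; this is a sketch, not the SPHYNX implementation.

```python
import numpy as np

def sinc_kernel(q, n):
    """Un-normalized sinc kernel, S_n(q) = [sin(pi q/2)/(pi q/2)]**n for q in [0, 2).

    The normalization constant B_n, which depends on n and on the number
    of dimensions, is omitted here for simplicity.
    """
    q = np.asarray(q, dtype=float)
    w = np.zeros_like(q)
    inside = (q >= 0.0) & (q < 2.0)
    # np.sinc(x) = sin(pi x)/(pi x), hence np.sinc(q/2) = sin(pi q/2)/(pi q/2)
    w[inside] = np.sinc(q[inside] / 2.0) ** n
    return w

# Larger exponents concentrate the kernel around q = 0, penalizing
# particle clustering: the half-width at half-maximum shrinks with n.
q = np.linspace(0.0, 2.0, 201)
for n in (3, 5, 7):
    w = sinc_kernel(q, n)
    hwhm = q[np.argmin(np.abs(w - 0.5))]
    print(f"n = {n}: HWHM ~ {hwhm:.2f}")
```

The sharper central peak for large n is the qualitative mechanism discussed above: close pairs pay a steeper penalty, so clustering becomes energetically unfavorable.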
The previous test was done using the standard gradient estimator. Fig. (3) depicts the results of the same numerical experiment, but carried out with IAD. The comparison between Figs. (2) (standard gradient estimator) and (3) (IAD) suggests that the integral scheme also helps with the pairing problem, as it keeps the particle distribution closer to the initial one.
5.2 Static square test
We consider a system of two fluids with different density but identical pressure:
The system is isobaric with and the EOS is that of a perfect gas with .
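For a perfect gas, isobaric initial conditions fix the specific internal energy of each fluid through u = P/[(γ-1)ρ]. The snippet below sketches this for illustrative values of the pressure, adiabatic index, and the two densities; these numbers are assumptions for the example, not values taken from the text.

```python
# Minimal sketch: initial internal energies for an isobaric two-density setup.
# The numerical values (P0, gamma, densities) are illustrative assumptions.
P0 = 2.5                      # common pressure of both fluids
gamma = 1.4                   # adiabatic index of the perfect gas
rho_in, rho_out = 4.0, 1.0    # dense square vs. ambient medium

def u_from_pressure(P, rho, gamma):
    """Specific internal energy of a perfect gas, from P = (gamma - 1) * rho * u."""
    return P / ((gamma - 1.0) * rho)

u_in = u_from_pressure(P0, rho_in, gamma)
u_out = u_from_pressure(P0, rho_out, gamma)

# Both phases must return the same pressure through the EOS:
for rho, u in ((rho_in, u_in), (rho_out, u_out)):
    assert abs((gamma - 1.0) * rho * u - P0) < 1e-12
print(u_in, u_out)   # the denser fluid carries the lower specific internal energy
```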
To carry out the simulations we use Eqs. (22) and (23) from Section 4, with a constant kernel index , neighbors, and a variety of volume elements. We tried two different initial settings: particles with the same mass (on an unevenly spaced grid), as well as particles with unequal mass spread on a uniform lattice. The outcome of the simulations at times , 0.4, 1, and 2 is depicted in Fig. (4). For equal-mass particles and (with ), the code was able to maintain hydrostatic equilibrium during several crossing times (first row). The evolution calculated using , , and (second row) led to the same incorrect results in much shorter times. Rows 3 and 4 in Fig. (4) summarize the evolution using a homogeneous lattice of particles with different masses. Hydrostatic equilibrium is very well preserved using , , and (third row), while (fourth row) does not give a satisfactory result.
In Fig. (5) we show the magnitudes (left panel) and (right panel) along a 1D cut taken around , from to (the center of the system), in the hydrostatic system described above after 20 integration steps. As can be seen in the left panel, choosing gives the best results, leading to an almost perfect volume normalization, while the case with (equivalent to ) shows a large oscillation around the contact discontinuity. The case with is also quite satisfactory, although a small oscillation is still present. The right panel in Fig. (5) suggests that an adequate choice of the volume element may further improve the accuracy of the IAD method in estimating gradients because, as we can see, a value of helps to keep , which is the necessary condition for IAD to exactly reproduce the gradient of linear functions.
Unfortunately, taking has the unwanted side effect that the density tends to overshoot close to the contact discontinuity. The overshooting in density lowers the value of the estimator , so that in the next integration step the density undergoes a further overshoot. Occasionally, the feedback between and may raise the density to unrealistic values after several time steps. This problem can be avoided by reducing the value of the exponent, but in that case the optimal value becomes problem-dependent. For example, works well in the hydrostatic test depicted in Fig. (4) but produces worse results for the Sedov test described later in Sect. 5.3.1, for which was a better choice. A way to circumvent the overshooting problem is to consider
where is the SPH average of the standard volume element. Although less efficient than , this simple recipe enhances the robustness of the scheme, so that the exponent of the estimator can be safely raised to , independently of the problem being simulated. This procedure is especially well suited to handling contact discontinuities hosting large density contrasts, and it has been checked in the blob and Evrard tests described in Sections 5.6 and 6.3. A more quantitative discussion of the convergence rate of the estimator as a function of and of the density contrast is given in Appendix C.
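As we read Eq. (12), the density-based volume elements take the form V_i = X_i / Σ_j X_j W_ij with X_i = (m_i/ρ_i)^p. The sketch below checks the resulting kernel normalization Σ_j V_j W_ij on a uniform, periodic 1D lattice; the Gaussian kernel and the value of the exponent are illustrative stand-ins for the choices made in SPHYNX.

```python
import numpy as np

# Sketch of density-based volume elements in the spirit of Eq. (12):
#   V_i = X_i / sum_j X_j W_ij,  with  X_i = (m_i / rho_i)**p.
# A 1D Gaussian kernel stands in for the sinc kernels, and the particle
# setup and p-value are illustrative assumptions.

def gaussian_w(dx, h):
    return np.exp(-(dx / h) ** 2) / (np.sqrt(np.pi) * h)

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False) + 0.5 / n
m = np.full(n, 1.0 / n)          # equal-mass particles, unit total mass
h = 2.0 / n                      # smoothing length ~ 2 inter-particle spacings
dx = x[:, None] - x[None, :]
dx -= np.rint(dx)                # periodic (minimum-image) distances in a unit box
W = gaussian_w(dx, h)            # W[i, j] = W(x_i - x_j, h)

rho = W @ m                      # standard SPH density estimate
p = 0.7                          # exponent of the estimator (illustrative)
X = (m / rho) ** p
V = X / (W @ X)                  # volume element per particle

norm = W @ V                     # kernel normalization sum_j V_j W_ij
print("max |norm - 1| =", np.abs(norm - 1.0).max())
```

On this (deliberately easy) uniform setup the normalization holds to machine precision; the tests in the text probe exactly how well it survives strong density gradients and shocks.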
5.3.1 Point explosion
The study of the propagation of a point-like explosion in more than one dimension is very useful for checking hydrodynamics codes. We have carried out several simulations with SPHYNX to explore the impact of different combinations of volume elements and levels of equalization on the handling of shock waves. The explosion was triggered by depositing a considerable amount of energy inside a Gaussian surface of characteristic width located at the center of a uniform lattice of size . The initial density profile is and the total injected energy is . All simulations used particles and the initial number of neighbors was set to . The different parameters used in the simulations are summarized in Table 3.
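The deposition scheme described above can be sketched as follows: the total energy is spread over the lattice with Gaussian weights of a chosen characteristic width, and normalized so that the injected energy adds up exactly to the target value. The lattice size, total energy, and width below are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

# Sketch of a Gaussian point-like energy deposition on a uniform 2D lattice.
# E_tot and delta are illustrative assumptions.
n_side = 100
x, y = np.meshgrid(*(np.linspace(-0.5, 0.5, n_side),) * 2)
r2 = x.ravel() ** 2 + y.ravel() ** 2      # squared distance to the center

E_tot = 1.0                               # total injected energy
delta = 0.05                              # characteristic width of the Gaussian
w = np.exp(-r2 / delta ** 2)
u = E_tot * w / w.sum()                   # energy per (unit-mass) particle

# The deposited energy adds up to E_tot by construction:
print("sum(u) =", u.sum())
```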
Figure (6) shows the results of the Sedov test for several combinations of volume elements and equalization at . Models , , , and were calculated using and different values for the exponent . All models are quite satisfactory but model , calculated with and , leads to the best results. The density peak is closer to the analytic value and the pressure profile close to the origin behaves well. Model , calculated using with , also provides good results while models , , with the choice (standard VE) lead to lower peak values and oscillating pressure tails that depart from the analytic result.
The lower panels of Fig. (6) depict the profile of the moduli (left panel) and that of the normalization condition (right panel). These results are encouraging because they strongly suggest that the new volume elements not only enhance volume conservation (lower right panel of Fig. 6) but also make gradient estimation more accurate, because Eq. (9) is better fulfilled, ensuring that the IAD approximation is, in fact, close to the full IAD in terms of accuracy (lower left panel of Fig. 6). Again models and lead to the best results whereas and show worse volume conservation and lower fulfillment of Eq. (9).
It has been suggested that other choices of VE could be of interest for handling specific problems. In particular, it has been claimed that the pressure-based estimator () could be well suited to handling contact discontinuities (Hopkins 2013). We carried out one simulation using that estimator, with the recommended value (Rosswog 2015a). We found that the results with the pressure-based estimator were not as good as those of models and of Table 3. In Fig. (7) we compare the results of using that estimator with those of and . The right panel shows that the volume normalization is not as well preserved as in model , with values similar to those obtained using the standard VE (model ). Moreover, it leads to a lower peak density value, and a spurious precursor shock is clearly seen ahead of the main shock in the left panel. Although small, this precursor shock is a numerical artifact that was also reported by other authors (Rosswog 2015a). For that reason, hereafter we limit the choice of volume elements to those given by Eq. (12). (We note that the standard is a particular case of Eq. (12) when and all particles have the same mass; therefore, we can use the same expression to explore the impact of both the standard and the new VE.)
5.3.2 The wall-shock test
Along with the point explosion (Sedov test), the so-called wall-shock (or Noh) test is also used to check the performance of multidimensional hydrodynamics codes. Unlike the Sedov test, the Noh experiment is an implosion towards a geometrical center. The interaction of the converging inflow of gas with the stagnated material in the central volume provokes the formation of a shock wave moving outwards. The Noh problem has an analytic solution (Noh 1987) to compare with. It is, however, not an easy test, owing to the large jump in density across the shock front (a factor of 16 in 2D and 64 in 3D, for ). SPH codes have traditionally had difficulties in handling this test because: a) many particles are needed to correctly reproduce the shock region (especially in 3D); and b) close to the center the internal energy shows a pronounced spike and, consequently, the density abruptly drops to keep the pressure constant. In fact, both problems are connected to the use of the AV, and it is not clear to what extent a change in the SPH formalism may alleviate these shortcomings (Brookshaw 2003).
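For reference, the analytic post-shock state of the Noh problem can be evaluated directly. The sketch below assumes an ideal gas with γ = 5/3 and unit inflow speed, following the standard solution (Noh 1987); d is the geometric dimension (2 for the cylindrical case simulated here).

```python
# Analytic solution of the Noh (wall-shock) problem for an ideal gas
# with unit inflow speed. A minimal sketch for reference values only.

def noh_analytic(t, d, gamma=5.0 / 3.0, rho0=1.0, v0=1.0):
    """Return (shock radius, post-shock density, pre-shock density profile)."""
    vs = 0.5 * (gamma - 1.0) * v0            # shock speed, = 1/3 for gamma = 5/3
    r_shock = vs * t
    rho_post = rho0 * ((gamma + 1.0) / (gamma - 1.0)) ** d
    # ahead of the shock the converging flow piles up geometrically:
    rho_pre = lambda r: rho0 * (1.0 + v0 * t / r) ** (d - 1)
    return r_shock, rho_post, rho_pre

r_s, rho2, rho_pre = noh_analytic(t=0.7, d=2)
print(r_s, rho2)   # shock position and the 2D post-shock density of 16
```

These are the reference values (shock position r_s = t/3 and a density plateau of 16 in 2D) against which the simulated profiles are compared below.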
A sample of equal-mass particles was distributed in a bcc lattice with a circular perimeter. The radius of the initial configuration is and the density is homogeneous with (except at the outer edge of the system). A radial velocity profile was imparted to all particles at , so that the system implodes. The evolution was tracked with SPHYNX until , when a clear steady shock moving outwards has formed. The results, calculated with , , and for the VE and , were compared both to the analytic predictions and to the output of GADGET-2 for the same initial model.
Table 4 and Figs. (8) and (9) summarize the initial parameters of each simulated model and the main results of the simulations. The density profiles at for the three calculated models, , , are shown in the leftmost panel of Fig. (8). A sharp shock, moving towards the right, is clearly seen at . The density jump across the shock front is , hence close to the analytic value. Close to the center of the configuration there is a pronounced dip in density caused, as expected, by the use of the AV. The SPHYNX calculation with (model , green dots) gives slightly better results at the shock position than and , because the density profile is steeper and the maximum density is closer to the analytic value of 16. We note that all three calculations show density oscillations at the shock front, and that the GADGET-2 simulation is more blurred. The density color-maps for models and are also shown in the central and right panels of Fig. (8). As we can see, the shock region is sharp and well defined in the SPHYNX calculation, whereas the simulation with GADGET-2 shows stronger oscillations close to the shock location and an inhomogeneous particle distribution close to the origin.
Additionally, model has both a better volume normalization than (i.e., closer to 1), as shown in Fig. (9) (left panel) and a better behavior of (i.e., closer to 0) at the shock location (right panel).
5.4 Fluid instabilities
Instabilities play a central role in hydrodynamics because they are usually connected to shear mixing and turbulence. In the cosmos, instabilities are especially important because the large Reynolds numbers involved in astrophysical processes make these systems prone to turbulence. Particle- and grid-based codes have been applied with different levels of success to simulate the evolution of hydrodynamical instabilities such as the Kelvin-Helmholtz (KH) or Rayleigh-Taylor (RT) instabilities. Here we revisit them with SPHYNX.
Grid-based codes are largely free of numerical viscosity, leading to a good match between simulations and the analytic predictions during the linear growth phase of the KH instability (Junk et al. 2010). SPH codes, on the other hand, are Galilean invariant and do not suffer from numerical diffusion, but they intrinsically have more numerical viscosity. It has been pointed out that SPH may not be appropriate for handling fluid instabilities across contact discontinuities with large density jumps (Agertz et al. 2007). Although there have been proposals to improve the modeling of these systems (Price 2008), the SPH calculation of shear flows with large density contrasts is still an open question.
We ran two sets of models simulating the growth of the KH instability around the boundary layer separating two flows with moderate and large density contrasts, respectively. Each set was in turn calculated using two values for the volume elements, namely and in Eq. (12). The initial setting was the same as in McNally et al. (2012), where three stratified fluid layers inside a 2D box of size were considered. The fluid layers span , and , with densities , , and , respectively. The adopted values of , , , , and , as well as the number of particles used in the simulations, are shown in Table (5). Periodic boundary conditions were implemented at all sides of the box.
A velocity is given to the central strip, whereas the rest of the box moves in the opposite direction with . Prior to the calculations, the density and velocity distributions were smoothed following the method described in McNally et al. (2012) (their Eqs. 1 and 3). Thus, the growth rate of the instability obtained with SPHYNX can be compared to the templates obtained by McNally and collaborators using the PENCIL code (available at pencil-code.nordita.org), a state-of-the-art hydrodynamics code of Eulerian type (Brandenburg & Dobler 2002). The pressure is set to everywhere, with . The fluid layer is in almost vertical equilibrium, except for a small seeded perturbation in given by,
with . The velocity perturbation has therefore a wavelength so that the box hosts two complete waveforms.
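Such a seed can be sketched as a small v_y perturbation with wavelength λ = L/2, so that the box of side L hosts exactly two complete waveforms. The amplitude and box size below are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

# Sketch of the KH seed perturbation: a small vertical velocity with
# wavelength lambda = L/2 (two full waveforms per box). A and L are
# illustrative assumptions.
L = 1.0
A = 0.01                          # perturbation amplitude (illustrative)
lam = L / 2.0

def vy_seed(x):
    return A * np.sin(2.0 * np.pi * x / lam)

x = np.linspace(0.0, L, 512, endpoint=False)
vy = vy_seed(x)
# the perturbation is periodic over the box, as required by the boundaries:
print(np.allclose(vy_seed(x + L), vy))
```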
Models and in Table (5) were calculated assuming a density contrast of two, whereas an initial jump of 8 was imposed on models and . To build the initial models, a sample of particles was first spread on a square lattice and then stretched in the vertical direction so that the resulting density profile matches that of Eq. (1) in McNally et al. (2012). This setting naturally leads to a density contrast of two for equal-mass particles and provides the necessary smoothness around the separation layers between the fluids. To build the high-density-contrast models, the initial lattice was stretched until the density ratio was . The mass of the particles was then made proportional to the density of the stretched grid, so that a smooth profile with a density contrast of 8 was achieved.
The Balsara limiter given by Eqs. (28, 30, and 31) was applied to all models to reduce the shear viscosity. For these initial conditions, previous simulations with the traditional formulation of SPH predict the growth of the KH instability only in the low-density-contrast case (Agertz et al. 2007; Junk et al. 2010). When the density contrast rises well above , it is necessary either to add an artificial heat flux to keep isobaricity (Price 2008) or to redefine the volume elements (Saitoh & Makino 2013; Hopkins 2013). In particular, it was shown that the formulation which uses volume elements linked to pressure is able to reproduce the KH instability across high density jumps. Codes which directly use a smoothed pressure in the SPH equations, such as the PSPH formulation described in the Appendix of Hopkins (2015), can also handle high density contrasts. Here we show that the IAD approach combined with the new volume elements, which preserve kernel normalization, is also able to successfully simulate the KH instability across high density jumps.
Figure 10 shows density color-maps of the evolution of models and at times and . There is a clear growth of the instability, qualitatively similar to the results obtained using other modern formulations of the technique, such as the PSPH scheme with neighbors (see for example Fig. (19) in Hopkins (2015)). The results of SPHYNX are also very similar to those obtained with the recent CRKSPH formulation (see for example the upper row of Fig. 17 in Frontiere et al. 2017). Model , calculated using the new volume elements, appears to evolve slightly faster than , computed with the standard choice, , showing more evolved structures in the non-linear stage. Unlike the PSPH scheme, our method does not estimate the pressure by kernel smoothing, nor does it incorporate any artificial heat flux to smooth the pressure.
Figure 11 depicts the mode-amplitude evolution of , calculated by taking the Fourier transform (FT) of the field. The FT was calculated with the expressions given in McNally et al. (2012), so that a comparison with the amplitude growth calculated with the PENCIL code (continuous red line) and with the NDSPMHD code described in Price (2012) (dashed lines) is straightforward. The results are encouraging, as SPHYNX reproduces the KH growth of the reference PENCIL-code simulation more closely than NDSPMHD does, even though the latter has a factor of 2 higher resolution than our simulations with SPHYNX (see also Fig. 7 in McNally et al. (2012) and comments therein). Our results are close to those obtained with the PSPH scheme at the same resolution. In Fig. 11 we also show the L1 errors of each model with respect to the reference values, calculated as , where and stand for the reference and simulated amplitudes, respectively. These results are an indication of the importance of using an accurate gradient evaluation, this being the most relevant difference between SPHYNX and NDSPMHD.
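These diagnostics can be sketched as a discrete Fourier projection that extracts the amplitude of the seeded mode from the v_y field, plus a mean-absolute (L1) error between amplitude histories. This is a generic projection on synthetic data, not the exact weighted expressions of McNally et al. (2012).

```python
import numpy as np

# Sketch of the two diagnostics: amplitude of the seeded mode by Fourier
# projection, and the L1 error between amplitude histories.

def mode_amplitude(x, vy, lam):
    """Amplitude of the mode with wavelength lam from samples vy(x)."""
    k = 2.0 * np.pi / lam
    s = np.mean(vy * np.sin(k * x))
    c = np.mean(vy * np.cos(k * x))
    return 2.0 * np.hypot(s, c)

def l1_error(a_ref, a_sim):
    """L1 = (1/N) sum |a_ref - a_sim| between reference and simulated amplitudes."""
    a_ref, a_sim = np.asarray(a_ref), np.asarray(a_sim)
    return np.mean(np.abs(a_ref - a_sim))

# sanity check on a pure mode of known amplitude:
x = np.linspace(0.0, 1.0, 1024, endpoint=False)
vy = 0.01 * np.sin(2.0 * np.pi * x / 0.5)
print(mode_amplitude(x, vy, 0.5))   # recovers ~0.01
```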
Model , calculated with , exhibits better volume normalization than , computed with , as shown in Fig. (12). That figure depicts the value of the normalization condition as a function of the density of the particles. For both runs, most of the particles cluster around the expected value of 1, but the summation falls below one in the low-density region and exceeds it in the high-density region. Nevertheless, the dispersion around the correct value is considerably lower in the case with , showing its greater capability to achieve equipartition in disordered particle distributions.
Figure 13 summarizes the simulations with an initial density ratio . The upper and lower panels were calculated with and in Eq. (12), respectively. The snapshots correspond to times and . First of all, we see that despite the large jump in density, the instability is able to develop in both cases. These results therefore suggest that the use of the IAD scheme to calculate gradients by itself improves the quality of the simulations of hydrodynamic instabilities. Although the instability evolves slightly faster when , and shows more structure, it is probably affected by the noise introduced by the VE estimator when the mass of the particles is not constant. As in the preceding tests, the model calculated with has the best kernel-normalization properties (not shown in the figures).
5.4.2 The Rayleigh-Taylor instability
In García-Senz et al. (2012) it was shown that the IAD scheme is able to simulate the gross features of the growth of the RT instability for initial perturbations in the velocity field as low as . In this section we present clear evidence that SPHYNX is able to cope with the growth of the RT instability also when the gravitational field is weak.
Our initial model is similar to that described in Springel (2010a), where the numerical experiment takes place in a box of size . The lower and upper halves of the box are filled with equal-mass particles. The particle distribution is then arranged in the vertical direction so that (upper region) and (lower region).
The contact discontinuity around the interface was smoothed using:
with and . The integration of the hydrostatic equilibrium equation