# Improved Performances in Subsonic Flows of an SPH Scheme with Gradients Estimated using an Integral Approach

###### Abstract

In this paper we present results from a series of hydrodynamical tests aimed at validating the performance of a smoothed particle hydrodynamics (SPH) formulation in which gradients are derived from an integral approach. We specifically investigate the code behavior with subsonic flows, where it is well known that zeroth-order inconsistencies present in standard SPH make it particularly problematic to correctly model the fluid dynamics.

In particular we consider the Gresho-Chan vortex problem, the growth of Kelvin-Helmholtz instabilities, the statistics of driven subsonic turbulence and the cold Keplerian disc problem. We compare simulation results for the different tests with those obtained, for the same initial conditions, using standard SPH. We also compare the results with the corresponding ones obtained previously with other numerical methods, such as codes based on a moving-mesh scheme or Godunov-type Lagrangian meshless methods.

We quantify code performances by introducing error norms and spectral properties of the particle distribution, in a way similar to what was done in other works. We find that the new SPH formulation exhibits strongly reduced gradient errors and outperforms standard SPH in all of the tests considered. In fact, in terms of accuracy we find good agreement between the simulation results of the new scheme and those produced using other recently proposed numerical schemes. These findings suggest that the proposed method can be successfully applied for many astrophysical problems in which the presence of subsonic flows previously limited the use of SPH, with the new scheme now being competitive in these regimes with other numerical methods.

## 1 Introduction

Application of computational fluid dynamics to many astrophysical problems has grown steadily over the years with advances in computational power and it has now become a standard tool for studying the non-linear evolution of baryonic structures in the Universe.

There are two methods commonly used in numerical astrophysics for solving the Euler equations. The first method makes use of a spatial grid, either fixed (Stone & Norman, 1992; Norman & Bryan, 1999; Stone et al., 2008) or adaptive (Fryxell et al., 2000; Teyssier, 2002; Norman, 2005; Bryan et al., 2014). The second method is a Lagrangian mesh-free numerical scheme, known as smoothed particle hydrodynamics (SPH: Lucy, 1977; Gingold & Monaghan, 1977), in which particles are used to model fluid properties. Both methods have advantages and weaknesses which are specific to the numerical approach on which each method is based.

Because of its Lagrangian nature, SPH possesses very good conservation properties; moreover, the method is free of the advection errors present in mesh codes and is naturally adaptive, because particle trajectories trace the mass. This latter feature is particularly useful in many astrophysical problems involving the collapse of the structure under study.

The method, however, is not free from significant drawbacks and more specifically it has been found that in several hydrodynamical test cases there are significant differences between the results obtained using the two methods, with SPH failing to properly model the correct behavior (O’Shea et al., 2005; Agertz et al., 2007; Wadsley et al., 2008; Tasker et al., 2008; Mitchell et al., 2009; Read et al., 2010; Valcke et al., 2010; Junk et al., 2010).

More specifically, Agertz et al. (2007) found that SPH fails to resolve the formation of Kelvin-Helmholtz (KH) instabilities at fluid interfaces. This is strongly related to the fluid mixing properties of SPH as well as to the lack of a core entropy in non-radiative simulations of galaxy clusters, in contrast with what is found using mesh-based codes (Wadsley et al., 2008; Mitchell et al., 2009).

It is now widely recognized that the origin of these errors is due to two distinct problems which are present in SPH. The first problem originates from the inconsistencies of standard SPH when dealing with steep density gradients at contact discontinuities, the so-called local mixing instability (Read et al., 2010), thereby suppressing the growth of KH instabilities at the fluid interfaces. The second problem is inherent in the discrete nature of SPH, in which a finite set of particles is used to model the fluid. The discretization implies the presence of a zeroth-order error in the momentum equation due to sampling effects (Read et al., 2010), the so-called $E_0$ error. This error can be reduced by increasing the number of neighbors within the kernel, but for the standard cubic spline kernel there is a threshold value of the neighbor number beyond which a clumping instability develops, thus degrading the convergence rate.

In view of the benefits of the SPH method previously outlined, there have been many attempts to eliminate or reduce these difficulties. Several solutions have been proposed concerning the problem posed by the standard formulation at contact discontinuities.

One solution is to modify the equations, so that it is the pressure rather than the density which is smoothed (Ritchie & Thomas, 2001; Read et al., 2010; Hopkins, 2013; Saitoh & Makino, 2013; Hu et al., 2014). On the other hand, Price (2008) proposed to include in the SPH energy equation a term of artificial conductivity (AC) with the aim of smoothing the thermal energy across fluid interfaces and thus removing the associated entropy gap. This approach is similar to that used by Wadsley et al. (2008), who mimicked the effect of subgrid turbulence by adding a heat diffusion term to the equations. The method, however, requires some care in the implementation of the conduction switches to avoid the risk of getting too much diffusion.

By performing a suite of hydrodynamical tests, Valdarnini (2012, hereafter V12) found that the method yields consistent results when contrasted with those obtained using mesh-based codes. In particular (V12; Biffi & Valdarnini, 2015), the level of core entropy produced in simulations of non-radiative galaxy clusters is now comparable with that of grid codes. Recently, Beck et al. (2016) have proposed a modification to the standard SPH code Gadget-II (Springel, 2005) which incorporates the new AC term into the hydrodynamic equations.

Finally, other variants of SPH are based on Riemann solvers (Godunov-SPH: Inutsuka, 2002; Cha et al., 2010; Murante et al., 2011), Voronoi tessellation techniques (Heß & Springel, 2010) or on the use of high order dissipation switches (Read & Hayfield, 2012).

The zeroth-order inconsistency is due to the inability of the SPH method to properly reproduce a constant function because of finite resolution (Dilts, 1999; Liu et al., 2003), thus leading to poor gradient estimates (Read et al., 2010; McNally et al., 2012) and in turn affecting the momentum equation. Keeping these errors under control becomes problematic when dealing with subsonic flows, as in the case of subsonic turbulence (Bauer & Springel, 2012) or of Rayleigh-Taylor instabilities (Abel, 2011; García-Senz et al., 2012, V12).

A possible solution is to drastically increase the number of neighbors used in the simulation. In this case, the clumping instability can be avoided either by modifying the shape of the cubic spline kernel (Read et al., 2010), or by adopting (Dehnen & Aly, 2012) the Wendland kernels (Wendland, 1995). These kernels are characterized by the specific property of not being subject to clumping instabilities in the limit of large neighbor numbers.

Another possibility is to consider other discretizations of the momentum equation (Morris, 1996; Abel, 2011), but this comes at the cost of losing energy and momentum conservation (Price, 2012), thus making the scheme of little use in practice.

Finally, to overcome the difficulties of SPH mentioned above, new numerical schemes have been proposed (Springel, 2010; Duffell & MacFadyen, 2011; Hopkins, 2015; Schaal et al., 2015; Pakmor et al., 2016) which aim to retain the advantages of both SPH and mesh-based codes. These new schemes are quite numerically complex and, in some cases, their spatial discretization does not appear optimal for forthcoming parallel computing systems consisting of several million cores (Schaal et al., 2015).

A satisfactory solution to the problem of zeroth-order inconsistency in SPH has been presented by García-Senz et al. (2012), who showed how the accuracy in gradient estimates can be greatly improved by calculating first order derivatives by means of the evaluation of integrals and the use of matrix inversions. The resulting tensor scheme has been tested in a variety of hydrodynamical test cases (García-Senz et al., 2012; Rosswog, 2015), showing significant improvements as compared with the standard formulation. A crucial feature of the method is that it retains the Lagrangian nature of SPH, unlike previous attempts aimed at improving gradient accuracy.

Motivated by these findings we here further investigate the performance of the new scheme, paying particular attention to its behavior in the regime of subsonic flows, where it has been found that standard SPH presents its major difficulties.

The goal of this paper is to demonstrate that, for the hydrodynamical tests considered here, the new SPH formulation gives results with accuracy comparable to that of mesh-based codes. Thus, the new code can be profitably used for many astrophysical problems without the shortcomings of standard SPH. The main advantage of the new scheme is that it keeps its fully Lagrangian nature, while retaining a relative simplicity in its implementation as compared with new numerical schemes recently proposed.

The paper is organized as follows. In Section 2 we present the hydrodynamical method and the implementation of the integral-based approach. Some basic properties of the most widely used SPH kernels are briefly reviewed in Section 3. The results of the hydrodynamical tests are given in Section 4, where we consider the Gresho-Chan vortex problem, the development of KH instabilities, the statistics of driven subsonic turbulence and finally the Keplerian disc problem. Our main results and conclusions are summarized in Section 5.

## 2 Hydrodynamic method

This section reviews the basic features of SPH; for a comprehensive review see Rosswog (2009) and Price (2012).

### 2.1 Basic equations

In SPH, the fluid is described within the solution domain by a set of particles with mass $m_i$, velocity $\vec v_i$, density $\rho_i$, and specific entropy $A_i$ (we use the convention of having Latin indices denoting particles and Greek indices denoting the spatial dimensions). Here, we integrate the entropy per particle (Springel & Hernquist, 2002) in place of the thermal energy per unit mass (Hernquist & Katz, 1989; Wadsley et al., 2004). The entropy is related to the particle pressure by $P_i = A_i \rho_i^{\gamma}$, where $\gamma = 5/3$ for a mono-atomic gas. The density estimate at the particle position $\vec r_i$ is given by

$$\rho_i = \sum_j m_j W(|\vec r_i - \vec r_j|, h_i) \,, \tag{1}$$

where $W(|\vec r_i - \vec r_j|, h_i)$ is the interpolating kernel, which is zero for $|\vec r_i - \vec r_j| \geq \zeta h_i$, with $\zeta$ the truncation radius of the kernel in units of $h_i$ (Price, 2012). Since the kernel has compact support, the sum in Equation (1) is over a finite number of particles. The smoothing length $h_i$ is implicitly defined by

$$N_{nn} = \pi (\zeta h_i)^2 \frac{\rho_i}{m_i} \quad \text{(2D)} \,, \qquad N_{nn} = \frac{4}{3}\pi (\zeta h_i)^3 \frac{\rho_i}{m_i} \quad \text{(3D)} \,, \tag{2}$$

so that $N_{nn}$ is, in two and three dimensions respectively, the mean number of neighboring particles of particle $i$ within a radius $\zeta h_i$. In principle, the solution of Equation (2) allows for non-integer values of $N_{nn}$ (Price, 2012). Here we solve the equation for the $h_i$ by requiring an integer value for $N_{nn}$, to which we will generically refer in the following as the neighbor number.
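As an illustration, the density estimate of Equation (1) can be sketched in a few lines. This is a minimal brute-force sketch (no neighbor search, no periodic boundaries), assuming the standard 3D cubic spline kernel with support $2h$; the function names are ours, not those of any code used in the paper.

```python
import numpy as np

def w_cubic(r, h):
    """3D cubic spline (M4) kernel, support 2h, normalization 1/(pi h^3)."""
    q = r / h
    w = np.where(q <= 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q <= 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return w / (np.pi * h**3)

def sph_density(pos, mass, h):
    """Brute-force Eq. (1): rho_i = sum_j m_j W(|r_i - r_j|, h_i)."""
    n = len(pos)
    rho = np.zeros(n)
    for i in range(n):
        r = np.linalg.norm(pos - pos[i], axis=1)
        rho[i] = np.sum(mass * w_cubic(r, h[i]))
    return rho
```

In a production code the $O(N^2)$ loop would of course be replaced by a tree or grid neighbor search.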

Following Price (2012), the Euler equations are derived using a Lagrangian formulation:

$$\frac{d\vec v_i}{dt} = -\sum_j m_j \left[ f_i \frac{P_i}{\rho_i^2} \nabla_i W_{ij}(h_i) + f_j \frac{P_j}{\rho_j^2} \nabla_i W_{ij}(h_j) \right] \,, \tag{3}$$

where $f_i$ is defined as

$$f_i = \left[ 1 + \frac{h_i}{3 \rho_i} \frac{\partial \rho_i}{\partial h_i} \right]^{-1} \,. \tag{4}$$

### 2.2 Artificial viscosity

The momentum equation (3) must be generalized to include an artificial viscosity (AV) term, which in SPH represents the effects of shocks. This is introduced in order to prevent particle interpenetration and to convert kinetic energy into heat at shocks; the new term reads

$$\left( \frac{d\vec v_i}{dt} \right)_{AV} = -\sum_j m_j \Pi_{ij} \nabla_i \bar W_{ij} \,, \tag{5}$$

where $\bar W_{ij} = [W_{ij}(h_i) + W_{ij}(h_j)]/2$ is the symmetrized kernel and $\Pi_{ij}$ is the AV tensor.

The latter is written following the formulation of Monaghan (1997), based on an analogy with the Riemann problem:

$$\Pi_{ij} = -\frac{\alpha_{ij}}{2} \frac{v^{sig}_{ij}\, w_{ij}}{\rho_{ij}} \,, \tag{6}$$

where $\rho_{ij} = (\rho_i + \rho_j)/2$ is the average density, $w_{ij} = \vec v_{ij} \cdot \vec r_{ij} / |\vec r_{ij}|$ if $\vec v_{ij} \cdot \vec r_{ij} < 0$ but zero otherwise, and $\alpha_{ij} = (\alpha_i + \alpha_j)/2$. The signal velocity is introduced as

$$v^{sig}_{ij} = c_i + c_j - 3 w_{ij} \,, \tag{7}$$

with $c_i$ being the sound velocity. The amount of AV is regulated by the parameter $\alpha_{ij}$, and the viscosity is multiplied by a limiter $f_{ij} = (f_i + f_j)/2$, which is introduced so as to suppress AV when strong shear flows are present. The individual factors are written as (Balsara, 1995)

$$f_i = \frac{|\langle \vec\nabla \cdot \vec v \rangle_i|}{|\langle \vec\nabla \cdot \vec v \rangle_i| + |\langle \vec\nabla \times \vec v \rangle_i| + 0.0001\, c_i / h_i} \,, \tag{8}$$

where $\langle \vec\nabla \cdot \vec v \rangle_i$ and $\langle \vec\nabla \times \vec v \rangle_i$ are estimated according to the SPH formalism.

The early SPH formulations (Monaghan, 2005) assumed a constant viscosity parameter $\alpha$ of order unity for all the particles, thus making the scheme excessively viscous away from shocks. In the literature, the SPH scheme with this viscosity parametrization is often referred to as standard SPH, whereas here we use this term to indicate the SPH formulation which uses the AV switch that we will now describe.

To reduce the amount of AV away from shocks, Morris & Monaghan (1997) proposed letting the $\alpha_i$'s vary with time according to some source term $S_i$. The time evolution of $\alpha_i$ is given by

$$\frac{d\alpha_i}{dt} = -\frac{\alpha_i - \alpha_{min}}{\tau_i} + S_i \,, \tag{9}$$

where $\alpha_{min}$ is a floor value and

$$\tau_i = \frac{h_i}{l_d\, c_i} \tag{10}$$

is a decay time scale which is controlled by the dimensionless decay parameter $l_d$. The source term is constructed so that it increases whenever $(\vec\nabla \cdot \vec v)_i < 0$ (Morris & Monaghan, 1997); here we adopt a slightly modified form (Valdarnini, 2011) which reads

$$S_i = f_i\, S_0\, \max\left( -(\vec\nabla \cdot \vec v)_i,\, 0 \right) (\alpha_{max} - \alpha_i) \,,$$

where $\alpha_{max}$ sets an upper limit and $f_i$ is the shear limiter (8), which is unity for purely compressive flows. In the following, unless otherwise specified, we adopt a time-dependent AV scheme with this parametrization; the corresponding set of parameters will be denoted as AV (see Table 1 of Valdarnini, 2011, to which we refer for more details).
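The per-particle update described by Equations (9) and (10) can be sketched as a simple explicit step. This is a minimal illustration, not the integrator used in the paper; the default parameter values below are placeholders, not those adopted in the runs.

```python
def update_alpha(alpha, div_v, c_s, h, dt,
                 alpha_min=0.1, alpha_max=1.5, l_d=0.2, S0=1.0):
    """One explicit Euler step of a Morris & Monaghan (1997)-type switch:
    d(alpha)/dt = -(alpha - alpha_min)/tau + S, with tau = h / (l_d c_s)
    and source S = S0 * max(-div_v, 0) * (alpha_max - alpha).
    Parameter values here are illustrative placeholders."""
    tau = h / (l_d * c_s)                              # decay time, Eq. (10)
    S = S0 * max(-div_v, 0.0) * (alpha_max - alpha)    # source term
    alpha += dt * (-(alpha - alpha_min) / tau + S)     # Eq. (9)
    return min(max(alpha, alpha_min), alpha_max)
```

In a converging flow (negative velocity divergence) the parameter grows toward $\alpha_{max}$, while in quiescent regions it decays back to the floor value on the time scale $\tau_i$.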

To suppress AV more efficiently away from shocks, the time-dependent AV scheme has been further improved by Cullen & Dehnen (2010). They introduced the time derivative of the velocity divergence to detect in advance when a flow is converging, as well as higher order gradient estimators and a more sophisticated functional form for the viscosity limiter in shear flows. The Cullen & Dehnen (2010) scheme will be used in some test cases, and we refer to the authors' paper for a detailed description of its implementation in SPH.

### 2.3 The artificial conductivity scheme

In the entropy formulation of SPH, the rate of entropy generation is given by (Springel & Hernquist, 2002)

$$\frac{dA_i}{dt} = \frac{\gamma - 1}{\rho_i^{\gamma - 1}} \left[ \left( \frac{du_i}{dt} \right)_{AV} + \left( \frac{du_i}{dt} \right)_{AC} \right] \,, \tag{11}$$

where the terms in brackets denote the different sources of heating of the specific thermal energy $u_i$ (in the hydrodynamic test cases presented here, radiative losses are not considered).

The term $(du_i/dt)_{AV}$ refers to the numerical viscosity:

$$\left( \frac{du_i}{dt} \right)_{AV} = \frac{1}{2} \sum_j m_j \Pi_{ij}\, \vec v_{ij} \cdot \nabla_i \bar W_{ij} \,. \tag{12}$$

The AC term for the dissipation of energy takes the form

$$\left( \frac{du_i}{dt} \right)_{AC} = \sum_j \frac{m_j}{\rho_{ij}}\, \alpha^{C}_{ij}\, v^{AC}_{ij}\, (u_i - u_j)\, \hat r_{ij} \cdot \nabla_i \bar W_{ij} \,, \tag{13}$$

where $v^{AC}_{ij}$ is the AC signal velocity, $\hat r_{ij} = \vec r_{ij} / |\vec r_{ij}|$, and $\alpha^{C}_{ij} = (\alpha^{C}_i + \alpha^{C}_j)/2$ is the AC parameter, which is of order unity. The above Equation represents the SPH analogue of a diffusion equation of the form (Price, 2008)

$$\frac{du}{dt} = \eta\, \nabla^2 u \,, \tag{14}$$

where $\eta$ is a numerical heat-diffusion coefficient given by

$$\eta \simeq \alpha^{C}\, v^{AC}\, h \,. \tag{15}$$

A crucial issue concerns reducing the AC in the absence of contact discontinuities. In analogy with the AV scheme, one can define an AC switch with a source term of the form

$$S^{C}_i \propto h_i\, \frac{|\nabla^2 u_i|}{\sqrt{u_i}} \,, \tag{16}$$

where the Laplacian of the thermal energy is calculated as done by Brookshaw (1985):

$$(\nabla^2 u)_i = 2 \sum_j \frac{m_j}{\rho_j}\, \frac{u_i - u_j}{|\vec r_{ij}|}\, \hat r_{ij} \cdot \nabla_i W_{ij} \,, \tag{17}$$

and for the signal velocity we use (V12)

$$v^{AC}_{ij} = |\vec v_{ij} \cdot \hat r_{ij}| \,. \tag{18}$$

The values of the remaining parameters of the AC scheme are set as in V12.
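The Brookshaw (1985) estimate of Equation (17) is easy to sketch in code. The brute-force neighbor loop and the cubic spline kernel below are our own minimal choices, assuming a single global smoothing length for simplicity.

```python
import numpy as np

def grad_w_cubic(rvec, h):
    """Gradient w.r.t. r_i of the 3D M4 cubic spline kernel (support 2h).
    rvec = r_i - r_j with shape (n, 3); caller must ensure |rvec| > 0."""
    r = np.linalg.norm(rvec, axis=1)
    q = r / h
    dwdq = np.where(q <= 1.0, -3.0 * q + 2.25 * q**2,
                    np.where(q <= 2.0, -0.75 * (2.0 - q)**2, 0.0))
    return (dwdq / (np.pi * h**4))[:, None] * rvec / r[:, None]

def laplacian_u(pos, mass, rho, u, h, i):
    """Brookshaw (1985) SPH estimate of (nabla^2 u) at particle i, Eq. (17)."""
    rvec = pos[i] - pos                      # r_ij = r_i - r_j
    r = np.linalg.norm(rvec, axis=1)
    mask = (r > 0) & (r < 2.0 * h)           # particles inside the support
    g = grad_w_cubic(rvec[mask], h)
    rhat_dot_gw = np.sum(rvec[mask] * g, axis=1) / r[mask]
    return 2.0 * np.sum(mass[mask] / rho[mask]
                        * (u[i] - u[mask]) / r[mask] * rhat_dot_gw)
```

On a regular lattice with a smooth quadratic field, the estimate recovers the analytic Laplacian to within discretization error.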

The time evolution of the AC parameter $\alpha^{C}_i$ is similar to that of the AV one:

$$\frac{d\alpha^{C}_i}{dt} = -\frac{\alpha^{C}_i - \alpha^{C}_{min}}{\tau^{C}_i} + S^{C}_i \,, \tag{19}$$

where $\tau^{C}_i$ sets the decay time scale away from jumps in thermal energy.

The AC term has been introduced with the purpose of smoothing the thermal energy at contact discontinuities (Price, 2008) and, when the signal velocity (18) is used, it can be interpreted as a subgrid model mimicking the effects of diffusion due to turbulence (Wadsley et al., 2008). Finally, it must be stressed that for the hydrodynamic test problems considered here, with the exception of the KH tests, thermal energy gradients are null or very small. For these tests the impact of AC on the simulation results can then be considered negligible.

### 2.4 The Integral Approximation scheme

In SPH, the errors associated with finite sampling cannot be simply eliminated by a more accurate interpolation scheme. A gradient estimator which is exact at linear order can be constructed by using matrix inversion (Price, 2012), but this comes at the cost of losing the conservation properties of SPH.

To avoid these difficulties García-Senz et al. (2012) proposed a novel approach in which SPH first-order derivatives are estimated through the use of integrals. This makes the method much less noisy than in the standard formulation, with accuracy in estimated gradients being greatly improved (García-Senz et al., 2012; Rosswog, 2015).

Moreover, a significant benefit of the method is that it retains the Lagrangian nature of SPH, thereby ensuring exact conservation of linear and angular momentum.

Following the original paper of García-Senz et al. (2012), the performance of the scheme was investigated in detail by Rosswog (2015); here we briefly describe the essential features of the method.

Let us define the integral

$$I_\mu(\vec r) = \int f(\vec r')\, (x'_\mu - x_\mu)\, W(|\vec r' - \vec r|, h)\, d^3 r' \,, \tag{20}$$

where $W$ is a spherically symmetric and normalized kernel. By Taylor expanding $f(\vec r')$ to first order around $\vec r$ one obtains

$$I_\mu(\vec r) \simeq f(\vec r) \int \Delta x_\mu\, W\, d^3 r' + \sum_\nu \frac{\partial f}{\partial x_\nu} \int \Delta x_\mu\, \Delta x_\nu\, W\, d^3 r' \,, \tag{21}$$

where we have introduced the notation $\Delta x_\mu = x'_\mu - x_\mu$. Since the first integral vanishes for a spherically symmetric kernel, the gradient of the function is then given by

$$\frac{\partial f}{\partial x_\nu}(\vec r) = \sum_\mu \mathcal{C}_{\nu\mu}\, I_\mu(\vec r) \,, \tag{22}$$

where $\mathcal{C} = \mathcal{T}^{-1}$ and

$$\mathcal{T}_{\mu\nu} = \int \Delta x_\mu\, \Delta x_\nu\, W(|\vec r' - \vec r|, h)\, d^3 r' \,. \tag{23}$$

In SPH, integrals are replaced by summations over particles, so that for the matrix $\mathcal{T}$ of particle $i$ one has

$$\mathcal{T}_{i,\mu\nu} = \sum_j \frac{m_j}{\rho_j}\, (x_{j,\mu} - x_{i,\mu})(x_{j,\nu} - x_{i,\nu})\, W_{ij} \,. \tag{24}$$

In evaluating the discrete equivalent of the integral (20), a key step is to assume that the condition

$$\sum_j \frac{m_j}{\rho_j}\, (\vec r_j - \vec r_i)\, W_{ij} \simeq 0 \tag{25}$$

is fulfilled with a certain degree of accuracy. In such a case the integral (20) then becomes

$$I_{i,\mu} = \sum_j \frac{m_j}{\rho_j}\, f_j\, (x_{j,\mu} - x_{i,\mu})\, W_{ij} \,. \tag{26}$$

Because of the approximation (25), for linear functions the gradient estimates are no longer exact. However, it can easily be seen (García-Senz et al., 2012) that the gradient approximation (22), obtained using Equations (24) and (26), is now antisymmetric in the particle pair $ij$. Thus, the condition (25) in the new scheme is crucial for ensuring exact conservation properties.

How well the approximation (25) is valid depends on the particle distribution within the kernel radius. The validity of the new scheme has been carefully tested (García-Senz et al., 2012; Rosswog, 2015) for several hydrodynamical problems, demonstrating significant improvements in the accuracy of the results with respect to standard SPH.

To summarize, the integral approximation implies the replacement of $\nabla_i W_{ij}$ in the SPH equations according to the following prescriptions:

$$\nabla_i W_{ij}(h_i) \longrightarrow \vec{\mathcal{A}}_{ij}(h_i) \,, \qquad \mathcal{A}_{ij,\mu}(h_i) = \sum_\nu \mathcal{C}_{\mu\nu}(h_i)\, (x_{j,\nu} - x_{i,\nu})\, W_{ij}(h_i) \,, \tag{27}$$

and

$$\nabla_i W_{ij}(h_j) \longrightarrow \vec{\mathcal{A}}_{ij}(h_j) \,, \qquad \mathcal{A}_{ij,\mu}(h_j) = \sum_\nu \mathcal{C}_{\mu\nu}(h_j)\, (x_{j,\nu} - x_{i,\nu})\, W_{ij}(h_j) \,, \tag{28}$$

where $W_{ij}(h) = W(|\vec r_i - \vec r_j|, h)$. In the following, the SPH formulation in which gradients are estimated according to the numerical scheme described here will be referred to as the integral approximation (IA).
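A minimal sketch of the IA gradient estimate may help fix ideas: build the matrix of Equation (24) and the vector of Equation (26) by direct summation, then solve the linear system of Equation (22). The brute-force neighbor search and the cubic spline kernel are our own simplifications; a real implementation would cache the matrix per particle and per smoothing length.

```python
import numpy as np

def w_cubic(r, h):
    """3D cubic spline (M4) kernel, support 2h, normalization 1/(pi h^3)."""
    q = r / h
    w = np.where(q <= 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q <= 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return w / (np.pi * h**3)

def ia_gradient(pos, mass, rho, f, h, i):
    """IA estimate of grad f at particle i:
    tau (Eq. 24) and I (Eq. 26) by direct summation, then grad f = tau^{-1} I
    (Eqs. 22-23). Assumes a single smoothing length h for simplicity."""
    dr = pos - pos[i]                           # r_j - r_i
    r = np.linalg.norm(dr, axis=1)
    mask = (r > 0) & (r < 2.0 * h)
    w = (mass[mask] / rho[mask]) * w_cubic(r[mask], h)
    d = dr[mask]
    tau = np.einsum('j,jm,jn->mn', w, d, d)     # Eq. (24)
    I = np.einsum('j,j,jm->m', w, f[mask], d)   # Eq. (26)
    return np.linalg.solve(tau, I)              # Eq. (22)
```

On a symmetric particle configuration, for which condition (25) holds exactly, the estimate is exact for linear fields.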

## 3 Kernels

In this section we briefly review some properties of the most commonly used kernels in SPH. All of these kernels are characterized by the property of having compact support and of being continuous up to some degree.

A class of kernels which has often been considered in SPH is that of the B-spline functions (Price, 2012), which are generated via the 1D Fourier transform:

$$M_n(x, h) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \left[ \frac{\sin(k h / 2)}{k h / 2} \right]^n \cos(k x)\, dk \,. \tag{29}$$

The degree of smoothness increases with $n$, and the kernel approaches the Gaussian in the limit $n \to \infty$. The function $M_n$ is a piecewise polynomial of degree $n - 1$ which is continuously differentiable $n - 2$ times. By requiring in SPH the continuity of the first and second derivatives, the first useful kernel is then $M_4$ (cubic spline):

$$W(q) = \frac{\sigma}{h^D} \begin{cases} 1 - \frac{3}{2} q^2 + \frac{3}{4} q^3 & 0 \le q \le 1 \\ \frac{1}{4} (2 - q)^3 & 1 < q \le 2 \\ 0 & q > 2 \,, \end{cases} \tag{30}$$

where $q = r/h$, the kernel is non zero for $q < \zeta$, and $\zeta = 2$ is the truncation radius in units of $h$. The normalization constant $\sigma$ takes the values $10/(7\pi)$ and $1/\pi$ for $D = 2$ and $D = 3$ dimensions, respectively.

The B-splines next in order which have been considered are $M_5$ ($\zeta = 2.5$) and $M_6$ ($\zeta = 3$); for a more detailed description of these kernels we refer the reader to Price (2012).

The stability properties of the kernel family have been investigated by a number of authors (Morris, 1996; Børve et al., 2004; Read et al., 2010; Dehnen & Aly, 2012, V12). A crucial result which emerges from these analyses is that all of the B-splines suffer from pairing instability. The neighbor number at which the instability develops depends on the kernel degree, increasing in 3D from the threshold of the $M_4$ kernel up to that of the $M_6$ kernel (Dehnen & Aly, 2012).

It has been suggested that particle clumping can be avoided by modifying the kernel shape in order to have a non-zero gradient at the origin. Examples of this family of kernels are the core-triangle (CRT; Read et al., 2010) and the linear quartic (LIQ; Valcke et al., 2010). However, such adjustments reflect negatively on the density estimation ability of the kernels. By introducing a non-zero central derivative, the kernel profile becomes steeper and this in turn implies an overestimate of density when compared with the corresponding B-spline (V12, Rosswog, 2015).

These attempts to fix the pairing instability by introducing ad hoc modifications in the kernel shape have recently been superseded by a new class of kernels. It has been shown (Dehnen & Aly, 2012) that a necessary condition for avoiding pairing instability is that of having kernels with non-negative Fourier transforms. A class of kernels which satisfies this property and has compact support is that of the Wendland (1995) functions. An example in 3D of these functions is the Wendland $C^2$:

$$W(q) = \frac{21}{2\pi (\zeta h)^3}\, (1 - q)^4\, (1 + 4 q) \,, \tag{31}$$

where $q = r/(\zeta h)$ and the kernel is zero if $q > 1$. Hereafter we will refer to this kernel as $W_{C2}$. Other classes of Wendland kernels are $C^4$ ($W_{C4}$) and $C^6$ ($W_{C6}$). We refer to Table 1 of Dehnen & Aly (2012) for the functional forms and normalization of these kernels. Finally, García-Senz et al. (2014) proposed using the sinc functions as another class of kernels which can be used to avoid pairing instability.
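For concreteness, the $M_4$ and Wendland $C^2$ kernels can be written down and their 3D normalizations checked numerically. The sketch below assumes the normalization constants quoted above ($1/\pi$ for $M_4$ with support $2h$, $21/(2\pi)$ for $C^2$ with support expressed in units of the truncation radius).

```python
import numpy as np

def m4(q):
    """3D cubic spline (M4): support q = r/h in [0, 2], norm sigma = 1/pi."""
    return np.where(q <= 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                    np.where(q <= 2.0, 0.25 * (2.0 - q)**3, 0.0)) / np.pi

def wendland_c2(q):
    """3D Wendland C2: support q = r/H in [0, 1], norm 21/(2 pi)."""
    return np.where(q <= 1.0,
                    (1.0 - q)**4 * (1.0 + 4.0 * q), 0.0) * 21.0 / (2.0 * np.pi)

def radial_norm(w, qmax, n=400001):
    """Integrate w over its spherical 3D support (unit h or H), Riemann sum."""
    q = np.linspace(0.0, qmax, n)
    return np.sum(4.0 * np.pi * q**2 * w(q)) * (q[1] - q[0])
```

Both `radial_norm(m4, 2.0)` and `radial_norm(wendland_c2, 1.0)` evaluate to unity up to quadrature error, confirming the normalizations.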

The accuracy of density estimation in SPH for different kernel families has been assessed by many authors (Dehnen & Aly 2012; V12; Rosswog 2015; Zhu et al. 2015). Here we measure the mean SPH density of particles using a glass-like particle distribution inside a cube of sidelength unity and total mass one. Figure 1 shows the mean SPH density of the particles as a function of the neighbor number $N_{nn}$ for different B-spline and Wendland kernels. The values of $N_{nn}$ are spaced in powers of two, with three distinct values being considered for each kernel according to its order (see Figure 1). To avoid overcrowding in the plot, standard deviations are not shown, but the general tendency is of decreasing scatter as $N_{nn}$ increases.

A number of conclusions can be drawn by examining, for the different kernels, the accuracy behavior depicted in Figure 1. The best performances are given by the higher order B-splines, which for any given number of neighbors outperform all of the other kernels. The cubic spline $M_4$ and the lowest order Wendland kernel exhibit the worst performances, regardless of the value of $N_{nn}$.

The Wendland kernels yield acceptable density estimates only when large values of $N_{nn}$ are used, with $W_{C4}$ and $W_{C6}$ having better performances at any given $N_{nn}$ and stable estimates already at moderate neighbor numbers. The differences between the two families reduce progressively as $N_{nn}$ is increased, with the results becoming comparable only at the largest neighbor numbers.

This behavior reflects the difference in shape between the two families, with the Wendland kernels being more centrally peaked than the B-splines and systematically overestimating the density. This in turn is a direct consequence of the way in which the Wendland kernels have been constructed in order to avoid pairing instability and of their spectral properties.

It must be stressed that in making comparisons between the error behavior of kernels of different families, it is only meaningful to compare kernels with the same polynomial order (Aguilar et al., 2011). In 3D, the Wendland kernel of polynomial degree equivalent to that of the $M_6$ spline is therefore $W_{C2}$.

This means that, in relative terms, the performances of the Wendland kernels are not very good unless one is willing to use a very large number of neighbors in the SPH simulations made with them. In this respect, it is now common practice to use the Wendland kernels with only moderately large neighbor numbers. However, even small errors in the densities can have a significant impact on estimates of other hydrodynamic variables (see the results for the Sod shock tube in V12). Thus, when using Wendland kernels, a conservative lower limit should be placed on the neighbor number adopted in 3D SPH runs.

To improve the performances of Wendland kernels, Dehnen & Aly (2012, cf. their Eq. 18) proposed to subtract from the SPH density estimate (1) a fraction of the particle self-contribution. The correction term depends on the kernel order and neighbor number, with an impact which decreases as either of the two increases. For the Wendland $C^2$ kernel we also show in Figure 1 the density estimated using the self-correction term, which brings the relative density error down considerably.

However, for the hydrodynamic tests presented here the numerical set-ups consist of particle positions arranged in lattice or glass-like configurations with densities of order unity. This suggests that one can use the results of Figure 1 to assess density errors, which are already very small without the use of such a correction term. Therefore, for the runs considered we expect a negligible impact of the self-correction term on SPH densities, and in what follows it will not be considered.

These results hold for a glass-like configuration, but for a realistic SPH distribution of particles it is difficult to assess the error behavior. Zhu et al. (2015) put the expected convergence rate between that found for a random distribution of particles and the one measured for a highly ordered distribution, such as a glass-like configuration. These findings strengthen the previous conclusions, suggesting that when using Wendland kernels in SPH, the number of neighbors should be kept as high as possible.

## 4 Hydrodynamic tests

In the following, we analyze results from some test problems aimed at assessing code performance of the new IA scheme. As already outlined in the Introduction, the problems considered here have been chosen with the specific aim of investigating code behavior when subsonic flows are present in the hydrodynamic tests.

This is motivated by the serious shortcomings which affect standard SPH in these regimes. We first discuss the Gresho-Chan test, which in this respect presents severe challenges to the SPH scheme, and then the others.

### 4.1 The Gresho-Chan vortex problem

The Gresho-Chan (Gresho & Chan, 1990) vortex consists of a fluid of uniform density in differential rotation, with centrifugal forces balancing pressure gradients. The system is stationary and any change in the azimuthal velocity profile that arises during the integration is then due to numerical artifacts.

Because of sampling effects, errors in force accuracy lead to noise in the velocity field, thus generating numerical viscosity and hence particle disorder due to spurious transport of angular momentum. It is then particularly problematic for standard SPH to model this problem while leaving the velocity profile unaltered during the simulation, and in the literature (Springel, 2010; Dehnen & Aly, 2012; Read & Hayfield, 2012; Kawata et al., 2014; Hu et al., 2014; Hopkins, 2015; Rosswog, 2015; Zhu et al., 2015) it has been widely used to validate code performances.

We next describe our initial conditions set-up. We take a gas of uniform density within a periodic domain, with zero radial velocity, azimuthal velocity profile

$$v_\phi(r) = \begin{cases} 5 r & 0 \le r < 0.2 \\ 2 - 5 r & 0.2 \le r < 0.4 \\ 0 & r \ge 0.4 \,, \end{cases} \tag{32}$$

and pressure profile

$$P(r) = P_0 + \begin{cases} \frac{25}{2} r^2 & 0 \le r < 0.2 \\ \frac{25}{2} r^2 - 20 r + 4 + 4 \ln(5 r) & 0.2 \le r < 0.4 \\ 4 \ln 2 - 2 & r \ge 0.4 \,, \end{cases} \tag{33}$$

where $\rho = 1$, $\gamma = 5/3$, $P_0 = \rho v_0^2 / (\gamma \mathcal{M}^2)$ with $v_0 = 1$ the peak rotation velocity, and $\mathcal{M}$ is the Mach number. Here we have adopted the generalized expression for the background pressure of Miczek et al. (2015), so that we can consider subsonic shear flows with low Mach numbers (Hu et al., 2014). The standard Gresho case is recovered for $P_0 = 5$.

Particle positions are initialized using a lattice of particles (Zhu et al., 2015), with $N_{1D}$ being the effective 1D resolution; the box thickness is kept fixed in all runs. Here we use a hexagonal-close-packed (HCP) configuration for the particle coordinates. The particle velocities and pressure are set according to Equations (32) and (33). All of the simulations are run up to a final time $t_f$, using a fixed timestep, so as to ensure the same integration accuracy in runs with different Mach numbers.
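The analytic profiles (32) and (33) are straightforward to code. The sketch below follows our reconstruction of the profiles, with the background pressure $P_0 = \rho_0 / (\gamma \mathcal{M}^2)$ (for unit peak velocity) as an assumption; the piecewise constants enforce pressure continuity and centrifugal balance $dP/dr = \rho v_\phi^2 / r$.

```python
import numpy as np

def gresho_vphi(r):
    """Azimuthal velocity profile of Eq. (32)."""
    return np.where(r < 0.2, 5.0 * r,
                    np.where(r < 0.4, 2.0 - 5.0 * r, 0.0))

def gresho_pressure(r, mach, gamma=5.0/3.0, rho0=1.0):
    """Pressure profile of Eq. (33) with the Miczek et al. (2015)-style
    background P0 = rho0 / (gamma * M^2), assuming unit peak velocity."""
    p0 = rho0 / (gamma * mach**2)
    return p0 + np.where(
        r < 0.2, 12.5 * r**2,
        np.where(r < 0.4,
                 12.5 * r**2 - 20.0 * r + 4.0 + 4.0 * np.log(5.0 * r),
                 4.0 * np.log(2.0) - 2.0))
```

Lowering the Mach number at fixed rotation profile simply raises the pressure floor $P_0$, and with it the sound speed.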

We quantify the convergence rate for the different runs by using the $L_1$ error norm for the velocity (Springel, 2010)

$$L_1(v) = \frac{1}{N_{bins}} \sum_n \left| \bar v_\phi(r_n) - v^{exact}_\phi(r_n) \right| \,, \tag{34}$$

where the summation is over $N_{bins}$ equispaced radial bins covering the range of the vortex (Hu et al., 2014), $\bar v_\phi(r_n)$ is the average azimuthal velocity of the particles which lie in the bin interval, and $v^{exact}_\phi(r_n)$ is the analytic solution at the bin radial coordinate $r_n$.
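A binned $L_1$ norm in the spirit of Equation (34) can be computed as follows; the bin range and count below are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

def l1_velocity_error(r, vphi, v_exact, rmax=0.5, nbins=50):
    """Binned L1 error norm, Eq. (34): average over radial bins of
    |<v_phi>_bin - v_exact(r_bin)|, with v_exact evaluated at bin centers.
    Bin range and count are illustrative choices."""
    edges = np.linspace(0.0, rmax, nbins + 1)
    centers = 0.5 * (edges[1:] + edges[:-1])
    idx = np.digitize(r, edges) - 1
    err, nb = 0.0, 0
    for n in range(nbins):
        sel = idx == n
        if np.any(sel):                      # skip empty bins
            err += abs(vphi[sel].mean() - v_exact(centers[n]))
            nb += 1
    return err / nb
```

For a particle distribution that samples the exact solution the norm is close to zero, while a uniform velocity offset is recovered directly as the norm value.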

Figure 2 shows the azimuthal velocity profiles at the final time for the standard Gresho case; the one dimensional resolution is fixed and each panel refers to a different kernel. Within each panel, lines with different color codes are for different neighbor numbers; the standard run always refers to the lowest neighbor number indicated in the panel. In this case, for the B-splines, the values of $N_{nn}$ considered are below the pairing instability threshold.

It is clear from all of the histograms that the IA scheme outperforms the standard one for all of the kernels, with the latter scheme being much more noisy. There is a tendency for the standard scheme to improve as higher order kernels are considered, but the error in the velocity profile is always significant. These results are in agreement with previous findings (Dehnen & Aly, 2012; Read & Hayfield, 2012; Hu et al., 2014; Rosswog, 2015) and clearly demonstrate how, for the vortex test, inaccuracies in gradient estimates, i.e. the $E_0$ error, are the leading error sources. These errors are significantly reduced when using the IA scheme, demonstrating the effectiveness of the method.

To quantify the performance of the IA approach, for the same test case we show in Figure 3 the velocity error norm $L_1$ at the final time as a function of the 1D resolution $N_{1D}$, up to the maximum resolution considered. For any given value of $N_{1D}$ we considered different combinations of kernel and neighbor number $N_{nn}$; these are reported in the Figure. For the same combination of resolution, kernel shape and neighbor number, we performed a simulation according to the IA formulation and a corresponding one using standard SPH.

In the case of the B-splines we employed the highest neighbor number that can be used without incurring the pairing instability. For the Gresho-Chan vortex test the convergence rate has already been estimated for a variety of different SPH implementations, so that Figure 3 can be compared with the corresponding rates obtained by various authors (Dehnen & Aly, 2012; Read & Hayfield, 2012; Hu et al., 2014; Rosswog, 2015; Zhu et al., 2015).

From Figure 3 it can be seen that there is a clear dependency of the error norm $L_1$ on the resolution $N_{1D}$. In the standard case the convergence behavior is in line with those found previously, see for example Figure 10 of Dehnen & Aly (2012) or Figure 5 of Zhu et al. (2015). The dependency of $L_1$ on resolution for the B-spline kernels parallels that of the Wendland kernels, in close similarity to Figure 10 of Dehnen & Aly (2012), obtained with the same neighbor numbers. However, the $L_1$ norms obtained here are smaller: for the same test runs, the errors are up to a factor of three smaller than in the corresponding runs of Dehnen & Aly (2012).

When passing from the standard scheme to the IA scheme, there is a significant reduction in the amplitude of the $L_1$ norms. The decrease in $L_1$ is by a sizable factor, with some dependency on the resolution and the adopted kernels.

We now analyze the convergence rate of $L_1$ in the IA formulation. This depends on the adopted kernel, and the measured rate is close to that given by Springel (2010) when using the moving-mesh code Arepo.

To estimate the rate for the other kernels we adopt a conservative view and only include in the fit the points at the highest resolutions. The rate we then obtain is better than that reported by Hu et al. (2014) for their pressure-entropy SPH formulation, and is also in agreement with that shown by Rosswog (2015, Figure 10), who implemented an IA scheme using a Wendland kernel. Note, however, that the value of $L_1$ found there at comparable resolution is higher than that found here.

Higher convergence rates have been obtained by Read & Hayfield (2012, ), who employed a modified version of SPH with high-order dissipation switches, and by Zhu et al. (2015). The latter investigated the convergence behaviour of standard SPH, showing that consistency in numerical convergence is achieved when the conditions are satisfied. In their varying case (), for the Gresho-Chan vortex test the authors report , a much faster rate than that obtained by keeping constant. The value of in the case is of the same order () as that obtained here for the standard run with the same resolution.

Finally, with the exception of the lowest order kernels ( and ), a comparison of the norms with those produced using the moving-mesh code Arepo shows that the IA formulation gives results which are comparable to or better than those obtained with the mesh code (Springel, 2010, Figure 29), with ranging here from () down to ().
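Convergence rates of the kind quoted above can be extracted by a least-squares fit of log L1 against log N. A minimal sketch (the function name and the synthetic data are illustrative):

```python
import numpy as np

def convergence_rate(resolutions, l1_norms):
    """Least-squares exponent alpha of L1 ∝ N^-alpha, fit in log-log space."""
    slope, _ = np.polyfit(np.log(resolutions), np.log(l1_norms), 1)
    return -slope
```

Restricting the fit to the higher-resolution points, as done in the text, simply means passing only that subset of (N, L1) pairs.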

To further investigate the performance of the IA scheme, we ran a suite of vortex tests with progressively lower Mach numbers. These tests are particularly challenging since, at constant velocity, the lower the Mach number, the higher the sound speed. This in turn implies an increase in the viscous force, and errors in the momentum equation then become progressively more important.
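For reference, at fixed peak rotation velocity the uniform background pressure needed to reach a target Mach number follows from c_s = v_max/M. A small sketch, assuming an ideal gas with γ = 5/3 and unit density (both are illustrative assumptions, not values from the text):

```python
def background_pressure(mach, v_max=1.0, rho=1.0, gamma=5.0 / 3.0):
    """Uniform background pressure giving sound speed c_s = v_max / mach,
    via c_s^2 = gamma * P / rho (ideal-gas assumption)."""
    cs = v_max / mach
    return rho * cs * cs / gamma
```

The quadratic growth of P with 1/M is what makes the very subsonic runs stiff: halving the Mach number quadruples the background pressure and the associated sound speed enters the viscous terms directly.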

To aid comparison with previous works, as in Hu et al. (2014) we considered the following Mach numbers: and . We ran the simulations using as the 1D resolution. This is a factor of two lower than that used by Hu et al. (2014) in their tests; however, the results are not significantly affected by this choice. For the kernel () we show the azimuthal velocity profiles of the four test cases in Figure 4, so that the figure can be compared with the corresponding histograms of Figure 2 of Hu et al. (2014).

The profiles of the standard runs largely reproduce those of Hu et al. (2014); however, a striking feature of the IA scheme which emerges from the histograms of Figure 4 is the close proximity of the azimuthal velocity profiles to the analytical solution. This occurs even when very low Mach numbers are considered, as can be seen from the case. This shows the effectiveness of the IA method for eliminating sampling errors in SPH when subsonic flows are present.

Moreover, these findings are in agreement with previous results (Read & Hayfield, 2012; Hu et al., 2014) and demonstrate that in SPH simulations of the Gresho-Chan test, errors in force accuracy dominate over viscous effects.

For the tests considered we have so far shown the mean binned velocities. In order to assess the amount of noise present in the various runs, it is useful to plot the azimuthal particle velocities directly. To this end we ran two tests with 1D resolution and Mach numbers and , respectively. In the latter case we set a background pressure of . For each run performed using the time-dependent AV scheme with settings AV (see Section 2.2), we also consider a parent simulation in which the AV switch of Cullen & Dehnen (2010) has been implemented.

The results are shown in Figure 5 at , where for the two test cases we plot the azimuthal velocities for a subset of all particles. The velocity distributions can be compared directly with those of the corresponding runs in Figures 4 & 5 of Hopkins (2015). An important feature which emerges from the plots of Figure 5 is that both AV methods show velocity distributions which are evenly scattered around the analytic solution, with the AV switch of Cullen & Dehnen (2010) exhibiting a much smaller amount of noise. It is worth noting that the IA method, even for very low Mach numbers, can accurately follow the analytic solution at the peak vortex velocity. This behaviour is much better than that seen for the same test in the top panel of Figure 5 of Hopkins (2015).

Finally, it must be pointed out that in these simulations the amount of thermal diffusion due to the AC term is negligible, and we do not include such a term in the SPH equations. This occurs because of the high sound speeds in the low Mach number regime, so that the time evolution (19) of the parameter is driven by the decay rate , which dominates over the source term . To better quantify this issue we use Equation (14) to estimate at time the change in thermal energy due to the AC term: .

The Laplacian of has a maximum at , where the azimuthal velocity reaches its peak value. The diffusion coefficient is then given by , so that

(35) |

where we have approximated with the equilibrium solution .

For at , and can be easily computed because so that

(36) |

For and this term is always much smaller than , regardless of the chosen kernel.

### 4.2 The Kelvin-Helmholtz instability

The KH instability has been investigated by many authors since it is a classic test in which SPH fails to properly model the development of the instability (Price, 2008; Read et al., 2010; Valcke et al., 2010; Junk et al., 2010; Heß & Springel, 2010; Cha et al., 2010; Murante et al., 2011; Valdarnini, 2012; McNally et al., 2012; Kawata et al., 2014; Hu et al., 2014; Hopkins, 2015).

The test consists of two fluid layers of different densities sliding past each other with opposite shearing velocities, and a small velocity perturbation is imposed in the direction perpendicular to the contact surface. A fluid instability develops, which is initially small and then grows progressively until non-linearity is reached with the appearance of KH rolls. For a sinusoidal perturbation of wavelength , a linear time scale can be defined as

\tau_{\rm KH} = \frac{(\rho_1 + \rho_2)\,\lambda}{\sqrt{\rho_1 \rho_2}\; v} \qquad (37)

where \rho_1 and \rho_2 are the two fluid densities, \chi is their density ratio, and v is the relative shear velocity.
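A small helper for this time scale, assuming the standard linear-growth expression \tau_{\rm KH} = (\rho_1+\rho_2)\lambda / (\sqrt{\rho_1\rho_2}\, v) (the function signature is illustrative):

```python
import numpy as np

def tau_kh(lam, rho1, rho2, v):
    """Linear KH growth time scale for a perturbation of wavelength lam
    at an interface between densities rho1, rho2 with shear velocity v."""
    return lam * (rho1 + rho2) / (np.sqrt(rho1 * rho2) * v)
```

For equal densities this reduces to 2 lam / v, and for a fixed density contrast the time scale simply scales inversely with the shear velocity, which is why lowering the Mach number lengthens the growth time.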

To perform the test, the following conditions

(38) |

are applied for a fluid with adiabatic index in a two-dimensional periodic domain with Cartesian coordinates , . We set here , and for the density contrast.

The two layers are in pressure equilibrium with , so that the sound velocities in the two layers are and , respectively. The Mach number of the high-density layer is and the KH time scale is . We ran KH simulations with three different Mach numbers: and . For the latter value the initial condition set-up was similar to that of Hopkins (2015, Section 4.4.1).

The KH instability is triggered by adding in the proximity of the layer boundaries a small single-mode velocity perturbation along the direction

(39) |

where , and if , where takes the values and , respectively. Note that for the amplitude of the initial velocity perturbation we set here, unlike in previous runs (V12), a constant amplitude relative to the streaming velocity. This was done in order to consistently compare, between runs with different Mach numbers, the impact of zeroth-order errors on .
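A sketch of how such single-mode seeding might be applied in practice; the top-hat windowing width and the interface positions below are illustrative assumptions, not the values used in the runs:

```python
import numpy as np

def seed_perturbation(pos, vy, amplitude, lam, y_interfaces, sigma=0.025):
    """Add dv_y = amplitude * sin(2*pi*x/lam) to particles lying within
    sigma of a layer interface (top-hat window; a simplifying assumption)."""
    vy = vy.copy()
    for y0 in y_interfaces:
        mask = np.abs(pos[:, 1] - y0) < sigma
        vy[mask] += amplitude * np.sin(2.0 * np.pi * pos[mask, 0] / lam)
    return vy
```

Scaling `amplitude` with the streaming velocity, as done in the text, keeps the relative size of the seed fixed across runs with different Mach numbers.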

We performed the initial condition set-up by arranging equal mass particles inside the simulation box, setting the particle coordinates according to an HCP configuration. The lattice spacing was smoothly adjusted at the fluid interfaces so as to avoid density discontinuities; the details of the whole procedure are given in V12. For each test case the IA simulations were then performed using the , and the kernels with neighbor number , and , respectively. For the runs we also considered standard simulations. Finally, all of the simulations were performed with the AC term of Section 2.3 switched on.

For the specified range of Mach numbers, we first show in Figure 6 density plots of the KH simulations at . We show maps extracted from the runs, and contrast the IA scheme against the standard SPH scheme. For , both of the methods are able to produce KH rolls. At lower Mach numbers () the standard method completely fails the KH test, whereas the IA scheme shows a degraded capability to resolve KH rolls. At the rolls are absent and the differences between the two schemes are no longer present.

The relative performances of the two methods can be quantitatively assessed by measuring the error (Read et al., 2010; Valcke et al., 2010, V12) for the various runs. Here we first show the growth rate of the KH instability (Junk et al., 2010; Heß & Springel, 2010, V12), which allows one to recognize in a more visual way the differences between the KH results produced by the two schemes. The growth rate is measured by Fourier transforming, at different times, the growing mode of the velocity perturbation (Junk et al., 2010, V12).
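The mode amplitude entering such a growth-rate measurement can be obtained by projecting the particle v_y onto sine and cosine modes of the perturbation wavelength. A minimal unweighted sketch (the exact weighting used in the cited works may differ):

```python
import numpy as np

def mode_amplitude(x, vy, lam, weights=None):
    """Amplitude of the Fourier mode of wavelength lam in the v_y field,
    estimated as a weighted projection onto sin and cos."""
    if weights is None:
        weights = np.ones_like(x)
    k = 2.0 * np.pi / lam
    s = np.sum(weights * vy * np.sin(k * x)) / np.sum(weights)
    c = np.sum(weights * vy * np.cos(k * x)) / np.sum(weights)
    return 2.0 * np.hypot(s, c)
```

Evaluating this amplitude at successive outputs and fitting its logarithm against time yields the measured growth rate, which can then be compared with the inverse linear time scale.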

The growth rates are shown in Figure 7, and their relative differences confirm the visual impressions derived from the maps of Figure 6. For there are no significant differences between the two methods and both are able to follow the growth of the KH instability (Figure 6, right panel), the rates of the different runs being in accord with the analytic expectation. The results can also be compared with the corresponding rates in Figure 18 of Hopkins (2015), taking into account the different time scales due to the different number of modes used to seed the perturbation. Note that, unlike in Hopkins (2015), the standard version here correctly follows the development of the KH instability. We interpret this difference as being due to the small neighbor number ( in 3D) adopted in his standard (PSPH) run.

As lower Mach numbers are considered, from the other panels of Figure 7 one can see a growing difficulty of the IA scheme in following the KH instability, regardless of the kernel employed in the simulation. This happens because by reducing the Mach number the shear velocity is also reduced and in turn, owing to the chosen settings, the initial velocity amplitude is reduced as well. At a fixed resolution the impact of gradient errors, and the subsequent particle disorder, on the growth of the KH instability is then higher as the Mach number decreases.

The error of particle is defined by (Read et al., 2010)

(40) |

and we show in Figure 8 the mean binned distribution of the particle errors versus . The plots refer to the runs of Figure 6. A key feature is the magnitude of the errors, which in the proximity of the interfaces for the runs are smaller by a factor than the standard ones. This is in line with expectations and in accord with what is seen in Figure 7, with the growth rates of the IA runs exhibiting a better behavior at low . However, one can see from Figure 7 that for the KH instability is not correctly reproduced even in the IA scheme. In such a case gradient errors can be reduced by increasing the simulation resolution.

We do not undertake here a resolution study aimed at assessing the convergence rate to the KH solution in the very low () subsonic regime. We use instead a simple argument to provide a rough estimate of the minimum number of particles which would be necessary to simulate the KH test case.

An error norm for the KH problem has been introduced by Robertson et al. (2010) and, in analogy with their Equation 11, we conjecture here a generic dependence of the form

(41) |

on the particle number , and the simulation time . We have dropped the dependency on the bulk flow velocity, present in their Equation 11, and for the power-law dependencies we generically assume the exponents and .

The simulations of Robertson et al. (2010) were performed using the Eulerian mesh code ART (Kravtsov et al., 1997); note however that their initial condition setup corresponds here to the KH test case.

The ratio between the error norms of two different KH runs is then

(42) |

where we set in order to consistently compare the norms and . We now take as the reference run the test case, for which, from Figure 7, can be considered an adequate resolution up to . Therefore for the norm ratio is

(43) |

For the dependency on simulation resolution, Robertson et al. (2010) report for their Eulerian code. Here the numerical convergence is likely to be shallower, with . However, the results of Section 4.1 indicate for the vortex test a convergence rate of the IA scheme very close to that seen using moving-mesh schemes (Springel, 2010). We therefore assume here as a reasonable slope for resolution convergence, thus putting a conservative lower limit on .

In Robertson et al. (2010), the time evolution of the error norm has a weak dependency on numerical resolution: . This slope clearly depends on the adopted numerical scheme and we simplify this dependency by assuming . The impact of this assumption on estimating is however relatively unimportant, the ratio between the two time factors being in any case of order unity and closer to one as . In fact, we further simplify the ratio (43) by just removing the time factors.

Finally, an accuracy criterion for the simulations is set by putting an upper limit on the error norm , with being a given threshold. We now assume for a generic dependence on the initial velocity amplitude of the form , with . This lower limit on is justified by the requirement that lower values of must correspond to lower values of . Then, for the ratio (43), we have

= \left(\frac{512}{N_2}\right)^{2} \left(\frac{\delta v_y^{(0)}(2)}{\delta v_y^{(0)}(1)}\right)^{\beta}

Setting we thus obtain the lower limit . Note, however, that to achieve numerical convergence in a consistent way in SPH, the number of neighbors must also increase when and (Zhu et al., 2015). For their Gresho-Chan vortex test Zhu et al. (2015) adopt, when is allowed to vary, . In such a case, referring to the run with neighbors, we conclude that neighbors and particles are the minimum necessary in order to simulate the KH test case using the Wendland kernel.

Finally, it is worth noting that the difficulties of SPH in following the formation of KH instabilities depend not only on velocity noise, but also on the local mixing instability (LMI: Agertz et al., 2007; Price, 2008; Read et al., 2010; Valcke et al., 2010). The LMI occurs because, in the presence of a density step, the entropy conservation of SPH causes a pressure blip at the boundary. These pressure discontinuities in turn lead to the presence of shock waves, which then inhibit the growth of KH instabilities (Valcke et al., 2010).

Different approaches have been taken to eliminate or reduce the LMI: by introducing initial conditions with a smoothing of the density step (Valcke et al., 2010), and/or adding an AC term to give smooth entropies (Price, 2008), or by reformulating the SPH density estimate (Ritchie & Thomas, 2001).

Based on a suite of numerical tests, Valcke et al. (2010) argued that for low Mach numbers () the growth of KH instabilities is still suppressed by the LMI. Although the magnitude of the shocks induced by the LMI is greatly reduced by the initial density smoothing, Valcke et al. (2010) found that for low M the KH time scales are much longer than those set by the numerical shocks.

It is not trivial to remove these residual shocks from the simulations. For instance, they can be eliminated by applying a relaxation scheme to the initial conditions, but the growing KH instabilities are then strongly suppressed by the induced particle disorder (Valcke et al., 2010). A study of these effects is beyond the scope of this paper.

As a final point, it must be stressed that the results of the KH runs presented here have been obtained using the AV scheme of Section 2.2 with settings AV. By replacing this scheme with the AV switch of Cullen & Dehnen (2010), we expect a significant reduction in the amount of AV present in the simulations (see the results of the previous and next Sections). This in turn will result in a more inviscid behavior and a better capability of the code to follow the development of the KH instabilities.

### 4.3 Subsonic turbulence

Studies of driven isothermal subsonic turbulence (Bauer & Springel, 2012) have shown substantial differences in the properties of the velocity power spectra extracted from mesh-based simulations, when compared with those produced from the same test runs using the standard formulation of SPH.

Although the use in standard SPH of a time-dependent AV scheme alleviates the problem (Price, 2012b), the discrepancies are still present, and their origin has been identified as being due to large errors in the SPH gradient estimates (Bauer & Springel, 2012). These errors in turn imply the presence of subsonic velocity noise, which increases as lower Mach numbers are considered. As a result, SPH simulations exhibit spectra with a much smaller inertial range (i.e. Kolmogorov-like) than the ones measured using mesh codes.

A faithful numerical modeling of subsonic turbulence is particularly relevant in various astrophysical contexts (star formation, intracluster medium, intergalactic medium), and it is therefore important to investigate the capability of the IA scheme to properly simulate this test problem.

To this end we set up an HCP lattice of particles with initially zero velocities inside a periodic box of sidelength and density . The gas was isothermal with and . Turbulence in the gas was driven by adding to the momentum Equation (3) of the particles an external stochastic driving force . This was constructed in k-space according to a procedure already used by previous authors (Price & Federrath, 2010; Bauer & Springel, 2012; Price, 2012b; Hopkins, 2013, 2015; Zhu et al., 2015).

The power spectrum of varies as and the Fourier modes are non-zero in the range between and . The phases of the stirring field are drawn from an Ornstein-Uhlenbeck (OU) process, for which the random sequence at the step is given by (Eswaran & Pope, 1988; Bartosch, 2001)

x_{n+1} = f\, x_n + \sigma \sqrt{1 - f^2}\; z_n \qquad (44)

where f is a decay factor, z_n is a Gaussian random variable with unit variance, and \sigma^2 is the variance of the OU process. The constructed sequence then has \langle x_n \rangle = 0 and \langle x_n^2 \rangle = \sigma^2.
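Such a recursion can be sketched as follows, assuming the common form x_{n+1} = f x_n + σ√(1−f²) z_n in which the stationary variance is σ² by construction (the function is illustrative):

```python
import numpy as np

def ou_sequence(n_steps, f, sigma, rng):
    """x_{n+1} = f*x_n + sigma*sqrt(1-f^2)*z_n with z_n ~ N(0,1);
    the stationary sequence has zero mean and variance sigma^2."""
    x = np.empty(n_steps)
    x[0] = sigma * rng.standard_normal()  # draw x_0 from the stationary state
    fac = sigma * np.sqrt(1.0 - f * f)
    for n in range(n_steps - 1):
        x[n + 1] = f * x[n] + fac * rng.standard_normal()
    return x
```

The decay factor f controls the correlation time of the driving: values close to 1 give a slowly varying stirring field, while f = 0 reduces the sequence to uncorrelated Gaussian noise.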

In order to obtain a purely solenoidal driving, we apply a Helmholtz decomposition in k-space:

\hat a_i(\vec k) = \left(\delta_{ij} - \frac{k_i k_j}{|\vec k|^2}\right) \hat b_j(\vec k) \qquad (45)

where the vector \hat b(\vec k) is a complex vector-valued stochastic process characterized at any given \vec k by 6 OU random sequences (44), and \hat a(\vec k) is the solenoidal stirring field (\vec k \cdot \hat a = 0).
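The solenoidal part can be extracted with the standard transverse projector P_ij = δ_ij − k_i k_j/|k|², applied mode by mode; a minimal sketch with illustrative array shapes:

```python
import numpy as np

def solenoidal_project(bk, kvec):
    """Apply P_ij = delta_ij - k_i k_j / |k|^2 to Fourier modes bk
    (shape (..., dim)), so that the result satisfies k . a = 0."""
    k2 = np.sum(kvec ** 2, axis=-1, keepdims=True)
    k2 = np.where(k2 > 0.0, k2, 1.0)  # k = 0 mode passes through unchanged
    kb = np.sum(kvec * bk, axis=-1, keepdims=True)
    return bk - kvec * kb / k2
```

By construction the projected modes are orthogonal to their wavevector, so the driving field injects no compressive (divergent) component.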

The particle accelerations are calculated at each timestep by updating the stochastic field according to the described procedure, the summation in space being performed by summing directly at the particle positions. For the driving parameters we use the values of Bauer & Springel (2012, Table 1). The power spectrum is normalized so that the rms Mach number lies in the range after the simulations have reached the steady-state regime ().

We compared results extracted from subsonic simulations performed with the standard and IA implementations of SPH. We ran simulations with three different resolutions: and . For a given resolution we used in both schemes the same initial condition set-up and stirring force field. All of the simulations were run up to and adopted the kernel with neighbors. We performed standard and IA runs using the time-dependent AV scheme of Section 2.2 with settings AV. Additionally, we also ran a set of IA simulations using the AV method of Cullen & Dehnen (2010). In the following, we will refer to these IA runs with the term IA-CD, whilst we will use the term IA-AV when referring to the IA runs with AV settings AV.

As in other works (Bauer & Springel, 2012; Hopkins, 2015) we measure the spectral properties of the turbulent velocity field to assess the performances of the two codes. The velocity power spectrum is defined as

(46) |

where , and is the ensemble average velocity power spectrum. This is given by

(47) |

where is the Fourier transform of the velocity field :

(48) |

In the case of incompressible turbulence, the energy spectrum follows the Kolmogorov scaling . To measure the energy spectrum we first set inside the simulation box a cube with grid points. From the particle velocities we then estimate the grid velocity field at the grid points , using a triangular-shaped cloud (TSC) interpolation scheme.

We then compute the discrete Fourier transforms of , and the discrete power spectrum is evaluated by binning the quantity in spherical shells of radius and averaging in the bins. The energy density of Equation (46) is then given, aside from a normalization factor, by . Finally, for a given simulation, the spectrum is estimated by time-averaging between and , with the spectrum being sampled at each time interval .
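The shell-binning step can be sketched as follows, assuming the velocity field has already been interpolated onto the grid (the normalization convention here is an assumption):

```python
import numpy as np

def shell_spectrum(vgrid):
    """Shell-averaged velocity power spectrum of a gridded field.

    vgrid: array of shape (3, n, n, n). Returns (k, P(k)), where P(k)
    averages sum_i |FFT(v_i)|^2 / n^6 over integer-|k| shells."""
    n = vgrid.shape[-1]
    vk2 = sum(np.abs(np.fft.fftn(v)) ** 2 for v in vgrid) / float(n) ** 6
    k1d = np.fft.fftfreq(n) * n          # integer wavenumbers 0..n/2, -n/2..-1
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2)
    shells = np.rint(kmag).astype(int).ravel()
    counts = np.bincount(shells)
    power = np.bincount(shells, weights=vk2.ravel())
    valid = counts > 0
    return np.arange(len(counts))[valid], power[valid] / counts[valid]
```

Multiplying the shell average by the shell mode count (or by 4πk²) then gives the energy density per wavenumber, up to the normalization factor mentioned in the text.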

We show in Figure 9 the spectra as measured from our simulations. The spectral behavior of the standard runs is in broad agreement with previous findings (Price, 2012b; Bauer & Springel, 2012; Hopkins, 2013, 2015; Zhu et al., 2015). The spectra are characterized by a very narrow inertial range at low wavenumbers, with a significant decline at higher . The spectra reach a minimum at a wavenumber , which increases as higher resolutions are considered, followed by a steep increase in the power at smaller scales .

The precise value of depends on , but it is still much smaller than the noise scale set by the kernel. For incompressible turbulence one can easily approximate with the value of given by the average density, thus obtaining

According to Price (2012b) the very limited capability of standard SPH to develop a Kolmogorov-like spectrum is due to the excess of numerical viscosity present in the scheme, which can be reduced by adopting a time-dependent AV switch.[1]

[1] We recall that with the term standard SPH we refer here to the usual SPH scheme of Section 2.1, but incorporating the time-dependent AV switch described in Section 2.2.
In contrast, Bauer & Springel (2012, Figure 6) showed that the rise in power at small scales is mainly a result of the subsonic velocity noise due to kernel gradient errors present in standard SPH. For the same set of initial conditions and forcing sequence, their spectra extracted from runs performed using the moving-mesh code Arepo exhibit an inertial range which extends over more than a decade in . Similar results were later obtained by Hopkins (2015, Figure 27), using for the same test problem a completely different code (Gizmo).

A similar behavior is found here for the spectra of the IA-CD runs depicted in Figure 9, which show a dramatic improvement over the corresponding standard SPH runs. The spectra exhibit now a much larger inertial range, which increases with resolution and for the simulation it exte