# Hydrodynamic capabilities of an SPH code incorporating an artificial conductivity term with a gravity-based signal velocity

###### Key Words.:

Hydrodynamics – Methods: simulations – SPH – artificial conductivity – Turbulence

This paper investigates the hydrodynamic performance of an SPH code incorporating an artificial heat conductivity term in which the adopted signal velocity is applicable when gravity is present. To this end, we analyze results from simulations produced using a suite of standard hydrodynamical test problems. In accordance with previous findings, it is shown that the performance of SPH in describing the development of Kelvin-Helmholtz instabilities depends strongly on the consistency of the initial condition set-up and on the leading error in the momentum equation due to incomplete kernel sampling. On the contrary, the presence of artificial conductivity does not significantly affect the results.

An error and stability analysis shows that the quartic spline kernel ($M_5$) possesses very good stability properties, and we propose its use with a large neighbor number, in both 2D and 3D, to improve convergence in simulation results without being affected by the so-called clumping instability. Moreover, the results of the Sod shock tube test demonstrate that, in order to obtain simulation profiles in accord with the analytic solution, simulations employing kernels with a non-zero first derivative at the origin require a much larger number of neighbors than the other runs.

SPH simulations of the blob test show that the presence of an artificial conductivity term is necessary in order to achieve blob disruption. However, it is found that in the regime of strong supersonic flows an appropriate limiting condition, which depends on the Prandtl number, must be imposed on the artificial conductivity SPH coefficients in order to avoid an unphysical amount of heat diffusion.

Results from hydrodynamic simulations that include self-gravity show profiles of hydrodynamic variables that are in much better agreement with those produced using mesh-based codes. In particular, the final levels of core entropies in cosmological simulations of galaxy clusters are consistent with those found using AMR codes. This demonstrates that the proposed diffusion scheme is capable of mimicking the process of entropy mixing which is produced during structure formation by diffusion due to turbulence.

Finally, results of the Rayleigh-Taylor instability test demonstrate that in the regime of very subsonic flows the code still has several difficulties in the treatment of hydrodynamic instabilities. These problems are intrinsically due to the way in which gradients are calculated in standard SPH, and not to the implementation of the artificial conductivity term. To overcome these difficulties several numerical schemes have been proposed which, if coupled with the SPH implementation presented in this paper, could solve the issues that have recently been raised in investigations of the ability of SPH to model subsonic turbulence.

## 1 Introduction

Smoothed particle hydrodynamics (SPH) is a Lagrangian, mesh-free, particle method which is used to model fluid hydrodynamics in numerical simulations. The technique was originally developed in an astrophysical context (Lucy 1977; Gingold & Monaghan 1977), but since then it has been widely applied in many other areas (Monaghan 2005) of computational fluid dynamics.

The method has several properties which make its use particularly advantageous in astrophysical problems (Hernquist & Katz 1989; Rosswog 2009; Springel 2010). Because of its Lagrangian nature the development of large matter concentrations in collapse problems is followed naturally; moreover, the method is free of advection errors and is Galilean invariant. Finally, the method naturally incorporates self-gravity and possesses very good conservation properties, its fluid equations being derived from variational principles.

The other computational fluid dynamical method commonly employed in numerical astrophysics is the standard Eulerian grid-based approach, in which the fluid is evolved on a discretized mesh (Stone & Norman 1992; Ryu et al. 1993; Norman & Bryan 1999b; Fryxell et al. 2000; Teyssier 2002). The spatial resolution of an Eulerian scheme based on a fixed Cartesian grid is, however, often insufficient to adequately resolve the large dynamic range frequently encountered in many astrophysical problems, such as galaxy formation. This has motivated the development of adaptive mesh refinement (AMR) methods, in which the spatial resolution of the grid is locally refined according to some selection criterion (Berger & Colella 1989; Kravtsov, Klypin & Khokhlov 1997; Norman 2005). Additionally, the order of the numerical scheme has been improved by adopting the piecewise parabolic method (PPM) of Colella & Woodward (1984), such as in the AMR Eulerian codes ENZO (Norman & Bryan 1999b) and FLASH (Fryxell et al. 2000).

Application of these different types of hydrodynamical codes to the same test problem with identical initial conditions should in principle lead to similar results. However, there has been growing evidence over recent years that in a variety of hydrodynamical test cases there are significant differences between the results produced by the two types of methods (O’Shea et al. 2005a; Agertz et al. 2007; Wadsley, Veeravalli & Couchman 2008; Tasker et al. 2008; Mitchell et al. 2009; Read, Hayfield & Agertz 2010; Valcke et al. 2010; Junk et al. 2010).

For instance, Agertz et al. (2007) showed that in the standard SPH formulations the growth of Kelvin-Helmholtz (KH) instabilities in shear flows is artificially suppressed when steep density gradients are present at the contact discontinuities. Moreover, the level of central entropy produced in non-radiative binary-merger simulations of galaxy clusters is significantly higher in Eulerian simulations than in those made using SPH codes (Mitchell et al. 2009). The origin of these discrepancies has been recognized (Agertz et al. 2007; Read, Hayfield & Agertz 2010) as partly due to the intrinsic difficulty for SPH to properly model density gradients across fluid interfaces, which in turn implies the presence of a surface tension effect which inhibits the growth of KH instabilities. A second problem is due to the Lagrangian nature of SPH codes, which prevents mixing of fluid elements at the particle level and leads to entropy generation (Wadsley, Veeravalli & Couchman 2008; Mitchell et al. 2009). In particular, Mitchell et al. (2009) argue that the main explanation for the different levels of central entropy found in cluster simulations is the different degree of entropy mixing present in the two codes. Unlike SPH, in Eulerian codes fluid mixing occurs by definition at the cell level, and a certain degree of overmixing is certainly present in those simulations made using mesh-based codes (Springel 2010b).

Given the advantages of SPH codes highlighted previously, it appears worth pursuing any improvement in the SPH method capable of overcoming its present limitations. Along this line of investigation many efforts have been made by a number of authors (Abell 2011; Price 2008; Wadsley, Veeravalli & Couchman 2008; Valcke et al. 2010; Read, Hayfield & Agertz 2010; Heß & Springel 2010; Cha, Inutsuka & Nayakshin 2010; Murante et al. 2011).

Price (2008) introduces an artificial heat conduction term in the SPH equations with the purpose of smoothing thermal energy at fluid interfaces. This artificial conductivity (AC) term in turn gives a smooth entropy transition at contact discontinuities with the effect of enforcing pressure continuity and removing the artificial surface tension effect which inhibits the growth of KH instabilities at fluid interfaces. Similarly, Wadsley, Veeravalli & Couchman (2008) suggest that in SPH the lack of mixing at the particle level can be alleviated by adding a heat diffusion term to the equations so as to mimic the effects of subgrid turbulence, thereby improving the amount of mixing.

Read, Hayfield & Agertz (2010) present an SPH implementation in which a modified density estimate is adopted (Ritchie & Thomas 1991), together with the use of a peaked kernel and a much larger number of neighbors. The authors showed that the new scheme is capable of following the development of fluid instabilities in a better way than in standard SPH. Abell (2011) presents an alternative derivation of the SPH force equation which avoids the problem encountered by standard SPH in handling fluid instabilities, although the approach is not inherently energy or momentum conserving and is prone to large integration errors when the simulation resolution is low.

The method proposed by Inutsuka (2002) reformulates the SPH equations by introducing a kernel convolution so as to consistently calculate density and hydrodynamic forces. The latter are determined using a Riemann solver (Godunov SPH). The method has recently been revisited by Cha, Inutsuka & Nayakshin (2010) and Murante et al. (2011), who showed that the code correctly follows the development of fluid instabilities in a variety of hydrodynamic tests.

A deeper modification than those presented so far has been introduced by Heß & Springel (2010), who replaced the traditional SPH kernel approach with a new density estimate based on the Voronoi tessellation. The authors showed that the method is free of surface tension effects and therefore the growth rate of fluid instabilities is not adversely affected as in standard SPH.

Finally, a radically new numerical scheme has been introduced by Springel (2010b), with the purpose of retaining the advantages of both SPH and mesh-based codes. In the new code the hydrodynamic equations are solved on a moving unstructured mesh using a Godunov method with an exact Riemann solver. The mesh is defined by the Voronoi tessellation of a set of discrete points and is allowed to move freely with the fluid. The method is therefore adaptive in nature and Galilean invariant but, at the same time, the accuracy with which shocks and contact discontinuities are described is that of an Eulerian code. In a recent paper Bauer & Springel (2012) argue that the standard formulation of SPH fails to accurately resolve the development of turbulence in the subsonic regime (but see Price 2012b for a different viewpoint). The authors draw their conclusions by analyzing results from simulations of driven subsonic turbulence made using the new moving-mesh code, named AREPO, and a standard SPH code.

Similar conclusions were reached in a set of companion papers (Sijacki et al. 2011; Vogelsberger et al. 2011), in which the new code was used in galaxy formation studies to demonstrate its superiority over standard SPH. However, the code is characterized by considerable complexity, which makes the use of the SPH scheme still appealing; more generally, it is desirable that simulation results produced with a specific code be reproduced with a completely independent numerical scheme when complex non-linear phenomena are involved. It appears worthwhile, therefore, to investigate, along the lines of previous authors, the possibility of constructing a numerical scheme based on the traditional SPH formulation which is capable of correctly describing the development of fluid instabilities and at the same time incorporates the effects of self-gravity when present.

This is the aim of the present study, in which the SPH scheme is modified by incorporating into the equations an AC diffusion term as described by Price (2008). However, in Price (2008) the strength of the AC is governed by a signal velocity which is based on pressure discontinuities. For simulations where gravity is present this approach is not applicable because hydrostatic equilibrium requires pressure gradients. An appropriate signal velocity for conductivity when gravity is present is then used to construct an AC-SPH code with the purpose of treating the growth of fluid instabilities self-consistently. The viability of the approach is tested using a suite of test problems in which results obtained using the new code are contrasted with the corresponding ones produced in previous work using different schemes. The code is very similar in form to that presented by Wadsley, Veeravalli & Couchman (2008), but here the energy diffusion equation is implemented in a different manner.

The paper is organized as follows. Sect. 2 presents the hydrodynamical method and introduces the AC approach. In Sect. 3 we investigate the effectiveness of the method by presenting results from a suite of purely hydrodynamical test problems. These are: the two-dimensional Kelvin-Helmholtz instability, the Sod shock tube, the point explosion or Sedov blast wave test, and the blob test. In Sect. 4 we then discuss results from hydrodynamic tests that include self-gravity. Specifically, we consider the cold gas sphere or Evrard collapse test, the Rayleigh-Taylor instability, and the cosmological integration of galaxy clusters. Finally, the main results are summarized in Sect. 5.

## 2 The Hydrodynamic method

Here we present the basic features of the method, for recent reviews see Rosswog (2009), Springel (2010) and Price (2012).

### 2.1 Basic SPH equations

In the SPH method the fluid is described by a set of particles with mass $m_i$, velocity $\vec{v}_i$, density $\rho_i$, and a thermodynamic variable such as the specific thermal energy $u_i$ or the entropic function $A_i = P_i/\rho_i^{\gamma}$. The particle pressure is then defined as $P_i = (\gamma - 1)\rho_i u_i$, where $\gamma = 5/3$ for a monoatomic gas. The density estimate at the particle position $\vec{r}_i$ is given by

$$\rho_i = \sum_j m_j\, W(|\vec{r}_i - \vec{r}_j|, h_i) \tag{1}$$

where $W(|\vec{r}_i - \vec{r}_j|, h_i)$ is the interpolating kernel, which has compact support and is zero for $|\vec{r}_i - \vec{r}_j| \geq 2 h_i$ (Price 2012). The kernel is normalized according to the condition $\int W(|\vec{r}|, h)\, d^D r = 1$. The sum in Eq. (1) is over a finite number of particles, and the smoothing length $h_i$ is a variable that is implicitly defined by the equation

$$ N_{sph} = \frac{\pi^{D/2}}{\Gamma(D/2+1)}\, (2 h_i)^D\, n_i \tag{2}$$

where $D$ is the number of spatial dimensions, $n_i = \rho_i/m_i$ is the local particle number density, and $N_{sph}$ is the number of neighboring particles within a radius $2 h_i$. This equation can be rewritten by defining the smoothing length in units of the mean interparticle separation $(m_i/\rho_i)^{1/D}$

$$ \zeta = h_i\, (\rho_i / m_i)^{1/D} \tag{3}$$

so that $h_i = \zeta\, (m_i/\rho_i)^{1/D}$ and $N_{sph} = \frac{\pi^{D/2}}{\Gamma(D/2+1)} (2\zeta)^D$. The smoothing length is determined by solving the non-linear equation (2); note that $N_{sph}$ does not necessarily need to be an integer but can take arbitrary values if $\zeta$ is used as the fundamental parameter determining $h_i$ (Price 2012). A kernel commonly employed is the $M_4$, or cubic spline, which is zero for $|\vec{r}_i - \vec{r}_j| \geq 2h$. In three dimensions the typical choices for $N_{sph}$ correspond, for this kernel, to values of $\zeta$ of order unity. The equation of motion for the SPH particles can be derived from the Lagrangian of the system (Springel 2010; Price 2012) and is given by

$$\frac{d\vec{v}_i}{dt} = -\sum_j m_j \left[ f_i \frac{P_i}{\rho_i^2}\, \nabla_i W_{ij}(h_i) + f_j \frac{P_j}{\rho_j^2}\, \nabla_i W_{ij}(h_j) \right] \tag{4}$$

where the coefficients $f_i$ are defined as

$$ f_i = \left[ 1 + \frac{h_i}{D \rho_i} \frac{\partial \rho_i}{\partial h_i} \right]^{-1} \tag{5}$$
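To make the density estimate concrete, the sketch below evaluates Eq. (1) with a cubic spline ($M_4$) kernel in 3D. This is an illustrative Python sketch under our own assumptions (brute-force neighbor search, a single global smoothing length set to 1.2 lattice spacings), not the code used in the paper:

```python
import numpy as np

def w_cubic_spline(r, h):
    """M4 cubic spline kernel in 3D; compact support, zero for r >= 2h."""
    q = np.asarray(r, dtype=float) / h
    sigma = 1.0 / (np.pi * h**3)            # 3D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(pos, mass, h):
    """Eq. (1): rho_i = sum_j m_j W(|r_i - r_j|, h); brute force, O(N^2)."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return (mass[None, :] * w_cubic_spline(d, h)).sum(axis=1)
```

On a uniform lattice this estimate recovers the imposed density to within a few percent for interior particles, which is the kind of accuracy the tolerance criteria used in the initial-condition set-ups rely on.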

### 2.2 Artificial viscosity

In SPH, the momentum equation must be generalized by adding an appropriate viscous force, which is aimed at correctly treating the effects of shocks. An artificial viscosity (AV) term is then introduced with the purpose of dissipating local velocities and preventing particle interpenetration at the shock locations. The new term is given by

$$\left(\frac{d\vec{v}_i}{dt}\right)_{\rm visc} = -\sum_j m_j\, \Pi_{ij}\, \nabla_i \bar{W}_{ij} \tag{6}$$

where the term $\bar{W}_{ij} = [W_{ij}(h_i) + W_{ij}(h_j)]/2$ is the symmetrized kernel and $\Pi_{ij}$ is the AV tensor. In the SPH entropy formulation (Springel & Hernquist 2002), it is the entropy function $A_i$ per particle which is integrated, and its time derivative is calculated as follows

$$\frac{dA_i}{dt} = \frac{1}{2} \frac{\gamma - 1}{\rho_i^{\gamma - 1}} \sum_j m_j\, \Pi_{ij}\, \vec{v}_{ij} \cdot \nabla_i \bar{W}_{ij} \tag{7}$$

where $\vec{v}_{ij} = \vec{v}_i - \vec{v}_j$. For the AV tensor we adopt here the form proposed by Monaghan (1997) in analogy with Riemann solvers:

$$\Pi_{ij} = -\frac{\alpha}{2} \frac{v^{\rm sig}_{ij}\, w_{ij}}{\rho_{ij}}\, f_{ij} \tag{8}$$

where

$$ v^{\rm sig}_{ij} = c_i + c_j - 3\, w_{ij} \tag{9}$$

is the signal velocity, where $w_{ij} = \vec{v}_{ij} \cdot \vec{r}_{ij} / |\vec{r}_{ij}|$ if $\vec{v}_{ij} \cdot \vec{r}_{ij} < 0$ but zero otherwise, so that $w_{ij}$ is non-zero only for approaching particles. Here scalar quantities with the subscript $ij$ denote arithmetic averages, $c_i$ is the sound speed of particle $i$, the parameter $\alpha$ regulates the amount of AV, and $f_{ij}$ is a controlling factor that reduces the strength of AV in the presence of shear flows. The latter is given by (Balsara 1995)

$$ f_i = \frac{|\nabla \cdot \vec{v}|_i}{|\nabla \cdot \vec{v}|_i + |\nabla \times \vec{v}|_i + 0.0001\, c_i / h_i} \tag{10}$$

where $|\nabla \cdot \vec{v}|_i$ and $|\nabla \times \vec{v}|_i$ are the standard SPH estimates for divergence and curl (Monaghan 2005). For pure shear flows $f_i \to 0$, so that the AV is strongly damped.

Using the signal velocity (9), the Courant condition on the timestep of particle $i$ reads

$$\Delta t_i = C_{\rm cour}\, \frac{h_i}{\max_j v^{\rm sig}_{ij}} \tag{11}$$
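As a concrete illustration of Eqs. (9) and (11), the pairwise signal velocity and the resulting timestep bound can be sketched as follows (Python, with function names of our own; the Courant factor value 0.3 is a typical choice and an assumption here, not necessarily the value used in the paper):

```python
import numpy as np

def signal_velocity(ci, cj, vij, rij, beta=3.0):
    """Eq. (9): v_sig = c_i + c_j - beta * w_ij, with
    w_ij = v_ij . r_ij / |r_ij| for approaching pairs and zero otherwise."""
    w = float(np.dot(vij, rij) / np.linalg.norm(rij))
    w = min(w, 0.0)                  # w_ij is non-zero only when approaching
    return ci + cj - beta * w

def courant_dt(h, vsig_max, C=0.3):
    """Eq. (11): Courant condition, dt_i = C * h_i / max_j v_sig_ij."""
    return C * h / vsig_max
```

For two approaching particles the signal velocity exceeds $c_i + c_j$, which shortens the allowed timestep accordingly; for receding pairs it reduces to the sum of the sound speeds.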

In the standard SPH formulation the viscosity parameter which controls the strength of the AV is a constant, with $\alpha \simeq 1$ being a common choice (Monaghan 2005). This scheme has the disadvantage that it generates viscous dissipation in regions of the flow that are not undergoing shocks. In order to reduce spurious viscosity effects, Morris & Monaghan (1997) proposed making the viscous coefficient time dependent so that it can evolve in time under certain local conditions. The authors proposed an equation of the form

$$\frac{d\alpha_i}{dt} = -\frac{\alpha_i - \alpha_{\min}}{\tau_i} + S_i \tag{12}$$

where $S_i$ is a source term and $\tau_i$ regulates the decay of $\alpha_i$ to the floor value $\alpha_{\min}$ away from shocks. For the source term the following expression is adopted

$$ S_i = f_i\, \max\!\left(-(\nabla \cdot \vec{v})_i,\, 0\right) (\alpha_{\max} - \alpha_i) \tag{13}$$

which is constructed in such a way that it increases in the presence of shocks. The prefactor $(\alpha_{\max} - \alpha_i)$ is close to unity for small $\alpha_i$, and the damping factor $f_i$ is inserted to account for the presence of vorticity. The original form proposed by Morris & Monaghan (1997) has been refined by the factor $(\alpha_{\max} - \alpha_i)$ (Rosswog et al. 2000), which has the advantage of being more sensitive to shocks than the original formulation and of preventing $\alpha_i$ from becoming higher than a prescribed value $\alpha_{\max}$. Recently, Cullen & Dehnen (2011) presented an improved version of the Morris & Monaghan (1997) AV scheme, employing as shock indicator a switch based on the time derivative of the velocity divergence.

The decay timescale $\tau_i$ is of the form

$$\tau_i = \frac{h_i}{l_d\, c_i} \tag{14}$$

where $l_d$ is a dimensionless parameter which controls the decay time scale. In a number of test simulations, Rosswog et al. (2000) found appropriate values for the parameters $\alpha_{\min}$, $\alpha_{\max}$ and $l_d$. In principle, the effects of numerical viscosity in regions away from shocks can be reduced by setting $l_d$ to higher values. In practice, the minimum time necessary for a shock front to propagate through the resolution length sets an upper limit on $l_d$. However, the time evolution of the viscosity parameter can be affected if very short damping timescales are imposed. Neglecting variations in the coefficients, the solution to Eq. (12) at times $t \gg \tau_i$ can be written as

$$\alpha_i \simeq \frac{\alpha_{\min} + \tilde{S}_i\, \tau_i}{1 + Q_i\, \tau_i} \tag{15}$$

where

$$ Q_i = f_i\, \max\!\left(-(\nabla \cdot \vec{v})_i,\, 0\right) \tag{16}$$

and $\tilde{S}_i$ is a modified source term

$$\tilde{S}_i = Q_i\, \alpha_{\max} \tag{17}$$

From Eq. (15) it can be seen that $\alpha_i \simeq \alpha_{\max}$ in the strong shock regime, but this condition is not satisfied if the decay timescale $\tau_i$ is too short compared with the growth of the source term. For mild shocks this implies that, if very short decay time scales are imposed, the peak value of $\alpha_i$ at the shock front might be below the AV strength necessary to properly treat shocks.
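The decay behaviour discussed above can be checked by integrating Eq. (12) directly. Below is a minimal explicit-Euler sketch with illustrative parameter values of our own choosing; away from shocks the source term vanishes and the viscosity parameter relaxes exponentially to its floor value:

```python
def evolve_alpha(alpha, source, tau, dt, alpha_min=0.1):
    """One explicit-Euler step of Eq. (12):
    d(alpha)/dt = -(alpha - alpha_min)/tau + S_i."""
    return alpha + dt * (-(alpha - alpha_min) / tau + source)

# Away from shocks (S_i = 0), alpha decays toward alpha_min on the scale tau.
alpha, tau, dt = 1.5, 0.2, 0.01
for _ in range(500):                  # integrate to t = 5, i.e. 25 decay times
    alpha = evolve_alpha(alpha, 0.0, tau, dt)
```

A shorter $\tau_i$ makes the relaxation faster, which is precisely why too small a decay time can clip the peak value of $\alpha_i$ at a mild shock.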

To overcome this difficulty a modification to (13) has been adopted (Valdarnini 2011, hereafter V11) which, for large values of $l_d$, compensates for the smaller peak values of $\alpha_i$ with respect to the small-$l_d$ regime. This is equivalent to considering a higher value for $\alpha_{\max}$, so that $\alpha_{\max}$ in Eq. (13) is substituted by an appropriately renormalized value.

| run label | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| $M$ | 0.2 | 0.4 | 0.6 | 0.8 | 1.0 |
| $v$ | 0.26 | 0.52 | 0.77 | 1.0 | 1.3 |
| $\tau_{\rm KH}$ | 1.23 | 0.56 | 0.37 | 0.28 | 0.22 |

The correction factor has been calibrated using the shock tube problem as reference, requiring the same peak value of the viscosity parameter at the shock front as in the small-$l_d$ case. The results indicate that a Mach-number-dependent normalization satisfies these constraints. This normalization has been validated in a number of other test problems, showing that it is able to produce a peak value of the viscosity parameter at the shock location which is independent of the chosen value of the decay parameter $l_d$.

The time-dependent AV formulation of SPH has been shown to be effective in reducing the damping of turbulence due to the effects of numerical viscosity in simulations of galaxy clusters (Dolag et al. 2005, V11). Moreover, it has been used in a recent paper (Price 2012b) to argue that the conclusion of Bauer & Springel (2012) about the difficulty for SPH codes in properly modeling the development of subsonic turbulence rests on using an SPH code in its standard AV formulation. In the following, unless otherwise stated, simulations will be performed using a time-dependent AV; this will be fully specified by the set of parameters $\{\alpha_{\min}, \alpha_{\max}, l_d\}$.

### 2.3 Artificial conductivity

As already outlined in the Introduction, different formulations have been proposed for overcoming the problems encountered by standard SPH in the treatment of fluid discontinuities. Here we follow the approach suggested by Price (2008), who proposed adding a dissipative term to the thermal energy equation for smoothing the energy across contact discontinuities. The presence of this dissipative term introduces a smoothing of entropy at fluid interfaces with the effect of removing pressure discontinuities.

The motivations lying at the basis of this approach were discussed by Monaghan (1997), who showed how the momentum and energy equations in SPH must contain a dissipative term related to jumps in the variables across characteristics, in analogy with the corresponding Riemann solutions.

The artificial conductivity (AC) term for the dissipation of energy takes the form

$$\left(\frac{du_i}{dt}\right)_{\rm cond} = \sum_j m_j\, \frac{\alpha^C_{ij}\, v^{\rm sig,C}_{ij}}{\rho_{ij}}\, (u_i - u_j)\, \hat{r}_{ij} \cdot \nabla_i \bar{W}_{ij} \tag{18}$$

where $v^{\rm sig,C}_{ij}$ is the signal velocity, which does not need to be the same as that used in the momentum equation, and $\alpha^C_{ij}$ is the AC parameter, which is of order unity (note that there is a misprint in the sign of the corresponding Eq. 28 of Price 2008). Eq. (18) represents the SPH analogue of a diffusion equation of the form

$$\frac{du}{dt} = \eta\, \nabla^2 u \tag{19}$$

where

$$\left(\nabla^2 u\right)_i \simeq -2 \sum_j \frac{m_j}{\rho_j}\, (u_i - u_j)\, \frac{\vec{r}_{ij} \cdot \nabla_i W_{ij}}{|\vec{r}_{ij}|^2} \tag{20}$$

is the SPH expression for the Laplacian (Brookshaw 1985) and, in analogy with the analysis of Lodato & Price (2010) for defining an equivalent physical viscosity coefficient for the SPH numerical viscosity, we can define $\eta$ as a numerical heat diffusion coefficient given by

$$\eta \simeq \frac{1}{2}\, \alpha^C_{ij}\, v^{\rm sig,C}_{ij}\, h_{ij} \tag{21}$$

| Simulation | SPH | RHO | LIQ | LP | CRT | M5 |
|---|---|---|---|---|---|---|
| Kernel | CS | CS | LIQ | LIQ | CRT | M5 |
| $N_{sph}$ | 32 | 32 | 32 | 32 | 32 | 50 |
| AC signal velocity | grav | grav | grav | pres | grav | grav |
| Integrated variable | U | S | S | S | S | S |

An important issue concerns the choice of the AC parameter $\alpha^C_i$, which must be constructed so that diffusion of thermal energy is kept under control and is damped away from discontinuities. This can be achieved by introducing a switch similar to that devised for the AV. The dissipation parameter is then evolved according to

$$\frac{d\alpha^C_i}{dt} = -\frac{\alpha^C_i - \alpha^C_{\min}}{\tau^C_i} + S^C_i \tag{22}$$

the meaning of the terms being similar to that of Eq. (12). In particular, for the decay timescale $\tau^C_i$ we adopt here the analogue of Eq. (14), and we set to zero the floor value $\alpha^C_{\min}$. For the source term the following expression is used

$$ S^C_i = C_s\, \frac{h_i\, |\nabla^2 u|_i}{\sqrt{u_i + \varepsilon}}\, (\alpha^C_{\max} - \alpha^C_i) \tag{23}$$

where $C_s$ is a dimensionless parameter of order unity and the parameter $\varepsilon$ avoids divergences as $u_i \to 0$. This equation differs from the corresponding source term proposed by Price & Monaghan (2005) and Price (2008) by the factor $(\alpha^C_{\max} - \alpha^C_i)$, which has been inserted here in analogy with Eq. (13) to introduce a stronger response of the switch in the presence of discontinuities. The choice of values for the parameters $C_s$ and $\alpha^C_{\max}$ depends on the problem under consideration; specific values were proposed, for example, by Price & Monaghan (2005). Here a number of numerical experiments has shown which settings give the best results in terms of the response of the AC parameter to the presence of discontinuities, and we assume these henceforth as reference values. Moreover, it is found that significant benefits in terms of sharpness of the AC parameter profile across the discontinuity are obtained by inserting the $(\alpha^C_{\max} - \alpha^C_i)$ term. In principle, the choice of the derivative term used to detect discontinuities is arbitrary, but in practice a second derivative term (Price & Monaghan 2005) ensures better sensitivity to sharp discontinuities in thermal energy than a first derivative (Price 2005a).

An example of a signal velocity specifically designed to remove pressure gradients at contact discontinuities is that originally introduced by Price (2008) for pure hydrodynamical simulations

$$ v^{\rm sig,C}_{ij} = \sqrt{\frac{|P_i - P_j|}{\rho_{ij}}} \tag{24}$$

and further refined by Valcke et al. (2010). The ability of the new AC formulation of SPH to follow the development of KH instabilities using this expression for the signal velocity has been verified in a number of tests (Price 2008; Valcke et al. 2010; Merlin et al. 2010). However, the disadvantage of the signal speed (24) is that it cannot be applied when self-gravity is considered, because in that case a pressure gradient is present at hydrostatic equilibrium. Imagine, for example, a self-gravitating gas sphere in hydrostatic equilibrium in which there is a negative radial temperature gradient, with the gas temperature decreasing when moving outward from the center of the sphere. An application of the SPH equations, with the AC term of Eq. (18) now using the signal velocity (24), will lead to a heat flux from the inner to the outer regions and, in the long term, to the development of an unphysical isothermal temperature profile. A signal velocity which avoids this difficulty is given by

$$ v^{\rm sig,C}_{ij} = \left| \vec{v}_{ij} \cdot \hat{r}_{ij} \right| \tag{25}$$

which corresponds to the formulation proposed by Wadsley, Veeravalli & Couchman (2008), in which a dissipative term is added to the evolution of the thermal energy equation with the purpose of modeling heat diffusion due to turbulence. The new term is constructed in analogy with the subgrid-scale model of Smagorinsky (1963) and is given by

$$\left(\frac{du_i}{dt}\right)_{\rm diff} = \sum_j m_j\, \frac{D_i + D_j}{\rho_{ij}}\, (u_i - u_j)\, \frac{\vec{r}_{ij} \cdot \nabla_i \bar{W}_{ij}}{|\vec{r}_{ij}|^2} \tag{26}$$

where the diffusion coefficients $D_i$ carry a normalization constant of order unity, whose precise value depends on the problem under consideration. For the rising hot bubble problem considered by Wadsley, Veeravalli & Couchman (2008), the best agreement with the corresponding PPM results taken as reference is obtained with a low value of this constant, higher values being too diffusive. Throughout this paper the SPH simulations of the hydrodynamic tests are performed by adopting the expression (18) for the thermal energy dissipative term. With respect to the formulation of Wadsley, Veeravalli & Couchman (2008), this approach presents the advantage of using a diffusion parameter which is not constant but evolves in time according to a source term, so that the amount of diffusion away from discontinuities is minimized. For the AC signal velocity, we then use expression (25), whose performance in the AC formulation (18) has not been fully tested before in SPH simulations of hydrodynamic test problems.
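The essential behaviour of the AC term (18) — exchange driven by the difference in specific thermal energy, with heat flowing toward the cooler particle while total thermal energy is conserved — can be illustrated with a pairwise sketch (our own simplified notation; `fbar` stands for the positive magnitude of the symmetrized kernel-gradient factor, and the signal velocity is passed in as a number):

```python
def ac_pair_rates(mi, mj, ui, uj, rho_ij, vsig, fbar, alpha_c=1.0):
    """Pairwise heat-exchange rates in the spirit of Eq. (18).
    Returns (du_i/dt, du_j/dt); m_i*du_i + m_j*du_j = 0 by construction."""
    k = alpha_c * vsig * fbar / rho_ij
    du_i = mj * k * (uj - ui)        # heat flows toward the cooler particle
    du_j = mi * k * (ui - uj)
    return du_i, du_j
```

For $u_j > u_i$ the first rate is positive and the second negative, and the mass-weighted sum vanishes identically, so the term redistributes but never creates thermal energy.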

## 3 Hydrodynamic tests

In the following, simulation results obtained by applying the new AC-SPH code to a number of hydrodynamic test problems are discussed with the objective of validating the code and assessing its performance. To this end, the simulation results of the tests will be compared with the corresponding ones obtained by previous authors using different codes and/or numerical methods. The problems considered are usually presented in the literature in order of dimensionality or complexity, but here we follow a different approach. Since the KH instability is the hydrodynamic test which initially (Agertz et al. 2007) originated the discussion about the difficulties of standard SPH in properly handling the development of instabilities, and it is also the most widely considered in this context (Abell 2011; Price 2008; Cha, Inutsuka & Nayakshin 2010; Heß & Springel 2010; Read, Hayfield & Agertz 2010; Valcke et al. 2010; Murante et al. 2011), we first discuss in detail the two-dimensional version of the test. The setup of the other runs will then be considered in the light of the results obtained from the 2D KH test.

### 3.1 2D Kelvin-Helmholtz instability

The KH instability arises in the presence of shear flow between two fluid layers when a small velocity perturbation is imposed in the direction perpendicular to the interface between the two fluids. The development of the instability is characterized by an initial phase in which the fluids interpenetrate each other, followed by the formation of vortices, which become progressively more pronounced and turn into KH rolls at the onset of non-linearity. In the case of incompressible fluids, for a sinusoidal perturbation of wavelength $\lambda$, this phase is reached on a time-scale (Chandrasekhar 1961)

$$\tau_{\rm KH} = \frac{\lambda\, (\rho_1 + \rho_2)}{v_{\rm rel}\, \sqrt{\rho_1 \rho_2}} \tag{28}$$

where $\rho_1$ and $\rho_2$ are the fluid densities and $v_{\rm rel}$ is the relative shear velocity. A proper modeling of instabilities in numerical hydrodynamics is essential, since the KH instability plays a crucial role in the development of turbulence which occurs in many hydrodynamical phenomena. In particular, the KH instability is relevant in many astrophysical processes, such as gas stripping from cold gas clouds occurring in galactic haloes and the production of entropy in the intracluster gas of galaxy clusters due to injection of turbulence.
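For reference, the time-scale of Eq. (28) is trivial to evaluate (a minimal sketch; for equal densities it reduces to $2\lambda/v_{\rm rel}$, and it grows with the density contrast at fixed shear velocity):

```python
import math

def tau_kh(lam, rho1, rho2, v_rel):
    """Eq. (28), Chandrasekhar (1961):
    tau_KH = lam * (rho1 + rho2) / (v_rel * sqrt(rho1 * rho2))."""
    return lam * (rho1 + rho2) / (v_rel * math.sqrt(rho1 * rho2))
```

This makes explicit why simulations at higher Mach number (larger shear velocity) reach the non-linear phase earlier, as reflected by the decreasing $\tau_{\rm KH}$ values in Table 1.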

The growth of the KH instability in hydrodynamic simulations has been addressed by various authors (Abell 2011; Price 2008; Wadsley, Veeravalli & Couchman 2008; Read, Hayfield & Agertz 2010; Valcke et al. 2010; Heß & Springel 2010; Cha, Inutsuka & Nayakshin 2010; Murante et al. 2011). These studies found that the development of the instability is artificially suppressed because of two distinct effects: the first problem is the so-called local mixing instability (LMI), which is due to entropy conservation and which inhibits mixing at the kernel scale, thus introducing pressure discontinuities; the second problem arises because of errors in the momentum equation, which cannot be reduced by increasing the number of neighbor particles without causing particle clumping.

Given the wide variety of numerical parameters and initial conditions with which the KH instability has been addressed, we choose here to perform the tests using as reference the simulations presented by Valcke et al. (2010). In particular, the authors implement in their SPH equations an AC term like that of Eq. (18), but with the AC parameter set to unity and with a signal velocity given by Eq. (24). The authors performed a systematic analysis of the capabilities of SPH to capture the KH instability using different SPH formulations, kernels, numerical resolutions and KH time-scales. This choice allows us to assess code capabilities by constructing a large suite of simulations whose numerical results can be contrasted against the corresponding ones discussed by Valcke et al. (2010).

#### 3.1.1 Initial conditions set-up

The problem domain consists of a periodic box of unit length with Cartesian coordinates $(x, y)$. Within the domain there is a fluid with adiabatic index $\gamma$ which satisfies the following conditions:

$$\rho(y) = \begin{cases} \rho_1 & \text{for } |y - 1/2| < 1/4, \\ \rho_2 & \text{otherwise}, \end{cases} \tag{29}$$

As in Valcke et al. (2010), we choose here the layer densities $\rho_1$ and $\rho_2$, with the index 1 referring to the high-density layer. This choice is motivated by the finding that the difficulties of SPH in reproducing KH instabilities increase as the density contrast between the two fluid layers gets higher. The two layers are in pressure equilibrium, so that the sound velocities in the two layers are $c_1$ and $c_2$, respectively. The layers slide against each other with opposite shearing velocities, and in order to seed the KH instability a small single-mode velocity perturbation is imposed along the $y$ direction

$$ v_y(x) = \delta v_y\, \sin(2\pi x / \lambda) \tag{30}$$

where $\delta v_y$ is the perturbation amplitude and $\lambda$ the wavelength. In order to restrict the perturbation to spatial regions in the proximity of the interfaces, the perturbation (30) is applied only to particles lying close to one of the two interfaces. With this choice of parameters, the Mach number of the high-density layer and the KH time-scale are fixed. As in Valcke et al. (2010), we consider KH simulations with a range of five different Mach numbers; Table 1 reports the values of $M$ together with the respective values of the shear velocity $v$ and the time-scale $\tau_{\rm KH}$.

In order to implement the density set-up (29), a two-dimensional lattice of equal-mass particles is placed inside the simulation box. We adopt here an isotropic hexagonal close-packed (HCP) configuration for the particle coordinates instead of the more commonly employed Cartesian grid. The advantage of this configuration is that, for a given number of neighbors, it gives a better density estimate due to its symmetry properties. To construct the initial density configuration (29), the lattice spacing of the particles is varied until the SPH density estimate (1) satisfies the required values within a certain tolerance criterion for the relative density error. The initial particle specific energies were assigned after the density calculation so as to satisfy pressure equilibrium. The total number of particles is larger than that used in the runs of Valcke et al. (2010), but guarantees a density setup with the specified tolerance criterion.
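The HCP arrangement mentioned above can be sketched as follows in 2D (a triangular lattice with alternate rows offset by half a spacing; this is an illustrative construction of our own, not the iterative spacing-adjustment procedure used to match the target density):

```python
import numpy as np

def hcp_lattice_2d(nx, ny, dx):
    """2D hexagonal close-packed (triangular) lattice: rows spaced by
    dx*sqrt(3)/2 and offset by dx/2, so every nearest-neighbor pair
    is separated by exactly dx."""
    pts = []
    for j in range(ny):
        y = j * dx * np.sqrt(3.0) / 2.0
        xoff = (j % 2) * dx / 2.0
        for i in range(nx):
            pts.append((xoff + i * dx, y))
    return np.array(pts)
```

The equal nearest-neighbor distances are what gives the configuration its favourable symmetry properties for the SPH density estimate.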

As noticed by Valcke et al. (2010) and other authors, SPH is a numerical method which can only represent smoothed quantities, and so applying it to hydrodynamic problems where strong density gradients are present can lead to inconsistencies. This is, in fact, the situation encountered by standard SPH in the treatment of KH instabilities, where the lack of entropy mixing induces an artificial pressure discontinuity at fluid interfaces with a jump in density.

Motivated by these difficulties, we consider here a set of runs in which the density discontinuity at the interfaces is replaced by a smooth transition. To allow a consistent comparison with the corresponding runs of Valcke et al. (2010), we adopt the same smoothing profile

(31)

where the coefficients are given by

(32)

(33)

(34)

(35)

and . In Eq. (31) the sign in front of the coefficient refers to respectively. The parameters and determine the thickness of the density transition and take the values , .

The following procedure is adopted in order to construct a lattice configuration which satisfies the density profile given by Eq. (31). An HCP lattice of particles is first constructed in the domain and with a spacing such that the SPH density is . Proceeding upward from , the lattice spacing is progressively reduced according to until and particles are left. The same procedure is applied to a high-density lattice which in the same domain satisfies the condition : starting from and proceeding downward, the lattice spacing is increased so that until and particles remain of the original high-density lattice. The whole procedure is numerically iterated until the numbers and satisfy the conditions and . The lattice is then replicated in the top half of the box () so that the initial conditions are fully symmetric around . In the following, simulations with initial conditions for which the particle positions are arranged in an unsmoothed HCP lattice are denoted with the suffix SPH, whereas all of the others adopt the smoothed density profile constructed according to the procedure described here.

Another issue which is found to have a significant impact on the ability of standard SPH to properly capture KH instabilities is the choice of the kernel function. According to Read, Hayfield & Agertz (2010), the accuracy of the momentum equation for particle is governed by the leading error

$$\mathbf{E}_a \,=\, \sum_b \frac{m_b}{\rho_b}\, \nabla_a W_{ab} \qquad (36)$$

which vanishes in the continuum limit. However, this does not hold for a finite number of irregularly distributed particles or, more specifically, in the proximity of a contact discontinuity where a density step is present. A possible solution consists of increasing the number of neighbors so as to improve kernel sampling, although this approach presents some difficulties when the commonly employed $M_4$ or cubic spline (CS) SPH kernel is used. This occurs because for a large number of neighbors the CS kernel is subject to the so-called clumping instability, in which pairs of particles with interparticle distance below remain close together because the CS kernel gradient tends to zero below this threshold distance. A stability analysis (Morris 1996; Børve, Omang & Trulsen 2004; Price 2005a; Read, Hayfield & Agertz 2010) shows that for the CS kernel, a Cartesian lattice of particles is unstable for or when and , respectively. The clumping degrades the spatial resolution because it reduces the effective number of neighbors with which integrals are sampled, so that sampling errors remain large even when the resolution is increased. To overcome this problem one can modify the kernel shape so as to have a non-zero kernel derivative at the origin. However, for a fixed number of neighbors, kernels of this kind have the drawback of giving a less accurate density estimate than the CS kernel, since near the origin their profiles are steeper.
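The mechanism can be seen directly from the radial derivative of the CS kernel (a small Python check; the kernel is written unnormalized since only its shape matters here): the pairwise repulsion peaks at q = 2/3 and dies away as q approaches zero, so a pair that drifts inside this radius feels almost no restoring force.

```python
import numpy as np

def cs_grad(q):
    # radial derivative dW/dq of the (unnormalized) M4 cubic spline
    if q < 1.0:
        return -3.0 * q + 2.25 * q * q
    if q < 2.0:
        return -0.75 * (2.0 - q)**2
    return 0.0

qs = np.linspace(1e-3, 2.0, 2000)
g = np.array([abs(cs_grad(q)) for q in qs])
q_peak = qs[np.argmax(g)]      # repulsion is strongest at q = 2/3 ...
g_near0 = abs(cs_grad(0.01))   # ... and almost vanishes near the origin
```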

As an alternative to the CS kernel, we consider here the linear quartic (LIQ) kernel, introduced by Valcke et al. (2010), which is a quartic polynomial for and is linear for . The choice of the parameter determines the quality of density estimates. From a set of 2D Sod shock tube runs, Valcke et al. (2010) recommend , which is the value adopted here. For the functional form and normalization of the kernel see Valcke et al. (2010).

Another kernel which has been introduced for the purpose of avoiding particle clumping is the core triangle (CRT) kernel (Read, Hayfield & Agertz 2010) which has constant first derivative for and is similar to the CS kernel for . This kernel is of the form

(37)

where for and , respectively. The value of is fixed by the condition of continuity for the second derivative, giving . For the grid of initial conditions previously described, the sample of KH simulations is then constructed by performing SPH runs with the same initial conditions but using different kernels. We consider the CS kernel, together with the LIQ and CRT kernels. For all of these runs the number of neighbors is .

Another solution for reducing sampling errors consists in retaining a spline kernel function but increasing its order (Price 2012, sect. 5.4). This approach has the advantage of preserving the bell shape of the kernel, a feature which provides good density estimates (Fulk & Quinn 1996). After the $M_4$ (CS) kernel, the next order is the $M_5$ or quartic kernel

$$W(q) \,=\, \sigma \begin{cases} (5/2-q)^4 - 5\,(3/2-q)^4 + 10\,(1/2-q)^4 & 0 \le q < 1/2 \\ (5/2-q)^4 - 5\,(3/2-q)^4 & 1/2 \le q < 3/2 \\ (5/2-q)^4 & 3/2 \le q < 5/2 \end{cases} \qquad (38)$$

which is truncated to zero for $q \ge 5/2$ and has for and , respectively.
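A quick numerical check of the quartic spline's basic properties (a Python sketch; the piecewise polynomial and its 1D normalization 1/24 are the standard $M_5$ expressions): the kernel integrates to unity and, being even, still has zero gradient at the origin, so its better behavior against pairing must come from its overall shape rather than from a modified core.

```python
import numpy as np

def m5_unnorm(q):
    # M5 quartic spline (unnormalized); compact support |q| <= 5/2
    q = abs(q)
    w = 0.0
    if q < 2.5:
        w += (2.5 - q)**4
    if q < 1.5:
        w -= 5.0 * (1.5 - q)**4
    if q < 0.5:
        w += 10.0 * (0.5 - q)**4
    return w

# check the 1D normalization sigma = 1/24 by simple quadrature
qs, dq = np.linspace(-2.5, 2.5, 100001, retstep=True)
integral = sum(m5_unnorm(q) / 24.0 for q in qs) * dq

# the kernel is even, so its gradient vanishes at the origin just as for
# the CS kernel (centered finite difference)
eps = 1e-4
grad0 = (m5_unnorm(eps) - m5_unnorm(-eps)) / (2.0 * eps)
```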

An additional set of SPH simulations was then performed using the $M_5$ spline as the kernel. For consistency with the other runs, we kept the same ratio of smoothing length to particle spacing (), so that for these runs the chosen number of neighbors is . A non-trivial issue concerns the role of the pairing instability for this class of kernels. Because the gradient of the kernel still goes to zero as , one would expect the instability still to occur for . Nonetheless, it will be seen that KH simulations with the $M_5$ kernel do not exhibit particle clumping, in contrast with corresponding simulations performed with the CS kernel. This suggests that the stability properties of the $M_5$ kernel are better than those of the CS one; this topic will be discussed in a dedicated section (3.1.3).

#### 3.1.2 Results

Fig. 1 shows the density maps at for some of the KH simulations listed in Table 2. The panels can be compared directly with the corresponding ones in Fig. 10 of Valcke et al. (2010). A visual inspection reveals that the expected features of the KH runs are correctly reproduced and that there is a general consistency between the results produced by the two codes. In particular, the LP simulations, which use the pressure-based AC signal velocities (24), produce results almost identical to the corresponding LIQ ones, showing that, for the KH tests examined here, the use of the two signal velocities can be considered equivalent within the spanned range of Mach numbers.

Note, however, the absence of KH rolls for the LP run with , whereas in the same simulation of Valcke et al. (2010) the rolls have already developed. Given the general agreement between the two codes, it is hard to ascertain the origin of the discrepancy for this specific run. However, the panels of Fig. 1 show that the AC-SPH formulation (a point discussed in detail in Sect. 4.2) is still unable to properly capture the development of KH instabilities for very subsonic shear flows. This suggests that small differences in the time integration procedure of the two codes can become manifest in the long-term evolution of cold flows.

In Fig. 2 the results for the same set of KH simulations of Table 1 are shown, but now using the LIQ, CRT and kernels. An important feature is the appearance of KH rolls for the CRT and runs in the case. This improvement in the description of KH instabilities suggests that integration errors that are present with the LIQ kernel are now absent or reduced in the CRT and runs. However, in the case, the kernels are still unable to capture the development of the KH instability. For this test case, a high-resolution run () using the kernel (not shown here) still reveals the absence of any KH roll at , showing that the difficulty SPH has in accurately describing KH instabilities in very subsonic regimes is neither a resolution issue nor due to the AC implementation. See also McNally, Lyra & Passy (2011) for a recent discussion of this topic.

To test the effectiveness of the switch (13) in ensuring a fast response of the parameters to the presence of thermal energy discontinuities, Fig. 3 shows maps of the AC parameters for the test runs of Fig. 2. The maps are color coded according to the range of values of . The highest values lie in the range and are confined to a narrow strip around the interface layers, with the floor value as the background value away from the discontinuities. Note that in general, and in particular for the test case, the maximum values of for the runs are below those of the other runs. This is a consequence of the faster growth rate of the instability, ensured by the improved accuracy of the kernel in interpolating the variables.

The long-term evolution of the KH tests is shown at in Fig. 4. The overall features of the density plots for the different runs are similar to their counterparts displayed in Fig. 11 of Valcke et al. (2010). Note the tendency of the runs to develop small-scale features at the layer contacts.

The aim of this section is to test the consistency of the present AC implementation by comparing results, extracted from a suite of AC-SPH simulations of the 2D KH instability problem, with those of similar runs (Valcke et al. 2010). The results of the KH tests indicate a code behavior which is in accord with expectations and with the simulations of Valcke et al. (2010).

However, Valcke et al. (2010) argue that the absence of the AC term is not the main reason for the failure of SPH to develop KH instabilities, although the presence of AC is necessary for their long-term evolution; rather, the difficulty of SPH is due to two distinct causes. The first is a general problem of consistency in the initial condition set-up: SPH by definition can only deal with smoothed quantities, and therefore its application to problems where sharp discontinuities are present leads to inconsistencies. This is part of the more general problem (Robertson et al. 2010) of properly smoothing, in numerical simulations of hydrodynamic test problems, the discontinuities initially present at the interfaces, so as to achieve convergence in the solution. This aspect of the KH test problem can be cured by properly stretching the initial particle lattice so as to introduce a smooth density transition at the interfaces. The results indicate a significant improvement in the capability of SPH to properly capture the correct growth rate of the KH instability.

The other issue which causes the poor performance of SPH when handling KH instabilities is the leading error in the momentum equation due to incomplete kernel sampling. This error can be reduced by removing particle clumping, which depends on the kernel stability properties. In fact, the LIQ and CRT kernels have been introduced (Read, Hayfield & Agertz 2010; Valcke et al. 2010) with the aim of removing the clumping instability. Since the results of the simulations suggest performances for the kernel quite similar to those achieved by these kernels, it is interesting to quantify the relative performances of these kernels more precisely.

Following Junk et al. (2010), we then measure the growth rate of the KH instability for some of the runs and compare it with the linear theory growth rate expectation . This is achieved, using Fourier transforms, by measuring the time-evolution of the growing mode of the velocity perturbation component. The details of the procedure are given in Junk et al. (2010) and will not be reported here. For the sake of clarity, the results of the LIQ runs are not displayed since they are quite similar to those of the CRT simulations. Moreover, we only display the growth rates for three different KH test cases, those with Mach number and , the results of the others being intermediate between these.
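The amplitude extraction step can be illustrated with a short Python sketch (all names and values are illustrative, not those of Junk et al. 2010): the growing mode is projected out of the particle velocities with a discrete Fourier sum, and the growth rate follows from a linear fit to the logarithm of the amplitude. On a synthetic signal growing as exp(t/tau), the fitted slope recovers 1/tau:

```python
import numpy as np

def mode_amplitude(x, vy, k):
    # amplitude of the k-mode of the v_y field from an irregular particle set
    return np.abs(np.sum(vy * np.exp(-1j * k * x))) / len(x)

# synthetic check: a perturbation growing as exp(t/tau) must be recovered
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 4000)     # particle x positions
k, tau = 2.0 * np.pi, 0.5           # wavenumber and growth time
times = np.linspace(0.0, 1.0, 11)
amps = [mode_amplitude(x, 1e-3 * np.exp(t / tau) * np.sin(k * x), k)
        for t in times]
slope = np.polyfit(times, np.log(amps), 1)[0]   # estimate of 1/tau
```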

There are a number of distinct features apparent from the panels of Fig. 5. The first is that simulations with smoothed IC ( RHO) perform systematically better than the unsmoothed ones (SPH). The second is that only for are the simulations with the CRT and kernels able to correctly recover the expected growth rate. Finally, this capability degrades progressively as one considers lower Mach numbers. While this behavior is in accordance with the visual impression of the maps previously displayed, and confirms that the present SPH implementation still has problems in the description of instabilities in subsonic flows, it is interesting to note that the performances of the runs are similar to those obtained using the CRT kernel. This behavior is particularly interesting, since the simulations have been performed setting the ratio of the smoothing length to particle spacing to , so that the pairing instability, which is present in the runs using the CS kernel, should in principle be present in these simulations as well.

How the clumping instability affects sampling errors can be assessed by computing the particle errors according to Eq. (36). The distribution throughout the simulation domain of the errors versus is shown in the top panels of Fig. 6 for the simulations performed using different kernels. The solid lines represent the mean of the binned distributions. Similar plots have been produced by Valcke et al. (2010), and their Fig. 3 can be used for comparative purposes.
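A minimal Python sketch of this diagnostic (lattice size, smoothing length and perturbation amplitude are arbitrary example values): on a regular lattice the kernel-gradient sum cancels by symmetry, while perturbing the particle positions leaves a finite residual, which is the sampling error measured here.

```python
import numpy as np

def grad_w_cs(dx, dy, h):
    # gradient of the 2D M4 cubic spline at separation (dx, dy)
    r = np.hypot(dx, dy)
    q = r / h
    if q < 1e-12 or q >= 2.0:
        return np.zeros(2)
    sig = 10.0 / (7.0 * np.pi * h**2)
    dwdq = -3.0*q + 2.25*q*q if q < 1.0 else -0.75*(2.0 - q)**2
    return sig * dwdq / (h * r) * np.array([dx, dy])

def e0_error(a, pos, m, rho, h):
    # leading momentum-equation error: |sum_b (m_b/rho_b) grad_a W_ab|
    e = np.zeros(2)
    for b in range(len(pos)):
        if b != a:
            e += (m / rho) * grad_w_cs(pos[a, 0] - pos[b, 0],
                                       pos[a, 1] - pos[b, 1], h)
    return np.linalg.norm(e)

xs, ys = np.meshgrid(np.arange(15.0), np.arange(15.0))
pos = np.column_stack([xs.ravel(), ys.ravel()])
a = 7 * 15 + 7                       # a central, interior particle
e_reg = e0_error(a, pos, 1.0, 1.0, 1.3)   # regular lattice: cancels
rng = np.random.default_rng(0)
e_per = e0_error(a, pos + 0.1 * rng.standard_normal(pos.shape),
                 1.0, 1.0, 1.3)           # perturbed lattice: finite error
```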

As expected, the largest errors are present in the SPH simulation, whereas better results are obtained by using the LIQ and CRT kernels, as indicated by the error distribution in their respective panels. This is a result of the absence of particle clumping for these simulations, owing to the specific stability properties of these kernels (Read, Hayfield & Agertz 2010; Valcke et al. 2010).

Conversely, a striking feature is the error distribution of the simulation performed using the kernel, which shows error magnitudes even smaller than those of the CRT and LIQ runs. This result is in accordance with what was found previously by analyzing the growth rates, and suggests that, since all of the simulations were performed using the same , the stability properties of the kernel are better than those of the CS one.

To further investigate this point, the bottom panels of Fig. 6 show the ‘nearest neighbor’ map of the corresponding top panels. This is defined by interpolating the quantity at the grid points, according to the SPH prescription, where for the neighbor and the neighbors have been sorted according to their distance from the particle itself.

The map of the kernel indicates a distribution of the second nearest neighbor distances quite similar to that displayed by the CRT and LIQ kernels. The only difference is at the layer interfaces, where the distribution of for the kernel is slightly shifted towards smaller values than for the other kernels. Note, conversely, how for the CS kernel, because of the pairing instability, the distribution of the neighbor distances is flipped with respect to that of the other kernels.

The results of Fig. 6, however, clearly show the absence of the clumping instability for the kernel. In the next section we investigate the stability properties of this kernel in more detail.

#### 3.1.3 Stability issues

The stability properties of SPH have been investigated by a number of authors (Swegle & Hicks 1995; Morris 1996; Monaghan 2000; Børve, Omang & Trulsen 2004; Read, Hayfield & Agertz 2010; Dehnen & Aly 2012). The instabilities are studied analytically by analyzing the dispersion relation for sound waves of small amplitude, propagating in a uniform medium. We now derive the dispersion relation for the SPH equation of motion in a manner similar to the analysis of previous authors. A uniform lattice of equal mass particles of mass , density , pressure and sound speed is perturbed by a small wave

$$\delta \mathbf{x}_a \,=\, \mathbf{X}\, \exp\left[\,i\,(\mathbf{k}\cdot\mathbf{r}_a - \omega t)\,\right] \qquad (39)$$

where is the perturbation, the wavevector and are the unperturbed particle positions. By linearizing the equation of motion for the perturbation one has the dispersion relation

where the summations are over the neighbors of particle , the vector is defined as

(40)

and is the Hessian of the kernel

$$\left(\mathsf{H}_{ab}\right)^{ij} \,=\, \frac{\partial^2 W_{ab}}{\partial x^i\, \partial x^j} \qquad (41)$$

For a given smoothing length and wavevector , the stable solutions of Eq. (3.1.3) are defined by the condition . Solutions for which represent exponentially growing or decaying perturbations. Moreover, it is useful to define a numerical sound speed and a scaled numerical sound speed . Clearly, the condition should be satisfied to correctly model the sound speed.

We now examine the stability properties of the CS and kernels. For simplicity, we consider a plane wave propagating along the axis, , and assume . The bottom panels of Fig. 7 show, for the longitudinal and the two transverse waves of the perturbation, the instability regions of the CS kernel; these are denoted by the gray areas and represent the solutions of Eq. (3.1.3), in the domain , for which . Similarly, the top panels are for the kernel.

The longitudinal wave perturbation is responsible for the clumping instability, whereas the transverse waves give rise to the so-called banding instability (Read, Hayfield & Agertz 2010). Unlike the clumping instability, the latter is relatively unimportant in causing sampling errors (Read, Hayfield & Agertz 2010) and will not be considered further here. A comparison of the two stability plots for the longitudinal wave solution clearly shows that the stability properties of the kernel are much better than those of the CS one.

This is part of a more general result already recognized by Morris (1996): the stability properties of SPH improve as higher order spline kernels are used in place of the CS kernel. This occurs basically because the higher the order of the spline, the better it approximates a Gaussian kernel. From Eq. (3.1.3) one can see that the numerical sound speed depends on the first and second derivatives of the kernel. Ideally, one should have to accurately describe sound wave propagation, and this condition is better fulfilled as smoother kernels are used, so as to keep the numerical dependency of as weak as possible.

Note, however, that here, unlike for the CRT kernel, the clumping instability is not entirely suppressed but is present whenever .

Finally, it must be stressed that the pairing instability that occurs in the hydrodynamic tests described here is unlikely to have a significant impact on many astrophysical problems of interest, where very cold flows are absent. Rasio & Lombardi (1999) estimate, for instance, from SPH simulations of a stationary fluid, that lattice effects become important when velocity dispersions are below the sound speed.

The results of this section and of the previous one therefore suggest that, in order to avoid the pairing instability, the kernel can be considered a viable alternative to the otherwise steeper CRT and LIQ kernels, provided that the parameter is consistently rescaled. In the next section, we discuss the relative performances of these kernels in a test simulation.

### 3.2 Sod’s shock tube

A classic test used to investigate the hydrodynamic capabilities of SPH codes is the Sod shock tube problem (Hernquist & Katz 1989; Wadsley, Stadel & Quinn 2004; Springel 2005; Price 2008; Tasker et al. 2008; Rosswog 2009). This test consists of a fluid, initially at rest, in which a membrane located at separates the fluid on the right, with high density and pressure, from the fluid on the left, with lower density and pressure. The membrane is removed at and a shock wave develops, propagating toward the left, followed by a contact discontinuity, with a rarefaction wave propagating to the right.

A well known problem of standard SPH codes in reproducing the analytic solution of the shock tube problem is the presence of a pressure discontinuity that arises across the propagating contact discontinuity. Simulations incorporating an artificial thermal conductivity term in the SPH equations (Price 2008; Rosswog 2009; Price 2012) show shock profiles in which density and thermal energy are resolved across the discontinuity, hence giving a continuous pressure profile. However, in these runs the AC formulation adopts the AC signal velocity (24), in which jumps in thermal energy are smoothed in accordance with the presence of pressure discontinuities. This is in contrast with the AC signal velocity (25) employed here, for which, in the absence of shear flows, contact discontinuities are unaffected; it therefore cannot be used to remove the blip seen at the contact discontinuity in the pressure profile of the SPH shock tube runs.

The application of the AC-SPH code to this test is nonetheless of interest because it can still be used to validate code performance. In particular, we consider a 3D setup of the shock tube test, and with these initial conditions we construct a set of AC-SPH simulations performed with different kernels and neighbor numbers. Shock tube profiles extracted from these simulations are compared with the aim of assessing how well different kernels reproduce, for a given test problem, the profiles of known analytic solutions.

The initial condition setup consists of an ideal fluid with , initially at rest at . An interface set at the origin separates the fluid on the right with density and pressure from the fluid on the left with . To construct these initial conditions, a cubic box of side-length unity was filled with equal mass particles, so that were placed in the right half of the cube and in the left half. The particles were extracted from two independent uniform glass-like distributions contained in a unit box consisting of and particles, respectively. This initial condition setup is the same as that previously implemented in Sect. 5.1 of V11 to test the time-dependent AV scheme described in Sect. 2.2.

For the same initial setup, to investigate the performances of different kernels in reproducing the analytic profiles of the shock tube problem, we perform runs with different kernels and neighbor numbers. The kernels considered are: CS, LIQ, CRT and . The number of neighbors varies in powers of two between and . The SPH runs were realized by imposing periodic boundary conditions along the and axes, and the results were examined at . We show results obtained using the standard AV formulation, the results of the other runs being unimportant from the viewpoint of kernel performances.

Some of the simulation profiles extracted from the 3D SPH runs of the shock tube test are shown in Fig. 8. A striking feature is the wide scatter between the pressure profiles of the runs performed using different kernels or neighbor numbers. The same behavior is present for the thermal energy profiles, whilst very similar density profiles are exhibited by the same runs.

There are several conclusions that can be drawn from the results of Fig. 8. The performances of the -64 run are quite similar to those of CS-32, although for the former simulation a closer inspection reveals a slightly better treatment of the thermal energy spike at the contact discontinuity, the spike being due to the initial condition set-up.

The worst results are obtained by the LIQ simulations when using or neighbors. The profiles of the LIQ-128 run are not shown here to avoid overcrowding the plots, being quite similar to those of simulation CRT-128. This clearly demonstrates the need for this class of kernels to use a large (say ) number of neighbors, so as to compensate for the density underestimate due to the steeper profiles introduced to avoid the pairing instability.

To better quantify this density bias, Table 3 reports, for different kernels and neighbor numbers, the mean SPH density estimated from a glass-like configuration of particles in a unit periodic box of total mass one. The results illustrate how the density estimate of the kernel with 64 neighbors is comparable to that of the CS kernel obtained using 32 neighbors, the value of being the same (). Vice versa, for the LIQ and CRT kernels, only when does the mean density approach the -64 estimate.

The density values given in Table 3 also help to explain the rather poor performance of the LIQ kernel when using 32 neighbors. From Fig. 8 it can be seen that for the corresponding run the relative pressure error is in the unperturbed right zone of the cube. This error is already present in the pressure profile at and originates from the density error due to the adopted kernel and neighbor number, together with the use of an entropy-conserving code to perform the simulations. Initially, particle entropies are assigned according to the initial conditions so that, for particles satisfying , . During the integration particle pressures are calculated according to , so that for a relative pressure error is present when .
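The propagation of a density bias into an entropy-conserving pressure can be illustrated with a two-line estimate (a Python sketch; the 2% bias is an arbitrary example of the order of the entries in Table 3): with the entropy function fixed from the initial conditions, P = A * rho**gamma, a relative density error is amplified by roughly a factor gamma in the pressure.

```python
gamma = 5.0 / 3.0
rho_true, P_true = 1.0, 1.0
A = P_true / rho_true**gamma       # entropy function, fixed by the ICs

# suppose the kernel/neighbor choice biases the SPH density high by 2%
# (an illustrative value, of the order of the entries in Table 3)
rho_sph = 1.02 * rho_true
P_sph = A * rho_sph**gamma
rel_err = P_sph / P_true - 1.0     # ~ gamma * 0.02, i.e. about 3.4%
```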

Taken at face value, the results of Table 3 indicate that in 3D simulations a conservative lower limit for kernels with a modified shape is to assume at least neighbors. In fact, in a recent paper Read & Hayfield (2012) presented a new formulation of SPH in which they adopted, as reference, the so-called high-order core triangle kernel (HOCT; Price 2005a). The profile of this kernel is a generalization of that of the CRT kernel, and the authors assume as the reference value for their scheme.

However, the results of the previous sections suggest that the stability properties of the kernel can be profitably used, with an appropriate choice of the parameter, to avoid the pairing instability when dealing with tests of hydrodynamic instabilities in which cold flows are present.

Finally, the density estimates of Table 3 suggest that great care should be taken when judging the goodness of a particular kernel on the basis of its performance in terms of density estimates. The results of the 3D Sod shock tube clearly indicate that there can be a large difference between the simulation and the expected solution profile of some hydrodynamic variable, such as pressure or thermal energy, while at the same time the difference in the corresponding density profile is much smaller.

### 3.3 Sedov blast wave

The Sedov blast wave test is used to validate, in three dimensions, the code's capability in the strong shock regime. The test consists of an amount of energy injected at into a very small volume of an ambient medium of uniform density . The spherically symmetric shock propagates outward from the initial volume, and at time the shock front is located at the radius (Sedov 1959)

$$r_s(t) \,=\, \beta \left(\frac{E\, t^2}{\rho}\right)^{1/5} \qquad (42)$$

where $\beta \simeq 1.15$ for $\gamma = 5/3$.

Previous investigations (Rosswog & Price 2007; Merlin et al. 2010) showed that, owing to the large discontinuities initially present in the thermal energy, incorporating an artificial conduction term in the energy equation greatly improves the description of the shock front in the simulations. Without the presence of this term, the initially large discontinuity in the thermal energy will soon give rise to a disordered particle distribution thus degrading the shock profile (Rosswog & Price 2007).

The initial setup of the test is realized as follows. An HCP lattice of equal mass particles is arranged in a cubic box of side length unity. The particle masses are chosen so as to give , and periodic boundary conditions are imposed. The particle nearest to the position is chosen as the center particle. In order to consistently represent a point-like explosion at the given numerical resolution, the particles within the kernel radius of the center particle are given an initial thermal energy such that the total injected energy is . This blast wave energy is distributed among the neighboring particles not uniformly but with a weight proportional to .
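The kernel-weighted energy injection can be sketched as follows (Python; the particle set, smoothing length and blast energy are arbitrary example values, and taking the weight proportional to the cubic spline kernel value is an assumption of this sketch):

```python
import numpy as np

def w_cs_3d(q):
    # unnormalized M4 cubic spline (only relative weights matter here)
    if q < 1.0:
        return 1.0 - 1.5*q*q + 0.75*q**3
    if q < 2.0:
        return 0.25*(2.0 - q)**3
    return 0.0

def inject_energy(pos, center, h, E_tot):
    # give each neighbor a kernel-weighted share of the blast energy
    r = np.linalg.norm(pos - center, axis=1)
    w = np.array([w_cs_3d(q) for q in r / h])
    u = np.zeros(len(pos))
    mask = w > 0.0
    u[mask] = E_tot * w[mask] / w[mask].sum()  # specific energies (unit mass)
    return u

rng = np.random.default_rng(2)
pos = rng.uniform(-1.0, 1.0, (500, 3))   # example particle distribution
u = inject_energy(pos, np.zeros(3), 0.3, 1.0)
```

By construction the injected energy sums exactly to the prescribed total, and only particles inside the kernel radius of the center receive a share.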

Kernel | | |
---|---|---|---
CS | 1.005 ± 0.035 | 1.002 ± 0.019 | 1.0018 ± 0.012
LIQ | 1.06 ± 0.028 | 1.024 ± 0.016 | 1.01 ± 0.011
CRT | 1.079 ± 0.039 | 1.035 ± 0.020 | 1.016 ± 0.012
M5 | 1.022 ± 0.052 | 1.003 ± 0.024 | 1.0008 ± 0.015

Using this initial condition set-up, we ran three test simulations, which differ in the choice of the adopted kernel and neighbor number. For the CS kernel, we use neighbors. We also ran two other test cases, using the CRT and kernels with neighbors. All of the simulations were performed using the implemented AC scheme and the time-dependent AV formulation with parameters given by .

Fig. 9 shows the radial density profiles at for the different test runs. The solid black line represents the expected analytic solution, with the shock front located at . Radial simulation profiles were obtained by averaging, for each radial bin , SPH densities calculated from the particle distributions at a set of grid points uniformly spaced in angular coordinates; these are located on the surface of a sphere with radius . The radial spacing is not uniform but is chosen so as to guarantee an accurate sampling of the density in the proximity of the shock front, with about radial bins between and .
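The shell-averaging step can be sketched as follows (Python; the angular grid sizes and the test density field are illustrative assumptions): for a spherically symmetric field, the average over each shell returns the radial profile exactly.

```python
import numpy as np

def shell_points(r, n_theta=8, n_phi=16):
    # grid points uniformly spaced in angular coordinates on a sphere
    th = np.linspace(0.0, np.pi, n_theta)
    ph = np.linspace(0.0, 2.0*np.pi, n_phi, endpoint=False)
    T, P = np.meshgrid(th, ph)
    return np.column_stack([r*np.sin(T).ravel()*np.cos(P).ravel(),
                            r*np.sin(T).ravel()*np.sin(P).ravel(),
                            r*np.cos(T).ravel()])

def radial_profile(radii, density_at):
    # average a density estimator over each spherical shell
    return np.array([np.mean([density_at(p) for p in shell_points(r)])
                     for r in radii])

# check with a known spherically symmetric field rho(r) = 1/(1 + r^2)
radii = np.array([0.5, 1.0, 2.0])
prof = radial_profile(radii, lambda p: 1.0 / (1.0 + p @ p))
```

In the actual procedure, `density_at` would be the SPH density estimate computed from the particle distribution at the given grid point.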

All of the simulations are in fair agreement with the analytical solution and the differences between the simulation profiles are negligible. At the shock front, the profiles exhibit a density jump of about , whereas the analytical solution gives a compression factor of . These results are in accordance with previous findings (Rosswog & Price 2007; Springel 2010b; Heß & Springel 2010) and indicate that for 3D SPH simulations of the Sedov-Taylor point explosion problem the simulation profiles converge to the analytical solution as the resolution is increased, with approximately particles (Rosswog & Price 2007) being required to fully resolve the shock front.

Fig. 9 can also be used to verify the behavior of the individual time-step algorithm. For problems involving very strong shocks, as demonstrated by Saitoh & Makino (2009), individual particle time-steps must be properly restricted so as to ensure that in the proximity of the shock front they satisfy the local Courant condition; otherwise the integration becomes inaccurate. In fact, SPH simulations using individual time-steps, but without an appropriate limiter, fail to predict the expected solution profile (see Fig. 3 of Saitoh & Makino 2009). The profiles of Fig. 9 demonstrate the correctness of the algorithm used to update the time-steps, although it differs from that devised by Saitoh & Makino (2009).

The present scheme adopts individual particle time-steps that can vary in power-of-two subdivisions of the largest allowed time-step (Hernquist & Katz 1989). At each step, particles whose time bin is synchronized with the current time are defined as 'active', and their hydrodynamic quantities, as well as their smoothing lengths and densities, are consistently updated. Non-active particles that are neighbors of an active particle are defined as 'low-order active' particles, and their hydrodynamic variables, as well as their time-step constraints (but not their forces), are recalculated. This criterion is applied regardless of the local shock conditions, and no particular conditions depending on are imposed.
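A minimal sketch of the power-of-two binning (Python; only the bin bookkeeping is shown, with illustrative helper names, since the code's actual data structures are not described in the text):

```python
def time_bin(dt_wanted, dt_max):
    # deepest power-of-two subdivision of dt_max that fits the wanted step:
    # the particle is assigned dt = dt_max / 2**b
    b = 0
    while dt_max / 2**b > dt_wanted:
        b += 1
    return b

def is_active(bin_b, step, max_bin):
    # with a global integer step counter advancing in units of the smallest
    # allowed time-step dt_max / 2**max_bin, a particle in bin b is
    # synchronized (i.e. 'active') every 2**(max_bin - b) steps
    return step % 2**(max_bin - bin_b) == 0

# a particle wanting dt = 0.3 with dt_max = 1.0 lands in bin 2 (dt = 0.25)
b = time_bin(0.3, 1.0)
```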

### 3.4 The blob test

The blob test is another hydrodynamic test where results of standard SPH differ significantly from those produced by grid based simulations (Agertz et al. 2007; Read, Hayfield & Agertz 2010; Cha, Inutsuka & Nayakshin 2010; Heß & Springel 2010; Murante et al. 2011). The test consists of a gas cloud of radius placed in an external medium ten times hotter and less dense than the cloud, so as to satisfy pressure equilibrium. A sufficiently large wind velocity is given to the hot low-density medium so that a strong shock wave strikes the cloud. The interaction of the cloud with the supersonic medium produces a number of effects that are of interest in an astrophysical context, such as gas stripping and fragmentation.

Initially, the blob is perturbed by the development of Richtmyer-Meshkov and Rayleigh-Taylor instabilities (Agertz et al. 2007). Afterwards, large-scale KH instabilities are created at the cloud surface by the shear flows due to the supersonic wind. This non-linear phase is expected to develop over a characteristic KH time-scale (Agertz et al. 2007), which depends on the density contrast between the cloud and the ambient medium, after which cloud disruption takes place.
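For reference, the standard linear-theory expression for this growth time (in the form adopted by Agertz et al. 2007; the symbols below are the conventional ones, supplied here for illustration) reads:

```latex
% KH growth rate of a single mode of wavelength \lambda at an interface with
% shear velocity v between media of densities \rho_{\rm cl} and \rho_{\rm amb}:
\omega_{\rm KH} = \frac{2\pi}{\lambda}\, v\,
  \frac{(\rho_{\rm cl}\,\rho_{\rm amb})^{1/2}}{\rho_{\rm cl}+\rho_{\rm amb}} ,
\qquad
\tau_{\rm KH} \equiv \frac{2\pi}{\omega_{\rm KH}}
  = \frac{\lambda\,(1+\chi)}{v\,\chi^{1/2}} ,
\qquad
\chi \equiv \frac{\rho_{\rm cl}}{\rho_{\rm amb}} ,
```

so that, at fixed wavelength and shear velocity, a larger density contrast implies a longer disruption time-scale.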

In order to investigate the capability of the AC-SPH code to properly follow the hydrodynamics of the blob test, we compare results extracted from a set of SPH simulations realized with the same initial conditions but with different numerical parameters. The numerical setup of the test is the same as in Read, Hayfield & Agertz (2010), to which we refer for more details. A spherical cloud is placed in a periodic, rectangular box; the cloud is denser and colder than the ambient medium, with the density and temperature contrasts chosen so that the two phases are in pressure equilibrium, and the ambient medium is given a supersonic wind velocity. An HCP lattice of equal-mass particles is constructed in order to satisfy these density requirements, using for this version of the test the same total number of particles as in Heß & Springel (2010). Finally, a velocity perturbation is imposed at the cloud surface in order to trigger the development of an instability; amplitude and modes are given in appendix B of Read, Hayfield & Agertz (2010).
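As an illustration of these initial conditions, the following sketch checks pressure equilibrium and derives the wind speed from an assumed Mach number. The specific values (χ = 10, γ = 5/3, M = 2.7) are the commonly adopted blob-test parameters of Agertz et al. (2007), assumed here for illustration rather than taken from this section.

```python
import math

gamma = 5.0 / 3.0
chi = 10.0                                 # density contrast rho_cl / rho_amb
rho_amb, T_amb = 1.0, 1.0                  # ambient medium (code units)
rho_cl, T_cl = chi * rho_amb, T_amb / chi  # cloud: denser and colder

# Pressure equilibrium: with P proportional to rho * T, the cloud and the
# ambient medium must have equal pressures.
P_amb = rho_amb * T_amb
P_cl = rho_cl * T_cl
assert math.isclose(P_amb, P_cl)

# Ambient sound speed and the wind velocity required for the assumed
# Mach number of the hot medium.
c_s = math.sqrt(gamma * P_amb / rho_amb)
mach = 2.7
v_wind = mach * c_s
print(f"c_s = {c_s:.3f}, v_wind = {v_wind:.3f}")  # c_s = 1.291, v_wind = 3.486
```

With these contrasts the wind is supersonic with respect to the ambient medium but, because the cloud's sound speed is a factor χ^(1/2) smaller, the shock driven into the cloud is even stronger.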

We compare results from SPH simulations where three different spline kernels have been used: CS, CRT and M5. We use 128 neighbors for the simulations with the CS and CRT kernels and 220 for the run with the M5 kernel, so that the ratio of the smoothing length to the mean interparticle separation is the same for all the runs. For the CS kernel, we run a simulation where the AC term (18) is absent from the thermal energy evolution equation (CS-128 NOAC), and three other simulations (CS-128, CRT-128, M5-220) in which the AC term (18) is incorporated in the energy evolution equation. The AV parameters are set to the same values in all of the simulations.

For several runs, Fig. 10 shows the gas density maps at three different times, the times being given in units of the KH time-scale.