GraviDy, a GPU modular, parallel direct-summation N-body integrator: Dynamics with softening
Abstract
A wide variety of outstanding problems in astrophysics involve the motion of a large number of particles under the force of gravity. These include the global evolution of globular clusters, tidal disruptions of stars by a massive black hole, the formation of protoplanets and sources of gravitational radiation. The direct summation of gravitational forces is a complex problem with no analytical solution; it can only be tackled with approximations and numerical methods. To this end, the Hermite scheme is a widely used integration method. With different numerical techniques and special-purpose hardware, it can be used to speed up the calculations. But these methods tend to be computationally slow and cumbersome to work with. We present a new GPU, direct-summation N-body integrator written from scratch and based on this scheme, which includes relativistic corrections for sources of gravitational radiation. GraviDy has high modularity, allowing users to readily introduce new physics; it exploits the available computational resources and will be maintained by regular updates. GraviDy can be used in parallel on multiple CPUs and GPUs, with a considerable speed-up benefit. The single-GPU version is between one and two orders of magnitude faster than the single-CPU version. A test run using 4 GPUs in parallel shows a speed-up factor of about 3 as compared to the single-GPU version. The conception and design of this first release is aimed at users with access to traditional parallel CPU clusters or computational nodes with one or a few GPU cards.
keywords:
Methods: numerical – Stars: kinematics and dynamics – Celestial mechanics

1 Motivation
The dynamical evolution of a dense stellar system, such as e.g. a globular cluster or a galactic nucleus, has been addressed extensively by a number of authors. For Newtonian systems consisting of more than two stars we must rely on numerical approaches which provide us with solutions that are more or less accurate. In this sense, one could make the following coarse categorisation of integration schemes for pure stellar dynamics: those which are particle-based and those which are not. In the latter, the system is treated as a continuum, so that while we know the general properties of the stellar system, such as the mean stellar density or the average velocity dispersion, we do not have information about specific orbits of stars. To this group belongs the direct integration of the Fokker-Planck equation (Inagaki & Wiyanto 1984; Kim et al. 1998) or of its moments (Amaro-Seoane et al. 2004; Schneider et al. 2011), including Monte Carlo approaches to the numerical integration of this equation (Spitzer & Hart 1971). A particle-based algorithm, however, assumes that a particle is tracing a star, or a group of them. In this group, the techniques go back to the early 40's and involved light bulbs (Holmberg 1941). The first computer simulations were performed at the Astronomisches Rechen-Institut, in Heidelberg, Germany, by von Hoerner (1960, 1963), using 16 and 25 particles. These first steps led to the modern N-body algorithms.
We can distinguish two types of N-body algorithms: the so-called collisionless ones, where a star just sees the background potential of the rest of the stellar system (e.g. the Barnes-Hut treecode or the fast multipole method, Barnes & Hut 1986; Greengard 1987, which scale as $\mathcal{O}(N \log N)$ and $\mathcal{O}(N)$, with $N$ the particle number, respectively), and the more expensive collisional ones, or "direct-summation" algorithms, in which one integrates all gravitational forces for all stars to take into account the graininess of the potential, with individual time steps to avoid large numerical errors. This is important in situations in which close encounters between stars play a crucial role, such as in galactic nuclei and globular clusters, because of the exchange of energy and angular momentum. The price to pay, however, is that they typically scale as $\mathcal{O}(N^2)$.
A very well known example is the family of direct-summation N-body integrators of Aarseth (see e.g. Aarseth 1999; Spurzem 1999; Aarseth 2003). However, the $\mathcal{O}(N^2)$ scaling requires supercomputers, such as traditional Beowulf clusters, which in turn requires a parallelisation of the code, such as the version of Nbody6 developed by Spurzem and collaborators, Nbody6++.
On the other hand, modern graphics processing units (GPUs) offer a very interesting alternative. They have been mostly used in game consoles, embedded systems and mobile phones, and were originally designed to perform calculations related to 3D computer graphics. Nevertheless, due to their highly parallel structure and computational speed, they can be used very efficiently for complex algorithms. This involves dealing with CUDA, the parallel computing architecture developed by NVIDIA.
There has recently been an effort at porting existing codes to this architecture, like e.g. the work of Portegies Zwart et al. (2007); Hamada & Iitaka (2007); Belleman et al. (2008) on single nodes, or using large GPU clusters (Berczik et al. 2011; Nitadori & Aarseth 2012; Capuzzo-Dolcetta et al. 2013), and, recently, the work by Berczik et al. (2013) using up to 700 thousand GPU cores for a simulation with a few million bodies with the φGPU code.
Large-scale simulations (meaning a large number of particles) have recently seen an important improvement with the work of Wang et al. (2015, 2016). In their more recent work of 2016, Wang and collaborators integrated systems of one million bodies in a globular cluster simulation, using from 2,000 to 8,600 hours of computing time.
In this paper we present the initial version of GraviDy (Gravitational Dynamics), a highly modular, direct-summation N-body code written from scratch using GPU technology, ready to integrate a pure dynamical gravitational system. In section 2 we present in detail the structure of the code, the most relevant and innovative parts of the algorithm, and the implementation of the scheme in the idiom of GPU computing. In section 3 we check our code with a series of well-known tests of stellar dynamics for a dense stellar system, evaluate global dynamical quantities, and compare the performance of the GPU version against the CPU one. In section 5 we present the implementation of the relativistic corrections, along with a set of tests. In section 6 we summarise our work and give a short description of the immediate goals that will be described in upcoming publications.
We have decided to focus on single-node clusters (meaning one or more GPU cards embedded in a host PC) and traditional multi-CPU clusters (e.g. Beowulf clusters), since this setup is the most common for users who aim to run middle-scale simulations. In the appendices we give a succinct description of how to download the code, how to compile it, and the structure of the data. We also include a set of Python tools to analyse the results. Moreover, we introduce a simple visualisation tool based on OpenGL, which can provide us with information that is sometimes difficult to obtain with two-dimensional graphics. In particular, we have made a significant effort in documentation and modularity, since it is our wish that the code be used, shaped and modified at will.
2 The current algorithm
2.1 The integration scheme
In this section we give a very brief introduction to the numerical N-body problem. We refer the reader to e.g. Aarseth (2003); Heggie & Hut (2003) or the excellent online course "The art of computational science". The acceleration of a particle $i$ is given by Newton's law,

$$\ddot{\mathbf{r}}_i = G \sum_{j=1,\, j \neq i}^{N} m_j \frac{\mathbf{r}_j - \mathbf{r}_i}{|\mathbf{r}_j - \mathbf{r}_i|^3}, \qquad (1)$$

where $G$ is the gravitational constant, $m_j$ is the mass of the $j$-th particle and $\mathbf{r}_j$ its position. We denote vectors with bold fonts. The basis of the problem is purely dynamical, because the orbital evolution is determined exclusively by the gravitational interaction.
The total energy of the system is a useful quantity to keep track of at every time step in the integration. It is given by the expression

$$E = \frac{1}{2} \sum_{i=1}^{N} m_i v_i^2 - \sum_{i=1}^{N} \sum_{j<i} \frac{G\, m_i\, m_j}{|\mathbf{r}_i - \mathbf{r}_j|}, \qquad (2)$$

where $v_i$ is the velocity of particle $i$.
To numerically integrate the system of equations we adopt the 4th-order Hermite integrator (H4 from now onwards) presented in Makino (1991); Makino & Aarseth (1992) (see also Aarseth 1999, 2003). H4 is a predictor-corrector scheme, which means that we use an extrapolation of the equations of motion to get a predicted position and velocity at some specific time. We then use this information to get the new accelerations of the particles, and later correct the predicted values using an interpolation based on finite-difference terms. One can adjust polynomials to the evolution of the gravitational forces over time because the force acting on each particle changes smoothly (which is the reason why adding a very massive particle representing e.g. a supermassive black hole will sometimes give you a headache). To advance the system to the following integration time we approximate the equations of motion with an explicit polynomial. This prediction is less accurate, but it is improved in the corrector phase, which consists of an implicit polynomial that requires good initial values to achieve quick convergence.
This is a fourth-order algorithm in the sense that the predictor includes the contributions of the third-order polynomial and, after deriving the accelerations, adds a fourth-order corrector term. In the remainder of this paper we focus on the implementation of the scheme in our GPU (and CPU) code and on how to make the most of all the computational resources available. For a detailed description of the idea behind H4, we refer the reader to the article in which it was presented for the first time, Makino & Aarseth (1992).
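As a concrete illustration, the predict-evaluate-correct cycle of H4 can be sketched in a few lines. The following is a minimal shared-time-step sketch in Python/NumPy, written by us for clarity; it is not GraviDy's actual C++/CUDA implementation, and all function and variable names are illustrative:

```python
import numpy as np

def acc_jerk(pos, vel, mass, eps2=0.0):
    """Direct-summation acceleration and jerk: the O(N^2) hotspot (G = 1)."""
    n = len(mass)
    acc = np.zeros((n, 3))
    jrk = np.zeros((n, 3))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rij = pos[j] - pos[i]
            vij = vel[j] - vel[i]
            r2 = rij @ rij + eps2          # softened squared distance
            r3 = r2 ** 1.5
            rv = rij @ vij
            acc[i] += mass[j] * rij / r3
            jrk[i] += mass[j] * (vij / r3 - 3.0 * rv * rij / (r2 * r3))
    return acc, jrk

def hermite_step(pos, vel, acc, jrk, mass, dt, eps2=0.0):
    """One shared-time-step H4 cycle: predict, evaluate, correct."""
    # Predictor: third-order Taylor expansion of positions and velocities.
    pos_p = pos + dt * vel + dt**2 / 2 * acc + dt**3 / 6 * jrk
    vel_p = vel + dt * acc + dt**2 / 2 * jrk
    # Evaluation: new forces at the predicted values.
    acc1, jrk1 = acc_jerk(pos_p, vel_p, mass, eps2)
    # Hermite interpolation of the 2nd and 3rd force derivatives.
    a2 = (-6.0 * (acc - acc1) - dt * (4.0 * jrk + 2.0 * jrk1)) / dt**2
    a3 = (12.0 * (acc - acc1) + 6.0 * dt * (jrk + jrk1)) / dt**3
    # Corrector: add the high-order terms to the prediction.
    vel_c = vel_p + dt**3 / 6 * a2 + dt**4 / 24 * a3
    pos_c = pos_p + dt**4 / 24 * a2 + dt**5 / 120 * a3
    return pos_c, vel_c, acc1, jrk1

def total_energy(pos, vel, mass):
    """Kinetic plus potential energy (G = 1), cf. Eq. (2)."""
    ke = 0.5 * np.sum(mass * np.sum(vel**2, axis=1))
    pe = 0.0
    for i in range(len(mass)):
        for j in range(i + 1, len(mass)):
            pe -= mass[i] * mass[j] / np.linalg.norm(pos[i] - pos[j])
    return ke + pe
```

On a circular two-body orbit in N-body units this sketch conserves the total energy of Eq. (2) to better than one part in $10^9$ over one time unit with a small shared time step, a convenient first sanity check for any H4 implementation.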
An advantage of choosing H4 is that we can use the family of Aarseth's codes (among others) as a testbed for our implementation. These codes, some of which adopt H4, but not all of them, have been in development for more than 50 years. The codes are public and have been widely used, validated, improved and checked a number of times by different people; they have been compared to other codes and even to observational data. In this regard, the access to the sources of these codes is an asset for testing our implementation and parallelisation of H4.
2.2 Numerical strategy
A main goal in the development of GraviDy is its legibility. We have focused on making it easy to read and modify by other users or potential future developers without compromising the computational performance of the algorithm. This means that we have made a significant effort to keep a clear structure in the source code so that, in principle, it can be well understood with relatively little effort by somebody who has not previously worked with it. The modularity of the code should allow new users to easily implement new physics or features, or adapt it to the purposes they seek. It is unfortunately easy to lose, at least to a certain extent, either clarity in coding or performance, when trying to have both in a code. For instance, if we want to obtain the best possible performance, we have to use low-level instructions that to an outside user might look difficult to understand when reading or trying to modify the source code. On the other hand, naming conventions for files, functions and variables might become a burden for certain applications.
While most existing N-body codes have achieved a certain balance between the two, it is difficult to adapt them to new architectures and technology to boost their performance. For the development of GraviDy, we have followed these steps:
(i) serial implementation of the initial version,
(ii) profiling and assessment of the integrator,
(iii) algorithm classification and finding the hotspots,
(iv) optimisation of the bottlenecks.
2.3 Particular choices
Object-oriented programming:
Object-oriented programming (OOP) is a powerful paradigm that allows us to program an algorithm as interactions between objects. In GraviDy, we use OOP. The reason behind this choice is related to our parallelisation scheme, which is described below, and more concretely to the data structure we have chosen.
We have mainly two possible choices for the data structures: classes with arrays, or arrays of objects, which follow the basic ideas of Structure of Arrays (SoA) and Array of Structures (AoS), respectively. For GraviDy we have chosen classes with arrays for the main units of the program structure. It is a good strategy to minimise the data transfer between host and device, so as to avoid large communication times.
It is not necessary to update the forces of all the particles at every step, so we encapsulate the information of the active particles and then transfer only that AoS to the GPU. All the remaining attributes of the bodies (i.e. those not transferred to the GPU) are just class members (arrays) and need to reside only on the host CPU. An example of this are large linear arrays, such as the time steps of the particles.
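The two layouts can be contrasted with a small sketch. This is illustrative Python/NumPy, not GraviDy's C++ classes; all names (Particles, pack_active, the predicted record) are ours:

```python
import numpy as np

# SoA flavour: each attribute is a separate contiguous array, kept as a
# class member on the host.
class Particles:
    def __init__(self, n):
        self.pos = np.zeros((n, 3))
        self.vel = np.zeros((n, 3))
        self.mass = np.zeros(n)
        self.dt = np.zeros(n)   # per-particle time step: host-only data

# AoS flavour: one compact record per particle, holding only what the
# force kernel needs, so a single copy moves an active particle's data.
predicted = np.dtype([("pos", "3f8"), ("vel", "3f8"), ("mass", "f8")])

def pack_active(p, active):
    """Gather the active particles into a contiguous AoS buffer, the
    shape of the data that would be transferred to the device."""
    buf = np.empty(len(active), dtype=predicted)
    buf["pos"] = p.pos[active]
    buf["vel"] = p.vel[active]
    buf["mass"] = p.mass[active]
    return buf
```

Only the packed buffer needs to cross the PCIe bus; host-only attributes such as the per-particle time steps never leave the SoA side.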
 Class distribution:

Since our code uses OOP, we briefly describe the interaction between the classes in Fig. 1. The main header, common.hpp, contains the definition of the constants, structures, macros, etc. The idea behind this model is to be able to easily add more features in upcoming versions of our code, from new utility functions to new integration schemes.
Every class is in charge of a different mechanism, from getting the integration options from the command line to the different integration methods, using parallelism or not.
Double precision (DP) over single precision (SP):

The use of DP or SP in N-body codes has already been addressed by different authors in the related literature (see e.g. Hamada & Iitaka 2007; Nitadori 2009; Gaburov et al. 2009). Using DP is not the best scenario for GPU computing, because it reduces the maximum performance that a code can reach: we can attain only about half of the theoretical peak, which depends on each individual card; the NVIDIA Tesla C2050/M2050, for example, delivers in DP only half of its SP peak processing power in GFLOPs.
We choose DP for a more accurate numerical representation, because it provides us with a simple way of getting better energy conservation, at the expense of performance. There are different approaches, like mixed precision (Aarseth 1985) and pseudo-DP (Nitadori 2009, currently used in the φGPU code, Berczik et al. 2011). These offer a relatively more accurate representation (compared to SP) without a big impact on performance.
2.4 The implementation scheme
These are the steps that GraviDy follows when running a simulation:
(i) Memory allocation of the CPU and GPU arrays.
(ii) Initialisation of the variables related to the integration.
(iii) Copy the initial values of positions, velocities and masses of the particles to the GPU to calculate the initial system energy, and calculate the initial acceleration and its first time derivative, the so-called "jerk". The cost of this force calculation is $\mathcal{O}(N^2)$.
(iv) Copy the initial forces from the GPU to the CPU.
(v) Find the particles to move in the current integration time, $N_{\rm act}$, with a cost $\mathcal{O}(N)$.
(vi) Save the current values of the forces, to use them in the correction step, with a cost $\mathcal{O}(N_{\rm act})$.
(vii) Integration step:
(a) Predict the particles' positions and velocities up to the current integration time, with cost $\mathcal{O}(N)$.
(b) Copy the predicted positions and velocities of all the particles from the CPU to the GPU.
(c) Update the $N_{\rm act}$ particles on the GPU, which is explained in detail in section 2.5:
- Copy the $N_{\rm act}$ particles to a temporary array on the GPU.
- Calculate the forces between the particles on the GPU, with a cost $\mathcal{O}(N_{\rm act}\, N)$.
- Reduce the forces on the GPU.
- Copy the new forces from the GPU to the CPU.
(d) Correct the position and velocity of the updated particles on the CPU, with a cost $\mathcal{O}(N_{\rm act})$.
(e) Copy the positions and velocities of the corrected particles from the CPU to the GPU.

GraviDy adheres to the usual good practices of the beginning of the development of every direct-summation N-body code:

Direct summation, also known as the particle-particle strategy. This approach is the simplest way to address the task of calculating the force exerted by all the bodies on a single body that we need to update at a certain time step. This brute-force procedure has order $\mathcal{O}(N^2)$, which represents the bottleneck of the algorithm.

Softened point-mass potential, as an alternative in this version of the code to a proper regularisation of close encounters. All particles are represented by a dimensionless point mass. We introduce a softening parameter ($\epsilon$) in the distance calculation between two bodies when we compute the new forces,

$$\ddot{\mathbf{r}}_i = G \sum_{j=1,\, j \neq i}^{N} m_j \frac{\mathbf{r}_j - \mathbf{r}_i}{\left(|\mathbf{r}_j - \mathbf{r}_i|^2 + \epsilon^2\right)^{3/2}}, \qquad (3)$$

so as to handle the situation in which two bodies get too close.

Block time steps. It is not straightforward to have an N-body code using individual time steps in parallel computing, because the idea behind massive parallelism is to perform the same task on different data chunks. We use the block time steps algorithm (Press 1986) to update groups of particles simultaneously. This scheme has been adopted by a number of authors (Portegies Zwart et al. 2001b; Hut 2003; Aarseth 1999, 2003; Harfst et al. 2008; Nitadori & Aarseth 2012).
The algorithm uses H4 to integrate the evolution. The description of all the equations for each step is presented in Makino & Aarseth (1992).
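The essential idea of block time steps is to quantise each particle's natural time step to a power of two, so that particles whose steps fall in the same block can be advanced together. A minimal sketch of this quantisation; the bounds dt_min and dt_max are illustrative choices of ours, not GraviDy's actual limits:

```python
import math

def block_time_step(dt_raw, dt_min=2.0**-23, dt_max=2.0**-3):
    """Round a raw time-step estimate down to the nearest power of two,
    clipped to [dt_min, dt_max]. Particles that end up with the same
    power of two form a block and are updated simultaneously."""
    if dt_raw >= dt_max:
        return dt_max
    if dt_raw <= dt_min:
        return dt_min
    return 2.0 ** math.floor(math.log2(dt_raw))
```

Rounding down (never up) guarantees that the quantised step is at most the raw estimate, so the accuracy criterion that produced dt_raw is still respected.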
2.5 The parallelisation scheme
As we have already mentioned, the bottleneck of any N-body code is the force calculation, and GraviDy is no different in this respect: a quick profiling test of our serial code shows that nearly all of the execution time is spent in this calculation. We hence introduce a parallelisation scheme, which we now discuss in detail.
GraviDy is based on a direct-summation H4 integrator and uses block time steps, so that in the force-update process we have a nested loop for every active particle (which we will refer to from now on with the subscript "act"). This means that for every $i$-particle which needs to be updated we have a loop running over the whole set of particles of our system to accumulate the interaction of every $j$-particle with the $i$-particle.
The whole process scales with the number of particles, as we can see in Figure 2.
We then need to parallelise the loop corresponding to each of the active particles. For each of them we circulate through all of the $j$-particles, and this is the process which needs to be parallelised. Although this is in principle a straightforward scheme, since we focus on GPUs, we run into the following issues:

A GPU can launch a large number of threads, easily up to thousands of them. In our scenario, however, the number of active particles is very small compared to the total number of particles ($N_{\rm act} \ll N$). This has an impact on the performance: we do not use all the available threads when integrating the grid of forces, and when the number of active particles is very low our occupancy will be poor.

On the contrary, in the case in which we have to move all particles, we will have an $\mathcal{O}(N^2)$ parallelism, which maximises the GPU power. In this case, however, the memory bandwidth is the limiting factor, since every particle requires the information of all the other particles.
It is better to have all particles handled by the GPU, and not only the active ones: even though this subgroup is smaller, or even much smaller, it is more efficient from the point of view of the GPU, since the occupancy is improved. The parallelisation happens at the $j$-level (i.e. when calculating the forces between the active particles and the rest of the system). This idea was first implemented by Nitadori (2009), and has proven to yield very good performance.
The main ideas behind the parallelisation are the force calculation and the summation of the forces (the "reduction"):

Force calculation: the interaction between the $i$-particle and the rest of the system is distributed among the GPU threads, which means that we launch $N$ threads, and each of them calculates the contribution of its $j$-particle to the $i$-particle. After this calculation, we have an array in which each element contains the contribution of one of the $j$-particles. This corresponds to the upper part of Fig. 3, which illustrates a setup with two GPUs. After the force calculation we end up with an array containing the information on the forces for all the active particles.

Force reduction: in the lower part of the same figure we depict the summation of all of these forces, which is also performed in parallel, using the block distribution of the GPU for this task.
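The j-parallel force kernel and its reduction can be emulated on the CPU with a small sketch. This is illustrative Python, not the CUDA kernel; the "threads" are just slices of the j-loop, and all names are ours:

```python
import numpy as np

def force_jparallel(pos, mass, i, n_threads=4, eps2=1e-4):
    """Emulate the j-parallel scheme: each 'thread' accumulates the
    contribution of one slice of the system on active particle i, and
    the partial sums are then reduced, mirroring the on-GPU reduction."""
    chunks = np.array_split(np.arange(len(mass)), n_threads)
    partial = np.zeros((n_threads, 3))
    for t, idx in enumerate(chunks):      # these loops run concurrently on a GPU
        for j in idx:
            if j == i:
                continue
            rij = pos[j] - pos[i]
            r2 = rij @ rij + eps2         # softened distance, cf. Eq. (3)
            partial[t] += mass[j] * rij / r2**1.5
    return partial.sum(axis=0)            # the reduction step

def force_direct(pos, mass, i, eps2=1e-4):
    """Plain serial summation, used here only as a cross-check."""
    f = np.zeros(3)
    for j in range(len(mass)):
        if j != i:
            rij = pos[j] - pos[i]
            r2 = rij @ rij + eps2
            f += mass[j] * rij / r2**1.5
    return f
```

Since the partial sums are independent, the slicing and the final reduction reproduce the serial result up to floating-point rounding, which is why the scheme maps so naturally onto thread blocks.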
3 The three flavours of GraviDy: Tests
Thanks to the fact that a number of codes implement approaches similar to ours, we are in a position to run exhaustive tests on GraviDy. Indeed, the global dynamics of a dense stellar system (typically an open cluster, because of the limitation in the number of particles we can integrate) has been addressed numerically by a large number of authors in the field of stellar dynamics. We have therefore decided to focus on the dynamical evolution of a globular cluster with a single stellar population. We present in this section a number of tests to measure the performance and accuracy of the three versions of GraviDy, using different numbers of particles. Our goal is to offer an open-source code that fits different needs and requirements. This is why this first release of GraviDy offers three different flavours, which are general enough for different users with different hardware configurations. These are:
(i) The CPU version
consists of the most basic implementation in this work. This version uses OpenMP and is intended for systems without graphics processing units, but with many cores. This flavour can be used for debugging purposes by disabling the OpenMP directives (#pragma omp). It is the basis for our further development of the code.
(ii) The MPI version
is virtually the same serial implementation, but with MPI directives added to improve the performance of the hotspots of the algorithm, in particular the force and energy calculations. In this case we use the MPI library, and hence it can be run on a single machine using a certain number of cores as "slave" processes, or on a large cluster with separate machines as slaves.
(iii) The GPU version
discards all CPU usage and relies only on the GPU to integrate all gravitational interactions. As we mention later, we tried to combine the CPU with the GPU, but we did not see any benefit in it, and the approach was hence discarded. We use CUDA to interact with NVIDIA graphics processing units. The code is designed to detect the number of GPUs present and to use all of them, unless otherwise requested by the user. This means that this version can use in parallel as many GPU cards as the host computer can harbour, in a very simple and efficient way. The communication between the different GPU cards in the host computer is internal and runs through Peripheral Component Interconnect Express (PCIe), a high-speed serial computer expansion bus standard, so that the data flows rapidly because of the low overhead.
The specifications of the hardware (CPU, GPU and available RAM) and operating systems we used are summarised in table 1.
System A  datura (165 nodes) 

CPU  Intel(R) Xeon(R) CPU X5650 @ 2.67GHz (24 cores) 
GPU  none 
RAM  24 GB 
OS  Scientific Linux 6.0 
System B  gpu01 (1 node) 
CPU  Intel(R) Xeon(R) CPU E5504 @ 2.00GHz (4 cores) 
GPU  4 x Tesla M2050 @ 575 Mhz (448 cores) 
RAM  24 GB 
OS  Scientific Linux 6.0 
System C  krakatoa (1 node) 
CPU  AMD Opteron 6386SE @ 2.8 GHz (32 cores) 
GPU  2 x Tesla K20c @ 706 MHz (2496 cores) 
RAM  256 GB 
OS  Debian GNU/Linux 8 
System D  sthelens (1 node) 
CPU  Intel(R) Xeon(R) CPU E52697v2 (IvyBridge) @ 2.7GHz (24 cores) 
GPU  2 x Tesla C2050 / C2070 @ 1.15 Ghz (448 cores) 
RAM  256 GB 
OS  Debian GNU/Linux 8 
3.1 Initial conditions and body units
For all of our tests we choose an equal-mass Plummer sphere (Plummer 1911), for the sake of comparison with other codes. We choose standard N-body units (NBU, from here on) for the calculations and the resulting output (Hénon 1971; Heggie & Mathieu 1986). This means that:
(i) The total mass of the system is 1: $M = 1$.
(ii) The gravitational constant is set to 1: $G = 1$.
(iii) The total energy of the system is equal to $-1/4$: $E = T + U = -1/4$, with $T$ and $U$ the total kinetic and potential energy of the system, respectively.
(iv) The virial radius is set to $R_{\rm v} = 1$.
The Plummer spheres then have a fixed half-mass radius of $r_{\rm h} \approx 0.77$ and a Plummer radius of $a = 3\pi/16 \approx 0.59$.
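The conversion of an arbitrary particle set to these units can be sketched as follows. This is a generic Hénon-units rescaling in Python/NumPy written by us for illustration, not a routine of GraviDy or McLuster:

```python
import numpy as np

def kinetic(vel, mass):
    """Total kinetic energy T."""
    return 0.5 * np.sum(mass * np.sum(vel**2, axis=1))

def potential(pos, mass):
    """Total potential energy U (G = 1)."""
    pe = 0.0
    for i in range(len(mass)):
        for j in range(i + 1, len(mass)):
            pe -= mass[i] * mass[j] / np.linalg.norm(pos[i] - pos[j])
    return pe

def to_nbody_units(pos, vel, mass):
    """Rescale an arbitrary G = 1 system to Henon units: total mass 1,
    virial ratio T/|U| = 1/2 and total energy E = -1/4."""
    mass = mass / mass.sum()                       # M = 1
    # move to the centre-of-mass rest frame
    pos = pos - np.average(pos, axis=0, weights=mass)
    vel = vel - np.average(vel, axis=0, weights=mass)
    # scale velocities so that T = -U/2 (virial equilibrium)
    u = potential(pos, mass)
    vel = vel * np.sqrt(-0.5 * u / kinetic(vel, mass))
    # scale lengths (and velocities, to keep the virial ratio) so E = -1/4;
    # r -> r*s maps U -> U/s, and v -> v/sqrt(s) maps T -> T/s, so E -> E/s
    e = kinetic(vel, mass) + potential(pos, mass)
    s = e / -0.25
    return pos * s, vel / np.sqrt(s), mass
```

The length scaling works because both $T$ and $U$ transform by the same factor $1/s$, so choosing $s = E/(-1/4)$ pins the total energy without disturbing the virial ratio.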
We used the McLuster code by Küpper et al. (2011) to generate all the initial conditions for the tests performed in the current work.
3.2 Accuracy, performance and speed
For GraviDy, as we have seen, we have chosen an H4 integrator. The numerical error introduced hence scales as $\mathcal{O}(\Delta t^4)$ assuming a shared time step, which means that this holds only if all particles are updated at every integration step. Since we use a block time step scheme, certain groups of particles share a time step value, but not all of them. Thanks to this approach, the numerical error which we reach in our integrations is slightly smaller than the value mentioned previously.
A free parameter, $\eta$, was introduced in Makino & Aarseth (1992), which determines the calculation of every time step of the system, from the initial calculation to the update after every iteration. Hence, so as to assess an optimal value for it, we perform different tests to find a balance between good energy conservation and a minimum wall-clock time. We explore a range of values of $\eta$, integrating a series of systems with $N$ ranging between 1024 and 32768.
In the same figure we describe the performance as a function of $\eta$, using the wall-clock time in seconds needed by the code to reach one NBU for the same values of the parameter. We can see that the value of $\eta$ is inversely proportional to the time, since increasing its value results in a decreasing execution time: when we increase $\eta$ we implicitly increase the time step of every particle, so that one unit of time is reached sooner. We find an intermediate value of $\eta$ to be the best compromise for most of our purposes, yielding a good accuracy in most of the cases.
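The parameter $\eta$ enters through the widely used Aarseth time-step criterion, built from the force and its first three time derivatives. A small helper of our own making, following the standard formula in Aarseth (2003):

```python
import numpy as np

def aarseth_dt(a, a1, a2, a3, eta):
    """Aarseth time-step criterion for one particle:
    dt = sqrt(eta * (|a||a2| + |a1|^2) / (|a1||a3| + |a2|^2)),
    where a is the acceleration and a1..a3 its successive derivatives."""
    na, na1, na2, na3 = (np.linalg.norm(x) for x in (a, a1, a2, a3))
    return np.sqrt(eta * (na * na2 + na1**2) / (na1 * na3 + na2**2))
```

Since dt scales as sqrt(eta), a larger $\eta$ directly enlarges every particle's step, which is the behaviour discussed above: faster runs at the cost of energy conservation.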
To measure the execution speed of our code we perform a set of tests by integrating the evolution over one NBU of a Plummer sphere with different particle numbers. For the analysis, we discard the first time unit, since it is not representative: the system can display some spurious numerical behaviour resulting from the fact that it is not yet relaxed. When testing the parameters $\eta$ and $\epsilon$, we picked a later interval of time because we wanted to understand their impact not right at the beginning of the simulation: we allow the system to further relax so as to obtain a more realistic configuration. In particular, the distribution of time steps drifts away from the initial setup.
We display the wall-clock time of each integration in Fig. (5). We also display reference curves for different powers of $N$, multiplied by different factors to adapt them to the figure. We see that GraviDy scales very closely to $N^2$. The deviations arise from the fact that not all particles are being updated at every time step.
In Figure 6 we show the acceleration factor of all parallel scenarios as compared to the single-thread CPU case, which we use as a reference point. Due to the design of the code, the maximum performance is achieved with the largest particle numbers. The most favourable scenario for GraviDy is, as we can see in the figure, System B: the 4 GPUs available boost the performance by up to a factor of 193 as compared with the single-thread CPU case. A similar speed-up is achieved on System D, which reaches a factor of 92 with its 2 GPUs. The CPU-parallel version lies behind this performance, reaching only a factor of 58 for System A, using up to 240 cores.
3.3 Scaling of the three different flavours of the code
An obvious question for any user of a numerical tool is that of scaling. In this subsection we present, for the three different versions of GraviDy, how the wall-clock time scales as a function of the number of threads or cores, and what the acceleration of the multiple-GPU version of the code is as a function of the particle number, compared with a single-GPU run, which we use as a reference point.
In Fig. (7) we depict this information for the CPU, MPI and GPU versions. We can see in the CPU version that for small numbers of particles, in particular for 2k and 1k, we have an increase in the execution time with more threads, contrary to what we would expect. This is so because the amount of parallelism is not enough, and the code spends more time splitting data and synchronising threads than performing the task itself, a usual situation in tasks with a low level of computation.
The MPI version uses the same $j$-parallelisation idea as the GPU one. In this case the code splits the whole system among the available slaves (be they cores or nodes), performs the force calculation and finally sums up ("reduces") the final forces for all the active particles. This procedure was implemented by developing our own force data types and reduction operations, based on structures. This means that we define our own operations to be able to "sum" two forces (which are two three-dimensional arrays per particle). The simulations with small numbers of particles (1k, 2k, 4k, 8k and 16k) are a clear example of parallelisation "overkill": using more resources than actually needed. Additionally, the communication process plays a role in the scaling, which can be seen in the curves corresponding to these simulations for more than 200 cores: the execution time increases instead of decreasing. On the other hand, large numbers of particles (the cases with 32k, 64k, 128k and 256k) show the expected behaviour, a better execution time with more nodes or cores. Certainly this is not a solution for all simulations, since at some point the curves flatten.
The GPU version is a different scenario, since every device has its own capabilities, limitations and features, which makes it difficult to compare their performances. For this reason we have decided to present the acceleration factor of every case normalised to a single-GPU run on the same system. This flavour of GraviDy should always perform better when increasing the particle number. Although having a good occupancy is in principle the ideal scenario in this case, it is not necessarily the best reference point to assess the efficiency of the CUDA kernels, because it is related to the register use, but also to the amount of redundant calculations and the arithmetic intensity. We show the acceleration factor of two and four Tesla M2050 devices as compared to a single-GPU run, which have hardware and performance differences.
Every GPU is a different device, so that in order to obtain a proper optimisation we first need to do a study in terms of the kernel-call configuration. The current work presents a fixed configuration of 32 threads per block, using a number of blocks that scales with the particle number. A deeper study of each GPU-device configuration is planned for a future publication, in which speeding up this first GPU implementation will be one of the main concerns.
4 The role of softening on dynamics
For the current version of GraviDy, and quoting Sverre Aarseth on a comment he got some years ago during a talk, "we have denied ourselves the pleasure of regularisation" (Kustaanheimo & Stiefel 1965; Aarseth & Zare 1974; Aarseth 1999, 2003). This means that the code resorts to softening, via the parameter $\epsilon$. This quantity can be envisaged as a critical distance within which gravity is, for all practical purposes, non-existent. This obviously solves the problem of running into large numerical errors when the distance between two particles in the simulation becomes smaller and smaller: since the particles are 0-dimensional, an error is induced which grows larger and larger as they approach each other. This comes at a price, however. The relaxation time of the system is, approximately (see e.g. the section on two-body relaxation in Amaro-Seoane 2012),
$$t_{\rm rlx} \simeq \frac{0.34\, \sigma^3}{G^2\, m\, \rho\, \ln\left(b_{\max}/b_{\min}\right)}, \qquad (4)$$

with $\sigma$ the velocity dispersion, $\rho$ the mean stellar density and $m$ the mass of a star.
In this equation $b_{\min}$ and $b_{\max}$ are the minimum and maximum impact parameters. In an unsoftened N-body problem they are of the order of $G m/\sigma^2$ and the size of the cluster, respectively. In other words, $b_{\max} \sim R$, with $R$ the radius of the self-gravitating cluster, if the system is virialised, and it is of the order of the half-mass radius. Now suppose the code uses a softening parameter $\epsilon$. If the value of $\epsilon$ is smaller than $b_{\min}$, then softening should play only a minor role in two-body relaxation, and the global dynamical evolution of the cluster must be similar to that of another cluster evolved using regularisation. In the contrary case, in which $\epsilon > b_{\min}$, the relaxation time is artificially modified, as we can read from the last equation: the softening effectively takes the place of $b_{\min}$. The larger the quantity $\ln(b_{\max}/b_{\min})$, the more efficient relaxation is, and hence the shorter the relaxation time.
4.1 “Best” value for the softening?
We perform a series of simulations to assess the relevance of $\epsilon$ in the global dynamical evolution of a self-gravitating stellar system. In Figure 8 we depict the energy error and the wall clock time for six different particle numbers as a function of the softening. The larger its value, the faster the simulation. However, by using larger values of the softening, we must understand that we are evolving a system in which two-body deflections are not being taken into account. These are the most important ingredient of two-body relaxation, and therefore a critical factor in the general evolution. Thus, the fundamental feature which drives the global evolution of the system is suppressed below larger and larger distances. In particular, the largest values correspond to about 10% of the virial radius of the system. From these panels, an intermediate value of the softening seems a good compromise for the particular test that we are running in this example. A good practice is for the user to test different softening values for the case being addressed before settling on one. This choice is left to the user of the code, because we deem it difficult, if not impossible, to implement a self-regulating scheme in which the best value for the softening is calculated a priori.
4.2 Core collapse
Single-mass calculations
A good reference point to assess the global dynamical evolution of a dense stellar system is the core collapse of the system (see e.g. Spitzer 1987; Aarseth et al. 1974; Giersz & Spurzem 1994). We present in Figure 9 the evolution of the so-called “Lagrange radii” (the radii of spheres containing a certain mass fraction of the system) for three representative values of the softening (the three upper panels), as calculated with GraviDy, and also depict the results of one calculation performed with NBODY6-GPU (Nitadori & Aarseth 2012) (the lower panel), which uses KS regularisation (Kustaanheimo & Stiefel 1965; Aarseth 2003). This can be envisaged as the “best answer”, providing the reference point with which the other calculations should be compared. In the figures we use the half-mass relaxation time, which we introduce as
$$ T_{\rm rh} = \frac{0.138\,N^{1/2}\,r_h^{3/2}}{\bar{m}^{1/2}\,G^{1/2}\,\ln\Lambda}, \quad (5) $$
where $N$ is the number of particles of the system, $\bar{m}$ the average mass of a star, $r_h$ the half-mass radius, and $\ln\Lambda = \ln(\gamma N)$, with $\gamma N$ the argument of the Coulomb logarithm.
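A minimal sketch of this half-mass relaxation time, in the standard Spitzer (1987) form and in units with $G = 1$ by default; the function name and the default $\gamma \approx 0.11$ (a common choice for single-mass systems) are illustrative assumptions:

```cpp
#include <cmath>

// Half-mass relaxation time T_rh = 0.138 N^{1/2} r_h^{3/2} /
// (m_mean^{1/2} G^{1/2} ln(gamma N)), following Spitzer (1987).
// Defaults assume Henon (N-body) units with G = 1.
double t_rh(double N, double r_h, double m_mean,
            double gamma = 0.11, double G = 1.0) {
    return 0.138 * std::sqrt(N) * std::pow(r_h, 1.5) /
           (std::sqrt(m_mean * G) * std::log(gamma * N));
}
```

With a fixed total mass ($\bar{m} \propto 1/N$), the relaxation time grows roughly as $N/\ln N$, which is why larger systems take many more crossing times to reach core collapse.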
From the panels we can easily see the impact of the softening parameter on the calculations: the collapse of the core is delayed for larger values. With our default choice of the softening, core collapse occurs just slightly earlier than in the NBODY6-GPU calculation that we performed to compare with our code.
Another way of looking at the core collapse is in terms of energy. In Figure 10 we display the evolution of the energy for the same systems as in Figure 9. As the collapse develops, the average distance between particles becomes smaller and smaller. There is an obvious correlation between the conservation of energy and the value of the softening. The transition between fairly good and poor energy conservation happens more smoothly for larger and larger values of the softening, since the error has been distributed from the beginning of the integration. This means that the smaller the value of the softening, the more abrupt the transition between good and bad energy conservation, which leads to a big jump for the lowest value. We stop the simulations at this point because GraviDy cannot form binaries, the main mechanism that halts core collapse.
As discussed previously, and as we can see in Figures 9 and 10, the introduction of softening in the calculations has an impact on the global dynamical behaviour of the system. We find that the time to reach core collapse is delayed by factors of 1.001, 1.08 and 1.55 for the three increasing values of the softening, respectively.
The NBODY6-GPU simulation was run on a different system, using a GeForce GTX 750 (Tesla M10) GPU, which is why we compare the overall system evolution instead of the wall clock time.
Calculations with a spectrum of masses
In addition to the single-mass calculations, we have also addressed multi-mass systems. The presence of an initial mass function (IMF) accelerates the core collapse of the system, as shown by many different authors (Inagaki & Wiyanto 1984; Spitzer 1987; Kim & Lee 1997; Kim et al. 1998). In our calculations, we use a Plummer sphere with a Kroupa IMF (Kroupa 2001) and 8192 particles. In Figure 11 we present the evolution of the Lagrange radii and the energy conservation of the system. We can identify the moment of core collapse as the point from which the energy conservation becomes worse and worse, reaching a value about 6 orders of magnitude worse than in the phases before the collapse. Another way of depicting the collapse is by identifying the heaviest stars of the stellar population and how they distribute within the core radius. We can see this in Figure 12.
The equilibrium of the system can be evaluated by analysing the distribution of the time steps. As mentioned previously in Section 2.4, the initial distribution of time steps in the system is log-normal, and in a balanced system it must remain similar, or close to it. In Figure 13 we show the time-step distribution after the core collapse for the single-mass system.
5 Relativistic corrections
GraviDy includes a treatment of relativistic orbits, implemented so that the code can study sources of gravitational waves. The approach we use is the post-Newtonian one, presented for the first time in an N-body code in the work of Kupi et al. (2006) (and see also Amaro-Seoane & Chen 2016) and later expanded to higher orders by Brem et al. (2013). The idea is to modify the accelerations in the code to include relativistic corrections at 1PN and 2PN order (periapsis shifts) and at 2.5PN order (energy loss in the form of gravitational wave emission). Contrary to the scheme of Kupi et al. (2006), which implements the modification in the regularised binaries, in GraviDy the corrections are active for a pair of particles for which we set the softening to zero. The expressions for the accelerations, as well as their time derivatives, can be found in the updated 2017 version of the review by Amaro-Seoane (2012).
We run a series of tests for binaries with different mass ratios and initial semi-major axes. In Figure 14 we display the evolution of a binary of two supermassive black holes with mass ratios of 1 and 2. In Figure 15 we show mass ratios of 5 and 100, the latter starting with a smaller initial semi-major axis. For each of these cases we plot the relative separation, the relative velocity and the eccentricity. Higher mass ratios lead to a more complex structure in the evolution. We can see how the relative velocity increases up to a significant fraction of the speed of light as the separation shrinks. We note, however, that the post-Newtonian approach should not be trusted once the velocities become too large a fraction of the speed of light.
6 Conclusions and future work
In this work we have presented the first version of our new N-body code, written purely in C/C++ with OpenMPI and CUDA, which we call GraviDy. The current version of the code provides an environment to evolve a self-gravitating stellar system; it uses a fourth-order Hermite (H4) integration scheme with block time steps and softening, and features relativistic corrections (periapsis shift and energy loss) for sources of gravitational radiation. This first release of GraviDy is mainly aimed at users with access to a machine hosting a few GPUs, or to usual parallel CPU systems.
We summarise here the main features of GraviDy:

The code is written following an iterative and incremental development methodology, similar to the Assess, Parallelise, Optimise, Deploy (APOD) development cycle presented by NVIDIA.

The code organisation is designed to be highly modular. Every critical process of the integrator is represented by a separate function or chain of functions. Our goal is to produce a code which can be read without difficulty, making future modifications or forks easier.

Since maintainability is one of our main goals, documentation is also a critical factor. We document every function in the inner workings of the integrator.

We use a fourth-order Hermite (H4) integration scheme.

The code uses block time steps to improve the performance of the integrator. We evolve particles in groups of block time steps, which allows for an update of several particles at the same time.

We use GPU computing techniques, OpenMP and OpenMPI to parallelise the calculation of the gravitational interactions of our system, after having localised the hotspots of our algorithm. The main challenge here was to update a relatively small number of particles sharing a common time step at a given moment, a situation which runs against the design of GPU cards, developed for high parallelism.
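The block time-step quantisation mentioned above can be sketched as follows; the bounds and the helper name are illustrative assumptions rather than the exact GraviDy implementation:

```cpp
// Round an individually estimated time step dt down to the nearest power
// of two, dt_block = 2^k, clamped to [dt_min, dt_max]. Quantising steps
// this way places particles on a discrete hierarchy of commensurable
// blocks, so that all particles in a block can be advanced together.
double block_step(double dt, double dt_min = 1.0 / (1 << 23),
                  double dt_max = 0.125) {
    double b = dt_max;
    while (b > dt && b > dt_min) b *= 0.5;  // largest power of two <= dt
    return b;
}
```

Because every block step divides all larger ones evenly, the times at which different blocks are due for an update line up, which is what allows several particles to be updated in a single force calculation.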
In this first release of GraviDy, and in this first paper, we have presented a series of classical tests of the code, as well as a study of the performance of its different “flavours”: the single-CPU version, the MPI one and the GPU version. We also address the role of the softening in the global evolution of a system as integrated with our code. As expected, the value of the softening is crucial in determining the global dynamics, and should not be taken lightly, in particular if one is interested in studying physical phenomena for which relaxation is important, since using a softening caps the forces at a maximum value and makes them decline smoothly to zero at small separations, which is only an approximation. To study a dynamical process, such as e.g. the collision of two clusters, focusing on the short-term (i.e. for times well below a relaxation time) dynamical behaviour of the system, using a softening should be fine, but the role of the parameter should be assessed carefully by exploring different values.
The ongoing development of GraviDy includes a close-encounter solver with a time-symmetric integration scheme to treat binaries, such as the one presented in the work of Konstantinidis & Kokkotas (2010). Another immediate goal of the next releases is to include a central massive particle and the required corrections to the gravitational forces, so as to ensure good conservation of the energy of the system. This massive particle could be envisaged as a massive black hole in a galactic centre or a star in a protoplanetary system. We also plan on bridging the gap between spherical nucleus models that focus on collisional effects and simulations of larger-scale structure that account for complex, more realistic non-spherical geometries. Finally, a future goal is to include stellar evolution routines, for which the modularity of our code provides an easy setting. One candidate module for this is SEVN (Spera et al. 2015).
Following the APOD cycle presented in this work, we will study new computational techniques so as to improve the performance of our code: from variable precision to new parallel schemes for the force interaction calculation, using one or more GPUs.
Appendix A About the code
GraviDy is a C/C++ and CUDA application that uses the OpenMPI and Boost libraries.
As an overview, the compilation can be done with make <flavour>, for the cpu, mpi and gpu versions. A simple run of the code is displayed in Listing 1.
The URL hosting the project is http://gravidy.xyz, where the prerequisites and detailed instructions on how to obtain, compile and use the code can be found. Additionally, documentation regarding the code and its input and output files is included.
Inside the repository there is a scripts directory with a set of classes for handling the output files of the code.
The code was compiled using gcc (4.9.2), OpenMPI (1.6.5), CUDA (6.0) and Boost (1.55).
The following compilation flags were used: -O3 -Wall -fopenmp -pipe -fstack-protector -Wl,-z,relro -Wl,-z,now -Wformat-security -Wpointer-arith -Wformat-nonliteral -Wl,-O1 -Wl,--discard-all -Wl,--no-undefined -rdynamic.
Appendix B N-body visualisation tool
A graphical representation of N-body simulations is always an attractive way to display how a simulation evolved. For this reason, we decided to write a small application providing a simple 3D visualisation of GraviDy snapshots, based on OpenGL.
GravidyView is a lightweight and simple OpenGL N-body visualisation tool, written in C/C++. It can be downloaded from:
Acknowledgements
We are thankful for hints, help and discussion to Sverre Aarseth, Holger Baumgardt, Peter Berczik, Douglas Heggie, Piet Hut, Simos Konstantinidis, Patrick Brem, Keigo Nitadori, Ulf Löckmann and Rainer Spurzem. In general, we are indebted to the very friendly N-body community for keeping their codes publicly available to everybody. PAS acknowledges support from the Ramón y Cajal Programme of the Ministry of Economy, Industry and Competitiveness of Spain. CMF thanks the support and guidance of Prof. Luis Salinas during the development of his Master thesis, which was the motivation for this project. This work has been supported by the “Dirección General de Investigación y Postgrado” (DGIP) by means of the “Programa Incentivo a la Iniciación Científica” (PIIC) at the Universidad Técnica Federico Santa María, the “Comisión Nacional de Investigación Científica y Tecnológica de Chile” (CONICYT) through its Master scholarships program, and the Transregio 7 “Gravitational Wave Astronomy” financed by the Deutsche Forschungsgemeinschaft DFG (German Research Foundation). CMF acknowledges support from the DFG Project “Supermassive black holes, accretion discs, stellar dynamics and tidal disruptions”, awarded to PAS, and the International Max Planck Research School.
Footnotes
 All versions of the code are publicly available at the URL http://www.ast.cam.ac.uk/~sverre/web/pages/nbody.htm
 http://www.sns.ias.edu/~starlab/
 Available at this URL http://silkroad.bao.ac.cn/nb6mpi
 http://grape.c.utokyo.ac.jp/grape
 http://www.nvidia.com
 https://www.khronos.org/opencl/
 ftp://ftp.mao.kiev.ua/pub/berczik/phiGPU/
 This impressive achievement was rewarded with a bottle of Scotch whisky (not whiskey), kindly and generously offered to him by Douglas Heggie during the excellent MODEST 15S in Kobe.
 http://www.artcompsci.org/
 For more information, please refer to the code documentation
 Eq. (2) has a typo in the sign of the second term in the sum of .
 Any number of particles can also be handled properly, not necessarily powers of 2.
 The primary difference is that model M is designed for Original Equipment Manufacturer (OEM) for an integrated system, without active cooling, while model C includes the active cooling and can be installed on any standard computer.
References
 Aarseth S. J., 1985, in Brackbill J. U., Cohen B. I., eds, Multiple Time Scales, pp 377–418
 Aarseth S. J., 1999, The Publications of the Astronomical Society of the Pacific, 111, 1333
 Aarseth S. J., 2003, Gravitational N-Body Simulations. Cambridge University Press, Cambridge, UK
 Aarseth S. J., Zare K., 1974, Celestial Mechanics, 10, 185
 Aarseth S. J., Henon M., Wielen R., 1974, A&A, 37, 183
 AmaroSeoane P., 2012, preprint, (arXiv:1205.5240)
 AmaroSeoane P., Chen X., 2016, MNRAS, 458, 3075
 AmaroSeoane P., Freitag M., Spurzem R., 2004, MNRAS,
 Barnes J., Hut P., 1986, Nat, 324, 446
 Belleman R. G., Bédorf J., Portegies Zwart S. F., 2008, New Astronomy, 13, 103
 Berczik P., et al., 2011, pp 8–18
 Berczik P., Spurzem R., Wang L., Zhong S., Huang S., 2013, in Third International Conference “High Performance Computing”, HPC-UA 2013, pp 52–59 (arXiv:1312.1789)
 Brem P., AmaroSeoane P., Spurzem R., 2013, MNRAS, 434, 2999
 CapuzzoDolcetta R., Spera M., 2013, Computer Physics Communications, 184, 2528
 CapuzzoDolcetta R., Spera M., Punzo D., 2013, Journal of Computational Physics, 236, 580
 Fukushige T., Makino J., Kawai A., 2005, PASJ, 57, 1009
 Gaburov E., Harfst S., Zwart S. P., 2009, New Astronomy, 14, 630
 Giersz M., Spurzem R., 1994, MNRAS, 269, 241
 Greengard L., 1987, PhD thesis, Yale University, New Haven, CT
 Hamada T., Iitaka T., 2007, New Astronomy,
 Harfst S., Gualandris A., Merritt D., Mikkola S., 2008, MNRAS, 389, 2
 Heggie D., Hut P., 2003, The Gravitational Million-Body Problem: A Multidisciplinary Approach to Star Cluster Dynamics. Cambridge University Press, Cambridge, UK
 Heggie D. C., Mathieu R. D., 1986, in Hut P., McMillan S. L. W., eds, Lecture Notes in Physics, Berlin Springer Verlag Vol. 267, The Use of Supercomputers in Stellar Dynamics. p. 233, doi:10.1007/BFb0116419
 Hénon M. H., 1971, A&AS, 14, 151
 Holmberg E., 1941, ApJ, 94, 385
 Hut P., 2003, in Makino J., Hut P., eds, IAU Symposium Vol. 208, Astrophysical Supercomputing using Particle Simulations. p. 331 (arXiv:astroph/0204431)
 Inagaki S., Wiyanto P., 1984, PASJ, 36, 391
 Kim S. S., Lee H. M., 1997, Journal of Korean Astronomical Society, 30, 115
 Kim S. S., Lee H. M., Goodman J., 1998, ApJ, 495, 786
 Konstantinidis S., Kokkotas K. D., 2010, A&A, 522, A70
 Kroupa P., 2001, MNRAS, 322, 231
 Kupi G., AmaroSeoane P., Spurzem R., 2006, MNRAS, pp L77+
 Küpper A. H. W., Maschberger T., Kroupa P., Baumgardt H., 2011, MNRAS, 417, 2300
 Kustaanheimo P. E., Stiefel E. L., 1965, J. Reine Angew. Math., 218, 204
 Makino J., 1991, ApJ, 369, 200
 Makino J., 1998, Highlights in Astronomy, 11, 597
 Makino J., Aarseth S. J., 1992, PASJ, 44, 141
 Makino J., Taiji M., 1998, Scientific Simulations with Special-Purpose Computers: The GRAPE Systems. John Wiley & Sons, Chichester
 Nguyen H., 2007, Gpu gems 3, first edn. AddisonWesley Professional
 Nitadori K., 2009, PhD thesis, University of Tokyo
 Nitadori K., Aarseth S. J., 2012, MNRAS, 424, 545
 Nitadori K., Makino J., 2008, New Astronomy, 13, 498
 Plummer H. C., 1911, MNRAS, 71, 460
 Portegies Zwart S. F., McMillan S. L. W., Hut P., Makino J., 2001a, MNRAS, 321, 199
 Portegies Zwart S. F., McMillan S. L. W., Hut P., Makino J., 2001b, MNRAS, 321, 199
 Portegies Zwart S. F., Belleman R. G., Geldof P. M., 2007, New Astronomy, 12, 641
 Press W. H., 1986, in Hut P., McMillan S. L. W., eds, The Use of Supercomputers in Stellar Dynamics. SpringerVerlag, p. 184
 Schneider J., AmaroSeoane P., Spurzem R., 2011, MNRAS, 410, 432
 Spera M., Mapelli M., Bressan A., 2015, MNRAS, 451, 4086
 Spitzer L., 1987, Dynamical evolution of globular clusters. Princeton, NJ, Princeton University Press, 1987, 191 p.
 Spitzer L. J., Hart M. H., 1971, ApJ, 166, 483
 Spurzem R., 1999, Journal of Computational and Applied Mathematics, 109, 407
 Taiji M., Makino J., Fukushige T., Ebisuzaki T., Sugimoto D., 1996, in Hut P., Makino J., eds, IAU Symp. 174: Dynamical Evolution of Star Clusters: Confrontation of Theory and Observations. p. 141
 Wang L., Spurzem R., Aarseth S., Nitadori K., Berczik P., Kouwenhoven M. B. N., Naab T., 2015, MNRAS, 450, 4070
 Wang L., et al., 2016, MNRAS, 458, 1450
 von Hoerner S., 1960, Z. Astrophys., 50, 184
 von Hoerner S., 1963, Z. Astrophys., 57, 47