Estimating the speedup of Adaptively Restrained Langevin Dynamics
Abstract
We consider Adaptively Restrained Langevin dynamics, in which the kinetic energy function vanishes for small velocities. Properly parameterized, this dynamics makes it possible to reduce the computational complexity of updating interparticle forces, and to accelerate the computation of ergodic averages of molecular simulations. In this paper, we analyze the influence of the method parameters on the total achievable speedup. In particular, we estimate both the algorithmic speedup, resulting from incremental force updates, and the influence of the change of the dynamics on the asymptotic variance. This allows us to propose a practical strategy for the parametrization of the method. We validate these theoretical results by representative numerical experiments.
keywords: Langevin dynamics, ergodic averages, variance, adaptive methods, computational complexity, sampling efficiency

1 Introduction
One fundamental aim of molecular simulations is the computation of macroscopic quantities, typically through averages of functions of the variables of the system with respect to a given probability measure $\mu$, most often the Boltzmann–Gibbs distribution. This corresponds to a model of a conservative system in contact with a heat bath at a fixed temperature. Numerically, the high-dimensional averages with respect to $\mu$ are often approximated as ergodic averages over realizations of appropriate stochastic differential equations (SDEs) (see Balian (2006)). (In this article we use the term ergodic for the convergence in infinite time, i.e. the mathematical notion of convergence; see Lelièvre and Stoltz (2016).)
In many applications, the dynamics is metastable, i.e. the system remains trapped for a very long time in some region of the configuration space. An acceleration of the sampling can be achieved by improving the exploration of the phase space (with variance reduction techniques such as importance sampling; see for instance Lelièvre et al. (2010); Lelièvre and Stoltz (2016)), by increasing the time step through a stabilization of the dynamics (see for instance Leimkuhler et al. (2016); Fathi and Stoltz (2015)), by changing the model, as for example in dissipative particle dynamics (see for instance Espanol and Warren (1995)), or by changing the implementation (parallelization, reduction of the computational complexity; see for instance Bosson et al. (2012)). Many methods are based on a modification of the dynamics. Since, very often, the interest lies in computing average properties, the sampling can be unbiased in order to retrieve averages with respect to the canonical distribution. In order to increase the time step size used in the simulation, some methods modify the kinetic energy by changing the mass matrix (Bennett (1975); Plechac and Rousset (2010)). Another example is the Shadow Hamiltonian Metropolis–Hastings method introduced by Izaguirre and Hampton (2004), which consists in integrating the Hamiltonian dynamics according to a shadow Hamiltonian, which is preserved by the numerical scheme.
In order to propose a fair comparison of sampling methods, three factors should be taken into account: the asymptotic variance of time averages, the maximal admissible time step size in the discretization and the computational effort.
In this work, we analyze the efficiency of a method based on a modified version of Langevin dynamics called "Adaptively Restrained Particle Simulations" (ARPS), first proposed in Artemova and Redon (2012). The main idea is to modify the kinetic energy function in order to freeze a number of particles at each time step, and thus to reduce the computational cost of updating interparticle forces. In Artemova and Redon (2012), the kinetic energy is set to 0 when momenta are smaller than the restraining parameter $v_{\min}$, and is set to the standard, quadratic kinetic energy for momenta larger than the full-dynamics parameter $v_{\max}$, with $v_{\min} < v_{\max}$. Thanks to this formulation, the computational complexity of the force update is reduced, because some particles do not move and hence the corresponding forces need not be updated. The associated gain can be quantified by an algorithmic speedup factor $S_{\mathrm{a}}$. On the other hand, since the dynamics is modified, the asymptotic variance of time averages given by the Central Limit Theorem differs from the asymptotic variance of the standard Langevin dynamics. Intuitively, iterates are a priori more correlated, which may translate into an increase of the statistical error. The actual speedup of the method in terms of wall-clock time is therefore an interplay between the algorithmic speedup and the variances. A rigorous mathematical analysis of the ergodicity properties of this method was provided in Redon et al. (2016). Moreover, a perturbative study of the asymptotic variance suggests a linear behavior of the variance with respect to the parameters of the dynamics in some limiting regime.
Since the method is parameterized by two constants, it is not obvious how to choose these parameters in order to achieve an optimal speedup. Of course, the algorithmic speedup depends on the percentage of restrained particles; but this percentage is a nonlinear function of the parameters, so it is not trivial how to best choose their values. Our aim in this paper is to propose a strategy for choosing the parameters of the AR method.
The article is organized as follows: in Section 2.1 we give a brief overview of sampling with Langevin dynamics and recall common strategies for its discretization. In Section 2.2 we recall the definition of the AR-Langevin dynamics proposed in Artemova and Redon (2012) and the alternative definition of the AR kinetic energy with better stability properties from Stoltz and Trstanova (2016). In Section 3 we give a definition of the speedup and introduce a formula for the total speedup of the AR approach. In the following two sections we analyze how this formula depends on the parametrization: in Section 4 we analyze the computational complexity of the method and express the corresponding algorithmic speedup. This part is followed by Section 5, in which we give the relation between the restraining parameters and the percentage of restrained particles, as well as an approach for obtaining the linear approximation of the variance with respect to the restraining parameters. By combining all the necessary parts, we propose a practical strategy for the parametrization of the method, and we illustrate the theoretical results by numerical simulations in Section 6.
2 Modified Langevin dynamics
In this section we recall the Langevin dynamics and the modified Langevin dynamics.
2.1 Sampling from the canonical distribution using Langevin dynamics
We consider a system of $N$ particles in a simulation box of space dimension $d$ with periodic boundary conditions. The total dimension of the system is $D = dN$. We denote by $p = (p_1, \dots, p_N)$ the momenta of the particles and by $q = (q_1, \dots, q_N)$ their positions in the box $\mathcal{M}$, a $D$-dimensional torus built from $\mathbb{T}$, where $\mathbb{T}$ is the one-dimensional unit torus. The Hamiltonian of the system, which is the sum of the kinetic energy ($U$) and the potential energy ($V$), reads

$H(q, p) = U(p) + V(q).$
Let us emphasize that we restrict ourselves to separable Hamiltonians.
The Langevin equations read:
(1)  $dq_t = \nabla U(p_t)\,dt, \qquad dp_t = -\nabla V(q_t)\,dt - \gamma\,\nabla U(p_t)\,dt + \sqrt{2\gamma\beta^{-1}}\,dW_t,$
where $W_t$ is a standard $D$-dimensional Wiener process, $\gamma > 0$ is the friction constant and $\beta$ is proportional to the inverse temperature. We refer the reader to Lelièvre and Stoltz (2016) for an overview of the mathematical properties of this dynamics. The invariant distribution (the Boltzmann distribution) is simply obtained as

$\mu(dq\,dp) = Z^{-1}\,\mathrm{e}^{-\beta H(q,p)}\,dq\,dp,$
where $Z = \int \mathrm{e}^{-\beta H(q,p)}\,dq\,dp$ is the normalization constant, or partition function. We use the notation $\mu_{\mathrm{std}}$ for the case of the standard kinetic energy function, and $\mu$ for a general kinetic energy function. Langevin dynamics generates samples from the Boltzmann distribution which are used in the computation of macroscopic properties. These correspond to expected values with respect to $\mu$ and can be approximated by ergodic averages over trajectories $(q_t, p_t)_{t \geq 0}$:
(2)  $\hat{\varphi}_t = \frac{1}{t}\int_0^t \varphi(q_s, p_s)\,ds \;\xrightarrow[t\to\infty]{}\; \mathbb{E}_\mu[\varphi].$
Note that, for any well-defined observable $\varphi$ (we assume that $\varphi$ belongs to $L^2(\mu)$, and that the kinetic energy grows sufficiently fast at infinity in order to ensure that $Z < \infty$), due to the separability of the Hamiltonian, the marginal distribution in the position variable remains unchanged, since only the momenta marginal of the distribution is influenced by the modification of the kinetic energy, i.e.

$\int_{\mathcal{M}} \varphi(q)\,\mu(dq\,dp) = \int_{\mathcal{M}} \varphi(q)\,\mu_{\mathrm{std}}(dq\,dp)$ for observables $\varphi$ depending only on positions.
Since the dynamics (1) cannot be integrated exactly, its solutions are approximated by numerical integration (Milstein and Tretyakov (2013)). Basically, two kinds of errors occur when estimating $\mathbb{E}_\mu[\varphi]$ by ergodic averages from the numerical integration of equations (1): a statistical error, due to the finiteness of the time interval during which the sampling is performed; and a systematic error (bias) on the measure.
The statistical error for ergodic averages (2) is quantified by the Central Limit Theorem. The asymptotic variance associated with the estimator $\hat{\varphi}_t$ reads

$\sigma^2_{\varphi} = \lim_{t\to\infty} t\,\mathrm{Var}(\hat{\varphi}_t).$
Similarly, for the discretized dynamics with time step size $\Delta t$, we denote the estimator over $N_{\mathrm{it}}$ iterations by

$\hat{\varphi}^{\Delta t}_{N_{\mathrm{it}}} = \frac{1}{N_{\mathrm{it}}} \sum_{n=1}^{N_{\mathrm{it}}} \varphi(q^n, p^n).$
If the discretized dynamics is geometrically ergodic with an invariant measure $\mu_{\Delta t}$, a Central Limit Theorem holds true and the variance of the discretized process is given by (see Meyn and Tweedie (1993); Lelièvre and Stoltz (2016))

(3)  $\sigma^2_{\varphi,\Delta t} = \lim_{N_{\mathrm{it}}\to\infty} N_{\mathrm{it}}\,\Delta t\;\mathrm{Var}\big(\hat{\varphi}^{\Delta t}_{N_{\mathrm{it}}}\big) = \Delta t\left(\mathrm{Var}_{\mu_{\Delta t}}(\varphi) + 2\sum_{n=1}^{\infty}\mathrm{Cov}_{\mu_{\Delta t}}\big(\varphi(q^0,p^0),\varphi(q^n,p^n)\big)\right),$

where the covariances are computed over trajectories with initial conditions $(q^0,p^0)$ distributed according to $\mu_{\Delta t}$.
In other words, for $N_{\mathrm{it}}$ simulation steps, the statistical error is of order $\sigma_{\varphi,\Delta t}/\sqrt{N_{\mathrm{it}}\,\Delta t}$. The variance of the discretized process converges to the variance of the continuous process as $\Delta t$ tends to 0, i.e. $\sigma^2_{\varphi,\Delta t} \to \sigma^2_{\varphi}$ as $\Delta t \to 0$ (see Lelièvre and Stoltz (2016)).
There are many possible ways to discretize (1); see for instance Leimkuhler et al. (2016); Mattingly et al. (2002); Kopec (2014) for a precise analysis of the properties of discretization schemes of the Langevin dynamics based on a splitting. A standard choice for the discretization of (1) is a numerical scheme of second order on the averages with respect to the time step size. It is possible to design higher-order schemes, but they involve at least two force evaluations per time step, which is not favorable given the system size. Usually, the numerical schemes are constructed through a splitting of the generator of the Langevin dynamics, $\mathcal{L} = A + B + O$, with

$A = \nabla U(p) \cdot \nabla_q, \qquad B = -\nabla V(q) \cdot \nabla_p, \qquad O = -\gamma\,\nabla U(p) \cdot \nabla_p + \gamma\beta^{-1}\Delta_p.$
For instance, the first order splitting (LieTrotter) gives the following scheme:
(4)  $p^{n+1/2} = p^n - \Delta t\,\nabla V(q^n),$
     $q^{n+1} = q^n + \Delta t\,\nabla U(p^{n+1/2}),$
     $p^{n+1} = p^{n+1/2} - \Delta t\,\gamma\,\nabla U(p^{n+1/2}) + \sqrt{2\gamma\beta^{-1}\Delta t}\;G^n,$
where $(G^n)$ are independent and identically distributed (i.i.d.) standard $D$-dimensional Gaussian random variables. This is the so-called BAO scheme. The name is motivated by the fact that the transition kernel reads $P_{\Delta t} = P^{B}_{\Delta t} P^{A}_{\Delta t} P^{O}_{\Delta t}$, where $P^{X}_{\Delta t}$ (for $X \in \{A, B, O\}$) is the transition operator associated with the corresponding splitting step. We refer to Stoltz and Trstanova (2016) for a construction of second-order discretization schemes in the case of a general kinetic energy.
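To make the discussion concrete, here is a minimal Python sketch of one BAO step for a general kinetic energy. The harmonic potential, the unit parameters and the Euler–Maruyama treatment of the O part are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

def bao_step(q, p, grad_V, grad_U, dt, gamma, beta, rng):
    """One step of the first-order B-A-O splitting (a sketch).

    grad_V : gradient of the potential energy V
    grad_U : gradient of the (possibly modified) kinetic energy U
    The O part is treated by Euler-Maruyama, since for a general
    kinetic energy it cannot be integrated exactly; for the standard
    quadratic U, an exact Ornstein-Uhlenbeck update could be used.
    """
    p = p - dt * grad_V(q)                          # B: momentum update
    q = q + dt * grad_U(p)                          # A: position update
    G = rng.standard_normal(p.shape)                # O: fluctuation-dissipation
    p = p - dt * gamma * grad_U(p) + np.sqrt(2.0 * gamma * dt / beta) * G
    return q, p

# usage: ergodic average of q^2 for a 1D harmonic oscillator, V(q) = q^2/2
rng = np.random.default_rng(0)
grad_V = lambda q: q
grad_U = lambda p: p                                # standard U, unit mass
q, p = np.zeros(1), np.zeros(1)
dt, gamma, beta = 0.01, 1.0, 1.0
acc = 0.0
n_steps = 200_000
for _ in range(n_steps):
    q, p = bao_step(q, p, grad_V, grad_U, dt, gamma, beta, rng)
    acc += q[0] ** 2
# the ergodic average should approach E[q^2] = 1/beta = 1, up to O(dt) bias
print(acc / n_steps)
```

Plugging in a modified `grad_U` that vanishes on a momentum interval turns the same loop into a sketch of AR-Langevin dynamics.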
Remark 2.1.
The Lie–Trotter and the Strang splittings each give six possible numerical schemes according to the order of the operators A, B, O (Leimkuhler et al. (2016)). Due to the high dimensionality of the system, the bottleneck of the computational complexity is the computation of the interactions between the particles, which must be done after each update of the positions (i.e. after each action of A). In addition, a non-negligible computational effort goes into the generation of random numbers in O. Therefore, schemes which include as few actions of A and O as possible should be preferred for a lower computational complexity.
2.2 ARLangevin dynamics
In the usual setting, the kinetic energy function of a system of $N$ particles is a sum of the kinetic energies of the individual particles, which are quadratic functions of the momenta:

$U_{\mathrm{std}}(p) = \sum_{i=1}^{N} \frac{|p_i|^2}{2m_i},$

where $p_i$ is the momentum vector of the $i$-th particle with mass $m_i$.
AR-Langevin dynamics, proposed in Artemova and Redon (2012), is based on a modified kinetic energy function that is defined as the sum of the kinetic energies of the individual particles, parametrized by two constants $v_{\min} < v_{\max}$. In Artemova and Redon (2012), the AR kinetic energy was designed such that it vanishes for values smaller than the restraining parameter and is equal to the standard kinetic energy for values bigger than the full-dynamics parameter. The main idea is that the derivative of such a function vanishes in the restrained case too, so that the position of the particle does not change between two integration steps, i.e. $q_i^{n+1} = q_i^n$ for $i \in R$, with $R$ being the set of indices of the restrained particles (see also Equation (4) for the computation of $q^{n+1}$). The transition between the restraining and the full-dynamics region is performed with an interpolation spline, which ensures the regularity of the kinetic energy. Still, the derivatives of this kinetic energy have large values in the transition region, which might cause numerical instabilities. However, the necessary condition for a particle to remain at the same position between two time steps is only that the first derivative of the kinetic energy function vanishes for some values of momenta in every space direction. In Stoltz and Trstanova (2016), an alternative definition of the AR kinetic energy function with this property was introduced, based on vanishing velocities. The AR kinetic energy is defined starting from an interpolation of its first derivative, such that it vanishes around zero and coincides with the derivative of the standard kinetic energy outside the restraining region (see Figure 2 for an illustration). The order of the interpolation spline can be chosen as high as necessary; in this article, however, we use only linear interpolation. The modified kinetic energy is then obtained by piecewise integration (see Figure 1 for an illustration):
(5)  $U_{v_{\min},v_{\max}}(p) = \begin{cases} C & \text{if } |p| \leq v_{\min}, \\ \Sigma(p) & \text{if } v_{\min} < |p| < v_{\max}, \\ \dfrac{p^2}{2m} & \text{if } |p| \geq v_{\max}, \end{cases}$

for one momentum component $p$ of a particle with mass $m$, where $\Sigma$ is the integrated interpolation spline and $C$ is an integration constant ensuring continuity, which corresponds to the minimal kinetic energy value of the particle. (We emphasize that this constant does not appear in the Langevin equations, only in the momenta marginal of the invariant measure.) Note that the total kinetic energy is the sum of the individual kinetic energies of each particle in every space direction.
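The piecewise construction (5) can be sketched as follows for a single momentum component. The linear interpolation of the derivative and the threshold values are illustrative, and the integration constant $C = v_{\min} v_{\max}/(2m)$ is the one that makes the energy continuous for this particular spline.

```python
def ar_kinetic_energy(p, m=1.0, v_min=1.0, v_max=1.5):
    """AR kinetic energy of one momentum component (sketch of Eq. (5)).

    The derivative U'(p) vanishes for |p| <= v_min, equals p/m for
    |p| >= v_max, and is linearly interpolated in between; U is then
    obtained by piecewise integration.  The integration constant
    C = v_min*v_max/(2m) makes U continuous at |p| = v_max.
    """
    a = abs(p)
    C = v_min * v_max / (2.0 * m)
    if a <= v_min:
        return C                                  # restrained: flat energy
    if a >= v_max:
        return a * a / (2.0 * m)                  # full dynamics: quadratic
    # transition: integrate the linear interpolation of U'
    slope = (v_max / m) / (v_max - v_min)
    return C + 0.5 * slope * (a - v_min) ** 2

# continuity checks at the matching points (both print True)
for x in (1.0, 1.5):
    below = ar_kinetic_energy(x - 1e-9)
    above = ar_kinetic_energy(x + 1e-9)
    print(abs(below - above) < 1e-6)
```

With this choice the derivative is also continuous at both thresholds, so the resulting energy is $C^1$.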
AR dynamics accelerates sampling by exploiting information about the kinetic energy of the particles. More precisely, a particle is called restrained if the absolute value of each component of its momentum is smaller than the restraining threshold $v_{\min}$. All other particles are called active. Note that, during the simulation, particles switch between these two states. Also, the average occupation of the active or restrained state depends on the restraining parameters $v_{\min}$ and $v_{\max}$.
Since the momenta of individual particles are independent from each other under the canonical measure, the parameters $v_{\min}$ and $v_{\max}$ could in fact be different for each particle, either to focus calculations on a specific part of the particle system or to adjust the scaling of the parameters according to the mass of the particle.
3 Estimating the speedup
In this section we introduce a framework for the complexity analysis of AR dynamics in the case of pairwise interactions, which are the most common interactions in numerous applications. Note that the discussion below can easily be generalized to the interactions present in classical force fields. The force acting on each particle is a sum of interactions with all other particles.
The information about the state of each particle allows us to lower the computational cost of the pairwise interactions between the particles. We consider the potential

$V(q) = \sum_{i=1}^{N} \sum_{j > i} v\big(|q_i - q_j|\big),$

and the force acting on the $i$-th particle, which is given by

(6)  $F_i = -\nabla_{q_i} V(q) = \sum_{j \neq i} F_{ij}, \qquad F_{ij} = -\nabla_{q_i} v\big(|q_i - q_j|\big).$
The change of the force between two time steps only depends on the active particles that have moved since the last time step with respect to the given particle. This allows us to avoid the computation of pairwise interactions between restrained particles, and hence to lower the computational complexity. In order to quantify the computational cost of the force update, we define the force function $F$ such that $F_{ij} = F(q_i, q_j)$. Then the computational cost of the force update is defined as the number of times the force function is called. The speedup of AR dynamics with respect to the non-adaptive method (which updates all interactions), due to the reduced computational complexity of the force update, defines the algorithmic speedup. Since the computational complexity depends on the ratio of restrained particles, which varies at each time step, we consider averages over the whole simulation. More precisely, we denote by $c^n_{\mathrm{AR}}$ the computational cost of the force update in the AR method at time step $n$, and by $c^n_{\mathrm{std}}$ the computational cost of a standard, non-adaptive method. We denote by $N_{\mathrm{it}}$ the number of time steps in the simulation. Then the algorithmic speedup is the ratio of the average computational cost in the standard method and the average computational cost in the AR method:
(7)  $S_{\mathrm{a}} = \dfrac{\frac{1}{N_{\mathrm{it}}} \sum_{n=1}^{N_{\mathrm{it}}} c^n_{\mathrm{std}}}{\frac{1}{N_{\mathrm{it}}} \sum_{n=1}^{N_{\mathrm{it}}} c^n_{\mathrm{AR}}}.$
Note that the computational complexities in both cases are bounded functions of the number of particles and that, due to the ergodicity of the dynamics, which was proved in Redon et al. (2016), the averages in (7) almost surely converge.
However, the important point is the reduction of the error for a given wall-clock duration. We focus here on the statistical error, which is often the dominant source of errors. In order to express the total speedup with respect to the standard method, we need to consider not only the algorithmic speedup, but also the modification of the asymptotic variance, which depends on the concrete choice of the kinetic energy (see expression (3)). We define the total speedup as the ratio of the wall-clock time needed by the standard method to achieve some statistical precision and the wall-clock time needed by the AR method to reach the same precision:
(8)  $S = \dfrac{T_{\mathrm{std}}(\varepsilon)}{T_{\mathrm{AR}}(\varepsilon)}.$
Recall that, for an observable $\varphi$, we denoted by $\sigma^2_{\varphi,\Delta t}$ the asymptotic variance of the sampling from the discretized dynamics and by $\sigma^2_{\varphi}$ the asymptotic variance of the continuous dynamics. From the Central Limit Theorem, the statistical error at time $t$ is given by

$\varepsilon = \sqrt{\dfrac{\sigma^2_{\varphi,\Delta t}}{t}},$

where $t = N_{\mathrm{it}}\,\Delta t$ and $\sigma^2_{\varphi,\Delta t} = \sigma^2_{\varphi} + \mathrm{O}(\Delta t)$. Hence the number of time steps needed in order to have a statistical error of order $\varepsilon$ is

$N_{\mathrm{it}} = \dfrac{\sigma^2_{\varphi,\Delta t}}{\varepsilon^2\,\Delta t}.$
The corresponding wall-clock time is therefore obtained by considering the average cost $\bar{c}$ per time step as

$T(\varepsilon) = \bar{c}\,N_{\mathrm{it}} = \bar{c}\,\dfrac{\sigma^2_{\varphi,\Delta t}}{\varepsilon^2\,\Delta t}.$
Taking into account that $\sigma^2_{\varphi,\Delta t} \approx \sigma^2_{\varphi}$ (for time steps small enough, recall Section 2.1), the total speedup defined in (8) can be expressed as

(9)  $S = \dfrac{\bar{c}_{\mathrm{std}}}{\bar{c}_{\mathrm{AR}}}\cdot\dfrac{\Delta t_{\mathrm{AR}}}{\Delta t_{\mathrm{std}}}\cdot\dfrac{\sigma^2_{\varphi,\Delta t,\mathrm{std}}}{\sigma^2_{\varphi,\Delta t,\mathrm{AR}}} \approx S_{\mathrm{a}}\,\dfrac{\Delta t_{\mathrm{AR}}}{\Delta t_{\mathrm{std}}}\cdot\dfrac{\sigma^2_{\varphi,\mathrm{std}}}{\sigma^2_{\varphi,\mathrm{AR}}}.$
The last two terms in (9) become equal for small values of $\Delta t$, and it is therefore sufficient to study the variances $\sigma^2_{\varphi,\mathrm{std}}$ and $\sigma^2_{\varphi,\mathrm{AR}}$ of the continuous processes. As we have already mentioned, the choice of the modified kinetic energy should not change the stability properties of the standard dynamics; this would otherwise require choosing a smaller time step size $\Delta t_{\mathrm{AR}}$, which would lead to a smaller total speedup $S$. Unfortunately, this is the case for the kinetic energy defined in Artemova and Redon (2012). Still, the stability can be significantly improved by using the kinetic energy given by (5) instead; in this case, the stability properties become comparable to the ones of the standard dynamics (Stoltz and Trstanova (2016)). We therefore assume in this work $\Delta t_{\mathrm{AR}} = \Delta t_{\mathrm{std}}$.
The computation in (9) shows the trade-off between the algorithmic speedup and the change in variance. Both the algorithmic speedup and the AR variance depend on the parameters of the AR dynamics. As already shown in Artemova and Redon (2012), in some applications the restraining parameters can be chosen such that the total speedup satisfies $S > 1$. Therefore, there are systems for which this method can be efficient, even though this might be counter-intuitive, since one could expect that, in order to accelerate the sampling, the system should move "faster" and not be restrained. Note, however, that the wall-clock duration of the force update step depends on the implementation and on the complexity of the evaluation of the force function. Hence, the same physical model, with the same variance, can have various algorithmic speedups $S_{\mathrm{a}}$. Finally, an interesting observation is that, due to the separability of the Hamiltonian, the algorithmic speedup does not depend on the potential.
4 Algorithmic speedup
The goal of this section is to propose a methodology to analyze the algorithmic speedup (defined in (7)) of AR dynamics as a function of the percentage of restrained particles. We first describe the adaptive algorithm for computing forces, and we estimate the corresponding computational cost. In the last part, we also consider the effort for updating the neighbor lists used for the update of short-ranged interactions, and we obtain an estimate of the algorithmic speedup per time step.
4.1 Description of the AR force update algorithm
For simplicity, we consider a system of $N$ particles where only pairwise interactions take place. In AR dynamics, the sum (6) can be split into three kinds of interactions depending on the states of the two interacting particles: active-active, active-restrained and restrained-restrained. We define the set $A$ of indices of active particles and the set $R$ of indices of restrained particles. Then, the sum (6) can be rewritten as

(10)  $F_i^{\,n} = \sum_{j \in A,\, j \neq i} F(q_i^n, q_j^n) + \sum_{j \in R,\, j \neq i} F(q_i^n, q_j^n).$
The force acting on particle $i$ at the next time step can be formally obtained using the old positions $q^n$:

(11)  $F_i^{\,n+1} = F_i^{\,n} - \sum_{j \neq i} F(q_i^n, q_j^n) + \sum_{j \neq i} F(q_i^{n+1}, q_j^{n+1}).$
Since, for the set of restrained particles, positions have not changed since the previous time step, one can easily see that

$F(q_i^{n+1}, q_j^{n+1}) = F(q_i^n, q_j^n) \quad \text{whenever } i, j \in R.$

The computation in (11) is therefore reduced to subtracting the old and adding the new active-restrained and active-active interactions. This simple remark provides in fact the key point for the reduced complexity of the AR algorithm.
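The incremental update can be sketched as follows. The pair force, system size and set of active particles are hypothetical, and the code simply checks that the incremental result matches a full recomputation.

```python
import numpy as np

def pair_force(qi, qj):
    # hypothetical pair force (a harmonic spring), standing in for any
    # antisymmetric F(q_i, q_j) = -F(q_j, q_i)
    return qj - qi

def incremental_force_update(q_old, q_new, F, active):
    """Update total forces after a position update (sketch of Eq. (11)).

    q_old, q_new : (N, d) positions before/after the A step; restrained
                   particles satisfy q_new[i] == q_old[i]
    F            : (N, d) total forces, consistent with q_old on input
    active       : indices of active particles
    Only pairs involving at least one active particle are touched,
    which is the source of the algorithmic speedup.
    """
    N = len(q_old)
    act = set(active)
    for i in active:
        for j in range(N):
            if j == i or (j in act and j < i):
                continue              # Newton's third law: each pair once
            dF = pair_force(q_new[i], q_new[j]) - pair_force(q_old[i], q_old[j])
            F[i] += dF                # subtract old, add new contribution
            F[j] -= dF                # reaction on particle j
    return F

def full_forces(q):
    # reference: recompute all pairwise interactions from scratch
    F = np.zeros_like(q)
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            f = pair_force(q[i], q[j])
            F[i] += f
            F[j] -= f
    return F

# usage: move three active particles and update incrementally
rng = np.random.default_rng(1)
q_old = rng.standard_normal((8, 3))
q_new = q_old.copy()
active = [0, 2, 5]
q_new[active] += 0.1 * rng.standard_normal((len(active), 3))
F = incremental_force_update(q_old, q_new, full_forces(q_old), active)
print(np.allclose(F, full_forces(q_new)))   # True
```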
In a standard simulation, when taking into account Newton's third law $F_{ij} = -F_{ji}$, the computational cost of the pairwise interactions is $N(N-1)/2$ calls to the force function. The resulting quadratic complexity in the number of particles is not favorable due to the system size, and therefore neighbor lists are usually introduced (see Allen and Tildesley (1989); Frenkel and Smit (2002)). For a comparison of various approaches to neighbor-list methods, we refer the reader to Artemova et al. (2011). Neighbor lists can be used in systems where forces vanish after a certain cutoff distance, so that each particle only interacts with a relatively limited number of neighbors. For simplicity, we consider a homogeneous system, in which we assume that the number of neighbors $N_{\mathrm{nb}}$ is the same for each particle. Taking into account that, for each pair, we may compute the force only once and deduce the reaction thanks to Newton's third law, the number of interactions reduces to $N N_{\mathrm{nb}}/2$.
The BAO discretization scheme (4) can be formalized as in Algorithm 1: a momentum update (B), a position update followed by a full force update (A), and a fluctuation-dissipation step (O).
In AR dynamics, the information about which particles are going to move during the position update is already available after the momentum update B, since the kinetic energy does not change anymore before the A step. Algorithm 1 may thus be modified as in Algorithm 2: after the B step, build the lists of active and restrained particles and subtract the old active-active and active-restrained interactions; then perform the position update A and add the corresponding new interactions.
Updating neighbor lists normally consists in reassigning each particle to a specific grid cell (in our implementation, we used a combination of Verlet lists and linked-cell lists (Frenkel and Smit (2002))). In AR dynamics, restrained particles do not have to be reassigned, and neighbor lists may be updated more efficiently. More precisely, the complexity of updating the neighbor lists goes from $N$, the number of particles, to $N_{\mathrm{a}}$, the number of active particles.
Note that the force function is called twice per pair in the AR force update (in the subtract and add steps), since we need to re-evaluate forces at the positions of the previous time step. It would be possible to avoid computing forces twice by saving all pairwise forces, but this may result in a quadratic space complexity. We do not analyze this case, although it would result in a larger algorithmic speedup and lead to less restrictive conditions on the efficiency of AR dynamics.
Note that there is a slight overhead due to computing the AR kinetic energy functions and their derivatives, which are more complicated than in the standard case. Still, this involves only $\mathrm{O}(N)$ additional operations, and can be neglected compared to the cost of the force update in practical applications. Furthermore, the overhead mostly comes from the transition regime, since the derivative of the AR kinetic energy vanishes for restrained particles and coincides with the standard one in the full-dynamics state.
Note that a similar strategy for the incremental force update may be applied to other splitting schemes of the modified Langevin equations. However, the status of a particle (active, in transition or restrained) depends on the state of its momenta before the position update, and hence this status should not be destroyed by a momentum update between two position updates. Using the same notation as in Section 2.1 and Leimkuhler et al. (2016), this implies that the second-order splittings BAOAB and OABAO are not directly suited for a modification by the AR dynamics algorithm, since between the two position updates A the momentum changes through either an O or a B step. On the other hand, ABOBA, BOAOB, OBABO and AOBOA can be used, since the lists of active particles can be created before the position updates A, and hence active-active and active-restrained interactions can be subtracted and added around each position update A.
4.2 Complexity analysis
At each time step, the computational cost of the force update depends on the percentage of restrained particles. Let us denote the number of active particles by $N_{\mathrm{a}} = (1-\rho)N$, where $1-\rho$ is the average ratio of active particles. The number of restrained particles is then $N_{\mathrm{r}} = \rho N$. We are going to formalize the computational complexity of the force update as a function of the ratio of restrained particles, denoted by $\rho$.
We recall that we considered the average computational cost over the whole trajectory in equation (7), since the instantaneous computational cost may vary at each time step. Because, in the algorithm analyzed in this paper, we both add and subtract pairwise forces, the computational complexity of the force update in an AR simulation is lower than that of a regular force update if and only if a sufficient number of particles is restrained. We are thus going to analyze which conditions on the number of restrained particles are sufficient to obtain a speedup larger than one, when a standard simulation has a linear or quadratic complexity (the quadratic complexity corresponds to interactions computed between all pairs, and the linear complexity to the case where neighbor lists can be applied). This analysis can be extended to other force update algorithms.
4.2.1 Quadratic complexity
Let us first consider a standard (non-adaptive) simulation with a quadratic-complexity force update algorithm, i.e. when no neighbor lists are used. In this case, the number of interactions computed at every time step is $c_{\mathrm{std}} = N(N-1)/2$. In AR dynamics, we do not need to recompute interactions between restrained particles; hence we only update interactions involving active particles, either with other active particles or with restrained particles. As a result, the computational cost of the AR force update is (the factor of 2 comes from the need to subtract old forces, with previous positions, and to add new forces, with current positions):

$c_{\mathrm{AR}} = 2\left(\frac{N_{\mathrm{a}}(N_{\mathrm{a}}-1)}{2} + N_{\mathrm{a}} N_{\mathrm{r}}\right),$
and $c_{\mathrm{AR}} < c_{\mathrm{std}}$ is satisfied for

(12)  $\rho > \frac{1}{\sqrt{2}} \quad \text{and} \quad N > \frac{\rho - 1/2}{\rho^2 - 1/2}.$

The lower bound on the number of particles is not a restrictive condition for molecular dynamics, where the number of particles is usually much bigger. (For example, for $\rho = 3/4$, the number of particles must be larger than 4.) More importantly, this implies that at least $1/\sqrt{2} \approx 71\%$ of the particles must be restrained in order for this force update algorithm to be beneficial, in which case the algorithmic speedup is:

$S_{\mathrm{a}} = \frac{N(N-1)/2}{N_{\mathrm{a}}(N_{\mathrm{a}}-1) + 2 N_{\mathrm{a}} N_{\mathrm{r}}}.$
When the number of particles tends to infinity, this becomes

(13)  $S_{\mathrm{a}} = \frac{1}{2(1-\rho^2)}.$
Note that if the double computation of forces can be avoided (for example by storing previous pairwise forces), the complexity becomes

$c_{\mathrm{AR}} = \frac{N_{\mathrm{a}}(N_{\mathrm{a}}-1)}{2} + N_{\mathrm{a}} N_{\mathrm{r}},$

so that $c_{\mathrm{AR}} < c_{\mathrm{std}}$ is achieved for any $\rho > 0$ as soon as $N_{\mathrm{r}} > 1$, resulting in the following speedup as $N \to \infty$:

$S_{\mathrm{a}} = \frac{1}{1-\rho^2}.$
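The break-even point of the quadratic-complexity case can be checked numerically; the cost models below simply restate the operation counts above.

```python
import numpy as np

def cost_standard(N):
    # all pairwise interactions, with Newton's third law applied
    return N * (N - 1) / 2

def cost_ar_quadratic(N, rho):
    # subtract + add (factor 2) of active-active and active-restrained
    # interactions; rho is the ratio of restrained particles
    Na = (1.0 - rho) * N
    Nr = rho * N
    return 2.0 * (Na * (Na - 1) / 2 + Na * Nr)

N = 10_000
for rho in (0.5, 1 / np.sqrt(2), 0.9):
    S_a = cost_standard(N) / cost_ar_quadratic(N, rho)
    print(f"rho = {rho:.3f}: S_a = {S_a:.3f}")
# for large N the ratio approaches 1/(2(1 - rho^2)): break-even at rho = 1/sqrt(2)
```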
4.2.2 Linear complexity
Let us now consider the (much more frequent) case where the complexity of the force update is linear, e.g. when forces become sufficiently small after a given cutoff distance, and neighbor lists may be used to efficiently determine which particles are interacting. The reference complexity is therefore $c_{\mathrm{std}} = N N_{\mathrm{nb}}/2$, where $N_{\mathrm{nb}}$ is the (average) number of neighbors. The algorithm for the adaptive force update is as follows: all active particles compute interactions with their restrained neighbors, and between active neighbors Newton's third law is used. The total number of interactions to be updated in the AR dynamics algorithm is then:

(14)  $c_{\mathrm{AR}} = 2 \sum_{i \in A} \left(|\mathcal{N}_i^{\mathrm{r}}| + \frac{|\mathcal{N}_i^{\mathrm{a}}|}{2}\right),$
where the set $\mathcal{N}_i^{\mathrm{a}}$ contains the indices of the active neighbors of particle $i$, $\mathcal{N}_i^{\mathrm{r}}$ contains the indices of its restrained neighbors, and $|\mathcal{N}_i^{\mathrm{a}}| + |\mathcal{N}_i^{\mathrm{r}}| = N_{\mathrm{nb}}$. In the homogeneous setting, $|\mathcal{N}_i^{\mathrm{r}}| \approx \rho N_{\mathrm{nb}}$ and $|\mathcal{N}_i^{\mathrm{a}}| \approx (1-\rho) N_{\mathrm{nb}}$, so that $c_{\mathrm{AR}} \approx N N_{\mathrm{nb}} (1-\rho^2)$. The necessary condition for $c_{\mathrm{AR}} < c_{\mathrm{std}}$ is then

(15)  $\rho > \frac{1}{\sqrt{2}}.$

Note that this condition depends neither on $N$ nor on $N_{\mathrm{nb}}$. The AR dynamics algorithm is more efficient in the number of operations for the force update if and only if the percentage of restrained particles is bigger than $1/\sqrt{2} \approx 71\%$. The algorithmic speedup is hence

(16)  $S_{\mathrm{a}} = \frac{1}{2(1-\rho^2)}.$
Again, avoiding the double recomputation of force components from the old positions of the active particles removes the factor of 2 from (14), and the computational cost becomes $c_{\mathrm{AR}} \approx N N_{\mathrm{nb}} (1-\rho^2)/2$, which implies an unconditional algorithmic speedup $S_{\mathrm{a}} = 1/(1-\rho^2) \geq 1$.
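A small helper (illustrative, taking the ratio of restrained particles as input) evaluates the asymptotic speedup of the neighbor-list variant; note that $N$ and $N_{\mathrm{nb}}$ cancel out of the ratio.

```python
def speedup_linear(rho, double_update=True):
    """Asymptotic algorithmic speedup with neighbor lists (sketch).

    Standard cost per particle: N_nb/2 interactions.  AR cost per
    active particle: all interactions with restrained neighbors plus
    half of those with active neighbors (Newton's third law), times 2
    (subtract + add) unless old pairwise forces are stored.  N and
    N_nb cancel in the ratio, so neither appears here.
    """
    factor = 2.0 if double_update else 1.0
    return 0.5 / (factor * (1 - rho) * (rho + (1 - rho) / 2))

# break-even of the double-update variant is at rho = 1/sqrt(2) ~ 0.707
for rho in (0.5, 0.75, 0.9):
    print(f"rho = {rho}: S_a = {speedup_linear(rho):.2f}")
```

The expression simplifies to $1/(2(1-\rho^2))$, matching (16); dropping the factor 2 gives the unconditional speedup $1/(1-\rho^2)$.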
An important conclusion is that the incremental force update is computationally beneficial only if the percentage of restrained particles is larger than the threshold $1/\sqrt{2}$. We may thus modify Algorithm 2 so that the incremental update is used only when the current ratio of restrained particles exceeds this threshold, and a standard force update is performed otherwise.
Finally, we consider the case where the neighbor lists are updated at each time step (this can easily be modified in order to account for an update of the neighbor lists only after every given time period). Updating at each step is not usually done in practical applications, where neighbor lists are updated only after a certain time period, which can be computed from the maximal velocity of the particles. The cost per time step then extends to reassigning the particles into the grid, which requires on the order of $N$ operations. In an AR simulation, only active particles need to be reassigned into the grid. Therefore, the cost per time step computed in (14) becomes

$c_{\mathrm{AR}} \approx N N_{\mathrm{nb}} (1-\rho^2) + (1-\rho) N.$

Assuming that there are $N_{\mathrm{nb}}$ neighbors on average, the resulting speedup is:

(17)  $S_{\mathrm{a}} = \frac{N_{\mathrm{nb}}/2 + 1}{N_{\mathrm{nb}}(1-\rho^2) + (1-\rho)}.$
5 Total speedup
As explained above, the total speedup (8) reachable by AR dynamics when estimating observables depends both on the computational complexity of the force update and on the variance of the AR dynamics.
In this section, we first analyze how the percentage of restrained particles depends on the restraining parameters $v_{\min}$ and $v_{\max}$. Then, we approximate the variance of the AR dynamics by a linear function of these parameters. Combining both, we finally express the total speedup as a function of $v_{\min}$ and $v_{\max}$.
5.1 Percentage of restrained particles
The percentage of restrained particles can be computed from the average occupation of the restrained state of each particle. In other words, it is the probability that the momenta of one particle belong to the restrained region of phase space. For the AR kinetic energy function (5), the average occupation of the restrained state of a particle with parameters $v_{\min}$ and $v_{\max}$ is the expected value of the indicator function of the event that the absolute values of all momenta components of the particle are smaller than the restraining parameter $v_{\min}$.
We denote by $\mu_{v_{\min},v_{\max}}$ the invariant measure which corresponds to the AR kinetic energy function with parameters $v_{\min}$ and $v_{\max}$, and we compute

(18)  $\eta(v_{\min}, v_{\max}) = \mu_{v_{\min},v_{\max}}\Big(|p^{(k)}| \leq v_{\min},\ k = 1, \dots, d\Big) = \left(\frac{2\,v_{\min}\,\mathrm{e}^{-\beta C}}{Z_1}\right)^{d},$

where the momenta normalization constant of one particle is simply $Z_p = Z_1^d$, with

(19)  $Z_1 = 2\,v_{\min}\,\mathrm{e}^{-\beta C} + 2\int_{v_{\min}}^{v_{\max}} \mathrm{e}^{-\beta\Sigma(p)}\,dp + 2\int_{v_{\max}}^{+\infty} \mathrm{e}^{-\beta p^2/(2m)}\,dp.$
Note that in the standard dynamics $Z_1 = \sqrt{2\pi m/\beta}$.
Finally, considering a system consisting of particles with various restraining parameters, the total average percentage of restrained particles can be computed as an average over the individual values for each particle. Denoting by $N_{v_{\min},v_{\max}}$ the number of particles with parameters $v_{\min}$ and $v_{\max}$, and by $\mathcal{P}$ the set of all chosen pairs $(v_{\min}, v_{\max})$, the total average percentage of restrained particles (which corresponds to the ratio $\rho$ used in Section 4) is given by

(20)  $\rho = \frac{1}{N} \sum_{(v_{\min}, v_{\max}) \in \mathcal{P}} N_{v_{\min},v_{\max}}\,\eta(v_{\min}, v_{\max}).$
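The average occupation (18)–(19) can be evaluated by numerical quadrature. The sketch below assumes the linear-interpolation kinetic energy with integration constant $C = v_{\min} v_{\max}/(2m)$, unit mass and $\beta = 1$ by default; these are illustrative choices, not values from the paper.

```python
import numpy as np

def restrained_fraction(v_min, v_max, beta=1.0, m=1.0, d=3):
    """Average occupation of the restrained state, Eqs. (18)-(19) (sketch).

    Uses the AR kinetic energy obtained by integrating a linear
    interpolation of U' (integration constant C = v_min*v_max/(2m))
    and a plain Riemann sum for the one-component normalization Z_1.
    """
    C = v_min * v_max / (2.0 * m)
    slope = (v_max / m) / (v_max - v_min)

    def U(p):
        a = np.abs(p)
        return np.where(a <= v_min, C,
                        np.where(a >= v_max, a * a / (2.0 * m),
                                 C + 0.5 * slope * (a - v_min) ** 2))

    # integrate well into the Gaussian tail of the full-dynamics region
    p_max = v_max + 10.0 * np.sqrt(m / beta)
    p = np.linspace(-p_max, p_max, 200_001)
    Z1 = np.sum(np.exp(-beta * U(p))) * (p[1] - p[0])
    Zr = 2.0 * v_min * np.exp(-beta * C)   # flat region, known analytically
    return (Zr / Z1) ** d

# the occupation grows with v_min (illustrative parameter values)
print(round(restrained_fraction(0.5, 1.0), 3), round(restrained_fraction(1.5, 2.0), 3))
```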
For example, the percentage of restrained particles for a system consisting of a dimer that follows the standard dynamics, surrounded by $N_{\mathrm{s}}$ solvent particles following the AR dynamics with non-zero parameters $v_{\min}$ and $v_{\max}$, is:

$\rho = \frac{N_{\mathrm{s}}}{N}\,\eta(v_{\min}, v_{\max}),$

since, in the standard dynamics, the average occupation of the restrained state is zero.
In conclusion, the algorithmic speedup can be estimated from the computational complexity of the algorithm (see Section 4), with the speedup being a function of $\rho$.
Figure 3 shows, for the kinetic energy defined in (5), the average occupation of the restrained state as a function of $v_{\min}$, for various values of $v_{\max}$ with $v_{\min} \leq v_{\max}$, in dimension three. We also depict the value of the percentage of restrained particles which corresponds to the necessary condition for a speedup (given by (12) or (15)). We observe in this figure that the bigger $v_{\min}$, the bigger the average occupation of the restrained state. Figure 4 shows the dependence of $\eta$ on the temperature. This suggests that the restraining parameters should be scaled with respect to the temperature.
Finally, Figure 5 shows the percentage of restrained particles as a function of both parameters. Note that its highest values are located close to the diagonal, i.e. when the gap between the two parameters is small.
5.2 Linear approximation of the variance
In Redon et al. (2016), it was proved that there exists a regime in which the asymptotic variance of AR dynamics simulations can be approximated by a linear function of the restraining parameters (see (Redon et al., 2016, Proposition 4.3)): for sufficiently small restraining parameters, there exist constants such that
(21) 
The total speedup in wall-clock time needed to reach a given statistical precision (9) can hence be expressed in terms of the restraining parameters using (21) as
(22)  
The gap between the two restraining parameters should be large enough to ensure a smooth transition between the full and the restrained dynamics and to prevent numerical instabilities. Note, however, that in the numerical experiments performed in (Redon et al., 2016, Section 5.1), where the variance was computed for a simple one-dimensional system, it was shown that the relative increase of the variance with respect to the full-dynamics parameter is more significant than with respect to the restraining parameter. This result is not surprising, since the gap between the parameters smooths out the dynamics, which translates into an increase of correlations. This implies that the optimal strategy is to choose the gap between the parameters as small as possible while still maintaining numerical stability and keeping the systematic error sufficiently low (i.e. the error on the computed averages arising from the finite time step, see Leimkuhler et al. (2016)). At the same time, the restraining parameters should give the desired percentage of restrained particles. Moreover, the relative derivative of the average occupation of the restrained state of one particle with respect to the lower restraining parameter almost vanishes beyond a critical value, after which the growth of the occupation slows down (see again Figure 3). Having in mind that the variance locally increases with respect to the restraining parameters, this implies that, in this region, the algorithmic speedup does not grow fast enough with an increasing lower parameter, while the variance might still be growing. In this case, either the gap should be chosen smaller, or one must ensure that the variance does not grow too fast, in order to compensate the variance growth with the algorithmic speedup.
It is easy to estimate the algorithmic speedup. The difficult part is to estimate the sensitivity of the variance of a given observable with respect to the modification of the dynamics, i.e. the estimation of the two constants in (22). This can be done by a linear interpolation in a preprocessing step, which should involve at least three AR dynamics simulations in order to estimate the constants. Finally, (22) provides a complete expression for the total speedup as a function of the restraining parameters. Choosing the gap as small as possible, one can find the optimal lower restraining parameter which produces the highest total speedup (see Section 6 for a numerical example).
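A sketch of this preprocessing strategy, assuming the concrete linear form `var ≈ var0 + a*v_min + b*v_max` for (21) (the exact form and constants are those of (Redon et al., 2016, Proposition 4.3)) and a user-supplied model `sp_algo` of the algorithmic speedup; all names are illustrative:

```python
import numpy as np

def fit_variance_slopes(params, variances, var0):
    """Least-squares fit of var(v_min, v_max) ~ var0 + a*v_min + b*v_max
    from k >= 3 preprocessing runs; params has shape (k, 2)."""
    A = np.asarray(params, dtype=float)
    y = np.asarray(variances, dtype=float) - var0
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

def best_v_min(sp_algo, var0, a, b, gap, grid):
    """Scan candidate v_min values (v_max = v_min + gap held fixed) and
    return the one maximizing Sp_total = sp_algo * var0 / var, cf. (22)."""
    def sp_total(v):
        return sp_algo(v, v + gap) * var0 / (var0 + a * v + b * (v + gap))
    return max(grid, key=sp_total)
```

Three short AR runs provide `params` and `variances`; the near-optimal lower parameter is then read off the fitted model without further sampling.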
We thus propose the following guidelines to estimate the total speedup with respect to the restraining parameters:

Choose the order of magnitude of the restraining parameters for each particle according to its mass, its role in the system and the temperature.

Choose the minimal gap between the restraining parameters compatible with numerical stability.

Compute the algorithmic speedup of the implementation according to Section 4.

Estimate the linear approximation of the variance for the observable of interest.

Find the optimal value of the lower restraining parameter (with the gap fixed) by maximizing the total speedup.
6 Numerical illustration
In order to illustrate the theoretical results from the previous section, we consider a system consisting of a dimer (2 particles) surrounded by 62 solvent particles (see Figure 6 for an illustration of this system). This model is representative of many applications in biology, chemistry or physics, where the macroscopic property of interest only depends on a small part of the simulated system. For example, in the simulation of a protein in solvent, the interest lies in computing macroscopic properties of the protein (see for instance Artemova and Redon (2012)). The validation of this method on real-world problems is still needed; this work has already started with the implementation of the ARPS method in LAMMPS (see Singh and Redon (2016)).
We use periodic boundary conditions with the box length chosen to fix the particle density. We consider reduced units in which particles have identical masses, and the temperature is fixed. The friction constant in the Langevin equations is kept constant. Solvent particles interact with each other and with the dimer particles through a truncated Lennard-Jones potential with a finite cutoff distance. Dimer particles interact with each other through a double-well potential. This potential corresponds to a metastable system with two metastable states: compact and stretched. For a more precise formulation, see Section 5.2 in Redon et al. (2016) or Lelièvre et al. (2010). We discretize the modified Langevin equations (1) by a second-order scheme (OBABO) with a fixed time step size and run a long trajectory.
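For reference, one step of the OBABO splitting can be sketched as follows. This is the standard-kinetic-energy version (p²/2m); in the AR case the position update (A) would use the gradient of the modified kinetic energy instead of p/m. The function name and signature are illustrative.

```python
import numpy as np

def obabo_step(q, p, force, dt, m=1.0, gamma=1.0, beta=1.0, rng=None):
    """One OBABO step: half fluctuation-dissipation (O), half kick (B),
    full drift (A), half kick (B), half fluctuation-dissipation (O)."""
    if rng is None:
        rng = np.random.default_rng()
    c1 = np.exp(-gamma * dt / 2.0)             # exact OU decay over dt/2
    c2 = np.sqrt((1.0 - c1 * c1) * m / beta)   # matching noise amplitude
    p = c1 * p + c2 * rng.standard_normal(np.shape(p))   # O
    p = p + 0.5 * dt * force(q)                          # B
    q = q + dt * p / m                                   # A
    p = p + 0.5 * dt * force(q)                          # B
    p = c1 * p + c2 * rng.standard_normal(np.shape(p))   # O
    return q, p
```

For a harmonic force and m = β = 1, long trajectories reproduce the equipartition value ⟨p²⟩ ≈ 1 up to an O(Δt²) discretization bias.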
We use neighbor lists based on the cutoff distance of the Lennard-Jones potential, according to Algorithm 3, and estimate the average number of neighbors per particle numerically. We run one reference simulation in the standard dynamics.
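Algorithm 3 itself is not reproduced here; as an illustration, a brute-force O(N²) neighbor-list construction with the minimum-image convention might look as follows (production codes use cell lists to reach linear complexity):

```python
import numpy as np

def neighbor_list(positions, box, r_cut):
    """Return, for each particle, the indices of all particles within
    r_cut under periodic boundary conditions (minimum-image convention)."""
    n = len(positions)
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = positions[i] - positions[j]
            d -= box * np.round(d / box)      # minimum image
            if float(np.dot(d, d)) < r_cut * r_cut:
                nbrs[i].append(j)
                nbrs[j].append(i)
    return nbrs
```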
In the AR simulations, we consider nonzero restraining parameters for the solvent only, and we let the dimer particles follow the standard dynamics. In order to demonstrate the dependence of the total speedup on the restraining parameters, we consider the following observables: the dimer distance, the dimer potential and the kinetic temperature. The first two observables only depend on the positions of the dimer particles, hence we expect that their variance will not be much modified even for large restraining parameters. The kinetic temperature depends on the momenta of all particles, and its exact average is known analytically. This allows us to verify that the time step size is chosen sufficiently small to keep the systematic error on the averages negligible even for the largest restraining parameters considered. The asymptotic variance of a time average for a given observable is estimated by approximating the integrated autocorrelation function with a trapezoidal rule (see Section 5.2 in Redon et al. (2016)).
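The variance estimator referred to above can be sketched as a truncated sum of empirical autocovariances, i.e. a discrete trapezoidal approximation of the integrated autocorrelation function; the truncation lag `n_lags` is a user choice, not a value from the paper:

```python
import numpy as np

def asymptotic_variance(x, n_lags):
    """Estimate the asymptotic variance of the time average of x as
    C(0) + 2*sum_{k=1}^{n_lags} C(k), where C(k) is the empirical
    autocovariance at lag k (trapezoidal rule on the two-sided ACF)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    c = np.array([np.dot(x[:n - k], x[k:]) / (n - k)
                  for k in range(n_lags + 1)])
    return c[0] + 2.0 * np.sum(c[1:])
```

For i.i.d. samples the estimate reduces to the ordinary variance, while positively correlated time series give larger values.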
First, we confirm the theoretical predictions for the algorithmic speedup. In our simulations, we measure the time per force update, as well as the time per time step. We compare the measured speedup, i.e. the ratio of the measured time in the standard dynamics to that in the AR dynamics, with the estimated speedup of the force update (16) and of the overall time step (17). Figure 7 shows a comparison of the predicted and the measured algorithmic speedup in our simulation, which demonstrates that it is possible to roughly estimate the computational behavior of a specific implementation. Even for our simple implementation, however, the mismatch between the curves may have multiple causes: the update of positions and momenta, the creation of lists of active particles, the updating of the neighbor lists, etc., as well as the fact that, with a growing lower restraining parameter at a fixed gap, particles spend more time in the transition region, which is computationally more expensive due to the spline. As we suggest later in the paper, realistic implementations such as those found in popular MD packages (e.g. LAMMPS, GROMACS, etc., see Singh and Redon (2016)) are very complex, and the best strategy to determine AR parameters may be to directly measure algorithmic speedups on short simulations.
Figure 8 plots the estimated relative variation of the variance of the three observables as a function of the restraining parameters, for a fixed gap between them. Recall that only the solvent particles are restrained; therefore the variance of an observable such as the kinetic temperature, which directly depends on these degrees of freedom, is perturbed more from the variance in the standard case than the variance of observables depending on particles following the standard dynamics (the dimer). We confirm the results shown in Redon et al. (2016): as the restraining parameters grow, the variance of the kinetic temperature is modified more drastically than the variance of observables measured on the dimer.
Finally, combining the algorithmic speedup with the variance, we estimate the total speedup according to (9). This is depicted in Figure 9. We again consider several values of the gap between the parameters in order to demonstrate its impact on the total speedup: the smaller the gap, the larger the total speedup becomes. Also, a total speedup larger than one is achieved for the dimer observables only, and not for the global observable (the kinetic temperature), for which the relative deviation of the variance dominates the algorithmic speedup. This supports the idea that we can speed up the computation of macroscopic properties that depend on unrestrained degrees of freedom, i.e. those of the dimer in this example.
It is easy to compute the algorithmic speedup. The problematic part is the determination of the sensitivity of the variance of the observable with respect to the restraining parameters (see again Figure 8). Since the variance can be approximated, at least locally, by a linear function of the restraining parameters, we can compute the corresponding slopes from three AR simulations (note that, in this linear approximation, we consider a fixed ratio between the two restraining parameters). More precisely, this allows us to approximate the total speedup as
(23) 
We choose a fixed ratio between the parameters and estimate the slope from three AR simulations. Table 1 shows the comparison with the slope directly obtained from simulations with a fixed gap between the parameters (see Figure 8).
Table 1: slopes estimated from 3 points, compared with the slopes obtained directly from the simulations.