Simulating Stochastic Dynamics Using Large Time Steps

Abstract

We present a novel approach to investigate the long-time stochastic dynamics of multi-dimensional classical systems in contact with a heat bath. When the potential energy landscape is rugged, the kinetics displays a decoupling of short and long time scales, and both Molecular Dynamics (MD) and Monte Carlo (MC) simulations are generally inefficient. Using a field-theoretic approach, we perform analytically the average over the short-time stochastic fluctuations. This way, we obtain an effective theory, which generates the same long-time dynamics as the original theory, but has a lower time resolution power. Such an approach is used to develop an improved version of the MC algorithm, which is particularly suitable to investigate the dynamics of rare conformational transitions. In the specific case of molecular systems at room temperature, we show that the elementary integration time steps used to simulate the effective theory can be chosen significantly larger than those used in the original theory. Our results are illustrated and tested on a simple system, characterized by a rugged energy landscape.

I Introduction

The investigation of a vast class of physical phenomena requires the understanding of the long-time dynamics of classical systems in contact with a heat bath. Examples include critical dynamics, molecular aggregation, and protein folding, to name a few.

The most natural strategy to describe these processes is to integrate numerically the equations of motion, i.e. to perform MD simulations. Unfortunately, when the number of degrees of freedom is very large, or in the presence of large free energy barriers, MD approaches become extremely costly pandeeCo (), or even impracticable. The problem arises because the time scale associated with the system's local conformational changes can be many orders of magnitude smaller than the time scales of the dynamics one is interested in studying. As a result, most of the computational time is invested in simulating uninteresting thermal oscillations.

This situation is exemplified in Fig. 1, where we show the stochastic motion of a point particle, interacting with a 2-dimensional external potential. The solid line was obtained by means of an MD simulation and illustrates how, at short time scales, the dynamics of this system is dominated by fast modes associated with thermal diffusion. However, when the evolution of the system is described with a much lower time resolution power, the effect of such short-time thermal fluctuations tends to average out and become unimportant. This is evident from the comparison between the solid line and the dashed line, which was obtained by averaging over blocks of consecutive frames in the original MD trajectory. At long times, the dynamics of the system is mostly sensitive to the structure of the external energy landscape, which was chosen to be spherically symmetric.
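As a concrete illustration of this smoothing effect, the sketch below (in Python, with a hypothetical rugged 2-dimensional potential and arbitrary parameter values, not those of the original simulation) integrates the over-damped Langevin dynamics discussed in section II with a simple Euler scheme and then block-averages the trajectory over consecutive frames, as done for the dashed line in Fig. 1.

```python
import numpy as np

def grad_U(x):
    # Hypothetical rugged 2D potential: a smooth spherical well plus small ripples.
    smooth = x                        # gradient of U_0(x) = |x|^2 / 2
    ripple = 0.3 * np.cos(5.0 * x)    # gradient of 0.06 * sin(5 x_i), summed over components
    return smooth + ripple

def overdamped_langevin(x0, n_steps, dt, D, beta, rng):
    """Euler integration of dx = -D*beta*grad_U(x)*dt + sqrt(2*D*dt)*gaussian."""
    x = np.array(x0, dtype=float)
    traj = np.empty((n_steps, x.size))
    for k in range(n_steps):
        noise = rng.normal(size=x.size)
        x = x - D * beta * grad_U(x) * dt + np.sqrt(2.0 * D * dt) * noise
        traj[k] = x
    return traj

def block_average(traj, block):
    """Average the trajectory over blocks of consecutive frames (lower time resolution)."""
    n_blocks = len(traj) // block
    return traj[: n_blocks * block].reshape(n_blocks, block, -1).mean(axis=1)

rng = np.random.default_rng(0)
fine = overdamped_langevin(x0=[1.5, 0.0], n_steps=20000, dt=1e-3, D=1.0, beta=1.0, rng=rng)
coarse = block_average(fine, block=200)   # analogue of the dashed line in Fig. 1
```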

Clearly, an important question to ask is whether it is possible to develop theoretical and computational frameworks which yield directly the correct long-time dynamics, but avoid investing computational time in simulating the short-time thermal oscillations. Significant progress in this direction has recently been made by developing approaches based on Markov State Models MSM1 (); MSM2 (); MSM3 (). A potential difficulty of such approaches resides in the correct identification of the metastable states. In addition, for each different system, one needs to perform a large set of independent MD simulations in order to accurately calculate the rate coefficients.

In this work, we present an alternative approach to simulate the dynamics over long times. We develop a rigorous effective theory which (i) yields by construction the correct long-time dynamics and (ii) does not require identifying metastable states, nor evaluating the transition matrix by MD. To this goal, we use a field theory approach, based on Renormalization Group (RG) ideas and on the notion of effective field theory howto (). Such powerful tools have already been successfully applied to describe the low-energy dynamics of a vast variety of quantum and statistical systems characterized by a separation of scales —see e.g. zinn (); kleinert ()—. To the best of our knowledge, this method has never been applied to develop an effective theory to efficiently simulate the long-time stochastic dynamics of a system in contact with a heat bath.

The main idea of our approach is to exploit the decoupling of time scales in the system in order to define a perturbative series, in which the expansion parameter is the ratio of short over long time scales. In such a perturbative framework, the average over the short-time fluctuations can be computed analytically, to any desired level of accuracy. The average over the fast thermal oscillations gives rise to new terms in the stochastic path integral, which represent corrections both to the interaction and to the diffusion coefficient. Such new terms implicitly take into account the dynamics of the fast degrees of freedom, which have been integrated out from the system.

Once a finite number of such effective terms, corresponding to a given accuracy, have been calculated analytically, it is possible to simulate the dynamics of the system using much larger time steps. By construction, in the regime of decoupling of fast and slow modes, one is guaranteed that the effective long-time theory generates the same probability distributions as the underlying, more fundamental stochastic theory. It is important to emphasize that the present approach is not equivalent to simply including higher-order corrections in the Trotter expansion trotter (). Indeed, the assumption of decoupling of time scales leads to further simplifications with respect to such an approach.

The paper is organized as follows. In section II, we review the path integral formulation of the Langevin dynamics and outline the formal connection between stochastic dynamics and the evolution of a quantum particle in imaginary time. Such a connection is used in section III to identify and isolate the dynamics of the fast degrees of freedom. In sections IV and V we present our perturbative scheme, which allows us to integrate out the fast modes and derive the effective interactions and diffusion coefficients. In section VI we discuss how the effective theory for the dynamics at long time scales can be simulated using the diffusion MC algorithm, which is briefly reviewed in appendix B. Section VII is devoted to simple examples, which illustrate how this method works in practice. In section VIII we discuss the applicability of the present approach to simulate the Langevin dynamics of molecular systems. Results and conclusions are summarized in section IX.

Figure 1: Langevin dynamics of a point particle in a 2-dimensional external potential. The solid line denotes the result of an MD simulation. The dashed line is the result of averaging over blocks of consecutive frames of the MD trajectory. Such an average smoothes out the trajectory.

II Langevin Dynamics

We consider a system defined by a stochastic N-dimensional variable x(t) obeying the Langevin equation:

m\,\ddot{x}(t) = -m\gamma\,\dot{x}(t) - \nabla U(x(t)) + \eta(t),    (1)

where U(x) is a potential energy function, m is the mass, γ is the friction coefficient and η(t) is a delta-correlated Gaussian noise. In many molecular systems of interest, the acceleration term is damped at time scales of the order of 1/γ, which are much smaller than the time scale associated with local conformational changes. If such a term is dropped, one obtains the so-called over-damped (or zero-velocity) Langevin equation:

\dot{x}(t) = -\frac{1}{m\gamma}\,\nabla U(x(t)) + \xi(t),    (2)

where ξ(t) = η(t)/(mγ) is a rescaled delta-correlated Gaussian noise, satisfying the fluctuation-dissipation relationship:

\langle \xi_i(t)\,\xi_j(t')\rangle = 2\,D\,\delta_{ij}\,\delta(t-t'), \qquad D = \frac{k_B T}{m\gamma},    (3)

where D is the diffusion coefficient.

The over-damped Langevin equation defines a Markovian process. The probability distribution P(x, t) generated by such a stochastic differential equation obeys the Fokker-Planck equation

\frac{\partial}{\partial t} P(x,t) = D\,\nabla\cdot\left[\nabla P(x,t) + \beta\, P(x,t)\,\nabla U(x)\right],    (4)

where β = 1/(k_B T).

By performing the substitution

P(x,t) = e^{-\frac{\beta}{2} U(x)}\,\psi(x,t),    (5)

the Fokker-Planck Eq. (4) can be recast in the form of a Schrödinger equation in imaginary time:

\frac{\partial}{\partial t}\psi(x,t) = -\hat{H}_{\rm eff}\,\psi(x,t),    (6)

where the effective "Quantum Hamiltonian" operator reads

\hat{H}_{\rm eff} = -D\,\nabla^2 + V_{\rm eff}(x),    (7)

and V_eff(x) is called the effective potential and reads

V_{\rm eff}(x) = \frac{D\,\beta^2}{4}\left(\nabla U(x)\right)^2 - \frac{D\,\beta}{2}\,\nabla^2 U(x).    (8)

Hence, the problem of studying the diffusion of a classical particle can be mapped into the problem of determining the quantum-mechanical propagation in imaginary time of a virtual system, defined by the effective quantum Hamiltonian (7), interacting with the effective potential V_eff(x).
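To make the mapping concrete, the following minimal sketch evaluates the effective potential of Eq. (8) for a hypothetical one-dimensional double-well potential (an illustrative choice, not a system considered in this paper), using D = β = 1.

```python
import numpy as np

# Hypothetical 1D double-well potential used only for illustration.
def U(x):    return (x**2 - 1.0)**2
def dU(x):   return 4.0 * x * (x**2 - 1.0)
def d2U(x):  return 12.0 * x**2 - 4.0

def V_eff(x, D, beta):
    """Effective 'quantum' potential of Eq. (8): (D*beta^2/4)|grad U|^2 - (D*beta/2) Lap U."""
    return 0.25 * D * beta**2 * dU(x)**2 - 0.5 * D * beta * d2U(x)

x = np.linspace(-2.0, 2.0, 401)
v = V_eff(x, D=1.0, beta=1.0)
# V_eff is negative near the two minima x = +/-1 (where Lap U > 0) and
# positive near the barrier top x = 0.
```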

Let G(x, t | x_i) be the Green's function of the Fokker-Planck operator, subject to the initial condition G(x, 0 | x_i) = δ(x - x_i), i.e.

\left[\frac{\partial}{\partial t} - D\,\nabla\cdot\left(\nabla + \beta\,\nabla U(x)\right)\right] G(x, t\,|\, x_i) = 0, \qquad G(x, 0\,|\, x_i) = \delta(x - x_i).    (9)

The interpretation of such a Green's function is the probability for the system to be found in x at time t, conditioned to start from x_i at the initial time. Formally, such a conditional probability can be related to the "quantum" propagator of the effective Hamiltonian (7):

(10)
(11)

Hence, it is immediate to derive a path integral representation of the Green's function G(x_f, t | x_i):

G(x_f, t\,|\, x_i) = e^{-\frac{\beta}{2}\left[U(x_f) - U(x_i)\right]} \int_{x(0)=x_i}^{x(t)=x_f} \mathcal{D}x\; e^{-S_{\rm eff}[x]},    (12)

where S_eff[x] is the effective "action",

S_{\rm eff}[x] = \int_0^t d\tau\,\left[\frac{\dot{x}^2(\tau)}{4 D} + V_{\rm eff}(x(\tau))\right].    (13)

The pre-factor in Eq. (11) can be transformed away, noticing that e^{-\frac{\beta}{2}[U(x_f)-U(x_i)]} = e^{-\frac{\beta}{2}\int_0^t d\tau\, \dot{x}(\tau)\cdot\nabla U(x(\tau))}. One then obtains a path integral in which the statistical weight contains the Onsager-Machlup functional

S_{\rm OM}[x] = \frac{1}{4 D}\int_0^t d\tau\,\left(\dot{x}(\tau) + D\,\beta\,\nabla U(x(\tau))\right)^2.    (14)

Eq. (12) provides an expression for the conditional probability in terms of the microscopic stochastic dynamics governing the system. It represents the starting point of the Dominant Reaction Pathway approach DRP1 (); DRP2 (); DRP3 (); DRP4 (), which deals with the problem of finding the most probable transition pathways between given configurations x_i and x_f, which are visited at the initial and final times, respectively.

On the other hand, in this work we are interested in the corresponding initial value problem, i.e. we want to develop an effective theory which yields directly the long-time evolution of the probability density P(x, t), solution of Eq. (4), starting from a given initial probability density P_0(x). The probability density P(x, t), the Green's function G and the initial distribution P_0(x) are related by the equation

P(x, t) = \int dx_0 \; G(x, t\,|\, x_0)\, P_0(x_0).    (15)

Hence, for positive time intervals, the conditional probability G(x, t | x_0) can be considered as the propagator associated with the stochastic Fokker-Planck Eq. (4).
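A direct, if naive, way to use Eq. (15) numerically is to discretize space and apply the short-time propagator repeatedly. The sketch below (a minimal illustration, assuming the simple Euler-discretized transition kernel of the over-damped Langevin equation and the same hypothetical double-well potential as above) propagates an initial density on a one-dimensional grid; it is not yet the Monte Carlo scheme of section VI, only an illustration of the propagator relation.

```python
import numpy as np

def propagate_density(P0, x_grid, dt, n_steps, D, beta, grad_U):
    """Propagate P(x, t) on a 1D grid with the short-time Gaussian kernel of the
    Euler-discretized over-damped Langevin equation (a sketch of Eq. (15))."""
    dx = x_grid[1] - x_grid[0]
    # Transition matrix: G[i, j] ~ probability density of x_i at time dt, given x_j at time 0.
    drift = x_grid - D * beta * grad_U(x_grid) * dt
    G = np.exp(-(x_grid[:, None] - drift[None, :])**2 / (4.0 * D * dt))
    G /= G.sum(axis=0, keepdims=True) * dx        # normalize each column
    P = P0.copy()
    for _ in range(n_steps):
        P = G @ P * dx                            # discretized Eq. (15) for one step
    return P

# Hypothetical double-well example: the density relaxes toward exp(-beta * U) / Z.
x = np.linspace(-3.0, 3.0, 601)
P0 = np.exp(-0.5 * (x - 1.0)**2 / 0.05)
P0 /= np.trapz(P0, x)
P = propagate_density(P0, x, dt=1e-3, n_steps=5000, D=1.0, beta=1.0,
                      grad_U=lambda y: 4.0 * y * (y**2 - 1.0))
```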

III Separation of Fast and Slow Modes

Without loss of generality, let us focus on the stochastic path integral (12), with periodic boundary conditions:

Z = \oint_{x(0) = x(t)} \mathcal{D}x \; e^{-S_{\rm eff}[x]}.    (16)

We observe that the inverse temperature β plays the role of 1/ℏ in the analogy with the quantum-mechanical formalism. Hence, the loop expansion of the path integral (16) generates an expansion in powers of the temperature.

Let us introduce the Fourier decomposition of the paths:

x(\tau) = \sum_{n} x_n\; e^{i\omega_n \tau},    (17)
x_n = \frac{1}{t}\int_0^t d\tau\; x(\tau)\; e^{-i\omega_n \tau},    (18)

where ω_n are the Matsubara frequencies:

\omega_n = \frac{2\pi n}{t}, \qquad n = 0, \pm 1, \pm 2, \ldots.    (19)

In numerical simulations, the integration of the over-damped Langevin equation is performed by choosing a finite elementary time step Δt. In frequency space, this implies the existence of an ultra-violet cut-off Ω, which is inversely proportional to Δt:

\Omega \simeq \frac{\pi}{\Delta t}.    (20)

Such a relationship becomes a strict equality in the case of periodic boundary conditions, as in Eq. (16). In general, when the boundary conditions are not periodic, it represents just an order-of-magnitude estimate of the largest Fourier frequencies associated with a given choice of the integration time step Δt.

Let us now introduce an intermediate frequency scale Ω_s < Ω and split the frequency interval as [0, Ω] = [0, Ω_s] ∪ (Ω_s, Ω]. Then the Fourier decomposition of a path contributing to (16) can be split as:

x(\tau) = x_<(\tau) + x_>(\tau),    (21)

where x_<(τ) and x_>(τ) will be referred to as the slow and fast modes, respectively:

x_<(\tau) = \sum_{|\omega_n| \le \Omega_s} x_n\; e^{i\omega_n \tau},    (22)
x_>(\tau) = \sum_{\Omega_s < |\omega_n| \le \Omega} x_n\; e^{i\omega_n \tau}.    (23)
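Numerically, the decomposition of Eqs. (21)-(23) amounts to a simple filtering operation in discrete Fourier space. The following sketch (a minimal illustration with an arbitrary test path and an arbitrary choice of the cut-off frequency; helper names such as split_modes are hypothetical) separates a sampled periodic trajectory into its slow and fast components.

```python
import numpy as np

def split_modes(x, dt, omega_s):
    """Split a (periodic) trajectory x(tau) into slow and fast Fourier components.

    x       : 1D array of samples, assumed periodic over the total time len(x)*dt
    dt      : elementary time step
    omega_s : cut-off angular frequency separating slow (|omega_n| <= omega_s)
              from fast (|omega_n| > omega_s) modes
    """
    x_n = np.fft.rfft(x)                                     # Fourier coefficients
    omega_n = 2.0 * np.pi * np.fft.rfftfreq(len(x), d=dt)    # angular frequencies
    slow_n = np.where(omega_n <= omega_s, x_n, 0.0)
    fast_n = x_n - slow_n
    x_slow = np.fft.irfft(slow_n, n=len(x))
    x_fast = np.fft.irfft(fast_n, n=len(x))
    return x_slow, x_fast

# The sum of the two components reconstructs the original path.
rng = np.random.default_rng(1)
path = np.cumsum(rng.normal(size=1024)) * 0.01
slow, fast = split_modes(path, dt=1e-3, omega_s=200.0)
assert np.allclose(slow + fast, path)
```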

The main purpose of this work is to develop a perturbation series to systematically integrate out from the path integral the modes with frequencies in the shell Ω_s < |ω_n| ≤ Ω. To this end, we begin by re-writing the "kinetic" term which appears in the effective action (13) of the path integral (12) as a sum of the kinetic energies of slow and fast modes:

\int_0^t d\tau\;\frac{\dot{x}^2(\tau)}{4 D} = \int_0^t d\tau\;\frac{\dot{x}_<^2(\tau)}{4 D} + \frac{t}{4 D}\sum_{n \in \mathcal{S}} \omega_n^2\, |x_n|^2,    (24)

where \mathcal{S} denotes the shell of hard modes Ω_s < |ω_n| ≤ Ω.

Let us now consider the potential term and expand it around the slow modes:

V_{\rm eff}(x_< + x_>) = V_{\rm eff}(x_<) + x_> \cdot \nabla V_{\rm eff}(x_<) + \frac{1}{2}\, x_>^i\, x_>^j\; \partial_i \partial_j V_{\rm eff}(x_<) + \ldots.    (25)

The complete path integral (16) can be split in the following way:

(26)

In this expression, the action functional is evaluated on the slow modes only and depends on the original effective potential (which we shall also refer to as the "tree-level" effective potential). The additional term is a correction to the action which accounts for the dynamics of the fast modes that are integrated out:

(27)

where the resulting functional of the slow modes plays the role of an effective interaction term. In this equation, the integration over the hard modes is performed in Fourier space,

(28)

Eq. (26) is formally exact. In the next section, we evaluate the effective action perturbatively. The effective interaction which includes the corrections coming from the fast modes will be referred to as the renormalized effective interaction.

IV Renormalized Effective Interaction

Figure 2: Examples of connected graphs appearing in the exponent of Eq. (38). The diagrams on the upper part (dumbbell diagrams) are one-particle reducible, while those in the middle and in the bottom are one-particle-irreducible. In particular, those in the middle (daisy diagram) are local in time.

In the previous section, we have seen that the integration over the fast modes generates an additional term in the effective action for the slow modes:

(29)

where

(30)

In this section we formally perform such an integration. We begin by re-writing it as

(31)

where the notation denotes the expectation value evaluated in the free theory

(32)

To evaluate this matrix element, we represent the corresponding “operator” by its power series:

(33)

Next, we expand the interaction action around the slow modes:

(34)

where are vertices with couplings

(35)

Notice that each term in the perturbative expansion (34) generates a new vertex, with an increasing power of the field. The couplings to the fast modes depend implicitly on the time τ, through the slow modes x_<(τ).

By Wick's theorem, each term in the series (33) can be related to a Feynman graph with vertices given by (35) and propagators given by —see appendix A—:

(36)

The expansion (33) can be re-organized as the exponential of the sum over connected diagrams only:

(37)

Hence, the path integral (26) for the slow modes can be given the following exact diagrammatic representation

(38)

Below we give a classification of all the connected diagrams that may contribute to the expansion above. First, note that all diagrams involving an odd number of fast fields vanish by Wick's theorem. We are thus left with the following sets of a priori non-vanishing diagrams:

  • 1PR (one-particle-reducible) diagrams, namely diagrams that can be topologically separated into two distinct subdiagrams by cutting one internal fast mode line (propagator): they have the topology of a dumbbell. The simplest examples of dumbbell diagrams are depicted in the upper part of Fig. 2.

    Figure 3: Diagrammatic representation of the local time-derivative expansion of a non-local diagram —Eq. (50)—. Solid lines are fast-mode propagators, while dashed lines represent a single time derivative acting on the corresponding vertex function.

    The main assumption of this work is the existence of a gap between slow modes and fast modes. Under such an assumption, all the 1PR diagrams give vanishing contributions. From the physical point of view, this can be understood as a consequence of energy conservation: in order for the total energy flowing through a vertex with a single hard mode to be conserved, at least one of the external modes has to be hard. On the other hand, our working assumption implies that all the modes in the external legs of the diagrams are soft. This result can be rigorously proven for all 1PR diagrams. As an example, we explicitly compute the upper left diagram of Fig. 2. We have

    (39)

    We note that the effective potentials depend smoothly on time, through the periodic slow modes. Hence, the vertex factors in Eq. (39) can be expressed in terms of their Fourier transforms,

    (40)
    (41)

    This allows us to perform the time integrals, which simply yield frequency-conserving delta-functions. Due to such delta-functions, only hard modes survive, which are projected onto a term

    (42)

    These modes thus yield negligible contributions under the physical assumption of a large separation of frequency scales. On the other hand, if one does not assume a separation of time scales, this diagram gives a finite contribution and has to be accounted for. Note that this term has the same structure as the first correction which appears when one performs a higher-order Trotter expansion trotter ().

    It is not difficult to check that such a result holds for all 1PR diagrams, so that we can reduce our effective action to the sum of 1PI (one-particle-irreducible) diagrams, i.e. diagrams that cannot be disconnected by cutting a single internal line. They can be classified in two main groups:

  • 1PI “daisy” diagrams, namely diagrams with a single vertex. Such diagrams only involve equal-time hard propagators and only give rise to contributions to the renormalized effective action which are local in time: they have the topology of a daisy, hence the name. Examples of daisy diagrams are depicted in the middle part of Fig. 2. It is not difficult to compute a generic daisy diagram with a given number of petals (propagators); it is generated by the vertex with the corresponding number of hard fields and reads

    (43)

    where ∇² is the Laplacian operator, and the numerical factor in front is a combinatorial factor. The sum, i.e. the equal-time propagator, can be easily performed by taking the continuum limit, which simply yields

    (44)

    so that we finally obtain

    (45)

    where we have reinstated the diffusion coefficient D. Hence, one can even formally resum all the daisy diagrams into the compact expression

    (46)

  • 1PI non-daisy diagrams: all other non-local diagrams. The simplest examples of such diagrams are depicted in the lower part of Fig. 2. These diagrams generate contributions to the renormalized effective action that are non-local in time and give rise to an infinite series of local diagrams. For example, the evaluation of the lower left diagram of Fig. 2 yields a contribution of the form:

    where the numerical factor in front is a combinatorial factor. After Fourier transforming the potentials (see the discussion below Eq. (39)), the integrals over times yield frequency-conserving delta-functions. Hence,

    (48)

    Now we again make use of the assumption that the slow and fast modes of the physical processes under study are separated by a large gap. Under such an assumption, we can safely expand the second fraction in the latter expression in a power series of the slow modes and rewrite (48) as a higher-time-derivative expansion. Let us reintroduce the integral over time, so that powers of the slow frequencies can be traded for time derivatives of the potential (note that odd powers vanish upon symmetric summation; in fact, they would give rise to total time-derivative terms that vanish upon integration thanks to periodicity in time). We are thus left with

    (49)

    Sums over hard frequencies can be performed in the continuum limit with the help of formula (44) and time-derivatives can be partially integrated in order to rewrite the latter in a more symmetric form

    (50)

    The infinite higher-derivative expansion is the legacy of non-locality in time: such an expansion can be diagrammatically represented as an infinite sum of local (daisy-like) diagrams, as depicted in Fig. 3.

It is intuitive to expect that, in the presence of a decoupling between low- and high-frequency modes, the higher-derivative terms should be suppressed. In the next section, we shall generalize this statement and present a quantitative method to systematically organize all contributions to the effective action in terms of a perturbative series.

V Slow-Mode Perturbation Theory

The diagrammatic representation of the path integral given by Eq. (38) is formally exact, but rather useless. In fact, it is obviously impossible to evaluate and re-sum exactly all the infinitely many Feynman graphs appearing in the exponent. On the other hand, in this section we show that it is possible to compute the renormalized effective action to an arbitrary level of precision, by calculating only a finite number of Feynman graphs. This way, the low-frequency effective theory retains predictive power.

The idea is to exploit the decoupling of the short-time dynamics from the long-time dynamics to organize the sum over all possible graphs as a perturbative expansion. We shall refer to such a systematic evaluation of the renormalized low-frequency effective action as the slow-mode perturbation theory.

The first step in the construction of our perturbation series is to identify all the dimensionless combinations of the physical quantities which appear in the Feynman graphs contributing to (38), evaluated in the stationary-phase approximation. Let us first define the quantities

(51)

where the first is the typical wave vector of the spatial Fourier transform of the potential and the second is the typical frequency of the temporal Fourier transform of the slow modes.

Using these quantities, we can construct the following dimensionless combinations:

(52)

We are interested in describing the dynamics of physical systems for which each of these parameters can be considered small. In order to illustrate the physical interpretation of the condition, we observe that the probability for the system to remain in the same configuration during an elementary time interval Δt is

(53)

Hence, the combination represents the typical time scale associated with local conformational changes, and the condition expresses the requirement that the time spent on average by the system in each configuration is large compared to the elementary short-time scale Δt.

Figure 4: Examples of the diagrams with the lowest degree of slowness, up to .

The condition on the wave vectors implies that the effective potential varies over length scales which are large compared with the mean distance covered by Brownian motion in an elementary time interval Δt. Finally, the condition on the frequencies implies that the typical slow-mode frequencies are small compared to the ultra-violet cut-off, which is of the order of the inverse of the elementary time interval Δt.

It is easy to see that any local diagram in the expansion of the renormalized effective action comes with integer powers of these coefficients, when compared to the tree-level effective action. In particular, any diagram composed of vertices of hard fields will involve spatial derivatives and propagators, each of which yields a power of one of the small parameters above. Moreover, each additional vertex and each time derivative yield further such powers. So, such a diagram, at the lowest level in time derivatives, will be suppressed with respect to the tree-level expression, and higher time-derivative terms will add further powers. It is thus natural to define a degree of slowness for a local diagram, given by

(54)

where the three integers are, respectively, the number of vertices, the number of time derivatives and the number of spatial derivatives. The definition in Eq. (54) is normalized in such a way that the leading correction to the tree-level action has degree one.

It is easy to check that the degree of slowness corresponds to the power of the expansion parameter carried by the local diagrams. Note also that for daisy diagrams, and for all other diagrams where no time derivatives appear, the degree is nothing but the number of loops. One can thus easily write down and compute the finite set of local diagrams that renormalize the effective action up to a fixed (yet arbitrary) level of precision. Let us consider a few simple examples.

  • The lowest degree of slowness corresponds to a single daisy diagram, represented in the left panel of Fig. 4. The expression for this diagram is given by Eq. (45) and yields a correction to the effective action of the form

    (55)
  • The next degree corresponds to two further diagrams: a daisy diagram and the two-vertex local diagram with no time derivatives. The latter, however, is 1PR and gives no contribution. We are thus left with the corrections

    (56)
  • The following degree corresponds to two further diagrams: a daisy diagram and the two-vertex local diagram with no time derivatives. The latter can be read off directly from Eq. (50). Hence

    (57)

    which involves, in the last term, the trace of the square of the Hessian of the tree-level potential. In order to see the first time derivatives appearing in the renormalized effective action, we need to consider the next order, where, along with several other corrections, we have the correction coming from the second term in (50), which yields

    (58)

    which can also be recast as a correction to the kinetic action

    (59)

Some comments on the results obtained in this section are in order. First of all, we emphasize that the effective interactions have been derived under the assumption that the modes which are relevant for the long-time dynamics vary over time scales much longer than those of the fast modes, which enter in the loop diagrams. This is the crucial assumption of all renormalization group approaches. Our results confirm the intuitive picture that, if one adopts a low "time-resolution power", then the effective interactions generated by the ultra-violet modes can be regarded as instantaneous. This is in fact a general property of renormalization group theory, which is preserved to any order in the perturbative expansion. Finally, we note that the correction terms generated by the integration over the fast modes are suppressed in the low-temperature limit.

VI Renormalization Group Improved Monte Carlo

The usefulness of the renormalization procedure resides in the fact that it gives rise to an effective theory in which the largest frequency scale is lowered from Ω to Ω_s. Equivalently, the shortest time scale is increased from Δt to a larger elementary time step Δt' ≃ Δt · Ω/Ω_s. By construction, in the regime of decoupling of fast and slow modes, the probability density generated by the new slow-mode effective theory must be the same as that of the original (i.e. tree-level) theory. In this section, we show how it is possible to use the slow-mode effective theory to develop improved MC algorithms for the time evolution of the probability density P(x, t), in which the elementary time step used to propagate the configurations is increased by a factor Ω/Ω_s.

The starting point of the MC approach DMC () is to write the probability of observing the system in configuration x at time t in terms of the Green's function of the Fokker-Planck equation:

P(x, t) = \int dx_0 \; G(x, t\,|\, x_0)\, P_0(x_0),    (60)

where P_0(x) is the probability density at the initial time.

One then uses Trotter's formula to write the transition probability as a sequence of intermediate elementary propagation steps:

G(x_f, t\,|\, x_i) = \int dx_1 \cdots dx_{N-1} \; \prod_{k=0}^{N-1} G(x_{k+1}, \Delta t\,|\, x_k), \qquad \Delta t = t/N, \quad x_0 = x_i, \quad x_N = x_f.    (61)

If a sufficiently large number of intermediate steps is adopted, then the time steps can be considered infinitesimal and the (unnormalized) transition probability can be calculated analytically

(62)

"Completing the square" in the first exponent, one finds

(63)

Now, recalling the definition of the effective potential (8) in the second exponent, this Green's function can be written as:

(64)

In the MC algorithm, one starts from a set of initial system configurations, sampled according to the distribution P_0(x). Such an ensemble is evolved in time according to the following procedure. Each configuration is propagated for an elementary time interval, by sampling from the Gaussian

(65)

in Eq. (64). Such a configuration is then re-weighted according to the factor

(66)

The iteration of such a procedure for many consecutive elementary propagations gives rise to a set of diffusive trajectories, called walkers. In the so-called diffusion MC algorithm, the re-weighting term is used to replicate or annihilate the walkers. The ensemble of configurations obtained according to this procedure is distributed according to the probability density (60).
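The sketch below illustrates one elementary propagation-and-re-weighting step for an ensemble of walkers, in the spirit of Eqs. (64)-(66). It is a minimal implementation of a standard diffusion MC step, assuming the over-damped dynamics of section II and the effective potential of Eq. (8); the specific splitting into a free Gaussian move and a weight factor is one common O(Δt)-accurate choice, not necessarily the exact form of Eq. (64), the quadratic potential is purely illustrative, and the replication/annihilation of walkers is omitted for brevity.

```python
import numpy as np

def dmc_step(walkers, weights, dt, D, beta, U, grad_U, lap_U, rng):
    """One elementary diffusion MC step: free Gaussian diffusion plus re-weighting.

    walkers : (n_walkers, dim) array of configurations
    weights : (n_walkers,) statistical weights
    The re-weighting factor combines the prefactor exp(-beta*(U(y) - U(x))/2)
    with exp(-dt * V_eff(x)), where V_eff is the effective potential of Eq. (8).
    """
    new = walkers + np.sqrt(2.0 * D * dt) * rng.normal(size=walkers.shape)
    grad = grad_U(walkers)
    v_eff = 0.25 * D * beta**2 * np.sum(grad**2, axis=1) - 0.5 * D * beta * lap_U(walkers)
    log_w = -0.5 * beta * (U(new) - U(walkers)) - dt * v_eff
    return new, weights * np.exp(log_w)

# Illustrative 2D quadratic potential (hypothetical choice).
def U(x):      return 0.5 * np.sum(x**2, axis=-1)
def grad_U(x): return x
def lap_U(x):  return np.full(x.shape[0], x.shape[1], dtype=float)

rng = np.random.default_rng(2)
walkers = rng.normal(size=(1000, 2))
weights = np.ones(len(walkers))
for _ in range(100):
    walkers, weights = dmc_step(walkers, weights, dt=1e-3, D=1.0, beta=1.0,
                                U=U, grad_U=grad_U, lap_U=lap_U, rng=rng)
```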

For the MC algorithm to be efficient, the fluctuations in the statistical weight of the walkers —or, equivalently, in the number of walkers— should remain small throughout the entire time evolution. This condition is verified if the re-weighting factor is always of order one. Note, however, that this term tends to enhance (suppress) the weight of configurations in the vicinity of the local minima (maxima) of the potential energy, where the Laplacian is positive (negative). Hence, if the energy landscape varies very rapidly in space, then the fluctuations in the statistical weights —or in the number of walkers— will in general be large, unless the elementary time step is chosen very small. This feature represents a limiting factor of MC simulations, which makes the sampling of the probability density at large times computationally very expensive.
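A rough way to visualize this limitation is to monitor the relative dispersion of the single-step re-weighting factor exp(-Δt V_eff) as the time step grows, here for the same hypothetical one-dimensional double well used above; the dispersion grows with Δt, which is what forces small time steps in standard diffusion MC.

```python
import numpy as np

# Relative dispersion of the single-step re-weighting factor exp(-dt * V_eff)
# for the hypothetical 1D double well U(x) = (x^2 - 1)^2, with D = beta = 1.
x = np.random.default_rng(3).uniform(-1.5, 1.5, size=100000)
v_eff = 0.25 * (4.0 * x * (x**2 - 1.0))**2 - 0.5 * (12.0 * x**2 - 4.0)
for dt in (1e-3, 1e-2, 1e-1):
    w = np.exp(-dt * v_eff)
    print(f"dt = {dt:.0e}   relative weight dispersion = {w.std() / w.mean():.3f}")
```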

Figure 5: Average position at thermal equilibrium, obtained from diffusion MC simulations with (circles) and without (squares) the branching factor of Eq. (66), for different values of the discretization time step Δt. Errors are smaller than the symbols.

Clearly, the elementary propagation time step is the shortest time scale in the simulation. On the other hand, in the slow-mode effective theory one integrates out the dynamics in the time-scale range between Δt and π/Ω_s. Hence, we expect that, by taking into account the corrections associated with the renormalized effective interaction, it is possible to perform MC simulations in which the integration time step is chosen a factor Ω/Ω_s larger.

In practice, the effective slow-mode theory introduces a correction to the re-weighting —or branching— factor. To lowest order one has