Metropolis integration schemes for self-adjoint diffusions

Nawaf Bou-Rabee Department of Mathematical Sciences, Rutgers University Camden, 311 N 5th Street, Camden, NJ 08102, USA, (nawaf.bourabee@rutgers.edu). The work of N. B-R. was supported by NSF grant DMS-1212058.    Aleksandar Donev Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY 10012-1185, (donev@courant.nyu.edu). A. Donev was supported in part by the Air Force Office of Scientific Research under grant number FA9550-12-1-0356.    Eric Vanden-Eijnden Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY 10012-1185, (eve2@cims.nyu.edu).
Abstract

We present explicit methods for simulating diffusions whose generator is self-adjoint with respect to a known (but possibly not normalizable) density. These methods exploit this property and combine an optimized Runge-Kutta algorithm with a Metropolis-Hastings Monte-Carlo scheme. The resulting numerical integration scheme is shown to be weakly accurate at finite noise and to gain higher order accuracy in the small noise limit. It also makes it possible to avoid explicitly computing certain terms in the equation, such as the divergence of the mobility tensor, which can be tedious to calculate. Finally, the scheme is shown to be ergodic with respect to the exact equilibrium probability distribution of the diffusion when it exists. These results are illustrated on several examples including a Brownian dynamics simulation of DNA in a solvent. In this example, the proposed scheme is able to accurately compute dynamics at time step sizes that are an order of magnitude (or more) larger than those permitted with commonly used explicit predictor-corrector schemes.

Key words. explicit integrators; Brownian dynamics with hydrodynamic interactions; Metropolis-Hastings algorithm; small noise limit; predictor-corrector schemes; DNA simulations; ergodicity; fluctuation-dissipation theorem;

AMS subject classifications. Primary, 65C30; Secondary, 65C05, 60J05

1 Introduction

This paper concerns the simulation of diffusions of the type
\[
dX = \bigl( M(X)\,F(X) + \beta^{-1}\operatorname{div} M(X) \bigr)\,dt + \sqrt{2\beta^{-1}}\,B(X)\,dW ,
\qquad F = -\nabla U ,
\tag{1.1}
\]
where we have introduced the following:


$X \in \mathbb{R}^n$ — state of the system
$U(x)$ — potential energy
$F(x) = -\nabla U(x)$ — force
$M(x)$ — mobility matrix
$\operatorname{div} M(x)$ — divergence of the mobility matrix
$B(x)$ — noise coefficient matrix
$W(t)$ — Brownian motion
$\beta$ — inverse temperature factor

Let $A^T$ denote the transpose of a real matrix $A$. Assuming that

\[
B(x)\,B(x)^T = M(x) ,
\tag{1.2}
\]

this dynamics defines a Markov process whose generator $L$ is self-adjoint with respect to the following density

\[
\nu(x) = e^{-\beta U(x)} .
\tag{1.3}
\]

Indeed, since the action of $L$ on any suitable test function $f$ can be written as

\[
(L f)(x) = \beta^{-1} e^{\beta U(x)} \operatorname{div}\!\bigl( e^{-\beta U(x)} M(x) \nabla f(x) \bigr) ,
\tag{1.4}
\]

it follows from integration by parts that

\[
\langle L f , g \rangle_\nu = \langle f , L g \rangle_\nu ,
\tag{1.5}
\]

where $\langle \cdot , \cdot \rangle_\nu$ denotes the $L^2$ inner product weighted by the density $\nu$: $\langle f , g \rangle_\nu = \int f(x)\, g(x)\, \nu(x)\, dx$.

This property implies that the diffusion is $\nu$-symmetric [45], in the sense that

\[
\nu(x)\, p_t(x, y) = \nu(y)\, p_t(y, x) ,
\tag{1.6}
\]

where $p_t(x, y)$ denotes the transition density of $X_t$ given that $X_0 = x$, i.e.,
\[
\mathbb{P}\bigl( X_t \in A \mid X_0 = x \bigr) = \int_A p_t(x, y)\, dy
\]
for any measurable set $A$. In fact, (1.5) corresponds to an infinitesimal version of (1.6). We stress that (1.6) can hold even if $\nu$ is not normalizable: in this case, the solution to (1.1) reaches no statistical steady state, i.e., it is not ergodic. If, however, the density (1.3) is normalizable,

\[
\int_{\mathbb{R}^n} e^{-\beta U(x)}\, dx < \infty ,
\tag{1.7}
\]

then the diffusion (1.1) is ergodic with respect to $\bar\nu(x) = \nu(x) / \int \nu(y)\, dy$. This normalized density is called the equilibrium probability density of the diffusion and is also referred to as the Boltzmann-Gibbs density. When (1.7) holds, the dynamics observed at equilibrium is time-reversible, and the property (1.6) is referred to as the detailed balance condition in the physics literature. Our main purpose here is to exploit the properties above, in particular (1.6), to design stable and accurate numerical integrators for (1.1) that work even in situations where the density (1.3) is not normalizable.

The importance of this objective stems from the wide range of applications where diffusions of the type (1.1) serve as dynamical models. For example, (1.1) is used to model the evolution of coarse-grained molecular systems in regimes where the details of the microscopic interactions can be represented by a heat bath. Multiscale models of polymers in a Stokesian solvent fit this category, a context in which (1.1) is referred to as Brownian dynamics (BD) with hydrodynamic interactions [12, 13, 66, 86]. BD has been used to quantitatively simulate the non-equilibrium dynamics of biopolymers in dilute solutions [51, 79], and to validate and clarify experimental findings for bacteriophage DNA dynamics in extensional, shear, and mixed planar flows [52, 36, 35, 40, 76, 77]. The dynamics of DNA molecules in confined solutions with complex geometries has also been studied using BD to guide the design of microfluidic devices that manipulate these molecules [7, 41, 42, 43, 87, 88, 26, 25, 24, 27, 19, 8, 89]. To be clear, the proposed integrator cannot simulate diffusions that do not satisfy the $\nu$-symmetry condition, e.g., polymers in shear flows.

Because of the relevance of (1.1) to BD, this is also the context in which most work on the development of numerical schemes to simulate this equation has been carried out [66, 37]. Perhaps the most widely used among these is the so-called Fixman scheme, an explicit predictor-corrector scheme that has the added advantage of avoiding the explicit computation of the divergence of the mobility matrix [13]. Unfortunately, explicit BD time integrators like the Fixman scheme are often insufficient in applications because the interplay between the drift and the noise terms in (1.1) can induce numerical instabilities or artifacts such as systematic drifts [56, 70]. This is a well-known problem with explicit discretizations of nonlinear diffusions [82, 30, 64, 29, 38], and it is especially acute if the potential force in (1.1) is stiff, which is typically the case in applications. The standard remedy is to introduce implicitness. In a sequence of works, this strategy was investigated to deal with the steep potentials that enforce finite extensibility of bond lengths in bead-spring models [14, 28, 39, 80, 32]. The leading semi-implicit predictor-corrector scheme emerging from this effort is numerically stable at much larger time step sizes than explicit predictor-corrector schemes, but it requires a nonlinear solve at every step, which is time-consuming and raises convergence questions about the methods used to solve this nonlinear system that are not easy to answer in general.

In this paper, following the strategy introduced in [5], we adopt a probabilistic approach based on the $\nu$-symmetry property (1.6) to build stable explicit integrators for (1.1) that avoid implicitness and do not assume a specific form of the potential forces. Specifically, we propose an integrator for (1.1) with the following properties.

(P1)

it samples the exact equilibrium probability density of the SDE when this density exists (i.e., when $\nu$ in (1.3) is normalizable);

(P2)

it generates a weakly accurate approximation to the solution of (1.1) at constant temperature;

(P3)

it acquires higher order accuracy in the small noise limit $\beta \to \infty$; and,

(P4)

it avoids computing the divergence of the mobility matrix.

We stress that Properties (P2)-(P4) hold even when the density $\nu$ in (1.3) is not normalizable, i.e., when (1.1) is out of equilibrium and not ergodic with respect to any invariant distribution. Thus, even though the scheme we propose involves a Monte-Carlo step, its aim and philosophy are very different from Monte-Carlo methods in which a discretized version of (1.1) is used as a proposal step but whose only goal is to sample a target distribution, with no concern for the dynamics of (1.1) [62, 22, 73, 9, 31, 44, 55, 2, 1, 54]. Compared to the method proposed in [5], the main novelty of the scheme introduced here stems from Properties (P3) and (P4). The point of (P3) is that the finite-time dynamics of (1.1) in the small noise limit, e.g., how the solution moves to the critical points of $U$, is a limiting situation that the approximation should be able to handle. To achieve (P3) we need to control the rate of rejections in the Monte Carlo step in the small noise limit, which is nontrivial because the density $\nu$ in (1.3) that enters the key relation (1.6) becomes singular in this limit. Similarly, (P4) constrains the type of proposal moves we can use in the scheme, and this property is important in situations where the mobility matrix does not have an explicit expression, e.g., when the solvent is confined [19]. In the sequel, we use a BD model of DNA in a solvent to show that the new scheme achieves the same accuracy as explicit predictor-corrector schemes with time step sizes that are an order of magnitude (or more) larger. Besides BD, this scheme should also be useful in the simulation of polymer conformational transitions in electrophoretic flows [48, 68, 49, 85, 33, 34], in other contexts where the effective dynamics of a set of coarse-grained variables satisfies (1.1) [10, 59, 58, 11, 53], in applications to Bayesian inference [15], etc.

Organization of the paper

The new scheme, with a structure that immediately shows that it satisfies Property (P4), is introduced in §2. In §3, we present numerical examples with comparisons to the Fixman algorithm. A proof of the ergodicity Property (P1) of the scheme is provided in §4. The weak accuracy Property (P2) is proven in §5 using (1.6). The behavior of the scheme in the small noise limit, specifically Property (P3), is investigated in §6. A conclusion is given in §7.

2 The integrator and its main properties

Following [5], the scheme introduced in this paper combines a proposal step obtained via a time-discretization of (1.1) with an accept/reject Monte-Carlo step. The detailed algorithm is given below in terms of a vector-valued variable $G_h$ and a matrix-valued variable $B_h$, whose explicit forms will be specified shortly in such a way that Properties (P1)-(P4) are met.

Algorithm 2.1 (Metropolis Integrator)

Given the current position $X_0$ at time $t$, the algorithm proposes an updated position $X^\star$ at time $t + h$, for some time step size $h > 0$, via
\[
X^\star = X_0 + h\, G_h(X_0) + \sqrt{2 h \beta^{-1}}\, B_h(X_0)\, \xi .
\tag{2.1}
\]
Here $\xi$ denotes a Gaussian random vector with mean zero and identity covariance. The proposal move is then accepted or rejected by taking as actual update for the position at time $t + h$ the value
\[
X_1 = \gamma\, X^\star + (1 - \gamma)\, X_0 ,
\tag{2.2}
\]
where $\gamma$ is a Bernoulli random variable which assumes the value 1 with probability $\alpha_h(X_0, X^\star)$ and the value 0 with probability $1 - \alpha_h(X_0, X^\star)$. The function $\alpha_h$ is known as the acceptance probability and is given by
\[
\alpha_h(X_0, X^\star) = \min\!\left\{ 1 ,\;
\frac{ \nu(X^\star)\, \det B_h(X_0) }{ \nu(X_0)\, \det B_h(X^\star) }\,
\exp\!\left( \tfrac{1}{2} |\xi|^2 - \tfrac{1}{2} |\tilde\xi|^2 \right) \right\} ,
\tag{2.3}
\]
where $\tilde\xi$ satisfies:
\[
\sqrt{2 h \beta^{-1}}\, B_h(X^\star)\, \tilde\xi = X_0 - X^\star - h\, G_h(X^\star) .
\tag{2.4}
\]

Figure 2.1: A diagram depicting the points $X_0$, $X^\star$, and $X_1$ appearing in Algorithm 2.1, and how they are related by the vectors $\xi$ and $\tilde\xi$ introduced in (2.1) and (2.4), respectively.

We refer to Algorithm 2.1 as the Metropolis integrator in the rest of the paper. Figure 2.1 illustrates how the proposal move in (2.1) is related to the initial state $X_0$ and the Gaussian random vector $\xi$. We remark that the calculation of the acceptance probability in (2.3) usually requires a Cholesky factorization to determine $B_h$ and its determinant, and a linear solve to determine $\tilde\xi$ via (2.4). If $B_h(X^\star)$ is positive definite – which is the case in all of the examples considered in this paper – this linear system has a unique solution. Let us now discuss how to choose $G_h$ and $B_h$ in (2.1) by requiring that Properties (P1)-(P4) are satisfied. It is useful to look at these properties sequentially and see the constraints on $G_h$ and $B_h$ they entail.
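To make the structure of Algorithm 2.1 concrete, the following Python sketch implements one step for user-supplied $G_h$ and $B_h$. It is a minimal illustration, not the paper's reference implementation; the names (G_h, B_h, potential) are placeholders, and the acceptance ratio is the standard Metropolis-Hastings ratio for the Gaussian proposal (2.1), evaluated in log space for numerical robustness.

\begin{verbatim}
import numpy as np

def metropolis_step(x0, h, beta, G_h, B_h, potential, rng):
    """One step of the Metropolis integrator (Algorithm 2.1), schematic version.

    G_h(x)       -- vector-valued drift combination (e.g., a Runge-Kutta average of M*F)
    B_h(x)       -- matrix-valued noise coefficient, with B_h B_h^T approximating M
    potential(x) -- potential energy U(x); nu(x) = exp(-beta*U(x)) need not be normalizable
    """
    pref = np.sqrt(2.0 * h / beta)
    xi = rng.standard_normal(x0.shape)

    # Proposal move (2.1)
    B0 = B_h(x0)
    x_star = x0 + h * G_h(x0) + pref * B0 @ xi

    # Reverse noise vector (2.4): B_h(x*) xi_tilde = (x0 - x* - h G_h(x*)) / pref
    B1 = B_h(x_star)
    xi_tilde = np.linalg.solve(B1, (x0 - x_star - h * G_h(x_star)) / pref)

    # Log of the Metropolis-Hastings acceptance ratio (2.3)
    log_ratio = (-beta * (potential(x_star) - potential(x0))
                 + np.linalg.slogdet(B0)[1] - np.linalg.slogdet(B1)[1]
                 + 0.5 * (xi @ xi) - 0.5 * (xi_tilde @ xi_tilde))

    # Accept/reject step (2.2)
    if np.log(rng.uniform()) < min(0.0, log_ratio):
        return x_star, True
    return x0, False
\end{verbatim}

When $B_h$ is itself obtained from a Cholesky factorization of a positive definite matrix, its determinant is simply the product of the diagonal entries of the factor, so the slogdet calls above can be replaced by a cheap triangular computation.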

2.1 Property (P1): Ergodicity

Under quite general assumptions on $\nu$, $G_h$, and $B_h$ (see the assumptions stated in §4), it will be shown in §4 that Algorithm 2.1 is ergodic with respect to the equilibrium probability distribution with density $\bar\nu$. Thus, one can stably generate an infinitely long trajectory from the method with the right stationary distribution. Moreover, the ergodic theorem implies that for every initial state and every sufficiently small time step size:

\[
\lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} f(X_n) = \int f(x)\, \bar\nu(x)\, dx \quad \text{almost surely},
\tag{2.5}
\]

where $f$ is any suitable test function [60]. Note that the acceptance probability function in (2.3) does not involve derivatives of either $U$ or $M$ because of a symmetry in the law of the proposal move. Specifically, given the Gaussian random vector $\xi$, the proposal move satisfies a relation between $X_0$, $X^\star$, $\xi$, and $\tilde\xi$ that is invariant under the step reversal transformation exchanging $(X_0, \xi)$ with $(X^\star, \tilde\xi)$. This invariance is a key step in the proof of the ergodicity theorem in §4, where the $\nu$-symmetry and ergodicity of the Metropolis integrator are proved. It is not surprising that Property (P1) holds for quite general $G_h$ and $B_h$ (for example $G_h = 0$), since the $\nu$-symmetry property (1.6) is enforced by the accept/reject step in Algorithm 2.1 as long as the correct acceptance probability is used. (Note that the transition probability distribution of Algorithm 2.1 has no density with respect to Lebesgue measure, so (1.6) must be reinterpreted in terms of distributions; this is a minor technical difficulty on which we dwell in §4.) This idea is the essence of the Metropolis-Hastings method. More remarkable is the next observation, relevant to Property (P2).

2.2 Property (P2): Weak Accuracy

For every sufficiently regular $G_h$ and $B_h$ (see the assumptions stated in §5) satisfying
\[
B_h(x)\, B_h(x)^T \to M(x) \quad \text{as } h \to 0 ,
\tag{2.6}
\]
where $M$ is the mobility matrix entering (1.1), Algorithm 2.1 is weakly accurate on finite time intervals:
\[
\bigl| \mathbb{E}_x\, f(X_n) - \mathbb{E}_x\, f(X(t_n)) \bigr| \le C(T, f)\, h , \qquad t_n = n h \le T ,
\tag{2.7}
\]

for every $T > 0$, every initial state $x$, and every sufficiently regular $f$ and $U$ (see the assumptions stated in §5). Here $\mathbb{E}_x$ denotes the expectation conditional on the initial state being $x$. The observation above is remarkable in that it holds for any sufficiently regular $G_h$, including the trivial choice $G_h = 0$. As we will see in §5, this really is a consequence of an (infinitesimal) fluctuation-dissipation theorem: as long as the noise term is handled accurately in the proposal move (2.1) (which it is if (2.6) holds), weak accuracy follows from the fact that the only diffusion satisfying the $\nu$-symmetry property (1.6) (which the scheme inherits through Property (P1)) is the one with the correct drift term. Note that this immediately opens the door to schemes where the divergence of $M$ does not need to be computed. In fact, with $G_h = 0$, no part of the drift needs to be calculated. Of course, this trivial choice is not the best (nor even a good) one in terms of accuracy, and to enforce Property (P3), we will have to use more specific $G_h$ and $B_h$.

2.3 Property (P3): Second-order Accuracy in Small Noise Limit

Let $G_h$ and $B_h$ be the following two-stage Runge-Kutta combinations:

(2.8)
(2.9)

with parameter values:

(2.10)

Algorithm 2.1 operated with this choice of $G_h$ and $B_h$ satisfies:

\[
\lim_{\beta \to \infty} \bigl| X_n - x(t_n) \bigr| \le C(T)\, h^2 , \qquad t_n = n h \le T ,
\tag{2.11}
\]

for every $T > 0$ and every $n$ such that $n h \le T$, and for sufficiently regular $U$ and $M$ (see the assumptions stated in §6). Here $x(t)$ denotes the solution of the zero-noise dynamics $\dot{x} = M(x) F(x)$ started from the same initial condition.

When the mobility matrix is constant, $B_h$ reduces to $B$ and $G_h$ reduces to the so-called Ralston Runge-Kutta method [67]. The above statement, which is one of the main results of this paper, is established in §6 using an asymptotic analysis of the acceptance probability function in the deterministic limit $\beta \to \infty$. This analysis reveals that the parameter values in (2.10) are optimal, that is, they are the only choice that yields Property (P3). The validity of this statement requires assumptions whose sufficiency is discussed in the numerical examples in §3. When these conditions are violated, §6 proposes a modification to the Metropolis integrator which involves adapting the parameter appearing in (2.10). The small noise limit is relevant when the deterministic drift dominates the dynamics of (1.1), which is the case in BD applications when the Péclet number is moderate to high or when the system is driven by an external flow. (The Péclet number compares the work done by the potential force to the thermal energy $\beta^{-1}$.) As a by-product of (P3), in §5 we show that the Metropolis integrator attains a higher weak order of accuracy when the mobility matrix is constant.
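For the case of a constant mobility matrix (so that the drift is simply $M F$ with $F = -\nabla U$), the deterministic part of the proposal reduces to the classical Ralston two-stage Runge-Kutta increment. The sketch below spells this out; it is a minimal illustration under that constant-mobility assumption and does not reproduce the paper's general definitions of $G_h$ and $B_h$ in (2.8)-(2.9).

\begin{verbatim}
import numpy as np

def ralston_drift(x, h, drift):
    """Ralston's two-stage Runge-Kutta combination G_h(x) for a constant mobility,
    where drift(x) = M @ F(x).  The deterministic part of the proposal is h * G_h(x)."""
    k1 = drift(x)
    k2 = drift(x + (2.0 / 3.0) * h * k1)   # internal stage
    return 0.25 * k1 + 0.75 * k2           # weights (1/4, 3/4)

# Example: quadratic potential U(x) = |x|^2 / 2, identity mobility, so drift(x) = -x.
if __name__ == "__main__":
    h = 0.1
    x = np.array([1.0, -2.0])
    print(x + h * ralston_drift(x, h, lambda y: -y))
\end{verbatim}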

2.4 Additional remarks on integrator

Note that Property (P4) automatically follows when $G_h$ and $B_h$ are given by (2.8) and (2.9), respectively, since at no point do we need to calculate the divergence of $M$. Note also that, with this choice of $G_h$ and $B_h$, the proposal move in Algorithm 2.1 involves internal stages. If an internal stage variable is not in the domain of definition of the mobility matrix or force, e.g., it assumes a non-physical value, it is straightforward to show that any extension of these functions results in an algorithm that satisfies the $\nu$-symmetry condition (1.6). Hence, we suggest using the trivial extension where the mobility matrix is set equal to the identity matrix and the force is set equal to zero. Independently of the extension chosen, the energy is taken to be infinite at non-physical states, which from (2.3) implies that non-physical proposal moves are always rejected. A small sketch of this convention is given below.
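The following fragment illustrates the suggested convention; the predicate is_physical and the functions it guards are placeholders for a concrete model (e.g., beads confined to an interval or springs below maximum extension).

\begin{verbatim}
import numpy as np

def extended_force(x, force, is_physical):
    """Trivial extension: outside the physical domain the force is set to zero."""
    return force(x) if is_physical(x) else np.zeros_like(x)

def extended_mobility(x, mobility, is_physical):
    """Trivial extension: outside the physical domain the mobility is the identity."""
    return mobility(x) if is_physical(x) else np.eye(x.size)

def extended_potential(x, potential, is_physical):
    """Non-physical states carry infinite energy, so nu(x) = exp(-beta*U(x)) = 0 there
    and any such proposal is rejected by the acceptance probability (2.3)."""
    return potential(x) if is_physical(x) else np.inf
\end{verbatim}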

Let us end this section by stressing that the idea of using Monte-Carlo to perform BD simulation is not new and goes back at least to [46, 47]. In place of the proposal move (2.1), these papers use the Ermak-McCammon scheme, which corresponds to a forward Euler discretization of (1.1). This scheme reduces to the Metropolis-adjusted Langevin algorithm (MALA) when the mobility matrix is constant [72, 71, 4]. MALA is a special case of the smart and hybrid Monte-Carlo algorithms, which are older and more general sampling methods [73, 9]. However, the Metropolized Ermak-McCammon scheme has two drawbacks: it involves the divergence of the mobility tensor, and worse, as illustrated in the next section, there are important situations where the acceptance probability in the scheme breaks down in the small noise limit. The proposed integrator gets around these problems.

Next, we will use several examples to illustrate these properties of the Metropolis integrator, and then prove all the statements made in this section in §4, §5, and §6.

3 Numerical Examples

Unless otherwise indicated, in this section we work with the Metropolis integrator, Algorithm 2.1, operated with $G_h$ and $B_h$ given by (2.8) and (2.9) with the parameter values (2.10). The test problems consist of the following self-adjoint diffusions:

(E0)

Brownian particle with a heavy-tailed stationary density;

(E1)

Brownian particle with a tilted square well potential energy;

(E2)

1D bead-spring chain in a confined solvent;

(E3)

3D bead-spring chain in an unbounded solvent; and,

(E4)

Brownian particle with a two-dimensional double-well potential energy.

Examples (E0) and (E1) involve a non-normalizable density $\nu$. Example (E1) shows that standard explicit integrators may not properly detect features of a potential with jumps, which leads to potentially unnoticed but large errors in dynamic quantities such as mean first passage times. As shown in (E2) and (E3), standard integrators can also fail when the potential contains a hard-core or Lennard-Jones-type component. Example (E2) is not physically realistic; however, it is simple enough to be used for intensive numerical tests and yet complex enough to capture some of the essential features which make (1.1) challenging to simulate, including multiplicative noise, a nontrivial $\operatorname{div} M$, and steep potentials. On the other hand, example (E3) is physically relevant to polymeric fluid simulations but, since the solvent is unbounded, $\operatorname{div} M = 0$ in this case. In sum, (E1)-(E3) demonstrate two possible modes of failure of standard explicit integrators. (E4) illustrates the properties of the proposed scheme in the small noise limit. In the numerical tests that follow, we assess accuracy and stability as a function of the time step size. In practice, the computational cost of the algorithms should also be assessed, but since this cost is problem-specific, we do not carry out such assessments in this paper.

In the numerical tests that follow, we will compare the Metropolis integrator to the Fixman scheme. There are several variants of this predictor-corrector scheme; among these we implemented the following trapezoidal discretization:

(Fixman)

where $\xi$ is a Gaussian random vector with mean zero and identity covariance. This method is weakly first-order accurate, second-order deterministically accurate, and second-order weakly accurate in the special case of additive noise. Notice that the scheme has the nice feature that it avoids computing the divergence of the mobility matrix. The approximation to the noise in the Fixman integrator can be derived from a two-step discretization of the 'kinetic stochastic integral' introduced in [37]. A schematic predictor-corrector step of this general type is sketched below.
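The sketch below is only a generic explicit trapezoidal predictor-corrector with a simple Euler-Maruyama noise increment, shown to fix ideas; the actual Fixman scheme treats the multiplicative noise through the kinetic stochastic integral of [37], which is not reproduced here.

\begin{verbatim}
import numpy as np

def predictor_corrector_step(x, h, beta, drift, B, rng):
    """Generic explicit trapezoidal predictor-corrector step for
    dX = drift(X) dt + sqrt(2/beta) B(X) dW.  Schematic stand-in only:
    the noise is frozen at the beginning of the step (Euler-Maruyama)."""
    dW = np.sqrt(h) * rng.standard_normal(x.shape)
    noise = np.sqrt(2.0 / beta) * B(x) @ dW

    x_pred = x + h * drift(x) + noise                          # predictor (Euler)
    return x + 0.5 * h * (drift(x) + drift(x_pred)) + noise    # trapezoidal corrector
\end{verbatim}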

3.1 Example (E0): Brownian particle with a heavy-tailed stationary density

To confirm that the proposed integrator applies to diffusions with a non-normalizable density $\nu$, consider a Brownian particle on an interval governed by (1.1) with the following potential energy:

(3.1)

where the parameter appearing in (3.1) is positive. If this parameter is greater than one, the stationary density is normalizable and the rate of convergence to equilibrium is sub-exponential. In contrast, if it is less than or equal to one, the stationary density is no longer integrable; see Theorem 2.1 in [21] for more details. Note that if a proposal move in Algorithm 2.1 falls outside the interval, then this move is rejected by the algorithm. Figure 3.1 confirms that the Metropolis integrator is finite-time weakly accurate for one value of the parameter above one and one below one, and for two different choices of $G_h$, namely the trivial choice $G_h = 0$ and the Ralston Runge-Kutta combination given in (2.8) with parameter values (2.10). The rate of convergence and improved accuracy of the latter, which uses a higher-order approximation to the drift, emphasize that $G_h = 0$ is not a good choice. In fact, the finite-time accuracy theorem of §5 proves that when the mobility matrix is constant, the Metropolis integrator with any second-order Runge-Kutta combination in (2.8) attains a higher weak order of accuracy at constant temperature. The inset in the figure confirms that the Metropolis integrators reproduce the stationary probability density of the diffusion when it exists, that is, when the parameter is above one. Since all of the derivatives of the potential in (3.1) are continuous and bounded on the interval, a standard integrator that can deal with moves that leave the interval is sufficient in this example.

Figure 3.1: Brownian particle with a heavy-tailed stationary density. The main figure graphs the relative error in the Metropolis integrators' estimate of the mean-squared displacement of the particle as a function of the time step size, for two values of the heavy-tail parameter (one above and one below one), using a sample average. The black and gray colors indicate the Metropolis integrator with $G_h$ chosen to be the Ralston Runge-Kutta combination (2.8) and zero, respectively. A benchmark solution was obtained by numerically solving the associated Fokker-Planck equation. The dashed lines in the main figure are reference slopes. When the heavy-tail parameter exceeds one, the stationary density of this process is normalizable, and the Metropolis integrators exactly reproduce this probability density function, as confirmed in the inset: the log-log plot shows that the approximate densities of the Metropolis integrators (solid black and gray) overlap the true stationary probability density of the process (dashed).

3.2 Example (E1): Brownian particle with a tilted square well potential energy

The following example emphasizes that the Metropolis integrator, Algorithm 2.1, applies to self-adjoint diffusions even when the density $\nu$ is not normalizable. To introduce this example, it helps to consider simulating a Brownian particle moving in a regularized, periodic square well potential. In order to adequately resolve the jump in the potential, a standard scheme requires a sufficiently small time step size. Without this resolution the scheme's equilibrium distribution will be essentially uniform, and its estimate of dynamic quantities associated with crossing the square potential barrier will be inaccurate. These predictions are confirmed in the following numerical experiment. Inspired by [69], we introduce a constant tilting force so that the particle drifts to the right, intermittently stopping at the jumps in the potential. With this tilting force, Stratonovich was able to derive a formula for the mean first passage time, which we use below to benchmark and test the Fixman and Metropolis integrators [81]. At this point it is worth mentioning that when the mobility matrix in (1.1) is constant, the Fixman scheme reduces to a second-order trapezoidal discretization of the drift and a first-order approximation to the noise.

To be more precise, the square well potential is a regularized periodic function whose smoothness is controlled by a parameter and whose period is chosen so that the jumps over one cycle occur at two prescribed locations. A Brownian particle moving in a tilted square well potential satisfies an equation of the form (1.1) with mobility equal to unity and with a tilted potential energy obtained by adding to the square well potential a linear term whose slope is the constant tilting force. Observe that when the tilting force vanishes, the potential reduces to the square well potential. For every value of the tilting force, it is straightforward to use (1.4) to show that the generator of the resulting diffusion is self-adjoint with respect to the density $\nu$ in (1.3). When the tilting force vanishes, the stationary density is normalizable over one period of the potential. When it is nonzero, the constant tilting force gives rise to a downward tilt in the potential, as shown in the northwest inset of Figure 3.2, and hence $\nu$ is not normalizable. The first passage time of the process from one point to another is defined as the first time the process, started at the first point, reaches the second point.

The numerical experiments that follow consist of launching an integrator with a given initial condition and time step size, and terminating the simulation at the first time step at which the process has passed the target point. To avoid systematically overestimating the first passage time, we use a corrected estimate of the crossing time rather than the bare termination time. As a side note, we mention that the accuracy of this approximation to the mean first passage time [16, 17, 18] can be improved upon by accounting for the probability that the particle reaches the target in between each discrete step [57]. A sketch of such an estimator is given below.
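As an illustration, the following sketch estimates a mean first passage time from simulated trajectories; the half-step convention used for the crossing time is one common correction and stands in for the approximation used in the paper.

\begin{verbatim}
import numpy as np

def mean_first_passage_time(step, x_start, x_target, h, n_samples, rng):
    """Estimate the mean first passage time from x_start to x_target >= x_start.

    step(x, rng) -- advances the chain by one time step (e.g., one call to the
                    Metropolis integrator) and returns the new position.
    The crossing time of a trajectory first observed beyond the target at step n
    is recorded as (n - 1/2)*h, a common half-step correction."""
    times = np.empty(n_samples)
    for k in range(n_samples):
        x, n = x_start, 0
        while x < x_target:
            x = step(x, rng)
            n += 1
        times[k] = (n - 0.5) * h
    return times.mean(), times.std() / np.sqrt(n_samples)
\end{verbatim}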

The numerical and physical parameters used in the numerical experiments are given in Table 3.1. To visualize the long-term behavior of the schemes, it helps to plot the probability density of points produced by each scheme modulo a period of the square well potential. An approximation to this density is shown in the southeast insets of Figure 3.2. At both the coarse and the fine time step size tested, the Fixman scheme underestimates the barrier height, and consequently, numerical tests show that it grossly underestimates, e.g., the mean first passage time between wells. This underestimate persists unless the time step size is small enough to resolve the barrier. The Metropolis integrator is able to capture the features of the potential even at the coarse time step size, and its approximation to the mean first passage time remains accurate there.

Parameter Description Value
Physical Parameters
constant tilting force
inverse temperature factor
square well
smoothness parameter
Numerical Parameters
time step size
# of first passage
time samples
Table 3.1: Simulation Parameters for a Brownian Particle with a Tilted Square Well Potential.
Figure 3.2: Brownian particle with a tilted square well potential. The main figure graphs the relative error in the mean first passage time as a function of the time step size for the Metropolis integrator (solid); the error bars represent a confidence interval and the dashed line is a reference slope. The inset on the upper left shows one cycle of the tilted square well potential. The insets on the lower right plot the approximate densities of the Fixman (solid) and Metropolis (dashed) integrators, obtained by wrapping the points along a trajectory around the boundaries of a period of the square well potential. Comparing the densities at the coarse and fine time step sizes shows that the density of the Metropolis integrator is converged at the coarse time step size, while the Fixman scheme at both the coarse and fine time step sizes is unable to properly detect the jump in the potential. This difference is reflected in the main figure, which shows that the Metropolis integrator is accurate at the coarse time step size, while the Fixman scheme is inaccurate at the time step sizes tested.

3.3 Example (E2): 1D bead-spring chain with hydrodynamic interactions

This example confirms Properties (P2) and (P3) of the Metropolis integrator. Consider a 1D bead-spring chain consisting of beads and springs confined to an interval and immersed in a 'fictitious solvent' with viscosity $\eta$, as illustrated in Figure 3.3. Incompressibility implies that the velocity of a 1D solvent is constant, so to obtain nontrivial hydrodynamic interactions we remove this physical constraint. Even though the resulting solvent does not satisfy incompressibility, this example captures some of the key elements of hydrodynamic interactions – including a nontrivial $\operatorname{div} M$ – while being simple enough to permit detailed numerical studies.

We begin by writing this system as a self-adjoint diffusion of the form (1.1) with a normalizable density $\nu$. Assume that the bead and solvent inertia are negligible. Let $x_i$ and $f_i$ denote the position of and the force on the $i$th bead, respectively, and let $u(x)$ denote the solvent velocity for $x$ in the interval. Order the particle positions so that
\[
x_1 < x_2 < \cdots < x_N .
\]
This ordering is maintained by the following soft-core spring potential:

(3.2)

where the acronym FENE stands for the finitely extensible nonlinear elastic spring force law. There are two parameters in this potential: an energy constant and the bond length at rest. The linear behavior of the potential about its resting position is that of a Hookean spring. Moreover, this potential possesses a singularity at zero separation that prevents interbead collisions, and a second singularity at the maximum extension to enforce finite extension of the spring length. Potentials of this type play an important role in capturing the right non-Newtonian behavior of a dilute polymer solution using bead-spring models [3, 66].

In addition to neighboring spring interactions, the particles are coupled by non-bonded interactions mediated by a fictitious solvent. Since the bead and solvent inertia are negligible, a balance between the solvent viscous force and the bead spring forces leads to the following equation for the solvent velocity:
\[
-\eta\, u''(x) = \sum_{i=1}^{N} f_i\, \delta(x - x_i) ,
\tag{3.3}
\]

where $\delta(x - x_i)$ is a point mass located at the position of the $i$th bead, and $f_i$ is the force on that bead. In this equation the locations of the beads and the forces on the beads are known, and the solvent velocity is unknown. Let $x = (x_1, \dots, x_N)$, $u = (u(x_1), \dots, u(x_N))$, and $f = (f_1, \dots, f_N)$, in terms of which the mobility matrix $M(x)$ in (1.1) comes from solving (3.3); it describes the linear relationship between the solvent velocities at the bead positions and the bead forces:

\[
u = M(x)\, f .
\tag{3.4}
\]

The deterministic drift in (1.1) arises from assuming that the solvent velocity matches the bead velocities at the bead positions. A heat bath is then added to this drift to ensure that the density of the stationary distribution of the resulting equations is proportional to $\nu$ in (1.3), where the total potential energy is the sum of the spring potential energies.

Now we show how to solve (3.3) for the solvent velocity and, in the process, derive a procedure for exactly evaluating the mobility matrix in this specific case.

In terms of the friction matrix $\gamma(x) = M(x)^{-1}$, the linear relationship (3.4) can be written as:

\[
f = \gamma(x)\, u .
\tag{3.5}
\]

Given the instantaneous forces on and positions of the particles, the solution to (3.3) can be derived as follows. In between the particles, the fluid velocity is linear, since the solvent experiences no force there. At each particle position, there is a discontinuity in the derivative of the fluid velocity:

\[
\eta \bigl( u'(x_i^{+}) - u'(x_i^{-}) \bigr) = - f_i ,
\tag{3.6}
\]

for $i = 1, \dots, N$. Taking into account the boundary conditions satisfied by the solvent velocity at the endpoints of the interval, notice that (3.6) implies:

(3.7)

Comparing this equation with (3.5), it is evident that the friction matrix $\gamma(x)$ is tridiagonal with entries given by:

(3.8)

These expressions are explicit and straightforward to implement in the Fixman scheme or in the Metropolis integrator, Algorithm 2.1. Note that the friction and mobility matrices are symmetric positive-definite, and the friction matrix is tridiagonal while the mobility matrix is, in general, not tridiagonal. A sketch of this construction is given below.
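The following minimal sketch assumes, for illustration, that the solvent velocity vanishes at the two endpoints of the interval; with that assumption the jump conditions (3.6) applied to a piecewise-linear velocity field yield the tridiagonal friction matrix assembled below, and the mobility matrix is obtained by inversion.

\begin{verbatim}
import numpy as np

def friction_matrix(x, length, eta):
    """Tridiagonal friction matrix of the 1D fictitious solvent.

    x      -- ordered bead positions, 0 < x[0] < ... < x[-1] < length
    length -- length of the confining interval
    eta    -- solvent viscosity
    Assumes (for illustration) that the solvent velocity vanishes at both walls."""
    n = x.size
    gaps = np.diff(np.concatenate(([0.0], x, [length])))   # n + 1 inter-particle gaps
    gamma = np.zeros((n, n))
    for i in range(n):
        gamma[i, i] = eta * (1.0 / gaps[i] + 1.0 / gaps[i + 1])
        if i + 1 < n:
            gamma[i, i + 1] = gamma[i + 1, i] = -eta / gaps[i + 1]
    return gamma

def mobility_matrix(x, length, eta):
    """Dense mobility matrix M = gamma^{-1}; symmetric positive definite but
    not tridiagonal in general."""
    return np.linalg.inv(friction_matrix(x, length, eta))
\end{verbatim}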

For the numerical experiment, consider an eight-bead chain that is initially compressed, as shown in the northwest inset of Figure 3.4. Setting the solvent viscosity, the chain length, and the energy constant equal to unity is equivalent to rescaling the system by a characteristic length, time, and energy scale. This leaves the inverse temperature factor $\beta$ as a free physical parameter that we vary in the numerical experiments. The simulation parameter values are provided in Table 3.2. The equilibrium positions of the beads are indicated by the tick marks in this inset. We use the Fixman scheme and the Metropolis integrator, Algorithm 2.1, to simulate the relaxation of the system from this state. Specifically, the schemes compute the expected value of the average position of the particles as the lattice relaxes towards equilibrium:
\[
\mathbb{E}\left[ \frac{1}{N} \sum_{i=1}^{N} x_i(t) \right] ,
\tag{3.9}
\]

for times up to a fixed horizon, which was found to be sufficient to capture the relaxation dynamics for all parameter values tested.

The relative error of the Metropolis integrator in computing (3.9) is shown in Figure 3.4 for a range of temperatures. Near the deterministic limit, we used a forward Euler scheme operated at a very small time step size to obtain a benchmark solution, and we observe from Figure 3.4 that the Metropolis integrator remains accurate at large time step sizes. As the temperature is increased, and in the absence of an analytical solution, we used Richardson extrapolation to obtain a benchmark solution. Using this benchmark we estimated the error of the Metropolis integrator at various temperatures. These graphs show that although the relative error increases with increasing temperature, the Metropolis integrator remains accurate at a large time step size over the whole range tested, from the deterministic limit to moderate temperature.

To deal with stochastic instabilities in the Fixman scheme, we employed 'the method of rejecting exploding trajectories' [64]. In this stabilization technique, if a non-physical move is generated by the Fixman scheme, the entire sample path is rejected. At the very low temperature and the coarse time step size, the Fixman scheme rejects a negligible fraction of trajectories and is as accurate as the Metropolis integrator. However, the Fixman scheme rejects almost all trajectories generated at the low temperature even when the time step size is reduced, and its performance is much worse at the moderate temperature. A sketch of this stabilization technique is given below.
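The stabilization just described can be sketched as follows; is_physical and fixman_step are placeholders for a concrete configuration test and one step of the predictor-corrector scheme.

\begin{verbatim}
def sample_path_with_restarts(x0, n_steps, fixman_step, is_physical, rng,
                              max_tries=1000):
    """'Method of rejecting exploding trajectories': if any step of the path
    produces a non-physical configuration, the entire sample path is discarded
    and regenerated from the initial condition."""
    for _ in range(max_tries):
        x, path, ok = x0, [x0], True
        for _ in range(n_steps):
            x = fixman_step(x, rng)
            if not is_physical(x):
                ok = False
                break
            path.append(x)
        if ok:
            return path
    raise RuntimeError("all attempted sample paths exploded")
\end{verbatim}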

Figure 3.3: A one-dimensional bead-spring chain in a solvent. The chain consists of beads connected by springs and confined to an interval. The spring potential energies are modeled by a two-sided FENE potential (3.2), plotted in the inset. The singularities in this potential enforce excluded volume between beads and finite extension of the spring length. The solvent velocity satisfies a steady-state Stokes equation with point sources of drag located at the bead positions.
Parameter Description Value
Physical Parameters
solvent viscosity
chain length
energy constant
inverse temperature factor
# of beads
time-span
for simulation
Numerical Parameters
time step size
# of sample
trajectories
initial positions
equilibrium
positions of springs
Table 3.2: Simulation parameters for a one-dimensional bead-spring chain in a solvent.
Figure 3.4: One-dimensional bead-spring chain in a solvent. The main figure shows the relative error in the mean position of the particles (3.9) computed using the Metropolis integrator at the values of $\beta$ shown in the legend; the error bars represent a confidence interval. The northwest inset shows the initial condition (dots) and equilibrium positions (tick marks) of the beads. Observe that the Metropolis integrator remains accurate at the large time step size for a wide range of temperatures. This figure confirms that the Metropolis integrator is weakly accurate at constant temperature (Property (P2)) and second-order accurate in the small noise limit (Property (P3)). The curve corresponding to the highest temperature is large enough that the rate of convergence becomes visible.

3.4 Example (E3): DNA dynamics in a solvent

The following example applies the Metropolis integrator to a BD simulation of a DNA molecule with hydrodynamic interactions. Consider a bead-spring model of a bacteriophage DNA molecule with spherical beads, where each spring approximates the effect of 4850 base pairs, so that ten springs contain approximately the number of base pairs in a DNA molecule with the contour length reported in [90]. In this case the energy function is simply the sum of the spring potential energies, whose formula is written below in (3.13). Assume the beads are spherical with radius $a$ and move in a Stokesian solvent with viscosity $\eta$. To be specific, the solvent velocity and pressure satisfy:

\[
\eta\, \Delta u - \nabla p + g = 0 , \qquad \operatorname{div} u = 0 ,
\tag{3.10}
\]

where $\Omega$ is the domain of the solvent and $g$ is the applied force density due to the beads. We augment these equations with the following boundary conditions: the fluid is at rest at infinity and satisfies no-slip conditions on the surfaces of each bead. Let $x_i$, $v_i$, and $f_i$ denote the position, translational velocity, and force of the $i$th bead, where $i = 1, \dots, N$. Introduce the $3N$-dimensional vectors of bead positions $x$, bead translational velocities $v$, and bead forces $f$. An immediate consequence of the linearity of the Stokes equation (3.10) is that
\[
v = M(x)\, f ,
\]
where $M(x)$ is the mobility matrix. This linear relationship always holds for bodies moving in a Stokes fluid. Moreover, the matrix $M(x)$ is symmetric and positive definite for every $x$.

Determining the entries of the mobility matrix requires solving the Stokes equation (3.10) for the solvent velocity field, which is a very complicated boundary value problem. This difficulty motivates using the Rotne-Prager-Yamakawa (RPY) approximation of the solvent velocity field, which leads to the following approximate mobility matrix:

(3.11)

for all $i, j = 1, \dots, N$. Here, we have introduced the matrix-valued function defined as:

(3.12)

where the two scalar-valued functions appearing in (3.12) depend only on the distance between the beads; their explicit form in the standard RPY approximation is recalled in the code sketch below.
The quantity $(6 \pi \eta a)^{-1}$ is the mobility constant produced by a single bead translating in an unbounded solvent at a constant velocity: $v = (6 \pi \eta a)^{-1} f$. The approximation (3.11) preserves the physical property that the mobility matrix is positive definite, satisfies $\operatorname{div} M(x) = 0$ for all $x$, and is exact up to terms of order $(a/r)^3$, where $a$ is the bead radius and $r$ is the distance between distinct beads $i$ and $j$ [74].
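For reference, the following sketch assembles the RPY mobility matrix in its standard textbook form (including the regularized expression for overlapping beads); it is meant only as an illustration of (3.11)-(3.12) and may differ from the paper's notation.

\begin{verbatim}
import numpy as np

def rpy_mobility(x, a, eta):
    """Standard Rotne-Prager-Yamakawa mobility matrix for beads of radius a at
    positions x (shape (N, 3)) in a solvent of viscosity eta."""
    N = x.shape[0]
    mu0 = 1.0 / (6.0 * np.pi * eta * a)          # single-bead mobility constant
    M = np.zeros((3 * N, 3 * N))
    I = np.eye(3)
    for i in range(N):
        M[3*i:3*i+3, 3*i:3*i+3] = mu0 * I
        for j in range(i + 1, N):
            rij = x[i] - x[j]
            r = np.linalg.norm(rij)
            rhat = np.outer(rij, rij) / r**2
            if r >= 2.0 * a:
                block = mu0 * (3*a/(4*r)) * ((1 + 2*a**2/(3*r**2)) * I
                                             + (1 - 2*a**2/r**2) * rhat)
            else:  # overlapping beads: regularized RPY expression
                block = mu0 * ((1 - 9*r/(32*a)) * I + (3*r/(32*a)) * rhat)
            M[3*i:3*i+3, 3*j:3*j+3] = block
            M[3*j:3*j+3, 3*i:3*i+3] = block
    return M
\end{verbatim}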

A well-defined characteristic length of DNA is its Kuhn length $b_k$, which represents the distance along the polymer chain over which orientational correlations decay. (The bending rigidity of a polymer decreases with increasing temperature.) For DNA, the Kuhn length is approximately a tenth of a micrometer. In terms of it, consider a worm-like chain (WLC) model for the spring potential energy given by:

(3.13)

where $L_s$ is the maximum length of each spring. This empirical potential energy captures the entropic elasticity which causes the DNA molecule to be in a tightly coiled state. The linear behavior of this potential is given by a Hookean spring, and the strength of the hydrodynamic interactions can be quantified by using the corresponding spring constant to compute a dimensionless bead radius. Using the DNA parameter values provided in Table 3.3, the resulting value signifies, for a dimensionless Rouse model (the same bead-spring chain, but with Hookean springs), a moderate strength of hydrodynamic interactions [66]. Since the linear Rouse model has the same features as the nonlinear DNA model minus the steep potential, a reasonable time step size for the DNA simulation can be determined by simulating a dimensionless Rouse model at this value of the dimensionless bead radius and the same non-random initial condition. In particular, numerical experiments on the Rouse chain identify a time step size at which the average acceptance probability remains high as the chain transitions from a stretched to a coiled state. A sketch of the WLC spring force in its standard interpolation form is given below.
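The sketch below uses the standard Marko-Siggia interpolation formula for the WLC spring force as a stand-in for (3.13); whether the paper uses exactly this interpolation or a close variant is not shown by the text above, so treat the constants here as illustrative.

\begin{verbatim}
def wlc_spring_force(r, b_k, L_s, kBT):
    """Magnitude of the worm-like chain spring force at extension r (0 <= r < L_s),
    using the Marko-Siggia interpolation formula (assumed here for illustration).

    b_k -- Kuhn length, L_s -- maximum spring length, kBT -- thermal energy."""
    s = r / L_s
    return (kBT / b_k) * (0.25 / (1.0 - s)**2 - 0.25 + s)

# Near r = 0 this force is approximately linear, with spring constant
# 3*kBT / (2*b_k*L_s).
\end{verbatim}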

With approximately this time step size, we use the Metropolis integrator to generate one thousand trajectories of the DNA chain from the initial conformation shown in the inset of Figure 3.5, using the values provided in Table 3.3, over a time-span that is sufficiently long to capture the relaxation dynamics of the chain. The average acceptance probability at this time step size remains high, which implies that approximately the same time step size works for both the Rouse and the DNA model, and emphasizes that the Metropolis integrator is to some extent insensitive to the stiffness in the WLC potential. From the simulation data, the radius of gyration was estimated, and this estimate is plotted in Figure 3.5. Repeating this experiment at higher time resolution led to no noticeable change in this graph. The Metropolis integrator is thus able to compute dynamics for this system at a time step size that is an order of magnitude (or more) larger than what is possible using the Fixman scheme combined with 'the method of rejecting exploding trajectories' described in the 1D bead-spring example.

Parameter Description Value Units
Physical Parameters
# of beads
Kuhn length
bead radius
maximum spring length
solvent viscosity
thermal energy
time-span
Numerical Parameters
time step size
# of sample
trajectories
Table 3.3: DNA Simulation Parameters.
Figure 3.5: DNA simulation. The main figure plots the exponential decay in the mean-squared radius of gyration, and the inset plots snapshots of the relaxation of a DNA molecule from the initial conformation shown in the top-most panel of the inset. As shown in that panel, the beads are initially evenly spaced on the x-axis. In this example, the Metropolis integrator is able to take a time step size about one to two orders of magnitude larger than conventional explicit schemes.

3.5 Example (E4): Brownian particle with a double well potential energy

In the small noise limit, the proposal move (2.1) converges to a deterministic update. If this update is a higher-order accurate discretization of the zero noise limit of (1.1), then the proposal move will be deterministically accurate to that order. However, it does not follow from this alone that the actual update in (2.2) is deterministically accurate too. Indeed, Property (P3) also requires that all moves are accepted in the small noise limit. This requirement may appear to contradict Property (P2) – that the algorithm is weakly accurate so long as the noise is handled correctly, regardless of the proposal move used – but it does not. To be perfectly clear, the properties of the scheme in the small noise limit are asymptotic statements in $\beta$ with the time step size $h$ kept fixed (and sufficiently small for certain estimates to be valid), whereas Property (P2) assumes that $\beta$ is fixed and $h$ is sufficiently small.

In §6 we show that, for general $G_h$ and $B_h$,

(3.14)

where we have introduced

(3.15)

From (3.14) it follows that if the function introduced in (3.15) is negative for all states, then all moves are accepted in the small noise limit, and the Metropolis integrator acquires the deterministic accuracy of its proposal move. On the other hand, proposal moves emanating from points where this function is positive will most likely be rejected if the noise is small enough, and therefore, in the small noise limit the Metropolis integrator is not deterministically accurate at these points. In §6 it is proved that when $G_h$ and $B_h$ are given respectively by (2.8) and (2.9) with the parameter values in (2.10), this function is negative definite for $h$ small enough, and hence the Metropolis integrator operated with this choice of $G_h$ and $B_h$ is second-order deterministically accurate. A sufficient condition for this statement to hold is introduced in §6. The following example illustrates these ideas and, in particular, the sufficiency of this condition.

Consider a Brownian particle with a two-dimensional double-well potential energy function, and a mobility matrix set equal to the identity matrix for simplicity. The Hessian of this potential energy becomes singular at the two points marked by an 'x' in Figures 3.7(a) and (b). For this example, $G_h$ in (2.8) with the parameter values given in (2.10) simplifies to the Ralston Runge-Kutta combination:
\[
G_h(x) = \tfrac{1}{4}\, F(x) + \tfrac{3}{4}\, F\!\left( x + \tfrac{2}{3}\, h\, F(x) \right) .
\tag{3.16}
\]

We will compare this choice against the following choices of $G_h$:
\[
G_h(x) = F(x) ,
\tag{3.17}
\]
\[
G_h(x) = F\!\left( x + \tfrac{h}{2}\, F(x) \right) ,
\tag{3.18}
\]
which correspond to a first-order forward Euler and a second-order midpoint discretization of (1.1) in the zero noise limit, respectively. We also consider a third-order accurate Kutta approximation given by:
\[
G_h(x) = \tfrac{1}{6} \bigl( k_1 + 4 k_2 + k_3 \bigr) , \qquad
k_1 = F(x) , \quad k_2 = F\!\left( x + \tfrac{h}{2}\, k_1 \right) , \quad k_3 = F\!\left( x - h\, k_1 + 2 h\, k_2 \right) .
\tag{3.19}
\]
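These drift combinations are classical Runge-Kutta increments; the following sketch collects them so that the deterministic ($\beta \to \infty$) behavior of the resulting proposals can be compared numerically.

\begin{verbatim}
def euler_G(F, x, h):        # (3.17), first order
    return F(x)

def midpoint_G(F, x, h):     # (3.18), second order
    return F(x + 0.5 * h * F(x))

def ralston_G(F, x, h):      # (3.16), second order; the optimal two-stage choice
    k1 = F(x)
    return 0.25 * k1 + 0.75 * F(x + (2.0 / 3.0) * h * k1)

def kutta3_G(F, x, h):       # (3.19), third order
    k1 = F(x)
    k2 = F(x + 0.5 * h * k1)
    k3 = F(x - h * k1 + 2.0 * h * k2)
    return (k1 + 4.0 * k2 + k3) / 6.0
\end{verbatim}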

Figure 3.6 plots the function defined in (3.15) together with a sample trajectory for the forward Euler and midpoint schemes. In the gray-shaded regions, this function is positive, and therefore, if the noise is small enough, the sample trajectory of the Metropolis scheme will likely get stuck in these regions. In the simulation the noise is taken to be small. The Metropolis integrator based on $G_h$ given by (3.17) or (3.18) stops accepting proposal moves once they hit the shaded region.

Figure 3.7 plots the same function for the Ralston and Kutta schemes, which, as expected, is negative definite if $h$ is small enough. The insets in Figures 3.7(a) and (b) zoom into neighborhoods of the two points where the Hessian is singular, showing that the function for the Ralston scheme can be positive in these neighborhoods. The insets also show that these regions become smaller as the time step size is reduced. (On the other hand, the function for the midpoint-based Metropolis scheme can be positive in regions that do not shrink with decreasing time step size – illustrating the optimality of the Ralston scheme among two-stage Runge-Kutta methods.) The Kutta scheme does not appear to have a problem at these singular points of the Hessian.

We have also considered the same system, but with a non-constant mobility matrix.

Experiments revealed that choosing