We propose a novel stochastic method to generate Brownian paths conditioned to start at an initial point and end at a given final point after a fixed time, under a given potential. These paths are sampled with the probability weight given by overdamped Langevin dynamics. We show that such paths can be generated exactly by a local Stochastic Partial Differential Equation (SPDE). This equation cannot be solved in general; we present several approximations that are valid either in the low-temperature regime or in the presence of barrier crossing. We show that this method guarantees the generation of statistically independent transition paths and is computationally very efficient. We illustrate the method on the two-dimensional Mueller potential as well as on the Mexican hat potential.
Conditioned Langevin Dynamics enables efficient sampling of transition paths

Unité de Dynamique Structurale des Macromolécules, URA 3528 du CNRS, Institut Pasteur, 75015 Paris, France
Department of Computer Sciences and Genome Center, University of California, Davis, CA 95616, USA
Institut de Physique Théorique, CEA, IPhT, CNRS, URA 2306, F-91191 Gif-sur-Yvette, France
Beijing Computational Science Research Center, No. 3 HeQing Road, Haidian District, Beijing, 100084, China
Chemical and biological functions are controlled by dynamics. Indeed, it is the ability of molecules to change conformations that leads to their activity. Observing experimentally, or predicting, functional conformational changes is however a very difficult problem. At the core of this problem is the fact that a transition between two conformations of a molecule is a rare event compared to the time scale of the internal dynamics of the molecule. This event is a consequence of random perturbations of the structure of the molecule, which draws its energy from the surrounding heat bath; it is rare whenever the energy barrier that needs to be crossed is high compared to the thermal energy $k_B T$. Transition state theory (TST) offers a framework for studying such rare events 1, 2. The main idea of TST is that the transition state is a saddle point of the energy surface of the molecule of interest. In many cases, the most probable transition path is then simply the minimum energy path (MEP) on that energy surface. TST, however, is limited to situations in which the potential energy surface is rather smooth; it also assumes that every crossing of the energy barrier through the transition state gives rise to a successful reaction. For systems with a rugged potential energy landscape, or when entropic effects matter, the saddle points do not necessarily play the role of transition states 3.
To alleviate the shortcomings of TST, Vanden-Eijnden and colleagues proposed an alternative view of transitions, the Transition Path Theory (TPT) 4, 3, 5. At zero temperature, TPT is exact. In principle, it eliminates the need for sampling the transition path ensemble. It also provides a framework for finding the shortest, or most probable, transition path between two conformations of a molecule. As such, it has served as a touchstone for the development of many path-finding algorithms. Some of these were developed for finding the minimum energy path on the energy surface of a molecule, such as morphing techniques 6, 7, gradient descent methods 8, 9, 10, 11, the nudged elastic band method (NEB) 12, 13, 14, and the string method 15, 16, 17, 18, 19, 20. Other algorithms are concerned with either finding the minimum free energy path on the free energy surface of the molecule 21, 22, 23, 24, or finding the path that minimizes a functional, as implemented in the minimum action path methods 25, 26, 27, 28, 29, 30, 31. This list does not comprehensively cover all existing techniques for finding transition paths, as this is a very active area of research with many new techniques proposed every year.
Due to the inherent fluctuations underlying the transition phenomenon, there are however many ways in which a transition can take place. The methods described above usually generate a single path for the transition, the most "probable" one, where probable refers to minimum energy, minimum free energy, or minimum action. Path sampling methods expand upon this view by using such a path as a seed for a Monte Carlo random walk in the space of transition trajectories, thus generating an approximation of the ensemble of all possible transition paths 32, 33. All the relevant kinetic and thermodynamic information related to the transition can then be extracted from this ensemble, such as the reaction mechanism, the transition states, and the rate constants. The main drawback of these methods, however, is that they are very time consuming and therefore limited to small systems. In addition, they generate highly correlated trajectories, so that the space of sampled trajectories depends strongly on the initial path.
In parallel to path sampling methods, much effort has recently been dedicated to the development and analysis of Markov State Models (MSMs) 34, 35, 36. MSMs aim at coarse-graining the dynamics of the molecular system by mapping it onto a continuous-time Markov jump process, that is, a process whose evolution involves jumps between discretized states representing typical conformations of the original system. Much of the recent work focuses on generating those conformations and the dynamics between them, usually using molecular dynamics simulations. To this day, MSMs remain computationally intensive.
In this paper we are concerned with the problem of path sampling. Following preliminary work by one of us 37, we propose a novel method for generating paths using a Stochastic Partial Differential Equation (SPDE). This equation cannot be solved in general but we were able to find approximations that are valid for different regimes for the dynamics of the system.
The paper is organized as follows. In the next section, we derive the SPDE and describe the different approximations we have implemented to solve this equation. In the following section, we show applications to two well-studied 2D problems. Finally, we conclude the paper with a discussion of the extension of the method to study transition pathways for bio-molecular systems.
2.1 Derivation of the bridge equation
We assume that the system is driven by a force and is subject to stochastic dynamics in the form of an overdamped Langevin equation.
For the sake of simplicity, we illustrate the method on a one-dimensional system, the generalization to higher dimensions or larger number of degrees of freedom being straightforward. We follow closely the presentation given in Ref. 37.
The overdamped Langevin equation reads
$$\dot{x} = \frac{F(x)}{\gamma} + \eta(t)$$
where $x(t)$ is the position of the particle at time $t$, driven by the force $F(x)$, and $\gamma$ is the friction coefficient, related to the diffusion coefficient $D$ through the Einstein relation $D = k_B T / \gamma$, where $k_B$ is the Boltzmann constant and $T$ the temperature of the heat bath. In addition, $\eta(t)$ is a Gaussian white noise with moments given by
$$\langle \eta(t) \rangle = 0, \qquad \langle \eta(t)\,\eta(t') \rangle = \frac{2}{\beta \gamma}\,\delta(t - t')$$
where $\beta = 1/(k_B T)$ is the inverse temperature. This equation is to be supplemented by the initial condition $x(0) = x_0$: the particle is assumed to be at $x_0$ at time $0$. To emphasize this initial condition, we will often use the notation $P(x, t \,|\, x_0, 0)$ for the probability of finding the particle at $x$ at time $t$ given that it started at $x_0$ at time $0$.
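As an illustration of how such dynamics can be simulated (a minimal sketch, not part of the paper's method), the overdamped Langevin equation can be integrated with a simple Euler-Maruyama scheme. The double-well potential $U(x) = (x^2 - 1)^2$ used below is a hypothetical example:

```python
import numpy as np

def langevin(force, x0, t_f, dt, gamma=1.0, kBT=1.0, rng=None):
    """Euler-Maruyama integration of dx/dt = F(x)/gamma + eta(t),
    with noise correlator <eta(t) eta(t')> = 2 (kBT/gamma) delta(t - t')."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = int(round(t_f / dt))
    x = np.empty(n + 1)
    x[0] = x0
    sigma = np.sqrt(2.0 * kBT / gamma * dt)  # Gaussian increment per step
    for i in range(n):
        x[i + 1] = x[i] + force(x[i]) / gamma * dt + sigma * rng.standard_normal()
    return x

# Hypothetical double-well potential U(x) = (x^2 - 1)^2, force F = -U'(x)
force = lambda x: -4.0 * x * (x**2 - 1.0)
path = langevin(force, x0=-1.0, t_f=10.0, dt=1e-3)
```

At each step, the deterministic drift $F(x)\,dt/\gamma$ is combined with a Gaussian increment of variance $2 D\, dt$, consistent with the noise moments above.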
We now study the probability, over all paths starting at $x_0$ at time $0$ and conditioned to end at a given point $x_f$ at time $t_f$, of finding the particle at point $x$ at time $t$. This probability can be written as
$$\mathcal{P}(x, t) = \frac{P(x_f, t_f \,|\, x, t)\, P(x, t \,|\, x_0, 0)}{P(x_f, t_f \,|\, x_0, 0)}$$
where we use the notation $P(b, t_b \,|\, a, t_a)$ for the probability that a particle located at $a$ at time $t_a$ is found at $b$ at time $t_b$. Indeed, the probability for a path starting from $x_0$ and ending at $x_f$ to go through $x$ at time $t$ is the product of the probability to start at $x_0$ and end at $x$, times the probability to start at $x$ and end at $x_f$.
It can easily be checked that the conditional probability $\mathcal{P}(x, t)$ satisfies a new Fokker-Planck equation.
This equation had been previously obtained using the Doob transform 40, 41 in the probability literature and provides a simple recipe to construct a generalized bridge. It generates Brownian paths, starting at $x_0$ and conditioned to end at $x_f$ at time $t_f$, with unbiased statistics. It is the additional drift term $2 D\, \partial_x \ln P(x_f, t_f \,|\, x, t)$ in the Langevin equation that guarantees that the trajectories starting at $x_0$ and ending at $x_f$ are statistically unbiased. This equation is easily generalized to any number of degrees of freedom.
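For free diffusion ($F = 0$) the propagator is Gaussian and the Doob drift takes the closed form $(x_f - x)/(t_f - t)$. The following sketch, valid only under this free-diffusion assumption and not the paper's general algorithm, generates exact free Brownian bridges:

```python
import numpy as np

def brownian_bridge(x0, xf, t_f, dt, D=1.0, rng=None):
    """Free Brownian bridge: dx = (xf - x)/(t_f - t) dt + sqrt(2 D) dW.
    The drift is 2 D d/dx ln P(xf, t_f | x, t); for a Gaussian free
    propagator this reduces to (xf - x)/(t_f - t)."""
    if rng is None:
        rng = np.random.default_rng(1)
    n = int(round(t_f / dt))
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        t = i * dt
        drift = (xf - x[i]) / (t_f - t)
        x[i + 1] = x[i] + drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    x[-1] = xf  # the drift is singular at t_f; pin the (exact) endpoint
    return x

path = brownian_bridge(0.0, 1.0, t_f=1.0, dt=1e-3)
```

Every sampled path starts at $x_0$ and ends at $x_f$ exactly, illustrating how the extra drift term steers otherwise unbiased Brownian paths to the target point.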
In the following, we specialize to the case where the force derives from a potential, $F(x) = -\partial_x U(x)$. The bridge equation becomes
In that case, the Fokker-Planck equation corresponding to this modified Langevin equation can be recast into an imaginary time Schrödinger equation 37, and the probability distribution function can be written as
where $H$ is a "quantum Hamiltonian" defined by
$$H = -D\,\frac{\partial^2}{\partial x^2} + V(x)$$
and the potential $V(x)$ by
$$V(x) = \frac{\beta D}{2}\left(\frac{\beta}{2}\,(\partial_x U)^2 - \partial_x^2 U\right)$$
We denote by $\langle x_f |\, e^{-(t_f - t) H} \,| x \rangle$ the matrix element of the Euclidean Schrödinger evolution operator.
We see from the above form that when $t \to t_f$, the matrix element converges to $\delta(x - x_f)$; it is this singular attractive potential which drives all the paths to $x_f$ at time $t_f$.
2.2 Transition paths
The bridge equations (7) or (12) can be solved exactly in a number of cases 42. In general, however, for systems with many degrees of freedom, these functions cannot be computed exactly and one has to resort to approximations. In the following, we will mostly be interested in problems of energy or entropy barrier crossing, which are of utmost importance in many chemical, biochemical, and biological reactions.
The matrix element can be written as a Feynman path integral
The free case is defined as
where the reference distribution is the probability distribution of a free particle.
Equation (13) can be rewritten as
where the angular brackets denote the expectation value with respect to the Brownian (free-particle) measure.
The convexity of the exponential function implies the Jensen inequality 43, which states that for any operator $A$ and any probability measure, one has
$$\left\langle e^{A} \right\rangle \geq e^{\langle A \rangle}$$
Equality occurs when the probability measure is a $\delta$ function; the inequality is thus a good approximation when the operator has small fluctuations. Taking $A$ to be
Using the expression
after some calculations, we obtain
After a change of variable, this expression becomes
and the constrained Langevin equation (12) becomes
or, after a change of variable,
Integration by parts yields the equivalent form
which does not require the cumbersome evaluation above. These forms require an integration over a Gaussian variable, which can be performed by sampling this variable,
where the sampled variables are Gaussian (with zero mean and unit variance).
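The remaining Gaussian integral can be estimated by averaging over independent zero-mean, unit-variance draws; the same samples also illustrate the Jensen inequality invoked above. This is a generic numerical sketch, not code from the paper, and the integrand $u^2$ is a hypothetical test case with a known answer:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(100_000)  # zero-mean, unit-variance Gaussian samples

# Monte Carlo estimate of a Gaussian expectation; here E[u^2] = 1 exactly
est = (u**2).mean()

# Jensen's inequality for the same measure: <exp(A)> >= exp(<A>), with A = u
lhs = np.exp(u).mean()   # exact value is exp(1/2) ~ 1.65
rhs = np.exp(u.mean())   # ~ exp(0) = 1
```

The gap between `lhs` and `rhs` is controlled by the fluctuations of $A$, which is exactly why the approximation works best when the exponent fluctuates weakly.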
However, as we have seen, the approximation (17) is valid only if the exponent does not fluctuate too much over the trajectories relevant to the transition. There are two cases in which this approximation can be further simplified and the Gaussian integral can be avoided:
According to Kramers theory, the total transition time (waiting plus crossing) scales like the exponential of the barrier height, while it has been shown that the crossing time (the transition path time, TPT) scales like the logarithm of the barrier height 44, 45, 46. The crossing time is thus much shorter than the total transition time.
As discussed above, the barrier crossing time is very short compared to the Kramers time. The transition trajectories are therefore very weakly diffusive, and thus almost ballistic. Consequently, the corresponding term can again be neglected, and Equation (18) becomes
All the equations described above are easily generalized to any number of particles in any dimension, interacting through arbitrary many-body potentials. They are integro-differential stochastic Markov equations, since the evolution depends only on the current value of the stochastic variable. One can generate many independent trajectories by integrating these equations with different noise histories. To test the validity of the main approximation (17), one should compute the variance of the random variable in Eq. (16) over all generated trajectories. Computing the correction to the Jensen inequality, one easily sees that the approximation is reliable provided the factor $R$ built from this variance satisfies $R \ll 1$.
2.3 Simulation Time
For barrier crossing, what simulation time $t_f$ should be used? Obviously, for any initial and final states, there is a set of Langevin trajectories that make the transition in any given time $t_f$. If $t_f$ is very short compared to the typical time scales of large motions of the system, there is a small number of such trajectories, since they require a very specific noise history. As a result, the approximations presented above are reliable and the factor $R$ is much smaller than 1. However, this is not a very interesting regime, as the trajectories are driven by the boundary conditions. If we are interested in simulating transition paths, $t_f$ should obviously be larger than the typical transition path time. Indeed, if $t_f$ is smaller, we will simulate paths driven by the final state. On the other hand, if $t_f$ is too large, we will also simulate part of the waiting time in the wells, where fluctuations are large (except perhaps at low temperature). Therefore, in order to simulate transition paths as accurately as possible, one should use a simulation time $t_f$ larger than the typical transition path time, but not much larger.
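The separation of time scales invoked here can be illustrated numerically. The snippet below is schematic: the prefactors of the Kramers and transition-path times are arbitrarily set to 1, so only the scaling with barrier height (in units of $k_B T$) is shown:

```python
import numpy as np

barriers = np.array([2.0, 5.0, 10.0])  # barrier heights Delta_E in units of kBT

t_kramers = np.exp(barriers)   # total (waiting) time ~ exp(beta * Delta_E)
t_tpt = np.log(barriers)       # transition path time ~ log(beta * Delta_E)

# The ratio grows rapidly with the barrier: crossing events occupy a
# vanishing fraction of the total simulation time.
separation = t_kramers / t_tpt
```

This is why conditioning on the transition itself, rather than waiting for it, yields such large computational savings.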
We now illustrate these concepts on two examples: the Mueller potential, and the Mexican hat potential.
3.1 The Mueller potential
The Mueller potential is a standard benchmark potential to check the validity of methods for generating transition paths. It is a two dimensional potential given by
This potential has three local minima, denoted A, B, and C, separated by two barriers (Fig. 1). The effective potential can be calculated analytically, as well as its gradient. Equations (18), (22), and (23) can easily be solved numerically; we display only the trajectories generated by Eq. (23). The simulation time is chosen so that we observe a small waiting time around the initial as well as the final point. We use 50 points for the integration over the Gaussian variable, and display a sample of 500 trajectories obtained from Eq. (23). We can compute the average trajectory as well as its variance. These trajectories are displayed in Fig. 1, where we plot the AB, BC, and AC trajectories.
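For concreteness, the commonly used parameterization of the Mueller potential (a sum of four anisotropic Gaussian terms) and its analytical gradient can be sketched as follows. The parameter values below are the standard literature ones and are assumed to match those used here:

```python
import numpy as np

# Standard Mueller potential parameters (four Gaussian terms)
A  = np.array([-200.0, -100.0, -170.0, 15.0])
a  = np.array([-1.0, -1.0, -6.5, 0.7])
b  = np.array([0.0, 0.0, 11.0, 0.6])
c  = np.array([-10.0, -10.0, -6.5, 0.7])
X0 = np.array([1.0, 0.0, -0.5, -1.0])
Y0 = np.array([0.0, 0.5, 1.5, 1.0])

def mueller(x, y):
    """Mueller potential: sum of four anisotropic Gaussian terms."""
    dx, dy = x - X0, y - Y0
    return np.sum(A * np.exp(a * dx**2 + b * dx * dy + c * dy**2))

def mueller_grad(x, y):
    """Analytical gradient of the Mueller potential."""
    dx, dy = x - X0, y - Y0
    e = A * np.exp(a * dx**2 + b * dx * dy + c * dy**2)
    gx = np.sum(e * (2.0 * a * dx + b * dy))
    gy = np.sum(e * (b * dx + 2.0 * c * dy))
    return np.array([gx, gy])
```

With these parameters, the deepest minimum lies near $(-0.558, 1.442)$, with the two other minima near $(0.623, 0.028)$ and $(-0.050, 0.467)$.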
To assess the quality of the approximation, we check the criterion (24). The approximation turns out to be quite reliable for the AB trajectories, but less so for the others. In fact, it is instructive to study the accuracy of the method as $t_f$ varies. To that end, in Fig. 2 we plot the factor $R$ as a function of $t_f$ for the AB transition. We see that $R$ is small for both small and large $t_f$, with a maximum in between. For small $t_f$, the trajectories fluctuate around the straight-line path joining A to B through high barriers (see Fig. 1A). For large $t_f$, the trajectories fluctuate around the potential energy valley joining A to B. As $t_f$ increases from small values, the ensemble of trajectories includes trajectories going through the high barrier as well as through the valley; near the maximum there is a strong mixing of both types of trajectories, giving rise to a large value of $R$. As $t_f$ increases further, the trajectories going through the barrier disappear from the ensemble and only valley trajectories remain, yielding a decrease of $R$.
3.2 The Mexican hat potential
The potential of the Mexican hat is given by
and therefore has a circle of degenerate minima around a central maximum. Again we solve equation (23), using 50 points for the Gaussian integral. Given the small barrier of this potential, we go to low temperature. In Fig. 3A, we plot 100 such trajectories, all sharing the same initial and final points. The trajectories divide into three dominant groups: those that take a northern route (30), those that take a southern route (40) along the circle of minima, and those that go directly over the central energy barrier (30). The assignment to these three groups was based on the mean value of the coordinates along each trajectory. If we take a longer duration, the fraction of trajectories that go through the central barrier decreases: for a longer total time, only 9 such trajectories remain, as seen in Fig. 3B. The quality criterion (24) remains satisfied in both cases.
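A minimal sketch of a Mexican hat surface of this type, assuming the common quartic form $V(x, y) = A\,(x^2 + y^2 - r_0^2)^2$ (the paper's exact normalization and parameter values may differ): minima lie on the circle $r = r_0$ and the central maximum at the origin has height $A\, r_0^4$.

```python
import numpy as np

def mexican_hat(x, y, A=1.0, r0=1.0):
    """Quartic 'Mexican hat' potential (assumed form): a circle of
    degenerate minima at radius r0 and a central maximum of height
    A * r0**4 at the origin."""
    r2 = x**2 + y**2
    return A * (r2 - r0**2)**2
```

The rotational symmetry is what makes this a useful benchmark: northern and southern routes around the circle of minima are statistically equivalent, so any systematic bias of a sampling scheme shows up as an asymmetry between the two groups of paths.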
In summary, we have presented a novel method to generate ab initio transition path trajectories, using a formalism called Conditioned Langevin Dynamics. The most crucial parameter of the theory is the total duration of the simulation, which must be carefully chosen so as to allow short waiting times around both the initial and final states. We define a quality criterion $R$ that can be calculated a posteriori to check whether the approximation made to solve the underlying SPDE is justified. Here we have tested the method on simple one- or two-dimensional potentials, but our aim is to use it in more complicated situations, such as conformational transitions in proteins.
Proteins are small biopolymers (up to a few hundred amino acids) that do not stay in one of their two states (e.g., native or denatured, open or closed, apo or holo in the allosteric picture), but rather make infrequent stochastic transitions between them. They are usually represented in a coarse-grained manner, using one or two beads per amino acid. The picture that emerges is that of a system staying for a long time in one of the minima and then making stochastic, rapid transitions to the other minimum. It follows that for most of the time, the system performs harmonic oscillations in one of the wells, which can be described by normal mode analysis. Rarely, there is a very short but physically interesting event, in which the system makes a fast transition between minima. This picture has been confirmed by single-molecule experiments, in which the waiting time in one state can be measured, although the crossing time is so short that it cannot be resolved 47. This scenario has also been confirmed recently by very long, millisecond-scale molecular dynamics simulations which, for the first time, show spontaneous thermal folding-unfolding events 48.
It is possible to model this behavior using an energy function based on a simplified Elastic Network Model, namely a mixture of two elastic networks, each centered around one of the two states of the macromolecule 8, 49. We will present the results of Conditioned Langevin Dynamics applied to this situation in a forthcoming paper.
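As a one-dimensional cartoon of such a two-state mixture, two harmonic (elastic-network-like) basins can be combined into a smooth double well. The exponential mixing rule below is one common choice in the literature, not necessarily the one that will be used in the forthcoming application, and all parameter values are hypothetical:

```python
import numpy as np

def mixed_energy(x, k1=1.0, x1=-1.0, k2=1.0, x2=1.0, beta=3.0):
    """Smoothly mix two harmonic basins (spring constants k1, k2,
    centers x1, x2) into a double well via the exponential mixing rule
    E = -(1/beta) log(exp(-beta E1) + exp(-beta E2))."""
    e1 = 0.5 * k1 * (x - x1)**2
    e2 = 0.5 * k2 * (x - x2)**2
    return -np.log(np.exp(-beta * e1) + np.exp(-beta * e2)) / beta
```

Near each center, the mixed surface reduces to the corresponding single harmonic well, while a barrier appears in between, reproducing the two-state picture described above.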
MD acknowledges financial support from the Agence Nationale de la Recherche (ANR) through the program Bip-Bip.
- Eyring 1935 Eyring, H. The activated complex and the absolute rate of chemical reactions. Chem. Rev. 1935, 17, 65–77.
- Wigner 1938 Wigner, E. The transition state method. Trans. Faraday Soc. 1938, 34, 29–41.
- E and Vanden-Eijnden 2010 E, W.; Vanden-Eijnden, E. Transition-path theory and path-finding algorithms for the study of rare events. Annu Rev Phys Chem 2010, 61, 391–420.
- E and Vanden-Eijnden 2006 E, W.; Vanden-Eijnden, E. Towards a theory of transition paths. J. Stat. Phys. 2006, 123, 503–523.
- Vanden-Eijnden 2014 Vanden-Eijnden, E. Transition path theory. Adv. Exp. Med. Biol. 2014, 797, 91–100.
- Kim et al. 2002 Kim, M.; Jernigan, R.; Chirikjian, G. Efficient generation of feasible pathways for protein conformational transitions. Biophys. J. 2002, 83, 1620–1630.
- Weiss and Levitt 2009 Weiss, D. R.; Levitt, M. Can morphing methods predict intermediate structures? J. Mol. Biol. 2009, 385, 665–674.
- Maragakis and Karplus 2005 Maragakis, P.; Karplus, M. Large amplitude conformational change in proteins explored with a plastic network model: adenylate kinase. J. Mol. Biol. 2005, 352, 807–822.
- Zheng et al. 2007 Zheng, W.; Brooks, B.; Hummer, G. Protein conformational transitions explored by mixed elastic network models. Proteins: Struct. Func. Bioinfo. 2007, 69, 43–57.
- Tekpinar and Zheng 2010 Tekpinar, M.; Zheng, W. Predicting order of conformational changes during protein conformational transitions using an interpolated elastic network model. Proteins: Struct. Func. Bioinfo. 2010, 78, 2469–2481.
- Pinski and Stuart 2010 Pinski, F.; Stuart, A. Transition paths in molecules: gradient descent in pathspace. Journal of Chemical Physics 2010, 132, 184104.
- Jonsson et al. 1998 Jonsson, H.; Mills, G.; Jacobsen, K. W. In Classical and Quantum Dynamics in Condensed Phase Simulations; Berne, B. J., Ciccotti, G., Coker, D. F., Eds.; World Scientific: Singapore, 1998; Chapter 16, pp 385–404.
- Henkelman et al. 2000 Henkelman, G.; Uberuaga, B.; Jonsson, H. A climbing image nudged elastic band method for finding saddle points and minimum energy paths. J. Chem. Phys. 2000, 113, 9901–9904.
- Sheppard et al. 2008 Sheppard, D.; Terrell, R.; Henkelman, G. Optimization methods for finding minimum energy paths. J. Chem. Phys. 2008, 128, 134106.
- E et al. 2002 E, W.; Ren, W.; Vanden-Eijnden, E. String method for the study of rare events. Phys. Rev. B 2002, 66, 052301.
- Ren et al. 2005 Ren, W.; Vanden-Eijnden, E.; Maragakis, P.; E, W. Transition pathways in complex systems: Application of the finite-temperature string method to the alanine dipeptide. J. Chem. Phys. 2005, 123, 134109.
- E et al. 2007 E, W.; Ren, W.; Vanden-Eijnden, E. Simplified and improved string method for computing the minimum energy paths in barrier-crossing events. J. Chem. Phys. 2007, 126, 164103.
- Vanden-Eijnden and Venturoli 2009 Vanden-Eijnden, E.; Venturoli, M. Revisiting the finite temperature string method for the calculation of reaction tubes and free energies. J. Chem. Phys. 2009, 130, 194103.
- Ren and Vanden-Eijnden 2013 Ren, W.; Vanden-Eijnden, E. A climbing string method for saddle point search. J. Chem. Phys. 2013, 138, 134105.
- Maragliano et al. 2014 Maragliano, L.; Roux, B.; Vanden-Eijnden, E. Comparison between Mean Forces and Swarms-of-Trajectories String Methods. J. Chem. Theory Comput. 2014, 10, 524–533.
- Maragliano et al. 2006 Maragliano, L.; Fischer, A.; Vanden-Eijnden, E.; Ciccotti, G. String method in collective variables: minimum free energy paths and isocommittor surfaces. J. Chem. Phys. 2006, 125, 24106.
- Pan et al. 2008 Pan, A.; Sezer, D.; Roux, B. Finding transition pathways using the string method with swarms of trajectories. J. Phys. Chem. B. 2008, 112, 3432–3440.
- Matsunaga et al. 2012 Matsunaga, Y.; Fujisaki, H.; Terada, T.; Furuta, T.; Moritsugu, K.; Kidera, A. Minimum free energy path of ligand-induced transition in adenylate kinase. PLoS Comput. Biol. 2012, 8, e1002555.
- Branduardi and Faraldo-Gomez 2013 Branduardi, D.; Faraldo-Gomez, J. D. String method for calculation of minimum free-energy paths in Cartesian space in freely-tumbling systems. J. Chem. Theory Comput. 2013, 9, 4140–4154.
- Olender and Elber 1996 Olender, R.; Elber, R. Calculation of classical trajectories with a very large time step: formalism and numerical examples. J. Chem. Phys. 1996, 105, 9299–9315.
- Eastman et al. 2001 Eastman, P.; Gronbech-Jensen, N.; Doniach, S. Simulation of protein folding by reaction path annealing. J. Chem. Phys. 2001, 114, 3823.
- Franklin et al. 2007 Franklin, J.; Koehl, P.; Doniach, S.; Delarue, M. MinActionPath: maximum likelihood trajectory for large-scale structural transitions in a coarse grained locally harmonic energy landscape. Nucl. Acids. Res. 2007, 35, W477–W482.
- Faccioli et al. 2006 Faccioli, P.; Sega, M.; Pederiva, F.; Orland, H. Dominant pathways in protein folding. Phys. Rev. Lett. 2006, 97, 108101.
- Vanden-Eijnden and Heymann 2008 Vanden-Eijnden, E.; Heymann, M. The geometric minimum action method for computing minimum energy paths. J. Chem. Phys. 2008, 128, 061103.
- Zhou et al. 2008 Zhou, X.; Ren, W.; E, W. Adaptive minimum action method for the study of rare events. J. Chem. Phys. 2008, 128, 104111.
- Chandrasekaran et al. 2016 Chandrasekaran, S.; Dhas, J.; Dokholyan, N.; Carter Jr, C. A modified PATH algorithm rapidly generates transition states comparable to those found by other well established algorithms. Struct. Dyn. 2016, 3, 012101.
- Pratt 1986 Pratt, L. A statistical method for identifying transition states in high dimensional problems. J. Chem. Phys. 1986, 85, 5045–5048.
- Bolhuis et al. 2002 Bolhuis, P.; Dellago, C.; Geissler, P. L.; Chandler, D. Transition path sampling: throwing ropes over rough mountain passes, in the dark. Annu. Rev. Phys. Chem. 2002, 53, 291–318.
- Chodera et al. 2007 Chodera, J.; Singhal, N.; Pande, V.; Dill, K.; Swope, W. Automatic discovery of metastable states for the construction of Markov models of macromolecular conformational dynamics. J. Chem. Phys. 2007, 126, 155101.
- Bowman et al. 2009 Bowman, G.; Beauchamp, K.; Boxer, G.; Pande, V. Progress and challenges in the automated construction of Markov state models for full protein systems. J. Chem. Phys. 2009, 131, 124101.
- Pande et al. 2010 Pande, V.; Beauchamp, K.; Bowman, G. Everything you wanted to know about Markov State Models but were afraid to ask. Methods 2010, 52, 99–105.
- Orland 2011 Orland, H. Generating transition paths by Langevin bridges. J. Chem. Phys. 2011, 134, 174114.
- van Kampen 1992 van Kampen, N. G. Stochastic Processes in Physics and Chemistry; North-Holland: Amsterdam, The Netherlands, 1992.
- Zwanzig 2001 Zwanzig, R. Nonequilibrium Statistical Mechanics; Oxford University Press: Oxford, United Kingdom, 2001.
- Doob 1957 Doob, J. Conditional Brownian motion and the boundary limits of harmonic functions. Bull. Soc. Math. France 1957, 85, 431–458.
- Fitzsimmons et al. 1992 Fitzsimmons, P.; Pitman, J.; Yor, M. Seminar on Stochastic Processes, 1992; Birkhäuser: Boston, MA, USA, 1992.
- Majumdar and Orland 2015 Majumdar, S.; Orland, H. Effective Langevin equations for constrained stochastic processes. J. Stat. Mech. Theor. Exp. 2015, 2015, P06039.
- Jensen 1906 Jensen, J. Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta Math. 1906, 30, 175–193.
- Gopich and Szabo 2006 Gopich, I.; Szabo, A. Theory of the statistics of kinetic transitions with application to single-molecule enzyme catalysis. J. Chem. Phys. 2006, 124, 154712.
- Kim and Netz 2015 Kim, W.; Netz, R. The mean shape of transition and first-passage paths. J. Chem. Phys. 2015, 143, 224108.
- Carlon and Orland Carlon, E.; Orland, H. Submitted.
- Chung et al. 2012 Chung, H.; McHale, K.; Louis, J.; Eaton, W. Single–molecule fluorescence experiments determine protein folding transition path times. Science 2012, 335, 981–984.
- Lindorff-Larsen et al. 2011 Lindorff-Larsen, K.; Piana, S.; Dror, R.; Shaw, D. How Fast-Folding Proteins Fold. Science 2011, 334, 517–520.
- Das et al. 2014 Das, A.; Gur, M.; Cheng, M. H.; Jo, S.; Bahar, I.; Roux, B. Exploring the Conformational Transitions of Biomolecular Systems Using a Simple Two-State Anisotropic Network Model. PLoS Comput. Biol. 2014, 10, e1003521.