Ensemble annealing of complex physical systems

Abstract

Algorithms for simulating complex physical systems or solving difficult optimization problems often resort to an annealing process. Rather than simulating the system at the temperature of interest, an annealing algorithm starts at a temperature that is high enough to ensure ergodicity and gradually decreases it until the destination temperature is reached. This idea is used in popular algorithms such as parallel tempering and simulated annealing. A general problem with annealing methods is that they require a temperature schedule. Choosing well-balanced temperature schedules can be tedious and time-consuming. Imbalanced schedules can have a negative impact on the convergence, runtime and success of annealing algorithms. This article outlines a unifying framework, ensemble annealing, that combines ideas from simulated annealing, histogram reweighting and nested sampling with concepts in thermodynamic control. Ensemble annealing simultaneously simulates a physical system and estimates its density of states. The temperatures are lowered not according to a prefixed schedule but adaptively so as to maintain a constant relative entropy between successive ensembles. After each step on the temperature ladder an estimate of the density of states is updated and a new temperature is chosen. Ensemble annealing is highly practical and broadly applicable. This is illustrated for various systems including Ising, Potts, and protein models.

free energy; density of states; annealing; histogram reweighting; Monte Carlo simulation; replica-exchange Monte Carlo

I Introduction

Simulational science often involves the generation of configurations from high-dimensional probability distributions as well as the computation of ensemble averages and normalization constants. Numerous applications in statistical physics, biomolecular simulation and Bayesian inference illustrate the ubiquitous need for efficient sampling methods. Challenges are posed by the complexity of the system, its sheer size, slow convergence and non-ergodicity.

To address these challenges, algorithms that work with modified versions of the system have been proposed. One idea is to simulate the system at multiple temperatures and utilize the enhanced flexibility at higher temperatures to avoid local free-energy minima at lower temperatures. This idea is the basis of sampling algorithms such as replica-exchange Monte Carlo Swendsen and Wang (1986) and simulated tempering Marinari and Parisi (1992), but it is also used in popular optimization algorithms such as simulated annealing Kirkpatrick et al. (1983).

Parallel tempering (PT) Geyer (1991), for example, considers a family of canonical ensembles at different temperatures. The ensembles are simulated independently, and occasional swaps of configurations between ensembles at nearby temperatures allow the simulation to escape from metastable states. From a PT simulation, thermodynamic quantities such as free energies and heat capacities can then be computed with high accuracy. But the success and convergence of a PT run depend critically on the choice of the temperature schedule. Choosing a good temperature schedule can be highly non-trivial, especially for systems undergoing phase transitions. A well-balanced schedule entails overlap between ensembles at neighboring temperatures. This means that we have to use more and more replicas with increasing system size because the energy is extensive Predescu et al. (2005). Moreover, PT explores temperature space on a fixed ladder. If we want to use multiple temperatures or control parameters, as in multi-dimensional PT Sugita et al. (2000), we suffer from the curse of dimensionality. Another source of inefficiency is the fact that configurations at high temperatures are constantly being produced but are no longer needed once the simulation has converged.

Multi-canonical sampling algorithms Berg and Neuhaus (1992) are a powerful alternative to annealing methods. Rather than utilizing a temperature parameter to modify the system, multi-canonical algorithms draw configurations from an ensemble whose weight is inversely proportional to the density of states (DOS) of the system, such that ideally the energy histogram will be constant. However, this requires that we know the DOS before the actual simulation, which is rarely the case.

The Wang-Landau (WL) algorithm Wang and Landau (2001) is an ingenious variant of multi-canonical sampling that sidesteps this problem. The unknown density of states is estimated in the course of a WL run; configurational samples are generated as a by-product. The fact that the correct DOS should produce a flat energy histogram can be used to monitor the convergence of the method. By gradually decreasing the learning rate, the simulation is stabilized and converges.

WL sampling was originally developed for discrete systems Wang and Landau (2001). Its direct extension to large or continuous systems requires choosing an energy range and binning. But there might be forbidden energy levels that cannot be visited, in which case the corresponding bins remain empty and the energy histogram will never be flat. These problems are aggravated for multi-dimensional DOS over more than one macrovariable because the number of bins grows exponentially in the number of macrovariables. In that case, flatness of the energy histogram ceases to be a useful convergence criterion and must be replaced by other criteria Zhou et al. (2006). Applying these modifications in practice remains a challenge and involves parameter tweaking.

This article proposes an algorithm, ensemble annealing, that solves these issues and produces samples as well as an estimate of the DOS. Ensemble annealing is inspired by the nested sampling method for Bayesian computation Skilling (2006) and can be viewed as a generalization of nested sampling to the canonical or other ensembles. The algorithm applies both to discrete and continuous systems. In contrast to simulated annealing or parallel tempering, ensemble annealing constructs an optimal temperature protocol adaptively and has only a few algorithmic parameters.

II Ensemble annealing

Ensemble annealing is a sequential algorithm that steps through iterations denoted by $k = 0, 1, 2, \ldots$. $N$ non-interacting particles or walkers are employed to explore a series of ensembles, typically starting in a high-temperature ensemble, then cycling through ensembles at lower and lower temperatures, until the destination ensemble is reached. For each ensemble, the walkers produce configurations $x_{kn}$ where $n = 1, \ldots, N$. In contrast to other annealing and tempering methods, only the start and final ensemble have to be chosen. The intermediate ensembles are found during the simulation by placing them such that a constant overlap between successive ensembles is maintained. To implement this approach, we need to agree on various concepts, mainly what kind of ensembles will be considered, and how to measure distances between ensembles.

ii.1 Ensembles

Let us denote the target ensemble from which we aim to generate configurations $x$ by $p^\ast(x)$. Often $p^\ast(x) \propto \pi(x)\,e^{-E(x)}$ with energy $E(x)$. In Bayesian inference, for example, $\pi(x)$ denotes the prior distribution and $E(x)$ corresponds to the negative log-likelihood. In a physical simulation of particles in a box, $\pi$ will be uniform over the box and $E$ will be the interaction energy between all particles. Note that in practice both $\pi$ and $p^\ast$ are often unnormalized.

To draw configurations from $p^\ast$ we consider a series of ensembles

$p_k(x) = \frac{1}{Z_k}\,\pi(x)\,g_k(E(x)), \qquad Z_k = \int \pi(x)\,g_k(E(x))\,dx$   (1)

where $Z_k$ normalizes the $k$-th ensemble. Here, we assume that ensemble $p_k$ depends on the configuration $x$ only through the macrovariable $E(x)$, but the method also works for more general ensembles. Typically $g_k = g_{\lambda_k}$ where $g_\lambda$ is a parameterized family of weight functions and $\lambda$ a protocol parameter. The distributions $p_k$ are intermediate helper or bridging distributions. In case of the canonical ensemble, configurations are weighted by the Boltzmann factor $g_\beta(E) = e^{-\beta E}$, i.e.

$p_\beta(x) = \frac{1}{Z(\beta)}\,\pi(x)\,e^{-\beta E(x)}$   (2)

where $\beta$ is the inverse temperature and $Z(\beta) = \int \pi(x)\,e^{-\beta E(x)}\,dx$ is the partition function.

Obviously the canonical ensemble is a widespread choice in annealing methods, but it might also be worthwhile to consider other ensembles. For example, in parallel tempering the use of the Tsallis ensemble

$g_{\lambda,\beta}(E) = \bigl[1 + (\lambda - 1)\,\beta\,(E - E_{\min})\bigr]^{-1/(\lambda - 1)}$   (3)

with control parameter $\lambda > 1$ (see footnote 1), inverse temperature $\beta$ and minimum energy $E_{\min}$ has been proposed Hansmann and Okamoto (1997). A multi-parameter combination of the Boltzmann and Tsallis ensembles is used in complex Bayesian data analyses Habeck et al. (2005) to independently control the prior density and the likelihood function.

Another ensemble that is of potential interest is the Fermi distribution

$g_{\beta,\epsilon}(E) = \frac{1}{1 + e^{\beta(E - \epsilon)}}$   (4)

which has two control parameters: the inverse temperature $\beta$ and an energy cutoff $\epsilon$. In the zero-temperature limit $\beta \to \infty$, the Fermi ensemble approaches a step function, i.e. configurations with energies greater than $\epsilon$ are assigned zero probability:

$g_\epsilon(E) = \Theta(\epsilon - E)$   (5)

where $\Theta$ is the Heaviside step function. This ensemble is used in the nested sampling method for Bayesian computation Skilling (2006) and is also related to the microcanonical ensemble Ray (1991); Martin-Mayor (2007):

$g_{E_{\rm tot}}(E) = \left(E_{\rm tot} - E\right)^{d/2 - 1}\,\Theta(E_{\rm tot} - E)$   (6)

where $d$ is the dimension of configuration space (number of configurational degrees of freedom) and $E_{\rm tot}$ the total energy of the system (potential plus kinetic energy).

Note that the target ensemble $p^\ast$ does not necessarily need to be a member of the bridging family, i.e. there might be no $k$ such that $p_k = p^\ast$, which is the case, for example, in nested sampling and the microcanonical ensemble.

The density of states (DOS) over the prior or reference distribution $\pi$ is defined as

$g(E) = \int \delta(E - E(x))\,\pi(x)\,dx$   (7)

with $\delta$ denoting the delta function. With the help of the DOS it is straightforward to compute how the energies are distributed in the intermediate ensembles:

$p_k(E) = \frac{1}{Z_k}\,g(E)\,g_k(E), \qquad Z_k = \int g(E)\,g_k(E)\,dE$   (8)

where $p_k(E)$ is a one-dimensional distribution.

ii.2 Relative entropy

When choosing the intermediate distributions that bridge between the initial and final ensemble, it is essential to control the “distance” or overlap between successive ensembles $p_k$ and $p_{k+1}$. We use the Kullback-Leibler (KL) divergence Kullback and Leibler (1951) or relative entropy

$D[p \,\|\, q] = \int p(x)\,\log\frac{p(x)}{q(x)}\,dx$   (9)

for this purpose. The relative entropy satisfies the Gibbs inequality $D[p\,\|\,q] \ge 0$ with equality only if $p$ and $q$ are identical. Therefore the Kullback-Leibler divergence qualifies as an “entropic distance” between ensembles $p$ and $q$. However, in contrast to a true distance, the KL divergence is not symmetric under interchange of $p$ and $q$. It is only well-defined if $q$ is “broader” than $p$, i.e. if the support of $p$ is contained in the support of $q$, and it is therefore a directed divergence. In information theory, the KL divergence is used to quantify information gain.

Let us now consider the relative entropy between two members $p_k$ and $p_l$ of the family of bridging distributions [Eq. (1)]. With the help of the DOS we can reduce the high-dimensional configurational integral [Eq. (9)] to a one-dimensional integral over the energies:

$D[p_k \,\|\, p_l] = \int p_k(E)\,\log\frac{g_k(E)}{g_l(E)}\,dE + \log Z_l - \log Z_k = \left\langle \log\frac{g_k(E)}{g_l(E)} \right\rangle_k + \log Z_l - \log Z_k$   (10)

where $\langle\,\cdot\,\rangle_k$ denotes an average over the $k$-th ensemble $p_k$. For the canonical ensemble [Eq. (2)] the relative entropy reduces to

$D[p_{\beta_k} \,\|\, p_{\beta_l}] = (\beta_l - \beta_k)\,\langle E \rangle_{\beta_k} + \beta_k F(\beta_k) - \beta_l F(\beta_l)$   (11)

where $F(\beta) = -\beta^{-1}\log Z(\beta)$ is the free energy at inverse temperature $\beta$.

Throughout this article, we will use the relative entropy to measure the distance between ensembles $p_k$ and $p_{k+1}$. Other measures that quantify the overlap between different ensembles might also be useful. For example, for canonical ensembles we could use the exchange rate of a parallel tempering simulation

$\left\langle \min\left\{1,\; e^{(\beta_k - \beta_l)(E_k - E_l)}\right\} \right\rangle_{p_k p_l}$   (12)

as a measure to compare ensembles. The Jensen-Shannon divergence Lin (1991), a symmetrized version of the relative entropy, has been used in thermodynamic control Crooks (2007). The Hellinger distance Gelman and Meng (1998) is a widespread distance measure used mainly in statistics and may also provide a useful measure for comparing ensembles. In this article, however, we have not explored measures for comparing ensembles other than the relative entropy.

Given a continuous bridging family, the optimal annealing protocol would involve infinitely many steps (adiabatic annealing). We want to reach the target ensemble in finitely many steps but produce intermediate ensembles that have a fixed and finite relative entropy $\bar D$. We will later see that for small $\bar D$ this amounts to cooling at a constant thermodynamic speed. As we move from ensemble $p_k$ to the next ensemble $p_{k+1}$, we need to evaluate their relative entropy $D[p_{k+1} \,\|\, p_k]$. Equation (10) shows that this involves the computation of ensemble averages as well as the estimation of free energy differences. These are challenging computational problems, which can be solved by the methods outlined in the next subsection.

ii.3 Estimation of the relative entropy

Because the relative entropy [Eq. (10)] involves both the normalization constants $Z_k$ and an ensemble average, it is computationally challenging to evaluate accurately. However, if we know the density of states $g(E)$, the configurational integrals can be reduced to low-dimensional integrals. Therefore, ensemble annealing estimates $g(E)$ during the course of the simulation, similar to the Wang-Landau method Wang and Landau (2001) or nested sampling Skilling (2006). The estimation of the DOS relies on histogram methods Ferrenberg and Swendsen (1989); Habeck (2012a).

If we work with $N$ non-interacting walkers, the configurations generated at the $k$-th iteration are denoted by $x_{kn}$ (i.e. the first index indicates the ensemble, whereas the second index enumerates the walkers). At each ensemble annealing iteration $k$, a non-parametric estimate of the DOS

$\hat g^{(k)}(E) = \sum_{l \le k}\,\sum_{n=1}^{N} w^{(k)}_{ln}\,\delta(E - E_{ln})$   (13)

is updated, where $E_{ln} = E(x_{ln})$ are the energies of the visited configurations. The discrete DOS assigns a weight $w^{(k)}_{ln}$ to every configuration that has been generated by the walkers during the entire simulation up to the current ensemble $p_k$. That is, the vector of all weights expands in each iteration and is constantly updated (which is indicated by the superscript).

With the help of the estimated DOS it is straightforward to compute the relative entropy between two ensembles $p_k$ and $p_l$:

$\hat D[p_k \,\|\, p_l] = \sum_i \hat p_k(E_i)\,\log\frac{g_k(E_i)}{g_l(E_i)} + \log \hat Z_l - \log \hat Z_k, \qquad \hat p_k(E_i) = w_i\,g_k(E_i)/\hat Z_k$   (14)

where the index $i$ runs over all visited states and

$\hat Z_k = \sum_i w_i\,g_k(E_i).$   (15)

These relations are used in histogram methods for estimating free energy differences Habeck (2012b, a). The weights are obtained using the histogram iterations

$w_i \leftarrow \Bigl[\,\sum_l N_l\,g_l(E_i)\,/\,\hat Z_l\Bigr]^{-1}$   (16)

in which each update of the weights is followed by their normalization and a re-evaluation of the partition functions according to Eq. (15); $N_l$ is the number of states generated in the $l$-th ensemble ($N_l = N$ in our case). We start the iteration from the previous DOS estimate (setting the weights of the new states to zero), which speeds up the convergence of the histogram iterations.
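In code, the histogram iterations (15) and (16) take a compact form. The following is a minimal sketch of one possible implementation, written by us for canonical bridging ensembles $g_k(E) = e^{-\beta_k E}$ with log-space stabilization; the function and variable names are ours and not from the paper.

```python
import numpy as np

def histogram_iterations(energies, betas, n_walkers, w0=None, n_iter=10000, tol=1e-12):
    """Non-parametric histogram reweighting, Eqs. (15)-(16), for canonical
    bridging ensembles g_k(E) = exp(-beta_k * E).

    energies : array of all visited energies E_i
    betas    : array of inverse temperatures beta_k simulated so far
    n_walkers: number of walkers N generated in each ensemble
    w0       : optional initial weights (e.g. the previous DOS estimate)
    """
    # g[k, i] = g_k(E_i); a common constant is subtracted for stability,
    # which leaves Eqs. (15) and (16) invariant
    log_g = -np.outer(betas, energies)
    log_g -= log_g.max()
    g = np.exp(log_g)

    w = np.full(len(energies), 1.0 / len(energies)) if w0 is None else w0.copy()
    for _ in range(n_iter):
        Z = g @ w                                              # Eq. (15)
        w_new = 1.0 / (n_walkers * (g / Z[:, None]).sum(axis=0))  # Eq. (16)
        w_new /= w_new.sum()                                   # normalize the weights
        if np.abs(w_new - w).max() < tol:
            w = w_new
            break
        w = w_new
    return w, g @ w    # weights and partition functions (up to a common factor)
```

Warm-starting with `w0` set to the previous weights (zero-padded for the new states) corresponds to the initialization strategy described above.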

ii.4 Initialization and equilibration of ensembles

The estimated DOS serves two purposes: First, to estimate the relative entropy between two ensembles reliably; second, to initialize the walkers to sample the next ensemble by recycling configurations that have been generated previously, which are then equilibrated in the new ensemble. In the $k$-th ensemble annealing iteration, ensemble $p_{k+1}$ is approximated by

$\hat p_{k+1}(x) = \sum_i \frac{w_i\,g_{k+1}(E_i)}{\hat Z_{k+1}}\,\delta(x - x_i)$   (17)

where the sum runs over all visited configurations $x_i$ with energies $E_i$. We use this approximation to generate initial states for the walkers by the following scheme: First, we draw an energy level $E_i$ with probability $w_i\,g_{k+1}(E_i)/\hat Z_{k+1}$. Second, we randomly pick one among all configurations that map to the energy drawn in the first step. In continuous systems, it is very unlikely that two configurations were generated that have exactly matching energies. However, in discrete systems such as the two-dimensional Ising model there are only finitely many energy levels. In this case, we can speed up the DOS estimation [Eqs. (15) and (16)] by working with histograms as explained in Habeck (2012a). Due to the limitations of the approximation (17), the recycled states need to be equilibrated in the correct ensemble [Eq. (1)] using Monte Carlo or molecular dynamics simulations.

Figure 1: (Color online) Ensemble annealing of a one-dimensional particle in a Schwefel potential (shown on the right) using 100 walkers. Every stripe marked by a dashed boundary shows the configurations of the walkers after the equilibration step. A random number has been added to the x-coordinates (iteration index) for better visualization. All particles within each stripe experience the same temperature during equilibration.

ii.5 Algorithm

We now have all tools at hand to formulate the ensemble annealing algorithm. Ensemble annealing is an adaptive sequential Monte Carlo algorithm. The main parameters are the number of walkers $N$ and the relative entropy $\bar D$ between successive ensembles $p_k$ and $p_{k+1}$. Choosing ensembles with a constant relative entropy ensures that the annealing process proceeds at a constant thermodynamic speed. Iteration $k$ comprises the following steps:

  1. Initialization: Using the current estimate of the DOS [Eq. (13)], the particles are initialized by drawing energies $E_{kn}$ from $\hat p_k(E)$ [Eqs. (8) and (13)] and finding the corresponding configurations $y_{kn}$ by a simple lookup in the energy table such that $E(y_{kn}) = E_{kn}$. Because $\hat p_k$ is only an approximation, the initial states will not be equilibrated.

  2. Equilibration: The states are equilibrated in the new ensemble $p_k$ by running Monte Carlo or molecular dynamics simulations starting from $y_{kn}$ and producing new states $x_{kn}$. The new configurations and energies are added to the pool of all states visited so far.

  3. DOS estimation: A new estimate of the DOS, $\hat g^{(k)}$, is computed from all energies and ensemble parameters using non-parametric histogram reweighting Habeck (2012a). To speed up the convergence, the previous DOS estimate is used to initialize the iterations.

  4. Annealing: The next ensemble $p_{k+1}$ is adjusted such that it has the desired relative entropy with respect to the current ensemble $p_k$, i.e. its control parameter $\lambda_{k+1}$ satisfies $\hat D[p_{k+1} \,\|\, p_k] = \bar D$.

The algorithm has only a few parameters, namely the initial and final ensemble, the number of walkers $N$ and the target relative entropy $\bar D$ between successive ensembles. Evidently, $\bar D$ determines the cooling or compression rate. For smaller $\bar D$ the overlap between successive ensembles is larger and the annealing progresses more slowly. If we allow $\bar D$ to be large, we anneal faster but risk failing to equilibrate.
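To make steps 3 and 4 concrete, the following sketch outlines them for canonical bridging ensembles. It is our own minimal illustration, not the reference implementation: `relative_entropy` evaluates Eqs. (11) and (14) on the discrete DOS, `next_beta` solves the annealing step by bisection, and the surrounding loop is indicated in comments (`resample` and `equilibrate` stand for steps 1 and 2 and are assumed user-supplied routines; `histogram_iterations` is the sketch from the previous subsection).

```python
import numpy as np

def relative_entropy(beta_new, beta_old, energies, w):
    """Estimated D[p_new || p_old] between canonical ensembles, computed
    from the discrete DOS (energies E_i, weights w_i); cf. Eqs. (11), (14)."""
    def log_Z(b):
        a = -b * energies
        m = a.max()
        return m + np.log(np.sum(w * np.exp(a - m)))
    # mean energy under the new (colder) ensemble
    p_new = w * np.exp(-beta_new * energies - log_Z(beta_new))
    E_mean = np.sum(p_new * energies)
    return (beta_old - beta_new) * E_mean + log_Z(beta_old) - log_Z(beta_new)

def next_beta(beta, energies, w, D_target, beta_final, n_bisect=100):
    """Annealing step (step 4): bisect for D[p_new || p_old] = D_target."""
    if relative_entropy(beta_final, beta, energies, w) <= D_target:
        return beta_final                   # destination ensemble reached
    lo, hi = beta, beta_final
    for _ in range(n_bisect):
        mid = 0.5 * (lo + hi)
        if relative_entropy(mid, beta, energies, w) < D_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Schematic main loop (steps 1-4):
#
# while beta < beta_final:
#     beta = next_beta(beta, energies, w, D_target, beta_final)
#     betas.append(beta)
#     x = resample(x, energies, w, beta)               # step 1, Eq. (17)
#     x, E_new = equilibrate(x, beta)                  # step 2
#     energies = np.concatenate([energies, E_new])
#     w, Z = histogram_iterations(energies, np.array(betas), N)  # step 3
```

The bisection exploits that the relative entropy grows monotonically as the trial inverse temperature moves away from the current one.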

ii.6 Application to the harmonic oscillator

Let us illustrate ensemble annealing for a simple system, the one-dimensional harmonic oscillator with energy $E(x) = \frac{1}{2}x^2$ and ground state $x = 0$ in the canonical ensemble:

$p_\beta(x) = \sqrt{\beta/2\pi}\;e^{-\beta x^2/2}.$

The distance between two ensembles at inverse temperatures $\beta_k$ and $\beta_{k+1}$, $D[p_{\beta_{k+1}} \,\|\, p_{\beta_k}]$, is:

$D[p_{\beta_{k+1}} \,\|\, p_{\beta_k}] = \frac{1}{2}\left(\frac{\beta_k}{\beta_{k+1}} - 1 + \log\frac{\beta_{k+1}}{\beta_k}\right).$   (18)

In this case the KL divergence depends only on the ratio between two successive temperatures. The constant relative entropy criterion yields a constant cooling rate $r = \beta_{k+1}/\beta_k$ which is determined by

$\frac{1}{2}\left(\frac{1}{r} - 1 + \log r\right) = \bar D.$   (19)

This results in the geometric schedule $\beta_k = r^k\,\beta_0$. For $\bar D \to 0$ we reach the adiabatic limit of infinitely slow cooling since $r \to 1$. Geometric schedules have been proposed for optimal simulated annealing Kirkpatrick et al. (1983).
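As a quick numerical check (our illustration; the values of $\bar D$ and $\beta_0$ below are arbitrary examples, not from the paper), the cooling ratio $r$ can be obtained from Eq. (19) by root finding:

```python
import numpy as np
from scipy.optimize import brentq

# Solve Eq. (19), 0.5 * (1/r - 1 + log r) = D, for the constant cooling
# ratio r = beta_{k+1} / beta_k of the one-dimensional harmonic oscillator.
def cooling_ratio(D):
    return brentq(lambda r: 0.5 * (1.0 / r - 1.0 + np.log(r)) - D, 1.0 + 1e-9, 1e9)

r = cooling_ratio(0.1)               # illustrative target relative entropy
betas = 1e-3 * r ** np.arange(20)    # geometric schedule beta_k = r**k * beta_0
```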

Alternatively, we could consider the ground state a control parameter, $E_\mu(x) = \frac{1}{2}(x - \mu)^2$, and let $\mu$ vary at fixed inverse temperature $\beta$, with initial value $\mu_0$ and target value $\mu^\ast$. The relative entropy is now, according to Eq. (11):

$D[p_{\mu_{k+1}} \,\|\, p_{\mu_k}] = \frac{\beta}{2}\,(\mu_{k+1} - \mu_k)^2.$

Constant steps in the relative entropy lead to a linear schedule: $\mu_{k+1} = \mu_k \pm \sqrt{2\bar D/\beta}$. That is, the ground state is shifted either in the positive or the negative direction depending on the targeted ground state.

These examples highlight that it is not sufficient to prescribe the relative entropy to choose the next ensemble. We must also impose some sense of directionality in order to shift the ensemble closer to the target ensemble. This will become particularly important in multi-dimensional annealing.

III Canonical ensemble

We will now apply annealing of the canonical ensemble to various systems: discrete models such as the Ising and Potts models, and a continuous protein model.

iii.1 One-dimensional example

To illustrate ensemble annealing, we first apply it to a system with a one-dimensional configuration space and a rugged energy function $E(x)$, the one-dimensional Schwefel function. We deliberately choose a large number of walkers for illustrative purposes ($N = 100$); a smaller number of walkers would be sufficient in this one-dimensional example. At every iteration, equilibration is achieved by using a random-walk Metropolis Monte Carlo scheme Metropolis et al. (1953) consisting of 10 random steps drawn from a uniform distribution. The relative entropy between successive ensembles is fixed at the target value $\bar D$. Figure 1 shows the configurations at the various temperatures obtained by the constant relative entropy criterion. We start at a high temperature and anneal to the destination inverse temperature. As ensemble annealing progresses, the walkers become more and more localized in the dominant modes of the target ensemble. The relative proportions of the modes are reproduced accurately.
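A minimal sketch of the equilibration step in this experiment is shown below (our illustration; the paper does not spell out the exact Schwefel variant, proposal width, or domain, so the standard benchmark form and plausible values are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

def schwefel(x):
    # Standard one-dimensional Schwefel benchmark (assumed form).
    return 418.9829 - x * np.sin(np.sqrt(np.abs(x)))

def equilibrate(x, E, beta, n_steps=10, step=50.0, lo=-500.0, hi=500.0):
    # Random-walk Metropolis with uniform proposals, as described in the
    # text; step size and domain are illustrative assumptions.
    for _ in range(n_steps):
        y = np.clip(x + rng.uniform(-step, step, size=x.shape), lo, hi)
        E_new = schwefel(y)
        accept = rng.random(x.shape) < np.exp(np.minimum(0.0, -beta * (E_new - E)))
        x = np.where(accept, y, x)
        E = np.where(accept, E_new, E)
    return x, E

walkers = rng.uniform(-500, 500, size=100)   # N = 100 walkers, as in Fig. 1
energies = schwefel(walkers)
walkers, energies = equilibrate(walkers, energies, beta=0.01)
```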

This example also suggests that it should be possible to prune the number of walkers during the annealing process. In the course of annealing, the ensemble becomes more and more concentrated (as monitored by a decrease in the entropy $S$), and fewer walkers are needed to explore and represent it. Using the Boltzmann relation $S = \log W$, where $W$ is the number of accessible microstates, we could decrease the number of walkers in each iteration and thereby save computational resources. However, we have not explored this strategy further in this article.

iii.2 Ising and Potts model

We now apply ensemble annealing to the two-dimensional Ising and Potts models. Figure 2 shows simulation results for the two-dimensional lattice; $N$ walkers were used and the relative entropy between successive ensembles was fixed at the target value $\bar D$. The equilibration step consisted of Metropolis Monte Carlo runs that randomly select lattice sites and try to flip the spin (Ising model) or draw a random color (Potts model). Figure 2(a) shows how the algorithm improves the initial DOS estimate. The algorithm starts with random spin configurations from which the initial DOS, covering only a limited energy range, is derived. As the algorithm proceeds, lower-energy states are generated and the DOS expands into the lower energy region. This process continues until the full energy range has been explored and a highly accurate estimate of the DOS is produced. The accuracy of the estimated DOS is illustrated in Fig. 2(b). The estimation error is as small as that achieved with WL sampling Wang and Landau (2001). A similar accuracy is also obtained for the ten-state Potts model, which undergoes a first-order phase transition. The DOS tends to be more accurate for the low energy states. In most situations this is desirable because one is primarily interested in the thermodynamic properties of the system at finite temperatures, at which the low energy states contribute most strongly.

(a)    Ensemble annealing of the Ising model

(b)    DOS of the Ising and Potts model
Figure 2: (Color online) (a) Improvement of the estimated DOS during ensemble annealing. Four equally spaced stages of ensemble annealing were picked; the iteration index is indicated in the legend. The true DOS Beale (1996) is shown as a thick black curve; the estimated DOS as a yellow [gray] area. (b) Estimated DOS for the Ising (left) and the ten-state Potts model (right). Again the black line shows the true DOS and the yellow [gray] line the DOS produced during ensemble annealing. The insets show the relative error in the microcanonical entropy $s(E) = \log g(E)$.

Figure 3: (Color online) Ensemble annealing of the Gō model. Left panel: Density of states obtained with ensemble annealing (yellow [gray] line) and a parallel tempering simulation (thick black line). Right panel: Fraction of native contacts as a function of the inverse temperature (black area). The heat capacity is indicated by the yellow [gray] line and has been scaled to match the ordinate range. It peaks at the folding temperature. The ribbon diagrams show configurations in the unfolded state (black ribbon on the left) and in the folded state (white ribbon on the right).

iii.3 Protein model

Ensemble annealing readily applies to continuous systems such as Gō models, which have been used extensively to study protein folding (see e.g. Whitford et al. (2009)). In our version of the Gō model, the dihedral angles are the only conformational degrees of freedom; bond lengths and angles are fixed to ideal values. The energy function comprises a generic non-bonded potential and the Gō term. The non-bonded energy penalizes atom clashes using the same quartic repulsion term as in Ref. Habeck et al. (2005). The Gō term enforces the native structure by imposing a Lennard-Jones potential on the C$\alpha$ distances between residues in contact in the native state. The inverse temperature $\beta$ serves as the control parameter in ensemble annealing runs. As in Ref. Habeck et al. (2005), we used hybrid Monte Carlo Duane et al. (1987) for equilibration. We seeded the simulation with 100 random structures and annealed the ensemble at a fixed target relative entropy $\bar D$. For reference, we also ran a parallel tempering simulation of the Gō model using 37 temperatures. The DOS obtained with ensemble annealing was used to optimize the temperatures to produce an exchange rate of 48% on average. 10000 replica transitions were simulated and an estimate of the DOS was obtained by running histogram reweighting.

We studied the Gō model derived for a small protein domain, the 59 amino-acid Fyn-SH3 domain (PDB code 1SHF). Figure 3 (left panel) shows the density of states obtained with ensemble annealing and compares it to the reference computed with an exhaustive parallel tempering simulation. The agreement is very high over the entire energy range. The right panel of Fig. 3 studies the characteristics of the Gō model as revealed by ensemble annealing. Shown is the average fraction of native contacts as a function of the inverse temperature. The folding transition is marked by a sudden increase in the number of native contacts. The peak of the heat capacity indicates the folding temperature.

(a)    Energy histograms

(b)    Temperature schedule and PT swap rates
Figure 4: (Color online) Ensemble annealing of the ten-state Potts model. (a) Energy histograms at the temperatures found by ensemble annealing. Only every twentieth histogram is shown for clarity. The temperature is indicated by the color. (b) Left: Temperature schedule. The thin yellow [gray] line indicates the schedule found by ensemble annealing. The thick black line corresponds to the protocol based on minimum entropy production Salamon et al. (1988). The dashed gray line indicates the critical temperature. Right: PT swap rates obtained with the temperature schedule. The heat capacity is shown as a yellow [gray] curve.

IV Schedules and paths

We will now take a closer look at the schedules constructed by ensemble annealing and compare them to other schedules that have been proposed in the literature. Moreover, we discuss the possibility of using ensemble annealing as a numerical method to construct near-optimal thermodynamic paths.

iv.1 Temperature schedule

Figure 4 shows the energy histograms and temperature schedule found by ensemble annealing for the ten-state Potts model. By construction of the schedule, the energy histograms of successive ensembles have a constant overlap (Fig. 4(a)). The temperature schedule is non-trivial and deviates from the linear, geometric, and logarithmic schedules that have been proposed in the literature Kirkpatrick et al. (1983); Gelman and Meng (1998). Initially, the inverse temperatures grow sublinearly. In this phase, the schedule constructed by ensemble annealing is reminiscent of the logarithmic schedule proposed by Geman and Geman Geman and Geman (1984), i.e. $\beta_k \propto \log k$. As ensemble annealing approaches the critical temperature, the cooling rate is slowed down automatically such that the system is not quenched and avoids being trapped in a metastable state. Beyond the critical point, the inverse temperatures increase super-exponentially (Fig. 4(b)).

Salamon and co-workers proposed an adaptive version of simulated annealing more than two decades ago Salamon et al. (1988); Ruppeiner et al. (1991). Their algorithm finds the temperature schedule by minimizing the entropy production, whereupon the temperature changes inversely proportional to the square root of the heat capacity. This rule follows directly from the constant relative entropy criterion. For small changes in inverse temperature, we have

$D[p_{\beta + \Delta\beta} \,\|\, p_\beta] \approx \frac{1}{2}\,\sigma^2(\beta)\,\Delta\beta^2$   (20)

where

$\sigma^2(\beta) = \langle E^2 \rangle_\beta - \langle E \rangle_\beta^2$

is proportional to the heat capacity $C(\beta) = \beta^2\sigma^2(\beta)$. If the desired relative entropy $\bar D$ is small, the increment in inverse temperature is

$\Delta\beta \approx \sqrt{2\bar D}\,/\,\sigma(\beta).$

Integration over the inverse temperature increments

$\beta_{k+1} = \beta_k + \sqrt{2\bar D}\,/\,\sigma(\beta_k)$   (21)

generates a schedule that is very close to the one found by ensemble annealing at finite $\bar D$ (Fig. 4(b)). Comparison with the schedule derived by Salamon et al. shows that $\sqrt{2\bar D}$ is proportional to the thermodynamic speed of the annealing process. In the context of Bayesian computation, similar, but independent arguments have been put forward by Skilling Skilling (2006), who uses the number of walkers $N$ to control the rate of compression as the system moves from the prior to the posterior probability.
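As a numerical check of Eq. (21), the increment rule can be integrated with the energy variance computed from a DOS estimate. The sketch below is our own illustration and assumes the weights `w` produced by the histogram iterations of Sec. II.C:

```python
import numpy as np

def energy_std(beta, energies, w):
    """Standard deviation sigma(beta) of the energy in the canonical
    ensemble, computed from the discrete DOS (energies E_i, weights w_i)."""
    logp = -beta * energies + np.log(w)
    logp -= logp.max()
    p = np.exp(logp)
    p /= p.sum()
    mean = np.sum(p * energies)
    return np.sqrt(np.sum(p * (energies - mean) ** 2))

def thermodynamic_speed_schedule(beta0, beta_final, D, energies, w):
    """Integrate Eq. (21): beta_{k+1} = beta_k + sqrt(2 D) / sigma(beta_k)."""
    betas = [beta0]
    while betas[-1] < beta_final:
        betas.append(betas[-1] + np.sqrt(2.0 * D) / energy_std(betas[-1], energies, w))
    return np.array(betas)
```

Near a phase transition the variance peaks, so the generated increments shrink automatically, reproducing the slowdown seen in Fig. 4(b).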

From a practical point of view, an ensemble annealing run can be used to seed a parallel tempering simulation with a well-balanced schedule and equilibrated initial states. The right panel in Fig. 4(b) illustrates that the exchange rates are indeed uniform for a PT simulation using every fifth temperature of the ensemble annealing schedule. A drop in the swap rate is only observed close to the critical temperature where the heat capacity peaks.

iv.2 Minimal dissipation paths

Let us now see if the results of the previous section generalize to multiple temperatures. Although ensemble annealing can be applied to any family of bridging distributions [Eq. (1)], let us focus on parametric families of the form

$p_\lambda(x) = \frac{1}{Z(\lambda)}\,\pi(x)\,g_\lambda(E(x))$

where $\lambda$ denotes the vector of all control parameters. The second order expansion of the relative entropy is Crooks (2007):

$D[p_{\lambda + \Delta\lambda} \,\|\, p_\lambda] \approx \frac{1}{2}\,\Delta\lambda^{T} I(\lambda)\,\Delta\lambda$   (22)

where the zeroth and first order terms vanish because $D[p_\lambda \,\|\, p_\lambda] = 0$ and $\Delta\lambda = 0$ is the global minimum of the relative entropy viewed as a function of $\Delta\lambda$. Because the Fisher information matrix

$I_{ij}(\lambda) = \left\langle \frac{\partial \log p_\lambda(x)}{\partial \lambda_i}\,\frac{\partial \log p_\lambda(x)}{\partial \lambda_j} \right\rangle_\lambda$   (23)

is positive definite, it defines a metric on the space of distributions parameterized by $\lambda$. Equation (20) is a special case of the general relation (22) for the Boltzmann ensemble with a single temperature, where the Fisher information is simply the energy variance $\sigma^2(\beta)$.

In statistics, the Fisher metric has been studied since the beginnings of information geometry. The Fisher information can also be used to define a thermodynamic length and action (see Crooks (2007) and references therein). Quasistatic processes that switch between two thermodynamic states follow minimal-dissipation paths in $\lambda$ space. These can be computed by minimizing the thermodynamic length (see, for example, Diósi et al. (1996); Crooks (2007); Sivak and Crooks (2012)). Therefore, the optimal path is a geodesic on the Riemannian manifold equipped with the Fisher information metric. Very similar results have been presented by Gelman and Meng in their work on bridge and path sampling Gelman and Meng (1998).

By taking constant but finite steps in relative entropy followed by an equilibration, ensemble annealing approximates a quasistatic process. After $K$ successful equilibrations, the relative entropy accumulated during ensemble annealing is

$\sum_{k=0}^{K-1} D[p_{k+1} \,\|\, p_k] = K\,\bar D$   (24)

and approximates the thermodynamic action due to Eq. (22). If we aim to optimize the use of computing resources, we have to minimize $K$, the number of bridging distributions. For a single control parameter this is straightforward: we have to follow the geodesic towards the destination ensemble. In the canonical ensemble, for example, if the destination temperature is lower than the initial temperature (annealing), we have to increase $\beta$ monotonically, such that $\beta_k < \beta_{k+1}$ also holds for all intermediate temperatures. For ensembles with multiple control parameters the situation is more complicated because minimizing the accumulated relative entropy [Eq. (24)] requires the computation of a discrete geodesic. However, the DOS is generally unknown, and we can compute relative entropies only in the vicinity of the current state. It is not possible to reliably evaluate the length of an entire path connecting the initial and the destination ensemble. We can only search locally, without any guarantee that the generated path is close to the geodesic.

V Non-Boltzmann ensembles

Figure 5: (Color online) Annealing of the Ising model in the Tsallis ensemble. (a) Probing different values of the minimum energy $E_{\min}$ at fixed $\lambda$ and $\beta$. The thick black line shows the microcanonical temperature $s'(E)$. The dashed lines show the right-hand side of Eq. (25) for different $E_{\min}$. The curve produced with $E_{\min}$ set to the ground state energy intersects $s'(E)$ twice, corresponding to two peaks in the energy distribution $p_\lambda(E)$. (b) Comparison of ensemble annealing of the Tsallis ensemble with $E_{\min}$ set to the ground state energy (dashed red line) and to a lower value (yellow solid line). The true DOS is shown as a thick black line.

v.1 Tsallis ensemble

A major advantage of using the Boltzmann distribution (2) as the bridging family is that many powerful methods to simulate the canonical ensemble exist. We can use these algorithms in the equilibration step. But it can be beneficial to consider other ensembles as well, because they might bridge more efficiently between the initial and final ensemble. The Tsallis ensemble has been used previously in combination with parallel tempering Hansmann and Okamoto (1997); Habeck et al. (2005). The motivation for this choice is that, due to the heavier tails of the Tsallis ensemble, replicas have a larger overlap and can exchange states even if they show large energy differences. As a consequence, the number of intermediate replicas should be smaller than with the Boltzmann ensemble.

This is indeed confirmed by an analysis of the Ising model. Test calculations based on the correct DOS show that the canonical ensemble requires 273 inverse temperatures to reach the destination ensemble at the chosen relative entropy, whereas the Tsallis ensemble needs only 85 values of the control parameter $\lambda$ to bridge between an initial value corresponding to a very high canonical temperature and the destination ensemble. However, in practice this apparent advantage does not hold up. The reason is that the Tsallis ensemble typically yields multimodal energy distributions at intermediate $\lambda$. To see this, let us first consider the more general case where a parametric bridging family $g_\lambda$ is used. According to Eq. (8) the energy distribution at $\lambda$ is proportional to $g(E)\,g_\lambda(E)$ and peaks at energies $E^\ast$ solving:

$s'(E^\ast) = -\left.\frac{d}{dE}\,\log g_\lambda(E)\right|_{E = E^\ast}$

where $s(E) = \log g(E)$ and $s'(E)$ are the microcanonical entropy and inverse temperature. In case of the canonical ensemble, this equation is simply $s'(E^\ast) = \beta$, that is, the energy distribution peaks at the energy whose microcanonical temperature matches the canonical temperature. In case of the Tsallis ensemble (3), we have:

$s'(E^\ast) = \frac{\beta}{1 + (\lambda - 1)\,\beta\,(E^\ast - E_{\min})}.$   (25)

This equation can have multiple solutions depending on $\lambda$ and $E_{\min}$, which is why it is difficult to get annealing of the Tsallis ensemble running in a stable fashion. Not only the control parameter $\lambda$, but also the minimum energy $E_{\min}$ plays a critical role (see Fig. 5). If $E_{\min}$ is set exactly to the energy of the ground state, the energy distribution of the Ising model becomes bimodal with a sharp peak around the ground state energy and a second peak corresponding to high temperatures. That is, in order to generate samples from this ensemble we have to simulate two phases simultaneously. As a consequence, the DOS estimate produced by ensemble annealing shows systematic errors (Fig. 5(b)), despite producing an efficient schedule with 103 bridging distributions. If we lower $E_{\min}$, the phase separation is less dramatic and consequently the DOS estimate is as accurate as with the Boltzmann ensemble. But we also lose the efficiency of the Tsallis ensemble in bridging large energy differences, which is reflected in the larger number of $\lambda$ values: 230 for the lower $E_{\min}$, which is similar to the 270 temperatures produced by Boltzmann annealing. This shows that $E_{\min}$ is an additional algorithmic parameter which is delicate to choose.
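The peak condition (25) is easy to probe numerically. The sketch below (our own illustration) evaluates the Tsallis bridging factor of Eq. (3) and the right-hand side of Eq. (25); plotting `tsallis_rhs` against the microcanonical temperature $s'(E)$ of the model reveals whether multiple intersections, and hence a multimodal energy distribution, occur.

```python
import numpy as np

def tsallis_log_g(E, beta, lam, E_min):
    """Logarithm of the Tsallis bridging factor, Eq. (3), for lam > 1."""
    return -np.log1p((lam - 1.0) * beta * (E - E_min)) / (lam - 1.0)

def tsallis_rhs(E, beta, lam, E_min):
    """Right-hand side of the peak condition, Eq. (25): the effective
    inverse temperature that the Tsallis ensemble imposes at energy E."""
    return beta / (1.0 + (lam - 1.0) * beta * (E - E_min))

# Peaks of the energy distribution solve s'(E) = tsallis_rhs(E); multiple
# intersections of the two curves signal a multimodal p_lambda(E).
```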

v.2 Microcanonical ensemble and nested sampling

Nested sampling was invented by Skilling Skilling (2006) to solve Bayesian inference problems. Bayesian inference demands that we draw from a posterior distribution and compute its normalization constant, which are essentially the tasks that ensemble annealing addresses. In Bayesian inference, $\pi(x)$ is the prior and $e^{-E(x)}$ the likelihood function; the destination ensemble that we aim to characterize is the posterior distribution over some inference parameter(s) $x$.

Nested sampling is based on the microcanonical ensemble [Eq. (5)]; the control parameter is the maximum energy $\epsilon$ that the system is allowed to reach Habeck (2015). Therefore nested sampling can be viewed as a special case of ensemble annealing based on a zero-temperature Fermi or microcanonical ensemble. The relative entropy between two ensembles [Eq. (10)] with energy levels $\epsilon' < \epsilon$ simplifies to:

$D[p_{\epsilon'} \,\|\, p_\epsilon] = \log\frac{X(\epsilon)}{X(\epsilon')}$   (26)

where the normalization constant

$X(\epsilon) = \int_{-\infty}^{\epsilon} g(E)\,dE$   (27)

is the cumulative DOS or configuration space volume. From a Bayesian point of view, $X(\epsilon)$ is the prior mass enclosed by the likelihood contour $e^{-\epsilon}$. The control parameter $\epsilon$ is reduced from infinity to the energy of the ground state.

There are several differences between nested sampling and annealing of the ensemble (5) using $\epsilon$ as the control parameter. These differences result from the fact that all of the features that ensemble annealing aims to implement explicitly are built into nested sampling. In fact, nested sampling's design principles served as a guide in developing the ensemble annealing algorithm.

Ensemble annealing uses histogram methods to estimate the DOS, whereas nested sampling utilizes order statistics due to the special form of the truncated ensemble (5). As a consequence of the truncation, $X(E)$ will be uniformly distributed over $[0, X(\epsilon)]$ (defined for $E \le \epsilon$), which is clear from Eq. (8) (see footnote 2). Therefore the relative configuration space volume associated with the maximum energy state, $t = X(E_{\max})/X(\epsilon)$, follows the distribution $\Pr(t) = N t^{N-1}$, where $E_{\max}$ is the maximum energy among all $N$ walkers. Based on this result from order statistics, nested sampling estimates $X$ at well-dispersed energy cutoffs.

Another elegant feature of nested sampling is that if we choose $\bar D = 1/N$, the next ensemble achieving a compression of $e^{-1/N}$ is the one in which the energy is bounded by $E_{\max}$. This results from the fact that $\langle \log t \rangle = -1/N$, where the average is over the Beta distribution $\Pr(t) = N t^{N-1}$. Therefore the search for the next control parameter will simply yield $\epsilon_{k+1} = E_{\max}$, and we only have to resample the state with the highest energy.
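A minimal sketch of this step is given below (our own illustration; `sample_below` is an assumed user-supplied sampler that draws from $\pi(x)$ restricted to the current energy contour):

```python
import numpy as np

def nested_step(x, energies, sample_below):
    """One nested sampling step: replace the worst walker by a new point
    drawn from pi(x) restricted to E < eps."""
    i = np.argmax(energies)          # walker with the highest energy
    eps = energies[i]                # new energy cutoff epsilon_{k+1}
    x[i], energies[i] = sample_below(eps)
    return eps

# With N walkers, each step compresses the enclosed configuration-space
# volume on average by <log t> = -1/N, so log X_k decreases roughly as -k/N.
```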

Figure 6: (Color online) Energy contours found by nested sampling (thick black line) and ensemble annealing (yellow [gray] line) for the Ising model.

Although nested sampling is much more efficient at cooling the truncated ensemble (5), it is also possible to run the ensemble annealing algorithm. Both methods produce comparable sequences of energy levels for the Ising model with matching numbers of walkers and compression rates (see Fig. 6). Also the estimated DOS is of similar accuracy. For this example, nested sampling runs at a speed that is three orders of magnitude faster than ensemble annealing. This is due to the fact that DOS estimation and annealing (i.e. the choice of the next energy limit) are instantaneous in nested sampling, because they are built into the method. Ensemble annealing, on the contrary, needs to run the histogram iterations for every energy contour, and these iterations converge only very slowly. Each iteration is dominated by DOS estimation because equilibration of the Ising model is very fast. For other systems such as proteins, it will be the equilibration step rather than the DOS estimation that consumes most of the computation time. In this situation, the discrepancy between nested sampling and ensemble annealing will not be as dramatic as for the Ising model.

For the $d$-dimensional harmonic oscillator we have $g(E) \propto E^{d/2 - 1}$ and $X(\epsilon) \propto \epsilon^{d/2}$. As in the canonical ensemble, the relative entropy depends on the ratio of two successive control parameters: $D[p_{\epsilon_{k+1}} \,\|\, p_{\epsilon_k}] = \frac{d}{2}\log(\epsilon_k/\epsilon_{k+1})$. Therefore nested sampling and ensemble annealing progress geometrically according to $\epsilon_{k+1} = \alpha_{\rm mc}\,\epsilon_k$ where $\alpha_{\rm mc} = e^{-2\bar D/d}$. Let us compare this to the thermal approach using the inverse temperature as a control parameter. The compression rate of the canonical distribution, $\alpha_{\rm th} = \beta_k/\beta_{k+1} = 1/r$, is given by Eq. (19). Therefore $\log(1/\alpha_{\rm mc}) = \alpha_{\rm th} - 1 - \log\alpha_{\rm th}$, where $\alpha_{\rm th}$ and $\alpha_{\rm mc}$ are the compression rates of thermal and microcanonical annealing. Rewritten we have $\alpha_{\rm mc} = \alpha_{\rm th}\,e^{1 - \alpha_{\rm th}}$, and therefore $\alpha_{\rm mc} > \alpha_{\rm th}$. This means that annealing the canonical ensemble compresses faster than annealing the microcanonical ensemble. We observe this for the application to the Ising model (Fig. 6). Canonical annealing with the same number of walkers and relative entropy requires only 42 iterations to reach the destination ensemble. Microcanonical annealing and nested sampling, on the contrary, need approximately 1800 iterations until convergence. The reason for this is that states accumulate at the maximum energy $\epsilon$, and therefore nested sampling and microcanonical annealing will produce many intermediate ensembles in order to bridge between the initial and the destination ensemble.

VI Conclusion

Ensemble annealing is a Monte Carlo algorithm that steps through a sequence of ensembles and generates conformational samples. Along with the samples, it also estimates the density of states using histogram methods. The ensembles are placed in an adaptive manner so as to maintain a constant, pre-chosen relative entropy between successive ensembles. Ensemble annealing can be applied to a variety of bridging distributions, foremost the canonical ensemble but also non-Boltzmann families such as the Tsallis or the microcanonical ensemble. There is a close connection to the nested sampling algorithm. In fact, ensemble annealing aims to implement the features that are built into nested sampling: control of the compression or thermodynamic speed, as well as reliable estimation of the compression based on the DOS or the configuration space volume. Nested sampling is intimately tied to the truncated ensemble (5), whereas ensemble annealing is more general in the choice of the ensemble itself, which can help to speed up the simulation and allows the use of samplers that work efficiently with a particular ensemble (such as, for example, hybrid Monte Carlo in the canonical ensemble).

Ensemble annealing is also related to earlier work by Salamon and co-workers Salamon et al. (1988); Ruppeiner et al. (1991) on simulated annealing. Our approach is more general and gives richer results because it not only finds the system's ground state but reconstructs the entire DOS. That way, ensemble annealing can be used both to simulate thermodynamic systems and to solve difficult optimization problems. By means of the DOS, all visited states contribute to the computation of ensemble averages, making our approach more robust. Moreover, it is possible to work with multiple control parameters, which will be studied in future extensions of ensemble annealing.

Acknowledgements.
This work was supported by Deutsche Forschungsgemeinschaft (DFG) grants No. HA 5918/1-1 and No. SFB860 TP B9.

Footnotes

  1. Normally, the control parameter of the Tsallis distribution is called $q$. To avoid a clash with the rest of our notation, we denote the Tsallis parameter by $\lambda$.
  2. In statistics, the transformation from $E$ to $X(E)$, where $X$ is the cumulative distribution function of $E$, is called the probability transform. It is a basic mathematical fact that if $E$ follows $p(E)$, then $X(E)$ will be uniformly distributed over $[0, 1]$.

References

  1. R. H. Swendsen and J.-S. Wang, Phys. Rev. Lett. 57, 2607 (1986).
  2. E. Marinari and G. Parisi, Europhys. Lett. 19, 451 (1992).
  3. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, Science 220, 671 (1983).
  4. C. J. Geyer, in Computing Science and Statistics: Proceedings of the 23rd Symposium on the Interface (1991) pp. 156–163.
  5. C. Predescu, M. Predescu, and C. V. Ciobanu, J. Phys. Chem. B 109, 4189 (2005).
  6. Y. Sugita, A. Kitao, and Y. Okamoto, J. Chem. Phys. 113, 6042 (2000).
  7. B. A. Berg and T. Neuhaus, Phys. Rev. Lett. 68, 9 (1992).
  8. F. Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001).
  9. C. Zhou, T. C. Schulthess, S. Torbrugge, and D. P. Landau, Phys. Rev. Lett. 96, 120201 (2006).
  10. J. Skilling, Bayesian Analysis 1, 833 (2006).
  11. U. H. E. Hansmann and Y. Okamoto, Phys. Rev. E 56, 2228 (1997).
  12. M. Habeck, M. Nilges, and W. Rieping, Phys. Rev. Lett. 94, 018105 (2005).
  13. J. R. Ray, Phys. Rev. A 44, 4061 (1991).
  14. V. Martin-Mayor, Phys. Rev. Lett. 98, 137207 (2007).
  15. S. Kullback and R. A. Leibler, Ann. Math. Stat. 22, 79 (1951).
  16. J. Lin, IEEE Transactions on Information Theory 37, 145 (1991).
  17. G. E. Crooks, Phys. Rev. Lett. 99, 100602 (2007).
  18. A. Gelman and X. Meng, Statistical Science 13, 163 (1998).
  19. A. M. Ferrenberg and R. H. Swendsen, Phys. Rev. Lett. 63, 1195 (1989).
  20. M. Habeck, Phys. Rev. Lett. 109, 100601 (2012a).
  21. M. Habeck, in Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS), Vol. 22 (JMLR: W&CP 22, 2012b) pp. 486–494.
  22. N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, J. Chem. Phys. 21, 1087 (1953).
  23. P. D. Beale, Phys. Rev. Lett. 76, 78 (1996).
  24. P. C. Whitford, J. K. Noel, S. Gosavi, A. Schug, K. Y. Sanbonmatsu, and J. N. Onuchic, Proteins 75, 430 (2009).
  25. S. Duane, A. D. Kennedy, B. Pendleton, and D. Roweth, Phys. Lett. B 195, 216 (1987).
  26. P. Salamon, J. D. Nulton, J. R. Harland, J. Pedersen, G. Ruppeiner, and L. Liao, Computer Physics Communications 49, 423 (1988).
  27. S. Geman and D. Geman, IEEE Trans. PAMI 6, 721 (1984).
  28. G. Ruppeiner, J. M. Pedersen, and P. Salamon, J. Phys. I France 1, 455 (1991).
  29. L. Diósi, K. Kulacsy, B. Lukács, and A. Rácz, J. Chem. Phys. 105, 11220 (1996).
  30. D. A. Sivak and G. E. Crooks, Phys. Rev. Lett. 108, 190602 (2012).
  31. M. Habeck, AIP Conference Proceedings 1641 (2015).