# Hamiltonian Variational Auto-Encoder

###### Abstract

Variational Auto-Encoders (VAEs) have become very popular techniques to perform inference and learning in latent variable models as they allow us to leverage the rich representational power of neural networks to obtain flexible approximations of the posterior of latent variables as well as tight evidence lower bounds (ELBOs). Combined with stochastic variational inference, this provides a methodology scaling to large datasets. However, for this methodology to be practically efficient, it is necessary to obtain low-variance unbiased estimators of the ELBO and its gradients with respect to the parameters of interest. While the use of Markov chain Monte Carlo (MCMC) techniques such as Hamiltonian Monte Carlo (HMC) has been previously suggested to achieve this Salimans et al. [2015], Wolf et al. [2016], the proposed methods require specifying reverse kernels which have a large impact on performance. Additionally, the resulting unbiased estimator of the ELBO for most MCMC kernels is typically not amenable to the reparameterization trick. We show here how to optimally select reverse kernels in this setting and, by building upon Hamiltonian Importance Sampling (HIS) Neal [2005], we obtain a scheme that provides low-variance unbiased estimators of the ELBO and its gradients using the reparameterization trick. This allows us to develop a Hamiltonian Variational Auto-Encoder (HVAE). This method can be re-interpreted as a target-informed normalizing flow Rezende and Mohamed [2015] which, within our context, only requires a few evaluations of the gradient of the sampled likelihood and trivial Jacobian calculations at each iteration.


Anthony L. Caterini, Arnaud Doucet, Dino Sejdinovic Department of Statistics University of Oxford {anthony.caterini, doucet, dino.sejdinovic}@stats.ox.ac.uk

Preprint. Work in progress.

## 1 Introduction

Variational Auto-Encoders (VAEs), introduced in Kingma and Welling [2014] and Rezende et al. [2014], are popular techniques to carry out inference and learning in complex latent variable models. However, the standard mean-field parametrization of the approximate posterior distribution can limit its flexibility. Recent work has sought to augment the VAE approach by sampling from the VAE posterior approximation and transforming these samples through mappings with additional trainable parameters to achieve richer posterior approximations. The most popular application of this idea is the Normalizing Flows (NFs) approach Rezende and Mohamed [2015], in which the samples are deterministically evolved through a series of parameterized invertible transformations called a flow. NFs have demonstrated success in various domains Berg et al. [2018], Kingma et al. [2016], but the form of the flow does not explicitly use information about the target posterior, so it is unclear whether improved performance is due to an accurate posterior approximation or simply a result of overparametrization. The related Hamiltonian Variational Inference (HVI) Salimans et al. [2015] instead stochastically evolves the base samples according to Hamiltonian Monte Carlo (HMC) Neal et al. [2011] and thus uses target information, but relies on defining reverse dynamics in the flow, which, as we will see, turns out to be unnecessary and suboptimal.

One of the key components in the formulation of VAEs is the maximization of the evidence lower bound (ELBO). The main idea put forward in Salimans et al. [2015] is that it is possible to use MCMC iterations to obtain an unbiased estimator of the ELBO and its gradients. This estimator is obtained using an importance sampling argument on an augmented space, with the importance distribution being the joint distribution of the states of the ‘forward’ Markov chain, while the augmented target distribution is constructed using a sequence of ‘reverse’ Markov kernels such that it admits the original posterior distribution as a marginal. The performance of this estimator is strongly dependent on the selection of these forward and reverse kernels, but no clear guideline for selection has been provided therein. By linking this approach to earlier work by Del Moral et al. [2006], we show how to select these components. We focus, in particular, on the use of time-inhomogeneous Hamiltonian dynamics, proposed originally in Neal [2005], to augment underlying VAE models. This method uses reverse kernels which are optimal for reducing variance of the log-likelihood estimators and allows for simple calculation of the approximate log-posteriors after evolution through the dynamics. Additionally, we can easily use the reparameterization trick to calculate unbiased gradients of the ELBO with respect to parameters of interest. The resulting method, which we refer to as the Hamiltonian Variational Auto-Encoder (HVAE), can be thought of as a Normalizing Flow scheme in which the flow depends explicitly on the target distribution. This combines the best properties of HVI and NFs, resulting in target-informed and inhomogeneous deterministic Hamiltonian dynamics, while being scalable to large datasets and high dimensions.

## 2 Evidence Lower Bounds, MCMC and Hamiltonian Importance Sampling

### 2.1 Unbiased likelihood and evidence lower bound estimators

For data $x \in \mathcal{X}$ and parameter $\theta \in \Theta$, consider the likelihood function

$$p_\theta(x) = \int p_\theta(x, z)\,\mathrm{d}z,$$

where $z \in \mathsf{Z}$ are some latent variables. Assume we have access to a strictly positive unbiased estimate $\hat{p}_\theta(x)$ of $p_\theta(x)$; then

$$p_\theta(x) = \int \hat{p}_\theta(x)\, q_{\theta,\phi}(u)\,\mathrm{d}u, \tag{1}$$

with $u \sim q_{\theta,\phi}(\cdot)$, $u \in \mathsf{U}$, denoting all the random variables used to compute $\hat{p}_\theta(x)$. Here, $\phi$ denotes additional parameters of the sampling distribution. We emphasize that $\hat{p}_\theta(x)$ depends on both $u$ and potentially $\phi$, even though this is not made explicit notationally. By applying Jensen's inequality to (1), we thus obtain, for all $\theta$ and $\phi$,

$$\mathcal{L}_{\mathrm{ELBO}}(\theta, \phi; x) := \int \log\big(\hat{p}_\theta(x)\big)\, q_{\theta,\phi}(u)\,\mathrm{d}u \le \log p_\theta(x). \tag{2}$$

It can be shown that the gap $\log p_\theta(x) - \mathcal{L}_{\mathrm{ELBO}}(\theta, \phi; x)$ decreases as the variance of $\hat{p}_\theta(x)$ decreases; see, e.g., Burda et al. [2016], Maddison et al. [2017]. The standard variational framework corresponds to $u = z$ and $\hat{p}_\theta(x) = p_\theta(x, z)/q_{\theta,\phi}(z)$, while the Importance Weighted Auto-Encoder (IWAE) Burda et al. [2016] corresponds to $u = (z_1, \ldots, z_L)$, $q_{\theta,\phi}(u) = \prod_{i=1}^{L} q_{\theta,\phi}(z_i)$, and

$$\hat{p}_\theta(x) = \frac{1}{L} \sum_{i=1}^{L} \frac{p_\theta(x, z_i)}{q_{\theta,\phi}(z_i)}.$$

In the general case, we do not have an analytical expression for $\mathcal{L}_{\mathrm{ELBO}}(\theta, \phi; x)$, but when performing stochastic gradient variational inference we only require an unbiased estimator of $\nabla_\theta \mathcal{L}_{\mathrm{ELBO}}(\theta, \phi; x)$. This is given by $\nabla_\theta \log \hat{p}_\theta(x)$ if the reparameterization trick Glasserman [1991], Kingma and Welling [2014] is used, i.e. if $u = g_{\theta,\phi}(v)$, where $v \sim q(\cdot)$ and $g_{\theta,\phi}$ is a smooth enough function of $(\theta, \phi)$. As a guiding principle, one should attempt to obtain a low-variance estimator of $p_\theta(x)$, which typically translates into a low-variance estimator of $\nabla_\theta \mathcal{L}_{\mathrm{ELBO}}(\theta, \phi; x)$. We can analogously optimize with respect to $\phi$ through stochastic gradient ascent to obtain tighter bounds.
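As a concrete illustration of these estimators (the model, proposal parameters, and numbers below are our own toy assumptions, not taken from any experiment in this paper), the following sketch computes the IWAE-style $\log \hat{p}_\theta(x)$ for a conjugate Gaussian model in which $\log p_\theta(x)$ is available in closed form, using the reparameterization $z_i = m + s\, v_i$:

```python
import numpy as np

def log_joint(x, z):
    # log p(x, z) for prior z ~ N(0, 1) and likelihood x | z ~ N(z, 1)
    return -0.5 * (z**2 + (x - z)**2) - np.log(2 * np.pi)

def iwae_log_phat(x, m, s, L, rng):
    # reparameterization trick: z_i = m + s * v_i with v_i ~ N(0, 1),
    # so gradients w.r.t. (m, s) could flow through the estimator
    v = rng.standard_normal(L)
    z = m + s * v
    log_q = -0.5 * ((z - m) / s)**2 - np.log(s) - 0.5 * np.log(2 * np.pi)
    log_w = log_joint(x, z) - log_q
    # log of the average importance weight, computed stably in log-space
    return np.logaddexp.reduce(log_w) - np.log(L)

rng = np.random.default_rng(0)
x = 1.0
true_logp = -0.5 * x**2 / 2 - 0.5 * np.log(2 * np.pi * 2)  # marginally x ~ N(0, 2)
est = iwae_log_phat(x, m=0.5, s=1.0, L=100_000, rng=rng)
```

Because the model is conjugate, the estimate can be checked directly against the true marginal log-likelihood; increasing $L$ reduces the variance of $\hat{p}_\theta(x)$ and tightens the bound.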

### 2.2 Unbiased likelihood estimator using MCMC

Salimans et al. [2015] propose to build an unbiased estimator of $p_\theta(x)$ by sampling a (potentially time-inhomogeneous) ‘forward’ Markov chain $(z_0, \ldots, z_K)$ using $z_0 \sim q^0_{\theta,\phi}(\cdot)$ and $z_k \sim q^k_{\theta,\phi}(\cdot \mid z_{k-1})$ for $k = 1, \ldots, K$. Using artificial ‘reverse’ Markov transition kernels $r^k_{\theta,\phi}(z_k \mid z_{k+1})$ for $k = 0, \ldots, K-1$, it follows easily from an importance sampling argument that

$$\hat{p}_\theta(x) = \frac{p_\theta(x, z_K) \prod_{k=0}^{K-1} r^k_{\theta,\phi}(z_k \mid z_{k+1})}{q^0_{\theta,\phi}(z_0) \prod_{k=1}^{K} q^k_{\theta,\phi}(z_k \mid z_{k-1})} \tag{3}$$

is an unbiased estimator of $p_\theta(x)$ as long as the ratio in (3) is well-defined. In the framework of the previous section, we have $u = (z_0, \ldots, z_K)$ and $q_{\theta,\phi}(u)$ is given by the denominator of (3). Although we did not use measure-theoretic notations, the kernels $q^k_{\theta,\phi}$ are typically MCMC kernels which do not admit a density with respect to the Lebesgue measure (e.g. the Metropolis–Hastings kernel). This makes it difficult to define reverse kernels for which (3) is well-defined, as evidenced in Salimans et al. [2015, Section 4] or Wolf et al. [2016]. Del Moral et al. [2006] – wherein the estimator (3) was originally introduced – provides generic recommendations for obtaining low-variance estimators of $p_\theta(x)$: select $q^k_{\theta,\phi}$ as MCMC kernels which are invariant, or approximately invariant, with respect to a sequence of artificial densities $\pi^k_{\theta,\phi}$ bridging $q^0_{\theta,\phi}(z)$ to $p_\theta(z \mid x)$ smoothly, e.g. using $\pi^k_{\theta,\phi}(z) \propto q^0_{\theta,\phi}(z)^{1-\beta_k}\, p_\theta(x, z)^{\beta_k}$ for $0 = \beta_0 < \beta_1 < \cdots < \beta_K = 1$. It is also established in Del Moral et al. [2006] that, given any sequence of kernels $(q^k_{\theta,\phi})_{k=1}^{K}$, the sequence of reverse kernels minimizing the variance of $\hat{p}_\theta(x)$ is given by

$$r^{k-1,\mathrm{opt}}_{\theta,\phi}(z_{k-1} \mid z_k) = \frac{q^{k-1}_{\theta,\phi}(z_{k-1})\, q^k_{\theta,\phi}(z_k \mid z_{k-1})}{q^k_{\theta,\phi}(z_k)},$$

where $q^k_{\theta,\phi}(z_k)$ denotes the marginal density of $z_k$ under the forward dynamics, yielding

$$\hat{p}_\theta(x) = \frac{p_\theta(x, z_K)}{q^K_{\theta,\phi}(z_K)}. \tag{4}$$
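To see why the optimal reverse kernels yield (4), substitute them into (3); the conditional terms cancel and the marginal densities telescope:

```latex
\hat{p}_\theta(x)
= \frac{p_\theta(x, z_K)\,\prod_{k=1}^{K}
        \dfrac{q^{k-1}_{\theta,\phi}(z_{k-1})\, q^{k}_{\theta,\phi}(z_k \mid z_{k-1})}
              {q^{k}_{\theta,\phi}(z_k)}}
       {q^{0}_{\theta,\phi}(z_0)\,\prod_{k=1}^{K} q^{k}_{\theta,\phi}(z_k \mid z_{k-1})}
= \frac{p_\theta(x, z_K)}{q^{0}_{\theta,\phi}(z_0)}
  \prod_{k=1}^{K} \frac{q^{k-1}_{\theta,\phi}(z_{k-1})}{q^{k}_{\theta,\phi}(z_k)}
= \frac{p_\theta(x, z_K)}{q^{K}_{\theta,\phi}(z_K)}.
```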

For stochastic forward transitions, it is typically not possible to compute the marginals $q^k_{\theta,\phi}(z_k)$ and the corresponding estimator (4), as these marginal densities do not admit closed-form expressions. However, this suggests that $r^{k-1}_{\theta,\phi}$ should approximate $r^{k-1,\mathrm{opt}}_{\theta,\phi}$, and various schemes are presented in Del Moral et al. [2006]. As noticed by Del Moral et al. [2006] and Salimans et al. [2015], Annealed Importance Sampling (AIS) Neal [2001] – also known as the Jarzynski–Crooks identity (Crooks [1998], Jarzynski [1997]) in physics – is a special case of (3) using, for $q^k_{\theta,\phi}$, a $\pi^k_{\theta,\phi}$-invariant MCMC kernel and tractable, but suboptimal, reverse transition kernels. AIS provides state-of-the-art estimators of the marginal likelihood and has been widely used in machine learning. Unfortunately, it typically cannot be used in conjunction with the reparameterization trick. Indeed, although it is very often possible to reparameterize the forward simulation of $(z_0, \ldots, z_K)$ in terms of a deterministic transformation of some random variables $u$ independent of $(\theta, \phi)$, this mapping is not continuous because the MCMC kernels it uses typically include singular components. In this context, although (1) holds, $\nabla_{\theta,\phi} \log \hat{p}_\theta(x)$ is not an unbiased estimator of $\nabla_{\theta,\phi} \mathcal{L}_{\mathrm{ELBO}}(\theta, \phi; x)$; see, e.g., Glasserman [1991] for a careful discussion of these issues.

### 2.3 Using Hamiltonian dynamics

Given the empirical success of Hamiltonian Monte Carlo (HMC) Hoffman and Gelman [2014], Neal et al. [2011], various contributions have proposed to develop algorithms exploiting Hamiltonian dynamics to obtain unbiased estimates of the ELBO and its gradients when $\mathsf{Z} = \mathbb{R}^\ell$. This is indeed the basis of Salimans et al. [2015]. However, the algorithm proposed therein relies on a time-homogeneous leapfrog where momentum resampling is performed at each step and no Metropolis correction is used. It also relies on a rather arbitrary reverse kernel. To address the limitations of Salimans et al. [2015], Wolf et al. [2016] have proposed to include some Metropolis acceptance steps, but they still limit themselves to homogeneous dynamics and their estimator is not amenable to the reparameterization trick. Finally, in Hoffman [2017], an alternative approach is used where the gradient of the true likelihood, $\nabla_\theta \log p_\theta(x)$, is directly approximated by using Fisher's identity and HMC to obtain approximate samples from $p_\theta(z \mid x)$. However, the MCMC bias can be very significant when one has multimodal latent posteriors, and is strongly dependent on both the initial distribution and the number of HMC steps.

Here, we follow an alternative approach where we use Hamiltonian dynamics that are time-inhomogeneous as in Del Moral et al. [2006] and Neal [2001], and use optimal reverse Markov kernels to compute $\hat{p}_\theta(x)$. This estimator can be used in conjunction with the reparameterization trick to obtain an unbiased estimator of $\nabla_{\theta,\phi} \mathcal{L}_{\mathrm{ELBO}}(\theta, \phi; x)$. This method is based on the Hamiltonian Importance Sampling (HIS) scheme proposed in Neal [2005]; one can also find several instances of related ideas in physics Jarzynski [2000], Schöll-Paschinger and Dellago [2006]. We work in an extended space $(z, \rho)$, introducing momentum variables $\rho$ to pair with the position variables $z$, with $\rho_0 \sim \mathcal{N}(0, M)$ for some symmetric positive definite mass matrix $M$. Essentially, the idea is to sample $(z_0, \rho_0) \sim q^0_{\theta,\phi}(\cdot)$ and evolve it using deterministic transitions so that $(z_K, \rho_K) = \mathcal{H}_{\theta,\phi}(z_0, \rho_0)$, where $\mathcal{H}_{\theta,\phi} := \Phi^K_{\theta,\phi} \circ \cdots \circ \Phi^1_{\theta,\phi}$ and the $\Phi^k_{\theta,\phi}$ define diffeomorphisms corresponding to a time-discretized and inhomogeneous Hamiltonian dynamics. In this case, it is easy to show that

$$\hat{p}_\theta(x) = \frac{\bar{p}_\theta(x, z_K, \rho_K)}{q^0_{\theta,\phi}(z_0, \rho_0)} \prod_{k=1}^{K} \left| \det \nabla \Phi^k_{\theta,\phi}(z_{k-1}, \rho_{k-1}) \right|, \tag{5}$$

where $\bar{p}_\theta(x, z, \rho) := p_\theta(x, z)\, \mathcal{N}(\rho \mid 0, M)$ is the extended target. It can also be shown that this is nothing but a special case of (3) (on the extended position–momentum space) using the optimal reverse kernels $r^{k-1,\mathrm{opt}}_{\theta,\phi}$. This setup is similar to the one of normalizing flows Rezende and Mohamed [2015], except here we use a flow informed by the target distribution. Indeed, Salimans et al. [2015] is mentioned in Rezende and Mohamed [2015], but the flow therein is homogeneous and yields a high-variance estimator of the normalizing constants even if the optimal reverse kernels are used, as demonstrated in our simulations in section 4.

Note that we can write the estimator in (5) as

$$\hat{p}_\theta(x) = \frac{\bar{p}_\theta\big(x, \mathcal{H}_{\theta,\phi}(z_0, \rho_0)\big)}{q^0_{\theta,\phi}(z_0, \rho_0)} \left| \det \nabla \mathcal{H}_{\theta,\phi}(z_0, \rho_0) \right|. \tag{6}$$

Hence, if we can simulate $(z_0, \rho_0) \sim q^0_{\theta,\phi}(\cdot)$ using $(z_0, \rho_0) = g_{\theta,\phi}(v)$, where $v \sim q(\cdot)$ is independent of $(\theta, \phi)$ and $g_{\theta,\phi}$ is a smooth mapping, then we can use the reparameterization trick, since the $\Phi^k_{\theta,\phi}$ are also smooth mappings.

In our case, the deterministic transformation $\Phi^k_{\theta,\phi}$ has two components: a leapfrog step, which discretizes the Hamiltonian dynamics, and a tempering step, which adds inhomogeneity to the dynamics and allows us to explore isolated modes of the target. To describe the leapfrog step, we first define the potential energy of the system as $U_\theta(z \mid x) := -\log p_\theta(x, z)$ for a single datapoint $x$. The leapfrog step then takes the system from $(z, \rho)$ into $(z', \rho')$ via the following transformations:

$$\tilde{\rho} = \rho - \frac{\varepsilon}{2} \odot \nabla_z U_\theta(z \mid x), \tag{7}$$

$$z' = z + \varepsilon \odot \tilde{\rho}, \tag{8}$$

$$\rho' = \tilde{\rho} - \frac{\varepsilon}{2} \odot \nabla_z U_\theta(z' \mid x), \tag{9}$$

where $\varepsilon \in \mathbb{R}^\ell_{>0}$ are the individual leapfrog step sizes per dimension, $\odot$ denotes elementwise multiplication, and the gradient of $U_\theta$ is taken with respect to $z$.
The composition of equations (7)–(9) has unit Jacobian since each equation describes a shear transformation.
As for the tempering part, we prescribe an inverse temperature schedule $0 < \beta_0 < \beta_1 < \cdots < \beta_K = 1$ that increases from $\beta_0$ initially to $1$ at the end of each trajectory.
We implement tempering at each step $k$ by multiplying the momentum output of the leapfrog by $\sqrt{\beta_{k-1}/\beta_k}$ before going to the next leapfrog step. This cooling operation has Jacobian $(\beta_{k-1}/\beta_k)^{\ell/2}$. We obtain $\Phi^k_{\theta,\phi}$ by composing the leapfrog integrator with the cooling operation, which implies that the Jacobian of $\mathcal{H}_{\theta,\phi}$ is given by $\prod_{k=1}^{K} (\beta_{k-1}/\beta_k)^{\ell/2} = (\beta_0/\beta_K)^{\ell/2}$, which in turn implies

$$\left| \det \nabla \mathcal{H}_{\theta,\phi}(z_0, \rho_0) \right| = \beta_0^{\ell/2}.$$
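The unit-Jacobian property of the leapfrog shears, and the cooling Jacobian, can be checked numerically. The sketch below is our own illustration, with an arbitrary quadratic potential, step sizes, and temperatures; it exploits the fact that for a quadratic potential the leapfrog map is linear, so its Jacobian matrix is obtained exactly by mapping basis vectors:

```python
import numpy as np

def leapfrog(z, rho, eps, grad_U):
    # shear transformations (7)-(9): each has unit Jacobian
    rho_half = rho - 0.5 * eps * grad_U(z)
    z_new = z + eps * rho_half
    rho_new = rho_half - 0.5 * eps * grad_U(z_new)
    return z_new, rho_new

def step(z, rho, eps, grad_U, beta_prev, beta_k):
    # one transition Phi_k: leapfrog followed by momentum cooling
    z_new, rho_new = leapfrog(z, rho, eps, grad_U)
    return z_new, np.sqrt(beta_prev / beta_k) * rho_new

# quadratic potential U(z) = 0.5 * z @ A @ z (Gaussian target) makes the
# map linear, so columns of the Jacobian are images of basis vectors
d = 2
A = np.array([[2.0, 0.3], [0.3, 1.0]])
grad_U = lambda z: A @ z
eps = np.full(d, 0.1)
beta_prev, beta_k = 0.5, 0.8

def flat_map(v):
    z, rho = v[:d], v[d:]
    z2, rho2 = step(z, rho, eps, grad_U, beta_prev, beta_k)
    return np.concatenate([z2, rho2])

J = np.column_stack([flat_map(e) for e in np.eye(2 * d)])
det = np.linalg.det(J)
expected = (beta_prev / beta_k) ** (d / 2)
```

The determinant matches $(\beta_{k-1}/\beta_k)^{d/2}$, the cooling factor alone, confirming that the leapfrog contributes unit Jacobian.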

The only remaining component to specify is the initial distribution. We will set $q^0_{\theta,\phi}(z_0, \rho_0) = q^0_{\theta,\phi}(z_0)\, \mathcal{N}(\rho_0 \mid 0, M/\beta_0)$, where $q^0_{\theta,\phi}(z_0)$ will be referred to as the variational prior with parameters $\phi$, and $\mathcal{N}(\rho_0 \mid 0, M/\beta_0)$ is the canonical momentum distribution at inverse temperature $\beta_0$. The full procedure to generate an unbiased estimate of the ELBO from (2) on the extended space for a single point $x$ is given in Algorithm 1. The set of variational parameters to optimize includes $\varepsilon$, $\beta_0$, and the parameters of the variational prior, although we will often avoid optimizing the mass matrix $M$, as we can capture the same effect by optimizing individual leapfrog step sizes per dimension [Neal et al., 2011, Section 4.2]. We can see from (6) that we will obtain unbiased gradients with respect to $\theta$ and $\phi$ from our estimate of the ELBO if we write $\rho_0 = \gamma/\sqrt{\beta_0}$, for $\gamma \sim \mathcal{N}(0, M)$ and $z_0 \sim q^0_{\theta,\phi}(\cdot)$, provided we are not also optimizing with respect to the parameters of the variational prior. We will require additional reparameterization when we do elect to optimize with respect to these parameters, but this is generally quite easy to implement on a problem-specific basis and is well-known in the literature.
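Algorithm 1 itself is not reproduced here, but a minimal one-dimensional sketch of the resulting estimator follows. The setup is our own toy construction: target $\gamma(z) = e^{-z^2/2}$ with known normalizer $\sqrt{2\pi}$, variational prior $\mathcal{N}(0, 2)$, $M = 1$, and an assumed quadratic tempering schedule. Averaging $\hat{p}_\theta(x)$ over many draws recovers the normalizer, illustrating that the estimator is unbiased for any $K$ and $\varepsilon$:

```python
import numpy as np

rng = np.random.default_rng(1)
log2pi = np.log(2 * np.pi)

# unnormalized target gamma(z) = exp(-z^2/2); true normalizer is sqrt(2*pi)
U = lambda z: 0.5 * z**2           # potential energy -log gamma(z)
grad_U = lambda z: z

K, eps, beta0 = 3, 0.2, 0.7
# quadratic inverse-temperature schedule from beta0 up to beta_K = 1
betas = beta0 + (1.0 - beta0) * (np.arange(K + 1) / K) ** 2

n = 400_000
z = rng.normal(0.0, np.sqrt(2.0), n)          # z0 ~ variational prior N(0, 2)
gamma_v = rng.standard_normal(n)
rho = gamma_v / np.sqrt(beta0)                # rho0 ~ N(0, 1/beta0)

log_q0 = (-0.25 * z**2 - 0.5 * np.log(2.0) - 0.5 * log2pi
          - 0.5 * beta0 * rho**2 + 0.5 * np.log(beta0) - 0.5 * log2pi)

for k in range(1, K + 1):                     # leapfrog (7)-(9) plus cooling
    rho_h = rho - 0.5 * eps * grad_U(z)
    z = z + eps * rho_h
    rho = rho_h - 0.5 * eps * grad_U(z)
    rho = np.sqrt(betas[k - 1] / betas[k]) * rho

# log phat = log gamma(z_K) + log N(rho_K; 0, 1) + 0.5*log(beta0) - log q0
log_phat = -U(z) - 0.5 * rho**2 - 0.5 * log2pi + 0.5 * np.log(beta0) - log_q0
Z_est = np.exp(log_phat).mean()               # unbiased estimate of sqrt(2*pi)
```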

## 3 Stochastic Variational Inference

We will now describe how to use Algorithm 1 within a stochastic variational inference procedure, moving to the setting where we have a dataset $D = \{x_1, \ldots, x_N\}$, with $x_i \in \mathcal{X}$ for all $i \in \{1, \ldots, N\}$. In this case, we are interested in finding

$$\theta^* = \arg\max_{\theta} \frac{1}{N} \sum_{i=1}^{N} \log p_\theta(x_i) = \arg\max_{\theta} \mathbb{E}_{x \sim \hat{\nu}}\big[\log p_\theta(x)\big], \tag{10}$$

where $\hat{\nu} := \frac{1}{N} \sum_{i=1}^{N} \delta_{x_i}$ is the empirical measure over the data. We resort to variational methods since $\log p_\theta(x)$ cannot generally be calculated exactly, and maximize instead the surrogate ELBO objective function

$$\mathcal{L}(\theta, \phi) := \mathbb{E}_{x \sim \hat{\nu}}\big[\mathcal{L}_{\mathrm{ELBO}}(\theta, \phi; x)\big], \tag{11}$$

for $\mathcal{L}_{\mathrm{ELBO}}$ defined as in (2). We can now turn to stochastic gradient ascent (or a variant thereof) to jointly maximize (11) with respect to $\theta$ and $\phi$ by approximating the expectation over $\hat{\nu}$ using minibatches of observed data.

For our specific problem, we can reduce the variance of the ELBO calculation by analytically evaluating some terms in the expectation as follows:

$$\mathcal{L}(\theta, \phi) = \mathbb{E}_{x \sim \hat{\nu}}\, \mathbb{E}_{z_0 \sim q^0_{\theta,\phi},\; \gamma \sim \mathcal{N}(0, M)} \left[ \log p_\theta(x, z_K) - \log q^0_{\theta,\phi}(z_0) - \frac{1}{2} \rho_K^{\top} M^{-1} \rho_K \right] + \frac{\ell}{2}, \tag{12}$$

where $(z_K, \rho_K) = \mathcal{H}_{\theta,\phi}(z_0, \rho_0)$ and we write $\rho_0 = \gamma/\sqrt{\beta_0}$ under reparameterization. We can now consider the output of Algorithm 1 as taking a sample from the inner expectation for a given sample $x$ from the outer expectation. A full algorithm to optimize (12) stochastically is provided in Algorithm 2. In practice, we take the gradients of (12) using automatic differentiation packages such as TensorFlow.
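The analytic marginalization in (12) can be illustrated on a one-dimensional toy target (our own construction, not one of the paper's experiments): the constant $\ell/2$ replaces the initial-momentum term $\beta_0 \rho_0^2/2$, whose expectation under $\rho_0 \sim \mathcal{N}(0, 1/\beta_0)$ is exactly $1/2$, so both estimators share the same mean:

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D target gamma(z) = exp(-z^2/2), HIS flow with M = 1 as in Algorithm 1
grad_U = lambda z: z
K, eps, beta0 = 3, 0.2, 0.7
betas = beta0 + (1 - beta0) * (np.arange(K + 1) / K) ** 2  # assumed schedule

n = 200_000
z0 = rng.normal(0.0, np.sqrt(2.0), n)             # variational prior N(0, 2)
rho0 = rng.standard_normal(n) / np.sqrt(beta0)    # rho0 ~ N(0, 1/beta0)

z, rho = z0.copy(), rho0.copy()
for k in range(1, K + 1):
    rho_h = rho - 0.5 * eps * grad_U(z)
    z = z + eps * rho_h
    rho = rho_h - 0.5 * eps * grad_U(z)
    rho = np.sqrt(betas[k - 1] / betas[k]) * rho

log_q0_z = -0.25 * z0**2 - 0.5 * np.log(2.0) - 0.5 * np.log(2 * np.pi)

# naive estimator: log phat itself (the 2*pi and log(beta0) constants cancel,
# leaving the random initial-momentum term beta0 * rho0^2 / 2)
naive = -0.5 * z**2 - 0.5 * rho**2 - log_q0_z + 0.5 * beta0 * rho0**2
# eq (12)-style estimator: that term is replaced by its expectation 1/2
marginalized = -0.5 * z**2 - 0.5 * rho**2 - log_q0_z + 0.5
```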

## 4 Experiments

In this section, we discuss the experiments used to validate our method. We first test HVAE on a simple tractable example (where no neural networks are needed), and then perform larger-scale tests on the MNIST dataset. We will provide code for all experiments online to foster reproducibility of experiments. All models were trained using TensorFlow Abadi et al. [2015].

### 4.1 Gaussian Model

The generative model that we will consider first is a Gaussian likelihood with an offset and a Gaussian prior on the mean, given by

$$z \sim \mathcal{N}(0, I_\ell), \qquad x \mid z \sim \mathcal{N}(z + \Delta, \Sigma),$$

where $\theta = \{\Delta, \Sigma\}$ and $\Sigma$ is constrained to be diagonal. We will again write $D = \{x_1, \ldots, x_N\}$ to denote an observed dataset under this model, where each $x_i \in \mathbb{R}^\ell$. The goal of the problem is to learn the model parameters $\Delta$ and $\Sigma$, and calculate an estimate of the marginal log-likelihood by importance sampling from the approximate posterior defined by our variational method. Note that in this case the true marginal log-likelihood is tractable, allowing us to compare estimates with the true values.

Here, the log unnormalized posterior is given by

$$\log p_\theta(x_{1:N}, z) = -\frac{1}{2} z^{\top} z - \frac{1}{2} \sum_{i=1}^{N} (x_i - z - \Delta)^{\top} \Sigma^{-1} (x_i - z - \Delta) + \mathrm{const},$$

which can in this case be computed exactly using summary statistics, so there is no need for the outer expectation with respect to $\hat{\nu}$ from (11) or (12).

For this example, we will use HVAE with variational prior being the true prior, i.e. $q^0(z_0) = \mathcal{N}(z_0 \mid 0, I_\ell)$, and a fixed isotropic mass matrix $M = I_\ell$. The gradient of the potential is then given by

$$\nabla_z U_\theta(z \mid x_{1:N}) = z + N \Sigma^{-1} (z + \Delta - \bar{x}),$$

where $\bar{x} := \frac{1}{N} \sum_{i=1}^{N} x_i$ is the empirical mean of the dataset $D$. The set of variational parameters here is $\phi = \{\varepsilon, \beta_0\}$, where $\varepsilon \in \mathbb{R}^\ell_{>0}$ contains the per-dimension leapfrog stepsizes and $\beta_0$ is the initial inverse temperature. We constrain each of the leapfrog step sizes such that $\varepsilon_j \in (0, \xi)$ for some $\xi > 0$, for all $j \in \{1, \ldots, \ell\}$ – this is to prevent the leapfrog discretization from entering unstable regimes. In practice, we simply fix $\xi$ to a moderate constant without issue, although this can be lowered if instability issues arise in other settings, or alternatively raised if the step sizes approach this maximum in training without leapfrog displaying instability. Note that the variational prior has no trainable parameters in this example; in particular, we do not optimize any parameters of the variational prior and thus require no further reparameterization.
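The closed-form potential gradient above can be verified against finite differences of the potential; below is a small self-contained check, with synthetic data and arbitrary $\Delta$ and diagonal $\Sigma$ of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
ell, N = 3, 50
Delta = np.array([1.0, -0.5, 2.0])
Sigma = np.diag(np.array([0.5, 1.0, 2.0]))
X = rng.normal(size=(N, ell)) + Delta       # synthetic dataset
xbar = X.mean(axis=0)
Sinv = np.linalg.inv(Sigma)

def U(z):
    # potential: -log of prior N(0, I) times likelihood prod_i N(x_i; z + Delta, Sigma)
    r = X - z - Delta
    return 0.5 * z @ z + 0.5 * np.einsum('ij,jk,ik->', r, Sinv, r)

def grad_U(z):
    # closed form via the summary statistic xbar
    return z + N * Sinv @ (z + Delta - xbar)

z = rng.normal(size=ell)
h = 1e-6
num = np.array([(U(z + h * e) - U(z - h * e)) / (2 * h) for e in np.eye(ell)])
```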

We will compare HVAE with a basic Variational Bayes (VB) scheme with mean-field approximate posterior $q_\psi(z) = \mathcal{N}(z \mid \mu_\psi, \Sigma_\psi)$, where $\Sigma_\psi$ is diagonal and $\psi = \{\mu_\psi, \Sigma_\psi\}$ denotes the set of learned variational parameters. This mirrors a typical VAE setup. We will also include a planar normalizing flow of the form of equation (10) in Rezende and Mohamed [2015], but with the same flow parameters across iterations to ensure a fair comparison on the basis of the number of variational parameters. The variational prior is also set to the true prior as in HVAE above, and the log variational posterior is given by equation (13) of Rezende and Mohamed [2015].

We set the true offset vector $\Delta$ and let the true scale parameters range quadratically across dimensions, reaching a minimum at the middle dimension and increasing back towards the ends. (When $\ell$ is even, the exact middle dimension does not exist, although we can still consider it to be the location of the minimum of the parabola defining the true standard deviations.) All training was done using RMSProp Tieleman and Hinton [2012].

To compare the results across methods, we generate estimates of the log evidence $\log p_\theta(x_{1:N})$ for out-of-sample datasets using importance samples from the variational posteriors. We do this for 10 different randomly-generated out-of-sample datasets for various choices of dimensionality $\ell$, displaying the average estimates, along with shaded error bands corresponding to one sample standard deviation, in (a). We notice that HVAE performs much better out of sample than the other methods, conceivably because the flow is guided by the true distribution. The error bands are also fairly tight across methods except in the lower-dimensional regime, statistically validating our claim that HVAE is doing better. We noticed that the VB scheme strongly overfit to the training data, leading to poor performance on the test data, while the NF schemes cannot really fit either the training or test data.

We also compare HVAE with tempering to HVAE without tempering, i.e. where $\beta_0$ is fixed to $1$ in training – this has the effect of making our Hamiltonian dynamics homogeneous in time – again displaying the average log-evidence estimates across 10 out-of-sample datasets in (b). We can clearly see that tempering and time-inhomogeneous dynamics are a key ingredient in the effectiveness of the method, which becomes even more apparent when we have a larger number of leapfrog steps.

### 4.2 Generative Model for MNIST

The next experiment that we consider is using HVAE to enhance a convolutional VAE for the binarized MNIST handwritten digit dataset. Again, our training data is $D = \{x_1, \ldots, x_N\}$, where each $x_i$ is a binarized $28 \times 28$ image. The base VAE has generative model $p_\theta(x, z) = p_\theta(x \mid z)\, p(z)$ for a single datapoint $x$, associated latent variable $z \in \mathbb{R}^\ell$, and model parameters $\theta$, independently across datapoints. The likelihood is given by a product of Bernoulli distributions: $p_\theta(x \mid z) = \prod_{j} \pi_{\theta,j}(z)^{x_j} \big(1 - \pi_{\theta,j}(z)\big)^{1 - x_j}$, where $\pi_\theta(z)$ is the output of a convolutional neural network (the generative network, or decoder) parametrized by $\theta$, and the prior $p(z)$ is just a standard isotropic Gaussian on $\mathbb{R}^\ell$. The variational approximate posterior is given by $q_\phi(z \mid x) = \mathcal{N}\big(z \mid \mu_\phi(x), \Sigma_\phi(x)\big)$, where $\mu_\phi(x)$ and $\Sigma_\phi(x)$ are separate outputs of the same neural network (the inference network, or encoder) parametrized by the variational parameters $\phi$, and $\Sigma_\phi(x)$ is constrained to be diagonal.

We attempted to match the network structure of Salimans et al. [2015]. The inference network consists of three convolutional layers, each with a stride of 2. The convolutional layers output 16, 32, and 32 feature maps, respectively. The output of the third layer is fed into a fully-connected hidden layer, whose output is then fully connected to the output means and standard deviations, each of size $\ell$. Softplus activation functions are used throughout the network except immediately before the outputted mean. The generative network mirrors this structure in reverse, replacing the stride with upsampling as in Dosovitskiy et al. [2015] and replicated in Salimans et al. [2015].

We apply HVAE on top of the base convolutional VAE by considering the VAE variational approximate posterior $q_\phi(z \mid x)$ as the variational prior to our method. We then evolve the samples from the variational prior according to Algorithm 1 and optimize the new objective given in (12). We reparameterize $(z_0, \rho_0)$ as $z_0 = \mu_\phi(x) + \Sigma_\phi^{1/2}(x)\, v$ and $\rho_0 = \gamma/\sqrt{\beta_0}$, for $v \sim \mathcal{N}(0, I_\ell)$ and $\gamma \sim \mathcal{N}(0, M)$, to generate unbiased gradients of the ELBO with respect to $(\theta, \phi)$. We select various choices of the number of leapfrog steps $K$. In contrast with normalizing flows, we do not amortize our flow parameters $\varepsilon$ and $\beta_0$ by allowing them to be outputs of the inference network, although this can also be incorporated into our framework. We use the standard stochastic binarization of MNIST Salakhutdinov and Murray [2008] to train the model, and transition from RMSProp to stochastic gradient ascent throughout training for convergence. We calculate out-of-sample negative log likelihoods using importance sampling and stochastic binarization, and report the average estimated negative log likelihood plus/minus the standard deviation in Table 1. We note that our base CNN VAE is not as powerful as the one in Salimans et al. [2015] – implying some discrepancies in network structure, which were present because their code was not available online. However, we are also able to improve upon the base model by adding our time-inhomogeneous Hamiltonian dynamics on top. Rezende and Mohamed [2015] report only lower bounds on the log-likelihood for NFs, which are indeed lower than our log-likelihood estimates, although they use a much higher number of variational parameters.

Table 1: Average estimated test negative log likelihood (NLL) on binarized MNIST for the base CNN VAE and for HVAE with various numbers of leapfrog steps $K$.

## 5 Conclusion and Discussion

We have proposed a simple and principled way to exploit Hamiltonian dynamics within stochastic variational inference. Contrary to previous methods Salimans et al. [2015], Wolf et al. [2016], our algorithm does not rely on arbitrary reverse Markov kernels and benefits from the use of tempering ideas. Additionally, we can use the reparameterization trick to obtain an unbiased estimator of gradients of the ELBO. The resulting HVAE can be interpreted as a target-driven normalizing flow which requires the evaluation of a few gradients of the log-likelihood associated with a single data point at each stochastic gradient step. However, the Jacobian computations required for the ELBO are trivial. In our experiments, the robustness brought about by the use of target-informed dynamics can reduce the number of parameters that must be trained and improve generalizability.

There are many possible extensions of this approach. In particular, Hamiltonian dynamics preserves the Hamiltonian and hence also the corresponding target distribution, but there exist other deterministic dynamics which leave the target distribution invariant but not the Hamiltonian. This includes, in particular, the Nosé-Hoover thermostat. It is directly possible to use these dynamics instead of the Hamiltonian dynamics within the framework developed in subsection 2.3 and, in continuous-time, related ideas have appeared in physics Cuendet [2006], Procacci et al. [2006], Schöll-Paschinger and Dellago [2006]. This comes at the cost of slightly more complicated Jacobian calculations, although these can be simplified in certain regimes.

Another possibility is to allow the cooling parameters to roam freely – i.e. not be constrained to a specific tempering schedule – and learn them variationally. This algorithmic improvement could also be combined with the standard amortization approach in which the variational flow parameters are outputs of the inference network, but we leave that for future work at this time.

## References

- Abadi et al. [2015] Martín Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.
- Berg et al. [2018] Rianne van den Berg, Leonard Hasenclever, Jakub M Tomczak, and Max Welling. Sylvester normalizing flows for variational inference. arXiv preprint arXiv:1803.05649, 2018.
- Burda et al. [2016] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. In The 4th International Conference on Learning Representations (ICLR), 2016.
- Crooks [1998] Gavin E Crooks. Nonequilibrium measurements of free energy differences for microscopically reversible Markovian systems. Journal of Statistical Physics, 90(5-6):1481–1487, 1998.
- Cuendet [2006] Michel A Cuendet. Statistical mechanical derivation of Jarzynski’s identity for thermostated non-Hamiltonian dynamics. Physical Review Letters, 96(12):120602, 2006.
- Del Moral et al. [2006] Pierre Del Moral, Arnaud Doucet, and Ajay Jasra. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(3):411–436, 2006.
- Dosovitskiy et al. [2015] Alexey Dosovitskiy, Jost Tobias Springenberg, and Thomas Brox. Learning to generate chairs with convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, pages 1538–1546. IEEE, 2015.
- Glasserman [1991] Paul Glasserman. Gradient estimation via perturbation analysis, volume 116. Springer Science & Business Media, 1991.
- Hoffman [2017] Matthew D Hoffman. Learning deep latent Gaussian models with Markov chain Monte Carlo. In International Conference on Machine Learning, pages 1510–1519, 2017.
- Hoffman and Gelman [2014] Matthew D Hoffman and Andrew Gelman. The No-U-Turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15(1):1593–1623, 2014.
- Jarzynski [1997] Christopher Jarzynski. Nonequilibrium equality for free energy differences. Physical Review Letters, 78(14):2690, 1997.
- Jarzynski [2000] Christopher Jarzynski. Hamiltonian derivation of a detailed fluctuation theorem. Journal of Statistical Physics, 98(1-2):77–102, 2000.
- Kingma and Welling [2014] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In The 2nd International Conference on Learning Representations (ICLR), 2014.
- Kingma et al. [2016] Diederik P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems, pages 4743–4751, 2016.
- Maddison et al. [2017] Chris J Maddison, John Lawson, George Tucker, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, and Yee Teh. Filtering variational objectives. In Advances in Neural Information Processing Systems, pages 6576–6586, 2017.
- Neal [2001] Radford M Neal. Annealed importance sampling. Statistics and Computing, 11(2):125–139, 2001.
- Neal [2005] Radford M Neal. Hamiltonian importance sampling. www.cs.toronto.edu/pub/radford/his-talk.ps, 2005. Talk presented at the Banff International Research Station (BIRS) workshop on Mathematical Issues in Molecular Dynamics.
- Neal et al. [2011] Radford M Neal et al. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2(11), 2011.
- Procacci et al. [2006] Piero Procacci, Simone Marsili, Alessandro Barducci, Giorgio F Signorini, and Riccardo Chelli. Crooks equation for steered molecular dynamics using a Nosé-Hoover thermostat. The Journal of Chemical Physics, 125(16):164101, 2006.
- Rezende and Mohamed [2015] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In International Conference on Machine Learning, pages 1530–1538, 2015.
- Rezende et al. [2014] Danilo Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, pages 1278–1286, 2014.
- Salakhutdinov and Murray [2008] Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th international conference on Machine learning, pages 872–879. ACM, 2008.
- Salimans et al. [2015] Tim Salimans, Diederik P Kingma, and Max Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. In International Conference on Machine Learning, pages 1218–1226, 2015.
- Schöll-Paschinger and Dellago [2006] E Schöll-Paschinger and Christoph Dellago. A proof of Jarzynski’s nonequilibrium work theorem for dynamical systems that conserve the canonical distribution. The Journal of Chemical Physics, 125(5):054105, 2006.
- Tieleman and Hinton [2012] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2):26–31, 2012.
- Wolf et al. [2016] Christopher Wolf, Maximilian Karl, and Patrick van der Smagt. Variational inference with Hamiltonian Monte Carlo. arXiv preprint arXiv:1609.08203, 2016.