# Relativistic Monte Carlo

###### Abstract

Hamiltonian Monte Carlo (HMC) is a popular Markov chain Monte Carlo (MCMC) algorithm that generates proposals for a Metropolis-Hastings algorithm by simulating the dynamics of a Hamiltonian system. However, HMC is sensitive to large time discretizations and performs poorly if there is a mismatch between the spatial geometry of the target distribution and the scales of the momentum distribution. In particular the mass matrix of HMC is hard to tune well.

In order to alleviate these problems we propose relativistic Hamiltonian Monte Carlo, a version of HMC based on relativistic dynamics that introduce a maximum velocity on particles. We also derive stochastic gradient versions of the algorithm and show that the resulting algorithms bear interesting relationships to gradient clipping, RMSprop, Adagrad and Adam, popular optimisation methods in deep learning. Based on this, we develop relativistic stochastic gradient descent by taking the zero-temperature limit of relativistic stochastic gradient Hamiltonian Monte Carlo. In experiments we show that the relativistic algorithms perform better than classical Newtonian variants and Adam.

## 1 Introduction

Markov chain Monte Carlo (MCMC) techniques based on continuous-time physical systems allow the efficient simulation of posterior distributions, and are an important mainstay of Bayesian machine learning and statistics. Hamiltonian Monte Carlo (HMC) [1, 2, 3, 4] is based on Newtonian dynamics on a frictionless surface, and has been argued to be more efficient than techniques based on diffusions [5]. On the other hand, stochastic gradient MCMC techniques based on diffusive dynamics [6, 7, 8, 9] have allowed scalable Bayesian learning using mini-batches.

An important consideration when designing such MCMC algorithms is adaptation or tuning to the geometry of the space under consideration [10, 11, 12]. To give a concrete example, consider HMC. Let $\pi(\theta) \propto \exp(-U(\theta))$ be a target density, where $U(\theta)$ is interpreted as the potential energy of a particle at location $\theta \in \mathbb{R}^d$. HMC introduces an auxiliary momentum variable $p$ so that the joint distribution is $\pi(\theta, p) \propto \exp(-H(\theta, p))$, where the Hamiltonian is $H(\theta, p) = U(\theta) + K(p)$. The quantity $K(p) = \frac{p^\top p}{2m}$, where $m$ is the mass of the particle, represents the kinetic energy. Denoting by $\dot\theta$ and $\dot p$ the time derivatives of $\theta$ and $p$, the leapfrog discretisation [2] of Hamilton's equations $\dot\theta = \partial H / \partial p$ and $\dot p = -\partial H / \partial \theta$ gives

$$
p_{t+\epsilon/2} = p_t - \tfrac{\epsilon}{2} \nabla U(\theta_t), \qquad
\theta_{t+\epsilon} = \theta_t + \epsilon\, p_{t+\epsilon/2} / m, \qquad
p_{t+\epsilon} = p_{t+\epsilon/2} - \tfrac{\epsilon}{2} \nabla U(\theta_{t+\epsilon}), \tag{1}
$$

where $\epsilon$ is the time discretisation and the velocity is $v = p/m$. If $m$ is too small, the particle travels too fast, leading to an accumulation of discretisation error. To compensate, $\epsilon$ needs to be set small and the required computational cost increases. On the other hand, if $m$ is too large, the particle travels slowly, resulting in slow mixing of the resulting Markov chain. While the mass parameter $m$ can be tuned, e.g. to optimise the acceptance rate according to theory [11], it only incidentally controls the velocity, which ultimately affects the discretisation error and algorithm stability.
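As a concrete illustration, here is a minimal sketch of the leapfrog scheme (1) on a standard Gaussian target $U(\theta) = \|\theta\|^2/2$; the names (`grad_U`, `leapfrog`) are ours, not from the paper. The near-conservation of the Hamiltonian is what keeps the Metropolis-Hastings acceptance rate high:

```python
import numpy as np

def grad_U(theta):
    return theta  # gradient of U(theta) = ||theta||^2 / 2

def leapfrog(theta, p, eps, L, m=1.0):
    theta, p = theta.copy(), p.copy()
    for _ in range(L):
        p = p - 0.5 * eps * grad_U(theta)  # half step for momentum
        theta = theta + eps * p / m        # full step for position, velocity p/m
        p = p - 0.5 * eps * grad_U(theta)  # half step for momentum
    return theta, p

def H(theta, p, m=1.0):
    return 0.5 * theta @ theta + 0.5 * p @ p / m

rng = np.random.default_rng(0)
theta0, p0 = rng.normal(size=2), rng.normal(size=2)
theta1, p1 = leapfrog(theta0, p0, eps=0.1, L=50)
# the Hamiltonian is approximately conserved (error is O(eps^2)):
print(abs(H(theta1, p1) - H(theta0, p0)))
```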

In this paper, we are interested in making MCMC algorithms based on physical simulations more robust by directly controlling the velocity of the particle. This is achieved by replacing Newtonian dynamics in HMC with relativistic dynamics [13], whereby particles cannot travel faster than the “speed of light”. We also develop relativistic variants of stochastic gradient MCMC algorithms and show that they work better and are more robust than the classical Newtonian variants.

The relativistic MCMC algorithms we develop have interesting relationships with a number of optimisation algorithms popular in deep learning. Firstly, the maximum allowable velocity (speed of light) is reminiscent of gradient clipping [14]. Our framework gives Bayesian alternatives to gradient clipping, in the sense that our algorithms demonstrably sample from the target distribution (exactly or approximately) rather than optimising it. Secondly, the resulting formulas (see (4)), which include normalisations by norms, bear strong resemblances to (but are distinct from) RMSprop, Adagrad and Adam [15, 16, 17]. Motivated by these connections, we develop a relativistic stochastic gradient descent (SGD) algorithm by taking the zero-temperature limit of relativistic SGHMC, and show in an experiment on feedforward networks trained on MNIST that it achieves better performance than Adam.

## 2 Relativistic Hamiltonian Dynamics

Our starting point is the Hamiltonian which governs dynamics in special relativity [13],

$$
H(\theta, p) = U(\theta) + K(p), \tag{2}
$$

$$
K(p) = m c^2 \left( \frac{p^\top p}{m^2 c^2} + 1 \right)^{1/2}, \tag{3}
$$

where the target density is $\pi(\theta) \propto \exp(-U(\theta))$, $\theta \in \mathbb{R}^d$ is interpreted as the position of the particle, $p \in \mathbb{R}^d$ is a momentum variable, and $K(p)$ is the relativistic kinetic energy. The two tunable hyperparameters are a scalar "rest mass" $m$ and the "speed of light" $c$, which bounds the particle's speed. The joint distribution $\pi(\theta, p) \propto \exp(-H(\theta, p))$ is separable, with the momentum variable having marginal distribution $\propto \exp(-K(p))$, a multivariate generalisation of the symmetric hyperbolic distribution.

The resulting dynamics are given by Hamilton’s equations, which read

$$
\frac{d\theta}{dt} = \frac{\partial H}{\partial p} = M(p)^{-1} p, \qquad
M(p) = m \left( \frac{p^\top p}{m^2 c^2} + 1 \right)^{1/2}, \qquad
\frac{dp}{dt} = -\frac{\partial H}{\partial \theta} = -\nabla U(\theta), \tag{4}
$$

where $M(p)$ can be interpreted as a relativistic mass and $M(p)^{-1} p$ is the velocity of the particle (c.f. the velocity under Newtonian dynamics is $p/m$). Note that the relativistic mass is lower bounded by $m$ and increases asymptotically to $\|p\|/c$ as the momentum increases, so that the speed $\|M(p)^{-1} p\|$ is upper bounded by $c$ and asymptotes to $c$ as $\|p\| \to \infty$. On the other hand, the larger the rest mass $m$, the smaller the typical "cruising" speed of the particle. Conversely, as $m \to 0$ the particle will travel at the speed of light at all times, i.e. it behaves like a photon. This gives an intuition for tuning both hyperparameters $m$ and $c$ based on knowledge about the length scale of the target density: we choose $c$ as an upper bound on the speed at which the parameter of interest changes at each iteration, while we choose $m$ to control the typical sensible speed at which the parameter changes. We will demonstrate this intuition in the experimental Section 5.

In very high dimensional problems (e.g. those in deep learning, collaborative filtering or probabilistic modelling), the maximum overall speed imposed on the system might need to be very large so that reasonably large changes in each coordinate are possible at each step of the algorithm. This means that each coordinate could in principle achieve a much higher speed than desirable. An alternative approach is to upper bound the speed at which each coordinate changes by choosing the following relativistic kinetic energy

$$
K(p) = \sum_{j=1}^{d} m_j c_j^2 \left( \frac{p_j^2}{m_j^2 c_j^2} + 1 \right)^{1/2}, \tag{5}
$$

where $j$ indexes the coordinates of the $d$-dimensional system, and each coordinate can have its own mass $m_j$ and speed of light $c_j$. This leads to the same Hamiltonian dynamics (4), but with all variables interpreted as vectors and all arithmetic operations interpreted element-wise. Experimental results will be based on this separable variant, which showed consistently better performance. For simplicity, the theoretical sections will describe only the non-separable version (3).

### 2.1 Relativistic Hamiltonian Monte Carlo

As a demonstration of the relativistic Monte Carlo framework, we derive a relativistic variant of the Hamiltonian Monte Carlo (HMC) algorithm [2, 1]. In the following, we will refer to all classical variants as Newtonian as they follow Newtonian dynamics (e.g. Newtonian HMC (NHMC) vs relativistic HMC (RHMC)).

Each iteration of HMC involves first sampling the momentum variable, followed by a series of leapfrog steps, followed by a Metropolis-Hastings accept/reject step. The momentum can be simulated by first simulating the speed $\|p\|$ and then simulating $p$ uniformly distributed on the sphere of radius $\|p\|$. The speed has marginal distribution given by a symmetric hyperbolic distribution, for which specialised random variate generators exist. Alternatively, the density is log-concave, and we used adaptive rejection sampling to simulate it. The leapfrog steps [18] with stepsize $\epsilon$ follow (4) directly: set $(\theta_0, p_0)$ to the current location and momentum, and for $t = 0, \epsilon, 2\epsilon, \dots$,

$$
p_{t+\epsilon/2} = p_t - \tfrac{\epsilon}{2} \nabla U(\theta_t), \qquad
\theta_{t+\epsilon} = \theta_t + \epsilon\, M(p_{t+\epsilon/2})^{-1} p_{t+\epsilon/2}, \qquad
p_{t+\epsilon} = p_{t+\epsilon/2} - \tfrac{\epsilon}{2} \nabla U(\theta_{t+\epsilon}). \tag{6}
$$

The leapfrog steps leave the Hamiltonian approximately invariant and are volume-preserving [19], so that the MH acceptance probability is simply $\min\left(1, \exp(H(\theta_0, p_0) - H(\theta', p'))\right)$, where $(\theta', p')$ is the proposed state.
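Putting the pieces together, the following is a minimal 1D sketch of RHMC on a standard Gaussian target. For the momentum refresh it draws the speed from its marginal via a grid-based inverse CDF, a simple stand-in for the adaptive rejection sampler described above; all names and tuning values here are illustrative:

```python
import numpy as np

m, c, eps, L = 1.0, 2.0, 0.2, 5

def K(p):  # relativistic kinetic energy (3), 1D
    return m * c**2 * np.sqrt(p**2 / (m**2 * c**2) + 1.0)

def grad_U(theta):
    return theta  # U(theta) = theta^2 / 2

# precompute an inverse CDF for the speed r >= 0, with density prop. to exp(-K(r))
r = np.linspace(0.0, 30.0 * m * c, 20000)
w = np.exp(-(K(r) - K(0.0)))
cdf = np.cumsum(w); cdf /= cdf[-1]

rng = np.random.default_rng(1)
theta, samples = 0.0, []
for _ in range(5000):
    # momentum refresh: random speed times a random direction (sign, in 1D)
    p = np.interp(rng.random(), cdf, r) * rng.choice([-1.0, 1.0])
    theta0, H0 = theta, 0.5 * theta**2 + K(p)
    for _ in range(L):  # relativistic leapfrog (6)
        p -= 0.5 * eps * grad_U(theta)
        theta += eps * p / (m * np.sqrt(p**2 / (m**2 * c**2) + 1.0))
        p -= 0.5 * eps * grad_U(theta)
    if rng.random() >= np.exp(min(0.0, H0 - (0.5 * theta**2 + K(p)))):
        theta = theta0  # Metropolis-Hastings reject
    samples.append(theta)

samples = np.asarray(samples)
print(samples.mean(), samples.var())  # should be close to 0 and 1
```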

Observe that the momentum is unbounded and may become very large in the presence of large gradients in the potential energy. However, the size of the update to $\theta$ in each leapfrog step is bounded by $\epsilon c$, and therefore the stability of the proposed sampler can be controlled. This behaviour is essential for good algorithmic performance on complex models such as neural networks, where the scales of gradients can vary significantly across different parameters and may not be indicative of the optimal scales of parameter changes. This is consistent with past experience optimising neural networks, where it is important to adapt the learning rates individually for each parameter so that typical parameter changes stay in a sensible range [14, 15, 16, 17, 20]. Such adaptation techniques have also been explored for stochastic gradient MCMC techniques [21, 22], but we will argue in Sections 3.2 and 5 that they introduce another form of instability that is not present in the relativistic approach.

## 3 Relativistic Stochastic Gradient Markov Chain Monte Carlo

In recent years stochastic gradient MCMC (SGMCMC) algorithms have been very well explored as methods to scale up Bayesian learning by using mini-batches of data [6, 9, 8, 7, 23]. In this section we develop relativistic variants of SGHMC [9] and SGNHT [8, 23]. These algorithms include momenta, which serve as reservoirs of previous gradient computations and thus can integrate and smooth out gradient signals from previous mini-batches of data. As noted earlier, because the momentum can be large, particularly as the stochastic gradients can have large variance, the resulting updates to $\theta$ can be overly large, and small step sizes are required for stability, potentially leading to slower convergence. This motivates our development of relativistic variants.

We make use of the framework of [7] for deriving SGMCMC algorithms. We note, however, that the same characterisations were obtained much earlier in [24, 25], with partial results appearing even earlier in the physics literature. Let $z$ be a collection of variables with target distribution $\pi(z) \propto \exp(-H(z))$. Consider an SDE of the form

$$
dz = \left[ -\left( D(z) + Q(z) \right) \nabla H(z) + \Gamma(z) \right] dt + \sqrt{2 D(z)}\, dW(t), \qquad
\Gamma_i(z) = \sum_j \frac{\partial}{\partial z_j} \left( D_{ij}(z) + Q_{ij}(z) \right), \tag{7}
$$

where $D(z)$ is a symmetric positive-definite diffusion matrix, $Q(z)$ is a skew-symmetric matrix which describes energy-conserving dynamics, $\Gamma(z)$ is a correction factor, and $W(t)$ is a Wiener process (Brownian motion) of the appropriate dimension. [7] showed that under mild conditions the SDE converges to the desired stationary distribution $\propto \exp(-H(z))$. Hence in the following we simply have to choose appropriate $H$, $D$ and $Q$. Once the correction factor $\Gamma$ is computed, the SDE discretised, and a stochastic estimate of $\nabla U(\theta)$ substituted, we obtain a correct relativistic SGMCMC algorithm: as the step size goes to zero, the stochastic gradient noise has asymptotically negligible variance compared with the noise injected by the Wiener process.
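As a sanity check of the recipe, the sketch below simulates (7) with constant $D$ and $Q$ (so that $\Gamma = 0$) on a one-dimensional Gaussian, with $D = 1$ and $Q = 0$, which recovers overdamped Langevin dynamics; the function names are ours:

```python
import numpy as np

# Euler-Maruyama step for dz = -(D+Q) grad_H dt + sqrt(2 D) dW, constant D, Q.
def sde_step(z, grad_H, D, Q, eps, rng):
    drift = -(D + Q) @ grad_H(z)  # Gamma vanishes for constant D, Q
    noise = np.sqrt(eps) * np.linalg.cholesky(2.0 * D) @ rng.normal(size=len(z))
    return z + eps * drift + noise

rng = np.random.default_rng(4)
D, Q = np.array([[1.0]]), np.array([[0.0]])  # Langevin: pure diffusion, no Q
z, samples = np.zeros(1), []
for _ in range(50000):
    z = sde_step(z, lambda z: z, D, Q, eps=0.05, rng=rng)  # H(z) = z^2 / 2
    samples.append(z[0])
print(np.var(samples))  # close to the target variance 1
```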

### 3.1 Relativistic Stochastic Gradient Hamiltonian Monte Carlo

Suppose our noisy gradient estimate $\nabla \tilde U(\theta)$ of $\nabla U(\theta)$ is based on a minibatch of data. Then, appealing to the central limit theorem, we can assume that $\nabla \tilde U(\theta) \approx \mathcal{N}(\nabla U(\theta), B(\theta))$ for some covariance $B(\theta)$. Let $z = (\theta, p)$ and let $H$ be the relativistic Hamiltonian in (3). Choosing

$$
D(z) = \begin{pmatrix} 0 & 0 \\ 0 & D \end{pmatrix}, \qquad
Q(z) = \begin{pmatrix} 0 & -I \\ I & 0 \end{pmatrix}, \tag{8}
$$

where $D$ is a fixed symmetric positive-definite diffusion matrix, results in the following relativistic SGHMC dynamics:

$$
d\theta = M(p)^{-1} p\, dt, \qquad
dp = \left( -\nabla U(\theta) - D\, M(p)^{-1} p \right) dt + \sqrt{2D}\, dW(t).
$$

Using a simple Euler-Maruyama discretisation, the relativistic SGHMC algorithm is,

$$
p_{t+1} = p_t - \epsilon_t \nabla \tilde U(\theta_t) - \epsilon_t D\, M(p_t)^{-1} p_t + \mathcal{N}\!\left(0, \epsilon_t \left( 2D - \epsilon_t \hat B_t \right)\right), \qquad
\theta_{t+1} = \theta_t + \epsilon_t M(p_{t+1})^{-1} p_{t+1}, \tag{9}
$$

where $\hat B_t$ is an estimate of the noise coming from the stochastic gradient $\nabla \tilde U(\theta_t)$. The term $D\, M(p)^{-1} p$ can be interpreted as friction, which prevents the kinetic energy from building up and corrects for the noise coming from the stochastic gradient.
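A minimal sketch of the update (9) on a conjugate Gaussian model, with scalar friction $D$ and the noise estimate $\hat B_t$ set to zero for simplicity; the names and toy numbers are ours:

```python
import numpy as np

# Toy model: x_i ~ N(theta, 1) with prior theta ~ N(0, 1), minibatch gradients.
rng = np.random.default_rng(2)
N, B = 1000, 50
data = rng.normal(0.5, 1.0, N)
m, c, Dfric, eps = 1.0, 1.0, 1.0, 1e-3

def grad_U_hat(theta):  # minibatch estimate of the potential's gradient
    batch = rng.choice(data, size=B, replace=False)
    return theta + (N / B) * np.sum(theta - batch)

theta, p, samples = 0.0, 0.0, []
for t in range(20000):
    M = m * np.sqrt(p**2 / (m**2 * c**2) + 1.0)  # relativistic mass
    p += -eps * grad_U_hat(theta) - eps * Dfric * p / M \
         + np.sqrt(2.0 * Dfric * eps) * rng.normal()
    M = m * np.sqrt(p**2 / (m**2 * c**2) + 1.0)
    theta += eps * p / M                         # bounded move: |step| <= eps*c
    if t >= 2000:                                # discard burn-in
        samples.append(theta)

post_mean = data.sum() / (N + 1.0)               # exact posterior mean
print(np.mean(samples), post_mean)
```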

It is useful to compare relativistic SGHMC with preconditioned SGLD [21, 22], which attempts to adapt the SGLD algorithm to the geometry of the space using adaptations similar to RMSProp, Adagrad or Adam. The relevant term above is the update to $\theta$:

$$
\theta_{t+1} = \theta_t + \epsilon_t \frac{p_{t+1}}{m \left( \frac{p_{t+1}^\top p_{t+1}}{m^2 c^2} + 1 \right)^{1/2}}. \tag{10}
$$

Note the surprising similarity to RMSProp, Adagrad and Adam, the main difference being that the relativistic mass adaptation uses the current momentum instead of a separate estimate based on the square of the gradient. This has the advantage that relativistic SGHMC enforces a maximum speed of change. In contrast, preconditioned SGLD has the following failure mode, which we observe in Section 5: when the gradient is small, the adaptation scales up the gradient so that the gradient update has a reasonable size. However, it also scales up the injected noise, which can end up being significantly larger than the gradient update, making the algorithm unstable.
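This contrast can be made concrete with a toy calculation (our own numbers): the relativistic update (10) is bounded by $\epsilon c$ for any momentum, whereas an RMSProp-style rescaling of the injected noise by $1/\sqrt{v}$ blows up when the accumulated squared-gradient $v$ is tiny:

```python
import numpy as np

eps, m, c = 0.1, 1.0, 1.0
for p in [1e-3, 1.0, 1e6]:
    step = eps * p / (m * np.sqrt(p**2 / (m**2 * c**2) + 1.0))
    assert abs(step) <= eps * c   # the relativistic update is bounded, per (10)

v = 1e-12                         # tiny squared-gradient history
amplification = 1.0 / np.sqrt(v)  # RMSProp-style rescaling of injected noise
print(amplification)              # ~1e6: the noise is hugely amplified
```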

### 3.2 Relativistic Stochastic Gradient Descent (with Momentum)

Motivated by the relationship to RMSprop, Adagrad and Adam, we develop a relativistic stochastic gradient descent (RSGD) algorithm with momentum by taking the zero-temperature limit of the RSGHMC dynamics. This idea connects to Santa [26], a recently developed algorithm in which an annealing scheme on the system temperature makes it possible to obtain a stochastic optimisation algorithm starting from a Bayesian one.

From thermodynamics [27], the canonical (Gibbs-Boltzmann) density is proportional to $\exp(-\beta H(\theta, p))$, where $\beta = 1/(k_B T)$, with $k_B$ the Boltzmann constant and $T$ the temperature. Previously we have been using $\beta = 1$, which corresponds to the posterior distribution. For general $\beta$,

$$
\pi_\beta(\theta, p) \propto \exp\left( -\beta U(\theta) - \beta K(p) \right). \tag{11}
$$

By taking $\beta \to \infty$, the target distribution becomes more peaked around the MAP estimator. Simulated annealing [28, 29, 26], which increases $\beta$ over time, forces the sampler to converge to a MAP estimator. Instead, we can derive RSGD by rescaling time as well, guaranteeing a non-degenerate limit process. Applying the recipe (7) to the tempered Hamiltonian $\beta H$ and rescaling time by a factor $\beta$, the dynamics at inverse temperature $\beta$ read

$$
d\theta = M(p)^{-1} p\, dt, \qquad
dp = \left( -\nabla U(\theta) - D\, M(p)^{-1} p \right) dt + \sqrt{2 \beta^{-1} D}\, dW(t), \tag{12}
$$

and letting $\beta \to \infty$, so that the injected noise vanishes, we obtain the following ODE:

$$
\frac{d\theta}{dt} = M(p)^{-1} p, \qquad
\frac{dp}{dt} = -\nabla U(\theta) - D\, M(p)^{-1} p. \tag{13}
$$

Discretising the above then gives RSGD. Notice that if the above converges, i.e. $\dot\theta = 0$ and $\dot p = 0$, it does so at a critical point of $U$. Similar to other adaptation schemes, RSGD adaptively rescales the learning rates for different parameters, which enables effective learning, especially in high-dimensional settings. Moreover, the update in each iteration is upper bounded by the speed of light. Our algorithm differs from others through the use of a momentum, and through adapting based on the momentum instead of the average of squared gradients.
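A minimal sketch of RSGD obtained by an Euler discretisation of (13), applied to the quadratic $U(\theta) = \|\theta\|^2/2$; each move is bounded in norm by $\epsilon c$, and the names are ours:

```python
import numpy as np

def rsgd_step(theta, p, grad, eps=0.05, m=1.0, c=1.0, D=1.0):
    Mp = m * np.sqrt(p @ p / (m**2 * c**2) + 1.0)  # relativistic mass
    p = p - eps * grad - eps * D * p / Mp          # momentum update with friction
    Mp = m * np.sqrt(p @ p / (m**2 * c**2) + 1.0)
    return theta + eps * p / Mp, p                 # bounded move: norm <= eps*c

theta, p = np.array([5.0, -3.0]), np.zeros(2)
for _ in range(2000):
    theta, p = rsgd_step(theta, p, grad=theta)     # grad of U = ||theta||^2 / 2
print(np.linalg.norm(theta))  # converges towards the critical point at 0
```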

## 4 A Stochastic Gradient Nosé-Hoover Thermostat for Relativistic Hamiltonian Monte Carlo

Borrowing a second concept from physics, SGHMC can be improved by introducing a dynamic variable $\xi$ that adaptively increases or decreases the momenta. The new variable can be thought of as a thermostat in a statistical physics setting, and its dynamics can be expressed as

$$
d\theta = p\, dt, \qquad
dp = \left( -\nabla U(\theta) - \xi p \right) dt + \sqrt{2D}\, dW(t), \qquad
d\xi = \left( \tfrac{1}{d} p^\top p - 1 \right) dt. \tag{14}
$$

The idea is that the system adaptively changes the friction $\xi$ for the momentum, 'heating' or 'cooling down' the system. The dynamics of this new variable, known as a Nosé-Hoover [30] thermostat due to its links to statistical physics, has been shown to remove the additional bias due to the stochastic gradient provided that the noise is isotropic Gaussian and spatially constant ([8], [19]). In general, the noise is neither Gaussian, spatially constant nor isotropic. Nevertheless, there is numerical evidence that the thermostat increases stability and mixing. Heuristically, the dynamics for $\xi$ can be motivated by the fact that at equilibrium we have

$$
\mathbb{E}\left[ \nabla K(p)^\top \nabla K(p) \right] = \mathbb{E}\left[ \Delta K(p) \right],
$$

and hence $\mathbb{E}\left[ \nabla K(p)^\top \nabla K(p) - \Delta K(p) \right] = 0$. The additional dynamics push the system towards this equilibrium identity, suggesting that the distribution will be moved closer to equilibrium. This gives a recipe for a stochastic gradient Nosé-Hoover thermostat with a general kinetic energy $K(p)$.

We first augment the Hamiltonian with the thermostat variable $\xi$:

$$
H(\theta, p, \xi) = U(\theta) + K(p) + \tfrac{1}{2} \left( \xi - D \right)^2.
$$

We are now in the position to derive the SDE preserving the probability density $\propto \exp(-H(\theta, p, \xi))$ by adopting the framework of [7] and defining:

$$
z = (\theta, p, \xi), \tag{15}
$$

$$
D(z) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & D & 0 \\ 0 & 0 & 0 \end{pmatrix}, \tag{16}
$$

$$
Q(z) = \begin{pmatrix} 0 & -I & 0 \\ I & 0 & \nabla K(p) \\ 0 & -\nabla K(p)^\top & 0 \end{pmatrix}. \tag{17}
$$

From (7) it follows that $\Gamma(z) = \left( 0, 0, -\Delta K(p) \right)^\top$ and the dynamics become

$$
d\theta = \nabla K(p)\, dt, \qquad
dp = \left( -\nabla U(\theta) - \xi \nabla K(p) \right) dt + \sqrt{2D}\, dW(t), \qquad
d\xi = \left( \nabla K(p)^\top \nabla K(p) - \Delta K(p) \right) dt,
$$

where $\Delta$ is the Laplace operator, defined as $\Delta f = \sum_i \partial^2 f / \partial p_i^2$. For the relativistic kinetic energy (3), we have $\nabla K(p) = M(p)^{-1} p$ with $M(p) = m \left( p^\top p / (m^2 c^2) + 1 \right)^{1/2}$ a scalar, and $\Delta K(p) = d\, M(p)^{-1} - p^\top p / (c^2 M(p)^3)$. The stochastic gradient Nosé-Hoover thermostat for relativistic HMC then follows by discretising these dynamics and substituting the stochastic gradient estimate.
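The resulting sampler can be sketched in one dimension as follows, using an Euler discretisation with exact gradients on a standard Gaussian target; the names and tuning values are ours:

```python
import numpy as np

m, c, D, eps = 1.0, 3.0, 1.0, 0.01

def Kp(p):   # gradient of the relativistic kinetic energy, 1D
    return p / (m * np.sqrt(p**2 / (m**2 * c**2) + 1.0))

def Kpp(p):  # second derivative (the Laplacian in 1D)
    M = m * np.sqrt(p**2 / (m**2 * c**2) + 1.0)
    return 1.0 / M - p**2 / (c**2 * M**3)

rng = np.random.default_rng(3)
theta, p, xi, samples = 0.0, 0.0, D, []
for t in range(100000):
    theta += eps * Kp(p)
    p += -eps * theta - eps * xi * Kp(p) + np.sqrt(2.0 * D * eps) * rng.normal()
    xi += eps * (Kp(p)**2 - Kpp(p))  # thermostat update
    if t >= 10000:                   # discard burn-in
        samples.append(theta)
print(np.var(samples))  # close to 1 for the standard Gaussian target
```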

## 5 Experiments

### 5.1 Small examples

All the experimental results in this section are based on the separable version (5), as it gives superior results to the non-separable counterpart. We first explore the performance of the algorithms on a set of small examples, including a two-dimensional banana-shaped density (Banana) [31], and Gaussian mixture models (GMM1, GMM2, GMM3) obtained as equal-proportion mixtures of three Gaussians: in GMM1 the three components share the same variance, while in GMM2 and GMM3 the discrepancies between the component variances grow larger, producing a wider range of length scales and log-density gradients. The density plots of the examples can be found in the top row of Figure 3.

We start with an exploration of the behaviour of RHMC as the tuning parameters $\epsilon$, $m$ and $c$ are varied. First we considered the effective sample size (ESS) of the algorithm on the Banana and GMM1 targets. We varied both $\epsilon$ and $c$ over a grid, and computed the average ESS over independent chains for Banana and over 100 chains for GMM1. The ESS contour plots can be found in Figure 1, which suggests that $\epsilon$ and $c$ can be tuned independently: while $\epsilon$ controls the time discretisation of the continuous-time dynamics, $\epsilon c$ controls the maximum change in the parameters at each leapfrog step. Next we varied the mass parameter $m$ for GMM1, showing plots in Figure 2. As expected, the ESS is optimised at an intermediate value of $m$, and the average "cruising speed" decreases with $m$. In order to understand how to tune $m$, on the fourth panel we overlaid two contour plots: one for the ESS and the other for the average speed. We see that the cruising speed correlates much better with the ESS than $m$ does, which suggests that $m$ should be tuned via the cruising speed, e.g. by the user specifying a desired speed and $m$ being adapted to achieve it (noting that $m$ and the cruising speed have a monotonic relationship, which makes for easy adaptation).

We next compare the performance of NHMC and RHMC over a wide range of step sizes via the ESS (higher is better), the mean absolute error (MAE) between the true probabilities and the histograms of the sample frequencies (lower is better), and the log Stein discrepancy [32], a more accurate measure of sample quality (lower is better): the Wasserstein distance can be bounded in terms of the Stein discrepancy, which thus accounts for both bias and insufficient exploration of the target. The results can be found in rows 2-4 of Figure 3. RHMC achieves better performance and is strikingly more robust to the step size than NHMC. As expected, this behaviour is particularly pronounced when the step size is large. Moreover, when the gradients of the target span a large range of values (GMM2, GMM3), the improvements yielded by the relativistic variants are more pronounced. These results confirm that, since the speed of particles is bounded by $c$, RHMC is less sensitive to the presence of large gradients in the target density and more stable with respect to the choice of $\epsilon$, allowing for a more efficient exploration of the target density.

Next we compare both the Newtonian and relativistic variants of HMC and SGMCMC algorithms on a simulated 3-dimensional logistic regression example, using mini-batches for the stochastic versions of the algorithms. After a burn-in period, we calculated the Stein discrepancy for different step sizes $\epsilon$ while keeping the trajectory length fixed, matching the number of samples across algorithms to make a fair comparison. From Figure 4, we see that the relativistic variants are significantly more robust than the Newtonian variants. The NHT algorithms were able to correct for stochastic gradient noise and performed better than the SGHMC algorithms. In particular, RSGNHT had lower Stein discrepancies than the other algorithms for most values of $\epsilon$.

### 5.2 Neural Network

Turning to more complex models, we first considered a neural network with 50 hidden units, with weights initialised by the widely used Xavier initialisation. We used the Pima Indians dataset for binary classification (552 observations and 8 covariates) to compare the relativistic and the preconditioning approaches. These methods represent two different ways to normalise gradients so that the update sizes are reasonable for the local length scale of the target distribution. In particular, we consider SGLD Adam, namely a preconditioned SGLD algorithm with an additional Adam-style debiasing of the preconditioner. Figure 5 compares the test-set accuracy of SGLD Adam with RSGD and RHMC, showing that the former is significantly outperformed by the relativistic algorithms. Due to Xavier initialisation, all of the weights are small, which causes small gradients; the injected noise therefore becomes very large due to the rescaling by the inverse square root of the average squared gradients, which makes SGLD Adam unstable. The histograms reveal that at the first iteration SGLD Adam causes the weights to become extremely large, which strongly compromises its performance and takes a long time to recover from. The relativistic framework therefore represents a much better approach to adapting the learning rates specific to each parameter.

We then apply our algorithms to the standard MNIST dataset, which consists of handwritten digit images from 10 classes, with a training set of size 60,000 and a test set of size 10,000. We tested our optimisation algorithm on a single-layer and a multi-layer neural network. In Figure 6 a comparison with Adam and Santa [26] is displayed; their relation is discussed in more detail in Section 3.2. Note that, to ensure a fair comparison, we consider Santa SGD, namely a version of Santa that does not make use of symmetric splitting and simulated annealing. In other words, we adopt an Euler integration scheme for all algorithms and consider the zero-temperature limit for Santa. Our algorithm is competitive with Adam and is able to achieve a lower error rate. Moreover, RSGD performs significantly better than Santa SGD on all the considered architectures.

## 6 Conclusion

Our numerical experiments demonstrate that the relativistic algorithms discussed in this article are much more stable and robust to the choice of parameters and to noise in stochastic gradients than their Newtonian counterparts. Moreover, we have a good understanding of how to choose the parameters $\epsilon$, $m$ and $c$: first the discretisation parameter $\epsilon$ is set, then we choose the maximal step $\epsilon c$, and in relation we choose the "cruising speed" by picking $m$. The connection of our algorithms with popular stochastic optimisers such as Adam and RMSProp is novel and gives an interesting perspective from which to understand them.

Each of the proposed methodologies has scope for further research. The HMC version of the algorithm could be improved by employing more advanced HMC methodology, such as the NUTS variant [4], and by using partial momentum refreshment instead of adaptive rejection sampling [2]. Relativistic stochastic gradient descent appears very competitive with state-of-the-art stochastic gradient methods for fitting neural networks. Additionally, better numerical integration schemes could be employed. We also anticipate a variety of algorithms with different kinetic energies to be developed following our work. Last but not least, the strong simulation evidence should be complemented by more theoretical insights.

## Acknowledgement

XU thanks the PAG scholarship and New College for support. LH and VP are funded by the EPSRC doctoral training centre OXWASP through EP/L016710/1. YWT gratefully acknowledges EPSRC for research funding through grant EP/K009362/1. SJV thanks EPSRC for funding through EPSRC Grants EP/N000188/1 and EP/K009850/1.

## References

- [1] S. Duane, A.D. Kennedy, B.J. Pendleton, and D. Roweth. Hybrid Monte Carlo. Physics letters B, 195(2):216–222, 1987.
- [2] R. M. Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 54:113–162, 2010.
- [3] B. Carpenter, A. Gelman, M. Hoffman, D. Lee, B. Goodrich, M. Betancourt, M. A. Brubaker, J. Guo, P. Li, and A. Riddell. Stan: A probabilistic programming language. Journal of Statistical Software (in press), 2016.
- [4] M. D. Hoffman and A. Gelman. The No-U-Turn Sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15:1593–1623, 2014.
- [5] G. O. Roberts and R. L. Tweedie. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, pages 341–363, 1996.
- [6] M. Welling and Y.W. Teh. Bayesian Learning via Stochastic Gradient Langevin Dynamics. ICML, pages 681–688, 2011.
- [7] Y. Ma, T. Chen, and E. B. Fox. A Complete Recipe for Stochastic Gradient MCMC. In NIPS, 2015.
- [8] N. Ding, Y. Fang, R. Babbush, C. Chen, R. D. Skeel, and H. Neven. Bayesian Sampling Using Stochastic Gradient Thermostats. NIPS, pages 3203–3211, 2014.
- [9] T. Chen, E. Fox, and C. Guestrin. Stochastic Gradient Hamiltonian Monte Carlo. ICML, pages 1683–1691, 2014.
- [10] M. Girolami and B. Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B, 73(2):123–214, March 2011.
- [11] A. Beskos, N. Pillai, G. O. Roberts, J. M. Sanz-Serna, and A. M. Stuart. Optimal tuning of hybrid Monte Carlo algorithm. Bernoulli, 19:1501–1534, 2013.
- [12] S. Patterson and Y. W. Teh. Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex. NIPS, pages 3102–3110, 2013.
- [13] A. Einstein. On the Electrodynamics of Moving Bodies. Annalen der Physik, 17, 1905.
- [14] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. ICML, 2013.
- [15] T. Tieleman and G. Hinton. Lecture 6.5-RMSProp: Divide the gradient by a running average of its recent magnitude, 2012. COURSERA: Neural Networks for Machine Learning.
- [16] J. Duchi, E. Hazan, and Y. Singer. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. JMLR, 12:2121–2159, July 2011.
- [17] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. ICLR, 2015.
- [18] J. M. Sanz-Serna and M.P. Calvo. Numerical Hamiltonian problems. Applied mathematics and mathematical computation. Chapman & Hall, 1994.
- [19] B. Leimkuhler and X. Shang. Adaptive Thermostats for Noisy Gradient Systems. SIAM Journal on Scientific Computing, 38(2):A712–A736, 2016.
- [20] Umut Şimşekli, Roland Badeau, A Taylan Cemgil, and Gaël Richard. Stochastic Quasi-Newton Langevin Monte Carlo. ICML, 2016.
- [21] Chunyuan Li, Changyou Chen, David Carlson, and Lawrence Carin. Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks. In AAAI Conference on Artificial Intelligence, 2016.
- [22] L. Hasenclever, S. Webb, T. Lienart, Y. Whye Teh, S. Vollmer, B. Lakshminarayanan, and C. Blundell. Distributed Bayesian Learning with Stochastic Natural-gradient Expectation Propagation and the Posterior Server. ArXiv e-print 1512.09327, December 2015.
- [23] X. Shang, Z. Zhu, B. Leimkuhler, and A. J. Storkey. Covariance-Controlled Adaptive Langevin Thermostat for Large-Scale Bayesian Sampling. In NIPS, pages 37–45, 2015.
- [24] C. Villani. Hypocoercivity. Number 949-951. American Mathematical Soc., 2009.
- [25] G. Pavliotis. Stochastic processes and applications: Diffusion Processes, the Fokker-Planck and Langevin Equations, volume 60. Springer, 2014.
- [26] Changyou Chen, David Carlson, Zhe Gan, Chunyuan Li, and Lawrence Carin. Bridging the gap between stochastic gradient MCMC and stochastic optimization. AISTATS, 2016.
- [27] Normand M Laurendeau. Statistical Thermodynamics: Fundamentals and Applications. Cambridge University Press, 2005.
- [28] S. Geman and C. Hwang. Diffusions for global optimization. SIAM Journal on Control and Optimization, 24(5):1031–1043, 1986.
- [29] C. Andrieu and A. Doucet. Simulated annealing for maximum a posteriori parameter estimation of hidden Markov models. IEEE Transactions on Information Theory, 46(3):994–1004, 2000.
- [30] B. Leimkuhler and S. Reich. A Metropolis adjusted Nosé-Hoover thermostat. ESAIM: Mathematical Modelling and Numerical Analysis, 43:743–755, 7 2009.
- [31] H. Haario, E. Saksman, and J. Tamminen. Adaptive Proposal Distribution for Random Walk Metropolis Algorithm. Computational Statistics, 14(3):375–396, 1999.
- [32] J. Gorham and L. W. Mackey. Measuring Sample Quality with Stein’s Method. NIPS, pages 226–234, 2015.