Variational Measure Preserving Flows
Abstract
Probabilistic modelling is a general and elegant framework to capture the uncertainty, ambiguity and diversity of hidden structures in data. Probabilistic inference is the key operation on probabilistic models to obtain the distribution over the latent representations given data. Unfortunately, inference on complex models is extremely challenging to compute. In spite of the success of existing inference methods, like Markov chain Monte Carlo (MCMC) and variational inference (VI), many powerful models are not available for large-scale problems because inference is simply computationally intractable. Recent advances in using neural networks for probabilistic inference have shown promising results on this challenge. In this work, we propose a novel general inference framework that has the strengths of both MCMC and VI. The proposed method is not only computationally scalable and efficient, but also has its roots in the ergodic theorem, which guarantees better performance with more computational power. Our experimental results suggest that our method can outperform state-of-the-art methods on generative models and Bayesian neural networks on some popular benchmark problems.
Yichuan Zhang, José Miguel Hernández-Lobato, Zoubin Ghahramani
Department of Engineering, University of Cambridge
yichuan.zhang@eng.cam.ac.uk, jmh233@cam.ac.uk, zoubin@eng.cam.ac.uk
Preprint. Work in progress.
1 Introduction
Approximate statistical inference on probabilistic models with unnormalised density functions is fundamentally important in machine learning research. In Bayesian inference, the partition function of the posterior distribution is typically intractable to compute, so the success of Bayesian inference directly depends on approximate inference techniques. Undirected graphical models (UGMs) in general also have intractable partition functions, which make maximum likelihood estimation intractable. Because of this, training UGMs requires approximate inference to estimate gradients. Recent work on deep generative models (DGMs) combines probabilistic graphical models and deep neural networks (NNs), and has shown promising progress on many interesting unsupervised learning problems. However, inference on DGMs is even more challenging, because they are more complex than classic probabilistic models.
Research on new modelling techniques is often motivated by new inference techniques. Many popular models in machine learning enjoy their success in practice not only because they are powerful models but also because there are effective inference algorithms for those models. On the other hand, many other powerful models, like Boltzmann machines, have very limited applications due to the lack of efficient inference techniques. Most inference methods in machine learning research are variants of Markov chain Monte Carlo (MCMC) methods and variational inference (VI), originally developed in statistics and physics. Although it is difficult to overstate the contribution of MCMC and VI to machine learning in the last decade, they are certainly not satisfying for handling large-scale complex models like DGMs. Inspired by the idea of DGMs, there has been a trend of using NNs for inference in the last few years. In particular, variational autoencoders D.P. Kingma (2014) and normalizing flows Rezende and Mohamed (2015) are two well-known methods of using NNs in classic variational inference for training DGMs. However, one drawback of NN-based inference methods is that they are difficult to generalise to many different models. It often requires a lot of engineering effort to achieve good results.
In this work, we propose a novel approximate inference technique inspired by the idea of infinitely many parallel simulations of MCMC methods and recent advances in NN-based approximate inference. Our method has the strengths of both NN-based inference methods and classic MCMC. In particular, it is straightforward to accelerate the computation of our method using parallelised simulations on Graphics Processing Units. On the other hand, our method enjoys asymptotic convergence to the target distribution like classic MCMC, so it is much easier to apply to different models than NN-based methods.
2 Background
2.1 Deep Generative Models
We introduce the basic concepts and notation for inference in generative models. Generative models encode the underlying structures in the data as stochastic latent representations, which allow us to easily generate synthetic data by sampling from a distribution.
Formally, let $\mathcal{D} = \{x_1, \dots, x_N\}$ be a dataset formed by a collection of data points $x_n$. Assume that the data points in $\mathcal{D}$ follow the distribution $p(x)$. In many popular machine learning applications, like natural language processing and computer vision, data contain very rich and complex structure, so data distributions are typically high-dimensional and multimodal. Latent variable models are a powerful modelling technique for handling complex data distributions. Intuitively, a latent variable $z$ is introduced to represent the unobserved structure in the data, and the conditional distribution $p(x|z)$ specifies the data distribution given a specific latent representation $z$. With a predefined prior distribution $p(z)$, the joint distribution of data and latent representation is given by
$$p(x, z) = p(x|z)\,p(z).$$
Because we are interested in $x$ without $z$, we need to marginalise out $z$ to obtain the data distribution of the model, that is
$$p(x) = \int p(x|z)\,p(z)\,dz.$$
Although the joint probability function $p(x, z)$ is often known and efficient to compute, the marginal probability above is not MacKay (2002). Approximating the computationally intractable marginal probability is one of the great challenges in using latent variable models.
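As a toy illustration of these quantities, consider a linear-Gaussian latent variable model, one of the few cases where the marginal is tractable. The sketch below (all names are illustrative, not from the paper) draws from the joint $p(x, z) = p(x|z)\,p(z)$ by ancestral sampling and checks the marginal variance:

```python
import numpy as np

# Ancestral sampling from a toy linear-Gaussian latent variable model:
# z ~ p(z) = N(0, 1), then x | z ~ p(x|z) = N(2z, 1).
rng = np.random.default_rng(0)
z = rng.standard_normal(10_000)
x = 2.0 * z + rng.standard_normal(10_000)

# Here the marginal p(x) = ∫ p(x|z) p(z) dz is N(0, 5), tractable only
# because everything is linear-Gaussian; in general it is intractable.
print(round(np.var(x)))  # close to 5
```

Note that the tractability here is entirely an artefact of the linear-Gaussian assumption; replacing the mean `2.0 * z` with a neural network already makes the integral intractable.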
Recent advances in deep generative models D.P. Kingma (2014); Goodfellow et al. (2014) show that deep neural networks can be used to parametrise $p(x|z)$, which greatly boosts its representational power. Each distribution is identified by its parameters. In the context of deep generative models Rezende et al. (2014); D.P. Kingma (2014), there can be multiple layers of latent representations, with adjacent layers connected by deterministic nonlinear transformations. For example, given a latent representation $z$, the observation $x$ follows the distribution $p(x|\theta)$, where the parameter is defined as $\theta = g(z)$ and $g$ denotes a NN.
The most straightforward way to train a generative model is to fit the distribution to the data by maximising the marginal likelihood, that is
$$\theta^{*} = \arg\max_{\theta} \sum_{n=1}^{N} \log p_{\theta}(x_n).$$
Because $p(x)$ does not have closed form, Expectation-Maximisation (EM) MacKay (2002) is the classic training algorithm for latent variable generative models; it requires computing the expectation of $\log p(x, z)$ under the posterior distribution $p(z|x)$. Unfortunately, the posterior is intractable, because it requires the marginal likelihood as its partition function.
2.2 Variational Inference and Normalizing Flows
Variational inference (VI) is a popular technique to approximate distributions with intractable partition functions, like the posterior $p(z|x)$ in DGMs. The idea of VI is to approximate the target distribution by another distribution $q_{\phi}(z)$ from a parametric family with closed-form probability function. To find the distribution in the family that is closest to the distribution we want to approximate, in VI we optimize the variational lower bound
$$\mathcal{L}(\phi) = \mathbb{E}_{q_{\phi}(z)}\left[\log p(x, z) - \log q_{\phi}(z)\right] \qquad (1)$$
w.r.t. the variational parameters $\phi$. Obviously, the more flexible the family is, the tighter the variational lower bound can be. In classic mean-field variational methods, the proposal distribution is often in a factorized form, which is often not flexible enough to produce good approximations of complicated posteriors, like those in deep generative models.
Fortunately, because $q_{\phi}$ in VI can be any normalized probability function, we can use NNs to construct a flexible $q_{\phi}$ for complicated posteriors. There are two popular ways to do this. One way is to use another deep generative model to approximate the posterior, where the architecture of the NN in $q_{\phi}$ often follows the NN in the generative model. This is well known as variational autoencoders (VAEs) D.P. Kingma (2014). The parameters $\phi$ are the parameters of the NN. Rezende and Mohamed (2015) proposed an alternative flexible family of variational distributions called normalizing flows (NFs). Unlike VAEs, NFs are constructed by a sequence of compositions of invertible nonlinear transformations of a random variable $z_0$, that is
$$z_T = f_T \circ f_{T-1} \circ \cdots \circ f_1(z_0) \qquad (2)$$
where $q_0(z_0)$ is often a simple distribution, like a uniform or normal distribution. We use the shorthand notation $F = f_T \circ \cdots \circ f_1$ for the composition. In NFs, the variational parameters $\phi$ are the collection of parameters in $f_1, \dots, f_T$. Because $F$ is a deterministic smooth transformation, the density of $z_T$ is simply the pushforward of $q_0$ by $F$, that is
$$q_T(z_T) = q_0(z_0) \prod_{t=1}^{T} \left|\det \frac{\partial f_t}{\partial z_{t-1}}\right|^{-1} \qquad (3)$$
where the Jacobian terms are required if the transformations $f_t$ do not preserve volume. If every $f_t$ preserves volume, the equation above simplifies to
$$q_T(z_T) = q_0(z_0). \qquad (4)$$
Substituting (3) into (1) gives the variational lower bound for normalizing flows. If we reparameterise $z_T$ by $z_0$ using (2), we can rewrite the lower bound by (3) as
$$\mathcal{L}(\phi) = \mathbb{E}_{q_0(z_0)}\left[\log p(x, F(z_0)) + \sum_{t=1}^{T} \log\left|\det \frac{\partial f_t}{\partial z_{t-1}}\right| - \log q_0(z_0)\right] \qquad (5)$$
This is known as the reparameterization trick from D.P. Kingma (2014). It is straightforward to train normalizing flows by maximizing the variational lower bound using stochastic gradient descent (SGD).
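The change-of-variables density (3) and the reparameterised sampling behind (5) can be sketched numerically with a toy one-dimensional affine flow. Everything below (names, parameter values) is an illustrative assumption, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 1.0                        # parameters of the toy flow

def log_q0(z):                         # base density: standard normal
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi)

def flow(z0):                          # invertible transformation f
    return a * z0 + b

def log_qT(zT):                        # pushforward density, as in eq. (3)
    z0 = (zT - b) / a                  # invert the flow
    return log_q0(z0) - np.log(abs(a))  # subtract log|det Jacobian|

# Sanity check: the pushforward of N(0,1) under z -> 2z + 1 is N(1, 4).
zT = 0.3
analytic = -0.5 * (zT - b)**2 / a**2 - 0.5 * np.log(2 * np.pi * a**2)
assert np.isclose(log_qT(zT), analytic)

# Reparameterised Monte Carlo estimate in the style of (5): sample from
# the base distribution and push the samples through the flow.
z0 = rng.standard_normal(100_000)
est = np.mean(flow(z0))                # estimates E[z_T] = b = 1
print(round(est, 1))
```

The key point mirrored from the text: expectations under $q_T$ are rewritten as expectations under the simple base $q_0$, which is what makes SGD on the lower bound straightforward.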
Although both VAEs and NFs use powerful NNs to construct flexible variational distributions, it is still challenging to achieve state-of-the-art performance with VI compared with generative adversarial networks (GANs). There is no clear explanation of why VI does not perform as well as GANs. Matthew Hoffman pointed out that NN-based VI may not be flexible enough because it is restricted to parametric families with closed-form density functions.
2.3 Ergodicity and Markov chain Monte Carlo
Markov chain Monte Carlo (MCMC) is an alternative way to approximate a distribution, generating correlated but asymptotically unbiased samples from the distribution of interest by simulating ergodic Markov chains. To construct an ergodic Markov chain with stationary distribution $\pi$, we only need to know an unnormalized probability/density function of $\pi$. We denote the transition distribution of an MCMC chain from state $z$ to $z'$ by $K(z'|z)$. By the ergodic theorem, given any initial distribution $q_0$, the distribution $q_t$ after $t$ steps of a Markov chain with transition kernel $K$ converges towards $\pi$. Formally, that means for every $z$ in the sample space,
$$\lim_{t \to \infty} q_t(z) = \pi(z).$$
The ergodicity of MCMC chains implies that the distribution $\pi$ is preserved by $K$, that is
$$\pi(z') = \int K(z'|z)\,\pi(z)\,dz.$$
MCMC methods are very popular in statistics and physics. However, they often converge very slowly in machine learning applications. In particular, the convergence rate of a chain is often very sensitive to the choice of initial state and the parameters of the transition kernel. Due to the strong correlation between samples from MCMC chains, many more samples are often required to achieve the same level of reliable estimation as with i.i.d. samples Robert and Casella (2005). Because simulating Markov chains is inherently sequential, it is very difficult to parallelise MCMC methods. For this reason, MCMC methods are not as popular as variational inference in deep learning.
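To make the invariance and convergence statements above concrete, here is a minimal random-walk Metropolis sketch (a hypothetical toy, not the paper's method): many chains are run in parallel, and their empirical distribution converges to the target from a poor initial distribution because the kernel leaves the target invariant.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_pi(z):                         # unnormalised target density: N(3, 1)
    return -0.5 * (z - 3.0)**2

def mh_sweep(z, step=1.0):
    """One Metropolis step applied to a whole batch of chains at once."""
    prop = z + step * rng.standard_normal(z.shape)
    accept = np.log(rng.uniform(size=z.shape)) < log_pi(prop) - log_pi(z)
    return np.where(accept, prop, z)   # accepted proposals, else old states

# 2000 independent chains, all started far from the target mode.
chains = rng.standard_normal(2000) - 10.0
for _ in range(1000):
    chains = mh_sweep(chains)          # the kernel leaves pi invariant

print(round(chains.mean(), 1))         # close to the target mean 3.0
```

The batched update also illustrates the parallelisation point: while one chain is sequential in its steps, many chains can be advanced together in a single vectorised operation.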
3 Measure Preserving Flows
Here we present a novel family of approximate inference methods that has the strengths of both MCMC and VI. In this section we focus on approximating the posterior $p(z|x)$ in the general latent variable models from Section 2.1 as the motivating problem. However, it is worth clarifying that there is no reason to restrict our methods to training deep generative models. It is straightforward to apply them to many other inference problems, like inference in Bayesian neural networks.
3.1 Definition
The idea of our method is to construct an approximate distribution by a mixture of sequential deterministic measure-preserving dynamical systems that preserve the unnormalized density function of the target $\pi$. We call such simulation methods measure preserving flows (MPFs). Formally, let $z$ be a random variable with distribution $\pi$ in the Euclidean space $\mathbb{R}^d$. A measure $\mu$ on $\mathbb{R}^d$ is consistent with $\pi$ if, for any measurable subset $A$ under $\pi$, $\mu(A)$ and $\pi(A)$ differ only by a constant $c$, that is $\mu(A) = c\,\pi(A)$. Any unnormalised density function of $\pi$ forms such a measure.
Definition 3.1.
Measure Preserving Transformations. Let $(\Omega, \mathcal{F}, \pi)$ be a probability space and $\mu$ be a measure consistent with $\pi$. A mapping $T: \Omega \to \Omega$ is a measure preserving transformation if $T$ is measurable and $\mu(T^{-1}(A)) = \mu(A)$ for all $A \in \mathcal{F}$, the measurable subsets under $\pi$. If $T$ is a one-to-one mapping onto $\Omega$, then $T$ preserves $\mu$: $\mu(T(A)) = \mu(A)$.
One can verify that a transformation $T$ preserves a measure $\mu$ with density $\mu(z)$ by the following three conditions (these conditions are sufficient but not necessary):

- Bijection: $T$ is invertible.
- Preservation of the value: $\mu(T(z)) = \mu(z)$ for all $z$.
- Preservation of the volume: the determinant of the Jacobian satisfies $|\det \nabla T(z)| = 1$.
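These three conditions can be checked numerically for a simple candidate map. Below, a rotation of the plane is verified against the (unnormalised) standard-normal density, which is rotation invariant; the example and its names are illustrative only:

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def T(z):                 # candidate measure-preserving transformation
    return R @ z

def mu(z):                # unnormalised density, rotation invariant
    return np.exp(-0.5 * z @ z)

z = np.array([1.3, -0.4])

# 1. Bijection: R is orthogonal, so T is invertible.
assert np.allclose(np.linalg.inv(R) @ T(z), z)
# 2. Preservation of the value: mu(T(z)) == mu(z).
assert np.isclose(mu(T(z)), mu(z))
# 3. Preservation of the volume: |det Jacobian| = |det R| = 1.
assert np.isclose(abs(np.linalg.det(R)), 1.0)
print("all three conditions hold")
```

A shear or rotation passes all three checks for a suitable density, whereas a pure scaling fails the volume condition, which is exactly why the Jacobian terms in (3) would reappear for it.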
We motivate MPFs by the ergodicity of MCMC chains. Consider a finite MCMC chain with $T$ steps, where the state of the Markov chain at step $t$ is denoted by $z_t$ and the transition kernel $K(z_t|z_{t-1})$ is a distribution over $z_t$. The joint probability of all states of the MCMC chain is
$$q(z_0, z_1, \dots, z_T) = q_0(z_0) \prod_{t=1}^{T} K(z_t|z_{t-1}),$$
where $q_0$ is the distribution of the initial state. Integrating out the history of the chain $z_0, \dots, z_{T-1}$, we have the distribution of the last state of the chain as
$$q_T(z_T) = \int q_0(z_0) \prod_{t=1}^{T} K(z_t|z_{t-1}) \, dz_0 \cdots dz_{T-1}. \qquad (6)$$
Informally, the ergodicity of MCMC chains implies that $q_T$ converges to the stationary distribution $\pi$ as the number of steps $T$ grows.
Similarly, one can construct an MPF by reformulating an MCMC chain as a stochastic combination of a sequence of measure-preserving transformations $T_{\omega_1}, \dots, T_{\omega_T}$ indexed by random variables $\omega_t$. Writing $F_{\omega} = T_{\omega_T} \circ \cdots \circ T_{\omega_1}$, the density function of the transformed random variable $z_T = F_{\omega}(z_0)$ is given by
$$q_T(z_T) = \mathbb{E}_{p(\omega_{1:T})}\!\left[q_0\!\left(F_{\omega}^{-1}(z_T)\right)\right],$$
where $p(\omega_{1:T})$ is a probability measure over the sequence $\omega_1, \dots, \omega_T$. Because such MPFs are deterministic reconstructions of MCMC chains, they enjoy exactly the same ergodicity. That means $q_T$ converges to the invariant distribution $\pi$ as $T$ grows.
3.2 Measure Preserving Hamiltonian Flows
Hamiltonian Monte Carlo (HMC) is also known as Hybrid Monte Carlo, because it exploits the deterministic Hamiltonian dynamical system to explore the sample space rather than a random walk. Let $z$ be the random variable we want to simulate, with target distribution $\pi(z)$. To use HMC, one needs to evaluate an unnormalized density function $\tilde{\pi}(z)$ and its gradient. There is an auxiliary random variable $r$ following a distribution $p(r)$, and the stationary distribution of the HMC chain is defined as $\pi(z)\,p(r)$. In the HMC literature, $z$ and $r$ are known as the position and momentum variables. They are associated with the potential energy and kinetic energy defined as the negative log probabilities
$$U(z) = -\log \tilde{\pi}(z), \qquad K(r) = -\log p(r).$$
The total Hamiltonian energy is defined as
$$H(z, r) = U(z) + K(r).$$
Hamiltonian dynamics is a dynamical system in the phase space $(z, r)$ given by
$$\frac{dz}{dt} = \frac{\partial H}{\partial r}, \qquad \frac{dr}{dt} = -\frac{\partial H}{\partial z}, \qquad (7)$$
which preserves the total Hamiltonian energy $H$, that means
$$\frac{dH}{dt} = 0. \qquad (8)$$
We denote the state of the HMC chain at step $t$ by $(z_t, r_t)$. The transition from $z_{t-1}$ to the next step in HMC is done by simulating the Hamiltonian dynamics (7) for a period of time $\tau$. Within the transition from $z_{t-1}$ to $z_t$, the dynamics has initial momentum $r_t$ sampled from $p(r)$ and initial position $z_{t-1}$. The classic choice of $p(r)$ is a Gaussian distribution with zero mean and covariance $M$, known as the mass matrix. Given an initial phase state $(z, r)$, the Hamiltonian dynamics over a time period $\tau$ forms a smooth map from $(z, r)$ to the final state, denoted by $\Phi_{\tau}$.
To see that $\Phi_{\tau}$ is a measure-preserving transformation, we check that Hamiltonian dynamics satisfies the three conditions mentioned in the last section. First, $\Phi_{\tau}$ is a bijective map because the dynamics is deterministic and the state is unique at every time $t$. Second, the total Hamiltonian energy is constant over time. Because of the preservation of Hamiltonian energy, for any choice of initial state, the density $\exp(-H(z, r))$ is constant along the trajectory.
Another consequence of energy preservation is that a Metropolis-Hastings correction is not necessary if simulations of Hamiltonian dynamics are nearly perfect. Independent of the simulation period $\tau$ we choose, the distribution of the last state always converges to $\pi$ as the number of steps $T$ grows. However, it is not hard to see that with a fixed number of steps $T$, $\tau$ is a critical parameter for the rate of convergence of $q_T$ to $\pi$. The third property is that $\Phi_{\tau}$ preserves volume in the phase space; to be more specific, the determinant of its Jacobian matrix is equal to one. This follows directly from Liouville's theorem.
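In practice the dynamics (7) is simulated with a symplectic discretisation; the standard choice in HMC is the leapfrog integrator, which is not spelled out in the text above, so treat the following as a hedged sketch. For a standard-normal target, $U(z) = z^2/2$ and $K(r) = r^2/2$, and the discretised map conserves $H$ up to $O(\varepsilon^2)$ error:

```python
import numpy as np

def grad_U(z):                         # gradient of U(z) = 0.5 z^2
    return z

def leapfrog(z, r, eps=0.01, steps=100):
    """Leapfrog simulation of Hamiltonian dynamics for time eps*steps."""
    r = r - 0.5 * eps * grad_U(z)      # initial half step for momentum
    for _ in range(steps - 1):
        z = z + eps * r                # full step for position
        r = r - eps * grad_U(z)        # full step for momentum
    z = z + eps * r
    r = r - 0.5 * eps * grad_U(z)      # final half step for momentum
    return z, r

def H(z, r):                           # total Hamiltonian energy, eq. (8)
    return 0.5 * z**2 + 0.5 * r**2

z0, r0 = 1.0, 0.5
z1, r1 = leapfrog(z0, r0)
# Energy is conserved up to the O(eps^2) discretisation error.
print(abs(H(z1, r1) - H(z0, r0)) < 1e-3)
```

The leapfrog map is also exactly volume preserving (each half/full step is a shear in phase space), which is why the discretised $\Phi_{\tau}$ still satisfies the third condition even though energy conservation is only approximate.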
Now we introduce some useful density functions in HMC chains. Let $\Phi_{\tau}^{r_t}(z_{t-1})$ denote the image of $z_{t-1}$ in the position domain under the Hamiltonian map with fixed initial momentum $r_t$. We denote the sequence of momentum variables by $r_{1:T} = (r_1, \dots, r_T)$. Given an observed $r_{1:T}$, the map from the initial state of the HMC chain to the last state is simply the composition of the sequence of Hamiltonian simulations
$$z_T = \Phi_{\tau}^{r_T} \circ \Phi_{\tau}^{r_{T-1}} \circ \cdots \circ \Phi_{\tau}^{r_1}(z_0). \qquad (9)$$
We use the shorthand notation $F_{r_{1:T}}$ for the composition above. Using the properties of $\Phi_{\tau}$ mentioned earlier, it is straightforward to see that the joint density of the states of the HMC chain is
$$q(z_{0:T}, r_{1:T}) = q_0(z_0) \prod_{t=1}^{T} p(r_t)\, \mathbb{I}\!\left[z_t = \Phi_{\tau}^{r_t}(z_{t-1})\right], \qquad (10)$$
where $\mathbb{I}[\cdot]$ denotes the indicator function.
Because Hamiltonian dynamics preserves volume in the phase space $(z, r)$, there is no determinant-of-Jacobian term for $F_{r_{1:T}}$, as it is a composition of volume-preserving transformations in the position space. Following (10), it is straightforward to generate samples of $z_T$ with the following three steps:
$$z_0 \sim q_0(z_0), \qquad (11a)$$
$$r_t \sim \mathcal{N}(0, M), \quad t = 1, \dots, T, \qquad (11b)$$
$$z_T = F_{r_{1:T}}(z_0). \qquad (11c)$$
We call the approximate generative procedure in (11) Hamiltonian measure preserving flows (HMPFs).
3.3 Variational Inference on Measure Preserving Flows
By the connection between HMPFs and normalizing flows, the Hamiltonian simulations (9) are equivalent to the invertible transformations (2) in normalizing flows. Replacing (2) with (9) in (5), the variational lower bound of normalizing flows using the Hamiltonian map becomes
$$\mathcal{L}(\phi; r_{1:T}) = \mathbb{E}_{q_0(z_0)}\left[\log p(x, F_{r_{1:T}}(z_0)) - \log q_0(z_0)\right], \qquad (12)$$
where the sequence of momenta $r_{1:T}$ is part of the variational parameters $\phi$. However, unlike normalizing flows, the momentum sequence is generated from the Gaussian $p(r)$. Because the momenta are independent of $z_0$ and $x$, it is straightforward to write down the variational lower bound for HMPFs by taking the expectation w.r.t. $r_{1:T}$, that is
$$\mathcal{L}(\phi) = \mathbb{E}_{p(r_{1:T})}\,\mathbb{E}_{q_0(z_0)}\left[\log p(x, F_{r_{1:T}}(z_0)) - \log q_0(z_0)\right], \qquad (13)$$
where $p(r_{1:T}) = \prod_{t=1}^{T} \mathcal{N}(r_t; 0, M)$. Although the variational momentum variables are marginalized out, HMPFs have three kinds of parameters to learn: the parameters of the distribution of the initial state $q_0$, the mass matrix $M$ of the momentum distribution, and the simulation times $\tau$ of the Hamiltonian dynamics.
It is worth mentioning one important detail here. There is no need to include a Metropolis-Hastings correction, for two reasons. First, we are not interested in generating perfect unbiased samples from the posterior. Second, the correction is handled implicitly by optimising the variational lower bound. In particular, the simulation time of the Hamiltonian dynamics is a variational parameter; setting the simulation time to 0 is equivalent to rejecting samples in HMC.
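Putting the pieces together, the stochastic lower bound (13) can be estimated on a toy one-dimensional problem: $q_0$ is a normal distribution, the flow is a few leapfrog-simulated Hamiltonian steps with fresh momenta, and no Jacobian term appears because the map preserves volume. Everything below (names, target, step sizes) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_joint(z):                      # unnormalised log target, here N(2, 1)
    return -0.5 * (z - 2.0)**2

def grad_U(z):                         # gradient of U = -log_joint
    return z - 2.0

def leapfrog(z, r, eps, steps):
    r = r - 0.5 * eps * grad_U(z)
    for _ in range(steps - 1):
        z = z + eps * r
        r = r - eps * grad_U(z)
    z = z + eps * r
    return z, r - 0.5 * eps * grad_U(z)

def elbo_estimate(mu, sigma, n=20_000, hmc_steps=5, eps=0.2, lf=10):
    """Monte Carlo estimate of (13): sample z0 ~ q0, push through the flow."""
    z0 = mu + sigma * rng.standard_normal(n)
    log_q0 = (-0.5 * ((z0 - mu) / sigma)**2
              - np.log(sigma) - 0.5 * np.log(2 * np.pi))
    z = z0
    for _ in range(hmc_steps):         # fresh momentum for each HMC step
        r = rng.standard_normal(n)
        z, _ = leapfrog(z, r, eps, lf)
    # Volume preservation: the density of z equals the density of z0, eq. (4).
    return np.mean(log_joint(z) - log_q0)

v0 = elbo_estimate(0.0, 1.0, hmc_steps=0)   # plain bound, no flow
v5 = elbo_estimate(0.0, 1.0, hmc_steps=5)   # flow transports q0 to the target
print(v5 > v0)                              # the flow tightens the bound
```

Even with a deliberately poor $q_0$, a handful of Hamiltonian steps moves the samples into the target's high-density region, and the estimated bound approaches the true log partition function; in a full implementation the step sizes, mass matrix, and $q_0$'s parameters would all be trained by SGD on this estimate.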
4 Related Work
The recent interest in training deep generative models has been a great motivation for efficient stochastic inference methods on large-scale datasets. Hybrid inference methods that combine MCMC and variational inference have attracted increasing attention and produced many promising results. Salimans et al. (2015) proposed an interesting idea to achieve a tighter variational lower bound by constructing a flexible approximate distribution from MCMC chains. Following this idea, they proposed an elegant variational method called Hamiltonian Variational Inference (HVI), where the variational approximate distribution is the joint distribution of the states of HMC chains. The framework of HVI is very elegant; however, the variational lower bound becomes intractable to compute because it requires the probability of the history of the HMC chain given its last state. To overcome this problem, they approximated this distribution by a tractable distribution with a flexible parametric form. Although the results of HVI show improved performance from using approximate HMC chains, it still cannot completely remove closed-form distributions from the variational inference. One attractive feature of HVI is that the parameters of the HMC chains are part of the variational parameters, updated jointly with the model parameters.
Hoffman (2017) proposed an appealing alternative solution to the problem in HVI without introducing an auxiliary approximation. The idea is to use a Monte Carlo approximation of the marginal likelihood with samples from HMC chains initialised from samples of a closed-form variational distribution. Han et al. (2017) proposed a very similar framework, but they consider Metropolis-adjusted Langevin dynamics instead. This idea is very similar to contrastive divergence Hinton (2002), where the intractable gradient of the partition function is estimated by Gibbs sampling. There are two drawbacks of such methods. First, they rely on the assumption that an MCMC chain initialised from the variational distribution can effectively approximate samples from the posterior. Second, unlike HVI, there is no easy way to adapt the parameters of the MCMC methods based on the variational lower bound. It is very unlikely that the MCMC methods are well tuned, because the variational distribution and posterior are constantly updated during variational inference. That means the MCMC-based posterior approximation may not contribute to a tighter variational lower bound.
5 Experiments
Here, we provide empirical evidence of the power of HMPFs on three inference tasks. Due to limited space, we provide further details in the supplementary materials. The first task we consider is to use HMPFs as a variational proposal to approximate given bivariate distributions with known parameters. First, we simply fit an HMPF whose initial distribution $q_0$ is a standard normal and whose target is a correlated Gaussian distribution with known mean and covariance matrix. In particular, here we take the preserved measure to be the normalised density function of the target Gaussian. This is indeed an easy benchmark, but we consider it a sanity check to verify that our implementation of the ELBO (13) is correct. As mentioned earlier, given the momentum sequence, the normalised probability density of samples from MPFs is equal to the density of the initial sample from $q_0$. Because both the target Gaussian density function and the HMPF variational proposal are normalised, the variational lower bound should be very close to 0 if HMPFs approximate the correlated Gaussian well.
Method                               Layers                        NLL
VAE baseline D.P. Kingma (2014)      1                             90.14
HMC-DLGM-2 Hoffman (2017)            1                             85.15
IWAE Burda et al. (2015)             1                             84.78
IWAE Burda et al. (2015)             2                             82.90
HMC-DLGM-20 Hoffman (2017)           1                             82.53
VGP Dustin Tran (2016)               2                             81.90
LVAE Sønderby et al. (2016)          5                             81.74
HVI Salimans et al. (2015)           3 conv + 1 fully connected    83.49
HMPFs (conv)                         3 conv + 1 fully connected    71.85
The next experiment we consider is training convolutional VAEs on the MNIST dataset, the standard benchmark. For the generative model, we use a deconvolutional network with the same architecture as in Salimans et al. (2015); Li et al. (2017). For the HMPF inference network, we use 50 HMC steps with 10 leapfrog steps per HMC step. The initial distribution is a normal distribution. The parameters of the HMPFs include the mean and standard deviations of the initial normal distribution and the leapfrog step sizes for each latent dimension at each HMC step. We also trained a shallow generative model with 1 layer of 1000 ReLU hidden units and 50 latent states on dynamically binarised Fashion-MNIST Xiao et al. (2017). In the VAE case, the encoder network has the same architecture. For the HMPF-based encoder, we use 30 HMC steps with 5 leapfrog steps per HMC step.
In our final experiment, we consider approximating the posterior distribution of Bayesian neural networks. We use four UCI datasets and compare HMPFs with the relevant SGHMC methods from Springenberg et al. (2016). The NNs in this experiment have 50 hidden units and one real-valued output. The HMPFs we use contain 50 HMC steps with 3 leapfrog steps each.
Method / Dataset                                      Boston Housing   Yacht Hydrodynamics   Concrete        Wine Quality Red
SGHMC (best average) Springenberg et al. (2016)       3.474 ± 0.511    13.579 ± 0.983        4.871 ± 0.051   1.825 ± 0.75
SGHMC (tuned per dataset) Springenberg et al. (2016)  2.489 ± 0.151    1.753 ± 0.19          4.165 ± 0.723   1.287 ± 0.28
SGHMC (scale-adapted) Springenberg et al. (2016)      2.536 ± 0.036    1.107 ± 0.083         3.384 ± 0.24    1.041 ± 0.17
HMPFs                                                 2.17 ± 0.07      0.47 ± 0.06           2.71 ± 0.03     0.71 ± 0.03
6 Summary
In this work, we propose a novel general variational inference framework inspired by infinitely many parallel simulations of MCMC chains. The proposed method achieves state-of-the-art results on several basic benchmarks. Unlike most existing work combining HMC and variational inference, our method enjoys the same asymptotic convergence as HMC, because no auxiliary approximation is needed to compute the variational lower bound. Compared with NN-based variational inference, our method is much easier to use and more general, because there is no need for ad-hoc engineering of inference networks. For future work, it will be very interesting to study the convergence of HMPFs to the target distribution as the number of Hamiltonian simulations increases. Within this framework, many existing advanced HMC methods based on manifolds can be introduced to variational inference.
References
 Burda et al. [2015] Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.
 D.P. Kingma [2014] D. P. Kingma and M. Welling. Auto-encoding variational bayes. In The International Conference on Learning Representations (ICLR), 2014.
 Dustin Tran [2016] D. Tran, R. Ranganath, and D. M. Blei. The variational gaussian process. In International Conference on Learning Representations, 2016.
 Goodfellow et al. [2014] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
 Han et al. [2017] T. Han, Y. Lu, S.C. Zhu, and Y. N. Wu. Alternating backpropagation for generator network. In AAAI, volume 3, page 13, 2017.
 Hinton [2002] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002. doi: 10.1162/089976602760128018. URL http://dx.doi.org/10.1162/089976602760128018.
 Hoffman [2017] M. D. Hoffman. Learning deep latent Gaussian models with Markov chain Monte Carlo. In D. Precup and Y. W. Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1510–1519, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/hoffman17a.html.
 Li et al. [2017] Y. Li, R. E. Turner, and Q. Liu. Approximate inference with amortised MCMC. arXiv preprint arXiv:1702.08343, 2017.
 MacKay [2002] D. J. C. MacKay. Information Theory, Inference & Learning Algorithms. Cambridge University Press, New York, NY, USA, 2002. ISBN 0521642981.
 Rezende and Mohamed [2015] D. J. Rezende and S. Mohamed. Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, pages 1530–1538. JMLR.org, 2015. URL http://dl.acm.org/citation.cfm?id=3045118.3045281.
 Rezende et al. [2014] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
 Robert and Casella [2005] C. P. Robert and G. Casella. Monte Carlo Statistical Methods (Springer Texts in Statistics). SpringerVerlag New York, Inc., Secaucus, NJ, USA, 2005. ISBN 0387212396.
 Salimans et al. [2015] T. Salimans, D. Kingma, and M. Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. In International Conference on Machine Learning, pages 1218–1226, 2015.
 Sønderby et al. [2016] C. K. Sønderby, T. Raiko, L. Maaløe, S. K. Sønderby, and O. Winther. Ladder variational autoencoders. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 3738–3746. Curran Associates, Inc., 2016. URL http://papers.nips.cc/paper/6275-ladder-variational-autoencoders.pdf.
 Springenberg et al. [2016] J. T. Springenberg, A. Klein, S. Falkner, and F. Hutter. Bayesian optimization with robust bayesian neural networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4134–4142. Curran Associates, Inc., 2016. URL http://papers.nips.cc/paper/6117-bayesian-optimization-with-robust-bayesian-neural-networks.pdf.
 Xiao et al. [2017] H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms, 2017.