Variational Inference via Transformations on Distributions*

* GitHub repository for code in TensorFlow

Shibhansh Dohare
IIT Kanpur
14644
sdohare@iitk.ac.in
&
Siddhartha Saxena
IIT Kanpur
150719
siddsax@iitk.ac.in
&
Jaivardhan Kapoor
IIT Kanpur
150300
jkapoor@iitk.ac.in
Abstract

Variational inference methods often focus on the problem of efficient model optimization, with little emphasis on the choice of the approximating posterior. In this paper we review and implement several methods that enable us to develop a rich family of approximating posteriors. We show that one particular method, employing transformations on distributions, results in very rich and complex posterior approximations. We analyze its performance on the MNIST dataset by implementing it within a Variational Autoencoder, and demonstrate its effectiveness in learning better posterior distributions.

 


1 Introduction

Posterior computation is one of the central problems in Bayesian machine learning. When we have conjugate priors and likelihood models, the posterior can be calculated exactly in closed form. Most of the time, however, our models are much more complicated and we have to deal with non-conjugacy. This has inspired researchers to approximate the posterior via heuristics. Two of the most popular families of methods for approximating the posterior are Variational Inference (VI) and Markov Chain Monte Carlo (MCMC) sampling.

Variational Inference and MCMC sampling come from two different communities: MCMC has primarily been studied by statisticians, whereas Variational Inference has been studied extensively by the machine learning community. This is largely because VI is a much faster way to approximate the posterior, although it lacks MCMC's guarantee of convergence in the limit of infinitely many iterations.

In this work, we first give a brief survey of some popular Variational Inference techniques, and then study and replicate the results of one of the most radical recent approaches in VI.

2 Related Approaches: A Brief Survey of Variational Inference Techniques

Variational Inference maximizes a lower bound on the model evidence, the ELBO, with respect to a variational probability distribution that approximates the posterior. The core objective of variational inference is to find the optimal variational distribution q(z), subject to simplifying assumptions that reduce the complexity of the ELBO. These assumptions inherently limit the performance of the algorithm, and hence they form a possible source of improvement that many researchers have explored over the years.

2.1 Mean Field Assumption

In their introductory paper on variational inference, Jordan et al. [1] introduced the mean field assumption in order to simplify the form of the ELBO. The mean field assumption states that the variational distributions of the latent variables are mutually independent, as written below.
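Concretely, for latent variables z_1, \dots, z_m, the factorization and the resulting coordinate-ascent update can be written as follows (standard notation, not reproduced verbatim from [1]):

q(z_1, \dots, z_m) = \prod_{i=1}^{m} q_i(z_i), \qquad q_i^{\ast}(z_i) \propto \exp\!\left( \mathbb{E}_{q_{-i}}\!\left[ \ln p(x, z) \right] \right),

where q_{-i} denotes the product of all factors except q_i.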

This assumption reduces the ELBO to a summation of simpler terms, one for each latent variable, which can be optimized coordinate-wise. The assumption has a major drawback: it does not capture the dependencies between the underlying posterior distributions of the latent variables, it is zero-forcing, and it underestimates the variation in the data.

Some proposals have been made for richer posterior approximations, especially structured approximations that try to capture the dependencies in the data. The next sections introduce some schemes that soften this assumption.

2.2 Combining the Best of Both Worlds: VI and Sampling

Variational Inference takes an optimization approach to obtaining the approximate posterior, and this involves computing gradients with respect to the variational parameters. The ELBO, the function being optimized, involves expectations that can be hard to compute directly. A different approach is therefore to estimate the expected value of the gradient via sampling from the variational distribution. This significantly simplifies computing the expectations, because the variational distributions are generally simpler (and fully known) compared to the actual posterior; a small sketch of such an estimator is given below.
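As a minimal sketch of this idea (a score-function style estimator for a diagonal-Gaussian variational distribution, written in NumPy; the function and variable names are our own illustrative choices, not from any of the surveyed papers):

import numpy as np

def elbo_grad_estimate(log_joint, mu, log_sigma, n_samples=100):
    # Monte Carlo estimate of the ELBO gradient: E_q[(log p(x,z) - log q(z)) * grad log q(z)]
    sigma = np.exp(log_sigma)
    eps = np.random.randn(n_samples, mu.shape[0])
    z = mu + sigma * eps                                      # samples from q(z)
    log_q = -0.5 * np.sum(((z - mu) / sigma) ** 2 + 2 * log_sigma + np.log(2 * np.pi), axis=1)
    dlogq_dmu = (z - mu) / sigma ** 2                         # grad of log q w.r.t. mu
    dlogq_dlogsigma = ((z - mu) ** 2) / sigma ** 2 - 1.0      # grad of log q w.r.t. log sigma
    f = np.array([log_joint(zi) for zi in z]) - log_q         # integrand of the ELBO
    grad_mu = np.mean(f[:, None] * dlogq_dmu, axis=0)
    grad_logsigma = np.mean(f[:, None] * dlogq_dlogsigma, axis=0)
    return grad_mu, grad_logsigma

In practice this estimator has high variance, which is one motivation for the reparameterization-based gradients used later in the VAE.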

2.3 Hierarchical Variational Models

Figure 1: An illustration of hierarchical VI (figure taken from [2]).

A novel approach to linking together the distributions of the latent variables is to introduce a prior over their parameters [2]. As illustrated in the figure, there is an underlying parameter above the individual distributions. This makes them dependent on each other and hence relaxes the mean field assumption to some extent. Although this leads to richer models, the expression for the ELBO becomes much more complicated, and hence the gradient is calculated by sampling from the variational distribution.

2.4 Variational Boosting method

Boosting is a standard machine learning technique in which we train several weak models sequentially, increasing at each round the weights of the misclassified data points. Finally, all the weak models are combined to produce a strong classifier.

Inspired by this, Miller et al. [3] introduced Variational Boosting [4]. The approach is to combine several component distributions q_c(z), each using the mean field assumption, and produce a richer approximation as a convex combination of the components. Hence we have

q(z) = \sum_{c=1}^{C} \rho_c \, q_c(z), \qquad \sum_{c=1}^{C} \rho_c = 1, \quad \rho_c \geq 0.

Here, at each step two things need to be learned: the new component q_c(z) and its mixing weight \rho_c. The component q_c(z) can be calculated as we normally do in VI with the mean field assumption, with just one modification: there is one more latent variable, namely the mixing weight of that distribution. This leads to a possibly multi-modal approximation, since the final posterior is a convex sum of several variational distributions.

2.5 Streaming Variational Bayes

A useful modification of the stochastic VI algorithm is demonstrated in the Streaming Variational Bayes approach [5]. Here, streaming, distributed, and asynchronous computation of the Bayesian posterior is made possible by parallelizing the process. To illustrate with an example: the work is first divided into K jobs, i.e. we divide the data into K batches and coordinate between the workers through a master, which holds its own distribution p(\theta) over the latent variables. The jobs are run in parallel on K processors. Each processor is given the distribution over the latent variables that the master holds at that time, together with its batch of data. As one of the processes finishes, its result is folded into the global parameters using the following update:

p_{\text{new}}(\theta) \;\propto\; p_{\text{old}}(\theta)\, \frac{q_k(\theta)}{p_k(\theta)},

where p_{\text{new}}(\theta) is the new posterior distribution held by the master, q_k(\theta) is the posterior distribution learned by the worker, and p_k(\theta) is the prior over the parameters that the worker was given.

It can inherently be used as an online algorithm, since the data size need not be specified in advance, and after every iteration the posterior approximation is immediately available for use. The use of exponential families with advantageous properties also makes the parameter updates convenient; a sketch of the update in natural-parameter form is given below.
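As a minimal sketch of this update in the natural-parameter form commonly used for exponential families (our own illustrative code; the function and variable names are assumptions, not taken from [5]):

import numpy as np

def master_update(xi_master, xi_worker_posterior, xi_worker_prior):
    # The worker was handed xi_worker_prior as its prior, processed one batch of data,
    # and returned xi_worker_posterior. The master folds in only the increment, so
    # updates from several asynchronous workers can be applied in any order.
    return xi_master + (xi_worker_posterior - xi_worker_prior)

# Toy usage with a 2-dimensional natural parameter vector
xi = np.array([0.0, -0.5])                                   # master's current posterior
xi = master_update(xi, np.array([1.2, -0.9]), np.array([0.0, -0.5]))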

These were some of the recent changes brought to VI. The rest of our work focuses on the paper "Variational Inference with Normalizing Flows" [6], which uses flows to pass a simple distribution through several functions in order to generate richer posteriors. The technique is radically different from the ones mentioned above and has gained a lot of attention recently.

The paper's main aim is to demonstrate that, instead of focusing only on better optimization and expectation-computation methods for posterior approximation, we can also choose a richer family of approximate posteriors. This richer family must be large enough that the optimal distribution possibly lies within it.

3 Our Approach

As mentioned previously, we employ normalizing flows to increase the complexity of the distribution to capture better dependencies in the data. The subsection below describes the theoretical aspects of normalizing flows.

3.1 A Primer On Normalizing Flows

An ideal family of variational posteriors is one that is flexible enough to possibly contain the true posterior. A normalizing flow is a sequence of invertible transformations applied to a probability distribution such that the resulting distribution is also a valid probability distribution (hence the term normalizing). The probability densities essentially "flow" through the sequence of transformations on the random variables themselves, and thus a new distribution is obtained.

Normalizing flows may be categorized as finite or infinitesimal. In this paper, we only explore finite flows. The flow length K defines the number of transformations applied to the base distribution. Consider an invertible function f, with inverse f^{-1}. On applying f to the random variable z with distribution q(z), we obtain the distribution of z' = f(z) as

q(z') = q(z) \left| \det \frac{\partial f^{-1}}{\partial z'} \right| = q(z) \left| \det \frac{\partial f}{\partial z} \right|^{-1}.

This equation follows from the change-of-variables formula for probability densities (via the Jacobian) and the invertibility of the transforming function. In this way, we can apply a sequence of transformations f_1, \dots, f_K to z_0 with distribution q_0, to obtain a new random variable z_K with probability distribution q_K. The expressions for the two are written as

z_K = f_K \circ \dots \circ f_2 \circ f_1(z_0), \qquad \ln q_K(z_K) = \ln q_0(z_0) - \sum_{k=1}^{K} \ln \left| \det \frac{\partial f_k}{\partial z_{k-1}} \right|.

We focus on a specific family of transformations, the invertible linear-time planar flows of the form

f(z) = z + u\, h(w^\top z + b),

where h(\cdot) is a smooth element-wise non-linearity with derivative h'(\cdot), and u, w, b are free parameters. We can compute the log-det-Jacobian of this transformation in O(D) time as follows:

\left| \det \frac{\partial f}{\partial z} \right| = \left| 1 + u^\top \psi(z) \right|,

where

\psi(z) = h'(w^\top z + b)\, w.

Using the above equation, we can determine the log probability after K flows:

\ln q_K(z_K) = \ln q_0(z_0) - \sum_{k=1}^{K} \ln \left| 1 + u_k^\top \psi_k(z_{k-1}) \right|.

We can interpret these flows as expanding/contracting the density perpendicular to the hyperplane w^\top z + b = 0, hence the name planar flows. When the non-linearity used is h(x) = \tanh(x), the invertibility of the transformations is ensured by slightly adjusting the parameters, as demonstrated in the appendix of [6]. This specific family of transformations is used because it allows low-cost computation of the determinant; a small code sketch follows.
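Under the equations above, a single planar flow and the accumulation of its log-det terms can be sketched as follows in NumPy; this is our own illustration, not the code from [6], and parameter initialization and the invertibility constraint on u are omitted.

import numpy as np

def planar_flow(z, u, w, b):
    # f(z) = z + u * h(w^T z + b) with h = tanh
    a = np.dot(w, z) + b                         # scalar pre-activation w^T z + b
    z_new = z + u * np.tanh(a)
    psi = (1.0 - np.tanh(a) ** 2) * w            # psi(z) = h'(w^T z + b) * w
    logdet = np.log(np.abs(1.0 + np.dot(u, psi)))
    return z_new, logdet

def apply_flows(z0, params):
    # params: list of (u, w, b) tuples, one per flow. The accumulated log-det terms
    # are exactly what is subtracted from ln q_0(z_0) to obtain ln q_K(z_K).
    z, total_logdet = z0, 0.0
    for u, w, b in params:
        z, logdet = planar_flow(z, u, w, b)
        total_logdet += logdet
    return z, total_logdet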

Consider a model in which we have to infer the latent variables z given data x. The variational free energy (the negative ELBO) of the model, approximated with the posterior q_\phi(z|x) obtained from a flow of length K, reduces to

\mathcal{F}(x) = \mathbb{E}_{q_0(z_0)}\!\left[ \ln q_0(z_0) \right] - \mathbb{E}_{q_0(z_0)}\!\left[ \ln p(x, z_K) \right] - \mathbb{E}_{q_0(z_0)}\!\left[ \sum_{k=1}^{K} \ln \left| 1 + u_k^\top \psi_k(z_{k-1}) \right| \right].

The change of the variable with respect to which the expectation is taken is justified by the identity

\mathbb{E}_{q_K}\!\left[ g(z_K) \right] = \mathbb{E}_{q_0}\!\left[ g\!\left( f_K \circ \dots \circ f_1(z_0) \right) \right].

Notice that the extra term in the bound involves the parameters of the transforming functions, so maximizing the ELBO (equivalently, minimizing \mathcal{F}(x)) also optimizes these parameters.

3.2 Introduction to Variational Autoencoder (VAE)

In recent years, Variational Autoencoders [7] have shown promising results in unsupervised learning for data generation, especially in tasks like image generation on the MNIST [8] and CIFAR [9] datasets. In generative modeling, we deal with modeling distributions P(X) defined on data points X, which are typically images. VAEs make very few assumptions about the data and are very high-capacity models.

Variational auto-encoders assume we have a distribution P(z) defined over some latent variable z and a deterministic function f(z; \theta) which maps z to the data space \mathcal{X}. Our aim is to optimize \theta so that when we sample z and evaluate f(z; \theta), it is highly likely that f(z; \theta) is in our dataset, or is close to some point in the dataset.

In VAEs the choice of the output distribution P(X|z; \theta) is generally Gaussian, i.e. P(X|z; \theta) = \mathcal{N}(X \mid f(z; \theta), \sigma^2 I). The problem that now arises is to choose P(z) such that it captures the latent information. VAEs avoid this problem by moving the task of learning the latent information from the distribution P(z) to the function f(z; \theta). So we assume that z comes from a very simple distribution, namely \mathcal{N}(0, I). Now, if we have sufficiently powerful function approximators, we can learn the mapping from our independently sampled latent variables z to the data points X. This is where multi-layer perceptrons come to our aid, as they have been shown to be extremely powerful function approximators.

The next task is to sample those values of z that are likely to have produced X. This means we need a new distribution Q(z|X) that gives values of z which are likely to produce X; this helps in the computation of \mathbb{E}_{z \sim Q}[P(X|z)]. The next problem is: how does z sampled from some distribution Q help in optimizing P(X)? For this we need the relation between \mathbb{E}_{z \sim Q}[P(X|z)] and P(X). This relationship constitutes the theoretical workings of VAEs, and the final equation [10] turns out to be

\log P(X) - D_{KL}\!\left[ Q(z|X) \,\|\, P(z|X) \right] = \mathbb{E}_{z \sim Q}\!\left[ \log P(X|z) \right] - D_{KL}\!\left[ Q(z|X) \,\|\, P(z) \right].

This equation is the core of the VAE. The left-hand side contains the term we are trying to maximize, \log P(X), plus an error term that pushes Q to produce values of z that can reproduce X. The right-hand side has the form of an autoencoder, where Q(z|X) encodes X into z and P(X|z) decodes it to reconstruct X. The right-hand side can be directly optimized using stochastic backpropagation [11]; consequently, we will be approximately maximizing our target \log P(X). Note that the second term on the left-hand side is a non-negative KL divergence, so the right-hand side acts as a lower bound on \log P(X).
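The sampling step that makes the right-hand side amenable to stochastic backpropagation is the reparameterization trick; a minimal TF2-style sketch (with assumed tensor names mu and log_var for the encoder outputs) is:

import tensorflow as tf

def sample_z(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I); gradients flow through mu and log_var,
    # which is what allows stochastic backpropagation [11] on the right-hand side
    eps = tf.random.normal(tf.shape(mu))
    return mu + tf.exp(0.5 * log_var) * eps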

3.3 Implementing Normalizing Flows in VAE

In this part, we derive the ELBO for the variational autoencoder with normalizing flows, and also describe its architecture. The prior of the VAE, p(z), is taken to be \mathcal{N}(0, I); the encoder is denoted by q_\phi(z|x), with the free parameters \phi being the weights of the encoder neural network. Similarly, the decoder is denoted by p_\theta(x|z), with the free parameters \theta being the weights of the decoder neural network. The starting distribution for the application of the flows is taken as the reparametrized form of the output of the encoder. After applying the flows, we sample from the final distribution and pass the sample through the decoder to obtain the reconstructed output. A schematic of the architecture is shown in the following figure.

Figure 2: The architecture of the inference network. Borrowed from [6]

The negative ELBO (the training loss) is calculated as

\mathcal{F}(x) = -\,\mathbb{E}_{q_0(z_0|x)}\!\left[ \ln p_\theta(x \mid z_K) \right] + D_{KL}\!\left[ q_0(z_0|x) \,\|\, p(z) \right] - \mathbb{E}_{q_0(z_0|x)}\!\left[ \sum_{k=1}^{K} \ln \left| 1 + u_k^\top \psi_k(z_{k-1}) \right| \right],

which separates into the sum of three terms: (a) the squared error between the input of the encoder and the reconstructed output of the decoder, (b) the KL-divergence between the output normal distribution of the encoder and the prior \mathcal{N}(0, I), and (c) the sum of log-det-Jacobian terms introduced into the ELBO by the flows (which can easily be expressed in terms of the derivative of the non-linearity used).

We implemented the model in TensorFlow, using the MNIST [8] dataset for training and testing. The encoder and decoder each had 4 densely connected hidden layers, the hidden layer dimension was set to 10, and the input and output layer dimensions were set to 128. The latent dimension was set to 2 for easy visualization of the probability distribution inferred from the dataset. The flow length was set to 4 and the non-linearity used was tanh. Learning was carried out for 500,000 iterations with batch size 1, and the loss was optimized using the Adam optimizer [12] with a learning rate of 0.002.
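As a concrete, hedged sketch of the loss described above, the following TF2-style code computes the three terms; the original repository uses an earlier TensorFlow version, and the tensor names (x_recon, mu, log_var, logdets) are illustrative assumptions rather than the actual variable names in our code.

import tensorflow as tf

def vae_nf_loss(x, x_recon, mu, log_var, logdets):
    # (a) squared error between the encoder input and the reconstructed decoder output
    recon = tf.reduce_sum(tf.square(x - x_recon), axis=-1)
    # (b) analytic KL divergence between the encoder's Gaussian q0(z0|x) and the prior N(0, I)
    kl = -0.5 * tf.reduce_sum(1.0 + log_var - tf.square(mu) - tf.exp(log_var), axis=-1)
    # (c) sum of the per-flow log-det-Jacobian terms ln|1 + u^T psi(z)|
    flow_correction = tf.add_n(logdets)
    # negative ELBO: (a) + (b) - (c), averaged over the batch
    return tf.reduce_mean(recon + kl - flow_correction)

# Optimizer settings matching the ones reported above
optimizer = tf.keras.optimizers.Adam(learning_rate=0.002)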

4 Results

We used a basic VAE architecture as a baseline; compared to it, we observed much more richness in the final distribution of the latent variable. As we can see in the figures below, we obtained a multimodal distribution for the MNIST dataset when training with normalizing flows, whereas it was just a normal distribution with a standard VAE.

Figure 3: Latent variable values for normalizing flows (upper plot) and a standard VAE (lower plot). The shapes in the upper plot correspond to the means of the digits as follows: 0: triangle down, 1: triangle up, 2: diamond, 3: star, 4: circle, 5: square, 6: octagon, 7: square, 8: pentagon, 9: triangle right.
Figure 4: Multimodal nature of the latent variable’s distribution

To plot the figure, we trained the VAE on the MNIST dataset and then stored the values of the latent vectors: we sampled from the encoder's output distribution (which is equivalent to sampling \epsilon \sim \mathcal{N}(0, I) and applying the transformation z_0 = \mu + \sigma \epsilon) and then applied all the flows to the sample.

This hinted that the modes of the multimodal distribution represent the different classes, and this proved to be the case, as shown in the figure. This shows that normalizing flows are indeed able to learn much more complicated distributions than the mean field approximation or the other methods surveyed above.

5 Discussion and Future Work

The results of the experiment hint that normalizing flows work in a manner similar to structured VAEs [13], by stacking many probability distributions on one another to capture greater correlations between the features and classes of the data. In the previous sections, we saw that transformations on distributions aid the inference procedure by letting us explore a larger solution space for the approximate posterior. Although we ran the experiment with only 2 latent dimensions, it gave us a significant deviation from the standard unimodal distributions obtained in vanilla VAE implementations. Consequently, it seems safe to say that increasing the dimensionality will result in an even greater deviation from the prior-like shape of the final distribution. Variational methods have evolved greatly since their inception, and with the advent of computing technologies like probabilistic programming and parallel processing, the full power of these methods can be greatly enhanced when used in conjunction with deep learning. This remains a promising avenue for future research. Flexible choices of distributions and structured-VAE-like richness in the approximate posterior family allow us to explore the fields of optimization and modelling in parallel.

References

  • [1] Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Mach. Learn., 37(2):183–233, November 1999.
  • [2] Rajesh Ranganath, Dustin Tran, and David M. Blei. Hierarchical Variational Models. ArXiv e-prints, 2016.
  • [3] A. C. Miller, N. Foti, and R. P. Adams. Variational Boosting: Iteratively Refining Posterior Approximations. ArXiv e-prints, November 2016.
  • [4] Andrew C. Miller, Nicholas Foti, and Ryan P. Adams. Variational Boosting: Iteratively Refining Posterior Approximations. ArXiv e-prints, 2016.
  • [5] Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C. Wilson, and Michael I. Jordan. Streaming Variational Bayes. NIPS, 2013.
  • [6] D. Jimenez Rezende and S. Mohamed. Variational Inference with Normalizing Flows. ArXiv e-prints, May 2015.
  • [7] D. P Kingma and M. Welling. Auto-Encoding Variational Bayes. ArXiv e-prints, December 2013.
  • [8] Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
  • [9] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master’s thesis, Department of Computer Science, University of Toronto, 2009.
  • [10] C. Doersch. Tutorial on Variational Autoencoders. ArXiv e-prints, June 2016.
  • [11] D. Jimenez Rezende, S. Mohamed, and D. Wierstra. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. ArXiv e-prints, January 2014.
  • [12] D. P. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. ArXiv e-prints, December 2014.
  • [13] T. Salimans. A Structured Variational Auto-encoder for Learning Deep Hierarchies of Sparse Features. ArXiv e-prints, February 2016.