Reparameterization trick for discrete variables

Seiya Tokui
Preferred Networks, Inc.
The University of Tokyo
Tokyo 100-0004, Japan
tokui@preferred.jp
Issei Sato
The University of Tokyo
Tokyo 113-8656, Japan
sato@k.u-tokyo.ac.jp
Abstract

Low-variance gradient estimation is crucial for learning directed graphical models parameterized by neural networks, where the reparameterization trick is widely used for those with continuous variables. While this technique gives low-variance gradient estimates, it has not been directly applicable to discrete variables, the sampling of which inherently requires discontinuous operations. We argue that the discontinuity can be bypassed by marginalizing out the variable of interest, which results in a new reparameterization trick for discrete variables. This reparameterization greatly reduces the variance, which is understood by regarding the method as an application of common random numbers to the estimation. The resulting estimator is theoretically guaranteed to have a variance not larger than that of the likelihood-ratio method with the optimal input-dependent baseline. We give empirical results for variational learning of sigmoid belief networks.

1 Introduction

Directed graphical models parameterized by neural networks are widely used for complicated data distributions in high dimensional spaces, which require high levels of non-linearity and uncertainty to be captured. For learning such models, the objective function is often given as an expectation of a nonlinear function over latent variables, e.g. the evidence lower bound of deep directed generative models (Kingma and Welling, 2014; Mnih and Gregor, 2014). In this case, computing the exact gradient of the expectation is generally infeasible, and it has to be estimated approximately. For each variable, we consider the problem of estimating the gradient of the objective function w.r.t. the parameters on which the variable directly depends.

When the variables are modeled by certain continuous distributions such as a Gaussian, the reparameterization trick (Kingma and Welling, 2014; Rezende et al., 2014) is often employed for the gradient estimation. With this technique, we can adjust a sampled configuration continuously within the domain space of the variables, where the variance of the gradient estimate is kept low. While it has been shown to be efficient in various applications (Gregor et al., 2015; Heess et al., 2015; Maaløe et al., 2016), it cannot be applied to discrete variables, since any reparameterization includes discontinuous operations for which the gradient cannot be estimated. Instead, the likelihood-ratio method (Glynn, 1990; Williams, 1992) is used for discrete variables, despite its high-variance estimation.

In this study, we propose a simple way to apply reparameterization to discrete variables, while avoiding the discontinuity by marginalizing out the variable of interest. This method is applicable to any kind of variable for which we can approximate the expectation directly, although we only consider the discrete case in this study. The variance of the gradient estimate is guaranteed not to be larger than that of the likelihood-ratio method with the optimal input-dependent baseline.

Our algorithm requires us to marginalize out the discrete variable, for which we need to simulate all of its configurations. The simulations are essential for the gradient estimation, because a simulation of any single configuration provides no information about the loss landscape over the other configurations. Existing gradient estimators for discrete variables simulate each configuration separately, while in our algorithm, they are simulated all at once by sharing the reparameterized noise factors. Our method can be viewed as an application of common random numbers to these simulations, which is known to reduce the variance when the target value is expressed as a difference between two random variables. Common random numbers were applied to the finite-difference gradient estimator in Glynn (1989); here, we apply them to the exact gradient computation of an expectation over a discrete variable. The resulting estimator has greatly reduced variance compared to existing techniques.

Related work

The likelihood-ratio method (Glynn, 1990; Williams, 1992) is often used to estimate gradients w.r.t. discrete variables; it evaluates only one configuration by simulation at each iteration. It requires multiple iterations to cover the information of all configurations, and it simulates different configurations separately. Furthermore, the likelihood ratio becomes unstable when the probability mass concentrates in only a few configurations; this also causes high variance. There are many techniques to reduce the variance (Paisley et al., 2012; Bengio et al., 2013; Ranganath et al., 2014; Mnih and Gregor, 2014; Gu et al., 2016), although their reductions are not enough for large and complex models. In another approach, called local expectation gradient (Titsias and Lázaro-Gredilla, 2015), the variable of interest is locally marginalized out with all other variables simulated only once. Our method is deeply connected to this method; both behave equivalently if the variable has no descendants in the graphical model. When the variable has descendants, the local expectation gradient simulates them only on one configuration of the variable, and thus requires multiple iterations to simulate all configurations, each of which is simulated separately.

The rest of this paper is organized as follows. In Sec. 2, we introduce our method and give a theoretical analysis of it. We show experimental results in Sec. 3 and give a brief conclusion in Sec. 4.

2 Method

Our task is to estimate the gradient of $\mathbb{E}_{p_\theta(z|x)}[f(z)]$, where $f$ is a feasible function, $p_\theta(z|x) = \prod_{i=1}^N p_{\theta_i}(z_i \mid \mathrm{pa}_i, x)$ a directed model of an $N$-dimensional vector of variables $z$ conditioned on an input to the system $x$, and $\theta = (\theta_1, \dots, \theta_N)$ the model parameters. Here $\mathrm{pa}_i$ denotes the parent nodes of $z_i$ in the graphical model. For simplicity, we will assume that $p_{\theta_i}$ and $p_{\theta_j}$ for $i \neq j$ do not share any parameters, but this assumption can easily be removed. One example of our setting is the gradient estimation for the variational inference of a generative model $p(x, z)$ with an approximate posterior $q_\theta(z|x)$, where the objective is given by $\mathbb{E}_{q_\theta(z|x)}[f(z)]$ with $f(z) = \log p(x, z) - \log q_\theta(z|x)$. We will omit the gradient term corresponding to the dependency of $f$ on $\theta$ from our discussion, since it is easy to estimate.

Suppose each sample $z_i$ from a conditional $p_{\theta_i}(z_i \mid \mathrm{pa}_i, x)$ is reparameterized as $z_i = g_i(\epsilon_i, \mathrm{pa}_i, x; \theta_i)$, where $\epsilon_i$ is drawn from a noise distribution $p(\epsilon_i)$. When $z_i$ is discrete, the gradient cannot be estimated using the reparameterization trick, since $g_i$ is not continuous at some point of $\epsilon_i$.
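
As a concrete illustration of this discontinuity (a minimal sketch of ours; the function name is our own), a Bernoulli variable can be reparameterized with uniform noise, and the map from the noise to the sample is piecewise constant, so backpropagating through it yields no gradient signal:

    import numpy as np

    def bernoulli_reparam(mu, eps):
        # Reparameterize z ~ Bernoulli(mu) as z = 1[eps < mu], eps ~ U(0, 1).
        # The map eps -> z is piecewise constant: dz/dmu is zero almost
        # everywhere and undefined at eps == mu, so the standard
        # reparameterization trick gives no useful gradient here.
        return (eps < mu).astype(np.float64)

    rng = np.random.default_rng(0)
    print(bernoulli_reparam(0.3, rng.uniform(size=5)))  # e.g. [0. 1. 1. 1. 0.]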

We can bypass the discontinuity by marginalizing out $z_i$. Here, let $\epsilon_{\setminus i}$ be the noise factors other than $\epsilon_i$. We write the whole reparameterization as $z = g(\epsilon, x; \theta)$, and transform the gradient as follows.

$$\nabla_{\theta_i} \mathbb{E}_{p_\theta(z|x)}[f(z)] = \nabla_{\theta_i} \mathbb{E}_{p(\epsilon)}[f(g(\epsilon, x; \theta))] = \mathbb{E}_{p(\epsilon_{\setminus i})}\!\left[\nabla_{\theta_i} \mathbb{E}_{p(\epsilon_i)}[f(g(\epsilon, x; \theta))]\right] \qquad (1)$$

This transformation comes from the observation that, even if $f(g(\epsilon, x; \theta))$ is not continuous in $\epsilon_i$, its expectation over $\epsilon_i$ is differentiable w.r.t. $\theta_i$. If this inner expectation can be computed, Eq. (1) can be estimated by sampling $\epsilon_{\setminus i}$.

The inner expectation is computed as follows. Let $z_{\setminus i}$ be the variables other than $z_i$ and $g_{\setminus i}(\epsilon_{\setminus i}, z_i, x; \theta)$ an ancestral sampling procedure of them with clamped $z_i$; i.e., $z_j$ for each $j \neq i$ is computed by $g_j(\epsilon_j, \mathrm{pa}_j, x; \theta_j)$ with $z_i$ fixed to the given one. The inner expectation is then transformed as $\mathbb{E}_{p(\epsilon_i)}[f(g(\epsilon, x; \theta))] = \mathbb{E}_{p_{\theta_i}(z_i \mid \mathrm{pa}_i, x)}[f(z_i, g_{\setminus i}(\epsilon_{\setminus i}, z_i, x; \theta))]$, with which we can rewrite Eq. (1) as follows.

$$\nabla_{\theta_i} \mathbb{E}_{p_\theta(z|x)}[f(z)] = \mathbb{E}_{p(\epsilon_{\setminus i})}\!\left[\sum_{z_i} \nabla_{\theta_i} p_{\theta_i}(z_i \mid \mathrm{pa}_i, x)\, f(z_i, g_{\setminus i}(\epsilon_{\setminus i}, z_i, x; \theta))\right] \qquad (2)$$

Note that the gradient $\nabla_{\theta_i} p_{\theta_i}(z_i \mid \mathrm{pa}_i, x)$ can be computed analytically. The simulated variables $z_{\setminus i}$ can contain discrete variables, which are left reparameterized with the discontinuous function $g_{\setminus i}$. The resulting algorithm is shown in Alg. 1.

1: Input: a set of parameters $\theta$ and an input variable $x$.
2: Sample $\epsilon \sim p(\epsilon)$.
3: for $i = 1, \dots, N$ do
4:     for all configurations $z_i'$ of $z_i$ do
5:         $z_{\setminus i} \leftarrow g_{\setminus i}(\epsilon_{\setminus i}, z_i', x; \theta)$.
6:         $\Delta_{z_i'} \leftarrow \nabla_{\theta_i} p_{\theta_i}(z_i' \mid \mathrm{pa}_i, x)\, f(z_i', z_{\setminus i})$.
7:     end for
8:     $\Delta_i \leftarrow \sum_{z_i'} \Delta_{z_i'}$.
9: end for
10: return $(\Delta_1, \dots, \Delta_N)$ as an estimation of $\nabla_\theta \mathbb{E}_{p_\theta(z|x)}[f(z)]$.
Algorithm 1 Gradient estimation by Eq. (2). Note that the procedure can be made more efficient by reusing variables that are not descendants of $z_i$ in the ancestral sampling at line 5.
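
To illustrate Alg. 1, the following is a minimal NumPy sketch for a hypothetical two-variable Bernoulli chain; the model, parameter names, and test function are our own choices for illustration, not part of the paper.

    import numpy as np

    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

    # Toy chain: z1 ~ Bern(sigmoid(th1)), z2 | z1 ~ Bern(sigmoid(w * z1 + b)).
    th1, w, b = 0.5, 1.5, -0.5
    f = lambda z1, z2: (z1 - 2.0 * z2) ** 2   # arbitrary test function

    def estimate_grads(eps1, eps2):
        """One draw of the estimator in Eq. (2) for all parameters."""
        mu1 = sigmoid(th1)
        # Gradient w.r.t. th1: clamp z1 to each configuration and resimulate
        # its descendant z2 with the SAME noise eps2 (common random numbers).
        g_th1 = 0.0
        for z1 in (0.0, 1.0):
            z2 = float(eps2 < sigmoid(w * z1 + b))
            dp = mu1 * (1.0 - mu1) * (1.0 if z1 else -1.0)  # d p(z1) / d th1
            g_th1 += dp * f(z1, z2)
        # Gradients w.r.t. (w, b): z1 is an ancestor of z2, so it is sampled
        # once; z2 has no descendants, so we only sum over its configurations.
        z1 = float(eps1 < mu1)
        mu2 = sigmoid(w * z1 + b)
        g_w = g_b = 0.0
        for z2 in (0.0, 1.0):
            dmu = mu2 * (1.0 - mu2) * (1.0 if z2 else -1.0)  # d p(z2|z1) / d logit
            g_w += dmu * z1 * f(z1, z2)
            g_b += dmu * f(z1, z2)
        return g_th1, g_w, g_b

    rng = np.random.default_rng(0)
    print(estimate_grads(*rng.uniform(size=2)))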

For example, suppose that $z_i$ is a Bernoulli variable whose mean is given by $\mu_i = \mu_i(\mathrm{pa}_i, x; \theta_i)$. For gradient estimations w.r.t. the other parameters $\theta_j$ ($j \neq i$), it can be reparameterized as $z_i = 1$ iff $\epsilon_i < \mu_i$ for $\epsilon_i \sim U[0, 1]$. For the gradient estimation w.r.t. $\theta_i$, $\nabla_{\theta_i} p_{\theta_i}(z_i \mid \mathrm{pa}_i, x)$ is $\nabla_{\theta_i} \mu_i$ if $z_i = 1$ and $-\nabla_{\theta_i} \mu_i$ otherwise; thus, the estimator is given by $\nabla_{\theta_i} \mu_i \left(f(1, z_{\setminus i}^{(1)}) - f(0, z_{\setminus i}^{(0)})\right)$, where $z_{\setminus i}^{(k)}$ denotes the value of $z_{\setminus i}$ simulated with $z_i = k$ for $k \in \{0, 1\}$. The variance of an estimation of $X - Y$ is given by $\mathrm{Var}(X) + \mathrm{Var}(Y) - 2\,\mathrm{Cov}(X, Y)$, which is reduced by a large covariance of $X$ and $Y$. Our estimator reuses the same noise factor $\epsilon_{\setminus i}$ for simulations of these two terms; thus, the covariance is expected to be large. This technique is known as the method of common random numbers, with which our estimator enjoys low variance.
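
The effect of common random numbers can be checked numerically. The following sketch, a toy construction of ours rather than anything from the paper, compares the variance of such a difference simulated with shared versus independent noise:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    def simulate(z_i, eps):
        # A descendant Bernoulli variable driven by noise eps; its mean
        # depends on the clamped configuration z_i of the parent.
        z_desc = (eps < 0.2 + 0.6 * z_i).astype(float)
        return (z_i + 2.0 * z_desc) ** 2   # arbitrary monotone test loss

    eps = rng.uniform(size=n)
    crn = simulate(1.0, eps) - simulate(0.0, eps)                  # shared noise
    ind = simulate(1.0, eps) - simulate(0.0, rng.uniform(size=n))  # fresh noise

    print(crn.var(), ind.var())  # roughly 10.2 vs 12.8: shared noise wins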

While the formulation is similar to the original reparameterization trick, one big difference is that the estimator (2) does not use the gradient of $f$ w.r.t. $z_i$. This is essential in the case of $z_i$ being discrete, since the gradient of $f$ is not related to the expectation gradient in general. This can be easily understood in the above Bernoulli case, where the exact expectation is written as a difference of $f$ on $z_i = 1$ and $z_i = 0$. Even if $f$ is smoothly defined over $z_i \in [0, 1]$, there is no guarantee that the gradient of $f$ at a given $z_i$ approximates the true gradient, especially when $f$ is highly nonlinear.

Theoretical analysis

The variance of our estimator is guaranteed not to be larger than that of the likelihood-ratio method. Let $\theta_{ik}$ be the $k$-th element of the parameter vector $\theta_i$. Here, we focus on the estimation of the partial derivative $\partial\, \mathbb{E}_{p_\theta(z|x)}[f(z)] / \partial \theta_{ik}$. The likelihood-ratio method can be formulated as a Monte Carlo simulation of an expectation,

$$\mathbb{E}_{p(\epsilon)}\!\left[(f(z) - b)\, \frac{\partial}{\partial \theta_{ik}} \log p_{\theta_i}(z_i \mid \mathrm{pa}_i, x)\right],$$

where $b$ is a baseline that can depend on variables other than $z_i$ and its descendants. Our estimator is a Monte Carlo simulation of Eq. (2) with $\nabla_{\theta_i}$ replaced by $\partial / \partial \theta_{ik}$. Using these formulations, the following statement holds.

Theorem 1.

Let $\theta_{ik}$ be any parameter, $b$ any baseline, $\sigma_{\mathrm{LR}}^2$ the variance of the likelihood-ratio estimator, and $\sigma_{\mathrm{ours}}^2$ that of the proposed estimator; then it holds that $\sigma_{\mathrm{ours}}^2 \le \sigma_{\mathrm{LR}}^2$.

In particular, our method always achieves a variance not larger than that of the likelihood-ratio method with the optimal input-dependent baseline. The proof is given in the appendix.
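
The inequality can also be observed empirically. The sketch below, under the same hypothetical two-variable chain as before, compares one-sample variances of the two estimators; all names and the constant baseline are our illustrative choices:

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    th1, w, b = 0.5, 1.5, -0.5
    mu1 = sigmoid(th1)
    f = lambda z1, z2: (z1 - 2.0 * z2) ** 2

    def ours(rng):
        # Eq. (2) for th1: enumerate both configurations of z1 and
        # resimulate the descendant z2 with the same noise (CRN).
        eps2 = rng.uniform()
        return sum(
            mu1 * (1.0 - mu1) * (1.0 if z1 else -1.0)
            * f(z1, float(eps2 < sigmoid(w * z1 + b)))
            for z1 in (0.0, 1.0)
        )

    def lr(rng, baseline=1.0):
        # Likelihood-ratio estimate with a constant baseline; the score of
        # a Bernoulli with logit parameter th1 is z1 - mu1.
        z1 = float(rng.uniform() < mu1)
        z2 = float(rng.uniform() < sigmoid(w * z1 + b))
        return (f(z1, z2) - baseline) * (z1 - mu1)

    a = np.array([ours(rng) for _ in range(100_000)])
    c = np.array([lr(rng) for _ in range(100_000)])
    print(a.mean(), c.mean())  # both estimate the same gradient
    print(a.var(), c.var())    # Theorem 1: var(ours) <= var(LR)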

3 Experiments

We empirically compared the likelihood-ratio method and our estimator in variational learning of sigmoid belief networks (SBNs) (Neal, 1992). For the deepest layer, the logit was directly parameterized in the generative model. We used a reverse-directional SBN for the posterior approximation; i.e., the variational model infers latent variables from shallow layers to deep layers one by one. The models we used were the same as those used in Mnih and Gregor (2014), except the number of layers and units. We denote the architecture using a notation like SBN($N_1$-$N_2$-$\cdots$), where $N_\ell$ represents the number of units in the $\ell$-th layer.

We conducted experiments on the MNIST dataset (LeCun et al., 1998), a set of 28x28 pixel gray-scale images of hand-written digits. We binarized each image with the procedure described in Salakhutdinov and Murray (2008). We followed the standard data separation and used 10,000 images for testing and the rest for training. We further divided the latter into 50,000 training images and 10,000 validation images. We evaluated the model with the validation set at regular intervals throughout training and used the best model for the final test.

We trained the SBNs with RMSprop (Tieleman and Hinton, 2012) using gradient estimates given by the likelihood-ratio method (LR) or our method (ours). The learning rate was set to 0.001. We used mini-batches of size 100 in all experiments, and applied a weight decay with a coefficient of 0.001 to all weight matrices (not to the bias parameters). As for LR, we used the baseline proposed in Mnih and Gregor (2014), which consists of a running estimate of the expected loss and input-dependent loss estimation with layer-wise extra neural networks. We did not use variance normalization (Mnih and Gregor, 2014), as RMSprop already achieves per-element variance normalization.

The computational cost of our method is larger than that of the likelihood-ratio method by a factor of the number of evaluated configurations, since simulations of $z_{\setminus i}$ for all configurations of $z_i$ are required. The cost is not as problematic as expected, since the additional simulations are easily parallelized. In our experiments, using an NVIDIA GeForce TITAN X, the actual difference in computational time was less than two-fold.
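
As an illustration of this parallelization (a schematic of our own, not the paper's implementation), the clamped configurations can be stacked along a leading batch axis so that all downstream simulations happen in one vectorized pass:

    import numpy as np

    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    rng = np.random.default_rng(0)

    w, b = 1.5, -0.5
    z1 = np.array([0.0, 1.0])   # all configurations of the clamped unit
    eps2 = rng.uniform()        # one shared noise draw for the descendant
    z2 = (eps2 < sigmoid(w * z1 + b)).astype(float)
    # z2[k] is the descendant simulated under configuration z1 = k; deeper
    # layers would be evaluated the same way, with configurations as a batch.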

Figure 1: Left: variational lower bound of the log likelihood evaluated on the validation set (higher is better). Right: variance of the gradient w.r.t. mean parameters of Bernoulli variables for each layer. The variance of each unit is estimated using 50 million samples (i.e., 1,000 samples for each training image) and then averaged over all units in each layer.
         200-200   200-200-200   200-200-200-200   32-64-128-256
LR         98.86         95.40             94.82           94.73
Ours       98.28         95.03             93.67           92.79
Table 1: Variational bound of the negative log likelihood on the test set with various architectures.

Figure 1 plots the validation performance and gradient variance for the four-layer model SBN(32-64-128-256). It shows that our method learns much faster than LR with the input-dependent adaptive baseline. The right panel shows the variance-reduction effect: the variance of our method is smaller than that of LR by a margin much larger than the difference in computational costs. The model with the best validation score was used for evaluation on the test set; the results are listed in Table 1. For various architectures of SBNs, our method outperforms LR, and the difference is larger when the model is deeper. This can be qualitatively understood by observing that the optimization of a deeper model becomes more difficult, so the quality of the gradient estimate more critically affects the optimization performance.

4 Conclusion

We showed that reparameterization can still be applied to discrete variables, which enables us to use common random numbers in evaluations of multiple configurations. The resulting method has lower variance; we confirmed this both theoretically and empirically. Although its computational cost is higher than that of existing methods, it runs quickly enough in practice; the additional cost can be alleviated by parallelizing the computation on modern GPUs. Future work will include seeking a better way to balance the tradeoff between the computational cost and variance reduction.

Acknowledgments

We thank Daisuke Okanohara for helpful discussions.

References

  • Bengio et al. (2013) Yoshua Bengio, Nicholas Léonard, and Aaron C. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
  • Glynn (1989) P. W. Glynn. Optimization of stochastic systems via simulation. In Proceedings of the 21st Conference on Winter Simulation, pages 90–105, 1989. doi: 10.1145/76738.76750.
  • Glynn (1990) Peter W. Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75–84, 1990. doi: 10.1145/84537.84552.
  • Gregor et al. (2015) Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning (ICML), pages 1462–1471, 2015.
  • Gu et al. (2016) Shixiang Gu, Sergey Levine, Ilya Sutskever, and Andriy Mnih. MuProp: Unbiased backpropagation for stochastic neural networks. In Proceedings of the 4th International Conference on Learning Representations (ICLR), 2016.
  • Heess et al. (2015) Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems 28 (NIPS), pages 2944–2952. 2015.
  • Kingma and Welling (2014) Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations (ICLR), 2014.
  • LeCun et al. (1998) Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • Maaløe et al. (2016) Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pages 1445–1453, 2016.
  • Mnih and Gregor (2014) Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In Proceedings of the 31st International Conference on Machine Learning (ICML), pages 1791–1799, 2014.
  • Neal (1992) Radford M. Neal. Connectionist learning of belief networks. Artificial Intelligence, 56(1):71–113, 1992.
  • Paisley et al. (2012) John Paisley, David M. Blei, and Michael I. Jordan. Variational Bayesian inference with stochastic search. In Proceedings of the 29th International Conference on Machine Learning (ICML), 2012.
  • Ranganath et al. (2014) Rajesh Ranganath, Sean Gerrish, and David M. Blei. Black box variational inference. In Artificial Intelligence and Statistics (AISTATS), pages 814–822, 2014.
  • Rezende et al. (2014) Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning (ICML), pages 1278–1286, 2014.
  • Salakhutdinov and Murray (2008) Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning (ICML), pages 872–879, 2008.
  • Tieleman and Hinton (2012) Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-RMSprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
  • Titsias and Lázaro-Gredilla (2015) Michalis Titsias and Miguel Lázaro-Gredilla. Local expectation gradients for black box variational inference. In Advances in Neural Information Processing Systems 28 (NIPS), pages 2638–2646. 2015.
  • Williams (1992) Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256, 1992.

Appendix A Proof of Theorem 1

We first introduce a well-known lemma used in our analysis.

Lemma 2 (Variance partitioning).

Let $X$ and $Y$ be sets of random variables, and $f$ a function on them. Then the following formula holds: $\mathrm{Var}[f] = \mathbb{E}_Y\!\left[\mathrm{Var}_X[f \mid Y]\right] + \mathrm{Var}_Y\!\left[\mathbb{E}_X[f \mid Y]\right]$.

Proof.

Note that $\mathrm{Var}[W] = \mathbb{E}[W^2] - \mathbb{E}[W]^2$ holds for any random variable $W$. Applying it to the right side of the formula yields

$$\mathbb{E}_Y\!\left[\mathbb{E}_X[f^2 \mid Y] - \mathbb{E}_X[f \mid Y]^2\right] + \mathbb{E}_Y\!\left[\mathbb{E}_X[f \mid Y]^2\right] - \mathbb{E}_Y\!\left[\mathbb{E}_X[f \mid Y]\right]^2 = \mathbb{E}[f^2] - \mathbb{E}[f]^2 = \mathrm{Var}[f]. \qquad \square$$
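
As a quick numerical sanity check of the lemma (a toy construction of ours, not part of the paper), the two sides of the decomposition can be compared by Monte Carlo:

    import numpy as np

    rng = np.random.default_rng(0)
    y = rng.uniform(size=1_000_000)           # Y ~ U(0, 1)
    x = rng.normal(loc=y)                     # X | Y ~ N(Y, 1)
    fv = x ** 2 + y                           # f(X, Y)

    # Analytic conditional moments for X | Y ~ N(Y, 1):
    cond_mean = y ** 2 + 1.0 + y              # E[f | Y]
    cond_var = 2.0 + 4.0 * y ** 2             # Var[f | Y] = Var[X^2 | Y]

    print(fv.var())                           # Var[f]
    print(cond_var.mean() + cond_mean.var())  # should match up to MC error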

Proof of Theorem 1.

Suppose that $z_i$ is reparameterized as is done in Sec. 2, and denote by $\mathrm{pa}_i$ the parent nodes of $z_i$ simulated with $\epsilon_{\setminus i}$. The variance of the likelihood-ratio method is evaluated as follows.

$$\sigma_{\mathrm{LR}}^2 = \mathrm{Var}_{p(\epsilon)}\!\left[(f(z) - b)\,\frac{\partial}{\partial \theta_{ik}} \log p_{\theta_i}(z_i \mid \mathrm{pa}_i, x)\right] = \mathbb{E}_{p(\epsilon_{\setminus i})}\!\left[\mathrm{Var}_{p(\epsilon_i)}\!\left[(f(z) - b)\,\frac{\partial}{\partial \theta_{ik}} \log p_{\theta_i}(z_i \mid \mathrm{pa}_i, x)\right]\right] + \mathrm{Var}_{p(\epsilon_{\setminus i})}\!\left[\mathbb{E}_{p(\epsilon_i)}\!\left[(f(z) - b)\,\frac{\partial}{\partial \theta_{ik}} \log p_{\theta_i}(z_i \mid \mathrm{pa}_i, x)\right]\right], \qquad (3)$$

where we use Lemma 2 on $\epsilon_i$ and $\epsilon_{\setminus i}$ in the last equation. Note that $\mathbb{E}_{p(\epsilon_i)}\!\left[b\,\frac{\partial}{\partial \theta_{ik}} \log p_{\theta_i}(z_i \mid \mathrm{pa}_i, x)\right] = b \sum_{z_i} \frac{\partial}{\partial \theta_{ik}} p_{\theta_i}(z_i \mid \mathrm{pa}_i, x) = 0$; thus, the second term of Eq. (3) can be further transformed as follows.

$$\mathrm{Var}_{p(\epsilon_{\setminus i})}\!\left[\mathbb{E}_{p(\epsilon_i)}\!\left[f(z)\,\frac{\partial}{\partial \theta_{ik}} \log p_{\theta_i}(z_i \mid \mathrm{pa}_i, x)\right]\right] = \mathrm{Var}_{p(\epsilon_{\setminus i})}\!\left[\sum_{z_i} \frac{\partial p_{\theta_i}(z_i \mid \mathrm{pa}_i, x)}{\partial \theta_{ik}}\, f(z_i, g_{\setminus i}(\epsilon_{\setminus i}, z_i, x; \theta))\right] = \sigma_{\mathrm{ours}}^2.$$

Since the first term of Eq. (3), which is an expectation of a variance, is not less than zero, we conclude that $\sigma_{\mathrm{ours}}^2 \le \sigma_{\mathrm{LR}}^2$. ∎
