The Concrete Distribution:
A Continuous Relaxation of
Discrete Random Variables
Abstract
The reparameterization trick enables optimizing large-scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce Concrete random variables—continuous relaxations of discrete random variables. The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks.
Chris J. Maddison^{1,2}, Andriy Mnih^{1}, & Yee Whye Teh^{1} 
^{1}DeepMind, London, United Kingdom 
^{2}University of Oxford, Oxford, United Kingdom 
cmaddis@stats.ox.ac.uk 
1 Introduction
Software libraries for automatic differentiation (AD) (Abadi et al., 2015; Theano Development Team, 2016) are enjoying broad use, spurred on by the success of neural networks on some of the most challenging problems of machine learning. The dominant mode of development in these libraries is to define a forward parametric computation, in the form of a directed acyclic graph, that computes the desired objective. If the components of the graph are differentiable, then a backwards computation for the gradient of the objective can be derived automatically with the chain rule. The ease of use and unreasonable effectiveness of gradient descent has led to an explosion in the diversity of architectures and objective functions. Thus, expanding the range of useful continuous operations can have an outsized impact on the development of new models. For example, a topic of recent attention has been the optimization of stochastic computation graphs from samples of their states. Here, the observation that AD "just works" when stochastic nodes¹ can be reparameterized into deterministic functions of their parameters and a fixed noise distribution (Kingma & Welling, 2013; Rezende et al., 2014) has liberated researchers in the development of large complex stochastic architectures (e.g. Gregor et al., 2015).

¹For our purposes a stochastic node of a computation graph is just a random variable whose distribution depends in some deterministic way on the values of the parent nodes.
Computing with discrete stochastic nodes still poses a significant challenge for AD libraries. Deterministic discreteness can be relaxed and approximated reasonably well with sigmoidal functions or the softmax (see e.g., Grefenstette et al., 2015; Graves et al., 2016), but, if a distribution over discrete states is needed, there is no clear solution. There are well known unbiased estimators for the gradients of the parameters of a discrete stochastic node from samples. While these can be made to work with AD, they involve special casing and defining surrogate objectives (Schulman et al., 2015), and even then they can have high variance. Still, reasoning about discrete computation comes naturally to humans, and so, despite the associated difficulty, many modern architectures incorporate discrete stochasticity (Mnih et al., 2014; Xu et al., 2015; Kočiský et al., 2016).
This work is inspired by the observation that many architectures treat discrete nodes continuously, and gradients rich with counterfactual information are available for each of their possible states. We introduce a continuous relaxation of discrete random variables, Concrete for short, which allows gradients to flow through their states. The Concrete distribution is a new parametric family of continuous distributions on the simplex with closed form densities. Sampling from the Concrete distribution is as simple as taking the softmax of logits perturbed by fixed additive noise. This reparameterization means that Concrete stochastic nodes are quick to implement in a way that "just works" with AD. Crucially, every discrete random variable corresponds to the zero temperature limit of a Concrete one. In this view optimizing an objective over an architecture with discrete stochastic nodes can be accomplished by gradient descent on the samples of the corresponding Concrete relaxation. When the objective depends, as in variational inference, on the log-probability of discrete nodes, the Concrete density is used during training in place of the discrete mass. At test time, the graph with discrete nodes is evaluated.
The paper is organized as follows. We provide a background on stochastic computation graphs and their optimization in Section 2. Section 3 reviews a reparameterization for discrete random variables, introduces the Concrete distribution, and discusses its application as a relaxation. Section 4 reviews related work. In Section 5 we present results on a density estimation task and a structured prediction task on the MNIST and Omniglot datasets. In Appendices C and F we provide details on the practical implementation and use of Concrete random variables. When comparing the effectiveness of gradients obtained via Concrete relaxations to a state-of-the-art method (VIMCO, Mnih & Rezende, 2016), we find that they are competitive—occasionally outperforming and occasionally underperforming—all the while being implemented in an AD library without special casing.
2 Background
2.1 Optimizing Stochastic Computation Graphs
Stochastic computation graphs (SCGs) provide a formalism for specifying inputoutput mappings, potentially stochastic, with learnable parameters using directed acyclic graphs (see Schulman et al. (2015) for a review). The state of each noninput node in such a graph is obtained from the states of its parent nodes by either evaluating a deterministic function or sampling from a conditional distribution. Many training objectives in supervised, unsupervised, and reinforcement learning can be expressed in terms of SCGs.
To optimize an objective represented as a SCG, we need estimates of its parameter gradients. We will concentrate on graphs with some stochastic nodes (backpropagation covers the rest). For simplicity, we restrict our attention to graphs with a single stochastic node X. We can interpret the forward pass in the graph as first sampling X from the conditional distribution p_φ(x) of the stochastic node given its parents, then evaluating a deterministic function f_θ(x) at X. We can think of f_θ(X) as a noisy objective, and we are interested in optimizing its expected value E_{X∼p_φ(x)}[f_θ(X)] w.r.t. parameters θ and φ.
In general, both the objective and its gradients are intractable. We will sidestep this issue by estimating them with samples from p_φ(x). The gradient w.r.t. the parameters θ has the form
∇_θ E_{X∼p_φ(x)}[f_θ(X)] = E_{X∼p_φ(x)}[∇_θ f_θ(X)]    (1)
and can be easily estimated using Monte Carlo sampling:
E_{X∼p_φ(x)}[∇_θ f_θ(X)] ≈ (1/S) Σ_{s=1}^{S} ∇_θ f_θ(X^s)    (2)
where X^s ∼ p_φ(x) i.i.d. The more challenging task is to compute the gradient w.r.t. the parameters φ of p_φ(x). The expression obtained by differentiating the expected objective,
∇_φ E_{X∼p_φ(x)}[f_θ(X)] = ∫ f_θ(x) ∇_φ p_φ(x) dx,    (3)
does not have the form of an expectation w.r.t. x and thus does not directly lead to a Monte Carlo gradient estimator. However, there are two ways of getting around this difficulty, which lead to the two classes of estimators we will now discuss.
2.2 Score Function Estimators
The score function estimator (SFE, Fu, 2006), also known as the REINFORCE (Williams, 1992) or likelihood-ratio estimator (Glynn, 1990), is based on the identity ∇_φ p_φ(x) = p_φ(x) ∇_φ log p_φ(x), which allows the gradient in Eq. 3 to be written as an expectation:
∇_φ E_{X∼p_φ(x)}[f_θ(X)] = E_{X∼p_φ(x)}[f_θ(X) ∇_φ log p_φ(X)]    (4)
Estimating this expectation using naive Monte Carlo gives the estimator
E_{X∼p_φ(x)}[f_θ(X) ∇_φ log p_φ(X)] ≈ (1/S) Σ_{s=1}^{S} f_θ(X^s) ∇_φ log p_φ(X^s)    (5)
where X^s ∼ p_φ(x) i.i.d. This is a very general estimator that is applicable whenever log p_φ(x) is differentiable w.r.t. φ. As it does not require f_θ(x) to be differentiable or even continuous as a function of x, the SFE can be used with both discrete and continuous random variables.
Though the basic version of the estimator can suffer from high variance, various variance reduction techniques can be used to make the estimator much more effective (Greensmith et al., 2004). Baselines are the most important and widely used of these techniques (Williams, 1992). A number of score function estimators have been developed in machine learning (Paisley et al., 2012; Gregor et al., 2013; Ranganath et al., 2014; Mnih & Gregor, 2014; Titsias & LázaroGredilla, 2015; Gu et al., 2016), which differ primarily in the variance reduction techniques used.
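The estimator in Eq. 5 is straightforward to implement. The following NumPy sketch (all names illustrative) estimates the gradient of E[f(X)] for a Bernoulli variable with logit θ, where the score of a Bernoulli with success probability sigmoid(θ) is simply x − sigmoid(θ); the result can be checked against the exact gradient computed by hand.

```python
import numpy as np

# Score-function (REINFORCE) estimate of d/dtheta E[f(X)] for
# X ~ Bernoulli(sigmoid(theta)). Names and the choice of f are illustrative.
rng = np.random.default_rng(0)

def sfe_gradient(theta, f, num_samples=100_000):
    p = 1.0 / (1.0 + np.exp(-theta))
    x = (rng.random(num_samples) < p).astype(float)
    score = x - p  # d/dtheta log p_theta(x) for a Bernoulli with logit theta
    return np.mean(f(x) * score)

theta = 0.3
p = 1.0 / (1.0 + np.exp(-theta))
estimate = sfe_gradient(theta, lambda x: (x - 0.45) ** 2)
# Exact gradient of E[f(X)] = p*(0.55)^2 + (1-p)*(0.45)^2 w.r.t. theta:
exact = p * (1.0 - p) * 0.1
```

Even in this one-dimensional example the per-sample terms vary substantially, which is why the variance reduction techniques discussed next matter in practice.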
2.3 Reparameterization Trick
In many cases we can sample from p_φ(x) by first sampling Z from some fixed distribution q(z) and then transforming the sample using some function g(z, φ). For example, a sample from Normal(μ, σ²) can be obtained by sampling Z from the standard form of the distribution Normal(0, 1) and then transforming it using g(Z, μ, σ) = μ + σZ. This two-stage reformulation of the sampling process, called the reparameterization trick, allows us to transfer the dependence on φ from p into f by writing f_θ(g(z, φ)) for Z ∼ q(z), making it possible to reduce the problem of estimating the gradient w.r.t. parameters of a distribution to the simpler problem of estimating the gradient w.r.t. parameters of a deterministic function.
Having reparameterized p_φ(x), we can now express the objective as an expectation w.r.t. q(z):
E_{X∼p_φ(x)}[f_θ(X)] = E_{Z∼q(z)}[f_θ(g(Z, φ))]    (6)
As q(z) does not depend on φ, we can estimate the gradient w.r.t. φ in exactly the same way we estimated the gradient w.r.t. θ in Eq. 1. Assuming differentiability of f_θ(x) w.r.t. x and of g(z, φ) w.r.t. φ and using the chain rule gives
∇_φ E_{X∼p_φ(x)}[f_θ(X)] = E_{Z∼q(z)}[∇_φ f_θ(g(Z, φ))] = E_{Z∼q(z)}[f′_θ(g(Z, φ)) ∇_φ g(Z, φ)]    (7)
The reparameterization trick, introduced in the context of variational inference independently by Kingma & Welling (2014), Rezende et al. (2014), and Titsias & LázaroGredilla (2014), is usually the estimator of choice when it is applicable. For continuous latent variables which are not directly reparameterizable, new hybrid estimators have also been developed, by combining partial reparameterizations with score function estimators (Ruiz et al., 2016; Naesseth et al., 2016).
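The Normal example above can be sketched in a few lines of NumPy (names illustrative): with X = μ + σε and f(x) = x², the chain rule in Eq. 7 gives per-sample gradient terms 2X for μ and 2Xε for σ, which average to the exact gradients (2μ, 2σ) of E[X²] = μ² + σ².

```python
import numpy as np

# Pathwise (reparameterized) estimate of the gradient of E[f(X)] w.r.t.
# (mu, sigma) for X ~ Normal(mu, sigma^2), using X = mu + sigma * eps.
rng = np.random.default_rng(0)

def reparam_gradient(mu, sigma, num_samples=100_000):
    eps = rng.standard_normal(num_samples)
    x = mu + sigma * eps               # g(eps, phi)
    df_dx = 2.0 * x                    # f(x) = x^2, so f'(x) = 2x
    # Chain rule: dx/dmu = 1, dx/dsigma = eps
    return np.mean(df_dx * 1.0), np.mean(df_dx * eps)

# For f(x) = x^2: E[X^2] = mu^2 + sigma^2, so the exact grads are (2mu, 2sigma).
grad_mu, grad_sigma = reparam_gradient(mu=1.5, sigma=0.8)
```

Note how the same noise samples serve both parameter gradients; this reuse of the path through g is what typically makes the estimator low variance compared to the score function estimator.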
2.4 Application: Variational Training of Latent Variable Models
We will now see how the task of training latent variable models can be formulated in the SCG framework. Such models assume that each observation x is obtained by first sampling a vector of latent variables Z from the prior p_θ(z) before sampling the observation itself from p_θ(x | z). Thus the probability of observation x is p_θ(x) = E_{Z∼p_θ(z)}[p_θ(x | Z)]. Maximum likelihood training of such models is infeasible, because the log-likelihood (LL) objective log p_θ(x) is typically intractable and does not fit into the above framework due to the expectation being inside the log. The multi-sample variational objective (Burda et al., 2016),
L_m(θ, φ) = E_{Z^{1:m} ∼ q_φ(z|x)} [ log (1/m) Σ_{i=1}^{m} p_θ(x, Z^i) / q_φ(Z^i | x) ],    (8)
provides a convenient alternative, which has precisely the form we considered in Section 2.1. This approach relies on introducing an auxiliary distribution q_φ(z | x) with its own parameters φ, which serves as an approximation to the intractable posterior p_θ(z | x). The model is trained by jointly maximizing the objective w.r.t. the parameters of p and q. The number of samples m used inside the objective allows trading off the computational cost against the tightness of the bound. For m = 1, L_m becomes the widely used evidence lower bound (ELBO, Hoffman et al., 2013) on log p_θ(x), while for m > 1, it is known as the importance weighted bound (Burda et al., 2016).
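The bound of Eq. 8 is easy to evaluate by Monte Carlo. The toy sketch below (a hypothetical model where prior, posterior, and likelihood are all Bernoulli, so the true log-likelihood is available in closed form) illustrates the tightening of the bound with m: the single-sample bound sits below the 50-sample bound, which in turn sits below log p(x).

```python
import numpy as np

# Monte Carlo evaluation of the m-sample variational bound
#   L_m = E_{z^1..z^m ~ q(z|x)} [ log (1/m) sum_i p(x, z^i) / q(z^i|x) ]
# for a toy fully-Bernoulli model. All names and parameters are illustrative.
rng = np.random.default_rng(0)

def log_bernoulli(b, p):
    return b * np.log(p) + (1 - b) * np.log(1 - p)

def multi_sample_bound(x, m, prior_p, lik_p, q_p, num_outer=50_000):
    # z: (num_outer, m) samples from the variational distribution q(z|x)
    z = (rng.random((num_outer, m)) < q_p).astype(float)
    log_w = (log_bernoulli(z, prior_p)                                 # log p(z)
             + log_bernoulli(x, np.where(z == 1, lik_p[1], lik_p[0]))  # log p(x|z)
             - log_bernoulli(z, q_p))                                  # -log q(z|x)
    # log (1/m) sum_i w_i, computed stably in log space
    log_avg = np.logaddexp.reduce(log_w, axis=1) - np.log(m)
    return np.mean(log_avg)

l1 = multi_sample_bound(x=1.0, m=1, prior_p=0.5, lik_p=(0.2, 0.9), q_p=0.6)
l50 = multi_sample_bound(x=1.0, m=50, prior_p=0.5, lik_p=(0.2, 0.9), q_p=0.6)
true_ll = np.log(0.5 * 0.9 + 0.5 * 0.2)   # exact log p(x=1) for this toy model
```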
3 The Concrete Distribution
3.1 Discrete Random Variables and the GumbelMax Trick
To motivate the construction of Concrete random variables, we review a method for sampling from discrete distributions called the Gumbel-Max trick (Luce, 1959; Yellott, 1977; Papandreou & Yuille, 2011; Hazan & Jaakkola, 2012; Maddison et al., 2014). We restrict ourselves to a representation of discrete states as vectors d ∈ {0, 1}^n of bits that are one-hot, i.e. Σ_{k=1}^{n} d_k = 1. This is a flexible representation in a computation graph; to achieve an integral representation take the inner product of d with (1, …, n), and to achieve a point mass representation in ℝ^m take Wd for W ∈ ℝ^{m×n}.
Consider an unnormalized parameterization (α_1, …, α_n), where α_k ∈ (0, ∞), of a discrete distribution D ∼ Discrete(α)—we can assume that states with 0 probability are excluded. The Gumbel-Max trick proceeds as follows: sample G_k ∼ Gumbel i.i.d. for each k, find k that maximizes log α_k + G_k, set D_k = 1 and the remaining D_i = 0 for i ≠ k. Then
P(D_k = 1) = α_k / (α_1 + … + α_n)    (9)
In other words, the sampling of a discrete random variable can be refactored into a deterministic function—componentwise addition followed by argmax—of the parameters log α_k and fixed distribution G_k. See Figure 1(a) for a visualization.
The apparently arbitrary choice of noise gives the trick its name, as G has a Gumbel distribution. This distribution features in extreme value theory (Gumbel, 1954), where it plays a central role similar to the Normal distribution: the Gumbel distribution is stable under max operations, and for some distributions, the order statistics (suitably normalized) of i.i.d. draws approach the Gumbel in distribution. The Gumbel can also be recognized as a −log-transformed exponential random variable, so the correctness of (9) also reduces to a well known result regarding the argmin of exponential random variables. See (Hazan et al., 2016) for a collection of related work, and particularly the chapter (Maddison, 2016) for a proof and generalization of this trick.
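The trick can be checked empirically in a few lines (a NumPy sketch with illustrative names): the empirical frequency of each argmax index matches the normalized probabilities α_k / Σ_i α_i.

```python
import numpy as np

# Gumbel-Max trick: argmax_k (log alpha_k + G_k) with G_k ~ Gumbel(0, 1)
# is a sample from Discrete(alpha).
rng = np.random.default_rng(0)

def gumbel_max_sample(log_alpha, num_samples):
    g = rng.gumbel(size=(num_samples, log_alpha.shape[0]))
    return np.argmax(log_alpha + g, axis=1)

alpha = np.array([2.0, 1.0, 5.0])            # unnormalized probabilities
samples = gumbel_max_sample(np.log(alpha), num_samples=200_000)
freq = np.bincount(samples, minlength=3) / samples.shape[0]
target = alpha / alpha.sum()                 # (0.25, 0.125, 0.625)
```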
3.2 Concrete Random Variables
The derivative of the argmax is 0 everywhere except at the boundary of state changes, where it is undefined. For this reason the Gumbel-Max trick is not a suitable reparameterization for use in SCGs with AD. Here we introduce the Concrete distribution, motivated by considering a graph which is the same as Figure 1(a) up to a continuous relaxation of the argmax computation, see Figure 1(b). This will ultimately allow the optimization of parameters via gradients.
The argmax computation returns states on the vertices of the simplex Δ^{n−1} = {x ∈ ℝ^n | x_k ∈ [0, 1], Σ_{k=1}^{n} x_k = 1}. The idea behind Concrete random variables is to relax the state of a discrete variable from the vertices into the interior, where it is a random probability vector—a vector of numbers between 0 and 1 that sum to 1. To sample a Concrete random variable X ∈ Δ^{n−1} at temperature λ ∈ (0, ∞) with parameters α_k ∈ (0, ∞), sample G_k ∼ Gumbel i.i.d. and set
X_k = exp((log α_k + G_k)/λ) / Σ_{i=1}^{n} exp((log α_i + G_i)/λ)    (10)
The softmax computation of (10) smoothly approaches the discrete argmax computation as λ → 0 while preserving the relative order of the Gumbels log α_k + G_k. So, imagine making a series of forward passes on the graphs of Figure 1. Both graphs return a stochastic value for each forward pass, but for smaller temperatures the outputs of Figure 1(b) become more discrete and eventually indistinguishable from a typical forward pass of Figure 1(a).
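The sampling routine of Eq. 10 is a one-liner in practice. The sketch below (illustrative names; the max-subtraction is a standard stability trick, not part of the definition) draws Concrete samples at a low and a high temperature: low-temperature samples concentrate near one-hot vertices, while high-temperature samples sit in the interior of the simplex.

```python
import numpy as np

# Sampling a Concrete random variable: softmax of (log alpha + Gumbel noise)
# divided by a temperature. As the temperature -> 0 the samples concentrate
# on the one-hot vertices of the simplex.
rng = np.random.default_rng(0)

def concrete_sample(log_alpha, temperature, num_samples):
    g = rng.gumbel(size=(num_samples, log_alpha.shape[0]))
    logits = (log_alpha + g) / temperature
    logits -= logits.max(axis=1, keepdims=True)   # for numerical stability
    z = np.exp(logits)
    return z / z.sum(axis=1, keepdims=True)

log_alpha = np.log(np.array([2.0, 1.0, 5.0]))
hot = concrete_sample(log_alpha, temperature=0.05, num_samples=10_000)
warm = concrete_sample(log_alpha, temperature=2.0, num_samples=10_000)
```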
The distribution of X sampled via (10) has a closed form density on the simplex. Because there may be other ways to sample a Concrete random variable, we take the density to be its definition.
Definition 1 (Concrete Random Variables).
Let α ∈ (0, ∞)^n and λ ∈ (0, ∞). X ∈ Δ^{n−1} has a Concrete distribution X ∼ Concrete(α, λ) with location α and temperature λ, if its density is:
p_{α,λ}(x) = (n − 1)! λ^{n−1} Π_{k=1}^{n} ( α_k x_k^{−λ−1} / Σ_{i=1}^{n} α_i x_i^{−λ} )    (11)
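The density of Eq. 11 is straightforward to evaluate in log space. In the sketch below (illustrative names), we also exploit a convenient sanity check: for n = 2, λ = 1, and α = (1, 1), the product in Eq. 11 simplifies to (x₁x₂)^{−2}·(x₁x₂)² = 1 on the 1-simplex, so the log-density should be identically 0.

```python
import numpy as np
from math import lgamma

# Log-density of Concrete(alpha, lam), following Eq. 11:
#   log p(x) = log((n-1)!) + (n-1) log(lam)
#              + sum_k [ log(alpha_k) + (-lam - 1) log(x_k) ]
#              - n log( sum_k alpha_k * x_k^(-lam) )
def concrete_log_density(x, alpha, lam):
    n = x.shape[0]
    log_terms = np.log(alpha) + (-lam - 1.0) * np.log(x)
    log_norm = np.log(np.sum(alpha * x ** (-lam)))
    return lgamma(n) + (n - 1) * np.log(lam) + np.sum(log_terms) - n * log_norm

# Sanity check: with n = 2, lam = 1, alpha = (1, 1) the Concrete density is
# uniform (identically 1) on the 1-simplex, so the log-density is 0.
val = concrete_log_density(np.array([0.3, 0.7]), np.array([1.0, 1.0]), 1.0)
```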
Proposition 1 lists a few properties of the Concrete distribution. Property (1) is confirmation that our definition corresponds to the sampling routine (10). Property (2) confirms that rounding a Concrete random variable results in the discrete random variable whose distribution is described by the logits log α_k. Property (3) confirms that taking the zero temperature limit of a Concrete random variable is the same as rounding. Finally, (4) is a convexity result on the density. We prove these results in Appendix A.
Proposition 1 (Some Properties of Concrete Random Variables).
Let X ∼ Concrete(α, λ) with location parameters α ∈ (0, ∞)^n and temperature λ ∈ (0, ∞), then

(1) (Reparameterization) If G_k ∼ Gumbel i.i.d., then X_k has the same distribution as exp((log α_k + G_k)/λ) / Σ_{i=1}^{n} exp((log α_i + G_i)/λ),

(2) (Rounding) P(X_k > X_i for all i ≠ k) = α_k / (α_1 + … + α_n),

(3) (Zero temperature) P(lim_{λ→0} X_k = 1) = α_k / (α_1 + … + α_n),

(4) (Convex eventually) If λ ≤ (n − 1)^{−1}, then p_{α,λ}(x) is log-convex in x.
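The rounding property can be checked empirically (a NumPy sketch with illustrative names): since the softmax is monotone, the index of the largest coordinate of a Concrete sample equals the argmax of the perturbed logits, so its distribution matches Discrete(α) at any temperature.

```python
import numpy as np

# Empirical check of the rounding property: the index of the largest
# coordinate of a Concrete sample is distributed as Discrete(alpha),
# regardless of the temperature.
rng = np.random.default_rng(1)

def concrete_sample(log_alpha, temperature, num_samples):
    g = rng.gumbel(size=(num_samples, log_alpha.shape[0]))
    z = np.exp((log_alpha + g) / temperature)
    return z / z.sum(axis=1, keepdims=True)

alpha = np.array([1.0, 3.0, 6.0])
x = concrete_sample(np.log(alpha), temperature=1.7, num_samples=200_000)
freq = np.bincount(np.argmax(x, axis=1), minlength=3) / x.shape[0]
target = alpha / alpha.sum()                 # (0.1, 0.3, 0.6)
```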
The binary case of the GumbelMax trick simplifies to passing additive noise through a step function. The corresponding Concrete relaxation is implemented by passing additive noise through a sigmoid—see Figure 3. We cover this more thoroughly in Appendix B, along with a cheat sheet (Appendix F) on the density and implementation of all the random variables discussed in this work.
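A minimal sketch of this binary case (illustrative names): Logistic noise added to a logit and passed through a tempered sigmoid. Rounding the relaxed sample recovers the underlying Bernoulli at any temperature, since x > 1/2 exactly when logit + noise > 0.

```python
import numpy as np

# Binary special case: the Gumbel-Max trick reduces to thresholding Logistic
# noise added to a logit; the binary Concrete relaxation replaces the hard
# threshold with a tempered sigmoid.
rng = np.random.default_rng(0)

def binary_concrete(logit, temperature, num_samples):
    u = rng.random(num_samples)
    logistic_noise = np.log(u) - np.log1p(-u)    # Logistic(0, 1) sample
    return 1.0 / (1.0 + np.exp(-(logit + logistic_noise) / temperature))

logit = 0.8
relaxed = binary_concrete(logit, temperature=0.3, num_samples=100_000)
# Rounding recovers Bernoulli(sigmoid(logit)) regardless of temperature.
rounded_freq = (relaxed > 0.5).mean()
target = 1.0 / (1.0 + np.exp(-logit))
```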
3.3 Concrete Relaxations
Concrete random variables may have some intrinsic value, but we investigate them simply as surrogates for optimizing a SCG with discrete nodes. When it is computationally feasible to integrate over the discreteness, that will always be a better choice. Thus, we consider the use case of optimizing a large graph with discrete stochastic nodes from samples.
First, we outline our proposal for how to use Concrete relaxations by considering a variational autoencoder with a single discrete latent variable. Let P_a(d) be the mass function of some n-dimensional one-hot discrete random variable D ∼ Discrete(a) with unnormalized probabilities a ∈ (0, ∞)^n, and let p_θ(x | d) be some distribution over a data point x given d one-hot. The generative model is then p_{θ,a}(x, d) = p_θ(x | d) P_a(d). Let Q_α(d | x) be an approximating posterior over one-hot d whose unnormalized probabilities α(x) ∈ (0, ∞)^n depend on x. All together the variational lower bound we care about stochastically optimizing is
L_1(θ, a, α) = E_{D ∼ Q_α(d|x)} [ log ( p_θ(x | D) P_a(D) / Q_α(D | x) ) ]    (12)
with respect to θ, a, and any parameters of α. First, we relax the stochastic computation D ∼ Discrete(α(x)) by replacing D with a Concrete random variable Z ∼ Concrete(α(x), λ_1) with density q_{α,λ_1}(z | x). Simply replacing every instance of D with Z in Eq. 12 will result in a non-interpretable objective, which does not necessarily lower-bound log p(x), because the resulting expectation of log(P_a(Z)/Q_α(Z | x)) is not a KL divergence. Thus we propose "relaxing" the terms log P_a(d) and log Q_α(d | x) to reflect the true sampling distribution. Thus, the relaxed objective is:
E_{Z ∼ q_{α,λ_1}(z|x)} [ log ( p_θ(x | Z) p_{a,λ_2}(Z) / q_{α,λ_1}(Z | x) ) ]    (13)
where p_{a,λ_2}(z) is a Concrete density with location a and temperature λ_2. At test time we evaluate the discrete lower bound L_1(θ, a, α). Naively implementing Eq. 13 will result in numerical issues. We discuss this and other details in Appendix C.
Thus, the basic paradigm we propose is the following: during training replace every discrete node with a Concrete node at some fixed temperature (or with an annealing schedule). The graphs are identical up to the softmax/argmax computations, so the parameters of the relaxed graph and discrete graph are the same. When an objective depends on the log-probability of discrete variables in the SCG, as the variational lower bound does, we propose that the log-probability terms are also "relaxed" to represent the true distribution of the relaxed node. At test time the original discrete loss is evaluated. This is possible, because the discretization of any Concrete distribution has a closed form mass function, and the relaxation of any discrete distribution into a Concrete distribution has a closed form density. This is not always possible. For example, the multinomial probit model—the Gumbel-Max trick with Gaussians replacing Gumbels—does not have a closed form mass.
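A single-sample training-time sketch of the relaxed bound in Eq. 13, for a toy model with a Concrete prior, a Concrete posterior, and a Bernoulli likelihood whose logit is a linear function of the relaxed state. All names, parameters, and the likelihood form are illustrative assumptions, not the paper's architecture.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)

def concrete_log_density(z, alpha, lam):
    # Log-density of Concrete(alpha, lam), following Eq. 11.
    n = z.shape[0]
    return (lgamma(n) + (n - 1) * np.log(lam)
            + np.sum(np.log(alpha) + (-lam - 1.0) * np.log(z))
            - n * np.log(np.sum(alpha * z ** (-lam))))

def relaxed_bound_sample(x, post_alpha, prior_a, lam1, lam2, lik_w):
    # Reparameterized sample Z ~ Concrete(post_alpha, lam1)
    g = rng.gumbel(size=post_alpha.shape[0])
    logits = (np.log(post_alpha) + g) / lam1
    z = np.exp(logits - logits.max())
    z = z / z.sum()
    # Bernoulli likelihood whose logit is linear in the relaxed state z
    p_x = 1.0 / (1.0 + np.exp(-np.dot(lik_w, z)))
    log_lik = x * np.log(p_x) + (1 - x) * np.log(1 - p_x)
    # log p(x|Z) + log p_prior(Z) - log q_post(Z|x), as in Eq. 13
    return (log_lik + concrete_log_density(z, prior_a, lam2)
            - concrete_log_density(z, post_alpha, lam1))

est = np.mean([relaxed_bound_sample(1.0, np.array([2.0, 1.0]),
                                    np.array([1.0, 1.0]), 0.5, 0.5,
                                    np.array([2.0, -1.0]))
               for _ in range(2000)])
```

Because the sampling of Z is reparameterized, gradients of this estimate with respect to the posterior parameters would flow through z in an AD library; here we only check that the estimate is a finite lower-bound-style quantity.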
The success of Concrete relaxations will depend on the choice of temperature during training. It is important that the relaxed nodes are not able to represent a precise real valued mode in the interior of the simplex, as in Figure 2(d). If this is the case, it is possible for the relaxed random variable to communicate much more than log₂(n) bits of information about its parameters. This might lead the relaxation to prefer the interior of the simplex to the vertices, and as a result there will be a large integrality gap in the overall performance of the discrete graph. Therefore Proposition 1 (4) is a conservative guideline for generic n-ary Concrete relaxations; at temperatures λ ≤ (n − 1)^{−1} we are guaranteed not to have any modes in the interior for any α ∈ (0, ∞)^n. We discuss the subtleties of choosing the temperatures in more detail in Appendix C. Ultimately the best choice of λ and the performance of the relaxation for any specific n will be an empirical question.
4 Related Work
Perhaps the most common distribution over the simplex is the Dirichlet, with density proportional to Π_{k=1}^{n} x_k^{α_k − 1} on Δ^{n−1}. The Dirichlet can be characterized by strong independence properties, and a great deal of work has been done to generalize it (Connor & Mosimann, 1969; Aitchison, 1985; Rayens & Srinivasan, 1994; Favaro et al., 2011). Of note is the Logistic Normal distribution (Atchison & Shen, 1980), which can be simulated by taking the softmax of n − 1 normal random variables and an nth logit that is deterministically zero. The Logistic Normal is an important distribution, because it can effectively model correlations within the simplex (Blei & Lafferty, 2006). To our knowledge the Concrete distribution does not fall completely into any family of distributions previously described. For λ ≤ 1 the Concrete is in a class of normalized infinitely divisible distributions (S. Favaro, personal communication), and the results of Favaro et al. (2011) apply.
The idea of using a softmax of Gumbels as a relaxation for a discrete random variable was concurrently considered by Jang et al. (2016), where it was called the Gumbel-Softmax. They do not use the density in the relaxed objective, opting instead to compute all aspects of the graph, including discrete log-probability computations, with the relaxed stochastic state of the graph. In the case of variational inference, this relaxed objective is not a lower bound on the marginal likelihood of the observations, and care needs to be taken when optimizing it. The idea of using sigmoidal functions with additive input noise to approximate discreteness is also not a new idea. Frey (1997) introduced nonlinear Gaussian units which computed their activation by passing Gaussian noise with the mean and variance specified by the input to the unit through a nonlinearity, such as the logistic function. Salakhutdinov & Hinton (2009) binarized real-valued codes of an autoencoder by adding (Gaussian) noise to the logits before passing them through the logistic function. Most recently, to avoid the difficulty associated with likelihood-ratio methods, Kočiský et al. (2016) relaxed the discrete sampling operation by sampling a vector of Gaussians instead and passing those through a softmax.
There is another family of gradient estimators that have been studied in the context of training neural networks with discrete units. These are usually collected under the umbrella of straight-through estimators (Bengio et al., 2013; Raiko et al., 2014). The basic idea is to pass discrete values forward, but to take gradients through the expected value. They have good empirical performance, but have not been shown to be the estimators of any loss function. This is in contrast to gradients from Concrete relaxations, which are biased with respect to the discrete graph, but unbiased with respect to the continuous one.
5 Experiments
5.1 Protocol
The aim of our experiments was to evaluate the effectiveness of the gradients of Concrete relaxations for optimizing SCGs with discrete nodes. We considered the tasks in (Mnih & Rezende, 2016): structured output prediction and density estimation. Both tasks are difficult optimization problems involving fitting probability distributions with hundreds of latent discrete nodes. We compared the performance of Concrete reparameterizations to two stateoftheart score function estimators: VIMCO (Mnih & Rezende, 2016) for optimizing the multisample variational objective () and NVIL (Mnih & Gregor, 2014) for optimizing the singlesample one (). We performed the experiments using the MNIST and Omniglot datasets. These are datasets of images of handwritten digits (MNIST) or letters (Omniglot). For MNIST we used the fixed binarization of Salakhutdinov & Murray (2008) and the standard 50,000/10,000/10,000 split into training/validation/testing sets. For Omniglot we sampled a fixed binarization and used the standard 24,345/8,070 split into training/testing sets. We report the negative loglikelihood (NLL) of the discrete graph on the test data as the performance metric.
All of our models were neural networks with layers of n-ary discrete stochastic nodes with values on the corners of the hypercube {−1, 1}^{log₂(n)}. The distributions were parameterized by n real values log α_k ∈ ℝ, which we took to be the logits of a discrete random variable with n states. Model descriptions are of the form "(200V–200H–784V)", read from left to right. This describes the order of conditional sampling, again from left to right, with each integer representing the number of stochastic units in a layer. The letters V and H represent observed and latent variables, respectively. If the leftmost layer is H, then it was sampled unconditionally from some parameters. Conditioning functions are described by "–" and "∼", where "–" means a linear function of the previous layer and "∼" means a non-linear function. A "layer" of these units is simply the concatenation of some number of independent nodes whose parameters are determined as a function of the previous layer. For example a 240 binary layer is a factored distribution over the {−1, 1}^{240} hypercube. Whereas a 240 unit 8-ary layer can be seen as a distribution over the same hypercube where each of the 80 triples of units is sampled independently from an 8-way discrete distribution over {−1, 1}³. All models were initialized with the heuristic of Glorot & Bengio (2010) and optimized using Adam (Kingma & Ba, 2014). All temperatures were fixed throughout training. See Appendix D for hyperparameter details.
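The layer construction above can be sketched as follows (an illustrative NumPy toy, not the paper's implementation): 80 independent 8-way discrete variables, each mapped to a corner of {−1, 1}³, concatenated into a 240-unit layer.

```python
import numpy as np

# Sketch of an 8-ary discrete layer: 80 independent 8-way discrete variables,
# each mapped to a vertex of {-1, 1}^3, concatenated into 240 units.
rng = np.random.default_rng(0)

# The 8 vertices of the cube {-1, 1}^3, indexed by the 3-bit expansion of i.
corners = np.array([[2 * ((i >> b) & 1) - 1 for b in range(3)]
                    for i in range(8)], dtype=float)

def sample_8ary_layer(logits):
    # logits: (80, 8) unnormalized log-probabilities, one row per variable
    g = rng.gumbel(size=logits.shape)
    idx = np.argmax(logits + g, axis=1)    # Gumbel-Max sample per variable
    return corners[idx].reshape(-1)        # (240,) vector of +/-1 units

layer = sample_8ary_layer(np.zeros((80, 8)))   # uniform logits for the demo
```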
5.2 Density Estimation
Table 1: Density estimation with binary latent variable models: negative log-likelihood (NLL) in nats on test and training data (lower is better), for Concrete relaxations vs. VIMCO, for m ∈ {1, 5, 50}. "–" denotes a linear conditioning function and "∼" a non-linear one.

binary model              m  | MNIST Test       | MNIST Train      | Omniglot Test    | Omniglot Train
                             | Concrete  VIMCO  | Concrete  VIMCO  | Concrete  VIMCO  | Concrete  VIMCO
(200H – 784V)             1  | 107.3     104.4  | 107.5     104.2  | 118.7     115.7  | 117.0     112.2
                          5  | 104.9     101.9  | 104.9     101.5  | 118.0     113.5  | 115.8     110.8
                          50 | 104.3      98.8  | 104.2      98.3  | 118.9     113.0  | 115.8     110.0
(200H – 200H – 784V)      1  | 102.1      92.9  | 102.3      91.7  | 116.3     109.2  | 114.4     104.8
                          5  |  99.9      91.7  | 100.0      90.8  | 116.0     107.5  | 113.5     103.6
                          50 |  99.5      90.7  |  99.4      89.7  | 117.0     108.1  | 113.9     103.6
(200H ∼ 784V)             1  |  92.1      93.8  |  91.2      91.5  | 108.4     116.4  | 103.6     110.3
                          5  |  89.5      91.4  |  88.1      88.6  | 107.5     118.2  | 101.4     102.3
                          50 |  88.5      89.3  |  86.4      86.5  | 108.1     116.0  | 100.5     100.8
(200H ∼ 200H ∼ 784V)      1  |  87.9      88.4  |  86.5      85.8  | 105.9     111.7  | 100.2     105.7
                          5  |  86.3      86.4  |  84.1      82.5  | 105.8     108.2  |  98.6     101.1
                          50 |  85.7      85.5  |  83.1      81.8  | 106.8     113.2  |  97.5      95.2
Density estimation, or generative modelling, is the problem of fitting the distribution of data. We took the latent variable approach described in Section 2.4 and trained the models by optimizing the variational objective L_m given by Eq. 8, averaged uniformly over minibatches of data points x. Both our generative models and variational distributions were parameterized with neural networks as described above. We trained models using L_m for m ∈ {1, 5, 50} and approximated the NLL with the multi-sample bound computed with a large number of samples, averaged uniformly over the whole dataset.
The results are shown in Table 1. In general, VIMCO outperformed Concrete relaxations for linear models and Concrete relaxations outperformed VIMCO for non-linear models. We also tested the effectiveness of Concrete relaxations on generative models with n-ary layers on the same objective. The best 4-ary model achieved test/train NLL 86.7/83.3, the best 8-ary achieved 87.4/84.6 with Concrete relaxations; more complete results are in Appendix E. The relatively poor performance of the 8-ary model may be because moving from 4 to 8 results in a more difficult objective without much added capacity. As a control we trained n-ary models using Logistic Normals as relaxations of discrete distributions (with retuned temperature hyperparameters). Because the discrete zero temperature limit of Logistic Normals is a multinomial probit whose mass function is not known, we evaluated the discrete model by sampling from the discrete distribution parameterized by the logits learned during training. The best 4-ary model achieved test/train NLL of 88.7/85.0, the best 8-ary model achieved 89.1/85.1.
5.3 Structured Output Prediction

Structured output prediction is concerned with modelling the high-dimensional distribution of an observation given a context and can be seen as conditional density estimation. We considered the task of predicting the bottom half x of an image of an MNIST digit given its top half c, as introduced by Raiko et al. (2014). We followed Raiko et al. (2014) in using a model with layers of discrete stochastic units between the context and the observation. Conditioned on the top half c the network samples from a distribution p_θ(z | c) over layers of stochastic units z, then predicts x by sampling from a distribution p_θ(x | z). The training objective for a single pair (x, c) is

L^{SP}_m(θ) = E_{Z^{1:m} ∼ p_θ(z|c)} [ log (1/m) Σ_{i=1}^{m} p_θ(x | Z^i) ].
This objective is a special case of L_m (Eq. 8) where we use the prior p_θ(z | c) as the variational distribution. Thus, the objective is a lower bound on log p_θ(x | c).
We trained the models by optimizing L^{SP}_m for m ∈ {1, 5, 50} averaged uniformly over minibatches, and evaluated them by computing the same bound with a large number of samples, averaged uniformly over the entire dataset. The results are shown in Figure 4. Concrete relaxations more uniformly outperformed VIMCO in this instance. We also trained n-ary (392V–240H–240H–240H–392V) models on the same objective using the best temperature hyperparameters from density estimation. 4-ary achieved a test/train NLL of 55.4/46.0 and 8-ary achieved 54.7/44.8. As opposed to density estimation, increasing arity uniformly improved the models. We also investigated the hypothesis that for higher temperatures Concrete relaxations might prefer the interior of the interval to the boundary points {−1, 1}. Figure 4 was generated with a binary (392V–240H–240H–240H–392V) model trained on this task.
6 Conclusion
We introduced the Concrete distribution, a continuous relaxation of discrete random variables. The Concrete distribution is a new distribution on the simplex with a closed form density parameterized by a vector of positive location parameters and a positive temperature. Crucially, the zero temperature limit of every Concrete distribution corresponds to a discrete distribution, and any discrete distribution can be seen as the discretization of a Concrete one. The application we considered was training stochastic computation graphs with discrete stochastic nodes. The gradients of Concrete relaxations are biased with respect to the original discrete objective, but they are low variance unbiased estimators of a continuous surrogate objective. We showed in a series of experiments that stochastic nodes with Concrete distributions can be used effectively to optimize the parameters of a stochastic computation graph with discrete stochastic nodes. We did not find that annealing or automatically tuning the temperature was important for these experiments, but it remains interesting and possibly valuable future work.
Acknowledgments
We thank Jimmy Ba for the excitement and ideas in the early days and Stefano Favaro for some analysis of the distribution. We also thank Gabriel Barth-Maron and Roger Grosse.
References
 Abadi et al. (2015) Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.
 Aitchison (1985) J Aitchison. A general class of distributions on the simplex. Journal of the Royal Statistical Society. Series B (Methodological), pp. 136–146, 1985.
 Atchison & Shen (1980) J Atchison and Sheng M Shen. Logisticnormal distributions: Some properties and uses. Biometrika, 67(2):261–272, 1980.
 Bengio et al. (2013) Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
 Blei & Lafferty (2006) David Blei and John Lafferty. Correlated topic models. 2006.
 Burda et al. (2016) Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. ICLR, 2016.
 Connor & Mosimann (1969) Robert J Connor and James E Mosimann. Concepts of independence for proportions with a generalization of the Dirichlet distribution. Journal of the American Statistical Association, 64(325):194–206, 1969.
 Favaro et al. (2011) Stefano Favaro, Georgia Hadjicharalambous, and Igor Prünster. On a class of distributions on the simplex. Journal of Statistical Planning and Inference, 141(9):2987–3004, 2011.
 Frey (1997) Brendan Frey. Continuous sigmoidal belief networks trained using slice sampling. In NIPS, 1997.
 Fu (2006) Michael C Fu. Gradient estimation. Handbooks in operations research and management science, 13:575–616, 2006.
 Glorot & Bengio (2010) Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249–256, 2010.
 Glynn (1990) Peter W Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75–84, 1990.
 Graves et al. (2016) Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016.
 Greensmith et al. (2004) Evan Greensmith, Peter L. Bartlett, and Jonathan Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. JMLR, 5, 2004.
 Grefenstette et al. (2015) Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pp. 1828–1836, 2015.
 Gregor et al. (2013) Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. arXiv preprint arXiv:1310.8499, 2013.
 Gregor et al. (2015) Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
 Gu et al. (2016) Shixiang Gu, Sergey Levine, Ilya Sutskever, and Andriy Mnih. MuProp: Unbiased backpropagation for stochastic neural networks. ICLR, 2016.
 Gumbel (1954) Emil Julius Gumbel. Statistical theory of extreme values and some practical applications: a series of lectures. Number 33. US Govt. Print. Office, 1954.
 Hazan & Jaakkola (2012) Tamir Hazan and Tommi Jaakkola. On the partition function and random maximum a-posteriori perturbations. In ICML, 2012.
 Hazan et al. (2016) Tamir Hazan, George Papandreou, and Daniel Tarlow. Perturbation, Optimization, and Statistics. MIT Press, 2016.
 Hoffman et al. (2013) Matthew D Hoffman, David M Blei, Chong Wang, and John William Paisley. Stochastic variational inference. JMLR, 14(1):1303–1347, 2013.
 Jang et al. (2016) E. Jang, S. Gu, and B. Poole. Categorical reparameterization with Gumbel-Softmax. arXiv e-prints, November 2016.
 Kingma & Ba (2014) Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 Kingma & Welling (2013) Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
 Kingma & Welling (2014) Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. ICLR, 2014.
 Kočiský et al. (2016) Tomáš Kočiský, Gábor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. Semantic parsing with semi-supervised sequential autoencoders. In EMNLP, 2016.
 Luce (1959) R. Duncan Luce. Individual Choice Behavior: A Theoretical Analysis. New York: Wiley, 1959.
 Maddison (2016) Chris J Maddison. A Poisson process model for Monte Carlo. In Tamir Hazan, George Papandreou, and Daniel Tarlow (eds.), Perturbation, Optimization, and Statistics, chapter 7. MIT Press, 2016.
 Maddison et al. (2014) Chris J Maddison, Daniel Tarlow, and Tom Minka. A* Sampling. In NIPS, 2014.
 Mnih & Gregor (2014) Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In ICML, 2014.
 Mnih & Rezende (2016) Andriy Mnih and Danilo Jimenez Rezende. Variational inference for monte carlo objectives. In ICML, 2016.
 Mnih et al. (2014) Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. Recurrent models of visual attention. In NIPS, 2014.
 Naesseth et al. (2016) Christian A Naesseth, Francisco JR Ruiz, Scott W Linderman, and David M Blei. Rejection sampling variational inference. arXiv preprint arXiv:1610.05683, 2016.
 Paisley et al. (2012) John William Paisley, David M. Blei, and Michael I. Jordan. Variational bayesian inference with stochastic search. In ICML, 2012.
 Papandreou & Yuille (2011) George Papandreou and Alan L Yuille. Perturb-and-map random fields: Using discrete optimization to learn and sample from energy models. In ICCV, 2011.
 Raiko et al. (2014) Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochastic feedforward neural networks. arXiv preprint arXiv:1406.2989, 2014.
 Ranganath et al. (2014) Rajesh Ranganath, Sean Gerrish, and David M. Blei. Black box variational inference. In AISTATS, 2014.
 Rayens & Srinivasan (1994) William S Rayens and Cidambi Srinivasan. Dependence properties of generalized Liouville distributions on the simplex. Journal of the American Statistical Association, 89(428):1465–1470, 1994.
 Rezende et al. (2014) Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
 Ruiz et al. (2016) Francisco JR Ruiz, Michalis K Titsias, and David M Blei. The generalized reparameterization gradient. arXiv preprint arXiv:1610.02287, 2016.
 Salakhutdinov & Hinton (2009) Ruslan Salakhutdinov and Geoffrey Hinton. Semantic hashing. International Journal of Approximate Reasoning, 50(7):969–978, 2009.
 Salakhutdinov & Murray (2008) Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In ICML, 2008.
 Schulman et al. (2015) John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In NIPS, 2015.
 Theano Development Team (2016) Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv eprints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.
 Titsias & Lázaro-Gredilla (2014) Michalis Titsias and Miguel Lázaro-Gredilla. Doubly stochastic variational Bayes for non-conjugate inference. In Tony Jebara and Eric P. Xing (eds.), ICML, 2014.
 Titsias & Lázaro-Gredilla (2015) Michalis Titsias and Miguel Lázaro-Gredilla. Local expectation gradients for black box variational inference. In NIPS, 2015.
 Williams (1992) Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
 Xu et al. (2015) Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.
 Yellott (1977) John I Yellott. The relationship between Luce’s choice axiom, Thurstone’s theory of comparative judgment, and the double exponential distribution. Journal of Mathematical Psychology, 15(2):109–144, 1977.
Appendix A Proof of Proposition 1
Let $X \sim \operatorname{Concrete}(\alpha, \lambda)$ with location parameters $\alpha \in (0,\infty)^n$ and temperature $\lambda \in (0,\infty)$.

(a) Let $G_k \sim \operatorname{Gumbel}$ i.i.d., and consider
$$Y_k = \frac{\exp((\log \alpha_k + G_k)/\lambda)}{\sum_{i=1}^{n} \exp((\log \alpha_i + G_i)/\lambda)}$$
Let $Z_k = (\log \alpha_k + G_k)/\lambda$, which has density
$$\lambda\, \alpha_k \exp\left(-\lambda z_k - \alpha_k \exp(-\lambda z_k)\right)$$
We will consider the invertible transformation
$$F(z_1, \ldots, z_n) = (y_1, \ldots, y_{n-1}, c)$$
where $y_k = \exp(z_k)/\sum_{i=1}^{n}\exp(z_i)$ and $c = \log \sum_{i=1}^{n}\exp(z_i)$, then
$$F^{-1}(y_1, \ldots, y_{n-1}, c) = (\log y_1 + c, \ldots, \log y_{n-1} + c, \log y_n + c)$$
where $y_n = 1 - \sum_{i=1}^{n-1} y_i$. This has Jacobian
$$\begin{pmatrix}
y_1^{-1} & 0 & \cdots & 0 & 1 \\
0 & y_2^{-1} & \cdots & 0 & 1 \\
\vdots & & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & y_{n-1}^{-1} & 1 \\
-y_n^{-1} & -y_n^{-1} & \cdots & -y_n^{-1} & 1
\end{pmatrix}$$
By adding $y_k/y_n$ times each of the top $n-1$ rows to the bottom row we see that this Jacobian has the same determinant as the upper triangular matrix with diagonal $(y_1^{-1}, \ldots, y_{n-1}^{-1}, y_n^{-1})$, and thus the determinant is equal to $\prod_{k=1}^{n} y_k^{-1}$. All together we have the density
$$p(y_1, \ldots, y_{n-1}, c) = \prod_{k=1}^{n} y_k^{-1} \prod_{k=1}^{n} \lambda\, \alpha_k\, y_k^{-\lambda} \exp(-\lambda c) \exp\left(-\alpha_k y_k^{-\lambda} \exp(-\lambda c)\right)$$
With the change of variables $t = \exp(-\lambda c)$ (so $\mathrm{d}c = -(\lambda t)^{-1}\,\mathrm{d}t$) we have density
$$\lambda^{n-1}\, t^{n-1} \prod_{k=1}^{n} \alpha_k\, y_k^{-\lambda-1} \exp\left(-t \sum_{i=1}^{n} \alpha_i y_i^{-\lambda}\right)$$
Letting $A = \sum_{i=1}^{n} \alpha_i y_i^{-\lambda}$ and integrating out $t$ with the Gamma integral $\int_0^\infty t^{n-1} e^{-tA}\,\mathrm{d}t = (n-1)!\, A^{-n}$, we obtain
$$p(y_1, \ldots, y_{n-1}) = (n-1)!\; \lambda^{n-1} \prod_{k=1}^{n} \frac{\alpha_k\, y_k^{-\lambda-1}}{\sum_{i=1}^{n} \alpha_i y_i^{-\lambda}}$$
Thus $Y \sim \operatorname{Concrete}(\alpha, \lambda)$.

(b, c) The rounding and zero temperature properties follow from the Gumbel-Max trick: the event $\{Y_k > Y_i \text{ for all } i \neq k\}$ is exactly the event $\{\log\alpha_k + G_k > \log\alpha_i + G_i \text{ for all } i \neq k\}$, which has probability $\alpha_k / \sum_{i=1}^{n} \alpha_i$, and as $\lambda \to 0$ the softmax concentrates all of its mass on its largest argument.

(d) Let $X \sim \operatorname{Concrete}(\alpha, \lambda)$. The density of $X$ can be rewritten as
$$p_{\alpha,\lambda}(x) \propto \prod_{k=1}^{n} x_k^{\lambda(n-1)-1} \left(\sum_{i=1}^{n} \alpha_i \prod_{j \neq i} x_j^{\lambda}\right)^{-n}$$
Thus, the log density is, up to an additive constant $C$,
$$\log p_{\alpha,\lambda}(x) = C + (\lambda(n-1)-1) \sum_{k=1}^{n} \log x_k - n \log \sum_{i=1}^{n} \alpha_i \prod_{j \neq i} x_j^{\lambda}$$
If $\lambda \leq (n-1)^{-1}$, then the first $n$ terms are convex, because $-\log x$ is convex and $\lambda(n-1)-1 \leq 0$. For the last term, $-n\log y$ is convex and non-increasing, and $\sum_{i=1}^{n} \alpha_i \prod_{j \neq i} x_j^{\lambda}$ is concave for $\lambda(n-1) \leq 1$, because each summand is a monomial with non-negative exponents summing to at most one. Thus, their composition is convex. The sum of convex terms is convex, finishing the proof.
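The reparameterization in part (a) is easy to check numerically. The sketch below is our own illustration, not the paper's code: it draws Concrete samples as a tempered softmax of Gumbel perturbations and verifies the rounding property by Monte Carlo, since the argmax of a Concrete sample follows the discrete distribution $\alpha_k/\sum_i \alpha_i$ at any fixed temperature.

```python
import math
import random

def sample_concrete(alpha, lam, rng):
    """One Concrete(alpha, lam) sample: softmax of (log alpha_k + G_k) / lam."""
    # Gumbel noise via inverse CDF: G = -log(-log(U)), U ~ Uniform(0, 1)
    zs = [(math.log(a) - math.log(-math.log(rng.random()))) / lam for a in alpha]
    m = max(zs)  # max-shift for a numerically stable softmax
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

# Monte Carlo check of the rounding property: the argmax of a Concrete sample
# is distributed as alpha_k / sum(alpha), regardless of the temperature.
rng = random.Random(0)
alpha, lam, n_samples = [2.0, 1.0, 1.0], 0.5, 20000
hits = 0
for _ in range(n_samples):
    x = sample_concrete(alpha, lam, rng)
    hits += x.index(max(x)) == 0
freq = hits / n_samples  # expect roughly alpha_1 / sum(alpha) = 0.5
```

The softmax never changes which coordinate is largest, which is why the check is temperature-free.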
Appendix B The Binary Special Case
Bernoulli random variables are an important special case of discrete distributions taking states in $\{0,1\}$. Here we consider the binary special case of the Gumbel-Max trick from Figure 0(a) along with the corresponding Concrete relaxation.
Let $D = (D_1, D_2) \sim \operatorname{Discrete}(\alpha_1, \alpha_2)$ be a two state discrete random variable on $\{0,1\}^2$ such that $D_1 + D_2 = 1$, parameterized as in Figure 0(a) by $\alpha_1, \alpha_2 > 0$:
$$P(D_1 = 1) = \frac{\alpha_1}{\alpha_1 + \alpha_2} \tag{14}$$
The distribution is degenerate, because $D_2 = 1 - D_1$. Therefore we consider just $D_1$. Under the Gumbel-Max reparameterization, the event that $D_1 = 1$ is the event that $\{G_1 + \log\alpha_1 > G_2 + \log\alpha_2\}$, where $G_1, G_2 \sim \operatorname{Gumbel}$ i.i.d. The difference of two Gumbels is a Logistic distribution, $G_1 - G_2 \sim \operatorname{Logistic}$, which can be sampled in the following way: $G_1 - G_2 \overset{d}{=} \log U - \log(1-U)$, where $U \sim \operatorname{Uniform}(0,1)$. So, if $\alpha = \alpha_1 / \alpha_2$, then we have
$$P(D_1 = 1) = P(G_1 + \log\alpha_1 > G_2 + \log\alpha_2) = P(\log U - \log(1-U) + \log\alpha > 0) \tag{15}$$
Thus, $D_1 \overset{d}{=} H(\log\alpha + \log U - \log(1-U))$, where $H$ is the unit step function.
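As a sanity check (a hypothetical snippet of ours, not taken from the paper), the identity $D_1 = H(\log\alpha + \log U - \log(1-U))$ can be simulated directly and compared against the target probability $\alpha/(1+\alpha)$:

```python
import math
import random

def sample_bernoulli_logistic(alpha, rng):
    """D1 = H(log(alpha) + log(U) - log(1 - U)) is Bernoulli(alpha / (1 + alpha))."""
    u = rng.random()
    logistic = math.log(u) - math.log(1.0 - u)  # Logistic noise: a difference of Gumbels
    return 1 if math.log(alpha) + logistic > 0 else 0

rng = random.Random(0)
alpha = 3.0
mean = sum(sample_bernoulli_logistic(alpha, rng) for _ in range(20000)) / 20000
# mean should be near alpha / (1 + alpha) = 0.75
```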
Correspondingly, we can consider the Binary Concrete relaxation that results from this process. As in the $n$-ary case, we consider the sampling routine for a Binary Concrete random variable $X \in (0,1)$ first. To sample $X$, sample $L \sim \operatorname{Logistic}$ and set
$$X = \sigma\left(\frac{\log\alpha + L}{\lambda}\right) \tag{16}$$
We define the Binary Concrete random variable $X$ by its density on the unit interval.
Definition 2 (Binary Concrete Random Variables).
Let $\alpha \in (0,\infty)$ and $\lambda \in (0,\infty)$. $X \sim \operatorname{BinConcrete}(\alpha, \lambda)$ has a Binary Concrete distribution with location $\alpha$ and temperature $\lambda$, if its density is:
$$p_{\alpha,\lambda}(x) = \frac{\lambda\, \alpha\, x^{-\lambda-1} (1-x)^{-\lambda-1}}{\left(\alpha x^{-\lambda} + (1-x)^{-\lambda}\right)^2} \tag{17}$$
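To make Eq. 17 concrete, here is a short numerical check of our own (not from the paper): implement the density and integrate it over $(0,1)$. With $\alpha = 2$ and $\lambda = 1$ the total mass should be $1$, and, by the rounding property of Proposition 2, the mass above $1/2$ should be $\alpha/(1+\alpha) = 2/3$.

```python
def bin_concrete_pdf(x, alpha, lam):
    """Density of BinConcrete(alpha, lam) on (0, 1), Eq. 17."""
    num = lam * alpha * x ** (-lam - 1.0) * (1.0 - x) ** (-lam - 1.0)
    den = (alpha * x ** (-lam) + (1.0 - x) ** (-lam)) ** 2
    return num / den

def integrate(f, lo, hi, n=20000):
    """Midpoint rule; it never evaluates f at the interval endpoints."""
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

total = integrate(lambda x: bin_concrete_pdf(x, 2.0, 1.0), 0.0, 1.0)
upper = integrate(lambda x: bin_concrete_pdf(x, 2.0, 1.0), 0.5, 1.0)
# total should be close to 1, upper close to 2/3
```

The midpoint rule is deliberate: for $\lambda < 1$ the density is unbounded at the endpoints, so avoiding $x = 0$ and $x = 1$ keeps the sketch robust.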
We state without proof the special case of Proposition 1 for Binary Concrete distributions.
Proposition 2 (Some Properties of Binary Concrete Random Variables).
Let $X \sim \operatorname{BinConcrete}(\alpha, \lambda)$ with location parameter $\alpha \in (0,\infty)$ and temperature $\lambda \in (0,\infty)$, then

(Reparameterization) If $L \sim \operatorname{Logistic}$, then $X \overset{d}{=} \sigma((\log\alpha + L)/\lambda)$,

(Rounding) $P(X > 0.5) = \alpha/(1+\alpha)$,

(Zero temperature) $P(\lim_{\lambda \to 0} X = 1) = \alpha/(1+\alpha)$,

(Convex eventually) If $\lambda \leq 1$, then $p_{\alpha,\lambda}(x)$ is log-convex in $x$.
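The reparameterization and zero temperature statements can be checked empirically. The snippet below is our own illustration: it draws samples via $\sigma((\log\alpha + L)/\lambda)$ with a small temperature and confirms that the samples concentrate near $\{0, 1\}$ with $P(X > 1/2) \approx \alpha/(1+\alpha)$.

```python
import math
import random

def sample_bin_concrete(alpha, lam, rng):
    """X = sigmoid((log alpha + L) / lam), with L ~ Logistic(0, 1)."""
    u = rng.random()
    t = (math.log(alpha) + math.log(u) - math.log(1.0 - u)) / lam
    # Numerically stable sigmoid: avoids exp overflow when |t| is huge (small lam).
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    e = math.exp(t)
    return e / (1.0 + e)

rng = random.Random(0)
alpha, lam, n = 3.0, 0.01, 20000
samples = [sample_bin_concrete(alpha, lam, rng) for _ in range(n)]
frac_high = sum(x > 0.5 for x in samples) / n               # about alpha/(1+alpha) = 0.75
frac_extreme = sum(x < 0.01 or x > 0.99 for x in samples) / n  # near 1 at low temperature
```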
We can generalize the binary circuit beyond Logistic random variables. Consider an arbitrary random variable $X$ with infinite support on $\mathbb{R}$. If $F$ is the CDF of $X$, then
$$P(X > 0) = 1 - F(0)$$
If we want this to have a Bernoulli distribution with probability $\alpha/(1+\alpha)$, then we should solve the equation
$$1 - F(0) = \frac{\alpha}{1+\alpha}$$
This gives $F(0) = 1/(1+\alpha)$, which can be accomplished by relocating the random variable $Y$ with CDF $F$ to be $X = Y - F^{-1}(1/(1+\alpha))$.
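For example (our own sketch, using a standard normal as the arbitrary noise source), Python's `statistics.NormalDist` provides the inverse CDF needed for the relocation:

```python
import random
from statistics import NormalDist

def bernoulli_from_gaussian(alpha, rng):
    """H(Y - F^{-1}(1/(1+alpha))) with Y ~ N(0,1) is Bernoulli(alpha/(1+alpha))."""
    shift = NormalDist().inv_cdf(1.0 / (1.0 + alpha))  # F^{-1}(1/(1+alpha))
    return 1 if rng.gauss(0.0, 1.0) - shift > 0 else 0

rng = random.Random(0)
mean = sum(bernoulli_from_gaussian(3.0, rng) for _ in range(20000)) / 20000
# mean should be near 3 / (1 + 3) = 0.75
```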
Appendix C Using Concrete Relaxations
In this section we include some tips for implementing and using the Concrete distribution as a relaxation. We use the following notation:
$$\sigma(x) = \frac{1}{1 + \exp(-x)}, \qquad \operatorname{LSE}_{k=1}^{n}(x_k) = \log \sum_{k=1}^{n} \exp(x_k)$$
Both sigmoid and log-sum-exp are common operations in libraries like TensorFlow or Theano.
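In plain Python these two operations might look as follows; this is our own sketch, and real implementations in TensorFlow or Theano already handle the numerical stability shown here:

```python
import math

def sigmoid(x):
    """sigma(x) = 1 / (1 + exp(-x)), branched to avoid overflow for large |x|."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

def logsumexp(xs):
    """log(sum_k exp(x_k)), shifted by the max so exp never overflows."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))
```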
C.1 The Basic Problem
For the sake of exposition, we consider a simple variational autoencoder with a single discrete random variable and objective given by Eq. 8 for a single data point $x$. This scenario will allow us to discuss all of the decisions one might make when using Concrete relaxations.
In particular, let $P_a(d)$ be the mass function of some $n$-dimensional one-hot discrete random variable $D \sim \operatorname{Discrete}(a)$ with $a \in (0,\infty)^n$, let $p_\theta(x|d)$ be some likelihood (possibly computed by a neural network), which is a continuous function of $d$ and parameters $\theta$, and let $D_x$ be a one-hot discrete random variable in $\{0,1\}^n$ whose unnormalized probabilities $\alpha(x) \in (0,\infty)^n$ are some function (possibly a neural net with its own parameters) of $x$. Let $Q_\alpha(d|x)$ be the mass function of $D_x$. Then, we care about optimizing
$$\mathcal{L}_1(\theta, a, \alpha) = \mathbb{E}_{D \sim Q_\alpha(d|x)}\left[\log \frac{p_\theta(x|D)\, P_a(D)}{Q_\alpha(D|x)}\right] \tag{18}$$
with respect to $\theta$, $a$, and any parameters in $\alpha$ from samples of the SCG required to simulate an estimator of $\mathcal{L}_1(\theta, a, \alpha)$.
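Because $D$ takes only $n$ one-hot values, the expectation in Eq. 18 can be computed exactly by enumeration when $n$ is small. The sketch below uses our own helper names and serves as a ground-truth check for any relaxed estimator of this objective:

```python
import math

def log_softmax(logits):
    """Normalize unnormalized log-probabilities, with a max-shift for stability."""
    m = max(logits)
    lse = m + math.log(sum(math.exp(l - m) for l in logits))
    return [l - lse for l in logits]

def exact_elbo(log_lik, prior_logits, posterior_logits):
    """E_{D~Q}[log p(x|D) + log P_a(D) - log Q(D|x)], enumerating one-hot states.

    log_lik[k] is log p(x | D = k-th one-hot vector)."""
    log_p = log_softmax(prior_logits)
    log_q = log_softmax(posterior_logits)
    return sum(math.exp(lq) * (ll + lp - lq)
               for ll, lp, lq in zip(log_lik, log_p, log_q))
```

When the posterior matches the prior the KL term vanishes and the objective reduces to the expected log-likelihood; by Jensen's inequality it never exceeds the log marginal likelihood.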
C.2 What you might relax and why
The first consideration when relaxing an estimator of Eq. 18 is how to relax the stochastic computation. The only sampling required to simulate $\mathcal{L}_1(\theta, a, \alpha)$ is $D \sim Q_\alpha(d|x)$. The corresponding Concrete relaxation is to sample $Z \sim \operatorname{Concrete}(\alpha(x), \lambda_1)$ with temperature $\lambda_1$ and location parameters the unnormalized probabilities $\alpha(x)$ of $D$. Let $q_{\alpha,\lambda_1}(z|x)$ be the density of $Z$. We get a relaxed objective of the form:
$$\mathbb{E}_{Z \sim q_{\alpha,\lambda_1}(z|x)}\left[\,\cdot\,\right] \tag{19}$$
This choice allows us to take derivatives through the stochastic computations of the graph.
The second consideration is which objective to put in place of $[\,\cdot\,]$ in Eq. 19. We will consider the ideal scenario irrespective of numerical issues. In Subsection C.3 we address those numerical issues. The central question is how to treat the expectation of the log-ratio $\log(P_a(D)/Q_\alpha(D|x))$ (which is the KL component of the loss) when $Z$ replaces $D$.
There are at least three options for how to modify the objective. They are: (20) replace the discrete mass with Concrete densities, (21) relax the computation of the discrete log mass, or (22) replace it with the analytic discrete KL.
$$\mathbb{E}_{Z \sim q_{\alpha,\lambda_1}(z|x)}\left[\log \frac{p_\theta(x|Z)\, p_{a,\lambda_2}(Z)}{q_{\alpha,\lambda_1}(Z|x)}\right] \tag{20}$$
$$\mathbb{E}_{Z \sim q_{\alpha,\lambda_1}(z|x)}\left[\log p_\theta(x|Z) + \log \frac{P_a(\operatorname{H}(Z))}{Q_\alpha(\operatorname{H}(Z)|x)}\right] \tag{21}$$
$$\mathbb{E}_{Z \sim q_{\alpha,\lambda_1}(z|x)}\left[\log p_\theta(x|Z)\right] - \operatorname{KL}(Q_\alpha(d|x)\,\|\,P_a(d)) \tag{22}$$
where $\operatorname{H}(z)$ is a one-hot binary vector with $\operatorname{H}(z)_k = 1$ if $z_k > z_i$ for all $i \neq k$, and $p_{a,\lambda_2}(z)$ is the density of some Concrete random variable with temperature $\lambda_2$ and location parameters $a$. Although (22) or (21) is tempting, we emphasize that these are NOT necessarily lower bounds on $\log p_\theta(x)$ in the relaxed model. (20) is the only objective guaranteed to be a lower bound:
$$\mathbb{E}_{Z \sim q_{\alpha,\lambda_1}(z|x)}\left[\log \frac{p_\theta(x|Z)\, p_{a,\lambda_2}(Z)}{q_{\alpha,\lambda_1}(Z|x)}\right] \leq \log \int p_\theta(x|z)\, p_{a,\lambda_2}(z)\, \mathrm{d}z \tag{23}$$
For this reason we consider objectives of the form (20). Choosing (21) or (22) is possible, but the value of these objectives is not interpretable, and one should early stop training, otherwise the model will overfit to the spurious “KL” component of the loss. We now consider practical issues with (20) and how to address them. All together, we can interpret $q_{\alpha,\lambda_1}(z|x)$ as the Concrete relaxation of the variational posterior and $p_{a,\lambda_2}(z)$ as the relaxation of the prior.
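Evaluating (20) requires the Concrete log-density for both the posterior and prior terms. The sketch below is our own direct transcription of the density from Proposition 1; in practice the log-space parameterization discussed in the next subsection is preferred for numerical stability.

```python
import math

def concrete_log_pdf(x, alpha, lam):
    """log density of Concrete(alpha, lam) at a point x in the simplex interior:

    log((n-1)!) + (n-1) log(lam)
      + sum_k [log(a_k) - (lam+1) log(x_k)]
      - n * LSE_k(log(a_k) - lam * log(x_k))
    """
    n = len(alpha)
    terms = [math.log(a) - lam * math.log(xk) for a, xk in zip(alpha, x)]
    m = max(terms)
    lse = m + math.log(sum(math.exp(t - m) for t in terms))
    return (math.lgamma(n) + (n - 1) * math.log(lam)  # lgamma(n) = log((n-1)!)
            + sum(t - math.log(xk) for t, xk in zip(terms, x))
            - n * lse)
```

A useful consistency check: for $n = 2$ with $\alpha_2 = 1$, this marginal agrees with the Binary Concrete density of Eq. 17 at location $\alpha_1$.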
C.3 Which random variable to treat as the stochastic node
When implementing a SCG like the variational autoencoder example, we need to compute log-probabilities of Concrete random variables. This computation can suffer from underflow, so where possible it is better to take a different node on the relaxed graph as the stochastic node on which log-likelihood terms are computed. For example, it is tempting in the case of Concrete random variables to treat the Gumbels as the stochastic node on which the log-likelihood terms are evaluated and the softmax as downstream computation. This will be a looser bound in the context of variational inference than the corresponding bound when treating the Concrete relaxed states as the node.
The solution we found to work well was to work with Concrete random variables in log-space. Consider the following vector $Y$ in $\mathbb{R}^n$ for location parameters $\alpha \in (0,\infty)^n$, temperature $\lambda \in (0,\infty)$, and $G_k \sim \operatorname{Gumbel}$ i.i.d.:
$$Y_k = \frac{\log\alpha_k + G_k}{\lambda} - \operatorname{LSE}_{i=1}^{n}\left(\frac{\log\alpha_i + G_i}{\lambda}\right)$$
$Y$ has the property that $\exp(Y) \sim \operatorname{Concrete}(\alpha, \lambda)$, therefore we call $Y$ an $\operatorname{ExpConcrete}(\alpha, \lambda)$. The advantage of this reparameterization is that the KL terms of a variational loss are invariant under invertible transformation. $\exp$ is invertible, so the KL between two ExpConcrete random variables is the same as the KL between two Concrete random variables. The log-density $\log \kappa_{\alpha,\lambda}(y)$ of an $\operatorname{ExpConcrete}(\alpha, \lambda)$ is also simple to compute:
$$\log \kappa_{\alpha,\lambda}(y) = \log((n-1)!) + (n-1)\log\lambda + \left(\sum_{k=1}^{n} \log\alpha_k - \lambda y_k\right) - n \operatorname{LSE}_{k=1}^{n}(\log\alpha_k - \lambda y_k)$$
for $y \in \mathbb{R}^n$ such that $\operatorname{LSE}_{k=1}^{n}(y_k) = 0$. Note that the sample space of the ExpConcrete distribution is still interpretable in the zero temperature limit. In the limit of $\lambda \to 0$, ExpConcrete random variables become discrete random variables over the one-hot vectors of $d \in \{-\infty, 0\}^n$ where $\operatorname{LSE}_{k=1}^{n}(d_k) = 0$.
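The ExpConcrete sampler and its log-density translate directly into code. This is our own sketch; `logsumexp` is redefined locally so the block is self-contained:

```python
import math
import random

def logsumexp(xs):
    """log(sum_k exp(x_k)) with a max-shift for numerical stability."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def sample_exp_concrete(alpha, lam, rng):
    """Y_k = (log a_k + G_k)/lam - LSE_i((log a_i + G_i)/lam); exp(Y) ~ Concrete."""
    zs = [(math.log(a) - math.log(-math.log(rng.random()))) / lam for a in alpha]
    lse = logsumexp(zs)
    return [z - lse for z in zs]

def exp_concrete_log_pdf(y, alpha, lam):
    """log kappa_{alpha,lam}(y) for y in R^n with LSE(y) = 0."""
    n = len(alpha)
    terms = [math.log(a) - lam * yk for a, yk in zip(alpha, y)]
    return (math.lgamma(n) + (n - 1) * math.log(lam)  # lgamma(n) = log((n-1)!)
            + sum(terms) - n * logsumexp(terms))

rng = random.Random(0)
y = sample_exp_concrete([2.0, 1.0, 1.0], 0.5, rng)
# exp(y) lies on the simplex, i.e. logsumexp(y) == 0
```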