Pathwise Derivatives for Multivariate Distributions

Martin Jankowiak
Uber AI Labs
San Francisco, CA, USA
jankowiak@uber.com
&Theofanis Karaletsos
Uber AI Labs
San Francisco, CA, USA
theofanis@uber.com
Corresponding author
Abstract

We exploit the link between the transport equation and derivatives of expectations to construct efficient pathwise gradient estimators for multivariate distributions. We focus on two main threads. First, we use null solutions of the transport equation to construct adaptive control variates that reduce the variance of gradient estimators. Second, we consider the case of multivariate mixture distributions. In particular we show how to compute pathwise derivatives for mixtures of multivariate Normal distributions with arbitrary means and diagonal covariances. We demonstrate in a variety of experiments in the context of variational inference that our gradient estimators can outperform other methods, especially in high dimensions.

1 Introduction

Stochastic optimization is a major component of many machine learning algorithms. In some cases—for example in variational inference—the stochasticity arises from an explicit, parameterized distribution $q_\theta(z)$. In these cases the optimization problem can often be cast as maximizing an objective

$$\mathcal{L}(\theta) = \mathbb{E}_{q_\theta(z)}\left[ f(z) \right] \tag{1}$$

where $f$ is a (differentiable) test function and $\theta$ is a vector of parameters. (Without loss of generality we assume that $f$ has no explicit dependence on $\theta$, since gradients of $f$ w.r.t. $\theta$ are readily handled by bringing $\nabla_\theta$ inside the expectation.) In order to maximize $\mathcal{L}$ we would like to follow gradients $\nabla_\theta \mathcal{L}$. If $z$ were a deterministic function of $\theta$ we would simply apply the chain rule; for any particular (scalar) component $\theta_i$ of $\theta$ we have

$$\frac{\partial \mathcal{L}}{\partial \theta_i} = \nabla_z f \cdot \frac{\partial z}{\partial \theta_i} \tag{2}$$

What is the generalization of Eqn. 2 to the stochastic case? As observed in [18], we can construct an unbiased estimator for the gradient of Eqn. 1 provided we can solve the following partial differential equation (the transport a.k.a. continuity equation):

$$\frac{\partial q_\theta(z)}{\partial \theta_i} + \nabla_z \cdot \left( q_\theta(z)\, v^{\theta_i}(z) \right) = 0 \tag{3}$$

Here $v^{\theta_i}$ is a vector field defined on the sample space of $q_\theta(z)$. (Note that there is a distinct vector field for each component $\theta_i$ of $\theta$.) We can then form the following pathwise gradient estimator:

$$\frac{\partial \mathcal{L}}{\partial \theta_i} = \mathbb{E}_{q_\theta(z)}\left[ \nabla_z f \cdot v^{\theta_i} \right] \tag{4}$$

That this gradient estimator is unbiased follows directly from the divergence theorem. (Specifically we appeal to the identity $\nabla_z \cdot (f q_\theta v^{\theta_i}) = f\, \nabla_z \cdot (q_\theta v^{\theta_i}) + q_\theta\, v^{\theta_i} \cdot \nabla_z f$ and assume that $f q_\theta v^{\theta_i}$ is sufficiently well-behaved that we can drop the surface integral.) Concretely:
$$\frac{\partial \mathcal{L}}{\partial \theta_i} = \int \frac{\partial q_\theta(z)}{\partial \theta_i}\, f(z)\, dz = -\int \nabla_z \cdot \left( q_\theta(z)\, v^{\theta_i}(z) \right) f(z)\, dz = \int q_\theta(z)\, v^{\theta_i}(z) \cdot \nabla_z f(z)\, dz$$

Thus Eqn. 4 is a generalization of Eqn. 2 to the stochastic case, with the velocity field $v^{\theta_i}$ playing the role of $\partial z / \partial \theta_i$ in the deterministic case. In contrast to the deterministic case, where $\partial z / \partial \theta_i$ is uniquely specified, Eqn. 3 generally admits an infinite-dimensional space of solutions for $v^{\theta_i}$.
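As a concrete one-dimensional illustration (our own addition, not part of the original argument), take $q_\theta(z) = \mathcal{N}(z \,|\, \mu, \sigma^2)$ with $\theta = (\mu, \sigma)$. One can check directly that
$$v^{\mu}(z) = 1 \qquad \text{and} \qquad v^{\sigma}(z) = \frac{z - \mu}{\sigma}$$
solve Eqn. 3; for example $\frac{\partial q_\theta}{\partial \mu} = q_\theta\, \frac{z - \mu}{\sigma^2} = -\frac{\partial}{\partial z}\left( q_\theta \cdot 1 \right)$. Plugging these fields into Eqn. 4 recovers the familiar reparameterization-trick gradients $\partial_\mu \mathcal{L} = \mathbb{E}\left[ f'(z) \right]$ and $\partial_\sigma \mathcal{L} = \mathbb{E}\left[ f'(z)\, \tfrac{z - \mu}{\sigma} \right]$.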

Pathwise gradient estimators are particularly appealing because, empirically, they generally exhibit lower variance than alternatives. In this work we are interested in the multivariate case, where the solution space to Eqn. 3 is very rich, and where alternative gradient estimators tend to suffer from particularly large variance, especially in high dimensions. The rest of this paper is organized as follows. In Sec. 2 we give a brief overview of stochastic gradient variational inference (SGVI) and stochastic gradient estimators. In Sec. 3 we use parameterized families of solutions to Eqn. 3 to construct adaptive pathwise gradient estimators. In Sec. 4 we present solutions to Eqn. 3 for multivariate mixture distributions. In Sec. 5 we place our work in the context of recent research. In Sec. 6 we validate our proposed techniques with a variety of experiments in the context of variational inference. In Sec. 7 we conclude and discuss directions for future work.

2 Background

2.1 Stochastic Gradient Variational Inference

Given a probabilistic model $p(x, z)$ with observations $x$ and latent random variables $z$, variational inference recasts posterior inference as an optimization problem. Specifically we define a variational family of distributions $q_\theta(z)$ parameterized by $\theta$ and seek a value of $\theta$ that minimizes the KL divergence between $q_\theta(z)$ and the (unknown) posterior $p(z|x)$ [19, 30, 15, 46, 32]. This is equivalent to maximizing the ELBO, defined as

$${\rm ELBO} = \mathbb{E}_{q_\theta(z)}\left[ \log p(x, z) - \log q_\theta(z) \right] \tag{5}$$

This stochastic optimization problem will be the basic setting for most of our experiments in Sec. 6.

2.2 Score Function Estimator

The score function estimator, also referred to as reinforce [12, 45, 10], provides a simple and broadly applicable recipe for estimating stochastic gradients, with the simplest variant given by
$$\nabla_\theta \mathcal{L} = \mathbb{E}_{q_\theta(z)}\left[ f(z)\, \nabla_\theta \log q_\theta(z) \right]$$

Although the score function estimator is very general (e.g. it can be applied to discrete random variables), it typically suffers from high variance. This can be mitigated with variance reduction techniques such as Rao-Blackwellization [5] and control variates [36].

2.3 Reparameterization Trick

The reparameterization trick (RT) is not as broadly applicable as the score function estimator, but it generally exhibits lower variance [31, 39, 22, 11, 43, 35]. It is applicable to continuous random variables whose probability density $q_\theta(z)$ can be reparameterized such that we can rewrite expectations as $\mathbb{E}_{q_\theta(z)}\left[ f(z) \right] = \mathbb{E}_{q_0(\epsilon)}\left[ f(\mathcal{T}(\epsilon; \theta)) \right]$, where $q_0(\epsilon)$ is a fixed distribution and $\mathcal{T}(\epsilon; \theta)$ is a differentiable $\theta$-dependent transformation. Since the expectation w.r.t. $q_0(\epsilon)$ has no $\theta$ dependence, gradients w.r.t. $\theta$ can be computed by pushing $\nabla_\theta$ through the expectation. This reparameterization can be done for a number of distributions, including for example the Normal distribution. As noted in [18], in cases where the reparameterization trick is applicable,

$$v^{\theta_i}_{\rm RT}(z) = \frac{\partial \mathcal{T}(\epsilon; \theta)}{\partial \theta_i} \bigg|_{\epsilon = \mathcal{T}^{-1}(z; \theta)} \tag{6}$$

is a solution to the transport equation, Eqn. 3.
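For concreteness, the following minimal PyTorch sketch (ours, not from the paper) computes a single-sample reparameterization-trick gradient for a diagonal Normal $q_\theta(z)$; the test function is an arbitrary illustrative choice, and the velocity field of Eqn. 6 is realized implicitly by differentiating through $z = \mu + \sigma \odot \epsilon$.

```python
import torch

torch.manual_seed(0)
D = 10

# Variational parameters theta = (mu, log_sigma).
mu = torch.zeros(D, requires_grad=True)
log_sigma = torch.zeros(D, requires_grad=True)

def f(z):
    # Arbitrary differentiable test function (illustrative choice).
    return (z ** 2).sum() + torch.cos(z.sum())

# Reparameterized sample: z = T(eps; theta) with eps ~ N(0, I).
eps = torch.randn(D)
z = mu + log_sigma.exp() * eps

# Single-sample pathwise (RT) estimate of grad_theta E_q[f(z)].
f(z).backward()
print(mu.grad)         # unbiased estimate of dL/dmu
print(log_sigma.grad)  # unbiased estimate of dL/dlog_sigma
```

Averaging many such single-sample estimates recovers $\nabla_\theta \mathcal{L}$.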

3 Adaptive Velocity Fields

Whenever we have a solution to Eqn. 3 we can form an unbiased gradient estimator via Eqn. 4. An intriguing possibility arises if we can construct a parameterized family of solutions $v^{\theta_i}_\phi$: as we take steps in $\theta$-space we can simultaneously take steps in $\phi$-space, thus adaptively choosing the form the gradient estimator takes. This can be understood as an instance of an adaptive control variate [36]. Indeed consider the solution $v^{\theta_i}_\phi$ as well as a fixed reference solution $v^{\theta_i}_0$. Then we can write
$$\mathbb{E}_{q_\theta(z)}\left[ \nabla_z f \cdot v^{\theta_i}_\phi \right] = \mathbb{E}_{q_\theta(z)}\left[ \nabla_z f \cdot v^{\theta_i}_0 \right] + \mathbb{E}_{q_\theta(z)}\left[ \nabla_z f \cdot \left( v^{\theta_i}_\phi - v^{\theta_i}_0 \right) \right]$$

The final expectation is identically equal to zero, so the integrand $\nabla_z f \cdot ( v^{\theta_i}_\phi - v^{\theta_i}_0 )$ is a control variate. If we choose $\phi$ appropriately we can reduce the variance of our gradient estimator. In order to specify a complete algorithm we need to answer two questions: i) how should we adapt $\phi$?; and ii) what is an appropriate family of solutions $v^{\theta_i}_\phi$? We now address each of these in turn.

3.1 Adapting velocity fields

Whenever we compute a noisy gradient estimate via Eqn. 4 we can use the same sample(s) to form a noisy estimate of the gradient variance, which is given by

$${\rm Var}\left[ \widehat{\frac{\partial \mathcal{L}}{\partial \theta_i}} \right] = \mathbb{E}_{q_\theta(z)}\left[ \left( \nabla_z f \cdot v^{\theta_i}_\phi \right)^2 \right] - \left( \frac{\partial \mathcal{L}}{\partial \theta_i} \right)^2 \tag{7}$$

In direct analogy to the adaptation procedure used in (for example) [37], we can adapt $\phi$ by following noisy gradient estimates of this variance (in the general case, where the parameter vector $\theta$ has multiple components $\theta_i$, we compute a noisy gradient estimate of the sum of the various scalar variance terms):

$$\phi \;\rightarrow\; \phi - \rho\, \nabla_\phi\, \widehat{{\rm Var}}\left[ \widehat{\frac{\partial \mathcal{L}}{\partial \theta_i}} \right] \tag{8}$$

Here $\rho$ is a step size. In this way $\phi$ can be adapted to reduce the gradient variance during the course of optimization.
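Since the gradient estimator is unbiased for every choice of $\phi$, the squared-mean term in Eqn. 7 does not depend on $\phi$; consequently the update in Eqn. 8 only requires differentiating a single-sample estimate of the second moment, i.e. (spelling out a step that is implicit above)
$$\nabla_\phi\, {\rm Var}\left[ \widehat{\frac{\partial \mathcal{L}}{\partial \theta_i}} \right] = \nabla_\phi\, \mathbb{E}_{q_\theta(z)}\left[ \left( \nabla_z f \cdot v^{\theta_i}_\phi \right)^2 \right] \approx \nabla_\phi \left( \nabla_z f(z) \cdot v^{\theta_i}_\phi(z) \right)^2, \qquad z \sim q_\theta(z)$$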

3.2 Parameterizing velocity fields

Assuming we have obtained (at least) one solution to the transport equation Eqn. 3, parameterizing a family of solutions is in principle easy. We simply solve the null equation, $\nabla_z \cdot \left( q_\theta(z)\, \tilde{v}(z) \right) = 0$, which is obtained from the transport equation by dropping the source term $\partial q_\theta / \partial \theta_i$. Then for a given family of solutions $\tilde{v}_\phi$ to the null equation, we get a solution to the transport equation by simple addition, i.e. $v^{\theta_i}_\phi = v^{\theta_i}_0 + \tilde{v}_\phi$. The problem is that solving the null equation is almost too easy, since any divergence-free vector field $w(z)$ immediately yields a solution:

$$\tilde{v}(z) = \frac{w(z)}{q_\theta(z)} \qquad \text{with} \qquad \nabla_z \cdot w = 0 \tag{9}$$

While parameterizing families of divergence-free vector fields is an intriguing option, it might be preferable to find simpler families of solutions—if only to limit the cost of each gradient step. To find simpler null solutions, however, we need to focus on a particular family of distributions $q_\theta(z)$.

3.3 Null Solutions for the Multivariate Normal Distribution

Consider the multivariate Normal distribution in $D$ dimensions parameterized via a mean $\mu$ and Cholesky factor $L$ (so that the covariance is $\Sigma = L L^T$). We would like to compute derivatives w.r.t. $\theta = \{\mu, L\}$. While the reparameterization trick is applicable in this case, we can potentially get lower-variance gradients by using Adaptive Velocity Fields. As we show in the supplementary materials, a simple solution to the corresponding null transport equation is given by

$$\tilde{v}^{\theta_i}(z) = L A^{\theta_i} L^{-1} (z - \mu) \tag{10}$$

where $\{ A^{\theta_i} \}$ is an arbitrary collection of antisymmetric matrices (one for each parameter component $\theta_i$). This is just a linear vector field and so it is easy to manipulate and (relatively) cheap to compute. Geometrically, for each entry of $\theta$, this null solution corresponds to an infinitesimal rotation in whitened coordinates $\epsilon = L^{-1}(z - \mu)$. Note that this null solution is added to the reference solution $v^{\theta_i}_0$, and this is not equivalent to rotating $z$.
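As a quick numerical sanity check (ours, not part of the paper's experiments), the control-variate property of Eqn. 10 can be verified by Monte Carlo: for any antisymmetric $A$ and well-behaved test function, the expectation of $\nabla_z f \cdot L A L^{-1}(z - \mu)$ under $q_\theta(z)$ should vanish, so adding this term to any unbiased estimator leaves its mean unchanged. The sketch below assumes PyTorch; the test function is an arbitrary illustrative choice.

```python
import torch

torch.manual_seed(0)
D = 5

# A fixed multivariate Normal q(z) = N(mu, L L^T).
mu = torch.randn(D)
L = torch.eye(D) + 0.3 * torch.tril(torch.randn(D, D), diagonal=-1)

# An arbitrary antisymmetric matrix (in general there is one per parameter entry).
M = torch.randn(D, D)
A = M - M.t()

def f(z):
    # Arbitrary smooth test function.
    return torch.cos(z.sum(-1)) + 0.1 * (z ** 2).sum(-1)

# Draw z ~ N(mu, L L^T) via eps ~ N(0, I); note that eps = L^{-1}(z - mu).
eps = torch.randn(100000, D)
z = (mu + eps @ L.t()).requires_grad_(True)
grad_f = torch.autograd.grad(f(z).sum(), z)[0]

# Null field v(z) = L A L^{-1}(z - mu) = L A eps, evaluated row-wise.
v = eps @ A.t() @ L.t()

# Should be approximately zero (up to Monte Carlo error).
print((grad_f * v).sum(-1).mean().item())
```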

Since $\theta_i$ runs over the $O(D^2)$ entries of $\{\mu, L\}$ and each antisymmetric matrix $A^{\theta_i}$ has $\tfrac{1}{2}D(D-1)$ free parameters, this space of solutions is quite large. Since we will be adapting $\phi$ via noisy gradient estimates—and because we would like to limit the computational cost—we choose to parameterize a smaller subspace of solutions. Empirically we find that the following parameterization works well:

(11)

Here each matrix appearing in Eqn. 11 is arbitrary, and the hyperparameter controlling the size of the parameterization allows us to trade off the flexibility of our family of null solutions against computational cost. We explore the performance of the resulting Adaptive Velocity Field (AVF) gradient estimator in Sec. 6.1.
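To make the adaptation procedure concrete, the following self-contained PyTorch sketch (an illustration under our own simplifying choices, not the paper's implementation) runs Eqn. 8 for the Cholesky-factor gradients of a two-dimensional Normal. In two dimensions the antisymmetric matrices form a one-dimensional space, so a single scalar coefficient per parameter entry suffices; the test function and hyperparameters are arbitrary.

```python
import torch

torch.manual_seed(0)
D = 2
mu = torch.zeros(D)
L = torch.tensor([[1.0, 0.0], [0.8, 0.6]])

def grad_f(z):
    # Gradient of an arbitrary smooth test function f(z).
    return torch.exp(0.3 * z) + torch.sin(z.sum())

# The single antisymmetric direction in 2D.
A0 = torch.tensor([[0.0, 1.0], [-1.0, 0.0]])

# phi[a, b]: coefficient of the null field added to the estimator for dE[f]/dL_{ab}.
phi = torch.zeros(D, D, requires_grad=True)
rho = 0.01

def single_sample_grad(phi):
    eps = torch.randn(D)
    gf = grad_f(mu + L @ eps)
    # Reparameterization-trick field for L_{ab} is e_a * eps_b, so the RT estimate
    # of dE[f]/dL_{ab} is gf[a] * eps[b].  (Only lower-triangular entries of L are
    # actual parameters; we compute all entries for simplicity.)
    g_rt = gf.unsqueeze(1) * eps.unsqueeze(0)
    # Null field L A0 eps (Eqn. 10), dotted with grad f and scaled per entry by phi.
    return g_rt + phi * torch.dot(gf, L @ (A0 @ eps))

# Adapt phi by descending single-sample estimates of the gradient variance (Eqn. 8).
for _ in range(2000):
    (dphi,) = torch.autograd.grad((single_sample_grad(phi) ** 2).sum(), phi)
    with torch.no_grad():
        phi -= rho * dphi

# Compare the empirical variance of the L_{21} gradient with and without adaptation.
with torch.no_grad():
    g0 = torch.stack([single_sample_grad(torch.zeros(D, D)) for _ in range(5000)])
    g1 = torch.stack([single_sample_grad(phi) for _ in range(5000)])
# Up to Monte Carlo noise the adapted variance should be no larger than the RT
# variance; the size of the gain depends on the test function.
print("RT variance :", g0[:, 1, 0].var().item())
print("AVF variance:", g1[:, 1, 0].var().item())
```

In higher dimensions one would restrict to the smaller parameterized family of Eqn. 11 rather than using one free coefficient per parameter entry.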

4 Pathwise Derivatives for Multivariate Mixture Distributions

Consider a mixture of $K$ multivariate distributions in $D$ dimensions:

$$q_\theta(z) = \sum_{j=1}^{K} \pi_j\, q_j(z \,|\, \theta_j) \tag{12}$$

In order to compute pathwise derivatives of $q_\theta(z)$ we need to solve the transport equation w.r.t. the mixture weights $\pi_j$ as well as the component parameters $\theta_j$. For the derivatives w.r.t. $\theta_j$ we can just repurpose the velocity fields of the individual components. That is, if $v^{\theta_j}_j$ is a solution of the single-component transport equation, i.e. $\frac{\partial q_j}{\partial \theta_j} + \nabla_z \cdot ( q_j\, v^{\theta_j}_j ) = 0$, then

$$v^{\theta_j}(z) = \frac{\pi_j\, q_j(z)}{q_\theta(z)}\, v^{\theta_j}_j(z) \tag{13}$$

is a solution of the multi-component transport equation. (We note that the form of Eqn. 13 implies that we can easily introduce adaptive velocity fields for each individual component, although the cost of doing so may be prohibitively high for a large number of components.) See the supplementary materials for the (very short) derivation. For example we can readily compute Eqn. 13 if each component distribution is reparameterizable. Note that a typical gradient estimator for, say, a mixture of Normal distributions first samples the discrete component $k$ and then samples $z$ conditioned on $k$. This results in a pathwise derivative that only depends on the sampled component $k$ and its parameters $\theta_k$. By contrast the pathwise gradient computed via Eqn. 13 yields a gradient for all component parameters $\{\theta_j\}$ for every sample $z$; a numerical illustration is given below. We explore this difference experimentally in Sec. 6.2.2.
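The following PyTorch sketch (an illustration with arbitrary parameters, not one of the paper's experiments) checks Eqn. 13 for the component means of a mixture of diagonal Normals: the single-component (reparameterization-trick) velocity field is simply weighted by the responsibility $\pi_j q_j(z) / q_\theta(z)$, and for a quadratic test function the resulting Monte Carlo gradient matches the analytic answer.

```python
import math
import torch

torch.manual_seed(0)
D, K, N = 3, 2, 200000
pis = torch.tensor([0.3, 0.7])
mus = torch.randn(K, D)
sigmas = 0.5 + torch.rand(K, D)

def log_weighted_components(z):
    # log pi_j + log N(z | mu_j, diag(sigma_j^2)) for each component j; z is (N, D).
    z = z.unsqueeze(1)  # (N, 1, D)
    log_norm = -0.5 * ((((z - mus) / sigmas) ** 2) + 2 * sigmas.log() + math.log(2 * math.pi)).sum(-1)
    return pis.log() + log_norm  # (N, K)

# Sample from the mixture in the usual way (pick a component, then reparameterize).
k = torch.multinomial(pis, N, replacement=True)
z = mus[k] + sigmas[k] * torch.randn(N, D)

# Pathwise estimator for mu_j (Eqn. 13): the single-component field is scaled by
# the responsibility pi_j q_j(z) / q(z).
grad_f = 2 * z  # test function f(z) = ||z||^2
log_w = log_weighted_components(z)
resp = (log_w - log_w.logsumexp(-1, keepdim=True)).exp()  # (N, K)
grad_mu = torch.einsum('nk,nd->kd', resp, grad_f) / N

print(grad_mu)                 # Monte Carlo pathwise estimate of dE[f]/dmu_j
print(2 * pis[:, None] * mus)  # analytic gradient for f(z) = ||z||^2
```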

4.1 Mixture weight derivatives

Unfortunately, solving the transport equation for the mixture weights is in general much more difficult, since the desired velocity fields need to coordinate mass transport among all components. (For example, we expect this to be particularly difficult if the component distributions belong to different families, e.g. a mixture of Normal and Wishart distributions.) We now describe a formal construction for solving the transport equation for the mixture weights. For each pair of component distributions $(q_i, q_j)$, consider the transport equation

$$q_j(z) - q_i(z) + \nabla_z \cdot \left( q_\theta(z)\, v_{i \to j}(z) \right) = 0 \tag{14}$$

Intuitively, the velocity field $v_{i \to j}$ moves mass from component $i$ to component $j$. We can superimpose these pairwise solutions to form

$$v^{\lambda_k}(z) = \pi_k \sum_{j \neq k} \pi_j\, v_{j \to k}(z) \tag{15}$$

As we show in the supplementary materials, the velocity field $v^{\lambda_k}$ yields an estimator for the derivative w.r.t. the softmax logit $\lambda_k$ that corresponds to component $k$ (that is, w.r.t. $\lambda_k$ where $\pi_k = e^{\lambda_k} / \sum_j e^{\lambda_j}$). Thus, provided we can find solutions to Eqn. 14 for each pair of components $(q_i, q_j)$, we can form a pathwise gradient estimator for the mixture weights. We now consider two particular situations where this recipe can be carried out. We leave a discussion of other solutions—including a solution for a mixture of Normal distributions with arbitrary diagonal covariances—to the supplementary materials.

4.2 Mixture of Multivariate Normals with Shared Diagonal Covariance

Here each component distribution is specified by $q_j(z) = \mathcal{N}(z \,|\, \mu_j, {\rm diag}(\sigma^2))$, i.e. each component has its own mean vector $\mu_j$ but all components share the same diagonal square root covariance $\sigma$. We find that a solution to Eqn. 14 is given by

(16)

and where the relevant quantities are defined in the supplementary materials. Here $\Phi$ is the CDF of the unit Normal distribution, and the corresponding probability density also appears in Eqn. 16. Geometrically, the velocity field in Eqn. 16 moves mass along lines parallel to $\mu_j - \mu_i$.

4.3 Zero Mean Discrete Gaussian Scale Mixture

Here each component distribution is specified by $q_j(z) = \mathcal{N}(z \,|\, 0, \sigma_j^2 \mathbb{1})$, where each $\sigma_j$ is a positive scalar. Defining $r \equiv |z|$ and making use of radial coordinates, we find that a solution of the form in Eqn. 15 reduces to

(17)

where

(18)

The ‘radial CDF’ in Eqn. 18 can be computed analytically; see the supplementary materials. Note that computing Eqn. 17 is cheaper than computing Eqn. 16.

5 Related Work

There is a large body of work on constructing gradient estimators with reduced variance, much of which can be understood in terms of control variates [36]: for example, [27] constructs neural baselines for score function gradients; [40] discuss gradient estimators for stochastic computation graphs and their Rao-Blackwellization; and [44, 13] construct adaptive control variates for discrete random variables. Another example of this line of work is [25], where the authors construct control variates that are applicable when $q_\theta(z)$ is a diagonal Normal distribution and that rely on Taylor expansions of the test function $f$. (Also, in their approach variance reduction for scale parameter gradients necessitates a multi-sample estimator, at least for high-dimensional models where computing the diagonal of the Hessian is expensive.) In contrast our AVF gradient estimator for the multivariate Normal has about half the computational cost, since it does not rely on second-order derivatives. Another line of work constructs partially reparameterized gradient estimators for cases where the reparameterization trick is difficult to apply [38, 28]. In [14], the author constructs pathwise gradient estimators for mixtures of diagonal Normal distributions. Unfortunately, the resulting estimator is expensive, relying on a recursive computation that scales with the dimension of the sample space. (Reference [29] also reports that the estimator does not yield competitive results in their setting.) Another approach to mixture distributions is taken in [7], where the authors use quadrature-like constructions to form continuous distributions that are reparameterizable. Our work is closest to [18], which uses the transport equation to derive an ‘OMT’ gradient estimator for the multivariate Normal distribution whose velocity field is optimal in the sense of optimal transport. This gradient estimator can exhibit low variance but comes at increased computational cost (it involves a singular value decomposition).

After this manuscript was completed, we became aware of reference [9], which has some overlap with this work. In particular, [9] describes an interesting recipe for computing pathwise gradients that can be applied to mixture distributions. A nice feature of this recipe (also true of [14]) is that it can be applied to mixtures of Normal distributions with arbitrary covariances. A disadvantage is that it can be expensive, even for a mixture of Normal distributions with diagonal covariances; in contrast the gradient estimators constructed in Sec. 4 by solving the transport equation are linear in the dimension $D$. It is also unclear how the recipe in [9] performs empirically, as the authors do not conduct any experiments for the mixture case.

6 Experiments

(a) We compare the OMT and AVF gradient estimators for the multivariate Normal distribution to the RT estimator for three test functions. The horizontal axis controls the magnitude of the off-diagonal elements of the Cholesky factor $L$. The vertical axis depicts the ratio of the mean variance of the given estimator to that of the RT estimator for the off-diagonal elements of $L$.
(b) We compare the variance of pathwise gradient estimators for various mixture distributions to the corresponding score function estimator. The vertical axis depicts the ratio of the logit gradient variance of the pathwise estimator to that of the score function estimator for a fixed test function.
Figure 1: We compare gradient variances for different gradient estimators and synthetic test functions.

We conduct a series of experiments using synthetic and real-world data to validate the performance of the gradient estimators introduced in Sec. 3 & 4. Throughout we use single-sample estimators. See the supplementary materials for details on experimental setups. The SGVI experiments in this section were implemented in the Pyro probabilistic programming language (http://pyro.ai).

6.1 Adaptive Velocity Fields

6.1.1 Synthetic test functions

In Fig. 1(a) we use synthetic test functions to illustrate the amount of variance reduction that can be achieved with AVF gradients for the multivariate Normal distribution as compared to the reparameterization trick and the OMT gradient estimator from [18]. The dimension is held fixed; the results are qualitatively similar for different dimensions.

6.1.2 Gaussian Process Regression

We investigate the performance of AVF gradients for the multivariate Normal distribution in the context of a Gaussian Process regression task reproduced from [18]. We model the Mauna Loa CO2 data from [20] considered in [33]. We fit the GP using a single-sample Monte Carlo ELBO gradient estimator and all data points. Both the OMT and AVF gradient estimators achieve higher ELBOs than the RT estimator (see Fig. 2); however, the OMT estimator does so at substantially increased computational cost (1.9x), while the AVF estimator requires only 6% (11%) more time per iteration for the smaller (larger) of the two hyperparameter settings we consider. By iteration 250 the AVF gradient estimator has attained the same ELBO that the RT estimator attains at iteration 500.

Figure 2: We compare the performance of various pathwise gradient estimators on the GP regression task in Sec. 6.1.2. The AVF gradient estimator is the clear winner both in terms of wall clock time and the final ELBO achieved.

6.2 Mixture Distributions

6.2.1 Synthetic test function

In Fig. 1(b) we depict the variance reduction of pathwise gradient estimators for various mixture distributions compared to the corresponding score function estimator. We find that the pathwise estimators yield substantially lower variance in all cases. The results are qualitatively similar for different numbers of components $K$. See the supplementary materials for details on the different families of mixture distributions.

6.2.2 Baseball Experiment

We consider a model for repeated binary trial data (baseball players at bat) using the data in [8] and the modeling setup in [1] with partial pooling. The model has two global latent variables and 18 local latent variables, so that the posterior is 20-dimensional. While mean-field SGVI gives reasonable results for this model, it is not able to capture the detailed structure of the exact posterior as computed with the NUTS HMC implementation in Stan [16, 4]. To go beyond mean field we consider variational distributions that are mixtures of diagonal Normal distributions (cf. the experiment in [26]). Adding more components gives a better approximation to the exact posterior, see Fig. 3(a). Furthermore, using pathwise derivatives in the ELBO gradient estimator leads to faster convergence, see Fig. 3(b). Here the hybrid estimator uses score function gradients for the mixture logits but implements Eqn. 13 for gradients w.r.t. the component parameters. Since there is significant overlap among the mixture components, Eqn. 13 leads to substantial variance reduction.

(a) Two-dimensional cross-sections of three approximate posteriors; each cross-section includes one global latent variable and one local latent variable. The white density contours correspond to the different variational approximations, and the background depicts the approximate posterior computed with HMC. The quality of the variational approximation improves as we add more mixture components.
(b) ELBO training curves for three gradient estimators. The lower variance of the pathwise and hybrid gradient estimators speeds up learning. The dotted lines denote mean ELBOs over 50 runs and the bands indicate standard deviations. Note that the figure would be qualitatively similar if the ELBO were plotted against wall clock time, since the estimators have comparable computational cost (within 10%).
Figure 3: Variational approximations and ELBO training curves for the model in Sec. 6.2.2.

6.2.3 Continuous state space model

(a) Approximate marginal posteriors for a variational family without mixture components.
(b) Approximate marginal posteriors for a variational family with mixture components.
Figure 4: Approximate marginal posteriors for the toy state space model in Sec. 6.2.3 for a randomly chosen test sequence. The mixture distributions that serve as components of the variational family on the right help capture the multi-modal structure of the posterior.

We consider a non-linear continuous state space model with observations $x_t$ and latent random variables $z_t$ at each time step (both two-dimensional). Since the likelihood is invariant under $z_t \to -z_t$, the posterior over $z_{1:T}$ is highly multi-modal. We use SGVI to approximate the posterior over $z_{1:T}$. To better capture the multi-modality we use a variational family that is a product of two-component mixture distributions, one at each time step. As can be seen from Fig. 4, the variational family makes use of the mixture distributions at its disposal to model the multi-modal structure of the posterior, something that a variational family with unimodal factors struggles to do. In the next section we explore the use of mixture distributions in a much richer time series setup.

6.2.4 Deep Markov Model

We consider a variant of the Deep Markov Model (DMM) setup introduced in [23] on a polyphonic music dataset [3]. We fix the form of the model and vary the variational family and gradient estimator used. We consider a variational family which includes a mixture distribution at each time step. In particular each factor in the variational distribution is a mixture of diagonal Normals. We compare the performance of the pathwise gradient estimator introduced in Sec. 4 to two variants of the Gumbel Softmax gradient estimator [17, 24]; see Table 1. Note that both Gumbel Softmax estimators are biased, while the pathwise estimator is unbiased. We find that the Gumbel Softmax estimators—which induce substantially higher gradient variance for the mixture logits as compared to the pathwise estimator—are unable to achieve competitive test ELBOs. (We also found the Gumbel Softmax estimators to be less numerically stable, which prevented us from using more aggressive KL annealing schedules. This may have contributed to the performance gap in Table 1.)

Variational Family | Pathwise | GS-Soft | GS-Hard
K=2 | -6.84 | -6.89 | -6.88
K=3 | -6.82 | -6.87 | -6.85

Table 1: Test ELBOs for the DMM in Sec. 6.2.4. Higher is better. For comparison: we achieve a test ELBO of -6.90 for a variational family in which each factor is a diagonal Normal.

6.2.5 VAE

We conduct a simple experiment with a VAE [22, 35] trained on MNIST. We fix the form of the model and the variational family and vary the gradient estimator used. The variational family is an amortized approximate posterior $q(z|x)$ that is a mixture of diagonal Normals. We find that the Gumbel Softmax estimators are unable to achieve competitive test ELBOs; see Table 2.

Variational Family | Pathwise | GS-Soft | GS-Hard
K=3 | -98.47 | -98.85 | -98.92

Table 2: Test ELBOs for the VAE in Sec. 6.2.5. Higher is better.

7 Discussion

We have seen that the link between the transport equation and derivatives of expectations offers a powerful tool for constructing pathwise gradient estimators. For the particular case of variational inference, we emphasize that—in addition to providing gradients with reduced variance—pathwise gradient estimators can enable quick iteration over models and variational families with rich structure, since there is no need to reason about ELBO gradient estimators. All the modeler needs to do is construct a Monte Carlo estimator of the ELBO; automatic differentiation will handle the rest. Our hope is that an expanding toolkit of pathwise gradient estimators can contribute to the development of innovative applications of variational inference.
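As a minimal illustration of this workflow (ours, with an arbitrary toy model), the following PyTorch sketch maximizes a single-sample Monte Carlo ELBO for a one-dimensional conjugate Gaussian model simply by calling backward() on the ELBO estimate; automatic differentiation supplies the pathwise gradients.

```python
import torch
import torch.distributions as D

torch.manual_seed(0)

# Toy model: p(z) = N(0, 1), p(x | z) = N(z, 0.5), with one observed x.
x = torch.tensor(1.3)

# Variational parameters for q_theta(z) = N(mu, sigma).
mu = torch.tensor(0.0, requires_grad=True)
log_sigma = torch.tensor(0.0, requires_grad=True)
optim = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(1000):
    optim.zero_grad()
    q = D.Normal(mu, log_sigma.exp())
    z = q.rsample()  # pathwise (reparameterized) sample
    elbo = D.Normal(0.0, 1.0).log_prob(z) + D.Normal(z, 0.5).log_prob(x) - q.log_prob(z)
    (-elbo).backward()  # single-sample ELBO gradient estimate
    optim.step()

# mu should end up near the exact posterior mean (about 1.04),
# up to stochastic-optimization noise.
print(mu.item(), log_sigma.exp().item())
```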


Given the richness of the transport equation and the great variety of multivariate distributions, there are several exciting possibilities for future research. It would be of interest to identify more cases where we can construct null solutions to the transport equation. One possibility is to generalize Eqn. 10 to higher orders; another is to consider parameterized families of divergence-free vector fields in the context of normalizing flows [42, 41, 34]. It could also be fruitful to consider other ways of adapting velocity fields. In some cases this could lead to improved variance reduction, especially in the case of mixture distributions, where the optimal transport setup in [6] might provide a path to constructing alternative gradient estimators. Given the usefulness of heavy tails in various contexts, it would be valuable to generalize the velocity fields in Eqn. 17 to the case of the multivariate t-distribution. Finally, it would be of interest to connect the present work to natural gradients [2].



Acknowledgments

We thank Fritz Obermeyer for collaboration at an early stage of this project. We thank Peter Dayan for feedback on a draft manuscript and other colleagues at Uber AI Labs for stimulating conversations during the course of this work.


References

  • [1] Stan modeling language users guide and reference manual, version 2.17.0. http://mc-stan.org/users/documentation/case-studies/pool-binary-trials.html, 2017.
  • [2] S.-I. Amari. Natural gradient works efficiently in learning. Neural computation, 10(2):251–276, 1998.
  • [3] N. Boulanger-Lewandowski, Y. Bengio, and P. Vincent. Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. arXiv preprint arXiv:1206.6392, 2012.
  • [4] B. Carpenter, A. Gelman, M. D. Hoffman, D. Lee, B. Goodrich, M. Betancourt, M. Brubaker, J. Guo, P. Li, and A. Riddell. Stan: A probabilistic programming language. Journal of statistical software, 76(1), 2017.
  • [5] G. Casella and C. P. Robert. Rao-blackwellisation of sampling schemes. Biometrika, 83(1):81–94, 1996.
  • [6] Y. Chen, T. T. Georgiou, and A. Tannenbaum. Optimal transport for gaussian mixture models. arXiv preprint arXiv:1710.07876, 2017.
  • [7] J. Dillon and I. Langmore. Quadrature compound: An approximating family of distributions. arXiv preprint arXiv:1801.03080, 2018.
  • [8] B. Efron and C. Morris. Data analysis using stein’s estimator and its generalizations. Journal of the American Statistical Association, 70(350):311–319, 1975.
  • [9] M. Figurnov, S. Mohamed, and A. Mnih. Implicit reparameterization gradients. arXiv preprint arXiv:1805.08498, 2018.
  • [10] M. C. Fu. Gradient estimation. Handbooks in operations research and management science, 13:575–616, 2006.
  • [11] P. Glasserman. Monte Carlo methods in financial engineering, volume 53. Springer Science & Business Media, 2013.
  • [12] P. W. Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75–84, 1990.
  • [13] W. Grathwohl, D. Choi, Y. Wu, G. Roeder, and D. Duvenaud. Backpropagation through the void: Optimizing control variates for black-box gradient estimation. arXiv preprint arXiv:1711.00123, 2017.
  • [14] A. Graves. Stochastic backpropagation through mixture density distributions. arXiv preprint arXiv:1607.05690, 2016.
  • [15] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.
  • [16] M. D. Hoffman and A. Gelman. The no-u-turn sampler: adaptively setting path lengths in hamiltonian monte carlo. Journal of Machine Learning Research, 15(1):1593–1623, 2014.
  • [17] E. Jang, S. Gu, and B. Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.
  • [18] M. Jankowiak and F. Obermeyer. Pathwise derivatives beyond the reparameterization trick. 2018.
  • [19] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine learning, 37(2):183–233, 1999.
  • [20] C. D. Keeling and T. P. Whorf. Atmospheric co2 concentrations derived from flask air samples at sites in the sio network. Trends: a compendium of data on Global Change, 2004.
  • [21] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [22] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
  • [23] R. G. Krishnan, U. Shalit, and D. Sontag. Structured inference networks for nonlinear state space models. In AAAI, pages 2101–2109, 2017.
  • [24] C. J. Maddison, A. Mnih, and Y. W. Teh. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016.
  • [25] A. Miller, N. Foti, A. D’Amour, and R. P. Adams. Reducing reparameterization gradient variance. In Advances in Neural Information Processing Systems, pages 3711–3721, 2017.
  • [26] A. C. Miller, N. Foti, and R. P. Adams. Variational boosting: Iteratively refining posterior approximations. arXiv preprint arXiv:1611.06585, 2016.
  • [27] A. Mnih and K. Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.
  • [28] C. Naesseth, F. Ruiz, S. Linderman, and D. Blei. Reparameterization gradients through acceptance-rejection sampling algorithms. In Artificial Intelligence and Statistics, pages 489–498, 2017.
  • [29] E. Nalisnick, L. Hertel, and P. Smyth. Approximate inference for deep latent gaussian mixtures. In NIPS Workshop on Bayesian Deep Learning, volume 2, 2016.
  • [30] J. Paisley, D. Blei, and M. Jordan. Variational bayesian inference with stochastic search. arXiv preprint arXiv:1206.6430, 2012.
  • [31] R. Price. A useful theorem for nonlinear devices having gaussian inputs. IRE Transactions on Information Theory, 4(2):69–72, 1958.
  • [32] R. Ranganath, S. Gerrish, and D. Blei. Black box variational inference. In Artificial Intelligence and Statistics, pages 814–822, 2014.
  • [33] C. E. Rasmussen. Gaussian processes in machine learning. In Advanced lectures on machine learning, pages 63–71. Springer, 2004.
  • [34] D. J. Rezende and S. Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
  • [35] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
  • [36] S. M. Ross. Simulation. Academic Press, San Diego, 2006.
  • [37] F. J. Ruiz, M. K. Titsias, and D. M. Blei. Overdispersed black-box variational inference. arXiv preprint arXiv:1603.01140, 2016.
  • [38] F. R. Ruiz, M. T. R. AUEB, and D. Blei. The generalized reparameterization gradient. In Advances in Neural Information Processing Systems, pages 460–468, 2016.
  • [39] T. Salimans, D. A. Knowles, et al. Fixed-form variational posterior approximation through stochastic linear regression. Bayesian Analysis, 8(4):837–882, 2013.
  • [40] J. Schulman, N. Heess, T. Weber, and P. Abbeel. Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, pages 3528–3536, 2015.
  • [41] E. Tabak and C. V. Turner. A family of nonparametric density estimation algorithms. Communications on Pure and Applied Mathematics, 66(2):145–164, 2013.
  • [42] E. G. Tabak, E. Vanden-Eijnden, et al. Density estimation by dual ascent of the log-likelihood. Communications in Mathematical Sciences, 8(1):217–233, 2010.
  • [43] M. Titsias and M. Lázaro-Gredilla. Doubly stochastic variational bayes for non-conjugate inference. In International Conference on Machine Learning, pages 1971–1979, 2014.
  • [44] G. Tucker, A. Mnih, C. J. Maddison, J. Lawson, and J. Sohl-Dickstein. Rebar: Low-variance, unbiased gradient estimates for discrete latent variable models. In Advances in Neural Information Processing Systems, pages 2624–2633, 2017.
  • [45] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
  • [46] D. Wingate and T. Weber. Automated variational inference in probabilistic programming. arXiv preprint arXiv:1301.1299, 2013.

Appendix A Supplementary Materials

A.1 Pathwise Gradient Estimators and the Transport Equation

As discussed in the main text, a solution to the transport equation allows us to form an unbiased pathwise gradient estimator via

$$\frac{\partial \mathcal{L}}{\partial \theta_i} = \mathbb{E}_{q_\theta(z)}\left[ \nabla_z f \cdot v^{\theta_i} \right] \tag{19}$$

In order for this to be a sensible Monte Carlo estimator, we require that the variance is finite, i.e.

$$\mathbb{E}_{q_\theta(z)}\left[ \left( \nabla_z f \cdot v^{\theta_i} \right)^2 \right] < \infty \tag{20}$$

In order for the derivation given in the main text to hold, we also require $q_\theta v^{\theta_i}$ to be everywhere continuously differentiable and the surface integral to go to zero as we approach the boundary at infinity. A natural way to ensure the latter condition for a large class of test functions is to require the boundary condition

$$q_\theta(z)\, v^{\theta_i}(z) \to 0 \quad \text{as} \quad |z| \to \infty \tag{21}$$

Much of the difficulty in using the transport equation to construct pathwise gradient estimators is in finding velocity fields that satisfy all these desiderata.

For example consider a mixture of products of univariate distributions of the form:

$$q_\theta(z) = \sum_{j=1}^{K} \pi_j \prod_{d=1}^{D} q_{jd}(z_d) \tag{22}$$

Here $j$ runs over the $K$ components and $d$ runs over the $D$ dimensions of $z$. Note that a mixture of diagonal Normal distributions is a special case of Eqn. 22. Suppose each $q_{jd}$ has a CDF that we have analytic control over. Then we can form the velocity field

(23)

This is a solution to the transport equation for the mixture weight $\pi_j$; however, it does not satisfy the boundary condition Eqn. 21 and so it is of limited practical use for estimating gradients. (Note that since $\pi$ is constrained to lie on the simplex, the relevant velocity fields to consider are defined w.r.t. an appropriate parameterization like softmax logits $\lambda$. It is for these velocity fields that the boundary condition needs to hold and not for those associated with $\pi$ itself; this is why Eqn. 23 can still be used in cases where the boundary condition does hold.) Intuitively, the problem with Eqn. 23 is that it sends mass to infinity.

A.2 Adaptive Velocity Fields for the Multivariate Normal Distribution

We show that the velocity field

$$\tilde{v}(z) = L A L^{-1} (z - \mu) \tag{24}$$

given in the main text (Eqn. 10) is a solution to the corresponding null transport equation. (This derivation can also be found in reference [18].) The transport equation can be written in the form

(25)

Transforming to whitened coordinates $\epsilon = L^{-1}(z - \mu)$, the null equation is given by

$$\nabla_\epsilon \cdot \left( \mathcal{N}(\epsilon \,|\, 0, \mathbb{1})\, \tilde{v}_\epsilon(\epsilon) \right) = 0 \tag{26}$$

We let $\tilde{v}_\epsilon(\epsilon) = A \epsilon$ and compute

$$\nabla_\epsilon \cdot \left( \mathcal{N}(\epsilon \,|\, 0, \mathbb{1})\, A \epsilon \right) = \mathcal{N}(\epsilon \,|\, 0, \mathbb{1}) \left( {\rm tr}\, A - \epsilon^T A \epsilon \right) = 0 \tag{27}$$

where we have used that $A$ is antisymmetric (so that ${\rm tr}\, A = 0$ and $\epsilon^T A \epsilon = 0$). Transforming back to the original coordinates $z = \mu + L\epsilon$, we end up with Eqn. 24 (the factor of $L$ enters when we transform the vector field).

For the ‘reference solution’ $v^{\theta_i}_0$ we simply use the velocity field furnished by the reparameterization trick, which is given by

$$v^{\mu_a}_0(z) = \hat{e}_a \qquad\qquad v^{L_{ab}}_0(z) = \hat{e}_a\, \epsilon_b \quad \text{with} \quad \epsilon = L^{-1}(z - \mu) \tag{28}$$

Thus the complete specification of $v^{\theta_i}_\phi$ is

$$v^{\theta_i}_\phi(z) = v^{\theta_i}_0(z) + L A^{\theta_i}_\phi L^{-1} (z - \mu) \tag{29}$$

The computational complexity of using AVF gradients with this class of parameterized velocity fields (including the update equations) is

(30)

This should be compared to the cost of the reparameterization trick gradient and the cost of the OMT gradient. However, the computational complexity in Eqn. 30 is somewhat misleading in that the dominant term arises from matrix multiplications, which tend to be quite fast. By contrast the OMT gradient estimator involves a singular value decomposition, which tends to be substantially more expensive than a matrix multiplication on modern hardware. In cases where computing the test function involves expensive operations like Cholesky factorizations, the additional cost reflected in Eqn. 30 is limited. For example, as reported in the GP experiment in Sec. 6.1.2 in the main text, the AVF gradient estimator requires only 6% (11%) more time per iteration for the two hyperparameter settings we consider. Our implementation of the gradient estimator corresponding to Eqn. 29 can be found here: https://github.com/uber/pyro/blob/dev/pyro/distributions/avf_mvn.py

A.3 Mixture distributions

Distribution Name | Component Distributions | Velocity Field | Computational Complexity
DiagNormalsSharedCovariance | $\mathcal{N}(z \,|\, \mu_j, {\rm diag}(\sigma^2))$ (shared $\sigma$) | Eqn. 16 |
ZeroMeanGSM | $\mathcal{N}(z \,|\, 0, \sigma_j^2 \mathbb{1})$ | Eqn. 17 |
GSM | $\mathcal{N}(z \,|\, \mu_j, \sigma_j^2 \mathbb{1})$ | Eqn. 45 |
DiagNormals | $\mathcal{N}(z \,|\, \mu_j, {\rm diag}(\sigma_j^2))$ | Eqn. 42 |

Table 3: Four families of mixture distributions for which we can compute pathwise derivatives. The names are those used in Fig. 1(b) in the main text. All of the velocity fields satisfy the desiderata outlined in Sec. A.1.

In Table 3 we summarize the four families of mixture distributions for which we have found closed form solutions to the transport equation. The first two were presented in the main text; here we also present the solutions for the two other families of mixture distributions.



A.3.1 Pairwise Mass Transport

We begin with the transport equation for the mixture weight $\pi_k$, which reads

$$q_k(z) + \nabla_z \cdot \left( q_\theta(z)\, v^{\pi_k}(z) \right) = 0 \tag{31}$$

Introducing softmax logits $\lambda_k$ given by

$$\pi_k = \frac{e^{\lambda_k}}{\sum_{j=1}^{K} e^{\lambda_j}} \tag{32}$$

and using the fact that

$$\frac{\partial \pi_j}{\partial \lambda_k} = \pi_j \left( \delta_{jk} - \pi_k \right) \tag{33}$$

we observe that the velocity field $v^{\lambda_k}$ for the logit $\lambda_k$ satisfies the following transport equation

$$\pi_k \left( q_k(z) - q_\theta(z) \right) + \nabla_z \cdot \left( q_\theta(z)\, v^{\lambda_k}(z) \right) = 0 \tag{34}$$

We substitute Eqn. 15 for $v^{\lambda_k}$ and compute the divergence term, which yields
$$\nabla_z \cdot \left( q_\theta\, v^{\lambda_k} \right) = \pi_k \sum_{j \neq k} \pi_j\, \nabla_z \cdot \left( q_\theta\, v_{j \to k} \right) = \pi_k \sum_{j \neq k} \pi_j \left( q_j - q_k \right) = \pi_k \left( q_\theta - q_k \right)$$
Thus $v^{\lambda_k}$ satisfies the relevant transport equation Eqn. 34.


A.3.2 Zero Mean Discrete GSM

We show explicitly that Eqn. 17 is a solution of the corresponding transport equation. The derivations for the other families of mixture distributions are similar. It is enough to show that

(35)

Using the identities

(36)

which follow from the definitions above, we have

(37)

By construction we have that

(38)

Comparing terms, we see that Eqn. 17 is indeed a solution of the relevant transport equation, as desired.

A.3.3 Mixture of Diagonal Normal Distributions

Here each component distribution is given by

$$q_j(z) = \prod_{d=1}^{D} \mathcal{N}(z_d \,|\, \mu_{jd}, \sigma_{jd}^2) \tag{39}$$

For each dimension $d$ define

(40)

where we have introduced an arbitrary reference scale. We find that if we define (taking the empty products that appear in Eqn. 41 to be equal to unity)

(41)

then we can construct a solution of the form specified in Eqn. 15 that is given by

(42)

Since the reference scale is arbitrary, this is actually a parameterized family of solutions. Thus this solution is in principle amenable to the Adaptive Velocity Field approach described in Sec. 3 in the main text. In addition, the ordering of the dimensions that occurs implicitly in the telescopic structure of Eqn. 41 is also arbitrary. Thus Eqn. 41 corresponds to a very large family of solutions. In practice we use the fixed ordering given in Eqn. 41 and choose

(43)

We find that this choice works well empirically.

A.3.4 Discrete GSM

Here each component distribution is given by

$$q_j(z) = \mathcal{N}(z \,|\, \mu_j, \sigma_j^2 \mathbb{1}) \tag{44}$$

where each $\sigma_j$ is a positive scalar. We can solve the corresponding transport equation for the mixture weights by superimposing the solutions in Eqn. 16 and Eqn. 17. In more detail, in this case the solution to Eqn. 14 is given by

(45)

where

(46)

Analogously to the reference scale in Sec. A.3.3, the corresponding quantity in Eqn. 46 is arbitrary. As such Eqn. 45 is a parametric family of solutions that is amenable to the Adaptive Velocity Field approach. Intuitively, we use the solutions from Eqn. 16 to effect mass transport between component means and solutions from Eqn. 17 to shrink/dilate covariances.

A.3.5 Mixture of Multivariate Normals with Shared Diagonal Covariance

To finish specifying the solution Eqn. 16 given in the main text, we define the following coordinates:



A.3.6 Radial CDF for the Discrete GSM

The ‘radial CDF’ that appears in Eqn. 18 can be computed analytically. (The double-factorial notation $n!!$ appears in the expressions below; it arises from the standard identity for the even moments of the Normal distribution.) In even dimensions we find

(47)

and in odd dimensions we find

(48)

where ${\rm erf}$ is the error function. Note that in contrast to all the other solutions in Table 3, Eqn. 18 for even $D$ does not involve any error functions.

A.3.7 Velocity Fields for the Component Parameters of Multivariate Mixtures

Suppose $v^{\theta_j}_j$ is a solution of the single-component transport equation for the parameter $\theta_j$, i.e.

$$\frac{\partial q_j(z)}{\partial \theta_j} + \nabla_z \cdot \left( q_j(z)\, v^{\theta_j}_j(z) \right) = 0 \tag{49}$$

Then

$$v^{\theta_j}(z) = \frac{\pi_j\, q_j(z)}{q_\theta(z)}\, v^{\theta_j}_j(z) \tag{50}$$

is a solution of the multi-component transport equation, since

$$\nabla_z \cdot \left( q_\theta\, v^{\theta_j} \right) = \pi_j\, \nabla_z \cdot \left( q_j\, v^{\theta_j}_j \right) = -\pi_j \frac{\partial q_j}{\partial \theta_j} = -\frac{\partial q_\theta}{\partial \theta_j} \tag{51}$$

This completes the derivation of the claim about Eqn. 13 made at the beginning of Sec. 4 in the main text.

A.3.8 Pairwise Mass Transport and Control Variates

For a mixture with $K$ components, define square $K \times K$ matrices such that all the rows and columns of each matrix sum to zero. Then

(52)

is a null solution to the transport equation for the logits, Eqn. 34. While we have not done so ourselves, these null solutions could be used to adaptively move mass among the component distributions of a mixture instead of using the recipe in Eqn. 15, which takes mass from each component distribution in proportion to its mass (which is in general suboptimal).

A.4 Experimental Details

A.4.1 Synthetic Test Function Experiments

We describe the setup for the experiments in Sec. 6.1.1 and Sec. 6.2.1 in the main text.

For the experiment in Sec. 6.1.1 the dimension is held fixed and the mean of $q_\theta(z)$ is fixed to the zero vector. The Cholesky factor $L$ that enters into $q_\theta(z)$ is constructed as follows. The diagonal of $L$ consists of all ones. To construct the off-diagonal terms we populate the entries below the diagonal of a matrix by drawing each entry from the uniform distribution on the unit interval and then scale these entries by an overall factor; this factor controls the magnitude of the off-diagonal terms of $L$ and appears on the horizontal axis of Fig. 1(a) in the main text. The three test functions are constructed as follows. First we construct a strictly lower-diagonal matrix by drawing each entry from a Bernoulli distribution with probability 0.5; this matrix is then used to define the test functions below. The cosine test function is then given by

(53)

The quadratic test function is given by

(54)

The quartic test function is given by

(55)

For the AVF gradient estimator we train $\phi$ to (approximate) convergence before estimating the gradient variance.

For the experiment in Sec. 6.2.1 the test function is fixed. The distributions are constructed as follows. For the distributions that admit component-specific means, each mean is sampled from a sphere of radius 2. For the distribution whose velocity field is given in Eqn. 17, the mean is fixed to zero. The covariance matrices are sampled from a narrow distribution centered at the identity matrix. Consequently the different mixture components have little overlap.

In all cases the gradients can be computed analytically, which makes it easier to reliably estimate the variance of the gradient estimators.

A.4.2 Gaussian Process Regression

We use the Adam optimizer [21] to optimize the ELBO with single-sample gradient estimates. We chose the Adam hyperparameters by doing a grid search over the learning rate and a second optimizer hyperparameter. For the AVF gradient estimator these hyperparameters are allowed to differ between the $\theta$ and $\phi$ gradient steps (the latter needs a much larger learning rate for good results). For each combination of hyperparameters we did 500 training iterations for five trials with different random seeds and then chose the combination that yielded the highest mean ELBO after 500 iterations. We then trained the model for 500 iterations, initializing with another random number seed. The figure in the main text shows the training curves for that single run. We confirmed that other random number seeds give similar results.

A.4.3 Baseball Experiment

There are 18 baseball players and the data consist of 45 hits/misses for each player. The model has two global latent random variables (with appropriate priors) and 18 local latent random variables, one per player. The data likelihood factorizes into 45 Bernoulli observations for each player, with that player's chance of success governed by the corresponding local latent variable. All variational approximations are formed in an unconstrained space. The mean-field variational approximation consists of a diagonal Normal distribution in the unconstrained space, while the mixture variational approximation consists of a mixture of diagonal Normal distributions in the same space. We use the Adam optimizer [21] for training.
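For reference, a Pyro-style sketch of a partially pooled model of this general shape is shown below; the specific priors and the Beta parameterization are illustrative placeholders rather than the exact choices used in this experiment (see [1] for the original modeling setup).

```python
import torch
import pyro
import pyro.distributions as dist

def model(at_bats, hits):
    # Two global latent variables: an overall success rate and a concentration.
    # The priors below are illustrative placeholders, not the exact ones used here.
    phi = pyro.sample("phi", dist.Uniform(0.0, 1.0))
    kappa = pyro.sample("kappa", dist.Pareto(1.0, 1.5))
    with pyro.plate("players", 18):
        # One local latent variable per player (partial pooling through phi, kappa).
        theta = pyro.sample("theta", dist.Beta(phi * kappa, (1.0 - phi) * kappa))
        # 45 at-bats per player, aggregated into a Binomial likelihood.
        pyro.sample("obs", dist.Binomial(total_count=at_bats, probs=theta), obs=hits)

# Example usage with dummy data (45 at-bats per player).
at_bats = torch.full((18,), 45.0)
hits = torch.randint(0, 46, (18,)).float()
model(at_bats, hits)
```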

A.4.4 Continuous State Space Model

We consider the following simple model with two-dimensional observations $x_t$ and two-dimensional latent random variables $z_t$: