Abstract

We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent an approximate posterior distribution and uses this for optimisation of a variational lower bound. We develop stochastic backpropagation – rules for gradient backpropagation through stochastic variables – and derive an algorithm that allows for joint optimisation of the parameters of both the generative and recognition models. We demonstrate on several real-world data sets that by using stochastic backpropagation and variational inference, we obtain models that are able to generate realistic samples of data, allow for accurate imputations of missing data, and provide a useful tool for high-dimensional data visualisation.


 

Stochastic Backpropagation and Approximate Inference
in Deep Generative Models

 

Danilo J. Rezende, Shakir Mohamed, Daan Wierstra

{danilor, shakir, daanw}@google.com

Google DeepMind, London


1. Introduction

There is an immense effort in machine learning and statistics to develop accurate and scalable probabilistic models of data. Such models are called upon whenever we are faced with tasks requiring probabilistic reasoning, such as prediction, missing data imputation and uncertainty estimation; or in simulation-based analyses, common in many scientific fields such as genetics, robotics and control that require generating a large number of independent samples from the model.

Recent efforts to develop generative models have focused on directed models, since samples are easily obtained by ancestral sampling from the generative process. Directed models such as belief networks and similar latent variable models (dayan1995; frey1996; saul1996mean; book:bartholomew; uria2013; gregor2013) can be easily sampled from, but in most cases, efficient inference algorithms have remained elusive. These efforts, combined with the demand for accurate probabilistic inferences and fast simulation, lead us to seek generative models that are i) deep, since hierarchical architectures allow us to capture complex structure in the data, ii) allow for fast sampling of fantasy data from the inferred model, and iii) are computationally tractable and scalable to high-dimensional data.

We meet these desiderata by introducing a class of deep, directed generative models with Gaussian latent variables at each layer. To allow for efficient and tractable inference, we introduce an approximate representation of the posterior over the latent variables using a recognition model that acts as a stochastic encoder of the data. For the generative model, we derive the objective function for optimisation using variational principles; for the recognition model, we specify its structure and regularisation by exploiting recent advances in deep learning. Using this construction, we can train the entire model by a modified form of gradient backpropagation that allows for joint optimisation of the parameters of both the generative and recognition models.

We build upon the large body of prior work (discussed in section 6) and make the following contributions:

  • We combine ideas from deep neural networks and probabilistic latent variable modelling to derive a general class of deep, non-linear latent Gaussian models (section 2).

  • We present a new approach for scalable variational inference that allows for joint optimisation of both variational and model parameters by exploiting the properties of latent Gaussian distributions and gradient backpropagation (sections 3 and 4).

  • We provide a comprehensive and systematic evaluation of the model demonstrating its applicability to problems in simulation, visualisation, prediction and missing data imputation (section 5).

2. Deep Latent Gaussian Models

Deep latent Gaussian models (DLGMs) are a general class of deep directed graphical models that consist of Gaussian latent variables at each layer of a processing hierarchy. The model consists of $L$ layers of latent variables. To generate a sample from the model, we begin at the top-most layer ($L$) by drawing from a Gaussian distribution. The activation $h_l$ at any lower layer is formed by a non-linear transformation of the layer above $h_{l+1}$, perturbed by Gaussian noise. We descend through the hierarchy and generate observations $v$ by sampling from the observation likelihood using the activation of the lowest layer $h_1$. This process is described graphically in figure 1.

Figure 1: (a) Graphical model for DLGMs (5). (b) The corresponding computational graph. Black arrows indicate the forward pass of sampling from the recognition and generative models: Solid lines indicate propagation of deterministic activations, dotted lines indicate propagation of samples. Red arrows indicate the backward pass for gradient computation: Solid lines indicate paths where deterministic backpropagation is used, dashed arrows indicate stochastic backpropagation.

This generative process is described as follows:

$\xi_l \sim \mathcal{N}(\xi_l \,|\, 0, I), \quad l = 1, \dots, L$ (1)
$h_L = G_L \xi_L$ (2)
$h_l = T_l(h_{l+1}) + G_l \xi_l, \quad l = 1, \dots, L-1$ (3)
$v \sim \pi(v \,|\, T_0(h_1))$ (4)

where $\xi_l$ are mutually independent Gaussian variables. The transformations $T_l$ represent multi-layer perceptrons (MLPs) and the $G_l$ are matrices. At the visible layer, the data is generated from any appropriate distribution $\pi(v|\cdot)$ whose parameters are specified by a transformation of the first latent layer. Throughout the paper we refer to the set of parameters in this generative model by $\theta^g$, i.e. the parameters of the maps $T_l$ and the matrices $G_l$. This construction allows us to make use of as many deterministic and stochastic layers as needed. We adopt a weak Gaussian prior over $\theta^g$, $p(\theta^g) = \mathcal{N}(\theta^g \,|\, 0, \kappa I)$.
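As a concrete illustration, the ancestral sampling procedure of equations (1)–(4) can be sketched in a few lines of numpy. The toy two-layer instantiation at the bottom (identity matrices $G_l$, a tanh MLP and a Bernoulli likelihood) is an illustrative choice, not a model used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dlgm(T, G, sample_likelihood, L, K):
    """Ancestral sampling following eqs (1)-(4). T[l] is the MLP T_l
    (T[0] parameterises the likelihood); G[l] is the matrix G_l."""
    h = G[L] @ rng.standard_normal(K)      # eqs (1)-(2): h_L = G_L xi_L
    for l in range(L - 1, 0, -1):          # eq (3): h_l = T_l(h_{l+1}) + G_l xi_l
        h = T[l](h) + G[l] @ rng.standard_normal(K)
    return sample_likelihood(T[0](h))      # eq (4): v ~ pi(v | T_0(h_1))

# Toy two-layer instantiation: tanh MLP, identity G_l, Bernoulli likelihood.
K, L = 4, 2
T = {0: lambda h: 1.0 / (1.0 + np.exp(-h)), 1: np.tanh}
G = {1: np.eye(K), 2: np.eye(K)}
v = sample_dlgm(T, G, lambda p: (rng.random(K) < p).astype(float), L, K)
```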

The joint probability distribution of this model can be expressed in two equivalent ways:

$p(v, h) = p(v \,|\, h_1, \theta^g)\, p(h_L \,|\, \theta^g) \prod_{l=1}^{L-1} p_l(h_l \,|\, h_{l+1}, \theta^g)\, p(\theta^g)$ (5)
$p(v, \xi) = \pi(v \,|\, T_0(h_1(\xi_1, \dots, \xi_L)), \theta^g) \prod_{l=1}^{L} \mathcal{N}(\xi_l \,|\, 0, I)\, p(\theta^g)$ (6)

The conditional distributions $p_l(h_l | h_{l+1})$ are implicitly defined by equation (3) and are Gaussian distributions with mean $\mu_l = T_l(h_{l+1})$ and covariance $S_l = G_l G_l^\top$. Equation (6) makes explicit that this generative model works by applying a complex non-linear transformation to a spherical Gaussian distribution such that the transformed distribution tries to match the empirical distribution. A graphical model corresponding to equation (5) is shown in figure 1(a).

This specification for deep latent Gaussian models (DLGMs) generalises a number of well known models. When we have only one layer of latent variables and use a linear mapping $T(\cdot)$, we recover factor analysis (book:bartholomew) – more general mappings allow for a non-linear factor analysis (lappalainen2000bayesian). When the mappings are of the form $T_l(h) = A_l f(h) + b_l$, for simple element-wise non-linearities $f$ such as the probit function or the rectified linearity, we recover the non-linear Gaussian belief network (frey1999a). We describe the relationship to other existing models in section 6. Given this specification, our key task is to develop a method for tractable inference. A number of approaches are known and widely used, and include: mean-field variational EM (beal2003); the wake-sleep algorithm (dayan2000); and stochastic variational methods and related control-variate estimators (wilson1984variance; Williams1992; hoffman2012stochastic). We also follow a stochastic variational approach, but shall develop an alternative to these existing inference algorithms that overcomes many of their limitations and that is both scalable and efficient.

3. Stochastic Backpropagation

Gradient descent methods in latent variable models typically require computations of the form $\nabla_\theta \mathbb{E}_{q_\theta}[f(\xi)]$, where the expectation is taken with respect to a distribution $q_\theta(\xi)$ with parameters $\theta$, and $f$ is a loss function that we assume to be integrable and smooth. This quantity is difficult to compute directly since i) the expectation is unknown for most problems, and ii) there is an indirect dependency on the parameters of $q$ over which the expectation is taken.

We now develop the key identities that are used to allow for efficient inference by exploiting specific properties of the problem of computing gradients through random variables. We refer to this computational strategy as stochastic backpropagation.

3.1. Gaussian Backpropagation (GBP)

When the distribution $q$ is a $K$-dimensional Gaussian $\mathcal{N}(\xi \,|\, \mu, C)$ the required gradients can be computed using the Gaussian gradient identities:

$\nabla_{\mu_i} \mathbb{E}_{\mathcal{N}(\mu, C)}[f(\xi)] = \mathbb{E}_{\mathcal{N}(\mu, C)}[\nabla_{\xi_i} f(\xi)]$ (7)
$\nabla_{C_{ij}} \mathbb{E}_{\mathcal{N}(\mu, C)}[f(\xi)] = \tfrac{1}{2}\, \mathbb{E}_{\mathcal{N}(\mu, C)}[\nabla^2_{\xi_i, \xi_j} f(\xi)]$ (8)

which are due to the theorems by bonnet1964 and price1958, respectively. These equations hold in expectation for any integrable and smooth function $f(\xi)$. Equation (7) is a direct consequence of the location-scale transformation for the Gaussian (discussed in section 3.2). Equation (8) can be derived by successive application of the product rule for integrals; we provide the proofs for these identities in appendix B.

Equations (7) and (8) are especially interesting since they allow for unbiased gradient estimates by using a small number of samples from $q$. Assume that both the mean $\mu$ and covariance matrix $C$ depend on a parameter vector $\theta$. We are now able to write a general rule for Gaussian gradient computation by combining equations (7) and (8) and using the chain rule:

$\nabla_\theta \mathbb{E}_{\mathcal{N}(\mu, C)}[f(\xi)] = \mathbb{E}_{\mathcal{N}(\mu, C)}\!\left[ g^\top \frac{\partial \mu}{\partial \theta} + \tfrac{1}{2} \mathrm{Tr}\!\left( H \frac{\partial C}{\partial \theta} \right) \right]$ (9)

where $g$ and $H$ are the gradient and the Hessian of the function $f(\xi)$, respectively. Equation (9) can be interpreted as a modified backpropagation rule for Gaussian distributions that takes into account the gradients through the mean $\mu$ and covariance $C$. This reduces to the standard backpropagation rule when $C$ is constant. Unfortunately this rule requires knowledge of the Hessian matrix of $f(\xi)$, which has an algorithmic complexity $O(K^3)$. For inference in DLGMs, we later introduce an unbiased though higher-variance estimator that requires only quadratic $O(K^2)$ complexity.
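The identity in equation (7) is easy to verify numerically. A minimal check, using the test function $f(\xi) = \sum_i \sin(\xi_i)$ (an illustrative choice, picked because its Gaussian expectation $\mathbb{E}[f] = \sum_i \sin(\mu_i) e^{-C_{ii}/2}$ has a closed-form gradient):

```python
import numpy as np

rng = np.random.default_rng(1)
K, S = 3, 500_000
mu = rng.standard_normal(K)
A = rng.standard_normal((K, K))
C = A @ A.T + K * np.eye(K)                       # a positive-definite covariance
xi = mu + rng.standard_normal((S, K)) @ np.linalg.cholesky(C).T   # xi ~ N(mu, C)

# For f(xi) = sum_i sin(xi_i) we know E[f] = sum_i sin(mu_i) exp(-C_ii/2),
# so grad_mu E[f] = cos(mu_i) exp(-C_ii/2) in closed form.
mc_rhs = np.cos(xi).mean(axis=0)                  # RHS of eq (7): E[grad_xi f]
exact_lhs = np.cos(mu) * np.exp(-np.diag(C) / 2)  # LHS of eq (7), analytically
print(np.max(np.abs(mc_rhs - exact_lhs)))         # small Monte Carlo error
```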

3.2. Generalised Backpropagation Rules

We describe two approaches to derive general backpropagation rules for non-Gaussian $q$-distributions.

Using the product rule for integrals. For many exponential family distributions, it is possible to find a function $B(\xi; \theta)$ to ensure that

$\nabla_\theta \mathbb{E}_{p(\xi|\theta)}[f(\xi)] = \mathbb{E}_{p(\xi|\theta)}\left[ B(\xi; \theta)\, \nabla_\xi f(\xi) \right].$

That is, we express the gradient with respect to the parameters of $p$ as an expectation of gradients with respect to the random variables themselves. This approach can be used to derive rules for many distributions such as the Gaussian, inverse Gamma and log-Normal. We discuss this in more detail in appendix C.

Using suitable co-ordinate transformations.
We can also derive stochastic backpropagation rules for any distribution that can be written as a smooth, invertible transformation of a standard base distribution. For example, any Gaussian distribution $\mathcal{N}(\mu, C)$ can be obtained as a transformation of a spherical Gaussian $\epsilon \sim \mathcal{N}(0, I)$, using the transformation $\xi = \mu + R\epsilon$ with $C = RR^\top$. The gradient of the expectation with respect to $R$ is then:

$\nabla_R \mathbb{E}_{\mathcal{N}(\mu, C)}[f(\xi)] = \mathbb{E}_{\mathcal{N}(0, I)}[g\, \epsilon^\top]$ (10)

where $g$ is the gradient of $f$ evaluated at $\mu + R\epsilon$; this provides a lower-cost alternative to Price's theorem (8). Such transformations are well known for many distributions, especially those with a self-similarity property or location-scale formulation, such as the Gaussian, Student's $t$-distribution, stable distributions, and generalised extreme value distributions.
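A Monte Carlo sketch of equation (10), again with the illustrative test function $f(\xi) = \sum_i \sin(\xi_i)$: the per-sample estimate is simply the outer product of the backpropagated gradient $g$ with the base noise $\epsilon$, and no Hessian is required.

```python
import numpy as np

rng = np.random.default_rng(2)
K, S = 3, 100_000
mu = np.zeros(K)
R = np.tril(rng.standard_normal((K, K))) + 2.0 * np.eye(K)
grad_f = lambda xi: np.cos(xi)                    # g for f(xi) = sum_i sin(xi_i)

# Eq (10): grad_R E[f] = E[g(mu + R eps) eps^T] with eps ~ N(0, I).
eps = rng.standard_normal((S, K))
g = grad_f(mu + eps @ R.T)                        # gradients at xi = mu + R eps
grad_R = (g[:, :, None] * eps[:, None, :]).mean(axis=0)   # mean outer product
print(grad_R)                                     # Monte Carlo estimate of grad_R E[f]
```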

Stochastic backpropagation in other contexts. The Gaussian gradient identities described above do not appear to be widely used. These identities have been recognised by opper2009 for variational inference in Gaussian process regression, and following this work, by gravesNIPS2011 for parameter learning in large neural networks. Concurrently with this paper, kingma2013auto present an alternative discussion of stochastic backpropagation. Our approaches were developed simultaneously and provide complementary perspectives on the use and derivation of stochastic backpropagation rules.

4. Scalable Inference in DLGMs

We use the matrix $V$ to refer to the full data set of size $N$ with observations $V = [v_1, \dots, v_N]^\top$.

4.1. Free Energy Objective

To perform inference in DLGMs we must integrate out the effect of any latent variables – this requires us to compute the integrated or marginal likelihood. In general, this will be an intractable integration and instead we optimise a lower bound on the marginal likelihood. We introduce an approximate posterior distribution $q(\xi|v)$ and apply Jensen's inequality following the variational principle (beal2003) to obtain:

$\log p(V) \geq \mathbb{E}_{q(\xi|V)}[\log p(V|\xi)] - \mathrm{KL}\left[ q(\xi|V) \,\|\, p(\xi) \right] = -\mathcal{F}(V)$ (11)

This objective consists of two terms: the first is the KL-divergence between the variational distribution and the prior distribution (which acts as a regulariser), and the second is a reconstruction error.

We specify the approximate posterior as a distribution that is conditioned on the observed data. This distribution can be specified as any directed acyclic graph where each node of the graph is a Gaussian conditioned, through linear or non-linear transformations, on its parents. The joint distribution in this case is non-Gaussian, but stochastic backpropagation can still be applied.

For simplicity, we use a $q(\xi|v)$ that is a Gaussian distribution that factorises across the layers (but not necessarily within a layer):

$q(\xi \,|\, v, \theta^r) = \prod_{l=1}^{L} \mathcal{N}(\xi_l \,|\, \mu_l(v), C_l(v))$ (12)

where the mean $\mu_l(v)$ and covariance $C_l(v)$ are generic maps represented by deep neural networks. Parameters of the $q$-distribution are denoted by the vector $\theta^r$.

For a Gaussian prior and a Gaussian recognition model, the KL term in (11) can be computed analytically and the free energy becomes:

$\mathcal{F}(V) = -\sum_n \mathbb{E}_{q}[\log p(v_n \,|\, h(\xi_n))] + \frac{1}{2} \sum_{n,l} \left[ \|\mu_{n,l}\|^2 + \mathrm{Tr}(C_{n,l}) - \log|C_{n,l}| - K \right] + \frac{1}{2\kappa}\|\theta^g\|^2$ (13)

where $\mathrm{Tr}(C)$ and $|C|$ indicate the trace and the determinant of the covariance matrix $C$, respectively.
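The per-layer penalty appearing in equation (13) is the standard KL divergence from $\mathcal{N}(\mu, C)$ to the $\mathcal{N}(0, I)$ prior and can be computed directly; a small numpy helper (illustrative):

```python
import numpy as np

def kl_to_standard_normal(mu, C):
    """Per-layer penalty in eq (13): KL[N(mu, C) || N(0, I)]
    = 0.5 * (||mu||^2 + Tr(C) - log|C| - K)."""
    sign, logdet = np.linalg.slogdet(C)
    assert sign > 0, "C must be positive definite"
    return 0.5 * (mu @ mu + np.trace(C) - logdet - mu.size)
```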

The specification of an approximate posterior distribution that is conditioned on the observed data is the first component of an efficient variational inference algorithm. We shall refer to the distribution (12) as a recognition model, whose design is independent of the generative model. A recognition model allows us to introduce a form of amortised inference (gershman2014) for variational methods, in which we share statistical strength by allowing for generalisation across the posterior estimates for all latent variables using a model. The implication of this generalisation ability is twofold: faster convergence during training, and faster inference at test time, since we only require a single pass through the recognition model rather than an iterative computation (such as in a generalised E-step).

To allow for the best possible inference, the specification of the recognition model must be flexible enough to provide an accurate approximation of the posterior distribution – motivating the use of deep neural networks. We regularise the recognition model by introducing additional noise, specifically, bit-flip or drop-out noise at the input layer and small additional Gaussian noise to samples from the recognition model. We use rectified linear activation functions as non-linearities for any deterministic layers of the neural network. We found that such regularisation is essential and without it the recognition model is unable to provide accurate inferences for unseen data points.

4.2. Gradients of the Free Energy

To optimise (13), we use Monte Carlo methods to approximate any expectations and use stochastic gradient descent for optimisation. This requires efficient estimators of the gradients of all terms in equation (13) with respect to the parameters $\theta^g$ and $\theta^r$ of the generative and the recognition models, respectively.

The gradients with respect to the $j$th generative parameter $\theta^g_j$ can be computed using:

$\nabla_{\theta^g_j} \mathcal{F}(V) = -\mathbb{E}_{q}\!\left[ \nabla_{\theta^g_j} \log p(V \,|\, \xi) \right] + \frac{1}{\kappa}\, \theta^g_j$ (14)

An unbiased estimator of $\nabla_{\theta^g_j} \mathcal{F}(V)$ is obtained by approximating equation (14) with a small number of samples (or even a single sample) from the recognition model $q(\xi|V)$.

To obtain gradients with respect to the recognition parameters $\theta^r$, we use the rules for Gaussian backpropagation developed in section 3. To avoid the cubic complexity of the Hessian in the general rule (9), we use the co-ordinate transformation for the Gaussian to write the gradient with respect to the factor matrix $R$ instead of the covariance $C$ (recalling $C = RR^\top$), as derived in equation (10), where derivatives are computed for the function $f(\xi) = \log p(v \,|\, h(\xi))$.

The gradients of $\mathcal{F}(v)$ in equation (13) with respect to the variational mean $\mu_l$ and the factors $R_l$ are:

$\nabla_{\mu_l} \mathcal{F}(v) = -\mathbb{E}_{q}[\nabla_{\xi_l} \log p(v \,|\, h(\xi))] + \mu_l$ (15)
$\nabla_{R_l} \mathcal{F}(v) = -\mathbb{E}_{q}[\nabla_{\xi_l} \log p(v \,|\, h(\xi))\, \epsilon_l^\top] + \tfrac{1}{2} \nabla_{R_l}\left[ \mathrm{Tr}(C_l) - \log|C_l| \right]$ (16)
where the gradients are computed by backpropagation. Unbiased estimators of the gradients (15) and (16) are obtained jointly by sampling from the recognition model (bottom-up pass) and updating the values of the generative model layers using equation (3) (top-down pass).

Finally, the gradients with respect to the recognition parameters $\theta^r$ are obtained from equations (15) and (16) by the chain rule:

$\nabla_{\theta^r} \mathcal{F}(v) = \sum_l \left[ \nabla_{\mu_l} \mathcal{F}(v)^\top \frac{\partial \mu_l}{\partial \theta^r} + \mathrm{Tr}\!\left( \nabla_{R_l} \mathcal{F}(v)^\top \frac{\partial R_l}{\partial \theta^r} \right) \right]$ (17)

The gradients (14) – (17) are now used to descend the free-energy surface with respect to both the generative and recognition parameters in a single optimisation step. Figure 1 shows the flow of computation in DLGMs. Our algorithm proceeds by first performing a forward pass (black arrows), consisting of a bottom-up (recognition) phase and a top-down (generation) phase, which updates the hidden activations of the recognition model and parameters of any Gaussian distributions, and then a backward pass (red arrows) in which gradients are computed using the appropriate backpropagation rule for deterministic and stochastic layers. We take a descent step using:

$\Delta(\theta^g, \theta^r) = -\Gamma \left( \nabla_{\theta^g} \mathcal{F},\; \nabla_{\theta^r} \mathcal{F} \right)$ (18)

where $\Gamma$ is a diagonal pre-conditioning matrix computed using the RMSprop heuristic (described by G. Hinton, 'RMSprop: Divide the gradient by a running average of its recent magnitude', in Neural Networks for Machine Learning, Coursera lecture 6e, 2012). The learning procedure is summarised in algorithm 1.

  while hasNotConverged() do
     V ← getMiniBatch()
     ξ_n ∼ q(ξ_n | v_n) (bottom-up pass), eq. (12)
     h(ξ_n) (top-down pass), eq. (3)
     updateGradients(), eqs (14)–(17)
     (θ^g, θ^r) ← (θ^g, θ^r) + Δ(θ^g, θ^r), eq. (18)
  end while
Algorithm 1: Learning in DLGMs
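To make the flow of Algorithm 1 concrete, the sketch below trains a deliberately minimal instance: one linear stochastic layer, a Gaussian likelihood with fixed variance, a linear recognition model with diagonal (rather than rank-one) covariance, and plain SGD in place of the RMSprop step of equation (18). All architecture and hyperparameter choices here are illustrative simplifications of the full model.

```python
import numpy as np

rng = np.random.default_rng(3)
D, K, sigma2, lr = 8, 2, 0.1, 1e-3

# Synthetic data from a ground-truth linear model.
W_true = rng.standard_normal((D, K))
data = rng.standard_normal((5000, K)) @ W_true.T \
       + np.sqrt(sigma2) * rng.standard_normal((5000, D))

W, b = 0.1 * rng.standard_normal((D, K)), np.zeros(D)   # generative theta_g
A, c = 0.1 * rng.standard_normal((K, D)), np.zeros(K)   # recognition theta_r
rho = np.zeros(K)                                       # q std dev d = exp(rho)

for _ in range(20_000):
    v = data[rng.integers(5000)]                 # a "mini-batch" of one sample
    mu, d = A @ v + c, np.exp(rho)               # bottom-up pass, cf. eq (12)
    eps = rng.standard_normal(K)
    xi = mu + d * eps                            # reparameterised sample
    r = v - W @ xi - b                           # top-down pass, residual of p(v|xi)
    g_xi = W.T @ r / sigma2                      # grad_xi log p(v|xi)
    gW, gb = -np.outer(r, xi) / sigma2, -r / sigma2   # cf. eq (14), no weight decay
    g_mu = -g_xi + mu                            # cf. eq (15)
    g_rho = (-g_xi * eps + d - 1.0 / d) * d      # diagonal analogue of eq (16)
    gA, gc = np.outer(g_mu, v), g_mu             # chain rule, cf. eq (17)
    for p, g in ((W, gW), (b, gb), (A, gA), (c, gc), (rho, g_rho)):
        p -= lr * g                              # plain SGD step, cf. eq (18)
```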
4.3. Gaussian Covariance Parameterisation

There are a number of approaches for parameterising the covariance matrix $C$ of the recognition model. Maintaining a full covariance matrix in equation (13) would entail an algorithmic complexity of $O(K^3)$ for training and sampling per layer, where $K$ is the number of latent variables per layer.

The simplest approach is to use a diagonal covariance matrix $C = \mathrm{diag}(d)$, where $d$ is a $K$-dimensional vector. This approach is appealing since it allows for linear-time computation and sampling, but only allows for axis-aligned posterior distributions.

We can improve upon the diagonal approximation by parameterising the covariance as a rank-one matrix with a diagonal correction. Using vectors $u$ and $d$, with $D = \mathrm{diag}(d)$, we parameterise the precision $C^{-1}$ as:

$C^{-1} = D + uu^\top$ (19)

This representation allows for arbitrary rotations of the Gaussian distribution along one principal direction with relatively few additional parameters (magdon-ismail2010a). By application of the matrix inversion lemma (Woodbury identity), we obtain the covariance matrix in terms of $d$ and $u$ as:

$C = D^{-1} - \frac{1}{\eta} D^{-1} u u^\top D^{-1}, \qquad \eta = 1 + u^\top D^{-1} u$ (20)

This allows both the trace $\mathrm{Tr}(C)$ and the log-determinant $\log|C|$ needed in the computation of the Gaussian KL, as well as their gradients, to be computed in $O(K)$ time per layer.

The factorisation $C = RR^\top$, with $R$ a matrix of the same size as $C$, can be computed directly in terms of $d$ and $u$. One solution for $R$ is:

$R = D^{-1/2} - \left[ \frac{1 - 1/\sqrt{\eta}}{u^\top D^{-1} u} \right] D^{-1} u u^\top D^{-1/2}$ (21)

The product of $R$ with an arbitrary vector can be computed in $O(K)$ without computing $R$ explicitly. This also allows us to sample efficiently from this Gaussian, since any Gaussian random variable $\xi$ with mean $\mu$ and covariance matrix $C = RR^\top$ can be written as $\xi = \mu + R\epsilon$, where $\epsilon$ is a standard Gaussian variate.
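The sketch below works through this parameterisation numerically: it builds $C$ from equation (20), checks the $O(K)$ trace and log-determinant expressions, and verifies that the factor $R$ of equation (21) satisfies $RR^\top = C$ while only $O(K)$ matrix-vector products are ever needed (the dense matrices are formed here only for the checks).

```python
import numpy as np

rng = np.random.default_rng(4)
K = 5
d = np.exp(rng.standard_normal(K))               # diagonal of D, positive
u = rng.standard_normal(K)

Dinv_u = u / d
eta = 1.0 + u @ Dinv_u                           # 1 + u^T D^-1 u

# Eq (20): C = D^-1 - D^-1 u u^T D^-1 / eta.
C = np.diag(1.0 / d) - np.outer(Dinv_u, Dinv_u) / eta

# O(K) trace and log-determinant for the KL term in eq (13).
trace_C = (1.0 / d).sum() - (Dinv_u @ Dinv_u) / eta
logdet_C = -np.log(d).sum() - np.log(eta)        # since |C| = 1 / (|D| eta)
assert np.isclose(trace_C, np.trace(C))
assert np.isclose(logdet_C, np.linalg.slogdet(C)[1])

# Eq (21): factor R with R R^T = C; products R @ x cost O(K).
alpha = (1.0 - 1.0 / np.sqrt(eta)) / (u @ Dinv_u)
u_over_sqrtd = u / np.sqrt(d)
R = np.diag(1.0 / np.sqrt(d)) - alpha * np.outer(Dinv_u, u_over_sqrtd)
assert np.allclose(R @ R.T, C)

Rx = lambda x: x / np.sqrt(d) - alpha * Dinv_u * (u_over_sqrtd @ x)  # O(K)
eps = rng.standard_normal(K)                     # eps ~ N(0, I)
sample = Rx(eps)                                 # a draw from N(0, C), no K x K matrices
```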

Since this covariance parametrisation has linear cost in the number of latent variables, we can also use it to parameterise the variational distribution of all layers jointly, instead of the factorised assumption in (12).

4.4. Algorithm Complexity

The computational complexity of producing a sample from the generative model is $O(LK^2)$, where $K$ is the average number of latent variables per layer and $L$ is the number of layers (counting both deterministic and stochastic layers). The computational complexity per training sample during training is also $O(LK^2)$ – the same as that of a matching auto-encoder.

5. Results

Generative models have a number of applications in simulation, prediction, data visualisation, missing data imputation and other forms of probabilistic reasoning. We describe the testing methodology we use and present results on a number of these tasks.

5.1. Analysing the Approximate Posterior
(a) Diagonal covariance
(b) Low-rank covariance
(c) Performance
Figure 2: (a, b) Analysis of the true vs. approximate posterior for MNIST. Within each image we show four views of the same posterior, zooming in on the region centred on the MAP (red) estimate. (c) Comparison of test log likelihoods.

We use sampling to evaluate the true posterior distribution for a number of MNIST digits using the binarised data set from larochelle2011. We visualise the posterior distribution for a model with two Gaussian latent variables in figure 2. The true posterior distribution is shown by the grey regions and was computed by importance sampling with a large number of particles aligned in a grid between -5 and 5. In figure 2(a) we see that these posterior distributions are elliptical or spherical in shape and thus, it is reasonable to assume that they can be well approximated by a Gaussian. Samples from the prior (green) are spread widely over the space and very few samples fall in the region of significant posterior mass, explaining the inefficiency of estimation methods that rely on samples from the prior. Samples from the recognition model (blue) are concentrated on the posterior mass, indicating that the recognition model has learnt the correct posterior statistics, which should lead to efficient learning.

Model                                     − log p(v)
Factor Analysis                           —
NLGBN (frey1999a)                         —
Wake-Sleep (dayan2000)                    91.3
DLGM diagonal covariance                  —
DLGM rank-one covariance                  —
Results below from uria2013:
MoBernoullis K=10                         168.95
MoBernoullis K=500                        137.64
RBM (500 h, 25 CD steps)                  ≈ 86.34
DBN 2hl                                   ≈ 84.55
NADE 1hl (fixed order)                    88.86
NADE 1hl (fixed order, RLU, minibatch)    88.33
EoNADE 1hl (2 orderings)                  90.69
EoNADE 1hl (128 orderings)                87.71
EoNADE 2hl (2 orderings)                  87.96
EoNADE 2hl (128 orderings)                85.10
Table 1: Comparison of negative log-probabilities on the test set for the binarised MNIST data.


In figure 2(a) we see that samples from the recognition model are aligned to the axes and do not capture the posterior correlation. The correlation is captured using the structured covariance model in figure 2(b). Not all posteriors are Gaussian in shape, but the recognition model places mass in the best location possible to provide a reasonable approximation. As a benchmark for comparison, the performance in terms of test log-likelihood is shown in figure 2(c), using the same architecture, for factor analysis (FA), the wake-sleep algorithm, and our approach using both the diagonal and structured covariance approaches. For this experiment, the generative model consists of 100 latent variables feeding into a deterministic layer of 300 nodes, which then feeds to the observation likelihood. We use the same structure for the recognition model.

5.2. Simulation and Prediction

(a) Left: Training data. Middle: Sampled pixel probabilities. Right: Model samples
(b) 2D embedding.
Figure 3: Performance on the MNIST dataset. For the visualisation, each colour corresponds to one of the digit classes.
(a) NORB
(b) CIFAR
(c) Frey
Figure 4: Samples generated from DLGMs for three data sets: (a) NORB, (b) CIFAR 10, (c) Frey faces. In all images, the left image shows samples from the training data and the right side shows the generated samples.

We evaluate the performance of a three-layer latent Gaussian model on the MNIST data set. The model consists of two deterministic layers with 200 hidden units and a stochastic layer of 200 latent variables. We use mini-batches of 200 observations and train the model using stochastic backpropagation. Samples from this model are shown in figure 3(a). We also compare the test log-likelihood to a large number of existing approaches in table 1. We used the binarised dataset as in uria2013 and quote the log-likelihoods in the lower part of the table from this work. These results show that our approach is competitive with some of the best models currently available. The generated digits also match the true data well and visually appear as good as some of the best visualisations from these competing approaches.

We also analysed the performance of our model on three high-dimensional real image data sets. The NORB object recognition data set consists of images of toy objects viewed under varying conditions. We use a model consisting of one deterministic layer of 400 hidden units and one stochastic layer of 100 latent variables. Samples produced from this model are shown in figure 4(a). The CIFAR10 natural images data set consists of RGB images of size 32 × 32 pixels, which we split into random patches. We use the same model as used for the MNIST experiment and show samples from the model in figure 4(b). The Frey faces data set consists of almost 2,000 images of different facial expressions of size 20 × 28 pixels.

5.3. Data Visualisation

Latent variable models are often used for visualisation of high-dimensional data sets. We project the MNIST data set to a 2-dimensional latent space and use this 2D embedding as a visualisation of the data – an embedding for MNIST is shown in figure 3(b). The classes separate into different regions, suggesting that such embeddings can be useful in understanding the structure of high-dimensional data sets.

5.4. Missing Data Imputation

We demonstrate the ability of the model to impute missing data using the street view house numbers (SVHN) data set (SVHNdata), which consists of RGB images of size 32 × 32 pixels, and the Frey faces and MNIST data sets. The performance of the model is shown in figure 5.

We test the imputation ability under two different missingness types (little1987statistical): Missing-at-Random (MAR), where we consider 60% and 80% of the pixels to be missing randomly, and Not Missing-at-Random (NMAR), where we consider a square region of the image to be missing. The model produces very good completions in both test cases. There is uncertainty in the identity of the image and this is reflected in the errors in these completions as the resampling procedure is run (see transitions from digit 9 to 7, and digit 8 to 6 in figure 5). This further demonstrates the ability of the model to capture the diversity of the underlying data. We do not integrate over the missing values, but instead use a procedure that simulates a Markov chain which we show converges to the true marginal distribution of missing given observed pixels. The imputation procedure is discussed in appendix F.


Figure 5: Imputation results: Row 1, SVHN. Row 2, Frey faces. Rows 3–5, MNIST. Col. 1 shows the true data. Col. 2 shows pixel locations set as missing in grey. The remaining columns show imputations for 15 iterations.
6. Related Work

Directed Graphical Models. DLGMs form a unified family of models that includes factor analysis (book:bartholomew), non-linear factor analysis (lappalainen2000bayesian), and non-linear Gaussian belief networks (frey1999a). Other related models include sigmoid belief networks (saul1996mean) and deep auto-regressive networks (gregor2013), which use auto-regressive Bernoulli distributions at each layer instead of Gaussian distributions. The Gaussian process latent variable model and deep Gaussian processes (lawrence2005probabilistic; damianou2013) form the non-parametric analogue of our model and employ Gaussian process priors over the non-linear functions between each layer. The neural auto-regressive density estimator (NADE) (larochelle2011; uria2013) uses function approximation to model conditional distributions within a directed acyclic graph. NADE is amongst the most competitive generative models currently available, but has several limitations, such as the inability to allow for deep representations and difficulties in extending to locally-connected models (e.g., through the use of convolutional layers), preventing it from scaling easily to high-dimensional data.

Alternative latent Gaussian inference. Few of the alternative approaches for inferring latent Gaussian distributions meet the desiderata for scalable inference we seek. The Laplace approximation has been shown to be a poor approximation in general, in addition to being computationally expensive. INLA is restricted to models with few hyperparameters, whereas our interest is in models with hundreds to thousands of parameters. EP cannot be applied to latent variable models due to the inability to match moments of the joint distribution of latent variables and model parameters. Furthermore, no reliable methods exist for moment-matching with means and covariances formed by non-linear transformations – linearisation and importance sampling are two candidates, but are either inaccurate or very slow. Thus, the variational approach we present remains a general-purpose and competitive approach for inference.

Monte Carlo variance reduction. Control variate methods are amongst the most general and effective techniques for variance reduction when Monte Carlo methods are used (wilson1984variance). One popular approach is the REINFORCE algorithm (Williams1992), since it is simple to implement and applicable to both discrete and continuous models, though control variate methods are becoming increasingly popular for variational inference problems (hoffman2012stochastic; blei2012variational; ranganath2013; salimans2014). Unfortunately, such estimators have the undesirable property that their variance scales linearly with the number of independent random variables in the target function, while the variance of GBP is bounded by a constant: for $K$-dimensional latent variables the variance of REINFORCE scales as $O(K)$, whereas that of GBP scales as $O(1)$ (see appendix D).

An important family of alternative estimators is based on quadrature and series expansion methods (HonkelaV04; lappalainen2000bayesian). These methods have low-variance at the price of introducing biases in the estimation. More recently a combination of the series expansion and control variate approaches has been proposed by blei2012variational.

A very general alternative is the wake-sleep algorithm (dayan1995). The wake-sleep algorithm can perform well, but it fails to optimise a single consistent objective function and there is thus no guarantee that optimising it leads to a decrease in the free energy (11).

Relation to denoising auto-encoders. Denoising auto-encoders (DAEs) (vincent2010) introduce a random corruption into the encoder network and attempt to minimise the expected reconstruction error under this corruption noise with additional regularisation terms. In our variational approach, the recognition distribution $q(\xi|v)$ can be interpreted as a stochastic encoder in the DAE setting. There is then a direct correspondence between the expression for the free energy (11) and the reconstruction error and regularisation terms used in denoising auto-encoders (c.f. equation (4) of bengio2013). Thus, we can see denoising auto-encoders as a realisation of variational inference in latent variable models.

The key difference is that the form of encoding ‘corruption’ and regularisation terms used in our model have been derived directly using the variational principle to provide a strict bound on the marginal likelihood of a known directed graphical model that allows for easy generation of samples. DAEs can also be used as generative models by simulating from a Markov chain (bengio2013; bengio2013deep). But the behaviour of these Markov chains will be very problem specific, and we lack consistent tools to evaluate their convergence.

7. Discussion

We have introduced a general-purpose inference method for models with continuous latent variables. Our approach introduces a recognition model, which can be seen as a stochastic encoding of the data, to allow for efficient and tractable inference. We derived a lower bound on the marginal likelihood for the generative model and specified the structure and regularisation of the recognition model by exploiting recent advances in deep learning. By developing modified rules for backpropagation through stochastic layers, we derived an efficient inference algorithm that allows for joint optimisation of all parameters. We showed on several real-world data sets that the model generates realistic samples, provides accurate imputations of missing data and can be a useful tool for high-dimensional data visualisation.

Appendices can be found with the online version of the paper. http://arxiv.org/abs/1401.4082

Acknowledgements. We are grateful for feedback from the reviewers as well as Peter Dayan, Antti Honkela, Neil Lawrence and Yoshua Bengio.


 

Appendices:
Stochastic Backpropagation and Approximate Inference
in Deep Generative Models

 

Danilo J. Rezende, Shakir Mohamed, Daan Wierstra

{danilor, shakir, daanw}@google.com

Google DeepMind, London


A. Additional Model Details

In equation (6) we showed an alternative form of the joint log likelihood that explicitly separates the deterministic and stochastic parts of the generative model and corroborates the view that the generative model works by applying a complex non-linear transformation to a spherical Gaussian distribution such that the transformed distribution best matches the empirical distribution. We provide more details on this view here for clarity.

From the model description in equations (3) and (4), we can interpret the variables $h_l$ as deterministic functions of the noise variables $\xi_l$. This can be formally introduced as a coordinate transformation of the probability density in equation (5): we perform a change of coordinates $h_l \to \xi_l$. The density of the transformed variables can be expressed in terms of the density (5) times the determinant of the Jacobian of the transformation, $p(\xi) = p(h(\xi))\, |\det(\partial h / \partial \xi)|$. Since the co-ordinate transformation is linear we have $|\det(\partial h / \partial \xi)| = \prod_l |\det G_l|$ and the distribution of $\xi$ is obtained as follows:

$p(\xi) = p(h(\xi)) \prod_l |\det G_l| = \prod_{l=1}^{L} \mathcal{N}(\xi_l \,|\, 0, I)$ (22)

Combining this equation with the distribution of the visible layer we obtain equation (6).

A.1. Examples of Generative and Recognition Models

Below we provide simple, explicit examples of generative and recognition models.

In the case of a two-layer model the activation $h_1$ in equation (6) can be explicitly written as

$h_1 = T_1(G_2 \xi_2) + G_1 \xi_1$ (23)

Similarly, a simple recognition model consists of a single deterministic layer and a stochastic Gaussian layer with the rank-one covariance structure, and is constructed as:

$z = f(W_v v + b_v)$ (24)
$\mu = W_\mu z + b_\mu$ (25)
$d = \exp(W_d z + b_d)$ (26)
$u = W_u z + b_u$ (27)

where the function $f$ is a rectified linearity (but other non-linearities such as tanh can be used).
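A direct numpy transcription of equations (24)–(27); the weight and bias names are illustrative:

```python
import numpy as np

def recognition_layer(v, params):
    """One-layer recognition pass, cf. eqs (24)-(27); the weight and bias
    names are illustrative."""
    Wv, bv, Wmu, bmu, Wd, bd, Wu, bu = params
    z = np.maximum(0.0, Wv @ v + bv)   # eq (24): rectified-linear hidden layer
    mu = Wmu @ z + bmu                 # eq (25): mean
    d = np.exp(Wd @ z + bd)            # eq (26): positive diagonal entries
    u = Wu @ z + bu                    # eq (27): rank-one direction
    return mu, d, u
```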

B. Proofs of the Gaussian Gradient Identities

Here we review the derivations of Bonnet's and Price's theorems that were presented in section 3.1.

Theorem B.1 (Bonnet's theorem).

Let $f(\xi)$ be an integrable and twice differentiable function. The gradient of the expectation of $f(\xi)$ under a Gaussian distribution $\mathcal{N}(\xi \,|\, \mu, C)$ with respect to the mean $\mu$ can be expressed as the expectation of the gradient of $f(\xi)$:

$\nabla_{\mu_i} \mathbb{E}_{\mathcal{N}(\mu, C)}[f(\xi)] = \mathbb{E}_{\mathcal{N}(\mu, C)}[\nabla_{\xi_i} f(\xi)].$

Proof.

$\nabla_{\mu_i} \mathbb{E}_{\mathcal{N}(\mu,C)}[f(\xi)] = \int \nabla_{\mu_i} \mathcal{N}(\xi|\mu, C)\, f(\xi)\, d\xi = -\int \nabla_{\xi_i} \mathcal{N}(\xi|\mu, C)\, f(\xi)\, d\xi = \int \mathcal{N}(\xi|\mu, C)\, \nabla_{\xi_i} f(\xi)\, d\xi = \mathbb{E}_{\mathcal{N}(\mu,C)}[\nabla_{\xi_i} f(\xi)]$ (28)

where we have used the identity

$\nabla_{\mu_i} \mathcal{N}(\xi|\mu, C) = -\nabla_{\xi_i} \mathcal{N}(\xi|\mu, C)$

in moving from step 1 to 2. From step 2 to 3 we have used the product rule for integrals, with the first (boundary) term evaluating to zero. ∎

Theorem B.2 (Price's theorem).

Under the same conditions as before, the gradient of the expectation of $f(\xi)$ under a Gaussian distribution $\mathcal{N}(\xi \,|\, \mu, C)$ with respect to the covariance $C$ can be expressed in terms of the expectation of the Hessian of $f(\xi)$ as

$\nabla_{C_{ij}} \mathbb{E}_{\mathcal{N}(\mu, C)}[f(\xi)] = \tfrac{1}{2}\, \mathbb{E}_{\mathcal{N}(\mu, C)}\!\left[ \nabla^2_{\xi_i, \xi_j} f(\xi) \right].$

Proof.

$\nabla_{C_{ij}} \mathbb{E}_{\mathcal{N}(\mu,C)}[f(\xi)] = \int \nabla_{C_{ij}} \mathcal{N}(\xi|\mu, C)\, f(\xi)\, d\xi = \tfrac{1}{2} \int \nabla^2_{\xi_i, \xi_j} \mathcal{N}(\xi|\mu, C)\, f(\xi)\, d\xi = \tfrac{1}{2} \int \mathcal{N}(\xi|\mu, C)\, \nabla^2_{\xi_i, \xi_j} f(\xi)\, d\xi$ (29)

In moving from steps 1 to 2, we have used the identity

$\nabla_{C_{ij}} \mathcal{N}(\xi|\mu, C) = \tfrac{1}{2} \nabla^2_{\xi_i, \xi_j} \mathcal{N}(\xi|\mu, C)$

which can be verified by taking the derivatives on both sides and comparing the resulting expressions. From step 2 to 3 we have used the product rule for integrals twice. ∎
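As with Bonnet's theorem, Price's theorem can be checked numerically. A sketch using the test function $f(\xi) = \sum_i \sin(\xi_i)$ (an illustrative choice for which both sides of equation (8) have a closed form on the diagonal of $C$):

```python
import numpy as np

rng = np.random.default_rng(5)
K, S = 3, 500_000
mu = rng.standard_normal(K)
A = rng.standard_normal((K, K))
C = A @ A.T + K * np.eye(K)
xi = mu + rng.standard_normal((S, K)) @ np.linalg.cholesky(C).T

# For f(xi) = sum_i sin(xi_i): E[f] = sum_i sin(mu_i) exp(-C_ii/2), so
# dE[f]/dC_ii = -0.5 sin(mu_i) exp(-C_ii/2); the Hessian of f is diagonal
# with entries -sin(xi_i), so eq (8) predicts exactly this value.
mc = 0.5 * (-np.sin(xi)).mean(axis=0)             # RHS of eq (8), i = j
exact = -0.5 * np.sin(mu) * np.exp(-np.diag(C) / 2)
print(np.max(np.abs(mc - exact)))                 # small Monte Carlo error
```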

C. Deriving Stochastic Backpropagation Rules

In section 3.2 we described two ways in which to derive stochastic backpropagation rules. We show specific examples and provide some more discussion in this section.

C.1. Using the Product Rule for Integrals

We can derive rules for stochastic backpropagation for many distributions by finding an appropriate non-linear function $B(\xi; \theta)$ that allows us to express the gradient with respect to the parameters of the distribution as a gradient with respect to the random variable directly. The approach we described in the main text was:

$\nabla_\theta \mathbb{E}_{p(\xi|\theta)}[f(\xi)] = \int \nabla_\theta p(\xi|\theta)\, f(\xi)\, d\xi = -\int \nabla_\xi \left[ B(\xi;\theta)\, p(\xi|\theta) \right] f(\xi)\, d\xi = -\left[ B(\xi;\theta)\, p(\xi|\theta)\, f(\xi) \right]_{\mathrm{supp}(\xi)} + \mathbb{E}_{p(\xi|\theta)}\left[ B(\xi;\theta)\, \nabla_\xi f(\xi) \right]$ (30)

where we have introduced the non-linear function $B(\xi;\theta)$ to allow for the transformation of the gradients, and have applied the product rule for integrals (rule for integration by parts) to rewrite the integral in two parts in the second line; the subscript supp indicates that the term is evaluated at the boundaries of the support. To use this approach, we require that the density we are analysing be zero at the boundaries of the support, to ensure that the first term in the second line is zero.

As an alternative, we can also write this differently and find a non-linear function of the form:

(31)

Consider general exponential family distributions of the form:

$p(\xi|\theta) = h(\xi) \exp\left( \eta(\theta)^\top \phi(\xi) - A(\theta) \right)$ (32)

where $h(\xi)$ is the base measure, $\theta$ is the set of mean parameters of the distribution, $\eta(\theta)$ is the set of natural parameters with sufficient statistics $\phi(\xi)$, and $A(\theta)$ is the log-partition function. We can express the non-linear function in (30) using these quantities as:

(33)

This can be derived for a number of distributions such as the Gaussian, inverse Gamma, Log-Normal, Wald (inverse Gaussian) and other distributions. We show some of these below:

Family        B(ξ; θ)
Gaussian      —
Inv. Gamma    —
Log-Normal    —
The corresponding function for the second formulation (31) can also be derived and may be useful in certain situations, requiring the solution of a first-order differential equation. This approach of searching for non-linear transformations leads us to the second approach for deriving stochastic backpropagation rules.

C.2. Using Suitable Co-ordinate Transformations

There are many distributions outside the exponential family that we would like to consider using. A simpler approach is to search for a co-ordinate transformation that allows us to separate the deterministic and stochastic parts of the distribution. We described the case of the Gaussian in section 3.2. Other distributions also have this property. As an example, consider the Levy distribution (which is a special case of the inverse Gamma considered above). Due to the self-similarity property of this distribution, if we draw $\xi_0$ from a Levy distribution with known parameters, $\xi_0 \sim \mathrm{Levy}(\mu, c)$, we can obtain any other Levy distribution by rescaling and shifting this base distribution: $k\xi_0 + b \sim \mathrm{Levy}(k\mu + b, kc)$.

Many other distributions hold this property, allowing stochastic backpropagation rules to be determined for distributions such as the Student's t-distribution, Logistic distribution, the class of stable distributions and the class of generalised extreme value distributions (GEV). Examples of co-ordinate transformations $T(\epsilon)$ and the resulting distributions are shown below for variates $\epsilon$ drawn from the standard distribution listed in the first column.

Std Distr.       Transformation T(ε)        Gen. Distr.
GEV(0, 1, ξ)     μ + σε                      GEV(μ, σ, ξ)
Exp(1)           μ + β log(e^ε − 1)          Logistic(μ, β)
Exp(1)           λ ε^{1/k}                   Weibull(λ, k)
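A short numpy sketch of the two exponential-based transformations above; the sanity checks compare empirical means against the known Weibull and Logistic means:

```python
import math
import numpy as np

rng = np.random.default_rng(6)
eps = rng.exponential(1.0, 1_000_000)        # standard exponential variates

lam, k = 2.0, 1.5
weibull = lam * eps ** (1.0 / k)             # T(eps) = lam eps^{1/k}

mu, beta = 0.0, 1.0
logistic = mu + beta * np.log(np.expm1(eps)) # T(eps) = mu + beta log(e^eps - 1)

# Sanity checks: empirical means vs. the known Weibull and Logistic means.
print(weibull.mean(), lam * math.gamma(1.0 + 1.0 / k))
print(logistic.mean(), mu)
```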
D. Variance Reduction using Control Variates

An alternative approach for stochastic gradient computation is commonly based on the method of control variates. We analyse the variance properties of various estimators in a simple example using a univariate function. We then show the correspondence of the widely-known REINFORCE algorithm to the general control variate framework.

D.1. The REINFORCE Estimator

The REINFORCE estimator is based on

$\nabla_\theta \mathbb{E}_{p(\xi|\theta)}[f(\xi)] = \mathbb{E}_{p(\xi|\theta)}\left[ (f(\xi) - b)\, \nabla_\theta \log p(\xi|\theta) \right]$ (34)

where $b$ is a baseline typically chosen to reduce the variance of the estimator.

The variance of (34) scales poorly with the number of random variables (dayan1995). To see this limitation, consider functions of the form $f(\xi) = \sum_{i=1}^{K} f_i(\xi_i)$, where each individual term and its gradient has a bounded variance, i.e., $\kappa_l \leq \mathrm{Var}[f_i(\xi_i)] \leq \kappa_u$ and $\kappa_l \leq \mathrm{Var}[\nabla_{\xi_i} f_i(\xi_i)] \leq \kappa_u$ for some $0 \leq \kappa_l \leq \kappa_u$, and assume independent or weakly correlated random variables. Given these assumptions, the variance of GBP (7) scales as $O(1)$, while the variance of REINFORCE (34) scales as $O(K)$.

For the variance of GBP above, all terms in $f$ that do not depend on $\xi_i$ have zero gradient, whereas for REINFORCE the variance involves a summation over all $K$ terms. Even if most of these terms have zero expectation, they still contribute to the variance of the estimator. Thus, the REINFORCE estimator has the undesirable property that its variance scales linearly with the number of independent random variables in the target function, while the variance of GBP is bounded by a constant.

The assumption of weakly correlated terms is relevant for variational learning in larger generative models where independence assumptions and structure in the variational distribution result in free energies that are summations over weakly correlated or independent terms.
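This scaling difference is easy to reproduce empirically. The sketch below compares single-sample estimators of one coordinate of $\nabla_\mu \mathbb{E}[f]$ at $\mu = 0$, $C = I$, for the illustrative choice $f(\xi) = \sum_i \sin(\xi_i)$ (for which $\mathbb{E}[f] = 0$, so the REINFORCE baseline $b = 0$ is reasonable):

```python
import numpy as np

rng = np.random.default_rng(7)
S = 200_000

def variances(K):
    """Single-sample estimators of one coordinate of grad_mu E[f] at
    mu = 0, C = I, for f(xi) = sum_i sin(xi_i)."""
    xi = rng.standard_normal((S, K))
    gbp = np.cos(xi[:, 0])                        # GBP / eq (7) estimator
    score = xi[:, 0]                              # grad_{mu_1} log N(xi | 0, I)
    reinforce = np.sin(xi).sum(axis=1) * score    # eq (34) with baseline b = 0
    return gbp.var(), reinforce.var()

for K in (1, 10, 100):
    print(K, variances(K))   # GBP variance is flat; REINFORCE grows linearly in K
```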

D.2. Univariate Variance Analysis

In analysing the variance properties of many estimators, we discussed the general scaling of likelihood-ratio approaches in appendix D.1. As an example to further emphasise the high-variance nature of these alternative approaches, we present a short analysis in the univariate case.

Consider a Gaussian random variable $\xi \sim \mathcal{N}(\mu, \sigma^2)$ and a simple quadratic function of the form

(35)

For this function we immediately obtain the following variances

(36)
(37)
(38)
(39)

Equations (36), (37) and (38) correspond to the variances of the estimators based on (7), (8) and (10) respectively, whereas equation (39) corresponds to the variance of the REINFORCE algorithm for the gradient with respect to the mean $\mu$.

From these relations we see that, for any parameter configuration, the variance of the REINFORCE estimator is strictly larger than the variance of the estimator based on (7). Additionally, the ratio between the variances of the former and latter estimators is bounded from below. We can also see that the variance of the estimator based on equation (8) is zero for this specific function, whereas the variance of the estimator based on equation (10) is not.

E. Estimating the Marginal Likelihood

We compute the marginal likelihood by importance sampling, generating $S$ samples from the recognition model and using the following estimator:

$p(v) \approx \frac{1}{S} \sum_{s=1}^{S} \frac{p(v \,|\, h(\xi^{(s)}))\, p(\xi^{(s)})}{q(\xi^{(s)} \,|\, v)}, \qquad \xi^{(s)} \sim q(\xi \,|\, v)$ (40)
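In practice the estimator in equation (40) is best computed in log space for numerical stability. A sketch, where log_joint, sample_q and log_q are caller-supplied callables standing in for a trained model (an illustrative interface):

```python
import numpy as np

def log_marginal_likelihood(v, log_joint, sample_q, log_q, S=1000):
    """Log of the estimator in eq (40), computed stably in log space.
    log_joint(v, xi) = log p(v, xi); sample_q and log_q give draws from and
    densities of q(xi | v). All three callables are illustrative stand-ins
    for a trained model."""
    log_w = np.empty(S)
    for s in range(S):
        xi = sample_q(v)                          # xi^(s) ~ q(xi | v)
        log_w[s] = log_joint(v, xi) - log_q(xi, v)
    m = log_w.max()                               # log-sum-exp for stability
    return m + np.log(np.exp(log_w - m).mean())
```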
F. Missing Data Imputation

Image completion can be achieved approximately by a simple iterative procedure that consists of (i) initialising the non-observed pixels with random values; (ii) sampling from the recognition distribution given the resulting image; (iii) reconstructing the image given the sample from the recognition model; (iv) iterating the procedure.
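A sketch of this procedure, where sample_q and sample_p are illustrative stand-ins for the trained recognition and generative models:

```python
import numpy as np

def impute(v, missing_mask, sample_q, sample_p, rng, iters=15):
    """Iterative imputation following steps (i)-(iv) above. sample_q(v)
    draws xi ~ q(xi|v) and sample_p(xi) draws v' ~ p(v|xi); both are
    illustrative stand-ins for the trained models."""
    v = v.copy()
    v[missing_mask] = rng.random(missing_mask.sum())  # (i) random initialisation
    for _ in range(iters):
        xi = sample_q(v)                              # (ii) bottom-up sample
        v_new = sample_p(xi)                          # (iii) reconstruction
        v[missing_mask] = v_new[missing_mask]         # observed pixels stay fixed
    return v
```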

We denote the observed and missing entries in an observation as $v_o$ and $v_m$, respectively. The observed $v_o$ is fixed throughout, therefore all the computations in this section will be conditioned on $v_o$. The imputation procedure can be written formally as a Markov chain on the space of missing entries $v_m$ with transition kernel $\mathcal{K}^q(v_m' \,|\, v_m)$ given by

$\mathcal{K}^q(v_m' \,|\, v_m) = \iint p(v_m', v_o' \,|\, \xi)\, q(\xi \,|\, v)\, dv_o'\, d\xi$ (41)

where $v = (v_m, v_o)$.

Provided that the recognition model $q(\xi \,|\, v)$ constitutes a good approximation of the true posterior $p(\xi \,|\, v)$, (41) can be seen as an approximation of the kernel

$\mathcal{K}(v_m' \,|\, v_m) = \iint p(v_m', v_o' \,|\, \xi)\, p(\xi \,|\, v)\, dv_o'\, d\xi$ (42)

The kernel (42) has two important properties: (i) it has the marginal $p(v_m \,|\, v_o)$ as its eigen-distribution; (ii) $\mathcal{K}(v_m' \,|\, v_m) > 0$ for all $v_m, v_m'$. Property (i) can be derived by applying the kernel (42) to the marginal and noting that it is a fixed point. Property (ii) is an immediate consequence of the smoothness of the model.

We apply the fundamental theorem for Markov chains (neal93, pp. 38) and conclude that, given the above properties, a Markov chain generated by (42) is guaranteed to generate samples from the correct marginal $p(v_m \,|\, v_o)$.

In practice, the stationary distribution of the completed pixels will not be exactly the marginal $p(v_m \,|\, v_o)$, since we use the approximate kernel (41). Even in this setting we can provide a bound on the $L^1$ norm of the difference between the resulting stationary marginal and the target marginal $p(v_m \,|\, v_o)$.

Proposition F.1 ($L^1$ bound on marginal error).

If the recognition model $q(\xi \,|\, v)$ is such that for all $v_m$

$\int \left| q(\xi \,|\, v_m, v_o) - p(\xi \,|\, v_m, v_o) \right| d\xi \leq \varepsilon$ (43)

then the marginal $p(v_m \,|\, v_o)$ is a weak fixed point of the kernel (41) in the following sense:

$\int \left| \int \left[ \mathcal{K}^q(v_m' \,|\, v_m) - \mathcal{K}(v_m' \,|\, v_m) \right] p(v_m \,|\, v_o)\, dv_m \right| dv_m' \leq \varepsilon$ (44)
Proof.