Do Deep Generative Models Know What They Don't Know?
Abstract
A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. A plethora of work has demonstrated that it is easy to find or synthesize inputs for which a neural network is highly confident yet wrong. Generative models are widely viewed to be robust to such mistaken confidence, as modeling the density of the input features can be used to detect novel, out-of-distribution inputs. In this paper we challenge this assumption. We find that the model density from flow-based models, VAEs, and PixelCNNs cannot distinguish images of common objects such as dogs, trucks, and horses (i.e. CIFAR-10) from those of house numbers (i.e. SVHN), assigning a higher likelihood to the latter when the model is trained on the former. We focus our analysis on flow-based generative models in particular since they are trained and evaluated via the exact marginal likelihood. We find such behavior persists even when we restrict the flow models to constant-volume transformations. These transformations admit some theoretical analysis, and we show that the difference in likelihoods can be explained by the location and variances of the data and the model curvature, which suggests that such behavior is more general and not just restricted to the pairs of data sets used in our experiments. Our results caution against using the density estimates from deep generative models to identify inputs similar to the training distribution until their behavior on out-of-distribution inputs is better understood.
Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, Balaji Lakshminarayanan
DeepMind
Corresponding authors: e.nalisnick@eng.cam.ac.uk and balajiln@google.com. Work done during an internship at DeepMind.
1 Introduction
Deep learning has achieved impressive success in applications for which the goal is to model a conditional distribution $p(y|\mathbf{x})$, with $y$ being a label and $\mathbf{x}$ the features. Yet there are no guarantees that the model will work well on $\mathbf{x}$'s drawn from some other distribution. For example, Louizos & Welling (2017) show that simply rotating an MNIST digit can make a neural network predict another class with high confidence (see their Figure 1a). Ostensibly, one way to avoid such overconfidently wrong predictions would be to train a density model $p(\mathbf{x}; \theta)$ (with $\theta$ denoting the parameters) to approximate the true distribution of training inputs $p^{*}(\mathbf{x})$ and refuse to make a prediction for any $\mathbf{x}$ that has a sufficiently low density under $p(\mathbf{x}; \theta)$. The intuition is that the discriminative model likely did not observe enough samples in that region to make a reliable decision for those inputs. This idea has been proposed by various papers, cf. (Bishop, 1994), and as recently as in the panel discussion at Advances in Approximate Bayesian Inference (AABI) 2017 (Blei et al., 2017).
Anomaly detection is just one motivating example for which we require accurate densities; others include information regularization (Szummer & Jaakkola, 2003), open set recognition (Herbei & Wegkamp, 2006), uncertainty estimation, detecting covariate shift, active learning, model-based reinforcement learning, and transfer learning. Accordingly, these applications have led to widespread interest in deep generative models, which take many forms such as variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014), generative adversarial networks (GANs) (Goodfellow et al., 2014), autoregressive models (van den Oord et al., 2016b, a), and invertible latent variable models (Tabak & Turner, 2013). The last two classes—autoregressive and invertible models—are especially attractive since they offer exact computation of the marginal likelihood, requiring no approximate inference techniques.
In this paper, we investigate whether modern deep generative models can be used for anomaly detection, as suggested by Bishop (1994) and the AABI panel (Blei et al., 2017), expecting a well-calibrated model to assign higher density to the training data than to some other data set. However, we find this not to be the case: when trained on CIFAR-10 (Krizhevsky & Hinton, 2009), VAEs, autoregressive models, and flow-based generative models all assign a higher density to SVHN (Netzer et al., 2011) than to the training data. We find this observation to be quite problematic and unintuitive since SVHN's digit images are so visually distinct from the dogs, horses, trucks, boats, etc. found in CIFAR-10.
We go on to study this CIFAR-10 vs SVHN phenomenon in flow-based models in particular since they allow for exact marginal density calculations. Initial experiments suggested the log-determinant-of-Jacobian term may contribute to SVHN's high density, but we report that the phenomenon also holds for constant-volume flows. We then describe a series of analyses showing that this phenomenon can be explained in terms of the variances of the input distributions and the model curvature. To the best of our knowledge, we are the first to report these unintuitive findings for a variety of deep generative models. Moreover, our experiments with flow-based models isolate some crucial experimental variables such as the effect of constant-volume vs non-volume-preserving transformations. Lastly, our analysis provides some simple but general expressions for quantifying the gap in the model density between two data sets. We close the paper by urging more study of the out-of-training-distribution properties of deep generative models, as understanding their behavior in this setting is crucial for their deployment to the real world.
2 Background
We begin by establishing notation and reviewing the necessary background material. We denote matrices with uppercase and bold letters (e.g. $\mathbf{W}$), vectors with lowercase and bold (e.g. $\mathbf{x}$), and scalars with lowercase and no bolding (e.g. $x$). As our focus is on generative models, let the collection of all observations be denoted by $\mathbf{X} = \{\mathbf{x}_n\}_{n=1}^{N}$ with $\mathbf{x}_n$ representing a vector containing all features and, if present, labels. All examples are assumed independently and identically drawn from some population $p^{*}(\mathbf{x})$ (which is unknown) with support denoted $\mathcal{X}$. We define the model density function to be $p(\mathbf{x}; \theta)$ where $\theta$ are the model parameters, and let the model likelihood be denoted $p(\mathbf{X}; \theta) = \prod_{n=1}^{N} p(\mathbf{x}_n; \theta)$.
2.1 Training Neural Generative Models
Given observed (training) data $\mathbf{X}$ and a model class $\{p(\mathbf{x}; \theta): \theta \in \Theta\}$, we are interested in finding the parameters $\theta$ that make the model closest to the true but unknown data distribution $p^{*}(\mathbf{x})$. We can quantify this gap in terms of a Kullback–Leibler divergence (KLD):
$$\text{KLD}[\,p^{*}(\mathbf{x}) \,\|\, p(\mathbf{x}; \theta)\,] = \int p^{*}(\mathbf{x}) \log \frac{p^{*}(\mathbf{x})}{p(\mathbf{x}; \theta)} \, d\mathbf{x} = -\mathbb{E}_{p^{*}}[\log p(\mathbf{x}; \theta)] - \mathbb{H}[p^{*}] \qquad (1)$$
where the first term in the rightmost expression is the average log-likelihood and the second is the entropy of the true distribution. As the latter is a fixed constant, minimizing the KLD amounts to finding the parameter settings that maximize the data's log density: $\theta^{*} = \arg\max_{\theta} \frac{1}{N}\sum_{n=1}^{N} \log p(\mathbf{x}_n; \theta)$. Note that $p(\mathbf{x}; \theta)$ alone does not have any interpretation as a probability. To extract probabilities from the model density, we need to integrate over some region $\mathcal{R}$: $P(\mathcal{R}) = \int_{\mathcal{R}} p(\mathbf{x}; \theta) \, d\mathbf{x}$. Adding noise to the data during model optimization can mimic this integration step, encouraging the density model to output something nearer to probabilities (Theis et al., 2016):
$$\theta^{*} = \arg\max_{\theta} \frac{1}{N}\sum_{n=1}^{N} \log p(\mathbf{x}_n + \boldsymbol{\epsilon}_n; \theta),$$
where $\boldsymbol{\epsilon}_n$ is a sample from the noise distribution. The resulting objective is a lower bound, making it a suitable optimization target. All models in all the experiments we report are trained with input noise. Due to this ambiguity between densities and probabilities, henceforth we call the quantity $\log p(\mathbf{x}; \theta)$ a 'log-likelihood,' even if $\mathbf{x}$ is drawn from a distribution unlike the training data.
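To make the objective concrete, the following is a minimal NumPy sketch of the noisy (dequantized) maximum-likelihood objective. The factorized-Gaussian density and the uniform noise over a 1/256 bin width are illustrative stand-ins for the deep density models and discretization used in our experiments.

```python
import numpy as np

def log_gaussian_density(x, mu, log_sigma):
    """Factorized Gaussian log density, a stand-in for a deep density model."""
    return np.sum(-0.5 * np.log(2 * np.pi) - log_sigma
                  - 0.5 * ((x - mu) / np.exp(log_sigma)) ** 2, axis=-1)

def noisy_mle_objective(x_batch, mu, log_sigma, bin_width=1.0 / 256, rng=None):
    """Average log-likelihood of dequantized inputs: (1/N) sum_n log p(x_n + eps_n).

    Adding noise drawn uniformly over the discretization bin makes the objective a
    lower bound on the log-probability mass assigned to each bin.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.uniform(0.0, bin_width, size=x_batch.shape)  # eps_n ~ U[0, bin width)
    return np.mean(log_gaussian_density(x_batch + eps, mu, log_sigma))

# Toy usage: 8 "images" with 4 features each.
x = np.random.default_rng(0).uniform(0, 1, size=(8, 4))
print(noisy_mle_objective(x, mu=np.zeros(4), log_sigma=np.zeros(4)))
```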
Regarding the choice of density model, we could choose one of the standard density functions for $p(\mathbf{x}; \theta)$, e.g. a Gaussian, but these may not be suitable for modeling the complex, high-dimensional data sets we often observe in the real world. Hence, we want to parametrize the model density with some high-capacity function, which is usually chosen to be a neural network. That way, the model has a somewhat compact representation and can be optimized via gradient ascent. We experiment with three variants of neural generative models: autoregressive, latent variable, and invertible. In the first class, we study the PixelCNN (van den Oord et al., 2016b); due to space constraints, we refer the reader to van den Oord et al. (2016b) for its definition. As a representative of the second class, we use a VAE (Kingma & Welling, 2014; Rezende et al., 2014); see Rosca et al. (2018) for descriptions of the precise versions we use. Lastly, we have invertible flow-based generative models as the third class. We define them in detail below since we study them with the most depth.
2.2 Generative Models via Change of Variables
The VAE and many other generative models are defined as a joint distribution between the observed and latent variables. However, another path forward is to perform a change of variables. In this case $\mathbf{x}$ and $\mathbf{z}$ are one and the same, and there is no longer any notion of a product space $\mathcal{X} \times \mathcal{Z}$. Let $f: \mathcal{X} \to \mathcal{Z}$ be a diffeomorphism from the data space $\mathcal{X}$ to a latent space $\mathcal{Z}$. Using $f$ then allows us to compute integrals over $\mathbf{z}$ as an integral over $\mathbf{x}$ and vice versa:
$$\int p(\mathbf{z}) \, d\mathbf{z} = \int p(f(\mathbf{x})) \left| \frac{\partial f}{\partial \mathbf{x}} \right| d\mathbf{x}, \qquad \int p(\mathbf{x}) \, d\mathbf{x} = \int p(f^{-1}(\mathbf{z})) \left| \frac{\partial f^{-1}}{\partial \mathbf{z}} \right| d\mathbf{z} \qquad (2)$$
where $\left| \partial f / \partial \mathbf{x} \right|$ and $\left| \partial f^{-1} / \partial \mathbf{z} \right|$ are known as the volume elements as they adjust for the volume change under the alternate measure. Specifically, when the change is w.r.t. coordinates, the volume element is the (absolute) determinant of the diffeomorphism's Jacobian matrix, which we denote as $\left| \frac{\partial f}{\partial \mathbf{x}} \right|$.
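As a sanity check of Equation 2, the following one-dimensional example uses an illustrative diffeomorphism $f(x) = x^{3} + x$ (not one of the flows studied in this paper) together with a standard Gaussian $p(z)$, and verifies numerically that the induced density integrates to one over the data space.

```python
import numpy as np

# Diffeomorphism f: X -> Z and its derivative (illustrative choice).
f = lambda x: x ** 3 + x          # strictly increasing, hence invertible on R
df_dx = lambda x: 3 * x ** 2 + 1  # Jacobian (here a scalar), always positive

# Auxiliary (latent) distribution p(z): standard Gaussian.
log_p_z = lambda z: -0.5 * np.log(2 * np.pi) - 0.5 * z ** 2

# Model density induced on x by the change of variables:
#   log p(x) = log p(f(x)) + log |df/dx|
x = np.linspace(-4, 4, 20001)
log_p_x = log_p_z(f(x)) + np.log(df_dx(x))

# The induced density should integrate to ~1 over the data space.
print(np.trapz(np.exp(log_p_x), x))  # ~1.0
```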
The change of variables formula is a powerful tool for generative modeling as it allows us to define a distribution $p(\mathbf{x}; \theta)$ entirely in terms of an auxiliary distribution $p(\mathbf{z})$, which we are free to choose, and $f$. Denote the parameters of the change-of-variables model as $\theta = \{\phi, \psi\}$, with $\phi$ being the diffeomorphism's parameters, i.e. $f_{\phi}$, and $\psi$ being the auxiliary distribution's parameters, i.e. $p(\mathbf{z}; \psi)$. We can perform maximum likelihood estimation for the model as follows:
$$\theta^{*} = \arg\max_{\phi, \psi} \sum_{n=1}^{N} \log p(f_{\phi}(\mathbf{x}_n); \psi) + \log \left| \frac{\partial f_{\phi}}{\partial \mathbf{x}_n} \right| \qquad (3)$$
Yet, optimizing $\psi$ must be done carefully so as to not result in a trivial model. For instance, optimization could make $p(\mathbf{z}; \psi)$ close to uniform if there are no constraints on its variance. For this reason, most implementations leave $p(\mathbf{z})$ fixed (usually a standard Gaussian) in practice. Likewise, we assume it is fixed from here forward, thus omitting $\psi$ from equations to reduce notational clutter. After training, samples can be drawn from the model via the inverse transform: $\hat{\mathbf{z}} \sim p(\mathbf{z}), \ \hat{\mathbf{x}} = f^{-1}_{\phi}(\hat{\mathbf{z}})$.
For the particular form of $f_{\phi}$, most work to date has constructed the bijection from affine coupling layers (ACLs) (Dinh et al., 2017), which transform $\mathbf{x}$ by way of translation and scaling operations. Specifically, ACLs take the form
$$f_{\text{ACL}}(\mathbf{x}) = \left[ \mathbf{x}_{:d/2} \odot \exp\{s_{\phi}(\mathbf{x}_{d/2:})\} + t_{\phi}(\mathbf{x}_{d/2:}), \ \mathbf{x}_{d/2:} \right],$$
where $\odot$ denotes an element-wise product. This transformation, firstly, splits the input vector in half, i.e. $\mathbf{x} = [\mathbf{x}_{:d/2}, \mathbf{x}_{d/2:}]$ (using Python list syntax). Then the second half of the vector is fed into two arbitrary neural networks (possibly with tied parameters) whose outputs are denoted $s_{\phi}$ and $t_{\phi}$, with $\phi$ being the collection of weights and biases. Finally, the output is formed by (1) scaling the first half of the input by one neural network output, i.e. $\mathbf{x}_{:d/2} \odot \exp\{s_{\phi}(\mathbf{x}_{d/2:})\}$, (2) translating the result of the scaling operation by the second neural network output, i.e. $+\, t_{\phi}(\mathbf{x}_{d/2:})$, and (3) copying the second half of $\mathbf{x}$ forward, making it the second half of the output, i.e. $\mathbf{z}_{d/2:} = \mathbf{x}_{d/2:}$. ACLs are stacked to make rich hierarchical transforms, and the latent representation is output from this composition, i.e. $\mathbf{z} = f_{K} \circ \cdots \circ f_{1}(\mathbf{x})$. A permutation operation is required between ACLs to ensure the same elements are not repeatedly used in the copy operations. We use $f$ without subscript to denote the complete transform and overload the use of $\phi$ to denote the parameters of all constituent layers.
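The following NumPy sketch implements a single affine coupling layer and its log-determinant-Jacobian; the scale and translation networks are illustrative single linear maps rather than the deep networks used in Glow or RNVP.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 6  # input dimensionality (assumed even)

# Illustrative stand-ins for the two neural networks s_phi and t_phi:
# a single random linear map each, taking the second half of x as input.
W_s, W_t = rng.normal(size=(D // 2, D // 2)), rng.normal(size=(D // 2, D // 2))
s = lambda h: np.tanh(h @ W_s.T)   # scale network (tanh keeps scales bounded)
t = lambda h: h @ W_t.T            # translation network

def affine_coupling_forward(x):
    """z = [x[:d/2] * exp(s(x[d/2:])) + t(x[d/2:]),  x[d/2:]] and its log |det J|."""
    x_a, x_b = x[:, :D // 2], x[:, D // 2:]
    log_scale = s(x_b)
    z_a = x_a * np.exp(log_scale) + t(x_b)        # scale, then translate, the first half
    z_b = x_b                                     # copy the second half forward
    log_det_jacobian = np.sum(log_scale, axis=1)  # sum of log scales per example
    return np.concatenate([z_a, z_b], axis=1), log_det_jacobian

def affine_coupling_inverse(z):
    """Exact inverse: recover x from z using the copied half."""
    z_a, z_b = z[:, :D // 2], z[:, D // 2:]
    x_a = (z_a - t(z_b)) * np.exp(-s(z_b))
    return np.concatenate([x_a, z_b], axis=1)

x = rng.normal(size=(4, D))
z, log_det = affine_coupling_forward(x)
print(np.allclose(affine_coupling_inverse(z), x))  # True: the layer is invertible
```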
This class of transform is known as non-volume-preserving (NVP) (Dinh et al., 2017) since the volume element does not necessarily evaluate to one and can vary with each input $\mathbf{x}$. Although non-zero, the log-determinant of the Jacobian is still tractable: $\log \left| \frac{\partial f}{\partial \mathbf{x}} \right| = \sum_{d} s_{\phi}(\mathbf{x}_{d/2:})_{d}$, i.e. the sum of the scale outputs. A diffeomorphic transform can also be defined with just translation operations, as was done in earlier work by Dinh et al. (2015), and this transformation is volume-preserving (VP) since the volume term is one and thus has no influence in the likelihood calculation. We will examine another class of flows we term constant-volume (CV) since the volume, while not preserved, is constant across all $\mathbf{x}$. Appendix A provides additional details on implementing flow-based generative models.
3 Motivating Observations
Given the impressive advances of deep generative models, we sought to test their ability to quantify when an input comes from a different distribution than that of the training set. This calibration w.r.t. out-of-distribution data is essential for applications such as safety—if we were using the generative model to filter the inputs to a discriminative model—and for active learning. For the experiment, we trained the same Glow architecture described in Kingma & Dhariwal (2018)—except small enough that it could fit on one GPU—and evaluated the bits per dimension (BPD) of in-distribution and out-of-distribution data sets, first training on NotMNIST and testing on MNIST, and then training on CIFAR-10 and testing on SVHN.


Beginning with NotMNIST vs MNIST, the left subtable of Figure 1 shows the average BPD of each split, with the model being trained only on NotMNIST-Train. We see that the test split has the lowest BPD of the three. While this may seem surprising, this phenomenon is due to the training set being larger and more diverse than the test set. The never-before-seen MNIST-Test split has a considerably higher BPD than the training set. Thus, this experiment agrees with our stated hypothesis. We also include a (normalized) histogram of the log-likelihoods for the three splits in Figure 2 (a). While the NotMNIST splits clearly have more instances toward the right-hand side of the plot (highest likelihood), there is significant overlap, which could give the modeler pause before using the density as a proxy score for detecting inputs similar to the training distribution.
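For reference, BPD is the negative log-likelihood expressed in base 2 and averaged over dimensions (Theis et al., 2016); a minimal conversion helper, assuming the log-likelihood is reported in nats, is given below.

```python
import numpy as np

def bits_per_dimension(log_likelihood_nats, num_dims):
    """Convert a log-likelihood (in nats) to bits per dimension.

    For a 28x28x1 image num_dims = 784; for a 32x32x3 image num_dims = 3072.
    Lower BPD means higher likelihood.
    """
    return -log_likelihood_nats / (num_dims * np.log(2.0))

# Example: a log-likelihood of roughly -7365 nats on a 32x32x3 image is ~3.46 BPD.
print(round(bits_per_dimension(-7365.0, 32 * 32 * 3), 2))
```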
Moving on to CIFAR-10 vs SVHN, the right subtable of Figure 1 again reports the BPD of the training data (CIFAR-10-Train), the in-distribution test data (CIFAR-10-Test), and the out-of-distribution data (SVHN-Test). Here we see a peculiar result: the SVHN BPD is roughly one bit lower than that of both in-distribution splits. Figure 2 (b) shows a similar histogram of the log-likelihoods for the three data sets. Clearly, the SVHN examples (red bars) have higher likelihood across the board, so the result is not caused by a few outliers. We observed this phenomenon when training on CIFAR-10 (NotMNIST) and testing on SVHN (MNIST), but not the other way around, so the behavior is not symmetric; see Figure 7 in Appendix B for these results. We report results only for Glow, but we observe the same behavior for RNVP transforms as defined by Dinh et al. (2017).
We next tested whether the phenomenon occurs for the other common deep generative models: PixelCNN and the VAE. We do not include GANs in the comparison since evaluating their likelihood is an open problem. Figure 3 reports the same histograms as above for these models, showing the distribution of log-likelihood evaluations for CIFAR-10's train (black) and test (blue) splits and SVHN's test (red) split. In all plots the red bars are shifted to the right much as they were before—albeit to varying degrees, with PixelCNN perhaps having the smallest gap—so this inability to detect inputs unlike the training data persists for these other model classes. SVHN images continue to have higher likelihood than CIFAR-10 training images.
4 Digging Deeper into the Flow-Based Model
While we observed the CIFAR-10 vs SVHN phenomenon for the PixelCNN, VAE, and Glow, we now narrow our investigation to just the class of invertible generative models. The rationale is that they allow for better experimental control as, firstly, they can compute exact marginal likelihoods, unlike the VAE, and secondly, the transforms used in flow-based models have Jacobian constraints that simplify the analysis we present in Section 5. To further analyze the high likelihood of the out-of-distribution (non-training) samples, we next report the contributions to the likelihood of each term in the change-of-variables formula. At first, this suggested the volume element was the primary cause of SVHN's high likelihood, but further experiments with constant-volume flows show the problem exists with them as well.
Decomposing the change-of-variables objective. To further examine this curious phenomenon, we inspect the change-of-variables objective itself, investigating if one or both terms give the out-of-distribution data a higher value. We report the constituent $\log p(\mathbf{z})$ and $\log \left| \partial f / \partial \mathbf{x} \right|$ terms for NVP-Glow in Figure 4, showing histograms for $\log p(\mathbf{z})$ in subfigures (a) and (c) and for $\log \left| \partial f / \partial \mathbf{x} \right|$ in subfigures (b) and (d). We see that $\log p(\mathbf{z})$ behaves mostly as expected for both experiments. For MNIST in subfigure (a), the red bars are clearly shifted to the left, representing lower likelihoods under the latent distribution. For SVHN in subfigure (c), we observe a similar situation with the red bars again shifted to the left—although the shift is not as dramatic as it is with MNIST.
Moving on to the volume element, this term seems to cause SVHN's higher likelihood. Subfigure (d) shows that all of the SVHN log-volume evaluations (red) are conspicuously shifted to the right—to higher values—when compared to CIFAR-10's (blue and black). Since SVHN's $\log p(\mathbf{z})$ evaluations are only slightly less than CIFAR-10's, the volume term dominates, resulting in SVHN having a higher likelihood. Comparing these results to the MNIST results in subfigure (b), MNIST's log-volume evaluations all but overlap with NotMNIST's, meaning the lower $\log p(\mathbf{z})$ evaluations are what allow the model to identify MNIST as out-of-distribution.
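The per-example decomposition behind Figure 4 follows directly from the two terms of Equation 3; the sketch below reuses the one-dimensional flow from the earlier change-of-variables example and two hypothetical input distributions purely to illustrate the bookkeeping.

```python
import numpy as np

# Reuse the simple 1D flow from the change-of-variables example.
f = lambda x: x ** 3 + x
log_volume = lambda x: np.log(3 * x ** 2 + 1)                   # log |df/dx|
log_p_z = lambda z: -0.5 * np.log(2 * np.pi) - 0.5 * z ** 2     # latent log density

def decompose_log_likelihood(x):
    """Return (log p(z), log |df/dx|) per example; their sum is log p(x)."""
    return log_p_z(f(x)), log_volume(x)

rng = np.random.default_rng(0)
in_dist = rng.normal(0.0, 0.5, size=1000)   # stand-in "training-like" inputs
out_dist = rng.normal(0.0, 0.1, size=1000)  # stand-in lower-variance inputs

for name, data in [("in-dist", in_dist), ("out-dist", out_dist)]:
    prior_term, volume_term = decompose_log_likelihood(data)
    print(name, prior_term.mean().round(2), volume_term.mean().round(2))
```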
Is the volume the culprit? In addition to the empirical evidence against the volume element, we notice that the change-of-variables objective—by rewarding the maximization of the Jacobian determinant—encourages the model to increase its sensitivity to perturbations in $\mathbf{x}$. This behavior starkly contradicts a long history of derivative-based regularization penalties that reward the model for decreasing its sensitivity to input directions. For instance, Girosi et al. (1995) and Rifai et al. (2011) propose penalizing the Frobenius norm of a neural network's Jacobian for classifiers and autoencoders respectively. See Appendix C for more analysis of the log-volume element.
To experimentally control for the effect of the volume term, we trained Glow with constant-volume (CV) transformations. We modify the affine layers to use only translation operations (Dinh et al., 2015) but keep the $1\times1$ convolutions as is. The log-determinant-of-Jacobian term is then $HW \sum_{k} \log \left| \det \mathbf{U}_{k} \right|$, where $\mathbf{U}_{k}$ is the kernel of the $k$th flow's $1\times1$ convolution and $H$ and $W$ are the spatial dimensions. This makes the volume element constant across all inputs $\mathbf{x}$, allowing us to isolate its effect. Figure 5 shows the results for this model, which we term CV-Glow (constant-volume Glow). Subfigure (a) shows a histogram of the log-likelihood evaluations, just as shown before in Figure 2, and we see that SVHN (red) still achieves a higher likelihood (lower BPD) than the CIFAR-10 training set. Subfigure (b) shows the SVHN vs CIFAR-10 BPD over the course of training CV-Glow. Notice that there is no crossover point in the curves.
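Because the only non-unit-Jacobian operations in CV-Glow are the $1\times1$ convolutions, the log-volume term reduces to the constant above; the following sketch computes it from a stack of hypothetical $C \times C$ kernels.

```python
import numpy as np

def cv_glow_log_volume(kernels, height, width):
    """Constant log |det df/dx| for a stack of 1x1 convolutions.

    Each kernel U_k is a CxC channel-mixing matrix applied at every one of the
    H*W spatial positions, so its log |det| is counted H*W times.
    """
    return height * width * sum(
        np.linalg.slogdet(U)[1] for U in kernels)  # sum_k log |det U_k|

rng = np.random.default_rng(0)
kernels = [rng.normal(size=(3, 3)) for _ in range(8)]  # hypothetical 8 flows, C = 3
print(cv_glow_log_volume(kernels, height=32, width=32))
# The same constant is added to log p(z) for every input x.
```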
Other experiments: random and constant images, ensembles. Other work on generative models (Sønderby et al., 2017; van den Oord et al., 2018) has noted that they often assign the highest likelihood to constant inputs. We also test this case, reporting the BPD in Appendix Figure 9 for NVP-Glow models trained on NotMNIST (left) and CIFAR-10 (right). We find constant inputs have the highest likelihood for our models as well (0.214 BPD for NotMNIST, 0.589 BPD for CIFAR-10). We also include the BPD of random inputs in the table for comparison.
We also hypothesized that averaging over the parameters may mitigate the phenomenon. While integration over the entire parameter space would be ideal, this is analytically and computationally difficult for Glow. Lakshminarayanan et al. (2017) show that discrete ensembles can guard against overconfidence for anomalous inputs while being more practical to implement. Therefore, we opted for this approach, training five Glow models independently and averaging their likelihoods when evaluating test data. Each model was given a different parameter initialization to help diversify the ensemble. Figure 10 in Appendix F reports a histogram of the log-likelihood evaluations when averaging over the ensemble. We see nearly identical results: SVHN is still assigned a higher likelihood than the CIFAR-10 training data.
5 Second Order Analysis
In this section, we aim to provide a more direct analysis of when another distribution might have higher likelihood than the one used for training. We propose analyzing the phenomenon by way of linearizing the difference in expected log-likelihoods. Consider two distributions: the training distribution $p^{*}(\mathbf{x})$ and some dissimilar distribution $q(\mathbf{x})$, also with support on $\mathcal{X}$. For a given generative model $p(\mathbf{x}; \theta)$, the adversarial distribution will have a higher likelihood than the training data's if $\mathbb{E}_{q}[\log p(\mathbf{x}; \theta)] - \mathbb{E}_{p^{*}}[\log p(\mathbf{x}; \theta)] > 0$. This expression is hard to analyze directly, so we perform a second-order expansion of the log-likelihood around an interior point $\mathbf{x}_{0}$. Applying the expansion to both likelihoods, taking expectations, and canceling the common terms, we have:
$$\mathbb{E}_{q}[\log p(\mathbf{x}; \theta)] - \mathbb{E}_{p^{*}}[\log p(\mathbf{x}; \theta)] \approx \nabla_{\mathbf{x}_{0}} \log p(\mathbf{x}_{0}; \theta)^{\mathsf{T}} \left( \mathbb{E}_{q}[\mathbf{x}] - \mathbb{E}_{p^{*}}[\mathbf{x}] \right) + \frac{1}{2} \operatorname{Tr}\left\{ \nabla^{2}_{\mathbf{x}_{0}} \log p(\mathbf{x}_{0}; \theta) \left( \boldsymbol{\Sigma}_{q} - \boldsymbol{\Sigma}_{p^{*}} \right) \right\} \qquad (4)$$
where $\boldsymbol{\Sigma}_{q} = \mathbb{E}_{q}[(\mathbf{x} - \mathbf{x}_{0})(\mathbf{x} - \mathbf{x}_{0})^{\mathsf{T}}]$, the covariance matrix (and similarly for $\boldsymbol{\Sigma}_{p^{*}}$), and $\operatorname{Tr}$ is the trace operation. Since the expansion is accurate only locally around $\mathbf{x}_{0}$, we next assume that $\mathbf{x}_{0} = \mathbb{E}_{q}[\mathbf{x}] = \mathbb{E}_{p^{*}}[\mathbf{x}]$. While this at first glance may seem like a strong assumption, it is not too removed from practice since data is usually centered before being fed to the model. For SVHN and CIFAR-10 in particular, we find this assumption to hold; see Figure 6 (a) for the empirical means of each dimension of CIFAR-10 and SVHN. All of SVHN's means fall within the empirical range of CIFAR-10's. Assuming equal means, the first-order term vanishes and we then have:
$$\mathbb{E}_{q}[\log p(\mathbf{x}; \theta)] - \mathbb{E}_{p^{*}}[\log p(\mathbf{x}; \theta)] \approx \frac{1}{2} \operatorname{Tr}\left\{ \nabla^{2}_{\mathbf{x}_{0}} \log p(\mathbf{x}_{0}; \theta) \left( \boldsymbol{\Sigma}_{q} - \boldsymbol{\Sigma}_{p^{*}} \right) \right\} = \frac{1}{2} \operatorname{Tr}\left\{ \left[ \nabla^{2}_{\mathbf{x}_{0}} \log p(f(\mathbf{x}_{0}); \psi) + \nabla^{2}_{\mathbf{x}_{0}} \log \left| \frac{\partial f}{\partial \mathbf{x}_{0}} \right| \right] \left( \boldsymbol{\Sigma}_{q} - \boldsymbol{\Sigma}_{p^{*}} \right) \right\} \qquad (5)$$
where the second line assumes the generative model to be flow-based.
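Equation 5 can be checked numerically in a setting where everything is tractable. The sketch below uses a zero-mean factorized-Gaussian "model" (for which the second-order expansion is exact) and two zero-mean input distributions with different variances; the Monte Carlo estimate of the likelihood gap matches the trace expression, and the lower-variance distribution receives the higher likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
D, sigma_model = 4, 1.5

def log_p(x):
    """Log density of the model: a zero-mean factorized Gaussian (quadratic in x,
    so the second-order expansion in Equation 5 is exact for this case)."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma_model ** 2)
                  - 0.5 * x ** 2 / sigma_model ** 2, axis=-1)

# Two zero-mean input distributions: "training-like" p* and lower-variance q.
cov_p_star = np.diag([1.0, 1.2, 0.9, 1.1])
cov_q = np.diag([0.3, 0.4, 0.2, 0.5])
x_p = rng.multivariate_normal(np.zeros(D), cov_p_star, size=200_000)
x_q = rng.multivariate_normal(np.zeros(D), cov_q, size=200_000)

# Left-hand side of Equation 5: Monte Carlo estimate of the likelihood gap.
empirical_gap = log_p(x_q).mean() - log_p(x_p).mean()

# Right-hand side: 0.5 * Tr{ Hessian of log p at x0 = 0 times (Sigma_q - Sigma_p*) }.
hessian = -np.eye(D) / sigma_model ** 2
predicted_gap = 0.5 * np.trace(hessian @ (cov_q - cov_p_star))

print(empirical_gap, predicted_gap)  # both positive: q gets the higher likelihood
```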
Analysis of CV-Glow. We use the expression in Equation 5 to analyze the behavior of CV-Glow on CIFAR-10 vs SVHN, seeing if the difference in likelihoods can be explained by the model curvature and the data's second moment. The second-derivative terms simplify considerably for CV-Glow with a spherical latent density. Given the $1\times1$ convolution kernels $\mathbf{U}_{k}$, with $k$ indexing the flow and $C$ the number of input channels, the derivatives are $\partial f_{h,w,c} / \partial x_{h,w,c} = \prod_{k} \sum_{j} u_{k,c,j}$, with $h$ and $w$ indexing the spatial height and width and $j$ the columns of the $k$th flow's convolutional kernel. The second derivative $\partial^{2} f_{h,w,c} / \partial x^{2}_{h,w,c}$ is then zero, which allows us to write
$$\frac{\partial^{2}}{\partial x^{2}_{h,w,c}} \log p(f(\mathbf{x}_{0}); \psi) = \frac{\partial^{2} \log p(\mathbf{z}; \psi)}{\partial z^{2}_{h,w,c}} \left( \prod_{k=1}^{K} \sum_{j=1}^{C} u_{k,c,j} \right)^{2}.$$
The derivation is given in Appendix G. Plugging in the second derivative of the Gaussian’s log density—a common choice for the latent distribution in flow models, following (Dinh et al., 2015, 2017; Kingma & Dhariwal, 2018)—and the empirical variances, we have:
$$\mathbb{E}_{q}[\log p(\mathbf{x}; \theta)] - \mathbb{E}_{p^{*}}[\log p(\mathbf{x}; \theta)] \approx \frac{1}{2 \sigma^{2}_{\psi}} \sum_{c=1}^{C} \left( \prod_{k=1}^{K} \sum_{j=1}^{C} u_{k,c,j} \right)^{2} \sum_{h,w} \left( \sigma^{2}_{p^{*}, c, h, w} - \sigma^{2}_{q, c, h, w} \right) \qquad (6)$$
where $q$ is the SVHN distribution, $p^{*}$ is the CIFAR-10 distribution, and $\sigma^{2}_{\psi}$ is the variance of the latent distribution. We know the final expression is greater than or equal to zero since, empirically, SVHN's per-dimension variances are all smaller than CIFAR-10's. Equality is achieved only when the two sets of variances are equal or in the unusual case of at least one all-zero row in any convolutional kernel for all channels. Thus, the second-order expression we derived does indeed predict we should see a higher likelihood for SVHN than for CIFAR-10. Moreover, we leave the CV-Glow's parameters as constants to emphasize that the expression is non-negative for any parameter setting of the CV-Glow model. This supports our observation that an ensemble of Glows resulted in an almost identical likelihood gap (Figure 10) and that the gap remained relatively constant over the course of training (Figure 5 b). Furthermore, the second-derivative term would be negative for any log-concave density function, meaning that changing the latent density to Laplace or logistic would not change the result.
Our final conclusion is that SVHN simply "sits inside of" CIFAR-10—roughly the same mean, smaller variance—resulting in its higher likelihood. In turn, this means that we can artificially increase the likelihood of both distributions by shrinking their variance. For RGB images, shrinking the variance is equivalent to 'graying' them, i.e. making the pixel values closer to 128. We show in Figure 6 (b) that doing exactly this improves the likelihood of both CIFAR-10 and SVHN. Reducing the variance of the latent representations has the same effect, as shown by Figure 13 in the Appendix.
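A minimal sketch of the graying manipulation follows; the interpolation weight $\alpha$ is an illustrative parameter, and the batch is random stand-in data rather than CIFAR-10 or SVHN.

```python
import numpy as np

def gray(images, alpha):
    """Interpolate 8-bit pixel values toward mid-gray (128).

    alpha = 0 returns the original images; alpha = 1 returns constant gray.
    Shrinking toward 128 reduces per-channel variance without moving the mean far.
    """
    return (1.0 - alpha) * images.astype(np.float64) + alpha * 128.0

rng = np.random.default_rng(0)
fake_batch = rng.integers(0, 256, size=(16, 32, 32, 3))  # stand-in image batch
for alpha in (0.0, 0.5, 0.9):
    grayed = gray(fake_batch, alpha)
    print(alpha, grayed.var(axis=(0, 1, 2)).round(1))  # per-channel variance shrinks
```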
6 Related Work
This paper is inspired by and most related to recent work on evaluation of generative models. Worthy of foremost mention is the work of Theis et al. (2016), which showed that high likelihood is neither sufficient nor necessary for the model to produce visually satisfying samples. However, their paper does not consider out-of-distribution inputs. In this regard, there has been much work on adversarial inputs (Szegedy et al., 2014). While the term is used broadly, it commonly refers to inputs that have been imperceptibly modified so that the model no longer can provide an accurate output (a misclassification, usually). Adversarial attacks on generative models have been studied by (at least) Tabacof et al. (2016) and Kos et al. (2018), but these methods of attack require access to the model. We, on the other hand, are interested in model calibration for any out-of-distribution set and especially for common data sets not constructed with any nefarious intentions nor for attack on a particular model. Various papers (Hendrycks & Gimpel, 2017; Lakshminarayanan et al., 2017; Liang et al., 2018) have reported that discriminative neural networks can produce overconfident predictions on out-of-distribution inputs, but out-of-distribution robustness of deep generative models has not been investigated previously, to the best of our knowledge.
However, there is work concurrent with ours that has tested the anomaly detection abilities of deep generative models. Shafaei et al. (2018) observe that PixelCNN++ cannot provide reliable outlier detection. They do not consider flow-based models. Škvára et al. (2018) experimentally compare VAEs and GANs against k-nearest neighbors (kNNs), showing that VAEs and GANs outperform kNNs only when known outliers can be used to select their hyperparameters. In the work most similar to ours, Choi & Jang (2018) report the same CIFAR-10 vs SVHN phenomenon for Glow—independently confirming our motivating observation. As a fix, they propose training an ensemble of generative models with an adversarial objective and testing for out-of-training-distribution inputs by computing the Watanabe–Akaike information criterion via the ensemble. Their work is complementary to ours since they focus on providing a detection metric whereas we are interested in understanding how and when the phenomenon can arise. The results we present in Equation 6 do not apply to Choi & Jang (2018)'s models since they use affine coupling layers in their Glow, making it NVP.
7 Discussion
The impressively sharp samples produced by Glow (Kingma & Dhariwal, 2018) and its precursor RNVP flow (Dinh et al., 2017), in addition to their ability to compute exact marginal likelihoods, make invertible generative models attractive to study and deploy. However, we urge caution when using deep generative models with out-of-training-distribution inputs, as we have shown that comparing likelihoods alone cannot identify the training set or inputs like it. Moreover, our analysis in Section 5 shows that the SVHN vs CIFAR-10 problem we report would persist for any constant-volume flow, no matter the parameter settings nor the choice of latent density (as long as it is log-concave). The models seem to capture low-level statistics rather than high-level semantics. While we cannot conclude that this is necessarily a pathology in deep generative models, it does suggest they need to be further improved. It could be a problem that plagues any generative model, no matter how high its capacity. In turn, we must temper the enthusiasm with which we preach the benefits of generative models until their sensitivity to out-of-distribution inputs is better understood.
Acknowledgments
We thank Aaron van den Oord, Danilo Rezende, Eric Jang, Florian Stimberg, Josh Dillon, Mihaela Rosca, Rui Shu, and Sander Dieleman for helpful discussions.
Appendix A More Implementation Details for Flow-Based Models
We have described the core building blocks of invertible generative models above, but there are several other architectural features required in practice. Due to space constraints, we describe them only briefly, referring the reader to the original papers for details. In the most recent extension of this line of work, Kingma & Dhariwal (2018) propose the Glow architecture, with its foremost contribution being the use of $1\times1$ convolutions in place of discrete permutation operations. Convolutions of this form can be thought of as a relaxed but generalized permutation, having all the representational power of the discrete version with the added benefit of parameters amenable to gradient-based training. As the transformation function becomes deeper, it becomes prone to the same scale pathologies as deep neural networks and therefore requires a normalization step of some form. Dinh et al. (2017) propose incorporating batch normalization and describe how to compute its contribution to the log-determinant-of-Jacobian term. Kingma & Dhariwal (2018) apply a similar normalization, which they call actnorm, but it uses trainable parameters instead of batch statistics. Lastly, both Dinh et al. (2017) and Kingma & Dhariwal (2018) use multi-scale architectures that factor out variables at regular intervals, copying them forward to the final latent representation. This gradually reduces the dimensionality of the transformations, improving computational costs.
Appendix B Results illustrating asymmetric behavior
Appendix C Analyzing the ChangeofVariables Formula as an Optimization Function
Consider the intuition underlying the volume term in the change-of-variables objective (Equation 3). As we are maximizing the Jacobian's determinant, the model is being encouraged to maximize its partial derivatives. In other words, the model is rewarded for making the transformation sensitive to small changes in $\mathbf{x}$. This behavior starkly contradicts a long history of derivative-based regularization penalties. Dating back at least to Girosi et al. (1995), penalizing the Frobenius norm of a neural network's Jacobian—which upper bounds the volume term—has been used to make models less sensitive to input perturbations, for classifiers (Girosi et al., 1995) and autoencoders (Rifai et al., 2011) alike.
Limiting Behavior. We next attempt to quantify the limiting behavior of the log-volume element. Let us assume, for the purposes of a general treatment, that the bijection $f$ is an $L$-Lipschitz function. Both terms in Equation 3 can be bounded as follows:
$$\log p(\mathbf{x}; \theta) = \log p(f(\mathbf{x}); \psi) + \log \left| \frac{\partial f}{\partial \mathbf{x}} \right| \leq \max_{\mathbf{z}} \log p(\mathbf{z}; \psi) + D \log L \qquad (7)$$
where $L$ is the Lipschitz constant, $D$ the dimensionality, and $\max_{\mathbf{z}} \log p(\mathbf{z}; \psi)$ an expression for the (log) mode of $p(\mathbf{z}; \psi)$. We will make this mode term concrete for Gaussian distributions below. The bound on the volume term follows from Hadamard's inequality:
$$\log \left| \frac{\partial f}{\partial \mathbf{x}} \right| = \sum_{d=1}^{D} \log |\lambda_{d}| \leq \sum_{d=1}^{D} \log \frac{\left\| \frac{\partial f}{\partial \mathbf{x}} \mathbf{e}_{d} \right\|_{2}}{\left\| \mathbf{e}_{d} \right\|_{2}} \leq D \log L,$$
where $\mathbf{e}_{d}$ is an eigenvector of the Jacobian with eigenvalue $\lambda_{d}$. While this expression is too general to admit any strong conclusions, we can see from it that the 'peakedness' of the distribution represented by the mode must keep pace with the Lipschitz constant, especially as dimensionality increases, in order for both terms to contribute equally to the objective.
We can further illuminate the connection between $L$ and the concentration of the latent distribution through the following proposition:
Proposition 1.
Assume $\mathbf{x}$ is distributed with mean $\boldsymbol{\mu}_{x}$ and finite variance $\sigma^{2}_{x}$. Moreover, let $f$ be $L$-Lipschitz and $\mathbf{z} = f(\mathbf{x})$. We then have the following concentration inequality for some constant $\delta > 0$:
$$\Pr\left( \left\| f(\mathbf{x}) - f(\mathbb{E}[\mathbf{x}]) \right\|_{2} \geq \delta \right) \leq \frac{L^{2} \sigma^{2}_{x}}{\delta^{2}}.$$
Proof: From the fact that $f$ is $L$-Lipschitz, we know $\| f(\mathbf{x}) - f(\mathbb{E}[\mathbf{x}]) \|_{2} \leq L \| \mathbf{x} - \mathbb{E}[\mathbf{x}] \|_{2}$. Assuming $\delta > 0$, we can apply Chebyshev's inequality to the RHS: $\Pr\left( L \| \mathbf{x} - \mathbb{E}[\mathbf{x}] \|_{2} \geq \delta \right) \leq L^{2} \sigma^{2}_{x} / \delta^{2}$. Since $\| f(\mathbf{x}) - f(\mathbb{E}[\mathbf{x}]) \|_{2} \leq L \| \mathbf{x} - \mathbb{E}[\mathbf{x}] \|_{2}$, the event on the left-hand side of the proposition implies the event in the Chebyshev bound, and so the bound continues to hold.
From the inequality we can see that the latent distribution can be made more concentrated by decreasing $L$ and/or the data's variance $\sigma^{2}_{x}$. Since the latter is fixed, optimization only influences $L$. Yet, recall that the volume term in the change-of-variables objective rewards increasing $f$'s derivatives and thus $L$. While we have given an upper bound and therefore cannot say that increasing $L$ will necessarily decrease concentration in latent space, it is for certain that leaving $L$ unconstrained does not directly pressure the $\log p(\mathbf{z})$ evaluations to concentrate.
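Proposition 1 can be checked with a quick Monte Carlo simulation; the one-dimensional linear map below is an illustrative $L$-Lipschitz choice rather than a trained flow.

```python
import numpy as np

rng = np.random.default_rng(0)
L, sigma_x, delta = 2.0, 1.0, 4.0

# An L-Lipschitz map; a linear map is the illustrative worst case for the bound.
f = lambda x: L * x

x = rng.normal(0.0, sigma_x, size=1_000_000)  # data with variance sigma_x^2
z = f(x)

# Proposition 1 (1-D form): Pr(|f(x) - f(E[x])| >= delta) <= L^2 sigma_x^2 / delta^2.
empirical_tail = np.mean(np.abs(z - f(x.mean())) >= delta)
chebyshev_bound = (L ** 2) * (sigma_x ** 2) / (delta ** 2)
print(empirical_tail, chebyshev_bound)  # e.g. ~0.046 vs the bound 0.25
```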
Previous work (Dinh et al., 2015, 2017; Kingma & Dhariwal, 2018) has almost exclusively used a factorized zero-mean Gaussian as the latent distribution, and therefore we examine this case in particular. The log-mode can be expressed as $-\frac{D}{2} \log 2\pi\sigma^{2}_{\psi}$, making the likelihood bound
$$\log p(\mathbf{x}; \theta) \leq -\frac{D}{2} \log 2\pi\sigma^{2}_{\psi} + D \log L. \qquad (8)$$
We see that both terms scale with $D$, although in different directions, with the contribution of the latent distribution becoming more negative and the volume term's becoming more positive. We performed a simulation to demonstrate this behavior on the two moons data set, which is shown in Figure 8 (a). We replicated the original two dimensions to create data sets of increasing dimensionality. The results are shown in Figure 8 (b). The empirical values of the two terms are shown by the solid lines, and indeed, we see they exhibit the expected diverging behavior as dimensionality increases.
Appendix D Glow with Sigmoid Parametrization
Upon reading the open source implementation of Glow, we noticed that the affine coupling layers' scale factor is computed with a sigmoid function of the network output rather than with an exponential, i.e. $\sigma(s_{\phi}(\cdot))$ instead of $\exp\{s_{\phi}(\cdot)\}$. This makes the log-volume element non-positive:
$$\log \left| \frac{\partial f}{\partial \mathbf{x}} \right| = \sum_{k=1}^{K} \sum_{d=1}^{D_{k}} \log \sigma\left( s_{\phi, k}(\mathbf{x})_{d} \right) \leq 0 \qquad (9)$$
where $k$ indexes the flows and $D_{k}$ the dimensionality at flow $k$. Interestingly, this parametrization has a fixed upper bound of zero, removing the dependence on $D$ found in Equation 8. We demonstrate the change in behavior introduced by the alternate parametrization via the same two-moons simulation. The only difference is that the RNVP transforms use a sigmoid parametrization for the scaling operation. See Figure 8 (c) for the results: we see that now both change-of-variables terms are oriented downward as dimensionality grows. We conjecture this parametrization helps condition the log-likelihood, limiting the volume term's influence, when training the large models used by Kingma & Dhariwal (2018). However, it does not fix the out-of-distribution overconfidence we report in Section 3.
Appendix E Constant and Random Inputs


Appendix F Ensembling Glows
The likelihood function technically measures how likely the parameters are under the data (and not how likely the data is under the model), and perhaps a better quantity would be the posterior predictive distribution $p(\mathbf{x} \mid \mathbf{X}) = \int p(\mathbf{x}; \theta) \, p(\theta \mid \mathbf{X}) \, d\theta$, where we draw samples from the posterior distribution $p(\theta \mid \mathbf{X})$. Intuitively, it seems that such an integration would be more robust than a single maximum likelihood point estimate. As a crude approximation to Bayesian inference, we tried averaging over ensembles of generative models, since Lakshminarayanan et al. (2017) showed that ensembles of discriminative models are robust to out-of-distribution inputs. We compute an "ensemble predictive distribution" as $p(\mathbf{x}) = \frac{1}{M} \sum_{m=1}^{M} p(\mathbf{x}; \theta_{m})$, where $m$ indexes over models. However, as Figure 10 shows, ensembles did not significantly change the relative difference between in-distribution (CIFAR-10, black and blue) and out-of-distribution (SVHN, red).
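In practice the averaging is done from per-model log-likelihoods with a log-sum-exp for numerical stability; a minimal sketch with placeholder values follows.

```python
import numpy as np

def ensemble_log_likelihood(per_model_log_liks):
    """log (1/M) sum_m p(x; theta_m), computed stably from an (M, N) array of
    per-model, per-example log-likelihoods."""
    a = np.asarray(per_model_log_liks)
    a_max = a.max(axis=0)
    return a_max + np.log(np.mean(np.exp(a - a_max), axis=0))

# Illustrative placeholders: 5 models, 3 examples (log-likelihoods in nats).
log_liks = np.array([[-7300., -7400., -5200.],
                     [-7280., -7390., -5210.],
                     [-7320., -7410., -5190.],
                     [-7290., -7385., -5205.],
                     [-7310., -7405., -5198.]])
print(ensemble_log_likelihood(log_liks))
```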
Appendix G Derivation of CV-Glow's Likelihood Difference
We start with Equation 5.
The volume element for CV-Glow does not depend on $\mathbf{x}$, and therefore its second derivative is zero and it drops from the equation:
$$\mathbb{E}_{q}[\log p(\mathbf{x}; \theta)] - \mathbb{E}_{p^{*}}[\log p(\mathbf{x}; \theta)] \approx \frac{1}{2} \operatorname{Tr}\left\{ \left[ \nabla^{2}_{\mathbf{x}_{0}} \log p(f(\mathbf{x}_{0}); \psi) + \nabla^{2}_{\mathbf{x}_{0}} \, HW \sum_{k} \log \left| \det \mathbf{U}_{k} \right| \right] \left( \boldsymbol{\Sigma}_{q} - \boldsymbol{\Sigma}_{p^{*}} \right) \right\} = \frac{1}{2} \operatorname{Tr}\left\{ \nabla^{2}_{\mathbf{x}_{0}} \log p(f(\mathbf{x}_{0}); \psi) \left( \boldsymbol{\Sigma}_{q} - \boldsymbol{\Sigma}_{p^{*}} \right) \right\} \qquad (10)$$
where $\mathbf{U}_{k}$ denotes the $k$th convolution's kernel. Moving on to the first term, the log probability under the latent distribution, we have:
$$\frac{\partial^{2}}{\partial x_{i} \partial x_{j}} \log p(f(\mathbf{x}); \psi) = \sum_{m, l} \frac{\partial^{2} \log p(\mathbf{z}; \psi)}{\partial z_{m} \partial z_{l}} \frac{\partial f_{m}}{\partial x_{i}} \frac{\partial f_{l}}{\partial x_{j}} + \sum_{m} \frac{\partial \log p(\mathbf{z}; \psi)}{\partial z_{m}} \frac{\partial^{2} f_{m}}{\partial x_{i} \partial x_{j}} \qquad (11)$$
Since $f$ is comprised of translation operations and $1\times1$ convolutions, its partial derivatives involve just the latter (as the former are all ones), and therefore we have the partial derivatives:
$$\frac{\partial f_{h,w,c}}{\partial x_{h,w,c}} = \prod_{k=1}^{K} \sum_{j=1}^{C} u_{k,c,j} \qquad (12)$$
where $h$ and $w$ index the input spatial dimensions, $c$ the input channel dimensions, $k$ the series of flows, and $j$ the column dimensions of the $C \times C$-sized convolutional kernel $\mathbf{U}_{k}$. The diagonal elements of $\partial f / \partial \mathbf{x}$ are then $\prod_{k} \sum_{j} u_{k,c,j}$, and the diagonal elements of $\partial^{2} f / \partial \mathbf{x}^{2}$ are all zero.
Then returning to the full equation, for the constant-volume Glow model we have:
$$\mathbb{E}_{q}[\log p(\mathbf{x}; \theta)] - \mathbb{E}_{p^{*}}[\log p(\mathbf{x}; \theta)] \approx \frac{1}{2} \operatorname{Tr}\left\{ -\frac{1}{\sigma^{2}_{\psi}} \operatorname{diag}\!\left( \Big( \prod_{k} \sum_{j} u_{k,c,j} \Big)^{2} \right) \left( \boldsymbol{\Sigma}_{q} - \boldsymbol{\Sigma}_{p^{*}} \right) \right\} \qquad (13)$$
Lastly, we assume that both $\boldsymbol{\Sigma}_{q}$ and $\boldsymbol{\Sigma}_{p^{*}}$ are diagonal and thus the multiplication inside the trace collects only their diagonal elements:
$$\begin{aligned} \mathbb{E}_{q}[\log p(\mathbf{x}; \theta)] - \mathbb{E}_{p^{*}}[\log p(\mathbf{x}; \theta)] &\approx \frac{1}{2 \sigma^{2}_{\psi}} \sum_{h, w, c} \left( \prod_{k} \sum_{j} u_{k,c,j} \right)^{2} \left( \sigma^{2}_{p^{*}, h, w, c} - \sigma^{2}_{q, h, w, c} \right) \\ &= \frac{1}{2 \sigma^{2}_{\psi}} \sum_{c=1}^{C} \left( \prod_{k=1}^{K} \sum_{j=1}^{C} u_{k,c,j} \right)^{2} \sum_{h, w} \left( \sigma^{2}_{p^{*}, h, w, c} - \sigma^{2}_{q, h, w, c} \right) \end{aligned} \qquad (14)$$
where we arrived at the last line by rearranging the sum to collect the shared channel terms.
Appendix H Histogram of data statistics
Appendix I Results illustrating effect of graying on codes
Figure 13 shows the effect of graying on codes.




Footnotes
 Specifically, we use fewer steps of flow per level (each step being a $1\times1$ convolution followed by an affine coupling layer) than Kingma & Dhariwal (2018), who use 3 levels of 32 steps. Although we use a smaller model, it still produces good samples, which can be seen in Figure 14 of the Appendix, and competitive BPD (CIFAR-10: 3.46 for ours vs 3.35 for theirs).
 See (Theis et al., 2016, Section 3.1) for the definitions of log-likelihood and bits-per-dimension.
 It is easy to show the upper bound via Hadamard's inequality: $\left| \det \frac{\partial f}{\partial \mathbf{x}} \right| \leq \prod_{d=1}^{D} \left\| \frac{\partial f}{\partial x_{d}} \right\|_{2} \leq \left\| \frac{\partial f}{\partial \mathbf{x}} \right\|_{F}^{D}$.
 https://github.com/openai/glow/blob/master/model.py#L376
References
 Christopher M Bishop. Novelty Detection and Neural Network Validation. IEE Proceedings - Vision, Image and Signal Processing, 141(4):217–222, 1994.
 Christopher M Bishop. Training with Noise is Equivalent to Tikhonov Regularization. Neural Computation, 7(1):108–116, 1995.
 David Blei, Katherine Heller, Tim Salimans, Max Welling, and Zoubin Ghahramani. Panel Discussion. Advances in Approximate Bayesian Inference, December 2017. URL https://youtu.be/x1UByHT60mQ?t=46m2s. NIPS Workshop.
 Hyunsun Choi and Eric Jang. Generative Ensembles for Robust Anomaly Detection. ArXiv ePrint arXiv:1810.01392, 2018.
 Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-Linear Independent Components Estimation. ICLR Workshop Track, 2015.
 Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density Estimation Using Real NVP. In International Conference on Learning Representations (ICLR), 2017.
 Federico Girosi, Michael Jones, and Tomaso Poggio. Regularization Theory and Neural Networks Architectures. Neural Computation, 7(2):219–269, 1995.
 Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aäron Courville, and Yoshua Bengio. Generative Adversarial Nets. In Advances in Neural Information Processing Systems (NIPS), 2014.
 Dan Hendrycks and Kevin Gimpel. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks. In International Conference on Learning Representations (ICLR), 2017.
 Radu Herbei and Marten H Wegkamp. Classification with Reject Option. Canadian Journal of Statistics, 34(4):709–721, 2006.
 Diederik P. Kingma and Prafulla Dhariwal. Glow: Generative Flow with Invertible 1x1 Convolutions. In Advances in Neural Information Processing Systems (NIPS), 2018.
 Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. International Conference on Learning Representations (ICLR), 2014.
 Jernej Kos, Ian Fischer, and Dawn Song. Adversarial Examples for Generative Models. In 2018 IEEE Security and Privacy Workshops (SPW), pp. 36–42. IEEE, 2018.
 Alex Krizhevsky and Geoffrey Hinton. Learning Multiple Layers of Features from Tiny Images. Technical report, University of Toronto, 2009.
 Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and Scalable Predictive Uncertainty Estimation Using Deep Ensembles. In Advances in Neural Information Processing Systems (NIPS), 2017.
 Shiyu Liang, Yixuan Li, and R Srikant. Enhancing the Reliability of Out-of-Distribution Image Detection in Neural Networks. International Conference on Learning Representations (ICLR), 2018.
 Christos Louizos and Max Welling. Multiplicative Normalizing Flows for Variational Bayesian Neural Networks. In Proceedings of the 34th International Conference on Machine Learning (ICML), 2017.
 Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading Digits in Natural Images with Unsupervised Feature Learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
 Danilo Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In Proceedings of the 31st International Conference on Machine Learning (ICML), pp. 1278–1286, 2014.
 Salah Rifai, Pascal Vincent, Xavier Muller, Xavier Glorot, and Yoshua Bengio. Contractive Auto-Encoders: Explicit Invariance During Feature Extraction. In Proceedings of the 28th International Conference on Machine Learning (ICML), 2011.
 Mihaela Rosca, Balaji Lakshminarayanan, and Shakir Mohamed. Distribution matching in variational inference. arXiv preprint arXiv:1802.06847, 2018.
 Alireza Shafaei, Mark Schmidt, and James J Little. Does Your Model Know the Digit 6 Is Not a Cat? A Less Biased Evaluation of “Outlier” Detectors. ArXiv ePrint arXiv:1809.04729, 2018.
 Vít Škvára, Tomáš Pevnỳ, and Václav Šmídl. Are Generative Deep Models for Novelty Detection Truly Better? KDD Workshop on Outlier Detection DeConstructed (ODD v5.0), 2018.
 Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP Inference for Image Super-Resolution. International Conference on Learning Representations (ICLR), 2017.
 Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing Properties of Neural Networks. International Conference on Learning Representations (ICLR), 2014.
 Martin Szummer and Tommi S Jaakkola. Information Regularization with Partially Labeled Data. In Advances in Neural Information Processing Systems (NIPS), 2003.
 Pedro Tabacof, Julia Tavares, and Eduardo Valle. Adversarial Images for Variational Autoencoders. ArXiv ePrint, 2016.
 EG Tabak and Cristina V Turner. A Family of Nonparametric Density Estimation Algorithms. Communications on Pure and Applied Mathematics, 66(2):145–164, 2013.
 Lucas Theis, Aäron van den Oord, and Matthias Bethge. A Note on the Evaluation of Generative Models. In International Conference on Learning Representations (ICLR), 2016.
 Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A Generative Model for Raw Audio. ArXiv ePrint, 2016a.
 Aäron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixel CNN decoders. In Advances in Neural Information Processing Systems (NIPS), 2016b.
 Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis Cobo, Florian Stimberg, Norman Casagrande, Dominik Grewe, Seb Noury, Sander Dieleman, Erich Elsen, Nal Kalchbrenner, Heiga Zen, Alex Graves, Helen King, Tom Walters, Dan Belov, and Demis Hassabis. Parallel WaveNet: Fast highfidelity speech synthesis. In Proceedings of the 35th International Conference on Machine Learning (ICML), pp. 3918–3926, 2018.