On the Implicit Assumptions of GANs
Abstract
Generative adversarial nets (GANs) (Goodfellow et al., 2014; Gutmann et al., 2014) have generated a lot of excitement. Despite their popularity, they exhibit a number of well-documented issues in practice, which apparently contradict theoretical guarantees. A number of enlightening papers, e.g. (Arora et al., 2017; Sinn & Rawat, 2017; Cornish et al., 2018), have pointed out that these issues arise from unjustified assumptions that are commonly made, but the message seems to have been lost amid the optimism of recent years. We believe the identified problems deserve more attention, and highlight their implications for both the properties of GANs and the trajectory of research on probabilistic models. We recently proposed an alternative method (Li & Malik, 2018) that sidesteps these problems.
Ke Li   Jitendra Malik
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Berkeley, CA 94720, United States
{ke.li,malik}@eecs.berkeley.edu
32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada.
1 Introduction
Generative adversarial nets (GANs) (Goodfellow et al., 2014; Gutmann et al., 2014) are one of the most popular generative models today. In the ensuing discussion, we separate the model, also known as the generator, from the training objective for clarity. The model is an example of an implicit probabilistic model (Diggle & Gratton, 1984; Mohamed & Lakshminarayanan, 2016), which is a probabilistic model that is defined most naturally in terms of a sampling procedure. In the case of GANs, the sampling procedure associated with the model is the following:

1. Sample $z \sim \mathcal{N}(0, I)$.
2. Return $x := T_\theta(z)$,

where $T_\theta$ is a neural net.
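The sampling procedure above can be sketched in a few lines. The one-layer tanh network below is a hypothetical stand-in for $T_\theta$; any neural net pushed forward over Gaussian noise works the same way:

```python
import numpy as np

def sample_from_implicit_model(theta_W, theta_b, n_samples, latent_dim=2, rng=None):
    """Draw samples from a toy implicit model: z ~ N(0, I), x = T_theta(z).

    T_theta here is an illustrative one-layer network x = tanh(W z + b);
    the model is defined purely by this sampling procedure, with no
    tractable density.
    """
    rng = np.random.default_rng(rng)
    z = rng.standard_normal((n_samples, latent_dim))  # Sample z ~ N(0, I)
    return np.tanh(z @ theta_W + theta_b)             # Return x = T_theta(z)

W = np.eye(2)
b = np.zeros(2)
x = sample_from_implicit_model(W, b, n_samples=3, rng=0)
```

Note that evaluating the density of `x` under this model would require inverting the network and applying the change-of-variables formula, which is intractable in general; that is what makes the model implicit.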
For implicit models, the marginal likelihood in general cannot be expressed analytically or computed numerically. Consequently, implicit models cannot be trained by directly maximizing likelihood and must instead be trained using likelihood-free approaches, which do not require evaluation of the likelihood or any derived quantities. GANs use an adversarial loss as their training objective, which penalizes dissimilarity of samples to data. It has been shown that when given access to an infinitely powerful discriminator, the original GAN objective minimizes the Jensen-Shannon divergence, the $-\log D$ variant of the objective minimizes the reverse KL-divergence minus a bounded quantity (Arjovsky & Bottou, 2017), and later extensions (Nowozin et al., 2016) minimize arbitrary f-divergences. Various papers (Arora et al., 2017; Sinn & Rawat, 2017; Cornish et al., 2018) have pointed out that these results only hold under strong assumptions, but these assumptions are often forgotten. They have far-reaching implications for both the properties of the adversarial loss and the trajectory of research on probabilistic models, and so we believe they deserve more attention than they have received. In this paper, we highlight the most important issues, discuss their implications and propose an alternative method that bypasses them.
2 True vs. Empirical Data Distribution
As pointed out by Arora et al. (2017), Sinn & Rawat (2017) and Cornish et al. (2018), theoretical analysis of GANs often disregards the distinction between the true and the empirical data distribution. More concretely, the original adversarial training objective is:

$$\min_\theta \max_\phi \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D_\phi(x)\right] + \mathbb{E}_{z \sim \mathcal{N}(0, I)}\!\left[\log\!\left(1 - D_\phi(T_\theta(z))\right)\right],$$

where $p_{\mathrm{data}}$ is the true data distribution. Ideally, both expectations should be approximated with Monte Carlo estimates during training to ensure unbiased gradient estimates. However, because drawing samples from the true data distribution is expensive and usually cannot be done during training, the expectation w.r.t. $p_{\mathrm{data}}$ is in practice approximated with samples drawn from the training dataset rather than from the true data distribution. Drawing samples from the training dataset effectively results in a Monte Carlo estimate of the expectation w.r.t. the empirical data distribution $\tilde{p}_{\mathrm{data}}$. While this is an unbiased estimate of the expectation w.r.t. the empirical data distribution, it is not necessarily an unbiased estimate of the expectation w.r.t. the true data distribution, because the randomness in the data collection process is not marginalized out when training the model (more details are in Appendix A).
While seemingly innocuous, the replacement of the true data distribution with the empirical version has important consequences. For example, Arora et al. (2017) pointed out that because the empirical data distribution is discrete, the Jensen-Shannon divergence between the empirical data distribution and any continuous model distribution is always $\log 2$, which is the maximum possible value of the Jensen-Shannon divergence. Therefore, minimizing the Jensen-Shannon divergence between the empirical data distribution and the model distribution does not necessarily recover the true data distribution.
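The value $\log 2$ can be derived in a few lines. Write $M = \frac{1}{2}(\tilde{P} + Q)$ for the mixture of the empirical distribution $\tilde{P}$ and a continuous model distribution $Q$, and let $A$ be the (finite) set of training examples, so that $Q(A) = 0$ and $\tilde{P}(A^c) = 0$. A sketch of the computation:

```latex
\mathrm{JSD}(\tilde{P} \,\|\, Q)
  = \tfrac{1}{2} D_{\mathrm{KL}}\!\left(\tilde{P} \,\middle\|\, M\right)
  + \tfrac{1}{2} D_{\mathrm{KL}}\!\left(Q \,\middle\|\, M\right)
  % On A we have Q(A) = 0, so d\tilde{P}/dM = 2 holds \tilde{P}-a.e.;
  % on A^c we have \tilde{P}(A^c) = 0, so dQ/dM = 2 holds Q-a.e.
  = \tfrac{1}{2} \log 2 + \tfrac{1}{2} \log 2
  = \log 2 .
```

Because this holds for every continuous $Q$, the objective is constant in the model parameters, which is the source of the consistency problem discussed in Section 3.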
Minimizing the reverse KL-divergence, i.e. the KL-divergence from the empirical data distribution to the model distribution, $D_{\mathrm{KL}}(p_\theta \,\|\, \tilde{p}_{\mathrm{data}})$, is similarly problematic. Recall that $D_{\mathrm{KL}}(P \,\|\, Q)$ is only defined and finite if $P$ is absolutely continuous w.r.t. $Q$, i.e. for any measurable set $A$ such that $Q(A) = 0$, $P(A) = 0$. Therefore, for the reverse KL-divergence to be defined and finite, the support of the model distribution must be contained in the support of the empirical data distribution, which is just the set of training examples. Therefore, any model that can produce a novel sample that is not in the training set will give rise to an undefined or infinite reverse KL-divergence. In particular, the reverse KL-divergence for any continuous model distribution is undefined, and so minimizing the reverse KL-divergence does not make sense. (This argument conveys the right intuition, but there are some technical subtleties involved due to the non-existence of the density of the empirical data distribution; a more rigorous treatment can be found in Appendix B.)
Therefore, claims that GANs minimize the Jensen-Shannon divergence, or the reverse KL-divergence minus a bounded quantity, implicitly carry the assumption of access to the true data distribution during training. Because this is a strong assumption that rarely holds in practice, it is important to keep it in mind when interpreting theoretical results and applying GANs in practice.
3 Asymptotic Consistency
It is often claimed that the parameter estimate obtained by optimizing the original adversarial objective when the discriminator has infinite capacity (which corresponds to minimizing the Jensen-Shannon divergence) (Goodfellow, 2014) or by minimizing the reverse KL-divergence (Huszár, 2015) is asymptotically consistent. The notion of asymptotic consistency in the sense used here refers to the fact that when the true data distribution is given and the model has sufficient capacity to reproduce the true data distribution, the objective attains its unique global optimum when the model distribution coincides with the true data distribution. It is important to note, however, that this notion of asymptotic consistency is different from the classical notion of asymptotic consistency in statistics, and the two should not be conflated. In fact, minimizing any divergence from or to the true data distribution is asymptotically consistent in the sense used in the GAN literature: by definition, a divergence is nonnegative and evaluates to zero if and only if its two arguments are equal, which implies that the optimum must occur when the model distribution is equal to the true data distribution.
Unfortunately, the notion of asymptotic consistency used in the GAN literature does not characterize the behaviour of the optima of various objectives in practice, because the true data distribution is almost never available. (If it were available, there would be no need to train a generative model, since we can simply use the true data distribution as given in place of the generative model.) What is more useful is to look at how the parameter estimate behaves when given a finite set of training examples, and examine what this parameter estimate converges to as the number of training examples tends to infinity. This is what the classical notion of asymptotic consistency in statistics refers to.
More precisely, given an infinite sequence of i.i.d. samples $x_1, x_2, \ldots$ from $p_{\theta^*}$, where the true parameter $\theta^*$ is unknown, let $\hat{\theta}_n$ be a parameter estimator that depends on only the first $n$ samples $x_1, \ldots, x_n$. An estimator is (weakly) asymptotically consistent if $\hat{\theta}_n \to \theta^*$ in probability, i.e. $\lim_{n \to \infty} \Pr\!\left(\|\hat{\theta}_n - \theta^*\| > \epsilon\right) = 0$ for all $\epsilon > 0$, where the probability is over the randomness in the process of drawing samples from $p_{\theta^*}$.
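The classical definition above is easy to illustrate numerically. The sketch below uses the sample mean as the estimator $\hat{\theta}_n$ of the mean $\theta^*$ of a Gaussian; all names and numbers are illustrative:

```python
import numpy as np

# Weak consistency: P(|theta_hat_n - theta*| > eps) -> 0 as n grows.
# Estimator: the sample mean of n i.i.d. draws from N(theta*, 1).
rng = np.random.default_rng(0)
theta_star, eps = 3.0, 0.1
p_far = []

for n in [10, 100, 10_000]:
    # Estimate the probability over 500 repeated draws of the dataset.
    estimates = rng.normal(theta_star, 1.0, size=(500, n)).mean(axis=1)
    p_far.append(np.mean(np.abs(estimates - theta_star) > eps))

print(p_far)  # shrinks toward 0 as n grows, as consistency requires
```

By contrast, an "estimator" defined as the minimizer of a constant objective, such as the Jensen-Shannon divergence to the empirical distribution, never concentrates around $\theta^*$ no matter how large $n$ gets.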
Let us now compare this notion of asymptotic consistency to the notion used in the GAN literature by way of examples. Recall that the Jensen-Shannon divergence between the empirical data distribution and any continuous model distribution is $\log 2$, regardless of what the model distribution is or how many samples there are. Any parameter value therefore minimizes the Jensen-Shannon divergence, because the objective is a constant function of the parameter, and so the parameter estimate obtained by minimizing the Jensen-Shannon divergence is not asymptotically consistent in the classical statistical sense. Similarly, because the reverse KL-divergence is undefined for any continuous model distribution, the parameter value that minimizes the reverse KL-divergence is undefined, and so the parameter estimate obtained by minimizing the reverse KL-divergence is not asymptotically consistent either.
Therefore, even though minimizing any divergence is consistent according to the notion of asymptotic consistency used in the GAN literature, this is not true if the notion of asymptotic consistency were taken to agree with the definition in the statistical literature. One divergence that is asymptotically consistent in both the statistical sense and the GAN sense is the standard KL-divergence, i.e. the KL-divergence from the model distribution to the empirical data distribution, $D_{\mathrm{KL}}(\tilde{p}_{\mathrm{data}} \,\|\, p_\theta)$, whose minimization is equivalent to maximum likelihood.
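The equivalence to maximum likelihood can be checked directly: the two objectives differ only by a constant in the parameter, so they share the same optimizer. A minimal sketch, using an illustrative unit-variance Gaussian model with unknown mean:

```python
import numpy as np

# Minimizing D_KL(p_tilde || p_theta) and maximizing the average
# log-likelihood pick the same parameter. Toy setup: data from N(2, 1),
# model N(mu, 1) with mu found by grid search.
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=500)

def avg_log_lik(mu):
    return np.mean(-0.5 * (data - mu) ** 2 - 0.5 * np.log(2 * np.pi))

def kl_empirical_to_model(mu):
    # D_KL(p_tilde || p_theta) = -log N - (average log-likelihood);
    # the -log N term is constant in mu (cf. Appendix B).
    return -np.log(len(data)) - avg_log_lik(mu)

grid = np.linspace(0.0, 4.0, 401)
mu_mle = grid[np.argmax([avg_log_lik(m) for m in grid])]
mu_kl = grid[np.argmin([kl_empirical_to_model(m) for m in grid])]
print(mu_mle, mu_kl)  # identical, and close to the sample mean
```

Because the estimator is the sample mean in this toy case, it is asymptotically consistent in the classical sense as well.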
4 Mode Dropping and Its Implications
Mode dropping is a well-documented issue in GANs and refers to the phenomenon where the model disregards some modes of the data distribution and assigns them low density. Various theoretical explanations have been proposed (Arora et al., 2017; Arjovsky & Bottou, 2017), which suggest that it could be caused by a combination of factors, including the polynomial capacity of the discriminator and the minimization of the reverse KL-divergence. More specifically, Arora et al. (2017) showed that a discriminator with a number of parameters polynomial in the dimensionality of the data cannot in general detect mode dropping, leading the generator at convergence to drop an exponential number of modes. Arjovsky & Bottou (2017) demonstrated that even if we assume access to the true data distribution and a discriminator with infinite capacity, the $-\log D$ variant of the adversarial objective, which is more commonly used than the original objective due to issues with vanishing gradients, minimizes the reverse KL-divergence minus a bounded quantity. Because the reverse KL-divergence heavily penalizes the model for assigning high density to unlikely data and only mildly penalizes it for assigning low density to likely data, it tends to lead to a model distribution that misses some modes of the data distribution.
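The mode-seeking behaviour of the reverse KL-divergence can be seen in a one-dimensional toy problem, sketched below on a numerical grid (all numbers are illustrative): the data is a mixture of two well-separated Gaussians, and the model is a single Gaussian with a free mean.

```python
import numpy as np

# Forward KL (data || model) is mode-covering; reverse KL (model || data)
# is mode-seeking. Both are evaluated by numerical integration on a grid.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def normal(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p_data = 0.5 * normal(x, -4, 1) + 0.5 * normal(x, 4, 1)

def kl(p, q):
    mask = p > 1e-300  # ignore numerically zero contributions
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

means = np.linspace(-6, 6, 241)
fwd = [kl(p_data, normal(x, m, 3.0)) for m in means]  # KL(data || model)
rev = [kl(normal(x, m, 1.0), p_data) for m in means]  # KL(model || data)

print(means[np.argmin(fwd)])  # near 0: spreads mass over both modes
print(means[np.argmin(rev)])  # near -4 or 4: locks onto a single mode
```

The forward-KL optimum sits between the modes (covering both at the cost of unrealistic mass in the gap), while the reverse-KL optimum sits on one mode and drops the other entirely, which is exactly the mode-dropping behaviour described above.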
The freedom that GANs have to drop modes has important implications for the evaluation of generative models and the trajectory of research on generative models more broadly. It is instructive to think of the performance of a generative model along two axes: precision, i.e. its ability to generate plausible samples, and recall, i.e. its ability to model the full diversity of the data distribution. Ideally, we would like to learn a model that scores high along both axes. Traditionally, generative models were trained using maximum likelihood. Because the likelihood is the product of the densities at each of the training examples, such models are not allowed to assign low density to any training example; otherwise the overall likelihood would become low. So mode dropping is effectively disallowed and full recall is guaranteed. Evaluation in this case is straightforward: since recall is fixed, an increase in precision implies an upward shift in the precision-recall curve, which indicates better modelling performance. So models trained using maximum likelihood can be compared solely on the basis of precision. Precision can be easily measured; a simple way is by visual assessment of sample quality. This is why sample quality has historically been an important indicator of modelling performance and why sample quality used to correlate with log-likelihood (or estimates/lower bounds thereof).
On the other hand, GANs are allowed to drop modes and can effectively choose which data examples they want to model. Because the model designer has no control over which or how many training examples are ignored, there is no guarantee on the level of recall, and so it is critical to measure both precision and recall. To see why both are important, consider a model with low capacity that drops all but a few modes. Such a model trivially achieves high precision by dedicating all its modelling capacity to the few modes it keeps, but cannot explain most of the data. Evaluation in this case becomes tricky: an increase in precision could mean either a movement along the precision-recall curve, where the increase in precision comes at the expense of a decrease in recall, or an upward shift in the precision-recall curve, where recall is maintained or increased together with the improvement in precision. Only the latter implies an improvement in modelling performance. As a result, evaluation by precision alone no longer suffices and can be misleading. Instead, both precision and recall must be measured, and care must be taken in interpreting the results.
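The precision/recall distinction can be made concrete with a deliberately simplified discrete example (the mode sets and the two functions below are illustrative, not a standard metric): treat the data as a set of modes and ask what fractions a mode-dropping model gets right and covers.

```python
# A toy discrete illustration of precision vs. recall for a
# mode-dropping model. All names here are illustrative.
data_modes = {"A", "B", "C", "D"}   # modes present in the data
model_modes = {"A", "B"}            # a model that dropped modes C and D

def precision(model, data):
    # Fraction of the model's modes that are plausible (appear in the data).
    return len(model & data) / len(model)

def recall(model, data):
    # Fraction of the data's modes that the model can actually produce.
    return len(model & data) / len(data)

print(precision(model_modes, data_modes))  # 1.0: every sample looks real
print(recall(model_modes, data_modes))     # 0.5: half the modes are missing
```

Judged by precision (i.e. sample quality) alone, this model looks perfect, even though it explains only half of the data; this is precisely the failure mode that evaluation by sample quality cannot detect.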
Unfortunately, to date, there has been no reliable method to measure recall. Visualization of samples no longer works because human memory constraints limit our ability to detect a deficiency in diversity compared to the training data. Comparison to the nearest training example is also problematic because it only detects almost exact memorization of a training example. As a result, recall is typically not measured. This is dangerous – without measuring recall, we do not know how much recall was given up in order to achieve an improvement in sample quality.
This has had an impact on the trajectory of research in probabilistic modelling, which has deviated somewhat from its original goal of modelling the inherent uncertainty in inference/prediction. Performance of generative models was traditionally measured in terms of log-likelihood or a lower bound on log-likelihood, and assessment of sample quality was a way of visualizing performance. Later on, estimated log-likelihoods replaced lower bounds on log-likelihoods as a performance metric, since there was no straightforward way to compute lower bounds on log-likelihoods for some models. Fortunately, sample quality was unaffected by this limitation and served to validate conclusions drawn from estimated log-likelihoods, which may be unreliable in high dimensions. The advent of models that can drop modes brought about a shift from the full-recall setting to a setting where no level of recall is guaranteed. Consequently, conclusions drawn from estimated log-likelihood and sample quality diverged. Due to its unreliability, estimated log-likelihood has gradually fallen out of favour as a performance metric, leaving sample quality as the sole performance metric. However, it is sometimes forgotten that because full recall is no longer enforced, sample quality no longer reflects how well the model learns the underlying distribution. As a result, research in the area has undergone a largely unnoticed change in focus, from trying to learn the distribution to trying to synthesize visually pleasing images. While the latter is a worthy goal, the former should not be abandoned, and researchers should be mindful that the impressive advances in sample quality in recent years do not mean that we are now within reach of being able to model the distribution of natural images.
We can only claim to have modelled the distribution when we can produce realistic samples at full recall, which could be considerably more challenging than producing realistic samples at some unknown level of recall.
5 Proposed Solution: Implicit Maximum Likelihood Estimation
How can we as a community progress towards generative models that are capable of learning the underlying data distribution? We argue that the first step is to return to the principle of maximum likelihood and insist on full recall, for the following reason: when the model is given the freedom to drop modes and effectively ignore training data that it does not want to model, it is difficult to design high-capacity models. Even when a model is trained on a large and diverse dataset, the model may appear to overfit to a small subset of training examples and fail to generalize, because many examples in the dataset are effectively ignored. The natural course of action to mitigate this would be to reduce the capacity of the model; however, this would result in a low-capacity model that captures only the modes represented by a small subset of the training examples.
This does not mean that we have to give up implicit probabilistic models, which offer much more modelling flexibility than classical probabilistic models. Previously, it was unclear how to train implicit models to maximize likelihood, since the likelihood and derived quantities can neither be expressed analytically nor computed numerically. Recently, we introduced a simple likelihood-free parameter estimation technique that can be shown to be equivalent to maximum likelihood under some conditions, which we call Implicit Maximum Likelihood Estimation (Li & Malik, 2018).
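A minimal sketch of the high-level recipe in Li & Malik (2018) follows: repeatedly match each training example to its nearest generated sample and pull that sample towards it, so that no example can be ignored. The linear generator and all hyperparameters below are illustrative choices, not the paper's actual architecture:

```python
import numpy as np

# Sketch of Implicit Maximum Likelihood Estimation (IMLE) on a toy
# 2-D dataset, with an illustrative linear generator x = z @ W + b.
rng = np.random.default_rng(0)
data = rng.normal([3.0, -2.0], 0.5, size=(64, 2))  # toy training set

W = 0.1 * rng.standard_normal((2, 2))              # generator parameters
b = np.zeros(2)
lr, n_samples = 0.05, 128

for step in range(300):
    z = rng.standard_normal((n_samples, 2))
    samples = z @ W + b
    # For every training example, find its nearest generated sample.
    dists = np.linalg.norm(data[:, None, :] - samples[None, :, :], axis=-1)
    nearest = dists.argmin(axis=1)
    diff = samples[nearest] - data                 # residuals to shrink
    # Descent step on sum_i ||T_theta(z_sigma(i)) - x_i||^2
    # (the constant factor of 2 in the gradient is absorbed into lr).
    W -= lr * z[nearest].T @ diff / len(data)
    b -= lr * diff.mean(axis=0)

# After training, every training point should have a nearby sample.
final = rng.standard_normal((n_samples, 2)) @ W + b
d_final = np.linalg.norm(data[:, None, :] - final[None, :, :], axis=-1).min(axis=1)
print(d_final.mean())  # much smaller than before training
```

Because every training example contributes a matched term to the objective, none can be silently dropped, which is the sense in which this construction insists on full recall.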
References
 Arjovsky & Bottou (2017) Arjovsky, Martin and Bottou, Léon. Towards principled methods for training generative adversarial networks. arXiv preprint arXiv:1701.04862, 2017.
 Arora et al. (2017) Arora, Sanjeev, Ge, Rong, Liang, Yingyu, Ma, Tengyu, and Zhang, Yi. Generalization and equilibrium in generative adversarial nets (GANs). arXiv preprint arXiv:1703.00573, 2017.
 Cornish et al. (2018) Cornish, Robert, Yang, Hongseok, and Wood, Frank. Towards a testable notion of generalization for generative adversarial networks. ICLR Submission, 2018.
 Diggle & Gratton (1984) Diggle, Peter J and Gratton, Richard J. Monte Carlo methods of inference for implicit statistical models. Journal of the Royal Statistical Society. Series B (Methodological), pp. 193–227, 1984.
 Goodfellow et al. (2014) Goodfellow, Ian, Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron, and Bengio, Yoshua. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
 Goodfellow (2014) Goodfellow, Ian J. On distinguishability criteria for estimating generative models. arXiv preprint arXiv:1412.6515, 2014.
 Gutmann et al. (2014) Gutmann, Michael U, Dutta, Ritabrata, Kaski, Samuel, and Corander, Jukka. Likelihoodfree inference via classification. arXiv preprint arXiv:1407.4981, 2014.
 Huszár (2015) Huszár, Ferenc. How (not) to train your generative model: Scheduled sampling, likelihood, adversary? arXiv preprint arXiv:1511.05101, 2015.
 Li & Malik (2018) Li, Ke and Malik, Jitendra. Implicit maximum likelihood estimation. arXiv preprint arXiv:1809.09087, 2018.
 Mohamed & Lakshminarayanan (2016) Mohamed, Shakir and Lakshminarayanan, Balaji. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016.
 Nowozin et al. (2016) Nowozin, Sebastian, Cseke, Botond, and Tomioka, Ryota. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pp. 271–279, 2016.
 Sinn & Rawat (2017) Sinn, Mathieu and Rawat, Ambrish. Non-parametric estimation of Jensen-Shannon divergence in generative adversarial network training. arXiv preprint arXiv:1705.09199, 2017.
Appendix A Monte Carlo Estimates of Expectations
We first show that a Monte Carlo estimate based on samples from the training data is an unbiased estimate of the expectation w.r.t. the empirical data distribution, i.e. $\mathbb{E}_{i \sim \mathrm{Cat}(N)}\!\left[f(x_i) \,\middle|\, x_1, \ldots, x_N\right] = \mathbb{E}_{x \sim \tilde{p}_{\mathrm{data}}}\!\left[f(x)\right]$.
Let $x_1, \ldots, x_N$ denote the training examples and let $\mathrm{Cat}(N)$ be a categorical distribution with $N$ categories and uniform probabilities over all categories, i.e. if $i \sim \mathrm{Cat}(N)$, then $\Pr(i = j) = 1/N$ for all $j \in \{1, \ldots, N\}$. Then $\frac{1}{m} \sum_{k=1}^{m} f(x_{i_k})$ with $i_1, \ldots, i_m \sim \mathrm{Cat}(N)$ is the Monte Carlo estimate based on samples from the training data. Note that the expectation is conditioned on $x_1, \ldots, x_N$, because training examples are only drawn once while collecting data, and new samples from the true data distribution are not drawn during training.
Note that $\mathbb{E}_{i \sim \mathrm{Cat}(N)}\!\left[f(x_i) \,\middle|\, x_1, \ldots, x_N\right]$ is a random variable (because $x_1, \ldots, x_N$ are random), and so cannot be equal to $\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[f(x)\right]$, which is a constant. We can obtain an unbiased estimate of the expectation w.r.t. the true data distribution by taking the expectation over the training examples $x_1, \ldots, x_N$:

$$\mathbb{E}_{x_1, \ldots, x_N \sim p_{\mathrm{data}}}\!\left[\mathbb{E}_{i \sim \mathrm{Cat}(N)}\!\left[f(x_i) \,\middle|\, x_1, \ldots, x_N\right]\right] = \mathbb{E}_{x_1, \ldots, x_N \sim p_{\mathrm{data}}}\!\left[\frac{1}{N} \sum_{j=1}^{N} f(x_j)\right] = \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[f(x)\right].$$

This essentially marginalizes out the randomness in the data collection process, but requires drawing fresh samples from the true data distribution during training.
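The distinction can be checked numerically in a toy setup (the test function and all sizes below are illustrative): conditioned on one fixed dataset, resampling from it is unbiased only for the empirical expectation, while drawing fresh datasets recovers the true expectation.

```python
import numpy as np

# Toy check: f(x) = x^2 with x ~ N(0, 1), so E[f(x)] = 1 exactly.
rng = np.random.default_rng(0)
f = np.square
true_mean = 1.0

# One fixed training set, as in practice:
dataset = rng.standard_normal(50)
idx = rng.integers(0, 50, size=200_000)       # resample from the dataset
mc_given_data = f(dataset[idx]).mean()
print(abs(mc_given_data - f(dataset).mean())) # ~0: unbiased w.r.t. empirical
print(abs(mc_given_data - true_mean))         # generally not 0

# Marginalizing over the data collection (fresh datasets each time)
# recovers the expectation w.r.t. the true distribution:
many_datasets = rng.standard_normal((4000, 50))
print(abs(f(many_datasets).mean() - true_mean))  # small
```

The second printed quantity is the gap the main text warns about: it depends on the luck of the single data collection and does not shrink no matter how many minibatches are drawn from the fixed dataset.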
Appendix B KLDivergence between Discrete and Continuous Distributions
KL-divergence is typically defined as:

$$D_{\mathrm{KL}}(P \,\|\, Q) = \int \log\!\left(\frac{dP}{dQ}\right) dP,$$

where $P$ and $Q$ denote two probability measures and $\frac{dP}{dQ}$ denotes the Radon-Nikodym derivative of $P$ w.r.t. $Q$. This notion of KL-divergence is well-defined when $P$ and $Q$ are both continuous or are both discrete. However, it is not well-defined when one distribution is discrete and the other is continuous. Therefore, under this definition, neither $D_{\mathrm{KL}}(\tilde{p}_{\mathrm{data}} \,\|\, p_\theta)$ nor $D_{\mathrm{KL}}(p_\theta \,\|\, \tilde{p}_{\mathrm{data}})$ is well-defined.
However, we will show below that under a slightly more general definition of KL-divergence, $D_{\mathrm{KL}}(P \,\|\, Q)$ is well-defined when $P$ is discrete and $Q$ is continuous, but not the other way around.
Consider the following notion of KL-divergence:

$$D_{\mathrm{KL}}(P \,\|\, Q) = \int \log\!\left(\frac{dP}{d\mu}\right) dF_P - \int \log\!\left(\frac{dQ}{d\nu}\right) dF_P,$$

where $p = \frac{dP}{d\mu}$ and $q = \frac{dQ}{d\nu}$ denote the probability mass functions (PMFs) or probability density functions (PDFs) of $P$ and $Q$ respectively (depending on whether each is discrete or continuous), $\mu$ and $\nu$ denote the reference measures for $P$ and $Q$, and $F_P$ and $F_Q$ denote the cumulative distribution functions (CDFs) of $P$ and $Q$ respectively. If a distribution is continuous, its reference measure is the Lebesgue measure on $\mathbb{R}$; if it is discrete, its reference measure is the counting measure on its support. We write out the definitions of $p$ and $q$ explicitly as Radon-Nikodym derivatives to emphasize the possibly different reference measures for $P$ and $Q$. Note that the second integral is w.r.t. the Lebesgue-Stieltjes measure associated to the CDF of $P$. It is easy to check that when $P$ and $Q$ are both continuous or are both discrete, this definition is equivalent to the previous definition.
Now we consider the case where $P$ is discrete and $Q$ is continuous. We first evaluate the first term:

$$\int \log\!\left(\frac{dP}{d\mu}\right) dF_P = \sum_{i} P(\{x_i\}) \log P(\{x_i\}),$$

where $\{x_i\}_i$ denotes the support of $P$. So, the first term is clearly well-defined. We then evaluate the second term:

$$\int \log\!\left(\frac{dQ}{d\nu}\right) dF_P = \int \log q \, dF_P.$$
Since $\log q$ is continuous and $F_P$ is of bounded variation, the Lebesgue-Stieltjes integral is equivalent to the Riemann-Stieltjes integral. From the definition of the Riemann-Stieltjes integral, we can see that $\int \log q \, dF_P$ evaluates to $\sum_{i} P(\{x_i\}) \log q(x_i)$.
Hence, when $P$ is discrete and $Q$ is continuous,

$$D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{i} P(\{x_i\}) \log\!\left(\frac{P(\{x_i\})}{q(x_i)}\right).$$

Note that while this final expression appears similar to the expression for the KL-divergence when both $P$ and $Q$ are discrete, here $q$ denotes the PDF of $Q$ rather than the PMF (which does not exist because $Q$ is continuous).
Therefore, under this definition of KL-divergence,

$$D_{\mathrm{KL}}(\tilde{p}_{\mathrm{data}} \,\|\, p_\theta) = \sum_{i=1}^{N} \frac{1}{N} \log\!\left(\frac{1/N}{p_\theta(x_i)}\right) = -\log N - \frac{1}{N} \sum_{i=1}^{N} \log p_\theta(x_i),$$

where $x_1, \ldots, x_N$ are the training examples. This is why minimizing $D_{\mathrm{KL}}(\tilde{p}_{\mathrm{data}} \,\|\, p_\theta)$ is equivalent to maximizing likelihood.
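This identity can be verified numerically in a toy setup (a standard-Gaussian model density and random "training examples", both illustrative): computing the divergence term by term from the discrete-$P$ formula gives exactly $-\log N$ minus the average log-likelihood.

```python
import numpy as np

# Check: sum_i (1/N) log((1/N) / q(x_i)) == -log N - mean_i log q(x_i).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=100)            # training examples
N = len(x)
log_q = -0.5 * x**2 - 0.5 * np.log(2 * np.pi) # log-density of N(0, 1)

kl = np.sum((1 / N) * (np.log(1 / N) - log_q))  # term-by-term formula
identity = -np.log(N) - log_q.mean()            # -log N - avg log-likelihood
print(np.isclose(kl, identity))
```

Since $-\log N$ does not depend on the model parameters, minimizing the left-hand side over $\theta$ is the same optimization problem as maximizing the average log-likelihood.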
On the other hand, when $P$ is continuous and $Q$ is discrete, the first term is:

$$\int \log\!\left(\frac{dP}{d\mu}\right) dF_P = \int p(x) \log p(x) \, dx.$$

This is just the negative of the differential entropy of $P$ and is clearly well-defined. The second term is:

$$\int \log\!\left(\frac{dQ}{d\nu}\right) dF_P = \int \log q(x) \, dF_P(x).$$

This is not well-defined, because $q(x)$ is not defined for $x \notin \mathrm{supp}(Q)$. This implies that under this definition, even though $D_{\mathrm{KL}}(\tilde{p}_{\mathrm{data}} \,\|\, p_\theta)$ is well-defined, the reverse KL-divergence, $D_{\mathrm{KL}}(p_\theta \,\|\, \tilde{p}_{\mathrm{data}})$, is not.