Improved generator objectives for GANs
Abstract
We present a framework for understanding GAN training as alternating density ratio estimation and approximate divergence minimization. This provides an interpretation for the mismatched GAN generator and discriminator objectives often used in practice, and explains the problem of poor sample diversity. Further, we derive a family of generator objectives that target arbitrary divergences without minimizing a lower bound, and use them to train generative image models that target either improved sample quality or greater sample diversity.
Ben Poole† (†Work done during an internship at Google Brain), Stanford University, poole@cs.stanford.edu; Alexander A. Alemi, Jascha Sohl-Dickstein, Anelia Angelova, Google Brain, {alemi, jaschasd, anelia}@google.com
Workshop on Adversarial Training, NIPS 2016, Barcelona, Spain.
1 Introduction
Generative adversarial networks (GANs) have become a popular method for fitting latent-variable directed generative models to complex datasets [2, 10, 13, 6]. While these models provide compelling visual samples, they are notoriously unstable and difficult to train and evaluate. Many recent papers have focused on new architectures and regularization techniques for improved stability and performance [13, 11, 4], but the objectives they optimize are fundamentally the same as the objectives in the original proposal [2].
The visual quality of samples from generative models trained with GANs often exceeds that of their variationally-trained counterparts [5, 12]. This is often credited to a difference in the divergence between the data and model distribution that each technique optimizes [15]. GAN theory shows that an idealized formulation optimizes Jensen-Shannon divergence, while VAEs optimize a lower bound on log-likelihood, corresponding to a lower bound on the KL divergence. Recent work has generalized the GAN theory to target reverse KL [14] and additional divergences (including KL, reverse KL, and JS), allowing GANs to target a diverse set of behaviors [9].
However, these new theoretical advances fail to provide a justification for the GAN objectives that are used in practice. In particular, the generator objective used in practice is different from the one that is theoretically justified [2, 9]. This raises the question as to whether the theory used to motivate GANs applies to these modified objectives, and how the use of mismatched generator and discriminator objectives influences the behavior of GANs in practice.
Here we present a new interpretation of GANs as alternating between steps of density ratio estimation and divergence minimization. This leads to a new understanding of the GAN generator objective that is used in practice as targeting a mode-seeking divergence that resembles reverse KL, thus providing an explanation for the mode dropping seen in practice. Furthermore, we introduce a set of new objectives for training the generator of a GAN that can trade off between sample quality and sample diversity, and show their effectiveness on CIFAR-10.
2 Theory
2.1 Background
Given samples from a data density, $p(x)$, we would like to learn a generative model with density $q(x)$ that matches the data density $p(x)$. Often the models we are interested in have intractable likelihoods, so that we can sample from $q(x)$ efficiently but cannot evaluate its likelihood. In the GAN framework [2], the intractable likelihood is bypassed by instead training a discriminator to classify between samples from the data and samples from the model. Given this discriminator, the parameters of the generative model are updated to increase the tendency of the discriminator to misclassify samples from the model as samples from the data. This iterative process pushes the model density towards the data density without ever explicitly computing the likelihood of a sample. More formally, the GAN training process is typically motivated as solving a minimax optimization problem:
$\min_{q} \max_{D} \; \mathbb{E}_{x \sim p(x)}\left[\log D(x)\right] + \mathbb{E}_{x \sim q(x)}\left[\log\left(1 - D(x)\right)\right]$   (1)
where $q(x)$ is the generative model distribution, $D(x)$ is the discriminator, and $p(x)$ is the data distribution. Fixing $q$, the optimal discriminator is $D^*(x) = \frac{p(x)}{p(x) + q(x)}$ [2]. Thus if the inner maximization over the discriminator is performed to completion for each step of $q$, the GAN objective is equivalent to minimizing:
$2\,\mathrm{JS}(p \,\|\, q) - \log 4$   (2)
This has led to the understanding that GANs minimize the Jensen-Shannon divergence between the data density and the model density, and is thought to underlie the difference in sample quality between GANs and VAEs [15]. However, this is not the objective that is used in practice, and we will see below that this alters the analysis.
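As a sanity check of this equivalence, the identity with the optimal discriminator plugged in can be verified numerically on small discrete distributions. This is a minimal sketch of ours, not from the paper; the distributions `p` and `q` are arbitrary illustrative choices:

```python
import numpy as np

def gan_value_optimal_D(p, q):
    """GAN objective E_p[log D*] + E_q[log(1 - D*)] with D*(x) = p(x)/(p(x)+q(x))."""
    d = p / (p + q)
    return np.sum(p * np.log(d)) + np.sum(q * np.log(1 - d))

def js(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.2, 0.5, 0.3])   # "data" distribution
q = np.array([0.4, 0.4, 0.2])   # "model" distribution
lhs = gan_value_optimal_D(p, q)
rhs = 2 * js(p, q) - np.log(4)
assert np.allclose(lhs, rhs)    # the GAN value at the optimal D is 2 JS - log 4
```

Note that the value is bounded below by $-\log 4$, attained only when $q = p$.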
Recently, [9] proposed an extension to GANs to target divergences other than Jensen-Shannon. They generalize the set of divergences a GAN can target to the family of f-divergences, where:
$D_f(p \,\|\, q) = \int dx \; q(x)\, f\!\left(\frac{p(x)}{q(x)}\right)$   (3)
and $f$ is a convex function with $f(1) = 0$. The key result they leverage from [8] is that any f-divergence can be lower-bounded by
$D_f(p \,\|\, q) \ge \mathbb{E}_{x \sim p}\left[T(x)\right] - \mathbb{E}_{x \sim q}\left[f^*\!\left(T(x)\right)\right]$   (4)
where $f^*$ is the Fenchel conjugate^1 of $f$, and $T(x)$ is the variational function, also known as the discriminator in the GAN literature.^2 Thus for any $T$, we have a lower bound on the divergence $D_f(p \,\|\, q)$ that recovers exactly the discriminator objective used in the standard GAN for a particular choice of $f$ (corresponding to Jensen-Shannon divergence up to constants). As this is a lower bound on the divergence, maximizing it with respect to the discriminator makes sense, and yields a tighter lower bound on the true divergence.

^1 The Fenchel conjugate is defined as $f^*(t) = \sup_{u}\left\{ut - f(u)\right\}$.
^2 We use $p$ as the data distribution and $q$ as the model distribution, which is the opposite of [9].
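The bound in Equation 4 can be made concrete for the KL case, where $f(u) = u \log u$ and $f^*(t) = e^{t-1}$: it is tight at the optimal variational function $T^*(x) = f'(p/q) = 1 + \log(p/q)$ and looser for any other $T$. A numerical sketch of ours (discrete distributions chosen for illustration):

```python
import numpy as np

p = np.array([0.2, 0.5, 0.3])   # "data" distribution
q = np.array([0.4, 0.4, 0.2])   # "model" distribution

f_star = lambda t: np.exp(t - 1.0)                 # Fenchel conjugate of f(u) = u log u
bound  = lambda T: np.sum(p * T) - np.sum(q * f_star(T))   # Eq. 4, discrete case

kl = np.sum(p * np.log(p / q))                     # true KL(p || q)
T_opt = 1.0 + np.log(p / q)                        # optimal variational function T* = f'(p/q)
T_bad = np.zeros(3)                                # an arbitrary suboptimal discriminator

assert np.isclose(bound(T_opt), kl)                # bound is tight at the optimum
assert bound(T_bad) <= kl                          # any other T gives a looser lower bound
```

Maximizing the bound over `T` is exactly the discriminator's job in this framework.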
However, the objective to optimize for the generative model, $q$, remains unclear. In both the original GAN paper [2] and the f-GAN paper [9], two objectives are proposed:
1. Minimize the lower bound in Equation 4. For standard GANs, this corresponds to minimizing the probability of the discriminator classifying a sample from the model as fake.
2. Optimize an alternative objective:

   $\max_{q} \; \mathbb{E}_{x \sim q}\left[T(x)\right]$   (5)

   For standard GANs, this corresponds to maximizing the log probability of the discriminator classifying a sample from the model as real.
The first approach minimizes a lower bound, and thus improvements in the objective can correspond to making $D_f(p \,\|\, q)$ smaller, or, more problematically, to making the lower bound on $D_f(p \,\|\, q)$ looser. In practice this leads to slower convergence, and thus the first objective is not widely used.
The second approach is empirically motivated in [2, 9] as speeding up training, and theoretically motivated by the observation that $q = p$ remains a fixed point of the learning dynamics. However, the behavior of this generator objective when the generative model does not have the capacity to realize the data density remains unclear. This is the regime we care about, as most generative models do not have the capacity to exactly model the data.
2.2 Discriminator as a density ratio estimator
To address the theoretical and practical issues, we first present a simple relationship between the discriminator and an estimate of the density ratio. Given known data and model densities, the optimal discriminator with respect to an f-divergence, $D_f(p \,\|\, q)$, was derived in [9] as:

$T^*(x) = f'\!\left(\frac{p(x)}{q(x)}\right)$   (6)

where $f'$ is the derivative of $f$. If $f'$ is invertible, we can reverse the relationship, and use the discriminator to recover the ratio of the data density to the model density:

$\frac{p(x)}{q(x)} = \left(f'\right)^{-1}\!\left(T^*(x)\right)$   (7)
In practice we don't have access to the optimal discriminator $T^*$, and instead use the current discriminator $T$ as an approximation.
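For the standard GAN discriminator this inversion is particularly simple: with $D = \sigma(V)$ for logit $V$, the ratio is $D/(1-D) = e^{V}$. The sketch below is our own illustration, with two 1-D Gaussians standing in for data and model, and the analytically optimal logit $V^*(x) = \log p(x) - \log q(x)$ plugged in rather than learned:

```python
import numpy as np

# Unit-variance Gaussians standing in for the data density p and model density q.
log_p = lambda x: -0.5 * (x - 1.0)**2 - 0.5 * np.log(2 * np.pi)
log_q = lambda x: -0.5 * (x + 1.0)**2 - 0.5 * np.log(2 * np.pi)

# For the GAN objective, the optimal discriminator is D*(x) = p/(p+q) = sigmoid(V*)
# with logit V*(x) = log p(x) - log q(x); here we use that analytic optimum.
x = np.linspace(-3, 3, 7)
V = log_p(x) - log_q(x)
D = 1.0 / (1.0 + np.exp(-V))

ratio_from_D = D / (1.0 - D)                 # invert the discriminator: p/q = D/(1-D)
true_ratio = np.exp(log_p(x) - log_q(x))
assert np.allclose(ratio_from_D, true_ratio) # density ratio recovered exactly
```

With a learned discriminator, `ratio_from_D` becomes an approximation whose quality depends on how well the inner maximization was performed.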
2.3 A new set of generator objectives
Given access to an approximate density ratio, we can now optimize any objective that depends only on samples from $p$ or $q$ and the value of the density ratio. Conveniently, f-divergences are a family of divergences that depend only on samples from one distribution and the density ratio! Given samples from $q$ and an estimate of the density ratio at each point, we can compute an estimate of the f-divergence, $D_{f^G}$, between $p$ and $q$:

$F^G = \mathbb{E}_{x \sim q}\!\left[f^G\!\left(\frac{p(x)}{q(x)}\right)\right] \approx \mathbb{E}_{x \sim q}\!\left[f^G\!\left(\left(f^{D\prime}\right)^{-1}\!\left(T(x)\right)\right)\right]$   (8)

where $F^G$ is the generator objective, $f^G$ is the divergence targeted for the generator, and $f^D$ is the divergence targeted for the discriminator. $f^G$ and $f^D$ need not be the same divergence. For non-optimal discriminators, this objective will be a biased approximation of the f-divergence, but is not guaranteed to be either an upper or lower bound on $D_{f^G}(p \,\|\, q)$.
Our new algorithm for GAN training iterates the following steps:
1. Optimize the discriminator, $T$, to maximize a lower bound on $D_{f^D}(p \,\|\, q)$ using Equation 4.

2. Optimize the generator, $q$, to minimize $D_{f^G}(p \,\|\, q)$, using the estimate of the density ratio from the current discriminator, $T$, in Equation 8.
While the first step is identical to the standard GAN training algorithm, the second step comprises a new generator update that can be used to fit a generative model to the data while targeting any divergence. In practice, we alternate single steps of optimization on each minibatch of data.
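The two steps above can be sketched on a toy discrete problem. This is our own minimal illustration, not the paper's training code: the inner discriminator maximization is replaced by the analytically optimal logit $V^* = \log p - \log q$, the generator is a softmax over four atoms, gradients are taken by finite differences, and the generator targets reverse KL (for this softmax family, the frozen-ratio gradient of $\mathbb{E}_q[-\log(p/q)]$ coincides with the true gradient of $\mathrm{KL}(q \,\|\, p)$):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.05, 0.45, 0.45, 0.05])   # bimodal "data" distribution on 4 atoms

def q_of(theta):
    """Softmax generator density."""
    e = np.exp(theta - theta.max())
    return e / e.sum()

def gen_loss(theta, V, f_G):
    """Step 2 objective: E_q[f_G(p/q)] with the ratio frozen at exp(V) (Eq. 8)."""
    return np.sum(q_of(theta) * f_G(np.exp(V)))

f_rkl = lambda u: -np.log(u)             # reverse KL: f(u) = -log u

theta = rng.normal(size=4)
eps, lr = 1e-5, 0.5
for _ in range(2000):
    # Step 1 (idealized): analytically optimal discriminator logit V* = log(p/q).
    V = np.log(p) - np.log(q_of(theta))
    # Step 2: finite-difference gradient descent on the generator parameters.
    grad = np.zeros(4)
    for i in range(4):
        d = np.zeros(4); d[i] = eps
        grad[i] = (gen_loss(theta + d, V, f_rkl) -
                   gen_loss(theta - d, V, f_rkl)) / (2 * eps)
    theta = theta - lr * grad

assert np.allclose(q_of(theta), p, atol=1e-3)   # the model density converges to p
```

In a real GAN, step 1 is a few stochastic gradient steps on Equation 4, and step 2 backpropagates through samples rather than through an explicit density.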
2.4 Related work
Several recent papers have identified novel objectives for GAN generators. [14] proposes a generator objective corresponding to $f^G$ being reverse KL, and shows that it improves performance on image super-resolution. [3] identifies the generator objective that corresponds to minimizing the KL divergence, but does not empirically evaluate this objective.
Concurrent with our work, two papers propose closely related GAN training algorithms. [16] directly estimates the density ratio $r(x)$ by optimizing a different discriminator objective, obtained by rewriting the variational bound in terms of the density ratio:

$\max_{r} \; \mathbb{E}_{x \sim p}\left[f'\!\left(r(x)\right)\right] - \mathbb{E}_{x \sim q}\left[f^*\!\left(f'\!\left(r(x)\right)\right)\right]$   (9)

This approach requires learning a network that directly outputs the density ratio, which can be very small or very large, and in practice the networks that parameterize the density ratio must be clipped [16]. We found estimating a function of the density ratio to be more stable; in particular, using the GAN discriminator objective, the discriminator estimates $\log \frac{p(x)}{q(x)}$. However, there are likely ways of combining these approaches in the future to directly estimate stable functions of the density ratio independent of the discriminator divergence.
More generically, the training process can be thought of as two interacting systems: one that identifies a statistic of the model and data, and another that uses that statistic to make the model closer to the data. [7] discusses many approaches similar to the one presented here, but does not present experimental results.
3 Interpreting the GAN generator objective used in practice
We can use our new family of generator objectives to better understand the objective that is used in practice (Eq. 5). Given that $f^D$ is the standard GAN divergence, we can solve for the generator divergence, $f^G$, such that minimizing $D_{f^G}(p \,\|\, q)$ in Equation 8 recovers the practical generator objective, yielding:

$f^G(u) = \log\left(\frac{1 + u}{2u}\right)$   (10)

Thus minimizing the practical generator objective corresponds to minimizing an approximation of this f-divergence between the data density and the model density, not minimizing the Jensen-Shannon divergence.
To better understand the behavior of this divergence, we fit a single Gaussian to a mixture of two Gaussians in one dimension (Figure 1). We find that the GAN divergence optimized in practice is even more mode-seeking than JS and reverse KL. This behavior is likely the cause of many problems experienced with GANs in practice: samples often fail to cover the diversity of the dataset.
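The mode-seeking character of the practical GAN divergence (Eq. 10), $f^G(u) = \log\frac{1+u}{2u}$, can be read off from the shape of $f^G$: as $u = p/q \to \infty$ (a missed mode), the reward is bounded by $\log 2$, while as $u \to 0$ (a spurious sample in a low-data-density region), the penalty grows without bound. KL behaves in the opposite way. A numerical illustration of ours, separate from the paper's Figure 1 experiment:

```python
import numpy as np

# f(u) for each divergence, with u = p(x)/q(x); each satisfies f(1) = 0.
f_gan_practice = lambda u: np.log((1 + u) / (2 * u))               # Eq. 10
f_reverse_kl   = lambda u: -np.log(u)
f_kl           = lambda u: u * np.log(u)

# Missed mode (u large): the practical GAN divergence gains at most log 2 from
# covering it, while KL's penalty for ignoring it grows without bound.
assert abs(f_gan_practice(1e6) + np.log(2)) < 1e-5   # bounded reward
assert f_reverse_kl(1e6) < -10                       # unbounded reward
assert f_kl(1e6) > 1e6                               # huge penalty for missing mass

# Spurious sample (u small): the practical GAN divergence and reverse KL punish
# it harshly, while KL barely cares.
assert f_gan_practice(1e-6) > 10
assert abs(f_kl(1e-6)) < 1e-4
```

This asymmetry is what drives a capacity-limited model toward covering a few modes well rather than all modes approximately.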
4 Experiments
We evaluate our proposed generator objectives at improving the sample quality and diversity on CIFAR-10. All models were trained using identical architectures and hyperparameters (see Appendix B). The discriminator in all models was trained to optimize the normal GAN objective, corresponding to maximizing Equation 4 with $f^D$ the GAN divergence, and using $T(x) = g_f(V(x))$, with $V(x)$ the unconstrained output of a neural network and the output activation $g_f$ used to constrain the range of $T$ as in [9]. For each model, we optimized a different generator objective by using different choices of $f^G$ in Equation 8. The generator objectives are derived and listed in Appendix A.
In order to highlight the effect the generator objective can have on the generated samples, we targeted several generator objectives at various divergences, as well as the traditional generator objective. In Figure 2, we see that the generator objective has a large impact on sample diversity. In particular, for very mode-seeking divergences (reverse KL and the practical GAN divergence of Eq. 10), the samples fail to capture the diversity of class labels in the dataset, as is immediately visually obvious from the overrepresentation of greens and browns in the generated samples. For more mode-covering divergences (squared Hellinger, KL), we see much better diversity in colors and sampled classes, without any noticeable degradation in sample quality.
5 Discussion
Our work presents a new interpretation of GAN training, and a new set of generator objectives for GANs that can be used to target any f-divergence. We demonstrate that targeting JS for the discriminator and other divergences for the generator yields qualitatively different samples, with mode-seeking objectives producing less diverse samples, and mode-covering objectives producing more diverse samples. However, training with very mode-seeking objectives does not yield especially high-quality samples. Similarly, targeting mode-covering objectives like KL improves sample diversity, while sample quality does not visibly worsen. Visual evaluation is a potentially fraught measure of sample quality, however. Future work will be needed to investigate the impact of alternate generator objectives and to provide better quantitative metrics and understanding of what factors drive sample quality and diversity in GANs.
Acknowledgments
We thank Augustus Odena for feedback on the manuscript, Vincent Dumoulin for the baseline code, and Luke Metz, Luke Vilnis, and the Google Brain team for valuable and insightful discussions.
References
 [1] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
 [2] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc., 2014.
 [3] Ian J Goodfellow. On distinguishability criteria for estimating generative models. arXiv preprint arXiv:1412.6515, 2014.
 [4] Ferenc Huszár. An alternative update rule for generative adversarial networks. http://www.inference.vc/an-alternative-update-rule-for-generative-adversarial-networks/, 2015.
 [5] Diederik P Kingma and Max Welling. Autoencoding variational bayes, 2013.
 [6] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network, 2016.
 [7] Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016.
 [8] XuanLong Nguyen, Martin J Wainwright, and Michael I Jordan. Estimating divergence functionals and the likelihood ratio by penalized convex risk minimization. In NIPS, pages 1089–1096, 2007.
 [9] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.
 [10] Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier gans. arXiv preprint arXiv:1610.09585, 2016.
 [11] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
 [12] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and variational inference in deep latent gaussian models. In International Conference on Machine Learning. Citeseer, 2014.
 [13] Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.
 [14] Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution, 2016.
 [15] L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. In International Conference on Learning Representations, Apr 2016.
 [16] Masatoshi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. Generative adversarial nets from a density ratio estimation perspective. arXiv preprint arXiv:1610.02920, 2016.
Appendix A Deriving the generator objectives
Here we derive the generator objectives when the discriminator divergence $f^D$ is the GAN divergence, corresponding to the standard GAN discriminator objective. As in [9], we parameterize the discriminator as $T(x) = g_f(V(x))$, where $V(x)$ is the unconstrained output of a neural network and the output activation $g_f$ has the same range as $f'$. For the GAN case, this corresponds to $g_f(v) = -\log\left(1 + e^{-v}\right) = \log \sigma(v)$, where $\sigma$ is the logistic sigmoid.

First, we can compute the inverse of the derivative of $f^D$, which is used to estimate the density ratio: $\left(f^{D\prime}\right)^{-1}(t) = \frac{e^t}{1 - e^t}$.

For GANs, the discriminator is parameterized as $T(x) = \log \sigma(V(x)) = \log D(x)$, so we can compute the density ratio as:

$\frac{p(x)}{q(x)} = \left(f^{D\prime}\right)^{-1}\!\left(T(x)\right) = \frac{e^{T(x)}}{1 - e^{T(x)}} = \frac{D(x)}{1 - D(x)} = e^{V(x)}$   (11)

Given this estimate of the density ratio, we can then compute the generator objective as $F^G = \mathbb{E}_{x \sim q}\left[f^G\!\left(e^{V(x)}\right)\right]$. The table below contains the generator objectives for several choices of $f^G$ given this density ratio estimate:

| Divergence | $f^G(u)$ | Generator objective $\mathbb{E}_{x \sim q}[\,\cdot\,]$ |
|---|---|---|
| KL | $u \log u$ | $V e^{V}$ |
| Reverse KL | $-\log u$ | $-V$ |
| Squared Hellinger | $(\sqrt{u} - 1)^2$ | $(e^{V/2} - 1)^2$ |
| JS | $u \log u - (1+u)\log\frac{1+u}{2}$ | $V e^{V} - (1 + e^{V})\log\frac{1 + e^{V}}{2}$ |
| GAN (practice, Eq. 10) | $\log\frac{1+u}{2u}$ | $\log\left(1 + e^{-V}\right) - \log 2$ |

(12)
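Using the GAN-discriminator ratio estimate $p/q = e^{V}$ from Eq. 11, the generator objectives for several common f-divergences can be written directly in terms of the discriminator's unconstrained output $V$. The sketch below cross-checks our own simplified closed forms against the definition $f^G(e^{V})$; it is an illustration under that ratio assumption, not code from the paper:

```python
import numpy as np

# Generator objectives as functions of the discriminator logit V, using the
# GAN-discriminator ratio estimate p/q = exp(V). The closed forms are our own
# simplifications of f_G(exp(V)).
obj = {
    "KL":           lambda V: V * np.exp(V),                       # f(u) = u log u
    "reverse KL":   lambda V: -V,                                  # f(u) = -log u
    "squared Hel.": lambda V: (np.exp(V / 2) - 1) ** 2,            # f(u) = (sqrt(u)-1)^2
    "GAN-practice": lambda V: np.log(1 + np.exp(-V)) - np.log(2),  # f(u) = log((1+u)/(2u))
}
f = {
    "KL":           lambda u: u * np.log(u),
    "reverse KL":   lambda u: -np.log(u),
    "squared Hel.": lambda u: (np.sqrt(u) - 1) ** 2,
    "GAN-practice": lambda u: np.log((1 + u) / (2 * u)),
}

V = np.linspace(-3, 3, 13)
for name in obj:
    # Each closed form must equal f_G evaluated at the ratio estimate exp(V).
    assert np.allclose(obj[name](V), f[name](np.exp(V)))
```

Note that the GAN-practice objective is just `softplus(-V) - log 2`, i.e., $-\log D(x)$ up to a constant, matching the generator update used in standard GAN implementations.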
Appendix B CIFAR10 architecture details
This is a slightly modified version of the architecture from [1]. Input images were scaled from to .
| Operation | Kernel | Strides | Feature maps | BN? | Dropout | Nonlinearity |
|---|---|---|---|---|---|---|
| – input | | | | | | |
| Transposed convolution | | | | | 0.0 | Leaky ReLU |
| Transposed convolution | | | | | 0.0 | Leaky ReLU |
| Transposed convolution | | | | | 0.0 | Leaky ReLU |
| Transposed convolution | | | | | 0.0 | Leaky ReLU |
| Transposed convolution | | | | | 0.0 | Leaky ReLU |
| Convolution | | | | | 0.0 | Leaky ReLU |
| Convolution | | | | | 0.0 | Sigmoid |
| – input | | | | | | |
| Convolution | | | | | 0.2 | Maxout |
| Convolution | | | | | 0.5 | Maxout |
| Convolution | | | | | 0.5 | Maxout |
| Convolution | | | | | 0.5 | Maxout |
| Convolution | | | | | 0.5 | Maxout |
| Convolution | | | | | 0.5 | Maxout |
| Convolution | | | | | 0.5 | Maxout |
| Convolution | | | | | 0.5 | Linear |
Optimizer  Adam (, , )  
Batch size  128  
Leaky ReLU slope, maxout pieces  0.1, 2  
Weight, bias initialization  Isotropic gaussian (, ), Constant() 