Improved generator objectives for GANs

Ben Poole*
Stanford University

Alexander A. Alemi, Jascha Sohl-Dickstein, Anelia Angelova
Google Brain
{alemi, jaschasd, anelia}

*Work done during an internship at Google Brain.
We present a framework to understand GAN training as alternating density ratio estimation and approximate divergence minimization. This provides an interpretation for the mismatched GAN generator and discriminator objectives often used in practice, and explains the problem of poor sample diversity. Further, we derive a family of generator objectives that target arbitrary $f$-divergences without minimizing a lower bound, and use them to train generative image models that target either improved sample quality or greater sample diversity.




Workshop on Adversarial Training, NIPS 2016, Barcelona, Spain.

1 Introduction

Generative adversarial networks (GANs) have become a popular method for fitting latent-variable directed generative models to complex datasets [2, 10, 13, 6]. While these models provide compelling visual samples, they are notoriously unstable and difficult to train and evaluate. Many recent papers have focused on new architectures and regularization techniques for improved stability and performance [13, 11, 4], but the objectives they optimize are fundamentally the same as the objectives in the original proposal [2].

The visual quality of samples from generative models trained with GANs often exceeds those of their variationally-trained counterparts [5, 12]. This is often credited to a difference in the divergence between the data and model distribution that each technique optimizes [15]. GAN theory shows that an idealized formulation optimizes Jensen-Shannon divergence, while VAEs optimize a lower bound on log-likelihood, corresponding to a lower bound on the KL divergence. Recent work has generalized the GAN theory to target reverse KL [14] and additional $f$-divergences (including KL, reverse KL, and JS), allowing GANs to target a diverse set of behaviors [9].

However, these new theoretical advances fail to provide a justification for the GAN objectives that are used in practice. In particular, the generator objective used in practice is different from the one that is theoretically justified [2, 9]. This raises the question of whether the theory used to motivate GANs applies to these modified objectives, and how the use of mismatched generator and discriminator objectives influences the behavior of GANs in practice.

Here we present a new interpretation of GANs as alternating between steps of density ratio estimation, and divergence minimization. This leads to a new understanding of the GAN generator objective that is used in practice as targeting a mode-seeking divergence that resembles reverse KL, thus providing an explanation for the mode dropping seen in practice. Furthermore, we introduce a set of new objectives for training the generator of a GAN that can trade off between sample quality and sample diversity, and show their effectiveness on CIFAR-10.

2 Theory

2.1 Background

Given samples from a data density, $p(x)$, we would like to learn a generative model with density $q(x)$ that matches the data density $p(x)$. Often the models we are interested in have intractable likelihoods, so that we can sample from $q(x)$ efficiently but cannot evaluate the likelihood of a sample. In the GAN framework [2], the intractable likelihood is bypassed by instead training a discriminator to classify between samples from the data and samples from the model. Given this discriminator, the parameters of the generative model are updated to increase the tendency of the discriminator to misclassify samples from the model as samples from the data. This iterative process pushes the model density towards the data density without ever explicitly computing the likelihood of a sample. More formally, the GAN training process is typically motivated as solving a minimax optimization problem:

$$\min_{q} \max_{D} \; \mathbb{E}_{x \sim p(x)}[\log D(x)] + \mathbb{E}_{x \sim q(x)}[\log(1 - D(x))] \qquad (1)$$

where $q$ is the generative model distribution, $D$ is the discriminator, and $p$ is the data distribution. Fixing $q$, the optimal discriminator is $D^*(x) = \frac{p(x)}{p(x) + q(x)}$ [2]. Thus if the inner maximization over the discriminator is performed to completion for each step of $q$, the GAN objective is equivalent to minimizing:

$$2\,\mathrm{JS}(p \,\|\, q) - \log 4 \qquad (2)$$

where $\mathrm{JS}$ denotes the Jensen-Shannon divergence.
This has led to the understanding that GANs minimize the Jensen-Shannon divergence between the data density and the model density, and is thought to underlie the difference in sample quality between GANs and VAEs [15]. However, this is not the objective that is used in practice, and we will see below that this alters the analysis.
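The Jensen-Shannon identity above is easy to verify numerically on a small discrete example (a sketch using NumPy; the three-point distributions are arbitrary choices): plugging the optimal discriminator $D^* = p/(p+q)$ into the minimax objective recovers exactly $2\,\mathrm{JS}(p\,\|\,q) - \log 4$.

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])      # "data" distribution
q = np.array([0.2, 0.3, 0.5])      # "model" distribution
d_star = p / (p + q)               # optimal discriminator D*

# minimax objective evaluated at D*
value = np.sum(p * np.log(d_star)) + np.sum(q * np.log(1 - d_star))

# Jensen-Shannon divergence via the mixture m = (p + q)/2
m = 0.5 * (p + q)
js = 0.5 * np.sum(p * np.log(p / m)) + 0.5 * np.sum(q * np.log(q / m))

assert np.isclose(value, 2 * js - np.log(4))
```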

Recently, [9] proposed an extension to GANs to target divergences other than Jensen-Shannon. They generalize the set of divergences a GAN can target to the family of $f$-divergences, where:

$$D_f(p \,\|\, q) = \mathbb{E}_{x \sim q(x)}\!\left[f\!\left(\frac{p(x)}{q(x)}\right)\right] \qquad (3)$$

and $f$ is a convex function with $f(1) = 0$. The key result they leverage from [8] is that any $f$-divergence can be lower-bounded by

$$D_f(p \,\|\, q) \;\ge\; \sup_{T} \; \mathbb{E}_{x \sim p(x)}[T(x)] - \mathbb{E}_{x \sim q(x)}[f^*(T(x))] \qquad (4)$$

where $f^*$ is the Fenchel conjugate of $f$ (defined as $f^*(t) = \sup_u \, [ut - f(u)]$), and $T$ is the variational function, also known as the discriminator in the GAN literature (we use $p$ as the data distribution and $q$ as the model distribution, which is the opposite of [9]). Thus for any $T$, we have a lower bound on the divergence that recovers exactly the discriminator objective used in the standard GAN when $f = f_{\mathrm{GAN}}$ (with $f_{\mathrm{GAN}}(u) = u \log u - (u+1)\log(u+1)$, up to constants) and $T(x) = \log D(x)$. As this is a lower bound on the $f$-divergence, maximizing it with respect to the discriminator makes sense, and yields a tighter lower bound on the true divergence.
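The variational bound can be checked on a toy discrete problem (a sketch; KL is used as the target $f$-divergence, with $f(u) = u \log u$ and Fenchel conjugate $f^*(t) = e^{t-1}$): any $T$ yields a lower bound on $D_f$, with equality at the optimum $T^* = f'(p/q)$.

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])             # data distribution
q = np.array([0.2, 0.3, 0.5])             # model distribution
kl = np.sum(p * np.log(p / q))            # exact KL(p || q)

f_star = lambda t: np.exp(t - 1)          # Fenchel conjugate of f(u) = u log u

def bound(T):
    # variational lower bound: E_p[T] - E_q[f*(T)]
    return np.sum(p * T) - np.sum(q * f_star(T))

T_opt = 1 + np.log(p / q)                 # T* = f'(p/q), since f'(u) = 1 + log u
T_bad = np.zeros(3)                       # an arbitrary suboptimal discriminator

assert np.isclose(bound(T_opt), kl)       # bound is tight at the optimum
assert bound(T_bad) <= kl                 # any other T gives a looser lower bound
```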

However, the objective to optimize for the generative model, $q$, remains unclear. In both the original GAN paper [2] and the $f$-GAN paper [9], two objectives are proposed (denoted here as $L_1$ and $L_2$):

  1. $L_1$: Minimize the lower bound in Equation 4. For standard GANs, this corresponds to minimizing the probability of the discriminator classifying a sample from the model as fake.

  2. $L_2$: Optimize an alternative objective:

$$L_2 = -\mathbb{E}_{x \sim q(x)}[T(x)] \qquad (5)$$

    For standard GANs, this corresponds to maximizing the log probability of the discriminator classifying a sample from the model as real.

The first approach minimizes a lower bound, and thus improvements in the objective can correspond to making $D_f(p \,\|\, q)$ smaller or, more problematically, to making the lower bound on $D_f(p \,\|\, q)$ looser. In practice this leads to slower convergence, and thus the first objective is not widely used.

The second approach is empirically motivated in [2, 9] as speeding up training, and theoretically motivated by the observation that $q = p$ remains a fixed point of the learning dynamics. However, the behavior of this generator objective when the generative model does not have the capacity to realize the data density remains unclear. This is the regime we care about, as most generative models do not have the capacity to exactly model the data.

2.2 Discriminator as a density ratio estimator

To address these theoretical and practical issues, we first present a simple relationship between the discriminator and an estimate of the density ratio. Given known data and model densities, the optimal discriminator with respect to an $f$-divergence, $T^*(x)$, was derived in [9] as:

$$T^*(x) = f'\!\left(\frac{p(x)}{q(x)}\right) \qquad (6)$$

where $f'$ is the derivative of $f$. If $f'$ is invertible, we can reverse the relationship, and use the discriminator to recover the ratio of the data density to the model density:

$$\frac{p(x)}{q(x)} = (f')^{-1}\!\left(T^*(x)\right) \qquad (7)$$
In practice we don’t have access to the optimal discriminator $T^*$, and instead use the current discriminator $T$ as an approximation.
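For the standard GAN this inversion is especially simple (a small numerical check; the three-point distributions are arbitrary): the optimal discriminator satisfies $D^* = \sigma(\log(p/q))$, so $D^*/(1 - D^*)$ recovers the density ratio exactly.

```python
import numpy as np

sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

p = np.array([0.5, 0.3, 0.2])     # data distribution
q = np.array([0.2, 0.3, 0.5])     # model distribution

v = np.log(p / q)                 # logit of the optimal GAN discriminator
d = sigmoid(v)                    # D*(x) = p / (p + q)
ratio = d / (1 - d)               # inverting the discriminator recovers p/q

assert np.allclose(d, p / (p + q))
assert np.allclose(ratio, p / q)
```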

2.3 A new set of generator objectives

Given access to an approximate density ratio $\hat r(x) = (f')^{-1}(T(x))$, we can now optimize any objective that depends only on samples from $p$ or $q$ and the value of the density ratio. Conveniently, $f$-divergences are a family of divergences that depend only on samples from one distribution and the density ratio! Given samples from $q$ and an estimate of the density ratio at each point, we can compute an estimate of the $f$-divergence $D_g(p \,\|\, q)$ between $p$ and $q$:

$$\hat D_g(p \,\|\, q) = \mathbb{E}_{x \sim q(x)}\!\left[g\!\left(\hat r(x)\right)\right] \qquad (8)$$

where $\hat D_g$ is the generator objective, $g$ defines the $f$-divergence targeted by the generator, and $f$ defines the $f$-divergence targeted by the discriminator; $g$ and $f$ need not be the same. For non-optimal discriminators, this objective will be a biased approximation of the $f$-divergence, but is not guaranteed to be either an upper or lower bound on $D_g(p \,\|\, q)$.
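As a sanity check of Equation 8 (a sketch with assumed toy densities, using the exact density ratio in place of a discriminator estimate): for $p = \mathcal{N}(0,1)$ and $q = \mathcal{N}(1,1)$, the closed-form $\mathrm{KL}(p\,\|\,q) = 1/2$, and the Monte Carlo estimate $\mathbb{E}_q[f(r)]$ from samples of $q$ recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)

# p = N(0,1), q = N(1,1); KL(p||q) = 0.5 in closed form
log_ratio = lambda x: -0.5 * x**2 + 0.5 * (x - 1.0)**2   # log p(x) - log q(x)

x = rng.normal(1.0, 1.0, size=200_000)                   # samples from q
r = np.exp(log_ratio(x))                                 # exact density ratio

f_kl = lambda u: u * np.log(u)                           # generator of KL(p||q)
estimate = np.mean(f_kl(r))                              # Monte Carlo E_q[f(p/q)]

assert abs(estimate - 0.5) < 0.05
```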

Our new algorithm for GAN training iterates the following steps:

  1. Optimize the discriminator, $T$, to maximize a lower bound on $D_f(p \,\|\, q)$ using Equation 4.

  2. Optimize the generator, $q$, to minimize $\hat D_g(p \,\|\, q)$, using the estimate of the density ratio from the current discriminator, $\hat r(x) = (f')^{-1}(T(x))$, in Equation 8.

While the first step is identical to the standard -GAN training algorithm, the second step comprises a new generator update that can be used to fit a generative model to the data while targeting any -divergence. In practice, we alternate single steps of optimization on each minibatch of data.
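The alternating procedure can be sketched end-to-end on a one-dimensional toy problem (an illustrative, assumption-laden example, not the paper's setup): the data is $\mathcal{N}(2,1)$, the generator is $\mathcal{N}(\mu,1)$ with a single learnable mean, and a logistic-regression discriminator with logit $v(x) = wx + b$ is flexible enough to represent $\log(p/q)$ exactly. With the GAN parameterization the ratio estimate is $e^{v(x)}$, and targeting $g(u) = -\log u$ (reverse KL) with reparameterized samples $x = \mu + \epsilon$ gives the generator gradient $\partial \hat D_g / \partial \mu = -w$.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy problem: data p = N(2, 1); generator q = N(mu, 1) with learnable mean mu
mu = 0.0
w, b = 0.0, 0.0   # discriminator logit v(x) = w*x + b; optimal logit is log p(x)/q(x)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    x_p = rng.normal(2.0, 1.0, size=128)        # batch of data samples
    x_q = mu + rng.normal(0.0, 1.0, size=128)   # reparameterized model samples

    # step 1: ascend the GAN lower bound (logistic-regression log-likelihood)
    d_p = sigmoid(w * x_p + b)   # discriminator on data (should -> 1)
    d_q = sigmoid(w * x_q + b)   # discriminator on model samples (should -> 0)
    grad_w = np.mean((1.0 - d_p) * x_p) - np.mean(d_q * x_q)
    grad_b = np.mean(1.0 - d_p) - np.mean(d_q)
    w += 0.1 * grad_w
    b += 0.1 * grad_b

    # step 2: descend the generator objective E_q[g(r)], with g(u) = -log u
    # (reverse KL) and r(x) = exp(w*x + b); by the pathwise derivative,
    # d/dmu E[-(w*(mu + eps) + b)] = -w, so gradient descent adds +w to mu
    mu += 0.05 * w
```

Run as-is, $\mu$ moves from 0 toward the data mean 2, and the discriminator slope $w$ shrinks toward 0 as the two densities match.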

2.4 Related work

Several recent papers have identified novel objectives for GAN generators. [14] proposes a generator objective corresponding to $g$ being reverse KL, and shows that it improves performance on image super-resolution. [3] identifies the generator objective that corresponds to minimizing the KL divergence, but does not empirically evaluate this objective.

Concurrent with our work, two papers propose closely related GAN training algorithms. [16] directly estimates the density ratio by optimizing a different discriminator objective that corresponds to rewriting the variational function in terms of the density ratio:

$$\sup_{r} \; \mathbb{E}_{x \sim p(x)}\!\left[f'(r(x))\right] - \mathbb{E}_{x \sim q(x)}\!\left[f^*\!\left(f'(r(x))\right)\right] \qquad (9)$$

This approach requires learning a network that directly outputs the density ratio, which can be very small or very large, and in practice the networks that parameterize the density ratio must be clipped [16]. We found estimating a function of the density ratio to be more stable; in particular, when using the GAN discriminator objective, the discriminator estimates the log density ratio $\log\big(p(x)/q(x)\big)$. However, there are likely ways of combining these approaches in the future to directly estimate stable functions of the density ratio independent of the discriminator divergence.

More generally, the training process can be thought of as two interacting systems: one that identifies a statistic of the model and data, and another that uses that statistic to move the model closer to the data. [7] discusses many approaches similar to the one presented here, but does not present experimental results.

3 Interpreting the GAN generator objective used in practice

We can use our new family of generator objectives to better understand the generator objective that is used in practice (Eq. 5). Given that $f$ is the standard GAN divergence, we can solve for the generator divergence $g_{\mathrm{GAN}}$ such that the estimate $\hat D_{g_{\mathrm{GAN}}}(p \,\|\, q)$ in Equation 8 matches the objective used in practice, yielding:

$$g_{\mathrm{GAN}}(u) = \log\!\left(1 + \frac{1}{u}\right) = \log(1 + u) - \log u \qquad (10)$$

where we omit the additive constant $-\log 2$ that would make $g_{\mathrm{GAN}}(1) = 0$. Thus minimizing the practical generator objective corresponds to minimizing an approximation of the divergence $D_{g_{\mathrm{GAN}}}(p \,\|\, q)$ between the data density and the model density, not minimizing the Jensen-Shannon divergence.
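This correspondence can be verified numerically (a sketch; the additive constant $\log 2$ normalization is our choice): with the optimal discriminator $D = r/(1+r)$, the practical loss $-\mathbb{E}_q[\log D]$ equals $\mathbb{E}_q[g(r)]$ for $g(u) = \log(1 + 1/u)$, and $g$ is convex with $g(1) = \log 2$, so $g(u) - \log 2$ is a valid $f$-divergence generator.

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])          # data distribution
q = np.array([0.2, 0.3, 0.5])          # model distribution

d = p / (p + q)                        # optimal discriminator
r = p / q                              # density ratio

# practical generator loss: -E_q[log D]
loss = -np.sum(q * np.log(d))

# the same quantity as E_q[g(r)] with g(u) = log(1 + 1/u)
g = lambda u: np.log(1 + 1 / u)
assert np.isclose(loss, np.sum(q * g(r)))

# g is convex (positive second differences) and g(1) = log 2,
# so g(u) - log 2 is a valid f-divergence generator function
u = np.linspace(0.1, 5, 100)
assert np.all(np.diff(g(u), 2) > 0)
assert np.isclose(g(1.0), np.log(2))
```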

To better understand the behavior of this divergence, we fit a single Gaussian to a mixture of two Gaussians in one dimension (Figure 1). We find that the GAN divergence optimized in practice is even more mode-seeking than JS and reverse KL. This behavior is likely the cause of many problems experienced with GANs in practice: samples often fail to cover the diversity of the dataset.
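The mode-seeking behavior can also be seen directly from the shifted generator functions $f(u) - f'(1)(u - 1)$, which are nonnegative with a minimum at $u = 1$ and therefore pointwise comparable (a sketch; the evaluation points are arbitrary): the practical GAN generator penalizes regions where the model over-covers ($q \gg p$, small $u$) like reverse KL, while being far more tolerant than KL of missed data modes ($p \gg q$, large $u$).

```python
import numpy as np

# shifted generators f(u) - f'(1)(u - 1): each is nonnegative with minimum at u = 1,
# which makes the pointwise penalties comparable across divergences
kl  = lambda u: u * np.log(u) - (u - 1)                      # KL(p||q)
rkl = lambda u: -np.log(u) + (u - 1)                         # reverse KL
gan = lambda u: np.log(1 + 1/u) - np.log(2) + (u - 1) / 2    # practical GAN objective

u_small = 0.05   # model places mass where the data has little (q >> p)
u_large = 20.0   # data mass the model misses (p >> q)

# the practical GAN objective punishes q >> p harshly, like reverse KL ...
assert gan(u_small) > kl(u_small)
assert rkl(u_small) > kl(u_small)
# ... but is far more tolerant of missing data modes than KL
assert gan(u_large) < kl(u_large)
```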

Figure 1: The GAN generator objective used in practice is mode-seeking when fit to a mixture of two Gaussians in one dimension. (a) Value of the divergence function $g(u)$ as a function of the density ratio $u = p/q$. The behavior of the GAN objective used in practice resembles reverse KL when the model density is greater than the data density. (b) Learned densities when fitting a single Gaussian generative model to a mixture of two Gaussians (data, black). KL and JS are more mode-covering, learning a generative model with larger variance that covers both modes of the data density, while reverse KL (RKL) and the GAN generator objective used in practice (GAN) are more mode-seeking, with smaller variance that covers only the higher-density mode.
(c) GAN (typical objective)
(d) Reverse KL
(e) Squared Hellinger
(f) KL
Figure 2: Different generator objectives yield different degrees of sample diversity. As we move from mode-seeking $f$-divergences to mode-covering ones, we see visual evidence of the increase in sample diversity, without a noticeable decrease in sample quality. In particular, note the overabundance of green and brown tones under the most mode-seeking objectives. Sub-captions give the targeted generator divergence and are ordered from most mode-seeking to most mode-covering. In all cases, the discriminator was trained using the standard GAN objective.

4 Experiments

We evaluate our proposed generator objectives at improving the sample quality and diversity on CIFAR-10. All models were trained using identical architectures and hyperparameters (see Appendix B). The discriminator in all models was trained to optimize the normal GAN objective, corresponding to maximizing Equation 4 with $f = f_{\mathrm{GAN}}$, and using $T(x) = g_f(V(x))$, with $V$ being the output of a neural network and $g_f$ an output activation used to constrain the range of $T$ as in [9]. For each model, we optimized a different generator objective by using different values for $g$ in Equation 8. The generator objectives are derived and listed in Appendix A.

In order to highlight the effect the generator objective can have on the generated samples, we targeted several generator divergences, as well as the traditional generator objective (Eq. 5). In Figure 2, we see that the generator objective has a large impact on sample diversity. In particular, for very mode-seeking divergences (the practical GAN objective and reverse KL), the samples fail to capture the diversity of class labels in the dataset, as is immediately visually obvious from the over-representation of greens and browns in the generated samples. For more mode-covering divergences (squared Hellinger and KL), we see much better diversity in colors and sampled classes, without any noticeable degradation in sample quality.

5 Discussion

Our work presents a new interpretation of GAN training, and a new set of generator objectives for GANs that can be used to target any $f$-divergence. We demonstrate that targeting JS for the discriminator and other divergences for the generator yields qualitatively different samples, with mode-seeking objectives producing less diverse samples and mode-covering objectives producing more diverse samples. However, training with very mode-seeking objectives did not yield markedly higher-quality samples, and targeting mode-covering objectives like KL improved sample diversity without visibly worsening sample quality. Visual evaluation of samples is, however, a potentially fraught measure of quality. Future work will be needed to investigate the impact of alternative generator objectives and to provide better quantitative metrics and understanding of the factors that drive sample quality and diversity in GANs.


Acknowledgments

We thank Augustus Odena for feedback on the manuscript, Vincent Dumoulin for the baseline code, and Luke Metz, Luke Vilnis, and the Google Brain team for valuable and insightful discussions.


References

  • [1] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
  • [2] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc., 2014.
  • [3] Ian J Goodfellow. On distinguishability criteria for estimating generative models. arXiv preprint arXiv:1412.6515, 2014.
  • [4] Ferenc Huszar. An alternative update rule for generative adversarial networks, 2015.
  • [5] Diederik P Kingma and Max Welling. Auto-encoding variational bayes, 2013.
  • [6] Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network, 2016.
  • [7] Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016.
  • [8] XuanLong Nguyen, Martin J Wainwright, and Michael I Jordan. Estimating divergence functionals and the likelihood ratio by penalized convex risk minimization. In NIPS, pages 1089–1096, 2007.
  • [9] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.
  • [10] Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier gans. arXiv preprint arXiv:1610.09585, 2016.
  • [11] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
  • [12] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and variational inference in deep latent gaussian models. In International Conference on Machine Learning. Citeseer, 2014.
  • [13] Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.
  • [14] Casper Kaae Sonderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszar. Amortised map inference for image super-resolution, 2016.
  • [15] L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. In International Conference on Learning Representations, Apr 2016.
  • [16] Masatoshi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. Generative adversarial nets from a density ratio estimation perspective. arXiv preprint arXiv:1610.02920, 2016.

Appendix A Deriving the generator objectives

Here we derive the generator objectives when the discriminator divergence is $f_{\mathrm{GAN}}$, corresponding to the standard GAN discriminator objective. As in [9], we parameterize the discriminator as $T(x) = g_f(V(x))$, where $g_f$ is an output activation whose range matches that of $f'$. For the GAN case, this corresponds to $g_f(v) = -\log(1 + e^{-v}) = \log \sigma(v)$, where $\sigma$ is the logistic sigmoid.

First, we can compute the inverse of the derivative of $f_{\mathrm{GAN}}$, which is used to estimate the density ratio:

$$(f_{\mathrm{GAN}}')^{-1}(t) = \frac{e^t}{1 - e^t}$$

For GANs, the discriminator is parameterized as $T(x) = \log \sigma(V(x))$, so we can compute the density ratio as:

$$\hat r(x) = \frac{e^{T(x)}}{1 - e^{T(x)}} = \frac{\sigma(V(x))}{1 - \sigma(V(x))} = e^{V(x)}$$

Given this estimate of the density ratio, we can then compute the generator objective as $\hat D_g = \mathbb{E}_{x \sim q(x)}[g(\hat r(x))]$. The table below contains the generator objectives for many different $g$ given $f_{\mathrm{GAN}}$:
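The resulting objectives can be sketched in code (an illustrative sketch, not the paper's exact table; the particular set of divergences shown is our choice): given the discriminator logit $V(x)$, each target divergence plugs $\hat r = e^{V}$ into its own $g$.

```python
import numpy as np

# given the GAN discriminator logit v(x), the density ratio estimate is
# r(x) = D/(1-D) = exp(v); each target divergence plugs r into its own g
def generator_losses(v):
    r = np.exp(v)
    return {
        "kl":        np.mean(r * np.log(r)),         # mode covering
        "rkl":       np.mean(-np.log(r)),            # mode seeking
        "hellinger": np.mean((np.sqrt(r) - 1)**2),   # squared Hellinger
        "gan":       np.mean(np.log(1 + 1 / r)),     # practical objective
    }

# sanity check: when D = 1/2 everywhere (v = 0, r = 1), all proper
# f-divergence losses vanish
losses = generator_losses(np.zeros(4))
assert losses["kl"] == losses["rkl"] == losses["hellinger"] == 0.0
```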


Appendix B CIFAR-10 architecture details

This is a slightly modified version of the architecture from [1]. Input images were scaled from to .

Operation Kernel Strides Feature maps BN? Dropout Nonlinearity
Transposed convolution 0.0 Leaky ReLU
Transposed convolution 0.0 Leaky ReLU
Transposed convolution 0.0 Leaky ReLU
Transposed convolution 0.0 Leaky ReLU
Transposed convolution 0.0 Leaky ReLU
Convolution 0.0 Leaky ReLU
Convolution 0.0 Sigmoid
Convolution 0.2 Maxout
Convolution 0.5 Maxout
Convolution 0.5 Maxout
Convolution 0.5 Maxout
Convolution 0.5 Maxout
Convolution 0.5 Maxout
Convolution 0.5 Maxout
Convolution 0.5 Linear
Optimizer Adam (, , )
Batch size 128
Leaky ReLU slope, maxout pieces 0.1, 2
Weight, bias initialization Isotropic gaussian (, ), Constant()
Table 1: CIFAR10 model hyperparameters. Maxout layers are used in the discriminator.