Variational Approaches for Auto-Encoding Generative Adversarial Networks

Abstract

Auto-encoding generative adversarial networks (GANs) combine the standard GAN algorithm, which discriminates between real and model-generated data, with a reconstruction loss given by an auto-encoder. Such models aim to prevent mode collapse in the learned generative model by ensuring that it is grounded in all the available training data. In this paper, we develop a principle upon which auto-encoders can be combined with generative adversarial networks by exploiting the hierarchical structure of the generative model. The underlying principle shows that variational inference can be used as a basic tool for learning, but with the intractable likelihood replaced by a synthetic likelihood, and the unknown posterior distribution replaced by an implicit distribution; both synthetic likelihoods and implicit posterior distributions can be learned using discriminators. This allows us to develop a natural fusion of variational auto-encoders and generative adversarial networks, combining the best of both these methods. We describe a unified objective for optimization, discuss the constraints needed to guide learning, connect to the wide range of existing work, and use a battery of tests to systematically and quantitatively assess the performance of our method.

1 Introduction

Generative adversarial networks (GANs) [11] are one of the dominant approaches for learning generative models in contemporary machine learning research, providing a flexible algorithm for learning in latent variable models. Directed latent variable models describe a data generating process in which a source of noise is transformed into a plausible data sample using a non-linear function, and GANs drive learning by discriminating observed data from model-generated data. GANs allow for training on large datasets, are fast to simulate from, and when trained on image data, produce visually compelling sample images. But this flexibility comes with instabilities in optimization that lead to the problem of mode collapse, in which generated data does not reflect the diversity of the underlying data distribution. A large class of GAN variants that aim to address this problem are auto-encoder-based GANs (AE-GANs), which use an auto-encoder to encourage the model to better represent all the data it is trained with, thus discouraging mode collapse.

Auto-encoders have been successfully used to improve GAN training. For example, plug and play generative networks (PPGNs) [30] produce state-of-the-art samples by optimizing an objective that combines an auto-encoder loss, a GAN loss, and a classification loss defined using a pre-trained classifier. AE-GANs can be broadly classified into three approaches: (1) those using an auto-encoder as the discriminator, such as energy-based GANs and boundary-equilibrium GANs [3], (2) those using a denoising auto-encoder to derive an auxiliary loss for the generator, such as denoising feature matching GANs [43], and (3) those combining ideas from VAEs and GANs. For example, the variational auto-encoder GAN (VAE-GAN) [24] adds an adversarial loss to the variational evidence lower bound objective. More recent GAN variants, such as mode-regularized GANs (MRGAN) [4] and adversarial generator-encoders (AGE) [41], also use a separate encoder in order to stabilize GAN training. Such variants are interesting because they reveal connections to VAEs; however, the principles underlying the fusion of auto-encoders and GANs remain unclear.

In this paper, we develop a principled approach for hybrid AE-GANs. By exploiting the hierarchical structure of the latent variable model learned by GANs, we show how another popular approach for learning latent variable models, variational auto-encoders (VAEs), can be combined with GANs. This is advantageous since it allows us to overcome the limitations of each of these methods. Whereas VAEs often produce blurry images when trained on images, they do not suffer from the problem of mode collapse experienced by GANs. GANs allow few distributional assumptions to be made about the model, whereas VAEs allow for inference of the latent variables, which is useful for representation learning, visualization and explanation. The approach we develop combines the best of these two worlds: it provides a unified objective for learning, is purely unsupervised, requires no pre-training or external classifiers, and can easily be extended to other generative modeling tasks.

We begin by reviewing the tools that GANs and VAEs provide for dealing with intractable generative models in Section 2, and then make the following contributions:

  • We show that variational inference applies equally well to GANs and how discriminators can be used for variational inference with implicit posterior approximations.

  • Likelihood-based and likelihood-free models can be combined when learning generative models. In the likelihood-free setting, we develop variational inference with synthetic likelihoods that allows us to learn such models.

  • We develop a principled objective function for auto-encoding GANs (α-GAN),1 and describe considerations needed to make it work in practice.

  • Evaluation is one of the major challenges in GAN research and we use a battery of evaluation measures to carefully assess the performance of our approach, comparing to DC-GAN, Wasserstein GAN and adversarial-generator-encoders (AGE). We emphasize the continuing challenge of evaluation in implicit generative models and show that our model performs well on these measures.

2 Overcoming Intractability in Generative Models

Latent Variable Models

: Latent variable models describe a stochastic process by which modeled data is assumed to be generated (and thereby a process by which synthetic data can be simulated from the model distribution). In their simplest form, an unobserved latent variable z drawn from a prior p(z) gives rise to a conditional distribution p_θ(x|z) in the ambient space of the observed data x. In several recently proposed model families, p_θ(x|z) is specified via a generator (or decoder) G_θ(z), a non-linear function with parameters θ that maps latent variables to the data space. In this work we consider models with a standard Gaussian prior p(z) = N(0, I), unless otherwise specified.

In implicit latent variable models, or likelihood-free models, we do not make any further assumptions about the data generating process and set the observation likelihood to a Dirac delta at the generator output, p_θ(x|z) = δ(x - G_θ(z)); this is the model class considered in many simulation-based models, and especially in generative adversarial networks (GANs) [11]. In prescribed latent variable models we make a further assumption of observation noise, and any likelihood function that is appropriate to the data can be used.

In both implicit and prescribed models (such as GANs and VAEs, respectively) an important quantity that describes the quality of the model is the marginal likelihood p_θ(x) = ∫ p_θ(x|z) p(z) dz, in which the latent variables have been integrated out. We learn about the parameters of the model by minimizing an f-divergence between the marginal likelihood p_θ(x) and the true data distribution p*(x), such as the KL divergence KL[p*(x) || p_θ(x)]. But in both types of models the marginal likelihood is intractable, requiring us to find solutions by which we can overcome this intractability in order to learn the model parameters.

Generative Adversarial Networks

: One way to overcome the intractability of the marginal likelihood is to never compute it, and instead to learn about the model parameters using a tool that gives us indirect information about it. Generative adversarial networks (GANs) [11] do this by learning a suitably powerful discriminator D_φ(x) that learns to distinguish samples from the true distribution p*(x) and the model p_θ(x). The ability of the discriminator (or lack thereof) to distinguish between real and generated data is the learning signal that drives the optimization of the model parameters: when this discriminator is unable to distinguish between real and simulated data, we have learned all we can about the observed data. This is a principle of learning known under various names, including adversarial training [11], estimation-by-comparison [14], and unsupervised-as-supervised learning [16].

: Let y be a binary label, with y = 1 corresponding to data samples from the real data distribution p*(x) and y = 0 to simulated data from p_θ(x), and let D_φ(x) be a discriminator that gives the probability that an input x is from the real distribution, with discriminator parameters φ. At any time point, we update the discriminator by drawing samples from the real data and from the model and minimizing the binary cross-entropy. The generator parameters θ are then updated by maximizing the probability that samples from p_θ(x) are classified as real; [11] suggest the alternative, non-saturating generator loss -log D_φ(G_θ(z)), which provides stronger gradients. The optimization is then an alternating minimization w.r.t. θ and φ.

GANs are especially interesting as a way of learning in latent variable models, since they do not require inference of the latent variables z, and are applicable to both implicit and prescribed models. GANs are based on an underlying principle of density ratio estimation [11] and thus provide us with an important tool for overcoming intractable distributions.

The Density Ratio Trick

: By introducing the labels y = 1 for real data and y = 0 for simulated data in GANs, we re-express the data and model distributions in conditional form, i.e. p*(x) = p(x | y = 1) for the true distribution, and p_θ(x) = p(x | y = 0) for the model. The density ratio r(x) between the true distribution and the model distribution can be computed using these conditional distributions as:

r(x) = p*(x) / p_θ(x) = p(x | y = 1) / p(x | y = 0) = p(y = 1 | x) / p(y = 0 | x) = D_φ(x) / (1 - D_φ(x)),

where we used Bayes' rule in the second-to-last step and assumed that the marginal class probabilities are equal, i.e. p(y = 1) = p(y = 0). This tells us that whenever we wish to compute a density ratio, we can simply draw samples from the two distributions and train a binary classifier to distinguish the two sets of samples. By using the density ratio, GANs account for the intractability of the marginal likelihood by looking only at its relative behavior with respect to the true distribution. This trick only requires samples from the two distributions and never access to their analytical forms, making it particularly well suited for dealing with implicit distributions or likelihood-free models. Since we are required to build a classifier, we can use all the knowledge we have about building state-of-the-art classifiers. This trick is widespread [11]. While class probability estimation is amongst the most popular approaches, the density ratio can also be computed in several other ways, including f-divergence minimization and density-ratio matching [36].
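To make the trick concrete, the following is a minimal, self-contained sketch (our own illustration, not from the paper): a binary classifier is trained on samples from two toy distributions, and its output D(x) is converted into the density ratio D(x)/(1 - D(x)). The toy distributions, the scikit-learn classifier, and all names are assumptions made for illustration only.

```python
# Density ratio trick: estimate p*(x) / p_theta(x) from samples alone by
# training a binary classifier and converting its output D(x) into the ratio
# D(x) / (1 - D(x)). Two 1-D Gaussians with equal variance stand in for the
# data and model distributions, so logistic regression is well specified here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
x_real = rng.normal(0.0, 1.0, size=(5000, 1))   # samples from p*(x), label y = 1
x_model = rng.normal(1.0, 1.0, size=(5000, 1))  # samples from p_theta(x), label y = 0

X = np.vstack([x_real, x_model])
y = np.concatenate([np.ones(len(x_real)), np.zeros(len(x_model))])
clf = LogisticRegression().fit(X, y)

def density_ratio(x):
    """Estimate p*(x) / p_theta(x) as D(x) / (1 - D(x))."""
    d = clf.predict_proba(np.asarray(x, dtype=float).reshape(-1, 1))[:, 1]
    return d / (1.0 - d)

# The true ratio here is exp(0.5 - x); the classifier-based estimate should be close.
print(density_ratio([0.0, 2.0]))
```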

Variational Inference

: A second approach for dealing with intractable likelihoods is to approximate them. There are several ways to approximate the marginal likelihood, but one of the most popular is to derive a lower bound on it by transforming the marginal likelihood into an expectation over a new variational distribution q_η(z|x), whose variational parameters η can be optimized to ensure that a tight bound can be found. The bound obtained is the popular variational lower bound F(θ, η):

log p_θ(x) ≥ F(θ, η) = E_{q_η(z|x)}[log p_θ(x|z)] - KL[q_η(z|x) || p(z)].

Variational auto-encoders (VAEs) [33] provide one way of implementing variational inference, in which the variational distribution q_η(z|x) is represented as an encoder, and the variational and model parameters are jointly optimized using the pathwise stochastic gradient estimator (also known as the reparameterization trick) [10]. The variational lower bound is applicable to both implicit and prescribed models, and gives us a further tool for dealing with intractable distributions: introduce an encoder to invert the generative process and optimize a lower bound on the marginal likelihood.
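As a concrete reference point, here is a minimal VAE-style sketch (our own illustration, not the model used in this paper) of the variational lower bound with a Gaussian encoder and the reparameterization trick; the layer sizes, the Bernoulli decoder and all module names are arbitrary assumptions.

```python
# Minimal ELBO with the reparameterization trick: a Gaussian encoder q(z|x)
# and a Bernoulli decoder p(x|z). Inputs x are assumed to lie in [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=32, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def elbo(self, x):
        h = self.enc(x)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        logits = self.dec(z)
        # E_q[log p(x|z)]: negative Bernoulli cross-entropy, summed over pixels.
        rec = -F.binary_cross_entropy_with_logits(logits, x, reduction='none').sum(dim=1)
        # KL[q(z|x) || N(0, I)] in closed form for a diagonal Gaussian encoder.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=1)
        return (rec - kl).mean()  # maximize this (or minimize its negative)
```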

Synthetic Likelihoods

: When the likelihood function p_θ(x|z) is unknown, the variational lower bound cannot be used directly for learning. One further tool with which to overcome this is to replace the likelihood with a substitute, or synthetic likelihood. The original formulation of the synthetic likelihood [44] is based on a Gaussian assumption, but we use the term here to mean any general substitute for the likelihood that maintains its asymptotic properties. The synthetic likelihood form we use here was proposed by [9] for approximate Bayesian computation (ABC). The idea is to introduce a synthetic likelihood into the likelihood term of the lower bound by dividing and multiplying by the true data distribution p*(x):

E_{q_η(z|x)}[log p_θ(x|z)] = E_{q_η(z|x)}[log (p_θ(x|z) / p*(x))] + log p*(x).

The first term contains the synthetic likelihood ratio p_θ(x|z) / p*(x). Any estimate of this ratio is an estimate of the likelihood, since the two are proportional (and the normalizing constant p*(x) is independent of θ). Wherever an intractable likelihood appears, we can instead use this ratio. The synthetic likelihood can be estimated using the density ratio trick, by training a discriminator to distinguish between samples from the marginal p*(x) and the conditional p_θ(x|z) where z is drawn from q_η(z|x). The second term is independent of θ and can be ignored for optimization purposes.
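A small sketch of how such a synthetic-likelihood discriminator can be set up (an illustration under our reading of the text, not the authors' exact losses): the classifier separates real data from reconstructions, and by the density ratio trick its raw logit estimates the log-ratio between the two underlying densities, which can stand in for the intractable likelihood term up to a constant. The network `disc` is an assumed module mapping images to a scalar logit.

```python
# Train a classifier to separate real data (label 1) from reconstructions
# (label 0). At convergence sigmoid(disc(x)) ~ p*(x) / (p*(x) + p_rec(x)), so
# the raw logit disc(x) ~ log p*(x) - log p_rec(x): the log density ratio
# needed for a synthetic likelihood.
import torch
import torch.nn.functional as F

def synthetic_likelihood_disc_loss(disc, x_real, x_recon):
    logits_real = disc(x_real)
    logits_recon = disc(x_recon.detach())  # do not backprop into the generator here
    return (F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real)) +
            F.binary_cross_entropy_with_logits(logits_recon, torch.zeros_like(logits_recon)))

def estimated_log_ratio(disc, x):
    # log(D(x) / (1 - D(x))) is exactly the discriminator's raw logit.
    return disc(x)
```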

3 A Fusion of Variational and Adversarial Learning

GANs and VAEs have given us useful tools for learning and inference in generative models and we now use these tools to build new hybrid inference methods. The VAE forms our generic starting point, and we will gradually transform it to be more GAN-like.

Implicit Variational Distributions

: The major task in variational inference is the choice of the variational distribution q_η(z|x). Common approaches, such as mean-field variational inference, assume simple distributions like a Gaussian, but we would like to avoid making such a restrictive choice. If we treat this distribution as implicit (we do not know its density but are able to sample from it), then we can use the density ratio trick to replace the KL-divergence term KL[q_η(z|x) || p(z)] in the lower bound.

We thus introduce a latent classifier C_ω(z) that discriminates between latent variables produced by the encoder network q_η(z|x) and variables sampled from a standard Gaussian distribution. For optimization, the resulting expectation is evaluated by Monte Carlo integration. Replacing the KL-divergence with a discriminator was first proposed by [26] for adversarial auto-encoders, and a similar idea was used by [27] for adversarial variational Bayes.
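The KL surrogate can then be estimated by Monte Carlo from the latent classifier's logits. The sketch below is our own illustration (the labeling convention, layer sizes and names are assumptions): encoder codes are labeled 1 and prior samples 0, so the classifier's logit approximates log q_η(z|x) - log p(z), and its average over encoder samples is a Monte Carlo estimate of the KL term.

```python
# Replace the analytic KL[q(z|x) || p(z)] with a density-ratio estimate from a
# latent classifier trained on encoder codes (label 1) vs prior samples (label 0).
import torch
import torch.nn as nn
import torch.nn.functional as F

Z_DIM = 32  # illustrative latent size
code_disc = nn.Sequential(nn.Linear(Z_DIM, 750), nn.LeakyReLU(0.2),
                          nn.Linear(750, 750), nn.LeakyReLU(0.2),
                          nn.Linear(750, 1))

def code_disc_loss(z_encoder, z_prior):
    logits_q = code_disc(z_encoder.detach())
    logits_p = code_disc(z_prior)
    return (F.binary_cross_entropy_with_logits(logits_q, torch.ones_like(logits_q)) +
            F.binary_cross_entropy_with_logits(logits_p, torch.zeros_like(logits_p)))

def kl_estimate(z_encoder):
    # Monte Carlo estimate of E_q[log q(z|x) - log p(z)] = KL[q || p];
    # the raw logit of the classifier approximates the log ratio.
    return code_disc(z_encoder).mean()
```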

Likelihood Choice

: If we make an explicit choice of likelihood in the model, then we can substitute our chosen likelihood into the lower bound. We choose a zero-mean Laplace distribution with scale parameter λ, which corresponds to using a variational auto-encoder with an L1 reconstruction loss; this is a highly popular choice and is used in many related auto-encoder GAN variants, such as AGE, BEGAN, CycleGAN and PPGN [41].

In GANs the effective likelihood is unknown and intractable. We can again use our tools for intractable inference by replacing the intractable likelihood with its synthetic substitute. Using the synthetic likelihood introduces a new synthetic-likelihood classifier that discriminates between data sampled from the conditional distribution p_θ(x|z) (reconstructions) and the true data distribution. The reconstruction term in the lower bound can therefore take one of two forms: the explicit (Laplace) likelihood, giving an L1 reconstruction loss, or the synthetic, discriminator-based likelihood estimated with the density ratio trick.

These two choices have different behaviors. Using the synthetic, discriminator-based likelihood means that the model can use the adversarial game to learn the data distribution, although it may still be subject to mode collapse. This is where the explicit choice of likelihood can be used to ensure that we assign mass to all parts of the output support and prevent collapse. When forming the final loss we use a weighted sum of the two terms to get the benefits of both types of behavior.

Hybrid Loss Functions

: A hybrid objective function is obtained by combining all these choices: the explicit and synthetic likelihood terms for reconstruction, and the latent discriminator in place of the analytic KL term in the variational lower bound.

We are required to build four networks: a classifier D_φ is trained to discriminate between reconstructions from the auto-encoder and real data points; a second classifier C_ω is trained to discriminate between latent samples produced by the encoder and samples from a standard Gaussian; we must implement the deep generative model (generator) G_θ, and also the encoder network q_η(z|x), which can be implemented using any type of deep network. The density-ratio estimators D_φ and C_ω can be trained using any loss for density ratio estimation (Section 2), hence their loss functions are not shown as part of the hybrid objective. We refer to training with this hybrid objective as α-GAN. Our algorithm alternates between updates of the parameters of the generator θ, the encoder η, the synthetic-likelihood discriminator φ, and the latent code discriminator ω; see the pseudocode in the Appendix.

Improved Techniques

: The hybrid objective above provides a principled starting point for optimization based on losses obtained by combining insights from VAEs and GANs. To improve the stability of optimization and the speed of learning we make two modifications. Firstly, following the insights from [29], we use the reverse-KL loss formulation for both the latent discriminator and the synthetic-likelihood discriminator, replacing log(1 - D(·)) with -log D(·) when training the generator, as it provides non-saturating gradients. The resulting generator objective consists of the standard (non-saturating) GAN updates for the generator, with the addition of a reconstruction term that discourages mode collapse, since the generator needs to be able to reconstruct every input x.

Secondly, we found that passing samples G_θ(z), with z drawn from the prior, to the discriminator as fake samples, in addition to the reconstructions, helps improve performance. One way to justify the use of prior samples is to apply Jensen's inequality, that is, log p_θ(x) = log E_{p(z)}[p_θ(x|z)] ≥ E_{p(z)}[log p_θ(x|z)], and to replace this likelihood with a synthetic one, as done for reconstructions. Instead of training two separate discriminators, we train a single discriminator which treats both samples and reconstructions as fake, and real data as real.
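Putting the pieces together, the following is a schematic sketch of one α-GAN-style training step, written from our reading of the description above rather than from the authors' code: the four networks, the loss weighting lam, the optimizers, and the detaching choices are all illustrative assumptions, and the non-saturating -log D form is used for the generator and encoder as discussed.

```python
# Schematic alpha-GAN-style update (illustrative only; see the pseudocode in the
# Appendix for the authors' procedure). Assumed networks:
#   enc: x -> z            (encoder / implicit variational distribution)
#   gen: z -> x            (generator / decoder)
#   disc: x -> logit       (data discriminator: real vs reconstructions + samples)
#   code_disc: z -> logit  (latent discriminator: prior codes vs encoder codes)
import torch
import torch.nn.functional as F

def bce(logits, target_is_real):
    target = torch.ones_like(logits) if target_is_real else torch.zeros_like(logits)
    return F.binary_cross_entropy_with_logits(logits, target)

def alpha_gan_step(x, enc, gen, disc, code_disc, opts, lam=10.0):
    z_q = enc(x).detach()                      # codes from the encoder
    z_p = torch.randn_like(z_q)                # codes from the prior
    x_rec = gen(z_q).detach()                  # reconstructions
    x_gen = gen(z_p).detach()                  # samples

    # 1) Data discriminator: real data vs {reconstructions, samples}.
    d_loss = bce(disc(x), True) + bce(disc(x_rec), False) + bce(disc(x_gen), False)
    opts['disc'].zero_grad(); d_loss.backward(); opts['disc'].step()

    # 2) Code discriminator: prior codes vs encoder codes.
    c_loss = bce(code_disc(z_p), True) + bce(code_disc(z_q), False)
    opts['code_disc'].zero_grad(); c_loss.backward(); opts['code_disc'].step()

    # 3) Generator: reconstruct the input and fool the data discriminator on
    #    both reconstructions and prior samples (non-saturating -log D form).
    x_rec = gen(enc(x).detach())
    x_gen = gen(z_p)
    g_loss = (lam * (x_rec - x).abs().mean() +
              bce(disc(x_rec), True) + bce(disc(x_gen), True))
    opts['gen'].zero_grad(); g_loss.backward(); opts['gen'].step()

    # 4) Encoder: reconstruct the input and make its codes look like the prior.
    z = enc(x)
    e_loss = lam * (gen(z) - x).abs().mean() + bce(code_disc(z), True)
    opts['enc'].zero_grad(); e_loss.backward(); opts['enc'].step()
    return d_loss.item(), c_loss.item(), g_loss.item(), e_loss.item()
```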

4 Related work

Figure ? summarizes our architecture and the architectures we compare with in the experimental section. Hybrids of VAEs and GANs can be classified by whether the density ratio trick is applied only to the likelihood, only to the prior approximation, or to both. Table ? reveals the connections to related approaches (see also [17]). DCGAN [32] and WGAN-GP [13] are pure GAN variants; they do not use an auto-encoder loss, nor do they perform inference. WGAN-GP shares the attributes of DCGAN, except that it uses a critic that approximates the Wasserstein distance [1] instead of a density ratio estimator. AGE uses an approximation of the KL term; however, it does not use a synthetic likelihood, but instead uses observed likelihoods (reconstruction losses) for both latent codes and data. The adversarial component of AGE arises from the opposing goals of the encoder and decoder: the encoder tries to compress data into codes drawn from the prior, while compressing samples into codes which do not match the prior; at the same time, the decoder wants to generate samples that, when encoded, produce codes which match the prior distribution. The VAE uses the observation likelihood and an analytic KL term; however, it tends to produce blurry images, hence we do not consider it here. To address the blurriness issue, VAE-GAN changes the VAE loss function by replacing the observed likelihood on pixels with an adversarial loss together with a reconstruction metric in discriminator feature space. Unlike our work, VAE-GAN still uses the analytical KL loss to minimize the distance between the prior and the posterior over the latents, and does not discuss the connection to density ratio estimation. Similar to VAE-GAN, [7] replace the observed likelihood term in the variational lower bound with a weighted sum of a feature matching loss (where the features matched are those of a pre-trained classifier) and an adversarial loss, but instead of using the analytical KL, they use a numerical approximation. We explore the same approximation (also used by AGE) in Section ? in the Appendix. By not using a pre-trained classifier or a feature matching loss, α-GAN is trained end-to-end, is completely unsupervised, and maximizes a lower bound on the true data likelihood.

ALI [8] and BiGAN [6] perform inference by creating an adversarial game between the encoder and decoder via a discriminator that operates on the joint data-latent space. The discriminator learns to distinguish between input-output pairs of the encoder (where x is a sample from the data distribution and z is a sample from the conditional posterior q_η(z|x)) and of the decoder (where z is a sample from the latent prior and x is a sample from the conditional p_θ(x|z)). Unlike α-GAN, their approach operates on the joint space directly, without exploiting the structure of the model. CycleGAN [46] was proposed for image-to-image translation, but applying the underlying cycle-consistency principle to image-to-code translation reveals an interesting connection with α-GAN. This method has become popular for image-to-image translation, with similar approaches having been proposed [20]. Recall that in data space we use both a pointwise reconstruction term and a loss that matches the distribution of reconstructions to the data distribution. In latent space, α-GAN only matches the distribution of codes to the prior; adding a pointwise code reconstruction loss would make it similar to CycleGAN. We note, however, that the CycleGAN authors used the least-squares GAN loss, while the traditional GAN loss is needed to obtain the variational lower bound in our formulation.

In mode-regularized GANs (MRGANs) [4] the generator is part of an auto-encoder, hence it learns how to produce reconstructions from the posterior over latent codes and also, independently, how to produce samples from codes drawn from the prior over latents. MRGANs employ two discriminators, one to distinguish between data and reconstructions and one to distinguish between data and samples. As described in Section ?, in α-GAN we also pass both samples and reconstructions through the discriminator (which learns to distinguish between them and data). However, we only need one discriminator, as we explicitly match the latent prior and the latent posterior given by the model using the KL term in the objective, which encourages the distributions of reconstructions and samples to be similar.

5 Evaluation metrics

Evaluating generative models is challenging [38]. In particular, evaluating GANs is difficult due to the lack of a likelihood. Multiple proxy metrics have been proposed; we explore some of them in this work and assess their strengths and weaknesses in the experiments section.

Inception score

: The inception score was proposed by [34] and has been widely adopted since. The inception score uses a pre-trained neural network classifier to capture two desirable properties of generated samples: they should be highly classifiable and diverse with respect to class labels. It does so by computing the average of the KL divergences between the conditional label distributions of samples (expected to have low entropy for easily classifiable samples) and the marginal label distribution obtained from all the samples (expected to have high entropy if all classes are equally represented in the set of samples). As the name suggests, the classifier network used to compute the inception score was originally an Inception network [37] trained on the ImageNet dataset. For comparison with previous work, we report scores using this network. However, when reporting CIFAR-10 results we also report metrics obtained using a VGG-style convolutional neural network trained on the same dataset, which obtains 5.5% error (see the Appendix for details on this network).
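For reference, the inception score can be computed directly from a matrix of predicted class probabilities as the exponential of the average KL divergence between p(y|x) and the marginal p(y). The sketch below is our own minimal version with a placeholder probability matrix; in practice the probabilities come from the Inception or VGG classifier, and the score is often averaged over several splits of the sample set.

```python
# Inception-score style computation from an (N, num_classes) matrix of
# predicted class probabilities p(y|x) for N generated samples.
import numpy as np

def inception_score(probs, eps=1e-12):
    p_y = probs.mean(axis=0, keepdims=True)                     # marginal p(y)
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))                             # exp(E_x KL[p(y|x) || p(y)])

# Placeholder predictions; real use would feed generated images through a classifier.
probs = np.random.dirichlet(np.ones(10), size=5000)
print(inception_score(probs))   # low score (~1-2) for these uninformative random predictions
```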

Multi-scale structural similarity (MS-SSIM)

: The inception score fails to capture mode collapse inside a class: a model that generates the same image for a class and a model that captures the diversity inside a class can obtain the same inception score. To address this issue, [31] assess the similarity between class-conditional generated samples using MS-SSIM [42], an image similarity metric that has been shown to correlate well with human judgement. MS-SSIM ranges between 0.0 (low similarity) and 1.0 (high similarity). By computing the average pairwise MS-SSIM score between images in a given set, we can determine how similar the images are and, in particular, we can compare with the similarity obtained on a reference set (the training set, for example). Since our models are not class conditional, we only use MS-SSIM to evaluate models on CelebA [25], a dataset of faces, where the variability of the data is smaller. For datasets with very distinct labels, MS-SSIM would not give a good metric, since there is high variability between classes. We report the sample diversity score as 1 - MS-SSIM. The reported results on this metric need to be seen relative to the diversity obtained on the input dataset: too much diversity can mean failure to capture the data distribution. To illustrate this, we computed the diversity of images from the input dataset to which we added normal noise; it is higher than the diversity of the original data. We report this value as another baseline for this metric.
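A sketch of how the sample diversity score can be computed as we read it (our own simplification): average the MS-SSIM over a random subset of image pairs and subtract from 1. The ms_ssim_fn helper is assumed to be supplied by an external library (e.g. pytorch-msssim or tf.image.ssim_multiscale) and is not defined here.

```python
# Sample diversity = 1 - average pairwise MS-SSIM over a set of images.
# `images` is an array/tensor of shape (N, ...); `ms_ssim_fn(a, b)` is an
# assumed callable returning the MS-SSIM between two single-image batches.
import itertools
import random

def sample_diversity(images, ms_ssim_fn, num_pairs=2000, seed=0):
    rng = random.Random(seed)
    pairs = list(itertools.combinations(range(len(images)), 2))
    pairs = rng.sample(pairs, min(num_pairs, len(pairs)))
    scores = [float(ms_ssim_fn(images[i:i + 1], images[j:j + 1])) for i, j in pairs]
    return 1.0 - sum(scores) / len(scores)
```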

Independent Wasserstein critic

: [5] proposed training an independent Wasserstein GAN critic to distinguish between held-out validation data and generated samples.2 This metric measures both overfitting and mode collapse: if the generator memorizes the training set, the critic trained on validation data will be able to distinguish between samples and data; if mode collapse occurs, the critic will have an easy task distinguishing between data and samples. The Wasserstein distance does not saturate when the two distributions do not overlap [1], and the magnitude of the distance represents how easy it is for the critic to distinguish between data and samples. To be consistent with the other metrics, we report the negative of the Wasserstein distance between the test set and the generator, hence higher values are better. Since the critic is trained independently, for evaluation only, and thus does not affect the training of the generator, this evaluation technique can be used irrespective of the training criterion used [5]. To ensure that the independent critic does not overfit to the validation data, we only start training it halfway through the training of our model and examine the learning curves during training (see the Appendix in the supplementary material for learning curves).
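Once an independent critic has been trained on held-out data versus generator samples (with a gradient penalty, as in WGAN-GP), the reported quantity is simply the negated gap between its average value on test data and on samples. A sketch of that evaluation step is below (critic training itself is omitted; the loader, generator, and dimension names are assumptions).

```python
# Evaluate a trained, independent Wasserstein critic. The estimated distance is
# E_data[f(x)] - E_model[f(x)]; we report its negative, so higher is better.
import torch

@torch.no_grad()
def negative_wasserstein_estimate(critic, test_loader, generator, z_dim, device='cpu'):
    data_scores, sample_scores = [], []
    for x in test_loader:                                  # held-out data batches
        data_scores.append(critic(x.to(device)).mean())
        z = torch.randn(x.shape[0], z_dim, device=device)  # prior codes
        sample_scores.append(critic(generator(z)).mean())
    w_hat = torch.stack(data_scores).mean() - torch.stack(sample_scores).mean()
    return -w_hat.item()
```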

6 Experiments

To better understand the importance of auto-encoder-based methods in the GAN landscape, we implemented and compared the proposed α-GAN with another hybrid model, AGE, as well as with pure GAN variants such as DCGAN and WGAN-GP, across three datasets: ColorMNIST [28], CelebA [25] and CIFAR-10 [23]. We complement the visual inspection of samples with a battery of numerical tests using the metrics above to gain insight both into the models and into the metrics themselves. For a comprehensive analysis, we report both the best values obtained by each algorithm and the quartiles obtained by the hyperparameter sweep for each model, to assess sensitivity to hyperparameters. On all metrics, we report box plots over all the hyperparameters we considered, with the best 10 jobs indicated by black circles (for inception scores and the independent Wasserstein critic, higher is better; for sample diversity, the best reported jobs are those with the smallest distance from the reference value computed on the test set). To the best of our knowledge, we are the first to perform such an analysis of the GAN landscape.

For details of the training procedure used in all our experiments, including the hyperparameter sweeps, we refer to Appendix ? in the supplementary material. Note that the models considered here are all unconditional and do not make use of label information, hence it is not appropriate to compare our results with those obtained using conditional GANs [31] and semi-supervised GANs [34].

Results on ColorMNIST

: We compare the values of the independent Wasserstein critic in Figure ?, where higher values are better. On this metric, most of the α-GAN hyperparameters tried achieve a higher value than the best DCGAN results. This is supported by the generated samples shown in Figure 1. However, WGAN-GP produces the samples rated best by this metric.

Figure 1: Best samples on ColorMNIST (L-to-R): samples from DCGAN, WGAN-GP, AGE and the proposed variant α-GAN, according to visual inspection.

Results on CelebA

: The CelebA dataset consists of images of celebrity faces. We show samples from the four models in Figure 2. We also compare the models using the independent Wasserstein critic in Figure ? and the sample diversity score in Figure ?. α-GAN is competitive with WGAN-GP and AGE, but has a wider spread than the WGAN-GP model, which produces the best results.

Unlike WGAN-GP and DCGAN, α-GAN and AGE have the ability to reconstruct inputs. Appendix ? shows that α-GAN produces better reconstructions than AGE.

Figure 2: Best samples on CelebA (L-to-R): samples from DCGAN, WGAN-GP, AGE and α-GAN, according to visual inspection. See Figure 4 in the Appendix for a higher resolution version.

Results on CIFAR-10

: We show samples from the various models in Figure 3. We evaluate α-GAN using the independent critic, shown in Figure ?, where WGAN-GP is the best performing model. We also compare the ImageNet-based inception score in Figure ?, where α-GAN has the best performance, and the CIFAR-10-based inception score in Figure ?. While our model produces the best result on the ImageNet-based inception score, it has a wide spread on the CIFAR-10 inception score, where WGAN-GP both performs best and has less hyperparameter spread. This shows that the two metrics differ widely, and that evaluating CIFAR-10 samples using the ImageNet-based inception score can lead to erroneous conclusions. To better understand the importance of the model used to evaluate the inception score, we looked at the relationship between the inception score measured with the Inception network trained on ImageNet (introduced by [34]) and with the VGG-style network trained on CIFAR-10, the same dataset on which we train the generative models. We observed that 15% of the jobs in a hyperparameter sweep were ranked in the top 50% by the ImageNet inception score while being ranked in the bottom 50% by the CIFAR-10 inception score. Hence, using an inception score based on a classifier trained on a different dataset than the one the generative model is trained on can be misleading when ranking models.

The best reported ImageNet-based inception score on CIFAR-10 for unsupervised models is that of DFM-GAN [43], who also report a score for ALI [8]; however, these models use different architectures and may not be directly comparable.

Figure 3: Best samples on CIFAR-10 (L-to-R): DCGAN, WGAN-GP, AGE and α-GAN, according to visual inspection. For AGE and α-GAN reconstructions on CIFAR-10, see the Appendix.

Experimental insights

: Irrespective of the algorithm used, we found that two factors can contribute significantly to the quality of the results:

  • The network architectures

    We noticed that the most decisive factor lies in the architectures chosen for the discriminator and generator. We found that, given enough capacity, DCGAN (which uses the traditional GAN loss [11]) can be very robust and does not suffer from obvious mode collapse on the datasets we tried. All models reported are sensitive to changes in the architectures, with minor changes resulting in catastrophic mode collapse, regardless of other hyperparameters.

  • The number of updates performed by the individual components of the model

    For DCGAN, we update the generator twice for each discriminator update, following https://github.com/carpedm20/DCGAN-tensorflow; we found this stabilizes training and produces significantly better samples, contrary to GAN theory, which suggests training the discriminator multiple times per generator update instead. Our findings are also consistent with the updates performed by the AGE model, where the generator is updated multiple times for each encoder update. Similarly, for α-GAN, we update the encoder (which can be seen as the latent code generator) and the generator twice for each discriminator and code discriminator update. On the other hand, for WGAN-GP, we update the discriminator 5 times for each generator update, following [1].

While the independent Wasserstein critic does not directly measure sample diversity, we notice a high correlation between its estimate of the negative Wasserstein distance and sample similarity (see Appendix ?). Note however that the measures are not perfectly correlated, and if used to rank the best performing jobs in a hyperparameter sweep they give different results.

7 Discussion

In this paper we have combined the variational lower bound on the data likelihood with the density ratio trick, allowing us to better understand the connection between variational auto-encoders and generative adversarial networks. From the resulting lower bound on the likelihood we derived a new training criterion for generative models, named α-GAN. α-GAN combines an adversarial loss with a data reconstruction loss. This can be seen in two ways: from the VAE perspective, it can address the blurriness of samples via the (learned) adversarial loss; from the GAN perspective, it can address mode collapse by grounding the generator using a perceptual similarity metric on the data: the reconstruction loss. In a quest to understand how α-GAN compares to other GAN models (including auto-encoder based ones), we deployed a set of metrics on three datasets and compared samples visually. While the picture of evaluating GANs is far from complete, we show that the metrics employed are complementary and assess different failure modes of GANs (mode collapse, overfitting to the training data, and poor learning of the data distribution).

The prospect of marrying the two approaches (VAEs and GANs) comes with multiple benefits: auto-encoder-based methods can be used to reconstruct data and thus can be used for inpainting [30, 45]; having an inference network allows our model to be used for representation learning [2], where we can learn disentangled representations by choosing an appropriate latent prior. We thus believe VAE-GAN hybrids such as α-GAN can be used in unsupervised, supervised and reinforcement learning settings, opening several directions for future work.

Acknowledgements.

We thank Ivo Danihelka and Chris Burgess for helpful feedback and discussions.

8 Model Samples

Figure 4 shows larger-sized versions of the samples in Figure 2 in the main text.

Figure 4: Best samples on CelebA according to visual inspection, shown in Figure 2. Top row: (left) DCGAN, (right) WGAN-GP. Bottom row: (left) AGE, (right) α-GAN.

9 Pseudocode

The overall training procedure is summarized in Algorithm ?.

10 Reconstructions

We show reconstructions obtained using α-GAN and AGE for the CelebA dataset in Figure ? and on CIFAR-10 in Figure ?.

11 Ablation experiment: code discriminator and the empirical KL

We have shown that we can estimate the KL term in the hybrid objective using the density ratio trick. In the case of a normal prior, another way to estimate the KL divergence on a mini-batch of latents, each of dimension K, with per-dimension sample mean and variance denoted by m_k and s_k^2 respectively, is the following empirical approximation (see footnote 3):

KL_empirical = sum_{k=1..K} [ (s_k^2 + m_k^2)/2 - log s_k - 1/2 ],

i.e. the KL divergence between a diagonal Gaussian fitted to the mini-batch and the standard normal prior.

In order to understand how the two different ways of estimating the KL term compare, we replaced the code discriminator in α-GAN with the KL approximation above. We then compared the results both by visual inspection (see CelebA and CIFAR-10 samples in Figure 5) and by evaluating how well the prior was matched. In order to be able to use the same hyperparameters for different latent sizes, we divide the approximation above by the latent size. To also understand the effects of the two methods on the resulting auto-encoder codes, we plot the means (Figure 6) and the covariance matrix (Figure 7) obtained from a set of saved latent codes. By assessing the statistics of the final codes obtained by models trained using both approaches, we see that the two methods of enforcing the prior have different side effects: the latent codes obtained using the code discriminator are decorrelated, while the ones obtained using the empirical KL are entangled; this is expected, since the correlation of latent dimensions is not modeled by the empirical approximation, while the code discriminator can pick up that highly correlated codes are not from the same distribution as the prior. While the code discriminator achieves better disentangling, the means obtained using the empirical KL are closer to 0, the mean of the prior distribution for each latent. We leave investigating these effects and combining the two approaches for future work.
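A small numpy sketch of the empirical KL approximation above (our own implementation of the formula; the division by the latent size mentioned in the text is included as an option):

```python
# Empirical KL between a diagonal Gaussian fitted to a mini-batch of codes and
# the standard normal prior, optionally divided by the latent size K.
import numpy as np

def empirical_kl(z, normalize=True, eps=1e-8):
    """z: array of shape (batch_size, K) holding a mini-batch of latent codes."""
    mu = z.mean(axis=0)                        # per-dimension sample mean
    var = z.var(axis=0) + eps                  # per-dimension sample variance
    kl_per_dim = 0.5 * (var + mu ** 2 - np.log(var) - 1.0)
    kl = kl_per_dim.sum()
    return kl / z.shape[1] if normalize else kl

codes = np.random.randn(64, 100)               # codes drawn from the prior itself
print(empirical_kl(codes))                     # should be close to 0
```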

Figure 5: Samples from α-GAN on CelebA and CIFAR-10, trained using the empirical KL approximation (as opposed to a code discriminator) to make the posterior and the prior over the latents match.
Figure 6: Histogram of latent means obtained on 64000 code representations from α-GAN trained using a code discriminator (left) and the empirical KL approximation (right). The latent size was 100. Since the prior was set to a normal with mean 0, we expect most means to be around 0. We note that the empirical KL seems better at forcing the means to be around 0.
Figure 7: Covariance matrices obtained on 64000 code representations from α-GAN trained using a code discriminator (left) and the empirical KL approximation (right). The latent size was 100. We note that the code discriminator produces latents with much less correlation than the empirical KL (which is what we want in this case, since the prior was an isotropic Gaussian with no correlation between dimensions).

12 Monitoring overfitting of the independent Wasserstein critic

To ensure that the independent Wasserstein critic does not overfit to the validation data during training, we monitor the difference in performance between training and test (see Figure 8).

Figure 8: Training curves of the independent Wasserstein critic for different hyperparameter values. The model trained here is α-GAN, trained on CelebA. Left: the loss obtained on a mini-batch from the validation data. Right: the average loss obtained on the entire test set.

13 Best samples according to different metrics

Figure 9 shows the best samples on CelebA according to different metrics.

Figure 9: Best samples from α-GAN trained on CelebA according to different metrics: sample quality (left), independent Wasserstein critic (middle), and sample diversity (right) given by 1 - MS-SSIM.

14 Relationships between different metrics

We assess the correlation between sample diversity and how good a model is according to an independent Wasserstein critic in Figure 10.

Figure 10: Correlation between sample diversity and the negative Wasserstein distance, obtained from an α-GAN hyperparameter sweep.

15 Training details: hyperparameters and network architectures

For all our models, we kept a fixed learning rate throughout training. We note the difference with AGE, where the authors decayed the learning rate and changed the loss coefficients during training (see footnote 4). The exact learning rate sweeps are defined in Table ?. We used the Adam optimizer [21] and a batch size of 64 for all our experiments. We used batch normalization [18] in all our experiments. We trained all ColorMNIST models for 100000 iterations, and the CelebA and CIFAR-10 models for 200000 iterations.

15.1 Scaling coefficients

We used the following sweeps for the models which have combined losses with different coefficients (for all our baselines, we took the sweep ranges from the original papers):

  • WGAN-GP

    • The gradient penalty of the discriminator loss function: 10.

  • AGE

    • Data reconstruction loss for the encoder: sweep over 100, 500, 1000, 2000.

    • Code reconstruction loss for the generator: 10.

  • -GAN

    • Data reconstruction loss for the encoder: sweep over 1, 5, 10, 50.

    • Data reconstruction loss for the generator: sweep over 1, 5, 10, 50.

    • Adversarial loss for the generator (coming from the data discriminator): 1.0.

    • Adversarial loss for the encoder (coming from the code discriminator): 1.0.

15.2 Choice of loss functions

For AGE, we used the L1 loss as the data reconstruction loss and the cosine distance for the code reconstruction loss. For α-GAN, we used the L1 loss as the data reconstruction loss and the traditional GAN loss for the data and code discriminators.

15.3 Choice of latent prior

We use a normal prior for all models, apart from AGE [41] which uses a uniform unit ball as the prior, and thus we project the output of the encoder to the unit ball.

15.4 Network architectures

For all our baselines, we used the same discriminator and generator architectures, and we controlled the number of latents for a fair comparison. For AGE we used the encoder architecture suggested by the authors,5 which is very similar to the DCGAN discriminator architecture. For α-GAN, the encoder is always a convolutional network formed by transposing the generator (we do not use any activation function after the encoder). All discriminators and the AGE encoder use leaky ReLU units with a slope of 0.2, and all generators use ReLUs. For all our experiments using α-GAN, we used as the code discriminator a 3-layer MLP, with each layer containing 750 hidden units. We did not tune the size of this network; we postulate that since the prior latent distributions are similar (multivariate normals) across datasets, the architecture of the code discriminator matters less than that of the data discriminator, which has to change from dataset to dataset (with the complexity of the data distribution). However, one could improve on our results by carefully tuning this architecture too.

ColorMNIST

For all our models trained on ColorMNIST, we swept over the latent sizes 10, 50 and 75. Tables ? and ? describe the discriminator and generator architectures respectively.

ColorMNIST discriminator architecture used for DCGAN, WGAN-GP and α-GAN. For DCGAN, we use dropout of 0.8 after the last convolutional layer. No other model uses dropout.
Operation Kernel Strides Feature maps
Convolution
Convolution
Convolution
Convolution
Convolution
Linear adv N/A N/A
Linear class N/A N/A
ColorMNIST generator architecture. This architecture was used for all 4 compared models.
Operation Kernel Strides Feature maps
Linear N/A N/A
Transposed Convolution
Transposed Convolution
Transposed Convolution

CelebA and CIFAR-10

The discriminator and generator architectures used for CelebA and CIFAR-10 were the same as the ones used by [13] for WGAN.6 Note that the WGAN-GP paper reports inception scores computed on a different architecture, using 101-Resnet blocks.

15.5 CIFAR-10 classifier used for Inception score

We used a VGG-style [35] convnet trained on CIFAR-10 as the classifier network used to report the inception score in Section 5. The architecture is described in Table ?. We use batch normalization after each convolutional layer. The data is rescaled to a fixed range, and during training the input images are randomly cropped. We used a momentum optimizer with a learning rate starting at 0.1 and decaying by a factor of 0.1 at steps 40000 and 60000, with momentum set to 0.9. We used an L2 regularization penalty on the weights. The network was trained for 80000 steps, using a batch size of 256 (8 synchronous workers, each with a batch size of 32). The resulting network achieves an error of 5.5% on the official CIFAR-10 test set.

The neural network trained to classify CIFAR-10 data.
Operation Kernel Strides Feature maps
Convolution
Convolution
Convolution
Convolution
Convolution
Convolution
Convolution
Convolution
Convolution
Convolution
Convolution
Average pooling N/A N/A N/A
Linear class N/A N/A

Footnotes

  1. We use the Greek prefix α for α-GAN, as AEGAN and most other Latin prefixes seem to have been taken (https://deephunt.in/the-gan-zoo-79597dc8c347).
  2. [5] used the original WGAN [1], whereas we use improved WGAN-GP proposed in [13].
  3. This approximation was also used by [41] in AGE.
  4. As per advice found here: https://github.com/DmitryUlyanov/AGE/
  5. Code at: https://github.com/DmitryUlyanov/AGE/
  6. Code at: https://github.com/martinarjovsky/WassersteinGAN/blob/master/models/dcgan.py

References

  1. Wasserstein GAN.
    M. Arjovsky, S. Chintala, and L. Bottou. arXiv preprint arXiv:1701.07875
  2. Representation learning: A review and new perspectives.
    Y. Bengio, A. Courville, and P. Vincent. IEEE transactions on pattern analysis and machine intelligence
  3. BEGAN: Boundary equilibrium generative adversarial networks.
    D. Berthelot, T. Schumm, and L. Metz. arXiv preprint arXiv:1703.10717
  4. Mode regularized generative adversarial networks.
    T. Che, Y. Li, A. P. Jacob, Y. Bengio, and W. Li. arXiv preprint arXiv:1612.02136
  5. Comparison of Maximum Likelihood and GAN-based training of Real NVPs.
    I. Danihelka, B. Lakshminarayanan, B. Uria, D. Wierstra, and P. Dayan. arXiv preprint arXiv:1705.05263
  6. Adversarial feature learning.
    J. Donahue, P. Krähenbühl, and T. Darrell. arXiv preprint arXiv:1605.09782
  7. Generating images with perceptual similarity metrics based on deep networks.
    A. Dosovitskiy and T. Brox. In Advances in Neural Information Processing Systems, pages 658–666, 2016.
  8. Adversarially learned inference.
    V. Dumoulin, I. Belghazi, B. Poole, A. Lamb, M. Arjovsky, O. Mastropietro, and A. Courville. arXiv preprint arXiv:1606.00704
  9. Likelihood-free inference by penalised logistic regression.
    R. Dutta, J. Corander, S. Kaski, and M. U. Gutmann. arXiv preprint arXiv:1611.10242
  10. Gradient estimation.
    M. C. Fu. Handbooks in operations research and management science
  11. Generative adversarial nets.
    I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
  12. On distinguishability criteria for estimating generative models.
    I. J. Goodfellow. arXiv preprint arXiv:1412.6515
  13. Improved Training of Wasserstein GANs.
    I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville. arXiv preprint arXiv:1704.00028
  14. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics.
    M. U. Gutmann and A. Hyvärinen. Journal of Machine Learning Research
  15. Statistical inference of intractable generative models via classification.
    M. U. Gutmann, R. Dutta, S. Kaski, and J. Corander. arXiv preprint arXiv:1407.4981
  16. The elements of statistical learning.
    T. Hastie, R. Tibshirani, and J. Friedman.
  17. Variational inference using implicit distributions.
    F. Huszár. arXiv preprint arXiv:1702.08235
  18. Batch normalization: Accelerating deep network training by reducing internal covariate shift.
    S. Ioffe and C. Szegedy. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 448–456, 2015.
  19. Adversarial message passing for graphical models.
    T. Karaletsos. arXiv preprint arXiv:1612.05048
  20. Learning to discover cross-domain relations with generative adversarial networks.
    T. Kim, M. Cha, H. Kim, J. Lee, and J. Kim. arXiv preprint arXiv:1703.05192
  21. Adam: A method for stochastic optimization.
    D. Kingma and J. Ba. arXiv preprint arXiv:1412.6980
  22. Auto-encoding variational Bayes.
    D. P. Kingma and M. Welling. arXiv preprint arXiv:1312.6114
  23. Learning multiple layers of features from tiny images.
    A. Krizhevsky. 2009.
  24. Autoencoding beyond pixels using a learned similarity metric.
    A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. In Proceedings of The 33rd International Conference on Machine Learning, pages 1558–1566, 2016.
  25. Deep learning face attributes in the wild.
    Z. Liu, P. Luo, X. Wang, and X. Tang. In Proceedings of International Conference on Computer Vision (ICCV), 2015.
  26. Adversarial autoencoders.
    A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey. arXiv preprint arXiv:1511.05644
  27. Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks.
    L. Mescheder, S. Nowozin, and A. Geiger. arXiv preprint arXiv:1701.04722
  28. Unrolled generative adversarial networks.
    L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein. arXiv preprint arXiv:1611.02163
  29. Learning in implicit generative models.
    S. Mohamed and B. Lakshminarayanan. arXiv preprint arXiv:1610.03483
  30. Plug & play generative networks: Conditional iterative generation of images in latent space.
    A. Nguyen, J. Yosinski, Y. Bengio, A. Dosovitskiy, and J. Clune. arXiv preprint arXiv:1612.00005
  31. Conditional image synthesis with auxiliary classifier GANs.
    A. Odena, C. Olah, and J. Shlens. arXiv preprint arXiv:1610.09585
  32. Unsupervised representation learning with deep convolutional generative adversarial networks.
    A. Radford, L. Metz, and S. Chintala. arXiv preprint arXiv:1511.06434
  33. Stochastic backpropagation and approximate inference in deep generative models.
    D. J. Rezende, S. Mohamed, and D. Wierstra. In The 31st International Conference on Machine Learning (ICML), 2014.
  34. Improved techniques for training GANs.
    T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. arXiv preprint arXiv:1606.03498
  35. Very deep convolutional networks for large-scale image recognition.
    K. Simonyan and A. Zisserman. arXiv preprint arXiv:1409.1556
  36. Density ratio estimation in machine learning.
    M. Sugiyama, T. Suzuki, and T. Kanamori.
  37. Rethinking the inception architecture for computer vision.
    C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
  38. A note on the evaluation of generative models.
    L. Theis, A. v. d. Oord, and M. Bethge. arXiv preprint arXiv:1511.01844
  39. Deep and hierarchical implicit models.
    D. Tran, R. Ranganath, and D. M. Blei. arXiv preprint arXiv:1702.08896
  40. Generative adversarial nets from a density ratio estimation perspective.
    M. Uehara, I. Sato, M. Suzuki, K. Nakayama, and Y. Matsuo. arXiv preprint arXiv:1610.02920
  41. Adversarial generator-encoder networks.
    D. Ulyanov, A. Vedaldi, and V. Lempitsky. arXiv preprint arXiv:1704.02304
  42. Multiscale structural similarity for image quality assessment.
    Z. Wang, E. P. Simoncelli, and A. C. Bovik. In Signals, Systems and Computers, 2004. Conference Record of the Thirty-Seventh Asilomar Conference on, volume 2, pages 1398–1402. IEEE, 2003.
  43. Improving generative adversarial networks with denoising feature matching.
    D. Warde-Farley and Y. Bengio. ICLR submission
  44. Statistical inference for noisy nonlinear ecological dynamic systems.
    S. N. Wood. Nature
  45. Image denoising and inpainting with deep neural networks.
    J. Xie, L. Xu, and E. Chen. In Advances in Neural Information Processing Systems, pages 341–349, 2012.
  46. Unpaired image-to-image translation using cycle-consistent adversarial networks.
    J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. arXiv preprint arXiv:1703.10593