On Self Modulation for Generative Adversarial Networks


Ting Chen
University of California, Los Angeles
tingchen@cs.ucla.edu
Mario Lucic, Neil Houlsby, Sylvain Gelly
Google Brain
{lucic,neilhoulsby,sylvaingelly}@google.com
Work done at Google
Abstract

Training Generative Adversarial Networks (GANs) is notoriously challenging. We propose and study an architectural modification, self-modulation, which improves GAN performance across different data sets, architectures, losses, regularizers, and hyperparameter settings. Intuitively, self-modulation allows the intermediate feature maps of a generator to change as a function of the input noise vector. While reminiscent of other conditioning techniques, it requires no labeled data. In a large-scale empirical study we observe a relative decrease of 4%–33% in FID. Furthermore, all else being equal, adding this modification to the generator leads to improved performance in 86% (124 out of 144) of the studied settings. Self-modulation is a simple architectural change that requires no additional parameter tuning, which suggests that it can be applied readily to any GAN.


1 Introduction

Generative Adversarial Networks (GANs) are a powerful class of generative models successfully applied to a variety of tasks such as image generation (Zhang et al., 2017; Miyato et al., 2018; Karras et al., 2017), learned compression (Tschannen et al., 2018), super-resolution (Ledig et al., 2017), inpainting (Pathak et al., 2016), and domain transfer (Isola et al., 2016; Zhu et al., 2017).

Training GANs is a notoriously challenging task (Goodfellow et al., 2014; Arjovsky et al., 2017; Lucic et al., 2018), as one is searching in a high-dimensional parameter space for a Nash equilibrium of a non-convex game. As a practical remedy one applies (usually a variant of) stochastic gradient descent, which can be unstable and lacks guarantees (Salimans et al., 2016). As a result, one of the main research challenges is to stabilize GAN training. Several approaches have been proposed, including varying the underlying divergence between the model and data distributions (Arjovsky et al., 2017; Mao et al., 2016), regularization and normalization schemes (Gulrajani et al., 2017; Miyato et al., 2018), optimization schedules (Karras et al., 2017), and specific neural architectures (Radford et al., 2016; Zhang et al., 2018). A particularly successful approach is based on conditional generation, where the generator (and possibly the discriminator) are given side information, for example class labels (Mirza & Osindero, 2014; Odena et al., 2017; Miyato & Koyama, 2018). In fact, state-of-the-art conditional GANs inject side information via conditional batch normalization (CBN) layers (De Vries et al., 2017; Miyato & Koyama, 2018; Zhang et al., 2018). While this approach does help, a major drawback is that it requires external information, such as labels or embeddings, which is not always available.

In this work we show that GANs benefit from self-modulation layers in the generator. Our approach is motivated by Feature-wise Linear Modulation in supervised learning (Perez et al., 2018; De Vries et al., 2017), with one key difference: instead of conditioning on external information, we condition on the generator’s own input. As self-modulation requires a simple change which is easily applicable to all popular generator architectures, we believe that it is a useful addition to the GAN toolbox.

Summary of contributions. We provide a simple yet effective technique that can be added universally to yield better GANs. We open-source the code at http://anonymized.url. We demonstrate empirically that, for a wide variety of settings (loss functions, regularizers and normalizers, neural architectures, and optimization settings), the proposed approach yields between a 4% and 33% reduction in FID. When using fixed hyperparameter settings, our approach outperforms the baseline in 86% (124/144) of cases. Further, we show that self-modulation still helps even if label information is available. Finally, we discuss the effects of this method in light of recently proposed diagnostic tools: generator conditioning (Odena et al., 2018) and precision/recall for generative models (Sajjadi et al., 2018).

2 Self-Modulation for Generative Adversarial Networks

Figure 1: (a) The proposed Self-Modulation framework for a generator network, where middle layers are directly modulated as a function of the generator input z. (b) A simple MLP-based modulation function that transforms the input z into the modulation variables γ(z) and β(z).

Several recent works observe that conditioning the generative process on side information (such as labels or class embeddings) leads to improved models (Mirza & Osindero, 2014; Odena et al., 2017; Miyato & Koyama, 2018). Two major approaches to conditioning on side information y have emerged: (1) Directly concatenate the side information y with the noise vector z (Mirza & Osindero, 2014), i.e. feed [z; y] to the generator. (2) Condition the hidden layers directly on y, which is usually instantiated via conditional batch normalization (De Vries et al., 2017; Miyato & Koyama, 2018).
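For concreteness, approach (1) amounts to a one-line operation. A minimal NumPy sketch follows; the 128-dimensional noise vector and 10 classes are illustrative choices, not values from the paper:

```python
import numpy as np

def concat_condition(z, y, num_classes):
    """Approach (1): concatenate a one-hot class label with the noise vector z."""
    onehot = np.zeros(num_classes, dtype=z.dtype)
    onehot[y] = 1.0
    return np.concatenate([z, onehot])

rng = np.random.default_rng(0)
z = rng.standard_normal(128).astype(np.float32)
z_cond = concat_condition(z, y=3, num_classes=10)
print(z_cond.shape)  # (138,)
```

The generator's first layer then simply consumes the longer vector, which is why this scheme conditions only the input, not arbitrary hidden layers.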

Despite the success of conditional approaches, two concerns arise. The first is practical: side information is often unavailable. The second is conceptual: unsupervised models, such as GANs, seek to model data without labels. Including labels side-steps the challenge, and the value, of unsupervised learning.

We propose self-modulating layers for the generator network. In these layers the hidden activations are modulated as a function of the latent vector z. In particular, we apply modulation in a feature-wise fashion, which allows the model to re-weight the feature maps as a function of the input. This is also motivated by the FiLM layer for supervised models (Perez et al., 2018; De Vries et al., 2017), in which a similar mechanism is used to condition a supervised network on side information.

Batch normalization (Ioffe & Szegedy, 2015) can improve the training of deep neural nets, and it is widely used in both discriminative and generative modeling (Szegedy et al., 2015; Radford et al., 2016; Miyato et al., 2018). It is thus present in most modern networks and provides a convenient entry point for self-modulation. Therefore, we present our method in the context of its application via batch normalization. In batch normalization the activations h of a layer are linearly transformed as

h' = γ ⊙ (h − μ) / σ + β,   (1)

where μ and σ are the estimated means and standard deviations of the features across the data, γ and β are learnable scale and shift parameters, and ⊙ denotes element-wise multiplication.

Self-modulation for unconditional (without side information) generation. In this case the proposed method replaces the non-adaptive parameters γ and β with input-dependent γ(z) and β(z), respectively. These are parametrized by a small neural network applied to the generator’s input z (Figure 1). In particular, for layer l, we compute

h'_l = γ_l(z) ⊙ (h_l − μ_l) / σ_l + β_l(z).   (2)

In general, it suffices that γ_l(z) and β_l(z) are differentiable. In this work, we use a small one-hidden-layer feed-forward network (MLP) with ReLU activation applied to the generator input z. Specifically, given parameter matrices U_l and V_l, and a bias vector b_l, we compute

γ_l(z) = V_l max(U_l z + b_l, 0),

and similarly for β_l(z) (with its own parameters).
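The self-modulated batch-normalization layer and its MLP can be sketched in a few lines of NumPy. The parameter names and shapes below are illustrative rather than taken from our implementation, and per-batch statistics stand in for the estimated mean and standard deviation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_modulation(z, U, b, V):
    # One-hidden-layer ReLU MLP: V @ max(U @ z + b, 0), as in Equation 2.
    return V @ np.maximum(U @ z + b, 0.0)

def self_modulated_bn(h, z, params, eps=1e-5):
    """Batch norm where the scale/shift are functions gamma(z), beta(z)."""
    mu = h.mean(axis=0)          # per-feature batch statistics
    sigma = h.std(axis=0) + eps
    gamma = mlp_modulation(z, *params["gamma"])
    beta = mlp_modulation(z, *params["beta"])
    return gamma * (h - mu) / sigma + beta

dim_z, hidden, feats, batch = 16, 32, 8, 4
make = lambda: (rng.standard_normal((hidden, dim_z)),   # U
                np.zeros(hidden),                       # b
                rng.standard_normal((feats, hidden)) * 0.1)  # V
params = {"gamma": make(), "beta": make()}
h = rng.standard_normal((batch, feats))  # activations of one layer
z = rng.standard_normal(dim_z)           # generator input
out = self_modulated_bn(h, z, params)
print(out.shape)  # (4, 8)
```

Note that after normalization the per-feature mean of the output equals β(z), which makes the shift's input-dependence easy to verify.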

Self-modulation for conditional (with side information) generation. Having access to side information has proved useful for conditional generation. The use of labels in the generator (and possibly the discriminator) was introduced by Mirza & Osindero (2014) and later adapted by Odena et al. (2017) and Miyato & Koyama (2018). If side information is available (e.g. a class label y), it can be readily incorporated into the proposed method. This can be achieved by simply composing the information with the input via some learnable function g, i.e. z' = g(y, z). In this work we opt for the simplest option and instantiate g as a bi-linear interaction between z and two trainable embedding functions E_1, E_2 of the class label y:

z' = E_1(y) ⊙ z + E_2(y).   (3)

This conditionally composed z' can then be used in place of z in Equation 2. Despite its simplicity, we demonstrate that it outperforms the standard conditional models.
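A minimal sketch of this conditional composition, with hypothetical embedding tables standing in for the trainable functions E_1 and E_2:

```python
import numpy as np

rng = np.random.default_rng(1)
num_classes, dim_z = 10, 16  # illustrative sizes
E1 = rng.standard_normal((num_classes, dim_z))  # trainable embedding tables
E2 = rng.standard_normal((num_classes, dim_z))

def compose(z, y):
    # z' = E1(y) * z + E2(y): multiplicative and additive label interaction.
    return E1[y] * z + E2[y]

z = rng.standard_normal(dim_z)
z_prime = compose(z, y=7)
print(z_prime.shape)  # (16,)
```

In training, the rows of E1 and E2 would be learned jointly with the rest of the generator; here they are random placeholders.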

Conditioning on     | Only first layer                              | Arbitrary layers
Side information y  | N/A                                           | Conditional batch normalization (De Vries et al., 2017; Miyato & Koyama, 2018)
Latent vector z     | Unconditional generator (Goodfellow et al., 2014) | (Unconditional) Self-Modulation (this work)
Both y and z        | Conditional generator (Mirza & Osindero, 2014)    | (Conditional) Self-Modulation (this work)
Table 1: Techniques for generator conditioning and modulation.

Discussion. Table 1 summarizes recent techniques for generator conditioning. While we choose to implement this approach via batch normalization, it can also operate independently by removing the normalization in Equation 1. We make this pragmatic choice because such conditioning is already common (Radford et al., 2016; Miyato et al., 2018; Miyato & Koyama, 2018).

A second question is whether one benefits from more complex modulation architectures, such as using an attention network (Vaswani et al., 2017), whereby γ and β could be made dependent on all upstream activations, or constraining the elements of γ(z) to (0, 1), e.g. via a sigmoid, which would yield a gating mechanism similar to that of an LSTM cell (Hochreiter & Schmidhuber, 1997). Based on initial experiments we concluded that this additional complexity does not yield a substantial increase in performance.

3 Experiments

We perform a large-scale study of self-modulation to demonstrate that this method yields robust improvements in a variety of settings. We consider loss functions, architectures, discriminator regularization/normalization strategies, and a variety of hyperparameter settings collected from recent studies (Radford et al., 2016; Gulrajani et al., 2017; Miyato et al., 2018; Lucic et al., 2018; Kurach et al., 2018). We study both unconditional (without labels) and conditional (with labels) generation. Finally, we analyze the results through the lens of the condition number of the generator’s Jacobian as suggested by Odena et al. (2018), and precision and recall as defined in Sajjadi et al. (2018).

3.1 Experimental Settings

Loss functions. We consider two loss functions. The first is the non-saturating loss proposed in Goodfellow et al. (2014):

L_D = −E_{x∼p_d}[log D(x)] − E_{z∼p(z)}[log(1 − D(G(z)))],
L_G = −E_{z∼p(z)}[log D(G(z))].

The second is the hinge loss used in Miyato et al. (2018):

L_D = E_{x∼p_d}[max(0, 1 − D(x))] + E_{z∼p(z)}[max(0, 1 + D(G(z)))],
L_G = −E_{z∼p(z)}[D(G(z))].
Controlling the Lipschitz constant of the discriminator. The discriminator’s Lipschitz constant is a central quantity analyzed in the GAN literature (Miyato et al., 2018; Zhou et al., 2018). We consider two state-of-the-art techniques: Gradient Penalty (Gulrajani et al., 2017) and Spectral Normalization (Miyato et al., 2018). Without normalization or regularization the models can perform poorly on some datasets. For the Gradient Penalty regularizer we consider two regularization strengths λ.

Network architecture. We use two popular architecture types: one based on DCGAN (Radford et al., 2016), and another from Miyato et al. (2018) which incorporates residual connections (He et al., 2016). The details can be found in the appendix.

Optimization hyper-parameters. We train all models for 100k generator steps with the Adam optimizer (Kingma & Ba, 2014); we also perform a subset of the studies with longer training and discuss it in Section 3.2. We test two popular settings of the Adam hyperparameters (β₁, β₂): (0, 0.9) and (0.5, 0.999). Previous studies find that multiple discriminator steps per generator step can help training (Goodfellow et al., 2014; Salimans et al., 2016); thus we also consider both 1 and 2 discriminator steps per generator step (we also experimented with 5 steps, which did not outperform the 2-step setting). In total, this amounts to three different sets of hyper-parameters for (β₁, β₂, n_disc): (0, 0.9, 1), (0, 0.9, 2), and (0.5, 0.999, 1). We fix the learning rate to 0.0002 as in Miyato et al. (2018). All models are trained with a batch size of 64 on a single NVIDIA P100 GPU. We report the best performing model attained during the training period, although the results follow the same pattern if the final model is reported.

Datasets. We consider four datasets: cifar10, celeba-hq, lsun-bedroom, and imagenet. The lsun-bedroom dataset (Yu et al., 2015) contains around 3M images. We partition the images randomly into a test set containing 30588 images and a train set containing the rest. celeba-hq contains 30k images (Karras et al., 2017). We use the version obtained by running the code provided by the authors (available at https://github.com/tkarras/progressive_growing_of_gans). We use 3000 examples as the test set and the remaining examples as the training set. cifar10 contains 70k images (32×32×3), partitioned into 60000 training instances and 10000 test instances. Finally, we evaluate our method on imagenet, which contains 1.3M training images and 50k test images. We re-size the images to 128×128×3 as done in Miyato & Koyama (2018) and Zhang et al. (2018).

Metrics. Quantitative evaluation of generative models remains one of the most challenging tasks. This is particularly true in the context of implicit generative models where likelihood cannot be effectively evaluated. Nevertheless, two quantitative measures have recently emerged: The Inception Score and the Frechet Inception Distance. While both of these scores have some drawbacks, they correlate well with scores assigned by human annotators and are somewhat robust.

The Inception Score (IS) (Salimans et al., 2016) is based on the insight that the conditional label distribution p(y|x) of samples containing meaningful objects should have low entropy, while the marginal label distribution p(y) should have high entropy. Formally, IS = exp(E_{x∼p_g}[KL(p(y|x) ‖ p(y))]). The score is computed based on an Inception classifier (Szegedy et al., 2015). Some drawbacks of applying the IS to model comparison are discussed in Barratt & Sharma (2018).
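The score is straightforward to compute given a matrix of classifier probabilities for generated samples; a small self-contained sketch using toy probability matrices:

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """IS = exp(mean_x KL(p(y|x) || p(y))) for an (N, num_classes) matrix
    of classifier probabilities over generated samples."""
    p_y = probs.mean(axis=0)  # marginal label distribution over the samples
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Confident, diverse predictions -> high score; uniform ones -> score near 1.
confident = np.eye(4)            # four samples, each a different sharp class
uniform = np.full((4, 4), 0.25)  # classifier is maximally unsure
print(inception_score(confident), inception_score(uniform))  # ~4.0 ~1.0
```

With four classes the score is bounded above by 4, attained when each sample is classified sharply and the classes are covered uniformly.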

An alternative score, the Frechet Inception Distance (FID), requires no labeled data (Heusel et al., 2017). The real and generated samples are first embedded into a feature space (using a specific layer of InceptionNet). Then, a multivariate Gaussian is fit to each sample set and the distance is computed as FID = ‖μ_x − μ_g‖₂² + Tr(Σ_x + Σ_g − 2(Σ_x Σ_g)^{1/2}), where μ and Σ denote the empirical mean and covariance, and subscripts x and g denote the true and generated data, respectively. FID was shown to be robust to various manipulations (Heusel et al., 2017) and sensitive to mode dropping (Lucic et al., 2018).
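A minimal implementation of this distance, assuming the samples have already been embedded; the random 8-dimensional features below are purely for illustration:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_gen):
    """Frechet distance between Gaussians fit to embedded real/generated samples."""
    mu_x, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_x = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(cov_x @ cov_g)
    if np.iscomplexobj(covmean):  # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu_x - mu_g) ** 2)
                 + np.trace(cov_x + cov_g - 2.0 * covmean))

rng = np.random.default_rng(0)
same = rng.standard_normal((2000, 8))
shifted = rng.standard_normal((2000, 8)) + 3.0  # mean-shifted distribution
print(fid(same, same) < 1e-3, fid(same, shifted) > 50)  # True True
```

Identical sample sets score (numerically) zero, while a mean shift inflates the score through the squared-mean term, matching FID's sensitivity to distribution mismatch.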

3.2 Robustness experiments for unconditional generation

To test robustness, we run a Cartesian product of the parameters in Section 3.1, which results in 36 settings for each dataset (2 losses, 2 architectures, 3 hyperparameter settings for spectral norm, and 6 for gradient penalty). For each setting we run five random seeds for self-modulation and the baseline (no self-modulation, just batch normalization) and compute the median score across random seeds; this results in 1440 trained models.
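The grid can be enumerated directly, which also verifies the count of 36 settings per dataset; the labels for the three optimization settings and two penalty strengths below are placeholders:

```python
from itertools import product

losses = ["non-saturating", "hinge"]
architectures = ["resnet", "sndcgan"]
# Spectral norm has one variant; gradient penalty has two strengths, so the
# three optimization settings yield 3 and 6 combinations, respectively.
regularizers = ["spectral-norm", "gradient-penalty-a", "gradient-penalty-b"]
optim_settings = ["opt-1", "opt-2", "opt-3"]

settings = list(product(losses, architectures, regularizers, optim_settings))
print(len(settings))  # 36
```

Multiplying by 4 datasets, 2 conditioning methods, and 5 seeds recovers the 1440 trained models.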

We distinguish between two sets of experiments. In the unpaired setting we define the model as the tuple of loss, regularizer/normalization, neural architecture, and conditioning (self-modulated or classic batch normalization). For each model we compute the minimum FID across the optimization hyperparameters (β₁, β₂, and the number of discriminator steps). We therefore compare the performance of self-modulation and the baseline for each model after hyperparameter optimization. The results of this study are reported in Table 2, and the relative improvements are in Table 3 and Figure 2. We observe the following: (1) When using the resnet style architecture, the proposed method outperforms the baseline in all considered settings. (2) When using the sndcgan architecture, it outperforms the baseline in 87.5% of the cases (14 out of 16). The breakdown by datasets is shown in Figure 2. (3) The improvement can be as high as a 33% reduction in FID. (4) We observe similar improvements in the Inception Score, reported in the appendix.

In the second setting, the paired setting, we assess how effective the technique is when simply added to an existing model with the same set of hyperparameters. In particular, we fix everything except the type of conditioning; the model tuple now includes the optimization hyperparameters. This results in 36 settings for each data set, for a total of 144 comparisons. We observe that self-modulation outperforms the baseline in 124/144 settings. These results suggest that self-modulation can be applied to most GANs even without additional hyperparameter tuning.

Type Arch Loss Method bedroom celebahq cifar10 imagenet
Gradient penalty res hinge self-mod 22.62 27.03 26.93 78.31
baseline 27.75 30.02 28.14 86.23
ns self-mod 25.30 26.65 26.74 85.67
baseline 36.79 33.72 28.61 98.38
sndc hinge self-mod 110.86 55.63 33.58 90.67
baseline 119.59 68.51 36.24 116.25
ns self-mod 120.73 125.44 33.70 101.40
baseline 134.13 131.89 37.12 122.74
Spectral Norm res hinge self-mod 14.32 24.50 18.54 68.90
baseline 17.10 26.15 20.08 78.62
ns self-mod 14.80 26.27 20.63 80.48
baseline 17.50 30.22 23.81 120.82
sndc hinge self-mod 48.07 22.51 24.66 75.87
baseline 38.31 27.20 26.33 90.01
ns self-mod 46.65 24.73 26.09 76.69
baseline 40.80 28.16 27.41 93.25
Best of above self-mod 14.32 22.51 18.54 68.90
baseline 17.10 26.15 20.08 78.62
Table 2: In the unpaired setting (as defined in Section 3.2), we compute the median score (across random seeds) and report the best attainable score across the considered optimization hyperparameters. Self-mod is the method introduced in Section 2 and baseline refers to batch normalization. We observe that the proposed approach strongly outperforms the baseline in 30 out of 32 settings. The relative improvement is detailed in Table 3.
Model     Dataset    Reduction(%) resnet  Reduction(%) sndc
hinge-gp  bedroom    18.50                 7.30
hinge-gp  celebahq    9.94                18.81
hinge-gp  cifar10     4.30                 7.33
hinge-gp  imagenet    9.18                22.01
ns-gp     bedroom    31.22                 9.99
ns-gp     celebahq   20.96                 4.89
ns-gp     cifar10     6.51                 9.21
ns-gp     imagenet   12.92                17.39
hinge-sn  bedroom    16.25               -25.48
hinge-sn  celebahq    6.31                17.26
hinge-sn  cifar10     7.67                 6.35
hinge-sn  imagenet   12.37                15.72
ns-sn     bedroom    15.43               -14.35
ns-sn     celebahq   13.08                12.20
ns-sn     cifar10    13.36                 4.83
ns-sn     imagenet   33.39                17.76
Table 3: Reduction in FID over a large class of hyperparameter settings, losses, regularization, and normalization schemes. We observe a 4.3% to 33% decrease in FID. When applied to the resnet architecture, independently of the loss, regularization, and normalization, self-mod always outperforms the baseline. For sndcgan we observe an improvement in 87.5% of the cases (all except two on lsun-bedroom).
(a)
(b)
(c)
Figure 2: In Figure (a) we observe that the proposed method outperforms the baseline in the unpaired setting. Figure (b) shows the number of models which fall in the 80th percentile in terms of FID (with reverse ordering). We observe that the majority of “good” models utilize self-modulation. Figure (c) shows that applying self-conditioning is more beneficial on the later layers, but should be applied to each layer for optimal performance. This effect persists across the considered datasets.

Conditional Generation. We demonstrate that self-modulation also works for label-conditional generation. Here, one is given access to the class label y, which may be used by the generator and the discriminator. We compare two settings: (1) Generator conditioning is applied via label-conditional Batch Norm (De Vries et al., 2017; Miyato & Koyama, 2018) with no use of labels in the discriminator (G-Cond). (2) Generator conditioning is applied as above, but with projection-based conditioning in the discriminator (intuitively, this encourages the discriminator to use label-discriminative features to distinguish true from fake samples), as in Miyato & Koyama (2018) (P-cGAN). The former can be considered a special case of the latter where discriminator conditioning is disabled. For P-cGAN, we take the architectures and hyper-parameter settings from Miyato & Koyama (2018); see the appendix, Section B.3, for details. In both cases, we compare standard label-conditional batch normalization to self-modulation with additional label information, as discussed in Section 2, Equation 3.

The results are shown in Table 4. Again, we observe that the simple incorporation of self-modulation leads to a significant improvement in performance in the considered settings.

Unconditional G-Cond P-cGAN
Score Baseline Self-mod Baseline Self-mod Baseline Self-mod
cifar10 fid 20.41 18.58 21.08 18.39 16.06 14.19
imagenet fid 81.07 69.53 80.43 68.93 70.28 66.09
cifar10 is 7.89 8.31 8.11 8.34 8.53 8.71
imagenet is 11.16 12.52 11.16 12.48 13.62 14.14
Table 4: FID and IS scores in label conditional setting.

Training for longer on imagenet. To demonstrate that self-modulation continues to yield improvement after training for longer, we train imagenet for 500k generator steps (we expect the results would continue to improve with even longer training; however, 500k steps already require training for 10 days on a P100 GPU). Due to the increased computational demand we use a single setting for the unconditional and conditional models following Miyato et al. (2018) and Miyato & Koyama (2018), but using 2 discriminator steps per generator step. We compute the median FID across 3 random seeds. After 500k steps, self-modulation attains a lower FID than the unconditional baseline. In the conditional setting, self-modulation improves the FID by 13%. The IS also improves in both the unconditional and conditional settings.

Where to apply self-modulation? Given the robust improvement of the proposed method, an immediate question is where to apply the modulation. We tested two settings: (1) applying modulation to every batch normalization layer, and (2) applying it to a single layer. The results of this ablation are in Figure 2. These results suggest that the benefit of self-modulation is greatest in the last layer, as may be intuitive, but applying it to each layer is most effective.

4 Related Work

Conditional GANs. Conditioning on side information, such as class labels, has been shown to improve the performance of GANs. Initial proposals were based on concatenating this additional feature with the input vector (Mirza & Osindero, 2014; Radford et al., 2016; Odena et al., 2017). Recent approaches, such as the projection cGAN (Miyato & Koyama, 2018), inject label information into the generator architecture using conditional Batch Norm layers (De Vries et al., 2017). Self-modulation is a simple yet effective complementary addition to this line of work which makes a significant difference when no side information is available. In addition, when side information is available it can be readily applied as discussed in Section 2 and leads to further improvements.

Conditional Modulation. Conditional modulation, using side information to modulate the computation flow in neural networks, is a rich idea which has been applied in various contexts beyond GANs. In particular, Dumoulin et al. (2017) apply Conditional Instance Normalization (Ulyanov et al., 2016) to image style transfer. Kim et al. (2017) use Dynamic Layer Normalization (Ba et al., 2016) for adaptive acoustic modelling. Feature-wise Linear Modulation (Perez et al., 2018) generalizes this family of methods by conditioning the Batch Norm scaling and bias factors (which correspond to multiplicative and additive interactions) on general external embedding vectors in supervised learning. The proposed method applies to generators in GANs (unsupervised learning), and it works in both the unconditional (without side information) and conditional (with side information) settings.

Multiplicative and Additive Modulation. The existing conditional modulations mentioned above are usually instantiated via Batch Normalization, which includes both multiplicative and additive modulation. These two types of modulation also link to other techniques widely used in the neural network literature. Multiplicative modulation is closely related to gating, which is adopted in LSTMs (Hochreiter & Schmidhuber, 1997), gated PixelCNN (van den Oord et al., 2016), convolutional sequence-to-sequence networks (Gehring et al., 2017), and Squeeze-and-Excitation Networks (Hu et al., 2018). Additive modulation is closely related to Residual Networks (He et al., 2016). The proposed method adopts both types of modulation.

5 Discussion

Figure 3: Each point corresponds to a single model/hyperparameter setting (top row: cifar10, bottom row: imagenet). The left-hand plots show the log condition number of the generator versus the FID score. The right-hand plots show the generator precision/recall curves. The correlation between log condition number and FID is positive for both Self-Mod and Base on cifar10 and imagenet. lsun-bedroom and celeba-hq are in the Appendix.

We present a generator modification that improves the performance of most GANs. This technique is simple to implement and can be applied to all popular GANs, therefore we believe that self-modulation is a useful addition to the GAN toolbox.

Our results suggest that self-modulation clearly yields performance gains; however, they do not say how this technique results in better models. Interpretation of deep networks is a complex topic, especially for GANs, where the training process is less well understood. Rather than purely speculate, we compute two recently proposed diagnostic statistics to shed light on the method's effects.

First, we compute the condition number of the generator's Jacobian. Odena et al. (2018) provide evidence that better generators have a Jacobian with lower condition number, and hence regularize using this quantity. We estimate the generator condition number in the same way as Odena et al. (2018): we compute the Jacobian at each z in a minibatch, then average the logarithms of the condition numbers computed from each Jacobian.
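This estimate is easy to reproduce on a toy generator. The sketch below uses a finite-difference Jacobian and a random two-layer network purely as a stand-in for a trained generator:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((32, 8))
W2 = rng.standard_normal((64, 32))

def generator(z):
    return W2 @ np.tanh(W1 @ z)  # toy stand-in for a trained generator

def jacobian(f, z, eps=1e-5):
    # Finite-difference Jacobian of f at z.
    f0 = f(z)
    J = np.zeros((f0.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (f(z + dz) - f0) / eps
    return J

# Average the log condition number over a minibatch of latent vectors.
log_conds = []
for _ in range(8):
    z = rng.standard_normal(8)
    s = np.linalg.svd(jacobian(generator, z), compute_uv=False)
    log_conds.append(np.log(s.max() / s.min()))
print(np.mean(log_conds) > 0)  # True: the condition number is always >= 1
```

In practice the Jacobian is obtained via automatic differentiation rather than finite differences, but the averaging over a minibatch is the same.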

Second, we compute a notion of precision and recall for generative models. Sajjadi et al. (2018) define precision and recall quantities for generators, which relate intuitively to the traditional precision and recall metrics for classification. Generating points which have low probability under the true data distribution is interpreted as a loss in precision, and is penalized by the precision score. Failing to generate points that have high probability under the true data distribution is interpreted as a loss in recall, and is penalized by the recall score.

Figure 3 shows both statistics. The left-hand plot shows the condition number plotted against the FID score for each model. We observe that poor models tend to have large condition numbers; the correlation, although noisy, is always positive. This result corroborates the observations in Odena et al. (2018). However, we notice an inverse trend in the vicinity of the best models. The cluster of the best models with self-modulation has lower FID, but higher condition number, than the best models without self-modulation. Overall, the correlation between FID and condition number is smaller for self-modulated models. This is surprising; it appears that rather than unilaterally reducing the condition number, self-modulation provides some training stability, yielding models with a small range of generator condition numbers.

The right-hand plot in Figure 3 shows the precision and recall scores. Models in the upper-left quadrant cover true data modes better (higher precision), and models in the lower-right quadrant produce more modes (higher recall). Self-modulated models tend to favour higher recall. This effect is most pronounced on imagenet.

Overall, these diagnostics indicate that self-modulation stabilizes the generator towards favourable conditioning values. It also appears to improve mode coverage. However, these metrics are very new; further development of analysis tools and theoretical study is needed to better disentangle the symptoms and causes of the self-modulation technique, and indeed of others.

Acknowledgments

We would like to thank Ilya Tolstikhin for helpful discussions. We would also like to thank Xiaohua Zhai, Marcin Michalski, Karol Kurach and Anton Raichuk for their help with infrastructure. The authors also appreciate general discussions with Olivier Bachem, Alexander Kolesnikov, Thomas Unterthiner, and Josip Djolonga. Finally, we are grateful for general support from other members of the Google Brain team.

References

  • Arjovsky et al. (2017) Martín Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International Conference on Machine Learning (ICML), 2017.
  • Ba et al. (2016) Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
  • Barratt & Sharma (2018) Shane Barratt and Rishi Sharma. A note on the inception score. arXiv preprint arXiv:1801.01973, 2018.
  • De Vries et al. (2017) Harm De Vries, Florian Strub, Jérémie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron C Courville. Modulating early visual processing by language. In Advances in Neural Information Processing Systems (NIPS), 2017.
  • Dumoulin et al. (2017) Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. A learned representation for artistic style. International Conference on Learning Representations (ICLR), 2017.
  • Gehring et al. (2017) Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann Dauphin. Convolutional sequence to sequence learning. In International Conference on Machine Learning (ICML), 2017.
  • Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), 2014.
  • Gulrajani et al. (2017) Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of Wasserstein GANs. Advances in Neural Information Processing Systems (NIPS), 2017.
  • He et al. (2016) K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), 2016.
  • Heusel et al. (2017) Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Günter Klambauer, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a Nash equilibrium. In Advances in Neural Information Processing Systems (NIPS), 2017.
  • Hochreiter & Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 1997.
  • Hu et al. (2018) Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Computer Vision and Pattern Recognition (CVPR), 2018.
  • Ioffe & Szegedy (2015) Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
  • Isola et al. (2016) Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004, 2016.
  • Karras et al. (2017) Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. Advances in Neural Information Processing Systems (NIPS), 2017.
  • Kim et al. (2017) Taesup Kim, Inchul Song, and Yoshua Bengio. Dynamic layer normalization for adaptive neural acoustic modeling in speech recognition. In INTERSPEECH, 2017.
  • Kingma & Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Kurach et al. (2018) Karol Kurach, Mario Lucic, Xiaohua Zhai, Marcin Michalski, and Sylvain Gelly. The GAN Landscape: Losses, Architectures, Regularization, and Normalization. arXiv preprint arXiv:1807.04720, 2018.
  • Ledig et al. (2017) Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In Computer Vision and Pattern Recognition (CVPR), 2017.
  • Lucic et al. (2018) Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. Are GANs Created Equal? A Large-scale Study. In Advances in Neural Information Processing Systems (NIPS), 2018.
  • Mao et al. (2016) Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. International Conference on Computer Vision (ICCV), 2016.
  • Mirza & Osindero (2014) Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
  • Miyato & Koyama (2018) Takeru Miyato and Masanori Koyama. cGANs with projection discriminator. International Conference on Learning Representations (ICLR), 2018.
  • Miyato et al. (2018) Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. International Conference on Learning Representations (ICLR), 2018.
  • Odena et al. (2017) Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. In International Conference on Machine Learning (ICML), 2017.
  • Odena et al. (2018) Augustus Odena, Jacob Buckman, Catherine Olsson, Tom B Brown, Christopher Olah, Colin Raffel, and Ian Goodfellow. Is generator conditioning causally related to GAN performance? arXiv preprint arXiv:1802.08768, 2018.
  • Pathak et al. (2016) Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Computer Vision and Pattern Recognition (CVPR), 2016.
  • Perez et al. (2018) Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. Film: Visual reasoning with a general conditioning layer. AAAI, 2018.
  • Radford et al. (2016) Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. International Conference on Learning Representations (ICLR), 2016.
  • Sajjadi et al. (2018) Mehdi SM Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, and Sylvain Gelly. Assessing generative models via precision and recall. In Advances in Neural Information Processing Systems (NIPS), 2018.
  • Salimans et al. (2016) Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems (NIPS), 2016.
  • Szegedy et al. (2015) Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Computer Vision and Pattern Recognition (CVPR), 2015.
  • Tschannen et al. (2018) Michael Tschannen, Eirikur Agustsson, and Mario Lucic. Deep generative models for distribution-preserving lossy compression. In Advances in Neural Information Processing Systems (NIPS), 2018.
  • Ulyanov et al. (2016) Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
  • van den Oord et al. (2016) Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelcnn decoders. In Advances in Neural Information Processing Systems (NIPS), 2016.
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems (NIPS), 2017.
  • Yu et al. (2015) Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
  • Zhang et al. (2017) Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaolei Huang, Xiaogang Wang, and Dimitris Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. International Conference on Computer Vision (ICCV), 2017.
  • Zhang et al. (2018) Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-attention generative adversarial networks. arXiv preprint arXiv:1805.08318, 2018.
  • Zhou et al. (2018) Zhiming Zhou, Yuxuan Song, Lantao Yu, and Yong Yu. Understanding the effectiveness of lipschitz constraint in training of gans via gradient analysis. arXiv preprint arXiv:1807.00751, 2018.
  • Zhu et al. (2017) Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. International Conference on Computer Vision (ICCV), 2017.

Appendix A Additional results

A.1 Inception Scores

| Type             | Arch    | Loss  | Method   | bedroom | celebahq | cifar10 | imagenet |
|------------------|---------|-------|----------|---------|----------|---------|----------|
| Gradient penalty | resnet  | hinge | self-mod | 5.28    | 2.92     | 7.71    | 11.52    |
|                  |         |       | baseline | 4.72    | 2.80     | 7.35    | 10.26    |
|                  |         | ns    | self-mod | 4.96    | 2.61     | 7.70    | 10.74    |
|                  |         |       | baseline | 4.54    | 2.60     | 7.26    | 9.49     |
|                  | sndcgan | hinge | self-mod | 6.34    | 3.05     | 7.37    | 10.99    |
|                  |         |       | baseline | 5.02    | 3.08     | 6.88    | 8.11     |
|                  |         | ns    | self-mod | 6.31    | 3.07     | 7.28    | 10.06    |
|                  |         |       | baseline | 4.71    | 3.21     | 6.86    | 7.24     |
| Spectral Norm    | resnet  | hinge | self-mod | 3.94    | 3.65     | 8.29    | 12.67    |
|                  |         |       | baseline | 4.32    | 3.26     | 8.00    | 11.29    |
|                  |         | ns    | self-mod | 4.61    | 3.32     | 8.23    | 11.52    |
|                  |         |       | baseline | 4.07    | 2.58     | 7.93    | 7.40     |
|                  | sndcgan | hinge | self-mod | 5.85    | 2.74     | 7.90    | 12.50    |
|                  |         |       | baseline | 4.82    | 2.40     | 7.48    | 9.62     |
|                  |         | ns    | self-mod | 5.73    | 2.55     | 7.84    | 11.95    |
|                  |         |       | baseline | 4.39    | 2.33     | 7.37    | 9.28     |
Table 5: In the unpaired setting (as defined in Section 3.2), we compute the median score (across random seeds) and report the best attainable score across considered optimization hyperparameters. Self-Mod is the method introduced in Section 2 and baseline refers to batch normalization.
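The median-across-seeds, best-across-hyperparameters aggregation described in the caption can be sketched as follows (the function name and example numbers are illustrative, not taken from the study):

```python
from statistics import median

def best_median_score(scores_per_setting):
    """Aggregate as in Table 5: per hyperparameter setting, take the median
    score across random seeds, then report the best median across settings.
    scores_per_setting: one list of per-seed scores per setting."""
    return max(median(scores) for scores in scores_per_setting)

# Example: two hyperparameter settings, five seeds each.
settings = [
    [7.1, 7.3, 7.2, 7.0, 7.4],  # median 7.2
    [7.5, 6.9, 7.6, 7.4, 7.7],  # median 7.5
]
print(best_median_score(settings))  # 7.5
```

Reporting the median across seeds (rather than the minimum or mean) reduces sensitivity to individual unstable runs.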

A.2 Which layer to modulate?

Figure 4: FID distributions resulting from Self-Modulation on different layers.

A.3 Conditioning and Precision/Recall

Figure 5 presents the generator Jacobian condition number and precision/recall plot for each dataset.

Figure 5: One panel per dataset (cifar10, imagenet, lsun-bedroom, celeba-hq). Each point in each plot corresponds to a single model, for all parameter configurations; the model with the mean FID score across the five random seeds was chosen. The left-hand plots show the log condition number of the generator versus the FID score for each model; the right-hand plots show the generator precision/recall metrics.

Appendix B Model Architectures

We describe the model structures that are used in our experiments in this section.

B.1 SNDCGAN Architectures

The SNDCGAN architecture follows the one used in Miyato et al. (2018). Since CIFAR-10 images have a lower resolution than the images in the other datasets, the two variants of the architecture differ slightly in their spatial dimensions. The proposed self-modulation replaces the existing BN layers; we abbreviate it as sBN (self-modulated BN) in Tables 6, 7, 8, and 9.
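As a sketch of what an sBN layer computes: standard batch normalization, but with the per-channel scale γ(z) and shift β(z) predicted from the noise vector z rather than learned as constants. The minimal numpy version below uses a single linear layer where the paper uses a small MLP; all parameter names are illustrative:

```python
import numpy as np

def self_modulated_bn(h, z, W_gamma, W_beta, eps=1e-5):
    """Sketch of self-modulated batch norm (sBN).

    h: feature maps of shape (batch, height, width, channels)
    z: latent noise of shape (batch, z_dim)
    W_gamma, W_beta: (z_dim, channels) projections predicting the modulation
    """
    # Standard BN: normalize per channel over batch and spatial dims.
    mu = h.mean(axis=(0, 1, 2), keepdims=True)
    var = h.var(axis=(0, 1, 2), keepdims=True)
    h_hat = (h - mu) / np.sqrt(var + eps)
    # Self-modulation: scale and shift are functions of z, so each sample's
    # intermediate features change with its own input noise.
    gamma = 1.0 + z @ W_gamma   # (batch, channels)
    beta = z @ W_beta           # (batch, channels)
    return gamma[:, None, None, :] * h_hat + beta[:, None, None, :]
```

With z = 0 the layer reduces to plain batch normalization (γ = 1, β = 0), which makes the modification easy to drop into an existing generator.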

B.2 ResNet Architectures

The ResNet architecture also follows the one used in Miyato et al. (2018). Again, due to the resolution differences, two ResNet variants are used in this work. The proposed self-modulation replaces the existing BN layers; we abbreviate it as sBN (self-modulated BN) in Tables 10, 11, 12, and 13.

B.3 Conditional GAN Architecture

For the conditional setting with label information available, we adopt the Projection Based Conditional GAN (P-cGAN) (Miyato & Koyama, 2018), which conditions both the generator and the discriminator. In the generator, conditional batch normalization injects the label information; for a feature map h and label y this can be expressed as

CBN(h | y) = γ_y ⊙ (h − μ(h)) / σ(h) + β_y,

where each label y is independently associated with its own scaling and shifting parameters γ_y and β_y.
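A minimal numpy sketch of per-label conditional batch normalization (names are illustrative; the per-label parameters γ_y, β_y are simple lookup tables here):

```python
import numpy as np

def conditional_bn(h, labels, gammas, betas, eps=1e-5):
    """Sketch of conditional batch norm: each class label selects its own
    scale and shift, applied after standard normalization.

    h: (batch, height, width, channels) feature maps
    labels: (batch,) integer class ids
    gammas, betas: (num_classes, channels) per-label parameter tables
    """
    mu = h.mean(axis=(0, 1, 2), keepdims=True)
    var = h.var(axis=(0, 1, 2), keepdims=True)
    h_hat = (h - mu) / np.sqrt(var + eps)
    g = gammas[labels]  # (batch, channels), one row per sample's label
    b = betas[labels]
    return g[:, None, None, :] * h_hat + b[:, None, None, :]
```

Self-modulation replaces the label lookup with a function of z, so the same mechanism works without labels.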

For label conditioning in the discriminator, the dot product between the final-layer features and a label embedding is added to the discriminator's output logit, i.e. D(x, y) = ψ(φ(x)) + ⟨v_y, φ(x)⟩, where φ(x) denotes the final feature representation of the input x, ψ is the linear map from the feature vector to a real number, and v_y is the embedding of label y. Intuitively, this type of conditional discriminator encourages the discriminator to use label-discriminative features to distinguish real from fake samples. Neither conditioning strategy depends on a specific architecture; both can be applied to the architectures above with small modifications.
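The projection head can be sketched in numpy as follows (a hedged illustration of the form D(x, y) = ψ(φ(x)) + ⟨v_y, φ(x)⟩; all parameter names are hypothetical):

```python
import numpy as np

def projection_logits(phi_x, labels, V, psi_w, psi_b):
    """Sketch of the projection discriminator head.

    phi_x: (batch, d) final-layer features phi(x)
    labels: (batch,) integer class ids
    V: (num_classes, d) label embedding table (rows are v_y)
    psi_w: (d,) weights and psi_b: scalar bias of the linear map psi
    """
    linear_term = phi_x @ psi_w + psi_b                  # psi(phi(x))
    projection_term = np.sum(V[labels] * phi_x, axis=1)  # <v_y, phi(x)>
    return linear_term + projection_term
```

Since the label enters only through the inner product with φ(x), the head adds just one embedding table on top of an unconditional discriminator.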

We use the same architectures and hyperparameter settings as in Miyato & Koyama (2018), with one exception: to stay consistent with the previous unconditional settings (and to reduce computation time), we use two discriminator steps per generator step instead of five. More specifically, the architecture is the same as the ResNet above, and we compare two settings: (1) only generator label conditioning is applied, with no projection-based conditioning in the discriminator, and (2) both generator and discriminator conditioning are applied, i.e. the standard full P-cGAN.

Layer Details Output size
Latent noise
Fully Connected Linear
Reshape
Deconv sBN, ReLU
Deconv4x4,stride=2
Deconv sBN, ReLU
Deconv4x4,stride=2
Deconv sBN, ReLU
Deconv4x4,stride=2
Deconv sBN, ReLU
Deconv4x4,stride=2
Tanh
Table 6: SNDCGAN Generator with resolution. sBN denotes BN with self-modulation as proposed.
Layer Details Output size
Input image -
Conv Conv3x3,stride=1
LeakyReLU
Conv Conv4x4,stride=2
LeakyReLU
Conv Conv3x3,stride=1
LeakyReLU
Conv Conv4x4,stride=2
LeakyReLU
Conv Conv3x3,stride=1
LeakyReLU
Conv Conv4x4,stride=2
LeakyReLU
Conv Conv3x3,stride=1
LeakyReLU
Fully connected Reshape
Linear
Table 7: SNDCGAN Discriminator with resolution.
Layer Details Output size
Latent noise
Fully Connected Linear
Reshape
Deconv sBN, ReLU
Deconv4x4,stride=2
Deconv sBN, ReLU
Deconv4x4,stride=2
Deconv sBN, ReLU
Deconv4x4,stride=2
Deconv sBN, ReLU
Deconv4x4,stride=2
Tanh
Table 8: SNDCGAN Generator with resolution. sBN denotes BN with self-modulation as proposed.
Layer Details Output size
Input image -
Conv Conv3x3,stride=1
LeakyReLU
Conv Conv4x4,stride=2
LeakyReLU
Conv Conv3x3,stride=1
LeakyReLU
Conv Conv4x4,stride=2
LeakyReLU
Conv Conv3x3,stride=1
LeakyReLU
Conv Conv4x4,stride=2
LeakyReLU
Conv Conv3x3,stride=1
LeakyReLU
Fully connected Reshape
Linear
Table 9: SNDCGAN Discriminator with resolution.
Layer Details Output size
Latent noise
Fully connected Linear
Reshape
ResNet block sBN, ReLU
Upsample
Conv3x3, sBN, ReLU
Conv3x3
ResNet block sBN, ReLU
Upsample
Conv3x3, sBN, ReLU
Conv3x3
ResNet block sBN, ReLU
Upsample
Conv3x3, sBN, ReLU
Conv3x3
Conv sBN, ReLU
Conv3x3, Tanh
Table 10: ResNet Generator with resolution. Each ResNet block has a skip-connection that uses upsampling of its input and a 1x1 convolution. sBN denotes BN with self-modulation as proposed.
Layer Details Output size
Input image
ResNet block Conv3x3
ReLU,Conv3x3
Downsample
ResNet block ReLU,Conv3x3
ReLU,Conv3x3
Downsample
ResNet block ReLU,Conv3x3
ReLU,Conv3x3
ResNet block ReLU,Conv3x3
ReLU,Conv3x3
Fully connected ReLU,GlobalSum pooling
Linear
Table 11: ResNet Discriminator with resolution. Each ResNet block has a skip-connection that applies a 1x1 convolution with possible downsampling according to spatial dimension.
Layer Details Output size
Latent noise
Fully connected Linear
Reshape
ResNet block sBN, ReLU
Upsample
Conv3x3, sBN, ReLU
Conv3x3
ResNet block sBN, ReLU
Upsample
Conv3x3, sBN, ReLU
Conv3x3
ResNet block sBN, ReLU
Upsample
Conv3x3, sBN, ReLU
Conv3x3
ResNet block sBN, ReLU
Upsample
Conv3x3, sBN, ReLU
Conv3x3
ResNet block sBN, ReLU
Upsample
Conv3x3, sBN, ReLU
Conv3x3
Conv sBN, ReLU
Conv3x3, Tanh
Table 12: ResNet Generator with resolution. Each ResNet block has a skip-connection that uses upsampling of its input and a 1x1 convolution. sBN denotes BN with self-modulation as proposed.
Layer Details Output size
Input image
ResNet block Conv3x3
ReLU,Conv3x3
Downsample
ResNet block ReLU,Conv3x3
ReLU,Conv3x3
Downsample
ResNet block ReLU,Conv3x3
ReLU,Conv3x3
Downsample
ResNet block ReLU,Conv3x3
ReLU,Conv3x3
Downsample
ResNet block ReLU,Conv3x3
ReLU,Conv3x3
Downsample
ResNet block ReLU,Conv3x3
ReLU,Conv3x3
Fully connected ReLU,GlobalSum pooling
Linear
Table 13: ResNet Discriminator with resolution. Each ResNet block has a skip-connection that applies a 1x1 convolution with possible downsampling according to spatial dimension.