Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space


Anh Nguyen
University of Wyoming
   Jeff Clune
Uber AI Labs, University of Wyoming
   Yoshua Bengio
Montreal Institute for Learning Algorithms
   Alexey Dosovitskiy
University of Freiburg
   Jason Yosinski
Uber AI Labs
This work was mostly performed at Geometric Intelligence, which Uber acquired to create Uber AI Labs.

Generating high-resolution, photo-realistic images has been a long-standing goal in machine learning. Recently, Nguyen et al. [37] showed one interesting way to synthesize novel images by performing gradient ascent in the latent space of a generator network to maximize the activations of one or multiple neurons in a separate classifier network. In this paper we extend this method by introducing an additional prior on the latent code, improving both sample quality and sample diversity, leading to a state-of-the-art generative model that produces high quality images at a higher resolution (227×227) than previous generative models, and does so for all 1000 ImageNet categories. In addition, we provide a unified probabilistic interpretation of related activation maximization methods and call the general class of models “Plug and Play Generative Networks.” PPGNs are composed of 1) a generator network G that is capable of drawing a wide range of image types and 2) a replaceable “condition” network C that tells the generator what to draw. We demonstrate the generation of images conditioned on a class (when C is an ImageNet or MIT Places classification network) and also conditioned on a caption (when C is an image captioning network). Our method also improves the state of the art of Multifaceted Feature Visualization [40], which generates the set of synthetic inputs that activate a neuron in order to better understand how deep neural networks operate. Finally, we show that our model performs reasonably well at the task of image inpainting. While image models are used in this paper, the approach is modality-agnostic and can be applied to many types of data.

Figure 1: Images synthetically generated by Plug and Play Generative Networks at high-resolution (227x227) for four ImageNet classes. Not only are many images nearly photo-realistic, but samples within a class are diverse.

1 Introduction

(a) Real: top 9                 (b) DGN-AM [37]              (c) Real: random 9                (d) PPGN (this work)

Figure 2: For the “cardoon” class neuron in a pre-trained ImageNet classifier, we show: a) the 9 real training set images that most highly activate that neuron; b) images synthesized by DGN-AM [37], which are of similar type and diversity to the real top-9 images; c) random real training set images in the cardoon class; and d) images synthesized by PPGN, which better represent the diversity of random images from the class. Fig. S10 shows the same four groups for other classes.

Recent years have seen generative models that are increasingly capable of synthesizing diverse, realistic images that capture both the fine-grained details and global coherence of natural images [54, 27, 9, 15, 43, 24]. However, many important open challenges remain, including (1) producing photo-realistic images at high resolutions [30], (2) training generators that can produce a wide variety of images (e.g. all 1000 ImageNet classes) instead of only one or a few types (e.g. faces or bedrooms [43]), and (3) producing a diversity of samples that match the diversity in the dataset instead of modeling only a subset of the data distribution [14, 53]. Current image generative models often work well at low resolutions (e.g. 32×32), but struggle to generate high-resolution (e.g. 227×227 or higher), globally coherent images (especially for datasets such as ImageNet [7] that have a large variability [41, 47, 14]) due to many challenges including difficulty in training [47, 41] and computationally expensive sampling procedures [54, 55].

Nguyen et al. [37] recently introduced a technique that produces high quality images at a high resolution. Their Deep Generator Network-based Activation Maximization (DGN-AM) involves training a generator network G to create realistic images from compressed features extracted from a pre-trained classifier network E (Fig. 3f). (Activation maximization is a technique of searching via optimization for the synthetic image that maximally activates a target neuron in order to understand which features that neuron has learned to detect [11].) To generate images conditioned on a class, an optimization process is launched to find a hidden code h such that the image G(h) highly activates a neuron in another classifier C (not necessarily the same as E). Not only does DGN-AM produce realistic images at a high resolution (Figs. 2b & S10b), but, without having to re-train G, it can also produce interesting new types of images that G never saw during training. For example, a G trained on ImageNet can produce ballrooms, jail cells, and picnic areas if C is trained on the MIT Places dataset (Fig. S17, top).

A major limitation with DGN-AM, however, is the lack of diversity in the generated samples. While samples may vary slightly (e.g. “cardoons” with two or three flowers viewed from slightly different angles; see Fig. 2b), the whole image tends to have the same composition (e.g. a closeup of a single cardoon plant with a green background). It is noteworthy that the images produced by DGN-AM closely match the images from that class that most highly activate the class neuron (Fig. 2a). Optimization often converges to the same mode even with different random initializations, a phenomenon common with activation maximization [11, 40, 59]. In contrast, real images within a class tend to show more diversity (Fig. 2c). In this paper, we improve the diversity and quality of samples produced via DGN-AM by adding a prior on the latent code that keeps optimization along the manifold of realistic-looking images (Fig. 2d).

We do this by providing a probabilistic framework in which to unify and interpret activation maximization approaches [48, 64, 40, 37] as a type of energy-based model [4, 29] where the energy function is a sum of multiple constraint terms: (a) priors (e.g. biasing images to look realistic) and (b) conditions, typically given as a category of a separately trained classification model (e.g. encouraging images to look like “pianos” or both “pianos” and “candles”). We then show how to sample iteratively from such models using an approximate Metropolis-adjusted Langevin sampling algorithm.

We call this general class of models Plug and Play Generative Networks (PPGN). The name reflects an important, attractive property of the method: one is free to design an energy function, and “plug and play” with different priors and conditions to form a new generative model. This property has recently been shown to be useful in multiple image generation projects that use the DGN-AM generator network prior and swap in different condition networks [66, 13]. In addition to generating images conditioned on a class, PPGNs can generate images conditioned on text, forming a text-to-image generative model that allows one to describe an image with words and have it synthesized. We accomplish this by attaching a recurrent, image-captioning network (instead of an image classification network) to the output of the generator, and performing similar iterative sampling. Note that, while this paper discusses only the image generation domain, the approach should generalize to many other data types. We publish our code and the trained networks at

2 Probabilistic interpretation of iterative image generation methods

Beginning with the Metropolis-adjusted Langevin algorithm [46, 45] (MALA), it is possible to define a Markov chain Monte Carlo (MCMC) sampler whose stationary distribution approximates a given distribution p(x). We refer to our variant of MALA as MALA-approx, which uses the following transition operator (we abuse notation slightly in the interest of space and denote N(0, ε₃²) as a sample from that distribution; the first step size is given as ε₁₂ in anticipation of later splitting it into separate ε₁ and ε₂ terms):

x_{t+1} = x_t + ε₁₂ ∇log p(x_t) + N(0, ε₃²)    (1)
A full derivation and discussion is given in Sec. S1. Using this sampler we first derive a probabilistically interpretable formulation for activation maximization methods (Sec. 2.1) and then interpret other activation maximization algorithms in this framework (Sec. 2.2, Sec. S2).

2.1 Probabilistic framework for Activation Maximization

Assume we wish to sample from a joint model p(x, y), which can be decomposed into an image model and a classification model:

p(x, y) = p(x) p(y | x)    (2)
This equation can be interpreted as a “product of experts” [19] in which each expert determines whether a soft constraint is satisfied. First, a p(y|x) expert determines a condition for image generation (e.g. images have to be classified as “cardoon”). Also, in a high-dimensional image space, a good p(x) expert is needed to ensure the search stays in the manifold of the image distribution that we try to model (e.g. images of faces [6, 63], shoes [67] or natural images [37]); otherwise we might encounter “fooling” examples that are unrecognizable but have high p(y|x) [38, 51]. Thus, p(x) and p(y|x) together impose a complicated high-dimensional constraint on image generation.

We could write a sampler for the full joint p(x, y), but because y variables are categorical, suppose for now that we fix y to be a particular chosen class y_c, with y_c either sampled or chosen outside the inner sampling loop. (One could resample y in the loop as well, but resampling y via the Langevin family under consideration is not a natural fit: because y values from the data set are one-hot (and from the model hopefully nearly so) there will be a wide small- or zero-likelihood region between y values coming from different classes, so making local jumps will not be a good sampling scheme for the y components.) This leaves us with the conditional p(x | y = y_c):

p(x | y = y_c) ∝ p(x) p(y = y_c | x)    (3)
We can construct a MALA-approx sampler for this model, which produces the following update step:

x_{t+1} = x_t + ε₁₂ ∇log p(x_t | y = y_c) + N(0, ε₃²)    (4)
Expanding the ∇ into explicit partial derivatives and decoupling ε₁₂ into explicit ε₁ and ε₂ multipliers, we arrive at the following form of the update rule:

x_{t+1} = x_t + ε₁ ∂log p(x_t)/∂x_t + ε₂ ∂log p(y = y_c | x_t)/∂x_t + N(0, ε₃²)    (5)
We empirically found that decoupling the ε₁ and ε₂ multipliers works better. An intuitive interpretation of the actions of these three terms is as follows:

  • ε₁ term: take a step from the current image toward one that looks more like a generic image (an image from any class).

  • ε₂ term: take a step from the current image toward an image that causes the classifier to output higher confidence in the chosen class. The p(y = y_c | x) term is typically modeled by the softmax output units of a modern convnet, e.g. AlexNet [26] or VGG [49].

  • ε₃ term: add a small amount of noise to jump around the search space to encourage a diversity of images.
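As a concrete toy illustration of this three-term update, the sketch below runs the sampler with a standard normal prior and a logistic-regression condition model standing in for p(x) and p(y = y_c | x); both stand-ins, the weight vector w, and all step sizes are illustrative choices of ours, not the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two experts (illustrative, not the paper's networks):
# prior p(x) = N(0, I), condition p(y=y_c|x) = sigmoid(w . x).
w = rng.normal(size=8)

def grad_log_prior(x):
    # d/dx log N(x; 0, I) = -x
    return -x

def grad_log_condition(x):
    # d/dx log sigmoid(w . x) = (1 - sigmoid(w . x)) * w
    s = 1.0 / (1.0 + np.exp(-w @ x))
    return (1.0 - s) * w

def sampler_step(x, eps1=0.1, eps2=0.1, eps3=0.01):
    # Prior step + condition step + noise, as in the update rule above.
    return (x
            + eps1 * grad_log_prior(x)
            + eps2 * grad_log_condition(x)
            + eps3 * rng.normal(size=x.shape))

x = rng.normal(size=8)
for _ in range(200):
    x = sampler_step(x)
# After many steps the chain settles where the condition is well satisfied
# while the prior keeps x from drifting far from the data manifold.
```

The ε₃ noise keeps the chain exploring rather than collapsing onto a single mode, which is exactly the diversity mechanism discussed above.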

Figure 3: Different variants of PPGN models we tested. The Noiseless Joint PPGN-h (e), which we found empirically produces the best images, generated the results shown in Figs. 1 & 2 & Sections 3.5 & 4. In all variants, we perform iterative sampling following the gradients of two terms: the condition (red arrows) and the prior (black arrows). (a) PPGN-x (Sec. 3.1): To avoid fooling examples [38] when sampling in the high-dimensional image space, we incorporate a p(x) prior modeled via a denoising autoencoder (DAE) for images x, and sample images conditioned on the output classes of a condition network C (or, to visualize hidden neurons, conditioned upon the activation of a hidden neuron in C). (b) DGN-AM (Sec. 3.2): Instead of sampling in the image space (i.e. in the space of individual pixels), Nguyen et al. [37] sample in the abstract, high-level feature space h of a generator G trained to reconstruct images x from compressed features h extracted from a pre-trained encoder E (f). Because the generator network was trained to produce realistic images, it serves as a prior on p(x) since it ideally can only generate real images. However, this model has no learned prior on p(h) (save for a simple Gaussian assumption). (c) PPGN-h (Sec. 3.3): We attempt to improve the mixing speed and image quality by incorporating a learned p(h) prior modeled via a multi-layer perceptron DAE for h. (d) Joint PPGN-h (Sec. 3.4): To improve upon the poor p(h) modeling of the DAE in PPGN-h, we experiment with treating the composition of G followed by E as a DAE that models h via x. In addition, to possibly improve the robustness of G, we also add a small amount of noise to x and h₁ during training and sampling, treating the entire system as being composed of 4 interleaved models that share parameters: a GAN for x and 3 interleaved DAEs for x, h₁ and h, respectively. This model mixes substantially faster and produces better image quality than DGN-AM and PPGN-h (Fig. S14). (e) Noiseless Joint PPGN-h (Sec. 3.5): We perform an ablation study on the Joint PPGN-h, sweeping across noise levels or loss combinations, and found a Noiseless Joint PPGN-h variant trained with one less loss (Sec. S4.4) to produce the best image quality. (f) A pre-trained image classification network (here, AlexNet trained on ImageNet) serves as the encoder network E component of our model by mapping an image x to a useful, abstract, high-level feature space h (here, AlexNet's fc6 layer). (g) Instead of conditioning on classes, we can generate images conditioned on a caption by attaching a recurrent, image-captioning network to the output layer of G, and performing similar iterative sampling.

2.2 Interpretation of previous models

Aside from the errors introduced by not including a reject step, the stationary distribution of the sampler in Eq. 5 will converge to the appropriate distribution if the ε terms are chosen appropriately [61]. Thus, we can use this framework to interpret previously proposed iterative methods for generating samples, evaluating whether each method faithfully computes and employs each term.

There are many previous approaches that iteratively sample from a trained model to generate images [48, 64, 40, 37, 60, 2, 11, 63, 67, 6, 39, 38, 34], with methods designed for different purposes such as activation maximization [48, 64, 40, 37, 60, 11, 38, 34] or generating realistic-looking images by sampling in the latent space of a generator network [63, 37, 67, 6, 2, 17]. However, most of them are gradient-based, and can be interpreted as a variant of MCMC sampling from a graphical model [25].

While an analysis of the full spectrum of approaches is outside this paper’s scope, we do examine a few representative approaches under this framework in Sec. S2. In particular, we interpret the models that lack an image prior, yielding adversarial or fooling examples [51, 38], as setting ε₁ = 0; and methods that use L2 decay during sampling as using a Gaussian p(x) prior. Both lack a noise term (ε₃ = 0) and thus sacrifice sample diversity.

3 Plug and Play Generative Networks

Previous models are often limited in that they use hand-engineered priors when sampling in either image space or the latent space of a generator network (see Sec. S2). In this paper, we experiment with 4 different explicitly learned priors modeled by a denoising autoencoder (DAE) [57].

We choose a DAE because, although it does not allow evaluation of p(x) directly, it does allow approximation of the gradient of the log probability when trained with Gaussian noise with variance σ² [1]; with sufficient capacity and training time, the approximation is perfect in the limit as σ → 0:

∂log p(x)/∂x ≈ (R_x(x) − x) / σ²    (6)

where R_x(x) is the reconstruction function in x-space representing the DAE, i.e. R_x(x) is a “denoised” output of the autoencoder (an encoder followed by a decoder) when the encoder is fed input x. This term approximates exactly the ε₁ term required by our sampler, so we can use it to define the steps of a sampler for an image x from class y_c. Pulling the σ² term into ε₁, the update is:

x_{t+1} = x_t + ε₁ (R_x(x_t) − x_t) + ε₂ ∂log p(y = y_c | x_t)/∂x_t + N(0, ε₃²)    (7)
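The DAE gradient approximation above can be checked in closed form for a toy Gaussian prior, where the optimal denoiser is known analytically; the prior variance s², noise variance σ², and denoiser below are our own illustrative choices, not part of the paper.

```python
import numpy as np

s2 = 4.0       # prior variance: p(x) = N(0, s2 * I)  (illustrative)
sigma2 = 1e-3  # DAE training-noise variance

def R(x):
    # Optimal denoiser for this Gaussian prior: the posterior mean of the
    # clean signal given a corrupted input, E[x_clean | x] = s2/(s2+sigma2) x.
    return (s2 / (s2 + sigma2)) * x

x = np.array([1.5, -0.3, 2.0])
dae_grad = (R(x) - x) / sigma2   # the DAE-based approximation of d/dx log p(x)
true_grad = -x / s2              # exact d/dx log p(x) for the Gaussian

# The two agree up to O(sigma2), matching the sigma -> 0 limit in the text.
err = np.max(np.abs(dae_grad - true_grad))
```

Here (R(x) − x)/σ² equals −x/(s² + σ²), which converges to the exact score −x/s² as σ → 0, illustrating why a well-trained DAE supplies the ε₁ term.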
3.1 PPGN-x: DAE model of p(x)

First, we tested using a DAE to model p(x) directly (Fig. 3a) and sampling from the entire model via Eq. 7. However, we found that PPGN-x exhibits two expected problems: (1) it models the data distribution poorly; and (2) the chain mixes slowly. More details are in Sec. S6.

3.2 DGN-AM: sampling without a learned prior

Poor mixing in the high-dimensional pixel space of PPGN-x is consistent with previous observations that mixing on higher layers can result in faster exploration of the space [5, 33]. Thus, to ameliorate the problem of slow mixing, we may reparameterize x as x = G(h) for some latent code h, and perform sampling in this lower-dimensional h-space.

While several recent works had success with this approach [37, 6, 63], they often hand-design the p(h) prior. Among these, the DGN-AM method [37] searches in the latent space of a generator network G to find a code h such that the image G(h) highly activates a given neuron in a target DNN. We start by reproducing their results for comparison. G is trained following the methodology in Dosovitskiy & Brox [9] with an image reconstruction loss, a Generative Adversarial Networks (GAN) loss [14], and an L2 loss in a feature space of an encoder E (Fig. 3f). The last loss encourages generated images to match the real images in a high-level feature space and is referred to as “feature matching” [47] in this paper, but is also known as “perceptual similarity” [28, 9] or a form of “moment matching” [31]. Note that in the GAN training for G, we simultaneously train a discriminator D to tell apart real images x vs. generated images G(h). More training details are in Sec. S4.4.

The directed graphical model interpretation of DGN-AM is h → x → y (see Fig. 3b) and the joint p(h, x, y) can be decomposed into:

p(h, x, y) = p(h) p(x | h) p(y | x)    (8)
where h in this case represents features extracted from the first fully connected layer (called fc6) of a pre-trained AlexNet [26] 1000-class ImageNet [7] classification network (see Fig. 3f). p(x | h) is modeled by G, an upconvolutional (also “deconvolutional”) network [10] with 9 upconvolutional and 3 fully connected layers. p(y | x) is modeled by C, which in this case is also the AlexNet classifier. The model for p(h) was an implicit unimodal Gaussian implemented via L2 decay in h-space [37].

Since x is a deterministic variable, the model simplifies to:

p(h, y) = p(h) p(y | h)    (9)
From Eq. 5, if we define p(h) as a Gaussian centered at 0 and set (ε₁, ε₂, ε₃) = (λ, 1, 0), pulling the Gaussian constants into λ, we obtain the following noiseless update rule used in Nguyen et al. [37] to sample h from class y_c:

h_{t+1} = (1 − λ) h_t + ε₂ ∂log C_c(G(h_t))/∂h_t    (10)
where C_c(·) represents the output unit of C associated with class y_c. As before, all terms are computable in a single forward-backward pass. More concretely, to compute the ε₂ term, we push a code h through the generator G and condition network C up to the output class that we want to condition on (Fig. 3b, red arrows), and back-propagate the gradient via the same path to h. The final h is pushed through G to produce an image sample.
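To make the mechanics of this update concrete, here is a minimal sketch with a linear toy generator and a softmax toy classifier standing in for G and C; the weight matrices, decay value, and step sizes are all our own illustrative choices, not the paper's trained networks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins (illustrative only): generator G(h) = Wg @ h and a
# 5-class classifier with logits C(x) = Wc @ x.
Wg = 0.5 * rng.normal(size=(16, 6))
Wc = 0.5 * rng.normal(size=(5, 16))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def class_prob(h, c):
    return softmax(Wc @ (Wg @ h))[c]

def dgn_am_step(h, c, lam=0.01, eps2=0.05):
    # Gradient of log softmax_c(Wc @ Wg @ h) w.r.t. h, via the chain rule.
    p = softmax(Wc @ (Wg @ h))
    onehot = np.zeros_like(p)
    onehot[c] = 1.0
    grad = Wg.T @ (Wc.T @ (onehot - p))
    # The (1 - lam) decay plays the role of the implicit Gaussian p(h) prior.
    return (1.0 - lam) * h + eps2 * grad

c = 3
h = rng.normal(size=6)
p_start = class_prob(h, c)
for _ in range(100):
    h = dgn_am_step(h, c)
p_end = class_prob(h, c)  # confidence in the chosen class after ascent
```

Gradient ascent on the class log-probability drives the code toward images the classifier labels as class c, while the decay term keeps h near the origin, mirroring the unimodal Gaussian prior described above.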

Under this newly proposed framework, we have successfully reproduced the original DGN-AM results and their issue of converging to the same mode when starting from different random initializations (Fig. 2b). We also found that DGN-AM mixes somewhat poorly, yielding the same image after many sampling steps (Figs. 12(b) & 13(b)).

3.3 PPGN-h: Generator and DAE model of p(h)

We attempt to address the poor mixing speed of DGN-AM by incorporating a proper p(h) prior learned via a DAE into the sampling procedure described in Sec. 3.2. Specifically, we train a 7-layer, fully-connected DAE on h (as before, h is an fc6 feature vector). The sizes of the hidden layers are respectively: 4096−2048−1024−500−1024−2048−4096. Full training details are provided in Sec. S4.3.

The update rule to sample h from this model is similar to Eq. 10 except for the inclusion of all three ε terms:

h_{t+1} = h_t + ε₁ (R_h(h_t) − h_t) + ε₂ ∂log C_c(G(h_t))/∂h_t + N(0, ε₃²)    (11)
Concretely, to compute the ε₁ term we push h through the learned DAE, encoding and decoding it (Fig. 3c, black arrows). The ε₂ term is computed via a forward and backward pass through both G and C networks as before (Fig. 3c, red arrows). Lastly, we add the same amount of noise used during DAE training to h. Equivalently, noise can also be added before the encode-decode step.
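Extending the previous toy sketch with a simple shrinkage denoiser in place of the learned R_h gives a minimal version of the Eq. 11 sampler; all networks, constants, and step sizes here are again illustrative stand-ins of ours rather than the paper's trained models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-ins, not the paper's trained networks:
Wg = 0.5 * rng.normal(size=(16, 6))  # generator G(h) = Wg @ h
Wc = 0.5 * rng.normal(size=(5, 16))  # classifier logits C(x) = Wc @ x
shrink = 0.9                         # toy DAE: R_h(h) = shrink * h

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ppgn_h_step(h, c, eps1=0.05, eps2=0.05, eps3=1e-3):
    p = softmax(Wc @ (Wg @ h))
    onehot = np.zeros_like(p)
    onehot[c] = 1.0
    prior_term = eps1 * (shrink * h - h)               # eps1 * (R_h(h) - h)
    cond_term = eps2 * (Wg.T @ (Wc.T @ (onehot - p)))  # eps2 * d log C_c / dh
    noise = eps3 * rng.normal(size=h.shape)
    return h + prior_term + cond_term + noise

h = rng.normal(size=6)
p_start = softmax(Wc @ (Wg @ h))[1]
for _ in range(300):
    h = ppgn_h_step(h, c=1)
p_end = softmax(Wc @ (Wg @ h))[1]
```

Compared with the DGN-AM sketch, the only changes are the learned-denoiser prior term and the explicit noise term, which is exactly the difference between Eq. 10 and Eq. 11.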

We sample using small ε₁ and ε₃ multipliers (if faster mixing or more stable samples are desired, ε₁ and ε₃ can be scaled up or down together) and show results in Figs. 12(c) & 13(c). As expected, the chain mixes faster than PPGN-x, with subsequent samples appearing more qualitatively different from their predecessors. However, the samples for PPGN-h are qualitatively similar to those from DGN-AM (Figs. 12(b) & 13(b)). Samples still lack quality and diversity, which we hypothesize is due to the poor p(h) model learned by the DAE.

3.4 Joint PPGN-h: joint Generator and DAE

The previous result suggests that the simple multi-layer perceptron DAE poorly modeled the distribution of fc6 features. This could occur because the DAE faces the generally difficult unconstrained density estimation problem. To combat this issue, we experiment with modeling h via x with a DAE: h → x → h. Intuitively, to help the DAE better model h, we force it to generate realistic-looking images x from features h and then decode the images back to h. One can train this DAE from scratch separately from G (as done for PPGN-h). However, in the DGN-AM formulation, G models the h → x mapping (Fig. 3b) and E models the x → h mapping (Fig. 3f). Thus, the composition of G followed by E can be considered an AE for h (Fig. 3d).

Note that this composition is theoretically not a formal h-DAE because its two components were trained with neither noise added to h nor an L2 reconstruction loss for h [37] (more details in Sec. S4.4), as is required for regular DAE training [57]. To make it a more theoretically justifiable h-DAE, we add noise to h and train G with an additional reconstruction loss for h (Fig. S9c). We do the same for x and h₁ (pool5 features), hypothesizing that a little noise added to x and h₁ might encourage G to be more robust [57]. In other words, with the same existing network structures from DGN-AM [37], we train G differently by treating the entire model as being composed of 3 interleaved DAEs that share parameters: one each for h, h₁, and x (see Fig. S9c). Note that E remains frozen, and G is trained with 4 losses in total, i.e. three reconstruction losses for x, h, and h₁, and a GAN loss for x. See Sec. S4.5 for full training details. We call this the Joint PPGN-h model.

We sample from this model following the update rule in Eq. 11, with noise added to all three variables, h, h₁ and x, instead of only to h (Fig. 3d vs e). The noise amounts added at each layer are the same as those used during training. As hypothesized, we observe that the sampling chain from this model mixes substantially faster and produces samples with better quality than all previous PPGN treatments (Figs. 12(d) & 13(d)), including PPGN-h, which has a multi-layer perceptron h-DAE.

3.5 Ablation study with Noiseless Joint PPGN-h

While the Joint PPGN-h outperforms all previous treatments in sample quality and diversity (as the chain mixes faster), the model is trained with a combination of four losses and noise added to all variables. This complex training process can be difficult to understand, making further improvements non-intuitive. To shed more light on how the Joint PPGN-h works, we perform ablation experiments which later reveal a better-performing variant.

Noise sweeps. To understand the effects of adding noise to each variable, we train variants of the Joint PPGN-h (1) with different noise levels, (2) using noise on only a single variable, and (3) using noise on multiple variables simultaneously. We did not find these variants to produce qualitatively better reconstruction results than the Joint PPGN-h. Interestingly, in a PPGN variant trained with no noise at all, the h-autoencoder given by G followed by E still appears to be contractive, i.e. robust to a large amount of noise (Fig. S16). This is beneficial during sampling; if “unrealistic” codes h appear, G could map them back to realistic-looking images. We believe this property might emerge for multiple reasons: (1) G and E are not trained jointly; (2) h features encode global, high-level rather than local, low-level information; (3) the presence of the adversarial cost when training G could make the mapping more “many-to-one” by pushing generated images towards modes of the image distribution.

Combinations of losses. To understand the effects of each loss component, we repeat the Joint PPGN-h training (Sec. 3.4), but without noise added to the variables. Specifically, we test different combinations of losses and compare the quality of images produced by pushing the codes h of real images through G (without MCMC sampling).

First, we found that removing the adversarial loss from the 4-loss combination yields blurrier images (Fig. 7(c)). Second, we compare 3 different feature matching losses: in h, in h₁, and in both h and h₁ combined, and found that the h feature matching loss leads to the best image quality (Sec. S3). Our result is consistent with Dosovitskiy & Brox [9]. Thus, the model that we found empirically to produce the best image quality is trained without noise and with three losses: a feature matching loss for h, an adversarial loss, and an image reconstruction loss. We call this variant “Noiseless Joint PPGN-h”: it produced the results in Figs. 1 & 2 and Sections 3.5 & 4.

Noiseless Joint PPGN-h. We sample from this model with (ε₁, ε₂, ε₃) = (10⁻⁵, 1, 10⁻¹⁷) following the same update rule in Eq. 11 (we need noise to make it a proper sampling procedure, but found that infinitesimally small noise produces better and more diverse images, which is to be expected given that the DAE in this variant was trained without noise). Interestingly, the chain mixes substantially faster than DGN-AM (Figs. 12(e) & 12(b)) although the only difference between the two treatments is the existence of the learned p(h) prior. Overall, the Noiseless Joint PPGN-h produces a large amount of sample diversity (Fig. 2). Compared to the Joint PPGN-h, the Noiseless Joint PPGN-h produces better image quality, but mixes slightly more slowly (Figs. S13 & S14). Sweeping across the noise levels during sampling, we noted that larger noise amounts often result in worse image quality, but not necessarily faster mixing (Fig. S15). Also, as expected, a small ε₁ multiplier makes the chain mix faster, and a large one pulls the samples towards being generic instead of class-specific (Fig. S23).

Evaluations. Evaluating image generative models is challenging, and there is not yet a commonly accepted quantitative performance measure [53]. We qualitatively evaluate sample diversity of the Noiseless Joint PPGN-h variant by running 10 sampling chains, each for 200 steps, to produce 2000 samples, and filtering out low-probability samples. From the remaining samples, we randomly pick 400 and plot them in a grid t-SNE [56] (Figs. S12 & S11). More examples for the reader’s evaluation of sample quality and diversity are provided in Figs. S21, S22 & S25. To better observe the mixing speed, we show videos of sampling chains (with one sample per frame; no samples filtered out) within classes and between 10 different classes. In addition, Table S3 provides quantitative comparisons between PPGN, auxiliary classifier GAN [41] and real ImageNet images in image quality (via Inception score [47] & Inception accuracy [41]) and diversity (via the MS-SSIM metric [41]).

While future work is required to fully understand why the Noiseless Joint PPGN-h produces high-quality images at a high resolution for 1000-class ImageNet more successfully than other existing latent variable models [41, 47, 43], we discuss possible explanations in Sec. S7.

4 Additional results

In this section, we take the Noiseless Joint PPGN-h model and show its capabilities on several different tasks.

4.1 Generating images with different condition networks

A compelling property that makes PPGN different from other existing generative models is that one can “plug and play” with different prior and condition components (as shown in Eq. 2) and ask the model to perform new tasks, including challenging the generator to produce images that it has never seen before. Here, we demonstrate this feature by replacing the condition network C with different networks.

Generating images conditioned on classes

Above we showed that PPGN could generate a diversity of high quality samples for ImageNet classes (Figs. 1 & 2 & Sec. 3.5). Here, we test whether the generator G within the PPGN can generalize to new types of images that it has never seen before. Specifically, we sample with a different condition model C: an AlexNet DNN [26] trained to classify 205 categories of scene images from the MIT Places dataset [65]. Similar to DGN-AM [37], the PPGN generates realistic-looking images for classes that the generator was never trained on, such as “alley” or “hotel room” (Fig. 4). A side-by-side comparison between DGN-AM and PPGN is in Fig. S17.

Figure 4: Images synthesized conditioned on MIT Places [65] classes instead of ImageNet classes.

Generating images conditioned on captions

Instead of conditioning on classes, we can also condition the image generation on a caption (Fig. 3g). Here, we swap in an image-captioning recurrent network (called LRCN) from [8] that was trained on the MS COCO dataset [32] to predict a caption y given an image x. Specifically, LRCN is a two-layer LSTM network that generates captions conditioned on features extracted from the output softmax layer of AlexNet [26].

Figure 5: Images synthesized to match a text description. A PPGN containing the image captioning model from [8] can generate reasonable images that differ based on user-provided captions (e.g. red car vs. blue car, oranges vs. a pile of oranges). For each caption, we show 3 images synthesized starting from random codes (more in Fig. S18).

We found that PPGN can generate reasonable images in many cases (Figs. 5 & S18), although the image quality is lower than when conditioning on classes. In other cases, it also fails to generate high-quality images for certain types of images such as “people” or “giraffe”, which are not categories in the generator’s training set (Fig. S18). We also observe “fooling” images [38]: those that look unrecognizable to humans, but produce high-scoring captions. More results are in Fig. S18. The challenges for this task could be: (1) the sampling is conditioned on many words at the same time, and the gradients backpropagated from different words could conflict with each other; (2) the LRCN captioning model itself is easily fooled, thus additional priors on the conversion from image features to natural language could improve the result further; (3) the depth of the entire model (AlexNet and LRCN) impairs gradient propagation during sampling. In the future, it would be interesting to experiment with other state-of-the-art image captioning models [12, 58]. Overall, we have demonstrated that PPGNs can be flexibly turned into a text-to-image model by combining the prior with an image captioning network, and this process does not even require additional training.

Generating images conditioned on hidden neurons

PPGNs can perform a more challenging form of activation maximization called Multifaceted Feature Visualization (MFV) [40], which involves generating the set of inputs that activate a given neuron. Instead of conditioning on a class output neuron, here we condition on a hidden neuron, revealing many facets that a neuron has learned to detect (Fig. 6).

Figure 6: Images synthesized to activate a hidden neuron previously identified as a “face detector neuron” [64] in the fifth convolutional layer of the AlexNet DNN trained on ImageNet. The PPGN uncovers a large diversity of types of inputs that activate this neuron, thus performing Multifaceted Feature Visualization [40], which sheds light on what the neuron has learned to detect. The different facets include different types of human faces (top row), dog faces (bottom row), and objects that only barely resemble faces (e.g. the windows of a house, or something resembling green hair above a flesh-colored patch). More examples and details are shown in Figs. S19 & S20.

4.2 Inpainting

Because PPGNs can be interpreted probabilistically, we can also sample from them conditioned on part of an image (in addition to the class condition) to perform inpainting, i.e. filling in missing pixels given the observed context regions [42, 3, 63, 54]. The model must understand the entire image to be able to reasonably fill in a large masked-out region that is positioned randomly. Overall, we found that PPGNs are able to perform inpainting, suggesting that the models do “understand” the semantics of concepts such as junco or bell pepper (Fig. 7) rather than merely memorizing the images. More details and results are in Sec. S5.
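One simple way to impose such a context constraint during sampling is to add a gradient term that pulls the visible pixels toward the observed image while leaving the masked region free; the quadratic penalty, mask shape, and step size below are our own illustrative choices, sketching the idea rather than the exact PPGN-context procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

x_obs = rng.uniform(size=(8, 8))  # "observed" image (toy data)
mask = np.ones((8, 8))
mask[2:6, 2:6] = 0.0              # 0 marks the missing region to be filled

def context_grad(x):
    # Gradient of -0.5 * || mask * (x - x_obs) ||^2 with respect to x:
    # pulls visible context pixels toward the observation, leaves the
    # masked hole entirely unconstrained.
    return -mask * (x - x_obs)

x = rng.uniform(size=(8, 8))
x0 = x.copy()
for _ in range(500):
    # In the full sampler this term is simply added alongside the prior
    # and class-condition terms of the update rule.
    x = x + 0.1 * context_grad(x)
```

Under this term alone, the context region converges to the observed pixels while the hole is untouched; in the full class-conditional sampler, the prior and condition terms are what fill the hole with semantically plausible content.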

Figure 7: We perform class-conditional image sampling to fill in missing pixels (see Sec. 4.2). In addition to conditioning on a specific class (PPGN), PPGN-context also constrains the code to produce an image that matches the context region. PPGN-context (c) matches the pixels surrounding the masked region better than PPGN (b), and semantically fills in better than the Context-Aware Fill feature in Photoshop (d) in many cases. The result shows that the class-conditional PPGN does understand the semantics of images. More PPGN-context results are in Fig. S24.

5 Conclusion

The most useful property of PPGNs is the capability of “plug and play”—allowing one to drop in a replaceable condition network and generate images according to a condition specified at test time. Beyond the applications we demonstrated here, one could use PPGNs to synthesize images for videos or create art with one or even multiple condition networks at the same time [13]. Note that DGN-AM [37]—the predecessor of PPGNs—has previously enabled both scientists and amateurs without substantial resources to take a pre-trained condition network and generate art [13] and scientific visualizations [66]. An explanation for why this is possible is that the features that the generator was trained to invert are relatively general and cover the set of natural images. Thus, there is great value in producing flexible, powerful generators that can be combined with pretrained condition networks in a plug and play fashion.


We thank Theo Karaletsos and Noah Goodman for helpful discussions, and Jeff Donahue for providing a trained image captioning model [8] for our experiments. We also thank Joost Huizinga, Christopher Stanton, Rosanne Liu, Tyler Jaszkowiak, Richard Yang, and Jon Berliner for invaluable suggestions on preliminary drafts.


  • [1] G. Alain and Y. Bengio. What regularized auto-encoders learn from the data-generating distribution. The Journal of Machine Learning Research, 15(1):3563–3593, 2014.
  • [2] K. Arulkumaran, A. Creswell, and A. A. Bharath. Improving sampling from generative autoencoders with markov chains. arXiv preprint arXiv:1610.09296, 2016.
  • [3] C. Barnes, E. Shechtman, A. Finkelstein, and D. Goldman. Patchmatch: a randomized correspondence algorithm for structural image editing. ACM Transactions on Graphics-TOG, 28(3):24, 2009.
  • [4] I. Goodfellow, Y. Bengio, and A. Courville. Deep learning. Book in preparation for MIT Press, 2016.
  • [5] Y. Bengio, G. Mesnil, Y. Dauphin, and S. Rifai. Better mixing via deep representations. In Proceedings of the 30th International Conference on Machine Learning (ICML), pages 552–560, 2013.
  • [6] A. Brock, T. Lim, J. Ritchie, and N. Weston. Neural photo editing with introspective adversarial networks. arXiv preprint arXiv:1609.07093, 2016.
  • [7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE, 2009.
  • [8] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Computer Vision and Pattern Recognition, 2015.
  • [9] A. Dosovitskiy and T. Brox. Generating Images with Perceptual Similarity Metrics based on Deep Networks. In Advances in Neural Information Processing Systems, 2016.
  • [10] A. Dosovitskiy, J. Tobias Springenberg, and T. Brox. Learning to generate chairs with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1538–1546, 2015.
  • [11] D. Erhan, Y. Bengio, A. Courville, and P. Vincent. Visualizing higher-layer features of a deep network. Technical report, Technical report, University of Montreal, 2009.
  • [12] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, M. A. Ranzato, and T. Mikolov. Devise: A deep visual-semantic embedding model. In C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2121–2129. Curran Associates, Inc., 2013.
  • [13] G. Goh. Image synthesis from yahoo open nsfw., 2016.
  • [14] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
  • [15] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. Draw: A recurrent neural network for image generation. In ICML, 2015.
  • [16] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample-problem. In Advances in neural information processing systems, pages 513–520, 2006.
  • [17] T. Han, Y. Lu, S.-C. Zhu, and Y. N. Wu. Alternating back-propagation for generator network. In AAAI, 2017.
  • [18] W. K. Hastings. Monte carlo sampling methods using markov chains and their applications. Biometrika, 57(1):97–109, 1970.
  • [19] G. E. Hinton. Products of experts. In Artificial Neural Networks, 1999. ICANN 99. Ninth International Conference on (Conf. Publ. No. 470), volume 1, pages 1–6. IET, 1999.
  • [20] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, 2015.
  • [21] Y. Jia. Caffe: An open source convolutional architecture for fast feature embedding., 2013.
  • [22] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155, 2016.
  • [23] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [24] D. P. Kingma and M. Welling. Auto-Encoding Variational Bayes. Dec. 2014.
  • [25] D. Koller and N. Friedman. Probabilistic graphical models: principles and techniques. MIT press, 2009.
  • [26] A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1106–1114, 2012.
  • [27] H. Larochelle and I. Murray. The neural autoregressive distribution estimator. Journal of Machine Learning Research, 15:29–37, 2011.
  • [28] A. B. L. Larsen, S. K. Sønderby, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. CoRR, abs/1512.09300, 2015.
  • [29] Y. LeCun, S. Chopra, R. Hadsell, M. Ranzato, and F. Huang. A tutorial on energy-based learning. Predicting structured data, 1:0, 2006.
  • [30] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
  • [31] Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. In International Conference on Machine Learning, pages 1718–1727, 2015.
  • [32] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer, 2014.
  • [33] H. Luo, P. L. Carrier, A. C. Courville, and Y. Bengio. Texture modeling with convolutional spike-and-slab rbms and deep extensions. In AISTATS, pages 415–423, 2013.
  • [34] A. Mahendran and A. Vedaldi. Visualizing deep convolutional neural networks using natural pre-images. International Journal of Computer Vision, pages 1–23, 2016.
  • [35] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equation of state calculations by fast computing machines. The journal of chemical physics, 21(6):1087–1092, 1953.
  • [36] A. Mordvintsev, C. Olah, and M. Tyka. Inceptionism: Going deeper into neural networks. Google Research Blog. Retrieved June, 20, 2015.
  • [37] A. Nguyen, A. Dosovitskiy, J. Yosinski, T. Brox, and J. Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In Advances in Neural Information Processing Systems, 2016.
  • [38] A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
  • [39] A. Nguyen, J. Yosinski, and J. Clune. Innovation engines: Automated creativity and improved stochastic optimization via deep learning. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 2015.
  • [40] A. Nguyen, J. Yosinski, and J. Clune. Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks. In Visualization for Deep Learning Workshop, ICML conference, 2016.
  • [41] A. Odena, C. Olah, and J. Shlens. Conditional Image Synthesis With Auxiliary Classifier GANs. ArXiv e-prints, Oct. 2016.
  • [42] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. arXiv preprint arXiv:1604.07379, 2016.
  • [43] A. Radford, L. Metz, and S. Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. Nov. 2015.
  • [44] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 833–840, 2011.
  • [45] G. O. Roberts and J. S. Rosenthal. Optimal scaling of discrete approximations to langevin diffusions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 60(1):255–268, 1998.
  • [46] G. O. Roberts and R. L. Tweedie. Exponential convergence of langevin distributions and their discrete approximations. Bernoulli, pages 341–363, 1996.
  • [47] T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. CoRR, abs/1606.03498, 2016.
  • [48] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, presented at ICLR Workshop 2014, 2013.
  • [49] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
  • [50] C. Szegedy, S. Ioffe, and V. Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. CoRR, abs/1602.07261, 2016.
  • [51] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus. Intriguing properties of neural networks. CoRR, abs/1312.6199, 2013.
  • [52] Y. W. Teh, A. H. Thiery, and S. J. Vollmer. Consistency and fluctuations for stochastic gradient langevin dynamics. 2014.
  • [53] L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. Nov 2016. International Conference on Learning Representations.
  • [54] A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel Recurrent Neural Networks. ArXiv e-prints, Jan. 2016.
  • [55] A. van den Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves, and K. Kavukcuoglu. Conditional image generation with pixelcnn decoders. CoRR, abs/1606.05328, 2016.
  • [56] L. Van der Maaten and G. Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(11), 2008.
  • [57] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pages 1096–1103. ACM, 2008.
  • [58] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555, 2014.
  • [59] D. Wei, B. Zhou, A. Torrabla, and W. Freeman. Understanding intra-class knowledge inside cnn. arXiv preprint arXiv:1507.02379, 2015.
  • [60] D. Wei, B. Zhou, A. Torrabla, and W. Freeman. Understanding intra-class knowledge inside cnn. arXiv preprint arXiv:1507.02379, 2015.
  • [61] M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 681–688, 2011.
  • [62] J. Xie, Y. Lu, S.-C. Zhu, and Y. N. Wu. Cooperative training of descriptor and generator networks. arXiv preprint arXiv:1609.09408, 2016.
  • [63] R. Yeh, C. Chen, T. Y. Lim, M. Hasegawa-Johnson, and M. N. Do. Semantic image inpainting with perceptual and contextual losses. arXiv preprint arXiv:1607.07539, 2016.
  • [64] J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, and H. Lipson. Understanding neural networks through deep visualization. In Deep Learning Workshop, International Conference on Machine Learning (ICML), 2015.
  • [65] B. Zhou, A. Khosla, À. Lapedriza, A. Oliva, and A. Torralba. Object detectors emerge in deep scene cnns. In International Conference on Learning Representations (ICLR), volume abs/1412.6856, 2014.
  • [66] B. Zhou, A. Khosla, A. Lapedriza, A. Torralba, and A. Oliva. Places: An image database for deep scene understanding. arXiv preprint arXiv:1610.02055, 2016.
  • [67] J.-Y. Zhu, P. Krähenbühl, E. Shechtman, and A. A. Efros. Generative visual manipulation on the natural image manifold. In European Conference on Computer Vision, pages 597–613. Springer, 2016.

Supplementary materials for:

Plug & Play Generative Networks:

Conditional Iterative Generation of Images in Latent Space

Appendix S1 Markov chain Monte Carlo methods and derivation of MALA-approx

Assume a distribution p(x) that we wish to produce samples from. For certain distributions with amenable structure it may be possible to write down directly an independent and identically distributed (IID) sampler, but in general this can be difficult. In such cases where IID samplers are not readily available, we may instead resort to Markov chain Monte Carlo (MCMC) methods for sampling. Complete discussions of this topic fill books [25, 4]. Here we briefly review the background that led to the sampler we propose.

In cases where evaluation of p(x) is possible, we can write down the Metropolis-Hastings (hereafter: MH) sampler for p(x) [35, 18]. It requires a choice of proposal distribution; for simplicity we consider (and later use) a simple Gaussian proposal distribution. Starting with an x₀ from some initial distribution, the sampler takes steps according to a transition operator defined by the routine below, with N(0, σ²) shorthand for a sample from that Gaussian proposal distribution:

  1. x̃ = x_t + N(0, σ²)

  2. if α = p(x̃)/p(x_t) < 1, reject the sample with probability 1 − α by setting x_{t+1} = x_t, else keep x_{t+1} = x̃
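This accept/reject routine can be sketched in a few lines of NumPy (the function and variable names are ours, for illustration only):

```python
import numpy as np

def metropolis_hastings(log_p, x0, sigma, n_steps, rng):
    """Metropolis-Hastings with a symmetric Gaussian proposal.
    log_p returns log p(x) up to an additive constant."""
    x = x0
    samples = []
    for _ in range(n_steps):
        x_prop = x + rng.normal(0.0, sigma)      # Gaussian proposal step
        log_alpha = log_p(x_prop) - log_p(x)     # log of p(x_prop)/p(x_t)
        if np.log(rng.uniform()) < log_alpha:    # accept with prob min(1, alpha)
            x = x_prop
        # else: reject, keeping x_{t+1} = x_t
        samples.append(x)
    return np.array(samples)

# Example target: a standard normal, known only up to its normalizer.
rng = np.random.default_rng(0)
chain = metropolis_hastings(lambda x: -0.5 * x ** 2,
                            x0=0.0, sigma=1.0, n_steps=20000, rng=rng)
```

Since the acceptance ratio only involves the ratio p(x̃)/p(x_t), the partition function cancels and an unnormalized density suffices.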

In theory, sufficiently many steps of this simple sampling rule produce samples for any computable p(x), but in practice it has two problems: it mixes slowly, because steps are small and uncorrelated in time, and it requires us to be able to compute p(x) to calculate α, which is often not possible. The Metropolis-adjusted Langevin algorithm (hereafter: MALA) [46, 45] addresses the first problem. This sampler follows a slightly modified procedure:

  1. x̃ = x_t + (ε²/2) ∇log p(x_t) + N(0, ε²)

  2. if α < 1, reject the sample with probability 1 − α by setting x_{t+1} = x_t, else keep x_{t+1} = x̃

where α is now the slightly more complex quantity p(x̃)q(x_t | x̃) / (p(x_t)q(x̃ | x_t)), with the notable property that α approaches 1 as the step size goes to 0. This sampler preferentially steps in the direction of higher probability, which allows it to spend less time rejecting low-probability proposals, but it still requires computation of p(x) to calculate α.

Sampler        Mixes    Uses accept/reject step   Update rule (not including accept/reject step)
MH             slowly   yes                       x_{t+1} = x_t + N(0, σ²)
MALA           ok       yes                       x_{t+1} = x_t + (ε²/2) ∇log p(x_t) + N(0, ε²)
MALA-approx    ok       no                        x_{t+1} = x_t + ε₁₂ ∇log p(x_t) + N(0, ε₃²)
Table S1: Sampler properties, assuming Gaussian proposal distributions. Samples in this paper are drawn via MALA-approx.

The stochastic gradient Langevin dynamics (SGLD) method [61, 52] was proposed to sidestep this troublesome requirement by generating proposals based on only a small subset of the data: by using stochastic gradient descent plus noise, skipping the accept/reject step, and using decreasing step sizes. Inspired by SGLD, we define an approximate sampler by assuming a small step size and doing away with the reject step (i.e. accepting every sample). The idea is that the stochasticity of SGD itself introduces implicit noise: although the resulting update does not produce asymptotically unbiased samples, it does in the limit if we also anneal the step size (or, equivalently, gradually increase the minibatch size).

While an accept ratio of 1 is only approached in the limit as the step size goes to zero, in practice we empirically observe that this approximation produces reasonable samples even for moderate step sizes. This approximation leads to a sampler defined by the simple update rule:

x_{t+1} = x_t + (ε²/2) ∇log p(x_t) + N(0, ε²)
As explained below, we propose to decouple the step sizes of the two terms above, introducing two independent scaling factors to allow each to be tuned separately (ε₁₂ and ε₃ in Eq. 13). This variant makes sense when we consider that the stochasticity of SGD itself introduces additional noise, breaking the direct link between the amount of injected noise and the step size under Langevin dynamics.

We note that ∇log p(x) is just the negative gradient of the energy (because the partition function does not depend on x), and that the scaling factor on the gradient term (ε₁₂) can be partially absorbed by changing the temperature associated with the energy, since temperature is just a multiplicative scaling factor on the energy. Changing the link between the two terms is thus equivalent to changing the temperature, because the incorrect scale factor can be absorbed into the energy as a change in temperature. Since we do not directly control the amount of noise (some of which is now produced by the stochasticity of SGD itself), it is better to control the trade-off “manually” by introducing an extra hyperparameter. Doing so may also help compensate for the fact that the SGD noise is not perfectly normal, which introduces a bias into the Markov chain. By manually controlling both the step size and the normal noise, we can thus find a good trade-off between variance (or temperature level, which would blur the distribution) and bias (which makes us sample from a slightly different distribution). In our experience, such decoupling has helped find better trade-offs between sample diversity and quality, perhaps compensating for idiosyncrasies of sampling without a reject step. We call this sampler MALA-approx:

x_{t+1} = x_t + ε₁₂ ∇log p(x_t) + N(0, ε₃²)
Table S1 summarizes the samplers and their properties.
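The MALA-approx update above reduces to a few lines of code. The NumPy sketch below (our own illustration) uses the two decoupled step sizes, here named `eps12` for the gradient term and `eps3` for the noise term:

```python
import numpy as np

def mala_approx(grad_log_p, x0, eps12, eps3, n_steps, rng):
    """MALA-approx: a gradient step plus Gaussian noise, with no
    accept/reject step and decoupled step sizes for the gradient
    term (eps12) and the noise term (eps3)."""
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        x = x + eps12 * grad_log_p(x) + rng.normal(0.0, eps3, size=x.shape)
        samples.append(x.copy())
    return np.array(samples)

# Example target: a standard normal, so grad log p(x) = -x.  Choosing
# eps3**2 = 2 * eps12 recovers the original (undecoupled) Langevin scaling.
rng = np.random.default_rng(0)
chain = mala_approx(lambda x: -x, x0=np.zeros(1),
                    eps12=0.05, eps3=np.sqrt(2 * 0.05), n_steps=20000, rng=rng)
```

Setting `eps3` larger or smaller than this coupled value trades off exploration (temperature) against bias, as discussed above.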

Appendix S2 Probabilistic interpretation of previous models (continued)

In this paper, we consider four main representative approaches in light of the framework:

  1. Activation maximization with no priors [38, 51, 11]

  2. Activation maximization with a Gaussian prior [48, 64]

  3. Activation maximization with hand-designed priors [48, 64, 40, 60, 39, 38, 34]

  4. Sampling in the latent space of a generator network [2, 63, 67, 6, 37, 17]

Here we discuss the first three and refer readers to the main text (Sec. 2.2) for the fourth approach.

Activation maximization with no priors. From Eq. 5, if we set (ε₁, ε₂, ε₃) = (0, 1, 0), we obtain a sampler that follows the class gradient directly, without contributions from a p(x) term or the addition of noise. In a high-dimensional space, this results in adversarial or fooling images [51, 38]. We can also interpret the sampling procedure in [51, 38] as a sampler with non-zero ε₁ but with a p(x) such that ∂log p(x)/∂x = 0 everywhere; in other words, a uniform p(x) under which all images are equally likely.

a. Derivative of logit, ∂l_c/∂x. Has worked well in practice [37, 11], but is not quite the right term to maximize under the sampler framework set out in this paper.
b. Derivative of softmax, ∂s_c/∂x = s_c (∂l_c/∂x − Σ_i s_i ∂l_i/∂x). Previously avoided due to poor performance [48, 64], but the poor performance may have been due to ill-conditioned optimization rather than to the inclusion of logits from other classes. In particular, the term goes to 0 as the s_c multiplier goes to zero.
c. Derivative of log of softmax, ∂log s_c/∂x = ∂l_c/∂x − Σ_i s_i ∂l_i/∂x. The correct term under the sampler framework set out in this paper. Well-behaved under optimization, perhaps because the ∂l_c/∂x term is untouched by the s_c multiplier.
Table S2: A comparison of derivatives for use in activation maximization experiments. The first has most commonly been used, the second has worked in the past but with some difficulty, and the third is correct under the sampler framework set out in this paper. We perform experiments in this paper with the third variant.

Activation maximization with a Gaussian prior. To combat the fooling problem [38], several works have used L2 decay, which can be thought of as a simple Gaussian prior over images [48, 64, 60]. From Eq. 5, if we define a Gaussian p(x) centered at the origin (assume the mean image has been subtracted) and set (ε₁, ε₂, ε₃) = (λ, 1, 0), pulling the Gaussian constants into λ, we obtain the following noiseless update rule:

x_{t+1} = (1 − λ) x_t + ∂log p(y = y_c | x_t)/∂x_t
The first term decays the current image slightly toward the origin, as appropriate under a Gaussian image prior, and the second term pulls the image toward higher-probability regions for the chosen class. Here, the second term is computed as the derivative of the log of a softmax unit in the output layer of the classification network, which is trained to model p(y | x). If we let l_i be the logit outputs of the classification network, where i indexes over the classes, then the softmax outputs are given by s_i = exp(l_i) / Σ_j exp(l_j), and the value p(y = y_c | x) is modeled by the softmax unit s_c.

Note that the second term is similar, but not identical, to the gradient-of-logit term ∂l_c/∂x used by [48, 64, 34]. There are three variants of computing this class gradient term: 1) derivative of the logit; 2) derivative of the softmax; and 3) derivative of the log of the softmax. The previously mentioned papers empirically reported that the derivative of the logit unit produces better visualizations than the derivative of the softmax unit (Table S2a vs. b), but this observation had not been fully justified [48]. In light of our probabilistic interpretation (discussed in Sec. 2.1), we consider activation maximization as performing noisy gradient descent to minimize the energy function:

E(x, y = y_c) = − log p(x, y = y_c) = − log p(x) − log p(y = y_c | x)
To sample from the joint model p(x, y = y_c), we follow the negative energy gradient:

− ∂E(x, y = y_c)/∂x = ∂log p(x)/∂x + ∂log p(y = y_c | x)/∂x
which yields the class gradient term ∂log p(y = y_c | x)/∂x that matches the one in our framework (Eq. 14, second term). In addition, recall that the classification network is trained to model p(y | x) via a softmax; thus the ∂log s_c/∂x class gradient variant (the derivative of the log of the softmax) is the most theoretically justifiable in light of our interpretation. We summarize all three variants in Table S2. Overall, we found the proposed class gradient term to be a) theoretically justifiable under the probabilistic interpretation and b) effective empirically, and we therefore suggest it for future activation maximization studies.
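The relationship between the three gradient variants can be checked numerically on a toy linear “classifier” (entirely our own example; the sizes and names are arbitrary):

```python
import numpy as np

def softmax(l):
    e = np.exp(l - l.max())
    return e / e.sum()

# Toy model: logits are linear in the input, l = W x, so dl_i/dx = W[i].
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))   # 3 classes, 5 input dimensions (hypothetical)
x = rng.normal(size=5)
c = 0                         # the chosen class y_c

s = softmax(W @ x)
g_logit = W[c]                       # (a) d l_c / dx
g_log_softmax = W[c] - s @ W         # (c) d log s_c / dx
g_softmax = s[c] * g_log_softmax     # (b) d s_c / dx = s_c * (c)
```

Variant (b) is just variant (c) scaled by s_c, so its gradient vanishes as s_c goes to zero, while (c) keeps the l_c contribution at full strength regardless of how confident the classifier is.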

Activation maximization with hand-designed priors. In an effort to outdo the simple Gaussian prior, many works have proposed more creative, hand-designed image priors such as Gaussian blur [64], total variation [34], jitter [36], and data-driven patch priors [59]. These priors effectively serve as a simple p(x) component. Those that cannot be explicitly expressed in mathematical form (e.g. jitter [36] and center-biased regularization [40]) can be written as a general regularization operator r(·) as in [64], in which case the noiseless update becomes:

x_{t+1} = r(x_t) + ε₂ ∂log p(y = y_c | x_t)/∂x_t
Note that all methods considered in this section are noiseless and therefore produce diverse samples only by starting the optimization process at different initial conditions. The effect is that samples tend to converge to a single mode or a small number of modes [11, 40].
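As a concrete illustration of this noiseless update, the sketch below (our own toy example; the quadratic “class term” is a stand-in for a real network's backpropagated log-probability gradient) uses the L2-decay regularization operator r(x) = (1 − λ)x, which recovers the Gaussian-prior update as a special case:

```python
import numpy as np

def am_step(x, class_grad, r=lambda x: x, eps2=1.0):
    """One noiseless activation-maximization step:
    x_{t+1} = r(x_t) + eps2 * d log p(y=y_c|x_t)/dx_t."""
    return r(x) + eps2 * class_grad(x)

lam = 0.01
decay = lambda x: (1.0 - lam) * x      # Gaussian image prior as L2 decay
class_grad = lambda x: -(x - 1.0)      # toy quadratic class term, peaked at x = 1

x = np.zeros(1)
for _ in range(500):
    x = am_step(x, class_grad, r=decay, eps2=0.1)
# x converges to the fixed point 0.1 / (lam + 0.1), about 0.909: close to
# the class term's peak, but pulled slightly toward the origin by the prior.
```

With no noise term, rerunning from the same initial x always reaches the same fixed point, which is exactly the single-mode behavior described above.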

Appendix S3 Comparing feature matching losses

The addition of feature matching losses (i.e. differences between a real and a generated image measured not in pixel space, but in a feature space, such as a high-level code in a deep neural network) to the training cost has been shown to substantially improve the quality of samples produced by generator networks, e.g. by producing sharper and more realistic images [9, 28, 22].

Dosovitskiy & Brox [9] used a feature matching loss measured in a deep feature space of the AlexNet DNN [26] trained to classify 1000-class ImageNet images [7]. Here, we empirically compare feature matching losses computed in different layers of the AlexNet DNN. Specifically, we follow the training procedure in Dosovitskiy & Brox [9] and train 3 generator networks, each with a different feature matching loss computed from the pretrained AlexNet DNN: a) the pool5 loss L_h1, b) the fc6 loss L_h, and c) both the L_h1 and L_h losses. We empirically found that matching the pool5 features leads to the best image quality (Fig. S8), and chose the generator with this loss for the main experiments in the paper.

(a) Real images
(b) Joint PPGN-h (L_img + L_h1 + L_h + L_GAN)
(c) L_GAN removed (L_img + L_h1 + L_h)
(d) L_h1 removed (L_img + L_h + L_GAN)
(e) L_h removed (L_img + L_h1 + L_GAN)
Figure S8: A comparison of images produced by different generators G, each trained with a different loss combination (below each image). L_img, L_h1, and L_h are reconstruction losses in, respectively, the pixel (x), pool5 feature (h1), and fc6 feature (h) space. G is trained to map h ↦ x, i.e. to reconstruct images from fc6 features. In the Joint PPGN-h treatment (Sec. 3.4), G is trained with a combination of 4 losses (panel b). Here, we perform an ablation study on this loss combination to understand the effect of each loss and to find a combination that produces the best image quality. We found that removing the GAN loss yields blurry results (panel c). The Noiseless Joint PPGN-h variant (Sec. 3.5) is trained with the loss combination that produces the best image quality (panel e). Compared to L_h1, the L_h feature matching loss often produces worse image quality because it effectively encourages generated images to match the high-level abstract statistics of real images instead of low-level statistics [16]. Our result is consistent with Dosovitskiy & Brox [9].

Appendix S4 Training details

S4.1 Common training framework

We use the Caffe framework [21] to train the networks. All networks are trained with the Adam optimizer [23], with momentum hyperparameters and initial learning rate following [9]. To stabilize the GAN training, we follow heuristic rules based on the ratio of the discriminator loss to the generator loss: we pause the training of the generator or the discriminator if one of them is winning too much, as determined by thresholds on this ratio. We did not find BatchNorm [20] helpful in further stabilizing the training, as was found in Radford et al. [43]. We have not experimented with all of the techniques discussed in Salimans et al. [47], some of which could further improve the results.
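The pausing heuristic can be sketched as a small gate applied before each update; the threshold values below are illustrative placeholders, not the ones used in the paper:

```python
def gan_update_flags(loss_d, loss_g, low=0.1, high=10.0):
    """Decide which of D and G to update this iteration, based on the
    ratio of discriminator loss to generator loss (thresholds illustrative).
    A tiny ratio means D is winning too hard -> pause D; a huge ratio
    means G is winning too hard -> pause G."""
    ratio = loss_d / loss_g
    train_d = ratio > low
    train_g = ratio < high
    return train_d, train_g
```

In a training loop, one would compute both losses each iteration, then skip the parameter update of whichever network the flags pause.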

S4.2 Training PPGN-x

We train a DAE for images x and incorporate it into the sampling procedure as a p(x) prior to avoid fooling examples [37]. The DAE is a 4-layer convolutional network that encodes an image to an intermediate layer of AlexNet [26] and decodes it back to an image with 3 upconvolutional layers. We add Gaussian noise to the images during training. The network is trained via the common training framework described in Sec. S4.1, with a small amount of L2 weight regularization.

S4.3 Training PPGN-h

For the PPGN-h variant, we train two separate networks: a generator G (which maps codes h to images x) and a prior p(h). G is trained via the same procedure described in Sec. S4.4. We model p(h) via a multi-layer perceptron DAE with 7 hidden layers of sizes 4096-2048-1024-500-1024-2048-4096. We experimented with larger networks but found this one to work the best. We swept across different amounts of Gaussian noise and empirically chose a noise level of roughly 10% of the mean feature activation. The network is trained via the common training framework described in Sec. S4.1, with a small amount of L2 weight regularization.

S4.4 Training the Noiseless Joint PPGN-h

Here we describe the training details of the generator network G used in the main experiments in Sections 3.3, 3.4, and 3.5. The training procedure closely follows the framework of Dosovitskiy & Brox [9].

The purpose is to train a generator network G to reconstruct images from an abstract, high-level feature code space of an encoder network E: here, the first fully connected layer (fc6) of an AlexNet DNN [26] pre-trained to perform image classification on the ImageNet dataset [7] (Fig. S9a). We train G as a decoder for the encoder E, which is kept frozen. In other words, E and G form an image autoencoder (Fig. S9b).

Training losses. G is trained with 3 different losses, as in Dosovitskiy & Brox [9]: an adversarial loss L_GAN, an image reconstruction loss L_img, and a feature matching loss L_h1 measured in the spatial pool5 feature space (Fig. S9b):

L_G = L_img + L_h1 + L_GAN
L_img and L_h1 are reconstruction losses in their respective spaces of images x and pool5 codes h1:

L_img = ||G(E(x)) − x||²,   L_h1 = ||E₁(G(E(x))) − E₁(x)||²

where E₁(·) denotes the mapping from an image to its pool5 features.
The adversarial loss for G (which intuitively maximizes the chance that D makes mistakes) follows the original GAN paper [14]:

L_GAN = − Σ_i log D(G(h_i))
where x_i is a training image and h_i = E(x_i) is its code. As in Goodfellow et al. [14], D tries to tell apart real and fake images, and is trained with the adversarial loss:

L_D = − Σ_i [ log D(x_i) + log(1 − D(G(h_i))) ]
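For concreteness, the two adversarial objectives can be written over a mini-batch of discriminator outputs as in this NumPy sketch (we use the non-saturating −log D(G(h)) form of the generator loss, commonly used in practice):

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """L_D: D is rewarded for scoring real images high and fakes low.
    d_real = D(x_i), d_fake = D(G(h_i)), both arrays in (0, 1)."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_adv_loss(d_fake):
    """L_GAN (non-saturating form): G is rewarded when D is fooled,
    i.e. when D(G(h_i)) is close to 1."""
    return -np.mean(np.log(d_fake))
```

When D assigns high scores to real images and low scores to fakes, its own loss is small while the generator's adversarial loss is large, and vice versa; this opposing pull is what the pausing heuristic of Sec. S4.1 balances.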
Architecture. G is an upconvolutional (also called “deconvolutional”) network [10] with 9 upconvolutional and 3 fully connected layers. D is a regular convolutional network for image classification with an architecture similar to AlexNet [26]: 5 convolutional layers followed by 3 fully connected layers, with 2 outputs (for the “real” and “fake” classes).

The networks are trained via the common training framework described in Sec. S4.1, with a small amount of L2 weight regularization.

Specifics of DGN-AM reproduction. Note that while the original set of parameters in Nguyen et al. [37] (including a small number of iterations, an L2 decay on the code h, and a step size decay) produces high-quality images, it does not allow for a long sampling chain that travels from one mode to another. For comparisons with other models within our framework, we sample from DGN-AM with a hyperparameter setting slightly different from that in Eq. 10, but one that produces qualitatively the same result.

S4.5 Training the Joint PPGN-h

Using the same network structures as DGN-AM [37], we train the generator G differently by treating the entire model as being composed of 3 interleaved DAEs: one each for x, h1, and h (see Fig. S9c). Specifically, we add Gaussian noise to these variables during training and incorporate three corresponding reconstruction losses (see Fig. S9c). Adding noise to an AE can be considered a form of regularization that encourages the autoencoder to extract more useful features [57]. Thus, here, we hypothesize that adding a small amount of noise to x and h1 might slightly improve the result. In addition, the benefits of adding noise to h and training the pair G and E as a DAE for h are twofold: 1) it allows us to formally estimate the quantity ∂log p(h)/∂h (see Eq. 6) following a previous mathematical justification by Alain & Bengio [1]; 2) it allows us to sample with a larger noise level, which might improve the mixing speed.

We add noise to h during training and train G with a reconstruction loss for h:

L_h = ||E(G(h̃)) − h||²

where h̃ denotes a noisy version of the code h = E(x).
Thus, the generator network G is trained with 4 losses in total:

L_G = L_img + L_h1 + L_h + L_GAN
The three losses L_img, L_h1, and L_GAN remain the same as in the training of the Noiseless Joint PPGN-h (Sec. S4.4). Network architectures and other common training details also remain as described in Sec. S4.4.
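The h-reconstruction term can be sketched by treating the composition E∘G as a denoising autoencoder for the code. The NumPy sketch below uses toy stand-ins for G and E (in the paper these are the trained generator and the frozen AlexNet encoder):

```python
import numpy as np

def dae_h_loss(G, E, h, sigma, rng):
    """L_h (sketch): corrupt h with Gaussian noise, decode to an image
    with G, re-encode with E, and penalize the squared distance to the
    clean code."""
    h_noisy = h + rng.normal(0.0, sigma, size=h.shape)
    h_rec = E(G(h_noisy))
    return float(np.sum((h_rec - h) ** 2))

# Toy check with identity G and E: with no noise the loss is exactly zero,
# and with noise the loss penalizes whatever corruption survives E(G(.)).
rng = np.random.default_rng(0)
h = rng.normal(size=8)
identity = lambda v: v
```

Minimizing this loss over G's parameters encourages G∘E to denoise the code, which is what licenses the DAE-based estimate of ∂log p(h)/∂h used during sampling.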

The amounts of Gaussian noise added to the 3 variables x, h1, and h are, respectively, about 1% of the mean pixel value and 10% of the mean activations in the h1 and h spaces, computed from the training set. We experimented with larger noise levels but were not able to train the model successfully, as large amounts of noise resulted in training instability. We also tried training with noise added to only two of the three variables, i.e. treating the model as being composed of 2 DAEs instead of 3, but did not obtain qualitatively better results.

Note that, while we did not experiment with it in this paper, it is possible to jointly train both the generator G and the encoder E via their respective maximum likelihood training algorithms. Xie et al. [62] have also proposed a training regime that alternately updates these two networks; that cooperative training scheme indeed yields a generator that synthesizes impressive results on multiple image datasets [62].

Figure S9: In this paper, we propose a class of models called PPGNs that are composed of 1) a generator network G that is trained to draw a wide range of image types, and 2) a replaceable “condition” network C that tells G what to draw (Fig. 3). Panels (b) and (c) show the components involved in the training of the generator network G for the two main PPGN variants experimented with in this paper. Only the shaded components (G and D) are trained while the others are kept frozen. b) For the Noiseless Joint PPGN-h variant (Sec. 3.5), we train a generator G to reconstruct images x from compressed features h produced by a pre-trained encoder network E. Specifically, h1 and h are, respectively, features extracted at layers pool5 and fc6 of AlexNet [26] trained to classify ImageNet images (a). G is trained with 3 losses: an image reconstruction loss L_img, a feature matching loss [9] L_h1, and an adversarial loss [14] L_GAN. As in Goodfellow et al. [14], the discriminator D is trained to tell apart real and fake images. This PPGN variant produces the best image quality and is thus used for the main experiments in this paper (Sec. 4). After G is trained, we sample from this model following the iterative sampling procedure described in Sec. 3.5. c) For the Joint PPGN-h variant (Sec. 3.4), we train the entire model as being composed of 3 interleaved DAEs, respectively for x, h1 and h. In other words, we add noise to each of these variables and train the corresponding AE with a reconstruction loss. The loss for D remains the same as in (b), while the loss for G is now composed of 4 components: L = L_img + L_h1 + L_GAN + L_h. The sampling procedure for this PPGN variant is provided in Sec. 3.4. See Sec. S4 for more training and architecture details of the two PPGN variants.

(a) Real: top 9             (b) DGN-AM [37]           (c) Real: random 9           (d) PPGN (this paper)

Figure S10: (a) The 9 training set images that most highly activate a given class output neuron (e.g. fire engine). (b) DGN-AM [37] synthesizes high-quality images, but they often converge to the mode of high-activating images (the top-9 mode). (c) 9 training set images randomly picked from the same class. (d) Our new method PPGN produces samples with better quality and substantially larger diversity than DGN-AM, thus better representing the diversity of images from the class.
(a) Samples produced by PPGN visualized in a grid t-SNE [56]


(b) Samples hand-picked from (a) to showcase the diversity and quality of images produced by PPGN.
Figure S11: We qualitatively evaluate sample diversity by running 10 sampling chains (conditioned on the class “volcano”), each for 200 steps, producing 2000 samples, and filtering out samples with low class probability. From the remaining samples, we randomly pick 400 and plot them in a grid t-SNE [56] (top panel). From those, we chose a selection to highlight the quality and diversity of the samples (bottom panel). There is a tremendous amount of detail in each image and diversity across images. Samples include dormant volcanoes and active volcanoes with smoke plumes of different colors from white to black to fiery orange. Some have two peaks and others one, and underneath are scree, green forests, or glaciers (complete with crevasses). The sky changes from different shades of mid-day blue through different sunsets to pitch-black night.
(a) Samples produced by PPGN visualized in a grid t-SNE [56]


(b) Samples hand-picked from (a) to showcase the diversity and quality of images produced by PPGN.
Figure S12: The figures are selected and plotted in the same way as Fig. S11, but here for the “pool table” class. Once again, we observe a high degree of both image quality and diversity. Different felt colors (green, blue, and red), lighting conditions, camera angles, and interior designs are apparent.
(a) PPGN-x: a DAE model of p(x)
(b) DGN-AM [37] (which has a hand-designed Gaussian prior)
(c) PPGN-h: Generator G and a multi-layer perceptron DAE model of p(h)
(d) Joint PPGN-h: joint Generator G and DAE
(e) Noiseless Joint PPGN-h: joint Generator G and AE
Figure S13: A comparison of samples generated from a single sampling chain (starting from a real image on the left) across different models. Each panel shows two sampling chains for that model: one conditioned on the “planetarium” class and the other conditioned on the “kite” (a type of bird) class. The iteration number of the sampling chain is shown on top. (a) The sampling chain in the image space mixes poorly. (b) The sampling chain from DGN-AM [37] (in the code space with a hand-designed Gaussian prior) produces better images, but still mixes poorly, as evidenced by similar samples over many iterations. (c) To improve sampling, we tried swapping in a model of p(h) represented by a 7-layer DAE for h. However, the sampling chain does not mix faster or produce better samples. (d) We train the entire model as being composed of 3 DAEs and sample from it by adding noise to the x, h1 and h variables (Sec. 3.4). The chain mixes slightly faster compared to (e), but generates slightly worse samples. (e) We experimented with a better way to model p(h), i.e. modeling h via x: we treat the generator G and encoder E as an autoencoder for h and call this treatment “Noiseless Joint PPGN-h” (see Sec. 3.5). This substantially improves the mixing speed and sample quality, and it is the model we use for the experiments in Sec. 4.
(a) PPGN-x: a DAE model of p(x)
(b) DGN-AM [37] (which has a hand-designed Gaussian prior)
(c) PPGN-h: Generator G and a multi-layer perceptron DAE model of p(h)
(d) Joint PPGN-h: joint Generator G and DAE
(e) Noiseless Joint PPGN-h: joint Generator G and AE
Figure S14: Same as Fig. S13, but starting from a random code h (which, when pushed through the generator network G, produces the leftmost images), except for (a), which starts from random images since its sampling operates directly in the pixel space. All of our qualitative conclusions are the same as for Fig. S13. Note that the samples in (b) appear slightly worse than the images reported in Nguyen et al. [37]. The reason is that in the new framework introduced in this paper we run an infinitely long sampling chain at a constant learning rate to travel from one mode to another in the space. In contrast, the set of parameters (including the number of iterations, an L2 decay on code h, and a learning rate decay) in Nguyen et al. [37] is carefully tuned for the best image quality, but does not allow for a long sampling chain (Fig. 2).
(a) Very large noise
(b) Large noise
(c) Medium noise
(d) Small noise
(e) Infinitesimal noise
Figure S15: Sampling chains with the noiseless PPGN model starting from the code of a real image (left) and conditioning on the class “planetarium”, for different noise levels. The sampling step numbers are shown on top. Samples are better with a tiny amount of noise (e) than with larger noise levels (a, b, c & d), so we chose that as our default noise level for all sampling experiments with the Noiseless Joint PPGN-h variant (Sec. 3.5). These results suggest that a certain amount of noise added to the DAE during training might help the chain mix faster, which partly motivated our experiment in Sec. 3.4.
Figure S16: The default generator network G in our experiments (used in Sections 3.3 & 3.5) was trained to reconstruct images from compressed features extracted from the AlexNet classification network [26] with three different losses: an adversarial loss [14], a feature matching loss [9], and an image reconstruction loss (more training details are in Sec. S4.4). Here, we test how robust G is to Gaussian noise added to an input code h of a real image. We sweep across different levels of Gaussian noise, with σ set to different percentages of the mean activation computed from the activations of validation set images. We observed that G is robust to even a large amount of noise, despite being trained without explicit regularization (e.g. injected noise [57] or a contractive penalty [44]).
(a) Samples produced by the DGN-AM method [37]
(b) Samples produced by PPGN (the new model proposed in this paper)
Figure S17: A comparison of images produced by the DGN-AM method [37] (top) and the new PPGN method we introduce in this paper (bottom). Both methods synthesize images conditioned on classes of scene images that the generator was never trained on. Specifically, the condition model is AlexNet trained to classify 205 categories of scene images from the MIT Places dataset [65], while the prior model is trained to generate ImageNet images. Despite having a strong, learned prior (represented by a DAE trained on ImageNet images), the PPGN (like DGN-AM) produces high-quality images for an unseen dataset.
Figure S18: The model can be given a text description of an image and asked to generate the described image. Technically, that involves the same PPGN model, but conditioning on a caption instead of a class. Here the condition network is the LRCN image captioning model from Donahue et al. [8], which can generate reasonable captions for images. For each caption, we show 4 images synthesized by starting from random initializations. Note that it reasonably draws “tarmac”, “silhouette” or “woman” although these are not categories in the ImageNet dataset [7].
Figure S19: PPGNs have the ability to perform ‘multifaceted feature visualization,’ meaning they can generate the set of inputs that activate a given hidden neuron, which improves our ability to understand what type of features that neuron has learned to detect [40, 64]. To demonstrate that capability, instead of conditioning on a class from the dataset, here we generate images conditioned on a hidden neuron in the AlexNet DNN previously identified as detecting text [64]. We run 10 sampling chains, each for 200 steps, to produce 2000 samples, and filter out samples with a low softmax probability (taken over all depth columns at the same spatial location). From the remaining samples, we randomly pick 400 and plot them in a grid t-SNE [56]. These images can be interpreted as the preferred stimuli for this text detector unit [37]. The diversity of samples is substantially improved over previous methods [40, 37, 64], uncovering different facets that the neuron has learned to detect. In other words, while previous methods produced one type of sample per neuron [64, 37], or lower quality samples with greater diversity [40], PPGNs produce a diversity of high-quality samples, and thus represent the state of the art in multifaceted feature visualization.
Figure S20: The same visualization as Fig. S19, but here for a hidden neuron in the AlexNet DNN previously identified as a face detector [64]. One can see different types of faces that the neuron has learned to detect, including everything from dog faces (top row) to masks (left columns), and human faces from different angles, against different backgrounds, with and without hats, and with different shirt colors. Interestingly, we see that certain types of houses with windows that resemble eye sockets also activate this neuron (center left). This demonstrates the value of PPGNs as tools to identify unexpected facets, which aids in understanding the network, predicting some failure cases, and providing hints for how the network may be improved.
(a) Snail
(b) Studio couch
(c) Harvester
(d) Pomegranate
(e) Tape player
Figure S21: To evaluate the diversity of images within a class, here we show randomly chosen images from 5 different classes (the class label is shown below each panel). Each image is the last sample produced by a 200-iteration sampling chain starting from a random initialization. The PPGN model is described in Sec. 3.5. We chose this method because it is simple, intuitive, and straightforward to interpret and to compare to other image generative models that do not require MCMC sampling. Another method to produce samples is to run a long sampling chain and record the image produced at every sampling step; however, as done in Fig. S11, that method would require additional processing (including heuristic filtering and clustering) to obtain a set of different images, because a well-known issue with MCMC sampling is that mixing is slow, i.e. subsequent samples are often correlated. Note that one could obtain larger diversity by running each sampling chain with a different set of parameters (the multipliers in Eq. 5); however, here we use the same set of parameters as previously reported in Sec. 3.5 for simplicity and reproducibility.
Figure S22: For a fair evaluation of the image quality produced by PPGN, here we show one randomly chosen sample for each of 120 random ImageNet classes (neither the classes nor the samples are cherry-picked). Each image shown is the last sample produced by a 200-iteration sampling chain starting from a random initialization. The PPGN model is described in Sec. 3.5.
(c) (Noiseless Joint PPGN-h default)
(f) (no contribution from the prior)
Figure S23: To evaluate the effect of the prior term in the sampling, here we sweep across different values of its multiplier. We sample from the Noiseless Joint PPGN-h model (Sec. 3.5) starting from the code of a real image (left) and conditioning on the class “planetarium” with a fixed amount of noise, for different multiplier values. The sampling step numbers are shown on top. Without the learned prior (f), we arrive at the DGN-AM treatment, where the chain does not mix at all (the same result as in Fig. S13(b)). Increasing the multiplier up to a small value (c–e) results in a chain that mixes faster, from one planetarium to another. When the contribution of the prior is too high, it overwhelms the class gradients, yielding a chain that mixes from one mode of generic images to another (a). We empirically chose the value in (c) as the default for the Noiseless Joint PPGN-h experiments in this paper, as it produces the best image quality and diversity for many classes.
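The trade-off swept in Fig. S23 can be illustrated with a toy version of the Eq. 5 update, assuming simple quadratic log-densities so that both gradients are available in closed form (all values and names here are hypothetical, not the paper's actual hyperparameters):

```python
import numpy as np

def sample_chain(h0, eps1, eps2, eps3, target, steps=200, seed=0):
    """Toy PPGN-style update:
    h += eps1 * d/dh log p(h) + eps2 * d/dh log p(y|h) + noise."""
    rng = np.random.default_rng(seed)
    h = np.array(h0, dtype=float)
    for _ in range(steps):
        prior_grad = -h              # log p(h)   ~ -||h||^2 / 2
        cond_grad = -(h - target)    # log p(y|h) ~ -||h - target||^2 / 2
        h = (h + eps1 * prior_grad + eps2 * cond_grad
               + eps3 * rng.standard_normal(h.shape))
    return h

target = np.array([2.0, -1.0])
# Without the prior (eps1 = 0) the chain settles at the class mode itself.
h_no_prior = sample_chain([0.0, 0.0], eps1=0.0, eps2=0.1, eps3=0.0, target=target)
# With a prior, the fixed point is pulled toward h = 0 ("generic" codes).
h_prior = sample_chain([0.0, 0.0], eps1=0.1, eps2=0.1, eps3=0.0, target=target)
print(h_no_prior, h_prior)
```

With eps1 = 0 this reduces to a DGN-AM-style, condition-only update; increasing eps1 pulls the chain's fixed point toward the prior's mode, mirroring the qualitative behavior of the sweep in Fig. S23.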

Appendix S5 Inpainting

We first randomly mask out a patch of a real image (Fig. 7a). The patch size is chosen following Pathak et al. [42]. We perform the same update rule as in Eq. 11 (conditioning on a class, e.g. “volcano”), but with an additional step updating the image x during the forward pass:

    x ← m ⊙ x + (1 − m) ⊙ x_real
where m is the binary mask for the corrupted patch (1 inside the patch, 0 elsewhere), x_real is the real image with its uncorrupted area, and ⊙ denotes the Hadamard (elementwise) product. Intuitively, we clamp the observed parts of the synthesized image and then sample only the unobserved portion in each pass. The DAE model of p(h) and the image classification network thus see progressively refined versions of the final, filled-in image. This approach tends to fill in semantically correct content, but it often fails to match the local details of the surrounding context (Fig. 7b: the predicted pixels often do not transition smoothly to the surrounding context). An explanation is that we are sampling in a fully-connected feature space, which mostly encodes information about the global structure of objects rather than local details [64].
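The clamping step amounts to a single masked blend per forward pass. A minimal numpy sketch (array shapes and variable names are hypothetical):

```python
import numpy as np

def clamp_observed(x_synth, x_real, mask):
    """Keep sampled pixels inside the corrupted patch (mask == 1)
    and clamp everything else to the real image."""
    return mask * x_synth + (1.0 - mask) * x_real

x_real = np.ones((4, 4))      # observed real image (toy values)
x_synth = np.zeros((4, 4))    # current synthesized image
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0          # corrupted 2x2 patch

x = clamp_observed(x_synth, x_real, mask)
# Outside the patch, x equals the real image; inside it keeps the
# synthesized values, which are refined at each sampling step.
```

In the actual procedure this blend is applied to the generator's output each pass before the image is fed to the downstream networks.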

To encourage the synthesized image to match the context of the real image, we can add an extra condition in pixel space, in the form of an additional term in the update rule in Eq. 5 that updates h in the direction of minimizing the cost ||(1 − m) ⊙ (G(h) − x_real)||². This helps the filled-in pixels match the surrounding context better (Fig. 7 b vs. c). Compared to the Content-Aware Fill feature in Photoshop CS6, which is based on the PatchMatch technique [3], our method often performs worse at matching the local features of the surrounding context, but can fill in semantic objects better in many cases (Fig. 7, bird & bell pepper). More inpainting results are provided in Fig. S24.
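Since the extra term is a quadratic penalty on the uncorrupted region, its gradient with respect to the synthesized image is just the masked residual. A hedged sketch of that penalty (in the actual method this gradient would be backpropagated further through G to the code h; names here are illustrative):

```python
import numpy as np

def context_cost_and_grad(x_synth, x_real, mask):
    """Cost ||(1 - mask) * (x_synth - x_real)||^2 and its gradient
    with respect to x_synth (zero inside the corrupted patch)."""
    resid = (1.0 - mask) * (x_synth - x_real)
    cost = np.sum(resid ** 2)
    grad = 2.0 * resid
    return cost, grad

x_real = np.full((2, 2), 0.5)
x_synth = np.zeros((2, 2))
mask = np.array([[1.0, 0.0],
                 [0.0, 0.0]])   # top-left pixel is the corrupted patch

cost, grad = context_cost_and_grad(x_synth, x_real, mask)
# Only the three observed pixels contribute to the cost, and the
# gradient vanishes at the masked pixel, so the update never touches
# the region being inpainted.
```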

Figure S24: To test the ability of PPGNs to perform “inpainting”, we randomly mask out a patch in a real image, and perform class-conditional image sampling via PPGN-context (described in Sec. 4.2) to fill in missing pixels. In addition to conditioning on a specific class (here, “volcano”, “junco” and “bell pepper” respectively), we put an additional constraint on the code that it has to produce an image that matches the context region. PPGN-context performs semantically well in many cases. However, sometimes it does not match the local features of the surrounding regions. The result shows that the class-conditional image model does understand the semantics of images.

Appendix S6 PPGN-x: a DAE model of p(x)

We investigate the effectiveness of using a DAE to model p(x) directly (Fig. 3a). This DAE is a 4-layer convolutional network trained on unlabeled images from ImageNet. We sweep across different noise amounts for training the DAE and empirically find that a noise level equal to a small percentage of the pixel value range produces the best results. Full training and architecture details are provided in Sec. S4.2.

We sample from this chain following Eq. 7 (the noise-related multipliers correspond to the noise level used while training the DAE, and the remaining multiplier is chosen manually to produce the best samples) and show samples in Figs. S13(a) & S14(a). PPGN-x exhibits two expected problems: first, it models the data distribution poorly, as evidenced by the images becoming blurry over time. Second, the chain mixes slowly, changing only slightly over hundreds of steps.

Note that, instead of training the above DAE, one can also form an x-DAE by combining a separately trained encoder E and generator G into the composition G(E(x)). We also experiment with this model and call it Joint PPGN-x. The details of networks G and E, and how they can be combined, are described in Sec. 3.4 (Joint PPGN-h). For sampling, we sample in the image space, similarly to the PPGN-x in this section. We found that the Joint PPGN-x model performs better than PPGN-x, but worse than the Joint PPGN-h (data not shown).

Appendix S7 Why PPGNs produce high-quality images

One practical question is why the Joint PPGN-h produces high-quality images at high resolution for 1000-class ImageNet more successfully than other existing latent variable models [41, 47, 43]. We can consider this question from two perspectives.

First, from the perspective of the training loss, G is trained with the combination of three losses (Fig. S9b), which may be a beneficial approach to modeling p(x). The GAN loss [14] used to train G pushes each reconstruction toward a mode of real images and away from the current reconstruction distribution. This can be seen by noting that the Bayes-optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)) [14]. Since a reconstruction is already near a mode of p_data, the net effect is to push it towards one of the modes of p_data, thus making the reconstructions sharper and more plausible. If one uses only the GAN objective and no reconstruction objectives (losses in the pixel or feature space), G may bring the sample far from the original x, possibly collapsing several modes of p_data into fewer modes. This is the typical, well-known “missing-mode” behavior of GANs [47, 14], which arises in part because GANs minimize the Jensen-Shannon divergence rather than the Kullback-Leibler divergence between the data distribution and the model distribution [53]. The reconstruction losses are important for combating this missing-mode problem, and may also serve to enable better convergence of the feature-space autoencoder to the distribution it models, which is necessary in order for the h-space reconstruction to properly estimate ∂log p(h)/∂h [1].
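The Bayes-optimal discriminator formula above can be verified numerically on 1-D Gaussians, a toy check unrelated to any actual training code (all names and values are illustrative):

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def optimal_discriminator(x, mu_data=0.0, mu_gen=2.0, sigma=1.0):
    # D*(x) = p_data(x) / (p_data(x) + p_g(x))  [Goodfellow et al., 14]
    p_data = gauss_pdf(x, mu_data, sigma)
    p_gen = gauss_pdf(x, mu_gen, sigma)
    return p_data / (p_data + p_gen)

# Halfway between the two modes the densities are equal, so D* = 0.5;
# near the data mode D* -> 1, and near the generator mode D* -> 0.
d_mid = optimal_discriminator(1.0)
d_data = optimal_discriminator(-3.0)
d_gen = optimal_discriminator(5.0)
print(d_mid, d_data, d_gen)
```

The gradient of this D* with respect to x is what pushes a generated sample toward the nearest data mode, which is the sharpening effect described above.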

Second, from the perspective of the learned mapping, we train the parameters of the pair of networks (E, G) as an autoencoder for h, mapping h → x → h (see Fig. S9b). In this configuration, as in VAEs [24] and regular DAEs [57], the one-to-one mapping helps prevent the missing-mode collapse typical of GANs, in which some input images are not representable by any code [14, 47]. However, unlike in VAEs and DAEs, where the latent distribution is learned in a purely unsupervised manner, we leverage the labeled ImageNet data to train E in a supervised manner, yielding a distribution of features that we hypothesize to be semantically meaningful and useful for building a generative image model. To further understand the effectiveness of deep, supervised features, it might be interesting future work to train PPGNs with other feature distributions, such as random features or shallow features (e.g. produced by PCA).

Model                             | Image size | Inception accuracy | Inception score | MS-SSIM     | Percent of classes
Real ImageNet images              | 256×256    | 76.1%              | 210.4 ± 4.6     | 0.10 ± 0.06 | 999 / 1000
AC-GAN [41]                       | 128×128    | 10.1%              | N/A             | N/A         | 847 / 1000
PPGN                              | 256×256    | 59.6%              | 60.6 ± 1.6      | 0.23 ± 0.11 | 829 / 1000
PPGN samples resized to 128×128   | 128×128    | 54.8%              | 47.7 ± 1.0      | 0.25 ± 0.11 | 770 / 1000
Table S3: A comparison between real ImageNet validation set images, AC-GAN [41] samples, PPGN samples, and their resized versions. Following the literature, we report Inception scores [47] (higher is better) and Inception accuracies [41] (higher is better) to evaluate sample quality, and the MS-SSIM score [41] (lower is better), which measures sample diversity within each class. As in Odena et al. [41], the last column (“Percent of classes”, higher is better) shows the number of classes that are more diverse (by the MS-SSIM metric) than the least diverse class in ImageNet. Overall, PPGN samples are of substantially higher quality than AC-GAN samples (by Inception accuracy, i.e. PPGN samples are far more recognizable by the Google Inception network [50] than AC-GAN samples). Their diversity scores are similar (last column, 847/1000 vs. 829/1000). However, by all 4 metrics, PPGN samples have substantially lower diversity and quality than real images. This result aligns with our qualitative observations in Figs. S25 & S10.
Row 2: Note that we chose to compare with AC-GAN [41] because this model is also class-conditional and, to the best of our knowledge, it produces the previous highest-resolution ImageNet images (128×128) in the literature.
Row 3: For comparison with ImageNet images, the spatial dimension of the samples from the generator is 256×256; we did not crop them to 227×227 as done in other experiments in the paper.
Row 4: Although imperfect, we resized PPGN samples down to 128×128 (last row) for comparison with AC-GAN.
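The Inception score reported in Table S3 is exp(E_x[KL(p(y|x) || p(y))]) [47]. A minimal sketch of the metric on hypothetical class-probability matrices (the real metric uses the Inception network's predictions over many samples; everything here is a toy stand-in):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (n_samples, n_classes) matrix whose rows are p(y|x)."""
    marginal = probs.mean(axis=0)                                   # p(y)
    # Per-sample KL(p(y|x) || p(y)), then exponentiate the mean.
    kl = np.sum(probs * (np.log(probs + eps) - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Confident and diverse predictions -> score approaches n_classes.
diverse = np.eye(4)
# All samples look identical to the classifier -> score of 1 (mode collapse).
collapsed = np.tile([[1.0, 0.0, 0.0, 0.0]], (4, 1))

print(inception_score(diverse), inception_score(collapsed))
```

This makes explicit why the score rewards both per-sample confidence and across-sample diversity, the two properties Table S3 compares across models.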
(a) 60 training set images randomly taken from the “volcano” class.


(b) 60 PPGN samples randomly selected from 2000 samples, which are produced from 10 200-step sampling chains.
Figure S25: To evaluate how well PPGN samples represent the training set images, we compare 60 real images (top) vs. 60 PPGN-generated images (bottom). All images are randomly selected. While the set of generated images is high-quality, it has much lower diversity than the set of real images. This observation aligns with our quantitative measures in Table S3.