Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models
Abstract
Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions. Conditional generation enables interactive control, but creating new controls often requires expensive retraining. In this paper, we develop a method to condition generation without retraining the model. By post hoc learning latent constraints, value functions that identify regions in latent space that generate outputs with desired attributes, we can conditionally sample from these regions with gradient-based optimization or amortized actor functions. Combining attribute constraints with a universal “realism” constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder. Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space to modify the attributes of an image. Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function. Code with a dedicated cloud instance has been made publicly available (https://goo.gl/STGMGx).
1 Introduction
Generative modeling of complicated data such as images and audio is a long-standing challenge in machine learning. While unconditional sampling is an interesting technical problem, it is arguably of limited practical interest in its own right: if one needs a non-specific image (or sound, song, document, etc.), one can simply pull something at random from the unfathomably vast media databases on the web. But that naive approach may not work for conditional sampling (i.e., generating data to match a set of user-specified attributes), since as more attributes are specified, it becomes exponentially less likely that a satisfactory example can be pulled from a database. One might also want to modify some attributes of an object while preserving its core identity. These are crucial tasks in creative applications, where the typical user desires fine-grained controls (Bernardo et al., 2017).
One can enforce user-specified constraints at training time, either by training on a curated subset of data or with conditioning variables. These approaches can be effective if there is enough labeled data available, but they require expensive model retraining for each new set of constraints and may not leverage commonalities between tasks. Deep latent-variable models, such as Generative Adversarial Networks (GANs; Goodfellow et al., 2014) and Variational Autoencoders (VAEs; Kingma & Welling, 2013; Rezende et al., 2014), learn to unconditionally generate realistic and varied outputs by sampling from a semantically structured latent space. One might hope to leverage that structure in creating new conditional controls for sampling and transformations (Brock et al., 2016).
Here, we show that new constraints can be enforced post hoc on pretrained unsupervised generative models. This approach removes the need to retrain the model for each new set of constraints, allowing users to more easily define custom behavior. We separate the problem into (1) creating an unsupervised model that learns how to reconstruct data from latent embeddings, and (2) leveraging the latent structure exposed in that embedding space as a source of prior knowledge, upon which we can impose behavioral constraints.
Our key contributions are as follows:

We show that it is possible to generate conditionally from an unconditional model, learning a critic function in latent space and generating high-value samples with either gradient-based optimization or an amortized actor function, even with a non-differentiable decoder (e.g., for discrete sequences).

Focusing on VAEs, we address the tradeoff between reconstruction quality and sample quality (without sacrificing diversity) by enforcing a universal “realism” constraint that requires samples in latent space to be indistinguishable from encoded data (rather than prior samples).

Because we start from a VAE that can reconstruct inputs well, we are able to apply identity-preserving transformations by making the minimal adjustment in latent space needed to satisfy the desired constraints. For example, when we adjust a person’s expression or hair, the result is still clearly identifiable as the same person (see Figure 5). This contrasts with pure GAN-based transformation approaches, which often fail to preserve identity.

Zero-shot conditional generation. Using samples from the VAE to generate exemplars, we can learn an actor–critic pair that satisfies user-specified, rule-based constraints in the absence of any labeled data.
2 Background
Decoder-based deep generative models such as VAEs and GANs generate samples that approximate a population distribution by passing samples from some simple tractable distribution (often $p(z) = \mathcal{N}(0, I)$) through a deep neural network. GANs are trained to fool an auxiliary classifier that tries to learn to distinguish between real and synthetic samples. VAEs are fit to data using a variational approximation to maximum-likelihood estimation:
$$\log p(x) \;\ge\; \mathbb{E}_{q(z \mid x)}\big[\log \pi\big(x;\, g(z)\big)\big] \;-\; \mathrm{KL}\big(q(z \mid x) \,\big\|\, p(z)\big) \;\triangleq\; \mathcal{E} \qquad (1)$$
where the “encoder” distribution $q(z \mid x)$ is an approximation to the posterior $p(z \mid x)$, $\pi(x;\, g(z))$ is a tractable likelihood function that depends on some parameters output by a “decoder” function $g(z)$, and $q$ and $g$ are fit to maximize the evidence lower bound (ELBO) $\mathcal{E}$. The likelihood is often chosen to be a product of simple distributions, such as independent Gaussians $\mathcal{N}(x;\, g(z), \sigma_x^2 I)$ for continuous data or independent Bernoullis for binary data.
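For the common case where both the encoder distribution and the prior are diagonal Gaussians, the KL term of the ELBO has a simple closed form. A minimal sketch (the function name is ours, not from the paper's code):

```python
import math

def kl_diag_gaussian_to_std_normal(mu, sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dimensions.
    Per dimension: 0.5 * (sigma^2 + mu^2 - 1) - log(sigma)."""
    return sum(
        0.5 * (s * s + m * m - 1.0) - math.log(s)
        for m, s in zip(mu, sigma)
    )

# The standard normal has zero divergence from itself:
print(kl_diag_gaussian_to_std_normal([0.0, 0.0], [1.0, 1.0]))  # -> 0.0
```

Note that the KL term is zero exactly when the posterior ignores the input, which is the degenerate "posterior collapse" regime; useful encoders pay a positive KL cost per dimension they actually use.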
GANs and VAEs have complementary strengths and weaknesses. GANs suffer from the “mode-collapse” problem, where the generator assigns mass to a small subset of the support of the population distribution—that is, it may generate realistic samples, but there are many more realistic samples that it cannot generate. This is particularly problematic if we want to use GANs to manipulate data rather than generate new data; even GAN variants that include some kind of inference machinery (e.g., Donahue et al., 2016; Dumoulin et al., 2016; Perarnau et al., 2016) to determine what $z$ best matches a given $x$ tend to produce reconstructions that are reminiscent of the input but do not preserve its identity.
On the other hand, VAEs (especially those with simple likelihoods $\pi$) often exhibit a tradeoff between sharp reconstructions and sensible-looking samples (see Figure 2). That is, depending on what hyperparameters they are trained with (e.g., the latent dimensionality and the scale $\sigma_x$ of the likelihood term), VAEs tend to either produce blurry reconstructions and plausible (but blurry) novel samples, or bizarre samples but sharp reconstructions. It has been argued (Makhzani et al., 2016) that this is due to the “holes” problem; the decoder is trained on samples from the marginal posterior $q(z) \triangleq \mathbb{E}_{p(x)}[q(z \mid x)]$, which may have very high KL divergence to the presupposed marginal $p(z)$ (Hoffman & Johnson, 2016). In particular, if the decoder $g(z)$ can reconstruct arbitrary values of $z$ with high accuracy (as in the case of small $\sigma_x$), then the typical posterior $q(z \mid x)$ will be highly concentrated. We show this experimentally in supplemental Figure 16. If $q(z \mid x)$ underestimates the posterior variance (as it usually does), then the marginal posterior $q(z)$ will also be highly concentrated, and samples from $p(z)$ may produce results that are far from typical reconstructions. If we tune $\sigma_x$ to maximize the ELBO (Bishop, 2006), we find the optimal value (supplemental Table 4). Figure 2 shows that this choice does indeed lead to good reconstructions but strange-looking samples.
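The pull toward small likelihood scales is easy to verify numerically: when reconstructions are nearly perfect, shrinking the scale inflates the per-pixel Gaussian log-density without bound, so the likelihood term can dominate the KL term. A toy illustration (not the paper's code):

```python
import math

def gaussian_logpdf(x, mu, sigma):
    """Log-density of a univariate Gaussian N(mu, sigma^2) at x."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

# Per-pixel log-likelihood of a near-perfect reconstruction (error 0.01)
# grows as the likelihood scale shrinks:
for sigma_x in (1.0, 0.1, 0.01):
    print(sigma_x, gaussian_logpdf(0.51, 0.50, sigma_x))
```

Once the reconstruction error falls below the scale, each further halving of the scale adds roughly log 2 per pixel to the log-likelihood, which is what pushes the posteriors toward high concentration.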
Conditional GANs (CGAN; Mirza & Osindero, 2014) and conditional VAEs (CVAE; Sohn et al., 2015) can generate samples conditioned on attribute information when available, but they must be trained with knowledge of the attribute labels for the whole training set, and it is not clear how to adapt them to new attributes without retraining from scratch. Furthermore, CGANs and CVAEs suffer from the same problems of mode-collapse and blurriness as their unconditional cousins.
We take a different approach to conditional generation and identity-preserving transformation. We begin by training an unconditional VAE with hyperparameters chosen to ensure good reconstruction (at the expense of sample quality). We then train a “realism” critic to predict whether a given $z$ maps to a high-quality sample. We also train critics to predict whether a given $z$ maps to a sample that manifests various attributes of interest. To generate samples that are both realistic and exhibit desired attributes, one option is to optimize random $z$ vectors until they satisfy both the realism and attribute critics. Alternately, we can amortize this cost by training an “actor” network to map random $z$ vectors to a subregion of latent space that satisfies the constraints encoded by the critics. By encouraging these transformed vectors to remain as close as possible to where they started, we alleviate the mode-collapse problem common to GANs.
3 The “Realism” Constraint: Sharpening VAE Samples
We define the realism constraint implicitly as being satisfied by samples from the marginal posterior $q(z)$ and not by those from the prior $p(z)$. By enforcing this constraint, we can close the gap between reconstruction quality and sample quality (without sacrificing sample diversity).
As shown in Figure 1, we can train a critic $D(z)$ to differentiate between samples from $q(z)$ and $p(z)$. The critic loss, $\mathcal{L}_D$, is simply the cross-entropy, with label 1 for $z \sim q(z)$ and 0 for $z \sim p(z)$. We found that the realism critic had little trouble generalizing to unseen data; that is, it was able to recognize samples of $q(z \mid x)$ for held-out $x$ as being “realistic” (Figure 3).
Sampling from the prior is sufficient to train $D$ for models where the KL divergence between $q(z)$ and $p(z)$ is low, but if that divergence is large, the chance of sampling a point that has high probability under $q(z)$ becomes vanishingly small. This leads to poor sample quality and makes it difficult for $D$ to learn a tight approximation of $q(z)$ solely by sampling from $p(z)$. Instead, we use an inner loop of gradient-based optimization, $G_{\mathrm{opt}}(z)$, to move prior samples to points deemed more like $q(z)$ by $D$. For clarity, we denote the marginal posterior $q(z) \triangleq \mathbb{E}_{p(x)}[q(z \mid x)]$ and the optimized prior sample $G_{\mathrm{opt}}(z)$. This gives us our critic loss for the realism constraint:
$$\mathcal{L}_D = -\,\mathbb{E}_{z \sim q(z)}\big[\log D(z)\big] \;-\; \mathbb{E}_{z \sim p(z)}\big[\log\big(1 - D(z)\big)\big] \;-\; \mathbb{E}_{z \sim p(z)}\big[\log\big(1 - D\big(G_{\mathrm{opt}}(z)\big)\big)\big] \qquad (2)$$
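The inner-loop optimization can be sketched with a toy, analytically differentiable critic standing in for the learned $D$; the critic form, step count, and learning rate below are illustrative, not the paper's:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy "realism critic": D(z) = sigmoid(w . z + b), a stand-in for the
# learned neural-network critic.
W = [2.0, -1.0]
B = 0.0

def critic(z):
    return sigmoid(sum(w * zi for w, zi in zip(W, z)) + B)

def grad_log_critic(z):
    # d/dz log sigmoid(w . z + b) = (1 - D(z)) * w
    d = critic(z)
    return [(1.0 - d) * w for w in W]

def g_opt(z, steps=100, lr=0.1):
    """Inner-loop gradient ascent moving a prior sample toward high D(z)."""
    z = list(z)
    for _ in range(steps):
        g = grad_log_critic(z)
        z = [zi + lr * gi for zi, gi in zip(z, g)]
    return z

z0 = [-1.0, 1.0]
z1 = g_opt(z0)
print(critic(z0), critic(z1))  # the critic's value increases after optimization
```

With a real critic, the gradient would be obtained by backpropagation rather than the closed form used here.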
Since this inner loop of optimization can slow down training, we amortize the generation by using a neural network $G(z)$ as a function approximator. There are many examples of such amortization tricks, including the encoder of a VAE, the generator of a GAN, and fast neural style transfer (Ulyanov et al., 2016; Li & Wand, 2016; Johnson et al., 2016). As with a traditional GAN, the parameters of the function $G$ are updated to maximize the value $D$ ascribes to the shifted latent points. One of the challenges of using a GAN in this situation is that it is prone to mode-collapse. However, an advantage of applying the GAN in latent space is that we can regularize $G$ to try to find the closest point in latent space that satisfies $D$, thus encouraging diverse solutions. We introduce a regularization term, $\mathcal{L}_{\mathrm{dist}}(z', z) = \log\big(1 + (z' - z)^2\big)$ (applied elementwise), to encourage nearby solutions while allowing more exploration than a mean squared error term. As a VAE utilizes only a fraction of its latent dimensions, we scale the distance penalty of each dimension by its utilization, as indicated by the squared reciprocal of the scale $\sigma_z$ of the encoder distribution $q(z \mid x)$, averaged over the training dataset, $\bar\sigma_z$. The regularized loss is
$$\mathcal{L}_G = \mathbb{E}_{z \sim p(z)}\Big[-\log D\big(G(z)\big) \;+\; \lambda_{\mathrm{dist}} \sum_i \tfrac{1}{\bar\sigma_{z,i}^2}\,\log\!\big(1 + (G(z)_i - z_i)^2\big)\Big] \qquad (3)$$
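The distance term can be sketched directly from its definition; the scaling vector is the per-dimension encoder scale averaged over the dataset, and the function and variable names here are ours:

```python
import math

def dist_penalty(z_new, z_old, sigma_bar):
    """log(1 + (z' - z)^2) per dimension, weighted by 1 / sigma_bar_i^2
    (the dimension's utilization), summed over dimensions. Heavily-used
    dimensions (small sigma_bar) are the most expensive to move along."""
    return sum(
        math.log(1.0 + (a - b) ** 2) / (s * s)
        for a, b, s in zip(z_new, z_old, sigma_bar)
    )

# No movement -> no penalty; growth is logarithmic, gentler than MSE:
print(dist_penalty([0.0], [0.0], [1.0]))   # 0.0
print(dist_penalty([3.0], [0.0], [1.0]))   # log(10) ~ 2.30 (vs. 9.0 for MSE)
```

The logarithmic growth is what allows the actor to occasionally make large jumps to reach a constraint region while still preferring nearby solutions.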
4 Attribute Constraints: Conditional Generation
Table 1: Attribute classification metrics for conditional generation on CelebA, evaluated with a separately trained attribute classifier (last column: mean shift in latent space).

| CelebA | Accuracy | Precision | Recall | F1 Score | Mean latent shift |
| --- | --- | --- | --- | --- | --- |
| (This Work) 10 Attributes | | | | | |
| Test Data | 0.936 | 0.901 | 0.893 | 0.895 | – |
| Actor, no distance penalty | 0.942 | 0.914 | 0.904 | 0.906 | 80.7 |
| Actor (Small Model), no distance penalty | 0.926 | 0.898 | 0.860 | 0.870 | 58.9 |
| Actor, with distance penalty | 0.928 | 0.903 | 0.863 | 0.874 | 17.0 |
| (Perarnau et al., 2016) 18 Attributes | | | | | |
| Test Data | 0.928 | – | – | 0.715 | – |
| IcGAN | 0.860 | – | – | 0.524 | – |
We want to generate samples that are realistic, but we also want to control what attributes they exhibit. Given binary attribute labels $y$ for a dataset, we can accomplish this by using a CGAN in the latent space, which amounts to replacing $D(z)$ and $G(z)$ with conditional versions $D(z, y)$ and $G(z, y)$, concatenating $y$ to $z$ as input. If both the actor and critic see attribute information, $G$ must find points in latent space that could be samples from $q(z)$ with attributes $y$.
This procedure is computationally inexpensive relative to training a generative model from scratch. In most of our experiments, we use a relatively large CGAN actor–critic pair (4 fully connected ReLU layers of 2048 units each), which during training uses substantially fewer FLOPs per iteration than the unconditional VAE. We also trained a much smaller CGAN actor–critic pair (3 fully connected ReLU layers of 256 units), which uses fewer FLOPs per iteration still, and achieves only slightly worse results than the larger CGAN (supplemental Figure 14 and Table 1).
Figure 4 demonstrates the quality of conditional samples from a CGAN actor–critic pair and the effect of the distance penalty, which constrains generation to be closer to the prior sample, maintaining similarity between samples with different attributes. The regularized CGAN actor has less freedom to ignore modes by pushing many random vectors to the same area of the latent space, since it is penalized for moving samples from $p(z)$ too far. The increased diversity across rows of the regularized CGAN is evidence that this regularization does fight mode-collapse (additional qualitative evidence is in supplemental Figures 7 and 8). However, without a distance penalty, samples appear a bit more realistic, with more prominent attributes. This is supported by Table 1, where we use a separately trained attribute classification model to quantitatively evaluate samples. The actor with no penalty generates samples that are more accurately classified than the actor with a penalty, but it also shifts the samples much farther in latent space.
Although we used a VAE as the base generative model, our approach could also be used to generate high-quality conditional samples from pretrained classical autoencoders. We show in supplemental Figure 15 that we obtain reasonably good conditional samples (albeit with high-frequency spatial artifacts) as $\sigma_x \to 0$ (equivalent to a classical autoencoder). Learning the decoder with VAE training encourages $q(z)$ to fill up as much of the latent space as possible (without sacrificing reconstruction quality), which in turn encourages the decoder to map more of the latent space to reasonable-looking images. The prior also imposes a natural scale on the latent variables.
5 Identity-Preserving Transformations
If we have a VAE that can produce good reconstructions of held-out data, we can transform the attributes of the output by gradient-based optimization. We simply need to train a critic, $D_{\mathrm{attr}}(z)$, to predict the attribute labels $y$ of the data embeddings $z \sim q(z \mid x)$, trained with a cross-entropy loss. Then, starting from a data point $z \sim q(z \mid x)$, we can perform gradient descent jointly on the realism constraint and the attribute constraint. Note that it is helpful to maintain the realism constraint to keep the image from distorting unrealistically. Using the same procedure, we can also conditionally generate new samples (supplemental Figure 9) by starting from $z \sim p(z)$.
Figure 5 demonstrates transformations applied to samples from the heldout evaluation dataset. Note that since the reconstructions are close to the original images, the transformed images also maintain much of their structure. This contrasts with supplemental Figure 10, where a distancepenaltyfree CGAN actor produces transformations that share attributes with the original but shift identity. We could preserve identity by introducing a distance penalty, but find that it is much easier to find the correct weighting of realism cost, attribute cost, and distance penalty through optimization, as each combination does not require retraining the network.
6 Rule-Based Constraints: Zero-Shot Conditional Generation
So far, we have assumed access to labeled data to train attribute classifiers. We can remove the need to provide labeled examples by leveraging the structure learned by our pretrained model, using it to generate exemplars that are scored by a user-supplied reward function. If we constrain the reward function to be bounded, $r(x) \in [0, 1]$, the problem becomes very similar to previous GAN settings, but now the actor, $G$, and critic, $D$, are working together. $D$ aims to best approximate the true value of each latent state, $r(g(z))$, and $G$ aims to shift samples from the prior to high-value states. The critic loss is the cross-entropy with respect to $r(g(z))$, and the actor loss is the same as in Equation 3, where we again have a distance penalty to promote diversity of outputs.
Note that the reward function and VAE decoder need not necessarily be differentiable, as the critic learns a value function to approximate the reward, which the actor uses for training. To highlight this, we demonstrate that the output of a recurrent VAE model can be constrained to satisfy hard-coded, rule-based constraints.
We first train an LSTM VAE (details in the Appendix) on melodic fragments. Each melody, $x$, is represented as a sequence of categorical variables. In order to examine our ability to constrain the pitch classes and note density of the outputs, we define two reward functions: one that encourages notes to come from a set of pitch classes $P$, and another that encourages melodies to have at least $n_{\min}$ notes:
$$r_{\mathrm{pitch}}(x) = \frac{1}{|\mathrm{notes}(x)|} \sum_{t \in \mathrm{notes}(x)} \mathbb{1}\big[\mathrm{pitch}(x_t) \bmod 12 \in P\big], \qquad r_{\mathrm{density}}(x) = \min\!\Big(1,\; \frac{|\mathrm{notes}(x)|}{n_{\min}}\Big) \qquad (4)$$
where $\mathrm{notes}(x)$ denotes the note-on events of $x$.
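The two rewards can be implemented in a few lines. This sketch assumes the 130-state token convention from the appendix, with token ids 0–127 for note-on pitches and (by assumption) 128 for hold and 129 for rest; the allowed set is interpreted as pitch classes (pitch mod 12), and the function names are ours:

```python
NOTE_ON = range(128)  # assumed: tokens 0-127 are note-on pitches

def r_pitch(melody, allowed_pitch_classes):
    """Fraction of note-on events whose pitch class (pitch mod 12) is allowed."""
    notes = [t for t in melody if t in NOTE_ON]
    if not notes:
        return 0.0
    good = sum(1 for t in notes if t % 12 in allowed_pitch_classes)
    return good / len(notes)

def r_density(melody, min_notes):
    """1 if the melody has at least min_notes note-on events, else the fraction."""
    n = sum(1 for t in melody if t in NOTE_ON)
    return min(1.0, n / min_notes)

melody = [60, 128, 64, 129, 67, 128, 61]   # C, E, G (in {C, E, G}) plus C#
print(r_pitch(melody, {0, 4, 7}))          # 3 of 4 notes allowed -> 0.75
print(r_density(melody, min_notes=8))      # 4 notes / 8 required -> 0.5
```

Both rewards are bounded in [0, 1] and non-differentiable with respect to the decoder output, which is exactly the setting the critic's value-function approximation is meant to handle.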
Figure 6 gives an example of controlling the pitch class and note density of generated outputs, which is quantitatively supported by the results in Table 2. During training, the actor goes through several phases of exploration and exploitation, oscillating between expanding to find new modes with high reward and then contracting to find the nearest locations of those modes, eventually settling into high-value states that require only small movements in the latent space (supplemental Figure 11).
Table 2: Average rewards of sampled melodies, with the percentage of samples fully satisfying each constraint in parentheses (last column: average distance penalty).

| Actor | $r_{\mathrm{pitch}}$ | $r_{\mathrm{density}}$ | $\mathcal{L}_{\mathrm{dist}}$ |
| --- | --- | --- | --- |
| Prior | 0.579 (0.43%) | 0.417 (0.04%) | – |
| Pitch constraint | 0.991 (70.8%) | 0.459 (0.01%) | 0.015 |
| Pitch + density constraints | 0.982 (62.4%) | 0.985 (84.9%) | 0.039 |
7 Related Work
Conditional GANs (Mirza & Osindero, 2014) and VAEs (Sohn et al., 2015) introduce conditioning variables at training time. Sohn et al. (2015) allow these variables to affect the distribution in latent space, but still require that it be a tractable distribution. Perarnau et al. (2016) use CGANs to adjust images, but because CGANs cannot usually reconstruct arbitrary inputs accurately, they must resort to image-space processing techniques to transfer effects to the original input. White (2016) proposes adding “attribute vectors” to latent samples as a simple and effective heuristic for performing transformations, which relies heavily on the linearity of the latent space.
Some recent work has focused on applying more expressive prior constraints to VAEs (Rezende et al., 2014; Sønderby et al., 2016; Chen et al., 2017; Tomczak & Welling, 2017). The prior that maximizes the ELBO is $p^\star(z) = q(z)$ (Hoffman & Johnson, 2016); one can interpret our realism constraint as trying to find an implicit distribution that is indistinguishable from $q(z)$. Like the adversarial autoencoder of Makhzani et al. (2016), our realism constraint relies on a discriminative model, but instead of trying to force $q(z)$ to equal some simple prior $p(z)$, we only weakly constrain $q(z)$ and then use a classifier to “clean up” our results.
Like this work, the recently proposed adversarially regularized autoencoder (Junbo et al., 2017) uses adversarial training to generate latent codes in a latent space discovered by an autoencoder; that work focuses on unconditional generation. GómezBombarelli et al. (2016) train classifiers in the latent space of a VAE to predict what latent variables map to molecules with various properties, and then use iterative gradientbased optimization in the latent space to find molecules that have a desired set of properties. On molecule data, their procedure generates invalid molecules rarely enough that they can simply reject these samples, which are detected using offtheshelf software. By contrast, the probability of generating realistic images under our pretrained VAE is astronomically small, and no simple criterion for detecting valid images exists.
Jaques et al. (2017) also use a classifier to constrain generation; they use a Deep Q-network as an auxiliary loss for training an LSTM. Closest to Section 6, Nguyen et al. (2016a; b) generate very high quality conditional images by optimizing a sample from the latent space of a generative network to create an image that maximizes the class activations of a pretrained ImageNet classifier. Our work differs in that we learn an amortized generator/discriminator directly in the latent space, and we achieve diversity by regularizing with the natural scale of the latent space rather than through a modified Langevin sampling algorithm.
8 Discussion and Future Work
We have demonstrated a new approach to conditional generation by constraining the latent space of an unconditional generative model. This approach could be extended in a number of ways.
One possibility would be to plug in different architectures, including powerful autoregressive decoders or adversarial decoder costs, as we make no assumptions specific to independent likelihoods. While we have considered constraints based on implicit density estimation, we could also estimate the constrained distribution directly with an explicit autoregressive model or another variational autoencoder. The efficacy of autoregressive priors in VAEs is promising for this approach (Kingma et al., 2016). Conditional samples could then be obtained by ancestral sampling, and transformations by using gradient ascent to increase the likelihood under the model. Active or semi-supervised learning approaches could reduce the sample complexity of learning constraints. Real-time constraint learning would also enable new applications; it might be fruitful to extend the reward approximation of Section 6 to incorporate user preferences, as in Christiano et al. (2017).
Acknowledgments
Many thanks to Jascha SohlDickstein, Colin Raffel, and Doug Eck for their helpful brainstorming and encouragement.
Appendix A Appendix
a.1 Experimental Details
For images, we use the MNIST digits dataset (LeCun et al., 2010) and the Large-scale CelebFaces Attributes (CelebA) dataset (Liu et al., 2015). MNIST images are 28x28 greyscale pixels scaled to [0, 1]. For attributes, we use the number class label of each digit. CelebA images are center-cropped and then downsampled to RGB images scaled to [0, 1]. We find that many of the attribute labels are not strongly correlated with changes in the images, so we narrow the original 40 attributes to the 10 most visually salient: blond hair, black hair, brown hair, bald, eyeglasses, facial hair, hat, smiling, gender, and age.
For melodies, we scraped the web to collect over 1.5 million publicly available MIDI files. We then extracted 16-bar melodies by sliding a window with a single-bar stride over each non-percussion instrument with a time signature, keeping only the note with the highest pitch when multiple notes overlap. This produced over 3 million unique melodies. We represent each melody as a sequence of 256 (16 per bar) categorical variables, each taking one of 130 discrete states at each sixteenth note: 128 note-on pitches, a hold state, and a rest state.
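The highest-pitch monophonic reduction described above can be sketched as follows. The hold and rest token ids are assumptions (128 and 129), and this simplification does not distinguish a re-articulated note from a held one:

```python
HOLD, REST = 128, 129  # assumed token ids for the hold and rest states

def monophonic(step_pitches):
    """step_pitches: for each sixteenth-note step, the set of sounding MIDI
    pitches. Returns 130-state tokens, keeping only the highest pitch
    whenever multiple notes overlap."""
    tokens, prev = [], None
    for sounding in step_pitches:
        if not sounding:
            tokens.append(REST)
            prev = None
        else:
            top = max(sounding)  # highest pitch wins
            tokens.append(HOLD if top == prev else top)
            prev = top
    return tokens

# A C+E dyad, then E alone, a rest, then a sustained G:
print(monophonic([{60, 64}, {64}, set(), {67}, {67}]))
# -> [64, 128, 129, 67, 128]
```

A full 16-bar fragment would be 256 such tokens, matching the sequence length used by the LSTM VAE.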
a.2 Model Architectures
All encoders, decoders, and classifiers are trained with the Adam optimizer (Kingma & Ba, 2015), with learning rate 3e-4, β1 = 0.9, and β2 = 0.999.
To train $D$ and $G$, we follow the training procedure of Gulrajani et al. (2017), applying a gradient penalty of 10, training $D$ and $G$ in a 10:1 step ratio, and using the Adam optimizer with learning rate 3e-4, β1 = 0.0, and β2 = 0.9. While not necessary for convergence, we find this improves the stability of optimization. We do not apply any of the other tricks of GAN training, such as batch normalization, minibatch discrimination, or one-sided label smoothing (Radford et al., 2015; Salimans et al., 2016). As raw prior samples are easier to discriminate than optimized ones, we train $D$ on prior samples at a rate 10 times less than on optimized samples. For actors with inner-loop optimization, $G_{\mathrm{opt}}$, 100 iterations of Adam are used with learning rate 1e-1, β1 = 0.9, and β2 = 0.999.
MNIST Feedforward VAE
To model the MNIST data, we use a deep feedforward neural network (Figure 13a).
The encoder is a series of 3 linear layers with 1024 outputs each, each followed by a ReLU, after which an additional linear layer is used to produce 2048 outputs. Half of the outputs are used as the means $\mu$ and the softplus of the other half are used as the scales $\sigma$ to parameterize a 1024-dimensional multivariate Gaussian distribution with a diagonal covariance matrix for $q(z \mid x)$.
The decoder is a series of 3 linear layers with 1024 outputs, each followed by a ReLU, after which an additional linear layer is used to produce 28x28 outputs. These outputs are then passed through a sigmoid to generate the output image.
CelebA Convolutional VAE
To model the CelebA data, we use a deep convolutional neural network (Figure 13b).
The encoder is a series of 4 2D convolutional layers, each followed by a ReLU, with 2048, 1024, 512, and 256 output channels, respectively. All convolutional layers have a stride of 2. After the final ReLU, a linear layer is used to produce 2048 outputs. Half of the outputs are used as the means $\mu$ and the softplus of the other half are used as the scales $\sigma$ to parameterize a 1024-dimensional multivariate Gaussian distribution with a diagonal covariance matrix for $q(z \mid x)$.
The decoder passes $z$ through a 4x4x2048 linear layer, and then a series of 4 2D transposed convolutional layers, all but the last of which are followed by a ReLU, with 1024, 512, 256, and 3 output channels, respectively. All deconvolution layers have a stride of 2. The output from the final deconvolution is passed through a sigmoid to generate the output image.
The classifiers that are trained to predict labels from images are identical to the VAE encoders, except that they end with a sigmoid cross-entropy loss.
Melody Sequence VAE
Music is fundamentally sequential, so we use an LSTMbased sequence VAE for modelling monophonic melodies (Figure 13c).
The encoder is made up of a single-layer bidirectional LSTM with 2048 units per cell. The final output in each direction is concatenated and passed through a linear layer to produce 1024 outputs. Half of the outputs are used as the means $\mu$ and the softplus of the other half are used as the scales $\sigma$ to parameterize a 512-dimensional multivariate Gaussian distribution with a diagonal covariance matrix for $q(z \mid x)$.
Since musical sequences often have structure at the bar level, we use a hierarchical decoder to model long melodies. First, $z$ goes through a linear layer to initialize the state of a 2-layer LSTM with 1024 units per layer, which outputs 16 embeddings of size 512 each, one per bar. Each of these embeddings is passed through a linear layer to produce 16 initial states for another 2-layer LSTM with 1024 units per layer. This bar-level LSTM autoregressively produces individual sixteenth-note events, passing its output through a linear layer and softmax to create a distribution over the 130 classes. This categorical distribution is used to compute a cross-entropy loss during training or to draw samples at inference time. In addition to generating the initial state at the start of each bar, the embedding for the current bar is concatenated with the previous output as the input at each time step.
Actor Feedforward Network
For the actor $G(z)$, we use a deep feedforward neural network (Figure 12a) in all of our experiments.
The network is a series of 4 linear layers with 2048 outputs each, each followed by a ReLU, after which an additional linear layer is used to produce $2\dim(z)$ outputs. Half of the outputs are used as shifts $\delta z$ and the sigmoid of the other half are used as gates $\sigma_{\mathrm{gate}}$. The transformed $z'$ is then computed as $z' = \sigma_{\mathrm{gate}} \odot z + (1 - \sigma_{\mathrm{gate}}) \odot \delta z$. This aids in training, as the network then only has to predict shifts in $z$.
When conditioning on attribute labels $y$ to compute $G(z, y)$, the labels are passed through a linear layer producing 2048 outputs, which are concatenated with $z$ as the model input.
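A minimal sketch of a gated latent update of the form $z' = \sigma_{\mathrm{gate}} \odot z + (1 - \sigma_{\mathrm{gate}}) \odot \delta z$, one parameterization consistent with the description above (names are ours):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_shift(z, raw_shift, raw_gate):
    """z' = g * z + (1 - g) * dz, with g = sigmoid(raw_gate).
    A gate near 1 leaves the latent untouched, so the network only
    has to learn where (and how far) to move each dimension."""
    return [
        sigmoid(rg) * zi + (1.0 - sigmoid(rg)) * dz
        for zi, dz, rg in zip(z, raw_shift, raw_gate)
    ]

z = [1.0, -2.0]
print(gated_shift(z, [0.0, 0.0], [10.0, 10.0]))   # gates ~1: z' ~ z
print(gated_shift(z, [5.0, 5.0], [-10.0, -10.0]))  # gates ~0: z' ~ shifts
```

Initializing the gate bias high would make the actor start near the identity map, which pairs naturally with the distance penalty of Equation 3.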
Critic Feedforward Network
For the critic $D(z)$, we use a deep feedforward neural network (Figure 12b) in all of our experiments.
The network is a series of 4 linear layers with 2048 outputs each, each followed by a ReLU, after which an additional linear layer is used to produce a single output. This output is passed through a sigmoid to compute $D(z)$.
When conditioning on attribute labels $y$ to compute $D(z, y)$, the labels are passed through a linear layer producing 2048 outputs, which are concatenated with $z$ as the model input.
a.3 Supplemental Figures
Attribute combinations used for each figure label.

| Figure Label | Bald | Black Hair | Blond Hair | Brown Hair | Eyeglasses | Male | Beard | Smiling | Hat | Young |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Blond Hair | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
| Brown Hair | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 |
| Black Hair | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
| Male | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 |
| Facial Hair | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 |
| Eyeglasses | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 |
| Bald | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 |
| Aged | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 |
Table 4: Log-likelihood (LL), KL divergence, and ELBO for VAEs trained with different likelihood scales $\sigma_x$.

| $\sigma_x$ | LL | KL | ELBO |
| --- | --- | --- | --- |
| 1 | −11360 | 30 | −11390 |
| 1e-1 | −11325 | 150 | −11475 |
| 1e-2 | 15680 | 600 | 15080 |
| 1e-3 | 16090 | 1950 | 14140 |
| 1e-4 | 16150 | 3650 | 12500 |
References
 Bernardo, Zbyszyński, Fiebrink, and Grierson. Interactive machine learning for end-user innovation. In Proceedings of the AAAI Symposium Series: Designing the User Experience of Machine Learning Systems, 2017. URL http://research.gold.ac.uk/19767/1/BernardoZbyszynskiFiebrinkGrierson_UXML_2017.pdf.
 Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.
 Andrew Brock, Theodore Lim, J. M. Ritchie, and Nick Weston. Neural Photo Editing with Introspective Adversarial Networks. arXiv preprint, 2016. URL https://arxiv.org/abs/1609.07093.
 Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. Variational Lossy Autoencoder. In Proceedings of the International Conference on Learning Representations (ICLR), 2017. URL http://arxiv.org/abs/1611.02731.
 Paul Christiano, Jan Leike, Tom B Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. arXiv preprint, 2017. URL https://arxiv.org/abs/1706.03741.
 Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
 Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially Learned Inference. In Proceedings of the International Conference on Learning Representations (ICLR), 2016. URL https://arxiv.org/abs/1606.00704.
 R. Gómez-Bombarelli, J. N. Wei, D. Duvenaud, J. M. Hernández-Lobato, B. Sánchez-Lengeling, D. Sheberla, J. Aguilera-Iparraguirre, T. D. Hirzel, R. P. Adams, and A. Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules. arXiv e-prints, October 2016.
 Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), 2014. URL http://papers.nips.cc/paper/5423generativeadversarialnets.pdf.
 Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved Training of Wasserstein GANs. arXiv preprint, 2017. URL http://arxiv.org/abs/1704.00028.
 Matthew D. Hoffman and Matthew J. Johnson. ELBO surgery: yet another way to carve up the variational evidence lower bound. In Workshop in Advances in Approximate Bayesian Inference, NIPS, 2016. URL http://approximateinference.org/accepted/HoffmanJohnson2016.pdf.
 Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, José Miguel Hernández-Lobato, Richard E. Turner, and Douglas Eck. Sequence tutor: Conservative fine-tuning of sequence generation models with KL-control. In Proceedings of the International Conference on Learning Representations (ICLR), 2017. URL https://arxiv.org/abs/1611.02796.
 Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pp. 694–711. Springer, 2016.
 Junbo Zhao, Yoon Kim, Kelly Zhang, Alexander M. Rush, and Yann LeCun. Adversarially Regularized Autoencoders for Generating Discrete Structures. arXiv preprint arXiv:1706.04223, 2017. URL http://arxiv.org/abs/1706.04223.
 Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), 2015. URL http://arxiv.org/abs/1412.6980.
 Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In Proceedings of the International Conference on Learning Representations (ICLR), 2014. URL http://arxiv.org/abs/1312.6114.
 Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improving Variational Inference with Inverse Autoregressive Flow. In Advances in Neural Information Processing Systems (NIPS), 2016. URL http://arxiv.org/abs/1606.04934.
 Yann LeCun, Corinna Cortes, and Christopher J. C. Burges. MNIST handwritten digit database. AT&T Labs, 2010.
 Chuan Li and Michael Wand. Precomputed real-time texture synthesis with Markovian generative adversarial networks. In European Conference on Computer Vision, pp. 702–716. Springer, 2016.
 Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015. URL https://arxiv.org/abs/1411.7766.
 Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. In Proceedings of the International Conference on Learning Representations (ICLR), 2016. URL http://arxiv.org/abs/1511.05644.
 Mehdi Mirza and Simon Osindero. Conditional Generative Adversarial Nets. arXiv preprint arXiv:1411.1784, 2014. URL http://arxiv.org/abs/1411.1784.
 Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In Advances in Neural Information Processing Systems (NIPS), 2016a. URL https://arxiv.org/abs/1605.09304.
 Anh Nguyen, Jason Yosinski, Yoshua Bengio, Alexey Dosovitskiy, and Jeff Clune. Plug & play generative networks: Conditional iterative generation of images in latent space. arXiv preprint arXiv:1612.00005, 2016b.
 Guim Perarnau, Joost van de Weijer, Bogdan Raducanu, and Jose M. Álvarez. Invertible Conditional GANs for image editing. In Workshop on Adversarial Training, NIPS, 2016. URL http://arxiv.org/abs/1611.06355.
 Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015. URL http://arxiv.org/abs/1511.06434.
 Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
 Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems 29, 2016. URL http://papers.nips.cc/paper/6125-improved-techniques-for-training-gans.pdf.
 Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems (NIPS), 2015. URL http://papers.nips.cc/paper/5775-learning-structured-output-representation-using-deep-conditional-generative-models.pdf.
 Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In Advances in Neural Information Processing Systems, pp. 3738–3746, 2016.
 Jakub M. Tomczak and Max Welling. VAE with a VampPrior. CoRR, abs/1705.07120, 2017. URL http://arxiv.org/abs/1705.07120.
 Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor S. Lempitsky. Texture networks: Feedforward synthesis of textures and stylized images. In Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016. URL http://arxiv.org/abs/1603.03417.
 Tom White. Sampling generative networks: Notes on a few effective techniques. arXiv preprint arXiv:1609.04468, 2016. URL https://arxiv.org/abs/1609.04468.