LogicGAN: Logic-guided Generative Adversarial Networks
Generative Adversarial Networks (GANs) are a revolutionary class of Deep Neural Networks (DNNs) that have been successfully used to generate realistic images, music, text, and other data. However, GAN training is notoriously resource-intensive and presents many challenges. Further, a potential weakness of GANs is that discriminator DNNs typically provide only one value (loss) of corrective feedback to generator DNNs (namely, the discriminator's assessment of the generated example). By contrast, we propose a new class of GANs, which we refer to as LogicGANs, that leverage recent advances in (logic-based) explainable AI (xAI) systems to provide a "richer" form of corrective feedback from discriminators to generators. Specifically, we modify the gradient descent process using xAI systems that specify the reason why the discriminator made the classification it did, thus providing the richer corrective feedback that helps the generator better fool the discriminator. Using our approach, we show that LogicGANs learn much faster on MNIST data, achieving an improvement in data efficiency of 45% in the single-class and 12.73% in the multi-class setting over standard GANs, while maintaining the same quality as measured by Fréchet Inception Distance. Further, we argue that LogicGAN gives users greater control over how models learn than standard GAN systems.
Generative Adversarial Networks (GANs), introduced only a few short years ago, have already had a revolutionary impact on generating data of varied kinds such as images, text, music, and videos. The critical insight behind a GAN is the idea of a corrective feedback loop from a deep neural network (DNN) called the discriminator back to a generator. However, a weakness of the standard GAN architecture is that the discriminator DNN provides only one real-numbered value of corrective feedback to the generator DNN. That is, the discriminator simply informs the generator whether or not the data it generated was acceptable, in the form of a single loss value. The downside of this approach is that the generator has to somehow figure out why the discriminator rejected its generated data, and then make corrections as it learns how to fool the discriminator. Hence, the research questions we address in this paper are the following: is it possible to provide "richer" corrective feedback from the discriminator back to the generator? If so, does it enable GANs to learn faster and with greater accuracy?
1.1 An Overview of LogicGANs
To answer the above-mentioned research questions, we propose a new class of GANs we refer to as LogicGANs.
A high-level architectural overview of our LogicGAN system is given in Figure 1. Consider the problem of training a GAN with the aim of producing images of digits. Initially, the untrained generator is given a noise sample from a prior noise distribution and produces an example that is then given to the discriminator. The loss is calculated, and then the generated image, the discriminator, and the discriminator's output are fed into an xAI system, such as SHAP, which produces an explanation of why the image resulted in that loss value. This explanation is then used to guide the training of the generator (see Section 4 for more details).
A common analogy for GAN training is that of a counterfeiter (the generator) and a detective (the discriminator) playing an adversarial game where the counterfeiter makes a fake and the detective tries to tell if it’s real or not. Over the training process, the detective and counterfeiter both get better at their jobs, with the end goal being that the counterfeiter is so proficient that their fakes can pass for the real thing. To extend this analogy to the LogicGAN setting, our method works by using an expert in the field (the xAI system) to help improve the counterfeiter. When the detective recognizes a fake, the expert tells the counterfeiter what parts of the fake tipped off the detective. The counterfeiter is thus able to learn better why the detective detected a fake, and make better decisions to avoid pitfalls over future learning iterations.
SHAP xAI Explanations: The explanation produced by SHAP takes the form of an "explanation matrix" M, wherein, for every feature in an example (e.g., a pixel in an input image), a value in the range [-1, 1] is assigned to the corresponding cell of M. If a feature (more precisely, the corresponding cell in M) is assigned a value of 0 (or close to 0), then that pixel had no impact on the classification decision made by the discriminator. If a feature is assigned a value of 1 (or close to 1), then that feature is very important and "determinative" in the discriminator's classification. Finally, if a feature is assigned a value of -1 (or close to -1), then that feature "hurt" the classification (more precisely, if the feature were to be changed, then the discriminator's confidence in its classification would improve).
xAI-guided Backpropagation: The matrix M generated by the xAI system SHAP is then used in a modified backpropagation algorithm (see Algorithm 1) to update the weights of the generator, as follows. Traditionally, in a GAN, the weights of the generator are modified by first computing the gradient of the loss with respect to the generator's output and then applying the chain rule. We modify this algorithm by first computing the explanation matrix M (via the xAI system) and then calculating the Hadamard (element-wise) product of M and the gradient of the loss with respect to the generator output. More precisely, the explanation matrix is used to mask the gradient, and consequently the "importance" of the pixels that went into the discriminator's classification is taken into account when the weights of the generator are modified during backpropagation. Using our approach, one can foresee that users might be able to augment such explanation matrices with specifications that spell out relationships (using logical formulas) between the update methods for the various weights of a generator.
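As a concrete illustration, the masking step described above can be sketched as follows. This is a minimal NumPy sketch of our own; the function name and the toy 2x2 values are illustrative, not taken from the paper's code.

```python
import numpy as np

def xai_guided_gradient(grad, explanation):
    """Mask the generator-output gradient with a normalized xAI explanation.

    `grad` is the gradient of the loss with respect to the generated image,
    and `explanation` is the raw SHAP matrix M with values in [-1, 1].
    Following the description above, we take absolute values, normalize to
    [0, 1], and apply a Hadamard (element-wise) product.
    """
    mask = np.abs(explanation)
    peak = mask.max()
    if peak > 0:                 # guard against an all-zero explanation
        mask = mask / peak
    return grad * mask           # Hadamard product: element-wise masking

# Toy 2x2 "image": pixels the discriminator found important keep their
# gradient; irrelevant pixels (explanation near 0) are damped.
g = np.array([[0.5, -0.2], [0.3, 0.8]])
m = np.array([[1.0, 0.0], [-0.5, 0.25]])
masked = xai_guided_gradient(g, m)
```

Note that a pixel with explanation value -1 is masked the same as one with value +1: both are maximally influential, which is why the absolute value is taken (see Section 3).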
We would like to emphasize that the standard gradient descent method simply moves toward the greatest decrease in loss over a high-dimensional parameter space, while, by contrast, xAI-guided gradient descent algorithms can give users greater control over the learning process.
Our key contribution is a logic- or xAI-guided gradient descent method (and a resultant GAN we refer to as LogicGAN) that utilizes xAI systems to focus the gradient descent algorithm on weights that are determined to be most influential by the xAI system (see Section 4.2). In Section 6.1, we discuss how xAI-guided gradient descent methods can give those training models greater control over the learning process.
We performed experiments to evaluate the efficiency and quality (as measured by Fréchet Inception Distance, abbreviated FID) of LogicGAN relative to standard GANs. We show that on the single-class MNIST digit problem, LogicGAN achieves a 45% improvement in data efficiency compared to standard GANs (see Section 5.3), while maintaining the same level of quality as measured by FID score.
We extend our experiments to the multi-class setting. LogicGANs outperform standard GANs with a 12.73% improvement in data efficiency, while maintaining the same level of quality as measured by FID score in this setting as well (see Section 5.4).
2 Prior Work
Goodfellow et al. were the first to introduce GANs. Since then, GANs have continued to be a popular research topic, with many variants developed. GANs can be broadly classified based on their architecture [24, 20, 5, 17] and the type of objective function used [19, 1, 9, 18, 7, 26]. To the best of our knowledge, no existing GAN uses xAI feedback for training, making LogicGAN the first of its kind. We note that the approach exemplified in LogicGANs is independent of the architecture and the type of objective function used, and can therefore be applied to make any GAN xAI-guided.
ADAGRAD is an optimization algorithm that maintains a separate learning rate for each parameter of a DNN based on how frequently that parameter is updated. By contrast, LogicGAN uses xAI feedback to determine how generator parameters are updated (see Section 4.2).
Explainable AI (xAI) Systems: As AI models become more complex, there is an increasing demand for interpretability or explainability of these models from decision makers, stakeholders, and lay users. In addition, one can make a strong case for a scientific need for explainable AI. Consequently, there has been considerable interest in xAI systems aimed at creating interpretable AI models that enable human understanding of AI systems.
One way to define xAI systems is as follows: they are algorithms that, given a model and a prediction, assign to each feature of an input a value that measures how important that feature is to the prediction. Several xAI systems have been applied to DNNs with the goal of improving our understanding of how these systems learn. They do so in a variety of ways; approaches based on formal logic, game theory, and gradient measures have all been developed. These systems output explanations in a variety of forms, such as ranked lists of features, select subsets of the feature set, and values weighting the importance of the features of the input data.
3 The DeepSHAP xAI System
In our work, we use DeepSHAP, one of the most widely used xAI systems. DeepSHAP combines the DeepLIFT framework with Shapley value explanations. Introduced in 2017, it is well suited to neural network applications and is freely available. DeepSHAP is an efficient Shapley value estimation algorithm: it uses linear composition rules and backpropagation to calculate a compositional approximation of feature importance values.
Shapley Value Estimation: Classic Shapley regression values are intended for linear models, where the values represent feature importance. Values are calculated by retraining models on every subset of features and valuing each feature based on the predictions of models with and without that feature. Unfortunately, this method not only requires significant retraining but also requires a number of separate models that is exponential in the number of features, in order to cover all combinations of included features. Methods that approximate Shapley values by iterating only over local feature regions, approximating importance using samples from the training dataset, and other approaches have been proposed to reduce this computational effort.
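The exponential procedure described above can be made concrete on a toy model. The sketch below is our own illustration (the helper `exact_shapley` is not part of DeepSHAP): it enumerates all feature subsets, with features outside a coalition held at a baseline value, to compute exact Shapley values. For a linear model with a zero baseline, the Shapley value of each feature reduces to its coefficient times its value.

```python
import itertools
import math

import numpy as np

def exact_shapley(predict, x, baseline):
    """Exact Shapley values by enumerating all 2^n feature subsets.

    `predict` maps a feature vector to a scalar; features outside the
    coalition S are set to their `baseline` value.  This is the exponential
    computation that DeepSHAP approximates efficiently.
    """
    n = len(x)
    phi = np.zeros(n)

    def value(S):
        z = baseline.copy()
        for j in S:
            z[j] = x[j]
        return predict(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in itertools.combinations(others, k):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi

# Linear model f(x) = 2*x0 + 3*x1 with a zero baseline.
f = lambda z: 2 * z[0] + 3 * z[1]
phi = exact_shapley(f, np.array([1.0, 1.0]), np.array([0.0, 0.0]))
```

The values also satisfy the summation property used by DeepLIFT below: they add up to the difference between the model output on the input and on the baseline.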
The DeepLIFT xAI system: DeepLIFT uses a set of reference inputs, and the consequent model outputs, to identify the importance of features. The difference between an output and a reference output, denoted $\Delta o$, is explained in terms of the differences $\Delta x_i$ between the corresponding inputs and reference inputs. The reference input is chosen by the user based on domain knowledge to represent a typical uninformed state; for example, in the MNIST setting, the reference value is an image from the dataset. Each feature is given a value $C_{\Delta x_i \Delta o}$, which measures the effect on the model output of that feature taking the reference value instead of its original value. The system enforces a summation property: the per-feature contributions sum to the change in the model output relative to the reference, $\sum_{i=1}^{n} C_{\Delta x_i \Delta o} = \Delta o$.
Our xAI-guided gradient descent uses the DeepSHAP algorithm to generate feature importance values in the range [-1, 1]. We then take the absolute value of the explanation matrix and normalize it to the range [0, 1] to use as our mask. Taking the absolute value helps focus the learning process on the most influential features, regardless of whether those features were beneficial or harmful to the classification. We found this approach significantly more effective than only normalizing the xAI results to [0, 1].
4 Detailed Overview of LogicGAN Systems
In this section, we provide a detailed overview of our LogicGAN system and contrast it with standard GAN architectures and the way they are trained. The intuition behind the xAI-guided generator training process is that the xAI system acts as a guide, shaping the gradient descent in a way that focuses generator training on the features that the discriminator recognizes as most important. In our implementation we use the DeepSHAP xAI system, though we note that LogicGAN is xAI agnostic and any system that gives a measure for feature importance can be used based on individual needs.
4.1 Generator Training in Standard GANs
Briefly, standard GAN architectures consist of a pair of DNNs, namely a discriminator and a generator. The standard training method alternates cycles of discriminator and generator training. Initially, the discriminator is trained on a mini-batch of examples drawn both from training data from the target distribution and from data produced by the untrained generator (which initially is expected to be just noise). These examples are correctly labeled as "real" (if they were from the training set) or "generated" (if they came from the generator).
Subsequently, the generator is trained as follows (refer to the non-highlighted part of Algorithm 1): a batch of noise samples is drawn from the noise prior and passed through the generator to get a batch of generated examples (line 1). This batch is labeled as "real" and given to the discriminator, where the loss is computed (line 2) and then used to update the generator parameters (the corrective-feedback step). More precisely, the discriminator's gradient is computed from its parameters and loss (line 3) and used to find the gradient at the generated example (line 4). The gradients of all layers in the generator are then computed from this gradient (line 10). Finally, the parameters of the generator are updated (line 12), completing one iteration of training.
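Assuming a PyTorch-style setup, one standard generator update of this kind might look as follows. The function, toy network sizes, and names below are our illustration, not the paper's code.

```python
import torch
import torch.nn as nn

def generator_step(G, D, opt_G, batch_size, noise_dim):
    """One standard (non-xAI) generator update: sample noise, generate a
    batch, label it "real", and backpropagate the discriminator's loss
    through D into G's parameters."""
    criterion = nn.BCELoss()
    z = torch.randn(batch_size, noise_dim)          # sample the noise prior
    fake = G(z)                                     # generated batch
    target = torch.ones(batch_size, 1)              # label the batch "real"
    loss = criterion(D(fake), target)               # discriminator's verdict
    opt_G.zero_grad()
    loss.backward()       # gradients flow through D back into G
    opt_G.step()          # only G's parameters are updated here
    return loss.item()

# Tiny stand-in networks just to exercise the step.
G = nn.Sequential(nn.Linear(4, 8), nn.LeakyReLU(0.2), nn.Linear(8, 2), nn.Tanh())
D = nn.Sequential(nn.Linear(2, 8), nn.LeakyReLU(0.2), nn.Linear(8, 1), nn.Sigmoid())
opt = torch.optim.Adam(G.parameters(), lr=2e-4)
loss_value = generator_step(G, D, opt, batch_size=5, noise_dim=4)
```

Note that the single scalar `loss` is the only feedback the generator receives in this standard setting, which is precisely what the xAI-guided variant enriches.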
In subsequent iterations, the discriminator receives mini-batches of real examples and of examples from the generator trained in the previous iterations. The ideal termination condition for this process is reached when the generated examples are of high quality and the discriminator is unable to distinguish between "real" and "generated" examples.
4.2 xAI-guided Generator Training in LogicGANs
We start our description of xAI-guided training by first observing that, in the standard GAN setting, the discriminator gives only a single value of corrective feedback to the generator per generated image. The entire point of xAI-guided training is to augment this corrective feedback with the "reason" for the discriminator's decision, as determined by an xAI system such as DeepSHAP.
During our xAI-guided generator training process, the backpropagation algorithm is modified to focus generator training on the features most meaningful to the discriminator's prediction (refer to the highlighted part of Algorithm 1). After propagating the loss through the discriminator to find the gradient at the generated example, we use an xAI system to compute the explanation matrix M (line 6). M is a set of real values in which greater values mark features that are more important to the discriminator's prediction. The Hadamard (element-wise) product of M and the gradient is calculated to get the modified gradient (line 7). Intuitively, the explanation acts as a mask for the gradient, focusing it on the most important features and limiting it on the less important ones. From there, the gradients of the generator are calculated from the modified gradient (line 8) and the parameters are then updated (line 12).
4.3 LogicGAN Implementation Details
We implemented LogicGAN using PyTorch 1.3, an open-source machine learning framework popular in deep learning research. Our xAI system is SHAP 0.31.0, which implements DeepSHAP. To provide the reference inputs to DeepSHAP, we created a background-selector system that provides a sample of relevant digits for both the single- and multi-class settings. We process the explanation matrix M generated by SHAP by taking the absolute value and normalizing the matrix to create a mask with values in the range [0, 1]. PyTorch's autograd package handles automatic differentiation of all tensors. To provide xAI-guided feedback to the generator, we overrode the register_backward_hook function, normally used to inspect gradients, and modified the gradients of the generator's output layer by taking their Hadamard (element-wise) product with the computed mask. This modified gradient is then backpropagated through the generator via autograd. This allows our xAI-guided gradient descent method to be added to a GAN with a few lines of code.
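A minimal sketch of this gradient modification is shown below, with one deliberate simplification: instead of the module-level `register_backward_hook` used in the paper, we use the tensor-level `Tensor.register_hook`, which applies the same Hadamard masking to the gradient arriving at the generator's output. All names, the toy networks, and the stand-in mask are our assumptions.

```python
import torch
import torch.nn as nn

def masked_generator_step(G, D, opt_G, mask, batch_size, noise_dim):
    """Generator update with the gradient at G's output masked element-wise
    by an xAI explanation before autograd propagates it back through G."""
    criterion = nn.BCELoss()
    z = torch.randn(batch_size, noise_dim)
    fake = G(z)
    # The hook multiplies the incoming gradient by the mask (Hadamard product).
    fake.register_hook(lambda grad: grad * mask)
    loss = criterion(D(fake), torch.ones(batch_size, 1))
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()
    return loss.item()

G = nn.Sequential(nn.Linear(4, 8), nn.LeakyReLU(0.2), nn.Linear(8, 3), nn.Tanh())
D = nn.Sequential(nn.Linear(3, 8), nn.LeakyReLU(0.2), nn.Linear(8, 1), nn.Sigmoid())
opt = torch.optim.Adam(G.parameters(), lr=2e-4)
# Stand-in mask (in LogicGAN this is |SHAP| normalized to [0, 1]); a zero
# entry blocks learning from that output feature entirely.
mask = torch.tensor([1.0, 0.5, 0.0])
loss_value = masked_generator_step(G, D, opt, mask, batch_size=5, noise_dim=4)
```

Because autograd applies the hook before continuing backpropagation, all of G's layer gradients are computed from the masked gradient, matching lines 7-8 of Algorithm 1.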
5 Experimental Results
We performed extensive experimental evaluation of our LogicGAN implementation, comparing against standard GANs.
5.1 Experimental Setup
All our experiments were performed on the MNIST dataset, a collection of 60,000 28x28 grayscale images of handwritten digits. We performed experiments in both single-class and multi-class settings, using the same architecture for both the standard GAN and LogicGAN models. The generators are fully connected networks with 3 hidden layers; they take 100 values sampled from a normal distribution as input and return a 28x28 grayscale image. Each hidden layer uses the Leaky ReLU activation, and the output layer uses tanh. The discriminators are fully connected networks with 3 hidden layers; they take a 28x28 grayscale image as input and return a prediction score. Each hidden layer uses Leaky ReLU, and the output layer uses the sigmoid activation. A dropout rate of 0.3 is used during training, and the batch size is 100. The Adam optimizer is used for both generator and discriminator training, with the learning rate varied between low (0.00002) and high (0.0002). We ran experiments on Amazon's SageMaker platform, on an ml.p2.xlarge instance with a single NVIDIA K80 GPU with 12 GB of RAM.
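For concreteness, the described architecture could be sketched in PyTorch as follows. The hidden-layer widths and the LeakyReLU slope are our assumptions, since the text does not state them.

```python
import torch
import torch.nn as nn

def build_generator(noise_dim=100, img_side=28, hidden=(256, 512, 1024)):
    """Fully connected generator: 3 hidden layers with Leaky ReLU, tanh output."""
    layers, prev = [], noise_dim
    for h in hidden:
        layers += [nn.Linear(prev, h), nn.LeakyReLU(0.2)]
        prev = h
    layers += [nn.Linear(prev, img_side * img_side), nn.Tanh()]
    return nn.Sequential(*layers)

def build_discriminator(img_side=28, hidden=(1024, 512, 256), dropout=0.3):
    """Fully connected discriminator: 3 hidden layers with Leaky ReLU and
    dropout 0.3, sigmoid output score."""
    layers, prev = [], img_side * img_side
    for h in hidden:
        layers += [nn.Linear(prev, h), nn.LeakyReLU(0.2), nn.Dropout(dropout)]
        prev = h
    layers += [nn.Linear(prev, 1), nn.Sigmoid()]
    return nn.Sequential(*layers)

G, D = build_generator(), build_discriminator()
imgs = G(torch.randn(100, 100))   # batch size 100, 100-dim normal noise
scores = D(imgs)                  # prediction score per image, in (0, 1)
```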
5.2 Evaluation Criteria
Evaluating GANs remains an open problem, as no single set of metrics has been identified as universally applicable. However, several domain-dependent metrics are widely used in the current GAN literature, such as Inception Score (IS), Mode Score (MS), and Fréchet Inception Distance (FID); an overview of these metrics, along with their pros and cons, can be found in the literature. Based on a thorough survey of metrics for the image domain, we developed the following criteria in order to perform a fair comparison of LogicGANs against standard GANs:
Quality of Generated Images: This involves measuring the similarity of the generated images with respect to the training images, as well as measuring diversity to detect mode collapse. We opted to use Fréchet Inception Distance (FID) to measure quality, since it has been shown to be consistent with human evaluation of quality. FID was introduced by Heusel et al. to address shortcomings of the Inception Score (IS), such as the latter's inability to detect intra-class mode dropping and its vulnerability to noise. FID was used as the primary metric by Lucic and the team at Google Brain to compare various GANs; moreover, those authors argue that GANs can be objectively and fairly compared by measuring FID scores for fixed computational budgets. We go a step further in our evaluation by comparing the FID scores of LogicGAN and standard GANs at each training epoch to get a sense of how the quality of their images changes during training, which helps us better understand how xAI-guided gradient descent affects GAN training. In order to apply FID to MNIST, we use a LeNet-like convolutional classifier, consistent with prior work.
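For reference, FID is the Fréchet distance between two Gaussians fitted to classifier features of the real and generated images (Inception features originally; LeNet features here). The closed form is $\|\mu_1 - \mu_2\|^2 + \mathrm{Tr}(\Sigma_1 + \Sigma_2 - 2(\Sigma_1 \Sigma_2)^{1/2})$. The sketch below is our own implementation of that formula; in practice the means and covariances come from feature activations.

```python
import numpy as np

def _psd_sqrt(a):
    """Matrix square root of a symmetric positive semi-definite matrix."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)       # guard against tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2).

    Tr((s1 s2)^{1/2}) is computed as Tr((s1^{1/2} s2 s1^{1/2})^{1/2}),
    which keeps every intermediate matrix symmetric PSD.
    """
    diff = mu1 - mu2
    s1_half = _psd_sqrt(sigma1)
    covmean_tr = np.trace(_psd_sqrt(s1_half @ sigma2 @ s1_half))
    return diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * covmean_tr

# Identical Gaussians have distance 0; shifting the mean by d adds d^2.
mu, sigma = np.zeros(2), np.eye(2)
same = frechet_distance(mu, sigma, mu, sigma)
shifted = frechet_distance(mu, sigma, mu + np.array([3.0, 0.0]), sigma)
```

Lower FID thus directly reflects how closely the generated feature statistics match those of the training data.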
Data Efficiency: We quantify data efficiency in terms of the training cycles required for convergence. Kernel Inception Distance (KID) has been shown to be an improved measure of GAN convergence. At a high level, KID computes the squared Maximum Mean Discrepancy (MMD) between the Inception representations of generated and training samples. KID has been shown to address over-fitting and over-training of GANs and is therefore a good metric for determining convergence. Here, we deem a GAN converged if the mean KID score does not vary by more than 1 for 10 consecutive epochs.
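The stopping rule above could be coded as follows; this is a sketch of our own (the function name and the sample KID trace are illustrative).

```python
def has_converged(kid_scores, window=10, tolerance=1.0):
    """Convergence test described above: the per-epoch mean KID score varies
    by no more than `tolerance` over the last `window` consecutive epochs."""
    if len(kid_scores) < window:
        return False
    recent = kid_scores[-window:]
    return max(recent) - min(recent) <= tolerance

# A trace that drops steadily and then plateaus within the tolerance band.
kid_trace = [10, 9, 8, 7, 6, 5, 4, 3] + [2.5] * 10
converged = has_converged(kid_trace)
```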
Training Time: We also measure the time required for convergence, to identify the overhead that the xAI system adds to LogicGAN. We further report the time spent in the xAI system separately, to better understand its effect on LogicGAN. We use the same definition of convergence as above.
5.3 MNIST Single Class Results
We performed testing on the MNIST dataset, first limiting the setting to a single class (the digit 7). We observed during preliminary testing that at low learning rates, LogicGAN had FID scores similar to the standard GAN and required the same number of epochs to converge, although it took more wall-clock time due to the overhead of the xAI system. At high learning rates, however, the standard GAN frequently suffered mode collapse, while LogicGAN was observed to converge and learn faster without collapsing. We therefore picked the best version of each, i.e., LogicGAN with a high learning rate and the standard GAN with a low learning rate, and compared the two.
The results of the data efficiency metrics and the time required for convergence for LogicGAN and the standard GAN are given in Tables 1 and 2, respectively. To converge, the standard GAN required 20 epochs while LogicGAN needed only 11 epochs, a 45% improvement in data efficiency. However, the standard GAN needed 243.2 seconds of run time while LogicGAN took 994.4 seconds, of which the xAI system accounted for 642.3 seconds. We believe improvements in the efficiency of xAI systems will help reduce this difference. LogicGAN was also observed to produce good-quality images faster than the standard GAN. Figures 3 and 5 show images generated by LogicGAN and the standard GAN at epoch 1, batch 15/63; identifiable "7"s generated by LogicGAN are clearly visible there. The same figures also show the images generated by both GANs at convergence.
The FID scores for LogicGAN were observed to be very low (around 0.2) from epoch 1, while the standard GAN required around 6 epochs to reach that value (see Figure 7). Thereafter, both remained within a range of 0.1 until convergence. This reaffirms that LogicGANs learn quicker and create better-quality images sooner than standard GANs. Overall, LogicGAN improved data efficiency compared to standard GANs while maintaining the same quality as measured by the FID score.
5.4 MNIST Multi Class Results
We next extended LogicGAN to the multi-class setting by training a separate classifier to provide relevant background images to the xAI system. We used the entire MNIST dataset as training data for this experiment, with the same parameters as in the single-class setting.
Again, Tables 1 and 2 respectively outline the data efficiency metrics and the time required for convergence for LogicGAN and the standard GAN. We found that the standard GAN took 55 epochs to converge while LogicGAN took 48 epochs, a 12.73% improvement in data efficiency. However, the standard GAN required 27,735.27 seconds while LogicGAN needed 88,294.15 seconds (of which the xAI system required 56,885.49 seconds). Again, we note that this gap should shrink as xAI systems become more efficient. Figures 7 and 9 show the images generated by LogicGAN and the standard GAN, respectively, at convergence; a visual inspection yielded no significant differences between them.
As for quality measures, the standard GAN was observed to start with a lower FID score, but after epoch 2 both GANs produce similar scores (see Figure 9). We inspected the images generated by the standard GAN in those first two epochs and found them close to random noise. Overall, LogicGAN is more data efficient than the standard GAN while maintaining the same quality scores in this setting as well.
5.5 Discussion of Experimental Results
We performed extensive experiments in both single-class and multi-class settings using the MNIST dataset and showed that LogicGAN is significantly more data efficient compared to standard GANs while maintaining the same quality score. A crucial finding of our experiments is that LogicGAN is able to learn effectively even at higher learning rates, while standard GAN suffers from mode collapse with the same learning rates. There may be settings where a higher learning rate can be crucial, making this a suitable avenue for future research.
Table 1: Epochs required for convergence.

| Class Setting | Standard GAN | LogicGAN  |
|---------------|--------------|-----------|
| Single        | 20 epochs    | 11 epochs |
| Multi         | 55 epochs    | 48 epochs |
6 Future Work
Our results suggest LogicGAN can be leveraged in settings where data efficiency is important, such as when training data is limited, or in privacy-conscious settings where training with less data allows for stronger privacy guarantees.
6.1 Controlling How Models Learn
While standard GANs provide only one value of corrective feedback (the loss) to the generator, there are many ways this feedback is used. For instance, GANs vary the type of loss function [19, 1, 9, 18, 7, 26] and the choice of optimizer (such as Adam, Stochastic Gradient Descent, etc.) to control how the model learns.
Similarly, we believe that the feedback provided by the xAI system in LogicGAN, which is richer than the loss value alone, allows for greater control over this learning process. This control can be exercised in various ways: selecting the type of xAI system to use, varying the parameters of the chosen xAI system, offsetting the mask to give more weight to the xAI feedback, alternating between xAI-guided and standard generator training, and selecting methods to combine xAI feedback with the loss. We argue that LogicGANs are a powerful way for users to gain greater control over the training process of GAN models, and that there are many avenues, applications, and extensions of this idea worth exploring in the future.
Table 2: Training time (in seconds) required for convergence; "LogicGAN -xAI" excludes and "LogicGAN +xAI" includes the time spent in the xAI system.

| Class Setting | GAN       | LogicGAN -xAI | LogicGAN +xAI |
|---------------|-----------|---------------|---------------|
| Single        | 243.2     | 352.1         | 994.4         |
| Multi         | 27,735.27 | 31,408.66     | 88,294.15     |
In this paper, we introduced LogicGANs, a class of generative adversarial networks (GANs) that use an explainable AI (xAI) system to provide richer feedback from the discriminator to the generator, enabling more guided training and greater control. We overviewed xAI systems and standard GAN training, then introduced our xAI-guided generator training algorithm, contrasting it with standard generator training. Our experiments on the MNIST dataset show that LogicGAN improves data efficiency by 45% in the single-class and 12.73% in the multi-class setting compared to standard GANs, while maintaining the same quality as measured by the Fréchet Inception Distance. To the best of our knowledge, LogicGAN is the first GAN to utilize xAI feedback for training. We further provided insights in Section 6.1 on various ways to integrate this feedback into LogicGAN. Currently, xAI systems are developed and viewed as "explainable" to humans. While this has its uses, we argue that there is great potential in applying these systems to "explain" to AI. Ultimately, LogicGAN may enable more control over the GAN learning process, allowing for better performance as well as a better understanding of GAN learning.
- Joint First Authors
- An implementation of LogicGAN, which can be used to view and replicate our results, can be found at http://tiny.cc/LogicGAN.
References
- (2017) Wasserstein GAN. arXiv preprint arXiv:1701.07875.
- (2018) Demystifying MMD GANs. arXiv preprint arXiv:1801.01401.
- (2017) Explanation and justification in machine learning: a survey. In IJCAI-17 Workshop on Explainable AI (XAI), Vol. 8, pp. 1.
- (2019) Pros and cons of GAN evaluation measures. Computer Vision and Image Understanding 179, pp. 41–65.
- (2016) InfoGAN: interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2172–2180.
- (2011) Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12 (Jul), pp. 2121–2159.
- (2015) Training generative neural networks via maximum mean discrepancy optimization. arXiv preprint arXiv:1505.03906.
- (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence and K. Q. Weinberger (Eds.), pp. 2672–2680.
- (2017) Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, pp. 5767–5777.
- (2017) GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626–6637.
- (1990) The Hadamard product. In Proc. Symp. Appl. Math, Vol. 40, pp. 87–169.
- (2019) Abduction-based explanations for machine learning models. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 1511–1519.
- (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- (2010) MNIST handwritten digit database. http://yann.lecun.com/exdb/mnist/
- (2018) Are GANs created equal? A large-scale study. In Advances in Neural Information Processing Systems, pp. 700–709.
- (2017) A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, pp. 4765–4774.
- (2015) Adversarial autoencoders. arXiv preprint arXiv:1511.05644.
- (2017) Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802.
- (2016) Unrolled generative adversarial networks.
- (2014) Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784.
- (2019) Recent progress on generative adversarial networks (GANs): a survey. IEEE Access 7, pp. 36322–36333.
- (2017) Automatic differentiation in PyTorch.
- (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8024–8035.
- (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
- (2017) Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning, Vol. 70, pp. 3145–3153.
- (2016) Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126.