Metric Learning-based Generative Adversarial Network



Generative Adversarial Networks (GANs), as a framework for estimating generative models via an adversarial process, have attracted huge attention and have proven to be powerful in a variety of tasks. However, training GANs is well known for being delicate and unstable, partially caused by the sigmoid cross entropy loss function for the discriminator. To overcome this problem, many researchers have directed their attention to various ways of measuring how close the model distribution and real distribution are, and have applied different metrics as their objective functions. In this paper, we propose a novel framework for training GANs based on distance metric learning, which we call Metric Learning-based Generative Adversarial Network (MLGAN). The discriminator of MLGANs can dynamically learn an appropriate metric, rather than a static one, to measure the distance between generated samples and real samples. Afterwards, MLGANs update the generator under the newly learned metric. We evaluate our approach on several representative datasets, and the experimental results demonstrate that MLGANs can achieve superior performance compared with several existing state-of-the-art approaches. We also empirically show that MLGANs can increase the stability of training GANs.


Generative Adversarial Networks (GANs) are a powerful class of deep generative models [\citeauthoryearGoodfellow et al.2014]. The basic strategy of GANs is to train a generative model and a discriminative model simultaneously via an adversarial process: the goal of the generator is to capture the data distribution whereas the discriminator tries to distinguish between the generated samples and real samples. GANs have attracted great attention due to their impressive performance on a variety of tasks, such as image generation [\citeauthoryearNguyen et al.2016], image super-resolution [\citeauthoryearLedig et al.2016] and semi-supervised learning [\citeauthoryearSalimans et al.2016]. Recent studies have also shown their great capabilities in feature extraction [\citeauthoryearRadford, Metz, and Chintala2015] and classification tasks [\citeauthoryearSalimans et al.2016].

Despite all the great progress, we should be aware that there are also several limitations of GANs. For example, it is well known that GANs are often hard to train, and much of the recent work has been devoted to finding ways of stabilizing training [\citeauthoryearGulrajani et al.2017]. In addition, Arjovsky et al. point out that: 1) in theory, one would expect to first train the discriminator to optimality and then update the generator; in practice, however, as the discriminator gets better, the updates to the generator get consistently worse; 2) a popular fix, updating the generator with the alternative $-\log D(G(z))$ gradient, is unstable because of the singularity at the denominator when the discriminator is accurate [\citeauthoryearArjovsky and Bottou2017].

Traditional approaches to generative modeling relied on maximizing likelihood; GANs can instead be viewed as minimizing the Jensen-Shannon (JS) divergence between the unknown data distribution and the generator’s distribution. As has been pointed out by several papers [\citeauthoryearArjovsky, Chintala, and Bottou2017] [\citeauthoryearMetz et al.2016] [\citeauthoryearQi2017], minimizing such an objective function can make GANs suffer from vanishing gradients and thus partially leads to the instability of GAN learning.

To resolve the aforementioned issue, some researchers have directed their attention to various ways of measuring how close the model distribution and real distribution are, and have tried different metrics as their objective functions to improve the training of GANs. For instance, some papers have found that applying the Wasserstein-1 metric [\citeauthoryearArjovsky, Chintala, and Bottou2017] or the energy distance [\citeauthoryearBellemare et al.2017] to GANs can enhance the performance of GANs as well as increase the stability of training. Inspired by their work, in this paper we propose an alternative approach whose discriminator can dynamically learn an appropriate metric, rather than a static one, to measure the distance between generated images and real images. Afterwards, we update the generator under the newly learned metric. To do so, we borrow ideas from distance metric learning, a field which is mainly concerned with learning a distance function tuned to a particular task. Basically, the responsibility of the “discriminator” is now to learn an appropriate metric, rather than to distinguish between real and fake samples; the generator, in turn, aims at minimizing the distance that has been learned by the “discriminator”. We hope that in this new adversarial framework, we can get better performance as well as more stability.

The contribution of this paper could be listed as follows:

  • We define a novel form of GAN named Metric Learning-based GAN (MLGAN) whose discriminator can dynamically learn a suitable metric and provide a reasonable objective function for the generator.

  • We empirically show that MLGANs have the capacity to stabilize the training of GANs and meanwhile achieve superior performance compared with several other existing state-of-the-art GAN models.

Related Work

Generative Adversarial Network

A generative algorithm models how the data was generated in order to categorize a signal. Generally, to train a generative model we first need to collect a large amount of data, and then train a model to generate data like it. Before GANs, there were several other generative models. For example, Restricted Boltzmann Machines (RBMs) [\citeauthoryearSmolensky1986] [\citeauthoryearHinton, Osindero, and Teh2006] have been used effectively in modeling distributions over binary-valued data and are the basis of many other deep generative models, such as Deep Belief Networks (DBNs) [\citeauthoryearHinton2009] and Deep Boltzmann Machines (DBMs) [\citeauthoryearSalakhutdinov and Hinton2009]. DBNs can be formed by stacking RBMs and optionally fine-tuning the resulting deep network with gradient descent and back-propagation, while DBMs are undirected graphical models whose component modules are also RBMs. The Variational Autoencoder (VAE) is another important generative model, which inherits the autoencoder architecture but makes strong assumptions concerning the distribution of latent variables [\citeauthoryearKingma and Welling2013].

In 2014, Goodfellow et al. proposed a new framework for estimating generative models via an adversarial process [\citeauthoryearGoodfellow et al.2014], which has attracted huge attention due to its promising results in many fields, like text-to-image synthesis [\citeauthoryearReed et al.2016] and image-to-image translation [\citeauthoryearIsola et al.2016]. Unlike the aforementioned deep generative models, GANs do not require any approximation method and offer much more flexibility in the definition of the objective function. Also, the goal of GANs, which is to generate data that is indistinguishable from real data by the discriminator, is highly aligned with the goal of producing realistic data. However, we should also be aware that training GANs is well known for being delicate and unstable, as we have mentioned before [\citeauthoryearArjovsky, Chintala, and Bottou2017].

To alleviate the problem, Arjovsky et al. propose the Wasserstein GAN (WGAN), which uses the Earth-Mover (also called Wasserstein-1) distance as its objective function [\citeauthoryearArjovsky, Chintala, and Bottou2017]. However, to enforce the Lipschitz constraint on the critic, Arjovsky et al. use a method called weight clipping to clamp the weights of the neural network to a fixed box. Gulrajani et al. argue that weight clipping in WGAN can lead to optimization difficulties [\citeauthoryearGulrajani et al.2017]. To resolve the issue, they add a gradient penalty to their objective function as an alternative way to enforce the Lipschitz constraint. In addition to WGAN, Mao et al. propose Least Squares GANs (LSGANs) [\citeauthoryearMao et al.2016], which adopt the least squares loss function for the discriminator, and Zhao et al. propose Energy-Based GANs (EBGANs) [\citeauthoryearZhao, Mathieu, and LeCun2016], which view the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions.

Different from the above variants of GANs, which could be viewed as minimizing a static divergence between the real distribution and the generator’s distribution, our method could dynamically learn a suitable metric to measure the difference between them and thus provide the generator with a more reasonable objective function.

Distance Metric Learning

The basic idea of distance metric learning is to find a distance metric such that the distance between data points in the same class is smaller than that between data points from different classes [\citeauthoryearYe, Zhan, and Jiang2016]. To achieve this goal, different methods use various criteria. For example, Xing et al. pose metric learning as a constrained convex optimization problem [\citeauthoryearXing et al.2003], and Goldberger et al. propose Neighborhood Components Analysis (NCA), whose main idea is to optimize a softmax version of the leave-one-out K-Nearest-Neighbor (KNN) score [\citeauthoryearGoldberger et al.2005]. There are many other methods, and more information about metric learning can be found in [\citeauthoryearBellet, Habrard, and Sebban2013] [\citeauthoryearKulis and others2013].

With the success of deep learning, deep metric learning has gained much popularity in recent years. Compared to previous distance metric learning approaches, deep metric learning learns a nonlinear embedding of the data by using deep neural networks. The approach of Chopra et al. [\citeauthoryearChopra, Hadsell, and LeCun2005] considers utilizing convolutional neural networks to learn a similarity metric using contrastive loss and Hoffer et al. propose the triplet network model which aims to learn useful representations by distance comparisons [\citeauthoryearHoffer and Ailon2014] .

Recently, Zieba et al. put forward a method to train a triplet network by using it as the discriminator in GANs [\citeauthoryearZieba and Wang2017]. They make use of the discriminator's good capability for representation learning to increase the predictive quality of the model. In contrast to their work, whose goal is to enhance the performance of models in deep metric learning, our method aims at improving the performance and stability of GANs.

Proposed Method

In this section, we first illustrate the symbols and definitions used in the remainder of the paper, and briefly introduce regular GANs and MMC, a well-known method in distance metric learning. We then describe the basic framework of our proposed method (MLGAN). Finally, we present two improvements that allow MLGANs to achieve better performance.

Figure 1: Real images of four datasets

Symbols and Definitions

We denote the generator as $G$ and the discriminator as $D$; in this paper they are both deep convolutional neural networks. To learn the generator’s distribution $p_g$ over data $x$, we define a prior on input noise variables $p_z(z)$, then represent a mapping to data space as $G(z)$. The discriminator $D$, on the other hand, outputs a vector $D(x)$ for each data point $x$. It should be noted that, compared to regular GANs, in this paper $D$ outputs a vector rather than a single scalar: $D(x)$ represents an embedding of $x$ instead of the probability that $x$ came from the real distribution rather than $p_g$. In this way, it may be a little inappropriate to call it a “discriminator”, but we keep the word for consistency.

During the training of MLGANs, for each epoch we sample a minibatch of $m$ real examples, denoted $\{x^{(1)}, \dots, x^{(m)}\}$, and also a minibatch of $m$ noise samples, denoted $\{z^{(1)}, \dots, z^{(m)}\}$.

Regular GANs

The GAN training strategy is to define a game between two competing networks. The generator network maps a source of noise to the input space. The discriminator network receives either a generated sample or a true data sample and must distinguish between the two. The generator is trained to fool the discriminator.

Formally, the game between the generator and the discriminator is the minimax objective:

$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))] \qquad (1)$
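As a toy illustration of the minimax objective (Equation 1), the game value can be estimated on minibatches of discriminator outputs. The numbers below are made up; a real $D$ would be a neural network producing probabilities:

```python
import math

# Minibatch estimate of the GAN value V(D, G): the mean of log D(x) on real
# samples plus the mean of log(1 - D(G(z))) on generated samples.
def gan_value(d_real, d_fake):
    term_real = sum(math.log(d) for d in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)
    return term_real + term_fake

# An uninformative discriminator (D = 0.5 everywhere) gives V = 2 * log(0.5);
# as D approaches perfection, the fake term diverges toward -infinity for G.
v_uninformative = gan_value([0.5, 0.5], [0.5, 0.5])
print(v_uninformative)  # -> 2 * log(0.5), about -1.386
```

The discriminator ascends this value while the generator descends it, which is exactly the adversarial game described above.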

As we have stated above, the main idea of MLGAN is to dynamically learn an appropriate metric, rather than a static one, to measure the distance between generated images and real images. After obtaining the metric, we update the generator under the learned metric. To do so, we borrow the idea from distance metric learning.

In the metric learning literature, the term “Mahalanobis distance” is often used to denote any distance function of the form

$d_A(x, y) = \sqrt{(x - y)^T A (x - y)} \qquad (2)$

where $A$ is some positive semi-definite matrix. Since $A$ is positive semi-definite, we factorize it as $A = L^T L$, and simple algebraic manipulations show that

$d_A(x, y) = \sqrt{(x - y)^T L^T L (x - y)} = \|Lx - Ly\|_2 \qquad (3)$

Thus, this generalized notion of a Mahalanobis distance exactly captures the idea of learning a global linear transformation.
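The equivalence between Equations 2 and 3 can be checked numerically; the matrix $L$ and the points below are arbitrary illustrative choices:

```python
import math

# With A = L^T L, the Mahalanobis distance d_A(x, y) equals the Euclidean
# distance between the linearly transformed points Lx and Ly.
L = [[2.0, 0.5],
     [0.0, 1.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def mahalanobis(x, y):
    d = [xi - yi for xi, yi in zip(x, y)]
    Ld = matvec(L, d)                         # L(x - y)
    return math.sqrt(sum(t * t for t in Ld))  # sqrt(d^T L^T L d)

def euclid_after_map(x, y):
    Lx, Ly = matvec(L, x), matvec(L, y)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(Lx, Ly)))

x, y = [1.0, 2.0], [3.0, -1.0]
assert abs(mahalanobis(x, y) - euclid_after_map(x, y)) < 1e-12
```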

As has been discussed before, there are several existing works on metric learning, and one of the most famous methods was proposed by Xing et al. [\citeauthoryearXing et al.2003], sometimes referred to as MMC. The main idea of MMC is to minimize the sum of distances between points that should be similar while maximizing the sum of distances between points that should be dissimilar. In MMC’s setting, they have some points $\{x_i\}$ and are given information that certain pairs of them are “similar”: $S = \{(x_i, x_j) : x_i \text{ and } x_j \text{ are similar}\}$.

A simple way of defining a criterion for the desired metric is to demand that pairs of points in $S$ have small squared distance between them. In order to ensure that $A$ does not collapse the dataset into a single point, they also add a constraint that the pairs of points in $DS$, which are known to be dissimilar, should be separated. This gives the following optimization problem for MMC:

$\min_A \sum_{(x_i, x_j) \in S} \|x_i - x_j\|_A^2 \quad \text{s.t.} \quad \sum_{(x_i, x_j) \in DS} \|x_i - x_j\|_A \geq 1, \quad A \succeq 0 \qquad (4)$
The authors utilize $\sum_{(x_i, x_j) \in DS} \|x_i - x_j\|_A$ in the constraint instead of the usual squared Mahalanobis distance, since the squared version would always yield a rank-one solution for $A$. The authors also discuss that, in the case where we want to learn a diagonal $A$, we can derive an efficient algorithm using the Newton-Raphson method. Define

$g(A) = \sum_{(x_i, x_j) \in S} \|x_i - x_j\|_A^2 - \log \Big( \sum_{(x_i, x_j) \in DS} \|x_i - x_j\|_A \Big) \qquad (5)$

It is straightforward to show that minimizing $g(A)$ (subject to $A \succeq 0$) is equivalent, up to a multiplication of $A$ by a positive constant, to solving the original problem.
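The diagonal-case objective $g(A)$ can be sketched numerically; the weight vector and the toy point sets below are illustrations, not data from the paper:

```python
import math

# g(A) for diagonal A: sum of squared distances over similar pairs minus the
# log of the summed (non-squared) distances over dissimilar pairs.
# `w` holds the diagonal entries of A.
def sq_dist(w, x, y):
    return sum(wi * (xi - yi) ** 2 for wi, xi, yi in zip(w, x, y))

def g(w, similar_pairs, dissimilar_pairs):
    within = sum(sq_dist(w, x, y) for x, y in similar_pairs)
    between = sum(math.sqrt(sq_dist(w, x, y)) for x, y in dissimilar_pairs)
    return within - math.log(between)

S  = [([0.0, 0.0], [0.1, 0.1])]   # pairs known to be similar
DS = [([0.0, 0.0], [2.0, 2.0])]   # pairs known to be dissimilar
print(g([1.0, 1.0], S, DS))
```

Minimizing this over the diagonal weights shrinks within-class distances while the log term keeps the dissimilar pairs from collapsing, mirroring the constrained problem above.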

Figure 2: Experimental results on MNIST
Figure 3: Experimental results on CelebA

Basic Framework of MLGAN

As we have stated above, MMC can be viewed as finding a rescaling of the data that replaces each point $x$ with $Lx$ and then applying the standard Euclidean metric to the rescaled data. This idea can be easily extended to nonlinear metric learning.

To learn a nonlinear metric, we can utilize artificial neural networks. To illustrate, Equation 3 can be changed to

$d(x, y) = \|f(x) - f(y)\|_2 \qquad (6)$

and in order to parameterize the mapping $f$, we can consider learning a deep neural network.
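The nonlinear metric can be sketched with a tiny fixed one-layer “network” standing in for $f$; the weights are arbitrary, and in MLGAN $f$ would be the learned discriminator:

```python
import math

# d(x, y) = ||f(x) - f(y)||, where f is a toy nonlinear map (tanh of a
# linear transform). A learned deep network would replace this fixed f.
W = [[1.0, -0.5],
     [0.3, 0.8]]

def f(x):
    return [math.tanh(sum(W[i][j] * x[j] for j in range(2))) for i in range(2)]

def nonlinear_dist(x, y):
    fx, fy = f(x), f(y)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(fx, fy)))

# The induced distance is a valid pseudo-metric: symmetric, zero when x == y.
assert nonlinear_dist([1.0, 2.0], [1.0, 2.0]) == 0.0
assert abs(nonlinear_dist([1.0, 0.0], [0.0, 1.0])
           - nonlinear_dist([0.0, 1.0], [1.0, 0.0])) < 1e-12
```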

In this paper, inspired by MMC and their objective function for the diagonal case, we propose a novel objective function for the discriminator of MLGAN.

First, we define

$L_w(D) = \sum_{i<j} \|D(x^{(i)}) - D(x^{(j)})\|_2^2 + \sum_{i<j} \|D(G(z^{(i)})) - D(G(z^{(j)}))\|_2^2 \qquad (7)$

where $\{x^{(1)}, \dots, x^{(m)}\}$ is the minibatch of real examples and $\{z^{(1)}, \dots, z^{(m)}\}$ is the minibatch of noise samples.

We can see that $L_w(D)$ represents the sum of distances within each class.

We also define

$L_b(D, G) = \sum_{i=1}^{m} \sum_{j=1}^{m} \|D(x^{(i)}) - D(G(z^{(j)}))\|_2 \qquad (8)$

where the sum ranges over all real–fake pairs in the minibatch.

Again, we can see that $L_b(D, G)$ represents the sum of distances between classes.

Finally, we can present the objective function for the discriminator:

$J_D = L_w(D) - \lambda \log L_b(D, G) \qquad (9)$

Here $\lambda$ is a hyper-parameter that balances the two terms.

On the other hand, the objective function for the generator is simply:

$J_G = L_b(D, G) \qquad (10)$
Basically, for the discriminator, minimizing the first term of the objective function implies that the real data in the minibatch should be similar to each other, and so should the fake data generated by $G$. Also, minimizing the second term of the objective function means that the real data should be dissimilar to the generated data. Based on this idea, the discriminator $D$, which is a deep neural network, embeds the original data into a new space so that, in the embedding space, the standard Euclidean distances between samples satisfy the aforementioned conditions. We have also tried several other variants of the objective function for both the discriminator and the generator, but we found that those variants led to worse performance.
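The within-class and between-class sums described above can be sketched on toy, already-embedded minibatches. Here `real` stands for the embeddings $\{D(x_i)\}$ and `fake` for $\{D(G(z_i))\}$; the numbers, the pair-summation details, and the lambda value are a plausible reading of the text, not the paper’s exact formulation:

```python
import math

def sq(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def within(real, fake):
    # sum of squared distances inside each class
    pairs = lambda pts: [(pts[i], pts[j]) for i in range(len(pts))
                         for j in range(i + 1, len(pts))]
    return (sum(sq(u, v) for u, v in pairs(real))
            + sum(sq(u, v) for u, v in pairs(fake)))

def between(real, fake):
    # sum of distances across the two classes
    return sum(math.sqrt(sq(u, v)) for u in real for v in fake)

real = [[0.0, 0.0], [0.1, 0.0]]   # tight real cluster
fake = [[1.0, 1.0], [1.1, 1.0]]   # tight fake cluster, far from the real one
lam = 1.0
j_d = within(real, fake) - lam * math.log(between(real, fake))  # discriminator
j_g = between(real, fake)                                       # generator
```

The discriminator drives the two clusters apart in embedding space, while the generator moves its samples so that the cross-class sum shrinks.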

In regular GANs, the goal of the generator is to fool the discriminator so that it cannot distinguish between real and generated samples. In this work, however, the generator is trained to generate samples that are close to the real data under the newly learned metric. Intuitively, in this way the discriminator can inform the generator where it should pay attention in order to correct itself, and the generator then tries to fix its mistakes based on the information provided by the discriminator.

To summarize, the main differences between regular GANs and MLGANs are as follows:

  • The objective function for regular GANs is Equation 1 whereas for MLGANs the objective functions are Equation 9 and Equation 10.

  • The discriminator of MLGAN does not have a softmax layer.

  • The discriminator of MLGAN outputs a real-valued vector for each data point rather than a single scalar.

  • When training the generator, MLGAN still needs to use the minibatch of real data.

The whole procedure of the proposed algorithm is illustrated in Algorithm 1. It should be noted that since we make some improvements to MLGANs, which are illustrated in a later part, the revised procedure may differ slightly from Algorithm 1, but the basic framework remains the same.

Input: The number of critic iterations per generator iteration $n_{\text{critic}}$, the batch size $m$, Adam hyper-parameters $\alpha, \beta_1, \beta_2$

1:  while $G$ has not converged do
2:     for $t = 1, \dots, n_{\text{critic}}$ do
3:        Sample a minibatch of $m$ real examples $\{x^{(i)}\}_{i=1}^m$
4:        Sample a minibatch of $m$ noise samples $\{z^{(i)}\}_{i=1}^m$
5:        $g_w \leftarrow \nabla_w J_D$ (Equation 9)
6:        $w \leftarrow$ Adam($g_w, w, \alpha, \beta_1, \beta_2$)
7:     end for
8:     Sample a minibatch of $m$ real examples $\{x^{(i)}\}_{i=1}^m$
9:     Sample a minibatch of $m$ noise samples $\{z^{(i)}\}_{i=1}^m$
10:     $g_\theta \leftarrow \nabla_\theta J_G$ (Equation 10)
11:     $\theta \leftarrow$ Adam($g_\theta, \theta, \alpha, \beta_1, \beta_2$)
12:  end while
Algorithm 1 Vanilla MLGAN

Here $w$ and $\theta$ denote the parameters of the discriminator and the generator, respectively.

Two Improvements to MLGANs

Since the discriminator does not have any softmax layer, we may give MLGANs too much “freedom”, as there is no constraint on the output value of $D$. In practice this does lead MLGANs to generate unsatisfactory results, as shown in the next section. To fix the issue, we propose two possible constraints that improve the performance of MLGANs; both achieve good results.

The first improvement we use is “weight clipping”, which has been used in WGAN. To be specific, we clamp the weights of the discriminator to a fixed box so that it can only output values in a certain range.
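Weight clipping itself is a one-line operation; a minimal sketch follows, where the threshold value and the weight matrix are illustrative only (the paper's actual clipping threshold is set in the experiments section):

```python
# After each discriminator update, clamp every weight into [-c, c].
def clip_weights(weights, c):
    return [[max(-c, min(c, w)) for w in row] for row in weights]

W = [[0.3, -2.0], [0.005, 7.5]]
print(clip_weights(W, 0.01))  # every entry now lies in [-0.01, 0.01]
```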

The second improvement we use is to add two terms called the “center penalty”, where we give the real and fake data two center vectors $c_r$ and $c_f$. The discriminator is punished if it learns an embedding that places an image far away from its center vector. The loss function of the “center penalty” is:

$L_c(D) = \sum_{i=1}^{m} \|D(x^{(i)}) - c_r\|_2^2 + \sum_{i=1}^{m} \|D(G(z^{(i)})) - c_f\|_2^2 \qquad (11)$

Therefore, the objective function for $D$ now turns into:

$J_D = L_w(D) - \lambda \log L_b(D, G) + \eta L_c(D) \qquad (12)$

where $\eta$ is a hyper-parameter weighting the penalty.
And the objective function for the generator will remain the same.
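The center penalty can be sketched on toy embeddings; the center vectors (denoted `c_r` for real and `c_f` for fake here) and all values below are illustrative:

```python
# Penalize embeddings of real samples that stray from the real center c_r,
# and embeddings of generated samples that stray from the fake center c_f.
def center_penalty(real_emb, fake_emb, c_r, c_f):
    sq = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    return (sum(sq(e, c_r) for e in real_emb)
            + sum(sq(e, c_f) for e in fake_emb))

real_emb = [[0.9, 1.1], [1.0, 1.0]]
fake_emb = [[-1.2, -0.8]]
penalty = center_penalty(real_emb, fake_emb, c_r=[1.0, 1.0], c_f=[-1.0, -1.0])
```

Adding this term to the discriminator loss bounds how far the embeddings can drift, playing a role analogous to weight clipping.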

Figure 4: Experimental results on SVHN


Datasets and Implementation Details

We trained MLGANs on four benchmark datasets: MNIST [\citeauthoryearLeCun1998], the CelebFaces Attributes Dataset (CelebA) [\citeauthoryearLiu et al.2015], Street View House Numbers (SVHN) [\citeauthoryearNetzer et al.2011] and CIFAR-10 [\citeauthoryearKrizhevsky and Hinton2009]. Real images from these datasets are shown in Figure 1.

For our experiments, we set in Equation 9 to for vanilla MLGAN and MLGAN with clipping, and for MLGAN with center penalty. The dimension of the output vector of discriminator could be set to for vanilla MLGAN and MLGAN with clipping, and for MLGAN with center penalty. is set to and is set to . For MLGANs with weight clipping, the clipping threshold is set to . For MLGANs with center penalty, in Equation 12 is set to or .

It is difficult to compare the performance of different models since GANs lack an objective function. Salimans et al. propose an automatic method to evaluate samples which is now considered a sound way to assess image quality [\citeauthoryearSalimans et al.2016]. Basically, they apply the Inception model [\citeauthoryearSzegedy et al.2016] to every generated image to get the conditional label distribution $p(y|x)$. Since images that contain meaningful objects should have a conditional label distribution $p(y|x)$ with low entropy and a marginal $p(y)$ with high entropy, their proposed metric is $\exp(\mathbb{E}_x \mathrm{KL}(p(y|x) \,\|\, p(y)))$. This metric is named the Inception score. In this paper, we utilize the Inception score to compare MLGANs with other models on SVHN and CIFAR-10.
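The Inception score computation can be sketched on a toy matrix of conditional label distributions; the two-class distributions below are made up for illustration (the real metric uses Inception-model predictions over 1,000 classes):

```python
import math

# exp of the average KL divergence between each p(y|x) and the marginal p(y).
# Each row of `cond` is a conditional label distribution for one image.
def inception_score(cond):
    n = len(cond)
    k = len(cond[0])
    marginal = [sum(row[j] for row in cond) / n for j in range(k)]
    kl = 0.0
    for row in cond:
        kl += sum(p * math.log(p / q) for p, q in zip(row, marginal) if p > 0)
    return math.exp(kl / n)

# Uniform (uninformative) predictions give a score of exactly 1;
# confident and diverse predictions give a higher score.
assert abs(inception_score([[0.5, 0.5], [0.5, 0.5]]) - 1.0) < 1e-9
sharp = inception_score([[0.99, 0.01], [0.01, 0.99]])
assert sharp > 1.5
```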


MLGAN on MNIST

The MNIST database [\citeauthoryearLeCun1998] of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. For MNIST, we use the baseline DCGAN architecture [\citeauthoryearRadford, Metz, and Chintala2015], and our code is based on a publicly available TensorFlow [\citeauthoryearAbadi et al.2016] implementation of DCGAN.

The experimental results are shown in Figure 2. As we can see from the figure, although the vanilla MLGAN demonstrates comparatively poor performance, after applying either improvement MLGAN generates much more realistic images.

Figure 5: Experimental results on CIFAR-10

MLGAN on CelebA

CelebA [\citeauthoryearLiu et al.2015] is a large-scale face attributes dataset with more than 200K celebrity images. Figure 3 shows the comparison of CelebA samples generated by DCGAN and MLGANs. The results demonstrate that MLGANs can achieve competitive performance on CelebA.


MLGAN on SVHN

SVHN [\citeauthoryearNetzer et al.2011] is a real-world image dataset for developing machine learning and object recognition algorithms with minimal requirements on data preprocessing and formatting. It can be seen as similar in flavor to MNIST but incorporates an order of magnitude more labeled data, and it is obtained from house numbers in Google Street View images.

We use the training set of SVHN, which consists of 73,257 digits, to train our algorithm and use the Inception score to compare the results with different GAN models. We use the same model architecture as WGAN with gradient penalty, whose code is publicly available.

Model (same architecture) Inception Score
WGAN-gradient penalty
MLGAN-center penalty 3.296 ± 0.17
Table 1: Experimental Results on SVHN

As stated above, we use the Inception score to assess the quality of our images, and we report the highest score of each model during training. The results of our experiments are shown in Table 1. As we can see from the table, our MLGANs achieve superior performance compared with the other models.

We also show the generated images for each GAN model in Figure 4. From the figure we can see that even though the attempt to train DCGAN with this architecture failed, MLGAN still generates realistic images, which in part demonstrates the stability of MLGANs.


MLGAN on CIFAR-10

The CIFAR-10 dataset [\citeauthoryearKrizhevsky and Hinton2009] consists of 60,000 32x32 color images in 10 classes. Again, we use the same model architecture as WGAN with gradient penalty, and the Inception score to assess the quality of generated images.

Model (same architecture) Inception Score
WGAN-gradient penalty
MLGAN-center penalty 6.279 ± 0.33
Table 2: Experimental Results on CIFAR-10

As we can see from Figure 5, DCGAN fails in this task again, while our model still obtains superior results compared with other GAN models in terms of Inception score. Also, from Table 2 we can see that our MLGANs achieve a higher Inception score. Furthermore, even though vanilla MLGAN performs the worst, it still does not collapse during training.

Improved Stability

One of the benefits of MLGANs is that we can train the critic to optimality: the better the critic is, the more reasonable the objective function the generator gets. Therefore, the problem of regular GANs, namely that we cannot train the discriminator too well, is no longer an issue.

Also, it should be noted that for MLGANs we simply use the same architecture as the other GAN models and still get superior results, which demonstrates the robustness and potential of MLGANs.

Last but not least, even though vanilla MLGAN performs the worst in most cases, it never collapses like a regular GAN (DCGAN). In this regard, MLGANs indeed increase the stability of training GANs.


Conclusion

In this work, we propose a novel framework for generative models named MLGAN, which inherits the adversarial process from GANs but significantly changes the goals of both the discriminator and the generator. To be specific, the discriminator of MLGANs aims at learning an appropriate metric between the real and fake samples, and the goal of the generator is to minimize the distance between real and fake samples under the newly learned metric.

We also highlight a major issue in training GANs, namely that the training is delicate and unstable. In our experiments, we demonstrate that not only do MLGANs achieve superior results compared with other models, but training MLGANs is also more stable.

We hope this proposed method can provide readers with a new perspective on GANs and inspire others to come up with better ideas.

Future Work

Although MLGANs have demonstrated satisfactory results as shown above, there are still several possible improvements that may lead to better performance. First, the objective function for vanilla MLGAN is worth investigating further, and by borrowing ideas from more state-of-the-art distance metric learning methods, we believe the results could be improved.

Second, to constrain the output of the discriminator of MLGAN, we propose two possible solutions, namely weight clipping and the center penalty. However, these are neither the only nor necessarily the best ways to constrain the discriminator, and we encourage researchers to explore more possibilities.

Third, adding label information to MLGANs would be reasonable, since it would be natural to minimize the distance within each class and maximize the distance between different classes.




References

  1. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G. S.; Davis, A.; Dean, J.; Devin, M.; et al. 2016. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467.
  2. Arjovsky, M., and Bottou, L. 2017. Towards principled methods for training generative adversarial networks. arXiv preprint arXiv:1701.04862.
  3. Arjovsky, M.; Chintala, S.; and Bottou, L. 2017. Wasserstein gan. arXiv preprint arXiv:1701.07875.
  4. Bellemare, M. G.; Danihelka, I.; Dabney, W.; Mohamed, S.; Lakshminarayanan, B.; Hoyer, S.; and Munos, R. 2017. The cramer distance as a solution to biased wasserstein gradients. arXiv preprint arXiv:1705.10743.
  5. Bellet, A.; Habrard, A.; and Sebban, M. 2013. A survey on metric learning for feature vectors and structured data. arXiv preprint arXiv:1306.6709.
  6. Chopra, S.; Hadsell, R.; and LeCun, Y. 2005. Learning a similarity metric discriminatively, with application to face verification. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, 539–546. IEEE.
  7. Goldberger, J.; Hinton, G. E.; Roweis, S. T.; and Salakhutdinov, R. R. 2005. Neighbourhood components analysis. In Advances in neural information processing systems, 513–520.
  8. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. In Advances in neural information processing systems, 2672–2680.
  9. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; and Courville, A. 2017. Improved training of wasserstein gans. arXiv preprint arXiv:1704.00028.
  10. Hinton, G. E.; Osindero, S.; and Teh, Y.-W. 2006. A fast learning algorithm for deep belief nets. Neural computation 18(7):1527–1554.
  11. Hinton, G. E. 2009. Deep belief networks. Scholarpedia 4(5):5947.
  12. Hoffer, E., and Ailon, N. 2014. Deep metric learning using triplet network. arXiv preprint arXiv:1412.6622.
  13. Isola, P.; Zhu, J.-Y.; Zhou, T.; and Efros, A. A. 2016. Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004.
  14. Kingma, D. P., and Welling, M. 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114.
  15. Krizhevsky, A., and Hinton, G. 2009. Learning multiple layers of features from tiny images.
  16. Kulis, B., et al. 2013. Metric learning: A survey. Foundations and Trends® in Machine Learning 5(4):287–364.
  17. LeCun, Y. 1998. The mnist database of handwritten digits. http://yann. lecun. com/exdb/mnist/.
  18. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. 2016. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802.
  19. Liu, Z.; Luo, P.; Wang, X.; and Tang, X. 2015. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, 3730–3738.
  20. Mao, X.; Li, Q.; Xie, H.; Lau, R. Y.; Wang, Z.; and Smolley, S. P. 2016. Least squares generative adversarial networks. arXiv preprint ArXiv:1611.04076.
  21. Metz, L.; Poole, B.; Pfau, D.; and Sohl-Dickstein, J. 2016. Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163.
  22. Netzer, Y.; Wang, T.; Coates, A.; Bissacco, A.; Wu, B.; and Ng, A. Y. 2011. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature learning, volume 2011,  5.
  23. Nguyen, A.; Yosinski, J.; Bengio, Y.; Dosovitskiy, A.; and Clune, J. 2016. Plug & play generative networks: Conditional iterative generation of images in latent space. arXiv preprint arXiv:1612.00005.
  24. Qi, G.-J. 2017. Loss-sensitive generative adversarial networks on lipschitz densities. arXiv preprint arXiv:1701.06264.
  25. Radford, A.; Metz, L.; and Chintala, S. 2015. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
  26. Reed, S.; Akata, Z.; Yan, X.; Logeswaran, L.; Schiele, B.; and Lee, H. 2016. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396.
  27. Salakhutdinov, R., and Hinton, G. 2009. Deep boltzmann machines. In Artificial Intelligence and Statistics, 448–455.
  28. Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; and Chen, X. 2016. Improved techniques for training gans. In Advances in Neural Information Processing Systems, 2234–2242.
  29. Smolensky, P. 1986. Information processing in dynamical systems: Foundations of harmony theory. Technical report, COLORADO UNIV AT BOULDER DEPT OF COMPUTER SCIENCE.
  30. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; and Wojna, Z. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2818–2826.
  31. Xing, E. P.; Jordan, M. I.; Russell, S. J.; and Ng, A. Y. 2003. Distance metric learning with application to clustering with side-information. In Advances in neural information processing systems, 521–528.
  32. Ye, H.-J.; Zhan, D.-C.; and Jiang, Y. 2016. Instance specific metric subspace learning: A bayesian approach. In AAAI, 2272–2278.
  33. Zhao, J.; Mathieu, M.; and LeCun, Y. 2016. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126.
  34. Zieba, M., and Wang, L. 2017. Training triplet networks with gan. arXiv preprint arXiv:1704.02227.