Versatile Auxiliary Regressor with Generative Adversarial Network (VAR+GAN)
Abstract
Being able to generate constrained samples is one of the most appealing applications of deep generators. Conditional generators are among the most successful implementations of such models, wherein the created samples are constrained to a specific class. In this work, the application of these networks is extended to regression problems, wherein the conditional generator is constrained to any continuous aspect of the data. A new loss function is presented for the regression network, and an implementation for generating faces with any particular set of landmarks is provided.
keywords:
Generative Adversarial Networks, Conditional Generators, Face Generation

1 Introduction
Generative Adversarial Networks (GAN) [1] are among the most successful implementations of deep generators. The idea of GAN is to train two agents, a generator and a discriminator, simultaneously. The generator is a deep neural network which accepts a vector from a latent space (uniformly distributed noise) and outputs a sample of the same type as the database entries. The discriminator is a binary classifier determining whether a given sample is generated or is genuine data coming from the database. Training is accomplished by playing a minimax game between these two networks. There are several extensions of the original GAN idea wherein it is adapted to a specific condition by changing the network structures and/or the loss function. For example, Conditional GAN (CGAN) [2], Auxiliary Classifier GAN (ACGAN) [3], and Versatile Auxiliary Classifier with GAN (VAC+GAN) [4] utilize the original GAN idea to train conditional generators wherein the output of the generator is constrained to a specific class given the right input sequence. CGAN does this by partitioning the latent space and using the auxiliary knowledge of the data class. In ACGAN, the CGAN loss is augmented with a classification term which is backpropagated through the generator and the discriminator. VAC+GAN extends the ACGAN scheme to be more adaptable to different GAN variations. This is done by adding a classifier network in parallel with the discriminator network; the classification error is backpropagated through the generator.
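As an illustration of this minimax game, the GAN value function V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))] can be evaluated for toy, hand-picked generator and discriminator functions (all names and numbers below are illustrative sketches, not the models used in this paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w):
    # Toy generator: a simple affine map from latent noise to sample space.
    return w * z

def discriminator(x, a, b):
    # Toy discriminator: logistic model returning the probability of "real".
    return 1.0 / (1.0 + np.exp(-(a * x + b)))

def value_fn(real, z, w, a, b):
    # GAN value function V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))].
    # The discriminator tries to maximize V; the generator tries to minimize it.
    d_real = discriminator(real, a, b)
    d_fake = discriminator(generator(z, w), a, b)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

real = rng.normal(2.0, 0.5, size=1000)  # stand-in for genuine database samples
z = rng.uniform(-1.0, 1.0, size=1000)   # uniformly distributed latent noise

# V is a sum of two negative log-likelihood terms, so it is always <= 0.
v = value_fn(real, z, w=0.1, a=1.0, b=0.0)
```

In practice both players are deep networks updated by alternating stochastic gradient steps rather than closed-form toy functions.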
In this work, the idea of VAC+GAN is extended to regression problems by replacing the classifier with a regression network. A new loss function is presented for this network. The regression error is backpropagated through the generator. This gives the opportunity to train a generator while constraining the generated samples to any continuous aspect of the original database.
Similar ideas include the scheme presented in [5], wherein a Hierarchical Generative Model (HGM) is utilized for eye image synthesis and eye gaze estimation. That work introduces a variation of GAN known as conditional Bidirectional GAN (cBiGAN), which is a mixture of CGAN and Bidirectional GAN (BiGAN). The main issue with this method is its lack of adaptability: it is only applicable to the BiGAN scheme. This approach is shown in figure 1. In our observations, the cBiGAN implementation is able to generate samples for each aspect, but there is very little variation among the samples generated for a specific aspect. The proposed VAR+GAN scheme produces higher variation under the same condition. This is discussed in more detail in section 3.4. The other advantage of the proposed method is its versatility, i.e., it applies to any GAN implementation regardless of the network architecture and/or loss function.
In the next section, the idea of VAR+GAN is explained alongside the loss function presented for the regression network. Section 3 covers the implementation, results, and comparisons of the presented method against cBiGAN, and the conclusions and future work are discussed in the last section.
2 Versatile Auxiliary Regressor with Generative Adversarial Network (VAR+GAN)
The idea of the proposed scheme is to place a regression network in parallel with the discriminator and backpropagate the regression error through the generator (see figure 2). In this way, the generator is constrained to generate samples with specific continuous aspects. For example, in the face generation application, given the right latent sequence, the generator creates faces with a particular set of landmarks.
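The wiring can be sketched as follows: the generator's training signal is the sum of its adversarial loss and the auxiliary regressor's error on the generated samples. This is a hedged sketch; the mean-squared-error placeholder stands in for the actual regression loss of equation 1, and the 0.97/0.03 weights are the values that appear later in section 3.3.

```python
import numpy as np

def mse(pred, target):
    # Placeholder regression error between predicted and target aspects.
    return np.mean((pred - target) ** 2)

def generator_loss(adv_loss, reg_pred, reg_target, w_adv=0.97, w_reg=0.03):
    # Total generator loss: the adversarial term plus the weighted error of
    # the auxiliary regressor, both backpropagated through the generator.
    return w_adv * adv_loss + w_reg * mse(reg_pred, reg_target)

# Illustrative numbers: predicted vs. ground-truth landmark vectors.
pred = np.array([0.1, -0.2, 0.3])
target = np.array([0.0, -0.25, 0.35])
total = generator_loss(adv_loss=0.8, reg_pred=pred, reg_target=target)
```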
The following loss function is introduced for the regression network
(1) 
wherein the symbols denote, respectively, the latent space variable, the distribution of an infinitesimal partition of the latent space, the target variable (ground truth), the regression function, and the generator function.
Proposition 0.
For the loss function in equation 1 the optimal regressor is
(2) 
wherein the symbols denote the distribution of the generator's output, the post-integration constant, and the target function, respectively.
Proof.
Considering the inner integration of equation 1, the extremum of the loss function with respect to the regressor is
(3) 
which can be written as
(4) 
this results in
(5) 
concluding the proof. ∎
Theorem 1.
Minimizing the loss function in equation 1 decreases the entropy of the generator’s output.
Proof.
Adding the regressor to the model decreases the entropy of the generated samples. This is expected, since the idea is to constrain the output of the generator to obey particular criteria. This is borne out by the observations in section 3.4.
Theorem 2.
For any two sets of samples and their corresponding targets, the loss function in equation 1 increases the Jensen-Shannon Divergence (JSD) between the samples generated for these two sets.
Proof.
Consider two partitions of the latent space corresponding to the two sets of samples with their respective targets. In this case, the loss function in equation 1 is given by:
(8) 
After substituting these definitions, equation 8 simplifies to
(9) 
To find the optimum, the derivative of the integrand is set to zero, which gives
(10) 
which results in
(11) 
Substituting equation 11 into equation 9 simplifies it to
(12) 
which can be rewritten as
(13) 
which equals
(14) 
Minimizing the loss therefore increases the JSD term, concluding the proof. ∎
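For reference, the Jensen-Shannon divergence appearing in Theorem 2 is the standard symmetrized divergence between two distributions $P$ and $Q$:

```latex
\mathrm{JSD}(P \,\|\, Q) = \frac{1}{2}\,\mathrm{KL}\!\left(P \,\Big\|\, \frac{P+Q}{2}\right)
                         + \frac{1}{2}\,\mathrm{KL}\!\left(Q \,\Big\|\, \frac{P+Q}{2}\right)
```

It is symmetric, bounded, and zero exactly when $P = Q$, so increasing it pushes the two generated distributions apart.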
It has thus been shown that the presented loss function increases the distance between the samples generated for any two sets of aspects, which is desirable.
3 Implementation and Results
In this section, the implementation of VAR+GAN is presented and compared to the cBiGAN method. To keep the comparisons consistent, the same generator architecture has been kept throughout all implementations. All networks are trained using the Lasagne [6] library on top of the Theano [7] library in Python.
3.1 Network Architectures
Three main architectures are used in this section: the encoder, decoder, and regression networks. The first two are shown in figure 3. The decoder contains one fully connected layer which maps the input to a 3D layer. The following layers are all convolutional layers, with an unpooling layer after every second convolution. The exponential linear unit (ELU) [8] is used as the activation function, except in the last layer, where no nonlinearity is applied (save for the encoder in the cBiGAN scheme, where a tanh nonlinearity is applied at the output layer). The encoder network is made of convolutional layers with the ELU activation function. Downscaling in these layers is obtained by using a stride in every second convolutional layer. In the decoder network, all convolutional layers have 64 channels, while in the encoder the number of channels is gradually increased to 128, 192, and 256 after each pooling layer. The layers shown in red apply no nonlinearity to their input.
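The ELU activation used above has the closed form elu(x) = x for x > 0 and α(eˣ − 1) otherwise [8], with α typically set to 1. A minimal NumPy sketch:

```python
import numpy as np

def elu(x, alpha=1.0):
    # Exponential linear unit: identity for positive inputs,
    # alpha * (exp(x) - 1) for non-positive inputs (saturating at -alpha).
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

out = elu(np.array([-2.0, 0.0, 3.0]))
```

Unlike ReLU, the negative side is smooth and bounded below, which is the property the cited paper credits for faster, more accurate learning.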
The regression network is a conventional deep neural network shown in table 1.
Layer     Type          Kernel    Activation
Input     Input         –         –
Hidden 1  Conv          (64 ch)   ReLU
Pool 1    Max pooling   –         –
Hidden 2  Conv          (64 ch)   ReLU
Pool 2    Max pooling   –         –
Hidden 3  Dense         1024      ReLU
Output    Dense         98        Tanh
3.2 Database
The dataset used in this work is the CelebA database [9], which is made of 202,599 frontal-posed images. Face regions are cropped and resized using the OpenCV frontal face cascade classifier [10]. The Supervised Descent Method (SDM) [11] is used for facial point detection. The detector is based on [12] and utilizes a discriminative 3D facial deformable model to find 49 facial landmarks, including the contours of the eyebrows, eyes, mouth, and nose. These landmarks are used as the data aspect in this work.
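Since the regression network's output layer (table 1) is a 98-unit dense layer with a tanh activation, the 49 (x, y) landmark pairs must be flattened into a 98-dimensional vector in [-1, 1]. A hedged sketch of one plausible normalization (the exact scaling convention is not stated in the paper):

```python
import numpy as np

def landmarks_to_target(landmarks, img_size):
    # Flatten 49 (x, y) landmark pairs into a 98-dim vector scaled to
    # [-1, 1] so it can be regressed by a tanh output layer.
    # img_size is the width/height of the (square) face crop.
    pts = np.asarray(landmarks, dtype=float)       # shape (49, 2)
    assert pts.shape == (49, 2)
    return (pts / (img_size / 2.0) - 1.0).ravel()  # shape (98,)

# Illustrative: 49 identical landmarks on a hypothetical 64x64 crop.
demo = np.tile([[32.0, 16.0]], (49, 1))
target = landmarks_to_target(demo, img_size=64)
```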
3.3 Implementation
3.3.1 VAR+GAN
For the proposed scheme (figure 2), the Boundary Equilibrium Generative Adversarial Network (BEGAN) [13] is utilized to train the deep generator. In this method, the generator architecture is the same as the decoder network shown in figure 2(a). The discriminator is an autoencoder network wherein the decoder architecture is the same as the generator and the encoder is the network shown in figure 2(b). The regression network is shown in table 1, and the loss function for the proposed implementation is a modified version of the original BEGAN loss [13], given by:
(15) 
wherein the first two terms are the generator's and discriminator's losses respectively. The remaining symbols denote the generator function, a sample from the latent space, a genuine image and its corresponding ground truth drawn from the database, the learning rate for the equilibrium variable, the equilibrium hyper-parameter (set to 0.5 in this work), and the regression loss given by equation 1. The weights of the adversarial and regression terms are set to 0.97 and 0.03 respectively, and the autoencoder loss is defined by
(16) 
The generator and discriminator are trained with the ADAM optimizer, with the learning rate, β1, and β2 equal to 0.0001, 0.5, and 0.999 respectively. The regression network is optimized using Nesterov momentum gradient descent with the learning rate and momentum equal to 0.01 and 0.9 respectively.
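The BEGAN bookkeeping behind equation 15 can be sketched as follows, with the regression term added to the generator's loss as in the proposed scheme. Symbol names follow the BEGAN paper [13]; the default lambda_k of 0.001 is BEGAN's usual value (an assumption, as this paper does not state it), and the per-step loss numbers are purely illustrative.

```python
import numpy as np

def began_step(loss_real, loss_fake, reg_loss, k, lambda_k=0.001,
               gamma=0.5, w_adv=0.97, w_reg=0.03):
    # One BEGAN bookkeeping step (sketch).
    # loss_real / loss_fake: autoencoder reconstruction losses on genuine
    # and generated samples; reg_loss: the auxiliary regression error that
    # the proposed scheme adds to the generator loss.
    d_loss = loss_real - k * loss_fake
    g_loss = w_adv * loss_fake + w_reg * reg_loss
    # The control variable k is nudged toward the equilibrium
    # gamma * loss_real == loss_fake and kept in [0, 1].
    k_next = np.clip(k + lambda_k * (gamma * loss_real - loss_fake), 0.0, 1.0)
    return d_loss, g_loss, k_next

d_loss, g_loss, k_next = began_step(loss_real=0.4, loss_fake=0.3,
                                    reg_loss=0.1, k=0.0)
```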
3.3.2 cBiGAN
The cBiGAN scheme is implemented with the same generator architecture and on the same database to allow fair comparisons with the proposed method. The generator architecture is the same as the decoder network shown in figure 2(a). The discriminator model is the same as the encoder network in figure 2(b) with a nonlinearity applied at the output layer, and the encoder network in figure 1 has the architecture shown in figure 2(b) with a tanh nonlinearity at the output layer. The loss function for this scheme, presented in [5], is given by
(17) 
wherein the two quantities compared are the encoder's output and the genuine aspect coming from the database, and
(18) 
where the symbols denote the discriminator and generator functions, a genuine image and its corresponding ground truth drawn from the database, and a sample from the latent space, respectively. The coefficient is set to . The optimizer used for training the model is ADAM, with the learning rate, β1, and β2 equal to 0.0001, 0.5, and 0.999 respectively.
3.4 Results
In this section, the proposed method is compared against the cBiGAN method when generating faces for a particular landmark point set. The results for six sets of landmarks are shown in figures 4 to 9. In each figure, the left side shows the outputs of the proposed method for a particular set of landmarks, while the right side shows the outputs of the generator trained in the cBiGAN scheme for the same landmarks.
As shown in these figures, both methods are able to generate samples constrained to a particular set of landmarks, but the proposed method generates a greater variety of faces for a given landmark set, while cBiGAN fails to create different samples under the same condition. A further advantage of VAR+GAN is its versatility, which simplifies implementation and also yields higher-quality generated samples. For example, in this work the proposed method takes advantage of the simplicity and power of the BEGAN implementation, and the only change applied is to place a regression network alongside the discriminator and add its error value to the generator's loss. The cBiGAN method, in contrast, is constrained to a specific loss function, which is a significant disadvantage.
4 Conclusion and Future Work
In this work, a new scheme for training conditional deep generators has been introduced wherein the generator constrains the generated samples to any continuous aspect of the dataset. The presented method is versatile enough to be applicable to any GAN variation, with any network structure and loss function. The idea is to place a regression network in parallel with the discriminator network and backpropagate the regression error through the generator. A new loss function is also presented, and it has been shown that it increases the JSD between the data generated for any two sets of aspects. The other property of the proposed loss is a reduction in the entropy of the generated samples, which is expected given the constraints applied to the generated data.
The proposed method is also compared with the only available method with the same purpose (to the best of our knowledge at the time of writing). The cBiGAN method generates samples with a particular aspect, but there is very little variation among the generated samples, while VAR+GAN produces higher variation for a specific set of aspects. Being able to generate varied samples is crucial for tasks such as data augmentation: augmenting the database for certain data aspects and using the result to train final products is one of the most interesting applications of conditional generators.
Future work includes merging the VAC+GAN and VAR+GAN methods to constrain the generator to create samples from a specific class with a particular continuous aspect, and investigating the influence of the generated samples on augmentation tasks for different applications.
References
[1] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial nets, in: Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
[2] M. Mirza, S. Osindero, Conditional generative adversarial nets, arXiv preprint arXiv:1411.1784.
[3] A. Odena, C. Olah, J. Shlens, Conditional image synthesis with auxiliary classifier GANs, arXiv preprint arXiv:1610.09585.
[4] S. Bazrafkan, H. Javidnia, P. Corcoran, Versatile auxiliary classifier + generative adversarial network (VAC+GAN): training conditional generators, arXiv preprint arXiv:1805.00316.
[5] K. Wang, R. Zhao, Q. Ji, A hierarchical generative model for eye image synthesis and eye gaze estimation.
[6] S. Dieleman, J. Schlüter, C. Raffel, E. Olson, S. K. Sønderby, D. Nouri, D. Maturana, M. Thoma, E. Battenberg, J. Kelly, J. D. Fauw, M. Heilman, D. M. de Almeida, B. McFee, H. Weideman, G. Takács, P. de Rivaz, J. Crall, G. Sanders, K. Rasul, C. Liu, G. French, J. Degrave, Lasagne: First release (Aug. 2015). doi:10.5281/zenodo.27878. URL http://dx.doi.org/10.5281/zenodo.27878
[7] J. Bergstra, F. Bastien, O. Breuleux, P. Lamblin, R. Pascanu, O. Delalleau, G. Desjardins, D. Warde-Farley, I. Goodfellow, A. Bergeron, et al., Theano: Deep learning on GPUs with Python, in: NIPS 2011, BigLearning Workshop, Granada, Spain, Vol. 3, Citeseer, 2011.
[8] D.-A. Clevert, T. Unterthiner, S. Hochreiter, Fast and accurate deep network learning by exponential linear units (ELUs), arXiv preprint arXiv:1511.07289.
[9] Z. Liu, P. Luo, X. Wang, X. Tang, Deep learning face attributes in the wild, in: Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 3730–3738.
[10] D. Cristinacce, T. F. Cootes, Feature detection and tracking with constrained local models, in: BMVC, Vol. 1, 2006, p. 3.
[11] X. Xiong, F. De la Torre, Supervised descent method and its applications to face alignment, in: Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, IEEE, 2013, pp. 532–539.
[12] A. Asthana, S. Zafeiriou, S. Cheng, M. Pantic, Incremental face alignment in the wild, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1859–1866.
[13] D. Berthelot, T. Schumm, L. Metz, BEGAN: Boundary equilibrium generative adversarial networks, arXiv preprint arXiv:1703.10717.