Beholder-GAN: Generation and Beautification of Facial Images with Conditioning on Their Beauty Level
"Beauty is in the eye of the beholder." This maxim, emphasizing the subjectivity of the perception of beauty, has enjoyed a wide consensus since ancient times. In the digital era, data-driven methods have been shown to be able to predict human-assigned beauty scores for facial images. In this work, we augment this ability and train a generative model that generates faces conditioned on a requested beauty score. In addition, we show how this trained generator can be used to "beautify" an input face image. By doing so, we achieve an unsupervised beautification model, in the sense that it relies on no ground-truth target images.
Nir Diamant*, Dean Zadok*, Chaim Baskin, Eli Schwartz, Alex M. Bronstein (* the authors contributed equally to this work)
Computer Science Department, Technion - IIT, Israel
Beautification, Face synthesis, Generative Adversarial Network, GAN, CGAN
Methods for facial beauty prediction and beautification of faces in images have attracted the attention of the computer vision and machine learning communities for a long time [1, 2, 3, 4]. The reason goes beyond the importance of these applications, and is probably also related to the inherent challenge of predicting and improving such an utterly subjective attribute as beauty. The fact that beauty is hard to model from first principles makes it a perfect candidate for data-driven methods such as deep learning. Over the years, several datasets and methods for facial beauty prediction (FBP) have been suggested, e.g., [5, 6]. Recently, a new dataset was published [7] that ranks the beauty of facial images by a group of human raters; unlike the previous datasets, the full score distribution for each subject is reported. Our work focuses on the task of generating facial images conditioned on their beauty score. We use it both for generating sequences of images of the same person at different beauty levels, and for the "beautification" of a given input image.
Generative Adversarial Networks (GANs) are being extensively researched nowadays and have been shown to be able to generate realistic high-resolution images from scratch [8, 9]. Nevertheless, the lack of stability in the training process is still noticeable. Implementations such as Unrolled [10] and Wasserstein [11, 12] GANs offer sizeable improvements in stabilizing the training. A recent approach, Progressive Growing of GANs (PGGAN) [13], suggests coping with the challenge of generating high-resolution images by first learning to generate low-resolution images and progressively growing to higher resolutions. Another important aspect of GANs is their ability to generate images conditioned on some attribute, e.g., a class label. These models are often referred to as Conditional GANs (CGANs) [14].
Conditioning vectors can be formed in different structures. One way is the discrete approach, where the images are divided into separate classes and the class label is fed to the model as a one-hot vector. This method has been used to generate class-conditioned images [15] or to illustrate face aging [16]. On the other hand, conditioning vectors can be treated as continuous values and fed directly into the model. This method was proposed for synthesizing facial expressions [17] or for reconstructing animations based on facial expressions [18]. Regardless of the way the conditioning vector is assembled, usually another output is added to the discriminator, where the conditioning vector is predicted; the loss on this output encourages the generated images to belong to the conditional distribution of the correct class.
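The two conditioning schemes can be sketched as follows. This is a minimal NumPy illustration; the function names `one_hot` and `condition_input` are ours and do not come from any specific GAN implementation:

```python
import numpy as np

def one_hot(label, n_classes):
    """Discrete conditioning: encode a class label as a one-hot vector."""
    v = np.zeros(n_classes)
    v[label] = 1.0
    return v

def condition_input(z, cond):
    """Concatenate the latent vector with the conditioning vector
    (or a single continuous value) before feeding the generator."""
    return np.concatenate([z, np.atleast_1d(cond)])

z = np.random.randn(512)                      # latent vector
discrete = condition_input(z, one_hot(2, 5))  # class-conditioned input
continuous = condition_input(z, 0.73)         # score-conditioned input
```

Either way, the generator sees one flat input vector; only the semantics of the appended entries differ.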
In this work, we showcase the ability to generate realistic facial images conditioned on a beauty score using a variant of PGGAN. We use this variant to generate sequences of facial images with the same latent vector and different beauty levels. This offers insights into what humans consider beautiful and also reveals human biases with regard to age, gender, and race. In addition, we present a method for recovering the latent vector of a given real face image with the trained generator, and use our model to "beautify" it.
2.1 A GAN conditioned on a beauty score
While beauty is a subjective attribute, the scores assigned by different people to facial images tend to correlate. This enabled the creation of datasets of facial images together with their human-labeled beauty scores [19, 20, 21]. While the notion of beauty is hard to model mathematically, data-driven methods trained on these datasets are able to predict the beauty scores with remarkable accuracy [22, 23, 24]. Another interesting task, which has not been attempted before, is learning a generative model conditioned on the beauty score:

$I = G(z, \beta)$,

where $\beta$ denotes the beauty score, $z$ is a random Gaussian vector in some latent space, and $I$ is the generated face image. To ensure that the generated image indeed corresponds to the correct beauty level, we also let the discriminator predict the beauty level and not just the usual real vs. fake probability,

$(p_{\mathrm{real}}, \hat{\beta}) = D(I)$,
and apply an appropriate loss on the beauty score output. We use the continuous score $\beta$ as input to $G$, and apply the loss on the score $\hat{\beta}$ estimated by the discriminator $D$. In addition, since the beauty score distribution of a single face across multiple beholders can be informative, we input not a single score but a vector of all the ratings available for the face. In Section 3 we evaluate these different design choices. With the exception of the addition of input and output beauty scores, we adopted the architecture and training procedure described in [13].
After training, we can use the trained generator with some fixed $z$ and vary the beauty score input $\beta$ to generate faces belonging to the same person but having different beauty levels.
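Concretely, a beauty sequence is produced by holding $z$ fixed and sweeping the score. A toy sketch (the generator here is a stand-in function, not the trained PGGAN):

```python
import numpy as np

def beauty_sequence(generator, z, betas):
    """Generate faces for one identity (fixed z) at several beauty levels."""
    return [generator(z, b) for b in betas]

# Stand-in "generator": any function of (z, beta). A real model would map
# the latent and the score to an image tensor.
toy_gen = lambda z, b: z.mean() + b

z = np.zeros(512)
seq = beauty_sequence(toy_gen, z, betas=[0.1, 0.3, 0.5, 0.7, 0.9])
```

Each element of `seq` corresponds to the same identity rendered at a different requested beauty level.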
2.2 Beautification of real face images

A challenge involved in learning beautification of faces is that it must be performed in an unsupervised manner: we do not have pairs of less and more beautiful images of the same person, as would be required for supervised learning. One possible approach to unsupervised transformation of images between two domains is offered by CycleGAN-style image-to-image translation methods [25, 26], or by their extension to the multi-class case (e.g., hair color and facial expressions) presented in StarGAN [27]. These methods, however, are tailored to the discrete class case, and it is not obvious how to adjust them to a continuous attribute such as a beauty score. We propose a method for the beautification of an input facial image using the previously described trained generator; the method is unsupervised in the sense that no target image is used to compute the loss.
Given an image $x$ and the pre-trained generator $G$, we want to recover the corresponding latent vector $z$ and beauty score $\beta$. We do this by initializing with a random $z$ and $\beta$ and performing gradient descent iterations on an aggregate of the $L_2$ and VGG losses of the output image compared to the input image. We use a VGG network pre-trained for face recognition and exploit it as a feature extractor by removing its last layer. The resulting gradient descent step assumes the form

$(z, \beta) \leftarrow (z, \beta) - \eta \, \nabla_{z,\beta} \left( \| G(z,\beta) - x \|_2^2 + \lambda \, \| \mathrm{VGG}(G(z,\beta)) - \mathrm{VGG}(x) \|_2^2 \right)$,

where $\eta$ is the step size, and $\lambda$ governs the relative importance of the VGG loss. After recovering the latent vector $z$ encoding the input face and its beauty score $\beta$, we use the feed-forward model $G(z, \beta')$, where $\beta'$ is a higher beauty level ($\beta' > \beta$), to obtain a similar but more beautiful face.
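The recovery step can be sketched as plain gradient descent. For illustration only, we use finite-difference gradients, a toy generator whose output exposes both $z$ and $\beta$, and an identity-slice stand-in for the VGG feature extractor; in practice one would backpropagate through the trained networks:

```python
import numpy as np

def loss(G, feat, x, z, b, lam):
    """L2 reconstruction loss plus lam-weighted feature (VGG-like) loss."""
    y = G(z, b)
    return np.sum((y - x) ** 2) + lam * np.sum((feat(y) - feat(x)) ** 2)

def recover(G, feat, x, z, b, lr=0.05, lam=0.1, steps=400, eps=1e-4):
    """Gradient descent on (z, beta) using central finite differences."""
    for _ in range(steps):
        gz = np.zeros_like(z)
        for i in range(len(z)):
            d = np.zeros_like(z)
            d[i] = eps
            gz[i] = (loss(G, feat, x, z + d, b, lam)
                     - loss(G, feat, x, z - d, b, lam)) / (2 * eps)
        gb = (loss(G, feat, x, z, b + eps, lam)
              - loss(G, feat, x, z, b - eps, lam)) / (2 * eps)
        z, b = z - lr * gz, b - lr * gb
    return z, b

# Toy generator: appending beta to z makes the inversion well-posed here.
G = lambda z, b: np.concatenate([z, [b]])
feat = lambda y: y[:2]          # stand-in for a VGG feature extractor

z_true, b_true = np.array([1.0, -2.0, 0.5]), 0.7
x = G(z_true, b_true)
z_rec, b_rec = recover(G, feat, x, np.zeros(3), 0.0)
```

For this convex toy objective the iterations converge to the true latent and score; for a real generator the problem is non-convex and the result depends on the initialization.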
2.3 Semi-supervised training
Facial images with labeled beauty scores are scarce and, where they exist, do not contain enough variety for training a GAN. To overcome this limitation, we use a semi-supervised approach wherein a model is trained to predict the beauty score of faces based on a limited dataset. The trained model is then used to rate more images, thus creating a richer dataset. Since we condition the GAN on the distribution of scores and not on a single score, we train one predictive model per human rater; e.g., for the SCUT-FBP5500 dataset [7], a separate model was trained to predict the scores assigned by each of its human raters.
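The labeling scheme can be sketched as follows. Here `fit_rater_model` stands in for training a CNN regressor on one rater's scores (for illustration it is a trivial mean predictor; both function names are ours):

```python
import numpy as np

def fit_rater_model(train_images, rater_scores):
    """Stand-in for training one beauty regressor per human rater."""
    mean_score = float(np.mean(rater_scores))
    return lambda images: np.full(len(images), mean_score)

def label_images(models, images):
    """Produce a full score vector (one entry per rater) for each image,
    matching the conditioning format used by the GAN."""
    return np.stack([m(images) for m in models], axis=1)

scores = np.array([[3.0, 4.0], [2.0, 5.0], [4.0, 3.0]])  # (images, raters)
train_imgs = np.zeros((3, 8, 8))
models = [fit_rater_model(train_imgs, scores[:, r]) for r in range(2)]
labels = label_images(models, np.zeros((5, 8, 8)))        # rate 5 new images
```

The per-rater structure is what lets the GAN be conditioned on a whole score distribution rather than a single averaged value.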
Table 1: Comparison of methods in terms of the Sliced Wasserstein distance and MS-SSIM.
3.1 Semi-supervised training
As explained in Section 2.3, we enrich our dataset by training a beauty predictor on one dataset and using it to label additional faces. To verify the validity of this idea, we trained a predictive model on the SCUT-FBP5500 dataset [7] and tested it on random images from CelebAHQ [13]. One VGG model was trained per human rater, with the weights initialized from a VGG trained on ImageNet. In addition, we ran an online survey in which people rated the beauty of the same random facial images. The average scores predicted by the trained models compared to the average scores given by the human raters are presented in Fig. 3. The human and model ratings are well correlated, indicating that despite the model ratings being somewhat noisy, they can be used to train our GAN.
3.2 Face generation and beautification
We used the previously described methods to label the CelebAHQ dataset and trained a GAN on it. We fed each random latent vector to the generator with five beauty scores to generate five images of supposedly the same person at different levels of beauty. A few examples of the generated sequences are presented in Fig. 1. Our evaluation of the results is based on two criteria: the level of realism in the generated images, and the causality between the input beauty score and the generated faces.
To quantitatively verify that our model generates realistic faces regardless of the conditioned beauty score, we used the Sliced Wasserstein distance and the MS-SSIM metric employed in the PGGAN evaluation [13]. Table 1 presents the comparison of our conditional GAN with the unconditional PGGAN. While adding the conditioning results in a degradation in the Sliced Wasserstein distance, the MS-SSIM metric actually improved.
To evaluate the causality between the input beauty score and the generated faces, we conducted an online survey with pairs drawn from the generated sequences. Each pair consisted of two images generated with the same latent vector $z$ but with beauty scores a fixed distance apart. The pair was presented to the human raters in a random order; the raters were asked to judge which face in the pair appeared more beautiful. We measured the percentage of agreement of the human raters with the conditioning score; Fig. 4 shows a few examples where the raters agreed or disagreed with the generated beauty score.
We used the generator trained for random face generation for the beautification process described in Section 2.2. Fig. 2 presents real faces together with the corresponding outputs of the beautification process. The beautified faces are generated by recovering the $z$ and $\beta$ of the real face with gradient descent; the feed-forward generator is then used with the same $z$ and a higher beauty score $\beta'$. For real-life applications, probably only a moderate increase of the beauty score is relevant, as it somewhat preserves the identity of the original face.
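End to end, beautification composes the two steps above. A sketch, where `recover_fn` and `G` are stand-ins for the recovery procedure and the trained generator:

```python
def beautify(G, recover_fn, image, delta=0.2):
    """Recover (z, beta) for a real face, then re-synthesize it at a
    moderately higher beauty level to better preserve identity."""
    z, beta = recover_fn(image)
    return G(z, beta + delta)

# Stand-ins: a "generator" that scales its latent by the score, and a
# "recovery" that returns a known (z, beta) pair.
G = lambda z, b: [v * b for v in z]
recover_fn = lambda img: ([1.0, 2.0], 0.5)
out = beautify(G, recover_fn, image=None, delta=0.1)
```

Keeping `delta` small corresponds to the retouching-like regime described above; a large `delta` drifts toward a different identity.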
We have shown that despite the subjective nature of beauty, a generative model can learn to capture its essence and generate faces with different beauty levels. We expected a disentanglement between the beauty level and the person's identity as defined by attributes such as race or gender. In practice, we found that when generating two faces with the same latent vector but sufficiently different beauty scores, these attributes tend to change. It might be tempting to call this a "racist algorithm", but we believe it merely reflects the subjective, possibly unconscious, biases of the human annotators. We also presented a method for using the trained generative model for the beautification of faces. It should, however, be used with care: while a small increase in the beauty score looks like retouching, a big increase transforms the face into another person.
The research was funded by ERC StG RAPID.
- [1] Yael Eisenthal, Gideon Dror, and Eytan Ruppin, "Facial attractiveness: Beauty and the machine," Neural Computation, vol. 18, pp. 119–142, 2006.
- [2] Tommer Leyvand, Daniel Cohen-Or, Gideon Dror, and Dani Lischinski, "Data-driven enhancement of facial attractiveness," ACM Transactions on Graphics, vol. 27, no. 38, 2008.
- [3] Jianshu Li, Chao Xiong, Luoqi Liu, Xiangbo Shu, and Shuicheng Yan, "Deep face beautification," MM '15: Proceedings of the 23rd ACM International Conference on Multimedia, pp. 793–794, 2015.
- [4] Bob Zhang, Xihua Xiao, and Guangming Lu, "Facial beauty analysis based on features prediction and beautification models," Pattern Analysis and Applications, vol. 21, pp. 529–542, 2018.
- [5] Junying Gan, Lichen Li, Yikui Zhai, and Yinhua Liu, "Deep self-taught learning for facial beauty prediction," Neurocomputing, vol. 144, pp. 295–303, 2014.
- [6] Duorui Xie, Lingyu Liang, Lianwen Jin, Jie Xu, and Mengru Li, "SCUT-FBP: A benchmark dataset for facial beauty perception," arXiv, 2015.
- [7] Lingyu Liang, Luojun Lin, Lianwen Jin, Duorui Xie, and Mengru Li, "SCUT-FBP5500: A diverse benchmark dataset for multi-paradigm facial beauty prediction," ICPR, 2018.
- [8] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, "Generative adversarial networks," NIPS, 2014.
- [9] Alec Radford, Luke Metz, and Soumith Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv preprint arXiv:1511.06434, 2015.
- [10] Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein, "Unrolled generative adversarial networks," ICLR, 2017.
- [11] Martin Arjovsky, Soumith Chintala, and Léon Bottou, "Wasserstein GAN," arXiv preprint arXiv:1701.07875, 2017.
- [12] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville, "Improved training of Wasserstein GANs," NIPS, 2017.
- [13] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen, "Progressive growing of GANs for improved quality, stability, and variation," ICLR, 2018.
- [14] Mehdi Mirza and Simon Osindero, "Conditional generative adversarial nets," arXiv, 2014.
- [15] Guillermo L. Grinblat, Lucas C. Uzal, and Pablo M. Granitto, "Class-splitting generative adversarial networks," arXiv, 2018.
- [16] Grigory Antipov, Moez Baccouche, and Jean-Luc Dugelay, "Face aging with conditional generative adversarial networks," arXiv, 2017.
- [17] Grigory Antipov, Moez Baccouche, and Jean-Luc Dugelay, "GAN-based realistic face pose synthesis with continuous latent code," AAAI, 2018.
- [18] Albert Pumarola, Antonio Agudo, Aleix M. Martinez, Alberto Sanfeliu, and Francesc Moreno-Noguer, "GANimation: Anatomically-aware facial animation from a single image," ECCV, 2018.
- [19] Fangmei Chen and David Zhang, "A benchmark for geometric facial beauty study," ICMB 2010: Medical Biometrics, vol. 6165, pp. 21–32, 2010.
- [20] Miriam Redi, Nikhil Rasiwasia, Gaurav Aggarwal, and Alejandro Jaimes, "The beauty of capturing faces: Rating the quality of digital portraits," 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), vol. 1, pp. 1–8, 2015.
- [21] Hatice Gunes and Massimo Piccardi, "Assessing facial beauty through proportion analysis by image processing and supervised learning," International Journal of Human-Computer Studies, vol. 64, pp. 1184–1199, 2006.
- [22] Jie Xu, Lianwen Jin, Lingyu Liang, Ziyong Feng, and Duorui Xie, "A new humanlike facial attractiveness predictor with cascaded fine-tuning deep learning model," arXiv, 2015.
- [23] Lu Xu, Jinhai Xiang, and Xiaohui Yuan, "Transferring rich deep features for facial beauty prediction," arXiv, 2018.
- [24] Lanfei Shi and Siva Viswanathan, "Beauty and counter-signaling in online matching markets: Evidence from a randomized field experiment," ICIS 2018, 2018.
- [25] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," ICCV, 2017.
- [26] Ming-Yu Liu, Thomas Breuel, and Jan Kautz, "Unsupervised image-to-image translation networks," NIPS, 2017.
- [27] Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo, "StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation," CVPR, 2018.