Towards Large-Pose Face Frontalization in the Wild
Despite recent advances in face recognition using deep learning, severe accuracy drops are observed for large pose variations in unconstrained environments. Learning pose-invariant features is one solution, but it requires expensively labeled large-scale data and carefully designed feature learning algorithms. In this work, we focus on frontalizing faces in the wild under various head poses, including extreme profile views. We propose a novel deep 3D Morphable Model (3DMM) conditioned Face Frontalization Generative Adversarial Network (GAN), termed FF-GAN, to generate neutral-head-pose face images. Our framework differs from both traditional GANs and 3DMM-based modeling. Incorporating the 3DMM into the GAN structure provides shape and appearance priors for fast convergence with less training data, while also supporting end-to-end training. The 3DMM-conditioned GAN employs not only the discriminator and generator losses but also a new masked symmetry loss to retain visual quality under occlusions, along with an identity loss to recover high-frequency information. Experiments on face recognition, landmark localization and 3D reconstruction consistently show the advantage of our frontalization method on in-the-wild datasets. Detailed results and resources are available at: http://cvlab.cse.msu.edu/project-face-frontalization.html.
Frontalization of face images observed from extreme viewpoints is a problem of fundamental interest in both human and machine facial processing and recognition. Indeed, while humans are innately skilled at face recognition, newborns do not perform better than chance on recognition from profile views, although this ability seems to develop rapidly within a few months of birth. Similarly, dealing with profile views of faces remains an enduring challenge for computer vision. The advent of deep convolutional neural networks (CNNs) has led to large strides in face recognition, with verification accuracy surpassing human levels [28, 27] on datasets such as LFW. But even representations learned with state-of-the-art CNNs suffer for profile views, with severe accuracy drops reported in recent studies [16, 26]. Besides aiding recognition, frontalization of faces is also a problem of independent interest, with potential applications such as face editing, accessorizing, and the creation of models for use in virtual and augmented reality.
In recent years, CNNs have leveraged the availability of large-scale face datasets to also learn pose-invariant representations for recognition, using either a joint single framework or multi-view pose-specific networks [46, 16]. Early works on face frontalization in computer vision rely on frameworks inspired by computer graphics. The well-known 3D Morphable Model (3DMM) explicitly models facial shape and appearance to match an input image as closely as possible; the recovered shape and appearance can then be used to render the face under novel viewpoints. Many 3D face reconstruction methods [22, 43, 23] build upon this direction by improving speed or accuracy. Deep learning has also made inroads into data-driven estimation of 3DMM models [46, 16, 15], circumventing drawbacks of early methods such as over-reliance on the accuracy of 2D landmark localization [40, 41]. However, due to restrictive Gaussian assumptions and the nature of the losses used, such deep models have insufficient representational ability for facial appearance and struggle to produce high-quality outputs. While inpainting methods attempt to minimize the quality loss due to self-occlusions, they still do not retain identity information.
In this paper, we propose a novel generative adversarial network framework, FF-GAN, that incorporates elements from both deep 3DMM and face recognition CNNs to achieve high-quality, identity-preserving frontalization from a single input image, which can be a profile view of up to 90°. Our framework targets in-the-wild conditions, which present greater challenges in illumination, head pose variation, self-occlusion and so on. To the best of our knowledge, this is the first face frontalization work that handles such extreme pose variations.
Noting the challenge of purely data-driven face frontalization, Section 3.1 proposes to enhance the input to the GAN with 3DMM coefficients estimated by a deep network. This provides a useful prior to regularize the frontalization; however, it is well known that deep 3DMM reconstructions are limited in their ability to retain high-frequency information. Thus, Section 3.2 proposes a method to combine the 3DMM coefficients with the input image to generate an image that maintains global pose accuracy while retaining the local information present in the input. In particular, the generator in our GAN produces a frontal image based on a reconstruction loss, a smoothness loss and a novel symmetry-enforcing loss. The aim of the generator is to fool the discriminator, presented in Section 3.3, into being unable to distinguish the generated frontal image from a real one. However, neither the 3DMM, which loses high-frequency information, nor the GAN, which only aligns domain-level distributions, suffices to preserve identity information in the generated image. To retain identity, Section 3.4 proposes to use a recognition engine to align the feature representations of the generated image with those of the input. Balanced training with all the above objectives results in high-quality frontalized faces that preserve identity.
We extensively evaluate our framework on several well-known datasets including Multi-PIE, AFLW, LFW, and IJB-A. In particular, Section 5 demonstrates that face verification accuracy on LFW using information from our frontalized outputs exceeds the previous state of the art. We observe even larger improvements on Multi-PIE, especially for large viewpoints, with ours being the sole method among recent works to produce high recognition accuracies at extreme profile views. We present ablation studies analyzing the effect of each module, along with qualitative results visualizing the quality of our frontalization.
To summarize, our key contributions are:
A novel GAN-based end-to-end deep framework to achieve face frontalization even for extreme viewpoints.
A deep 3DMM model to provide shape and appearance regularization beyond the training data.
Effective symmetry-based loss and smoothness regularization that lead to generation of high-quality images.
Use of a deep face recognition CNN to enforce that generated faces satisfy identity-preservation, besides realism and frontalization.
Consistent improvements on several datasets across multiple tasks, such as face recognition, landmark localization and 3D reconstruction.
2 Related Work
Synthesizing a frontal view of a face from a single image with large pose variation is very challenging, because recovering 3D information from 2D projections is ambiguous and self-occlusion causes missing data. A straightforward approach is to build 3D models of faces and directly rotate them. Seminal works date back to the 3D Morphable Model (3DMM), which models both shape and appearance as PCA spaces. 3DMM fitting helps boost 3D face reconstruction and 3D landmark localization [14, 43] performance. Hassner et al. apply a shared 3D shape model combined with input images to produce the frontalized appearance. Ferrari et al. use 3DMM to fit the input image and search for dense point correspondences to complete the occluded region. Zhu et al. provide a high-fidelity pose and expression normalization method based on 3DMM.
Besides model-based methods, Sagonas et al. formulate frontalization as a low-rank constrained optimization problem targeting landmark localization. Some deep learning based methods also show promising performance in pose rectification. A recurrent transform unit has been proposed to synthesize discrete 3D views, and a concatenated network structure has been applied to rotate the face, with the output regularized by an image-level reconstruction loss. In [5, 2], a perceptual loss is designed to supervise generator training. Our method is also deep learning based: we incorporate a 3DMM into a deep network and propose a novel GAN-based framework that jointly utilizes the geometric prior from the 3DMM and the high-frequency information from the original image. A discriminator and a face recognition engine regularize the generator to preserve identity.
Generative Adversarial Networks
Introduced by Goodfellow et al. , the Generative Adversarial Network (GAN) maps from a source data distribution to a target distribution using a minimax optimization over a generator and a discriminator. GANs have been used for a wide range of applications including image generation [6, 7, 29], 3D object generation , etc. Deep Convolutional GAN (DC-GAN)  extended the original GAN multi-layer perceptron network into convolutional structures. Many methods favor conditional settings by introducing latent factors to disentangle the objective space and thereby achieve better synthesis. For instance, Info-GAN  employs the latent code for information loss to regularize the generative network.
A recent method, DR-GAN, has been proposed concurrently with this work. It also uses a recognition engine to regularize identity, while using a pose code as input to the encoder to generate an image with a specific pose. Instead of explicitly injecting the latent code, our framework learns the shape and appearance code from a differentiable 3DMM deep network, which supports end-to-end joint optimization of the GAN. Unlike our framework, it is the encoder in DR-GAN that is required to be identity-preserving, which suffices for reconstruction, but results in loss of spatial and high-frequency information crucial for image generation.
Pose-Invariant Feature Representation
While face frontalization may be considered an image-level pose-invariant representation, feature representations invariant to pose have also been a mainstay of face recognition. Early works employ Canonical Correlation Analysis (CCA) to analyze the commonality among pose-variant samples. Recently, deep learning based methods consider several aspects, such as multiview perception layers , to learn a model separating identity from viewpoints. Feature pooling across different poses  is also proposed to allow a single network structure for multiple pose inputs. Pose-invariant feature disentanglement  or identity preservation [45, 39] methods aim to factorize out the non-identity part with carefully designed networks. Some other methods focus on fusing information at the feature level  or distance level . Our method is mostly related to identity preserving methods, in which we apply a face recognition engine as an additional discriminator to guide the generator for better synthesis, while it updates itself towards better discriminative capability for identity classification.
3 Proposed Approach
The mainstay of FF-GAN is a generative adversarial network consisting of a generator G and a discriminator D. G takes a non-frontal face as input to generate a frontal output, while D attempts to classify it as a real frontal image or a generated one. Additionally, we include a face recognition engine C that regularizes the generator output to preserve identity features, and a deep 3DMM reconstruction module R that provides shape and appearance priors to the GAN, which play a crucial role in alleviating the difficulty of large-pose face frontalization. Figure 2 summarizes our framework.
Let $\{x_i, x_i^g, p_i, y_i\}_{i=1}^{N}$ be the training set with $N$ samples, each consisting of an input image $x$ with arbitrary pose, a corresponding ground-truth frontal face $x^g$, the ground-truth 3DMM coefficients $p$ and the identity label $y$. We henceforth omit the sample index for clarity.
3.1 Reconstruction Module
Frontalization under extreme pose variation is a challenging problem. While a purely data-driven approach might be possible given sufficient data and an appropriate training regimen, it is non-trivial. Therefore, we impose a prior on the generation process in the form of a 3D Morphable Model (3DMM). This reduces training complexity and leads to better empirical performance with limited data.
Recall that 3DMM represents faces in a PCA space:
\[ S = \bar{S} + A_{id}\,\alpha_{id} + A_{exp}\,\alpha_{exp}, \qquad T = \bar{T} + A_{tex}\,\alpha_{tex}, \]
where $S$ denotes the 3D shape coordinates computed as the linear combination of the mean shape $\bar{S}$, the shape basis $A_{id}$ and the expression basis $A_{exp}$, while $T$ is the texture, the linear combination of the mean texture $\bar{T}$ and the texture basis $A_{tex}$. The coefficients $\{\alpha_{id}, \alpha_{exp}, \alpha_{tex}\}$ define a unique 3D face.
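The PCA formulation above is simple to illustrate. The following minimal numpy sketch (function name and toy dimensions are ours, not from the paper) assembles shape and texture as linear combinations of a mean and basis vectors:

```python
import numpy as np

def face_3dmm(mean_shape, id_basis, exp_basis, alpha_id, alpha_exp,
              mean_tex, tex_basis, alpha_tex):
    """Assemble a 3DMM face: shape and texture as mean plus linear basis terms."""
    shape = mean_shape + id_basis @ alpha_id + exp_basis @ alpha_exp
    texture = mean_tex + tex_basis @ alpha_tex
    return shape, texture

# Toy example: 4 vertices (12 shape coordinates), 2 bases per factor.
rng = np.random.default_rng(0)
S_bar, T_bar = np.zeros(12), np.zeros(12)
A_id, A_exp, A_tex = (rng.normal(size=(12, 2)) for _ in range(3))
shape, texture = face_3dmm(S_bar, A_id, A_exp, np.array([1.0, 0.5]),
                           np.zeros(2), T_bar, A_tex, np.array([0.2, 0.2]))
```

With a zero mean and zero expression coefficients, the resulting shape is exactly the identity-basis combination, which makes the linearity of the model easy to verify.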
Previous work [14, 43] applies 3DMM for face alignment, where a weak perspective projection model is used to project the 3D shape into 2D space. Similarly, we optimize a projection matrix $m$ based on pitch, yaw, roll, scale and 2D translations to represent the pose of an input face image. Let $p = \{m, \alpha_{id}, \alpha_{exp}, \alpha_{tex}\}$ denote the 3DMM coefficients. The target of the reconstruction module R is to estimate $\hat{p} = R(x)$ given an input image $x$. Since the intent is for R to also be trainable with the rest of the framework, we use a CNN model based on CASIA-Net for this regression task. We apply z-score normalization to each dimension of the parameters before training. A weighted parameter distance cost is used:
\[ L_R = (\hat{p} - p)^{\top} W (\hat{p} - p), \]
where $W$ is the importance matrix whose diagonal holds the weight of each parameter.
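Assuming a diagonal importance matrix, the weighted parameter distance can be sketched as follows (function name is ours):

```python
import numpy as np

def weighted_param_distance(p_hat, p, w_diag):
    """(p_hat - p)^T W (p_hat - p) with diagonal importance matrix W = diag(w_diag)."""
    d = p_hat - p
    return float(d @ (w_diag * d))

# Errors in heavily weighted parameters (e.g. pose) cost more than others.
cost = weighted_param_distance(np.array([1.0, 2.0, 3.0]),
                               np.array([0.0, 2.0, 2.0]),
                               np.array([4.0, 1.0, 1.0]))
```

Here the first parameter's unit error contributes 4, the third contributes 1, reflecting the per-parameter weighting.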
3.2 Generation Module
The pose estimate obtained from the 3DMM is quite accurate; however, frontalization using it alone loses the high-frequency details present in the original image. This is understandable, since a low-dimensional PCA representation preserves most of the energy in the lower frequency components. Thus, we use a generative model that relies on both the 3DMM coefficients and the input image to recover a frontal face preserving both low and high frequency components.
In Figure 2, features from the two inputs to the generator G are fused through an encoder-decoder network to synthesize a frontal face $x^f = G(x, \hat{p})$. To penalize deviation of the output from the ground truth $x^g$, the straightforward objective is a reconstruction loss that aims at reconstructing the ground truth with minimal error:
\[ L_{rec} = \| x^f - x^g \|_1. \]
Since an $L_2$ loss empirically leads to blurry output, we use an $L_1$ loss to better preserve high frequency signals. At the beginning of training, the reconstruction loss can harm the overall process, since the generated output is far from frontalized and the loss therefore operates on a poor set of correspondences. Thus, the weight for the reconstruction loss should be set in accordance with the training stage. The details of tuning this weight are discussed in Section 4.
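The contrast between the two losses is easy to sketch (function names are ours): the mean absolute error penalizes deviations linearly, while the mean squared error's quadratic penalty encourages averaged, blurry outputs.

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error: tends to preserve high-frequency detail."""
    return float(np.abs(pred - target).mean())

def l2_loss(pred, target):
    """Mean squared error: tends to produce blurry, averaged outputs."""
    return float(((pred - target) ** 2).mean())

pred, target = np.array([0.0, 2.0]), np.array([1.0, 0.0])
```

On this toy pair, the L2 loss amplifies the larger error (2 → 4) while L1 keeps it linear.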
To reduce block artifacts, we use a spatial total variation loss to encourage smoothness in the generator output:
\[ L_{tv} = \frac{1}{|\Omega|} \sum_{(u,v)\in\Omega} \left| \nabla x^f_{u,v} \right|, \]
where $\nabla$ is the image gradient computed over the two-dimensional coordinate increments $(\Delta u, \Delta v)$, $\Omega$ is the image region and $|\Omega|$ is the area normalization factor.
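A minimal sketch of an area-normalized total variation term, using forward differences along both image axes (function name is ours):

```python
import numpy as np

def total_variation(img):
    """Area-normalized sum of absolute forward differences along both axes."""
    du = np.abs(np.diff(img, axis=0)).sum()  # vertical differences
    dv = np.abs(np.diff(img, axis=1)).sum()  # horizontal differences
    return float((du + dv) / img.size)
```

A constant image has zero total variation, while any intensity change contributes in proportion to its magnitude, which is why this term suppresses blocky artifacts.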
Based on the observation that human faces share self-similarity across their left and right halves, we explicitly impose a symmetry loss. We recover a frontalized 2D projected silhouette $M$ from the frontalized 3DMM model, indicating the visible parts of the face. The mask is binary, with nonzero values indicating visible regions and zero otherwise. A similar masking constraint has been shown effective in recent work. By horizontally flipping the face, we obtain another mask $M'$. We demand that the generated frontal images for the original input image and its flipped version be similar within their respective masks:
\[ L_{sym} = \left\| M \odot G(x, \hat{p}) - M' \odot G(x', \hat{p}') \right\|_1. \]
Here, $x'$ is the horizontally flipped image for input $x$, $\hat{p}'$ are the 3DMM coefficients for $x'$, and $\odot$ denotes element-wise multiplication. Note that the role of the mask is to focus on the visible face portion, rather than the invisible portion and background.
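The masked comparison can be sketched as follows (function name is ours; a perfectly left-right symmetric output incurs zero loss against its mirror):

```python
import numpy as np

def masked_symmetry_loss(frontal_a, frontal_b, mask_a, mask_b):
    """L1 distance between two frontalizations, each restricted to its mask."""
    return float(np.abs(mask_a * frontal_a - mask_b * frontal_b).mean())

# A symmetric face equals its horizontal mirror, so the loss vanishes.
face = np.array([[1.0, 2.0, 1.0],
                 [3.0, 4.0, 3.0]])
mirror = face[:, ::-1]
mask = np.ones_like(face)
loss = masked_symmetry_loss(face, mirror, mask, mask[:, ::-1])
```

In the full framework the two inputs are the frontalizations of the image and its flip, and the masks restrict the comparison to the regions visible in each.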
3.3 Discrimination Module
Generative Adversarial Networks, formulated as a two-player game between a generator and a discriminator, have been widely used for image generation. In this work, the generator G synthesizes a frontal face image and the discriminator D distinguishes the generated face $x^f$ from the real frontal face $x^g$. Note that in a conventional GAN, all training images are considered real samples. In contrast, we limit "real" faces to frontal views only; thus, G must generate images that are both realistic and frontal.
Our D consists of five convolutional layers and one linear layer that generates a 2D vector, with each dimension representing the probability of the input being real or generated. During training, D is updated with two batches of samples in each iteration, minimizing the following objective:
\[ L_D = - \mathbb{E}_{x^g \in \mathcal{R}}\, \log D(x^g) \;-\; \mathbb{E}_{x^f \in \mathcal{F}}\, \log\left(1 - D(x^f)\right), \]
where $\mathcal{R}$ and $\mathcal{F}$ are the real and generated image sets, respectively.
On the other hand, G aims to fool D into classifying the generated image as real, with the following loss:
\[ L_{gan} = - \mathbb{E}_{x^f \in \mathcal{F}}\, \log D(x^f). \]
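A minimal numpy sketch of these two adversarial objectives (function names are ours; the scores stand for D's estimated probability that an input is a real frontal face):

```python
import numpy as np

def d_loss(scores_real, scores_fake, eps=1e-8):
    """Discriminator BCE: push D(real frontal) -> 1 and D(generated) -> 0."""
    return float(-(np.log(scores_real + eps).mean()
                   + np.log(1.0 - scores_fake + eps).mean()))

def g_loss(scores_fake, eps=1e-8):
    """Generator loss: push D(generated) -> 1, i.e. fool the discriminator."""
    return float(-np.log(scores_fake + eps).mean())
```

A confident, correct discriminator yields near-zero loss, while an undecided one (scores near 0.5) pays roughly 2·log 2; the generator's loss shrinks as its outputs look more real to D.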
The competition between G and D improves both modules. In the early stages, when face images are not yet fully frontalized, D focuses on the pose of the face to make its real-or-generated decision, which in turn helps G to generate frontal faces. In the later stages, when face images are frontalized, D focuses on subtle details of frontal faces, which guides G to generate realistic frontal faces that would be difficult to achieve with the reconstruction and symmetry supervision alone.
3.4 Recognition Module
A key challenge in large-pose face frontalization is preserving the original identity, which is difficult due to self-occlusion in profile faces. The discriminator above can only determine whether the generated image is realistic and frontal, but does not consider whether the identity of the input image is retained. Although the L1, total variation and masked symmetry losses guide face generation, they treat each pixel equally, which results in a loss of discriminative power for identity features. Therefore, we use a recognition module C to impart the correct identity to the generated images.
We use a CASIA-Net structure for the recognition engine C, trained with a cross-entropy loss to classify an image $x$ with ground-truth identity $y$:
\[ L_C = - \sum_{j} \mathbb{1}(j = y)\, \log C_j(x), \]
where $j$ is the index over the identity classes. Our generator G is then regularized by the signal from C to preserve the same identity as the input image. If the identity label of the input image is unavailable, we instead regularize the extracted identity features $f(x^f)$ of the generated image to be similar to those of the input image, $f(x)$. During training, C is updated with real input images to retain its discriminative power, and the loss from the generated images is back-propagated to update the generator G:
\[ L_{id} = \left\| f(x^f) - f(x) \right\|_2^2. \]
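Both identity signals can be sketched in a few lines (function names are ours; `probs` stands for the recognizer's softmax output, and the feature vectors for its penultimate-layer embeddings):

```python
import numpy as np

def identity_feature_loss(feat_generated, feat_input):
    """Squared L2 distance between identity features of generated and input faces."""
    return float(((feat_generated - feat_input) ** 2).sum())

def identity_ce_loss(probs, label, eps=1e-8):
    """Cross-entropy on the recognizer's softmax output for the true identity."""
    return float(-np.log(probs[label] + eps))
```

Identical features incur zero feature loss, and the cross-entropy is smaller when the recognizer assigns high probability to the true identity.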
To summarize the framework, the reconstruction module guides the frontalization through the parameter distance cost, the discriminator through the adversarial loss, and the recognition engine through the identity loss. Thus, the generator combines all these sources of information to optimize an overall objective:
\[ L_G = \lambda_{rec} L_{rec} + \lambda_{tv} L_{tv} + \lambda_{sym} L_{sym} + \lambda_{gan} L_{gan} + \lambda_{id} L_{id}. \]
The weights above are discussed in Section 4 to illustrate how each component contributes to the joint optimization of G.
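The combination is a plain weighted sum, sketched below (the weight and loss names are ours; the numeric values are placeholders, not the paper's settings):

```python
def generator_objective(losses, lam):
    """Weighted sum of the five generator loss terms."""
    return sum(lam[k] * losses[k] for k in losses)

total = generator_objective(
    {"rec": 1.0, "tv": 0.2, "sym": 0.5, "gan": 0.7, "id": 0.3},
    {"rec": 0.01, "tv": 1.0, "sym": 0.01, "gan": 1.0, "id": 1.0},
)
```

Down-weighting a term (e.g. the reconstruction loss early in training) directly shrinks its contribution to the total, which is how the staged schedule in Section 4 shifts emphasis between losses.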
4 Implementation Details
Our framework consists of four main parts, as shown in Figure 2: the deep 3DMM reconstructor R, a two-way fused encoder-decoder generator G, the real/generated discriminator D, and a face recognition engine C jointly trained for identity regularization. The detailed network structures are given in the appendix. It is difficult to initialize the overall network from scratch: the generator expects correct 3DMM coefficients, so the reconstructor needs to be pre-trained, and the identity regularization likewise requires correct identity information from the recognizer. Thus, R is pre-trained on 300W-LP until its face alignment performance is comparable to previous work, and C is pre-trained on CASIA-WebFace to achieve good verification accuracy on LFW.
End-to-end joint training is conducted once R and C are ready. Note that we train the generator G and discriminator D from scratch simultaneously, because we believe pre-trained G and D would not contribute much to the adversarial training: a good G paired with a poor D would quickly be pulled down again, and vice versa. Further, these two components should match each other; a good G may be rated poorly by a good D if the discriminator was trained on other sources. We set up five balance factors to control each loss's contribution to the overall objective, and divide the end-to-end training into three steps. In the first step, the weights on the two losses tied to pixel-level correspondence between the generated output and the reference input (the reconstruction and symmetry terms) are initialized to small values (0.01), while the remaining weights are typically 1.0. Once the training errors of G and D strike a balance, usually within 20 epochs, we raise the first two weights to 1.0 while tuning two of the remaining weights down to 0.5 and 0.8, respectively. It takes another 20 epochs to strike a new balance. Note that we fix the model R during this two-stage training; after that, we relax R and fine-tune all the modules jointly. Further details are included in the supplementary materials; network structures and more results are available at: http://cvlab.cse.msu.edu/project-face-frontalization.html.
5.1 Settings and Databases
We evaluate the proposed FF-GAN on a variety of tasks including face frontalization, 3D face reconstruction and face recognition. Frontalization and 3D reconstruction are evaluated qualitatively by comparing the visual quality of the generated images to ground truth; we also report quantitative results on sparse 2D landmark localization accuracy. Face recognition is evaluated quantitatively on several challenging databases. We pre-process the images by applying state-of-the-art face detection and face alignment algorithms and crop to a common image size across all databases. The databases used for training and testing are introduced below.
300W-LP consists of images augmented from 300W by the face profiling approach, which is designed to generate images across a wide range of yaw angles up to profile views. We use 300W-LP as our training set by forming image pairs of pose-variant and frontal-view images of the same identity. The estimated 3DMM coefficients provided with the dataset are treated as ground truth to train module R.
AFLW2000 is constructed for 3D face alignment evaluation by the same face profiling method applied in 300W-LP. The database includes 3DMM coefficients and augmented landmarks for the first 2,000 images of AFLW. We use this database to evaluate module R for reconstruction.
Multi-PIE consists of more than 750,000 images of 337 subjects with large variations in pose, illumination and expression. We select a subset of images with neutral expression across poses and illuminations from all four sessions. The first 200 subjects are used for training and the remaining 137 subjects for testing. We randomly choose one image per test subject with frontal pose and neutral illumination as the gallery and use all the rest as probe images.
CASIA-WebFace consists of 494,414 images of 10,575 subjects, from which we remove the subjects overlapping with IJB-A. It is a widely used large-scale database for face recognition, which we apply to pre-train and fine-tune module C.
LFW contains 13,233 images collected from the Internet. The verification set consists of 10 folds, each with 300 same-person pairs and 300 different-person pairs. We evaluate face verification performance on the frontalized images and compare with previous frontalization algorithms.
IJB-A includes images and video frames of 500 subjects, and is challenging due to its uncontrolled pose variations. Unlike the previous databases, IJB-A defines face template matching, where each template contains a varying number of images. It consists of 10 splits, each a different partition of the full set. We fine-tune model C on the training set of each split and evaluate face verification and identification on the corresponding testing set.
5.2 3D Reconstruction
FF-GAN borrows prior shape and appearance information from 3DMM to serve as the reference for frontalization. Though we do not specifically optimize for the reconstruction task, it is interesting to examine whether our reconstructor R does a fair job at 3D reconstruction.
Figure 3 (a) shows five examples from AFLW2000 for landmark localization and frontalization. Our method localizes the key points correctly and generates realistic frontal faces even for extreme profile inputs. We also quantitatively evaluate landmark localization using the normalized mean squared error, where our model achieves performance competitive with 3DDFA and SDM, even though those methods are tuned specifically for the localization task. This indicates that our reconstruction module provides correct geometric information.
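The localization metric is straightforward to sketch (function name is ours; the normalizer is typically a face size such as the bounding-box diagonal):

```python
import numpy as np

def normalized_mean_error(pred, gt, normalizer):
    """Mean per-landmark Euclidean distance, divided by a normalizing face size."""
    return float(np.linalg.norm(pred - gt, axis=1).mean() / normalizer)

gt = np.array([[0.0, 0.0], [10.0, 0.0]])
pred = gt + np.array([3.0, 4.0])  # every landmark off by a 3-4-5 offset
nme = normalized_mean_error(pred, gt, normalizer=100.0)
```

Each landmark here is off by 5 pixels, so with a normalizer of 100 the error is 0.05; normalization makes scores comparable across face sizes.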
Given the input images in (a), we compute the 3DMM coefficients with our model R and generate the 3D geometry and texture from the 3DMM, as shown in Figure 3 (b). Our method effectively preserves shape and identity information in the estimated 3D models, which can even outperform the ground truth provided by 3DDFA. For example, the shape and texture estimates in the last example are more similar to the input, while the ground truth clearly shows a male subject rather than a female. Since the 3DMM coefficients cannot preserve local appearance, we obtain such high-frequency information from the input image; thus, fusing 3DMM coefficients with the original input proves empirically reasonable.
5.3 Face Recognition
One motivation for face frontalization is to determine whether the frontalized images recover the correct identity information for the self-occluded missing parts, and thus boost face recognition performance. To verify this, we evaluate our model on LFW, Multi-PIE, and IJB-A for verification and identification tasks. Features are extracted from model C across all experiments, and Euclidean distance is used as the metric for face matching.
Evaluation on LFW We evaluate face verification performance on our frontalized LFW images against previous face frontalization methods. LFW-FF-GAN denotes images generated by our method, while LFW-3D and LFW-HPEN are from prior frontalization methods; the collected databases are pre-processed in the same way as ours. As shown in Table 1, our method achieves strong results compared to the state of the art, which verifies that our frontalization better preserves identity information. Figure 4 shows some visual examples. Compared to the state of the art, our method generates realistic and identity-preserving faces, especially at large poses. The facial detail filling technique used in prior work relies on a symmetry assumption and may lead to inferior results (see Figure 4). In contrast, we introduce a symmetry loss into the training process that generalizes to test images without imposing symmetry as a hard post-processing constraint.
Evaluation on IJB-A We further evaluate our algorithm on the IJB-A database. Following prior work, we select a subset of well-aligned images in each template for face matching. We define our distance metric as the original image-pair distance plus the weighted generated image-pair distance, where the weight is the confidence score provided by our model D for the generated pair. Recall that D is trained for the real-versus-generated classification task, so its score reflects the quality of the generated images: the poorer the quality of a generated image, the less its pair contributes to the distance metric. With this fused distance, we expect the generated images to provide complementary information that boosts recognition performance.
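The fused metric can be sketched as follows (function and parameter names are ours; `conf_a` and `conf_b` stand for the discriminator's confidence scores on the two generated images of a pair):

```python
def fused_distance(d_orig, d_gen, conf_a, conf_b):
    """Original-pair distance plus generated-pair distance scaled by D's confidences."""
    return d_orig + conf_a * conf_b * d_gen
```

When either generated image looks poor to the discriminator (confidence near zero), the generated-pair term vanishes and the metric falls back to the original-pair distance.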
Table 2 shows the verification and identification performance. On verification, our method achieves consistently better accuracy than the baseline methods, with significant gaps at both FAR operating points. On identification, our fused metric also achieves consistently better results, with improvements at both reported ranks. As a challenging in-the-wild face database, the large pose variations, complex backgrounds and uncontrolled illumination of IJB-A prevent the compared methods from performing well; closing even one of those variation gaps leads to a large improvement, as evidenced by our face frontalization method rectifying pose variation.
Evaluation on Multi-PIE Multi-PIE allows a graded evaluation with respect to pose, illumination, and expression variations, making it an important database for validating our method against prior frontalization work. The rank-1 identification rate is reported in Table 3. Note that previous works only consider a limited pose range, while our method handles all pose variations including full profile views at 90°. The results suggest that for near-frontal poses our method is competitive with the state of the art, but at larger poses it demonstrates significant advantages over all other methods. Although recognition on the synthetic images alone performs worse than on the original images, the fused results outperform the original images, especially for large-pose faces.
5.4 Qualitative Results
Figure 5 shows visual results for unseen images from Multi-PIE, AFLW, and IJB-A. The input images range from medium to large poses and exhibit large variations in race, age, expression, and lighting conditions; nevertheless, FF-GAN generates realistic and identity-preserving frontal faces.
5.5 Ablation Study on Multi-PIE
FF-GAN consists of four modules: R, G, D and C. The generator G is the key component for image synthesis and cannot be removed. We train three partial variants by removing each of the remaining modules (R, D, or C) in turn, and another three variants by removing each of the three loss functions applied to the generated images (the reconstruction, total variation, and masked symmetry losses). We keep the training process and all hyper-parameters the same and explore how the performance of these models differs.
Figure 6 shows visual comparisons between the proposed framework and its incomplete variants. Our full method is visually better across all poses, suggesting that each component is essential for face frontalization. Without the recognizer C, it is hard to preserve identity, especially at large poses. Without the discriminator D, the generated images are blurry and lack high-frequency identity information. Without the reconstructor R, artifacts appear on the generated faces, which highlights the effectiveness of the 3DMM in frontalization. Without the reconstruction loss, identity is preserved to some extent, but the overall image quality is low and the lighting condition is not preserved.
Table 4 shows quantitative results for the ablation models, evaluating the recognition rate of the synthetic images generated by each. FF-GAN with all modules and all loss functions performs best among all variants, which confirms the effectiveness of each part of the framework. For example, performance drops dramatically without the recognition-engine regularization, and the 3DMM module also plays a significant role in face frontalization.
In this work, we propose a 3DMM-conditioned GAN framework to frontalize faces under all pose ranges including extreme profile views. To the best of our knowledge, this is the first work to expand the pose range to full profile views on challenging large-scale in-the-wild databases. The 3DMM reconstruction module provides an important shape and appearance prior to guide the generator in rotating faces, and the recognition engine regularizes the generated image to preserve identity. We propose new losses and carefully design the training procedure to obtain high-quality frontalized images. Extensive experiments suggest that FF-GAN boosts face recognition performance and can be applied to 3D face reconstruction. Large-pose face frontalization in the wild is a challenging and ill-posed problem; we believe this work makes convincing progress towards a viable solution.
-  V. Blanz and T. Vetter. A morphable model for the synthesis of 3D faces. In SIGGRAPH, 1999.
-  K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In CVPR, 2017.
-  J.-C. Chen, V. M. Patel, and R. Chellappa. Unconstrained face verification using deep CNN features. In WACV, 2016.
-  X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016.
-  F. Cole, D. Belanger, D. Krishnan, A. Sarna, I. Mosseri, and W. T. Freeman. Synthesizing normalized faces from facial identity features. In CVPR, 2017.
-  E. Denton, S. Chintala, and R. Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, 2015.
-  A. Dosovitskiy, J. Springenberg, and T. Brox. Learning to generate chairs with convolutional neural networks. In CVPR, 2015.
-  J. Fagan. Infants’ recognition of invariant features of faces. Child Development, 1976.
-  C. Ferrari, G. Lisanti, S. Berretti, and A. Del Bimbo. Effective 3D based frontalization for unconstrained face recognition. In ICPR, 2016.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
-  R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker. Multi-PIE. Image and Vision Computing, 2009.
-  T. Hassner, S. Harel, E. Paz, and R. Enbar. Effective face frontalization in unconstrained images. In CVPR, 2015.
-  G. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical report, University of Massachusetts, Amherst, 2007.
-  A. Jourabloo and X. Liu. Pose-invariant 3D face alignment. In ICCV, 2015.
-  A. Jourabloo and X. Liu. Pose-invariant face alignment via CNN-based dense 3D model fitting. IJCV, 2017.
-  M. Kan, S. Shan, and X. Chen. Multi-view deep network for cross-view classification. In CVPR, 2016.
-  B. F. Klare, B. Klein, E. Taborsky, A. Blanton, J. Cheney, K. Allen, P. Grother, A. Mah, M. Burge, and A. K. Jain. Pushing the frontiers of unconstrained face detection and recognition: IARPA Janus Benchmark A. In CVPR, 2015.
-  I. Masi, S. Rawls, G. Medioni, and P. Natarajan. Pose-aware face recognition in the wild. In CVPR, 2016.
-  E. Park, J. Yang, E. Yumer, D. Ceylan, and A. C. Berg. Transformation-grounded image generation network for novel 3D view synthesis. In CVPR, 2017.
-  X. Peng, X. Yu, K. Sohn, D. N. Metaxas, and M. Chandraker. Reconstruction-based disentanglement for pose-invariant face recognition. In ICCV, 2017.
-  A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, 2015.
-  J. Roth, Y. Tong, and X. Liu. Unconstrained 3D face reconstruction. In CVPR, 2015.
-  J. Roth, Y. Tong, and X. Liu. Adaptive 3D face reconstruction from unconstrained photo collections. TPAMI, 2016.
-  C. Sagonas, Y. Panagakis, S. Zafeiriou, and M. Pantic. Robust statistical face frontalization. In ICCV, 2015.
-  C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic. 300 faces in-the-wild challenge: The first facial landmark localization challenge. In ICCVW, 2013.
-  S. Sengupta, J. C. Chen, C. Castillo, V. M. Patel, R. Chellappa, and D. W. Jacobs. Frontal to profile face verification in the wild. In WACV, 2016.
-  Y. Sun, Y. Chen, X. Wang, and X. Tang. Deep learning face representation by joint identification-verification. In NIPS, 2014.
-  Y. Sun, X. Wang, and X. Tang. Deep learning face representation from predicting 10,000 classes. In CVPR, 2014.
-  Y. Taigman, A. Polyak, and L. Wolf. Unsupervised cross-domain image generation. In ICLR, 2017.
-  L. Tran, X. Yin, and X. Liu. Disentangled representation learning GAN for pose-invariant face recognition. In CVPR, 2017.
-  C. Turati, H. Bulf, and F. Simion. Newborns’ face recognition over changes in viewpoint. Cognition, 2008.
-  D. Wang, C. Otto, and A. K. Jain. Face search at scale. TPAMI, 2016.
-  J. Wu, C. Zhang, T. Xue, W. T. Freeman, and J. B. Tenenbaum. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In NIPS, 2016.
-  X. Xiong and F. De la Torre. Supervised descent method and its applications to face alignment. In CVPR, 2013.
-  F. Yang, W. Choi, and Y. Lin. Exploit all the layers: Fast and accurate CNN object detector with scale dependent pooling and cascaded rejection classifiers. In CVPR, 2016.
-  J. Yang, S. Reed, M.-H. Yang, and H. Lee. Weakly supervised disentangling with recurrent transformations for 3D view synthesis. In NIPS, 2015.
-  D. Yi, Z. Lei, S. Liao, and S. Z. Li. Learning face representation from scratch. arXiv preprint arXiv:1411.7923, 2014.
-  J. Yim, H. Jung, B. Yoo, C. Choi, D. Park, and J. Kim. Rotating your face using multi-task deep neural network. In CVPR, 2015.
-  X. Yin and X. Liu. Multi-task convolutional neural network for pose-invariant face recognition. arXiv preprint arXiv:1702.04710, 2017.
-  X. Yu, J. Huang, S. Zhang, and D. N. Metaxas. Face landmark fitting via optimized part mixtures and cascaded deformable model. TPAMI, 2015.
-  X. Yu, Z. Lin, J. Brandt, and D. N. Metaxas. Consensus of regression for occlusion-robust facial feature localization. In ECCV, 2014.
-  X. Yu, F. Zhou, and M. Chandraker. Deep deformation network for object landmark localization. In ECCV, 2016.
-  X. Zhu, Z. Lei, X. Liu, H. Shi, and S. Z. Li. Face alignment across large poses: A 3D solution. In CVPR, 2016.
-  X. Zhu, Z. Lei, J. Yan, D. Yi, and S. Z. Li. High-fidelity pose and expression normalization for face recognition in the wild. In CVPR, 2015.
-  Z. Zhu, P. Luo, X. Wang, and X. Tang. Deep learning identity-preserving face space. In ICCV, 2013.
-  Z. Zhu, P. Luo, X. Wang, and X. Tang. Multi-view perceptron: a deep model for learning face identity and view representations. In NIPS, 2014.