Disentangling Features in 3D Face Shapes for Joint Face Reconstruction and Recognition

This work is supported by the National Key Research and Development Program of China (2017YFB0802300) and the National Natural Science Foundation of China (61773270, 61703077).

Feng Liu, Ronghang Zhu, Dan Zeng, Qijun Zhao, and Xiaoming Liu
College of Computer Science, Sichuan University
Department of Computer Science and Engineering, Michigan State University
Corresponding author. Email: qjzhao@scu.edu.cn.
  
Abstract

This paper proposes an encoder-decoder network to disentangle shape features during 3D face reconstruction from single 2D images, such that the tasks of reconstructing accurate 3D face shapes and learning discriminative shape features for face recognition can be accomplished simultaneously. Unlike existing 3D face reconstruction methods, our proposed method directly regresses dense 3D face shapes from single 2D images, and tackles the identity and residual (i.e., non-identity) components in 3D face shapes explicitly and separately based on a composite 3D face shape model with latent representations. We devise a training process for the proposed network with a joint loss measuring both the face identification error and the 3D face shape reconstruction error. To construct the training data, we develop a method for fitting a 3D morphable model (3DMM) to multiple 2D images of a subject. Comprehensive experiments have been done on the MICC, BU3DFE, LFW and YTF databases. The results show that our method expands the capacity of the 3DMM for capturing discriminative shape features and facial detail, and thus outperforms existing methods both in 3D face reconstruction accuracy and in face recognition accuracy.

1 Introduction

Figure 1: Comparison between the learning process of (a) existing methods and (b) our proposed method. GT denotes Ground Truth. (d) and (e) are the 3D face shapes and the disentangled identity shapes reconstructed by our method for the images in (c) from LFW [15].

3D face shapes reconstructed from 2D images have been proven to benefit many tasks, e.g., face alignment or facial landmark localization [43, 18], face animation [9, 13], and face recognition [5, 12]. Much prior work has been devoted to reconstructing 3D face shapes from a single 2D image, including shape-from-shading (SFS) based methods [14, 20], 3D morphable model (3DMM) fitting based methods [4, 5], and recently proposed regression based methods [23, 24]. These methods mostly aim to recover 3D face shapes that are loyal to the input 2D images or that retain as much facial detail as possible (see Fig. 1). Few of them explicitly consider the identity-sensitive and identity-irrelevant features in the reconstructed 3D faces. Consequently, very few studies have been reported on recognizing faces using the reconstructed 3D faces, either by themselves or by fusing them with legacy 2D face recognition [5, 34].

Using real 3D face shapes acquired by 3D face scanners for face recognition, on the other hand, has been extensively studied, and promising recognition accuracy has been achieved [6, 11]. Apple recently claimed to use 3D face matching in its iPhone X for cellphone unlocking [1]. All of these prove the discriminative power of 3D face shapes. Such a big performance gap between reconstructed 3D face shapes and real 3D face shapes, in our opinion, demonstrates that existing 3D face reconstruction methods seriously undervalue the identity features in 3D face shapes. Taking the widely used 3DMM fitting based methods as an example, their reconstructed 3D faces are constrained to the limited shape space spanned by the pre-determined bases of the 3DMM, and thus perform poorly in capturing the features unique to different individuals [41].

Inspired by the latest developments in disentangled feature learning for 2D face recognition [35, 27], we propose to disentangle the identity and non-identity components of 3D face shapes and, more importantly, to jointly reconstruct accurate 3D face shapes that are loyal to the input 2D images and learn shape features that are discriminative for face recognition. These two tasks, at first glance, seem to contradict each other. On one hand, face recognition prefers identity-sensitive features, but not every detail on faces; on the other hand, 3D reconstruction attempts to recover as much facial detail as possible, regardless of whether the detail benefits or distracts from facial identity recognition. In this paper, however, we show that by exploiting the 'contradictory' objectives of recognition and reconstruction, we are able to disentangle identity-sensitive features from identity-irrelevant features in 3D face shapes, and thus simultaneously recognize faces robustly with the identity-sensitive features and reconstruct 3D face shapes accurately with both kinds of features (see Fig. 1).

Specifically, we represent 3D face shapes with a composite model, in which the identity and residual (i.e., non-identity) shape components are represented with separate latent variables. Based on the composite model, we propose a joint learning pipeline, implemented as an encoder-decoder network, that disentangles shape features while reconstructing 3D face shapes. The encoder network converts the input 2D face image into identity and residual latent representations, from which the decoder network recovers its 3D face shape. The learning process is supervised by both a reconstruction loss and an identification loss, and is based on a set of 2D face images with labelled identity information and corresponding 3D face shapes that are obtained by an adapted multi-image 3DMM fitting method. Comprehensive evaluation experiments prove the superiority of the proposed method over existing baseline methods in both 3D face reconstruction accuracy and face recognition accuracy. Our main contributions are summarized below.

(i) We propose a method that, for the first time, explicitly optimizes face recognition and 3D face reconstruction simultaneously. The method achieves state-of-the-art 3D face reconstruction accuracy via joint discriminative feature learning and 3D face reconstruction.

(ii) We devise an effective training process for the proposed network that disentangles identity and non-identity features in the reconstructed 3D face shapes. The network, while being pre-trained on 3DMM-generated data, can surmount the limited 3D shape space determined by the 3DMM bases, in the sense that it better captures identity-sensitive and identity-irrelevant features in 3D face shapes.

(iii) We leverage the effectiveness of the disentangled identity features in reconstructed 3D face shapes to improve face recognition accuracy, as demonstrated by our experimental results. This further expands the application scope of 3D face reconstruction.

2 Related Work

Figure 2: Overview of the proposed encoder-decoder based joint learning pipeline for face recognition and 3D shape reconstruction.

In this section, we review existing work that is closely related to ours from two aspects: 3D face reconstruction for recognition, and Convolutional Neural Network (CNN) based 3D face reconstruction.

3D Face Reconstruction for Recognition. 3D face reconstruction was first introduced for recognition by Blanz and Vetter [5]. They reconstructed 3D faces by fitting the 3DMM to 2D face images, and used the obtained 3DMM parameters as features for face recognition. Their 3DMM fitting method is essentially an image-based analysis-by-synthesis approach, which does not consider the features unique to different individuals. This method was recently improved by Tran et al. [34] by pooling the 3DMM parameters of the images of the same subject and using a CNN to regress the pooled parameters. They experimentally proved the improved discriminative power of their obtained 3DMM parameters.

Instead of using 3DMM parameters for recognition, Liu et al. [24] proposed to recover pose- and expression-normalized 3D face shapes directly from 2D face landmarks via cascaded regressors, and to match the reconstructed 3D face shapes via the iterative closest point algorithm for face recognition. Other researchers [38, 32] utilized the reconstructed 3D face shapes for face alignment to assist in extracting pose-robust features.

To summarize, existing methods, when reconstructing 3D face shapes, do not explicitly consider recognition performance. In [24] and [34], even though the identity of the 3D face shapes in the training data is stressed, respectively, by normalizing pose and expression and by pooling 3DMM parameters, their methods of learning the mapping from 2D images to 3D face shapes are unsupervised with respect to the identity labels of the training data (see Fig. 1).

CNN-based 3D Face Reconstruction. Existing CNN-based 3D face reconstruction methods can be divided into two categories according to the way they represent 3D faces. Methods in the first category use 3DMM parameters [43, 28, 34, 10, 31, 33], while methods in the second category use 3D volumetric representations. Jourabloo and Liu [17, 19, 18] first employed CNNs to regress 3DMM parameters from 2D images for the purpose of large-pose face alignment. In [43], a cascaded CNN pipeline was proposed to exploit the intermediate reconstructed 3D face shapes for better face alignment. Recently, Richardson et al. [28] used two CNNs to reconstruct detailed 3D faces in a coarse-to-fine manner. Although they showed visually more plausible 3D shapes, it is not clear how beneficial the reconstructed 3D facial details are to face recognition.

Jackson et al. [16] proposed to represent 3D face shapes by 3D volumetric coordinates, and trained a CNN to directly regress the coordinates from the input 2D face image. Considering the high dimensionality of original 3D face point clouds, they employed volumetric representations as a compromise. In consequence, the 3D face shapes generated by their method are of low resolution, which is apparently not favorable for face recognition.

3 Proposed Method

In this section, we first introduce a composite 3D face shape model with latent representations, based on which our method is devised. We then present the proposed encoder-decoder based joint learning pipeline. We finally give the implementation details of our proposed method, including the network structure, training data, and training process.

3.1 A Composite 3D Face Shape Model

In this paper, 3D face shapes are densely aligned, and each 3D face shape is represented by the concatenation of its vertex coordinates as

$\mathbf{S} = (x_1, y_1, z_1, x_2, y_2, z_2, \cdots, x_n, y_n, z_n)^{\mathsf{T}}, \qquad (1)$

where $n$ is the number of vertices in the point cloud of the 3D face, and $(\cdot)^{\mathsf{T}}$ means transpose. Based on the assumption that 3D face shapes are composed of identity-sensitive and identity-irrelevant parts, we re-write the 3D face shape of a subject as

$\mathbf{S} = \bar{\mathbf{S}} + \Delta\mathbf{S}_{id} + \Delta\mathbf{S}_{res}, \qquad (2)$

where $\bar{\mathbf{S}}$ is the mean 3D face shape (computed across all training samples with neutral expression), $\Delta\mathbf{S}_{id}$ is the identity-sensitive difference between $\mathbf{S}$ and $\bar{\mathbf{S}}$, and $\Delta\mathbf{S}_{res}$ denotes the residual difference. A variety of sources can lead to the residual difference, for example, expression-induced deformations and temporary facial details.

We further assume that $\Delta\mathbf{S}_{id}$ and $\Delta\mathbf{S}_{res}$ can be described by latent representations $\mathbf{c}_{id}$ and $\mathbf{c}_{res}$, respectively. This is formulated by

$\Delta\mathbf{S}_{id} = D_{id}(\mathbf{c}_{id}; \theta_{id}), \quad \Delta\mathbf{S}_{res} = D_{res}(\mathbf{c}_{res}; \theta_{res}). \qquad (3)$

Here, $D_{id}$ ($D_{res}$) is the mapping function that generates the corresponding shape component $\Delta\mathbf{S}_{id}$ ($\Delta\mathbf{S}_{res}$) from the latent representation, with parameters $\theta_{id}$ ($\theta_{res}$). The latent representations can be obtained from the input 2D face image $\mathbf{I}$ via another function $E$:

$(\mathbf{c}_{id}, \mathbf{c}_{res}) = E(\mathbf{I}; \theta_{E}), \qquad (4)$

where $\theta_{E}$ are the parameters involved in $E$. Usually, the latent representations $\mathbf{c}_{id}$ and $\mathbf{c}_{res}$ are of much lower dimension than both the input 2D face image and the output 3D face shape point cloud (see Fig. 3).

3.2 An Encoder-Decoder Network

The above composite model can be naturally implemented as an encoder-decoder network, in which $E$ serves as an encoder to extract latent representations from 2D face images, and $D_{id}$ and $D_{res}$ are decoders that recover the identity and residual shape components. As shown in Fig. 2, the latent representation $\mathbf{c}_{id}$ is employed as the feature for face recognition. In order to enhance the discriminative capability of $\mathbf{c}_{id}$, we impose over it an identification loss that can disentangle identity-sensitive features from identity-irrelevant features in 3D face shapes. Meanwhile, a reconstruction loss is applied to the 3D face shapes generated by the decoders to guide $\mathbf{c}_{res}$ and $D_{res}$ to better capture the identity-irrelevant shape components. Such an encoder-decoder network enables us to jointly learn an accurate 3D face shape reconstructor and discriminative shape features. Next, we detail the implementation of our proposed method.

Figure 3: The encoder in the proposed method is implemented based on SphereFace [25]. It converts the input 2D image to the latent identity and residual shape feature representations.

3.3 Implementation Details

3.3.1 Network Structure

Encoder Network. The encoder network, aiming to extract latent identity and residual shape representations from 2D face images, should have a good capacity for discriminating different faces as well as for capturing abundant facial details. Hence, we employ a state-of-the-art face recognition network, i.e., SphereFace [25], as the base encoder network. This network consists of convolutional layers followed by a fully-connected (FC) layer, and takes the output of the FC layer as the feature representation of faces. We append another two parallel FC layers to the base SphereFace network to generate the identity latent representation $\mathbf{c}_{id}$ and the residual latent representation $\mathbf{c}_{res}$, respectively. Fig. 3 depicts the SphereFace-based encoder network. Input 2D face images to the encoder network are pre-processed as in [25]: the face regions are detected using MTCNN [42], and then cropped and scaled to 112x96 pixels, with pixel values normalized to the interval [-1, 1]. Each dimension in the output latent representations is also normalized to the interval [-1, 1].

Decoder Network. Taking the identity and residual latent representations as input, the decoder network recovers the identity and residual shape components of 3D face shapes. Since both the input and output of the decoder network are vectors, we use a multilayer perceptron (MLP) to implement the decoder. More specifically, we use two FC layers to convert the latent representations to the corresponding shape components, one for the identity and the other for the residual. Fig. 4 shows the details of the implemented decoder network. As can be seen, the generated 3D face point clouds have $n$ vertices, and the output of the MLP-based decoder network is thus $3n$-dimensional. By analogy with the 3DMM of 3D faces, the weights of the connections between one entry in $\mathbf{c}_{id}$ or $\mathbf{c}_{res}$ and the output neurons can be considered as one basis of the 3DMM. Thanks to the joint training strategy, the capacity of the 'bases' learnt here goes much beyond that of the classical 3DMM, as we will show in the experiments.

Figure 4: The decoders in the proposed method are each implemented as a fully connected (FC) layer. They convert the latent representations to the corresponding shape components.
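To make this structure concrete, the following is a minimal PyTorch sketch of the encoder-decoder under the notation of Sec. 3.1. It is an illustrative sketch, not the paper's exact configuration: the tiny convolutional backbone stands in for SphereFace [25], and the dimensions D_FEAT, D_ID, D_RES and N_VERTS, as well as the tanh normalization of the latents, are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Placeholder dimensions; the paper's actual values are not reproduced.
D_FEAT, D_ID, D_RES, N_VERTS = 512, 199, 29, 30000

class Encoder(nn.Module):
    """E in Eq. (4): image -> (c_id, c_res). The tiny backbone below is
    a stand-in for SphereFace [25]."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.PReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.PReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, D_FEAT))
        # two parallel FC heads for the identity and residual latents
        self.fc_id = nn.Linear(D_FEAT, D_ID)
        self.fc_res = nn.Linear(D_FEAT, D_RES)

    def forward(self, img):
        feat = self.backbone(img)
        # tanh keeps each latent dimension in [-1, 1] (our choice here)
        return torch.tanh(self.fc_id(feat)), torch.tanh(self.fc_res(feat))

class Decoder(nn.Module):
    """D_id or D_res in Eq. (3): one FC layer whose weight matrix plays
    the role of a set of 3DMM-like bases (cf. Fig. 4 and Fig. 9)."""
    def __init__(self, d_latent):
        super().__init__()
        self.fc = nn.Linear(d_latent, 3 * N_VERTS)

    def forward(self, c):
        return self.fc(c)            # flattened per-vertex offsets

class FaceShapeNet(nn.Module):
    def __init__(self, mean_shape):  # mean_shape: (3 * N_VERTS,) tensor
        super().__init__()
        self.encoder = Encoder()
        self.dec_id, self.dec_res = Decoder(D_ID), Decoder(D_RES)
        self.register_buffer('mean_shape', mean_shape)

    def forward(self, img):
        c_id, c_res = self.encoder(img)
        # Eq. (2): compose mean shape with identity and residual offsets
        shape = self.mean_shape + self.dec_id(c_id) + self.dec_res(c_res)
        return shape, c_id, c_res
```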

Loss Functions. We use two loss functions, a 3D shape reconstruction error and a face identification error, as the supervisory signals during the end-to-end training of the encoder-decoder network. To measure the 3D shape reconstruction error, we use the Euclidean loss, $\mathcal{L}_{rec}$, which evaluates the deviation of the reconstructed 3D face shape from the ground truth one. The reconstructed 3D face shape is obtained according to Eq. (2) based on the decoder network's outputs $\Delta\mathbf{S}_{id}$ and $\Delta\mathbf{S}_{res}$ (see Fig. 2). The face identification error is measured by the softmax loss, $\mathcal{L}_{id}$, over the identity latent representation. The overall loss of the proposed encoder-decoder network is defined by

$\mathcal{L} = \mathcal{L}_{id} + \lambda\,\mathcal{L}_{rec}, \qquad (5)$

where $\lambda$ is the weight for the reconstruction loss.
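In code, Eq. (5) reduces to a two-term objective. A minimal sketch, assuming the FaceShapeNet from the previous snippet and a linear classifier over $\mathbf{c}_{id}$:

```python
import torch.nn.functional as F

# Eq. (5) as code. Assumes the FaceShapeNet sketch above and a linear
# classifier over c_id, e.g. classifier = nn.Linear(D_ID, n_subjects).
# The Euclidean reconstruction loss is implemented here as an MSE.
def joint_loss(model, classifier, img, gt_shape, label, lam):
    shape, c_id, _ = model(img)
    loss_id = F.cross_entropy(classifier(c_id), label)  # softmax loss
    loss_rec = F.mse_loss(shape, gt_shape)              # reconstruction
    return loss_id + lam * loss_rec
```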

3.3.2 Training Data

To train the encoder-decoder network, we need a set of data that contains multiple 2D face images of the same subjects together with their corresponding 3D face shapes, i.e., $\{(\mathbf{I}_j, \mathbf{S}_j, l_j)\}_{j=1}^{N}$, where $l_j \in \{1, 2, \cdots, C\}$ is the subject label of the 2D face image $\mathbf{I}_j$ and 3D face $\mathbf{S}_j$, $N$ is the total number of 2D images, and $C$ is the total number of subjects in the training set. However, such a large-scale dataset is not publicly available. Motivated by prior work [34], we construct the training data from CASIA-WebFace [39], a widely-used 2D face recognition database, via a multi-image 3DMM fitting method, which is adapted from the methods in [44, 30].

Faces in the CASIA-WebFace images are detected using the method in [42], and landmarks are located with the method in [7]. We discard the images for which either detection or alignment fails, and keep the remaining images and subjects as our training data. Given the face images and their facial landmarks, we apply the following multi-image 3DMM fitting method to estimate, for each subject, an identity 3D shape component that is common to all of its 2D face images, and residual 3D shape components that are unique to each of the subject's 2D images.

The 3DMM represents a 3D face shape as

$\mathbf{S} = \bar{\mathbf{S}} + \mathbf{A}_{id}\,\boldsymbol{\alpha}_{id} + \mathbf{A}_{exp}\,\boldsymbol{\alpha}_{exp}, \qquad (6)$

where $\mathbf{A}_{id}$ and $\mathbf{A}_{exp}$ are, respectively, the identity and expression shape bases, and $\boldsymbol{\alpha}_{id}$ and $\boldsymbol{\alpha}_{exp}$ are the corresponding coefficients. In this paper, we use the shape bases provided by the Basel Face Model [26] as $\mathbf{A}_{id}$, and the blendshape bases of FaceWarehouse [8] as $\mathbf{A}_{exp}$.
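In code, Eq. (6) is a single affine expression over flattened arrays; a minimal sketch (the array shapes are assumptions consistent with Eq. (1)):

```python
import numpy as np

# Eq. (6) as a one-liner. Shapes: S_bar (3n,), A_id (3n, k_id),
# A_exp (3n, k_exp); the basis counts k_id and k_exp depend on the
# chosen BFM / FaceWarehouse subsets and are not specified here.
def synthesize(S_bar, A_id, A_exp, alpha_id, alpha_exp):
    return S_bar + A_id @ alpha_id + A_exp @ alpha_exp
```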

To fit the 3DMM to the images of a subject, we attempt to minimize the difference between $\mathbf{U}^{j}$, the landmarks detected on the $j$-th image, and $\hat{\mathbf{U}}^{j}$, the landmarks obtained by projecting the estimated 3D face shape onto that image, under the constraint that all the images of the subject share the same $\boldsymbol{\alpha}_{id}$. $\hat{\mathbf{U}}^{j}$ is computed from the estimated 3D face shape $\mathbf{S}^{j}$ (let $\mathbf{S}^{j}_{L}$ denote the vertices in $\mathbf{S}^{j}$ corresponding to the landmarks) by $\hat{\mathbf{U}}^{j} = f^{j}\,\mathbf{P}\,\mathbf{R}^{j}\,\mathbf{S}^{j}_{L} + \mathbf{t}^{j}$, where $f^{j}$ is the scale factor, $\mathbf{P}$ is the orthographic projection matrix, and $\mathbf{R}^{j}$ and $\mathbf{t}^{j}$ are the rotation matrix and translation vector in 3D space. Mathematically, our multi-image 3DMM fitting optimizes the following objective:

$\min_{\boldsymbol{\alpha}_{id},\,\{\boldsymbol{\alpha}^{j}_{exp}\},\,\{f^{j},\mathbf{R}^{j},\mathbf{t}^{j}\}} \ \sum_{j=1}^{m} \big\| \mathbf{U}^{j} - \hat{\mathbf{U}}^{j} \big\|^{2}_{2}, \qquad (7)$

where $m$ is the number of images of the subject.

We solve the optimization problem in Eq. (7) in an alternating way. As initialization, we set both $\boldsymbol{\alpha}_{id}$ and $\boldsymbol{\alpha}^{j}_{exp}$ to zero. We first estimate the projection parameters $\{f^{j}, \mathbf{R}^{j}, \mathbf{t}^{j}\}$, then the expression parameters $\boldsymbol{\alpha}^{j}_{exp}$, and lastly the identity parameters $\boldsymbol{\alpha}_{id}$. When estimating one of the three sets of parameters, the other two sets are fixed. The optimization is repeated until the objective function value no longer changes; we have typically found this to converge within seven iterations.
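The sketch below illustrates this alternating scheme under simplifying assumptions of our own, not the paper's implementation: each image's pose is modeled as a 2x4 scaled-orthographic camera matrix solved linearly, and small ridge terms (reg) keep the coefficient solves well conditioned.

```python
import numpy as np

def to_pts(v):                          # flattened (3L,) -> (3, L)
    return v.reshape(-1, 3).T

def basis_to_2d(C3, A):                 # 2D effect of each basis column
    return np.stack([(C3 @ to_pts(A[:, k])).T.ravel()
                     for k in range(A.shape[1])], axis=1)

def fit_multi_image(U, S_bar, A_id, A_exp, n_iters=7, reg=1e-3):
    """U: list of (2, L) detected landmarks, one per image of a subject.
    S_bar, A_id, A_exp: mean shape and bases restricted to the L
    landmark vertices (flattened, as in Eq. (1))."""
    m = len(U)
    a_id = np.zeros(A_id.shape[1])
    a_exp = [np.zeros(A_exp.shape[1]) for _ in range(m)]
    for _ in range(n_iters):
        # 1) pose per image (2x4 affine camera), shape fixed
        C = []
        for j in range(m):
            X = to_pts(S_bar + A_id @ a_id + A_exp @ a_exp[j])
            Xh = np.vstack([X, np.ones(X.shape[1])])        # (4, L)
            Cj, *_ = np.linalg.lstsq(Xh.T, U[j].T, rcond=None)
            C.append(Cj.T)                                  # (2, 4)
        # 2) expression per image, identity and pose fixed (ridge LS)
        for j in range(m):
            base = to_pts(S_bar + A_id @ a_id)
            r = (U[j] - C[j][:, :3] @ base - C[j][:, 3:]).T.ravel()
            B = basis_to_2d(C[j][:, :3], A_exp)
            a_exp[j] = np.linalg.solve(
                B.T @ B + reg * np.eye(B.shape[1]), B.T @ r)
        # 3) identity shared across all m images, everything else fixed
        rows, rhs = [], []
        for j in range(m):
            base = to_pts(S_bar + A_exp @ a_exp[j])
            rows.append(basis_to_2d(C[j][:, :3], A_id))
            rhs.append((U[j] - C[j][:, :3] @ base - C[j][:, 3:]).T.ravel())
        B, r = np.vstack(rows), np.concatenate(rhs)
        a_id = np.linalg.solve(B.T @ B + reg * np.eye(B.shape[1]), B.T @ r)
    return a_id, a_exp
```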

3.3.3 Training Process

With the prepared training data, we train our encoder-decoder network in three phases. In Phase I, we train the encoder by setting the fitted 3DMM coefficients $\boldsymbol{\alpha}_{id}$ and $\boldsymbol{\alpha}_{exp}$ as the target latent representations and using the Euclidean loss. In Phase II, we train the decoders for the identity and residual components separately. In Phase III, end-to-end joint training is conducted based on the pre-trained encoder and decoders. Considering that the network already performs well in reconstruction after pre-training, we first lay more emphasis on recognition in the joint loss function by setting $\lambda$ to a small value. When the loss function saturates (usually within a few epochs), we continue the training with $\lambda$ updated to a larger value. The joint training concludes after a few more epochs.
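A schematic of the Phase III portion of this schedule, with placeholder values in place of the paper's unspecified settings, might look as follows:

```python
# Schematic of the Phase III schedule: emphasize recognition first with
# a small reconstruction weight, then raise it once the loss saturates.
# The lambda values and epoch budgets below are placeholders, not the
# paper's settings; joint_loss is the sketch from Sec. 3.3.1.
def train_phase3(model, classifier, loader, opt,
                 lam_schedule=((0.1, 10), (1.0, 10))):
    for lam, n_epochs in lam_schedule:
        for _ in range(n_epochs):
            for img, gt_shape, label in loader:
                opt.zero_grad()
                joint_loss(model, classifier, img, gt_shape,
                           label, lam).backward()
                opt.step()
```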

It is worth mentioning that the recovered 3DMM parameters are directly used as the target latent representations during pre-training. This provides a good initialization for the encoder-decoder network, but limits the network to the capacity of the pre-determined 3DMM bases. The joint training in Phase III alleviates this limitation by utilizing the identification loss as a complementary supervisory signal to the reconstruction loss. As a result, the learnt encoder-decoder network can better disentangle identity from non-identity information in 3D face shapes, and thus enhance face recognition accuracy without impairing 3D face reconstruction accuracy.

4 Experiments

Two sets of experiments have been done to evaluate the effectiveness of the proposed method in 3D face reconstruction and face recognition. The MICC [2] and BU3DFE [40] databases are used for the 3D face reconstruction experiments, and the LFW [15] and YTF [37] databases are used in the face recognition experiments. Next, we report the experimental results (more experimental results are provided in the supplementary material).

4.1 3D Shape Reconstruction Accuracy

Table 1: 3D face reconstruction accuracy (RMSE) under different yaw angles, and on average, on the BU3DFE database, for VRN, 3DDFA, 3DMM-CNN, 3DSR, and the proposed method.

The 3D face reconstruction accuracy is assessed using the 3D Root Mean Square Error (RMSE) [34], i.e., the average over all $T$ testing samples of the root mean squared distance between corresponding vertices of the ground truth 3D face shape $\mathbf{S}^{*}_{k}$ and the reconstructed 3D face shape $\hat{\mathbf{S}}_{k}$ of the $k$-th testing sample. To compute the RMSE, the reconstructed 3D faces are first aligned to the ground truth via Procrustes global alignment based on 3D landmarks, as suggested by [3], and then cropped at a fixed radius around the nose tip.
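A sketch of this evaluation protocol is given below; the landmark-based alignment follows the standard Kabsch/Umeyama solution, while the nose-tip crop is omitted and the exact normalization used in [34] may differ from the per-vertex RMSE shown here.

```python
import numpy as np

def procrustes_align(src, dst):
    """Find s, R, t minimizing ||s * R @ src_i + t - dst_i||
    over landmark points (src, dst: (L, 3))."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # avoid reflections
    R = Vt.T @ np.diag([1., 1., d]) @ U.T
    s = (S * np.array([1., 1., d])).sum() / (A ** 2).sum()
    return s, R, mu_d - s * (R @ mu_s)

def rmse(recon, gt, lm_idx):
    """recon, gt: (n, 3) dense shapes; lm_idx: landmark vertex indices."""
    s, R, t = procrustes_align(recon[lm_idx], gt[lm_idx])
    aligned = s * (R @ recon.T).T + t
    return float(np.sqrt(((aligned - gt) ** 2).sum(axis=1).mean()))
```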

We compare our method with four state-of-the-art 3D face reconstruction methods: 3DDFA [44], 3DMM-CNN [34], the 3D shape regression based (3DSR) method [24], and VRN [16]. Among them, the first two methods reconstruct 3D face shapes via estimating 3DMM parameters, while the other two directly regress 3D face shapes from either landmarks or 2D images. 3DMM-CNN is the only existing method that takes into consideration the discriminative power of the estimated 3DMM parameters. 3DSR generates pose- and expression-normalized 3D face shapes that are believed to be more beneficial to face recognition. For the methods that need facial landmarks on 2D images, we use the method in [7] to automatically detect the landmarks.

Figure 5: Reconstruction results for three MICC subjects. The first column shows the input images, and the remaining columns show the reconstructed 3D shapes that have the same expression as the input images, using VRN [16], 3DDFA [44], 3DMM-CNN [34], 3DSR [24], and the proposed method.

Results on MICC. The MICC database contains, for each of its 53 subjects, three challenging face videos and a ground-truth 3D model acquired with a structured-light scanning system. The videos span the range from controlled indoor to unconstrained outdoor settings. The outdoor videos are very challenging due to the uncontrolled lighting conditions. In this experiment, we randomly select testing images from the outdoor video frames of the subjects. Table 2 shows the 3D face reconstruction error of different methods on the MICC database. As can be seen, our proposed method obtains the best accuracy thanks to its fine-grained processing of features in 3D face shapes. Note that VRN, the first method in the literature that regresses 3D face shapes directly from 2D images, has a relatively high reconstruction error in terms of RMSE, mainly because it generates low-resolution 3D face shapes as volumetric representations. In contrast, we reconstruct high-resolution (dense) 3D face shapes as point clouds with the help of low-dimensional latent representations.

Table 2: 3D face reconstruction accuracy (RMSE) on the MICC database for VRN, 3DDFA, 3DMM-CNN, 3DSR, and the proposed method.

Results on BU3DFE. The BU3DFE database contains 3D faces of 100 subjects displaying the expressions of neutral (NE), happiness (HA), disgust (DI), fear (FE), anger (AN), surprise (SU) and sadness (SA). All non-neutral expressions were acquired at four levels of intensity. We select the neutral scans and the first intensity level of the other six expressions as testing data, resulting in 700 testing samples. Further, we render another set of testing images of neutral expression at different poses, i.e., at a series of yaw angles sampled at a regular interval. These two testing sets evaluate the reconstruction across expressions and poses, respectively.

Table 3: Face recognition accuracy (verification accuracy, 100%-EER, AUC, TAR-10%, and TAR-1%) on the LFW and YTF databases, for 3DMM, 3DDFA, 3DMM-CNN, and the proposed method, using shape and/or texture features.

Table 1 shows the reconstruction error across poses (i.e., yaw angles) of the different methods. It can be seen that the RMSE of the proposed method is lower than that of the baselines. Moreover, as the pose angle becomes large, the error of our method does not increase substantially. This proves the robustness of the proposed method to pose variations. Figure 6 shows the reconstruction error across expressions of VRN, 3DDFA, and the proposed method, based on their reconstructed 3D face shapes that have the same expression as the input images. Figure 7 compares 3DMM-CNN, 3DSR, and the proposed method in terms of the RMSE of their reconstructed identity, or expression-normalized, 3D face shapes. These results demonstrate the superiority of the proposed method over the baselines in handling expressions.

Figure 6: Reconstruction accuracy of 3D face shapes under different expressions on the BU3DFE database, for VRN, 3DDFA, and the proposed method.
Figure 7: Reconstruction accuracy of the identity component of 3D face shapes under different expressions on the BU3DFE database, for 3DMM-CNN, 3DSR, and the proposed method.

Some example 3D face reconstruction results are shown in Fig. 5 and Fig. 8. From these results, we can clearly see that the proposed method not only performs well in reconstructing accurate 3D face shapes for in-the-wild 2D images, but also disentangles the identity and non-identity (e.g., expression) components in 3D face shapes. As we will show in the following face recognition experiments, the disentangled shape features contribute to face recognition.

Figure 8: Reconstruction results for a BU3DFE subject under seven different expressions. The first column shows the input images. In the blue box, we show the reconstructed 3D shapes that have the same expression as the input images, using VRN [16], 3DDFA [44] and the proposed method. In the red box, we show the reconstructed identity 3D shapes obtained by 3DMM-CNN [34], 3DSR [24] and the proposed method. Our composite 3D shape model enables us to generate both types of 3D shapes.
Figure 9: Comparing the pre-trained 3DMM-like bases and our jointly-learnt bases, defined by the weights of the identity and residual shape decoders. (a) For the bases of the identity shape decoder, the weights associated with each entry in $\mathbf{c}_{id}$ are added to the mean shape, reshaped to a point cloud, and shown as polygon meshes. (b) For the bases of the residual shape decoder, the weights associated with each entry in $\mathbf{c}_{res}$ are reshaped to a point cloud and shown as a heat map that measures the norm of each vertex's offset (i.e., the deviation from the identity shape). Red colors in the heat maps indicate larger deviations. It is important to note that the conventional 3DMM bases are trained from 3D face scans, while our bases are learnt from 2D images.

4.2 Face Recognition Accuracy

To evaluate the effectiveness of our shape features (i.e., the identity latent representations) for face recognition, we compute the similarity of two faces as the cosine distance between their shape features extracted by the encoder of our method. To investigate the complementarity between our learnt shape features and existing texture features, we also fuse our method with existing methods via summation at the score level [21]. The counterpart methods we consider here include 3DMM [29], 3DDFA [44], 3DMM-CNN [34], and SphereFace [25]. We compare the methods in terms of verification accuracy, 100%-EER (where EER is the Equal Error Rate), AUC (Area Under Curve) of ROC (Receiver Operating Characteristic) curves, and TAR (True Acceptance Rate) at two fixed FAR (False Acceptance Rate) operating points.
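The matching and fusion rules are straightforward; a minimal sketch:

```python
import numpy as np

# Matching and score-level fusion as described above: cosine similarity
# between the identity latents of two faces, optionally summed with a
# texture matcher's score [21]. Assumes both scores are on comparable
# scales (e.g., both cosine similarities); otherwise normalize first.
def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def fused_score(c_id_1, c_id_2, texture_score):
    return cosine_sim(c_id_1, c_id_2) + texture_score
```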

Results on LFW. The Labeled Faces in the Wild (LFW) benchmark dataset contains 13,233 images collected from the Internet. The verification set consists of 10 folders, each with 300 same-person pairs and 300 different-person pairs. The recognition accuracy of different methods on LFW is listed in Tab. 3. Among all the 3D face reconstruction methods, when using only shape features, our proposed method achieves the highest accuracy, improving the TAR at fixed FAR with respect to the latest 3DMM-based method [34].

Results on YTF. The YouTube Faces (YTF) database contains 3,425 videos of 1,595 individuals. Face images (video frames) in YTF have lower quality than those in LFW, due to larger variations in pose, illumination and expression, as well as low resolution. Table 3 summarizes the recognition accuracy of different methods on YTF. Despite the low-quality face images, our proposed method still outperforms the baseline methods in terms of extracting discriminative shape features. By fusing with one of the state-of-the-art texture-based face recognition methods (i.e., SphereFace [25]), our proposed method further improves the face recognition accuracy on YTF. This proves the complementarity of properly reconstructed shape features to texture features in face recognition, a notable result especially considering that the 2D face recognition method SphereFace [25] has already set a very high baseline.

Table 4: Efficiency comparison of different methods: average runtime per image (in ms) for VRN, 3DDFA, 3DMM-CNN, 3DSR, and the proposed method.

4.3 Computational Efficiency

To assess the computational efficiency, we run the methods on a PC (with an Intel Core i7 CPU and an NVIDIA GeForce GTX GPU) over a set of test images, and report the average runtime per image in Tab. 4. Note that 3DDFA and 3DMM-CNN estimate the 3DMM parameters in a first step, and we report their runtime for obtaining the final 3D faces. For VRN, 3DDFA and 3DMM-CNN, although stand-alone landmark detection is required, the reported time does not include the landmark detection time. Our proposed method needs only a few milliseconds (ms) per image, which is an order of magnitude faster than the baseline methods. This is owing to the light-weight network in our method. In contrast, the baseline methods use either very deep networks [34] or cascaded approaches [28, 24].

4.4 Analysis and Discussion

To offer insights into the learnt decoders, we visualize their weight parameters in Fig. 9. The weights associating one entry of the latent representations with all the neurons in the FC layer of the decoders are analogous to a 3DMM basis (see Fig. 4); a snippet showing how such bases can be read off the decoder weights is given after the following observations. Both the pre-trained bases and the jointly-learnt bases are shown for comparison in Fig. 9, from which the following observations can be made.

(i) The pre-trained identity bases approximate the conventional 3DMM bases [4], which are ordered such that the latter bases capture less shape variation. In contrast, our jointly-learnt identity bases all describe rich shape variations.

(ii) Some basis shapes among the jointly-learnt bases do not look like regular face shapes. We believe this is due to the employed joint reconstruction and identification loss function. Bases trained from a set of 3D scans, as in the 3DMM, while optimal for reconstruction, might limit the discriminative power of the shape parameters. Our bases are trained with classification in mind, which ensures the superior performance of our method in face recognition.

(iii) The pre-trained residual bases, like the expression shape bases [8], appear symmetric. The jointly-learnt residual bases display more diverse shape deviation patterns. This indicates that the residual shape deformation captured by the jointly-learnt bases goes much beyond that caused by expression changes, and proves the effectiveness of our method in disentangling 3D face shape features.
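Concretely, under the decoder sketch of Sec. 3.3.1, a 'basis' is simply one column of an FC weight matrix. The snippet below shows how the shapes visualized in Fig. 9 can be read off (the variable names are from that sketch, not the paper):

```python
import numpy as np

# Reading the learnt 'bases' off the decoders, as visualized in Fig. 9.
# Column k of an FC weight matrix is the shape offset induced by a unit
# change of the k-th latent entry. Assumes the FaceShapeNet sketch from
# Sec. 3.3.1 as `model`.
W_id = model.dec_id.fc.weight.detach().cpu().numpy()        # (3n, D_ID)
mean = model.mean_shape.cpu().numpy().reshape(-1, 3)
mesh_0 = mean + W_id[:, 0].reshape(-1, 3)                   # Fig. 9(a)
W_res = model.dec_res.fc.weight.detach().cpu().numpy()      # (3n, D_RES)
heat_0 = np.linalg.norm(W_res[:, 0].reshape(-1, 3), axis=1) # Fig. 9(b)
```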

5 Conclusions

We have proposed a novel encoder-decoder based method for jointly learning discriminative shape features from a 2D face image and reconstructing its dense 3D face shape. To train the encoder-decoder network, we implement a multi-image 3DMM fitting method to construct the training data, and develop an effective training scheme with a joint reconstruction and identification loss. We show with comprehensive experimental results that the proposed method can effectively disentangle identity and non-identity features in 3D face shapes, and thus achieves state-of-the-art 3D face reconstruction accuracy as well as improved face recognition accuracy.

Supplementary Material

In this supplementary material, we provide additional experimental results, including

  • Face recognition results on the IJB-A database;

  • Phase-by-Phase Evaluation: CNN vs. 3DMM;

  • Qualitative reconstruction results.

Recognition Results on IJB-A

The IJB-A database [22], including 5,396 images and 20,412 video frames of 500 subjects, has full pose variation and is more challenging than LFW [15]. We evaluate both the face verification (1:1 comparison) and face identification (1:N search) performance of our proposed method in comparison to existing methods on the IJB-A database. The faces are first automatically detected using the method in [42] and aligned with the method in [7]. If the automated methods fail, we manually crop the faces. The results are reported in Table 5.

Table 5: Face verification and identification performance (TAR-10%, TAR-1%, rank-1 and rank-5) on the IJB-A database, for 3DMM, 3DDFA, 3DMM-CNN, DR-GAN, the proposed method, and the fusion of DR-GAN with the proposed method, using shape and/or texture features.
Table 6: Reconstruction RMSE (on the MICC, BU3DFE pose, and BU3DFE expression test sets) and recognition accuracy (on LFW and YTF) at training Phases II and III, i.e., without and with identity disentangling and the identification loss. Refer to the paper for test data set details.

When using only the reconstructed shape features, our proposed method obtains the best face recognition accuracy in terms of true acceptance rate at fixed false acceptance rates (TAR-10% and TAR-1%), and rank-1 and rank-5 identification rates. Although it is outperformed by DR-GAN [36], a state-of-the-art texture-based face recognition method, the face recognition accuracy can be further improved by combining the two via score-level summation fusion. These results, consistent with the results on the LFW and YTF [37] databases, prove the effectiveness of our proposed method in disentangling discriminative shape features that are complementary to texture features in face recognition, as well as in surpassing the conventional 3D morphable model (3DMM) bases [5] in capturing facial detail.

Figure 10 shows some example genuine and imposter pairs in IJB-A that are incorrectly recognized by DR-GAN [36], but correctly recognized by the fusion of DR-GAN and our proposed method. As can be seen, while extremely large head rotations may lead to the failure of existing texture-based face recognition methods, our proposed method exploits complementary shape features to robustly recognize off-angle faces with large rotations.

Figure 10: Example (a) genuine pairs and (b) imposter pairs in IJB-A, for which the state-of-the-art texture-based face recognition method (i.e., DR-GAN [36]) fails, whereas its fusion with our proposed method succeeds.

Phase-by-Phase Evaluation: CNN vs. 3DMM

Our proposed model is trained in three phases. Phases I and II replicate the 3DMM to provide a proper initialization of our model, while Phase III takes our model beyond the 3DMM by using the joint supervision of reconstruction and recognition (i.e., both the reconstruction loss and the identification loss). To examine this, we compare the reconstruction and recognition results at different training phases. Table 6 gives the reconstruction results at Phases II and III, and summarizes the recognition results. It can be seen that the reconstruction errors are further reduced after incorporating the identification loss in Phase III. As for recognition, the accuracy is significantly improved from Phase II to Phase III. This reveals the limited discrimination power of 3DMM representations and the importance of CNN-based joint learning in expanding the representation and discrimination capacity of 3DMM-like bases.

Figure 11: Reconstruction results by our proposed method on images from YTF (top) and IJB-A (bottom). The first row shows the input images, and the second and third rows show the reconstructed 3D shapes and identity shapes.

Qualitative Results

The 3D face reconstruction results of our proposed method on some images from the YTF and IJB-A databases are shown in Figure 11. One can observe from these results that the reconstructed 3D faces do reveal the facial shape deformations (e.g., around the mouth), while the identity shapes successfully disentangle identity-sensitive from identity-irrelevant features. Figure 12 shows some images (video frames) for which our proposed method fails to generate plausible 3D face shapes. The blurry and very low-resolution faces in these images/videos are the main reasons for the failures.

Figure 12: Failure cases of our proposed method due to blurry and very low resolution faces in the images/videos.

References

  • [1] https://support.apple.com/en-us/HT208109. Accessed: 2017-11-15.
  • [2] A. D. Bagdanov, A. Del Bimbo, and I. Masi. The florence 2D/3D hybrid face dataset. In Workshop on Human gesture and behavior understanding, pages 79–80. ACM, 2011.
  • [3] A. Bas, W. A. Smith, T. Bolkart, and S. Wuhrer. Fitting a 3D morphable model to edges: A comparison between hard and soft correspondences. In ACCV, pages 377–391, 2016.
  • [4] V. Blanz and T. Vetter. A morphable model for the synthesis of 3D faces. In SIGGRAPH, pages 187–194, 1999.
  • [5] V. Blanz and T. Vetter. Face recognition based on fitting a 3D morphable model. TPAMI, 25(9):1063–1074, 2003.
  • [6] K. W. Bowyer, K. Chang, and P. Flynn. A survey of approaches and challenges in 3D and multi-modal 3D+2D face recognition. CVIU, 101(1):1–15, 2006.
  • [7] A. Bulat and G. Tzimiropoulos. How far are we from solving the 2D & 3D face alignment problem? (and a dataset of 230,000 3D facial landmarks). In ICCV, 2017.
  • [8] C. Cao, Y. Weng, S. Zhou, Y. Tong, and K. Zhou. Facewarehouse: A 3D facial expression database for visual computing. TVCG, 20(3):413–425, 2014.
  • [9] C. Cao, H. Wu, Y. Weng, T. Shao, and K. Zhou. Real-time facial animation with image-based dynamic avatars. TOG, 35(4):126:1–126:12, 2016.
  • [10] P. Dou, S. K. Shah, and I. A. Kakadiaris. End-to-end 3D face reconstruction with deep neural networks. In CVPR, 2017.
  • [11] M. Emambakhsh and A. Evans. Nasal patches and curves for expression-robust 3D face recognition. TPAMI, 39(5):995–1007, 2016.
  • [12] H. Han and A. K. Jain. 3D face texture modeling from uncalibrated frontal and profile images. In BTAS, pages 223–230, 2012.
  • [13] X. Han, C. Gao, and Y. Yu. Deepsketch2face: A deep learning based sketching system for 3D face and caricature modeling. TOG, 36(4), 2017.
  • [14] B. K. Horn and M. J. Brooks. Shape from shading. Cambridge, MA: MIT press, 1989.
  • [15] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical report, Technical Report 07-49, University of Massachusetts, Amherst, 2007.
  • [16] A. S. Jackson, A. Bulat, V. Argyriou, and G. Tzimiropoulos. Large pose 3D face reconstruction from a single image via direct volumetric CNN regression. In ICCV, 2017.
  • [17] A. Jourabloo and X. Liu. Pose-invariant 3D face alignment. In ICCV, pages 3694–3702, 2015.
  • [18] A. Jourabloo and X. Liu. Pose-invariant face alignment via CNN-based dense 3D model fitting. IJCV, in press, 2017.
  • [19] A. Jourabloo, M. Ye, X. Liu, and L. Ren. Pose-invariant face alignment with a single CNN. In ICCV, 2017.
  • [20] I. Kemelmacher-Shlizerman and R. Basri. 3D face reconstruction from a single image using a single reference face shape. TPAMI, 33(2):394–405, 2011.
  • [21] J. Kittler, M. Hatef, R. P. Duin, and J. Matas. On combining classifiers. TPAMI, 20(3):226–239, 1998.
  • [22] B. F. Klare, A. K. Jain, B. Klein, E. Taborsky, A. Blanton, J. Cheney, K. Allen, P. Grother, A. Mah, and M. Burge. Pushing the frontiers of unconstrained face detection and recognition: IARPA janus benchmark A. In CVPR, pages 1931–1939, 2015.
  • [23] F. Liu, D. Zeng, J. Li, and Q. Zhao. Cascaded regressor based 3D face reconstruction from a single arbitrary view image. arXiv:1509.06161, 2015.
  • [24] F. Liu, D. Zeng, Q. Zhao, and X. Liu. Joint face alignment and 3D face reconstruction. In ECCV, pages 545–560, 2016.
  • [25] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song. Sphereface: Deep hypersphere embedding for face recognition. In CVPR, 2017.
  • [26] P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter. A 3D face model for pose and illumination invariant face recognition. In AVSS, pages 296–301, 2009.
  • [27] X. Peng, X. Yu, K. Sohn, D. N. Metaxas, and M. Chandraker. Reconstruction-based disentanglement for pose-invariant face recognition. In ICCV, 2017.
  • [28] E. Richardson, M. Sela, R. Or-El, and R. Kimmel. Learning detailed face reconstruction from a single image. In CVPR, 2017.
  • [29] S. Romdhani and T. Vetter. Estimating 3D shape and texture using pixel intensity, edges, specular highlights, texture constraints and a prior. In CVPR, pages 986–993, 2005.
  • [30] J. Roth, Y. Tong, and X. Liu. Adaptive 3D face reconstruction from unconstrained photo collections. In CVPR, pages 4197–4206, 2016.
  • [31] M. Sela, E. Richardson, and R. Kimmel. Unrestricted facial geometry reconstruction using image-to-image translation. In ICCV, 2017.
  • [32] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level performance in face verification. In CVPR, pages 1701–1708, 2014.
  • [33] A. Tewari, M. Zollhöfer, H. Kim, P. Garrido, F. Bernard, P. Pérez, and C. Theobalt. Mofa: Model-based deep convolutional face autoencoder for unsupervised monocular reconstruction. In CVPR, 2017.
  • [34] A. T. Tran, T. Hassner, I. Masi, and G. Medioni. Regressing robust and discriminative 3D morphable models with a very deep neural network. In CVPR, 2017.
  • [35] L. Tran, X. Yin, and X. Liu. Disentangled representation learning gan for pose-invariant face recognition. In CVPR, pages 1283–1292, 2017.
  • [36] L. Tran, X. Yin, and X. Liu. Disentangled representation learning GAN for pose-invariant face recognition. In CVPR, in press, 2017.
  • [37] L. Wolf, T. Hassner, and I. Maoz. Face recognition in unconstrained videos with matched background similarity. In CVPR, pages 529–534, 2011.
  • [38] D. Yi, Z. Lei, and S. Z. Li. Towards pose robust face recognition. In CVPR, pages 3539–3545, 2013.
  • [39] D. Yi, Z. Lei, S. Liao, and S. Z. Li. Learning face representation from scratch. arXiv:1411.7923, 2014.
  • [40] L. Yin, X. Wei, Y. Sun, J. Wang, and M. J. Rosato. A 3D facial expression database for facial behavior research. In FG, pages 211–216, 2006.
  • [41] X. Yin, X. Yu, K. Sohn, X. Liu, and M. Chandraker. Towards large-pose face frontalization in the wild. In ICCV, 2017.
  • [42] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao. Joint face detection and alignment using multitask cascaded convolutional networks. SPL, 23(10):1499–1503, 2016.
  • [43] X. Zhu, Z. Lei, X. Liu, H. Shi, and S. Li. Face alignment across large poses: A 3D solution. In CVPR, pages 146–155, 2016.
  • [44] X. Zhu, Z. Lei, J. Yan, D. Yi, and S. Z. Li. High-fidelity pose and expression normalization for face recognition in the wild. In CVPR, pages 787–796, 2015.