Dense Face Alignment

Yaojie Liu, Amin Jourabloo, William Ren, and Xiaoming Liu
Department of Computer Science and Engineering, Michigan State University, MI
Monta Vista High School, Cupertino, CA
{liuyaoj1, jourablo, liuxm}@msu.edu, williamyren@gmail.com
Abstract

Face alignment is a classic problem in the computer vision field. Previous works mostly focus on sparse alignment with a limited number of facial landmark points, i.e., facial landmark detection. In this paper, for the first time, we aim at providing a very dense 3D alignment for large-pose face images. To achieve this, we train a CNN to estimate the 3D face shape, which not only aligns limited facial landmarks but also fits face contours and SIFT feature points. Moreover, we also address the bottleneck of training a CNN with multiple datasets, which arises from the different landmark markups of different datasets, such as 300W and AFLW. Experimental results show that our method not only provides high-quality, dense 3D face fitting but also outperforms state-of-the-art facial landmark detection methods on challenging datasets. Our model runs in real time during testing, and it is available at http://cvlab.cse.msu.edu/project-pifa.html.

1 Introduction

Face alignment is a long-standing problem in the computer vision field; it is the process of aligning facial components, e.g., eyes, nose, mouth, and contour. Accurate face alignment is an essential prerequisite for many face-related tasks, such as face recognition [8], 3D face reconstruction [22, 21], and face animation [37]. There are fruitful prior works on face alignment, which can be categorized into generative methods, such as the early Active Shape Model [17] and Active Appearance Model (AAM) based approaches [13], and discriminative methods, such as regression-based approaches [38, 28].

Most previous methods estimate a sparse set of landmarks, e.g., 68 landmarks. As this field develops, we believe that Dense Face Alignment (DeFA) becomes highly desirable. Here, DeFA means that any face-region pixel can be mapped to the pixel with the same anatomical position in another face image. For example, given two face images of the same individual but with different poses, lightings, or expressions, a perfect DeFA would even map a mole (i.e., a darker pigment spot) to the same position on both faces. Moreover, DeFA should offer dense correspondence not only between two face images, but also between the face image and the canonical 3D face model. This level of detailed geometric interpretation of a face image is invaluable to many conventional facial analysis problems mentioned above.

Since this interpretation goes beyond the sparse set of landmarks, fitting a dense 3D face model to the face image is a reasonable way to achieve DeFA. In this work, we choose to develop the idea of fitting a dense 3D face model to an image, where a model with thousands of vertices makes it possible for face alignment to go very “dense”. 3D face model fitting is well studied in the seminal work on the 3D Morphable Model (3DMM) [4]. We see a recent surge when it is applied to problems such as large-pose face alignment [10, 41], 3D reconstruction [5], and face recognition [1], especially using the convolutional neural network (CNN) architecture.

Figure 1: A pair of images with their dense 3D shapes obtained by imposing the landmark fitting constraint, contour fitting constraint, and SIFT pairing constraint.

However, most prior works on 3D-model-fitting-based face alignment only utilize the sparse landmarks as supervision. There are two main challenges to be addressed in 3D face model fitting in order to enable high-quality DeFA. First of all, to the best of our knowledge, no public face dataset has dense face shape labeling; all of the in-the-wild face alignment datasets have no more than 68 landmarks in their labeling. Apparently, to provide high-quality alignment for face-region pixels, we need information beyond the landmark labeling. Hence, the first challenge is to seek valuable sources of additional supervision and to integrate them into the learning framework.

Secondly, similar to many other data-driven problems and solutions, it is preferable to involve multiple datasets in solving the face alignment task, since a single dataset has limited types of variations. However, many face alignment methods cannot leverage multiple datasets, because each dataset is labeled differently. For instance, the AFLW dataset contains significant pose variation, but only a small number of visible landmarks per image. In contrast, the 300W dataset [23] contains a large number of faces with a full set of visible landmarks, but all faces are in near-frontal views. Therefore, the second challenge is to allow the proposed method to leverage multiple face datasets.

With the objective of addressing both challenges, we learn a CNN to fit a 3D face model to the face image. While the proposed method works for any face image, we mainly pay attention to faces with large poses. Large-pose face alignment is a relatively new topic, and the performances in [10, 41] still have room to improve. To tackle the first challenge of limited landmark labeling, we propose to employ additional constraints: a contour constraint, where the contour of the predicted shape should match the detected 2D face boundary, and a SIFT constraint, where the SIFT key points detected on two face images of the same individual should map to the same vertices of the 3D face model. Both constraints are integrated into the CNN training as additional loss terms, where the end-to-end training results in an enhanced CNN for 3D face model fitting. For the second challenge of leveraging multiple datasets, the 3D face model fitting approach has an inherent advantage in handling multiple training databases: regardless of the number of landmarks labeled in a particular dataset, we can always define the corresponding 3D vertices to guide the training.

Generally, our main contributions can be summarized as:

1. We identify and define a new problem of dense face alignment, which seeks alignment of face-region pixels beyond the sparse set of landmarks.

2. To achieve dense face alignment, we develop a novel 3D face model fitting algorithm that adopts multiple constraints and leverages multiple datasets.

3. Our dense face alignment algorithm outperforms the state of the art (SOTA) on challenging large-pose face alignment, and achieves competitive results on near-frontal face alignment. The model runs in real time.

2 Related Work

We review papers in three relevant areas: 3D face alignment from a single image, using multiple constraints in face alignment, and using multiple datasets for face alignment.

3D model fitting in face alignment Recently, there has been increasing attention on conducting face alignment by fitting a 3D face model to a single 2D image [10, 41, 15, 16, 35, 11]. In [4], Blanz and Vetter proposed the 3DMM to represent the shape and texture of a range of individuals, and analysis-by-synthesis methods were utilized to fit the 3DMM to the face image. In [41, 10], a set of cascaded CNN regressors with extracted 3D features is utilized to directly estimate the 3DMM parameters and the projection matrix. Liu et al. [15] proposed to utilize two sets of regressors applied alternately: one estimates updates of the 2D landmarks, and the other uses the landmark updates to estimate updates of the dense 3D shape. Compared to prior work, our method imposes additional constraints, which is the key to dense face alignment.

Multiple constraints in face alignment Other than landmarks, there are other features that are useful to describe the shape of a face, such as contours, pose, and face attributes. Unlike landmarks, those features are often not labeled in the datasets. Hence, the most crucial step in leveraging those features is to find the correspondence between the features and the 3D shape. In [20], multiple feature constraints in the cost function are utilized to estimate the 3D shape and texture from a 2D face image: 2D edges are detected by the Canny detector, and the vertices of the corresponding 3D edges are matched by Iterative Closest Point (ICP) to use this information. Furthermore, [24] provides a statistical analysis of 2D face contours and the 3D face shape under different poses.

There are a few works using constraints as separate side tasks to facilitate face alignment. In [31], a pose classification task, predicting faces as left profile, right profile, or frontal, is set up to assist face alignment. Even with such a rough pose estimation, this information boosts the alignment accuracy. Zhang et al. [34] jointly estimate 2D landmark updates with auxiliary attributes (e.g., gender, expression) in order to improve alignment accuracy. The “mirrorability” constraint is used in [32] to force the estimated 2D landmark updates to be consistent between an image and its mirror image. In contrast, we integrate a set of constraints in an end-to-end trainable CNN to perform 3D face alignment.

Figure 2: Architecture of the CNN in the proposed DeFA method. The structure of each ConvBlock is shown in the yellow area in the bottom-left corner. Each convolutional layer and fully connected layer is followed by one batch normalization (BN) layer and one leaky ReLU layer. The output dimension of each convolutional layer is shown at the bottom of each unit, and the stride of each pooling layer is listed below it.

Multiple datasets in face alignment Despite the clear advantages (e.g., avoiding dataset bias), there are only a few face alignment works utilizing multiple datasets, owing to the difficulty of leveraging different types of face landmark labeling. Zhu et al. [39] propose a transductive supervised descent method to transfer face annotations from a source dataset to a target dataset, and use both datasets for training. [25] combines a non-parametric appearance model, a shape model, and graph matching to estimate a superset of the landmarks; even though it achieves good results, it suffers from high computational cost. Zhang et al. [33] propose a deep regression network for predicting the superset of landmarks; for each training sample, sparse shape regression is adopted to generate the different types of landmark annotations. In general, most of the aforementioned prior works learn to map landmarks between two datasets, while our method can readily handle an arbitrary number of datasets, since the dense 3D face model bridges the discrepancy of landmark definitions across datasets.

3 Dense Face Alignment

In this section, we explain the details of the proposed dense face alignment method. We train a CNN to fit the dense 3D face shape to a single input face image. We utilize the dense 3D shape representation to impose multiple constraints, i.e., the landmark fitting constraint, the contour fitting constraint, and the SIFT pairing constraint, to train this CNN.

3.1 3D Face Representation

We represent the dense 3D shape of the face as S, which contains the 3D locations of its N vertices,

$$\mathbf{S} = \begin{bmatrix} x_1 & x_2 & \cdots & x_N \\ y_1 & y_2 & \cdots & y_N \\ z_1 & z_2 & \cdots & z_N \end{bmatrix}. \quad (1)$$

To compute S for a face, we follow the 3DMM to represent it by a set of 3D shape bases,

$$\mathbf{S} = \bar{\mathbf{S}} + \sum_{i=1}^{N_{id}} p^{id}_i \mathbf{S}^{id}_i + \sum_{i=1}^{N_{exp}} p^{exp}_i \mathbf{S}^{exp}_i, \quad (2)$$

where the face shape S is the summation of the mean shape $\bar{\mathbf{S}}$ and the weighted PCA shape bases $\mathbf{S}^{id}_i$, $\mathbf{S}^{exp}_i$ with corresponding weights $p^{id}_i$, $p^{exp}_i$. In our work, we use $N_{id} = 199$ shape bases to represent identity variances, such as tall/short, light/heavy, and male/female, and $N_{exp} = 29$ shape bases to represent expression variances, such as mouth-opening, smile, and kiss. Each basis has the same number of vertices, which are in anatomical correspondence across all bases. The mean shape and the identity bases are from the Basel Face Model [18], and the expression bases are from FaceWarehouse [7].
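To make the representation concrete, below is a minimal NumPy sketch of Eqn. 2. The paper's implementation is in MatConvNet; this Python rendering, including the array layout with bases stacked as columns, is our own assumption for illustration.

```python
import numpy as np

# Minimal sketch of Eqn. 2. S_mean is the 3N-dim mean shape (x, y, z stored
# contiguously per axis); S_id (3N x 199) and S_exp (3N x 29) stack the PCA
# bases as columns; p_id and p_exp are the estimated coefficient vectors.
def dense_shape(S_mean, S_id, p_id, S_exp, p_exp):
    s = S_mean + S_id @ p_id + S_exp @ p_exp  # weighted sum of bases, Eqn. 2
    return s.reshape(3, -1)                   # 3 x N vertex matrix S, Eqn. 1
```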

A subset of the vertices of the dense 3D face corresponds to the locations of the 2D landmarks U on the image,

$$\mathbf{U} = \begin{bmatrix} u_1 & u_2 & \cdots & u_{N_l} \\ v_1 & v_2 & \cdots & v_{N_l} \end{bmatrix}. \quad (3)$$

By considering a weak perspective projection, we can estimate the 2D projection of a face based on its 3D face shape. The projection matrix has six degrees of freedom and can model changes w.r.t. scale, rotation angles (pitch $\alpha$, yaw $\beta$, roll $\gamma$), and translations ($t_x$, $t_y$). The transformed dense face shape can be represented as,

$$\mathbf{A} = \mathbf{M} \begin{bmatrix} \mathbf{S} \\ \mathbf{1}^\top \end{bmatrix}, \quad (4)$$
$$\mathbf{M} = \begin{bmatrix} m_1 & m_2 & m_3 & m_4 \\ m_5 & m_6 & m_7 & m_8 \\ m_9 & m_{10} & m_{11} & m_{12} \end{bmatrix}, \quad (5)$$

where A can be orthographically projected onto the 2D plane to obtain U. Hence, the z-coordinate translation ($m_{12}$) is of no interest and is set to 0. The orthographic projection can be denoted by the matrix $\Pi = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$.

Given the properties of the projection matrix, the normalized third row of the projection matrix can be recovered as the cross product of the normalized first two rows,

$$[m_9, m_{10}, m_{11}] = \frac{[m_1, m_2, m_3]}{\|[m_1, m_2, m_3]\|} \times \frac{[m_5, m_6, m_7]}{\|[m_5, m_6, m_7]\|}. \quad (6)$$

Therefore, the dense shape of an arbitrary 2D face can be determined by the first two rows of the projection matrix, $\mathbf{m} = [m_1, \ldots, m_8]$, and the shape basis coefficients $\mathbf{p} = [\mathbf{p}^{id}; \mathbf{p}^{exp}]$. The learning of the dense 3D shape is thus turned into the learning of m and p, which is much more manageable in terms of dimensionality.
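A short NumPy sketch of Eqns. 4-6, again an illustrative Python rendering under our own naming assumptions, not the authors' code:

```python
import numpy as np

# Sketch of Eqns. 4-6: m holds the 8 estimated parameters (the first two rows
# of M), the third row is recovered via the cross product of the normalized
# first two rows (Eqn. 6), and Pi * A keeps the first two rows of A.
def project(m, S):
    M2 = m.reshape(2, 4)                          # rows [m1..m4] and [m5..m8]
    r1, r2 = M2[0, :3], M2[1, :3]
    r3 = np.cross(r1 / np.linalg.norm(r1), r2 / np.linalg.norm(r2))
    M = np.vstack([M2, np.append(r3, 0.0)])       # m12 set to 0
    A = M @ np.vstack([S, np.ones(S.shape[1])])   # Eqn. 4, 3 x N
    return A, A[:2]                               # A and U = Pi * A
```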

3.2 CNN Architecture

Due to the success of deep learning in computer vision, we employ a convolutional neural network (CNN) to learn the nonlinear mapping function from the input image I to the corresponding projection parameters m and shape parameters p. The estimated parameters can then be utilized to construct the dense D face shape.

Our CNN network has two branches, one for predicting m and another for p, shown in Fig. 2. Two branches share the first three convolutional blocks. After the third block, we use two separate convolutional blocks to extract task-specific features, and two fully connected layers to transfer the features to the final output. Each convolutional block is a stack of two convolutional layers and one max pooling layer, and each conv/fc layer is followed by one batch normalization layer and one leaky ReLU layer.
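As a rough illustration of this two-branch design, here is a PyTorch sketch. The channel widths, hidden sizes, and leak factor below are placeholders (the exact numbers are given in Fig. 2), and the paper's actual implementation uses MatConvNet.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out, slope=0.01):
    # ConvBlock of Fig. 2: two conv layers, each followed by BN and leaky
    # ReLU, then one max pooling layer.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.LeakyReLU(slope),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.LeakyReLU(slope),
        nn.MaxPool2d(2))

class DeFASketch(nn.Module):
    def __init__(self, n_m=8, n_p=199 + 29):          # m in R^8; p = [p_id; p_exp]
        super().__init__()
        self.shared = nn.Sequential(                  # first three shared blocks
            conv_block(3, 32), conv_block(32, 64), conv_block(64, 128))
        self.block_m, self.block_p = conv_block(128, 128), conv_block(128, 128)
        self.fc_m = nn.Sequential(nn.Flatten(), nn.LazyLinear(256),
                                  nn.LeakyReLU(0.01), nn.Linear(256, n_m))
        self.fc_p = nn.Sequential(nn.Flatten(), nn.LazyLinear(256),
                                  nn.LeakyReLU(0.01), nn.Linear(256, n_p))

    def forward(self, x):
        f = self.shared(x)                            # task-shared features
        return self.fc_m(self.block_m(f)), self.fc_p(self.block_p(f))
```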

In order to improve the CNN learning, we employ a loss function with multiple constraints: the Parameter Constraint (PC) minimizes the difference between the estimated and the ground-truth parameters; the Landmark Fitting Constraint (LFC) reduces the alignment error of the 2D landmarks; the Contour Fitting Constraint (CFC) enforces the match between the contour of the estimated 3D shape and the contour pixels of the input image; and the SIFT Pairing Constraint (SPC) encourages the SIFT feature point pairs of two face images to correspond to the same 3D vertices.

We define the overall loss function as,

$$J = J_{PC} + \lambda_{LFC} J_{LFC} + \lambda_{CFC} J_{CFC} + \lambda_{SPC} J_{SPC}, \quad (7)$$

where $\lambda_{LFC}$, $\lambda_{CFC}$, and $\lambda_{SPC}$ are regularization weights, and the parameter constraint (PC) loss is defined as,

$$J_{PC} = \left\| \begin{bmatrix} \mathbf{m} \\ \mathbf{p} \end{bmatrix} - \begin{bmatrix} \mathbf{m}^{*} \\ \mathbf{p}^{*} \end{bmatrix} \right\|_2^2, \quad (8)$$

where $\mathbf{m}^{*}$ and $\mathbf{p}^{*}$ denote the ground-truth parameters.

The Landmark Fitting Constraint (LFC) aims to minimize the difference between the estimated 2D landmarks and the ground-truth 2D landmark labeling U. Given 2D face images with a particular landmark markup, we first manually mark the indexes of the 3D face vertices that anatomically correspond to these landmarks; the collection of these indexes is denoted as $\mathbf{d}$. After the shape A is computed from Eqn. 4 with the estimated m and p, the 3D landmarks can be extracted from A by selecting the columns indexed by $\mathbf{d}$, denoted $\mathbf{A}_{\mathbf{d}}$. With the projection of $\mathbf{A}_{\mathbf{d}}$ onto the 2D plane, the LFC loss is defined as,

$$J_{LFC} = \frac{1}{N_l} \left\| \Pi \mathbf{A}_{\mathbf{d}} - \mathbf{U} \right\|_F^2, \quad (9)$$

where the subscript $F$ represents the Frobenius norm, and $N_l$ is the number of pre-defined landmarks.
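A NumPy sketch of Eqn. 9, reusing the dense_shape and project sketches above (hypothetical helper names, not the authors' code):

```python
import numpy as np

# Sketch of the LFC loss (Eqn. 9). d holds the manually marked vertex indexes
# for this dataset's landmark markup; U_gt is the 2 x N_l ground-truth
# landmark matrix; dense_shape and project are the sketches given earlier.
def lfc_loss(m, p_id, p_exp, d, U_gt, S_mean, S_id, S_exp):
    S = dense_shape(S_mean, S_id, p_id, S_exp, p_exp)
    _, U = project(m, S)                  # U = Pi * A, 2 x N
    diff = U[:, d] - U_gt                 # residual on the landmark subset A_d
    return (diff ** 2).sum() / len(d)     # squared Frobenius norm over N_l
```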

Figure 3: The CFC fitting process. (a) The image contour $\mathbf{C}_I$ is computed by the off-the-shelf edge detector; (b) the shape contour $\mathbf{C}_A$ is computed from the estimated 3D face shape; (c) contour correspondence is obtained via the closest-pair search, and the loss is calculated based on Eqn. 10.

3.3 Contour Fitting Constraint (CFC)

The Contour Fitting Constraint (CFC) aims to minimize the error between the projected outer contour (i.e., silhouette) of the dense 3D shape and the corresponding contour pixels in the input face image. The outer contour can be viewed as the boundary between the background and the 3D face when rendering the 3D shape onto a 2D plane. On databases such as AFLW, where labeled landmarks on the silhouette are lacking due to self-occlusion, this constraint can be extremely helpful.

To utilize this contour fitting constraint, we follow three steps: 1) detect the true contour in the 2D face image; 2) describe the contour vertices on the estimated 3D shape A; and 3) determine the correspondence between the true contour and the estimated one, and back-propagate the fitting error.

First of all, we adopt an off-the-shelf edge detector, HED [29], to detect the contour on the face image, denoted $\mathbf{C}_I$. HED has high accuracy at detecting significant edges, such as the face contour in our case. Additionally, in certain datasets, such as 300W [23] and AFLW-LFPA [10], additional landmark labelings on the contours are available. Thus we can further refine the detected edges by only retaining edges that are within a narrow band determined by those contour landmarks, as shown in Fig. 3(a). This preprocessing step is done offline before the training starts.

In the second step, the contour on the estimated 3D shape A can be described as a set of boundary vertices. A is computed from the estimated m and p parameters. Utilizing a Delaunay triangulation of shape A, an edge of a triangle is defined as a boundary edge if the adjacent triangles have a sign change in the z-values of their surface normals. This sign change indicates a change of visibility, so the edge can be considered part of the boundary. The vertices associated with such an edge are defined as boundary vertices, and their collection is denoted as $\mathbf{C}_A$. This process is shown in Fig. 3(b).
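The boundary-vertex test can be sketched as follows, a minimal NumPy illustration under our own data-structure assumptions (e.g., a precomputed edge-to-triangle adjacency map):

```python
import numpy as np

# A is the 3 x N transformed shape; tris is an F x 3 array of vertex indexes.
def triangle_normal_z(A, tris):
    v0, v1, v2 = A[:, tris[:, 0]], A[:, tris[:, 1]], A[:, tris[:, 2]]
    normals = np.cross((v1 - v0).T, (v2 - v0).T)   # per-triangle normals, F x 3
    return normals[:, 2]                           # z-components only

def boundary_vertices(A, tris, edge_to_tris):
    # edge_to_tris maps each interior edge (i, j) to its two adjacent triangles.
    nz = triangle_normal_z(A, tris)
    C_A = set()
    for (i, j), (t0, t1) in edge_to_tris.items():
        if nz[t0] * nz[t1] < 0:                    # visibility sign change
            C_A.update((i, j))                     # both endpoints are boundary
    return np.array(sorted(C_A))
```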

In the third step, the point-to-point correspondences between $\mathbf{C}_I$ and $\mathbf{C}_A$ are needed in order to evaluate the constraint. Given that we normally detect only partial contour pixels on 2D images, while the contour of the 3D shape is typically complete, we match each contour pixel in the 2D image with the closest point on the projected 3D shape contour, and then calculate the minimum distance. The sum of all minimum distances is the CFC error, as shown in Eqn. 10. To make the CFC loss differentiable, we rewrite Eqn. 10 to first compute the vertex index of the closest contour projection point, i.e., $\mathbf{d}^{*}$. Once $\mathbf{d}^{*}$ is determined, the CFC loss is differentiable, similar to Eqn. 9.

$$J_{CFC} = \frac{1}{|\mathbf{C}_I|} \sum_{\mathbf{u} \in \mathbf{C}_I} \min_{j \in \mathbf{C}_A} \left\| \Pi \mathbf{A}_j - \mathbf{u} \right\|_2^2. \quad (10)$$

Note that while $\mathbf{d}^{*}$ depends on the current estimation of A, for simplicity $\mathbf{d}^{*}$ is treated as constant when performing back-propagation w.r.t. m and p.
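In NumPy, the closest-pair matching and loss of Eqn. 10 might look like the following sketch (our own vectorized rendering):

```python
import numpy as np

# Sketch of the CFC loss (Eqn. 10). U_contour (2 x K) holds the detected
# contour pixels C_I; C_A holds boundary vertex indexes; A is the 3 x N shape.
def cfc_loss(A, C_A, U_contour):
    proj = A[:2, C_A]                                              # 2 x |C_A|
    d2 = ((U_contour[:, :, None] - proj[:, None, :]) ** 2).sum(0)  # K x |C_A|
    nearest = d2.argmin(axis=1)               # d*: held fixed during backprop
    return d2[np.arange(d2.shape[0]), nearest].mean()
```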

3.4 SIFT Pairing Constraint (SPC)

The SIFT Pairing Constraint (SPC) regularizes the predicted dense shapes to be consistent at significant facial points other than the pre-defined landmarks, such as edges, wrinkles, and moles. The scale-invariant feature transform (SIFT) descriptor is a classic local representation that is invariant to image scaling, noise, and illumination. It is widely used in many regression-based face alignment methods [30, 26] to extract local information.

Figure 4: The illustration of the SIFT Matching process.

In our work, SIFT descriptors are used to detect and represent the significant points within a face pair. A face pair can either come from the same person with different poses and expressions, or from the same image with different augmentations, e.g., cropping, rotation, and 3D augmentation, as shown in Fig. 4. The more face pairs we have, the stronger this constraint is. Given a pair of faces $\mathbf{I}^{(1)}$ and $\mathbf{I}^{(2)}$, we first detect and match SIFT points on the two face images. The matched SIFT points are denoted as $\mathbf{U}^{(1)}_{SIFT}$ and $\mathbf{U}^{(2)}_{SIFT}$.

With a perfect dense face alignment, the matched SIFT points would overlay exactly the same vertices in the estimated 3D face shapes, denoted as $\mathbf{A}^{(1)}$ and $\mathbf{A}^{(2)}$. In practice, to encourage this ideal correspondence and leverage it as a constraint, we first find the 3D vertices whose projections overlay the 2D SIFT points,

$$d^{(1)}_i = \arg\min_{j} \left\| \Pi \mathbf{A}^{(1)}_j - \mathbf{u}^{(1)}_i \right\|_2^2. \quad (11)$$

Here $\mathbf{u}^{(1)}_i$ is the $i$-th column of $\mathbf{U}^{(1)}_{SIFT}$, and $\mathbf{d}^{(1)}$ collects the indexes $d^{(1)}_i$. Similarly, we find $\mathbf{d}^{(2)}$ based on $\mathbf{U}^{(2)}_{SIFT}$. Now we define the SPC loss function as

$$J_{SPC} = \frac{1}{2 N_s} \left( \left\| \Pi \mathbf{A}^{(2)}_{\mathbf{d}^{(1)}} - \mathbf{U}^{(2)}_{SIFT} \right\|_F^2 + \left\| \Pi \mathbf{A}^{(1)}_{\mathbf{d}^{(2)}} - \mathbf{U}^{(1)}_{SIFT} \right\|_F^2 \right), \quad (12)$$

where $N_s$ is the number of matched SIFT pairs and $\mathbf{A}^{(2)}_{\mathbf{d}^{(1)}}$ is computed using the vertex indexes $\mathbf{d}^{(1)}$ found in the first image. As shown in Fig. 4, we map SIFT points from one face to the other and compute their distances w.r.t. the matched SIFT points on the other face. With the mapping from both images, we have two terms in the loss function of Eqn. 12.
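A sketch of the SPC pipeline using OpenCV's SIFT implementation; this is an illustrative Python stand-in for the paper's pipeline, and the ratio-test threshold and helper names are our assumptions:

```python
import cv2
import numpy as np

# Detect and match SIFT points between a face pair (Fig. 4).
def matched_sift_points(img1, img2, ratio=0.75):
    sift = cv2.SIFT_create()
    k1, f1 = sift.detectAndCompute(img1, None)
    k2, f2 = sift.detectAndCompute(img2, None)
    pairs = cv2.BFMatcher().knnMatch(f1, f2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    U1 = np.float32([k1[m.queryIdx].pt for m in good]).T   # 2 x Ns
    U2 = np.float32([k2[m.trainIdx].pt for m in good]).T
    return U1, U2

def nearest_vertices(A, U):
    # Eqn. 11: for each SIFT point, the vertex whose projection is closest.
    d2 = ((U[:, :, None] - A[:2, None, :]) ** 2).sum(0)    # Ns x N
    return d2.argmin(axis=1)

def spc_loss(A1, A2, U1, U2):
    # Eqn. 12: map vertices bound in one image into the other, symmetrically.
    idx1, idx2 = nearest_vertices(A1, U1), nearest_vertices(A2, U2)
    return (((A2[:2, idx1] - U2) ** 2).sum()
            + ((A1[:2, idx2] - U1) ** 2).sum()) / (2 * U1.shape[1])
```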

4 Experimental Results

4.1 Datasets

We evaluate our proposed method on four benchmark datasets: AFLW-LFPA [9], AFLW2000-3D [41], 300W [23], and IJB-A [12]. All datasets used in our training and testing phases are listed in Tab. 1.

AFLW-LFPA: AFLW contains around 25,000 face images with yaw angles between ±90°, and each image is labeled with up to 21 visible landmarks. In [9], a subset of AFLW with a balanced distribution of the yaw angle is introduced as AFLW-LFPA. It consists of 3,901 training images and 1,299 testing images. Each image is labeled with 13 additional landmarks.

AFLW2000-3D: Prepared by [41], this dataset contains the first 2,000 images of the AFLW dataset, with yaw angles between ±90°. Each image is labeled with 68 landmarks. Both this dataset and AFLW-LFPA are widely used for evaluating large-pose face alignment.

IJB-A: IARPA Janus Benchmark A (IJB-A) [12] is an in-the-wild dataset with 500 subjects, where each face is labeled with three landmarks: two at the eye centers and one on the nose. While this dataset is mainly used for face recognition, the large dataset size and the challenging variations (e.g., yaw and image resolution) make it suitable for evaluating face alignment as well.

300W: The 300W dataset [23] integrates multiple databases with a standard 68-landmark markup, including AFW [43], LFPW [3], HELEN [36], and IBUG [23]. It is the most widely used database for evaluating near-frontal face alignment.

COFW [6]: This dataset includes near-frontal face images with occlusion. We use this dataset in training to make the model more robust to occlusion.

Caltech10k [2]: It contains four labeled landmarks: two at the eye centers, one on the top of the nose, and one at the mouth center. We do not use the mouth center landmark, since no corresponding vertex on the 3D shape exists for it.

LFW [14]: Despite having no landmark labels, LFW can be used to evaluate how a dense face alignment method performs, via the corresponding SIFT points between two images of the same individual.

Database            Landmarks   Pose
Training:
300W [23]           68          near-frontal
300W-LP [41]        68          large-pose
Caltech10k [2]      4           near-frontal
AFLW-LFPA [9]       34          large-pose
COFW [6]            29          near-frontal
Testing:
AFLW-LFPA [9]       34          large-pose
AFLW2000-3D [41]    68          large-pose
300W [23]           68          near-frontal
IJB-A [12]          3           large-pose
LFW [14]            -           near-frontal
Table 1: The list of face datasets used for training (top) and testing (bottom).

4.2 Experimental setup

While utilizing multiple datasets is beneficial for learning an effective model, it also poses challenges to the training procedure. To make the training more manageable, we train our DeFA model in three stages, with the intention of gradually increasing the datasets and the employed constraints. At stage 1, we use 300W-LP to train our DeFA network with the parameter constraint (PC). At stage 2, we additionally include samples from Caltech10k [2] and COFW [6] to continue training our network with the additional landmark fitting constraint (LFC). At stage 3, we fine-tune the model with the SPC and CFC constraints. For large-pose face alignment, we fine-tune the model with the AFLW-LFPA training set; for near-frontal face alignment, we fine-tune the model with the 300W training set. All samples in the third stage are augmented multiple times with random in-plane rotation and random noise on the center, width, and height of the initial bounding box. Tab. 2 shows the datasets and constraints that are used at each stage.

Dataset          Stage 1   Stage 2    Stage 3
300W-LP [41]     PC        PC, LFC    -
Caltech10k [2]   -         LFC        -
COFW [6]         -         LFC        -
AFLW-LFPA [9]    -         -          LFC, SPC, CFC
300W [23]        -         -          LFC, SPC, CFC
Table 2: The list of datasets used in each training stage, and the employed constraints for each dataset: Parameter Constraint (PC); Landmark Fitting Constraint (LFC); SIFT Pairing Constraint (SPC); Contour Fitting Constraint (CFC).

Implementation details: Our DeFA model is implemented with MatConvNet [27]. The network is trained for a fixed number of epochs at each of the three stages. We set an initial global learning rate and reduce it when the training error approaches a plateau; the minibatch size, weight decay, and leak factor for the leaky ReLU are fixed across stages. In stage 2, regularization weights balance PC and LFC; in stage 3, the regularization weights $\lambda_{LFC}$, $\lambda_{SPC}$, and $\lambda_{CFC}$ balance LFC, SPC, and CFC.

Evaluation metrics: For performance evaluation and comparison, we use two metrics for normalizing the MSE. We follow the normalization method in [10] for large-pose faces, which normalizes the MSE by the bounding box size; we term this metric “NME-lp”. For near-frontal-view datasets such as 300W, we use the inter-ocular distance for normalizing the MSE, termed “NME-nf”.
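The two metrics can be sketched as follows; here we assume the bounding-box size is the square root of its area, a common convention that is our assumption rather than a detail stated in the text:

```python
import numpy as np

# pred and gt are 2 x L landmark matrices.
def nme(pred, gt, norm):
    return np.linalg.norm(pred - gt, axis=0).mean() / norm

def nme_lp(pred, gt, bbox_w, bbox_h):
    # Large-pose metric: normalize by the bounding-box size.
    return nme(pred, gt, np.sqrt(bbox_w * bbox_h))

def nme_nf(pred, gt, left_eye, right_eye):
    # Near-frontal metric: normalize by the inter-ocular distance.
    return nme(pred, gt, np.linalg.norm(left_eye - right_eye))
```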

4.3 Experiments on Large-pose Datasets

To evaluate the algorithm on large-pose datasets, we use the AFLW-LFPA, AFLW2000-3D, and IJB-A datasets. The results are presented in Tab. 3, where the performance of the baseline methods is either reported from the published papers or obtained by running publicly available source code. For AFLW-LFPA, our method outperforms the best baselines by a large margin. For AFLW2000-3D, our method also shows a large improvement, especially for images with large yaw angles. For the IJB-A dataset, even though we can only compare the accuracy on the three labeled landmarks, our method still reaches a higher accuracy. Note that the best performing baselines, 3DDFA and PAWF, share a similar overall approach in estimating m and p, and also aim for large-pose face alignment. The consistently superior performance of our DeFA indicates that we have advanced the state of the art in large-pose face alignment.

Dataset        CFSS [38]   PIFA [9]   CCL [40]   3DDFA [41]   PAWF [10]   Ours
AFLW-LFPA      -                                                          3.86
AFLW2000-3D    -           -          -          -                        4.50
IJB-A          -           -          -          -                        6.03
Table 3: The benchmark comparison (NME-lp) on three large-pose face alignment datasets.

4.4 Experiments on Near-frontal Datasets

Even though the proposed method can handle large-pose alignment, to show its performance on near-frontal datasets, we evaluate our method on the 300W dataset. The results of the state-of-the-art methods on both the common and challenging sets are shown in Tab. 4. To find the corresponding landmarks on the cheek, we apply the landmark marching algorithm [42] to move contour landmarks from self-occluded locations to the silhouette. Our method is the second best on the challenging set. In general, the performance of our method is comparable to other methods that are designed for near-frontal datasets, especially under the following consideration: most prior face alignment methods do not employ shape constraints such as a 3DMM, which can be an advantage for near-frontal face alignment but a disadvantage for large-pose face alignment. The only exception in Tab. 4 is 3DDFA [41], which attempts to overcome the shape constraint by using additional SDM-based fine-tuning. It is a strong testimony of our model that DeFA, without further fine-tuning, outperforms both 3DDFA and its fine-tuned version with SDM.

Method           Common set   Challenging set   Full set
RCPR [6]
SDM [30]
LBF [19]
CFSS [38]        4.73                           5.76
RAR [28]         4.12         8.35              4.94
3DDFA [41]
3DDFA+SDM [41]
DeFA                          9.38
Table 4: The benchmark comparison (NME-nf) on the 300W dataset. The top two performances are in bold.

4.5 Ablation Study

To analyze the effectiveness of the DeFA method, we design two studies to compare the influence of each part of DeFA and the improvement from adding each dataset.

Tab. 5 shows the consistent improvement achieved by utilizing more datasets and constraints in the different stages, according to Tab. 2, on both testing datasets. It demonstrates the advantage and ability of our method in leveraging more datasets. The accuracy of our method on AFLW2000-3D consistently improves by adding more datasets. For the AFLW-LFPA dataset, our method achieves relative improvements by utilizing the datasets of stage 2 and of stage 3 over the first stage, respectively; including the datasets from both the second and third stages yields the largest relative improvement, reaching an NME of 3.86. Comparing the second and third rows of Tab. 5 shows that CFC and SPC are more effective than LFC, due to the utilization of more facial matching in CFC and SPC.

Training stages                AFLW2000-3D   AFLW-LFPA
stage 1
stage 1 + stage 2
stage 1 + stage 3
stage 1 + stage 2 + stage 3    4.50          3.86
Table 5: The NME-lp when utilizing more datasets.

The second study shows the performance improvement achieved by using the proposed constraints. We train models with different combinations of active constraints and test them on the AFLW-LFPA test set. Due to time constraints, for this experiment we did not apply the augmentation of the third stage's dataset. We show the results in the left of Fig. 5. Comparing the LFC+SPC and LFC+CFC performances shows that CFC is more helpful than SPC; the reason is that CFC is more effective at correcting the pose of the face, which leads to a larger reduction of landmark error. Using all constraints achieves the best performance.

Finally, to evaluate the influence of the SIFT pairing constraint (SPC), we use the datasets of all three stages to train our method. We select pairs of images from the IJB-A dataset and compute the NME-lp of all matched SIFT points according to Eqn. 12. The right plot in Fig. 5 illustrates the CED curves of NME-lp for the models trained with and without the SIFT pairing constraint. This result shows that for images within a certain range of NME-lp, the SPC is helpful.

Figure 5: Left: The effect of constraints in enhancing the accuracy on the AFLW-LFPA testing set. The NME-lp of each setting is shown in the legend. Right: The influence of the SIFT pairing constraint (SPC) in improving the performance for selected pairs from IJB-A.
Figure 6: The estimated dense 3D shapes and their landmarks with visibility labels for different datasets. From top to bottom, the results on the AFLW-LFPA, IJB-A, and 300W datasets are shown in two rows each. The green landmarks are visible, and the red landmarks show the estimated locations of invisible landmarks. Our model can fit to diverse poses, resolutions, and expressions.

Part of the reason DeFA works well is that it receives “dense” supervision. To show this, we take all matched SIFT points in the 300W-LP dataset, find their corresponding vertices, and plot the log of the number of SIFT points on each of the 3D face vertices. As shown in Fig. 7, SPC utilizes SIFT points covering the whole 3D shape, and the points in the highly textured areas are used substantially. We can expect these SIFT constraints to act like anchors that guide the learning of the model fitting process.

Figure 7: The log plot of the number of matched SIFT points in the 300W-LP training set. It shows that the SIFT constraints cover the whole face, especially the highly textured areas.

5 Conclusion

We propose a large-pose face alignment method that estimates accurate dense 3D face shapes by utilizing a deep neural network. In addition to facial landmark fitting, we propose to align contours and SIFT feature point pairs to extend the fitting beyond facial landmarks. Our method is able to leverage multiple datasets with different landmark markups and numbers of landmarks. We achieve state-of-the-art performance on three challenging large-pose datasets and competitive performance on near-frontal datasets.

References

  • [1] A. T. Trãn, T. Hassner, I. Masi, and G. Medioni. Regressing robust and discriminative 3D morphable models with a very deep neural network. arXiv preprint arXiv:1612.04904, 2016.
  • [2] A. Angelova, Y. Abu-Mostafa, and P. Perona. Pruning training sets for learning of object categories. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pages 494–501. IEEE, 2005.
  • [3] P. N. Belhumeur, D. W. Jacobs, D. J. Kriegman, and N. Kumar. Localizing parts of faces using a consensus of exemplars. IEEE transactions on pattern analysis and machine intelligence, 35(12):2930–2940, 2013.
  • [4] V. Blanz and T. Vetter. A morphable model for the synthesis of 3d faces. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pages 187–194. ACM Press/Addison-Wesley Publishing Co., 1999.
  • [5] J. Booth, E. Antonakos, S. Ploumpis, G. Trigeorgis, Y. Panagakis, and S. Zafeiriou. 3D face morphable models “in-the-wild”. 2017.
  • [6] X. P. Burgos-Artizzu, P. Perona, and P. Dollár. Robust face landmark estimation under occlusion. In Proceedings of the IEEE International Conference on Computer Vision, pages 1513–1520, 2013.
  • [7] C. Cao, Y. Weng, S. Zhou, Y. Tong, and K. Zhou. Facewarehouse: A 3d facial expression database for visual computing. IEEE Transactions on Visualization and Computer Graphics, 20(3):413–425, 2014.
  • [8] C. Ding, J. Choi, D. Tao, and L. S. Davis. Multi-directional multi-level dual-cross patterns for robust face recognition. IEEE transactions on pattern analysis and machine intelligence, 38(3):518–531, 2016.
  • [9] A. Jourabloo and X. Liu. Pose-invariant 3d face alignment. In Proceedings of the IEEE International Conference on Computer Vision, pages 3694–3702, 2015.
  • [10] A. Jourabloo and X. Liu. Large-pose face alignment via cnn-based dense 3d model fitting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4188–4196, 2016.
  • [11] A. Jourabloo and X. Liu. Pose-invariant face alignment via CNN-based dense 3D model fitting. International Journal of Computer Vision, April 2017.
  • [12] B. F. Klare, B. Klein, E. Taborsky, A. Blanton, J. Cheney, K. Allen, P. Grother, A. Mah, and A. K. Jain. Pushing the frontiers of unconstrained face detection and recognition: Iarpa janus benchmark a. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1931–1939, 2015.
  • [13] J. Kossaifi, Y. Tzimiropoulos, and M. Pantic. Fast and exact newton and bidirectional fitting of active appearance models. IEEE Transactions on Image Processing, 2016.
  • [14] E. Learned-Miller, G. B. Huang, A. RoyChowdhury, H. Li, and G. Hua. Labeled faces in the wild: A survey. In Advances in Face Detection and Facial Image Analysis, pages 189–248. Springer, 2016.
  • [15] F. Liu, D. Zeng, Q. Zhao, and X. Liu. Joint face alignment and 3d face reconstruction. In European Conference on Computer Vision, pages 545–560. Springer, 2016.
  • [16] J. McDonagh and G. Tzimiropoulos. Joint face detection and alignment with a deformable hough transform model. In European Conference on Computer Vision, pages 569–580. Springer, 2016.
  • [17] S. Milborrow and F. Nicolls. Locating facial features with an extended active shape model. In European conference on computer vision, pages 504–513. Springer, 2008.
  • [18] P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter. A 3d face model for pose and illumination invariant face recognition. In Advanced video and signal based surveillance, 2009. AVSS’09. Sixth IEEE International Conference on, pages 296–301. IEEE, 2009.
  • [19] S. Ren, X. Cao, Y. Wei, and J. Sun. Face alignment at 3000 fps via regressing local binary features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1685–1692, 2014.
  • [20] S. Romdhani and T. Vetter. Estimating 3d shape and texture using pixel intensity, edges, specular highlights, texture constraints and a prior. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), volume 2, pages 986–993. IEEE, 2005.
  • [21] J. Roth, Y. Tong, and X. Liu. Unconstrained 3D face reconstruction. In Proc. IEEE Computer Vision and Pattern Recognition, Boston, MA, June 2015.
  • [22] J. Roth, Y. Tong, and X. Liu. Adaptive 3d face reconstruction from unconstrained photo collections. IEEE Transactions on Pattern Analysis and Machine Intelligence, PP(99):1–1, 2016.
  • [23] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic. 300 faces in-the-wild challenge: The first facial landmark localization challenge. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 397–403, 2013.
  • [24] D. Sánchez-Escobedo, M. Castelán, and W. A. Smith. Statistical 3d face shape estimation from occluding contours. Computer Vision and Image Understanding, 142:111–124, 2016.
  • [25] B. M. Smith and L. Zhang. Collaborative facial landmark localization for transferring annotations across datasets. In European Conference on Computer Vision, pages 78–93. Springer, 2014.
  • [26] G. Tzimiropoulos and M. Pantic. Gauss-newton deformable part models for face alignment in-the-wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1851–1858, 2014.
  • [27] A. Vedaldi and K. Lenc. Matconvnet – convolutional neural networks for matlab. In Proceeding of the ACM Int. Conf. on Multimedia, 2015.
  • [28] S. Xiao, J. Feng, J. Xing, H. Lai, S. Yan, and A. Kassim. Robust facial landmark detection via recurrent attentive-refinement networks. In European Conference on Computer Vision, pages 57–72. Springer, 2016.
  • [29] S. Xie and Z. Tu. Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 1395–1403, 2015.
  • [30] X. Xiong and F. De la Torre. Supervised descent method and its applications to face alignment. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 532–539, 2013.
  • [31] H. Yang, W. Mou, Y. Zhang, I. Patras, H. Gunes, and P. Robinson. Face alignment assisted by head pose estimation. In Proceedings of the British Machine Vision Conference 2015, BMVC 2015, Swansea, UK, September 7-10, 2015, pages 130.1–130.13, 2015.
  • [32] H. Yang and I. Patras. Mirror, mirror on the wall, tell me, is the error small? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4685–4693, 2015.
  • [33] J. Zhang, M. Kan, S. Shan, and X. Chen. Leveraging datasets with varying annotations for face alignment via deep regression network. In Proceedings of the IEEE International Conference on Computer Vision, pages 3801–3809, 2015.
  • [34] Z. Zhang, P. Luo, C. C. Loy, and X. Tang. Learning deep representation for face alignment with auxiliary attributes. IEEE transactions on pattern analysis and machine intelligence, 38(5):918–930, 2016.
  • [35] R. Zhao, Y. Wang, C. F. Benitez-Quiroz, Y. Liu, and A. M. Martinez. Fast and precise face alignment and 3d shape reconstruction from a single 2d image. In European Conference on Computer Vision, pages 590–603. Springer, 2016.
  • [36] E. Zhou, H. Fan, Z. Cao, Y. Jiang, and Q. Yin. Extensive facial landmark localization with coarse-to-fine convolutional network cascade. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 386–391, 2013.
  • [37] K. Zhou, Y. Weng, and C. Cao. Method for real-time face animation based on single video camera, June 7 2016. US Patent 9,361,723.
  • [38] S. Zhu, C. Li, C. Change Loy, and X. Tang. Face alignment by coarse-to-fine shape searching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4998–5006, 2015.
  • [39] S. Zhu, C. Li, C. C. Loy, and X. Tang. Transferring landmark annotations for cross-dataset face alignment. arXiv preprint arXiv:1409.0602, 2014.
  • [40] S. Zhu, C. Li, C.-C. Loy, and X. Tang. Unconstrained face alignment via cascaded compositional learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3409–3417, 2016.
  • [41] X. Zhu, Z. Lei, X. Liu, H. Shi, and S. Li. Face alignment across large poses: A 3d solution. In Proc. IEEE Computer Vision and Pattern Recognition, Las Vegas, NV, June 2016.
  • [42] X. Zhu, Z. Lei, J. Yan, D. Yi, and S. Z. Li. High-fidelity pose and expression normalization for face recognition in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 787–796, 2015.
  • [43] X. Zhu and D. Ramanan. Face detection, pose estimation, and landmark localization in the wild. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2879–2886. IEEE, 2012.