Funnel-Structured Cascade for Multi-View Face Detection with Alignment-Awareness

Abstract

Multi-view face detection in open environments is a challenging task due to diverse variations of face appearances and shapes. Most multi-view face detectors depend on multiple models and organize them in parallel, pyramid, or tree structures, which trade off accuracy against time cost. Aiming at a more favorable multi-view face detector, we propose a novel funnel-structured cascade (FuSt) detection framework. In a coarse-to-fine flavor, our FuSt consists of, from top to bottom, 1) multiple view-specific fast LAB cascades for extremely quick face proposal, 2) multiple coarse MLP cascades for further candidate window verification, and 3) a unified fine MLP cascade with shape-indexed features for accurate face detection. Compared with other structures, on the one hand, the proposed one uses multiple computationally efficient distributed classifiers to propose a small number of candidate windows but with a high recall of multi-view faces. On the other hand, by using a unified MLP cascade to examine proposals of all views in a centralized style, it provides a favorable solution for multi-view face detection with high accuracy and low time cost. Besides, the FuSt detector is alignment-aware and performs a coarse facial part prediction, which is beneficial for subsequent face alignment. Extensive experiments on two challenging datasets, FDDB and AFW, demonstrate the effectiveness of our FuSt detector in both accuracy and speed.


1 Introduction

Fast and accurate detection of human faces is greatly demanded in various applications. While current detectors can easily detect frontal faces, they become less satisfactory when confronted with complex situations, e.g. detecting faces viewed from various angles, in low resolution, or with occlusion. Multi-view face detection is especially challenging, because faces can be captured from almost any angle, leading to significant divergence in facial appearances and shapes.

Along with the steady progress of face detection, there have been mainly three categories of face detectors with different highlights. The most classic are those following the boosted cascade framework [23, 14, 1], originating in the seminal work of Viola and Jones [20]. These detectors are quite computationally efficient, benefiting from the attentional cascade and fast feature extraction. Then, to explicitly deal with large appearance variations, deformable part models (DPM) [4] were introduced to simultaneously model global and local face features [29, 21, 17], providing an intuitive way to cover intra-class variations and thus being more robust to deformations due to pose, facial expressions, etc. DPM has established a reputation for its promising results on challenging datasets, but detection with DPM is time-consuming, inspiring research on speed-up techniques [21]. Recently, detectors based on neural networks, e.g. convolutional neural networks (CNN) [3, 12, 25, 27, 19, 7], have attracted much attention and achieved impressive accuracy on the challenging FDDB dataset [6], as they enjoy the natural advantage of strong capability in non-linear feature learning. The weakness of CNN-based detectors is their high computational cost due to intensive convolution and complex nonlinear operations.

Figure 1: Parallel structure
Figure 2: Pyramid structure
Figure 3: Tree structure
Figure 4: Different structures for multi-view face detection.

Most works mentioned above focus on designing an effective detector for generic faces without consideration for specific scenarios such as multi-view face detection. In order to handle faces in different views, a straightforward solution is to use multiple face detectors in parallel [14, 23, 17], one for each view, as shown in Figure 1. The parallel structure requires each candidate window to be classified by all models, resulting in an increase in both the overall computational cost and the false alarm rate. To alleviate this issue, each model needs to be elaborately trained and tuned for better discrimination between face and non-face windows, ensuring faster and more accurate removal of non-face windows.

More efficiently, the multiple models for multi-view face detection can be organized in a pyramid [15] or tree structure [5], as shown in Figures 2 and 3, forming a coarse-to-fine classification scheme. In such structures, the root classifier performs the binary classification of face vs. non-face, and then at subsequent layers, faces are divided into multiple sub-categories with respect to views at a finer granularity, each of which is handled by an independent model. The pyramid structure is actually a compressed parallel structure with shared nodes in higher layers, or a stack of parallel structures with different view partitions. Therefore pyramid-structured detectors suffer from problems similar to those of parallel-structured ones. Tree-structured detectors are different in that branching schemes are adopted to avoid evaluating all classifiers at each layer, but this can easily lead to missed detections due to incorrect branching. To relax the dependence on accurate branching, Huang et al. [5] design a vector boosting algorithm that allows multiple branches to be taken.

Considering the appearance divergence of multi-view faces from the perspective of feature representation, the intra-class variations are mainly due to features extracted at positions with inconsistent semantics. For instance, in Figure 5, three faces in different views are shown, and a window at the same position on different faces contains completely distinct semantics, yielding features that describe an eye, the nose, and a cheek respectively. Thus there is no good correspondence between representations of faces in different views. Chen et al. [1] compare densely extracted features with shape-indexed features and find the latter to be more discriminative. By using features at aligned landmarks, faces in different views can be more compactly represented and better distinguished from non-face regions.

Figure 5: A window at the same position on three faces in varied views contains totally distinct semantics.

To provide a more effective framework for multi-view face detection, we design a novel funnel-structured cascade (FuSt) multi-view face detector, which enjoys both high accuracy and fast speed. The FuSt detector, as shown in Figure 6, features a funnel-like structure, being wider on the top and narrower at the bottom, which is evidently different from previous ones. At early stages from the top, multiple fast but coarse classifiers run in parallel to rapidly remove a large proportion of non-face windows. Each of the parallel classifiers is trained specifically for faces within a small range of views, so they are able to ensure a high recall of multi-view faces. By contrast, at subsequent stages, fewer classifiers, which are slightly more time-consuming but with higher discriminative capability, are employed to verify the remaining candidate windows. Gathering the small number of windows surviving from previous stages, at the last stages at the bottom, a unified multilayer perceptron (MLP) cascade with shape-indexed features is leveraged to output the final face detection results. From top to bottom, the number of models used decreases while the model complexity and discriminative capability increase, forming a coarse-to-fine framework for multi-view face detection.

Figure 6: An overview of our proposed funnel-structured cascade framework for multi-view face detection.

Compared with previous multi-view face detectors, the proposed FuSt detector is superior in that a more effective framework is used to organize multiple models. The contributions of our work compared to the existing literature are as follows.

  • First, a unified MLP cascade is leveraged as the last few stages to examine proposals provided by previous stages, which addresses the problem of the increased false alarm rate resulting from using multiple models in other structures, e.g. the parallel or tree structure.

  • Second, the proposed FuSt detector operates in a gathering style instead of adopting any branching mechanism as in pyramid- or tree-structured detectors. Therefore it can naturally avoid missing detections caused by incorrect branching and reach a high recall.

  • Third, in the final unified MLP cascade, features are extracted at semantically consistent positions by integrating shape information, rather than at fixed positions as in conventional face detectors; thus multi-view faces can be better distinguished from non-face regions. Moreover, the extra shape output of our FuSt detector provides a good initialization for subsequent alignment.

  • Fourth, extensive experiments on challenging face detection datasets, including FDDB [6] and AFW [29], demonstrate that the FuSt detector achieves both good performance and fast speed.

The rest of the paper is organized as follows. Section 2 describes the proposed FuSt detector in detail, explaining the design of different stages from top to bottom. Section 3 presents the experimental results on two challenging face detection datasets together with analysis on the structure and shape prediction. The final Section 4 concludes the paper and discusses the future work.

2 Funnel-Structured Cascade Multi-View Face Detector

An overview of the framework of the FuSt detector is presented in Figure 6. Specifically, the FuSt detector consists of three coarse-to-fine stages designed in consideration of both detection accuracy and computational cost: the Fast LAB Cascade classifier, the Coarse MLP Cascade classifier, and the Fine MLP Cascade classifier. An input image is scanned following the sliding window paradigm, and each window goes through the detector stage by stage.

The Fast LAB Cascade classifiers aim to quickly remove most non-face windows while retaining a high recall of face windows. The subsequent Coarse MLP Cascade classifiers further refine the candidate windows at low cost. Finally, the unified Fine MLP Cascade accurately identifies faces using expressive shape-indexed features. In addition, it predicts landmark positions, which are beneficial for subsequent alignment.
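The three-stage flow described above can be sketched in a few lines. This is an illustrative sketch only: all classifier callables (`lab_cascades`, `coarse_mlps`, `fine_cascade`) are hypothetical stand-ins for the trained models, not the paper's actual implementation.

```python
def fust_detect(windows, lab_cascades, coarse_mlps, fine_cascade):
    """Illustrative funnel flow: many fast view-specific models at the top,
    fewer coarse models in the middle, one unified model at the bottom.
    Each callable returns True when a window should be kept."""
    # Stage 1: a window survives if ANY view-specific LAB cascade accepts it
    survivors = [w for w in windows if any(c(w) for c in lab_cascades)]
    # Stage 2: coarse MLP verification over the much smaller survivor set
    survivors = [w for w in survivors if any(m(w) for m in coarse_mlps)]
    # Stage 3: one unified fine MLP cascade makes the final decision
    return [w for w in survivors if fine_cascade(w)]
```

Note that the number of models shrinks stage by stage while each stage's classifiers grow more discriminative, which is exactly the funnel shape.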

2.1 Fast LAB Cascade

For real-time face detection, the major concern in the sliding window paradigm is the large quantity of candidate windows to be examined. For instance, to detect faces above a minimum size in a typical image, over a million windows may need to be examined. Hence it is quite necessary to propose, at minimal time cost, a small number of windows that are most likely to contain faces.

A good option for fast face proposal is boosted cascade classifiers, which are very efficient for the face detection task, as shown by Viola and Jones [20]. Yan et al. [22] propose the efficient LAB (Locally Assembled Binary) feature, which only considers the relative relations between Haar features and can be accelerated with a look-up table. Extracting an LAB feature in a window requires only one memory access, resulting in constant, i.e. O(1), time complexity. Therefore we employ the LAB feature with boosted cascade classifiers, leading to extremely fast LAB cascade classifiers, which are able to rapidly reject a large proportion of non-face windows at the very beginning.
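To make the "one memory access per feature" idea concrete, the sketch below precomputes an LBP-style binary-comparison code map over the image, so that any later classifier reads a feature with a single array access. This is a simplified stand-in for the actual LAB feature of [22]; the block size and the 8-neighbor comparison layout are assumptions for illustration.

```python
import numpy as np

def lab_like_feature_map(img):
    """Precompute an LAB-like code map: each cell's 8-bit code assembles
    binary comparisons between neighboring block sums and a center block.
    Classifiers then index this map instead of re-extracting features."""
    # Integral image for O(1) rectangular block sums
    ii = np.pad(img.astype(np.int64).cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    def block(y, x, h, w):
        return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
    H, W = img.shape
    s = 4  # block size (assumed)
    fmap = np.zeros((H - 3 * s, W - 3 * s), dtype=np.uint8)
    neighbors = [(0, 0), (0, s), (0, 2 * s), (s, 0),
                 (s, 2 * s), (2 * s, 0), (2 * s, s), (2 * s, 2 * s)]
    for y in range(H - 3 * s):
        for x in range(W - 3 * s):
            c = block(y + s, x + s, s, s)  # center block sum
            code = 0
            for k, (dy, dx) in enumerate(neighbors):
                code |= (block(y + dy, x + dx, s, s) > c) << k
            fmap[y, x] = code
    return fmap
```

With the map computed once per image, every view-specific cascade can share it, which is what keeps the multi-model first stage cheap.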

Although the LAB feature is quite computationally efficient, it is less expressive and has difficulty modeling the complicated variations of multi-view faces for a high recall of face windows. Therefore, we adopt a divide-and-conquer strategy by dividing the difficult multi-view face detection problem into multiple easier single-view face detection problems. Specifically, multiple LAB cascade classifiers, one for each view, are leveraged in parallel and the final candidate face windows are the union of surviving windows from all of them.

Formally, denote the whole training set containing multi-view faces as $\mathcal{D}$, which is partitioned into $V$ subsets according to view angles, denoted as $\mathcal{D}_1, \mathcal{D}_2, \dots, \mathcal{D}_V$. With each $\mathcal{D}_i$, an LAB cascade classifier $c_i$ is trained, which attempts to detect faces in the $i$-th view angle. For a window $w$ within an input image, whether it is possible to be a face is determined with all $V$ LAB cascade classifiers as follows:

$y = c_1(w) \vee c_2(w) \vee \cdots \vee c_V(w)$,   (1)

where $y = 1$ and $y = 0$ indicate whether $w$ is determined to be a face candidate or not. As can be seen from Eq. (1), a window is rejected if and only if it is classified as negative by all $V$ LAB cascade classifiers. Using multiple models costs more time, but all models can share the same LAB feature map for feature extraction. Therefore additional models add only minor cost, and the overall speed remains very fast while a high recall is reached.
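Eq. (1) amounts to a logical OR over the view-specific cascades, evaluated against one shared feature map. The sketch below illustrates this; the classifier callables and the dictionary-style feature map are hypothetical stand-ins.

```python
def propose_multiview(windows, view_classifiers, feature_map):
    """Eq. (1) as code: a window is kept iff at least one view-specific
    cascade accepts it (y = c_1(w) v ... v c_V(w)). All classifiers read
    the same precomputed feature map, so adding views adds classifier
    evaluations but no extra feature extraction."""
    kept = []
    for w in windows:
        # Reject only if ALL view-specific cascades say non-face
        if any(c(w, feature_map) for c in view_classifiers):
            kept.append(w)
    return kept
```

Because `any` short-circuits, a window accepted by its own view's cascade never pays for the remaining views.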

Besides the high recall, the parallel structure also allows more flexibility in view partitions. Since it does not suffer from missed detections caused by incorrect branching as in the tree structure, a rough rather than an accurate view partition is enough. In other words, degenerate partitions with incorrect view labeling of faces have only a minor influence on the overall recall of all LAB cascade classifiers. It is even feasible to adopt an automatic view partition obtained from clustering or based on other factors.

2.2 Coarse MLP Cascade

After the stages of the LAB cascade, most non-face windows have been discarded, and the remaining ones are too hard for the simple LAB feature to handle. Therefore, at subsequent stages, the candidate windows are further verified by more sophisticated classifiers, i.e. MLPs with SURF (Speeded-Up Robust Features) [13]. To avoid imposing too much computational cost, small networks are exploited to perform a better but still coarse examination.

SURF features are more expressive than LAB features, but remain computationally efficient, benefiting from the integral image trick. Therefore face windows can be better differentiated from non-face windows at low time cost. Furthermore, an MLP is used with SURF features for window classification; equipped with nonlinear activation functions, it can better model the non-linear variations of multi-view faces and diverse non-face patterns.

MLP is a type of neural network consisting of an input layer, an output layer, and one or more hidden layers in between. An $L$-layer MLP can be formulated as

$\mathbf{h}_{l+1} = f(\mathbf{W}_l \mathbf{h}_l + \mathbf{b}_l), \quad l = 1, \dots, L-2$,   (2)
$y = f(\mathbf{W}_{L-1} \mathbf{h}_{L-1} + \mathbf{b}_{L-1})$,   (3)

where $\mathbf{h}_1 = \mathbf{x}$ is the input, i.e. the SURF features of a candidate window; $\mathbf{W}_l$ and $\mathbf{b}_l$ are the weights and biases of the connections from layer $l$ to layer $l+1$ respectively. The activation function $f$ is commonly designed as a nonlinear function such as the sigmoid function $f(z) = 1/(1 + e^{-z})$. As can be seen in Eq. (2) and (3), units in the hidden layers and the output layer are both equipped with nonlinear functions, so the MLP is endowed with strong capability to model highly nonlinear transformations. The training of an MLP aims to minimize the mean squared error between the predictions and the true labels:

$\min \frac{1}{N} \sum_{i=1}^{N} \| y(\mathbf{x}_i) - g_i \|^2$,   (4)

where $\mathbf{x}_i$ is the feature vector of the $i$-th training sample and $g_i$ the corresponding binary label, representing whether the sample is a face or not. The problem in Eq. (4) can be easily solved by gradient descent under the back-propagation framework [18].
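The forward pass of Eq. (2)-(3) and the loss of Eq. (4) can be written out directly. This is a minimal NumPy sketch under the assumption of dense layers with sigmoid units throughout, as in the formulation above; shapes and names are illustrative, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    # f(z) = 1 / (1 + e^{-z}), the activation used in Eq. (2)-(3)
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, weights, biases):
    """Eq. (2)-(3): every hidden layer and the output layer apply the
    sigmoid nonlinearity to an affine transform of the previous layer."""
    h = x
    for W, b in zip(weights, biases):
        h = sigmoid(W @ h + b)
    return h

def mse_loss(weights, biases, X, g):
    """Eq. (4): mean squared error over N samples (columns of X) with
    binary labels g."""
    preds = np.array([mlp_forward(X[:, i], weights, biases)
                      for i in range(X.shape[1])])
    return np.mean((preds.ravel() - g) ** 2)
```

Training would then follow standard back-propagation [18], i.e. gradient descent on `mse_loss` with respect to `weights` and `biases`.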

We employ multiple coarse MLPs to construct an attentional cascade, in which the number of features used and the size of the network gradually increase stage by stage. The SURF features used at each stage are selected using group sparsity [2]. Since the MLP cascade classifiers have a stronger ability to model face and non-face variations, windows passing through multiple LAB cascade classifiers can be handled together by one model, i.e. one MLP cascade can connect to multiple LAB cascade classifiers.

2.3 Fine MLP Cascade with Shape-Indexed Features

Figure 7: The Fine MLP Cascade with shape-indexed feature. The input of each stage of MLP is the shape-indexed feature extracted according to the shape predicted by the previous stage (or mean shape for the first stage). The output includes the class label indicating whether the window is a face or not as well as a more accurate shape, which is used to extract more distinctive shape-indexed features for the next stage.

The small number of windows surviving the previous stages are quite challenging, among which face and non-face windows are more difficult to distinguish. Considering that multiple models running in parallel tend to introduce more false alarms, it is desirable to process the remaining windows in a unified way. Hence we leverage one single MLP cascade following the previous Coarse MLP Cascade classifiers.

Prominent divergence exists in the appearances of multi-view faces, mainly because features are extracted at positions that are not semantically consistent. For example, the central region of a frontal face covers the nose, while that of a profile face is part of the cheek, as shown in Figure 5. To address this issue, we adopt shape-indexed features extracted at semantically consistent positions as the input of the Fine MLP Cascade classifier. As shown in Figure 8, four semantic positions are selected, corresponding to the facial landmarks of the left and right eye centers, the nose tip, and the mouth center. For profile faces, the invisible eye is assumed to be at the same position as the visible one. The SIFT (Scale-Invariant Feature Transform) [16] feature is computed at each semantic position on candidate windows; SIFT is robust to large face variations such as pose, translation, etc.

Figure 8: The four semantic positions (landmarks) used to extract shape-indexed feature: left and right eye center, nose tip and mouth center.

With the more expressive shape-indexed features, larger MLPs with a higher capacity for nonlinearity are used to perform finer discrimination between face and non-face windows. Moreover, different from the previous ones, the larger MLPs simultaneously predict both the class label, indicating whether a candidate window is a face, and the shape. An extra term for shape prediction errors is added to the objective function in Eq. (4). The new optimization problem is the following:

$\min \frac{1}{N} \sum_{i=1}^{N} \left( \| y_c(\phi(\mathbf{x}_i, s_i)) - g_i \|^2 + \lambda \| y_s(\phi(\mathbf{x}_i, s_i)) - \hat{s}_i \|^2 \right)$,   (5)

where $y_c$ corresponds to the face classification output, and $y_s$ the shape prediction output; $\phi(\mathbf{x}_i, s_i)$ indicates the shape-indexed feature (i.e. SIFT) extracted from the $i$-th training sample according to a mean shape or predicted shape $s_i$; $\hat{s}_i$ is the groundtruth shape for the sample; $\lambda$ is the weighting factor maintaining the balance between the two types of errors, set according to the dimension of the shape. As can be seen from Eq. (5), a shape more accurate than the input can be obtained with the MLP. Hence a subsequent model can exploit more compact shape-indexed features extracted according to the refined shape. In this way, across the multiple cascaded MLPs, the shapes used for feature extraction become more accurate stage by stage, yielding increasingly distinctive shape-indexed features and making multi-view faces more distinguishable from non-face regions. The process is shown in Figure 7.
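The stage-by-stage refinement loop described above can be sketched as follows. The `stages` and `extract_sift` callables, as well as the 0.5 rejection threshold, are hypothetical stand-ins used only to illustrate the control flow of the Fine MLP Cascade.

```python
def fine_cascade(window, stages, mean_shape, extract_sift):
    """Sketch of the Fine MLP Cascade: each stage consumes features indexed
    by the current shape estimate and emits (face score, refined shape);
    the refined shape indexes features for the next stage."""
    shape = mean_shape
    for stage in stages:
        feat = extract_sift(window, shape)  # shape-indexed feature phi(x, s)
        score, shape = stage(feat)          # joint class + shape output
        if score < 0.5:                     # assumed rejection threshold
            return None, shape              # non-face: reject early
    return score, shape                     # face, with final landmarks
```

The returned shape is what makes the detector alignment-aware: it can seed a subsequent alignment model directly, rather than starting from a bounding box.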

Additionally, predicting shapes has made the detector alignment-aware in the sense that an alignment model can be initialized with landmark coordinates directly instead of bounding boxes of detected faces.

3 Experiments

To evaluate the proposed FuSt detector for multi-view face detection, as well as to analyse the detector in various aspects, extensive experiments are performed on two challenging face datasets.

3.1 Experimental settings

The most popular dataset for evaluating face detectors is FDDB [6]. It contains labeled faces collected from news images. FDDB is challenging in the sense that the labeled faces appear with great variations in view, skin color, facial expression, illumination, occlusion, resolution, etc.

Another widely used face detection dataset is AFW [29]. This set contains images from Flickr with annotated faces. It is a small yet challenging set, since faces appear in cluttered backgrounds and with large variations in viewpoint.

For evaluation of detection accuracy, we apply the officially provided tool to our detection results on FDDB to obtain ROC curves, and draw precision-recall curves for the results on AFW, following most existing works.

For the training data of the FuSt detector, we use faces from MSRA-CFW [28], PubFig [10], and AFLW [8] as positive samples, and randomly crop patches from collected images not containing faces as negative samples. To augment the training set with more variations, we add random distortions to the face samples. Besides, all samples are resized to for training.

We use stages with a total of LAB features for the Fast LAB Cascade, and stages for the Coarse MLP Cascade, which exploit , and SURF features respectively. SURF features are extracted from local patches, which would cover redundant information if they overlapped considerably. Therefore a large step is chosen between adjacent SURF patches, resulting in a pool of SURF features on a sample image. The three stages of MLP each have only one hidden layer, with hidden units in the first-stage MLP and hidden units in the second- and third-stage MLPs. The final Fine MLP Cascade contains stages of single-hidden-layer MLPs with hidden units, using SIFT features extracted around the four semantic positions mentioned in Section 2.3.

3.2 Analysis of the funnel-structured cascade

We first conduct a detailed analysis of the proposed FuSt detector to evaluate its performance from various perspectives. Specifically, we compare different view partitions, verify the effectiveness of shape-indexed features, assess the accuracy of shape predictions, and compare the final MLP cascade with two widely used CNN models.

Different view partitions

At the beginning, we adopt a divide-and-conquer strategy to treat faces in different views with separate LAB cascade classifiers. This makes it possible for such simple classifiers to reject a large proportion of non-face windows while retaining a high overall recall of faces. To explore the impact of different view partitions, we compare two typical partition schemes: (1) a five-view partition, i.e. left full profile, left half profile, near frontal, right half profile, and right full profile; (2) a two-view partition, i.e. near frontal and profile. Note that in the two-view partition scheme, left and right profile faces are mixed together, and half profile faces are mixed with frontal ones. To supplement the training set with more half profile face images, we also use some images from the CelebA dataset [30]. The recall of faces under the two schemes is presented in Table 1. Here we manually partition FDDB into two subsets of profile and frontal faces to evaluate on them separately. The former contains the profile faces, and the latter, i.e. the frontal face subset, contains the remaining faces, including both near frontal and some half profile faces.

View Recall of Faces (%)
Partition Frontal Profile Overall
Views
Views
Table 1: Recall of faces under different view partitions, with the same large proportion of candidate windows removed

As can be seen, the recall of faces with the five-view partition, especially the recall of profile faces, is higher than that with the two-view partition when both schemes remove the same proportion of candidate windows. As expected, the finer partition allows each classifier to better cover the variations within its view of faces, which is beneficial for obtaining higher recall. This demonstrates the effectiveness of using a reasonably wide top in the proposed funnel structure.

Funnel structure vs parallel structure

To demonstrate the effectiveness of the proposed funnel structure, which employs a unified model to handle candidate windows coming from different classifiers, we compare the parallel and funnel structures on frontal and half profile faces at the coarse MLP cascade stage. Specifically, for the parallel structure, we train three MLPs, one for each of the three views, each following its corresponding Fast LAB Cascade. For the funnel structure, only one MLP is trained for frontal, left half profile, and right half profile faces. The parallel structure obtains a recall of with windows per image, while the funnel structure reaches a higher recall of with only windows per image. This demonstrates that a unified model can effectively suppress false positives with less sacrifice of recall.

Shape-indexed feature

To verify the effectiveness of the shape-indexed feature, we train two types of two-stage Fine MLP Cascade classifiers, which use the mean shape and the refined shape respectively to extract shape-indexed features. Namely, one MLP cascade uses SIFT extracted according to the mean shape as input at both stages, while the other uses SIFT extracted with refined, and thus more accurate, shapes as input at the second stage.

Fixing the previous stages, we compare the two types of Fine MLP Cascades on FDDB. The performance curves are presented in Figure 9. As expected, using more accurate shapes brings a performance gain, demonstrating the effectiveness of shape-indexed features for multi-view faces. Shape-indexed features from two faces have good semantic consistency, thus reducing intra-class variations and increasing inter-class distinctions, which makes it easier to distinguish face from non-face windows.

We also evaluate the coarse shape predictions on AFW. Figure 10 compares the predicted shape with the mean shape. With only two stages of refinement, the predicted shapes achieve significant improvement over the mean shape, leading to more semantically consistent shape-indexed features. When followed by an alignment model, the predicted shape from our FuSt detector can be directly used as a good initialization, which is preferable to only having bounding boxes of detected faces. Figure 11 gives several examples of predicted shapes on faces in different views.

Figure 9: Comparison between shape-indexed features extracted with mean shape and refined shape
Figure 10: Comparison between predicted shape and mean shape on AFW
Figure 11: Examples of predicted shapes on AFW
Figure 12: Comparison of MLP cascade, LeNet and AlexNet

MLP vs CNN

Powerful CNN models have achieved good results in the face detection task [3, 12, 25], so we also compare MLPs with CNNs under the proposed funnel-structured cascade framework. Two commonly used CNN models are considered in the comparison, i.e. LeNet [11] and AlexNet [9], and they serve as replacements for the final Fine MLP Cascade. The input sizes of LeNet and AlexNet are and respectively, and the output layers are adjusted for two-class classification of face vs. non-face. Both CNN models are fine-tuned using the same data as that used to train the MLP cascade. The performance curves on FDDB are given in Figure 12. As shown, the MLP cascade outperforms LeNet by a large margin and also performs better than the 8-layer AlexNet. This is most likely because the semantically consistent shape-indexed features are more effective than the learned convolutional features. The result that an MLP with hand-crafted features can outperform deep CNN models implies that a model designed with the problem's structure in mind can be better than an off-the-shelf CNN.

Detection Speed

Our FuSt detector enjoys a clear advantage in detection speed owing to the coarse-to-fine framework design, and is faster than complex CNN-based detectors. When detecting faces no smaller than a minimum size on a VGA image, our detector takes ms with a sliding window using a single thread on an i7 CPU. The Fast LAB Cascade and Coarse MLP Cascade cost only ms, and the final Fine MLP Cascade ms. By contrast, Cascade CNN takes ms over an image pyramid with a scaling factor of on a CPU [12]. Moreover, further speed-ups of the FuSt detector can easily be obtained with a GPU, since a large amount of data parallelism exists in our framework, e.g. feature extraction for each window, the inner product operations in the MLPs, etc.

Methods            DR@100FPs   Speed   Landmark Prediction
Cascade CNN [12]               ms      No
Our FuSt                       ms      Yes
Table 2: Comparison with Cascade CNN [12] in different aspects. The DR@100FPs is computed on FDDB, and the speed is compared with the minimum face size set to and the image size .

Discussion

Compared with CNN-based methods, the proposed funnel structure is a general framework for organizing multiple models, adopting a divide-and-conquer strategy to handle multi-view faces. The MLPs used within the framework can also be replaced by CNNs. Another aspect that makes our FuSt detector different is that hand-crafted shape-indexed features are adopted based on explicit consideration of semantically consistent feature representation. By contrast, CNNs learn the feature representation merely from data without considering semantic consistency.

3.3 Comparison with the state-of-the-art

Figure 13: FDDB
Figure 14: AFW
Figure 15: Comparison with the state-of-the-art on two face detection datasets: (a) FDDB, (b) AFW.

To further evaluate the performance of the FuSt detector on multi-view face detection, we compare it with state-of-the-art methods on FDDB and AFW, as shown in Figure 15. The methods compared include cascade-structured detectors such as Joint Cascade [1], ACF [23], SURF Cascade [14], and HeadHunter [17]; DPM-based detectors such as Fastest DPM [21] and TSM [29]; and deep-network-based detectors such as DDFD [3], Cascade CNN [12], CCF [24], and FacenessNet [25].

Compared with multi-view face detectors like SURF Cascade, ACF, and HeadHunter, which all employ a parallel structure, our FuSt detector performs better on FDDB, indicating the superiority of the funnel structure. With as few as false positives, the FuSt detector achieves a high recall of , which is quite favorable in practical applications. Compared with the impressive deep-network-based methods, we achieve performance comparable to that of Cascade CNN. However, as stated in Section 3.2, our FuSt detector enjoys a more favorable speed, taking only ms to process a VGA image with a single thread on a CPU, whereas Cascade CNN costs ms on a CPU. On the AFW dataset, our PR curve is comparable to or better than those of most methods, further demonstrating that the FuSt detector is favorable for multi-view face detection.

To further investigate the potential of our FuSt detector on FDDB, we trained a new detector, FuSt-wf, with the more diverse WIDER FACE dataset [26]. WIDER FACE covers many more face variations, which is beneficial for obtaining higher performance. Since WIDER FACE does not provide landmark annotations for faces, we trained only one stage of the unified MLP cascade, using the mean shape. As shown in Figure 15, FuSt-wf achieves an obvious performance boost, further demonstrating the effectiveness of the funnel-structure design. With more and higher-quality data, the FuSt detector can continue to improve.

Figure 16: Examples of detections on FDDB and AFW (Blue: near frontal faces, Orange: profile faces)

4 Conclusions and Future Work

In this paper, we have proposed a novel multi-view face detection framework, i.e. the funnel-structured cascade (FuSt), which has a coarse-to-fine flavor and is alignment-aware. The proposed FuSt detector operates in a gathering style: the early stages of multiple parallel models reach a high recall of faces at low cost, and the final unified MLP cascade effectively reduces false alarms. As evaluated on two challenging datasets, the FuSt detector shows good performance, and its speed is also quite favorable. In addition, the alignment-aware nature of our FuSt detector can be leveraged to obtain a good initial shape for subsequent alignment models at minor cost.

For future work, the funnel-structured framework can be further enhanced with specifically designed CNN models, which have a good capability of learning feature representations automatically from data. It is also worth trying different hand-crafted shape-indexed features, e.g. the multi-scale pixel difference features used in [1], and comparing them with CNN-learned features. Considering the alignment-aware nature of the FuSt detector, designing a joint face detection and alignment framework is also a promising direction.

Acknowledgements

This work was partially supported by the 973 Program under contract No. 2015CB351802 and the Natural Science Foundation of China under contracts Nos. 61173065, 61222211, 61402443, and 61390511.

References

  1. D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun. Joint cascade face detection and alignment. In European Conference on Computer Vision (ECCV), pages 109–122, 2014.
  2. Y. C. Eldar, P. Kuppinger, and H. Bolcskei. Block-sparse signals: Uncertainty relations and efficient recovery. IEEE Transactions on Signal Processing (TSP), 58(6):3042–3054, 2010.
  3. S. S. Farfade, M. Saberian, and L.-J. Li. Multi-view face detection using deep convolutional neural networks. In International Conference on Multimedia Retrieval (ICMR), 2015.
  4. P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 32(9):1627–1645, 2010.
  5. C. Huang, H. Ai, Y. Li, and S. Lao. High-performance rotation invariant multiview face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 29(4):671–686, 2007.
  6. V. Jain and E. Learned-Miller. FDDB: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst, 2010.
  7. X. Jiang, Y. Pang, X. Li, and J. Pan. Speed up deep neural network based pedestrian detection by sharing features across multi-scale models. Neurocomputing, 185:163 – 170, 2016.
  8. M. Köstinger, P. Wohlhart, P. M. Roth, and H. Bischof. Annotated Facial Landmarks in the Wild: A large-scale, real-world database for facial landmark localization. In IEEE International Conference on Computer Vision Workshops (ICCVW), pages 2144–2151, 2011.
  9. A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1097–1105. 2012.
  10. N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar. Attribute and simile classifiers for face verification. In IEEE International Conference on Computer Vision (ICCV), pages 365–372, 2009.
  11. Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  12. H. Li, Z. Lin, X. Shen, J. Brandt, and G. Hua. A convolutional neural network cascade for face detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
  13. J. Li, T. Wang, and Y. Zhang. Face detection using SURF cascade. In IEEE International Conference on Computer Vision Workshops (ICCVW), pages 2183–2190, 2011.
  14. J. Li and Y. Zhang. Learning SURF cascade for fast and accurate object detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3468–3475, 2013.
  15. S. Z. Li, L. Zhu, Z. Zhang, A. Blake, H. Zhang, and H. Shum. Statistical learning of multi-view face detection. In European Conference on Computer Vision (ECCV), pages 67–81, 2002.
  16. D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (IJCV), 60(2):91–110, 2004.
  17. M. Mathias, R. Benenson, M. Pedersoli, and L. Van Gool. Face detection without bells and whistles. In European Conference on Computer Vision (ECCV), pages 720–735, 2014.
  18. M. Schmidt. minFunc: unconstrained differentiable multivariate optimization in Matlab, 2005. http://www.cs.ubc.ca/~schmidtm/Software/minFunc.html.
  19. Q.-Q. Tao, S. Zhan, X.-H. Li, and T. Kurihara. Robust face detection using local CNN and SVM based on kernel combination. Neurocomputing, 2016.
  20. P. Viola and M. J. Jones. Robust real-time face detection. International Journal of Computer Vision (IJCV), 57(2):137–154, 2004.
  21. J. Yan, Z. Lei, L. Wen, and S. Z. Li. The fastest deformable part model for object detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2497–2504, 2014.
  22. S. Yan, S. Shan, X. Chen, and W. Gao. Locally assembled binary (LAB) feature with feature-centric cascade for fast and accurate face detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–7, 2008.
  23. B. Yang, J. Yan, Z. Lei, and S. Z. Li. Aggregate channel features for multi-view face detection. In IEEE International Joint Conference on Biometrics (IJCB), pages 1–8, 2014.
  24. B. Yang, J. Yan, Z. Lei, and S. Z. Li. Convolutional channel features. In IEEE International Conference on Computer Vision (ICCV), 2015.
  25. S. Yang, P. Luo, C. C. Loy, and X. Tang. From facial parts responses to face detection: A deep learning approach. In IEEE International Conference on Computer Vision (ICCV), 2015.
  26. S. Yang, P. Luo, C. C. Loy, and X. Tang. WIDER FACE: A face detection benchmark. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  27. S. Zhan, Q.-Q. Tao, and X.-H. Li. Face detection using representation learning. Neurocomputing, 187:19–26, 2016.
  28. X. Zhang, L. Zhang, X.-J. Wang, and H.-Y. Shum. Finding celebrities in billions of web images. IEEE Transactions on Multimedia (TMM), 14(4):995–1007, 2012.
  29. X. Zhu and D. Ramanan. Face detection, pose estimation, and landmark localization in the wild. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2879–2886, 2012.
  30. Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In IEEE International Conference on Computer Vision (ICCV), 2015.