Scalable Angular Discriminative Deep Metric Learning for Face Recognition

Bowen Wu, Huaming Wu, Monica M.Y. Zhang
Center for Combinatorics, Nankai University, Tianjin 300071, China
Center for Applied Mathematics, Tianjin University, Tianjin 300072, China
Abstract

With the development of deep learning, Deep Metric Learning (DML) has achieved great improvements in face recognition. However, the widely used softmax loss often brings large intra-class variations during training, while feature normalization is exploited only in testing to compute the pair similarities. To bridge the gap, we constrain the intra-class cosine similarity between the features and the weight vectors in softmax loss to be larger than a margin in the training step, and extend this idea in four directions. First, we explore the effect of a hard sample mining strategy. To alleviate the human labor of tuning the margin hyper-parameter, a self-adaptive margin updating strategy is proposed. Then, a normalized version is given to take full advantage of the cosine similarity constraint. Furthermore, we strengthen the former constraint to force the intra-class cosine similarity to be larger than the mean inter-class cosine similarity by a margin in the exponential feature projection space. Extensive experiments on the Labeled Faces in the Wild (LFW), YouTube Faces (YTF) and IARPA Janus Benchmark A (IJB-A) datasets demonstrate that the proposed methods outperform mainstream DML methods and approach the state-of-the-art performance.

Keywords:
Deep metric learning, face recognition, convolutional neural network, intra-class cosine similarity, inter-class cosine similarity, self-adaptive margin

1 Introduction

Face recognition has been one of the most challenging and attractive areas in computer vision, due to its close relationship with practical applications such as biometrics and surveillance. However, the face recognition problem is far from solved, since it involves face detection, face alignment, feature extraction (or face representation) and classification, each of which influences the final performance. Feature extraction plays a particularly important role. Conventional feature extraction methods (such as LBP, Gabor and SIFT) always work with suitable metric distances (such as the Euclidean distance and cosine distance). However, these methods are not discriminative enough to meet the demands of more complex face recognition scenarios, and the situation may be worse when they are paired with inappropriate metric distances.

Figure 1: The face recognition pipeline in this paper.

The Convolutional Neural Network (CNN), which has emerged as a powerful feature extraction method, has drawn much attention due to its excellent performance in the computer vision community. Several Deep Metric Learning (DML) methods, which unify deep learning and metric learning into a joint learning framework, have been proposed recently and set new state-of-the-arts in various tasks, such as object classification oh2016deep, image retrieval zhao2015deep, person re-identification yi2014deep, and so on. Specifically, DML has surpassed human abilities on some benchmark datasets in the field of face recognition sun2014deep; schroff2015facenet; wen2016discriminative.

Face recognition can be divided into two tasks, namely face identification and face verification (Fig. 1). The former aims to classify an input image to a specific identity, while the latter determines whether a pair of face images comes from the same identity. In typical end-to-end CNN based face recognition training, the Euclidean distance is used to measure the similarities between features, whereas the cosine similarity, or normalized inner product, is widely used in testing. As illustrated in liu2017sphereface, the Euclidean distance or a Euclidean margin-based loss is not always suitable for learning discriminative features, and using normalized features to compute the pair similarities for testing can boost the performance. These properties motivate some works wang2017normface; chunjie2017cosine to incorporate the cosine similarity constraint into the training stage to keep consistency with testing. Going one step further, in this paper we force the intra-class cosine similarity to be larger than a given margin. Combined with the separability of softmax loss, our original method achieves accuracy improvements on the Labeled Faces in the Wild (LFW) and YouTube Faces (YTF) datasets. Some previous works sun2014deep1; schroff2015facenet have clarified the importance of hard sample mining in training CNNs, but they have not presented specific comparative results on whether to use it or not. Here, we compare the effect of the cosine similarity constraint on the original training set against the misclassified hard samples of softmax loss. Owing to the diversity of data and the ubiquitously heterogeneous distribution, a global cosine similarity constraint is insufficient to faithfully characterize the true feature distance, as stated in huang2016local. A self-adaptive margin updating technique is therefore exploited, so that the local uniqueness of each identity is considered and the human labor of adjusting the margin is largely saved. To acquire more discriminative features, only forcing the intra-class cosine similarity to be larger than a margin does not seem to be the best choice, so we improve the former constraint to a more powerful one, which enforces the intra-class cosine similarity to be larger than the mean of the nearest neighboring inter-class cosine similarities in the normalized exponential feature projection space.

In conclusion, our major contributions can be summarized as follows: 1) We propose a novel metric loss function that directly forces the intra-class cosine similarity to be larger than a fixed margin, so that the training process coincides with the normalized testing criterion. 2) We conduct a contrastive experiment to show the effect of a hard sample mining strategy on the proposed loss function. 3) A self-adaptive margin strategy is incorporated to strengthen the supervision in the updated feature space. 4) To avoid the side effects of infinitely growing feature norms, we further normalize the features and weight vectors of softmax loss to the same value in each mini-batch. 5) A more aggressive metric loss function that considers the intra-class and inter-class variations simultaneously is proposed to learn discriminative features. Finally, we conduct extensive experiments on three face recognition benchmark datasets, namely LFW huang2014labeled, YTF wolf2011face and IARPA Janus Benchmark A (IJB-A) klare2015pushing, to verify the excellent performance of our approaches.

2 The Proposed Approaches

In this section, we reveal the phenomenon of large intra-class variations in deep features learned with softmax loss, and propose several novel metric loss functions to alleviate this problem.

2.1 Recalling Softmax Loss

From the viewpoint of probability, the softmax function converts a vector of real weights to a probability distribution. The original softmax loss is the cross entropy of the softmax function, which can be written as

$$\mathcal{L}_S = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{W_{y_i}^{T}x_i + b_{y_i}}}{\sum_{j=1}^{n} e^{W_{j}^{T}x_i + b_j}} \qquad (1)$$

where $N$ is the number of training samples, $n$ is the number of classes, $x_i$ is the feature of the $i$-th sample, $y_i$ is the corresponding class label in range $[1, n]$, $W$ and $b$ are the weight matrix and bias vector of the last inner-product layer before softmax loss, $W_j$ is the $j$-th column of $W$ and $b_j$ is the corresponding bias term. For simplicity, we omit the bias term in the following experiments, as in wang2017normface. Understandably, if all classes are well-separated, $W_j$ will roughly correspond to the mean of features in the $j$-th class, and it can also be recognized as the center of the $j$-th class in general cases.
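To make the notation concrete, here is a minimal sketch of Eq. (1) in PyTorch (an illustration only, not the paper's Caffe implementation; the toy shapes and values below are ours):

```python
import torch
import torch.nn.functional as F

def softmax_loss(features, weights, labels):
    """Cross entropy over the last inner-product layer (bias omitted, as in the paper).

    features: (N, d) deep features x_i
    weights:  (d, n) weight matrix W, whose j-th column W_j acts as the center of class j
    labels:   (N,)   class labels y_i
    """
    logits = features @ weights              # W_j^T x_i for every class j
    return F.cross_entropy(logits, labels)   # -1/N * sum_i log softmax(logits)_{y_i}

# Toy usage: 4 samples, 8-D features, 3 classes.
x = torch.randn(4, 8)
W = torch.randn(8, 3, requires_grad=True)
y = torch.tensor([0, 2, 1, 0])
softmax_loss(x, W, y).backward()
```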

Figure 2: Visualization of the deeply learned 2-D features on MNIST with (a) MNIST network and (b) LeNet++ network.

To visualize the effect of softmax loss, we conduct a contrastive experiment on the MNIST dataset lecun1998mnist with two different CNNs, namely the LeNet++ network wen2016discriminative and the MNIST network liu2016large. We reduce the dimension of the last feature layer to 2, so the features can be plotted directly on a 2-D surface. The resulting 2-D features of the training and testing sets with the two networks are shown in Fig. 2. We can see that the deeply learned features are separable under the supervision of softmax loss, but not discriminative enough. In particular, there exist significant intra-class variations in the feature space of the LeNet++ network, which coincides with the phenomenon elaborated in wang2017normface that softmax loss encourages the features to have bigger magnitudes.

2.2 LMC Loss and HLMC Loss

To remove the large intra-class variations of softmax loss and to keep the consistency between training and testing, we first propose the Large Margin Cosine (LMC) loss function, which enforces the intra-class cosine similarity between a sample and the corresponding weight vector in the last inner-product layer before softmax loss to be larger than a given margin $m$. The LMC loss function is formulated as follows:

$$\mathcal{L}_{LMC} = \frac{1}{N}\sum_{i=1}^{N}\max\left(m - \cos\theta_{y_i,i},\ 0\right) \qquad (2)$$

where $\cos\theta_{y_i,i} = \frac{W_{y_i}^{T}x_i}{\|W_{y_i}\|\,\|x_i\|}$ denotes the cosine similarity between $x_i$ and $W_{y_i}$, and $m \in [-1, 1]$ is the given margin.

Specifically, the joint supervision of softmax loss and LMC loss is necessary to train the CNN for discriminative feature learning. The final loss function for training is

$$\mathcal{L} = \mathcal{L}_S + \lambda\,\mathcal{L}_{LMC} \qquad (3)$$

where $\lambda$ is a weighting parameter used for balancing the two loss functions.
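A minimal sketch of the joint objective of Eqs. (2)-(3), assuming the hinge reading of the constraint; the margin value below is a placeholder, while lam=0.005 follows the best LMC setting reported in Section 3.2:

```python
import torch
import torch.nn.functional as F

def lmc_loss(features, weights, labels, m=0.35):
    """Penalize intra-class cosine similarities that fall below the margin m (Eq. (2))."""
    cos = F.normalize(features, dim=1) @ F.normalize(weights, dim=0)  # (N, n) cosine similarities
    intra = cos[torch.arange(labels.size(0)), labels]                 # cos(theta) between x_i and W_{y_i}
    return F.relu(m - intra).mean()

def joint_loss(features, weights, labels, m=0.35, lam=0.005):
    """Softmax loss plus the lambda-weighted LMC term (Eq. (3))."""
    return F.cross_entropy(features @ weights, labels) + lam * lmc_loss(features, weights, labels, m)
```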

It is widely observed that there are often many more easy examples than meaningful hard ones, so an effective data sampling strategy is crucial to ensure the learning efficiency of deep features. Therefore, the Hard Large Margin Cosine (HLMC) loss function is explored to impose the previous intra-class cosine similarity constraint only on the hard samples. Here, we refer to the hard samples as the ones misclassified by softmax loss, which avoids the costly computational complexity of the pair/triplet sample mining strategies adopted in the contrastive/triplet losses sun2014deep1; schroff2015facenet.

$$\mathcal{L}_{HLMC} = \frac{1}{N}\sum_{i=1}^{N}\delta_i \max\left(m - \cos\theta_{y_i,i},\ 0\right) \qquad (4)$$

where $\delta_i \in \{0, 1\}$ is a misclassified sample indicator, and $\delta_i = 1$ if and only if $x_i$ is misclassified by softmax loss.
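A sketch of the HLMC variant under the same assumptions; following the Eq. (4) reading, the hinge is averaged over all N samples rather than only over the hard ones:

```python
import torch
import torch.nn.functional as F

def hlmc_loss(features, weights, labels, m=0.35):
    """LMC hinge restricted to samples misclassified by the softmax classifier (Eq. (4))."""
    logits = features @ weights
    delta = (logits.argmax(dim=1) != labels).float()                  # delta_i = 1 iff x_i is misclassified
    cos = F.normalize(features, dim=1) @ F.normalize(weights, dim=0)
    intra = cos[torch.arange(labels.size(0)), labels]
    return (delta * F.relu(m - intra)).mean()                         # averaged over all N samples
```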

2.3 MALMC Loss

Previous metric loss functions, such as the contrastive loss, triplet loss and L-Softmax loss, often introduce additional hyper-parameters as fixed margins throughout training, and an intractable hyper-parameter search is crucial to successful training. Following the work yi2014learning, we would otherwise have to suspend training and search for a new margin every several epochs. In this part, we provide a Margin-Adaptive Large Margin Cosine (MALMC) method, which gives each class an independently updated margin, set as the maximum of an initially given value and the mean of the largest intra-class cosine similarities in the mini-batch:

$$\mathcal{L}_{MALMC} = \frac{1}{N}\sum_{i=1}^{N}\max\left(m_{y_i} - \cos\theta_{y_i,i},\ 0\right), \quad m_c = \max\left(m_0,\ \frac{1}{|S_c|}\sum_{j \in S_c}\cos\theta_{c,j}\right) \qquad (5)$$

where $m_0$ is an initially given margin, $\mathbb{1}[\cdot]$ is the indicator function that equals 1 if the condition is satisfied and 0 otherwise, $n_c = \sum_{i=1}^{N}\mathbb{1}[y_i = c]$ is the number of intra-class cosine similarities in class $c$ (sorted in descending order), and $k$ is a predefined percentage controlling the valid number of similarities in each class. We refer to $S_c$ as the set including the indices of the first $\lceil k \cdot n_c \rceil$ similarities. Analytically, this self-adaptive margin strategy is more suitable for realistic data distributions, since it relates the margin to the dynamic feature space and largely alleviates the multifarious human labor of adjusting the margin.
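The per-class margin update of Eq. (5) might look as follows within a mini-batch; m0 below is a placeholder initial margin, while k=0.6 is the best setting reported in Section 3.2:

```python
import torch
import torch.nn.functional as F

def adaptive_margins(features, weights, labels, n_classes, m0=0.3, k=0.6):
    """m_c = max(m0, mean of the top k-fraction of intra-class cosine similarities
    of class c in the mini-batch); classes absent from the batch keep m0."""
    cos = F.normalize(features, dim=1) @ F.normalize(weights, dim=0)
    intra = cos[torch.arange(labels.size(0)), labels]
    margins = torch.full((n_classes,), m0)
    for c in labels.unique():
        sims, _ = intra[labels == c].sort(descending=True)   # descending intra-class similarities
        top = sims[: max(1, int(k * sims.numel()))]          # the set S_c of largest similarities
        margins[c] = max(m0, top.mean().item())              # treated as a constant, not backpropagated
    return margins

def malmc_loss(features, weights, labels, margins):
    """LMC hinge with the per-class adaptive margin m_{y_i}."""
    cos = F.normalize(features, dim=1) @ F.normalize(weights, dim=0)
    intra = cos[torch.arange(labels.size(0)), labels]
    return F.relu(margins[labels] - intra).mean()
```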

2.4 NLMC Loss

The cosine similarity constraint in the previous parts is accompanied by changing norms of features and weight vectors in a mini-batch. As illustrated in Fig. 2, softmax loss is prone to amplifying the norm. The trade-off between dynamic norms and the intra-class cosine similarity constraint seems harmful to the final testing accuracy computed from pair cosine similarities. To better exert the power of this constraint in the training process without spending most of the effort on amplifying the norm, we normalize both the features and the weight vectors of the last inner-product layer before softmax loss to the same value $s$, which is automatically learned as in wang2017normface. In this case, the training process pays more attention to the intra-class cosine similarity constraint, because all the deeply learned features are distributed on a hypersphere of the same radius in each iteration and the angle between them is an appropriate distance metric. The Normalized Large Margin Cosine (NLMC) loss function is formulated as follows:

$$\mathcal{L}_{NLMC} = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\,\tilde{W}_{y_i}^{T}\tilde{x}_i}}{\sum_{j=1}^{n} e^{s\,\tilde{W}_{j}^{T}\tilde{x}_i}} + \frac{\lambda}{N}\sum_{i=1}^{N}\max\left(m - \tilde{W}_{y_i}^{T}\tilde{x}_i,\ 0\right) \qquad (6)$$

where we substitute $s\,\tilde{x}_i = s\,\frac{x_i}{\|x_i\|}$ for $x_i$ and $\tilde{W}_j = \frac{W_j}{\|W_j\|}$ for $W_j$ in the original softmax loss.
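A sketch of Eq. (6), with the scaling $s$ as a learnable parameter as in NormFace; the margin and initial value of $s$ below are our placeholders, while lam=0.001 follows the best NLMC setting in Section 3.2:

```python
import torch
import torch.nn.functional as F

def nlmc_loss(features, weights, labels, s, m=0.35, lam=0.001):
    """Softmax on s-scaled normalized features/weights plus the LMC hinge (Eq. (6))."""
    x_t = F.normalize(features, dim=1)        # x_i / ||x_i||
    w_t = F.normalize(weights, dim=0)         # W_j / ||W_j||
    cos = x_t @ w_t                           # cosine similarities on the unit hypersphere
    intra = cos[torch.arange(labels.size(0)), labels]
    return F.cross_entropy(s * cos, labels) + lam * F.relu(m - intra).mean()

s = torch.nn.Parameter(torch.tensor(20.0))    # learned radius, heuristic initialization
```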

2.5 DLMC Loss

The intra-class constraint alone seems insufficient to obtain discriminative features. Inspired by the form of softmax loss, we extend the NLMC loss to the Discriminative Large Margin Cosine (DLMC) loss, which enforces the intra-class cosine similarity to be larger than the mean of the $K$ nearest neighboring inter-class cosine similarities by a fixed margin in the normalized exponential feature space:

$$\mathcal{L}_{DLMC} = \frac{1}{N}\sum_{i=1}^{N}\max\left(\frac{1}{K}\sum_{j \in \mathcal{N}_K(i)} e^{s\,\tilde{W}_{j}^{T}\tilde{x}_i} - e^{s\,\tilde{W}_{y_i}^{T}\tilde{x}_i} + m,\ 0\right) \qquad (7)$$

where $K$ is the number of different inter-class cosine similarities between a sample of class $y_i$ and the weight vectors of the other classes considered in a mini-batch, $\mathcal{N}_K(i)$ is the set of the $K$ largest such similarities (sorted in descending order), and $m$ is a predefined margin to discriminate the intra-class and inter-class similarities.

For datasets with many classes, most inter-class similarities are uninformative, while the proposed neighborhood sampling strategy incorporates the most meaningful classes to acquire a reliable mean inter-class similarity. Specifically, when $K = 1$, the DLMC loss immediately reduces to a variant of the triplet loss:

$$\mathcal{L} = \frac{1}{N}\sum_{i=1}^{N}\max\left(\max_{j \neq y_i} e^{s\,\tilde{W}_{j}^{T}\tilde{x}_i} - e^{s\,\tilde{W}_{y_i}^{T}\tilde{x}_i} + m,\ 0\right) \qquad (8)$$

Compared to the Euclidean distance constraint in the original feature space of the triplet loss, this variant imposes the cosine similarity constraint between a sample and the weight vectors in the normalized feature space, analytically strengthening the robustness of the training process.
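A sketch of Eq. (7) under our hinge reading; with K=1 it collapses to the triplet-like variant of Eq. (8). The margin value below is a placeholder:

```python
import torch
import torch.nn.functional as F

def dlmc_loss(features, weights, labels, s, K=1, m=1.0):
    """The exponentiated intra-class similarity must beat the mean of the K largest
    exponentiated inter-class similarities by the margin m (Eq. (7))."""
    cos = F.normalize(features, dim=1) @ F.normalize(weights, dim=0)   # (N, n)
    idx = torch.arange(labels.size(0))
    intra = torch.exp(s * cos[idx, labels])
    inter = cos.clone()
    inter[idx, labels] = float('-inf')             # exclude the ground-truth class
    neighbors, _ = inter.topk(K, dim=1)            # the K nearest (most similar) other classes
    mean_inter = torch.exp(s * neighbors).mean(dim=1)
    return F.relu(mean_inter - intra + m).mean()
```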

3 Experiments

The implementation details are given in Section 3.1. In Section 3.2, some exploratory experiments are conducted to find the best settings of the hyper-parameters in each method. Finally, we evaluate our approaches on three face recognition benchmark datasets in Sections 3.3 and 3.4.

3.1 Implementation Details

Basic Training Settings. To test the sensitivity of face recognition results to different face detectors, we preprocess the face images with the MTCNN zhang2016joint and SeetaFace wu2017funnel detectors, respectively. We use the publicly available CASIA-WebFace yi2014learning as the training set, which originally has 494,414 labeled face images from 10,575 individuals. After removing the undetected images, the resulting datasets have 490,869 images for MTCNN and 437,633 images for SeetaFace. The obvious difference between these two detectors is the high false negative rate of SeetaFace, such that its resulting training set has few false positive samples. We use the Caffe library jia2014caffe to implement the CNN model wen2016discriminative in this paper, which is a reduced version of ResNet with only 27 convolutional layers. The input faces are cropped to RGB images of a fixed size, followed by subtracting 127.5 and dividing by 128. The batch size is set to 256 in all the experiments, and the images are horizontally flipped for data augmentation. For LMC, HLMC and MALMC, we train the models from scratch. The initial learning rate is set to 0.1, then divided by 10 at 16K and 24K iterations; the complete training terminates at 28K iterations. In contrast, we fine-tune the networks of NLMC, NLMC+MALMC and DLMC from the softmax baseline model with a relatively small learning rate of 0.001. The other compared metric loss functions are trained to achieve their best performance. The classical back-propagation algorithm and mini-batch based Stochastic Gradient Descent (SGD) work well for the training, with the momentum and weight decay set to 0.9 and 0.0005, respectively.

Evaluation. The proposed methods are evaluated on three face recognition datasets, namely the LFW, YTF and IJB-A datasets. 10-fold validation is used to acquire the final performance. We extract the features from both the frontal face and its mirror image, and merge the two features by element-wise summation. PCA dimension reduction is applied to the final representations. Nearest neighbor and threshold comparison are used for the identification and verification tasks, respectively. Note that we use only a single model for all the tests.
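The per-image representation and verification rule described above might be sketched as follows (PCA is omitted for brevity; the threshold is chosen on held-out folds in practice, so the value below is only a placeholder):

```python
import torch
import torch.nn.functional as F

def face_representation(model, image):
    """Element-wise sum of the features of a face and its horizontal mirror,
    then L2 normalization. `image` is an (N, 3, H, W) batch of cropped faces."""
    mirrored = torch.flip(image, dims=[3])            # flip along the width axis
    return F.normalize(model(image) + model(mirrored), dim=1)

def verify(model, img_a, img_b, threshold=0.5):
    """Threshold comparison on the cosine similarity of the two representations."""
    fa, fb = face_representation(model, img_a), face_representation(model, img_b)
    return (fa * fb).sum(dim=1) > threshold
```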

3.2 Exploratory Experiments

All the experiments in this section are conducted on the dataset produced by the MTCNN detector, unless otherwise specified.

Effect of the hard sample mining strategy. To ensure the learning efficiency of the training process, we explore a new hard sample mining strategy, where the hard samples refer to the ones misclassified by softmax loss. The hyper-parameters $\lambda$ and $m$ dominate the balance between intra-class and inter-class variations, and properly selected values can improve the performance of the proposed methods. We therefore conduct a pair of contrastive experiments on LMC and HLMC to investigate the sensitivity of these two parameters (Fig. 3).

Figure 3: Verification accuracies on LFW of LMC and HLMC (a) with different $\lambda$ and fixed $m$, and (b) with different $m$ and the best $\lambda$ for LMC and HLMC, respectively.

In this experiment, both LMC and HLMC perform much better than softmax loss, and the accuracies fluctuate with different $\lambda$ and $m$. The best settings of LMC and HLMC achieve almost the same accuracy on LFW. Since HLMC merely simplifies the training process by discarding the easy samples without a clear accuracy gain, we do not explore it further in the following experiments.

Effect of $\lambda$ and $m$. Fig. 4 shows the importance of choosing appropriate values of $\lambda$ and $m$. In this section, we explore the best settings of these two hyper-parameters for some of the proposed methods.

Figure 4: Verification accuracies on LFW of some proposed methods (a) with different $\lambda$ and fixed $m$, and (b) with different $m$ and the best setting of $\lambda$ in each method according to (a).

In the first experiment, we fix $m$ and vary $\lambda$. In the second experiment, we fix $\lambda$ at its respective best setting from the first experiment (0.005 for LMC and 0.001 for NLMC) and vary $m$. Specifically, $m_0$ and $k$ are kept fixed in MALMC. We observe that the performance of our models remains stable across different $\lambda$ and $m$, and that simply using the softmax loss is not a good choice.

Effect of the neighborhood size in MALMC and DLMC. In this section, we explore the effect of different neighborhoods on the performance of MALMC and DLMC, namely the verification accuracies on LFW with different $k$ in MALMC and different $K$ in DLMC, while keeping the other parameters fixed at their best settings from the previous experiments (Fig. 5a).

Figure 5: (a) Verification accuracies on LFW with different $k$ for MALMC (by MTCNN) and different $K$ for DLMC (by SeetaFace), with the other parameters fixed. (b) The margin distribution of each class at different training iteration steps in MALMC.

It is obvious that MALMC is sensitive to $k$, whose best setting is 0.6, controlling the valid number of intra-class similarities in a mini-batch. In contrast, DLMC is robust to $K$ across a wide range. The sensitivity of MALMC arises because the distribution within each mini-batch is inconsistent, so a fixed $k$ does not seem suitable for measuring the updated feature subspace of each class. The robustness of DLMC, however, stems from its similarity to softmax loss, which is accompanied by significant inter-class separability, so that the $K$ largest inter-class similarities play the most important role in the training process.

Self-adaptive margin strategy in MALMC. To clarify the margin updating process in MALMC, we perform a toy experiment showing the margin statistics of each class at different iteration steps (10,000, 18,000 and 25,000) during training (Fig. 5b). One can find that the margins tend to grow larger, and eventually fluctuate around the best value.

3.3 Experiments on LFW and YTF datasets

LFW. This dataset contains 13,233 face images of 5,749 different identities collected from the Internet, with large variations in pose, expression and illumination. For comparison, algorithms typically report the mean face verification accuracies and the ROC curves on 6,000 given face pairs, following the standard protocol of unrestricted with labeled outside data huang2014labeled.

YTF. This dataset consists of 3,425 videos of 1,595 different people, with an average of 2.15 videos per identity. As in the experiments on LFW, we follow the standard protocol of unrestricted with labeled outside data wolf2011face, and report the results on 5,000 video pairs. The final similarity score of each video pair is computed as the average of the cosine similarities of 100 frame pairs.
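Concretely, the video-pair score might be computed as follows, assuming the frame features have already been extracted:

```python
import torch
import torch.nn.functional as F

def video_pair_score(frames_a, frames_b):
    """Average cosine similarity over matched frame pairs (100 pairs in the paper).
    frames_a, frames_b: (100, d) frame features from the two videos."""
    a, b = F.normalize(frames_a, dim=1), F.normalize(frames_b, dim=1)
    return (a * b).sum(dim=1).mean()
```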

Method #Alig. #Train #Net Acc. on LFW (%) Acc. on YTF (%)
High-dim LBP 27 100K - - -
DeepFace 73 4M 3 97.35 91.40
GaussianFace - 20K 1 98.52 -
DeepID 5 200K 1 97.45 -
DeepID-2+ 18 300K 25 99.47 93.20
Center Loss 5 700K 1 99.28 94.90
FaceNet - 200M 1 99.63 95.12
CASIA-WebFace 2 WebFace 1 97.73 90.60
Softmax [m] 5 490K 1 97.65 92.26
Triplet [m] 5 490K 1 98.12 92.96
L-Softmax [m] 5 490K 1 98.98 93.94
NormFace [m] 5 490K 1 98.55 94.04
LMC [m] 5 490K 1 98.42 93.30
MALMC [m] 5 490K 1 98.50 93.68
NLMC [m] 5 490K 1 98.75 94.06
NLMC+MALMC [m] 5 490K 1 98.67 94.14
DLMC [m] 5 490K 1 98.80 94.16
Softmax [s] 5 430K 1 97.42 91.52
Triplet [s] 5 430K 1 98.20 92.16
L-Softmax [s] 5 430K 1 98.86 94.14
NormFace [s] 5 430K 1 98.57 93.74
LMC [s] 5 430K 1 98.05 93.04
MALMC [s] 5 430K 1 98.07 92.90
NLMC [s] 5 430K 1 98.88 93.60
NLMC+MALMC [s] 5 430K 1 98.93 93.78
DLMC [s] 5 430K 1 99.07 94.16
Table 1: Face verification performance on the LFW and YTF datasets, where [m] denotes results obtained with the MTCNN detector and [s] denotes results obtained with the SeetaFace detector.

In this experiment, we test the methods presented in Section 2 on datasets preprocessed by two different face detectors, namely MTCNN and SeetaFace. Some state-of-the-art methods (High-dim LBP chen2013blessing, DeepFace taigman2014deepface, GaussianFace lu2014surpassing, DeepID sun2014deep, DeepID-2+ sun2015deeply, Center Loss wen2016discriminative, FaceNet schroff2015facenet, CASIA-WebFace yi2014learning) are included for comparison, even though most of their high performance is achieved with huge training data or model ensembles. As can be observed in Table 1, while using a single model trained on a publicly available small dataset, our methods are still competitive with models using high-quality private datasets, such as DeepFace (4M) and FaceNet (200M).

Figure 6: ROC curves of the compared metric loss functions on (a) LFW by MTCNN, (b) YTF by MTCNN, (c) LFW by SeetaFace and (d) YTF by SeetaFace.

For a fair comparison, some typical metric loss functions (Triplet schroff2015facenet, L-Softmax liu2016large, NormFace wang2017normface) are also tested under our own settings. Among the compared loss functions, the proposed methods consistently outperform softmax loss by a significant margin. Specifically, the DLMC loss performs best on both LFW and YTF with both the MTCNN and SeetaFace detectors. Compared with NormFace, the NLMC, NLMC+MALMC and DLMC methods clearly show the advantage of the cosine similarity constraint in the training process. Similarly, the performance of the triplet loss is not satisfactory. As illustrated in Section 2, the DLMC loss immediately reduces to a variant of the triplet loss when $K = 1$; besides, the hard triplet mining strategy is avoided here, which largely reduces the exponentially growing computational complexity over the training dataset. The results convincingly demonstrate that the DLMC loss can alleviate the difficult convergence and big-data dependence of the triplet loss. The Receiver Operating Characteristic (ROC) curves are shown in Fig. 6. One should notice that there exists a discrepancy between the results of the two face detectors, and the trends vary from one loss to another; nevertheless, our DLMC method is always among the top performers.

3.4 Experiments on IJB-A dataset

IJB-A. This dataset contains 5,712 images and 2,085 videos of 500 subjects, with an average of 11.4 images and 4.2 videos per subject. The IJB-A evaluation protocol consists of open-set verification (1:1 comparison) and identification (1:N search) over 10 random training and testing splits. Unlike the LFW and YTF datasets, the IJB-A dataset divides the testing images/video frames into gallery and probe sets, and the subjects are described by templates. Moreover, the images in the IJB-A dataset contain extreme pose, illumination and expression variations and have not been filtered by a commercial face detector. These factors make IJB-A a challenging unconstrained face recognition dataset klare2015pushing. We use the softmax operator masi2016we to compute the similarity score of two sets described by templates.
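A common reading of such a softmax set-similarity operator is a softmax-weighted average of all pairwise scores between the two templates, which approaches the maximum as the temperature beta grows; the sketch below follows that reading, with beta as an assumed tuning parameter (see masi2016we for the exact formulation):

```python
import torch
import torch.nn.functional as F

def template_similarity(feats_a, feats_b, beta=10.0):
    """Softmax-weighted aggregation of all pairwise cosine scores between two
    templates; larger beta pushes the score toward the best-matching pair."""
    a = F.normalize(feats_a, dim=1)                # (Na, d) template A features
    b = F.normalize(feats_b, dim=1)                # (Nb, d) template B features
    scores = (a @ b.t()).flatten()                 # all Na*Nb pairwise cosine similarities
    weights = torch.softmax(beta * scores, dim=0)  # softmax weights over the pairs
    return (weights * scores).sum()
```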

Method IJB-A Ver. (TAR) (%) IJB-A Id. (Rec. Rate) (%)
FAR=0.01 FAR=0.001 Rank-1 Rank-5 Rank-10
GOTS 40.6 19.8 44.3 59.5 -
Deep Multi-Pose 87.6 - 84.6 92.7 94.7
Template Adaptation 93.9 83.6 92.8 - 98.6
All-In-One Face 92.2 82.3 94.7 - 98.8
Softmax [m] 81.36 59.46 87.70 94.29 96.25
Triplet [m] 70.66 47.48 88.82 94.40 96.06
L-Softmax [m] 72.71 40.10 88.45 93.85 95.71
NormFace [m] 85.86 69.77 90.98 95.51 96.76
LMC [m] 86.47 72.71 89.66 95.02 96.73
MALMC [m] 85.41 67.55 91.15 95.77 96.90
NLMC [m] 86.19 70.94 90.96 95.45 96.72
NLMC+MALMC [m] 85.52 70.05 90.63 95.41 96.73
DLMC [m] 86.02 62.45 93.21 97.34 98.33
Table 2: Results on the IJB-A dataset. True Accept Rate (TAR) at False Accept Rate (FAR) = 0.01 and 0.001 for the ROC curves; Rank-1, Rank-5 and Rank-10 retrieval accuracies for the CMC curves.

For simplicity, we only present the results obtained with the MTCNN detector here. As in the experiments on LFW and YTF, we compare our methods with several mainstream DML approaches (Triplet, L-Softmax, NormFace) under the same settings, and with other state-of-the-art approaches (GOTS klare2015pushing, Deep Multi-Pose abdalmageed2016face, Template Adaptation crosswhite2017template, All-In-One Face ranjan2017all) that use larger training datasets or model ensembles. Directly comparing our methods to those state-of-the-art results is not entirely fair, because their system designs and implementation details differ from ours, and without access to their code and data it is hard to say exactly how much improvement our proposed methods achieve. From the results in Table 2, we find that our proposed methods significantly improve over the off-the-shelf commercial system GOTS. Compared with some deep learning based methods, our approaches still achieve satisfactory performance. To better show the comparison with typical DML methods under our own settings, the ROC curves for face verification and the Cumulative Match Characteristic (CMC) curves for face identification are plotted in Fig. 7. Our methods consistently exhibit prominent advantages over the other DML methods and are always among the top performers. However, the performance of the triplet loss and L-Softmax loss on the IJB-A dataset is not as good as on the LFW and YTF datasets, due to the large variations of IJB-A.

Figure 7: Recognition accuracies on IJB-A dataset. (a) ROC curves for the compare protocol. (b) CMC curves for the search protocol.

4 Conclusion and future work

In this paper, we introduce the intra-class cosine similarity constraint into the training process, to alleviate the large intra-class variations of softmax loss and to keep consistency with the testing process. Accompanied by the inter-class separability of the softmax method, the original LMC method achieves a significant improvement. Based on this, the MALMC method is proposed to mitigate the tedious human labor of adjusting the margin hyper-parameter. Furthermore, the NLMC method is given to take full advantage of the intra-class cosine similarity constraint, with all the features and weight vectors in a mini-batch fixed to the same norm. To acquire more discriminative features, the idea of considering the intra-class and inter-class constraints simultaneously leads to the DLMC method. Extensive experiments on several public face recognition benchmark datasets convincingly demonstrate the effectiveness and robustness of the proposed methods, even with a small training dataset.

Noticeably, these loss functions are not differentiable everywhere, and smoothed versions seem to be a meaningful research direction. In the future, we will apply the proposed methods to other metric learning tasks, such as person re-identification and image retrieval. Furthermore, how to develop DML methods that are robust to different face detectors is an interesting direction for future research.

References


  • (1) H. Oh Song, Y. Xiang, S. Jegelka, and S. Savarese, “Deep metric learning via lifted structured feature embedding,” in CVPR, pp. 4004–4012, 2016.
  • (2) F. Zhao, Y. Huang, L. Wang, and T. Tan, “Deep semantic ranking based hashing for multi-label image retrieval,” in CVPR, pp. 1556–1564, 2015.
  • (3) D. Yi, Z. Lei, S. Liao, and S. Z. Li, “Deep metric learning for person re-identification,” in ICPR, pp. 34–39, IEEE, 2014.
  • (4) Y. Sun, X. Wang, and X. Tang, “Deep learning face representation from predicting 10,000 classes,” in CVPR, pp. 1891–1898, 2014.
  • (5) F. Schroff, D. Kalenichenko, and J. Philbin, “Facenet: A unified embedding for face recognition and clustering,” in CVPR, pp. 815–823, 2015.
  • (6) Y. Wen, K. Zhang, Z. Li, and Y. Qiao, “A discriminative feature learning approach for deep face recognition,” in ECCV, pp. 499–515, Springer, 2016.
  • (7) W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song, “Sphereface: Deep hypersphere embedding for face recognition,” in CVPR, 2017.
  • (8) F. Wang, X. Xiang, J. Cheng, and A. L. Yuille, “NormFace: L2 hypersphere embedding for face verification,” in ACM MM, 2017.
  • (9) C. Luo, J. Zhan, L. Wang, and Q. Yang, “Cosine normalization: Using cosine similarity instead of dot product in neural networks,” arXiv preprint arXiv:1702.05870, 2017.
  • (10) Y. Sun, Y. Chen, X. Wang, and X. Tang, “Deep learning face representation by joint identification-verification,” in NIPS, pp. 1988–1996, 2014.
  • (11) C. Huang, C. C. Loy, and X. Tang, “Local similarity-aware deep feature embedding,” in NIPS, pp. 1262–1270, 2016.
  • (12) G. B. Huang and E. Learned-Miller, “Labeled faces in the wild: Updates and new reporting procedures,” Dept. Comput. Sci., Univ. Massachusetts Amherst, Amherst, MA, USA, Tech. Rep. 14-003, 2014.
  • (13) L. Wolf, T. Hassner, and I. Maoz, “Face recognition in unconstrained videos with matched background similarity,” in CVPR, pp. 529–534, IEEE, 2011.
  • (14) B. F. Klare, B. Klein, E. Taborsky, A. Blanton, J. Cheney, K. Allen, P. Grother, A. Mah, and A. K. Jain, “Pushing the frontiers of unconstrained face detection and recognition: Iarpa janus benchmark a,” in CVPR, pp. 1931–1939, 2015.
  • (15) Y. LeCun, C. Cortes, and C. J. Burges, “The mnist database of handwritten digits,” 1998.
  • (16) W. Liu, Y. Wen, Z. Yu, and M. Yang, “Large-margin softmax loss for convolutional neural networks,” in ICML, pp. 507–516, 2016.
  • (17) D. Yi, Z. Lei, S. Liao, and S. Z. Li, “Learning face representation from scratch,” arXiv preprint arXiv:1411.7923, 2014.
  • (18) K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, “Joint face detection and alignment using multitask cascaded convolutional networks,” IEEE Signal Processing Letters, vol. 23, no. 10, pp. 1499–1503, 2016.
  • (19) S. Wu, M. Kan, Z. He, S. Shan, and X. Chen, “Funnel-structured cascade for multi-view face detection with alignment-awareness,” Neurocomputing, vol. 221, pp. 138–145, 2017.
  • (20) Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, “Caffe: Convolutional architecture for fast feature embedding,” in ACM MM, pp. 675–678, 2014.
  • (21) D. Chen, X. Cao, F. Wen, and J. Sun, “Blessing of dimensionality: High-dimensional feature and its efficient compression for face verification,” in CVPR, pp. 3025–3032, 2013.
  • (22) Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, “Deepface: Closing the gap to human-level performance in face verification,” in CVPR, pp. 1701–1708, 2014.
  • (23) C. Lu and X. Tang, “Surpassing human-level face verification performance on lfw with gaussianface,” in AAAI, 2014.
  • (24) Y. Sun, X. Wang, and X. Tang, “Deeply learned face representations are sparse, selective, and robust,” in CVPR, pp. 2892–2900, 2015.
  • (25) I. Masi, A. T. Tran, T. Hassner, J. T. Leksut, and G. Medioni, “Do we really need to collect millions of faces for effective face recognition?,” in ECCV, pp. 579–596, 2016.
  • (26) W. AbdAlmageed, Y. Wu, S. Rawls, S. Harel, T. Hassner, I. Masi, J. Choi, J. Lekust, J. Kim, P. Natarajan, et al., “Face recognition using deep multi-pose representations,” in WACV, pp. 1–9, IEEE, 2016.
  • (27) N. Crosswhite, J. Byrne, C. Stauffer, O. Parkhi, Q. Cao, and A. Zisserman, “Template adaptation for face verification and identification,” in FG, pp. 1–8, IEEE, 2017.
  • (28) R. Ranjan, S. Sankaranarayanan, C. D. Castillo, and R. Chellappa, “An all-in-one convolutional neural network for face analysis,” in FG, pp. 17–24, IEEE, 2017.