Unconstrained Face Verification using Deep CNN Features
In this paper, we present an algorithm for unconstrained face verification based on deep convolutional features and evaluate it on the newly released IARPA Janus Benchmark A (IJB-A) dataset as well as on the traditional Labeled Faces in the Wild (LFW) dataset. The IJB-A dataset includes real-world unconstrained faces of 500 subjects with full pose and illumination variations, which makes it much harder than the LFW and YouTube Faces (YTF) datasets. The deep convolutional neural network (DCNN) is trained using the CASIA-WebFace dataset. Results of experimental evaluations on the IJB-A and the LFW datasets are provided.
Face verification is one of the core problems in computer vision and has been actively researched for over two decades . In face verification, given two images or videos, the objective is to determine whether they belong to the same person. Many algorithms have been shown to work well on images collected in controlled settings. However, the performance of these algorithms often degrades significantly on images with large variations in pose, illumination, expression, aging, cosmetics, and occlusion.
To deal with this problem, many methods have focused on learning invariant and discriminative representations from face images and videos. One approach is to extract an over-complete and high-dimensional feature representation followed by a learned metric that projects the feature vector into a low-dimensional space and computes the similarity score. For instance, high-dimensional multi-scale Local Binary Pattern (LBP) features extracted from local patches around facial landmarks are reasonably effective for face recognition. Face representations based on the Fisher vector (FV) have also been shown to be effective for face recognition problems , . More recently, deep convolutional neural networks (DCNNs) have demonstrated impressive performance on tasks such as object recognition , object detection , and face verification . It has been shown that a DCNN model can not only characterize large data variations but also learn a compact and discriminative feature representation when the size of the training data is sufficiently large. Once the model is learned, it can be generalized to other tasks by fine-tuning on target datasets . In this work, we train a DCNN model using a relatively small face dataset, the CASIA-WebFace , and compare the performance of our method with other commercial off-the-shelf face matchers on the challenging IJB-A dataset, which contains significant variations in pose, illumination, expression, resolution, and occlusion. We also evaluate the performance of the proposed method on the LFW dataset.
The rest of the paper is organized as follows. We briefly review some related works in Section 2. Details of the different components of the proposed method including the DCNN representation and joint Bayesian metric learning are given in Section 3. The protocol and the experimental results are presented in Section 4. Finally, we conclude the paper in Section 5 with a brief summary and discussion.
In this section, we briefly review several recent related works on face verification.
Learning an invariant and discriminative feature representation is the first step in building a face verification system. Approaches can be broadly divided into two categories: (1) hand-crafted features, and (2) feature representations learned from data. In the first category, Ahonen et al.  showed that the Local Binary Pattern (LBP) is effective for face recognition. Gabor wavelets  have also been widely used to encode multi-scale and multi-orientation information for face images. Chen et al.  demonstrated good results for face verification using high-dimensional multi-scale LBP features extracted from patches around facial landmarks. In the second category, Patel et al.  and Chen et al.  applied dictionary-based approaches for image- and video-based face recognition, learning representative atoms from the data that are compact and robust to pose and illumination variations . The FV encoding has also been used to generate over-complete and high-dimensional feature representations for still- and video-based face recognition . Lu et al.  proposed a dictionary learning framework in which the sparse codes of local patches, generated from local patch dictionaries, are pooled into a high-dimensional feature vector. The high dimensionality of these feature vectors makes such methods hard to train and scale to large datasets. However, advances in deep learning have shown that a compact and discriminative representation can be learned by a DCNN from very large datasets. Taigman et al.  learned a DCNN model on frontalized faces generated with a general 3D shape model from a large-scale face dataset and achieved better performance than many traditional face verification methods. Sun et al.  surpassed human performance for face verification on the LFW dataset using an ensemble of 25 simple DCNNs with fewer layers, trained on weakly aligned face images from a much smaller dataset than the former. Schroff et al.  adapted a state-of-the-art deep architecture for object recognition to face recognition and trained it on a large-scale unaligned private face dataset with a triplet loss. This method also achieved top performance on face verification problems. These works demonstrate the effectiveness of the DCNN model for feature learning and for detection, recognition, and verification problems.
Learning a similarity measure from data is the other key component that can boost the performance of a face verification system. Many approaches have been proposed in the literature that exploit the label information of face images or face pairs. For instance, Weinberger et al.  proposed the Large Margin Nearest Neighbor (LMNN) metric, which enforces a large-margin constraint among all triplets of labeled training data. Taigman et al.  learned a Mahalanobis distance using the Information Theoretic Metric Learning (ITML) method . Chen et al.  proposed a joint Bayesian approach for face verification which models the joint distribution of a pair of face images instead of the difference between them, using the ratio of between-class and within-class probabilities as the similarity measure. Hu et al.  learned a discriminative metric within a deep neural network framework. Huang et al.  learned a projection metric over a set of labeled images which preserves the underlying manifold structure.
Our approach consists of both training and testing stages. For training, we first perform face and landmark detection on the CASIA-WebFace and IJB-A datasets to localize and align each face. Next, we train our DCNN on the CASIA-WebFace and learn the joint Bayesian metric using the DCNN features of the IJB-A training sets. Then, given a pair of test image sets, we compute the similarity score based on their DCNN features and the learned metric. Figure 1 gives an overview of our method. The details of each component of our approach are presented in the following subsections.
Before training the convolutional network, we perform landmark detection using the method presented in  due to its effectiveness on unconstrained faces. Then, each face is aligned to canonical coordinates using a similarity transform based on 7 landmark points (i.e., two left-eye corners, two right-eye corners, nose tip, and two mouth corners). After alignment, the face image resolution is 100 × 100 pixels, and the distance between the centers of the two eyes is about 36 pixels.
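The alignment step described above can be sketched as a least-squares similarity transform mapping the detected landmarks onto a canonical template. The sketch below uses the Umeyama method and a hypothetical set of canonical coordinates for the 7 points; the exact target positions used in the paper are not specified here.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src landmarks (N x 2) onto dst landmarks (N x 2), via the
    Umeyama method. Returns a 2 x 3 affine matrix."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * (R @ src_mean)
    return np.hstack([scale * R, t[:, None]])

# Hypothetical canonical positions (in a 100 x 100 image) for the 7 points:
# two corners of each eye, nose tip, and two mouth corners.
canonical = np.array([[25, 35], [40, 35], [60, 35], [75, 35],
                      [50, 55], [35, 75], [65, 75]], dtype=float)
```

The resulting 2 × 3 matrix would then be applied to the image, e.g. with OpenCV's `warpAffine`, to produce the 100 × 100 aligned crop.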
3.2 Deep Face Feature Representation
DCNNs with small filters and very deep architectures (i.e., 19 layers in  and 22 layers in ) have been shown to produce state-of-the-art results on many datasets, including ImageNet 2014, LFW, and the YouTube Faces dataset. Stacking small filters to approximate large filters and to build very deep convolutional networks not only reduces the number of parameters but also increases the nonlinearity of the network. In addition, the resulting feature representation is compact and discriminative.
Our approach is motivated by . However, we only consider the identity information per face without modeling the pair-wise cost. The dimensionality of the input layer is 100 × 100 × 1 for gray-scale images. The network includes 10 convolutional layers, 5 pooling layers, and 1 fully connected layer. The detailed architecture is shown in Table 1. Each convolutional layer is followed by a rectified linear unit (ReLU), except the last one, Conv52. Instead of suppressing all negative responses to zero as ReLU does, we use the parametric ReLU (PReLU) , which allows scaled negative responses and in turn improves the network performance. Moreover, two local normalization layers are added after Conv12 and Conv22, respectively, to mitigate the effect of illumination variations. The kernel size of all filters is 3 × 3. The first four pooling layers use the max operator. To generate a compact and discriminative feature representation, we use average pooling for the last layer, Pool5. The feature dimensionality of Pool5 is thus equal to the number of channels of Conv52, which is 320. The dropout ratio is set to 0.4 to regularize Fc6 due to its large number of parameters (i.e., 320 × 10548). To classify the large number of subjects in the training data (i.e., 10,548), this low-dimensional feature must contain strong discriminative information about all the face images. Consequently, the Pool5 feature is used for face representation. The extracted features are further L2-normalized to unit length before the metric learning stage. If multiple frames are available for a subject, we use the average of the Pool5 features as the overall feature representation. Figure 2 illustrates some of the extracted feature maps.
| Name   | Type        | Filter Size / Stride | Output Size  | Depth | Params |
|--------|-------------|----------------------|--------------|-------|--------|
| Conv11 | convolution | 3×3×1 / 1            | 100×100×32   | 1     | 0.28K  |
| Conv12 | convolution | 3×3×32 / 1           | 100×100×64   | 1     | 18K    |
| Pool1  | max pooling | 2×2 / 2              | 50×50×64     | 0     | –      |
| Conv21 | convolution | 3×3×64 / 1           | 50×50×64     | 1     | 36K    |
| Conv22 | convolution | 3×3×64 / 1           | 50×50×128    | 1     | 72K    |
| Pool2  | max pooling | 2×2 / 2              | 25×25×128    | 0     | –      |
| Conv31 | convolution | 3×3×128 / 1          | 25×25×96     | 1     | 108K   |
| Conv32 | convolution | 3×3×96 / 1           | 25×25×192    | 1     | 162K   |
| Pool3  | max pooling | 2×2 / 2              | 13×13×192    | 0     | –      |
| Conv41 | convolution | 3×3×192 / 1          | 13×13×128    | 1     | 216K   |
| Conv42 | convolution | 3×3×128 / 1          | 13×13×256    | 1     | 288K   |
| Pool4  | max pooling | 2×2 / 2              | 7×7×256      | 0     | –      |
| Conv51 | convolution | 3×3×256 / 1          | 7×7×160      | 1     | 360K   |
| Conv52 | convolution | 3×3×160 / 1          | 7×7×320      | 1     | 450K   |
| Pool5  | avg pooling | 7×7 / 1              | 1×1×320      | 0     | –      |
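As a sanity check on Table 1, each convolutional layer's parameter count follows directly from kernel height × kernel width × input channels × output channels (biases omitted, with K taken as 1024). A short script reproduces the Params column:

```python
# Per-layer weight counts for the convolutional layers in Table 1:
# params = kernel_h * kernel_w * in_channels * out_channels (biases omitted).
conv_layers = {
    "Conv11": (3, 3, 1, 32),    "Conv12": (3, 3, 32, 64),
    "Conv21": (3, 3, 64, 64),   "Conv22": (3, 3, 64, 128),
    "Conv31": (3, 3, 128, 96),  "Conv32": (3, 3, 96, 192),
    "Conv41": (3, 3, 192, 128), "Conv42": (3, 3, 128, 256),
    "Conv51": (3, 3, 256, 160), "Conv52": (3, 3, 160, 320),
}
params = {name: kh * kw * cin * cout
          for name, (kh, kw, cin, cout) in conv_layers.items()}
for name, p in params.items():
    print(f"{name}: {p / 1024:.2f}K")   # compare with the Params column
```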
3.3 Joint Bayesian Metric Learning
To utilize the positive and negative label information available in the training dataset, we learn a joint Bayesian metric, which has achieved good performance on face verification problems . Instead of modeling the difference vector between two faces, this approach directly models the joint distribution of the feature vectors of the $i$th and $j$th images, $\{x_i, x_j\}$, as a Gaussian. Let $P(x_i, x_j \mid H_I) \sim N(0, \Sigma_I)$ when $x_i$ and $x_j$ belong to the same class, and $P(x_i, x_j \mid H_E) \sim N(0, \Sigma_E)$ when they are from different classes. In addition, each face vector can be modeled as $x = \mu + \epsilon$, where $\mu$ stands for the identity and $\epsilon$ for pose, illumination, and other variations. Both $\mu$ and $\epsilon$ are assumed to be independent zero-mean Gaussian random variables, $N(0, S_\mu)$ and $N(0, S_\epsilon)$, respectively.

The log-likelihood ratio of the intra- and inter-class hypotheses, $r(x_i, x_j)$, can be computed as follows:

$$r(x_i, x_j) = \log \frac{P(x_i, x_j \mid H_I)}{P(x_i, x_j \mid H_E)} = x_i^T M x_i + x_j^T M x_j - 2 x_i^T R x_j,$$

where $M$ and $R$ are both negative semi-definite matrices. This can be rewritten as $(x_i - x_j)^T M (x_i - x_j) - 2 x_i^T B x_j$, where $B = R - M$. More details can be found in . Instead of using the EM algorithm to estimate $S_\mu$ and $S_\epsilon$, we optimize the distance in a large-margin framework as follows:

$$\arg\min_{M, B, b} \sum_{i,j} \max\Big[1 - y_{ij}\big(b - (x_i - x_j)^T M (x_i - x_j) + 2 x_i^T B x_j\big),\ 0\Big],$$

where $b \in \mathbb{R}$ is the threshold and $y_{ij}$ is the label of a pair: $y_{ij} = 1$ if persons $i$ and $j$ are the same, and $y_{ij} = -1$ otherwise. For simplicity, we denote $(x_i - x_j)^T M (x_i - x_j) - 2 x_i^T B x_j$ by $d_{M,B}(x_i, x_j)$. $M$ and $B$ are updated using stochastic gradient descent, trained on positive and negative pairs alternately in turn:

$$M_{t+1} = \begin{cases} M_t, & \text{if } y_{ij}\big(b_t - d_{M,B}(x_i, x_j)\big) > 1, \\ M_t - \gamma\, y_{ij}\, \Gamma_{ij}, & \text{otherwise,} \end{cases}$$
$$B_{t+1} = \begin{cases} B_t, & \text{if } y_{ij}\big(b_t - d_{M,B}(x_i, x_j)\big) > 1, \\ B_t + 2\gamma\, y_{ij}\, x_i x_j^T, & \text{otherwise,} \end{cases}$$
$$b_{t+1} = \begin{cases} b_t, & \text{if } y_{ij}\big(b_t - d_{M,B}(x_i, x_j)\big) > 1, \\ b_t + \gamma_b\, y_{ij}, & \text{otherwise,} \end{cases}$$

where $\Gamma_{ij} = (x_i - x_j)(x_i - x_j)^T$, $\gamma$ is the learning rate for $M$ and $B$, and $\gamma_b$ is the learning rate for the bias $b$. We initialize $M$ and $B$ with random semi-definite matrices $M = V V^T$ and $B = W W^T$, where $V, W \in \mathbb{R}^{d \times d}$ with entries $V_{ij}, W_{ij} \sim N(0, 1)$. Note that $M$ and $B$ are updated only when the margin constraints are violated. In our implementation, the ratio of positive to negative pairs generated from the identity information of the training set is 1:20. A further reason to learn the metric this way instead of using the traditional EM algorithm is that, in the IJB-A training and test data, some templates contain only a single image. More details about the IJB-A dataset are given in Section 4.
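The large-margin metric updates described above can be sketched in a few lines of NumPy. This is an illustrative re-implementation under stated assumptions (hinge margin of 1, updates only on violated constraints, toy dimension and learning rates), not the authors' exact code:

```python
import numpy as np

def d_MB(M, B, xi, xj):
    """Distance d_{M,B}(x_i, x_j) = (x_i - x_j)^T M (x_i - x_j) - 2 x_i^T B x_j."""
    diff = xi - xj
    return diff @ M @ diff - 2.0 * xi @ B @ xj

def sgd_step(M, B, b, xi, xj, y, lr=1e-3, lr_b=1e-3):
    """One hinge-loss SGD update; parameters change only when the margin
    constraint y * (b - d_{M,B}) > 1 is violated."""
    if y * (b - d_MB(M, B, xi, xj)) > 1.0:
        return M, B, b                       # constraint satisfied: no update
    diff = (xi - xj)[:, None]
    M = M - lr * y * (diff @ diff.T)         # Gamma_ij = (x_i-x_j)(x_i-x_j)^T
    B = B + 2.0 * lr * y * np.outer(xi, xj)
    b = b + lr_b * y
    return M, B, b

# Initialization with random semi-definite matrices, as in the text.
rng = np.random.default_rng(0)
d = 8                                        # toy dimension (320 in the paper)
V, W = rng.standard_normal((d, d)), rng.standard_normal((d, d))
M, B, b = V @ V.T, W @ W.T, 0.0
```

In training, `sgd_step` would be called on positive and negative feature pairs in turn, with negatives oversampled at the 1:20 ratio mentioned above.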
3.4 DCNN Training Details
The DCNN is implemented using Caffe  and trained on the CASIA-WebFace dataset. The CASIA-WebFace dataset contains 494,414 face images of 10,575 subjects downloaded from the IMDb website. After removing the 27 subjects that overlap with the IJB-A dataset, 10,548 subjects remain for training.
In this section, we present the results of the proposed approach on the challenging IARPA Janus Benchmark A (IJB-A) , its extended version, the Janus Challenging Set 2 (JANUS CS2) dataset, and the LFW dataset. The JANUS CS2 dataset contains not only the sampled frames and images from the IJB-A but also the original videos.
| #Image: 22 | #Image: 14 | #Image: 3 | #Image: 34 | #Image: 32 | #Image: 50 |
| Template ID: 2047 | Template ID: 2030 | Template ID: 5794 | Template ID: 226 | Template ID: 187 | Template ID: 4726 |
| Subject ID: 543 | Subject ID: 543 | Subject ID: 791 | Subject ID: 102 | Subject ID: 101 | Subject ID: 404 |
| #Image: 1 | #Image: 22 | #Image: 9 | #Image: 6 | #Image: 4 | #Image: 4 |
| Template ID: 2993 | Template ID: 2992 | Template ID: 948 | Template ID: 1312 | Template ID: 3779 | Template ID: 5812 |
| Subject ID: 1559 | Subject ID: 1559 | Subject ID: 1558 | Subject ID: 1704 | Subject ID: 1876 | Subject ID: 2166 |
| #Image: 1 | #Image: 25 | #Image: 7 | #Image: 3 | #Image: 32 | #Image: 6 |
| Template ID: 2062 | Template ID: 986 | Template ID: 5295 | Template ID: 3729 | Template ID: 187 | Template ID: 5494 |
| Subject ID: 158 | Subject ID: 347 | Subject ID: 2058 | Subject ID: 606 | Subject ID: 101 | Subject ID: 2102 |
4.1 JANUS-CS2 and IJB-A
Both the IJB-A and JANUS CS2 datasets contain 500 subjects with 5,397 images and 2,042 videos split into 20,412 frames, an average of 11.4 images and 4.2 videos per subject. Sample images and video frames from the datasets are shown in Figure 3. The videos are only released for the JANUS CS2 dataset. The IJB-A evaluation protocol consists of verification (1:1 matching) over 10 splits. Each split contains around 11,748 pairs of templates (1,756 positive and 9,992 negative pairs) on average. Similarly, the identification (1:N search) protocol also consists of 10 splits, which evaluate the search performance. In each search split, there are about 112 gallery templates and 1,763 probe templates (i.e., 1,187 genuine probe templates and 576 impostor probe templates). On the other hand, for the JANUS CS2, there are about 167 gallery templates and 1,763 probe templates, and all of them are used for both identification and verification. The training set for both datasets contains 333 subjects, and the test set contains the remaining 167 subjects. Ten random splits of training and testing are provided by each benchmark. The main differences between the IJB-A and JANUS CS2 evaluation protocols are that (1) IJB-A considers open-set identification whereas JANUS CS2 considers closed-set identification, and (2) IJB-A contains the more difficult pairs, which are subsets of those in the JANUS CS2 dataset.
Both the IJB-A and the JANUS CS2 datasets are divided into training and test sets. For the test sets of both benchmarks, the images and video frames of each subject are randomly split into gallery and probe sets with no overlapping subjects between them. Unlike the LFW and YTF datasets, which only use a sparse set of negative pairs to evaluate verification performance, the IJB-A and JANUS CS2 divide the images/video frames into gallery and probe sets so that all available positive and negative pairs are used in the evaluation. Also, each gallery and probe set consists of multiple templates. Each template contains a combination of images or frames sampled from multiple image sets or videos of a subject. For example, the size of the similarity matrix for JANUS CS2 split1 is 167 × 1806, where 167 corresponds to the gallery set and 1806 to the probe set (i.e., the same subject reappears multiple times in different probe templates). Moreover, some templates contain only a single profile face in a challenging pose with low image quality. In contrast to the LFW and YTF datasets, which only include faces detected by the Viola-Jones face detector , the images in the IJB-A and JANUS CS2 contain extreme pose, illumination, and expression variations. These factors make the IJB-A and JANUS CS2 challenging face recognition datasets .
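Template-level scoring as described above can be sketched as follows: average the per-frame DCNN features of each template, L2-normalize, and score every gallery-probe pair. The sketch below uses cosine similarity on random toy features as a stand-in for the learned joint Bayesian metric, with the JANUS CS2 split1 template counts:

```python
import numpy as np

def template_feature(frame_feats):
    """Average the per-frame DCNN features of a template and L2-normalize."""
    f = np.mean(frame_feats, axis=0)
    return f / np.linalg.norm(f)

def similarity_matrix(gallery, probe):
    """Cosine similarities for all gallery x probe template pairs
    (the paper uses the learned joint Bayesian metric instead)."""
    G = np.stack([template_feature(t) for t in gallery])
    P = np.stack([template_feature(t) for t in probe])
    return G @ P.T

# Toy templates: each holds 1-4 random 320-d "frame features".
rng = np.random.default_rng(1)
gallery = [rng.standard_normal((rng.integers(1, 5), 320)) for _ in range(167)]
probe = [rng.standard_normal((rng.integers(1, 5), 320)) for _ in range(1806)]
S = similarity_matrix(gallery, probe)   # 167 x 1806, as for JANUS CS2 split1
```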
4.2 Evaluation on JANUS-CS2 and IJB-A
For the JANUS CS2 dataset, we compare the results of our DCNN method with the FV approach proposed in  and two commercial off-the-shelf matchers, COTS1 and GOTS . The COTS1 and GOTS baselines provided by JANUS CS2 are the top performers from the most recent NIST FRVT study . The FV method is trained on the LFW dataset, which contains few faces with extreme pose. Therefore, we use the pose information estimated by the landmark detector and select face images/video frames whose yaw angle is less than or equal to 25 degrees for each gallery and probe set. If no images/frames satisfy the constraint, we choose the one closest to frontal. For the DCNN method, however, we use all the frames without applying this selection strategy.
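The frame-selection strategy for the FV baseline can be sketched as a simple filter on the estimated yaw angle; the (frame, yaw) pairing below is a hypothetical representation of the detector output:

```python
def select_by_yaw(frames, max_yaw=25.0):
    """Keep frames with |yaw| <= max_yaw degrees; if none qualify,
    fall back to the single most frontal frame.
    `frames` is a list of (frame_id, yaw_degrees) pairs."""
    kept = [f for f in frames if abs(f[1]) <= max_yaw]
    if not kept:
        kept = [min(frames, key=lambda f: abs(f[1]))]
    return kept
```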
We illustrate the query samples in Table 2. The first column shows the query images from the probe templates. The remaining five columns show the corresponding top-5 retrieved gallery templates (i.e., rank-1 is the most similar, rank-2 the second most similar, etc.). For the first two rows, our approach successfully finds the subjects at rank 1. For the third row, the query template contains only one image in an extreme pose, while the corresponding gallery template for the same subject happens to contain only near-frontal faces. Thus, the approach fails to find the subject within the top-5 matches. To address the pose-generalization problem of CNN features, one possible solution is to augment the templates by synthesizing faces in various poses with the help of a generic 3D model; we leave this for future work.
While this paper was under preparation, the authors became aware of , which also proposes a CNN-based approach for face verification/identification and evaluates it on the IJB-A dataset. The method proposed in  combines the features from seven independent DCNN models. With fine-tuning on the JANUS training data and metric learning, our approach performs comparably to , as shown in Figure ?. Furthermore, by replacing ReLU with PReLU and using data augmentation, our approach significantly outperforms  with only a single model.
| Method | #Net | Training Set | Metric | Mean Accuracy ± Std |
|--------|------|--------------|--------|---------------------|
| DeepFace | 1 | 4.4 million images of 4,030 subjects, private | cosine | 95.92% ± 0.29% |
| DeepFace | 7 | 4.4 million images of 4,030 subjects, private | unrestricted, SVM | 97.35% ± 0.25% |
| DeepID2 | 1 | 202,595 images of 10,117 subjects, private | unrestricted, Joint-Bayes | 95.43% |
| DeepID2 | 25 | 202,595 images of 10,117 subjects, private | unrestricted, Joint-Bayes | 99.15% ± 0.15% |
| DeepID3 | 50 | 202,595 images of 10,117 subjects, private | unrestricted, Joint-Bayes | 99.53% ± 0.10% |
| FaceNet | 1 | 260 million images of 8 million subjects, private | L2 | 99.63% ± 0.09% |
| Yi et al. | 1 | 494,414 images of 10,575 subjects, public | cosine | 96.13% ± 0.30% |
| Yi et al. | 1 | 494,414 images of 10,575 subjects, public | unrestricted, Joint-Bayes | 97.73% ± 0.31% |
| Wang et al. | 1 | 494,414 images of 10,575 subjects, public | cosine | 96.95% ± 1.02% |
| Wang et al. | 7 | 494,414 images of 10,575 subjects, public | cosine | 97.52% ± 0.76% |
| Wang et al. | 1 | 494,414 images of 10,575 subjects, public | unrestricted, Joint-Bayes | 97.45% ± 0.99% |
| Wang et al. | 7 | 494,414 images of 10,575 subjects, public | unrestricted, Joint-Bayes | 98.23% ± 0.68% |
| Ours | 1 | 490,356 images of 10,548 subjects, public | cosine | 97.15% ± 0.7% |
| Ours | 1 | 490,356 images of 10,548 subjects, public | unrestricted, Joint-Bayes | 97.45% ± 0.7% |
4.3 Labeled Faces in the Wild
We also evaluate our approach on the well-known LFW dataset using the standard protocol, which defines 3,000 positive pairs and 3,000 negative pairs in total and splits them into 10 disjoint subsets for cross-validation. Each subset contains 300 positive and 300 negative pairs; in total, the protocol involves 7,701 images of 4,281 subjects. We compare the mean accuracy of the proposed deep model with other state-of-the-art deep-learning-based methods: DeepFace , DeepID2 , DeepID3 , FaceNet , Yi et al. , Wang et al. , and human performance on the "funneled" LFW images. The results are summarized in Table 5. It can be seen from this table that our approach performs comparably to the other deep-learning-based methods. Note that some of the methods compared in Table 5 use millions of samples for training, whereas we use only the CASIA-WebFace dataset, which has fewer than 500K images.
The DCNN model is trained for about 9 days on an NVIDIA Tesla K40 GPU. Feature extraction takes about 0.006 seconds per face image. In the future, supervised information will be fed into the intermediate layers to make the model more discriminative and to speed up convergence.
In this paper, we studied the performance of a DCNN method on the newly released and challenging face verification dataset, IARPA Janus Benchmark A (IJB-A), which contains faces with full pose, illumination, and other difficult conditions. We showed that the DCNN approach can learn a robust model from a large dataset characterized by face variations and that it generalizes well to another dataset. Experimental results demonstrate that the performance of the proposed DCNN method on the IJB-A dataset is much better than that of the FV-based method and other commercial off-the-shelf matchers, and that it is competitive on the LFW dataset.
For future work, we plan to directly train a Siamese network using all the available positive and negative pairs from CASIA-Webface and IJB-A training datasets to fully utilize the discriminative information for realizing better performance.
This research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&D Contract No. 2014-14071600012. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. We thank NVIDIA for donating the K40 GPU used in this work.
- The list of overlapping subjects is available at http://www.umiacs.umd.edu/~pullpull/janus_overlap.xlsx
- The JANUS CS2 dataset is not publicly available yet.
- We fix a typo in : the selection strategy is only applied to the FV-based method, not to the DCNN.
- DCNN and DCNN are our improved results for JANUS CS2 and IJB-A datasets obtained after the paper was accepted.
- Face description with local binary patterns: Application to face recognition.
T. Ahonen, A. Hadid, and M. Pietikainen. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(12):2037–2041, 2006.
- Robust discriminative response map fitting with constrained local models.
A. Asthana, S. Zafeiriou, S. Y. Cheng, and M. Pantic. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3444–3451, 2013.
- Incremental face alignment in the wild.
A. Asthana, S. Zafeiriou, S. Y. Cheng, and M. Pantic. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1859–1866, 2014.
- A practical transfer learning algorithm for face verification.
X. D. Cao, D. Wipf, F. Wen, G. Q. Duan, and J. Sun. In IEEE International Conference on Computer Vision, pages 3208–3215. IEEE, 2013.
- Bayesian face revisited: A joint formulation.
D. Chen, X. D. Cao, L. W. Wang, F. Wen, and J. Sun. In European Conference on Computer Vision, pages 566–579. 2012.
- Blessing of dimensionality: High-dimensional feature and its efficient compression for face verification.
D. Chen, X. D. Cao, F. Wen, and J. Sun. In IEEE Conference on Computer Vision and Pattern Recognition, 2013.
- Landmark-based fisher vector representation for video-based face verification.
J.-C. Chen, V. M. Patel, and R. Chellappa. In IEEE Conference on Image Processing, 2015.
- Unconstrained face verification using deep cnn features.
J.-C. Chen, V. M. Patel, and R. Chellappa. arXiv preprint arXiv:1508.01722, 2015.
- Unconstrained face verification using fisher vectors computed from frontalized faces.
J.-C. Chen, S. Sankaranarayanan, V. M. Patel, and R. Chellappa. In IEEE International Conference on Biometrics: Theory, Applications and Systems, 2015.
- Adaptive representations for video-based face recognition across pose.
Y.-C. Chen, V. M. Patel, R. Chellappa, and P. J. Phillips. In IEEE Winter Conference on Applications of Computer Vision, 2014.
- Dictionary-based face recognition from video.
Y.-C. Chen, V. M. Patel, P. J. Phillips, and R. Chellappa. In European Conference on Computer Vision, pages 766–779. 2012.
- Information-theoretic metric learning.
J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. In International Conference on Machine learning, pages 209–216, 2007.
- Decaf: A deep convolutional activation feature for generic visual recognition.
J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. arXiv preprint arXiv:1310.1531, 2013.
- Rich feature hierarchies for accurate object detection and semantic segmentation.
R. Girshick, J. Donahue, T. Darrell, and J. Malik. In IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587, 2014.
- Face recognition vendor test (FRVT): Performance of face identification algorithms.
P. Grother and M. Ngan. NIST Interagency Report 8009, 2014.
- Delving deep into rectifiers: Surpassing human-level performance on imagenet classification.
K. He, X. Zhang, S. Ren, and J. Sun. arXiv preprint arXiv:1502.01852, 2015.
- Discriminative deep metric learning for face verification in the wild.
J. Hu, J. Lu, and Y.-P. Tan. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1875–1882, 2014.
- Projection metric learning on Grassmann manifold with application to video based face recognition.
Z. Huang, R. Wang, S. Shan, and X. Chen. In IEEE Conference on Computer Vision and Pattern Recognition, pages 140–149, 2015.
- Caffe: Convolutional architecture for fast feature embedding.
Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. In ACM International Conference on Multimedia, pages 675–678, 2014.
- Pushing the frontiers of unconstrained face detection and recognition: IARPA Janus Benchmark A.
B. F. Klare, B. Klein, E. Taborsky, A. Blanton, J. Cheney, K. Allen, P. Grother, A. Mah, M. Burge, and A. K. Jain. In IEEE Conference on Computer Vision and Pattern Recognition, 2015.
- Imagenet classification with deep convolutional neural networks.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
- Joint feature learning for face recognition.
J. Lu, V. E. Liong, G. Wang, and P. Moulin. IEEE Transactions on Information Forensics and Security, PP(99):1–1, 2015.
- A compact and discriminative face track descriptor.
O. M. Parkhi, K. Simonyan, A. Vedaldi, and A. Zisserman. In IEEE Conference on Computer Vision and Pattern Recognition, 2014.
- Dictionary-based face recognition under variable lighting and pose.
V. M. Patel, T. Wu, S. Biswas, P. J. Phillips, and R. Chellappa. IEEE Transactions on Information Forensics and Security, 7(3):954–965, 2012.
- Facenet: A unified embedding for face recognition and clustering.
F. Schroff, D. Kalenichenko, and J. Philbin. arXiv preprint arXiv:1503.03832, 2015.
- Fisher vector faces in the wild.
K. Simonyan, O. M. Parkhi, A. Vedaldi, and A. Zisserman. In British Machine Vision Conference, volume 1, page 7, 2013.
- Very deep convolutional networks for large-scale image recognition.
K. Simonyan and A. Zisserman. arXiv preprint arXiv:1409.1556, 2014.
- Deep learning face representation by joint identification-verification.
Y. Sun, Y. Chen, X. Wang, and X. Tang. In Advances in Neural Information Processing Systems, pages 1988–1996, 2014.
- Deepid3: Face recognition with very deep neural networks.
Y. Sun, D. Liang, X. Wang, and X. Tang. arXiv preprint arXiv:1502.00873, 2015.
- Deeply learned face representations are sparse, selective, and robust.
Y. Sun, X. Wang, and X. Tang. arXiv preprint arXiv:1412.1265, 2014.
- Going deeper with convolutions.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. arXiv preprint arXiv:1409.4842, 2014.
- Multiple one-shots for utilizing class label information.
Y. Taigman, L. Wolf, and T. Hassner. In British Machine Vision Conference, pages 1–12, 2009.
- Deepface: Closing the gap to human-level performance in face verification.
Y. Taigman, M. Yang, M. A. Ranzato, and L. Wolf. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1701–1708, 2014.
- Robust real-time face detection.
P. Viola and M. J. Jones. International Journal of Computer Vision, 57(2):137–154, 2004.
- Face search at scale: 80 million gallery.
D. Wang, C. Otto, and A. K. Jain. arXiv preprint arXiv:1507.07242, 2015.
- Distance metric learning for large margin nearest neighbor classification.
K. Q. Weinberger, J. Blitzer, and L. K. Saul. In Advances in neural information processing systems, pages 1473–1480, 2005.
- Fusing local patterns of gabor magnitude and phase for face recognition.
S. Xie, S. G. Shan, X. L. Chen, and J. Chen. IEEE Transactions on Image Processing, 19(5):1349–1361, 2010.
- Learning face representation from scratch.
D. Yi, Z. Lei, S. Liao, and S. Z. Li. arXiv preprint arXiv:1411.7923, 2014.
- Histogram of Gabor phase patterns (hgpp): a novel object representation approach for face recognition.
B. C. Zhang, S. G. Shan, X. L. Chen, and W. Gao. IEEE Transactions on Image Processing, 16(1):57–68, 2007.
- Face recognition: A literature survey.
W. Y. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld. ACM Computing Surveys, 35(4):399–458, 2003.