Unconstrained Face Verification using Deep CNN Features

Abstract

In this paper, we present an algorithm for unconstrained face verification based on deep convolutional features and evaluate it on the newly released IARPA Janus Benchmark A (IJB-A) dataset as well as on the traditional Labeled Faces in the Wild (LFW) dataset. The IJB-A dataset includes real-world unconstrained faces of 500 subjects with full pose and illumination variations, which makes it much harder than the LFW and YouTube Faces (YTF) datasets. The deep convolutional neural network (DCNN) is trained using the CASIA-WebFace dataset. Results of experimental evaluations on the IJB-A and the LFW datasets are provided.

1 Introduction

Face verification is one of the core problems in computer vision and has been actively researched for over two decades [40]. In face verification, given two videos or images, the objective is to determine whether they belong to the same person. Many algorithms have been shown to work well on images that are collected in controlled settings. However, the performance of these algorithms often degrades significantly on images that have large variations in pose, illumination, expression, aging, cosmetics, and occlusion.

To deal with this problem, many methods have focused on learning invariant and discriminative representations from face images and videos. One approach is to extract an over-complete and high-dimensional feature representation followed by a learned metric that projects the feature vector into a low-dimensional space and computes the similarity score. For instance, the high-dimensional multi-scale Local Binary Pattern (LBP) [5] features extracted from local patches around facial landmarks are reasonably effective for face recognition. Face representations based on the Fisher vector (FV) have also been shown to be effective for face recognition problems [26][23][9]. More recently, deep convolutional neural networks (DCNNs) have demonstrated impressive performance on different tasks such as object recognition [21][31], object detection [14], and face verification [25]. It has been shown that a DCNN model can not only characterize large data variations but also learn a compact and discriminative feature representation when the size of the training data is sufficiently large. Once the model is learned, it is possible to generalize it to other tasks by fine-tuning the learned model on target datasets [13]. In this work, we train a DCNN model using a relatively small face dataset, the CASIA-WebFace [38], and compare the performance of our method with other commercial off-the-shelf face matchers on the challenging IJB-A dataset, which contains significant variations in pose, illumination, expression, resolution, and occlusion. We also evaluate the performance of the proposed method on the LFW dataset.

The rest of the paper is organized as follows. We briefly review some related works in Section 2. Details of the different components of the proposed method including the DCNN representation and joint Bayesian metric learning are given in Section 3. The protocol and the experimental results are presented in Section 4. Finally, we conclude the paper in Section 5 with a brief summary and discussion.

Figure 1: An overview of the proposed DCNN approach for face verification.

2 Related Work

In this section, we briefly review several recent related works on face verification.

2.1 Feature Learning

Learning an invariant and discriminative feature representation is the first step for a face verification system. Approaches can be broadly divided into two categories: (1) hand-crafted features, and (2) feature representations learned from data. In the first category, Ahonen et al. [1] showed that the Local Binary Pattern (LBP) is effective for face recognition. Gabor wavelets [39][37] have also been widely used to encode multi-scale and multi-orientation information for face images. Chen et al. [6] demonstrated good results for face verification using high-dimensional multi-scale LBP features extracted from patches around facial landmarks. In the second category, Patel et al. [24] and Chen et al. [11][10] applied dictionary-based approaches for image- and video-based face recognition by learning representative atoms from the data which are compact and robust to pose and illumination variations. The methods in [26][23][7] used FV encoding to generate over-complete and high-dimensional feature representations for still- and video-based face recognition. Lu et al. [22] proposed a dictionary learning framework in which the sparse codes of local patches generated from local patch dictionaries are pooled to produce a high-dimensional feature vector. The high dimensionality of these feature vectors makes such methods hard to train and scale to large datasets. However, advances in deep learning have shown that compact and discriminative representations can be learned by a DCNN from very large datasets. Taigman et al. [33] learned a DCNN model on frontalized faces generated with a general 3D shape model from a large-scale face dataset and achieved better performance than many traditional face verification methods. Sun et al. [28][30] achieved results that surpass human performance for face verification on the LFW dataset using an ensemble of 25 simple DCNNs with fewer layers, trained on weakly aligned face images from a much smaller dataset than that of [33]. Schroff et al. [25] adapted a state-of-the-art deep architecture for object recognition to face recognition and trained it on a large-scale unaligned private face dataset with the triplet loss, also achieving top performance on face verification problems. These works demonstrate the effectiveness of the DCNN model for feature learning and for detection/recognition/verification problems.

2.2 Metric Learning

Learning a similarity measure from data is the other key component that can boost the performance of a face verification system. Many approaches have been proposed in the literature that essentially exploit the label information from face images or face pairs. For instance, Weinberger et al. [36] proposed the Large Margin Nearest Neighbor (LMNN) metric, which enforces a large-margin constraint among all triplets of labeled training data. Taigman et al. [32] learned the Mahalanobis distance using the Information Theoretic Metric Learning (ITML) method [12]. Chen et al. [5] proposed a joint Bayesian approach for face verification which models the joint distribution of a pair of face images instead of the difference between them; the ratio of between-class and within-class probabilities is used as the similarity measure. Hu et al. [17] learned a discriminative metric within a deep neural network framework. Huang et al. [18] learned a projection metric over a set of labeled images which preserves the underlying manifold structure.

3 Method

Our approach consists of both training and testing stages. For training, we first perform face and landmark detection on the CASIA-WebFace and IJB-A datasets to localize and align each face. Next, we train our DCNN on the CASIA-WebFace and derive the joint Bayesian metric using the training sets of the IJB-A dataset and the DCNN features. Then, given a pair of test image sets, we compute the similarity score based on their DCNN features and the learned metric. Figure 1 gives an overview of our method. The details of each component are presented in the following subsections.

3.1 Preprocessing

Before training the convolutional network, we perform landmark detection using the method presented in [2][3] because of its effectiveness on unconstrained faces. Each face is then aligned into canonical coordinates with a similarity transform using 7 landmark points (i.e., two left-eye corners, two right-eye corners, the nose tip, and two mouth corners). After alignment, the face image resolution is 100×100 pixels, and the distance between the centers of the two eyes is about 36 pixels.
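The 7-point similarity alignment can be sketched with a simple least-squares fit. This is our own minimal NumPy illustration, not the paper's implementation; the canonical landmark coordinates used below are placeholders:

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares similarity transform (scale + rotation + translation)
    mapping src landmarks (N, 2) onto dst landmarks (N, 2)."""
    # Parameterize the transform as [[a, -b], [b, a]] @ p + [tx, ty]
    # and solve the resulting linear system A @ [a, b, tx, ty] = y.
    n = src.shape[0]
    A = np.zeros((2 * n, 4))
    y = dst.reshape(-1)
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
    params, *_ = np.linalg.lstsq(A, y, rcond=None)
    a, b, tx, ty = params
    return np.array([[a, -b, tx], [b, a, ty]])  # 2x3 warp matrix

def warp_points(M, pts):
    """Apply a 2x3 similarity/affine matrix to (N, 2) points."""
    return pts @ M[:, :2].T + M[:, 2]
```

The returned 2×3 matrix can then be passed to any image-warping routine to resample the face into the 100×100 canonical frame.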

3.2 Deep Face Feature Representation

DCNNs with small filters and very deep architectures (i.e., 19 layers in [27] and 22 layers in [31]) have been shown to produce state-of-the-art results on many datasets, including ImageNet 2014, LFW, and the YouTube Faces dataset. Stacking small filters to approximate large filters and building very deep convolutional networks not only reduces the number of parameters but also increases the nonlinearity of the network. In addition, the resulting feature representation is compact and discriminative.

Our approach is motivated by [38]. However, we only consider the identity information per face without modeling the pair-wise cost. The dimensionality of the input layer is 100×100×1 for gray-scale images. The network includes 10 convolutional layers, 5 pooling layers, and 1 fully connected layer. The detailed architecture is shown in Table 1. Each convolutional layer is followed by a rectified linear unit (ReLU) except the last one, Conv52. Instead of suppressing all negative responses to zero as ReLU does, we use the parametric ReLU (PReLU) [16], which allows negative responses and in turn improves network performance; thus, we use PReLU in place of ReLU throughout. Moreover, two local normalization layers are added after Conv12 and Conv22, respectively, to mitigate the effect of illumination variations. The kernel size of all filters is 3×3. The first four pooling layers use the max operator. To generate a compact and discriminative feature representation, we use average pooling for the last layer, pool5. The feature dimensionality of pool5 is thus equal to the number of channels of Conv52, which is 320. The dropout ratio is set to 0.4 to regularize Fc6 due to its large number of parameters (i.e., 320×10548). To classify the large number of subjects in the training data (i.e., 10,548 subjects), this low-dimensional feature must contain strong discriminative information from all the face images. Consequently, the pool5 feature is used for face representation. The extracted features are further L2-normalized to unit length before the metric learning stage. If multiple frames are available for a subject, we use the average of the pool5 features as the overall feature representation. Figure 2 illustrates some of the extracted feature maps.
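As a concrete reading of this aggregation step, the per-frame pool5 features can be combined and scored as below. This is our sketch; the normalize-then-average ordering is our assumption, since the text does not spell it out:

```python
import numpy as np

def template_feature(frame_feats):
    """Aggregate per-frame pool5 features (n_frames x 320) into a single
    template descriptor: L2-normalize each frame's feature, average them,
    then renormalize the mean to unit length."""
    F = np.asarray(frame_feats, dtype=np.float64)
    F = F / np.linalg.norm(F, axis=1, keepdims=True)
    f = F.mean(axis=0)
    return f / np.linalg.norm(f)

def cosine_score(fa, fb):
    # Both inputs are unit-length, so the dot product equals cosine similarity.
    return float(np.dot(fa, fb))
```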

Figure 2: An illustration of some feature maps of the Conv11, Conv21, and Conv31 layers. At the upper layers, the feature maps capture more global shape features, which are also more robust to illumination changes than those of Conv11.

Table 1: The architecture of the DCNN used in this paper.
Name Type Filter Size/Stride Output Size Depth Params
Conv11 convolution 3×3×1 / 1 100×100×32 1 0.28K
Conv12 convolution 3×3×32 / 1 100×100×64 1 18K
Pool1 max pooling 2×2 / 2 50×50×64 0
Conv21 convolution 3×3×64 / 1 50×50×64 1 36K
Conv22 convolution 3×3×64 / 1 50×50×128 1 72K
Pool2 max pooling 2×2 / 2 25×25×128 0
Conv31 convolution 3×3×128 / 1 25×25×96 1 108K
Conv32 convolution 3×3×96 / 1 25×25×192 1 162K
Pool3 max pooling 2×2 / 2 13×13×192 0
Conv41 convolution 3×3×192 / 1 13×13×128 1 216K
Conv42 convolution 3×3×128 / 1 13×13×256 1 288K
Pool4 max pooling 2×2 / 2 7×7×256 0
Conv51 convolution 3×3×256 / 1 7×7×160 1 360K
Conv52 convolution 3×3×160 / 1 7×7×320 1 450K
Pool5 avg pooling 7×7 / 1 1×1×320 0
Dropout dropout (40%) 1×1×320 0
Fc6 fully connected 10548 1 3296K
Cost softmax 10548 0
total 11 5006K
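The entries of Table 1 can be cross-checked programmatically. The sketch below is ours; it reproduces the per-layer parameter counts assuming 3×3 kernels with no bias terms, and the spatial sizes assuming ceil-rounded 2×2/2 pooling:

```python
import math

# (name, in_channels, out_channels) for the ten convolutional layers of Table 1.
convs = [("Conv11", 1, 32), ("Conv12", 32, 64),
         ("Conv21", 64, 64), ("Conv22", 64, 128),
         ("Conv31", 128, 96), ("Conv32", 96, 192),
         ("Conv41", 192, 128), ("Conv42", 128, 256),
         ("Conv51", 256, 160), ("Conv52", 160, 320)]

def conv_params(cin, cout, k=3):
    # The table's counts correspond to k*k*cin*cout weights (no bias term).
    return k * k * cin * cout

def spatial_sizes(s=100, pools=4):
    # Each 2x2/2 max pool halves the map with ceil rounding: 100->50->25->13->7.
    sizes = [s]
    for _ in range(pools):
        s = math.ceil(s / 2)
        sizes.append(s)
    return sizes

fc_params = 320 * 10548                     # Fc6: 320-d feature -> 10,548 classes
total = sum(conv_params(ci, co) for _, ci, co in convs) + fc_params
```

Dividing by 1024 recovers the "K" figures of the table, including the 5006K total.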

3.3 Joint Bayesian Metric Learning

To utilize the positive and negative label information available from the training dataset, we learn a joint Bayesian metric, which has achieved good performance on face verification problems [5][4]. Instead of modeling the difference vector between two faces, this approach directly models the joint distribution of the feature vectors of the $i$th and $j$th images, $\{x_i, x_j\}$, as a Gaussian. Let $P(x_i, x_j \mid H_I) \sim N(0, \Sigma_I)$ when $x_i$ and $x_j$ belong to the same class, and $P(x_i, x_j \mid H_E) \sim N(0, \Sigma_E)$ when they are from different classes. In addition, each face vector can be modeled as $x = \mu + \epsilon$, where $\mu$ stands for the identity and $\epsilon$ for pose, illumination, and other variations. Both $\mu$ and $\epsilon$ are assumed to be independent zero-mean Gaussian distributions, $N(0, S_\mu)$ and $N(0, S_\epsilon)$, respectively.

The log likelihood ratio of the intra- and inter-class hypotheses, $r(x_i, x_j)$, can be computed as follows:

$$r(x_i, x_j) = \log \frac{P(x_i, x_j \mid H_I)}{P(x_i, x_j \mid H_E)} = x_i^T M x_i + x_j^T M x_j - 2 x_i^T R x_j, \quad (1)$$

where $M$ and $R$ are both negative semi-definite matrices. With $B = R - M$, Equation (1) can be rewritten as

$$r(x_i, x_j) = (x_i - x_j)^T M (x_i - x_j) - 2 x_i^T B x_j. \quad (2)$$

More details can be found in [5]. Instead of using the EM algorithm to estimate $M$ and $B$, we optimize the distance $d_{M,B}(x_i, x_j) = (x_i - x_j)^T M (x_i - x_j) - 2 x_i^T B x_j$ in a large-margin framework as follows:

$$\arg\min_{M, B, b} \sum_{i,j} \max\left[ 1 - y_{ij}\left( b - d_{M,B}(x_i, x_j) \right),\ 0 \right], \quad (3)$$

where $b \in \mathbb{R}$ is the threshold and $y_{ij}$ is the label of a pair: $y_{ij} = 1$ if persons $i$ and $j$ are the same, and $y_{ij} = -1$ otherwise. $M$ and $B$ are updated using stochastic gradient descent as follows, trained equally on positive and negative pairs in turn:

$$M_{t+1} = \begin{cases} M_t, & \text{if } y_{ij}\left(b_t - d_{M,B}(x_i, x_j)\right) > 1, \\ M_t - \gamma\, y_{ij}\, \Gamma_{ij}, & \text{otherwise,} \end{cases} \quad
B_{t+1} = \begin{cases} B_t, & \text{if } y_{ij}\left(b_t - d_{M,B}(x_i, x_j)\right) > 1, \\ B_t + 2\gamma\, y_{ij}\, x_i x_j^T, & \text{otherwise,} \end{cases} \quad
b_{t+1} = \begin{cases} b_t, & \text{if } y_{ij}\left(b_t - d_{M,B}(x_i, x_j)\right) > 1, \\ b_t + \gamma_b\, y_{ij}, & \text{otherwise,} \end{cases} \quad (4)$$

where $\Gamma_{ij} = (x_i - x_j)(x_i - x_j)^T$, $\gamma$ is the learning rate for $M$ and $B$, and $\gamma_b$ is the learning rate for the bias $b$. We use random semi-definite matrices to initialize $M = V V^T$ and $B = W W^T$, where $V, W \in \mathbb{R}^{d \times d}$ and the entries of $V$ and $W$ are drawn from $N(0, 1)$. Note that $M$ and $B$ are updated only when the constraints are violated. In our implementation, the ratio of positive to negative pairs generated from the identity information of the training set is 1:20. A further reason to train the metric this way rather than with the traditional EM algorithm is that, in the IJB-A training and test data, some templates contain only a single image. More details about the IJB-A dataset are given in Section 4.
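The large-margin SGD updates described above can be sketched in NumPy as follows. This is our illustration; the learning rates and dimensions are arbitrary:

```python
import numpy as np

def jb_distance(M, B, xi, xj):
    """Learned distance d_{M,B}(xi, xj) = (xi-xj)^T M (xi-xj) - 2 xi^T B xj."""
    d = xi - xj
    return float(d @ M @ d - 2.0 * xi @ B @ xj)

def sgd_step(M, B, b, xi, xj, y, lr=1e-3, lr_b=1e-3):
    """One large-margin update; y = +1 for a same-identity pair, -1 otherwise.
    Parameters change only when the margin constraint is violated."""
    if y * (b - jb_distance(M, B, xi, xj)) > 1.0:
        return M, B, b                           # constraint satisfied: no update
    d = xi - xj
    M = M - lr * y * np.outer(d, d)              # hinge-loss gradient step on M
    B = B + 2.0 * lr * y * np.outer(xi, xj)      # hinge-loss gradient step on B
    b = b + lr_b * y                             # gradient step on the bias
    return M, B, b

def init_metric(dim, rng):
    """Random semi-definite initialization M = V V^T, B = W W^T."""
    V = rng.standard_normal((dim, dim))
    W = rng.standard_normal((dim, dim))
    return V @ V.T, W @ W.T
```

A violated positive pair shrinks the learned distance, while a satisfied pair leaves the parameters untouched, matching the case analysis in the updates.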

3.4 DCNN Training Details

The DCNN is implemented using Caffe [19] and trained on the CASIA-WebFace dataset. The CASIA-WebFace dataset contains 494,414 face images of 10,575 subjects downloaded from the IMDb website. After removing the 27 subjects that overlap with the IJB-A dataset, 10,548 subjects (see Footnote 1) and 490,356 face images remain. For some subjects, there still exist a few images with wrong identity labels and a few duplicates. All images are scaled into [0, 1], and the mean image is subtracted. The data is augmented with horizontally flipped face images. We use a standard batch size of 128 for the training phase. Because each batch contains only sparse positive and negative pairs, in addition to the label-noise problem above, we do not include the verification cost as is done in [30]. The initial negative slope for PReLU is set to 0.25, as suggested in [16]. The weight decay of all convolutional layers is set to 0, and that of the final fully connected layer to 5e-4. The learning rate is set to 1e-2 initially and halved every 100,000 iterations. The momentum is set to 0.9. Finally, we use the snapshot at the 1,000,000th iteration for all our experiments.
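The input pipeline described above can be sketched as follows. This is our reading; the [0, 1] scaling convention and the crop sampling are assumptions based on the text:

```python
import numpy as np

def preprocess(img, mean_img):
    """Scale 8-bit gray values to [0, 1] and subtract the training-set
    mean image (our reading of the normalization described in the paper)."""
    return img.astype(np.float32) / 255.0 - mean_img

def hflip(img):
    # Horizontal mirroring doubles the effective training data.
    return img[:, ::-1]

def random_crop(img, size=100, rng=None):
    """Sample a size x size sub-window, e.g. a 100x100 crop of a 125x125 face."""
    rng = rng or np.random.default_rng()
    y = rng.integers(0, img.shape[0] - size + 1)
    x = rng.integers(0, img.shape[1] - size + 1)
    return img[y:y + size, x:x + size]
```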

4 Experiments

In this section, we present the results of the proposed approach on the challenging IARPA Janus Benchmark A (IJB-A) [20], its extended version, the Janus Challenging set 2 (JANUS CS2) dataset, and the LFW dataset. The JANUS CS2 dataset contains not only the sampled frames and images of the IJB-A but also the original videos. In its defined protocols, the JANUS CS2 dataset (see Footnote 2) includes much more test data for the identification and verification problems than the IJB-A dataset. Receiver operating characteristic (ROC) curves and cumulative match characteristic (CMC) scores are used to evaluate the performance of different algorithms. The ROC curve measures performance in the verification scenario, and the CMC score measures accuracy in a closed-set identification scenario.
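For reference, TAR at a fixed FAR and the CMC rank-r accuracies can be computed from raw scores as in the sketch below (ours; it assumes a closed set in which every probe subject appears in the gallery):

```python
import numpy as np

def tar_at_far(genuine, impostor, far):
    """True accept rate at the threshold yielding the requested false accept rate."""
    thr = np.percentile(impostor, 100.0 * (1.0 - far))
    return float(np.mean(np.asarray(genuine) >= thr))

def cmc(sim, gallery_ids, probe_ids, max_rank=5):
    """Rank-r accuracies from a (probes x gallery) similarity matrix.
    Assumes every probe's subject appears in the gallery (closed set)."""
    order = np.argsort(-sim, axis=1)             # best-first gallery order per probe
    hits = gallery_ids[order] == probe_ids[:, None]
    first = hits.argmax(axis=1)                  # 0-based rank of the first mate
    return [float(np.mean(first < r)) for r in range(1, max_rank + 1)]
```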

Table 2: Query results. The first column shows the query images from probe templates. The remaining 5 columns show the corresponding top-5 queried gallery templates.
Probe Template Rank-1 Rank-2 Rank-3 Rank-4 Rank-5
#Image: 22 #Image: 14 #Image: 3 #Image: 34 #Image: 32 #Image: 50
image image image image image image
Template ID: 2047 Template ID: 2030 Template ID: 5794 Template ID: 226 Template ID: 187 Template ID: 4726
Subject ID: 543 Subject ID: 543 Subject ID: 791 Subject ID: 102 Subject ID: 101 Subject ID: 404
#Image: 1 #Image: 22 #Image: 9 #Image: 6 #Image: 4 #Image: 4
image image image image image image
Template ID: 2993 Template ID: 2992 Template ID: 948 Template ID: 1312 Template ID: 3779 Template ID: 5812
Subject ID: 1559 Subject ID: 1559 Subject ID: 1558 Subject ID: 1704 Subject ID: 1876 Subject ID: 2166
#Image: 1 #Image: 25 #Image: 7 #Image: 3 #Image: 32 #Image: 6
image image image image image image
Template ID: 2062 Template ID: 986 Template ID: 5295 Template ID: 3729 Template ID: 187 Template ID: 5494
Subject ID: 158 Subject ID: 347 Subject ID: 2058 Subject ID: 606 Subject ID: 101 Subject ID: 2102

4.1 JANUS CS2 and IJB-A

Both the IJB-A and JANUS CS2 datasets contain 500 subjects with 5,397 images and 2,042 videos split into 20,412 frames, i.e., on average 11.4 images and 4.2 videos per subject. Sample images and video frames from the datasets are shown in Figure 3. The videos are released only for the JANUS CS2 dataset. The IJB-A evaluation protocol consists of verification (1:1 matching) over 10 splits. Each split contains around 11,748 pairs of templates (1,756 positive and 9,992 negative pairs on average). Similarly, the identification (1:N search) protocol also consists of 10 splits, which evaluate the search performance. In each search split, there are about 112 gallery templates and 1,763 probe templates (i.e., 1,187 genuine probe templates and 576 impostor probe templates). For the JANUS CS2, there are about 167 gallery templates and 1,763 probe templates, all of which are used for both identification and verification. The training set for both datasets contains 333 subjects, and the test set contains 167 subjects. Ten random splits of training and testing are provided by each benchmark. The main differences between the IJB-A and JANUS CS2 evaluation protocols are that (1) IJB-A considers the open-set identification problem whereas JANUS CS2 considers closed-set identification, and (2) IJB-A uses more difficult pairs, which are subsets of those in the JANUS CS2 dataset.

Figure 3: Sample images and frames from the IJB-A and JANUS CS2 datasets. A variety of challenging variations in pose, illumination, resolution, occlusion, and image quality are present in these images.

Both the IJB-A and JANUS CS2 datasets are divided into training and test sets. For the test sets of both benchmarks, the images and video frames of each subject are randomly split into gallery and probe sets without any overlapping subjects between them. Unlike the LFW and YTF datasets, which use only a sparse set of negative pairs to evaluate verification performance, the IJB-A and JANUS CS2 divide the images/video frames into gallery and probe sets so that all the available positive and negative pairs are used for evaluation. Also, each gallery and probe set consists of multiple templates. Each template contains a combination of images or frames sampled from multiple image sets or videos of a subject. For example, the size of the similarity matrix for JANUS CS2 split 1 is 167×1806, where 167 corresponds to the gallery set and 1806 to the probe set (i.e., the same subject reappears in multiple probe templates). Moreover, some templates contain only a single profile face in a challenging pose and of low image quality. In contrast to the LFW and YTF datasets, which include only faces detected by the Viola-Jones face detector [34], the images in the IJB-A and JANUS CS2 contain extreme pose, illumination, and expression variations. These factors make the IJB-A and JANUS CS2 challenging face recognition datasets [20].

4.2 Evaluation on JANUS CS2 and IJB-A

For the JANUS CS2 dataset, we compare the results of our DCNN method with the FV approach proposed in [26] and two commercial off-the-shelf matchers, COTS1 and GOTS [20]. The COTS1 and GOTS baselines provided with JANUS CS2 are the top performers from the most recent NIST FRVT study [15]. The FV method is trained on the LFW dataset, which contains few faces with extreme pose. Therefore, we use the pose information estimated by the landmark detector and select face images/video frames whose yaw angles are less than or equal to 25 degrees for each gallery and probe set. If no images/frames satisfy the constraint, we choose the one closest to frontal. For the DCNN method, however, we use all the frames without applying this selection strategy (see Footnote 3). Figures ? and ? show the ROC curves and the CMC curves, respectively, for the verification results using the previously described protocol, where DCNN means using the DCNN feature with the cosine distance, subscript "ft" means fine-tuning on the training data, "m" means applying joint Bayesian metric learning, and "c" means using RGB images instead of gray-scale ones. For the results of DCNNft+m, besides fine-tuning and metric learning, we also replace ReLU with PReLU and apply data augmentation (i.e., randomly cropping 100×100-pixel subregions from a 125×125 region). For DCNNft+m+c (see Footnote 4), we further use RGB images and larger face regions (i.e., we use 125×125-pixel face regions and resize them to 100×100 pixels). Then, we show the fusion result, DCNNfusion, obtained by directly summing the similarity scores of the two models, DCNNft+m and DCNNft+m+c, where DCNNft+m is trained on gray-scale images with smaller face regions and DCNNft+m+c is trained on RGB images with larger face regions. From these figures, we can clearly see the impact of each component on the final identification and verification results. From the ROC and CMC curves, we see that the DCNN method performs better than the other methods. This can be attributed to the fact that the DCNN model captures face variations over a large dataset and generalizes well to a new, smaller dataset.

We illustrate query samples in Table 2. The first column shows the query images from the probe templates. The remaining five columns show the corresponding top-5 retrieved gallery templates (i.e., rank-1 is the most similar, rank-2 the second most similar, etc.). For the first two rows, our approach successfully finds the correct subjects at rank 1. For the third row, the query template contains only one image with extreme pose, while the corresponding gallery template for the same subject happens to contain only near-frontal faces; thus, the subject is not found within the top-5 matches. To address the pose generalization problem of CNN features, one possible solution is to augment the templates by synthesizing faces in various poses with the help of a generic 3D model. We leave this for future work.

While this paper was under preparation, the authors became aware of [35], which also proposes a CNN-based approach for face verification/identification and evaluates it on the IJB-A dataset. The method proposed in [35] combines the features from seven independent DCNN models. With fine-tuning on the JANUS training data and metric learning, our approach performs comparably to [35], as shown in Figure ?. Furthermore, after replacing ReLU with PReLU and applying data augmentation, our approach significantly outperforms [35] with only a single model.

Table 3: Results on the IJB-A dataset. The TAR of all the approaches at FAR=0.1 and 0.01 for the ROC curves, and the Rank-1, Rank-5, and Rank-10 retrieval accuracies of the CMC curves, where subscripts ft, m, and c stand for fine-tuning, metric learning, and color, respectively.
IJB-A-Verif DCNN [35] DCNN DCNNft DCNNft+m DCNNft+m+c DCNNfusion
FAR=1e-2 0.732±0.033 0.573 0.64±0.045 0.787±0.043 0.818±0.037 0.838±0.042
FAR=1e-1 0.895±0.013 0.8±0.012 0.883±0.012 0.947±0.011 0.961±0.01 0.967±0.009
IJB-A-Ident DCNN [35] DCNN DCNNft DCNNft+m DCNNft+m+c DCNNfusion
Rank-1 0.820±0.024 0.726±0.034 0.799±0.036 0.852±0.018 0.882±0.01 0.903±0.012
Rank-5 0.929±0.013 0.84±0.023 0.901±0.025 0.937±0.01 0.957±0.07 0.965±0.008
Rank-10 — 0.884±0.025 0.934±0.016 0.954±0.007 0.974±0.005 0.977±0.007

Table 4: Results on the JANUS CS2 dataset. The TAR of all the approaches at FAR=0.1 and 0.01 for the ROC curves, and the Rank-1, Rank-5, and Rank-10 retrieval accuracies of the CMC curves, where subscripts ft, m, and c stand for fine-tuning, metric learning, and color, respectively.
CS2-Verif COTS1 GOTS FV DCNN DCNNft DCNNft+m DCNNft+m+c DCNNfusion
FAR=1e-2 0.581±0.054 0.467±0.066 0.411±0.081 0.649±0.015 0.765±0.014 0.876±0.013 0.904±0.011 0.921±0.013
FAR=1e-1 0.767±0.015 0.675±0.015 0.704±0.028 0.855±0.01 0.902±0.011 0.973±0.005 0.983±0.004 0.985±0.004
CS2-Ident COTS1 GOTS FV DCNN DCNNft DCNNft+m DCNNft+m+c DCNNfusion
Rank-1 0.551±0.03 0.413±0.022 0.381±0.018 0.694±0.012 0.768±0.013 0.838±0.012 0.867±0.01 0.891±0.01
Rank-5 0.694±0.017 0.571±0.017 0.559±0.021 0.809±0.011 0.874±0.01 0.924±0.009 0.949±0.005 0.957±0.007
Rank-10 0.741±0.017 0.624±0.018 0.637±0.025 0.85±0.009 0.91±0.008 0.949±0.006 0.966±0.005 0.972±0.005

Table 5: Accuracy of different methods on the LFW dataset.
Method #Net Training Set Metric Mean Accuracy Std
DeepFace 1 4.4 million images of 4,030 subjects, private cosine 95.92% 0.29%
DeepFace 7 4.4 million images of 4,030 subjects, private unrestricted, SVM 97.35% 0.25%
DeepID2 1 202,595 images of 10,117 subjects, private unrestricted, Joint-Bayes 95.43%
DeepID2 25 202,595 images of 10,117 subjects, private unrestricted, Joint-Bayes 99.15% 0.15%
DeepID3 50 202,595 images of 10,117 subjects, private unrestricted, Joint-Bayes 99.53% 0.10%
FaceNet 1 260 million images of 8 million subjects, private L2 99.63% 0.09%
Yi et al. 1 494,414 images of 10,575 subjects, public cosine 96.13% 0.30%
Yi et al. 1 494,414 images of 10,575 subjects, public unrestricted, Joint-Bayes 97.73% 0.31%
Wang et al. 1 494,414 images of 10,575 subjects, public cosine 96.95% 1.02%
Wang et al. 7 494,414 images of 10,575 subjects, public cosine 97.52% 0.76%
Wang et al. 1 494,414 images of 10,575 subjects, public unrestricted, Joint-Bayes 97.45% 0.99%
Wang et al. 7 494,414 images of 10,575 subjects, public unrestricted, Joint-Bayes 98.23% 0.68%
Human, funneled N/A N/A N/A 99.20%
Ours 1 490,356 images of 10,548 subjects, public cosine 97.15% 0.7%
Ours 1 490,356 images of 10,548 subjects, public unrestricted, Joint-Bayes 97.45% 0.7%

4.3 Labeled Faces in the Wild

We also evaluate our approach on the well-known LFW dataset using the standard protocol, which defines 3,000 positive pairs and 3,000 negative pairs in total and further splits them into 10 disjoint subsets for cross-validation. Each subset contains 300 positive and 300 negative pairs, and the protocol involves 7,701 images of 4,281 subjects. We compare the mean accuracy of the proposed deep model with other state-of-the-art deep-learning-based methods: DeepFace [33], DeepID2 [30], DeepID3 [29], FaceNet [25], Yi et al. [38], Wang et al. [35], and human performance on the "funneled" LFW images. The results are summarized in Table 5. It can be seen from this table that our approach performs comparably to other deep-learning-based methods, even though some of the methods compared in Table 5 use millions of samples to train their models, whereas we train only on the CASIA-WebFace dataset, which has fewer than 500K images.

4.4 Run Time

The DCNN model is trained for about 9 days on an NVIDIA Tesla K40 GPU. Feature extraction takes about 0.006 seconds per face image. In the future, supervised information will be fed into the intermediate layers to make the model more discriminative and to converge faster.

5 Conclusion

In this paper, we studied the performance of a DCNN method on the newly released, challenging face verification dataset IARPA Janus Benchmark A, which contains faces with full pose variations, illumination changes, and other difficult conditions. We showed that a DCNN approach can learn a robust model from a large dataset characterized by face variations and generalize well to another dataset. Experimental results demonstrate that the performance of the proposed DCNN on the IJB-A dataset is much better than that of the FV-based method and the commercial off-the-shelf matchers, and is competitive on the LFW dataset.

For future work, we plan to directly train a Siamese network using all the available positive and negative pairs from CASIA-Webface and IJB-A training datasets to fully utilize the discriminative information for realizing better performance.

6 Acknowledgments

This research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&D Contract No. 2014-14071600012. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. We thank NVIDIA for donating the K40 GPU used in this work.


Footnotes

  1. The list of overlapping subjects is available at http://www.umiacs.umd.edu/~pullpull/janus_overlap.xlsx
  2. The JANUS CS2 dataset is not publicly available yet.
  3. We correct a typo in [8]: the selection strategy is applied only to the FV-based method, not to the DCNN method.
  4. DCNNft+m+c and DCNNfusion are our improved results for the JANUS CS2 and IJB-A datasets, obtained after the paper was accepted.

References

  1. Face description with local binary patterns: Application to face recognition.
    T. Ahonen, A. Hadid, and M. Pietikainen. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(12):2037–2041, 2006.
  2. Robust discriminative response map fitting with constrained local models.
    A. Asthana, S. Zafeiriou, S. Y. Cheng, and M. Pantic. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3444–3451, 2013.
  3. Incremental face alignment in the wild.
    A. Asthana, S. Zafeiriou, S. Y. Cheng, and M. Pantic. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1859–1866, 2014.
  4. A practical transfer learning algorithm for face verification.
    X. D. Cao, D. Wipf, F. Wen, G. Q. Duan, and J. Sun. In IEEE International Conference on Computer Vision, pages 3208–3215. IEEE, 2013.
  5. Bayesian face revisited: A joint formulation.
    D. Chen, X. D. Cao, L. W. Wang, F. Wen, and J. Sun. In European Conference on Computer Vision, pages 566–579. 2012.
  6. Blessing of dimensionality: High-dimensional feature and its efficient compression for face verification.
    D. Chen, X. D. Cao, F. Wen, and J. Sun. In IEEE Conference on Computer Vision and Pattern Recognition, 2013.
  7. Landmark-based fisher vector representation for video-based face verification.
    J.-C. Chen, V. M. Patel, and R. Chellappa. In IEEE Conference on Image Processing, 2015.
  8. Unconstrained face verification using deep cnn features.
    J.-C. Chen, V. M. Patel, and R. Chellappa. arXiv preprint arXiv:1508.01722, 2015.
  9. Unconstrained face verification using fisher vectors computed from frontalized faces.
    J.-C. Chen, S. Sankaranarayanan, V. M. Patel, and R. Chellappa. In IEEE International Conference on Biometrics: Theory, Applications and Systems, 2015.
  10. Adaptive representations for video-based face recognition across pose.
    Y.-C. Chen, V. M. Patel, R. Chellappa, and P. J. Phillips. In IEEE Winter Conference on Applications of Computer Vision, 2014.
  11. Dictionary-based face recognition from video.
    Y.-C. Chen, V. M. Patel, P. J. Phillips, and R. Chellappa. In European Conference on Computer Vision, pages 766–779. 2012.
  12. Information-theoretic metric learning.
    J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. In International Conference on Machine learning, pages 209–216, 2007.
  13. Decaf: A deep convolutional activation feature for generic visual recognition.
    J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. arXiv preprint arXiv:1310.1531, 2013.
  14. Rich feature hierarchies for accurate object detection and semantic segmentation.
    R. Girshick, J. Donahue, T. Darrell, and J. Malik. In IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587, 2014.
  15. Face recognition vendor test (FRVT): Performance of face identification algorithms.
    P. Grother and M. Ngan. NIST Interagency Report 8009, 2014.
  16. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification.
    K. He, X. Zhang, S. Ren, and J. Sun. arXiv preprint arXiv:1502.01852, 2015.
  17. Discriminative deep metric learning for face verification in the wild.
    J. Hu, J. Lu, and Y.-P. Tan. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1875–1882, 2014.
  18. Projection metric learning on Grassmann manifold with application to video based face recognition.
    Z. Huang, R. Wang, S. Shan, and X. Chen. In IEEE Conference on Computer Vision and Pattern Recognition, pages 140–149, 2015.
  19. Caffe: Convolutional architecture for fast feature embedding.
    Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. In ACM International Conference on Multimedia, pages 675–678, 2014.
  20. Pushing the frontiers of unconstrained face detection and recognition: IARPA Janus Benchmark A.
    B. F. Klare, B. Klein, E. Taborsky, A. Blanton, J. Cheney, K. Allen, P. Grother, A. Mah, M. Burge, and A. K. Jain. In IEEE Conference on Computer Vision and Pattern Recognition, 2015.
  21. ImageNet classification with deep convolutional neural networks.
    A. Krizhevsky, I. Sutskever, and G. E. Hinton. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
  22. Joint feature learning for face recognition.
    J. Lu, V. E. Liong, G. Wang, and P. Moulin. IEEE Transactions on Information Forensics and Security, PP(99):1–1, 2015.
  23. A compact and discriminative face track descriptor.
    O. M. Parkhi, K. Simonyan, A. Vedaldi, and A. Zisserman. In IEEE Conference on Computer Vision and Pattern Recognition, 2014.
  24. Dictionary-based face recognition under variable lighting and pose.
    V. M. Patel, T. Wu, S. Biswas, P. J. Phillips, and R. Chellappa. IEEE Transactions on Information Forensics and Security, 7(3):954–965, 2012.
  25. FaceNet: A unified embedding for face recognition and clustering.
    F. Schroff, D. Kalenichenko, and J. Philbin. arXiv preprint arXiv:1503.03832, 2015.
  26. Fisher vector faces in the wild.
    K. Simonyan, O. M. Parkhi, A. Vedaldi, and A. Zisserman. In British Machine Vision Conference, volume 1, page 7, 2013.
  27. Very deep convolutional networks for large-scale image recognition.
    K. Simonyan and A. Zisserman. arXiv preprint arXiv:1409.1556, 2014.
  28. Deep learning face representation by joint identification-verification.
    Y. Sun, Y. Chen, X. Wang, and X. Tang. In Advances in Neural Information Processing Systems, pages 1988–1996, 2014.
  29. DeepID3: Face recognition with very deep neural networks.
    Y. Sun, D. Liang, X. Wang, and X. Tang. arXiv preprint arXiv:1502.00873, 2015.
  30. Deeply learned face representations are sparse, selective, and robust.
    Y. Sun, X. Wang, and X. Tang. arXiv preprint arXiv:1412.1265, 2014.
  31. Going deeper with convolutions.
    C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. arXiv preprint arXiv:1409.4842, 2014.
  32. Multiple one-shots for utilizing class label information.
    Y. Taigman, L. Wolf, and T. Hassner. In British Machine Vision Conference, pages 1–12, 2009.
  33. DeepFace: Closing the gap to human-level performance in face verification.
    Y. Taigman, M. Yang, M. A. Ranzato, and L. Wolf. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1701–1708, 2014.
  34. Robust real-time face detection.
    P. Viola and M. J. Jones. International Journal of Computer Vision, 57(2):137–154, 2004.
  35. Face search at scale: 80 million gallery.
    D. Wang, C. Otto, and A. K. Jain. arXiv preprint arXiv:1507.07242, 2015.
  36. Distance metric learning for large margin nearest neighbor classification.
    K. Q. Weinberger, J. Blitzer, and L. K. Saul. In Advances in Neural Information Processing Systems, pages 1473–1480, 2005.
  37. Fusing local patterns of Gabor magnitude and phase for face recognition.
    S. Xie, S. G. Shan, X. L. Chen, and J. Chen. IEEE Transactions on Image Processing, 19(5):1349–1361, 2010.
  38. Learning face representation from scratch.
    D. Yi, Z. Lei, S. Liao, and S. Z. Li. arXiv preprint arXiv:1411.7923, 2014.
  39. Histogram of Gabor phase patterns (HGPP): a novel object representation approach for face recognition.
    B. C. Zhang, S. G. Shan, X. L. Chen, and W. Gao. IEEE Transactions on Image Processing, 16(1):57–68, 2007.
  40. Face recognition: A literature survey.
    W. Y. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld. ACM Computing Surveys, 35(4):399–458, 2003.