SeqFace: Make full use of sequence information for face recognition

Abstract

Deep convolutional neural networks (CNNs) have greatly improved face recognition (FR) performance in recent years. Almost all CNNs for FR are trained on carefully labeled datasets containing plenty of identities. However, such high-quality datasets are very expensive to collect, which prevents many researchers from achieving state-of-the-art performance. In this paper, we propose a framework, called SeqFace, for learning discriminative face features. Besides a traditional identity training dataset, SeqFace can train CNNs with an additional dataset that contains a large number of face sequences collected from videos. Moreover, label smoothing regularization (LSR) and a newly proposed discriminative sequence agent (DSA) loss are employed to enhance the discriminative power of deep face features by making full use of the sequence data. Our method achieves excellent performance on Labeled Faces in the Wild (LFW) and YouTube Faces (YTF) with only a single ResNet. The code and models are publicly available online¹.

Keywords: face recognition, face sequences, convolutional neural networks
Wei Hu

1 Introduction

In most scenarios, FR is a metric learning problem, since test faces generally cannot be classified into the identities seen in the training set. Recently, deep CNNs have been widely used in FR due to their great discriminative feature learning capability. Face features are mainly learned via two types of methods, distinguished by the loss functions of the CNN models. One type uses classification loss functions, such as the softmax loss [\citenameTaigman et al. 2014, \citenameSun et al. 2014, \citenameWen et al. 2016a]. The other type uses metric learning loss functions, such as the contrastive loss and the triplet loss [\citenameHoffer and Ailon 2015, \citenameChopra et al. 2005]. In many recent CNNs for FR, the two types of loss functions are combined for learning face features [\citenameWen et al. 2016b]. All these loss functions aim to maximize the inter-identity variations and minimize the intra-identity variations under a certain metric space. No matter which loss functions are applied, the training data share the same form, which we call identity data in this paper.

An identity dataset includes faces of N identities, and each face in the dataset is clearly labeled as an image of the i-th (1 ≤ i ≤ N) identity. Currently, all public or private datasets for training deep face features, such as CASIA [\citenameYi et al. 2014], MS-Celeb-1M [\citenameGuo et al. 2016] and CelebFaces [\citenameChen et al. 2014], are identity datasets. However, a large-scale, high-quality identity dataset is very expensive to construct, since it costs a lot of effort and money. Identity data need two kinds of information: face images and identity annotations. Identities in most public and private datasets are celebrities, because celebrity photos are rather easily crawled from the Internet and annotated. However, a celebrity dataset might not be a satisfactory training dataset if there are obvious differences between the evaluated faces and the celebrity faces in age, race, pose, and so on.

Besides photos from the Internet, videos (movies, TV shows, surveillance videos, etc.) can also provide large quantities of face images, but few works utilize these face images because labeling identities in them is relatively difficult. However, it might be necessary to collect such faces as training data in some circumstances, such as surveillance. Face detection and tracking on videos can automatically generate data with lots of face sequences, where each sequence contains several faces of one identity. We call this type of data sequence data.

Figure 1: In our SeqFace framework, the CNN model is trained on an identity dataset and a sequence dataset, and is supervised jointly by a chief classification loss and another auxiliary loss. Different sequences can belong to the same identity in sequence data.

A sequence dataset includes faces of M sequences, and each face is labeled as an image of the i-th (1 ≤ i ≤ M) sequence. Because faces in one sequence must belong to one identity (note that different sequences might belong to one identity), sequence data have the potential to reduce the intra-identity variations. A large-scale, high-quality sequence dataset can be efficiently and automatically constructed by using state-of-the-art face detection and tracking methods. Although face sequences are broadly used in video FR applications, previous works have rarely utilized these unlabeled sequence datasets as training data to learn face features for image FR.

In this paper, we propose a framework, namely SeqFace, to learn discriminative face features on both identity data and sequence data (see Figure 1). The SeqFace framework is not intended to replace other models or loss functions, but to make full use of sequence data in the training procedure. In SeqFace, a CNN model is jointly supervised by two loss functions. The first is a chief classification loss, such as the softmax loss, which aims to maximize the inter-identity variations and minimize the intra-identity variations simultaneously. The second is an auxiliary loss, such as the center loss [\citenameWen et al. 2016b], which mainly encourages intra-identity compactness. Because traditional classification loss functions cannot deal with sequence data, label smoothing regularization (LSR) is employed to improve the chief loss. Moreover, we also propose a DSA loss as the auxiliary loss, which is superior to the center loss because it also contributes to inter-class dispersion. With the help of additional sequence data, CNN models trained in SeqFace produce highly discriminative features.

To summarize, our major contributions are as follows:

  1. We present a framework (SeqFace) to learn discriminative face features. Besides traditional identity data, unlabeled sequences are used as training data in SeqFace, for the first time, to enhance the discriminative power of face features.

  2. To make full use of sequence data, we employ LSR to help the chief classification loss deal with sequence data. A new DSA loss function, which contributes to both the intra-class compactness and the inter-class dispersion of the features, is also proposed as the auxiliary loss to train CNNs. Experiments demonstrate that LSR and the DSA loss both greatly boost FR performance.

  3. We conduct experiments on two popular and challenging FR benchmark datasets, Labeled Faces in the Wild (LFW) and YouTube Faces (YTF) [\citenameWolf et al. 2011], with a single ResNet-64; the model achieves a 99.83% verification accuracy on LFW and a 98.12% verification accuracy on YTF.

2 Related Works

In this section, we briefly review works on deep face recognition, and then introduce works related to face sequences in FR.

Deep Face Recognition

Deep face recognition is one of the most active research fields, and it has achieved a series of breakthroughs in recent years thanks to the great success of CNNs [\citenameHe et al. 2016, \citenameKrizhevsky et al. 2012, \citenameSimonyan and Zisserman 2014, \citenameSzegedy et al. 2015]. Many methods [\citenameTaigman et al. 2014, \citenameSun et al. 2014, \citenameWen et al. 2016a, \citenameSchroff et al. 2015, \citenameChen et al. 2014, \citenameSun et al. 2015a, \citenameSun et al. 2015b] have proven that CNNs outperform humans in FR on some benchmark datasets. In many methods [\citenameTaigman et al. 2014, \citenameSun et al. 2014, \citenameWen et al. 2016a], FR is treated as a multi-class classification problem and CNN models are supervised by the softmax loss. Some metric learning loss functions, such as the contrastive loss [\citenameYi et al. 2014, \citenameChopra et al. 2005], the triplet loss [\citenameHoffer and Ailon 2015, \citenameSchroff et al. 2015] and the center loss [\citenameWen et al. 2016b], are also applied to greatly boost FR performance. Other loss functions [\citenameDeng et al. 2017, \citenameZhang et al. 2017] based on metric losses also demonstrate effectiveness on FR. Recently, several angular-margin-based methods [\citenameLiu et al. 2016, \citenameLiu et al. 2017, \citenameDeng et al. 2018, \citenameWang et al. 2018] have been proposed and achieve superior performance.

Sequences in Face Recognition

In many applications, sequences or image sets are the most natural form of input to an FR system. Video face recognition methods [\citenameYamaguchi et al. 1998, \citenameCevikalp and Triggs 2010, \citenameWang et al. 2017, \citenameBashbaghi et al. 2017, \citenameDing and Tao 2016, \citenameHuang et al. 2015, \citenameParchami et al. 2017, \citenameSohn et al. 2017] based on face sequences or face sets are also expected to achieve better performance than those based on individual images. Most of these studies attempt to utilize the redundant information of face sequences/sets to improve recognition performance, not to learn discriminative features from sequence data. Recently, some approaches [\citenameDing and Tao 2016, \citenameParchami et al. 2017, \citenameSohn et al. 2017] aim to learn deep video features for video face recognition. In [\citenameSohn et al. 2017], large-scale unlabeled face sequences are employed as training data, but these sequence data are only utilized to learn transformations between the image and video domains. Learning discriminative face features still mainly depends on traditional large-scale identity datasets in these deep CNN approaches to video FR.

3 The Proposed Approach

3.1 SeqFace Framework

The proposed SeqFace is a framework for learning discriminative face features on identity datasets and sequence datasets simultaneously. Identity overlap between the two datasets is not allowed in SeqFace. A CNN model (a ResNet-like model in our implementation) is jointly supervised by one chief classification loss and one auxiliary loss. The chief loss enlarges the inter-identity feature differences and reduces the intra-identity feature variations simultaneously, while the major target of the auxiliary loss is to reduce intra-identity (intra-sequence) variations. The final loss can be formulated as

L = L_chief + λ·L_aux    (1)

where L_chief is the chief classification loss, L_aux is the auxiliary loss, and λ is a hyper-parameter used to balance the two loss functions.
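As a concrete illustration, the joint supervision of Equation 1 can be sketched in a few lines. This is a minimal NumPy sketch with a softmax chief loss and a center-style auxiliary loss; the function names are illustrative, not taken from the paper's Caffe implementation (the default λ of 0.04 matches the auxiliary weight used in the experiments):

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Chief classification loss for a single sample.
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])

def center_loss(feature, centers, label):
    # Auxiliary loss: half the squared distance to the class center.
    return 0.5 * np.sum((feature - centers[label]) ** 2)

def total_loss(logits, feature, centers, label, lam=0.04):
    # Equation 1: chief loss plus lambda-weighted auxiliary loss.
    return softmax_cross_entropy(logits, label) + lam * center_loss(feature, centers, label)
```

In a real training loop the chief loss would be an angular-margin variant and the centers would be learned buffers, but the additive combination stays the same.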

As in many methods, we treat the FR problem as a classification task to train CNNs, and the CNNs are mainly supervised by a chief classification loss, such as the softmax loss, the A-Softmax loss in SphereFace [\citenameLiu et al. 2017], and so on. All faces of one identity in the identity data are labeled as belonging to one class in the classification loss. However, an input face from sequence data cannot be assigned to any class (identity) in the classification loss, because there is no identity annotation in sequence data. That is to say, though a regular classification loss can be applied to train the CNN model in SeqFace, it can only deal with identity data and has to ignore sequence data.

We know that faces in one sequence certainly belong to one identity. Therefore, if a loss encourages the intra-sequence feature compactness, and does not penalize the inter-sequence feature compactness, it could supervise CNNs to learn discriminative face features on sequence data, and it could naturally deal with identity data too. Because this loss mainly affects the intra-sequence and intra-identity compactness, it has to be an auxiliary loss. The center loss [\citenameWen et al. 2016b] function is formulated as

L_aux = (1/2) Σ_{i=1}^{m} ‖x_i − c_{y_i}‖²    (2)

where m is the number of training samples, x_i denotes the feature of the i-th training sample, y_i is the class (identity) label of that sample, and c_{y_i} denotes the y_i-th class (identity) center of the deep features. The center loss deals with identity data and sequence data in the same way: c_{y_i} is the feature center of the y_i-th identity for identity data, or the feature center of the y_i-th sequence for sequence data. Since the formulation only characterizes intra-identity and intra-sequence feature compactness, it does not penalize close feature centers of different sequences. Therefore, the center loss is a good option for the auxiliary loss in SeqFace.
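The centers c_j are not recomputed over the whole dataset; following the standard center-loss formulation, they are updated incrementally from each mini-batch. A sketch of that update (the learning-rate name `lr` is illustrative; the `1 + count` denominator is the usual guard against classes absent from the batch):

```python
import numpy as np

def update_centers(features, labels, centers, lr=0.5):
    # Move each class center toward the mean of its samples in the batch.
    new_centers = centers.copy()
    for j in range(len(centers)):
        mask = (labels == j)
        count = mask.sum()
        # Average of (c_j - x_i) over samples of class j; zero if none.
        delta = (centers[j] - features[mask]).sum(axis=0) / (1.0 + count)
        new_centers[j] = centers[j] - lr * delta
    return new_centers
```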

To summarize, one benefit of our SeqFace framework is that it reduces the intra-identity variations while enlarging the inter-identity variations, with CNNs supervised by a chief classification loss and an auxiliary loss. In fact, we can employ the regular softmax loss and the center loss as the chief loss and the auxiliary loss in SeqFace. However, the softmax loss has to ignore sequence data, and the center loss only concerns the intra-identity and intra-sequence compactness, so sequence data contribute only to intra-identity compactness under the supervision of the center loss. In order to make full use of sequence information, LSR and the DSA loss are presented in this paper.

3.2 Label Smoothing Regularization

The softmax loss is widely applied to supervise CNN classification thanks to its simplicity and probabilistic interpretation, and it can be used as the chief loss in SeqFace; however, it has to ignore sequence data when training CNNs because of the lack of identity annotation. The softmax loss can be considered as the combination of a softmax function and a cross-entropy loss, where the cross-entropy loss is formulated as

L = −Σ_{k=1}^{K} q(k)·log p(k)    (3)

where K is the number of classes, p(k) is the predicted probability (the output of the softmax function) of the input belonging to class k, and q(k) is the ground-truth distribution defined as

q(k) = 1 if k = y, and q(k) = 0 otherwise,    (4)

where y is the ground-truth class label of the input. Label smoothing regularization (LSR) is introduced to deal with non-ground-truth targets [\citenameSzegedy et al. 2016], and label smoothing regularization for outliers (LSRO) is then used to incorporate unlabeled inputs [\citenameZheng et al. 2017] in CNNs. In LSR, q(k) can take a value between 0 and 1 for an input that cannot be clearly labeled as any class.

In our framework, because the identities in sequence data do not appear in identity data, we define q(k) = 1/K (so that Σ_k q(k) = 1), as in [\citenameZheng et al. 2017], for all input faces in sequence data. Therefore, the cross-entropy loss is rewritten as

L = −(1 − z)·log p(y) − (z/K)·Σ_{k=1}^{K} log p(k)    (5)

where z = 0 for an input face from identity data, and z = 1 for an input face from sequence data.
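Equation 5 translates directly into code. A NumPy sketch, where z is 0 for identity samples and 1 for sequence samples as in the text:

```python
import numpy as np

def lsr_cross_entropy(logits, label, z):
    # z = 0: ordinary cross-entropy on the ground-truth class (identity data).
    # z = 1: uniform target 1/K over all K classes (sequence data).
    shifted = logits - logits.max()
    log_p = shifted - np.log(np.exp(shifted).sum())  # log-softmax
    K = len(logits)
    return -(1 - z) * log_p[label] - (z / K) * log_p.sum()
```

For a sequence sample (z = 1) the loss is minimized by a uniform prediction over the known identities, which is exactly the "belongs to none of the training classes" target that LSRO encodes.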

The LSR can also be integrated into other softmax-like classification loss functions. In practice, a feature-normalized SphereFace (L2-SphereFace for short, the same as FNorm-SphereFace in [\citenameDeng et al. 2018]) is applied as the chief classification loss. An additional L2-constraint is added to the regular SphereFace [\citenameLiu et al. 2017]: the input feature is first normalized to unit length and then scaled by a scalar parameter s (s > 1). Therefore, under binary classification the decision boundary of L2-SphereFace is cos(mθ₁) = cos(θ₂) for class 1, and cos(θ₁) = cos(mθ₂) for class 2. In our implementation, the parameter s and the margin m are set to 32.0 and 4, respectively.
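The L2 constraint itself is straightforward: the feature (and, as in SphereFace, each class weight) is normalized to unit length, and the resulting cosines are rescaled by s, so every logit is s·cos θ_j before the margin is applied. A sketch with s = 32 as in the text (the function name is illustrative):

```python
import numpy as np

def l2_constrained_logits(feature, weights, s=32.0):
    # Normalize the feature and each class-weight row to unit length,
    # then scale the cosine similarities by s: logit_j = s * cos(theta_j).
    f = feature / np.linalg.norm(feature)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return s * (w @ f)
```

The margin term of SphereFace would then replace cos(θ_y) with cos(mθ_y) for the ground-truth class; that part is omitted here for brevity.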

3.3 DSA Loss

The center loss only reduces intra-class variations as the auxiliary loss, and the inter-class separability of features completely depends on the classification loss. In this section, we propose a new auxiliary loss, namely discriminative sequence agent loss (DSA Loss), which concerns the intra-class compactness and the inter-class dispersion simultaneously.

First, considering the traditional classification problem with an identity dataset, we define

d_{ij} = (1/2)·‖x_i − c_j‖²    (6)

as the distance between the feature x_i of the i-th training sample and the feature center c_j of the j-th class (identity); d_{ij} is essentially the squared Euclidean distance. Note that if x_i and c_j are normalized to unit length, d_{ij} can be re-formulated as

d_{ij} = 1 − cos θ_{ij}    (7)

where θ_{ij} denotes the angle between x_i and c_j, so d_{ij} can be regarded as an angular distance. Since our target is to reduce the distance between x_i and c_{y_i}, where y_i is the label of the i-th training sample, and to enlarge the other distances between x_i and c_j for all j ≠ y_i, a discriminative loss can be formulated as

ℓ_i = max(d_{i,y_i} − α, 0) + Σ_{j≠y_i} max(β − d_{ij}, 0)    (8)

where α and β are two parameters that adjust the discriminative power of the learned features. Therefore, the final loss function is

L_DSA = (1/m)·Σ_{i=1}^{m} [ λ·max(d_{i,y_i} − α, 0) + (1 − λ)·Σ_{j=1, j≠y_i}^{C} s_j·max(β − d_{ij}, 0) ],  s_j ∼ B(p)    (9)

where the parameter λ is applied to balance the intra-class compactness and the inter-class dispersion, and C is the number of identities (classes) in the identity dataset. We introduce another parameter p as the probability that the j-th center is employed in computing the final loss, because C might be a huge number and it would be time-consuming to compute all d_{ij} in each iteration. s_j ∼ B(p) denotes a Bernoulli distribution with probability p.

The gradient of L_DSA with respect to x_i and the update equation for c_j, similar to those in the center loss, are computed as

∂L_DSA/∂x_i = (λ/m)·δ(d_{i,y_i} > α)·(x_i − c_{y_i}) − ((1 − λ)/m)·Σ_{j≠y_i} s_j·δ(d_{ij} < β)·(x_i − c_j)    (10)

and

Δc_j = ( Σ_{i=1}^{m} δ(y_i = j)·(c_j − x_i) ) / ( 1 + Σ_{i=1}^{m} δ(y_i = j) )    (11)

where δ(condition) = 1 if the condition is satisfied, and δ(condition) = 0 if not.

Different from the center loss, our DSA loss also enforces constraints on inter-class variations. According to Equation 9, the feature is pulled towards the feature center of its identity, and is pushed away from feature centers of other identities randomly selected in each training iteration.

Figure 2: Illustration of the forces on sample features of identity data and sequence data. One sample is taken from the identity dataset and another from the sequence dataset; the shown centers are the feature centers of the corresponding identities and sequences. Each feature is pulled toward the center of its own identity or sequence and pushed away from the other centers.

Taking sequence data into account, a slight modification of Equation 9 is needed to compute the final DSA loss. We assume that there are C identities in the identity dataset (C is also the number of classes in the chief classification loss) and S sequences in the sequence dataset, with no overlap between the two datasets, as mentioned above. In Equation 9, c_j is selected from all identities and sequences (i.e., 1 ≤ j ≤ C + S) for samples in the identity dataset, and c_j is selected only from identities (i.e., 1 ≤ j ≤ C) for samples in the sequence dataset. That is to say, if the i-th sample is in the identity dataset, x_i is pushed away from the feature centers of other identities and of all sequences; otherwise, x_i is pushed away only from the feature centers of identities. Figure 2 illustrates two examples.

There are four parameters (λ, α, β, and p) in the DSA loss function. The parameter λ can be set to 0.5, since we care about both the intra-class compactness and the inter-class dispersion. The parameters α and β adjust the discriminative power of the features; larger values are preferred, but they increase the difficulty of convergence in training. According to our experiments, moderate values of α and β can be applied in most applications. The parameter p selects only part of the identities/sequences when computing the inter-class term, in order to reduce the computing cost; its value can be set flexibly based on the computing resources available in real applications.
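Putting the pieces together, the DSA loss with Bernoulli center sampling and the identity/sequence candidate ranges can be sketched as follows. This is a hypothetical hinge-based instantiation consistent with the description above, not the authors' exact implementation; the parameter defaults and function name are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def dsa_loss(feature, label, centers, num_ids, from_identity,
             alpha=0.5, beta=1.0, lam=0.5, p=1.0):
    # Pull the feature to within alpha of its own center, and push it
    # at least beta away from a Bernoulli(p)-sampled subset of the other
    # centers. Identity samples are pushed from all other identity AND
    # sequence centers; sequence samples only from the first num_ids
    # (identity) centers, as described in the text.
    d = 0.5 * np.sum((centers - feature) ** 2, axis=1)
    intra = max(d[label] - alpha, 0.0)
    candidates = np.arange(len(centers)) if from_identity else np.arange(num_ids)
    candidates = candidates[candidates != label]
    selected = candidates[rng.random(len(candidates)) < p]   # s_j ~ B(p)
    inter = sum(max(beta - d[j], 0.0) for j in selected)
    return lam * intra + (1.0 - lam) * inter
```

With p well below 1 (e.g. 0.001 for the large sequence dataset used later), only a small random subset of centers contributes to the push term in each iteration, which keeps the cost independent of the total number of classes.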

MNIST Example

We perform a toy example on the MNIST dataset [\citenameLecun and Cortes 2010] with our DSA loss. LeNet++ [\citenameWen et al. 2016b], a deeper and wider version of LeNet, is employed. The output of the last hidden layer is restricted to 2 dimensions for easy visualization (see Figure 3). For comparison, we train four models supervised by: a softmax loss; a softmax loss and a center loss; a softmax loss and a DSA loss; and a softmax loss and a DSA loss with normalized x_i and c_j, respectively. We set λ, α, β, and p in the DSA loss following the discussion above. The loss weights of the center/DSA losses are set to 0.04. All models are trained with a batch size of 32. The learning rate begins at 0.01 and is divided by 10 at 14K iterations; training finishes at 20K iterations. As shown in Figure 3, the features learned with the DSA loss are more discriminative. The feature dispersion in Figures 3(b) and 3(c) demonstrates that the DSA loss can enlarge inter-class distances: the feature centers of different classes are pushed away from each other.

Figure 3: Visualization of the 2-D feature distributions on the MNIST test set. Features of samples from different classes are denoted by points of different colors. The four CNNs are supervised by: (a) softmax loss; (b) softmax loss + center loss; (c) softmax loss + DSA loss with Euclidean distance; (d) softmax loss + DSA loss with angular distance.

4 Experiment

4.1 Implementation Details

In our experiments, all face images and their landmarks are detected by MTCNN [\citenameZhang et al. 2016]. The faces are aligned by a similarity transformation, as in [\citenameWu et al. 2017], and are cropped to fixed-size RGB images (with random cropping applied in training). Each pixel value is normalized by subtracting 127.5 and then dividing by 128.
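Concretely, the pixel normalization step is:

```python
import numpy as np

def normalize_face(img_uint8):
    # Map pixel values from [0, 255] to roughly [-1, 1):
    # subtract 127.5, then divide by 128, as described above.
    return (img_uint8.astype(np.float32) - 127.5) / 128.0
```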

Training and Testing

Caffe [\citenameJia et al. 2014] is used to implement the CNN models. Different CNN models are employed in the experiments, as introduced below. All weights of the auxiliary losses (λ in Equation 1) are set to 0.04 in the experiments. Euclidean distances (i.e., x_i and c_j in Equation 6 are not normalized) are applied in the DSA loss functions used in this section. At the testing stage, features of the original image are extracted directly from the last fully connected layer of the CNN, and cosine similarity is used to measure the feature distance. More details are presented in the corresponding sections.
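At test time, comparing two faces thus reduces to the cosine similarity of their extracted features:

```python
import numpy as np

def cosine_similarity(f1, f2):
    # Cosine of the angle between two feature vectors; higher means
    # the two faces are more likely to share an identity.
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))
```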

4.2 Exploration Experiment

In this section, the employed CNN is a ResNet-20 network similar to that of [\citenameLiu et al. 2017], trained on the publicly available CASIA-WebFace dataset [\citenameYi et al. 2014], which contains about 0.5M faces of 10,575 identities. All models are trained with a batch size of 32 on one Titan X GPU. The learning rate begins at 0.01 and is divided by 10 at 200K iterations; training finishes at 300K iterations.

To evaluate the effectiveness of sequence data, the 10,575 identities in the CASIA-WebFace dataset are randomly divided into two parts: dataset A (5,000 identities) and dataset B (5,575 identities). Faces in dataset B are then randomly split into 32,996 sequences. Datasets A and B are treated as the identity dataset and the sequence dataset, respectively.

Figure 4: Face verification accuracies on LFW achieved by the DSA losses with different parameter values.

Effect of the DSA loss parameters

We first study the effect of the parameter values in the DSA loss. We train several CNN models only on dataset A (5,000 identities) under the supervision of the DSA loss with different parameter values, and then evaluate these models on the LFW dataset. From the results shown in Figure 4, we can conclude that higher performance is achieved when the DSA loss simultaneously concerns the intra-class compactness and the inter-class dispersion of the learned features (0 < λ < 1). The results also demonstrate that larger α and β might lead to more discriminative features. According to our experiments, moderately large α and β are preferred to balance FR performance against the difficulty of convergence.

Effect of SeqFace

We also train 10 models (see Table 1) to demonstrate the effectiveness of SeqFace, LSR, and the DSA loss.

First, we use a regular softmax loss (Model I) and an L2-SphereFace loss (Model II) to train two CNN models, respectively. Only dataset A is used as the training dataset. The verification accuracies demonstrate that L2-SphereFace greatly boosts performance.

Then, the center loss and the DSA loss are applied as auxiliary losses to jointly supervise two CNN models (Model III and Model IV) together with a softmax loss, respectively. The reported results also demonstrate that: 1) auxiliary loss functions have a positive effect on FR performance even without sequence training data; 2) our DSA loss outperforms the center loss on FR.

Moreover, sequence data (dataset B) are added to train five CNN models supervised by: an LSR-based softmax loss and a center loss (Model V); an LSR-based softmax loss and a DSA loss (Model VI); an LSR-based L2-SphereFace loss (Model VII); an LSR-based L2-SphereFace loss and a center loss (Model VIII); and an LSR-based L2-SphereFace loss and a DSA loss (Model IX). From the results, we make the following observations: 1) sequence data obviously improve FR performance under SeqFace; 2) even a single chief classification loss with LSR can utilize sequence data to improve FR performance; 3) LSR and the DSA loss greatly enhance the discriminative power of the learned features.

Last, we also train a model (Model X) on the full CASIA-WebFace with an L2-SphereFace loss, and it reaches 99.03% accuracy on LFW. Comparing the accuracies of Models IX and X, we conclude that complete identity annotation is naturally preferred in training datasets, but the small gap shows that competitive performance can also be achieved by making full use of sequence information.

Model Loss Training Dataset Accuracy
I Softmax loss Dataset A 95.12%
II L2-SphereFace Dataset A 97.35%
III Softmax + Center loss Dataset A 97.03%
IV Softmax + DSA loss Dataset A 97.25%
V LSR-Softmax + Center loss Dataset A + Dataset B 97.62%
VI LSR-Softmax + DSA loss Dataset A + Dataset B 98.13%
VII LSR-L2-SphereFace Dataset A + Dataset B 98.65%
VIII LSR-L2-SphereFace + Center loss Dataset A + Dataset B 98.72%
IX LSR-L2-SphereFace + DSA loss Dataset A + Dataset B 98.85%
X L2-SphereFace CASIA-WebFace 99.03%
Table 1: Face verification accuracy on the LFW dataset.

4.3 Evaluation on LFW and YTF

In this section, we evaluate the proposed SeqFace on LFW and YTF in unconstrained environments. LFW [\citenameHuang et al. 2007] and YTF [\citenameWolf et al. 2011] are challenging benchmarks for face verification. The LFW dataset contains 13,233 faces of 5,749 identities, with large variations in pose, expression, and illumination. The YTF dataset includes 3,425 videos of 1,595 identities. We follow the unrestricted, labeled outside data protocol. To evaluate performance on YTF, the simple average of the features of all faces in a video is used to compute the final score.
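The YTF scoring scheme just described (average the per-frame features of each video, then compare with cosine similarity) can be sketched as:

```python
import numpy as np

def video_score(frames_a, frames_b):
    # Average the per-frame feature vectors of each video, then
    # compare the two mean features with cosine similarity.
    fa = np.mean(frames_a, axis=0)
    fb = np.mean(frames_b, axis=0)
    return float(np.dot(fa, fb) / (np.linalg.norm(fa) * np.linalg.norm(fb)))
```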

A ResNet-27 model² (the architecture is shown in Figure 5) and a ResNet-64 [\citenameLiu et al. 2017] are employed for evaluation. To accelerate training, we first train a baseline model under the supervision of the regular L2-SphereFace loss on the identity dataset only, and then fine-tune the baseline model using SeqFace. Our models are trained with a batch size of 128 on four Titan X GPUs. The learning rate begins at 0.01 and is divided by 10 at 300K and 600K iterations; training finishes at 800K iterations. The model is jointly supervised by an LSR-L2-SphereFace loss and a DSA loss, and is learned on the MS-Celeb-1M and our Celeb-Seq datasets described below. The DSA-loss parameters λ, α, and β follow the previous settings, while p is set to 0.001 because of the large number of sequences in the Celeb-Seq dataset.

Figure 5: The ResNet-27 architecture for the experiments on LFW and YTF. The CNN is jointly supervised by the LSR-L2-SphereFace and the DSA loss. ID denotes identity input data, SEQ denotes sequence input data, C denotes the convolution layer, P denotes the max-pooling layer, and FC denotes the fully connected layer.

Training Datasets

A refined MS-Celeb-1M (4M images and 79K identities) provided by [\citenameWu et al. 2017] is used as the identity dataset. Since there is no public sequence dataset for training deep CNNs, we construct a sequence dataset, Celeb-Seq, which includes about 2.5M face images of 550K face sequences. We first extract about 800K face sequences by using MTCNN [\citenameZhang et al. 2016] and the Kalman-Consensus Filter (KCF) [\citenameOlfati-Saber 2010] to detect and track video faces from 32 online TV channels, and then compute image features with the model provided by SphereFace [\citenameLiu et al. 2017]. Finally, faces of identities that overlap with MS-Celeb-1M, as well as noisy and near-duplicate faces within each sequence, are discarded from the dataset, both automatically and manually. We also remove face images belonging to identities that appear in the LFW and YTF test sets. Some face sequences in the Celeb-Seq dataset are shown in Figure 6.

Figure 6: Some face sequences in the Celeb-Seq dataset. All faces are aligned and cropped. Some sequences belong to the same identity (the two sequences at the top-right corner). Note that the numbers of faces in the sequences differ from each other.

Removing identities that overlap with MS-Celeb-1M cost us most of the time in the construction process, because many celebrities in MS-Celeb-1M can be found in the original 800K sequences. Fortunately, constructing a satisfactory sequence dataset will not be a time-consuming task in many real scenarios. For example, it is almost certain that people in an Asian street surveillance video will not appear in a European street surveillance video, and will not be found in the MS-Celeb-1M dataset either.

Evaluation Results

Table 2 reports the verification performance of several methods. To demonstrate the effectiveness of SeqFace, the performance of our baseline ResNet-27 is also reported in the table. Note that ArcFace employs the improved ResNets [\citenameHan et al. 2017]. SeqFace achieves the best accuracies on these two benchmark test sets. It is reported in the ArcFace paper that a regular 50-layer ResNet achieves a 99.71% accuracy on LFW. Moreover, our ResNet-27 and ResNet-64 models achieve 99.50% and 99.67%, respectively, at VR@FAR=0 on LFW.

SeqFace is only a framework for making use of sequence data. We believe that newer loss functions (such as ArcFace and CosFace) and deeper ResNets with improved residual units can be employed within SeqFace to further improve performance.

Method Models Data LFW YTF
DeepFace [\citenameTaigman et al. 2014] 3 4M images 97.35 91.4
DeepID2+ [\citenameSun et al. 2015b] 1 300K images 98.70 -
DeepID2+ [\citenameSun et al. 2015b] 25 300K images 99.47 93.2
FaceNet [\citenameSchroff et al. 2015] 1 200M images 99.65 95.1
Baidu [\citenameLiu et al. 2015] 9 1.2M images 99.77 -
Center Face [\citenameWen et al. 2016b] 1 0.7M images 99.28 94.9
SphereFace [\citenameLiu et al. 2017] 1 ResNet-64 CASIA-Webface 99.42 95.0
CosFace [\citenameWang et al. 2018] 1 ResNet-64 5M images 99.73 97.6
ArcFace [\citenameDeng et al. 2018] 1 ResNet-50 MS-Celeb-1M 99.78 -
ArcFace [\citenameDeng et al. 2018] 1 ResNet-100 MS-Celeb-1M 99.83 -
L2-SphereFace 1 ResNet-27 MS-Celeb-1M 99.55 95.7
SeqFace 1 ResNet-27 MS-Celeb-1M + Celeb-Seq 99.80 98.0
SeqFace 1 ResNet-64 MS-Celeb-1M + Celeb-Seq 99.83 98.12
Table 2: Verification accuracies (%) of different methods on LFW and YTF. Note that the ResNet models in ArcFace use improved residual units, and the MS-Celeb-1M training set used in ArcFace contains 3.8M images and 85K identities.

5 Conclusion

A large-scale, high-quality dataset for training CNNs in FR is very expensive to construct. Face features learned on publicly available datasets might not achieve satisfactory performance in some circumstances, for example when evaluating Asian faces in surveillance videos. Though large amounts of face images can be collected in such real situations, assigning identity labels to these images is still time-consuming. Fortunately, a dataset containing a large number of face sequences can be efficiently constructed by using face detection and tracking methods.

In this paper, we proposed a framework named SeqFace, which can utilize sequence data to learn highly discriminative face features. A chief classification loss and an auxiliary loss are combined to learn features on a traditional identity dataset together with a sequence dataset. LSR is employed to help the chief loss deal with sequence input, and the DSA loss is proposed to supervise CNNs as an auxiliary loss. We achieved good results on several popular face benchmarks with only a simple ResNet model. We also believe that more competitive performance can be obtained if recently proposed loss functions [\citenameWang et al. 2018, \citenameDeng et al. 2018] and CNN architectures [\citenameHan et al. 2017] are employed.

To the best of our knowledge, SeqFace is the first framework to employ face sequences as training data for learning highly discriminative face features. The requirement that the identity and sequence datasets share no identities may reduce the efficiency of constructing sequence datasets, but this condition holds naturally in many of the situations mentioned above. In fact, we trained a CNN model on the MS-Celeb-1M dataset and a sequence dataset whose face sequences were collected from surveillance videos in China; the learned model achieves good performance in surveillance applications. SeqFace also has great potential in other similar fields, such as person re-identification.

Footnotes

  1. https://github.com/huangyangyu/SeqFace
  2. https://github.com/ydwen/caffe-face

References

  1. Bashbaghi, S., Granger, E., Sabourin, R., and Bilodeau, G. A. 2017. Dynamic ensembles of exemplar-svms for still-to-video face recognition. Pattern Recognition 69, C, 61–81.
  2. Cevikalp, H., and Triggs, B. 2010. Face recognition based on image sets. In Computer Vision and Pattern Recognition, 2567–2573.
  3. Chen, Y., Chen, Y., Wang, X., and Tang, X. 2014. Deep learning face representation by joint identification-verification. In International Conference on Neural Information Processing Systems, 1988–1996.
  4. Chopra, S., Hadsell, R., and LeCun, Y. 2005. Learning a similarity metric discriminatively, with application to face verification. In CVPR (1), IEEE Computer Society, 539–546.
  5. Deng, J., Zhou, Y., and Zafeiriou, S. 2017. Marginal loss for deep face recognition. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2006–2014.
  6. Deng, J., Guo, J., and Zafeiriou, S. 2018. Arcface: Additive angular margin loss for deep face recognition.
  7. Ding, C., and Tao, D. 2016. Trunk-branch ensemble convolutional neural networks for video-based face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence PP, 99, 1–1.
  8. Guo, Y., Zhang, L., Hu, Y., He, X., and Gao, J. 2016. Ms-celeb-1m: A dataset and benchmark for large-scale face recognition. In European Conference on Computer Vision, 87–102.
  9. Han, D., Kim, J., and Kim, J. 2017. Deep pyramidal residual networks. In IEEE Conference on Computer Vision and Pattern Recognition, 6307–6315.
  10. He, K., Zhang, X., Ren, S., and Sun, J. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 00, 770–778.
  11. Hoffer, E., and Ailon, N. 2015. Deep metric learning using triplet network. In SIMBAD, Springer, A. Feragen, M. Pelillo, and M. Loog, Eds., vol. 9370 of Lecture Notes in Computer Science, 84–92.
  12. Huang, G. B., Mattar, M., Berg, T., and Learned-Miller, E. 2007. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report, University of Massachusetts.
  13. Huang, Z., Shan, S., Wang, R., Zhang, H., Lao, S., Kuerban, A., and Chen, X. 2015. A benchmark and comparative study of video-based face recognition on cox face database. IEEE Transactions on Image Processing A Publication of the IEEE Signal Processing Society 24, 12, 5967–5981.
  14. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., and Darrell, T. 2014. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22Nd ACM International Conference on Multimedia, ACM, New York, NY, USA, MM ’14, 675–678.
  15. Krizhevsky, A., Sutskever, I., and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. In International Conference on Neural Information Processing Systems, 1097–1105.
  16. Lecun, Y., and Cortes, C. 2010. The mnist database of handwritten digits.
  17. Liu, J., Deng, Y., Bai, T., and Huang, C. 2015. Targeting ultimate accuracy: Face recognition via deep embedding. arXiv:1506.07310 abs/1506.07310.
  18. Liu, W., Wen, Y., Yu, Z., and Yang, M. 2016. Large-margin softmax loss for convolutional neural networks. In International Conference on International Conference on Machine Learning, 507–516.
  19. Liu, W., Wen, Y., Yu, Z., Li, M., Raj, B., and Song, L. 2017. Sphereface: Deep hypersphere embedding for face recognition. In CVPR, IEEE Computer Society, 6738–6746.
  20. Olfati-Saber, R. 2010. Kalman-consensus filter: Optimality, stability, and performance. In Proceedings of the 48th IEEE Conference on Decision and Control (CDC), held jointly with the 2009 28th Chinese Control Conference, 7036–7042.
  21. Parchami, M., Bashbaghi, S., and Granger, E. 2017. Video-based face recognition using ensemble of haar-like deep convolutional neural networks. In IJCNN, IEEE, 4625–4632.
  22. Schroff, F., Kalenichenko, D., and Philbin, J. 2015. Facenet: A unified embedding for face recognition and clustering. In CVPR, IEEE Computer Society, 815–823.
  23. Simonyan, K., and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 abs/1409.1556.
  24. Sohn, K., Liu, S., Zhong, G., Yu, X., Yang, M. H., and Chandraker, M. 2017. Unsupervised domain adaptation for face recognition in unlabeled videos. In IEEE International Conference on Computer Vision, 5917–5925.
  25. Sun, Y., Wang, X., and Tang, X. 2014. Deep learning face representation from predicting 10, 000 classes. In CVPR, IEEE Computer Society, 1891–1898.
  26. Sun, Y., Liang, D., Wang, X., and Tang, X. 2015. Deepid3: Face recognition with very deep neural networks. arxiv:1502.00873 abs/1502.00873.
  27. Sun, Y., Wang, X., and Tang, X. 2015. Deeply learned face representations are sparse, selective, and robust. In IEEE Conference on Computer Vision and Pattern Recognition, 2892–2900.
  28. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S. E., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. 2015. Going deeper with convolutions. In CVPR, IEEE Computer Society, 1–9.
  29. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. 2016. Rethinking the inception architecture for computer vision. In Computer Vision and Pattern Recognition, 2818–2826.
  30. Taigman, Y., Yang, M., Ranzato, M., and Wolf, L. 2014. Deepface: Closing the gap to human-level performance in face verification. In CVPR, IEEE Computer Society, 1701–1708.
  31. Wang, W., Wang, R., Shan, S., and Chen, X. 2017. Discriminative covariance oriented representation learning for face recognition with image sets. In IEEE Conference on Computer Vision and Pattern Recognition, 5749–5758.
  32. Wang, H., Wang, Y., Zhou, Z., Ji, X., Li, Z., Gong, D., Zhou, J., and Liu, W. 2018. Cosface: Large margin cosine loss for deep face recognition.
  33. Wen, Y., Zhang, K., Li, Z., and Qiao, Y. 2016. A discriminative feature learning approach for deep face recognition. In ECCV (7), Springer, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds., vol. 9911 of Lecture Notes in Computer Science, 499–515.
  34. Wen, Y., Zhang, K., Li, Z., and Qiao, Y. 2016. A discriminative feature learning approach for deep face recognition. In European Conference on Computer Vision, 499–515.
  35. Wolf, L., Hassner, T., and Maoz, I. 2011. Face recognition in unconstrained videos with matched background similarity. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, Washington, DC, USA, CVPR ’11, 529–534.
  36. Wu, X., He, R., Sun, Z., and Tan, T. 2017. A light cnn for deep face representation with noisy labels. arXiv:1511.02683.
  37. Yamaguchi, O., Fukui, K., and Maeda, K. 1998. Face recognition using temporal image sequence. In IEEE International Conference on Automatic Face and Gesture Recognition, 1998. Proceedings, 318–323.
  38. Yi, D., Lei, Z., Liao, S., and Li, S. Z. 2014. Learning face representation from scratch. arXiv:1411.7923.
  39. Zhang, K., Zhang, Z., Li, Z., and Qiao, Y. 2016. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters 23, 10, 1499–1503.
  40. Zhang, X., Fang, Z., Wen, Y., Li, Z., and Qiao, Y. 2017. Range loss for deep face recognition with long-tailed training data. In IEEE International Conference on Computer Vision, 5419–5428.
  41. Zheng, Z., Zheng, L., and Yang, Y. 2017. Unlabeled samples generated by gan improve the person re-identification baseline in vitro. In IEEE International Conference on Computer Vision, 3774–3782.