Revisiting Few-Shot Learning for Facial Expression Recognition

Abstract

Most existing deep neural networks for automatic facial expression recognition focus on a set of predefined emotion classes, where the amount of training data has the biggest impact on performance. However, in the standard setting, over-parameterised neural networks are not amenable to learning from few samples, as they can quickly over-fit. In addition, these approaches lack the generalisation ability to identify a new category when the data for each category is too limited and significant variations exist in expression within the same semantic category. We embrace these challenges and formulate the problem as a low-shot learning problem, where once the base classifier is deployed, it must rapidly adapt to recognise novel classes using a few samples. In this paper, we revisit and compare existing few-shot learning methods for low-shot facial expression recognition in terms of their generalisation ability via episodic training. In particular, we extend our analysis to cross-domain generalisation, where training and test tasks are not drawn from the same distribution. We demonstrate the efficacy of low-shot learning methods through extensive experiments.

I Introduction

Substantial previous research has focused on building methods for facial expression recognition (FER) using datasets that contain a large number of annotated images. These methods are able to reach human performance on datasets created in a controlled setting, which usually contain a limited number of facial expressions. However, this scenario is neither practical nor always possible for real-life applications. Building a real-life FER model introduces two important challenges. First, it is almost impossible to have access to large amounts of annotated data covering a large spectrum of facial expressions. This is due to the large variability found in the data for each person, given that human behaviour is influenced by neurotransmitters, hormones, environmental factors, childhood, culture, genes and epigenetics [27]. Most current facial expression datasets use as labels the model introduced by Ekman: anger, disgust, fear, happiness, sadness, and surprise [8]. This is insufficient to describe all types of facial expressions. Some datasets introduced labels for neutral [22] or fatigue [15], others for compound emotions such as happily surprised [18, 7], but there exist many subtle emotions that cannot easily be gathered in large amounts. Second, the data distribution of facial expressions is highly imbalanced. Several facial expressions occur less often than others, such as fearfully disgusted. Some are common but span a large spectrum of intensities that is difficult to capture, such as angry. Therefore, there is a need to reduce the required resources and to leverage the limited number of samples that a real-life scenario can provide.

A paradigm shift called few-shot learning has started to explore the ability to learn from a limited number of samples. Few-shot learning creates models that can generalise to classes not seen during training, using only a few examples at test time [9, 36, 32, 10, 25, 28, 6].

Three lines of research have been explored in the domain of few-shot learning. Distance metric learning-based models analyse the similarity between the representation of a class and the representation of a given sample to be classified [36, 32, 6, 33]. Initialisation-based models aim to provide a good model initialisation by optimising standard iterative learning algorithms [10, 23] or by learning the update rule of the learner [25, 13, 1]. Hallucination-based models use a learned generator to create novel-class data for data augmentation [2, 37].

Our contributions are the following:

  1. We formalise the problem of FER using a few-shot classification setting. To the best of our knowledge, this is the first work that compares the performance of different few-shot learning algorithms for facial expression classification.

  2. We analyse the performance of few-shot learning algorithms on two cross-domain scenarios where the base and novel classes are sampled from different domains. We observe that few-shot learning algorithms do not generalise well when the dataset used for novel classes contains large intra-class variation.

  3. We observe that few-shot learning algorithms perform at their best when the domain shift between the base and novel classes is shallow.

II Related Work

Face representation, identification, clustering and facial expression recognition received a lot of attention in the past decade as a result of the advances in deep convolutional neural networks coupled with the availability of large annotated datasets [4, 30, 26, 29, 35]. In the following we present the relevant literature in the domains of FER and few-shot learning.

II-A Facial expression recognition

The main focus of FER is to classify facial expressions into discrete emotions. Architectures such as AlexNet [16], VGGNet [31], GoogLeNet [34] and ResNet [12] have been used for the task of FER. Recently, Zhong et al. addressed this problem with GNNs [38]. They used Gabor filters to extract features around the landmarks of the face. The features were then used as node representations in a graph modelled with a bidirectional recurrent neural network. Hayale et al. used deep siamese neural networks with a supervised loss function [11] to build a FER system. By dynamically modulating the verification signal over the identification one, they were able to reduce intra-class variation by minimising the distance between features of the same class while maximising the distance between features of different classes.

II-B Few-shot learning

One of the main disadvantages of the models mentioned above is that they require large amounts of annotated data, which can be expensive to obtain for FER. In real-life scenarios where data is scarce, over-parameterised networks are not able to learn from a few samples and tend to over-fit. Few-shot learning techniques were introduced to reduce the amount of data needed for training. Ranadive et al. used k-shot learning techniques for face identification [24]. Siamese networks with triplet loss were used by Schroff et al. to perform face recognition and clustering [29]. Lu et al. [20] also aim to reduce the number of labeled samples required to train a model, introducing a zero-shot learning technique to recognise facial expressions. Although our intentions are similar, our analysis differs from theirs because we use a few samples for building the models.

III Methodology

III-A Problem formulation

In the classic machine learning setting, given a training set $\mathcal{D}^{train} = \{(\mathbf{x}_t, y_t)\}_{t=1}^{T}$, we train a learner to estimate the parameters $\theta$ of a predictor, and we evaluate its generalisation ability on the unseen test set $\mathcal{D}^{test}$. The training and test sets are typically mapped into a feature space parameterised by $\theta$. The parameters are computed by minimising the empirical loss over the training data, along with a regularisation term $\mathcal{R}$ added to avoid over-fitting, as illustrated in Equation 1.

$\theta = \arg\min_{\theta} \mathcal{L}(\mathcal{D}^{train}; \theta) + \mathcal{R}(\theta)$ (1)

However, in the meta-learning setting, the aim is to minimise the generalisation error across a distribution of tasks. Given a collection of training and test sets $\mathcal{T} = \{(\mathcal{D}_i^{train}, \mathcal{D}_i^{test})\}_{i=1}^{I}$, often referred to as the meta-training set, the objective is to learn an embedding model $\phi$ that minimises the generalisation error across tasks given a base learner $\mathcal{A}$, as presented in Equation 2.

$\phi = \arg\min_{\phi} \mathbb{E}_{\mathcal{T}}\left[\mathcal{L}^{meta}\left(\mathcal{D}^{test}; \theta = \mathcal{A}(\mathcal{D}^{train}; \phi)\right)\right]$ (2)

This stage is often called meta-training in the literature. After the embedding model $\phi$ is learned, its generalisation is estimated on a set of never-seen tasks $\mathcal{S} = \{(\mathcal{D}_j^{train}, \mathcal{D}_j^{test})\}_{j=1}^{J}$, often referred to as the meta-test set. This stage, described in Equation 3, is often called meta-testing.

$\mathbb{E}_{\mathcal{S}}\left[\mathcal{L}^{meta}\left(\mathcal{D}^{test}; \theta = \mathcal{A}(\mathcal{D}^{train}; \phi)\right)\right]$ (3)

III-B Episodic training

Few-shot learning is cast as a meta-learning problem and evaluates models on N-way, K-shot classification tasks. During the meta-training phase, a meta-learner aims to learn from several tasks, called episodes, that are created to solve N-class problems using only K samples for each class. Each episode consists of a training phase, during which a base learner is optimised using an episodic training set known as the support set, and a testing phase, during which the meta-learner is updated with the loss given by the base learner on the episodic test set, known as the query set. During the meta-testing phase, the meta-learner is adapted to classify samples into classes held out during training, using only a few samples for each class.
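As a sketch of the episodic protocol described above, the snippet below samples one N-way, K-shot episode from a labelled dataset. The dictionary layout and sample counts are illustrative assumptions, not the authors' code:

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, n_query=8, seed=None):
    """Sample one N-way, K-shot episode from a dict mapping label -> samples."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)         # choose the N episode classes
    support, query = [], []
    for label in classes:
        picks = rng.sample(dataset[label], k_shot + n_query)
        support += [(x, label) for x in picks[:k_shot]]  # K samples per class -> support set
        query += [(x, label) for x in picks[k_shot:]]    # remaining samples -> query set
    return support, query

# Toy dataset: 10 classes with 20 samples each.
data = {c: list(range(20)) for c in range(10)}
s, q = sample_episode(data, n_way=5, k_shot=1, n_query=8, seed=0)
```

During meta-training, the base learner would fit on `s` and the meta-learner would be updated with the loss incurred on `q`, one episode at a time.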

III-C Overview of few-shot learning algorithms

In this section we first present two baselines, then the meta-learning algorithms used in our experiments. The baselines analysed in our experiments use transfer learning principles such as pre-training and fine-tuning, with the mention that fine-tuning employs only a few examples for each class. The Baseline and Baseline++ introduced by [3] are composed of a feature extractor and a classifier parameterised by a weight matrix. Both Baseline and Baseline++ are trained on base-class data in the training stage. In the fine-tuning stage, the network parameters are frozen and a new classifier is trained on the novel-class data. The difference between the two baselines lies in the construction of the classifier. The classifier of the Baseline model is composed of a linear layer followed by a softmax function, whereas Baseline++ replaces the linear layer with a list of weight vectors, one per class. During training, cosine distances between the input feature and the weight vectors are computed to obtain similarity scores. After normalisation, the similarity scores translate into per-class probabilities. Baseline++ thereby focuses on reducing intra-class variation.
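The cosine-distance classifier head of Baseline++ can be sketched as follows. The temperature-style `scale` factor is an assumption for illustration, not a value taken from [3]:

```python
import numpy as np

def cosine_scores(features, weight_vectors, scale=10.0):
    """Baseline++-style head: cosine similarity between an input feature and
    one learnable weight vector per class, scaled and passed through softmax."""
    f = features / np.linalg.norm(features, axis=-1, keepdims=True)
    w = weight_vectors / np.linalg.norm(weight_vectors, axis=-1, keepdims=True)
    logits = scale * f @ w.T                         # cosine similarities, temperature-scaled
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)     # per-class probabilities

# A feature nearly aligned with the first class's weight vector.
probs = cosine_scores(np.array([[1.0, 0.1, 0.0]]), np.eye(3))
```

Because both the feature and the weight vectors are unit-normalised, the score depends only on direction, which is what pushes same-class features together and reduces intra-class variation.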

Several few-shot learning algorithms have been proposed recently. For our experiments we selected four distance metric learning-based algorithms that use different strategies to analyse the similarity between the class representation and a given sample: MatchingNet [36], ProtoNet [32], RelationNet [33] and SubspaceNet [6]. MatchingNet computes the cosine distance between the representation of the query and the representation of each sample in the support set, then computes the average cosine distance for each class; the query is assigned the label of the class with the smallest cosine distance. ProtoNet computes the Euclidean distance between the embedding of the query and the class mean of the support features. RelationNet replaces the Euclidean distance used in ProtoNet with a learnable relation module. SubspaceNet assumes that the embeddings of the samples belonging to the same class span the class subspace; a query is classified by computing the distance between its embedding and each class subspace.
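A minimal sketch of the ProtoNet classification rule described above (nearest class mean under squared Euclidean distance); the toy embeddings are illustrative:

```python
import numpy as np

def protonet_predict(support_emb, support_lbl, query_emb):
    """ProtoNet-style classification: each class prototype is the mean of its
    support embeddings; a query takes the label of the nearest prototype
    under squared Euclidean distance."""
    classes = sorted(set(support_lbl))
    labels = np.array(support_lbl)
    protos = np.stack([support_emb[labels == c].mean(axis=0) for c in classes])
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # [Q, N] distances
    return [classes[i] for i in d.argmin(axis=1)]

# Two well-separated toy classes in a 2-D embedding space.
support_emb = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
support_lbl = ["a", "a", "b", "b"]
preds = protonet_predict(support_emb, support_lbl,
                         np.array([[0.0, 0.4], [10.0, 10.6]]))
```

MatchingNet would instead average per-sample cosine distances, and SubspaceNet would replace the prototype with the subspace spanned by each class's support embeddings.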

IV Evaluation

IV-A Evaluation setup

Datasets. We conducted experiments on three datasets: (i) miniImageNet, a subset of ImageNet [5] with 100 classes and 600 images per class; it was originally proposed by Vinyals et al. [36] and later refined by Ravi and Larochelle [25]. (ii) CK+ [21], which contains 7 classes representing the basic emotions of 123 participants, captured in a controlled setting. (iii) RAF-DB [18], with two different subsets: 7 classes of basic emotions and 12 classes of compound emotions; RAF-DB contains crowdsourced images with large variability in terms of illumination, occlusion and the subjects' age, ethnicity and pose. For these experiments we selected only the basic emotions from RAF-DB; further on, we will refer to this subset as RAF basic.

Metrics. We use the precision and the confusion matrix to validate the quality of the few-shot learning algorithms.

Scenarios. We wanted to evaluate whether and under what conditions few-shot learning algorithms are able to generalise; therefore we focused our analysis on two cross-domain scenarios. We used miniImageNet as the source of base classes and CK+/RAF basic as the source of novel classes: miniImageNet → CK+ and miniImageNet → RAF basic. Out of the 100 classes of miniImageNet, only 4 illustrate people performing different activities. Consequently, there is a significant domain shift between miniImageNet and CK+/RAF basic. We selected several feature embedding architectures with different depths to reduce intra-class variation for all methods: Conv-6, ResNet10, ResNet18, ResNet34 and ResNet50 [12]. We trained Baseline and Baseline++ for 100 epochs using a batch size of 8 on the entire miniImageNet. In the meta-training phase for the few-shot learning methods we trained 100,000 episodes for both 1-shot and 5-shot tasks. Each episode represents an N-way classification task, where the support set has K samples for each class and the query set has 8 samples for each class. In our setting, N is 5 and K is 1 or 5. In the fine-tuning and meta-testing phases we randomly selected N classes with K samples each from the CK+ or RAF basic datasets for the two cross-domain experiments. The results were averaged over 600 experiments.
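Averaging over 600 randomly sampled test experiments is typically reported as a mean accuracy with a 95% confidence interval (as in the ± figure quoted for the shallow domain-shift experiment). A minimal sketch under a normal approximation, with illustrative accuracy values:

```python
import math
import statistics

def mean_ci95(episode_accs):
    """Mean episode accuracy with a 95% confidence half-interval, matching the
    common practice of averaging few-shot results over many test episodes."""
    m = statistics.mean(episode_accs)
    half = 1.96 * statistics.stdev(episode_accs) / math.sqrt(len(episode_accs))
    return m, half

# Toy per-episode accuracies; a real run would pass 600 values.
m, h = mean_ci95([0.80, 0.84, 0.82, 0.86])
```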

IV-B Implementation details

We used the implementations of Baseline, Baseline++, MatchingNet, ProtoNet and RelationNet provided by Chen et al. [3] and that of SubspaceNet provided by Devos et al. [6]. In the training and meta-training stages we applied color jitter and horizontal flips to augment the datasets containing facial expressions, whereas for miniImageNet we additionally used random crops. For the RAF basic dataset we used the aligned images with a dimension of 100 × 100. For the CK+ dataset we detected the face with the Dual Shot Face Detector by Li et al. [17]. The few-shot learning methods were trained using the Adam optimizer [14] with an initial learning rate of , whereas in the pre-training of the backbones the RAdam optimizer [19] with an initial learning rate of was used.

IV-C Discussion

We performed three types of experiments for cross-domain few-shot classification: two experiments with a large domain shift, miniImageNet → RAF basic and miniImageNet → CK+, and one experiment with a shallow domain shift. For all settings, both 1-shot and 5-shot classification were performed.

Large domain-shift.

Fig. 1: Results for the cross-domain study: miniImageNet → RAF basic
Fig. 2: Results for the cross-domain study: miniImageNet → CK+

As observed in Figures 1 and 2, the Baseline surpasses all the few-shot learning algorithms for almost all ResNet backbones. The reason is that the Baseline is first trained on the base classes of miniImageNet and then fine-tuned only with the novel classes selected from the CK+/RAF basic dataset. This procedure increases robustness to domain shift because the feature extractor contains well-defined features. Compared to the Baseline, few-shot learning algorithms are not able to adapt the meta-learner with a few samples from a different distribution: in practice, they learn to learn from a support set within the same dataset, and the support set usually does not exhibit a large domain shift. SubspaceNet outperforms all other few-shot learning algorithms in the majority of experiments because it naturally embeds more information about a class by creating a subspace spanned by its examples in the representation space [6]. ProtoNet shows better performance than SubspaceNet in the 5-way 1-shot setting when shallow ResNet architectures are used, but as the backbone gets deeper, SubspaceNet reaches better accuracy. The good performance of ProtoNet in the 5-way 1-shot scenario may also be due to the sample averaging performed before training, which helps reduce intra-class variation. MatchingNet usually gives the lowest accuracy for both 1-shot and 5-shot experiments because the average cosine distance for each class does not take into account the relations between different samples within a class.

The performance of few-shot learning algorithms for cross-domain adaptation is highly influenced by the intra-class variation present in the datasets used for the novel classes. In Table I we see that few-shot learning algorithms generalise better when the novel classes are extracted from a dataset created in a controlled environment, thus with less intra-class variation. In Table II, however, we observe that the few-shot learning algorithms perform poorly when the novel classes are extracted from a dataset with substantial diversity in terms of facial expressions, illumination, and occlusion.

Fig. 3: Confusion matrix for miniImageNet → CK+ in the 5-way, 1-shot setting, with SubspaceNet as the model and ResNet18 as the backbone
miniImageNet → CK+
Method        1-shot   5-shot
Baseline
Baseline++
ProtoNet
MatchingNet
RelationNet
SubspaceNet
TABLE I: miniImageNet → CK+ using ResNet18 as a backbone
miniImageNet → RAF basic
Method        1-shot   5-shot
Baseline
Baseline++
ProtoNet
MatchingNet
RelationNet
SubspaceNet
TABLE II: miniImageNet → RAF basic using ResNet18 as a backbone
RAF basic → CK+
Method        1-shot   5-shot
Baseline
Baseline++
ProtoNet
MatchingNet
RelationNet
SubspaceNet
TABLE III: RAF basic → CK+ using ResNet18 as a backbone

Shallow domain-shift. We observed that a large domain shift has a negative impact on the performance of few-shot learning algorithms; therefore, in this subsection we present a scenario with RAF basic as the source of base classes and CK+ as the source of novel classes. We are aware that this setting is closer to domain adaptation than to few-shot learning, because the datasets have almost the same classes: disgust, happy, surprise, fear, angry, contempt, and neutral for CK+, and disgust, happy, surprise, fear, angry, sad, and neutral for RAF basic. We present the results for this experiment in Table III. Baseline++ outperforms all the meta-learning methods for both the 5-way 1-shot and 5-way 5-shot settings. This might be explained by its ability to reduce intra-class variation among features during training. We present the best performing setting in Figure 3, using SubspaceNet with a ResNet18 backbone.

V Conclusion

In this paper, we explored the generalisation ability of few-shot classification algorithms for recognising novel classes with limited training samples. Compared to related work, which samples the novel classes from the same dataset used during training, we provide a more realistic evaluation scenario: cross-domain adaptation in the presence of a large domain shift between the base and novel classes. In more detail, we selected base classes from miniImageNet and novel classes from two facial expression datasets, RAF basic and CK+. We addressed the questions of how and what to transfer in cross-domain low-shot learning. We observed that current few-shot learning algorithms are fragile in the face of a large domain shift. We emphasised the performance gain obtained with increased low-shot model capacity and in the presence of a limited domain gap between datasets.

We also analysed a scenario with a narrow domain shift, RAF basic → CK+, where the best performing algorithm reached 84.90% ± 0.53% accuracy when learning from only five samples.

Acknowledgement

This project is partially supported by the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 754354.

References

  1. M. Andrychowicz, M. Denil, S. Gómez, M. W. Hoffman, D. Pfau, T. Schaul, B. Shillingford and N. de Freitas (2016) Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems (NIPS), pp. 3981–3989.
  2. A. Antoniou, A. J. Storkey and H. Edwards (2018) Data augmentation generative adversarial networks. In Proceedings of the International Conference on Learning Representations Workshops (ICLR Workshops).
  3. W. Chen, Y. Liu, Z. Kira, Y. Wang and J. Huang (2019) A closer look at few-shot classification. In International Conference on Learning Representations (ICLR).
  4. S. Datta, G. Sharma and C. Jawahar (2018) Unsupervised learning of face representations. In IEEE International Conference on Automatic Face and Gesture Recognition (FG).
  5. J. Deng, W. Dong, R. Socher, L. Li, K. Li and L. Fei-Fei (2009) ImageNet: a large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248–255.
  6. A. Devos and M. Grossglauser (2019) Subspace networks for few-shot classification. CoRR abs/1905.13613.
  7. S. Du, Y. Tao and A. M. Martínez (2014) Compound facial expressions of emotion. Proceedings of the National Academy of Sciences of the United States of America 111 (15), pp. E1454–E1462.
  8. P. Ekman and W. V. Friesen (1971) Constants across cultures in the face and emotion. Journal of Personality and Social Psychology 17 (2), pp. 124–129.
  9. L. Fei-Fei, R. Fergus and P. Perona (2006) One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence 28 (4), pp. 594–611.
  10. C. Finn, P. Abbeel and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning (ICML), pp. 1126–1135.
  11. W. Hayale, P. Negi and M. Mahoor (2019) Facial expression recognition using deep siamese neural networks with a supervised loss function. In 2019 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG), pp. 1–7.
  12. K. He, X. Zhang, S. Ren and J. Sun (2016) Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778.
  13. S. Hochreiter, A. S. Younger and P. R. Conwell (2001) Learning to learn using gradient descent. In Lecture Notes in Computer Science 2130, Proceedings of the International Conference on Artificial Neural Networks (ICANN 2001), pp. 87–94.
  14. D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. CoRR abs/1412.6980.
  15. R. Kosti, J. M. Alvarez, A. Recasens and À. Lapedriza (2017) Emotion recognition in context. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1960–1968.
  16. A. Krizhevsky, I. Sutskever and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. Communications of the ACM 60, pp. 84–90.
  17. J. Li, Y. Wang, C. Wang, Y. Tai, J. Qian, J. Yang, C. Wang, J. Li and F. Huang (2019) DSFD: dual shot face detector. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  18. S. Li, W. Deng and J. Du (2017) Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2584–2593.
  19. L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao and J. Han (2019) On the variance of the adaptive learning rate and beyond. arXiv abs/1908.03265.
  20. Z. Lu, J. Zeng, S. Shan and X. Chen (2019) Zero-shot facial expression recognition with multi-label label propagation. In Computer Vision – ACCV 2018, pp. 19–34.
  21. P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar and I. Matthews (2010) The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition – Workshops, pp. 94–101.
  22. A. Mollahosseini, B. Hasani and M. H. Mahoor (2019) AffectNet: a database for facial expression, valence, and arousal computing in the wild. IEEE Transactions on Affective Computing 10, pp. 18–31.
  23. A. Nichol, J. Achiam and J. Schulman (2018) On first-order meta-learning algorithms. CoRR abs/1803.02999.
  24. O. Ranadive and D. Thakkar (2018) K-shot learning for face recognition. Vol. 181.
  25. S. Ravi and H. Larochelle (2017) Optimization as a model for few-shot learning. In 5th International Conference on Learning Representations (ICLR).
  26. V. Roethlingshoefer, V. Sharma and R. Stiefelhagen (2019) Self-supervised face-grouping on graph. In ACM MM.
  27. R. Sapolsky (2017) Behave: the biology of humans at our best and worst. Penguin Press.
  28. V. G. Satorras and J. B. Estrach (2018) Few-shot learning with graph neural networks. In 6th International Conference on Learning Representations (ICLR).
  29. F. Schroff, D. Kalenichenko and J. Philbin (2015) FaceNet: a unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 815–823.
  30. V. Sharma, M. Tapaswi, M. S. Sarfraz and R. Stiefelhagen (2019) Self-supervised learning of face representations for video face clustering. In IEEE International Conference on Automatic Face and Gesture Recognition (FG).
  31. K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR).
  32. J. Snell, K. Swersky and R. Zemel (2017) Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems (NIPS), pp. 4077–4087.
  33. F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. S. Torr and T. M. Hospedales (2018) Learning to compare: relation network for few-shot learning. In 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1199–1208.
  34. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke and A. Rabinovich (2015) Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9.
  35. Y. Taigman, M. Yang, M. Ranzato and L. Wolf (2014) DeepFace: closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1701–1708.
  36. O. Vinyals, C. Blundell, T. P. Lillicrap, K. Kavukcuoglu and D. Wierstra (2016) Matching networks for one shot learning. In Advances in Neural Information Processing Systems (NIPS).
  37. Y. Wang, R. B. Girshick, M. Hebert and B. Hariharan (2018) Low-shot learning from imaginary data. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7278–7286.
  38. L. Zhong, C. Bai, J. Li, T. Chen, S. Li and Y. Liu (2019) A graph-structured representation with BRNN for static-based facial expression recognition. In 2019 14th IEEE International Conference on Automatic Face and Gesture Recognition (FG), pp. 1–5.