A Novel Space-Time Representation on the Positive Semidefinite Cone for Facial Expression Recognition
Abstract
In this paper, we study the problem of facial expression recognition using a novel space-time geometric representation. We describe the temporal evolution of facial landmarks as parametrized trajectories on the Riemannian manifold of positive semidefinite matrices of fixed rank. Our representation has the advantage of naturally bringing in a second desirable quantity when comparing shapes – the spatial covariance – in addition to the conventional affine-shape representation. We then derive geometric and computational tools for rate-invariant analysis and adaptive resampling of trajectories, grounded in the Riemannian geometry of the manifold. Specifically, our approach involves three steps: 1) facial landmarks are first mapped into the Riemannian manifold of positive semidefinite matrices of rank 2, to build time-parameterized trajectories; 2) a temporal alignment is performed on the trajectories, providing a geometry-aware (dis)similarity measure between them; 3) finally, pairwise proximity function SVM (ppfSVM) is used to classify them, incorporating the latter (dis)similarity measure into the kernel function. We show the effectiveness of the proposed approach on four publicly available benchmarks (CK+, MMI, Oulu-CASIA, and AFEW). The results of the proposed approach are comparable to or better than the state-of-the-art methods when involving only facial landmarks.
1 Introduction
In recent years, Automated Facial Expression Recognition (AFER) has aroused considerable interest [8]. Earlier literature mostly focused on static faces, grounded in either shape (geometry) or appearance features. Recently, there has been a general shift toward exploiting the dynamics (motion) in facial videos [14, 20, 25], as conveying an expression is obviously a temporal process. In particular, advances in landmark detection [3, 34, 44] have opened the door to accurate geometry-driven approaches. Besides, it has been stated that in unconstrained scenarios, geometric features outperform appearance features [23]. However, analyzing temporal shape features brings new challenges – (1) which representation is suitable for facial shape analysis under rigid transformations due to changes in head position and orientation? (2) Which temporal representation is suitable for modeling the dynamics of facial expressions? (3) How can temporal sequences be compared and classified for the purpose of facial expression recognition? To tackle these challenges, we introduce in this work a comprehensive geometric framework that involves the temporal evolution of facial landmarks. Our framework incorporates a novel shape representation using Gramian matrices derived from centered facial landmark configurations, and its extension to time-parametrized trajectories on the positive semidefinite cone. We then use appropriate tools to compare and classify trajectories in a rate-invariant fashion, grounded in the geometry of the manifold of interest.
2 Related Work
In the appearance-based (A) category, early works extend conventional local features such as SIFT, LBP, and HOG to suit video-based data, giving rise to 3D SIFT [33], LBP-TOP [46], and 3D HOG [21]. In [25], the authors exploit the dynamics of facial expressions and propose a semantics-aware representation. They model a video clip as a Spatio-Temporal Manifold (STM) spanned by local spatio-temporal features called Expressionlets, built from low-level appearance features. These features are based on clustering cuboids of predefined sizes extracted from facial sequences in order to model the manifold of facial expression variations. A temporal alignment among STMs is performed to allow a rate-invariant analysis of facial expressions. Deep networks based on appearance features have recently been applied to facial image sequences for the purpose of AFER. Elaiwat et al. [14] propose a restricted Boltzmann machine (RBM) network that, unlike typical deep models, is shallow and therefore easier to optimize. The key property of the RBM network is to disentangle expression-related image transformations from transformations that are not related to the expressions. Despite their success in expression recognition, deep networks are less effective if trained with small datasets [37]. To overcome this limitation, Jung et al. exploit two temporal features from the appearance and the geometry (landmarks) to train two deep networks termed DTAN and DTGN, respectively [20]. They are then combined using a joint fine-tuning method to give rise to the DTAGN. Finally, it has been shown in [36] that face analysis using deep networks is sensitive to pose variations and often requires a face alignment step. As far as the geometry-based (G) approaches are concerned, in [18] the authors propose a probabilistic method to capture the subtle motions within expressions using Latent-Dynamic Conditional Random Fields (LDCRFs) on both geometric and appearance features.
They illustrate experimentally that variations in shape are much more important than appearance for AFER. In another work, Wang et al. [43] introduce a unified probabilistic framework based on an interval temporal Bayesian network (ITBN) built from the movements of specific geometric points detected on the face along a sequence. Recently, shape trajectory-based methods have shown their effectiveness in many temporal pattern recognition tasks, especially action recognition [2, 4, 6, 9, 40]. Taheri et al. [35] propose an affine-invariant shape representation on the Grassmann manifold [5] and model the dynamics of facial expression by parametrized trajectories on this manifold. Geodesic velocities between facial shapes are then used to capture the facial deformations. The classification is achieved using LDA followed by SVM.
Building on the discussion above, we propose a novel shape representation invariant to rigid motions, by embedding shapes into a positive semidefinite Riemannian manifold. Facial expression sequences are then viewed as trajectories on this manifold. To compare and classify these trajectories, we propose a variant of SVM that takes into account the nonlinearity of this space. The full approach is illustrated in Fig. 1. In summary, the main contributions of this paper are:

A novel static shape representation based on computing the Gramian matrix from centered landmark configurations, together with a comprehensive study of the Riemannian geometry of the space of representations (the cone of positive semidefinite matrices of fixed rank). Despite the wide use of these matrices in several research fields, to the best of our knowledge, this is the first application to static and dynamic shape analysis.

A temporal extension of the representation via parametrized trajectories in the underlying Riemannian manifold, with associated computational tools for temporal alignment and adaptive resampling of trajectories.

The classification of trajectories based on pairwise proximity function SVM (ppfSVM), grounded in pairwise (dis)similarity measures between them, with respect to the metric of the underlying manifold.

Extensive experiments and baselines on four publicly available datasets, and a comparative study with the existing literature, which demonstrate the competitiveness of the approach.
The rest of the paper is organized as follows. In section 3, we study the Riemannian geometry of the positive semidefinite manifold. In section 4, we adopt a temporal extension of the representation via time-parametrized trajectories in the manifold, with the definition of relevant geometric tools for temporal registration and trajectory resampling. Section 5 presents the classification approach, based on a variant of the standard SVM associated with a closeness measure between trajectories. Experimental results and discussions are reported in section 6. In section 7, we conclude and draw some perspectives of the work.
3 Shape and Trajectory Representations
Let us consider an arbitrary sequence of landmark configurations. Each configuration is an n × 2 matrix Z of rank 2 encoding the positions of n distinct points on the plane. We are interested in studying such sequences or curves of landmark configurations up to Euclidean motions of the plane. In what follows, we will first study the representation of static observations, then adopt a time-parametrized representation for a temporal analysis.
As a first step, we seek a shape representation that is invariant to Euclidean transformations (rotation and translation). Arguably, the most natural choice is the matrix of pairwise distances between the landmark points of the same shape, augmented by the distances from all landmarks to the center of mass. Since we are dealing with Euclidean distances, it will turn out to be more convenient to consider the matrix of the squares of these distances. Also note that by subtracting the center of mass from the coordinates of the landmarks, these can be considered as centered: the center of mass is always at the origin. From now on we will assume this is the case. With this provision, the augmented pairwise square-distance matrix takes the form,
where the entries are the squared distances ‖p_i − p_j‖² between the centered landmark points p_1, …, p_n, together with the squared distances ‖p_i‖² to the origin. As usual, ‖·‖ denotes the norm associated with the Euclidean inner product ⟨·,·⟩.
A key observation is that this matrix can be easily read from the Gram matrix G = ZZᵀ. Indeed, the entries of G are the pairwise inner products of the points p_1, …, p_n,
G_ij = ⟨p_i, p_j⟩, i, j = 1, …, n, (1)
and the equality
‖p_i − p_j‖² = G_ii + G_jj − 2 G_ij (2)
establishes a linear equivalence between the set of Gram matrices and augmented square-distance matrices of n distinct points on the plane. On the other hand, Gram matrices of the form ZZᵀ, where Z is an n × 2 matrix of rank 2, are characterized as n × n positive semidefinite matrices of rank 2 (for a detailed account of the relation between positive semidefinite matrices, Gram matrices, and square-distance matrices, we refer the reader to Section 6.2.1 of the book [10]). Conveniently for us, the Riemannian geometry of the space of these matrices, called the positive semidefinite cone S⁺(2, n), was studied in [7, 15, 27, 38].
An alternative shape representation, considered in [5] and [35], associates to each configuration the two-dimensional subspace spanned by its columns. This representation, which exploits the well-known geometry of the Grassmann manifold of two-dimensional subspaces, is invariant under all invertible linear transformations. By fully encoding the set of all mutual distances between landmark points, the Euclidean shape representation proposed in this paper supplements the affine shape representation with the knowledge of the covariance matrix of the centered landmarks. This leads to considerable improvements in the results of the conducted facial expression recognition experiments.
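As a concrete illustration, the Gram representation is a one-liner once the landmarks are centered. The following sketch (function name and toy configuration are ours, not from the paper) checks the claimed invariance to rotation and translation:

```python
import numpy as np

def gram_representation(landmarks):
    """Map an n x 2 landmark configuration to its Gram matrix.

    Centering removes translation; the Gram matrix G = Z Z^T is
    unchanged by planar rotations (Z -> Z O with O orthogonal),
    so G is a Euclidean-shape representation of rank <= 2.
    """
    Z = np.asarray(landmarks, dtype=float)
    Z = Z - Z.mean(axis=0)           # center of mass at the origin
    return Z @ Z.T                   # n x n PSD matrix of rank <= 2

# Invariance check on a hypothetical 5-landmark configuration:
rng = np.random.default_rng(0)
Z = rng.standard_normal((5, 2))
theta = 0.7
O = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
G1 = gram_representation(Z)
G2 = gram_representation(Z @ O.T + np.array([3.0, -1.0]))  # rotated + translated copy
assert np.allclose(G1, G2)
assert np.linalg.matrix_rank(G1) <= 2
```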
3.1 Riemannian geometry of S⁺(2, n)
Given an n × 2 matrix Z of rank two, its polar decomposition Z = UR, with U an n × 2 matrix with orthonormal columns and R a 2 × 2 symmetric positive definite matrix, allows us to write the Gram matrix as G = ZZᵀ = UR²Uᵀ. Since the columns of the matrix U are orthonormal, this decomposition defines a map
(U, R²) ↦ UR²Uᵀ
from the product of the Stiefel manifold St(2, n) and the cone P(2) of 2 × 2 positive definite matrices to the manifold S⁺(2, n) of positive semidefinite matrices of rank two. The map defines a principal fiber bundle over S⁺(2, n) with fibers
{(UO, OᵀR²O) : O ∈ O(2)},
where O(2) is the group of 2 × 2 orthogonal matrices. Bonnabel and Sepulchre [7] use this map and the geometry of the structure space St(2, n) × P(2) to introduce a Riemannian metric on S⁺(2, n) and study its geometry.
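The polar factors used by this construction can be computed from an SVD. A minimal sketch (our own helper, assuming Z has full rank 2):

```python
import numpy as np

def polar_factors(Z):
    """Polar decomposition Z = U R of an n x 2 matrix of rank 2.

    If Z = W diag(s) V^T is the thin SVD, then U = W V^T has
    orthonormal columns (a point on the Stiefel manifold St(2, n))
    and R = V diag(s) V^T is 2 x 2 symmetric positive definite,
    so the Gram matrix factors as G = Z Z^T = U R^2 U^T.
    """
    W, s, Vt = np.linalg.svd(Z, full_matrices=False)
    U = W @ Vt                   # orthonormal factor
    R = Vt.T @ np.diag(s) @ Vt   # SPD factor
    return U, R
```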
3.2 Tangent space and Riemannian metric
The tangent space at (U, R²) consists of pairs (Δ, D), where Δ is an n × 2 matrix satisfying UᵀΔ + ΔᵀU = 0 and D is any 2 × 2 symmetric matrix. Bonnabel and Sepulchre define a connection (see [22, p. 63]) on the principal bundle by setting the horizontal subspace at the point (U, R²) to be the space of tangent vectors (Δ, D) such that UᵀΔ = 0 and D is an arbitrary symmetric matrix. They also define an inner product on it: given two tangent vectors (Δ₁, D₁) and (Δ₂, D₂), set
⟨(Δ₁, D₁), (Δ₂, D₂)⟩ = tr(Δ₁ᵀΔ₂) + k tr(R⁻²D₁R⁻²D₂), (3)
where k > 0 is a real parameter.
It is easily checked that the action of the group of orthogonal matrices on the fiber sends horizontals to horizontals isometrically. It follows that the inner product on the tangent space of S⁺(2, n) induced from that of the horizontal subspace via the linear isomorphism given by the projection is independent of the choice of the point (U, R²) projecting onto G. This procedure defines a Riemannian metric on S⁺(2, n) for which the natural projection
π : St(2, n) × P(2) → S⁺(2, n) is a Riemannian submersion. This allows us to relate the geometry of S⁺(2, n) with that of the Grassmannian G(2, n) of two-dimensional subspaces.
Recall that the geometry of the Grassmannian is easily described by using the map span : St(2, n) → G(2, n)
that sends an n × 2 matrix U with orthonormal columns to the span of its columns. Given two subspaces span(U₁) and span(U₂), the geodesic curve connecting them is

span(U(t)) = span(U₁ cos(Θt) + M sin(Θt)),

where Θ is the diagonal matrix formed by the principal angles between span(U₁) and span(U₂), while the matrix M is given by the formula M = (U₂ − U₁ cos Θ)(sin Θ)†, with (sin Θ)† being the pseudoinverse of sin Θ. The Riemannian distance between span(U₁) and span(U₂) is given by
d_G(span(U₁), span(U₂)) = ‖Θ‖_F. (4)
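The Grassmann distance of Eq. (4) only needs the singular values of U₁ᵀU₂. A small sketch (our naming; the clipping guards against round-off outside [−1, 1]):

```python
import numpy as np

def grassmann_distance(U1, U2):
    """Riemannian distance on the Grassmannian G(2, n).

    The principal angles between span(U1) and span(U2) are the
    arccosines of the singular values of U1^T U2; the distance is
    the Frobenius norm of the diagonal matrix of principal angles.
    """
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))   # principal angles
    return float(np.linalg.norm(theta))
```

For two orthogonal planes in R⁴ both principal angles equal π/2, so the distance is √2 · π/2.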
3.3 Pseudo-geodesics and closeness in S⁺(2, n)
Bonnabel and Sepulchre [7] define the pseudo-geodesic connecting two matrices G₁ = U₁R₁²U₁ᵀ and G₂ = U₂R₂²U₂ᵀ in S⁺(2, n) as the curve
G(t) = U(t)R²(t)U(t)ᵀ, t ∈ [0, 1], (5)
where span(U(t)) is a geodesic in the Grassmannian G(2, n) given by the equation of the previous subsection, and R²(t) is a geodesic in the positive definite cone P(2). They also define the closeness between G₁ and G₂ as the square of the length of this curve:

d_{S⁺}(G₁, G₂) = ‖Θ‖²_F + k ‖log(R₁⁻¹R₂²R₁⁻¹)‖²_F = d²_G(span(U₁), span(U₂)) + k d²_{P(2)}(R₁², R₂²).
The closeness consists of two independent contributions: the square of the distance between the two associated subspaces and the square of the distance on the positive definite cone (Fig. 2). Note that the pseudo-geodesic is not necessarily a geodesic and, therefore, the closeness is not a true Riemannian distance. From the viewpoint of the landmark configurations Z₁ and Z₂, with G₁ = Z₁Z₁ᵀ and G₂ = Z₂Z₂ᵀ, the closeness encodes the distances measured between the affine shapes in G(2, n) and between their spatial covariances in P(2). Indeed, the spatial covariance of a centered configuration Z = UR is the symmetric positive definite matrix
ZᵀZ = (UR)ᵀ(UR) = R². (6)
The weight parameter k controls the relative contribution of these two pieces of information. Note that for k = 0 the closeness on S⁺(2, n) collapses to the distance on the Grassmannian G(2, n). Nevertheless, the authors of [7] recommend choosing small values for this parameter. The experiments for expression recognition reported in section 6 are in accordance with this recommendation.
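Putting the two terms together, the closeness can be sketched as follows. This is our own simplified reading of the formula in section 3.3: it compares the factors as given, without explicitly handling the O(2) alignment of the fibers, and k = 0.1 is only an illustrative default:

```python
import numpy as np

def closeness(U1, R1, U2, R2, k=0.1):
    """Closeness d_{S+} between G1 = U1 R1^2 U1^T and G2 = U2 R2^2 U2^T.

    First term: squared Grassmann distance between span(U1) and span(U2).
    Second term: k times the squared affine-invariant distance between
    the SPD factors, ||log(R1^{-1} R2^2 R1^{-1})||_F^2, computed from the
    eigenvalues of the (symmetric) congruence M = R1^{-1} R2^2 R1^{-1}.
    """
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))     # principal angles
    R1_inv = np.linalg.inv(R1)
    M = R1_inv @ (R2 @ R2) @ R1_inv              # SPD by congruence
    w = np.linalg.eigvalsh(M)
    return float(np.sum(theta ** 2) + k * np.sum(np.log(w) ** 2))
```

Since the second term equals the affine-invariant distance between R₁² and R₂², the measure is symmetric in its two arguments and vanishes when the factors coincide.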
4 Modeling Facial Expressions as Trajectories in S⁺(2, n)
We are now able to compare static shape representations based on their Gramian representation, the space S⁺(2, n), and the closeness introduced in the previous section. We need a natural and effective extension to study their temporal evolution. Following [6, 35, 39], we define curves β : I → S⁺(2, n) (I denotes the time domain, e.g. [0, 1]) to model the spatio-temporal evolution of elements on S⁺(2, n). Given a sequence of shapes represented by their corresponding Gram matrices in S⁺(2, n), the corresponding curve is the trajectory traced on S⁺(2, n) as t ranges over I. These curves are obtained by connecting all successive Gramian representations of shapes by pseudo-geodesics in S⁺(2, n).
4.1 Temporal alignment and analysis
The execution rate of facial expressions is often arbitrary, which results in different parameterizations of corresponding trajectories. This parameterization variability distorts comparison measures between trajectories. Given two trajectories β₁ and β₂ on S⁺(2, n), we are interested in finding a reparameterization γ such that β₁(t) and β₂(γ(t)) are optimally matched for all t. In other words, two curves β₁ and β₂ represent the same trajectory if their images are the same. This happens if, and only if, β₁ = β₂ ∘ γ, where γ is a reparameterization of the interval I. The problem of temporal alignment thus reduces to finding an optimal warping function according to,
γ* = argmin_{γ ∈ Γ} ∫_I d_{S⁺}(β₁(t), β₂(γ(t))) dt, (7)
where Γ denotes the set of all monotonically increasing functions γ : I → I. The most commonly used method to solve such an optimization problem is the Dynamic Time Warping (DTW) algorithm. Note that the accommodation of the DTW algorithm to manifold-valued sequences can be achieved with respect to an appropriate metric defined on the underlying manifold S⁺(2, n). Having the optimal reparameterization function γ*, one can define a (dis)similarity measure between two trajectories allowing a rate-invariant comparison:
d(β₁, β₂) = ∫_I d_{S⁺}(β₁(t), β₂(γ*(t))) dt. (8)
From now on, we shall use this measure to compare trajectories in our manifold of interest S⁺(2, n).
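A standard DTW recursion parameterized by an arbitrary ground metric suffices for the alignment step. The sketch below is our own simplification: it works on discretely sampled trajectories and returns only the accumulated cost of the optimal monotone alignment, not the warping path itself:

```python
import numpy as np

def dtw(traj1, traj2, dist):
    """Dynamic Time Warping between two manifold-valued trajectories.

    traj1, traj2 are sequences of points on the manifold; dist is any
    ground (dis)similarity (for us, the closeness on S+(2, n)).
    Returns the accumulated cost of the optimal monotone alignment,
    a discrete version of the rate-invariant measure of Eq. (8).
    """
    m, n = len(traj1), len(traj2)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            c = dist(traj1[i - 1], traj2[j - 1])
            # extend the cheapest of the three admissible predecessors
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[m, n])

# Scalar toy check: a repeated sample costs nothing under DTW.
assert dtw([0.0, 0.0, 1.0, 2.0], [0.0, 1.0, 2.0], lambda x, y: abs(x - y)) == 0.0
```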
4.2 Adaptive resampling of trajectories
One difficulty in video analysis is to catch the most relevant frames and focus on them. In fact, it is desirable to reduce the number of frames when no motion happens and, at the same time, to "introduce" new frames otherwise. Our geometric framework provides the tools to do so. Interpolation between successive frames can be achieved using the pseudo-geodesics defined in Eq. (5), while their length (the closeness defined in section 3.3) expresses the magnitude of the motion. Accordingly, we have designed an adaptive resampling tool that is able to increase/decrease the number of samples in a fixed time interval according to their relevance, with respect to the geometry of the underlying manifold S⁺(2, n). Irrelevant samples are identified by a relatively low closeness to the previous frame, while relevant ones correspond to a higher closeness level. Here, downsampling is performed by removing irrelevant shapes. In turn, upsampling is possible by interpolating between successive shape representations in S⁺(2, n) using pseudo-geodesics.
More formally, given a trajectory on S⁺(2, n), for each sample we compute its closeness to the previous sample. If the value is below a first threshold, the current sample is simply removed from the trajectory. In contrast, if the closeness exceeds a second threshold, new samples (shapes) generated from the pseudo-geodesic curve connecting the previous sample to the current one are inserted into the trajectory.
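The resampling rule above can be sketched as a single pass over the trajectory. Everything here (function names, the single-midpoint insertion, the threshold handling) is our simplified illustration rather than the authors' implementation:

```python
def resample(traj, closeness, tau_low, tau_high, interpolate):
    """Adaptive resampling of a trajectory.

    Drops a sample when its closeness to the last kept sample is
    below tau_low (negligible motion) and inserts a midpoint along
    the pseudo-geodesic, via interpolate(a, b, t), when it exceeds
    tau_high (fast motion). Thresholds are illustrative.
    """
    out = [traj[0]]
    for x in traj[1:]:
        c = closeness(out[-1], x)
        if c < tau_low:
            continue                                   # irrelevant sample: drop
        if c > tau_high:
            out.append(interpolate(out[-1], x, 0.5))   # densify fast motion
        out.append(x)
    return out

# Scalar toy check with |a - b| as closeness and linear interpolation:
out = resample([0.0, 0.02, 1.0],
               lambda a, b: abs(a - b), 0.05, 0.5,
               lambda a, b, t: a + t * (b - a))
assert out == [0.0, 0.5, 1.0]   # 0.02 dropped, midpoint 0.5 inserted
```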
5 Classification of Trajectories in S⁺(2, n)
Our trajectory representation reduces the problem of facial sequence classification to the classification of trajectories in S⁺(2, n). Let us consider T, the set of time-parametrized trajectories on the underlying manifold, together with a training set of labeled trajectories in T. The goal is to learn a classifier that predicts the expression label of a given trajectory. In Euclidean spaces, any standard classifier (e.g. a standard SVM) would be a natural and appropriate choice to classify the trajectories. Unfortunately, this is no longer suitable, as the space built from S⁺(2, n) is nonlinear. A function that divides the manifold is a rather complicated notion compared with the Euclidean case. In the current literature, few approaches have been proposed to handle the nonlinearity of Riemannian manifolds [19, 35, 39, 40]. These methods map the points on the manifold to a tangent space or to a Hilbert space where traditional learning techniques can be used for classification. Mapping data to a tangent space only yields a first-order approximation of the data, which can be distorted, especially in regions far from the origin of the tangent space. Moreover, iteratively mapping back and forth to the tangent spaces, i.e. via the Riemannian logarithm and exponential maps, significantly increases the computational cost of the algorithm. Recently, some authors have proposed to embed a manifold in a high-dimensional Reproducing Kernel Hilbert Space (RKHS), where Euclidean geometry applies [19]. Riemannian kernels enable classifiers to operate in an extrinsic feature space without computing the tangent space and the exponential and logarithm maps. Many Euclidean machine learning algorithms can be directly generalized to an RKHS, which is a vector space that possesses an important structure: the inner product. Such an embedding, however, requires a positive semidefinite kernel function, according to Mercer's theorem [32].
Inspired by a recent work [4] on action recognition, we adopt the pairwise proximity function SVM (ppfSVM) [16, 17]. PpfSVM requires the definition of a (dis)similarity measure to compare samples. In our case, it is natural to consider the measure defined in Eq. (8) for such a comparison. This strategy involves the construction of inputs such that each trajectory is represented by its (dis)similarity to all the trajectories in the dataset, and then applies a conventional SVM to this transformed data [17]. The ppfSVM is related to the arbitrary kernel SVM without restrictions on the kernel function [16].
Given N trajectories in T, we follow [4] and define a proximity function between two trajectories as follows,
(9) 
According to [16], there are no restrictions on the proximity function. For an input trajectory β, the mapping is given by,
Φ(β) = [P(β, β₁), …, P(β, β_N)]ᵀ. (10)
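The ppfSVM pipeline is then: build proximity features against the training set, and feed them to a conventional SVM. A toy sketch with scikit-learn, using scalar "trajectories" and an absolute-difference dissimilarity standing in for the DTW-based measure of Eq. (8) (all names and data are illustrative):

```python
import numpy as np
from sklearn.svm import SVC

def ppf_features(samples, train_set, dissim):
    """Pairwise-proximity mapping of Eq. (10): each sample is
    represented by its vector of dissimilarities to all training samples."""
    return np.array([[dissim(s, r) for r in train_set] for s in samples])

# Toy data: two well-separated classes of scalar "trajectories".
train = [0.0, 1.0, 5.0, 6.0]
labels = [0, 0, 1, 1]
dissim = lambda a, b: abs(a - b)

X = ppf_features(train, train, dissim)       # 4 x 4 proximity matrix
clf = SVC(kernel="linear").fit(X, labels)    # conventional SVM on top

pred = clf.predict(ppf_features([0.2, 5.8], train, dissim))
```

With manifold-valued trajectories, only `dissim` changes: it becomes the DTW-based measure built on the closeness of S⁺(2, n).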
6 Experimental Results
To validate the proposed approach, we have conducted extensive experiments on four publicly available datasets – CK+, MMI, Oulu-CASIA, and AFEW. We have followed the experimental settings commonly used in recent works. Note that all our experiments are performed once the facial landmarks are extracted using the method proposed in [3] on the CK+, MMI, and Oulu-CASIA datasets. On the challenging AFEW, we have considered the corrections provided in the footnote below.
Cohn-Kanade Extended (CK+) database [26] – is one of the most popular datasets. It contains frontal image sequences of posed expressions, a subset of which are annotated with the seven labels – anger (An), contempt (Co), disgust (Di), fear (Fe), happy (Ha), sad (Sa), and surprise (Su). Note that only the first two temporal phases of the expression, i.e. neutral and onset (with apex frames), are present. Following the same settings as [14, 20], we have performed a 10-fold cross-validation experiment. The results are summarized in Table 1.
An  Co  Di  Fe  Ha  Sa  Su  

An  100  5.55  3.38  0  0  3.57  0 
Co  0  83.35  0  0  1.44  0  1.2 
Di  0  0  96.62  0  0  0  0 
Fe  0  0  0  92  0  0  0 
Ha  0  5.55  0  8  98.56  0  0 
Sa  0  5.55  0  0  0  96.43  0 
Su  0  0  0  0  0  0  98.8 
Overall, the average accuracy is 96.87%. While the individual accuracies for (An), (Di), (Ha), and (Su) are high, recognizing (Co) and (Fe) remains challenging. Note that the accuracy of the trajectory representation on the Grassmannian, following the same pipeline, is 2% lower, which confirms the contribution of the covariance embedded in our shape representation.
MMI database [37] – consists of image sequences with frontal faces labeled with the six basic emotion labels. This database is different from the others: each sequence begins with a neutral facial expression, shows a posed facial expression in the middle, and ends with the neutral expression again. The location of the peak frame is not provided as prior information. Here again, the protocol used in [14, 20] was followed, according to a 10-fold cross-validation scheme. The confusion matrix is reported in Table 2. An average classification accuracy of 79.19% is obtained. Note that, based on geometric features only, our approach grounded in both the Grassmann and S⁺(2, n) representations achieves competitive results with respect to the literature (see Table 5).
An  Di  Fe  Ha  Sa  Su  
An  76.66  9.37  0  0  9.37  0 
Di  13.33  75  13.79  2.44  3.12  0 
Fe  0  3.12  55.17  0  9.37  12.82 
Ha  0  12.5  0  97.56  0  0 
Sa  10  0  3.44  0  71.87  2.56 
Su  0  0  27.58  0  6.25  84.61 
Oulu-CASIA database [45] – includes image sequences taken under normal illumination conditions. They are labeled with one of the six basic emotion labels. Each sequence begins with a neutral facial expression and ends with the apex of the expression. We adopt a 10-fold cross-validation scheme similar to [20, 25]. This time, the average accuracy is 83.13%, hence 3% higher than the Grassmann trajectory representation. This is the highest accuracy reported in the literature (refer to Table 6).
An  Di  Fe  Ha  Sa  Su  
An  81.25  15  1.25  0  13.75  0 
Di  10  78.75  2.5  0  6.25  0 
Fe  1.25  1.25  78.75  6.25  3.75  5 
Ha  1.25  1.25  3.75  91.25  1.25  1.25 
Sa  6.25  3.75  5  2.5  75  0 
Su  0  0  8.75  0  0  93.75 
AFEW database [12] – collected from movies showing close-to-real-world conditions, which depict or simulate spontaneous expressions in an uncontrolled environment. According to the protocol defined in EmotiW'2013 [11], the database is divided into three sets: training, validation, and test. The task is to classify each video clip into one of the seven expression categories (the six basic emotions plus the neutral). As the ground truth of the test set has not been released, we only report our results on the validation set for comparison with [11, 14, 25]. The average accuracy is 39.94%. Unsurprisingly, (Ne), (An), and (Ha) are better recognized than the rest. Despite their competitiveness with respect to the recent literature, these results clearly show that AFER "in-the-wild" is still a distant goal.
An  Di  Fe  Ha  Ne  Sa  Su  

An  56.25  12.5  30.43  4.76  7.93  13.11  26.08 
Di  0  10  8.69  4.76  0  6.55  2.17 
Fe  7.81  7.5  26.08  4.76  7.93  14.75  19.56 
Ha  10.93  22.5  10.87  66.66  6.35  11.47  2.17 
Ne  9.37  37.5  10.87  12.69  63.49  32.78  30.43 
Sa  10.93  2.5  6.52  6.35  11.11  18.03  2.17 
Su  4.68  7.5  6.52  0  3.17  3.27  17.39 
In Fig. 3 we study the method's behavior when varying the parameter k of the closeness defined in section 3.3. Recall that k serves to balance the contribution of the distance between covariance matrices living in P(2) against the Grassmann contribution. The graphs report the method's accuracy on CK+, MMI, Oulu-CASIA, and AFEW, respectively. The optimal performance on each dataset is achieved for a small value of k.
Method  CK+  MMI 
3D HOG [21] (from [20])  91.44  60.89 
3D SIFT [33] (from [20])    64.39 
Cov3D [30] (from [20])  92.3   
MSR [29] (LOSO)  91.4   
STMExpLet [25] (10fold)  94.19  75.12 
CSPL [48] (10fold)  89.89  73.53 
FBases [31] (LOSO)  96.02  75.12 
STRBM [14] (10fold)  95.66  81.63 
FaceNet2ExpNet [13]  96.8   
3DCNNDAP [24] (15fold)  87.9  62.2 
DTAN [20] (10fold)  91.44  62.45 
DTAGN [20] (10fold)  97.25  70.24 
DTGN [20] (10fold)  92.35  59.02 
TMS [18] (4fold)  85.84   
HMM [43] (15fold)  83.5  51.5 
ITBN [43] (15fold)  86.3  59.7 
Velocity on the Grassmannian [35]  82.8   
traj. on G(2, n) (10-fold)  94.25 ± 3.71  78.18 ± 4.87 
traj. on S⁺(2, n) (10-fold)  96.87 ± 2.46  79.19 ± 4.62 
Comparative study with the state of the art. In Tables 5 and 6, we compare our approach with the recent literature. Overall, our approach achieves competitive performance with respect to the most recent approaches. On CK+, we obtained the second highest accuracy. The top-ranked approach is DTAGN [20], in which two deep networks are trained on shape and appearance channels, then fused. Note that the geometry deep network alone (DTGN) achieved 92.35%, which is much lower than ours. Furthermore, our approach outperforms the STRBM [14] and the STMExpLet [25]. On the MMI dataset, our approach outperforms the DTAGN [20] and the STMExpLet [25]; however, it is behind STRBM [14]. Note that FaceNet2ExpNet [13] is a purely static approach and is reported here as the state of the art for static AFER.
Method  OuluCASIA  AFEW 

HOG 3D [21]  70.63  26.90 
HOE [41]    19.54 
3D SIFT [33]  55.83  24.87 
LBPTOP [47]  68.13  25.13 
EmotiW [11]    27.27 
STM [25]    29.19 
STMExpLet [25]  74.59  31.73 
DTAGN [20] (10fold)  81.46   
STRBM [14]    46.36 
traj. on G(2, n)  80.0 ± 5.22  39.1 
traj. on S⁺(2, n)  83.13 ± 3.86  39.94 
On the Oulu-CASIA dataset, our approach shows a clear superiority over existing methods, in particular STMExpLet [25] and DTGN [20]. Elaiwat et al. [14] do not report any results on this dataset; however, their approach achieved the highest accuracy on AFEW. Our approach ranks second, outperforming the remaining approaches on AFEW.
Baseline experiments. Based on the results reported in Table 7, we discuss in this paragraph the components of our pipeline and their computational cost with respect to baselines.
Distance  CK+ (%)  Time (s) 

Flat distance  93.78 ± 2.92  0.020 
Distance in P(n)  92.92 ± 2.45  0.816 
Closeness  96.87 ± 2.46  0.055 
Temporal alignment  CK+ (%)  MMI (%)  Time (s) 

without DTW  90.94 ± 4.23  66.93 ± 5.79  0.018 
with DTW  96.87 ± 2.46  79.19 ± 4.62  0.055 
Adaptive resampling  MMI (%)  AFEW (%) 

without resampling  74.72 ± 5.34  36.81 
with resampling  79.19 ± 4.62  39.94 
Classifier  CK+ (%)  AFEW (%) 

KNN  88.97 ± 6.14  29.77 
ppfSVM  96.87 ± 2.46  39.94 
We first highlight the superiority of the trajectory representation on S⁺(2, n) over the Grassmannian (refer to Tables 5 and 6). This is due to the contribution of the covariance part, in addition to the conventional affine-shape analysis over the Grassmannian. Secondly, we have compared different distances defined on S⁺(2, n). Specifically, given two matrices G₁ and G₂ in S⁺(2, n): (1) as proposed in [42], we compared them through regularizing their ranks, i.e. making them full-rank and considering them in P(n), the space of n × n positive definite matrices; (2) we used the Euclidean flat distance ‖G₁ − G₂‖_F, where ‖·‖_F denotes the Frobenius norm. The closeness between two elements of S⁺(2, n) defined in section 3.3 is more suitable than the rank-regularized distance and the flat distance. This demonstrates the importance of being faithful to the geometry of the manifold of interest. Another advantage of using the closeness over the rank-regularized distance is the computational time, as it involves n × 2 and 2 × 2 matrices instead of n × n matrices.
Table 7 reports the average accuracy when DTW is or is not used in our pipeline on both the CK+ and MMI datasets. It is clear from these experiments that the temporal alignment of trajectories is a crucial step, as an improvement of around 12% is obtained on MMI and around 6% on CK+. The adaptive resampling tool is also analyzed. When it is involved in the pipeline, an improvement of around 4.5% is achieved on MMI and 3% on AFEW.
In the last table, we compare the results of ppfSVM to a KNN classifier on both the CK+ and AFEW databases. Each test sequence is classified by a majority vote of its K nearest neighbors, using the (dis)similarity measure defined in Eq. (8). The number of nearest neighbors K for each database is chosen by cross-validation. We obtained an average accuracy of 88.97% on CK+ and 29.77% on AFEW, both clearly outperformed by the ppfSVM classifier.
7 Conclusion and Future Work
We have proposed in this paper a geometric approach for effectively modeling and classifying dynamic facial sequences. Based on Gramian matrices derived from the facial landmarks, our representation combines an affine-invariant shape representation with the spatial covariance of the landmarks. We have exploited the geometry of the space S⁺(2, n) to define a closeness between static and dynamic (trajectory) representations. We have then derived computational tools to align, resample, and compare these trajectories, giving rise to a rate-invariant analysis. Finally, facial expressions are learned from these trajectories using a variant of SVM, called ppfSVM, which deals with the nonlinearity of the space of representations. Our experiments on four publicly available datasets showed that the proposed approach gives results competitive with or better than the state of the art. In the future, we will extend this approach to handle more subtle variations of facial expressions. Another direction could be adapting our approach to other applications that involve the analysis of landmark sequences, such as action recognition.
8 Acknowledgements
This work has been partially supported by PIA (ANR-11-EQPX-0023) and the European Funds for Regional Development (FEDER-Presage 41779).
Footnotes
 http://sites.google.com/site/chehrahome
References
 P.-A. Absil, R. Mahony, and R. Sepulchre. Riemannian geometry of Grassmann manifolds with a view on algorithmic computation. Acta Applicandae Mathematica, 80(2):199–220, 2004.
 R. Anirudh, P. K. Turaga, J. Su, and A. Srivastava. Elastic functional coding of Riemannian trajectories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(5):922–936, 2017.
 A. Asthana, S. Zafeiriou, S. Cheng, and M. Pantic. Incremental face alignment in the wild. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 2328, 2014, pages 1859–1866, 2014.
 M. A. Bagheri, Q. Gao, and S. Escalera. Support vector machines with time series distance kernels for action classification. In Applications of Computer Vision (WACV), 2016 IEEE Winter Conference on, pages 1–7. IEEE, 2016.
 E. Begelfor and M. Werman. Affine invariance revisited. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), 1722 June 2006, New York, NY, USA, pages 2087–2094, 2006.
 B. Ben Amor, J. Su, and A. Srivastava. Action recognition using rateinvariant analysis of skeletal shape trajectories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(1):1–13, 2016.
 S. Bonnabel and R. Sepulchre. Riemannian metric and geometric mean for positive semidefinite matrices of fixed rank. SIAM Journal on Matrix Analysis and Applications, 31(3):1055–1070, 2009.
 C. A. Corneanu, M. Oliu, J. F. Cohn, and S. Escalera. Survey on RGB, 3D, thermal, and multimodal approaches for facial expression recognition: history, trends, and affect-related applications. 2016.
 M. Devanne, H. Wannous, S. Berretti, P. Pala, M. Daoudi, and A. D. Bimbo. 3D human action recognition by shape analysis of motion trajectories on Riemannian manifold. IEEE Transactions on Cybernetics, 45(7):1340–1352, 2015.
 M. M. Deza and M. Laurent. Geometry of cuts and metrics, volume 15. Springer, 2009.
 A. Dhall, R. Goecke, J. Joshi, M. Wagner, and T. Gedeon. Emotion recognition in the wild challenge (EmotiW) challenge and workshop summary. In 2013 International Conference on Multimodal Interaction, ICMI ’13, Sydney, NSW, Australia, December 9-13, 2013, pages 371–372, 2013.
 A. Dhall, R. Goecke, S. Lucey, and T. Gedeon. Collecting large, richly annotated facial-expression databases from movies. IEEE MultiMedia, 19(3):34–41, 2012.
 H. Ding, S. K. Zhou, and R. Chellappa. FaceNet2ExpNet: Regularizing a deep face recognition net for expression recognition. CoRR, abs/1609.06591, 2016.
 S. Elaiwat, M. Bennamoun, and F. Boussaïd. A spatio-temporal RBM-based model for facial expression recognition. Pattern Recognition, 49:152–161, 2016.
 M. Faraki, M. T. Harandi, and F. Porikli. Image set classification by symmetric positive semidefinite matrices. In Applications of Computer Vision (WACV), 2016 IEEE Winter Conference on, pages 1–8. IEEE, 2016.
 T. Graepel, R. Herbrich, P. BollmannSdorra, and K. Obermayer. Classification on pairwise proximity data. Advances in neural information processing systems, pages 438–444, 1999.
 S. Gudmundsson, T. P. Runarsson, and S. Sigurdsson. Support vector machines and dynamic time warping for time series. In Neural Networks, 2008. IJCNN 2008 (IEEE World Congress on Computational Intelligence). IEEE International Joint Conference on, pages 2772–2776. IEEE, 2008.
 S. Jain, C. Hu, and J. K. Aggarwal. Facial expression recognition with temporal modeling of shapes. In IEEE International Conference on Computer Vision Workshops, ICCV 2011 Workshops, Barcelona, Spain, November 6-13, 2011, pages 1642–1649, 2011.
 S. Jayasumana, R. I. Hartley, M. Salzmann, H. Li, and M. T. Harandi. Kernel methods on Riemannian manifolds with Gaussian RBF kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(12):2464–2477, 2015.
 H. Jung, S. Lee, J. Yim, S. Park, and J. Kim. Joint fine-tuning in deep neural networks for facial expression recognition. In IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 2983–2991, 2015.
 A. Kläser, M. Marszalek, and C. Schmid. A spatio-temporal descriptor based on 3D-gradients. In Proceedings of the British Machine Vision Conference 2008, Leeds, September 2008, pages 1–10, 2008.
 S. Kobayashi and K. Nomizu. Foundations of Differential Geometry, volume 1. Interscience Publishers, 1963.
 J. Kossaifi, G. Tzimiropoulos, S. Todorovic, and M. Pantic. AFEW-VA database for valence and arousal estimation in-the-wild. Image and Vision Computing, 2017.
 M. Liu, S. Li, S. Shan, R. Wang, and X. Chen. Deeply learning deformable facial action parts model for dynamic expression analysis. In Computer Vision - ACCV 2014 - 12th Asian Conference on Computer Vision, Singapore, Singapore, November 1-5, 2014, Revised Selected Papers, Part IV, pages 143–157, 2014.
 M. Liu, S. Shan, R. Wang, and X. Chen. Learning expressionlets on spatio-temporal manifold for dynamic facial expression recognition. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 23-28, 2014, pages 1749–1756, 2014.
 P. Lucey, J. F. Cohn, T. Kanade, J. M. Saragih, Z. Ambadar, and I. A. Matthews. The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR Workshops 2010, San Francisco, CA, USA, 13-18 June, 2010, pages 94–101, 2010.
 G. Meyer, S. Bonnabel, and R. Sepulchre. Regression on fixed-rank positive semidefinite matrices: a Riemannian approach. Journal of Machine Learning Research, 12(Feb):593–625, 2011.
 X. Pennec, P. Fillard, and N. Ayache. A Riemannian framework for tensor computing. International Journal of Computer Vision, 66(1):41–66, 2006.
 R. W. Ptucha, G. Tsagkatakis, and A. E. Savakis. Manifold based sparse representation for robust expression recognition without neutral subtraction. In IEEE International Conference on Computer Vision Workshops, ICCV 2011 Workshops, Barcelona, Spain, November 6-13, 2011, pages 2136–2143, 2011.
 A. Sanin, C. Sanderson, M. T. Harandi, and B. C. Lovell. Spatio-temporal covariance descriptors for action and gesture recognition. In 2013 IEEE Workshop on Applications of Computer Vision, WACV 2013, Clearwater Beach, FL, USA, January 15-17, 2013, pages 103–110, 2013.
 E. Sariyanidi, H. Gunes, and A. Cavallaro. Learning bases of activity for facial expression recognition. IEEE Transactions on Image Processing, PP(99):1–1, 2017.
 B. Scholkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA, 2001.
 P. Scovanner, S. Ali, and M. Shah. A 3-dimensional SIFT descriptor and its application to action recognition. In Proceedings of the 15th International Conference on Multimedia 2007, Augsburg, Germany, September 24-29, 2007, pages 357–360, 2007.
 J. Shen, S. Zafeiriou, G. G. Chrysos, J. Kossaifi, G. Tzimiropoulos, and M. Pantic. The first facial landmark tracking in-the-wild challenge: Benchmark and results. In 2015 IEEE International Conference on Computer Vision Workshop, ICCV Workshops 2015, Santiago, Chile, December 7-13, 2015, pages 1003–1011, 2015.
 S. Taheri, P. Turaga, and R. Chellappa. Towards view-invariant expression analysis using analytic shape manifolds. In Automatic Face & Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on, pages 306–313. IEEE, 2011.
 Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. DeepFace: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1701–1708, 2014.
 M. F. Valstar and M. Pantic. Induced disgust, happiness and surprise: an addition to the MMI facial expression database. In Proceedings of Int’l Conf. Language Resources and Evaluation, Workshop on EMOTION, pages 65–70, Malta, May 2010.
 B. Vandereycken, P.A. Absil, and S. Vandewalle. Embedded geometry of the set of symmetric positive semidefinite matrices of fixed rank. In Statistical Signal Processing, 2009. SSP’09. IEEE/SP 15th Workshop on, pages 389–392. IEEE, 2009.
 R. Vemulapalli, F. Arrate, and R. Chellappa. Human action recognition by representing 3D skeletons as points in a Lie group. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 588–595, 2014.
 R. Vemulapalli and R. Chellappa. Rolling rotations for recognizing human actions from 3D skeletal data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4471–4479, 2016.
 L. Wang, Y. Qiao, and X. Tang. Motionlets: Mid-level 3D parts for human motion recognition. In 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, June 23-28, 2013, pages 2674–2681, 2013.
 R. Wang, H. Guo, L. S. Davis, and Q. Dai. Covariance discriminative learning: A natural and efficient approach to image set classification. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2496–2503. IEEE, 2012.
 Z. Wang, S. Wang, and Q. Ji. Capturing complex spatio-temporal relations among facial muscles for facial expression recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3422–3429, 2013.
 X. Xiong and F. De la Torre. Supervised descent method and its applications to face alignment. In 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, June 23-28, 2013, pages 532–539, 2013.
 G. Zhao, X. Huang, M. Taini, S. Z. Li, and M. Pietikäinen. Facial expression recognition from near-infrared videos. Image and Vision Computing, 29(9):607–619, 2011.
 G. Zhao and M. Pietikäinen. Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6):915–928, 2007.
 L. Zhong, Q. Liu, P. Yang, B. Liu, J. Huang, and D. N. Metaxas. Learning active facial patches for expression analysis. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, June 16-21, 2012, pages 2562–2569, 2012.