A Riemannian Network for SPD Matrix Learning
Abstract
Symmetric Positive Definite (SPD) matrix learning methods have become popular in many image and video processing tasks, thanks to their ability to learn appropriate statistical representations while respecting the Riemannian geometry of the underlying SPD manifolds. In this paper we build a Riemannian network architecture to open up a new direction of SPD matrix nonlinear learning in a deep model. In particular, we devise bilinear mapping layers to transform input SPD matrices to more desirable SPD matrices, exploit eigenvalue rectification layers to apply a nonlinear activation function to the new SPD matrices, and design an eigenvalue logarithm layer to perform Riemannian computing on the resulting SPD matrices for regular output layers. To train the proposed deep network, we exploit a new backpropagation with a variant of stochastic gradient descent on Stiefel manifolds to update the structured connection weights and the involved SPD matrix data. We show through experiments that the proposed SPD matrix network can be simply trained and outperforms existing SPD matrix learning methods as well as state-of-the-art approaches in three typical visual classification tasks.
Zhiwu Huang and Luc Van Gool Computer Vision Lab, ETH Zurich, Switzerland {zhiwu.huang, vangool}@vision.ee.ethz.ch
Copyright © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Introduction
Symmetric Positive Definite (SPD) matrices are often encountered and have achieved great success in a variety of areas. In medical imaging, they are commonly used in diffusion tensor magnetic resonance imaging (?; ?; ?). In visual recognition, SPD matrix data provide powerful statistical representations for images and videos. Examples include region covariance matrices for pedestrian detection (?; ?; ?), joint covariance descriptors for action recognition (?), image set covariance matrices for face recognition (?; ?; ?) and second-order pooling for object classification (?).
As a consequence, there has been a growing need to carry out effective computations to interpolate, restore, and classify SPD matrices. However, computations on SPD matrices are often accompanied by the challenge of their non-Euclidean data structure, which underlies a Riemannian manifold (?; ?). Applying Euclidean geometry to SPD matrices directly often results in undesirable effects, such as the swelling of diffusion tensors (?). To address this problem, (?; ?; ?) introduced Riemannian metrics, e.g., the Log-Euclidean metric (?), to properly encode the Riemannian geometry of SPD manifolds.
By employing these well-studied Riemannian metrics, existing SPD matrix learning approaches typically either flatten SPD manifolds via tangent space approximation (?; ?; ?; ?), or map them into reproducing kernel Hilbert spaces (?; ?; ?; ?; ?; ?). To more faithfully respect the original Riemannian geometry, recent methods (?; ?) adopt a geometry-aware SPD matrix learning scheme to pursue a mapping from the original SPD manifold to another one with the same SPD structure. However, all the existing methods merely apply shallow learning, which in many areas of artificial intelligence and visual recognition is typically surpassed by recent popular deep learning methods.
In light of the successful paradigm of deep neural networks (e.g., (?; ?)) that perform nonlinear computations with effective backpropagation training algorithms, we devise a deep neural network architecture that receives SPD matrices as inputs and preserves the SPD structure across layers, for nonlinear SPD matrix learning. In other words, we aim to design a deep learning architecture to nonlinearly learn desirable SPD matrices on Riemannian manifolds. In summary, this paper makes three main contributions:

- A novel Riemannian network architecture is introduced to open up a new direction of SPD matrix deep nonlinear learning on Riemannian manifolds.
- This work offers a paradigm of incorporating Riemannian structures into deep network architectures for compressing both the data space and the weight space.
- A new backpropagation is derived to train the proposed network by exploiting a stochastic gradient descent optimization algorithm on Stiefel manifolds.
Related Work
Deep neural networks have exhibited great power when the processed data possess a Euclidean structure. In many contexts, however, one may be faced with data defined on non-Euclidean domains. To tackle graph-structured data, (?) presented a spectral formulation of convolutional networks by exploiting a notion of non-shift-invariant convolution that depends on the analogy between the classical Fourier transform and the Laplace-Beltrami eigenbasis. Following (?), a localized spectral network was proposed in (?) for non-Euclidean domains by generalizing the windowed Fourier transform to manifolds to extract the local behavior of some dense intrinsic descriptor. Similarly, (?) proposed a 'geodesic convolution' over a local geodesic system of coordinates to extract 'local patches' on non-Euclidean shape manifolds. The convolutions in this approach were performed by sliding a window over the shape manifolds.
Stochastic gradient descent (SGD) has been the workhorse for optimizing deep neural networks. As an application of the chain rule, backpropagation is commonly employed to compute Euclidean gradients of objective functions, which is the key operation of SGD. Recently, two works (?; ?) extended backpropagation to operate directly on matrices. For example, (?) formulated matrix backpropagation as a generalized chain rule mechanism for computing derivatives of composed matrix functions with respect to matrix inputs. Besides, another family of network optimization algorithms exploits Riemannian gradients to handle weight space symmetries in neural networks. For instance, recent works (?; ?; ?; ?) developed several optimization algorithms by building Riemannian metrics on the activity and parameter spaces of neural networks, treated as Riemannian manifolds.
Riemannian SPD Matrix Network
Analogously to the well-known convolutional network (ConvNet), the proposed SPD matrix network (SPDNet) also designs fully connected convolution-like layers and rectified linear unit (ReLU)-like layers, named bilinear mapping (BiMap) layers and eigenvalue rectification (ReEig) layers respectively. In particular, following the classical manifold learning theory that learning or even preserving the original data structure can benefit classification, the BiMap layers are designed to transform the input SPD matrices, which are usually covariance matrices derived from the data, to new SPD matrices with a bilinear mapping. Like the classical ReLU layers, the proposed ReEig layers introduce a nonlinearity to the SPDNet by rectifying the resulting SPD matrices with a nonlinear function. Since SPD matrices reside on non-Euclidean manifolds, we have to devise an eigenvalue logarithm (LogEig) layer to carry out Riemannian computing on them to output their Euclidean forms for any regular output layers. The proposed Riemannian network is conceptually illustrated in Fig. 1.
BiMap Layer
The primary function of the SPDNet is to generate more compact and discriminative SPD matrices. To this end, we design the BiMap layer to transform the input SPD matrices to new SPD matrices by a bilinear mapping as
X_k = f_b^{(k)}(X_{k-1}; W_k) = W_k X_{k-1} W_k^T,   (1)
where X_{k-1} ∈ Sym^+_{d_{k-1}} is the input SPD matrix of the k-th layer, W_k ∈ R^{d_k × d_{k-1}} (d_k < d_{k-1}) is the transformation matrix (connection weights), and X_k ∈ R^{d_k × d_k} is the resulting matrix. Note that multiple bilinear mappings can also be performed on each input. To ensure that the output X_k is a valid SPD matrix, the transformation matrix W_k is basically required to be of full row rank. By applying the BiMap layer, the inputs on the original SPD manifold Sym^+_{d_{k-1}} are transformed to new ones forming another SPD manifold Sym^+_{d_k}. In other words, the data space of each BiMap layer corresponds to one SPD manifold.
Since the weight space of full-rank matrices is a non-compact Stiefel manifold on which the distance function has no upper bound, directly optimizing on this manifold is infeasible. To handle this problem, one typical solution is to additionally assume the transformation matrix W_k to be orthogonal (more exactly, semi-orthogonal here, i.e., W_k W_k^T = I) so that it resides on a compact Stiefel manifold St(d_k, d_{k-1}), the set of d_k × d_{k-1} matrices with orthonormal rows. As a result, optimizing over the compact Stiefel manifolds can achieve optimal solutions of the transformation matrices.
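For concreteness, the BiMap mapping of Eqn. 1 and its SPD-preserving property can be sketched in a few lines of NumPy (a minimal illustration under the semi-orthogonality assumption above, not the authors' implementation; the dimensions are arbitrary):

```python
import numpy as np

def random_semi_orthogonal(d_out, d_in, rng):
    # Rows are orthonormal: W @ W.T = I (a point on the compact Stiefel manifold).
    A = rng.standard_normal((d_in, d_out))
    Q, _ = np.linalg.qr(A)           # d_in x d_out with orthonormal columns
    return Q.T                       # d_out x d_in with orthonormal rows

def bimap(X, W):
    # Bilinear mapping of Eqn. 1: X_k = W X_{k-1} W^T.
    # For full row-rank W and SPD X, the output is again SPD.
    return W @ X @ W.T

rng = np.random.default_rng(0)
C = rng.standard_normal((100, 100))
X = C @ C.T + 1e-3 * np.eye(100)     # a 100x100 SPD input matrix
W = random_semi_orthogonal(50, 100, rng)
Y = bimap(X, W)                      # a smaller 50x50 matrix, still SPD
print(Y.shape, np.all(np.linalg.eigvalsh(Y) > 0))   # (50, 50) True
```

The semi-orthogonality of W guarantees that the map neither collapses directions nor loses positive definiteness while reducing the matrix size.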
ReEig Layer
In the context of ConvNets, (?; ?) presented various rectified linear units (ReLU), including the max(0, x) nonlinearity, to improve discriminative performance. Hence, exploiting ReLU-like layers to introduce a nonlinearity into the SPDNet is also necessary. Inspired by the idea of the max(0, x) nonlinearity, we devise a nonlinear function f_r^{(k)} for the k-th layer (ReEig layer) to rectify the SPD matrices by tuning up their small positive eigenvalues:
X_k = f_r^{(k)}(X_{k-1}) = U_{k-1} max(εI, Σ_{k-1}) U_{k-1}^T,   (2)
where U_{k-1} and Σ_{k-1} are achieved by the eigenvalue decomposition (EIG) X_{k-1} = U_{k-1} Σ_{k-1} U_{k-1}^T, ε is a rectification threshold, I is an identity matrix, and A = max(εI, Σ_{k-1}) is a diagonal matrix with diagonal elements defined as
A(i, i) = Σ_{k-1}(i, i) if Σ_{k-1}(i, i) > ε, and A(i, i) = ε otherwise.   (3)
Intuitively, Eqn.2 prevents the input SPD matrices from getting close to non-positive-definite ones (while the ReLU yields sparsity). Nevertheless, it is not originally designed for regularization, because the inputs are already non-singular after applying the BiMap layers. In other words, we always set ε above the top-n smallest eigenvalues, even when the eigenvalues of the original SPD matrices are all much greater than zero.
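For illustration, the rectification of Eqns. 2-3 amounts to clamping the eigenvalues from below (a sketch; the threshold value 1e-4 is an arbitrary choice here, not the paper's setting):

```python
import numpy as np

def reeig(X, eps=1e-4):
    # Eqns. 2-3: eigen-decompose, then tune small eigenvalues up to eps.
    sigma, U = np.linalg.eigh(X)                # X = U diag(sigma) U^T
    return (U * np.maximum(sigma, eps)) @ U.T

# A nearly singular SPD matrix: its smallest eigenvalue is raised to eps,
# while the eigenvalues already above eps pass through unchanged.
X = np.diag([1.0, 1e-7, 2.0])
Y = reeig(X)
print(np.linalg.eigvalsh(Y).min())
```

As with ReLU, the map is elementwise on the spectrum and leaves well-conditioned inputs essentially untouched.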
Besides, there also exist other feasible strategies to introduce a nonlinearity on the input SPD matrices. For example, the sigmoidal function (?) could be considered to extend Eqn.2 to a different activation function. Due to space limitations, we do not discuss this any further.
LogEig Layer
The LogEig layer is designed to perform Riemannian computing on the resulting SPD matrices so that they can be fed into output layers with objective functions. As studied in (?), the Log-Euclidean Riemannian metric is able to endow the Riemannian manifold of SPD matrices with a Lie group structure, so that the manifold is reduced to a flat space by the matrix logarithm operation on the SPD matrices. In the flat space, classical Euclidean computations can be applied to the domain of SPD matrix logarithms. Formally, we employ the Riemannian computation (?) in the k-th layer (LogEig layer) to define the involved function f_l^{(k)} as
X_k = f_l^{(k)}(X_{k-1}) = log(X_{k-1}) = U_{k-1} log(Σ_{k-1}) U_{k-1}^T,   (4)
where X_{k-1} = U_{k-1} Σ_{k-1} U_{k-1}^T is an EIG operation, and log(Σ_{k-1}) is the diagonal matrix of eigenvalue logarithms.
For SPD manifolds, the Log-Euclidean Riemannian computation is particularly simple to use and avoids the high expense of other Riemannian computations (?; ?), while preserving favorable theoretical properties. As for other Riemannian computations on SPD manifolds, please refer to (?; ?) for more studies on their properties.
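As a sketch, the LogEig map of Eqn. 4 is again an elementwise operation on the spectrum:

```python
import numpy as np

def logeig(X):
    # Eqn. 4: log(X) = U diag(log(sigma)) U^T via the EIG operation.
    sigma, U = np.linalg.eigh(X)
    return (U * np.log(sigma)) @ U.T

# Under the Log-Euclidean metric, distances between SPD matrices become plain
# Frobenius distances between their matrix logarithms.
A = np.diag([1.0, np.e])
L = logeig(A)
print(np.round(L, 6))    # diag(0, 1)
```

After this layer the output lives in a flat (Euclidean) space, which is why regular FC and softmax layers can follow.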
Other Layers
After applying the LogEig layer, the vector forms of the outputs can be fed into classical Euclidean network layers. For example, a Euclidean fully connected (FC) layer could be inserted after the LogEig layer. The dimensionality of the filters in the FC layer is set to c × d, where c and d are the number of classes and the dimensionality of the vector forms of the input matrices, respectively. The final output layer for visual recognition tasks could be a softmax layer as used in the context of Euclidean networks.
In addition, pooling layers and normalization layers are also important for improving regular Euclidean ConvNets. For the SPDNet, pooling of SPD matrices can first be carried out on their matrix logarithms, with the results then transformed back to SPD matrices by the matrix exponential map in the Riemannian framework (?; ?). Similarly, a normalization procedure on SPD matrices could first calculate the mean and variance of their matrix logarithms, and then normalize them with this mean and variance as done in (?).
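For instance, pooling a set of SPD matrices in the log-domain could look as follows (a sketch of the idea, assuming a simple Log-Euclidean mean rather than any specific pooling operator from the literature):

```python
import numpy as np

def logm(X):
    s, U = np.linalg.eigh(X)
    return (U * np.log(s)) @ U.T

def expm(X):
    s, U = np.linalg.eigh(X)
    return (U * np.exp(s)) @ U.T

def log_euclidean_mean(mats):
    # Average the matrix logarithms, then map back with the matrix exponential;
    # the result is again SPD (the Log-Euclidean mean of the inputs).
    return expm(np.mean([logm(X) for X in mats], axis=0))

M = log_euclidean_mean([np.diag([1.0, 4.0]), np.diag([4.0, 1.0])])
print(np.round(M, 6))    # diag(2, 2)
```

Averaging in the log-domain avoids the swelling effect that a plain arithmetic mean of SPD matrices would exhibit.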
Riemannian Matrix Backpropagation
The model of the proposed SPDNet can be written as a series of successive function compositions f = f^{(L)} ∘ f^{(L-1)} ∘ … ∘ f^{(1)} with a parameter tuple (W_L, W_{L-1}, …, W_1), where f^{(k)} is the function of the k-th layer, W_k is the weight parameter of the k-th layer, and L is the number of layers. The loss of the k-th layer can be denoted by a function L^{(k)} = ℓ ∘ f^{(L)} ∘ … ∘ f^{(k)}, where ℓ is the loss function for the final output layer.
Training deep networks often uses stochastic gradient descent (SGD) algorithms. The key operation of one classical SGD algorithm is to compute the gradient of the objective function, which is obtained by an application of the chain rule known as backpropagation (backprop). For the th layer, the gradients of the weight and the data can be respectively computed by backprop as
∂L^{(k)}/∂W_k = (∂L^{(k+1)}/∂X_k) (∂f^{(k)}/∂W_k),   (5)
∂L^{(k)}/∂X_{k-1} = (∂L^{(k+1)}/∂X_k) (∂f^{(k)}/∂X_{k-1}),   (6)
where X_k is the output of the k-th layer and L^{(k+1)} = ℓ ∘ f^{(L)} ∘ … ∘ f^{(k+1)}. Eqn.5 gives the gradient for updating the weight W_k, while Eqn.6 computes the gradients of the involved data in the layers below. For simplicity, we drop the superscript of the loss function and write ∂L/∂X_k for ∂L^{(k+1)}/∂X_k in the sequel.
There exist two key issues when generalizing backprop to the proposed Riemannian network for SPD matrices. The first is updating the weights in the BiMap layers: as we force the weights to be on Stiefel manifolds, merely using Eqn.5 to compute their Euclidean gradients rather than Riemannian gradients in the backprop procedure cannot yield valid orthogonal weights. While the gradients of the SPD matrices in the BiMap layers can be calculated by Eqn.6 as usual, computing the gradients that involve the EIG decomposition in the ReEig and LogEig layers is not well solved by traditional backprop. This is the second key issue for training the proposed network.
To tackle the first issue, we propose a new way of updating the weights defined in Eqn.1 for the BiMap layers by exploiting an SGD setting on Stiefel manifolds. The steepest descent direction of the corresponding loss function L with respect to W_k on the Stiefel manifold is the Riemannian gradient \tilde{∇}L_{W_k}. To obtain it, the normal component of the Euclidean gradient ∇L_{W_k} is subtracted, leaving the component tangential to the Stiefel manifold. Searching along this tangential direction takes the update into the tangent space of the Stiefel manifold. Then, such an update is mapped back to the Stiefel manifold with a retraction operation. For more details about the Stiefel geometry and retraction, readers are referred to (?) and (?) (Pages 45-48, 59). Formally, an update of the current weight W_k^t on the Stiefel manifold takes the form
\tilde{∇}L_{W_k^t} = ∇L_{W_k^t} − ∇L_{W_k^t} (W_k^t)^T W_k^t,   (7)
W_k^{t+1} = Γ(W_k^t − λ \tilde{∇}L_{W_k^t}),   (8)
where Γ is the retraction operation, λ is the learning rate, and ∇L_{W_k^t} (W_k^t)^T W_k^t is the normal component of the Euclidean gradient ∇L_{W_k^t}, which can be computed by using Eqn.5 as
∇L_{W_k^t} = 2 (∂L/∂X_k) W_k^t X_{k-1}.   (9)
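A single update of Eqns. 7-8 can be sketched as follows (a minimal illustration: a QR-based retraction is assumed here, and the gradient G is a random stand-in for the Euclidean gradient of Eqn. 9):

```python
import numpy as np

def stiefel_sgd_step(W, egrad, lr):
    # Eqn. 7: subtract the normal component to get the tangential part.
    rgrad = egrad - egrad @ W.T @ W
    # Eqn. 8: take a step, then retract back onto the Stiefel manifold.
    Q, R = np.linalg.qr((W - lr * rgrad).T)
    return (Q * np.sign(np.diag(R))).T   # sign fix makes the QR factors unique

rng = np.random.default_rng(0)
Q0, _ = np.linalg.qr(rng.standard_normal((10, 4)))
W = Q0.T                                 # 4x10 weight with orthonormal rows
G = rng.standard_normal(W.shape)         # stand-in Euclidean gradient
W_new = stiefel_sgd_step(W, G, lr=1e-2)
print(np.allclose(W_new @ W_new.T, np.eye(4)))   # stays on the manifold
```

The key point is that the plain Euclidean step would leave the manifold, while the projection plus retraction keeps the updated weight exactly semi-orthogonal.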
As for the second issue, we exploit the matrix generalization of backprop studied in (?) to compute the gradients of the involved SPD matrices in the ReEig and LogEig layers. In particular, let F = f^{(k)} be a function describing the variations of the upper layer variables with respect to the lower layer variables, i.e., dX_k = F(dX_{k-1}). With the function F, a new version of the chain rule Eqn.6 for matrix backprop is defined as
∂L/∂X_{k-1} = F^*(∂L/∂X_k),   (10)
where F^* is a non-linear adjoint operator of F, i.e., A : F(B) = F^*(A) : B, and the operator ':' is the matrix inner product with the property A : B = Tr(A^T B).
Actually, both of the functions Eqn.2 and Eqn.4 for the ReEig and LogEig layers involve the EIG operation (note that, to increase readability, we drop the layer indexes of U_{k-1} and Σ_{k-1}, writing U and Σ, in the sequel). Hence, we introduce a virtual layer (EIG layer) for the EIG operation. Applying the new chain rule Eqn.10 and its properties, the update rule for the data X_{k-1} is derived as
∂L/∂X_{k-1} : dX_{k-1} = ∂L/∂U : dU + ∂L/∂Σ : dΣ,   (11)
where the two variations dU and dΣ are derived from the variation of the EIG operation X_{k-1} = U Σ U^T as:
dU = U (P^T ∘ (U^T dX_{k-1} U)),   (12)
dΣ = (U^T dX_{k-1} U)_{diag},   (13)
where ∘ is the Hadamard product, A_{sym} = (A + A^T)/2, and A_{diag} is A with all off-diagonal elements set to 0 (note that we also use these two notations in the following), and P is calculated by operating on the eigenvalues σ_1, …, σ_d in Σ:
P(i, j) = 1/(σ_i − σ_j) if i ≠ j, and P(i, j) = 0 if i = j.   (14)
For more details on the derivation of Eqn.12 and Eqn.13, please refer to (?). Plugging Eqn.12 and Eqn.13 into Eqn.11 and using the properties of the matrix inner product, we can derive the partial derivative of the loss function with respect to the data for the ReEig and LogEig layers:
∂L/∂X_{k-1} = U ( (P^T ∘ (U^T ∂L/∂U))_{sym} + (∂L/∂Σ)_{diag} ) U^T,   (15)
where ∂L/∂U and ∂L/∂Σ can be obtained with the same derivation strategy as used in Eqn.11. For the function Eqn.2 employed in the ReEig layers, its variation becomes dX_k = dU max(εI, Σ) U^T + U (Q dΣ) U^T + U max(εI, Σ) dU^T, and the two partial derivatives can be computed as
∂L/∂U = 2 (∂L/∂X_k)_{sym} U max(εI, Σ),   (16)
∂L/∂Σ = Q (U^T (∂L/∂X_k) U)_{diag},   (17)
where max(εI, Σ) is defined in Eqn.3, and Q is the gradient of max(εI, Σ) with respect to Σ, a diagonal matrix with diagonal elements defined as
Q(i, i) = 1 if Σ(i, i) > ε, and Q(i, i) = 0 if Σ(i, i) ≤ ε.   (18)
For the function Eqn.4 used in the LogEig layers, its variation is dX_k = dU log(Σ) U^T + U (Σ^{-1} dΣ) U^T + U log(Σ) dU^T. Then we calculate the following two partial derivatives:
∂L/∂U = 2 (∂L/∂X_k)_{sym} U log(Σ),   (19)
∂L/∂Σ = Σ^{-1} (U^T (∂L/∂X_k) U)_{diag}.   (20)
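The backprop formulas above can be sanity-checked numerically. The sketch below (an illustration, not the authors' code) implements Eqns. 15, 19 and 20 for a LogEig layer with the toy loss L = Tr(G^T log(X)) and compares the analytic gradient against symmetric finite differences:

```python
import numpy as np

def logeig_grad(X, G):
    # Gradient of L w.r.t. X through log(X), via Eqns. 19-20 and then Eqn. 15.
    G = 0.5 * (G + G.T)                      # symmetric part of incoming gradient
    s, U = np.linalg.eigh(X)
    dL_dU = 2.0 * (G @ U) * np.log(s)        # Eqn. 19: 2 (dL/dX_k)_sym U log(Sigma)
    dL_dS = np.diag(U.T @ G @ U) / s         # Eqn. 20, diagonal part
    D = s[None, :] - s[:, None]              # sigma_j - sigma_i
    np.fill_diagonal(D, 1.0)                 # avoid 0/0; P^T has zero diagonal
    K = 1.0 / D
    np.fill_diagonal(K, 0.0)                 # K equals P^T of Eqn. 14
    A = K * (U.T @ dL_dU)
    inner = 0.5 * (A + A.T) + np.diag(dL_dS) # Eqn. 15: (.)_sym + (.)_diag
    return U @ inner @ U.T

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
X = B @ B.T + 4.0 * np.eye(4)                # a well-conditioned SPD matrix
G = rng.standard_normal((4, 4)); G = 0.5 * (G + G.T)

def loss(X):
    s, U = np.linalg.eigh(X)
    return float(np.sum(G * ((U * np.log(s)) @ U.T)))

# Central differences over symmetric perturbations E_ij = E_ji = h.
num = np.zeros((4, 4)); h = 1e-5
for i in range(4):
    for j in range(i, 4):
        E = np.zeros((4, 4)); E[i, j] += h; E[j, i] += h
        num[i, j] = num[j, i] = (loss(X + E) - loss(X - E)) / (4 * h)
print(np.allclose(num, logeig_grad(X, G), atol=1e-6))
```

The agreement with finite differences confirms that the symmetrized off-diagonal term in Eqn. 15 reproduces the familiar divided-difference rule for derivatives of spectral functions.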
Discussion
Although the two works (?; ?) have studied geometry-aware maps that preserve the SPD structure, our SPDNet exploits a more general (i.e., Stiefel manifold) setting for the map, and sets it up in the context of deep learning. Furthermore, we also introduce a nonlinearity while learning SPD matrices in the network. Thus, the main theoretical advantage of the proposed SPDNet over these two works is its ability to perform both deep and nonlinear learning.
While (?) introduced a covariance pooling layer into ConvNets that start from images, our SPDNet works on SPD matrices directly and exploits multiple layers tailored for SPD matrix deep learning. From another point of view, the proposed SPDNet can be built on top of the network of (?) for deeper SPD matrix learning that starts from images. Moreover, our SPDNet remains useful when the processed data are not covariance matrices derived from images, a setting in which (?) breaks down entirely.
Experiments
We evaluate the proposed SPDNet on three popular visual classification tasks: emotion recognition, action recognition and face verification, where SPD matrix representations have achieved great success. The competing state-of-the-art SPD matrix learning methods are Covariance Discriminative Learning (CDL) (?), Log-Euclidean Metric Learning (LEML) (?) and SPD Manifold Learning (SPDML) (?), the latter using either the affine-invariant metric (AIM) (?) or the Stein divergence (?). The Riemannian Sparse Representation (RSR) (?) for SPD matrices is also evaluated. Besides, we measure the deep second-order pooling (DeepO2P) network (?), which introduces a covariance pooling layer into typical ConvNets. For all of them, we use the source code provided by the authors and tune the parameters according to the original works. For our SPDNet, we study 4 configurations, i.e., SPDNet-0BiRe/1BiRe/2BiRe/3BiRe, where iBiRe means using i blocks of BiMap/ReEig. For example, SPDNet-3BiRe stacks three BiMap/ReEig blocks followed by the LogEig, FC and softmax log-loss layers. The learning rate and the rectification threshold ε are fixed throughout, the batch size is set to 30, and the weights are initialized as random semi-orthogonal matrices. For training the SPDNet, we just use an i7-2600K (3.40GHz) PC without any GPUs.
Emotion Recognition
We use the popular Acted Facial Expression in Wild (AFEW) (?) dataset for emotion recognition. The AFEW database collects 1,345 videos of facial expressions of actors in movies with close-to-real-world scenarios.
The database is divided into training, validation and test sets, where each video is labeled with one of seven expressions. Since the ground truth of the test set has not been released, we follow (?) and report results on the validation set. To augment the training data, we segment the training videos into 1,747 small clips. For the evaluation, each facial frame is normalized to a fixed-size image. Then, following (?), we compute a covariance matrix to represent each video.
On the AFEW database, the dimensionalities of the SPDNet-3BiRe transformation matrices are set so that the input covariance matrices shrink stage by stage. Training the SPDNet-3BiRe takes around 2 minutes (m) per epoch (500 epochs in total) on this dataset. As shown in Table 1, we report the performances of the competing methods, including the state-of-the-art method STM-ExpLet (?), on this database. Our proposed SPDNet-3BiRe achieves a clear improvement over the state-of-the-art methods although the training data is small.
In addition, we study the behavior of the proposed SPDNet under different settings. First, we evaluate our SPDNet without the LogEig layer. Its extremely low accuracy of 21.49% shows that the layer for Riemannian computing is necessary. Second, we study the case of learning directly on the Log-Euclidean forms of the original SPD matrices, i.e., SPDNet-0BiRe. The accuracy of SPDNet-0BiRe is 26.32%, which is clearly outperformed by the deeper SPDNets (e.g., SPDNet-3BiRe). This justifies the importance of the SPD layers. Besides, the SPDNets also clearly outperform DeepO2P, which inserts one LogEig-like layer into a standard ConvNet architecture; this validates that the improvements come from the BiMap and ReEig layers rather than merely from deeper architectures. Third, to study the gains of using multiple BiRe blocks, we compare SPDNet-1BiRe and SPDNet-2BiRe, which feed the LogEig layer with SPD matrices of the same size as those in SPDNet-3BiRe. As reported in Table 1, the performance of our SPDNet improves when stacking more BiRes. Fourth, we study the power of the designed ReEig layers in Fig. 2(a): the accuracies of the three different rectification threshold settings verify the benefit of introducing the nonlinearity. Lastly, we show the convergence behavior of our SPDNet in Fig. 2(b), which suggests that it converges well after a few hundred epochs.
Action Recognition
We handle the problem of skeleton-based human action recognition using the HDM05 database (?), which is one of the largest-scale datasets for this problem.
On the HDM05 dataset, we conduct 10 random evaluations, in each of which half of the sequences are randomly selected for training and the rest are used for testing. Note that the work of (?) only recognizes 14 motion classes, while our protocol is to identify 130 action classes and is thus more challenging. For data augmentation, the training sequences are divided into around 18,000 small sequences in each random evaluation. As done in (?), we represent each sequence by a joint covariance descriptor, computed from the second-order statistics of the 3D coordinates of the 31 joints in each frame.
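Such a joint covariance descriptor can be sketched as follows (a toy illustration with random data; the stacking of the 31 joints' 3D coordinates into 93-dim frame vectors is an assumption on the exact preprocessing):

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.standard_normal((120, 31 * 3))      # 120 frames, 93-dim joint vectors
mean = frames.mean(axis=0)
centered = frames - mean
cov = centered.T @ centered / (len(frames) - 1)  # 93x93 joint covariance descriptor
cov += 1e-6 * np.trace(cov) * np.eye(93) / 93    # small regularizer keeps it SPD
print(cov.shape, np.linalg.eigvalsh(cov).min() > 0)
```

Each motion sequence, whatever its length, is thus summarized by one fixed-size SPD matrix that the network consumes directly.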
For our SPDNet-3BiRe, the sizes of the transformation matrices are again set to shrink the inputs stage by stage, and its training time for each of the 500 epochs is about 4m on average. Table 1 summarizes the results of the comparative algorithms and of the state-of-the-art method RSR-SPDML (?) on this dataset. As DeepO2P (?) is designed only for image-based visual classification tasks, we do not evaluate it on the 3D skeleton based action recognition task. We find that our SPDNet-3BiRe outperforms the state-of-the-art shallow SPD matrix learning methods by a large margin (more than 13%). This shows that the proposed nonlinear deep learning scheme on SPD matrices leads to great improvements when the training data is large enough.
The studies without LogEig layers and with different configurations of BiRe blocks are executed in the same way as in the last evaluation. The accuracy without LogEig layers is 4.89%, again validating the importance of the Riemannian computing layers. Besides, as seen from Table 1, the same conclusions as before for the different settings of BiRe blocks can be drawn on this database.
Method        | AFEW    | HDM05          | PaSC1   | PaSC2
STM-ExpLet    | 31.73%  | –              | –       | –
RSR-SPDML     | 30.12%  | 48.01% ± 3.38  | –       | –
DeepO2P       | 28.54%  | –              | 68.76%  | 60.14%
HERML-DeLF    | –       | –              | 58.0%   | 59.0%
VGGDeepFace   | –       | –              | 78.82%  | 68.24%
CDL           | 31.81%  | 41.74% ± 1.92  | 78.29%  | 70.41%
LEML          | 25.13%  | 46.87% ± 2.19  | 66.53%  | 58.34%
SPDML-AIM     | 26.72%  | 47.25% ± 2.78  | 65.47%  | 59.03%
SPDML-Stein   | 24.55%  | 46.21% ± 2.65  | 61.63%  | 56.67%
RSR           | 27.49%  | 41.12% ± 2.53  | –       | –
SPDNet-0BiRe  | 26.32%  | 48.12% ± 3.15  | 68.52%  | 63.92%
SPDNet-1BiRe  | 29.12%  | 55.26% ± 2.37  | 71.75%  | 65.81%
SPDNet-2BiRe  | 31.54%  | 59.13% ± 1.78  | 76.23%  | 69.64%
SPDNet-3BiRe  | 34.23%  | 61.45% ± 1.12  | 80.12%  | 72.83%
Face Verification
For face verification, we employ the Point-and-Shoot Challenge (PaSC) database (?), which is very challenging and widely used for verifying faces in videos. It includes 1,401 videos taken by control cameras and 1,401 videos captured by handheld cameras, for 265 people. In addition, it contains 280 videos for training.
On the PaSC database, there are control and handheld face verification tasks, both of which verify a claimed identity in a query video by comparing it with the associated target video. As done in (?), we also use the COX training data (?) with 900 videos. Similar to the last two experiments, the whole training set is augmented to 12,529 small video clips. For evaluation, we use the approach of (?) to extract state-of-the-art deep face features on the normalized face images. To speed up training, we employ PCA to finally obtain 400-dimensional features. As done in (?), we compute an SPD matrix for each video that fuses its data covariance matrix and mean.
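One common construction for such a fusion (an assumption here, not necessarily the exact formula of the cited work) embeds the mean μ and covariance C of the d-dimensional features into a (d+1)×(d+1) matrix whose Schur complement is C, which keeps the result SPD:

```python
import numpy as np

def fuse_gaussian(mu, cov):
    # Block matrix [[C + mu mu^T, mu], [mu^T, 1]]: SPD whenever cov is SPD,
    # since its Schur complement w.r.t. the lower-right entry is cov itself.
    d = mu.shape[0]
    M = np.empty((d + 1, d + 1))
    M[:d, :d] = cov + np.outer(mu, mu)
    M[:d, d] = M[d, :d] = mu
    M[d, d] = 1.0
    return M

rng = np.random.default_rng(0)
feats = rng.standard_normal((50, 8))          # toy per-frame feature vectors
mu = feats.mean(axis=0)
cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(8)
M = fuse_gaussian(mu, cov)                    # 9x9 SPD video representation
print(M.shape, np.linalg.eigvalsh(M).min() > 0)
```

This keeps first- and second-order statistics of a video in a single SPD input for the network.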
For the evaluation, we configure the sizes of the SPDNet-3BiRe weights to shrink the inputs stage by stage. The time for training the SPDNet-3BiRe at each of the 100 epochs is around 15m. Table 1 compares the accuracies of the different methods, including the state-of-the-art methods HERML-DeLF (?) and VGGDeepFace (?), on the PaSC database. Since the RSR method is designed for recognition rather than verification tasks, we do not report its results. Although the softmax output layer used in our SPDNet is not favorable for verification tasks, we find that it still achieves the highest performance.
Finally, as observed from the results on the PaSC dataset, we reach the same conclusions as before for the studies on the different configurations of the proposed SPDNet.
Conclusion
We proposed a novel deep Riemannian network architecture that opens up a new possibility for nonlinear SPD matrix learning. To train the SPD network, we exploited a new backpropagation with an SGD setting on Stiefel manifolds. The evaluations on three visual classification tasks demonstrated the effectiveness of the proposed network for SPD matrix learning. As future work, we plan to explore more layers, e.g., parallel BiMap layers, pooling layers and normalization layers, to improve the Riemannian network. To further deepen the architecture, we would build the SPD network on top of existing convolutional networks such as (?) that start from images. In addition, other interesting directions would be to extend this work to general Riemannian manifolds, or to compress traditional network architectures into more compact ones with the proposed SPD or orthogonality constraints.
Acknowledgments
The authors gratefully acknowledge support by EU Framework Seven project ReMeDi (grant 610902).
References
 [Absil, Mahony, and Sepulchre 2008] Absil, P.A.; Mahony, R.; and Sepulchre, R. 2008. Optimization algorithms on matrix manifolds. Princeton University Press.
 [Arsigny et al. 2007] Arsigny, V.; Fillard, P.; Pennec, X.; and Ayache, N. 2007. Geometric means in a novel vector space structure on symmetric positivedefinite matrices. SIAM journal on Matrix Analysis and Applications 29(1):328–347.
 [Beveridge et al. 2013] Beveridge, J.; Phillips, J.; Bolme, D.; Draper, B.; Givens, G.; Lui, Y.; Teli, M.; Zhang, H.; Scruggs, W.; Bowyer, K.; Flynn, P.; and Cheng, S. 2013. The challenge of face recognition from digital point-and-shoot cameras. In BTAS.
 [Beveridge, Zhang, and others 2015] Beveridge, J.; Zhang, H.; et al. 2015. Report on the FG 2015 video person recognition evaluation. In FG.
 [Bonnabel 2013] Bonnabel, S. 2013. Stochastic gradient descent on Riemannian manifolds. IEEE TAC 58(9):2217–2229.
 [Boscaini et al. 2015] Boscaini, D.; Masci, J.; Melzi, S.; Bronstein, M.; Castellani, U.; and Vandergheynst, P. 2015. Learning class-specific descriptors for deformable shapes using localized spectral convolutional networks. In Computer Graphics Forum, volume 34, 13–23.
 [Bottou 2010] Bottou, L. 2010. Largescale machine learning with stochastic gradient descent. In COMPSTAT.
 [Bruna et al. 2014] Bruna, J.; Zaremba, W.; Szlam, A.; and LeCun, Y. 2014. Spectral networks and locally connected networks on graphs. In ICLR.
 [Carreira et al. 2012] Carreira, J.; Caseiro, R.; Batista, J.; and Sminchisescu, C. 2012. Semantic segmentation with second-order pooling. In ECCV.
 [Cybenko 1989] Cybenko, G. 1989. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems 2(4):303–314.
 [Dhall et al. 2014] Dhall, A.; Goecke, R.; Joshi, J.; Sikka, K.; and Gedeon, T. 2014. Emotion recognition in the wild challenge 2014: Baseline, data and protocol. In ICMI.
 [Edelman, Arias, and Smith 1998] Edelman, A.; Arias, T. A.; and Smith, S. T. 1998. The geometry of algorithms with orthogonality constraints. SIAM journal on Matrix Analysis and Applications 20(2):303–353.
 [Faraki, Harandi, and Porikli 2015] Faraki, M.; Harandi, M. T.; and Porikli, F. 2015. Approximate infinite-dimensional region covariance descriptors for image classification. In ICASSP.
 [Fathy, Alavi, and Chellappa 2016] Fathy, M. E.; Alavi, A.; and Chellappa, R. 2016. Discriminative Log-Euclidean feature learning for sparse representation-based recognition of faces from videos. In IJCAI.
 [Gao, Guo, and Wang 2016] Gao, J.; Guo, Y.; and Wang, Z. 2016. Matrix neural networks. arXiv preprint arXiv:1601.03805v1.
 [Harandi et al. 2012] Harandi, M. T.; Sanderson, C.; Hartley, R.; and Lovell, B. C. 2012. Sparse coding and dictionary learning for symmetric positive definite matrices: A kernel approach. In ECCV.
 [Harandi, Salzmann, and Hartley 2014] Harandi, M. T.; Salzmann, M.; and Hartley, R. 2014. From manifold to manifold: Geometryaware dimensionality reduction for SPD matrices. In ECCV.
 [Huang et al. 2014] Huang, Z.; Wang, R.; Shan, S.; and Chen, X. 2014. Learning Euclidean-to-Riemannian metric for point-to-set classification. In CVPR.
 [Huang et al. 2015a] Huang, Z.; Shan, S.; Wang, R.; Zhang, H.; Lao, S.; Kuerban, A.; and Chen, X. 2015a. A benchmark and comparative study of video-based face recognition on COX face database. IEEE TIP 24(12):5967–5981.
 [Huang et al. 2015b] Huang, Z.; Wang, R.; Shan, S.; Li, X.; and Chen, X. 2015b. Log-Euclidean metric learning on symmetric positive definite manifold with application to image set classification. In ICML.
 [Ioffe and Szegedy 2015] Ioffe, S., and Szegedy, C. 2015. Batch Normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
 [Ionescu, Vantzos, and Sminchisescu 2015] Ionescu, C.; Vantzos, O.; and Sminchisescu, C. 2015. Matrix backpropagation for deep networks with structured layers. In ICCV.
 [Jarrett et al. 2009] Jarrett, K.; Kavukcuoglu, K.; Ranzato, M.; and LeCun, Y. 2009. What is the best multistage architecture for object recognition? In ICCV.
 [Jayasumana et al. 2013] Jayasumana, S.; Hartley, R.; Salzmann, M.; Li, H.; and Harandi, M. T. 2013. Kernel methods on the Riemannian manifold of symmetric positive definite matrices. In CVPR.
 [Krizhevsky, Sutskever, and Hinton 2012] Krizhevsky, A.; Sutskever, I.; and Hinton, G. 2012. Imagenet classification with deep convolutional neural networks. In NIPS.
 [LeCun et al. 1998] LeCun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradientbased learning applied to document recognition. Proceedings of the IEEE.
 [Liu et al. 2014] Liu, M.; Shan, S.; Wang, R.; and Chen, X. 2014. Learning expressionlets on spatiotemporal manifold for dynamic facial expression recognition. In CVPR.
 [Marceau-Caron and Ollivier 2016] Marceau-Caron, G., and Ollivier, Y. 2016. Practical Riemannian neural networks. arXiv preprint arXiv:1602.08007.
 [Masci et al. 2015] Masci, J.; Boscaini, D.; Bronstein, M.; and Vandergheynst, P. 2015. Geodesic convolutional neural networks on Riemannian manifolds. In ICCV Workshops.
 [Müller et al. 2007] Müller, M.; Röder, T.; Clausen, M.; Eberhardt, B.; Krüger, B.; and Weber, A. 2007. Documentation: Mocap database HDM05. Tech. Rep. CG-2007-2.
 [Nair and Hinton 2010] Nair, V., and Hinton, G. 2010. Rectified linear units improve restricted Boltzmann machines. In ICML.
 [Ollivier 2013] Ollivier, Y. 2013. Riemannian metrics for neural networks I: Feedforward networks. arXiv preprint arXiv:1303.0818.
 [Parkhi, Vedaldi, and Zisserman 2015] Parkhi, O.; Vedaldi, A.; and Zisserman, A. 2015. Deep face recognition. In BMVC.
 [Pennec, Fillard, and Ayache 2006] Pennec, X.; Fillard, P.; and Ayache, N. 2006. A Riemannian framework for tensor computing. IJCV 66(1):41–66.
 [Quang, Biagio, and Murino 2014] Quang, M.; Biagio, M.; and Murino, V. 2014. Log-Hilbert-Schmidt metric between positive definite operators on Hilbert spaces. In NIPS.
 [Sanin et al. 2013] Sanin, A.; Sanderson, C.; Harandi, M. T.; and Lovell, B. C. 2013. Spatiotemporal covariance descriptors for action and gesture recognition. In WACV Workshop.
 [Sra 2011] Sra, S. 2011. Positive definite matrices and the S-divergence. arXiv preprint arXiv:1110.1773.
 [Tosato et al. 2010] Tosato, D.; Farenzena, M.; Cristani, M.; Spera, M.; and Murino, V. 2010. Multiclass classification on Riemannian manifolds for video surveillance. In ECCV.
 [Tuzel, Porikli, and Meer 2006] Tuzel, O.; Porikli, F.; and Meer, P. 2006. Region covariance: A fast descriptor for detection and classification. In ECCV.
 [Tuzel, Porikli, and Meer 2008] Tuzel, O.; Porikli, F.; and Meer, P. 2008. Pedestrian detection via classification on Riemannian manifolds. IEEE TPAMI 30(10):1713–1727.
 [Wang et al. 2012] Wang, R.; Guo, H.; Davis, L.; and Dai, Q. 2012. Covariance discriminative learning: A natural and efficient approach to image set classification. In CVPR.
 [Zhang et al. 2015] Zhang, S.; Kasiviswanathan, S.; Yuen, P. C.; and Harandi, M. T. 2015. Online dictionary learning on symmetric positive definite manifolds with vision applications. In AAAI.