# Locality preserving projection on SPD matrix Lie group: algorithm and analysis

###### Abstract

Symmetric positive definite (SPD) matrices used as feature descriptors in image recognition are usually high dimensional. Traditional manifold learning is only applicable for reducing the dimension of high-dimensional vector-form data. For high-dimensional SPD matrices, directly using manifold learning algorithms to reduce the dimension of matrix-form data is impossible. The SPD matrix must first be transformed into a long vector, and then the dimension of this vector must be reduced. However, this approach breaks the spatial structure of the SPD matrix space. To overcome this limitation, we propose a new dimension reduction algorithm on SPD matrix space to transform high-dimensional SPD matrices into low-dimensional SPD matrices. Our work is based on the fact that the set of all SPD matrices with the same size has a Lie group structure, and we aim to transform the manifold learning to the SPD matrix Lie group. We use the basic idea of the manifold learning algorithm called locality preserving projection (LPP) to construct the corresponding Laplacian matrix on the SPD matrix Lie group. Thus, we call our approach Lie-LPP to emphasize its Lie group character. We present a detailed algorithm analysis and show through experiments that Lie-LPP achieves effective results on human action recognition and human face recognition.

Yangyang LI¹,² (liyangyang12@mails.ucas.ac.cn), Ruqian LU¹

¹ Academy of Mathematics and Systems Science, Key Lab of MADIS, Chinese Academy of Sciences, Beijing 100190, China
² University of Chinese Academy of Sciences, Beijing 100049, China

Keywords: Manifold learning, SPD matrix Lie group, Locality preserving projection, Laplace operator, Log-Euclidean metric

Citation: Yangyang LI, Ruqian LU. Locality preserving projection on SPD matrix Lie group: algorithm and analysis. Sci China Inf Sci, for review

## 1 Introduction

Image recognition, including dynamic action recognition and static image recognition, is a popular research subject in the fields of machine vision and pattern recognition [1] [2] [3] [14]. This technique has a wide range of applications in many areas, such as intelligent video retrieval and perceived interaction. One key step of image recognition is to construct a high-quality image feature descriptor, which determines the accuracy of recognition. The original image feature descriptor is a pixel matrix, which is usually transformed into a high-dimensional row feature vector because image recognition on the pixel matrix space is difficult. Given the high dimension of the feature vector, manifold learning algorithms are applied to implement image recognition [1] [9] [15] [16]. However, the vector-form feature descriptor breaks the geometric structure of the pixel matrix space and is highly sensitive to various factors such as illumination intensity, background, and object location. To avoid these disadvantages, tensor space dimensionality reduction based on locality preserving projection (LPP) [5] was proposed in [6]; this approach is linear and deals with the image pixel matrix directly. F. Porikli et al. [12] presented a new feature descriptor by computing a feature covariance matrix within a region of any size in an image, which preserves the local geometric structure of the pixel matrix (see Appendix A in the supplementary file). The covariance matrix, also called the symmetric positive definite (SPD) matrix descriptor, has mainly been used in static image recognition [10] [12] [14] [11]. For human action recognition, A. Sanin et al. [4] proposed a new method based on spatial-temporal covariance descriptors. H. Tabia et al. [13] applied SPD matrices as descriptors of 3D skeleton locations by constructing covariance matrices on 3D joint locations.

A set of SPD matrices of the same size forms a Riemannian manifold. This SPD Riemannian manifold also carries a group structure, forming an SPD matrix Lie group, denoted $S_D^+$. The group operation [7] is

 S_1 \odot S_2 \doteq \exp(\log(S_1) + \log(S_2)), \qquad S_1, S_2 \in S_D^+.

The SPD matrices computed from image regions are generally high dimensional. Because the data points of a matrix Lie group are in matrix form, directly reducing the dimension of a high-dimensional matrix Lie group without breaking the matrix structure is difficult.
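To make the group operation concrete, the following sketch computes $S_1 \odot S_2$ by diagonalizing each SPD matrix (assuming NumPy is available; the helper names `spd_log`, `spd_exp`, and `group_op` are our own, not from the paper):

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def spd_exp(T):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(T)
    return (V * np.exp(w)) @ V.T

def group_op(S1, S2):
    """Log-Euclidean group operation: S1 (.) S2 = exp(log(S1) + log(S2))."""
    return spd_exp(spd_log(S1) + spd_log(S2))
```

Note that, unlike ordinary matrix multiplication, this operation is commutative, and the identity matrix is the group identity.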

Harandi et al. [18] proposed to learn a kernel function to first map the SPD matrices into a higher-dimensional Euclidean space and then use LPP [5] to reduce their dimension. However, this method would distort the geometric and algebraic structure of the SPD manifold and would lose a considerable amount of important structure information. To overcome this limitation, Harandi et al. [19] suggested mapping the original SPD manifold to a Grassmann manifold and then solving the dimension reduction problem on the latter. However, this method has a high time cost. A similar study was conducted by Huang et al. [7], who proposed transforming SPD matrices to their corresponding tangent space and learning a linear dimensionality reduction map on that tangent space. However, this algorithm needs several parameters, which are sensitive factors that influence the algorithm. Overall, all three methods [7] [18] [19] require a linear space and rely on nonlinear dimensionality reduction mappings.

According to the definition of covariance matrix in [8] [12], each covariance matrix can be represented by the product of a set of feature vectors. Covariance matrix summarizes the linear bivariate relationships among a set of variables. Thus, we can solve the dimensionality reduction problem directly on the SPD matrix Lie group. We extend the idea of LPP [5] to the dimensionality reduction learning on the SPD matrix Lie group in a new approach called Lie-LPP.

The main contributions of our work can be summarized as follows:

• The LPP algorithm in [5] is extended to Lie-LPP and applied to the SPD matrix Lie group. A bilinear dimensionality reduction mapping on the SPD matrix Lie group is obtained, which preserves the intrinsic geometric and algebraic structure of the SPD matrix Lie group.

• To overcome the limitation of other methods regarding dimensionality reduction of the SPD matrix Lie group, our method solves the dimensionality reduction problem of the SPD matrix Lie group directly without mapping to other spaces and without needing numerous sensitive parameters, thereby resulting in a simple and straightforward approach.

• The graph Laplacian matrix is constructed on the SPD matrix Lie group to reflect the intrinsic geometric and algebraic structure of the original SPD matrix Lie group.

• A detailed algorithm analysis in theory is given to analyze the difference and relationship between Lie-LPP and LPP. The main conclusions are shown in Theorems 4.1 and 4.2 and in Proposition 4.1.

This paper is a further extension of the algorithm and analysis in our previous paper [24]. In [24], we presented only a brief description of our algorithm. In this paper, we present a detailed theoretical analysis of the algorithm, as well as full experiments on human action and face databases.

## 2 Background

### 2.1 Covariance Matrix

Suppose a set of $n$ feature vectors extracted from an image region is expressed as the following matrix $F$, where each feature vector is $d$-dimensional:

 F = (f_1, f_2, \ldots, f_n),

where $f_i \in \mathbb{R}^d$ is the $i$-th feature vector. The covariance matrix of these feature vectors is defined as [8]

 C = \frac{1}{n} F F^T = \frac{1}{n} \sum_{i=1}^{n} f_i f_i^T. \qquad (1)

In this definition, we assume the expectation of the feature vectors is zero, and each term in the summation is the outer product of the feature vector $f_i$ with itself. Obviously, the covariance matrix is positive semi-definite. In this paper, we consider only positive definite covariance matrices, which are called SPD matrices. When the feature vectors are adjacent, the corresponding covariance matrices are also adjacent. Thus, the covariance matrices preserve the local geometric structure of the corresponding feature vectors. The detailed proof is given in Appendix A.

Covariance matrices (SPD matrices) have several advantages as feature descriptors of images. First, covariance matrices can fuse all the features of an image. Second, they provide a way of filtering noisy information such as illumination and the location of the object in the image. In addition, the size of the covariance matrix depends on the dimensionality of the feature vectors rather than on the size of the image region. Thus, we can construct covariance matrices of the same size from regions of different sizes.
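A minimal sketch of Eq. (1) follows (the function name and the small ridge term `eps` are our own choices, not the paper's; the ridge makes the estimate strictly positive definite so it lies on the SPD manifold):

```python
import numpy as np

def covariance_descriptor(F, eps=1e-6):
    """Covariance descriptor C = (1/n) F F^T for a d x n matrix whose columns
    are feature vectors.  Features are centered first (zero-expectation
    assumption in Eq. (1)); eps * I makes C strictly positive definite."""
    F = F - F.mean(axis=1, keepdims=True)
    d, n = F.shape
    return F @ F.T / n + eps * np.eye(d)
```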

### 2.2 Geometric structure of SPD Matrix Lie group

In machine learning, we usually have to learn an effective metric for comparing data points. In particular, in the image recognition step, a metric is required to measure the distance between two different image feature descriptors. In this paper, we use the SPD matrices as feature descriptors of images. Thus, the corresponding Riemannian metric needs to be constructed on the SPD matrix Lie group to compute the intrinsic geodesic distances between any two SPD matrices.

The SPD matrix Lie group that we consider in this paper is denoted $S_D^+$, where every point is a $D \times D$ SPD matrix. The tangent space of $S_D^+$ at the identity is Sym($D$), the linear space of $D \times D$ symmetric matrices. The learned lower-dimensional SPD matrix Lie group is denoted $S_d^+$. The family of scalar products on all tangent spaces of the SPD matrix Lie group is known as a Riemannian metric, under which the geodesic distance between any two points can be computed. In this paper, we choose the Log-Euclidean metric (LEM) from [23] as the Riemannian metric of the SPD matrix Lie group. The detailed definition of LEM is given in Definition 2.1.

Definition 2.1. (Log-Euclidean Metric) [23] The Riemannian metric at a point $S \in S_D^+$ is a scalar product defined on the tangent space $T_S S_D^+$:

 \langle T_1, T_2 \rangle = \langle D_S \log \cdot T_1, \; D_S \log \cdot T_2 \rangle, \qquad (2)

where $T_1, T_2 \in T_S S_D^+$ and $D_S \log \cdot T_i$ denotes the directional derivative of the matrix logarithm at $S$ along $T_i$ [7].

LEM is a bi-invariant metric defined on the SPD matrix Lie group. The corresponding theoretical conclusion is shown in [23].

Definition 2.2. (Bi-invariant Metric) [23] Any bi-invariant metric on the Lie group of SPD matrices is also called a LEM because it corresponds to a Euclidean metric (EM) in the logarithm domain.
The logarithm domain is the tangent space of the SPD matrix Lie group.

Corollary 2.3. (Flat Riemannian Manifold) [23] Endowed with a bi-invariant metric, the space of SPD matrices is a flat Riemannian space; its sectional curvature is null everywhere.

Thus, under LEM, $S_D^+$ is a flat manifold and locally isometric to the tangent space Sym($D$). In a local neighborhood, the mapping from the tangent space to the SPD matrix Lie group is the exponential map, and its inverse is the logarithmic map, as shown in Eqs. (3) and (4):

 \exp_{S_1}(T_1) = \exp(\log(S_1) + D_{S_1}\log \cdot T_1), \qquad (3)
 \log_{S_1}(S_2) = D_{\log(S_1)}\exp \cdot (\log(S_2) - \log(S_1)), \qquad (4)

where the exponential map is defined at the point $S_1$ and $T_1$ is a tangent vector at $S_1$; $\exp_{S_1}(T_1)$ is the exponential representation of $T_1$ on $S_D^+$, and $\log_{S_1}(S_2)$ is the logarithmic representation of $S_2$ at $S_1$.

The geodesic distance between $S_1, S_2 \in S_D^+$ under LEM is defined as follows [7]:

 d_G(S_1, S_2) = \langle \log_{S_1}(S_2), \log_{S_1}(S_2) \rangle = \| \log(S_1) - \log(S_2) \|_F^2, \qquad (5)

where $\| \cdot \|_F$ denotes the Frobenius norm of a matrix and $\log_{S_1}(S_2)$ is the logarithm of $S_2$ at $S_1$.

Under LEM, the SPD matrix Lie group is a complete manifold. Thus, any two points on the SPD matrix Lie group are linked by a shortest geodesic.
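Eq. (5) can be sketched as follows (a NumPy illustration with helper names of our own; following the paper, the squared Frobenius norm is returned):

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def geodesic_dist(S1, S2):
    """Log-Euclidean geodesic distance of Eq. (5):
    d_G(S1, S2) = ||log(S1) - log(S2)||_F^2."""
    return np.linalg.norm(spd_log(S1) - spd_log(S2), 'fro') ** 2
```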

## 3 Algorithm

In this section, we first analyze the LPP algorithm. Then, we give the construction process of graph Laplacian matrix on the SPD matrix Lie group. Finally, we describe the process of our proposed dimensionality reduction algorithm Lie-LPP.

### 3.1 Locality Preserving Projection (LPP)

LPP aims to learn a linear dimensionality reduction map for high-dimensional vector-form data points and can be seen as a linear approximation of Laplacian eigenmaps [17]. This map optimally preserves the local neighborhood geometric structure of a dataset by building a graph Laplacian matrix on the dataset; the graph Laplacian matrix is a discrete approximation of the differential Laplace operator on the manifold. Let $X = \{x_1, \ldots, x_N\}$ be the input data set distributed on a $d$-dimensional manifold $M$ embedded in $\mathbb{R}^D$, and let $a$ denote the linear dimensionality reduction map. The learned lower-dimensional data set is $Y = \{y_1, \ldots, y_N\}$.

The algorithm of LPP is stated as follows:

• Constructing the adjacency graph: Construct a graph $G$ with $N$ nodes. If $x_i$ and $x_j$ are "close", then an edge connects nodes $i$ and $j$. The "closeness" between two nodes is measured by the $k$-nearest-neighbor method.

• Choosing the weights: Denote by $W$ the corresponding weight matrix, a sparse symmetric matrix of size $N \times N$. The element $W_{ij}$ is defined as follows:
$W_{ij} = e^{-\|x_i - x_j\|^2 / t}$, if nodes $i$ and $j$ are connected;
$W_{ij} = 0$, if nodes $i$ and $j$ are not connected.

• Eigenmaps: The following generalized eigenvector problem is solved to obtain the corresponding dimension reduction map $a$:

 X L X^T a = \lambda X D X^T a, \qquad (6)

where $D$ is a diagonal matrix with $D_{ii} = \sum_j W_{ij}$, and $L = D - W$ is the graph Laplacian matrix.

Thus, the critical step of LPP is to construct the graph Laplacian matrix on the data points. The global lower-dimensional representations are learned by solving the corresponding generalized eigenfunction.
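The three steps above can be sketched as follows (a minimal NumPy illustration, not the authors' implementation; the ridge term `reg` is our addition to keep the right-hand matrix invertible):

```python
import numpy as np

def lpp(X, k=3, t=1.0, d=2, reg=1e-8):
    """Minimal LPP sketch.  X is D x N (columns are data points); returns the
    d x N embedding and the D x d projection matrix."""
    D_dim, N = X.shape
    # squared Euclidean distances between all pairs of points
    sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    # step 1: adjacency graph from k nearest neighbours
    order = np.argsort(sq, axis=1)[:, 1:k + 1]
    # step 2: heat-kernel weights on connected pairs
    W = np.zeros((N, N))
    for i in range(N):
        W[i, order[i]] = np.exp(-sq[i, order[i]] / t)
    W = np.maximum(W, W.T)                   # symmetrise the graph
    Dg = np.diag(W.sum(axis=1))
    L = Dg - W                               # graph Laplacian
    # step 3: generalized eigenproblem  X L X^T a = lambda X D X^T a
    A = X @ L @ X.T
    B = X @ Dg @ X.T + reg * np.eye(D_dim)   # ridge keeps B invertible
    w, V = np.linalg.eigh(B)
    Bih = (V / np.sqrt(w)) @ V.T             # B^{-1/2} (whitening)
    lam, U = np.linalg.eigh(Bih @ A @ Bih)
    P = Bih @ U[:, :d]                       # bottom-d eigenvectors
    return P.T @ X, P
```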

### 3.2 Laplace operator on SPD matrix Lie group

The Laplace operator is a significant operator defined on a Riemannian manifold [28]. It measures the intrinsic structure of the manifold, such as its curvature and the similarities among different points. The Laplacian matrix on a graph is a discrete analog of the Laplace operator familiar from functional analysis [27]. The critical step of the LPP algorithm is to construct a graph Laplacian matrix to represent the intrinsic local geometric structure of the data points, but that construction applies only to vector-form data points. For Lie-LPP, we aim to uncover the intrinsic structure of SPD matrices, which differ essentially from vector-form data points in spatial structure. The Laplace operator is defined based on the Riemannian metric: for vector-form data points, the Laplacian matrix is constructed based on the EM in each local patch; for SPD matrices, we need to use LEM [22] [23] to construct the corresponding Laplacian matrix.

Several difficulties are encountered for the learning of the Laplacian matrix on SPD matrices. One critical difficulty is the discrete representations of the first- and second-order derivatives on SPD matrix space. Another difficulty is the need to find the representation of the Laplace operator on the SPD matrix Lie group, which is different from the structure on vector space. To construct a Laplacian matrix on SPD matrices, we need to solve these difficulties during the learning process. In addition, the core of Lie-LPP is to construct an accurate Laplacian matrix on the SPD matrix Lie group before dimensionality reduction. The construction process of a Laplacian matrix on SPD matrices and the descriptions for solving these difficulties are given in detail in a separate section.

To better understand the construction of the Laplacian matrix, we first present an intuitive example on a set of one-dimensional nodes. Consider a graph whose nodes are indexed by integers, where every node $i$ is adjacent to the two nodes $i-1$ and $i+1$. If we assign a value $f(i)$ to node $i$, then the Laplacian is represented as $(\Delta f)(i) = 2f(i) - f(i-1) - f(i+1)$. Thus, $f(i) - f(i-1)$ is the discrete analog of a first-order derivative defined over the real number line, and $(\Delta f)(i)$ is the discrete approximation of the second-order derivative. For higher dimensions, the 'normalized' graph Laplacian of a function $f$ is defined as:

 (\Delta_{nm} f)(i) = f(i) - \frac{1}{\deg(i)} \sum_{j=1}^{n} W_{ij} f(j), \qquad (7)

where the degree function is defined as $\deg(i) = \sum_j W_{ij}$, and $W_{ij}$ is the heat-kernel weight defined in the same way as in the second step of LPP [5].

The abovementioned Laplacian matrix is defined on vector-form data points only. In the following, we construct the 'normalized' graph Laplacian matrix on the SPD matrix Lie group endowed with LEM. Suppose a parameterized SPD matrix Lie group is given by $\Sigma(x)$; we call the vector $\overrightarrow{\Sigma(x)\Sigma(x+u)}$ the standard first-order derivative on $S_D^+$ [21]:

 \overrightarrow{\Sigma(x)\Sigma(x+u)} = \Sigma(x)^{\frac{1}{2}} \big( \log \Sigma(x+u) - \log \Sigma(x) \big) \Sigma(x)^{\frac{1}{2}}. \qquad (8)

The Laplace-Beltrami operator acting on $\Sigma$ is defined as:

 \Delta \Sigma = \sum_{i=1}^{d} \Delta_i \Sigma, \qquad \Delta_i \Sigma = \partial_i^2 \Sigma - 2 (\partial_i \Sigma) \Sigma^{-1} (\partial_i \Sigma). \qquad (9)

To approximate the graph Laplacian matrix on SPD matrices, we first need to approximate the first- and second-order derivatives on the SPD matrix Lie group. Under the approximation of the second-order derivative, the corresponding approximation of the Laplace-Beltrami operator on $S_D^+$ [21] is:

 \Delta_u \Sigma = \partial_u^2 \Sigma - 2 (\partial_u \Sigma) \Sigma^{-1} (\partial_u \Sigma) = \overrightarrow{\Sigma(x)\Sigma(x+u)} + \overrightarrow{\Sigma(x)\Sigma(x-u)} + O(\|u\|^4). \qquad (10)

To compute the complete Laplacian of Eq. (9) on the SPD matrix Lie group, we only have to compute the Laplace operator along $d$ orthonormal directions.

For a discrete dataset, suppose $\{\Sigma_1, \ldots, \Sigma_n\}$ is a set of SPD matrices sampled from $S_D^+$. The 'normalized' graph Laplacian on this data set is

 (\Delta_{nm} \Sigma_i) = \Sigma_i^{\frac{1}{2}} \Big( \log(\Sigma_i) - \sum_{j=1}^{n} \tilde{W}_{ij} \log(\Sigma_j) \Big) \Sigma_i^{\frac{1}{2}}, \qquad (11)

where $\tilde{W}_{ij}$ is the heat-kernel weight if $\Sigma_i$ and $\Sigma_j$ are connected, and $\tilde{W}_{ij} = 0$ otherwise. The corresponding graph Laplacian matrix on a set of SPD matrices is $\tilde{L} = \tilde{D} - \tilde{W}$, where $\tilde{W}$ is the symmetric weight matrix defined as above and $\tilde{D}$ is a diagonal matrix with $\tilde{D}_{ii} = \sum_j \tilde{W}_{ij}$. $\tilde{L}$ is a discrete representation of the Laplace operator on the SPD matrix Lie group.
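The weight and Laplacian construction can be sketched as follows (our own NumPy illustration; a fully connected graph is used here for brevity instead of the k-nearest-neighbor graph):

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def spd_graph_laplacian(mats, t=1.0):
    """Graph Laplacian L~ = D~ - W~ on a set of SPD matrices, with heat-kernel
    weights computed from the log-Euclidean distance."""
    logs = [spd_log(S) for S in mats]
    N = len(mats)
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            d2 = np.linalg.norm(logs[i] - logs[j], 'fro') ** 2
            W[i, j] = W[j, i] = np.exp(-d2 / t)
    Dg = np.diag(W.sum(axis=1))
    return Dg - W
```

As expected of a graph Laplacian, the result is symmetric positive semi-definite and its rows sum to zero.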

### 3.3 Lie-LPP Algorithm

On the basis of the definition of graph Laplacian matrix on and the algorithmic procedures of LPP, we show our Lie-LPP algorithm as follows:

#### 3.3.1 Lie-LPP Algorithm

For the SPD matrix Lie group $S_D^+$, the SPD matrix logarithms in the tangent space are symmetric matrices. The bilinear mapping between tangent spaces is defined as follows:

 f(\log(S_1)) = A^T \log(S_1) A. \qquad (12)

The corresponding mapping between SPD matrix Lie groups is

 g(S_1) = \exp \circ f(\log(S_1)) = \exp(A^T \log(S_1) A), \qquad (13)

where $A \in \mathbb{R}^{D \times d}$ is a linear map matrix, $f$ is the corresponding map defined on the Lie algebras, and $g$ is the derived map defined on the SPD matrix Lie groups.

$g(S_1)$ is still an SPD matrix; this is easily proven, since $A^T \log(S_1) A$ is symmetric and the exponential of a symmetric matrix is SPD. In this paper, we attempt to learn a transformation matrix $A$ with full column rank, where $D$ is the size of the matrices in $S_D^+$, $d$ is the mapped size, and $d < D$. The linear map matrix $A$ is proven to preserve the algebraic structure of $S_D^+$. To obtain a discriminative SPD matrix Lie group $S_d^+$, the map should also inherit and preserve the geometric structure of $S_D^+$. According to the idea of LPP, the key step of the Lie-LPP algorithm is to construct the Laplacian matrix on $S_D^+$, which reflects its local geometric structure. Under LEM, $S_D^+$ is locally isometric to its tangent space. Thus, the geodesic distance between two SPD matrices equals the Euclidean distance between the corresponding points on the tangent space.
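The map of Eq. (13) can be sketched as follows (our own NumPy illustration; the log and exp are computed by eigendecomposition, and the output is SPD because it is the exponential of a symmetric matrix):

```python
import numpy as np

def reduce_spd(S, A):
    """Map a D x D SPD matrix to a d x d SPD matrix: g(S) = exp(A^T log(S) A)."""
    w, V = np.linalg.eigh(S)
    logS = (V * np.log(w)) @ V.T          # matrix logarithm of S
    T = A.T @ logS @ A                    # symmetric d x d matrix
    w2, V2 = np.linalg.eigh(T)
    return (V2 * np.exp(w2)) @ V2.T       # matrix exponential: SPD result
```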

Suppose the input data points are $S_1, \ldots, S_N \in S_D^+$ and the output sample points are $Y_1, \ldots, Y_N \in S_d^+$, where $N$ is the number of sampled points.

The algorithm steps of Lie-LPP are as follows:

• The first step is to divide the input SPD matrices into a set of local patches. We use the $k$-nearest method to find the $k$-nearest neighborhoods of every point $S_i$, where the distance between two points is their geodesic distance on the SPD matrix Lie group, computed using Eq. (5).

• The second step is to construct a weight matrix $\tilde{W}$ on each local patch to represent the local intrinsic geometric structure of $S_D^+$:

$\tilde{W}_{ij} = e^{-\|\log(S_i) - \log(S_j)\|_F^2 / t}$, if $S_i$ and $S_j$ are connected;

$\tilde{W}_{ij} = 0$, otherwise.

The definition of the weight values is based on the construction of the Laplacian matrix on the input SPD matrices in Eq. (11).

• The third step is to compute the eigenvalues and the corresponding eigenvectors of the generalized eigenfunction problem

 \mathbf{S}^T \tilde{L} \mathbf{S} A = \lambda \mathbf{S}^T \tilde{D} \mathbf{S} A, \qquad (14)

where $\mathbf{S}$ is the partitioned matrix stacking the matrix logarithms $\log(S_1), \ldots, \log(S_N)$.

#### 3.3.2 Optimal Embedding

The optimal dimension reduction map is obtained by minimizing the following energy function:

 \frac{1}{2} \sum_{i,j} d_G(Y_i, Y_j) \tilde{W}_{ij}, \qquad (15)

where $d_G(Y_i, Y_j)$ is the geodesic distance between $Y_i$ and $Y_j$, and $\tilde{W}_{ij}$ is the corresponding weight.

According to the definition of the geodesic distance under LEM on the SPD matrix Lie group in Eq. (5), the energy function of Eq. (15) can be transformed into

 \frac{1}{2} \sum_{i,j} \| \log Y_i - \log Y_j \|_F^2 \, \tilde{W}_{ij}. \qquad (16)

According to Eq. (12), the optimization function of Eq. (16) can be represented as follows:

 \frac{1}{2} \sum_{i,j} \| \log Y_i - \log Y_j \|_F^2 \tilde{W}_{ij} = \frac{1}{2} \sum_{i,j} \| A^T \log S_i A - A^T \log S_j A \|_F^2 \tilde{W}_{ij} = \mathrm{tr}\big( P^T \mathbf{S}^T (\tilde{D} - \tilde{W}) \mathbf{S} P \big) = \mathrm{tr}\big( P^T \mathbf{S}^T \tilde{L} \mathbf{S} P \big), \qquad (17)

where $\tilde{D}$ and $\tilde{W}$ are two sparse block matrices and $\tilde{L} = \tilde{D} - \tilde{W}$. $\tilde{D}$ and $\tilde{L}$ are both positive semi-definite, so $\mathbf{S}^T \tilde{L} \mathbf{S}$ is also positive semi-definite and its eigenvalues are all non-negative. To compute the minimum value of Eq. (17), we only need to compute the smallest eigenvalues of the corresponding matrix pencil.
To avoid obtaining a singular solution, we impose a constraint as follows:

 P^T \mathbf{S}^T \tilde{D} \mathbf{S} P = I. \qquad (18)

Then, the corresponding minimization problem turns to

 \min \; \mathrm{tr}\big( P^T \mathbf{S}^T \tilde{L} \mathbf{S} P \big), \quad \text{s.t.} \; P^T \mathbf{S}^T \tilde{D} \mathbf{S} P = I. \qquad (19)

We use the Lagrange multiplier method to solve the minimization problem

 L(A, \lambda) = \mathrm{tr}\big( P^T \mathbf{S}^T \tilde{L} \mathbf{S} P \big) - \lambda \, \mathrm{tr}\big( P^T \mathbf{S}^T \tilde{D} \mathbf{S} P - I \big) = \mathrm{tr}\big( A A^T \mathbf{S}^T \tilde{L} \mathbf{S} A A^T - \lambda ( A A^T \mathbf{S}^T \tilde{D} \mathbf{S} A A^T - I ) \big). \qquad (20)

Taking the derivative of $L(A, \lambda)$ with respect to $A$, we obtain the following:

 \frac{\partial L(A, \lambda)}{\partial A} = 4 \, \mathrm{tr}\big( A^T \mathbf{S}^T \tilde{L} \mathbf{S} A A^T - \lambda A^T \mathbf{S}^T \tilde{D} \mathbf{S} A A^T \big), \qquad (21)

where $\mathbf{S}^T \tilde{L} \mathbf{S}$ and $\mathbf{S}^T \tilde{D} \mathbf{S}$ are both positive semi-definite matrices. Eq. (21) can be rewritten as

 \frac{\partial L(A, \lambda)}{\partial A} = 4 \, \mathrm{tr}\big( A^T A A^T \mathbf{S}^T \tilde{L} \mathbf{S} - \lambda A^T A A^T \mathbf{S}^T \tilde{D} \mathbf{S} \big) = 4 \, \mathrm{tr}\big( ( \mathbf{S}^T \tilde{L} \mathbf{S} A - \lambda \mathbf{S}^T \tilde{D} \mathbf{S} A ) A^T A \big). \qquad (22)

$A$ is a full column rank matrix; thus, $A^T A$ is an SPD matrix. To obtain $A$, we need to solve the following generalized eigenfunction problem:

 \mathbf{S}^T \tilde{L} \mathbf{S} A = \lambda \mathbf{S}^T \tilde{D} \mathbf{S} A. \qquad (23)

We take the $d$ smallest eigenvalues $\lambda_1 \leqslant \cdots \leqslant \lambda_d$ and the corresponding eigenvectors $a_1, \ldots, a_d$; $A = (a_1, \ldots, a_d)$ is the learned linear dimensionality reduction map matrix. The corresponding dimension reduction map between $S_D^+$ and $S_d^+$ is

 Y_i = g(S_i) = \exp(A^T \log(S_i) A). \qquad (24)

The lower-dimensional SPD matrix Lie group $S_d^+$ preserves the local geometric and algebraic structure of $S_D^+$, which is maintained by the Laplacian matrix on $S_D^+$. High similarity between two points corresponds to a large weight between them. In addition, through the construction of a graph Laplacian matrix on the SPD matrices, the reduction map is learned from the global data points: the Laplacian matrix can be viewed as an alignment matrix that aligns a set of local patch structures into global lower-dimensional representations by solving a generalized eigenfunction. Unlike the methods of [7] [18] [19], our method uncovers the intrinsic structure of SPD matrices by constructing this discrete Laplacian matrix without the help of other spaces.

## 4 Algorithm Analysis

In this section, we mainly analyze the relationships between the proposed Lie-LPP and LPP [5] in theory. In the first subsection, we present dimension reduction error comparisons between these two algorithms. In the second subsection, we analyze the similarity relation between them.

### 4.1 Comparison with LPP

We analyze the reconstruction errors during dimension reduction of Lie-LPP and LPP from two aspects. First, we analyze the local weight matrix construction. Second, we analyze the global alignment matrix and the null space of the alignment matrix. The dimension reduction losses of the two algorithms are determined by the corresponding graph Laplacian matrices defined on data points. In this section, we mainly compare Lie-LPP and LPP in theory to analyze the improvements of our algorithm.

First, we show the relationship between two graph Laplacian matrices, which are defined on vector-form data points and SPD matrix-form data points.

The local weight matrix of LPP is defined in the second step of the LPP algorithm:

 W_{ij} = e^{-\|x_i - x_j\|^2 / t},

where $x_i$ and $x_j$ are in the same neighborhood.

The local weight matrix of Lie-LPP is defined in the second step of the Lie-LPP algorithm:

 \tilde{W}_{ij} = e^{-\|\log(S_i) - \log(S_j)\|_F^2 / t},

where $S_i$ and $S_j$ are also in the same neighborhood.

The distance between $x_i$ and $x_j$ is computed under EM, whereas the distance between $S_i$ and $S_j$ is computed under LEM. EM is not the true Riemannian metric of the embedded manifold $M$ in the LPP algorithm. As mentioned in Subsection 2.2, LEM is the intrinsic Riemannian metric of the SPD matrix Lie group. Thus, the distance under EM is not the intrinsic geodesic distance on $M$; indeed, the Euclidean distance $\|x_i - x_j\|$ is only a lower bound of the geodesic distance. Under LEM, the intrinsic geometric structure of the SPD matrix Lie group is captured exactly, and $\|\log(S_i) - \log(S_j)\|_F^2$ is the true geodesic distance between $S_i$ and $S_j$. Note that $x_i$ and $S_i$ are two different feature descriptors of the same image in computer vision. In the Appendix, we prove that the SPD matrix descriptors preserve the geometric structures of the vector-form feature descriptors. Thus, we have

 \| \log(S_i) - \log(S_j) \|_F^2 \geqslant \| x_i - x_j \|^2.

Since the heat kernel is decreasing in the distance, we have

 W_{ij} \geqslant \tilde{W}_{ij}. \qquad (25)

The graph Laplacian matrices defined on $\{x_i\}$ and $\{S_i\}$ are denoted $L$ and $\tilde{L}$, respectively. On the basis of the above analysis, we present our first comparison conclusion in Theorem 4.1.

Theorem 4.1. If the datasets $\{x_i\}$ and $\{S_i\}$ are two different feature descriptors of the same images, then we have $L \succeq \tilde{L}$, that is,

 \lambda_i(L) \geqslant \lambda_i(\tilde{L}),

for all $i$.
Proof: According to the definitions of the weight matrices $W$ and $\tilde{W}$, we obtain Eq. (25). Then $D - \tilde{D}$ is a non-negative diagonal matrix. We know that $L = D - W$ and $\tilde{L} = \tilde{D} - \tilde{W}$; thus, $L - \tilde{L}$ is also a graph Laplacian matrix, namely that of the graph with non-negative weights $W - \tilde{W}$. Hence $L - \tilde{L}$ is a positive semi-definite symmetric matrix. We have

 \lambda_i(L - \tilde{L}) \geqslant 0, \qquad L \succeq \tilde{L},

for all $i$.

For two symmetric matrices $P$ and $Q$, if $P - Q \succeq 0$, then we write $P \succeq Q$. By Weyl's monotonicity theorem, if $L \succeq \tilde{L}$, then

 \lambda_i(L) \geqslant \lambda_i(\tilde{L}),

for every $i$.

In the special situation where the original sub-manifold in LPP is highly curved and the Riemannian curvature is not zero everywhere, the eigenvalues of $L$ are strictly greater than those of $\tilde{L}$, that is, $\lambda_i(L) > \lambda_i(\tilde{L})$ for every $i$. This proves the theorem.

After analyzing the relationship between $L$ and $\tilde{L}$ in Theorem 4.1, we analyze the dimension reduction errors of Lie-LPP and LPP. In the dimension reduction step, both algorithms minimize the following generalized eigenvalue function:

 E = \frac{1}{2} \sum_{i,j} (y_i - y_j)^2 W_{ij} = Y^T L Y,

where $y_i$ are the lower-dimensional representations of $x_i$ or $S_i$, for $i = 1, \ldots, N$.

The dimension reduction errors of Lie-LPP and LPP are measured by the smallest eigenvalues of the graph Laplacian matrices. Suppose the dimension reduction error under LPP is denoted $E$ and the error under Lie-LPP is denoted $\tilde{E}$. On the basis of Theorem 4.1, we present our second comparison conclusion in Theorem 4.2.

Theorem 4.2. The dimension reduction error under Lie-LPP is less than that under LPP, that is,

 \| \tilde{E} \|_F \leqslant \| E \|_F.

Proof: According to the algorithm procedures of the Laplacian eigenmap, the dimension reduction errors of LPP and Lie-LPP are determined mainly by the smallest eigenvalues of the graph Laplacian matrix. Thus, the norm of the general reconstruction error is measured as

 \| E \|_F = \sum_{i=1}^{d} \lambda_i,

where $d$ is the intrinsic dimension of the lower-dimensional representations.

From Theorem 4.1, we can deduce that for the same image database, the graph Laplacian matrix constructed on the SPD matrices is lower, in the sense of the Loewner order, than that on the vector-form descriptors. Thus, we have $\lambda_i(\tilde{L}) \leqslant \lambda_i(L)$ for all $i$. Then, by the definition of the dimension reduction error norms of Lie-LPP and LPP, we have

 \| \tilde{E} \|_F \leqslant \| E \|_F.

Under the same special situation as in Theorem 4.1, if the embedded manifold in LPP is highly curved, then the reconstruction error $\| E \|_F$ is strictly greater than $\| \tilde{E} \|_F$:

 \| \tilde{E} \|_F < \| E \|_F.

The key reason is that LPP uses EM to determine the local intrinsic geometric structure of $M$, and EM is not the true local Riemannian metric of $M$.

### 4.2 Connection to LPP

We also present a theoretical analysis of the similarity between Lie-LPP and LPP, aside from the reconstruction error comparisons above. The analysis shows that, in the following special situation, Lie-LPP is equivalent to LPP with a suitably defined weight matrix. Suppose the vector-form descriptor of an object is represented as a row vector $x_i$, and the corresponding SPD matrix descriptor of this object is $S_i = x_i^T x_i$. The SPD matrix Lie group is a flat Riemannian manifold, so a local neighborhood of the Lie group is locally isometric to the corresponding tangent space, and the local tangent space can be approximately represented by the local neighborhood of the SPD matrix Lie group. Under this special situation, Lie-LPP can be transformed into LPP with a special weight matrix. The theoretical analysis is stated in Proposition 4.1.

Proposition 4.1. Let the vector-form descriptor of an object be the row vector $x_i$ and the corresponding SPD matrix descriptor be $S_i = x_i^T x_i$. Under this special situation, Lie-LPP can be transformed into LPP by defining a new weight matrix.

Proof: First, we give the following representation of the generalized eigenvalue function of Lie-LPP:

 \mathbf{S}^T \tilde{L} \mathbf{S} A = \lambda \mathbf{S}^T \tilde{D} \mathbf{S} A, \qquad \mathbf{S}^T \tilde{W} \mathbf{S} A = (1 - \lambda) \mathbf{S}^T \tilde{D} \mathbf{S} A,

where $\tilde{L} = \tilde{D} - \tilde{W}$ is the graph Laplacian matrix defined on the set of SPD matrices, $\mathbf{S}$ is the partitioned matrix stacking $S_1, \ldots, S_N$, and $\mathbf{S}^T$ is the transpose of $\mathbf{S}$.
By rewriting $\mathbf{S}^T$ in matrix form, we obtain the following representation:

 \mathbf{S}^T = [x_1^T, x_2^T, \cdots, x_N^T] \begin{pmatrix} x_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & x_N \end{pmatrix}.

Under this representation, we obtain

 \mathbf{S}^T \tilde{W} \mathbf{S} A = [x_1^T, x_2^T, \cdots, x_N^T] \begin{pmatrix} x_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & x_N \end{pmatrix} \tilde{W} \begin{pmatrix} x_1^T & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & x_N^T \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{pmatrix} A.

Define

 W_V = \begin{pmatrix} x_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & x_N \end{pmatrix} \tilde{W} \begin{pmatrix} x_1^T & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & x_N^T \end{pmatrix}, \qquad D_V = \begin{pmatrix} x_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & x_N \end{pmatrix} \tilde{D} \begin{pmatrix} x_1^T & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & x_N^T \end{pmatrix}.

Under the new weight matrices $W_V$ and $D_V$, we rewrite the eigenvalue function as follows. Suppose $X = [x_1^T, x_2^T, \cdots, x_N^T]$; then the generalized eigenvalue function becomes

 X W_V X^T A = (1 - \lambda) X D_V X^T A.

Thus, under the new weight matrix $W_V$, Lie-LPP is equivalent to LPP. The above analysis shows that the Lie-LPP algorithm can be transformed into LPP by defining a new weight matrix when the feature descriptors of vector form and SPD matrix form are $x_i$ and $S_i = x_i^T x_i$, $i = 1, \ldots, N$, respectively.

## 5 Experiments

In this section, we first report the results obtained by running Lie-LPP on two human action databases, namely, Motion Capture HDM05 [25] and the CMU Motion Graph database. We compare our algorithm with traditional manifold learning algorithms through two experiments. In the second part, we test our algorithm on a static face database, namely, the extended Yale Face Database B (YFB DB). We compare the LEML algorithm [7] and the SPD-ML algorithm [19] with our algorithm, after which we present the experimental comparison between Lie-LPP and LPP [5].

### 5.1 Human Action Recognition

In this subsection, we test Lie-LPP on two human action databases. Each action segment trajectory can be seen as a curve traversing a manifold. Action recognition involves classifying the different action curves. In the recognition step, we use the nearest neighborhood framework to classify human action sequences. After feature extraction, the embedded covariance feature descriptors form an SPD matrix, which belongs to a low-dimensional SPD matrix Lie group.
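The nearest-neighborhood recognition framework mentioned above can be sketched as follows (an illustrative 1-NN classifier under the log-Euclidean distance, written by us rather than taken from the paper):

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def nn_classify(test_S, train_mats, train_labels):
    """Assign the label of the nearest training SPD descriptor, where
    'nearest' is measured by the log-Euclidean (Frobenius) distance."""
    lt = spd_log(test_S)
    dists = [np.linalg.norm(lt - spd_log(S), 'fro') for S in train_mats]
    return train_labels[int(np.argmin(dists))]
```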

#### 5.1.1 Motion Capture HDM05 Database

The HDM05 database [25] contains more than 70 motion classes, each with 10 to 50 realizations executed by various actors. The actions are performed by five subjects named 'bd', 'bk', 'dg', 'mm', and 'tr'. For each subject, we choose the following 13 actions: 'clap above head', 'deposit floor', 'elbow to knee', 'grab high', 'hop on both legs', 'jog', 'kick forward', 'lie down on the floor', 'rotate both arms backward', 'sit down on chair', 'sneak', 'stand up', and 'throw basketball'. We choose three motion fragments per subject per action; thus, the total number of motion fragments in this experiment is 195. The dataset provides the 3D locations of the skeletal joints over time, acquired at a fixed frame rate. In our experiment, we describe each action observed over a window of frames [3] by its joint covariance descriptor, which is an SPD matrix whose size is determined by the number of joints; the window length is a parameter of the descriptor.

If we directly performed classification in the original high-dimensional SPD matrix Lie group, the time cost of the experiment would be prohibitively high. In practice, each joint action is controlled by only a few features, far fewer than the original dimension, so dimension reduction is necessary before recognition. We perform eight groups of experiments, each divided into two parts: first, the Lie-LPP algorithm reduces the dimension of the SPD matrices; second, the leave-one-out method computes the recognition rate. For comparison, we measure the similarity between two SPD matrices under two Riemannian metrics, EM and LEM. The final comparison results are shown in Table 1. The purpose of this experiment is to analyze the time complexity of Lie-LPP, so we compare our method under different Riemannian metrics and different reduced dimensions. For the same reduced dimension, the accuracy rates under LEM are overall higher than those under EM, but so is the time cost, because the log operation on SPD matrices is especially expensive. Under the same Riemannian metric, the time cost increases with the reduced dimension. Without dimension reduction, the recognition accuracies under both Riemannian metrics are much lower than those obtained on the low-dimensional SPD matrix Lie group with LEM at each of the reduced dimensions reported in Table 1. In conclusion, reducing the dimension of the original SPD matrices before recognition is necessary: our method lowers the time cost of the experiment and improves the recognition rate. In addition, the recognition rates based on LEM are relatively higher; thus, LEM captures the intrinsic structure of SPD matrices more accurately.
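The two similarity measures compared in Table 1 can be sketched as follows (a minimal version; for symmetric positive definite input, the matrix logarithm is computed here through the eigendecomposition):

```python
import numpy as np

def spd_log(X):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(X)          # SPD => real positive spectrum
    return (V * np.log(w)) @ V.T

def em_distance(X, Y):
    """Euclidean metric (EM): plain Frobenius distance."""
    return np.linalg.norm(X - Y, 'fro')

def lem_distance(X, Y):
    """Log-Euclidean metric (LEM): Frobenius distance between matrix logs."""
    return np.linalg.norm(spd_log(X) - spd_log(Y), 'fro')
```

The eigendecomposition behind `spd_log` is the expensive step, which is why the LEM timings in Table 1 exceed the EM timings.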

#### 5.1.2 CMU Motion Graph Database

We consider four different action classes, namely, walking, running, jumping, and swing. Each class contains 10 sequences, for a total of 40 sequences in our experiment. The skeletal joint sequences mark a fixed set of joints; only the root joint is represented by a location vector, while the other joints are represented by rotation-angle vectors. Each action frame is thus represented by a single feature vector. The SPD matrix is constructed by computing the covariance over fixed-length subwindows of frames; to guarantee connectivity between subwindows, adjacent subwindows overlap by a fixed number of frames, as mentioned in [3].
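The overlapping-subwindow construction can be sketched as follows (a minimal version; the window length, overlap, and jitter term stand in for the unspecified values):

```python
import numpy as np

def subwindow_covariances(frames, win, overlap, eps=1e-6):
    """Split a sequence of frame vectors into overlapping subwindows and
    return one covariance (SPD) matrix per subwindow."""
    step = win - overlap                 # adjacent windows share `overlap` frames
    covs = []
    for s in range(0, len(frames) - win + 1, step):
        W = frames[s:s + win]
        W = W - W.mean(axis=0, keepdims=True)
        covs.append(W.T @ W / (win - 1) + eps * np.eye(W.shape[1]))
    return covs
```

With a window of 8 frames and an overlap of 4, for instance, a 20-frame sequence yields four covariance matrices.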

In this experiment, we use the leave-one-out method to compute the recognition rate: each time, we choose one sequence of each class as the test set and the remaining sequences as the training set. The comparison results in Table 2 show the recognition accuracies for the different action classes; the recognition accuracy of our method is higher than that of the other three methods. Notably, the SPD matrix descriptor built on joint locations in our method obtains a surprisingly strong result.
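The leave-one-out protocol reduces to classifying each sample while it is held out in turn; a minimal sketch follows (the 1-NN rule is an assumption, since the text does not name the classifier):

```python
import numpy as np

def leave_one_out_accuracy(descriptors, labels, dist):
    """Leave-one-out 1-NN: each sample is classified by its nearest
    neighbour (under `dist`) among all remaining samples."""
    n = len(descriptors)
    D = np.array([[dist(descriptors[i], descriptors[j]) if i != j else np.inf
                   for j in range(n)] for i in range(n)])
    preds = [labels[np.argmin(D[i])] for i in range(n)]
    return np.mean([p == y for p, y in zip(preds, labels)])
```

Any of the metrics above (EM, LEM) can be passed in as `dist`, which is how the two Riemannian metrics are compared under an identical protocol.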

### 5.2 Human Face Recognition

In [24], we tested Lie-LPP on two human face databases, Yale Face DB and YTC DB. In this subsection, we further test our algorithm on another static human face database, YFB DB. Only two other algorithms, LEML [7] and SPD-ML [19], are similar to our method in that they reduce the dimensionality of the SPD matrix manifold. Here we compare our method with LEML, SPD-ML, the manifold learning algorithm LPP [5], and the linear dimensionality reduction method PCA [20].

#### 5.2.1 Extended Yale Face Database B

YFB DB [26] contains single-light-source images of a number of individuals, each seen under many near-frontal poses and varying illumination conditions; for every subject in a particular pose, an image with ambient illumination was also captured. The face region in each image is resized to a fixed resolution. We use the raw intensity feature to construct the corresponding SPD matrix, following [7]; the size of the SPD matrix for each image is determined by the image size.
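For illustration, one common way to turn raw intensities into an SPD descriptor is the region covariance of [14]; the paper itself follows the construction in [7], which differs, so the feature set below (pixel coordinates, intensity, gradient magnitudes) is purely an assumption:

```python
import numpy as np

def region_covariance(img, eps=1e-6):
    """Illustrative region-covariance descriptor in the spirit of [14].

    img: 2-D array of gray-level intensities.
    Returns a 5 x 5 SPD matrix over features (x, y, I, |dI/dy|, |dI/dx|).
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]                    # pixel coordinates
    gy, gx = np.gradient(img.astype(float))        # intensity gradients
    F = np.stack([xs, ys, img, np.abs(gy), np.abs(gx)], axis=-1)
    F = F.reshape(-1, 5).astype(float)             # one feature row per pixel
    F -= F.mean(axis=0, keepdims=True)
    C = F.T @ F / (F.shape[0] - 1)
    return C + eps * np.eye(5)                     # jitter keeps it strictly SPD
```

Whatever the exact feature map, the result is a small SPD matrix per image, which is the input expected by Lie-LPP, LEML, and SPD-ML alike.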

#### 5.2.2 Recognition

In the recognition step, SPD-ML [19] uses the affine-invariant metric and the Stein divergence to measure similarities among SPD matrices. LEML [7] uses LEM, under which the SPD matrix Lie group is locally isometric to its Lie algebra. LPP and PCA use EM to measure similarities among vector-form data points. For Lie-LPP, we also use LEM to compute geodesic distances between SPD matrices. Unlike the other algorithms, ours constructs a Laplace-Beltrami operator on the SPD matrix Lie group and then learns a more discriminative Lie group that preserves the geometric and algebraic structure of the original one.
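The projection learned by Lie-LPP can be caricatured as an LPP-style generalized eigenproblem posed on log-Euclidean coordinates. The sketch below is an illustration only, not the paper's exact construction: the heat-kernel weight, the ridge regularizer, and the half-vectorization are assumptions, and it returns a Euclidean embedding of the log-vectors rather than low-dimensional SPD matrices:

```python
import numpy as np

def spd_log_vec(S):
    """Half-vectorize log(S); the scaling makes the 2-norm equal ||log S||_F."""
    w, V = np.linalg.eigh(S)
    L = (V * np.log(w)) @ V.T
    i, j = np.triu_indices(L.shape[0])
    v = L[i, j] * np.sqrt(2.0)
    v[i == j] /= np.sqrt(2.0)                      # undo scaling on the diagonal
    return v

def gen_eigh(A, B):
    """Generalized symmetric eigenproblem A u = lam B u via Cholesky of B."""
    Ri = np.linalg.inv(np.linalg.cholesky(B))
    lam, Q = np.linalg.eigh(Ri @ A @ Ri.T)         # eigenvalues ascending
    return lam, Ri.T @ Q                           # columns are eigenvectors u

def lpp_log_euclidean(spds, dim, t=1.0):
    Y = np.array([spd_log_vec(S) for S in spds])   # n x D log-coordinates
    D2 = ((Y[:, None] - Y[None, :]) ** 2).sum(-1)  # squared LEM distances
    W = np.exp(-D2 / t)                            # heat-kernel affinities
    Dg = np.diag(W.sum(axis=1))
    L = Dg - W                                     # graph Laplacian
    A = Y.T @ L @ Y
    B = Y.T @ Dg @ Y + 1e-8 * np.eye(Y.shape[1])   # small ridge for stability
    _, U = gen_eigh(A, B)
    return Y @ U[:, :dim]                          # n x dim embedding
```

As in LPP, the smallest generalized eigenvectors define the projection; the locality-preserving weights here are computed from LEM distances rather than Euclidean ones.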

For YFB DB, we select a fixed number of images per subject and run the experiment four times with each algorithm. In each run, we randomly choose a subset of the images per subject as the training set and use the remaining images as the test set. For PCA and LPP, we choose the dimension of the low-dimensional space after reduction; for Lie-LPP, LEML, and SPD-ML, we choose the reduced size of the SPD matrix so that the output dimension of PCA and LPP equals that of Lie-LPP. The recognition accuracies are reported in Table 3, where we use the same classification methods as LEML and SPD-ML in the recognition step. Table 3 shows that Lie-LPP outperforms PCA and LPP in every case. Comparing Lie-LPP with LEML and SPD-ML, Lie-LPP is only slightly higher than LEML for one choice of training images per subject and clearly outperforms both LEML and SPD-ML for the others. These results show that dimensionality reduction of SPD matrices by Lie-LPP is more effective than that of LEML and SPD-ML. In addition, LEML and SPD-ML require several parameters, whereas our algorithm only needs to analyze the Laplacian operator of the SPD matrix Lie group and solves the dimensionality reduction problem directly on it.

## 6 Conclusions and Future Directions

In summary, the following are the main conclusions of this paper:

1. We construct a Laplace-Beltrami operator on the SPD matrix Lie group and give the corresponding discrete Laplacian matrix.
2. We extend the manifold learning algorithm LPP to Lie-LPP on the SPD matrix Lie group and show that Lie-LPP can be successfully applied to human action recognition and human face recognition.
3. Our analysis of the geometric and algebraic structure of the SPD matrix Lie group shows that it is a complete manifold whose sectional curvature vanishes at every point; the SPD matrix Lie group is therefore flat and locally isometric to its Lie algebra.
4. Lie-LPP is not a simple application of the idea in [5]; it is a substantial extension of the LPP algorithm.
5. We analyze the relationship between Lie-LPP and LPP in theory and obtain three theoretical conclusions.
6. Our experiments show that Lie-LPP significantly outperforms existing methods.

In the future, we will further improve this algorithm. We will attempt to add a time dimension to enhance human action recognition, introduce manifold learning algorithms to other types of Lie groups, and introduce new manifold learning algorithms on higher-dimensional tensor space.

## Acknowledgments

This work is supported by the National Key Research and Development Program of China under grant no. 2016YFB1000902; the National Natural Science Foundation of China project Nos. 61232015, 61472412, and 61621003; the Beijing Science and Technology Project on Machine Learning-based Stomatology; and the Tsinghua-Tencent-AMSS-Joint Project on WWW Knowledge Structure and its Application.

## References

1. Ma A J, Yuen P C, Zou W W W, et al. Supervised spatio-temporal neighborhood topology learning for action recognition. IEEE Trans Circuits and Systems for Video Technology, 2013, 23(8): 1447-1460
2. Hussein M E, Torki M, Gowayyed M A, et al. Human action recognition using a temporal hierarchy of covariance descriptors on 3D joint locations. In: Proceedings of the 23rd International Joint Conference on Artificial Intelligence, Beijing, 2013. 2466-2472
3. Vemulapalli R, Arrate F, Chellappa R. Human action recognition by representing 3D skeletons as points in a Lie group. In: Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, 2014. 82
4. Sanin A, Sanderson C, Harandi M T, et al. Spatio-temporal covariance descriptors for action and gesture recognition. In: Proceedings of the 2013 IEEE Workshop on Applications of Computer Vision, Washington, 2013. 103-110
5. He X F, Niyogi P. Locality preserving projections. In: Proceedings of the 16th Annual Conference on Neural Information Processing Systems, Chicago, 2003
6. He X F, Cai D, Niyogi P. Tensor subspace analysis. In: Proceedings of the 18th Annual Conference on Neural Information Processing Systems, British Columbia, 2005. 499-506
7. Huang Z W, Wang R P, Shan S G, et al. Log-Euclidean metric learning on symmetric positive definite manifold with application to image set classification. In: Proceedings of the 32nd International Conference on Machine Learning, Lille, 2015. 720-729
8. Kwatra V, Han M. Fast covariance computation and dimensionality reduction for sub-window features in images. In: Proceedings of the 11th European Conference on Computer Vision: Part II, Heraklion, 2010. 156-169
9. Wang L, Suter D. Learning and matching of dynamic shape manifolds for human action recognition. IEEE Transactions on Image Processing, 2007, 16(6): 1646-1661
10. Tuzel O, Porikli F, Meer P. Pedestrian detection via classification on Riemannian manifolds. IEEE Transactions on PAMI, 2008, 30(10): 1713-1727
11. Porikli F, Tuzel O, Meer P. Covariance tracking using model update based on Lie algebra. In: Proceedings of the 2006 IEEE Conference on Computer Vision and Pattern Recognition, Washington, 2006. 94
12. Porikli F, Tuzel O. Fast construction of covariance matrices for arbitrary size image windows. In: Proceedings of the International Conference on Image Processing, Atlanta, 2006. 1581-1584
13. Tabia H, Laga H, Picard D, et al. Covariance descriptors for 3D shape matching and retrieval. In: Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, 2014. 533
14. Tuzel O, Porikli F, Meer P. Region covariance: a fast descriptor for detection and classification. In: Proceedings of the European Conference on Computer Vision: Part II, Graz, 2006. 589-600
15. Roweis S T, Saul L K. Nonlinear dimensionality reduction by locally linear embedding. Science, 2000, 290(5500): 2323-2326
16. Tenenbaum J B, de Silva V, Langford J C. A global geometric framework for nonlinear dimensionality reduction. Science, 2000, 290(5500): 2319-2323
17. Belkin M, Niyogi P. Laplacian eigenmaps and spectral techniques for embedding and clustering. In: Advances in Neural Information Processing Systems, 2001. 585-591
18. Harandi M, Sanderson C, Wiliem A, et al. Kernel analysis over Riemannian manifolds for visual recognition of actions, pedestrians and textures. In: Proceedings of the IEEE Workshop on Applications of Computer Vision, Breckenridge, 2012. 6163005
19. Harandi M, Salzmann M, Hartley R. From manifold to manifold: geometry-aware dimensionality reduction for SPD matrices. In: Proceedings of the European Conference on Computer Vision, Zurich, 2014. 17-32
20. Smith L I. A tutorial on principal components analysis. Technical report, 2002
21. Pennec X, Fillard P, Ayache N. A Riemannian framework for tensor computing. International Journal of Computer Vision, 2006, 66(1): 41-66
22. Li X, Hu W, Zhang Z, et al. Visual tracking via incremental log-Euclidean Riemannian subspace learning. In: Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, 2008. 4587516
23. Arsigny V, Fillard P, Pennec X, et al. Geometric means in a novel vector space structure on symmetric positive-definite matrices. SIAM Journal on Matrix Analysis and Applications, 2007, 29(1): 328-347
24. Li Y Y. Locally preserving projection on symmetric positive definite matrix Lie group. In: Proceedings of the International Conference on Image Processing, Beijing, 2017
25. Müller M, Röder T, Clausen M, et al. Documentation: Mocap database HDM05. Tech. Rep. CG-2007-2, Universität Bonn, 2007
26. Yale Univ. Face database. http://cvc.yale.edu/projects/yalefaces/yalefaces.html
27. Belkin M, Niyogi P. Towards a theoretical foundation for Laplacian-based manifold methods. Journal of Computer and System Sciences, 2005, 74(8): 1289-1308
28. Willmore T. Riemannian Geometry. Oxford: Oxford University Press, 1997