Self Functional Maps
Abstract
A classical approach for surface classification is to find a compact algebraic representation for each surface that would be similar for objects within the same class and preserve dissimilarities between classes. We introduce self functional maps as a novel surface representation that satisfies these properties, translating the geometric problem of surface classification into an algebraic form of classifying matrices. The proposed map transforms a given surface into a universal isometry invariant form defined by a unique matrix. The suggested representation is realized by applying the functional maps framework to map the surface into itself. The key idea is to use two different metric spaces of the same surface for which the functional map serves as a signature. Specifically, in this paper, we use the regular and the scale invariant surface Laplacian operators to construct two families of eigenfunctions. The result is a matrix that encodes the interaction between the eigenfunctions resulting from two different Riemannian manifolds of the same surface. Using this representation, geometric shape similarity is converted into algebraic distances between matrices.
O. Halimi and R. Kimmel
Faculty of Electrical Engineering, Technion – Israel Institute of Technology, Haifa 32000, Israel
Faculty of Computer Science, Technion – Israel Institute of Technology, Haifa 32000, Israel
Since the scientific revolution attributed to René Descartes in the 17th century, researchers have been occupied with the problem of bridging between geometry and algebra. With the recent reincarnation of convolutional neural networks, some of the fundamental difficulties of translating geometric problems into algebraic ones are resurfacing. In this paper we deal with the topic of computational surface classification. Surfaces given as two-dimensional curved manifolds are encountered in growing numbers on the Internet due to the rapid development of 3D sensing devices and 3D modeling techniques. Shape retrieval and classification problems are of great importance in the current age, when the prevalence of 3D modeling is accelerating in a variety of fields like e-commerce, virtual and augmented reality, computer aided design, 3D printing, medical 3D modeling, video games, the movie industry, and 3D mapping. The rapid growth of the amount of geometric data requires the design of efficient content-based shape retrieval engines that could perform tasks similar to those of existing search engines for textual information, using the shape itself as a query. For a proper operation of a geometry retrieval engine, the result should contain all occurrences of a query shape up to a predefined group of transformations. It is often desired to find a compact representation for a given shape that is invariant under transformations that keep the shape within its class.
The properties of good signatures are compactness, structured representation, computational efficiency in extraction and retrieval, descriptiveness and robustness to deformations, and invariance to different parameterizations of the surface. The essence of a signature is to provide effective means to measure reliable similarity between shapes. A complete review on the topic of descriptors for 3D retrieval is beyond the scope of this paper, and we refer the reader to [LH14] for a comprehensive survey of spectral shape descriptors for nonrigid 3D shape retrieval, and to [LBZ13] for a review with emphasis on partial shape retrieval. For a recent review on the problem of 3D shape similarity assessment and the approaches for addressing it we refer to [BCBB16]. Here, we provide a short incomplete sampling of recent efforts. In [FMK03] an orientation invariant 3D shape descriptor based on spherical harmonics was proposed. The requirement of pose invariance led to the use of LBO-based spectral geometry. In [RWP06], the authors proposed the use of Laplace–Beltrami operator spectra as a “ShapeDNA” of surfaces and solids, achieving isometry invariance. The global point signature (GPS) was introduced in [Rus07], where the histogram of pairwise distances in the GPS embedding space was used as a shape signature. In [SOG09], the heat kernel signature (HKS) was suggested, and in [DLL10] a pose-oblivious matching algorithm was explored using feature vectors calculated at the set of persistent HKS maxima. In [BK10] a scale-invariant version of the HKS was proposed. Isometry invariant volumetric descriptors based on a volumetric extension of the HKS were suggested in [RBBK10]. Later on, the “bag of geometric words” approach for 3D shape retrieval was introduced by [BBGO11], presenting Shape Google for isometry invariant shape retrieval. The same paradigm for partial shape retrieval was used in [LSF11, Lav12].
Intrinsic Shape Context Descriptors (ISC) were introduced in [KBLB12], generalizing shape context descriptors, previously used for analysis of planar shapes, to surfaces.
Another perspective of the problem of measuring shape similarity is treating each surface as a metric space and using the Gromov–Hausdorff distance between such geometric structures. The Gromov–Hausdorff framework for nonrigid shape comparison was first suggested in [MS05]. The same definition of the Gromov–Hausdorff distance [BBI] as the distortion of embedding one metric space into another led to the introduction of the generalized MDS framework (GMDS) for measuring nonrigid shape similarity [BBK06]. In a follow-up paper [BBK10], the geodesic distances used in the Gromov–Hausdorff distance were replaced with diffusion distances, while in [ADK16] the GMDS problem was translated into the spectral domain. In [BBG94] the Riemannian manifold was embedded in the heat kernel space, and the Gromov–Hausdorff distance was measured in that space. In [Mém11] the Gromov–Wasserstein distance was defined as a reformulation of the distance measure between the shapes.
A different approach for quantifying shape similarity is to embed the given surfaces into a new space in which isometry is translated into simple transformations. One of the early papers that exploited this concept is [SSW89], where surfaces were flattened to a plane while preserving their geodesic distances for the purpose of finding a unique parametrization of cortical surfaces in computer aided neuroanatomy. Shape similarity can be treated with conformal geometry tools, locally preserving the angles on the original surface. For example, in [WWJ07] quasiconformal maps for matching surfaces were explored. Shape descriptors based on conformal geometry were proposed in [BCG08], using the conformal factor to quantify dissimilarity. Comparing shapes using the conformal Wasserstein distance was suggested in [LD11], measuring the optimal mass transportation distance between conformal factor profiles obtained by mapping surfaces to a unit disc. A different type of embedding tries to preserve the metric properties of the original curved surface. Embedding of the surface into a low dimensional Euclidean space that translates the geodesic distances between surface points into distances in the Euclidean space was suggested in [EK03], creating isometry invariant signatures in the embedding space, referred to as canonical forms. Canonical forms were used for face recognition in [BBK05]. In [LJ07] a similar method was suggested for the classification of planar shapes. In [SK17] the Geodesic Distance Descriptor (GDD) embedding was proposed, mapping the original surface to the complex Euclidean space. Diffusion maps [CL06] can be regarded as yet another type of embedding into a Euclidean space, preserving the diffusion distances defined on the surface. Finally, in [QH07], a graph embedding procedure was proposed, preserving the commute time between the nodes.
Here, we treat each surface as an interaction between two specific metric spaces, where the spectral analysis of such an interaction provides a useful representation for various geometry analysis and processing tasks.
Functional maps [OBCS12] were introduced as an approach for transferring functions between shapes without using explicit knowledge of the point-to-point correspondence between the shapes. They provide a way to transform the representation of a function in the basis defined on the original shape to the representation of its mapped version in the basis of the destination shape. Due to their optimality in representing smooth functions [BGC17, ABB16], the eigenfunctions of the Laplace–Beltrami operator (LBO) were chosen as a preferred basis. Still, the method is not restricted to a specific choice of bases. The transformation between the representation coefficients is linear and can be represented by a matrix that can be calculated using knowledge about the way known descriptors map from one shape to the other. Here, we extend this concept and define the self functional map as a convenient universal representation of a given surface. We will demonstrate the power of this representation as a shape signature for classification.
Functional maps have been extended in various directions that can also be applied to self functional maps. In [NMR] it was suggested to extend the original basis with pointwise products of the leading basis eigenfunctions. This method can be used to compactly and accurately compute functional maps using only the first few basis elements, which are considered to be more stable than the higher frequencies extracted numerically from the LBO. Following the definitions in [SK15], the original basis can be enriched by adding the inner products of the spectral gradient fields, as well as the cross products between two spectral gradient fields in the normal direction. In [RCB17] it was suggested that there is an underlying relation between the functional maps of a shape and a part of that shape. A similar analysis can be applied to relate the self functional map of a shape as a whole to that of its partial version.
A surface coupled with a metric tensor defines a Riemannian manifold. The same surface can thus produce different Riemannian manifolds using different metrics. The self functional map is defined as a functional map between two Riemannian manifolds generated from the same surface. Choosing a pair of metrics on the surface, the Laplace–Beltrami operator is determined for each metric, and the eigendecomposition of the operators yields two different sets of basis functions. The self functional map transforms the representation coefficients of a scalar function between two different sets of basis functions that are defined by the different metrics on the same surface. The choice of the basis functions for the definition of a functional map is important. To obtain informative structures in the off-diagonal entries of the self functional maps, the basis functions should be derived from significantly different metrics. With an appropriate choice of metrics, the resulting self functional maps yield unique algebraic structures that can serve for object classification. When the basis functions and the inner products are both isometry invariant, the resulting self functional map is isometry invariant by construction. In this paper the two specific metrics we have chosen to explore are the regular and the scale invariant ones. See, for example, [AKR13, RBB11] for scale- and affine-invariant metrics.
The suggested framework bridges two apparently different methodologies, namely, the spectral transfer of functions between shapes and the notion of invariant shape signatures. We introduce a signature that reflects the interaction between two different metric spaces of the same manifold. The suggested self functional maps translate geometry analysis from its native mesh coordinates, which lack the property of being universal, to an image processing like domain, with a regular structure and grid coordinates. The result is an efficient and compact representation, isometry invariant by construction, that could be extended to other groups of transformations beyond the isometry presented here.
In this paper, a shape in 3D is considered as a surface, identifying the shape with its boundary, where, by definition, a surface is a two-dimensional manifold. Assigning a metric tensor $g$ to the surface $S$, it can be treated as a Riemannian manifold denoted as $(S, g)$. For the same surface, different metric tensors can be assigned, like $g$ and $\tilde g$, resulting in different Riemannian manifolds, $(S, g)$ and $(S, \tilde g)$, respectively. Given the Riemannian metric tensor, geometric quantities such as lengths of curves, distances on the manifold, angles, and curvatures can be defined in terms of the metric tensor. Consider a parametric surface $S(x^1, x^2) : \Omega \subset \mathbb{R}^2 \to \mathbb{R}^3$. Locally, the vectors $S_1 = \partial_{x^1} S$ and $S_2 = \partial_{x^2} S$ form a basis for the tangent plane $T_p S$ about each surface point $p$. The regular metric tensor is defined as
$$ g_{ij} = \langle S_i, S_j \rangle, \qquad i, j \in \{1, 2\}, $$
where $S_i = \partial_{x^i} S$. The metric is used to define an infinitesimal measure of length on the surface. Given a vector $v = (dx^1, dx^2)^T$ on the tangent plane defined at point $p$, represented in the local coordinates of the parametrization, a small displacement about $p$ can be written as $dS = S_1\, dx^1 + S_2\, dx^2$, and an arc length on the surface is thereby defined as
$$ ds^2 = \langle dS, dS \rangle = g_{ij}\, dx^i dx^j = v^T G\, v, $$
where we used Einstein summation convention in the third term, and $G$ denotes the metric tensor in matrix form. Let $p_1, p_2$ be two points on the surface, and let $\gamma(t) : [0, 1] \to S$ be a parameterized curve on the surface connecting these points, so that $\gamma(0) = p_1$ and $\gamma(1) = p_2$. Then, the length of the trajectory is given by summing the infinitesimal lengths along the curve,
$$ L(\gamma) = \int_\gamma ds = \int_0^1 \sqrt{ g_{ij}\, \dot\gamma^i(t)\, \dot\gamma^j(t) }\, dt = \int_0^1 \sqrt{ \dot\gamma(t)^T G\, \dot\gamma(t) }\, dt. $$
The distance between two points on the surface is calculated by taking the minimum of $L(\gamma)$ over all possible trajectories $\gamma$ connecting the two points, $d(p_1, p_2) = \min_\gamma L(\gamma)$.
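To make the arc-length formula concrete, the following sketch numerically integrates $L(\gamma)$ for a curve given in local coordinates. The function names and the simple midpoint quadrature are our own illustrative choices, not part of the paper.

```python
import numpy as np

def curve_length(gamma, metric, n=1000):
    """Approximate L(gamma) = int_0^1 sqrt(dgamma^T G(gamma) dgamma) dt
    by summing segment lengths, evaluating the metric at segment midpoints."""
    t = np.linspace(0.0, 1.0, n + 1)
    pts = np.array([gamma(s) for s in t])      # samples of the curve in parameter space
    d = np.diff(pts, axis=0)                   # finite-difference displacements dx
    mids = 0.5 * (pts[:-1] + pts[1:])
    return sum(np.sqrt(dp @ metric(m) @ dp) for dp, m in zip(d, mids))

# Unit sphere in (theta, phi) coordinates: G = diag(1, sin^2 theta).
sphere_metric = lambda p: np.diag([1.0, np.sin(p[0]) ** 2])
# A meridian from pole to pole: theta goes from 0 to pi at fixed phi.
meridian = lambda s: np.array([np.pi * s, 0.0])
L_meridian = curve_length(meridian, sphere_metric)
print(L_meridian)  # ≈ pi, the geodesic distance between the poles
```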
Two shapes are isometric if there exists a mapping from one surface to the other such that all the pairwise distances on the surfaces are preserved. Using the regular metric, isometries model the different pose variations that semi-rigid bodies can undergo. Therefore, shape signatures are expected to be isometry invariant.
In [AKR13] a local scale-invariant pseudometric was defined as
$$ \tilde g_{ij} = |K|\, g_{ij}. $$
Scale invariance in differential terms was achieved by multiplying the regular metric by $|K|$, the absolute value of the local Gaussian curvature. Intuitively, with this metric definition, the measure of interest is not the actual distance on the surface but rather the distance normalized by the geometric mean of the principal curvature radii. With this definition, the pseudometric is both isometry-invariant and invariant under semi-local scaling of the shape. From the perspective of the scale invariant metric, regions of zero Gaussian curvature, such as flat or cylindrical regions, effectively shrink to a point. In order to make the scale invariant pseudometric a proper metric and resolve the degeneracy where the Gaussian curvature vanishes, Aflalo et al. slightly modified the metric to
$$ \tilde g_{ij} = (|K| + \epsilon)\, g_{ij}, $$
for some small positive constant $\epsilon$.
Similarity between shapes can have different interpretations depending on the metric one chooses to use. Two given shapes can be isometric from the point of view of one metric and non-isometric from the point of view of a different one. To exemplify this statement, consider the two different Riemannian manifolds defined above, namely, the regular one defined by $g$ and the scale invariant one defined by $\tilde g$. Considering a sphere of radius one, the pairwise distances on the surface overlap in both geometries, as the geodesic arc length and the scale invariant arc length coincide. Scaling the sphere by a uniform factor results in non-isometric surfaces from the regular perspective and isometric surfaces from the scale invariant one. Returning to the original sphere of radius one, assume the sphere is cut symmetrically and the halves are glued with a thin strip to form a spherocylinder of cylinder height $h$, as shown in Figure 2.
From the regular perspective, when $h \to 0$, the deformed sphere is almost isometric to the original one, as the introduction of the infinitesimal cylinder connecting the two half-spheres has almost no influence on the geodesic distances. In contrast, from the scale invariant point of view, this deformation has a dramatic effect, since on the cylindrical part the Gaussian curvature vanishes. The introduction of the infinitesimal cylinder leads to trajectories of scale invariant length approaching zero between every pair of points along the equator, effectively shrinking all the points along the equator to a single point. Next, assume the spherocylinder height is gradually extended. As a consequence, the geodesic distances between pairs of points on different halves of the sphere would increase when the regular metric is used, resulting in a non-isometric family of surfaces, while from the scale invariant point of view, all the resulting spherocylinders with varying $h$ are still isometric to one another. This example emphasizes the fact that similarity is influenced by the method used to measure distances on the surface, that is, by the definition of a metric tensor. Considering two different metric tensors can be thought of as two different observations of the same surface, where each is sensitive to different types of deformations.
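The sphere part of the example can be verified numerically. The snippet below compares the length of a meridian under the regular metric and under the scale invariant one $|K|\, g$ (for a sphere of radius $R$, $K = 1/R^2$); the closed-form metric components and the function name are our own illustrative setup.

```python
import numpy as np

def meridian_length(R, scale_invariant=False, n=2000):
    """Length of a pole-to-pole meridian on a sphere of radius R.
    Regular metric component along theta: g_11 = R^2; scale invariant: |K| g_11 = 1."""
    theta = np.linspace(0.0, np.pi, n + 1)
    g11 = np.full(n, R ** 2)
    if scale_invariant:
        g11 = g11 / R ** 2        # |K| = 1/R^2 cancels the global scale
    return np.sum(np.sqrt(g11) * np.diff(theta))

print(meridian_length(1.0), meridian_length(3.0))              # pi and 3*pi
print(meridian_length(1.0, True), meridian_length(3.0, True))  # both pi
```

As expected, uniformly scaled spheres are non-isometric under the regular metric but isometric under the scale invariant one.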
Shape analysis can be performed in the spectral domain, using the eigendecomposition of the Laplace–Beltrami operator. Modeling the surface as a two-dimensional Riemannian manifold $(S, g)$, possibly with boundary $\partial S$, the Laplace–Beltrami operator (LBO) generalizes the classical Laplacian to the Riemannian manifold. In local coordinates, the LBO can be expressed as
$$ \Delta_g f = \frac{1}{\sqrt{|g|}}\, \partial_i \left( \sqrt{|g|}\, g^{ij}\, \partial_j f \right), $$
where $g^{ij}$ are the components of the inverse of the metric tensor and $|g|$ is its determinant. The Laplace–Beltrami operator admits an eigendecomposition
$$ \Delta_g\, \phi_i = \lambda_i\, \phi_i, $$
$$ \langle \nabla_g\, \phi_i, \hat n \rangle = 0 \quad \text{on } \partial S, $$
with Neumann boundary condition for a surface with boundary, where $\hat n$ is the normal to the surface along the boundary, and $\nabla_g$ is the intrinsic gradient defined on the Riemannian manifold. The eigendecomposition yields a discrete set of eigenfunctions that are invariant to isometries. The eigenfunctions of the LBO on a regularly parametrized torus define the Fourier transform, while its decomposition on general compact Riemannian manifolds produces eigenfunctions that, when ordered by their corresponding eigenvalues, or frequencies, have been shown to be optimal for representing smooth functions on the manifold [BGC17, ABB16]. The invariance of the operator with respect to the metric it was constructed from makes it particularly useful for shape analysis. For example, pose variations can be modeled as isometries in their natural sense with respect to the regular metric $g$. Thus, articulated shapes would share similar spectral properties with respect to $\Delta_g$.
For a comprehensive review of the literature in the field of spectral geometry we refer the reader to [LH14]. We provide here a short sampling of the literature. Diffusion maps [CL06] have been used to embed a surface in the Euclidean space defined by the LBO eigenfunctions and eigenvalues; the Euclidean distance in the embedded space is equivalent to the diffusion distance on the surface. In [LBB11] the authors introduced a diffusion-geometric framework for stable component detection in nonrigid 3D shapes, analogous to MSER in image analysis. Heat kernel signatures (HKS) [SOG09] were used as point descriptors defined by the LBO eigenfunctions and eigenvalues, measuring the rate of virtual heat dissipation from a surface point. It has been shown that the HKS can be applied for detection of interest points on the surface as well as for shape matching and symmetry detection [OMMG10]. The wave kernel signature (WKS) [ASC11b] is a point descriptor that was introduced by solving the Schrödinger equation on the surface. Treating the solution as a quantum wave function that describes the position of some quantum particle, the WKS represents its probability to be found at a specific point on the surface; it, too, can be defined by the LBO eigenfunctions and eigenvalues. The authors of [ASC11a] demonstrated the usefulness of the descriptor for pose-consistent shape segmentation. The global point signature (GPS) [Rus07] can be used to represent the surface in a Euclidean space with coordinates defined by the eigenfunctions and eigenvalues of the LBO. This embedding can be used for pose invariant segmentation and for shape classification using the histogram of pairwise distances between uniformly sampled surface points.
Defining the scale invariant Laplace–Beltrami operator [AKR13], the conformal relation between the regular and the scale invariant metrics yields the following relation between the regular operator and the scale invariant one,
$$ \Delta_{\tilde g} = \frac{1}{|K|}\, \Delta_g. $$
The derivation of this property for surfaces is provided in Appendix XX09. The eigendecomposition of the scale invariant LBO yields an isometry-invariant and differentially scale-invariant discrete set of eigenfunctions defined by
$$ \Delta_{\tilde g}\, \psi_i = \tilde\lambda_i\, \psi_i. $$
Comparing the eigenfunctions of the two operators, the scale-invariant LBO eigenfunctions are characterized by rapid changes of phase in curved regions, while in flat regions the phase is approximately constant. For the regular LBO, changes of phase of the eigenfunctions are observed in flat regions as well. See Figure 3.
In the discrete domain, for triangulated surfaces the LBO is approximated using the cotangent weights scheme, see [MDSB03], where
$$ L = A^{-1} W $$
approximates the regular LBO $\Delta_g$, and
$$ \tilde L = (A \tilde K)^{-1} W $$
is the scale invariant version approximating $\Delta_{\tilde g}$. Here, $W$ is the cotangent weights matrix. It is defined by the angles $\alpha_{ij}$ and $\beta_{ij}$ about the edge $ij$ that are depicted in Figure 4. Define $w_{ij} = \frac{1}{2}\left(\cot \alpha_{ij} + \cot \beta_{ij}\right)$, then
$$ W_{ij} = \begin{cases} w_{ij} & ij \in E, \\ -\sum_{k \neq i} w_{ik} & i = j, \\ 0 & \text{otherwise}, \end{cases} $$
where $E$ is the set of all edges of our triangulated surface, and $A$ is a diagonal matrix of the per-vertex area. That is, $A_{ii}$ is a third of the sum of the areas of all the triangles containing vertex $i$. $\tilde K$ is a diagonal matrix, where $\tilde K_{ii}$ is the absolute value of the Gaussian curvature at vertex $i$, approximated, for example, by the angular deficiency formula, [XX09], and regularized by
$$ \tilde K_{ii} = \frac{ \left| 2\pi - \sum_j \theta^i_j \right| }{ A_{ii} } + \epsilon, $$
where $\theta^i_j$ is the angle at vertex $i$ of the $j$th triangle that contains vertex $i$, and $\epsilon$, as in the continuous case, is a small constant introduced to keep the metric valid where the Gaussian curvature vanishes.
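The discretization above can be sketched in a few lines. This is a minimal illustrative implementation of the cotangent weights, the lumped per-vertex areas, and the angular-deficiency curvature; the function and variable names are our own choices.

```python
import numpy as np

def discrete_operators(V, F, eps=1e-3):
    """Cotangent weight matrix W, lumped vertex areas A (a third of the
    adjacent triangle areas), and regularized curvature K = |angle deficit| / A + eps.
    V: (n, 3) array of vertex positions; F: list of index triples."""
    n = len(V)
    W = np.zeros((n, n))
    A = np.zeros(n)
    angle_sum = np.zeros(n)
    for tri in F:
        for k in range(3):
            i, j, o = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            u, v = V[i] - V[o], V[j] - V[o]        # edge (i, j) seen from vertex o
            cos_a, sin_a = u @ v, np.linalg.norm(np.cross(u, v))
            W[i, j] += 0.5 * cos_a / sin_a         # half-cotangent of the opposite angle
            W[j, i] += 0.5 * cos_a / sin_a
            angle_sum[o] += np.arctan2(sin_a, cos_a)
        area = 0.5 * np.linalg.norm(np.cross(V[tri[1]] - V[tri[0]], V[tri[2]] - V[tri[0]]))
        for idx in tri:
            A[idx] += area / 3.0
    np.fill_diagonal(W, -W.sum(axis=1))            # rows sum to zero
    K = np.abs(2.0 * np.pi - angle_sum) / A + eps  # meaningful at interior vertices
    return W, A, K
```

Then $A^{-1} W$ approximates the regular LBO and $(A K)^{-1} W$ its scale invariant counterpart.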
Functional maps provide a tool for transferring functions between surfaces without establishing an explicit point-to-point correspondence between them. Instead, the functional mapping is constructed using linear constraints derived from partial knowledge about the mapping. Let $S_1$ and $S_2$ be two shapes, related to each other by the transformation $T : S_1 \to S_2$. Let $\mathcal{F}(S_1)$ be the real function space defined on $S_1$ and $\mathcal{F}(S_2)$ be the real function space defined on $S_2$. The functional map $T_F : \mathcal{F}(S_1) \to \mathcal{F}(S_2)$ maps every scalar function defined on $S_1$ to its corresponding function defined on $S_2$. $T_F$ is called the functional representation of the mapping $T$. It has been shown that $T$ and $T_F$ can be recovered from each other. Clearly, if $T_F$ is known, the vertex correspondence can be calculated by mapping delta functions concentrated at each vertex from one shape to the other. Specifically, it has been shown that if the transformation is known, at least for some descriptor functions, the functional mapping can be constructed. Assume the orthonormal bases $\{\phi_i\}$ and $\{\psi_j\}$ defined on each surface, respectively. Suppose a function $f$ is defined on $S_1$, with the basis expansion
$$ f = \sum_i a_i\, \phi_i. $$
The mapped function $T_F f$ on $S_2$ can be expanded in the basis $\{\psi_j\}$ as
$$ T_F f = \sum_j b_j\, \psi_j, $$
and the relation between the expansion coefficients is linear and given by
$$ b = C a, $$
where the matrix $C$ is the functional map matrix, given by
$$ C_{ji} = \langle T_F\, \phi_i, \psi_j \rangle. $$
To solve for the matrix $C$, linear constraints are derived from the knowledge of specific corresponding functions on the two surfaces. Given a pair of corresponding functions $f$ and $T_F f$ with coefficient vectors $a$ and $b$ in the bases $\{\phi_i\}$ and $\{\psi_j\}$, respectively, the correspondence imposes the following linear constraint on $C$,
$$ C a = b. $$
Corresponding functions are functions that preserve their value under the mapping $T$. For example, if $T$ is an isometry between the shapes, HKS or WKS signatures can serve as corresponding functions; or, if a corresponding landmark or segment is given, the distance function from it corresponds between the shapes. Each pair of corresponding functions is translated into a linear constraint. Thus, the requirement for specific knowledge of the point-to-point correspondence is replaced by the relaxed requirement of knowledge about function correspondence. In addition, other constraints can be imposed, like commutativity with respect to specific operators, such as the Laplace–Beltrami operator and symmetry operators. Finally, the resulting optimization problem can be solved using numeric linear solvers.
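In the discrete setting, stacking the coefficient constraints $C a_p = b_p$ over all corresponding function pairs yields a least-squares problem for $C$. The following is a toy sketch with synthetic, noise-free correspondences of our own making, not data from the paper:

```python
import numpy as np

# Toy setup: k basis coefficients, m >= k corresponding function pairs.
rng = np.random.default_rng(0)
k, m = 5, 20
C_true = rng.standard_normal((k, k))   # the (here known) functional map
A = rng.standard_normal((k, m))        # columns a_p: coefficients on the source shape
B = C_true @ A                         # columns b_p = C a_p: coefficients on the target

# Each correspondence imposes C a_p = b_p; stacked, this is min_C ||C A - B||_F,
# solved here via the pseudoinverse.
C_est = B @ np.linalg.pinv(A)
print(np.linalg.norm(C_est - C_true))  # ≈ 0 for noise-free correspondences
```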
Using two different metric tensors $g$ and $\tilde g$, the same surface $S$ can be regarded as two different Riemannian manifolds, $(S, g)$ and $(S, \tilde g)$, respectively. Next, we harness the functional maps formulation to define the self functional map as the functional map between the two different Riemannian manifolds $(S, g)$ and $(S, \tilde g)$. The transformation $T$ is defined as the trivial identity map from the surface to itself, while the basis functions are defined as the eigenfunctions of the corresponding Laplace–Beltrami operators calculated with respect to the two Riemannian metrics,
$$ \Delta_g\, \phi_i = \lambda_i\, \phi_i, $$
$$ \Delta_{\tilde g}\, \psi_i = \tilde\lambda_i\, \psi_i. $$
The self functional map of the surface $S$ is defined as
$$ C_{ji}(S) = \langle \phi_i, \psi_j \rangle, $$
where the inner product can be calculated with respect to $g$ or $\tilde g$, depending on the direction of the transformation. The self functional map is a fundamental representation of the shape, reflecting the interaction between the eigenfunctions of the Laplace–Beltrami operators calculated on different Riemannian manifolds originating from the surface that defines the shape. These metrics should be different, yet intrinsic, in the sense that distances can be computed without reference to the embedding space. One possible way to construct the self functional map is to use the regular metric tensor $g$ and the scale invariant metric tensor $\tilde g$.
The self functional maps are isometry invariant by construction, since the eigenfunctions and the inner product are isometry invariant. Therefore, for two isometric surfaces $S_1$ and $S_2$, it holds that
$$ C(S_1) = C(S_2). $$
An interesting property of the self functional map is that it encodes the transformation of representation coefficients between basis functions derived from different metrics, even between different isometric surfaces. Assume two isometric shapes $S_1$ and $S_2$, and denote by $\{\phi^1_i\}$ and $\{\phi^2_i\}$ the regular LBO eigenfunctions calculated on the isometric surfaces $S_1$ and $S_2$, respectively. Since the regular Laplace–Beltrami operator is isometry invariant, the basis functions produced in this way are also isometry invariant. Assume the isometric transformation between the shapes is given by $T : S_1 \to S_2$; then, from the isometry invariance of the eigenfunctions it follows that
$$ \phi^2_i = \phi^1_i \circ T^{-1}. $$
In addition, assume we calculate on both shapes, $S_1$ and $S_2$, a basis of the scale invariant LBO eigenfunctions, denoted by $\{\psi^1_i\}$ and $\{\psi^2_i\}$, respectively. Since the scale invariant Laplace–Beltrami operator is isometry invariant, it follows that
$$ \psi^2_i = \psi^1_i \circ T^{-1}. $$
Suppose we want to construct a functional map between $S_1$ and $S_2$, where in $S_1$ we use the regular LBO eigenfunctions $\{\phi^1_i\}$ for representing functions, while in $S_2$ we use the scale invariant LBO eigenfunctions $\{\psi^2_j\}$. Then, the functional map between $S_1$ and $S_2$, denoted by $C(S_1, S_2)$, is given by
$$ C_{ji}(S_1, S_2) = \langle T_F\, \phi^1_i, \psi^2_j \rangle = \langle \phi^2_i, \psi^2_j \rangle = C_{ji}(S_2), $$
where the second equality follows from the isometry invariance of the eigenfunctions stated above. From this relation, since $C(S_1) = C(S_2)$, it appears as if, in the case of isometric shapes, one can calculate the functional map between the shapes $S_1$ and $S_2$ by operating only on the shape $S_1$, without processing the shape $S_2$. In fact, it does not matter whether the transformation is from $S_1$ to $S_2$ or from $S_1$ to itself; with respect to different metrics, we obtain the same linear transformation for interchanging the representation coefficients between the basis derived from the regular LBO and the basis derived from the scale invariant one. Therefore, the transformation depends only on the metrics by which the different basis functions are constructed, and not on the source and destination surfaces of the transformation. See the appendix for a different perspective on this idea.
Let $N$ denote the number of shapes in our dataset. In order to calculate the self functional map for each shape we apply the following steps.

The discrete regular Laplace–Beltrami operator is calculated using the cotangent weights scheme,
$$ L = A^{-1} W. $$
The matrices $W$ and $A$ are calculated according to the definitions in the discretization subsection above.

The eigendecomposition of the regular Laplacian is computed, and the eigenfunctions are extracted up to order $k$ and denoted by $\{\phi_i\}_{i=1}^{k}$.

The discrete scale invariant Laplace–Beltrami operator is calculated using the expression
$$ \tilde L = (A \tilde K)^{-1} W, $$
using the discretization described above.

The eigendecomposition of $\tilde L$ is calculated, and the eigenfunctions are extracted up to order $\tilde k$ and denoted by $\{\psi_i\}_{i=1}^{\tilde k}$.

The basis functions are normalized with respect to the scale invariant metric,
$$ \langle \phi_i, \phi_i \rangle_{\tilde g} = 1, \qquad \langle \psi_i, \psi_i \rangle_{\tilde g} = 1. $$
The self functional map is calculated with respect to the scale invariant metric space,
$$ C_{ji} = \langle \phi_i, \psi_j \rangle_{\tilde g}. $$
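Assuming the discrete operators above (cotangent weights $W$, regular vertex areas $A$, and a scale invariant mass $\tilde A = A \tilde K$ on the diagonal), the pipeline can be condensed as follows. The dense generalized eigensolver is used here only for illustration; in practice, sparse solvers are applied to large meshes.

```python
import numpy as np
from scipy.linalg import eigh

def self_functional_map(W, A, A_si, k):
    """Self functional map C with entries <phi_i, psi_j> in the scale
    invariant inner product, from the first k eigenfunctions of each operator.
    A and A_si hold the diagonals of the regular and scale invariant mass matrices."""
    # Generalized eigenproblems  -W phi = lambda diag(A) phi     (regular)
    # and                        -W psi = mu     diag(A_si) psi  (scale invariant).
    _, Phi = eigh(-W, np.diag(A))
    _, Psi = eigh(-W, np.diag(A_si))
    Phi, Psi = Phi[:, :k], Psi[:, :k]
    # Renormalize phi_i with respect to the scale invariant metric.
    Phi = Phi / np.sqrt(np.einsum('vi,v,vi->i', Phi, A_si, Phi))
    return Phi.T @ (A_si[:, None] * Psi)
```

For a connected closed surface, the leading eigenfunction of each operator is constant, so the first row of $C$ has a single unit-magnitude entry.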
To resolve the sign ambiguity of the eigenfunctions, the self functional map is updated by Hadamard multiplication with a sign matrix,
$$ \tilde C = C \odot (s_r\, s_c^T), $$
where the entries of $s_r \in \{\pm 1\}$ represent sign inversion of rows in the original matrix and those of $s_c$ represent sign inversion of columns.
In order to determine the sign vectors $s_r$ and $s_c$ for each shape, we follow the procedure below.

A representative shape of each class is chosen. Let $r(n)$ be a mapping from the index $n$ of the current shape to the index of its class representative. The self functional map of the representative shape is kept unchanged with respect to the signs of its rows and columns. The self functional maps of the other shapes of the same class are updated by the sign matrix.

The sign vectors for the $n$th shape are given by
$$ (s_r, s_c) = \operatorname*{argmin}_{s_r, s_c} \left\| (s_r\, s_c^T) \odot C_n - C_{r(n)} \right\|_F. $$
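For a small number of basis functions, the sign-vector search can be carried out exactly by enumerating column flips, with the optimal row flips available in closed form for each candidate. Since flipping signs does not change the Frobenius norm of $C$, minimizing the mismatch is equivalent to maximizing the inner product with the representative's map. This is an illustrative solver, not necessarily the paper's exact procedure:

```python
import numpy as np
from itertools import product

def align_signs(C, C_ref):
    """Find sign vectors s_r, s_c maximizing <(s_r s_c^T) * C, C_ref>,
    equivalently minimizing ||(s_r s_c^T) * C - C_ref||_F (* is elementwise)."""
    M = C * C_ref                              # elementwise products
    best_val, best = -np.inf, None
    for sc in product([-1.0, 1.0], repeat=C.shape[1]):   # 2^k column flip patterns
        sc = np.array(sc)
        v = M @ sc
        sr = np.where(v >= 0, 1.0, -1.0)       # optimal row flips for this sc
        if sr @ v > best_val:
            best_val, best = sr @ v, (sr, sc)
    sr, sc = best
    return sr[:, None] * C * sc[None, :], sr, sc
```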
The self functional maps can be used for shape classification. Classification is achieved by defining the following distance between the self functional maps:
$$ d^2(S_m, S_n) = \left\| C_m - C_n \right\|_F^2. $$
With this distance we could use a simple classification algorithm. Specifically, we used the k-means algorithm with k set to the number of classes. Then, the confusion matrix was calculated using the true class labels and the labels predicted by the k-means algorithm.
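A minimal stand-in for the classification step (the paper uses Matlab's kmeans; below is a plain k-means of our own over the vectorized maps, where the Euclidean distance between vectors equals the Frobenius distance between matrices):

```python
import numpy as np

def classify_maps(maps, k, iters=100):
    """Lloyd's k-means on vectorized self functional maps; squared Euclidean
    distance on the vectors equals the squared Frobenius distance on the maps."""
    X = np.array([np.asarray(C).ravel() for C in maps])
    # Simple deterministic initialization: k evenly spaced samples.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        new = np.array([X[labels == c].mean(0) if np.any(labels == c) else centers[c]
                        for c in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels
```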
Additionally, for visualization purposes, shape clustering can be performed by embedding the self functional maps into a low-dimensional Euclidean space $\mathbb{R}^q$, typically with $q = 3$. This is done by the following procedure.

The distance matrix between the shapes, $D$, is calculated using the pairwise squared distances between the self functional maps,
$$ D_{mn} = \left\| C_m - C_n \right\|_F^2. $$
The distance matrix $D$ is used to embed the shapes in the Euclidean space $\mathbb{R}^q$; it is given as an input to a nonmetric MDS algorithm, using stress minimization.
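The paper embeds via Matlab's nonmetric mdscale; as a simpler stand-in, classical MDS recovers an embedding from the squared-distance matrix by double centering. The demo points below are synthetic:

```python
import numpy as np

def classical_mds(D2, q=3):
    """Classical MDS from a squared-distance matrix D2 (a simpler stand-in
    for the non-metric, stress-minimizing MDS used in the paper)."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # double-centering operator
    B = -0.5 * J @ D2 @ J                        # Gram matrix of the embedding
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:q]                # q largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Demo: a Euclidean configuration is recovered up to a rigid motion.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))
G = X @ X.T
D2 = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G   # pairwise squared distances
Y = classical_mds(D2, q=3)
```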
The experiments were done in Matlab 2017b, using the functions kmeans and mdscale. We ran kmeans typically with a few thousand iterations and a few thousand replicates, and we used the squared distance between the matrices (which is not part of the Matlab interface), while mdscale was used with the default configuration.
We used the TOSCA [BBK08], SHREC 2010 [BBC10], SHREC 2014 [PSR14], and FAUST [BRLB14] datasets for our experiments. The TOSCA dataset contains high-resolution 3D nonrigid shapes in a variety of poses, including cats, dogs, wolves, horses, centaurs, female figures, and two different male figures. The MDS clustering of the self functional maps in $\mathbb{R}^3$ is shown in Figure 5, using the leading eigenfunctions of the regular and the scale invariant LBO, respectively.
In order to test the robustness of the self functional maps under a variety of deformations we used the SHREC'10 dataset [BBC10]. The dataset consists of high-resolution triangular meshes and contains three classes of objects, namely human, horse, and dog, susceptible to nonrigid deformations as well as other distortions like noise, holes, topological changes, global and local scale, and different sampling. We demonstrate the performance of the self functional maps in classification under conditions of high intra-class variation. Figure 6 shows the resulting MDS clustering of the self functional maps in $\mathbb{R}^3$ on SHREC'10.
To evaluate the efficiency of the self functional maps as signatures with the ability to separate similar classes, we conducted experiments on the SHREC'14 dataset [PSR14]. This dataset consists of real and synthetic surfaces describing different human figures. In the synthetic dataset each identity has its own unique body characteristics; five of the figures are male, five are female, and five are children. Each of these models appears in different poses, and the same poses have been applied to each virtual identity. The original meshes were downsampled for the experiments. Figure 7 shows the resulting pairwise squared distance matrix between the self functional maps of the different shapes, calculated on the SHREC'14 synthetic data. The MDS embedding is depicted in Figure 8, in Cartesian coordinates and in spherical coordinates.
The real human dataset is composed of meshes of human subjects, each identity appearing in different poses. Half of the human subjects are male and half are female. Here, again, the original meshes were downsampled. The resulting pairwise squared distance matrix between the self functional maps of the different shapes, calculated on the SHREC'14 real data, is depicted in Figure 9. The MDS embedding is depicted in Figure 10. Since the real dataset contains a large number of classes and points, for better visualization of the 3D embedding we display the points in spherical angular coordinates.
In addition, we conducted experiments on the FAUST dataset [BRLB14], which contains real scans of different human subjects, each appearing in different poses. Each mesh contains a few thousand vertices. The MDS embedding is depicted in Figure 11.
The confusion matrix was calculated for each dataset and is presented in Figure 12, with one row per true class and one column per predicted label. The vertical axis represents the true class label and the horizontal axis represents the label predicted by the k-means algorithm. The value of entry $(i, j)$ is the number of shapes with true class $i$ that were classified with label $j$, normalized by the total number of shapes in class $i$ (values are between 0 and 1). The confusion matrices show perfect classification on the tested datasets.
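A detail worth making explicit when reproducing this evaluation: k-means assigns arbitrary cluster indices, so the predicted labels must first be matched to the true classes before the confusion matrix is meaningful. The sketch below is our own addition (the paper does not describe this step); it aligns the labels with the Hungarian algorithm and then row-normalizes the counts as described above:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def normalized_confusion(true_labels, pred_labels, n_classes):
    """Row-normalized confusion matrix: entry (i, j) is the fraction
    of shapes of true class i assigned predicted label j.

    Cluster labels are arbitrary, so the predicted-label columns are
    first permuted to maximize agreement with the true classes.
    """
    counts = np.zeros((n_classes, n_classes))
    for t, p in zip(true_labels, pred_labels):
        counts[t, p] += 1
    # Hungarian algorithm on the negated count matrix finds the
    # label permutation maximizing the diagonal (total agreement).
    _, col = linear_sum_assignment(-counts)
    counts = counts[:, col]  # reorder predicted-label columns
    return counts / counts.sum(axis=1, keepdims=True)
```

With this normalization, a perfect classification yields the identity matrix, matching the results reported in Figure 12.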
The self functional maps framework was introduced as a universal representation for the task of classification of surfaces of articulated objects. The suggested transition from a purely geometric problem into an algebraic form has the potential to be relevant for other applications in shape processing and analysis. Possible examples are shape matching using either axiomatic classical methods or convolutional neural networks, in which the self functional map matrix serves as the input image. Note that there is a meaningful order to the coordinates of the resulting matrix. Other metrics could serve for generating alternative self functional maps that are tailored for specific tasks. For example, in the case of matching between rigid man-made objects, different measures could be used to define different operators from which the signature could be derived. One example is the recent operator suggested in [WBCPS17]. By stacking different metric pairs, we could define a self functional tensor, that is, a tensor of self functional maps derived from different pairs of operators. Future work can conduct perturbation analysis, similar to the one used in [RCB17], on the self functional map to discover the influence of partiality on the maps. By extending the original set of eigenfunctions with pointwise products, or local products of their gradients, similar to those presented in [SK15, NMR], it is possible to create larger self functional map matrices that contain additional information supported by reliable high-frequency functions.
 [ABB16] Aflalo Y., Brezis H., Bruckstein A., Kimmel R., Sochen N.: Best bases for signal spaces. Comptes Rendus Mathematique 354, 12 (2016), 1155–1167.
 [ADK16] Aflalo Y., Dubrovina A., Kimmel R.: Spectral generalized multidimensional scaling. International Journal of Computer Vision 118, 3 (2016), 380–392.
 [AKR13] Aflalo Y., Kimmel R., Raviv D.: Scale invariant geometry for nonrigid shapes. SIAM Journal on Imaging Sciences 6, 3 (2013), 1579–1597.
 [ASC11a] Aubry M., Schlickewei U., Cremers D.: Pose-consistent 3D shape segmentation based on a quantum mechanical feature descriptor. In Joint Pattern Recognition Symposium (2011), Springer, pp. 122–131.
 [ASC11b] Aubry M., Schlickewei U., Cremers D.: The wave kernel signature: A quantum mechanical approach to shape analysis. In Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on (2011), IEEE, pp. 1626–1633.
 [BBC10] Bronstein A., Bronstein M., Castellani U., Dubrovina A., Guibas L., Horaud R., Kimmel R., Knossow D., Von Lavante E., Mateus D., et al.: SHREC 2010: robust correspondence benchmark. In Eurographics Workshop on 3D Object Retrieval (2010).
 [BBG94] Bérard P., Besson G., Gallot S.: Embedding Riemannian manifolds by their heat kernel. Geometric & Functional Analysis GAFA 4, 4 (1994), 373–398.
 [BBGO11] Bronstein A. M., Bronstein M. M., Guibas L. J., Ovsjanikov M.: Shape Google: Geometric words and expressions for invariant shape retrieval. ACM Transactions on Graphics (TOG) 30, 1 (2011), 1.
 [BBI] Burago D., Burago Y., Ivanov S.: A course in metric geometry. American Mathematical Society, 2001.
 [BBK05] Bronstein A. M., Bronstein M. M., Kimmel R.: Three-dimensional face recognition. International Journal of Computer Vision 64, 1 (2005), 5–30.
 [BBK06] Bronstein A. M., Bronstein M. M., Kimmel R.: Generalized multidimensional scaling: a framework for isometry-invariant partial surface matching. Proceedings of the National Academy of Sciences 103, 5 (2006), 1168–1172.
 [BBK08] Bronstein A. M., Bronstein M. M., Kimmel R.: Numerical geometry of non-rigid shapes. Springer Science & Business Media, 2008.
 [BBK10] Bronstein A. M., Bronstein M. M., Kimmel R., Mahmoudi M., Sapiro G.: A Gromov–Hausdorff framework with diffusion geometry for topologically-robust non-rigid shape matching. International Journal of Computer Vision 89, 2–3 (2010), 266–286.
 [BCBB16] Biasotti S., Cerri A., Bronstein A., Bronstein M.: Recent trends, applications, and perspectives in 3D shape similarity assessment. In Computer Graphics Forum (2016), vol. 35, Wiley Online Library, pp. 87–119.
 [BCG08] Ben-Chen M., Gotsman C.: Characterizing shape using conformal factors. In 3DOR (2008), pp. 1–8.
 [BGC17] Brezis H., Gómez-Castro D.: Rigidity of optimal bases for signal spaces. Comptes Rendus Mathematique 355, 7 (2017), 780–785.
 [BK10] Bronstein M. M., Kokkinos I.: Scale-invariant heat kernel signatures for non-rigid shape recognition. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on (2010), IEEE, pp. 1704–1711.
 [BRLB14] Bogo F., Romero J., Loper M., Black M. J.: FAUST: Dataset and evaluation for 3D mesh registration. In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (Piscataway, NJ, USA, June 2014), IEEE.
 [CL06] Coifman R. R., Lafon S.: Diffusion maps. Applied and Computational Harmonic Analysis 21, 1 (2006), 5–30.
 [DLL10] Dey T. K., Li K., Luo C., Ranjan P., Safa I., Wang Y.: Persistent heat signature for pose-oblivious matching of incomplete models. In Computer Graphics Forum (2010), vol. 29, Wiley Online Library, pp. 1545–1554.
 [EK03] Elad A., Kimmel R.: On bending invariant signatures for surfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence 25, 10 (2003), 1285–1295.
 [FMK03] Funkhouser T., Min P., Kazhdan M., Chen J., Halderman A., Dobkin D., Jacobs D.: A search engine for 3D models. ACM Transactions on Graphics (TOG) 22, 1 (2003), 83–105.
 [KBLB12] Kokkinos I., Bronstein M. M., Litman R., Bronstein A. M.: Intrinsic shape context descriptors for deformable shapes. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on (2012), IEEE, pp. 159–166.
 [Lav12] Lavoué G.: Combination of bagofwords descriptors for robust partial shape retrieval. The Visual Computer 28, 9 (2012), 931–942.
 [LBB11] Litman R., Bronstein A. M., Bronstein M. M.: Diffusiongeometric maximally stable component detection in deformable shapes. Computers & Graphics 35, 3 (2011), 549–560.
 [LBZ13] Liu Z.-B., Bu S.-H., Zhou K., Gao S.-M., Han J.-W., Wu J.: A survey on partial retrieval of 3D shapes. Journal of Computer Science and Technology 28, 5 (2013), 836–851.
 [LD11] Lipman Y., Daubechies I.: Conformal Wasserstein distances: Comparing surfaces in polynomial time. Advances in Mathematics 227, 3 (2011), 1047–1077.
 [LH14] Li C., Hamza A. B.: Spatially aggregating spectral descriptors for nonrigid 3D shape retrieval: a comparative survey. Multimedia Systems 20, 3 (2014), 253–281.
 [LJ07] Ling H., Jacobs D. W.: Shape classification using the inner-distance. IEEE Transactions on Pattern Analysis and Machine Intelligence 29, 2 (2007), 286–299.
 [LSF11] Laga H., Schreck T., Ferreira A., Godil A., Pratikakis I., Veltkamp R.: Bag of words and local spectral descriptor for 3D partial shape retrieval. In Proceedings of the Eurographics Workshop on 3D Object Retrieval (3DOR’11) (2011), Citeseer, pp. 41–48.
 [MDSB03] Meyer M., Desbrun M., Schröder P., Barr A. H.: Discrete differential-geometry operators for triangulated 2-manifolds. In Visualization and Mathematics III. Springer, 2003, pp. 35–57.
 [Mém11] Mémoli F.: Gromov–Wasserstein distances and the metric approach to object matching. Foundations of Computational Mathematics 11, 4 (2011), 417–487.
 [MS05] Mémoli F., Sapiro G.: A theoretical and computational framework for isometry invariant recognition of point cloud data. Foundations of Computational Mathematics 5, 3 (2005), 313–347.
 [NMR] Nogneng D., Melzi S., Rodolà E., Castellani U., Bronstein M., Ovsjanikov M.: Improved functional mappings via product preservation.
 [OBCS12] Ovsjanikov M., Ben-Chen M., Solomon J., Butscher A., Guibas L.: Functional maps: a flexible representation of maps between shapes. ACM Transactions on Graphics (TOG) 31, 4 (2012), 30.
 [OMMG10] Ovsjanikov M., Mérigot Q., Mémoli F., Guibas L.: One point isometric matching with the heat kernel. In Computer Graphics Forum (2010), vol. 29, Wiley Online Library, pp. 1555–1564.
 [PSR14] Pickup D., Sun X., Rosin P. L., Martin R. R., Cheng Z., Lian Z., Aono M., Ben Hamza A., Bronstein A., Bronstein M., Bu S., Castellani U., Cheng S., Garro V., Giachetti A., Godil A., Han J., Johan H., Lai L., Li B., Li C., Li H., Litman R., Liu X., Liu Z., Lu Y., Tatsuma A., Ye J.: SHREC'14 track: Shape retrieval of non-rigid 3D human models. In Proceedings of the 7th Eurographics Workshop on 3D Object Retrieval (2014), EG 3DOR'14, Eurographics Association.
 [QH07] Qiu H., Hancock E. R.: Clustering and embedding using commute times. IEEE Transactions on Pattern Analysis and Machine Intelligence 29, 11 (2007).
 [RBB11] Raviv D., Bronstein A. M., Bronstein M. M., Kimmel R., Sochen N.: Affineinvariant geodesic geometry of deformable 3D shapes. Computers & Graphics 35, 3 (2011), 692–697.
 [RBBK10] Raviv D., Bronstein M. M., Bronstein A. M., Kimmel R.: Volumetric heat kernel signatures. In Proceedings of the ACM workshop on 3D object retrieval (2010), ACM, pp. 39–44.
 [RCB17] Rodolà E., Cosmo L., Bronstein M. M., Torsello A., Cremers D.: Partial functional correspondence. In Computer Graphics Forum (2017), vol. 36, Wiley Online Library, pp. 222–236.
 [Rus07] Rustamov R. M.: Laplace–Beltrami eigenfunctions for deformation invariant shape representation. In Proceedings of the Fifth Eurographics Symposium on Geometry Processing (2007), Eurographics Association, pp. 225–233.
 [RWP06] Reuter M., Wolter F.-E., Peinecke N.: Laplace–Beltrami spectra as Shape-DNA of surfaces and solids. Computer-Aided Design 38, 4 (2006), 342–366.
 [SK15] Shtern A., Kimmel R.: Spectral gradient fields embedding for nonrigid shape matching. Computer Vision and Image Understanding 140 (2015), 21–29.
 [SK17] Shamai G., Kimmel R.: Geodesic distance descriptors. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Hawaii, Honolulu (2017).
 [SOG09] Sun J., Ovsjanikov M., Guibas L.: A concise and provably informative multiscale signature based on heat diffusion. In Computer graphics forum (2009), vol. 28, Wiley Online Library, pp. 1383–1392.
 [SSW89] Schwartz E. L., Shaw A., Wolfson E.: A numerical solution to the generalized mapmaker's problem: flattening nonconvex polyhedral surfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence 11, 9 (1989), 1005–1008.
 [WBCPS17] Wang Y., Ben-Chen M., Polterovich I., Solomon J.: Steklov geometry processing: An extrinsic approach to spectral shape analysis. arXiv preprint arXiv:1707.07070 (2017).
 [WWJ07] Wang S., Wang Y., Jin M., Gu X. D., Samaras D.: Conformal geometry and its applications on 3D shape matching, recognition, and stitching. IEEE Transactions on Pattern Analysis and Machine Intelligence 29, 7 (2007), 1209–1220.

 [XX09] Xu Z., Xu G.: Discrete schemes for Gaussian curvature and their convergence. Computers & Mathematics with Applications 57, 7 (2009), 1187–1195.
Appendix
Assume that two surfaces have been classified as belonging to the same class. Then their self functional maps are similar to one another; that is, for every pair of indices the corresponding entries of the two matrices are close.
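In the notation of the functional maps construction, with $\phi_i$ and $\psi_j$ denoting eigenfunctions of the regular and the scale invariant LBO of a surface $S_p$ (the symbols here are our reconstruction, not the original notation), this claim can be summarized as:

```latex
% Entries of the self functional map of surface S_p, and the
% closeness claim for two surfaces of the same (near-isometric) class:
C^{(p)}_{ij} = \left\langle \phi^{(p)}_i,\, \psi^{(p)}_j \right\rangle_{S_p},
\qquad
\left| C^{(1)}_{ij} - C^{(2)}_{ij} \right| \leq \varepsilon
\quad \text{for all } i, j,
```

where $\varepsilon$ is small when the two surfaces are related by a near-isometry, since both families of eigenfunctions are intrinsic and therefore approximately preserved under such deformations.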