On the optimality of shape and data representation in the spectral domain


Yonathan Aflalo, Computer Science Department, Technion, Israel Institute of Technology, Haifa 32000, Israel    Haim Brezis, Rutgers University, Department of Mathematics, Hill Center, Busch Campus, 110 Frelinghuysen Road, Piscataway, NJ 08854, USA, and Department of Mathematics, Technion, Israel Institute of Technology, 32000 Haifa, Israel    Ron Kimmel, Computer Science Department, Technion, Israel Institute of Technology, Haifa 32000, Israel

A proof of the optimality of the eigenfunctions of the Laplace-Beltrami operator (LBO) in representing smooth functions on surfaces is provided and adapted to the field of applied shape and data analysis. It is based on the Courant-Fischer min-max principle adapted to our case. The theorem we present supports the new trend in geometry processing of treating geometric structures by using their projection onto the leading eigenfunctions of the decomposition of the LBO. This result can be utilized to construct numerically efficient algorithms that process shapes in their spectral domain. We review a couple of applications as possible practical usage cases of the proposed optimality criterion. We refer to a scale invariant metric, which is also invariant to bending of the manifold. This novel pseudo-metric allows constructing an LBO by which a scale invariant eigenspace on the surface is defined. We demonstrate the efficiency of an intermediate metric, defined as an interpolation between the scale invariant metric and the regular one, in representing geometric structures while capturing both coarse and fine details. Next, we review a numerical acceleration technique for classical scaling, a member of a family of flattening methods known as multidimensional scaling (MDS). There, the optimality is exploited to efficiently approximate all geodesic distances between pairs of points on a given surface, and thereby match and compare almost isometric surfaces. Finally, we revisit the classical principal component analysis (PCA) definition by coupling its variational form with a Dirichlet energy on the data manifold. By pairing the PCA with the LBO we can handle cases that go beyond the scope defined by the observation set that is handled by regular PCA.

1 Introduction

The field of shape analysis has been evolving rapidly during the last decades. The constant increase in computing power allowed image and shape understanding algorithms to efficiently handle difficult problems that could not have been practically addressed in the past. A large set of theoretical tools from metric geometry, differential geometry, and spectral analysis has been imported and translated into action within the shape and image understanding arena. Among the myriad of operators recently explored, the Laplace-Beltrami operator (LBO) is ubiquitous. The LBO is an extension of the Laplacian to non-flat multi-dimensional manifolds. Its properties have been well studied in differential geometry, and it has been used extensively in computer graphics. It is used to define the heat equation, which models the conduction of heat in solids, and is fundamental in describing basic physical phenomena. In its more general setting, the Laplace-Beltrami operator admits an eigen-decomposition that yields a spectral domain that can be viewed as a generalization of Fourier analysis to any Riemannian manifold. The LBO invariance to isometric transformations allowed the theories developed by physicists and mathematicians to be useful for modern shape analysis. Here, we justify the selection of the leading eigenfunctions in the spectral domain as an optimal sub-space for representing smooth functions on a given manifold. This result is used for solving, and for accelerating existing solvers of, various problems in data representation, information processing, and shape analysis. As one example, in Section 4 we pose the dilemma of metric selection for shape representation while interpolating between a scale invariant metric and the regular one. Next, in Section 5 it is shown how the recently introduced spectral classical scaling can benefit from the efficacy property of the suggested subspace.
Finally, in Section 6 we revisit the definition of the celebrated principal component analysis (PCA) by regularizing its variational form with an additional Dirichlet energy. The idea is to balance between two optimal sub-spaces, one for the data points themselves - captured by the PCA, and one optimally encapsulating the relation between the data points as defined by the decomposition of the LBO.

2 Notations and motivation

Consider a parametrized surface $S$ (with or without boundary) and a metric $(g_{ij})$ that defines the affine differential relation of navigating with coordinates $(\sigma_1, \sigma_2)$ in the parametrization plane to a distance measured on $S$. That is, an arc length on $S$ expressed by $d\sigma_1$ and $d\sigma_2$ would read $ds^2 = g_{11}\,d\sigma_1^2 + 2 g_{12}\,d\sigma_1 d\sigma_2 + g_{22}\,d\sigma_2^2$. The Laplace-Beltrami operator acting on the scalar function $f : S \to \mathbb{R}$ is defined as

$$\Delta_g f = \frac{1}{\sqrt{g}} \sum_{i,j} \partial_i \left( \sqrt{g}\, g^{ij}\, \partial_j f \right),$$

where $g = \det(g_{ij})$ is the determinant of the metric matrix, and $(g^{ij}) = (g_{ij})^{-1}$ is the inverse metric, while $\partial_i$ is a derivative with respect to the coordinate $\sigma_i$. The LBO is symmetric and admits a spectral decomposition $\{\phi_i, \lambda_i\}$, with $0 = \lambda_1 \le \lambda_2 \le \dots$, such that

$$\Delta_g \phi_i = -\lambda_i \phi_i,$$

where $\langle \phi_i, \phi_j \rangle_g = \int_S \phi_i \phi_j\, da = \delta_{ij}$, and $da = \sqrt{g}\, d\sigma_1 d\sigma_2$. In case $S$ has a boundary, we add the Neumann boundary condition

$$\langle \nabla_g \phi_i, \mathbf{n} \rangle = 0 \quad \text{on } \partial S,$$

where $\mathbf{n}$ is the normal to the boundary.

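In the discrete setting surfaces are triangulated meshes, and a common approximation of the LBO pairs a cotangent-weight stiffness matrix with a lumped (diagonal) mass matrix, so that the eigenproblem becomes the generalized problem $W\phi = \lambda A \phi$. The sketch below is a generic illustration of that construction on a toy planar mesh (the mesh and all names are ours, not the particular discretization used in the works cited here).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def cotangent_laplacian(verts, tris):
    """Cotangent stiffness matrix W and lumped mass matrix A, so that the
    discrete LBO eigenproblem reads W phi = lambda A phi."""
    n = len(verts)
    W = sp.lil_matrix((n, n))
    area = np.zeros(n)
    for tri in tris:
        for a, b, c in ((tri[0], tri[1], tri[2]),
                        (tri[1], tri[2], tri[0]),
                        (tri[2], tri[0], tri[1])):
            u, v = verts[b] - verts[a], verts[c] - verts[a]
            dbl_area = np.linalg.norm(np.cross(u, v))   # twice the triangle area
            cot = np.dot(u, v) / dbl_area               # cot of the angle at vertex a
            W[b, c] -= 0.5 * cot                        # off-diagonal cotan weight
            W[c, b] -= 0.5 * cot
            area[a] += dbl_area / 6.0                   # one third of the area per vertex
    W = W.tocsr()
    W.setdiag(-np.asarray(W.sum(axis=1)).ravel())       # make every row sum to zero
    return W, sp.diags(area)

# toy flat mesh: a triangulated 3x3 grid in the plane
verts = np.array([[x, y, 0.0] for y in range(3) for x in range(3)])
tris = [(r * 3 + c, r * 3 + c + 1, r * 3 + c + 3) for r in range(2) for c in range(2)] \
     + [(r * 3 + c + 1, r * 3 + c + 4, r * 3 + c + 3) for r in range(2) for c in range(2)]
W, A = cotangent_laplacian(verts, tris)

# leading eigenpairs; the smallest eigenvalue is zero, with a constant
# eigenfunction (free, Neumann-like boundary)
evals, efuncs = spla.eigsh(W.tocsc(), k=3, M=A.tocsc(), sigma=-1e-6)
```

The shift-invert parameter `sigma` targets the smallest eigenvalues without explicitly inverting the (singular) stiffness matrix.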
Being defined by the metric rather than the explicit embedding makes the LBO and its spectral decomposition invariant to isometries, and thus a popular operator for shape processing and analysis. For example, the eigenfunctions and eigenvalues can be used to efficiently approximate diffusion distances and commute time distances [26, 6, 12, 13, 11], which were defined as computational alternatives to geodesic distances, and were shown to be robust to topology changes and global scale transformations. At another end, Lévy [21] proposed to manipulate the geometry of shapes by operating in their spectral domain, while Karni and Gotsman [18] chose the eigenfunctions as a natural basis for approximating the coordinates of a given shape. Feature point detectors and descriptors of surfaces were also extracted from the same spectral domain. Such measures include the heat kernel signature (HKS) [29, 15], the global point signature (GPS) [27], the wave kernel signature (WKS) [5], and the scale-space representation [33].

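The diffusion distances mentioned above are immediate to evaluate once a (possibly truncated) eigen-decomposition is at hand, via the standard definition $d_t^2(x,y) = \sum_i e^{-2\lambda_i t} (\phi_i(x) - \phi_i(y))^2$. A hedged sketch, using a small path-graph Laplacian as a stand-in for a discretized LBO:

```python
import numpy as np

def diffusion_distance(evals, evecs, t):
    """Pairwise diffusion distances from Laplacian eigenpairs:
    d_t(x, y)^2 = sum_i exp(-2 t lambda_i) (phi_i(x) - phi_i(y))^2."""
    F = evecs * np.exp(-t * evals)                   # weight each eigenfunction column
    sq = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    return np.sqrt(np.maximum(sq, 0.0))

# toy stand-in for a discretized LBO: path graph on 5 nodes
n = 5
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0                            # free (Neumann-like) endpoints
evals, evecs = np.linalg.eigh(L)
D = diffusion_distance(evals, evecs, t=0.5)
```

Note that truncating the sum to the leading eigenpairs is exactly the kind of approximation whose optimality is discussed in Section 3.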
Given two surfaces $S_1$ and $S_2$, and a bijective mapping between them, $\rho : S_1 \to S_2$, Ovsjanikov et al. [24] emphasized the fact that the relation between the spectral decomposition coefficients of a scalar function $f : S_1 \to \mathbb{R}$ and those of its representative on $S_2$, that is $f \circ \rho^{-1}$, is linear. In other words, the geometry of the mapping $\rho$ is captured by a matrix $C$, allowing the coefficients of the decompositions to be related in a simple linear manner. The basis extracted from the LBO was chosen in this context because of its intuitive efficiency in representing functions on manifolds, thus far justified heuristically. The linear relation between the spectral decomposition coefficients of the same function on two surfaces, when the mapping between manifolds is provided, was exploited by Pokrass et al. [25] to find the correspondence between two almost isometric shapes. They assumed that the matrix that links the LBO eigenfunctions of two almost isometric shapes should have dominant coefficients along its diagonal, a property that was first exploited in [20].

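The linear relation highlighted by [24] suggests a simple computational recipe: stack the spectral coefficients of corresponding function pairs into matrices $A$ and $B$, and recover the functional map $C$ as the least-squares solution of $CA \approx B$. The snippet below demonstrates this on synthetic coefficients; the ground-truth map `C_true`, the sizes, and the noise level are our own toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
k, p = 6, 20    # truncated basis size, number of corresponding function pairs

# synthetic stand-in: A holds spectral coefficients of p functions on the first
# shape, B those of their images on the second, generated by a hidden map C_true
C_true = rng.standard_normal((k, k))
A = rng.standard_normal((k, p))
B = C_true @ A + 0.01 * rng.standard_normal((k, p))

# the functional map is the least-squares solution of C @ A ~= B
C = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T
```

With enough independent function pairs ($p \ge k$) the map is recovered up to the noise level.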
One could use the relation between the eigen-structures of two surfaces to approximate non-scalar and non-local structures on the manifolds [1]. Examples of such functions are geodesic distances [19, 31, 28, 23, 30], which serve as an input for multidimensional scaling [7], the generalized multidimensional scaling [10], and the Gromov-Hausdorff distance [22, 9]. Using the optimality of representing surfaces and functions on surfaces with a truncated basis, geodesic distances can now be efficiently computed and matched in the spectral domain.

Among the myriad of reasons that motivate the choice of the spectral domain for shape analysis, we emphasize the following.

  • The spectral domain is isometry invariant.

  • Countless signal processing tools that exploit the Fourier basis are available. Some of these tools can be generalized to shapes for processing, analysis, and synthesis.

  • Most interesting functions defined on surfaces are smooth and can thus be approximated by their projection onto a small number of eigenfunctions.

  • For two given perfectly isometric shapes, the problem of finding correspondences between the shapes appears to have a simple formulation in the spectral domain.

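The third point above can be illustrated numerically. Take the Laplacian of a cycle graph, whose eigenvectors are the discrete Fourier modes: the truncation error of a smooth signal drops rapidly with the number of retained eigenvectors (the graph and the signal are our own toy choices).

```python
import numpy as np

n = 128
# Laplacian of a cycle graph; its eigenvectors are discrete Fourier modes
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, -1] = L[-1, 0] = -1.0
evals, Phi = np.linalg.eigh(L)

t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.sin(t) + 0.3 * np.cos(3 * t)        # a smooth function on the "manifold"

def trunc_error(m):
    """L2 error of projecting f onto the first m Laplacian eigenvectors."""
    coeffs = Phi[:, :m].T @ f
    return float(np.linalg.norm(f - Phi[:, :m] @ coeffs))

errors = [trunc_error(m) for m in (2, 8, 32)]
```

Since the signal only contains low frequencies, a handful of eigenvectors already represents it to machine precision.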
Still, a rigorous justification for the selection of the basis defined by the LBO was missing in the shape analysis arena. Along the same line, combining the eigenstructure of the LBO with classical data representation and analysis procedures that operate in other domains, like the PCA [17], MDS [7], and GMDS [10], was yet to come. Here, we review recent improvements of existing tools that make use of the decomposition of the Laplace-Beltrami operator. We provide a theoretical justification for using the LBO eigen-decomposition in many shape analysis methods. With this property in mind, we demonstrate that it is possible to migrate algorithms to the spectral domain while establishing a substantial reduction in complexity.

3 Optimality of the LBO eigenspace

In this section we provide a theoretical justification for the choice of the LBO eigenfunctions, by proving that the resulting spectral decomposition is optimal in approximating functions with bounded gradient magnitudes. Let $S$ be a given Riemannian manifold with a metric $g$, an induced LBO, $\Delta_g$, with associated spectral basis $\{\phi_i\}$, where $\Delta_g \phi_i = -\lambda_i \phi_i$. It is shown, for example in [3], that for any $f \in H^1(S)$, the representation error satisfies

$$\left\| f - \sum_{i=1}^{n} \langle f, \phi_i \rangle_g\, \phi_i \right\|_g^2 \le \frac{\|\nabla_g f\|_g^2}{\lambda_{n+1}}. \qquad (1)$$
Our next result asserts that the eigenfunctions of the LBO are optimal with respect to the error estimate (1).

Theorem 3.1.

Let $\mu > \lambda_{n+1}$. There is no integer $n$ and no sequence $\psi_1, \dots, \psi_n$ of linearly independent functions in $L^2(S)$ such that

$$\left\| f - \sum_{i=1}^{n} \langle f, \psi_i \rangle_g\, \psi_i \right\|_g^2 \le \frac{\|\nabla_g f\|_g^2}{\mu} \quad \forall f \in H^1(S). \qquad (2)$$
Proof. Recall the Courant-Fischer min-max principle, see [8] Problems 37 and 49, and [32]. We have, for every $n$,

$$\lambda_{n+1} = \max_{\substack{W \subset H^1(S) \\ \operatorname{codim} W = n}}\ \min_{\substack{f \in W \\ f \ne 0}} \frac{\|\nabla_g f\|_g^2}{\|f\|_g^2}. \qquad (3)$$

That is, the $\min$ is taken over a linear subspace $W$ of $H^1$ (where $H^1$ is the Sobolev space $W^{1,2}$) of co-dimension $n$, and the $\max$ is taken over all such subspaces.

Set $W = \{ f \in H^1(S) : \langle f, \psi_i \rangle_g = 0,\ i = 1, \dots, n \}$, so that $W$ is a subspace of codimension at most $n$. For $f \in W$ the sum in (2) vanishes, so by (2) we have, for every $f \in W$,

$$\|f\|_g^2 \le \frac{\|\nabla_g f\|_g^2}{\mu},$$

and thus

$$\min_{\substack{f \in W \\ f \ne 0}} \frac{\|\nabla_g f\|_g^2}{\|f\|_g^2} \ge \mu. \qquad (4)$$
On the other hand, by (3),

$$\lambda_{n+1} \ge \min_{\substack{f \in W \\ f \ne 0}} \frac{\|\nabla_g f\|_g^2}{\|f\|_g^2}. \qquad (5)$$
Combining (4) and (5) yields $\lambda_{n+1} \ge \mu$, contradicting the assumption $\mu > \lambda_{n+1}$. ∎

For the convenience of the reader we present in the appendix a direct proof of a special case of the above result which does not make use of the Courant-Fischer min-max principle. The above theorem proves the optimality of the eigenfunctions of the LBO in representing functions on manifolds. In the following sections we apply the optimality property for solving various shape analysis problems.

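The estimate (1) has an exact discrete analogue: for a graph Laplacian $L$ with eigenpairs $(\lambda_i, \phi_i)$, the Dirichlet energy of $f$ is $f^{\mathsf T} L f$, and the squared error of the $n$-term truncation is bounded by $f^{\mathsf T} L f / \lambda_{n+1}$. A quick numerical check, on a toy setup of our own:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
# path-graph Laplacian as a stand-in LBO; f @ L @ f plays the role of the
# Dirichlet energy ||grad f||^2
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0
evals, Phi = np.linalg.eigh(L)

def truncation_error_sq(f, m):
    """Squared L2 error of projecting f onto the first m eigenvectors."""
    c = Phi[:, :m].T @ f
    return float(np.sum((f - Phi[:, :m] @ c) ** 2))

# the ratio error^2 / (Dirichlet energy / lambda_{m+1}) should never exceed 1
ratios = []
for _ in range(50):
    f = rng.standard_normal(n)
    for m in (5, 15, 30):
        ratios.append(truncation_error_sq(f, m) * evals[m] / (f @ L @ f))
worst = max(ratios)
```

In exact arithmetic the bound is tight precisely when $f$ is concentrated on the $(m{+}1)$-st eigenvector.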
4 Scale invariant geometry

Almost isometric transformations are probably the most common ones for surfaces and volumes in nature. Still, in some scenarios, relations between surfaces should be described by slightly more involved deformation models. Though a small child and an adult are obviously not isometric, and the same goes for a whale and a dolphin, the main characteristics are morphometrically similar for mammals at large. In order to extend the scope of matching and comparing shapes, a semi-local scale invariant geometry was introduced in [4]. There, it was used to define a new LBO by which one can construct an eigenspace which is invariant to semi-local, and obviously global, scale transformations.

Let $(g_{ij})$ be the regular metric defined on the manifold. In [4] the scale invariant pseudometric is defined as

$$\tilde g_{ij} = |K|\, g_{ij},$$

where $K$ is the Gaussian curvature at each point on the manifold. One could show that this metric is scale invariant, and the same goes for the LBO that it induces, namely $\Delta_{\tilde g}$. A discretization of this operator and experimental results that outperformed state of the art algorithms for shape matching, when scaling is involved, were presented in [4]. Specifically, the scale invariant geometry allows finding correspondences between two shapes related by a semi-local scale transformation.

Next, one could think of searching for an optimal representation space for shapes by interpolating between the scale-invariant metric and the regular one. We define the interpolated pseudometric to be

$$\hat g_{ij} = |K|^{\alpha}\, g_{ij},$$

where $\hat g_{ij}$ represents the new pseudometric, $K$ is the Gaussian curvature, and $\alpha$ is the metric interpolation scalar that we use to control the representation error. In our setting, $\hat g$ depends on $\alpha$ and coincides with the regular metric when $\alpha = 0$, or with the scale invariant one for $\alpha = 1$.

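For intuition, consider the curve analogue of this construction: for a planar curve, a scale invariant arc length element can be taken as $|\kappa|\,ds$, and the interpolated version $|\kappa|^{\alpha}\,ds$ reduces to the regular length at $\alpha = 0$ and becomes scale invariant at $\alpha = 1$. The discrete estimate below is our own illustration (not code from [4]); on circles of any radius the $\alpha = 1$ length is the constant $2\pi$.

```python
import numpy as np

def interpolated_length(pts, alpha):
    """Length of a closed polyline under the interpolated pseudo-metric
    ds~ = |curv|**alpha * ds (alpha=0: regular length, alpha=1: scale
    invariant).  Curvature is estimated as the turning angle at each vertex
    divided by the local arc length (a rough discrete estimate)."""
    fwd = np.roll(pts, -1, axis=0) - pts            # outgoing edge at each vertex
    bwd = pts - np.roll(pts, 1, axis=0)             # incoming edge
    seg = np.linalg.norm(fwd, axis=1)
    cos_turn = (fwd * bwd).sum(1) / (seg * np.linalg.norm(bwd, axis=1))
    turn = np.arccos(np.clip(cos_turn, -1.0, 1.0))  # turning angle
    ds = 0.5 * (seg + np.roll(seg, 1))              # local arc length per vertex
    curv = turn / ds                                # discrete |curvature|
    return float(np.sum(curv ** alpha * ds))

theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)

def circle(radius):
    return radius * np.stack([np.cos(theta), np.sin(theta)], axis=1)
```

The surface case replaces $|\kappa|$ by the Gaussian curvature magnitude $|K|$ in the metric, as in the formula above.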
Figure 1 depicts the effect of representing a shape’s coordinates projected onto the first 300 eigenfunctions of the LBO with a regular metric (left), the scale invariant one (right), and the interpolated pseudometric with an intermediate value of $\alpha$ (middle). The idea is to use only part of the eigenfunctions to approximate smooth functions on the manifold, treating the coordinates as such. While the regular natural basis captures the global structure of the surface, as expected, the scale invariant one concentrates on the fine features with effective curvature. The interpolated one is a good compromise between the global structure and the fine details.

Figure 1: Top: A given horse model. Bottom: The horse coordinates projected to the first 300 LBO eigenfunctions using a regular metric (left), an intermediate metric (middle), and a scale invariant one (right).

We proved that once a proper metric is defined, the Laplace-Beltrami eigenspace is the best possible space for functional approximation of smooth functions. Next, we exploit this property to reformulate classical shape analysis algorithms such as MDS in the spectral domain.

5 Spectral classical scaling

Multidimensional scaling (MDS) [7] is a family of data analysis methods that is widely used in machine learning and shape analysis. Given an $n \times n$ matrix $D$ of pairwise distances, an MDS method finds an embedding of $n$ points in $\mathbb{R}^m$, given by an $n \times m$ matrix $X$, such that the pairwise Euclidean distances between every two points, each defined by a row of $X$, are as close as possible to the corresponding input entries of $D$. The classical MDS algorithm minimizes the functional

$$\min_X \left\| X X^{\mathsf T} + \frac{1}{2} J E J \right\|_F,$$

where $E$ is a matrix such that $E_{ij} = D_{ij}^2$, $J$ is a centering matrix defined by $J = I - \frac{1}{n} \mathbf{1}\mathbf{1}^{\mathsf T}$, where $I$ is the identity matrix, $\mathbf{1}$ is a vector of ones, and $\|\cdot\|_F$ is the Frobenius norm. The solution can be obtained by a singular value decomposition of the matrix $-\frac{1}{2} J E J$. This method was found to be useful when comparing isometric shapes using their inter-geodesic distances [14, 9], and for texture mapping in computer graphics [34]. The computation of all geodesic distances, as well as the SVD of an $n \times n$ matrix, can be expensive in terms of memory and computation time. High resolution shapes with a large number of vertices are difficult to handle with this method.

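The classical scaling recipe above amounts to a few lines of linear algebra; the following self-contained sketch embeds a point cloud from its distance matrix and recovers the pairwise distances exactly whenever $D$ is Euclidean.

```python
import numpy as np

def classical_mds(D, m=2):
    """Classical scaling: embed n points in R^m so that their Euclidean
    distances match the given n x n distance matrix D as well as possible."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered squared distances
    evals, evecs = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:m]            # m largest eigenvalues
    return evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0.0))

# usage: recover a planar point cloud from its distance matrix (up to a
# rigid motion, so only the pairwise distances are reproduced)
rng = np.random.default_rng(0)
P = rng.standard_normal((10, 2))
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
X = classical_mds(D, m=2)
D2 = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
```

For geodesic (non-Euclidean) input distances the negative eigenvalues are clipped, which is the standard classical-scaling behavior.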
In order to reduce the complexity of the problem, it was proposed in [3] to compute geodesic distances between a small set of sample points, and then interpolate the rest of the distances by minimizing a bi-harmonic energy in the spectral domain. We find a spectral representation $E \approx \Phi \tilde{E} \Phi^{\mathsf T}$ of the matrix $E$, where $\Phi$ represents the matrix that discretizes the spectral domain, its columns sampling the first eigenfunctions of the LBO. We then embed our problem into the eigenspace of the LBO, defining $X = \Phi \beta$, where $\beta$ is a small matrix of coefficients, in order to reduce the overall complexity. $\beta$ is obtained by minimizing

$$\min_\beta \left\| \Phi \beta \beta^{\mathsf T} \Phi^{\mathsf T} + \frac{1}{2} J \Phi \tilde{E} \Phi^{\mathsf T} J \right\|_F.$$

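A minimal sketch of the spectral acceleration idea follows; it is our own simplified rendering, which omits the geodesic sampling and bi-harmonic interpolation stages of [3]. The embedding is restricted to the span of a basis $\Phi$, so the eigendecomposition is performed on a small $k \times k$ restriction instead of the full $n \times n$ matrix.

```python
import numpy as np

def spectral_mds(D, Phi, m=2):
    """Sketch of spectral classical scaling: look for an embedding
    X = Phi @ beta restricted to the span of the basis Phi (n x k,
    orthonormal columns), and solve the small k x k eigenproblem."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                 # double-centered Gram matrix
    Bk = Phi.T @ B @ Phi                        # k x k restriction
    evals, V = np.linalg.eigh(Bk)
    idx = np.argsort(evals)[::-1][:m]
    beta = V[:, idx] * np.sqrt(np.maximum(evals[idx], 0.0))
    return Phi @ beta                           # lift back to the n points

# demo: with a complete orthonormal basis the method reduces to classical MDS
rng = np.random.default_rng(1)
P = rng.standard_normal((12, 2))
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
Phi = np.linalg.qr(rng.standard_normal((12, 12)))[0]
X = spectral_mds(D, Phi, m=2)
```

In practice $\Phi$ would hold only the first $k \ll n$ LBO eigenfunctions, which is where the complexity reduction comes from.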
Experimental results of shape canonization, comparing shapes flattened with spectral classical scaling to regular classical scaling results, were presented in [3]. The spectral approach outperformed classical scaling in terms of time and space complexities, leading to an overall better accuracy for the spectral version; see Figure 2.

Figure 2: MDS flat embedding: Result of regular MDS (middle) and the spectral MDS (right) of the given horse surface (left).

In the next section we introduce a novel design of functional spaces that benefit from both the Laplace-Beltrami operator and classical principal component analysis, while extending the scope of each of these measures.

6 Regularized PCA

The spectral domain provided by the decomposition of the LBO is efficient in representing smooth functions on the manifold. Still, in some scenarios, functions on manifolds could contain discontinuities that do not align with our model assumption. Alternatively, some functions could be explicitly provided as known points on the data manifold, in which case the question of what should be the best representation obtains a new flavor. The principal component analysis [17] concept allows extracting a low rank orthonormal approximate representation from a set of such data points. Given a set of vectors $\{x_i\}_{i=1}^{N}$, $x_i \in \mathbb{R}^d$, the PCA algorithm finds an orthonormal basis of a $k$-dimensional subspace, defined by the columns $u_1, \dots, u_k$ of a matrix $U$, by minimizing

$$\min_{U : U^{\mathsf T} U = I}\ \sum_{i=1}^{N} \left\| x_i - U U^{\mathsf T} x_i \right\|^2.$$
It can be shown that this problem can be written as

$$\min_{U : U^{\mathsf T} U = I} \left\| X - U U^{\mathsf T} X \right\|_F^2,$$

where $X$ is a matrix whose $i$-th column is the data point $x_i$. At the other end, given a manifold $M$, the spectral basis minimizes the Dirichlet energy among all orthonormal bases defined on $M$, where

$$\{u_j\} = \operatorname*{argmin}\ \sum_{j=1}^{k} \int_M \| \nabla_g u_j \|^2\, da \quad \text{s.t.}\ \langle u_i, u_j \rangle_g = \delta_{ij},$$
where $\delta_{ij}$ is the Kronecker delta symbol, and $k$ is the number of desired basis functions. Using a discretization of the Laplace-Beltrami operator, it can be shown that the PCA and the computation of a spectral basis can be married. We can combine both energies, namely, the energy defined by the data projection error and the Dirichlet energy of the representation space. The result reads

$$\min_{U : U^{\mathsf T} A U = I} \left\| X - U U^{\mathsf T} A X \right\|_F^2 + \mu \operatorname{trace}\!\left( U^{\mathsf T} W U \right),$$

where $A$ represents the local area normalization factor (the discrete mass matrix), and $U^{\mathsf T} W U$, with $W$ the corresponding stiffness matrix, discretizes the Dirichlet energy. This problem is equivalent to finding a basis that is both smooth and whose projection error on the set of given vectors (data points) is minimal, as shown in [2] and [16]. When $\mu$ in the above model is set to zero, we obtain the PCA as a solution. At the other end, as $\mu$ goes to infinity, we get back our LBO eigenbasis domain. The parameter $\mu$ controls the smoothness of the desired basis. The benefits of this hybrid model in representing out of sample information are demonstrated in [2], as can be seen in Figure 3.

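As a minimal discrete sketch of this marriage (assuming uniform areas, $A = I$, and a generic graph Laplacian $L$ in place of the LBO stiffness matrix), note that under the constraint $U^{\mathsf T} U = I$ the combined objective reduces to maximizing $\operatorname{trace}(U^{\mathsf T}(X X^{\mathsf T} - \mu L)U)$, solved by the top-$k$ eigenvectors of $X X^{\mathsf T} - \mu L$:

```python
import numpy as np

def regularized_pca(X, L, mu, k):
    """Discrete sketch of regularized PCA with uniform areas (A = I):
    minimize ||X - U U^T X||_F^2 + mu * trace(U^T L U)  s.t.  U^T U = I.
    The minimizer is spanned by the top-k eigenvectors of X X^T - mu L."""
    evals, evecs = np.linalg.eigh(X @ X.T - mu * L)
    return evecs[:, np.argsort(evals)[::-1][:k]]

# toy data: 5 signals sampled at n points; path-graph Laplacian as smoother
rng = np.random.default_rng(2)
n, k = 8, 2
X = rng.standard_normal((n, 5))
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0

U0 = regularized_pca(X, L, mu=0.0, k=k)      # plain PCA subspace
Uinf = regularized_pca(X, L, mu=1e6, k=k)    # essentially the Laplacian eigenbasis
```

The two extremes recover the classical solutions: at $\mu = 0$ the PCA subspace, and for very large $\mu$ the span of the smoothest Laplacian eigenvectors.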
Figure 3: Reconstruction of the dog shapes in the second row, by projecting their coordinates onto the first eigenfunctions of the Laplace-Beltrami operator (third row), onto the PCA basis trained with the two shapes from the first row (fourth row), and onto the basis functions of the regularized-PCA basis trained with the two dogs from the first row (bottom/fifth row).

This model allows us to design an alternative basis which is related to the spectral domain but whose properties can be tuned to fit specific information about the data.

7 Conclusion

A theoretical support for the selection of the leading eigenfunctions of the Laplace-Beltrami operator of a given shape, as a natural domain for the morphometric study of surfaces, was provided. The optimality result motivates the design of efficient shape matching and analysis algorithms. It enabled us to find the most efficient representations of smooth functions on surfaces, in terms of both accuracy and complexity, when projected onto the first eigenfunctions of the Laplace-Beltrami operator. Our optimality criterion is obviously defined with respect to a given metric. In shape representation, the choice of an appropriate metric is probably as important as the selection of the most efficient sub-space. This was demonstrated in approximating the fine details and the general structure of the shape of a horse in Section 4, using a regular metric, a scale invariant one, and a metric interpolating between the two. Spectral classical scaling and its generalized version benefit from the presented optimality result, as does the regularized version of classical PCA. In both cases it was demonstrated that the decomposition of the LBO provides a natural sub-space to operate in.

The provably optimal representation space allows us to construct efficient tools for computational morphometry - the numerical study of shapes. Revisiting the optimality criteria could obviously lead to alternative domains, and hopefully better analysis tools, that we plan to explore in the future.

8 Acknowledgements

YA and RK thank Anastasia Dubrovina, Alon Shtern, Aaron Wetzler, and Michael Zibulevsky for intriguing discussions. This research was supported by the European Community’s FP7-ERC program, grant agreement no. 267414. HB was partially supported by NSF grant DMS-1207793 and also by grant number 238702 of the European Commission (ITN, project FIRST).


Theorem A.1.

Given a Riemannian manifold $M$ with a metric $g$, the induced LBO, $\Delta_g$, and its spectral basis $\{\phi_i\}$, where $\Delta_g \phi_i = -\lambda_i \phi_i$, and a real scalar value $\alpha < \frac{1}{n}$, there is no orthonormal basis of functions $\{\psi_i\}$ in $L^2(M)$ and an integer $n$ such that

$$\left\| f - \sum_{i=1}^{n} \langle f, \psi_i \rangle_g\, \psi_i \right\|_g^2 \le \alpha\, \frac{\|\nabla_g f\|_g^2}{\lambda_{n+1}} \quad \forall f \in H^1(M).$$

To prove the optimality of the LBO eigenbasis, let us first prove the following lemma.

Lemma A.1.

Given an orthonormal set $u_1, \dots, u_{n+1}$ of a Hilbert space $H$, and an orthogonal projection operator $P$ of $H$ such that

$$\dim(\ker P) \le n,$$

where $\ker P$ denotes the kernel of $P$, then

$$\sum_{i=1}^{n+1} \| P u_i \|^2 \ge 1.$$
Proof. Let us denote

$$e_i = \| P u_i \|^2, \qquad i = 1, \dots, n+1.$$

Because the operator $P$ is an orthogonal projection and the set $\{u_i\}$ is orthonormal, for any $v = \sum_{i=1}^{n+1} c_i u_i$ we have

$$\| P v \| = \left\| \sum_{i=1}^{n+1} c_i\, P u_i \right\| \le \sum_{i=1}^{n+1} |c_i|\, \| P u_i \|.$$

By the Cauchy-Schwarz inequality, we have that

$$\| P v \| \le \left( \sum_{i=1}^{n+1} c_i^2 \right)^{1/2} \left( \sum_{i=1}^{n+1} e_i \right)^{1/2},$$

and since $\|v\|^2 = \sum_i c_i^2$, for any unit vector $v$ in the span of the $u_i$ we have $\| P v \|^2 \le \sum_i e_i$.

Now, assume that

$$\sum_{i=1}^{n+1} e_i < 1.$$

Then, since $I - P$ is the orthogonal projection onto $\ker P$ and therefore has rank at most $n < n + 1 = \dim \operatorname{span}\{u_1, \dots, u_{n+1}\}$, we can find a vector $v \in \operatorname{span}\{u_1, \dots, u_{n+1}\}$ such that $\|v\| = 1$ and $(I - P)v = 0$. Since $P v = v$, it follows that

$$\| P v \| = \| v \| = 1.$$

But this contradicts the assumption that $\sum_i e_i < 1$, because $\|v\| = 1$ implies

$$\| P v \|^2 \le \sum_{i=1}^{n+1} e_i < 1. \qquad \blacksquare$$
Equipped with this result, we are now ready to prove Theorem A.1.


Proof. Assume that there exists such a basis, $\{\psi_i\}$, and let $P$ be the orthogonal projection onto the orthogonal complement of $\operatorname{span}\{\psi_1, \dots, \psi_n\}$, so that $P f = f - \sum_{i=1}^{n} \langle f, \psi_i \rangle_g\, \psi_i$ and $\dim(\ker P) = n$. Then, the representation of a function $f$ in the eigenbasis of the LBO can be written as

$$f = \sum_i \langle f, \phi_i \rangle_g\, \phi_i.$$

We straightforwardly have

$$\| \nabla_g \phi_j \|_g^2 = \lambda_j,$$

and it follows, applying the assumed inequality to $f = \phi_j$, that

$$\| P \phi_j \|_g^2 \le \alpha\, \frac{\lambda_j}{\lambda_{n+1}}.$$

Then, summing over $j = 1, \dots, n+1$, and using $\lambda_1 = 0$ and $\lambda_j \le \lambda_{n+1}$, we have

$$\sum_{j=1}^{n+1} \| P \phi_j \|_g^2 \le \frac{\alpha}{\lambda_{n+1}} \sum_{j=1}^{n+1} \lambda_j \le \alpha\, n,$$

and since $\alpha < \frac{1}{n}$, we can state that there is a set of $n+1$ orthonormal vectors, belonging to an orthonormal basis, whose total projection error (over the space spanned by $\psi_1, \dots, \psi_n$) is smaller than one. According to the previous lemma, the original assumption leads to a contradiction, because the dimension of the kernel of the projection $P$ is $n$, which satisfies the assumption of the lemma. ∎


  • [1] Y. Aflalo, A. Dubrovina, and R. Kimmel. Spectral generalized multi-dimensional scaling. CoRR, abs/1311.2187, 2013.
  • [2] Y. Aflalo and R. Kimmel. Regularized PCA. Submitted, 2013.
  • [3] Y. Aflalo and R. Kimmel. Spectral multidimensional scaling. Proceedings of the National Academy of Sciences, 110(45):18052–18057, 2013.
  • [4] Y. Aflalo, R. Kimmel, and D. Raviv. Scale invariant geometry for nonrigid shapes. SIAM Journal on Imaging Sciences, 6(3):1579–1597, 2013.
  • [5] M. Aubry, U. Schlickewei, and D. Cremers. The wave kernel signature: A quantum mechanical approach to shape analysis. In Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, pages 1626–1633. IEEE, 2011.
  • [6] P. Bérard, G. Besson, and S. Gallot. Embedding Riemannian manifolds by their heat kernel. Geometric and Functional Analysis, 4(4):373–398, 1994.
  • [7] I. Borg and P. Groenen. Modern Multidimensional Scaling: Theory and Applications. Springer, 1997.
  • [8] H. Brezis. Functional Analysis, Sobolev Spaces and Partial Differential Equations. Universitext Series. Springer, 2010.
  • [9] A. M. Bronstein, M. M. Bronstein, and R. Kimmel. Efficient computation of isometry-invariant distances between surfaces. SIAM Journal on Scientific Computing, 28(5):1812–1836, 2006.
  • [10] A. M. Bronstein, M. M. Bronstein, and R. Kimmel. Generalized multidimensional scaling: A framework for isometry-invariant partial surface matching. Proceedings of the National Academy of Sciences, 103(5):1168–1172, 2006.
  • [11] A. M. Bronstein, M. M. Bronstein, R. Kimmel, M. Mahmoudi, and G. Sapiro. A Gromov-Hausdorff framework with diffusion geometry for topologically-robust non-rigid shape matching. International Journal of Computer Vision, 89(2-3):266–286, 2010.
  • [12] R. R. Coifman and S. Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5 – 30, 2006. Special Issue: Diffusion Maps and Wavelets.
  • [13] R. R. Coifman, S. Lafon, A. B. Lee, M. Maggioni, B. Nadler, F. Warner, and S. W. Zucker. Geometric diffusions as a tool for harmonic analysis and structure definition of data: Diffusion maps. Proceedings of the National Academy of Sciences, 102(21):7426–7431, 2005.
  • [14] A. Elad and R. Kimmel. On bending invariant signatures for surfaces. IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), 25(10):1285–1295, 2003.
  • [15] K. Gȩbal, J. A. Bærentzen, H. Aanæs, and R. Larsen. Shape analysis using the auto diffusion function. In Computer Graphics Forum, volume 28, pages 1405–1413. Wiley Online Library, 2009.
  • [16] B. Jiang, C. Ding, B. Luo, and J. Tang. Graph-Laplacian PCA: Closed-form solution and robustness. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference, pages 3492–3498, June 2013.
  • [17] I. T. Jolliffe. Principal Component Analysis. Springer, second edition, Oct. 2002.
  • [18] Z. Karni and C. Gotsman. Spectral compression of mesh geometry. ACM Transactions on Graphics, pages 279–286, 2000.
  • [19] R. Kimmel and J. A. Sethian. Fast marching methods on triangulated domains. Proceedings of the National Academy of Sciences, 95:8431–8435, 1998.
  • [20] A. Kovnatsky, M. M. Bronstein, A. M. Bronstein, K. Glashoff, and R. Kimmel. Coupled quasi-harmonic bases. Computer Graphics Forum, 32:439–448, 2013.
  • [21] B. Lévy. Laplace-Beltrami eigenfunctions towards an algorithm that “understands” geometry. In Shape Modeling and Applications, 2006. SMI 2006. IEEE International Conference on, pages 13–13. IEEE, 2006.
  • [22] F. Memoli and G. Sapiro. A theoretical and computational framework for isometry invariant recognition of point cloud data. Foundations of Computational Mathematics, 5(3):313–347, 2005.
  • [23] J. S. B. Mitchell, D. M. Mount, and C. H. Papadimitriou. The discrete geodesic problem. SIAM J. Comput., 16(4):647–668, Aug. 1987.
  • [24] M. Ovsjanikov, M. Ben-Chen, J. Solomon, A. Butscher, and L. Guibas. Functional maps: a flexible representation of maps between shapes. ACM Transactions on Graphics, 31(4):1:11, July 2012.
  • [25] J. Pokrass, A. M. Bronstein, M. M. Bronstein, P. Sprechmann, and G. Sapiro. Sparse modeling of intrinsic correspondences. Computer Graphics Forum (EUROGRAPHICS), 32:459–468, 2013.
  • [26] H. Qiu and E. R. Hancock. Clustering and embedding using commute times. IEEE Trans. Pattern Anal. Mach. Intell., 29(11):1873–1890, Nov. 2007.
  • [27] R. Rustamov, M. Ovsjanikov, O. Azencot, M. Ben-Chen, F. Chazal, and L. Guibas. Map-based exploration of intrinsic shape differences and variability. In SIGGRAPH. ACM, 2013.
  • [28] J. A. Sethian. A fast marching level set method for monotonically advancing fronts. Proceedings of the National Academy of Sciences, 93:1591–1595, 1996.
  • [29] J. Sun, M. Ovsjanikov, and L. Guibas. A concise and provably informative multi-scale signature based on heat diffusion. In Proceedings of the Symposium on Geometry Processing, SGP ’09, pages 1383–1392, Aire-la-Ville, Switzerland, 2009. Eurographics Association.
  • [30] V. Surazhsky, T. Surazhsky, D. Kirsanov, S. J. Gortler, and H. Hoppe. Fast exact and approximate geodesics on meshes. ACM Transactions on Graphics, 24(3):553–560, July 2005.
  • [31] J. Tsitsiklis. Efficient algorithms for globally optimal trajectories. Automatic Control, IEEE Transactions on, 40(9):1528–1538, Sep 1995.
  • [32] H. Weinberger. Variational Methods for Eigenvalue Approximation. Society for Industrial and Applied Mathematics, 1974.
  • [33] A. Zaharescu, E. Boyer, K. Varanasi, and R. Horaud. Surface feature detection and description with applications to mesh matching. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 373–380. IEEE, 2009.
  • [34] G. Zigelman, R. Kimmel, and N. Kiryati. Texture mapping using surface flattening via multidimensional scaling. Visualization and Computer Graphics, IEEE Transactions on, 8(2):198–207, Apr 2002.