Fast Classical Scaling

Gil Shamai, Michael Zibulevsky, Ron Kimmel
Computer Science Department, Technion, Israel Institute of Technology, Haifa 32000.
Abstract

Multidimensional scaling (MDS) is a dimensionality reduction tool used for information analysis, data visualization and manifold learning. Most MDS procedures embed data points in low dimensional Euclidean (flat) domains, such that distances between the embedded points are as close as possible to given inter-point dissimilarities. We present an efficient solver for Classical Scaling, a specific MDS model, by extrapolating the information provided by distances measured from a subset of the points to the rest. The computational and space complexities of the new MDS procedures are thereby reduced from quadratic to quasi-linear in the number of data points. Incorporating both local and global information about the data allows us to construct a low rank approximation of the inter-geodesic distances between the data points. As a by-product, the proposed method allows for efficient computation of geodesic distances. Finally, we show how to apply our method to two geometric analysis applications and obtain state of the art results.

1 Introduction

The need for data simplification and dimensionality reduction tools constantly expands with the increasing availability of digital information. The Self-Organizing Map (SOM) [17] and Local Coordinate Coding [34, 35] are examples of such data reduction techniques, which can be used to simplify the data and hence reduce the computational and space complexities of procedures dealing with it. Multidimensional Scaling (MDS) [5] is an example of such methods; it embeds the data into a low dimensional Euclidean space, while attempting to preserve the distance between each pair of points. Flat embedding is a fundamental step in various applications in the fields of data mining, statistics, manifold learning, non-rigid shape analysis, and more. For example, in [21] Panozzo et al. demonstrated how to generalize weighted averages to surfaces using MDS. It enabled them to efficiently compute splines on surfaces, solve remeshing problems, find shape correspondences, transfer texture, and evaluate morphing. Zigelman et al. [36] used MDS for texture mapping, while Schwartz et al. [27, 11] utilized it to flatten models of monkeys’ cortical surfaces for further analysis. In [28, 25, 23], MDS was applied to image and video analysis. In [12], MDS was used to embed surfaces into a Euclidean space, such that the Euclidean distance between each pair of embedded points is as similar as possible to the geodesic distance between the corresponding surface points. This embedding is referred to as a canonical form and is invariant to isometric deformations of the surface. Such canonical forms reveal the intrinsic structure of the surface, and can be used to simplify tasks such as non-rigid object matching and classification.

The bottleneck step in MDS is computing and storing all pairwise distances. For example, when dealing with geodesic distances computed on surfaces, this step is time consuming and in some cases impractical. The fast marching method [16] is an example of an efficient procedure for approximating the geodesic distance map between all pairs of points on a surface; its complexity is O(n log n) per source point when used in a straightforward manner. Even with such an efficient procedure, the complexities involved in computing and storing these distances are at least quadratic in the number of points.

One way to reduce these complexities is to consider only local distances between nearby data points. Such attempts were made, for example, in Locally Linear Embedding (LLE) [24], Hessian Locally Linear Embedding (HLLE) [9], and in Laplacian eigenmaps [3]. This way, the effective time and space complexities can be reduced to quasi-linear in the number of points. However, only the local structure is captured and the global geometry is ignored. In [31], De Silva and Tenenbaum suggested computing only the distances between a small subset of landmark points. The landmarks were then embedded while ignoring the rest of the distances, and the remaining points were interpolated into the target flat space.

The Nyström method [2] is an efficient technique for reconstructing a positive semidefinite matrix using only a small subset of its columns and rows. In [33, 7], only a few columns of the pairwise distance matrix were computed, and the Nyström method was then used to interpolate the rest of the matrix. The main differences between these methods are the column sampling schemes and the way the initial geodesic distances are computed. In addition, a regularization term was used in [7] for the pseudo-inverse in the Nyström method, which further improves the reconstruction. [19] combined the Nyström method with Kernel-PCA [26] and demonstrated an efficient mesh segmentation application. Recently, Spectral MDS (SMDS) [1] obtained state-of-the-art results for efficiently approximating the embedding of MDS. In this method, complexities are reduced by translating the problem into the spectral domain, representing all inter-geodesic distances using only the first few eigenvectors of the Laplace-Beltrami operator.

We propose two methods that improve both the accuracy and the time complexity of MDS when compared to existing methods. Inspired by some of the ideas of SMDS, Fast-MDS (FMDS) [29] interpolates the distance map from a small subset of landmarks using a smoothness assumption as a prior. As opposed to SMDS, the problem is solved in the spatial domain, and accuracy is improved since no eigenvectors are omitted. Nyström-MDS (NMDS) [30] further improves the solution by learning the distance interpolation coefficients from an initial set of computed distances. In this paper, similar to most of the above mentioned MDS acceleration techniques, Classical Scaling is used as our MDS model, as it provides an optimal closed form solution. Both methods explored in this paper reconstruct the full distance matrix as a low-rank product of small matrices, and reformulate Classical Scaling through these matrices, so that only small matrices are stored and involved in the computation. Numerical experiments support the above model considerations and demonstrate high accuracy in approximating the Classical Scaling embedding. Using only a small number of samples, the approximated embeddings are close to those obtained without any approximation, with a small average error.

The paper is organized as follows. In Section 2 we briefly review the Classical Scaling method. In Section 3 we develop two methods for interpolating geodesic distances from a few samples, formulated as a multiplication of small-sized matrices. Section 4 discusses how to select the samples. Next, in Section 5 we reformulate Classical Scaling through the small-sized matrices obtained by the interpolation. Section 6 supports the proposed methods with experimental results and comparisons to other methods. Finally, before concluding, Section 7 demonstrates how the proposed methods are applied to two modern applications, namely texture mapping and weighted averages on surfaces, and discusses an extension of the embedding onto a sphere.

2 Classical Scaling

Given n points equipped with some dissimilarity measures D_ij between them, Multidimensional Scaling (MDS) methods aim at finding an embedding x_1, …, x_n in a low dimensional Euclidean space such that the Euclidean distances ||x_i − x_j|| are as close as possible to D_ij. When the points lie on a manifold, the dissimilarities D_ij can be defined as the geodesic distances between them. In this case, which we focus on in this paper, the D_ij are invariant to isometric deformations of the surface, thus revealing its intrinsic geometry. One way to define this problem, termed Classical Scaling, is through the minimization

min_{X ∈ R^{n×d}} || XX^T + (1/2) JEJ ||_F,    (1)

where E_ij = D_ij^2, J = I − (1/n) 1 1^T is the centering matrix, and, as usual, D_ij = D_ji and D_ii = 0 for all i and j. Denoting by QΛQ^T the thin eigenvalue decomposition of the symmetric matrix −(1/2)JEJ, with only the d largest eigenvalues and corresponding eigenvectors, the solution of this problem is given by X = QΛ^{1/2}. This requires the computation of all pairwise distances in the matrix D, which is not practical when dealing with more than a few thousand points. In this paper, we reconstruct E from a small set of its columns. We formulate the reconstruction as a multiplication of small-sized matrices, and show how to solve the above minimization problem without explicitly computing the reconstruction. Thus, we overcome the bottleneck of Classical Scaling and significantly reduce both space and time complexities from quadratic to quasi-linear.
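For concreteness, the following minimal Python sketch illustrates the plain Classical Scaling solution described above on a full matrix E of squared pairwise distances (dense NumPy, no acceleration); the function name and the numerical guard against small negative eigenvalues are our own choices, not part of the original formulation.

```python
import numpy as np

def classical_scaling(E, d):
    """Exact Classical Scaling: embed n points into R^d given the full
    n x n matrix E of squared pairwise (geodesic) distances."""
    n = E.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix J
    K = -0.5 * J @ E @ J                       # symmetric matrix -1/2 JEJ
    eigvals, eigvecs = np.linalg.eigh(K)       # symmetric eigendecomposition
    idx = np.argsort(eigvals)[::-1][:d]        # keep the d largest eigenvalues
    lam = np.clip(eigvals[idx], 0.0, None)     # guard against tiny negative values
    return eigvecs[:, idx] * np.sqrt(lam)      # X = Q Lambda^{1/2}
```

Its quadratic memory footprint is exactly the bottleneck that the decompositions below avoid.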

3 Distances Interpolation

Let us consider a manifold embedded in some space, and denote by E(x, y) the squared geodesic distance between two points x and y on it. In the discrete domain, the manifold is represented by n points, and E is represented by an n×n matrix. In this section we develop two methods for reconstructing E by decomposing it into smaller-sized matrices.

3.1 A smoothness based reconstruction

The task of finding small-sized matrices that, when multiplied together, approximate a given matrix has been addressed before; see for example [10, 20]. Compared to such general-purpose decompositions, better approximations can be obtained when a specific data model is taken into account. Here, we exploit the fact that the elements of the matrix to be reconstructed are squared geodesic distances derived from a smooth manifold.

Starting from the continuous case, let p be an arbitrary point on the manifold, and denote by f the squared geodesic distance from p to all other points of the manifold. Assume the values of f are known at a set of m samples p_1, …, p_m, and denote these values by b_1, …, b_m. We expect the distance function to be smooth on the manifold, as nearby points should have similar values. Therefore, we aim to find a function f that is both smooth and satisfies the constraints f(p_i) = b_i. This can be formulated through the minimization

min_f E(f)  s.t.  f(p_i) = b_i,  i = 1, …, m,    (2)

where the energy E(f) is some smoothness measure of f on the manifold. One possible, yet somewhat problematic, choice of E(f) is the Dirichlet energy

E_D(f) = ∫ ||∇f||^2 da,    (3)

used in [1], where da is the infinitesimal volume element and the gradient is computed with respect to the manifold. Here, instead, we use a different measure, termed the Laplacian energy, that yields a decomposition which is both simpler and more accurate, as we will see next. Denoting by Δf the result of applying the Laplace-Beltrami operator of the manifold to f, this energy is defined as

E_L(f) = ∫ (Δf)^2 da.    (4)

When dealing with two-dimensional triangulated surfaces, denote by the vectors f and b the discretizations of the function f and of its sampled values, and define the diagonal matrix A such that its diagonal is a discretization of the infinitesimal volume element about each corresponding vertex of the triangulation. Denote by L the discretization of the Laplace-Beltrami operator, such that Lf is the discrete counterpart of Δf. We use the cotangent weights Laplacian in our experiments [22, 3]; however, any discretization matrix of the Laplace-Beltrami operator can be used. Following these notations, the energy now reads

E_L(f) = (Lf)^T A (Lf) = f^T L^T A L f,    (5)

and the interpolation becomes

min_f f^T L^T A L f  s.t.  f(p_i) = b_i,  i = 1, …, m,    (6)

where b is a vector holding the values b_1, …, b_m. Recall that the samples p_1, …, p_m are a subset of the mesh vertices. Hence, an m×n selection matrix B can easily be defined such that the constraints read Bf = b. Since the constraints may be contaminated with noise, a relaxed form of the problem using a penalty function is used instead. This can be formulated as

min_f f^T L^T A L f + μ ||Bf − b||^2,    (7)

where μ is a sufficiently large scalar. This is a quadratic minimization problem and its solution is given by

f = Mb,    (8)
M = μ (L^T A L + μ B^T B)^{-1} B^T.    (9)

Recapping, given a set of m samples b of the distance function, f = Mb is a reconstruction of the function in the sense of being a smooth interpolation from its samples. Figures 1 and 2 demonstrate the reconstruction of f from its samples for flat and curved manifolds; a code sketch of the construction of M follows below. Recall that f is the squared geodesic distance from a point p to the rest of the points on the manifold. In this example we chose p at the center of the manifold. In Figure 1 the surface is flat, and hence the function f is simply the squared Euclidean distance from p, expressed in the Euclidean coordinates of the plane. We compare the suggested Laplacian energy to the Dirichlet energy. As can be seen, when the Dirichlet energy is used, the reconstructed function includes sharp discontinuities at the constraint points.
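The following sketch illustrates Equations (8) and (9) under the notation above; it assumes a precomputed discrete Laplacian L and diagonal area matrix A (for instance, cotangent weights and per-vertex areas), uses dense arrays for clarity rather than the sparse matrices one would use in practice, and the default value of μ is only a placeholder.

```python
import numpy as np

def interpolation_matrix(L, A, sample_idx, mu=1e3):
    """Build M of Equation (9), so that f = M @ b is the smooth interpolation
    of the sampled values b (Equation (8)).
    L: n x n discrete Laplace-Beltrami matrix, A: n x n diagonal area matrix,
    sample_idx: indices of the m sampled vertices."""
    n = L.shape[0]
    m = len(sample_idx)
    B = np.zeros((m, n))
    B[np.arange(m), sample_idx] = 1.0          # selection (constraint) matrix B
    H = L.T @ A @ L + mu * (B.T @ B)           # quadratic form of Equation (7)
    return mu * np.linalg.solve(H, B.T)        # M = mu (L^T A L + mu B^T B)^{-1} B^T

# One column of E is then reconstructed from its m sampled entries b by
# f_hat = interpolation_matrix(L, A, sample_idx) @ b
```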

Figure 1: Reconstruction of f (a column of E) on a flat surface. Left: the true values of f; the chosen samples are marked by red points. Middle and right: the reconstructed function using the Dirichlet and Laplacian energies. For comparison, the functions are colored according to the absolute error.
Figure 2: Reconstruction of f (a column of E) on a curved surface. Left: the curved surface. Middle: the true values of the distance function f, measured from the middle point of the surface and sampled at the red points. Right: the reconstructed function using the Laplacian energy.

Next, since f is actually a column of the matrix E, we reconstruct all the other columns in the same manner. Notice that the matrix M does not depend on the values of b, but only on the locations of the samples. Hence, it can be computed once and used to reconstruct all columns. Moreover, it is possible to formulate the reconstruction of all the columns of E simultaneously by a simple matrix multiplication. Let us choose m columns of E and let the matrix F hold these columns. The locations of the chosen columns in E correspond to the locations of the samples on the manifold. Each column of E corresponds to a vector b, which contains the distances from the m samples to the point that corresponds to that column; gathering these vectors for all columns yields the matrix F^T. The column selection and the computation of F are discussed in Section 4. Now, all columns of E can be interpolated simultaneously by the multiplication

Ê = M F^T.    (10)

Since E is symmetric by definition, we symmetrize its reconstruction by

Ê = (1/2)(M F^T + F M^T).    (11)

Notice that Ê is a low-rank reconstruction of E with a maximal rank of 2m. This is because M is of size n×m, which means the rank of MF^T is at most m; therefore, the rank of Ê is at most 2m. Consequently, Ê can be written as a product of matrices that are not larger than n×2m. These matrices can be obtained as follows. Denote by V = [M, F] the horizontal concatenation of M and F, and define the block matrix W = (1/2) [0 I; I 0], where I is the m×m identity matrix. It can be verified that

Ê = V W V^T.    (12)

Note that we actually only need to keep the matrices V and W instead of the whole matrix Ê, which reduces the space complexity from O(n^2) to O(nm). In our experiments, a small number of samples was sufficient for an accurate reconstruction of E, even for meshes with a large number of vertices.
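As a small illustration of Equation (12), and assuming M and F have been computed as above, the factors V and W can be formed and used without ever building the n×n matrix Ê:

```python
import numpy as np

def fmds_factors(M, F):
    """Return V (n x 2m) and W (2m x 2m) such that V @ W @ V.T equals the
    symmetrized reconstruction 0.5 * (M F^T + F M^T) of Equation (11)."""
    m = M.shape[1]
    V = np.hstack([M, F])
    W = 0.5 * np.block([[np.zeros((m, m)), np.eye(m)],
                        [np.eye(m),        np.zeros((m, m))]])
    return V, W

# A product of E_hat with a vector v, computed with O(nm) memory:
# y = V @ (W @ (V.T @ v))
```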

In this subsection we derived a low-rank approximation of the matrix E from a set of known columns. However, the same interpolation ideas can be used for other tasks, such as matrix completion, in which a matrix is reconstructed from a set of known entries that are not necessarily arranged in a few columns.

3.2 A learning based reconstruction

In the previous subsection we derived a reconstruction of E by exploiting the smoothness of the distance functions on the manifold. Although the Laplacian energy minimization results in a simple low-rank reconstruction of E, this might not be the best way to interpolate the distances. In this subsection, we formulate a different decomposition based on a learning approach.

As before, we choose m columns of E and let F hold these columns. In the previous subsection, we constructed an interpolation matrix M using the Laplace-Beltrami operator as a smoothness measure, such that f = Mb, or, for all columns at once, Ê = MF^T. Here we try to tackle the following question: is it possible to construct a coefficient matrix that yields a better reconstruction? Since E is symmetric, we may equivalently write the reconstruction as a combination of the known columns, Ê = FY. The matrix Y that gives the best reconstruction of E in terms of the Frobenius norm is obtained by

min_Y || E − F Y ||_F,    (13)

and the solution is given by Y = (F^T F)^{-1} F^T E. In this case, the reconstruction is nothing but a projection of E onto the subspace spanned by the columns of F,

Ê = F (F^T F)^{-1} F^T E.    (14)

However, we cannot compute this best Y since we do not know the entire matrix E. Hence, we suggest learning the coefficients of Y from the part of E that we do know. This can be formulated using a Hadamard product as

min_Y || S ∘ (E − F Y) ||_F,    (15)

where S is a mask marking the location of F^T in E (the known rows of E). The known rows of E can be thought of as our training data for learning Y. Let F_s hold the rows of F that correspond to the known rows of E (see Figure 3(a)). The above equation is then equivalent to

min_Y || F^T − F_s Y ||_F.    (16)

In this form, the number of coefficients in Y is large with respect to our training data, and hence this will result in over-fitting. For instance, for a typical F_s with independent columns, the reconstruction of the training data is perfect. Therefore, we next regularize this equation by reducing the number of learned coefficients, in order to avoid over-fitting the training data. Let G hold a subset of r columns of F, and let G_s hold the rows of G that correspond to the known rows of E. This is demonstrated in Figure 3(b).

(a)
(b)
Figure 3: Demonstration of the partition of E, assuming the chosen columns are the first ones. F_s and G_s are parts of the bigger matrices F and G, not separate matrices.

The regularized formulation reads

min_Y || F^T − G_s Y ||_F,    (17)

where Y is now a smaller matrix of size r×n, and the solution is given by

Y = (G_s^T G_s)^{-1} G_s^T F^T = G_s^+ F^T,    (18)

yielding what is known as a CUR decomposition of E,

Ê = G Y = G G_s^+ F^T.    (19)

CUR decompositions are widely used and studied; see for example [10, 20]. Notice that for G = F we obtain

Ê = F F_s^+ F^T,    (20)

which is known as the Nyström decomposition for positive semidefinite matrices [2]. In Figure 4 (CUR) we demonstrate the reconstruction error of Ê with respect to r, where r ≤ m is the number of chosen columns of G. Notice the significant potential improvement over the un-regularized case r = m, which results in the Nyström method.

Figure 4: The reconstruction error of Ê with respect to r, given by the average relative error over all entries of E, using the learning approach (CUR) and its proposed modification. E is computed from the Giraffe shape in the SHREC database (see the experimental results).

In order to reduce over-fitting, we defined the matrix G as a subset of the columns of F. However, a different choice of G can lead to a better reconstruction of E. Denote by UΛU^T the eigenvalue decomposition of F_s, and denote by U_rΛ_rU_r^T its thin eigenvalue decomposition with only the r eigenvalues of largest magnitude and the corresponding eigenvectors. The choice of

G = F U_r    (21)

formulates G as a linear combination of the columns of F instead of a subset of them, thus exploiting the data better. Moreover, this results in a symmetric decomposition of Ê, as we will see next. Plugging G = FU_r and G_s = F_sU_r = U_rΛ_r into Equation (19), we obtain

Ê = F U_r (U_r Λ_r)^+ F^T,    (22)

which can be reduced to

Ê = F U_r Λ_r^{-1} U_r^T F^T.    (23)

Finally, by defining V = FU_r and W = Λ_r^{-1}, we obtain the desired decomposition

Ê = V W V^T.    (24)

This results in a symmetric low-rank decomposition of E with rank r (compared to rank 2m in the previous subsection). It can be thought of as a variant of the Nyström method in which U_rΛ_rU_r^T serves as a regularized version of F_s. In Figure 4 (modified) we demonstrate the improvement achieved by this modification. In all our experiments a single fixed value of r was a good choice.
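A short sketch of this modified decomposition, under the notation above and with the truncation rank r left as a free parameter, is the following (an illustration of Equations (21)-(24), not a reference implementation):

```python
import numpy as np

def nmds_factors(F, sample_idx, r):
    """Symmetric rank-r decomposition E_hat = V @ W @ V.T of Equation (24).
    F: n x m matrix of sampled columns of E; sample_idx: their row indices."""
    Fs = F[sample_idx, :]                        # m x m block of E at the samples
    Fs = 0.5 * (Fs + Fs.T)                       # enforce symmetry numerically
    eigvals, eigvecs = np.linalg.eigh(Fs)
    idx = np.argsort(np.abs(eigvals))[::-1][:r]  # r eigenvalues of largest magnitude
    Ur, lam = eigvecs[:, idx], eigvals[idx]
    V = F @ Ur                                   # V = F U_r, size n x r
    W = np.diag(1.0 / lam)                       # W = Lambda_r^{-1}
    return V, W
```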

4 Choosing the set of columns

The low-rank approximations of E developed in Section 3 are constrained to the subspace spanned by the m chosen columns. Thus, a good choice of columns is one that captures most of the information in E. The farthest point sampling strategy is a 2-optimal method for selecting points on a manifold that are far away from each other [14]. The first point is chosen arbitrarily. Then, recursively, the next point is chosen as the farthest (in the geodesic sense) from the already chosen ones. After each point is selected, the geodesic distances from it to the rest of the points are computed.

The computation of geodesic distances from a point to the rest of the points can be performed efficiently using the fast marching method [16] for two-dimensional triangulated surfaces, and Dijkstra's shortest path algorithm [8] for higher dimensions. For surfaces with n vertices, both methods have a complexity of O(n log n) per source point. Therefore, the complexity of choosing m points with the farthest point sampling strategy is O(mn log n).

Using the farthest point sampling strategy we obtain m samples from the manifold and their distances to the rest of the points, which, when squared, correspond to m columns of E. Since the samples are far from each other, the corresponding columns are expected to be far from each other as well, and to serve as a good basis for the range of E. While other column selection methods need to store the whole matrix, or at least scan it several times, in order to decide which columns to choose, here, by exploiting the fact that the columns correspond to geodesic distances on a manifold, we do not need to know the entire matrix in advance. The farthest point sampling method is described in Procedure 1, and a code sketch follows it.

1: Input: a manifold with a set of n vertices and a desired number of chosen vertices m.
2: Output: a set of samples p_1, …, p_m and their distances to the rest of the vertices.
3: Choose an initial vertex p_1.
4: Compute d_1 = dist(p_1) (see the comment below).
5: for i = 2 to m do
6:     Find the farthest vertex from the already chosen ones, p_i = argmax_p min_{j < i} d_j(p).
7:     Update the set of selected samples, {p_1, …, p_i}.
8:     Compute d_i = dist(p_i).
9: end for
10: dist(p) returns a vector of geodesic distances from vertex p to the rest of the vertices.
Procedure 1 Farthest point sampling
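A minimal Python sketch of Procedure 1 for a triangle mesh is given below. It replaces fast marching by Dijkstra's algorithm on the edge graph (via SciPy), and the graph construction from vertex positions and an edge list is an assumption of the illustration; the squared rows of the returned distance matrix form the sampled columns F of E.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def farthest_point_sampling(vertices, edges, m, seed=0):
    """Select m spread-out vertices and return them together with the m x n
    matrix of graph-geodesic distances from them to all vertices."""
    n = len(vertices)
    i, j = edges[:, 0], edges[:, 1]
    w = np.linalg.norm(vertices[i] - vertices[j], axis=1)         # edge lengths
    G = csr_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])), shape=(n, n))
    samples = [seed]
    dists = dijkstra(G, indices=[seed])                           # 1 x n
    for _ in range(1, m):
        nearest = dists.min(axis=0)                               # distance to the chosen set
        samples.append(int(np.argmax(nearest)))                   # farthest vertex
        dists = np.vstack([dists, dijkstra(G, indices=[samples[-1]])])
    return np.array(samples), dists

# F = (dists.T) ** 2 then holds the m sampled columns of E.
```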

5 Accelerating Classical Scaling

In this section, we show how to obtain the solution of Classical Scaling using the decompositions constructed in the previous sections. A straightforward solution would be to plug Ê = VWV^T in place of E and repeat the following steps: (1) Compute the thin eigenvalue decomposition of −(1/2)JÊJ with the d largest eigenvalues and corresponding eigenvectors. This can be written as

−(1/2) J Ê J ≈ Q Λ Q^T.    (25)

(2) Compute the embedding by

X = Q Λ^{1/2}.    (26)

However, this solution requires computing and storing the n×n matrix Ê, resulting in high computational and memory complexities.

To that end, we propose the following alternative. Denote by QR = JV the thin QR factorization of JV. Note that the columns of Q are orthonormal, and that R is a small upper triangular matrix. We get

−(1/2) J Ê J = −(1/2) (JV) W (JV)^T = Q (−(1/2) R W R^T) Q^T.    (27)

Denote by UΛU^T the thin eigenvalue decomposition of −(1/2)RWR^T, keeping only the d largest eigenvalues and corresponding eigenvectors. This can be written as

−(1/2) R W R^T ≈ U Λ U^T,    (28)

and hence

−(1/2) J Ê J ≈ (QU) Λ (QU)^T.    (29)

Since QU is orthonormal, as a product of orthonormal matrices, and Λ is diagonal, this is actually the thin eigenvalue decomposition of −(1/2)JÊJ as in Equation (25), obtained without explicitly computing Ê. Finally, the solution is given by

X = Q U Λ^{1/2}.    (30)

We term the acceleration that uses the decomposition of Subsection 3.1 Fast-MDS (FMDS), and the acceleration that uses the decomposition of Subsection 3.2 Nyström-MDS (NMDS), and summarize them in Procedures 2 and 3; a sketch of their common final step follows the procedures.

1: Input: a manifold represented by n vertices, the number of samples m and the embedding dimension d.
2: Output: a matrix X of size n×d which contains the coordinates of the embedding.
3: Choose m vertices of the manifold and construct the matrix F of sampled columns, using the farthest point sampling described in Section 4.
4: Compute the discretized Laplace-Beltrami matrix L of the manifold using, for example, cotangent weights [22].
5: Compute M according to Equations (8) and (9).
6: Define V and W according to Equation (12), and J according to Section 2.
7: Compute the thin QR factorization QR = JV.
8: Compute Λ and U, which contain the d largest eigenvalues and corresponding eigenvectors of −(1/2)RWR^T, using eigenvalue decomposition.
9: Return the coordinates matrix X = QUΛ^{1/2}.
Procedure 2 FMDS
1: Input: a manifold represented by n vertices, the number of samples m and the embedding dimension d.
2: Output: a matrix X of size n×d which contains the coordinates of the embedding.
3: Choose m vertices of the manifold and construct the matrix F of sampled columns, using the farthest point sampling described in Section 4.
4: Denote by F_s the m rows of F which correspond to the selected vertices (see Figure 3(a)).
5: Compute U_rΛ_rU_r^T, the thin eigenvalue decomposition of F_s with the r eigenvalues of largest magnitude and corresponding eigenvectors, and denote V = FU_r and W = Λ_r^{-1}.
6: Compute the thin QR factorization QR = JV.
7: Compute Λ and U, which contain the d largest eigenvalues and corresponding eigenvectors of −(1/2)RWR^T, using eigenvalue decomposition.
8: Return the coordinates matrix X = QUΛ^{1/2}.
Procedure 3 NMDS
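Both procedures share the same final step. A schematic Python version of Section 5, assuming the factors V and W with Ê = VWV^T were produced by either decomposition (it is a sketch of the computation, not the authors' code), is:

```python
import numpy as np

def accelerated_classical_scaling(V, W, d):
    """Embedding X in R^d from a low-rank factorization E_hat = V @ W @ V.T,
    without ever forming the n x n matrix (Equations (25)-(30))."""
    JV = V - V.mean(axis=0, keepdims=True)       # J @ V, with J applied implicitly
    Q, R = np.linalg.qr(JV)                      # thin QR factorization of JV
    S = -0.5 * R @ W @ R.T                       # small matrix -1/2 R W R^T
    S = 0.5 * (S + S.T)                          # symmetrize against round-off
    eigvals, eigvecs = np.linalg.eigh(S)
    idx = np.argsort(eigvals)[::-1][:d]          # d largest eigenvalues
    lam = np.clip(eigvals[idx], 0.0, None)
    return (Q @ eigvecs[:, idx]) * np.sqrt(lam)  # X = Q U Lambda^{1/2}
```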

6 Experimental Results

Throughout this section, we refer to the classical scaling procedure (Section 2) as MDS and compare it to the following approximations: the proposed Fast-MDS (FMDS, Procedure 2), the proposed Nyström-MDS (NMDS, Procedure 3), Spectral MDS (SMDS [1]), Landmark-Isomap [31], Sampled Spectral Distance Embedding (SSDE [7]), and ISOMAP using Nyström and incremental sampling (IS-MDS [33]). All the above methods are mentioned in the introduction. In Figures 8 and 9, we refer by best to the best low-rank approximation of a matrix with respect to the Frobenius norm. For symmetric matrices this is known to be given by the thin eigenvalue decomposition, keeping only the eigenvalues with largest magnitude and corresponding eigenvectors. Unless specified otherwise, we use the non-rigid shapes from the SHREC2015 database [18] for the experiments, and each shape is down-sampled to a few thousand vertices. For SMDS we use a fixed number of eigenvectors of the Laplace-Beltrami operator. The parameter μ is set to a large constant.

The output of MDS, known as a canonical form [12], is invariant to isometric deformations of its input. We demonstrate this idea in Figure 5, showing nearly isometric deformations of the Giraffe, Paper and Watch shapes and their corresponding canonical forms obtained using the NMDS method described in Procedure 3. It can be seen that the obtained canonical forms are approximately invariant to the different poses.

Giraffe
Canonical form
Paper
Canonical form
Watch
Canonical form
Figure 5: Shapes with different poses from the SHREC database, and their corresponding canonical forms obtained by NMDS.

Unless a surface is an isometric deformation of a flat surface, flattening it into a canonical form unavoidably involves some deformation of its intrinsic geometry. This deformation is termed the embedding error, and it is defined as the value of the objective function of the MDS,

σ = || XX^T + (1/2) JEJ ||_F.    (31)

In our next experiment, shown in Figure 6, we compare the canonical forms of the Giraffe and Hand shapes obtained using the different methods. For presentation purposes we scale each embedding error by a factor that depends only on the number of vertices n, and show it below its corresponding canonical form. The qualitative and quantitative comparisons show that the proposed methods are similar to MDS in both the minimizers and the minima of the objective function. Additionally, the embedding results of NMDS are similar to those of MDS up to negligible differences.


                  MDS     NMDS    FMDS    SMDS    Landmark  SSDE
Stress (Giraffe)  5.355   5.355   5.408   5.557   6.288     10.748
Stress (Hand)     3.883   3.887   4.047   4.472   9.128     11.191
Figure 6: Canonical forms of the Giraffe and Hand shapes, using the same number of samples for the compared methods. The stress error of each embedding is displayed at the bottom of its corresponding form.

In our next experiment, we directly compare the canonical forms produced by the different methods to that of MDS. In order to compare two canonical forms, we align them using ICP [4] and then compute the maximal relative distance among all corresponding points. Figure 7 shows the results with respect to the number of samples m, on the Hand shape. It can be seen that both proposed methods outperform the others in approximating the embedding of MDS.

Figure 7: Distance of the approximated canonical forms to the true one with respect to the number of samples m, using the different methods.

The proposed methods not only provide fast and accurate approximations of MDS, but can also be used to significantly accelerate the computation of geodesic distances by approximating them. In the next two figures we use the Giraffe shape to show how accurate the approximation is. In Figure 8, we show the average relative reconstruction error of E, given by the average of the relative errors between Ê and E over all entries. Since the compared methods are essentially different low-rank approximations of E, we also add the best low-rank approximation to the comparison.

Figure 8: The reconstruction error of E with respect to the number of samples m, using the different methods.

In Figure 9, we compare a randomly chosen column of E to its reconstruction, thus showing the distribution of the approximation error over the surface. Figures 8 and 9 show that the proposed methods can be used for accelerating the computation of geodesic distances with a small average error, lower for NMDS than for FMDS, depending on the number of samples m. This can be useful for applications that need to compute and store all pairwise geodesic distances.

Figure 9: Reconstruction of a randomly chosen column of E compared to the true column, using SMDS, FMDS, NMDS and the best low-rank reconstruction. The plots are shifted horizontally to present them together.

Finally, in Figure 10, we evaluate the average computation time of MDS, NMDS, FMDS, and SMDS on shapes from the TOSCA database [6], including the computation of the geodesic distances. We vary the number of vertices by sub-sampling and re-triangulating the original shapes. The computations were evaluated on a 2.8 GHz Intel i7 computer with 16GB RAM. Due to time and memory limitations, MDS was computed only for small values of n.

Figure 10: Average computation times (in seconds) for MDS, NMDS, FMDS and SMDS with respect to the number of vertices.

7 Extensions and applications

In Subsection 7.1, we extend the proposed methods for embedding onto a sphere instead of a Euclidean space. In Subsections 7.2 and 7.3 we demonstrate the proposed methods on two applications. Throughout this section, we use NMDS for computing the embedding.

7.1 Sphere Embedding

So far we have discussed MDS as a flattening procedure, meaning that the data is embedded into a flat Euclidean space. Nevertheless, in some cases it may be beneficial to embed the data onto a sphere, on which distances are measured according to the sphere's metric.

A d-dimensional sphere is the set of all points in R^{d+1} at an equal distance from a fixed point. Without loss of generality, we assume that this fixed point is the origin, so that the sphere is centered. The geodesic distance between two points x_i and x_j on a sphere of radius ρ can be computed from the arc length

D_ij = ρ θ_ij,    (32)

where θ_ij is the angle between the vectors x_i and x_j in R^{d+1}. Given a manifold with pairwise geodesic distances D_ij, we aim to find a set of points x_1, …, x_n in R^{d+1}, representing the embedding of the manifold onto a sphere, such that the spherical distances are as close as possible to D_ij. Dividing by ρ and applying the cosine on both sides leads to cos(D_ij / ρ) = <x_i, x_j> / ρ^2.

Let D be a matrix holding the distances D_ij, and define C_ij = cos(D_ij / ρ). Since <x_i, x_j> = (XX^T)_ij, we can write, in matrix formulation,

ρ^2 C = X X^T.    (33)

Then, we can formulate the problem through the minimization

min_X || X X^T − ρ^2 C ||_F,    (34)

similar to Classical Scaling, where X is of size n×(d+1). This minimization can then be solved using the same techniques presented for FMDS and NMDS. Namely, compute the decomposition of C with one of the methods in Section 3. Then, follow Section 5 to obtain the truncated eigenvalue decomposition QΛQ^T, while ignoring the matrix J and the scalar −(1/2). The final solution is then given by X = ρQΛ^{1/2}.

A normalization step of the rows of X can be added in order to constrain the points to lie exactly on the sphere. Notice that when an exact spherical solution exists, it is obtained without this step. This is due to the fact that C_ii = cos(0) = 1, and therefore (XX^T)_ii = ρ^2. Hence, for an exact solution without embedding errors we get ||x_i|| = ρ for every point, which holds only when the rows of X are normalized.
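A compact sketch of this spherical variant is given below; for clarity it works with the full cosine matrix (in practice C would be decomposed with the methods of Section 3), and the optional row normalization follows the discussion above.

```python
import numpy as np

def sphere_embedding(D, rho, d, normalize=False):
    """Embed n points onto a d-dimensional sphere of radius rho in R^{d+1},
    given the n x n matrix D of pairwise geodesic distances."""
    C = np.cos(D / rho)                        # cos(D_ij / rho) = <x_i, x_j> / rho^2
    eigvals, eigvecs = np.linalg.eigh(rho**2 * C)
    idx = np.argsort(eigvals)[::-1][:d + 1]    # a d-sphere lives in R^{d+1}
    lam = np.clip(eigvals[idx], 0.0, None)
    X = eigvecs[:, idx] * np.sqrt(lam)
    if normalize:                              # optionally force the points onto the sphere
        X *= rho / np.linalg.norm(X, axis=1, keepdims=True)
    return X
```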

In the following experiment, a camera was placed on a chair at the center of a room, and a video was taken while the chair was spinning. The frames of the video can be thought of as points in a high dimensional space, lying on a manifold with a spherically structured intrinsic geometry. Using the method described in this subsection, we embed this manifold onto a one-dimensional sphere (without the normalization step), and show the results in Figure 11. A visualization of this experiment is included in the supplementary material. As can be seen, both MDS and NMDS result in a similar circular embedding, revealing the intrinsic geometry of the manifold. The pose of the camera for any frame can then be extracted from the embedding, even if the order of the frames in the video is not known. This experiment also demonstrates that the proposed methods are capable of dealing with more complex manifolds in high dimensional spaces and with embeddings into metrics other than Euclidean ones.

(a) Using MDS.
(b) Using NMDS.
Figure 11: Embedding points onto a sphere using full MDS (left) and the proposed NMDS (right). The points are frames of a video of a room taken from its center while rotating through two full turns. The images at the top show the embedding onto a one-dimensional sphere. For better visualization, we add an additional axis for the frame number, shown in the images at the bottom.

7.2 Texture Mapping

Texture mapping is a procedure in which a flat texture is mapped onto a curved surface. The main problems with most existing methods for texture mapping are that they involve high computational complexities and introduce large distortions into the texture. Zigelman et al. [36] addressed this problem using a flat embedding of the surface, obtained with MDS. This embedding defines a mapping of the surface to a plane, on which the texture lies. Unlike other methods, using MDS does not require boundary conditions, and it preserves both the local and the global structure of the texture. Plugging our acceleration of MDS into this method results in a simple, efficient and accurate texture mapping procedure. In Figure 12 a texture of a chess-board is mapped onto a synthetic surface with positive and negative Gaussian curvatures, using both MDS and NMDS. In Figure 13 a texture of stripes and an image of a sheep are projected onto a curved face surface.

(a) Using MDS.
(b) Using NMDS.
Figure 12: Mapping a chess-board onto a synthetic surface with minima, maxima and saddle points. The surface is flattened using the MDS (left) and NMDS (right) methods, creating a mapping of the surface to the plane. Then, the desired texture is back-projected onto the surface using this mapping.
(a)
(b)
Figure 13: Stripes and a sheep image projected onto a face surface using NMDS.

7.3 Weighted averages on surfaces

Finding affine combinations of points in a Euclidean space is a fundamental and simple problem. Given anchor points q_1, …, q_k and corresponding weights w_1, …, w_k which sum up to one, the affine combination is defined as the weighted average

q = Σ_{i=1}^{k} w_i q_i.    (35)

Conversely, given the affine combination, it is required to find the satisfying weights. The generalization of weighted averages to surfaces, however, is not trivial; yet, if achieved, it enables applications such as splines on surfaces, remeshing, shape correspondence, texture transfer, morphing, and more. One way to generalize this problem to surfaces is the Fréchet mean [13], which defines the weighted average as the point on the surface that minimizes the sum of weighted squared geodesic distances to the anchors:

q* = argmin_q Σ_{i=1}^{k} w_i d_g(q, q_i)^2,    (36)

where d_g denotes the geodesic distance on the surface. Panozzo et al. [21] showed that it is possible to obtain a good estimate of this quantity using flat embeddings. According to their method, the surface is first embedded into a Euclidean space using MDS or any other flattening technique. Then, the weighted average is computed as a simple affine combination in the Euclidean space. Since this may result in a point outside the surface, a third step of projecting it back onto the surface has to be performed; a schematic version of these three steps is sketched below. Panozzo et al. combined Metric Multidimensional Scaling with least-squares meshes [32] for the flattening step. For convenience we term this method MMDS. In MMDS, similar to Landmark-Isomap, a small set of the surface points is chosen and flattened using MDS, and then the rest of the points are interpolated. In the following experiments we analyze the result of plugging the proposed methods into the flattening step.
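The sketch below is a simplified Python version of these three steps; the flat embedding X is assumed to come from NMDS (or any other flattening), and the projection step is reduced to snapping to the nearest embedded vertex rather than to the continuous surface.

```python
import numpy as np

def weighted_average_on_surface(X, anchor_idx, weights):
    """Approximate Frechet mean: average the embedded anchors in the flat
    space (Equation (35)) and project back to the closest surface vertex."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                            # the weights must sum to one
    target = w @ X[anchor_idx]                 # affine combination in the flat space
    return int(np.argmin(np.linalg.norm(X - target, axis=1)))   # nearest vertex index
```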

In Figure 14 we compare the canonical forms of the Cat shape from the TOSCA database obtained using MDS, NMDS, and MMDS. As for MMDS, it can be seen that problems are expected in extremities such as the tail and the legs. Using weighted averages on surfaces, geodesic paths between anchors can be computed by appropriately adjusting the weights.

(a) MMDS
(b) NMDS
(c) MDS
Figure 14: Embedding of the Cat shape into a flat three-dimensional space as the first step of the procedure, using MMDS (150 samples), NMDS (30 samples), and MDS.

In Figure 15 we show the obtained points on the geodesic paths between four selected anchors on the tail of the cat. It can be seen that NMDS significantly improves the results of [21]. In addition, 30 samples were enough for this task with NMDS, whereas the 150 samples used in MMDS were still insufficient, thus reducing complexities by a factor of 5.

(a) MMDS
(b) NMDS
(c) MDS
Figure 15: Computing points on the geodesic paths between 4 selected anchors, using MMDS (150 samples), NMDS (30 samples) and MDS

Given an initial set of corresponding points on two shapes, weighted averages on surfaces can be used for completing the correspondence for the rest of the points. For each point on the first shape, this is done by computing its weights with respect to the anchors, and then finding the point on the second shape that corresponds to these weights. In Figure 16 we compare the correspondence results for Cat0 and Cat2 from the TOSCA database, using the protocol from [15]. In both methods the same points were chosen for the initial correspondence, and the same samples were used for the embedding. It can be seen that better results are achieved with NMDS than with MMDS.

Figure 16: Correspondence evaluation between two cat shapes from TOSCA database, using NMDS and MMDS.

8 Conclusions

In this paper we developed two methods for approximating MDS in its Classical Scaling version. In FMDS, following the ideas of Spectral MDS, we interpolate the geodesic distances from a few samples using a smoothness assumption. This interpolation results in a multiplication of small matrices that is a low-rank approximation of the distance matrix. Then, the time and space complexities of MDS are significantly reduced by reformulating its minimization through the small matrices. The main contribution to the improvement in time complexity and accuracy over SMDS is the fact that we work entirely in the spatial domain, as opposed to SMDS, which translates the problem into the spectral domain and truncates the eigenspace. FMDS sets the stage for the second method, which we term NMDS. Instead of constructing an interpolation matrix using smoothness assumptions, it is now learned from the sampled data, which results in a different low-rank approximation of the distance matrix. A small modification of the method shows its relation to the Nyström approximation. Although NMDS performs better than FMDS in all experiments, the interpolation ideas of FMDS can also be used for other tasks, such as matrix completion, where, instead of being arranged in columns, the known entries might be spread over the matrix. Experimental results show that both methods achieve state of the art results in canonical form computation. In addition, these methods offer an acceleration for computing geodesic distances on manifolds. Two applications show the benefits of being able to accelerate MDS. Finally, an extension to embedding onto spheres shows that these methods can deal with more complex manifolds in high dimensional spaces embedded into non-flat metrics.

9 Acknowledgments

The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme, ERC Grant agreement no. 267414.

References

  • [1] Y. Aflalo and R. Kimmel. Spectral multidimensional scaling. Proceedings of the National Academy of Sciences, 110(45):18052–18057, 2013.
  • [2] N. Arcolano and P. J. Wolfe. Nyström approximation of wishart matrices. In Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on, pages 3606–3609. IEEE, 2010.
  • [3] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS, volume 14, pages 585–591, 2001.
  • [4] Paul J Besl and Neil D McKay. Method for registration of 3-d shapes. In Robotics-DL tentative, pages 586–606. International Society for Optics and Photonics, 1992.
  • [5] I. Borg and P. J. Groenen. Modern multidimensional scaling: Theory and applications. Springer, 2005.
  • [6] A. M. Bronstein, M. M. Bronstein, and R. Kimmel. Numerical geometry of non-rigid shapes. Springer, 2008.
  • [7] A. Civril, M. Magdon-Ismail, and E. Bocek-Rivele. SSDE: Fast graph drawing using sampled spectral distance embedding. In Graph Drawing, pages 30–41. Springer, 2007.
  • [8] E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269–271, 1959.
  • [9] D. L. Donoho and C. Grimes. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proceedings of the National Academy of Sciences, 100(10):5591–5596, 2003.
  • [10] P. Drineas, R. Kannan, and M. W. Mahoney. Fast monte carlo algorithms for matrices iii: Computing a compressed approximate matrix decomposition. SIAM Journal on Computing, 36(1):184–206, 2006.
  • [11] H. A. Drury, D. C. Van Essen, C. H. Anderson, C. W. Lee, T. A. Coogan, and J. W. Lewis. Computerized mappings of the cerebral cortex: a multiresolution flattening method and a surface-based coordinate system. Journal of cognitive neuroscience, 8(1):1–28, 1996.
  • [12] A. Elad and R. Kimmel. On bending invariant signatures for surfaces. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 25(10):1285–1295, 2003.
  • [13] Maurice Fréchet. Les éléments aléatoires de nature quelconque dans un espace distancié. In Annales de l’institut Henri Poincaré, volume 10, pages 215–310, 1948.
  • [14] D. S. Hochbaum and D. B. Shmoys. A best possible heuristic for the k-center problem. Mathematics of operations research, 10(2):180–184, 1985.
  • [15] Vladimir G Kim, Yaron Lipman, and Thomas Funkhouser. Blended intrinsic maps. In ACM Transactions on Graphics (TOG), volume 30, page 79. ACM, 2011.
  • [16] R. Kimmel and J. A. Sethian. Computing geodesic paths on manifolds. Proceedings of the National Academy of Sciences, 95(15):8431–8435, 1998.
  • [17] T. Kohonen. The self-organizing map. Neurocomputing, 21(1):1–6, 1998.
  • [18] Z. Lian, J. Zhang, S. Choi, H. ElNaghy, J. El-Sana, T. Furuya, A. Giachetti, R. A. Guler, L. Lai, C. Li, H. Li, F. A. Limberger, R. Martin, R. U. Nakanishi, A. P. Neto, L. G. Nonato, R. Ohbuchi, K. Pevzner, D. Pickup, P. Rosin, A. Sharf, L. Sun, X. Sun, S. Tari, G. Unal, and R. C. Wilson. Non-rigid 3D Shape Retrieval. In I. Pratikakis, M. Spagnuolo, T. Theoharis, L. Van Gool, and R. Veltkamp, editors, Eurographics Workshop on 3D Object Retrieval. The Eurographics Association, 2015.
  • [19] R. Liu, V. Jain, and H. Zhang. Sub-sampling for efficient spectral mesh processing. In Advances in Computer Graphics, pages 172–184. Springer, 2006.
  • [20] M. W. Mahoney and P. Drineas. CUR matrix decompositions for improved data analysis. Proceedings of the National Academy of Sciences, 106(3):697–702, 2009.
  • [21] Daniele Panozzo, Ilya Baran, Olga Diamanti, and Olga Sorkine-Hornung. Weighted averages on surfaces. ACM Transactions on Graphics (TOG), 32(4):60, 2013.
  • [22] Ulrich Pinkall and Konrad Polthier. Computing discrete minimal surfaces and their conjugates. Experimental mathematics, 2(1):15–36, 1993.
  • [23] R. Pless. Image spaces and video trajectories: Using isomap to explore video sequences. In ICCV, volume 3, pages 1433–1440, 2003.
  • [24] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
  • [25] Y. Rubner and C. Tomasi. Perceptual metrics for image database navigation. Springer, 2000.
  • [26] B. Schölkopf, A. Smola, and K. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural computation, 10(5):1299–1319, 1998.
  • [27] E. L. Schwartz, A. Shaw, and E. Wolfson. A numerical solution to the generalized mapmaker’s problem: flattening nonconvex polyhedral surfaces. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 11(9):1005–1008, 1989.
  • [28] H. Schweitzer. Template matching approach to content based image indexing by low dimensional euclidean embedding. In Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on, volume 2, pages 566–571. IEEE, 2001.
  • [29] Gil Shamai, Yonathan Aflalo, Michael Zibulevsky, and Ron Kimmel. Classical scaling revisited. In Proceedings of the IEEE International Conference on Computer Vision, pages 2255–2263, 2015.
  • [30] Gil Shamai, Michael Zibulevsky, and Ron Kimmel. Accelerating the computation of canonical forms for 3d nonrigid objects using multidimensional scaling. In Proceedings of the 2015 Eurographics Workshop on 3D Object Retrieval, pages 71–78. Eurographics Association, 2015.
  • [31] V. de Silva and J. B. Tenenbaum. Global versus local methods in nonlinear dimensionality reduction. In Advances in Neural Information Processing Systems, pages 705–712, 2002.
  • [32] Olga Sorkine and Daniel Cohen-Or. Least-squares meshes. In Shape Modeling Applications, 2004. Proceedings, pages 191–199. IEEE, 2004.
  • [33] H. Yu, X. Zhao, X. Zhang, and Y. Yang. ISOMAP using Nyström method with incremental sampling. Advances in Information Sciences & Service Sciences, 4(12), 2012.
  • [34] Kai Yu, Tong Zhang, and Yihong Gong. Nonlinear learning using local coordinate coding. In Advances in neural information processing systems, pages 2223–2231, 2009.
  • [35] Yin Zhou and Kenneth E Barner. Locality constrained dictionary learning for nonlinear dimensionality reduction. Signal Processing Letters, IEEE, 20(4):335–338, 2013.
  • [36] Gil Zigelman, Ron Kimmel, and Nahum Kiryati. Texture mapping using surface flattening via multidimensional scaling. Visualization and Computer Graphics, IEEE Transactions on, 8(2):198–207, 2002.