Solving variational problems and partial differential equations that map between manifolds via the closest point method
Abstract
Maps from a source manifold $\mathcal{M}$ to a target manifold $\mathcal{N}$ appear in liquid crystals, colour image enhancement, texture mapping, brain mapping, and many other areas. A numerical framework to solve variational problems and partial differential equations (PDEs) that map between manifolds is introduced in this paper. Our approach, the closest point method for manifold mapping, reduces the problem of solving a constrained PDE between the manifolds $\mathcal{M}$ and $\mathcal{N}$ to the simpler problems of solving a PDE on $\mathcal{M}$ and projecting to the closest points on $\mathcal{N}$. In our approach, an embedding PDE is formulated in the embedding space using closest point representations of $\mathcal{M}$ and $\mathcal{N}$. This enables the use of standard Cartesian numerics for general manifolds that are open or closed, with or without orientation, and of any codimension. An algorithm is presented for the important example of harmonic maps and is generalized to a broader class of PDEs. Improved efficiency and robustness are observed in convergence studies relative to level set embedding methods. Harmonic and $p$-harmonic maps are computed for a variety of numerical examples. In these examples, we denoise texture maps, diffuse random maps between general manifolds, and enhance colour images.
keywords:
Variational problems, partial differential equations, manifold mapping, the closest point method, harmonic maps, colour image enhancement
1 Introduction
The need to compute maps from a source manifold $\mathcal{M}$ to a target manifold $\mathcal{N}$ is present in numerous fields, e.g., mathematical physics, image processing, computer vision, and medical imaging. In mathematical physics these types of maps occur in the study of liquid crystals (1), micromagnetic materials (2), biomembranes (3), and superconductors (4). Applications in image processing and computer vision include colour image enhancement (5), directional diffusion (6); (7), and texture mapping (8). The field of medical imaging contains applications such as brain image regularization (9), optic nerve head mapping (10), and brain mapping (11); (12).
This paper introduces a numerical framework for solving variational problems and PDEs that define maps from a source manifold $\mathcal{M}$ to a target manifold $\mathcal{N}$. Our primary concern is the development of numerical methods for PDEs derived from variational problems, i.e., the Euler–Lagrange equations. However, our approach also applies to more general PDEs.
Intuition for numerical methods for manifold mapping problems can be gained from methods for unconstrained PDEs on manifolds. A PDE defined on a single manifold $\mathcal{M}$ is the special case in which the solution is not constrained to a target manifold $\mathcal{N}$. One class of methods for such problems uses a smooth coordinate system or parameterization of the manifold. In general, however, a substantial complication of the surface PDE can arise and artificial singularities can be introduced by the coordinate system (13). A second approach solves the PDE on a triangulated representation of the manifold. There are numerous difficulties that can arise when using triangulations (14). In particular, there is no standard method for computing geometric primitives, e.g., tangents, normals, principal directions, and curvatures. The convergence of numerical methods on triangulated manifolds is also less understood compared to methods on Cartesian grids (15).
Another class of methods is the embedding methods, which embed the surface PDE and solve in a narrow band surrounding the manifold. The embedding PDE is constructed such that its solution, when restricted to the manifold, is the solution to the original surface PDE. An embedding method allows the use of standard Cartesian numerical methods when solving PDEs on complex surface geometries. Two main types of embedding methods have been developed: the level set method and the closest point method. Since these methods were developed for unconstrained PDEs on a manifold, we denote them by LSM and CPM, respectively. The LSM was introduced by Bertalmío, Cheng, Osher and Sapiro (16). It represents the manifold as the zero level set of a higher-dimensional function. The CPM was introduced by Ruuth and Merriman (17). It uses a closest point representation of the manifold.
An obvious limitation of the LSM is that open manifolds with boundaries, or objects of codimension-two or higher, do not have a direct level set representation. Another difficulty arises when computations are localized to a band around the manifold. The introduction of boundaries at the edge of the computational domain leads to the use of artificial boundary conditions, which can degrade the convergence order; see (15) for the case of diffusion problems. On the other hand, the boundary values for the CPM are obtained from the manifold. This enables the use of banded computations without degrading the order of the underlying discretization.
Less work has been done on numerical methods for PDEs that map from $\mathcal{M}$ to $\mathcal{N}$ than on numerical methods for unconstrained PDEs on a single manifold. Notably, most of the numerical methods that have been developed compute harmonic maps for specific $\mathcal{M}$ and/or $\mathcal{N}$. Numerical schemes of this type were first developed for the special case of the unit hypersphere. See, for example, (18); (19); (20); (21) for a number of algorithms that find stable solutions of harmonic maps onto the unit hypersphere. One of the first algorithms proven to converge in a continuous setting was introduced by Alouges in (22). The algorithm was later proven to converge in a finite element setting, with acute triangles, by Bartels (23). Finite element methods for more difficult problems have been developed, e.g., for $p$-harmonic maps (24) and the Landau–Lifschitz–Gilbert equation (25). A finite element method for more general target manifolds has also been introduced; see (26).
A different, parametric approach was taken by Vese and Osher (7) for $p$-harmonic maps onto the unit sphere. Their method successfully denoises colour images; however, it is restricted to this special target manifold.
The LSM was extended by Mémoli, Sapiro and Osher (27) to solve variational problems and PDEs that define maps from $\mathcal{M}$ to $\mathcal{N}$. This method will be denoted by LSMM throughout our paper. In a similar fashion, we extend the CPM to solve manifold mapping problems. Fundamental to our approach is the adoption of closest point representations of the source and target manifolds, $\mathcal{M}$ and $\mathcal{N}$. This leads to improved geometric flexibility, as well as a means to avoid the introduction of artificial boundary conditions in banded computations. Since the method handles problems that define maps between manifolds, it will be referred to as the closest point method for manifold mapping and will be denoted by CPMM.
The paper is organized as follows. We begin with a brief review of the original CPM for unconstrained PDEs on manifolds (Section 2). Section 3 introduces our numerical framework for variational problems and PDEs that define maps from $\mathcal{M}$ to $\mathcal{N}$, i.e., the CPMM. A comparison of the CPMM and the LSMM is given in Section 4. The behaviour and performance of our method are illustrated with numerical examples in Section 5. In that section, noisy texture maps onto different target manifolds are denoised. In addition, diffusion of a random map between general manifolds is shown and a method for colour image enhancement is illustrated. Section 6 gives conclusions and a discussion of possible future work.
2 The closest point method for unconstrained PDEs on manifolds
Our new algorithm is built on the explicit CPM (17). We begin with a review of this method. An alternative, based on implicit time-stepping, is also possible; see (28) for further details on that method and its implementation.
As the name suggests, the closest point method relies on a closest point representation of the manifold $\mathcal{S}$. Closest point representations are less restrictive than level set representations. A standard level set representation needs a well-defined inside and outside, which makes handling open manifolds and manifolds of codimension-two or higher more difficult. A closest point representation of the manifold $\mathcal{S}$ in the embedding space $\mathbb{R}^d$ assumes that for every $x \in \mathbb{R}^d$ there exists a point $\mathrm{cp}_{\mathcal{S}}(x)$. The point $\mathrm{cp}_{\mathcal{S}}(x)$ is the closest point on $\mathcal{S}$ to $x$ in Euclidean distance:
Definition 1.
Let $x$ be some point in the embedding space $\mathbb{R}^d$. Then,
$\mathrm{cp}_{\mathcal{S}}(x) = \underset{s \in \mathcal{S}}{\operatorname{argmin}} \, \|x - s\|_2$
is the closest point of $x$ to the manifold $\mathcal{S}$.
In general, the point $\mathrm{cp}_{\mathcal{S}}(x)$ may not be unique. However, for a smooth manifold $\mathcal{S}$ it is unique if $x$ is sufficiently close to $\mathcal{S}$ (28); (29). Near such a smooth manifold, the closest point function and the well-known signed distance function $d$ of $\mathcal{S}$ (30) are related via
(1) $\mathrm{cp}_{\mathcal{S}}(x) = x - d(x)\,\nabla d(x).$
The neighbourhood over which $\mathrm{cp}_{\mathcal{S}}(x)$ is unique depends on the geometry of $\mathcal{S}$, e.g., the size of its principal curvatures. Properties of the closest point function and calculus involving $\mathrm{cp}_{\mathcal{S}}$ have been investigated further by März and Macdonald (29). There they discuss the relationship between finitely smooth manifolds, finitely smooth functions on manifolds, and PDE order. The definition of the closest point function is also extended to involve non-Euclidean distance.
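As a concrete illustration of Definition 1 and relation (1), consider the unit circle $\mathcal{S}^1 \subset \mathbb{R}^2$, where both the closest point function and the signed distance function are available in closed form. The following Python sketch (the helper names are ours, not from this paper) checks that the two representations agree:

```python
import numpy as np

def cp_circle(x):
    """Closest point on the unit circle S^1 to x in R^2 (x nonzero)."""
    return x / np.linalg.norm(x)

def d_circle(x):
    """Signed distance to the unit circle: negative inside, positive outside."""
    return np.linalg.norm(x) - 1.0

def grad_d_circle(x):
    """Gradient of the signed distance function (the unit outward normal)."""
    return x / np.linalg.norm(x)

x = np.array([1.02, -1.36])                      # an arbitrary point off the circle
cp_direct = cp_circle(x)
cp_via_d = x - d_circle(x) * grad_d_circle(x)    # the relation cp = x - d * grad d
assert np.allclose(cp_direct, cp_via_d)
assert abs(np.linalg.norm(cp_direct) - 1.0) < 1e-12
```

Note that uniqueness fails exactly where the relation degenerates: at the origin, every point of the circle is equidistant from $x$.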
The CPM is an embedding method: it extends the problem defined on a manifold $\mathcal{S}$ to the embedding space $\mathbb{R}^d$ surrounding $\mathcal{S}$. The CPM relies on two principles and the extension of surface data to construct an embedding PDE defined on $\mathbb{R}^d$. Briefly, the intrinsic surface gradient and surface divergence operators are replaced by the standard Cartesian gradient and divergence operators via the following principles (17):
Principle 1.
Let $u$ be any function on $\mathbb{R}^d$ that is constant along normal directions of $\mathcal{S}$. Then, at the surface, intrinsic gradients are equivalent to standard gradients: $\nabla_{\mathcal{S}} u = \nabla u$.
Principle 2.
Let $v$ be any vector field on $\mathbb{R}^d$ that is tangent to $\mathcal{S}$ and tangent to all surfaces displaced by a fixed distance from $\mathcal{S}$. Then, at the surface, $\nabla_{\mathcal{S}} \cdot v = \nabla \cdot v$.
Higher order derivatives can be handled by combining Principles 1 and 2 with constant normal extensions of the surface data into the embedding space. Constant normal extensions of the data are referred to as closest point extensions, since they are implemented efficiently by composing surface data with the closest point function. That is, $u(\mathrm{cp}_{\mathcal{S}}(x))$ is the closest point extension of $u$ at the point $x$. To illustrate this idea, consider the Laplace–Beltrami operator $\Delta_{\mathcal{S}} = \nabla_{\mathcal{S}} \cdot \nabla_{\mathcal{S}}$. If $u$ is a function defined on $\mathcal{S}$, then $u(\mathrm{cp}_{\mathcal{S}})$ is constant along normal directions of $\mathcal{S}$, and therefore $\nabla_{\mathcal{S}} u = \nabla[u(\mathrm{cp}_{\mathcal{S}})]$ on $\mathcal{S}$ by Principle 1. Principle 2 then implies that $\Delta_{\mathcal{S}} u = \Delta[u(\mathrm{cp}_{\mathcal{S}})]$ on $\mathcal{S}$, since $\nabla[u(\mathrm{cp}_{\mathcal{S}})]$ is always tangent to the level sets of the distance function of $\mathcal{S}$. In this fashion, an embedding PDE is obtained that involves standard Cartesian derivatives and a closest point function.
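The replacement $\Delta_{\mathcal{S}} u = \Delta[u(\mathrm{cp}_{\mathcal{S}})]$ can be checked numerically. The sketch below (our own illustration) evaluates the standard Cartesian Laplacian of the closest point extension of $u = \cos\theta$ on the unit circle, where the Laplace–Beltrami operator gives $\Delta_{\mathcal{S}} u = -\cos\theta$:

```python
import numpy as np

def cp(x):
    """Closest point on the unit circle."""
    return x / np.linalg.norm(x)

def u_on_surface(p):
    """Surface data u = cos(theta), i.e. the first coordinate on the circle."""
    return p[0]

def u_ext(x):
    """Closest point extension u(cp(x)): constant along normals of the circle."""
    return u_on_surface(cp(x))

def laplacian(f, x, h=1e-3):
    """Second-order centred-difference Laplacian of a scalar f at x in R^2."""
    e = np.eye(2)
    return sum((f(x + h * e[i]) - 2.0 * f(x) + f(x - h * e[i])) / h**2
               for i in range(2))

theta = 0.7
p = np.array([np.cos(theta), np.sin(theta)])     # a point on the circle
lap_ext = laplacian(u_ext, p)
# On S^1, Delta_S cos(theta) = -cos(theta)
assert abs(lap_ext + np.cos(theta)) < 1e-4
```

The standard 2D Laplacian, applied to the extended data and evaluated on the surface, reproduces the intrinsic operator without any parameterization of the circle.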
The following steps detail the explicit CPM to solve PDEs on manifolds. First, a narrow banded computational domain $\Omega \subset \mathbb{R}^d$ surrounding $\mathcal{S}$ is chosen, and the initial surface data is extended onto $\Omega$ using the closest point extension. The following two steps are then alternated to obtain the explicit CPM:

1. Evolution. The embedding PDE is solved on $\Omega$ for one time step (or one stage of a Runge–Kutta method).

2. Closest point extension. The solution on $\mathcal{S}$ is extended to the computational domain by replacing $u(x)$ with $u(\mathrm{cp}_{\mathcal{S}}(x))$ for all $x \in \Omega$.
Note that the closest point extension defined in the second step involves interpolation. Interpolation is needed since $\mathrm{cp}_{\mathcal{S}}(x)$ is not necessarily a grid point in $\Omega$. The interpolation order depends on the derivative order and the differencing scheme order, and should be chosen large enough that the interpolation does not produce errors greater than those of the differencing scheme. Following (17), barycentric Lagrange interpolation is applied in a dimension-by-dimension fashion with polynomial degree $p$ in all our numerical examples.
For efficiency, computations should be localized to a banded region surrounding the manifold. In our algorithms, a uniform hypercube grid is constructed around $\mathcal{S}$ and an indexing array is used to access points within a Euclidean distance $\lambda$ from $\mathcal{S}$. The width of the computational band, $\lambda$, depends on the degree of the interpolating polynomial $p$, the differencing stencil, and the dimension $d$ of the embedding space. It is shown in (17) that for a second-order centred difference discretization of the Laplacian operator, $\lambda = \sqrt{(d-1)\left(\frac{p+1}{2}\right)^2 + \left(1 + \frac{p+1}{2}\right)^2}\,\Delta x$.
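The two alternating steps above can be realized in a compact, self-contained sketch for heat flow $u_t = \Delta_{\mathcal{S}} u$ on the unit circle, whose exact solution for initial data $\cos\theta$ is $e^{-t}\cos\theta$. For simplicity this sketch (ours, not code from this paper) uses a full square grid rather than a narrow band, and bicubic spline interpolation in place of barycentric Lagrange interpolation:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Uniform grid covering the unit circle, embedded in R^2
n_grid = 81
x = np.linspace(-2.0, 2.0, n_grid)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

# Closest points of all grid nodes on the unit circle
R = np.sqrt(X**2 + Y**2)
R[R == 0] = 1.0                   # guard the origin node; it is far from the circle
CPX, CPY = X / R, Y / R

# Initial surface data u = cos(theta), closest-point extended to the grid
U = CPX.copy()                    # cos(theta) equals the x-coordinate on S^1

dt = 0.1 * h**2
t_final = 0.05
for _ in range(int(round(t_final / dt))):
    # Evolution: one forward Euler step of the embedding heat equation u_t = Lap(u)
    L = np.zeros_like(U)
    L[1:-1, 1:-1] = (U[2:, 1:-1] + U[:-2, 1:-1] + U[1:-1, 2:]
                     + U[1:-1, :-2] - 4.0 * U[1:-1, 1:-1]) / h**2
    U = U + dt * L
    # Closest point extension: interpolate U at cp(x) for every grid node
    U = RectBivariateSpline(x, x, U, kx=3, ky=3).ev(CPX, CPY)

# Compare with the exact solution e^{-t} cos(theta) on the circle
theta = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
on_circle = RectBivariateSpline(x, x, U, kx=3, ky=3).ev(np.cos(theta), np.sin(theta))
err = np.max(np.abs(on_circle - np.exp(-t_final) * np.cos(theta)))
assert err < 1e-2
```

Only standard Cartesian finite differences appear in the evolution step; all surface geometry enters through the closest point function.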
3 Manifold mapping variational problems and PDEs
In this section, we introduce our framework for solving variational problems and PDEs that define maps from a source manifold $\mathcal{M}$ to a target manifold $\mathcal{N}$. For clarity, we introduce the method for the case of harmonic maps. Other maps may also be approximated using our approach. We conclude this section by detailing an algorithm for these more general maps, which includes harmonic maps as a special case.
3.1 Harmonic maps
Harmonic maps (31); (32); (33) are important in many applications such as texture mapping (8), regularization of brain images (9), and colour image enhancement (5). Considerable research on the theory of harmonic maps has also been carried out, starting with the work of Fuller (34) in 1954 and the more general theory by Eells and Sampson (35) in 1964. An important property of harmonic maps is their smoothness. They are also one of the simplest manifold mapping problems, whose study can provide insight into other mapping problems. Physically, a map $u$ is harmonic when $u$ corresponds to a membrane that is constrained to $\mathcal{N}$ in elastic equilibrium (36).
We now give the mathematical definition of a harmonic map between two Riemannian manifolds $\mathcal{M}$ and $\mathcal{N}$ (9); (27). Denote the signed distance functions of $\mathcal{M}$ and $\mathcal{N}$ as $d_{\mathcal{M}}$ and $d_{\mathcal{N}}$, respectively (30). The intrinsic Jacobian of $u$ is denoted by $\mathcal{J}_{\mathcal{M}} u$ and can be written in terms of the standard Jacobian $\mathcal{J} u$ as $\mathcal{J}_{\mathcal{M}} u = \mathcal{J} u \, \Pi_{T_x \mathcal{M}}$, where $\Pi_{T_x \mathcal{M}}$ is the projection operator onto the tangent space of $\mathcal{M}$.
Definition 2.
Harmonic maps are the critical points of the Dirichlet energy
(2) $E[u] = \dfrac{1}{2} \displaystyle\int_{\mathcal{M}} \left\| \mathcal{J}_{\mathcal{M}} u \right\|_F^2 \, d\mathcal{M},$
where $\|\cdot\|_F$ is the Frobenius norm and $d\mathcal{M}$ is the volume element of $\mathcal{M}$.
The map $u$ must be sufficiently smooth to ensure that $E[u]$ is well-defined. Furthermore, by the Nash embedding theorem (37); (38), any Riemannian manifold can be isometrically embedded in a higher-dimensional Euclidean space. Therefore, local coordinates on $\mathcal{M}$ and $\mathcal{N}$ can be written in terms of coordinates in the Euclidean spaces $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively. That is, one can write $u : \mathcal{M} \subset \mathbb{R}^m \to \mathbb{R}^n$ with the pointwise constraint $u(x) \in \mathcal{N}$ for any $x \in \mathcal{M}$.
Mémoli et al. (27) derived the Euler–Lagrange equations for (2) in terms of the level set representation of $\mathcal{N}$, under the assumption that $\mathcal{M}$ is flat and open. The same calculation is carried out by Moser (31) in terms of the closest point representation of $\mathcal{N}$. There, the closest point function is called the nearest point projection and is used to prove regularity results for harmonic maps (see Chapter 3 of (31)). The Euler–Lagrange equations corresponding to (2), assuming $\mathcal{M}$ is flat and open, are
(3) $\Delta u^i - \displaystyle\sum_{j,k=1}^{n} \left[ \mathcal{H}\!\left(\mathrm{cp}_{\mathcal{N}}^i\right)(u) \right]_{jk} \nabla u^j \cdot \nabla u^k = 0, \qquad i = 1, \ldots, n,$
where the notation $u = (u^1, \ldots, u^n)$ is used. The matrix $\mathcal{H}(\mathrm{cp}_{\mathcal{N}}^i)$ denotes the Hessian of the $i$-th component of $\mathrm{cp}_{\mathcal{N}}$, for $i = 1, \ldots, n$. To illustrate the process, Appendix A derives the Euler–Lagrange equations (3) for the important case where $\mathcal{M}$ is a flat, open subset of $\mathbb{R}^m$ and $\mathcal{N} = \mathcal{S}^2$. This corresponds to the application of liquid crystals (1).
A solution to (3) could be obtained by evolving the corresponding gradient descent flow to steady state (cf. (27); (31)). The gradient descent flow is a PDE that introduces an artificial time variable and evolves in the direction of maximal decrease of the energy. Numerically, one could discretize this gradient descent flow and evolve the solution until some long time $t = T$. A simpler approach to numerically approximate the harmonic map via a gradient descent flow is given next. We shall see that our approach has the further benefit of handling more general variational problems and PDEs, including harmonic maps as a special case.
3.2 The closest point method for manifold mapping
To design a numerical method we do not discretize (3). Instead, we write the Euler–Lagrange equations for (2) as $\Pi_{T_u \mathcal{N}}(\Delta_{\mathcal{M}} u) = 0$ (33), where $\Pi_{T_u \mathcal{N}}$ is the projection operator at the point $u(x)$ onto the tangent space of $\mathcal{N}$. The vector $\Pi_{T_u \mathcal{N}}(\Delta_{\mathcal{M}} u)$ is defined componentwise, i.e., $\Delta_{\mathcal{M}} u = (\Delta_{\mathcal{M}} u^1, \ldots, \Delta_{\mathcal{M}} u^n)$. The corresponding gradient descent flow is
(4) $\dfrac{\partial u}{\partial t} = \Pi_{T_u \mathcal{N}}\left( \Delta_{\mathcal{M}} u \right)$ in $\mathcal{M} \times (0, \infty)$, with $\dfrac{\partial u}{\partial \hat{n}} = 0$ on $\partial \mathcal{M} \times (0, \infty)$ and $u(x, 0) = u_0(x)$,
where $u_0$ is a given initial map. A justification for the homogeneous Neumann boundary conditions is given in Appendix A of (27).
To discretize (4), intrinsic geometric quantities are replaced by terms involving standard Cartesian coordinates and closest point functions. As we saw in Section 2, the term $\Delta_{\mathcal{M}} u$ can be replaced by $\Delta[u(\mathrm{cp}_{\mathcal{M}})]$ using the CPM. Furthermore, the projection operator equals the Jacobian of the closest point function, $\Pi_{T_y \mathcal{N}} = \mathcal{J}_{\mathrm{cp}_{\mathcal{N}}}(y)$ for $y \in \mathcal{N}$ (29); (31). Applying these replacements gives the embedding gradient descent flow
(5) $\dfrac{\partial u}{\partial t} = \mathcal{J}_{\mathrm{cp}_{\mathcal{N}}}(u)\, \Delta\left[ u(\mathrm{cp}_{\mathcal{M}}) \right].$
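The identity between the tangent-space projection and the Jacobian of the closest point function can be verified numerically. For the unit circle, $\mathrm{cp}(y) = y/\|y\|$ and the tangent-space projection at a point $y$ on the circle is $I - yy^T$; the sketch below (our own illustration) compares a finite-difference Jacobian of $\mathrm{cp}$ with this projection:

```python
import numpy as np

def cp(y):
    """Closest point on the unit circle."""
    return y / np.linalg.norm(y)

def jacobian(f, y, h=1e-6):
    """Central-difference Jacobian of f: R^2 -> R^2 at y."""
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2)
        e[j] = h
        J[:, j] = (f(y + e) - f(y - e)) / (2.0 * h)
    return J

y = np.array([np.cos(0.3), np.sin(0.3)])      # a point on the unit circle
J_cp = jacobian(cp, y)
P_tangent = np.eye(2) - np.outer(y, y)        # projection onto the tangent space at y
assert np.allclose(J_cp, P_tangent, atol=1e-8)
```

Off the circle the Jacobian of $\mathrm{cp}$ also contains a curvature-dependent scaling; the identity with the projection holds on the manifold itself.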
New identities may be required to formulate an embedding PDE for more general variational problems and PDEs. However, the general procedure is the same in all cases: rewrite geometric quantities intrinsic to $\mathcal{M}$ and $\mathcal{N}$ in terms of $\mathrm{cp}_{\mathcal{M}}$ and $\mathrm{cp}_{\mathcal{N}}$, respectively.
The closest point function $\mathrm{cp}_{\mathcal{N}}$ is itself a projection operator onto $\mathcal{N}$. By splitting the evolution of (5) into two steps we can eliminate the computation of $\mathcal{J}_{\mathrm{cp}_{\mathcal{N}}}$ and further simplify the numerical method. More generally, a splitting can be formulated for any PDE with intrinsic geometric terms on $\mathcal{M}$ that are projected onto the tangent space of $\mathcal{N}$, e.g., PDEs of the form
(6) $\dfrac{\partial u}{\partial t} = \Pi_{T_u \mathcal{N}}\left( \mathcal{F}\left(x, u, \nabla_{\mathcal{M}} u, \Delta_{\mathcal{M}} u, \ldots\right) \right),$ where $\mathcal{F}$ collects the intrinsic geometric terms on $\mathcal{M}$.
To solve (6), we first evolve an embedding PDE on the computational domain $\Omega$,
$\dfrac{\partial \tilde{u}}{\partial t} = \mathcal{F}\left(x, u(\mathrm{cp}_{\mathcal{M}}), \nabla[u(\mathrm{cp}_{\mathcal{M}})], \Delta[u(\mathrm{cp}_{\mathcal{M}})], \ldots\right),$
for one time step of size $\Delta t$ to give $\tilde{u}$ at each grid node $x \in \Omega$. We emphasize that this step omits the projection $\Pi_{T_u \mathcal{N}}$ (equivalently $\mathcal{J}_{\mathrm{cp}_{\mathcal{N}}}$) appearing in (6). The second step projects $\tilde{u}$ onto $\mathcal{N}$ via $\mathrm{cp}_{\mathcal{N}}$. The result, $\mathrm{cp}_{\mathcal{N}}(\tilde{u})$, approximates the solution to (6) at points $x \in \Omega$. Starting from $u(\cdot, t)$, the steps of the CPMM to advance from time $t$ to time $t + \Delta t$ are given explicitly by Algorithm 1 below.

1. Evolution. For $x \in \Omega$, solve
$\dfrac{\partial \tilde{u}}{\partial t} = \mathcal{F}\left(x, u(\mathrm{cp}_{\mathcal{M}}), \nabla[u(\mathrm{cp}_{\mathcal{M}})], \Delta[u(\mathrm{cp}_{\mathcal{M}})], \ldots\right)$
for one time step.

2. Closest point extension. Set $u(x, t + \Delta t) = \mathrm{cp}_{\mathcal{N}}\big(\tilde{u}(\mathrm{cp}_{\mathcal{M}}(x))\big)$ for all $x \in \Omega$.
In this paper, we apply forward Euler time-stepping in Step 1 of the CPMM; however, other explicit (17) or implicit (28) choices may be used. Note also that the homogeneous Neumann boundary conditions in (4) are imposed automatically by the CPM (17). Therefore, Step 1 of the CPMM does not involve direct implementation of boundary conditions when $\mathcal{M}$ is an open manifold (i.e., a manifold with boundaries).
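The two steps of Algorithm 1 can be sketched end-to-end for the harmonic map heat flow between circles. The following Python illustration (ours, with simplifications: both manifolds are the unit circle, a full grid replaces the band, and bicubic splines replace barycentric Lagrange interpolation) evolves a perturbed identity map $\mathcal{S}^1 \to \mathcal{S}^1$. For the circle target the flow reduces to phase diffusion, so the perturbation $0.3\sin(3\theta)$ should decay approximately like $e^{-9t}$:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Source M and target N are both the unit circle S^1 in R^2.
n_grid = 61
x = np.linspace(-2.0, 2.0, n_grid)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
R = np.sqrt(X**2 + Y**2)
R[R == 0] = 1.0                              # guard the (irrelevant) origin node
CPX, CPY = X / R, Y / R                      # cp_M at every grid node

# Initial map u0: a perturbed identity, extended to the grid via cp_M
theta = np.arctan2(CPY, CPX)
phi0 = theta + 0.3 * np.sin(3 * theta)
U = np.stack([np.cos(phi0), np.sin(phi0)])   # the two components of u in R^2

dt = 0.1 * h**2
t_final = 0.2
for _ in range(int(round(t_final / dt))):
    for k in range(2):
        # Step 1 (evolution): forward Euler heat step in the embedding space
        L = np.zeros_like(U[k])
        L[1:-1, 1:-1] = (U[k][2:, 1:-1] + U[k][:-2, 1:-1] + U[k][1:-1, 2:]
                         + U[k][1:-1, :-2] - 4.0 * U[k][1:-1, 1:-1]) / h**2
        U[k] = U[k] + dt * L
        # closest point extension on the source manifold M
        U[k] = RectBivariateSpline(x, x, U[k], kx=3, ky=3).ev(CPX, CPY)
    # Step 2: project values onto the target N (cp of the circle = normalize)
    norm = np.maximum(np.sqrt(U[0]**2 + U[1]**2), 1e-12)
    U = U / norm

# Near the circle, the phase perturbation should have decayed roughly like e^{-9t}
band = np.abs(np.sqrt(X**2 + Y**2) - 1.0) < 0.2
phase = np.arctan2(U[1], U[0])
dev = np.abs(np.angle(np.exp(1j * (phase - theta))))
dev_max = dev[band].max()                    # initial deviation was 0.3
assert dev_max < 0.1
```

Note that no Jacobian of $\mathrm{cp}_{\mathcal{N}}$ is ever formed: the splitting replaces the tangent-space projection by a plain closest point evaluation.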
In the harmonic mapping case (4), $\mathcal{F} = \Delta[u(\mathrm{cp}_{\mathcal{M}})]$. Another important special case is the $p$-harmonic maps. The extremizing functions of the energy
(7) $E_p[u] = \dfrac{1}{p} \displaystyle\int_{\mathcal{M}} \left\| \mathcal{J}_{\mathcal{M}} u \right\|_F^p \, d\mathcal{M},$
with $1 \le p < \infty$ and the pointwise constraint $u(x) \in \mathcal{N}$,
are called $p$-harmonic maps.
are called harmonic maps. The gradient descent flow for the energy (7) is (27),
(8) 
where the divergence of the matrix is defined as the divergence of each row of the matrix. Noting that we obtain the embedding form of (8)
(9) 
which can be evolved using the (Algorithm 1).
We conclude this subsection by showing the consistency of the CPMM applied to (6) in Theorem 1. The proof of Theorem 1 uses the following lemma, which is a specific case of Taylor's theorem (39) for normed vector spaces.
Lemma 1.
Let $V$ and $W$ be normed vector spaces, and let $A$ be an open subset of $V$. Suppose that $a \in A$ and $h \in V$ are such that the segment $[a, a + h] \subset A$. Let $f : A \to W$ be a twice-differentiable mapping whose Hessian, $\mathcal{H}(f)$, is finite on $[a, a + h]$. Then,
$f(a + h) = f(a) + \mathcal{J}f(a)\, h + \mathcal{O}\left(\|h\|^2\right).$
Theorem 1. For $x \in \mathcal{M}$, one step of the CPMM (Algorithm 1) is consistent with the PDE (6) as $\Delta t \to 0$.
Proof.
Let $x \in \mathcal{M}$ and $t \geq 0$, and let $U$ be a neighbourhood of $\mathcal{N}$ in which $\mathrm{cp}_{\mathcal{N}}$ is twice differentiable and its Hessian is finite. Using Lemma 1, we expand the solution in time and substitute to obtain
The numerical approximation at time $t + \Delta t$ can therefore be expressed as
Applying Lemma 1 to expand the closest point function $\mathrm{cp}_{\mathcal{N}}$ yields
where we have used the boundedness of the Hessian of $\mathrm{cp}_{\mathcal{N}}$. We apply this expansion and Principles 1 and 2 of the CPM to obtain
which holds for any $x \in \mathcal{M}$. Rearranging and taking the limit as $\Delta t \to 0$ gives the desired result.
∎
In summary, we may evolve (6) by alternating between a step of PDE evolution on $\mathcal{M}$ (via the CPM) and an evaluation of the closest point function $\mathrm{cp}_{\mathcal{N}}$. Properties of this closest point method for manifold mapping, the CPMM, are considered in some detail next. Particular attention will be paid to the performance of the algorithm relative to its closest algorithmic companion, the level set method for manifold mapping, the LSMM.
4 A comparison: the closest point and level set methods for manifold mapping
In this section we compare the closest point and level set methods for manifold mapping, i.e., the CPMM and the LSMM. The comparison is performed for the problem of computing harmonic maps $u : \mathcal{M} \to \mathcal{N}$. Both methods compute the harmonic map by numerically approximating the gradient descent flow
(10) $\dfrac{\partial u}{\partial t} = \Pi_{T_u \mathcal{N}}\left( \Delta_{\mathcal{M}} u \right)$
until steady state.
4.1 A discretization for harmonic maps
To begin, we select a uniform computational grid surrounding the manifold $\mathcal{M}$. Assume $\mathcal{M}$ and $\mathcal{N}$ are manifolds embedded in $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively. Then $\mathrm{cp}_{\mathcal{M}} : \mathbb{R}^m \to \mathbb{R}^m$ and $\mathrm{cp}_{\mathcal{N}} : \mathbb{R}^n \to \mathbb{R}^n$. For the discretization of (10), denote discrete point locations by $x_i$ and the approximate solution at time $t_k$ by $u^k_i \approx u(x_i, t_k)$.
We now compare and contrast the CPMM and the LSMM. The derivation of the CPMM defines an embedding PDE (5) from (10) using

- $\Pi_{T_y \mathcal{N}} = \mathcal{J}_{\mathrm{cp}_{\mathcal{N}}}(y)$ for any $y \in \mathcal{N}$,

- $\Delta_{\mathcal{M}} u = \Delta[u(\mathrm{cp}_{\mathcal{M}})]$ on $\mathcal{M}$.

By splitting the evolution of the embedding PDE and replacing the projection $\mathcal{J}_{\mathrm{cp}_{\mathcal{N}}}$ by a closest point evaluation, we obtain the CPMM. In contrast, the embedding PDE for the LSMM is derived using different properties based on the level set functions $\phi$ and $\psi$ representing $\mathcal{M}$ and $\mathcal{N}$, respectively. Specifically, the LSMM uses

- $\Pi_{T_y \mathcal{N}} = I - \nabla\psi(y)\, \nabla\psi(y)^T$ for any $y \in \mathcal{N}$,

- $\nabla_{\mathcal{M}} u = \Pi_{\nabla\phi} \nabla u$ on $\mathcal{M}$, where $\Pi_{\nabla\phi} = I - \dfrac{\nabla\phi\, \nabla\phi^T}{|\nabla\phi|^2}$.
Denote forward and backward discretizations of the gradient by $\nabla^+$ and $\nabla^-$, respectively. Further denote the discretization of the Laplacian as $\Delta^h$. Using forward Euler time-stepping, the CPMM (Algorithm 1) and the LSMM for (10) can be implemented as

CPMM:
(11) $u^{k+1} = \mathrm{cp}_{\mathcal{N}}\left( u^k(\mathrm{cp}_{\mathcal{M}}) + \Delta t\, \Delta^h \left[ u^k(\mathrm{cp}_{\mathcal{M}}) \right] \right),$
LSMM:
$u^{k+1} = u^k + \Delta t\, \Pi_{\nabla\psi(u^k)}\, \nabla^- \cdot \left( \Pi_{\nabla\phi} \nabla^+ u^k \right),$
where $\Pi_{\nabla\phi}$ and $\Pi_{\nabla\psi}$ are respectively discrete representations of the projections built from $\phi$ and $\psi$. Truncation errors in the LSMM can cause $u^{k+1}$ to leave the target manifold $\mathcal{N}$. Mémoli et al. (27) address this problem by projecting the solution back onto the target manifold after each time step. The LSMM implemented in practice is therefore
(12) $u^{k+1} = \Pi_{\mathcal{N}}\left( u^k + \Delta t\, \Pi_{\nabla\psi(u^k)}\, \nabla^- \cdot \left( \Pi_{\nabla\phi} \nabla^+ u^k \right) \right),$
where $\Pi_{\mathcal{N}}$ denotes pointwise projection onto $\mathcal{N}$.
4.2 Comparison of the methods
Seven different areas of comparison between the CPMM and the LSMM are considered: manifold generality, implementation difficulty, convergence, computational work, memory requirements, accuracy, and convergence rate. The CPMM is better in all these aspects, as will be discussed below.
The CPMM can handle more complex surface geometries. A closest point function can be defined for manifolds that are open or closed, with or without orientation, and of any codimension. A level set function is most natural for closed manifolds of codimension-one. It can be extended to more general manifolds, but this requires multiple level set functions, which complicates implementation and analysis (see, e.g., Section 3 of (27)).
The implementation of the CPMM is simpler since it does not form the projection matrix $\Pi_{\nabla\psi}$. Also, for the harmonic mapping problem, one can discretize the Laplacian directly instead of discretizing gradients and forming $\nabla^- \cdot (\Pi_{\nabla\phi} \nabla^+ u)$. A further reason the CPMM is easier to implement is that its closest point extension step involves only standard interpolation. The LSMM performs an "extension evolution" to extend surface data such that $\nabla u \cdot \nabla\phi = 0$. Typically, this step is carried out via a fast marching method or by evolving the gradient descent flow
(13) $\dfrac{\partial u}{\partial \tau} + \mathrm{sgn}(\phi)\, \dfrac{\nabla\phi}{|\nabla\phi|} \cdot \nabla u = 0$
to steady state. Note that, for some specific PDEs, the extension evolution for the is only required once to prepare the initial data (16). See also (15), where a modified projection matrix is introduced into the to yield a method that does not require any data reextension. In contrast, we emphasize that the requires a closest point extension at every time step.
For stationary surfaces, the extension evolution step for the LSMM can be computed efficiently using the closest point extension. Computational efficiency is prioritized over memory use in our numerical examples. Therefore, we use the closest point extension in the LSMM. In our implementation, the closest point extension is a small part of the overall computational cost of the LSMM. For example, it accounts for less than 1.4% of the cost in the computations described in Section 4.2.1.
The CPMM performs better with respect to convergence. Theoretically, when applied to the harmonic mapping problem, the LSMM leads to degenerate PDEs (15). This degeneracy can have an adverse effect on discretizations, and little is known about the convergence of such schemes (15). In contrast, the CPMM involves standard heat flow in its evolution step. This is discretized using standard Cartesian numerical methods in the embedding space. We further note that the CPMM has superior stability characteristics in practice.
Computational work per step is another natural area for comparison between the CPMM and the LSMM. Comparing (11) with the implemented version of the LSMM, (12), we need only consider the expressions within the outer projections. For the CPMM (11) we have a discretization of heat flow on the manifold $\mathcal{M}$ alone:
$\Delta^h \left[ u^k(\mathrm{cp}_{\mathcal{M}}) \right].$
The LSMM, however, is a discretization of the original PDE (10) between two manifolds:
$\Pi_{\nabla\psi(u^k)}\, \nabla^- \cdot \left( \Pi_{\nabla\phi} \nabla^+ u^k \right),$
which involves the obvious added work of constructing and applying the projection matrix $\Pi_{\nabla\psi}$. Moreover, two discretization matrices are applied, $\nabla^+$ and $\nabla^-$, instead of the one matrix $\Delta^h$. The LSMM also requires, in general, an extension evolution to be performed every few time steps (see (27)) to obtain stable results. This extension evolution step not only adds work, but is a further source of error. Finally, we note that the CPMM uses a relatively small, analytically defined computational band. This yields further computational savings over the LSMM; see Section 4.2.1 for further details.
The memory requirements per time step of the CPMM are less than those of the LSMM. Both the CPMM and the LSMM store the solution vector $u$, the closest point function $\mathrm{cp}_{\mathcal{M}}$, and a matrix that applies the closest point extension for $\mathcal{M}$. The CPMM stores one tridiagonal discretization matrix, while the LSMM stores two bidiagonal discretization matrices. In addition, the LSMM requires the storage of $\nabla\psi$ as well as the matrices used to form $\Pi_{\nabla\psi(u^k)}$ at each time step. This last group of matrices depends on how $\nabla\psi(u^k)$ is formed, e.g., via interpolation of $\nabla\psi$ (which is generally more efficient) or interpolation of $\psi$. Finally, we observe that all the matrices for the LSMM are larger than the CPMM matrices because the LSMM requires a larger computational band to obtain the expected convergence rate; see Section 4.2.1.
Accuracy and convergence rate are our final areas for comparison. In practical computations, embedding methods must localize computations to a band around the manifold. In the LSMM this leads to the imposition of artificial boundary conditions that degrade the accuracy and convergence rate (15). The CPMM instead obtains values at the boundary of the band from the manifold as part of the closest point extension step.
Accuracy, convergence and computational work in practice are also of interest; we compare these next in numerical experiments for the problem of computing identity maps.
4.2.1 Identity maps
The identity map, $u(x) = x$ with $\mathcal{N} = \mathcal{M}$, is a harmonic map (32). Identity maps are a natural choice for conducting convergence studies since the exact harmonic map is known. We now provide convergence studies of identity map computations for the unit sphere, an ellipsoid, and a torus.
In each case, an initial noisy map is evolved to steady state. To construct the noisy map, a normally distributed random map in $\mathbb{R}^n$ is added to points on $\mathcal{M}$. The points are then projected back onto $\mathcal{N} = \mathcal{M}$ using $\mathrm{cp}_{\mathcal{N}}$, i.e., $u_0(x) = \mathrm{cp}_{\mathcal{N}}\big(x + r(x)\big)$, where $r$ is the random map.
Each component of the random map $r$ is set equal to $\delta \cdot \texttt{randn}$, where randn is a Matlab command that returns a random scalar drawn from the standard normal distribution. The scalar $\delta$ is a constant that controls the size of the noise; a fixed value of $\delta$ is selected in the following convergence studies.
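The construction of the noisy initial map can be sketched as follows for $\mathcal{M} = \mathcal{N} = \mathcal{S}^2$. We use NumPy's `standard_normal` in place of Matlab's `randn`, and the noise amplitude `delta = 0.1` is an illustrative value of ours, not the value used in the paper's studies:

```python
import numpy as np

rng = np.random.default_rng(0)

def cp_sphere(y):
    """Closest point on the unit sphere S^2, applied along the last axis."""
    return y / np.linalg.norm(y, axis=-1, keepdims=True)

# Sample points on M = S^2; the exact (identity) map is u(x) = x
pts = cp_sphere(rng.standard_normal((100, 3)))

delta = 0.1                                           # illustrative noise amplitude
noisy = pts + delta * rng.standard_normal(pts.shape)  # add Gaussian noise in R^3
u0 = cp_sphere(noisy)                                 # project back onto the sphere

# Every value of the noisy map lies on the target manifold
assert np.allclose(np.linalg.norm(u0, axis=1), 1.0)
```

The projection step is essential: the perturbed points generally leave the sphere, and $\mathrm{cp}_{\mathcal{N}}$ restores the pointwise constraint $u_0(x) \in \mathcal{N}$.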
The addition of random noise produces different convergence rates in each experiment. Tables 1–3 therefore show convergence rates averaged over 96 realizations of the computation. Second-order centred finite differences were used for $\Delta^h$, and first-order differences were used for $\nabla^+$ and $\nabla^-$. The time step-size $\Delta t$ was proportional to $\Delta x^2$, where $\Delta x$ is the spatial step-size. The maximum Euclidean distance between $u(x_i)$ and the exact value $x_i$ over all nodes $x_i$ was used as the error.
The errors from the CPMM (11) and the LSMM (12) were computed at the final time. See Tables 1–3 for the results.
Table 1: Errors, computation times, and speedups (rows correspond to successive grid refinements).

Error (LSMM)   Error (CPMM)   Comp. time (LSMM)   Comp. time (CPMM)   Speedup
7.54e-02       5.28e-02       0.997 s             0.049 s             20.5
3.60e-02       2.66e-02       5.214 s             0.539 s             9.7
1.65e-02       1.39e-02       69.15 s             8.875 s             7.8
7.93e-03       6.74e-03       1091 s              143.4 s             7.6
4.06e-03       3.50e-03       18326 s             2458 s              7.5
Table 2: As in Table 1, for the second test manifold.

Error (LSMM)   Error (CPMM)   Comp. time (LSMM)   Comp. time (CPMM)   Speedup
7.30e-02       5.26e-02       2.857 s             0.216 s             13.2
3.37e-02       2.64e-02       17.45 s             2.231 s             7.8
1.61e-02       1.37e-02       191.7 s             33.85 s             5.7
7.91e-03       6.80e-03       2730 s              534.9 s             5.1
4.01e-03       3.39e-03       39922 s             8532 s              4.7
Table 3: As in Table 1, for the third test manifold.

Error (LSMM)   Error (CPMM)   Comp. time (LSMM)   Comp. time (CPMM)   Speedup
7.33e-02       5.57e-02       1.056 s             0.058 s             18.1
3.44e-02       2.71e-02       6.143 s             0.689 s             8.9
1.68e-02       1.38e-02       77.47 s             11.49 s             6.7
8.06e-03       6.90e-03       1303 s              220.6 s             5.9
4.07e-03       3.48e-03       23916 s             3134 s              7.6
With respect to accuracy we observe that the CPMM is the decisive winner. The convergence rates of the CPMM and the LSMM are similar for all the manifolds: Tables 1–3 confirm a first-order convergence rate for both the CPMM and the LSMM. However, note that the width of the computational band for the LSMM had to be three times that of the CPMM to obtain first-order convergence. If a smaller band width is used, the LSMM has poor convergence or does not converge at all. This large band width requirement is a contributing factor in the high computational work of the LSMM.
The computation time of each example is also given in Tables 1–3. As expected, the CPMM is faster than the LSMM in all experiments. The speedup roughly ranges from a factor of 5 to 20. Recall that our implementation of the LSMM uses the closest point extension as a replacement for the extension evolution (13), which further improves the efficiency of the LSMM over the original implementation (27). For convenience, the rightmost columns of Tables 1–3 give the speedup, (Comp. time using the LSMM)/(Comp. time using the CPMM).
5 Numerical Results
There are numerous areas of application for harmonic maps and general manifold mappings. Some of these, such as direct cortical mapping (11); (12), are interested in the values of the map $u$ itself. Other applications are primarily visual. In this section, we highlight the behaviour and performance of our method with three visual applications. Section 5.1 denoises texture maps following an idea from Mémoli et al. (27). Section 5.2 diffuses a random map between two general manifolds to a point. Finally, colour image enhancement via chromaticity diffusion (5) is performed in Section 5.3.
5.1 Diffusion of noisy texture maps
Our first numerical experiments diffuse noisy texture maps. Since texture maps give a means to visualize the map $u$, they are helpful for providing intuition about and insight into our algorithms.
To begin, a texture map is created using the ideas of Zigelman et al. (40). The map is inverted to yield a map $u$ from the planar image domain $\mathcal{M}$ to the manifold $\mathcal{N}$. A noisy map is created by adding a normally distributed random map to $u$. The sum of $u$ and the random map is generally not on $\mathcal{N}$, so this summation is followed by a projection step onto $\mathcal{N}$ via $\mathrm{cp}_{\mathcal{N}}$. This gives a noisy map $u_0$.