Measuring Similarity Between Curves on 2-Manifolds via Homotopy Area
Measuring the similarity of curves is a fundamental problem arising in many application fields. There has been considerable interest in several such measures, both in Euclidean space and in more general settings such as curves on Riemannian surfaces or curves in the plane minus a set of obstacles. However, so far, efficiently computable similarity measures for curves on general surfaces remain elusive. This paper aims at developing a natural curve similarity measure that can be easily extended and computed for curves on general orientable 2-manifolds. Specifically, we measure similarity between homotopic curves based on how hard it is to deform one curve into the other continuously, and define this “hardness” as the minimum possible surface area swept by a homotopy between the curves. We consider cases where curves are embedded in the plane or on a triangulated orientable surface of genus g, and we present efficient algorithms (which are either quadratic or near linear time, depending on the setting) for both cases.
Measuring curve similarity is a fundamental problem arising in many application fields, including graphics, computer vision, and geographic information systems. Traditionally, much research has been done on comparing curves embedded in the Euclidean space. However, in many cases it is natural to study curves embedded in a more general space, such as a terrain or a surface.
In this paper, we study the problem of measuring curve similarity on surfaces. Specifically, given two simple homotopic curves embedded on an orientable 2-manifold (including the plane), we measure their similarity by the minimum total area swept when deforming one curve to the other (the “area” of the homotopy between them), and we present efficient algorithms to compute this new measure.
Related work. From the perspective of computational geometry, the most widely studied similarity measure for curves is the Fréchet distance. Intuitively, imagine that a man and his dog are walking along two paths with a leash between them. The Fréchet distance between the two paths is the minimum leash length necessary for them to move from one end of the paths to the other without back-tracking. Since the Fréchet distance takes the “flow” of the curves into account, in many settings it is a better similarity measure for curves than alternatives such as the Hausdorff distance [5, 6].
Given two polygonal curves with a total of n edges, the Fréchet distance can be computed in O(n^2 log n) time. An Ω(n log n) lower bound for the decision problem in the algebraic computation tree model is known, and Alt has conjectured that the decision problem is 3SUM-hard. Recently, Buchin et al. showed that there is a real algebraic decision tree of sub-quadratic depth that solves the Fréchet decision problem, suggesting that perhaps this is not the case; they also gave an algorithm with a slightly improved (though still near-quadratic) running time. Very recently, Agarwal et al. presented a novel approach to compute the discrete version of the Fréchet distance between two polygonal curves in sub-quadratic time. This is the first algorithm for any variant of the Fréchet distance with a sub-quadratic running time for general curves. No other sub-quadratic algorithm, exact or approximate, is known for general curves, although sub-quadratic approximation algorithms for special families of curves are known [6, 7, 23].
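To make the leash intuition concrete, here is a short sketch of the discrete Fréchet distance (the variant considered by Agarwal et al.), computed by the classic quadratic dynamic program of Eiter and Mannila; the Python code and the function name are our own illustration, not code from any of the cited works.

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polygonal chains P and Q, given
    as tuples of (x, y) vertices. c(i, j) is the smallest leash length
    needed to advance the two walkers to vertices P[i] and Q[j]."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        # either walker (or both) advances; neither may back-track
        return max(min(c(i - 1, j), c(i, j - 1), c(i - 1, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)
```

The recursion fills an m-by-n table of "leash states", so the running time is O(mn) for chains with m and n vertices.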
While the Fréchet distance is a natural curve similarity measure, it is sensitive to outliers. Variants of it, such as the summed Fréchet distance and the partial Fréchet similarity, have been proposed [14, 15, 24], usually at the cost of further increasing the time complexity.
The problem of extending and computing the Fréchet distance in more general metric spaces has also received much attention. The geodesic distance between points is usually considered when the underlying domain is not Euclidean. For example, Maheshwari and Yi computed the geodesic Fréchet distance between two polygonal paths on a convex polytope in time polynomial in the complexities of the input paths and of the polytope. Raichel and Har-Peled considered approximating the weak Fréchet distance between simplicial complexes. The geodesic Fréchet distance between polygonal curves inside a simple polygon in the plane has also been studied [8, 19, 25].
Rather than comparing only two curves, Buchin et al. propose the concept of a median in a group of curves (or trajectories, in their setting). They give two algorithms to compute such a median. The first is based simply on the idea of remaining in the middle of the set of curves; this algorithm, while fast and simple, has a drawback in that the representative curve might not capture relevant features shared by a majority of the input curves. Their second algorithm addresses this issue by instead isolating a subset of relevant curves which share the same homotopy type with respect to obstacles placed in empty regions of the plane; it then computes a median curve from this relevant subset.
One issue with generalizing the Fréchet distance directly to surfaces is that the underlying topology is not taken into account; for example, in the geodesic Fréchet distance, while the length of the leash varies continuously, the actual leash itself need not. As a result, several measures of similarity have been proposed which take the underlying topology into account. Chambers et al. proposed the so-called homotopic Fréchet distance and gave a polynomial-time (although not efficient) algorithm for the case when the curves reside in a planar domain with a set of polygonal obstacles. The extra requirement for the homotopic Fréchet distance is that the leash itself, and not just its length, has to vary in a continuous manner, essentially restricting the homotopy class of the leash. A stronger variant called the isotopic Fréchet distance has also been proposed and investigated, although no algorithms are known to even approximate this distance.
Orthogonal to the homotopic Fréchet distance is the concept of the height of a homotopy; instead of minimizing the maximum leash length, this measure views the homotopy as tracing a way for the first curve to deform to the second, where the goal is to minimize the length of the longest intermediate curve. Introduced independently in two very different contexts [10, 18], it is not even known whether computing the homotopy height is in NP.
Recent work on approximating the homotopy height and the homotopic Fréchet distance has yielded efficient approximation algorithms for both of these problems. However, exact algorithms on surfaces for either problem are still unknown.
New work. In this paper, we develop a natural similarity measure for curves on general surfaces that can be computed both quickly and exactly. Intuitively, we measure distances between homotopic curves based on how hard it is to deform one curve into the other, and define this “hardness” as the minimum total surface area swept by a homotopy between them, which we call the optimal homotopy area. Our similarity measure is natural, and robust against noise (as the area in a sense captures the average, instead of the maximum, deviation from one curve to the other). To the best of our knowledge, this is the first similarity measure for curves on general surfaces with efficient polynomial-time algorithms to compute it exactly.
It is worth noting that this definition in a way combines the homotopic Fréchet distance with the homotopy height; those measures compute the “width” and “height” of the homotopy, while our measure calculates the total area. It is thus interesting that while no exact algorithms are known for either of those measures on surfaces, we are able to provide a polynomial-time algorithm for computing the optimal homotopy area.
We consider both the case where the curves are embedded in the plane and the case where they lie on a closed, triangulated orientable surface of genus g. For the former, our algorithm runs in time near-quadratic in I, where n is the total complexity of the input curves and I is the number of intersections between them. On a surface, if the input is a triangulation of complexity N, then our algorithm runs in time polynomial in N and I. While our similarity measure is more expensive to compute than the Fréchet distance for curves in the plane when the number of intersections is large, one major advantage is that this measure can be computed efficiently on general orientable surfaces. In fact, the ideas and algorithms behind the planar case form the foundation for handling the case of general surfaces.
The main ideas behind our approach are developed by examining properties of one natural class of homotopies, including a relation with the winding number of a closed curve. Specifically, the use of the winding number enables us to compute the optimal homotopy area efficiently in the plane, where the homotopy is restricted to be piecewise differentiable and regular. This forms the basis of our dynamic programming framework for computing similarity between curves in the plane. We also show how to build efficient data structures to keep the total cost of the dynamic program low.
For the case where the underlying surface is a topological sphere, we extend the winding number in a natural way and show how to adapt our planar algorithm without additional blow-up in the time complexity. For the case when the surface has non-zero genus, we must extend our algorithm to run efficiently in the universal cover (which is homeomorphic to the plane) by using only a small portion of it.
We remark that the idea of measuring deformation areas has been used before in practice [20, 30]. For example, similarity between two convex polygons can be measured by their symmetric difference [3, 36]; we note that this is not equivalent to homotopy area, although the two values coincide in some situations. In another paper, the area sandwiched between an x-monotone curve and another curve is used to measure their similarity. However, computing the “area” between general curves has not been investigated prior to this work.
2 Definitions and Background
Paths and cycles. We will assume that we are working on an orientable 2-manifold M (which could be the plane). A curve (or a path) on a surface is a continuous map P : [0, 1] → M; a cycle (or a loop) is a continuous map C : S^1 → M, where S^1 is the unit circle. A curve or a cycle is simple if P(x) ≠ P(y) (resp. C(x) ≠ C(y)) for any x ≠ y.
Homotopy. A homotopy between two paths P and Q (with the same endpoints) is a continuous map H : [0, 1] × [0, 1] → M where H(0, t) = P(t), H(1, t) = Q(t), and H(s, 0) = P(0) = Q(0) and H(s, 1) = P(1) = Q(1) for all s. A homotopy describes a continuous deformation between the two paths or curves: for any value s ∈ [0, 1], we let H_s = H(s, ·) be the intermediate curve at time s, where H_0 = P and H_1 = Q.
We define the area of a homotopy H to be the total area covered by the image of the homotopy on the surface, where an area that is covered multiple times is counted with multiplicity. More precisely, given a homotopy H whose image is piecewise differentiable, Area(H) is the integral of the (unsigned) Jacobian of H over its parameter domain. The minimum homotopy area between P and Q is the infimum of the areas of all homotopies between them, denoted by σ(P, Q). If such an infimum exists and is achieved by a homotopy, we call that homotopy an optimal homotopy.
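Writing H(s, t) for the homotopy, with s the deformation time, one standard way to express this area-with-multiplicity and the resulting measure is (a reconstruction consistent with the definitions above, using σ(P, Q) for the minimum homotopy area):

```latex
\mathrm{Area}(H) \;=\; \int_0^1\!\!\int_0^1
  \left\lVert \frac{\partial H}{\partial s}(s,t) \times \frac{\partial H}{\partial t}(s,t) \right\rVert
  \, dt\, ds ,
\qquad
\sigma(P,Q) \;=\; \inf_{H \colon P \simeq Q} \mathrm{Area}(H).
```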
We note that it is not immediately clear that this value exists, depending on the curves and underlying homotopy. Minimum area homotopies were considered by Douglas and Radó in the context of Plateau’s problem; they noted that not only is the integral improper in general, but the infimum itself may not be continuous. The eventual proof that minimum area homotopies exist relies on a definition using Dirichlet integrals, which ensures (almost) conformal parameterizations of the homotopy. See the book by Lawson for an overview of this result as well as several extensions to minimal area submanifolds in more general settings.
However, beyond a proof of existence, we are interested in computing such homotopies, or at least measuring their actual area, in much simpler settings such as the plane or a surface. To this end, we restrict the input curves to be simple curves consisting of a finite number of piecewise analytic components. We also need H to be continuous and piecewise differentiable, so that the integral can be defined. Finally, we require that at any time s, the intermediate curve H_s is regular (in the usual sense, both for smooth and for piecewise-linear curves). Intuitively, this means that the deformation is “kink”-free, and cannot create or destroy a local loop as shown in the right figure (the singular point in the right curve is a kink). Note that this is required for the minimum homotopy to even exist; again we refer the reader to the book by Lawson for details.
Decomposing arrangements. Consider two simple piecewise analytic curves P and Q with the same endpoints. Their concatenation forms a (not necessarily simple) closed curve denoted by γ = P · rev(Q), where rev(Q) is the reversal of Q. Let A(γ) denote the arrangement formed by γ, where vertices in A(γ) are the intersection points between P and Q. An edge / arc in A(γ) is a subcurve of either P or Q.
We give γ (and thus P and Q) an arbitrary orientation. Hence we can talk about sidedness with respect to P at a point x ∈ P. Specifically, a point y is to the right of P at x if it is a counter-clockwise turn from the vector from x to y to the orientation of (the tangent of) P at x (see the right figure for an example). Given two oriented curves P and Q, an intersection point x of them is positive if it is a counter-clockwise turn from the orientation of P to that of Q at x. For a curve P and a point x ∈ P, the index of x is the parameter of x under the arc-length parameterization of P. We sometimes use x to represent its index along P when its meaning is clear from the context. Given two points x, y ∈ P, we will use P[x, y] to denote the unique sub-curve of P between points x and y.
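The sidedness and intersection-sign tests above both reduce to a single orientation (cross-product) predicate; the following minimal Python sketch, our own illustration rather than code from the paper, makes this explicit.

```python
def orient(a, b, p):
    """Sign of the cross product (b - a) x (p - a):
    +1 if p lies to the left of the directed line a -> b,
    -1 if p lies to the right, and 0 if the three points are collinear."""
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return (cross > 0) - (cross < 0)

# A point p is to the right of a curve at x when orient(x, x_plus_tangent, p) == -1;
# an intersection of two oriented curves is positive when the tangent of the
# second is a counter-clockwise turn from the tangent of the first.
```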
We say that a homotopy H from P to Q is right sense-preserving if, for any s, each point H(s, t) either stays fixed or moves to the right of the oriented intermediate curve H_s at H(s, t); in the former case, we say that H(s, t) is a fixed point at time s. Similarly, we say that H is left sense-preserving if every point is either a fixed point or deforms to the left of the curve H_s. A homotopy is sense-preserving if it is either right or left sense-preserving. The sense-preserving property means that we can continuously deform the curve always in the same direction, without causing local folds in the regions swept. Intuitively, any optimal homotopy should have this property to some extent, which we will make more precise and prove later.
3 Structure of Optimal Homotopies
Given two simple curves P and Q (with the same endpoints) embedded on an orientable 2-manifold M, let X = P ∩ Q denote the set of intersection points between them, sorted by their order along P. Given a homotopy H from P to Q, a point is called an anchor point with respect to H if it remains fixed at all times s. Of course not all intersection points are anchor points. However, if a point is an anchor point, then it is necessarily an intersection point between P and Q, as H_0 = P and H_1 = Q. We exclude the shared beginning and ending endpoints of P and Q from the list of anchor points, as they remain fixed for all homotopies. In what follows, we show that any optimal homotopy can be decomposed by anchor points such that each of the resulting smaller homotopies has a simple structure.
Specifically, consider an arbitrary optimal homotopy H. Let a_1, …, a_m be the set of anchor points with respect to H, the minimum area homotopy, ordered by their indices along P. It turns out that the order of their indices along Q is the same, and the proof of this simple observation is in Appendix A.
Observation 3.1. The orders of the a_i’s along P and along Q are the same.
This observation implies that we can decompose H into a list of sub-homotopies H_1, …, H_{m+1}, where H_i morphs P[a_{i−1}, a_i] to Q[a_{i−1}, a_i]. Obviously, each H_i is necessarily optimal, and it induces no anchor points. The following result states that an optimal homotopy without anchor points has a simple structure: it is sense-preserving. Intuitively, if any point changes its deformation direction at any moment, the deformation will sweep across some area redundantly and thus cannot be optimal. The detailed proof is in Appendix B.
Lemma 3.2. If an optimal homotopy from P to Q has no anchor points, then it is sense-preserving.
4 Minimum Area Homotopies In The Plane
In this section, we consider the case where the input consists of two simple polygonal curves P and Q in the plane. We develop an algorithm to compute the similarity between P and Q in time near-quadratic in I, where n is the total complexity of the input curves and I is the number of intersections. Note that I = O(n^2) in the worst case, although of course it may be much smaller in some cases. Although efficient algorithms for comparing curves in the plane exist (such as the Fréchet distance), our planar algorithm will be the fundamental component for comparing curves on general surfaces in the next section. It turns out that our approach can easily be extended to measure similarity between simple cycles in the plane; see Appendix D.
4.1 Relations to Winding Numbers
We are given two simple (open) curves P and Q in the plane which share common endpoints. Previously, we have shown that if an optimal homotopy does not induce anchor points, then it is sense-preserving. The implication of this result is manifested by using the winding number, defined for a loop γ in the plane at a base point x not on γ.
Intuitively, imagine starting from a point p on γ, and connecting x and p by a string. The winding number at x w.r.t. γ, denoted by wn(x; γ), is an integer measuring how many times this string winds around x as p traverses γ. More formally, consider an infinite ray based at x which is generic (so it has a finite set of transversal intersections / crossings with γ). A crossing is positive if the frame formed by the direction of the ray and the direction of γ at the crossing point is oriented counterclockwise, and negative if it is oriented clockwise. Then wn(x; γ) is the number of positive crossings minus the number of negative crossings, with respect to any generic ray from x.
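For a polygonal loop, this crossing-count definition can be implemented directly; below is a short Python sketch of our own, using a rightward horizontal ray and the standard upward/downward crossing rule.

```python
def winding_number(x, poly):
    """Winding number of the closed polygonal curve `poly` (a list of
    (x, y) vertices) around the point `x`, via a rightward horizontal ray:
    upward edges crossing the ray with the point to their left count +1,
    downward edges with the point to their right count -1."""
    px, py = x
    wn = 0
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        # orientation test: is the point to the left of the directed edge?
        left = (x2 - x1) * (py - y1) - (px - x1) * (y2 - y1)
        if y1 <= py < y2 and left > 0:    # upward crossing, point to the left
            wn += 1
        elif y2 <= py < y1 and left < 0:  # downward crossing, point to the right
            wn -= 1
    return wn
```

On a counterclockwise unit square the result is +1 inside and 0 outside; reversing the orientation flips the sign.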
We say an oriented curve γ has consistent winding numbers if wn(x; γ) is either non-negative for all x, or non-positive for all x. Note that for a curve with consistent winding numbers, we can always orient the curve appropriately so that all winding numbers are non-negative. Two examples are shown in the figure on the right, where the second example has consistent winding numbers. Let A(γ) denote the arrangement formed by the curve γ. All points in the same cell of this arrangement have the same winding number, and the winding numbers of two neighboring cells differ by 1. The relation between consistent winding numbers and sense-preserving homotopies is given below; the proof can be found in the appendix.
Lemma 4.1. If there is a sense-preserving homotopy from P to Q, then the closed curve γ = P · rev(Q) has consistent winding numbers.
Proof: First assume that the map H is left sense-preserving, always deforming an intermediate curve to its left. For a point x, consider the time-varying function f_s(x), the winding number at x with respect to the closed curve formed by the intermediate curve H_s and Q. Obviously, f_0(x) = wn(x; γ), and f_1(x) = 0. During the deformation, f_s(x) changes by either +1 or −1 whenever an intermediate curve sweeps over x. Since the homotopy is left sense-preserving, when an intermediate curve sweeps over x, the point x always moves from the left side of the intermediate curve to its right side. Hence f_s(x) decreases monotonically in s. Since in the end the winding number at each point is zero, wn(x; γ) = f_0(x) ≥ 0 for all x.
If the map H is right sense-preserving, then a symmetric argument shows that wn(x; γ) ≤ 0 for all x. ∎
Next, we describe two results that connect the above lemma to the computation of the optimal homotopy. First, we define the total winding number TW(γ) of a closed curve γ as the integral of |wn(x; γ)| over the plane with respect to the area form. (Note that this allows us to use any Riemannian metric on the plane, including the standard Euclidean metric. This will be necessary later when we use the same algorithm for curves in a universal covering space.) The following observation is straightforward.
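A useful computational fact, given here as our own illustrative sketch: the shoelace formula applied to a self-intersecting closed polygon returns the signed sum, over all cells of its arrangement, of the cell area weighted by the cell's winding number. When the curve has consistent winding numbers, the absolute value of this signed sum therefore equals the total winding number TW.

```python
def signed_area(poly):
    """Shoelace sum of a closed polygonal curve, given as a list of (x, y)
    vertices: equals the sum over all arrangement cells of
    (winding number of cell) * (area of cell)."""
    s = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return s / 2.0
```

For the unit square the sum is 1; a square traversed twice yields 2 (one cell with winding number 2); for a "figure-eight" whose two lobes have winding numbers +1 and −1 the contributions cancel, which is exactly the inconsistent situation in which TW must instead be assembled cell by cell.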
Observation 4.2. For any P and Q in the plane, σ(P, Q) ≥ TW(P · rev(Q)).
Proof: Take any regular homotopy H from P to Q. The area of a regular homotopy in our setting can be reformulated as an integral over the image domain, namely the integral of n(x),
where n(x) is defined as the number of connected components in the pre-image of x under H. In other words, n(x) is the number of times that any intermediate curve sweeps through x. Now consider the function w_x(s) defined as the winding number of x with respect to the closed curve formed by H_s and Q. Obviously, w_x(0) = wn(x; γ) and w_x(1) = 0. Furthermore, each time the winding number at x changes by 1 for some s, some intermediate curve sweeps through x. Hence |wn(x; γ)| is a lower bound for n(x). We thus have that
Area(H) ≥ TW(γ) for any regular homotopy H, implying that σ(P, Q) ≥ TW(γ). ∎
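In symbols, with n(x) the sweep multiplicity, wn(x; γ) the winding number for γ = P · rev(Q), and σ(P, Q) the minimum homotopy area, the argument chains the two bounds:

```latex
\mathrm{Area}(H) \;=\; \int_{\mathbb{R}^2} n(x)\, dA(x)
\;\ge\; \int_{\mathbb{R}^2} \bigl|\mathrm{wn}(x;\gamma)\bigr|\, dA(x)
\;=\; \mathrm{TW}(\gamma),
\qquad\text{hence}\qquad
\sigma(P,Q) \;=\; \inf_H \mathrm{Area}(H) \;\ge\; \mathrm{TW}(\gamma).
```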
Lemma 4.3. Given P and Q, if γ = P · rev(Q) has consistent winding numbers, then σ(P, Q) = TW(γ).
Proof: We provide a sketch of the proof here to illustrate the main idea; see the full version for the full proof. We prove the claim by induction on the number of intersections between P and Q. The base case is when there is no intersection between P and Q. In this case, γ is a Jordan curve which decomposes the plane into two regions, one inside and one outside. The optimal homotopy area in this case is the area of the bounded cell. All points in the bounded cell have winding number +1 (or −1) and the claim follows.
Now assume that the claim holds for all cases with at most k intersections; we next prove it for the case with k + 1 intersections. Let a P-arc (resp. Q-arc) denote a subcurve of the curve P (resp. Q). Consider the arrangement formed by γ. Since P and Q are simple, every cell in this arrangement has boundary edges alternating between P-arcs and Q-arcs. Assume without loss of generality that γ has all non-negative winding numbers. Consider a cell C with the largest (and thus positive) winding number. Since its winding number is greater than that of all its neighbors, it is necessary that all boundary arcs are oriented consistently as shown in Figure 1 (a), where the cell (shaded region) lies to the left of its boundary arcs.
If C has only two boundary arcs, one from P and one from Q, then we can morph P to another simple curve P′ by deforming its arc through C to the Q-arc as illustrated on the right. The area swept by this deformation is exactly the area of cell C. Furthermore, after the deformation, every point in C decreases its winding number by 1, and no other point changes its winding number. Since any point in this cell initially has strictly positive winding number, the resulting closed curve γ′ = P′ · rev(Q) still has all non-negative winding numbers. The number of intersections between P′ and Q is at most k. By the induction hypothesis, σ(P′, Q) = TW(γ′). Since TW(γ) = TW(γ′) + area(C), it then follows from Observation 4.2 that σ(P, Q) = TW(γ).
Otherwise, the cell C has more than one Q-arc. Take the Q-arc with the smallest index along Q, and let y be its ending point. Let e be the next Q-arc along the boundary of C, x its starting point, and a the P-arc between y and x, as denoted in Figure 1. The arc a and the subcurve Q[y, x] bound a simple polygon, which we denote by R. Since Q does not intersect itself, either R is on the opposite side of the P-arc a from the interior of C (Figure 1 (b)), or they are on the same side (Figure 1 (d)). It turns out that in both cases, we can deform Q to another simple curve Q′ such that (i) the number of intersections is reduced by at least 1, and (ii) the resulting closed curve still has consistent winding numbers. For example, in the case of Figure 1 (b), Q is deformed to sweep the region R as shown in Figure 1 (c). By applying the induction hypothesis to P and Q′, we are able to obtain that σ(P, Q) = TW(γ). The details are in Appendix C.
4.2 The Algorithm
Lemmas 3.2 and 4.1 imply that if the closed curve γ = P · rev(Q) produces both positive and negative winding numbers, then any optimal homotopy from P to Q must have at least one anchor point. On the other hand, if γ has consistent winding numbers, then by Lemma 4.3 we can compute the optimal cost to deform P to Q by simply computing the total winding number. This leads to a simple dynamic-programming (DP) approach to compute σ(P, Q).
Specifically, let x_0, x_1, …, x_I, x_{I+1} denote the intersection points between P and Q, ordered by their indices along P, where x_0 and x_{I+1} are the common beginning and ending points of P and Q, respectively. Let D(i) be the cost of the optimal homotopy between P[x_0, x_i] and Q[x_0, x_i], and let γ(i, j) denote the closed curve formed by P[x_i, x_j] · rev(Q[x_i, x_j]). We say that a pair of indices (i, j) is valid if (1) x_i and x_j have the same order along P and along Q; and (2) the closed curve γ(i, j) has consistent winding numbers. We have the following recursion: D(j) is the minimum of D(i) + TW(γ(i, j)) over all valid pairs (i, j) with i < j, with D(0) = 0, and the final answer is σ(P, Q) = D(I + 1).
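The recursion can be sketched as follows; this is our own Python illustration, where `valid` and `tw` stand in as hypothetical oracles for the validity test and the total-winding-number computation developed in Section 4.3.

```python
import math

def min_homotopy_area(I, valid, tw):
    """DP over intersection indices 0, 1, ..., I+1, where index 0 and
    index I+1 are the shared endpoints of the two curves.

    valid(i, j) -> bool : do x_i, x_j appear in the same order on both
                          curves, and does gamma(i, j) have consistent
                          winding numbers?
    tw(i, j)    -> float: total winding number TW(gamma(i, j)).
    """
    last = I + 1
    D = [math.inf] * (last + 1)
    D[0] = 0.0
    for j in range(1, last + 1):
        for i in range(j):
            if D[i] < math.inf and valid(i, j):
                D[j] = min(D[j], D[i] + tw(i, j))
    return D[last]  # = sigma(P, Q)
```

With logarithmic-time oracles, this double loop dominates the running time with O(I^2) oracle calls.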
4.3 Time Complexity Analysis
The main components of the DP framework described above are (i) computing TW(γ(i, j)) for all pairs (i, j), and (ii) checking whether each pair (i, j) is valid or not. Both can be done in a straightforward manner, but at a higher cost than necessary. We now show how to compute this information more efficiently after pre-processing: specifically, for a fixed i, we compute it for all indices j > i in near-linear total time.
To simplify the description of the algorithm, we extend Q on both sides to infinity and obtain Q′. Now collect all intersection points between P and Q′, which form a super-set of the previous intersection points, and sort them by their order along the curve Q′. The algorithm can be made to work with Q directly, but using Q′ makes the intuition behind our algorithm, as well as its description, much clearer.
Note that Q′ divides the plane into two half-planes. For illustration purposes, we will draw Q′ as a horizontal line, and use the upper and lower half-planes to refer to the two sides of Q′. Another way to see that regarding Q′ as a horizontal line causes no loss of generality is that one can always find a homeomorphism of the plane to itself under which the image of Q′ is a horizontal line.
Now for a fixed integer i, we traverse P starting from x_i. We aim to maintain appropriate data structures so that each time we pass through an intersection point x_j with Q′, we can, in logarithmic time, (1) check whether (i, j) is valid, and (2) obtain the total winding number TW(γ(i, j)).
Total winding numbers. We first explain how to maintain the total winding number for the closed curve γ(i, j) as j increases. Assume j changes from j − 1 to j. Since x_{j−1} and x_j are two consecutive intersection points along P, the arcs P[x_{j−1}, x_j] and Q′[x_{j−1}, x_j] form a simple closed polygon which we denote by D_j (shaded region in Figure 2). Comparing the arrangement of γ(i, j) with that of γ(i, j − 1), regardless of where x_i is, only points within D_j change their winding number, either all by +1 or all by −1, depending on whether D_j is to the right side or the left side of the P-arc P[x_{j−1}, x_j], respectively. The winding numbers for points outside D_j are not affected. Hence the change in the total winding number is simply c · area(D_j), where c is either +1 or −1. See Figure 2, where all points in D_j decrease their winding number by 1 as we move from x_{j−1} to x_j.
We can pre-compute the areas of the D_j’s for all j in near-linear time, by observing that the set of D_j’s satisfies the parentheses property: namely, either D_j and D_k are disjoint in their interiors, or one contains the other.
Specifically, we first compute the arrangement of P and Q′ and the areas of all its cells. Each D_j is the region bounded between a P-arc and a corresponding Q′-segment. Since no two P-arcs intersect, the containment relation between such P-arcs satisfies the parentheses property. In particular, we can use a collection of trees to represent the containment relation among all regions D_j. See Figure 3 for an illustration. The difference between the region represented at a parent node and the union of the regions represented by all its children is a cell in the arrangement; for example, the shaded cell in the right figure is the difference between a region and its two children. We can thus compute the areas of all D_j’s by a bottom-up traversal of these trees. Computing these trees requires sorting all intersection points with respect to their order along Q′; traversing the trees to compute all areas then takes time linear in the number of regions.
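Since the regions D_j nest like parentheses along Q′, their containment forest can be recovered with a single stack sweep. The following Python sketch is our own illustration, with intervals standing for the spans of the D_j’s along Q′.

```python
def containment_forest(intervals):
    """Given intervals (lo, hi) along Q' that pairwise either nest or are
    disjoint (the parentheses property), return parent[k]: the index of the
    smallest interval strictly containing interval k, or None for roots."""
    order = sorted(range(len(intervals)),
                   key=lambda k: (intervals[k][0], -intervals[k][1]))
    parent = [None] * len(intervals)
    stack = []  # indices of currently open (enclosing) intervals
    for k in order:
        lo, hi = intervals[k]
        while stack and intervals[stack[-1]][1] < lo:
            stack.pop()  # that interval ends before this one starts
        if stack:
            parent[k] = stack[-1]
        stack.append(k)
    return parent
```

A bottom-up pass over the resulting forest then yields all region areas, exactly as in the tree traversal described above.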
With the areas of all D_j’s known, updating the total winding number from γ(i, j − 1) to γ(i, j) takes only constant time.
Checking the validity of pairs (i, j). To check whether (i, j) is valid or not, we need to check whether all cells in the arrangement of γ(i, j) have consistent winding numbers. First observe that for any i and j, the arrangement of P and Q′ is a refinement of the arrangement of γ(i, j); that is, a cell in the former is always contained within some cell of the latter. Hence all points within the same cell of the refined arrangement always have the same winding number with respect to any γ(i, j), and we simply need one point from each such cell to maintain the winding numbers of all cells of the arrangement of γ(i, j), for any i and j. We now describe how to maintain these winding numbers as we pass each x_j, so that we can check efficiently whether γ(i, j) has consistent winding numbers or not.
To this end, take four points around each intersection point of P and Q′ (shown as stars in the right figure). The collection of such representative points hits all cells of the arrangement. (It does not matter if more than one point is taken from the same cell.) Hence γ(i, j) has consistent winding numbers if and only if these representative points have consistent winding numbers. Next, we build a data structure to maintain the winding numbers of these points as j increases. Specifically, let R be the set of representatives above Q′, shown as the stars above the line in the right figure. (Those below Q′ are handled in a symmetric manner.) Each point in R has a key associated with it, namely its index along Q′. We build a standard balanced 1-D range tree T on R based on such keys, where each leaf stores a point from R. Every internal node v is associated with an interval whose endpoints are the smallest and largest keys stored in the subtree rooted at v; in other words, all representatives with an index along Q′ within this interval are stored in the subtree rooted at v. At every node v, interior or not, we also store a value val(v). To compute the winding number of the representative point stored at a leaf, we identify the path from the root to that leaf; the winding number is simply the sum of val(v) over all nodes v on this path. Finally, each internal node also stores the maximum and minimum winding numbers associated with all leaves in its subtree. At the beginning, all winding numbers are zero. The size of this tree is linear in the number of representatives, its height is logarithmic, and it can be built in linear time once the arrangement is known.
Let idx(x) denote the index of a point x along Q′ (which can be viewed as its coordinate along the horizontal line). As we move from x_{j−1} to x_j, the cells contained in D_j either all increase or all decrease their winding number by 1. Note that the representatives of these cells are simply those whose keys lie in the interval [idx(x_{j−1}), idx(x_j)] (or [idx(x_j), idx(x_{j−1})] if idx(x_j) < idx(x_{j−1})). Hence updating the winding numbers amounts to an interval update on T: each node in the canonical decomposition of the interval updates its value by either +1 or −1, depending on the sidedness of D_j with respect to the P-arc P[x_{j−1}, x_j]. The minimum and maximum winding numbers can also be updated in constant time per visited node. The entire process visits a logarithmic number of nodes, and thus takes logarithmic time as j increases by one. To see whether γ(i, j) has consistent winding numbers or not, we only need to check the minimum and maximum winding numbers stored at the root of the tree, denoted by m and M, respectively. If min(m, 0) · max(M, 0) equals zero, then the winding numbers w.r.t. γ(i, j) are either all non-negative or all non-positive; otherwise, (i, j) is not valid.
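The range tree with values accumulated along root-to-leaf paths behaves exactly like a segment tree supporting range add and global min/max; the compact Python sketch below is our own illustration of this structure, not code from the paper.

```python
class WindingCounter:
    """Maintains integer winding numbers for n representative points,
    supporting 'add +1/-1 to every point with key in [l, r)' and querying
    the global minimum and maximum, both in O(log n) time."""
    def __init__(self, n):
        self.size = 1
        while self.size < max(n, 1):
            self.size *= 2
        self.add = [0] * (2 * self.size)  # value stored on the node itself
        self.mn = [0] * (2 * self.size)   # subtree minimum, including self.add
        self.mx = [0] * (2 * self.size)   # subtree maximum, including self.add

    def update(self, l, r, v, node=1, nl=0, nr=None):
        if nr is None:
            nr = self.size
        if r <= nl or nr <= l:
            return
        if l <= nl and nr <= r:  # node is in the canonical decomposition
            self.add[node] += v
            self.mn[node] += v
            self.mx[node] += v
            return
        mid = (nl + nr) // 2
        self.update(l, r, v, 2 * node, nl, mid)
        self.update(l, r, v, 2 * node + 1, mid, nr)
        self.mn[node] = self.add[node] + min(self.mn[2 * node], self.mn[2 * node + 1])
        self.mx[node] = self.add[node] + max(self.mx[2 * node], self.mx[2 * node + 1])

    def consistent(self):
        """True iff all winding numbers are >= 0 or all are <= 0."""
        return min(self.mn[1], 0) * max(self.mx[1], 0) == 0
```

Each interval update touches a logarithmic number of nodes, and the validity test reads only the root, matching the per-step cost claimed above.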
We repeat the above process for every i. Overall, after pre-processing, we can check whether (i, j) is valid and compute TW(γ(i, j)), for all i and all j, in logarithmic time per pair. Putting everything together, we have the following result.
Given two simple polygonal chains P and Q (with the same endpoints) in the plane, of total complexity n and with I intersection points between them, we can compute the optimal homotopy and its area in time near-quadratic in I, after pre-processing the arrangement of the curves, and in space polynomial in n and I.
The case where we have two simple cycles P and Q in the plane is discussed in Appendix D, where we obtain the following extension:
Given two polygonal cycles P and Q in the plane, of total complexity n and with I intersection points, we can compute the optimal homotopy and its area within time bounds comparable to the case of paths when the cycles intersect, and compute the optimal homotopy area when they do not.
5 Minimum Area Homotopies on 2-Manifolds
In this section, we consider the optimal homotopy between curves P and Q on an orientable and triangulated 2-manifold M without boundary. Our input is a triangulation K of M with complexity N, and two simple homotopic polygonal curves P and Q sharing endpoints. Edges in P and Q are necessarily edges from the triangulation K. The total complexity of P and Q is n, and there are I intersections between them. Note that in this setting, both n and I are O(N). Below we discuss separately the cases when M has non-zero genus and when M is a topological sphere.
5.1 Surfaces with Positive Genus
Given an orientable 2-manifold M, let M̃ be a universal covering space of M with π : M̃ → M the associated covering map. Note that π is continuous, surjective, and a local homeomorphism. (For full details on covering spaces, we refer the reader to topology textbooks that address this area; we will also build on existing algorithmic techniques developed for computing and working in the universal cover [21, 34].)
For any path in , if we fix the lift (pre-image) of its starting point, then it lifts to a unique path in , such that . Since and are homotopic with common endpoints, the closed curve formed by is contractible on , and the lift of , denoted by , is a closed curve in . By the Homotopy Lifting Property of the universal cover , we have:
Once we fix the lift of the starting point of and in , there is a one-to-one correspondence between homotopies between and in and those between and in .
We now impose an area measure on by lifting the area measure on ; this can be done via the map , which is a local homeomorphism. The area of a homotopy in is then the same as the area of its lift in . As such, we can convert the problem of finding an optimal homotopy in to finding one in . Furthermore, for any orientable compact 2-manifold of positive genus, the universal cover is topologically equivalent to the plane. Intuitively, this means that we can apply algorithms and results from the previous section to the universal covering space.
More specifically, our algorithm proceeds as follows:
Step 1: Compute the relevant portion of . We will construct a portion of a universal covering space made from a polygonal schema of [35, 21]. Specifically, we use the algorithm from to construct a reduced polygonal schema in time. The universal covering space consists of an infinite number of copies of this polygonal schema glued together appropriately. We call each copy of the polygonal schema in the constructed universal covering space a tile.
Recall that the universal covering space is homeomorphic to . We fix a lift of the starting endpoint of and in and obtain a specific lift and for and respectively. Since and are homotopic, and form a closed curve, denoted by . Note that the number of intersection points between and is at most , as every intersection point in the lift necessarily maps to an intersection point of and under , but not vice versa.
Consider the arrangement formed by in the planar domain . We will construct the portion of the universal covering space which is the union of tiles that intersect or are contained inside of .
From , we know that the lifted curve passes through tiles in . However, while the total number of tiles in the interior of is for the case where , it can be for the case where . Hence we treat the cases and separately, since we wish to avoid this overhead in the genus-1 case.
For the case , we use the algorithm by Dey and Schipper to compute the relevant portion of the universal covering space in time. The output contains all copies of the polygonal schema in , where each tile is represented by a reduced -gon without being explicitly filled with triangles from . However, recall that and are curves which follow edges of the triangulation; in this construction of the polygonal schema tiles, each edge of can be broken into pieces. So in the worst case, we must break each edge in or into pieces, and the total complexity of and in this representation of is . Once these are known, we can compute the combinatorial structure of the arrangement of in , as well as the description of the set of tiles each cell in intersects or contains, in time.
For the case , the input manifold is a torus, and its canonical polygonal schema is a rectangle with oriented boundary arcs . Imagine now that we give the base polygonal schema (the tile containing the lift of the starting point of and ) the coordinate ; we then assign a coordinate to every other copy of the polygon (as shown in Figure 4(a)). Specifically, a copy of the polygonal schema has coordinate if the closed loops whose lifts start in and end in have the same homotopy type as . We can obtain the sequence of rectangles (and their coordinates) that the curve passes through in time . Once these coordinates are known, the combinatorial structure of the arrangement of in can also be computed in time. Note that in this case we have avoided explicitly enumerating the set of tiles fully enclosed within (the shaded tiles in Figure 4(b)), whose number can be instead of when .
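The tile-coordinate bookkeeping for the genus-1 case can be sketched as follows. The crossing convention used here (crossing an a-arc moves one tile horizontally, a b-arc vertically, with the listed signs) is an illustrative assumption; only the bookkeeping matters.

```python
def tile_coordinates(crossings):
    """Given the sequence of signed schema-boundary crossings made by a curve
    on the torus, e.g. ('a', +1) for crossing the a-arc in its positive
    direction and ('b', -1) for crossing the b-arc backwards, return the
    coordinates of the tiles of the universal cover visited by the lifted
    curve, starting from the base tile (0, 0)."""
    x, y = 0, 0
    tiles = [(0, 0)]
    for arc, sign in crossings:
        if arc == 'a':
            x += sign   # crossing an a-arc moves one tile horizontally
        else:
            y += sign   # crossing a b-arc moves one tile vertically
        tiles.append((x, y))
    return tiles
```

As a sanity check, the commutator crossing sequence a, b, a-inverse, b-inverse returns to the base tile (0, 0), as it must for a contractible loop on the torus.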
Step 2: Area of cells in . In order to apply the algorithm introduced in Section 4 to the lifted curves and , in addition to the combinatorial structure of , we also need the area of each cell in . We first describe how to compute it for the case .
Take any cell in and assume its boundary intersects copies of the polygonal schema. Even though may contain copies of (rectangular) tiles in its interior, we do not need to enumerate these interior tiles explicitly to compute their total area.
Indeed, by a left-to-right scanning algorithm, we can compute in time how many tiles are completely contained inside (the heavily-shaded regions in Figure 4(b)); note that the coordinates of each tile traversed by the boundary of are known. Since the area of every copy of the polygonal schema is simply the total area of the input triangulation, we can compute the total area of the tiles contained inside in time.
Now let be the collection of tiles that intersect the boundary of . It remains to compute the total area of . Call each region in a sub-cell, for any tile . Let denote the boundary curves of the polygonal schema . There are two types of sub-cells: the essential ones which contain at least one intersection point between and as their vertices, and the non-essential ones which have no intersection; see Figure 4 (b) for examples. Note that a non-essential cell is bounded by arcs from alternating with -arcs from or , since there is no intersection of and along the boundary of a non-essential cell. (Here, a -arc refers to either a -arc or a -arc).
First let us consider the collection of non-essential sub-cells, formed by alternating -arcs (boundary arcs of a tile) and arcs from and , and compute the area of each such non-essential sub-cell. If we plot all the -arcs within a single tile , no two -arcs can intersect in this tile, since is a simple curve. Imagine that we pick an arbitrary but fixed point on the boundary of the polygonal schema as the origin . Each -arc subdivides into two regions; we let denote the canonical one excluding . Note that since is a simple curve, the set of canonical regions s for all -arcs must satisfy the parenthesis property, and these regions and their areas, called canonical areas, can be computed in time using a data structure similar to the one used in Section 4.3 to compute the area of s. See Figure 5 for an illustration. Similarly, we can put all -arcs within the same tile and compute the canonical regions and areas associated with all -arcs in time. Once these areas are known, the area of each non-essential sub-cell can be computed in time, where is the number of -arcs and -arcs on the boundary of this sub-cell: specifically, it is the difference between the canonical area of the top-most /-arc and the union of the canonical areas of all other /-arcs on the boundary of this cell. See Figure 5 (b). Hence the areas of all non-essential sub-cells can be computed in time once all s are known. The total time complexity required here is thus .
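The parenthesis property turns the area computation into simple stack bookkeeping. The following sketch works on hypothetical inputs (each arc given by its two endpoint positions along the schema boundary together with its precomputed canonical area, with arcs pairwise nested or disjoint) and returns, for each arc, its canonical area minus the canonical areas of the arcs directly nested inside it, i.e., the "outer arc minus inner arcs" difference described above:

```python
def subcell_areas(arcs):
    """arcs: list of (left, right, canonical_area) intervals along the schema
    boundary, pairwise nested or disjoint (the parenthesis property).
    Returns a dict mapping each arc index to the area of the region bounded
    by that arc and its directly nested arcs."""
    # Sort so that an enclosing arc is processed before the arcs it contains.
    order = sorted(range(len(arcs)), key=lambda i: (arcs[i][0], -arcs[i][1]))
    areas = {}
    stack = []  # indices of currently open (enclosing) arcs
    for i in order:
        l, r, a = arcs[i]
        while stack and arcs[stack[-1]][1] < l:
            stack.pop()             # that arc closed before arc i opens
        areas[i] = a
        if stack:
            areas[stack[-1]] -= a   # arc i is a direct child of the stack top
        stack.append(i)
    return areas
```

For example, an outer arc of canonical area 8 with two disjoint inner arcs of canonical areas 2 and 3 yields a sub-cell of area 3 for the outer arc.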
What remains is to compute the area of all essential sub-cells. Note that there are essential sub-cells, since each contains an intersection between and . Let a -arc refer to an arc that starts and ends at points on (the boundary of the polygonal schema ) and consists of alternating - and -arcs. An essential sub-cell is either completely contained within a polygonal schema, or its boundary consists of -arcs, -arcs, -arcs and -arcs, where no two such arcs can be consecutive: they are separated by -arcs. Now collect all -arcs and -arcs that are involved in the boundary arcs of those essential sub-cells completely contained within a tile. Plot them within the same tile and compute their arrangement , as well as the area of each cell in . This can be done in time. Since can have only vertices in the interior of the tile , contains cells. If an essential sub-cell is completely contained within a polygonal schema, then it is a union of a set of cells from : we can simply spend time to go through the cells in , identify those contained in , and return their total area. Hence it takes time to compute the area of all such essential sub-cells. If an essential sub-cell has -arcs on its boundary, then we handle it in a slightly more involved way.
Specifically, for the remaining essential sub-cells, there can be -arcs along their boundaries, denoted by . We collect all -arcs and -arcs involved in , plot them in the same tile, and compute their arrangement . Each -arc divides the tile into two regions, and we define to be the canonical one excluding a specific origin on , similar to before. consists of a union of cells from the arrangement , and we can compute the area of in time since has cells. Overall, in time, we can compute the areas of all s for all -arcs. Now take an essential sub-cell that has -, -, or -arcs along its boundary, denoted by . Let be the arc (which can be a -, -, or -arc) whose endpoints along span the largest interval. Then can be represented as , where is the canonical region defined by an arc . Since the areas of all canonical regions are known (for -arcs and -arcs, we computed their canonical areas before), the areas of the s can be computed in time. Computing the area of all remaining essential sub-cells thus takes time.
Putting everything together, the total time needed to compute the area of all cells in is when . The case when is similar but much simpler. Indeed, we now can afford to compute all the tiles contained within any cell of explicitly, as their total number is bounded by [21, 34]. The areas of essential and non-essential sub-cells are computed using the same algorithm as above. The total time complexity is .
Step 3: Putting everything together. With the combinatorial structure of and the area of each cell computed, we now apply the algorithm from Section 4.2 to compute the optimal homotopy in in time, which, by the one-to-one correspondence between homotopies stated above, gives the optimal homotopy between and in within the same time bound. The total time complexity for the entire algorithm is .
5.2 The Case of the Sphere
We now consider the remaining case where the input has , so is a (topological) sphere . All paths on with common endpoints are homotopic. The universal cover of a sphere is the sphere itself, and hence is compact. However, the algorithm of Section 4.2 works for a domain homeomorphic to the plane and cannot be applied directly. We now sketch how we handle the sphere case; missing details can be found in Appendix E.
We observe that the results in Section 3 still hold. However, as the sphere is compact, the winding number is no longer well-defined. For example, see Figure 6, where there are two ways in which the curve winds around the point : in the first case the winding number at is , while in the second case it is . In order to use the same dynamic programming framework as before to compute the optimal homotopy between and , we need to develop analogs of Lemmas 4.1 and 4.3 for curves on the sphere.
To this end, note that if we remove one point, say , from the sphere , then the resulting space is homeomorphic to the plane, and the concept of winding number is well defined for . Specifically, can be considered as the point at infinity in . The winding number of w.r.t. and , denoted by (with omitted when its choice is clear), is simply the sum of signed crossing numbers along any path connecting to . As in the planar case, we say that is consistent w.r.t. if is non-negative for all , or non-positive for all . Similar to before, we define the total winding number w.r.t. a base point as . Let denote the best cost to morph to within the domain .
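For intuition, the signed-crossing computation of a winding number in the punctured (planar) view can be sketched as follows, here for a polygonal cycle given by its vertices, using a horizontal rightward ray as the path to the point at infinity:

```python
def winding_number(poly, px, py):
    """Winding number of the closed polygonal curve `poly` (a list of (x, y)
    vertices) around the point (px, py), computed as the sum of signed
    crossings of a horizontal ray going right from the point: +1 for each
    upward edge crossing with the point to its left, -1 for each downward
    edge crossing with the point to its right."""
    wn = 0
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        # Sign of the cross product: > 0 iff the point lies to the left
        # of the directed edge (x1, y1) -> (x2, y2).
        side = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if y1 <= py < y2 and side > 0:      # upward crossing
            wn += 1
        elif y2 <= py < y1 and side < 0:    # downward crossing
            wn -= 1
    return wn
```

Reversing the orientation of the cycle negates the winding number, and points outside the cycle get winding number zero.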
If there is an optimal homotopy between and that does not sweep through some point , then we have .
Suppose is an optimal homotopy between and with no anchor points. For any cell in , if sweeps through one point in its interior, then it sweeps through all points in .
If there is an optimal homotopy of and with no anchor point, then the image of this optimal homotopy cannot cover all points in .
Given two homotopic paths and from sharing common endpoints, Lemma 5.4 and Observation 5.2 imply that if can be morphed to optimally without anchor points, then there exists some point such that . For this choice of , it is necessary that the closed curve has consistent winding numbers in . Once this is identified, is simply the total winding number of w.r.t. , as suggested by Lemma 4.3, because is homeomorphic to the plane. Furthermore, by Observation 5.3, we only need to pick one point from each cell of to check as the potential . Specifically, let be a set of such representatives, where . The optimal homotopy area is simply the smallest of all for those s with respect to which the curve has consistent winding numbers. Hence, assuming there is an optimal homotopy between and with no anchor points, we can compute .
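The selection of the best puncture point can be sketched as a brute-force loop over the cell representatives (this is the naive check with the linear-factor blow-up; the paper's algorithm avoids it). The inputs are assumptions about the representation: a winding number per cell in some fixed planar view of the arrangement, and an area per cell. The one fact used is that re-puncturing the sphere at a point of cell j shifts every winding number by the negative of cell j's winding number in the fixed view.

```python
def best_sphere_winding(planar_wn, areas):
    """planar_wn[i]: winding number of cell i in a fixed planar view of the
    arrangement on the sphere; areas[i]: area of cell i.  Puncturing at a
    point of cell j shifts every winding number by -planar_wn[j].  Return
    the minimum total winding area over all punctures j for which the
    shifted winding numbers are consistent (all >= 0 or all <= 0), or None
    if no consistent puncture exists."""
    best = None
    for j in range(len(planar_wn)):
        shifted = [w - planar_wn[j] for w in planar_wn]
        if all(w >= 0 for w in shifted) or all(w <= 0 for w in shifted):
            total = sum(abs(w) * a for w, a in zip(shifted, areas))
            if best is None or total < best:
                best = total
    return best
```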
Overview of the algorithm for the sphere case. To compute the optimal homotopy between and , we follow the same dynamic programming framework as before. If there is no anchor point in an optimal homotopy, then we use the discussion above to compute the optimal homotopy area. Otherwise, we identify the intersection point that serves as the next anchor point, and recurse. The main difference lies in the component that computes , assuming that there is an optimal homotopy from to with no anchor points. Previously, this was done by checking whether has consistent winding numbers. Now, we need to check the same condition against each of the potential representatives as the potential point at infinity. This gives a linear-factor blow-up in the time complexity compared to the algorithm for the planar case. However, we show that this blow-up can be avoided, and we can again compute all s for all s and all in time, after pre-processing time. See Appendix E.3 for details. Overall, the total time complexity remains the same as before.
Putting both cases ( and ) together, we conclude with the following main result.
Given a triangulation of an orientable compact 2-manifold with genus , let be the complexity of . Given two homotopic paths and of total complexity with intersection points, we can compute an optimal homotopy and its area in time.
In this paper, we propose a new curve similarity measure which captures how hard it is to deform one curve into the other, based on the total area swept. It is robust to noise (as it is area-based), and can be computed efficiently; to our knowledge, no other efficiently computable similarity measure for curves on surfaces is available. Our algorithm can be extended to cycles in the plane (see Appendix D). It appears that our algorithm can also be extended to cycles on surfaces. Indeed, if the optimal free homotopy has an anchor point, then we can break the cycles into curves that share a common start and end point, which reduces to the problem of comparing curves on surfaces. However, on a surface the analog of Lemma D.1 no longer holds, so two curves may intersect in but not have an anchor point in the optimal homotopy; in this case, it is not immediately clear how to bound the size of the portion of the universal cover needed by our algorithm. We leave the problem of working out these details, as well as improving the time complexity for comparing cycles, as an immediate future direction.
Currently, we assume that two input paths are simple paths which share starting and ending points, which makes it easier to define homotopy equivalence. This leads to two natural questions, namely how to handle curves which do not share endpoints and how to deal with non-simple curves. Another interesting problem is to compute optimal isotopy area where we require that any intermediate curve during the deformation is also simple.
Measuring similarity of curves on surfaces is an interesting problem, and many open questions remain. Geodesic Fréchet-based measures ignore the topological constraints of the underlying surface, while the homotopic Fréchet distance, homotopy height, and our method require identifying a homotopy which optimizes some cost. As for other similarity measures which may be tractable, one interesting new idea would be to develop an area-based curve similarity measure that allows topological changes, such as allowing a region to be swept as long as it has trivial homology. Other directions include developing efficient curve simplification algorithms based on this measure, and studying similarity between curves in more general simplicial complexes than considered in this paper (such as a manifold with boundary or holes, or non-manifolds).
The authors would like to thank Joseph O’Rourke, Rephael Wenger, Michael Davis, and Tadeusz Januszkiewicz for useful discussions at the early stage of this work, and David Letscher, Brody Johnson, and Bryan Clair for helpful discussions at the later stage of this paper. We would also like to thank the anonymous reviewers for their comments. This work is partially supported by the National Science Foundation under grants CCF-0747082 and CCF-1054779.
-  P. K. Agarwal, R. B. Avraham, H. Kaplan, and M. Sharir. Computing the discrete Fréchet distance in subquadratic time. In Proc. 24th Ann. ACM-SIAM Sympos. Discrete Alg. (SODA), 2013.
-  H. Alt. The computational geometry of comparing shapes. In Efficient Algorithms, volume 5760 of Lecture Notes in Computer Science, pages 235–248. Springer Berlin / Heidelberg, 2009.
-  H. Alt, U. Fuchs, G. Rote, and G. Weber. Matching convex shapes with respect to the symmetric difference. Algorithmica, 21:89–103, 1998.
-  H. Alt and M. Godau. Computing the Fréchet distance between two polygonal curves. International Journal of Computational Geometry and its Applications, 5:75–91, 1995.
-  H. Alt and L. J. Guibas. Discrete geometric shapes: Matching, interpolation, and approximation. In J.-R. Sack and J. Urrutia, editors, Handbook of Computational Geometry, pages 121–153. Elsevier Science Publishers B. V. North-Holland, Amsterdam, 2000.
-  H. Alt, C. Knauer, and C. Wenk. Comparison of distance measures for planar curves. Algorithmica, 38(1):45–58, 2004.
-  B. Aronov, S. Har-Peled, C. Knauer, Y. Wang, and C. Wenk. Fréchet distance for curves, revisited. In Proc. of European Symposium on Algorithms, pages 52–63, 2006.
-  S. Bespamyatnikh. An optimal morphing between polylines. International Journal of Computational Geometry and Applications, 12(3):217–228, 2002.
-  P. Bose, S. Cabello, O. Cheong, J. Gudmundsson, M. van Kreveld, and B. Speckmann. Area-preserving approximations of polygonal paths. Journal of Discrete Algorithms, 4:554–566, 2006.
-  G. R. Brightwell and P. Winkler. Submodular percolation. SIAM J. Discret. Math., 23(3):1149–1178, July 2009.
-  K. Buchin, M. Buchin, C. Knauer, G. Rote, and C. Wenk. How difficult is it to walk the dog? In Proc. 23rd Europ. Workshop Comput. Geom., pages 170–173, 2007.
-  K. Buchin, M. Buchin, W. Meulemans, and W. Mulzer. Four Soviets walk the dog—with an application to Alt’s conjecture. arXiv/1209.4403, 2012.
-  K. Buchin, M. Buchin, M. van Kreveld, M. Löffler, R. I. Silveira, C. Wenk, and L. Wiratma. Median trajectories. In Proc. 18th Annual European Symposium on Algorithms (ESA), volume 6346, pages 463–474. Springer, 2010.
-  K. Buchin, M. Buchin, and Y. Wang. Partial curve matching under the Fréchet distance. In Proc. ACM-SIAM Sympos. Discrete Alg. (SODA), 2009.
-  M. Buchin. On the Computability of the Fréchet Distance Between Triangulated Surfaces. PhD thesis, Dept. of Comput. Sci., Freie Universität Berlin, 2007.
-  E. W. Chambers, É. Colin de Verdière, J. Erickson, S. Lazard, F. Lazarus, and S. Thite. Homotopic Fréchet distance between curves or, walking your dog in the woods in polynomial time. Computational Geometry, 43(3):295–311, 2010. Special Issue on 24th Annual Symposium on Computational Geometry (SoCG’08).
-  E. W. Chambers, T. Ju, D. Letscher, and L. Liu. Isotopic Fréchet distance. In CCCG’11, 2011.
-  E. W. Chambers and D. Letscher. On the height of a homotopy. In CCCG’09, pages 103–106, 2009.
-  A. F. Cook and C. Wenk. Geodesic Fréchet distance inside a simple polygon. In Proc. 25th Internat. Sympos. Theoret. Asp. Comp. Sci., pages 193–204, 2008.
-  R. G. Cromley. Digital Cartography. Prentice Hall, Englewood Cliffs, NJ, 1992.
-  T. K. Dey and H. Schipper. A new technique to compute polygonal schema for 2-manifolds with application to null-homotopy detection. Discrete and Computational Geometry, 14(1):93–110, 1995.
-  J. Douglas. Solution of the problem of Plateau. Trans. of the American Mathematical Society, 33:263–321, 1931.
-  A. Driemel and S. Har-Peled. Jaywalking your dog: computing the Fréchet distance with shortcuts. In Proc. 23rd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’12, pages 318–337. SIAM, 2012.
-  A. Efrat, Q. Fan, and S. Venkatasubramanian. Curve matching, time warping, and light fields, new algorithms for computing similarity between curves. J. Math. Imaging Vis., 27(3):203–216, 2007.
-  A. Efrat, S. Har-Peled, L. J. Guibas, J. S. Mitchell, and T. Murali. New similarity measures between polylines with applications to morphing and polygon sweeping. Discrete and Computational Geometry, 28:535–569, 2002.
-  S. Har-Peled, A. Nayyeri, M. Salavatipour, and A. Sidiropoulos. How to walk your dog in the mountains with no magic leash. In Proc. 28th Symposuim on Computational Geometry, SoCG ’12, pages 121–130, New York, NY, USA, 2012. ACM.
-  S. Har-Peled and B. Raichel. The Fréchet distance revisited and extended. In Proc. 27th Annual ACM Symposium on Computational Geometry, SoCG ’11, pages 448–457, New York, NY, USA, 2011. ACM.
-  H. Lawson. Lectures on minimal submanifolds, volume 1 of Mathematics lecture series. Publish or Perish, 1980.
-  A. Maheshwari and J. Yi. On computing Fréchet distance of two paths on a convex polyhedron. In European Workshop on Computational Geometry, pages 41–44, 2005.
-  R. B. McMaster and K. S. Shea. Generalization in Digital Cartography. Association of American Cartographers, Washington DC, 1992.
-  K. Mehlhorn and C.-K. Yap. Constructive Whitney-Graustein Theorem: Or how to untangle closed planar curves. SIAM J. Comput., 20(4):603–621, 1991.
-  T. Rado. On Plateau’s problem. Annals of Mathematics, 31:457–469, 1930.
-  J. J. Rotman. An Introduction to Algebraic Topology. Graduate Texts in Mathematics; 119. Springer-Verlag New York Inc., 1988.
-  H. Schipper. Determining contractibility of curves. In Proc. Sympos. on Computational Geometry, pages 358–367, 1992.
-  G. Vegter and C. K. Yap. Computational complexity of combinatorial surfaces. In Proc. Sympos. on Computational Geometry, pages 102–111, 1990.
-  R. C. Veltkamp. Shape matching: similarity measures and algorithms. In Proc. Shape Modeling International, pages 188–199, 2001.
-  H. Whitney. On regular closed curves in the plane. Compositio Mathematica, 4:276–284, 1937.
Appendix A Proof for Observation 3.1
Note that is a map from , where is the unit square and a point is mapped to . See the figure to the right for an illustration. (Since and share starting and ending endpoints, the left and right sides of should be contracted to a point; we use the square view for simpler illustration.) The top and bottom boundary edges of this square are mapped to and , respectively. Given an anchor point , let and be the parameters of in and , respectively; that is, . By the definition of anchor points, the pre-image of under the map necessarily includes a curve in connecting on the bottom edge to on the top edge of . Since , the pre-images of cannot intersect those of . Hence no two such curves can intersect each other, which means that the s must be ordered in the same way as the s.
Appendix B Proof for Lemma 3.2
Consider a time in the homotopy, and let . We first show that deforms consistently, so that every point on is either fixed or deforms to the same side of .
First note that if some portion of is left sense-preserving at time and then reverses its direction and becomes right sense-preserving at a slightly later time , some portion of the domain has been swept twice. Hence this homotopy cannot have minimal area, since we can create a smaller one which stops at time and moves directly to some intermediate curve at a time greater than without sweeping any portion twice. See Figure 7 (e).
Now suppose that some portion of is deforming in one direction and another is morphing in the opposite direction. Since is a homotopy and is therefore continuous, this means that there is at least one interval of fixed points between these two regions (which may possibly consist of a single point). Let be an extremal point on this interval; see Figure 7 (a) for a picture when is the only fixed point. In addition, since is a fixed point but not an anchor point, we know there is some where is still on and another where is not on .
Now we have several cases to consider. First, if has directly reversed the direction of either portion of the curve (before or after ), then we are in a situation similar to the one previously discussed, since the curve goes from locally forward to locally backward (or vice versa). In this case, we again know that some area of the domain has been swept twice, which means some region has been swept over once and then returned to, so we can reduce the area swept by by reparameterizing to move directly to