Computing the Fréchet Distance with a Retractable Leash
All known algorithms for the Fréchet distance between curves proceed in two steps: first, they construct an efficient oracle for the decision version; second, they use this oracle to find the optimum from a finite set of critical values. We present a novel approach that avoids the detour through the decision version. This gives the first quadratic time algorithm for the Fréchet distance between polygonal curves in $\mathbb{R}^d$ under polyhedral distance functions (e.g., $L_1$ and $L_\infty$). We also get a $(1+\varepsilon)$-approximation of the Fréchet distance under the Euclidean metric, in quadratic time for any fixed $\varepsilon > 0$. For the exact Euclidean case, our framework currently yields an algorithm with running time $O(n^2 \log^2 n)$. However, we conjecture that it may eventually lead to a faster exact algorithm.
Measuring the similarity of curves is a classic problem in computational geometry. For example, it is used for map-matching tracking data [3, 20] and moving objects analysis [8, 9]. In these applications, it is important to take the continuity of the curves into account. Therefore, the Fréchet distance and its variants are popular metrics to quantify (dis)similarity. The Fréchet distance between two curves is obtained by taking a homeomorphism between the curves that minimizes the maximum pairwise distance. It is commonly explained through the leash-metaphor: a man walks on one curve, his dog walks on the other curve. Man and dog are connected by a leash. Both can vary their speeds, but they may not walk backwards. The Fréchet distance is the length of the shortest leash so that man and dog can walk from the beginning to the end of the respective curves.
The algorithmic study of the Fréchet distance was initiated by Alt and Godau. They gave an algorithm to solve the decision version for polygonal curves in $O(n^2)$ time, and then used parametric search to find the optimum in $O(n^2 \log n)$ time, for two polygonal curves of complexity $n$. The method by Alt and Godau is very general and also applies to polyhedral distance functions. To avoid the need for parametric search, several randomized algorithms have been proposed that are based on the decision algorithm combined with random sampling of critical values, one running in $O(n^2 \log n)$ time, the other in $O(n^2 \log^2 n)$ time. Recently, Buchin et al. showed how to solve the decision version in subquadratic time, resulting in a randomized algorithm for computing the Fréchet distance in $O(n^2 \sqrt{\log n}\,(\log\log n)^{3/2})$ time.
In terms of the leash-metaphor, these algorithms simply give several leashes to the man and his dog to try if a walk is possible. By a clever choice of leash-lengths, one then finds the Fréchet distance efficiently. Since no substantially subquadratic algorithm for the problem is known, several faster approximation algorithms have been proposed (e.g. [2, 15]). However, these require various assumptions on the input curves; previous to our work, there was no approximation algorithm that for the general case runs faster than known exact algorithms. Recently, Bringmann showed that, unless the Strong Exponential Time Hypothesis (SETH) fails, no general-case algorithm can approximate the Fréchet distance within a factor of 1.001 in time $O(n^{2-\delta})$, for any $\delta > 0$. The lower bound on the approximation factor was later improved to 1.399, even for the one-dimensional discrete case. Subsequent to our work, Bringmann and Mulzer showed that a very simple greedy algorithm yields an approximation factor of $2^{\Theta(n)}$ in linear time. This leaves us with a gap between the known algorithms and lower bounds for computing and approximating the Fréchet distance.
We present a novel framework for computing the Fréchet distance, one that does not rely on the decision problem. Instead, we give the man a “retractable leash” that can be lengthened or shortened as required. To this end, we consider monotone paths on the distance terrain, a generalization of the free space diagram typically used for the decision problem. Similar concepts have been studied before, but without the monotonicity requirement (e.g., path planning with height restrictions on terrains  or the weak Fréchet distance ).
We present the core ideas for our approach in Section 2. The framework provides a choice of the distance function $\delta$ that is used to measure the distance between points on the curves. However, it requires an implementation of a certain data structure that depends on $\delta$. We apply our framework to polyhedral distances (Section 3), to show that under such metrics, the Fréchet distance is computable in quadratic time. To the best of our knowledge, there is no previous method for this case that is faster than the classic Alt–Godau algorithm with running time $O(n^2 \log n)$. Our polyhedral implementation can be used to obtain a $(1+\varepsilon)$-approximation for the Euclidean case (Section 4). This leads to a quadratic-time algorithm for any fixed $\varepsilon > 0$, giving the first approximation algorithm that runs faster than known exact algorithms for the general case. Moreover, as shown by Bringmann, our result is tight up to subpolynomial factors, assuming SETH. Finally, we apply our framework to the Euclidean distance (Section 5), to show that using this approach, we can compute the Fréchet distance in $O(nm \log^2(nm))$ time for two $d$-dimensional curves of complexity $n$ and $m$. We conclude with two open problems in Section 6.
Curves and distances.
Consider a curve $P$ in $d$-dimensional space $\mathbb{R}^d$. We denote the vertices of $P$ by $p_1, \dots, p_{n+1}$; its complexity (number of edges) is $n$. We treat a curve as a piecewise-linear function $P \colon [1, n+1] \to \mathbb{R}^d$. That is, $P(i + \lambda) = (1 - \lambda)\, p_i + \lambda\, p_{i+1}$ holds for any integer $1 \le i \le n$ and $\lambda \in [0, 1]$. Similarly, we are given a curve $Q$ with complexity $m$; its vertices are denoted by $q_1, \dots, q_{m+1}$.
Let $\Psi$ be the set of all orientation-preserving homeomorphisms, i.e., continuous and nondecreasing functions $\psi \colon [1, n+1] \to [1, m+1]$ with $\psi(1) = 1$ and $\psi(n+1) = m+1$. Then the Fréchet distance is defined as
$$ d_F(P, Q) = \inf_{\psi \in \Psi} \ \max_{x \in [1, n+1]} \delta\bigl(P(x), Q(\psi(x))\bigr). $$
Here, $\delta$ may be any distance function on $\mathbb{R}^d$. We shall consider polyhedral distance functions (Section 3) and the more typical case of the Euclidean distance function (Section 5). For our framework, we require that $\delta$ is convex. That is, the locus of all points with distance at most one to the origin forms a convex set in $\mathbb{R}^d$.
Consider the joint parameter space $[1, n+1] \times [1, m+1]$ of $P$ and $Q$. A pair $(x, y)$ corresponds to the points $P(x)$ and $Q(y)$, and the distance function assigns a distance $\delta(P(x), Q(y))$ to it. We interpret this distance as the “height” at point $(x, y)$. This gives a distance terrain $T$, i.e., $T \colon [1, n+1] \times [1, m+1] \to \mathbb{R}$ with $T(x, y) = \delta(P(x), Q(y))$. We partition $T$ into cells based on the vertices of $P$ and $Q$. For integers $1 \le i \le n$ and $1 \le j \le m$, the cell $C_{i,j}$ is defined as the subset $[i, i+1] \times [j, j+1]$ of the parameter space. The cells form a regular grid, where $i$ represents the column and $j$ the row of a cell. The sides of $C_{i,j}$ are the four line segments $\{i\} \times [j, j+1]$, $[i, i+1] \times \{j\}$, $\{i+1\} \times [j, j+1]$, and $[i, i+1] \times \{j+1\}$; the boundary of $C_{i,j}$ is the union of its sides. An example of two curves and their distance terrain is given in Figure 1.
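To make these definitions concrete, here is a minimal Python sketch (the function names are ours, not the paper's) that evaluates a piecewise-linear curve and the height of the distance terrain under the Euclidean metric:

```python
import math

def eval_curve(V, t):
    """Evaluate the piecewise-linear curve with vertex list V at parameter
    t in [1, len(V)], following P(i + s) = (1 - s) * p_i + s * p_{i+1}."""
    i = min(int(t), len(V) - 1)   # 1-based index of the segment p_i -> p_{i+1}
    s = t - i
    a, b = V[i - 1], V[i]         # V is 0-indexed, so p_i is V[i - 1]
    return tuple((1 - s) * u + s * v for u, v in zip(a, b))

def terrain(P, Q, x, y, delta=math.dist):
    """Height T(x, y) of the distance terrain: the distance between the
    curve point P(x) and the curve point Q(y)."""
    return delta(eval_curve(P, x), eval_curve(Q, y))
```

For instance, `eval_curve([(0, 0), (2, 0)], 1.5)` returns the segment midpoint `(1.0, 0.0)`.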
A path is bimonotone if it is both $x$- and $y$-monotone, i.e., every horizontal and vertical line intersects it in at most one connected component. For $(x, y) \in [1, n+1] \times [1, m+1]$, we let $\Pi(x, y)$ be the set of all bimonotone continuous paths from the origin $(1, 1)$ to $(x, y)$. The acrophobia function $\widehat{T}$ is defined as
$$ \widehat{T}(x, y) = \inf_{\pi \in \Pi(x, y)} \ \max_{(a, b) \in \pi} T(a, b). $$
Intuitively, $\widehat{T}(x, y)$ represents the lowest height that an acrophobic (and somewhat neurotic) climber needs to master in order to reach $(x, y)$ from the origin on a bimonotone path through the distance terrain $T$. A bimonotone path from $(1, 1)$ to $(n+1, m+1)$ corresponds to a homeomorphism: we have $d_F(P, Q) = \widehat{T}(n+1, m+1)$.
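The acrophobia function has a natural discrete analogue on a sampled terrain, computable by a simple dynamic program over the grid. A small sketch (our own simplification; the paper works with the continuous terrain, so this only approximates $\widehat{T}$ at the top-right corner):

```python
def acrophobia_grid(T):
    """Minimax bimonotone-path value on a sampled terrain T (a list of rows):
    A[y][x] is the larger of the local height and the best value among the
    three predecessors (x-1, y), (x, y-1), (x-1, y-1)."""
    ny, nx = len(T), len(T[0])
    A = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            if x == 0 and y == 0:
                best = T[0][0]
            elif y == 0:
                best = A[0][x - 1]      # only horizontal moves on the bottom row
            elif x == 0:
                best = A[y - 1][0]      # only vertical moves on the left column
            else:
                best = min(A[y][x - 1], A[y - 1][x], A[y - 1][x - 1])
            A[y][x] = max(T[y][x], best)
    return A[-1][-1]
```

The returned corner value is the lowest height the climber cannot avoid on any monotone grid path.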
Let $\varepsilon \ge 0$, let $(x, y) \in [1, n+1] \times [1, m+1]$, and let $\pi$ be a bimonotone path from the origin to $(x, y)$. We call $\pi$ an $\varepsilon$-witness for $(x, y)$ if
$$ \max_{(a, b) \in \pi} T(a, b) \le \widehat{T}(x, y) + \varepsilon. $$
For $\varepsilon = 0$, we call $\pi$ simply a witness: $\pi$ is then an optimal path for the acrophobic climber.
Due to the convexity of the distance function, we need to consider only the boundaries of cells of the distance terrain. It seems natural to propagate through the terrain, for any point on a cell side, the minimal “height” (leash length) required to reach that point. However, this may entail an amortized linear number of changes when moving from one cell to the next, giving a cubic-time lower bound for such an approach. We therefore do not maintain these functions explicitly. Instead, we maintain sufficient information to compute the lowest value of the acrophobia function on a side. A single pass over the terrain then finds the minimum value for reaching the other end, giving the Fréchet distance.
More specifically, we show that as we move through a row of the distance terrain from left to right, the witnesses for the minimum values of the acrophobia function on the vertical boundaries exhibit a certain monotonicity property: if a witness for the $i$-th vertical boundary enters the row in column $k$, then there is a witness for the $(i+1)$-th vertical boundary that enters the row in column $k$ or to the right of column $k$. Thus, if we know that the “rightmost” witness for the $i$-th vertical boundary enters the row in column $k$, it suffices to consider only witnesses that enter in column $k$ or later. Furthermore, we can narrow down the set of candidate columns further by observing that it is enough to restrict our attention to those columns for which the minimum value of the acrophobia function on the bottom boundary is smaller than for all bottom boundaries to the right of it, up to column $i$ (otherwise, we could find an equally good witness further to the right). Now, all we need is an efficient way to decide whether for a given candidate column, there actually exists an optimum witness for the $(i+1)$-th vertical boundary that enters the row through this column. For this, we describe witness envelopes, a data structure that allows us to characterize an optimum witness that enters the row in a given column. Furthermore, we show that these witness envelopes can be maintained efficiently, assuming that an appropriate data structure for dynamic upper envelopes is available. Putting everything together, and proceeding analogously for the columns of the distance terrain, we obtain a new algorithm for the Fréchet distance.
2.2 Analysis of the distance terrain
The Fréchet distance corresponds to the acrophobia function on the distance terrain: $d_F(P, Q) = \widehat{T}(n+1, m+1)$. To compute $\widehat{T}(n+1, m+1)$, we show that it suffices to consider the cell boundaries. For this, we generalize the fact that cells of the free space diagram are convex to the distance terrain for convex distance functions.
Let $r \ge 0$, and suppose that $\delta$ is a convex distance function. For every cell $C$ of the distance terrain, the set of all points $(x, y) \in C$ with $T(x, y) \le r$ is convex.
The cell $C$ represents the parameter space of two line segments $e \subseteq P$ and $f \subseteq Q$. Let $\ell_e$ and $\ell_f$ be the parameterized lines spanned by these line segments. Both $\ell_e$ and $\ell_f$ are affine maps. Consider the map $\gamma \colon \mathbb{R}^2 \to \mathbb{R}^d$ defined by $\gamma(x, y) = \ell_e(x) - \ell_f(y)$. Being a linear combination of affine maps, $\gamma$ is affine. Set $D_r = \{ v \in \mathbb{R}^d \mid \delta(0, v) \le r \}$. Since $\delta$ is convex, $D_r$ is convex. Let $A = \gamma^{-1}(D_r)$. Since the affine preimage of a convex set is convex, $A$ is convex. Thus, $C \cap A$, the subset of $C$ with $T \le r$, is convex, as it is the intersection of two convex sets. ∎
Lemma 2.1 has two important consequences. First, it shows that it is indeed enough to focus on cell boundaries. Second, it tells us that the distance terrain along each side is unimodal, that is, it has a single local minimum.
Let $C$ be a cell of the distance terrain, and let $a$ and $b$ be two points on different sides of $C$. For any point $c$ on the line segment $\overline{ab}$, we have $T(c) \le \max\{T(a), T(b)\}$.
Let $C$ be a cell of the distance terrain. The restriction of $T$ to any side of $C$ is unimodal.
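Unimodality along a side is easy to observe numerically: fixing a point on one curve and sliding a point along a straight edge of the other gives a distance profile that decreases and then increases. A quick check in Python (the geometry below is a hypothetical example of ours):

```python
import math

def is_unimodal(vals, eps=1e-12):
    """True if the sequence weakly decreases and then weakly increases."""
    i = 0
    while i + 1 < len(vals) and vals[i + 1] <= vals[i] + eps:
        i += 1                      # skip the decreasing prefix
    while i + 1 < len(vals) and vals[i + 1] >= vals[i] - eps:
        i += 1                      # skip the increasing suffix
    return i == len(vals) - 1

# Sample T along the left side of a cell: the point on P is fixed,
# the point on Q moves along one straight edge.
p_point = (0.0, 2.0)
q_a, q_b = (-3.0, 0.0), (3.0, 0.0)
samples = [math.dist(p_point,
                     ((1 - t) * q_a[0] + t * q_b[0],
                      (1 - t) * q_a[1] + t * q_b[1]))
           for t in (k / 100 for k in range(101))]
```

Here `samples` first decreases to the foot of the perpendicular and then increases, as the corollary predicts.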
We denote by $L_{i,j}$ and $B_{i,j}$ the left and bottom side of the cell $C_{i,j}$ (and, by slight abuse of notation, also the restriction of $T$ to the side). The right and top side are given by $L_{i+1,j}$ and $B_{i,j+1}$. (Note that there need not be an actual cell $C_{i+1,j}$ or $C_{i,j+1}$.) With $\widehat{L}_{i,j}$ and $\widehat{B}_{i,j}$ we denote the acrophobia function along the corresponding side. All these restricted functions depend on a single parameter $t \in [0, 1]$ in the natural way, i.e., $L_{i,j}(t) = T(i, j + t)$, $B_{i,j}(t) = T(i + t, j)$, etc. Assuming that the distance function $\delta$ is symmetric, computing values for rows and columns of $T$ is symmetric as well. Hence, we present only how to compute with rows. If $\delta$ is asymmetric, our methods still work, but some extra care needs to be taken when computing distances. In the following, we fix a row $j$, and we write $L_i$ as a shorthand for $L_{i,j}$, $B_i$ for $B_{i,j}$, etc.
Consider a vertical side $L_i$. We write $\ell_i = \min_{t \in [0,1]} \widehat{L}_i(t)$ for the minimum of the acrophobia function along $L_i$, and similarly $b_i$ for horizontal sides. Our goal is to compute $\ell_i$ and $b_i$ for all cell boundaries. We say that an $\varepsilon$-witness $\pi$ passes through a side $S$ if there is a point $u \in S$ with $u \in \pi$.
Let $\varepsilon \ge 0$, and let $p$ be a point on $L_{i+1}$. Let $\pi$ be an $\varepsilon$-witness for $p$ that passes through $B_k$, for some $k \le i$. Suppose there is a column $k'$ with $k < k' \le i$ and $b_{k'} \le \widehat{T}(p) + \varepsilon$. Then there exists an $\varepsilon$-witness for $p$ that passes through $B_{k'}$.
Let $q$ be a point on $B_{k'}$ that achieves $\widehat{T}(q) = b_{k'}$, and let $\sigma$ be a witness for $q$. Since $\pi$ is bimonotone and passes through $B_k$, it must also pass through the vertical line through $q$. Let $u$ be the (lowest) intersection point of $\pi$ and this line, and let $\pi'$ be the subpath of $\pi$ from $u$ to $p$. Let $\tau$ be the path obtained by concatenating $\sigma$, the line segment $\overline{qu}$, and $\pi'$. By our assumption on $b_{k'}$ and by Corollary 1, path $\tau$ is an $\varepsilon$-witness for $p$ that passes through $B_{k'}$; see Figure 2. ∎
Lemma 2.2 implies that any point $p$ on a vertical side has a rightmost witness $\pi$ with the property that if $\pi$ passes through the bottom side $B_k$, for some $k$, then the acrophobia function on all later bottom sides is strictly greater than the acrophobia optimum at $p$.
Let $p$ be a point on $L_{i+1}$. There is a witness $\pi$ for $p$ with the following property: if $\pi$ passes through the bottom side $B_k$, then $b_{k'} > \widehat{T}(p)$, for all $k < k' \le i$.
Next, we argue that there is a witness for $\ell_{i+1}$ that enters row $j$ at or after the bottom side used by the witness for $\ell_i$. That is, the rightmost witnesses behave “monotonically” in the terrain.
Let $\pi$ be a witness for $\ell_i$ that passes through $B_k$, for some $k \le i$. Then $\ell_{i+1}$ has a witness that passes through $B_{k'}$, for some $k' \ge k$.
Choose $k'$ maximum so that $\ell_{i+1}$ has a witness $\sigma$ that passes through $B_{k'}$. If $k' \ge k$, we are done, so assume $k' < k$. Since $\sigma$ must pass through $L_i$, we get $b_k \le \ell_i \le \ell_{i+1}$. Lemma 2.2 now gives a witness for $\ell_{i+1}$ that passes through $B_k$, despite the choice of $k'$. ∎
We now characterize $\ell_{i+1}$ through a witness envelope. Fix $i$. Suppose $\ell_i$ has a witness that passes through $B_h$. Fix a second column $k$ with $h \le k \le i$. We are interested in the best witness for $L_{i+1}$ that passes through $B_k$. The witness envelope is a function $[0, 1] \to \mathbb{R}$. The witnesses must pass through $B_k$ and $L_i$ (if $k < i$), and they end on $L_{i+1}$. Hence,
$$ \widehat{L}_{i+1}(t) \ge \max\bigl\{\, L_{i+1}(t),\; b_k,\; \ell_i \,\bigr\} $$
for such witnesses.
However, this is not enough to exactly characterize the best witnesses for $L_{i+1}$ through $B_k$. To this end, we introduce truncated terrain functions $\check{L}_s$, for $k < s \le i$. Since $L_s$ is unimodal, $\check{L}_s$ represents the decreasing part until the minimum, remaining constant afterwards. Therefore,
$$ \check{L}_s(t) = \min_{0 \le t' \le t} L_s(t') $$
for all $t \in [0, 1]$. The reason for truncating the function is as follows: to reach $L_{i+1}(t)$, we must cross all $L_s$ below $y$-coordinate $j + t$. If we pass below the position where the minimum is attained, the height may force a higher value for the acrophobia function. However, the increasing part of $L_s$ does not matter, because we could just pass closer to the minimum. This intuition is not quite accurate, since we need to account for the order of the increasing parts to ensure bimonotonicity. However, we prove below that due to the witness for $\ell_i$ through $B_h$, this is not a problem. Thus, the witness envelope for the column interval $[k, i+1]$ in row $j$ is the upper envelope of the following functions on the interval $[0, 1]$:
the terrain function $L_{i+1}$;
the constant function $b_k$;
the constant function $\ell_i$, if $k < i$; and
the truncated terrain functions $\check{L}_s$, for all $k < s \le i$.
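On sampled functions, truncation is simply a running minimum, and the witness envelope is a pointwise maximum of the listed components. A minimal sketch (sampled stand-ins for the continuous functions; the names are ours):

```python
def truncate(vals):
    """Truncated terrain function on samples of a unimodal function: keep the
    decreasing part, then stay constant at the minimum (a running minimum)."""
    out, cur = [], float("inf")
    for v in vals:
        cur = min(cur, v)
        out.append(cur)
    return out

def witness_envelope(terrain_fn, truncated_fns, constants):
    """Pointwise upper envelope of the components, all sampled on the same
    parameter grid: the terrain function of the last side, the truncated
    functions in between, and the constant lower bounds."""
    n = len(terrain_fn)
    return [max([terrain_fn[t]] + [f[t] for f in truncated_fns] + list(constants))
            for t in range(n)]
```

The minimum of the returned list plays the role of the minimum-point query in the algorithm below.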
See Figure 3 for an example. We prove with the following lemma that the witness envelope exactly characterizes the acrophobia function on $L_{i+1}$ for witnesses that pass through $B_k$.
Fix a row $j$ and columns $h \le k \le i$. Suppose that $\ell_i$ has a witness that passes through $B_h$. Let $E$ be the witness envelope for the column interval $[k, i+1]$, let $t \in [0, 1]$, and let $p = L_{i+1}(t)$. The point $p$ has an $\varepsilon$-witness that passes through $B_k$ if and only if $E(t) \le \widehat{T}(p) + \varepsilon$.
Let $\pi$ be an $\varepsilon$-witness for $p$ that passes through $B_k$. Then, $\max_{u \in \pi} T(u) \le \widehat{T}(p) + \varepsilon$ and $b_k \le \widehat{T}(p) + \varepsilon$. If $k < i$, then $\pi$ must pass through $L_i$, so $\ell_i \le \widehat{T}(p) + \varepsilon$. Since $\pi$ is bimonotone, it has to pass through $L_s$ for $k < s \le i$. Let $u_{k+1}, \dots, u_i$ be the points of intersection, from left to right. Then, $u_s = (s, j + t_s)$ with $t_{k+1} \le \dots \le t_i \le t$, and $\check{L}_s(t) \le L_s(t_s) = T(u_s) \le \widehat{T}(p) + \varepsilon$, for all $k < s \le i$. Hence, $E(t) \le \widehat{T}(p) + \varepsilon$.
Now suppose that $E(t) \le \widehat{T}(p) + \varepsilon$. The conclusion is immediate for $k = i$, by Corollary 1. Otherwise, we have $k < i$. Let $t'$ be such that the witness for $\ell_i$ reaches $L_i$ at the point $L_i(t')$. There are two cases. First, if $t' \le t$, we can find an appropriate $\varepsilon$-witness for $p$ by following the witness for $b_k$, passing to the witness for $\ell_i$, following it to $L_i(t')$, and then taking the line segment to $p$. Second, if $t' > t$, we construct a curve $\tau$ as before. However, $\tau$ is not bimonotone (the last line segment goes down). This is fixed as follows: let $u_1$ and $u_2$ be the two intersection points of $\tau$ with the horizontal line $y = j + t$. We shortcut $\tau$ at the line segment $\overline{u_1 u_2}$ as illustrated in Figure 4. The resulting curve is bimonotone and passes through $B_k$. To see that it is an $\varepsilon$-witness, it suffices to check that along the segment $\overline{u_1 u_2}$, the distance terrain never goes above $\widehat{T}(p) + \varepsilon$. For this, we need to consider only the intersections of $\overline{u_1 u_2}$ with the vertical sides. Let $L_s$ be such a side. The function $L_s$ is unimodal; let $t_s^*$ be the value where the minimum of $L_s$ is obtained. We distinguish two cases to argue that $L_s(t) \le \widehat{T}(p) + \varepsilon$ and to prove the lemma:
$t \le t_s^*$: by definition of truncated terrain functions, $\check{L}_s(t'') = L_s(t'')$, for all $t'' \le t_s^*$. Hence, we know that $L_s(t) = \check{L}_s(t) \le \widehat{T}(p) + \varepsilon$ holds trivially by our assumption on $E(t)$ and the fact that $\check{L}_s$ is part of the witness envelope.
$t > t_s^*$: by construction, the witness for $\ell_i$ passes $L_s$ at height $j + t$ or higher, say at parameter $t''$. Hence, $L_s(t) \le L_s(t'')$ holds, as $t$ is on the increasing part of $L_s$. It follows that $L_s(t) \le \ell_i$. Since $\ell_i \le E(t)$, we have $L_s(t) \le \widehat{T}(p) + \varepsilon$, as desired.
Thus, the constructed curve passes through $B_k$ and is an $\varepsilon$-witness for $p$. ∎
We are now ready to present the algorithm. We walk through the distance terrain, row by row, in each row from left to right. When processing a cell $C_{i,j}$, we compute $\ell_{i+1,j}$ and $b_{i,j+1}$. For each row $j$, we maintain a double-ended queue (deque) $\mathcal{Q}_j$ that stores a sequence of column indices. We also store a data structure $\mathcal{U}_j$ that contains a set of (truncated) terrain functions on the vertical sides in row $j$. The structure supports insertion, deletion, and a minimum-point query that returns the lowest point on the upper envelope of the terrain functions. In other words, $\mathcal{U}_j$ implicitly represents a witness envelope, apart from the constant functions $b_{k,j}$ and $\ell_{i,j}$. The implementation of $\mathcal{U}_j$ depends on the distance function $\delta$: in Section 3, we describe the data structure for polyhedral distance functions, and in Section 5, we consider the Euclidean case.
The algorithm is given in Algorithm 1. It proceeds as follows: since all witnesses start at $(1, 1)$, we initialize the data structures to use $(1, 1)$ as their lowest point and compute the distance $T(1, 1)$ accordingly. The left- and bottommost sides of the distance terrain are considered unreachable.
In the body of the for-loop, we compute $\ell_{i+1,j}$ and $b_{i,j+1}$. Let us describe how to find $\ell_{i+1,j}$. First, we remove all indices from the back of the deque $\mathcal{Q}_j$ that have an acrophobia optimum on the bottom side that is at least $b_{i,j}$, and we append $i$ to $\mathcal{Q}_j$. We also add $L_{i+1,j}$ to the upper envelope $\mathcal{U}_j$. Let $c_1$ and $c_2$ be the first two elements of $\mathcal{Q}_j$. We perform a minimum query on the witness envelope, combining the result with the two constants $b_{c_1,j}$ and $\ell_{i,j}$, in order to find the smallest value $v$ for which a point on $L_{i+1,j}$ has a witness of height at most $v$ that passes through $B_{c_1,j}$. Note that $\ell_{i,j}$ should be included as a constant only if the witnesses pass through $L_{i,j}$, i.e., if $c_1 < i$; for simplicity, we omit this detail in the overview. If $v \ge b_{c_2,j}$, there is an equally good witness for $\ell_{i+1,j}$ through $B_{c_2,j}$, so we can repeat the process with $c_2$ (after removing $c_1$ from $\mathcal{Q}_j$). If $c_2$ does not exist (i.e., $\mathcal{Q}_j$ contains only one element) or if $v < b_{c_2,j}$, we stop and declare $\ell_{i+1,j} = v$ to be optimal. Finally, we update $\mathcal{U}_j$ to use the truncated terrain function $\check{L}_{i+1,j}$ instead of $L_{i+1,j}$.
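The deque maintenance described here is the classic monotone-queue pattern. The following standalone sketch (our own illustration; the envelope query is abstracted into a callback) shows the two operations on candidate columns:

```python
from collections import deque

def push_column(Q, k, b_k):
    """Append candidate column k with bottom-side optimum b_k, removing from
    the back every column whose optimum is at least b_k (it is dominated)."""
    while Q and Q[-1][1] >= b_k:
        Q.pop()
    Q.append((k, b_k))

def settle_head(Q, query):
    """Pop the head while entering at the next candidate is at least as good;
    query(k) returns the best height achievable through column k."""
    while len(Q) >= 2 and query(Q[0][0]) >= Q[1][1]:
        Q.popleft()
    return Q[0][0]
```

After the pushes, the stored bottom-side optima are strictly increasing from front to back, which is exactly the dominance condition from the overview.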
We now give the invariant that holds at the beginning of each iteration of the for-loop. The invariant is stated only for a row; analogous data structures and invariants apply to the columns. A point $(k, b_k)$ dominates a point $(k', b_{k'})$ if $k > k'$ and $b_k \le b_{k'}$. As before, we fix a row $j$ from now on, and we omit the index $j$ from all variables.
At the beginning of iteration $i$ in row $j$, we have computed the optima $\ell_1, \dots, \ell_i$, $b_1, \dots, b_i$. Let $h$ be the column such that a rightmost witness for $\ell_i$ passes through $B_h$. Then $\mathcal{Q}$ stores the first coordinates of the points in the sequence $(h, b_h), (h+1, b_{h+1}), \dots, (i, b_i)$ that are not dominated by any other point in the sequence. In addition, $\mathcal{U}$ stores the (truncated) terrain functions for the vertical sides in columns $h < s \le i$.
Invariant 1 holds initially, so we need to prove that it is maintained in each iteration of the for-loop. This is done in the following lemma.
By the invariant, a rightmost witness for $\ell_i$ passes through $B_h$, where $h$ is the head of $\mathcal{Q}$ at the beginning of the iteration. Let $h'$ be the column such that a rightmost witness for $\ell_{i+1}$ passes through $B_{h'}$. Then $h'$ is contained in $\mathcal{Q}$ after $i$ has been added, because by Lemma 2.3, we have $h' \ge h$, and by Corollary 3, there can be no column index that dominates $(h', b_{h'})$.
Now let $c_1$ be the head of $\mathcal{Q}$ before a minimum query on $\mathcal{U}$, and $c_2$ the second element of $\mathcal{Q}$. By Lemma 2.4, the minimum query gives the smallest $v$ for which there is a witness of height at most $v$ for a point on $L_{i+1}$ that passes through $B_{c_1}$. If $v \ge b_{c_2}$, then $\ell_{i+1} \le v$ (definition of $v$); $\ell_{i+1}$ can be achieved through a later column (there is a witness through $B_{c_2}$ of height at most $v$); and $c_1 \ne h'$ (the dominance relation ensures that the $b$-values for the indices in $\mathcal{Q}$ are increasing). Thus, the while-loop in line 12 proceeds to the next iteration. If $v < b_{c_2}$, then by Corollary 3, we have $b_{k'} > v$ for all $k' > c_1$ in $\mathcal{Q}$, and the while-loop terminates with the correct value for $\ell_{i+1}$. It is straightforward to check that Algorithm 1 maintains the data structures $\mathcal{Q}$ and $\mathcal{U}$ according to the invariant. ∎
Let $\delta$ be a convex distance function in $\mathbb{R}^d$. Algorithm 1 computes $d_F(P, Q)$ for curves $P$ and $Q$ of complexity $n$ and $m$ in time $O(nm \cdot T_{\mathcal{U}})$, where $T_{\mathcal{U}}$ represents the time to insert into, delete from, and query the upper envelope data structure.
Correctness follows from Lemma 2.5. For the running time, observe that we insert each column index only once into $\mathcal{Q}$ and each terrain function at most twice into $\mathcal{U}$ (once untruncated, once truncated). Hence, we can remove elements at most once or twice, respectively. This results in an amortized running time of $O(T_{\mathcal{U}})$ for a single iteration of the for-loop. Since there are $nm$ cells, this results in the claimed total execution time, assuming that $T_{\mathcal{U}}$ is $\Omega(1)$. ∎
2.4 Avoiding Truncated Functions
In Algorithm 1, the envelope $\mathcal{U}$ uses the (full) unimodal distance function only for the most recently added side and the truncated versions for the other sides. Since our algorithm relies on an efficient data structure to maintain dynamic upper envelopes of these distance functions, and since it is easier to design such a data structure if the set of possible functions to be stored is limited, we would like to avoid the need for truncating the functions. In general, this seems hard to do, but we show here that as long as the functions behave like pseudolines (i.e., each pair of functions intersects at most once, and this intersection is proper), we can actually work with the simpler set of untruncated distance functions. Since we compare only functions in the same row (or column), functions in different rows or columns may still intersect more than once. Using the full unimodal functions potentially allows for a more efficient implementation of the envelope structure.
The idea is as follows: since the terrain distance functions on the cell boundaries are unimodal, the initial (from left to right) envelopes of the truncated distance functions and the untruncated distance functions are identical. The two envelopes begin to differ only when the increasing part of an untruncated distance function “cuts off” a part of the envelope. We analyse our algorithm to understand under which circumstances this situation can occur. It turns out that in most cases, the increasing parts of the distance functions are “hidden” by the inclusion of the constant $\ell_i$ in the witness envelope, except for one case, namely when the deletion of a distance function from the witness envelope exposes an increasing part of a distance function that did not previously appear on the envelope. However, we will see that this case can be detected easily, and that it can be handled by simply removing the increasing distance function from the upper envelope. The fact that the distance functions behave like pseudolines ensures that the removed function does not play any role in later queries to the witness envelope. This idea is formalized and proven below.
We modify Algorithm 1 as follows: we omit the update to $\mathcal{U}$ in line 19, thus $\mathcal{U}$ maintains untruncated, unimodal functions. To perform a minimum-point query, we first run the query on the upper envelope of the full unimodal functions. Let $q$ be the resulting minimum. If $q$ lies on the intersection of an increasing $L_s$ and a decreasing $L_{s'}$ with $s < s'$, we remove $L_s$ from $\mathcal{U}$ and repeat the query. Otherwise, we return $q$, which is then again combined with the constants $b_{c_1}$ and $\ell_i$ as usual.
Below, we prove that this modified algorithm is indeed correct. Let $E_f$ be the envelope maintained by the modified algorithm (with full functions), and $E_t$ the envelope of the original algorithm (with truncated functions). We let both $E_f$ and $E_t$ include the constants $b_{c_1}$ and $\ell_i$. The envelopes $E_f$ and $E_t$ are unimodal: they consist of a decreasing part, (possibly) followed by an increasing part. Let $D_f$ and $D_t$ be the decreasing parts of $E_f$ and $E_t$, up to the global minimum.
First, we make the following observation. With it, we prove that $D_f$ and $D_t$ are identical throughout the algorithm (Invariant 2).
Fix a terrain function $L_s$. Let $i \ge s$ be such that $L_s$ is contained in $\mathcal{U}$ at the end of iteration $i$. Then $\ell_i \ge \ell_s$.
By Invariant 1, there is a witness for $\ell_i$ that passes through $B_h$ for some $h < s$; this witness crosses $L_s$, so $\ell_i \ge \ell_s$. ∎
Suppose we run the original and the modified algorithm simultaneously. Then, after each minimum query, $D_f$ and $D_t$ are identical. Furthermore, any function that the modified algorithm deletes during a minimum query does not appear on $D_t$ in any future iteration.
Initially, Invariant 2 trivially holds as the upper envelopes are empty. The envelopes $E_f$ and $E_t$ are modified when:
inserting a full unimodal terrain function (line 8);
truncating a terrain function (line 19);
deleting a terrain function while updating the queue (line 13).
We now prove that each case indeed maintains the invariant.
Case (a): The invariant tells us that $D_f$ and $D_t$ are identical before adding a full unimodal terrain function $L_{i+1}$. Hence, $L_{i+1}$ affects $D_f$ and $D_t$ in the same manner (either by adding a piece or by shortening them) and Invariant 2 is maintained.
Case (b): The truncated part of $L_{i+1}$ is the increasing part and hence does not belong to $D_t$. As the iteration ends, $i$ is increased by one, and $\check{L}_{i+1}$ is now included in the upper envelope rather than $L_{i+1}$. In the truncated envelope $E_t$, the global minimum is determined by the constants and the decreasing parts, not by the increasing part of $L_{i+1}$. Hence, the minimum remains the same when truncating $L_{i+1}$, and $D_t$ is unchanged. The modified algorithm skips the truncation step, so $E_f$ is not changed. Again, Invariant 2 is maintained.
Case (c): After deleting a function from $E_f$ and $E_t$, Invariant 2 may get violated. Although the invariant guarantees that all functions on $D_t$ are stored by the modified algorithm, it may happen that $D_f$ is cut off by the increasing part of a function $L_s$ that is truncated in $E_t$. In this case, let the minimum of $E_f$ be the intersection of the increasing part of $L_s$ and the decreasing part of $L_{s'}$ in iteration $i$. There are two subcases: (c1) $s' < s$; or (c2) $s < s'$.
Case (c1) cannot occur: during iteration $i$, both the decreasing part of $L_{s'}$ and the increasing part of $L_s$ are present in $E_f$. Thus, the pseudoline property implies that $L_{s'}$ lies below $L_s$ beyond their unique intersection, and $\ell_i \ge \ell_{s'}$, by Lemma 2.7. Therefore, the intersection cannot form the minimum of $E_f$. In case (c2), the modified query algorithm deletes $L_s$ from $\mathcal{U}$ and repeats. If we argue that $L_s$ does not occur on $D_t$ in any future iteration, the algorithm eventually stops with $D_f$ and $D_t$ identical, and with Invariant 2 maintained. For this, observe that (i) $s < s'$ and the decreasing part of $L_{s'}$ lies below the increasing part of $L_s$; and (ii) by Lemma 2.7, $\ell_{i'} \ge \ell_{s'}$ for any iteration $i'$ in which $L_{s'}$ is contained in $\mathcal{U}$. Thus, $\check{L}_s$ always lies below $D_t$. ∎
Now that we have established the desired invariant, the following theorem can be stated as a direct consequence of it.
Let $j$ be a row of the distance terrain such that the distance functions in row $j$ intersect pairwise at most once. Then the minima computed by the modified algorithm are identical to the minima computed by the original algorithm.
3 Polyhedral distance
We consider the Fréchet distance with a convex polyhedral distance function $\delta$, i.e., the “unit sphere” of $\delta$ is a convex polytope in $\mathbb{R}^d$ that strictly contains the origin. For instance, the $L_1$ and the $L_\infty$ distance are polyhedral, with the cross-polytope and the hypercube as respective unit spheres. Throughout, we assume that $\delta$ has complexity $k$, i.e., its polytope (unit sphere) has $k$ facets. The polytope of $\delta$ is not required to be regular or symmetric, but as before, we simplify the presentation by assuming symmetry.
Intuitively, the distance $\delta(u, v)$ is the smallest scaling factor $\lambda$ such that $v$ lies on the polytope, centered on $u$ and scaled by a factor of $\lambda$. We compute it as follows. Let $f_1, \dots, f_k$ denote the facets of the polytope of $\delta$. Let $\delta_s(u, v)$ denote the facet distance for facet $f_s$, that is, the multiplicative factor by which the hyperplane spanned by $f_s$ needs to be scaled from $u$ to contain $v$. We assume that a facet $f_s$ is defined through the point $c_s$ on the hyperplane spanned by $f_s$ that is closest to the origin: the vector from the origin to $c_s$ is normal to $f_s$. The facet distance is then computed as $\delta_s(u, v) = \langle v - u, c_s \rangle / \langle c_s, c_s \rangle$. This distance may be negative, but there is always at least one facet with non-negative distance. Then $\delta(u, v) = \max_{1 \le s \le k} \delta_s(u, v)$, the maximum over all facet distances. For a general polytope, we can compute the facet distance in $O(d)$ time and the distance between two points in $O(dk)$ time. However, for specific polytopes, we may do better. To make this explicit in our analysis, we denote the time to compute the facet distance by $T_f$.
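This computation is easy to make concrete. The sketch below (in Python; the planar facet lists used in the test are the hypercube and cross-polytope examples from the text) evaluates the facet distances and takes their maximum:

```python
def facet_distance(u, v, c):
    """Factor by which the hyperplane through facet point c (the point of the
    facet's hyperplane closest to the origin) must be scaled, centered at u,
    to contain v: <v - u, c> / <c, c>."""
    cc = sum(ci * ci for ci in c)
    return sum((vi - ui) * ci for ui, vi, ci in zip(u, v, c)) / cc

def polyhedral_distance(u, v, facets):
    """Polyhedral distance: the maximum facet distance over all facets."""
    return max(facet_distance(u, v, c) for c in facets)
```

For example, with the four hypercube facet points $(\pm 1, 0)$ and $(0, \pm 1)$, the distance from $(0, 0)$ to $(3, -2)$ evaluates to $3$, the $L_\infty$ distance; the cross-polytope facet points $(\pm 1/2, \pm 1/2)$ give $5$, the $L_1$ distance.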
The distance terrain functions $L_{i,j}$ and $B_{i,j}$ are piecewise linear for a convex polyhedral distance function $\delta$. Each linear part corresponds to a facet of the polytope. Therefore, such a function has at most $k$ parts. Moreover, for a fixed line segment (i.e., within the same row or column), each facet has a fixed slope: the parts for this facet are parallel. Depending on the polytope, the maximum number of parts of a single function may be less than $k$. We denote this actual maximum number of parts by $\kappa$. Computing the linear parts of a distance terrain function $L_{i,j}$ or $B_{i,j}$ requires computing which facets may occur. We denote the time it takes to compute the relevant facets for a given boundary by $T_r$.
We give three approaches. First, we use an upper envelope structure as in the Euclidean case, but exploiting that the distance functions are now piecewise linear. Second, we use a brute-force approach which is more efficient for small to moderate dimension $d$ and complexity $k$. Third, we combine these methods to deal with the case of moderately sized $k$ and $\kappa$ being much smaller than $k$.
Upper envelope data structure.
As $L_{i,j}$ and $B_{i,j}$ are piecewise linear, we need a data structure that dynamically maintains the upper envelope of lines under insertions, deletions, and minimal-point queries. Note that the minimal-point query now requires us to compute the actual minimal point on the upper envelope of lines (instead of parabolas). We apply the same duality transformation as in the Euclidean case and maintain a dynamic convex hull. That is, every line $y = ax + b$ on the upper envelope dualizes to a point $(a, -b)$. Any point $p = (p_1, p_2)$ dualizes to a line $y = p_1 x - p_2$. If a point $p$ is above a line $\ell$, then the dual point of $\ell$ is above the dual line of $p$. Hence, the upper envelope corresponds to the dual lower convex hull. Since the minimum of the upper envelope occurs when the slopes change from negative to nonnegative, it dualizes to the line segment on the convex hull that intersects the $y$-axis. The fastest known data structure for this problem is due to Chan: for $N$ lines, it has an $O(\log N)$ query time and an $O(\log^{1+\varepsilon} N)$ amortized update time, for any $\varepsilon > 0$.
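The duality argument can be exercised with a static version of the structure: dualize the lines, build the lower hull with Andrew's monotone chain, and read the envelope minimum off the hull edge that straddles the $y$-axis. A sketch (static only, and assuming some line has negative and some nonnegative slope; the dynamic structures of Chan or Brodal and Jacob are far more involved):

```python
def lower_hull(points):
    """Lower convex hull (Andrew's monotone chain), left to right."""
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            # discard hull[-1] if it is not strictly below the chord to p
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def envelope_minimum(lines):
    """Minimum of the upper envelope of lines y = a*x + b via duality:
    line (a, b) maps to the dual point (a, -b); the envelope minimum
    corresponds to the lower-hull edge whose endpoints straddle slope 0."""
    hull = lower_hull([(a, -b) for a, b in lines])
    for (a1, nb1), (a2, nb2) in zip(hull, hull[1:]):
        if a1 < 0 <= a2:
            b1, b2 = -nb1, -nb2
            x = (b2 - b1) / (a1 - a2)  # the two primal lines cross here
            return a1 * x + b1
    return None  # all slopes on one side: no interior minimum
```

For the lines $y = -x + 3$, $y = 1$, and $y = x$, the envelope minimum is $1.5$, attained where the two extreme lines cross.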
However, in our case, we can do slightly better by using the data structure by Brodal and Jacob, which supports insertions and deletions in $O(\log N)$ amortized time. This data structure does not support the minimal-point query directly. However, we can make it work by observing that we must insert and delete up to $\kappa$ linear functions each time; it is acceptable to run $O(\kappa)$ queries as well.
We can implement an upper envelope data structure on piecewise linear functions of complexity at most $\kappa$ with an amortized update time of $O(T_r + \kappa\,(T_f + \log(n\kappa)))$ and a minimal-point query time of $O(\kappa \log(n\kappa))$.
First, we consider insertions and deletions. Every function is piecewise linear with at most $\kappa$ parts, so there are $O(n\kappa)$ lines in the data structure. Hence, it takes $O(\kappa \log(n\kappa))$ amortized time to insert and delete the parts of a single function. To compute the relevant lines that make up the piecewise linear function, we first find the relevant facets of $\delta$ in $T_r$ time. Then we compute the parameters of the corresponding lines by computing, for each relevant facet, the facet distance at both endpoints of the boundary. This takes $O(T_f)$ time per facet.
For the minimal-point query, we observe that the lines with positive slope (that is, dual points with positive $x$-coordinate) are truncated at the end of each iteration. Hence, at any point during the algorithm, the dual lower hull contains at most $\kappa$ points with positive $x$-coordinate. We maintain only the points with nonpositive $x$-coordinate (lines with nonpositive slope) in the data structure. To find the line segment that intersects the $y$-axis, we perform for each current point with positive $x$-coordinate a tangent query in the convex hull structure. We maintain the tangent with the lowest intersection with the $y$-axis: this tangent gives the intersection between the $y$-axis and the actual lower hull (including the points with positive $x$-coordinate). We perform $O(\kappa)$ queries, each in $O(\log(n\kappa))$ time; a minimal-point query takes $O(\kappa \log(n\kappa))$ time. ∎
A very simple data structure can often lead to good results. Here, we describe such a data structure, exploiting that in a single row, the distance function for each facet has a fixed slope. Unlike the other approaches, this method does not require computing the relevant facets and thus not depend on .
After $O((n+m)k\log k)$ total preprocessing time, we can implement the upper envelope structure with an amortized update and query time of $O(k\,T_D)$.
During the preprocessing phase, we sort, for each segment of the two curves, the facets of the polytope by the corresponding slope on the witness envelope. This takes $O((n+m)k\log k)$ total time using the straightforward algorithm.
Consider the upper envelope data structure for a row (columns are again analogous). The structure must represent a number of unimodal functions, each consisting of a number of linear parts. Each linear part corresponds to a certain facet of the polytope and has a fixed slope. For each facet (in sorted order), the structure stores a doubly linked list containing the lines spanned by these linear parts. Since the slope is fixed, the lines in a single list do not intersect and are sorted from top to bottom. The upper envelope is fully determined by only the top line of each list.
When processing a cell boundary, we update each list: we remove all lines below the new line from the back of the list and append the new line. Per facet, it takes $O(T_D)$ time to compute the $y$-intersection of the new line and $O(1)$ amortized time for the insertion. We then go through the top lines of the lists in sorted order to determine the minimal value on the upper envelope in $O(k)$ time. ∎
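The list manipulation above can be sketched as follows. The class `FacetLists` and its method names are invented for illustration: it keeps one monotone deque of intercepts per facet (lines of one facet are parallel, so "below" is a comparison of intercepts), and, for simplicity, answers the minimal-value query by brute force over the top lines instead of the $O(k)$ scan in slope order used in the proof.

```python
from collections import deque
from itertools import combinations

class FacetLists:
    """Sketch of the simple per-facet list structure.

    Each facet has a fixed slope; its list stores the intercepts of the
    lines contributed so far, kept sorted from top (front) to bottom
    (back) by removing dominated lines before appending.
    """

    def __init__(self, slopes):
        self.slopes = slopes                    # fixed slope per facet
        self.lists = [deque() for _ in slopes]  # intercepts, descending

    def insert(self, facet, intercept):
        L = self.lists[facet]
        # Drop all lines below the new line from the back, then append.
        while L and L[-1] <= intercept:
            L.pop()
        L.append(intercept)

    def minimal_value(self):
        """Minimal value of the upper envelope of the top lines.

        Brute force over pairwise intersections for clarity; the proof
        achieves O(k) by scanning the top lines in slope order.
        """
        top = [(self.slopes[i], L[0])
               for i, L in enumerate(self.lists) if L]

        def env(x):
            return max(a * x + b for a, b in top)

        xs = [(b2 - b1) / (a1 - a2)
              for (a1, b1), (a2, b2) in combinations(top, 2) if a1 != a2]
        return min(env(x) for x in xs) if xs else env(0.0)
```

For example, with two facets of slopes $-1$ and $1$, inserting the lines $y=-x$ and $y=x$ yields minimal envelope value $0$; inserting $y=-x+2$ for the first facet removes the dominated line $y=-x$ from its list and moves the minimum to $1$.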
A hybrid approach.
After $O((n+m)k)$ total preprocessing time, we can implement the upper envelope structure with an amortized update time of $O(T_F + K(T_D + \log k))$ and a minimal-point query time of $O(K\log k)$, where $K \le k$ denotes the maximum number of relevant parts of a single distance function.
For each row (or column), we initialize empty lists $L_1, \dots, L_k$. This takes $O((n+m)k)$ total preprocessing time. The role of the lists is similar to that in Lemma 3.2, i.e., each list corresponds to a facet of the polytope. However, unlike in Lemma 3.2, we do not sort the facets. Instead, we maintain the upper envelope of the top lines of the lists, using the method from Lemma 3.1. At each cell boundary, we find the relevant parts in $O(T_F)$ time and compute their parameters in $O(T_D)$ time per part. The parts are inserted into the appropriate lists. If a new part appears at the top of its list, we update the upper envelope structure. Since this structure now stores only $k$ lines, each such update takes $O(\log k)$ amortized time.
Minimal-point queries are done as before (see the proof of Lemma 3.1). Again, the structure contains only $k$ lines, so a query takes $O(K\log k)$ time, where $K$ is the number of relevant parts of a single function. ∎
Plugging Lemmas 3.1, 3.2, and 3.3 into Theorem 2.6 yields the following result. The method that works best depends on the chosen polytope and on the given complexity and dimension, that is, on the relationship between $n$, $k$, and $d$.
Let $d_Q$ be a convex polyhedral distance function of complexity $k$ in $\mathbb{R}^d$. Algorithm 1 computes the Fréchet distance under $d_Q$ in $O\bigl((n+m)k\log k + nm\cdot\min\{k\,T_D,\; T_F + k(T_D + \log(nk)),\; T_F + K(T_D + \log k)\}\bigr)$ time, where $T_F$ is the time needed to find the relevant parts of a distance function, $T_D$ is the time needed to compute the distance between two points for a given facet of the polytope, and $K \le k$ is the maximum number of relevant parts of a single distance function.
For a generic polytope, we have $T_D = O(d)$, so the brute-force approach runs in $O(nmkd)$ time. The other methods can be faster only if the number of relevant facets is small and if we have an efficient method to compute the relevant facets of a distance terrain function. The hybrid method improves over the upper-envelope method if the number of relevant parts is much smaller than $k$. Note that there cannot be more than $k$ elements in the upper envelope for the hybrid method. However, if almost all facets are relevant at each boundary, the upper-envelope method outperforms the hybrid method. Thus, to gain an advantage over the brute-force method, a structured polytope is necessary.
Let $d_Q$ be a convex polyhedral distance function of complexity $k$ in $\mathbb{R}^d$. Algorithm 1 computes the Fréchet distance under $d_Q$ in $O((n+m)k\log k + nmkd)$ time.
Let us now consider $L_\infty$. Its polytope is the hypercube; each facet is determined by a maximum coordinate. We have $k = 2d$, and the brute-force method outperforms the other methods. However, a facet depends on only one dimension, so we can compute the distance for a given facet in $O(1)$ time.
Algorithm 1 computes the Fréchet distance under the $L_\infty$ distance in $\mathbb{R}^d$ in $O(nmd)$ time.
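The $O(1)$ facet-distance computation for $L_\infty$ amounts to reading off a single coordinate. The helper functions below (illustrative names, not from the paper) identify the relevant facet of the hypercube as a coordinate index together with a sign, and evaluate the distance with respect to a fixed facet.

```python
def linf_relevant_facet(p, q):
    """Relevant facet of the L-infinity unit ball for the pair (p, q).

    A facet of the hypercube is identified by a coordinate index and a
    sign: the coordinate in which |q[i] - p[i]| is maximal, with the
    sign of the difference.
    """
    i = max(range(len(p)), key=lambda j: abs(q[j] - p[j]))
    sign = 1 if q[i] >= p[i] else -1
    return i, sign

def linf_facet_distance(p, q, facet):
    """Distance from p to q with respect to a fixed hypercube facet.

    Once the facet is known, this is a single signed coordinate
    difference, i.e., an O(1) computation.
    """
    i, sign = facet
    return sign * (q[i] - p[i])
```

For the relevant facet, the facet distance equals the $L_\infty$ distance itself; for other facets it is a lower bound given by the corresponding coordinate difference.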
For $L_1$, whose polytope is the cross-polytope, there are $2^d$ facets. Structural insights help us improve upon the brute-force method. The facets of the cross-polytope are determined by the signs of the coordinates. Let $s$ be the line segment and $p$ the point defining the terrain distance. At the breakpoints between the parts of the distance function, one of the coordinates of the vector from the point on $s$ to $p$ changes sign. Therefore, there are at most