
# Computing the Hausdorff Distance of Two Sets from Their Signed Distance Functions

Daniel Kraft
University of Graz
Institute of Mathematics, NAWI Graz
Universitätsplatz 3, 8010 Graz, Austria
Email: daniel.kraft@uni-graz.at
August 4th, 2016
###### Abstract

The Hausdorff distance is a measure of (dis-)similarity between two sets which is widely used in various applications. Most of the applied literature is devoted to the computation for sets consisting of a finite number of points. This has applications, for instance, in image processing. However, we would like to apply the Hausdorff distance to control and evaluate optimisation methods in level-set based shape optimisation. In this context, the involved sets are not finite point sets but characterised by level-set or signed distance functions. This paper discusses the computation of the Hausdorff distance between two such sets. We recall fundamental properties of the Hausdorff distance, including a characterisation in terms of distance functions. In numerical applications, this result gives at least an exact lower bound on the Hausdorff distance. We also derive an upper bound, and consequently a precise error estimate. By giving an example, we show that our error estimate cannot be further improved for a general situation. On the other hand, we also show that much better accuracy can be expected for non-pathological situations that are more likely to occur in practice. The resulting error estimate can be improved even further if one assumes that the grid is rotated randomly with respect to the involved sets.

Keywords: Hausdorff Distance, Signed Distance Function, Level-Set Method, Error Estimate, Stochastic Error Analysis

## 1 Introduction

The Hausdorff distance (also called Pompeiu-Hausdorff distance) is a classical measure for the difference between two sets:

###### Definition 1.

Let $A, B \subseteq \mathbb{R}^n$ be non-empty. Then the one-sided Hausdorff distance between $A$ and $B$ is defined as

$$d(A \to B) = \sup_{x \in A} \inf_{y \in B} |x - y|. \tag{1}$$

This allows us to introduce the Hausdorff distance:

$$d_H(A, B) = \max\bigl( d(A \to B),\, d(B \to A) \bigr). \tag{2}$$

While one can, in fact, define the Hausdorff distance between subsets of a general metric space, we are only interested in subsets of $\mathbb{R}^n$ in the following. Note that in general $d(A \to B) \ne d(B \to A)$, such that the additional symmetrisation step in (2) is necessary. For instance, if $A \subseteq B$, then $d(A \to B) = 0$ while $d(B \to A) > 0$ unless $\overline{A} = \overline{B}$. Since the Euclidean norm is continuous, it is easy to see that (1), and thus also $d_H$, is not changed if we replace one or both of the sets by their interior or closure. The set of compact subsets of $\mathbb{R}^n$ is turned into a metric space by $d_H$. For some general discussion about the Hausdorff distance, see Subsection 6.2.2 of [3]. The main theoretical properties that we need will be discussed in Section 2 in more detail.
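For finite point sets, Definition 1 can be evaluated directly. The following minimal Python sketch (our own illustration, not the paper's grid-based method, which targets sets given by level-set functions) shows the asymmetry of the one-sided distance and the need for the symmetrising maximum in (2); the function names are ours:

```python
import math

def one_sided(A, B):
    """d(A -> B): max over x in A of the distance from x to B, cf. (1)."""
    return max(min(math.dist(x, y) for y in B) for x in A)

def hausdorff(A, B):
    """Symmetrised Hausdorff distance, cf. (2)."""
    return max(one_sided(A, B), one_sided(B, A))

# A is a subset of B, so d(A -> B) = 0, but d(B -> A) = |(3,4)| = 5:
A = [(0.0, 0.0)]
B = [(0.0, 0.0), (3.0, 4.0)]
print(one_sided(A, B), one_sided(B, A), hausdorff(A, B))  # 0.0 5.0 5.0
```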

Historically, the Hausdorff distance is a relatively old concept. It was already introduced by Hausdorff in 1914, with a similar concept for the reciprocal distance between two sets dating back to Pompeiu in 1905. In recent decades, the Hausdorff distance has found plenty of applications in various fields. For instance, it has been applied in image processing [6], object matching [16], face detection [7] and for evolutionary optimisation [14], to name just a few selected areas. In most of these applications, the sets whose distance is computed are finite point sets. Those sets may come, for instance, from filtering a digital image or a related process. Consequently, there exists a lot of literature that deals with the computation of the Hausdorff distance for point sets, such as [11]. Methods exist also for sets of other structure, for instance, convex polygons [1].

We are specifically interested in applying the Hausdorff distance to measure and control the progress of level-set based shape optimisation algorithms such as the methods employed in [8] and [9]. In particular, the Hausdorff distance between successive iterates produced by some descent method may be useful to implement a stopping criterion or to detect when a descent run is getting stuck in a local minimum. For these applications, the sets $A$ and $B$ are typically open domains that are described by the sub-zero level sets of some functions. To the best of our knowledge, no analysis has been done so far on the computation of the Hausdorff distance for sets given in this way. A special choice for the level-set function of a domain $\Omega$ is its signed distance function:

###### Definition 2.

Let $\Omega \subset \mathbb{R}^n$ be bounded. We define the distance function of $\Omega$ as

$$d_\Omega(x) = \inf_{y \in \Omega} |x - y|.$$

Note that $d_\Omega(x) = 0$ for all $x \in \overline{\Omega}$. To capture also information about the interior of $\Omega$, we introduce the signed (or oriented) distance function as well:

$$\mathrm{sd}_\Omega(x) = \begin{cases} d_\Omega(x) & x \notin \Omega, \\ -d_{\mathbb{R}^n \setminus \Omega}(x) & x \in \Omega. \end{cases}$$

See also chapters 6 and 7 of [3]. Both $d_\Omega$ and $\mathrm{sd}_\Omega$ are Lipschitz continuous with constant one.
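For simple geometries the signed distance function is known in closed form; for a ball of radius $R$ centred at the origin one has $\mathrm{sd}_\Omega(x) = |x| - R$. A small Python sketch of this (our own toy illustration, not the Fast Marching computation used in the paper):

```python
import math

R = 1.0  # radius of the ball Omega = B_R(0)

def sd(x):
    """Signed distance of the ball: |x| - R, negative inside Omega."""
    return math.hypot(*x) - R

def d(x):
    """Unsigned distance function: vanishes on the closure of Omega."""
    return max(sd(x), 0.0)

print(sd((2.0, 0.0)))  # 1.0  (outside, distance to the sphere)
print(sd((0.0, 0.0)))  # -1.0 (centre, negative of distance to complement)
print(d((0.0, 0.5)))   # 0.0  (inside, unsigned distance vanishes)

# Lipschitz continuity with constant one at two sample points:
x, y = (2.0, 0.0), (0.0, 0.5)
assert abs(sd(x) - sd(y)) <= math.dist(x, y)
```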

If the signed distance function of $\Omega$ is not known, it can be calculated very efficiently from an arbitrary level-set function using the Fast Marching Method [15]. Conveniently, the Hausdorff distance can be characterised in terms of distance functions. It should not come as a big surprise that this is possible, considering that the distance function appears on the right-hand side of (1). This is a classical result, which we will recall and discuss in Section 2. For numerical calculations, though, the distance functions are known only on a finite number of grid points. In this case, the classical characterisation only yields an exact lower bound for $d_H(A, B)$. The main result of this paper is the derivation of upper bounds as well, such that the approximation error can be estimated. These results will be presented in Section 3. We also give an example to show that our estimates of Subsection 3.1 are sharp in the general case. In addition, we will see that much better estimates can be achieved for (a little) more specific situations. Since these situations still cover a wide range of sets that may occur in practical applications, this result is also useful. Subsection 3.3 gives a comparison of the actual numerical error for some situations in which $d_H(A, B)$ is known exactly. We will see that these results match the theoretical conclusions quite well. In Section 4, finally, we show that further improvements are possible if we assume that the orientation of the grid is not related to the sets $A$ and $B$. This can be achieved, for instance, by a random rotation of the grid, and is usually justified if the data comes from a measurement process.

Note that our code for the computation of (signed) distance functions as well as the Hausdorff distance following the method suggested here has been released as free software. It is included in the level-set package [10] for GNU Octave [5].

## 2 Characterising dH in Terms of Distance Functions

Let $A, B \subset \mathbb{R}^n$ be two non-empty compact sets throughout the remainder of the paper. In this case, it is easy to see that compactness implies that the various suprema and infima in Definition 1 and Definition 2 are actually attained:

###### Lemma 1.

For each $x \in \mathbb{R}^n$ there exist $a \in A$ and $b \in B$ such that $d_A(x) = |x - a|$ and $d_B(x) = |x - b|$. Furthermore, there also exist $x_0 \in A$ and $y_0 \in B$ such that $d(A \to B) = |x_0 - y_0|$. This, of course, implies that $d_H(A, B)$ can also be expressed in a similar form.

Let us now, for the rest of this section, turn our attention to the relation between the Hausdorff distance and distance functions. While most of this content is well-known and not new, we believe that it makes sense to give a comprehensive discussion. This is particularly true because the Hausdorff distance is a concept that can be quite unintuitive. Thus, we try to clearly explain potential pitfalls and give counterexamples where appropriate. This discussion forms the basis for the later sections, in which we present our new results.

### 2.1 Distance Functions

One may have the idea to “characterise” the sets $A$ and $B$ via their distance functions $d_A$ and $d_B$ from Definition 2. Since the distance functions are continuous and their difference is bounded, the supremum norm can be used to define a distance between $A$ and $B$ as $\|d_A - d_B\|_\infty$. We will now see that this distance is equal to the Hausdorff distance defined in Definition 1:

###### Theorem 1.

For each $x \in \mathbb{R}^n$, the inequality

$$|d_A(x) - d_B(x)| \le d_H(A, B) \tag{3}$$

holds. More precisely, one even has

$$d_H(A, B) = \|d_A - d_B\|_\infty = \sup_{x \in \mathbb{R}^n} |d_A(x) - d_B(x)|. \tag{4}$$
###### Proof.

This is a classical result, which is, for instance, given also on page 270 of [3]. Since the argumentation there contains a small gap, we provide a proof here nevertheless for convenience.

Assume first that $x \in A$. In this case, $d_A(x) = 0$, such that $|d_A(x) - d_B(x)| = d_B(x)$. Since

$$d_B(x) = \inf_{y \in B} |x - y| \le \sup_{a \in A} \inf_{y \in B} |a - y| = d(A \to B) \le d_H(A, B),$$

the estimate (3) follows. A similar argument can be applied if $x \in B$. Thus, it remains to consider the case $x \notin A \cup B$. Without loss of generality, let $d_A(x) \ge d_B(x)$. According to Lemma 1, we can choose $y \in B$ with $d_B(x) = |x - y|$. There also exists $z \in A$ such that $d_A(y) = |y - z|$. Then (3) follows, since

$$d_A(x) - d_B(x) \le |x - z| - |x - y| \le |z - y| = d_A(y) \le d(B \to A) \le d_H(A, B).$$

To show also (4), let us assume, without loss of generality, that $d_H(A, B) = d(B \to A)$. According to Lemma 1, there exists $y_0 \in B$ with $d_A(y_0) = d(B \to A)$. But since

$$\|d_A - d_B\|_\infty \ge |d_A(y_0) - d_B(y_0)| = d_A(y_0) = d_H(A, B), \tag{5}$$

the claim follows. ∎

Theorem 1 forms the foundation for the remainder of our paper: It gives a representation of the Hausdorff distance in terms of the distance functions. Furthermore, it is also easy to actually evaluate this representation in practice. In particular, if $d_A$ and $d_B$ are given numerically on a grid, one can just consider $|d_A(x) - d_B(x)|$ for all grid points $x$. The largest difference obtained in this way is guaranteed to be at least a lower bound for $d_H(A, B)$. If the maximising point for (4) is not a grid point, however, we can not expect to get equality with the Hausdorff distance. Section 3 will be devoted to a discussion of the possible error introduced in this way.
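This grid-based evaluation can be sketched in a few lines of Python. The geometry below is our own toy example (two concentric closed discs with radii $r < R$, for which $d_H(A, B) = R - r$ is known exactly and the distance functions are available in closed form):

```python
import math

def dist_disc(x, radius):
    """Distance function of a closed disc around the origin."""
    return max(math.hypot(*x) - radius, 0.0)

r, R, h = 1.0, 2.0, 0.1
# Uniform grid covering [-4, 4]^2, wide enough to contain both discs.
grid = [(i * h, j * h) for i in range(-40, 41) for j in range(-40, 41)]

# Lower bound from (4), maximised over grid points only:
d_tilde = max(abs(dist_disc(x, r) - dist_disc(x, R)) for x in grid)
print(d_tilde)  # close to the exact value d_H(A, B) = R - r = 1
```

For grid points with $|x| \ge R$ the difference of the two distance functions equals $R - r$ exactly, so in this particular situation the lower bound already attains the Hausdorff distance (up to round-off).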

It is sometimes convenient to use not the Hausdorff distance itself, but the so-called complementary Hausdorff distance $d_H(\mathbb{R}^n \setminus A, \mathbb{R}^n \setminus B)$ instead. (Particularly when dealing with open domains in applications.) See, for instance, [2]. In this case, our assumption of compact sets is not fulfilled any more, since the complements are unbounded if the sets themselves are bounded. However, one can verify that Lemma 1 and Theorem 1 are still valid also for this situation.

### 2.2 Signed Distances

We turn our focus now to signed distance functions: Since $\mathrm{sd}_A$ and $\mathrm{sd}_B$ are continuous functions as well, also $\|\mathrm{sd}_A - \mathrm{sd}_B\|_\infty$ can be used as a distance measure between $A$ and $B$. See also Subsection 7.2.2 of [3]. This distance is, however, not equal to $d_H(A, B)$:

###### Example 1.

Let $0 < r < R$ be given, and define $A = \overline{B_R(0)} \subset \mathbb{R}^n$ and $B = \overline{A \setminus B_r(0)}$. This situation is depicted in Figure 1. Then $d_H(A, B) = r$, as highlighted in the sketch with the red line. On the other hand, $\mathrm{sd}_A(0) = -R$ while $\mathrm{sd}_B(0) = r$. Hence,

$$\|\mathrm{sd}_A - \mathrm{sd}_B\|_\infty \ge |\mathrm{sd}_A(0) - \mathrm{sd}_B(0)| = R + r > r = d_H(A, B).$$

In fact, one can show that $\|\mathrm{sd}_A - \mathrm{sd}_B\|_\infty$ induces a stronger metric between the sets than either the complementary or the ordinary Hausdorff distance alone:

###### Theorem 2.

Let $x \in \mathbb{R}^n$. Then

$$|\mathrm{sd}_A(x) - \mathrm{sd}_B(x)| = |d_A(x) - d_B(x)| + \left| d_{\mathbb{R}^n \setminus A}(x) - d_{\mathbb{R}^n \setminus B}(x) \right|. \tag{6}$$

Consequently, also

$$\max\bigl( d_H(A, B),\, d_H(\mathbb{R}^n \setminus A, \mathbb{R}^n \setminus B) \bigr) \le \|\mathrm{sd}_A - \mathrm{sd}_B\|_\infty \le d_H(A, B) + d_H(\mathbb{R}^n \setminus A, \mathbb{R}^n \setminus B). \tag{7}$$
###### Proof.

Choose $x \in \mathbb{R}^n$ arbitrary. If $x \in A \cap B$, then

$$|d_A(x) - d_B(x)| = 0, \qquad \left| d_{\mathbb{R}^n \setminus A}(x) - d_{\mathbb{R}^n \setminus B}(x) \right| = |\mathrm{sd}_A(x) - \mathrm{sd}_B(x)|.$$

This implies the claim. For $x \in A \setminus B$ instead, we get

$$|d_A(x) - d_B(x)| = d_B(x), \qquad \left| d_{\mathbb{R}^n \setminus A}(x) - d_{\mathbb{R}^n \setminus B}(x) \right| = d_{\mathbb{R}^n \setminus A}(x), \qquad |\mathrm{sd}_A(x) - \mathrm{sd}_B(x)| = d_{\mathbb{R}^n \setminus A}(x) + d_B(x).$$

Taking these together, we see that the claim is satisfied also in this case. The two remaining cases ($x \in B \setminus A$ and $x \notin A \cup B$) can be handled with analogous arguments. The relation (7) follows by taking the supremum over $x \in \mathbb{R}^n$ in (6). ∎
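Identity (6) can also be checked numerically. The following sketch does so for two concentric intervals $A = [-1, 1]$ and $B = [-2, 2]$ in one dimension, whose (signed) distance functions are known in closed form; the helper names are ours:

```python
def sd(x, a):
    """Signed distance to the interval [-a, a]."""
    return abs(x) - a

def d(x, a):
    """Distance to the interval [-a, a]."""
    return max(abs(x) - a, 0.0)

def d_comp(x, a):
    """Distance to the complement of [-a, a] in the real line."""
    return max(a - abs(x), 0.0)

# |sd_A - sd_B| = |d_A - d_B| + |d_{A^c} - d_{B^c}| pointwise, cf. (6):
for x in [-3.0, -1.5, 0.0, 0.7, 1.5, 3.0]:
    lhs = abs(sd(x, 1.0) - sd(x, 2.0))
    rhs = abs(d(x, 1.0) - d(x, 2.0)) + abs(d_comp(x, 1.0) - d_comp(x, 2.0))
    assert abs(lhs - rhs) < 1e-12
print("identity (6) verified at all sample points")
```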

Unfortunately, equality does not hold in general for the right part of (7). This is due to the fact that taking the supremum in (6) may yield different maximisers for the two terms on the right-hand side. One can also construct a simple example where this is, indeed, the case:

###### Example 2.

Choose and . Then , while . For the signed distance functions, we have . See also Figure 2, which sketches this situation.

That $\|\mathrm{sd}_A - \mathrm{sd}_B\|_\infty$ is strictly stronger than $d_H$ also manifests itself in the induced topology on the space of compact subsets of $\mathbb{R}^n$:

###### Example 3.

Let $A = [-1, 1] \subset \mathbb{R}$. For $k \in \mathbb{N}$, we define

$$A_k = A \setminus \left( -\tfrac{1}{k}, \tfrac{1}{k} \right).$$

This defines a compact set for each $k$. Furthermore, $d_H(A_k, A) = \tfrac{1}{k} \to 0$ as $k \to \infty$. In other words, $A_k \to A$ in the Hausdorff distance. However, $\mathrm{sd}_{A_k}(0) = \tfrac{1}{k}$ while $\mathrm{sd}_A(0) = -1$. In particular, $\|\mathrm{sd}_{A_k} - \mathrm{sd}_A\|_\infty \not\to 0$.

Example 3 implies also that the reverse of (7),

$$\|\mathrm{sd}_A - \mathrm{sd}_B\|_\infty \le C \cdot d_H(A, B)$$

can not hold for any constant $C > 0$. Thus, one really needs both the ordinary and the complementary Hausdorff distance to get an upper bound on $\|\mathrm{sd}_A - \mathrm{sd}_B\|_\infty$. In other words, $d_H(A, B)$ and $\|\mathrm{sd}_A - \mathrm{sd}_B\|_\infty$ are not equivalent metrics. Compare also Example 2 in [2]: There, it is shown that the topologies induced by the ordinary and the complementary Hausdorff distance are not the same. This is done with a construction similar to Example 3.

### 2.3 The Maximum Distance Function

In the final part of this section, we would like to introduce another lower bound for $d_H(A, B)$. This additional bound may improve the approximation of $d_H(A, B)$ if we are not able to maximise over all $x \in \mathbb{R}^n$ but only over, for instance, grid points. However, we ultimately come to the conclusion that this bound is probably not very useful for a practical computation of $d_H(A, B)$. This will be discussed further at the end of the current subsection. Hence, we will not make use of the results here in the later Section 3. Since the concepts are, nevertheless, interesting at least from a theoretical point of view, we still give a brief presentation here. As far as we are aware, these results have not been discussed in the literature before.

Our initial motivation is the following: We have seen in Theorem 1 that the Hausdorff distance can be expressed as $\|d_A - d_B\|_\infty$. On the other hand, $\|\mathrm{sd}_A - \mathrm{sd}_B\|_\infty$ does not give the Hausdorff distance. If we are given $\mathrm{sd}_A$ and $\mathrm{sd}_B$ for the computation, this is unfortunate. While it is, of course, trivial to get $d_A$ and $d_B$ from the signed distance functions, this process throws away valuable information. In particular, the information from the signed distance functions at points inside the sets can not be used. By defining yet another type of “distance function”, which now gives the maximal distance to any point in a set, we get rid of this qualitative difference between interior and exterior points:

###### Definition 3.

Let $\Omega \subset \mathbb{R}^n$ be bounded and non-empty. The maximum distance function of $\Omega$ is then

$$\mathrm{md}_\Omega(x) = \sup_{y \in \Omega} |x - y|. \tag{8}$$

Since $\Omega$ is bounded, this is well-defined for any $x \in \mathbb{R}^n$. If $\Omega$ is compact in addition, an analogous result to Lemma 1 holds.

Indeed, $\mathrm{md}_\Omega$ is always non-negative (assuming $\Omega \ne \emptyset$). For the value $\mathrm{md}_\Omega(x)$, it does not immediately matter whether $x \in \Omega$ or not. Furthermore, also the maximum distance function gives a lower bound on the Hausdorff distance, similar to (3):

###### Theorem 3.

Let $A, B \subset \mathbb{R}^n$ be compact and choose $x \in \mathbb{R}^n$ arbitrarily. Then

$$|\mathrm{md}_A(x) - \mathrm{md}_B(x)| \le d_H(A, B).$$
###### Proof.

The proof is similar to the proof of Theorem 1: Let $x \in \mathbb{R}^n$ be given. There exist $y \in B$ with $\mathrm{md}_B(x) = |x - y|$ and $z \in A$ with $d_A(y) = |y - z|$. Note that $|y - z| \le d(B \to A) \le d_H(A, B)$ and $\mathrm{md}_A(x) \ge |x - z|$. Thus

$$\mathrm{md}_B(x) - \mathrm{md}_A(x) \le |x - y| - |x - z| \le |y - z| \le d_H(A, B).$$

This completes the proof if we apply the same argument also with the roles of $A$ and $B$ exchanged. ∎

Unfortunately, the analogue of (4) does not hold. In fact, it is possible that $\mathrm{md}_A = \mathrm{md}_B$ everywhere on $\mathbb{R}^n$ but the sets $A$ and $B$ are quite dissimilar. Such a situation is depicted in Figure 3. Due to the “outer ring”, which is part of both $A$ and $B$, the maximum in (8) is always achieved with some $y$ from this ring. A typical situation is shown with the point $x$ and the red line, which highlights its maximum distance to both $A$ and $B$. Consequently, $\mathrm{md}_A = \mathrm{md}_B$, and the differences between $A$ and $B$ inside the ring are not “seen” by the maximum distance functions at all. Thus, we have to accept that $\|\mathrm{md}_A - \mathrm{md}_B\|_\infty < d_H(A, B)$ can be the case.

The situations where the maximum distance functions actually carry valuable information (as opposed to Figure 3) are actually similar to those characterised in Definition 5. For such situations, the additional information in $\mathrm{md}_A$ and $\mathrm{md}_B$ could, indeed, be used to improve the approximation of $d_H(A, B)$. However, as we will see below in Theorem 5, those are also the situations where (4) alone already gives a very close estimate of $d_H(A, B)$. In these cases, we are not really in need of additional information. On the other hand, for situations like Figure 3 also the approximation of $d_H(A, B)$ from grid points is actually difficult and extra data would be very desirable. But particularly for those situations, the maximum distance functions do not provide any extra data! Furthermore, it is not clear how $\mathrm{md}_A$ and $\mathrm{md}_B$ can actually be computed from, say, the level-set functions of $A$ and $B$. It seems plausible that those functions are the viscosity solutions of an equation similar to the Eikonal equation, and so it may be possible to develop either a Fast Marching scheme or some other numerical method. However, since we have just argued that we do not expect a real benefit from the usage of the maximum distance functions in practice, the effort involved seems not worthwhile. For the remainder of this paper, we will thus concentrate on Theorem 1 as the sole basis for our numerical computation of $d_H(A, B)$.

## 3 Estimation of the Error on a Grid

With the basic theoretical background of Section 2, let us now consider the situation on a grid. In particular, we assume that we have a rectangular, bounded grid in $\mathbb{R}^n$ with uniform spacing $h > 0$ in each dimension. (While it is possible to generalise some of the results to non-uniform grids in a straight-forward way, we assume a uniform spacing for simplicity.) We denote the finite set of all grid points by $N$, and the set of all grid cells by $C$. For each cell $c \in C$, $N(c)$ is the set of all grid points that span the cell (i. e., its corners). For example, for a grid in $\mathbb{R}^2$ that extends from the origin into the first quadrant, we have

$$N = \bigl\{ x_{ij} \bigm| i, j = 0, \ldots, k - 1,\; x_{ij} = (i, j)\,h \bigr\}, \qquad C = \bigl\{ c_{ij} \bigm| i, j = 1, \ldots, k - 1 \bigr\}, \qquad N(c_{ij}) = \{ x_{i-1,j-1},\, x_{i,j-1},\, x_{ij},\, x_{i-1,j} \}.$$

Let us assume that we know the distance functions of $A$ and $B$ on each grid point, i. e., $d_A(x)$ and $d_B(x)$ for all $x \in N$. We furthermore assume that these values are known without approximation error. This is, of course, not realistic in practice. However, the approximation error in describing the geometries and computing their distance functions is a matter outside the scope of this paper. Finally, let us also assume that the grid is large enough to cover the sets. In particular: For each $y \in A \cup B$, there should exist a grid cell $c \in C$ such that $y$ is contained in the convex hull of the corners of $c$. If this is not the case, the grid is simply inadequate to capture the geometrical situation.

### 3.1 Worst-Case Estimates

In order to approximate $d_H(A, B)$ from the distance functions on our grid, we make use of (4). In particular, we propose the following straight-forward approximation:

$$d_H(A, B) \approx \tilde{d}(A, B) = \max_{x \in N} |d_A(x) - d_B(x)|. \tag{9}$$

From (3), we know that this is, at least, an exact lower bound. However, in the general case, an approximation error

$$0 \le \delta = \bigl| d_H(A, B) - \tilde{d}(A, B) \bigr| = d_H(A, B) - \tilde{d}(A, B)$$

will be introduced by using (9). This is due to the fact that we only maximise over grid points. The real maximiser of the supremum in (4), on the other hand, may not be a grid point.

Let us now analyse the approximation error $\delta$. We have seen in the proof of Theorem 1 that

$$d_H(A, B) = \sup_{y \in A \cup B} |d_A(y) - d_B(y)| = \max_{c \in C} \sup_{y \in \mathrm{co}(N(c))} |d_A(y) - d_B(y)|.$$

Note that this is still an exact representation, with no approximation error introduced so far. We have just split up the supremum over $A \cup B$ into grid cells, but we still take into account all points contained in a grid cell, not just its corners. This is achieved by using the convex hull $\mathrm{co}(N(c))$ instead of the finite set $N(c)$ alone. On the other hand, the approximation (9) can be formulated as

$$\tilde{d}(A, B) = \max_{c \in C} \max_{x \in N(c)} |d_A(x) - d_B(x)|.$$

Comparing both equations, we see that the approximation error is introduced precisely by the step from $\mathrm{co}(N(c))$ to $N(c)$. We can now formulate and prove a very general upper bound on $d(A \to B)$:

###### Theorem 4.

Let $x \in N$ be a grid point and $y \in \mathbb{R}^n$ be arbitrary. We set

$$t(x, y) = \begin{cases} |x - y| & x \in A, \\ 2\,|x - y| & x \notin A. \end{cases}$$

For a cell $c \in C$, we define furthermore

$$\bar{d}(c) = \sup_{y \in \mathrm{co}(N(c))} \min_{x \in N(c)} \bigl( |d_A(x) - d_B(x)| + t(x, y) \bigr). \tag{10}$$

Then

$$d(A \to B) \le \max_{c \in C'(A)} \bar{d}(c).$$

Here, $C'(A) \subseteq C$ is the set of all grid cells which contain some part of $A$.

Similarly, $d(B \to A)$ and thus $d_H(A, B)$ can be estimated.

###### Proof.

We will show that

$$\sup_{y \in A \cap \mathrm{co}(N(c))} |d_A(y) - d_B(y)| \le \bar{d}(c)$$

for each $c \in C'(A)$. The claim then follows from (5). So choose $y \in A \cap \mathrm{co}(N(c))$ and $x \in N(c)$. It remains to verify that

$$|d_A(y) - d_B(y)| = d_B(y) \le |d_A(x) - d_B(x)| + t(x, y).$$

Assume first that $x \in A$. Since $d_B$ has Lipschitz constant one, we really get

$$|d_A(x) - d_B(x)| + t(x, y) = d_B(x) + |x - y| \ge d_B(y)$$

in this case. Assume now $x \notin A$. Since $y \in A$ and thus $d_A(y) = 0$, Lipschitz continuity of $d_A$ implies that $d_A(x) \le |x - y|$. Using this auxiliary result, we get that also in this case

$$|d_A(x) - d_B(x)| + t(x, y) \ge d_B(x) - d_A(x) + 2\,|x - y| \ge d_B(x) + |x - y| \ge d_B(y).$$

Hence, the claim is shown. ∎

Even though the formulation of Theorem 4 is complicated, the idea behind it is quite simple: Since the distance functions are Lipschitz continuous, also the function $y \mapsto |d_A(y) - d_B(y)|$, which we have to maximise over $\mathrm{co}(N(c))$ for each grid cell, is Lipschitz continuous. This allows us to estimate the maximum in terms of the function’s values at the corners (which are known). We are even allowed to try all corners and use the smallest resulting upper bound. This is what happens in (10). Furthermore, the Lipschitz constant depends on whether or not the corner $x$ is in $A$. (If it is, $d_A(x)$ vanishes, which reduces the Lipschitz constant to just that of $d_B$. Otherwise, we have to use two as the full Lipschitz constant of $|d_A - d_B|$.) This is the role that $t$ plays. It gives the “distance” between $x$ and $y$ based on the applicable Lipschitz constant.

Coupled with the fact that $\tilde{d}(A, B)$ is a lower bound for the exact Hausdorff distance, the upper bound in Theorem 4 allows us now to estimate $\delta$. However, evaluating (10) is difficult and expensive in practice (although it can be done in theory). Hence, we will now draw some conclusions that simplify the upper bound. As a first result, let us consider the worst case where $x \notin A$ for all corners $x$ of some cell:

###### Corollary 1.

Theorem 4 implies for the error estimate:

$$\delta \le \sqrt{n} \cdot h.$$
###### Proof.

Let $c \in C$ be some grid cell and $y \in \mathrm{co}(N(c))$. Then there exists $x \in N(c)$ such that

$$t(x, y) \le 2\,|x - y| \le 2 \cdot \frac{\sqrt{n} \cdot h}{2} = \sqrt{n} \cdot h.$$

This is simply due to the fact that the grid cell’s longest diagonal has length $\sqrt{n} \cdot h$. Consequently, even in the worst case the nearest corner has at most half that distance to $y$. Hence also

$$\min_{x \in N(c)} \bigl( |d_A(x) - d_B(x)| + t(x, y) \bigr) \le \max_{x \in N(c)} |d_A(x) - d_B(x)| + \min_{x \in N(c)} t(x, y) \le \max_{x \in N(c)} |d_A(x) - d_B(x)| + \sqrt{n} \cdot h.$$

This estimate can be used for $\bar{d}(c)$ from (10). Consequently, Theorem 4 implies

$$d(A \to B) \le \max_{c \in C} \max_{x \in N(c)} |d_A(x) - d_B(x)| + \sqrt{n} \cdot h = \tilde{d}(A, B) + \sqrt{n} \cdot h.$$

Since the same estimate also works for $d(B \to A)$, the claim follows. ∎

Taking a closer look, though, the worst-case situation considered above is quite strange. In principle, it can happen that there is some cell $c$ with $c \cap A \ne \emptyset$ but for which all corners are not in $A$. Such a situation is depicted in Figure 4. However, in practice such a case is very unlikely to occur. In particular, assume that we describe the set $A$ by a level-set function $\phi$, and that $\phi \ge 0$ holds for all corners of some grid cell $c$. In that case, there is no way of knowing whether, in reality, there is some part of $A$ inside the cell or not. The grid is simply too coarse to “see” such geometric details. Consequently, it makes sense to assume the simplest possible situation, namely that $c \cap A = \emptyset$ for all such cells $c$. Thus, we make the following additional assumption:

###### Definition 4.

Consider grid cells $c \in C$ such that $x \notin A$ for all corners $x \in N(c)$. If $c \cap A = \emptyset$ for all those $c$, the grid is said to be suitable for $A$.

In the case of a grid that is suitable for both $A$ and $B$, we get the “reduced Lipschitz constant” in (10) for at least one corner per relevant cell. This allows us to lower the error estimate:

###### Corollary 2.

Let the grid be suitable for $A$ and $B$. Furthermore, we introduce the dimensional constant

$$\Delta_n = \sup_{y \in Q} \min\Bigl( |y|,\; \min_{x \in N'} 2\,|x - y| \Bigr). \tag{11}$$

Here, $Q = [0, 1]^n$ is the unit square, and

$$N' = \bigl\{ x \in \mathbb{R}^n \bigm| x_i \in \{0, 1\} \text{ for all } i = 1, \ldots, n \bigr\} \setminus \{0\}$$

is the set of its corners except for the origin. Then,

$$\delta \le \Delta_n \cdot h.$$
###### Proof.

Let $c \in C$ be a cell with $c \cap A \ne \emptyset$. Since the grid is assumed to be suitable, we know that there exists at least one corner $x_0 \in N(c)$ with $x_0 \in A$. Hence, for arbitrary $y \in \mathrm{co}(N(c))$,

$$\min_{x \in N(c)} t(x, y) \le \min\Bigl( |y - x_0|,\; \min_{x \in N(c) \setminus \{x_0\}} 2\,|y - x| \Bigr) \le \Delta_n \cdot h.$$

With this, the claim follows as in the proof of Corollary 1. ∎

The most difficult part of Corollary 2 is probably the strange dimensional constant $\Delta_n$ defined in (11). This constant replaces the factor $\sqrt{n}$ from Corollary 1. It can be interpreted like this: Let spherical fronts propagate starting from all corners of the unit square $Q$. The front starting at the origin has speed one, while the other fronts have speed $\tfrac{1}{2}$. Over time, the fronts will hit each other, and will reach all parts of $Q$. The value of $\Delta_n$ is precisely the time it takes until all points in $Q$ have been hit by at least one front. For the case $n = 2$, these arrival times are shown in Figure 5a. The correct value of $\Delta_2$ is the maximum attained at both spots with the darkest red (one in the north and one in the east). Figure 5b shows the maximising points (red and black) over the unit cube for $n = 3$. Since the expression that is maximised in (11) is symmetric with respect to permutation of the coordinates, there are six maximisers. The highlighted one sits at the intersection of the spheres originating from the three corners marked in blue. Based on these observations and some purely geometrical arguments, one can calculate

$$\Delta_1 = \frac{2}{3} \approx 0.67, \qquad \Delta_2 = \frac{2}{3}\sqrt{5 - \sqrt{7}} \approx 1.02, \qquad \Delta_3 = \frac{2}{3}\sqrt{8 - \sqrt{19}} \approx 1.27.$$
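The value of $\Delta_2$ can be verified numerically by brute force. The sketch below simply samples the unit square on a fine lattice, which approximates the supremum in (11) from below:

```python
import math

corners = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # the set N' for n = 2

def arrival(y):
    """min(|y|, 2 * distance to the nearest corner in N'), cf. (11)."""
    return min(math.hypot(*y), min(2.0 * math.dist(y, x) for x in corners))

m = 400  # sampling resolution of the unit square
delta2 = max(arrival((i / m, j / m))
             for i in range(m + 1) for j in range(m + 1))
exact = (2.0 / 3.0) * math.sqrt(5.0 - math.sqrt(7.0))
print(delta2, exact)  # both close to 1.023
```

Since the sampled maximum can never exceed the true supremum, the computed value approaches the closed-form constant from below as the resolution increases.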

These constants are a clear improvement over the estimate $\sqrt{n}$ of Corollary 1. In fact, the bound in Corollary 2 is actually sharp. To demonstrate this, we will conclude this subsection with an example in two dimensions that really attains the maximal error $\delta = \Delta_2 \cdot h$:

###### Example 4.

For simplicity, assume $h = 1$. We consider the situation sketched in Figure 6. Observe first that all grid points except $a$, $b$, $c$ and $d$ are part of both $A$ and $B$, and thus $d_A = d_B = 0$ for them. Consequently, we only have to consider these four points in order to find $\tilde{d}(A, B)$. For symmetry reasons, it is actually enough to concentrate only on $a$ and $b$. The point $p$ corresponds to the position with maximal arrival time, as seen also in Figure 5a. It is characterised by requiring

$$|a - p| = |d - p| = \frac{r}{2}, \qquad |b - p| = |c - p| = r. \tag{12}$$

Solving these equations for the coordinates of $p$ and the radius $r$ yields

$$p = \left( \frac{8 - \sqrt{7}}{6},\; \frac{1}{2} \right), \qquad r = \frac{2}{3}\sqrt{5 - \sqrt{7}}.$$

Note specifically that $r = \Delta_2$. (In fact, a construction similar to this one can be used to calculate $\Delta_2$ in the first place.) The relations (12) can also be seen in the sketch: The dotted circle has radius $r$ and centre $p$. The points $b$ and $c$ lie on it. The two smaller circles (which define the exclusion from $A$) have centres $a$ and $d$ with radius $\frac{r}{2}$, and $p$ lies on both of them.

$B$ is chosen in such a way that $p$ is also at the centre of its hole. Consequently, $p$ is the point in $A$ that achieves $d(A \to B)$. This is indicated by the red line. Let us also introduce $\rho$ as the width of the small ring between the dotted circle and the dark region. Then $d_A(p) = 0$ and $d_H(A, B) = d_B(p) = r + \rho$. Furthermore, note that

$$d_A(a) = \frac{r}{2}, \qquad d_B(a) = \frac{r}{2} + \rho.$$

Hence,

$$\tilde{d}(A, B) = |d_A(a) - d_B(a)| = |d_A(b) - d_B(b)| = \rho.$$

This also implies that $\delta = d_H(A, B) - \tilde{d}(A, B) = r = \Delta_2 \cdot h$, which is, indeed, the largest possible bound permitted by Corollary 2.

### 3.2 External Hausdorff Distances

As we have promised, the situation from Example 4 shows that one can not, in general, expect a better error estimate than Corollary 2. However, considering Figure 6, we also observe that the situation there is quite strange. Thus, there is hope that we can get stronger estimates if we add some more assumptions on the geometrical situation. This is the goal of the current subsection. It will turn out in Theorem 5 that this is, indeed, possible. Consider, for example, Figure 7a: There, the Hausdorff distance is attained between a point of $A$ and a point of $B$ on the external black line. Furthermore, all points $x$ along the external black line satisfy $|d_A(x) - d_B(x)| = d_H(A, B)$. Consequently, all those points are maximisers of (4). If a grid point happens to lie somewhere on this line, $\tilde{d}(A, B)$ is exact. But even if this is not the case (as shown in the figure), $|d_A - d_B|$ will be very close to $d_H(A, B)$ for grid points that are far away from the sets and close to the line. In all of these cases, we can expect $\tilde{d}(A, B)$ to be much closer to $d_H(A, B)$ than the bounds from the previous Subsection 3.1 tell us. Furthermore, the estimate will be more precise the further away we can go on the external line. Two conditions determine how far that really is: First, of course, the size of our finite grid is a clear restriction. Second, we need that the distances $d_A(x)$ and $d_B(x)$ are realised by the same two points for all points $x$ on the external line that we consider. This means that the points realising the Hausdorff distance must be the closest points to $x$ of the sets $A$ and $B$, respectively. The latter is a purely geometrical condition on $A$ and $B$, and is not related to the grid. Let us formalise it:

###### Definition 5.

Assume that $d_H(A, B) = d(A \to B) = d_B(a_0) = |a_0 - b_0|$ with $a_0 \in A$ and $b_0 \in B$. Let $r > 0$ and set $R = r + d_H(A, B)$ as well as $\nu = \frac{a_0 - b_0}{|a_0 - b_0|}$ and $c = a_0 + r\,\nu$. We say that $A$ and $B$ admit an external Hausdorff distance with radius $r$ if

$$B_r(c) \cap A = \emptyset \quad \text{and} \quad B_R(c) \cap B = \emptyset. \tag{13}$$

The condition in Definition 5 is quite technical, but it is relatively easy to understand and verify for concrete situations (as long as it is known where the Hausdorff distance is attained). It is related to the skeleton of the sets $A$ and $B$, for which we refer to Section 3.3 of [3]. We will see later in Corollary 3 that, for instance, convex sets admit an external Hausdorff distance for arbitrary radius $r > 0$, and that Definition 5 applies in a lot of additional practical situations. Even for non-convex sets, an external Hausdorff distance with some restriction on the possible $r$ may be admissible. See, for instance, the situation in Figure 7b. A possible choice for $c$ and $r$ is shown there. (The furthest possible $c$ is at the end of the black line.) The dotted circles are $\partial B_r(c)$ and $\partial B_R(c)$. One can see that the inner one only touches $A$ at $a_0$, and the outer one does the same with $B$ at $b_0$. This is the geometrical meaning of (13). Due to this property, we know that $d_A(c) = r$ and $d_B(c) = R$. One can also verify that the condition (13) gets strictly stronger if we increase $r$. In other words, if an external Hausdorff distance with radius $r$ is admissible, this is automatically also the case for all radii $0 < r' < r$.

Based on this concept of external Hausdorff distances, we can now formalise the motivating argument about better error bounds for this situation:

###### Theorem 5.

Let $A$ and $B$ admit an external Hausdorff distance with radius $r > 0$. Let $h$ be the grid spacing, and assume that the grid is chosen large enough. Then, for $h \to 0$,

$$\delta \le \frac{n\,h^2}{2r} + O(h^3). \tag{14}$$
###### Proof.

We use the same notation as in Definition 5. In particular, let $d_H(A, B) = d(A \to B) = |a_0 - b_0|$ with $a_0 \in A$ and $b_0 \in B$. We also use $c$ and $R$ as in the definition. If $z$ is a point next to the straight line through $c$ and $b_0$, we can project it onto this line. Let the resulting point be called $c'$, and set $a = |c - c'|$ as well as $b = |z - c'|$; then $c\,c'\,z$ and $a_0\,c'\,z$ are right triangles. This situation is shown in Figure 8. According to the sketch, we set

$$\rho = R - |z - c| = R - \sqrt{a^2 + b^2}.$$

Note that the dotted circle around $c$ through $z$ is entirely contained in $B_R(c)$. By (13), this implies that $d_B(z) \ge \rho$. Since $a_0 \in A$, we also know $d_A(z) \le |z - a_0| = \sqrt{b^2 + (r - a)^2}$. Both inequalities together yield

$$\tilde{d}(A, B) \ge |d_A(z) - d_B(z)| \ge d_B(z) - d_A(z) \ge R - \sqrt{a^2 + b^2} - \sqrt{b^2 + (r - a)^2}.$$

(Assuming that $z$ is a grid point.) On the other hand, since we have an external Hausdorff distance, also

$$d_H(A, B) = \left| d_A(c') - d_B(c') \right| = d_B(c') - d_A(c') = (R - a) - (r - a)$$

holds. Hence,

$$\delta = d_H(A, B) - \tilde{d}(A, B) \le \sqrt{b^2 + a^2} - a + \sqrt{b^2 + (r - a)^2} - (r - a). \tag{15}$$

So far, $z$ was just an (almost) arbitrary grid point. We will now try to choose it in a way that reduces the bound on $\delta$ as much as possible. For this, observe that (15) contains two terms of the form $\sqrt{b^2 + x^2} - x$ and that this function is decreasing in $x$. Thus, in order to get a small bound, we would like to choose both values of $x$, namely $a$ and $r - a$, as large as possible. Consequently, we want $a \approx \frac{r}{2}$. Let $m$ be the precise midpoint between $c$ and $a_0$. Since $\sqrt{n} \cdot h$ is the longest diagonal of the grid cells, there exists a grid point $z$ with $|z - m| \le \frac{\sqrt{n}}{2} h$. Choosing $c'$ as the projection of $z$ onto the line as before, this implies that

$$a \ge \frac{r}{2} - |m - c'| \ge \frac{r - \sqrt{n} \cdot h}{2}, \qquad r - a \ge \frac{r - \sqrt{n} \cdot h}{2}, \qquad b = |z - c'| \le \frac{\sqrt{n}}{2} h.$$

(Since $a + (r - a) = r$, not all of these estimates can be sharp at the same time. It may be possible to refine them and get smaller bounds below, but we do not attempt to do that for simplicity.) Substituting in (15) yields

$$\delta \le \sqrt{n\,h^2 + \left( r - \sqrt{n} \cdot h \right)^2} - \left( r - \sqrt{n} \cdot h \right).$$

Series expansion of this result for $h \to 0$ finally implies the claimed estimate (14). ∎
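For completeness, the concluding series expansion can be carried out as follows (a sketch of the step, with the abbreviations $x = r - \sqrt{n}\,h$ and $\beta = \sqrt{n}\,h$, which are not used elsewhere):

```latex
\sqrt{\beta^2 + x^2} - x
  = x\Bigl(\sqrt{1 + \beta^2/x^2} - 1\Bigr)
  = \frac{\beta^2}{2x} + O(h^4)
  = \frac{n h^2}{2\,(r - \sqrt{n}\,h)} + O(h^4)
  = \frac{n h^2}{2r} + O(h^3).
```

The last equality uses $\frac{1}{r - \sqrt{n}\,h} = \frac{1}{r}\bigl(1 + \frac{\sqrt{n}\,h}{r} + O(h^2)\bigr)$, whose first-order term contributes at order $h^3$.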

A particular situation in which Definition 5 is satisfied is that of convex sets (see Figure 7a). For them, (14) holds with arbitrary $r > 0$ as long as the grid is large enough to accommodate for the far-away points:

###### Corollary 3.

Let $A$ and $B$ be compact and convex. Then $A$ and $B$ admit an external Hausdorff distance for arbitrary radius $r > 0$. Consequently, (14) applies for all $r$ for which the grid is large enough.

###### Proof.

We exclude the trivial case $A = B$, since (14) is obviously fulfilled for that situation anyway. Assume, without loss of generality, that $d_H(A, B) = d(A \to B) > 0$. Let $a_0 \in A$ and $b_0 \in B$ be given with $d(A \to B) = d_B(a_0) = |a_0 - b_0|$ according to Lemma 1. Choose $r > 0$ arbitrarily and let $c$ and $R$ be as in Definition 5.

The assumption means that $b_0$ is the closest point in $B$ to $a_0$. In other words, $B \cap B_\Delta(a_0) = \emptyset$, where we have set $\Delta = d_H(A, B)$ for simplicity. This is depicted with the dotted circle (which is outside of $B$) in Figure 9. The dotted lines indicate half-planes perpendicular to the line through $c$ and $b_0$ and passing through $a_0$ and $b_0$, respectively. Assume for a moment that we have some point $q \in B$ that is “above” the “lower” half-plane. Due to convexity of $B$, this would imply that the whole line segment from $q$ to $b_0$ must be inside of $B$. This, however, contradicts $B \cap B_\Delta(a_0) = \emptyset$, as indicated by the red part of the segment. Hence, the half-plane through $b_0$ separates $B$ and $B_R(c)$. Similarly, we can show that the half-plane through $a_0$ separates $A$ and $B_r(c)$: Assume that some $q \in A$ is “above” this half-plane. Then $d_B(q) > \Delta$ must be the case, as shown by the blue line. But this is a contradiction, since $d_B(y) \le d(A \to B) = \Delta$ for all $y \in A$. These separation properties of the half-planes, however, in turn imply (13). Thus, everything is shown. ∎

Let us also remark that the proof of Corollary 3 stays valid as long as the sets and are “locally convex” in a neighbourhood of and . This is an important situation for many potential applications: We already mentioned above that our own motivation for computing the Hausdorff distance is to measure convergence during shape optimisation. In that context, the Hausdorff distance is often already quite small in relation to the sets. In many of these situations, the largest difference between the sets is attained in a way similar to Figure 9, even if the sets themselves need not be convex.

### 3.3 Numerical Demonstration

To conclude this section, let us give a numerical demonstration of the results presented so far. The situation that we consider is depicted schematically in Figure 10: We have an “outer ring” which is part of both and , and an inner circle (corresponding to ) is placed within. Note that this is already a situation with non-convex sets. For the inner circle at the origin as in Figure 10(a), no external Hausdorff distance is admissible. This is due to the fact that the point that achieves is in the interior of . If we displace the inner circle, the point will be on the boundary as soon as the origin is no longer part of . In these situations, we have an external Hausdorff distance with a restricted maximal radius . This is indicated in Figure 10(b). In our calculations, the outer circle has a radius of nine and the inner circle’s radius is one. Figure 10 shows different proportions, since this makes the figure clearer; qualitatively, however, the situations shown are exactly those used in the following.

Let us first fix the grid spacing and consider the effect of moving the inner circle. The approximation error of the exact Hausdorff distance is shown (in units of ) in Figure 11. The blue curve shows under the assumption that and are known at the grid points without any approximation error. This is the situation we have discussed theoretically above. The red curve shows the error if we also compute the signed distance functions themselves using the Fast Marching code in [10]. This situation is more typical in practice, where often only some level-set functions are known for and ; most of the time, they are not already signed distance functions. Note that the grid was chosen such that the origin (and thus the optimal for small displacements) is at the centre of a grid cell and cannot be resolved exactly by the grid. This yields the “plateau” in the error for small displacements. However, as soon as external Hausdorff distances are admitted, the observed error falls rapidly in accordance with Theorem 5. The “steps” in the blue and red lines are caused by the discrete nature of the grid. The black curve shows the expected upper bound, which is given by for small displacements and by (14) for larger ones. (For our example situation, the maximum allowed radius in Definition 5 can be computed exactly.) One can clearly see that the theoretical and numerical results match very well in their qualitative behaviour.
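The essence of the first of these curves can be reproduced in a few lines of code. The sketch below (Python with NumPy; the disc geometry is a simplified stand-in for the ring example, and all names are ours) uses the classical identity d_H(A, B) = sup_x |d_A(x) − d_B(x)| for the unsigned distance functions of compact sets: replacing the supremum by a maximum over grid points yields an exact lower bound on the Hausdorff distance.

```python
import numpy as np

# Two discs with analytically known unsigned distance functions:
# dist(x, B(c, R)) = max(|x - c| - R, 0).
c1, R1 = np.array([0.0, 0.0]), 1.0
c2, R2 = np.array([0.3, 0.0]), 0.8

# For two discs, the exact Hausdorff distance is |c1 - c2| + |R1 - R2|.
exact = np.linalg.norm(c1 - c2) + abs(R1 - R2)

h = 0.01
xs = np.arange(-2.0, 2.0 + h, h)
X, Y = np.meshgrid(xs, xs)

d1 = np.maximum(np.hypot(X - c1[0], Y - c1[1]) - R1, 0.0)
d2 = np.maximum(np.hypot(X - c2[0], Y - c2[1]) - R2, 0.0)

# Discrete maximum over the grid: it never exceeds the true Hausdorff
# distance, so it is an exact lower bound.
lower = float(np.max(np.abs(d1 - d2)))
print(lower, exact)
```

Since |d_A − d_B| is 2-Lipschitz, this lower bound misses the true value by at most √n·h; in this particular configuration the maximiser even lies on a grid point, so the bound is exact up to rounding.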

Figure 12 shows how the error depends on the grid spacing . The blue and red data are as before. In the upper Figure 12(a), the inner circle is at the origin. This is the situation of Figure 10(a), and corresponds to the far left of the curves in Figure 11. Here, the upper bound of Corollary 2 applies and is shown with the black curve. The convergence rate corresponds to , which can be seen clearly in the plot. On the other hand, the lower Figure 12(b) shows how behaves if we have an external Hausdorff distance. It corresponds to the far right in Figure 11, with the inner circle displaced from the origin similarly to Figure 10(b). Here, Theorem 5 implies convergence. This, too, is nicely confirmed by the numerical calculations. (While both plots look similar, note the difference in the scaling of the -axes!)

## 4 Improvements by Randomising the Grid

Let us now take a closer look at the concept of external Hausdorff distances and, in particular, the error estimate in the proof of Theorem 5. An important ingredient for the resulting estimate (and the actual error) is how close grid points come to the external line. We can emphasise this even more by reformulating the error estimate in the following way:
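To make this quantity concrete, the sketch below (Python with NumPy; the function names are ours) computes the minimal distance of the grid points to a given segment. Depending on how the segment is aligned with the grid, this distance can be anything from 0 up to about √n·h/2, which is exactly the sensitivity that the randomisation idea of this section exploits.

```python
import numpy as np

def seg_dist(p, a, b):
    # Euclidean distance from point p to the segment [a, b].
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def beta(h, a, b, extent=2.0):
    # Minimal distance of the grid points (spacing h, covering
    # [-extent, extent]^2) to the segment [a, b]; cf. (16).
    xs = np.arange(-extent, extent + h, h)
    pts = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    return min(seg_dist(p, a, b) for p in pts)

# A segment along a grid line contains grid points (beta = 0), while
# shifting it half a cell away makes beta as large as h/2.
a0, b0 = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
a1, b1 = np.array([-1.0, 0.25]), np.array([1.0, 0.25])
print(beta(0.5, a0, b0), beta(0.5, a1, b1))
```

The two printed values bracket the behaviour discussed above: the error estimate of Lemma 2 is small precisely when some grid point happens to lie close to the external line.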

###### Lemma 2.

Let and admit an external Hausdorff distance with . Choose , and as in Definition 5. We denote by β the minimum distance any grid point has to the part of the external line between and , i. e.,

 (16)

If the grid is large enough, then the error estimate

 δ ≤ 3β²/r

holds for all grid spacings that are small enough.

###### Proof.

We base the proof on (15). Using the notation of Figure 8, let us consider points on the middle third of the external line. For them, r/3 ≤ a ≤ 2r/3. Consequently, (15) implies

 δ ≤ √(b² + a²) − a + √(b² + (r − a)²) − (r − a) ≤ 2·(√(b² + r²/9) − r/3) ≤ 3b²/r.

The last estimate can be seen with a series expansion. This holds for b being any distance (in normal direction) of a grid point to . Furthermore, note that the interval