
Spectral edge detection in two dimensions using wavefronts


A recurring task in image processing, approximation theory, and the numerical solution of partial differential equations is to reconstruct a piecewise-smooth real-valued function $f(\mathbf{x})$, where $\mathbf{x} \in \mathbb{R}^d$, from its truncated Fourier transform (its truncated spectrum). An essential step is edge detection, for which a variety of one-dimensional schemes have been developed over the last few decades. Most higher-dimensional edge detection algorithms consist of applying one-dimensional detectors in each component direction in order to recover the locations in $\mathbb{R}^d$ where $f$ is singular (the singular support).

In this paper, we present a multidimensional algorithm which identifies the wavefront of a function from spectral data. The wavefront of $f$ is the set of points $(\mathbf{x}, \mathbf{s}) \in \mathbb{R}^d \times S^{d-1}$ which encode both the location of the singular points of the function and the orientation of the singularities. (Here $S^{d-1}$ denotes the unit sphere in $d$ dimensions.) More precisely, $\mathbf{s}$ is the direction of the normal line to the curve or surface of discontinuity at $\mathbf{x}$. Note that the singular support is simply the projection of the wavefront onto its $\mathbf{x}$-component. In one dimension, the wavefront is essentially a subset of $\mathbb{R}$, and it coincides with the singular support. In higher dimensions, geometry comes into play and the two are distinct. We discuss the advantages of wavefront reconstruction and indicate how it can be used for segmentation in magnetic resonance imaging (MRI).



1 Introduction

Consider an image, i.e. a function $f: \mathbb{R}^2 \to \mathbb{R}$. If the image is smooth (e.g. $f \in C^\infty$), then the Fourier transform of $f$, denoted $\hat{f}$, will decay rapidly (and hence be localized near the origin in frequency). Discontinuities in the image cause $\hat{f}(\mathbf{k})$ to decay more slowly as $|\mathbf{k}| \to \infty$. Thus, information about the discontinuities can be said to be encoded in the high frequency components of $\hat{f}$. The goal of spectral edge detection is to recover the location of the discontinuities from limited (and often noisy) information about $\hat{f}$.
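This dichotomy is easy to observe numerically. The following sketch (our illustration, using a periodic one-dimensional analogue) compares the Fourier coefficients of a smooth bump with those of an indicator function:

```python
import numpy as np

N = 1024
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

# A smooth periodic bump and a discontinuous indicator function.
smooth = np.exp(-8.0 * (x - np.pi) ** 2)
jumpy = (np.abs(x - np.pi) < 1.0).astype(float)

# Normalized Fourier coefficient magnitudes |c_k|.
c_smooth = np.abs(np.fft.rfft(smooth)) / N
c_jumpy = np.abs(np.fft.rfft(jumpy)) / N

# The smooth image is spectrally localized near k = 0, while the jumps
# leave O(1/k) tails at arbitrarily high frequencies.
print(c_smooth[100], c_jumpy[100])
```

At $k = 100$ the smooth bump's coefficient is at the level of roundoff, while the indicator's is of size $|\sin k|/(\pi k)$: the high frequencies carry the edge information.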

As an example, consider the function $\chi$ which is equal to $1$ inside the unit disk and $0$ elsewhere. The set of discontinuities of this function (the singular support) is given by the unit circle $\{\mathbf{x} : |\mathbf{x}| = 1\}$. One natural approach to computing the curve on which the discontinuities lie is to first find a set of points which lie in the singular support, followed by an algorithm aimed at connecting these point sets into a finite number of curves. In our example, the output would be the unit circle. A relatively recent and important class of methods for locating the singular support is based on concentration kernels [10, 12, 14, 15, 16], a high-pass filtering approach, which we describe briefly below.

For the function $\chi$, the normal at each point of discontinuity is simply the normal to the unit circle at that point. The wavefront of this function is, therefore, the set of pairs $(\mathbf{x}, \mathbf{s})$ with $|\mathbf{x}| = 1$ and $\mathbf{s} = \mathbf{x}$. In this paper, we study the problem of extracting the wavefront from continuous spectral data available in a finite frequency range in two dimensions. This extra information is useful practically as well as theoretically. First, it is easier to reconstruct curves of discontinuity from points in the wavefront than from points in the singular support, both in closing “gaps” and in associating points on close-to-touching curves to the correct one [2, 7, 8, 9, 17]. Second, the directional information is useful for noise removal. If spurious points are included in the wavefront set, the normal (or tangent) data allows us to filter them out; it is unlikely that a random point and a random tangent will be consistent with the points and tangents that come from the actual curves of discontinuity, as we have shown previously in [17].

Our approach to edge detection is based on applying concentration kernels (high-pass filters) to angular slices of the Fourier data. In addition to recovering the points on the edges (as in [10, 15, 16]), we also determine the direction of the normal. In section 2, we present a precise statement of the problem to be solved and an overview of the full edge detection procedure (section 2.4). In section 3, we present a detailed analysis of the asymptotic behavior of the Fourier transform of the characteristic function of a smooth region. Section 4 is devoted to a discussion of the directional filters used to extract wavefront data, and Section 6 describes the full algorithm. Many of the proofs are technical, and we have relegated them to the appendices. We discuss the application of our method to magnetic resonance imaging (MRI), where raw data is acquired in the Fourier domain, extending the algorithm of [14]. Finally, we note that our algorithm is closely related to the recent paper [18], which describes a wavefront extraction procedure based on the shearlet transform, extending the use of curvelets in [5, 6].

2 Mathematical Formulation

Let $f$ be a piecewise smooth function from $\mathbb{R}^2$ to $\mathbb{R}$, and let $\varphi$ be a smooth, compactly supported radial function. The singular support of $f$ consists of the points $\mathbf{x}_0$ for which the localized function $f\,\varphi(\cdot - \mathbf{x}_0)$ has a slowly decaying Fourier transform for every such $\varphi$, i.e.:


In order to find the singular support, concentration kernel methods [10, 12, 14, 15, 16] multiply the Fourier data by a function which gives heavier weight to high frequencies than to low frequencies (a high-pass filter). Since high frequencies encode the location of singularities but are unaffected by smooth parts of the image, this method isolates discontinuities from the rest of the image. In short, concentration kernel methods find the location of singularities by flagging local maxima in the inverse Fourier transform of the high-pass filtered Fourier data.
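A minimal one-dimensional caricature of this procedure (our sketch, not the specific kernels of the cited papers): weight the Fourier coefficients by the crude odd high-pass factor $ik/K$, i.e. spectral differentiation, invert, and flag local maxima of the magnitude.

```python
import numpy as np

N = 1024
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
f = (np.abs(x - np.pi) < 1.0).astype(float)     # jumps at pi - 1 and pi + 1

c = np.fft.fft(f)                               # Fourier data
k = np.fft.fftfreq(N, d=1.0 / N)                # integer frequencies

# Crude concentration factor i*k/K: it emphasizes high frequencies, so the
# smooth background is suppressed and the jumps produce sharp spikes.
K = N / 2.0
indicator = np.abs(np.fft.ifft(1j * k / K * c))

x_detected = x[np.argmax(indicator)]            # location of the largest spike
```

The published concentration factors are designed so that the spike heights converge to the jump values; the unnormalized weight above only locates the jumps, which suffices to illustrate the mechanism.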

The wavefront of a function consists of the points $(\mathbf{x}_0, \mathbf{s})$ for which the Fourier transform of a localization of $f$ around $\mathbf{x}_0$ decays slowly in the direction $\mathbf{s}$:


As indicated in the introduction, while the singular support of $f$ only contains the location of singularities, the wavefront also contains the direction of the singularities.

Remark 2.1

In the language of computational geometry, the singular support is a set of points, while the wavefront is a set of surfels (pairs of the form $(\mathbf{x}, \mathbf{s})$, with $\mathbf{x}$ representing a position and $\mathbf{s}$ a direction).

2.1 Definition of the image class

To simplify the theory, we consider a special class of images. In particular, we consider two-dimensional images supported on and vanishing near the boundaries, which consist of a set of piecewise constant functions on which is superimposed a globally smooth function:


where the $\Gamma_j$ are simple closed curves, and the indicator $\chi_j(\mathbf{x})$ equals $1$ for $\mathbf{x}$ in the interior of $\Gamma_j$ and $0$ elsewhere. The “texture” term is band-limited, i.e. its Fourier transform vanishes outside a bounded frequency band.
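For concreteness, an image of this class can be synthesized as follows (a sketch with made-up regions and contrasts; the two regions are chosen so that their boundary curves are separated, in the spirit of the assumptions below):

```python
import numpy as np

N = 256
t = np.linspace(-0.5, 0.5, N)
X, Y = np.meshgrid(t, t, indexing="ij")

# Piecewise-constant part: indicators of two separated smooth regions.
disk = (((X - 0.1) ** 2 + Y ** 2) < 0.04).astype(float)
ellipse = (((X + 0.25) ** 2 / 0.01 + Y ** 2 / 0.005) < 1.0).astype(float)

# Smooth "texture": a broad Gaussian (numerically band-limited).
texture = 0.3 * np.exp(-20.0 * (X ** 2 + Y ** 2))

image = 1.0 * disk + 0.5 * ellipse + texture
```

The contrasts ($1.0$ and $0.5$) play the role of the jump bounds in Assumption 3, and the Gaussian plays the role of the band-limited texture.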

Definition 2.2

Let $\Gamma$ be a simple closed curve. The curvature at each point is denoted by $\kappa$, the normal to $\Gamma$ is denoted by $\mathbf{n}$, and the tangent is denoted by $\mathbf{t}$.

Assumption 1

We assume the curves have bounded curvature, i.e.:

Assumption 2

We also assume that the curves are separated from each other (see Fig. 1):

and that different areas of the same curve are separated from each other, i.e. if the curves are parameterized to move at unit speed, then:
Figure 1: An illustration of Assumption 2. The black arrow illustrates the condition (2.5a), while the red arrow illustrates the condition (2.5b).

In order to extract the wavefront, we assume each edge is associated with a sufficiently large discontinuity. Since our wavefront detector decays somewhat slowly away from the wavefront, we also assume that the discontinuity is bounded from above, so as not to pollute nearby edges of lower contrast. We formalize this as:

Assumption 3

We assume the contrast of the discontinuities in the image is bounded above and below:


The data we are given and wish to segment are noisy samples of $\hat{f}$ obtained on an equispaced grid in $\mathbf{k}$-space centered at the origin. Our segmentation goal is to recover the curves of discontinuity.

With the preceding sampling, we may define the maximum frequency available in the data as $K$. We assume that $K$ is substantially larger than the bandwidth of the smooth part of the image; more precisely, we assume that the sampling provides at least 12 lattice points beyond that bandwidth. In most of our experiments, we fix a single choice of these parameters.

Finally, we make the technical assumption that the curves have non-vanishing curvature:

Assumption 4

We assume that the curvature of the curves is bounded below.

Remark 2.3

The assumption (2.7) implies that the region bounded by each curve is convex. This geometric fact is not used by our algorithm in any way.

Remark 2.4

Assumption 4 is introduced only because it is required below by our proof technique for the correctness of the directional filters. It is not strictly necessary, and it would be straightforward to extend our analysis to cases where the curvature vanishes. This, however, would require more complicated conditions on higher derivatives of the curves in places where the curvature vanishes, and correspondingly more complicated proofs. See Figure 6, which illustrates that our algorithm works even when Assumption 4 is violated.

2.2 Wavefront Extraction Methodology

The algorithm we present in this work takes an image, given as spectral data $\hat{f}$, and extracts a set of surfels sampled from the wavefront. We do this in two steps.

First, we construct a set of directional filters:


Here, $\rho$ is a high-pass filter in the radial direction (the radial filter), $g$ is a smooth function, compactly supported in angle (the angular filter), and $\mathcal{F}$ is the Fourier transform. Note that, given a direction $\mathbf{s}$, the angular filter is supported on an angular window of half-width $\theta_0$ about $\mathbf{s}$. The angular and radial filters will be related by the parabolic scaling:

(Equivalently, they will be chosen to satisfy this relation in the $\mathbf{k}$-domain.) In particular, this implies that the filter angle satisfies $\theta_0 \sim k_0^{-1/2}$, where $k_0$ is the center of the pass-band of the radial filter (to be determined in section 4).

When applied to the function $f$, the directional filters will return a function which is small except near locations where the edge normal points in the direction $\mathbf{s}$. This allows us to pinpoint the locations of singularities with normal direction $\mathbf{s}$. The result is, in some sense, a directional version of the jump function of [10, 14, 15, 16]. Spikes (local maxima) obtained from these directional filters correspond to surfels in the wavefront of the image.

Remark 2.5

For the algorithm described here to work, it is required that accurate values of the continuous spectral data be available. The discrete Fourier transform (DFT) of the sampled image cannot be used as a substitute for the continuous data, since aliasing induced by the DFT will destroy the asymptotic expansion of $\hat{f}$.
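This point is easy to verify in one dimension (our illustration): compare the exact Fourier coefficients of an indicator function with the coefficients returned by the DFT of its samples.

```python
import numpy as np

N = 256
x = 2.0 * np.pi * np.arange(N) / N
f = (np.abs(x - np.pi) < 1.0).astype(float)     # indicator with two jumps

# Exact (continuous) Fourier-series coefficients of this indicator:
# c_k = e^{-i k pi} sin(k) / (pi k).
k = np.arange(1, N // 2)
c_exact = np.exp(-1j * k * np.pi) * np.sin(k) / (np.pi * k)

# The DFT of the samples instead returns the aliased sums over c_{k + m N},
# and for a discontinuous function the aliases decay only like O(1/|k + m N|).
c_dft = np.fft.fft(f)[1:N // 2] / N

rel_err = np.abs(c_dft - c_exact) / np.abs(c_exact)
```

The aliasing error is non-negligible across the band and $O(1)$ near the top of it, which is exactly the regime in which the asymptotic expansion of section 3 lives.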

2.3 A note on our phantom

Figure 2: A plot of the phantom used in this work. The image in the second panel is obtained by applying the DFT to the (exactly known) continuous Fourier transform of the phantom on a grid of samples in $\mathbf{k}$-space.

In the medical imaging literature, the Shepp-Logan phantom (which is piecewise constant) is a traditional choice for analysis and validation purposes. We have added a smoothly varying component (see (2.3)), defined as a sum of Gaussians whose bandwidth (to six digits of accuracy) is equal to half the bandwidth of the measurement data.

2.4 Informal Description of the Algorithm

The full algorithm proceeds in three steps.

  1. Construct directional filters, based on the maximum frequency content of the data and on the frequency content of the “smooth” part of the image.

  2. Multiply the data by each directional filter (one per direction $\mathbf{s}$) and transform back to the image domain.

    • Apply a threshold in the image domain.

    • Add each point $\mathbf{x}$ above the threshold to the surfel set as the pair $(\mathbf{x}, \mathbf{s})$.

  3. Given the set of all surfels, use the algorithm of [17] to reconstruct the set of curves that define the discontinuities.

The last step is described briefly in section 6, while the bulk of the paper is devoted to the surfel extraction procedure itself.
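To make the pipeline concrete, here is a self-contained sketch of steps 1–2 (our illustration, with made-up parameters) applied to exactly known spectral data: the continuous Fourier transform of the unit-disk indicator, $\hat{\chi}(\mathbf{k}) = 2\pi J_1(|\mathbf{k}|)/|\mathbf{k}|$. The filtered image should spike at the boundary points whose normal is (anti)parallel to the filter direction.

```python
import numpy as np

def bessel_j1(z):
    # J_1(z) = (1/pi) * integral_0^pi cos(t - z sin t) dt (midpoint rule,
    # so that the sketch needs nothing beyond numpy)
    t = (np.arange(800) + 0.5) * (np.pi / 800.0)
    return np.cos(t[None, :] - z[:, None] * np.sin(t)[None, :]).mean(axis=1)

# Spectral data: exact continuous transform of the unit-disk indicator.
N, L = 256, 8.0                          # grid size, spatial box [-L/2, L/2)
h = L / N
k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=h)
kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
kr = np.hypot(kx, ky)
r = np.linspace(1e-6, kr.max() + 1.0, 4000)
F = np.interp(kr, r, 2.0 * np.pi * bessel_j1(r) / r)

# Step 1: directional filter for s = (1, 0): a radial band-pass (step window)
# times a triangular angular window, with parabolic scaling theta0 ~ k0^(-1/2).
k0, w = 40.0, 10.0
theta0 = 1.0 / np.sqrt(k0)
radial = ((kr >= k0 - w) & (kr <= k0 + w)).astype(float)
angular = np.clip(1.0 - np.abs(np.arctan2(ky, kx)) / theta0, 0.0, None)

# Step 2: multiply and transform back; spikes mark edge points whose normal
# is (anti)parallel to s.
out = np.abs(np.fft.fftshift(np.fft.ifft2(F * radial * angular)))
xg = (np.arange(N) - N // 2) * h
i, j = np.unravel_index(np.argmax(out), out.shape)
peak = (xg[i], xg[j])                    # expect a point near (+-1, 0)
```

Thresholding `out` at a fraction of its maximum yields the surfel set for this direction; step 3 then assembles the surfels collected over all directions into curves.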

3 Large $|\mathbf{k}|$ Asymptotics of $\hat{f}$

In this section we wish to compute the large-$|\mathbf{k}|$ asymptotics of $\hat{f}$. In particular, we will show that $\hat{f}(\mathbf{k})$ is dominated solely by the parts of the curves of discontinuity where the normal is parallel to $\mathbf{k}$, i.e. where $\mathbf{n} = \pm\mathbf{k}/|\mathbf{k}|$.

We can use Green’s theorem to rewrite the Fourier transform of the piecewise constant part as follows. Let $\mathbf{F}(\mathbf{x}) = \frac{i\mathbf{k}}{|\mathbf{k}|^2}\, e^{-i\mathbf{k}\cdot\mathbf{x}}$, so that $\nabla \cdot \mathbf{F} = e^{-i\mathbf{k}\cdot\mathbf{x}}$. Then by Green’s theorem:


(with $\Omega$ the region bounded by the curve). This trick is taken from [22].

We can now use stationary phase to approximate $\hat{f}(\mathbf{k})$ for large $|\mathbf{k}|$. For this, express $\mathbf{k}$ in polar coordinates $\mathbf{k} = k(\cos\phi, \sin\phi)$, fix a direction $\phi$, and consider what happens as $k$ becomes large. As we remarked earlier, the phase becomes stationary only where the tangent to the curve is orthogonal to $\mathbf{k}$. This is precisely where $\mathbf{k}$ points normal to the curve, and it is these locations that dominate $\hat{f}$:

Proposition 3.1

Let $s^*$ correspond to the value of the arc-length parameter at which the tangent is orthogonal to $\mathbf{k}$ (i.e. the normal to the curve points in the direction $\pm\mathbf{k}/k$). Then:


where with


Note that this coefficient incorporates both geometric and contrast information about the image itself. This result is proved carefully in Appendix A. The basic idea behind the proof is simple, however. Set $\mathbf{k} = k(\cos\phi, \sin\phi)$, with $\phi$ fixed. Then:

The phase function is stationary precisely where the tangent to the curve is orthogonal to $\mathbf{k}$ (i.e. where the normal points in the direction $\pm\mathbf{k}/k$). By Assumption 4, we find that the second derivative of the phase is nonzero at such points. We restrict the consideration to an interval around each stationary point and apply stationary phase:

Proposition 3.1 is then proved by summing this result over all the curves and all points of stationary phase, and estimating the remainder.
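As a sanity check on the $k^{-3/2}$ rate (our numerical illustration, again for the unit disk, whose transform is $2\pi J_1(k)/k$): quadrupling $k$ should shrink the oscillation envelope by a factor of $4^{3/2} = 8$.

```python
import numpy as np

def bessel_j1(z):
    # J_1(z) = (1/pi) * integral_0^pi cos(t - z sin t) dt (midpoint rule)
    t = (np.arange(800) + 0.5) * (np.pi / 800.0)
    return np.cos(t[None, :] - z[:, None] * np.sin(t)[None, :]).mean(axis=1)

def envelope(kc):
    # Peak of |2 pi J_1(k)/k| over one oscillation period starting at kc.
    k = np.linspace(kc, kc + 2.0 * np.pi, 400)
    return np.max(np.abs(2.0 * np.pi * bessel_j1(k) / k))

ratio = envelope(40.0) / envelope(160.0)   # ~ (160/40)^(3/2) = 8
```

The observed ratio sits near 8, consistent with the $k^{-3/2}$ decay predicted by the stationary phase computation.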

3.1 What if the curvature vanishes?

It is important to note that all is not lost when the curvature vanishes. In this case, although the coefficient in (3.2) becomes singular, the asymptotics of $\hat{f}$ do not.

In this case, what happens is that the leading order behavior becomes $O(k^{-1-1/p})$, where $p$ is the order of the first non-vanishing derivative of the phase. This can be seen relatively easily from stationary phase, although rigorous justification requires a long calculation. However, the limiting case (a straight line) is easy enough to treat.

Proposition 3.2

Let the edge be a straight line. Then:


Proof.  If , then:

Multiplying by yields the result we seek.

The asymptotics are straightforward to compute (from the second line of (3.4)), but note that the constant in the error term in (3.4) is not uniform in the direction.

4 Directional Filters

We are now in a position to build the filter operators of (2.8), which will allow us to extract edge information from the signal. We demand that the radial filter takes the form

where the radial profile is a positive, symmetric function. This means that the two-dimensional inverse Fourier transform of the filter is

The intuition behind this operator is the following. Multiplying $\hat{f}$ by the filter localizes the data on the region where the filter is supported. Dropping all but one of the terms in Proposition 3.1, we find that:

The resulting term incorporates both the decay from the asymptotics of $\hat{f}$ and the localization (to a narrow angular sector and to large $k$) from the filters.

Therefore, if we inverse Fourier transform, we will obtain

Provided the inverse transform of the filter is a sharply localized bump function, this will be a bump located at the edge point with normal $\mathbf{s}$. Of course, this calculation is not exactly correct, and is presented merely to obtain intuition. We will go through the details shortly, but require a few definitions.

Definition 4.1

Given the radial and angular windows, define the auxiliary functions:

Definition 4.2

Define the set $W_{\mathbf{s}}$ to be the set of points where some curve has normal pointing in the direction $\mathbf{s}$, i.e.:

We also define the complementary collection of arcs, i.e. the parts of the curves away from $W_{\mathbf{s}}$.

The goal of our directional filters is to approximate the location of this set. That is to say, we want the filtered image to be large near the set and small away from it. The decay in the tangential direction away from a point in the set is at least of the order given in Theorem 4.3 below. To ensure that the decay in the normal direction is as fast, we consider the case illustrated in Fig. 3(d).

We then have the following result, which shows that the directional filters approximate the location of this set.

Theorem 4.3
Suppose that satisfies
and satisfies (4.1a) as well as the following decay condition:
Then away from the edge set, the directional filter has the following decay:
At the point

We postpone the proof of this result to Appendix B, where we compute the leading order asymptotics of a single segment of the curve and put the various pieces together.

Remark 4.4

While the expressions above are somewhat involved, for the filters described in the next section, (4.4c) simplifies to


Note that the second term in this estimate becomes negligible as $k_0$ increases. The first term determines the rate of decay of the directional filter away from a surfel, and it should be as small as possible (to minimize noise); for this, one norm of the angular window should be small. At the same time, we want the response at a surfel to be as large as possible (to maximize signal); for this, another norm of the angular window should be large. We will need to balance this competition.

Remark 4.5

If the decay of the angular window is faster than (4.4b), we can of course get sharper results. However, this would require extra geometric conditions and would require Theorem 4.3 to make distinctions concerning the direction from the evaluation point to the edge. To simplify the exposition of this paper, such considerations will be reported at a later date.

4.1 The estimate is suboptimal

Let us consider the application of our directional filters on a simple numerical example. We consider a pixel image on . The image is taken to be the phantom described in Section 2.3, and the parameters are , , , and . With this set of parameters, if we use the windows and . The first term on the right-hand side in (4.4c) is, therefore, approximately if we move one pixel away from a surfel. The second term, however, is approximately so that we have no reason to expect that the directional filters will yield any useful information. However, numerical experiments show that they do in fact work even in this case. Figure 3 shows that the directional filters do yield the location of the edges. Figure 4 shows that even in the presence of noise (up to of the total image energy), the directional filters still yield correct results.

It is also useful to compare the directional filters to a naive algorithm, namely computing the directional derivative. As is apparent from Figure 5, this naive method does not perform as well as our algorithm.

Figure 3: An illustration of the sharp directional filter. The first panel shows the image. The second panel shows the directional filter in $\mathbf{k}$-space. The third panel shows the edge map. The red lines are the contour lines, while the green lines are the actual (analytically known) edges of the image. The fourth panel shows the directional filter in the $\mathbf{x}$-domain.
Figure 4: An illustration of the directional filters in the presence of noise. The image and parameters are the same as in Figure 3. The figures in the left column are plots of the filtered image. The gray dotted lines in the right column illustrate the discontinuities of the image (which we know exactly, c.f. Section 2.3), while the red regions are the places where the filtered image exceeds the threshold. Noise amplitude is described as a percentage of the total image strength. Noise begins to corrupt the data when it reaches 7.5%, and becomes a serious problem at 10%.
Figure 5: Result of edge detection based on measuring the directional derivative. The green lines represent the analytically known edges of the image, while the red regions are the regions where the response exceeds the threshold.
Figure 6: Result of edge detection applied to an image where Assumption 4 is violated. The green lines in the second panel indicate the actual discontinuities, while the red arrows indicate extracted surfels (at the indicated angles). The vertical and horizontal gray bars are artifacts from using the DFT to reconstruct a discontinuous image.

4.2 Parabolic Scaling

The choice of the angular width $\theta_0$ is an important one. We want $\theta_0$ to be as small as possible, since this will give us better angular resolution.

On the other hand, the constant bounding the size of the filtered image away from the edges is directly proportional to , as we will show shortly. For simplicity, let us impose the constraint that


Now let us take the angular window to be a fixed profile scaled with $\theta_0$, i.e. $g_{\theta_0}(\theta) = g(\theta/\theta_0)$, where $g$ is supported in $[-1, 1]$ and suitably normalized. With this choice, we find that:


Substituting this into (4.4d) and assuming $\theta_0$ to be very small yields:


To prevent the response away from the edges from being too large, we want to ensure that $\theta_0$ is as large as possible.

However, to ensure that the filter is sufficiently large on the edges, we need to prevent $\theta_0$ from being too large (Theorem 4.3). In particular, we require (c.f. (4.4a))


To satisfy both these constraints, we simply replace the inequality in (4.9) by an equality. This yields the standard parabolic scaling used elsewhere [5, 6, 11, 21, 23].

Remark 4.6

The requirement (4.4a) yields the standard parabolic scaling used to analyze line discontinuities in harmonic analysis [5, 6, 11, 21, 23]. In the $\mathbf{x}$-domain, the standard parabolic scaling uses elements whose width is proportional to the square of their length. In the $\mathbf{k}$-domain, this translates to $\theta_0 \sim k_0^{-1/2}$.

This also implies that the filter angle $\theta_0$ approaches zero as $k_0 \to \infty$.
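One way to see where the exponent $-1/2$ comes from (a standard heuristic sketch, not the argument of the proof): near a point of curvature $\kappa$, the edge deviates from its tangent line by $\tfrac{1}{2}\kappa\ell^2$ over a tangential distance $\ell$, and a filter with pass-band centered at $k_0$ responds coherently only while this deviation stays below a wavelength:

```latex
\tfrac{1}{2}\,\kappa\,\ell^{2} \;\lesssim\; k_0^{-1}
\quad\Longrightarrow\quad
\ell \;\lesssim\; \sqrt{\tfrac{2}{\kappa k_0}}
\quad\Longrightarrow\quad
\theta_0 \;\approx\; \kappa\,\ell \;\lesssim\; \sqrt{\tfrac{2\kappa}{k_0}} \;\sim\; k_0^{-1/2},
```

since the normal direction turns by $\kappa\ell$ along an arc of length $\ell$.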

4.3 Choosing the Filters

We now consider the simplest choices of window possible which satisfy our assumptions. We take the radial window to be a step function, i.e.:

and we take the angular window to be a triangle function:

In this case, it is straightforward to show:

Then we have:


Similarly, the remainder term has the bound:

where the constant is defined in (3.3).

Therefore, we find that: