A fast eikonal equation solver using the Schrödinger wave equation

Abstract

We use a Schrödinger wave equation formalism to solve the eikonal equation. In our framework, a solution to the eikonal equation is obtained in the limit as Planck's constant ℏ (treated as a free parameter) tends to zero of the solution to the corresponding linear Schrödinger equation. The Schrödinger equation corresponding to the eikonal turns out to be a generalized, screened Poisson equation. Despite being linear, it does not have a closed-form solution for arbitrary forcing functions. We present two different techniques to solve the screened Poisson equation. In the first approach we use a standard perturbation analysis to derive a new algorithm which is guaranteed to converge provided the forcing function is bounded and positive. The perturbation technique requires a sequence of discrete convolutions which can be performed in O(N log N) time using the Fast Fourier Transform (FFT), where N is the number of grid points. In the second method we discretize the linear Laplacian operator by the finite difference method, leading to a sparse linear system of equations which can be solved using any of a plethora of sparse solvers. The eikonal solution is recovered from the exponent of the resultant scalar field. Our approach eliminates the need to explicitly construct viscosity solutions, as is customary with direct solutions to the eikonal equation. Since the linear equation is computed for a small but non-zero ℏ, the obtained solution is an approximation. Though our solution framework is applicable to the general class of eikonal problems, we detail specifics for the popular vision applications of shape-from-shading, vessel segmentation, and path planning.

Keywords:
eikonal equation, Schrödinger wave equation, perturbation theory, Fast Fourier Transform (FFT), screened Poisson equation, Green's function, sparse linear system

1 Introduction

The eikonal (from the Greek word εἰκών, meaning "image") equation is traditionally encountered in the wave and geometric optics literature where the principal concern is the propagation of light rays in an inhomogeneous medium (8). Its twin roots are in wave propagation theory and in geometric optics. In wave propagation theory, it is obtained when the wave is approximated using the Wentzel–Kramers–Brillouin (WKB) approximation (27). In geometric optics, it can be derived from Huygens' principle (2). In the present day, the eikonal equation has outgrown its humble optics origins and now finds application in far-flung areas such as electromagnetics (27), robot motion path planning (7) and image analysis (24).

The eikonal equation is a nonlinear, first-order partial differential equation (31) of the form

\|\nabla S(X)\| = f(X), \quad X \in \Omega, \qquad (1.1)

subject to the boundary condition S(X₀) = 0 at given source points X₀, where Ω is an open subset of ℝⁿ. The forcing function f(X) is positive-valued and ∇ denotes the gradient operator. In the special case where f(X) equals one everywhere, the solution to the eikonal equation is the Euclidean distance function (24). Detailed discussions of the existence and uniqueness of the solution can be found in (11).

While the eikonal equation is venerable and classical, it is only in the last twenty years that we have seen the advent of numerical methods aimed at solving this problem. Notable among them are the pioneering fast marching (25); (30) and fast sweeping (33) methods. Algorithms based on discrete structures, such as the well-known Dijkstra single-source shortest path algorithm (10), can also be adapted to solve this problem. When we seek solutions on a discretized spatial grid with N points, the complexity of the fast marching method is O(N log N), while that of the fast sweeping method for a single pass over the grid is O(N), and therefore both of these efficient algorithms have seen widespread use since their inception. The fast sweeping method is conceptually simpler and easier to implement than the fast marching method; however, the actual number of sweeps required for convergence depends on the problem at hand—experimentally it is observed that 2ᵈ sweeps are typically required in d dimensions. Recently, the ingenious work of Sapiro et al. (32) provided an implementation of the fast marching method with a cleverly chosen untidy priority queue data structure. Typically, eikonal solvers grow the solution from a set of seed points at which the solution is known.
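To make the graph-based alternative concrete, the following sketch (our illustration, not code from the paper) adapts Dijkstra's algorithm to a 4-connected grid, averaging the forcing function over the two endpoints of each edge. It also exhibits the metrication error that fast marching was designed to remove: distances are measured along axis-aligned grid paths, so for f = 1 it produces the L1 (Manhattan) distance rather than the Euclidean one.

```python
import heapq

def dijkstra_eikonal(f, h, source):
    """Dijkstra on a 4-connected grid as a discrete eikonal stand-in.
    Edge costs average the forcing function f over the two endpoints.
    Note the metrication error: distances follow grid paths, so for f = 1
    this yields the L1 (Manhattan) distance, not the Euclidean one."""
    rows, cols = len(f), len(f[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    dist[source[0]][source[1]] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[i][j]:
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                nd = d + h * 0.5 * (f[i][j] + f[ni][nj])
                if nd < dist[ni][nj]:
                    dist[ni][nj] = nd
                    heapq.heappush(pq, (nd, (ni, nj)))
    return dist
```

On a uniform grid with f = 1 and h = 1, the value at (2, 2) from a source at (0, 0) is 4 (the L1 distance), illustrating the directional bias of the discrete graph metric.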

The eikonal equation can also be derived from a variational principle, namely, Fermat’s principle of least time which states that “Nature always acts by the shortest paths” (3). From this variational principle, the theoretical physics developmental sequence proceeds as follows: The first order Hamilton’s equations of motion are derived using a Legendre transformation of the variational problem wherein new momentum variables are introduced. Subsequently, a canonical transformation converts the time varying momenta into constants of the motion. The Hamilton-Jacobi equation emerges from the canonical transformation (17). In the Hamilton-Jacobi formalism specialized to the eikonal problem, we seek a surface such that its increments are proportional to the speed of the light rays. This is closely related to Huygens’ principle and thus marks the rapprochement between geometric and wave optics (2). It is this nexus that drives numerical analysis methods (25); (33) (focused on solving the eikonal equation) to base their solutions around the Hamilton-Jacobi formalism.

So far, our development has followed that of classical physics. Since the advent of quantum theory—specifically the Schrödinger wave equation—in the 1920s, the close relationship between the Schrödinger and Hamilton-Jacobi equations has been intensely studied (6). Of particular importance here is the quantum to classical transition as ℏ → 0, where the nonlinear Hamilton-Jacobi equation emerges from the phase of the Schrödinger wave equation. This relationship has found very few applications in the numerical analysis literature despite being well known. In this paper, we leverage the important distinction between the Schrödinger and Hamilton-Jacobi equations, namely, that the former is linear whereas the latter is not. We take advantage of the linearity of the Schrödinger equation while exploiting its relationship to Hamilton-Jacobi and derive computationally efficient solutions to the eikonal equation.

A time-independent Schrödinger wave equation at the energy state E has the form Hψ = Eψ (19), where ψ—the stationary state function—is the solution to the time-independent equation and H is the Hamiltonian operator. When the Hamilton-Jacobi scalar field S is the exponent of the stationary state function, specifically ψ = exp(−S/ℏ), and if ψ satisfies the Schrödinger equation, we show that as ℏ → 0, S satisfies the Hamilton-Jacobi equation. Note that in the above, a nonlinear Hamilton-Jacobi equation is obtained in the limit as ℏ → 0 of a linear Schrödinger equation, which is novel from a numerical analysis perspective. Consequently, instead of solving the Hamilton-Jacobi equation, one can solve its Schrödinger counterpart (taking advantage of its linearity) and compute an approximate S for a suitably small value of ℏ. This computational procedure is approximately equivalent to solving the original Hamilton-Jacobi equation.

Since the efficient solution of a linear wave equation is the cornerstone of our approach, we now briefly describe the actual computational algorithm used. We derive the static Schrödinger equation for the eikonal problem. The result is a generalized, screened Poisson equation (15) whose solution is known at seed points. This linear equation does not have a closed-form solution and therefore we resort to a perturbation method (14) of solution—which is related to the Born expansion (23). The perturbation method comprises a sequence of multiplications with a space-varying forcing function followed by convolutions with a Green's function (for the screened Poisson operator), which we compute using an efficient fast Fourier transform (FFT)-based technique (9). Perturbation analysis involves a geometric series approximation for which we show convergence for all bounded forcing functions, independent of the value of ℏ.

The intriguing characteristic of the perturbation approach is that it solves the generalized screened Poisson equation without the explicit need to spatially discretize the Laplacian operator. However, the downside of this method is that it requires repeated convolution of the Green's function with the solution from the previous iteration, which can only be approximated by discrete convolution. Hence the errors tend to accumulate with iteration. A different route to solving the screened Poisson equation is to approximate the continuous Laplacian operator by the method of finite differences and use sparse linear system solvers to obtain the solution. This approach leads to many algorithm choices since there are myriad efficient sparse linear solvers. We showcase the application of our linear discretized framework in path planning, shape from shading and vessel segmentation.

The paper is organized as follows. In Section 2, we provide the Hamilton-Jacobi formulation for the eikonal equation as adopted by the fast sweeping and fast marching methods. We restrict ourselves to the special case of eikonal equations involving constant forcing functions in Section 3 and derive the corresponding Schrödinger wave equation. Section 4 considers the more general version; Section 5 derives and provides an efficient, arbitrary-precision, FFT-based method for solving the Schrödinger equation using techniques from perturbation theory. In Section 6 we present a second approach to solving the linear differential equation where, by invoking a finite difference approximation of the Laplacian operator, we solve a sparse linear system. We conclude in Section 7 by summarizing our current work.

2 Hamilton-Jacobi formulation for the eikonal equation

2.1 Fermat’s principle of least time

It is well known that the Hamilton-Jacobi formalism for the eikonal equation can be obtained by considering a variational problem based on Fermat's principle of least time (2), which in ℝ² is

\min \int_{t_0}^{t_1} f(x, y) \sqrt{\dot{x}^2 + \dot{y}^2} \, dt. \qquad (2.1)

We take an idiosyncratic approach to the eikonal equation by considering a different variational problem which is still very similar to Fermat’s least time principle. The advantage of this variational formulation is that the corresponding Schrödinger wave equation can be easily obtained.

Consider the following variational problem, namely

\min \int_{t_0}^{t_1} L(x, y, \dot{x}, \dot{y}) \, dt, \qquad (2.2)

where the forcing term f(x, y) is assumed to be independent of time and the Lagrangian L is defined as

L(x, y, \dot{x}, \dot{y}) = \frac{f^2(x, y)}{2} \left( \dot{x}^2 + \dot{y}^2 \right). \qquad (2.3)

Defining the momenta

p_x = \frac{\partial L}{\partial \dot{x}} = f^2 \dot{x}, \qquad p_y = \frac{\partial L}{\partial \dot{y}} = f^2 \dot{y}, \qquad (2.4)

and applying the Legendre transformation (2), we obtain the Hamiltonian of the system as

H(x, y, p_x, p_y) = \frac{p_x^2 + p_y^2}{2 f^2(x, y)}. \qquad (2.5)

From a canonical transformation of the Hamiltonian (17), we obtain the following Hamilton-Jacobi equation:

\frac{\partial S}{\partial t} + H\!\left( x, y, \frac{\partial S}{\partial x}, \frac{\partial S}{\partial y} \right) = 0. \qquad (2.6)

Since the Hamiltonian in (2.5) is a constant independent of time, equation (2.6) can be simplified to the static Hamilton-Jacobi equation. By separation of variables, we get

S(x, y, t) = W(x, y) - E t, \qquad (2.7)

where E is the total energy of the system and W is called Hamilton's characteristic function (2). Observing that ∇S = ∇W, equation (2.6) can be rewritten as

\frac{1}{2 f^2(x, y)} \left[ \left( \frac{\partial W}{\partial x} \right)^2 + \left( \frac{\partial W}{\partial y} \right)^2 \right] = E. \qquad (2.8)

Choosing the energy to be E = 1/2, we obtain

\| \nabla W \| = f(x, y), \qquad (2.9)

which is the original eikonal equation (1.1). W is the required Hamilton-Jacobi scalar field, which is efficiently obtained by the fast sweeping (33) and fast marching (25) methods.

3 Eikonal equations with constant forcing functions

We begin the quantum formulation of the eikonal equation by considering its special case where the forcing function is constant and equals f₀ everywhere.

3.1 Deriving the Schrödinger wave equation

The time-independent Schrödinger wave equation is given by (19)

H \psi = E \psi, \qquad (3.1)

where ψ is the time-independent wave function and H is the Hamiltonian operator obtained by first quantization, wherein the momentum variables (p_x, p_y) are replaced with the operators -i\hbar \frac{\partial}{\partial x} and -i\hbar \frac{\partial}{\partial y}. E denotes the energy of the system.

For this special case where the forcing function is constant and equals f₀ everywhere, the Hamiltonian of the system is given by (in 2D)

H = \frac{p_x^2 + p_y^2}{2 f_0^2}. \qquad (3.2)

Its first quantization then yields the wave equation

-\frac{\hbar^2}{2 f_0^2} \nabla^2 \psi = E \psi. \qquad (3.3)

When E > 0 we get oscillatory solutions and when E < 0 we get exponential solutions in the sense of distributions. In (20); (29) we have shown that for the Euclidean distance function problem where f₀ = 1, the exponential solution for ψ obtained by setting E = −1/2, which is then used to recover S via the relation S = −ℏ log ψ, guarantees convergence of S to the true solution as ℏ → 0. The work in (20) is concerned with computing Euclidean distance functions only from point-sets, and its extension to obtaining distance functions from curves is developed in (29). Following along similar lines, we propose to solve (3.3) at E = −1/2, for which ψ satisfies the differential equation

\hbar^2 \nabla^2 \psi = f_0^2 \psi. \qquad (3.4)

3.2 Solving the Schrödinger wave equation

We now provide techniques for efficiently solving the Schrödinger equation (3.4). Note that we are interested in computing the solution only at the specified set of discrete grid locations.

Since the Laplacian operator is negative definite, it follows that the Hamiltonian operator is positive definite for all values of ℏ, and hence the above equation (with E = −1/2 < 0) does not have a solution in the classical sense. We therefore look for a solution in the distributional sense by considering the forced version of the equation, namely

\hbar^2 \nabla^2 \psi - f_0^2 \psi = -\sum_{k=1}^{K} \delta(X - Y_k). \qquad (3.5)

The points {Y_k} can be considered to be the set of locations which encode initial knowledge about the scalar field S, say for example S(Y_k) = 0.

For the forced equation (3.5), closed-form solutions for ψ can be obtained in ℝ, ℝ² and ℝ³ (20) using the Green's function approach (1). Since S goes to infinity for points at infinity, ψ = exp(−S/ℏ) vanishes there and we can use Dirichlet boundary conditions at the boundary of an unbounded domain. The form of the solution for the Green's function is given by:

1D: In ℝ, the solution for the Green's function (1) is

G(x) = \frac{1}{2 f_0 \hbar} \exp\!\left( -\frac{f_0 |x|}{\hbar} \right). \qquad (3.6)

2D: In ℝ², the solution for the Green's function (1) is

G(X) = \frac{1}{2 \pi \hbar^2} K_0\!\left( \frac{f_0 \|X\|}{\hbar} \right), \qquad (3.7)

where K₀ is the modified Bessel function of the second kind.

3D: In ℝ³, the solution for the Green's function (1) is

G(X) = \frac{1}{4 \pi \hbar^2 \|X\|} \exp\!\left( -\frac{f_0 \|X\|}{\hbar} \right). \qquad (3.8)

The solution for ψ can then be obtained by convolution:

\psi(X) = G * \left[ \sum_{k=1}^{K} \delta(X - Y_k) \right] = \sum_{k=1}^{K} G(X - Y_k), \qquad (3.9)

from which S can be recovered using the relation (4.6). S can explicitly be shown to converge to the true solution \tilde{S}(X) = f_0 \min_k \|X - Y_k\| as ℏ → 0 (20).

Modified Green’s function

Based on the nature of the Green's function, we would like to highlight the following very important point. In the limiting case of ℏ → 0,

(3.10)

for constants C₁ and C₂ greater than zero, and therefore we see that if we define

\tilde{G}(X) = C \exp\!\left( -\frac{f_0 \|X\|}{\hbar} \right) \qquad (3.11)

for some constant C > 0,

(3.12)

and furthermore, the convergence is uniform for ‖X‖ away from zero. Therefore, G̃ provides a very good approximation to the actual Green's function as ℏ → 0. For a fixed value of f₀, the difference between the two Green's functions is relatively insignificant for small values of ℏ and for all X. Moreover, using G̃ also avoids the singularity at the origin that G has in the 2D and 3D cases. The above observation motivates us to compute the solution for ψ by convolving with G̃, namely

\psi(X) = \tilde{G} * \left[ \sum_{k=1}^{K} \delta(X - Y_k) \right] = C \sum_{k=1}^{K} \exp\!\left( -\frac{f_0 \|X - Y_k\|}{\hbar} \right), \qquad (3.13)

instead of the actual Green's function, and recover S using the relation (4.6), given by

S(X) = -\hbar \log \psi(X) = -\hbar \log C - \hbar \log \sum_{k=1}^{K} \exp\!\left( -\frac{f_0 \|X - Y_k\|}{\hbar} \right). \qquad (3.14)

Since −ℏ log C is a constant independent of X and converges to 0 as ℏ → 0, it can be ignored while computing S at small values of ℏ – it is equivalent to setting C to be 1. Hence the Schrödinger wave function for a constant forcing function can be approximated by

\psi(X) = \sum_{k=1}^{K} \exp\!\left( -\frac{f_0 \|X - Y_k\|}{\hbar} \right). \qquad (3.15)

It is worth emphasizing that the above defined wave function (3.15) contains all the desirable properties that we need. Firstly, we notice that as ℏ → 0, ψ → 1 at the given point-set locations Y_k. Hence from (4.6), S(Y_k) → 0 as ℏ → 0, satisfying the initial conditions. Secondly, as ℏ → 0, ψ(X) can be approximated by exp(−f₀ D(X)/ℏ), where D(X) = min_k ‖X − Y_k‖. Hence S(X) → f₀ D(X), which is the true value. When f₀ = 1, we get the Euclidean distance function. Thirdly, ψ can be easily computed using the fast Fourier transform as described in Section 5.2. Hence for computational purposes we consider the wave function defined in (3.15).
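The soft-min behavior just described can be checked directly with a short sketch (ours; the function name is our own). For several 1D point sources, −ℏ log ψ is a smooth underestimate of the distance to the nearest source that converges to it as ℏ → 0; the log-sum-exp trick keeps the computation stable at small ℏ:

```python
import math

def soft_distance(x, sources, hbar, f0=1.0):
    """-hbar * log of the summed modified Green's functions: a smooth
    soft-min of the per-source distances f0 * |x - y_k|, computed with the
    log-sum-exp trick so that small hbar does not cause underflow."""
    exponents = [-f0 * abs(x - y) / hbar for y in sources]
    m = max(exponents)
    log_psi = m + math.log(sum(math.exp(e - m) for e in exponents))
    return -hbar * log_psi

sources = [0.0, 3.0]
true_dist = min(abs(1.0 - y) for y in sources)           # = 1.0
approx = [soft_distance(1.0, sources, hb) for hb in (0.5, 0.1, 0.01)]
```

The approximation always lies at or below the true minimum distance (a sum of exponentials dominates the largest term), and the gap closes as ℏ shrinks.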

4 General eikonal equations

Armed with the above setup, we can now solve the eikonal equation for arbitrary, positive-valued, bounded forcing functions f(X). We first show that even for general f, when ψ satisfies the same differential equation as in the constant forcing case (replacing f₀ by f(X)), namely

\hbar^2 \nabla^2 \psi = f^2(X) \, \psi, \qquad (4.1)

and S is related to ψ by ψ = exp(−S/ℏ), then S asymptotically satisfies the eikonal equation (1.1) as ℏ → 0. We show this for the 2D case, but the generalization to higher dimensions is straightforward.

When ψ = exp(−S/ℏ), the first partials of ψ are

\frac{\partial \psi}{\partial x} = -\frac{1}{\hbar} \frac{\partial S}{\partial x} e^{-S/\hbar}, \qquad \frac{\partial \psi}{\partial y} = -\frac{1}{\hbar} \frac{\partial S}{\partial y} e^{-S/\hbar}. \qquad (4.2)

The second partials required for the Laplacian are

\frac{\partial^2 \psi}{\partial x^2} = \left[ \frac{1}{\hbar^2} \left( \frac{\partial S}{\partial x} \right)^2 - \frac{1}{\hbar} \frac{\partial^2 S}{\partial x^2} \right] e^{-S/\hbar}, \qquad \frac{\partial^2 \psi}{\partial y^2} = \left[ \frac{1}{\hbar^2} \left( \frac{\partial S}{\partial y} \right)^2 - \frac{1}{\hbar} \frac{\partial^2 S}{\partial y^2} \right] e^{-S/\hbar}. \qquad (4.3)

From this, equation (4.1) can be rewritten as

\left[ \left( \frac{\partial S}{\partial x} \right)^2 + \left( \frac{\partial S}{\partial y} \right)^2 - \hbar \left( \frac{\partial^2 S}{\partial x^2} + \frac{\partial^2 S}{\partial y^2} \right) \right] e^{-S/\hbar} = f^2 \, e^{-S/\hbar}, \qquad (4.4)

which in simplified form is

\| \nabla S \|^2 - \hbar \nabla^2 S = f^2. \qquad (4.5)

The additional term ℏ∇²S [relative to (1.1)] is referred to as the viscosity term (11); (25), which emerges naturally from the Schrödinger equation derivation—an intriguing result which differs from direct solutions of the nonlinear eikonal equation that artificially incorporate viscosity terms. Since ∇²S is bounded, as ℏ → 0, (4.5) tends to ‖∇S‖² = f², which is the original eikonal equation (1.1). This relationship motivates us to solve the linear Schrödinger equation (4.1) instead of the nonlinear eikonal equation and then compute the scalar field S via

S(X) = -\hbar \log \psi(X). \qquad (4.6)
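The derivation above can be verified symbolically; the following sketch (assuming SymPy, with our own variable names) substitutes ψ = exp(−S/ℏ) into ℏ²∇²ψ and confirms that, after dividing by ψ, what remains is ‖∇S‖² − ℏ∇²S, i.e. the viscosity term appears on its own:

```python
import sympy as sp

x, y, hbar = sp.symbols("x y hbar", positive=True)
S = sp.Function("S")(x, y)
psi = sp.exp(-S / hbar)

# hbar^2 * Laplacian(psi), divided by psi, should reduce to
# |grad S|^2 - hbar * Laplacian(S): the viscosity term emerges automatically
lhs = sp.simplify(hbar ** 2 * (sp.diff(psi, x, 2) + sp.diff(psi, y, 2)) / psi)
expected = (sp.diff(S, x) ** 2 + sp.diff(S, y) ** 2
            - hbar * (sp.diff(S, x, 2) + sp.diff(S, y, 2)))
residual = sp.simplify(lhs - expected)
```

A zero residual reproduces equations (4.4)-(4.5) mechanically, with no hand manipulation of the exponentials.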

5 Perturbation theory approach to solving the linear system

Since the linear system (4.1) (and its forced version) does not have a closed-form solution for non-constant forcing functions, one approach to solving it is perturbation theory (14). Assuming that f(X) is close to a constant, non-zero forcing function f₀, equation (4.1) can be rewritten as

\hbar^2 \nabla^2 \psi - f_0^2 \psi = \left( f^2(X) - f_0^2 \right) \psi. \qquad (5.1)

Now, defining the operator

T g \equiv -G * \left[ \left( f^2 - f_0^2 \right) g \right], \qquad (5.2)

where G is the Green's function for the screened Poisson operator ℏ²∇² − f₀², and the function ψ₀ = G * Σ_k δ(X − Y_k), we see that ψ₀ satisfies

\hbar^2 \nabla^2 \psi_0 - f_0^2 \psi_0 = -\sum_{k=1}^{K} \delta(X - Y_k) \qquad (5.3)

and

(I - T) \, \psi = \psi_0. \qquad (5.4)

Notice that in the differential equation (5.3) for ψ₀, the forcing function is constant and equals f₀ everywhere. Hence ψ₀ behaves like the wave function corresponding to the constant forcing function f₀ described in Section 3.2 and can be approximated by

\psi_0(X) = \sum_{k=1}^{K} \exp\!\left( -\frac{f_0 \|X - Y_k\|}{\hbar} \right). \qquad (5.5)

We solve for ψ in (5.4) using a geometric series approximation for (I − T)⁻¹. Firstly, observe that the approximate solution for ψ₀ in (5.5) is a square-integrable function, which is necessary for the subsequent steps.

Let L²(ℝⁿ) denote the space of square-integrable functions on ℝⁿ, i.e., g ∈ L²(ℝⁿ) if and only if ∫ |g|² dμ < ∞. The norm of a function g is given by ‖g‖ = (∫ |g|² dμ)^{1/2}, where μ is the Lebesgue measure on ℝⁿ. Let B denote the closed unit ball in this Hilbert space and ‖T‖ be the operator norm defined as

\|T\| = \sup_{g \in B} \|T g\|. \qquad (5.6)

If ‖T‖ < 1, we can approximate (I − T)⁻¹ using the first few terms of the geometric series to get

(I - T)^{-1} \approx I + T + T^2 + \cdots + T^m, \qquad (5.7)

where the operator norm of the difference can be bounded by

\left\| (I - T)^{-1} - \sum_{n=0}^{m} T^n \right\| \le \frac{\|T\|^{m+1}}{1 - \|T\|}, \qquad (5.8)

which converges to 0 exponentially in m. We would like to point out that the above geometric series approximation is similar to a Born expansion used in scattering theory (23). We now derive an upper bound for ‖T‖.

Let s = \sup_X |f^2(X) - f_0^2|, where s < ∞ since f is bounded. We now provide an upper bound for ‖T‖. For a given g ∈ B, let v = T g, where v satisfies the relation \hbar^2 \nabla^2 v - f_0^2 v = -u with u = (f^2 - f_0^2) g and vanishing Dirichlet boundary conditions at infinity. Then

\langle v, \hbar^2 \nabla^2 v \rangle - f_0^2 \|v\|^2 = -\langle v, u \rangle. \qquad (5.9)

Using the identity \nabla \cdot (v \nabla v) = \|\nabla v\|^2 + v \nabla^2 v, we write

\int v \, \nabla^2 v \, d\mu = \int \nabla \cdot (v \nabla v) \, d\mu - \int \|\nabla v\|^2 \, d\mu. \qquad (5.10)

The divergence theorem states that \int \nabla \cdot (v \nabla v) \, d\mu = \oint v \, \nabla v \cdot \hat{n} \, dA, which vanishes by the boundary conditions, and hence

\langle v, \nabla^2 v \rangle = -\int \|\nabla v\|^2 \, d\mu \le 0. \qquad (5.11)

Using the above relation in (5.9), we find f_0^2 \|v\|^2 \le \langle v, u \rangle \le \|v\| \, \|u\| (by the Cauchy-Schwarz inequality), implying that

\|v\| \le \frac{\|u\|}{f_0^2}. \qquad (5.12)

Furthermore, as for any X

|u(X)| = |f^2(X) - f_0^2| \, |g(X)| \le s \, |g(X)|, \qquad (5.13)

we get

\|u\| \le s \, \|g\| \le s. \qquad (5.14)

Since v = T g with g ∈ B arbitrary, from equations (5.12) and (5.14) we can deduce that

\|T\| \le \frac{s}{f_0^2}. \qquad (5.15)

It is worth commenting that the bound for ‖T‖ is independent of ℏ. So, if we guarantee that s/f₀² < 1, the geometric series approximation (5.7) converges for all values of ℏ.

5.1 Deriving a bound for convergence of the perturbation series

Interestingly, for any positive, upper-bounded forcing function bounded away from zero, i.e. 0 < α ≤ f²(X) ≤ β for some constants α and β, by defining f₀² = β we observe that s/f₀² = (β − α)/β < 1. From equation (5.15), we immediately see that ‖T‖ < 1. This proves the existence of an f₀ for which the geometric series approximation (5.7) is always guaranteed to converge for any positive bounded forcing function bounded away from zero. The choice of f₀ can then be made prudently by defining it to be the value that minimizes

g(f_0^2) = \frac{\max\!\left( \beta - f_0^2, \; f_0^2 - \alpha \right)}{f_0^2}. \qquad (5.16)

This in turn minimizes the bound on the operator norm ‖T‖, thereby providing a better geometric series approximation for the inverse (5.7).

Let α = min_X f², β = max_X f², and let g(a) = max(β − a, a − α)/a for a ∈ [α, β]. We now show that g attains its minimum at

a^* = f_0^2 = \frac{\alpha + \beta}{2}. \qquad (5.17)

Case (i): If a ≤ a*, then β − a ≥ a − α, so g(a) = (β − a)/a, which is decreasing in a. Clearly,

g(a) \ge g(a^*) = \frac{\beta - \alpha}{\beta + \alpha}. \qquad (5.18)

Case (ii): If a ≥ a*, then g(a) = (a − α)/a, which is increasing in a. It follows that

g(a) \ge g(a^*) = \frac{\beta - \alpha}{\beta + \alpha}. \qquad (5.19)

We therefore see that f₀² = (α + β)/2 is the optimal value.
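A quick numerical check of this choice (our sketch; the function and variable names are ours): for sampled values of f², the midpoint of the extreme values of f² minimizes the bound max(β − f₀², f₀² − α)/f₀², and the resulting bound is below one:

```python
import numpy as np

def norm_bound(f0_sq, f_sq):
    """Upper bound on ||T||: max |f^2 - f0^2| / f0^2 over the sampled grid."""
    return max(f_sq.max() - f0_sq, f0_sq - f_sq.min()) / f0_sq

f_sq = np.linspace(0.5, 2.0, 101) ** 2            # sampled values of f^2
a_star = 0.5 * (f_sq.min() + f_sq.max())          # claimed optimum (alpha+beta)/2
candidates = np.linspace(f_sq.min(), f_sq.max(), 2001)
best = min(norm_bound(a, f_sq) for a in candidates)
```

A brute-force search over candidate values of f₀² finds no choice that beats the midpoint, and the midpoint bound (β − α)/(β + α) is strictly less than one, guaranteeing convergence of the series.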

Using the above approximation (5.7) for (I − T)⁻¹ and the definition of T from (5.2), we obtain the solution for ψ as

\psi \approx \sum_{n=0}^{m} \psi_n, \qquad \psi_n \equiv T^n \psi_0, \qquad (5.20)

where each ψ_n satisfies the recurrence relation

\hbar^2 \nabla^2 \psi_n - f_0^2 \psi_n = \left( f^2 - f_0^2 \right) \psi_{n-1}, \qquad n \ge 1. \qquad (5.21)

Observe that (5.21) is an inhomogeneous, screened Poisson equation with a constant forcing function f₀. Following a Green's function approach (1), each ψ_n can be obtained by convolution:

\psi_n = -G * \left[ \left( f^2 - f_0^2 \right) \psi_{n-1} \right], \qquad (5.22)

where G is given by equations (3.6), (3.7) or (3.8), depending upon the spatial dimension.

Once the ψ_n's are computed, the wave function ψ can then be determined using the approximation (5.20). The solution for the eikonal equation can be recovered using the relation (4.6). Notice that if f = f₀ everywhere, then all ψ_n's except ψ₀ are identically equal to zero and we get ψ = ψ₀, as described in Section 3.

5.2 Efficient computation of the wave function

In this section, we provide numerical techniques for efficiently computing the wave function ψ. Recall that we are interested in solving the eikonal equation only at the given discrete grid locations. Consider the solution for ψ₀ given in (5.5). In order to obtain the desired solution for ψ₀ computationally, we must replace the Dirac delta function by the Kronecker delta function

\delta_d(X) = \begin{cases} 1 & X \in \{ Y_1, \ldots, Y_K \}, \\ 0 & \text{otherwise}, \end{cases} \qquad (5.23)

that takes the value 1 at the point-set locations Y_k and 0 at other grid locations. Then ψ₀ can be exactly computed at the grid locations by the discrete convolution of G̃ (setting C = 1) with the Kronecker delta function.

To compute ψ_n, we replace each of the convolutions in (5.22) with the discrete convolution between the functions computed at the grid locations. By the convolution theorem (4), a discrete convolution can be obtained as the inverse Fourier transform of the product of the two individual transforms, which for two sequences of length N can be performed in O(N log N) time (9). Thus, the values of each ψ_n at the grid locations can be efficiently computed in O(N log N), making use of the values of ψ_{n−1} determined at the earlier step. The overall time complexity to compute the approximate ψ using the first m terms is then O(mN log N). Taking the logarithm of ψ and multiplying by −ℏ then provides an approximate solution to the eikonal equation. The algorithm is summarized in Table 1.

1. Compute the modified Green's function G̃ at the grid locations.
2. Define the function δ_d which takes the value 1 at the point-set locations
and 0 at other grid locations.
3. Compute the FFT of G̃ and δ_d, namely F(G̃) and F(δ_d) respectively.
4. Compute the function F(G̃) F(δ_d).
5. Compute the inverse FFT of F(G̃) F(δ_d) to obtain ψ₀ at the grid locations.
6. Initialize ψ to ψ₀.
7. Consider the Green's function G corresponding to the spatial dimension
and compute its FFT, namely F(G).
8. For n = 1 to m do
9.            Define u_n = −(f² − f₀²) ψ_{n−1}.
10.            Compute the FFT of u_n, namely F(u_n).
11.            Compute the function F(G) F(u_n).
12.            Compute the inverse FFT of F(G) F(u_n) and multiply it with the grid width
           area/volume to compute ψ_n at the grid locations.
13.            Update ψ ← ψ + ψ_n.
14. End
15. Take the logarithm of ψ and multiply it by −ℏ to get
the approximate solution for the eikonal equation at the grid locations.
Table 1: Algorithm for the approximate solution of the eikonal equation
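A minimal 1D rendition of the algorithm in Table 1 can be written in a few lines of NumPy (our sketch, with simplifications the paper does not make: a periodic grid, a single point source, the 1D Green's function with normalization 1/(2f₀ℏ), and the midpoint choice of f₀²):

```python
import numpy as np

def eikonal_schrodinger_1d(f, h, src_idx, hbar, n_iter=6):
    """1D sketch of the FFT/perturbation eikonal solver on a periodic grid.
    f: sampled forcing values, h: grid width, src_idx: point-source index.
    Returns S = -hbar * log(psi) at the grid locations."""
    N = f.size
    f0_sq = 0.5 * (f.min() ** 2 + f.max() ** 2)   # optimal constant forcing
    f0 = np.sqrt(f0_sq)
    # Green's function of hbar^2 G'' - f0^2 G = -delta, wrapped circularly
    d = np.minimum(np.arange(N), N - np.arange(N)) * h
    G_hat = np.fft.fft(np.exp(-f0 * d / hbar) / (2.0 * f0 * hbar))
    # psi_0: discrete convolution of G with a Kronecker delta at the source
    delta = np.zeros(N)
    delta[src_idx] = 1.0
    psi_n = np.real(np.fft.ifft(G_hat * np.fft.fft(delta)))
    psi = psi_n.copy()
    # Born-series iterations: psi_n = -G * ((f^2 - f0^2) * psi_{n-1});
    # the factor h approximates the continuous convolution integral
    for _ in range(n_iter):
        rhs = (f ** 2 - f0_sq) * psi_n
        psi_n = -h * np.real(np.fft.ifft(G_hat * np.fft.fft(rhs)))
        psi = psi + psi_n
    return -hbar * np.log(np.maximum(psi, 1e-300))

# Constant forcing: all higher series terms vanish and, after removing the
# constant -hbar*log(C) offset by normalizing at the source, S is exactly
# f0 times the distance to the source
N, h, src = 401, 0.01, 200
x = np.arange(N) * h
S_const = eikonal_schrodinger_1d(np.ones(N), h, src, hbar=0.2, n_iter=2)
S_const = S_const - S_const[src]
```

For a varying forcing function, the truncated series converges at the geometric rate discussed above, and the recovered S approximates the 1D eikonal solution (the integral of f from the source) up to the O(ℏ) viscosity error.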

We would like to emphasize that the number of terms (m) used in the geometric series approximation of (5.7) is independent of ℏ. Using more terms only improves the approximation of this truncated geometric series, as shown in the experimental section. From equation (5.8), it is evident that the error incurred due to this approximation converges to zero exponentially in m, and hence even with a small value of m we should be able to achieve good accuracy.

5.3 Numerical issues

In principle, we should be able to apply our technique at very small values of ℏ and obtain highly accurate results. But we noticed that a naïve double-precision implementation tends to deteriorate for ℏ values very close to zero. This is due to the fact that at small values of ℏ (and also at large distances from the sources), the Green's function drops off very quickly, and hence for grid locations which are far away from the point-set, the convolution done using the FFT may not be accurate. To this end, we turned to the GNU MPFR multiple-precision arithmetic library, which provides arbitrary-precision arithmetic with correct rounding (16). MPFR is based on the GNU multiple-precision library (GMP) (18). It enabled us to run our technique at very small values of ℏ, giving highly accurate results. We corroborate our claim and demonstrate the usefulness of our method with the set of experiments described in the subsequent section.
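The underflow issue can be reproduced in a few lines. Here mpmath, a Python arbitrary-precision library, stands in for MPFR (the paper's implementation used MPFR directly; the specific values of ℏ and distance below are our own illustrative choices):

```python
import math
import mpmath   # arbitrary-precision arithmetic, standing in here for MPFR

hbar, dist = 0.01, 8.0                 # so f0 * |x - y| / hbar = 800
double_psi = math.exp(-dist / hbar)    # underflows to 0.0 in IEEE-754 doubles

mpmath.mp.dps = 500                    # work with ~500 significant digits
psi = mpmath.exp(-dist / hbar)         # representable at high precision
recovered = -hbar * mpmath.log(psi)    # recovers the distance term exactly
```

In double precision the wave function is indistinguishable from zero at this grid point, so −ℏ log ψ is undefined; with extended precision the exponent survives the round trip.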

5.4 Experimental verification of the perturbation approach

In this section, we demonstrate the usefulness of our perturbation approach by computing the approximate solution to the general eikonal equation (1.1) over a grid.

Comparison with the true solution

Example 1: In this example, we solve the eikonal equation for a scenario where the exact solution is known a priori at the grid locations. The boundary condition is S = 0 at the point source located at . The forcing function, the absolute gradient of the true solution, is specified on a grid consisting of points between and with a grid width of . We ran the Schrödinger method for six iterations at a small value of ℏ and fast sweeping for enough iterations for both methods to converge. The percentage error is calculated according to

\text{\% error} = \frac{\sum_i d_i}{\sum_i a_i} \times 100, \qquad (5.24)

where a_i and d_i are, respectively, the actual value and the absolute difference between the computed and actual values at the i-th grid point. The percentage error and the maximum difference between the true and approximate solutions for different iteration counts are summarized in Table 2.

Iter % error max diff
1 2.081042 0.002651
2 1.514745 0.002140
3 1.390552 0.002142
4 1.363256 0.002128
5 1.357894 0.002128
6 1.356898 0.002128
Table 2: Percentage error for the Schrödinger method for different iterations

The fast sweeping method gave a percentage error of . We believe that the error incurred in our Schrödinger approach can be further reduced by decreasing ℏ, but at the expense of more computational power, requiring higher-precision floating-point arithmetic.

The contour plots of the true solution and those obtained from the Schrödinger and fast sweeping methods are displayed below (Figure 1). We can immediately observe the similarity of our solution to the true solution. We also observe smoother isocontours in our Schrödinger method relative to fast sweeping.

Figure 1: Contour plots: (i) Left: True solution, (ii) Center: Schrödinger, and (iii) Right: Fast sweeping

Comparison with the fast sweeping

In order to verify the accuracy of our technique, we compared our solution with fast sweeping for the following set of examples, using the latter as the ground truth since the true solution is not available in closed form.

Example 2: In this example we solved the eikonal equation from a point source located at for the following forcing function

(5.25)

on a grid consisting of points between and with a grid width of . We ran our method for six iterations and fast sweeping for enough iterations for both techniques to converge. When we calculated the percentage error for the Schrödinger solution according to equation (5.24) (with fast sweeping as the ground truth), the error was just around . The percentage error and maximum difference between the fast sweeping and Schrödinger solutions after each iteration are summarized in Table 3.

Iter % error max diff
1 1.144632 0.008694
2 1.269028 0.008274
3 1.223836 0.005799
4 1.246392 0.006560
5 1.244885 0.006365
6 1.245999 0.006413
Table 3: Percentage error for the Schrödinger method in comparison to fast sweeping.

We believe that the fluctuations in both the percentage error and the maximum difference are due to repeated approximation of the continuous convolution by discrete convolution and summation; nevertheless, they stabilized after 6 iterations. The contour plots shown in Figure 2 clearly demonstrate the similarity between the methods.

Figure 2: Contour plots: (i) Left: Schrödinger, (ii) Right: Fast sweeping

Example 3: Here we solved the eikonal equation for the sinusoidal forcing function

(5.26)

on the same grid as in the previous example. We randomly chose grid locations namely,

as data locations and ran our method for iterations with set at and ran fast sweeping for iterations. The percentage error between the Schrödinger solution (after 6 iterations) and fast sweeping was with the maximum absolute difference between them being .

The contour plots are shown in Figure 3. Notice that the Schrödinger contours are smoother in comparison to the fast sweeping contours.

Figure 3: Contour plots: (i) Left: Schrödinger, (ii) Right: Fast sweeping

Example 4: Here we compared with fast sweeping on a larger grid consisting of points between and with a grid width of . We again considered the sinusoidal forcing function

(5.27)

and chose 4 grid locations, namely , as data locations. Notice that the Green's function goes to zero exponentially fast for grid locations away from the sources for small values of ℏ. Hence for a grid location which is reasonably far away, the computed value of the Green's function may underflow to zero even when we use a large number of precision bits. This problem can be easily circumvented by first scaling down the entire grid by a factor γ, computing the solution on the smaller, denser grid, and then rescaling it back by γ to obtain the actual solution. It is worth emphasizing that scaling down the grid is tantamount to scaling down the forcing function, as clearly seen from the fast sweeping method. In fast sweeping (33), the solution is computed using the quantity f_i h, where f_i is the value of the forcing function at the i-th grid location and h is the grid width. Hence scaling down h by a factor of γ is equivalent to fixing h and scaling down f by γ. Since the eikonal equation (1.1) is linear in f, computing the solution for a scaled-down f – equivalent to a scaled-down grid – and then rescaling it back again is guaranteed to give the actual solution.

The factor γ can be set to any desired quantity. For the current experiment we set γ and ran our method for 6 iterations. Fast sweeping was run for 15 iterations. The percentage error between these methods was about . The contour plots are shown in Figure 4. Again, the contours obtained from the Schrödinger method are smoother than those obtained from fast sweeping.

Figure 4: Contour plots: (i) Left: Schrödinger, (ii) Right: Fast sweeping

6 Discretization approach to solve the linear system with applications to path planning and shape from shading

As seen above, the perturbation technique requires repeated convolution of the Green's function with the solution ψ_{n−1} from the previous iteration. As this convolution does not have a closed-form solution in general, we approximate the continuous convolution by its discrete counterpart computed via the FFT. The error incurred from this discrete approximation tends to pile up with iteration. To circumvent this issue, a possible direct route to solving the screened Poisson equation in (4.1) is discretization of the linear operator ℏ²∇² − f² on a standard grid, where the Laplacian is approximated by standard finite differences. Solving the discretized screened Poisson equation is far less complicated to implement than the fast marching or fast sweeping methods needed for directly solving (1.1). In addition, the computational complexity for implementing (4.1) can match these algorithms since sparse linear system solvers are available (28). This comes from the fact that a finite difference approximation to (4.1), using a standard five-point Laplacian stencil, simply results in a sparse linear system of the form (12):

where N represents the number of grid points and each diagonal block is a tri-diagonal matrix of the form

The function δ_d in the above sparse system is defined in (5.23); it is supported on, and takes the value one at, only the point-set locations. Though better complexities are achievable using multigrid solvers (28), one can just as well address many problems in a satisfactory manner using direct sparse solvers, like MATLAB's backslash operator—an approach we adopt for the experiments in the present paper.
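A sketch of this discretization in SciPy (ours; the grid size, ℏ, and source placement are arbitrary illustrative choices): the five-point stencil yields the sparse system, a direct solver plays the role of MATLAB's backslash, and S is recovered from the logarithm of ψ:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_screened_poisson_2d(f, h, src, hbar):
    """Five-point-stencil discretization of
    hbar^2 * Laplacian(psi) - f^2 * psi = -delta_src (zero Dirichlet walls),
    solved directly as a sparse system; returns S = -hbar * log(psi)."""
    n = f.shape[0]
    lap1d = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h ** 2
    eye = sp.identity(n)
    A = hbar ** 2 * (sp.kron(lap1d, eye) + sp.kron(eye, lap1d)) \
        - sp.diags(f.ravel() ** 2)
    b = np.zeros(n * n)
    b[src[0] * n + src[1]] = -1.0          # Kronecker delta at the source
    psi = spsolve(A.tocsr(), b).reshape(n, n)
    return -hbar * np.log(np.maximum(psi, 1e-300))

# Constant forcing f = 1: after normalizing at the source (which removes the
# constant offset), S grows like the Euclidean distance from the source,
# up to O(hbar) corrections
n, h, hbar = 121, 0.05, 0.1
S = solve_screened_poisson_2d(np.ones((n, n)), h, (60, 60), hbar)
S_rel = S - S[60, 60]
sA, sB = S_rel[60, 80], S_rel[60, 100]     # true distances 1.0 and 2.0
```

The difference in S between two points along a ray from the source approximates the difference in their Euclidean distances, which is the behavior a direct eikonal solver would produce.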

6.1 Path planning

In the pioneering contribution of (22), Kimmel and Sethian developed one of the earliest applications of the eikonal equation to path planning. By finding a solution S (referred to as the value function in the path planning context) to the eikonal equation (1.1), we immediately recover the minimum cost to go from a source location in the state space to any other point (in the state space). Here, we impose the boundary condition S(X₀) = 0, consider f(X) as the cost to travel through location X (the higher the value, the more costly), and prescribe it as a strictly positive function. The function f is set to high values in undesirable travel regions for the optimal path, and to very small values in favorable travel areas (and set to one at the source point). In comparison to other popular path planning techniques like potential field methods (21), the value function S is an example of a navigation function—a potential field free of local minima.

Given the scalar field solution $S$, the optimal paths are determined by gradient descent on $S$, a procedure typically referred to as backtracking. The backtracking procedure can be formulated as the ordinary differential equation

$$\frac{dx}{dt} = -\nabla S\bigl(x(t)\bigr), \qquad x(0) = x_1, \qquad (6.2)$$

whose solution $x(t)$ is the reconstructed path from a fixed target location $x_1$ back toward the source. We typically terminate the gradient backtracking procedure at some time $t^*$ such that $\|x(t^*) - x_0\| < \epsilon$ for some small $\epsilon > 0$, i.e. when we get arbitrarily close to the source point. By construction, backtracking on $S$ cannot get stuck in local minima; a simple proof by contradiction validates this claim: if $S$ is differentiable and has a local minimum at some $x' \neq x_0$, then $\nabla S(x') = 0$, which contradicts the eikonal equation $\|\nabla S\| = f > 0$. Although $S$ can, in theory, contain saddle points, this is usually not an issue in practice.
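A discrete version of the backtracking in eq. (6.2) can be sketched as follows. This is a minimal Python illustration (normalized-gradient descent on a sampled value function); the function name, step size, and stopping tolerance are our own assumptions, not prescribed by the formulation:

```python
import numpy as np

def backtrack(S, start, step=0.5, eps=1.0, max_iters=100000):
    """Sketch: gradient descent on a sampled value function S,
    from `start` toward the source (the global minimum of S)."""
    gy, gx = np.gradient(S)           # sampled gradient field of S
    x = np.asarray(start, dtype=float)
    path = [x.copy()]
    for _ in range(max_iters):
        i, j = int(round(x[0])), int(round(x[1]))
        g = np.array([gy[i, j], gx[i, j]])
        norm = np.linalg.norm(g)
        if norm < 1e-12:
            break                     # reached a critical point of S
        # Normalized descent step, clipped to remain on the grid
        x = np.clip(x - step * g / norm, 0, np.array(S.shape) - 1)
        path.append(x.copy())
        if S[int(round(x[0])), int(round(x[1]))] < eps:
            break                     # arbitrarily close to the source
    return np.array(path)
```

Because $S$ has no local minima away from the source, the iteration cannot stall at a spurious point; the only critical point it can reach is the source itself.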

6.2 Anecdotal verification of path planning on complex mazes and extensions to vessel segmentation

We applied our path planning approach to a variety of complex maze images. We explicitly chose the maze grid sizes to be much larger than is the norm for recent publications that apply the eikonal equation to path planning. An objective juxtaposition of contemporary fast sweeping and fast marching techniques, which require special discretization schemes, data structures, sweep orders, etc., against the approach presented here clearly illustrates the efficiency and simplicity of the latter. Our framework reduces path planning (a.k.a. all-pairs shortest path or geodesic processing) to four straightforward implementation steps:

  1. Define the cost function $f$, which assigns a high cost to untraversable areas in the grid and a low cost to traversable locations. In the experiments here, we simply let appropriately scaled versions of the maze images serve as $f$, with white pixels representing boundaries and black pixels the possible solution paths.

  2. Select a source point $x_0$ on the grid and a small value for $\hbar$. The solution to (4.1) will simultaneously recover all shortest paths back to this source from any non-constrained region in the grid.

  3. Use standard finite differencing techniques to discretize (4.1). This leads to a sparse, block tri-diagonal system which can be solved by a multitude of numerical linear system packages; we simply use MATLAB's backslash operator. Recover the approximate solution to the eikonal equation by computing $S = -\hbar \log \psi$.

  4. Backtrack to find the shortest path from any allowable grid location to the source, i.e. use eq. (6.2) to perform standard gradient backtracking on $S$.
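The exponent recovery in step 3 can be sanity-checked against a 1D case with a closed-form solution: for a constant cost $f$, the screened Poisson Green's function is $\exp(-f\,|x - x_0|/\hbar)$, so $S = -\hbar \log \psi$ reproduces the eikonal (scaled-distance) solution exactly. A minimal check in Python, with all constants chosen purely for illustration:

```python
import numpy as np

hbar, f, x0 = 0.1, 2.0, 0.5
x = np.linspace(0.0, 1.0, 101)
# Closed-form 1D screened Poisson Green's function for constant cost f
psi = np.exp(-f * np.abs(x - x0) / hbar)
# Step 3's exponent recovery
S = -hbar * np.log(psi)
assert np.allclose(S, f * np.abs(x - x0))   # S equals the eikonal solution
# |dS/dx| matches f everywhere away from the kink at the source x0
dS = np.gradient(S, x)
assert np.allclose(np.abs(dS)[np.abs(x - x0) > 0.05], f)
```

In higher dimensions and for spatially varying $f$ there is no such closed form, which is precisely why the sparse solve in step 3 is needed; the recovered $S$ is then only an approximation for small but non-zero $\hbar$.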

The grid sizes and execution times for several mazes are provided in Table 4.

2D Grid Dims. (No. of Points)    Time (sec.)
Fig. 5                           0.79
Fig. 6(a)                        1.02
Fig. 6(b)                        1.22
Fig. 6(c)                        0.77
Table 4: Maze grid sizes and time to solve the sparse system for path planning. Our approach simply uses MATLAB's backslash operator to solve the shortest path problem, avoiding the complex nuances of discretization schemes and specialized data structures required for fast marching.

Notice that even for the larger grids, the time to solve for $\psi$ is on the order of a few seconds, and this is simply using the basic sparse solver in MATLAB. The time complexity of MATLAB's direct solver on these 2D grid systems is roughly $O(M^{1.5})$, which makes our approach here slightly slower than optimal; an $O(M)$ runtime is achievable for sparse systems such as ours using multigrid methods, but we have opted to showcase the simplicity of our implementation over pure speed. Figure 5 illustrates the resulting optimal paths from three different locations.

Figure 5: Optimal paths from various locations to common source. (Note: constraint areas are in white and traversable regions in black.)

Figure 6 illustrates our path planning approach on a variety of mazes: (a) demonstrates path planning while paying homage to Schrödinger's cat, (b) is a traditional maze, and (c) is a whimsical result on a skull maze. Notice that in all these mazes there are multiple solution paths back to the source, but only the shortest path is chosen.

Figure 6: (a) Schrödinger’s cat maze, (b) standard maze, (c) skull maze. Multiple solutions are possible but only the shortest path is chosen to be optimal. (Note: constraint areas are in white and traversable regions in black.)

The path planning application discussed above can be readily extended to centerline extraction from medical images of blood vessels. One can view the image as a "maze" in which travel is restricted to the vessels. Figure 7, column (a) illustrates three example medical images: eye, brain, and hand. Column (b) showcases our results, while column (c) provides a comparative analysis against fast sweeping. Notice that our solution naturally generates smoother centerline segmentations, a natural consequence of the built-in, viscosity-like term in (4.5). The fast marching and fast sweeping methods, by contrast, tend to produce sharper transitions in the paths and deviate from the center; viscosity solutions can be used to alleviate this, but they are not organic to the formulation as in ours. In fact, it has been shown that additional constraints must be incorporated to ensure that fast marching approaches extract the centerline (13). Again, we stress the simplicity and ease of use of this approach, with only one free parameter, $\hbar$, making it a viable option for many path planning related applications, such as robotic navigation, optimal manipulation, and vessel extraction in medical images.