Sampling-based Roadmap Planners are Probably Near-Optimal after Finite Computation


Andrew Dobson    George V. Moustakides    Kostas E. Bekris
Abstract

Sampling-based motion planners have proven to be efficient solutions to a variety of high-dimensional, geometrically complex motion planning problems with applications in several domains. The traditional view of these approaches is that they solve challenges efficiently by giving up formal guarantees and instead attaining asymptotic properties in terms of completeness and optimality. Recent work has argued, based on Monte Carlo experiments, that these approaches also exhibit desirable probabilistic properties in terms of completeness and optimality after finite computation. The current paper formalizes these guarantees. It proves a formal bound on the probability that solutions returned by asymptotically optimal roadmap-based methods (e.g., PRM*) are within a bound of the optimal path length $I^*_\epsilon$ with clearance $\epsilon$ after a finite iteration $n$. This bound has the form $\mathbb{P}\big(I_n \le (1+\delta)\,I^*_\epsilon\big) \ge P_n$, where $\delta$ is an error term for the length of a path in the PRM* graph, $I_n$. This bound is proven for general-dimension Euclidean spaces and evaluated in simulation. A discussion of how this bound can be used in practice, as well as bounds for sparse roadmaps, is also provided.

1 Background

Early contributions in sampling-based motion planning focused on overcoming the computational challenges posed by motion planning problems with high dimensionality and geometrically complex spaces (Latombe, 1991; LaValle, 2006; Choset et al., 2005). Two alternative families of sampling-based planners emerged during this process: roadmap-based methods, such as PRM (Kavraki et al., 1996; Kavraki and Latombe, 1998), which are suited to multi-query planning, and tree-based approaches, such as RRT (LaValle, 1998; LaValle and Kuffner, 2000). Formal analysis of these methods followed, showing they are probabilistically complete (Kavraki et al., 1998; Hsu et al., 1998; Ladd and Kavraki, 2004; Chaudhuri and Koltun, 2009). Though these methods are probabilistically complete, the literature has shown that solution non-existence can be detected under certain conditions (Varadhan and Manocha, 2005; McCarthy et al., 2012). Other work aims to return high-clearance paths or to characterize the C-space obstacles (Wilmarth et al., 1999; Amato et al., 1998), and yet other methods return high-quality solutions in practice (Raveh et al., 2011).

A major recent breakthrough was the identification of the conditions under which these methods asymptotically converge to optimal paths (Karaman and Frazzoli, 2011, 2010), resulting in algorithms such as PRM* and RRT*. Both probabilistic completeness and asymptotic optimality relate to desirable properties after infinite computation time. Since these methods are practically terminated after some finite amount of computation, these guarantees cannot provide information about expected path cost or about solution non-existence in practice (Varadhan and Manocha, 2005; McCarthy et al., 2012). Nevertheless, experiments show that asymptotically optimal methods do have very good behavior in terms of path quality after finite computation time, even when optimality constraints are relaxed to create more efficient methods with path length guarantees (Marble and Bekris, 2013; Salzman and Halperin, 2013; Wang et al., 2013). To address the gap between practical experience and formal guarantees, recent work by the authors has proposed, using Monte Carlo experiments, that asymptotically optimal sampling-based planners also exhibit probabilistic near-optimality properties after finite computation (Dobson and Bekris, 2013). This kind of guarantee is similar to the concept of Probably Approximately Correct (PAC) solutions in the machine learning literature (Valiant, 1984). The focus in this work is on the properties of roadmap-based methods, such as PRM*, as they are easier to analyze.

This work formally shows the Probabilistic Near-Optimality (PNO) of sampling-based roadmap methods in general settings and with limited assumptions. It provides the following contributions relative to the state-of-the-art and the previous contribution by the authors (Dobson and Bekris, 2013):

  • Prior work relied on Monte Carlo simulations to provide path length bounds, while this work achieves tight, closed-form bounds. This required solving a problem in geometric probability, which to the best of the authors’ knowledge had not been addressed before.

  • The framework is extended to work with a version of PRM* which constructs a roadmap having $O(n \log n)$ edges, which is in the order of the lower bound for asymptotic optimality. Prior work used a method called sPRM, which creates $O(n^2)$ edges.

2 Problem Setup

This section introduces terminology and definitions required for the formal analysis. This work examines kinematic planning in the configuration space $\mathcal{C}$, where a robot's configuration is cast as a point. $\mathcal{C}$ is partitioned into the collision-free ($\mathcal{C}_{free}$) and colliding ($\mathcal{C}_{obs}$) configurations. This work reasons over $\mathcal{C}_{free}$ as a metric space, using the Euclidean 2-norm as a distance metric. The objective is to compute a path after finite iterations with path-length guarantees relative to an $\epsilon$-robust feasible path, i.e., a path with minimum distance to $\mathcal{C}_{obs}$ of at least $\epsilon$. If a motion planning problem is robustly feasible, there exists a set of $\epsilon$-robust paths which answer a query. Let the path of minimum length from this set be denoted as $\pi^*_\epsilon$, with length $I^*_\epsilon$. The path planning problem this work considers is the following:

Defn. 1 (Robustly Feasible Motion Planning)

Let the tuple $(\mathcal{C}, q_0, q_1, \epsilon)$ be an instance of a Robustly Feasible Motion Planning (RFMP) Problem. Given a configuration space $\mathcal{C}$, two configurations $q_0, q_1 \in \mathcal{C}_{free}$, and a clearance value $\epsilon$ so that an $\epsilon$-robust path $\pi_\epsilon$ exists so that $\pi_\epsilon(0) = q_0$ and $\pi_\epsilon(1) = q_1$, find a solution path $\pi$ so that $\pi(0) = q_0$ and $\pi(1) = q_1$.
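For concreteness, the tuple can be encoded as a small data structure. The following Python sketch is purely illustrative; the names RFMPProblem, is_free, and clearance are assumptions of this sketch, not part of the formal definition:

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class RFMPProblem:
    """Illustrative container for an RFMP instance (C, q0, q1, epsilon)."""
    dim: int                                  # dimension d of the configuration space
    is_free: Callable[[np.ndarray], bool]     # membership test for C_free
    clearance: Callable[[np.ndarray], float]  # distance of a configuration to C_obs
    q0: np.ndarray                            # start configuration
    q1: np.ndarray                            # goal configuration
    epsilon: float                            # required clearance of the robust path

    def is_robust(self, path: np.ndarray) -> bool:
        """Check that a densely discretized path keeps clearance >= epsilon."""
        return all(self.clearance(q) >= self.epsilon for q in path)
```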

To solve this problem, a slight variation of the PRM* algorithm is applied (Karaman and Frazzoli, 2011). The high-level operations of PRM* are as follows (a minimal code sketch is provided after the list):

  • PRM* generates configurations uniformly at random in $\mathcal{C}$, rejecting samples generated in $\mathcal{C}_{obs}$, and then adding each accepted sample $q$ to a graph $G_n = (V_n, E_n)$, i.e., $V_n \leftarrow V_n \cup \{q\}$.

  • For each sample, a local neighborhood in $\mathcal{C}_{free}$ of radius $r_n = \gamma \left(\frac{\log n}{n}\right)^{1/d}$ (Karaman and Frazzoli, 2011) is examined. If a local path to a neighbor can be generated which remains entirely in $\mathcal{C}_{free}$, an edge connecting them is added to $E_n$.

  • The above steps are repeated iteratively until some stopping criterion is met.
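The following minimal Python sketch illustrates these high-level operations under stated assumptions: sample_free and collision_free_segment are hypothetical problem-specific helpers, and the radius follows the $r_n = \gamma(\log n / n)^{1/d}$ schedule cited above.

```python
import math
import numpy as np

def prm_star(sample_free, collision_free_segment, gamma, d, n_samples):
    """Minimal PRM* sketch: sample, connect within radius r_n, repeat.

    sample_free() -> np.ndarray               draws a uniform sample from C_free
    collision_free_segment(q, q2) -> bool     checks the local path q -> q2
    gamma                                     connection-radius constant
    d                                         dimension of the configuration space
    """
    V, E = [], []
    for n in range(1, n_samples + 1):
        q = sample_free()                      # rejection sampling handles C_obs
        r_n = gamma * (math.log(n) / n) ** (1.0 / d)
        for i, q_near in enumerate(V):
            if np.linalg.norm(q - q_near) <= r_n and collision_free_segment(q, q_near):
                E.append((len(V), i))          # undirected edge: new node <-> neighbor
        V.append(q)
    return V, E

# Example usage in an obstacle-free unit square:
# V, E = prm_star(lambda: np.random.rand(2), lambda a, b: True,
#                 gamma=2.77, d=2, n_samples=500)
```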

This work's variant of PRM* uses a larger connection radius constant $\gamma$ than is strictly required for asymptotic optimality, and the reason why becomes apparent from the analysis. The larger connection radius allows for the following property to be argued:

Prop. 1 (Probabilistic Near-Optimality for RFMP)

An algorithm $ALG$ is probabilistically near-optimal for an RFMP problem $(\mathcal{C}, q_0, q_1, \epsilon)$, if for a finite iteration $n$ of $ALG$ and a given error threshold $\delta$, it is possible to compute a probability $P_n$ so that for the length $I_n$ of a path answering the query in the planning structure computed by $ALG$ at iteration $n$:

$\mathbb{P}\big(I_n \le (1+\delta)\,I^*_\epsilon\big) \ge P_n,$

where $I^*_\epsilon$ is the length of the optimum $\epsilon$-robust path for a given clearance value $\epsilon$.

Figure 1: Hyperballs over an optimal path $\pi^*_\epsilon$ with radius $\theta_n$ and separation $2\theta_n$. Consecutive balls lie entirely within some clearance ball of radius $\epsilon$ centered on $\pi^*_\epsilon$.

The clearance $\epsilon$ of the optimum path considered at iteration $n$, and the iteration after which the guarantee can be achieved, can be computed given the analysis in this work.

Probabilistic Near-Optimality (PNO) can be argued by reasoning over a theoretical construction of hyperballs tiled over $\pi^*_\epsilon$, where hyperballs are denoted as $B_r(c)$, being centered at configuration $c$ and having radius $r$. The construction of these hyperballs is illustrated in Figure 1. Construct $M$ balls, centered along $\pi^*_\epsilon$, i.e., $B_{\theta_n}(c_1), \dots, B_{\theta_n}(c_M)$ with $c_i \in \pi^*_\epsilon$, having radius $\theta_n = \frac{r_n}{4}$, where $r_n$ is the connection radius used by the algorithm. The construction enforces the centers of the balls to be $2\theta_n$ apart, and by choice of $\theta_n$, these balls have empty intersections. Then, since any two points in consecutive hyperballs are within distance $4\theta_n = r_n$ of one another, the algorithm will attempt connections between any pairs of points between consecutive hyperballs. PNO guarantees are over a path in the planning structure with length $I_n$. This path corresponds to the set of all the first samples generated in each of the hyperballs.
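A short sketch of how the construction's parameters would be computed in practice, assuming the radius schedule above (all numeric inputs are illustrative):

```python
import math

def construction_parameters(gamma, d, n, path_length):
    """Parameters of the hyperball construction over the optimal path.

    Assumes the radius schedule r_n = gamma * (log n / n)^(1/d), ball radius
    theta_n = r_n / 4, and center separation 2 * theta_n, as in the text.
    """
    r_n = gamma * (math.log(n) / n) ** (1.0 / d)
    theta_n = r_n / 4.0                           # hyperball radius
    M = math.ceil(path_length / (2.0 * theta_n))  # number of balls over the path
    return r_n, theta_n, M

# Example: a 2D problem with gamma = 2.0, optimal path length 10, n = 10000.
print(construction_parameters(2.0, 2, 10_000, 10.0))
```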

Then, using the steps from related work (Karaman and Frazzoli, 2011), the constant $\gamma$ can be derived, as well as a connection function $k(n)$ for an equivalent $k$-nearest PRM* variant. These values are derived in the next section.

3 Derivation

This section provides a bound on the probability that PRM* returns poor-quality paths. Namely, it constructs the probability of $I_n$ being $(1+\delta)$ times larger than the optimal path length after $n$ iterations. Then, it provides a guarantee $\mathbb{P}\big(I_n \le (1+\delta)\,I^*_\epsilon\big) \ge P_n$, where $\delta$ is an input multiplicative bound, and $P_n$ is a confidence bound. A guarantee of this type can be considered a Probably Near-Optimal (PNO) property. First, the algorithmic parameters $\gamma$ and $k(n)$ are derived.

3.1 Deriving $\gamma$

This section employs the same steps as the derivation for $\gamma_{PRM}$ in the literature (Karaman and Frazzoli, 2011). The objective of this section is to leverage a bound on the probability that PRM* will fail to produce a sample in each of the hyperballs over $\pi^*_\epsilon$ to derive an appropriate constant for the connection radius. Let this connection radius employed by the PRM* variant be $r_n = \gamma\left(\frac{\log n}{n}\right)^{1/d}$. Then, by construction, this connection radius is at least four times larger than the radius of a hyperball, i.e., $r_n \ge 4\theta_n$. Then,

$\mu(B_{\theta_n}) = \zeta_d\,\theta_n^d = \zeta_d\,\frac{\gamma^d}{4^d}\,\frac{\log n}{n},$

where $\zeta_d$ is the $d$-dimensional constant for the volume of a hyperball. Also by construction, consecutive ball centers are $2\theta_n$ apart. Then, the number of hyperballs constructed over $\pi^*_\epsilon$ can be bounded by $M \le \left\lceil \frac{I^*_\epsilon}{2\theta_n}\right\rceil$.

Then, in line with previous work in the literature (Kavraki et al., 1998; Karaman and Frazzoli, 2011), the probability of failure can be bounded using the probability that a single hyperball contains no sample. The event that a single hyperball does not contain a sample is denoted as $\bar{A}^i_n$, and has probability:

$\mathbb{P}(\bar{A}^i_n) = \left(1 - \frac{\mu(B_{\theta_n})}{\mu(\mathcal{C}_{free})}\right)^n.$

Then, since $1 - x \le e^{-x}$,

$\mathbb{P}(\bar{A}^i_n) \le e^{-n\,\frac{\mu(B_{\theta_n})}{\mu(\mathcal{C}_{free})}} = n^{-\frac{\zeta_d\,\gamma^d}{4^d\,\mu(\mathcal{C}_{free})}}. \qquad (1)$
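As a quick numeric sanity check of Eq. 1, the following minimal Python sketch (an illustration, assuming an obstacle-free unit square as $\mathcal{C}_{free}$ in $d = 2$) compares the empirical frequency of an empty ball against the exact value and the exponential upper bound:

```python
import numpy as np

rng = np.random.default_rng(0)

def empty_ball_frequency(n, theta, trials=2000):
    """Empirical probability that a ball of radius theta centered at
    (0.5, 0.5) in the unit square receives no sample among n uniform samples."""
    center = np.array([0.5, 0.5])
    misses = 0
    for _ in range(trials):
        pts = rng.random((n, 2))
        if not np.any(np.linalg.norm(pts - center, axis=1) <= theta):
            misses += 1
    return misses / trials

n, theta = 500, 0.05
v = np.pi * theta**2                    # mu(B_theta); mu(C_free) = 1 here
print(empty_ball_frequency(n, theta))   # empirical frequency
print((1 - v) ** n)                     # exact value (1 - v)^n
print(np.exp(-n * v))                   # upper bound e^{-nv} from Eq. (1)
```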

Now, compute bounds on the probability of the event $\bar{A}_n$ that at least one ball does not contain a sample. Via the union bound over the $M$ balls:

$\mathbb{P}(\bar{A}_n) = \mathbb{P}\Big(\bigcup_{i=1}^{M}\bar{A}^i_n\Big) \le M\;\mathbb{P}(\bar{A}^i_n).$

Substituting the computed value for $M$, and $\mathbb{P}(\bar{A}^i_n)$ from Eq. 1:

$\mathbb{P}(\bar{A}_n) \le \left\lceil \frac{2\,I^*_\epsilon}{\gamma}\left(\frac{n}{\log n}\right)^{1/d} \right\rceil\; n^{-\frac{\zeta_d\,\gamma^d}{4^d\,\mu(\mathcal{C}_{free})}}.$

Now, if $\sum_{n=1}^{\infty}\mathbb{P}(\bar{A}_n)$ is less than infinity, this implies by the Borel-Cantelli theorem that $\mathbb{P}(\limsup_{n\to\infty}\bar{A}_n) = 0$ (Grimmett and Stirzaker, 2001). Furthermore, by the Zero-One Law, $\mathbb{P}(\liminf_{n\to\infty} A_n) = 1$, meaning the probability of coverage converges to $1$ in the limit.

In order for the sum to be less than infinity, it is sufficient to show that the exponent of $n$ satisfies $\frac{1}{d} - \frac{\zeta_d\,\gamma^d}{4^d\,\mu(\mathcal{C}_{free})} < -1$. The algorithm can ensure this by using an appropriate value of $\gamma$. Solving the inequality for $\gamma$ shows that it suffices that:

$\gamma \ge 4\left(\frac{\left(1+\frac{1}{d}\right)\mu(\mathcal{C}_{free})}{\zeta_d}\right)^{1/d}.$
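For illustration, this sufficient constant can be evaluated numerically; the sketch below assumes $\mu(\mathcal{C}_{free})$ is known and uses the closed form of $\zeta_d$:

```python
import math

def gamma_lower_bound(d, mu_free):
    """Sufficient connection-radius constant from the summability condition
    1/d - zeta_d * gamma^d / (4^d * mu_free) < -1, solved for gamma."""
    zeta_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)  # unit-ball volume
    return 4.0 * ((1.0 + 1.0 / d) * mu_free / zeta_d) ** (1.0 / d)

# Example: a unit-square free space in d = 2 requires gamma >= ~2.76.
print(gamma_lower_bound(2, 1.0))
```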

3.2 Deriving $k(n)$ for $k$-nearest PRM*

This section employs the same steps as the derivation for $k(n)$ in the literature (Karaman and Frazzoli, 2011). The objective of this section is to derive the connection function, $k(n)$, for a $k$-nearest variant of PRM*. The high-level idea is that it will be shown that two events happen infinitely often with the given $r_n$: the set of hyperballs each contain at least one sample, and each ball of radius $r_n$ has no more than $k(n)$ samples inside it. From this, it is clear that if the $k$-nearest PRM* attempts to connect each sample with its $k(n)$ nearest neighbors, it will attempt connections between samples in neighboring hyperballs.

Then, using the computed value of $r_n$ from above,

$\mu(B_{r_n}) = \zeta_d\, r_n^d = \zeta_d\,\gamma^d\,\frac{\log n}{n}.$

Let $I_i$ be an indicator random variable which takes value $1$ when the $i$-th sample lies in some arbitrary hyperball of radius $r_n$. Then, $\mathbb{P}(I_i = 1) = \frac{\mu(B_{r_n})}{\mu(\mathcal{C}_{free})}$. Since each sample is drawn independently of the others, the number of samples in a ball can be expressed as a random variable $N_n = \sum_{i=1}^{n} I_i$, such that $\mathbb{E}[N_n] = n\,\frac{\mu(B_{r_n})}{\mu(\mathcal{C}_{free})}$. Due to $I_i$ being a Bernoulli random variable, the Chernoff Bound can be employed to bound the probability of $N_n$ taking large values, namely:

$\mathbb{P}\big(N_n > (1+\delta_c)\,\mathbb{E}[N_n]\big) \le \left(\frac{e^{\delta_c}}{(1+\delta_c)^{(1+\delta_c)}}\right)^{\mathbb{E}[N_n]}.$

Then, let $\delta_c = e - 1$. Substituting this above yields:

$\mathbb{P}\big(N_n > e\,\mathbb{E}[N_n]\big) \le e^{-\mathbb{E}[N_n]}.$

Now, in order for the $k$-nearest variant to attempt connections spanning an entire $r_n$-ball, it must be that:

$k(n) \ge N_n,$

which clearly holds if $k(n) \ge e\,\mathbb{E}[N_n]$ and $N_n \le e\,\mathbb{E}[N_n]$. This implies that it suffices to choose $k(n) = e\,\frac{\zeta_d\,\gamma^d}{\mu(\mathcal{C}_{free})}\,\log n$.

Finally, consider the event $D_n$ that even one of the $M$ balls has more than $e\,\mathbb{E}[N_n]$ samples:

$\mathbb{P}(D_n) \le M\, e^{-\mathbb{E}[N_n]}.$

Then, it is clear that $\sum_{n=1}^{\infty}\mathbb{P}(D_n) < \infty$, which by the Borel-Cantelli Theorem implies that $\mathbb{P}(\limsup_{n\to\infty} D_n) = 0$, and furthermore, $\mathbb{P}(\liminf_{n\to\infty}\bar{D}_n) = 1$ via the Zero-One Law, i.e., the number of samples in each $r_n$-ball is almost certainly less than $k(n)$.

Finally, using the result showing the convergence of $\mathbb{P}(A_n)$ to $1$, and the above result for $D_n$, it can be concluded that $\mathbb{P}\big(\liminf_{n\to\infty}(A_n \cap \bar{D}_n)\big) = 1$, implying that for this choice of $k(n)$, the $k$-nearest PRM* variant attempts the appropriate connections.
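A small numeric sketch of the resulting connection schedule (the constant follows the reconstruction above, and all inputs are illustrative):

```python
import math

def k_nearest_schedule(d, mu_free, gamma, n):
    """Connection function k(n) = e * zeta_d * gamma^d / mu_free * log(n)
    for the k-nearest PRM* variant discussed above."""
    zeta_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    k0 = math.e * zeta_d * gamma ** d / mu_free
    return max(1, math.ceil(k0 * math.log(n)))

# Example: d = 2, unit-square free space, gamma from the bound in Sec. 3.1.
print([k_nearest_schedule(2, 1.0, 2.77, n) for n in (100, 1000, 10000)])
```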

3.3 Deriving the Probability of Coverage

The derivation of the probability of path coverage leverages several results in the literature (Kavraki et al., 1998; Karaman and Frazzoli, 2011; Dobson and Bekris, 2013). The objective is to exactly derive the probability that at any finite iteration, $n$, the algorithm has generated a sample in each of the hyperballs over $\pi^*_\epsilon$. Deriving this probability will work off of the result shown in prior work which gives the probability of coverage for a similar construction of hyperballs to that employed here (Dobson and Bekris, 2013), which shows:

$\mathbb{P}(A_n) \ge \left(1 - \left(1 - \frac{\zeta_d\,\theta^d}{\mu(\mathcal{C}_{free})}\right)^n\right)^{M}, \qquad (2)$

where $\theta$ is the radius of the set of hyperballs and $M$ is the number of such hyperballs. Here, the inner term is the probability of failing to throw a sample in a particular hyperball after $n$ samples have been thrown. Then, the probability of success for throwing a sample in all of the hyperballs yields the above form. This holds for any values of $\theta$ and $M$ such that the hyperballs are disjoint, which is exactly the construction employed in this work. Then, substituting the values computed for $\theta_n$ and $M$ from Section 3.1 above yields:

$\mathbb{P}(A_n) \ge \left(1 - \left(1 - \frac{\zeta_d\,\gamma^d}{4^d\,\mu(\mathcal{C}_{free})}\,\frac{\log n}{n}\right)^n\right)^{M},$

where $\theta_n = \frac{\gamma}{4}\left(\frac{\log n}{n}\right)^{1/d}$, $M = \left\lceil \frac{I^*_\epsilon}{2\theta_n}\right\rceil$, and $\zeta_d$ is the d-dimensional constant for the volume of a hyperball, i.e., $\mu(B_r(c)) = \zeta_d\, r^d$. Then, simplifying this expression yields the following Lemma:

Lemma 1 (Probability of Path Coverage)

Let $A_n$ be the event that for one execution of PRM* there exists at least one sample in each of the $M$ hyperballs of radius $\theta_n$ over the clearance-robust optimal path, $\pi^*_\epsilon$, for a specific value of $\epsilon$ and $n$. Then,

$\mathbb{P}(A_n) \ge \left(1 - n^{-\alpha}\right)^{\left\lceil \beta\left(\frac{n}{\log n}\right)^{1/d}\right\rceil}, \qquad (3)$

where $\alpha = \frac{\zeta_d\,\gamma^d}{4^d\,\mu(\mathcal{C}_{free})}$ and $\beta = \frac{2\,I^*_\epsilon}{\gamma}$.
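To see how the Lemma 1 bound behaves as $n$ grows, the following sketch evaluates Eq. 3 directly (all parameter values are illustrative assumptions):

```python
import math

def coverage_probability(n, d, gamma, mu_free, path_length):
    """Evaluate the Lemma 1 lower bound
    P(A_n) >= (1 - n^-alpha)^(ceil(beta * (n / log n)^(1/d)))."""
    zeta_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    alpha = zeta_d * gamma ** d / (4 ** d * mu_free)
    beta = 2.0 * path_length / gamma
    M = math.ceil(beta * (n / math.log(n)) ** (1.0 / d))
    return (1.0 - n ** (-alpha)) ** M

for n in (10**3, 10**4, 10**5):
    print(n, coverage_probability(n, d=2, gamma=2.77, mu_free=1.0, path_length=5.0))
```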

3.4 Deriving a probabilistic bound

Let $\bar{A}_n$ be the event that there does not exist a sample in each of the hyperballs covering a path, i.e., the complement of $A_n$. Then, the value for $\mathbb{P}\big(I_n \ge (1+\delta)\,I^*_\epsilon\big)$ can be expressed as:

$\mathbb{P}\big(I_n \ge (1+\delta)\,I^*_\epsilon\big) = \mathbb{P}\big(I_n \ge (1+\delta)\,I^*_\epsilon \mid A_n\big)\,\mathbb{P}(A_n) + \mathbb{P}\big(I_n \ge (1+\delta)\,I^*_\epsilon \mid \bar{A}_n\big)\,\mathbb{P}(\bar{A}_n).$

This is because the probability of returning a low-quality path is expressed as a sum of probabilities, when event $A_n$ has occurred, and when $A_n$ has not occurred. Since $\mathbb{P}(\bar{A}_n) = 1 - \mathbb{P}(A_n)$, then via Lemma 1, both $\mathbb{P}(A_n)$ and $\mathbb{P}(\bar{A}_n)$ are known for known $\gamma$ and $n$. It is assumed that the probability of a path being larger than $(1+\delta)\,I^*_\epsilon$ is quite high if $A_n$ has not happened, i.e., $\mathbb{P}\big(I_n \ge (1+\delta)\,I^*_\epsilon \mid \bar{A}_n\big)$ is close to $1$; therefore, this probability can be upper bounded by $1$. All that remains is to compute $\mathbb{P}\big(I_n \ge (1+\delta)\,I^*_\epsilon \mid A_n\big)$. Let $Y_n$ be a random variable identically distributed with $I_n$, but having zero mean, i.e., $Y_n = I_n - \mathbb{E}[I_n]$. Then, let

$t = (1+\delta)\,I^*_\epsilon - \mathbb{E}[I_n].$

Then, the absolute value can be removed, as $|Y_n| \ge t$ holds when $Y_n \ge t$ or $Y_n \le -t$. Then, the probability is equal to the sum:

$\mathbb{P}(|Y_n| \ge t) = \mathbb{P}(Y_n \ge t) + \mathbb{P}(Y_n \le -t),$

where due to symmetry,

$\mathbb{P}(Y_n \ge t) = \tfrac{1}{2}\,\mathbb{P}(|Y_n| \ge t).$

Rearranging the terms inside the probability yields:

$\mathbb{P}\big(I_n \ge (1+\delta)\,I^*_\epsilon \mid A_n\big) = \tfrac{1}{2}\,\mathbb{P}\big(\big|I_n - \mathbb{E}[I_n]\big| \ge (1+\delta)\,I^*_\epsilon - \mathbb{E}[I_n] \;\big|\; A_n\big).$

This probability will be bounded with Chebyshev's Inequality, which states:

$\mathbb{P}\big(|X - \mathbb{E}[X]| \ge t\big) \le \frac{\mathrm{Var}[X]}{t^2}.$

In order to employ this inequality, both $\mathbb{E}[I_n]$ and $\mathrm{Var}[I_n]$ for the length of a path in the PRM* planning structure, $I_n$, are needed.
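Assembling the pieces of this section, the following sketch shows how the final PNO bound would be evaluated, assuming $\mathbb{E}[I_n]$ and $\mathrm{Var}[I_n]$ are supplied by the analysis of Section 3.5 (the numbers below are hypothetical):

```python
def pno_bound(delta, I_opt, E_In, Var_In, P_cov):
    """Upper bound on P(I_n >= (1+delta) * I_opt):
    (1/2) * Var[I_n] / t^2 * P(A_n) + 1 * (1 - P(A_n)), with t as below."""
    t = (1.0 + delta) * I_opt - E_In
    if t <= 0:
        return 1.0                       # the bound is vacuous for this delta
    chebyshev = min(1.0, 0.5 * Var_In / t ** 2)
    return chebyshev * P_cov + (1.0 - P_cov)

# Hypothetical numbers: optimal length 5, E[I_n] = 5.2, Var[I_n] = 0.01,
# coverage probability 0.999, multiplicative error delta = 0.1.
print(pno_bound(0.1, 5.0, 5.2, 0.01, 0.999))
```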

3.5 Approximation of $\mathbb{E}[I_n]$ in PRM*

Figure 2: The differential over a lower-dimensional hyperball, illustrated for $d = 3$.

Let $I_n = \sum_{i=1}^{M-1} S_i$, where $S_i$ is the length of a single segment between two random samples in consecutive disjoint balls. Then, because all $S_i$ are I.I.D., $\mathbb{E}[I_n] = (M-1)\,\mathbb{E}[S]$. Then, to compute $\mathbb{E}[I_n]$, $\mathbb{E}[S]$ is computed. This problem is similar to the problem known as the ball-line picking problem from geometric probability (Santalo, 1976). The ball-line picking problem is to compute the average length of a segment lying within a d-dimensional hyperball, where the endpoints of the segment are uniformly distributed within the hyperball. The ball-line picking problem yields an analytical solution in general dimension; however, in the problem examined here, there are two disjoint hyperballs rather than a single hyperball. To the best of the authors' knowledge, this variant of the problem has not been previously studied. Computing this value requires integration over the possible locations of the endpoints of the segment, as illustrated in Figure 3.
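The quantity $\mathbb{E}[S]$ can be sanity-checked by simulation before deriving it. Below is a minimal Monte Carlo sketch of the two-ball variant, drawing uniform endpoints from two disjoint $d$-balls whose centers are a fixed distance apart (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_in_ball(center, radius, d, size):
    """Uniform samples in a d-ball via normalized Gaussians and the radial CDF."""
    x = rng.normal(size=(size, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    r = radius * rng.random(size) ** (1.0 / d)
    return center + x * r[:, None]

def mean_segment_length(d, theta, separation, trials=200_000):
    """Estimate E[S] for endpoints drawn from two disjoint balls of radius theta."""
    c1 = np.zeros(d)
    c2 = np.zeros(d); c2[0] = separation
    p = sample_in_ball(c1, theta, d, trials)
    q = sample_in_ball(c2, theta, d, trials)
    return np.linalg.norm(p - q, axis=1).mean()

# Example in d = 3 with radius theta = 0.25 and center separation 2 * theta.
print(mean_segment_length(3, 0.25, 0.5))
```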

Figure 3: Illustrations in 3D of the mean calculation. (top) The first set of integrals is performed over the left hyperball, averaging the distance between points $p$ and $q$. (bottom) Using the result from the first set of integrals, a second set of integrals is performed over the second hyperball, yielding the expected value.

The integration is broken into two steps, and the first integral will be for the situation depicted in Figure 3 (top). The objective is to get an expected value for the distance between points $p$ and $q$. Here, $p$ represents a random point within the first hyperball, while $q$ is some fixed point within the second hyperball which has distance $\ell$ from the center of the first hyperball. Without loss of generality, $q$ can be displaced along only the first coordinate, $x_1$. To get an expected value, this distance is integrated over all points within the first hyperball, and then divided by the volume of the d-dimensional hyperball. In this work, the volume of a d-dimensional hypersphere of radius $R$ is denoted $\zeta_d R^d$, where $\zeta_d$ is a constant dependent on the dimension of the space. Taking the distance between $p$ and $q$ to be $\|p - q\|$ produces the following integral:

$\mathbb{E}\big[\|p - q\| \,\big|\, q\big] = \frac{1}{\zeta_d\,R^d}\int_{B_R(0)} \|p - q\|\; dp.$

This integral will be converted from a d-dimensional integral into a double integral using substitution. First, let $\rho^2 = x_2^2 + \dots + x_d^2$. This allows performing the integral over only two variables, $x_1$ and $\rho$; however, the form of the integral changes, as the differential is adapted as illustrated in Figure 2. This differential, $dV_{d-1}$, is taken over a lower-dimensional hypersphere, of dimension $d-1$, as $\rho$ is taking the place of $d-1$ coordinates. Then:

$\mathbb{E}\big[\|p - q\| \,\big|\, q\big] = \frac{1}{\zeta_d\,R^d}\iint_{x_1^2 + \rho^2 \le R^2} \|p - q\|\; dV_{d-1}\, dx_1,$

where $V_{d-1}(\rho) = \zeta_{d-1}\,\rho^{d-1}$. Taking this derivative, $\frac{dV_{d-1}}{d\rho} = (d-1)\,\zeta_{d-1}\,\rho^{d-2}$, and substituting into the integral yields:

$\mathbb{E}\big[\|p - q\| \,\big|\, q\big] = \frac{(d-1)\,\zeta_{d-1}}{\zeta_d\,R^d}\iint_{x_1^2 + \rho^2 \le R^2} \|p - q\|\;\rho^{d-2}\; d\rho\, dx_1.$

The integral can be represented in terms of polar coordinates, where $x_1 = t\cos\phi$, $\rho = t\sin\phi$, and $dx_1\,d\rho = t\;dt\,d\phi$. This gives

$\mathbb{E}\big[\|p - q\| \,\big|\, q\big] = \frac{(d-1)\,\zeta_{d-1}}{\zeta_d\,R^d}\int_0^R\int_0^\pi \sqrt{t^2 - 2\,\ell\, t\cos\phi + \ell^2}\;\; t^{d-1}\,\sin^{d-2}\phi\; d\phi\, dt.$

A second-order Taylor approximation for the square root is taken. Let $f(u) = \sqrt{\ell^2 + u}$, where $u = t^2 - 2\,\ell\, t\cos\phi$. The approximation will be taken about the point $u = 0$. This is reasonable given that overall, $t \le R$ is considered to be smaller than the separation between consecutive hyperballs, $\ell$. Take the second-order Taylor approximation as:

$f(u) \approx f(0) + f'(0)\,u + \tfrac{1}{2}\,f''(0)\,u^2.$

Taking derivatives of $f$ yields $f'(0) = \frac{1}{2\ell}$ and $f''(0) = -\frac{1}{4\ell^3}$. Then,

$\sqrt{\ell^2 + u} \approx \ell + \frac{u}{2\ell} - \frac{u^2}{8\ell^3},$

and substituting $u = t^2 - 2\,\ell\, t\cos\phi$,

$\|p - q\| \approx \ell + \frac{t^2 - 2\,\ell\, t\cos\phi}{2\ell} - \frac{\big(t^2 - 2\,\ell\, t\cos\phi\big)^2}{8\ell^3}.$

Then, as this is a second-order approximation, the third- and fourth-order terms in $t$ are considered negligible, and thus, the approximation results in:

$\|p - q\| \approx \ell - t\cos\phi + \frac{t^2}{2\ell} - \frac{t^2\cos^2\phi}{2\ell} = \ell - t\cos\phi + \frac{t^2\sin^2\phi}{2\ell}.$

Substituting the result:

$\mathbb{E}\big[\|p - q\| \,\big|\, q\big] \approx \frac{(d-1)\,\zeta_{d-1}}{\zeta_d\,R^d}\int_0^R\int_0^\pi \left(\ell - t\cos\phi + \frac{t^2\sin^2\phi}{2\ell}\right) t^{d-1}\,\sin^{d-2}\phi\; d\phi\, dt.$

Simplifying this integral requires the following Lemmas:

Lemma 2 (Value of $J_d$)

In terms of the hyperball volume constant, $\zeta_d$,

$J_d = \int_0^\pi \sin^d\phi\; d\phi = \frac{(d+2)\,\zeta_{d+2}}{(d+1)\,\zeta_{d+1}}.$

Proof

For simplicity, let $\int_0^\pi \sin^d\phi\, d\phi$ be denoted as $J_d$. Then:

$\mu(B_R(c)) = \zeta_d\, R^d,$

where $\zeta_d$ is a constant dependent on the dimension, $d$. Then, the volume can be computed as an integral of the following form:

$\zeta_d\, R^d = \iint_{x_1^2 + \rho^2 \le R^2} (d-1)\,\zeta_{d-1}\,\rho^{d-2}\; d\rho\, dx_1,$

where the second differential is over a sphere of radius $\rho$ of dimension $d-1$. Now, to simplify this integral, it will be converted to polar coordinates, using $x_1 = t\cos\phi$, $\rho = t\sin\phi$, and $dx_1\,d\rho = t\,dt\,d\phi$. Substituting these values yields:

$\zeta_d\, R^d = (d-1)\,\zeta_{d-1}\int_0^R\int_0^\pi t^{d-1}\,\sin^{d-2}\phi\; d\phi\, dt = (d-1)\,\zeta_{d-1}\,\frac{R^d}{d}\,J_{d-2},$

and solving for $J_{d-2}$ gives $J_{d-2} = \frac{d\,\zeta_d}{(d-1)\,\zeta_{d-1}}$, which is the claimed identity after shifting the index by two. $\square$

Lemma 3 (Recurrence relation of $J_d$)

For $d \ge 2$, the following recurrence relation holds:

$J_d = \frac{d-1}{d}\,J_{d-2}.$

Proof To determine this recurrence, the ratio $\frac{J_d}{J_{d-2}}$ will be solved for. Substitute the result from Lemma 2, getting:

$\frac{J_d}{J_{d-2}} = \frac{(d+2)\,\zeta_{d+2}}{(d+1)\,\zeta_{d+1}}\cdot\frac{(d-1)\,\zeta_{d-1}}{d\,\zeta_d}.$

Then, substituting the value of $\zeta_d = \frac{2\pi}{d}\,\zeta_{d-2}$ (so that $\zeta_{d+2} = \frac{2\pi}{d+2}\,\zeta_d$ and $\zeta_{d+1} = \frac{2\pi}{d+1}\,\zeta_{d-1}$) yields:

$\frac{J_d}{J_{d-2}} = \frac{2\pi\,\zeta_d\,(d-1)\,\zeta_{d-1}}{2\pi\,\zeta_{d-1}\; d\,\zeta_d} = \frac{d-1}{d}. \qquad\square$
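Both identities are straightforward to verify numerically; the following sketch (which assumes scipy is available for quadrature) compares $J_d$ against Lemma 2's closed form and Lemma 3's recurrence:

```python
import math
from scipy.integrate import quad

def zeta(d):
    """Volume constant of the d-dimensional unit ball."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def J(d):
    """Numerical value of the integral of sin^d over [0, pi]."""
    return quad(lambda phi: math.sin(phi) ** d, 0.0, math.pi)[0]

for d in range(2, 8):
    lemma2 = (d + 2) * zeta(d + 2) / ((d + 1) * zeta(d + 1))  # Lemma 2
    lemma3 = (d - 1) / d * J(d - 2)                           # Lemma 3
    print(d, J(d), lemma2, lemma3)   # all three columns should agree
```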

Applying these Lemmas, the second integral over $\phi$ will integrate to $0$, due to the presence of the cosine, while the other terms leverage Lemmas 2 and 3:

$\mathbb{E}\big[\|p - q\| \,\big|\, q\big] \approx \ell + \frac{(d-1)\,R^2}{2\,(d+2)\,\ell}.$

This is only an intermediate result, however, and it must be integrated over once again to consider all possible placements of the point $q$ in the second hyperball, as illustrated in Figure 3 (bottom). In order to do so, write $\ell$ in terms of the placement of $q$ by taking the distance between $q$ and the center of the first hyperball. Then, denoting the intermediate result as $g(\ell) = \ell + \frac{(d-1)\,R^2}{2\,(d+2)\,\ell}$, $\mathbb{E}[S]$ is computed as:

$\mathbb{E}[S] = \frac{1}{\zeta_d\,R^d}\int_{B_R(c_2)} g\big(\ell(q)\big)\; dq.$

Steps similar to those just taken to derive the intermediate result are used to compute this integral. As a matter of simplicity, note that the second term inside the integral is already a second-order term, which means taking the integral will result in higher-order terms. Since these are considered negligible, the second term will take only the constant term of the Taylor approximation for $\frac{1}{\ell(q)}$, i.e., $\frac{1}{L}$, where $L$ is the separation between the ball centers. Then, taking the integral over the second hyperball:

Again, perform a polar coordinate transformation so as to take the integral: