Densification Strategies for Anytime Motion Planning over Large Dense Roadmaps
We consider the problem of computing shortest paths in a dense motion-planning roadmap $G$. We assume that $N$, the number of vertices of $G$, is very large. Thus, using any path-planning algorithm that directly searches $G$, and may hence evaluate $\Theta(N^2)$ edges, becomes unacceptably expensive. We are therefore interested in anytime search that obtains successively shorter feasible paths and converges to the shortest path in $G$. Our key insight is to provide existing path-planning algorithms with a sequence of increasingly dense subgraphs of $G$. We study the space of all ($r$-disk) subgraphs of $G$. We then formulate and present two densification strategies for traversing this space which exhibit complementary properties with respect to problem difficulty. This inspires a third, hybrid strategy which has favourable properties regardless of problem difficulty. This general approach is then demonstrated and analyzed for the specific case where a low-dispersion deterministic sequence is used to generate the samples of $G$. Finally, we empirically evaluate the performance of our strategies on random scenarios in $\mathbb{R}^d$ for two values of $d$ and on manipulation planning problems for a 7-DOF robot arm, and validate our analysis.
Shushman Choudhury, Oren Salzman, Sanjiban Choudhury and Siddhartha S. Srinivasa
*Work by Sh. C., O. S. and S. S. was (partially) funded by the National Science Foundation IIS (#1409003), Toyota Motor Engineering & Manufacturing (TEMA), and the Office of Naval Research.
The Robotics Institute, Carnegie Mellon University. {shushmac, osalzman, sanjibac, ss5}@andrew.cmu.edu
Let $G$ be a motion-planning roadmap with $N$ vertices embedded in some configuration space (C-space). We consider the problem of finding a shortest path between two vertices of $G$. Specifically, we are interested in settings, prevalent in motion planning, where testing whether an edge of the graph is collision-free is computationally expensive. We call such graphs Explicit graphs with Expensive Edge-Evaluation, or E-graphs. Moreover, we are interested in the case where $N$ is very large and where the roadmap is dense, i.e. the number of edges is $\Theta(N^2)$. This makes any path-finding algorithm that directly searches $G$, subsequently performing $O(N^2)$ edge evaluations, impractical. We wish to obtain an approximation of the shortest path quickly and refine it as time permits. We refer to this problem as anytime planning on large E-graphs.
Our problem is motivated by previous work (Sec. II) on sampling-based motion-planning algorithms that construct a fixed roadmap as part of a preprocessing stage [16, 1, 24]. These methods are used to efficiently approximate the structure of the C-space. When the size of the roadmap is large, even finding a solution, let alone an optimal one, becomes a non-trivial problem requiring specifically-tailored search algorithms. Our roadmap formulation departs from the PRM setting, which chooses a connectivity radius that achieves asymptotic optimality. We are interested in dense, nearly-complete roadmaps that capture as much C-space connectivity information as possible, and that likely contain one or more paths strictly shorter than the optimal path of the standard PRM.
Our key insight for solving the anytime planning problem in large E-graphs is to provide existing path-planning algorithms with a sequence of increasingly dense subgraphs of $G$, using some densification strategy. At each iteration, we run a shortest-path algorithm on the current subgraph to obtain an increasingly tight approximation of the true shortest path. This favours using incremental search techniques that reuse information between calls. We present a number of such strategies and address the question:
How does the densification strategy affect the time at which the first solution is found, and the quality of the solutions obtained?
We focus on $r$-disk subgraphs of $G$, i.e. graphs defined over a specific set of vertices where every two vertices are connected if their mutual distance is at most $r$. This induces a space of subgraphs (Fig. 1) parameterized by the number of vertices $n$ and the connection radius $r$ (which, in turn, determines the number of edges). We observe two natural ways to traverse this space. The first is to define each subgraph over the entire set of $N$ vertices and incrementally add batches of edges by increasing $r$ (the vertical line at $n = N$ in Fig. 1). Alternatively, we can incrementally add batches of vertices and, at each iteration, consider the complete graph ($r = \sqrt{d}$, the diameter of the unit hypercube) defined over the current set of vertices (the parabolic arc in Fig. 1). We call these variants edge batching and vertex batching, respectively. Vertex batching and edge batching appear better suited to easy and hard problems, respectively, as visualized and explained in Fig. 2 and Fig. 3. This analysis motivates our hybrid batching strategy, which is more robust to problem difficulty.
Our main contribution is the formulation and analysis of various densification strategies for traversing the space of subgraphs of $G$ (Sec. IV). We analyse the specific case where the vertices of $G$ are obtained from a low-dispersion deterministic sequence (Sec. V). Specifically, we describe the structure of the space of subgraphs and demonstrate the tradeoff between effort and bounded suboptimality for our densification strategies. Furthermore, we explain how this tradeoff varies with problem difficulty, which is measured in terms of the clearance of the shortest path.
We discuss implementation decisions and parameters that allow us to efficiently use our strategies on dense E-graphs in Sec. VI. We then empirically validate our analysis on several random scenarios in $\mathbb{R}^d$ for two values of $d$, and on manipulation planning problems for a 7-DOF robot arm (Sec. VII). Finally, we discuss directions for future work (Sec. VIII).
II Related Work
II-A Sampling-based motion planning
Sampling-based planning approaches build a graph, or a roadmap, in the C-space, where vertices are configurations and edges are local paths connecting configurations. A path is then found by traversing this roadmap while checking whether the vertices and edges are collision-free. Initial algorithms such as PRM and RRT were concerned with finding a feasible solution. However, in recent years there has been growing interest in finding high-quality solutions. Karaman and Frazzoli introduced variants of the PRM and RRT algorithms, called PRM* and RRT*, respectively, and proved that, asymptotically, the solutions obtained by these algorithms converge to the optimal solution. However, the running times of these algorithms are often significantly higher than those of their non-optimal counterparts. Thus, subsequent algorithms have been suggested to increase the rate of convergence to high-quality solutions. They use different approaches such as lazy computation [1, 13, 22], informed sampling, pruning vertices, relaxing optimality, exploiting local information, and lifelong planning together with heuristics. In this work we employ several such techniques to speed up the convergence rate of our algorithms.
II-B Finite-time properties of sampling-based algorithms
We are interested in bounding the quality of a solution obtained using a fixed roadmap with a finite number of samples. When the samples are generated from a deterministic sequence, Janson et al. [12, Thm 2] give a closed-form bound on the quality of the solution of a PRM whose roadmap is an $r$-disk graph. The bound is a function of $r$, the number of vertices, and the dispersion of the set of points used. (See Sec. III for an exact definition of dispersion and for the bound given by Janson et al.)
Dobson et al. provide similar bounds when randomly sampled i.i.d. points are used. Specifically, they consider a PRM whose roadmap is an $r$-disk graph for a specific radius that is a function of the number of points, the dimension, and some constant. They then bound the probability that the quality of the solution will be worse than a given threshold.
II-C Efficient path-planning algorithms
We are interested in path-planning algorithms that attempt to reduce the number of computationally expensive edge evaluations performed in a search. This is typically done using heuristics, as in A*, Iterative Deepening A*, and Lazy Weighted A*. Some of these algorithms, such as Lifelong Planning A*, allow the shortest path to be recomputed efficiently when the graph undergoes changes. Anytime variants of A*, such as Anytime Repairing A* and Anytime Nonparametric A*, efficiently run a succession of A* searches, each with an inflated heuristic. This potentially obtains a fast approximation and refines its quality as time permits. However, there is no formal guarantee that these approaches decrease search time, and they may still evaluate all edges of a given graph. For a unifying formalism of such algorithms relevant to E-graphs, and for additional references, see the work of Dellin and Srinivasa.
III Notation, Problem Formulation and Mathematical Background
We provide standard notation and define our problem concretely. We then provide necessary mathematical background about the dispersion of a set of points.
III-A Notation and problem formulation
Let $\mathcal{X}$ denote a $d$-dimensional C-space, $\mathcal{X}_{\text{free}}$ the collision-free portion of $\mathcal{X}$, $\mathcal{X}_{\text{obs}}$ its complement, and let $\|\cdot\|$ be some distance metric. For simplicity, we assume that $\mathcal{X} = [0,1]^d$ and that the metric is the Euclidean norm. Let $S$ be some sequence of points in $\mathcal{X}$ and denote by $S_n$ the first $n$ elements of $S$. We define the $r$-disk graph $G_n(r)$ whose vertices are the points of $S_n$, whose edges connect every two vertices at distance at most $r$, and where each edge is weighted by the distance between its endpoints. See [14, 25] for various properties of such graphs in the context of motion planning. Finally, set $G = G_N(\sqrt{d})$, namely, the complete¹ graph defined over $S_N$.
¹Using a radius of $\sqrt{d}$ ensures that every two points will be connected, due to the assumption that $\mathcal{X} = [0,1]^d$ and that the metric is Euclidean.
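For concreteness, the $r$-disk construction can be sketched in a few lines of Python (a brute-force sketch; the function name and the dictionary-based edge representation are ours, not taken from any planning library):

```python
import itertools
import math

def r_disk_graph(points, r):
    """Build an r-disk graph: connect every pair of points whose Euclidean
    distance is at most r. Returns an edge map (i, j) -> edge weight.
    Brute-force O(n^2) pair enumeration; a k-d tree would be used at scale."""
    edges = {}
    for i, j in itertools.combinations(range(len(points)), 2):
        w = math.dist(points[i], points[j])
        if w <= r:
            edges[(i, j)] = w
    return edges

pts = [(0.0, 0.0), (0.5, 0.0), (1.0, 1.0)]
sparse = r_disk_graph(pts, r=0.6)          # only the closest pair connects
full = r_disk_graph(pts, r=math.sqrt(2))   # r = sqrt(d): complete graph
```

With $r = \sqrt{d}$ the construction recovers the complete graph, since $\sqrt{d}$ is the diameter of the unit hypercube.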
For ease of analysis we assume that the roadmap is complete, but our densification strategies and analysis can be extended to dense roadmaps that are not complete. Furthermore, our definition assumes that $G$ is embedded in the C-space. Thus, we use the terms vertices and configurations, as well as edges and paths in the C-space, interchangeably.
A query is a scenario with start and target configurations, denoted $x_s$ and $x_t$, respectively. The obstacles induce a mapping called a collision detector, which checks whether a configuration or an edge is collision-free. Typically, edges are checked by densely sampling along the edge, hence the term expensive edge evaluation. A feasible path is denoted by $\sigma: [0,1] \to \mathcal{X}_{\text{free}}$, where $\sigma(0) = x_s$ and $\sigma(1) = x_t$. Slightly abusing this notation, let $\sigma^*_{\mathcal{X}}$ be the shortest collision-free path from $x_s$ to $x_t$ that can be computed in $\mathcal{X}_{\text{free}}$ and $\delta_{\mathcal{X}}$ its clearance, and denote by $\sigma^*$ and $\delta^*$ the shortest path that can be computed in $G$ and its clearance, respectively. Note that a path has clearance $\delta$ if every point on the path is at distance at least $\delta$ from every obstacle.
Our problem calls for finding a sequence of increasingly shorter feasible paths in $G$, converging to $\sigma^*$. We assume that $N$ is sufficiently large, and that the roadmap covers the space well enough, so that for any reasonable set of obstacles there are multiple feasible paths between start and goal. Therefore, we do not consider the case where the entire roadmap is invalidated by obstacles. The large value of $N$ makes any path-finding algorithm that directly searches $G$, thereby performing $O(N^2)$ calls to the collision detector, too time-consuming to be practical.
III-B Mathematical background
The dispersion of a point set $S_n$ is defined as $D(S_n) = \sup_{x \in \mathcal{X}} \min_{s \in S_n} \|x - s\|$. Intuitively, it can be thought of as the radius of the largest empty ball (by some metric) that can be drawn around any point in the space without intersecting any point of $S_n$. Lower dispersion implies better coverage of the space by the points of $S_n$. When $\mathcal{X}$ is the $d$-dimensional Euclidean space and the metric is the Euclidean distance, deterministic sequences with dispersion of order $O(n^{-1/d})$ exist. A simple example is a set of points lying on a grid or a lattice.
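The definition can be visualized by estimating the dispersion of a finite point set numerically. The sketch below is our own illustration: evaluating the nearest-sample distance on a finite grid yields a lower bound on the true supremum.

```python
import itertools
import math

def dispersion_estimate(points, d, grid=50):
    """Estimate the dispersion of a point set in [0, 1]^d: the radius of the
    largest ball around some point of the space containing no sample.
    Checking a finite evaluation grid lower-bounds the true supremum."""
    axes = [[(k + 0.5) / grid for k in range(grid)] for _ in range(d)]
    worst = 0.0
    for x in itertools.product(*axes):
        worst = max(worst, min(math.dist(x, p) for p in points))
    return worst

# A 4x4 lattice of cell centers in the unit square: the true dispersion is
# half a cell diagonal, sqrt(2)/8 ~ 0.177 (attained at the domain corners).
grid_pts = [((i + 0.5) / 4, (j + 0.5) / 4) for i in range(4) for j in range(4)]
est = dispersion_estimate(grid_pts, d=2)
```

This matches the $O(n^{-1/d})$ rate for lattices: with $n = k^d$ lattice points, the largest empty ball has radius proportional to $1/k = n^{-1/d}$.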
Other low-dispersion deterministic sequences exist which also have low discrepancy, i.e. they appear random for many purposes. One such example is the Halton sequence. We use it extensively in our analysis because it has been studied in the context of deterministic motion planning [12, 2]. For Halton sequences, tight bounds on the dispersion exist; specifically, $D(S_n) \le p_d\, n^{-1/d}$, where $p_d$ is the $d$-th prime number. Subsequently in this paper, $D(S_n)$ denotes the dispersion of the first $n$ points of the Halton sequence $S$.
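For illustration, the Halton sequence is simple to generate from the radical-inverse (van der Corput) construction, one coprime base per coordinate. The helper names below are our own:

```python
def van_der_corput(i, base):
    """The i-th element (1-indexed) of the van der Corput sequence:
    the base-`base` digits of i, mirrored about the radix point."""
    f, x = 1.0, 0.0
    while i > 0:
        f /= base
        x += f * (i % base)
        i //= base
    return x

def halton(n, primes=(2, 3)):
    """First n points of the Halton sequence in d = len(primes) dimensions."""
    return [tuple(van_der_corput(i, b) for b in primes)
            for i in range(1, n + 1)]

pts = halton(128)   # 128 low-discrepancy points in [0, 1)^2
```

With $d = 2$ and $p_d = 3$, the dispersion bound above gives $D(S_{128}) \le 3/\sqrt{128} \approx 0.27$.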
Janson et al. [12, Thm 2] bound the length of the shortest path computed over an $r$-disk roadmap constructed using a low-dispersion deterministic sequence. Specifically, given start and target vertices, consider all paths connecting them which have $\delta$-clearance for some $\delta > 0$, and set $\delta_{\max}$ to be the maximal clearance over all such paths. If $\delta_{\max} > 0$, then for every $\delta \le \delta_{\max}$ set $c_\delta$ to be the cost of the shortest path in $\mathcal{X}_{\text{free}}$ with $\delta$-clearance. Let $c_n$ be the length of the path returned by a shortest-path algorithm on $G_n(r)$, with $S_n$ having dispersion $D(S_n)$. For $D(S_n) \le \delta/2$ and a sufficiently large connection radius, Eq. (III-B) bounds $c_n$ by a multiplicative factor of $c_\delta$, and this factor tends to one as the dispersion decreases.
Notably, for random i.i.d. points, the lower bound on the dispersion is of order $\left(\log n / n\right)^{1/d}$, which is strictly larger than the $O(n^{-1/d})$ achievable with deterministic samples.
For domains other than the unit hypercube, the insights from the analysis will generally hold. However, the dispersion bounds may become far more complicated depending on the domain, and the distance metric would need to be scaled accordingly. This may result in the quantitative bounds being difficult to deduce analytically.
We now discuss our general approach of searching over the space of all ($r$-disk) subgraphs of $G$. We start by characterizing the boundaries and different regions of this space. Subsequently, we introduce two densification strategies, edge batching and vertex batching. As we will see, the two are complementary in nature, which motivates our third strategy, which we call hybrid batching.
IV-A The space of subgraphs
To perform an anytime search over $G$, we iteratively search a sequence of subgraphs of $G$. If no feasible path exists in the current subgraph, we move on to the next subgraph in the sequence, which is more likely to contain a feasible path.
We use an incremental path-planning algorithm that allows us to efficiently recompute shortest paths. Our problem setting of increasingly dense subgraphs is particularly amenable to such algorithms. However, any alternative shortest-path algorithm may be used. We emphasize again that we focus on the meta-algorithm of choosing which subgraphs to search. Further details on the implementation of these approaches are provided in Sec. VI.
Fig. 4 depicts the set of possible subgraphs for all choices of the number of vertices $n$ and the connection radius $r$. Specifically, it plots the number of edges as a function of $n$. We discuss Fig. 4 in detail to motivate our approach to the problem of anytime planning in large E-graphs and the specific sequence of subgraphs we use. First, consider the curves that bound the set of all possible graphs: the vertical line $n = N$ corresponds to subgraphs defined over the entire set of vertices, where batches of edges are added as $r$ increases. The parabolic arc, along which each graph has $\binom{n}{2}$ edges, corresponds to complete subgraphs defined over increasingly larger sets of vertices.
Recall that we wish to approximate the shortest path $\sigma^*$, which has some minimal clearance $\delta$. Given a specific subgraph, to ensure that a path approximating $\sigma^*$ is found, two conditions should be met: (i) the graph includes some minimal number of vertices $N_\delta$, whose exact value is a function of the dispersion of the sequence and of the clearance $\delta$; (ii) a minimal connection radius is used, ensuring that the graph is connected, whose value depends on the sequence (and not on $\delta$).
Requirement (i) induces a vertical line at $n = N_\delta$. Any point to the left of this line corresponds to a graph with too few vertices for any guarantee that a solution will be found. We call this the vertex-starvation region. Requirement (ii) induces a curve $r(n)$ such that any point below this curve corresponds to a graph which may be disconnected. We call this the edge-starvation region. The exact form of the curve depends on the sequence used. The specific value of $N_\delta$ and the form of $r(n)$ when Halton sequences are used are provided in Sec. V.
IV-B Edge and vertex batching
Our goal is to search increasingly dense subgraphs of $G$. This corresponds to a sequence of points in the space of subgraphs (Fig. 4) that ends at the upper-right corner of the space. Two natural strategies emerge. We defer the discussion of the parameters used for each strategy to Sec. VI.
IV-B1 Edge batching
All subgraphs include the complete set of $N$ vertices, and edges are added incrementally via an increasing connection radius. Specifically, the $i$-th subgraph uses radius $r_i = \eta^i r_0$, where $\eta > 1$ and $r_0$ is some small initial radius. Here we choose $r_0 = r(N)$, where $r(n)$ is the edge-starvation boundary curve defined previously. In Fig. 4, this induces a sequence of points along the vertical line $n = N$, starting at the edge-starvation boundary and ending at the complete graph $G$.
IV-B2 Vertex batching
In this variant, all subgraphs are complete graphs defined over increasing subsets of the complete set of $N$ vertices. Specifically, the $i$-th subgraph is defined over $n_i = \mu^i n_0$ vertices, where $\mu > 1$ and the base term $n_0$ is some small number of vertices. Because we have no priors on the obstacle density or distribution, $n_0$ is a constant and does not vary with $N$ or with the volume of $\mathcal{X}_{\text{free}}$. In Fig. 4, this induces a sequence of points along the parabolic arc, starting at $n_0$ vertices and ending at the complete graph $G$. The vertices are chosen in the order in which they are generated by $S$: the first subgraph uses the first $n_0$ samples of $S$, and so on.
Intuitively, the relative performance of these densification strategies depends on problem hardness. We use the clearance of the shortest path to represent the hardness of the problem. This, in turn, defines the threshold $N_\delta$ which bounds the vertex-starvation region. Specifically, we say that a problem is easy (resp. hard) when $N_\delta \ll N$ (resp. $N_\delta$ approaches $N$). For easy problems, with larger gaps between obstacles, vertex batching can find a solution quickly with few samples and long edges, thereby restricting the work done in future searches. In contrast, assuming $N \ge N_\delta$, edge batching will find a solution in the first iteration, but the time to do so may be far greater than for vertex batching because the number of samples is so large. For hard problems, vertex batching may require multiple iterations before the number of samples it uses is large enough to leave the vertex-starvation region. Each of these searches would exhaust its fully connected subgraph before terminating. This cumulative effort is expected to exceed that required by edge batching on the same problem, which is expected to find a feasible, albeit suboptimal, path in its first search. A visual depiction of this intuition is given in Fig. 5.
IV-C Hybrid batching
Vertex and edge batching exhibit complementary properties on problems of varying difficulty. Yet, when a query is given, the hardness of the problem is not known a priori. In this section we propose a hybrid approach that exhibits favourable properties regardless of the hardness of the problem.
This hybrid batching strategy commences by searching over a graph with $n_0$ vertices (as in vertex batching) and radius $r(n_0)$, i.e. on the edge-starvation boundary. As long as $n_i < N$, the next batch has $n_{i+1} = \mu n_i$ vertices and radius $r(n_{i+1})$. Once the full vertex set is reached ($n_i = N$), all subsequent batches are similar to edge batching, i.e. only the radius grows.
This can be visualized in the space of subgraphs as sampling along the curve $r(n)$ from $n_0$ until it intersects the line $n = N$, and then sampling along that vertical line. See Fig. 1 and Fig. 5 for a mental picture. As our experiments show, hybrid batching typically performs comparably (in terms of path quality) to vertex batching on easy problems and to edge batching on hard problems.
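The three schedules can be summarized as trajectories through the $(n, r)$ space. The sketch below is a simplified illustration: all parameter names are ours, and it assumes an edge-starvation boundary of the form $r(n) = r_0 (N/n)^{1/d}$, i.e. dispersion decaying as $n^{-1/d}$.

```python
import math

def batching_schedule(strategy, N, d, r0, eta, n0, mu):
    """Yield one (num_vertices, radius) pair per batch for the three
    densification strategies over N samples in [0, 1]^d.

    r0  -- initial radius (edge-starvation radius at n = N)
    eta -- radius growth factor for edge batching (eta > 1)
    n0  -- initial number of vertices; mu > 1 is its growth factor
    """
    r_max = math.sqrt(d)  # diameter of the unit hypercube: complete graph

    def r_conn(n):        # assumed edge-starvation boundary r(n)
        return r0 * (N / n) ** (1 / d)

    if strategy == "edge":          # fixed vertex set, growing radius
        r = r0
        while r < r_max:
            yield N, r
            r *= eta
        yield N, r_max
    elif strategy == "vertex":      # growing vertex set, complete graphs
        n = n0
        while n < N:
            yield n, r_max
            n *= mu
        yield N, r_max
    else:                           # hybrid: follow r(n), then edge batching
        n = n0
        while n < N:
            yield n, r_conn(n)
            n *= mu
        yield from batching_schedule("edge", N, d, r0, eta, n0, mu)
```

Every schedule terminates at the complete graph $(N, \sqrt{d})$, so all strategies converge to searching $G$ itself.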
V Analysis for Halton Sequences
In this section we consider the space of subgraphs and the densification strategies introduced in Sec. IV for the specific case where $S$ is a Halton sequence. We start by describing the boundaries of the starvation regions. We then simulate the bound on the quality of the solution obtained as a function of the work done by each of our strategies.
V-a Starvation-region bounds
To bound the vertex-starvation region we wish to find the minimal number of vertices $N_\delta$ after which a first solution with bounded suboptimality can be guaranteed. Recall that $\delta$ is the clearance of the shortest path in $\mathcal{X}_{\text{free}}$ connecting $x_s$ and $x_t$, that $p_d$ denotes the $d$-th prime, and that $D(S_n) \le p_d\, n^{-1/d}$ for Halton sequences. For Eq. (III-B) to hold we require that $D(S_n) \le \delta/2$. Thus,
$$N_\delta = \left\lceil \left(\frac{2\,p_d}{\delta}\right)^d \right\rceil.$$
Indeed, one can see that as the problem becomes harder (namely, as $\delta$ decreases), $N_\delta$ and hence the entire vertex-starvation region grow.
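Assuming the threshold form $N_\delta = \lceil (2 p_d / \delta)^d \rceil$ derived above, the growth of the vertex-starvation region with problem hardness can be checked numerically (an illustrative sketch; the function name and prime table are ours):

```python
import math

def vertex_starvation_threshold(delta, d, primes=(2, 3, 5, 7, 11, 13, 17)):
    """Smallest n for which the Halton dispersion bound p_d / n^(1/d)
    drops below delta / 2, i.e. n >= (2 * p_d / delta)^d.
    The prime table covers d <= 7, one base prime per dimension."""
    p_d = primes[d - 1]
    return math.ceil((2 * p_d / delta) ** d)

# Halving the clearance inflates the threshold by a factor of 2^d:
n_easy = vertex_starvation_threshold(0.25, d=2)    # (2*3/0.25)^2 = 576
n_hard = vertex_starvation_threshold(0.125, d=2)   # 4x larger: 2304
```

The exponential dependence on $d$ is why hard, low-clearance problems quickly push $N_\delta$ toward $N$.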
We now show that for Halton sequences, the edge-starvation region has a linear boundary, i.e. the number of edges along the boundary grows linearly with $n$. Using Eq. (III-B), the minimal radius required for a graph with $n$ vertices is
$$r(n) = 2\,D(S_n) \le \frac{2\,p_d}{n^{1/d}}.$$
For any $r$-disk graph over $n$ points in $[0,1]^d$, the number of edges is $M = O(n^2 r^d)$. In our case, substituting $r(n)$ yields
$$M = O\!\left(n^2 \cdot \frac{(2 p_d)^d}{n}\right) = O(n).$$
V-B Effort-to-quality ratio
We now compare our densification strategies in terms of their worst-case anytime performance. Specifically, we plot the cumulative amount of work as subgraphs are searched, measured by the maximum number of edges that may be evaluated, as a function of the bound on the quality of the solution obtained using Eq. (III-B). We fix a specific setting (namely, the dimension $d$ and the total number of samples $N$) and simulate the work done and the suboptimality bound using the formulae above. This is done for an easy and for a hard problem. See Fig. 6.
Indeed, this simulation coincides with our discussion on properties of both batching strategies with respect to the problem difficulty. Vertex batching outperforms edge batching on easy problems and vice versa. Hybrid batching lies somewhere in between the two approaches with the specifics depending on problem difficulty.
VI-A Search Parameters
We choose the parameters for each densification strategy such that the number of batches is $O(\log N)$.
VI-A1 Edge Batching
We set $r_{i+1} = 2^{1/d}\, r_i$. Recall that for $r$-disk graphs, the average vertex degree is proportional to $n r^d$; therefore this quantity (and hence the number of edges) doubles after each iteration. We set $r_0 = r(N)$.
VI-A2 Vertex Batching
We set the initial number of vertices $n_0$ to a small constant, irrespective of the roadmap size and problem setting, and set the radius to $\sqrt{d}$ (the complete graph). After each batch we double the number of vertices.
VI-A3 Hybrid Batching
The parameters are derived from those used for vertex and edge batching. We begin with $n_0$ vertices, and after each batch we double the number of vertices. For these searches, i.e. in the region where $n_i < N$, we use the boundary radius $r(n_i)$. This ensures the same radius at $n = N$ as for edge batching. Subsequently, we increase the radius as in edge batching, i.e. by a factor of $2^{1/d}$ per batch.
Our analysis and intuition are agnostic to any specific algorithms or implementations. However, for these densification strategies to be useful in practice, we employ certain optimizations.
VI-B1 Search Technique
Each subgraph is searched lazily, with incremental rewiring; specifically, we use the search algorithm employed for a single batch of BIT*. This lazy search has been shown to outperform other path-planning techniques on motion-planning search problems with expensive edge evaluations.
VI-B2 Caching Collision Checks
Each time the collision detector is called on an edge, we store the ID of the edge along with the result in a hash table. Subsequent calls for that specific edge become lookups in the hash table, which incur negligible running time. Thus, the collision detector is called at most once per edge.
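A minimal sketch of such a cache (the class and `raw_check` callback are hypothetical stand-ins; the real detector densely samples configurations along the edge):

```python
class CachedCollisionChecker:
    """Memoize expensive edge evaluations: the underlying collision
    detector is invoked at most once per undirected edge. `raw_check`
    stands in for the real detector, which densely samples
    configurations along the edge (the expensive step)."""

    def __init__(self, raw_check):
        self.raw_check = raw_check
        self.cache = {}   # canonical edge ID -> cached result
        self.calls = 0    # how many times the detector actually ran

    def edge_free(self, u, v):
        key = (min(u, v), max(u, v))   # undirected edge ID
        if key not in self.cache:
            self.calls += 1
            self.cache[key] = self.raw_check(u, v)
        return self.cache[key]

# Example: repeated queries on the same edge hit the cache.
probe = CachedCollisionChecker(lambda u, v: True)
probe.edge_free(3, 7)
probe.edge_free(7, 3)   # same undirected edge: served from the cache
```

Canonicalizing the vertex pair makes the cache robust to the direction in which an edge is queried across batches.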
VI-B3 Sample Pruning and Rejection
For anytime algorithms, once an initial solution is obtained, subsequent searches should be focused on the subset of states that could potentially improve the solution. When the space is Euclidean, this so-called "informed subset" can be described by a prolate hyperspheroid. For our densification strategies, we prune away existing vertices (for all batching strategies), and reject newly added vertices (for vertex and hybrid batching), that fall outside the informed subset.
Successive prunings due to intermediate solutions significantly reduce the average-case complexity of future searches, despite the extra time required to do so, which is accounted for in our benchmarking. Note that for vertex and hybrid batching, which begin with only a few samples, samples in successive batches that fall outside the current ellipsoid can simply be rejected. This is cheaper than pruning, which is required for edge batching. Across all test cases, we observed poorer performance when pruning was omitted.
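The membership test for the informed subset reduces to an ellipse check with the start and goal as foci. A sketch, assuming Euclidean path cost (the helper names are ours):

```python
import math

def in_informed_set(x, start, goal, c_best):
    """A sample x can lie on a path shorter than the incumbent cost c_best
    only if dist(start, x) + dist(x, goal) <= c_best; this set is a prolate
    hyperspheroid (an ellipse whose foci are the start and the goal)."""
    return math.dist(x, start) + math.dist(x, goal) <= c_best

def prune(samples, start, goal, c_best):
    """Discard samples that cannot improve on the incumbent solution."""
    return [x for x in samples if in_informed_set(x, start, goal, c_best)]

start, goal = (0.0, 0.0), (1.0, 0.0)
kept = prune([(0.5, 0.1), (0.5, 0.9)], start, goal, c_best=1.2)
```

The same predicate serves both roles: pruning existing vertices after a new incumbent, and rejecting newly generated samples before they are ever inserted.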
In the presence of obstacles, the extent to which pruning reduces complexity is difficult to characterize analytically. As shown in Theorem VI-B3, however, under the assumption of free space we can derive results for edge batching. This motivates using this heuristic.
Theorem VI-B3: Running edge batching in an obstacle-free $d$-dimensional Euclidean space, over a roadmap constructed using a deterministic low-dispersion sequence with the parameters above, while using sample pruning and rejection, bounds the worst-case complexity of the total search, measured in edge evaluations, to be asymptotically smaller than a direct search over all edges of $G$.
Let $c_i$ denote the cost of the solution obtained after $i$ iterations of our edge batching algorithm, and let $c^*$ denote the cost of the optimal solution. Using Eq. (III-B),
Using the parameters for edge batching,
Let $k$ be the maximum number of iterations, and recall that $k = O(\log N)$ (Sec. VI-A).
Note that the fact that vertices and edges are pruned away does not change the bound provided in Eq. (2). To compute the actual number of edges considered at the $i$-th iteration, we bound the volume of the prolate hyperspheroid in $\mathbb{R}^d$ by
where the leading coefficient is a constant. Using Eq. (3), we can bound the volume of the ellipsoid used at the $i$-th iteration:
Furthermore, we choose the parameters such that this bound holds. Now, the number of vertices remaining in the ellipsoid can be bounded by
Recall that we measure the amount of work done by the search at iteration $i$ by the number of edges considered. Thus,
Finally, the total work done by the search over all iterations is
Our implementations of the various strategies are based on the publicly available OMPL implementation of BIT*. Other than the specific parameters and optimizations mentioned earlier, we use the default parameters of BIT*. Notably, we use the Euclidean distance heuristic, an approximately sorted queue, and limit graph pruning to changes in path length greater than 1%.
VII-A Random scenarios
The different batching strategies are compared to each other on problems in $\mathbb{R}^d$ for two values of $d$. The domain is the unit hypercube, while the obstacles are randomly generated axis-aligned $d$-dimensional hyper-rectangles. All problems share a common start and goal configuration. We used the first points of the Halton sequence, with the number of points depending on the problem dimension.
Two parameters of the obstacles are varied to approximate the notion of problem hardness described earlier: the number of obstacles and the fraction of $\mathcal{X}$ that lies in $\mathcal{X}_{\text{obs}}$. In the lower-dimensional setting, easy problems have fewer obstacles and a smaller obstacle fraction, while hard problems have more obstacles and a larger obstacle fraction. In the higher-dimensional setting we maintain the same obstacle fractions but adjust the number of obstacles for the easy and hard problems, respectively. For each problem setting (dimension; easy/hard) we generate different random scenarios and evaluate each strategy with the same set of samples on each of them. Each random scenario has a different set of solutions, so we show a representative result for each problem setting in Fig. 7.
The results align well with our intuition about the relative performance of the densification strategies on easy and hard problems. Notice that the naive strategy of searching $G$ directly requires considerably more time to report the optimal solution than any other strategy. We report these numbers in the caption of Fig. 7 but avoid plotting them so as not to stretch the figures. Note the reasonable performance of hybrid batching across problems and difficulty levels.
VII-B Manipulation problems
We also run simulated experiments on HERB, a mobile manipulator designed and built by the Personal Robotics Lab at Carnegie Mellon University. The planning problems are for the 7-DOF right arm, on the problem scenarios shown in Fig. 8. We use a roadmap whose vertices are defined by a Halton sequence generated using the first 7 prime numbers. In addition to the batching strategies, we also evaluate the performance of BIT* using the same set of samples; BIT* has been shown to achieve anytime performance superior to contemporary anytime algorithms. The hardness of the problems in terms of clearance is difficult to visualize in the C-space of the arm, but the goal regions are considerably constrained. As our results show (Fig. 8), all densification strategies solve the difficult planning problems in reasonable time, and generally outperform BIT* on the same set of samples.
Viii Conclusion and Future Work
We present, analyze and implement several densification strategies for anytime planning on large E-graphs. We provide theoretical motivation for these densification techniques, and show that they significantly outperform the naive approach on difficult planning problems.
In this work we demonstrate our analysis for the case where the set of samples is generated from a low-dispersion deterministic sequence. A natural extension is to provide a similar analysis for a sequence of random i.i.d. samples. Here, the dispersion is $O\big((\log n / n)^{1/d}\big)$ instead of $O(n^{-1/d})$. Outside the starvation regions, we would like to bound the quality obtained, similar to the bounds provided by Eq. (III-B). A starting point would be to leverage recent results by Dobson et al. for random geometric graphs under expectation, albeit for a specific radius.
Another question we wish to pursue is alternative ways to traverse the subgraph space of $G$. As depicted in Fig. 1, our densification strategies are essentially ways to traverse this space. We discuss three techniques that traverse relevant boundaries of the space, but there are innumerable trajectories a strategy can follow to reach the optimum. It would be interesting to compare our current batching methods, both theoretically and practically, to strategies that pass through the interior of the space.
-  R. Bohlin and L. E. Kavraki. Path planning using lazy PRM. In IEEE International Conference on Robotics and Automation, pages 521–528, 2000.
-  M. S. Branicky, S. M. LaValle, K. Olson, and L. Yang. Quasi-randomized path planning. In IEEE International Conference on Robotics and Automation, pages 1481–1487, 2001.
-  S. Choudhury, J. D. Gammell, T. D. Barfoot, S. S. Srinivasa, and S. Scherer. Regionally accelerated batch informed trees (RABIT*): A framework to integrate local information into optimal path planning. In IEEE International Conference on Robotics and Automation, pages 4207–4214, 2016.
-  B. J. Cohen, M. Phillips, and M. Likhachev. Planning single-arm manipulations with n-arm robots. In RSS, 2014.
-  I. A. Şucan, M. Moll, and L. E. Kavraki. The Open Motion Planning Library. IEEE Robotics & Automation Magazine, 2012.
-  C. M. Dellin and S. S. Srinivasa. A unifying formalism for shortest path problems with expensive edge evaluations via lazy best-first search over paths with edge selectors. In International Conference on Automated Planning and Scheduling, pages 459–467, 2016.
-  A. Dobson, G. V. Moustakides, and K. E. Bekris. Geometric probability results for bounding path quality in sampling-based roadmaps after finite computation. In IEEE International Conference on Robotics and Automation, 2015.
-  J. D. Gammell, S. S. Srinivasa, and T. D. Barfoot. Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2997–3004, 2014.
-  J. D. Gammell, S. S. Srinivasa, and T. D. Barfoot. Batch informed trees (BIT*): Sampling-based optimal planning via the heuristically guided search of implicit random geometric graphs. In IEEE International Conference on Robotics and Automation, pages 3067–3074, 2015.
-  J. H. Halton. On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals. Numer. Math., 2(1):84–90, 1960.
-  P. E. Hart, N. J. Nilsson, and B. Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems, Science, and Cybernetics, SSC-4(2):100–107, 1968.
-  L. Janson, B. Ichter, and M. Pavone. Deterministic sampling-based motion planning: Optimality, complexity, and performance. CoRR, abs/1505.00023, 2015.
-  L. Janson, E. Schmerling, A. Clark, and M. Pavone. Fast marching tree: A fast marching sampling-based method for optimal motion planning in many dimensions. I. J. Robotics Res., pages 883–921, 2015.
-  S. Karaman and E. Frazzoli. Sampling-based algorithms for optimal motion planning. I. J. Robotics Res., 30(7):846–894, 2011.
-  L. E. Kavraki, M. N. Kolountzakis, and J. Latombe. Analysis of probabilistic roadmaps for path planning. IEEE Trans. Robotics and Automation, 14(1):166–171, 1998.
-  L. E. Kavraki, P. Svestka, J.-C. Latombe, and M. H. Overmars. Probabilistic roadmaps for path planning in high dimensional configuration spaces. IEEE Trans. Robotics, 12(4):566–580, 1996.
-  S. Koenig, M. Likhachev, and D. Furcy. Lifelong planning A*. Artificial Intelligence, 155(1):93–146, 2004.
-  R. E. Korf. Iterative-deepening-A*: An optimal admissible tree search. In Joint Conference on Artificial Intelligence, pages 1034–1036, 1985.
-  S. M. LaValle and J. J. Kuffner. Randomized kinodynamic planning. In IEEE International Conference on Robotics and Automation, pages 473–479, 1999.
-  M. Likhachev, G. J. Gordon, and S. Thrun. ARA*: Anytime A* with provable bounds on sub-optimality. In Advances in Neural Information Processing Systems, pages 767–774, 2003.
-  H. Niederreiter. Random Number Generation and quasi-Monte Carlo Methods. Society for Industrial and Applied Mathematics, 1992.
-  O. Salzman and D. Halperin. Asymptotically-optimal motion planning using lower bounds on cost. In Robotics and Automation (ICRA), 2015 IEEE International Conference on, pages 4167–4172. IEEE, 2015.
-  O. Salzman and D. Halperin. Asymptotically near-optimal RRT for fast, high-quality motion planning. IEEE Trans. Robotics, 32(3):473–483, 2016.
-  K. Solovey, O. Salzman, and D. Halperin. Finding a needle in an exponential haystack: Discrete RRT for exploration of implicit roadmaps in multi-robot motion planning. I. J. Robotics Res., 35(5):501–513, 2016.
-  K. Solovey, O. Salzman, and D. Halperin. New perspective on sampling-based motion planning via random geometric graphs. In RSS, 2016.
-  S. S. Srinivasa, D. Ferguson, C. J. Helfrich, D. Berenson, A. Collet, R. Diankov, G. Gallagher, G. Hollinger, J. J. Kuffner, and M. V. Weghe. HERB: a home exploring robotic butler. Autonomous Robots, 28(1):5–20, 2010.
-  J. van den Berg, R. Shah, A. Huang, and K. Y. Goldberg. Anytime nonparametric A*. In Association for the Advancement of Artificial Intelligence, pages 105–111, 2011.
-  C. M. Wilt and W. Ruml. When does weighted A* fail? In Symposium on Combinatorial Search, pages 137–144, 2012.