Bayesian Active Edge Evaluation on Expensive Graphs


Sanjiban Choudhury, Siddhartha Srinivasa and Sebastian Scherer

S. Choudhury and S. Scherer are with The Robotics Institute, Carnegie Mellon University, USA. {sanjiban, basti}@cmu.edu
S. Srinivasa is with the School of Computer Science and Engineering, University of Washington, USA. siddh@cs.uw.edu
Abstract

Robots operate in environments with varying implicit structure. For instance, a helicopter flying over terrain encounters a very different arrangement of obstacles than a robotic arm manipulating objects on a cluttered table top. State-of-the-art motion planning systems do not exploit this structure, thereby expending valuable planning effort searching for implausible solutions. We are interested in planning algorithms that actively infer the underlying structure of the valid configuration space during planning in order to find solutions with minimal effort.
Consider the problem of evaluating edges on a graph to quickly discover collision-free paths. Evaluating edges is expensive, both for robots with complex geometries like robot arms, and for robots with limited onboard computation like UAVs. Until now, this challenge has been addressed via laziness, i.e. deferring edge evaluation until absolutely necessary, in the hope that edges turn out to be valid. However, not all edges are alike in value - some have many potentially good paths flowing through them, while others encode the likelihood of neighbouring edges being valid. This leads to our key insight - instead of passive laziness, we can actively choose edges that reduce the uncertainty about the validity of paths. We show that this is equivalent to the Bayesian active learning paradigm of decision region determination (DRD). However, the DRD problem is not only combinatorially hard, but also requires explicit enumeration of all possible worlds. We propose a novel framework that combines two DRD algorithms, DiRECt and BiSECt, to overcome both issues. We show that our approach outperforms several state-of-the-art algorithms on a spectrum of planning problems for mobile robots, manipulators and autonomous helicopters.


1 Introduction

A widely-used approach for solving robot motion-planning problems is the construction of graphs, where vertices represent robot configurations and edges represent potentially valid movements of the robot between these configurations. The main computational bottleneck is collision checking which is manifested as expensive edge evaluations. For example, in robot arm planning [13] (Fig. 1(a)), evaluation requires expensive geometric intersection computations. In autonomous helicopter planning [5] (Fig. 1(b)), evaluation requires expensive reachability volume verification of the closed loop system. State-of-the-art planning algorithms [12] deal with expensive evaluation by resorting to laziness - they first compute a set of unevaluated paths quickly, and then evaluate them sequentially until a valid path is found.

Figure 1: Real world planning problems where edges are correlated. In such cases, our approach can infer the structure of the world from outcomes of edge evaluations. (a) The presence of a table in robotic arm planning correlates neighbouring edges (courtesy Dellin [12]). (b) The presence of wires and guide-towers in helicopter planning correlates corresponding edges. (c) A typical helicopter planning problem with wires, terrain and no-fly zones. The state-of-the-art planner, LazySP [12], passively defers edge evaluation thus requiring 590 checks. It is unable to leverage priors on the world. (d) Our approach uses DiRECt to actively infer the presence of the wire, hills and NFZ and BiSECt to focus the search and find a path in 20 checks.
Figure 2: Equivalence between the feasible path identification problem and the decision region determination problem. A plausible world is equivalent to a hypothesis (shown as blue dots in the lower row). A path is equivalent to a region - the set of hypotheses on which the path is feasible. A collision check is equivalent to a test whose outcome is valid (green) or invalid (red). Tests eliminate hypotheses, and the algorithm terminates when all uncertainty is pushed into a single region and the corresponding path is determined to be valid.

However, such lazy policies overlook a fundamental characteristic of the planning problem - edges in a graph are implicitly correlated. In Fig. 1(a), the presence of a table in the robot arm workspace implicitly correlates edges in front of the robot. Similarly in Fig. 1(b), the presence of power-lines during a UAV overflight implicitly correlates a horizontal strip of edges near the ground. Evaluating such edges provides valuable information about the feasibility likelihood of other edges which in turn can be used to infer the feasibility likelihood of a path. We wish to compute such a policy that judiciously chooses edges to evaluate by reasoning about likely worlds in which the robot operates.

This problem is equivalent to the Bayesian active learning problem of decision region determination (DRD) [21, 4] - given a set of tests (edges), hypotheses (worlds), and regions (potential paths), the objective is to select a sequence of tests that drive uncertainty into a single decision region. The DRD problem has one key distinction from the general active learning problem [11] - we only need to know enough about the world to ascertain if a path is feasible.

To solve the DRD problem in our context, we need to address two issues:

  1. Enumeration of all possible worlds.

  2. Solving the DRD problem in general is NP-hard [21].

Fortunately, Chen et al. [4] provide an algorithm, DiRECt, to address issue 2 by maximizing an objective function that satisfies adaptive submodularity [15] - a natural diminishing returns property that endows greedy policies with near-optimality guarantees.

However, DiRECt requires issue 1 to be solved, i.e. it requires an exhaustive training database of worlds. Since DiRECt operates under a realizability assumption, it can easily terminate without finding a solution when the test world is not in its training database. Explicitly enumerating all possible worlds is impractical even as an offline operation - a graph with $|\mathcal{E}|$ edges can induce up to $2^{|\mathcal{E}|}$ possible worlds.111Even a modestly sized graph induces far more worlds than could ever be stored!

In previous work [7], we addressed issue 1 by examining the DRD problem when edges are independent. We proposed an efficient near-optimal algorithm, BiSECt, which sidesteps the exponential cost of explicit enumeration: it reasons about the exhaustive set of worlds without ever enumerating them by leveraging the independence assumption. However, this assumption is too strong for certain environments (such as those in Fig. 1), leading to excessive edge evaluations.

Our key idea is to combine the two approaches. We sample a finite database of worlds and apply DiRECt offline on this database to compute a decision tree of edges to evaluate. At test time we execute the tree. When we reach a leaf node, we have either solved the problem or we have narrowed the problem down to a set of ‘tail worlds’ outside of DiRECt’s domain, i.e. low probability worlds that do not appear in the sampled database. We then run BiSECt, which implicitly reasons about this set of ‘tail worlds’, and accept the performance loss due to the independence assumption. We make the following contributions:

  1. We show an equivalence between the optimal edge evaluation problem and the decision region determination problem.

  2. We propose a framework to combine two DRD algorithms, DiRECt and BiSECt, that near-optimally solves the decision region problem, overcomes issues pertaining to finite databases and can be executed efficiently online.

  3. We demonstrate the efficacy of our approach on a spectrum of planning problems for mobile robots, manipulators, and autonomous full-scale helicopters.

We note that a limitation of this approach is that it ignores solution quality and requires an explicit library of paths. We discuss ways to alleviate this in Section 6.

2 Problem Formulation

We now describe the edge evaluation problem, showing the equivalence to the DRD problem along the way. Let $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ be an explicit graph that consists of a set of vertices $\mathcal{V}$ and edges $\mathcal{E}$. Given a pair of start and goal vertices $(v_s, v_g) \in \mathcal{V} \times \mathcal{V}$, a search algorithm computes a path $\xi$ - a connected sequence of valid edges. The search is performed on an underlying world $\phi$ which corresponds to a specific setting of obstacles. To ascertain the validity of an edge $e \in \mathcal{E}$, the algorithm queries the underlying world which returns a binary status $\phi(e) \in \{0, 1\}$. We address applications where edge evaluation is expensive, i.e., the computational cost of computing $\phi(e)$ is significantly higher than regular search operations. We make a simplification to the problem - from that of search to that of identification. Instead of searching online for a path, we frame the problem as identifying a valid path from a library of ‘good’ candidate paths $\Xi = \{\xi_1, \dots, \xi_m\}$.

Let $\mathcal{H} = \{h_1, \dots, h_n\}$ be a set of “hypotheses”, each of which is analogous to a world. We have a prior distribution $P(h)$ on this set. A “test” $t \in \mathcal{T}$ is performed by querying a corresponding edge for evaluation, which returns a binary outcome denoting if the edge is valid or not. Thus each hypothesis can be considered a function, $h : \mathcal{T} \to \{0, 1\}$, mapping tests to corresponding outcomes. The cost of performing a test is $c(t)$. A path corresponds to a set of worlds on which that path is valid. Hence each path $\xi_d$ corresponds to a “decision region” $\mathcal{R}_d \subseteq \mathcal{H}$ over the space of hypotheses. Let $\mathcal{R} = \{\mathcal{R}_1, \dots, \mathcal{R}_m\}$ be the set of “decision regions” corresponding to $\Xi$.

For a set of tests $\mathcal{A} \subseteq \mathcal{T}$ that are performed, let the observed outcome vector be denoted by $\mathbf{x}_{\mathcal{A}}$. Let the version space $\mathcal{H}(\mathbf{x}_{\mathcal{A}})$ be the set of hypotheses consistent with outcome vector $\mathbf{x}_{\mathcal{A}}$, i.e. $\mathcal{H}(\mathbf{x}_{\mathcal{A}}) = \{h \in \mathcal{H} \mid \forall t \in \mathcal{A} : h(t) = x_t\}$.

We define a policy $\pi$ as a mapping from the current outcome vector to the next test to select. A policy terminates when at least one region is valid, or all regions are invalid. Let $h \in \mathcal{H}$ be the underlying world on which it is evaluated. Denote the outcome vector of a policy as $\mathbf{x}_{\pi, h}$. The expected cost of a policy is $c(\pi) = \mathbb{E}_h\left[c(\mathbf{x}_{\pi, h})\right]$ where $c(\mathbf{x}_{\mathcal{A}})$ is the cost of all tests $t \in \mathcal{A}$. The objective is to compute a policy $\pi^*$ with minimum cost that ensures at least one region is valid, i.e.

$\pi^* \in \operatorname*{arg\,min}_{\pi} \; c(\pi) \quad \text{s.t.} \quad \forall h : \exists \mathcal{R}_d \in \mathcal{R}, \; \mathcal{H}(\mathbf{x}_{\pi, h}) \subseteq \mathcal{R}_d \qquad (1)$

An illustration of this equivalence is shown in Fig. 2.
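To make this formulation concrete, here is a minimal Python sketch of the identification problem on a toy instance (the worlds, edges and paths below are hypothetical, not from any dataset in the paper): regions are the sets of worlds on which a path is feasible, and the process terminates once the version space falls inside one region.

```python
# Toy instance: 4 plausible worlds over 3 edges, and 2 candidate paths.
# A world maps each edge to 1 (free) or 0 (blocked).
worlds = {
    "h1": {"e1": 1, "e2": 1, "e3": 0},
    "h2": {"e1": 1, "e2": 0, "e3": 1},
    "h3": {"e1": 0, "e2": 1, "e3": 1},
    "h4": {"e1": 1, "e2": 1, "e3": 1},
}
paths = {"p1": ["e1", "e2"], "p2": ["e2", "e3"]}

def region(p):
    """Decision region: worlds on which every edge of path p is free."""
    return {h for h, w in worlds.items() if all(w[e] == 1 for e in paths[p])}

def version_space(outcomes):
    """Worlds consistent with the edge outcomes observed so far."""
    return {h for h, w in worlds.items()
            if all(w[e] == x for e, x in outcomes.items())}

def solved(outcomes):
    """Done once the version space sits entirely inside one region."""
    vs = version_space(outcomes)
    return any(vs and vs <= region(p) for p in paths)

print(solved({"e1": 1}))            # e1 alone does not determine a path
print(solved({"e1": 1, "e2": 1}))   # every consistent world now validates p1
```

Note that after observing e1 = 1 and e2 = 1, the version space {h1, h4} lies inside region(p1), so path p1 is known to be feasible without ever checking e3.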

3 Related Work

The computational bottleneck in motion planning varies with problem domain and that has led to a plethora of planning techniques ([24]). When vertex expansions are a bottleneck, A* [17] is optimally efficient while techniques such as partial expansions [29] address graph searches with large branching factors. However, we examine the problem class that is of particular importance in robotics - expensive edge evaluation. This is primarily because evaluation is performed by querying an underlying representation of the world that is built online and requires expensive geometric intersection computation.

The problem class we examine, that of expensive edge evaluation, has inspired a variety of ‘lazy’ approaches. The LazyPRM algorithm [1] only evaluates edges on the shortest path while FuzzyPRM [26] evaluates paths that minimize probability of collision. The Lazy Weighted A* (LWA*) algorithm [10] delays edge evaluation in A* search and is reflected in similar techniques for randomized search [14, 6, 18]. An approach most similar in style to ours is the LazyShortestPath (LazySP) framework [12] which examines the problem of which edges to evaluate on the shortest path. Instead of finding the shortest path, our framework aims to efficiently identify a feasible path in a library of ‘good’ paths. The Anytime Edge Evaluation (AEE*) framework [25] also deals with a similar problem; however, it makes an independent edge assumption. Finally, there is a lot of work on modelling belief over configuration spaces [8, 27, 20, 2]. Using such models in DRD would be interesting future work.

We draw a novel connection between motion planning and optimal test selection, which has widespread applications in medical diagnosis [22] and experiment design [3]. Optimizing the ideal metric, the decision theoretic value of information [19], is known to be NP^PP-complete [23]. For hypothesis identification (known as the Optimal Decision Tree (ODT) problem), Generalized Binary Search (GBS) [11] provides a near-optimal policy. For disjoint region identification (known as the Equivalence Class Determination (ECD) problem), EC2 [16] provides a near-optimal policy. When regions overlap (known as the Decision Region Determination (DRD) problem), HEC [21] provides a near-optimal policy. The DiRECt algorithm [4], a computationally more efficient alternative to HEC, forms the basis of our approach. We also employ the BiSECt algorithm [7], which solves the DRD problem under edge independence assumptions.

4 Approach

4.1 Overview

Fig. 3 shows an overview of our approach. We sample a finite database of worlds to create a training dataset. We employ a greedy yet near-optimal algorithm, DiRECt [4], to solve the DRD problem. DiRECt chooses tests to prune inconsistent worlds from the database until it can ascertain whether a path is valid. The decisions of DiRECt can be compactly stored in the form of a decision tree which is computed offline. At test time, the tree is executed till a leaf node is reached. At this point, either the problem is solved or the fraction of consistent worlds drops below a threshold $\eta$, i.e. it is likely that the test world is not in the database. In the latter case, we invoke another DRD algorithm, BiSECt. BiSECt implicitly reasons about the exhaustive set of worlds and does this efficiently by assuming edges are independent. BiSECt is invoked with a bias vector $b$ of edge likelihoods computed from the remaining consistent worlds in DiRECt. The combined behaviour of the framework is as follows - the tree makes a set of evaluations to quickly collapse the posterior onto a set of candidate paths, while BiSECt completes the episode guided by the obtained posterior. We describe each component of the framework in the remaining subsections.

Figure 3: The overall approach framework. A training database is created by randomly sampling worlds from a generative model, collision checking the edges of the graph on each such world and creating a library of paths. The algorithm DiRECt is invoked to compute a decision tree offline. Each node of the tree contains the index of the edge to evaluate and branches on the outcome. The leaf nodes of the tree correspond either to a feasible path existing or to the number of consistent worlds dropping below a threshold fraction $\eta$. In the latter case, the bias vector $b$ is stored. At test time, the tree is executed till a leaf node is reached. If the problem is unsolved at that point, the BiSECt algorithm is invoked with $b$ as the bias term.

4.2 The Decision Region Edge Cutting algorithm (DiRECt)

In order to solve the DRD problem in (1), we adopt the framework of Decision Region Edge Cutting (DiRECt) [4]. The intuition behind the method is as follows - as tests are performed, hypotheses inconsistent with test outcomes are pruned away. Hence, tests should be incentivized to push the probability mass over hypotheses into a region as fast as possible. Chen et al. [4] derive a surrogate objective function that not only provides such an incentive, but also exhibits the property of adaptive submodularity [15] - greedily maximizing such an objective results in a near-optimal policy.

DiRECt uses a key result from Golovin et al. [16] who address the Equivalence Class Determination (ECD) problem - a special case of the DRD problem (1) when regions are disjoint. Let $\{\mathcal{R}_1, \dots, \mathcal{R}_m\}$ be a set of disjoint regions, i.e., $\mathcal{R}_i \cap \mathcal{R}_j = \emptyset$ for $i \neq j$. Golovin et al. [16] provide an efficient yet near-optimal approach for solving ECD in their EC2 algorithm. The EC2 algorithm defines a graph where the nodes are hypotheses and edges are between hypotheses in different decision regions. The weight of an edge is defined as $w(h, h') = P(h) P(h')$. An edge is said to be ‘cut’ by an observation if either hypothesis is inconsistent with the observation. Hence a test $t$ with outcome $x_t$ is said to cut the set of edges $\{(h, h') \mid h(t) \neq x_t \text{ or } h'(t) \neq x_t\}$. The aim is to cut all edges by performing tests while minimizing cost.

EC2 employs a weight function over regions, $w(\mathcal{R}_i) = \sum_{h \in \mathcal{R}_i} P(h)$. Naively, computing the total edge weight requires enumerating all pairs of regions. However, we can compute this efficiently in linear complexity as $w_{total} = \frac{1}{2}\left[\left(\sum_i w(\mathcal{R}_i)\right)^2 - \sum_i w(\mathcal{R}_i)^2\right]$. EC2 defines an objective function $f_{EC}$ that measures the fraction of the original edge weight that has been cut, i.e.

$f_{EC}(\mathbf{x}_{\mathcal{A}}) = 1 - \frac{w_{total}(\mathcal{H}(\mathbf{x}_{\mathcal{A}}))}{w_{total}(\mathcal{H})} \qquad (2)$
1 for each unevaluated test t do
2        Δ(t) ← 0 ;
3        for each outcome x ∈ {0, 1} do
4               H_(t,x) ← {h ∈ H_a : X_th = x} ;
                 Prune inconsistent hyp
5               p_x ← Σ_{h ∈ H_(t,x)} P(h) ;
                 Probability of outcome
6               Δ(t) ← Δ(t) + p_x · DRDGain(H_(t,x), H_a) ;
7        Δ(t) ← Δ(t) / c(t) ;
8 return argmax_t Δ(t) ;
Algorithm 1 DiRECt

EC2 uses the fact that $f_{EC}$ is adaptive submodular ([15]) to define a greedy algorithm. Let the expected marginal gain of a test be $\Delta_{EC}(t \mid \mathbf{x}_{\mathcal{A}}) = \mathbb{E}_{x_t}\left[f_{EC}(\mathbf{x}_{\mathcal{A} \cup \{t\}}) - f_{EC}(\mathbf{x}_{\mathcal{A}})\right]$. EC2 greedily selects a test $t^* \in \arg\max_t \Delta_{EC}(t \mid \mathbf{x}_{\mathcal{A}}) / c(t)$.
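The linear-time computation of the total cross-region edge weight can be sanity-checked in a few lines of Python (the priors and region assignment below are hypothetical toy values):

```python
# EC2 bookkeeping for disjoint regions. prior[h] is P(h); assignment[h]
# is the index of the region containing h. Edges connect hypotheses in
# different regions with weight P(h)P(h'); the identity below sums region
# masses once instead of enumerating all hypothesis pairs.

def total_edge_weight(prior, assignment):
    region_mass = {}
    for h, p in prior.items():
        region_mass[assignment[h]] = region_mass.get(assignment[h], 0.0) + p
    s = sum(region_mass.values())
    return 0.5 * (s * s - sum(m * m for m in region_mass.values()))

prior = {"h1": 0.4, "h2": 0.3, "h3": 0.3}
assignment = {"h1": 0, "h2": 0, "h3": 1}

# Brute force over pairs agrees with the linear-time expression:
brute = sum(prior[a] * prior[b]
            for a in prior for b in prior
            if a < b and assignment[a] != assignment[b])
fast = total_edge_weight(prior, assignment)
print(abs(brute - fast) < 1e-12)
```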

We now return to the general DRD problem where regions are not disjoint. DiRECt reduces the DRD problem with $m$ regions to $m$ instances of the ECD problem. Each ECD problem is a ‘one region versus all’. The $d$-th ECD problem is defined over the following disjoint regions: the first region is $\mathcal{R}_d$ and the remaining regions are singletons $\{h\}$, one for each hypothesis $h \notin \mathcal{R}_d$. The EC2 objective corresponding to this problem is $f_d(\mathbf{x}_{\mathcal{A}})$. The key idea is that solving any one ECD problem solves the DRD problem. The DiRECt algorithm then combines them in a Noisy-OR formulation by defining the following combined objective

$f_{DRD}(\mathbf{x}_{\mathcal{A}}) = 1 - \prod_{d=1}^{m}\left(1 - f_d(\mathbf{x}_{\mathcal{A}})\right) \qquad (3)$

DiRECt uses the fact that $f_{DRD}$ is also adaptive submodular to greedily select a test $t^* \in \arg\max_t \Delta_{DRD}(t \mid \mathbf{x}_{\mathcal{A}}) / c(t)$. For details on the theoretical guarantees and proofs, we refer the reader to [4].
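The Noisy-OR combination can be sketched directly (the subproblem values below are hypothetical): each $f_d$ is the objective of the $d$-th ‘one region versus all’ ECD subproblem, and driving any single $f_d$ to 1 drives the combined objective to 1.

```python
# Noisy-OR combination of per-region ECD objectives, as in (3).
def f_drd(f_subproblems):
    prod = 1.0
    for f_d in f_subproblems:
        prod *= (1.0 - f_d)
    return 1.0 - prod

print(f_drd([0.5, 0.5]))   # partial progress on both subproblems
print(f_drd([0.2, 1.0]))   # one subproblem solved => DRD solved
```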

1 g ← 1 ;
2 for each region d do
3        g_d ← 1 − ECDWeight(H_(t,x), R_d) / ECDWeight(H_a, R_d) ;
          Gain from each ECD
4        g ← g · (1 − g_d) ;
5 return 1 − g ;
Algorithm 2 DRDGain(H_(t,x), H_a)
1 w_in ← Σ_{h ∈ H ∩ R_d} P(h) ;
   Mass of hyp in region
2 w_out ← Σ_{h ∈ H \ R_d} P(h) ;
   Remaining hyp
3 return w_in · w_out + (w_out² − Σ_{h ∈ H \ R_d} P(h)²) / 2 ;
Algorithm 3 ECDWeight(H, R_d)

To aid in implementation, we provide pseudo-code for DiRECt. The pseudo-code is derived by expanding and simplifying (3), which we omit for brevity. Alg. 1 describes the DiRECt policy. $\mathcal{H}_a$ is the set of active hypotheses which have remained consistent so far with test outcomes. $M$ is a binary membership matrix where $M_{dh} = 1$ if $h \in \mathcal{R}_d$. $X$ is the test outcome matrix where $X_{th} = h(t)$. $c$ is a vector of test costs. Alg. 1 computes the expected gain for each test by computing $\mathcal{H}_{(t,x)}$, the set of hypotheses conditioned on test outcomes, and picks the best test. Alg. 2 computes the DRD gain by taking a product of individual ECD gains. Alg. 3 calculates the weight of the ECD problem. The computational complexity of Alg. 1 is linear in the number of tests, hypotheses and regions. Speedups can be obtained by lazily evaluating gains and using graph coloring to reduce the number of ECD problems [4].
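As an illustration (not the paper's implementation), the greedy selection loop of Alg. 1 can be sketched as follows. All names are hypothetical, and the `gain` callback here is a simple version-space-mass surrogate standing in for the DRD gain of Alg. 2, purely to keep the sketch self-contained:

```python
def select_test(active, tests, X, prior, cost, gain):
    """One greedy step: pick the test with highest expected gain per cost."""
    best, best_score = None, float("-inf")
    for t in tests:
        expected = 0.0
        for outcome in (0, 1):
            # Hypotheses that would remain consistent with this outcome.
            consistent = [h for h in active if X[t][h] == outcome]
            if consistent:
                p = sum(prior[h] for h in consistent)  # probability of outcome
                expected += p * gain(consistent)
        score = expected / cost[t]
        if score > best_score:
            best, best_score = t, score
    return best

prior = {"h1": 0.5, "h2": 0.3, "h3": 0.2}
X = {"t1": {"h1": 1, "h2": 1, "h3": 0},
     "t2": {"h1": 1, "h2": 0, "h3": 0}}
cost = {"t1": 1.0, "t2": 1.0}
# Surrogate gain: probability mass pruned away by the observation.
gain = lambda hs: 1.0 - sum(prior[h] for h in hs)

print(select_test(["h1", "h2", "h3"], ["t1", "t2"], X, prior, cost, gain))
```

On this toy instance the balanced split t2 is preferred over t1, mirroring the binary-search-like behaviour of mass-pruning objectives.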

(Table 1 structure: columns are LazySP, LazySPSet, MaxTally, SetCover, MVoI, BiSECt and DiRECt +BiSECt; row groups are 2D geometric planning across environments (Forest, OneWall, TwoWall, MovingWall, Baffle, Maze, Bugtrap), 2D geometric planning (Baffle) across path library sizes, SE(2) nonholonomic path planning across environments (OneWall, MovingWall, Baffle, Bugtrap), autonomous helicopter path planning across environments (Wires, Canyon), and 7D arm planning across environments (Clutter, Table+Clutter).)
Table 1: Normalized cost (with respect to our approach) of different algorithms on different datasets (lower and upper bounds of C.I.)

4.3 Creating an offline decision tree using DiRECt

DiRECt needs access to the entire training database, which can be prohibitive at runtime for storage and computational reasons. We circumvent this problem by computing a decision tree offline using DiRECt and storing it. The nodes of the tree encode which edge to evaluate. The tree branches on the outcome of the evaluation. Note that the depth of the tree is bounded by the size of the training database, as every evaluation must prune at least one world and all leaf nodes must be consistent with the training database. The size is further bounded by the fact that the tree terminates on a leaf node when the uncertainty has been pushed onto one region.
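A minimal sketch of baking such a tree offline and walking it at test time might look as follows (all names and the trivial test selector are hypothetical; a real implementation would plug in the DiRECt greedy rule):

```python
def build_tree(active, tests, X, regions, select):
    """Recursively store the policy: internal nodes test an edge, leaves decide."""
    for name, reg in regions.items():
        if active and set(active) <= reg:
            return {"leaf": name}      # some path is known feasible here
    if not tests or not active:
        return {"leaf": None}          # hand off to the online solver
    t = select(active, tests, X)
    node = {"test": t}
    for outcome in (0, 1):
        sub_active = [h for h in active if X[t][h] == outcome]
        sub_tests = [u for u in tests if u != t]
        node[outcome] = build_tree(sub_active, sub_tests, X, regions, select)
    return node

def execute(tree, world):
    """At test time, query the real world and follow the branches."""
    while "leaf" not in tree:
        tree = tree[world[tree["test"]]]
    return tree["leaf"]

X = {"e1": {"h1": 1, "h2": 0, "h3": 1},
     "e2": {"h1": 1, "h2": 1, "h3": 0}}
regions = {"p1": {"h1"}, "p2": {"h2"}}
select = lambda active, tests, X: tests[0]   # placeholder selection rule
tree = build_tree(["h1", "h2", "h3"], ["e1", "e2"], X, regions, select)

print(execute(tree, {"e1": 1, "e2": 1}))   # world consistent with h1
print(execute(tree, {"e1": 0, "e2": 1}))   # world consistent with h2
```

A `None` leaf corresponds to the case discussed next: the tree alone cannot finish the episode and an online algorithm must take over.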

4.4 Executing BiSECt from the leaf node

As discussed in Section 1, it is impractical to have a database large enough to encompass all possible worlds that can arise at test time. Hence, if we reach a leaf node of the tree and the problem is still unsolved, we need to execute an online algorithm that can run to completion by reasoning over the exhaustive set of worlds. We use the Bernoulli Subregion Edge Cutting (BiSECt) algorithm [7] as our online algorithm. BiSECt addresses the DRD problem under the assumption that test outcomes are independent Bernoulli random variables. It leverages this assumption to avoid the exponential cost of explicitly enumerating worlds, and hence can be easily executed online.

BiSECt needs as input a bias vector $b$ which corresponds to the independent likelihood of each edge being free. Since DiRECt has made a set of decisions to collapse the posterior, albeit on a finite database, we wish to use this to inform BiSECt. We do this by growing the DiRECt decision tree only till the fraction of consistent worlds drops below a threshold, i.e. $|\mathcal{H}(\mathbf{x}_{\mathcal{A}})| < \eta |\mathcal{H}|$. The remaining consistent worlds are then used to create a bias vector with a mixture term $\gamma$ to ensure non-zero support for all plausible worlds. The bias term for a test $t$ is

$b_t = (1 - \gamma) \, \frac{\sum_{h \in \mathcal{H}(\mathbf{x}_{\mathcal{A}})} P(h) \, h(t)}{\sum_{h \in \mathcal{H}(\mathbf{x}_{\mathcal{A}})} P(h)} + \frac{\gamma}{2} \qquad (4)$

Using $b$ leads to a more informed BiSECt as compared to directly invoking BiSECt from the beginning using a bias vector computed from the entire training database.
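Concretely, a sketch of the bias computation in (4) might look as follows (the worlds and the value of the mixture weight $\gamma$ are hypothetical, and a uniform prior over consistent worlds is assumed): the empirical edge-free frequency over the worlds still consistent at the leaf is mixed with a uniform term so that no edge receives zero support.

```python
def bias_vector(consistent, edges, gamma=0.1):
    """Mix empirical edge-free frequency with a uniform 1/2 term."""
    b = {}
    for e in edges:
        # Fraction of still-consistent worlds in which edge e is free
        # (uniform prior over consistent worlds assumed).
        freq = sum(w[e] for w in consistent) / len(consistent)
        b[e] = (1.0 - gamma) * freq + gamma * 0.5
    return b

consistent = [{"e1": 1, "e2": 0}, {"e1": 1, "e2": 1}]
b = bias_vector(consistent, ["e1", "e2"])
print(b["e1"], b["e2"])
```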

5 Experiments

5.1 Dataset construction

We evaluated our approach on a collection of datasets spanning a spectrum of motion planning applications that range from simplistic yet insightful 2D problems to more realistic high dimension problems as encountered by a helicopter or a robot arm. The autonomous helicopter dataset in particular is our target application. A typical dataset is constructed as follows. The robot dynamics information is used to create an explicit graph and a start and goal vertex. A dataset of worlds is sampled from a designed generative model. Each edge is evaluated on each world to create a test outcome matrix $X$. A library of paths is created by solving for shortest paths on the dataset and sub-sampling to maintain a fixed library size. This is then used to create a binary membership matrix $M$ encoding the validity of each path on each world. A held-out fraction of the data is used for testing, the remainder for training. The algorithms work with these abstract representations and do not need access to application specific details. Refer to Choudhury et al. [7] for more details on dataset construction. 222We plan to provide a link to open source code and datasets for the camera-ready version.
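The abstract representations above can be sketched as follows (the generative model, graph size and path library are stand-ins, not the paper's actual datasets):

```python
import random

random.seed(0)
edges = ["e%d" % i for i in range(6)]
paths = {"p1": edges[:3], "p2": edges[3:]}

def sample_world():
    # Stand-in generative model: each edge free with probability 0.8.
    return {e: int(random.random() < 0.8) for e in edges}

worlds = [sample_world() for _ in range(100)]
# Test outcome matrix: X[i][j] = status of edge i in world j.
X = [[w[e] for w in worlds] for e in edges]
# Membership matrix: M[i][j] = 1 iff path i is feasible in world j.
M = [[int(all(w[e] for e in p)) for w in worlds] for p in paths.values()]

print(len(X), len(X[0]), len(M), len(M[0]))
```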

Figure 4: Comparison of LazySP, LazySPSet, BiSECt and DiRECt +BiSECt on a selection of datasets. 4 samples from each dataset are shown. The final performance of all algorithms on a test problem is shown: valid edges checked (green) and invalid edges checked (red).
Figure 5: DiRECt performs edge evaluations to collapse the uncertainty about the validity of a path. (a) An example from the Baffle dataset for SE(2) nonholonomic path planning. Here two walls occur in a pair, forcing the path to maneuver through the gap. The prior shows only a general location where obstacles are likely to occur. After 2 checks, DiRECt is able to locate the gap. The resultant posterior allows BiSECt to finish off the episode. (b) A realistic example from the Wires dataset for autonomous helicopter path planning. Here the helicopter is flying over a terrain that may have powerlines. The terrain also has natural obstacles such as hills. Presence of other aircraft and no-fly zones also requires avoidance. The prior shows a band of low likelihood region that corresponds to the presence of the wires. After 2 checks, DiRECt is able to infer the location of obstacles on either flank. The resultant posterior allows BiSECt to focus on the centre region and find a path.

5.2 Baseline algorithms

Our primary baseline is BiSECt [7] which treats each edge as an independent Bernoulli random variable (i.e. averages along each column to use as the bias). We additionally use high performing baselines from Choudhury et al. [7] which were competitive with BiSECt, i.e. the MaxProbReg version of MaxTally, SetCover and MVoI. We add to this the LazySP algorithm [12] which operates on the original graph $\mathcal{G}$. We also introduce a new algorithm, LazySPSet, which is restricted to the library of paths $\Xi$.

5.3 Summary of results

Table 1 shows the evaluation cost of all algorithms on various datasets normalized w.r.t. DiRECt +BiSECt. The two numbers are the lower and upper confidence bounds - hence they convey how much fractionally poorer each algorithm is w.r.t. our approach. The best performance on each dataset is highlighted. Fig. 4 shows a comparison of algorithms on certain datasets. We present a set of observations to interpret these results.

O 1.

DiRECt +BiSECt has a consistently competitive performance across all datasets.

Table 1 shows that DiRECt +BiSECt is on par with the best algorithm on a majority of datasets - on several of those it is exclusively the best.

O 2.

DiRECt is more effective on environments with spatial correlation.

Fig. 4 shows that datasets such as TwoWall, MovingWall, Maze and Baffle are more structured. For example, in the Maze dataset there are 5 hallways with one interconnecting passage. DiRECt is able to locate this passage with a few checks and has better performance than BiSECt, which assumes independence between edges. On the other hand, the Forest dataset has less spatial correlation and BiSECt performs comparably. A similar phenomenon was observed in 7D arm planning between the Clutter (less correlation) and Table+Clutter (more correlation) datasets.

O 3.

DiRECt +BiSECt improves in performance with more data.

Fig. 6(a) shows that both mean and variance reduce as the size of the dataset is increased. This is not only due to DiRECt having better realizability, but also due to BiSECt having a more accurate bias term.

O 4.

BiSECt is essential as a post-processing step.

We defined an algorithm, DiRECtonly, that runs DiRECt to completion and randomly returns a path from the consistent set of paths, i.e. a path DiRECt believes should be feasible. Fig. 6(b) shows the failure rate of DiRECtonly (the rate at which the returned path is infeasible) as a function of training size. The plot shows the failure rate does not go to zero. BiSECt is essential to reason about the remaining paths and the order in which to check edges to ascertain which path is free.

Figure 6: (a) Mean and variance of edge evaluation cost of DiRECt +BiSECt with increasing training size. (b) The average failure rate (failure to identify a feasible path) when only using DiRECt (without BiSECt).

5.4 Case study: Roles played by DiRECt and BiSECt

We take a closer look at the Baffle dataset for SE(2) path planning as shown in Fig. 5(a). The combination of the narrow gap between two walls and the curvature constraint of the robot makes this a challenging problem, as shown by the performance of the baseline LazySP in Fig. 4(e). Note that BiSECt too struggles on this problem. Returning to Fig. 5(a), we see that the prior over edge validity is not informative enough for BiSECt to find the gap. As DiRECt proceeds to collision check edges, it is quickly able to localize the gap between the two walls. Interestingly, it remains relatively uncertain about the actual vertical location of the wall - this reflects DiRECt judiciously reducing uncertainty only enough to make a region valid (i.e. to know if a candidate path would be feasible). The posterior is much more informative for BiSECt, which is able to easily find a feasible path.

We see a similar phenomenon in the Wires dataset for helicopter planning in Fig. 5(b). As DiRECt proceeds to collision check edges, it is quickly able to ascertain the presence of hills on the two flanks and a gap in the centre. BiSECt uses this posterior to focus along the centre and find a path.

6 Discussion and Future Work

In this paper, we addressed the problem of identifying a feasible path from a library while minimizing the expected cost of edge evaluation, given priors on the likelihood of edge validity. We showed that this problem is equivalent to a DRD problem where the goal is to select tests (edges) that drive uncertainty into a single decision region (a valid path). We proposed an approach that combines two DRD algorithms, DiRECt and BiSECt, to efficiently solve the problem. We validated our approach on a spectrum of problems against state-of-the-art heuristics and showed that it has consistent performance across datasets. These results demonstrate the efficacy of leveraging prior data to significantly reduce collision checking effort. We now discuss some insights and directions for future work.

Q 1.

How can we relax two restrictions of the framework in Section 2 - the prior being specified only via a finite database of worlds, and selection being limited to a fixed library of paths?

An alternate approach to modeling belief over configuration spaces is to assume edges are locally correlated. Under this assumption, one can use local models such as KDE [8], mixture of Gaussians [20], RKHS [28] or even customized models [27]. The efficacy of these models depends on how accurately they can represent the world, how efficiently they can be updated and how efficiently they can be projected onto the graph. The active learner not only needs to reason about the current belief over the world, but also about belief posteriors conditioned on possible outcomes of edge evaluation.

Explicitly reasoning about a set of paths is expensive as the size of the set can be exponential in the number of edges in the graph. An alternate method is to directly reason about a distribution over all possible paths between two vertices implicitly, however, this can be intractable. Tractable approximations to such functions have been explored in the context of edge selection [12]. Adopting such techniques in the active learning setting would be interesting to pursue.

Q 2.

We have so far been concerned with finding a feasible path. Can we extend our framework to the optimal path identification problem?

Introducing an additional criterion of minimizing path cost creates a tension between producing high quality paths and expending more evaluation effort. A desirable behaviour is to have an anytime algorithm that traverses the Pareto-frontier [8, 9]. We can tweak our algorithm to display such behaviour - we first solve the feasible path identification problem, prune all costlier paths (including the identified one) from the library, prune worlds which belonged only to those paths, and then solve the feasible path problem again. However, while this will eventually converge to the optimal path, we cannot necessarily control the speed of convergence.

7 Acknowledgement

The authors thank Shushman Choudhury for feedback, insightful discussions and the robot arm dataset. They also thank Shervin Javdani for helpful tips on DiRECt implementation.

References

  • Bohlin and Kavraki [2000] Robert Bohlin and Lydia E Kavraki. Path planning using Lazy PRM. In ICRA, 2000.
  • Burns and Brock [2005] Brendan Burns and Oliver Brock. Sampling-based motion planning using predictive models. In ICRA, 2005.
  • Chaloner and Verdinelli [1995] Kathryn Chaloner and Isabella Verdinelli. Bayesian experimental design: A review. Statistical Science, pages 273–304, 1995.
  • Chen et al. [2015] Yuxin Chen, Shervin Javdani, Amin Karbasi, Drew Bagnell, Siddhartha Srinivasa, and Andreas Krause. Submodular surrogates for value of information. In AAAI, 2015.
  • Choudhury et al. [2014] Sanjiban Choudhury, Sankalp Arora, and Sebastian Scherer. The planner ensemble and trajectory executive: A high performance motion planning system with guaranteed safety. In AHS 70th Annual Forum, 2014.
  • Choudhury et al. [2016a] Sanjiban Choudhury, Jonathan D. Gammell, Timothy D. Barfoot, Siddhartha Srinivasa, and Sebastian Scherer. Regionally accelerated batch informed trees (RABIT*): A framework to integrate local information into optimal path planning. In ICRA, 2016a.
  • Choudhury et al. [2017a] Sanjiban Choudhury, Shervin Javdani, Siddhartha Srinivasa, and Sebastian Scherer. Near-optimal edge evaluation in explicit generalized binomial graphs. arXiv, 2017a.
  • Choudhury et al. [2016b] Shushman Choudhury, Christopher M Dellin, and Siddhartha S Srinivasa. Pareto-optimal search over configuration space beliefs for anytime motion planning. In IROS, 2016b.
  • Choudhury et al. [2017b] Shushman Choudhury, Oren Salzman, Sanjiban Choudhury, and Siddhartha S Srinivasa. Densification strategies for anytime motion planning over large dense roadmaps. In ICRA, 2017b.
  • Cohen et al. [2015] Benjamin Cohen, Mike Phillips, and Maxim Likhachev. Planning single-arm manipulations with n-arm robots. In Eighth Annual Symposium on Combinatorial Search, 2015.
  • Dasgupta [2004] Sanjoy Dasgupta. Analysis of a greedy active learning strategy. In NIPS, 2004.
  • Dellin and Srinivasa [2016] Christopher M Dellin and Siddhartha S Srinivasa. A unifying formalism for shortest path problems with expensive edge evaluations via lazy best-first search over paths with edge selectors. In ICAPS, 2016.
  • Dellin et al. [2016] Christopher M Dellin, Kyle Strabala, G Clark Haynes, David Stager, and Siddhartha S Srinivasa. Guided manipulation planning at the darpa robotics challenge trials. In Experimental Robotics, 2016.
  • Gammell et al. [2015] Jonathan D. Gammell, Siddhartha S. Srinivasa, and Timothy D. Barfoot. Batch Informed Trees: Sampling-based optimal planning via heuristically guided search of random geometric graphs. In ICRA, 2015.
  • Golovin and Krause [2011] Daniel Golovin and Andreas Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research, 2011.
  • Golovin et al. [2010] Daniel Golovin, Andreas Krause, and Debajyoti Ray. Near-optimal Bayesian active learning with noisy observations. In NIPS, 2010.
  • Hart et al. [1968] Peter E Hart, Nils J Nilsson, and Bertram Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Trans. on Systems Science and Cybernetics, 1968.
  • Hauser [2015] Kris Hauser. Lazy collision checking in asymptotically-optimal motion planning. In ICRA, 2015.
  • Howard [1966] Ronald A Howard. Information value theory. IEEE Tran. Systems Science Cybernetics, 1966.
  • Huh and Lee [2016] Jinwook Huh and Daniel D Lee. Learning high-dimensional mixture models for fast collision detection in rapidly-exploring random trees. In ICRA, 2016.
  • Javdani et al. [2014] Shervin Javdani, Yuxin Chen, Amin Karbasi, Andreas Krause, Drew Bagnell, and Siddhartha Srinivasa. Near optimal bayesian active learning for decision making. In AISTATS, 2014.
  • Kononenko [2001] Igor Kononenko. Machine learning for medical diagnosis: History, state of the art and perspective. Artificial Intelligence in Medicine, 2001.
  • Krause and Guestrin [2009] Andreas Krause and Carlos Guestrin. Optimal value of information in graphical models. Journal of Artificial Intelligence Research, 35:557–591, 2009.
  • LaValle [2006] S. M. LaValle. Planning Algorithms. Cambridge University Press, Cambridge, U.K., 2006.
  • Narayanan and Likhachev [2017] Venkatraman Narayanan and Maxim Likhachev. Heuristic search on graphs with existence priors for expensive-to-evaluate edges. In ICAPS, 2017.
  • Nielsen and Kavraki [2000] Christian L Nielsen and Lydia E Kavraki. A 2-level fuzzy PRM for manipulation planning. In IROS, 2000.
  • Pan et al. [2012] Jia Pan, Sachin Chitta, and Dinesh Manocha. Faster sample-based motion planning using instance-based learning. In WAFR. Springer Verlag, 2012.
  • Ramos and Ott [2016] Fabio Ramos and Lionel Ott. Hilbert maps: Scalable continuous occupancy mapping with stochastic gradient descent. IJRR, 2016.
  • Yoshizumi et al. [2000] Takayuki Yoshizumi, Teruhisa Miura, and Toru Ishida. A* with partial expansion for large branching factor problems. In AAAI/IAAI, pages 923–929, 2000.