# Efficient motion planning for problems lacking optimal substructure

## Abstract

We consider the motion-planning problem of planning a collision-free path of a robot in the presence of risk zones. The robot is allowed to travel in these zones but is penalized in a super-linear fashion for consecutive accumulative time spent there. We suggest a natural cost function that balances path length and risk-exposure time. Specifically, we consider the discrete setting where we are given a graph, or a roadmap, and we wish to compute the minimal-cost path under this cost function. Interestingly, paths defined using our cost function do not have an optimal substructure. Namely, subpaths of an optimal path are not necessarily optimal. Thus, the Bellman condition is not satisfied and standard graph-search algorithms such as Dijkstra's cannot be used. We present a path-finding algorithm, which can be seen as a natural generalization of Dijkstra's algorithm. Our algorithm runs in $O\big(k(n+m)\log(nk)\big)$ time, where $n$ and $m$ are the number of vertices and edges of the graph, respectively, and $k$ is the number of intersections between edges and the boundary of the risk zone. We present simulations on robotic platforms demonstrating both the natural paths produced by our cost function and the computational efficiency of our algorithm.

## 1 Introduction

In this paper, we explore motion-planning problems where an agent has to compute the least-cost path to navigate through *risk zones* while avoiding obstacles. Travelling through these regions incurs a penalty which is *super-linear* in the traversal time. We call this class of problems *Risk-Aware Motion Planning (RAMP)* and define a natural cost function which simultaneously optimizes for paths that are both short and reduce consecutive exposure time in the risk zone.

We are motivated by real-world problems involving *risk*, where continuous exposure is much worse than intermittent exposure. Examples include pursuit-evasion where sneaking in and out of cover is the preferred strategy, and visibility planning where the agent must ensure that an observer or operator is minimally occluded.

Interestingly, this setting (where continuous exposure is much worse than intermittent exposure) is not limited to planning: consider a large number of processes running in a shared-memory environment. Here, we would like to *quantify* the cost of running each process. When a process writes to the shared memory, it is required to use a mutex mechanism, possibly blocking other processes. Clearly, as the blocking time increases, the probability that other processes have to stay idle increases. Thus, as in the previous examples, the cost of running the process is its duration with a super-linear penalty proportional to accumulative consecutive times where the process writes to the shared memory.

Although practically useful, our cost function for RAMP suffers from one fundamental algorithmic challenge: *optimal plans do not possess optimal substructure*.

We explain with an example and an analogy. Consider Figure 1, where an agent, traversing a graph, starts at $s$ and must reach $t$ while avoiding the obstacles. The cost of traversing through the risk-free zone $\mathcal{X}_{\text{safe}}$ is linear in the distance travelled while the cost of traversing the risk zone $\mathcal{X}_{\text{risk}}$ is super-linear in the distance travelled (formalized in Section 3).

Now consider two snails (an homage to Pohl [24]) taking two different paths racing to $t$: snail $A$ enters and exits the risk zone through one pair of boundary points, and snail $B$ through another. If there were no risk zones, and the snails move at constant speed, the optimal substructure implies that when $B$ reaches an intermediate vertex and notices that $A$ has passed through it already (snails leave personalized slime trails), it has no hope of catching up and should give up. In other words, the first snail that reaches $t$ is *also* the first snail that reaches every intermediate point along the optimal path to $t$. Optimal substructure is critical for search algorithms [6] to efficiently track all "promising" snails (Section 4).

Unfortunately, our cost function for RAMP does *not* possess optimal substructure. Imagine that snails passing through a risk region accumulate muck on their foot. The more time a snail spends in a risk region, the more muck it accumulates, and the *slower* it gets. Now, although $B$ reaches the intermediate vertex later than $A$, it has less muck on it, having spent less time in the risk zone than $A$, and actually *catches up and overtakes* $A$ to reach $t$ first.

We define a *boundary point* to be a point at the boundary of a risk-free and a risk region (such as the points where the snails enter and exit the risk zone in our example) and let $k$ denote the number of boundary points in our graph.

We can now address RAMP via two key insights:

Any optimal path from a risk-free start to a risk-free goal that passes through a risk region can be decomposed into (i) optimally reaching a boundary point, (ii) optimally traversing the risk region, and (iii) optimally reaching the goal from the exit (of the risk region). As a consequence, if we store pairwise optimal paths (one for each pair of boundary points), we could create an augmented graph which can be used to directly solve RAMP.

All snails that pass through one specific boundary point satisfy the optimal-substructure condition. In other words, the only way a snail can overtake another is if it enters the risk region via a different boundary point. As a consequence, we *only* need to track $k$ extra snails.

Fuelled by these insights, we propose two fundamentally different algorithms (Section 5) that solve RAMP efficiently by augmenting different data structures used by Dijkstra's algorithm:

Our *precomputation-based algorithm* creates an augmented graph with boundary vertices and their precomputed optimal paths added to the original graph, incurring a time complexity of $O\big(k(n\log n+m)+k^2\big)$. Here $n$ and $m$ are the number of vertices and edges of the graph, respectively.

Our *incremental algorithm* is a strict generalization of Dijkstra's algorithm to account for risk regions. It maintains an augmented priority queue to account for the boundary points, incurring a time complexity of $O\big(k(n+m)\log(nk)\big)$.

Interestingly, the (asymptotic) running time of both algorithms is identical unless the $k^2$ component of the precomputation dominates the running time. This happens when $k=\omega(n\log n+m)$. Intuitively, the precomputation algorithm slows down if there are many boundary *vertices*, whereas the incremental algorithm slows down if there are many risk-zone *edges*.

We evaluate our algorithms in Section 6. Our experiments show that our cost function naturally balances path length and risk-exposure times. Furthermore, we demonstrate the advantage of our incremental algorithm over the precomputation-based approach. Finally, we discuss future work in Section 7.

## 2 Related work

The RAMP problem lies on the intersection between several disciplines which we briefly review. In its general form, our problem can be seen as an instance of the *motion-planning problem* [14]. Indeed, in this work we follow the *sampling-based paradigm* (Sec. 2.1). This calls for computing a discrete graph which is then traversed by a *path-finding* algorithm (Sec. 2.2).

Computing minimal-risk paths in high-dimensional spaces is related to motion planning under *risk constraints* (Sec. 2.3). When the search domain is two-dimensional, alternative, more efficient approaches exist (Sec. 2.4).

### 2.1 Sampling-based motion planning

The basic motion-planning problem calls for moving a robot in a workspace cluttered with static obstacles from a start position to a target one while avoiding obstacles and minimizing some cost function. Typically, the robot is abstracted as a point in a *configuration space* $\mathcal{X}$ which is subdivided into free and forbidden regions [17]. The problem then reduces to computing a minimal-cost collision-free path for a point in $\mathcal{X}$.

For high-dimensional problems, even computing a path, let alone an optimal one, becomes computationally hard [25]. Thus, a common approach is to approximate $\mathcal{X}$ using a graph, or a roadmap, $G$. One such example is the Probabilistic Roadmap Planner, or PRM [12]. Here, vertices of $G$ are points sampled in the free space and two "close-by" vertices are connected by an edge if the straight line connecting the two does not intersect obstacles. A query is then answered by running a shortest-path algorithm on $G$. Under certain assumptions, the cost of solutions obtained by this algorithm converges to the cost of the optimal solution as the number of samples grows [11].

### 2.2 Path planning

Planning a shortest path on a given graph is a well-studied problem. Given a graph with a cost function on its edges, the shortest-path problem asks for a path of minimum cost between two given vertices. When the cost function has an optimal substructure, efficient algorithms such as Dijkstra's [6], A* [9] and their many variants can be used.

In certain applications, including our setting, this is not the case. For example, in [31] every edge is associated with two attributes, say cost and resource, and the objective is to minimize a non-linear function of the accumulated attributes which is convex and non-decreasing.

Our problem is also closely related to multi-objective path planning. Here, we are given a set of cost functions and we are interested in finding a set of paths that captures the trade-off (the so-called Pareto curve) among the several objectives. In general, the number of efficient solutions may be exponential in the problem size [7]. However, a fully polynomial-time approximation scheme (FPTAS) can be found, even for the case where the cost functions are non additive [32]. For recent results on multi-objective and multi-constrained non-additive shortest path problems, see e.g. [26] and references within.

### 2.3 Planning under risk constraints

Planning under risk constraints has been studied in several motion-planning settings. A common approach to formulate this problem is to assign risk values to regions. Paths are considered by the planning algorithm only if the risk obtained in different regions is below some predefined thresholds [22].

Planning under risk constraints was also considered for the specific problem of Autonomous Underwater Vehicles (AUVs). Here, the cost function used was the sum of risk values at waypoints along a given path and the domain was two-dimensional [23].

Planning under risk constraints is closely related to the problem of planning under uncertainty. A common approach is to minimize the estimated collision probability of a path [15]. Additional constraints are often added such as requiring some notion of smoothness [20].

### 2.4 Planning in low dimensions

When planning occurs in low dimensions, efficient algorithms exist (see, e.g., [8] for a survey). We briefly mention several variants which are related to the problem we consider.

One such example is computing minimal-cost paths for weighted planar regions [18]. Here, a planar space is subdivided into different regions, where each region is assigned a positive weight. The length of a path is defined as the weighted sum of (Euclidean) lengths of the subpaths within each region. The problem is conjectured to be computationally hard, thus the focus of the community has been on approximation algorithms. For an overview of recent results, as well as a generalization of the problem, see [29].

Another example is computing paths which are simultaneously short and stay away from the obstacles [33]. It is not clear whether this problem is NP-Hard; however, an FPTAS for it is known [1].

## 3 Problem formulation

Let $\mathcal{X}$ denote the $d$-dimensional configuration space, $\mathcal{X}_{\text{free}}$ the collision-free portion of $\mathcal{X}$, and $\mathcal{X}_{\text{obs}}=\mathcal{X}\setminus\mathcal{X}_{\text{free}}$. Let $\mathcal{X}_{\text{risk}}\subseteq\mathcal{X}_{\text{free}}$ and $\mathcal{X}_{\text{safe}}=\mathcal{X}_{\text{free}}\setminus\mathcal{X}_{\text{risk}}$ denote the risk and risk-free zones, respectively. We assume that $\mathcal{X}_{\text{risk}}$ and $\mathcal{X}_{\text{safe}}$ are open and closed sets, respectively. See Fig. ?.

A trajectory $\gamma:[0,t_\gamma]\to\mathcal{X}_{\text{free}}$ is a continuous mapping between time and configurations. The image of a trajectory is called a path. By a slight abuse of notation we refer to $\gamma([t_1,t_2])$ as the sub-path connecting $\gamma(t_1)$ and $\gamma(t_2)$ for $0\le t_1\le t_2\le t_\gamma$. Finally, we assume that both endpoints of the path lie in the risk-free zone. Namely, $\gamma(0),\gamma(t_\gamma)\in\mathcal{X}_{\text{safe}}$.

Given a trajectory $\gamma$ and some time $t$, let $t_{\text{safe}}(t)$ be the latest time $t'\le t$ such that $\gamma(t')\in\mathcal{X}_{\text{safe}}$. Notice that if $\gamma(t)\in\mathcal{X}_{\text{safe}}$ then $t_{\text{safe}}(t)=t$. We define the *current exposure time* of $\gamma$ at $t$ as $\tau(t)=t-t_{\text{safe}}(t)$. Namely, if $\gamma(t)\in\mathcal{X}_{\text{risk}}$ then $\tau(t)$ is the time passed since $\gamma$ last entered $\mathcal{X}_{\text{risk}}$. If $\gamma(t)\in\mathcal{X}_{\text{safe}}$ then $\tau(t)=0$.

We are now ready to define our cost function. Let $\gamma$ be a trajectory and $f:\mathbb{R}_{\ge0}\to\mathbb{R}_{\ge0}$ any smooth function such that $f(0)=0$ and $f'(0)=1$. The cost of $\gamma$, denoted by $c(\gamma)$, is defined as

$$c(\gamma)=\int_0^{t_\gamma}f'\big(\tau(t)\big)\,\mathrm{d}t.\tag{1}$$

Equation 1 penalizes continuous exposure to risk in a super-linear fashion (hence the requirement that $f$ be super-linear, i.e., $f''>0$). As $\tau(t)=0$ and $f'(0)=1$ in the risk-free zone, the cost of traversing the risk-free zone is simply path length. See Figure 2 for a conceptual visualization of the current exposure time and our cost function.

Equipped with our cost function we can formally state the risk-aware motion-planning problem:

**P1** (*Risk-aware motion-planning problem, RAMP*) Given the tuple $(\mathcal{X}_{\text{free}},\mathcal{X}_{\text{risk}},s,t)$, where $s,t\in\mathcal{X}_{\text{safe}}$ are start and target configurations, compute $\gamma^*=\arg\min_{\gamma\in\Gamma}c(\gamma)$, with $\Gamma$ the set of all collision-free trajectories connecting $s$ and $t$.

As mentioned, RAMP is a generalization of the motion-planning problem (where there are no risk zones) which is known to be PSPACE-Hard [25]. Thus, in this paper we concentrate on the discrete version of the problem:

**P2** (*Discrete risk-aware motion-planning problem, dRAMP*) Given the tuple $(\mathcal{X}_{\text{free}},\mathcal{X}_{\text{risk}},G,s,t)$, where $G=(V,E)$ is a roadmap embedded in the C-space such that $s,t\in V$, compute $\pi^*=\arg\min_{\pi\in\Pi}c(\pi)$, with $\Pi$ the set of all collision-free paths^{1} in $G$ connecting $s$ and $t$.

To simplify the discussion, in the rest of this paper we assume that the robot is moving at constant (unit) speed and we use $f(x)=e^x-1$. Thus, we can re-write Equation 1 as

$$c(\gamma)=\int_0^{t_\gamma}e^{\tau(t)}\,\mathrm{d}t.\tag{2}$$

Using the assumption that the robot is moving at constant speed, we will use the terms duration of a trajectory and path length interchangeably (here we measure path length using the Euclidean distance). Further exploiting this assumption and by a slight abuse of notation, we will also use Equation 2 to define the cost of a path (and not of a trajectory).
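To make the cost concrete, the following sketch evaluates the cost of a path given as a list of segments, each labeled as lying in the risk zone or not, assuming the super-linear penalty $f(x)=e^x-1$ (one natural choice satisfying the stated requirements). The segment representation and function name are ours, for illustration only.

```python
import math

def path_cost(segments):
    """Cost of a path under f(x) = exp(x) - 1 and unit speed.

    `segments` is a list of (length, in_risk) pairs in path order.
    Consecutive risk segments accumulate exposure time; a safe
    segment resets it to zero, matching the definition of tau.
    """
    cost = 0.0
    tau = 0.0  # current exposure time
    for length, in_risk in segments:
        if in_risk:
            # marginal cost of extending a risk subpath from tau to tau + length
            cost += math.exp(tau + length) - math.exp(tau)
            tau += length
        else:
            cost += length  # risk-free travel costs its length
            tau = 0.0
    return cost
```

Note that splitting a risk traversal with a short safe detour can reduce the cost: `path_cost([(1.0, True), (0.5, False), (1.0, True)])` is cheaper than `path_cost([(2.0, True)])`, exactly the behavior that breaks optimal substructure.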

## 4 Preliminaries

In this section we review Dijkstra's path-finding algorithm (using a min-priority queue), which relies on the cost function having an optimal substructure. We then continue to discuss our cost function: why it does not have an optimal substructure and what properties it does have.

### 4.1 Dijkstra's shortest-path algorithm

Dijkstra's algorithm computes the minimal-cost path between a given start vertex $s$ and all other vertices in a graph. Returning to our snails, the algorithm can be intuitively described as follows: a snail starts at $s$ moving at constant speed. Every time it reaches a vertex, it splits into multiple snails, one for each outgoing edge. If a snail reaches a vertex which was already reached by another snail (identified by the existing trail of slime), it retracts into its shell and stops moving. Clearly, the distance travelled by the first snail to reach any vertex $v$ is the minimal distance to reach $v$.

The problem with the aforementioned process is that it is continuous. Dijkstra's algorithm uses discrete times where each time-step represents the event that a snail reaches a specific vertex. The key difference is that here only one snail moves at a time and its movement spans the entire length of an edge. Processing an event is similar to the continuous version: if a snail reaches a vertex which was already reached by another snail, it retracts into its shell and stops its progress. Otherwise, it splits into multiple snails, one for each outgoing edge, and the time the snail is intended to reach the edge's endpoint is computed. This is registered as a new event.

The implementation of the aforementioned process is via a min-cost priority queue $Q$. Each entry in the queue represents the time, or cost, at which a snail is to reach a vertex $v$ through its parent $u$. We emphasize that for each vertex $v$, only the *current best* path (or snail) is maintained, together with its cost. Initially, this value is only known for $s$. For every other vertex this value is unknown, thus the event is initialized to have infinite cost. After all such events are inserted into the queue, the minimal-cost entry is removed and if it is the goal, the process terminates. If not, then for every neighboring edge that is collision free, the cost to reach the neighbor $w$ via $v$ is computed. If it represents a shorter path to reach $w$ than $w$'s current entry, $w$'s entry together with its location in the priority queue are updated.
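The queue-based process above can be sketched in a few lines; this is a generic implementation, not the paper's pseudocode, and it uses lazy deletion of stale entries in place of the decrease-key update described above:

```python
import heapq

def dijkstra(graph, start):
    """Minimal cost from `start` to every vertex of `graph`, given as
    an adjacency dict {u: [(v, weight), ...]}."""
    dist = {start: 0.0}
    queue = [(0.0, start)]  # (cost-to-come, vertex)
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue  # stale entry: a cheaper snail reached u first
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w  # found a better path to v
                heapq.heappush(queue, (d + w, v))
    return dist
```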

### 4.2 Properties of our cost function

Recall that our cost does not have an optimal substructure (see Section 1 and Figure 1). This implies that, for any vertex $v$, we need to consider not only the optimal path to reach $v$, but also all paths that pass through $v$ and may be part of a minimal-cost path to reach some future vertex (that passes through $v$). To do so, we must characterize this set of paths. We start by noting the following properties of Equation 2:

**Obs. 1** The cost of a subpath contained in $\mathcal{X}_{\text{safe}}$ is its length.

**Obs. 2** The cost of a maximally connected subpath of length $\ell$ contained in $\mathcal{X}_{\text{risk}}$ is $e^{\ell}-1$.

Using the fact that Obs. 1 and 2 hold for any maximally connected subpath in $\mathcal{X}_{\text{safe}}$ or $\mathcal{X}_{\text{risk}}$, we can rewrite Equation 2. Namely, let $\pi=\pi_1\cdot\pi_2\cdots\pi_r$ such that each $\pi_i$ is a maximally connected subpath contained either in $\mathcal{X}_{\text{safe}}$ or in $\mathcal{X}_{\text{risk}}$; then^{2}

$$c(\pi)=\sum_{\pi_i\subseteq\mathcal{X}_{\text{safe}}}|\pi_i|+\sum_{\pi_i\subseteq\mathcal{X}_{\text{risk}}}\left(e^{|\pi_i|}-1\right).\tag{3}$$

We characterize the set of paths that our algorithm will have to consider using the notion of *domination*. Given two trajectories $\gamma_1,\gamma_2$ that start at $s$ and end in some vertex $v$, we say that $\gamma_1$ *dominates* $\gamma_2$ if it will always be at least as beneficial to use $\gamma_1$ and not $\gamma_2$ to reach some future vertex. Namely, $\gamma_1$ dominates $\gamma_2$ if $c(\gamma_1)\le c(\gamma_2)$ and $\tau_1(t_{\gamma_1})\le\tau_2(t_{\gamma_2})$. The set of all trajectories that start at $s$ and end in $v$ where no trajectory dominates any other trajectory is said to be a *useful* set of trajectories.
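In code, the domination test reduces to a componentwise comparison of cost and current exposure time; the tuple representation here is our own:

```python
def dominates(e1, e2):
    """True if trajectory e1 = (cost, exposure) dominates e2:
    no more expensive and no more currently exposed."""
    return e1[0] <= e2[0] and e1[1] <= e2[1]

def useful(entries):
    """Reduce a list of distinct (cost, exposure) pairs to the useful
    set: entries not dominated by any other entry."""
    return [e for e in entries
            if not any(dominates(o, e) for o in entries if o != e)]
```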

Understanding path domination and the maximal size of a useful set of trajectories will be key in understanding our algorithms and bounding their running time. We note several properties of such trajectories:

**Lemma 1** Let $v\in\mathcal{X}_{\text{safe}}$, let $\gamma_1$ be a minimal-cost trajectory from $s$ to $v$ and let $\gamma_2$ be any other trajectory from $s$ to $v$. Then $\gamma_1$ dominates $\gamma_2$.

**Proof:** Since $v\in\mathcal{X}_{\text{safe}}$ we have that $\tau_1(t_{\gamma_1})=\tau_2(t_{\gamma_2})=0$. Furthermore, $\gamma_1$ is a minimal-cost path, thus $c(\gamma_1)\le c(\gamma_2)$. Hence, by definition, $\gamma_1$ dominates $\gamma_2$.

**Lemma 2** Let $v\in\mathcal{X}_{\text{risk}}$ and let $\gamma_1,\gamma_2$ be two trajectories from $s$ to $v$ that enter $v$'s risk region through the same border point $x$, where $\gamma_1$ is of minimal cost among such trajectories. Then $\gamma_1$ dominates $\gamma_2$.

**Proof:** Let $t_1$ and $t_2$ be the times that $\gamma_1$ and $\gamma_2$ reach $x$, respectively. If $c(\gamma_1([0,t_1]))>c(\gamma_2([0,t_2]))$, by Equation 3 we can replace subpath $\gamma_1([0,t_1])$ with $\gamma_2([0,t_2])$ in $\gamma_1$ and reduce its cost, which contradicts $\gamma_1$ being a minimal-cost path. The same argument holds for the subpaths $\gamma_1([t_1,t_{\gamma_1}])$ and $\gamma_2([t_2,t_{\gamma_2}])$. Thus, we have that $c(\gamma_1)\le c(\gamma_2)$ and $\tau_1(t_{\gamma_1})\le\tau_2(t_{\gamma_2})$, and $\gamma_1$ dominates $\gamma_2$.

## 5 Minimal-cost path-finding algorithm

In this section we address problem P2, namely how to efficiently compute a minimal-cost path between two vertices in a given roadmap. As we have seen, in Dijkstra's algorithm (and its many variants), each vertex $v$ can have only one useful path, which will dominate all other paths to reach $v$. Lemmas 1 and 2 imply that in our setting this holds for vertices in $\mathcal{X}_{\text{safe}}$, but for vertices in $\mathcal{X}_{\text{risk}}$ the number of useful paths may be as large as the number of edges entering $\mathcal{X}_{\text{risk}}$. This gives rise to two different algorithmic approaches:

The first approach (Sec. 5.2) is to directly use Equation 3 by splitting the graph into subgraphs that are fully contained in $\mathcal{X}_{\text{safe}}$ and subgraphs that are fully contained in $\mathcal{X}_{\text{risk}}$. We then pre-compute the graph distance between all points that lie on the border of $\mathcal{X}_{\text{safe}}$ and $\mathcal{X}_{\text{risk}}$ (we call these *border points*) restricted to moving only in $\mathcal{X}_{\text{risk}}$. Using these distances allows us to define a new graph, which has an edge between every pair of border points, with weights assigned using the precomputed distances. We can then run any shortest-path algorithm on the new graph without having to consider the multiple useful paths of vertices in $\mathcal{X}_{\text{risk}}$. This is analogous to having our snails roam the original graph while considering a traversal of $\mathcal{X}_{\text{risk}}$ as one discrete event.

The first algorithm (Sec. 5.2), which requires preprocessing the entire graph, can be seen as a warm-up for our efficient path-finding algorithm (Sec. 5.3). This algorithm essentially runs a Dijkstra-type search without any preprocessing. To do so, for vertices within $\mathcal{X}_{\text{risk}}$, it efficiently maintains all useful paths. Here we can envision our snails entering $\mathcal{X}_{\text{risk}}$ as in Dijkstra's algorithm. However, when one snail reaches a vertex already traversed by another snail, it only retracts into its shell if the other snail dominates it.

Both algorithms have to distinguish between $\mathcal{X}_{\text{safe}}$ and $\mathcal{X}_{\text{risk}}$. Thus, we start by defining the *refined roadmap* (Sec. 5.1) and then continue to detail each algorithm.

### 5.1 Refined roadmap

An edge $(u,v)\in E$ is said to be a *border edge* if it straddles $\mathcal{X}_{\text{safe}}$ and $\mathcal{X}_{\text{risk}}$. Namely, if $u\in\mathcal{X}_{\text{safe}}$ and $v\in\mathcal{X}_{\text{risk}}$, or vice versa. For simplicity of exposition, we assume that every border edge intersects the boundary of $\mathcal{X}_{\text{risk}}$ exactly once. We denote this point $x_{u,v}$ and call it a *border point*. Set $E_{\text{B}}$ to be the set of all border edges and $V_{\text{B}}$ to be the set of all border points.

Given a roadmap $G=(V,E)$, define the *refined roadmap* $G_{\text{R}}=(V_{\text{R}},E_{\text{R}})$ such that $V_{\text{R}}=V\cup V_{\text{B}}$ and $E_{\text{R}}=(E\setminus E_{\text{B}})\cup\{(u,x_{u,v}),(x_{u,v},v)\mid(u,v)\in E_{\text{B}}\}$. Namely, the refined roadmap is the roadmap defined by adding all border points to the original set of vertices and subdividing border edges accordingly.
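The refinement step can be sketched as follows. For simplicity of the sketch, each border point is placed at the midpoint of its border edge; in the actual refined roadmap it is the intersection of the edge with the zone boundary. Names and the edge representation are ours:

```python
def refine(edges, in_risk):
    """Subdivide every border edge at a synthetic border vertex.

    `edges` is a list of (u, v, length) triples and `in_risk` maps a
    vertex to True/False. Returns the refined edge list and the list
    of border vertices. The border point is placed at the midpoint,
    an illustrative simplification.
    """
    refined, border = [], []
    for u, v, w in edges:
        if in_risk(u) != in_risk(v):      # a border edge straddles the zones
            b = ("x", u, v)               # synthetic border vertex
            refined += [(u, b, w / 2.0), (b, v, w / 2.0)]
            border.append(b)
        else:
            refined.append((u, v, w))
    return refined, border
```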

### 5.2 Minimal-cost planning via precomputation

To compute the shortest path in $G$, we start by constructing $G_{\text{R}}$. For each border point, we run Dijkstra's algorithm restricted to $\mathcal{X}_{\text{risk}}$. This gives us a mapping $d_{\text{risk}}:V_{\text{B}}\times V_{\text{B}}\to\mathbb{R}_{\ge0}\cup\{\infty\}$ that denotes shortest *distances*, or traversal times, of paths that stay strictly in $\mathcal{X}_{\text{risk}}$ (if no such path exists, then the mapping returns $\infty$). Thus, the cost of the shortest path between two border points $x_1,x_2$ that stays strictly in $\mathcal{X}_{\text{risk}}$ is $e^{d_{\text{risk}}(x_1,x_2)}-1$. Now, construct the graph $G'=(V',E')$ where $V'=(V\cap\mathcal{X}_{\text{safe}})\cup V_{\text{B}}$. Namely, $V'$ consists of all vertices that are risk free or are border points. $E'$ contains three types of edges: (i) all original edges that are risk free, (ii) edges from $E_{\text{R}}$ that start at a risk-free vertex and end at a border point, and (iii) new edges connecting each pair of border points within the same risk zone. The weights of edges in $E'$ are simply the weights of the corresponding edges in $G_{\text{R}}$ if they are of the first two types, or $e^{d_{\text{risk}}(x_1,x_2)}-1$ if the edge is of the third type. See Fig. ?. Finally, we run any shortest-path algorithm between $s$ and $t$ in $G'$.
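A sketch of the type-(iii) edge construction, reusing a Dijkstra restricted to risk-zone edges; function names are ours, and zone membership is assumed to be precomputed:

```python
import heapq
import math

def risk_distances(risk_graph, source):
    """Shortest distances from `source` moving only along risk-zone
    edges; `risk_graph` is an adjacency dict of those edges alone."""
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue  # stale entry
        for v, w in risk_graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(queue, (d + w, v))
    return dist

def border_edges(risk_graph, border_points):
    """Type-(iii) edges of G': connect each ordered pair of border
    points reachable through the risk zone, weighted exp(d) - 1."""
    edges = []
    for b1 in border_points:
        dist = risk_distances(risk_graph, b1)
        for b2 in border_points:
            if b2 != b1 and b2 in dist:
                edges.append((b1, b2, math.exp(dist[b2]) - 1.0))
    return edges
```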

This algorithm requires preprocessing the entire graph and computing distances between all pairs of border points. As we will see, this may incur unnecessary computations. We continue with a Dijkstra-type algorithm that computes paths in a just-in-time manner.

### 5.3 Minimal-cost planning via incremental search

Recall that Dijkstra's algorithm makes use of a priority queue $Q$ with entries of the form (vertex, parent, cost). Our algorithm will also store entries in $Q$, which we call *Risk-Aware Shortest Path* entries, or RASP entries. Each such RASP entry represents a dominating path. Following Lemma 1, for each vertex $v\in\mathcal{X}_{\text{safe}}$ we need one such entry. Following Lemma 2, for each vertex $v\in\mathcal{X}_{\text{risk}}$ we need at most one entry for each border point of $v$'s risk region.

Specifically, each entry will represent a dominating path, or trajectory $\gamma$, to reach a vertex $v$ (implicitly defined by the parent pointer $u$), its cost and duration (stored as $c$ and $t$, respectively), its current exposure time at time $t$ (stored as $\tau$), and the last border point ($x$) that $\gamma$ passed through if $v\in\mathcal{X}_{\text{risk}}$ (NIL if $v\in\mathcal{X}_{\text{safe}}$).

The algorithm (Alg. ?) starts by initializing RASP entries (lines 1-4) and inserting them into the min-priority queue $Q$ (lines 5-6). Entries in $Q$ are ordered according to their cost. We iteratively pop the min-cost entry from $Q$ and set $v$ to be the vertex associated with the entry (line 8). If it is the goal vertex, then a minimal-cost path has been found and the algorithm terminates (lines 9-10). If not, we consider each of its neighbors $w$, and if the edge connecting the two is collision free we expand the trajectory to $w$ (line 12). This trajectory is represented by a temporary RASP entry. As in Dijkstra's algorithm, we check if it improves the current-best path to reach $w$; here we restrict the comparison to paths that enter $\mathcal{X}_{\text{risk}}$ through the same border point. If this is the case (lines 13-15), we update the relevant RASP entry and decrease its cost in $Q$ (lines 14-15).

We now detail the `expand` operation (line 12). Specifically, let $[v,u,c,t,\tau,x]$ be the entry popped from $Q$, with $v$ its associated vertex. Let $w$ be its neighbor and let $\|vw\|$ denote the length of the edge $(v,w)$. We describe the content of the new RASP entry according to whether $v$ and $w$ are in $\mathcal{X}_{\text{safe}}$ or $\mathcal{X}_{\text{risk}}$. See the appendix for a visualization of how the RASP lists are maintained by the algorithm.

Case (i) $v\in\mathcal{X}_{\text{safe}}$ and $w\in\mathcal{X}_{\text{safe}}$:

We set $c_{\text{new}}=c+\|vw\|$, $t_{\text{new}}=t+\|vw\|$, $\tau_{\text{new}}=0$ and $x_{\text{new}}=\text{NIL}$.

Case (ii) $v\in\mathcal{X}_{\text{safe}}$ and $w\in\mathcal{X}_{\text{risk}}$:

We compute the border point $x_{v,w}$ and the lengths $\ell_1=\|v\,x_{v,w}\|$ and $\ell_2=\|x_{v,w}\,w\|$. We then compute the RASP entry which represents the path reaching $w$ using $v$ as its parent. This entry will have $c_{\text{new}}=c+\ell_1+(e^{\ell_2}-1)$, $t_{\text{new}}=t+\ell_1+\ell_2$, $\tau_{\text{new}}=\ell_2$ and $x_{\text{new}}=x_{v,w}$.

Case (iii) $v\in\mathcal{X}_{\text{risk}}$ and $w\in\mathcal{X}_{\text{risk}}$:

We set $c_{\text{new}}=c+\big(e^{\tau+\|vw\|}-e^{\tau}\big)$, $t_{\text{new}}=t+\|vw\|$, $\tau_{\text{new}}=\tau+\|vw\|$ and $x_{\text{new}}=x$.

Case (iv) $v\in\mathcal{X}_{\text{risk}}$ and $w\in\mathcal{X}_{\text{safe}}$:

We compute the border point $x_{v,w}$ and the lengths $\ell_1=\|v\,x_{v,w}\|$ and $\ell_2=\|x_{v,w}\,w\|$. Similar to case (ii), we set $c_{\text{new}}=c+\big(e^{\tau+\ell_1}-e^{\tau}\big)+\ell_2$, $t_{\text{new}}=t+\ell_1+\ell_2$, $\tau_{\text{new}}=0$ and $x_{\text{new}}=\text{NIL}$.
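The four cases can be sketched as one function; the entry layout, names and the explicit border-split argument are our own choices, with $f(x)=e^x-1$ as above:

```python
import math

def expand(entry, v_in_risk, w_in_risk, length, split=None, border=None):
    """Expand a RASP entry (c, t, tau, x) along an edge of the given
    length. For border edges, `split` = (l1, l2) gives the lengths
    before and after the border point, and `border` is its label."""
    c, t, tau, x = entry
    if not v_in_risk and not w_in_risk:            # case (i): safe -> safe
        return (c + length, t + length, 0.0, None)
    if not v_in_risk and w_in_risk:                # case (ii): safe -> risk
        l1, l2 = split
        return (c + l1 + math.exp(l2) - 1.0, t + l1 + l2, l2, border)
    if v_in_risk and w_in_risk:                    # case (iii): risk -> risk
        return (c + math.exp(tau + length) - math.exp(tau),
                t + length, tau + length, x)
    l1, l2 = split                                 # case (iv): risk -> safe
    return (c + math.exp(tau + l1) - math.exp(tau) + l2,
            t + l1 + l2, 0.0, None)
```

Chaining the cases reproduces Equation 3: a safe stretch of length 1, a risk stretch of total length 2.5 and a final safe stretch of length 0.5 cost $1+(e^{2.5}-1)+0.5$ in total.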

#### Practical implementation

Similar to our description of Dijkstra's algorithm (Section 4), we traded practical efficiency for ease of exposition (this has no effect on the asymptotic runtime of the algorithm). In practice, we can initialize $Q$ to contain only the RASP entry associated with $s$. Other RASP entries can be created on the fly when they are first constructed by the `expand` operation (lines 12-15).

Additionally, Lemma 2 implies that a given vertex can have at most $k$ useful trajectories. In practice, this number may be much smaller. Thus, before inserting a RASP entry into $Q$, we can check if it is dominated by any other entry in $Q$ associated with the same vertex.

Finally, the same algorithm can be transformed into an A*-type algorithm by using a heuristic that estimates the cost-to-go and ordering $Q$ according to the sum of the cost-to-come and the estimated cost-to-go.

### 5.4 Computational complexity

In this section we discuss the computational complexity of our search algorithms. We assume that testing if an edge is collision free and computing border points and distances take constant time. We note that while this assumption is common in search algorithms such as Dijkstra and A*, in many motion-planning applications, these operations often dominate the (practical) running time of search algorithms [14].

Recall that $k$ is the number of border points in $G_{\text{R}}$ and that our algorithm that uses precomputation (Sec. 5.2) (i) runs Dijkstra's algorithm from every border point (restricted to vertices within $\mathcal{X}_{\text{risk}}$), (ii) adds an edge between every two border points in the same connected component, and (iii) runs a shortest-path algorithm on the new graph $G'$. Step (i) takes $O\big(k(n\log n+m)\big)$ time. We then add $O(k^2)$ edges to our new graph in step (ii). This implies that the number of vertices and edges of $G'$ is $O(n+k)$ and $O(m+k^2)$, respectively. Thus, step (iii) takes $O\big((n+k)\log(n+k)+m+k^2\big)$ time. To summarize, our precomputation-based algorithm takes $O\big(k(n\log n+m)+k^2\big)$ time.

Our incremental search algorithm (Sec. 5.3) has complexity identical to Dijkstra's except that there may be at most $nk$ RASP entries in $Q$ (and not $n$). Moreover, each outgoing edge of a vertex $v$ can be expanded once for each of $v$'s useful paths, which is at most $k$. Thus, our incremental search algorithm runs in $O\big((nk+mk)\log(nk)\big)$ time, which is equal to $O\big(k(n+m)\log(nk)\big)$.

Interestingly, the (asymptotic) running time of the two algorithms is identical unless the $k^2$ component dominates the running time of the first algorithm. This happens when $k=\omega(n\log n+m)$. However, in practice, the precomputation of the first algorithm often computes paths in $\mathcal{X}_{\text{risk}}$ that will not be used. Moreover, it requires testing in advance which edges are in $\mathcal{X}_{\text{risk}}$, which is an expensive operation in practice. This is demonstrated in Section 6.

## 6 Evaluation

In this section we visualize our cost function and demonstrate the behavior of our algorithm. All algorithms were implemented using the Open Motion Planning Library (OMPL 1.2.1) [28] running on a 4.0-GHz Intel Core i7 processor with 16 GB of memory. Source code is publicly available at https://github.com/personalrobotics/ompl_rasp .

For each experiment, we constructed a roadmap and precomputed for each vertex and each edge whether it is collision-free and whether it is in the risk zone. This allows us to compare the time that graph operations take for each of the algorithms.

Our first set of experiments is motivated by the early Viking sailing expeditions: For centuries, sailing was done primarily by coastal navigation, where the sea vessel stayed within sight of the coast. Gradually, the art of open-seas navigation was developed, relying on more uncertain factors such as visibility to the sun, moon and the stars. Thus, we model the sea and the land as the free and forbidden regions, respectively. Any point closer (further) than a predefined distance from the shore is modelled as the safe (risk) region, respectively.

Fig. ? depicts maps with different queries. For each query, we use an eight-connected grid as a roadmap. We then compute the minimal-cost path under Equation 2, the shortest (Euclidean) path and the minimal-risk path that minimizes time spent in $\mathcal{X}_{\text{risk}}$. As we can see, our cost function serves as a natural interpolation between the two opposing metrics.

We present running times in Table 1. Computing minimal-cost paths results in larger computation times when compared to computing shortest paths. For our incremental-based algorithm, this is roughly a 4- to 5-fold slowdown. For our precomputation-based algorithm, this is slower by a factor of several thousand. Not surprisingly, the lion's share of that algorithm's running time is dedicated to computing shortest paths between pairs of points on the boundary of $\mathcal{X}_{\text{risk}}$.

| Scenario | Dijkstra | Precomputation-based | Incremental-based |
|---|---|---|---|
| Vikings (Fig. ?) | | | |
| Vikings (Fig. ?) | | | |
| Assistive (Fig. ?) | | — | |

Our second scenario is motivated by assistive robotics. Consider a robot arm performing a task such as pouring juice from a bottle, while receiving inputs from a user such as when to stop pouring. During the motion, the robot’s end effector is moving between regions that are either visible or occluded to the user.

Specifically, in this motion-planning problem, configurations in $\mathcal{X}_{\text{risk}}$ are points occluded from the viewpoint of a user sitting in a wheelchair. We computed a Halton graph with 10,000 vertices (a typical-size roadmap in such motion-planning settings) and ran Dijkstra's algorithm as well as our path-finding algorithms. Results, depicted in Fig. ?, demonstrate how the shortest path traverses the risk zone for a long duration while minimal-cost paths enter it only for short periods of time.

Timing results, reported in Table 1, show that our incremental-based algorithm is actually faster than Dijkstra's algorithm. This is because in this scene there is a large risk region with high cost and a "narrow passage" that reduces risk exposure. Our cost function naturally guides the search toward this promising region. In contrast, Dijkstra's algorithm searches in cost-to-come space and expands more vertices. Our precomputation-based algorithm, on the other hand, terminated due to insufficient memory.

## 7 Conclusions and future work

Many interesting research questions arise from our problem formulation. The first relates to roadmap generation: Using [11], we can describe the necessary conditions for solutions obtained using PRM to converge to an optimal solution. Applying the same analysis to our setting is not straightforward. This is partially due to the fact that the proof used in [11] assumes that the source and target configurations are in the roadmap. Consider an optimal path $\gamma$ that passes in and out of risk zones. A naive attempt to use the aforementioned proof is to subdivide the path into sections that are fully contained within $\mathcal{X}_{\text{safe}}$ and fully contained within $\mathcal{X}_{\text{risk}}$ and argue that asymptotically, the roadmap will converge to each of these subpaths. However, the points where $\gamma$ moves from $\mathcal{X}_{\text{safe}}$ to $\mathcal{X}_{\text{risk}}$ (and vice versa) are *not* in the roadmap. We believe that under certain assumptions on the structure of $\mathcal{X}_{\text{risk}}$ this may be done, but many details should be carefully addressed.

Since our problem is single-shot, a possible approach to solve RAMP (and not dRAMP) is not to construct a roadmap but a tree, rooted at the initial configuration. Here, an RRT*-type algorithm [11] may be used in order to asymptotically converge to the optimal solution.

Another interesting question relates to the setting where $d=2$, i.e., when planning is restricted to the plane. Here, we are interested in understanding which complexity class our problem falls in. It is well known that shortest paths in the plane amid polygonal obstacles can be computed in $O(n\log n)$ time, where $n$ is the complexity of the obstacles (see [19] for a survey). When computing shortest paths amid polyhedral obstacles in $\mathbb{R}^3$, or in the plane when there are constraints on the curvature of the path, the problem becomes NP-Hard [2]. Furthermore, the Weighted Region Shortest Path Problem, which is closely related to our problem (Section 2), is unsolvable in the Algebraic Computation Model over the Rational Numbers [5]. If our problem is NP-Hard, as we conjecture, then a reduction, possibly along the lines of [2], should be provided together with an approximation algorithm. Here, a possible approach would be to sample the boundary of $\mathcal{X}_{\text{risk}}$, similar to [1].

Finally, our work assumed that the risk zone is full-dimensional, i.e., that its dimension equals that of the configuration space. However, we envision our cost function being used in situations where this assumption does not hold. Consider compliant motion planning or fine motion [16], where a robot reduces uncertainty by making and maintaining contact with the environment. Here, we would like to penalize the robot for *not* being in contact with an obstacle feature. Thus, the penalized region induces a manifold of lower dimensionality than the configuration space, which raises many interesting questions.

## Acknowledgements

The authors thank Chris Dellin and Shushman Choudhury for their contribution at the early stages of this work.

## Appendix

We visualize our incremental minimal-cost planning algorithm (Sec. 5.3) in Fig. ?. Each sub-figure shows the current state of the priority queue (entries with unbounded cost are not shown), ordered from low cost (top) to high cost (bottom), together with the current set of paths represented in the queue. Paths are encoded (solid, dashed, and dotted lines) according to the border point in their respective RASP entry. For simplicity, we use the terms paths and RASP entries interchangeably.

The algorithm starts with the queue containing a single entry for the start vertex (Fig. ?). The queue is then populated with two entries, representing the paths to the start vertex's neighbors (Fig. ?). The path of minimal cost is popped from the queue, and the path extending it is added (Fig. ?). Notice that this path, which traverses the risk region, is encoded using a dashed line to emphasize that the border point of the path's entry has changed.

Now, the minimal-cost path in the queue is popped and an alternative path reaching the same vertex is computed (Fig. ?). Notice that the queue now contains two paths reaching this vertex, but neither dominates the other. We pop the minimal-cost one (dashed line) and add to the queue the entry representing the path extending it (Fig. ?). Similarly, we pop the next-best path (dotted line) and add to the queue the entry representing the alternative extension (Fig. ?). Notice that the queue now contains two entries where the first dominates the second, which, in turn, may be discarded.
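The walkthrough above, where a vertex may simultaneously hold several mutually non-dominating entries, can be condensed into a short multi-label search sketch. This is an illustrative Python reimplementation under toy assumptions (a quadratic exposure penalty and per-edge risk flags); the names `search`, `ALPHA`, and the graph encoding are ours, not the paper's:

```python
import heapq

ALPHA = 1.0  # weight of the (quadratic) consecutive-exposure penalty

def search(graph, start, goal):
    """Generalized Dijkstra keeping several non-dominated labels per vertex.
    A label is (settled_cost, open_risk_run); label a dominates label b
    if a is no worse than b in both components."""
    # Priority = total cost if the open risk stretch were closed right now.
    queue = [(0.0, 0.0, 0.0, start)]  # (priority, cost, run, vertex)
    labels = {start: [(0.0, 0.0)]}
    while queue:
        prio, cost, run, v = heapq.heappop(queue)
        if v == goal:
            return prio  # first goal pop is optimal: priority never decreases
        for (u, length, risky) in graph.get(v, ()):
            if risky:   # extend the open risk stretch
                ncost, nrun = cost + length, run + length
            else:       # leaving risk: pay the super-linear penalty
                ncost, nrun = cost + length + ALPHA * run ** 2, 0.0
            bucket = labels.setdefault(u, [])
            if any(c <= ncost and r <= nrun for c, r in bucket):
                continue  # dominated by an existing label at u
            bucket[:] = [(c, r) for c, r in bucket
                         if not (ncost <= c and nrun <= r)]
            bucket.append((ncost, nrun))
            heapq.heappush(queue,
                           (ncost + ALPHA * nrun ** 2, ncost, nrun, u))
    return None

# Toy instance: a short risky shortcut vs. a longer safe detour.
graph = {
    "s": [("a", 1.0, False)],
    "a": [("g", 2.0, True), ("b", 2.5, False)],
    "b": [("g", 2.5, False)],
}
best = search(graph, "s", "g")  # the safe detour beats the risky shortcut
```

Plain Dijkstra would settle each vertex once and discard the second label, which is unsound here: a costlier label with a shorter open risk run may still lead to the overall optimum.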

### Footnotes

- Note that we use the term path both for curves in the configuration space and for sequences of edges in a graph. These definitions coincide as the graph is embedded in the configuration space.
- There is a slight inaccuracy here, as the risk zone is an open set and thus a path cannot be fully contained in it. However, this inaccuracy does not change our results and is intentionally used for ease of exposition.

### References

1. Agarwal, P. K.; Fox, K.; and Salzman, O. 2016. An efficient algorithm for computing high-quality paths amid polygonal obstacles.
2. Canny, J. F., and Reif, J. H. 1987. New lower bound techniques for robot motion planning problems.
3. Choset, H.; Lynch, K. M.; Hutchinson, S.; Kantor, G.; Burgard, W.; Kavraki, L. E.; and Thrun, S. 2005. *Principles of Robot Motion: Theory, Algorithms, and Implementation*.
4. Chow, Y.; Ghavamzadeh, M.; Janson, L.; and Pavone, M. 2015. Risk-constrained reinforcement learning with percentile risk criteria.
5. De Carufel, J.-L.; Grimm, C.; Maheshwari, A.; Owen, M.; and Smid, M. 2014. A note on the unsolvability of the weighted region shortest path problem.
6. Dijkstra, E. W. 1959. A note on two problems in connexion with graphs.
7. Ehrgott, M., and Gandibleux, X. 2000. A survey and annotated bibliography of multiobjective combinatorial optimization.
8. Halperin, D.; Salzman, O.; and Sharir, M. 2016. Algorithmic motion planning.
9. Hart, P. E.; Nilsson, N. J.; and Raphael, B. 1968. A formal basis for the heuristic determination of minimum cost paths.
10. Jaklin, N.; Tibboel, M.; and Geraerts, R. 2014. Computing high-quality paths in weighted regions.
11. Karaman, S., and Frazzoli, E. 2011. Sampling-based algorithms for optimal motion planning.
12. Kavraki, L. E.; Svestka, P.; Latombe, J.-C.; and Overmars, M. H. 1996. Probabilistic roadmaps for path planning in high dimensional configuration spaces.
13. Kirkpatrick, D. G.; Kostitsyna, I.; and Polishchuk, V. 2011. Hardness results for two-dimensional curvature-constrained motion planning.
14. LaValle, S. M. 2006. *Planning Algorithms*.
15. Liu, W., and Ang, M. H. 2014. Incremental sampling-based algorithm for risk-aware planning under motion uncertainty.
16. Lozano-Perez, T.; Mason, M. T.; and Taylor, R. H. 1984. Automatic synthesis of fine-motion strategies for robots.
17. Lozano-Pérez, T. 1983. Spatial planning: A configuration space approach.
18. Mitchell, J. S. B., and Papadimitriou, C. H. 1991. The weighted region problem: Finding shortest paths through a weighted planar subdivision.
19. Mitchell, J. S. B. 2016. Shortest paths and networks.
20. Müller, J., and Sukhatme, G. S. 2014. Risk-aware trajectory generation with application to safe quadrotor landing.
21. Ono, M.; Pavone, M.; Kuwata, Y.; and Balaram, J. 2015. Chance-constrained dynamic programming with application to risk-aware robotic space exploration.
22. Ono, M.; Williams, B. C.; and Blackmore, L. 2013. Probabilistic planning for continuous dynamic systems under bounded risk.
23. Pereira, A.; Binney, J.; Hollinger, G. A.; and Sukhatme, G. S. 2013. Risk-aware path planning for autonomous underwater vehicles using predictive ocean models.
24. Pohl, I. 1969. *Bi-directional and Heuristic Search in Path Problems*.
25. Reif, J. H. 1979. Complexity of the mover's problem and generalizations (extended abstract).
26. Reinhardt, L. B., and Pisinger, D. 2011. Multi-objective and multi-constrained non-additive shortest path problems.
27. Solovey, K.; Salzman, O.; and Halperin, D. 2016. New perspective on sampling-based motion planning via random geometric graphs.
28. Şucan, I. A.; Moll, M.; and Kavraki, L. E. 2012. The Open Motion Planning Library.
29. Sun, Z., and Reif, J. H. 2007. On robotic optimal path planning in polygonal regions with pseudo-Euclidean metrics.
30. Sun, W.; Torres, L. G.; van den Berg, J.; and Alterovitz, R. 2016. Safe motion planning for imprecise robotic manipulators by minimizing probability of collision.
31. Tsaggouris, G., and Zaroliagis, C. D. 2004. Non-additive shortest paths.
32. Tsaggouris, G., and Zaroliagis, C. D. 2006. Multiobjective optimization: Improved FPTAS for shortest paths and non-linear objectives with applications.
33. Wein, R.; van den Berg, J. P.; and Halperin, D. 2008. Planning high-quality paths and corridors amidst obstacles.