The FastMap Algorithm for Shortest Path Computations

Liron Cohen  Tansel Uras  Shiva Jahangiri  Aliyah Arunasalam  Sven Koenig  T. K. Satish Kumar
{lironcoh, turas, arunasal, skoenig}@usc.edu, shivaj@uci.edu, tkskwork@gmail.com
University of Southern California    University of California, Irvine
Abstract

We present a new preprocessing algorithm for embedding the nodes of a given edge-weighted undirected graph into a Euclidean space. In this space, the Euclidean distance between any two nodes approximates the length of the shortest path between them in the given graph. Later, at runtime, a shortest path between any two nodes can be computed using A* search with the Euclidean distances as heuristic estimates. Our preprocessing algorithm, dubbed FastMap, is inspired by the Data Mining algorithm of the same name and runs in near-linear time. Hence, FastMap is orders of magnitude faster than competing approaches that produce a Euclidean embedding using Semidefinite Programming. Our FastMap algorithm also produces admissible and consistent heuristics and therefore guarantees the generation of optimal paths. Moreover, FastMap works on general undirected graphs for which many traditional heuristics, such as the Manhattan Distance heuristic, are not always well defined. Empirically too, we demonstrate that the FastMap heuristic is competitive with other state-of-the-art heuristics like the Differential heuristic.

Introduction and Related Work

Shortest path problems commonly occur in the inner procedures of many AI programs. In video games, for example, a large fraction of CPU cycles is spent on shortest path computations [?]. Many other tasks in AI, including motion planning [?], temporal reasoning [?], and decision making [?], also involve finding and reasoning about shortest paths. While Dijkstra’s algorithm [?] can be used to compute shortest paths in polynomial time, faster computation has important implications for the time-efficiency of solving the aforementioned tasks. One way to boost shortest path computations is to use the A* search framework with an informed heuristic [?].

A perfect heuristic is one that returns the true shortest path distance between any two nodes in a given graph. A* with such a heuristic and proper tie-breaking is guaranteed to expand nodes only on an optimal path between the specified start and goal nodes. In general, computing the perfect heuristic value between two nodes is as hard as computing the shortest path between them. Hence, A* search can benefit from a perfect heuristic only if it is computed offline. However, precomputing all pairwise shortest path distances is not only time-intensive but also requires a prohibitive $O(|V|^2)$ memory, where $|V|$ is the number of nodes.

Many methods for preprocessing a given graph (without precomputing all pairwise shortest path distances) have been studied before and can be grouped into several categories. Hierarchical abstractions that yield suboptimal paths have been used to reduce the size of the search space by abstracting groups of vertices [??]. More informed heuristics [???] guide the search better, expanding fewer states. Hierarchies can also be used to derive heuristics during search [??]. Dead-end detection and other pruning methods [???] identify areas of the graph that do not need to be searched to find shortest paths. Search with contraction hierarchies [?] is an optimal and extremely hierarchical method, as every level of the hierarchy contains only a single node. It has been shown to be effective on road networks but seems to be less effective on graphs with higher branching factors, such as grid-based game maps [?]. Another approach is that of N-level graphs [?] constructed from undirected graphs by partitioning the nodes into levels. The hierarchy allows significant pruning during search.

A different approach that does not rely on preprocessing the graph makes use of some notion of a geometric distance between two nodes as a heuristic estimate of the shortest path distance between them. One such common heuristic used in gridworlds is the Manhattan Distance heuristic. (In a 4-connected 2D gridworld, for example, the Manhattan Distance between two cells $(x_1, y_1)$ and $(x_2, y_2)$ is $|x_1 - x_2| + |y_1 - y_2|$; similar generalizations exist for 3D and 8-connected gridworlds.) For many gridworlds, A* search with the Manhattan Distance heuristic outperforms Dijkstra’s algorithm. However, in complicated 2D/3D gridworlds like mazes, the Manhattan Distance heuristic may not be informed enough to efficiently guide A* search. Another issue associated with Manhattan Distance-like heuristics is that they are not well defined for general graphs. (Henceforth, whenever we refer to a graph, we mean an edge-weighted undirected graph unless stated otherwise.) For a graph that cannot be conceived in a geometric space, there is no closed-form formula for a “geometric” heuristic estimate of the distance between two nodes because there are no coordinates associated with them.
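For concreteness, a minimal sketch of such closed-form heuristics (our illustration, not code from the paper; cells are assumed to be (x, y) integer tuples, and the Octile Distance for 8-connected gridworlds is included since it appears as a baseline later):

    import math

    def manhattan(c1, c2):
        # 4-connected gridworld: |x1 - x2| + |y1 - y2|.
        (x1, y1), (x2, y2) = c1, c2
        return abs(x1 - x2) + abs(y1 - y2)

    def octile(c1, c2):
        # 8-connected gridworld: straight moves cost 1, diagonal moves sqrt(2).
        (x1, y1), (x2, y2) = c1, c2
        dx, dy = abs(x1 - x2), abs(y1 - y2)
        return max(dx, dy) + (math.sqrt(2) - 1) * min(dx, dy)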

For a graph that does not already have a geometric embedding in Euclidean space, a preprocessing algorithm can be used to generate one. As described before, at runtime, A* search would then use the Euclidean distance between any two nodes in this space as an estimate for the length of the shortest path between them in the given graph. One such approach is presented in [?]. This approach guarantees admissibility and consistency of the heuristic and therefore generates optimal paths. However, it requires solving a Semidefinite Program (SDP) in its preprocessing phase. SDPs can be solved in polynomial time [?]; and in this case, additional structure is leveraged to solve them in cubic time [?]. Still, a cubic preprocessing time limits the size of the graphs amenable to this approach.

The Differential heuristic is another state-of-the-art approach with the benefit of near-linear preprocessing time. However, unlike the approach in [?], it does not produce an explicit Euclidean embedding. In the preprocessing phase of the Differential heuristic approach, some nodes of the graph are chosen as pivot nodes, and the shortest path distances between each pivot node and every other node are precomputed and stored [?]. At runtime, the heuristic distance between two nodes, $u$ and $v$, is given by $|d(u, p) - d(p, v)|$, where $p$ is a pivot node and $d(\cdot, \cdot)$ is the precomputed shortest path distance. The preprocessing time is linear in the number of pivots times the size of the graph. The required space is linear in the number of pivots times the number of nodes, although a more succinct representation is presented in [?]. Similar preprocessing techniques are used in Portal-Based True Distance heuristics [?].
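As a sketch of the runtime computation (the `dist` layout, a dict mapping each pivot to its precomputed distance table, is our assumption; with multiple pivots, the maximum over pivots is the most informed admissible estimate):

    def differential_heuristic(u, v, dist):
        # dist maps each pivot p to a table dist[p][x] of precomputed
        # shortest path distances from p to every node x.
        # |d(u, p) - d(p, v)| lower-bounds d(u, v) for each pivot p;
        # taking the max over pivots keeps the estimate admissible.
        return max(abs(dist[p][u] - dist[p][v]) for p in dist)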

In this paper, we present a new preprocessing algorithm that produces an explicit Euclidean embedding while running in near-linear time. It therefore has the benefits of the Differential heuristic’s preprocessing time as well as that of producing an embedding from which heuristic estimates can be quickly computed using closed-form formulas. Our preprocessing algorithm, dubbed FastMap, is inspired by the Data Mining algorithm of the same name [?]. It is orders of magnitude faster than SDP-based approaches for producing Euclidean embeddings. FastMap also produces admissible and consistent heuristics and therefore guarantees the generation of optimal paths.

In comparison to other heuristics derived from closed-form formulas, like the Manhattan or the Octile Distance heuristics, the FastMap heuristic has several advantages. First, it is defined for general undirected graphs (even if they are not gridworlds). Second, we observe empirically that even in gridworlds, A* with the FastMap heuristic outperforms A* with the Manhattan or the Octile Distance heuristic. In comparison to the Differential heuristic with the same memory resources, the FastMap heuristic is competitive on some graphs and even outperforms it on others. This performance of FastMap is encouraging given that it also produces an explicit Euclidean embedding with other representational benefits, like recovering the underlying manifolds of the graph and/or visualizing it. Moreover, we observe that the FastMap and the Differential heuristics have complementary strengths and can easily be combined to generate a more informed heuristic.

The Origin of FastMap

The FastMap algorithm [?] was introduced in the Data Mining community for automatically generating geometric embeddings of abstract objects. For example, if we are given objects in the form of long DNA strings, multimedia datasets such as voice excerpts or images, or medical datasets such as ECGs or MRIs, there is no geometric space in which these objects can be naturally visualized. However, in many of these domains, there is still a well defined distance function between every pair of objects. For example, given two DNA strings, the edit distance between them (the minimum number of insertions, deletions or substitutions needed to transform one string into the other) is well defined, although an individual DNA string cannot be conceptualized in a geometric space.
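A minimal sketch of this edit distance (standard dynamic programming, included here only for illustration):

    def edit_distance(s, t):
        # Minimum number of insertions, deletions or substitutions
        # needed to transform s into t (row-by-row dynamic programming).
        prev = list(range(len(t) + 1))
        for i, cs in enumerate(s, 1):
            cur = [i]
            for j, ct in enumerate(t, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (cs != ct)))   # substitution
            prev = cur
        return prev[-1]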

Clustering techniques, such as the $k$-means algorithm, are well studied in Machine Learning [?]; but they cannot be applied directly to domains with abstract objects as described above. This is because these algorithms assume that the objects are described as points in a geometric space. FastMap revives the applicability of these clustering techniques by first creating an artificial Euclidean embedding for the abstract objects. The Euclidean embedding is such that the pairwise distances are approximately preserved. Such an embedding would also help in the visualization of the abstract objects. This visualization, for example, can aid physicians in identifying correlations between symptoms or other patterns from medical records.

We are given a complete undirected edge-weighted graph $G = (V, E)$. Each vertex $v_i \in V$ represents an abstract object $O_i$. Between any two vertices, $v_i$ and $v_j$, there is an edge with weight $D(O_i, O_j)$. Here, $D(O_i, O_j)$ is the given pairwise distance between the objects $O_i$ and $O_j$. A Euclidean embedding assigns to each object $O_i$ a $\kappa$-dimensional point $p_i \in \mathbb{R}^{\kappa}$. A good Euclidean embedding is one in which the Euclidean distance between any two points, $p_i$ and $p_j$, closely approximates $D(O_i, O_j)$.

One of the early approaches for generating such an embedding was based on the idea of multi-dimensional scaling (MDS) [?]. Here, the overall distortion of the pairwise distances is measured in terms of the “energy” stored in “springs” connecting each pair of objects. MDS, however, requires $O(N^2)$ time, where $N$ is the number of objects, and hence does not scale well in practice. On the other hand, FastMap [?] requires only linear time. Both methods embed the objects in a $\kappa$-dimensional space for a user-specified $\kappa$.

FastMap works as follows. In the very first iteration, it heuristically identifies the farthest pair of objects $O_a$ and $O_b$ in linear time. It does this by initially choosing a random object $O_b$ and then choosing $O_a$ to be the farthest object from $O_b$. It then reassigns $O_b$ to be the farthest object from $O_a$. Once $O_a$ and $O_b$ are determined, every other object $O_i$ defines a triangle with sides of lengths $d_{ab} = D(O_a, O_b)$, $d_{ai} = D(O_a, O_i)$ and $d_{ib} = D(O_i, O_b)$. Figure 1 shows this triangle. Since the sides of the triangle define its entire geometry, the length of the projection of $O_i$ onto the line $O_a O_b$ is $x_i = (d_{ai}^2 + d_{ab}^2 - d_{ib}^2)/(2 d_{ab})$. We set the first coordinate of $p_i$, the embedding of the object $O_i$, to be $x_i$. In particular, the first coordinate of $O_a$ is $x_a = 0$ and that of $O_b$ is $x_b = d_{ab}$. We note that computing the first coordinates of all objects takes only linear time, since the distance between any two objects $O_i$ and $O_j$, for $i, j \notin \{a, b\}$, is never computed.

Figure 1: The three sides of a triangle fully determine its geometry. In particular, $x_i = (d_{ai}^2 + d_{ab}^2 - d_{ib}^2)/(2 d_{ab})$.
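For completeness, the formula in the caption follows from dropping a perpendicular of height $h$ from $O_i$ onto the line $O_a O_b$ and applying the Pythagorean theorem twice (a standard derivation, reconstructed here):

\[
d_{ai}^2 = x_i^2 + h^2,
\qquad
d_{ib}^2 = (d_{ab} - x_i)^2 + h^2
\;\;\Longrightarrow\;\;
x_i = \frac{d_{ai}^2 + d_{ab}^2 - d_{ib}^2}{2\, d_{ab}},
\]

where subtracting the first equation from the second eliminates $h$.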

In the subsequent iterations, the same procedure is followed to compute the remaining coordinates of each object. However, the distance function is adapted for different iterations. For example, after the first iteration, $O_a$ and $O_b$ have their first coordinates equal to $0$ and $d_{ab}$, respectively. Because this fully explains the true distance between them, from the second iteration onwards, the rest of $O_a$’s and $O_b$’s coordinates should be identical. Intuitively, this means that the second iteration should mimic the first one on a hyperplane that is perpendicular to the line $O_a O_b$. Figure 2 explains this intuition. Although the hyperplane is never constructed explicitly, its conceptualization implies that the distance function for the second iteration should be changed in the following way: $D'(O'_i, O'_j)^2 = D(O_i, O_j)^2 - (x_i - x_j)^2$. Here, $O'_i$ and $O'_j$ are the projections of $O_i$ and $O_j$, respectively, onto this hyperplane, and $D'(\cdot, \cdot)$ is the new distance function.

Figure 2: Shows a geometric conceptualization of the recursive step in FastMap. In particular, $D'(O'_i, O'_j)^2 = D(O_i, O_j)^2 - (x_i - x_j)^2$.
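For reference, a compact sketch of the Data Mining FastMap just described (our rendering, assuming a caller-supplied distance oracle `D` and a list of hashable `objects`; names are illustrative):

    import random

    def fastmap_embed(objects, D, kappa):
        # Embeds `objects` into kappa dimensions so that Euclidean distances
        # approximately preserve the pairwise distances given by the oracle D.
        coords = {o: [] for o in objects}

        def dist(oi, oj):
            # Residual distance: subtract the squared coordinate differences
            # already accounted for in earlier iterations (Figure 2).
            d2 = D(oi, oj) ** 2 - sum((a - b) ** 2
                                      for a, b in zip(coords[oi], coords[oj]))
            return max(d2, 0.0) ** 0.5

        for _ in range(kappa):
            # Heuristically find a far-apart pair (Oa, Ob) with linear scans.
            ob = random.choice(objects)
            oa = max(objects, key=lambda o: dist(ob, o))
            ob = max(objects, key=lambda o: dist(oa, o))
            dab = dist(oa, ob)
            if dab == 0.0:
                break  # all residual distances are explained; stop early
            # Compute all coordinates first, then commit them, so that the
            # residual distances stay fixed within the iteration (Figure 1).
            new = {o: (dist(oa, o) ** 2 + dab ** 2 - dist(o, ob) ** 2)
                      / (2 * dab)
                   for o in objects}
            for o in objects:
                coords[o].append(new[o])
        return coords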

FastMap for Shortest Path Computations

In this section, we provide the high-level ideas for adapting the Data Mining FastMap algorithm to the context of shortest path computations. In the shortest path computation problem, we are given an edge-weighted undirected graph $G = (V, E)$ along with a start node $s$ and a goal node $g$. As a preprocessing technique, we can embed the vertices of $G$ in a Euclidean space. During A* search for a shortest path from $s$ to $g$, the Euclidean distance from any node $v$ to $g$ can then be used as a heuristic estimate of the shortest path distance from $v$ to $g$. The number of nodes expanded by the A* search depends on the informedness of the heuristic which, in turn, depends on the ability of the embedding to preserve pairwise distances.

The general idea is to view the nodes of $G$ as the objects to be embedded in Euclidean space. As such, the Data Mining FastMap algorithm cannot be used directly to generate an embedding in linear time. This is because it assumes that, given two objects $O_i$ and $O_j$, the distance $D(O_i, O_j)$ between them can be computed in constant time, i.e., independently of the number of objects. This assumption does not hold in our domain because computing the shortest path distance between two nodes depends on the size of the graph. Another problem that arises in this context is that the Euclidean distances may not satisfy important properties such as admissibility and/or consistency. Admissibility guarantees the generation of optimal paths by A*, while consistency additionally allows us to avoid re-expansions of nodes.

The first issue, that of retaining a (near-)linear time complexity, can be addressed as follows. In each iteration, after we identify the farthest pair of nodes $a$ and $b$ (which are nodes in $G$), the distances $d(a, v)$ and $d(b, v)$ need to be computed for all other nodes $v$. Computing $d(a, v)$ and $d(b, v)$ for any single $v$ can no longer be done in constant time but requires $O(|E| + |V| \log |V|)$ time instead [?]. However, since we need to compute these distances for all $v$, computing two shortest path trees, one rooted at $a$ and one rooted at $b$, yields all the necessary distances. The complexity of doing so is also $O(|E| + |V| \log |V|)$, which is only linear in the size of the graph (near-linear when $|E| = O(|V|)$, because of the $\log |V|$ factor). The amortized complexity for computing $d(a, v)$ and $d(b, v)$ for any single $v$ is therefore near-constant. This revives the applicability of the FastMap algorithm.
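Such a shortest path tree can be computed with Dijkstra's algorithm; a minimal sketch (our illustration; the adjacency-list layout `{node: [(neighbor, weight), ...]}` is an assumption, and a binary heap stands in for the Fibonacci heaps behind the bound cited above):

    import heapq

    def shortest_path_distances(graph, root):
        # Dijkstra's algorithm: shortest path distances from `root` to every
        # reachable node. A binary heap gives O(|E| log |V|) time; Fibonacci
        # heaps improve this to the O(|E| + |V| log |V|) cited in the text.
        dist = {root: 0.0}
        frontier = [(0.0, root)]
        while frontier:
            d, u = heapq.heappop(frontier)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, wt in graph[u]:
                nd = d + wt
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(frontier, (nd, v))
        return dist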

The second issue, that of generating admissible and consistent heuristics, is formally addressed in Theorem 1. The basic idea is to use $L_1$ distances instead of $L_2$ distances in each iteration of the FastMap algorithm. The mathematical properties of the $L_1$ distance function can then be used to prove that admissibility and consistency hold irrespective of the dimensionality $K$ of the embedding.
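Concretely, in the notation of Algorithm 1 and the proofs below, the $L_1$ adaptation sets the $K$th coordinate of node $v$ and defines the heuristic between nodes $u$ and $v$ as

\[
[p_v]_K = \frac{d_K(a, v) + d_K(a, b) - d_K(v, b)}{2},
\qquad
h_K(u, v) = \sum_{j=1}^{K} \bigl| [p_u]_j - [p_v]_j \bigr|.
\]

Each added dimension can only increase this sum, which is why informedness is monotone in $K$ (Theorem 2).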

Input: $G = (V, E)$, $K_{max}$, and $\epsilon$.
Output: $K$ and $p_v$ for all $v \in V$.
1:  $w_1 \leftarrow w$; $K \leftarrow 1$;
2:  while $K_{max} > 0$ do
3:      Let $G'$ denote $G$ with edge weights $w_K$;
4:      $(a, b) \leftarrow$ GetFarthestPair($G'$);
5:      Compute shortest path trees rooted at $a$ and $b$ on $G'$ to obtain $d_K(a, b)$, $d_K(a, v)$ and $d_K(v, b)$ for all $v \in V$;
6:      if $d_K(a, b) < \epsilon$ then
7:          Break;
8:      for each $v \in V$ do
9:          $[p_v]_K \leftarrow (d_K(a, v) + d_K(a, b) - d_K(v, b))/2$;   // $K$th coordinate of $v$
10:     for each edge $(u, v) \in E$ do
11:         $w_{K+1}(u, v) \leftarrow w_K(u, v) - |[p_u]_K - [p_v]_K|$;
12:     $K \leftarrow K + 1$; $K_{max} \leftarrow K_{max} - 1$;

Algorithm 1: Shows the FastMap algorithm. $G = (V, E)$ is the given edge-weighted undirected graph; $K_{max}$ is the user-specified upper bound on the dimensionality; $\epsilon$ is a user-specified threshold; $K \le K_{max}$ is the dimensionality of the computed embedding; $p_v$ is the embedding of node $v \in V$. Line 9 is equivalent to the formula for $x_i$ in Figure 1 with $L_1$ distances substituted for $L_2$ distances.

Algorithm 1 presents the FastMap algorithm adapted to the shortest path problem. The input to this algorithm is an edge-weighted undirected graph $G = (V, E)$ along with two user-specified parameters, $K_{max}$ and $\epsilon$. $K_{max}$ is the maximum number of dimensions allowed in the Euclidean embedding; it bounds the amount of memory needed to store the embedding of any node. $\epsilon$ is a threshold parameter that marks a point of diminishing returns, reached when the distance between the farthest pair of nodes becomes negligible. The output of this algorithm is an embedding $p_v$ for each node $v \in V$, where $p_v$ is a $K$-dimensional point with $K \le K_{max}$.

The algorithm maintains a working graph $G'$ initialized to $G$. The nodes and edges of $G'$ are identical to those of $G$, but the weights on the edges of $G'$ change with every iteration. In each iteration, the farthest pair of nodes, $a$ and $b$, in $G'$ is heuristically identified in near-linear time (line 4). The $K$th coordinate, $[p_v]_K$, of each node $v$ is computed using a formula similar to that for $x_i$ in Figure 1. However, that formula is modified to $[p_v]_K = (d_K(a, v) + d_K(a, b) - d_K(v, b))/2$ to ensure admissibility and consistency of the heuristic. In each iteration, the weight of each edge is decremented to resemble the update rule for $D'$ in Figure 2 (line 11). However, that update rule is again modified to use $L_1$ distances instead of $L_2$ distances, i.e., $w_{K+1}(u, v) = w_K(u, v) - |[p_u]_K - [p_v]_K|$. Theorem 1 shows that doing so ensures admissibility and consistency of the heuristic.

The method GetFarthestPair($G'$) (line 4) computes shortest path trees on $G'$ a small constant number of times. It therefore runs in near-linear time. In the first iteration, we assign $a$ to be a random node. A shortest path tree rooted at $a$ is computed to identify the farthest node from it, and $b$ is assigned to be this farthest node. In the next iteration, a shortest path tree rooted at $b$ is computed to identify the farthest node from it, and $a$ is reassigned to be this farthest node. Subsequent iterations follow the same switching rule for $a$ and $b$, and the final assignments of nodes to $a$ and $b$ are returned after a small constant number of such iterations. This entire process, starting from a randomly chosen node, can itself be repeated a small constant number of times.
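Putting the pieces together, a compact sketch of Algorithm 1 in Python (our rendering, not the authors' code; it reuses the `shortest_path_distances` routine sketched earlier, assumes a connected graph, and `n_sweeps` is an illustrative stand-in for the small constants mentioned above):

    import random

    def fastmap_graph(nodes, edges, K_max, eps, n_sweeps=4):
        # FastMap for graphs (sketch of Algorithm 1). Returns p[v], a list of
        # up to K_max coordinates per node; the L1 distance between p[u] and
        # p[v] underestimates the shortest path distance between u and v.
        w = {}
        for u, v, wt in edges:  # edges: list of (u, v, weight) triples
            w[(u, v)] = wt
            w[(v, u)] = wt
        p = {v: [] for v in nodes}

        def adjacency():
            g = {v: [] for v in nodes}
            for (u, v), wt in w.items():
                g[u].append((v, wt))
            return g

        for _ in range(K_max):
            g = adjacency()
            # GetFarthestPair (line 4): alternate shortest path trees.
            b = random.choice(nodes)
            for _ in range(n_sweeps):
                d_b = shortest_path_distances(g, b)
                a, b = b, max(d_b, key=d_b.get)
            d_a = shortest_path_distances(g, a)
            d_b = shortest_path_distances(g, b)
            d_ab = d_a[b]
            if d_ab < eps:
                break  # point of diminishing returns (line 6)
            for v in nodes:
                # Kth coordinate (line 9).
                p[v].append((d_a[v] + d_ab - d_b[v]) / 2.0)
            for u, v in w:
                # Edge weight update (line 11); the clamp only guards
                # against floating point drift (Lemma 1 gives w >= 0).
                w[(u, v)] = max(0.0, w[(u, v)] - abs(p[u][-1] - p[v][-1]))
        return p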

Figure 3 shows the working of our algorithm on a small gridworld example.

Figure 3: Illustrates the working of our algorithm. (a) is a 4-connected gridworld with obstacles in black. (b) is the graphical representation of (a) with the original unit weights on the edges. (c) shows the identified farthest pair of nodes, $a$ and $b$. (d) shows two numbers in each cell representing its distances from $a$ and $b$, respectively. (e) shows the first coordinate produced for each cell. (f) shows the new edge weights for the next iteration. (g), (h) and (i) correspond to (c), (d) and (e), respectively, in the second iteration. (j) shows the produced 2D embedding.
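To tie Figure 3 to the sketches above, a hypothetical miniature (a 3x3 grid with the center cell blocked; illustrative, not the exact map of the figure):

    # Free cells of a 3x3 gridworld with the center cell blocked.
    free = [(r, c) for r in range(3) for c in range(3) if (r, c) != (1, 1)]
    edges = [((r, c), (r + dr, c + dc), 1.0)
             for (r, c) in free
             for dr, dc in ((0, 1), (1, 0))
             if (r + dr, c + dc) in free]
    embedding = fastmap_graph(free, edges, K_max=2, eps=1e-9)

    def h(u, v):
        # FastMap heuristic: L1 distance between the two embeddings.
        return sum(abs(x - y) for x, y in zip(embedding[u], embedding[v]))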

Proof of Consistency

In this subsection, we prove the consistency of the FastMap heuristic. Since consistency implies admissibility, this also proves that A* with the FastMap heuristic returns optimal paths. We use the following notation in the proofs: $w_i(u, v)$ is the weight on the edge between nodes $u$ and $v$ in the $i$th iteration; $d_i(u, v)$ is the shortest path distance between nodes $u$ and $v$ in the $i$th iteration (using the weights $w_i$); $p_u$ is the vector of coordinates produced for node $u$ and $[p_u]_i$ is its $i$th coordinate (the $i$th iteration sets the value of $[p_u]_i$); $h_K(u, v)$ is the FastMap heuristic value between nodes $u$ and $v$ after $K$ iterations. Note that $h_K(u, v) = \sum_{j=1}^{K} |[p_u]_j - [p_v]_j|$. We also define $h_{i:K}(u, v) = \sum_{j=i}^{K} |[p_u]_j - [p_v]_j|$. In the following proofs, we use the fact that $[p_u]_i = (d_i(a_i, u) + d_i(a_i, b_i) - d_i(u, b_i))/2$ (line 9 of Algorithm 1) and $w_{i+1}(u, v) = w_i(u, v) - |[p_u]_i - [p_v]_i|$ (line 11 of Algorithm 1), where $a_i$ and $b_i$ denote the farthest pair of nodes identified in the $i$th iteration, and that $w_1 = w$ and $d_1 = d$.

Lemma 1.

For all $u, v \in V$ and all $i$: $|[p_u]_i - [p_v]_i| \le d_i(u, v)$.

Proof.

We prove by induction that in any iteration $i$, $w_i(u, v) \ge 0$ for all $(u, v) \in E$. This would mean that the weight of each edge in the $i$th iteration is non-negative and therefore, for all $u, v \in V$, $d_i(u, v)$ is well defined, non-negative, and satisfies the triangle inequality. For the base case, $w_1(u, v) = w(u, v) \ge 0$. We assume $w_i(u, v) \ge 0$ for all $(u, v) \in E$ and show that $w_{i+1}(u, v) \ge 0$. Let $a_i$ and $b_i$ be the farthest pair of nodes identified in the $i$th iteration. From lines 9 and 11, $w_{i+1}(u, v) = w_i(u, v) - |[p_u]_i - [p_v]_i|$ with $[p_u]_i = (d_i(a_i, u) + d_i(a_i, b_i) - d_i(u, b_i))/2$. To show that $w_{i+1}(u, v) \ge 0$ we show that $|[p_u]_i - [p_v]_i| \le d_i(u, v)$ (which, since the argument applies to any pair of nodes, also proves the lemma). From triangle inequality, for any node $t$, $d_i(t, u) - d_i(t, v) \le d_i(u, v)$. Therefore $2([p_u]_i - [p_v]_i) = (d_i(a_i, u) - d_i(a_i, v)) + (d_i(v, b_i) - d_i(u, b_i)) \le 2\,d_i(u, v)$. This means that $[p_u]_i - [p_v]_i \le d_i(u, v)$ and, by the symmetric argument, $[p_v]_i - [p_u]_i \le d_i(u, v)$. Therefore, $|[p_u]_i - [p_v]_i| \le d_i(u, v)$. This concludes the proof since, for an edge $(u, v) \in E$, $d_i(u, v) \le w_i(u, v)$ and hence $w_{i+1}(u, v) \ge 0$. ∎

Lemma 2.

For all $u, v \in V$ and all $i$: $d_{i+1}(u, v) \le d_i(u, v) - |[p_u]_i - [p_v]_i|$.

Proof.

Let $\pi$ be a shortest path from $u$ to $v$ in iteration $i$. By definition, $d_i(u, v) = \sum_{(x, y) \in \pi} w_i(x, y)$ and $d_{i+1}(u, v) \le \sum_{(x, y) \in \pi} w_{i+1}(x, y)$. From line 11, $w_{i+1}(x, y) = w_i(x, y) - |[p_x]_i - [p_y]_i|$. Therefore, $d_{i+1}(u, v) \le d_i(u, v) - \sum_{(x, y) \in \pi} |[p_x]_i - [p_y]_i|$. This concludes the proof since $\sum_{(x, y) \in \pi} |[p_x]_i - [p_y]_i| \ge |[p_u]_i - [p_v]_i|$ by the triangle inequality on absolute values. ∎

Lemma 3.

For all $u, v \in V$ and all $i$, $K$ with $1 \le i \le K$: $h_{i:K}(u, v) \le d_i(u, v)$.

Proof.

We prove by induction on $K - i$. The base case, $i = K$, is implied by Lemma 1, since $h_{K:K}(u, v) = |[p_u]_K - [p_v]_K| \le d_K(u, v)$. We assume $h_{i+1:K}(u, v) \le d_{i+1}(u, v)$ and show that $h_{i:K}(u, v) \le d_i(u, v)$. We know that $h_{i:K}(u, v) = |[p_u]_i - [p_v]_i| + h_{i+1:K}(u, v)$. Using the inductive assumption, we get $h_{i:K}(u, v) \le |[p_u]_i - [p_v]_i| + d_{i+1}(u, v)$. Lemma 2 shows that $d_{i+1}(u, v) \le d_i(u, v) - |[p_u]_i - [p_v]_i|$, which concludes the proof. ∎

Theorem 1.

The FastMap heuristic is consistent.

Proof.

From Lemma 3 (with $i = 1$), we know that $h_K(u, v) = h_{1:K}(u, v) \le d_1(u, v) \le w(u, v)$ for any edge $(u, v) \in E$. From the triangle inequality on absolute values, we have $h_K(u, g) \le h_K(u, v) + h_K(v, g)$. Put together, we have $h_K(u, g) \le w(u, v) + h_K(v, g)$ for any node $g \in V$ and any edge $(u, v) \in E$, which, together with $h_K(g, g) = 0$, is exactly the consistency condition. ∎

Theorem 2.

The informedness of the FastMap heuristic increases monotonically with the number of dimensions.

Proof.

This follows from the fact that, for any two nodes $u$ and $v$, $h_{K+1}(u, v) = h_K(u, v) + |[p_u]_{K+1} - [p_v]_{K+1}| \ge h_K(u, v)$. ∎

Experimental Results

We set up experiments on many benchmark maps from [?]. Figure 4 presents representative results. The FastMap heuristic (FM) and the Differential heuristic (DH) are compared with equal memory resources (the dimensionality of the Euclidean embedding for FM matches the number of pivots in DH). In addition, we include the Octile heuristic (OCT) as a baseline heuristic that also uses a closed-form formula for heuristic computations.

We observe that as the number of dimensions increases, (a) FM and DH perform better than OCT; (b) in accordance with Theorem 2, the median number of FM’s expansions decreases; and (c) FM’s MADs (Median Absolute Deviations) decrease. When FM’s MADs are high, the variabilities can possibly be exploited in future work using Rapid Randomized Restart strategies.

FastMap also gives us a framework for identifying a point of diminishing returns with increasing dimensionality. This happens when the distance between the farthest pair of nodes stops being “significant”. For example, such a point is observed in Figure 4(f), as the farthest pair distance, computed on line 5 of Algorithm 1, diminishes with each added dimension.

In mazes, such as in Figure 4(g), DH outperforms FM. This leads us to believe that FM provides good heuristic guidance in domains that can be approximated with a low-dimensional manifold. This observation also motivates us to create a hybrid FM+DH heuristic by taking the max of the two (see the sketch after Table 1). Some relevant results are shown in Table 1, where all heuristics are given equal memory resources. We observe that FM(5)+DH(5) always performs second best compared to FM(10) and DH(10). On the one hand, this decreases the percentage of instances on which it expands the least number of nodes. But, on the other hand, its performance is never far from that of the best technique in each breakdown.

Map ‘lak503d’:        FM-WINS 570      DH-WINS 329      FM+DH-WINS 101
                      Med     MAD      Med     MAD      Med      MAD
FM(10)                261     112      465     319      2222     1111
DH(10)                358     215      278     156      885      370
FM(5)+DH(5)           303     160      323     170      610      264

Map ‘brc300d’:        FM-WINS 846      DH-WINS 147      FM+DH-WINS 7
                      Med     MAD      Med     MAD      Med      MAD
FM(10)                205     105      285     149      894      472
DH(10)                217     119      200     129      277      75
FM(5)+DH(5)           206     105      267     135      249      73

Map ‘maze512-32-0’:   FM-WINS 382      DH-WINS 507      FM+DH-WINS 111
                      Med     MAD      Med     MAD      Med      MAD
FM(10)                1649    747      11440   9861     33734    13748
DH(10)                3107    2569     2859    2194     8156     4431
FM(5)+DH(5)           2685    2091     3896    2992     7439     4247
Table 1: Shows the median and MAD numbers of A* node expansions for different maps using three different heuristics with equal memory resources on random instances. FM(10) denotes the FastMap heuristic with 10 dimensions, DH(10) denotes the Differential heuristic with 10 pivots, and FM(5)+DH(5) is a combined heuristic which takes the max of a 5-dimensional FastMap heuristic estimate and a 5-pivot Differential heuristic estimate. The results are split into bins according to the winner on each instance (along with their numbers of wins).
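The FM(5)+DH(5) combination itself is a one-line query-time operation; a sketch (assuming `h_fm` and `h_dh` are the two heuristic functions):

    def fm_plus_dh(u, v, h_fm, h_dh):
        # The max of two consistent heuristics is itself consistent,
        # so the optimality guarantees are preserved.
        return max(h_fm(u, v), h_dh(u, v))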
Figure 4: Shows empirical results on maps from Bioware’s Dragon Age: Origins: (a) is ‘lak503d’; (d) is ‘brc300d’; and (g) is ‘maze512-32-0’. In (b), the x-axis shows the number of dimensions for FastMap (or the number of pivots for the Differential heuristic) used in the preprocessing phase. The y-axis shows the number of instances (each with randomly chosen start and goal nodes) on which each technique expanded the least number of nodes. (c) shows the median number of expanded nodes across all instances; vertical errorbars indicate the Median Absolute Deviations (MADs). The figures in the second and third rows follow the same order. In the legends, ‘FM’ denotes FastMap, ‘DH’ denotes the Differential heuristic and ‘OCT’ denotes the Octile heuristic.

Conclusions

In this paper, we presented a near-linear time preprocessing algorithm, dubbed FastMap, for producing a Euclidean embedding of a general edge-weighted undirected graph. At runtime, these Euclidean distances were used as heuristic estimates by A* for shortest path computations. We proved that the FastMap heuristic is admissible and consistent, thereby generating optimal paths. FastMap is significantly faster than competing approaches for producing Euclidean embeddings with optimality guarantees. We also showed that it is competitive with other state-of-the-art heuristics derived in near-linear preprocessing time. However, our method has the combined benefits of requiring only near-linear preprocessing time as well as producing explicit Euclidean embeddings that try to recover the underlying manifolds of the given graphs.

References

  • [Alpaydin, 2010] Ethem Alpaydin. Introduction to Machine Learning. The MIT Press, 2nd edition, 2010.
  • [Björnsson and Halldórsson, 2006] Yngvi Björnsson and Kári Halldórsson. Improved heuristics for optimal path-finding on game maps. In Proceedings of the Sixth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, pages 9–14, 2006.
  • [Botea et al., 2004] Adi Botea, Martin Müller, and Jonathan Schaeffer. Near optimal hierarchical path-finding. Journal of Game Development, 1:7–28, 2004.
  • [Cazenave, 2006] T. Cazenave. Optimizations of data structures, heuristics and algorithms for path-finding on maps. In Proceedings of the 2006 IEEE Symposium on Computational Intelligence and Games, pages 27–33, 2006.
  • [Dechter, 2003] Rina Dechter. Constraint processing. The Morgan Kaufmann Series in Artificial Intelligence. Elsevier, 2003.
  • [Dijkstra, 1959] Edsger W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1(1):269–271, 1959.
  • [Faloutsos and Lin, 1995] Christos Faloutsos and King-Ip Lin. Fastmap: A fast algorithm for indexing, data-mining and visualization of traditional and multimedia datasets. In Proceedings of the 1995 ACM SIGMOD International Conference on Management of Data, pages 163–174, 1995.
  • [Geisberger et al., 2008] R. Geisberger, P. Sanders, D. Schultes, and D. Delling. Contraction hierarchies: Faster and simpler hierarchical routing in road networks. In Proceedings of the 7th International Conference on Experimental Algorithms, 2008.
  • [Goldenberg et al., 2010] M. Goldenberg, A. Felner, N. Sturtevant, and J. Schaeffer. Portal-based true-distance heuristics for path finding. In Proceedings of the Third Annual Symposium on Combinatorial Search, 2010.
  • [Goldenberg et al., 2011] Meir Goldenberg, Nathan Sturtevant, Ariel Felner, and Jonathan Schaeffer. The compressed differential heuristic. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, pages 24–29, 2011.
  • [Hart et al., 1968] Peter E. Hart, Nils J. Nilsson, and Bertram Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems, Science, and Cybernetics, SSC-4(2):100–107, 1968.
  • [Holte et al., 1994] R. C. Holte, C. Drummond, M. B. Perez, R. M. Zimmer, and A. J. Macdonald. Searching with abstractions: A unifying framework and new high-performance algorithm. In Proceedings of the 10th Canadian Conference on Artificial Intelligence, 1994.
  • [LaValle, 2006] Steven LaValle. Planning Algorithms. Cambridge University Press, New York, NY, USA, 2006.
  • [Leighton et al., 2008] M.C. Leighton, W. Ruml, and R. C. Holte. Faster optimal and suboptimal hierarchical search. In Proceedings of the First International Symposium on Combinatorial Search, 2008.
  • [Pochter et al., 2010] N. Pochter, A. Zohar, J. Rosenschein, and A. Felner. Search space reduction using swamp hierarchies. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, pages 155–160, 2010.
  • [Rayner et al., 2011] Chris Rayner, Michael Bowling, and Nathan Sturtevant. Euclidean heuristic optimization. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, pages 81–86, 2011.
  • [Russell and Norvig, 2009] Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 2009.
  • [Storandt, 2013] S. Storandt. Contraction hierarchies on grid graphs. In Proceedings of the 36th Annual German Conference on Artificial Intelligence (KI), 2013.
  • [Sturtevant and Buro, 2005] Nathan Sturtevant and Michael Buro. Partial pathfinding using map abstraction and refinement. In Proceedings of the Twentieth AAAI Conference on Artificial Intelligence, pages 1392–1397, 2005.
  • [Sturtevant et al., 2009] N. Sturtevant, Ariel Felner, Max Barrer, Jonathan Schaeffer, and Neil Burch. Memory-based heuristics for explicit state spaces. In Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence, pages 609–614, 2009.
  • [Sturtevant, 2012] Nathan Sturtevant. Benchmarks for grid-based pathfinding. IEEE Transactions on Computational Intelligence and AI in Games, 4(2):144–148, 2012.
  • [Tarjan and Fredman, 1984] M.L. Fredman and R.E. Tarjan. Fibonacci heaps and their uses in improved network optimization algorithms. In Proceedings of the 25th Annual Symposium on Foundations of Computer Science, 1984.
  • [Torgerson, 1952] Warren S. Torgerson. Multidimensional scaling: I. theory and method. Psychometrika, 17(4):401–419, 1952.
  • [Uras and Koenig, 2014] Tansel Uras and Sven Koenig. Identifying hierarchies for fast optimal search. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pages 878–884, 2014.
  • [Uras and Koenig, 2015] Tansel Uras and Sven Koenig. Subgoal graphs for fast optimal pathfinding. In Steve Rabin, editor, Game AI Pro 2: Collected Wisdom of Game AI Professionals, chapter 15, pages 145–160. A K Peters/CRC Press, 2015.
  • [Vandenberghe and Boyd, 1996] Lieven Vandenberghe and Stephen Boyd. Semidefinite programming. SIAM REVIEW, 38:49–95, 1996.