Path Planning in Dynamic Environments using Generative RNNs and Monte Carlo Tree Search

Abstract

State-of-the-art methods for robotic path planning in dynamic environments, such as crowds or traffic, rely on hand-crafted motion models for agents. These models often do not reflect the interactions of agents in real-world scenarios. To overcome this limitation, this paper proposes an integrated path planning framework using generative Recurrent Neural Networks within a Monte Carlo Tree Search (MCTS). This approach uses a learnt model of social response to predict crowd dynamics during planning across the robot's action space, extending our recent work using generative RNNs to learn the relationship between planned robotic actions and the likely response of a crowd. We show that the proposed framework can considerably improve motion prediction accuracy during interactions, allowing more effective path planning. The performance of our method is compared in simulation with existing methods for collision avoidance in a crowd of pedestrians, demonstrating the ability to control the future states of nearby individuals. We also conduct preliminary real-world tests to validate the effectiveness of our method.

I Introduction

Autonomous navigation among moving agents such as humans or self-driving vehicles is becoming an increasingly important problem as we see more applications of robotics in real-world environments. Predictive models of motion are required for planning around moving agents. It is crucial that these models reflect real-world behaviours and capture how agents respond to the actions of a robot. This is needed to accurately understand how a robot influences its environment, a critical requirement for safety, robustness, and efficiency in applications in dynamic environments, such as crowds of pedestrians and herds of livestock.

In this paper we propose an integrated path planning framework using generative Recurrent Neural Networks (RNNs) and Monte Carlo Tree Search (MCTS). This builds upon recent work in motion prediction of crowds [1, 25]. We use RNNs in an encoder-decoder architecture, where the input at each timestep is the current position of each agent and the future position of the robot, which is known during training and sampled during planning.

This approach allows the encoding of observed sequences of agent positions and robot actions, which can be used by the decoder stage to generate likely responses of all agents to a robot’s action. The generative model is used within the MCTS to simulate state transitions for sampled actions during a tree search of the robot’s action space.

Fig. 1: Real world testing of our MCTS-RNN dynamic path planner around moving pedestrians.

We validate the proposed approach on three varied datasets, including interactions between pedestrians and a vehicle [26], livestock and a mobile robot [23], and simulated interactions of pedestrians generated using the optimal reciprocal collision avoidance (ORCA) model of motion [24].

The performance of our path planning method is compared to existing approaches including a reactive potential field and deep reinforcement learning. The results demonstrate that not only does our planning algorithm perform comparably to state of the art methods for collision avoidance, but more importantly, it is able to direct the future states of nearby individuals using a motion model learnt from real world data, allowing application to tasks such as planning paths that maneuver nearby individuals, or herding of livestock towards a goal. We have conducted preliminary tests of our approach on a real robotic platform around moving pedestrians.

II Related Work

II-A Dynamic Path Planning

There is extensive literature on path planning and motion prediction in dynamic environments, with a number of surveys available [20, 15, 21]. We briefly discuss the most relevant works here.

A critical minimum requirement of dynamic path planning is avoiding collisions between the robot and its surroundings, and the problem of collision avoidance has received much attention in recent years [12, 4, 11]. However, it is widely acknowledged that many path planning methods suffer from the “freezing robot” problem: as the surrounding environment becomes increasingly complex, the robot loses the ability to find a path and either stops or acts erratically to avoid collisions [22].

To overcome this issue, traditional planning methods have been extended to account for interactions between agents and to predict their future motion during robot navigation, incorporating the hand-crafted Social Force Model (SFM) [13]. Cooperation and interactions between agents have also been characterised as interacting Gaussian processes [22]. Additionally, the velocity obstacle model of motion in [11] has been generalised to ORCA [24] for multiple agents. This approach has also been combined with Bayesian inference to adaptively learn parameters for individually observed agents in BRVO [16].

Reinforcement learning (RL) methods have also been applied to collision avoidance in dynamic environments. These approaches learn evaluation functions that can be applied to any observed state, and navigation policies that aim to maximise the expected value along a path. RL methods are often initialised using imitation learning in a supervised approach. For example, [6] uses state-action-value pairs taken from training episodes generated by ORCA, whilst [10] uses pairs generated by their previous RL policy CADRL [8], which was in turn initialised using ORCA. A challenge in adopting RL for applications such as path planning in dynamic environments is that the state transitions are unknown. To alleviate the above issue, many approaches assume that the state will evolve with constant velocity over short time periods  [7], or that all agents follow the ORCA model [6]. There has also been recent interest in using actor-critic algorithms to learn the policy and value functions simultaneously, applying a combination of the current learnt policy and other simple motion models to all agents within the scene [10].

Nonetheless, most of these RL methods still make use of simple parametric motion models in training, rather than models learnt from real-world data, such as those captured by state-of-the-art trajectory prediction models [20].

Recent developments in RNN-based trajectory prediction methods [1, 25] allow for improved prediction in crowded environments, outperforming parametric methods such as the SFM [21]. Inspired by these methods, in this paper we demonstrate how a predictive model can be adapted for use in path planning, using an encoder-decoder RNN architecture with MCTS.

II-B MCTS for Solving Sequential Decision Making Problems

Path planning in dynamic environments can be formulated as a sequential decision making problem. By including all relevant agent dynamics in the current state, it can be framed as a Markov Decision Process (MDP). This can be achieved by using a trajectory prediction model to encode the observed sequence in the hidden state of an RNN, which we detail further in Section III.

MCTS allows MDPs to be solved when the state transition function is unknown but future states can be simulated. This is achieved through structured random sampling of the action space in a decision tree [5]. This approach has previously been applied to robotic planning in tasks such as high-level action selection for autonomous driving [19], and motion planning for active perception in unknown environments [2, 3].

The search tree consists of nodes representing states and edges representing actions. Upper Confidence Bounds applied to Trees (UCT) [17] is often used in MCTS to balance exploration and exploitation during node selection. In UCT, the value of a node is determined as:

$$\mathrm{UCT}_i = \frac{w_i}{n_i} + c\sqrt{\frac{\ln N_i}{n_i}} \qquad (1)$$

where, after the $i$-th move: $w_i$ is the cumulative reward of the node considered; $n_i$ and $N_i$ are the number of times the child and parent nodes have been traversed; and $c$ is an exploration-exploitation balance parameter. MCTS allows for anytime optimisation and propagation of state uncertainty, which we utilise in this work for path planning, detailed further in Section III.
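As an illustration, a minimal Python sketch of this UCT score might look as follows (the function and argument names are ours, for illustration only):

```python
import math

def uct_value(cumulative_reward, child_visits, parent_visits, c=1.0):
    """UCT score of a child node (Eq. 1): mean reward plus an exploration bonus."""
    if child_visits == 0:
        return float("inf")  # unvisited children are always explored first
    exploitation = cumulative_reward / child_visits
    exploration = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploitation + exploration
```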

Fig. 2: System overview illustrating the use of a learnt model of social response within a tree search based planner. After training (blue), the Encoder’s final hidden state for a given observed sequence is used alongside the latest observation as the root state of the planner (red). The Decoder can then be used in a single step to simulate state transitions of the MDP for a given action A and node state S.

III Approach

III-A Overview

Given the observed trajectories X of all non-controlled agents, and the robot path R, for all timesteps in the observed period $t \in [1, t_{obs}]$, we want to plan a path that optimises an objective function based on the state of both the robot and all other agents across a future time period $t \in (t_{obs}, t_{pred}]$. The input trajectory for agent $i$ is defined as its position at each observed timestep, where $t_{obs}$ is the timestep of the latest observation and $t_{pred}$ the future timestep to which we predict.

We achieve this by first training a sequence prediction model on X and R, as well as the ground truth future positions of each agent Y and the known future positions of the robot for $t \in (t_{obs}, t_{pred}]$. We use this trained model in an adapted MCTS of the robot's action space across future timesteps, applied recursively within a receding horizon planner.

Fig. 2 illustrates the overall architecture of our approach, outlining the use of the predictive model to first encode the observed trajectories for $t \in [1, t_{obs}]$. The state of the root node of the MCTS is formed from the final encoded hidden state and the current observation at $t_{obs}$. During the creation of the search tree, the predictive model is again used at each expanded node to predict the next state, given the node's state-action pair, in the MCTS simulation step. Our integrated predictive planner is summarised in Alg. 1.

III-B Learnt Model of Social Response

The predictive model of response used in this work combines our prior work ResponseRNN [9] with the Recurrent Encoder-Decoder (RED) model we used as a baseline comparison in the same work. The RED model is based on the Seq2Seq model in [21], which does not pool interactions between agents. Models that do not consider interactions between all agents in a scene have been shown to perform comparably to those that do, often outperforming them in terms of prediction accuracy even in crowded scenes [26]. These simpler models also have inference times that are orders of magnitude lower, which is critically important when conducting multiple predictions per timestep in a tree search approach.

Our predictive model uses a robot's future action as an input, where the robot's position at timestep $t+n$ represents the action at time $t$, for a lookahead of $n$ timesteps. This future robot position is used alongside the current agent positions as input to the predictive model at each timestep, as shown in Fig. 2.

The use of a known future action as input has been shown to allow the model to learn the relationship between a robot's action and the likely next position of nearby agents [9]. Experiment 1, in Section IV-A, details how varying the lookahead $n$ impacts prediction.

This model, learnt from real world interactions, can then be used to simulate state transitions in an approach similar to the use of parametric models in training deep RL.

Model Structure: The Encoder and Decoder have the same structure, comprising a non-linear embedding layer that takes in the input at each timestep, followed by two LSTM [14] layers. The inputs of the Encoder are the agent positions and the corresponding robot actions for all observed timesteps. The current observation at $t_{obs}$ is used as the first input to the Decoder.

The Decoder takes the same size inputs as the Encoder; however, at all timesteps after the first Decoder input we feed the Decoder zeros in place of the agent positions. This is done for both training and inference. This zero-feed approach has been shown to improve performance at inference time, when there are no known ground truth agent positions [27], in comparison to approaches that feed a sample from the output of the prior step as input to the next step. The non-linear embedding layer uses Rectified Linear Unit (ReLU) activations and the same weights for both encoding and decoding steps. The outputs of the Decoder are passed through a linear layer that maps to a bivariate Gaussian output for each agent's position at each predicted timestep.
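As an illustrative sketch, the layers described above could be assembled in TensorFlow as follows (layer sizes are assumptions, not the values used in this work):

```python
import tensorflow as tf

def build_layers(embed_dim=64, lstm_dim=128):
    """Sketch of the shared non-linear embedding, the stacked LSTM layers of the
    Encoder and Decoder, and the linear output layer mapping to bivariate
    Gaussian parameters. Dimensions are for illustration only."""
    # Non-linear embedding shared between encoding and decoding steps
    embedding = tf.keras.layers.Dense(embed_dim, activation="relu")

    # Two LSTM layers each for the Encoder and the Decoder
    encoder_lstms = [tf.keras.layers.LSTM(lstm_dim, return_sequences=True,
                                          return_state=True) for _ in range(2)]
    decoder_lstms = [tf.keras.layers.LSTM(lstm_dim, return_sequences=True,
                                          return_state=True) for _ in range(2)]

    # Linear layer mapping Decoder outputs to (mu_x, mu_y, sigma_x, sigma_y, rho)
    gaussian_out = tf.keras.layers.Dense(5)

    return embedding, encoder_lstms, decoder_lstms, gaussian_out
```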

Training: We use variable length encoding sequences of between 8 and 20 timesteps. We decode for a fixed length of 8 timesteps, and compare the output of each Decoder step to the ground truth positions of each agent. The generative RNN is trained to minimise the loss shown in Eq. 2, the negative log-likelihood of Y given the predicted bivariate Gaussian parameters, across all prediction timesteps:

$$L = -\sum_{t=t_{obs}+1}^{t_{pred}} \sum_{i=1}^{N} \log P\!\left(y^i_t \mid \mu^i_t, \sigma^i_t, \rho^i_t\right) \qquad (2)$$

where $(\mu^i_t, \sigma^i_t, \rho^i_t)$ are the predicted bivariate Gaussian parameters for each agent $i \in \{1,\dots,N\}$, for all timesteps $t$ in the prediction period $(t_{obs}, t_{pred}]$.
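A minimal sketch of this loss, assuming the standard bivariate Gaussian log-likelihood and tensor shapes of [..., 2] for positions:

```python
import math
import tensorflow as tf

def trajectory_nll(y_true, mu, sigma, rho):
    """Negative log-likelihood of ground-truth positions under the predicted
    bivariate Gaussians (Eq. 2), summed over agents and prediction timesteps.
    y_true, mu, sigma: [..., 2]; rho: [...]."""
    dx = (y_true[..., 0] - mu[..., 0]) / sigma[..., 0]
    dy = (y_true[..., 1] - mu[..., 1]) / sigma[..., 1]
    one_minus_rho2 = 1.0 - tf.square(rho)
    z = tf.square(dx) + tf.square(dy) - 2.0 * rho * dx * dy
    log_norm = tf.math.log(2.0 * math.pi * sigma[..., 0] * sigma[..., 1]
                           * tf.sqrt(one_minus_rho2))
    log_likelihood = -z / (2.0 * one_minus_rho2) - log_norm
    return -tf.reduce_sum(log_likelihood)
```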

Inference: Encoding is again performed on inputs across the observed period $[1, t_{obs}]$. The final hidden state of the Encoder and the last observed agent positions at time $t_{obs}$ together form the state of the root node of the tree search. The Decoder is then used at each simulation step within the tree search to simulate state transitions, as detailed in the following section.
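For clarity, the state carried by each search-tree node can be sketched as follows (a hypothetical structure; the field names are ours):

```python
class Node:
    """Search-tree node holding the MDP state (decoder hidden state plus the
    last observed or predicted agent positions) and MCTS bookkeeping fields."""
    def __init__(self, hidden_state, agent_positions, parent=None, action=None):
        self.hidden_state = hidden_state        # RNN hidden state h
        self.agent_positions = agent_positions  # last observed or predicted positions
        self.parent = parent
        self.action = action                    # robot action that led to this node
        self.children = []
        self.visits = 0
        self.cumulative_reward = 0.0

# Root of the search: the Encoder's final hidden state and the latest observation.
# `h_obs` and `x_obs` are placeholders for the encoder output and observation.
# root = Node(hidden_state=h_obs, agent_positions=x_obs)
```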

III-C Tree Search Planner

MCTS is applied using UCT and adapted for parallel single step simulation (SSS). Alg. 1 details the steps involved in our adapted MCTS planner. We select the K best nodes, according to the node value determined by Eq. 1. We then expand each node, randomly choosing an action from its set of valid moves A. We next perform SSS in parallel for all chosen nodes, using the Decoder, which returns each expanded node's predicted state given its parent's state and the chosen action. The cost of each node is then calculated according to the state evaluation function and propagated up through the tree. This process is repeated until a time budget is exhausted, returning the best action from the root node.

1:  A ← Actions                          ▷ discretised action space
2:  B ← Budget                           ▷ time budget in nsecs
3:  C ← CostFunction()                   ▷ state evaluation function
4:  function MCTS-SSS(root, A, B, C)
5:      while time < B do                ▷ planning budget
6:          K = Select(root)             ▷ select K best nodes
7:          a = Expand(K, A)             ▷ choose valid actions
8:                                       ▷ parallel single step simulation
9:          if first iteration then
10:             S′, h′ = RNN-Decoder(x, a, h)
11:         else
12:             S′, h′ = RNN-Decoder(0, a, h)
13:         end if
14:         U = Uncertainty(S′)          ▷ prediction uncertainty
15:         r = C(S′, U)                 ▷ reward dependent on U
16:         Backup(K, r)                 ▷ update node values
17:     end while; return root
18: end function
19: while not at destination do
20:     X, R = observe()
21:                                      ▷ encode observed tracks
22:     h = RNN-Encoder(X, R)
23:     root = Node(h, x)                ▷ create root node
24:                                      ▷ perform MCTS with SSS
25:     Tree = MCTS-SSS(root, A, B, C)
26:     P = Tree.best_plan()             ▷ yield best current path
27: end while
Algorithm 1: Predictive Planner

Parallel Single Step Simulation: We adapt the simulation stage of MCTS to terminate after a single step. This differs from the standard implementation, in which an episode plays out until a terminal state is reached, selecting random actions at each iteration. This is made possible by designing state evaluation functions capable of directly evaluating the value of any observed state. Details of the state evaluation functions used are outlined in Section IV-B. This approach is similar to that used in game applications of MCTS [18], where it has been shown that limiting the steps of the simulation stage leads to improved performance when a node's value can be directly estimated.

This also allows parallelisation of the simulation stage, as all simulations now run for the same number of iterations and use the same Decoder model. We alter the selection stage to find the K best nodes to expand in the tree. This is implemented by updating the number of traversals across each node between selections, before we simulate. This temporarily decreases the value of a node as determined by the UCT method in Eq. 1, and so decreases the likelihood of selecting another node from the same branch.

Simulation is then conducted across all nodes in parallel, where the selected actions and associated parent states are passed to the Decoder, returning the states of all expanded nodes, as shown in Alg. 1, line 10.
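A sketch of this batched simulation step, assuming a hypothetical `decoder_step` wrapper around the trained Decoder that accepts batched inputs (the wrapper and the `Node` fields are illustrative, not the exact implementation):

```python
import numpy as np

def parallel_single_step(decoder_step, nodes, actions, agent_input_dim):
    """Batch the selected actions and their parents' hidden states, then run a
    single Decoder step for all expanded nodes at once (Alg. 1, lines 10/12)."""
    hidden = np.stack([n.hidden_state for n in nodes])   # [K, hidden_dim]
    acts = np.stack(actions)                              # [K, action_dim]
    zeros = np.zeros((len(nodes), agent_input_dim))       # zero-feed agent inputs
    next_states, next_hidden = decoder_step(zeros, acts, hidden)
    return next_states, next_hidden
```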

Uncertainty: The use of a bivariate Gaussian distribution as the output of the predictive model provides a measure of uncertainty for each state estimation. We represent uncertainty using the square root of the determinant of the covariance matrix of each output. This provides a measure of the area of the ellipse defined by one standard deviation from the mean in all dimensions. We denote this value U, and use it in the state evaluation function to discount the reward computed for each simulated future state.
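A minimal sketch of this uncertainty measure for a single predicted position:

```python
import numpy as np

def prediction_uncertainty(sigma_x, sigma_y, rho):
    """U = sqrt(det(Sigma)) of the predicted bivariate Gaussian, proportional
    to the area of its one-standard-deviation ellipse."""
    cov = np.array([[sigma_x**2,              rho * sigma_x * sigma_y],
                    [rho * sigma_x * sigma_y, sigma_y**2]])
    return float(np.sqrt(np.linalg.det(cov)))
```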

IV Experiments and Results

We conduct three separate experiments:

  1. The first aims to validate the use of a future robotic action as an input to a predictive model.

  2. The second compares the performance of our planner to traditional and state of the art approaches.

  3. The third tests our path planner in a real-world environment around moving pedestrians.

IV-A Predictive Model Validation

We compare the average and final displacement errors (ADE and FDE) for our model trained with varying robot action lookaheads, $n \in \{0,1,2,3,4,5\}$. We also compare model accuracy when no robot action is included in the input.

Datasets: This experiment has been repeated using three datasets:

  • Vehicle-Crowd Interaction (VCI) DUT [26]

  • ‘A Robot Amongst the Herd’ (ARATH) [23] livestock-robot interactions

  • Generated ORCA Trajectories

The two real-world datasets have been chosen as they both contain interactions between a controlled vehicle and uncontrolled agents, and focus on agent-agent interactions rather than agent-space interactions. A description of the ARATH dataset and the method of data collection is detailed in [9]. Both datasets have been preprocessed to a frame rate of approximately 5 Hz, with a mean of 0 and a standard deviation of 1 for all input dimensions. The ORCA dataset has been included for use during planning experiments in the ORCA environment, and includes 10,000 randomly generated scenes of between 2 and 12 agents.
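As a sketch of the normalisation step of this preprocessing (resampling details omitted), standardising each input dimension to zero mean and unit standard deviation can be done per dataset as:

```python
import numpy as np

def standardise(tracks):
    """Standardise each input dimension to zero mean and unit std.
    `tracks` is assumed to have shape [num_points, num_dims]."""
    mean = tracks.mean(axis=0)
    std = tracks.std(axis=0)
    return (tracks - mean) / std, mean, std
```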

Fig. 3: Comparison of prediction accuracy for varying robot lookaheads $n$, and when no robot input is used ($n$ = None). ADE and FDE are shown for all agents (red), and limited to agents near the robot (blue, cyan and teal). A clear correlation between $n$ and accuracy can be seen for ORCA and VCI, though not for ARATH except when distance limited.

Implementation: Each dataset has been split into 5 non-overlapping sets, of which 1 has been left for testing. We have used a 20% validation split during training. The network is implemented in TensorFlow with the Adam optimiser, trained for 100 epochs on a single Titan X GPU, taking approximately 1 hour. Inference time per Decoder step is less than 0.1 ms.

Results: Fig. 3 compares the prediction accuracy of our model trained using varying action lookaheads $n$, as well as accuracy when no robot action is used. For both pedestrian datasets, VCI and ORCA, there is a clear correlation between increased accuracy and lookaheads $n > 0$. This trend is more apparent when only considering the accuracy of agents within interaction distance thresholds of the robotic vehicle, shown for 5 m, 2 m and 1 m thresholds. However, models trained on the ARATH livestock dataset do not display this same trend, only showing slight improvement of prediction when limited to distance thresholds of 2 m and 1 m.

This result confirms that using a planned robotic action as input to a predictive model of agent trajectories improves accuracy for close-range interactions, indicating that the model learns the relationship between a robot's planned action and the response of an agent. This validates the use of such a model to simulate state transitions in a dynamic path planner.

Whilst we include results for $n > 1$ to display trends, we only use $n = 1$ in our planner comparisons.

IV-B Planner Comparison

We compare the performance of the following methods:

  • MCTS-RNN (Ours)

  • LM-SARL (Local Map Self-Attention RL) [6]

  • Reactive Potential Field (PF)

  • MCTS-CV

MCTS-CV refers to our method with the predictive model replaced by a constant velocity motion model for each agent. We also compare the performance of our method using a second state evaluation function, referred to as SEF2 in Table I.

State Evaluation: We compare the following two state evaluation functions (SEF) for our planner:

(3)
(4)

where $p$ is the robot position, $g$ is the goal position, $p_i$ is the position of agent $i$, and $U_i$ is the prediction uncertainty for agent $i$. Both SEF1 (Eq. 3) and SEF2 (Eq. 4) apply a cost based on the distance of the robot to the goal, as well as on the robot's distance to each agent, which is scaled by the uncertainty of the agent's position and only considered when the robot is within a threshold distance of an agent. In SEF2, an additional term scales the cost based on the agent's acceleration when it is near the robot. This term aims to limit the impact of the robot on the current velocity of each agent, and is referred to as Disturbance in the results. SEF2 has been chosen to demonstrate the ability of our path planner to influence agents' future states without retraining the predictive model. For testing, we use a fixed threshold distance.
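To make the structure of these evaluation functions concrete, the following sketch mirrors the terms described above; the exact functional forms, weights and uncertainty scaling used in our experiments are not reproduced here, and all values are illustrative:

```python
import numpy as np

def sef1(robot_pos, goal_pos, agent_positions, uncertainties,
         d_thresh=2.0, w_goal=1.0, w_agent=1.0):
    """Illustrative SEF1-style cost: distance to goal plus an uncertainty-scaled
    penalty for each agent closer than d_thresh. Weights are assumptions."""
    cost = w_goal * np.linalg.norm(robot_pos - goal_pos)
    for p_i, u_i in zip(agent_positions, uncertainties):
        d = np.linalg.norm(robot_pos - p_i)
        if d < d_thresh:
            cost += w_agent * (1.0 + u_i) / max(d, 1e-6)
    return -cost  # higher value corresponds to a better state

def sef2(robot_pos, goal_pos, agent_positions, agent_accels, uncertainties,
         d_thresh=2.0, w_goal=1.0, w_agent=1.0, w_dist=1.0):
    """Illustrative SEF2-style cost: SEF1 plus a disturbance term penalising
    agent acceleration near the robot."""
    cost = -sef1(robot_pos, goal_pos, agent_positions, uncertainties,
                 d_thresh, w_goal, w_agent)
    for p_i, a_i in zip(agent_positions, agent_accels):
        if np.linalg.norm(robot_pos - p_i) < d_thresh:
            cost += w_dist * np.linalg.norm(a_i)
    return -cost
```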

Fig. 4: Planner comparisons in two test cases. In the top scenario, the robot attempts to pass from right to left through an oncoming group travelling left to right. In the bottom scenario, the robot and 10 other agents all attempt to cross to the other side of a 15 m wide circle. Our method (MCTS-RNN) demonstrates the ability to plan a path through the oncoming group that takes into consideration their likely response. The Potential Field method (d) displays the ‘frozen-robot’ problem when encountering the group. All non-controlled agents are simulated using the ORCA motion model [24] for both scenarios.

Planner comparisons have been performed in a simulated environment in which agents' motion is modelled using ORCA. LM-SARL has been trained as per [6]. Our planner is implemented with the following realistic assumptions. We use a time budget of 300 ms in the MCTS, based on our robot's observation frequency, and discretise the action space over acceleration and yaw rate based on the real constraints of our robotic vehicle, where acc ∈ [-0.05, -0.01, 0, 0.01, 0.05] m/sec/timestep and yaw-change ∈ [-20, -5, 0, 5, 20] deg/timestep. For all experiments we use 50 streams for parallel Single Step Simulation. The MCTS exploration constant c has been chosen experimentally.
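The discretised action space A used by the tree search can be enumerated directly from these values, for example:

```python
from itertools import product

# Acceleration (m/sec/timestep) and yaw change (deg/timestep) values from above.
ACCELERATIONS = [-0.05, -0.01, 0.0, 0.01, 0.05]
YAW_CHANGES = [-20, -5, 0, 5, 20]

# 25 discrete (acceleration, yaw change) actions available at each node.
ACTIONS = list(product(ACCELERATIONS, YAW_CHANGES))
```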

Results: We compare performance in terms of rate of success at reaching the goal, rate of collisions with agents, average path length, computation time, and disturbance of nearby agents. The quantitative results in Table I demonstrate that our planner is comparable to state-of-the-art methods for collision avoidance around dynamic agents.

Disturbance was measured as the rate at which a nearby agent's absolute acceleration exceeded thresholds of 1, 0.5 and 0.25. The use of SEF2 (Eq. 4) allowed our approach to alter its behaviour, planning a path that minimally disturbed nearby agents without retraining. As expected, the reactive PF performs poorly, failing 50% of the time.
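A sketch of how such a disturbance rate could be computed from logged agent velocities (array shapes and units here are assumptions):

```python
import numpy as np

def disturbance_rates(agent_velocities, dt, thresholds=(1.0, 0.5, 0.25)):
    """Fraction of timesteps at which an agent's absolute acceleration exceeds
    each threshold. `agent_velocities`: array of shape [timesteps, 2]."""
    accel = np.linalg.norm(np.diff(agent_velocities, axis=0), axis=1) / dt
    return {thr: float(np.mean(accel > thr)) for thr in thresholds}
```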

Qualitative results in Fig. 4 compare the behaviour of each planner in two test cases: driving towards an oncoming group of 3 pedestrians, and a circle crossing scenario with 10 pedestrians. Whilst substituting a constant velocity model for our predictive model performed surprisingly well, both cases make it clear that using a learnt predictive model allows for better planning with consideration of agent responses.

This figure also highlights that whilst the RL method was able to find shorter paths on average, its behaviour was not always as understandable as that of our MCTS-RNN method, occasionally displaying oscillatory movements as shown in the top case. The ‘frozen-robot’ problem is clearly displayed by the reactive PF planner in both cases, failing to find a path through either the oncoming group or the circle of agents.

Method        Success %   Collision %   Avg Len (m)   Avg Comp Time (s)   Disturbance (% Agent Acc exceeding threshold)
                                                                           1.0     0.5     0.25
Ours          98.0        0.0           20.08         0.3*                2.5     9.0     15.2
Ours (SEF2)   96.0        0.0           22.52         0.3*                1.8     5.8     11.0
MCTS-CV       93.0        2.6           20.48         0.3*                2.8     8.0     14.5
LM-SARL       98.0        0.0           19.26         0.20                2.2     9.5     15.0
PF            50.0        5.0           30.76         0.01                2.3     11.0    16.0

TABLE I: Planner comparisons from 500 ORCA simulations, with agent numbers ranging from 2 to 12. Disturbance represents the change of agent velocity caused by the robot's motion. (*Our methods use a time budget of 300 ms by design.)

IV-C Real-world Experiments

We have also tested our approach in real-world experiments on a robotic platform moving through a crowd of pedestrians, who were instructed to walk towards chosen goals in scenarios similar to those shown in Fig. 4. A video of these experiments is available at https://youtu.be/vBPKiqtCYRU. Some issues were experienced during this testing, including perception delays in pedestrian position estimation. This led to some planning errors and near collisions when both the robot and pedestrians were travelling at high speed. Due to time constraints we have not performed real-world comparisons of each planner included in this work, which will be required in future.

V Conclusion

We have demonstrated in this work that the inclusion of a planned robot action as an input to a predictive model can allow for more accurate prediction of agent trajectories in close-range interactions.

We have also shown how this model can be used within a dynamic path planning method to achieve results comparable to the state of the art. Our approach has the added benefit of being trained purely on observed data, suggesting that our model could achieve similar results on real-world data, an approach not possible with reinforcement learning, in which training requires interaction with a simulated environment.

Additionally, we have demonstrated that the planner's behaviour can be altered simply by changing the state evaluation function, without retraining the navigation policy, as reinforcement learning would require. This has been applied to minimising the disturbance of nearby agents, and could in future be extended to any goal that aims to direct the future state of agents.

Future work will focus on extensive real world testing of our approach in comparison to existing methods, as well as improving the predictive model to account for agent-agent interactions.

References

  1. A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei and S. Savarese (2016) Social LSTM: Human Trajectory Prediction in Crowded Spaces. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 961–971.
  2. A. Arora, P. M. Furlong, R. Fitch, T. Fong and S. Sukkarieh (2018) Online Multi-Modal Learning and Adaptive Informative Trajectory Planning for Autonomous Exploration. In Field and Service Robotics 5, pp. 239–254.
  3. G. Best, O. M. Cliff, T. Patten, R. R. Mettu and R. Fitch (2016) Decentralised Monte Carlo Tree Search for Active Perception. In The 12th International Workshop on the Algorithmic Foundations of Robotics (WAFR).
  4. L. Blackmore, M. Ono and B. C. Williams (2011) Chance-constrained optimal path planning with obstacles. IEEE Transactions on Robotics 27 (6), pp. 1080–1094.
  5. C. Browne, E. Powley, D. Whitehouse, S. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis and S. Colton (2012) A Survey of Monte Carlo Tree Search Methods. IEEE Transactions on Computational Intelligence and AI in Games 4 (1), pp. 1–43.
  6. C. Chen, Y. Liu, S. Kreiss and A. Alahi (2019) Crowd-Robot Interaction: Crowd-aware Robot Navigation with Attention-based Deep Reinforcement Learning. In IEEE International Conference on Robotics and Automation (ICRA), pp. 6015–6022.
  7. Y. F. Chen, M. Everett, M. Liu and J. P. How (2017) Socially Aware Motion Planning with Deep Reinforcement Learning. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1343–1350.
  8. Y. F. Chen, M. Liu, M. Everett and J. P. How (2017) Decentralized Non-communicating Multiagent Collision Avoidance with Deep Reinforcement Learning. In IEEE International Conference on Robotics and Automation (ICRA), pp. 285–292.
  9. S. Eiffert and S. Sukkarieh (2019) Predicting Responses to a Robot's Future Motion using Generative Recurrent Neural Networks. In Proceedings of the ARAA Australasian Conference on Robotics and Automation (ACRA).
  10. M. Everett, Y. F. Chen and J. P. How (2018) Motion Planning among Dynamic, Decision-Making Agents with Deep Reinforcement Learning. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3052–3059.
  11. P. Fiorini and Z. Shiller (1998) Motion planning in dynamic environments using velocity obstacles. International Journal of Robotics Research 17 (7), pp. 760–772.
  12. D. Fox, W. Burgard and S. Thrun (1997) The dynamic window approach to collision avoidance. IEEE Robotics and Automation Magazine 4 (1), pp. 22–23.
  13. D. Helbing and P. Molnár (1995) Social force model for pedestrian dynamics. Physical Review E 51 (5), pp. 4282–4286.
  14. S. Hochreiter and J. Schmidhuber (1997) Long Short-Term Memory. Neural Computation 9 (8), pp. 1735–1780.
  15. M. Hoy, A. S. Matveev and A. V. Savkin (2015) Algorithms for collision-free navigation of mobile robots in complex cluttered environments: A survey. Robotica 33 (3), pp. 463–497.
  16. S. Kim, S. J. Guy, W. Liu, D. Wilkie, R. W. H. Lau, M. C. Lin and D. Manocha (2015) BRVO: Predicting pedestrian trajectories using velocity-space reasoning. International Journal of Robotics Research 34 (2), pp. 201–217.
  17. L. Kocsis and C. Szepesvári (2006) Bandit based Monte-Carlo planning. In Lecture Notes in Computer Science.
  18. R. Lorentz (2016) Using evaluation functions in Monte-Carlo Tree Search. Theoretical Computer Science 644, pp. 106–133.
  19. C. Paxton, V. Raman, G. D. Hager and M. Kobilarov (2017) Combining Neural Networks and Tree Search for Task and Motion Planning in Challenging Environments. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6059–6066.
  20. A. Rudenko, L. Palmieri, M. Herman, K. M. Kitani, D. M. Gavrila and K. O. Arras (2019) Human motion trajectory prediction: A survey. In CoRR.
  21. S. Becker, R. Hug, W. Hübner and M. Arens (2018) An Evaluation of Trajectory Prediction Approaches and Notes on the TrajNet Benchmark. arXiv preprint arXiv:1805.07663.
  22. P. Trautman and A. Krause (2010) Unfreezing the robot: Navigation in dense, interacting crowds. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 797–803.
  23. J. Underwood, M. Calleija, J. Nieto, S. Sukkarieh, C. E. Clark, S. C. Garcia, K. L. Kerrisk and G. M. Cronin (2013) A robot amongst the herd: Remote detection and tracking of cows. In Proceedings of the 4th Australian and New Zealand Spatially Enabled Livestock Management Symposium, pp. 52.
  24. J. van den Berg, S. J. Guy, M. Lin and D. Manocha (2011) Reciprocal n-body collision avoidance. Springer Tracts in Advanced Robotics 70, pp. 3–19.
  25. A. Vemula, K. Muelling and J. Oh (2018) Social Attention: Modeling Attention in Human Crowds. In IEEE International Conference on Robotics and Automation (ICRA), pp. 4601–4607.
  26. D. Yang, L. Li, K. Redmill and Ü. Özgüner (2019) Top-view Trajectories: A Pedestrian Dataset of Vehicle-Crowd Interaction from Controlled Experiments and Crowded Campus. In IEEE Intelligent Vehicles Symposium (IV).
  27. A. Zyner, S. Worrall and E. Nebot (2019) Naturalistic Driver Intention and Path Prediction using Recurrent Neural Networks. IEEE Transactions on Intelligent Transportation Systems, pp. 1–11.