Graph Neural Networks for Learning Robot Team Coordination

Amanda Prorok
Department of Computer Science and Technology
University of Cambridge, UK
asp45@cam.ac.uk
Abstract

This paper shows how Graph Neural Networks can be used for learning distributed coordination mechanisms in connected teams of robots. We capture the relational aspect of robot coordination by modeling the robot team as a graph, where each robot is a node, and edges represent communication links. During training, robots learn how to pass messages and update internal states, so that a target behavior is reached. As a proxy for more complex problems, this short paper considers the problem where each robot must locally estimate the algebraic connectivity of the team’s network topology.


1 Introduction

Robot teams are becoming a de-facto solution to many of today's logistics problems (product delivery [Grippa et al., 2017], warehousing [Enright and Wurman, 2011], and mobility-on-demand [Pavone et al., 2012]). Robot teams also hold the promise of delivering robust performance in unstructured or extreme environments [Thayer et al., 2001, Kantor et al., 2003]. These applications hinge on algorithms that successfully and efficiently coordinate the robots by providing solutions to collective decision-making, formation and coverage control, and task allocation problems.

This work focuses on the problem of developing distributed coordination mechanisms. To date, most distributed coordination algorithms tend to be point-solutions to very specific applications, and a lot of work goes into their design [Garin and Schenato, 2010, Oh et al., 2015, Rossi et al., 2018]. Notably, many state-of-the-art approaches rely on idealized and simplistic operational assumptions (e.g., reliability of inter-robot communications and robot homogeneity). Some of our recent work highlights the challenge of developing coordination mechanisms in heterogeneous or faulty robot teams [Prorok et al., 2017, Saulnier et al., 2017]: not only are these algorithms computationally hard, but they are also difficult to design. As a consequence, we are interested in methods that more easily generate coordination mechanisms capable of functioning under complex operational conditions. Although some work has already been done in the domain of learning for robot team coordination [Amato et al., 2016, Liemhetcharat and Veloso, 2017], it is still a nascent field of research.

The goal of this paper is to apply a recent machine learning model, Graph Neural Networks (GNNs) [Scarselli et al., 2009], to the problem of robot team coordination. The GNN framework exploits the fact that many underlying relationships among data can be represented as graphs. Although GNNs have been applied to a number of problem domains, including molecular biology [Duvenaud et al., 2015], quantum chemistry [Gilmer et al., 2017], and simulation engines [Battaglia et al., 2016], they have yet to be considered within the multi-robot domain. Nevertheless, we have found that the fit is quite natural, as we capture the relational aspect of robot coordination by modeling the robot team as a graph, where each robot is a node, and edges represent communication links. This representation allows us to exploit GNNs to learn the desired coordination mechanism, where we presume that examples of the target behaviors are given to the system in a supervised learning setting.

2 Problem and Method

In our problem setting, robot team coordination is broken down into two main parts, (i) inter-robot message exchange, and (ii) robot state update. The goal is to learn both these parts. In a first instance (within the context of this short paper), we consider a simple problem as a proxy for more complex problems: distributed computation of the connectivity of the robot team.

Our notation leans on the notation in [Gilmer et al., 2017]. We consider an undirected graph $G = (V, E)$ with edges $E$ and nodes $V$. Connected nodes can pass each other messages for a duration of $T$ time-steps. Neighbors of a node $v$ are denoted by $N(v)$. Messages are denoted by $m_v^t$ for node $v$ at time $t$. During the message passing phase, nodes update their internal states, $h_v^t$. These updates are defined through a message function $M$ and a node update function $U$:

$$m_v^{t+1} = \sum_{w \in N(v)} M(h_v^t, h_w^t), \qquad h_v^{t+1} = U(h_v^t, m_v^{t+1}) \qquad (1)$$

After $T$ rounds of messages have been passed, a local readout function $R$ returns a feature vector describing a node characteristic:

$$\hat{y}_v = R(h_v^T)$$

We can also define a global readout function $R(\{h_v^T \mid v \in G\})$ that is invariant to node permutations (graph isomorphisms). The key point is that the functions $M$, $U$, and $R$ are all differentiable, and hence can be learned via back-propagation. This is the premise of GNNs.
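As a minimal sketch of this data flow (not the implementation used in this paper), one can write a message-passing round with a toy linear message function $M$ and a placeholder tanh update $U$; the weights here are random and untrained, and the mean-pooling readout is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def message_passing_round(h, adj, W_msg):
    """One round: each node sums linear messages M(h_w) = W_msg @ h_w over
    its neighbours (adjacency mask), then applies a toy update function U."""
    msgs = adj @ (h @ W_msg.T)   # (n, d): summed neighbour messages m_v
    return np.tanh(h + msgs)     # placeholder update U(h_v, m_v)

d = 4                                     # hidden state size (illustrative)
adj = np.array([[0, 1, 1],                # star graph on 3 nodes
                [1, 0, 0],
                [1, 0, 0]], dtype=float)
h = rng.standard_normal((3, d))           # one internal state h_v per node
W_msg = rng.standard_normal((d, d))       # toy linear message function M

for _ in range(3):                        # T = 3 message-passing rounds
    h = message_passing_round(h, adj, W_msg)

readout = h.mean(axis=1)                  # toy local readout R per node
print(readout.shape)                      # (3,)
```

Because every step is composed of differentiable operations, replacing the toy functions with parameterized ones and back-propagating through the unrolled rounds is straightforward, which is exactly the premise exploited here.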

In this work, we distribute the computation of the algebraic connectivity of the network topology of a multi-robot team (with $N$ robots). In other words, each robot computes its own local estimate. In coordination mechanisms that rely on consensus, the algebraic connectivity is an important network property: it predicts convergence and characterizes the convergence rate. Notably, it is associated with the robustness of network topologies [Shahrivar et al., 2015], with recent work demonstrating its effect on robot team resilience [Saulnier et al., 2017]. The algebraic connectivity is computed by taking the second smallest eigenvalue $\lambda_2$ of the graph Laplacian. Since global knowledge of the network topology is needed to compute the Laplacian, this is generally done in a centralized manner. Distributed algorithms that estimate the algebraic connectivity have previously been proposed [Aragues et al., 2012, Di Lorenzo and Barbarossa, 2014, Poonawala and Spong, 2015]. Although the details of the aforementioned estimation algorithms differ, they are all iterative approaches that lean on involved first-principles-based design.
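For reference, the centralized computation that provides our training labels is standard linear algebra: build the graph Laplacian $L = D - A$ from the adjacency matrix and take its second smallest eigenvalue. A NumPy sketch:

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second smallest eigenvalue lambda_2 of the graph Laplacian L = D - A.

    adj: symmetric (n, n) adjacency matrix of an undirected graph.
    """
    degree = np.diag(adj.sum(axis=1))
    laplacian = degree - adj
    eigvals = np.linalg.eigvalsh(laplacian)  # ascending order for symmetric matrices
    return eigvals[1]

# Path graph on 3 nodes: 0 -- 1 -- 2 (Laplacian eigenvalues are 0, 1, 3)
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
print(algebraic_connectivity(adj))  # 1.0
```

Note that this computation requires the full adjacency matrix, which is precisely the global knowledge that the learned distributed estimator avoids.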

Our approach is to bypass the principled design of these distributed algorithms, and instead to estimate the algebraic connectivity $\lambda_2$ directly via a learned coordination mechanism. Briefly stated, each node $v$ in the system estimates a local value $\hat{\lambda}_{2,v} = R(h_v^T)$ via the local readout function $R$.

3 Experiments

We adapted an implementation of GNNs available on GitHub (http://github.com/Microsoft/gated-graph-neural-network-samples). Our message passing function is a linear transform of the state; the state update function is handled by a GRU, and the readout functions consist of a single hidden layer. All hidden layers have size 100, and all activations are ReLUs. The GNN is trained for a message-passing duration $T$, over 100 epochs, using Adam with a fixed learning rate. We implement two variant GNNs: a centralized model (with global readout) and a distributed model (with local readout). We generated 100,000 random training examples of strongly connected graphs with $N \in \{9, 10, 11\}$, for which we compute the true algebraic connectivity $\lambda_2$. The default validation set comprises 10,000 graphs. We train using a squared-error loss between the true value $\lambda_2$ and the estimate $\hat{\lambda}_2$, and our results report the resulting estimation error.
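The architecture just described (linear message function, GRU state update, single-hidden-layer ReLU readout) can be sketched with untrained random weights. This illustrates the data flow only; the hidden size, the example graph, and all parameter values are illustrative, not the trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_update(h, m, p):
    """Standard GRU cell; the aggregated message m plays the role of the input."""
    z = sigmoid(m @ p["Wz"] + h @ p["Uz"])             # update gate
    r = sigmoid(m @ p["Wr"] + h @ p["Ur"])             # reset gate
    h_tilde = np.tanh(m @ p["Wh"] + (r * h) @ p["Uh"])  # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(1)
d = 8                                                  # hidden size (100 in the paper)
p = {k: rng.standard_normal((d, d)) * 0.1
     for k in ["Wz", "Uz", "Wr", "Ur", "Wh", "Uh"]}    # GRU parameters
W_msg = rng.standard_normal((d, d)) * 0.1              # linear message function
W1 = rng.standard_normal((d, d)) * 0.1                 # readout hidden layer
W2 = rng.standard_normal((d, 1)) * 0.1                 # readout output layer

adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # example triangle graph
h = rng.standard_normal((3, d))                        # initial node states

for _ in range(4):                                     # T = 4 message-passing rounds
    m = adj @ (h @ W_msg)                              # sum linear messages over neighbours
    h = gru_update(h, m, p)

local_estimates = np.maximum(h @ W1, 0) @ W2           # one-hidden-layer ReLU readout
print(local_estimates.shape)                           # (3, 1): one estimate per node
```

The local (distributed) readout applies the same readout MLP independently at each node, as above; the global (centralized) variant would instead pool the node states before the readout.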

Figure 1 shows an example of a network topology and the connectivity values predicted by our model with a local readout. Figure 2 shows the average error over validation sets, as training progresses, for three local and three global GNNs with varying messaging durations $T$. As expected, the global model performs better than the local one, and higher $T$ performs better than lower $T$. Interestingly, increasing $T$ to 4 in the local model enables it to outperform the global model at a shorter messaging duration. Figure 3 shows the ability of the models to generalize beyond the graph sizes they were trained on. As expected, the loss increases with the distance to known network sizes. For known graph size instances, the local model produces the somewhat counterintuitive result that its performance improves as graph sizes grow (the global model's behavior is the inverse).

4 Conclusion

This short paper demonstrates the feasibility of learning distributed coordination mechanisms for robot teams. We trained Graph Neural Networks on random network topologies to show that accurate distributed estimation of the network connectivity is achievable. Future work will consider team coordination mechanisms that go beyond the distributed estimation shown in this work.

Figure 1: Example of learned distributed algebraic connectivity estimation on a graph of size $N$ for messaging duration $T$. The true value $\lambda_2$ is given; the local estimates $\hat{\lambda}_{2,v}$ are superimposed on the nodes.
Figure 2: Performance of the network, evaluated for varying durations $T$, for both a distributed model (local readout) and a centralized model (global readout).
Figure 3: Performance as a function of the size of the network graph. The models were trained on graph instances of size 9 to 11 nodes; the shaded bars show the performance on graph size instances not encountered during training.

References

  • [Amato et al., 2016] Christopher Amato, George Konidaris, Ariel Anders, Gabriel Cruz, Jonathan P How, and Leslie P Kaelbling. Policy search for multi-robot coordination under uncertainty. The International Journal of Robotics Research, 35(14):1760–1778, 2016.
  • [Aragues et al., 2012] Rosario Aragues, Guodong Shi, Dimos V Dimarogonas, C Sagues, and Karl Henrik Johansson. Distributed algebraic connectivity estimation for adaptive event-triggered consensus. In American Control Conference (ACC), 2012, pages 32–37. IEEE, 2012.
  • [Battaglia et al., 2016] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. In Advances in Neural Information Processing Systems, pages 4502–4510, 2016.
  • [Di Lorenzo and Barbarossa, 2014] Paolo Di Lorenzo and Sergio Barbarossa. Distributed estimation and control of algebraic connectivity over random graphs. IEEE Transactions on Signal Processing, 62(21):5615–5628, 2014.
  • [Duvenaud et al., 2015] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, pages 2224–2232, 2015.
  • [Enright and Wurman, 2011] John Enright and Peter R Wurman. Optimization and coordinated autonomy in mobile fulfillment systems. In Automated action planning for autonomous mobile robots, pages 33–38, 2011.
  • [Garin and Schenato, 2010] Federica Garin and Luca Schenato. A survey on distributed estimation and control applications using linear consensus algorithms. In Networked Control Systems, pages 75–107. Springer, 2010.
  • [Gilmer et al., 2017] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212, 2017.
  • [Grippa et al., 2017] Pasquale Grippa, Doris A Behrens, Christian Bettstetter, and Friederike Wall. Job selection in a network of autonomous uavs for delivery of goods. Robotics: Science and Systems, 2017.
  • [Kantor et al., 2003] George Kantor, Sanjiv Singh, Ronald Peterson, Daniela Rus, Aveek Das, Vijay Kumar, Guilherme Pereira, and John Spletzer. Distributed search and rescue with robot and sensor teams. In Field and Service Robotics, pages 529–538. Springer, 2003.
  • [Liemhetcharat and Veloso, 2017] Somchaya Liemhetcharat and Manuela Veloso. Allocating training instances to learning agents for team formation. Autonomous Agents and Multi-Agent Systems, 31(4):905–940, 2017.
  • [Oh et al., 2015] Kwang-Kyo Oh, Myoung-Chul Park, and Hyo-Sung Ahn. A survey of multi-agent formation control. Automatica, 53:424–440, 2015.
  • [Olfati-Saber and Murray, 2004] Reza Olfati-Saber and Richard M Murray. Consensus problems in networks of agents with switching topology and time-delays. IEEE Transactions on Automatic Control, 49(9):1520–1533, 2004.
  • [Pavone et al., 2012] Marco Pavone, Stephen L Smith, Emilio Frazzoli, and Daniela Rus. Robotic load balancing for mobility-on-demand systems. The International Journal of Robotics Research, 31(7):839–854, 2012.
  • [Poonawala and Spong, 2015] Hasan A Poonawala and Mark W Spong. Decentralized estimation of the algebraic connectivity for strongly connected networks. In American Control Conference (ACC), 2015, pages 4068–4073. IEEE, 2015.
  • [Prorok et al., 2017] Amanda Prorok, M Ani Hsieh, and Vijay Kumar. The impact of diversity on optimal control policies for heterogeneous robot swarms. IEEE Transactions on Robotics, 33(2):346–358, 2017.
  • [Rossi et al., 2018] Federico Rossi, Saptarshi Bandyopadhyay, Michael Wolf, and Marco Pavone. Review of multi-agent algorithms for collective behavior: a structural taxonomy. arXiv preprint arXiv:1803.05464, 2018.
  • [Saulnier et al., 2017] Kelsey Saulnier, David Saldana, Amanda Prorok, George J Pappas, and Vijay Kumar. Resilient flocking for mobile robot teams. IEEE Robotics and Automation Letters, 2(2):1039–1046, 2017.
  • [Scarselli et al., 2009] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.
  • [Shahrivar et al., 2015] Ebrahim Moradi Shahrivar, Mohammad Pirani, and Shreyas Sundaram. Robustness and algebraic connectivity of random interdependent networks. IFAC-PapersOnLine, 48(22):252–257, 2015.
  • [Thayer et al., 2001] Scott M Thayer, M Bernardine Dias, Bart Nabbe, Bruce Leonard Digney, Martial Hebert, and Anthony Stentz. Distributed robotic mapping of extreme environments. In Mobile Robots XV and Telemanipulator and Telepresence Technologies VII, volume 4195, pages 84–96. International Society for Optics and Photonics, 2001.