Managing a Fleet of Autonomous Mobile Robots (AMR) using Cloud Robotics Platform

Aniruddha Singhal, Nishant Kejriwal, Prasun Pallav, Soumyadeep Choudhury, Rajesh Sinha and Swagat Kumar. Email IDs: {aniruddha.singhal, nishant.kejriwal, prasun.pallav, soumyadeep.choudhury, rajesh.sinha, swagat.kumar}@tcs.com. The authors are with TCS Research, Tata Consultancy Services, New Delhi, India 201309.
Abstract

In this paper, we provide details of implementing a system for managing a fleet of autonomous mobile robots (AMRs) operating in a factory or warehouse premise. While the robots are themselves autonomous in their motion and obstacle avoidance capability, the target destination for each robot is provided by a global planner. The global planner and the ground vehicles (robots) constitute a multi-agent system (MAS) which communicates over a wireless network. Three different approaches are explored for implementation. The first two approaches make use of the distributed computing based networked robotics architecture and communication framework of the Robot Operating System (ROS) itself, while the third approach uses the Rapyuta Cloud Robotics framework. The comparative performance of these approaches is analyzed through simulation as well as real-world experiments with actual robots. These analyses provide an in-depth understanding of the inner working of the Cloud Robotics Platform in contrast to the usual ROS framework. The insight gained through this exercise will be valuable for students as well as practicing engineers interested in implementing similar systems elsewhere. In the process, we also identify a few critical limitations of the current Rapyuta platform and provide suggestions to overcome them.

Fleet Management System, Multi-AMR control, Rapyuta, Cloud Robotics Platform, Robot Operating System, MAS, Gazebo, Gzweb

I Introduction

The last couple of decades have witnessed a steady rise in robot-based industrial automation. These industrial robots are comparatively inexpensive and are capable of carrying out repetitive tasks at high speed and with great accuracy and hence, are widely deployed in mass-production industries. In spite of this, robotic automation has remained confined to big industries that can pay for the elaborate assembly lines built around these robots to compensate for their lack of intelligence. In addition, this involves writing and testing extensive programs to take into account all possible cases that a robot might encounter during its operation. In short, current robot-based industrial automation requires huge investment both in terms of capital and time, making it unaffordable to small and medium enterprises. This scenario is poised to change with the rise of service robots [1] [2], which unlike their industrial counterparts, can work in unstructured environments while learning and adapting to

Fig. 1: Block diagram of a simplified fleet management system for autonomous vehicles.

changes around them. These robots are designed to be safe and can work collaboratively with humans in close proximity without any protective fencing. They can be programmed easily and intuitively through demonstrations by the operators themselves, which gives rise to the teaching-by-demonstration paradigm [3] that can be used for changing the robot behavior on the fly. Similarly, these robots will exhibit a higher level of intelligence in taking autonomous decisions based on sensory perception.

According to the International Federation of Robotics (IFR) [4], service robotics is going to drive the growth of the robotics industry in the coming decade. This growth will be partly due to the increased adoption of robots in industrial as well as domestic environments. Cloud Robotics [5] [6] will play a significant role in the growth of service robotics by augmenting robot capabilities while reducing the per-unit cost of each robot. This becomes possible as the robots can off-load computationally intensive tasks onto the cloud for processing, can collaborate with other robots and humans over the network, and can learn new skills instantly from the internet. Cloud Robotics can be used for providing “Robotics-as-a-Service” based solutions where robots could be dynamically combined to support specific applications [7]. One such application that is being considered in this paper is a vehicle fleet management system for warehouse and factory shop floors.

A vehicle fleet management system comprises various software and hardware components which facilitate optimum utilization of vehicles in meeting pre-defined goals. One such example is the use of Kiva mobile robots [8] for moving goods within Amazon fulfillment centers [9]. These autonomous ground vehicles (AGVs) are programmed to move autonomously along predefined tracks. However, the schedules and routes are provided by a centralized planner which also carries out resource allocation and manages job assignment for individual robots. Such a system also includes modules that facilitate efficient collaboration between machines and robots [10].

In this paper, we are looking into a simpler version of this fleet management system where a group of autonomous vehicles is required to follow desired paths provided by a global path planner as shown in Figure 1. This figure shows the essential components required for implementing such a fleet management system. The current location of the robots as well as new obstacles detected on the way are used to update the environment map which, in turn, is used by the global planner to create new paths for the robots. The user or the operator provides the goals or destinations for each robot in this case. However, such goals may also come from an ERP (Enterprise Resource Planning) system in an industrial setting. The autonomy of each robot is governed by the navigation module that implements SLAM (Simultaneous Localization and Mapping) [11] as well as obstacle avoidance capabilities. Unlike the existing systems that focus on system integration involving various software and hardware components [12] [13] [14], we are particularly interested in exploring software frameworks like ROS [15] and Rapyuta [5] for implementing such systems. To be specific, we provide details of three implementations in this paper. The first two make use of the distributed control and communication framework of the Robot Operating System (ROS) [15] and the last implementation uses the Rapyuta cloud robotics engine [5]. A comparative analysis of these approaches is carried out, which provides an understanding of the underlying challenges which, if addressed, may increase the usability of the platform. The working of these implementations is demonstrated through several simulations as well as real-world experiments.

In short, the contributions made in this paper could be summarized as follows: (1) We provide three different implementations of a fleet management system for autonomous ground vehicles using ROS and Rapyuta platforms. These include a single-master based ROS system, a multi-master based ROS system and a Rapyuta-based cloud robotics system. (2) The working details of these implementations are provided for both simulation and actual experiments, which could serve as an operation manual for students, researchers and practicing engineers who would like to implement similar systems in other domains. (3) Through rigorous comparative performance analysis, we identify the critical limitations of the existing cloud robotics platform which, if solved, will improve the usability of these platforms.

The rest of this paper is organized as follows. An overview of related work is provided in the next section. The three approaches for implementing the fleet management system are described in Section III. The simulation environment and experimental setup are described in Section IV, and the comparative performance analysis of these systems is provided in Section V. The limitations of the current implementation, which provide directions for future work, are discussed in Section VI, followed by the conclusion in Section VII.

II Related Work

In this section, we provide a brief overview of several related works. This will also serve as background material for various core concepts that will be referred to repeatedly in the rest of this paper.

II-A Robot Operating System

The Robot Operating System (ROS) [16] is a software framework for managing and controlling multiple robots. It uses a peer-to-peer topology for communication between robot processes, supports multiple programming languages and provides tools for robot software development. Readers can refer to the online wiki [15] to learn about ROS in detail. For the sake of completeness, some of the common concepts that will be used frequently are listed below.

(1) Nodes are ROS processes that perform computation. They communicate with each other by passing messages. (2) Topics are the medium over which nodes exchange messages. They provide a link between two nodes. A topic is a channel for anonymous communication; multiple nodes can publish or subscribe to a given topic. (3) A Subscriber is a node which listens to the messages that are published to a topic. (4) A Publisher is a node which writes to a topic to which other nodes can subscribe. (5) roscore is a set of nodes which are necessary for the ROS environment to work. roscore starts the ROS master node, the ROS parameter server and a node where logs are published. (6) AMCL (Adaptive Monte Carlo Localization) [17] is an inbuilt ROS package that is used by the robots to localize themselves in the map. (7) TF is a package that lets the user keep track of multiple coordinate frames over time. TF maintains the relationship between coordinate frames in a tree structure buffered in time, and lets the user transform points, vectors, etc., between any two coordinate frames at any desired point in time. (8) The GMapping [18] package provides laser-based SLAM (Simultaneous Localization and Mapping) capability. It runs as a ROS node called slam_gmapping. This node can be used for creating a 2-D occupancy grid map of the environment from laser and pose data collected by a mobile robot.
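
To make these concepts concrete, the following minimal sketch shows a publisher node and a subscriber node written with rospy; the node names, the topic name chatter and the message type are illustrative choices and are not part of the system described later.

#!/usr/bin/env python
# Minimal publisher/subscriber sketch illustrating ROS nodes and topics.
# Topic name (chatter) and message type are illustrative.
import rospy
from std_msgs.msg import String

def talker():
    # A node that publishes String messages on the chatter topic at 10 Hz.
    rospy.init_node('talker')
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        pub.publish(String(data='hello'))
        rate.sleep()

def listener():
    # A node that subscribes to chatter and logs every received message.
    rospy.init_node('listener')
    rospy.Subscriber('chatter', String, lambda msg: rospy.loginfo(msg.data))
    rospy.spin()

if __name__ == '__main__':
    talker()   # run listener() in a second process to complete the pair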

II-B Cloud Robotics Platform: Rapyuta

Rapyuta [5] is an open-source cloud robotics framework. It provides an elastic computing model which dynamically allocates secure computing environments for robots. In this way, it helps solve the problem of unavailability of high computing power on robots. The Rapyuta framework is based on the clone-based model [19] where each robot connected to the cloud has a system-level clone on the cloud itself, which allows it to offload heavy computation to the cloud. These clones are tightly interconnected with high bandwidth, making the framework suitable for multi-robot deployment. In addition, Rapyuta provides access to libraries of images and maps, otherwise known as the RoboEarth knowledge repository [20] [21], and provides a framework that facilitates collaborative robot learning and human computation [6]. A number of applications have been reported in the literature that demonstrate the applicability and usefulness of the platform. These include collaborative mapping [19] [22], robot grasping [23], tele-presence [24] and ubiquitous manufacturing [25]. Readers are also referred to [22] [26] for a comparative study of several other cloud robotics platforms reported in the literature. While a cloud-based system offers several advantages, it also poses several challenges which, if solved, can greatly enhance the usability of such platforms. Some of these challenges include network latency, data interaction and security [27].

In a related work, Turnbull et al. [28] built a system that detects the positions of robots through a ceiling-mounted camera and controls their motion so that they do not collide, exploiting the large computation power provided by the cloud. A collision avoidance and path planning system which runs on individual robots also exists [29]; it uses a common ROS topic for inter-robot communication and AMCL for localization.

II-C Fleet Management System

A fleet management system [12] [30] [14] [31] primarily concerns itself with managing a group of vehicles to meet the goals and objectives obtained from an enterprise computer system. While most of the existing systems focus on integrating various software and hardware components to ensure efficient utilization of resources, there have been very few efforts at generalizing the underlying architecture to make it more flexible and generic. The authors in [28] do propose to use a cloud infrastructure to implement formation control of a multi-robot system by using an external camera system for detecting and tracking individual robots. While a cloud infrastructure is used for image processing, it does not use a generic framework like Rapyuta.

In this paper, we primarily implement a simplified fleet management system using the Rapyuta cloud robotics engine. The implementation is carried out through simulation as well as physical experiments using actual robots. The purpose of this work is to provide an insight into the working of the cloud robotics framework as well as to identify the limitations of the current architecture. We also attempt to offer suggestions for overcoming these limitations and thereby improving the usability of the Rapyuta cloud robotics framework. The details of the implementations of the fleet management system are described next.

III The Methods

In this section, we provide details of our implementation of a simplified fleet management system as shown in Figure 1. It primarily consists of four modules: (1) a user, an operator or an ERP system that provides goals or target destinations for each robot; (2) a global planner that computes the path to be taken by each robot based on the current state of the environment; (3) autonomous mobile robots (AMRs) having the capability for autonomous navigation and obstacle avoidance; and (4) an environment map which could be updated with the information of new obstacles detected by the robots. The user is also free to update the availability of routes for any robot by creating obstacles in the environment map.

The above fleet management system is implemented using three methods: (1) a single-master system, (2) a multi-master system and (3) a cloud robotics platform. The first two methods make use of the distributed computing and communication architecture of the Robot Operating System (ROS) [16] while the last method uses the Rapyuta cloud robotics framework [5]. The details of each implementation and their respective pros and cons are presented next in this section.

III-A Single Master System

Fig. 2: The figure shows two robots connected to a third system which is the master running ROSCORE. The master also runs the global planner. Rectangular boxes show nodes, solid ovals show topics on the machine, and dashed ovals show topics available for subscription from other machines.

In a single master system, ROSCORE runs on one machine which is called the master. Other nodes work in a distributed fashion on different machines. The nodes can run anywhere on the network except the driver nodes, which run on the system that is directly connected to the hardware. All the nodes need to connect to the master. They connect via ROS_MASTER_URI which can be set in the .bashrc file of the respective machines as shown below. All the machines in the network have a bidirectional connection with each other. Also, the host IP and the master IP will be the same in the case of the master machine.

 

export ROS_MASTER_URI=http://<master_ip>:11311
export ROS_HOSTNAME=<host_ip>
  

 

Some of the common tasks like localization, mapping, etc. run on every robot, resulting in nodes with the same name under ROSCORE. A single launch file cannot be used to launch the nodes as it will create a conflict and the previously running node will be overridden by the new instance with the same name. This problem is resolved by introducing namespace and tf_prefix tags in the launch file as shown below.

 

<launch>
<group ns="Robot1">
<param name="tf_prefix" value="Robot1" />
   .
   .
  <node pkg="<package_name>"
  type="<node_type>"
  name="<node_name>">
  <param name="<xyz>"
  type="double"
  value="<value_to_be_passed>" />
  </node>
   .
   .
</group>
</launch>
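
To see why the namespace and tf_prefix tags resolve this conflict, consider the minimal sketch below of a node that publishes on a relative topic name. When it is launched inside <group ns="Robot1">, ROS resolves the topic to /Robot1/pose_estimate, so two robots can run the same node without clashing. The node, topic and frame names here are illustrative.

#!/usr/bin/env python
# Sketch of a node that publishes on a *relative* topic name. Launched inside
# <group ns="Robot1">, the topic resolves to /Robot1/pose_estimate, which is
# how the namespace tag avoids name clashes between robots.
import rospy
from geometry_msgs.msg import PoseStamped

if __name__ == '__main__':
    rospy.init_node('pose_reporter')
    pub = rospy.Publisher('pose_estimate', PoseStamped, queue_size=1)
    # 'tf_prefix' is resolved relative to the node's namespace, so inside
    # /Robot1 this reads the /Robot1/tf_prefix parameter set in the launch file.
    prefix = rospy.get_param('tf_prefix', '')
    rate = rospy.Rate(1)
    while not rospy.is_shutdown():
        msg = PoseStamped()
        msg.header.stamp = rospy.Time.now()
        # Frame names are prefixed the same way, e.g. Robot1/base_link.
        msg.header.frame_id = prefix + '/base_link'
        pub.publish(msg)
        rate.sleep()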

 

The single master system can be set up by following the steps given below:

  • Set up the .bashrc file on each robot as shown above.

  • Append suitable namespace and tf_prefix to the nodes corresponding to each robot.

  • Run roscore on the master.

  • Launch each individual robot.

A single master system is handy for quick testing of algorithms on a single robot because of its simple setup process. Its simplicity, however, does not provide much advantage as the number of robots in the environment increases. A schematic diagram of a working instance of a single master system is shown in Figure 2. It shows one master running roscore and two client robots connected to the master over LAN. As one can see, all the topics from one robot are available for subscription by all other robots as well. These topics are shown as dotted ellipses. The topics generated by the robot itself are shown as solid ellipses. Making topics available to everyone all the time may lead to security concerns, as one would like to have some control over who can access which topics. In other words, this would require additional overhead to restrict access to the topics of a given robot by the others. Secondly, the bandwidth requirement for a single master system with multiple robots is comparatively higher as all the topics are available over the network for subscription. Moreover, having a single master makes the whole system vulnerable because if roscore dies, service-based communication between the nodes stops. Topic-based communication can still work because once a connection between nodes is established via topics, roscore is no longer needed, but new topics cannot be created without roscore running. Also, as the number of robots increases, it becomes increasingly cumbersome to deal with conflicts among similar topics and namespace resolution.

III-B Multi-Master System

Many of the limitations of a single master system can be overcome by having multiple masters running their own independent roscore as shown in Figure 3. This makes the system robust as the failure of one master will not lead to the failure of the complete system. Since the visibility of topics is limited to the scope of each roscore environment, there are no namespace conflicts among topics in a multi-master system. All the nodes and services are local to each robot. However, it is possible to share a minimal number of topics with other robots through remapping as and when required. Since only a limited number of topics are shared, the bandwidth required in a multi-master system is lower compared to that in a single master system for the same task.

Fig. 3: A schematic view of a multi master system. The figure shows multiple roscores running on different machines. In this configuration, there is no conflict among the topics with similar names as their visibility is limited to the machine running its own roscore.

To implement a multi-master system, a package called multimaster_fkie is needed [32] and can be easily installed as shown below. It allows two important processes, master_discovery and master_sync, to run simultaneously. The function of master_discovery is to send multicast messages to the network so that all roscore environments become aware of each other. It also monitors the changes in the network and intimates all ROS masters about these changes. The other process, master_sync, enables us to select which topics can be shared between different roscores. Without the master_sync node, no information can be accessed by other roscores. The following commands are required to be executed to install and activate the multi-master mode on each machine:

 

$ sudo apt-get install ros-indigo-multimaster-fkie
$ sudo sh -c "echo 0 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts"
$ export ROS_MASTER_URI=http://<host_ip>:11311
$ export ROS_HOSTNAME=<host_ip>
$ roscore
$ rosrun master_discovery_fkie master_discovery _mcast_group:=224.0.0.1
$ rosrun master_sync_fkie master_sync _sync_topics:=['topic_name']

 

It is to be noted that the host and master IPs are the same on each machine. This is unlike the single-master case where these two IPs could be different for a given machine. The namespace conflict in a multi-master system can be avoided using a relay node. The use of a relay node can be understood in the context shown in Figure 3. The global planner needs to access pose data from Robots 1 and 2 for carrying out path planning. Each of these two robots publishes pose data to a topic called /amcl_pose under its respective roscore. To avoid conflict, one has to relay the /amcl_pose of Robot 1 to the topic /Robot1/amcl_pose and that of Robot 2 to /Robot2/amcl_pose respectively. This can be done by executing the following command on each of the robots:

 

$ rosrun topic_tools relay /amcl_pose /Robot1/amcl_pose

 

As shown in the figure, the global planner can now access these new topics, /Robot1/amcl_pose and /Robot2/amcl_pose, to obtain the respective pose data.
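
The relay performed by topic_tools above can also be written as a trivial ROS node that subscribes to the local topic and republishes it under the robot-specific name, as sketched below. The message type matches what AMCL publishes on /amcl_pose; the robot_name parameter is an illustrative assumption.

#!/usr/bin/env python
# Minimal equivalent of `rosrun topic_tools relay /amcl_pose /Robot1/amcl_pose`:
# subscribe to the local AMCL pose and republish it under the robot-specific
# name expected by the global planner. Run one instance per robot.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

if __name__ == '__main__':
    rospy.init_node('pose_relay')
    prefix = rospy.get_param('~robot_name', 'Robot1')  # illustrative parameter
    pub = rospy.Publisher('/%s/amcl_pose' % prefix,
                          PoseWithCovarianceStamped, queue_size=1)
    # Every message received on /amcl_pose is forwarded unchanged.
    rospy.Subscriber('/amcl_pose', PoseWithCovarianceStamped, pub.publish)
    rospy.spin()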

Even though a multi-master system saves us from several problems encountered in a single master system, it still does not provide a solution to some other problems such as scalability, load balancing and limited computation power. As the number of robots increases, one needs to reconfigure system files manually for each robot to enable multi-casting. It does not make efficient use of the available processing power because, by default, the processes are not distributed such that the load on each machine is balanced. Bandwidth usage in a multi-master system is still high compared to a cloud-based system due to the difference in the network protocols [22] used by different machines. In a multi-master system, each machine has limited on-board computational hardware which cannot be augmented to accommodate higher demand at run time. This limits the usability of the multi-master system.

III-C Cloud Robotics System

Many of the limitations of a multi-master system can be solved by having a cloud infrastructure to which the robots can offload computationally heavy tasks. In this paper, the Rapyuta cloud robotics engine [5] [33] is used for implementing the fleet management system. As discussed earlier, it is a Platform-as-a-Service (PaaS) framework suitable for developing robotic applications. The schematic of such an implementation is shown in Figure 4. It shows four main components: (1) a cloud server which includes both software and hardware infrastructure; (2) physical or simulated robots and their working environment; (3) a user interface for interacting with the system; and (4) an operator or an ERP system to provide goals for the system.

Fig. 4: Block-diagram of implementation using a cloud robotics platform

The inner working of this cloud-based implementation can be better understood by studying Figure 5, which provides a process-level overview of the system showing nodes, topics and interconnection pathways among the various modules of the fleet management system. The figure shows a multi-agent system implemented using four physical machines (three robots and a server). Each robot runs processes for localization and autonomous navigation through the nodes /amcl and /move_base respectively. The processes related to the Rapyuta cloud robotics engine run on the server machine, which also runs the processes for the global planner that generates paths for the robots. In a general scenario, the global planner and all related optimization algorithms can run on a separate physical machine on the network. Hence, it is shown as a separate block in Figure 5, similar to the blocks corresponding to the robots.

As shown in this figure, the global planner publishes data to two types of topics. The first topic, /goalNodesList, provides the paths generated by the planner in the form of an array of grid block numbers. Each robot subscribes to its corresponding goalNodesList to know the cell locations that it needs to traverse. The second topic, /cancelGoal, carries a binary value which indicates whether the current goal locations received from the global planner are to be discarded by the robot or not. The value of /cancelGoal for a given robot is set if a cell on its path is blocked either by a user or by an obstacle detected by the robot's sensors. The grid cells could also be blocked by an ERP (Enterprise Resource Planning) system indicating non-traversable regions in the environment. Whenever /cancelGoal is set, the robot discards the previously received goal locations and uses the new values available on the corresponding /goalNodesList topic. These topics are subscribed to by the respective move_client nodes on the cloud which, in turn, publish the necessary topics for subscription by the physical robots.
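
A minimal sketch of the robot-side logic described above is given below. The topic names follow the text, while the message types (Int32MultiArray for the array of grid block numbers and Bool for the cancel flag) and the way the cells are buffered are assumptions made for illustration; in the actual system the move_client containers convert these cells into navigation goals for move_base.

#!/usr/bin/env python
# Sketch of the robot-side behaviour: keep the latest list of grid cells from
# goalNodesList, and discard it whenever cancelGoal is set so that the next
# list from the global planner is used instead. Message types are assumed.
import rospy
from std_msgs.msg import Int32MultiArray, Bool

class GoalFollower(object):
    def __init__(self):
        self.pending_cells = []          # grid block numbers yet to be visited
        rospy.Subscriber('goalNodesList', Int32MultiArray, self.on_goals)
        rospy.Subscriber('cancelGoal', Bool, self.on_cancel)

    def on_goals(self, msg):
        # New path from the global planner: an ordered array of grid cells.
        self.pending_cells = list(msg.data)
        rospy.loginfo('Received path through cells: %s', self.pending_cells)

    def on_cancel(self, msg):
        # A set cancel flag means a cell on the current path has been blocked;
        # drop the old path and wait for a fresh goalNodesList.
        if msg.data:
            rospy.logwarn('Path cancelled, discarding %d pending cells',
                          len(self.pending_cells))
            self.pending_cells = []

if __name__ == '__main__':
    rospy.init_node('goal_follower')
    GoalFollower()
    rospy.spin()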

Fig. 5: The process nodes and topics required for implementing the fleet management system using Rapyuta cloud robotics engine. The system shows four agents (three robots and one global planner) interacting with each other through a cloud server. In this implementation, only a single container is used to execute all relevant processes. The arrow heads show the direction of information flow through topics between different nodes.

Before going further, a brief understanding of Rapyuta's organization will be useful for understanding the configuration steps described later. Rapyuta has the following four main components [5]: (1) Computing environments are the Linux containers [34] [35] used for running various ROS based robot applications; (2) Communication protocols are the standard protocols used for internal and external communication between cloud, container and robot processes; (3) the Core Task Sets manage all processes and tasks and are further divided into three groups, namely, the robot task set, the environment task set and the container task set; (4) Command Data Structures are the formats used for various system administration activities.

The setup process for the cloud robotics based fleet management system involves two main steps:

  • Create configuration files providing details of interaction between cloud and robots.

  • Launch these files using system commands on server as well as robot clients.

In the remaining part of this section, we provide the details of the configuration on the server as well as on the clients.

III-C1 Configuration of Cloud Server

The configuration for the cloud-based fleet management system is shown in Figure 6. The dotted box shows the activities within the cloud server. The first process which needs to be started on the server is the Master Task Set which controls and manages all other processes on the cloud. It takes up an IP called master_ip and listens on port 8080. This process is started by executing the following Linux command:

 

$ rce-master

 

The next process which needs to be started on the server is the Robot Task Set which is responsible for managing communication with physical robots. It can be started using the following command:

 

$ rce-robot <master_ip>

 

The third task which needs to be started is the Container Task Set responsible for managing containers which are the basic computing environment on the cloud. The corresponding command is:

 

$ sudo rce-container <master_ip>

 

Each Linux container (LXC) takes up its own IP and port to communicate with the master. Linux containers need not be co-located with the Rapyuta server (rce-master) and can run on any other machine on the network. It is also possible to have multiple containers. The Linux containers are capable of running standard ROS nodes or user-created nodes to perform a specific task. Inside each Linux container lies the fourth and final core task set, known as the Environment Task Set. This task set allows the ROS nodes running within the container to communicate with nodes running on other Linux containers and robots on the network. The configuration of these environment tasks for the containers is provided in the configuration files used by the individual clients, as will be explained in the next section.

Figure 6 also shows the two main types of connections for communication among various processes: one for internal communication between different Rapyuta processes and the other for external communication between Rapyuta processes and robots. Internally, Rapyuta communicates over UNIX sockets. For instance, the master task set uses port 8080 for communication, which is referred to as an internal_port. The processes within the Linux container communicate with robot endpoints through communication ports or comm_port. The corresponding port number is 10030 and is represented by the letter ‘P’ (for ports) in the above figure. The robot endpoints provide interfaces for converting the external format (e.g., JSON) into the internal format of the robots (e.g., ROS messages). On the other hand, ports are used for internal communication between endpoint processes. The external communication between Rapyuta processes and robots uses the WebSocket protocol. This communication is over port 9010, which is also known as ws_port or websocket_port. Readers can refer to [5] for more details. The figure also shows the process IDs (PIDs) of all related topics and nodes.

Fig. 6: Configuration for multiple AMRs in a Rapyuta-based fleet management system. The dashed line shows the server system where the Rapyuta cloud engine is running along with a robot process called the global planner. Three AMRs are represented by the three blocks labeled Robot 1, Robot 2 and Robot 3. On the right-hand side, the processes inside Robot 1 and their interaction with the rce-ros process used to send data are shown. As shown in the figure, the move_base process (PID 6784) communicates with the rce-ros process (PID 9369) through system-assigned ports.

III-C2 Configuration for Robots

In order to demonstrate the working of the system, Turtlebots [36] are used as the autonomous mobile robot (AMR) platforms for our fleet management system. After setting up the cloud, robot processes are required to be started on each robot. Each robot is brought up within the ROS environment by using the turtlebot_bringup command. Other functionalities of the robot (autonomous navigation, obstacle avoidance, localization, etc.) are activated through a standard ROS launch file. The connection between a robot and Rapyuta is established using the rce-ros command with a local configuration file available on each robot. The basic commands for setting up the robots are as follows:

 

$ turtlebot_bringup
$ sudo rce-ros robot1.config
$ roslaunch botmotion.launch

 

The configuration files are written in JSON and are used for sending request instructions to the master task set for establishing a connection with the cloud. The configuration file for each robot has the following four main components: (1) containers, (2) nodes, (3) interfaces and (4) connections. Other than this, the first part of the configuration file is used to send an HTTP request to the cloud. This part appears as shown below:

 

"url":"http://192.168.5.36:9000/",
"userID" :"testUser",
"password" : "testUser",
"robotID"  : "testRobot_1",

 

As shown above, the request is sent on port 9000 and, in response, Rapyuta sends the endpoint's URL to the robot as a JSON encoded response. This received URL is used by the robot to connect with the cloud through port 9010. These ports are configured at the time of installation. Upon establishing the connection, the robot requests container creation, which is done by the following block in the configuration file:

 

"containers": ["cTag" : "cTag_01" ]

 

This creates a container inside Rapyuta having a unique tag provided by the key "cTag". Each container starts with the necessary processes or daemons like roscore, sshd, etc. and looks for the nodes which need to be run inside the container. This information is provided in the "nodes" block of the configuration file as shown below:

 

  "nodes": [
  "cTag" : "cTag_01",
  "nTag" : "move_client_node_1",
  "pkg"  : "move_client",
  "exe"  : "move_client_pthread",
  "args" : "/Robot1/goalNodesList/Robot1,
                     /cancelGoal, Robot1/map",
  "namespace" : "Robot1"
  } ... ]

 

The key "cTag" refers to the name of the container where these nodes are to be created, "nTag" specifies the name for the node, "pkg" tells the master task set about the needed packages. The key "exe" tells the name of the executable, "args" contains the arguments to be passed and "name-space" segregates the processes inside the container giving us the flexibility to run multiple copies of the same executable independently inside a container.

Once the nodes are up, it is necessary to define interfaces for each robot. Interfaces primarily refer to various kinds of sensor data that will be shared with the cloud or other robots in the network. This is specified by the following block in the configuration file:

 

  "interfaces": [ {
  "eTag" : "cTag_01",
  "iTag" : "amclPoseReceiver_1",
  "iType" : "PublisherInterface",
  "iCls" :  "geometry_msgs/
               PoseWithCovarianceStamped",
  "addr" : "/Robot1/amcl_pose" }

 

The key "eTag" refers to the endpoint tag which is either a robot end or a container end and accordingly, a robot ID or a container tag can be mentioned as its value. The key "iTag" is the interface tag and is unique in the scope of an endpoint tag. "iType" defines the type of the interface tag which can be subscriber, publisher, service client or service provider as defined by Rapyuta [37]. "iCls" refers to the class name and it defines the message type for publisher or subscriber and "addr" is the address of ROS topic. After defining the interfaces, it is necessary to specify the connections between various endpoints as shown in the following block:

 

  "connections" : [ {
  "tagA" : "cTag_01/amclPoseReceiver_1",
  "tagB" : "testRobot_1/amclPoseSender_1" },

 

This part establishes the connection between interfaces. The two interfaces to be connected are specified as "tagA" and "tagB".
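
Putting the pieces together, the following sketch assembles the blocks discussed above into a single robot1.config file using Python. The field names follow the format described in this section and the values mirror the excerpts above; the robot-end interface added to complete the connection, and its type, are assumptions made for illustration.

#!/usr/bin/env python
# Sketch that writes the four configuration blocks discussed above into
# robot1.config. Values mirror the excerpts in the text and are illustrative.
import json

config = {
    "url": "http://192.168.5.36:9000/",
    "userID": "testUser",
    "password": "testUser",
    "robotID": "testRobot_1",
    "containers": [{"cTag": "cTag_01"}],
    "nodes": [{
        "cTag": "cTag_01",
        "nTag": "move_client_node_1",
        "pkg": "move_client",
        "exe": "move_client_pthread",
        # executable arguments omitted here; see the nodes block above
        "namespace": "Robot1",
    }],
    "interfaces": [{
        "eTag": "cTag_01",
        "iTag": "amclPoseReceiver_1",
        "iType": "PublisherInterface",
        "iCls": "geometry_msgs/PoseWithCovarianceStamped",
        "addr": "/Robot1/amcl_pose",
    }, {
        # Matching robot-end interface; the exact iType identifier on the robot
        # side is an assumption, see [37] for the definitive list.
        "eTag": "testRobot_1",
        "iTag": "amclPoseSender_1",
        "iType": "SubscriberInterface",
        "iCls": "geometry_msgs/PoseWithCovarianceStamped",
        "addr": "/amcl_pose",
    }],
    "connections": [{
        "tagA": "cTag_01/amclPoseReceiver_1",
        "tagB": "testRobot_1/amclPoseSender_1",
    }],
}

with open('robot1.config', 'w') as f:
    json.dump(config, f, indent=2)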

IV Simulations & Experiments

In this section, we provide details of how the different components of the fleet management system work. The modules discussed here include the global planner, the Gazebo simulation model and the web-based user interface.

IV-A Global Planner

As discussed earlier, the global planner is responsible for generating paths for the robots between their current locations and the target destinations provided by the operator. It receives the location information from each of the robots and the destination information for these robots from the operator, and uses the latest map to generate the necessary paths. In its simplified form, it implements Dijkstra's algorithm [38] [39] [40] on a grid map to find the shortest path between two cells as shown in Figure 7. In this figure, the robots are represented by filled circles, and the start and end destinations of each robot are represented by corresponding symbol pairs. Figure 7(a) shows the case when no obstacles are present in the map. As soon as the path information is transmitted to the robots, they start following their respective paths, as shown by the trail of circular dots along their paths. Figure 7(b) shows the case when an obstacle is created (or detected) in cell number 26 at some time during this motion. This results in the generation of new paths by the global planner. In a simulated environment, the robots can react instantaneously to this change. However, the robots may take some time to react in a real-world scenario due to factors like communication delay and inertia of motion, as shown in this figure. The global planner may also include several other factors, such as the battery life of the robots, additional on-board sensors or actuators (in case of a heterogeneous scenario) and other environmental conditions, to solve a multi-objective optimization problem when generating these paths. Our purpose in this paper has been to demonstrate the working of a complete fleet management system, which invariably requires such a centralized planner for task allocation, and towards this end we pick the simplest path planner as an example. Readers are free to explore other planners in the same context.
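
A minimal sketch of such a grid-based Dijkstra search is given below, assuming cells are numbered row-major on a rectangular grid, blocked cells are excluded and all moves between 4-connected neighbours have unit cost; the grid size and cell numbers are illustrative.

# Grid-based Dijkstra sketch: cells numbered row-major on a width x height
# grid, blocked cells excluded, unit cost between 4-connected neighbours.
import heapq

def plan_path(width, height, start, goal, blocked):
    """Return a list of cell numbers from start to goal, or [] if unreachable."""
    def neighbours(cell):
        r, c = divmod(cell, width)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < height and 0 <= nc < width:
                yield nr * width + nc

    dist, parent = {start: 0}, {start: None}
    queue = [(0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if d > dist[cell]:
            continue                      # stale queue entry
        for nxt in neighbours(cell):
            if nxt in blocked:
                continue
            if nxt not in dist or d + 1 < dist[nxt]:
                dist[nxt], parent[nxt] = d + 1, cell
                heapq.heappush(queue, (d + 1, nxt))
    if goal not in parent:
        return []
    path, cell = [], goal
    while cell is not None:               # walk back from goal to start
        path.append(cell)
        cell = parent[cell]
    return path[::-1]

# Example: re-plan on a 10x10 grid after cell 26 is blocked, as in Figure 7(b).
print(plan_path(10, 10, start=5, goal=95, blocked={26}))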

(a)
(b)
Fig. 7: Paths generated by Global Planner: (a) Paths for three robots obtained without any obstacles. Circular dots show the location of the robot as it traverses this path. (b) Shows new paths generated by the global planner once the user blocks the cell number 26. Grid cells can also be blocked when a robot detects an obstacle.

IV-B Simulation Environment

The simulated environment for the fleet management system is created using Gazebo [41] [42], which is open-source software well integrated with ROS. The steps required to do this on an Ubuntu Linux environment are as follows:

  • Create "model.config" and "model.sdf" file and place them in a folder preferably in folder .gazebo/models/.

  • Create a launch file similar to empty_world.launch and set the "world_name" argument to the address of your newly created world by using the following command:

 

$ roslaunch empty_world.launch world_name:=
            ’address of newly created world’
      

 

The resulting simulated environment is shown in Figure 8. It also shows three obstacles (cuboidal blocks) and three robots which are spawned in the environment. The grid cells on the floor correspond to the grid map used by the global planner shown in Figure 7. Whenever a user blocks a cell in the grid map, a cuboidal block is spawned in the Gazebo environment. The slow performance of the GMapping algorithm in the Gazebo simulation might be overcome by tweaking some scan matching parameters as shown below:

 

<param name="minimumScore" value="10000"/>
<param name="srr" value="0"/>
<param name="srt" value="0"/>
<param name="str" value="0"/>
<param name="stt" value="0"/>
<param name="particles" value="1"/>
      

 

Fig. 8: The simulated Gazebo environment for the fleet management system. There are four obstacles and three robots spawned in the environment.
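
As noted above, a cuboidal block is spawned in Gazebo whenever a user blocks a grid cell. A minimal sketch of how this can be done through the standard gazebo_ros spawn service is given below; the SDF model, the cell size and the cell-to-coordinate mapping are illustrative assumptions.

#!/usr/bin/env python
# Sketch: spawn a unit cuboid at the centre of a blocked grid cell using the
# /gazebo/spawn_sdf_model service provided by gazebo_ros. The SDF string and
# the row-major cell-to-pose mapping are illustrative assumptions.
import rospy
from gazebo_msgs.srv import SpawnModel
from geometry_msgs.msg import Pose

BOX_SDF = """
<sdf version="1.4">
  <model name="obstacle">
    <static>true</static>
    <link name="link">
      <collision name="c"><geometry><box><size>1 1 1</size></box></geometry></collision>
      <visual name="v"><geometry><box><size>1 1 1</size></box></geometry></visual>
    </link>
  </model>
</sdf>"""

def spawn_obstacle(cell, grid_width=10, cell_size=1.0):
    # Convert the planner's grid block number into an (x, y) position at the
    # centre of that cell (row-major numbering assumed).
    row, col = divmod(cell, grid_width)
    pose = Pose()
    pose.position.x = (col + 0.5) * cell_size
    pose.position.y = (row + 0.5) * cell_size
    pose.position.z = 0.5
    rospy.wait_for_service('/gazebo/spawn_sdf_model')
    spawn = rospy.ServiceProxy('/gazebo/spawn_sdf_model', SpawnModel)
    spawn(model_name='obstacle_%d' % cell, model_xml=BOX_SDF,
          robot_namespace='', initial_pose=pose, reference_frame='world')

if __name__ == '__main__':
    rospy.init_node('obstacle_spawner')
    spawn_obstacle(26)   # block cell 26, as in Figure 7(b)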

IV-C Web User Interface

A web interface is also built to interact with the robots and view the robot movement. This is shown in Figure 9(b). The figure shows two windows - one for visualizing the robot motion in the simulated environment and an interactive grid map for user interaction. The user can block out cells to spawn obstacles in the simulated environment and select the starting positions for the robots. The interface is created using Gzweb (http://gazebosim.org/gzweb), a web graphics library (WebGL) front-end for the Gazebo simulator. Like Gzclient, it is a front-end interface to Gzserver and provides visualization of the simulation. It is a comparatively thin and lightweight client that can be accessed through a web browser. The organization of this interface is shown in Figure 9(a). Gzweb uses Gz3D for visualization and interacts with Gzserver through Gzbridge. Gzserver, which forms the core of the Gazebo simulator, can interact with user programs written with ROS APIs. This web-based interface makes the whole system platform-independent, as a user can access the system over the internet without having to worry about installing the pre-requisite software on his/her system.

(a) Browser interaction with Gazebo and ROS
(b) Web interface for cloud based fleet management system
Fig. 9: Simulated Environment for the multi-robot fleet management system.

IV-D The Experimental Setup

In this section, we provide details of our real-world experiment with physical robots. Three Turtlebots [36] are used as autonomous mobile robots (AMRs) in a lab environment as shown in Figure 10(a). The map of the environment is created by using the GMapping SLAM algorithm available with ROS [43]. The map generated is shown in Figure 10(b). Each of the robots runs AMCL [44] [45] to localize itself in the map. It also runs an obstacle avoidance algorithm that uses on-board Kinect depth range information to locate obstacles on the path. These programs run on a low-power Intel Atom processor based netbook with 2 GB RAM that comes with these robots. The map is divided into an equispaced grid to match the grid map used by the global planner shown in Figure 7. The server is a 12-CPU machine with an Intel Xeon processor, 48 GB of RAM and 2 TB of storage space. The robots and the server communicate over a local wireless LAN. The complete video of the experiment [46] as well as the source code [47] are made available online for the convenience of users.

(a) Actual robot setup in a laboratory
(b) RVIZ visualization of map and robots
Fig. 10: Experimental setup for testing the working of the fleet management system.

V Performance Analysis

The performance of each of the three modes of operation is analyzed by performing two different experiments. The details of the experiments and the resulting analysis are provided in this section.

V-A Experiment 1

The schematic of the machine configuration used for this experiment is shown in Figure 11. It shows two physical machines in the network connected to each other through wireless LAN. Figure 11(a) shows the single-master mode where Machine 1 acts as the master running roscore. Machine 2 runs the Gazebo simulation environment as explained in Section IV-B and spawns five Turtlebots in it. Machine 1, apart from running roscore, subscribes to the Kinect scan data from these robots and prints it on a terminal console. Figure 11(b) shows the multi-master mode of operation where both machines run their own roscore processes. As before, Machine 2 runs its own Gazebo simulation environment and spawns a set of five Turtlebots. Each of the machines runs a master_discovery node to detect other masters in the network. Machine 1 runs the master_sync node to subscribe to the scan data from all robots running on the other machine. Figure 11(c) shows the cloud-based mode of operation where Machine 1 acts as the cloud server running Rapyuta nodes such as /rce_master, /rce_robot and /rce_container. Similar to the previous case, the other machine runs its own Gazebo simulation environment and spawns its own set of five Turtlebots. In addition, this machine also runs an /rce_ros node for each of the robots in order to establish a connection with the cloud. In this case as well, the server subscribes to the scan data from all the robots through a container process.

The relative performance of each mode of implementation can be analyzed by studying two parameters, namely, the network usage and the CPU usage of the machines, as explained below. The network usage of Machine 1 for all three configurations is shown in Figure 12. It shows that the single master system generates the maximum traffic while the cloud robotics system generates the least network traffic for the same operation. The corresponding CPU usage of the server as well as the clients in each of these three configurations is shown in Figure 13. It also shows the default publishing rate of messages on the topics for the three configurations. As one can see, a client in the single master system publishes at a higher rate (7.5 Hz) compared to that in the multi-master system (4.5 Hz) or the cloud robotics system (4 Hz). This could be linked to the fact that the CPU usage of the client in a single master system (SMS-C) is the lowest, giving rise to a higher publishing rate. A client in the multi-master system (MMS-C) and the cloud robotics system (CRS-C) is required to run additional processes to establish communication with the server, which leads to higher CPU usage and hence a lower publishing rate. This, however, causes more CPU usage and network usage for the server in the single master system (SMS-S). Overall, it appears advantageous to go for a multi-master or cloud robotics system compared to a single master system, as the former systems lead to lower network traffic at a comparable CPU usage.

We also plot the Round Trip Time (RTT) for the three modes of implementation. It is the time taken by a packet to go from a sender to a receiver and come back to the sender. In this paper, RTT is computed as follows. A message is published on a topic by a node on one machine. This topic is subscribed to by another machine, which, in turn, republishes the message on another topic. This new topic is then subscribed to by the first machine. The time difference between publishing the message and receiving it back on Machine 1 is considered as the round trip time. The two machines are located in the same place, communicating over wireless LAN. The resulting RTT for all three configurations is shown in Figure 14. As expected, the round-trip time increases monotonically with increasing data size and its behaviour is more or less the same for all three configurations. Usually, the round trip time (RTT) is computed for machines which are physically separated by several kilometers [5]. Nevertheless, the RTT behaviour will remain more or less the same as shown in Figure 14, as the network delays between the machines will dominate the minor differences arising out of the internal processes of each configuration.
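
A minimal sketch of this RTT measurement, as run on the first machine, is given below. It assumes the second machine echoes whatever it receives on /rtt_ping back on /rtt_pong (for example with topic_tools relay); the topic names and payload sizes are illustrative.

#!/usr/bin/env python
# RTT probe sketch for the first machine: publish payloads of increasing size
# on /rtt_ping and record the time until the echoed copy arrives on /rtt_pong.
# The second machine can echo with: rosrun topic_tools relay /rtt_ping /rtt_pong
import time
import rospy
from std_msgs.msg import String

send_times = {}

def on_echo(msg):
    # Messages carry a "<seq>:<padding>" payload; recover the sequence number.
    seq = msg.data.split(':', 1)[0]
    if seq in send_times:
        rtt = time.time() - send_times.pop(seq)
        rospy.loginfo('message of %d bytes  RTT %.1f ms', len(msg.data), rtt * 1e3)

if __name__ == '__main__':
    rospy.init_node('rtt_probe')
    pub = rospy.Publisher('/rtt_ping', String, queue_size=1)
    rospy.Subscriber('/rtt_pong', String, on_echo)
    rospy.sleep(1.0)                     # let connections establish
    for i, size in enumerate([1024, 10240, 102400, 1048576]):
        payload = '%d:' % i + 'x' * size
        send_times[str(i)] = time.time()
        pub.publish(String(data=payload))
        rospy.sleep(2.0)                 # wait for the echo before the next size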

(a) Single Master System (SMS)
(b) Multi-master System (MMS)
(c) Cloud Robotics System (CRS)
Fig. 11: Schematic of the simulation experiment carried out for analyzing the performance of each of the three modes of implementation. The figure shows the essential nodes running and topics available for subscription on each of the machines.
Fig. 12: Network usage of Machine 1 for the three modes of implementation. Machine 1 echoes the robots' scan data in each of the three cases. The single master system generates more traffic compared to the other two configurations under identical conditions.
Fig. 13: CPU resource usage on the server as well as the client for the three modes of implementation: single master system (SMS), multi-master system (MMS) and cloud robotics system (CRS). An additional letter ‘S’ or ‘C’ is used to represent a server or a client machine respectively. The figure also shows the default publishing rate of messages on each topic.
Fig. 14: The round trip time (RTT) for the three modes of implementation: single-master, multi-master and cloud based systems. RTT is calculated as the time taken for a message to go from one topic to another and back.

V-B Experiment 2

In this experiment as well, two physical machines are connected to each other through a wireless LAN. The experiment is further simplified by removing the Gazebo simulator, which has a high computational as well as memory footprint. One of the machines publishes images onto a topic which is subscribed to by the other machine, which simply echoes this data on a console. The second machine, subscribing to the image publishing topic, is considered the server as it either runs a roscore process in the single master mode or the Rapyuta engine in the cloud robotics mode of operation. The relative performance of the machines is analyzed and compared in terms of CPU usage and network bandwidth usage as shown in Figure 15. The network usage is almost the same in all three cases as all of them use the same publishing rate and there are no other processes or nodes that generate additional network traffic. However, there is a difference in the CPU usage of these implementations. It is highest in the cloud robotics mode of operation, both on the client as well as the server side. This could be attributed to the additional computational overhead needed for running the cloud processes. The multi-master system has the second highest CPU usage owing to the additional computation needed for running the master_discovery and master_sync processes. Since none of these additional processes are present in the single master mode, the CPU usage is lowest in this case. These observations are in line with our understanding of the systems as explained in the previous sections.
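
A minimal sketch of the kind of image-publishing node used on the first machine is given below, assuming OpenCV and cv_bridge are available; the file name, topic name and publishing rate are illustrative and not taken from the experiment.

#!/usr/bin/env python
# Sketch of an image-publishing node: a fixed test image is read with OpenCV
# and published at a constant rate; the second machine simply echoes the topic
# (e.g. with `rostopic echo`). File name, topic and rate are illustrative.
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

if __name__ == '__main__':
    rospy.init_node('image_publisher')
    pub = rospy.Publisher('/test_image', Image, queue_size=1)
    bridge = CvBridge()
    frame = cv2.imread('test.png')           # hypothetical test image on disk
    msg = bridge.cv2_to_imgmsg(frame, encoding='bgr8')
    rate = rospy.Rate(5)                      # publish at 5 Hz
    while not rospy.is_shutdown():
        msg.header.stamp = rospy.Time.now()
        pub.publish(msg)
        rate.sleep()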

(a) CPU usage
(b) Network Bandwidth Usage
Fig. 15: CPU and Network Usage in the second experiment.

VI Limitations and Future Work

As summarized above, the single master or multi-master ROS systems are not suitable for deployment of fleet management services as a PaaS environment. Both these architectures implement the Networked Robotics model based on Robot-to-Robot (R2R) communication. While they enable the familiar ROS based PaaS environment and transparent availability of sensor data across multiple robots, the following key shortcomings or constraints on an individual robot or a fleet of robots have to be noted: (1) Resource constraints - there are resource constraints on each robot in terms of onboard compute, memory and the robot's power supply, motion mode and working environment. Once deployed, these cannot be easily upgraded. Algorithms which require access to high dimensional data from multiple robots, and therefore larger compute infrastructure, will remain constrained by the overall compute capacity of the network of robots. (2) Communication constraints - higher bandwidth usage within the R2R network of mobile robots will lead to higher network latencies, thereby deteriorating the quality of service. (3) Scalability constraints - on the overall solution, as the number of robots in a mobile fleet increases.

For cloud-based PaaS systems such as Rapyuta, which implements Robot-to-Cloud (R2C) model, the following limitations are identified which need remediation:

  • In its current form, it does not offer high availability [48] for the Rapyuta Master task set, and its failure leads to the collapse of the whole system. This needs remediation by infrastructural mechanisms in combination with checkpoint-restart utilities [49] [50].

  • Of the five key characteristics of cloud services, the current implementation of the Rapyuta PaaS lacks one, namely, elasticity. It has a cannibalized approach in which all containers on a host access the compute, storage and network resources of the host machine, and it does not offer the ability to allocate and resize these containers at run time as the workload changes over time. The utilities for monitoring resource consumption are rudimentary and do not offer advice for migration of containers from one host to another or for resizing.

  • In the current implementation of the cloud platform, there are no provisions for managing communication bandwidth to cater to different traffic situations. In practical scenarios for fleet management, having a logical segregation of communication bandwidth between control and data signals will improve the responsiveness of the R2C system. This is a concern when remote tele-operation is required for an impaired mobile robot in a data-centric network environment. The ability to leverage Multi-Path TCP [51] [52] can also improve the transfer rates of R2C communication as it can make use of multiple interfaces to compensate for congestion in one of the channels.

  • In a large warehouse of several thousand square feet, it is possible that all mobile robots may not always have access to the cloud through the cloud access point. But with alternate communication modalities like Bluetooth, Zigbee or Wi-Fi Direct, they may have connectivity to nearby robots which, in turn, may have access to the cloud infrastructure. In such a scenario, a proxy-based [19] compute topology will be useful where one robot functions as a group leader to bridge the interaction between the set of nearby out-of-coverage robots and the cloud. The current Rapyuta implementation does not provide this topology and would require extensive changes to enable it. However, other topologies such as the clone-based or peer-based models are easier to implement with the current implementation and may be used along with the ROS single-master or multi-master mode to simulate proxy-based systems.

  • In the current implementation of the Rapyuta framework, the partitioning of data and compute across the three options - onboard compute on the robot itself, the robotic R2R network and/or cloud execution - has to be decided upfront and is usually static. Depending on the task and its deadline, whether it is a SLAM, navigation or grasping task in the warehouse, it would be useful to have a framework that can allocate these tasks to suitable compute resources (on the edge / fog / cloud) at run time. Energy-efficient optimization algorithms [53] [54] for task allocation and subsequent path planning and coordination have to be added on top of the Rapyuta platform for warehouse fleet management.

The directions for future work therefore include remediation of the limitations of the Rapyuta cloud framework and engineering the algorithm layer for task allocation, task planning, path planning and coordination, grasping, tele-operation and collaborative SLAM in the context of picker-to-parts warehouse robotics. Future work also needs to add an R2R tier with an ad hoc network (using multi-master ROS) and a suitable elastic compute topology (peer, proxy or clone) on top of the R2C Rapyuta framework.

VII Conclusion

This paper presents the details of the implementation of a fleet management system for a group of autonomous mobile robots (AMRs) using three configurations: single-master, multi-master and a cloud robotics platform. The mobile robots are completely autonomous as far as their navigation capabilities are concerned. These robots are required to traverse the paths provided by a global planner. The global planner implements a basic path planning algorithm to generate paths between the current robot locations and the desired goal locations set by the operator, taking into account obstacles which could be created dynamically at run time. The whole system can be controlled and monitored through a web-based user interface. The details of the implementation for both simulation as well as actual experiments are provided, which will be useful for students and practicing engineers alike. These details provide an insight into the working of each of these modes of operation, allowing us to identify the strengths and weaknesses of each one of them. These insights are further corroborated by analyzing parameters such as network usage, CPU load and round trip time. We also identify the critical limitations of the current cloud robotics platform and provide suggestions for improving them, which form the future direction of our work.

References

  • [1] D. S. Wettergreen and T. D. Barfoot, Field and Service Robotics: Results of the 10th International Conference.   Springer, 2016, vol. 113.
  • [2] Y. Amirat, D. Daney, S. Mohammed, A. Spalanzani, A. Chibani, and O. Simonin, “Assistance and service robotics in a human environment,” Robotics and Autonomous Systems, vol. 75, no. PA, pp. 1–3, 2016.
  • [3] B. D. Argall, S. Chernova, M. Veloso, and B. Browning, “A survey of robot learning from demonstration,” Robotics and autonomous systems, vol. 57, no. 5, pp. 469–483, 2009.
  • [4] International Federation of Robotics (IFR), “Service robot statistics,” http://www.ifr.org/service-robots/statistics/.
  • [5] G. Mohanarajah, D. Hunziker, R. D’Andrea, and M. Waibel, “Rapyuta: A cloud robotics platform,” Automation Science and Engineering, IEEE Transactions on, vol. 12, no. 2, pp. 481–493, 2015.
  • [6] B. Kehoe, S. Patil, P. Abbeel, and K. Goldberg, “A survey of research on cloud robotics and automation,” Automation Science and Engineering, IEEE Transactions on, vol. 12, no. 2, pp. 398–409, 2015.
  • [7] B. Koken and G. Mester, “The evolution of cloud robotics: A survey,” Acta Technica Corviniensis-Bulletin of Engineering, vol. 8, no. 2, p. 23, 2015.
  • [8] P. R. Wurman, R. D’Andrea, and M. Mountz, “Coordinating hundreds of cooperative, autonomous vehicles in warehouses,” AI magazine, vol. 29, no. 1, p. 9, 2008.
  • [9] Amazon Robotics, “Robot based warehouse automation systems,” https://www.amazonrobotics.com/#/.
  • [10] A. Rosenfeld, A. Noa, O. Maksimov, and S. Kraus, “Human-multi-robot team collaboration for efficient warehouse operation,” Autonomous Robots and Multirobot Systems (ARMS), 2016.
  • [11] S. Thrun, W. Burgard, and D. Fox, Probabilistic robotics.   MIT press, 2005.
  • [12] T. A. Wellman and D. E. Winner, “Fleet management system,” Aug. 21 2012, US Patent 8,249,910.
  • [13] J. R. Coffee, R. W. Rudow, R. F. Allen, M. Billings, D. A. Dye, M. L. Kirchner, R. W. Lewis, K. M. Marvin, R. D. Sleeper, W. A. Tekniepe et al., “Vehicle tracking, communication and fleet management system,” Aug. 26 2003, US Patent 6,611,755.
  • [14] W. Schnell, M. Radue, T. Andren, T. Baumann, and M. Johansen, “Fleet management system,” Dec. 30 2014, US Patent App. 14/586,323.
  • [15] WillowGarage, “Robot operating system,” http://www.ros.org/.
  • [16] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, “Ros: an open-source robot operating system,” in ICRA workshop on open source software, vol. 3, no. 3.2.   Kobe, Japan, 2009, p. 5.
  • [17] D. Fox, W. Burgard, F. Dellaert, and S. Thrun, “Monte carlo localization: Efficient position estimation for mobile robots,” AAAI/IAAI, vol. 1999, pp. 343–349, 1999.
  • [18] G. Grisetti, C. Stachniss, and W. Burgard, “Improved techniques for grid mapping with rao-blackwellized particle filters,” IEEE transactions on Robotics, vol. 23, no. 1, pp. 34–46, 2007.
  • [19] G. Hu, W. P. Tay, and Y. Wen, “Cloud robotics: architecture, challenges and applications,” Network, IEEE, vol. 26, no. 3, pp. 21–28, 2012.
  • [20] M. Tenorth, A. C. Perzylo, R. Lafrenz, and M. Beetz, “The roboearth language: Representing and exchanging knowledge about actions, objects, and environments,” in Robotics and Automation (ICRA), 2012 IEEE International Conference on.   IEEE, 2012, pp. 1284–1289.
  • [21] O. Zweigle, R. van de Molengraft, R. d’Andrea, and K. Häussermann, “Roboearth: connecting robots worldwide,” in Proceedings of the 2nd International Conference on Interaction Sciences: Information Technology, Culture and Human.   ACM, 2009, pp. 184–191.
  • [22] G. Mohanarajah, V. Usenko, M. Singh, R. D’Andrea, and M. Waibel, “Cloud-based collaborative 3d mapping in real-time with low-cost robots,” Automation Science and Engineering, IEEE Transactions on, vol. 12, no. 2, pp. 423–431, 2015.
  • [23] B. Kehoe, A. Matsukawa, S. Candido, J. Kuffner, and K. Goldberg, “Cloud-based robot grasping with the google object recognition engine,” in Robotics and Automation (ICRA), 2013 IEEE International Conference on.   IEEE, 2013, pp. 4263–4270.
  • [24] M. K. Ng, S. Primatesta, L. Giuliano, M. L. Lupetti, L. O. Russo, G. A. Farulla, M. Indaco, S. Rosa, C. Germak, and B. Bona, “A cloud robotics system for telepresence enabling mobility impaired people to enjoy the whole museum experience,” in Design & Technology of Integrated Systems in Nanoscale Era (DTIS), 2015 10th International Conference on.   IEEE, 2015, pp. 1–6.
  • [25] X. V. Wang, L. Wang, A. Mohammed, and M. Givehchi, “Ubiquitous manufacturing system based on cloud: A robotics application,” Robotics and Computer-Integrated Manufacturing, 2016.
  • [26] K. Goldberg and B. Kehoe, “Cloud robotics and automation: A survey of related work,” EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2013-5, 2013.
  • [27] J. Wan, S. Tang, H. Yan, D. Li, S. Wang, and A. V. Vasilakos, “Cloud robotics: current status and open issues,” IEEE Access, vol. 4, pp. 2797–2807, 2016.
  • [28] L. Turnbull and B. Samanta, “Cloud robotics: Formation control of a multi robot system utilizing cloud infrastructure,” in Southeastcon, 2013 Proceedings of IEEE.   IEEE, 2013, pp. 1–4.
  • [29] D. Hennes, D. Claes, W. Meeussen, and K. Tuyls, “Multi-robot collision avoidance with localization uncertainty,” in Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems-Volume 1.   International Foundation for Autonomous Agents and Multiagent Systems, 2012, pp. 147–154.
  • [30] J. Coffee, R. Rudow, R. Allen, M. Billings, D. Dye, M. Kirchner, R. Lewis, K. Marvin, R. Sleeper, and W. Tekniepe, “Vehicle tracking, communication and fleet management system,” Feb. 26 2004, US Patent App. 10/646,715. [Online]. Available: https://www.google.co.in/patents/US20040039504
  • [31] S. T. S. Thong, C. T. Han, and T. A. Rahman, “Intelligent fleet management system with concurrent GPS & GSM real-time positioning technology,” in 2007 7th International Conference on ITS Telecommunications.   IEEE, 2007, pp. 1–6.
  • [32] S. H. Juan and F. H. Cotarelo, “Multi-master ros systems,” Citeseer, Tech. Rep., 2015.
  • [33] D. Hunziker, M. Gajamohan, M. Waibel, and R. D’Andrea, “Rapyuta: The roboearth cloud engine,” in Robotics and Automation (ICRA), 2013 IEEE International Conference on.   IEEE, 2013, pp. 438–444.
  • [34] R. Dua, A. R. Raja, and D. Kakadia, “Virtualization vs containerization to support PaaS,” in Cloud Engineering (IC2E), 2014 IEEE International Conference on.   IEEE, 2014, pp. 610–614.
  • [35] A. M. Joy, “Performance comparison between linux containers and virtual machines,” in Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in.   IEEE, 2015, pp. 342–346.
  • [36] Turtlebot 2.0, “A low-cost, personal robot kit with open-source software,” http://www.turtlebot.com/.
  • [37] R. D. Resources, “Interfaces, type definition,” http://rapyuta.org/developer_resources.
  • [38] A. V. Goldberg and C. Harrelson, “Computing the shortest path: A* search meets graph theory,” in Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms.   Society for Industrial and Applied Mathematics, 2005, pp. 156–165.
  • [39] A. V. Goldberg, H. Kaplan, and R. F. Werneck, “Reach for A*: Efficient point-to-point shortest path algorithms,” in Proceedings of the Meeting on Algorithm Engineering & Experiments.   Society for Industrial and Applied Mathematics, 2006, pp. 129–143.
  • [40] S. Skiena, “Dijkstra’s algorithm,” Implementing Discrete Mathematics: Combinatorics and Graph Theory with Mathematica, Reading, MA: Addison-Wesley, pp. 225–227, 1990.
  • [41] N. Koenig and A. Howard, “Design and use paradigms for gazebo, an open-source multi-robot simulator,” in Intelligent Robots and Systems, 2004.(IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on, vol. 3.   IEEE, 2004, pp. 2149–2154.
  • [42] Gazebo, “Robot simulation made easy,” http://gazebosim.org/.
  • [43] J. M. Santos, D. Portugal, and R. P. Rocha, “An evaluation of 2d slam techniques available in robot operating system,” in 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR).   IEEE, 2013, pp. 1–6.
  • [44] AMCL, “Adaptive Monte Carlo localization algorithm,” http://wiki.ros.org/amcl.
  • [45] S. Zaman, W. Slany, and G. Steinbauer, “Ros-based mapping, localization and autonomous navigation using a pioneer 3-dx robot and their relevant issues,” in Electronics, Communications and Photonics Conference (SIECPC), 2011 Saudi International.   IEEE, 2011, pp. 1–5.
  • [46] N. Kejriwal and P. Pallav, “Demonstration of cloud based fleet management system,” https://www.youtube.com/watch?v=QMA6dnBweE0.
  • [47] Simulation Code, “Cloud robotics based fleet management system,” https://gitlab.com/prasun2712/Cloud_Robotics_Simulated_Demo.
  • [48] J. Gray and D. P. Siewiorek, “High-availability computer systems,” Computer, vol. 24, no. 9, pp. 39–48, 1991.
  • [49] O. Laadan and S. E. Hallyn, “Linux-cr: Transparent application checkpoint-restart in linux,” in Linux Symposium.   Citeseer, 2010, pp. 159–172.
  • [50] J. Duell, P. H. Hargrove, and E. S. Roman, “Requirements for linux checkpoint/restart,” Lawrence Berkeley National Laboratory, 2002.
  • [51] A. Ford, C. Raiciu, M. Handley, S. Barre, and J. Iyengar, “Architectural guidelines for multipath tcp development,” Tech. Rep., 2011.
  • [52] S. Barré, C. Paasch, and O. Bonaventure, “Multipath tcp: from theory to practice,” in International Conference on Research in Networking.   Springer, 2011, pp. 444–457.
  • [53] A. Vergnano, C. Thorstensson, B. Lennartson, P. Falkman, M. Pellicciari, F. Leali, and S. Biller, “Modeling and optimization of energy consumption in cooperative multi-robot systems,” IEEE Transactions on Automation Science and Engineering, vol. 9, no. 2, pp. 423–428, 2012.
  • [54] F. Dressler and G. Fuchs, “Energy-aware operation and task allocation of autonomous robots,” in Proceedings of the Fifth International Workshop on Robot Motion and Control, 2005. RoMoCo’05.   IEEE, 2005, pp. 163–168.

Aniruddha Singhal is working as a Researcher at Innovation Labs in Tata Consultancy Services. He received his Bachelor’s degree in computer science from Madhav Institute of Technology, Gwalior in 2014 and his Master’s degree in System Science from Indian Institute of Technology Jodhpur in 2016. His current research interests include Machine Learning, Computer Vision and Robotics.

Nishant Kejriwal obtained his Bachelor’s degree in Computer Science from Indian Institute of Technology Jodhpur in 2012. Since then, he has been working as a researcher at Innovation Labs in Tata Consultancy Services. His research interests include Machine Learning, Robotics and Computer Vision.

Prasun Pallav obtained his Bachelor’s degree in computer science engineering from West Bengal University of Technology in 2014. Since then, he has been working as a system engineer at Tata Consultancy Services, New Delhi, India. His research interests include Linux System Programming, Robotics and Computer Vision.

Soumyadeep Choudhury obtained his Bachelor’s degree in Electronics and Communication Engineering in 2015 from Academy Of Technology, West Bengal University Of Technology. Since then, he has been working as a researcher at Innovation Labs, Tata Consultancy Services, New Delhi, India. His research interests include Linux System Programming, Robotics and Computer Vision.

Rajesh Sinha holds a Bachelor’s degree in Electrical and Electronics Engineering from BITS Pilani and a Master’s degree in Comparative Religion from Dayalbagh University. He has over 20 years of experience building engineered software and hardware solutions for Transportation, Logistics, Government and Retail industries and startups. He is currently heading the Smart Machines Programme at Tata Consultancy Services’ research and innovation division.

Swagat Kumar (S’08-M’13) obtained his Bachelor’s degree in Electrical Engineering from North Orissa University in 2001. He obtained his Master’s and PhD degrees in Electrical Engineering from IIT Kanpur in 2004 and 2009, respectively. He was a postdoctoral fellow at Kyushu University in Japan for about a year and then worked as an assistant professor at IIT Jodhpur for about two years before joining TCS Research in 2012. He currently heads the robotics research group at Tata Consultancy Services, New Delhi, India. His research interests are in Machine Learning, Robotics and Computer Vision. He is a member of the IEEE Robotics and Automation Society.
