Shared Autonomy in Web-based Human-Robot Interaction
In this paper, we aim to achieve a balanced human-robot division of work by implementing shared autonomy through a web interface. Shared autonomy integrates user input with the autonomous capabilities of the robot and thereby increases the robot's overall performance. Presenting only the relevant information on the web page reduces the operator's cognitive load, and the use of a web browser makes the interface device-independent. Through our web interface, the operator interacts directly with the displayed information using a point-and-click paradigm. Further, we propose to employ human-robot mutual adaptation in a shared-autonomy setting through our web interface for effective team collaboration.
1. Introduction
There has been an increase in the number of applications for robot teleoperation, including military (Kot and Novák, 2018), industrial (Korpela et al., 2015), surveillance (Lopez et al., 2013), telepresence (Lankenau, 2016), and remote experimentation (Pitzer et al., 2012). Improving operator efficiency and ensuring the safe navigation of robots are of utmost importance. Studies show that human-robot joint problem solving ensures safe and effective task execution (Musić and Hirche, 2017). Humans are better at reasoning and creativity, whereas robots are better at carrying out a particular task precisely and repeatedly. Combining robot capabilities with human skills therefore results in an enhanced human-robot interaction.
Shared autonomy combines user input with an autonomous system. Fully autonomous robots operate without user involvement but can typically be deployed only in predefined environments; with teleoperated robots, control lies solely in the hands of the user. Amalgamating the capabilities of autonomous robots with a certain level of user involvement can reduce human effort while retaining robot precision.
Improved robot teleoperation interfaces have been an active area of research (Shi et al., 2017; Lee, 2012; Fong et al., 2001; Rea et al., 2017). Remote access to robots through user-friendly interfaces is essential for effective teleoperation, yet many interfaces demand expert knowledge to control the robot, which increases the operator's cognitive load. The interface described in (Birkenkampf et al., 2014) implements shared autonomy but is limited to a tablet computer. Our web-based interface tries to overcome such typical shortcomings in accessibility and usability, enabling simple control of the robot for novices and experts alike. It allows users to control robots within their home or workplace through any web-enabled device. Because the web clients are built on modern web standards, users do not need to download any extra software. Furthermore, all visualization data is displayed on the web page, so users can conveniently carry out the entire navigation process from the web interface alone.
The goal of this research is to provide an intuitive human-robot web interface that enables an untrained operator to effectively command a robot. The interface was initially built and tested on our lab's three-wheeled telepresence robot. The design is open-ended and can be extended to various use cases.
2. Research Work
2.1. Software Infrastructure
ROS (Robot Operating System) serves as the backend for many robots today. It is an open-source middleware platform that provides several useful libraries and tools for developing robot applications. The telepresence robot of our lab is based on ROS, and we use the ROS navigation stack to perform autonomous navigation.
For hosting our web page, we employ the roswww package, which provides an HTTP web server on a specified port. The user can therefore access the web page as long as they are connected to the Wi-Fi network shared by the robot. Finally, the web_video_server package displays the live video feed from the robot's camera on the web page, streaming the images from a ROS image topic over HTTP.
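As an illustration, the stream URL that web_video_server exposes can be assembled as follows (a minimal sketch; the host name, port, and topic are deployment-specific examples, and `quality` is one of the optional stream parameters):

```python
def stream_url(host, topic, port=8080, quality=50):
    """Build the MJPEG stream URL served by web_video_server.

    web_video_server listens on port 8080 by default and serves
    /stream?topic=<image topic>; extra query parameters such as
    `quality` tune the JPEG compression of the stream.
    """
    return f"http://{host}:{port}/stream?topic={topic}&quality={quality}"

# The web page embeds this URL in an <img> element to show the live feed.
print(stream_url("robot.local", "/camera/rgb/image_raw"))
```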
2.2. User Interaction
The web interface (shown in Fig. 2) is divided into two parts: manual teleoperation and autonomous navigation. In manual control, the user is provided with an on-screen touch-capable joystick and the live video feed from the robot's camera. The extent of the joystick's pull determines the fraction of the maximum velocity, and its orientation determines the corresponding linear and angular velocities. These are published on the velocity topic as a geometry_msgs/Twist message. The current maximum linear and angular speeds of the robot are displayed; initially, they are set to 0.5 m/s and 1 rad/s respectively. Two buttons each are provided with which the user can increase or decrease these limits by ten percent of the current value. By default, the camera topic is set to /camera/rgb/image_raw; the user can change it by typing a different topic name in the space provided. On clicking the "Load Video" button, the video feed loads.
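One plausible mapping from the joystick state to the published velocities is sketched below. The function names and the cosine/sine decomposition are illustrative assumptions, not the exact implementation; the defaults match the initial 0.5 m/s and 1 rad/s limits:

```python
import math

def joystick_to_twist(pull, angle, max_linear=0.5, max_angular=1.0):
    """Map joystick pull fraction (0..1) and orientation angle (radians,
    0 = straight ahead) to (linear, angular) velocities.

    The pull scales the speed; the orientation splits it between the
    forward and turning components of the Twist message.
    """
    pull = max(0.0, min(1.0, pull))          # clamp to the valid range
    linear = pull * math.cos(angle) * max_linear
    angular = pull * math.sin(angle) * max_angular
    return linear, angular

def adjust_speed(current, step=0.10):
    """The speed buttons change a limit by ten percent of its current value."""
    return current * (1.0 + step), current * (1.0 - step)

# Full pull straight ahead gives maximum linear speed and no rotation.
print(joystick_to_twist(1.0, 0.0))   # (0.5, 0.0)
```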
A joystick is used instead of buttons because it employs a game-like control strategy that is intuitive even for non-expert users. For better teleoperation, we enrich the information provided to the user: the real-time video on the web page gives a robot-centered perspective, replicating the environment the user would perceive in place of the robot.
The autonomous navigation section offers the user point-and-click navigation to arbitrary locations. The current position of the robot is displayed as a yellow pulsating arrow on the map. The user sets a goal position and orientation by clicking on an arbitrary point on the map; the goal is marked by a red arrow and sent to the move_base node, which plans a safe path to it. The robot then starts moving autonomously towards the goal position. The user can zoom in and out of the map with the Ctrl key and mouse movements, and pan the map with the Shift key and mouse movements.
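The click handler has to convert pixel coordinates on the rendered map into a pose in the map frame. A sketch, assuming the standard occupancy-grid convention (resolution in metres per cell, origin at the grid's lower-left corner) and the usual yaw-to-quaternion conversion for a planar robot; the function name and parameters are ours:

```python
import math

def pixel_to_goal(px, py, resolution, origin_x, origin_y, yaw):
    """Convert a clicked map cell and drag direction into a goal pose.

    (px, py): cell indices on the occupancy grid.
    resolution: metres per cell; (origin_x, origin_y): world position
    of the grid's lower-left corner. yaw: goal heading in radians,
    taken from the direction in which the goal arrow was dragged.
    """
    x = origin_x + px * resolution
    y = origin_y + py * resolution
    # Planar orientation as a quaternion (pure rotation about the z axis).
    qz, qw = math.sin(yaw / 2.0), math.cos(yaw / 2.0)
    return {"position": (x, y), "orientation": (0.0, 0.0, qz, qw)}

goal = pixel_to_goal(100, 40, 0.05, -5.0, -5.0, math.pi / 2)
print(goal["position"])   # (0.0, -3.0)
```

The resulting pose is what gets published to move_base as the navigation goal.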
2.3. Shared Autonomy
The web application gives the user control over the robot while simultaneously using the existing autonomous navigation and obstacle avoidance capabilities to ensure safety and correct operation. It enables varying levels of autonomy during task execution: the user is free to choose anything between full control and full autonomy, which changes the level of user involvement in carrying out the tasks.
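The simplest form of this arbitration is a linear blend between the user's command and the autonomous command, with the blending weight set by the chosen autonomy level. This is a common shared-control scheme, sketched here as an assumption rather than the paper's exact mechanism; α = 0 is full teleoperation and α = 1 full autonomy:

```python
def blend(user_cmd, auto_cmd, alpha):
    """Linearly blend user and autonomous (linear, angular) commands.

    alpha is the autonomy level: 0 passes the user command through
    unchanged, 1 hands control entirely to the autonomous planner.
    """
    alpha = max(0.0, min(1.0, alpha))        # keep the weight in [0, 1]
    return tuple(alpha * a + (1.0 - alpha) * u
                 for u, a in zip(user_cmd, auto_cmd))

print(blend((1.0, 0.0), (0.0, 1.0), 0.25))   # (0.75, 0.25)
```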
When the user marks the goal position on the map, the move_base node calculates the shortest path using Dijkstra's algorithm, and the robot starts following it. If the user feels that the robot is malfunctioning, or wants the robot to take a different path to the goal, the user can override the control. By implementing shared autonomy, we can achieve results better than either the human or the robot could achieve alone.
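For concreteness, here is Dijkstra's algorithm on a toy 4-connected grid, the same search that move_base's default global planner runs over the (inflated) costmap; the real planner operates on costmap cells with non-uniform traversal costs, so this uniform-cost grid is only a sketch:

```python
import heapq

def dijkstra(grid, start, goal):
    """Shortest path on a 4-connected grid; 1 marks an obstacle cell."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry, skip it
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(queue, (nd, (nr, nc)))
    # Reconstruct the path by walking the predecessor links backwards.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(dijkstra(grid, (0, 0), (2, 0)))   # detours around the obstacle row
```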
3. Future Work and Conclusion
In this paper, we have developed a web interface for enhanced human-robot interaction that enables joint problem solving. User studies analyzing how operators control the robot can be carried out to improve the interface, and the logged data can also be used in fields such as robot learning from demonstration (LfD).
We plan to integrate a bounded-memory adaptation model (BAM) of the human teammate into a partially observable stochastic process that enables the robot to adapt to the human. Studies show that this retains a high level of user trust in the robot and significantly improves human-robot team performance (Nikolaidis et al., 2017, 2016). We plan to implement this on a mobile manipulator through our web interface.
If the user is adaptable, the robot suggests a good strategy that may be unknown to the user; if not, the robot complies with the user's decision in order to retain their trust. We assume that the robot knows the optimal goal for the task: reaching the goal position. Consider a situation where the robot has two choices to avoid an obstacle: it can take the right or the left path. Taking the right path is the better choice, for instance because the left path is too long, or because the robot has less uncertainty about the right part of the map. Intuitively, if the human insists on the left path, the robot should comply; failing to do so can erode the user's trust in the robot and may lead to disuse of the system.
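A minimal sketch of this decision rule, assuming the bounded-memory model is summarised by a single adaptability estimate computed from the last k interactions. The class name, the frequency-count update, and the threshold are our illustrative simplifications, not the partially observable formulation of Nikolaidis et al.:

```python
from collections import deque

class BoundedMemoryAdaptation:
    """Track whether the user has followed the robot's suggestions over
    the last k interactions, and decide whether to propose the robot's
    preferred path or to comply with the user's choice."""

    def __init__(self, k=3, threshold=0.5):
        self.history = deque(maxlen=k)    # 1 = user adapted, 0 = insisted
        self.threshold = threshold

    def observe(self, user_adapted):
        self.history.append(1 if user_adapted else 0)

    def adaptability(self):
        if not self.history:
            return 0.5                    # uninformed prior
        return sum(self.history) / len(self.history)

    def choose(self, robot_pref, user_pref):
        # Suggest the robot's (assumed optimal) path only while the user
        # appears adaptable; otherwise comply to preserve trust.
        if self.adaptability() >= self.threshold:
            return robot_pref
        return user_pref

bam = BoundedMemoryAdaptation(k=3)
bam.observe(False)                        # user insisted on the left path
bam.observe(False)                        # ... and insisted again
print(bam.choose("right", "left"))        # the robot complies: left
```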
References
- Birkenkampf et al. (2014). A knowledge-driven shared autonomy human-robot interface for tablet computers. In 2014 IEEE-RAS International Conference on Humanoid Robots, pp. 152–159.
- Crick et al. (2017). Rosbridge: ROS for non-ROS users. In Robotics Research, pp. 493–504.
- Fong et al. (2001). Advanced interfaces for vehicle teleoperation: collaborative control, sensor fusion displays, and remote driving tools. Autonomous Robots 11(1), pp. 77–85.
- Korpela et al. (2015). Applied robotics for installation and base operations for industrial hygiene. In 2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA), pp. 1–6.
- Kot and Novák (2018). Application of virtual reality in teleoperation of the military mobile robotic system TAROS. International Journal of Advanced Robotic Systems 15(1).
- Lankenau (2016). Virtour: telepresence system for remotely operated building tours.
- Lee (2012). Web applications for robots using rosbridge.
- Lopez et al. (2013). WatchBot: a building maintenance and surveillance system based on autonomous robots. Robotics and Autonomous Systems 61(12), pp. 1559–1571.
- Musić and Hirche (2017). Control sharing in human-robot team interaction. Annual Reviews in Control 44, pp. 342–354.
- Nikolaidis et al. (2016). Formalizing human-robot mutual adaptation: a bounded memory model. In The Eleventh ACM/IEEE International Conference on Human-Robot Interaction, pp. 75–82.
- Nikolaidis et al. (2017). Human-robot mutual adaptation in shared autonomy. In 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 294–302.
- Osentoski et al. Robots as web services: reproducible experimentation and application development using rosjs. IEEE.
- Pitzer et al. (2012). PR2 remote lab: an environment for remote development and experimentation. In 2012 IEEE International Conference on Robotics and Automation, pp. 3200–3205.
- Rea et al. (2017). Movers, shakers, and those who stand still: visual attention-grabbing techniques in robot teleoperation. In 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 398–407.
- Shi et al. (2017). Web-based human robot interaction via live video streaming and voice. In International Conference on Intelligent Robotics and Applications, pp. 393–404.
- Toris et al. (2015). Robot web tools: efficient messaging for cloud robotics. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4530–4537.