Practical Aspects of Autonomous Exploration with a Kinect2 sensor

Miroslav Kulich, Vojtěch Lhotský, Libor Přeučil

The authors are with the Czech Institute of Informatics, Robotics and Cybernetics, Czech Technical University in Prague, Zikova 1903/4, 166 36 Prague, Czech Republic. Emails: kulich@cvut.cz, lhotsvoj@fel.cvut.cz, preucil@cvut.cz
Abstract

Exploration of an unknown environment by a mobile robot is a complex task that involves solving many fundamental problems, ranging from data processing and localization to high-level planning and decision making. The exploration framework we developed is based on processing RGBD data provided by an MS Kinect2 sensor, which makes it possible to take advantage of state-of-the-art SLAM (Simultaneous Localization and Mapping) algorithms and to autonomously build a realistic 3D map of the environment with projected visual information about the scene. In this paper, we describe practical issues that appeared during deployment of the framework in real indoor and outdoor environments and, in particular, discuss the behavior of SLAM algorithms processing MS Kinect2 data on an embedded computer.

I Introduction

Exploration is the process of autonomous navigation of a mobile robot in an a priori unknown environment in order to build a map of that environment. An exploration algorithm is an iterative procedure consisting of updating the map from current sensory data, determining a new goal, and navigating towards this goal. The algorithm terminates once a complete map of the environment has been built.
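For illustration, the following minimal sketch captures this generic loop; the helper objects (robot, mapper, planner) and their methods are hypothetical and do not correspond to the authors' actual interfaces.

```python
# Generic exploration loop as described above; all interfaces are hypothetical.
def explore(robot, mapper, planner):
    world_map = mapper.empty_map()
    while not mapper.is_complete(world_map):
        scan = robot.sense()                                # current sensor measurement
        world_map = mapper.update(world_map, scan)          # map updating
        goal = planner.next_goal(world_map, robot.pose())   # goal determination
        robot.navigate_to(goal)                             # navigation towards the goal
    return world_map
```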

The exploration approach used [1] assumes that the robot moves on a plane and that exploration can therefore be done in 2D. Specifically, a polygonal map is used for goal determination and planning: a 2D scan is simulated from a single measurement of the MS Kinect2 and approximated with a polygon, and the individual polygons are then merged by a modified Vatti clipping algorithm [2]. To provide a correct pose of the robot, RTAB-Map [3], an RGBD graph-based SLAM approach built around an incremental appearance-based loop-closure detector, is employed. Moreover, this approach builds a colored three-dimensional model of the environment, which is the final product of the exploration. RTAB-Map was chosen based on a comparison of available SLAM libraries, see Table I.
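As an illustration of the polygon-merging step, the following sketch fuses simulated scan polygons into a single explored-area polygon. Shapely's union is used here purely as a stand-in for the modified Vatti algorithm [2], and all names and the default range value are hypothetical.

```python
# Illustrative sketch only; the actual system merges polygons with a modified
# Vatti clipping algorithm [2], not with Shapely.
import math
from shapely.geometry import Polygon
from shapely.ops import unary_union

def scan_to_polygon(pose, ranges, angles, max_range=4.5):
    """Turn a simulated 2D scan into a polygon (robot position plus hit points)."""
    x, y, theta = pose
    points = [(x, y)]
    for r, a in zip(ranges, angles):
        r = min(r, max_range)  # clamp missing returns to the assumed sensor range
        points.append((x + r * math.cos(theta + a), y + r * math.sin(theta + a)))
    return Polygon(points)

def merge_into_map(map_poly, scan_poly):
    """Merge a new scan polygon into the polygon of the explored area."""
    return scan_poly if map_poly is None else unary_union([map_poly, scan_poly])
```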

The software part of the system has been realized as a set of ROS (Robot Operating System) nodes. Data from the Kinect sensor are read through the libfreenect2 library [4], which provides basic drivers for the Kinect2, while the connection to ROS is provided by the Kinect2 Bridge library [5]. A modified ER1 robot by Evolution Robotics was used as the robotic platform for the experiments, and all computations were performed on an Intel NUC mini computer with a Core i5 processor mounted directly on the robot.
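A minimal example of consuming the Kinect2 Bridge output in a ROS node might look as follows; the topic names follow the usual iai_kinect2 convention (/kinect2/qhd/...) but depend on how the bridge is configured, so they should be treated as assumptions.

```python
#!/usr/bin/env python
# Minimal sketch of a ROS node reading kinect2_bridge output; topic names are
# assumptions based on the common iai_kinect2 configuration.
import rospy
from sensor_msgs.msg import Image, PointCloud2

def depth_cb(msg):
    rospy.loginfo_throttle(5, "depth image %dx%d" % (msg.width, msg.height))

def cloud_cb(msg):
    rospy.loginfo_throttle(5, "point cloud with %d points" % (msg.width * msg.height))

if __name__ == "__main__":
    rospy.init_node("kinect2_listener")
    rospy.Subscriber("/kinect2/qhd/image_depth_rect", Image, depth_cb)
    rospy.Subscriber("/kinect2/qhd/points", PointCloud2, cloud_cb)
    rospy.spin()
```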

II Real environment issues

The Kinect 2 uses a time-of-flight method for building a depth image, and thus (contrary to the first version) it should work outdoors even on a bright day. An experiment was carried out in the outdoor environment depicted in Fig. 2(a) to verify this assumption and the resistance of the sensor to sunlight. The environment contains several objects at various distances from the sensor: a barrel (1 m from the sensor), a chair (2 m), flowers (3 m), and several trees and other objects placed 4–15 m from the sensor.

Fig. 1: Duration of the RTAB-Map update and the number of nodes in the map as functions of mapping time (test performed on the Intel NUC computer).

The first experiment was done in very bright sunlight at 1 PM (May 7, 2016, near Prague). The results (Fig. 2(b)) show that the data are very noisy and only the barrel is represented with acceptable quality. The front part of the chair is still visible, but with a large number of missing points. In general, the range of the Kinect 2 sensor on a very bright day is approximately 1.2–2.4 m, depending on the material of the detected obstacle and many other factors (whether part of the obstacle is in shadow, whether the sun shines directly on the sensor, and the reflectivity of the object's surface). Running RTAB-Map is thus nearly impossible, as it cannot localize effectively due to the limited range and the large amount of noise.

Fig. 2: (a) The reference environment, (b) Kinect 2 data on a sunny day, (c) data with most of the scene covered in shadow, (d) data in the evening.
TABLE I: Comparison of RGBD SLAM libraries.

Rgbdslam v2 [6]:
  - Very slow updates of the map
  + Can re-localize after looking at a known position
  + Well readable map, also published through ROS
  + All important data are published through ROS topics
  + Wide range of parameters

RTAB-Map [3]:
  ± Faster than Rgbdslam, slower than ORB-SLAM, but provides fast visual odometry
  ± Problems with re-localization, but it can use the wheel odometry
  + Well readable map, also published through ROS
  + All important data are published through ROS topics & services
  + Can utilize the odometry provided by the ER1 robot
  + Wide range of parameters

ORB-SLAM 2 [7]:
  + Very fast updates of the map
  + Can successfully re-localize after being lost for a long time, but also becomes lost more often than the others
  - The map is not very readable (for people)
  - No data are published through ROS topics
  - Low range of parameters

The same area was also scanned later in the afternoon, when most of the scene was covered by the shadow of a building but the sunlight was still bright, so even the shaded areas were illuminated, see Fig. 2(c). The range increased to 3.5–6 m under these circumstances, and the majority of objects within a 3 m range are well represented with almost no noise or outliers. RTAB-Map SLAM is able to run, but localization precision is worse than under ideal conditions.

The last experiment was done in the evening, when the scene illumination is much lower, see Fig. 2(d). The sensor correctly detected objects within a 12 m range, and RTAB-Map worked with precision similar to that achieved indoors.

The field of view of the sensor is only  degrees, which causes big problems for the SLAM, especially when the robot is rotating. Unrecoverable errors appear even though the angular velocity is limited to  (lowering the velocity even further would make the exploration too slow).

When the exploration runs for a long period of time, its refresh duration exceeds 2 seconds (see the next paragraph), and the robot rotates by up to  in the meantime. Due to the narrow field of view, the SLAM can then match features from only roughly  of the image.
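A back-of-the-envelope calculation of the overlap between consecutive images can be sketched as follows; the field-of-view, angular-velocity, and update-period values are hypothetical placeholders, not the numbers omitted above.

```python
# Rough sketch: fraction of the previous image that can still be matched after
# the robot rotates between two SLAM updates. All numbers are assumed examples.
def overlap_fraction(fov_deg, yaw_rate_deg_s, update_period_s):
    rotated = yaw_rate_deg_s * update_period_s       # rotation between updates
    return max(0.0, (fov_deg - rotated) / fov_deg)   # shared angular sector

# Example: an assumed ~70 deg horizontal FOV, a slow turn, and a 2 s update period.
print(overlap_fraction(70.0, 15.0, 2.0))  # ~0.57 of the image overlaps
```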

RTAB-Map has an option for limiting the computation time, but at the cost of lower quality, and even then the duration of the update gradually increases as the map grows. The SLAM must therefore rely on odometry data. The visual odometry in RTAB-Map is fast and precise enough to bridge the long RTAB-Map updates in most cases. On the other hand, it can fail due to a low number of features, and then the only working source of position data is the wheel odometry, which slowly drifts.

The duration of the RTAB-Map update is directly linked to the field-of-view problem. The update time gradually increases; the update-time limit was set to  s in the experiment presented in Fig. 1. As can be seen, this threshold is exceeded approximately  s after the start, and the update time continues to grow as the map expands.
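For reference, RTAB-Map exposes this computation-time limit as the parameter Rtabmap/TimeThr (in milliseconds). The sketch below shows one hedged way of setting it; with rtabmap_ros the parameter is normally passed in the launch file, the call only takes effect if made before the rtabmap node starts, and the /rtabmap namespace is an assumption about how the node is launched.

```python
# Hedged sketch: bound the per-update cost of RTAB-Map's memory management.
# Namespace and value are assumptions; normally set in the launch file.
import rospy

rospy.init_node("rtabmap_tuning", anonymous=True)
rospy.set_param("/rtabmap/Rtabmap/TimeThr", "700")  # limit each update to ~0.7 s
```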

Reflective surfaces in the environment cause a large amount of noise as well as silhouettes of the reflected objects. Most of that noise can be filtered out, but the silhouettes remain in the map. The worst example of such a reflective surface is a glass wall, which generates a large amount of noise as well as points from both the glass wall itself and the objects behind it. The robot cannot always detect such a wall and may try to drive through it.

On the other hand, in most cases the wall is detected in the form of a few points, which is sufficient for exploration, as these points form a single obstacle that prevents the robot from colliding with the wall.
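The noise filtering mentioned above could, for instance, be realized by statistical outlier removal; the sketch below is only an illustration and not the filter actually used. Note that the mirrored silhouette points form dense clusters and therefore tend to survive such filtering, consistent with the observation that silhouettes remain in the map.

```python
# Illustrative sketch (not the paper's filter): suppress sparse depth noise with
# statistical outlier removal from Open3D.
import open3d as o3d

def remove_sparse_noise(cloud, neighbors=20, std_ratio=2.0):
    filtered, _ = cloud.remove_statistical_outlier(nb_neighbors=neighbors,
                                                   std_ratio=std_ratio)
    return filtered

cloud = o3d.io.read_point_cloud("kinect2_scan.pcd")  # hypothetical file name
print(remove_sparse_noise(cloud))
```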

The quality and success of the whole exploration highly depend on the number of detected visual features. As this number decreases, the SLAM is unable to register the scenes, which leads to misalignment of a new scan with the map. Such errors can sometimes be repaired by a loop closure, but they often remain uncorrected in the map.
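A quick per-frame check of the available visual texture can be sketched as follows; the detector choice and thresholds are arbitrary examples, not values used in the paper.

```python
# Hedged illustration: estimate whether a frame carries enough visual texture
# for feature-based registration by counting ORB keypoints.
import cv2

def enough_features(bgr_image, min_features=100):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    keypoints = cv2.ORB_create(nfeatures=500).detect(gray, None)
    return len(keypoints) >= min_features
```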

Acknowledgments

This work has been supported by the Technology Agency of the Czech Republic under the project no. TE01020197 “Centre for Applied Cybernetics” and by the EU Horizon 2020 project 688117 “SafeLog”.

References

  • [1] http://imr.ciirc.cvut.cz/Research/EAPD.
  • [2] T. Juchelka, M. Kulich, and L. Přeučil, “Multi-robot exploration in the polygonal domain,” in Int. Conf. on Automated Planning: Workshop on Planning and Robotics, 2013.
  • [3] M. Labbé and F. Michaud, “Online global loop closure detection for large-scale multi-session graph-based SLAM,” in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sept 2014, pp. 2661–2666.
  • [4] https://github.com/OpenKinect/libfreenect2.
  • [5] https://github.com/code-iai/iai_kinect2/tree/master/kinect2_bridge.
  • [6] F. Endres, J. Hess, J. Sturm, D. Cremers, and W. Burgard, “3-D mapping with an RGB-D camera,” IEEE Transactions on Robotics, vol. 30, no. 1, pp. 177–187, Feb 2014.
  • [7] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, “ORB-SLAM: A versatile and accurate monocular SLAM system,” IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, Oct 2015.