Semi-Dense 3D Semantic Mapping from Monocular SLAM

Xuanpeng LI
Southeast University
2 Si Pai Lou, Nanjing, China
li_xuanpeng@seu.edu.cn
   Rachid Belaroussi
IFSTTAR, COSYS/LIVIC
25 allée des Marronniers, 78000 Versailles, France
rachid.belaroussi@ifsttar.fr
Abstract

The bundling of geometry and appearance in computer vision has proven to be a promising solution for robots across a wide variety of applications. Stereo cameras and RGB-D sensors are widely used to realise fast 3D reconstruction and trajectory tracking in a dense way. However, they lack the flexibility to switch seamlessly between differently scaled environments, i.e., indoor and outdoor scenes. In addition, semantic information is still hard to acquire in 3D mapping. We address this challenge by combining a state-of-the-art deep learning method with semi-dense Simultaneous Localisation and Mapping (SLAM) based on the video stream of a monocular camera. In our approach, 2D semantic information is transferred to the 3D map via correspondences between consecutive keyframes with spatial consistency. There is no need to obtain a semantic segmentation for each frame in a sequence, so our method achieves a reasonable computation time. We evaluate our method on indoor and outdoor datasets and show an improvement in 2D semantic labelling over baseline single-frame predictions.

1 Introduction

Understanding 3D scenes is increasingly required yet still challenging in many robotics applications. For instance, autonomous navigation in outdoor scenes demands a comprehensive understanding of the immediate surroundings. In domestic robotics, a simple fetching task requires knowledge of both what something is and where it is located [13]. Semantic segmentation is an important and promising step towards this problem. State-of-the-art Convolutional Neural Networks (CNNs) have made great advances in image-based 2D semantic segmentation. Combined with SLAM technology, a mobile robot can locate itself and at the same time recognise objects at the pixel level. This means that a task like “move the chair behind the nearest desk” or “park the ego-car in front of the left red one at the parking space” can be accurately accomplished. However, scaled sensors, such as stereo or RGB-D cameras, only provide reliable measurements within their limited range, and they lack the flexibility to switch seamlessly between indoor and outdoor scenes. In this work, we exploit Large-Scale Direct Monocular SLAM (LSD-SLAM) [5], which provides cues of 3D spatial information in both indoor and outdoor scenes, and combine it with recent advances in DeepLab-based CNNs [1] to build a 3D scene understanding system.

Most man-made environments, whether indoor or outdoor, exhibit distinctive spatial relations amongst varied classes of objects. Being able to capture, model and utilise these relations can enhance semantic segmentation performance [19]. In this paper, apart from semi-dense 3D mapping based on monocular SLAM and 2D semantic segmentation using a trained CNN model, the 2D-3D transfer and map regularisation within the framework of semi-dense 3D reconstruction constitute our main research contribution.

Figure 1: Semi-dense 3D semantic mapping. The figure shows an example of our system: a sequence of RGB images is used to reconstruct the 3D environment, and 2D semantic labels are predicted on the input frames. A 2D-3D label transfer approach and map regularisation improve the labelling accuracy in a semi-dense way.

Our approach uses state-of-the-art deep CNN components to predict semantic information, which is projected onto a globally consistent 3D map from a real-time monocular SLAM system. The 3D map is incrementally constructed from a sequence of selected frames, whose calculated depth information serves as tracking references. This allows the 2D CNN’s semantic labelling, attached to keyframes, to be fused into the 3D map in a semi-dense way, as shown in Figure 1. There is no need to segment each frame in a sequence, which saves a considerable amount of computation. Since the 3D map should have globally consistent depth information, it is regularised in light of its geometry. The regularisation process after the 2D-3D transfer aims to remove distinctive outliers and makes the components of the 3D map more consistent, i.e., nearby points should carry coherent semantic labels. The NYUv2 and CamVid/KITTI datasets were chosen to evaluate our approach, and we observe an improvement in 2D semantic segmentation. The unlabelled raw videos were used to reconstruct 3D maps with semantic predictions in real time (10 Hz).

Figure 2: Overview of our method. The only input is a sequence of RGB frames. There are three separate processes: a keyframe selection process, a 2D semantic segmentation process, and a 3D reconstruction with semantic optimisation process. Keyframes are selected from the sequence of frames as tracking references, and the consecutive frames refine the depth and variance contained in each keyframe. The 2D semantic segmentation process classifies the image content of the keyframes. Finally, the 3D map is reconstructed from the incoming keyframes and fused with the accumulated semantic information; the 3D semantic map is then regularised by a dense CRF. The visualised intervals do not correspond to the actual timing.

The paper is organised as follows. The next section gives an overview of related work. Section 3 describes the main components of this work. Section 4 discusses our experimental results on both indoor and outdoor scenes, and Section 5 concludes.

2 Related Work

Our work is motivated by [13], which contributes an indoor 3D semantic SLAM system with RGB-D input. They aim at a dense 3D map based on ElasticFusion SLAM [18] with semantic labelling. Pixel-wise semantic information is acquired from a deconvolutional semantic segmentation network [15] with scaled RGB input and depth as a fourth channel. The depth information is also used to update the surfels’ depth and normal information to construct the 3D dense map during loop closure. This system requires RGB-D input with reliable depth measurements, which limits switching between scenes. Earlier work, SLAM++ [16], created a map with semantically defined objects, but it is limited to a predefined database of hand-crafted template models.

Visual SLAM methods can be categorised as sparse, semi-dense, or dense, depending on the method of image alignment. Feature-based methods exploit only a limited set of feature points - typically image corners, blobs, or line segments - such as the classic MonoSLAM [3] and ORB-SLAM [14]. They are not suitable for 3D semantic mapping because of their rather limited feature points. To better exploit image information and avoid the cost of feature computation, direct dense SLAM systems, such as the surfel-based ElasticFusion [18] and Dense Visual SLAM [9], have been proposed recently. However, direct dense image alignment is well-established for RGB-D or stereo sensors [5] but not for monocular cameras. Semi-dense methods like LSD-SLAM [5] and SVO [7] make it possible to build a synchronised 3D semantic mapping system.

Deep CNNs have proven successful in the field of image semantic segmentation. Long et al. [11] first introduced an inverse convolution layer to realise end-to-end training and inference. Subsequently, an encoder-decoder architecture with max unpooling and deconvolutional layers was proposed [15] to avoid the stage-wise training of the FCN. The cutting-edge method, DeepLab [1], combines atrous convolution and atrous spatial pyramid pooling (ASPP) to achieve state-of-the-art performance on semantic segmentation. It also incorporates a dense Conditional Random Field (CRF), which improves the results both qualitatively and quantitatively as a post-processing step.

Our semi-dense approach is also inspired by dense 3D semantic mapping methods [8, 19, 17, 12] for both indoor and outdoor scenes. The major contributions of these works are the 2D-3D transfer and map regularisation. In particular, Hermans et al. [8] proposed an efficient 3D Conditional Random Field to regularise the 3D semantic map, taking into account the influence between neighbouring 3D points (voxels). In this work, we explore a similar strategy of utilising semantic, visual, and geometric information to enforce spatial consistency.

3 Approach

Our target is to create a 3D scene map with semi-dense and consistent label information online while a robot equipped with a monocular camera moves through an unknown scene. The approach is decoupled into three separately running processes, as shown in Figure 2. The 3D reconstruction process selects keyframes from the sequence of image frames captured by the monocular camera; the selected keyframes are stacked to reconstruct the 3D map based on their pose graph. This whole process runs on the CPU in real time. Meanwhile, the 2D semantic segmentation process predicts a pixel-level classification on the keyframes. The depth information of each keyframe is iteratively refined by its consecutive frames, which creates a locally optimal depth estimate for each keyframe and a correspondence between labelled pixels and voxels in the 3D point cloud. To obtain a globally optimal 3D semantic segmentation, we exploit information over neighbouring 3D points, involving their distance, colour similarity and semantic labels. This process updates the state of each 3D point and makes the 3D map globally consistent. The following sections describe each process in more detail.

3.1 2D Semantic Segmentation

In our work, the deep CNN adopts the core layers of DeepLab-v2, proposed by Chen et al. [1]. Two important components of DeepLab-v2 are dilated (atrous) convolution and atrous spatial pyramid pooling (ASPP), which enlarge the field of view of the filters and explicitly combine feature maps at multiple scales. The final result is the fusion of bilinearly interpolated multi-scale feature maps at the original image resolution. This method captures detail and successfully handles both large and small objects in images. The encoder part of the architecture is built on the VGG 16-layer network. For inference, we use a softmax layer to obtain the final probabilistic score map.
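To make the ASPP idea concrete, the following is a minimal PyTorch sketch: parallel 3×3 atrous convolutions with different dilation rates produce per-class score maps that are summed and bilinearly upsampled to the input resolution. The rates (6, 12, 18, 24) and the stride-8 upsampling factor follow common DeepLab-v2 settings; this is an illustrative sketch, not the authors’ exact network.

```python
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Minimal ASPP head: parallel atrous convolutions whose per-class
    score maps are summed, then upsampled to the input resolution."""
    def __init__(self, in_channels, num_classes, rates=(6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList([
            # padding=r with dilation=r keeps the spatial size unchanged
            nn.Conv2d(in_channels, num_classes, kernel_size=3,
                      padding=r, dilation=r)
            for r in rates
        ])

    def forward(self, x):
        score = sum(branch(x) for branch in self.branches)
        # assumes a stride-8 backbone (e.g. an atrous VGG-16)
        return F.interpolate(score, scale_factor=8,
                             mode="bilinear", align_corners=False)
```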

For the indoor scene, we employ the labelled NYUv2 dataset and adopt its 14 predefined classes, including floor, wall, sofa, and so on [2]. The NYUv2 dataset consists of 795 labelled training images and 654 labelled validation images. For the outdoor scene, we train on the CamVid dataset with 11 classes, including sky, building, car, road and so on. The CamVid dataset is split into 367 labelled training images, 101 labelled validation images and 233 labelled test images. Since the CamVid dataset does not provide image sequences, we use all labelled images for training and test our whole approach on the KITTI dataset. We fine-tune our model from a COCO pre-trained model, with no depth information involved. Unlike the traditional VGG16 pipeline, we do not rescale input images to a fixed native resolution. During training, inputs are cropped to a fixed size so that the feature maps align after multiple rounds of subsampling. During inference, we keep the original resolution of the input images for each dataset.

3.2 Semi-Dense SLAM Mapping

LSD-SLAM is a real-time, semi-dense 3D mapping method. The 3D environment is reconstructed as a pose graph of keyframes with associated semi-dense depth maps. Keyframes are selected from the image frames according to their distance from the previous keyframe and serve as tracking references. Each keyframe $\mathcal{K}_i$ consists of an image intensity $I_i$, a depth map $D_i$, the variance of the depth map $V_i$, and a semantic score map $S_i$. The depth map and variance are defined only on a subset of pixels $\Omega_{D_i} \subset \Omega$, hence semi-dense: they are available only in image regions of large intensity gradient. The semantic score map $S_i$ has size $H \times W \times C$ ($H$: height, $W$: width, $C$: number of classes) and comes directly from the deep CNN.
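As a data structure, a keyframe can be sketched as the record below. This assumes the depth and variance are stored densely alongside a boolean validity mask, which is one convenient layout rather than LSD-SLAM’s internal representation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Keyframe:
    """Semi-dense keyframe: depth, variance and scores are only
    meaningful on the high-gradient pixel subset marked by `mask`."""
    image: np.ndarray     # H x W intensity image I_i
    mask: np.ndarray      # H x W bool, high-gradient pixels (Omega_D_i)
    depth: np.ndarray     # H x W depth map D_i, valid where mask is True
    variance: np.ndarray  # H x W depth variance V_i, valid where mask is True
    scores: np.ndarray    # H x W x C semantic score map S_i from the CNN
    pose: np.ndarray      # 4 x 4 camera-to-world transform from tracking
```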

Spatial regularisation and outlier removal are incorporated in the creation of the depth map, and tracked image frames are used to refine it based on small-baseline stereo comparisons [6]. Next, direct, scale-drift-aware image alignment on $\mathrm{Sim}(3)$ is carried out on the stacked keyframes with refined depth maps, which aligns two differently scaled keyframes [5]. The scale-drift-aware operation copes with differently scaled environments, such as office rooms (indoor) and urban roads (outdoor). Due to the inherent correlation between depth map and tracking accuracy, the depth residual is exploited to estimate the scaled transformation between keyframes. Consequently, we build a 3D point cloud from the depth maps of the keyframes by minimising the image-alignment error. This runs in real time on a CPU at about 25 Hz.
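The 2D-3D transfer then amounts to back-projecting each keyframe’s valid pixels with their depths and mapping them into the world frame with the keyframe pose. Below is a NumPy sketch, using the hypothetical Keyframe layout above and a 3×3 intrinsic matrix K; the scale is assumed to be already resolved by the scale-drift-aware alignment.

```python
import numpy as np

def keyframe_to_points(kf, K):
    """Back-project a keyframe's semi-dense depth map into a world-frame
    point cloud. kf follows the Keyframe sketch above; K is the 3x3
    camera intrinsic matrix."""
    v, u = np.nonzero(kf.mask)              # rows (v), columns (u) of valid pixels
    d = kf.depth[kf.mask]                   # depth at those pixels
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    pts_cam = np.stack([(u - cx) * d / fx,  # pinhole back-projection
                        (v - cy) * d / fy,
                        d], axis=1)         # N x 3, camera frame
    pts_h = np.hstack([pts_cam, np.ones((len(d), 1))])
    return (kf.pose @ pts_h.T).T[:, :3]     # N x 3, world frame
```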

3.3 Accumulated Labelling Fusion

Single-frame 2D semantic segmentation yields inconsistent labelling between consecutive frames due to the uncertainty of sensors and environments. Incremental fusion of the semantic labels of stacked keyframes is similar to SLAM correspondences, and allows us to associate probabilistic labels from multiple keyframes in a Bayesian way, like the approach in [13]. For the 3D map at the current keyframe $\mathcal{K}_k$, we denote the class distribution of a 3D voxel $x$ as $P(x = l_i)$, where our goal is to obtain each 3D point's independent probability over the class labels $l_i$ given all stacked keyframes $\mathcal{K}_{0:k}$. We use a recursive Bayes' rule to express this:

$$P(x = l_i \mid \mathcal{K}_{0:k}) = \frac{P(\mathcal{K}_k \mid x = l_i, \mathcal{K}_{0:k-1})\, P(x = l_i \mid \mathcal{K}_{0:k-1})}{P(\mathcal{K}_k \mid \mathcal{K}_{0:k-1})} \quad (1)$$

where $\mathcal{K}_{0:k} = \{\mathcal{K}_0, \dots, \mathcal{K}_k\}$. Applying a first-order Markov assumption to $P(\mathcal{K}_k \mid x = l_i, \mathcal{K}_{0:k-1}) \approx P(\mathcal{K}_k \mid x = l_i)$, we have:

$$P(x = l_i \mid \mathcal{K}_{0:k}) \propto P(\mathcal{K}_k \mid x = l_i)\, P(x = l_i \mid \mathcal{K}_{0:k-1}) \quad (2)$$

We assume the prior $P(x = l_i)$ does not change over time, and there is no need to calculate the normalisation factor explicitly. Finally, we can update the semantic information of the 3D point cloud when a new keyframe arrives as follows:

$$P(x = l_i \mid \mathcal{K}_{0:k}) = \frac{1}{Z_k}\, P(x = l_i \mid \mathcal{K}_k)\, P(x = l_i \mid \mathcal{K}_{0:k-1}) \quad (3)$$

This incremental fusion of semantic probabilistic information allows us to label each 3D point based on all existing keyframes in real time. The following section describes how we use a dense CRF to regularise the semi-dense 3D map using the map geometry, which propagates semantic information between spatial neighbours.
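In implementation terms, Eq. 3 is an element-wise product of the running class distribution with the new keyframe’s per-point scores, followed by renormalisation. A minimal NumPy sketch (the small epsilon guarding against all-zero rows is our addition):

```python
import numpy as np

def fuse_labels(prior, observation, eps=1e-12):
    """Recursive Bayesian label fusion (Eq. 3): multiply the running
    class distribution of each 3D point by the new keyframe's class
    scores and renormalise (the 1/Z_k factor). Both inputs are N x C
    probability arrays for the N points visible in keyframe K_k."""
    posterior = prior * observation
    posterior /= posterior.sum(axis=1, keepdims=True) + eps
    return posterior
```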

3.4 Semi-Dense Map Regularisation

Dense CRFs are widely employed in 2D semantic segmentation to smooth noisy segmentation maps. Some previous works [17, 8, 19] apply them to 3D maps to model contextual relations between class labels in a fully connected graph. The algorithm minimises the Gibbs energy by mean-field approximation with a message-passing scheme to efficiently infer the latent variables. In a 3D environment, each voxel $i$ in the point cloud is assigned a label $x_i \in \mathcal{L}$. A complete label assignment $\mathbf{x}$ then has a corresponding Gibbs energy consisting of unary and pairwise potentials $\psi_u$ and $\psi_p$:

$$E(\mathbf{x}) = \sum_i \psi_u(x_i) + \sum_{i<j} \psi_p(x_i, x_j) \quad (4)$$

with $P(\mathbf{x}) = \frac{1}{Z} \exp(-E(\mathbf{x}))$.

The unary potential is defined as the negative logarithm of the label's probability:

$$\psi_u(x_i) = -\log P(x_i) \quad (5)$$

The pairwise potential is modelled as a linear combination of Gaussian edge-potential kernels:

$$\psi_p(x_i, x_j) = \sum_m \mu^{(m)}(x_i, x_j)\, k^{(m)}(\mathbf{f}_i, \mathbf{f}_j) \quad (6)$$

where $\mu^{(m)}$ is a label compatibility function corresponding to the kernel function $k^{(m)}$, and $\mathbf{f}_i$ denotes the feature vector of voxel $i$.

For our application in 3D environments, we use two Gaussian kernels for the pairwise potentials, similar to the work of Hermans et al. [8]. The first is a spatial smoothness kernel (Eq. 7), which enforces a local, appearance-agnostic smoothness amongst voxels with similar normal vectors:

$$k^{(1)}(\mathbf{f}_i, \mathbf{f}_j) = w^{(1)} \exp\left(-\frac{\lVert \mathbf{p}_i - \mathbf{p}_j \rVert^2}{2\theta_p^2} - \frac{\lVert \mathbf{n}_i - \mathbf{n}_j \rVert^2}{2\theta_n^2}\right) \quad (7)$$

where $\mathbf{p}_i, \mathbf{p}_j$ are the coordinates of the 3D voxels and $\mathbf{n}_i, \mathbf{n}_j$ are their respective surface normals.

Most works employ an appearance kernel as the second one,

$$k^{(2)}(\mathbf{f}_i, \mathbf{f}_j) = w^{(2)} \exp\left(-\frac{\lVert \mathbf{p}_i - \mathbf{p}_j \rVert^2}{2\theta_{p'}^2} - \frac{\lVert \mathbf{c}_i - \mathbf{c}_j \rVert^2}{2\theta_c^2}\right) \quad (8)$$

where $\mathbf{c}_i, \mathbf{c}_j$ are the RGB/LAB colour vectors of the corresponding voxels [13]. However, semi-dense LSD-SLAM only uses points of high intensity gradient to reconstruct the 3D environment; such points are too limited to capture contextual relations between different classes.

Thus, we adopt a semantic score-related kernel (Eq. 9), which encourages the voxels in a given segment to take the same label and penalises partial inconsistencies among voxels, similar to the work in [17]:

$$k^{(3)}(\mathbf{f}_i, \mathbf{f}_j) = w^{(3)} \exp\left(-\frac{\lVert \mathbf{p}_i - \mathbf{p}_j \rVert^2}{2\theta_{p''}^2} - \frac{\lVert \mathbf{s}_i - \mathbf{s}_j \rVert^2}{2\theta_s^2}\right) \quad (9)$$

where $\theta_{p''} > \theta_p$ means that semantic information flows across larger distances than geometric structure, and $\mathbf{s}_i$ is the probabilistic score over all classes.

In addition, we take a similar strategy to [19] by defining separate label compatibility functions for the two kernels. For the smoothness kernel, we use a simple Potts model, $\mu^{(1)}(x_i, x_j) = [x_i \neq x_j]$, while a more expressive compatibility is defined for the semantic potential to distribute the probabilistic score across different distances: $\mu^{(2)}$ is a full, symmetric matrix in which all class relations are defined individually. We did not tune the parameters above; all implementations follow the default settings presented in [10].
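To make the two pairwise kernels concrete, the following per-pair NumPy sketch evaluates Eqs. 7 and 9 directly. Note that a real dense-CRF implementation computes these implicitly through high-dimensional filtering during mean-field inference [10], and the bandwidths and weights below are illustrative placeholders, not the defaults we used.

```python
import numpy as np

def smoothness_kernel(p_i, p_j, n_i, n_j, theta_p=0.05, theta_n=0.1, w=1.0):
    """Appearance-agnostic smoothness kernel (Eq. 7) on 3D positions p
    and surface normals n."""
    return w * np.exp(-np.sum((p_i - p_j) ** 2) / (2 * theta_p ** 2)
                      - np.sum((n_i - n_j) ** 2) / (2 * theta_n ** 2))

def semantic_kernel(p_i, p_j, s_i, s_j, theta_p=0.5, theta_s=0.3, w=1.0):
    """Semantic score kernel (Eq. 9) on positions p and class score
    vectors s; its larger spatial bandwidth (theta_p'' > theta_p) lets
    label information flow across longer distances than geometry."""
    return w * np.exp(-np.sum((p_i - p_j) ** 2) / (2 * theta_p ** 2)
                      - np.sum((s_i - s_j) ** 2) / (2 * theta_s ** 2))
```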

4 Experiments and Results

4.1 Experiments on Indoor Scene

To demonstrate the performance of our approach, we first use the publicly available NYUv2 dataset for 2D semantic segmentation. We find that the “poly” learning-rate policy of stochastic gradient descent works better than the “step” policy, with the same base learning rate, momentum, and weight decay in both cases. We train with a batch of 10 images per iteration, which takes about 6 hours on an Nvidia Titan X GPU. The results of our evaluation are presented in Table 1: our 2D semantic segmentation results on the NYUv2 dataset improve over the previous work listed in [13]. In particular, it should be noted that RGBD-SF [13] and Eigen-SF [4] use depth information during training and inference, whereas we use only the RGB image. The posterior dense CRF on the 2D segmentation contributes an improvement of 1.8% in pixel average over the plain deep CNN, but it heavily affects real-time capability; it is therefore disabled online, and we apply it only in the final 3D map regularisation.
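For reference, the “poly” policy scales the base learning rate by $(1 - \mathrm{iter}/\mathrm{max\_iter})^{\mathrm{power}}$; a one-line sketch, assuming the common DeepLab exponent of 0.9:

```python
def poly_lr(base_lr, iteration, max_iter, power=0.9):
    """'Poly' learning-rate policy: decays base_lr polynomially to zero
    over max_iter iterations; power=0.9 is the common DeepLab default."""
    return base_lr * (1.0 - iteration / max_iter) ** power
```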

Method | bed | books | ceiling | chair | floor | furniture | objects | painting | sofa | table | tv | wall | window | class avg. | pixel avg.
Hermans et al. [8] | 68.4 | 45.4 | 83.4 | 41.9 | 91.5 | 37.1 | 8.6 | 35.8 | 28.5 | 27.7 | 38.4 | 71.8 | 46.1 | 48.0 | 54.3
RGBD-SF [13] | 61.7 | 58.5 | 43.4 | 58.4 | 92.6 | 63.7 | 59.1 | 66.4 | 47.3 | 34.0 | 33.9 | 86.0 | 60.5 | 58.9 | 67.5
RGBD-SF-CRF | 62.0 | 58.4 | 43.3 | 59.5 | 92.7 | 64.4 | 58.3 | 65.8 | 48.7 | 34.3 | 34.3 | 86.3 | 62.3 | 59.2 | 67.9
Eigen-SF [4] | 47.8 | 50.8 | 79.0 | 73.3 | 90.5 | 62.8 | 46.7 | 64.5 | 45.8 | 46.0 | 70.7 | 88.5 | 55.2 | 63.2 | 69.3
Eigen-SF-CRF | 48.3 | 51.5 | 79.0 | 74.7 | 90.8 | 63.5 | 46.9 | 63.6 | 46.5 | 45.9 | 71.5 | 89.4 | 55.6 | 63.6 | 69.9
Ours | 62.8 | 37.5 | 72.0 | 64.7 | 89.3 | 62.4 | 19.7 | 67.3 | 56.2 | 41.6 | 58.9 | 83.7 | 54.8 | 59.4 | 68.5
Ours-CRF | 64.9 | 34.6 | 72.0 | 67.5 | 90.5 | 65.0 | 17.2 | 67.3 | 59.3 | 41.3 | 60.0 | 85.1 | 57.0 | 60.3 | 70.3
Table 1: NYUv2 test set results. Note that the NYUv2 dataset contains more than 800 raw classes; the “objects” class here covers several kinds of stuff, such as boxes, bottles and cups, and may not be identical to the definitions in other works, which leads to a low score compared to other classes in our evaluation. Our model is evaluated based only on training with RGB input at its original resolution (640×480).

A typical example of an indoor 3D environment is shown in Figure 1; it contains about 370k 3D points across a total of 30 generated keyframes (about 12k points per keyframe). Since our approach executes 2D semantic segmentation only on keyframes, it runs online at 10 Hz. We are aware that the initialisation of LSD-SLAM is crucial for calculating accurate keyframe depth in a sequence: excessive rotation and a lack of translation at the beginning lead to poor mapping results. We therefore select the sequences in NYUv2 that have an effective initialisation. All tests were performed on an Intel Core i7-5930K CPU and an Nvidia Titan X GPU.

4.2 Experiments on Outdoor Scene

Outdoor scenes usually demand a larger measurement range than indoor scenes. The training process on the CamVid dataset is similar to that on NYUv2, using the “poly” policy with a batch of 10 over 10k iterations. Since LSD-SLAM works only on consecutive images, we need image sequences to test our system; we therefore choose raw image sequences from the KITTI dataset, processed at their original resolution. Take the 2011_09_26_drive_0093 clip in Figure 3 as an example: it contains 439 raw images spanning 43 seconds. Our system generates about 1600k 3D points across 56 keyframes (about 30k points per keyframe). Inference on a KITTI image costs about 400 ms to generate a labelling score map of the same resolution. Because only selected keyframes are used for 3D reconstruction, the 3D mapping reaches a speed of 10 Hz. Moreover, we remove several keyframes at the beginning of each sequence due to the limited initialisation of LSD-SLAM on KITTI data.

Figure 3: Qualitative KITTI test set results. (Top Left) Original image. (Top Center) 2D semantic segmentation. (Top Right) Current keyframe. (Bottom Row) 3D semantic mapping of the same section from different views.

5 Conclusions

We have presented a semi-dense 3D semantic mapping system based on monocular SLAM, which runs on a CPU coupled with a GPU. In contrast to previous work, this system uses no scaled sensors, only a single camera, and performs 2D semantic segmentation only on selected frames, which reduces computation time. In addition, our scale-drift-aware system can switch seamlessly between indoor and outdoor scenes without extra effort to handle the varying scales. We explored a state-of-the-art deep CNN to accurately segment objects in various scenes. Direct monocular SLAM reconstructs a semi-dense 3D environment without any prior depth information, which suits mobile robots working both indoors and outdoors. The semantic annotations, even with inaccurate labels, are transferred into the 3D map and regularised with a CRF process. The system achieves promising results and is fit for online use. Notably, we find that the geometry of the reconstructed map helps segment objects at different depths.

In future work, we plan to introduce several state-of-the-art SLAM techniques to improve the initialisation. Research on how labelling can boost the 3D reconstruction in SLAM would be an interesting direction, as would deep learning methods for depth estimation and spatial transformation in SLAM, such as [4].

References

  • [1] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. CoRR, abs/1606.00915, 2016.
  • [2] C. Couprie, C. Farabet, L. Najman, and Y. LeCun. Indoor semantic segmentation using depth information. arXiv preprint arXiv:1301.3572, 2013.
  • [3] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse. MonoSLAM: Real-time single camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6):1052–1067, 2007.
  • [4] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In Proceedings of the IEEE International Conference on Computer Vision, pages 2650–2658, 2015.
  • [5] J. Engel, T. Schöps, and D. Cremers. LSD-SLAM: Large-scale direct monocular SLAM. In European Conference on Computer Vision, pages 834–849. Springer, 2014.
  • [6] J. Engel, J. Sturm, and D. Cremers. Semi-dense visual odometry for a monocular camera. In Proceedings of the IEEE International Conference on Computer Vision, pages 1449–1456, 2013.
  • [7] C. Forster, M. Pizzoli, and D. Scaramuzza. SVO: Fast semi-direct monocular visual odometry. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 15–22. IEEE, 2014.
  • [8] A. Hermans, G. Floros, and B. Leibe. Dense 3D semantic mapping of indoor scenes from RGB-D images. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 2631–2638. IEEE, 2014.
  • [9] C. Kerl, J. Sturm, and D. Cremers. Dense visual SLAM for RGB-D cameras. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2100–2106. IEEE, 2013.
  • [10] P. Krähenbühl and V. Koltun. Efficient inference in fully connected CRFs with Gaussian edge potentials. In Advances in Neural Information Processing Systems, 2011.
  • [11] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
  • [12] A. Martinovic, J. Knopp, H. Riemenschneider, and L. Van Gool. 3D all the way: Semantic segmentation of urban scenes from start to end in 3D. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4456–4465, 2015.
  • [13] J. McCormac, A. Handa, A. Davison, and S. Leutenegger. SemanticFusion: Dense 3D semantic mapping with convolutional neural networks. arXiv preprint arXiv:1609.05130, 2016.
  • [14] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Transactions on Robotics, 31(5):1147–1163, 2015.
  • [15] H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In The IEEE International Conference on Computer Vision (ICCV), December 2015.
  • [16] R. F. Salas-Moreno, R. A. Newcombe, H. Strasdat, P. H. J. Kelly, and A. J. Davison. SLAM++: Simultaneous localisation and mapping at the level of objects. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1352–1359, 2013.
  • [17] S. Sengupta, E. Greveson, A. Shahrokni, and P. H. S. Torr. Urban 3D semantic modelling using stereo vision. In 2013 IEEE International Conference on Robotics and Automation (ICRA), pages 580–585. IEEE, 2013.
  • [18] T. Whelan, S. Leutenegger, R. F. Salas-Moreno, B. Glocker, and A. J. Davison. ElasticFusion: Dense SLAM without a pose graph. In Proceedings of Robotics: Science and Systems, Rome, Italy, 2015.
  • [19] D. Wolf, J. Prankl, and M. Vincze. Fast semantic segmentation of 3D point clouds using a dense CRF with learned parameters. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 4867–4873. IEEE, 2015.