Dense RGB-D semantic mapping with Pixel-Voxel neural network
For intelligent robotics applications, extending 3D mapping to 3D semantic mapping enables robots not only to localize themselves with respect to the scene's geometric features but also simultaneously to understand the higher-level meaning of the scene context. Most previous methods address geometric 3D reconstruction and scene understanding independently, notwithstanding the fact that joint estimation can boost the accuracy of semantic mapping. In this paper, a dense RGB-D semantic mapping system with a Pixel-Voxel network is proposed, which can perform dense 3D mapping while simultaneously recognizing and semantically labelling each point in the 3D map. The proposed Pixel-Voxel network obtains global context information by using PixelNet to exploit the RGB image and, meanwhile, preserves accurate local shape information by using VoxelNet to exploit the corresponding 3D point cloud. Unlike existing architectures that fuse the score maps from different models with equal weights, we propose a Softmax weighted fusion stack that adaptively learns the varying contributions of PixelNet and VoxelNet, and fuses the score maps of the two models according to their respective confidence levels. The proposed Pixel-Voxel network achieves state-of-the-art semantic segmentation performance on the SUN RGB-D benchmark dataset. The runtime of the proposed system can be boosted to 11-12Hz, enabling near real-time performance using an 8-core i7 PC with a Titan X GPU.
Real-time 3D semantic mapping is desirable in many robotics applications, such as autonomous navigation and robot arm manipulation. The inclusion of semantic information in a dense 3D map is far more useful than geometric information alone for robot-human or robot-environment interaction. It enables robots to perform advanced tasks like “nuclear waste classification and sorting” or “autonomous warehouse package delivery” more intelligently.
A variety of well-known methods such as RGB-D SLAM , Kinect Fusion  and ElasticFusion  can generate dense or semi-dense 3D maps from RGB-D videos, but these 3D maps contain no semantic-level understanding of the observed scenes. Meanwhile, semantic segmentation has achieved significant progress thanks to convolutional neural networks. Thus far, FCN , SegNet  and DeepLab  are the most popular methods for RGB-level semantic segmentation. FuseNet  and LSTM-CF  take advantage of both RGB and depth images to improve semantic segmentation. PointNet  is the forerunner of 3D semantic segmentation, consuming an unordered point cloud directly.
During RGB-D mapping, both the RGB image, with rich contextual information, and the point cloud, with rich 3D geometric information, can be obtained directly. To date, there are no existing methods that make use of both RGB and point cloud for semantic segmentation and mapping. In this paper, we propose a dense RGB-D semantic mapping system with a Pixel-Voxel neural network which can perform dense 3D mapping while simultaneously recognizing and semantically labelling each point in the 3D map. The main contributions of this paper can be summarized as follows:
A Pixel-Voxel network consuming an RGB image and a point cloud is proposed, which can obtain global context information through PixelNet and, meanwhile, preserve accurate local shape information through VoxelNet. This mutual-promotion model achieves state-of-the-art semantic segmentation performance on SUN RGB-D.
A Softmax weighted fusion stack is proposed to adaptively learn the varying contributions of different models. It can fuse the score maps from different models according to their respective confidence levels. The number of input models for fusion can be arbitrary, and the stack can be inserted into any kind of network for fusion-style end-to-end learning.
A dense 3D semantic mapping system integrating the Pixel-Voxel network with RGB-D SLAM is developed. Its runtime can be boosted to 11-12Hz using an 8-core i7 PC with a Titan X GPU, which nearly satisfies the requirement of real-time applications.
The rest of this paper is organized as follows. Related work is reviewed first in Section 2. The details of the proposed methods are then introduced in Section 3, and the experimental results and analyses are given in Section 4. Finally, we conclude the paper in Section 5.
The existing works are categorized and described in the following two subsections: dense 3D semantic mapping in Section 2.1 and semantic segmentation in Section 2.2, followed by a discussion in Section 2.3.
2.1 Dense 3D semantic mapping
To the best of our knowledge, online dense 3D semantic mapping can be grouped into three main sub-categories: semantic mapping based on 3D template matching , on 2D/2.5D semantic segmentation , and on RGB-D data association from multiple viewpoints .
The first kind of method, such as SLAM++ , can only recognise known 3D objects held in a pre-defined database. It is therefore limited to situations where many repeated, identical objects are present.
For the second kind of method, both  and  adopt hand-designed features with Random Decision Forests to perform per-pixel label prediction on the incoming RGB videos. All the semantically labelled images are then associated together using visual odometry to generate the semantic map. Because of the state-of-the-art performance provided by CNN-based scene understanding, SemanticFusion  integrates deconvolutional neural networks  with ElasticFusion  into a real-time-capable semantic mapping system. All three of these methods require fully connected CRF  optimization as offline post-processing, i.e., the best-performing semantic mapping is not an online system. Zhao et al.  proposed the first system to perform simultaneous 3D mapping and pixel-wise material recognition. It integrates CRF-RNN  with RGB-D SLAM , and post-processing optimization is not required. Tateno et al.  proposed a real-time dense monocular CNN-SLAM, which can perform depth prediction and semantic segmentation simultaneously from a single image using a deep neural network.
All the above methods mainly focus on semantic segmentation using a single image and only perform 3D label refinement through a recursive Bayesian update over a sequence of images. They do not take full advantage of the associated information provided by multiple viewpoints of a scene. Xiang et al.  proposed DA-RNN, integrated with Kinect Fusion , for 3D semantic mapping. DA-RNN employs a recurrent neural network to tightly combine the information contained in multiple viewpoints of an RGB-D video stream to improve the semantic segmentation performance. Ma et al.  proposed a multi-view consistency layer which can use multi-view context information for object-class segmentation from multiple RGB-D views. It utilizes the visual odometry trajectory from RGB-D SLAM  to warp semantic segmentations between viewpoints. In addition, Mustafa et al.  proposed a network architecture for spatially and temporally coherent semantic co-segmentation and mapping of complex dynamic scenes from multiple static or moving cameras.
2.2 Semantic segmentation
FCN  is the first end-to-end network for semantic segmentation, replacing hand-crafted features. It substitutes the fully connected layers of a classification network with convolution layers to output a coarse map, and utilizes a skip architecture to refine it. DeconvNet , composed of deconvolution and unpooling layers, utilizes fractionally strided convolutions to alleviate the limited-resolution labelling problem. SegNet  proposed an encoder-decoder architecture which records the indices of max pooling for up-sampling. DeepLab  makes use of dilated convolutions  to increase the receptive field without down-sampling the feature map. CRF-as-RNN  reformulates the mean-field inference of dense CRF as an RNN architecture, enabling it to be integrated with a CNN as a fully end-to-end network.
FuseNet  fuses RGB and depth image cues in a single encoder-decoder CNN architecture for RGB-D semantic segmentation. The LSTM-CF  network fuses contextual information from multiple channels of RGB and depth images by stacking several convolution layers and a long short-term memory layer. FuseNet normalises the depth values to the same range as the colour images, while the LSTM-CF network transforms the depth image into an HHA image to have 3 channels like the colour image. The HHA representation can improve depth semantic segmentation; however, it requires a high computational cost and hence cannot be computed in real-time. In addition, STD2P  proposes a novel superpixel-based multi-view convolutional neural network for RGB-D semantic segmentation, which uses a spatio-temporal pooling layer to aggregate information over space and time.
The forerunner work PointNet  provides a unified architecture for both classification and segmentation which consumes raw unordered point clouds as input. PointNet only employs a single max pooling to generate the global feature describing the original input cloud, so it does not capture the local structures induced by the metric space the points live in. The improved version, PointNet++ , proposes a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set, enabling it to learn local features at increasing contextual scales.
2.3 Discussion
For RGB semantic segmentation, CNN-based methods always struggle to balance global and local information. Global context information can alleviate local ambiguities and improve recognition performance, while local information is crucial for accurate per-pixel labelling, i.e., shape information. But after several pooling layers, the resolution of the feature map decreases significantly, which means a lot of shape information is lost. How to increase the receptive field to obtain more global context information while preserving a high-resolution feature map is still an open problem. 3D geometric data such as a point cloud, which has an additional dimension, can provide very useful spatial information. But because of the unordered property of the point cloud, the conventional pooling layer cannot be used, and it is difficult to obtain context information at different scales for the point cloud. On the other hand, the resolution of the point cloud does not decrease in the absence of conventional pooling layers, i.e., it keeps the original spatial information of the data.
Intuitively, combining an RGB-based network and a point cloud-based network can alleviate each other's drawbacks and leverage each other's strengths. During RGB-D mapping, both the RGB image and the point cloud can be obtained directly from the RGB-D camera, which is easily available and enables a potential combination of the context information from the RGB image and the 3D shape information from the point cloud for semantic mapping. That is the main reason why a dense RGB-D semantic mapping system with a Pixel-Voxel neural network is proposed in this paper.
In addition, the networks in  all simply fuse the score maps from different models using equal weights, whereas each model should contribute differently in different situations and for different categories. So in this paper, a Softmax weighted fusion stack is proposed for adaptively learning the varying contributions of each model.
The pipeline of dense RGB-D semantic mapping with the Pixel-Voxel neural network is illustrated in Figure 1. The RGB image and point cloud are obtained directly from an RGB-D camera (Kinect V2). The RGB and point cloud data-pair of each key-frame is fed into the Pixel-Voxel network, as shown in Figure 2, for semantic segmentation. The semantically labelled point clouds are then combined incrementally through the visual odometry of RGB-D SLAM. Meanwhile, the label probability of each voxel is refined by a recursive Bayesian update. Finally, the dense 3D semantic map is generated.
3.2 Pixel neural network
The PixelNet comprises three units: a truncated CNN, a context stack and a skip architecture. The input of PixelNet is an RGB image. For the truncated CNN, a truncated VGG-16 or ResNet, pre-trained on ImageNet, can be employed as the baseline. After the truncated CNN, the resolution of the feature map decreases significantly compared with the input image, i.e., significant shape information is dropped.
Inspired by , the context stack sits on top of the pre-trained truncated CNN and is composed of a chain of six convolution stacks. For the VGG-16 network, the receptive field after the final convolution layers is not sufficiently large to cover the images that we use. The receptive field of the context stack can be described as below:

r_k = r_{k-1} + (c_k − 1) · j_{k−1},   j_k = j_{k−1} · s_k,   k = 1, …, K
Here r_0 and j_0 are the receptive field and stride product before the first context stack; r_k, s_k and c_k are the receptive field, stride and kernel size of context stack k, with j_k the accumulated stride product; K is the number of context stacks. The context stack can expand the receptive field progressively to cover all the elements in the current feature map (the whole original image). In addition, the score maps of all the context stacks are fused together to aggregate multi-scale context information. The spatial dimensionality of the feature maps is unchanged throughout the context stack.
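As a concrete illustration, the standard receptive-field recursion r_k = r_{k-1} + (c_k − 1)·j_{k−1}, j_k = j_{k−1}·s_k can be traced in a few lines. The backbone numbers below (receptive field 212, stride product 32, roughly those of a VGG-16-style backbone) and the six 3×3 stride-1 context convolutions are illustrative assumptions, since the paper's exact values are elided:

```python
def receptive_field_growth(r0, j0, layers):
    """Track receptive field r and stride product j through a chain of
    conv layers given as (kernel_size, stride) pairs:
        r_k = r_{k-1} + (c_k - 1) * j_{k-1}
        j_k = j_{k-1} * s_k
    Returns the receptive field after each layer."""
    r, j = r0, j0
    fields = []
    for kernel, stride in layers:
        r = r + (kernel - 1) * j
        j = j * stride
        fields.append(r)
    return fields

# Six hypothetical 3x3 stride-1 context convolutions on top of a backbone
# with receptive field 212 and stride product 32 (illustrative numbers).
growth = receptive_field_growth(212, 32, [(3, 1)] * 6)
print(growth)
```

Because each stride-1 stack adds (c − 1)·j pixels, the receptive field grows linearly until it covers the whole image, which is the stated purpose of the chained context stacks.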
The skip architecture consists of three skip stacks attached to earlier layers of the truncated CNN. To prevent training divergence, a smaller learning rate is usually adopted for training the skip architecture, as mentioned in . However, the skip architecture in PixelNet can be trained with a larger learning rate because batch normalization stabilizes the back-propagated error signals. The skip architecture retains the low-level features of the RGB image.
3.3 Voxel neural network
The input of VoxelNet is an unordered point cloud represented as a set of 3D points {p_i | i = 1, …, n} stored in a long vector, where n is the number of points and each p_i is a feature vector containing 6 dimensions of information: the position (x, y, z) in the world coordinate and the colour (r, g, b).
Here h is the multi-layer perceptron network, applied m times before max pooling. Its kernel size is 1 × 1 and each point shares the same convolution weights. Inspired by PointNet , we also use the max pooling operation as the symmetric invariant function; its kernel size is n × 1, so it obtains a global feature from all the points. The tile operation then recovers the shape of the feature map from 1 × C to n × C, yielding the global feature map of the input set. This is concatenated with the per-point features of the multi-layer perceptron network to combine the global and local information. Because only a single max pooling is adopted to generate the global feature, significant context information of the input point cloud is dropped.
The new per-point features are then extracted through the multi-layer perceptron network using the combined global and local point features. After the last multi-layer perceptron network, the reshape operation transforms the score map from point shape back to image shape through back-projection according to the depth values and camera intrinsic parameters, so that it can be fused with the score map of PixelNet.
The spatial dimensionality of the input data is unchanged throughout VoxelNet, so it preserves all the original shape information.
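To make the VoxelNet data flow concrete, here is a minimal NumPy sketch of the shared-MLP / max-pool / tile / concatenate pattern described above. The layer sizes and random weights are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp(points, w, b):
    # A 1x1 convolution over the point axis is the same dense layer
    # applied independently to every point (shared weights), with ReLU.
    return np.maximum(points @ w + b, 0.0)

n, d_in, d_feat = 1024, 6, 64                 # n points, (x, y, z, r, g, b) input
cloud = rng.standard_normal((n, d_in))        # toy unordered point cloud
w = rng.standard_normal((d_in, d_feat)) * 0.1
b = np.zeros(d_feat)

local_feat = shared_mlp(cloud, w, b)          # (n, d_feat) per-point features
global_feat = local_feat.max(axis=0)          # n x 1 max pool -> (d_feat,) global feature
tiled = np.tile(global_feat, (n, 1))          # tile: recover shape (n, d_feat)
fused = np.concatenate([local_feat, tiled], axis=1)  # concat global + local -> (n, 2*d_feat)
print(fused.shape)
```

Because max pooling is a symmetric function, the global feature is invariant to the ordering of the input points, which is why this pattern works on unordered clouds.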
3.4 Softmax weighted fusion
Unlike simply fusing score maps from different models using equal weights, a Softmax weighted fusion stack is designed to learn the varying contributions of each model in different situations and for different categories.
To be precise, let S_1, …, S_M be the score maps of shape C × H × W generated from M different models, where C equals the number of categories and H × W is the spatial shape of each score map.
F = W_f ∗ [S_1; …; S_M], where ∗ is the convolution operation, W_f denotes the weights of the convolution and F is the fusion score map. The convolution operation can learn the correlations of the multiple score maps from the different models.
A Softmax operation normalizes the channel values of F into the interval [0, 1], producing w_1, …, w_M, the corresponding weights of the score maps, which denote how confidently each model can be relied on.
S = Σ_m w_m ⊙ S_m is the weighted fusion score map, where ⊙ is the element-wise multiplication operation and Σ_m w_m = 1 at every location. This Softmax weighted fusion stack can fuse the score maps of an arbitrary number of models, and it can also be inserted into any kind of network to be trained end-to-end. As shown in Figure 2, it fuses 3 score maps from PixelNet and VoxelNet together according to their respective confidence levels.
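A minimal NumPy sketch of the fusion stack just described, with the 1×1 convolution over the concatenated score maps written as a matrix product over the channel axis; the shapes and random weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def softmax_weighted_fusion(score_maps, w, b):
    """score_maps: (M, C, H, W) score maps from M models.
    A 1x1 convolution over the concatenated maps (here a matmul over the
    M*C channel axis) predicts one weight map per model; softmax over the
    model axis turns them into per-pixel confidences that sum to 1."""
    M, C, H, W = score_maps.shape
    stacked = score_maps.reshape(M * C, H * W)          # concatenate channels
    logits = (w @ stacked + b[:, None]).reshape(M, H, W)
    weights = softmax(logits, axis=0)                   # (M, H, W), sums to 1 over models
    fused = (weights[:, None] * score_maps).sum(axis=0) # weighted sum -> (C, H, W)
    return fused, weights

M, C, H, W = 2, 5, 4, 4                                 # toy: 2 models, 5 classes
maps = rng.standard_normal((M, C, H, W))
w = rng.standard_normal((M, M * C)) * 0.1               # 1x1 conv weights
b = np.zeros(M)
fused, weights = softmax_weighted_fusion(maps, w, b)
```

Because the weights are learned per pixel, one model can dominate where it is confident (e.g. VoxelNet near object boundaries) while the other dominates elsewhere, rather than averaging with fixed equal weights.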
3.5 Class-weighted loss function
Imbalanced class distributions are quite common in most datasets, so focusing more on the rare classes to boost their recognition accuracy can improve the average recognition performance significantly, although the overall recognition performance will decrease slightly. We adopt the class-weighted negative log-likelihood as the loss function:
where L is the likelihood function, (x, y) is the training data, S is the final score map and y refers to the training label. 1[·] is an indicator function that returns 1 if its argument holds and 0 otherwise. f_c is the occurrence frequency of class c and w_c is the weight of class c. f_t is the threshold of the frequency criterion for a rare class, and ⌈·⌉ is the integer ceiling operation. In this way, the rare classes are assigned higher weights growing exponentially. The threshold f_t is set to 2.5% following the 85%-15% rule in , i.e., the frequency sum of all the rare classes is 15%.
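Since the exact weight formula is elided above, the sketch below takes the per-class weights as a given input and only illustrates the class-weighted negative log-likelihood itself, in a hypothetical minimal NumPy form:

```python
import numpy as np

def class_weighted_nll(log_probs, labels, class_weights):
    """Class-weighted negative log-likelihood over a label map.
    log_probs:     (C, H, W) per-pixel log-probabilities,
    labels:        (H, W) integer ground-truth class map,
    class_weights: (C,) per-class weights (higher for rare classes)."""
    H, W = labels.shape
    # pick the log-probability of the true class at every pixel
    picked = log_probs[labels, np.arange(H)[:, None], np.arange(W)[None, :]]
    return -(class_weights[labels] * picked).mean()

# With a uniform prediction over 3 classes and unit weights,
# the loss reduces to log(3) at every pixel.
log_probs = np.log(np.full((3, 2, 2), 1.0 / 3.0))
labels = np.zeros((2, 2), dtype=int)
loss = class_weighted_nll(log_probs, labels, np.ones(3))
```

Raising the weight of a class scales its pixels' contribution linearly, which is how the rare classes are emphasised during training.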
3.6 RGB-D SLAM
RGB-D SLAM  is employed for dense 3D mapping. Its visual odometry provides the transformation between two adjacent semantically labelled point clouds, which is used for generating a global semantic map and enabling incremental semantic label fusion.
RGB-D SLAM is a graph-based SLAM system consisting of front-end and back-end units. The former processes the RGB-D data to calculate geometric relationships between key-frames through visual features with RANSAC; the latter registers pairs of image frames to construct a pose graph. Subsequently, G2O is used to optimize the pose graph.
3.7 3D label refinement
After obtaining the semantically labelled point clouds from different viewpoints, label hypotheses are fused by a recursive Bayesian update to refine the 3D semantic map. Each voxel in the semantic point cloud stores both the label value and the corresponding discrete probability. The voxels from different viewpoints can be transformed into the same coordinate frame through the visual odometry of RGB-D SLAM. Then the voxel's label probability distribution can be updated by means of a recursive Bayesian update as in Equation 9.
P(l | x_{1:t}) = (1/Z) · P(l | x_t) · P(l | x_{1:t−1}), where l is the label prediction, x_t is the measurement at frame t and Z is the normalizing constant. It is applied to all label probabilities of each voxel to generate a proper distribution.
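The recursive Bayesian update amounts to an element-wise product of the stored distribution with the new per-frame prediction, followed by renormalization. A minimal sketch with an illustrative 3-class voxel distribution:

```python
import numpy as np

def bayesian_label_update(prior, likelihood):
    """Recursive Bayesian update of one voxel's label distribution:
        P(l | x_1..t) = (1/Z) * P(l | x_t) * P(l | x_1..t-1)
    where Z renormalizes so the result is a proper distribution."""
    posterior = likelihood * prior
    return posterior / posterior.sum()

prior = np.array([0.5, 0.3, 0.2])   # distribution accumulated so far
obs = np.array([0.6, 0.3, 0.1])     # prediction from the new key-frame
post = bayesian_label_update(prior, obs)
```

Repeated agreement between viewpoints sharpens the distribution toward the consistent label, while a single noisy prediction is damped by the accumulated prior.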
A large-scale indoor scene dataset, the SUN RGB-D dataset, is adopted for the Pixel-Voxel network evaluation. It contains synchronized RGB-D image pairs for training/validation and for testing, captured at different resolutions by 4 different RGB-D sensors: Kinect V1, Kinect V2, Xtion and RealSense. The SUN RGB-D scene understanding challenge is to segment indoor scene classes such as table, chair, sofa, window and door. Pixel-wise annotation is available and the class instances are extremely unbalanced. As mentioned in Section 3.5, the rareness frequency threshold is set to 2.5% in the class-weighted loss function following the 85%-15% rule.
4.1 Data augmentation and preprocessing
For the PixelNet training, all the RGB images are resized to the same resolution through a bilateral filter. We randomly flip the RGB images horizontally and scale them slightly to augment the RGB training data.
For the VoxelNet training, there is still no large-scale ready-made 3D point cloud dataset available, so we generated the point clouds using the RGB-D image pairs and the corresponding camera intrinsic parameters. As mentioned in , some training and testing RGB-D image pairs have to be excluded because their raw depth images contain many invalid values, which would give strongly wrong supervision during training. We also randomly flip the 3D point clouds horizontally to augment the point cloud training data. Using the original point clouds for VoxelNet training would incur a huge computational complexity, so we uniformly down-sample the original point clouds into sparse point clouds at 3 different scales. The numbers of points in these sparse point clouds are , and . Please note that the input data of VoxelNet is an unordered point cloud stored in a long vector.
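The down-sampling and flipping steps above can be sketched as follows; the evenly-spaced-index subsampling scheme and the choice of the x axis for the horizontal flip are assumptions for illustration, since the paper does not specify them:

```python
import numpy as np

def uniform_downsample(cloud, n_out):
    """Uniformly subsample an (N, 6) point cloud (x, y, z, r, g, b)
    to n_out points by taking evenly spaced indices."""
    idx = np.linspace(0, len(cloud) - 1, n_out).astype(int)
    return cloud[idx]

def flip_horizontal(cloud):
    """Horizontal-flip augmentation: negate the x coordinate, keep colours."""
    flipped = cloud.copy()
    flipped[:, 0] = -flipped[:, 0]
    return flipped

cloud = np.arange(60, dtype=float).reshape(10, 6)   # toy 10-point cloud
sparse = uniform_downsample(cloud, 4)
```

Uniform subsampling keeps the overall spatial layout while cutting the per-cloud point count, which is what makes VoxelNet training tractable at multiple scales.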
The whole training process is divided into 3 stages: PixelNet training, VoxelNet training and Pixel-Voxel network training. All the networks are trained using SGD with momentum. The batch size is set to , the momentum is fixed to and the weight decay is fixed to . The new parameters are randomly initialized using a Gaussian distribution with variance .
In the PixelNet training stage, the step learning policy is adopted. The learning rate is initialized to and decreased by a factor of 10 after 15 epochs (25 epochs in total). The learning rate of the newly-initialized parameters is set to 10 times higher than that of the pre-trained parameters.
In the VoxelNet training stage, the polynomial learning policy is adopted. The learning rate is initialized to , the power is set to and the max iteration is set to .
In the Pixel-Voxel network training stage, we load the pre-trained PixelNet and VoxelNet models and then fine-tune the whole network on the synchronized RGB and point cloud data. Because there are three Softmax weighted fusion stacks in the network, three rounds of fine-tuning are required. The same learning policy as in VoxelNet training is adopted. The learning rate of the newly-initialized parameters in each Softmax weighted fusion stack is set to 10 times higher than that of the fine-tuned parameters.
Following , three standard performance metrics for semantic segmentation are used for the Pixel-Voxel network evaluation: pixel accuracy, mean accuracy and mean IoU. The three metrics are defined as below:
pixel accuracy = Σ_i n_{ii} / Σ_i t_i,   mean accuracy = (1/C) Σ_i n_{ii} / t_i,   mean IoU = (1/C) Σ_i n_{ii} / (t_i + Σ_j n_{ji} − n_{ii}),
where C is the number of classes, n_{ij} is the number of pixels of class i classified as class j, and t_i = Σ_j n_{ij} is the total number of pixels belonging to class i.
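All three metrics can be computed from a confusion matrix with n_{ij} as the count of class-i pixels predicted as class j (the standard FCN-style definitions); a minimal NumPy sketch with a toy 2-class example:

```python
import numpy as np

def segmentation_metrics(conf):
    """conf[i, j] = number of pixels of true class i predicted as class j.
    Returns (pixel accuracy, mean accuracy, mean IoU)."""
    t = conf.sum(axis=1)                       # t_i: pixels of each true class
    correct = np.diag(conf)                    # n_ii: correctly labelled pixels
    pixel_acc = correct.sum() / conf.sum()
    mean_acc = (correct / t).mean()
    iou = correct / (t + conf.sum(axis=0) - correct)
    return pixel_acc, mean_acc, iou.mean()

conf = np.array([[8.0, 2.0],     # toy 2-class confusion matrix
                 [1.0, 9.0]])
pa, ma, miou = segmentation_metrics(conf)
```

Note that mean IoU penalises both missed pixels (false negatives, in t_i) and wrongly claimed pixels (false positives, in the column sum), so it is the strictest of the three.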
The qualitative results of the Pixel-Voxel network on the SUN RGB-D dataset are shown in Fig. ?. Because 3D shape information is preserved through VoxelNet, it can be seen that the results have accurate boundary shapes, such as the shapes of the bed, the close-stool and especially the legs of furniture.
The comparison of overall performance and class-wise accuracy on the SUN RGB-D dataset is shown in Table ? and Table ?. The class-wise IoU of the Pixel-Voxel network is also provided. We achieved 79.04% overall pixel accuracy with a 0.64% improvement, 57.65% mean accuracy with a 4.25% improvement and 44.24% mean IoU with a 1.94% improvement over the state-of-the-art method . Improvements in class-wise accuracy are achieved on 30 classes. In addition, the method  is painfully slow because of its use of computationally expensive CRF optimization at different scales.
4.4 Dense RGB-D semantic mapping
The dense RGB-D semantic mapping system is implemented under the ROS framework.
The system with a pre-trained network is tested in a real-world environment, i.e., a living room and a bedroom containing a curtain, a bed, etc., as shown in Figure ?. It can be seen that most of the voxels are correctly segmented and the results have accurate boundary shapes, but some voxels on the boundaries are still assigned wrong predictions. Some error predictions are caused by upsampling the data through a bilateral filter to the same size as the Kinect V2 data. Another reason is that this network is trained using the public SUN RGB-D dataset but tested using real-world data, so some errors result from illumination variances, category variances, etc. In addition, the noise of Kinect V2 also causes some error predictions.
The runtime performance of our system is evaluated using the QHD data from Kinect V2. During real-time RGB-D mapping, only a few key-frames are used for mapping; most frames are abandoned because of the small variance between two consecutive frames. It is therefore not necessary to segment all the frames in the sequence, but only the key-frames. As mentioned in , the runtime performance can nearly satisfy real-time dense 3D semantic mapping. The runtime performance can be boosted to 11-12Hz using the half-scale data, which is a trade-off between runtime and accuracy.
All the source code will be published upon acceptance of this paper. A real-time demo can be found in this link https://youtu.be/UbmfGsAHszc.
In this paper, a dense RGB-D semantic mapping system is developed for real-time applications. The runtime of the system can be boosted to 11-12Hz using an 8-core i7 PC with a Titan X GPU. A Pixel-Voxel network is proposed that achieves state-of-the-art semantic segmentation performance on the SUN RGB-D benchmark dataset. The proposed Pixel-Voxel network integrates: 1) PixelNet, which aggregates multi-scale global context information from the RGB image by extending the receptive field to cover all the elements in the feature map through multiple context stacks; 2) VoxelNet, which preserves the local shape information of the 3D point cloud in the absence of conventional pooling layers. We also propose a Softmax weighted fusion stack that combines PixelNet and VoxelNet according to their respective confidence levels in different situations. The qualitative and quantitative evaluations on the SUN RGB-D dataset and real-world data confirm the effectiveness of the proposed Pixel-Voxel network.
The work was supported by Toshiba Research Europe and DISTINCTIVE scholarship.
- F. Endres, J. Hess, J. Sturm, D. Cremers, and W. Burgard, “3-d mapping with an rgb-d camera,” Transactions on Robotics, vol. 30, no. 1, pp. 177–187, 2014.
- R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohi, J. Shotton, S. Hodges, and A. Fitzgibbon, “Kinectfusion: Real-time dense surface mapping and tracking,” in ISMAR. IEEE, 2011, pp. 127–136.
- T. Whelan, S. Leutenegger, R. Salas-Moreno, B. Glocker, and A. Davison, “Elasticfusion: Dense slam without a pose graph.” RSS, 2015.
- J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in CVPR, 2015, pp. 3431–3440.
- V. Badrinarayanan, A. Kendall, and R. Cipolla, “Segnet: A deep convolutional encoder-decoder architecture for image segmentation,” arXiv preprint arXiv:1511.00561, 2015.
- L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” arXiv preprint arXiv:1606.00915, 2016.
- C. Hazirbas, L. Ma, C. Domokos, and D. Cremers, “Fusenet: Incorporating depth into semantic segmentation via fusion-based cnn architecture,” in ACCV. Springer, 2016, pp. 213–228.
- Z. Li, Y. Gan, X. Liang, Y. Yu, H. Cheng, and L. Lin, “Lstm-cf: Unifying context modeling and fusion with lstms for rgb-d scene labeling,” in ECCV. Springer, 2016, pp. 541–557.
- C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” arXiv preprint arXiv:1612.00593, 2016.
- R. F. Salas-Moreno, R. A. Newcombe, H. Strasdat, P. H. Kelly, and A. J. Davison, “Slam++: Simultaneous localisation and mapping at the level of objects,” in CVPR, 2013, pp. 1352–1359.
- K. Tateno, F. Tombari, and N. Navab, “When 2.5 d is not enough: Simultaneous reconstruction, segmentation and recognition on dense slam,” in ICRA. IEEE, 2016, pp. 2295–2302.
- A. Hermans, G. Floros, and B. Leibe, “Dense 3d semantic mapping of indoor scenes from rgb-d images,” in ICRA. IEEE, 2014, pp. 2631–2638.
- V. Vineet, O. Miksik, M. Lidegaard, M. Nießner, S. Golodetz, V. A. Prisacariu, O. Kähler, D. W. Murray, S. Izadi, P. Pérez, et al., “Incremental dense semantic stereo fusion for large-scale semantic scene reconstruction,” in ICRA. IEEE, 2015, pp. 75–82.
- J. McCormac, A. Handa, A. Davison, and S. Leutenegger, “Semanticfusion: Dense 3d semantic mapping with convolutional neural networks,” in ICRA. IEEE, 2017, pp. 4628–4635.
- K. Tateno, F. Tombari, I. Laina, and N. Navab, “Cnn-slam: Real-time dense monocular slam with learned depth prediction,” arXiv preprint arXiv:1704.03489, 2017.
- C. Zhao, L. Sun, and R. Stolkin, “A fully end-to-end deep learning approach for real-time simultaneous 3d reconstruction and material recognition,” arXiv preprint arXiv:1703.04699, 2017.
- Y. Xiang and D. Fox, “Da-rnn: Semantic mapping with data associated recurrent neural networks,” arXiv preprint arXiv:1703.03098, 2017.
- L. Ma, J. Stückler, C. Kerl, and D. Cremers, “Multi-view deep learning for consistent semantic mapping with rgb-d cameras,” arXiv preprint arXiv:1703.08866, 2017.
- A. Mustafa and A. Hilton, “Semantically coherent co-segmentation and reconstruction of dynamic scenes,” CVPR, 2017.
- H. Noh, S. Hong, and B. Han, “Learning deconvolution network for semantic segmentation,” in ICCV, 2015, pp. 1520–1528.
- P. Krähenbühl and V. Koltun, “Efficient inference in fully connected crfs with gaussian edge potentials,” in Advances in neural information processing systems, 2011, pp. 109–117.
- S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. Torr, “Conditional random fields as recurrent neural networks,” in ICCV, 2015, pp. 1529–1537.
- Y. He, W.-C. Chiu, M. Keuper, M. Fritz, and S. I. Campus, “Std2p: Rgbd semantic segmentation using spatio-temporal data-driven pooling,” in CVPR, 2017.
- C. R. Qi, L. Yi, H. Su, and L. J. Guibas, “Pointnet++: Deep hierarchical feature learning on point sets in a metric space,” arXiv preprint arXiv:1706.02413, 2017.
- F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” arXiv preprint arXiv:1511.07122, 2015.
- B. Shuai, T. Liu, and G. Wang, “Improving fully convolution network for semantic segmentation,” arXiv preprint arXiv:1611.08986, 2016.
- B. Shuai, Z. Zuo, B. Wang, and G. Wang, “Dag-recurrent neural networks for scene labeling,” in CVPR, 2016, pp. 3620–3629.
- G. Lin, C. Shen, A. Van Den Hengel, and I. Reid, “Exploring context with deep structured models for semantic segmentation,” TPAMI, 2017.