SEGCloud: Semantic Segmentation of 3D Point Clouds
http://segcloud.stanford.edu
Abstract
3D semantic scene labeling is fundamental to agents operating in the real world. In particular, labeling raw 3D point sets from sensors provides fine-grained semantics. Recent works leverage the capabilities of Neural Networks (NNs), but are limited to coarse voxel predictions and do not explicitly enforce global consistency. We present SEGCloud, an end-to-end framework for 3D point-level segmentation that combines the advantages of NNs, trilinear interpolation (TI) and fully connected Conditional Random Fields (FC-CRF). Coarse voxel predictions from a 3D Fully Convolutional NN are transferred back to the raw 3D points via trilinear interpolation. Then the FC-CRF enforces global consistency and provides fine-grained semantics on the points. We implement the latter as a differentiable Recurrent NN to allow joint optimization. We evaluate the framework on two indoor and two outdoor 3D datasets (NYU V2, S3DIS, KITTI, Semantic3D.net), and show performance comparable or superior to the state of the art on all datasets.
1 Introduction
Scene understanding is a core problem in Computer Vision and is fundamental to applications such as robotics, autonomous driving, augmented reality, and the construction industry. Among various scene understanding problems, 3D semantic segmentation allows finding accurate object boundaries along with their labels in 3D space, which is useful for fine-grained tasks such as object manipulation and detailed scene modeling.
Semantic segmentation of 3D point sets, or point clouds, has been addressed through a variety of methods leveraging the representational power of graphical models [36, 44, 3, 48, 30, 35]. A common paradigm is to combine a classifier stage and a Conditional Random Field (CRF) [39] to predict spatially consistent labels for each data point [68, 69, 45, 66]. Random Forests classifiers [7, 15] have shown great performance on this task; however, the Random Forests classifier and the CRF stage are often optimized independently and put together as separate modules, which limits the information flow between them.
3D Fully Convolutional Neural Networks (3D-FCNNs) [42] are a strong candidate for the classifier stage in 3D point cloud segmentation. However, since they require a regular grid as input, their predictions are limited to a coarse output at the voxel (grid unit) level. The final segmentation is coarse since all 3D points within a voxel are assigned the same semantic label, making the voxel size a factor limiting the overall accuracy. Obtaining a fine-grained segmentation from a 3D-FCNN therefore requires additional processing of its coarse output. Our framework tackles this issue: it leverages the coarse output of a 3D-FCNN and still provides a fine-grained labeling of 3D points using trilinear interpolation (TI) and a CRF.
We propose an end-to-end framework that leverages the advantages of a 3D-FCNN, trilinear interpolation [47], and a fully connected Conditional Random Field (FC-CRF) [39, 37] to obtain fine-grained 3D segmentation. In detail, the 3D-FCNN provides class probabilities at the voxel level, which are transferred back to the raw 3D points using trilinear interpolation. We then use the FC-CRF to infer 3D point labels while ensuring spatial consistency. Transferring class probabilities to points before the CRF step allows the CRF to use point-level modalities (color, intensity, etc.) to learn a fine-grained labeling over the points, which can improve the initial coarse 3D-FCNN predictions. We use an efficient CRF implementation to perform the final inference. Given that each stage of our pipeline is differentiable, we are able to train the framework end-to-end using standard stochastic gradient descent.
The contributions of this work are:

- We propose to combine the inference capabilities of Fully Convolutional Neural Networks with the fine-grained representation of 3D point clouds using TI and CRF.
- We train the voxel-level 3D-FCNN and point-level CRF jointly and end-to-end by connecting them via trilinear interpolation, enabling segmentation in the original 3D point space.
- Our framework can handle 3D point clouds from various sources (laser scanners, RGB-D sensors, etc.), and we demonstrate state-of-the-art performance on indoor and outdoor, partial and fully reconstructed 3D scenes, namely on NYU V2 [52], the Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS) [5], KITTI [23, 22], and the Semantic3D.net benchmark for outdoor scenes [26].
2 Related Work
In this section, we present related work on three main aspects of our framework: neural networks for 3D data, graphical models for 3D segmentation, and combinations of Convolutional Neural Networks (CNNs) and CRFs. Other techniques have been employed for 3D scene segmentation [13, 2, 40], but we focus mainly on the ones related to the above topics.
Neural Networks for 3D Data: 3D Neural Networks have been extensively used for 3D object and part recognition [60, 54, 46, 25, 53, 21], understanding object shape priors, as well as generating and reconstructing objects [73, 71, 19, 70, 12]. Recent works have started exploring the use of Neural Networks for 3D semantic segmentation [53, 16, 32]. Qi et al. [53] propose a Multi-Layer Perceptron (MLP) architecture that extracts a global feature vector from a 3D point cloud of fixed physical size and processes each point using the extracted feature vector and additional point-level transformations. Their method operates at the point level and thus inherently provides a fine-grained segmentation. It works well for indoor semantic scene understanding, although there is no evidence that it scales to larger input dimensions without additional training or adaptation. Huang et al. [32] present a 3D-FCNN for 3D semantic segmentation which produces a coarse voxel-level segmentation. Dai et al. [16] also propose a fully convolutional architecture, but they make a single prediction for all voxels in the same voxel-grid column. This makes the wrong assumption that a voxel-grid column contains 3D points with the same object label. All the aforementioned methods are limited in that they do not explicitly enforce spatial consistency between neighboring points' predictions and/or provide only a coarse labeling of the 3D data. In contrast, our method makes fine-grained predictions for each point in the 3D input, explicitly enforces spatial consistency, and models class interactions through a CRF. Also, in contrast to [53], we readily scale to larger and arbitrarily sized inputs, since our classifier stage is fully convolutional.
Graphical Models for 3D Segmentation: Our framework builds on top of a long line of works combining graphical models [61, 62, 39, 20, 38] and highly engineered classifiers. Early works on 3D semantic segmentation formulate the problem as a graphical model built on top of a set of features. Such models have been used in several works to capture contextual relationships based on various features and cues such as appearance, shape, and geometry, and are shown to work well for this task [50, 49, 36, 58, 44, 3, 48].
A common paradigm in 3D semantic segmentation combines a classifier stage and a Conditional Random Field to impose smoothness and consistency [68, 69, 45, 66]. Random Forests [7, 15] are a popular choice of classifier in this paradigm and in 3D segmentation in general [75, 17, 9, 8, 51, 67]; they use hand-crafted features to robustly provide class scores for voxels, over-segments or 3D points. In [45], the spin image descriptor is used as a feature, while [68] uses a 14-dimensional feature vector based on geometry and appearance. Hackel et al. [27] also define a custom set of features aimed at capturing geometry, appearance and location. In these works, the Random Forests output is used as unary potentials (class scores) for a CRF whose parameters are learned independently. The CRF then leverages the confidence provided by the classifier, as well as similarity between an additional set of features, to perform the final inference. In contrast to these methods, our framework uses a 3D-FCNN which can learn higher-dimensional features and provide strong unaries for each data point. Moreover, our CRF is implemented as a fully differentiable Recurrent Neural Network, similar to [76]. This allows the 3D-FCNN and CRF to be trained end-to-end, and enables information flow from the CRF to the CNN classification stage.
Joint CNN + CRF: Combining a 3D CNN and a 3D CRF has been previously proposed for the task of lesion segmentation in 3D medical scans. Kamnitsas et al. [34] propose a multi-scale 3D CNN with a CRF to distinguish four types of lesions from healthy brain tissue. The method consists of two modules that are not trained end-to-end: a two-stream architecture operating at two different scan resolutions, and a CRF. In the CRF training stage, the authors reduce the problem to a two-class segmentation task in order to find CRF parameters that improve segmentation accuracy.
Joint end-to-end training of a CNN and CRF was first demonstrated by [76] in the context of image semantic segmentation, where the CRF is implemented as a differentiable Recurrent Neural Network (RNN). The combination of CNN and CRF trained in an end-to-end fashion demonstrated state-of-the-art accuracy for semantic segmentation in images. In [76] and other related works [42, 10], the CNN has a final upsampling layer with learned weights which allows pixel-level unaries to be obtained before the CRF stage. Our work follows a similar thrust by defining the CRF as an RNN and using a trilinear interpolation layer to transfer the coarse output of the 3D-FCNN to individual 3D points before the CRF stage. In contrast to [34], our framework is a single-stream architecture which jointly optimizes the 3D CNN and CRF, targets the domain of 3D scene point clouds, and is able to handle a large number of classes at both the CNN and CRF stages. Unlike [76, 42, 10], we choose to use deterministic interpolation weights that take into account the metric distance between a 3D point and its neighboring voxel centers (Section 3.2). Our approach reduces the number of parameters to be learned, and we find it to work well in practice. We show that the combination of jointly trained 3D-FCNN and CRF with TI consistently performs better than a standalone 3D-FCNN.
In summary, our work differs from previous works in the design of an end-to-end deep learning framework for fine-grained 3D semantic segmentation, the use of deterministic trilinear interpolation to obtain point-level segmentation, and the use of a jointly trained CRF to enforce spatial consistency. The rest of the paper is organized as follows. Sections 3 and 4 present the components of our end-to-end framework and Section 5 provides implementation details. Section 6 presents our experiments, including datasets (6.1), benchmark results (6.2), and system analysis (6.3). Section 7 concludes with a summary of the presented results.
3 SEGCloud Framework
An overview of the SEGCloud pipeline is shown in Figure 1. In the first stage of our pipeline, the 3D data is voxelized (depending on the type of 3D data, a preprocessing step of converting it to a 3D point cloud representation might be necessary) and the resulting 3D grid is processed by a 3D fully convolutional neural network (3D-FCNN). The 3D-FCNN downsamples the input volume and produces probability distributions over the set of classes for each downsampled voxel (Section 3.1). The next stage is a trilinear interpolation layer which interpolates class scores from the downsampled voxels to the 3D points (Section 3.2). Finally, inference is performed using a CRF which combines the original 3D point features with the interpolated scores to produce fine-grained class distributions over the point set (Section 3.3). Our entire pipeline is jointly optimized; the CRF inference and joint optimization processes are presented in Section 4.
3.1 3D Fully Convolutional Neural Network
Our framework uses a 3D-FCNN to learn a representation suitable for semantic segmentation. Moreover, the fully convolutional network reduces the computational overhead needed to generate predictions for each voxel by sharing computations [43]. In the next section, we describe how we represent 3D point clouds as input to the 3D-FCNN.
3D-FCNN data representation: Given that the 3D-FCNN input should be in the form of a voxel grid, we convert 3D point clouds as follows. Each data point is a 3D observation that consists of a 3D position and other available modalities, such as RGB color and sensor intensity. We place the 3D observations in a metric space so that the convolution kernels can learn the scale of objects; most 3D sensors already provide measurements in metric units. Then we define a regular 3D grid that encompasses the 3D observations. We denote each cell in the 3D grid as a voxel and, for simplicity, each cell is a cube of fixed side length (the voxel size). Most of the space in the 3D input is empty and has no associated features. To characterize this, we use a channel that denotes occupancy as a binary value (zero or one). We use additional channels to represent other modalities: for instance, three channels for RGB color and one channel for sensor intensity, when available.
Architecture: Our 3D-FCNN architecture is illustrated in Figure 2. We use 3 residual modules [28] sandwiched between 2 convolutional layers, as well as 2 destructive pooling layers in the early stages of the architecture to downsample the grid, and 2 non-destructive ones towards the end. The early downsampling reduces the memory footprint. The entire framework is fully convolutional and can handle arbitrarily sized inputs. For each voxel, the 3D-FCNN outputs scores (logits) associated with a probability distribution over labels. The resulting scores are transferred to the raw 3D points via trilinear interpolation.
3.2 3D Trilinear Interpolation
The process of voxelization and subsequent downsampling in the 3D-FCNN converts our data representation to a coarse 3D grid, which limits the resolution of semantic labeling at the CRF stage (to 20 cm in our case). Running the CRF on such coarse voxels results in a coarse segmentation. One option to avoid this information loss is to increase the resolution of the voxel grid (i.e., decrease the voxel size) and/or remove the destructive pooling layers, and run the CRF directly on the fine-grained voxels. However, this quickly runs into computational and memory constraints, since for given 3D data dimensions, the memory requirement of the 3D-FCNN grows cubically with the resolution of the grid. Also, for a given 3D-FCNN architecture, the receptive field decreases as the resolution of the grid increases, which can reduce performance due to having less context available during inference (see [63]).
We therefore dismiss a voxel-based CRF approach and instead run CRF inference using the raw 3D points as nodes. In this way, the CRF can leverage both the 3D-FCNN output and the fine-grained modalities of the input 3D points to generate accurate predictions that capture scene and object boundaries in detail. We achieve this using trilinear interpolation to transfer the voxel-level predictions from the 3D-FCNN to the raw 3D points, as illustrated in Figure 3. Specifically, for each point $p_i$ we define a random variable $x_i$ that denotes its semantic class, and the scores (logits) associated with the distribution of $x_i$ are defined as a weighted sum of the scores of its 8 spatially closest voxels, whose centers are denoted $v_n$:
\[
\psi_u(x_i) \;=\; \sum_{n=1}^{8} \prod_{d \in \{x,y,z\}} \Big(1 - \frac{|p_i^d - v_n^d|}{V}\Big)\, F_n(x_i) \tag{1}
\]
where $V$ is the voxel size and $F_n(x_i)$ is the score that voxel $n$ assigns to the class of $x_i$. During back-propagation, we use the same trilinear interpolation weights to splat the gradients from the CRF to the 3D-FCNN. The obtained point-level scores are then used as unaries in the CRF.
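The interpolation of Eq. (1) can be sketched in a few lines of numpy. The array layout (a dense `(X, Y, Z, C)` score grid) and the variable names are illustrative assumptions; the point is the per-axis weight `1 - |p_d - v_d| / V` over the 8 neighboring voxel centers.

```python
import numpy as np

def trilinear_interpolate(point, voxel_scores, origin, V):
    """Transfer voxel-level class scores to one 3D point (cf. Eq. (1)):
    a weighted sum over the 8 nearest voxel centers, with weights
    prod_d (1 - |p_d - v_d| / V).  Assumes the point lies strictly
    inside the grid of voxel centers."""
    # continuous coordinates in voxel units, relative to the first voxel center
    q = (point - (origin + V / 2.0)) / V
    base = np.floor(q).astype(int)
    frac = q - base                      # position inside the 8-cell neighborhood
    out = np.zeros(voxel_scores.shape[-1])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # weight per axis: (1 - frac) for offset 0, frac for offset 1
                w = ((1 - dx) + (2 * dx - 1) * frac[0]) * \
                    ((1 - dy) + (2 * dy - 1) * frac[1]) * \
                    ((1 - dz) + (2 * dz - 1) * frac[2])
                i, j, k = base + np.array([dx, dy, dz])
                out += w * voxel_scores[i, j, k]
    return out
```

The 8 weights sum to one, so a point at a voxel center recovers that voxel's scores exactly; the same weights can be reused to splat gradients back to the voxels, as described above.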
3.3 3D Fully Connected Conditional Random Field
The energy function of a CRF consists of a set of unary and pairwise potential energy terms. The unary potentials are a proxy for the initial probability distribution across semantic classes and the pairwise potentials enforce smoothness and consistency between predictions. The energy of the CRF is defined as,
\[
E(\mathbf{x}) \;=\; \sum_{i} \psi_u(x_i) \;+\; \sum_{i<j} \psi_p(x_i, x_j) \tag{2}
\]
where $\psi_u$ denotes the unary potential defined in Section 3.2 and $\psi_p$ denotes the pairwise potential. Note that all nodes in the CRF are connected with each other through the pairwise potentials. We use the Gaussian kernels from [37] for the pairwise potentials,
\[
\psi_p(x_i, x_j) = \mu(x_i, x_j)\left[ w_b \exp\!\left(-\frac{\|p_i - p_j\|^2}{2\theta_\alpha^2} - \frac{\|I_i - I_j\|^2}{2\theta_\beta^2}\right) + w_s \exp\!\left(-\frac{\|p_i - p_j\|^2}{2\theta_\gamma^2}\right)\right] \tag{3}
\]
where $w_b$ and $w_s$ are the weights of the bilateral and spatial kernels respectively, $\mu$ is the label compatibility score, $\theta_\alpha$, $\theta_\beta$, $\theta_\gamma$ are the kernels' bandwidth parameters, and $p_i$ and $I_i$ denote the position and color of point $i$. When RGB information is not available, we only use the spatial kernel. Using Gaussian kernels enables fast variational inference and learning through a series of convolutions on a permutohedral lattice [1] (Section 4).
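As a concrete illustration of the two Gaussian kernels above, the following sketch evaluates the bracketed term of Eq. (3) for a pair of points; the label-compatibility factor mu(x_i, x_j) is left out, and all names are illustrative.

```python
import numpy as np

def pairwise_kernel(p_i, p_j, I_i, I_j, w_b, w_s, theta_a, theta_b, theta_g):
    """Appearance (bilateral) + smoothness (spatial) Gaussian kernels of
    Eq. (3), following Krahenbuhl and Koltun [37]; the label-compatibility
    term mu(x_i, x_j) multiplies this value and is omitted here."""
    d2 = np.sum((p_i - p_j) ** 2)        # squared spatial distance
    c2 = np.sum((I_i - I_j) ** 2)        # squared color distance
    bilateral = w_b * np.exp(-d2 / (2 * theta_a ** 2) - c2 / (2 * theta_b ** 2))
    spatial = w_s * np.exp(-d2 / (2 * theta_g ** 2))
    return bilateral + spatial
```

Nearby points with similar colors get a large kernel value, so labeling them differently is costly; when RGB is unavailable, only the `spatial` term would be kept.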
4 CRF Inference and Joint Optimization
Exact energy minimization in the CRF is intractable, so we rely on a variational inference method which allows us to jointly optimize both the CRF and the 3D-FCNN [76, 37]. Minimizing the CRF energy yields fine-grained predictions for each 3D point that take smoothness and consistency into account. Given the final output of the CRF, we follow convention and use the KL divergence between the prediction and the ground-truth semantic labels as the loss function to minimize.
CRF Inference: The CRF with Gaussian potentials has a special structure that allows fast and efficient inference. Krähenbühl and Koltun [37] presented an approximate mean-field inference method which assumes independence between the semantic label distributions, $Q(\mathbf{x}) = \prod_i Q_i(x_i)$, and derived the update equation:
\[
Q_i(x_i = l) = \frac{1}{Z_i} \exp\!\Big( -\psi_u(x_i = l) \;-\; \sum_{l' \in \mathcal{L}} \mu(l, l') \sum_{m} w^{(m)} \sum_{j \neq i} k^{(m)}(\mathbf{f}_i, \mathbf{f}_j)\, Q_j(l') \Big) \tag{4}
\]
where $Z_i$ is a normalization constant and the $k^{(m)}$ are the Gaussian kernels of Equation (3) with weights $w^{(m)} \in \{w_b, w_s\}$, evaluated on point features $\mathbf{f}_i$ (position and color).
The above update equation can be implemented using simple convolutions, sums and softmax operations, as shown by Zheng et al. [76], who implemented CRF inference and learning as a Recurrent Neural Network (RNN) named CRF-RNN. CRF-RNN can be trained within a standard CNN framework, so we follow the same procedure to define our 3D CRF as an RNN for inference and learning. This formulation allows us to integrate the CRF within our 3D-FCNN framework for joint training.
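The mean-field update of Eq. (4) can be illustrated with a naive dense implementation. This is a sketch only: it uses a single Gaussian kernel and an O(N^2) kernel matrix, whereas the actual system uses permutohedral-lattice filtering [1]; all names and parameters here are illustrative.

```python
import numpy as np

def mean_field_iterations(unary, feats, mu, theta=1.0, n_iters=5):
    """Naive dense mean-field inference (cf. Eq. (4)).
    unary: (N, L) unary potentials; feats: (N, D) point features;
    mu: (L, L) label compatibility matrix."""
    # precompute the Gaussian kernel matrix k(f_i, f_j), zero on the diagonal (j != i)
    d2 = np.sum((feats[:, None, :] - feats[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2 * theta ** 2))
    np.fill_diagonal(K, 0.0)

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    Q = softmax(-unary)                  # initialize from the unaries
    for _ in range(n_iters):
        msg = K @ Q                      # message passing: sum_j k(f_i, f_j) Q_j(l')
        pairwise = msg @ mu.T            # compatibility transform over labels
        Q = softmax(-unary - pairwise)   # local update + normalization by Z_i
    return Q
```

Each loop iteration corresponds to one RNN step in the CRF-RNN formulation; since every operation is differentiable, gradients can flow back through the iterations to the unaries.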
Loss: Once we minimize the energy of the CRF in Equation (2), we obtain the final prediction distribution $P(x_i)$ of the semantic class of each 3D observation. Denoting the ground-truth distribution of observation $i$ (a one-hot encoding of its discrete label) as $\bar{P}(\bar{x}_i)$, we follow convention and define our loss function as the KL divergence between the final prediction distribution and the ground-truth distribution:
\[
\mathcal{L} = \frac{1}{N} \sum_{i=1}^{N} D_{\mathrm{KL}}\big(\bar{P}(\bar{x}_i) \,\|\, P(x_i)\big) \tag{5}
\]
where $N$ is the number of observations. Since the entropy of $\bar{P}$ is constant with respect to all parameters, we do not include it in the loss function, which then reduces to the cross-entropy.
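Because the ground-truth distribution is one-hot, the loss of Eq. (5) reduces to the mean negative log-probability of the true class, as this small sketch shows (names and the epsilon guard are illustrative):

```python
import numpy as np

def segmentation_loss(Q, labels):
    """Loss of Eq. (5): mean KL divergence between one-hot ground truth
    and predicted distributions Q (N, L).  With zero-entropy ground truth
    this is exactly the cross-entropy of the true-class probabilities."""
    N = Q.shape[0]
    eps = 1e-12                          # numerical safety for log(0)
    return -np.mean(np.log(Q[np.arange(N), labels] + eps))
```

Confident correct predictions drive the loss toward zero, while a near-zero probability on the true class is penalized heavily.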
5 Implementation Details
We implemented the SEGCloud framework using the Caffe neural network library [33] (we use the version of [64], which supports 3D convolution). Within the Caffe framework, we adapted the bilinear interpolation of [11] and implemented trilinear interpolation as a neural network layer. All computations within the 3D-FCNN, trilinear interpolation layer, and CRF are performed on a Graphics Processing Unit (GPU). For CRF inference, we adapt the RNN implementation of Zheng et al. [76] to 3D point clouds.
To address the lack of data in some datasets and make the network robust, we applied various data augmentation techniques such as random color augmentation, rotation along the upright direction, and point subsampling. These random transformations and subsampling increase the effective size of each dataset by at least an order of magnitude, and can help the network build invariance to rotation/viewpoint changes, as well as to reduced and varying context (see [63]).
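The augmentations listed above can be sketched as follows. The jitter magnitude and the keep ratio are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def augment_cloud(points, colors, rng):
    """Sketch of the augmentation described above: random rotation about the
    upright (z) axis, random point subsampling, and color jitter."""
    theta = rng.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    pts = points @ R.T                               # rotate about the z-axis
    keep = rng.random(len(pts)) < 0.9                # random subsampling (keep ~90%)
    jitter = rng.normal(0.0, 0.02, colors.shape)     # small random color noise
    cols = np.clip(colors + jitter, 0.0, 1.0)
    return pts[keep], cols[keep]
```

Rotation about z preserves heights and object scale (important since the convolution kernels operate in metric space), while subsampling varies point density and context.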
Training is performed in a 2-step process, similar to [76] (see Figure 7). In the first stage, we train the 3D-FCNN together with the trilinear interpolation layer, without the CRF, for a number of epochs.
In the second stage, we jointly train the 3D-FCNN and the CRF end-to-end (both modules connected through the trilinear interpolation layer). The approximate variational inference method we used for the CRF [37] approximates the convolution in a permutohedral grid whose size depends on the kernel bandwidth parameters. We fixed two of the bandwidths (at 5 cm and 11, respectively) and used a grid search with small perturbations on a validation set to find the remaining optimal parameters (see [63]).
Table 1: Results on the Semantic3D.net reduced-8 challenge.

| Method | man-made terrain | natural terrain | high vegetation | low vegetation | buildings | hardscape | scanning artefacts | cars | mIOU | mAcc* |
|---|---|---|---|---|---|---|---|---|---|---|
| TMLC-MSR [27] | 89.80 | 74.50 | 53.70 | 26.80 | 88.80 | 18.90 | 36.40 | 44.70 | 54.20 | 68.95 |
| DeePr3SS [41] | 85.60 | 83.20 | 74.20 | 32.40 | 89.70 | 18.50 | 25.10 | 59.20 | 58.50 | 88.90 |
| SnapNet [6] | 82.00 | 77.30 | 79.70 | 22.90 | 91.10 | 18.40 | 37.30 | 64.40 | 59.10 | 70.80 |
| 3D-FCNN-TI (Ours) | 84.00 | 71.10 | 77.00 | 31.80 | 89.90 | 27.70 | 25.20 | 59.00 | 58.20 | 69.86 |
| SEGCloud (Ours) | 83.90 | 66.00 | 86.00 | 40.50 | 91.10 | 30.90 | 27.50 | 64.30 | 61.30 | 73.08 |

*We downloaded confusion matrices from the benchmark website to compute the mean accuracy.
Table 2: Results on the S3DIS dataset (test on Area 5).

| Method | ceiling | floor | wall | beam | column | window | door | chair | table | bookcase | sofa | board | clutter | mIOU | mAcc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PointNet [53] | 88.80 | 97.33 | 69.80 | 0.05 | 3.92 | 46.26 | 10.76 | 52.61 | 58.93 | 40.28 | 5.85 | 26.38 | 33.22 | 41.09 | 48.98 |
| 3D-FCNN-TI (Ours) | 90.17 | 96.48 | 70.16 | 0.00 | 11.40 | 33.36 | 21.12 | 76.12 | 70.07 | 57.89 | 37.46 | 11.16 | 41.61 | 47.46 | 54.91 |
| SEGCloud (Ours) | 90.06 | 96.05 | 69.86 | 0.00 | 18.37 | 38.35 | 23.12 | 75.89 | 70.40 | 58.42 | 40.88 | 12.96 | 41.60 | 48.92 | 57.35 |
Table 3: Results on the NYU V2 dataset (13-class task).

| Method | Bed | Objects | Chair | Furniture | Ceiling | Floor | Deco. | Sofa | Table | Wall | Window | Booksh. | TV | mIOU | mAcc | glob Acc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Couprie et al. [14] | 38.1 | 8.7 | 34.1 | 42.4 | 62.6 | 87.3 | 40.4 | 24.6 | 10.2 | 86.1 | 15.9 | 13.7 | 6.0 | — | 36.2 | 52.4 |
| Wang et al. [65] | 47.6 | 12.4 | 23.5 | 16.7 | 68.1 | 84.1 | 26.4 | 39.1 | 35.4 | 65.9 | 52.2 | 45.0 | 32.4 | — | 42.2 | — |
| Hermans et al. [29] | 68.4 | 8.6 | 41.9 | 37.1 | 83.4 | 91.5 | 35.8 | 28.5 | 27.7 | 71.8 | 46.1 | 45.4 | 38.4 | — | 48.0 | 54.2 |
| Wolf et al. [69] | 74.56 | 17.62 | 62.16 | 47.85 | 82.42 | 98.72 | 26.36 | 69.38 | 48.57 | 83.65 | 25.56 | 54.92 | 31.05 | 39.51 | 55.6±0.2 | 64.9±0.3 |
| 3D-FCNN-TI (Ours) | 69.3 | 40.26 | 64.34 | 64.41 | 73.05 | 95.55 | 21.15 | 55.51 | 45.09 | 84.96 | 20.76 | 42.24 | 23.95 | 42.13 | 53.9 | 67.38 |
| SEGCloud (Ours) | 75.06 | 39.28 | 62.92 | 61.8 | 69.16 | 95.21 | 34.38 | 62.78 | 45.78 | 78.89 | 26.35 | 53.46 | 28.5 | 43.45 | 56.43 | 66.82 |
Table 4: Results on the labeled KITTI subset (laser point clouds, no RGB).

| Method | building | sky | road | vegetation | sidewalk | car | pedestrian | cyclist | signage | fence | mIOU | mAcc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Zhang et al. [75] | 86.90 | — | 89.20 | 55.00 | 26.20 | 50.0 | 49.00 | 19.3 | 51.7 | 21.1 | — | 49.80 |
| 3D-FCNN-TI (Ours) | 85.83 | — | 90.57 | 70.50 | 25.56 | 65.68 | 46.35 | 7.78 | 28.40 | 4.51 | 35.65 | 47.24 |
| SEGCloud (Ours) | 85.86 | — | 88.84 | 68.73 | 29.74 | 67.51 | 53.52 | 7.27 | 39.62 | 4.05 | 36.78 | 49.46 |
6 Experiments
In this section, we evaluate our framework on various 3D datasets and analyze the performance of key components.
6.1 Datasets
Several 3D scene datasets have been made available to the research community [56, 4, 5, 31, 59, 72, 52, 16, 24, 74]. We chose four of them to cover indoor and outdoor, partial and fully reconstructed, as well as small, medium and large scale point clouds. For our evaluation, we favor those for which previous 3D semantic segmentation works exist, with replicable experimental setups for comparison. We choose baselines that are representative of the main research thrusts and topics related to our method (i.e., Neural Networks, Random Forests, and CRFs). The datasets we chose for evaluation are the Semantic3D.net benchmark [26], the Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS) [5], KITTI [23, 22], and NYU V2 [52]. The datasets showcase a wide range of sizes, from the smallest, KITTI, with 12 million training points, to the largest, Semantic3D.net, with billions of training points (this excludes the validation set in our data split; details in [63]). We evaluate our method on each dataset and provide a comparison against the state of the art.
6.2 Results
We present quantitative and qualitative results for each of the datasets introduced above. We compare against the state of the art, and perform an ablation study to showcase the benefits of the CRF. The metrics reported are mean IOU (mIOU) and mean accuracy (mAcc) across classes unless otherwise stated.
Semantic3D.net benchmark:
We evaluate our architecture on the recent Semantic3D.net benchmark [26], which is currently the largest labeled 3D point cloud dataset for outdoor scenes. It contains billions of points and covers a range of urban scenes. We provide results on the reduced-8 challenge of the benchmark in Table 1. Our method outperforms [6] by 2.2 mIOU points and 2.28% accuracy, and sets a new state of the art on that challenge. When compared against the best method that does not leverage extra data through ImageNet [57] pretrained networks, our method outperforms [27] by 7.1 mIOU points and 4.1% accuracy; note that we also do not leverage extra data or ImageNet-pretrained networks. Our base 3D-FCNN trained with trilinear interpolation (3D-FCNN-TI) already achieves state-of-the-art performance, and an additional improvement of 3.1 mIOU points and 3.22% accuracy can be attributed to the CRF. An example segmentation of our method is shown in Figure 5. The 3D-FCNN-TI produces a segmentation which contains some noise on the cars highlighted in the figure; however, the combination with the CRF in SEGCloud removes the noise and provides a cleaner segmentation of the point cloud.
Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS): The S3DIS dataset [5] provides 3D point clouds for six fully reconstructed large-scale areas, originating from three different buildings. We train our architecture on two of the buildings and test on the third. We compare our method against the MLP architecture of Qi et al. (PointNet) [53]. Qi et al. [53] perform a six-fold cross validation across areas rather than buildings. However, with this experimental setup, areas from the same building end up in both the training and test set, which inflates performance and does not measure generalizability. For a more principled evaluation, we choose our test set to match their fifth fold (i.e., we test on Area 5 and train on the rest). We obtained their results for comparison, shown in Table 2. We outperform the MLP architecture of [53] by 7.83 mIOU points and 8.37% mean accuracy. Our base 3D-FCNN-TI also outperforms their architecture; the effect of our system's design choices on the performance of the 3D-FCNN and 3D-FCNN-TI is analyzed in Section 6.3. Qualitative results on this dataset (Figure 5) show an example of how detailed boundaries are captured and refined by our method.
NYU V2: The NYU V2 dataset [52] contains 1449 labeled RGB-D images. Camera parameters are available and are used to obtain a 3D point cloud for each RGB-D frame. In robotics and navigation applications, agents do not have access to fully reconstructed scenes, so labeling single-frame 3D point clouds becomes invaluable. We compare against 2D- and 3D-based methods, except for those that leverage additional large-scale image datasets (e.g., [35], [18]) or do not use the official split or the 13-class labeling defined in [14] (e.g., [35], [68]). We obtained a confusion matrix for the highest performing method of [69] to compute mean IOU in addition to the mean accuracy numbers they report. Wolf et al. [69] evaluate their method by aggregating results of 10 random forests. Similarly, we use 10 different random initializations of the network weights, and use a validation set to select our final trained model for evaluation. Results are shown in Table 3. We outperform the 3D Entangled Forests method of [69] by 3.94 mIOU points and 0.83% mean accuracy.
KITTI: The KITTI dataset [23, 22] provides 6 hours of traffic recordings acquired with various sensors, including a 3D laser scanner. Zhang et al. [75] annotated a subset of the KITTI tracking dataset with 3D point cloud and corresponding 2D image annotations for use in sensor fusion for 2D semantic segmentation. As part of their sensor fusion process, they train a unimodal 3D point cloud classifier using Random Forests.
We use this classifier as a baseline for evaluating our framework's performance. The comparison on the labeled KITTI subset is reported in Table 4. We demonstrate performance on par with the Random Forests classifier of [75]. Note that for this dataset, we train on the laser point cloud with no RGB information.
Analysis of results: On all datasets presented, our performance is on par with or better than previous methods. As expected, we also observe that the addition of a CRF improves the 3D-FCNN-TI output, and the qualitative results showcase its ability to recover clear object boundaries by smoothing out incorrect regions in the bilateral space (e.g., cars in Semantic3D.net or chairs in S3DIS). Quantitatively, it offers a relative improvement of 3.0-5.3% mIOU and 4.4-4.7% mAcc across datasets. We see the largest relative improvement, 5.3% mIOU, on Semantic3D.net. Since Semantic3D.net is by far the largest dataset (at least 8X larger), we believe this behavior may be representative of large-scale datasets, as the base networks are less prone to overfitting. We notice, however, that several classes in the S3DIS dataset, such as board, column and beam, are often incorrectly classified as walls. These elements are often found in close proximity to walls and have similar colors, which can present a challenge to both the 3D-FCNN-TI and the CRF.
6.3 System Analysis
We analyze two additional components of our framework: geometric data augmentation and trilinear interpolation. The experiments presented in this section are performed on the S3DIS dataset. We also analyzed the effect of joint training versus separate CRF initialization (details and results in the supplementary material [63]).
Table 5: Effect of geometric data augmentation (S3DIS).

| Method | mIOU |
|---|---|
| PointNet [53] | 41.09 |
| Ours, no augm. (3D-FCNN-TI) | 43.67 |
| Ours (3D-FCNN-TI) | 47.46 |

Table 6: Effect of trilinear interpolation (S3DIS).

| Method | mIOU |
|---|---|
| PointNet [53] | 41.09 |
| Ours, NN (3D-FCNN-NN) | 44.84 |
| Ours (3D-FCNN-TI) | 47.46 |
Effect of Geometric Data Augmentation: Our framework uses several types of data augmentation; our geometric augmentation methods in particular (random rotation about the z-axis and scaling) are non-standard. Qi et al. [53] use a different augmentation, including random rotation about the z-axis and jittering of coordinates, to augment object 3D point clouds, but it is not specified whether the same augmentation is used on 3D scenes. To determine the role of our proposed geometric augmentation in the performance of our base 3D-FCNN-TI architecture, we train the 3D-FCNN-TI without any geometric augmentation and report the performance in Table 5. We observe that geometric augmentation plays a significant role in the final performance, accounting for an improvement of 3.79 mIOU points. However, even without any geometric augmentation, our base 3D-FCNN-TI outperforms the MLP architecture of [53] by 2.58 mIOU points.
Trilinear interpolation analysis: We now study the effect of trilinear interpolation on our framework. For simplicity, we perform this analysis on the combination of the 3D-FCNN and the interpolation layer only (no CRF module). We want to assess the advantage of our proposed 8-neighbor trilinear interpolation scheme (Section 3.2) over simply assigning each point the label of the voxel it belongs to (see Figure 6 for a schematic comparison of the two methods). The results of the two interpolation schemes are shown in Table 6. We observe that trilinear interpolation improves the 3D-FCNN performance by 2.62 mIOU points over simply transferring the voxel label to the points within the voxel. This shows that considering the metric distance between points and voxel centers, as well as a larger neighborhood of voxels, can help improve prediction accuracy.
7 Conclusion
We presented an end-to-end 3D semantic segmentation framework that combines a 3D-FCNN, trilinear interpolation and a CRF to provide class labels for 3D point clouds. Our approach achieves performance on par with or better than state-of-the-art methods based on neural networks, random forests and graphical models. We show that several of its components, such as geometric 3D data augmentation and trilinear interpolation, play a key role in the final performance.
Although we demonstrate a clear advantage over some Random Forests methods and a point-based MLP method, our implementation uses a standard voxel-based 3D-FCNN; adapting it to the sparsity of the voxel grid using sparse convolutions (\eg [55]) could add an extra boost in performance and set a new state-of-the-art in 3D semantic segmentation.
Acknowledgments
We acknowledge the support of Facebook and MURI (11865141TBCJE) for this research.
References
 [1] A. Adams, J. Baek, and M. A. Davis. Fast High-Dimensional Filtering Using the Permutohedral Lattice. Computer Graphics Forum, 2010.
 [2] A. K. Aijazi, P. Checchin, and L. Trassoudaine. Segmentation based classification of 3d urban point clouds: A supervoxel based approach with evaluation. Remote Sensing, 5(4):1624–1650, 2013.
 [3] A. Anand, H. S. Koppula, T. Joachims, and A. Saxena. Contextually Guided Semantic Labeling and Search for 3D Point Clouds. International Journal of Robotics Research, 32(1):19–34, 2013.
 [4] I. Armeni, S. Sax, A. R. Zamir, and S. Savarese. Joint 2D-3D-Semantic Data for Indoor Scene Understanding. ArXiv e-prints, Feb. 2017.
 [5] I. Armeni, O. Sener, A. Zamir, H. Jiang, and S. Savarese. 3D Semantic Parsing of LargeScale Indoor Spaces. CVPR, pages 1534–1543, 2016.
 [6] A. Boulch, B. L. Saux, and N. Audebert. Unstructured Point Cloud Semantic Labeling Using Deep Segmentation Networks. In I. Pratikakis, F. Dupont, and M. Ovsjanikov, editors, Eurographics Workshop on 3D Object Retrieval. The Eurographics Association, 2017.
 [7] L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
 [8] G. J. Brostow, J. Shotton, J. Fauqueur, and R. Cipolla. Segmentation and recognition using structure from motion point clouds. In Proceedings of the 10th European Conference on Computer Vision: Part I, ECCV ’08, pages 44–57, Berlin, Heidelberg, 2008. Springer-Verlag.
 [9] N. Chehata, L. Guo, and C. Mallet. Airborne lidar feature selection for urban classification using random forests. In Proceedings of the ISPRS Workshop: Laserscanning ’09, pages 207–212, 2009.
 [10] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. CoRR, abs/1606.00915, 2016.
 [11] C. B. Choy, J. Gwak, S. Savarese, and M. Chandraker. Universal correspondence network. In Advances in Neural Information Processing Systems 29. 2016.
 [12] C. B. Choy, D. Xu, J. Gwak, K. Chen, and S. Savarese. 3D-R2N2: A unified approach for single and multi-view 3d object reconstruction. In Proceedings of the European Conference on Computer Vision (ECCV), 2016.
 [13] A. Cohen, A. G. Schwing, and M. Pollefeys. Efficient structured parsing of facades using dynamic programming. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 3206–3213, June 2014.
 [14] C. Couprie, C. Farabet, L. Najman, and Y. Lecun. Indoor semantic segmentation using depth information. 2013.
 [15] A. Criminisi and J. Shotton. Decision Forests for Computer Vision and Medical Image Analysis. Springer Publishing Company, Incorporated, 2013.
 [16] A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner. Scannet: Richlyannotated 3d reconstructions of indoor scenes. arXiv preprint arXiv:1702.04405, 2017.
 [17] D. Dohan, B. Matejek, and T. Funkhouser. Learning hierarchical semantic segmentations of LIDAR data. In International Conference on 3D Vision (3DV), Oct. 2015.
 [18] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multiscale convolutional architecture. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 2650–2658, Dec 2015.
 [19] H. Fan, H. Su, and L. Guibas. A Point Set Generation Network for 3D Object Reconstruction from a Single Image. ArXiv eprints, Dec. 2016.
 [20] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient graphbased image segmentation. Int. J. Comput. Vision, 59(2):167–181, Sept. 2004.
 [21] A. Garcia-Garcia, F. Gomez-Donoso, J. Garcia-Rodriguez, S. Orts-Escolano, M. Cazorla, and J. Azorin-Lopez. PointNet: A 3d convolutional neural network for real-time object class recognition. In 2016 International Joint Conference on Neural Networks (IJCNN), pages 1578–1584, July 2016.
 [22] A. Geiger. Are we ready for autonomous driving? the kitti vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), CVPR ’12, pages 3354–3361, Washington, DC, USA, 2012. IEEE Computer Society.
 [23] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The KITTI dataset. International Journal of Robotics Research, 32(11):1231 – 1237, Sept. 2013.
 [24] G. Georgakis, M. A. Reza, A. Mousavian, P. Le, and J. Kosecka. Multiview RGBD dataset for object instance detection. CoRR, abs/1609.07826, 2016.
 [25] K. Guo, D. Zou, and X. Chen. 3d mesh labeling via deep convolutional neural networks. ACM Trans. Graph., 35(1):3:1–3:12, Dec. 2015.
 [26] T. Hackel, N. Savinov, L. Ladicky, J. D. Wegner, K. Schindler, and M. Pollefeys. Semantic3D.net: A new large-scale point cloud classification benchmark. To appear in ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci., 2017.
 [27] T. Hackel, J. D. Wegner, and K. Schindler. Fast Semantic Segmentation of 3D Point Clouds With Strongly Varying Density. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, III3(July):177–184, 2016.
 [28] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
 [29] A. Hermans, G. Floros, and B. Leibe. Dense 3d semantic mapping of indoor scenes from rgbd images. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 2631–2638, May 2014.
 [30] H. Hu, D. Munoz, J. A. Bagnell, and M. Hebert. Efficient 3d scene analysis from streaming data. In IEEE International Conference on Robotics and Automation (ICRA), 2013.
 [31] B.-S. Hua, Q.-H. Pham, D. T. Nguyen, M.-K. Tran, L.-F. Yu, and S.-K. Yeung. SceneNN: A scene meshes dataset with annotations. In International Conference on 3D Vision (3DV), 2016.
 [32] S. Y. J. Huang. Point Cloud Labeling using 3D Convolutional Neural Network. In International Conference on Pattern Recognition, pages 1–6, 2016.
 [33] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
 [34] K. Kamnitsas, C. Ledig, V. F. J. Newcombe, J. P. Simpson, A. D. Kane, D. K. Menon, D. Rueckert, and B. Glocker. Efficient multiscale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis, 36:61–78, 2017.
 [35] B.S. Kim, P. Kohli, and S. Savarese. 3D Scene Understanding by VoxelCRF. Computer Vision (ICCV), 2013 IEEE International Conference on, pages 1425–1432, 2013.
 [36] H. S. Koppula, A. Anand, T. Joachims, and A. Saxena. Semantic Labeling of 3D Point Clouds for Indoor Scenes. Neural Information Processing Systems, pages 1–9, 2011.
 [37] P. Krähenbühl and V. Koltun. Efficient inference in fully connected crfs with gaussian edge potentials. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24. 2011.
 [38] L. Ladicky, C. Russell, P. Kohli, and P. H. S. Torr. Associative hierarchical random fields. IEEE Trans. Pattern Anal. Mach. Intell., 36(6):1056–1077, June 2014.
 [39] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML ’01, pages 282–289, San Francisco, CA, USA, 2001. Morgan Kaufmann Publishers Inc.
 [40] J.-F. Lalonde, N. Vandapel, D. Huber, and M. Hebert. Natural terrain classification using three-dimensional ladar data for ground robot mobility. Journal of Field Robotics, 23(10):839–861, November 2006.
 [41] F. J. Lawin, M. Danelljan, P. Tosteberg, G. Bhat, F. S. Khan, and M. Felsberg. Deep projective 3d semantic segmentation, 2017.
 [42] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. Computer Vision and Pattern Recognition (CVPR), 2015.
 [43] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 0712June, pages 3431–3440, 2015.
 [44] Y. Lu and C. Rasmussen. Simplified markov random fields for efficient semantic labeling of 3D point clouds. In IEEE International Conference on Intelligent Robots and Systems, pages 2690–2697, 2012.
 [45] A. Martinović, J. Knopp, H. Riemenschneider, and L. Van Gool. 3D all the way: Semantic segmentation of urban scenes from start to end in 3D. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 0712June, pages 4456–4465, 2015.
 [46] D. Maturana and S. Scherer. VoxNet: A 3D Convolutional Neural Network for RealTime Object Recognition. In IROS, 2015.
 [47] E. Meijering. A chronology of interpolation: From ancient astronomy to modern signal and image processing. Proceedings of the IEEE, 90(3):319–342, March 2002.
 [48] D. Munoz, J. A. Bagnell, and M. Hebert. Coinference machines for multimodal scene analysis. In European Conference on Computer Vision (ECCV), 2012.
 [49] D. Munoz, N. Vandapel, and M. Hebert. Directional associative markov network for 3d point cloud classification. Fourth international symposium on 3D data processing, visualization and transmission, pages 1–8, 2008.
 [50] D. Munoz, N. Vandapel, and M. Hebert. Onboard contextual classification of 3d point clouds with learned highorder markov random fields. In 2009 IEEE International Conference on Robotics and Automation, pages 2009–2016, May 2009.
 [51] L. Nan, K. Xie, and A. Sharf. A searchclassify approach for cluttered indoor scene understanding. ACM Trans. Graph., 31(6):137:1–137:10, Nov. 2012.
 [52] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from RGB-D images. In ECCV, 2012.
 [53] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep learning on point sets for 3d classification and segmentation. CoRR, abs/1612.00593, 2016.
 [54] C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. Guibas. Volumetric and multiview cnns for object classification on 3d data. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 2016.
 [55] G. Riegler, A. O. Ulusoy, and A. Geiger. Octnet: Learning deep 3d representations at high resolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
 [56] H. Riemenschneider, A. Bódis-Szomorú, J. Weissenberg, and L. Van Gool. Learning Where to Classify in Multi-view Semantic Segmentation, pages 516–532. Springer International Publishing, Cham, 2014.
 [57] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
 [58] R. Shapovalov, A. Velizhev, and O. Barinova. NonAssociative Markov Networks for 3D Point Cloud Classification. Isprs, XXXVIII3A:103–108, 2010.
 [59] S. Song, S. P. Lichtenberg, and J. Xiao. SUN RGB-D: A RGB-D scene understanding benchmark suite. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 567–576, 2015.
 [60] S. Song and J. Xiao. Deep Sliding Shapes for amodal 3D object detection in RGBD images. In CVPR, 2016.
 [61] B. Taskar, V. Chatalbashev, and D. Koller. Learning Associative Markov Networks. Proc. of the International Conference on Machine Learning, pages 102–110, 2004.
 [62] B. Taskar, C. Guestrin, and D. Koller. Max margin Markov networks. Neural Information Processing Systems, 2003.
 [63] L. P. Tchapmi, C. B. Choy, I. Armeni, J. Gwak, and S. Savarese. Supplementary Material for SEGCloud: Semantic Segmentation of 3D Point Clouds. http://segcloud.stanford.edu/supplementary.pdf.
 [64] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3d convolutional networks. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 4489–4497. IEEE, 2015.
 [65] A. Wang, J. Lu, G. Wang, J. Cai, and T.-J. Cham. Multi-modal Unsupervised Feature Learning for RGB-D Scene Labeling, pages 453–467. Springer International Publishing, Cham, 2014.
 [66] T. Wang, J. Li, and X. An. An efficient scene semantic labeling approach for 3d point cloud. In ITSC, pages 2115–2120. IEEE, 2015.
 [67] M. Weinmann, B. Jutzi, and C. Mallet. Semantic 3D scene interpretation: A framework combining optimal neighborhood size selection with relevant features. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, pages 181–188, Aug. 2014.
 [68] D. Wolf, J. Prankl, and M. Vincze. Fast Semantic Segmentation of 3D Point Clouds using a Dense CRF with Learned Parameters. Icra, 2015.
 [69] D. Wolf, J. Prankl, and M. Vincze. Enhancing semantic segmentation for robotics: The power of 3d entangled forests. IEEE Robotics and Automation Letters, 1(1):49–56, Jan 2016.
 [70] J. Wu, T. Xue, J. J. Lim, Y. Tian, J. B. Tenenbaum, A. Torralba, and W. T. Freeman. Single image 3d interpreter network. In European Conference on Computer Vision (ECCV), 2016.
 [71] J. Wu, C. Zhang, T. Xue, W. T. Freeman, and J. B. Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generativeadversarial modeling. In Advances in Neural Information Processing Systems, pages 82–90, 2016.
 [72] J. Xiao, A. Owens, and A. Torralba. Sun3d: A database of big spaces reconstructed using sfm and object labels. 2013 IEEE International Conference on Computer Vision (ICCV), 00:1625–1632, 2013.
 [73] X. Yan, J. Yang, E. Yumer, Y. Guo, and H. Lee. Perspective transformer nets: Learning singleview 3d object reconstruction without 3d supervision. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 1696–1704. Curran Associates, Inc., 2016.
 [74] Q. Zhang, X. Song, X. Shao, R. Shibasaki, and H. Zhao. Category modeling from just a single labeling: Use depth information to guide the learning of 2d models. In 2013 IEEE Conference on Computer Vision and Pattern Recognition, pages 193–200, June 2013.
 [75] R. Zhang, S. A. Candra, K. Vetter, and A. Zakhor. Sensor fusion for semantic segmentation of urban scenes. 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 1850–1857, 2015.
 [76] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. S. Torr. Conditional Random Fields as Recurrent Neural Networks. International Conference on Computer Vision, pages 1529–1537, 2015.
Appendix
This document presents additional details and qualitative results for the framework presented in our main paper. Section A reports the particulars of our framework’s implementation. Section B offers details and results on the effect of end-to-end training versus separate CRF initialization. The remainder of the document focuses on additional aspects of the evaluation and experiments. The experimental setup is detailed in Section C: the characteristics of the datasets used in our evaluation are outlined in Section C.1, Section C.2 defines the metrics used to evaluate our framework, and qualitative results of our framework on all four datasets are illustrated in Section C.3.
Appendix A Implementation
This section provides additional implementation details, including procedures for 3D data augmentation, data preparation, training, as well as the programming framework.
a.1 Augmentation Procedures for 3D data
Most of the datasets we use are small to medium in scale. To compensate for the lack of data, we apply a series of 3D data augmentations. These augmentations are applied on-the-fly, which increases randomness in the data and saves storage space.
Color Augmentation: Color augmentation is a popular data augmentation technique for image datasets. We leverage it in our work by randomly perturbing the R, G and B channels of each observation within a fixed range per channel.
Geometric augmentation: We also leverage two simple geometric augmentations: random rotation and scaling. We randomly rotate 3D observations around the axis aligned with the gravity direction to mimic a change of viewpoint in a scene. During training, we sample rotation angles from a continuous range and rotate the point cloud on-the-fly. We also scale the data by a small, uniformly sampled factor to make the network invariant to small changes in scale.
Point Subsampling: We also randomly subsample points in highly dense datasets, specifically the Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS) [5] and Semantic3D.net [26]. During training, we subsample the points in a scene by a factor chosen empirically based on the number of points in the given point-cloud crop (see Table 7). Above the point-count threshold, the subsampling factor for S3DIS is kept at 10, since the density of the point cloud is relatively constant in this dataset. The Semantic3D.net dataset, on the other hand, has varying density, and we use three values of the subsampling factor (10, 50 and 100), as shown in Table 7. This subsampling builds invariance to missing points and speeds up training. At test time, the algorithm is evaluated on all input points without subsampling.
The above random transformations and subsampling increase the effective size of each dataset and help the network build invariance to rotation and viewpoint changes, as well as to reduced and varying context.
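The on-the-fly augmentations above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation: the function name and the exact ranges (`scale_range`, `color_jitter`) are placeholder assumptions, since the paper's specific values are not reproduced here.

```python
import numpy as np

def augment_cloud(xyz, rgb, rng, sub_factor=1,
                  scale_range=(0.95, 1.05), color_jitter=0.05):
    """Apply on-the-fly augmentations to one point-cloud crop:
    random z-rotation, small uniform scaling, per-channel color
    jitter, and random point subsampling. Ranges are illustrative."""
    # Random rotation about the gravity (z) axis.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    xyz = xyz @ R.T
    # Small global scaling.
    xyz = xyz * rng.uniform(*scale_range)
    # Per-channel color jitter on normalized RGB in [0, 1].
    rgb = np.clip(rgb + rng.uniform(-color_jitter, color_jitter, size=3),
                  0.0, 1.0)
    # Keep roughly 1 / sub_factor of the points.
    if sub_factor > 1:
        keep = rng.choice(len(xyz), size=max(1, len(xyz) // sub_factor),
                          replace=False)
        xyz, rgb = xyz[keep], rgb[keep]
    return xyz, rgb
```

Because the rotation axis is aligned with gravity, the augmentation preserves which direction is "up", matching the assumption that scenes are gravity-aligned.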
\diagbox[width=10em]{Dataset}{Threshold (\#points)}
S3DIS  10  10  10
Semantic3D.net  10  50  100
a.2 Input Preparation
The large-scale 3D observations are split into areas of bounded extent along the x, y and z dimensions, where z is the gravity axis. One notable exception is the S3DIS dataset, which provides fully reconstructed 3D point clouds of indoor building spaces. For this dataset, we limit the x and y dimensions as for the rest of the datasets, but keep the entire z extent, which allows us to include both the ceiling and floor in every crop. During training, cropped sub-areas overlap with adjacent sub-areas; there is no overlap at test time, so that we obtain a single prediction per point. Sub-areas are then voxelized at a fixed resolution, yielding a bounded maximum input volume. This granularity balances memory requirements against an adequate representation of the 3D space without information loss. Each voxel has one to five associated channels: its binary occupancy (occupied, empty), its normalized RGB value, and the sensor intensity when available (Semantic3D.net dataset). The sensor intensity is mean-centered and normalized using the mean and range of the training data distribution.
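The voxelization step can be sketched as follows. This is a minimal sketch under stated assumptions: the function name and the choice of four channels (occupancy plus mean RGB per occupied voxel, omitting the optional intensity channel) are illustrative, not the paper's exact implementation.

```python
import numpy as np

def voxelize(xyz, rgb, voxel_size, grid_dims):
    """Build a multi-channel voxel grid for a 3D-FCNN input:
    channel 0 is binary occupancy, channels 1-3 hold the mean
    RGB of the points falling in each occupied voxel."""
    grid = np.zeros((*grid_dims, 4), dtype=np.float32)
    counts = np.zeros(grid_dims, dtype=np.int32)
    # Integer voxel index of each point, clipped to the grid bounds.
    idx = np.floor((xyz - xyz.min(axis=0)) / voxel_size).astype(int)
    idx = np.clip(idx, 0, np.array(grid_dims) - 1)
    for (i, j, k), color in zip(idx, rgb):
        grid[i, j, k, 0] = 1.0      # occupancy
        grid[i, j, k, 1:] += color  # accumulate RGB
        counts[i, j, k] += 1
    # Average accumulated colors over the points in each occupied voxel.
    occ = counts > 0
    rgb_sum = grid[..., 1:]
    rgb_sum[occ] /= counts[occ][:, None]
    return grid
```

At test time, the per-voxel predictions produced on such a grid are mapped back to the original points (via trilinear interpolation in our framework), so the voxelization loses no points, only groups them for the convolutional stage.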
a.3 Training
Training is performed in a 2-step process similar to [76], illustrated in Figure 7. In the first training stage, we use the trilinear interpolation layer to map the voxel-wise predictions to point-wise predictions and minimize the point-wise loss; we train the 3D-FCNN with the trilinear interpolation layer for a fixed number of epochs, periodically reducing the learning rate by a constant factor. In the second training stage, we combine the pretrained 3D-FCNN, the trilinear interpolation layer and the CRF, and train the whole system end-to-end with a smaller base learning rate. We use separate learning-rate multipliers for the CRF’s bilateral weights and compatibility matrix, though we did not extensively study the effect of these parameters. In most cases, the second training stage converges within a few hundred iterations (convergence is determined using a validation set). In the CRF formulation, although the kernel weights and the compatibility matrix are learned by gradient descent, the kernel bandwidth parameters are not learned within our efficient variational inference framework. We therefore use grid search or fixed values for some parameters, following [37]: we fix two bandwidth parameters (at 5 cm and 11, respectively) and use a validation set to search a bounded range for an optimal value of the remaining one. When no RGB information is available, we instead search for the spatial bandwidth in the same range and do not use the bilateral filter. The kernel weights and compatibility matrix are learned during training. Similar to [76], we use 5 CRF iterations during training and 10 CRF iterations at test time.
Appendix B Effect of end-to-end training vs. separate CRF initialization
We performed an experiment to evaluate the effect of end-to-end training versus separately initializing the CRF module. For the separate initialization, we set the theta parameters to the optimal values found during end-to-end training, the spatial weight to 3, and the bilateral weight to 5 for all experiments. Results show that joint training performs better than separate CRF initialization, especially on the mAcc metric (see Table 9).
Dataset  End-to-end (mIOU / mAcc)  Manual init (mIOU / mAcc)
Semantic3D.net  61.30 / 73.08  60.72 / 69.69
S3DIS  48.92 / 57.35  47.09 / 53.60
KITTI  36.78 / 49.46  36.34 / 46.34
NYU V2  43.45 / 56.43  41.63 / 52.28
Appendix C Experimental and Evaluation Setup
c.1 Datasets
We now present the characteristics of the datasets used to evaluate our framework: Semantic3D.net [26], the Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS) [5], KITTI [23, 22], and NYU V2 [52]. As shown in Table 8, our framework is general in that it can handle point clouds from various sources, indoor and outdoor environments, and both partial and fully reconstructed point clouds. Specifically, two of the datasets are collected from indoor environments and two from outdoor environments. They also cover a variety of acquisition methods, including laser scanners (Semantic3D.net, KITTI), Kinect (NYU V2), and Matterport (S3DIS). Moreover, S3DIS is a fully reconstructed point-cloud dataset, while NYU V2 provides point clouds extracted from single-frame RGB-D images. The sizes of the training sets also vary widely, from millions of training points for the KITTI dataset to billions for Semantic3D.net (excluding the validation set).
c.2 Evaluation Metrics
We use two main metrics for our evaluation: mean class accuracy (mAcc) and mean class IOU (mIOU), where IOU is defined following the Pascal segmentation convention. Accuracy per class is defined as:
\[ \text{Acc}_c = \frac{tp_c}{N_c} \tag{6} \]
where $tp_c$ is the number of true positives of class $c$, $fn_c$ is the number of false negatives of class $c$, and $N_c = tp_c + fn_c$ is the total number of ground-truth elements of class $c$. The mean class accuracy is then defined as:
\[ \text{mAcc} = \frac{1}{N} \sum_{c=1}^{N} \text{Acc}_c \tag{7} \]
where $N$ is the number of classes.
We define per-class IOU following the Pascal convention as:
\[ \text{IOU}_c = \frac{tp_c}{tp_c + fn_c + fp_c} \tag{8} \]
where $tp_c$ and $fn_c$ are defined as above, and $fp_c$ is the number of false positives of class $c$. Note that IOU is a stricter metric than accuracy, since it does not simply reward true positives but also penalizes false positives. From the definitions above, we obtain the mean class IOU as:
\[ \text{mIOU} = \frac{1}{N} \sum_{c=1}^{N} \text{IOU}_c \tag{9} \]
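The metric definitions above translate directly into code via a confusion matrix. This is a minimal sketch (the function name is illustrative) assuming integer labels in [0, num_classes):

```python
import numpy as np

def segmentation_metrics(gt, pred, num_classes):
    """Compute mAcc and mIOU (Pascal convention) from ground-truth
    and predicted per-point labels."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (gt, pred), 1)       # confusion matrix: rows = ground truth
    tp = np.diag(cm).astype(float)
    fn = cm.sum(axis=1) - tp           # ground-truth points missed per class
    fp = cm.sum(axis=0) - tp           # predictions wrongly assigned per class
    # Guard against empty classes (tp + fn = 0) to avoid division by zero.
    acc = tp / np.maximum(tp + fn, 1)          # Acc_c = tp_c / N_c
    iou = tp / np.maximum(tp + fn + fp, 1)     # IOU_c = tp_c / (tp_c+fn_c+fp_c)
    return acc.mean(), iou.mean()
```

For example, with ground truth [0, 0, 1, 1] and predictions [0, 1, 1, 1], class 0 has one true positive and one false negative while class 1 additionally absorbs one false positive, so IOU penalizes class 1 (2/3) even though its accuracy is perfect.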
c.3 Visualizations
In this section, we include additional qualitative segmentation results for all datasets. The results show the initial segmentation produced by the standalone 3D-FCNN-TI, followed by the final output of the full SEGCloud framework.