
SPLATNet: Sparse Lattice Networks for Point Cloud Processing

Abstract

We present a network architecture for processing point clouds that directly operates on the collection of points represented as a sparse set of samples in a high-dimensional lattice. Naïvely applying convolutions on this lattice scales poorly, both in terms of memory and computational cost, as the size of the lattice increases. Instead, our network uses sparse bilateral convolutional layers as building blocks. These layers maintain efficiency by using indexing structures to apply convolutions only on occupied parts of the lattice, and allow flexible specification of the lattice structure enabling hierarchical and spatially-aware feature learning, as well as joint 2D-3D reasoning. Both point-based and image-based representations can be easily incorporated in a network with such layers and the resulting model can be trained in an end-to-end manner. We present results on 3D segmentation tasks where our approach outperforms existing state-of-the-art techniques.


1 Introduction

Data obtained with modern 3D sensors such as laser scanners is predominantly in the irregular format of point clouds or meshes. Analysis of point cloud data has several useful applications such as robot manipulation and autonomous driving. In this work, we aim to develop a new neural network architecture for point cloud processing.

A point cloud consists of a sparse and unordered set of 3D points. These properties make it difficult to apply traditional convolutional neural network (CNN) architectures to point cloud processing. As a result, existing approaches that directly operate on point clouds are dominated by hand-crafted features. One way to use CNNs for point clouds is to first pre-process a given point cloud into a form that is amenable to standard spatial convolutions. Following this route, most deep architectures for 3D point cloud analysis require pre-processing of irregular point clouds into either voxel representations (\eg, [44, 36, 43]) or 2D images by view projection (\eg, [40, 33, 24, 9]). This is due to the ease of implementing convolution operations on regular 2D or 3D lattices. However, transforming the point cloud representation into either 2D images or 3D voxels often results in artifacts and, more importantly, a loss of some natural invariances present in point clouds.

Recently, a few network architectures [32, 34] have been developed to directly work on point clouds. One of the main drawbacks of these architectures is that they do not allow a flexible specification of the extent of spatial connectivity across points (filter neighborhood). Both [32] and [34] use max-pooling to aggregate information across points either globally [32] or in a hierarchical manner [34]. This pooling aggregation may lose surface information because the spatial layouts of points are not explicitly considered. It is desirable to capture spatial relationships in input point clouds through more general convolution operations while being able to specify filter extents in a flexible manner.

Figure 1: From point clouds and images to semantics. SPLATNet$_{3D}$ directly takes a point cloud as input and predicts labels for each point. SPLATNet$_{2D\text{-}3D}$, on the other hand, jointly processes both the point cloud and the corresponding multi-view images for better 2D and 3D predictions.

In this work, we propose a generic and flexible neural network architecture for processing point cloud data that alleviates some of the aforementioned issues with existing deep architectures. Our key observation is that the bilateral convolution layers (BCLs) proposed in [22] have several favorable properties for point cloud processing. BCL provides a systematic way of filtering unordered input points while enabling flexible specification of the underlying lattice structure on which the convolution operates. BCL smoothly maps the given input points onto a sparse lattice, performs standard convolutions on the sparse lattice, and then smoothly interpolates the filtered signal back onto the original input points. With BCL as a building block, we propose a new neural network architecture, which we refer to as SPLATNet (SParse LATtice Networks), that performs hierarchical and spatially-aware feature learning for unordered point clouds. The proposed SPLATNet model has several advantages for point cloud processing:

  • SPLATNet takes the point cloud as input and does not require any pre-processing to voxels or images.

  • SPLATNet allows for easy specification of filter neighborhood as in standard CNN architectures.

  • With the use of a hash table, our network can efficiently deal with sparsity in the input point cloud by convolving only at locations where data is present.

  • SPLATNet computes hierarchical and spatially aware features of an input point cloud with sparse and efficient lattice filters.

  • In addition, our network architecture allows an easy mapping of 2D points into 3D space and vice-versa. Following this, we propose a joint 2D-3D deep architecture that processes both the multi-view 2D images and the corresponding 3D point cloud in a single forward pass while being end-to-end learnable.

The inputs and outputs of the two proposed networks, SPLATNet$_{3D}$ and SPLATNet$_{2D\text{-}3D}$, are depicted in Figure 1. We demonstrate the above advantages of our approach with experiments on point cloud segmentation. Experiments on two different benchmark datasets, RueMonge2014 [37] for facade segmentation and ShapeNet [45] for part segmentation, demonstrate the superior performance of our technique compared to state-of-the-art approaches, while being computationally efficient. Specifically, in the case of facade segmentation, SPLATNet significantly outperforms the prior state-of-the-art on both multi-view image labeling and point cloud labeling. In addition, in the case of ShapeNet part segmentation, SPLATNet outperforms existing state-of-the-art techniques.

2 Related Work

Below we briefly review existing deep learning approaches for 3D shape processing and explain differences with our work.

Multi-view and voxel networks.

Multi-view networks pre-process shapes into a set of 2D rendered images encoding surface depth and normals under various 2D projections [40, 33, 3, 24, 9, 20]. These networks take advantage of high resolution in the input rendered images and transfer learning through fine-tuning of 2D pre-trained image-based architectures. On the other hand, 2D projections can cause surface information loss due to self-occlusions, while viewpoint selection is often performed through heuristics that are not necessarily optimal for a given task.

Voxel-based methods convert the input 3D shape representation into a 3D volumetric grid. Early voxel-based architectures executed convolution in regular, fixed voxel grids, and were limited to low shape resolutions due to high memory and computation costs [44, 29, 33, 6, 15, 38]. Instead of using fixed grids, more recent approaches pre-process the input shapes into adaptively subdivided, hierarchical grids with denser cells placed near the surface [36, 35, 26, 43, 41]. As a result, they have much lower computational and memory overhead. On the other hand, convolutions are often still executed away from the surface, where most of the shape information exists. An alternative approach is to constrain the execution of volumetric convolutions only along the input sparse set of active voxels of the grid [16]. Our approach can be seen as a generalization of sparse grid convolutions to more general ones (permutohedral lattice convolutions). In contrast to previous work, we do not require pre-processing points into voxels that may cause discretization artifacts and surface information loss. We smoothly map the input surface signal to our sparse lattice, perform convolutions over this lattice, and smoothly interpolate the filter responses back to the input surface. In addition, our architecture can easily incorporate feature representations originating from both 3D point clouds and rendered surface images within the same lattice, getting the best of both worlds.

Point cloud networks.

Qi \etal[32] pioneered another type of deep network that directly operates on point clouds. These networks learn spatial feature representations for each input point; the point features are then aggregated across the whole point set [32] or across hierarchical surface regions [34] through max-pooling. This aggregation may lose surface information since the spatial layout of points is not explicitly considered. In our case, the input points are mapped to a sparse lattice where convolution can be efficiently formulated and spatial relationships in the input data can be effectively captured through flexible filters.

Non-Euclidean networks.

An alternative approach is to represent the input surface as a graph (\eg, a polygon mesh or point-based connectivity graph), convert the graph into its spectral representation, then perform convolution in the spectral domain [8, 19, 11, 4]. Shapes with different graph structure tend to have largely different spectral bases, thus these networks do not generalize well across structurally different shapes. The shape basis functions can be aligned through a spectral transformer [46], however, this requires a robust initialization scheme. Another class of methods embeds the input shapes into 2D parametric domains, then execute convolutions within these domains [39, 27, 13]. However, these embeddings can suffer from spatial distortions or require topologically consistent input shapes. Other methods parameterize the surface into local patches, and execute surface-based convolution within these patches [28, 5, 30]. Such non-Euclidean networks have the advantage of being invariant to surface deformations, yet this invariance might not be always desirable in man-made object segmentation and classification tasks where large deformations may change the underlying shape or part functionalities and semantics. We refer to Bronstein \etal[7] for an excellent review of spectral, patch- and graph-based networks.

Joint 2D-3D networks.

FusionNet [18] combines shape classification scores from a volumetric and a multi-view network, yet this fusion happens at a late stage, after the final fully connected layer of these networks, and does not jointly consider their intermediate local and global feature representations. In our case, the 2D and 3D feature representations are mapped into the same lattice, enabling end-to-end learning from both input types of representations.

3 Bilateral Convolution Layer

In this section, we briefly review the Bilateral Convolution Layer (BCL) that forms the basic building block of our SPLATNet architecture for point clouds. BCL, proposed in [22], provides a way to incorporate sparse high-dimensional filtering inside neural networks. In [22], BCL was used as a learnable generalization of bilateral filtering [42, 2], hence the name ‘Bilateral Convolution Layer’. Bilateral filtering involves a projection of a given 2D image into higher-dimensional space (\ie, space defined by position and color features) and is traditionally limited to hand-designed filter kernels. BCL provides a way to learn filter kernels in high-dimensional spaces for bilateral filtering. BCL is also shown to be useful for information propagation across video frames [21]. We observe that BCL has several favourable properties to filter data that is inherently sparse and high-dimensional, like point clouds. Here, we briefly describe how a BCL works and then discuss its properties.

Figure 2: Bilateral Convolution Layer. Splat: BCL first interpolates input features $F$ onto a $d_l$-dimensional permutohedral lattice defined by the lattice features $L$ at the input points. Convolve: BCL then does $d_l$-dimensional convolution over this sparsely populated lattice. Slice: The filtered signal is then interpolated back onto the input signal. For illustration, the input and output are shown as a point cloud and the corresponding segmentation.

3.1 Inputs to BCL

Let $F \in \mathbb{R}^{n \times d_f}$ be the given input features to a BCL, where $n$ denotes the number of input points and $d_f$ denotes the dimensionality of the input features at each point. For a point cloud, $n$ is the number of points and the $d_f$-dimensional vector holds the features at each point. Input features to a BCL can be low-level features such as color, position, etc., and can also be high-level features generated by a neural network or some other feature learning technique.

One of the interesting characteristics of BCL is that it allows a flexible specification of the lattice space in which the convolution operates. This is specified via lattice features at each input point. Let $L \in \mathbb{R}^{n \times d_l}$ denote the lattice features at the input points, with $d_l$ denoting the dimensionality of the feature space in which the convolution operates. For instance, the lattice features can be position and color features ($XYZRGB$), which define a 6-dimensional filtering space for BCL. For standard 3D spatial filtering of point clouds, $L$ is given as the positional features ($XYZ$) at each point. Thus BCL takes input features $F$ and lattice features $L$ of the input points and performs $d_l$-dimensional filtering of the points.
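To make this interface concrete, the following minimal NumPy sketch (illustrative only; the array names are ours and not from the paper's implementation) constructs input features $F$ and lattice features $L$ for a toy point cloud, with $d_f = 6$ and $d_l = 3$:

```python
import numpy as np

n = 1000                         # number of points
xyz = np.random.rand(n, 3)       # point positions
rgb = np.random.rand(n, 3)       # per-point colors

# Input features F (n x d_f): here position + color, so d_f = 6.
F = np.concatenate([xyz, rgb], axis=1)

# Lattice features L (n x d_l): XYZ only (d_l = 3), so the BCL filters in 3D space.
L = xyz

# A 6-dimensional bilateral filtering space would instead use L = F (XYZRGB).
print(F.shape, L.shape)          # (1000, 6) (1000, 3)
```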

Figure 3: SPLATNet. Illustration of inputs, outputs and network architectures for SPLATNet$_{3D}$ and SPLATNet$_{2D\text{-}3D}$.

3.2 Processing steps in BCL

As illustrated in Figure 2, BCL has three processing steps of splat, convolve and slice that work as follows.

Splat.

BCL first projects the input features $F$ onto the $d_l$-dimensional grid defined by the lattice features $L$, via barycentric interpolation. Following [1], BCL uses a permutohedral lattice instead of a standard Euclidean grid for efficiency purposes. The size of the lattice simplices, or the spacing between the grid points, is controlled by scaling the lattice features $\Lambda L$, where $\Lambda$ is a diagonal $d_l \times d_l$ scaling matrix.

Convolve.

Once the input points are projected onto the $d_l$-dimensional lattice, BCL performs $d_l$-dimensional convolution on the splatted signal with learnable filter kernels. Just like in standard spatial CNNs, BCL allows for easy specification of the filter neighborhood in the $d_l$-dimensional space.

Slice.

The filtered signal is then mapped back to the input points via barycentric interpolation. The resulting signal can be passed on to other BCLs for further processing. This step is called ‘slicing’. BCL also allows slicing the filtered signal onto a different set of points than the input points. This is achieved by specifying a different set of lattice features $L_{out}$ at the output points of interest.

All of the above three processing steps in BCL can be written as matrix multiplication:

\[ \hat{F}_c = S_{slice}\, B_{conv}\, S_{splat}\, F_c \qquad (1) \]

where $F_c$ denotes the $c^{\text{th}}$ column/channel of the input $F$ and $\hat{F}_c$ denotes the corresponding filtered signal; $S_{splat}$, $B_{conv}$ and $S_{slice}$ denote the splat, convolve and slice operations, respectively.
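The toy NumPy sketch below illustrates the structure of Eq. (1) under strong simplifications: a regular 1D grid stands in for the permutohedral lattice, a fixed 3-tap kernel stands in for the learned filter weights, and the splat normalization used in practice is omitted. None of the names come from the paper's code; the point is only to show splat, convolve and slice as matrix products applied to one feature channel.

```python
import numpy as np

def splat_matrix(lattice_coords, num_cells):
    """Barycentric (linear) interpolation of n points onto a 1D lattice.
    Returns S (num_cells x n) such that S @ f splats a per-point signal f."""
    S = np.zeros((num_cells, len(lattice_coords)))
    for i, c in enumerate(lattice_coords):
        lo = int(np.floor(c))
        w = c - lo
        S[lo, i] += 1.0 - w
        S[min(lo + 1, num_cells - 1), i] += w
    return S

n, num_cells = 50, 10
points = np.random.rand(n)                 # toy 1D "point cloud"
coords = points * (num_cells - 1)          # scaled lattice coordinates (Lambda * L)
f = np.sin(2 * np.pi * points)             # one input feature channel F_c

S_splat = splat_matrix(coords, num_cells)
S_slice = S_splat.T                        # slicing back onto the same input points

# 1-neighborhood convolution on the lattice, written as a banded matrix B_conv.
kernel = np.array([0.25, 0.5, 0.25])       # stands in for learned filter weights
B_conv = np.zeros((num_cells, num_cells))
for i in range(num_cells):
    for k, offset in zip(kernel, (-1, 0, 1)):
        j = i + offset
        if 0 <= j < num_cells:
            B_conv[i, j] = k

f_hat = S_slice @ B_conv @ S_splat @ f     # Eq. (1): slice(convolve(splat(F_c)))
print(f_hat.shape)                         # (50,): filtered signal back on the points
```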

3.3 Properties of BCL

There are several properties of BCL that make it particularly convenient for point cloud processing. Here, we mention some of these properties:

  • The input to BCL need not be ordered or lie on a grid, as the input points are projected onto the $d_l$-dimensional grid defined by the lattice features $L$.

  • The input and output points can be different for a BCL, via the specification of different input and output lattice features $L_{in}$ and $L_{out}$.

  • Since BCL allows for separate specification of input and lattice features, a given input signal can be projected into a different-dimensional space for filtering. For instance, a 2D image can be projected onto 3D space for filtering.

  • Just like in standard spatial convolutions, BCL allows for easy specification of filter neighborhood. This allows for flexible neural network architectures.

  • Since a signal is usually sparse in high dimensions, BCL uses a hash table to keep track of the populated lattice points and performs convolution only at those locations. This enables efficient processing of sparse inputs (a toy sketch of this idea follows after this section).

Refer to [1] for more information about sparse high-dimensional Gaussian filtering on a permutohedral lattice and refer to [22] for more information on BCL.
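The hash-table idea can be illustrated with a short, self-contained sketch (a simplification on a regular 3D grid with nearest-cell splatting; the real BCL uses a permutohedral lattice with barycentric splatting, and the neighborhood structure differs). A Python dict plays the role of the hash table, so memory and computation scale with the number of occupied cells rather than with the full lattice:

```python
import numpy as np

def sparse_lattice_conv(points, features, scale, offsets, weights):
    """Toy sparse convolution evaluated only at occupied cells of a 3D grid."""
    # Splat (nearest cell): accumulate per-point features into occupied cells.
    table = {}
    for p, f in zip(points, features):
        key = tuple(np.floor(p * scale).astype(int))
        table[key] = table.get(key, 0.0) + f

    # Convolve only where data is present; empty neighbor cells contribute nothing.
    out = {}
    for key in table:
        val = np.zeros(features.shape[1])
        for off, w in zip(offsets, weights):
            nb = tuple(np.add(key, off))
            if nb in table:
                val += w * table[nb]
        out[key] = val
    return out

points = np.random.rand(200, 3)
features = np.random.rand(200, 4)
offsets = [(0, 0, 0), (1, 0, 0), (-1, 0, 0), (0, 1, 0),
           (0, -1, 0), (0, 0, 1), (0, 0, -1)]        # small axis-aligned neighborhood
weights = np.random.rand(len(offsets))               # stands in for learned filter weights
filtered = sparse_lattice_conv(points, features, 8.0, offsets, weights)
print(len(filtered), "occupied cells convolved")
```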

4 SPLATNet for Point Cloud Processing

We first introduce SPLATNet$_{3D}$, an instantiation of our proposed network architecture which operates directly on 3D point clouds and is readily applicable to many important 3D tasks. The input to SPLATNet$_{3D}$ is a 3D point cloud $P \in \mathbb{R}^{n \times d}$, where $n$ denotes the number of points and $d$ denotes the number of feature dimensions, including point locations. Additional features are often available either directly from 3D sensors or through pre-processing. These can be RGB colors, surface normal directions, curvature, \etc. at the input points.

As output, SPLATNet$_{3D}$ produces per-point predictions. Tasks like 3D semantic segmentation and 3D object part labeling fit naturally under this framework. With simple techniques such as global pooling [32], SPLATNet$_{3D}$ can be modified to produce a single output vector and can thus be extended to other tasks such as classification.

Network architecture.

The architecture of SPLATNet$_{3D}$ is depicted in Figure 3. The network starts with a single CONV layer followed by a series of BCLs. The CONV layer processes each input point separately without any data aggregation. The functionality of BCLs was explained in Section 3. For SPLATNet$_{3D}$, we use $T$ BCLs, each operating on a 3D lattice ($d_l = 3$) constructed from the 3D positional features at the input points, $L = XYZ$. We note that different BCLs use different lattice feature scales $\Lambda$. Recall from Section 3 that the lattice feature scaling $\Lambda$ is a diagonal matrix that controls the spacing between the grid points in the lattice. For the BCLs in SPLATNet$_{3D}$, we use the same lattice scale along each of the $X$, $Y$ and $Z$ directions, \ie, $\Lambda = \lambda I_3$, where $\lambda$ is a scalar and $I_3$ denotes the $3 \times 3$ identity matrix. We start with an initial lattice scale $\Lambda_0$ for the first BCL and subsequently divide the lattice scale by a factor of 2 ($\Lambda_t = \Lambda_{t-1}/2$) for each of the following BCLs. In other words, SPLATNet$_{3D}$ with $T$ BCLs uses the lattice feature scales $(\Lambda_0, \Lambda_0/2, \ldots, \Lambda_0/2^{T-1})$. Lower lattice feature scales imply coarser lattices and larger receptive fields for the filters. Thus, in SPLATNet$_{3D}$, deeper BCLs have longer-range connectivity between input points compared to earlier layers. We discuss the effect of lattice spaces and their scales in more detail below. Like standard CNNs, SPLATNet$_{3D}$ allows for easy specification of filter neighborhoods. For all the BCLs, we use filters operating on 1-neighborhoods (\ie, one-ring) and refer to the supp. material for details on the number of filters per layer.

The responses of the BCLs are concatenated and then passed through two additional CONV layers. Then the output of the final layer passes through a SoftMax layer, which produces point-wise class label probabilities. The concatenation operation aggregates information from different BCLs operating at different lattice scales. Similar techniques of concatenation from network layers at different depths have been useful in 2D CNNs [17]. All the network layers, except for the last CONV layer, are followed by ReLU and BatchNorm layers. More details about the network architecture are given in the supp. material.
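The following schematic forward pass summarizes this structure in NumPy pseudo-code. It is a sketch, not the paper's implementation: `conv1x1` and `bcl` are hypothetical stand-ins with random weights (a real BCL is the splat-convolve-slice operation of Section 3, implemented as Caffe layers), and the channel widths, number of BCLs and initial scale are illustrative placeholders.

```python
import numpy as np

def conv1x1(x, out_channels):
    """1x1 CONV over per-point features: a per-point linear map + ReLU."""
    w = np.random.randn(x.shape[1], out_channels) * 0.1
    return np.maximum(x @ w, 0.0)

def bcl(features, lattice_features, scale, out_channels):
    """Placeholder BCL: a real one splats onto a permutohedral lattice built
    from scale * lattice_features, convolves there, and slices back."""
    return conv1x1(features, out_channels)

def splatnet3d_forward(points, in_features, num_classes, num_bcl=5, lambda0=64.0):
    x = conv1x1(in_features, 32)              # initial per-point CONV layer
    responses, scale = [], lambda0
    for _ in range(num_bcl):
        x = bcl(x, points, scale, 64)         # channel widths here are illustrative
        responses.append(x)
        scale /= 2.0                          # coarser lattice -> larger receptive field
    x = np.concatenate(responses, axis=1)     # concatenate multi-scale BCL responses
    x = conv1x1(x, 128)                       # penultimate CONV
    return x @ (np.random.randn(128, num_classes) * 0.1)   # final CONV -> class scores

points = np.random.rand(1000, 3)              # XYZ lattice features
in_features = np.random.rand(1000, 7)         # e.g. RGB + normal + height above ground
print(splatnet3d_forward(points, in_features, num_classes=7).shape)   # (1000, 7)
```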

Lattice spaces and their scales.

One of the main distinguishing features of SPLATNet compared to existing 3D networks is its use of convolutions on sparse lattices while still taking unordered point clouds as input. The use of BCLs in SPLATNet allows for easy specification of lattice spaces via input lattice features and also the lattice scale via a scaling matrix.

Changing the lattice feature scales directly affects the resolution of the signal on which the convolution operates. This gives us direct control over the receptive fields of network layers. Figure 7 shows lattice cell visualizations for different lattice spaces and scales, with points falling in the same lattice cell shown in the same color. Using a coarser lattice increases the effective receptive field of a filter. Another way to increase the receptive field of a filter is to increase its neighborhood size, but in high dimensions this exponentially increases the number of filter weights. For instance, a standard Euclidean filter with a 1-neighborhood in 3D space ($3 \times 3 \times 3$) has 27 parameters, whereas a filter with a 7-neighborhood ($15 \times 15 \times 15$) has 3375 parameters. In contrast, increasing the receptive field of a filter by making the lattice coarser does not increase the number of filter parameters, leading to more computationally efficient network architectures.
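The counting argument is easy to verify for dense Euclidean filters (permutohedral-lattice filters have different but similarly growing counts):

```python
def euclidean_filter_params(d, r):
    """Weights in a dense d-dimensional Euclidean filter with neighborhood
    size r, i.e. a (2r+1)^d filter."""
    return (2 * r + 1) ** d

print(euclidean_filter_params(3, 1))   # 27   (3 x 3 x 3 filter)
print(euclidean_filter_params(3, 7))   # 3375 (15 x 15 x 15 filter)
```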

We observe that it is beneficial to use finer lattices (larger lattice feature scales) earlier in the network, and coarser lattices (smaller lattice feature scales) going deeper. This is consistent with common practice in 2D CNNs: gradually increasing the receptive field through the network helps build hierarchical representations with varying spatial extents and abstraction levels. Although we mainly experiment with $XYZ$ lattices in this work, BCL allows for other lattice spaces such as a position and color space ($XYZRGB$) or a normal space. Using different lattice spaces enforces different connectivity across input points, which may be beneficial to the task. In one of our experiments, we evaluate a variant of SPLATNet$_{3D}$ with an extra BCL using 6-dimensional position and normal lattice features and observe minor performance improvements.

Figure 7: Effect of different lattices and their scales. Lattice visualizations for different feature spaces along with different lattice scales $\Lambda$; ‘Normals’ refers to surface normal features. All points falling in the same lattice cell are shown with the same color.

5 Joint 2D-3D Networks with SPLATNet

Oftentimes, 3D point clouds are accompanied by 2D images of the same scene. For instance, many modern 3D sensors capture RGBD video streams and perform 3D reconstruction to obtain 3D point clouds, resulting in both 2D images and a point cloud of a scene, together with point correspondences between 2D and 3D. One could also easily sample point clouds along with 2D renderings from a given 3D mesh. When such aligned 2D-3D data is available, SPLATNet provides an extremely flexible framework for joint processing. We propose SPLATNet$_{2D\text{-}3D}$, another SPLATNet instantiation, designed for such joint processing.

The network architecture of SPLATNet$_{2D\text{-}3D}$ is depicted in the green box of Figure 3. SPLATNet$_{2D\text{-}3D}$ encompasses SPLATNet$_{3D}$ as one of its components and adds extra computational modules for joint 2D-3D processing. Next, we explain each of these extra components of SPLATNet$_{2D\text{-}3D}$, in the order of their computation.

CNN$_1$.

First, we process the given multi-view 2D images using a standard 2D segmentation CNN, which we refer to as CNN$_1$. In our experiments, we use the DeepLab [10] segmentation architecture for CNN$_1$ and initialize the network weights with those pre-trained on the Pascal VOC segmentation dataset [12].

BCL$_{2D \rightarrow 3D}$.

Once the output of CNN$_1$ is computed for the given multi-view 2D images, we project it onto the 3D point cloud using a BCL with only splat and slice operations. As mentioned in Section 3, one of the interesting properties of BCL is that it allows different input and output points through separate specification of the input and output lattice features, $L_{in}$ and $L_{out}$. Using this property, we use BCL to splat 2D features onto a 3D lattice and then slice the 3D splatted signal onto the point cloud. We refer to this BCL without the convolution operation as BCL$_{2D \rightarrow 3D}$; it is illustrated in Figure 8. Specifically, we use the 3D locations of the input image pixels as input lattice features, $L_{in} \in \mathbb{R}^{m \times 3}$, where $m$ denotes the number of input image pixels. In addition, we use the 3D locations of the points in the point cloud as output lattice features, $L_{out}$, which are the same lattice features used in SPLATNet$_{3D}$. The lattice feature scale controls the smoothness of the projection and can be adjusted according to the sparsity of the point cloud.
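A toy version of this projection is sketched below. It uses nearest-cell averaging on a regular grid instead of barycentric interpolation on a permutohedral lattice, and all names are ours; it is meant only to show the splat-then-slice pattern with different input and output points.

```python
import numpy as np

def project_2d_to_3d(pixel_xyz, pixel_feats, point_xyz, scale):
    """Splat pixel features onto a 3D grid using the pixels' 3D locations,
    then slice the splatted signal at the 3D points of the point cloud."""
    sums, counts = {}, {}
    for p, f in zip(pixel_xyz, pixel_feats):                  # splat
        key = tuple(np.floor(p * scale).astype(int))
        sums[key] = sums.get(key, 0.0) + f
        counts[key] = counts.get(key, 0) + 1

    out = np.zeros((len(point_xyz), pixel_feats.shape[1]))
    for i, p in enumerate(point_xyz):                         # slice
        key = tuple(np.floor(p * scale).astype(int))
        if key in counts:
            out[i] = sums[key] / counts[key]
    return out

pixel_xyz = np.random.rand(5000, 3)      # 3D location of every image pixel (L_in)
pixel_feats = np.random.rand(5000, 16)   # CNN_1 features at those pixels
point_xyz = np.random.rand(1000, 3)      # point cloud locations (L_out)
proj = project_2d_to_3d(pixel_xyz, pixel_feats, point_xyz, scale=16.0)
print(proj.shape)                        # (1000, 16): 2D features now live on 3D points
```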

Figure 8: 2D to 3D projection. Illustration of 2D to 3D projection using splat and slice operations. The given input features of the 2D image pixels are projected onto a 3D permutohedral lattice defined by 3D positional lattice features. The splatted signal is then sliced onto the points of interest in a 3D point cloud.

2D-3D Fusion.

At this point, we have the result of CNN$_1$ projected onto the 3D points, as well as intermediate features from SPLATNet$_{3D}$, which operates exclusively on the input point cloud. Since both of these signals are embedded in the same 3D space, we concatenate them and then use a series of CONV layers for further processing. The output of the ‘2D-3D Fusion’ module is passed to a SoftMax layer to compute class probabilities at each point of the input point cloud.
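Because both signals live on the same points, the fusion itself amounts to concatenation followed by per-point (1×1 CONV) layers; the sketch below uses random weights and made-up feature sizes purely for illustration.

```python
import numpy as np

feat_3d = np.random.rand(1000, 64)       # intermediate SPLATNet_3D features
feat_2d = np.random.rand(1000, 16)       # CNN_1 features after the 2D-to-3D projection
fused = np.concatenate([feat_3d, feat_2d], axis=1)

w = np.random.randn(fused.shape[1], 7) * 0.1   # stand-in for the fusion CONV layers
point_logits = fused @ w                       # per-point class scores, fed to SoftMax
print(point_logits.shape)                      # (1000, 7)
```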

BCL$_{3D \rightarrow 2D}$.

Sometimes, we are also interested in segmenting the 2D images and want to leverage relevant 3D information for better 2D segmentation. For this purpose, we back-project the 3D features computed by the ‘2D-3D Fusion’ module onto the 2D images with a BCL$_{3D \rightarrow 2D}$ module. This is the reverse operation of BCL$_{2D \rightarrow 3D}$, where the input and output lattice features are swapped. Similarly, a lattice scale hyper-parameter controls the smoothness of the projection.

CNN$_2$.

We then concatenate the output of CNN$_1$, the input images and the output of BCL$_{3D \rightarrow 2D}$, and pass them through another 2D CNN (CNN$_2$) to obtain refined 2D semantic predictions. In our experiments, we find that a simple 2-layer network is sufficient for this purpose.

All components in this 2D-3D joint processing framework are differentiable and can be trained end-to-end. Depending on the availability of 2D or 3D ground-truth labels, loss functions can be defined on either one of the two domains, or on both domains in a multi-task learning setting. More details of the network architecture are provided in the supp. material. We believe that the joint processing offered by SPLATNet$_{2D\text{-}3D}$ results in better predictions for both 2D images and 3D point clouds. For 2D images, leveraging 3D features helps produce view-consistent predictions across multi-view images. For point clouds, augmenting with 2D CNNs helps leverage powerful 2D deep CNN features computed on high-resolution images.

6 Experiments

We evaluate SPLATNet on two different benchmark datasets: RueMonge2014 [37] and ShapeNet [45]. On the RueMonge2014 dataset, we conduct experiments on the tasks of 3D point cloud labeling and multi-view image labeling. On ShapeNet, we evaluate SPLATNet on the 3D part segmentation task. We use the Caffe [23] neural network framework for all experiments and Adam stochastic optimization [25] for training the networks.

6.1 Facade segmentation

(a) Point cloud labeling

Method | Average IoU | Runtime (min)
With only 3D data:
OctNet [36] | 59.2 | -
Autocontext [14] | 54.4 | 16
SPLATNet$_{3D}$ (Ours) | 65.4 | 0.06
With both 2D and 3D data:
Autocontext [14] | 62.9 | 87
SPLATNet$_{2D\text{-}3D}$ (Ours) | 69.8 | 1.20

(b) Multi-view image labeling

Method | Average IoU | Runtime (min)
Autocontext (2D) [14] | 60.5 | 117
Autocontext (2D+3D) [14] | 62.7 | 146
DeepLab [10] | 69.3 | 0.84
SPLATNet$_{2D\text{-}3D}$ (Ours) | 70.6 | 4.34

Table 1: Results on facade segmentation. Average IoU scores and approximate runtimes for point cloud labeling and 2D image labeling using different techniques. Runtimes indicate the time taken to segment the entire test data (202 images sequentially for 2D and a point cloud for 3D).
Figure 13: Facade point cloud labeling. Sample visual results of SPLATNet$_{3D}$ and SPLATNet$_{2D\text{-}3D}$ (panels: input point cloud, ground truth, SPLATNet$_{3D}$ prediction, SPLATNet$_{2D\text{-}3D}$ prediction).

Here, the task is to assign a semantic label to every point in a point cloud and/or every pixel in the corresponding multi-view 2D images.

Dataset.

The RueMonge2014 dataset [37] provides a standard benchmark for 2D and 3D facade segmentation and also for inverse procedural modeling. The dataset consists of 428 high-resolution, multi-view images obtained from a street in Paris. A point cloud with approximately 1M points is reconstructed from the multi-view images. Ground-truth labelings with seven semantic classes (door, shop, balcony, window, wall, sky and roof) are provided for both the 2D images and the 3D point cloud. Sample 2D images and point clouds with their corresponding ground truths are shown in Figures 17 and 13, respectively. For evaluation, the Intersection over Union (IoU) score is computed for each of the seven classes and then averaged to obtain a single overall IoU.
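For completeness, a minimal sketch of the average-IoU metric is given below; how classes that are absent from both prediction and ground truth are handled is our assumption, not a detail specified by the benchmark.

```python
import numpy as np

def average_iou(pred, gt, num_classes=7):
    """Per-class intersection-over-union, averaged over the facade classes."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                      # skip classes absent from both (assumption)
            ious.append(inter / union)
    return float(np.mean(ious))

gt = np.random.randint(0, 7, size=10000)   # toy labels standing in for real annotations
pred = np.where(np.random.rand(10000) < 0.8, gt, np.random.randint(0, 7, size=10000))
print(average_iou(pred, gt))
```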

Point cloud labeling.

We use our SPLATNet$_{3D}$ architecture for the task of point cloud labeling on this dataset, with 5 BCLs followed by a couple of CONV layers. Input features to the network comprise a 7-dimensional vector at each point representing RGB color, surface normal and height above the ground. For all the BCLs, we use an $XYZ$ lattice space ($d_l = 3$) with an initial scale of $\Lambda_0 = 32 I_3$. Experimental results with average IoU and runtime are shown in Table 1. The results show that, using only 3D data, our method achieves an IoU of 65.4, which is a considerable improvement (6.2 IoU) over the state-of-the-art deep network OctNet [36].

Since this dataset comes with multi-view 2D images, one can leverage the information present in the 2D data for better point cloud labeling. We use SPLATNet$_{2D\text{-}3D}$ to leverage 2D information and obtain better 3D segmentations. Table 1 shows the experimental results when using both 2D and 3D data as input. SPLATNet$_{2D\text{-}3D}$ obtains an average IoU of 69.8, outperforming the previous state-of-the-art by a large margin (6.9 IoU) and thereby setting a new state-of-the-art on this dataset. This is also a significant improvement over the IoU obtained with SPLATNet$_{3D}$, demonstrating the benefit of leveraging 2D and 3D information in a joint framework. The runtimes in Table 1 also indicate that our SPLATNet approach is much faster than traditional Autocontext techniques. A visual result for 3D facade labeling is shown in Figure 13.

Multi-view image labeling.

As described in Section 5, we extend the 2D CNN (CNN$_1$) with CNN$_2$ to obtain better multi-view image segmentation. Table 1 shows the results of multi-view image labeling on this dataset using different techniques. Using DeepLab alone (CNN$_1$) already outperforms the existing state-of-the-art by a large margin. Leveraging 3D information via SPLATNet$_{2D\text{-}3D}$ boosts the performance to 70.6 IoU. The increase of 1.3 IoU over using only CNN$_1$ demonstrates the potential of our joint 2D-3D framework for leveraging 3D information to obtain better 2D segmentations.

Figure 17: 2D facade segmentation. Sample visual results of SPLATNet$_{2D\text{-}3D}$ (panels: input image, ground truth, SPLATNet$_{2D\text{-}3D}$ prediction).

6.2 ShapeNet part segmentation

Method | class avg. | instance avg. | airplane | bag | cap | car | chair | earphone | guitar | knife | lamp | laptop | motorbike | mug | pistol | rocket | skateboard | table
#instances | | | 2690 | 76 | 55 | 898 | 3758 | 69 | 787 | 392 | 1547 | 451 | 202 | 184 | 283 | 66 | 152 | 5271
Yi \etal [45] | 79.0 | 81.4 | 81.0 | 78.4 | 77.7 | 75.7 | 87.6 | 61.9 | 92.0 | 85.4 | 82.5 | 95.7 | 70.6 | 91.9 | 85.9 | 53.1 | 69.8 | 75.3
3DCNN [32] | 74.9 | 79.4 | 75.1 | 72.8 | 73.3 | 70.0 | 87.2 | 63.5 | 88.4 | 79.6 | 74.4 | 93.9 | 58.7 | 91.8 | 76.4 | 51.2 | 65.3 | 77.1
Kd-network [26] | 77.4 | 82.3 | 80.1 | 74.6 | 74.3 | 70.3 | 88.6 | 73.5 | 90.2 | 87.2 | 81.0 | 94.9 | 57.4 | 86.7 | 78.1 | 51.8 | 69.9 | 80.3
PointNet [32] | 80.4 | 83.7 | 83.4 | 78.7 | 82.5 | 74.9 | 89.6 | 73.0 | 91.5 | 85.9 | 80.8 | 95.3 | 65.2 | 93.0 | 81.2 | 57.9 | 72.8 | 80.6
PointNet++ [34] | 81.9 | 85.1 | 82.4 | 79.0 | 87.7 | 77.3 | 90.8 | 71.8 | 91.0 | 85.9 | 83.7 | 95.3 | 71.6 | 94.1 | 81.3 | 58.7 | 76.4 | 82.6
SyncSpecCNN [46] | 82.0 | 84.7 | 81.6 | 81.7 | 81.9 | 75.2 | 90.2 | 74.9 | 93.0 | 86.1 | 84.7 | 95.6 | 66.7 | 92.7 | 81.6 | 60.6 | 82.9 | 82.1
SPLATNet$_{3D}$ | 82.0 | 84.6 | 81.9 | 83.9 | 88.6 | 79.5 | 90.1 | 73.5 | 91.3 | 84.7 | 84.5 | 96.3 | 69.7 | 95.0 | 81.7 | 59.2 | 70.4 | 81.3
SPLATNet$_{2D\text{-}3D}$ | 83.3 | 85.1 | 82.8 | 84.2 | 88.6 | 80.0 | 90.5 | 73.5 | 91.7 | 86.2 | 84.5 | 96.3 | 74.7 | 95.7 | 83.4 | 64.0 | 74.5 | 81.3
Table 2: Results on ShapeNet part segmentation. Class average mIoU, instance average mIoU and mIoU scores for all the categories on the task of point cloud labeling using different techniques.

The task of part segmentation is to assign a part category label to each point in a point cloud representing a 3D object.

Dataset.

The ShapeNet Part dataset [45] is a subset of ShapeNet, which contains 16681 objects from 16 categories, each with 2-6 part labels. The objects are consistently aligned and scaled to fit into a unit cube, and the ground-truth annotations are provided on sampled points on the shape surfaces. It is common to assume that the category of the input 3D object is known, narrowing the possible part labels to the ones specific to the given object category. We report standard Intersection over Union (IoU) scores for evaluation of part segmentation. An IoU score is computed for each object and then averaged within the objects in a category to compute mean IoU (mIoU) for each object category. In addition to reporting mIoU score for each object category, we also report ‘class average mIoU’ which is the average mIoU across all object categories, and also ‘instance average mIoU’, which is the average mIoU across all objects.
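The difference between the two aggregate numbers can be summarized in a short sketch (toy data, not benchmark code):

```python
import numpy as np
from collections import defaultdict

def shapenet_miou(per_object_iou, per_object_category):
    """Instance-average mIoU: mean over all objects.
    Class-average mIoU: mean within each category, then mean over categories."""
    instance_avg = float(np.mean(per_object_iou))
    by_cat = defaultdict(list)
    for iou, cat in zip(per_object_iou, per_object_category):
        by_cat[cat].append(iou)
    class_avg = float(np.mean([np.mean(v) for v in by_cat.values()]))
    return class_avg, instance_avg

# Toy example: 5 objects from 2 categories.
ious = [0.9, 0.8, 0.7, 0.95, 0.85]
cats = ['chair', 'chair', 'chair', 'mug', 'mug']
print(shapenet_miou(ious, cats))   # class avg. weights 'chair' and 'mug' equally
```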

Figure 18: ShapeNet part segmentation. Sample visual results of SPLATNet$_{3D}$ and SPLATNet$_{2D\text{-}3D}$.

3D part segmentation.

We use both SPLATNet$_{3D}$ and SPLATNet$_{2D\text{-}3D}$ for this task. First, we discuss the architecture and results of SPLATNet$_{3D}$, which uses only 3D point clouds as input. Since the category of the object is assumed to be known, we train separate segmentation models for each object category. The SPLATNet$_{3D}$ architecture for this task is also composed of 5 BCLs. Point locations are used both as input features and as lattice features for all the BCLs, starting from an initial lattice feature scale $\Lambda_0$ for the first BCL. Experimental results are shown in Table 2. SPLATNet$_{3D}$ obtains a class average mIoU of 82.0 and an instance average mIoU of 84.6, which is on par with the best networks that take only point clouds as input (PointNet++ [34] uses surface normals as additional input).

We also apply our SPLATNet$_{2D\text{-}3D}$ network, which operates on both 2D and 3D data, to this task. For the joint framework to work, we need rendered 2D views and the corresponding 3D location for each pixel in the renderings. We first render 3-channel images: Phong shading [31], depth, and height from the ground plane. Cameras are placed on the 20 vertices of a dodecahedron at a fixed distance, pointing towards the object’s center. The 2D-3D correspondences are generated by carrying the coordinates of the 3D points through the rendering rasterization pipeline, so that each pixel also acquires the coordinates of the surface point projected onto it. Results in Table 2 show that incorporating 2D information allows SPLATNet$_{2D\text{-}3D}$ to improve noticeably over SPLATNet$_{3D}$, with a 1.3 and 0.5 increase in class and instance average mIoU, respectively. SPLATNet$_{2D\text{-}3D}$ obtains a class average mIoU of 83.3, outperforming existing state-of-the-art approaches, while performing on par with the state-of-the-art in terms of instance average mIoU (85.1).
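The view sampling can be sketched as follows; only the camera placement geometry is shown (the rendering itself, image resolution and up-vector conventions are left unspecified and are not taken from the paper).

```python
import numpy as np

def dodecahedron_cameras(center, distance):
    """Place 20 cameras at the vertices of a regular dodecahedron around
    `center`, all at the same distance and looking at the center."""
    phi = (1 + np.sqrt(5)) / 2
    verts = []
    for x in (-1, 1):
        for y in (-1, 1):
            for z in (-1, 1):
                verts.append((x, y, z))                       # 8 cube corners
    for a in (-1 / phi, 1 / phi):
        for b in (-phi, phi):
            verts += [(0, a, b), (a, b, 0), (b, 0, a)]        # remaining 12 vertices
    verts = np.array(verts, dtype=float)
    verts /= np.linalg.norm(verts, axis=1, keepdims=True)     # unit viewing directions
    eyes = center + distance * verts                          # camera positions
    look_dirs = center - eyes                                 # all pointing at the center
    return eyes, look_dirs

eyes, dirs = dodecahedron_cameras(center=np.zeros(3), distance=2.5)
print(eyes.shape)   # (20, 3)
```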

Six dimensional filtering.

We experiment with a variant of SPLATNet$_{3D}$ where an additional BCL with 6-dimensional position and normal lattice features is added between the last two CONV layers. This modification gave only a marginal improvement in IoU over the standard SPLATNet$_{3D}$, in terms of both class and instance average mIoU scores.

7 Conclusion

In this work, we propose a new SPLATNet architecture for point cloud processing. SPLATNet directly takes point clouds as input and computes hierarchical and spatially-aware features with sparse and efficient lattice filters. In addition, SPLATNet allows for an easy mapping of 2D information into 3D and vice-versa, resulting in a novel network architecture for joint processing of point clouds and the corresponding multi-view images. Experiments on two different benchmark datasets show that the proposed networks compare favourably against state-of-the-art approaches for scene labeling tasks. In the future, we would like to explore the use of additional input features (\eg, texture) and other high-dimensional lattice spaces in our network.

Supplementary

In this supplementary material, we provide additional details and explanations to help readers gain a better understanding of our techniques.

Appendix A Facade Segmentation

Network architecture of SPLATNet$_{3D}$.

We use 5 BCLs ($d_l = 3$) followed by 2 CONV layers in SPLATNet$_{3D}$ for the facade segmentation task. We omit the initial CONV layer since we find it has no effect on the overall performance. The numbers of output channels in the layers are: B64-B128-B128-B128-B64-C64-C7, where layers starting with ‘B’ are BCLs and layers starting with ‘C’ are $1\times 1$ CONV layers. Note that although written as a linear structure, the network has skip connections from all BCLs (layers starting with ‘B’) to the penultimate $1\times 1$ CONV layer. We use an initial scale $\Lambda_0 = 32 I_3$ for scaling the $XYZ$ lattice features, and divide the scale by half after each BCL: $\Lambda_t = \Lambda_{t-1}/2$. The unit of the raw input features is meters, with the axis aligned with gravity spanning a range measured in meters. For all the BCLs, we use filters operating on 1-neighborhoods (\ie, one-ring neighborhoods).

Network architecture of SPLATNet$_{2D\text{-}3D}$.

We use SPLATNet$_{3D}$ as described above as the 3D component of the 2D-3D joint model. The ‘2D-3D Fusion’ component has 2 CONV layers with output channels 64-7. The DeepLab [10] segmentation architecture is used as CNN$_1$. CNN$_2$ is a small network with 2 CONV layers, where the first layer has 32 output channels and the second one has 7 output channels. Lattice scales control the 2D$\leftrightarrow$3D interpolation performed by BCL$_{2D \rightarrow 3D}$ and BCL$_{3D \rightarrow 2D}$. Note that the dataset provides one-to-many mappings from 3D points to pixels. By using a very large scale for BCL$_{3D \rightarrow 2D}$, 3D unaries are directly mapped to the corresponding 2D pixel locations without interpolation.

Training.

We randomly sample facade segments of 60k points and use a batch size of 4 when training SPLATNet. CNN$_1$ is initialized with Pascal VOC [12] pre-trained weights and fine-tuned for 2D facade segmentation. The Adam optimizer [25] is used for training both SPLATNet$_{3D}$ and SPLATNet$_{2D\text{-}3D}$. Since the training set is small, we augment the point cloud training data with random rotations, translations and small color perturbations. We also augment the 2D image data with small color perturbations during training.
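A sketch of such point cloud augmentation is given below; the rotation axis, translation range and color-jitter magnitude are our placeholders, since the exact values are not specified here.

```python
import numpy as np

def augment_point_cloud(xyz, rgb, max_angle=np.pi, max_shift=0.5, color_jitter=0.05):
    """Random rotation about the gravity axis, random translation and a small
    color perturbation (parameter ranges are illustrative assumptions)."""
    theta = np.random.uniform(-max_angle, max_angle)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])   # rotation around Z
    xyz = xyz @ R.T + np.random.uniform(-max_shift, max_shift, size=3)
    rgb = np.clip(rgb + np.random.normal(0.0, color_jitter, size=rgb.shape), 0.0, 1.0)
    return xyz, rgb

xyz, rgb = np.random.rand(60000, 3), np.random.rand(60000, 3)
xyz_aug, rgb_aug = augment_point_cloud(xyz, rgb)
print(xyz_aug.shape, rgb_aug.shape)
```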

Appendix B ShapeNet Part Segmentation


Figure 19: Incorrect labels
Figure 20: Incomplete labels
Figure 21: Inconsistent labels
Figure 22: Confusing labels
Figure 23: Labeling issues in the ShapeNet Part dataset. Four types of labeling issues are shown here. Two examples from the test set are given for each type, where the first row shows the ground-truth labels and the second row shows our predictions (SPLATNet). Our predictions appear more accurate than the ground truth in some cases.

Network architecture of SPLATNet$_{3D}$.

We use a CONV layer in the beginning, followed by 5 BCLs ($d_l = 3$), and then 2 CONV layers in SPLATNet$_{3D}$ for the ShapeNet part segmentation task. The numbers of output channels in the layers are: 32-B64-B128-B256-B256-B256-128-Cx, where ‘x’ in the last CONV layer denotes the number of part categories, ranging from 2 to 6 across object categories. We use an initial scale $\Lambda_0$ for scaling the $XYZ$ lattice features, and divide the scale by half after each BCL ($\Lambda_t = \Lambda_{t-1}/2$).

Network architecture of SPLATNet$_{2D\text{-}3D}$.

We use SPLATNet$_{3D}$ as described above as the 3D component of the joint model. The ‘2D-3D Fusion’ component has 2 CONV layers with output channels 128-x. The same DeepLab architecture is used for CNN$_1$. A lattice scale is chosen for BCL$_{2D \rightarrow 3D}$ to control the smoothness of the 2D-to-3D projection. Since 2D predictions are not needed for this task, CNN$_2$ and BCL$_{3D \rightarrow 2D}$ are omitted.

Training.

We train separate models for each object category. CNN$_1$ is initialized the same way as in the facade experiment. The Adam optimizer [25] is used. We augment the point cloud data with random rotations, translations and scalings during training.

Dataset labeling issues.

We observed a few types of labeling issues in the ShapeNet Part dataset:

  • Some object part categories are frequently labeled incorrectly. \Eg, skateboard trucks are often mistakenly labeled as ‘deck’ or ‘wheel’ (Figure 19).

  • Some object parts, \eg, the ‘fin’ of some rockets, have incomplete range or coverage (Figure 20).

  • Some object part categories are labeled inconsistently between shapes. \Eg, airplane landing gears are variously labeled as ‘body’, ‘engine’, or ‘wings’ (Figure 21).

  • Some categories have parts that are labeled as ‘other’, which can be confusing for the classifier, as these parts do not have clear semantic meanings or structures. \Eg, in the case of earphones, anything that is not ‘headband’ or ‘earphone’ is given the same label (‘other’) (Figure 22).

The first two issues make evaluations and comparisons on the benchmark less reliable, while the other two make learning ill-posed or unnecessarily hard for the networks.

References

  1. A. Adams, J. Baek, and M. A. Davis. Fast high-dimensional filtering using the permutohedral lattice. In Computer Graphics Forum, volume 29, pages 753–762. Wiley Online Library, 2010.
  2. V. Aurich and J. Weule. Non-linear Gaussian filters performing edge preserving diffusion. In DAGM, pages 538–545. Springer, 1995.
  3. S. Bai, X. Bai, Z. Zhou, Z. Zhang, and L. J. Latecki. GIFT: A real-time and scalable 3d shape search engine. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, pages 5023–5032, 2016.
  4. D. Boscaini, J. Masci, S. Melzi, M. M. Bronstein, U. Castellani, and P. Vandergheynst. Learning class-specific descriptors for deformable shapes using localized spectral convolutional networks. In Proceedings of the Symposium on Geometry Processing, 2015.
  5. D. Boscaini, J. Masci, E. Rodolà, and M. M. Bronstein. Learning shape correspondence with anisotropic convolutional neural networks. In The Conference and Workshop on Neural Information Processing Systems (NIPS 16), 2016.
  6. A. Brock, T. Lim, J. M. Ritchie, and N. Weston. Generative and discriminative voxel modeling with convolutional neural networks. CoRR, 2016.
  7. M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst. Geometric deep learning: Going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017.
  8. J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral networks and locally connected networks on graphs. CoRR, abs/1312.6203, 2013.
  9. Z. Cao, Q. Huang, and K. Ramani. 3d object classification via spherical projections. In 3DV, 2017.
  10. L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In International Conference on Learning Representations (ICLR), 2015.
  11. M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. CoRR, abs/1606.09375, 2016.
  12. M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98–136, Jan. 2015.
  13. D. Ezuz, J. Solomon, V. G. Kim, and M. Ben-Chen. Gwcnn: A metric alignment layer for deep shape analysis. Computer Graphics Forum, 36(5), 2017.
  14. R. Gadde, V. Jampani, R. Marlet, and P. Gehler. Efficient 2d and 3d facade segmentation using auto-context. IEEE Trans. PAMI, 2017.
  15. A. Garcia-Garcia, F. Gomez-Donoso, J. G. Rodríguez, S. Orts, M. Cazorla, and J. A. López. Pointnet: A 3d convolutional neural network for real-time object class recognition. 2016 International Joint Conference on Neural Networks (IJCNN), pages 1578–1584, 2016.
  16. B. Graham and L. van der Maaten. Submanifold sparse convolutional networks. arXiv preprint arXiv:1706.01307, 2017.
  17. B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 447–456, 2015.
  18. V. Hegde and R. Zadeh. Fusionnet: 3d object classification using multiple data representations. CoRR, abs/1607.05695, 2016.
  19. M. Henaff, J. Bruna, and Y. LeCun. Deep convolutional networks on graph-structured data. CoRR, abs/1506.05163, 2015.
  20. H. Huang, E. Kalegorakis, S. Chaudhuri, D. Ceylan, V. Kim, and E. Yumer. Learning local shape descriptors with view-based convolutional neural networks. ACM Transactions on Graphics, 2018.
  21. V. Jampani, R. Gadde, and P. V. Gehler. Video propagation networks. In Proc. CVPR, 2017.
  22. V. Jampani, M. Kiefel, and P. V. Gehler. Learning sparse high dimensional filters: Image filtering, dense crfs and bilateral neural networks. In Proc. CVPR, 2016.
  23. Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM international conference on Multimedia, pages 675–678. ACM, 2014.
  24. E. Kalogerakis, M. Averkiou, S. Maji, and S. Chaudhuri. 3D shape segmentation with projective convolutional networks. In Proc. CVPR, 2017.
  25. D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  26. R. Klokov and V. Lempitsky. Escape from cells: Deep kd-networks for the recognition of 3D point cloud models. In Proc. ICCV, 2017.
  27. H. Maron, M. Galun, N. Aigerman, M. Trope, N. Dym, E. Yumer, V. G. Kim, and Y. Lipman. Convolutional neural networks on surfaces via seamless toric covers. ACM Trans. Graph., 36(4), 2017.
  28. J. Masci, D. Boscaini, M. Bronstein, and P. Vandergheynst. Geodesic convolutional neural networks on riemannian manifolds. In Proceedings of the IEEE International Conference on Computer Vision Workshops, 2015.
  29. D. Maturana and S. Scherer. 3D convolutional neural networks for landing zone detection from lidar. In IEEE International Conference on Robotics and Automation (ICRA 15), 2015.
  30. F. Monti, D. Boscaini, J. Masci, E. Rodola, J. Svoboda, and M. M. Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In IEEE Conference on Computer Vision and Pattern Recognition(CVPR 17), 2017.
  31. B. T. Phong. Illumination for computer generated pictures. Commun. ACM, 18(6), 1975.
  32. C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In CVPR, 2017.
  33. C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. J. Guibas. Volumetric and multi-view cnns for object classification on 3D data. In IEEE Conference on Computer Vision and Pattern Recognition(CVPR 16), 2016.
  34. C. R. Qi, L. Yi, H. Su, and L. Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In NIPS, 2017.
  35. G. Riegler, A. O. Ulusoy, H. Bischof, and A. Geiger. Octnetfusion: Learning depth fusion from data. In Proceedings of the International Conference on 3D Vision, 2017.
  36. G. Riegler, A. O. Ulusoys, and A. Geiger. Octnet: Learning deep 3D representations at high resolutions. In Proc. CVPR, 2017.
  37. H. Riemenschneider, A. Bódis-Szomorú, J. Weissenberg, and L. Van Gool. Learning where to classify in multi-view semantic segmentation. In Proc. ECCV, 2014.
  38. N. Sedaghat, M. Zolfaghari, E. Amiri, and T. Brox. Orientation-boosted voxel nets for 3d object recognition. In British Machine Vision Conference (BMVC), 2017.
  39. A. Sinha, J. Bai, and K. Ramani. Deep learning 3D shape surfaces using geometry images. In European Conference on Computer Vision (ECCV 16), 2016.
  40. H. Su, S. Maji, E. Kalogerakis, and E. G. Learned-Miller. Multi-view convolutional neural networks for 3D shape recognition. In Proc. ICCV, 2015.
  41. M. Tatarchenko, A. Dosovitskiy, and T. Brox. Octree generating networks: Efficient convolutional architectures for high-resolution 3d outputs. In IEEE International Conference on Computer Vision (ICCV), 2017.
  42. C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. In Computer Vision, 1998. Sixth International Conference on, pages 839–846. IEEE, 1998.
  43. P.-S. Wang, Y. Liu, Y.-X. Guo, C.-Y. Sun, and X. Tong. O-cnn: Octree-based convolutional neural networks for 3d shape analysis. ACM Trans. Graph., 36(4), 2017.
  44. Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3D shapenets: A deep representation for volumetric shapes. In IEEE Conference on Computer Vision and Pattern Recognition(CVPR 15), 2015.
  45. L. Yi, V. G. Kim, D. Ceylan, I. Shen, M. Yan, H. Su, A. Lu, Q. Huang, A. Sheffer, L. Guibas, et al. A scalable active framework for region annotation in 3D shape collections. ACM Trans. Graph., 35(6):210, 2016.
  46. L. Yi, H. Su, X. Guo, and L. Guibas. Syncspeccnn: Synchronized spectral cnn for 3D shape segmentation. In Proc. CVPR, 2017.