A-CNN: Annularly Convolutional Neural Networks on Point Clouds
Abstract
Analyzing the geometric and semantic properties of 3D point clouds through deep networks is still challenging due to the irregularity and sparsity of the sampling of their geometric structures. This paper presents a new method to define and compute convolution directly on 3D point clouds via the proposed annular convolution. This new convolution operator can better capture the local neighborhood geometry of each point by specifying the (regular and dilated) ring-shaped structures and directions in the computation. It can adapt to the geometric variability and scalability at the signal processing level. We apply it to the developed hierarchical neural networks for object classification, part segmentation, and semantic segmentation in large-scale scenes. Extensive experiments and comparisons demonstrate that our approach outperforms the state-of-the-art methods on a variety of standard benchmark datasets (e.g., ModelNet10, ModelNet40, ShapeNet-part, S3DIS, and ScanNet).
1 Introduction
Nowadays, the ability to understand and analyze 3D data is becoming increasingly important in the computer vision and computer graphics communities. During the past few years, researchers have applied deep learning methods to analyze 3D objects, inspired by the successes of these techniques on 2D images and 1D text. Traditional low-level hand-crafted shape descriptors suffer from not being able to learn discriminative and sufficient features from 3D shapes [1]. Recently, deep learning techniques have been applied to extract hierarchical and effective information from 3D shape features captured by low-level descriptors [20, 6]. 3D deep learning methods are widely used in shape classification, segmentation, recognition, etc., but all these methods are still constrained by the representation power of the shape descriptors.
One of the main challenges in directly applying deep learning methods to 3D data is that 3D objects can be represented in different formats, i.e., regular / structured representations (e.g., multi-view images and volumes) and irregular / unstructured representations (e.g., point clouds and meshes). There are extensive approaches based on regular / structured representations, such as multi-view convolutional neural networks (CNNs) [34, 27, 10] and 3D volumetric / grid CNN methods and their variants [40, 27, 29, 37, 38, 16, 9]. These methods can be conveniently developed and implemented with 3D data structures, but they easily suffer from heavy computation and large memory expense. It is therefore preferable to define the deep learning computations on 3D shapes directly, i.e., on irregular / unstructured representations, as in point cloud based methods [26, 28, 13, 32, 3, 18, 19, 35, 17, 44, 36, 8, 42]. However, defining the convolution on the irregular / unstructured representation of 3D objects is not an easy task, and very few methods on point clouds have defined an effective and efficient convolution on each point. Meanwhile, several approaches have been proposed to develop convolutional networks on 2D manifolds [22, 4, 24, 41]. Their representations (e.g., 3D surface meshes) have point positions as well as connectivities, which makes it relatively easier to define a convolution operator on them.
In this work, we present a new method to define and compute convolutions directly on 3D point clouds effectively and efficiently via the proposed annular convolutions. This new convolution operator can better capture the local neighborhood geometry of each point by specifying the (regular and dilated) ring-shaped structures and directions in the computation. It can adapt to the geometric variability and scalability at the signal processing level. We then apply it, along with the developed hierarchical neural networks, to object classification, part segmentation, and semantic segmentation in large-scale scenes, as shown in Fig. 1. The key contributions of our work are as follows:

We propose a new approach to define convolutions on point clouds. The proposed annular convolutions can define arbitrary kernel sizes on each local ring-shaped region, and help to capture better geometric representations of 3D shapes;

We propose a new multi-level hierarchical method based on dilated rings, which leads to better capturing and abstracting of shape geometric details. The new dilated strategy on point clouds benefits our proposed closed-loop convolutions and pooling;

Our proposed network models achieve new state-of-the-art performance on object classification, part segmentation, and semantic segmentation of large-scale scenes on a variety of standard benchmark datasets.
2 Related Work
Given the scope of our work, we focus only on recent, closely related deep learning methods proposed for different 3D shape representations.
Volumetric Methods. One traditional way to analyze a 3D shape is to convert it into a regular volumetric occupancy grid and then apply 3D CNNs [40, 27]. The major limitation of these approaches is that 3D convolutions are computationally more expensive than their 2D counterparts. To keep the computation affordable, the volume grid is usually kept at a low resolution. However, a lower resolution means losing some shape geometric information, especially when analyzing large-scale 3D shapes / scenes. To overcome these problems, octree-based methods [29, 37, 38] have been proposed that allow applying 3D CNNs on higher / adaptive resolution grids. PointGrid [16] is a 3D CNN that incorporates a constant number of points within each grid cell, allowing the network to learn better local geometric details. Similarly, Hua et al. [9] presented a 3D convolution operator based on a uniform grid kernel for semantic segmentation and object recognition on point clouds.
Point Cloud based Methods. PointNet [26] is the first attempt to apply deep learning directly on point clouds. The PointNet model is invariant to the order of points, but it considers each point independently, without including local region information. PointNet++ [28] is a hierarchical extension of PointNet that learns local structures of point clouds at different scales, but it still considers every point in its local region independently. In our work, we address these issues by defining a convolution operator that learns the relationship between neighboring points in a local region, which helps to better capture the local geometric properties of a 3D object.
Klokov et al. [13] proposed a new deep learning architecture called Kd-networks, which uses a kd-tree structure to construct a computational graph on point clouds. KCNet [32] improves the PointNet model by considering local neighborhood information: it defines a set of learnable point-set kernels for local neighboring points and presents a pooling method based on a nearest-neighbor graph. PCNN [3] is another method that applies convolutional neural networks to point clouds by defining extension and restriction operators and mapping point cloud functions to volumetric functions. SO-Net [17] is a permutation-invariant network that exploits the spatial distribution of point clouds by building a self-organizing map. There are also spectral convolution methods on point clouds, such as SyncSpecCNN [44] and spectral graph convolution [36]. Point2Sequence [19] learns the correlation between different areas in a local region using an attention mechanism, but it does not propose a convolution on point clouds. PointCNN [18] instead proposes to transform neighboring points into a canonical order and then apply convolution.
Recently, several approaches have been proposed to process and analyze large-scale point clouds from indoor and outdoor environments. Engelmann et al. [8] extended the PointNet model to exploit large-scale spatial context. Ye et al. [42] proposed a point-wise pyramid pooling to aggregate features in local neighborhoods, as well as two-directional hierarchical recurrent neural networks (RNNs) to learn spatial contexts. However, these methods do not define convolutions on large-scale point clouds to learn geometric features in local neighborhoods. TangentConv [35] defines a convolution on point clouds by projecting the neighboring points onto tangent planes and applying 2D convolutions on them. The orientation of the tangent image is estimated from the local point / shape curvature; however, curvature estimation on local regions of point clouds is neither stable nor robust (see the discussion in Sec. 3.4), which makes their method orientation-dependent. In contrast, our method proposes an annular convolution that is invariant to the orientations of local patches. Moreover, ours does not require additional input features, whereas theirs does (e.g., depth, height, etc.).
Mesh based Methods. Besides point cloud based methods, several approaches have been proposed to develop convolutional networks on 3D meshes for shape analysis. Geodesic CNN [22] is an extension of Euclidean CNNs to non-Euclidean domains, based on a local geodesic system of polar coordinates to extract local patches. Anisotropic CNN [4] is another generalization of Euclidean CNNs to non-Euclidean domains, where classical convolutions are replaced by projections over a set of oriented anisotropic diffusion kernels. Mixture Model Networks (MoNet) [24] generalize deep learning methods to non-Euclidean domains (graphs and manifolds) by unifying previous methods, e.g., the classical Euclidean CNN, Geodesic CNN, and Anisotropic CNN, and propose a new type of kernel in parametric construction. Directionally Convolutional Networks (DCN) [41] apply a convolution operation on the triangular mesh of a 3D shape to address the part segmentation problem by combining local and global features. Lastly, Surface Networks [14] propose upgrades to graph neural networks that leverage extrinsic differential geometry properties of 3D surfaces to increase their modeling power.
3 Method
In this work, we propose a new end-to-end framework, named annularly convolutional neural networks (A-CNN), that leverages neighborhood information to better capture local geometric features of 3D point clouds. In this section, we introduce the main technical components of the A-CNN model on point clouds, which include: regular and dilated rings, constraint-based k-nearest neighbors (k-NN) search, ordering neighbors, annular convolution, and pooling on rings.
3.1 Regular and Dilated Rings on Point Clouds
To extract the local spatial context of a 3D shape, PointNet++ [28] proposes a multi-scale architecture. The major limitation of this approach is that the scaled regions may overlap (i.e., the same neighboring points may be included in several scaled regions), which reduces the performance of the computational architecture. Overlapping points at different scales lead to redundant information in the local region, which limits the network's ability to learn more discriminative features.
In order to address the above issue, our proposed framework aims to leverage a neighborhood at different scales more wisely. We propose two ring-based schemes, i.e., regular rings and dilated rings. Compared to the multi-scale strategy, the ring-based structure has no overlaps (no duplicated neighboring points) in the query point's neighborhood, so that each ring contains its own unique points, as illustrated in Sec. A of the Supplementary Material.
The difference between regular rings and dilated rings is that dilated rings have empty space between them. The idea of the proposed dilated rings is inspired by dilated convolutions in image processing [45], which benefit from aggregating multi-scale contextual information. Although each ring may define the same number of computation / operation parameters (e.g., number of neighboring points), the coverage area of each ring is different (i.e., dilated rings have larger coverage than regular rings), as depicted in Fig. 6. Regular rings can be considered dilated rings with a dilation factor of 0.
The proposed regular and dilated rings contribute to neighboring point search, convolution, and pooling in the follow-up processes. First, for the k-NN algorithm, we constrain the search areas to the local ring-shaped neighborhoods to ensure no overlap. Second, the convolutions defined on rings cover larger areas with the same kernel sizes, without increasing the number of convolution parameters. Third, the regular / dilated ring architectures help to aggregate more discriminative features after applying max-pooling on each ring of the local region. We discuss these points in more detail in the following subsections.
To justify the aforementioned statements, we compare the multi-scale approach with our proposed multi-ring scheme on the object classification task in the ablation study (Sec. 5.4). The results show that the ring-based structure captures better local geometric features than the previous multi-scale method, as it achieves higher accuracy.
3.2 Constraint-based k-NN Search
In the original PointNet++ model, the ball query algorithm returns the first k neighbors found inside a search ball specified by a radius r and a query point q, so it cannot guarantee that the k closest points are always found. In contrast, our proposed k-NN search algorithm guarantees that the k closest points inside the search area are returned, using the Euclidean metric. Each ring is defined by two parameters, the inner radius R_inner and the outer radius R_outer (see Fig. 6); therefore, the constraint-based k-NN search ensures that the closest and unique points are found in each ring.
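As a minimal NumPy sketch of this idea (the function name `ring_knn` and the radius parameters are our illustrative notation, not the paper's released code), the constraint-based search filters candidates to the ring and then sorts them by distance:

```python
import numpy as np

def ring_knn(points, query, r_inner, r_outer, k):
    """Constraint-based k-NN: return the indices of (up to) the k closest
    points whose Euclidean distance to `query` lies in [r_inner, r_outer).
    Unlike a ball query, which returns the first points found, the result
    is guaranteed to contain the closest points inside the search area."""
    d = np.linalg.norm(points - query, axis=1)
    candidates = np.where((d >= r_inner) & (d < r_outer))[0]  # points inside the ring
    ordered = candidates[np.argsort(d[candidates])]           # closest first
    return ordered[:k]
```

Because consecutive rings use half-open radius intervals that do not overlap, the same neighbor index can appear in at most one ring of a given query point.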
3.3 Ordering Neighbors
In order to learn the relationships between neighboring points in a local region, we first need to order the points in a clockwise / counterclockwise manner and then apply annular convolutions. Our proposed ordering operator consists of two main steps: projection and ordering. Projection before ordering is important because the dot product alone can only recover angles in [0°, 180°] and thus cannot fully order the points. By projecting the points onto the tangent plane at a query point q, we can effectively order the neighbors in the clockwise / counterclockwise direction by using the cross product and dot product together. Detailed explanations of the normal estimation, orthogonal projection, and ordering are given in the following subsections.
3.3.1 Normal Estimation on Point Clouds
Normal is an important geometric property of a 3D shape. We use it as a tool for projecting and ordering neighboring points in a local domain. The simplest normal estimation method approximates the normal at a given point by the normal of the local tangent plane at that point, which becomes a least-squares plane fitting estimation problem [30]. To calculate the normal n, one needs to compute the eigenvalues and eigenvectors of the covariance matrix C:

(1)  C = (1/k) \sum_{i=1}^{k} (p_i - \bar{p})(p_i - \bar{p})^T,  \quad  C \cdot v_j = \lambda_j \cdot v_j,  \; j \in \{0, 1, 2\},

where k is the number of neighboring points p_i around the query point q (e.g., k = 10 in our experiments), \bar{p} is the centroid of the neighbors, and \lambda_j and v_j are the j-th eigenvalue and eigenvector of the covariance matrix C, respectively. The covariance matrix C is symmetric and positive semi-definite. The eigenvectors v_j form an orthogonal frame with respect to the local tangent plane. The eigenvector corresponding to the smallest eigenvalue is the estimated normal n.
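The eigen-decomposition above can be sketched in a few lines of NumPy (`estimate_normal` is our illustrative name; the sign of the result is inherently ambiguous, which is one reason the annular convolution is designed to be insensitive to flipped normals, see Sec. 3.4):

```python
import numpy as np

def estimate_normal(neighbors):
    """Least-squares plane fit, Eq. (1): the eigenvector of the local
    covariance matrix with the smallest eigenvalue approximates the
    normal at the query point (k = 10 neighbors in the paper's setup)."""
    centered = neighbors - neighbors.mean(axis=0)   # p_i - p_bar
    C = centered.T @ centered / len(neighbors)      # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)            # eigenvalues in ascending order
    return eigvecs[:, 0]                            # direction of least variance
```

`np.linalg.eigh` returns eigenvalues in ascending order for symmetric matrices, so column 0 is the smallest-eigenvalue eigenvector.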
3.3.2 Orthogonal Projection
3.3.3 Counterclockwise Ordering
First, we use the geometric definition of the dot product to compute the angle \theta_i between two vectors: c (which starts from the query point q and connects to a randomly chosen starting point, such as x'_1) and c_i (which starts from the query point q and connects to each of the other projected neighboring points x'_i):
(3)  \cos\theta_i = (c \cdot c_i) / (\|c\| \, \|c_i\|).
We know that \cos\theta_i lies in [-1, 1], which corresponds to angles in [0°, 180°]. In order to sort the neighboring points around the query point over [0°, 360°), we must decide which semicircle the considered point belongs to, as follows:
(4)  s_i = \mathrm{sign}\big(n \cdot (c \times c_i)\big),

where n is the estimated normal at the query point, and \times is the cross product; s_i \geq 0 indicates the semicircle with angles in [0°, 180°], and s_i < 0 the semicircle with angles in (180°, 360°).
Then, we can recompute the cosine value of the angle as:
(5)  \cos\theta'_i = \cos\theta_i  if s_i \geq 0,  and  \cos\theta'_i = -2 - \cos\theta_i  if s_i < 0.
Now the values \cos\theta'_i lie in [-3, 1], which maps the angles onto [0°, 360°).
Finally, we sort the neighboring points by the value of \cos\theta'_i in descending order to obtain the counterclockwise order. Fig. 3 (b) illustrates the ordering process in a local neighborhood. The neighboring points can instead be ordered in a clockwise manner by sorting the values of \cos\theta'_i in ascending order.
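The projection-and-ordering steps above can be sketched as follows (a NumPy version under our own naming; `order_counterclockwise` is illustrative, not the released implementation):

```python
import numpy as np

def order_counterclockwise(neighbors, query, normal):
    """Order neighboring points counterclockwise around `query`:
    project onto the tangent plane, measure angles against a starting
    direction c via the dot product (Eq. 3), resolve the semicircle
    with the cross product (Eq. 4), remap cosines to [-3, 1] (Eq. 5),
    and sort the remapped values in descending order."""
    n = normal / np.linalg.norm(normal)
    v = neighbors - query
    proj = v - np.outer(v @ n, n)               # orthogonal projection onto tangent plane
    c = proj[0]                                 # reference direction (any start works)
    norms = np.linalg.norm(proj, axis=1) * np.linalg.norm(c)
    cos_t = proj @ c / (norms + 1e-12)          # Eq. (3): cosines in [-1, 1]
    lower = np.cross(c, proj) @ n < 0           # Eq. (4): lower-semicircle test
    cos_t[lower] = -2.0 - cos_t[lower]          # Eq. (5): values now span [-3, 1]
    return np.argsort(-cos_t)                   # descending -> counterclockwise
```

For four points on the unit circle in the tangent plane, the returned permutation walks around the circle counterclockwise regardless of which point happens to come first in the input.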
Our experiments in Sec. 5.4 show that ordering points in the local regions is an important step in our framework: our model achieves better classification accuracy with ordered points than without ordering them.
3.4 Annular Convolution on Rings
Through the previous computation, we have the ordered neighbors represented as an array [x'_1, x'_2, ..., x'_n]. In order to develop the annular convolution, we need to loop the array of neighbors with respect to the kernel size (e.g., 1×3, 1×5, ...) on each ring. For example, if the convolutional kernel size is 1×3, we take the first two neighbors and concatenate them with the end of the original array to construct a new circular array [x'_1, x'_2, ..., x'_n, x'_1, x'_2]. Then, we can perform standard convolutions on this array, as shown in Fig. 3 (c).
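The circular-array trick for one ring can be sketched as follows (a NumPy sketch assuming per-neighbor feature rows and a 1×K kernel; `annular_conv` is our illustrative name):

```python
import numpy as np

def annular_conv(ring, kernel):
    """Annular convolution on a single ring.
    ring:   (n, d) features of the n ordered neighbors.
    kernel: (K, d) convolution kernel (e.g., K = 3).
    The ring is closed into a loop by appending its first K-1 elements,
    so every neighbor produces one output and no boundary is introduced."""
    K = len(kernel)
    circular = np.concatenate([ring, ring[:K - 1]], axis=0)  # [x1..xn, x1..x_{K-1}]
    return np.array([np.sum(circular[i:i + K] * kernel)
                     for i in range(len(ring))])
```

Because the loop is closed, starting the ordering at a different neighbor only rotates the output vector; after max-pooling across the ring (Sec. 3.5), the aggregated feature is identical, which is exactly the orientation-invariance property claimed for the operator.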
The proposed annular convolution has some nice properties: (1) It is invariant to the orientation of the local patch. This is because the neighbors are organized and ordered in a closed loop in each ring, by concatenating the beginning of the neighboring points' sequence with its end. Therefore, we can start the ordering at any random position without negatively affecting the convolution results. In contrast, some previous convolutions defined on 3D shapes [4, 41, 35] all need to compute the principal curvature direction as the reference direction for defining the local patch operator, which is neither robust nor convenient; in particular, 3D shapes have large flat and spherical regions, where the curvature directions are arbitrary. (2) In practice, normal-flipping issues widely exist in point clouds, especially in large-scale scene datasets. Under the annular convolution strategy, the results are the same no matter whether the neighboring points are ordered clockwise or counterclockwise. (3) Another advantage of the annular convolution is that we can define an arbitrary kernel size, instead of just 1×1 kernels [26, 28]. Therefore, the annular convolution provides the ability to learn the relationship between ordered points inside each ring, as shown in Fig. 3 (c).
Annular convolutions can be applied to both regular and dilated rings. By applying annular convolutions with the same kernel size on different rings, we cover and convolve larger areas using the dilated structure, which helps us learn larger spatial contextual information in the local regions. The importance of annular convolutions is shown in the ablation study in Sec. 5.4.
3.5 Pooling on Rings
After applying a set of annular convolutions sequentially, the resulting convolved features encode information about the closest neighbors in each ring as well as their spatial remoteness from the query point. We then aggregate the convolved features across all neighbors on each ring separately, applying a max-pooling strategy. Our proposed ring-based scheme allows us to aggregate more discriminative features. The extracted max-pooled features encode both the neighbors and the relationships between them in the local region, unlike the pooling scheme in PointNet++ [28], where each neighbor is considered independently of its neighbors. In our pooling process, the non-overlapping rings aggregate different types of features, which uniquely describe each local ring region around the query point. The multi-scale approach in PointNet++ does not guarantee this and might aggregate the same features at different scales, which is redundant information for a network. The (regular and dilated) ring-based scheme avoids extracting duplicate information and instead promotes extracting multi-level information from the different rings. This provides the network with more diverse features to learn from. After aggregating features at the different rings, we concatenate them and feed them to another abstract layer to further learn hierarchical features.
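The per-ring aggregation can be sketched as follows (NumPy; `pool_rings` is our illustrative name). Because the rings do not overlap, each slot of the concatenated vector summarizes a distinct region around the query point:

```python
import numpy as np

def pool_rings(ring_features):
    """Max-pool the convolved features across the neighbors of each ring
    separately, then concatenate the per-ring descriptors into a single
    feature vector for the query point.
    ring_features: list of (n_i, d) arrays, one per regular/dilated ring."""
    pooled = [feats.max(axis=0) for feats in ring_features]  # (d,) per ring
    return np.concatenate(pooled)                            # (num_rings * d,)
```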
4 A-CNN Architecture
Our proposed A-CNN model follows a hierarchical design composed of a set of abstract layers. Each abstract layer consists of several operations performed sequentially and produces a subset of the input points with newly learned features. First, we subsample points with the Farthest Point Sampling (FPS) algorithm [23] to extract centroids distributed over the surface of each object. Second, our constraint-based k-NN extracts the neighbors of each centroid for each local region (i.e., regular / dilated rings), and we then order the neighbors counterclockwise using projection. Finally, we sequentially apply a set of annular convolutions on the ordered points and max-pool features across neighbors to produce new feature vectors, which uniquely describe each local region.
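The FPS step can be sketched as the standard greedy algorithm (a NumPy version for illustration, not the paper's code):

```python
import numpy as np

def farthest_point_sampling(points, m):
    """Greedy FPS: repeatedly pick the point farthest from all centroids
    chosen so far, yielding m well-spread centroids on the shape."""
    chosen = [0]                                          # arbitrary seed point
    dist = np.linalg.norm(points - points[0], axis=1)     # distance to chosen set
    for _ in range(m - 1):
        nxt = int(np.argmax(dist))                        # farthest remaining point
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)
```

Each iteration keeps, for every point, its distance to the nearest already-chosen centroid, so the overall cost is O(m·n) for n input points.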
Given the point clouds of 3D shapes, our proposed endtoend network is able to classify and segment the objects. In the following, we discuss the classification and segmentation network architectures on 3D point clouds.
4.1 Classification Network
The classification network is illustrated at the top of Fig. 4. It consists of two major parts: an encoder and a classifier. The encoder extracts features from each ring independently inside every layer and concatenates them at the end for further processing to extract high-level features. The proposed architecture includes both regular and dilated rings. We use two rings per layer, since this already yields good experimental results, as shown in Sec. 5; it can easily be extended to more than two rings per layer, if necessary.
We use regular rings in the first layer and dilated rings in the second layer of the encoder. Annular convolutions with 1×3 kernels and stride 1 are applied in the first two layers, followed by batch normalization [12] (BN) and a rectified linear unit [25] (ReLU). Different rings of the same query point are processed in parallel. The aggregated features from each ring are then concatenated and propagated to the next layer. The last layer of the encoder performs 1×1 convolutions followed by BN and ReLU, where only the spatial positions of the sampled points are considered. After that, the aggregated high-level features are fed to a set of fully-connected layers with integrated dropout [33] and ReLU to calculate the probability of each class. The output size of the classification network equals the number of classes in the dataset.
4.2 Segmentation Network
The segmentation network shares the encoder with the classification network, as shown in Fig. 4. In order to predict a segmentation label per point, we need to upsample the points sampled in the encoder back to the original point cloud size. As pointed out by [46], the consecutive feature propagation proposed by [28] is not the most efficient approach. Inspired by [46], we propagate features from the different levels of the encoder directly to the original point cloud size and concatenate them, allowing the network to learn the most important features from different levels as well as the relationships between them.
The output of each level has a different size due to the hierarchical feature extraction, so we have to restore the hierarchical features from each level back to the original point cloud size by using an interpolation method [28]. The interpolation is based on the inverse squared Euclidean distance weighted average of the three nearest neighbors:

(6)  f(x) = \sum_{i=1}^{3} w_i(x) f_i / \sum_{i=1}^{3} w_i(x),

where w_i(x) = 1 / d(x, x_i)^2 is an inverse squared Euclidean distance weight, and f_i is the feature of the i-th nearest neighbor x_i at the coarser level.
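Eq. (6) can be sketched in NumPy as follows (a brute-force version for clarity; the function and variable names are ours, not the released code):

```python
import numpy as np

def interpolate_features(coarse_xyz, coarse_feats, dense_xyz, eps=1e-8):
    """Propagate features from a coarse level to a dense point set using
    the inverse-squared-distance weighted average of the 3 nearest
    coarse points, as in Eq. (6)."""
    d2 = ((dense_xyz[:, None, :] - coarse_xyz[None, :, :]) ** 2).sum(-1)  # (N, M) squared distances
    idx = np.argsort(d2, axis=1)[:, :3]                                   # 3 nearest coarse points
    w = 1.0 / (np.take_along_axis(d2, idx, axis=1) + eps)                 # w_i(x) = 1 / d(x, x_i)^2
    w /= w.sum(axis=1, keepdims=True)                                     # normalize the weights
    return (coarse_feats[idx] * w[..., None]).sum(axis=1)                 # (N, d) interpolated features
```

The small `eps` keeps the weight finite when a dense point coincides with a coarse point, in which case that coarse point's feature dominates the weighted average.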
Then, we concatenate the upsampled features from different levels and pass them through a convolution to reduce the feature space and to learn the relationships between features from different levels. Finally, the segmentation class distribution for each point is calculated.
5 Experiments
We evaluate our A-CNN model on various tasks, such as point cloud classification, part segmentation, and large-scale scene segmentation. The following subsections give more details on each task. Note that in the comparison tables, the best results are shown in bold.
All models in this paper are trained on a single NVIDIA Titan Xp GPU with 12 GB of GDDR5X memory. Our model trains faster than PointNet++. More details about the network configurations, training settings, and timings in our experiments can be found in Sec. B and Tab. 5 of the Supplementary Material. The source code of the framework will be made available later.
5.1 Point Cloud Classification
We evaluate our classification model on two datasets: ModelNet10 and ModelNet40 [40]. ModelNet is a large-scale 3D CAD model dataset. ModelNet10 is a subset consisting of 10 classes, with 3991 training and 908 testing objects. ModelNet40 includes 40 classes, with 9843 objects for training and 2468 for testing. Point clouds with 10,000 points and normals are sampled from the meshes, normalized into a unit sphere, and provided by [28].
For experiments on ModelNet10 and ModelNet40, we sample 1024 points with normals, where normals are only used to order points in the local region. For data augmentation, we randomly scale object sizes, shift object positions, and perturb point locations. For better generalization, we apply point shuffling in order to generate different centroids for the same object at different epochs.
In Tab. 1, we compare our method with several state-of-the-art methods on shape classification on both the ModelNet10 and ModelNet40 datasets. Our model achieves the best accuracy among the point cloud based methods with 1024 points, such as PointNet [26], PointNet++ [28] (5K points + normals), Kd-Net (depth 15) [13], Pointwise CNN [9], KCNet [32], PointGrid [16], PCNN [3], and PointCNN [18]. Our model is slightly better than Point2Sequence [19] on ModelNet10 and shows comparable performance on ModelNet40.
Meanwhile, our model performs better than other volumetric approaches, such as O-CNN [37] and AO-CNN [38], while it is slightly worse than SO-Net [17], which uses denser input (5000 points with normals, vs. 1024 points in our A-CNN); MVCNN-MultiRes [27], which uses multi-view 3D volumes to represent an object (i.e., 20 views of volumes); and the VRN Ensemble [5], which involves an ensemble of six models.
We also provide some feature visualization results in Sec. C of the Supplementary Material, including global feature visualization (e.g., t-SNE clustering) and local feature visualization (e.g., the magnitude of the gradient per point).
Table 1. Shape classification accuracy (%) on ModelNet10 and ModelNet40 (AAC: average per-class accuracy; OA: overall accuracy; "-": not reported).

Method                 | ModelNet10 AAC | ModelNet10 OA | ModelNet40 AAC | ModelNet40 OA

Methods with additional input or more points:
AO-CNN [38]            |   -   |   -   |   -   | 90.5
O-CNN [37]             |   -   |   -   |   -   | 90.6
PointNet++ [28]        |   -   |   -   |   -   | 91.9
SO-Net [17]            | 95.5  | 95.7  | 90.8  | 93.4
MVCNN-MultiRes [27]    |   -   |   -   | 91.4  | 93.8
VRN Ensemble [5]       |   -   | 97.1  |   -   | 95.5

Point cloud based methods with 1024 points:
PointNet [26]          |   -   |   -   | 86.2  | 89.2
Kd-Net (depth 15) [13] | 93.5  | 94.0  | 88.5  | 91.8
Pointwise CNN [9]      |   -   |   -   | 81.4  | 86.1
KCNet [32]             |   -   | 94.4  |   -   | 91.0
PointGrid [16]         |   -   |   -   | 88.9  | 92.0
PCNN [3]               |   -   | 94.9  |   -   | 92.3
PointCNN [18]          |   -   |   -   | 88.1  | 92.2
Point2Sequence [19]    | 95.1  | 95.3  | 90.4  | 92.6
A-CNN (ours)           | 95.3  | 95.5  | 90.3  | 92.6
5.2 Point Cloud Segmentation
We evaluate our segmentation model on the ShapeNet-part [43] dataset. The dataset contains 16,881 shapes from 16 different categories with 50 part labels in total. The main challenge of this dataset is that all categories are highly imbalanced. 2048 points are sampled for each shape, and most shapes contain fewer than six parts. We follow the same training and testing splits provided in [26, 43]. For data augmentation, we perturb point locations and apply point shuffling for better generalization.
We evaluate our segmentation model with two different inputs: one model is trained without normals as additional features, and the other is trained with them. The quantitative results are provided in Tab. 2, where the mean IoU (Intersection-over-Union) is reported. The qualitative results are visualized in Fig. 6. Our approach with point locations only as input outperforms PointNet [26], Kd-Net [13], KCNet [32], and PCNN [3], and shows slightly worse performance compared to PointGrid [16] (a volumetric method) and PointCNN [18]. Meanwhile, our model achieves the best performance with point locations and normals as input, compared with PointNet++ [28], SyncSpecCNN [44], SO-Net [17], SGPN [39], O-CNN [37], RSNet [11], and Point2Sequence [19]. More detailed quantitative results (e.g., per-category IoUs) and more visualizations are provided in Sec. E of the Supplementary Material.
5.3 Semantic Segmentation in Scenes
We also evaluate our segmentation model on two large-scale indoor datasets: Stanford Large-Scale 3D Indoor Spaces (S3DIS) [2] and ScanNet [7]. S3DIS contains 6 large-scale indoor areas with 271 rooms from 3 different buildings, where each point carries a semantic label from one of 13 categories. ScanNet includes 1513 scanned indoor point clouds, where each voxel is labeled with one of 21 categories.
We employ the same training and testing strategies as PointNet [26] on S3DIS, using 6-fold cross validation over all six areas. The evaluation results are reported in Tab. 2, and qualitative results are visualized in Fig. 5. Our model demonstrates better segmentation results than PointNet [26], MS+CU (2) [8], G+RCU [8], 3P-RNN [42], SPGraph [15], and TangentConv [35]. However, our model performs slightly worse than PointCNN [18] due to their non-overlapping block sampling strategy with padding, which we do not use. Meanwhile, our approach shows the best segmentation results on ScanNet [7] and achieves state-of-the-art performance compared with PointNet [26], PointNet++ [28], TangentConv [35], and PointCNN [18], as shown in Tab. 2.
More qualitative visualization results and data preparation details for both datasets are provided in Sec. D and Sec. E of the Supplementary Material, respectively, and in the Video.
Table 2. Segmentation results: mean IoU (%) on ShapeNet-part, with point coordinates only ("pts") and with coordinates plus normals ("pts+nor") as input, and overall accuracy OA (%) on S3DIS and ScanNet ("-": not reported).

Method               | ShapeNet-part mIoU (pts) | ShapeNet-part mIoU (pts+nor) | S3DIS OA | ScanNet OA
PointNet [26]        | 83.7 |   -  | 78.5 | 73.9
PointNet++ [28]      |   -  | 85.1 |   -  | 84.5
SyncSpecCNN [44]     |   -  | 84.7 |   -  |   -
O-CNN [37]           |   -  | 85.9 |   -  |   -
Kd-Net [13]          | 82.3 |   -  |   -  |   -
KCNet [32]           | 84.7 |   -  |   -  |   -
SO-Net [17]          |   -  | 84.9 |   -  |   -
SGPN [39]            |   -  | 85.8 |   -  |   -
MS+CU (2) [8]        |   -  |   -  | 79.2 |   -
G+RCU [8]            |   -  |   -  | 81.1 |   -
RSNet [11]           |   -  | 84.9 |   -  |   -
3P-RNN [42]          |   -  |   -  | 86.9 |   -
SPGraph [15]         |   -  |   -  | 85.5 |   -
TangentConv [35]     |   -  |   -  |  *   | 80.9
PCNN [3]             | 85.1 |   -  |   -  |   -
Point2Sequence [19]  |   -  | 85.2 |   -  |   -
PointGrid [16]       | 86.4 |   -  |   -  |   -
PointCNN [18]        | 86.1 |   -  | 88.1 | 85.1
A-CNN (ours)         | 85.9 | 86.1 | 87.3 | 85.4
Note: * The OA of TangentConv [35] on S3DIS Area 5 is 82.5 (as reported in their paper), which is worse than our OA of 85.5 on the same split.
5.4 Ablation Study
The goal of our ablation study is to show the importance of the proposed technical components (Sec. 3) in our A-CNN model. We evaluate the three proposed components, namely rings without overlaps (Sec. 3.1), ordering (Sec. 3.3), and annular convolution (Sec. 3.4), on the classification task on ModelNet40, as shown in Tab. 4. In the first experiment, we replace our proposed constraint-based k-NN on ring regions with the ball query of [28], but keep ordering and annular convolutions on. In the second and third experiments, we turn off either annular convolutions or ordering, respectively, and keep the other two components on. Our experiments show that the proposed ring-shaped scheme contributes the most to our model, because the multi-level rings positively affect the annular convolutions. Finally, the A-CNN model with all three components (i.e., rings without overlaps, ordering, and annular convolutions) achieves the best results. We also find that reducing the overlap / redundancy in the multi-scale scheme can improve existing methods: we evaluate the original PointNet++ [28] with and without overlap in Sec. A of the Supplementary Material.
Model | AAC | OA
A-CNN (without rings / with overlap) | 89.2 | 91.7
A-CNN (without annular conv.) | 89.2 | 91.8
A-CNN (without ordering) | 89.6 | 92.0
A-CNN (with all components) | 90.3 | 92.6
6 Conclusion
In this work, we propose a new A-CNN framework on point clouds, which can better capture the local geometric information of 3D shapes. Through extensive experiments on several benchmark datasets, our method achieves state-of-the-art performance on point cloud classification, part segmentation, and large-scale semantic segmentation tasks. Since our work does not focus solely on large-scale scene datasets, we will explore new deep learning architectures to further improve the current results. We will also investigate applying the proposed framework to large-scale outdoor datasets in future work.
Acknowledgment. We would like to thank the reviewers for their valuable comments. This work was partially supported by the NSF IIS-1816511, CNS-1647200, OAC-1657364, OAC-1845962, Wayne State University Subaward 4207299A of CNS-1821962, NIH 1R56AG060822-01A1, and ZJNSF LZ16F020002.
References
 [1] E. Ahmed, A. Saint, A. Shabayek, K. Cherenkova, R. Das, G. Gusev, D. Aouada, and B. Ottersten. Deep learning advances on different 3D data representations: A survey. arXiv preprint arXiv:1808.01462, 2018.
 [2] I. Armeni, O. Sener, A. Zamir, H. Jiang, I. Brilakis, M. Fischer, and S. Savarese. 3D semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1534–1543, 2016.
 [3] M. Atzmon, H. Maron, and Y. Lipman. Point convolutional neural networks by extension operators. ACM Transactions on Graphics, 37(4):71:1–71:12, 2018.
 [4] D. Boscaini, J. Masci, E. Rodolà, and M. Bronstein. Learning shape correspondence with anisotropic convolutional neural networks. In Advances in Neural Information Processing Systems, pages 3189–3197, 2016.
 [5] A. Brock, T. Lim, J. Ritchie, and N. Weston. Generative and discriminative voxel modeling with convolutional neural networks. arXiv preprint arXiv:1608.04236, 2016.
 [6] S. Bu, P. Han, Z. Liu, J. Han, and H. Lin. Local deep feature learning framework for 3D shape. Computers & Graphics, 46:117–129, 2015.
 [7] A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5828–5839, 2017.
 [8] F. Engelmann, T. Kontogianni, A. Hermans, and B. Leibe. Exploring spatial context for 3D semantic segmentation of point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 716–724, 2017.
 [9] B.-S. Hua, M.-K. Tran, and S.-K. Yeung. Pointwise convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 984–993, 2018.
 [10] H. Huang, E. Kalogerakis, S. Chaudhuri, D. Ceylan, V. G. Kim, and E. Yumer. Learning local shape descriptors from part correspondences with multi-view convolutional networks. ACM Transactions on Graphics, 37(1):6, 2018.
 [11] Q. Huang, W. Wang, and U. Neumann. Recurrent slice networks for 3D segmentation of point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2626–2635, 2018.
 [12] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
 [13] R. Klokov and V. Lempitsky. Escape from cells: Deep Kd-networks for the recognition of 3D point cloud models. In Proceedings of the IEEE International Conference on Computer Vision, pages 863–872, 2017.
 [14] I. Kostrikov, Z. Jiang, D. Panozzo, D. Zorin, and J. Bruna. Surface networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2540–2548, 2018.
 [15] L. Landrieu and M. Simonovsky. Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4558–4567, 2018.
 [16] T. Le and Y. Duan. PointGrid: A deep network for 3D shape understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9204–9214, 2018.
 [17] J. Li, B. M. Chen, and G. H. Lee. SO-Net: Self-organizing network for point cloud analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9397–9406, 2018.
 [18] Y. Li, R. Bu, M. Sun, W. Wu, X. Di, and B. Chen. PointCNN: Convolution on X-transformed points. In Advances in Neural Information Processing Systems, pages 828–838, 2018.
 [19] X. Liu, Z. Han, Y.-S. Liu, and M. Zwicker. Point2Sequence: Learning the shape representation of 3D point clouds with an attention-based sequence to sequence network. In Proceedings of the AAAI Conference on Artificial Intelligence, 2019.
 [20] Z. Liu, S. Chen, S. Bu, and K. Li. High-level semantic feature for 3D shape based on deep belief networks. In Proceedings of the IEEE International Conference on Multimedia and Expo, pages 1–6, 2014.
 [21] L. Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
 [22] J. Masci, D. Boscaini, M. Bronstein, and P. Vandergheynst. Geodesic convolutional neural networks on Riemannian manifolds. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 37–45, 2015.
 [23] C. Moenning and N. A. Dodgson. Fast marching farthest point sampling. Technical report, University of Cambridge, Computer Laboratory, 2003.
 [24] F. Monti, D. Boscaini, J. Masci, E. Rodola, J. Svoboda, and M. Bronstein. Geometric deep learning on graphs and manifolds using mixture model CNNs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5115–5124, 2017.
 [25] V. Nair and G. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the International Conference on Machine Learning, pages 807–814, 2010.
 [26] C. Qi, H. Su, K. Mo, and L. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 652–660, 2017.
 [27] C. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. Guibas. Volumetric and multiview CNNs for object classification on 3D data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5648–5656, 2016.
 [28] C. Qi, L. Yi, H. Su, and L. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems, pages 5105–5114, 2017.
 [29] G. Riegler, A. Ulusoy, and A. Geiger. OctNet: Learning deep 3D representations at high resolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3577–3586, 2017.
 [30] R. Rusu. Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments. PhD thesis, Computer Science department, Technische Universitaet Muenchen, Germany, October 2009.
 [31] R. Rusu and S. Cousins. 3D is here: Point cloud library (PCL). In Proceedings of the IEEE International Conference on Robotics and Automation, pages 1–4, 2011.
 [32] Y. Shen, C. Feng, Y. Yang, and D. Tian. Mining point cloud local structures by kernel correlation and graph pooling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4548–4557, 2018.
 [33] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014.
 [34] H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller. Multi-view convolutional neural networks for 3D shape recognition. In Proceedings of the IEEE International Conference on Computer Vision, pages 945–953, 2015.
 [35] M. Tatarchenko, J. Park, V. Koltun, and Q.-Y. Zhou. Tangent convolutions for dense prediction in 3D. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3887–3896, 2018.
 [36] C. Wang, B. Samari, and K. Siddiqi. Local spectral graph convolution for point set feature learning. In Proceedings of The European Conference on Computer Vision, September 2018.
 [37] P.-S. Wang, Y. Liu, Y.-X. Guo, C.-Y. Sun, and X. Tong. O-CNN: Octree-based convolutional neural networks for 3D shape analysis. ACM Transactions on Graphics, 36(4):72, 2017.
 [38] P.-S. Wang, C.-Y. Sun, Y. Liu, and X. Tong. Adaptive O-CNN: A patch-based deep representation of 3D shapes. ACM Transactions on Graphics, 37(6), 2018.
 [39] W. Wang, R. Yu, Q. Huang, and U. Neumann. SGPN: Similarity group proposal network for 3D point cloud instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2569–2578, 2018.
 [40] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1912–1920, 2015.
 [41] H. Xu, M. Dong, and Z. Zhong. Directionally convolutional networks for 3D shape segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 2698–2707, 2017.
 [42] X. Ye, J. Li, H. Huang, L. Du, and X. Zhang. 3D recurrent neural networks with context fusion for point cloud semantic segmentation. In Proceedings of The European Conference on Computer Vision, September 2018.
 [43] L. Yi, V. G. Kim, D. Ceylan, I. Shen, M. Yan, H. Su, C. Lu, Q. Huang, A. Sheffer, L. Guibas, et al. A scalable active framework for region annotation in 3D shape collections. ACM Transactions on Graphics, 35(6):210, 2016.
 [44] L. Yi, H. Su, X. Guo, and L. Guibas. SyncSpecCNN: Synchronized spectral CNN for 3D shape segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6584–6592, 2017.
 [45] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In International Conference on Learning Representations, 2016.
 [46] L. Yu, X. Li, C.-W. Fu, D. Cohen-Or, and P.-A. Heng. PU-Net: Point cloud upsampling network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2790–2799, 2018.
Supplementary Material:
A-CNN: Annularly Convolutional Neural Networks on Point Clouds
Appendix A Ball Query vs. Ring-based Scheme
The comparison of the multi-scale method proposed in [28] and our ring-based scheme is depicted in Fig. 6. Note that, compared to multi-scale regions, the ring-based structure has no overlaps (no neighboring point duplication) in the query point's neighborhood; each ring contains its own unique points.
Model | AAC | OA
PointNet++ (multi-scale / with overlap) | 86.5 | 90.2
PointNet++ (multi-ring / without overlap) | 87.3 | 90.6
A-CNN (with all components) | 90.3 | 92.6
We have discovered that reducing redundancy can improve the existing multi-scale approach in [28]. We test the redundancy issue on the original PointNet++ model [28] with and without overlap / redundancy: we compare the original PointNet++ multi-scale model with ball queries (with redundant points) against PointNet++ with our proposed regular rings (without redundant points). Our experiments show that the proposed multi-ring scheme (i.e., without redundant points) outperforms the multi-scale scheme (i.e., with redundant points) on ModelNet40 (see the table above and Tab. 4).
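The distinction between the two neighborhood schemes can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the authors' code; the function names, radii, and toy point cloud are our own:

```python
import numpy as np

def ball_query(points, center, radius, k):
    # Multi-scale ball query (PointNet++ style): up to k nearest points
    # inside a ball; larger balls re-include the smaller balls' points.
    d = np.linalg.norm(points - center, axis=1)
    idx = np.where(d <= radius)[0]
    return idx[np.argsort(d[idx])][:k]

def ring_query(points, center, r_inner, r_outer, k):
    # Constraint-based k-NN on a ring region: only points whose distance
    # lies in (r_inner, r_outer] qualify, so consecutive rings are disjoint.
    d = np.linalg.norm(points - center, axis=1)
    idx = np.where((d > r_inner) & (d <= r_outer))[0]
    return idx[np.argsort(d[idx])][:k]

rng = np.random.default_rng(0)
pts = rng.uniform(-0.25, 0.25, size=(512, 3))
c = np.zeros(3)

balls = [ball_query(pts, c, r, 16) for r in (0.1, 0.2)]
rings = [ring_query(pts, c, lo, hi, 16) for (lo, hi) in ((0.0, 0.1), (0.1, 0.2))]

# Rings never duplicate a neighbor across scales; overlapping balls generally do.
assert set(rings[0]).isdisjoint(rings[1])
```

By construction, a point that falls in one ring cannot appear in the next, which is the "no neighboring point duplication" property discussed above.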
Appendix B Training Details
We use the A-CNN-3L network configuration in Tab. 5 for all point cloud classification experiments and the A-CNN-4L network configuration in Tab. 5 for both part segmentation and semantic segmentation tasks. We use regular rings in the first layer and dilated rings in the second layer of our A-CNN-3L architecture. Similarly, we use regular rings in the first layer and dilated rings in the second and third layers of our A-CNN-4L architecture.
We use the Adam optimization method with a learning rate of 0.001 and a decay rate of 0.7 in the classification task, and a decay rate of 0.5 in the segmentation tasks. We trained our classification model for 250 epochs, our part segmentation model for 200 epochs, and our large-scale semantic segmentation models for 50 epochs on each area of S3DIS and for 200 epochs on ScanNet. Our model trains faster than the PointNet++ model, since we use a ring-based neighboring search, which is more efficient and effective than the ball query in the PointNet++ model. For instance, training our segmentation model for 200 epochs takes about 19 hours on a single NVIDIA Titan Xp GPU with 12 GB GDDR5X, while the PointNet++ model needs about 32 hours for the same task. The size of our trained model is 22.3 MB and the size of the PointNet++ model is 22.1 MB.
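The schedule above amounts to a staircase exponential decay of the learning rate; a minimal sketch, where `decay_steps` and the lower bound `floor` are illustrative assumptions (the text specifies only the base rate and the decay factor):

```python
def decayed_lr(base_lr, decay_rate, step, decay_steps, floor=1e-5):
    # lr = base_lr * decay_rate ** (step // decay_steps), clipped below.
    # decay_steps and floor are illustrative, not taken from the paper.
    return max(base_lr * decay_rate ** (step // decay_steps), floor)

# Classification setting from the text: base 0.001, decay rate 0.7.
lr_start = decayed_lr(0.001, 0.7, step=0, decay_steps=200_000)
lr_decayed = decayed_lr(0.001, 0.7, step=200_000, decay_steps=200_000)
```

For segmentation, the same function would be called with `decay_rate=0.5`.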
A-CNN-3L (classification):
Layer | 1 | 2 | 3
C | 512 | 128 | 1
rings | [[0.0, 0.1], [0.1, 0.2]] |  | –
k | [16, 48] | [16, 48] | 128
F | [[32,32,64], [64,64,128]] | [[64,64,128], [128,128,256]] | [256,512,1024]

A-CNN-4L (segmentation):
Layer | 1 | 2 | 3 | 4
C | 512 | 128 | 32 | 1
rings | [[0.0, 0.1], [0.1, 0.2]] | [[0.1, 0.2], [0.3, 0.4]] | [[0.2, 0.4], [0.6, 0.8]] | –
k | [16, 48] | [16, 48] | [16, 48] | 32
F | [[32,32,64], [64,64,128]] | [[64,64,128], [128,128,256]] | [[128,128,256], [256,256,512]] | [512,768,1024]

Note: Both models represent the encoder part. The A-CNN-3L model consists of three layers; the A-CNN-4L model consists of four layers. For each layer, C is the number of centroids, rings lists the inner and outer radii [r_inner, r_outer] of each ring, k is the number of neighbors per ring, and F is the feature map size per ring. For example, at its first layer our A-CNN-4L model has 512 centroids; two regular rings, where the first ring is constrained by radii of 0.0 and 0.1 and the second ring by radii of 0.1 and 0.2; the k-NN search returns 16 points in the first ring and 48 points in the second ring; and the feature map size is [32,32,64] in the first ring and [64,64,128] in the second ring. The convolutional kernel size is the same across different rings and layers. Also, we double the number of centroids in each layer of the A-CNN-4L model on ScanNet, as the number of points in each block is twice that in S3DIS.
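Read as data, the A-CNN-4L column of Tab. 5 becomes a simple per-layer configuration. The values below are transcribed from the table; the dictionary layout itself is our own sketch, not the authors' code:

```python
# A-CNN-4L (segmentation) encoder, transcribed from Tab. 5.
# C: centroids; rings: [inner, outer] radii; k: neighbors per ring;
# F: per-ring feature map sizes.
ACNN_4L = [
    {"C": 512, "rings": [[0.0, 0.1], [0.1, 0.2]], "k": [16, 48],
     "F": [[32, 32, 64], [64, 64, 128]]},
    {"C": 128, "rings": [[0.1, 0.2], [0.3, 0.4]], "k": [16, 48],
     "F": [[64, 64, 128], [128, 128, 256]]},
    {"C": 32, "rings": [[0.2, 0.4], [0.6, 0.8]], "k": [16, 48],
     "F": [[128, 128, 256], [256, 256, 512]]},
    {"C": 1, "k": [32], "F": [[512, 768, 1024]]},  # final global layer
]

# Every listed ring is a valid annulus (inner radius < outer radius).
for layer in ACNN_4L:
    for r_in, r_out in layer.get("rings", []):
        assert r_in < r_out
```

Note how the rings at layers 2 and 3 leave a gap between the outer radius of the first ring and the inner radius of the second, i.e., they are dilated.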
Appendix C Feature Visualization
Local Feature Visualization. Fig. 8 and Fig. 11 visualize the magnitude of the gradient per point in the classification task on the ModelNet10 and ModelNet40 datasets. Blue represents a low gradient magnitude and red represents a high gradient magnitude. The points with higher magnitudes receive greater updates during training, so their learning contribution is higher. Therefore, this feature visualization can be thought of as object saliency. For example, in the ModelNet40 dataset our model considers wings and tails as important regions to classify an object as an airplane; the bottle neck is important for a bottle; the flowers and leaves are important for a plant; the tube or middle part (usually the narrow parts) is important for a lamp; and the legs are important to classify an object as a stool.
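Mapping per-point gradients to such a blue-to-red saliency scale can be sketched as follows; the min-max normalization is our assumption, since the text states only that the gradient magnitude is visualized:

```python
import numpy as np

def point_saliency(grads):
    # L2 magnitude per point, min-max normalized to [0, 1]
    # (0 -> blue / low contribution, 1 -> red / high contribution).
    mag = np.linalg.norm(grads, axis=1)
    return (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)

grads = np.array([[0.0, 0.0, 0.0],   # magnitude 0
                  [3.0, 4.0, 0.0],   # magnitude 5
                  [0.0, 0.0, 1.0]])  # magnitude 1
sal = point_saliency(grads)  # approximately [0.0, 1.0, 0.2]
```

The resulting scalar per point can then be fed into any diverging colormap for rendering.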
Global Feature Visualization. Fig. 10 and Fig. 9 show the t-SNE clustering visualization [21] of the global shape features learned by the proposed A-CNN model for the shape classification tasks on the ModelNet10 and ModelNet40 test splits. We reduce the 1024-dim feature vectors to 2-dim features. We can see that similar shapes are well clustered together according to their semantic categories. For example, in the ModelNet10 dataset the clusters of the desk, dresser, night stand, and table classes are close and even intersect with each other, because the objects from these classes look similar. The perplexity parameters for the ModelNet10 and ModelNet40 datasets are set to 15 and 50, respectively, to reduce the sparse space between clusters.
Appendix D Data Preparation Details
Table 6. Part segmentation results on the ShapeNet-part dataset (mean and per-class IoU); the input is point positions only.

Method  mean  aero  bag  cap  car  chair  earphone  guitar  knife  lamp  laptop  motor  mug  pistol  rocket  skateboard  table
# shapes  –  2690  76  55  898  3758  69  787  392  1547  451  202  184  283  66  152  5271
PointNet [26]  83.7  83.4  78.7  82.5  74.9  89.6  73.0  91.5  85.9  80.8  95.3  65.2  93.0  81.2  57.9  72.8  80.6
Kd-Net [13]  82.3  80.1  74.6  74.3  70.3  88.6  73.5  90.2  87.2  81.0  94.9  57.4  86.7  78.1  51.8  69.9  80.3
KCNet [32]  84.7  82.8  81.5  86.4  77.6  90.3  76.8  91.0  87.2  84.5  95.5  69.2  94.4  81.6  60.1  75.2  81.3
PCNN [3]  85.1  82.4  80.1  85.5  79.5  90.8  73.2  91.3  86.0  85.0  95.7  73.2  94.8  83.3  51.0  75.0  81.8
PointGrid [16]  86.4  85.7  82.5  81.8  77.9  92.1  82.4  92.7  85.8  84.2  95.3  65.2  93.4  81.7  56.9  73.5  84.6
PointCNN [18]  86.1  84.1  86.5  86.0  80.8  90.6  79.7  92.3  88.4  85.3  96.1  77.2  95.2  84.2  64.2  80.0  83.0
A-CNN (our)  85.9  83.9  86.7  83.5  79.5  91.3  77.0  91.5  86.0  85.0  95.5  72.6  94.9  83.8  57.8  76.6  83.0
Table 7. Part segmentation results on the ShapeNet-part dataset (mean and per-class IoU); the input is point positions with normals.

Method  mean  aero  bag  cap  car  chair  earphone  guitar  knife  lamp  laptop  motor  mug  pistol  rocket  skateboard  table
# shapes  –  2690  76  55  898  3758  69  787  392  1547  451  202  184  283  66  152  5271
PointNet++ [28]  85.1  82.4  79.0  87.7  77.3  90.8  71.8  91.0  85.9  83.7  95.3  71.6  94.1  81.3  58.7  76.4  82.6
SyncSpecCNN [44]  84.7  81.6  81.7  81.9  75.2  90.2  74.9  93.0  86.1  84.7  95.6  66.7  92.7  81.6  60.6  82.9  82.1
SO-Net [17]  84.9  82.8  77.8  88.0  77.3  90.6  73.5  90.7  83.9  82.8  94.8  69.1  94.2  80.9  53.1  72.9  83.0
SGPN [39]  85.8  80.4  78.6  78.8  71.5  88.6  78.0  90.9  83.0  78.8  95.8  77.8  93.8  87.4  60.1  92.3  89.4
RSNet [11]  84.9  82.7  86.4  84.1  78.2  90.4  69.3  91.4  87.0  83.5  95.4  66.0  92.6  81.8  56.1  75.8  82.2
O-CNN (+ CRF) [37]  85.9  85.5  87.1  84.7  77.0  91.1  85.1  91.9  87.4  83.3  95.4  56.9  96.2  81.6  53.5  74.1  84.4
Point2Sequence [19]  85.2  82.6  81.8  87.5  77.3  90.8  77.1  91.1  86.9  83.9  95.7  70.8  94.6  79.3  58.1  75.2  82.8
A-CNN (our)  86.1  84.2  84.0  88.0  79.6  91.3  75.2  91.6  87.1  85.5  95.4  75.3  94.9  82.5  67.8  77.5  83.3

Note: "CRF" stands for the conditional random field method used for final result refinement in the O-CNN method.
S3DIS data preparation. To prepare the training and testing datasets, we divide every room into blocks with a size of 1 × 1 × 2 meters and with a stride of 0.5 meters. We sample 4096 points from each block. The height of each block is scaled to 2 meters to ensure that our constraint-based k-NN search works optimally with the provided radii. In total, the prepared dataset contains 23,585 blocks across all six areas. Each point is represented as a 6D vector (XYZ: normalized global point coordinates centered at the origin; RGB: colors). We do not use the relative position of the block in the room scaled between 0 and 1, as used in [26], because our model already achieves better results without this additional information. We calculate point normals for each room by using the Point Cloud Library (PCL) [31]. The calculated normals are only used to order points in the local region. For data augmentation, we use the same strategy as in the point cloud segmentation on the ShapeNet-part dataset, i.e., point perturbation with point shuffling.
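The block preparation described above can be sketched as follows: a minimal NumPy version that windows only the floor plan (x, y), with the block size, stride, and point count taken from the text, while the edge handling and sampling policy are our own illustrative assumptions:

```python
import numpy as np

def split_into_blocks(xyz, block=1.0, stride=0.5, n_points=4096, seed=0):
    # Slide a block x block window (in meters) over the floor plan with
    # the given stride; sample n_points per non-empty block, drawing with
    # replacement when the block holds fewer points. Edge handling and
    # the RNG policy are illustrative, not taken from the paper.
    rng = np.random.default_rng(seed)
    mins, maxs = xyz.min(axis=0), xyz.max(axis=0)
    blocks = []
    for bx in np.arange(mins[0], maxs[0], stride):
        for by in np.arange(mins[1], maxs[1], stride):
            mask = ((xyz[:, 0] >= bx) & (xyz[:, 0] < bx + block) &
                    (xyz[:, 1] >= by) & (xyz[:, 1] < by + block))
            idx = np.flatnonzero(mask)
            if idx.size == 0:
                continue
            sel = rng.choice(idx, size=n_points, replace=idx.size < n_points)
            blocks.append(xyz[sel])
    return blocks
```

Each returned block is a fixed-size (n_points × 3) array ready for batching; per-point colors or normals would be gathered with the same `sel` indices.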
ScanNet data preparation. ScanNet divides the original 1513 scanned scenes into 1201 scenes for training and 312 for testing. We sample blocks from the scenes following the same procedure as in [28], where every block has a size of 1.5 × 1.5 meters and contains 8192 points. We estimate point normals using the PCL library [31]. Each point is represented as a 6D vector (XYZ: coordinates of the block centered at the origin; plus the normals), without RGB information. For data augmentation, we use point perturbation with point shuffling.
Appendix E More Experimental Results
Point Cloud Segmentation. Tab. 6 and Tab. 7 show the quantitative results of part segmentation on the ShapeNet-part dataset with two different inputs. Tab. 6 reports the results when the input is point positions only. Tab. 7 reports the results when the input is point positions with their normals.
Table 8. Semantic segmentation results on the S3DIS dataset: overall accuracy (acc), mean IoU, and per-class IoU.

Method  acc  mean  ceiling  floor  wall  beam  column  window  door  table  chair  sofa  bookcase  board  clutter
PointNet [26]  78.5  47.6  88.0  88.7  69.3  42.4  23.1  47.5  51.6  54.1  42.0  9.6  38.2  29.4  35.2
MS+CU (2) [8]  79.2  47.8  88.6  95.8  67.3  36.9  24.9  48.6  52.3  51.9  45.1  10.6  36.8  24.7  37.5
G+RCU [8]  81.1  49.7  90.3  92.1  67.9  44.7  24.2  52.3  51.2  58.1  47.4  6.9  39.0  30.0  41.9
RSNet [11]  –  56.5  92.5  92.8  78.6  32.8  34.4  51.6  68.1  59.7  60.1  16.4  50.2  44.9  52.0
3P-RNN [42]  86.9  56.3  92.9  93.8  73.1  42.5  25.9  47.6  59.2  60.4  66.7  24.8  57.0  36.7  51.6
SPGraph [15]  85.5  62.1  89.9  95.1  76.4  62.8  47.1  55.3  68.4  73.5  69.2  63.2  45.9  8.7  52.9
PointCNN [18]  88.1  65.4  94.8  97.3  75.8  63.3  51.7  58.4  57.2  71.6  69.1  39.1  61.2  52.2  58.6
A-CNN (our)  87.3  62.9  92.4  96.4  79.2  59.5  34.2  56.3  65.0  66.5  78.0  28.5  56.9  48.0  56.8
For the ShapeNet-part dataset, we visualize more results (besides the segmentation results shown in the paper) in Fig. 31. We compare our results with PointNet++ [28], and our A-CNN model produces better segmentation results than the PointNet++ model.
Semantic Segmentation in Scenes. For the S3DIS dataset, we pick rooms from all six areas: area 1 (row 1), area 2 (row 2), area 3 (row 3), area 4 (row 4), area 5 (row 5), and area 6 (row 6), and compare them with the PointNet [26] results and the ground truth. The results are shown in Fig. 7. The detailed quantitative evaluation results for each shape class are reported in Tab. 8. Our model demonstrates good semantic segmentation results and achieves the state-of-the-art performance on segmenting walls and chairs. Meanwhile, our model performs slightly worse than PointCNN [18] on other categories, due to their non-overlapping block sampling strategy with padding, which we do not use. A Supplementary Video is included for dynamically visualizing each area in detail.
For the ScanNet dataset, we pick six challenging scenes and visualize the results of our A-CNN model, PointNet++ [28], and the ground truth side by side. The visualization results are provided in Fig. 6. Our approach outperforms PointNet++ [28] and other baseline methods, such as PointNet [26], TangentConv [35], and PointCNN [18], according to Tab. 2 in the main paper.