PRS-Net: Planar Reflective Symmetry Detection Net for 3D Models
In geometry processing, symmetry is a universal form of high-level structural information of 3D models and benefits many geometry processing tasks, including shape segmentation, alignment, matching, and completion. Thus it is an important problem to analyze the various forms of symmetry of 3D shapes, of which planar reflective symmetry is the most fundamental. Traditional methods based on spatial sampling can be time-consuming and may not be able to identify all the symmetry planes. In this paper, we present a novel learning framework to automatically discover global planar reflective symmetry of a 3D shape. Our framework trains an unsupervised 3D convolutional neural network to extract global model features and then outputs possible global symmetry parameters, where input shapes are represented using voxels. We introduce a dedicated symmetry distance loss along with a regularization loss to avoid generating duplicated symmetry planes. Our network can also identify isotropic shapes by predicting their rotation axes. We further provide a method to remove invalid and duplicated planes and axes. We demonstrate that our method is able to produce reliable and accurate results. Our neural-network-based method is hundreds of times faster than the state-of-the-art methods, which are based on sampling. Our method is also robust even with noisy or incomplete input surfaces.
1 Introduction
Symmetry is a common characteristic of most natural species and man-made objects: numerous species, such as the starfish and the sunflower, exhibit striking symmetry, and many man-made objects have at least one reflection plane. Symmetry is also an important concept in mathematics: an object is symmetric if some of its properties do not change under certain transformations. In geometry processing, finding symmetries in geometric data, such as point clouds, polygon meshes and voxels, is an important problem, because numerous applications take advantage of symmetry information to solve their tasks or improve their algorithms, e.g., shape matching, segmentation, completion, etc.
Therefore, detecting symmetry of 3D objects is an essential step for many applications. Among all the symmetry types, the most common and important one is planar reflection symmetry. In a simple case, a shape can be aligned to the principal axes by applying principal component analysis (PCA). Then the planes formed by pairs of principal axes (i.e., orthogonal to the remaining principal axis) can be checked to see if they are symmetry planes. This simple approach works in simple cases, where the symmetry plane is well aligned with the principal axes. However, it is unable to detect symmetry planes that are not orthogonal to any principal axis (e.g., any plane that passes through the rotation axis of a cylinder is a symmetry plane, yet may not be orthogonal to any principal axis). Moreover, the method is highly sensitive to even small changes of geometry, leading to poor detection results. The state-of-the-art methods for symmetry plane detection [10, 15] are based on spatial sampling, which is much more robust than PCA. However, since they need to sample many potential candidates, the output may be poor or inaccurate depending on the random samples.
To address such limitations, in this paper, we introduce a novel learning framework to automatically discover global planar reflective symmetry. We use a deep convolutional neural network (CNN) to extract the global model feature and capture possible global symmetry. To make CNN-based learning easier and more effective, we convert shapes in arbitrary representations to voxels before feeding them into our network. The output of our network involves parameters representing reflection planes and rotation axes. Although our aim is to detect planar reflective symmetry, for shapes with rotational symmetry such as a cylinder, any plane passing through the rotation axis is a symmetry plane. By detecting rotation axes explicitly, we ensure such symmetry planes are fully detected. Our network is unsupervised because we do not require any annotations for symmetry planes of the target objects, which makes collecting training data much easier. To achieve this, we introduce a novel symmetry distance loss and a regularization loss to effectively train our network. The former measures the deviation of the geometry from symmetry given a potential symmetry plane, and the latter is used to avoid generating duplicated symmetry planes. We further provide a post-processing method to remove invalid and duplicated planes and axes. Compared with existing methods, our method can produce more reliable and accurate results. More importantly, our learning-based approach is hundreds of times faster, achieving real-time performance. We also show our network is robust even for noisy and incomplete input. Our key contributions are:
We develop PRS-Net, the first deep learning approach to detecting global planar reflective symmetry of 3D objects. Our approach is hundreds of times faster than state-of-the-art and also more accurate and reliable.
We model the symmetry detection problem as a differentiable function, which can be approximated by a neural network. We further design a dedicated symmetry distance loss along with a regularization loss to avoid generating duplicated symmetry planes. Thanks to these loss functions, our network is trained in an unsupervised manner, making data collection much easier.
2 Related Work
2.1 Symmetry Detection
Symmetry detection is an important topic in shape analysis, and is widely used for images and 3D geometry. Symmetry detection includes global and partial symmetry, as well as intrinsic and extrinsic symmetry. Most methods can cope with a certain level of approximate symmetry.
Global symmetry refers to transformations that map the whole object to itself. Atallah proposes an algorithm for enumerating all the axes of symmetry of a planar figure consisting of segments, circles and points. Martinet et al. propose a method for detecting accurate global symmetry using generalized moments. Kazhdan et al. detect $n$-fold rotational symmetry based on the correlation of spherical harmonic coefficients. Raviv et al. present a generalization of symmetries for non-rigid shapes.
Depending on whether the symmetry exists in the (Euclidean) embedding space or is defined with respect to distance metrics on the geometry itself, symmetry can be classified as extrinsic or intrinsic.
For extrinsic symmetry, the Euclidean distance between points is usually used to measure the symmetry of a shape, whereas intrinsic symmetry is measured with intrinsic metrics such as geodesic distances. For global extrinsic symmetry, planar reflective symmetry is the most fundamental form. Zabrodsky et al. introduce a measure of approximate symmetry. Podolak et al. further describe a planar reflective symmetry transform (PRST) that captures a continuous measure to help find reflective symmetry.
Ovsjanikov et al. introduce a method to compute intrinsic symmetry of a shape using eigenfunctions of Laplace-Beltrami operators. Kim et al.  present a method to discover point correspondences to detect global intrinsic symmetry on 3D models based on the algorithm by Lipman et al. . Mitra et al. present a method to detect intrinsic regularity, where the repetitions are on the intrinsic grid.
For partial symmetry, a shape $S$ is said to have partial symmetry w.r.t. a transformation $T$ if there are two subsets $S_1, S_2 \subseteq S$ such that $T(S_1) = S_2$. Gal and Cohen-Or introduce local surface descriptors that represent the geometry of local regions of the surface to detect partial symmetry. Mitra et al. present a method based on transformation-space voting schemes to detect partial and approximate symmetry. Pauly et al. present a method for discovering regular or repeated geometric structures in 3D shapes. Berner et al. present a symmetry detection algorithm based on analyzing a graph of surface features. Lipman et al. introduce the Symmetry Factored Embedding (SFE) and the Symmetry Factored Distance (SFD) to analyze and represent symmetries in a point set. Xu et al. extend the PRST to extract partial intrinsic reflective symmetries.
Our aim is to develop a deep learning approach for effective real-time global planar reflective symmetry detection.
2.2 Geometry Processing with Deep Learning
Neural networks have achieved much success in various areas. In recent years, more and more researchers have generalized this tool from 2D images to 3D geometry. Su et al. use a dimensionality reduction strategy that feeds 2D rendered images of a 3D object from multiple views into several classical and mature 2D CNNs. Maturana and Scherer argue that many existing systems do not take full advantage of 3D depth information, so they create a volumetric occupancy grid representation and predict 3D targets in real time directly with a 3D CNN. Wu et al. introduce 3D ShapeNets to learn the distribution of complex 3D voxel grids and use it for 3D shape classification and recognition. Girdhar et al. propose a TL-embedding network to generate 3D voxel models from 2D images. Qi et al. combine a volumetric CNN with a multi-view CNN, enabling object classification of 3D data. Wu et al. propose 3D-GAN (Generative Adversarial Network) to generate 3D objects from a probabilistic space. Tulsiani et al. present a network to interpret 3D objects with a set of volumetric primitives. Riegler et al. present OctNet for 3D object classification, using an octree structure to reduce memory and computational costs. Such work shows the great potential of deep learning for 3D geometry processing, but none of the existing work considers learning to detect 3D object symmetry, which we address in this paper.
3 Network Architecture
In this section, we describe the network architecture of our method. The overall network is presented in Figure 1. The goal is to train a CNN to predict symmetry planes in an unsupervised manner. The CNN has five 3D convolution layers with kernel size 3, padding 1, and stride 1. After each 3D convolution, max pooling of stride 2 and leaky ReLU activation are applied; these are followed by fully connected layers that predict the parameters of the symmetry planes.
The input to the network is a voxel grid obtained by voxelizing the input shape; the choice of voxel resolution is evaluated in Section 7.6.
The 3D convolution and pooling layers are used to extract global features of the shape. The output includes parameters of reflective planes and rotation parameters. For typical shapes, our network predicts three potential symmetry planes and three potential rotation axes. These are further checked in the validation stage, so the shape may end up with fewer (or even no) symmetry planes. The symmetry planes are represented in the implicit form $n \cdot x + d = 0$. We further use quaternions to represent rotations, because a quaternion is more compact, with fewer parameters than a rotation matrix, and it can easily be converted from and to an axis-angle representation. We initialize the normal vectors of the planes and the directions of the axes to be three mutually perpendicular vectors to maximize their coverage; in practice, we simply initialize them to the three coordinate axes. The initial rotation angle of each axis determines the corresponding quaternion $\cos(\theta/2) + \sin(\theta/2)(x\,i + y\,j + z\,k)$, where $i$, $j$ and $k$ are the fundamental quaternion units, $\theta$ is the rotation angle, and $(x, y, z)$ is the unit axis direction. In our network, the predicted quaternions are normalized to unit vectors after each iteration of optimization. After training, we transform the quaternion representation to the axis-angle representation.
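As a concrete sketch, the architecture described above might be written as follows in PyTorch. Only the kernel/pooling configuration and the output parameterization (three planes, three quaternions) come from the text; the channel widths and fully connected layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class PRSNet(nn.Module):
    """Sketch of the described architecture: five 3D conv layers
    (kernel 3, padding 1, stride 1), each followed by max pooling of
    stride 2 and leaky ReLU, then fully connected heads predicting
    three plane parameter vectors and three rotation quaternions."""
    def __init__(self):
        super().__init__()
        layers, c = [], 1
        for out_c in [4, 8, 16, 32, 64]:          # assumed channel widths
            layers += [nn.Conv3d(c, out_c, 3, padding=1),
                       nn.MaxPool3d(2),
                       nn.LeakyReLU(0.2)]
            c = out_c
        self.features = nn.Sequential(*layers)
        # After 5 poolings, a 32^3 input becomes 1^3 -> 64-d feature.
        def head():
            return nn.Sequential(nn.Linear(64, 32), nn.LeakyReLU(0.2),
                                 nn.Linear(32, 4))
        self.plane_heads = nn.ModuleList([head() for _ in range(3)])  # (a,b,c,d)
        self.quat_heads = nn.ModuleList([head() for _ in range(3)])   # (w,x,y,z)

    def forward(self, vox):                        # vox: (B, 1, 32, 32, 32)
        f = self.features(vox).flatten(1)
        planes = [h(f) for h in self.plane_heads]
        # quaternions are normalized to unit length, as described above
        quats = [nn.functional.normalize(h(f), dim=1) for h in self.quat_heads]
        return torch.stack(planes, 1), torch.stack(quats, 1)
```

A forward pass on a batch of voxel grids then yields two tensors of shape (batch, 3, 4), one for plane parameters and one for unit quaternions.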
Our network is trained in an unsupervised manner because we do not require any annotations for the reflection plane parameters that best describe the global extrinsic symmetry of the object. This greatly reduces the effort of obtaining training data, as only a collection of (symmetric) shapes is required. To achieve this, we propose a novel symmetry distance loss to promote planar symmetry. Moreover, to avoid producing duplicated symmetry planes, we further introduce a regularization loss.
4 Loss Function
Denote by $P = (n, d)$ the plane parameters in the implicit form $n \cdot x + d = 0$, where $n$ is the normal direction of the plane, which uniquely determines a reflection plane, and denote by $r$ the rotation parameter, which represents the quaternion of the rotation transform. To train the network to predict symmetry planes and rotation axes, we introduce two loss functions, namely the symmetry distance loss and the regularization loss.
4.1 Symmetry Distance Loss
To measure whether an input shape is symmetric w.r.t. a given reflection plane or rotation axis, we first uniformly sample points on the shape to form a point set $Q = \{q_1, \dots, q_N\}$. We then obtain a transformed point set $Q'$ by applying the planar reflection or rotation transformation to each point $q_k$, giving the transformed point $q'_k$.
For a reflection plane, the symmetry point of $q_k$ is
$$q'_k = q_k - 2\,\frac{q_k \cdot n + d}{\|n\|^2}\,n,$$
where $n$ is the normal vector of the reflection plane and $d$ is its offset parameter.
For a rotation axis, the symmetry point of $q_k$ is
$$q'_k = r\,q_k\,r^{-1},$$
where $r$ represents the quaternion of the rotation and $q_k$ is treated as a pure quaternion.
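Both transformations follow directly from the formulas above. This NumPy sketch assumes points are stored as an (N, 3) array and quaternions use (w, x, y, z) order:

```python
import numpy as np

def reflect(points, plane):
    """Reflect points (N, 3) across the plane (a, b, c, d): n.x + d = 0."""
    n, d = plane[:3], plane[3]
    t = (points @ n + d) / (n @ n)       # signed distance factor per point
    return points - 2.0 * np.outer(t, n)

def quat_rotate(points, q):
    """Rotate points (N, 3) by a unit quaternion q = (w, x, y, z),
    i.e. the sandwich product r * p * r^-1 with p a pure quaternion,
    expanded into the standard cross-product form."""
    w, v = q[0], np.asarray(q[1:])
    return points + 2.0 * np.cross(v, np.cross(v, points) + w * points)
```

For example, reflecting (1, 0, 0) across the plane x = 0 gives (-1, 0, 0), and rotating it by a quarter turn about the z axis gives (0, 1, 0).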
Then we calculate the shortest distance from each symmetry point to the shape $S$. To do this efficiently, we precompute the closest point on the surface to each cell center of a regular grid; during training, we use the distance between a symmetry point and the precomputed closest point of the cell it falls in as the approximate closest distance, along with the gradients required for back-propagation. Finally, the symmetry distance loss of a shape is defined as
$$L_{sd} = \sum_{k=1}^{N} \min_{p \in S} \|q'_k - p\|_2,$$
where $N$ is the number of sample points.
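The grid-based approximation of the symmetry distance can be sketched as follows. The grid extent $[-0.5, 0.5]^3$, the resolution, and the brute-force closest-point precomputation (a KD-tree would be used in practice) are all assumptions:

```python
import numpy as np

def build_closest_point_grid(surface_pts, res=32):
    """Precompute, for each cell of a res^3 grid over [-0.5, 0.5]^3,
    the closest surface sample point (brute force for clarity)."""
    centers = (np.arange(res) + 0.5) / res - 0.5
    gx, gy, gz = np.meshgrid(centers, centers, centers, indexing="ij")
    cell_centers = np.stack([gx, gy, gz], -1).reshape(-1, 3)
    d = np.linalg.norm(cell_centers[:, None] - surface_pts[None], axis=2)
    return surface_pts[d.argmin(1)].reshape(res, res, res, 3)

def symmetry_distance(transformed_pts, grid, res=32):
    """Approximate sum of shortest distances from the transformed
    sample points to the shape, via the precomputed grid lookup."""
    idx = np.clip(((transformed_pts + 0.5) * res).astype(int), 0, res - 1)
    closest = grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    return np.linalg.norm(transformed_pts - closest, axis=1).sum()
```

During training, the lookup makes each loss evaluation a constant-time operation per sample point, which is what enables back-propagation at interactive rates.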
4.2 Regularization Loss
Many shapes have multiple symmetry planes or rotation axes. Since each of them is sufficient to minimize the symmetry distance loss, the network may produce multiple near-identical outputs, which can cause essential symmetry planes/rotation axes to be missed. To address this, we constrain the predicted reflection planes and rotation axes not to overlap, by adding a regularization loss that separates the planes and axes from each other as much as possible. Let $M_1$ be the matrix whose rows are the unit normal directions of the predicted symmetry planes, and let $M_2$ be the matrix whose rows are the normalized imaginary parts of the predicted rotation quaternions. Let
$$A = M_1 M_1^T - I, \qquad B = M_2 M_2^T - I,$$
where $I$ is the identity matrix. If each plane (resp. axis) is orthogonal to every other plane (resp. axis), then $A$ (resp. $B$) is an all-zero matrix. We define the regularization loss as
$$L_r = \|A\|_F^2 + \|B\|_F^2,$$
which penalizes planes and axes that are close to parallel. Figure 2 compares the results with (a) and without (b) the regularization loss. In this experiment, we initialize all the planes and axes with the same settings. Without the regularization loss, the reflection planes and rotation axes overlap with each other, as shown in the right column, because they reach the same local minimum and the network has no incentive to separate them; with the regularization, the planes and axes on the left are clearly separated. Two planes still overlap because the model does not have a third reflection plane with small symmetry distance loss (this is addressed in the validation stage).
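A minimal sketch of this regularizer, assuming the squared Frobenius norm of $M M^T - I$ is applied to both the plane normals and the quaternion imaginary parts:

```python
import numpy as np

def regularization_loss(normals, quat_ims):
    """Orthogonality regularizer: M M^T - I is all-zero when the rows
    are mutually orthogonal unit vectors, so its (squared) Frobenius
    norm penalizes near-parallel planes/axes."""
    def term(rows):
        M = rows / np.linalg.norm(rows, axis=1, keepdims=True)  # unit rows
        A = M @ M.T - np.eye(len(M))
        return np.linalg.norm(A, "fro") ** 2
    return term(normals) + term(quat_ims)
```

Three mutually orthogonal normals give a loss of zero, while two coincident normals contribute a strictly positive penalty, which is exactly the gradient signal that pushes duplicated predictions apart.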
4.3 Overall Loss Function
We define the overall loss function as
$$L = L_{sd} + w_r L_r,$$
where $w_r$ is a weight balancing the importance of the two loss terms.
5 Validation
Our network always predicts three symmetry planes and three rotations to simplify the architecture. However, real-world shapes may have fewer symmetry planes and rotation axes, in which case some output planes may overlap with each other. Moreover, due to the local minima of gradient descent optimization, the network may also detect approximate symmetries that are not sufficiently good. These issues can be easily addressed by a simple validation stage. We check the detected symmetry planes and rotation axes to remove duplicated outputs: if the dihedral angle between two planes (or the angle between two axes) is below a threshold, we remove the one with the larger symmetry distance error. Meanwhile, if a detected symmetry plane/rotation axis leads to a high symmetry distance loss (above a threshold in our experiments), we also remove it, as the shape is not sufficiently symmetric w.r.t. it. As shown in Figure 3(a), the bath only has two reflection planes, but the network always outputs three planes before validation; the validation removes the third plane, which has the largest symmetry distance error, and retains the other two. For the bench in Figure 3(b), two of the three output planes are removed due to high symmetry distance loss. Similarly, two extra rotation axes are removed in Figure 3(c).
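The validation stage can be sketched as follows; the angle and error threshold values used here are illustrative assumptions, not taken from the text:

```python
import numpy as np

def validate(planes, errors, angle_thresh=np.pi / 6, sde_thresh=4e-4):
    """Remove invalid and duplicated symmetry planes: drop planes whose
    symmetry distance error exceeds sde_thresh; if two plane normals
    form an angle below angle_thresh, keep the one with smaller error."""
    order = np.argsort(errors)                 # consider best planes first
    kept = []                                  # indices of retained planes
    for i in order:
        if errors[i] > sde_thresh:
            continue                           # not sufficiently symmetric
        n = planes[i][:3] / np.linalg.norm(planes[i][:3])
        dup = any(
            np.arccos(min(1.0, abs(n @ (planes[j][:3]
                      / np.linalg.norm(planes[j][:3]))))) < angle_thresh
            for j in kept)
        if not dup:
            kept.append(i)
    return [planes[i] for i in kept]
```

Processing planes in order of increasing error guarantees that, within each cluster of near-duplicates, the plane with the smallest symmetry distance error survives.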
6 Application to Shape Completion
Many geometric tasks can benefit from the predicted symmetry planes. Here, we apply our method to shape completion. We show a comparison of shape completion in Figure 5. We visualize the Euclidean error between the ground-truth part and the generated part of the piano, which is obtained by mirroring the geometry of the left leg along the symmetry plane detected by each method. Our method and OBB give better symmetry planes than the other methods, and therefore produce well-completed geometry. Martinet et al. produce a worse result because their method is designed for exact symmetry detection, which the incomplete shape breaks. Mitra et al. and Podolak et al. use sampling-based algorithms to obtain the reflection planes, which are affected by the partial shape. OBB is hardly affected by such a missing part, and our network learns the global shape of the model through 3D convolution and abundant training data, so it discovers the global symmetry reliably. Thanks to this symmetry detection method, incomplete shapes with reflective symmetries can be repaired accurately and quickly.
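The completion step itself reduces to reflecting the partial geometry across the detected plane. A minimal point-cloud sketch:

```python
import numpy as np

def mirror_complete(partial_pts, plane):
    """Complete a shape with a missing part by mirroring its points
    across a detected symmetry plane (a, b, c, d): n.x + d = 0, and
    concatenating the mirrored copy with the original points."""
    n, d = plane[:3], plane[3]
    t = (partial_pts @ n + d) / (n @ n)
    mirrored = partial_pts - 2.0 * np.outer(t, n)
    return np.vstack([partial_pts, mirrored])
```

In practice one would mirror only the region opposite the hole and merge the result with the original mesh, but the reflection itself is the same operation.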
7.1 Dataset and Training
We use the ShapeNet dataset, which contains 55 common object categories of unique 3D models, to train our network. We split the models into a training set and a test set. Because the ShapeNet models are often roughly axis-aligned, and many categories contain relatively few models, we apply different random rotations to each model to obtain augmented models for each category for training. However, many shapes in ShapeNet are not symmetric. For non-symmetric shapes, it can be difficult to determine which result is better, even for human annotators; although, e.g., the symmetry distance error (SDE) could be used to give an indication, it does not provide a definitive measure. We are not aware of benchmark datasets for symmetry. Therefore, we collect a test set of shapes which are manually validated to ensure they are symmetric; for these shapes, we obtain reasonable ground-truth symmetry planes so that the ground truth error (GTE) can be compared between methods.
In the preprocessing step, we uniformly sample points on the surface and voxelize the shape as input to our network. During network training, we set the batch size, learning rate and regularization loss weight, and use the ADAM optimizer to train our network with the loss described above.
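The voxelization part of this preprocessing can be sketched as a simple occupancy grid; the normalization of the shape into a centered unit cube is an assumption:

```python
import numpy as np

def voxelize(points, res=32):
    """Occupancy voxelization of surface sample points into a res^3
    grid, after normalizing the shape into [-0.5, 0.5]^3."""
    p = points - points.mean(0)             # center at the origin
    p = p / (2.0 * np.abs(p).max())         # scale into [-0.5, 0.5]^3
    idx = np.clip(((p + 0.5) * res).astype(int), 0, res - 1)
    vox = np.zeros((res, res, res), dtype=np.float32)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vox
```

The resulting grid (with a leading channel dimension added) is what the 3D CNN consumes.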
7.2 Computation Time
We measure the computation time and compare it with alternative methods. Our experiments were carried out on a desktop computer with a 3.6 GHz Intel Core i7-6850K CPU, 128 GB memory and an NVIDIA TITAN X GPU. For a typical model, such as the piano in Figure 10 with 1052 vertices and 4532 faces, OBB, Kazhdan et al., Martinet et al., Podolak et al., Podolak et al. with GEDT, Korman et al. and Mitra et al. all require substantially longer (on the order of seconds or more), whereas our method only needs 1.81 ms, which is hundreds of times faster than the state-of-the-art methods and achieves real-time performance. This is because those methods rely on sampling or iterative algorithms, which increases computation time. Our method is even comparable with PCA (typically taking 1.9 ms on the CPU), since the trained network runs on the GPU with its powerful parallel computation.
7.3 Choice of Parameter
We show visual results with different regularization loss weights $w_r$ in Figure 6. We choose an intermediate weight for training in our paper because it separates multiple symmetry planes properly, unlike Figure 6(a) and Figure 6(b) with small weights, and has a lower symmetry distance loss than Figure 6(d).
[Table I: GTE and SDE comparison of PCA, Oriented Bounding Box, Kazhdan et al., Martinet et al., Mitra et al., PRST, PRST with GEDT, Korman et al., and our method; numerical entries lost in extraction.]
[Table II: per-dataset SDE comparison of PCA, Oriented Bounding Box, Kazhdan et al., Martinet et al., Mitra et al., PRST, PRST with GEDT, Korman et al., and our method; numerical entries lost in extraction.]
7.4 Results and Evaluations
We compare our method with PCA, Oriented Bounding Box, Kazhdan et al., Martinet et al., Podolak et al., Podolak et al. with the Gaussian Euclidean Distance Transform (GEDT), using their default parameters, and with Korman et al. and Mitra et al. To quantitatively evaluate the quality of a detected symmetry plane, we first normalize its normal vector and flip the direction if necessary so that the angle between the detected normal and the ground-truth normal is at most $90°$; the error of the plane w.r.t. the ground truth (GTE) is then defined as the squared distance between the predicted and ground-truth plane parameter vectors.
Alternatively, we can also use the symmetry distance error (SDE), defined in the same way as the symmetry distance loss, to measure the symmetry quality.
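One plausible reading of the GTE computation described above, with the sign disambiguation made explicit, is:

```python
import numpy as np

def ground_truth_error(pred_plane, gt_plane):
    """GTE sketch: normalize each plane's parameters by its normal's
    length, flip the predicted plane's sign so the angle between the
    two normals is at most 90 degrees, then take the squared distance
    between the two 4-vectors (a, b, c, d)."""
    def norm(pl):
        pl = np.asarray(pl, float)
        return pl / np.linalg.norm(pl[:3])
    p, g = norm(pred_plane), norm(gt_plane)
    if p[:3] @ g[:3] < 0:
        p = -p                      # flip to within 90 degrees of gt
    return float(((p - g) ** 2).sum())
```

Since a plane is invariant to negating all four parameters, the flip makes the metric well-defined regardless of which orientation the network happens to output.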
We compare our method with existing methods. Figure 4 shows the reflection planes detected by the different methods. In each figure, the heatmap presents the distance between the sample points on the ground-truth plane and their reflection points w.r.t. each result plane. The oriented bounding box (OBB) produces the worst results, because the bounding box is computed for minimum interior volume, which can be misled by certain shapes. The other existing methods produce better results but still have some errors: Kazhdan et al. and PRST are sensitive to the grid resolution and the distribution of sample points. PRST may also produce different results across multiple runs due to random sampling, and its computation time grows quickly as more sample points and higher grid resolutions are used. The last column shows that our method obtains the most accurate results. Moreover, our method only costs a few milliseconds once the network is trained, while all methods except PCA need several iterations to compute a locally optimal result. As the further results in Figure 8 show, our method is able to produce reliable and accurate results, including for shapes with multiple symmetry planes.
We also evaluate our network on larger test sets. Table I shows the mean ground truth error and symmetry distance error on our test set, and the accompanying cumulative error plot shows the proportion of correspondences (y-axis) for which the distance between reflective points and their nearest points on the mesh is within an error bound (x-axis). Our method consistently has more points within a given error bound than the alternative methods.
Many shapes are not entirely symmetric, so we evaluate our method on human shapes from the SCAPE dataset to test its capability of handling such general cases. As shown in Figure 7, we visualize the symmetry plane with the lowest symmetry distance error for each method, except Korman et al., which does not detect any reflective symmetry. Our method discovers a more plausible symmetry plane, with the lowest symmetry distance error compared with PCA, oriented bounding box, Kazhdan et al., Martinet et al., Mitra et al., PRST and PRST with GEDT. This demonstrates that our method can detect approximate symmetry planes of general shapes, outperforming existing methods. This is because we use an unsupervised loss to train the network, and the ShapeNet dataset covers various categories including symmetric and asymmetric shapes. Note that our training set does not include any shape from SCAPE, or indeed any human shape; this also shows that our network generalizes well to new shapes and unseen shape categories.
In Table II, we also report accuracy comparison of our method with alternative methods (in terms of SDE as no ground truth is available) on ABC  and Thingi10K , which contain a large number of asymmetric shapes. In this experiment, we randomly select 80% data for training and 20% for testing, and use the training data to fine-tune the network pre-trained on ShapeNet. Our method turned out to be the best one on both ABC  and Thingi10K . This demonstrates that our method generalizes well to new datasets, producing minimum average SDE and dealing well with more complex and asymmetric shapes.
[Table III: average GTE comparison of PCA, Oriented Bounding Box, Kazhdan et al., Martinet et al., Mitra et al., PRST, PRST with GEDT, Korman et al., and our method; numerical entries lost in extraction.]
7.5 Robustness
To test the robustness of our network, we present two experiments, on noisy and on incomplete models. These experiments are motivated by the fact that scanned models often contain noisy and/or incomplete surfaces. As Figure 9 shows, because most ShapeNet models are non-manifold and some have very few vertices, which would lead to poorly formed noisy models, we first convert each model to a watertight manifold before adding Gaussian noise to each vertex along its normal direction. Our method produces stable output with the smallest error, demonstrating its robustness to small changes of vertex positions.
The second experiment is shown in Figure 10, where we remove the surface of the piano's left leg and compute the distance measure based on the original complete model. The distance heatmap shows that our method and the oriented bounding box (OBB) are least affected. Our network extracts global features through 3D convolution and pooling and learns the global extrinsic shape of the model, so the feature vector of the partial piano stays close to that of the complete one, as their global shapes are very similar; OBB is also insensitive to this situation. Kazhdan et al. has some error because its feature is computed from the voxelization, which changes with the missing part. Martinet et al. is designed for exact symmetry detection, so the incomplete shape affects its results significantly. Podolak et al. and Mitra et al. use sample points to obtain the reflection planes, and the distribution of the partial piano's points differs somewhat from the complete shape, so the reflection planes change slightly. The PCA result is also affected because the shape changes.
We further evaluate on a dataset with large continuous regions removed. We take the test set and, for each shape, randomly choose a radius and a center point, and remove the triangles of the shape that fall inside the resulting sphere. The average GTE is reported in Table III. Our method is robust and produces the minimum average GTE. While voxelization is beneficial, the robustness of our method does not come from it alone, as evidenced by our better performance than Podolak et al., which also uses voxel data.
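The sphere-removal procedure can be sketched as follows; dropping only triangles whose three vertices all fall inside the sphere is an assumption about the exact criterion:

```python
import numpy as np

def remove_sphere(vertices, triangles, center, radius):
    """Simulate an incomplete scan by dropping every triangle whose
    three vertices all lie inside a removal sphere."""
    inside = np.linalg.norm(vertices - center, axis=1) < radius
    keep = ~inside[triangles].all(axis=1)   # keep triangles not fully inside
    return triangles[keep]
```

Sampling random centers and radii per shape then yields a family of incomplete variants for the robustness evaluation.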
7.6 Voxel Resolution and Network Pre-training
We tested a range of voxel resolutions for the CNN and evaluated them with the symmetry distance error on the same test set as in Section 7.1. In this experiment, we change the number of convolution layers to suit the input voxel size: the number of convolution layers is $\log_2 R$, where $R$ is the input voxel resolution. As shown in Table IV, an intermediate resolution performs best and is used as the default; the performance drops with higher resolutions, probably due to overfitting with a large number of parameters. Since Kazhdan et al. produces fairly good results, we can use its results to form a supervised loss to pre-train our network and initialize it with the pre-trained weights. As shown in Figure 11, the network with pre-training converges faster, requiring only 5000 steps compared to 9000 steps without pre-training. Although the accuracy on the test data is nearly identical, initialization with pre-training is helpful for faster convergence.
8 Conclusion
In this paper, we introduce a novel unsupervised 3D convolutional neural network named PRS-Net, which can discover the planar reflective symmetry of a shape. To achieve this, we develop a symmetry distance loss along with a regularization loss to avoid generating duplicated symmetry planes. We also describe a method to remove invalid and duplicated planes and rotation axes. We demonstrate that our network is robust even when the input has noisy or incomplete surfaces.
-  (2005) SCAPE: shape completion and animation of people. In ACM transactions on graphics (TOG), Vol. 24, pp. 408–416. Cited by: Fig. 7, §7.4.
-  (1985) On symmetry detection. IEEE Transactions on Computers (7), pp. 663–666. Cited by: §2.1.
-  (2008) A graph-based approach to symmetry detection. In Volume Graphics, Vol. 40, pp. 1–8. Cited by: §2.1.
-  (2015) Shapenet: an information-rich 3d model repository. arXiv preprint arXiv:1512.03012. Cited by: Fig. 8, §7.1, §7.4.
-  (2011) Fast oriented bounding box optimization on the rotation group SO(3, ℝ). ACM Transactions on Graphics (TOG) 30 (5), pp. 122. Cited by: §7.2, §7.4, TABLE I, TABLE II, TABLE III.
-  (2006) Salient geometric features for partial shape matching and similarity. ACM Transactions on Graphics (TOG) 25 (1), pp. 130–150. Cited by: §2.1.
-  (2016) Learning a predictable and generative vector representation for objects. In European Conference on Computer Vision, pp. 484–499. Cited by: §2.2.
-  (2018) Robust watertight manifold surface generation method for shapenet models. arXiv preprint arXiv:1802.01698. Cited by: §7.5.
-  (2002) A reflective symmetry descriptor. In European Conference on Computer Vision, pp. 642–656. Cited by: §7.2, §7.4, §7.4, §7.4, §7.5, TABLE I, TABLE II, TABLE III.
-  (2004) A reflective symmetry descriptor for 3d models. Algorithmica 38 (1), pp. 201–225. Cited by: §1, §1, §2.1, §7.6.
-  (2004) Symmetry descriptors and 3d shape matching. In Proceedings of the 2004 Eurographics/ACM SIGGRAPH symposium on Geometry processing, pp. 115–123. Cited by: §1.
-  (2010) Möbius transformations for global intrinsic symmetry analysis. In Computer Graphics Forum, Vol. 29, pp. 1689–1700. Cited by: §2.1.
-  (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §7.1.
-  (2019) ABC: a big CAD model dataset for geometric deep learning. In CVPR, Cited by: §7.4, TABLE II.
-  (2014) Probably approximately symmetric: fast rigid symmetry detection with global guarantees. Computer Graphics Forum. Cited by: §1, §7.2, §7.4, §7.4, TABLE I, TABLE II, TABLE III.
-  (2010) Symmetry factored embedding and distance. In ACM Transactions on Graphics (TOG), Vol. 29, pp. 103. Cited by: §2.1.
-  (2009) Möbius voting for surface correspondence. ACM Transactions on Graphics (TOG) 28 (3), pp. 72. Cited by: §2.1.
-  (2013) Rectifier nonlinearities improve neural network acoustic models. In Proc. icml, Vol. 30, pp. 3. Cited by: §3.
-  (2006) Accurate detection of symmetries in 3d shapes. ACM Transactions on Graphics (TOG) 25 (2), pp. 439–464. Cited by: §2.1, §6, §7.2, §7.4, §7.4, §7.5, TABLE I, TABLE II, TABLE III.
-  (2015) Voxnet: a 3d convolutional neural network for real-time object recognition. In Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, pp. 922–928. Cited by: §2.2.
-  (2010) Intrinsic regularity detection in 3d geometry. In European Conference on Computer Vision, pp. 398–410. Cited by: §2.1.
-  (2006) Partial and approximate symmetry detection for 3d geometry. ACM Transactions on Graphics (TOG) 25 (3), pp. 560–568. Cited by: §2.1, §6, §7.2, §7.4, §7.4, §7.5, TABLE I, TABLE II, TABLE III.
-  (2008) Global intrinsic symmetries of shapes. In Computer graphics forum, Vol. 27, pp. 1341–1348. Cited by: §2.1.
-  (2008) Discovering structural regularity in 3d geometry. ACM transactions on graphics (TOG) 27 (3), pp. 43. Cited by: §2.1.
-  (2006) A planar-reflective symmetry transform for 3d shapes. ACM Transactions on Graphics (TOG) 25 (3), pp. 549–559. Cited by: §1, §2.1, §2.1, §6, Fig. 11, §7.2, §7.4, §7.4, §7.5, §7.5, TABLE I, TABLE II, TABLE III.
-  (2016) Volumetric and multi-view cnns for object classification on 3d data. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5648–5656. Cited by: §2.2.
-  (2010) Full and partial symmetries of non-rigid shapes. International journal of computer vision 89 (1), pp. 18–39. Cited by: §2.1.
-  (2017) Octnet: learning deep 3d representations at high resolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vol. 3. Cited by: §2.2.
-  (2006) Folding meshes: hierarchical mesh segmentation based on planar symmetry. In Symposium on geometry processing, Vol. 256, pp. 111–119. Cited by: §1.
-  (2015) Multi-view convolutional neural networks for 3d shape recognition. In Proceedings of the IEEE international conference on computer vision, pp. 945–953. Cited by: §2.2.
-  (2017) Learning shape abstractions by assembling volumetric primitives. In Proc. CVPR, Vol. 2. Cited by: §2.2.
-  (2016) Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In Advances in Neural Information Processing Systems, pp. 82–90. Cited by: §2.2.
-  (2015) 3d shapenets: a deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1912–1920. Cited by: §2.2.
-  (2009) Partial intrinsic reflectional symmetry of 3d shapes. In ACM Transactions on Graphics (TOG), Vol. 28, pp. 138. Cited by: §2.1.
-  (1995) Symmetry as a continuous feature. IEEE Transactions on Pattern Analysis and Machine Intelligence 17 (12), pp. 1154–1166. Cited by: §2.1.
-  (2016) Thingi10k: a dataset of 10,000 3D-printing models. arXiv preprint arXiv:1605.04797. Cited by: §7.4, TABLE II.