A Convolutional Decoder for Point Clouds using Adaptive Instance Normalization

Isaak Lim†, Moritz Ibing†, Leif Kobbelt (†: equal contribution)
Visual Computing Institute, RWTH Aachen University

Abstract
Automatic synthesis of high-quality 3D shapes is an ongoing and challenging area of research. While several data-driven methods have been proposed that make use of neural networks to generate 3D shapes, none of them reach the level of quality that deep learning synthesis approaches for images provide. In this work we present a convolutional point cloud decoder/generator that makes use of recent advances in the domain of image synthesis. Namely, we use Adaptive Instance Normalization and offer an intuition on why it can improve training. Furthermore, we propose extensions to the minimization of the commonly used Chamfer distance for autoencoding point clouds. In addition, we show that careful sampling is important both for the input geometry and in our point cloud generation process to improve results. The results are evaluated in an autoencoding setup to offer both qualitative and quantitative analysis. The proposed decoder is validated by an extensive ablation study and is able to outperform current state-of-the-art results in a number of experiments. We show the applicability of our method in the fields of point cloud upsampling, single view reconstruction, and shape synthesis.

CCS Concepts: • Computing methodologies → Shape analysis; Point-based models
The question of how to represent 3D geometry as input for neural networks is still an open field of research. Most recent papers (e.g. [QSMG17, QYSG17, AML18, FELWM18, LBS18]) focus on how to encode the input such that its latent representation can be used for tasks like classification or segmentation. However, less work has been done on how high-fidelity 3D shapes can be generated by a decoder/generator network. We investigate the problem of generating 3D shapes in an autoencoding setup, which allows us to evaluate results both qualitatively and quantitatively. While a number of previous works focus on the encoder, we mainly target the decoder/generator in this paper.
Synthesis of 3D shapes is a time-consuming task (especially for non-expert users), which is why a number of data-driven approaches have been proposed to tackle this problem. Methods range from combining parts of a shape collection into novel configurations, over deformation-based approaches, to the full synthesis of voxelized, meshed, or point-sampled 3D shapes. While impressive results have been presented, generated 3D shapes have not yet reached a quality comparable to the state of the art in image generation, such as recently presented by Karras et al. [KLA19].
We are interested in the complete synthesis of 3D shapes. In particular we investigate the generation of 3D point clouds, since voxelized representations incur a heavy memory cost. At the same time we want to benefit from recent advances in generating high-fidelity images. Thus, in this work we propose a convolutional decoder for point clouds. As shown by Groueix et al. [GFK18], it is difficult to achieve high-quality autoencoding results by training a naïve point cloud decoder (i.e. a simple multi-layer perceptron). To tackle this problem we propose several measures that allow for a better conditioning of the optimization problem.
Our contributions can be summarized as follows.

We propose a convolutional decoder for point clouds that is able to outperform current state-of-the-art results on autoencoding tasks.

Our autoencoder is able to handle a varying number of points both for its input and output. This property makes it straightforward to apply our architecture to the task of point cloud upsampling.

To the best of our knowledge we are the first to apply Adaptive Instance Normalization as used in current image synthesis research [KLA19] to the area of point cloud generation. We give an intuition on why this technique is beneficial to training.

We propose several losses in addition to the commonly used Chamfer distance that consider both voxel-based and point cloud differences.
Code and our sampling of the ShapeNet Core dataset (v2) [CFG15] can be found at the project page: graphics.rwth-aachen.de/publication/03303.
Most work on content synthesis with neural networks has been done on images. The natural extension to 3D data is a voxel grid, which allows a straightforward transfer of many image-based methods (e.g. by replacing 2D with 3D convolutions). Examples are methods that deal with tasks such as single image shape reconstruction [CXG16], shape completion [HLH17], and shape generation [WZX16, LXC17]. Another option is to represent geometry as planar patches inserted into an octree [WSLT18]. However, as we are interested in point cloud methods, we will restrict our discussion of related work to this domain.
Voxel-based approaches have their drawbacks when it comes to memory consumption, as the required memory scales cubically with the resolution of the grid. To deal with these problems, several architectures have emerged that give up the regular grid structure and instead work directly on unordered point clouds. PointNet [QSMG17] is one of the first among those approaches and does not take any structure or neighborhood into account; the internal shape representation is created by aggregating point descriptors. As the relation between nearby points is often important to characterize shape, this work has been extended in PointNet++ [QYSG17], where points are hierarchically grouped based on their neighborhood and PointNet is applied to those local point clouds. On the other hand, dynamic graph CNNs [WSL19] encode the information of a local neighborhood via graph convolutions. PCNNs [AML18] generalize convolutions over points via the extension of the convolution operation to continuous volumetric functions. In this manner they benefit from the translational invariance and parameter sharing of convolutions, without the memory drawbacks of high-resolution voxel grids. Rethage et al. [RWS18] propose to combine the advantages of point clouds and grid structures by extracting features from points in the local neighborhood of each grid cell using a network similar to PointNet. On the resulting representation, 3D convolutions can be applied. As a single grid cell encodes details of the point cloud and not just a binary occupancy value, a low-resolution grid is sufficient. This approach is most similar to the encoder used in our framework.
Most current approaches [FSG17, NW17, ADMG18, GWM18, LCHL18] for the generation of point clouds employ fully connected layers, sometimes in combination with upsampling and convolution layers, to generate a fixed number of points. Li et al. [LCHL18] employ both a convolution branch to recover the coarse shape and a fully connected branch for the details of the object. Unlike our approach, they propose 2D convolutions that result in images with 3 channels, which are interpreted as point coordinates. A different approach is taken by Sinha et al. [SUHR17], who, instead of learning to output points directly, propose to learn a mapping from 2D to 3D. By sampling the 2D domain one can obtain a point cloud, which allows the number of generated points to be flexible. Groueix et al. [GFK18] propose a method that builds on this approach. However, instead of a single mapping, a whole atlas of such maps is learned by training several networks in the style of [SUHR17] that do not share parameters. The loss then ensures that each network learns a different mapping and is responsible for a different part of the shape. We employ a similar point generation technique in the sense that we also learn a mapping from 2D to 3D instead of using fully connected layers to directly generate a fixed number of points. However, we arrive at these maps in a different manner by generating them per grid cell with our proposed convolutional decoder. A different class of networks recently emerged that represents 3D shapes as an implicit function [PFS19, CZ19, MON19], which can then be sampled to reconstruct explicit geometry.
We want to represent our geometry as point clouds, since they can approximate 3D shapes at a higher resolution without incurring the memory costs that voxel grids entail. However, we also want to benefit from the advantages of grid structures, enabling the use of convolutional layers and Adaptive Instance Normalization (AdaIN). To this end we propose our convolutional decoder (Section [), which starts with a low-resolution grid and successively increases the resolution up to the final desired grid size. We then generate points for each grid cell. Conversely, for our encoder (Section [) we embed the input point cloud into a voxel grid. A network then encodes and stores local parts of the point cloud in each corresponding (closest) grid cell. This voxel grid can then be encoded with a 3D convolutional network.
In traditional convolutional autoencoders the output of the encoder is passed to the decoder, which repeatedly upsamples it to produce the reconstruction of the input. This means that even the encoding of fine details of the shape has to pass through the entire decoder, since high- and low-level features are not distinguished. In contrast, our proposed decoder inserts the encoded shape information at various stages of the upsampling process. We will explain our decoder in detail first, followed by the encoder. In order to achieve high-quality results we introduce several additional losses.
Inspired by Karras et al. [KLA19], we propose a convolutional decoder for point clouds based on Adaptive Instance Normalization (AdaIN) as used in a number of style-transfer methods [DSK17, GLK17, HB17, DPS18]. Given is an encoder that maps an input point cloud to a latent vector z. A naïve decoder would map z back to a point cloud via a multi-layer perceptron (MLP). One problem with this approach is that a series of fully connected layers adds a large number of parameters to the network.
Another problem is that, in order to reconstruct fine details of the input, every layer of the network is required to preserve the entire shape information. A small change in one of the parameters during backpropagation can have wide-reaching global effects on the output. While one can reduce the number of parameters by introducing a convolutional decoder, the problem of the interplay of different parameters during backpropagation remains. Karras et al. [KLA19] show that using AdaIN with a convolutional decoder/generator can produce impressive results for images. An AdaIN layer works by first normalizing its input features and then applying an affine transformation per instance. The transformation parameters are an additional input (e.g. computed from z). In practice, this means that our decoder is constructed as a series of upsampling, convolution, instance normalization [UVL16], and affine feature transformation layers followed by a non-linearity (see Figure 2). In contrast to traditional convolutional networks, the entire shape-specific information is introduced through the affine transformations and is not passed through all layers of the decoder. Instead, the upsampling process is applied to a learnable parameter block. For more details on the architecture see Appendix [.
Thus, a given z (provided by some encoder) is mapped to a vector w that contains the scaling and translation coefficients for each affine feature transformation layer. For every layer l with feature dimension C_l where AdaIN is applied we select a slice w_l of w. We interpret w_l as a pair (s_l, t_l) of scaling and translation coefficients with C_l entries each. As we regard only a single layer, we omit l in the following. The intermediate features F are first normalized and then scaled and translated:

\[ \operatorname{AdaIN}(F_c) = s_c \, \frac{F_c - \mu_c}{\sigma_c} + t_c , \]

where \mu_c and \sigma_c^2 are the mean and variance of F_c over one instance. Since all operations are done for each channel c separately, in the following we will omit c for readability.
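To make the layer concrete, the forward pass of AdaIN can be sketched in a few lines (a NumPy sketch; the tensor shapes and the eps constant are illustrative, not the exact implementation):

```python
import numpy as np

def adaptive_instance_norm(x, gamma, beta, eps=1e-5):
    """Normalize features per instance and channel, then apply an affine
    transform whose parameters come from the latent code.

    x:     (N, C, ...) feature grid, e.g. N x C x D x H x W
    gamma: (N, C) per-instance scaling coefficients
    beta:  (N, C) per-instance translation coefficients
    """
    axes = tuple(range(2, x.ndim))            # spatial axes
    mu = x.mean(axis=axes, keepdims=True)     # per-instance, per-channel mean
    var = x.var(axis=axes, keepdims=True)     # per-instance, per-channel variance
    x_hat = (x - mu) / np.sqrt(var + eps)     # instance normalization
    shape = gamma.shape + (1,) * (x.ndim - 2)
    return gamma.reshape(shape) * x_hat + beta.reshape(shape)
```

Note that the output statistics are fully determined by gamma and beta, which is how the shape-specific information enters each decoder stage.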
As a result of this localized interaction the optimization problem becomes better behaved. Let g be the gradient of a loss function L (see Section [) with respect to the output \hat{x} of an intermediate normalization layer. The gradient w.r.t. a single cell x_i of its input is given as

\[ \frac{\partial L}{\partial x_i} = \frac{1}{n\sigma}\Big( n\, g_i - \sum_{j} g_j - \hat{x}_i \sum_{j} g_j \hat{x}_j \Big), \]

where \sigma is the standard deviation of x over one instance and n is the number of cells. For a scaling a and a constant translation b, consider the case where g_i = a\hat{x}_i + b. Then, because \hat{x} has zero mean and unit variance,

\[ \frac{\partial L}{\partial x_i} = \frac{1}{n\sigma}\Big( n(a\hat{x}_i + b) - nb - a\, n\, \hat{x}_i \Big) = 0. \]
Thus, there is no gradient w.r.t. a scaling and translation of the features running through the normalization layer, which is only natural, as such a transformation would be cancelled out by the normalization anyway. AdaIN allows us to set this affine transformation individually for each object. Therefore, the gradient w.r.t. its parameters does not have to pass through the entire decoder. Consequently, the convolutional layers only have to learn non-affine interactions.
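This invariance is easy to check numerically: normalizing an affinely transformed input (with positive scaling) yields the same output as normalizing the input itself, so any loss applied after the layer is flat in the scaling and translation, and their gradient through the layer vanishes. A small NumPy check over the cell axis:

```python
import numpy as np

def instance_norm(x, eps=1e-8):
    """Normalize each row (one instance/channel over its cells) to zero mean, unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)

x = np.random.default_rng(0).normal(size=(4, 64))   # 4 instances, 64 cells each
a, b = 3.0, -1.5                                    # affine transform of the input, a > 0
# The normalization cancels the transform, so a loss computed after the layer
# is constant in (a, b) and their gradient through the layer is zero.
assert np.allclose(instance_norm(a * x + b), instance_norm(x))
```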
Our proposed convolutional decoder so far only generates volumetric grids. We are, however, interested in generating point clouds. Therefore, as shown in Figure 2, for each cell we feed its encoded information into a simple MLP. This MLP predicts two values: an occupancy indicator, a binary variable classifying whether a cell is filled or empty, and a density estimate (i.e. the likelihood that a sample should be generated for this particular cell). Distributing the estimation over two variables helps in dealing with empty cells, as the density prediction seldom actually reaches zero and we would thus introduce points at unwanted locations. For all cells that are classified as filled, we distribute the total number of output samples proportionally to the density estimates of the cells. Thus our network is independent of the number of points we want to generate. This number can even be changed between training and inference (see Figure [).
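The allocation of output samples to cells can be sketched as follows (a hypothetical helper; the thresholding of the occupancy prediction is assumed to have happened already):

```python
import numpy as np

def allocate_samples(occupancy, density, n_total):
    """Distribute n_total output samples over the cells classified as filled,
    proportionally to their predicted density.

    occupancy: (K,) boolean occupancy decisions per cell
    density:   (K,) non-negative density estimates per cell
    returns:   (K,) integer sample counts summing to n_total
    """
    d = np.where(occupancy, density, 0.0)   # empty cells receive no samples
    d = d / d.sum()                         # renormalize over the filled cells
    counts = np.floor(d * n_total).astype(int)
    # hand the remaining samples to the cells with the largest fractional part
    remainder = d * n_total - counts
    short = n_total - counts.sum()
    counts[np.argsort(-remainder)[:short]] += 1
    return counts
```

Because the allocation happens at sampling time, the same trained network can emit any requested number of points.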
The actual generation of points is done in a similar manner to Groueix et al. [GFK18] and Yin et al. [YHCOZ18]. The idea is to learn a parameterization from a k-dimensional domain to 3D space. By randomly sampling this domain from a uniform distribution and applying the map, we obtain our 3-dimensional points. In all our experiments we set k = 2, since we assume that locally the shape can be approximated with a surface patch. In practice we apply this map by concatenating the k-dimensional sample to the encoded cell information and feeding the resulting vector into an MLP, which outputs a 3-dimensional point. Thus the MLP represents a map conditioned on the cell encoding. During inference we sample the k-dimensional domain uniformly and then apply a number of steps of Lloyd's algorithm [Llo82] to ensure an even coverage of the space. This further improves our results, as shown in Section 4. The predicted samples of each cell are offset by the corresponding cell centers.
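The inference-time sampling can be approximated as below: k uniform samples in the unit square are spread out with Lloyd iterations, using a dense auxiliary sampling as a stand-in for the continuous domain (the helper name and iteration counts are illustrative, not the paper's exact procedure):

```python
import numpy as np

def even_2d_samples(k, n_lloyd=10, n_dense=20000, seed=0):
    """Draw k uniform samples in [0,1]^2, then run Lloyd iterations so the
    samples move toward the centroids of their (approximate) Voronoi cells."""
    rng = np.random.default_rng(seed)
    samples = rng.random((k, 2))
    dense = rng.random((n_dense, 2))          # dense proxy for the continuous domain
    for _ in range(n_lloyd):
        # assign every dense point to its nearest sample (Voronoi cell)
        d2 = ((dense[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
        nearest = d2.argmin(axis=1)
        for i in range(k):                    # move each sample to its cell centroid
            members = dense[nearest == i]
            if len(members):
                samples[i] = members.mean(axis=0)
    return samples
```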
For our encoder (see Figure 3) we follow a similar approach to Rethage et al. [RWS18]. We isotropically normalize the input point cloud such that the longest edge of its axis-aligned bounding box has unit length. This point cloud is then embedded into a volumetric grid. For each grid cell we encode the local neighborhood (all points within a fixed radius of the cell center) via a small PointNet (proposed by Qi et al. [QSMG17]). Apart from using fewer parameters, we also aggregate the final encoding of the point clouds by computing the mean of the point features instead of the maximum as proposed in the original paper. Since we make use of a PointNet, we are able to handle input point clouds with a varying number of points.
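A minimal sketch of the per-cell encoding, assuming a toy shared MLP given as explicit weight matrices; it illustrates the mean aggregation and the resulting permutation invariance, not the actual layer sizes:

```python
import numpy as np

def encode_cell(points, weights, biases):
    """PointNet-style per-cell encoder sketch: a shared MLP is applied to every
    point in the cell's neighborhood, and the point features are aggregated by
    their mean (instead of the max used in the original PointNet).

    points:  (n, 3) local point coordinates
    weights: list of weight matrices of the shared per-point MLP
    biases:  list of matching bias vectors
    """
    h = points
    for W, b in zip(weights, biases):
        h = np.maximum(h @ W + b, 0.0)   # shared fully connected layer + ReLU, per point
    return h.mean(axis=0)                # mean aggregation -> one feature vector per cell
```

Because the aggregation is a symmetric function, the cell feature does not depend on the point ordering or (directly) on the point count.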
This results in a grid where each cell stores a fixed-dimensional feature vector. This grid can then be passed through a 3D CNN, which consists of a series of convolution, batch normalization, and max-pooling layers. The output is the latent encoding of the shape. For more details on the architecture see Appendix [.
We define the distance of a point p to a point cloud Q as

\[ d(p, Q) = \min_{q \in Q} \lVert p - q \rVert_2^2 . \]

In order to compare the input point cloud P to the reconstructed point cloud \hat{P}, we measure the difference with the commonly used Chamfer distance as proposed for point clouds in [FSG17]:

\[ L_{CD}(P, \hat{P}) = \frac{1}{|P|} \sum_{p \in P} d(p, \hat{P}) + \frac{1}{|\hat{P}|} \sum_{\hat{p} \in \hat{P}} d(\hat{p}, P). \]
This gives us a gradient for every point in \hat{P}. However, we found that additionally formulating a sharper version of the Chamfer distance benefits training (see Section [). With the formulation

\[ L_{sharp}(P, \hat{P}) = \frac{1}{|P|} \sum_{p \in P} d(p, \hat{P})^{\alpha} + \frac{1}{|\hat{P}|} \sum_{\hat{p} \in \hat{P}} d(\hat{p}, P)^{\alpha}, \]

where d(p, Q) denotes the squared distance from p to its nearest neighbor in Q, the gradients of points that incur a larger error are weighted more heavily with increasing \alpha. For high \alpha this measure can be seen as similar to the Hausdorff distance. In our experiments we used a fixed \alpha > 1.
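Both distances can be written compactly for small point sets (a NumPy sketch; the dense pairwise-distance matrix does not scale to large clouds, and the exponent alpha is an assumed placeholder for the value used in the paper):

```python
import numpy as np

def chamfer(P, Q):
    """Symmetric Chamfer distance between point sets P (n,3) and Q (m,3),
    using squared Euclidean nearest-neighbor distances."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)   # (n, m) pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def sharp_chamfer(P, Q, alpha=4):
    """Sharper variant: raising the per-point distances to a power alpha > 1
    weights large errors more heavily; for large alpha the largest error
    dominates, similar to the Hausdorff distance."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return (d2.min(axis=1) ** alpha).mean() + (d2.min(axis=0) ** alpha).mean()
```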
Since \hat{P} is generated by offsetting the generated per-cell point clouds \hat{P}_c by the corresponding cell centers m_c, we want to enforce a notion of locality (i.e. each cell only contributes to the part of \hat{P} in its vicinity). Thus we add a loss

\[ L_{loc} = \sum_{c} \frac{1}{|\hat{P}_c|} \sum_{p \in \hat{P}_c} \max\!\big(0,\; \lVert p - m_c \rVert - r\big). \]

This penalizes any generated points that are too far away from their cell centers. We choose the radius r to allow points to be distributed within their generating cell and its direct neighbors.
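A sketch of such a locality term, assuming an infinity-norm distance and a radius of 1.5 cell widths (both are assumptions; the paper's exact norm and radius are not restated here):

```python
import numpy as np

def locality_loss(points, centers, radius=None, cell_size=1.0 / 32):
    """Hinge penalty on generated points that stray too far from the center of
    the grid cell that generated them.

    points:  (n, 3) generated points
    centers: (n, 3) center of the generating cell for each point
    """
    if radius is None:
        radius = 1.5 * cell_size               # own cell plus direct neighbors (assumed)
    dist = np.abs(points - centers).max(axis=1)  # L_inf distance to the cell center
    return np.maximum(dist - radius, 0.0).mean()
```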
We cannot train the density estimates and filled-cell predictions using only the point-wise differences shown above, because these differences do not provide a gradient w.r.t. the number of points per cell. For this reason we generate ground-truth densities and label the filled cells based on the input. Training the MLP that predicts the density \rho_c and the probability o_c that a cell is filled is done by using the mean squared error

\[ L_{density} = \frac{1}{K} \sum_{c} \big( \rho_c - \rho_c^{gt} \big)^2 \]

and the binary cross-entropy loss

\[ L_{occ} = -\frac{1}{K} \sum_{c} \Big( o_c^{gt} \log o_c + \big(1 - o_c^{gt}\big) \log\!\big(1 - o_c\big) \Big), \]

respectively. Here \rho^{gt} and o^{gt} denote the ground truth and K the number of cells. Thus our loss during training is

\[ L = \lambda_1 L_{CD} + \lambda_2 L_{sharp} + \lambda_3 L_{loc} + \lambda_4 L_{density} + \lambda_5 L_{occ}, \]

where L_{CD}, L_{sharp}, and L_{loc} denote the Chamfer, sharpened Chamfer, and locality losses from above. In all our experiments we used fixed values for the weights \lambda_1 to \lambda_5.
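The two per-cell supervision terms can be sketched as follows (the logits, ground-truth arrays, and the clamping epsilon are illustrative; the epsilon is only for numerical stability):

```python
import numpy as np

def density_and_occupancy_losses(d_pred, d_gt, o_logit, o_gt):
    """Per-cell supervision sketch: mean squared error on the predicted point
    densities and binary cross-entropy on the filled/empty classification,
    both against ground truth derived from the input point cloud."""
    mse = ((d_pred - d_gt) ** 2).mean()
    p = 1.0 / (1.0 + np.exp(-o_logit))        # sigmoid -> probability the cell is filled
    eps = 1e-7                                # avoid log(0)
    bce = -(o_gt * np.log(p + eps) + (1 - o_gt) * np.log(1 - p + eps)).mean()
    return mse, bce
```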
We evaluate our decoder network both by showing the effectiveness of several design choices and by comparing our results with the current state of the art on the task of autoencoding 3D point clouds. All experiments with our proposed method were done on the ShapeNet dataset [CFG15], where we evaluated both our method and the methods proposed in [GFK18, LCHL18]. Additionally, we performed experiments using their respective settings and datasets. This is necessary for a thorough comparison, since prior work employs different datasets, data normalization techniques, and evaluation criteria. Furthermore, we can assume that their proposed network architectures were tuned to the respective datasets. Our networks were trained using AMSGrad [RKK18]. For evaluation on the testing set we used the network weights that performed best on the validation set. All other networks were trained using the hyperparameter settings suggested in the respective works.
For our experiments we made use of the official training, validation, and testing split of the ShapeNet Core dataset (v2), which consists of about 50k models in 55 different categories. We found that a high-quality sampling is important to achieve good results (see Table 1), as the loss is strongly affected by it. Minimizing the Chamfer distance on an uneven, sparse sampling does not necessarily mean that we achieve a good approximation of the underlying surface. A large distance from a reconstructed point to the closest target point can be caused either by a great distance to the underlying surface (which we want to penalize) or by a lack of samples in this particular part of the surface (which we do not want to penalize). Therefore, it is desirable that the sampling is as even as possible over the entire shape. To achieve such a sampling, we strongly oversampled the objects uniformly (with roughly 80k points) and then chose a subset (16k points) via farthest point sampling.
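Farthest point sampling itself is straightforward (a NumPy sketch of the standard greedy algorithm used to subsample the dense uniform oversampling):

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Select k points from a dense sampling so that the chosen subset covers
    the shape as evenly as possible: starting from a random point, repeatedly
    pick the point farthest from all points selected so far."""
    rng = np.random.default_rng(seed)
    n = len(points)
    chosen = [rng.integers(n)]
    # squared distance of every point to its nearest chosen point so far
    d = ((points - points[chosen[0]]) ** 2).sum(-1)
    for _ in range(k - 1):
        nxt = int(d.argmax())                 # farthest remaining point
        chosen.append(nxt)
        d = np.minimum(d, ((points - points[nxt]) ** 2).sum(-1))
    return points[np.array(chosen)]
```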
As our encoder sorts all points into a grid, we normalize the point clouds to fit the unit cube centered at the origin. No further data augmentation is applied. All metrics, however, are computed on unnormalized shapes to simplify future comparisons. Unless mentioned otherwise, all distances are reported between point clouds with 2500 points.
To motivate our design choices we performed an extensive ablation study, reporting the Chamfer distance obtained on the testing set for different changes in our input, architecture, or loss function (Table 1). To show the effect of an evenly distributed point cloud, we trained the network on a uniform random sampling (1) as used in [LCHL18] and evaluated on our high-quality point clouds. To motivate the use of AdaIN, we implemented a strong baseline in the form of a convolutional autoencoder. We used the same encoder as in our proposed network, but for the decoder we used a convolutional decoder without AdaIN (2) (i.e. the latent code is passed directly into the decoder and the affine transformation pathway is no longer necessary). To ensure a fair comparison we used a similar number of parameters.
While our proposed architecture enables up to nine layers of AdaIN (7), we found that this led to some overfitting on the training data. Therefore, we limit the affine feature transformations to the first three layers (8). All subsequent outputs of instance normalization layers are not scaled and translated. This architecture achieved the best result (marked in bold in Table 1).
To show the effectiveness of the additionally introduced losses, we trained networks without them and report the difference in the resulting Chamfer distance (3, 4). For further comparison, we trained a network in a fairly simple manner, using only the Chamfer distance as a loss and no AdaIN, on randomly sampled point clouds (5). Finally, we show that sampling the learned map from 2D to 3D at fixed, well-distributed positions (as done in [GFK18]) instead of randomly during inference further improves the results (6). Not using the cell classification loss has a minor negative impact on the results, in the order of the fourth decimal. To put these numbers into context, we compare a random sampling of the shape with the ground truth (9).
We compare against AtlasNet [GFK18] and SO-Net [LCHL18], both on our own dataset (Table 2) and on their respective datasets (Table 3). For AtlasNet we trained their best performing network (125 patches) on our dataset. SO-Net cannot output point clouds with 2500 points without changes to the suggested architecture. Instead, we compare against the two presented versions of the network: one generates 1280 points (Table 3) and one has an output size of 4608 points (Table 2). The numbers reported in their paper are from a network outputting 1280 points; consequently, we trained ours similarly (i.e. 1024 input points and 1280 output points). Furthermore, they use a slightly different definition of the Chamfer distance: they compute the Euclidean distance between closest points instead of its squared version. For a fair comparison on our dataset we report the Chamfer distance between a target of 2500 points and the entire point cloud (4608 points) as well as subsamplings (2500 points) of it.
Note that the computed distances are not comparable across datasets due to differences in normalization and evaluation methods. As can be seen in Tables 2 and 3 our method outperforms AtlasNet and SONet on our dataset as well as on the ones used by the respective authors. Qualitative results are shown in Figure 4. For these examples, our method is less prone to produce outliers and reconstructs the shape contours more faithfully.
To demonstrate the usefulness of our convolutional decoder we show results in three applications. Our hyperparameters and architecture were not tuned particularly for these demonstrations. We expect that with more carefully chosen settings, better results could be achieved.
For single view reconstruction (see Figure 5) we follow [CXG16] and use a subset of ShapeNet consisting of 13 classes. To be comparable, we use their rendered views as well as their sampling. Similar to [GWM18], we use a pretrained VGG-11 [SZ15] as the encoder. The rest of our network is unchanged from the autoencoder setting. We achieve competitive quantitative results, as shown in Table 4.
As our network architecture is agnostic to the number of input and output points, it is straightforward to use our model for the task of point cloud upsampling. We train our network on our training set to take between 50 and 500 input points but output 5000. Although several methods use neural networks for point cloud upsampling [YWH19, YLF18], their setting is different: they regard local patches of the geometry and compute a denser sampling there. In contrast, we regard the shape as a whole. As a result, these methods require the input to be sampled densely enough that local patches convey geometric meaning, whereas for our method it is sufficient that the general shape is conveyed. We demonstrate this on severely undersampled point clouds of the test set with only 50 points as input (Figure 6). Note that our method robustly outputs point clouds of size 16000 even though the network was trained to output 5000 points.
Our decoder can not only be used to reconstruct point clouds for a given input, but is also able to generate new shapes. A commonly used generative model is the variational autoencoder (VAE) proposed by Kingma and Welling [KW14]. We implemented a conditional VAE version of our network with only minor changes to the original autoencoder. Conditioning on different classes is done by passing the category as a one-hot encoded vector into an MLP, which generates the affine transformation parameters (see Figure 2). The latent vector is sampled from a multivariate Gaussian whose parameters are predicted by the encoder. This allows us to sample the latent space in order to generate shapes for a specified category, as shown in Figure 7.
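The sampling step of such a conditional VAE can be sketched via the reparameterization trick (the concatenation of the one-hot class vector is illustrative; in the paper the class encoding is fed through an MLP that produces the affine transformation parameters):

```python
import numpy as np

def sample_latent(mu, log_var, class_id, n_classes, rng=None):
    """Draw a latent code from the Gaussian predicted by the encoder and append
    a one-hot class encoding for the conditional variant (a sketch)."""
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps      # reparameterization trick
    onehot = np.eye(n_classes)[class_id]      # category conditioning
    return np.concatenate([z, onehot])
```

Sampling eps rather than z directly keeps the draw differentiable w.r.t. the predicted mean and (log-)variance, which is what makes VAE training possible.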
In this work we have introduced a convolutional decoder that can generate high-quality point clouds of arbitrary size. Our method is able to achieve state-of-the-art results on autoencoding tasks by making use of the benefits offered by AdaIN, a careful consideration of even sampling, as well as several additions to the Chamfer distance as losses. We outline several possible applications of our method in the fields of single view reconstruction, point cloud upsampling, and synthesis.
Our architecture inherits some of the common limitations of voxel-based representations: our method is not invariant to rotations of the input and could incur a larger memory cost at higher grid resolutions. However, we show that with a relatively low grid resolution we are able to generate results of a high quality. Furthermore, we approximate the geometry in each filled grid cell as a surface patch, which might be a limitation for locally more complex geometries.
Nevertheless, we are convinced that our method will be useful in future research on 3D shape synthesis. One direction is the use of a generator similar to our decoder in the setting of generative adversarial networks (GANs) as originally proposed by Goodfellow et al. [GPAM14]. Another interesting research direction is more detailed shape modification enabled by affine feature transformations at varying levels of detail.
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no. 340884, as well as the Deutsche Forschungsgemeinschaft DFG – 392037563.
Our encoder consists of a small PointNet and a 3D CNN. The PointNet is constructed as FC8 → FC16 → FC32 → FC32, where FCn is a fully connected layer (in this case without bias) with output dimensionality n. After every fully connected layer we apply batch normalization as proposed by Ioffe and Szegedy [IS15]. We also apply the exponential linear unit (ELU) as proposed by Clevert et al. [CUH16] as an activation function after every batch normalization layer except for the last one. To construct the final feature vector for each cell we compute the mean feature instead of taking the maximum.
The 3D CNN is constructed as C64 → C64 → C64 → MP → C128 → C128 → MP → C256 → C256 → MP → C512 → C512 → MP → C512 → C1024, where Cn is a 3D convolution layer with zero-padding of 1, stride of 1, and output feature dimensionality n. For C1024 we use no padding and a larger kernel size in order to reduce the output to a 1024-dimensional vector. We do not use bias for our convolution operations. After every convolution layer we apply batch normalization and ELU. MP refers to a max-pooling layer with stride 1.
For our decoder we use a fully connected layer with bias to map the latent vector to the AdaIN scaling and translation coefficients. The convolutional decoder is constructed as P → C512 → U → C512 → C256 → U → C256 → C128 → U → C128 → C64 → U → C64 → C62, where P refers to the learnable constant parameter block, Cn to a 3D convolution layer with output feature dimensionality n, stride of 1, and zero-padding of 1, and U to an upsampling layer. We do not use bias for our convolution operations. After every convolution and after P we apply dropout as proposed by Srivastava et al. [SHK14] with a probability of 0.2. AdaIN is applied after every dropout layer and after P, with the scaling and translation parameters provided by the mapped latent vector. For every convolution layer we apply ELU after AdaIN.
Our point cloud generation MLP is structured as FC64 → FC64 → FC32 → FC32 → FC16 → FC16 → FC8 → FC3. We apply ELU after every FC layer except for the last one.
The MLP that estimates the density and classifies whether a grid cell contains points is constructed as FC16 → FC8 → FC4 → FC2. After every fully connected layer except the last we apply batch normalization and ELU.
 [ADMG18] Achlioptas P., Diamanti O., Mitliagkas I., Guibas L. J.: Learning representations and generative models for 3d point clouds. International Conference on Machine Learning (2018).
 [AML18] Atzmon M., Maron H., Lipman Y.: Point convolutional neural networks by extension operators. ACM Transactions on Graphics 37 (03 2018).
 [CFG15] Chang A. X., Funkhouser T., Guibas L., Hanrahan P., Huang Q., Li Z., Savarese S., Savva M., Song S., Su H., et al.: ShapeNet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012 (2015).
 [CUH16] Clevert D.-A., Unterthiner T., Hochreiter S.: Fast and accurate deep network learning by exponential linear units (ELUs). International Conference on Learning Representations (2016).
 [CXG16] Choy C. B., Xu D., Gwak J., Chen K., Savarese S.: 3D-R2N2: A unified approach for single and multi-view 3d object reconstruction. European Conference on Computer Vision (2016), 628–644.
 [CZ19] Chen Z., Zhang H.: Learning implicit fields for generative shape modeling. IEEE Conf. on Computer Vision and Pattern Recognition (2019).
 [DPS18] Dumoulin V., Perez E., Schucher N., Strub F., Vries H. d., Courville A., Bengio Y.: Feature-wise transformations. Distill 3, 7 (2018), e11.
 [DSK17] Dumoulin V., Shlens J., Kudlur M.: A learned representation for artistic style. International Conference on Learning Representations (2017).
 [FELWM18] Fey M., Eric Lenssen J., Weichert F., Müller H.: SplineCNN: Fast geometric deep learning with continuous B-spline kernels. IEEE Conf. on Computer Vision and Pattern Recognition (2018), 869–877.
 [FSG17] Fan H., Su H., Guibas L. J.: A point set generation network for 3d object reconstruction from a single image. IEEE Conf. on Computer Vision and Pattern Recognition (2017), 2463–2471.
 [GFK18] Groueix T., Fisher M., Kim V. G., Russell B., Aubry M.: AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation. IEEE Conf. on Computer Vision and Pattern Recognition (2018).
 [GLK17] Ghiasi G., Lee H., Kudlur M., Dumoulin V., Shlens J.: Exploring the structure of a real-time, arbitrary neural artistic stylization network. British Machine Vision Conference (2017).
 [GPAM14] Goodfellow I., PougetAbadie J., Mirza M., Xu B., WardeFarley D., Ozair S., Courville A., Bengio Y.: Generative adversarial nets. Advances in Neural Information Processing Systems (2014), 2672–2680.
 [GWM18] Gadelha M., Wang R., Maji S.: Multiresolution tree networks for 3d point cloud processing. European Conference on Computer Vision (2018).
 [HB17] Huang X., Belongie S.: Arbitrary style transfer in realtime with adaptive instance normalization. IEEE International Conference on Computer Vision (2017), 1501–1510.
 [HLH17] Han X., Li Z., Huang H., Kalogerakis E., Yu Y.: Highresolution shape completion using deep neural networks for global structure and local geometry inference. IEEE Conf. on Computer Vision and Pattern Recognition (2017), 85–93.
 [IS15] Ioffe S., Szegedy C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. International Conference on Machine Learning 37 (2015), 448–456.
 [KLA19] Karras T., Laine S., Aila T.: A style-based generator architecture for generative adversarial networks. IEEE Conf. on Computer Vision and Pattern Recognition (2019).
 [KW14] Kingma D. P., Welling M.: Auto-encoding variational Bayes. International Conference on Learning Representations (2014).
 [LBS18] Li Y., Bu R., Sun M., Wu W., Di X., Chen B.: PointCNN: Convolution on X-transformed points. Advances in Neural Information Processing Systems (2018), 828–838.
 [LCHL18] Li J., Chen B. M., Hee Lee G.: SO-Net: Self-organizing network for point cloud analysis. IEEE Conf. on Computer Vision and Pattern Recognition (2018), 9397–9406.
 [LKL18] Lin C.-H., Kong C., Lucey S.: Learning efficient point cloud generation for dense 3D object reconstruction. Thirty-Second AAAI Conference on Artificial Intelligence (2018).
 [Llo82] Lloyd S.: Least squares quantization in PCM. IEEE Transactions on Information Theory 28, 2 (1982), 129–137.
 [LXC17] Li J., Xu K., Chaudhuri S., Yumer E., Zhang H., Guibas L.: GRASS: Generative recursive autoencoders for shape structures. ACM Transactions on Graphics 36, 4 (2017), 52.
 [MON19] Mescheder L., Oechsle M., Niemeyer M., Nowozin S., Geiger A.: Occupancy networks: Learning 3D reconstruction in function space. IEEE Conf. on Computer Vision and Pattern Recognition (2019).
 [NW17] Nash C., Williams C. K.: The shape variational autoencoder: A deep generative model of part-segmented 3D objects. Computer Graphics Forum 36, 5 (2017), 1–12.
 [PFS19] Park J. J., Florence P., Straub J., Newcombe R., Lovegrove S.: DeepSDF: Learning continuous signed distance functions for shape representation. IEEE Conf. on Computer Vision and Pattern Recognition (2019).
 [QSMG17] Qi C. R., Su H., Mo K., Guibas L. J.: PointNet: Deep learning on point sets for 3D classification and segmentation. IEEE Conf. on Computer Vision and Pattern Recognition 1, 2 (2017), 4.
 [QYSG17] Qi C. R., Yi L., Su H., Guibas L. J.: PointNet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems (2017), 5099–5108.
 [RKK18] Reddi S. J., Kale S., Kumar S.: On the convergence of Adam and beyond. International Conference on Learning Representations (2018).
 [RWS18] Rethage D., Wald J., Sturm J., Navab N., Tombari F.: Fully-convolutional point networks for large-scale point clouds. European Conference on Computer Vision (2018).
 [SHK14] Srivastava N., Hinton G., Krizhevsky A., Sutskever I., Salakhutdinov R.: Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15 (2014), 1929–1958.
 [SUHR17] Sinha A., Unmesh A., Huang Q., Ramani K.: SurfNet: Generating 3D shape surfaces using deep residual networks. IEEE Conf. on Computer Vision and Pattern Recognition (2017), 6040–6049.
 [SZ15] Simonyan K., Zisserman A.: Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations (2015).
 [UVL16] Ulyanov D., Vedaldi A., Lempitsky V.: Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022 (2016).
 [WSL19] Wang Y., Sun Y., Liu Z., Sarma S. E., Bronstein M. M., Solomon J. M.: Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics (2019).
 [WSLT18] Wang P.-S., Sun C.-Y., Liu Y., Tong X.: Adaptive O-CNN: A Patch-based Deep Representation of 3D Shapes. ACM Transactions on Graphics 37, 6 (2018).
 [WZX16] Wu J., Zhang C., Xue T., Freeman B., Tenenbaum J.: Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. Advances in Neural Information Processing Systems (2016), 82–90.
 [YHCOZ18] Yin K., Huang H., Cohen-Or D., Zhang H.: P2P-Net: Bidirectional point displacement net for shape transform. ACM Transactions on Graphics 37, 4 (2018), 152:1–152:13.
 [YLF18] Yu L., Li X., Fu C.-W., Cohen-Or D., Heng P.-A.: PU-Net: Point cloud upsampling network. IEEE Conf. on Computer Vision and Pattern Recognition (2018), 2790–2799.
 [YWH19] Yifan W., Wu S., Huang H., Cohen-Or D., Sorkine-Hornung O.: Patch-based progressive 3D point set upsampling. IEEE Conf. on Computer Vision and Pattern Recognition (2019).