Variational Autoencoders for Deforming 3D Mesh Models

Qingyang Tan1,2, Lin Gao1, Yu-Kun Lai3, Shihong Xia1
1Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences
2School of Computer and Control Engineering, University of Chinese Academy of Sciences
3School of Computer Science & Informatics, Cardiff University
tanqingyang14@mails.ucas.ac.cn, {gaolin, xsh}@ict.ac.cn, LaiY4@cardiff.ac.uk
Abstract

3D geometric content is becoming increasingly popular. In this paper, we study the problem of analyzing deforming 3D meshes using deep neural networks. Deforming 3D meshes are flexible for representing 3D animation sequences as well as collections of objects of the same category, allowing diverse shapes with large-scale non-linear deformations. We propose a novel framework, which we call mesh variational autoencoders (mesh VAE), to explore the probabilistic latent space of 3D surfaces. The framework is easy to train, and requires very few training examples. We also propose an extended model which allows flexibly adjusting the significance of different latent variables by altering the prior distribution. Extensive experiments demonstrate that our general framework is able to learn a reasonable representation for a collection of deformable shapes, and produces competitive results for a variety of applications, including shape generation, shape interpolation, shape space embedding and shape exploration, outperforming state-of-the-art methods.

1 Introduction

With the development of performance capture techniques, 3D mesh sequences of deforming objects (such as human bodies) have become increasingly popular and are widely used in computer animation. Deforming meshes can also be used to represent a collection of objects with different shapes and poses, where large-scale non-linear deformations are common. Analyzing deforming 3D meshes and synthesizing plausible new 3D models remain challenging problems. From the analysis perspective, complex deformations make it difficult to embed such 3D meshes into a meaningful space using existing methods. From the synthesis perspective, acquiring high quality 3D models is still time consuming, since multiple scans are usually required to address unavoidable occlusion; effective generation of plausible new models is therefore in high demand.

In this paper, we propose a novel framework called mesh variational autoencoders (mesh VAE), which leverages the power of neural networks to explore the latent space behind deforming 3D shapes, and is able to generate new models not present in the original dataset. Our mesh VAE model is trained using a collection of 3D shapes with the same connectivity. Many existing deformable object collections satisfy this. They are not restricted to a single deformable object; examples include the MPI FAUST dataset [2], which includes human bodies of different shapes in different poses. In general, meshes with the same connectivity can be obtained using consistent remeshing. Although convolutional neural networks (CNNs) have been widely used for image analysis and synthesis, applying them to 3D meshes is still quite limited. Previous work focuses on generalizing CNNs from 2D to 3D while preserving image-like regular grid connectivity, including 3D convolution over voxels (e.g. [44]), and 2D convolution over geometry images (e.g. [36]). However, the voxel representation is inefficient and, given practical resolution limits, can only represent rough 3D models without details. The parameterization process used to generate geometry images involves unavoidable distortions and may destroy useful structural information.

Our work aims to produce a generative model capable of analyzing model collections and synthesizing new shapes. To achieve this, instead of using representations with image-like connectivity, we propose to use a state-of-the-art surface representation called RIMD (Rotation Invariant Mesh Difference) [9] to effectively represent deformations, along with a variational autoencoder [19]. To cope with meshes of arbitrary connectivity, we propose to use a fully-connected network, along with a simple reconstruction loss based on MSE (mean square error). As we will show later, the large number of coefficients in this network can be efficiently trained, even with a small number of training examples. We also propose a new extended model where an additional adjustable parameter is introduced to control the variations of different latent variables in the prior distribution. This provides the flexibility of enforcing certain dimensions in the latent space to represent the most important differences in the original dataset. As we will show later, by using an effective feature representation and a suitable network architecture, our general framework is able to produce competitive results for various applications, including shape generation, shape interpolation, shape embedding and shape exploration, outperforming state-of-the-art methods, where traditionally dedicated methods are needed for different applications.

2 Related Work

3D Shape Representation and Interpolation. To effectively represent a collection of shapes with the same connectivity, a naive solution is to take the vertex positions as the representation. Such representations however are not translation or rotation invariant. Among various representations, the RIMD representation [9] is translation and rotation invariant, and suitable for data-driven shape analysis. Therefore, we use it to represent shapes in our framework.

A natural application for shape representations is to interpolate or blend shapes. Existing methods can be largely categorized into geometry-based methods (e.g. [14]) and data-driven methods (e.g. [8]). The latter exploits latent knowledge of example shapes, and so can produce more convincing interpolation results when such examples are available. Our method is a generic framework and as we will show later, it produces comparable or even better results than state-of-the-art data-driven methods. Rustamov et al. [32] and Corman et al. [7] propose map-based methods to describe distortions between models, which can be used with PCA (Principal Component Analysis) to produce linear embedding, while our embedding application is non-linear.

Deep Learning for 3D Shape Analysis. In recent years, effort has been made to generalize image-based CNN models to analyze 3D shapes. Su et al. [38] and Qi et al. [30] use multi-view CNN models for 3D object classification. Li et al. [22] produce joint embeddings of shapes and images, which are useful for shape-based image retrieval. Shi et al. [34] propose a method that converts 3D shapes into a panoramic view, and propose a variant of CNN to learn the representation from such views. Maturana and Scherer [27] combine the volumetric occupancy grid representation with a 3D CNN to recognize 3D objects in real time. Wu et al. [43] interpret 3D object structure and 2D keypoint heatmaps from 2D images. Some other works generalize CNNs from the Euclidean domain to non-Euclidean domain [3, 26, 5, 4, 25], which is useful for 3D shape analysis such as establishing correspondences. Maron et al. [24] define a natural convolution operator on surfaces for CNNs by parameterizing the surface to a planar flat-torus. Our work has a rather different focus for analyzing and synthesizing 3D shapes with large deformations, although it may benefit from [24] to generate consistent meshes. Tan et al. [39] propose a specific method to analyze sparse deformation patterns using CNNs, while our work provides a more generic framework for various applications.

3D Model Synthesis. Traditional methods for 3D model synthesis use probabilistic inference. Xue et al. [45] recover 3D geometry from 2D line drawings using examples and a maximum-a-posteriori (MAP) probability optimization. Huang et al. [13] use a probabilistic model to generate component-based 3D shapes. Such methods are generally suitable for specific types of 3D shapes. Recent works consider exploiting the capability of deep learning for 3D model synthesis. For this purpose, the voxel representation, which resembles 2D images, is widely used. Wu et al. [44] propose a generative model based on a deep belief network trained on a large, unannotated database of voxelized 3D shapes. Girdhar et al. [10] jointly train an encoder for 2D images, and a decoder for voxelized 3D shapes, allowing 3D reconstruction from a 2D image. Yan et al. [46] generate 3D objects with deep networks using 2D images during training with a 3D to 2D projection layer. Rezende et al. [31] learn strong deep generative models for 3D structures, which are able to recover 3D structures from 2D images via probabilistic inference. Choy et al. [6] develop a recurrent neural network which takes in images of an object instance from arbitrary viewpoints and outputs a reconstruction of the object in the form of 3D occupancy grids. Sharma et al. [33] propose a convolutional volumetric autoencoder without object labels, and learn the representation from noisy data by estimating the voxel occupancy grids, which is useful for shape completion and denoising. Such methods however suffer from the high complexity of the voxel representation and can only practically synthesize coarse 3D shapes lacking details.

Sinha et al. [35] propose to learn CNN models using geometry images [12], which is then extended to synthesize 3D models [36]. Geometry images allow surface details to be well preserved. However, the parameterization used for generating geometry images involves unavoidable distortions and is not unique. This is more challenging for shapes with complex topology (e.g. high-genus models). Tulsiani et al. [40] develop a learning framework to abstract complex shapes and assemble objects using 3D volumetric primitives. Li et al. [21] propose a neural network architecture for encoding and synthesizing 3D shapes based on their structures. Nash et al. [28] use a variational encoder to synthesize 3D objects segmented into parts. Both [21] and [28] are designed to synthesize man-made objects and require part-based segmentation as input, and cannot cope with the unsegmented or deformed shapes addressed in this paper. Such methods can produce shapes with complex structures, but the level of details is restricted to the components and primitives used. We propose a general framework that uses a rotation invariant mesh representation as features, along with a variational autoencoder, to analyze shape collections and synthesize new models. Our method can generate novel plausible shapes with rich details, and supports a variety of other applications.

3 Feature Representation

In this section, we briefly summarize the feature used for training neural networks, as well as for reconstructing and generating new models. We use the rotation-invariant mesh difference (RIMD) feature [9]. We assume that we have M models, with vertices that are in one-to-one correspondence. Generally, the first model is considered as the reference model and the other models are deformed models. We show in Sec. 5.1 that the choice of base mesh does not affect the results. The deformation gradient T_i at the i-th vertex v_i can be obtained by minimizing the energy function:

E(T_i) = \sum_{j \in N_i} c_{ij} \| e'_{ij} - T_i e_{ij} \|^2        (1)

where N_i is the set of one-ring neighbors of vertex v_i, e_{ij} = p_i - p_j, and e'_{ij} = p'_i - p'_j. Here p_i and p'_i are the positions of v_i on the reference and deformed models, respectively. c_{ij} is the cotangent weight c_{ij} = \cot \alpha_{ij} + \cot \beta_{ij}, where \alpha_{ij} and \beta_{ij} are the angles opposite to the edge connecting v_i and v_j. As shown in [20, 9], the cotangent weight is helpful to avoid discretization bias from underlying smooth surfaces to meshes. The deformation gradient T_i can be decomposed into a rotation part R_i and a scaling/shear part S_i as T_i = R_i S_i, where the scaling/shear part S_i is rotation invariant [17]. The rotation difference dR_{ij} from v_i to an adjacent vertex v_j cancels out global rotation and is thus also rotation invariant: dR_{ij} = R_i^T R_j. The RIMD representation f is obtained by collecting the logarithm of the rotation difference matrix dR_{ij} of each edge (i, j) and the scaling/shear matrix S_i of each vertex v_i:

f = \{ \log dR_{ij} ; S_i \}, \quad \forall i, \; \forall j \in N_i        (2)

The use of the matrix logarithm helps make the feature linearly combinable. Given a RIMD feature, which can be the output of our mesh VAE, the mesh model can be reconstructed efficiently by optimizing the energy in Eqn. 1. See [9] for details. As shown in Table 2 of [9], the mean reconstruction error between the reconstructed and ground truth shapes is extremely small, with no visual difference, which shows the high representation power of the RIMD feature.
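To make the construction above concrete, the following sketch (a NumPy illustration, not the authors' implementation; the data layout of neighbors and cotangent weights is a hypothetical choice) computes a per-vertex deformation gradient via the closed-form minimizer of Eqn. 1, splits it into rotation and scaling/shear parts by polar decomposition, and takes the matrix logarithm of a rotation via the axis-angle formula:

```python
import numpy as np

def deformation_gradient(ref_pts, def_pts, neighbors, weights, i):
    # Closed-form minimizer of Eqn. 1:
    # T_i = (sum_j c_ij e'_ij e_ij^T) (sum_j c_ij e_ij e_ij^T)^{-1}
    A = np.zeros((3, 3))
    B = np.zeros((3, 3))
    for j, c in zip(neighbors[i], weights[i]):
        e = ref_pts[i] - ref_pts[j]       # edge on the reference model
        ed = def_pts[i] - def_pts[j]      # edge on the deformed model
        A += c * np.outer(ed, e)
        B += c * np.outer(e, e)
    return A @ np.linalg.inv(B)

def polar_decompose(T):
    # T = R S with R a proper rotation and S symmetric (scaling/shear).
    U, s, Vt = np.linalg.svd(T)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])            # fix an improper rotation if any
    R = U @ D @ Vt
    S = Vt.T @ (D @ np.diag(s)) @ Vt
    return R, S

def log_rotation(R):
    # Matrix logarithm of a 3x3 rotation via the axis-angle formula;
    # the result is anti-symmetric with 3 independent elements.
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-8:
        return np.zeros((3, 3))
    return theta / (2.0 * np.sin(theta)) * (R - R.T)
```

The RIMD feature of a model is then the collection of log_rotation(R_i.T @ R_j) over edges plus the 6 independent entries of each S_i.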

4 Mesh VAE

In this section, we introduce our mesh VAE. We first discuss how to preprocess the feature representation to make it more suitable for neural networks, and then introduce the basic mesh VAE architecture, as well as extended models taking conditions into account for improved control of the synthesized shapes and with improved embedding capability. Finally we summarize a variety of applications.

4.1 Feature Preprocessing

In our architecture, we propose to use the hyperbolic tangent (tanh) as the activation function of the probabilistic decoder’s output layer. The range of tanh is (-1, 1), and it suffers from the vanishing gradient problem during training when the output is near -1 or 1. Although an alternative such as a linear output does not have the vanishing gradient problem, its unbounded range is problematic: as a generative model, we expect the network output to lie in a reasonable range, and an unbounded output would make the model difficult to train. The tanh activation function is also applied in recent generative networks [16, 21]. To avoid the vanishing gradient problem of tanh, we use uniform normalization to preprocess the feature to the range [-a, a], where a is close to, but not equal to, 1. The preprocessing is a mapping as follows:

\tilde{f}_{m,k} = \frac{2a \, (f_{m,k} - \widetilde{\min}_k)}{\widetilde{\max}_k - \widetilde{\min}_k} - a        (3)

where f_{m,k} is the k-th component of model m's RIMD feature and \tilde{f}_{m,k} is the preprocessed feature. \widetilde{\min}_k and \widetilde{\max}_k are bounds slightly extended beyond the minimum and maximum of the k-th component over all models in the dataset; if the minimum and maximum coincide for some dimension, we replace them with perturbed values so that the mapping remains well-defined. The use of \widetilde{\min}_k and \widetilde{\max}_k also allows the network to generate reasonable models with features outside the range of the original dataset. In our experiments, we use the same choice of a and bound extension across all datasets.
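A minimal sketch of this preprocessing and its inverse (applied to decoder outputs); the default values of `a` and the extension `margin` are hypothetical, not the paper's settings:

```python
import numpy as np

def fit_normalizer(features, a=0.9, margin=1e-6):
    # features: (M, K) array of RIMD features for the whole collection.
    # Per-dimension bounds, slightly extended beyond the observed range.
    lo = features.min(axis=0) - margin
    hi = features.max(axis=0) + margin
    return lo, hi, a

def preprocess(f, lo, hi, a):
    # Linearly map each dimension into [-a, a] (Eqn. 3).
    return 2.0 * a * (f - lo) / (hi - lo) - a

def postprocess(f_tilde, lo, hi, a):
    # Inverse mapping, used to recover RIMD features from decoder output.
    return (f_tilde + a) * (hi - lo) / (2.0 * a) + lo
```

The extension margin also serves to keep the denominator nonzero when a dimension is constant over the dataset.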

Figure 1: Pipeline of mesh VAE. For the model in the figure, each \log dR_{ij} of a directed edge has 3 independent elements (it is anti-symmetric), and S_i of each vertex has 6 independent elements (it is symmetric), which together determine the total feature dimension K.

4.2 Mesh VAE Architecture

Following [19], our mesh VAE aims to find a probabilistic encoder and a probabilistic decoder. The encoder aims to map the posterior distribution from a datapoint to the latent vector z, and the decoder produces a plausible corresponding datapoint from a latent vector z. In our mesh VAE, the datapoint is the preprocessed mesh feature \tilde{f}. Denote by K the dimension of \tilde{f}; for the detailed mesh models used in this paper, K is large. By default, we set the dimension of the latent space z to 128, which can be adjusted proportionally with varying K. In order to train a complicated model to fit the large dimension of mesh features, we replace the reconstruction loss from a probabilistic formulation with a simpler mean square error (MSE). The total loss function for the model is defined as

L = \frac{\alpha}{2M} \sum_{m=1}^{M} \sum_{k=1}^{K} (\tilde{f}_{m,k} - \hat{f}_{m,k})^2 + D_{KL}(q(z \mid \tilde{f}) \,\|\, p(z))        (4)

where \tilde{f}_{m,k} is the k-th dimension of model m's preprocessed RIMD feature, \hat{f}_{m,k} is the k-th dimension of model m's output of the mesh VAE framework, M is the number of models in the dataset, \alpha is a parameter to tune the priority between the reconstruction loss and the latent loss, z is the latent vector, p(z) is the prior probability, q(z \mid \tilde{f}) is the posterior probability, and D_{KL} is the KL divergence. See [19] for more details. The pipeline for our mesh VAE is summarized in Fig. 1.

Network structure. Since the RIMD features from different datasets have different neighboring structures, the spatial relationship inside the feature is not regular, so we use a fully-connected neural network for both the encoder and the decoder, expecting the network to learn suitable relationships, rather than encoding such neighborhood structure in the network architecture explicitly. All the internal layers use batch normalization [15] and Leaky ReLU [23] as activation layers. The output of the encoder involves a mean vector and a deviation vector. The mean vector does not have an activation function, and the deviation vector uses sigmoid as the activation function, which is then multiplied by an upper bound value of the expected deviation. The output of the decoder uses as the activation function, as we have discussed.

Training details. For most tasks in this paper, we set \alpha to a fixed value optimized on unseen data to avoid overfitting. We set the prior probability over latent variables to be p(z) = N(0, I) (a Gaussian distribution with zero mean and unit variance). We set the upper bound of the output deviation such that it can cover 1, for consistency with the prior distribution. We use a small fixed learning rate and the ADAM algorithm [18] to train the model.
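The loss of Eqn. 4 and the reparameterized sampling of z can be sketched as follows (a NumPy illustration under the stated prior N(0, I); batching and optimizer details are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_standard_normal(mu, sigma):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dims.
    return 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))

def vae_loss(f_tilde, f_hat, mu, sigma, alpha):
    # Eqn. 4: alpha-weighted MSE reconstruction term plus latent KL term.
    M = f_tilde.shape[0]
    mse = np.sum((f_tilde - f_hat) ** 2) / (2.0 * M)
    return alpha * mse + kl_standard_normal(mu, sigma)

def sample_latent(mu, sigma):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
    # so gradients can flow through the encoder outputs.
    return mu + sigma * rng.standard_normal(mu.shape)
```

Perfect reconstruction with a posterior matching the prior gives zero loss, which is a quick sanity check on the implementation.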

4.3 Conditional Mesh VAE

When synthesizing new shapes, it is often desirable to control the types of shapes to be generated, especially when the training dataset contains a diverse range of shapes, e.g. the dataset from [29] which contains different body shapes and motion sequences. To achieve this, we extend our model to condition on labels in the training dataset, following [37]. Specifically, a condition c is incorporated in both the encoder and the decoder as additional input, and the loss function is changed to

L_c = \frac{\alpha}{2M} \sum_{m=1}^{M} \sum_{k=1}^{K} (\tilde{f}_{m,k} - \hat{f}^{\,c}_{m,k})^2 + D_{KL}(q(z \mid \tilde{f}, c) \,\|\, p(z \mid c))        (5)

where \hat{f}^{\,c}_{m,k} is the output of the conditional mesh VAE framework, and p(z \mid c) and q(z \mid \tilde{f}, c) are the conditional prior and posterior probabilities, respectively. More experimental details are provided in Sec. 5.
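One common way to incorporate the condition (a sketch; the one-hot encoding is an illustrative choice, not specified by the text) is to append a label vector to the encoder input and, likewise, to the latent vector before the decoder:

```python
import numpy as np

def one_hot(label, num_classes):
    # Encode a discrete condition (e.g. a motion sequence label).
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

def with_condition(vec, label, num_classes):
    # Append the condition to the encoder input f~ (and, in the same way,
    # to z before the decoder), so both networks see the label.
    return np.concatenate([vec, one_hot(label, num_classes)])
```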

Figure 2: MSE reconstruction loss over training iterations.
Figure 3: Randomly generated new shapes using our framework, along with their nearest neighbors in the original dataset shown below.
Figure 4: Randomly generated new shapes using mesh VAE and conditional mesh VAE.

4.4 Extended Model with Improved Low-Dimensional Embedding

The latent space z provides an embedding that facilitates various applications. However, to ensure reconstruction quality, the dimension of z is typically high (over 100), thus not suitable for low-dimensional embedding. To address this, we propose an extended model that modifies the prior distribution of the latent variables to p(z) = N(0, \mathrm{diag}(\lambda^2)), where \lambda is a tunable vector representing the deviations of the latent space, and \mathrm{diag}(\cdot) is the diagonal matrix of a given vector. Intuitively, if we set the i-th component \lambda_i to be a small value, the approximate posterior of z_i will be encouraged to also have a small deviation, due to the latent cost. As a result, in order to reduce the reconstruction loss, z_i needs to be aligned to capture dominant changes. This behaviour is similar to PCA, although our model is non-linear and is thus able to better capture the characteristics of shapes in the collection.
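The only change relative to the standard model is the KL term, which still has a closed form for the modified Gaussian prior; a sketch (symbol names hypothetical):

```python
import numpy as np

def kl_scaled_prior(mu, sigma, lam):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, diag(lam^2)) ):
    # sum_i [ log(lam_i / sigma_i) + (sigma_i^2 + mu_i^2) / (2 lam_i^2) - 1/2 ]
    # A small lam_i penalizes deviations of z_i strongly, so that dimension
    # must encode the dominant variation to remain useful for reconstruction.
    return np.sum(np.log(lam / sigma)
                  + (sigma**2 + mu**2) / (2.0 * lam**2) - 0.5)
```

Setting all lam_i = 1 recovers the standard-normal KL of the basic model.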

Figure 5: Comparison of mesh interpolation results. The first row shows direct linear interpolation of the RIMD feature, the second row the result of [8], and the third row our result. All results are shown in two views, with artifacts marked using red circles.
Figure 6: Inter-class interpolation with different body shapes and poses.

4.5 Applications

Our mesh VAE model has various applications, which are summarized below:

Generation and Interpolation. Leveraging the capability of VAE, we can produce reasonable vectors in the latent space, and use them as input to the decoder to generate new models outside of the training dataset, which still satisfy the characteristics of the original dataset and are plausible. Meanwhile, similar to other autoencoder frameworks, we can interpolate the encoder results between two different shapes from the original dataset and then use the decoder to reconstruct the interpolated mesh sequence.
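Interpolation then amounts to linear blending of the two encoder means before decoding; a sketch:

```python
import numpy as np

def interpolate_latent(z_a, z_b, num_steps):
    # Linearly blend two encoder mean vectors; each blended z is fed to
    # the decoder, and the resulting RIMD feature is reconstructed into
    # a mesh by optimizing Eqn. 1.
    ts = np.linspace(0.0, 1.0, num_steps)
    return [(1.0 - t) * z_a + t * z_b for t in ts]
```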

Embedding for Visualization and Exploration. Based on our extended model for low-dimensional embedding, we can map the distribution of shapes to a low-dimensional (typically two) latent space for visualization. We further allow users to easily browse the latent space to find the model they want, probably not in the original training dataset, using a synthesis based exploration approach. More details are provided in Sec. 5.

Figure 7: 2D embedding of Dyna dataset. Different colors represent different body shapes. PCA embedding is very sparse so we include two zoom-in subfigures as shown in the circles.
Figure 8: Precision Recall Curve with style and pose as labels using 2D embedding of Dyna dataset.
Figure 9: Exploring the latent shape space.
Table 1: Per-vertex reconstruction error on held-out shapes with different latent dimensions, evaluated on the Jumping [41] and Face [47] datasets.
Dataset         RIMD
SCAPE [1]       3.7418
Jumping [41]    4.4325
Bouncing [41]   3.7358
Face [47]       7.8025
Flag            1.7916
Table 2: Per-vertex reconstruction error on held-out shapes with different feature representations (3D coordinates, aligned coordinates, and RIMD).
Points per model: 6890 (original), 6002 (simplified), 8002 (refined), and 6890 for each alternative base mesh choice.
Table 3: Per-vertex reconstruction error on held-out shapes of the chicken wings dataset from [29] with different mesh densities or base mesh choices.

5 Experiments

5.1 Framework Evaluation

To more thoroughly justify the design choices of our framework and demonstrate the effectiveness of using RIMD features, we compare results with different parameters, settings and input within our framework.

Latent Dimensions. We compare the reconstruction loss (per-vertex position errors) on held-out models with different latent dimensions, and the results are shown in Table 1. This suggests that using 128 dimensions is effective in improving reconstruction quality. Lower dimensions cannot capture enough information, while higher dimensions cannot improve results significantly and can cause overfitting.

MSE Loss. We compare the MSE loss we use with an alternative probabilistic reconstruction loss following [11], which was originally used for images. The RIMD features are mapped to [0, 1], similar to a probability, with sigmoid as the output layer and cross entropy as the reconstruction loss. However, this method does not converge even with three times the number of epochs we use for the MSE loss.

RIMD Feature. To verify the effectiveness of the RIMD feature, we work out per-vertex position errors of held-out shapes in different datasets. We compare our method using the RIMD feature with baseline methods using 3D vertex coordinates, and coordinates aligned by finding global rotations that minimize vertex differences, as input, with network layers and parameters adjusted accordingly to optimize their performance. In all the datasets we tested, our method produces visually high quality meshes, substantially reducing reconstruction loss compared with both 3D coordinates and aligned coordinates. The details are shown in Table 2. Our proposed framework is easy to train, as it can be effectively trained with small datasets (the examples in this paper typically contain only a small number of models), while avoiding overfitting, as demonstrated by its generalizability. We test our method on the simplified, original and refined chicken wings dataset from [29] with 6002, 6890 and 8002 points respectively, and also compare the results when different base meshes are chosen; the per-vertex reconstruction errors of unseen data are shown in Table 3. The differences of per-vertex errors between different density or base mesh choices are negligible, with no visual difference. This demonstrates that our results do not depend on the density of the meshes or the choice of the base mesh.

5.2 Generalization Ability

To test the generalization ability of the network, we divide a given shape collection into training and validation sets, and reconstruct shapes not seen during training. We use the chicken wings dataset from [29], split into training and validation models. The MSE reconstruction loss of the RIMD feature over training iterations is shown in Fig. 2. Our framework is efficient to train: the whole training time is only 36.98 minutes using a computer with an Intel Xeon E5-2620 CPU and an NVIDIA Tesla K40C GPU. We can see that the network has a fairly strong generalization ability, as the loss for unseen shapes is reasonably low. Note that the total loss involves both the reconstruction loss and the latent loss, so the reconstruction loss may sometimes fluctuate during optimization.

5.3 Generation of Novel Models

Mesh VAE. We sample latent vectors from the prior distribution N(0, I) as the input to the probabilistic decoder, and test the capability of the mesh VAE framework to generate new models. We train the network on the SCAPE dataset [1], the ‘swing’ dataset from [41], the face dataset [47] and a hand dataset. The results are shown in Fig. 3. We can see that, since we learn the latent space of 3D models with our proposed framework, we can easily generate new models, which are generally plausible. To validate that the network does not simply memorize the dataset, we also show the models with their nearest neighbors in the original dataset. The nearest neighbors are based on the Euclidean distance in the RIMD feature space, following [9], which considers this a suitable distance measure for evaluating results related to RIMD features. The model in the original dataset with the shortest distance to the newly generated model is chosen as the nearest neighbor. It is clear that mesh VAE can generate plausible new models by combining local deformations of different model parts.

Conditional Mesh VAE. With the help of labels as conditions, users can generate more specific models. To show this, we train the conditional mesh VAE on the dataset from [29], conditioned on body shapes and motion sequences. We then randomly generate models conditioned either on the action label ‘running on spot’, which contains shapes of a running action, or on the body shape ‘50022’, a female model, and compare the generation results with those of the normal mesh VAE. As the results in Fig. 4 show, more specific models can be synthesized with the given conditions.

5.4 Mesh Interpolation

We test the capability of our framework to interpolate between two different models in the original dataset. First, we compute the mean outputs of the probabilistic encoder for the two models. Then, we linearly interpolate between the two outputs and generate a list of inputs to the probabilistic decoder. Finally, we use the outputs of the probabilistic decoder to reconstruct a 3D deformation sequence. We compare the results on the ‘jumping’ dataset from [41] with direct linear interpolation of the RIMD feature, and with the state-of-the-art data-driven deformation method [8], as shown in Fig. 5. We can see that direct linear interpolation produces models with self-intersections. Interpolation in the latent space avoids these problems. Meanwhile, the data-driven method tends to follow the movement sequences from the original dataset, which has similar start and end states, and its results contain redundant motions such as the swing of the right arm. Our interpolation result gives a reasonable motion sequence from start to end. We also test inter-class interpolation on the ‘jumping jacks’ and ‘punching’ datasets from [29], interpolating between models with different body shapes and poses. The results are shown in Fig. 6.

5.5 Embedding

As mentioned in Sec. 4, we adjust the deviation of the latent prior to let some dimensions of z capture the most important deformations. We utilize this capability to embed shapes in a low-dimensional space. We use different motion sequences of different body shapes from [29], and compare the results of our method with the standard dimensionality reduction methods PCA, NPE (Neighborhood Preserving Embedding) and t-SNE (t-Distributed Stochastic Neighbor Embedding). When training mesh VAE, we set the deviation \lambda of the first two dimensions of the latent space to a small value, and the remaining dimensions to a larger value. The K-nearest-neighbor setting for NPE and the perplexity for t-SNE (set to 30) are optimized to give more plausible results. The comparative results are shown in Fig. 7. Since the dataset contains a large number of diverse models, our embedding is able to capture more than one major deformation pattern in each dimension. When using the first two dimensions for visualization, our method effectively separates all the models according to their shapes, while allowing models of similar poses to stay close together. PCA is a linear model, and can only represent one deformation pattern in each direction. In this case, it can discriminate body shapes but the distribution is rather sparse. It also cannot capture other modes of variation such as poses in the 2D embedding. The results of t-SNE have relatively large distances between primary clusters, similar to PCA. This phenomenon is also mentioned in [42]. NPE cannot even discriminate body shapes, which shows the difficulty of the task. To quantitatively analyze the performance of the embeddings, we perform experiments on the retrieval task using styles and poses as labels. The precision-recall curves are shown in Fig. 8, which shows that our embedding is more discriminative than the alternative methods.

5.6 Synthesis based Exploration of the Shape Space

By adjusting the parameters \lambda, our mesh VAE model is able to capture different importance levels of features, and thus allows the user to easily explore the latent space of the dataset and find new models they want. We test this exploration application on the SCAPE dataset [1] and demonstrate the results on two levels (each in a 2D space). We set \lambda for the first two dimensions of the latent space to the smallest value, the next two dimensions to a larger value, and the remaining dimensions to a larger value still. In the main figure of Fig. 9, we show the result of browsing the latent space in the first two dimensions. We browse a fixed symmetric range for both dimensions, while ignoring the remaining dimensions. We can see that these feature dimensions correspond to dominant shape deformations. The first dimension appears to control the total height of the model; when the value increases, the model changes from standing to squatting. The second dimension appears to control the supporting leg; when the value increases, the model starts to change its supporting leg from left to right. After picking interesting locations in the first-level browsing, the user can fix the values of the first two dimensions, and use the next two dimensions to explore the shape space in more detail. In the subfigures of Fig. 9, we show the results of browsing the third and fourth dimensions of the latent space. The values of the first two dimensions are selected based on the choice in the first step, and the third and fourth dimensions are then browsed over a similar range. The third dimension appears to control the height of the arms; when the value increases, the model gradually lifts its arms. The fourth dimension appears to control the direction of the body; when the value increases, the model gradually turns to the left.

6 Conclusions

In this paper, we introduce mesh variational autoencoders (mesh VAE), which use a variational autoencoder model with a mesh-based rotation invariant feature representation. We further study an extended model where the variation of latent variables can be controlled. We demonstrate that our generic model has various interesting applications, including analysis of shape collections as well as generating novel shapes. Unlike existing methods, our method can generate high quality deformable models with rich details. Experiments show that our method outperforms state-of-the-art methods in various applications.

7 Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61502453 and No. 61611130215), Royal Society-Newton Mobility Grant (No. IE150731), CCF-Tencent Open Research Fund (No. AGR20160118).

References

  • [1] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers, and J. Davis. SCAPE: shape completion and animation of people. ACM Trans. Graph., 24(3):408–416, 2005.
  • [2] F. Bogo, J. Romero, M. Loper, and M. J. Black. FAUST: Dataset and evaluation for 3D mesh registration. In IEEE CVPR, 2014.
  • [3] D. Boscaini, J. Masci, S. Melzi, M. M. Bronstein, U. Castellani, and P. Vandergheynst. Learning class-specific descriptors for deformable shapes using localized spectral convolutional networks. In Computer Graphics Forum, volume 34, pages 13–23, 2015.
  • [4] D. Boscaini, J. Masci, E. Rodolà, and M. Bronstein. Learning shape correspondence with anisotropic convolutional neural networks. In NIPS, pages 3189–3197, 2016.
  • [5] D. Boscaini, J. Masci, E. Rodolà, M. M. Bronstein, and D. Cremers. Anisotropic diffusion descriptors. In Computer Graphics Forum, volume 35, pages 431–441, 2016.
  • [6] C. B. Choy, D. Xu, J. Gwak, K. Chen, and S. Savarese. 3D-R2N2: A unified approach for single and multi-view 3D object reconstruction. In ECCV, pages 628–644, 2016.
  • [7] E. Corman, J. Solomon, M. Ben-Chen, L. Guibas, and M. Ovsjanikov. Functional characterization of intrinsic and extrinsic geometry. ACM Trans. Graph., 36(2):14:1–14:17, Mar. 2017.
  • [8] L. Gao, S.-Y. Chen, Y.-K. Lai, and S. Xia. Data-driven shape interpolation and morphing editing. Computer Graphics Forum, 2016.
  • [9] L. Gao, Y.-K. Lai, D. Liang, S.-Y. Chen, and S. Xia. Efficient and flexible deformation representation for data-driven surface modeling. ACM Trans. Graph., 35(5):158:1–158:17, 2016.
  • [10] R. Girdhar, D. Fouhey, M. Rodriguez, and A. Gupta. Learning a predictable and generative vector representation for objects. In ECCV, 2016.
  • [11] K. Gregor, I. Danihelka, A. Graves, D. Rezende, and D. Wierstra. DRAW: A recurrent neural network for image generation. In International Conference on Machine Learning, pages 1462–1471, 2015.
  • [12] X. Gu, S. Gortler, and H. Hoppe. Geometry images. In ACM SIGGRAPH, pages 355–361, 2002.
  • [13] H. Huang, E. Kalogerakis, and B. Marlin. Analysis and synthesis of 3D shape families via deep-learned generative models of surfaces. In Computer Graphics Forum, volume 34, pages 25–38, 2015.
  • [14] P. Huber, R. Perl, and M. Rumpf. Smooth interpolation of key frames in a riemannian shell space. Computer Aided Geometric Design, 52:313–328, 2017.
  • [15] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456, 2015.
  • [16] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In IEEE CVPR, July 2017.
  • [17] P. Kelly. Mechanics Lecture Notes Part III. 2015.
  • [18] D. Kingma and J. Ba. ADAM: A method for stochastic optimization. In ICLR, 2015.
  • [19] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
  • [20] Z. Levi and C. Gotsman. Smooth rotation enhanced as-rigid-as-possible mesh animation. IEEE Trans. Vis. Comput. Graph., 21(2):264–277, 2015.
  • [21] J. Li, K. Xu, S. Chaudhuri, E. Yumer, H. Zhang, and L. Guibas. GRASS: Generative recursive autoencoders for shape structures. ACM Trans. Graph., 36(4), 2017.
  • [22] Y. Li, H. Su, C. R. Qi, N. Fish, D. Cohen-Or, and L. J. Guibas. Joint embeddings of shapes and images via CNN image purification. ACM Trans. Graph., 34(6):234, 2015.
  • [23] A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013.
  • [24] H. Maron, M. Galun, N. Aigerman, M. Trope, N. Dym, E. Yumer, V. G. Kim, and Y. Lipman. Convolutional neural networks on surfaces via seamless toric covers. ACM Trans. Graph., 36(4):71:1–71:10, July 2017.
  • [25] J. Masci, D. Boscaini, M. Bronstein, and P. Vandergheynst. Geodesic convolutional neural networks on Riemannian manifolds. In IEEE ICCV Workshops, pages 37–45, 2015.
  • [26] J. Masci, D. Boscaini, M. Bronstein, and P. Vandergheynst. ShapeNet: Convolutional neural networks on non-Euclidean manifolds. Technical report, 2015.
  • [27] D. Maturana and S. Scherer. VoxNet: A 3D convolutional neural network for real-time object recognition. In IEEE Conference on Intelligent Robots and Systems, pages 922–928, 2015.
  • [28] C. Nash and C. K. Williams. The shape variational autoencoder: A deep generative model of part-segmented 3D objects. 2017.
  • [29] G. Pons-Moll, J. Romero, N. Mahmood, and M. J. Black. Dyna: A model of dynamic human shape in motion. ACM Trans. Graph., 34(4):120:1–120:14, 2015.
  • [30] C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. J. Guibas. Volumetric and multi-view CNNs for object classification on 3D data. In IEEE CVPR, pages 5648–5656, 2016.
  • [31] D. J. Rezende, S. A. Eslami, S. Mohamed, P. Battaglia, M. Jaderberg, and N. Heess. Unsupervised learning of 3D structure from images. In NIPS, pages 4997–5005, 2016.
  • [32] R. M. Rustamov, M. Ovsjanikov, O. Azencot, M. Ben-Chen, F. Chazal, and L. Guibas. Map-based exploration of intrinsic shape differences and variability. ACM Trans. Graph., 32(4):72:1–72:12, July 2013.
  • [33] A. Sharma, O. Grau, and M. Fritz. VConv-DAE: Deep volumetric shape learning without object labels. In ECCV Workshops, pages 236–250, 2016.
  • [34] B. Shi, S. Bai, Z. Zhou, and X. Bai. DeepPano: Deep panoramic representation for 3-D shape recognition. IEEE Signal Processing Letters, 22(12):2339–2343, 2015.
  • [35] A. Sinha, J. Bai, and K. Ramani. Deep Learning 3D Shape Surfaces Using Geometry Images, pages 223–240. 2016.
  • [36] A. Sinha, A. Unmesh, Q. Huang, and K. Ramani. SurfNet: Generating 3D shape surfaces using deep residual networks. In IEEE CVPR, July 2017.
  • [37] K. Sohn, H. Lee, and X. Yan. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems, pages 3483–3491, 2015.
  • [38] H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller. Multi-view convolutional neural networks for 3D shape recognition. In IEEE ICCV, pages 945–953, 2015.
  • [39] Q. Tan, L. Gao, Y.-K. Lai, J. Yang, and S. Xia. Mesh-based autoencoders for localized deformation component analysis. CoRR, abs/1709.04304, 2017.
  • [40] S. Tulsiani, H. Su, L. J. Guibas, A. A. Efros, and J. Malik. Learning shape abstractions by assembling volumetric primitives. arXiv preprint arXiv:1612.00404, 2016.
  • [41] D. Vlasic, I. Baran, W. Matusik, and J. Popović. Articulated mesh animation from multi-view silhouettes. ACM Trans. Graph., 27(3):97:1–9, 2008.
  • [42] M. Wattenberg, F. Viégas, and I. Johnson. How to use t-SNE effectively. Distill, 2016.
  • [43] J. Wu, T. Xue, J. J. Lim, Y. Tian, J. B. Tenenbaum, A. Torralba, and W. T. Freeman. Single image 3D interpreter network. In ECCV, pages 365–382. Springer, 2016.
  • [44] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3D ShapeNets: A deep representation for volumetric shapes. In IEEE CVPR, pages 1912–1920, 2015.
  • [45] T. Xue, J. Liu, and X. Tang. Example-based 3D object reconstruction from line drawings. In IEEE CVPR, pages 302–309, 2012.
  • [46] X. Yan, J. Yang, E. Yumer, Y. Guo, and H. Lee. Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision. In NIPS, pages 1696–1704. 2016.
  • [47] L. Zhang, N. Snavely, B. Curless, and S. M. Seitz. Spacetime faces: High-resolution capture for modeling and animation. In ACM Annual Conference on Computer Graphics, pages 548–558, August 2004.