Learning Category-Specific Mesh Reconstruction from Image Collections


Abstract

We present a learning framework for recovering the 3D shape, camera, and texture of an object from a single image. The shape is represented as a deformable 3D mesh model of an object category where a shape is parameterized by a learned mean shape and per-instance predicted deformation. Our approach allows leveraging an annotated image collection for training, where the deformable model and the 3D prediction mechanism are learned without relying on ground-truth 3D or multi-view supervision. Our representation enables us to go beyond existing 3D prediction approaches by incorporating texture inference as prediction of an image in a canonical appearance space. Additionally, we show that semantic keypoints can be easily associated with the predicted shapes. We present qualitative and quantitative results of our approach on the CUB dataset, and show that we can learn to predict the diverse shapes and textures across birds using only an annotated image collection. We also demonstrate the applicability of our method for learning the 3D structure of other generic categories. The project website can be found at https://akanazawa.github.io/cmr/. 1


Figure 1: Given an annotated image collection of an object category, we learn a predictor that can map a novel image to its 3D shape, camera pose, and texture.

1 Introduction

Consider the image of the bird in Figure 1. Even though this flat two-dimensional picture printed on a page may be the first time we are seeing this particular bird, we can infer its rough 3D shape, understand the camera pose, and even guess what it would look like from another view. We can do this because all the previously seen birds have enabled us to develop a mental model of what birds are like, and this knowledge helps us to recover the 3D structure of this novel instance.

In this work, we present a computational model that can similarly learn to infer a 3D representation given just a single image. As illustrated in Figure 1, the learning only relies on an annotated 2D image collection of a given object category, comprising foreground masks and semantic keypoint labels. Our training procedure, depicted in Figure 2, forces a common prediction model to explain all the image evidence across many examples of an object category. This allows us to learn a meaningful 3D structure despite using only a single view per training instance, without relying on any ground-truth 3D data for learning.

At inference, given a single unannotated image of a novel instance, our learned model allows us to infer the shape, camera pose, and texture of the underlying object. We represent the shape as a 3D mesh in a canonical frame, where the predicted camera transforms the mesh from this canonical space to the image coordinates. The particular shape of each instance is instantiated by deforming a learned category-specific mean shape with instance-specific predicted deformations. The use of this shared 3D space affords numerous advantages as it implicitly enforces correspondences across 3D representations of different instances. As we detail in Section 2, this allows us to formulate the task of inferring mesh texture of different objects as that of predicting pixel values in a common texture representation. Furthermore, we can also easily associate semantic keypoints with the predicted 3D shapes.

Our shape representation is an instantiation of deformable models, the history of which can be traced back to D'Arcy Thompson [1], who in turn was inspired by the work of Dürer [2]. Thompson observed that shapes of objects of the same category may be aligned through geometrical transformations. Cootes and Taylor [3] operationalized this idea to learn a class-specific model of deformation for 2D images. The pioneering work of Blanz and Vetter [4] extended these ideas to 3D shapes to model the space of faces. These techniques have since been applied to model human bodies [5, 6], hands [7, 8], and, more recently, quadruped animals [9]. Unfortunately, all of these approaches require a large collection of 3D data to learn the model, preventing their application to categories where such data collection is impractical. In contrast, our approach is able to learn using only an annotated image collection.

Sharing our motivation for relaxing the requirement of 3D data to learn morphable models, some related approaches have examined the use of similarly annotated image collections. Cashman and Fitzgibbon [10] use keypoint correspondences and segmentation masks to learn a morphable model of dolphins from images. Kar et al. [11] extend this approach to general rigid object categories. Both approaches follow a fitting-based inference procedure, which relies on mask (and optionally keypoint) annotations at test time and is computationally inefficient. We instead follow a prediction-based inference approach, and learn a parametrized predictor which can directly infer the 3D structure from an unannotated image. Moreover, unlike these approaches, we also address the task of texture prediction, which cannot be easily incorporated with these methods.

While deformable models have been a common representation for 3D inference, the recent advent of deep learning based prediction approaches has resulted in a plethora of alternate representations being explored using varying forms of supervision. Relying on ground-truth 3D supervision (using synthetic data), some approaches have examined learning voxel [12, 13, 14, 15], point cloud [16] or octree [17, 18] prediction. While some learning based methods do pursue mesh prediction [19, 20, 21], they also rely on 3D supervision which is only available for restricted classes or in a synthetic setting. Reducing the supervision to multi-view masks [22, 23, 24, 25] or depth images [24] has been explored for voxel prediction, but the requirement of multiple views per instance is still restrictive. While these approaches show promising results, they rely on stronger supervision (ground-truth 3D or multi-view) compared to our approach.

In the context of these previous approaches, the proposed approach differs primarily in three aspects:

  • Shape representation and inference method. We combine the benefits of the classically used deformable mesh representations with those of a learning-based prediction mechanism. The use of a deformable mesh representation affords several advantages such as memory efficiency, surface-level reasoning, and correspondence association. Using a learned prediction model allows efficient inference from a single unannotated image.

  • Learning from an image collection. Unlike recent CNN based 3D prediction methods which require either ground-truth 3D or multi-view supervision, we only rely on an annotated image collection, with only one available view per training instance, to learn our prediction model.

  • Ability to infer texture. There is little past work on predicting the 3D shape and the texture of objects from a single image. Recent prediction-based learning methods use representations that are not amenable to textures (e.g. voxels). The classical deformable model fitting-based approaches cannot easily incorporate texture for generic objects. An exception is texture inference on human faces [4, 26], but these approaches require a large set of 3D ground-truth data with high-quality texture maps. Our approach enables us to pursue the task of texture inference from image collections alone, and we address the related technical challenges regarding its incorporation in a learning framework.

2 Approach

We aim to learn a predictor $f_\theta$ (parameterized as a CNN) that can infer the 3D structure of the underlying object instance from a single image $I$. The prediction comprises the 3D shape of the object in a canonical frame, the associated texture, and the camera pose. The shape representation we pursue in this work is a 3D mesh. This representation affords several advantages over alternatives like probabilistic volumetric grids, e.g. amenability to texturing, correspondence inference, surface-level reasoning, and interpretability.

The overview of the proposed framework is illustrated in \figrefoverview. The input image is passed through an encoder to a latent representation that is shared by three modules that estimate the camera pose, shape deformation, and texture parameters. The deformation is added to the learned category-level mean shape to obtain the final predicted shape. The objective of the network is to minimize the corresponding losses when the shape is rendered onto the image. We train a separate model for each object category.

Figure 2: Overview of the proposed framework. An image is passed through a convolutional encoder to a latent representation that is shared by modules that estimate the camera pose, deformation, and texture parameters. The deformation is an offset to the learned mean shape, which, when added, yields an instance-specific shape in a canonical coordinate frame. We also learn correspondences between the mesh vertices and the semantic keypoints. Texture is parameterized as a UV image, which we predict through texture flow (see Section 2.3). The objective is to minimize the distance between the rendered mask, keypoints, and textured rendering and the corresponding ground-truth annotations. We do not require ground-truth 3D shapes or multi-view cues for training.

We first present the representations predicted by our model in Section 2.1, and then describe the learning procedure in Section 2.2. We initially present our framework for predicting shape and camera pose, and then describe how the model is extended to predict the associated texture in Section 2.3.

2.1 Inferred 3D Representation

Given an image $I$ of an instance, we predict a mesh $M$ and a camera pose $\pi$ to capture the 3D structure of the underlying object. In addition to these directly predicted aspects, we also learn the association between the mesh vertices and the category-level semantic keypoints. We describe the details of the inferred representations below.

Shape Parametrization. We represent the shape as a 3D mesh $M \equiv (V, F)$, defined by vertices $V \in \mathbb{R}^{|V| \times 3}$ and faces $F$. We assume a fixed and pre-determined mesh connectivity, and use the faces corresponding to a spherical mesh. The vertex positions are instantiated using (learned) instance-independent mean vertex locations $\bar{V}$ and instance-dependent predicted deformations $\Delta_V$, which, when added, yield the instance vertex locations $V = \bar{V} + \Delta_V$. Intuitively, the mean shape $\bar{V}$ can be considered a learned bias term for the predicted shape $V$.

Camera Projection. We model the camera with weak-perspective projection and predict, from the input image $I$, the scale $s \in \mathbb{R}$, translation $t \in \mathbb{R}^2$, and rotation (captured by a quaternion $q \in \mathbb{R}^4$). We use $\pi(P)$ to denote the projection of a set of 3D points $P$ onto the image coordinates via the weak-perspective projection defined by $\pi \equiv (s, t, q)$.
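To make the camera model concrete, the sketch below implements weak-perspective projection under the notation above. It is an illustrative sketch rather than the authors' code; the (w, x, y, z) quaternion convention and the PyTorch implementation are assumptions.

```python
# Illustrative sketch of the weak-perspective projection pi(P) defined by (s, t, q).
# The (w, x, y, z) quaternion convention is an assumption, not taken from the paper.
import torch

def quat_to_rotmat(q):
    """q: (4,) quaternion (w, x, y, z) -> (3, 3) rotation matrix."""
    w, x, y, z = q / q.norm()  # normalize so q encodes a valid rotation
    return torch.stack([
        torch.stack([1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)]),
        torch.stack([2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)]),
        torch.stack([2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)]),
    ])

def project(P, s, t, q):
    """P: (N, 3) points in the canonical frame; s: scalar; t: (2,); q: (4,)."""
    P_cam = P @ quat_to_rotmat(q).T   # rotate into the camera frame
    return s * P_cam[:, :2] + t       # drop depth (orthographic), then scale and translate
```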

Associating Semantic Correspondences. As we represent the shape using a category-specific mesh in the canonical frame, the regularities across instances encourage semantically consistent vertex positions across instances, thereby implicitly endowing semantics to these vertices. We can use this insight and learn to explicitly associate semantic keypoints, e.g. beak, legs, etc., with the mesh via a keypoint assignment matrix $A \in \mathbb{R}_{\geq 0}^{K \times |V|}$ s.t. $\sum_v A_{k,v} = 1$. Here, each row $A_k$ represents a probability distribution over the mesh vertices corresponding to keypoint $k$, and can be understood as approximating a one-hot vector of vertex selection for each keypoint. As we describe later in our learning formulation, we encourage each $A_k$ to be a peaked distribution. Given the vertex positions $V$, we can infer the location of keypoint $k$ as $\sum_v A_{k,v} V_v$. More concisely, the keypoint locations induced by the vertices can be obtained as $A \cdot V$. We initialize the keypoint assignment matrix $A$ uniformly, but over the course of training it learns to better associate semantic keypoints with appropriate mesh vertices.
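A minimal sketch of how this association could be realized is given below, assuming the assignment matrix is parameterized by unconstrained logits followed by a row-wise softmax (the exact parameterization is an assumption):

```python
# Sketch (assumed parameterization): A is obtained from free logits via a row-wise
# softmax, so each keypoint row is a probability distribution over the |V| vertices.
import torch
import torch.nn.functional as F

def keypoints_from_vertices(A_logits, V):
    """A_logits: (K, |V|) free parameters; V: (|V|, 3) vertex locations."""
    A = F.softmax(A_logits, dim=1)                        # row-stochastic assignment matrix
    keypoints_3d = A @ V                                  # (K, 3) induced keypoint locations
    entropy = -(A * (A + 1e-12).log()).sum(dim=1).mean()  # used later to keep rows peaked
    return keypoints_3d, A, entropy
```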

In summary, given an image $I$ of an instance, we predict the corresponding camera $\pi$ and the shape deformation $\Delta_V$ as $f(I) = (\pi, \Delta_V)$. In addition, we also learn, across the dataset, the instance-independent parameters $\{\bar{V}, A\}$. As described above, these category-level (learned) parameters, in conjunction with the instance-specific predictions, allow us to recover the mesh vertex locations $V$ and the coordinates of the semantic keypoints $A \cdot V$.

2.2 Learning from an Image Collection

We present an approach to train the predictor without relying on strong supervision in the form of ground-truth 3D shapes or multi-view images of an object instance. Instead, we guide the learning using an image collection annotated with sparse keypoints and segmentation masks. Such a setting is more natural and easily obtained, particularly for animate and deformable objects such as birds or other animals. It is extremely difficult to obtain scans, or even multiple views of the same instance, for these classes, but relatively easy to acquire a single image for numerous instances.

Given the annotated image collection, we train by formulating an objective function that consists of instance-specific losses and priors. The instance-specific energy terms ensure that the predicted 3D structure is consistent with the available evidence (masks and keypoints), and the priors encourage generic desired properties, e.g. smoothness. As we learn a common prediction model across many instances, the common structure across the category allows us to learn meaningful 3D prediction despite having only a single view per instance.

Training Data. We assume an annotated training set $\{(I_i, S_i, x_i)\}_{i=1}^{N}$ for each object category, where $I_i$ is an image, $S_i$ is the instance segmentation mask, and $x_i$ is the set of keypoint locations. As previously leveraged by [27, 11], applying structure-from-motion to the annotated keypoint locations additionally allows us to obtain a rough estimate of the weak-perspective camera $\tilde{\pi}_i$ for each training instance. This results in an augmented training set $\{(I_i, S_i, x_i, \tilde{\pi}_i)\}_{i=1}^{N}$, which we use to train our predictor.

Instance Specific Losses. We ensure that the predicted 3D structure matches the available annotations. Using the semantic correspondences associated with the mesh via the keypoint assignment matrix $A$, we formulate a keypoint reprojection loss. This term encourages the predicted 3D keypoints to match the annotated 2D keypoints $x_i$ when projected onto the image:

$L_{\text{reproj}} = \sum_i \lVert x_i - \tilde{\pi}_i(A \cdot V_i) \rVert$   (1)

Similarly, we enforce that the predicted 3D mesh, when rendered in the image coordinates, is consistent with the annotated foreground mask: $L_{\text{mask}} = \sum_i \lVert S_i - \mathcal{R}(V_i, \tilde{\pi}_i) \rVert^2$. Here, $\mathcal{R}(V, \pi)$ denotes the rendered segmentation mask image corresponding to the 3D mesh with vertices $V$ when viewed through camera $\pi$. In all of our experiments, we use the Neural Mesh Renderer [28] to provide a differentiable implementation of $\mathcal{R}$.

We also train the predicted camera pose $\pi_i$ to match the corresponding estimate $\tilde{\pi}_i$ obtained via structure-from-motion, using a regression loss $L_{\text{cam}} = \sum_i \lVert \pi_i - \tilde{\pi}_i \rVert^2$. We found it advantageous to use the structure-from-motion camera $\tilde{\pi}_i$, and not the predicted camera $\pi_i$, to define the $L_{\text{reproj}}$ and $L_{\text{mask}}$ losses. This is because during training, in particular during the initial stages when the predictions are often incorrect, an error in the predicted camera can lead to high losses despite an accurate shape, and can adversely affect learning.
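The instance-specific terms could be assembled along the following lines; here `render_silhouette` is a hypothetical stand-in for a differentiable renderer such as the Neural Mesh Renderer, and its interface is assumed:

```python
# Sketch of the instance-specific losses; `render_silhouette` is a stand-in for a
# differentiable silhouette renderer (interface assumed, not the paper's code).
import torch

def reproj_loss(kp_3d, kp_2d_gt, project_sfm):
    # project_sfm applies the structure-from-motion camera, not the predicted one.
    return (project_sfm(kp_3d) - kp_2d_gt).norm(dim=-1).mean()

def mask_loss(V, faces, cam_sfm, mask_gt, render_silhouette):
    pred_mask = render_silhouette(V, faces, cam_sfm)   # (H, W) soft silhouette
    return ((pred_mask - mask_gt) ** 2).mean()

def cam_loss(cam_pred, cam_sfm):
    return ((cam_pred - cam_sfm) ** 2).mean()          # regress predicted camera to SfM camera
```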

Priors. In addition to the data-dependent losses which ensure that the predictions match the evidence, we leverage generic priors to encourage additional properties. The prior terms that we use are:

Smoothness. In the natural world, shapes tend to have smooth surfaces, and we would like our recovered 3D shapes to behave similarly. An advantage of using a mesh representation is that it naturally affords reasoning at the surface level. In particular, enforcing surface smoothness has been extensively studied in the computer graphics community [29, 30]. Following this literature, we formulate surface smoothness as minimization of the mean curvature. On meshes, this is captured by the norm of the graph Laplacian applied to the vertices, and can be concisely written as $L_{\text{smooth}} = \lVert L V \rVert^2$, where $L$ is the discrete Laplace-Beltrami operator. We construct $L$ once using the connectivity of the mesh, so it can be expressed as a simple linear operator on the vertex locations. See the appendix for details.
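For illustration, the sketch below builds a simple uniform graph Laplacian from the fixed mesh connectivity and penalizes the norm of $L V$; note that the appendix uses cotangent weights rather than this uniform variant.

```python
# Sketch of the smoothness prior with a *uniform* graph Laplacian; the paper's appendix
# uses cotangent weights, which additionally account for local geometry.
import numpy as np
import torch

def uniform_laplacian(num_verts, faces):
    """faces: (F, 3) int array. Returns L = I - D^{-1} W as a dense torch tensor."""
    W = np.zeros((num_verts, num_verts), dtype=np.float32)
    for a, b, c in faces:
        W[a, b] = W[b, a] = 1.0
        W[b, c] = W[c, b] = 1.0
        W[a, c] = W[c, a] = 1.0
    D_inv = 1.0 / np.maximum(W.sum(axis=1, keepdims=True), 1.0)
    return torch.from_numpy(np.eye(num_verts, dtype=np.float32) - D_inv * W)

def smoothness_loss(L, V):
    """V: (|V|, 3) vertex locations. Mean norm of L V acts as a mean-curvature proxy."""
    return torch.norm(L @ V, dim=1).mean()
```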

Deformation Regularization. In keeping with common practice across deformable model approaches [4, 10, 11], we find it beneficial to regularize the deformations, as this discourages arbitrarily large deformations and helps learn a meaningful mean shape. The corresponding energy term is expressed as $L_{\text{def}} = \lVert \Delta_V \rVert$.

Keypoint association. As discussed in Section 2.1, we encourage the keypoint assignment matrix $A$ to be a peaked distribution, as each row should intuitively correspond to a one-hot vector. We therefore minimize the average entropy over all keypoints: $L_{\text{vert2kp}} = \frac{1}{K} \sum_k \sum_v -A_{k,v} \log A_{k,v}$.

In summary, the overall objective for shape and camera is

$L = L_{\text{reproj}} + L_{\text{mask}} + L_{\text{cam}} + L_{\text{smooth}} + L_{\text{def}} + L_{\text{vert2kp}}$   (2)

Symmetry Constraints. Almost all common object categories, including the ones we consider, exhibit reflectional symmetry. To exploit this structure, we constrain the predicted shape and deformations to be mirror-symmetric. As our mesh topology corresponds to that of a sphere, we identify symmetric vertex pairs in the initial topology. Given these pairs, we only learn/predict parameters for one vertex in each pair for the mean shape $\bar{V}$ and the deformations $\Delta_V$. See the appendix for details.

Initialization and Implementation Details. While our mesh topology corresponds to a sphere, following previous fitting-based deformable model approaches [11], we observe that a better initialization of the mean vertex positions speeds up learning. We compute the convex hull of the mean keypoint locations obtained during structure-from-motion and initialize the mean vertex locations to lie on this convex hull; the procedure is described in more detail in the appendix. As the different energy terms in (2) have naturally different magnitudes, we weight them accordingly to normalize their contributions.

2.3 Incorporating Texture Prediction


Figure 3: Illustration of the UV mapping. We illustrate how a texture image can induce a corresponding texture on the predicted meshes. A point on the sphere can be mapped onto the texture image using its spherical coordinates. As our mean shape has the same mesh geometry (vertex connectivity) as a sphere, we can transfer this mapping onto the mean shape. The different predicted shapes, in turn, are simply deformations of the mean shape and can use the same mapping.

In our formulation, all recovered shapes share a common underlying 3D mesh structure: each shape is a deformation of the mean shape. We can leverage this property to reduce texturing of a particular instance to predicting the texture of the mean shape. Our mean shape is isomorphic to a sphere, whose texture can be represented as an image $I^{uv}$, the values of which get mapped onto the surface via a fixed UV mapping (akin to unrolling a globe into a flat map) [31]. Therefore, we formulate the task of texture prediction as that of inferring the pixel values of $I^{uv}$. This image can be thought of as a canonical appearance space of the object category. For example, a particular triangle on the predicted shape always maps to a particular region in $I^{uv}$, irrespective of how it was deformed. This is illustrated in Figure 3. In this texture parameterization, each pixel in the UV image has a consistent semantic meaning, thereby making it easier for the prediction model to leverage common patterns, such as the correlation between the color of a bird's back and its body.
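One simple way to realize such a fixed UV mapping is via the spherical coordinates of the initial icosphere vertices; the convention in the sketch below is an assumption, not necessarily the one used in the paper.

```python
# Sketch of a fixed spherical UV mapping: each vertex of the *initial* icosphere gets
# (u, v) coordinates from its azimuth/elevation; deformed shapes reuse the same mapping.
import numpy as np

def sphere_uv(sphere_verts):
    """sphere_verts: (N, 3) unit-sphere vertex positions -> (N, 2) UV coordinates in [0, 1]."""
    x, y, z = sphere_verts[:, 0], sphere_verts[:, 1], sphere_verts[:, 2]
    u = 0.5 + np.arctan2(y, x) / (2.0 * np.pi)          # azimuth wrapped to [0, 1]
    v = 0.5 - np.arcsin(np.clip(z, -1.0, 1.0)) / np.pi  # elevation mapped to [0, 1]
    return np.stack([u, v], axis=1)
```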

Figure 4: Illustration of texture flow. We predict a texture flow $\mathcal{F}$ that is used to bilinearly sample the input image to generate the texture image $I^{uv}$. We can then use this predicted UV image to texture the instance mesh via the UV mapping procedure illustrated in Figure 3.

We incorporate the texture prediction module into our framework by setting up a decoder that upconvolves the latent representation to the spatial dimensions of $I^{uv}$. While directly regressing the pixel values of $I^{uv}$ is a feasible approach, this often results in blurry images. Instead, we take inspiration from [32] and formulate this task as that of predicting appearance flow: instead of regressing the pixel values of $I^{uv}$, the texture module outputs where to copy the color of each pixel from in the original input image. This prediction mechanism, depicted in Figure 4, easily allows our predicted texture to retain the details present in the input image. We refer to this output as the 'texture flow' $\mathcal{F} \in \mathbb{R}^{H_{uv} \times W_{uv} \times 2}$, where $H_{uv}$ and $W_{uv}$ are the height and width of $I^{uv}$, and $\mathcal{F}(u, v)$ indicates the coordinates of the input image from which to sample the pixel value. This allows us to generate the UV image $I^{uv}$ by bilinear sampling of the original input image according to the predicted flow $\mathcal{F}$, as illustrated in Figure 4.
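Since the flow values are normalized to [-1, 1], the bilinear sampling step can be implemented with a standard differentiable sampler; a minimal sketch:

```python
# Minimal sketch of generating the UV image from the predicted texture flow.
import torch
import torch.nn.functional as F

def sample_uv_image(image, flow):
    """image: (B, 3, H, W) input images; flow: (B, H_uv, W_uv, 2) coordinates in [-1, 1].
    Returns the (B, 3, H_uv, W_uv) UV texture image via differentiable bilinear sampling."""
    return F.grid_sample(image, flow, mode='bilinear', align_corners=False)
```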

We now formulate our texture loss, which encourages the rendered textured mesh to match the foreground of the image:

$L_{\text{texture}} = \sum_i \operatorname{dist}\big(S_i \odot I_i,\ \mathcal{R}(V_i, \tilde{\pi}_i, I^{uv}_i)\big)$   (3)

Here, $\mathcal{R}(V_i, \tilde{\pi}_i, I^{uv}_i)$ is the rendering of the 3D mesh with the texture defined by $I^{uv}_i$, and $\odot$ denotes masking the image by the foreground mask. We use the perceptual metric of Zhang et al. [33] as the distance function $\operatorname{dist}(\cdot, \cdot)$.

The loss function above provides a supervisory signal to the regions of $I^{uv}$ corresponding to the foreground portion of the image, but not to the regions of $I^{uv}$ corresponding to parts that are not directly visible in the image. While common patterns across the dataset, e.g. similar colors for a bird's body and back, can still allow meaningful prediction, we find it helpful to add a further loss that encourages the texture flow to select pixels only from the foreground region of the image. This can be expressed by sampling the distance transform field $\mathcal{D}_S$ of the foreground mask $S$ (where $\mathcal{D}_S(x) = 0$ for all points $x$ in the foreground) according to the flow $\mathcal{F}$ and summing the resulting image:

$L_{\text{dt}} = \sum_i \sum_{u,v} \mathcal{D}_{S_i}\big(\mathcal{F}_i(u, v)\big)$   (4)

In contrast to inferring the full texture map, directly sampling the actual pixel values that the predicted mesh projects onto creates holes and leaks background texture at the boundaries. Similarly to the shape parametrization, we also explicitly encode symmetry in our texture prediction, where symmetric faces get mapped onto the same UV coordinates in $I^{uv}$. Additionally, we only back-propagate gradients from $L_{\text{texture}}$ and $L_{\text{dt}}$ to the predicted texture (and not the predicted shape), since bilinear sampling often results in high-frequency gradients that destabilize shape learning. Our shape prediction is therefore learned only using the objective in (2), and the losses $L_{\text{texture}}$ and $L_{\text{dt}}$ can be viewed as encouraging prediction of the correct texture 'on top' of the learned shape.
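A sketch of the foreground-sampling term $L_{\text{dt}}$ is given below, using a standard Euclidean distance transform of the ground-truth mask (zero inside the foreground). Since the mask requires no gradient, computing the transform in NumPy is fine; gradients reach only the predicted flow.

```python
# Sketch of L_dt: sample the mask's distance-transform field at the flow locations.
# D_S is zero on the foreground, so the loss pulls sampled pixels into the foreground.
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt

def texture_dt_loss(flow, fg_masks):
    """flow: (B, H_uv, W_uv, 2) in [-1, 1]; fg_masks: (B, H, W) binary numpy masks."""
    dt = torch.stack([torch.from_numpy(distance_transform_edt(1 - m)).float()
                      for m in fg_masks]).unsqueeze(1)         # (B, 1, H, W), zero on foreground
    sampled = F.grid_sample(dt, flow, mode='bilinear', align_corners=False)
    return sampled.mean()                                      # gradients flow only to the flow
```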

3 Experiments

We demonstrate the ability of our presented approach to learn single-view inference of shape, texture and camera pose using only a category-level annotated image collection. As a running example, we consider the 'bird' object category as it represents a challenging scenario that has not been addressed via previous approaches. We first present, in Section 3.1, our experimental setup, describing the annotated image collection and CNN architecture used.

As ground-truth 3D is not available for benchmarking, we present extensive qualitative results in Section 3.2, demonstrating that we learn to predict meaningful shapes and textures across birds. We also show that we capture the shape deformation space of the category, and that the implicit correspondences in the deformable model enable applications such as texture transfer across instances.

We also present quantitative results to provide evidence for the accuracy of our shape and camera estimates in Section 3.3. While there has been little work on reconstructing categories like birds, some approaches have examined the task of learning shape prediction using an annotated image collection for certain rigid classes. In Section 3.4 we present our method's results on some additional representative categories, and show that our method performs comparably to, if not better than, the previously proposed alternatives, while having several additional advantages, e.g. learning semantic keypoints and texture prediction.

3.1 Experimental Setup

Dataset. We use the CUB-200-2011 dataset [34], which has 6000 training and test images of 200 species of birds. Each image is annotated with a bounding box, visibility indicators and locations of 14 semantic keypoints, and a ground-truth foreground mask. We filter out nearly 300 images where the number of visible keypoints is 6 or fewer, since these typically correspond to truncated close shots. We divide the test set in half to create a validation set, which we use for hyperparameter tuning.

Figure 5: Sample results. We show predictions of our approach on images from the test set. For each input image on the left, we visualize (in order): the predicted 3D shape and texture viewed from the predicted camera, and textured shape from three novel viewpoints. See the appendix for additional randomly selected results and video at https://akanazawa.github.io/cmr/.

Network Architecture. A schematic of the various modules of our prediction network is depicted in Figure 2. The encoder consists of an ImageNet-pretrained ResNet-18 [35], followed by a convolutional layer that downsamples the spatial and channel dimensions by half. This is vectorized to form a 4096-D vector, which is passed through two fully-connected layers to reach the shared latent space of size 200. The deformation and camera prediction components are linear layers on top of this latent space. The texture flow component consists of 5 upconvolution layers, where the final output is passed through a tanh function to keep the flow in a normalized [-1, 1] space. We use the Neural Mesh Renderer [28] so that all rendering procedures are differentiable. All images are cropped using the instance bounding box and resized such that the maximum image dimension is 256. We augment the training data on the fly by jittering the scale and translation of the bounding box and by mirroring the images. Our mesh geometry corresponds to that of a perfectly symmetric sphere with 642 vertices and 1280 faces.
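The encoder described above roughly corresponds to the following sketch; the hidden sizes of the fully-connected layers are assumptions rather than the released architecture.

```python
# Rough sketch of the encoder: ResNet-18 features -> downsampling conv -> 4096-D vector
# -> two fully-connected layers -> 200-D shared latent code. Hidden sizes are assumed.
import torch
import torch.nn as nn
import torchvision

class Encoder(nn.Module):
    def __init__(self, latent_dim=200):
        super().__init__()
        resnet = torchvision.models.resnet18(pretrained=True)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])        # keep conv features only
        self.down = nn.Conv2d(512, 256, kernel_size=3, stride=2, padding=1)  # halve channels + spatial dims
        self.fc = nn.Sequential(nn.Linear(256 * 4 * 4, 1024), nn.ReLU(inplace=True),
                                nn.Linear(1024, latent_dim))

    def forward(self, img):                     # img: (B, 3, 256, 256)
        feat = self.down(self.backbone(img))    # (B, 256, 4, 4) -> flattened 4096-D
        return self.fc(feat.flatten(1))         # (B, 200) latent shared by all prediction heads
```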

3.2 Qualitative Results

We visualize the results and applications of our learned predictor on the CUB dataset. We show reconstructions for different input images, visualize some of the learned deformation modes, and show that the common deformable model parametrization allows us to transfer the texture of one instance onto another.

Single-view 3D Reconstruction. We show sample reconstruction results on images from the CUB test set in Figure 5. We show the predicted shape and texture from the inferred camera viewpoint, as well as from novel views. Please see the appendix for additional randomly selected samples and videos showing 360° views of the results.

We observe that our learned model can accurately predict the shape, estimate the camera, and infer meaningful texture from the corresponding input image. Our predicted 3D shape captures the overall shape (fat or thin birds), and even some finer details, e.g. beaks, as well as large deformations, e.g. flying birds. Additionally, our learned pose and texture predictions are accurate and realistic across different instances. We observe that the error modes correspond to rare poses that are not predicted well, and an inability to capture asymmetric articulation. However, we feel that these predictions, learned using only an annotated image collection, are encouraging.

Figure 6: Learned deformation modes. We visualize the space of learned shapes by depicting the mean shape (centre) and three common modes of deformation as obtained by PCA on the predicted deformations across the dataset.

Learned shape space. The presented approach represents the shape of an instance via a category-level learned mean shape $\bar{V}$ and a per-instance predicted deformation $\Delta_V$. To gain insight into the common modes of deformation captured by our predictor, we obtained the principal deformation modes by computing PCA on the predicted deformations across all instances in the training set.

In Figure 6 we visualize our mean shape deformed in directions corresponding to three common deformation modes. We note that these plausibly correspond to some of the natural factors of variation in 3D structure across birds, e.g. fat or thin birds, opening of wings, and deformation of tails and legs.
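The deformation modes visualized in Figure 6 could be computed along these lines (array shapes are assumptions):

```python
# Sketch of extracting principal deformation modes from the predicted offsets.
import numpy as np
from sklearn.decomposition import PCA

def deformation_modes(deformations, n_modes=3):
    """deformations: (N, |V|, 3) predicted per-vertex offsets for N training instances."""
    flat = deformations.reshape(deformations.shape[0], -1)
    pca = PCA(n_components=n_modes).fit(flat)
    modes = pca.components_.reshape(n_modes, -1, 3)   # each mode is a per-vertex offset field
    return modes, pca.explained_variance_ratio_
```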

Texture Transfer. Recall that the textures of different instances in our formulation are captured in a canonical appearance space, in the form of a predicted 'texture image' $I^{uv}$. This parametrization allows us to easily modify the surface appearance, and in particular to transfer texture across instances.

We show some results in Figure 7, where we sample pairs of instances and transfer the texture from one image onto the predicted shape of the other. We achieve this by simply using the predicted texture image corresponding to the first instance when rendering the predicted 3D shape of the other. We note that even though the two views might be different, since the underlying 'texture image' space is consistent, the transferred texture is also semantically consistent, e.g. the colors corresponding to one bird's body are transferred onto the other bird's body.

Figure 7: Texture Transfer Results. Our representation allows us to easily transfer the predicted texture across instances using the canonical appearance image (see text for details). We visualize sample results of texture transfer across different pairs of birds. For each pair, we show (left): the input image, (middle): the predicted textured mesh from the predicted viewpoint, and (right): the predicted mesh textured using the predicted texture of the other bird.

3.3 Quantitative Evaluation

We attempt to indirectly measure the quality of our recovered reconstructions on the CUB dataset. As no ground-truth 3D is available for benchmarking, we instead evaluate the mask reprojection accuracy. For each test instance in the CUB dataset, we obtain a mask prediction by rendering the predicted 3D shape from the predicted camera viewpoint. We then compute the intersection over union (IoU) of this predicted mask with the annotated ground-truth mask. Note that to correctly predict the foreground mask, we need both an accurate shape and an accurate camera.

Our results are plotted in Figure 8. We compare the accuracy of our full shape prediction (using the learned mean shape $\bar{V}$ and the predicted deformation $\Delta_V$) against only using the learned mean shape to obtain the predicted mask. We observe that the predicted deformations result in improvements, indicating that we are able to capture the specifics of the shape of different instances. Additionally, we also report the performance using the camera obtained via structure-from-motion (which uses the ground-truth annotated keypoints) instead of the predicted camera. We note that the comparable results in the two settings demonstrate the accuracy of our learned camera estimation. Lastly, we also measure our keypoint reprojection accuracy using the percentage of correct keypoints (PCK) metric [36]. We similarly observe that our full predicted shape performs (slightly) better than relying only on the category-level mean shape, obtaining a PCK (at a normalized distance threshold of 0.1) of 0.72 compared to 0.71. The improvement over the mean shape is less prominent in this scenario, as most of the semantic keypoints defined are on the torso and therefore typically undergo only small deformations.
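For reference, the two metrics reported here are standard and could be computed as in the sketch below (input conventions are assumptions):

```python
# Sketch of the evaluation metrics: mask IoU and PCK at a normalized distance threshold.
import numpy as np

def mask_iou(pred_mask, gt_mask, thresh=0.5):
    pred, gt = pred_mask > thresh, gt_mask > 0.5
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / max(union, 1)

def pck(pred_kp, gt_kp, visible, norm_size, alpha=0.1):
    """Fraction of visible keypoints whose error is below alpha * normalization size."""
    err = np.linalg.norm(pred_kp - gt_kp, axis=1)
    correct = (err < alpha * norm_size) & visible.astype(bool)
    return correct.sum() / max(visible.sum(), 1)
```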

Method      Aeroplane   Car
CSDM [11]   0.40        0.60
DRC [24]    0.42        0.67
Ours        0.46        0.64

Figure 8: Mask reprojection accuracy evaluation on CUB. We plot the fraction of test instances with IoU between the predicted and ground-truth mask higher than different thresholds (higher is better) and compare the predictions using the full model against only using the learned mean shape. We report the reprojection accuracy using predicted cameras and cameras obtained via structure-from-motion based on keypoint annotation.
Figure 9: Reconstruction evaluation using PASCAL 3D+. We report the mean intersection over union (IoU) on PASCAL 3D+ to benchmark the obtained 3D reconstructions (higher is better). We compare to previous deformable model fitting-based [11] and volumetric prediction  [24] approaches that use similar image collection supervision. Note that our approach can additionally predict texture and semantics.

3.4 Evaluation on Other Object Classes


Figure 10: Pascal 3D+ results. We show predictions of our approach on images from the test set. For each input image on the left, we visualize (in order): the predicted 3D shape viewed from the predicted camera, the predicted shape with texture viewed from the predicted camera, and the shape with texture viewed from a novel viewpoint.

While our primary results focus on predicting the 3D shape and texture of birds using the CUB dataset, we note that some previous approaches have examined the task of shape inference/prediction using a similar annotated image collection as supervision. While these previous methods do not infer texture, we can compare our shape predictions against those obtained by these techniques.

We compare to previous deformable model fitting-based [11] and volumetric prediction [24] methods using the PASCAL 3D+ dataset, and examine the car and aeroplane categories. Both of these approaches can leverage the annotations we have available, i.e. segmentation masks and keypoints, to learn 3D shape inference (although [24] requires annotated cameras instead of keypoints). Similar to [24], we use PASCAL VOC and ImageNet images with available keypoint annotations from PASCAL 3D+ to train our model, and use an off-the-shelf segmentation algorithm [37] to obtain foreground masks for the ImageNet subset.

We report the mean IoU on the test set in Figure 9 and observe that we perform comparably to, if not better than, these alternative methods. We also note that our approach yields additional outputs, e.g. texture, that these methods do not. We visualize some predictions in Figure 10. While our predicted shapes are often reasonable, the textures have more errors, due to shiny regions (e.g. for cars) or a smaller amount of training data (e.g. for aeroplanes).

4 Discussion

We have presented a framework for learning single-view prediction of a textured 3D mesh using an image collection as supervision. While our results represent an encouraging step, we have by no means solved the problem in the general case, and a number of interesting challenges and possible directions remain. Our formulation handles both shape variation and articulation via the same shape deformation mechanism, and it may be beneficial to extend our deformable shape model to explicitly allow articulation. Additionally, while we presented a method to synthesize texture by copying image pixels, a more sophisticated mechanism that allows both copying image content and synthesizing novel aspects might be desirable. Finally, even though we can learn using only a single view per training instance, our approach is equally applicable, and might yield even better results, in scenarios where multiple views per training instance are available. However, at the other end of the supervision spectrum, it would be desirable to relax the need for annotation even further, and investigate learning similar prediction models from unannotated image collections.

Acknowledgements. We thank David Fouhey for the creative title suggestions, and members of the BAIR community for helpful discussions and comments. This work was supported in part by Intel/NSF VEC award IIS-1539099, NSF Award IIS-1212798, and BAIR sponsors.

Appendix

A1. Optimization Details

Mesh Geometry. The geometry of the predicted mesh corresponds to that of an 'icosphere' (subdivided icosahedron) at subdivision level 3 (see [38] for an excellent description and implementation). This results in a mesh with 642 vertices and 1280 faces. We keep the faces fixed during our learning process, and predict (via a learned mean shape and predicted deformations) the positions of the vertices.

Symmetry. We enforce reflectional symmetry along the X-axis. As the initial icosphere is perfectly symmetric, each vertex either lies on the symmetry plane or has a corresponding symmetric vertex. For each pair of symmetric vertices, we only treat the location of one vertex in the pair as a free parameter. As a consequence, we predict the locations of 337 vertices (32 of these lie on the plane, and 305 come from one symmetric vertex pair each) to instantiate the 642 mesh vertex locations.
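A sketch of this parameterization is given below; the index arrays mapping free parameters to mesh vertices are hypothetical helpers, not taken from the paper.

```python
# Sketch of the mirror-symmetric parameterization. `free_idx`, `left_idx`, `right_idx`
# are hypothetical precomputed index arrays: free_idx holds the 337 vertices that carry
# parameters (32 on the symmetry plane plus 305 left-side vertices), and left_idx/right_idx
# pair each left-side vertex with its mirror.
import torch

def instantiate_vertices(free_verts, free_idx, left_idx, right_idx, num_verts=642):
    """free_verts: (337, 3) learned/predicted coordinates -> (642, 3) full vertex set."""
    V = free_verts.new_zeros(num_verts, 3)
    V[free_idx] = free_verts                                     # plane + left-side vertices
    V[right_idx] = V[left_idx] * V.new_tensor([-1.0, 1.0, 1.0])  # reflect across the x = 0 plane
    return V
```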

Mean Shape Initialization. While the initial icosphere yields default positions for the vertices, following previous approaches [10, 11], we find it beneficial to use a better initialization for the vertex locations in the mean shape. To this end, we use the convex hull of the mean keypoint locations obtained after running structure-from-motion on the annotated keypoints. For each vertex, its initial position is computed by projecting it onto this convex hull. Our learning process therefore starts with this coarse convex-hull mean shape initialization and, over the course of training, learns a better mean shape. The initial and final mean shapes are illustrated in Figure 11.

Figure 11: Initial and Learned Mean Shapes. On the left we show the initial mean shape obtained from running SfM on the annotated keypoints. We use this as initialization. On the right we show the final learned mean shape.

Laplacian Smoothness. As a prior for smoothness, we minimize the mean mesh curvature. The curvature at the vertices can be computed via a discretization of the continuous Laplace-Beltrami operator. This discrete operator (the Laplacian $L$) is simply a fixed sparse matrix, and the product $L V$ yields the normal direction at each vertex weighted by the curvature. We can therefore minimize the mean norm of the rows of $L V$ to minimize the mean curvature. We use the 'cotangent weights' [29] to define $L$, as they account for the local geometry instead of just adjacency. We refer the reader to Section 2.1 of [39] for a concise review of the concepts involved.

A2. Additional Results

In Figure 12 and Figure 13 we show predictions of our approach on 40 randomly selected images from the test set. In each column, we show the input image followed by the predicted 3D shape and texture from the predicted camera view, and three views of the textured shape corresponding to rotations of 60, 180 and -60 degrees around the y-axis.

Figure 12: Randomly selected results. We show predictions of our approach on random images from the test set. For each column, we show the input image on the left and visualize (in order): the predicted 3D shape and texture viewed from the predicted camera, and textured shape from three novel views corresponding to a rotation of 60, 180 and -60 degrees around y-axis.

Figure 13: Randomly selected results. We show predictions of our approach on random images from the test set. For each column, we show the input image on the left and visualize (in order): the predicted 3D shape and texture viewed from the predicted camera, and textured shape from three novel views corresponding to a rotation of 60, 180 and -60 degrees around y-axis.

Footnotes

  1. The first two authors procrastinated equally on this work.

References

  1. Thompson, D.: On Growth and Form. Cambridge Univ. Press (1917)
  2. Dürer, A.: Four Books on Human Proportion. Formschneyder (1528)
  3. Cootes, T.F., Taylor, C.J.: Active shape models—‘smart snakes’. In: BMVC. (1992)
  4. Blanz, V., Vetter, T.: A morphable model for the synthesis of 3d faces. In: ACM SIGGRAPH. (1999) 187–194
  5. Anguelov, D., Srinivasan, P., Koller, D., Thrun, S., Rodgers, J., Davis, J.: SCAPE: Shape Completion and Animation of PEople. ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH (2005)
  6. Loper, M., Mahmood, N., Romero, J., Pons-Moll, G., Black, M.J.: SMPL: A skinned multi-person linear model. ACM Trans. Graphics (Proc. SIGGRAPH Asia) (2015)
  7. Taylor, J., Stebbing, R., Ramakrishna, V., Keskin, C., Shotton, J., Izadi, S., Hertzmann, A., Fitzgibbon, A.: User-specific hand modeling from monocular depth sequences. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR. (2014)
  8. Khamis, S., Taylor, J., Shotton, J., Keskin, C., Izadi, S., Fitzgibbon, A.: Learning an efficient model of hand shape variation from depth images. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR. (2015)
  9. Zuffi, S., Kanazawa, A., Jacobs, D., Black, M.J.: 3d menagerie: Modeling the 3d shape and pose of animals. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR. (2017)
  10. Cashman, T.J., Fitzgibbon, A.W.: What shape are dolphins? building 3D morphable models from 2D images. IEEE Trans. Pattern Anal. Mach. Intell 35(1) (2013) 232–244
  11. Kar, A., Tulsiani, S., Carreira, J., Malik, J.: Category-specific object reconstruction from a single image. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR. (2015)
  12. Choy, C.B., Xu, D., Gwak, J., Chen, K., Savarese, S.: 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In: European Conference on Computer Vision, ECCV. (2016)
  13. Girdhar, R., Fouhey, D., Rodriguez, M., Gupta, A.: Learning a predictable and generative vector representation for objects. In: European Conference on Computer Vision, ECCV. (2016)
  14. Zhu, R., Kiani, H., Wang, C., Lucey, S.: Rethinking reprojection: Closing the loop for pose-aware shape reconstruction from a single image. In: IEEE International Conference on Computer Vision, ICCV. (2017)
  15. Wu, J., Wang, Y., Xue, T., Sun, X., Freeman, W.T., Tenenbaum, J.B.: MarrNet: 3D Shape Reconstruction via 2.5D Sketches. In: Advances in Neural Information Processing Systems. (2017)
  16. Fan, H., Su, H., Guibas, L.J.: A point set generation network for 3d object reconstruction from a single image. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR. (2017)
  17. Häne, C., Tulsiani, S., Malik, J.: Hierarchical surface prediction for 3d object reconstruction. In: 3DV. (2017)
  18. Tatarchenko, M., Dosovitskiy, A., Brox, T.: Octree generating networks: Efficient convolutional architectures for high-resolution 3d outputs. In: IEEE International Conference on Computer Vision, ICCV. (2017)
  19. Kanazawa, A., Black, M.J., Jacobs, D.W., Malik, J.: End-to-end recovery of human shape and pose. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR. (2018)
  20. Yang, B., Rosa, S., Markham, A., Trigoni, N., Wen, H.: 3d object dense reconstruction from a single depth view. arXiv preprint arXiv:1802.00411 (2018)
  21. Laine, S., Karras, T., Aila, T., Herva, A., Saito, S., Yu, R., Li, H., Lehtinen, J.: Production-level facial performance capture using deep convolutional neural networks. In: Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, ACM (2017)  10
  22. Yan, X., Yang, J., Yumer, E., Guo, Y., Lee, H.: Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision. In: Advances in Neural Information Processing Systems. (2016)
  23. Rezende, D.J., Eslami, S.A., Mohamed, S., Battaglia, P., Jaderberg, M., Heess, N.: Unsupervised learning of 3d structure from images. In: Advances in Neural Information Processing Systems. (2016)
  24. Tulsiani, S., Zhou, T., Efros, A.A., Malik, J.: Multi-view supervision for single-view reconstruction via differentiable ray consistency. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR. (2017)
  25. Gwak, J., Choy, C.B., Garg, A., Chandraker, M., Savarese, S.: Weakly supervised 3d reconstruction with adversarial constraint. In: 3DV. (2017)
  26. Saito, S., Wei, L., Hu, L., Nagano, K., Li, H.: Photorealistic facial texture inference using deep neural networks. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR. (2017)
  27. Vicente, S., Carreira, J., Agapito, L., Batista, J.: Reconstructing pascal voc. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR. (2014)
  28. Kato, H., Ushiku, Y., Harada, T.: Neural 3d mesh renderer. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR. (2018)
  29. Pinkall, U., Polthier, K.: Computing discrete minimal surfaces and their conjugates. Experimental mathematics 2(1) (1993) 15–36
  30. Sorkine, O., Cohen-Or, D., Lipman, Y., Alexa, M., Rössl, C., Seidel, H.P.: Laplacian surface editing. In: Proceedings of the 2004 Eurographics/ACM SIGGRAPH symposium on Geometry processing, ACM (2004) 175–184
  31. Hughes, J.F., Foley, J.D.: Computer graphics: principles and practice. Pearson Education (2014)
  32. Zhou, T., Tulsiani, S., Sun, W., Malik, J., Efros, A.A.: View synthesis by appearance flow. In: European Conference on Computer Vision, ECCV. (2016)
  33. Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep networks as a perceptual metric. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR. (2018)
  34. Wah, C., Branson, S., Welinder, P., Perona, P., Belongie, S.: The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology (2011)
  35. He, K., Zhang, X., Ren, S., Sun, J.: Identity mappings in deep residual networks. In: European Conference on Computer Vision, ECCV. (2016)
  36. Yang, Y., Ramanan, D.: Articulated pose estimation with flexible mixtures-of-parts. In: Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, IEEE (2011) 1385–1392
  37. He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: IEEE International Conference on Computer Vision, ICCV. (2017)
  38. Kahler, A.: Icosphere. http://blog.andreaskahler.com/2009/06/creating-icosphere-mesh-in-code.html Accessed: 2018-03-17.
  39. Sorkine, O.: Differential representations for mesh processing. In: Computer Graphics Forum, Wiley Online Library (2006)