Learning Detailed Face Reconstruction from a Single Image
Reconstructing the detailed geometric structure of a face from a given image is key to many computer vision and graphics applications, such as motion capture and reenactment. The reconstruction task is challenging as human faces vary extensively when considering expressions, poses, textures, and intrinsic geometries. While many approaches tackle this complexity by using additional data to reconstruct the face of a single subject, extracting the facial surface from a single image remains a difficult problem. As a result, single-image-based methods can usually provide only a rough estimate of the facial geometry. In contrast, we propose to leverage the power of convolutional neural networks to produce a highly detailed face reconstruction from a single image. For this purpose, we introduce an end-to-end CNN framework which derives the shape in a coarse-to-fine fashion. The proposed architecture is composed of two main blocks, a network that recovers the coarse facial geometry (CoarseNet), followed by a CNN that refines the facial features of that geometry (FineNet). The proposed networks are connected by a novel layer which renders a depth image given a mesh in 3D. Unlike object recognition and detection problems, there are no suitable datasets for training CNNs to perform face geometry reconstruction. Therefore, our training regime begins with a supervised phase, based on synthetic images, followed by an unsupervised phase that uses only unconstrained facial images. The accuracy and robustness of the proposed model are demonstrated by both qualitative and quantitative evaluation tests.
Faces, with all their complexities and vast number of degrees of freedom, allow us to communicate and express ourselves through expressions, mimics, and gestures. Facial muscles enable us to express our emotions and feelings, while facial geometric features determine one’s identity. However, the flexibility of these qualities makes the recovery of facial geometry from a flat image a challenge. Moreover, additional ambiguities arise as the projection of a face onto an image depends also on its texture and material properties, lighting conditions, and viewing direction.
Various methods mitigate this uncertainty by using additional data such as a large photo collection of the same subject [36, 35, 21, 28, 33], continuous video frames [44, 40, 5, 11] or a rough depth map [44, 18]. In many cases, however, we only have access to a single facial image. In this setup, common schemes can be divided into 3D morphable model (3DMM) techniques [3, 4], template-based methods [20, 15] and data-driven approaches [26, 41, 34].
Here, we propose an end-to-end neural network for reconstructing a detailed facial surface in 3D from a single image. At the core of our method is the idea of breaking the reconstruction problem into two phases, each solved by a dedicated neural network architecture. First, we introduce CoarseNet, a network for recovering the coarse facial geometry as well as the pose of the face directly from the image. To train CoarseNet, a synthetic dataset of facial images with their matching face geometry and pose is generated. The rough facial geometries are modeled using a 3DMM, which provides a compact representation that can be recovered using the proposed network. However, this representation can capture only the coarse geometry. Next, in order to capture fine details, we introduce FineNet, a network that operates on depth maps and thus is not constrained by the morphable model representation. FineNet receives a coarse depth map alongside the original input image, and applies a shape-from-shading-like refinement, capturing the fine facial details. To train FineNet, we use an unlabeled set of facial images, where a dedicated loss criterion is introduced to allow unsupervised training. Finally, to connect the CoarseNet 3DMM output to the FineNet depth map input, we introduce a novel layer which takes the 3DMM representation and pose parameters from CoarseNet, and produces a depth map that can be fed into FineNet. This layer supports back-propagation to the 3DMM representation, allowing joint training of the two networks, possibly refining the weights of CoarseNet.
The usage of an end-to-end network here is exciting as it connects the problem of face reconstruction to the rapidly expanding applications solved by CNNs, potentially allowing us to further improve our results following new advances in CNN architectures. Moreover, it allows fast reconstructions without the need for external initialization or post-processing algorithms. The potential of using a CNN for reconstructing face geometries was recently demonstrated in . However, their network can only produce the coarse geometry, and must be given an aligned template model as initialization. These limitations force their solution to depend on external algorithms for pose alignment and detail refinement.
The main contributions of the proposed method include:
An end-to-end network-based solution for facial surface reconstruction from a single image, capable of producing detailed geometric structures.
A novel rendering layer, allowing back-propagation from a rendered depth map to the 3DMM model.
A network for data refinement, using a dedicated loss criterion, motivated by axiomatic shape-from-shading objectives.
A training scheme that bypasses the need for manually labeled data by utilizing only synthetic data and unlabeled facial images.
2 Related Work
Automatic face reconstruction attracts a lot of attention in the computer vision and computer graphics research communities. The available solutions differ in their assumptions about the input data, the priors and the techniques they use. When dealing with geometry reconstruction from a single image, the problem is ill-posed. Still, there are ways for handling the intrinsic ambiguities in geometry reconstruction from one image. These solutions can be roughly divided into the following categories:
3DMM Methods. In , Blanz and Vetter introduced the 3D Morphable Model (3DMM), a principal components analysis (PCA) basis for representing faces. One of the advantages of using the 3DMM is that the solution space is constrained to represent only likely solutions, thereby simplifying the problem. While the original paper assumes manual initialization, more recent efforts propose an automatic reconstruction process [4, 48]. Still, the automated initialization pipelines usually do not produce the same quality of reconstructions when only one image is used, as noted in . In addition, the 3DMM solutions cannot extract fine details since they are not spanned by the principal components.
Template-Based Methods. An alternative approach is to solve the problem by deforming a template to match the input image. One notable paper is that of Kemelmacher-Shlizerman and Basri . There, a reference model is aligned with the face image and a shape-from-shading (SfS) process is applied to mold the reference model to better match the image. Similarly, Hassner proposed to jointly maximize the appearance and depth similarities between the input image and a template face using SIFTflow . While these methods do a better job at recovering the fine facial features, their ability to capture the global face structure is limited by the provided template initialization.
Data-Driven Methods. A different approach to the problem uses some form of regression to connect the input image with the reconstruction representation. Some methods apply a regression model from a set of sparse landmarks [1, 10, 25], while others apply a regression on features derived from the image [22, 7].  applies a joint optimization process that ties the sparse landmarks with the face geometry, recovering both. Recently, a network was proposed to directly reconstruct the geometry from the image , without using sparse information or explicit features. That paper demonstrated the potential of using a network for face reconstruction. Still, it required external procedures for fine detail extraction as well as an initial guess of the face location, size, and pose.
In a sense, the proposed solution combines all of these different procedures. Specifically, a 3DMM is used to define the input for a Template-Based refinement step, where both parts are learned using a Data-Driven model.
3 Coarse Geometry Reconstruction
The first step in our framework is to extract the coarse facial geometry and pose from the given image. Our solution is motivated by two recent efforts,  which proposed to train a network for face reconstruction using synthetic data, and  which solved the face alignment problem using a network. Although the methods focus on different problems, they both use an iterative framework which utilizes a 3D morphable model. The proposed method integrates both concepts into a holistic alignment and geometry reconstruction solution.
3.1 Modeling The Solution Space
In order to solve the reconstruction problem using a CNN, a representation of the solution space is required. To model the facial geometries we use a 3D morphable model , where an additional blendshape basis is used to model expressions, as suggested in . This results in the following linear representation

$S = \mu_S + A_{id}\,\alpha_{id} + A_{exp}\,\alpha_{exp},$

where $\mu_S$ is the average 3D face, $A_{id}$ is the principal component basis, $A_{exp}$ is the blendshape basis, and $\alpha_{id}$ and $\alpha_{exp}$ are the corresponding coefficient vectors. $A_{id}$ and $A_{exp}$ are collected from the Bosphorus dataset  as in , where the identity and the expression are each modeled by a fixed number of coefficients.
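As a sanity check, the linear model above can be sketched in a few lines; the dimensions below are toy values, not the actual basis sizes:

```python
import numpy as np

# Minimal sketch of the linear 3DMM: mean face plus identity and expression
# offsets, with all vertices stacked into one 3N vector. Shapes are illustrative.
def assemble_face(mu, A_id, A_exp, alpha_id, alpha_exp):
    """S = mu + A_id @ alpha_id + A_exp @ alpha_exp."""
    return mu + A_id @ alpha_id + A_exp @ alpha_exp

# Toy example: 2 vertices (6 coordinates), 3 identity and 2 expression modes.
rng = np.random.default_rng(0)
mu = rng.standard_normal(6)
A_id = rng.standard_normal((6, 3))
A_exp = rng.standard_normal((6, 2))

# Zero coefficients reproduce the mean face exactly.
S = assemble_face(mu, A_id, A_exp, np.zeros(3), np.zeros(2))
assert np.allclose(S, mu)
```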
For projecting the 3D model onto the image plane, we assume a parallel weak perspective projection,

$p = f \, \Pi \, (R\,P + t),$

where $p = (x, y)^T$ and $P = (X, Y, Z)^T$ are the pixel location in the image plane and the vertex position in the world coordinate system, respectively, $f$ is the focal length, $\Pi$ is the orthographic projection matrix, and $[R \mid t]$ is the extrinsic matrix of the camera. Hence, the face alignment is modeled using only six parameters: three Euler angles, a 2D translation vector and a scale. The pose parameters are normalized so that a zero vector corresponds to a centralized, front-facing face. Overall, we have a single vector of parameters for both geometry and pose. We will denote this representation by $r$.
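A minimal sketch of this projection, assuming a z-y-x Euler convention and an in-plane translation (both illustrative choices, not necessarily the paper's exact parameterization):

```python
import numpy as np

# Weak perspective: rotate the world point, drop z, scale by f, translate in
# the image plane. euler_to_rot's axis convention is an assumption.
def euler_to_rot(yaw, pitch, roll):
    """Rotation matrix from Euler angles (z-y-x order, an illustrative choice)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def weak_perspective(P, f, R, t2d):
    """Project (N,3) world points to (N,2) pixels under weak perspective."""
    Pc = (R @ P.T).T              # rotate into camera coordinates
    return f * Pc[:, :2] + t2d    # orthographic drop of z, then scale + shift

P = np.array([[1.0, 2.0, 5.0]])
p = weak_perspective(P, f=2.0, R=np.eye(3), t2d=np.array([0.5, -0.5]))
assert np.allclose(p, [[2.5, 3.5]])
```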
3.2 The CoarseNet Training Framework
The realization that the power of single-pass systems is limited has made the application of iterative networks popular. While some methods [39, 23] use a cascade of networks to refine their results, it has been shown that a single network can also be trained to iteratively correct its prediction. This is done by adding feedback channels to the network that represent the previous output of the network as a set of feature maps. The network is then trained to refine its prediction based on both the original input and the feedback channels. This idea was first proposed by Carreira et al. in .
3.2.1 Feedback Representation
Defining the feedback channels of the previous output of the network is crucial, as it affects the overall performance of our iterative framework. Roughly speaking, we would like the feedback channels to properly represent the current state of the coarse facial geometry. In practice, different types of feedback channels emphasize different features of the current state. For instance, in  the Projected Normalized Coordinate Code (PNCC) was introduced. This feature map is computed by first normalizing the average face and then painting the RGB channels of the current vertices with the $x$, $y$ and $z$ coordinates of the corresponding vertex on the average model; see Figure 1(b) and 1(e).
Next, we propose to use the normal map as an additional channel, where each vertex is associated with its normal coordinates. These normal values are then rendered as RGB values. The purpose of the normal map is to represent more local features of the coarse geometry, which are not emphasized by the PNCC. The proposed solution uses both feedbacks, creating a richer representation of the shape. Examples of these representations are shown in Figure 1.
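The normal-map channel can be sketched as follows; the per-triangle normal computation and the affine mapping of normals to RGB are standard choices, assumed here rather than taken from the paper:

```python
import numpy as np

# Illustrative sketch of the normal-map feedback channel: per-triangle unit
# normals in [-1,1]^3 are affinely mapped to RGB values in [0,1] for rendering.
def triangle_normal(v0, v1, v2):
    """Unit normal of a triangle, via the cross product of two edges."""
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n)

def normals_to_rgb(normals):
    """Map (N,3) unit normals to (N,3) RGB values."""
    return 0.5 * (normals + 1.0)

# A triangle lying in the xy-plane faces the camera along +z...
n = triangle_normal(np.zeros(3), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
# ...and renders as a bluish pixel (0.5, 0.5, 1.0).
assert np.allclose(normals_to_rgb(n[None, :]), [[0.5, 0.5, 1.0]])
```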
3.2.2 Acquiring The Data
In order to train the proposed framework, a large dataset of 3D faces is required. However, due to the complexity in acquiring accurate 3D scans for a large group of people, no such dataset is currently available. Note that unlike different annotations, such as landmark positions, which can be manually collected for an existing set of unlabeled images, the 3D geometry has to be captured jointly with the photometric data. A possible solution would be to apply existing reconstruction methods to 2D images and use these reconstructions as labels. However, such an approach would limit the reconstruction quality to that of the reconstruction method we use.
Here, we choose to follow the line of thought proposed in  and create a synthetic dataset by drawing random representations of geometry and pose, $r_{gt}$, which are then rendered using random texture, lighting, and reflectance. This process provides a dataset of 2D images, for which the pose and corresponding geometry are known by construction. The iterative refinement process is then simulated by drawing another set of parameters, $r_{cur}$, which is sampled between $r_{gt}$ and a random set of parameters, $r_{rand}$.
$r_{cur}$ represents the current estimation of the solution, and is used to generate the PNCC and normal map. The network is then trained to predict the ground-truth representation, $r_{gt}$, from the current one, $r_{cur}$. Note that, unlike , our representation captures not only the geometry, but also the pose. Hence, $r_{gt}$ and $r_{cur}$ can also vary in their position and orientation.
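One simple way to realize this sampling is a uniform convex combination between the ground truth and the random draw; the mixing scheme below is an illustrative assumption, not the paper's exact procedure:

```python
import numpy as np

# Simulate a "current estimate" at a random distance from the ground truth,
# so the network sees training states of varying quality.
def sample_current(r_gt, r_rand, rng):
    """Convex combination between the ground truth and a random representation."""
    t = rng.uniform(0.0, 1.0)
    return t * r_gt + (1.0 - t) * r_rand

rng = np.random.default_rng(1)
r_cur = sample_current(np.zeros(4), np.ones(4), rng)
# the sample lies on the segment between the two endpoints
assert np.all(r_cur >= 0.0) and np.all(r_cur <= 1.0)
```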
3.3 The CoarseNet Architecture and Criterion
CoarseNet is based on the ResNet architecture , and is detailed in Figure 2. Note that the input layer includes the feedback channels and that a grayscale image is used. The last element in the proposed architecture is the training criterion. As our representation is composed of both geometry and pose parameters, we apply a different training criterion to each part of the representation. For the geometry we apply the Geometry Mean Square Error (GMSE) suggested in ,

$E_{geo} = \left\| A_{id}\,(\tilde{\alpha}_{id} - \alpha_{id}) + A_{exp}\,(\tilde{\alpha}_{exp} - \alpha_{exp}) \right\|_2^2,$

where $(\tilde{\alpha}_{id}, \tilde{\alpha}_{exp})$ is the geometry received from the network, and $(\alpha_{id}, \alpha_{exp})$ is the known geometry. The idea behind GMSE is to take into account how the different coefficients affect the resulting geometry. For the pose parameters we found that a simple MSE loss over the 6 parameters is sufficient. We weigh the two loss criteria so that both start with approximately the same initial error.
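The effect of weighting coefficient errors through the basis can be shown with a toy sketch (the normalization by the number of coordinates is an assumption):

```python
import numpy as np

# GMSE idea in miniature: measure coefficient error after mapping through the
# geometry basis, so each coefficient is weighted by its effect on the surface.
def gmse(A, alpha_pred, alpha_gt):
    d = A @ (alpha_pred - alpha_gt)       # per-coordinate geometric error
    return float(d @ d) / A.shape[0]      # mean over coordinates (assumption)

A = np.diag([10.0, 1.0])                  # first mode moves the surface 10x more
e_strong = gmse(A, np.array([1.0, 0.0]), np.zeros(2))
e_weak = gmse(A, np.array([0.0, 1.0]), np.zeros(2))
# the same coefficient error costs more along the stronger mode
assert e_strong > e_weak
assert gmse(A, np.zeros(2), np.zeros(2)) == 0.0
```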
3.4 Using CoarseNet
We feed CoarseNet with a cropped image of a face. Such an image can be automatically acquired using a standard face detector, such as the Viola-Jones detector . The initial parameter vector is set to zeros, corresponding to a centered mean face. In addition, the input image is always masked in accordance with the visible vertices in the feedback channel. The masking is applied in order to improve our generalization from synthetic data to real-world images, as our synthetic data is more accurate for the head region. Although the mask is inaccurate in the first iteration, it is gradually refined. The network is then applied iteratively, producing an updated representation which is used to create the new feedback input. This process is repeated until convergence, as shown in Figure 3.
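The iterative application can be sketched as a simple loop; the network and the feedback renderer below are stand-in closures that only illustrate the control flow:

```python
import numpy as np

# Toy sketch of the iterative CoarseNet loop: render feedback channels from
# the current representation, then let the network refine it. The "network"
# here is a stand-in closure, not a real CNN.
def coarse_iterations(image, net, render_feedback, r0, n_iters=4):
    r = r0
    for _ in range(n_iters):
        feedback = render_feedback(r)   # would be the PNCC + normal map
        r = net(image, feedback)        # refined geometry + pose estimate
    return r

target = np.ones(3)
net = lambda image, feedback: 0.5 * feedback + 0.5 * target
r = coarse_iterations(None, net, lambda r: r, np.zeros(3))
# each pass halves the distance to the target representation
assert np.allclose(r, 0.9375)
```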
4 The Coarse to Fine Approach
For many tasks, such as face frontalization [48, 16], reconstructing the coarse geometry is sufficient. However, reconstructing fine geometric structures such as wrinkles could be useful for other applications, see [5, 38]. It is clear that while working in the morphable model domain, we cannot capture such details. To solve that, we transfer the problem to the unconstrained image plane, representing the geometry as a depth map. The role of the proposed FineNet would then be to modify the given coarse depth map, based on the original image, for capturing the fine details.
4.1 The Rendering Layer
To connect CoarseNet with FineNet we propose a novel rendering layer. The layer receives the geometry and pose representation vector as its input and outputs a depth map of the geometry in the corresponding pose. This is done in two steps: first, the 3D mesh is computed from the geometry parameters and positioned above the image plane according to the recovered pose,

$\hat{S} = f \, R \, S + t.$
The 3D mesh is then rendered using a z-buffer renderer, where each pixel is associated with a single triangular face of the mesh. In order to handle potential occlusions, when a single pixel resides in more than one triangle, the one closest to the image plane is chosen. The value of each pixel is determined by interpolating the z-values of its mesh triangle using barycentric coordinates,

$z_p = \sum_{i=1}^{3} \lambda_i \, z_i,$

where $z_i$ is the z-value of the $i$-th vertex in the respective triangle and $\lambda_i$ is the corresponding barycentric coordinate. During back-propagation, the gradients are passed from each pixel to the matching vertices, weighted by the corresponding coordinates,

$\frac{\partial L}{\partial z_i} = \lambda_i \, \frac{\partial L}{\partial z_p},$
where $L$ is the loss criterion. Note that we assume that the barycentric coordinates are fixed. Alternatively, one could also differentiate the coordinates with respect to the vertices' $x$ and $y$ positions. Note that no gradients are propagated to hidden vertices, since they do not appear in the output depth map. A similar approach was applied, for example, in . Finally, the gradients are propagated from each vertex back to the geometry basis, by taking the derivative of Equation 5 with respect to the geometry coefficients. The gradient transfer is visualized in Figure 4.
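The per-pixel forward and backward passes of the rendering layer reduce to a weighted sum and its transpose; a minimal sketch:

```python
import numpy as np

# Per-pixel forward/backward of the rendering layer: depth is a barycentric
# combination of the triangle's z-values, and gradients are routed back with
# the same (fixed) barycentric weights.
def pixel_depth(z_tri, bary):
    """Forward: interpolate the three z-values with barycentric weights."""
    return float(np.dot(bary, z_tri))

def pixel_grad(dL_dz_pixel, bary):
    """Backward: gradient to each vertex is the pixel gradient times its weight."""
    return dL_dz_pixel * bary

bary = np.array([0.2, 0.3, 0.5])
z_tri = np.array([1.0, 2.0, 3.0])
assert np.isclose(pixel_depth(z_tri, bary), 2.3)
assert np.allclose(pixel_grad(1.0, bary), bary)
```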
4.2 FineNet Framework
Delicate facial features such as wrinkles and dimples are difficult to represent by a 3DMM low dimensional space, mainly due to their high diversity. Hence, in contrast to CoarseNet, we need to use a pixel-based framework to recover the fine details. Recently, several notable pixel-based CNN architectures [12, 27, 14] were used for various fine grained tasks like semantic and instance segmentation [27, 14], optical flow , and human pose estimation . First successful attempts to reconstruct surface normals using these architectures [2, 45] have motivated our FineNet architecture. The proposed framework differs from both these networks in its output (depth map vs. normal map) and training regime (unsupervised vs. supervised).
FineNet is based on the hypercolumn architecture suggested in . The main idea behind this architecture is to generate a per-pixel feature map which incorporates both structural and semantic data. This is achieved by concatenating the output responses from several convolution layers along the path of the network. Due to pooling layers, the output map sizes of inner layers do not match the size of the input image; therefore, they are interpolated back to the original size to create a dense per-pixel volume of features. This volume is then processed by several convolution layers to create the final prediction.
We choose VGG-Face  as the base for our hypercolumn network since it was fine-tuned on the domain of faces. For interpolation, we apply a slightly different scheme than that of . Instead of directly upsampling each feature map to the original size using bilinear interpolation, we use cascaded 2-strided upconvolution layers to upsample the feature maps. This is done in order to improve the quality of the features, as the interpolation is now also part of the learning process. In contrast to recognition problems, refining the facial features is a relatively local problem. Therefore, we truncate the VGG-Face network before the third pooling layer and form a hypercolumn feature volume. This volume is then processed by a set of convolutional layers used as a linear regressor. Note that this fully convolutional framework allows us to use input images of any size. Figure 2 describes the FineNet architecture.
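The hypercolumn construction can be sketched as follows, with nearest-neighbor upsampling standing in for the learned strided upconvolutions:

```python
import numpy as np

# Hypercolumn idea in miniature: upsample intermediate feature maps back to
# the input resolution and stack them per pixel. Nearest-neighbor upsampling
# is an illustrative stand-in for learned upconvolutions.
def upsample_nn(fmap, out_h, out_w):
    """Nearest-neighbor upsample a (C,h,w) map to (C,out_h,out_w)."""
    c, h, w = fmap.shape
    ri = np.arange(out_h) * h // out_h
    ci = np.arange(out_w) * w // out_w
    return fmap[:, ri][:, :, ci]

def hypercolumn(maps, out_h, out_w):
    """Concatenate all upsampled maps along the channel axis."""
    return np.concatenate([upsample_nn(m, out_h, out_w) for m in maps], axis=0)

# two inner layers at decreasing resolution -> one dense per-pixel volume
maps = [np.zeros((64, 8, 8)), np.zeros((128, 4, 4))]
hc = hypercolumn(maps, 16, 16)
assert hc.shape == (192, 16, 16)
```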
4.3 FineNet Unsupervised Criterion
To train FineNet some form of loss function is required. One possible solution would be to simply use an MSE criterion between the network output and a high-quality ground-truth depth map. This would allow the network to implicitly learn how to reconstruct detailed faces from a single image. Unfortunately, as mentioned in Section 3.2.2, a large dataset of detailed facial geometries with their corresponding 2D images is currently unavailable. Furthermore, a synthetic dataset for this task cannot be generated using morphable models as there is no known model that captures the diversity of fine facial details. Instead, we propose an unsupervised learning process where the loss criterion is determined by an axiomatic model. To achieve that, we need to find a measure that relates the output depth map to the 2D image. To that end, we resort to Shape from Shading (SfS).
Recent results in SfS [20, 46, 13, 30, 29] have shown that when given an initial rough surface, subtle geometry details can be accurately recovered under various lighting conditions and multiple surface albedos. This is achieved by optimizing some objective function which ties the geometry to the input image. In our case, an initial surface is produced by CoarseNet and its depth map representation is fed into FineNet along with the input image. We then formulate an unsupervised loss criterion based on the SfS objective function, transforming the problem from an online optimization problem to a regression one.
4.3.1 From SfS Objective to Unsupervised Loss
Our unsupervised loss criterion was formulated in the spirit of [30, 29]. The core of our loss function is an image formation term, which describes the connection between the network's output depth map and its input image. This term drives the network to learn fine detail recovery and is defined as

$E_{sh} = \left\| \rho \cdot \vec{l}^{\,T} Y\!\left(\vec{n}(z)\right) - I \right\|_2^2 .$

Here, $z$ is the reconstructed depth map, $I$ is the input intensity image, $\rho$ is the albedo image, and $\vec{l}$ are the first-order spherical harmonics coefficients. $Y(\vec{n})$ represents the matching spherical harmonics basis,

$Y(\vec{n}) = \left(1,\; n_x,\; n_y,\; n_z\right)^T,$

where $\vec{n}(z)$ is the normal, expressed as a function of the depth. Notice that while $I$ is an input to FineNet, the scene lighting and albedo map are unknowns. Generally, the need to recover both lighting and albedo is part of the ambiguity in SfS problems. However, here we can utilize the fact that we do not solve a general SfS problem, but one constrained to human faces. This is done by limiting the space of possible albedos to a low dimensional 3DMM texture subspace,
$\rho = \mu_T + A_T \, \alpha_T,$

where $\mu_T$ is the average face texture, $A_T$ is a principal component basis and $\alpha_T$ is the corresponding coefficient vector. In our implementation, a small number of albedo coefficients were used.
Now, as shown in , the global lighting can be correctly recovered by assuming the average facial albedo, $\mu_T$, and using the coarse depth map, $z_{coarse}$, as follows:

$\vec{l}^{\,*} = \arg\min_{\vec{l}} \left\| \mu_T \cdot \vec{l}^{\,T} Y\!\left(\vec{n}(z_{coarse})\right) - I \right\|_2^2 .$
Note that this is an overdetermined linear problem that can be easily solved using least squares. Given the lighting coefficients, the albedo can then be recovered as

$\alpha_T^{*} = \arg\min_{\alpha_T} \left\| \left(\mu_T + A_T \alpha_T\right) \cdot \vec{l}^{*T} Y\!\left(\vec{n}(z_{coarse})\right) - I \right\|_2^2 .$
As in Equation 11, this is an overdetermined linear problem that can be solved directly. Based on the resulting albedo and lighting coefficients, we can calculate $E_{sh}$ and its gradient with respect to the depth $z$. A few recovery samples are presented in Figure 5.
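The shading model and the least-squares lighting recovery can be sketched together; the synthetic check below shades random normals with a known lighting vector and recovers it exactly (all names are illustrative):

```python
import numpy as np

# First-order spherical harmonics shading and lighting recovery. With the
# albedo and normals fixed, the SH coefficients solve an overdetermined
# linear system via least squares.
def sh_basis(normals):
    """(N,3) unit normals -> (N,4) basis [1, nx, ny, nz] (constants folded into l)."""
    return np.concatenate([np.ones((normals.shape[0], 1)), normals], axis=1)

def shade(normals, albedo, l):
    """Rendered intensity per pixel: rho * l^T Y(n)."""
    return albedo * (sh_basis(normals) @ l)

def recover_lighting(I, albedo, normals):
    """Least-squares fit of l in  albedo * Y(n) @ l = I."""
    A = albedo[:, None] * sh_basis(normals)
    l, *_ = np.linalg.lstsq(A, I, rcond=None)
    return l

# Synthetic sanity check: shade with a known l, then recover it.
rng = np.random.default_rng(2)
normals = rng.standard_normal((50, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedo = np.full(50, 0.6)
l_true = np.array([0.8, 0.1, -0.2, 0.3])
I = shade(normals, albedo, l_true)
assert np.allclose(recover_lighting(I, albedo, normals), l_true)
```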
To regularize the solution, fidelity and smoothness terms are added to the criterion of FineNet,

$E_{fid} = \left\| z - z_{coarse} \right\|_2^2, \qquad E_{smooth} = \left\| \Delta z \right\|_1,$

where $\Delta$ is the discrete Laplacian operator. These terms guarantee that the solution is smooth and does not stray from the prediction of CoarseNet. The final per-pixel loss function is then defined as

$E = \lambda_{sh} E_{sh} + \lambda_{fid} E_{fid} + \lambda_{smooth} E_{smooth},$

where the $\lambda$'s determine the balance between the terms and were fixed empirically. The gradient of $E$ with respect to $z$ is then calculated and used for backpropagation.
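A sketch of the regularizers and the weighted total loss; the Laplacian stencil, the border handling, and the λ values are illustrative assumptions:

```python
import numpy as np

# Fidelity + smoothness regularizers and the weighted total loss of FineNet.
def laplacian(z):
    """5-point discrete Laplacian with replicated borders."""
    zp = np.pad(z, 1, mode="edge")
    return zp[:-2, 1:-1] + zp[2:, 1:-1] + zp[1:-1, :-2] + zp[1:-1, 2:] - 4 * z

def fine_loss(E_sh, z, z_coarse, lam_sh=1.0, lam_fid=0.1, lam_smooth=0.1):
    """Weighted sum of the shading, fidelity, and smoothness terms."""
    E_fid = np.mean((z - z_coarse) ** 2)
    E_smooth = np.mean(np.abs(laplacian(z)))
    return lam_sh * E_sh + lam_fid * E_fid + lam_smooth * E_smooth

z = np.ones((4, 4))
# a constant plane is perfectly smooth and matches a constant coarse depth
assert fine_loss(0.0, z, np.ones((4, 4))) == 0.0
```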
4.3.2 Unsupervised Loss - a Discussion
The usage of an unsupervised criterion has some desired traits. First, it eliminates the need for an annotated dataset. Second, it ensures that the network is not limited by the performance of any algorithm or the quality of the dataset. This follows from the fact that the loss function depends entirely on the input, in contrast to supervised SfS learning schemes such as  and , where the data is generated by either photometric stereo or raw Kinect scans, respectively. In addition, unlike traditional SfS algorithms, the fact that the albedo and lighting coefficients are calculated only as part of the loss function means that at test time the network can produce accurate results directly from the intensity and depth inputs, without explicitly calculating the albedo and lighting information. Although CoarseNet could be trained to generate the lighting and albedo parameters, we chose not to include them in the pipeline for two reasons. First, the lighting and albedo are needed only for the training stage and have no use during testing. Second, both (11) and (12) are overdetermined systems which can be solved efficiently with least squares; thus, using a CNN for this task would be redundant.
4.4 End-to-End Network Training
Finally, in order to train FineNet, we connect it to CoarseNet using the proposed rendering layer, which is added between the two networks. Thus, a single end-to-end network is created. We then use images from the VGG face dataset  and propagate them through the framework. The forward pass can be divided into three main steps. First, each image is propagated through CoarseNet for four iterations, creating the coarse geometry representation. Then, the rendering layer transforms the 3DMM representation to a depth map. Finally, the depth map, alongside the original input image, is propagated through FineNet, resulting in the dense updated depth map. The criterion presented in Section 4.3 is then used to calculate the loss gradient. The gradient is backpropagated through the network, allowing us to train FineNet and fine-tune CoarseNet.
Note that the fact that CoarseNet was already trained is crucial for a successful training. This stems from the fact that the unsupervised loss function depends on the coarse initialization, which cannot be achieved without the synthetic data. In order to prevent CoarseNet from deviating too much from the original coarse solution, a fidelity criterion is added to CoarseNet’s output. This criterion is the MSE between the current CoarseNet solution and the original one. Gradients from both FineNet and the fidelity loss are then weighted and passed through CoarseNet, fine-tuning it, as presented in Figure 6.
In order to evaluate the proposed framework, we performed several experiments to test its accuracy on both 3D facial datasets and in-the-wild inputs. Both qualitative and quantitative evaluations are used to demonstrate the strength of the proposed solution. Our method is compared to the template-based method of , to the 3DMM-based method introduced as part of , and to the data-driven method of . Note that, unlike our method, all of the above require alignment information. We use the state-of-the-art alignment method of  to provide input for these algorithms.
For a qualitative analysis we show our results on in-the-wild images of faces. As can be seen in Figure 9, our method exposes the fine facial details, as opposed to [48, 34], and is more robust to expressions and different poses than . In addition, we compare our reconstructions with a state-of-the-art method for reconstruction from multiple images . The results are shown in Figure 7; one can see that our method produces comparable high-quality geometry from only a single image. Finally, Figure 8 shows our method's robustness to different poses, while the teaser figure shows some more reconstruction results.
For a quantitative analysis of our results we used the Face Recognition Grand Challenge dataset V2 . This dataset consists of roughly two thousand color facial images aligned with the ground-truth depth of each pixel. Each method provided an estimated depth image and a binary mask representing the valid pixels. For a fair comparison, we evaluated the accuracy of each method only on pixels denoted as valid by all the methods. As shown in Table 1, our method produces the lowest depth error among the tested methods.
Finally, as noted in Section 4.2, the fully convolutional FineNet can receive inputs of varying sizes. This size invariance is a vital property for our detail extraction network, as it allows the network to extract more details when a higher-quality input image is available. Figure 10 shows that although our network was trained only on lower-resolution images, it gracefully scales up to larger inputs.
The proposed framework separates the training process into two phases, starting with the training of CoarseNet using synthetic data. While using artificial data allows us to gather the large amounts of data required for training, it does present some limitations in terms of generalization. For example, we found that our network might fail when tested on unique facial features that were not part of the training data, such as beards, makeup, and glasses, as can be seen in the supplementary material. The second phase of the training is the unsupervised end-to-end training scheme. While we found that this step successfully trains FineNet, it only slightly tunes CoarseNet. We believe this is because the loss function of FineNet is more sensitive to high frequencies, while the 3DMM model captures mainly coarse facial geometries. Still, it would be interesting to see whether one can push the idea of end-to-end training further, to significantly affect CoarseNet and possibly even remove its dependency on synthetic data.
We proposed an end-to-end approach for detailed face reconstruction from a single image. The method is comprised of two main blocks, a network for recovering a rough estimation of the face geometry followed by a fine details reconstruction network. While the former is trained with synthetic images, the latter is trained with real facial images in an end-to-end unsupervised training scheme. To connect the two networks a differentiable rendering layer is introduced. As demonstrated by our comparisons, the proposed framework outperforms recent state-of-the-art approaches.
Research leading to these results was supported by European Community’s FP7- ERC program, grant agreement no. 267414.
-  O. Aldrian and W. A. Smith. A linear approach of 3D face shape and texture recovery using a 3D morphable model. In Proceedings of the British Machine Vision Conference, pages 75.1–75.10, 2010.
-  A. Bansal, B. Russell, and A. Gupta. Marr revisited: 2D-3D alignment via surface normal prediction. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5965–5974, 2016.
-  V. Blanz and T. Vetter. A morphable model for the synthesis of 3D faces. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pages 187–194. ACM Press/Addison-Wesley Publishing Co., 1999.
-  P. Breuer, K.-I. Kim, W. Kienzle, B. Scholkopf, and V. Blanz. Automatic 3D face reconstruction from single images or video. In Automatic Face & Gesture Recognition, 2008. FG’08. 8th IEEE International Conference on, pages 1–8. IEEE, 2008.
-  C. Cao, D. Bradley, K. Zhou, and T. Beeler. Real-time high-fidelity facial performance capture. ACM Transactions on Graphics (TOG), 34(4):46, 2015.
-  J. Carreira, P. Agrawal, K. Fragkiadaki, and J. Malik. Human pose estimation with iterative error feedback. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
-  M. Castelán and J. Van Horebeek. 3D face shape approximation from intensities using partial least squares. In Computer Vision and Pattern Recognition Workshops, 2008. CVPRW’08. IEEE Computer Society Conference on, pages 1–8. IEEE, 2008.
-  B. Chu, S. Romdhani, and L. Chen. 3D-aided face recognition robust to expression and pose variations. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 1907–1914. IEEE, 2014.
-  A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox. Flownet: Learning optical flow with convolutional networks. In The IEEE International Conference on Computer Vision (ICCV), pages 2758–2766, December 2015.
-  P. Dou, Y. Wu, S. K. Shah, and I. A. Kakadiaris. Robust 3D face shape reconstruction from single images via two-fold coupled structure learning. In Proc. British Machine Vision Conference, pages 1–13, 2014.
-  P. Garrido, M. Zollhöfer, D. Casas, L. Valgaerts, K. Varanasi, P. Pérez, and C. Theobalt. Reconstruction of personalized 3D face rigs from monocular video. ACM Transactions on Graphics (TOG), 35(3):28, 2016.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
-  Y. Han, J.-Y. Lee, and I. So Kweon. High quality shape from a single RGB-D image under uncalibrated natural illumination. In Proceedings of the IEEE International Conference on Computer Vision, pages 1617–1624, 2013.
-  B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 447–456, 2015.
-  T. Hassner. Viewing real-world faces in 3D. In Proceedings of the IEEE International Conference on Computer Vision, pages 3607–3614, 2013.
-  T. Hassner, S. Harel, E. Paz, and R. Enbar. Effective face frontalization in unconstrained images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4295–4304, 2015.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
-  V. Kazemi, C. Keskin, J. Taylor, P. Kohli, and S. Izadi. Real-time face reconstruction from a single depth image. In 2014 2nd International Conference on 3D Vision, volume 1, pages 369–376. IEEE, 2014.
-  V. Kazemi and J. Sullivan. One millisecond face alignment with an ensemble of regression trees. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014.
-  I. Kemelmacher-Shlizerman and R. Basri. 3D face reconstruction from a single image using a single reference face shape. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(2):394–405, 2011.
-  I. Kemelmacher-Shlizerman and S. M. Seitz. Face reconstruction in the wild. In 2011 International Conference on Computer Vision, pages 1746–1753. IEEE, 2011.
-  Z. Lei, Q. Bai, R. He, and S. Z. Li. Face shape recovery from a single image using CCA mapping between tensor spaces. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–7. IEEE, 2008.
-  H. Li, Z. Lin, X. Shen, J. Brandt, and G. Hua. A convolutional neural network cascade for face detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5325–5334, 2015.
-  C. Liu, J. Yuen, A. Torralba, J. Sivic, and W. T. Freeman. SIFT flow: Dense correspondence across different scenes. In European Conference on Computer Vision, pages 28–42. Springer, 2008.
-  F. Liu, D. Zeng, J. Li, and Q. Zhao. Cascaded regressor based 3D face reconstruction from a single arbitrary view image. arXiv preprint arXiv:1509.06161, 2015.
-  F. Liu, D. Zeng, Q. Zhao, and X. Liu. Joint face alignment and 3D face reconstruction. In Proc. European Conference on Computer Vision, Amsterdam, The Netherlands, October 2016.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
-  F. Maninchedda, C. Häne, M. R. Oswald, and M. Pollefeys. Face reconstruction on mobile devices using a height map shape model and fast regularization. In 3D Vision (3DV), 2016 International Conference on, pages 489–498. IEEE, 2016.
-  R. Or-El, R. Hershkovitz, A. Wetzler, G. Rosman, A. M. Bruckstein, and R. Kimmel. Real-time depth refinement for specular objects. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4378–4386, 2016.
-  R. Or-El, G. Rosman, A. Wetzler, R. Kimmel, and A. M. Bruckstein. RGBD-Fusion: Real-time high precision depth recovery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5407–5416, 2015.
-  O. M. Parkhi, A. Vedaldi, and A. Zisserman. Deep face recognition. In British Machine Vision Conference, pages 41.1–41.12, 2015.
-  P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek. Overview of the face recognition grand challenge. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), volume 1, pages 947–954. IEEE, 2005.
-  M. Piotraschke and V. Blanz. Automated 3D face reconstruction from multiple images using quality measures. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3418–3427, 2016.
-  E. Richardson, M. Sela, and R. Kimmel. 3D face reconstruction by learning from synthetic data. In 3D Vision (3DV), 2016 International Conference on, pages 460–469. IEEE, 2016.
-  J. Roth, Y. Tong, and X. Liu. Unconstrained 3D face reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2606–2615, 2015.
-  J. Roth, Y. Tong, and X. Liu. Adaptive 3D face reconstruction from unconstrained photo collections. CVPR, 2016.
-  A. Savran, N. Alyüz, H. Dibeklioğlu, O. Çeliktutan, B. Gökberk, B. Sankur, and L. Akarun. Bosphorus database for 3D face analysis. In European Workshop on Biometrics and Identity Management, pages 47–56. Springer, 2008.
-  M. Sela, Y. Aflalo, and R. Kimmel. Computational caricaturization of surfaces. Computer Vision and Image Understanding, 141:1 – 17, 2015.
-  Y. Sun, X. Wang, and X. Tang. Deep convolutional network cascade for facial point detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3476–3483, 2013.
-  S. Suwajanakorn, I. Kemelmacher-Shlizerman, and S. M. Seitz. Total moving face reconstruction. In European Conference on Computer Vision, pages 796–812. Springer, 2014.
-  S. Tulyakov and N. Sebe. Regressing a 3D face shape from a single image. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 3748–3755. IEEE, 2015.
-  P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on, volume 1, pages I–511. IEEE, 2001.
-  S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional pose machines. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4724–4732, June 2016.
-  T. Weise, H. Li, L. Van Gool, and M. Pauly. Face/off: Live facial puppetry. In Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA ’09, pages 7–16, New York, NY, USA, 2009. ACM.
-  Y. Yoon, G. Choe, N. Kim, J.-Y. Lee, and I. S. Kweon. Fine-scale surface normal estimation using a single NIR image. In European Conference on Computer Vision, pages 486–500, 2016.
-  L.-F. Yu, S.-K. Yeung, Y.-W. Tai, and S. Lin. Shading-based shape refinement of RGB-D images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1415–1422, 2013.
-  X. Zhu, Z. Lei, X. Liu, H. Shi, and S. Z. Li. Face alignment across large poses: A 3D solution. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
-  X. Zhu, Z. Lei, J. Yan, D. Yi, and S. Z. Li. High-fidelity pose and expression normalization for face recognition in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 787–796, 2015.
-  J. Zienkiewicz, A. Davison, and S. Leutenegger. Real-time height map fusion using differentiable rendering. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016.
Appendix A Supplementary Qualitative Results
In Figure 11 we present additional qualitative comparisons. First, note how our network correctly infers the face alignment without any external information, producing alignment similar to the state-of-the-art method of . The proposed method produces fine facial details, unlike [34, 48], while being more robust to varying expressions than the template-based method of . Figure 12 shows additional reconstructions.
Appendix B Synthetic Data
Figure 13 visualizes sampled synthetic examples rendered over random backgrounds. The rendered faces differ extensively in their geometry, texture, illumination, and reflectance properties.
B.1 Generalizing from Synthetic Data
While data generation grants us flexibility and allows the creation of large-scale datasets, synthetic data still has some limitations. As noted in , while training on synthetic faces generally produces plausible results on in-the-wild images, the network might fail when the input contains details absent from the synthetic dataset, such as glasses or facial hair. Figure 14 shows how our method handles such examples compared to . Both methods show some robustness to eyeglasses, even when the eyes themselves are occluded. As for facial hair, a dominant beard might confuse both methods, causing them to misalign the chin or the mouth. Still, our method produces more plausible results than those of .
Appendix C Further Analysis
Next, we present a few additional experiments conducted on the different elements of the proposed network.
A key property of iterative networks is convergence. To validate that CoarseNet meets this requirement, we calculated the average change in the output of CoarseNet between different iterations. As can be seen in Figure 15, the network indeed converges after a few iterations.
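The convergence check above can be sketched in code. This is a minimal, hypothetical illustration: `step` stands in for one CoarseNet pass, modeled here as a simple contraction toward a fixed point so the decaying change between iterations (as in Figure 15) is visible; none of these names come from the paper's implementation.

```python
import numpy as np

def average_change_per_iteration(step, x0, n_iters=8):
    """Apply `step` repeatedly and record the mean absolute change
    of the output between consecutive iterations."""
    changes = []
    x = x0
    for _ in range(n_iters):
        x_next = step(x)
        changes.append(float(np.mean(np.abs(x_next - x))))
        x = x_next
    return changes

rng = np.random.default_rng(0)
target = rng.normal(size=100)             # stand-in for the converged geometry
step = lambda x: x + 0.5 * (target - x)   # each pass halves the residual
changes = average_change_per_iteration(step, np.zeros(100))
# The per-iteration change shrinks geometrically, i.e. the iteration converges.
```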
As detailed in the paper, FineNet starts with a set of convolutional blocks from the VGG Face Net , each followed by a pooling layer. The outputs of these blocks are then concatenated to form a set of dense feature maps. While using more VGG blocks could provide more data for the final prediction, it would also result in a larger network, increasing the overall training and runtime complexity. Since shape-from-shading relies mainly on local features, we chose to truncate the network after the third pooling operation. As shown in Figure 16, using only a single block results in discontinuities and artifacts, whereas using two or more blocks produces reasonable results.
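The dense feature stack described above can be sketched as follows. This is a schematic, hypercolumn-style illustration with illustrative shapes and channel counts, not the paper's exact configuration: outputs of three pooled blocks are upsampled back to a common resolution and concatenated along the channel axis.

```python
import numpy as np

def upsample_nn(fmap, factor):
    """Nearest-neighbour upsampling: (C, H, W) -> (C, H*factor, W*factor)."""
    return fmap.repeat(factor, axis=1).repeat(factor, axis=2)

def dense_feature_stack(block_outputs):
    """Upsample every block's feature map to the finest spatial resolution
    present, then concatenate along the channel dimension."""
    h = max(f.shape[1] for f in block_outputs)
    ups = [upsample_nn(f, h // f.shape[1]) for f in block_outputs]
    return np.concatenate(ups, axis=0)

# Outputs of three blocks at 1/2, 1/4 and 1/8 of a 64x64 input (channels grow
# as the resolution shrinks, as in VGG-style networks).
blocks = [np.zeros((64, 32, 32)), np.zeros((128, 16, 16)), np.zeros((256, 8, 8))]
stack = dense_feature_stack(blocks)
# stack.shape == (448, 32, 32): all scales aligned to the finest block.
```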
Another interesting property of FineNet presented in the paper is its robustness to different input sizes, which allows it to extract more details when a high-resolution input is given. Note that the same does not hold for CoarseNet, which uses a fixed averaging operator. However, since CoarseNet recovers only the coarse geometry, it does not require a high-resolution input and would not benefit from one. In practice, we always scale the input to CoarseNet to a fixed resolution, while feeding FineNet with inputs at the desired scale.
Appendix D Supplementary Quantitative Analysis
Here, we present a quantitative analysis of the performance of the proposed method. The absolute error heat maps in Figure 17 present the typical error distribution of the proposed method versus those of other techniques [20, 34, 48].
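Absolute error heat maps of the kind shown in Figure 17 can be computed per pixel as sketched below. This is a minimal illustration under assumed names (`depth_pred`, `depth_gt`, `mask` are not from the paper); background pixels are masked out before aggregating statistics.

```python
import numpy as np

def error_heatmap(depth_pred, depth_gt, mask):
    """Per-pixel absolute depth error; invalid (background) pixels set to NaN."""
    err = np.abs(depth_pred - depth_gt)
    err[~mask] = np.nan
    return err

# Toy example: a constant 0.05 reconstruction error over the valid region.
depth_gt = np.linspace(0.0, 1.0, 16).reshape(4, 4)
depth_pred = depth_gt + 0.05
mask = np.ones((4, 4), dtype=bool)
mask[0, 0] = False                       # mark one background pixel
heat = error_heatmap(depth_pred, depth_gt, mask)
mean_err = float(np.nanmean(heat))       # mean error over valid pixels only
```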