A Neural Network for Detailed Human Depth Estimation from a Single Image
This paper presents a neural network to estimate a detailed depth map of the foreground human in a single RGB image. The result captures geometry details such as cloth wrinkles, which are important in visualization applications. To achieve this goal, we separate the depth map into a smooth base shape and a residual detail shape and design a network with two branches to regress them respectively. We design a training strategy to ensure both base and detail shapes can be faithfully learned by the corresponding network branches. Furthermore, we introduce a novel network layer to fuse a rough depth map and surface normals to further improve the final result. Quantitative comparison with fused ‘ground truth’ captured by real depth cameras and qualitative examples on unconstrained Internet images demonstrate the strength of the proposed method. Our code will be released at Link
1 Introduction
Understanding human images is an important problem in computer vision, with applications ranging from human-computer interaction and surveillance to telecommunication. Many works [25, 22, 34, 33, 21, 18] have been developed to recover 2D or 3D skeleton joints from an RGB image. Since the skeleton captures only sparse information about the human body, DensePose estimates a dense UV map (i.e. a correspondence map between the input image and a 3D template model). But this UV map cannot recover 3D shape without additional 3D pose information, which limits its application.
On the other hand, many works [3, 19, 7, 31, 13, 17, 4, 10, 27] recover a dense 3D deformable model of the human body from a single image, e.g. the SCAPE and SMPL models, which are learned from large datasets of scanned body shapes. While these methods generate 3D models, they only infer the naked body shape without capturing clothing details.
This paper aims at recovering a detailed depth map of the foreground human from a single RGB image. This problem was studied in earlier work with synthetic human images. Another recent work recovers a volumetric 3D model of the imaged person. Results from both methods are too coarse for many applications. In comparison, we design a neural network to estimate highly detailed depth maps that are fine enough to capture cloth wrinkles, which might be exploited in telepresence applications such as the Microsoft Holoportation.
Our network is designed with two novel insights. First, we argue it is important to separate the depth into a smooth base shape and a residual detail shape and to regress them separately. The base shape captures the large-scale geometry layout, while the detail shape captures small bumps such as cloth wrinkles. The value range of the base shape is on the order of one meter, while that of the detail shape is a few centimeters. We therefore design a network with two branches, one for each component, to facilitate training. Specifically, we propose a two-stage training strategy to ensure the effectiveness of this separation: the two branches are trained separately in the first stage and then fine-tuned together in the second stage. Second, we follow the intuition of prior work and estimate surface normals to facilitate depth map estimation. Specifically, we generalize an algorithm that fuses surface normals and a coarse depth into an iterative formulation. In this way, we build a parameter-free network layer that fuses the estimated normals and a coarse depth map for improved results.
Our final network produces visually appealing, detailed depth maps from a single RGB image. Evaluations on our own captured real data and on unconstrained online images demonstrate its effectiveness. We will publish our dataset and source code with the paper to facilitate further research.
2 Related works
3D Human Pose Estimation. With the recent development of deep convolutional neural networks (CNNs), 3D human pose estimation has improved significantly [21, 33, 18, 22]. Despite differences in network architecture, many works [25, 29, 22, 33, 35] use a likelihood heatmap to represent the distribution of each joint's location and show better performance than directly regressing joint coordinates. Instead of taking the maximum of a heatmap, Sun et al. compute the expected coordinates from the heatmap to reduce quantization artifacts. The recent DensePose work can even recover a dense UV coordinate for each pixel on the human body. Unlike our method, most of these methods recover only sparse 3D joint positions; while DensePose provides a dense result, it is a 2D UV coordinate map rather than 3D geometry. We adopt a pose estimation network as an intermediate module and use its results to guide the dense depth recovery.
Body Shape Estimation. The 3D shape of a human body can be parameterized by the SCAPE or SMPL models [3, 19] with two independent sets of parameters controlling the skeleton pose and the body shape respectively. Both models are derived from large sets of scanned 3D human shapes. Given these parametric human models, many methods [3, 19, 7, 31, 13, 17, 27] recover dense human body shape from a single RGB image by estimating the shape and pose parameters. Meanwhile, there are also non-parameterized methods [36, 35] that directly regress a discretized body shape representation from an RGB image. All these methods recover only the 3D shape of the naked human body; geometry details such as clothes are not modeled, which makes them unsuitable for visualization tasks. While one method can predict the SMPL model with cloth wrinkles, it requires a video of a moving person performing a designed pose. To overcome this limitation, our network recovers shape details from a single image.
Generic Dense Depth Estimation. Depth estimation from a single image has gained increasing attention in the computer vision community. Most works, such as [37, 38, 20, 15, 39, 41, 9, 16], target indoor and outdoor scenes. We focus on depth estimation of humans, which allows us to build a much stronger shape prior than these generic depth estimation methods: our network first estimates the skeleton joints and a body part segmentation to facilitate the depth estimation.
3 Overview
The overall structure of the proposed network is shown in Figure 1. The input is a 256×256 3-channel RGB image containing a human in the foreground. The network first computes heatmaps of the 3D skeleton joints and a body part segmentation through two Hourglass networks, referred to as Skeleton-Net and Segmentation-Net respectively in this paper. We then concatenate the outputs of these two modules with the input RGB image and feed them to the Depth-Net to compute the initial depth maps, which consist of a base shape and a detail shape.
In a separate branch, another Hourglass network, referred to as Normal-Net, computes a surface normal map of the human body from the input RGB image and the segmentation mask generated by the Segmentation-Net. We then compose the base and detail shapes, and fuse the composed shape with the normal map through a parameter-free shape refinement module to produce the final shape.
During training, we first pre-train the Skeleton-Net, Segmentation-Net, and Depth-Net on synthetic data, while the Normal-Net is pre-trained on a deforming fibre dataset. We then fine-tune the complete network on a real image dataset that we captured with a depth camera, keeping the parameters of the Skeleton-Net and Segmentation-Net fixed.
4 Segmentation and Skeleton Networks
As observed in BodyNet, 3D joints and body part segmentation are highly correlated with the final estimation of human shape. We therefore apply two Hourglass networks to estimate heatmaps of the 3D joints and a body part segmentation from the input RGB image. As demonstrated in our ablation studies, this intermediate supervision of 3D joints and body part segmentation is essential for depth estimation, especially for the base shape.
Here, a human body contains 16 joints and 14 body parts. For each joint, the Skeleton-Net predicts a heatmap indicating the probability of its position. The 3D joints are defined in the camera coordinate system, where the x- and y-axes are aligned with the image axes and the z-axis is the camera principal direction. We discretize the z coordinate between [-0.6, 0.6] meters into 19 bins, with the depth of the pelvis joint set as 0. The x and y coordinates are discretized into 64 bins over the image plane. Therefore, the network estimates a heatmap of size 64×64×19 for each joint, resulting in a skeleton representation as a 64×64×19×16 heatmap.
Unlike prior work, we discard the 2D joint estimation sub-network and predict the 3D joints directly, which makes our network more compact. To achieve good accuracy with this compact network, we adopt integral regression to train the Skeleton-Net.
For body part segmentation, the Segmentation-Net predicts probability heatmaps for the 14 body parts and the background, resulting in a 64×64×15 heatmap. Following previous work on human part segmentation, we adopt the spatial cross-entropy loss in training.
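The spatial cross-entropy loss averages per-pixel cross-entropy between the predicted class distribution and the ground-truth labels. A minimal NumPy sketch, where the (C, H, W) tensor layout and function name are our own assumptions:

```python
import numpy as np

def spatial_cross_entropy(logits, labels):
    """logits: (C, H, W) per-pixel class scores (14 parts + background => C=15).
    labels: (H, W) integer class ids. Returns mean per-pixel cross-entropy."""
    logits = logits - logits.max(axis=0, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    h, w = labels.shape
    # pick the log-probability of the ground-truth class at every pixel
    picked = log_prob[labels, np.arange(h)[:, None], np.arange(w)[None, :]]
    return -picked.mean()
```

With uniform logits the loss equals log(C), and it approaches zero as the correct class dominates.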
5 Depth Estimation Network
To better estimate a detailed depth map with cloth wrinkles, we divide the depth map of a human body into a smooth base shape and a residual detail shape: the base shape captures the main geometry layout of the human body, while the detail shape describes local geometry details such as cloth wrinkles.
As shown in Figure 2, which corresponds to the red dashed rectangle in Figure 1, the Depth-Net is composed of a U-Net followed by a two-branch architecture. The concatenation of the RGB image and the bilinearly-upsampled (64×64 to 256×256) heatmaps of 3D joints and segmentation is fed into this network, and the two branches, namely the base and detail shape branches, output a base shape and a detail shape separately. Because the human layout spans roughly a one-meter range at low spatial frequency in the image plane, while cloth wrinkles span only several centimeters at higher frequency, the two branches can concentrate on these two different distributions respectively.
To effectively train the Depth-Net, we shift the ground-truth depth to have zero median and decouple this zero-median depth image into a base shape and a detail shape. Specifically, we apply a bilateral filter to the depth image to smooth out the details and obtain the base shape, denoted Z_b = B(Z*), where Z* is the zero-median ground-truth depth image and B(·) is the bilateral filter, whose range sigma is specified in meters and spatial sigma in pixels. The ground truth of the detail shape is computed as the residual Z_d = Z* − Z_b.
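The decomposition above can be sketched in NumPy; the brute-force bilateral filter below and its sigma values are illustrative, not the paper's exact settings:

```python
import numpy as np

def bilateral_filter(depth, sigma_space=3.0, sigma_depth=0.1, radius=4):
    """Brute-force bilateral filter: smooths depth while preserving large steps."""
    h, w = depth.shape
    out = np.empty_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_space**2))
    pad = np.pad(depth, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-(patch - depth[i, j])**2 / (2 * sigma_depth**2))
            weight = spatial * range_w
            out[i, j] = (weight * patch).sum() / weight.sum()
    return out

def decompose_depth(depth):
    """Split a depth map into a zero-median smooth base plus a residual detail."""
    depth = depth - np.median(depth)   # zero-median depth, as in the paper
    base = bilateral_filter(depth)     # smooth base shape
    detail = depth - base              # residual detail shape
    return base, detail
```

By construction, adding the base and detail shapes recovers the zero-median depth exactly.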
For the base shape, we discretize the depth range [-0.6, 0.6] meters into 19 bins per pixel. A softmax layer following a residual block in the base branch generates a 256×256×19 heatmap indicating per-pixel probabilities over the depth bins, from which a 256×256 depth map is computed by an integral operation. Meanwhile, the detail branch directly regresses the higher-frequency residual depth map of the detail shape. Finally, we add the base and detail shapes together to obtain the composed shape.
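The integral operation that turns the per-pixel bin heatmap into a continuous depth value is a soft-argmax: a softmax over the 19 bins followed by an expectation over the bin centers. A minimal sketch, using the [-0.6, 0.6] m range and 19 bins from the text; the (K, H, W) tensor layout is an assumption:

```python
import numpy as np

def integral_depth(logits, z_min=-0.6, z_max=0.6):
    """logits: (K, H, W) per-pixel scores over K depth bins.
    Returns an (H, W) expected depth map via soft-argmax."""
    k = logits.shape[0]
    centers = np.linspace(z_min, z_max, k)             # depth bin centers
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    prob = e / e.sum(axis=0, keepdims=True)            # softmax over bins
    return np.tensordot(centers, prob, axes=(0, 0))    # per-pixel expectation
```

Unlike a hard argmax, the expectation varies smoothly with the logits, so the quantization of the 19 bins does not limit the output resolution.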
To guide the base and detail branches to focus on their target domains (base shape and detail shape), we train the Depth-Net with a two-stage strategy. In the first stage, the base and detail branches are pre-trained separately to obtain well-conditioned initial weights. In the second stage, we train the network end-to-end with a weighted combination of three losses, keeping intermediate supervision on the base and detail shape branches.
5.1 Training stage 1
Once we have the ground-truth base and detail shapes, we pre-train the two branches independently with the following loss functions:

L_b = mean(H_δb(D_b − Z_b)),  L_d = mean(H_δd(D_d − Z_d)),

where D_b and D_d are the regressed base and detail depths, Z_b and Z_d are the corresponding ground truths, H_δ is the Huber loss function, and δb and δd are set as 0.2 meters and 0.05 meters respectively. Here, H_δ is defined as:

H_δ(x) = 0.5 x^2 if |x| ≤ δ, and δ(|x| − 0.5 δ) otherwise.
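A NumPy sketch of this per-branch supervision; the thresholds (0.2 m for the base branch, 0.05 m for the detail branch) follow the text, while the standard Huber form and the function names are our own assumptions:

```python
import numpy as np

def huber(x, delta):
    """Standard Huber penalty: quadratic near zero, linear beyond delta."""
    a = np.abs(x)
    return np.where(a <= delta, 0.5 * a**2, delta * (a - 0.5 * delta))

def branch_losses(base_pred, base_gt, detail_pred, detail_gt):
    # delta = 0.2 m for the base branch, 0.05 m for the detail branch
    l_base = huber(base_pred - base_gt, 0.2).mean()
    l_detail = huber(detail_pred - detail_gt, 0.05).mean()
    return l_base, l_detail
```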
This pre-training helps the two branches focus on different aspects of shape estimation: the base branch captures the main geometry layout, while the detail branch adds high-frequency wrinkles.
5.2 Training stage 2
In this stage, we jointly train the two branches using the combined loss below:

L = λ_b L_b + λ_d L_d + λ_c L_c,

where λ_b, λ_d, and λ_c are scalar loss weights. Here, the composed loss L_c is formulated as:

L_c = mean(T_τ(D_b + D_d − Z*)),

where Z* is the ground-truth depth and τ is a threshold in meters set empirically in our experiments. T_τ is the truncated L1 loss, defined as:

T_τ(x) = min(|x|, τ).
Stage 2 improves the consistency between the composed shape and the ground truth. The truncated L1 loss clips the per-pixel loss to a bounded range, which prevents training from being biased by large shape errors due to imprecise poses that could otherwise overwhelm the errors from missing cloth wrinkles. As shown in our experiments, this loss helps the detail shape branch capture details.
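A minimal NumPy sketch of a truncated L1 composed loss consistent with this description; the threshold value used below is illustrative:

```python
import numpy as np

def truncated_l1(pred, gt, tau):
    """Truncated L1: clips each pixel's absolute error at tau, so a few
    grossly wrong pixels (e.g. from an imprecise pose) cannot dominate
    the many small errors from missing cloth wrinkles."""
    return np.minimum(np.abs(pred - gt), tau).mean()
```

With a plain L1 loss, one pixel that is off by 10 m would contribute as much gradient signal as hundreds of centimeter-scale wrinkle errors; truncation caps its influence at tau.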
6 Normal Network and Depth Refinement
As observed in prior work, regressing surface normals is often more reliable than regressing depth directly. We therefore include a network that regresses the surface normal at every pixel and use this information to refine the composed depth.
6.1 Normal Network
Here, an Hourglass network takes an RGB image concatenated with the segmentation mask from the Segmentation-Net as input and outputs a normal map.
This network is trained with ground-truth normals computed from the ground-truth depth map. At each pixel, we take the nearby 3D points and estimate the normal direction by standard linear least-squares fitting. The loss function is the mean angular difference between the ground-truth and regressed normals.
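Estimating a normal by least-squares plane fitting over the back-projected 3D points around a pixel can be sketched as follows; the neighborhood size, back-projection, and camera-facing sign convention are our own assumptions:

```python
import numpy as np

def normal_from_points(points):
    """points: (N, 3) 3D points around a pixel. Returns the unit normal of the
    least-squares plane: the direction of smallest variance of the centered
    points, i.e. the last right singular vector."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]                    # direction of least variance
    if n[2] > 0:                  # orient toward the camera (negative z)
        n = -n
    return n / np.linalg.norm(n)
```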
6.2 Depth Refinement
We fuse the composed depth and the surface normals to improve depth quality. Similar to prior work, we formulate the problem with two constraints: first, the tangent vectors of the final shape should be perpendicular to the input surface normal at each pixel; second, the final shape should stay close to the initial shape. Rather than solving a large linear system as a global optimization, which is impractical inside a neural network, we introduce an iterative solution.
At each iteration, we update the depth of each pixel assuming the depths of its neighbors are fixed. Concretely, let n_p = (n_x, n_y, n_z) denote the normal of pixel p in the x, y, z directions, and z_p^t the depth of pixel p after the t-th iteration. At the (t+1)-th iteration, we update z_p^{t+1} for each pixel p with the depths of its neighboring pixels fixed at iteration t. Here, q denotes a neighboring pixel of p, and each pixel has 4 neighbors in the cardinal directions. The update function is defined as:

z_p^{t+1} = λ z_p^t + (1 − λ) / (2 |N(p)|) · Σ_{q ∈ N(p)} (z_p(n_p, q) + z_p(n_q, q)),

where N(p) is the neighborhood of p, z_p(n_p, q) is the depth of p that makes the edge (p − q) and n_p perpendicular, and z_p(n_q, q) is the depth of p that makes (p − q) and n_q perpendicular. Specifically, for a normal n = (n_x, n_y, n_z) they can be computed as:

z_p(n, q) = z_q^t − (n_x (x_p − x_q) + n_y (y_p − y_q)) / n_z.

Here, λ is a hyper-parameter balancing the two constraints (fixed at 0.4).
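The per-pixel update can be sketched with NumPy as follows; for simplicity the sketch assumes orthographic back-projection with unit pixel spacing and uses only the center pixel's own normal for each neighbor hint:

```python
import numpy as np

def refine_depth(depth, normals, lam=0.4, iters=5):
    """Iteratively fuse a coarse depth map with surface normals.
    depth:   (H, W) initial depth map.
    normals: (H, W, 3) unit normals, channels (n_x, n_y, n_z), x = column,
             y = row, n_z < 0 facing the camera.
    Each pixel moves toward the average depth implied by the perpendicularity
    constraints of its 4 neighbors, blended with its current depth by lam."""
    z = depth.copy()
    h, w = z.shape
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]      # 4-connected neighbors
    for _ in range(iters):
        hint_sum = np.zeros_like(z)
        hint_cnt = np.zeros_like(z)
        for di, dj in offsets:
            qi = slice(max(di, 0), h + min(di, 0))     # neighbor rows q
            pi = slice(max(-di, 0), h + min(-di, 0))   # matching rows p
            qj = slice(max(dj, 0), w + min(dj, 0))
            pj = slice(max(-dj, 0), w + min(-dj, 0))
            n = normals[pi, pj]
            # n . (p - q) = 0 with (p - q) = (-dj, -di) in pixels gives
            # z_p = z_q - (n_x * (-dj) + n_y * (-di)) / n_z
            hint = z[qi, qj] - (n[..., 0] * (-dj) + n[..., 1] * (-di)) / n[..., 2]
            hint_sum[pi, pj] += hint
            hint_cnt[pi, pj] += 1
        z = lam * z + (1 - lam) * hint_sum / hint_cnt
    return z
```

On a depth map that already satisfies the normal constraints exactly, this update is a fixed point; isolated depth errors are pulled toward the surface implied by the normals.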
The above shape refinement is iterated 5 times in our network to approximate the iterative solution of the original energy formulation. Figure 3 compares our method with the 'Kernel Regression' layer, which is also designed to fuse surface normals and depth, on a toy example. Figure 4 shows a comparison on real data, where our method also produces a more convincing result.
7 Experiments
To demonstrate the effectiveness of our method, we evaluate it with ablation studies and both qualitative and quantitative comparisons against other relevant works [36, 35], a surface-from-normals method, and a general depth estimation network. To test the performance of human shape estimation with fine-grained geometric details, we build our own dataset for evaluation.
Implementation Details. All input RGB images are cropped to center the person at size 256×256, assuming the person's bounding box is given. The RMSprop algorithm with a fixed learning rate is used. We first train the Segmentation-Net, Skeleton-Net, and Depth-Net on SURREAL, a large-scale synthetic human body dataset without geometric details. At this stage, the batch size is set to 6 for these three networks, and for the Depth-Net we apply only the base shape loss to train the base branch, since the synthetic data contain few geometric details. The Normal-Net is pre-trained on a deforming fibre dataset. After the base shape branch converges, which takes 10 epochs (12 hours on an RTX 2080 GPU), we fix the weights of the Skeleton-Net and Segmentation-Net and fine-tune the Depth-Net and Normal-Net jointly on our own captured data with a batch size of 1. This takes another 12 epochs (10 hours) for stage 1 and 8 epochs (6 hours) for stage 2. During inference, the whole pipeline takes 75.5 ms, or 61.1 ms without the iterative depth refinement, on an RTX 2080.
Dataset. We collected an RGBD dataset of real people: 26 different subjects performing simple actions, captured by a Microsoft Kinect2 camera. For the training data, we capture approximately 800 frames per person, yielding over 20,000 training depth images in total. For quantitative evaluation, we use depth cameras to capture video clips of a person holding a fixed pose and employ InfiniTAM to fuse the captured sequences. High-quality depth maps are then rendered from the fused mesh and camera poses with Blender. Our test data contain 5 different people, each captured in 12 different poses and 3 different clothing styles.
Note that we use the fused depth maps only for evaluation; the training data are raw depth maps, since it is infeasible to fuse meshes for the thousands of training poses.
| Method | Acc. @ τ1 (%) ↑ | Acc. @ τ2 (%) ↑ | Acc. @ τ3 (%) ↑ | MAE ↓ |
|---|---|---|---|---|
| Ours (Final Shape) | 30.06 | 51.57 | 75.76 | 3.208 |
| Ours (Base + Detail) | 29.24 | 50.93 | 75.52 | 3.282 |
| Ours (Base Shape) | 28.03 | 50.10 | 75.32 | 3.396 |
| Laina et al. | 19.84 | 36.48 | 60.94 | 4.902 |
| Kovesi et al. | 15.51 | 29.87 | 55.39 | 5.789 |
7.1 Quantitative Results
Figure 5 shows our results compared with the fused ground-truth depth. Our method successfully captures cloth wrinkles and produces visually appealing 3D meshes from real test images, even though the model is trained on noisy raw depth images.
Comparison with [36, 16, 35, 14]. Only a few works can compute a depth map of the human body from a single image. We compare with the two most recent works [36, 35] and a representative general depth estimation framework, and, since we use a normal map to refine human depth in our framework, we also evaluate a surface-from-normals method with the normals produced by our Normal-Net. Finally, to show the generalizability of our network, we replace our segmentation and 3D pose estimation modules with off-the-shelf networks [32, 36] and evaluate the performance of the Depth-Net. For a fair comparison, we fine-tune [36, 16] on our dataset. Unfortunately, BodyNet requires a volumetric shape representation and its loss function contains multi-view constraints, so it cannot be fine-tuned on our data. We employ pixel accuracy, the percentage of pixels with depth errors smaller than a specified threshold, as the evaluation metric. Table 1 shows that the final shape after refinement always achieves the highest accuracy. We note that our network still works well with off-the-shelf segmentation and 3D pose estimation methods, and that deducing the correct human shape from normals alone is difficult due to noisy normal estimation and depth discontinuities. We also report the Mean Absolute Error (MAE) as a more global metric to show that our method captures not only details but also the overall shape. Furthermore, Figure 6 plots the Cumulative Distribution Function (CDF) of the shape errors for the different methods, illustrating that ours outperforms the others across error scales.
Figure 7 gives a more intuitive visualization of the comparison; the first row shows heatmaps of the depth errors. The SURREAL method produces incorrect human body segmentation, which leads to large errors at the boundary. BodyNet suffers significant quantization errors due to its coarse volumetric representation. The general depth estimation network generates very rough depth maps with large structural errors because it lacks the intermediate supervision of 3D joints and segmentation. The surface-from-normals result shows that it cannot handle depth-discontinuous cases, such as hands placed in front of the torso.
7.2 Ablation Studies
In this section, we verify the effectiveness of the individual components of our method. To this end, we train five additional networks in the following settings and compare their results with ours.
Without Skeleton and Segmentation Cues: We discard the Skeleton-Net and Segmentation-Net and feed only the RGB image to the Depth-Net to predict human body depth, while all other conditions remain the same.
Without Depth Separation: We replace the two-branch architecture of the Depth-Net with a single branch. We train this network for the same number of epochs with the Huber loss

L = mean(H_δ(D − Z*)),

where D is the regressed depth, Z* is the ground-truth depth, and δ is set as 0.20 meters in this setting.
Only Stage 1 Training: We keep the two-branch architecture and train it with only the stage-1 losses for the same total number of epochs.
Only Stage 2 Training: The network is unchanged, but we train it directly with the stage-2 loss, without the well-initialized weights of the base and detail branches.
Huber Loss on Composed Shape: We follow the two-stage training strategy on the same network but use the Huber loss instead of the truncated L1 loss to define the composed loss in stage 2:

L_c = mean(H_δ(D_b + D_d − Z*)),

where δ is 0.20 meters in this setting.
We test the five settings above. Figures 8-12 show qualitative comparisons of their results. Specifically, Figure 8 shows that without the Segmentation-Net and Skeleton-Net, the Depth-Net loses high-level human body information such as 3D joints and body part segmentation, so the results show structural issues, e.g. broken meshes on some examples. Figure 9 clearly demonstrates that the network without the two-branch architecture cannot recover small-scale geometry details. Figures 10 and 11 show that the surfaces recovered under those two settings are very coarse: without the truncated L1 loss clipping the composed error in stage 2 to improve the consistency of the two branches, the large layout error may overwhelm the detail error and lead to unstable results from the two branches. Figure 12 shows that without stage 1 guiding the two branches toward their target distributions, the detail branch does not specialize in recovering small wrinkles. In summary, our full method produces the best shape details, main layout, and surface smoothness, demonstrating the effectiveness of separating the base and detail shapes and of the two-stage training with the truncated L1 loss on the composed shape.
| Setting | Acc. @ τ1 (%) ↑ | Acc. @ τ2 (%) ↑ | Acc. @ τ3 (%) ↑ | MAE ↓ |
|---|---|---|---|---|
| Only stage 1 | 26.64 | 48.14 | 72.61 | 3.592 |
| Only stage 2 | 27.89 | 50.31 | 74.87 | 3.332 |
| W/o truncated loss | 28.03 | 49.84 | 74.23 | 3.410 |
7.3 Qualitative Results
To demonstrate that our network generalizes to unconstrained data, Figure 13 shows our results on unconstrained Internet images. Our method successfully recovers shape details on these images as well. We further visualize the estimated surface normal maps, which encode the cloth wrinkles.
In our demo video, we demonstrate the performance of our method on video clips processed frame by frame. The results show that our method generates temporally coherent depths without explicitly modeling temporal consistency.
8 Conclusion
This paper proposes a neural network to estimate a detailed depth map of the human body from a single input RGB image. The recovered result captures fine cloth wrinkles and yields temporally coherent depths for video inputs, making it potentially useful for visualization applications such as the Microsoft Holoportation. This is achieved by separating and estimating the base shape and detail shape respectively, with a novel truncated L1 loss, and by a novel parameter-free shape refinement layer that further improves the final result with surface normals. Quantitative evaluation on lab data and qualitative examples on unconstrained Internet data demonstrate the success of the proposed method.
References
- (2018) Video based reconstruction of 3D people models. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8387-8397.
- (2018) DensePose: dense human pose estimation in the wild. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7297-7306.
- (2005) SCAPE: shape completion and animation of people. ACM Trans. on Graph. 24, pp. 408-416.
- (2007) Detailed human shape and pose from images. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-8.
- (2018) Learning to reconstruct texture-less deformable surfaces from a single view. In International Conference on 3D Vision (3DV), pp. 606-615.
- Blender - a 3D modelling and rendering package. Blender Foundation, Blender Institute, Amsterdam.
- (2016) Keep it SMPL: automatic estimation of 3D human pose and shape from a single image. In Proc. of European Conference on Computer Vision (ECCV), pp. 561-578.
- (2016) Fusion4D: real-time performance capture of challenging scenes. ACM Trans. on Graph. 35(4), pp. 114.
- (2014) Depth map prediction from a single image using a multi-scale deep network. In Advances in Neural Information Processing Systems, pp. 2366-2374.
- (2009) Estimating human shape and pose from a single image. In Proc. of International Conference on Computer Vision (ICCV), pp. 1381-1388.
- Neural networks for machine learning, lecture 6a: overview of mini-batch gradient descent.
- (2015) Very high frame rate volumetric integration of depth images on mobile devices. IEEE Transactions on Visualization and Computer Graphics 22(11).
- (2018) End-to-end recovery of human shape and pose. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- (2005) Shapelets correlated with surface normals produce surfaces. In Proc. of International Conference on Computer Vision (ICCV), pp. 994-1001.
- (2018) DepthNet: a recurrent neural network architecture for monocular depth prediction. In 1st International Workshop on Deep Learning for Visual SLAM (CVPR).
- (2016) Deeper depth prediction with fully convolutional residual networks. In International Conference on 3D Vision (3DV).
- (2017) Unite the people: closing the loop between 3D and 2D human representations. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- (2015) Maximum-margin structured learning with deep networks for 3D human pose estimation. In Proc. of International Conference on Computer Vision (ICCV), pp. 2848-2856.
- (2015) SMPL: a skinned multi-person linear model. ACM Trans. on Graph. 34(6), pp. 248:1-248:16.
- (2018) Unsupervised learning of depth and ego-motion from monocular video using 3D geometric constraints. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5667-5675.
- (2017) A simple yet effective baseline for 3D human pose estimation. In Proc. of International Conference on Computer Vision (ICCV).
- (2017) VNect: real-time 3D human pose estimation with a single RGB camera. ACM Trans. on Graph. 36.
- (2005) Efficiently combining positions and normals for precise 3D geometry. ACM Trans. on Graph. 24(3), pp. 536-543.
- (2016) Stacked hourglass networks for human pose estimation. In Proc. of European Conference on Computer Vision (ECCV), pp. 483-499.
- (2016) Deep learning for human part discovery in images. In IEEE International Conference on Robotics and Automation (ICRA), pp. 1634-1641.
- (2018) Neural body fitting: unifying deep learning and model based human pose and shape estimation. In International Conference on 3D Vision (3DV), pp. 484-494.
- (2017) Coarse-to-fine volumetric prediction for single-image 3D human pose. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7025-7034.
- (2018) GeoNet: geometric neural network for joint depth and surface normal estimation. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 283-291.
- (2016) General automatic human shape and motion capture using volumetric contour cues. In Proc. of European Conference on Computer Vision (ECCV), pp. 509-526.
- (2015) U-Net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 234-241.
- (2018) Integral human pose regression. In Proc. of European Conference on Computer Vision (ECCV), pp. 529-545.
- (2014) DeepPose: human pose estimation via deep neural networks. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- (2018) BodyNet: volumetric inference of 3D human body shapes. In Proc. of European Conference on Computer Vision (ECCV).
- (2017) Learning from synthetic humans. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- (2018) Learning depth from monocular videos using direct methods. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2022-2030.
- (2018) GeoNet: unsupervised learning of dense depth, optical flow and camera pose. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- (2018) Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 340-349.
- (2018) Deep depth completion of a single RGB-D image. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- (2017) Unsupervised learning of depth and ego-motion from video. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR).